Fellows: Rick Cummings

September 2020
by Anthea Rutter

Rick has been in the field of evaluation for over 40 years. He has been President of the AES and a conference convenor. These days, he balances his role as Emeritus Professor at Murdoch University with running a small consultancy, providing training in evaluation, chairing the AES Fellows and participating in the AES Awards Committee.

As we can see from previous Fellows' profiles, people come into the evaluation profession via different routes. As evaluation is a relatively small field, close connections are formed between evaluators. When asked what brought him into evaluation, Rick cited Ralph Straton, one of the earliest Fellows of the AES.

Ralph convinced me that this is an area where you can contribute to the improvement of society through scientific work. He also explained that he was attracted to the idea of working on short-term projects and then moving on to the next project. Everyone was talking about the importance of evaluation for the development and implementation of good public policy, and I felt that I could contribute to increasing its use.

All of our Fellows have a myriad of expertise and interests, so I was keen to find out Rick's main area of interest.

I don't belong to a particular school of evaluation; utilisation is the closest. My main approach is to use a number of tools to increase the utilisation of evaluation. A strong focus is to build the relationship between the evaluators and the stakeholders, as research shows this can increase use. I try to work with agencies that are helping the less able and marginalised groups in society.

All of us face a number of challenges as we develop our skills in evaluation. Challenges, of course, are not negative experiences and help us to hone our practice. Rick shared some common challenges.

For half of my career I found it a challenge to get the time to do evaluation, as I was working in the public sector and universities. It has also been a challenge keeping up with the field as it has matured. I think a major challenge we now face is developing approaches and tools that work well in evaluating public policies, not just programs.

Interestingly, I did not know that Rick was an anthropologist! Finding out that fact was an incentive to ask him what he saw as the highlights of his career.

One was going to Papua New Guinea to conduct a two-year evaluation study for the World Bank; for me this was just a dream come true. And it was a fantastic experience to see the range of cultures in person. Getting my PhD in Evaluation was also great, as it was rare to do this in Australia at the time. Becoming an AES Fellow was a great honour, considering the quality contributions made by the other Fellows. More recently I have enjoyed training aspiring evaluators through workshops and graduate courses at Murdoch and UWA [University of Western Australia]. A number of them have become formal evaluators and started their own careers.

All of us have been influenced by people during our careers, some more than others. It was great to hear Rick cite another of our Fellows among his influences.

Michael Patton's work on utilisation and Carol Weiss's work on how evaluation works with government have been really influential. Working closely with John Owen over a number of years has been fantastic, as he always sees deeper into an issue than I do. Being able to think differently about evaluation through teaching, especially teaching international students, has also taught me more about the need to be flexible in evaluation.

I was keen to find out from Rick how he felt the field of evaluation had changed during his career. His answer reflected the growth and maturity of evaluation in Australia.

It's much more professional and is now recognised as a legitimate discipline and profession. People are now more careful how they operate as evaluators. It has a strong theory base and a number of tools have been developed: for example, program logic and Most Significant Change. Our influence has gone beyond the discipline of evaluation into the areas of policy development, decision-making and corporate culture. In my view, in Australia, people have a much higher regard for evaluation than they did 40 years ago.

Rick provided some good insights when asked what he felt were the main skills or competencies that evaluators need to have to keep pace with emerging trends in evaluation practice.

I think that professional associations such as the AES have worked well to identify what competencies are needed. People need to be reflective and should regularly look back to see whether they are approaching the evaluation study in the correct way. You also need to get along with people and be professional in dealing with all stakeholders. We also have an ethical responsibility to address social issues cautiously and with care.

Most of our Fellows have felt there are some key issues which we should be thinking about, as well as trying to resolve, in the next decade, and Rick was no exception.

Politically there is currently a threat to evidence-based decision making, which I feel needs to be kept as the primary model. I think evaluation is still underutilised, considering the number of public programs in operation. I think we should do more professional development in terms of longer-term training programs, not just one-off workshops. I also think there is an opportunity to work more closely with universities and organisations like the Institute of Public Administration Australia (IPAA). Although we have made inroads, we need to do it more systematically.

Rick had some sound advice to give when asked how the AES can still be relevant in the future.

I feel we need stronger involvement as an advocate: making submissions to governments at state and federal levels and looking at getting involved with some of the state government evaluation agencies. The AES should also continue its extensive training role.

Rick also provided some thoughts on other areas on which the AES could concentrate.

Our priority should be to get our own region sorted and working well. There is a gap in that we lack a regional body for this part of the world. I think we need to take the lead to set it up. We are the largest, best funded and most organised society in the region. We need to put effort into working closely with New Zealand as well. 

Some of the Fellows could probably be useful in this process. Depending upon the level of interest, how can we draw on those who want to do things to improve evaluation in the region?

As a final question to Rick, I asked him what he wished he had known before he started out as an evaluator.

I wish I had known how enlightening and how much fun conducting evaluation studies can be. In the beginning I felt that evaluation studies were dry, technical documents (which of course they can be but hopefully are not!). But fairly quickly I came to see that in this role, one could be paid to explore and gain an in-depth understanding of a range of really interesting and usually highly beneficial social policies and programs in areas in which you might not be an expert. And through this process you could influence government policy and practice to the benefit of many people, especially those in vulnerable situations. It has become a truly enjoyable and humbling experience, provided through evaluation, to explore  and come to understand the wonderful impact people can make in other people's lives and to more fully appreciate the very positive impact government policies and programs have on the quality of life in Australia.  


--------------------------

Rick Cummings is an Emeritus Professor in the Sir Walter Murdoch Graduate School of Public Policy and International Affairs at Murdoch University. 

Constable Care

August 2020
by Kwadwo Adusei-Asante

COVID-19 has changed our way of life, including how we evaluate programs. The pandemic has rendered conventional evaluation approaches difficult to execute, and programs have faced new delivery challenges. These are challenging times for organisations that are required to deliver programs and measure agreed outcomes for their funders.

This blog draws on my experience with Constable Care Child Safety Foundation in WA. During these uncertain times, we have been forced to think outside the box and adopt new ways of doing evaluation. Our focus has been on capturing evaluation data when ‘what works’ is preferred over ‘the ideal’.

About Constable Care

Constable Care provides child safety education. The organisation works to empower young people aged 4-18 years to find creative harm prevention solutions to issues that affect them, including bullying, mental health, peer pressure, personal safety, cybersafety, gambling, and drug and alcohol abuse.

Constable Care delivers eight programs through which it engages over 100,000 children and young people every year and reaches over 500,000 families in WA. The programs range from 40-minute theatrical performances to ten-week shows, as well as an excursion destination where primary school children learn pedestrian, bicycle and public transport travel safety skills.

Constable Care needs to demonstrate outcomes to funders and key stakeholders. I have been providing evaluation advice and support to Constable Care over the last two years. As a result, Constable Care has digitised its evaluation procedures and developed an evaluation framework that captures baseline and program data using a range of quantitative and qualitative evaluation techniques. While each program is different and is evaluated with a tailored model, the models largely capture students' demography, perceptions, knowledge, behavioural intentions, and overall satisfaction with the program.

Evaluations at Constable Care in COVID-19

Before the COVID-19 pandemic, Constable Care delivered most of its programs through face-to-face contacts in classrooms and in the community. This meant that most of its evaluation data could be collected at such events using iPads. While children were still attending school, the pandemic lockdown meant it was no longer possible for program staff or evaluators to visit schools.

Following consultations with its funders and stakeholders, Constable Care developed online versions of its programs and uploaded them on its website and social media platforms. Under this arrangement, Constable Care has relied on teachers to use its online video shows to educate students on harm prevention.

The new normal required innovation and adaptation of Constable Care's evaluation procedures. Initial discussions about developing web-based surveys and embedding them in the online videos' description tabs were discarded due to technical difficulties and concerns about data validity. After much deliberation, we adopted a proxy evaluation technique augmented with Google Analytics.

At the height of the pandemic, we used a proxy technique to collect evaluation data to measure the impact of the new online video content. Teachers became proxies who collected Constable Care’s evaluation data before and after the children watched the online videos. The teachers received evaluation forms and instruction documents, which explained the purpose of the evaluation activity and how the data should be collected.

Little is known about the use of proxies in program evaluation. It is not a perfect technique, as it presents ethical issues pertaining to potential conflicts of interest and data quality, and it relies heavily on the cooperation of proxies. Practitioners may want to explore the technique as an option when physical contact with evaluation respondents is practically impossible, or in settings where trust is low or the evaluator is a stranger. In our case, the cooperation of teachers has been commendable, enabling the organisation to obtain rich and useful evaluation data.

Additionally, we are using Google Analytics to capture quantitative data on Constable Care's online videos. Google Analytics is a web analytics service offered by Google that tracks and reports website traffic. It has enabled Constable Care to capture 'number of views' data on each of the four platforms listed in Figure 1.

Figure 1: Number of views of Constable Care online videos, 15 March 2020 to 4 July 2020
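The article doesn't say how these counts were extracted, and each platform reports views through its own dashboard. Purely as an illustration of how website figures like these could be pulled programmatically, the sketch below queries the Google Analytics Reporting API (v4) from Python. The view ID, service-account key file and page-path filter are hypothetical placeholders, not details of the Constable Care setup.

```python
# Illustrative sketch only: pulling pageview counts for video pages from the
# Google Analytics Reporting API (v4). The view ID, key file and page-path
# filter are hypothetical placeholders, not Constable Care's actual settings.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
KEY_FILE = "service-account.json"   # hypothetical service-account credentials
VIEW_ID = "123456789"               # hypothetical Google Analytics view ID

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=SCOPES
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

# One report request: pageviews per page over the period shown in Figure 1.
response = analytics.reports().batchGet(
    body={
        "reportRequests": [
            {
                "viewId": VIEW_ID,
                "dateRanges": [
                    {"startDate": "2020-03-15", "endDate": "2020-07-04"}
                ],
                "metrics": [{"expression": "ga:pageviews"}],
                "dimensions": [{"name": "ga:pagePath"}],
                "dimensionFilterClauses": [
                    {
                        "filters": [
                            {
                                "dimensionName": "ga:pagePath",
                                "operator": "BEGINS_WITH",
                                "expressions": ["/videos/"],  # hypothetical path
                            }
                        ]
                    }
                ],
            }
        ]
    }
).execute()

# Print a simple page -> views table.
for row in response["reports"][0]["data"].get("rows", []):
    page = row["dimensions"][0]
    views = row["metrics"][0]["values"][0]
    print(page, views)
```

A dashboard export achieves the same thing; the API route simply makes it easier to re-run the same query for each reporting period.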

Figure 2 suggests that while the organisation’s online video content primarily targets WA students, the videos have been viewed by people from four continents. This unintended outcome has been welcomed by the funders of Constable Care and presents an opportunity to work internationally.


Figure 2: Constable Care online video analytics: Facebook

COVID-19 has presented many challenges to professionals in various fields, and program evaluators are no exception. On the brighter side, COVID-19 has prompted new ways of working and being. In my view, innovative thinking may not necessarily require inventing new models or theories, but rather an exploration of ideas and tools we would normally take for granted. In uncertain times, the quest for perfection may need to be set aside to focus on what is feasible.

Currently, WA is easing restrictions and normalcy is gradually being restored. This notwithstanding, the proxy evaluation technique has enabled Constable Care to evaluate its programs in both full and partial COVID-19 lockdown modes. Rather than simply being a compromise, the new approach has enabled Constable Care to discover new possibilities and build new relationships, while collecting robust data along the way. Evaluation practitioners need to focus on finding the right evaluation solutions that are contextually feasible.

-------------------------- 

Dr. Kwadwo Adusei-Asante is a Senior Lecturer at Edith Cowan University in Western Australia. 


 

Fellows: Sue Funnell

July 2020
by Anthea Rutter

Sue Funnell was one of the early trail blazers in evaluation methods. By her estimate, Sue has been in the profession for over 43 years. Over this time, she has held a number of roles in evaluation, including as the director of her own consulting company. She was a founding member of the AES, served two terms as President, chaired the Awards Committee, and has been a presenter and trainer.

I first came across Sue in the 90s when she detailed her approach to program logic. We were also on the AES Board together for a few years. Sue has had a huge influence on the practice of evaluation, so I was very interested to find out how she came into the field.

I joined the Centre for Research and Measurement in evaluation, NSW Department of Education in the 70s. It was my first job after finishing a Psychology Honours degree. I started a part time Master’s degree in measurement and evaluation led by Ralph Straton at the University of Sydney and then received an Educational Research and Development Centre scholarship to the University of Illinois in the US. My project was in measurement, but I arrived there to find a hotbed of evaluators: Stake, Hastings and House, amongst others. This consolidated my interest in evaluation.

From that initial focus on measurement and her reputation as a leader in the development of approaches to program logic, what have emerged as Sue’s main areas of interest?

Mainly programs that achieve their results through behaviour change, such as educational and advisory programs and regulatory initiatives. I’m also interested in helping evaluators and commissioners to develop a sound description and understanding of the evaluand, so that they can identify appropriate evaluation questions.

A career as long as Sue's is bound to have challenges. What have been the main ones?

I reckon balancing clients' needs and expectations, particularly relating to time horizons, on the one hand and my commitment to quality on the other. Also, commissioners of evaluations constantly change and, with this, come changes in their demands on a particular evaluation.

Another challenge is the speed of change in the policy context, which would be greater now! It would appear that people are more interested in short-term initiatives and results than in longer-term strategic approaches.

As well as challenges, a good career has its highlights – I asked Sue about hers.

Working with Bryan Lenne in the Program Evaluation Unit in the NSW Public Service Board was a game changer. This started me on the path to enhancing program logic approaches, providing a tool to get managers to think about their programs and how to 'connect the dots'. I've received lots of positive feedback as well as criticism of my approach to program logic. I've honed the approach over time, and this culminated in co-writing Purposeful Program Theory: effective use of theories of change and logic models with Patricia Rogers.

As well as this, there was setting up my own successful company in 1992 and, for 25 years, working across a wide range of policy areas and a wide range of jurisdictions and levels: local, state, federal, international, NGOs.

As you’d expect, Sue’s approach to evaluation has had a number of influences, among these:

  • Working in the Program Evaluation Unit in the NSW Public Service Board with Bryan Lenne
  • Ernie House’s early (and continuing) work on Social Justice
  • Hatry’s work on Comparison is the Name of the Game
  • Undertaking meta evaluations, particularly to do with the evaluation function in organisations
  • The Joint Committee Standards on Utility, Feasibility, Accuracy and Propriety
  • Patton’s work on Utilisation-Focused Evaluation
  • Locally, material coming out of different levels of government around program budgeting, in particular the concepts of appropriateness, effectiveness and efficiency.

Sue also gave an honest appraisal of the strengths and challenges in the growth and development of the practice of evaluation.

In the early days, evaluation was a fledgling field trying to define itself. There was much greater emphasis on evaluation models, such as Stufflebeam’s Decision Making model and Stake’s Responsive evaluation. I doubt these days whether current evaluators think much about or use models. Perhaps this happens in academia, but I doubt whether they play a great role for practising evaluators.

Evaluation has been strengthened by becoming multi-disciplinary, recognising the need to draw on many fields. A more nuanced understanding of what is gold standard has developed. Amongst evaluators, what is gold standard is what is fit for purpose. Importantly, applying program logic is neutral with respect to the choice of methodology to address evaluation questions. However, from time to time, there is a push, especially from government, for RCTs to be the only gold standard.

There has, over the years, been a constant tension between the relative emphasis on monitoring and performance indicators on the one hand and evaluation studies on the other. There has also been frequent re-badging of performance information and evaluation approaches by state and federal governments, often with little or nothing new added!

There has been greater participation in evaluation by large companies (such as the big four). A lot is done in the name of evaluation that might more accurately be called management review.

When I asked Sue the main skills or competencies evaluators need to have or develop to keep pace with emerging trends, her first thought was that she had been out of evaluation for a while. But, on reflection, she had some key insights.

Fleet-footedness and adaptability, while minimising compromises to quality, are important.

We can also make greater use of secondary data and possibly rely less on primary data. Social media has probably become a greater source of secondary information, but evaluators need to have the competencies to assess that information over time and draw on a wide range of social media sources, so that they are not influenced unduly by a particular social bubble.

Beyond the skills we need, I asked Sue what issues evaluators ought to be thinking about and seeking to resolve in the next decade.

If evaluators want to contribute to worthwhile social changes, then they need to actively address social justice issues and take some stance. This raises the question of whether evaluators should become more socially activist, and perhaps one way to do this is to move a bit away from evaluating individual programs towards evaluating how well government and society are addressing issues. For example, in relation to domestic violence: what has been done in this area, how well is it working and what can be done? How can we do it and how can we evaluate it? However, a vexed question is 'who pays for it?'. I don't have the answer to that.


 

August 2020
by Renée Madsen

Regionally-based evaluators – those living and working outside major cities – are a vital part of the evaluation ecosystem.  They bring the benefits of evaluation to areas where essential services can be thinly spread and under pressure to deliver the best possible results with limited resources. Regionally-based evaluators ensure that evaluation is accessible to those who would not otherwise be able to engage with evaluation expertise, and we represent the profession in areas it would not otherwise reach.

With increasing numbers of people moving from capital cities to regional areas, regional evaluators with their unique perspectives and experiences will become ever more important in pushing the boundaries of what evaluation can achieve, and ensuring that the profession adds value for all Australians, regardless of where they live.

However, at the moment regionally-based evaluators make up less than 15% of all AES members. What does this mean for our profession’s ability to give regional communities the same access to evaluation as major cities? And how can we increase the capacity of our profession to truly understand and involve regional communities in evaluation?

I’m an evaluator based in Townsville, North Queensland. I have been evaluating a diverse range of programs for 20 years, and I’ve been an AES member for almost 10 of those. I believe in building the strengths of regional areas, and the power of evaluation to create positive change, and like many regional evaluators, I love combining the two.


Evaluating a strategic Townsville beach. Source: Renée Madsen 

How many regional AES members are there?

For the purposes of this article, the definition of regional is anywhere outside a major city, as defined by the ABS Remoteness Structure. The map below shows the number of regional AES members in Australia.


Number of regional AES members. The ABS Remoteness Structure considers all areas of the Northern Territory and Tasmania as regional. Source: Renée Madsen and Michelle Wightwick.

Regionally-based evaluators are only a small proportion of AES members. For example, Victoria has the largest number of AES members at 356, but less than 10% of these members are based outside a major city. This trend is repeated across all states and territories (except NT and Tas). Overall, only 14.6% of all AES members are regional.

With such a small number of us living outside major cities, can our profession give regional communities the same access to and standard of evaluation as major cities?  Given the cost of travel and the need to understand local context, can regional areas enjoy equitable access to evaluation expertise? And do we, as a profession, have the capacity and the commitment to truly empower and involve regional communities in evaluation?

The value of regionally-based evaluators

Our numbers may be small, but our contribution is big! Here are just a few ways that regional evaluators add value to the profession: 

We go the extra mile – sometimes literally.

Conference attendance and professional development can involve multi-leg flights and/or long car journeys, and all the expense that comes with those. Without the benefit of local networking events, we take the initiative and reach out to find other practitioners using social media or Google, or ask someone to send us the notes from seminars that we couldn’t attend before everything went online due to COVID-19. (Shoutout to the AES Qld Regional Committee for sending me seminar notes over the years!)

We’re also highly driven to learn extra skills to expand our toolkit. With limited access to technical specialists, who are generally based in major cities, we often become a ‘jack of all trades’, choosing to learn specialist techniques and approaches ourselves so we can implement them with regional communities that would not receive the benefit of them otherwise.

We know how things are done in our local community.

Every evaluator knows that stakeholder engagement can make or break an evaluation. Regional practitioners understand the local stakeholders and their relationships, where the landmines are, and the likely touchpoints for collaboration. Mistrust of government programs - and anyone associated with them - runs deep in some places. This is particularly relevant for evaluators, as much of our work is evaluating government-funded programs.

More Aboriginal and Torres Strait Islander people live in regional areas than capital cities, and locally-based evaluators are more likely to have helpful contacts, understand the lay of the land, and know how our local First Nations brothers and sisters prefer to be invited to share their knowledge and experiences.

We’re masters at making evaluation relevant.

One of the great rewards of being a practitioner in the regions is bringing evaluation to people who have never used it before. In places where ‘government’ and ‘head office’ are often a long way away, regional evaluators become very good at explaining what evaluation is and making it relevant. We work across a wide range of scales and industries, from multi-million-dollar programs to small volunteer projects and everything in between, involving government, researchers, industry bodies, technical specialists and community groups.

We are practical, flexible, creative and resourceful.

There's never a dull moment when you're a regionally-based evaluator. I've facilitated an evaluation discussion with graziers in an outback pub garden with no walls to stick up my trusty butcher's paper (the horror!) while the pub's resident dog wandered around snuffling our table scraps. I've explained to bemused volunteer conservation groups that the funding body needs them to report on whether they used corflute or cardboard tree guards for high-level evaluation purposes. (No, I'm not making that one up.) Working outside of corporate office environments teaches us to be highly flexible and resourceful.

We're also experts at adapting evaluation to diverse communities and being creative in low-technology settings. Instead of a workshop in a central location with lovely catering and iPads all round, we're more likely to be standing in the middle of a field or on the phone at night, competing for attention with events like mustering, or scribbling diagrams on the back of a coaster as we talk with community members. We'll cobble together elements of different approaches to find new and practical ways to use evaluation and add value to the communities we live and work in. Everyone deserves good evaluation!


As a regional evaluator, you may be subject to intense scrutiny. Source: Renée Madsen

What I’d love to see as a regional evaluator:

  • More presentations from regional evaluators at AES conferences. We have great stories to tell and a wealth of knowledge to share. However, our attendance costs remain high and it would be good to explore ways to offset these costs for regional members in ways that promote equity for all AES members.
  • More collaborative partnerships between regional evaluators and those based in capital cities – there are many opportunities for mutual learning and skill sharing across evaluation practice, training, promoting our profession, etc.
  • More project-based collaboration with regionally-based evaluators. It would be good to see AES members based in metropolitan areas proactively working with locally based evaluators to ensure the best outcome for evaluations in regional areas.  Keep a register of evaluators who are on the ground in different regions and collaborate with them on projects that involve their communities.
  • More informal connections. If you’re visiting a regional town, find out if there are any local AES members and catch up for a chat. Opportunities for peer-support are limited for regionally-based evaluators. To support this, the online AES Member Directory could include the ability to search by city as well as by state. (Website developers are checking whether this is possible for the upcoming revamped AES website.) Try checking with your state AES Committee to see if they can point you in the direction of who is working in regional areas you are visiting.
  • Sustained opportunities for online AES networking sessions, seminars and workshops – being able to engage in professional learning and networking through virtual platforms provides a fantastic opportunity for all of us to connect across geographic distances.

Get in touch and share your thoughts…

I would love to hear from evaluators who live and work outside metropolitan areas. What’s it like for you as a regionally-based evaluator? What do you think about the future of evaluation outside major cities?

For my ‘big city’ colleagues - much has been made of society’s capacity to make changes to established practices in the wake of COVID-19. Will you do anything differently in connecting and collaborating with your regional colleagues?

-------------------------- 

Renée Madsen is Principal Consultant at Create and Evaluate, a group facilitation and evaluation consultancy in Townsville, North Queensland. Connect with Renée on LinkedIn or visit www.createandevaluate.com.au. Renée would like to thank her fellow regional evaluators Dr Julie Funnell, Ms Barbara Colls and Ms Donna Turner for their contributions to this article.

 


 

COVID-19 statement

June 2020
by AES Relationships Committee

The changing context
The global scale and speed of disruption caused by the COVID-19 pandemic is unprecedented in our lifetimes. The pathway to recovery and management of COVID-19 is expected to be complex and challenging, with significant long-term implications for individuals, organisations, governments and the country.

The coordinated national response in Australia has so far been successful because the best available data and evidence have significantly influenced decision-making. The evidence-informed approach that has served us well to date remains equally critical going forward.

During the pandemic, many public sector initiatives and supports have been designed, adjusted or expanded to assist individuals, households and businesses to survive and adapt. Some services have been interrupted or halted. As restrictions lift, consideration will need to be given to which adjustments are maintained.

The AES considers that sound data collection and analysis should be built into the establishment of any new or adapted initiatives to maximise the value of evaluation. Evaluation can also support the development of new initiatives and support service redesign activities.

Evidence and evaluation play an important role
Evaluation – and evaluative thinking – remains central in offering systematic review of new and changing initiatives and in pre-empting potential unintended consequences. It can be undertaken across the policy and program life-cycle to:

  • Ensure clarity of purpose, objectives and alignment of values
  • Assist with monitoring progress and meeting reporting requirements
  • Identify immediate improvement opportunities
  • Understand impact and its drivers, including for different cohorts
  • Understand how design and operation influence impact in different contexts
  • Support good governance, sound decision-making and smart resource allocation
  • Promote knowledge transfer and capability development.

Evaluators are adapting their approaches
To effectively deliver on existing work, evaluators have adapted their approaches to meet physical distancing requirements. Although service clients and stakeholders may seem harder to reach, digital platforms are enabling connections across traditional geographic and social boundaries.

Evaluators are able to continue their work by:

  • Reassessing objectives: Updating evaluation objectives to ensure they remain useful
  • Shifting phasing: Changing delivery timeframes and milestones
  • Adapting design: Shifting design, methods and data collection to achieve the evaluation’s objectives
  • Appropriately engaging stakeholders: Considering how COVID-19 is affecting key stakeholders and adapting engagement methods appropriately
  • Contextualising findings: Interpreting data and forming findings based on contextualised information across different phases of the crisis (e.g. the response and recovery phases).

The AES recommends that monitoring, evaluation and evidence continue wherever possible to support post-pandemic recovery and review.

This statement has been prepared by AES members for AES members to support discussions about why evaluation has particular relevance and value during the pandemic, and how evaluations may be adapted.
