
This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us by email. Blog guidelines can be found here.



Fellows: Zita Unger

May 2020
by Anthea Rutter

Zita has been in the evaluation profession for over 26 years and has held a number of roles over that period: evaluation lead, consultant and lecturer in evaluation. She was introduced as a Fellow in 2013 at the International Evaluation Conference held in Brisbane.

People come into the evaluation profession through a number of routes, so I was interested to find out from Zita what brought her into the field of evaluation. Her answer echoed the experiences of a large number of evaluators.

I fell into it! Happenstance played a role. I was contracted by Griffith University to write a critique of their environmental educational materials following my PhD dissertation. While also undertaking an instructional design course (incorporating program evaluation), I wrote a module for a Griffith University teacher education project, Teaching for a Sustainable World, which included an evaluation instrument for resource materials. In a roundabout way, this led to lecturing in evaluation to graduate Instructional Design students at Deakin University.

All the evaluators I have talked to have a wide range of evaluation expertise and interests. Zita’s answer was straight to the point!

Governance and survey design. In all of that work, the front-end design was always my interest. A lot of the work you do at the back end – the reporting end – is dependent upon how you developed the front end. Another of my interests is organisational development and 360-degree feedback. They all feed into each other.

Most of us have been challenged during our careers: indeed, you could say that overcoming challenges helps us to grow as professionals. Zita shares her own lessons, plus sound advice on pricing evaluations.

The first one is that people often do not know what evaluation is. Capacity building has been a large part of any consultancy which I do, so that people not only understand the process but they get to champion evaluation. 

Another challenge is with organisational development. When undertaking a strategic review for a medical institution, I was struck by the fact that consultants within organisational development were using similar techniques to evaluation techniques. 

Pricing is always a big challenge. When I had my evaluation consultancy, I was asked to give a presentation at the AEA [American Evaluation Association]. My presentation was on pricing evaluation, “the tail wagging the dog”. I talked about program budgets, workflow, project activities pipeline and cost benefit (plus other areas). Attention to pricing can position evaluation and our services. 

Another challenge is having the confidence to talk about cost in the early stages of a career. Of course, clients really expect that conversation. Towards the end, I put in several line items. One of them was meta-evaluation, review, thinking time and so on. You need the confidence and experience to do this. If younger evaluators can shadow older, more experienced ones, it would give them some confidence. You need to have the budget talk. It is important to match each other’s expectations – it makes for a happier evaluation than overpricing and not delivering.

Alongside the challenges, a long successful career has its highlights.

On the work front, Zita talked about a major evaluation in the tertiary sector. For her, this was particularly enjoyable as it covered a large number of issues and areas: evaluation of their project writing; implementation; and an institutional review of each of the universities. 

Another highlight was being awarded the Evaluation Training and Service Award – a co-award – as well as the Evaluation Development Award for an online survey management system. An outstanding highlight was becoming a Fellow. It says so much about peer recognition and the huge amount of work we put in as evaluators. This recognition brings it all together.

In common with many of the Fellows’ interviews, there are a number of influences defining her practice, and Zita cited a few.

John Owen’s methods book [Program Evaluation, Forms and Approaches] was very influential. I was asked to write a critical review of it, so I became familiar with it. Then I was very struck by Patricia Rogers when I heard her speak on logic modelling. It was at the AEA. The theme was truth, beauty and justice, based on Ernie House’s book [Evaluating with Validity]. I was very impressed, talking about logic modelling in terms of truth, beauty, justice etc. so then I looked at logic modelling in a different way. Also, the AEA big names were very influential, especially in capacity building, the GAO [US Government Accountability Office], and Ray Rist from World Bank – he talked about evaluation capacity to strengthen governance. He referred to a supply and demand model which talked about institutional, financial, human, and technical capital. I felt it was very important for my practice.

Like many professions, evaluation has gone through many changes over the years, and these changes mean different things to different people. I asked Zita what she saw as the major changes to the profession.

Zita responded that there is now more focus on cultural sensitivity and Indigenous evaluation, as well as on evaluative thinking. She also felt that there is a greater emphasis on process than there used to be: the focus has shifted over time from performance to process matters, and evaluations now place greater weight on impact. However, she feels that we should always take account of the latest trends, as they change your own thinking over time.

To keep pace with the emerging trends in evaluation practice, I was curious to find out what Zita felt were the main skills or competencies that evaluators need to have or develop.

People need to be open to a range of methodologies. You might get comfortable with a range of methods, but you need to be open to different ways of doing things. You need variety, and to try to think about something new and see what that means for your toolbox. Be reflective as you make your toolbox more diverse.

We also discussed what she saw as the main social issues that evaluators ought to be thinking about and seeking to resolve in the next decade.

The conversation pointed to a desire for accreditation. The professional development the AES runs has no standard. The government needs to say that we are a professional body which provides professional training and competency which is recognised by someone – we need to be an accredited society.

Zita has been involved with the society in a number of roles: member of the awards, ethics and standards committees, and member and presenter for the AES Victorian branch. She was also on the AES Conference Committee for the Melbourne conference (twice). I felt she would be in a good position to ponder the direction which the AES should take in the future.

Zita reiterated her desire for an accredited AES. She also felt that the AES should be the go-to for any media comment in evaluation. University departments teaching evaluation in their own discipline without reference to the AES is something she would like to see changed.

--------------------------

Zita Unger is an independent director on various boards and has several governance roles. Her main interests are in the areas of governance, strategy and evaluation.


 

Fellows: Jerome Winston

March 2020
by Anthea Rutter

Jerome Winston’s career spans over 45 years. He has fascinating insights into how evaluation was viewed in the 70s, which reminded me that back then, evaluation was not viewed as a separate profession, but as part of other disciplines.

I started teaching at Preston Institute of Technology (which, following two mergers, became Phillip Institute of Technology and then RMIT University).  At first, I was teaching both diploma and degree courses in engineering and applied chemistry. When the School of Social Work opened, they were looking for staff who would teach computing and statistics. As an applied scientist, I proposed building monitoring and evaluation into practice, so recommended that computing and statistics be taught as just one aspect of program planning, monitoring and evaluation. This suggestion, first adopted in social work, was later included in a number of other disciplines such as nursing, chiropractic and leisure studies.

Jerome then talked about the 80s and the advent of program budgeting in the Victorian – and later, federal – government, and what this meant for the next stage of his career.

Although program budgeting was intended to incorporate evaluation, Jerome believed that reporting simple, aggregated, numerical data as ‘performance indicators’ would not provide the depth of information needed about most government programs.  The use – and misuse – of ‘performance indicators’ became a main focus of Jerome’s research. 

In 1978, Jerome designed post graduate programs in data collection and analysis for research, monitoring and evaluation. These programs started at Phillip Institute of Technology (PIT) at about the same time that John Owen’s program in evaluation was starting at The University of Melbourne. Most of Jerome’s career was as a senior lecturer in multi-method research, monitoring and evaluation at PIT (later, RMIT).

The AES Fellows’ reasons for coming into the field of evaluation have been eclectic and Jerome presented yet another pathway.

I wouldn’t have gone into evaluation unless I had started with an interest in both science and government. When I met social work academics at PIT, I found they shared a broad sense of systems theory, research methods, and data collection and analysis. I ended up as an applied scientist teaching multi-method approaches to evaluation in the human services.

My main interest is in applying systems theory to the planning and evaluation of human services. My other interest is integrating multiple methods of data collection and analysis, and their use in building practice knowledge. I don’t expect any method, on its own, to be particularly useful. 

As an evaluation practitioner, he points to the challenges of bringing together multiple disciplines.

Most of the challenges I have encountered have to do with responding to the splitting of disciplines from each other – finding ways to bridge gaps among disciplines – gaps between public administration, applied science, planning, budgeting, evaluation and management. 

The main highlights for his career have been about building networks as well as being able to embrace opportunities.

In the 70s and early 80s, colleagues supported me to set up two different networks: the Australian Evaluation Network and its occasional newsletter were intended to link people across Australia, and in Victoria, Colin Sharp and I set up the Evaluation Training Network so that our colleagues could run low-cost evaluation workshops. Then there was meeting Anona Armstrong and being invited by her to contribute to planning the first evaluation conferences, becoming a foundation member of the AES, and then a Board member.

Towards the end of the 80s, I was encouraged by colleagues in Canberra to apply for an executive interchange into the Australian Public Service. I was selected to work for six months in the evaluation section of the Commonwealth Department of Finance at the time they were introducing program budgeting – and performance indicators – across the public service. 

About the same time, I started to speak on evaluation and performance indicators at conferences on public administration and accounting in Australia and New Zealand. This led in 1994 to co-leading conference workshops in Kuala Lumpur with Dr. Arunaselam Rasappan – then an evaluation trainer and consultant at the Malaysian government’s public administration training college and later the head of an evaluation research, training and development centre that a few of us established in Malaysia. 

Of the influences in his career, it was no surprise that they have been practice based.

The first influence was the philosophy of social work to which I was introduced at PIT.  Their approach saw evaluation as an ‘intervention for change’ integral to professional practice. Another influence was having the opportunity to work within the Department of Finance in Canberra on evaluation and what it meant within that department. 

I also asked him what changes he had seen during his career. Jerome’s perception is that formative evaluation has disappeared as a concept in some organisations that promote evaluation. He thinks that the emphasis has been more on summative and impact evaluation, with limited work on theory, without which summative evaluation provides limited information. 

In Australia and New Zealand, evaluation was typically understood as a team activity. We did not expect one person – ‘the evaluator’ – to carry out an evaluation, largely on their own, so we did not use the term ‘evaluator’ as frequently as it is used now, referring instead to ‘evaluation teams’ and ‘evaluation practitioners’.

I was also keen to find out what skills and competencies the evaluators of today need to have to keep up with emerging trends in evaluation practice.

I think most of the members of the AES come from a narrow professional or academic background. In the 80s, the AES conferences included more auditors, public health, public administration and public finance professionals, economists, etc. We need to return to our multi-profession roots, which were evident in evaluation journals in the 1970s and early 1980s.  

When I asked Jerome about what he saw as the major social issues evaluators ought to be thinking about as well as seeking to resolve, his answers were very perceptive.

We need to understand that Indigenous cultures have different approaches to using knowledge in their community from what is common in the dominant Aussie culture. We sometimes have quite naïve approaches to Indigenous cultures. 

Another issue is including the ‘value’ in ‘evaluation’.  Some evaluation practitioners do what they are told is wanted, rather than insist on reporting on how other ‘values’ may influence findings, conclusions and recommendations. 

I asked Jerome how he saw the AES maintaining its relevance. His answer was focused and direct.

Build those bridges between professional disciplines that share an interest in evaluation. Take advantage of individuals’ different sources of knowledge and skills. Increase the relevance of evaluation at the practice level, and it is important that we keep doing research about the practice of monitoring and evaluation.

--------------------------

Jerome Winston continues to work with the research centre in Malaysia – the Centre for Development and Research in Evaluation. He does a range of work for government and aid programs on how well new evaluation models and frameworks work, and why. He also runs a small consultancy in Australia.


 

May 2020
by Jade Maloney

Over the last couple of months, evaluators around the world have been grappling with the question of whether and how we evaluate in the COVID-19 context. What can and should be done now, and what should wait? How can we be most useful?

For a recent online session with AES members, which Keren Winterford, Greg Masters and I hosted on behalf of the NSW Committee, I rounded up a range of reflections on these questions to prompt discussion.

We need to consider carefully whether to pause or press on

Evaluation is the oxygen that powers decision making. Too little and we are likely to make poor decisions. And when faced with big challenges, we need more than usual. Too much evaluation without action leads to hyperventilation. Analysis paralysis. As an evaluator, it is your responsibility to keep the breathing steady. [Chris Lysy]

To decide whether to pause or press on with our existing evaluations, we need to ask ourselves a series of questions.

Can it be done without undue stress to an organisation responding to COVID-19? At the best of times, evaluation can be anxiety inducing – does the organisation/team have the bandwidth to engage?

Can the evaluation still be useful now? Can you adapt your questions? Can you assess how COVID-19 adaptations are working? Can you help to identify what adjustments should continue post COVID-19?

Can you adapt your methods to comply with physical distancing? Will the people you are trying to engage, engage online? Can you draw on existing data sources?

The World Bank’s Adapting Evaluation Designs sets out four questions that you can adapt to work through whether to press on. Better Evaluation has also begun a series on Adapting evaluation in the time of COVID-19. Part 1: MANAGE has a range of useful prompts to help you work through changes to stakeholder context, engagement, decision-making protocols, information needs, Terms of Reference and budgeting.

Think beyond individual “evaluations” to tap into our value

I think one of the key gaps or aspects I don’t see addressed much is around utility of evaluation in this space. A lot of the discussion online is around the ‘how’ – how do we adapt evaluation? But I feel a deeper question is around the ‘why’ of evaluation. Why is it still important to do evaluation in this context? Is it actually important for making a difference? This is quite a tricky question and one that can make an evaluator really uncomfortable as it forces us to reconsider our work. But, on the contrary, I see this as an opportunity to reinforce our conviction, sense of purpose and clarity. Evaluation was already often an afterthought, and now urgent customer-facing delivery initiatives are definitely taking priority. The case for evaluation will be harder to make. We need to genuinely think about the value evaluation can bring in these times and more broadly. [Florent Gomez, NSW Department of Customer Service]

As Michael Quinn Patton has said, we need to be prepared to make a case for the value of evaluation now. We can do this by proactively demonstrating the ongoing relevance of evaluative thinking, supporting real-time sensemaking of data, engaging in systems thinking (identifying the interconnections and their implications), enabling decision-making based on “good enough” data, and identifying the potential for negative unintended consequences so they can be prevented. In other words, “All evaluators must now become developmental evaluators, capable of adapting to complex dynamic systems, preparing for the unknown, for uncertainties, turbulence, lack of control, nonlinearities, and for emergence of the unexpected.”

For guidance on sense-making in real time, check out Canadian facilitator Chris Corrigan’s blog. First, observe the situation. Then, look for patterns and inquire into these. What do you notice in general? What are the exceptions to these generalisations? The contradictions? The surprises? What are you curious about? Then, using complexity concepts, look at what is keeping the patterns you have identified in place and the actionable insights that could enable change.

[Image: Sense-making in real time]

My team at ARTD have also developed the 3 R Framework as a tool for using evaluative thinking under pressure. It is based around questions because, in our experience, being an evaluator is about asking effective questions at the right time, not about having all the answers. You can use the framework to direct responses at an organisational, team, program or individual level. If you’re applying it within your organisation, team or to a program, we suggest getting a diverse group of people together to reflect, drawing on existing data, stories and experiences to ensure you are not missing critical insights as you make decisions.

[Image: 3 R Framework]

While being useful right now, we can also keep our eye on the long game – what data needs to be collected now to enable evaluation of pandemic responses?

Think through the implications of your choices

Among evaluators I have spoken to around Australia and overseas, there is a strong concern about the equity implications of changes. It is important that we recognise the differential impacts of the crisis, consider accessibility when adapting our methods, and consider whose voices are missed if we draw only on existing data.

We also need to be as purposeful in choosing our online methods as we are in planning methods generally. Not everything has to become a Zoom session. Asynchronous online methods (contributing at different times) have different benefits and drawbacks from synchronous online methods (contributing at the same time).

Remember: not everything is changing and some things should

One of the things I have found most useful in this time is my colleague Emily Verstege’s reminder (with reference to Kieran Flanagan and Dan Gregory’s Forever Skills) that, while many things are changing, including how we evaluate, what is at the core of evaluation is not. We can take comfort in this, as well as in the potential to change things that need changing.

One of the benefits of taking our regular AES session online was the ability to engage with members in regional areas and other jurisdictions. It’s something the committee is already thinking about continuing when physical distancing rules are relaxed.

I have been most struck by the validity of that old maxim that necessity is the mother of invention. In many areas of work and life, more fundamental change has occurred in the last few weeks than in previous years, despite the relentless urging for innovation. Witness working from home arrangements, expansion of telehealth services, online delivery of educational programs.

Hopefully, one of the legacies of this awful crisis is that some of these new practices become ingrained and that we become more comfortable challenging the status quo and traditional modes of operation. Returning to normal is neither feasible nor desirable. Evaluators have a large role to play in leading that campaign but we also need to challenge our existing practices. [Greg Masters, Nexus Management Consulting and NSW AES Committee member]

If you have ideas for further AES blogs, the AES Blog Working Group would be keen to hear them.

 

-------------------------- 

Jade Maloney is a Partner and Managing Director of ARTD Consultants, which specialises in program evaluation. 


 

Fellows: Gill Westhorp

February 2020
by Anthea Rutter

Gill was named an AES Fellow in 2018, and I was pleased to introduce her at the AES conference in Launceston that year. We started with what brought her into the field of evaluation, and what it was about realist methodology that not only piqued her interest but now defines her as a practitioner.

I came into evaluation from a background in human services and managing human services. I’d always been concerned about how we could tell whether we were doing any good or not. I was introduced to realist evaluation through some work I was doing in crime prevention, and it provided a way to work out why some things work for some people but not for others. I found out through reading evaluations that there’s quite a common pattern – that programs often don’t work for those who are most disadvantaged, and some actually cause harm to them. I wanted to know why.

The realist approach assumes that outcomes will be different for different people. The more I worked with it, the more I realised that it’s not just how I approach evaluation, it’s actually how I see the world. I am a realist. It has shaped my life and my thinking in general. People who use it often don’t understand it and often get it wrong. It’s a methodological approach rather than a method, [that is] a philosophy for method.

It was clear from our conversation that Gill is committed to realist philosophies and methodologies. I was intrigued by her passion.

I describe myself as a realist methodologist. Within that I think my real area of expertise is developing methods or strategies for the application of realist methods in things which are hard to evaluate, for example, prevention programs. How do you evaluate things which haven’t happened? More recently I have looked at how to use realist methods in very large scale, very complex programs.

The other area of interest is in grappling with the implications of the fundamental philosophy of realism. Others have done a lot of work on realist ontology. My two current interests are realist axiology – how you think about valuing from a realist perspective, and what that means for evaluation – and realist epistemology. Some people have argued that realists are constructivists, epistemologically, but I think there are points of difference and I’m interested in what that means for practice.

All of us have experienced challenges along the way, and I was keen to explore these with Gill.

It’s not a single thing but a range of things. Some commissioners have asked for realist evaluation, but it turned out they didn’t understand it and what it can do. There are challenges in other projects where people who have been taken on as part of the team look as though they will be ok using a realist lens, but it turns out they’re not.

There are also challenges in terms of the usual constraints on evaluation – money and time. I do pick difficult things to evaluate, and there can be challenges with that. Generally it’s the interaction of a number of factors in particular programs. The skill is being able to think through and negotiate the different factors in an evaluation.

She also pointed out some highlights.

A particular one is Nick inviting me to do the PhD – this was in a sense a starting point and an influential moment which changed my direction. I had decided to move into evaluation in some way, but this changed everything.

Writing the standards for realist evaluation was another one – that was an honour – but also working deeply and closely with those who really understood realist approaches. I enjoyed thinking about what really matters if you want to use this approach coherently and consistently.

A number of people and methodologies had a great influence on Gill’s practice.

Nick Tilley and Ray Pawson, of course. Bhaskar’s work, including his model thinking about levels of reality, the empirical, the actual and the real. Patricia Rogers. I’ve done a lot of training in other methods too, and probably each of them has had some influence.

I’ve also adapted other methods to suit realist evaluation. One example is Most Significant Change stories. To do that, you have to look back at what the developers of a particular theory or method were trying to achieve, and the strengths and weaknesses of that for realist work. So for MSC stories, I looked at what Rick Davies intended, but then recognised that selecting the ‘most significant’ changes hides all the variation that realist analysis depends on. So I worked with a project to develop other strategies to maintain that variation while still identifying what it was that mattered to people, and why.

Gill had some definite ideas on how evaluation had changed over the years.

The pendulum swings back and forth in relation to methodologies and methods. At the moment there are parts of government here, and some overseas, that are swinging towards positivist approaches, i.e. Randomised Controlled Trials. I worry about that and think it could be a danger because RCTs don’t give all the information you need to make some kinds of evidence-informed judgments.

I see a lot of younger people coming into the profession, which I think is great. The courses at the University of Melbourne (CPE) and our own in Darwin do help to bring in younger people. I see the influence of technology, for example, the ability to manipulate big data.

I think there are some challenges too. For example, the use of social media in evaluation is fraught with dangers, but the ability to record data via iPad in the international development context is great. There are lots of implications in regard to new technologies.

Gill’s response to the issue of skills and competencies for the evaluator of today reinforced some of the fundamental qualities evaluators need in order to be successful practitioners.

The two biggest competencies for evaluators, I think, are the ability to think hard and well, because our job is to make judgments. Your judgments are supposed to be well informed. The skill of the evaluator lies in the analytic ability to think through the implications of what people are doing, but also the implications of the data you’ve collected, and work out what it all means.

The other competency is that you have to be able to engage with people, even though it can be difficult because people often feel uncomfortable with being evaluated, and with some of the findings. The relationship with the client is important.

She was definite about some of the social issues she thinks evaluators should be thinking about as well as helping to resolve in the next decade.

I choose to work in areas that are grappling with things which are threats to humanity – environment and climate issues, or international development issues, which have big implications for the balance of power.

The other priority for me relates to social justice, for example, women’s issues, youth, domestic violence, sexual assault, employment/unemployment – anything to do with social disadvantage, which is underpinned by injustice.

If you let society get unjust enough, and I think we are right there now, then the situation becomes a state of dangerous unrest. Those are my driving forces, and that’s where I think the field of evaluation can make its best contribution.

Gill has been involved with the society in a number of roles: as a committee member, a Board member (twice) and convening a conference committee, so I felt she would be in a good position to ponder the direction which the AES should take in the future.

The AES has gone through a necessary stage of being inward focused, looking at the constitution, the strategic plan and so on. Now it needs to be more outwardly focused. At this exact moment, it needs to think about the implications of the proposal for an Evaluator General.

The society should have a stronger policy advocacy focus, which should be manifested at both a national and a state level. The members live in states and territories, and for many of us, our working lives are framed by state and territory legislation.

The third way in which it can look outward is dealing with other professions because the things they are doing are informing policy and practice. We need stronger bridges with other fields. It needs to begin a conversation which can inform practice both ways; otherwise we will become irrelevant.

The fourth way is to build some knowledge of the implications of new technologies. There are people within the field with specialist knowledge but many of us don’t know enough, and haven’t thought hard enough, about them as yet. Myself included.

--------------------------

Gill Westhorp is a Professorial Research Fellow at Charles Darwin University, Darwin, where she leads the Realist Research Evaluation and Learning Initiative (RREALI). She is also Director of Community Matters Pty Ltd, a research and evaluation consultancy based in South Australia.