
This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us. Blog guidelines can be found here.



Fellows: Jerome Winston

March 2020
by Anthea Rutter

Jerome Winston's career spans over 45 years. His fascinating insights into how evaluation was seen in the 70s reminded me that back then, evaluation was not regarded as a separate profession, but as part of other disciplines.

I started teaching at Preston Institute of Technology (which, following two mergers, became Phillip Institute of Technology and then RMIT University).  At first, I was teaching both diploma and degree courses in engineering and applied chemistry. When the School of Social Work opened, they were looking for staff who would teach computing and statistics. As an applied scientist, I proposed building monitoring and evaluation into practice, so recommended that computing and statistics be taught as just one aspect of program planning, monitoring and evaluation. This suggestion, first adopted in social work, was later included in a number of other disciplines such as nursing, chiropractic and leisure studies.

Jerome then talked about the 80s and the advent of program budgeting in the Victorian – and later, federal – government, and what this meant for the next stage of his career.

Although program budgeting was intended to incorporate evaluation, Jerome believed that reporting simple, aggregated, numerical data as ‘performance indicators’ would not provide the depth of information needed about most government programs.  The use – and misuse – of ‘performance indicators’ became a main focus of Jerome’s research. 

In 1978, Jerome designed postgraduate programs in data collection and analysis for research, monitoring and evaluation. These programs started at Phillip Institute of Technology (PIT) at about the same time that John Owen's program in evaluation was starting at The University of Melbourne. Most of Jerome's career was as a senior lecturer in multi-method research, monitoring and evaluation at PIT (later, RMIT).

The AES Fellows’ reasons for coming into the field of evaluation have been eclectic and Jerome presented yet another pathway.

I wouldn’t have gone into evaluation unless I had started with an interest in both science and government. When I met social work academics at PIT, I found they shared a broad sense of systems theory, research methods, and data collection and analysis. I ended up as an applied scientist teaching multi-method approaches to evaluation in the human services.

My main interest is in applying systems theory to the planning and evaluation of human services. My other interest is integrating multiple methods of data collection and analysis, and their use in building practice knowledge. I don’t expect any method, on its own, to be particularly useful. 

As an evaluation practitioner, he points to the challenges of bringing together multiple disciplines.

Most of the challenges I have encountered have to do with responding to the splitting of disciplines from each other – finding ways to bridge gaps among disciplines – gaps between public administration, applied science, planning, budgeting, evaluation and management. 

The main highlights for his career have been about building networks as well as being able to embrace opportunities.

In the 70s and early 80s, colleagues supported me to set up two different networks: the Australian Evaluation Network and its occasional newsletter were intended to link people across Australia. In Victoria, Colin Sharp and I set up the Evaluation Training Network, so that our colleagues could run low-cost evaluation workshops. Then there was meeting Anona Armstrong and being invited by her to contribute to planning the first evaluation conferences, then becoming a foundation member of the AES, and then a Board member.

Towards the end of the 80s, I was encouraged by colleagues in Canberra to apply for an executive interchange into the Australian Public Service. I was selected to work for six months in the evaluation section of the Commonwealth Department of Finance at the time they were introducing program budgeting – and performance indicators – across the public service. 

About the same time, I started to speak on evaluation and performance indicators at conferences on public administration and accounting in Australia and New Zealand. This led in 1994 to co-leading conference workshops in Kuala Lumpur with Dr. Arunaselam Rasappan – then an evaluation trainer and consultant at the Malaysian government’s public administration training college and later the head of an evaluation research, training and development centre that a few of us established in Malaysia. 

It was no surprise that the influences on his career have been practice-based.

The first influence was the philosophy of social work to which I was introduced at PIT.  Their approach saw evaluation as an ‘intervention for change’ integral to professional practice. Another influence was having the opportunity to work within the Department of Finance in Canberra on evaluation and what it meant within that department. 

I also asked him what changes he had seen during his career. Jerome’s perception is that formative evaluation has disappeared as a concept in some organisations that promote evaluation. He thinks that the emphasis has been more on summative and impact evaluation, with limited work on theory, without which summative evaluation provides limited information. 

In Australia and New Zealand, evaluation was typically understood as a team activity. We did not expect one person – 'the evaluator' – to carry out an evaluation, largely on their own, so we did not use the term 'evaluator' as frequently as it is used now, referring instead to 'evaluation teams' and 'evaluation practitioners'.

I was also keen to find out what skills and competencies the evaluators of today need to have to keep up with emerging trends in evaluation practice.

I think most of the members of the AES come from a narrow professional or academic background. In the 80s, the AES conferences included more auditors, public health, public administration and public finance professionals, economists, etc. We need to return to our multi-profession roots, which were evident in evaluation journals in the 1970s and early 1980s.  

When I asked Jerome about what he saw as the major social issues evaluators ought to be thinking about as well as seeking to resolve, his answers were very perceptive.

We need to understand that Indigenous cultures have different approaches to using knowledge in their community from what is common in the dominant Aussie culture. We sometimes have quite naïve approaches to Indigenous cultures. 

Another issue is including the ‘value’ in ‘evaluation’.  Some evaluation practitioners do what they are told is wanted, rather than insist on reporting on how other ‘values’ may influence findings, conclusions and recommendations. 

I asked Jerome how he saw the AES maintaining its relevance. His answer was focused and direct.

Build those bridges between professional disciplines that share an interest in evaluation. Take advantage of individuals’ different sources of knowledge and skills. Increase the relevance of evaluation at the practice level, and it is important that we keep doing research about the practice of monitoring and evaluation.

--------------------------

Jerome Winston continues to work with the research centre in Malaysia – the Centre for Development and Research in Evaluation. He does a range of work for government and aid programs on how well new evaluation models and frameworks work, and why. He also runs a small consultancy in Australia.


 

December 2019
by Jade Maloney, Jo Farmer and Eunice Sotelo

With so many authors and approaches to evaluation, knowing what to pay attention to can be hard. Evaluation, just like the catwalk, is subject to the whims of the day. How do you know what’s a passing fad and what will remain in fashion?

At the AES Victoria regional seminar in November, Brad Astbury suggested the following 10 books will stand the test of time.

1960s: In the age of bell bottoms, beehives and lava lamps, and protests against the Vietnam war, evaluation was on the rise. The U.S. Congress passed the Elementary and Secondary Education Act (ESEA) in 1965, the first major piece of social legislation to require evaluation of local projects undertaken with federal funds and to mandate project reporting. The hope was that timely and objective information about projects could reform local governance and practice of education for disadvantaged children, and that systemic evaluation could reform federal management of education programs.

While you wouldn't return to the fashion or the days when evaluation was exclusively experimental, hold onto your copy of Experimental and quasi-experimental designs for research by Donald T. Campbell and Julian C. Stanley. These two coined the concepts of external and internal validity. Even if you're not doing an experiment, you can use a validity checklist.

1970s: Back in the day of disco balls and platform shoes, the closest thing evaluation has to a rock star – Michael Quinn Patton – penned the first edition of Utilisation-focused evaluation. Now in its fourth edition, it’s the bible for evaluation consultants. It’s also one of the evaluation theories with the most solid evidence base – drawn from Patton’s research. In today’s age of customer centricity, it’s clear focusing on intended use by intended users is a concept that’s here to stay.

Carol Weiss’s message – ignore politics at your peril – could also have been written for our times. Her Evaluation research: methods of assessing program effectiveness provides a solid grounding in the politics of evaluation. It also describes theory-based evaluation – an approach beyond the experimental, that is commonly used today.

1980s: While you may no longer work out in fluorescent tights, leotards and sweat bands, your copy of Qualitative evaluation methods won't go out of fashion any time soon. In his second appearance on the list, Michael Quinn Patton made a strong case that qualitative ways of knowing are not inferior.

You may also know the name of the second recommended author from this decade, but more likely for the statistical test that bears his name (Cronbach’s alpha) than his contribution to evaluation theory, which is under-acknowledged. In Toward reform of program evaluation, Lee Cronbach and associates set out 95 theses to reform evaluation (in the style of Martin Luther’s 95 theses). That many of the 95 theses still ring true could be seen as either depressing or a consolation for the challenges evaluators face. For Astbury – ever the evaluation lecturer – thesis number 93 “the evaluator is an educator; his success is to be judged by what others learn” is the standout, but there’s one in there for everyone. (No. 13. “The evaluator’s professional conclusions cannot substitute for the political process” aligns with Weiss’s message, while No. 9. “Commissioners of evaluation complain that the messages from evaluations are not useful, while evaluators complain that the messages are not used” could have been pulled from Patton’s Utilisation-focused evaluation).

1990s: Alongside Vanilla Ice, the Spice Girls and the Macarena, the 90s brought us CMOCs – context-mechanism-outcome configurations – and a different way of doing evaluation. Ray Pawson and Nick Tilley's Realistic evaluation taught us not to just ask what works, but what works, for whom, in what circumstances, and why?

To balance the specificity of this perspective, the other recommendation from the 90s is agnostic. Foundations of program evaluation: Theories of practice by William Shadish Jr., Thomas Cook and Laura Leviton describes the three stages in the evolution of evaluation thinking. It articulates the criteria for judging the merits of evaluation theories: the extent to which they are coherent on social programming, knowledge construction, valuing, use, and practice. The message here is there is no single theory or ideal theory of evaluation to guide practice.

2000s: While iPods and flash mobs are a faint memory, these two books have had a lasting impact: Evaluation: An integrated framework for understanding, guiding, and improving public and non-profit policies and programs by Melvin Mark, Gary Henry and George Julnes; and Evaluation roots: Tracing theorists’ views and influences edited by Marvin Alkin.

The former covers four key purposes of evaluation: to review the merit of programs and their value to society (as per Scriven’s definition); to improve the organisation and its services; to ensure program compliance with mandates; to build knowledge and expertise for future programs. The take-out is to adopt a contingency perspective.

The latter is the source of the evaluation theory tree – which sparked commentary at this year's AES and AEA conferences for its individualism, and limited gender and cultural diversity. Still, Brad reminds us that there's value in learning from the thinkers as well as the practitioners; we can learn in the field but also from the field. According to Sage, Alkin's Evaluation roots is one of the best-selling books on evaluation.

It’s a reminder that there is much still to learn from those who’ve come before – that we can learn as much from those who’ve thought about evaluation for decades as we can from our practical experience.

2010s: The age of the selfie has not yet faded and nor has Evaluating values, biases and practical wisdom by Ernest R. House. It covers three meta-themes: values (Scriven, House); biases (Campbell and the experimental approach; expanding the concept of validity); and practical wisdom (on Aristotle’s notion of praxis – blending/embedding theory and practice). It gives us the wise advice to pay more attention to cognitive biases and conflicts of interest.

So now to the questions.

Why didn’t Scriven make the list? Because he’s written few books and there wasn’t enough room in the 90s. Nevertheless, Michael Scriven’s Evaluation thesaurus and The Logic of evaluation are among the books Astbury notes are worth reading.

What about local authors? Grab a copy of Building in research and evaluation: Human inquiry for living systems by Yoland Wadsworth and Purposeful program theory: Effective use of theories of change and logic models by Sue Funnell and Patricia Rogers. If you're new to evaluation, Evaluation methodology basics: The nuts and bolts of sound evaluation – from this year's AES conference keynote speaker E. Jane Davidson – can help you get a grasp on evaluation in practice.

Brad ended by sounding a word of warning not to get too caught up in the fads of the day. Buzzwords may come and go, but to avoid becoming a fashion victim, these ten books should be a staple of any evaluator’s bookshelf. 

-------------------------- 

Brad Astbury is a Director at ARTD Consulting, based in the Melbourne office. He has over 18 years’ experience in evaluation and applied social research and considerable expertise in combining diverse forms of evidence to improve both the quality and utility of evaluation. He has managed and conducted needs assessments, process and impact studies and theory-driven evaluations across a wide range of policy areas for industry, government, community and not-for-profit clients. Prior to joining ARTD in 2018, Brad worked for over a decade at the University of Melbourne, where he taught and mentored postgraduate evaluation students.


 

Fellows: Gill Westhorp

February 2020
by Anthea Rutter

Gill was named an AES Fellow in 2018, and I was pleased to introduce her at the AES conference in Launceston that year. We started with what brought her into the field of evaluation, and what it was about realist methodology that not only piqued her interest but now defines her as a practitioner.

I came into evaluation from a background in human services and managing human services. I’d always been concerned about how we could tell whether we were doing any good or not. I was introduced to realist evaluation through some work I was doing in crime prevention, and it provided a way to work out why some things work for some people but not for others. I found out through reading evaluations that there’s quite a common pattern – that programs often don’t work for those who are most disadvantaged, and some actually cause harm to them. I wanted to know why.

The realist approach assumes that outcomes will be different for different people. The more I worked with it, the more I realised that it’s not just how I approach evaluation, it’s actually how I see the world. I am a realist. It has shaped my life and my thinking in general. People who use it often don’t understand it and often get it wrong. It’s a methodological approach rather than a method, [that is] a philosophy for method.

It was clear from our conversation that Gill is committed to realist philosophies and methodologies. I was intrigued by her passion.

I describe myself as a realist methodologist. Within that I think my real area of expertise is developing methods or strategies for the application of realist methods in things which are hard to evaluate, for example, prevention programs. How do you evaluate things which haven’t happened? More recently I have looked at how to use realist methods in very large scale, very complex programs.

The other area of interest is in grappling with the implications of the fundamental philosophy of realism. Others have done a lot of work on realist ontology. My two current interests are realist axiology – how you think about valuing from a realist perspective, and what that means for evaluation – and realist epistemology. Some people have argued that realists are constructivists, epistemologically. But I think there are points of difference and I'm interested in what that means for practice.

All of us have experienced challenges along the way, and I was keen to explore these with Gill.

It’s not a single thing but a range of things. Some commissioners have asked for realist evaluation, but it turned out they didn’t understand it and what it can do. There are challenges in other projects where people who have been taken on as part of the team look as though they will be ok using a realist lens, but it turns out they’re not.

Challenges in terms of the usual constraints on evaluation, money and time. I do pick difficult things to evaluate and there can be challenges with that. Generally it’s the interaction of a number of factors in particular programs. The skill is being able to think through and negotiate the different factors in an evaluation.

She also pointed out some highlights.

A particular one is Nick inviting me to do the PhD – this was in a sense a starting point and an influential moment which changed my direction. I had decided to move into evaluation in some way, but this changed everything.

Writing the standards for realist evaluation was another one – that was an honour – but also working deeply and closely with those who really understood realist approaches. I enjoyed thinking about what really matters if you want to use this approach coherently and consistently.

A number of people and methodologies had a great influence on Gill’s practice.

Nick Tilley and Ray Pawson, of course. Bhaskar's work, including his model of the levels of reality: the empirical, the actual and the real. Patricia Rogers. I've done a lot of training in other methods too, and probably each of them has had some influence.

I’ve also adapted other methods to suit realist evaluation. One example is Most Significant Change stories. To do that, you have to look back at what the developers of a particular theory or method were trying to achieve, and the strengths and weaknesses of that for realist work. So for MSC stories, I looked at what Rick Davies intended, but then recognised that selecting the ‘most significant’ changes hides all the variation that realist analysis depends on. So I worked with a project to develop other strategies to maintain that variation while still identifying what it was that mattered to people, and why.

Gill had some definite ideas on how evaluation had changed over the years.

The pendulum swings back and forth in relation to methodologies and methods. At the moment there are parts of government here, and some overseas, that are swinging towards positivist approaches, i.e. Randomised Controlled Trials. I worry about that and think it could be a danger because RCTs don’t give all the information you need to make some kinds of evidence-informed judgments.

I see a lot of younger people coming into the profession, which I think is great. The courses at the University of Melbourne (CPE) and our own in Darwin do help to bring in younger people. I see the influence of technology, for example, the ability to manipulate big data.

I think there are some challenges too. For example, the use of social media in evaluation is fraught with dangers, but the ability to record data via iPad in the international development context is great. There are lots of implications in regard to new technologies.

Gill’s response to the issue of skills and competencies for the evaluator of today reinforced some of the fundamental qualities evaluators need in order to be successful practitioners.

The two biggest competencies for evaluators, I think, are the ability to think hard and well, because our job is to make judgments. Your judgments are supposed to be well informed. The skill of the evaluator lies in the analytic ability to think through the implications of what people are doing, but also the implications of the data you’ve collected, and work out what it all means.

The other competency is that you have to be able to engage with people, even though it can be difficult because people often feel uncomfortable with being evaluated, and with some of the findings. The relationship with the client is important.

She was definite about some of the social issues she thinks evaluators should be thinking about as well as helping to resolve in the next decade.

I choose to work in areas that are grappling with things which are threats to humanity – environment and climate issues, or international development issues, which have big implications for the balance of power.

The other priority for me relates to social justice, for example, women’s issues, youth, domestic violence, sexual assault, employment/unemployment – anything to do with social disadvantage, which is underpinned by injustice.

If you let society get unjust enough, and I think we are right there now, then the situation becomes a state of dangerous unrest. Those are my driving forces, and that's where I think the field of evaluation can make its best contribution.

Gill has been involved with the society in a number of roles: as a committee member, a Board member (twice) and convening a conference committee, so I felt she would be in a good position to ponder the direction which the AES should take in the future.

The AES has gone through a necessary stage of being inward focused, looking at the constitution, the strategic plan and so on. Now it needs to be more outwardly focused. At this exact moment, it needs to think about the implications of the proposal for an Evaluator General.

The society should have a stronger policy advocacy focus, which should be manifested at both a national and a state level. The members live in states and territories, and for many of us, our working lives are framed by state and territory legislation.

The third way in which it can look outward is dealing with other professions because the things they are doing are informing policy and practice. We need stronger bridges with other fields. It needs to begin a conversation which can inform practice both ways; otherwise we will become irrelevant.

The fourth way is to build some knowledge of the implications of new technologies. There are people within the field with specialist knowledge but many of us don’t know enough, and haven’t thought hard enough, about them as yet. Myself included.

--------------------------

Gill Westhorp is a Professorial Research Fellow at Charles Darwin University, Darwin, where she leads the Realist Research Evaluation and Learning Initiative (RREALI). She is also Director of Community Matters Pty Ltd, a research and evaluation consultancy based in South Australia.


 

Fellows: Scott Bayley

November 2019
by Anthea Rutter

What brings a person into the field of evaluation is always an interesting question to ask, particularly as you are never sure of the answer. In this case I did not expect the answer I got.

My crisis of confidence! In the late 1980s I was a program manager in the NT Health and Community Services Department, when I came to have serious doubts about whether our programs were making a positive difference. I had been taught evidence-based decision making, and when I asked myself whether we were making a difference, I didn't know, and it really bothered me. A work colleague suggested I might be interested in reading this new book by Patton on utilisation-focused evaluation. I was immediately hooked, and I knew then that I wanted to work as an evaluator! I subsequently did courses at Darwin Uni in research methods, evaluation and statistics. I then joined ATSIC in Canberra (1991-1992); I worked as an evaluator in Indigenous affairs and have been working in evaluation ever since. Later on, after I had a bit more practical experience, I did my Master's degree at Murdoch University in Perth and studied evaluation with Ralph Straton.

Clearly Scott is a person who thinks hard about his practice, so I was interested in what he regarded as his main area of interest.

I am interested in theories of change, which are very important in international development. I'm also interested in impact evaluation methods, particularly critical multiplism, which is not well known in Australia. It was developed by Cook and Shadish and is based on a particular view of reality – the idea being that the world is complex, and we can never know it perfectly. The best that we can do as evaluators is to study it from multiple perspectives using multiple methods. CM also holds that causality is probabilistic, not deterministic. Not every smoker gets cancer, but a significant proportion do, and hence we can say smoking causes cancer.

To test causal relationships, CM uses three criteria first proposed by John Stuart Mill in 1850. In order to conclude that program A causes outcome B, you need to establish an association between A and B, and you need to show that A occurs in time before B. Finally, we need to rule out alternative explanations for the relationship between A and B. If and only if we can credibly satisfy all three tests can we conclude that program A causes outcome B. The real value of CM is that it asks us to focus on the evidence we need for making causal inferences, rather than getting bogged down in unproductive debates about experiments vs case studies vs surveys etc.

My other main interest is evaluation capacity building. I was doing that in China, Vietnam, Cambodia and Laos for four years with the Asian Development Bank. The international experience with ECB is now quite clear. We can focus our capacity building efforts on: leadership's demand for and ability to make use of evaluative feedback; our institutional infrastructure (evaluation policies, resources, staff skills, IT systems etc.); or on the supply of evaluation feedback. The international lesson is that demand is where we need to focus our capacity building efforts; supply-side strategies (producing more evaluation reports) simply don't work.

Clearly Scott has worked in some complex areas requiring multiple skill levels, and I wanted to know, in particular, what he saw as major challenges to his practice.

Initially developing my own skills was a big challenge. Evaluation is such a big field with so much to learn! Undertaking cross-cultural evaluations is very complex. There are many potential dimensions to performance and some of them are not immediately obvious. Speaking truth to power is an issue all evaluators face at some point in their career. I've had some tense discussions in Australia when evaluating the economic impact of the Melbourne Grand Prix, the privatisation of a prison, mental health services for suicidal youth, contracting NGOs for service delivery, and when evaluating the policy advising process in state government agencies. All highly controversial evaluations that ultimately helped stakeholders to engage with the issue and make more informed decisions. I have also noticed that the commitment to evaluation of both state and Commonwealth governments waxes and wanes over time; this is very short-sighted, and the public deserves better. We should be aiming to use public monies for best effect.

A career so varied as Scott’s must have had some highlights and I was keen to discover what they were.

I worked on a wide variety of challenging evaluation topics: the delivery of health and community services in rural Australia, a cost-benefit study of the Melbourne Grand Prix, assessing cement production in China, the effectiveness of petrol sniffing programs for remote Indigenous youth, financial management reforms in Mongolia, quality assuring Australia's international aid program, and complaint handling systems in government departments. I've had the great fortune to have had a number of highly skilled advisors, people who went out of their way to coach and mentor me. They include Gordon Robertson, Des Pearson, Patrick Batho, Ralph Straton, Darryl Cauley, David Andrich, Ron Penney, John Owen, Robert Ho, Rick Cummings, Burt Perrin and Ray Rist. I've been exceptionally lucky in that regard.

AES Fellowship – big highlight.

All of us are influenced either by particular people or theories which help to define our evaluation practice. Scott’s response was brief and to the point.

Those people named above plus my academic background in social research methods and later in public policy analysis [were my main influences].

A question which I asked all of the Fellows was to find out how the field of evaluation had changed during the course of their careers. His response made me reflect how we have matured as a profession and expanded our horizons into multiple areas of practice.

One thing which I have noticed is that the AES membership has changed. When I first joined, it was all academics and government staff. Now we have a lot more NGOs and private consultants. A great many more Australians are now working in international development; that was quite rare when I first got into evaluation. Another change is the range of new impact evaluation methods which we have seen coming up in the last 10 years. I've also noticed that 25 years ago, there were various programs that were considered to be almost impossible to evaluate: environment, community development, Indigenous programs and policy advice to name a few. These topics were considered to be too complex and hard to evaluate. Now we routinely do such evaluations. I think that the boundaries and work of practising evaluators have evolved significantly over time.

All of us, as evaluators, want to ensure that our practice is developing so that we keep up with emerging trends and remain relevant. Scott’s response to this topic was concise and informative.

Evaluators need:

  • People skills – facilitation, negotiation, conflict management, communication
  • Evaluation theory and practice – knowing different models, being familiar with various approaches, plus having an expert understanding of evaluation logic
  • Research skills – broad skills including plain English style reporting. 

In the future we will see more of a demand for real-time evaluation. I believe evaluation will increasingly adopt action research methods, and appreciative enquiry will become much more common. Value for money is generally underdone in most of the evaluations that I read these days.

I asked Scott about what he saw as the main social issues/problems that evaluators ought to be thinking about and seeking to resolve in the next decade. His response showed a great deal of insight into the issues and the ways we can address them.

I think our communities are experiencing a loss of confidence in government and parliamentary processes. I would like to see government focusing on processes for good policy formulation and evaluation, and AES members should be helping with this so that more informed decisions can be made.

 I believe that our theory of change for evaluation itself needs to be better. I don’t think that evaluation has fulfilled what we set out to do in the 60s and 70s. We talk a lot about transparency, and that this should drive better program results, but the world doesn’t work that way. We rely on a supply driven model, focusing on delivering reports but not building demand for performance feedback and the ability of decision makers to make use of this feedback. Evaluators need to be more involved at the front-end of program planning/design.

I see a lot of contracted evaluation work and often it's not of very good quality. This is partly because of poorly written terms of reference and inadequate budgets, and also partly due to our own skill levels. I worry about the status and credibility of the evaluation field. A few years ago I was against the idea of professional accreditation for evaluators but now I'm starting to change my mind on this. I see so many badly written terms of reference and evaluation reports. Accreditation might help to raise the bar. However, we would have to have many more training opportunities for evaluators and I cannot see that happening in the near future. Still, I think it's a debate worth having in the AES.

The answer to the question on how the AES can position itself to still be relevant in the future is an important one for AES members as well as the Board.  Scott’s comments on this displayed a level of maturity and understanding of the situation.

I think it’s important to begin by clarifying the AES’s role and priorities. Is the AES an interest group, an advocacy body or a professional association (or some mixture of all three)? We can begin by focusing on members’ needs and priorities (while recognising the difficulties of working that out!). Individuals, like governments, ebb and flow in their degree of interest in evaluation. Is there an opportunity for the AES to form more alliances and partnerships? I think there is, particularly with external agencies such as IPAA and ANZSOG. It’s hard for the AES to get things done when we rely so heavily on voluntary members; we simply lack the advantages of having a well-developed administrative capacity. I’ve been impressed with the Board’s recent work on engaging with and advising Commonwealth government departments such as DoF. 

In the Commonwealth government, evaluation was at its peak from 1986-96. In the last 8 months there seems to be more talk of evaluation with the view that we need to lift the state of current practice so hopefully we will get more evaluation into decision making processes. In my view, the main issue for central agencies (PM&C, Dept of Finance, ANAO, Treasury) is the lack of demand for evaluative feedback and incentives to drive the continuous improvement of programs. On a positive note, we are seeing some discussion recently on issues such as the potential benefits of having an Evaluator General.

Before we completed the discussion, Scott candidly shared one of his biases (new evaluators take note!).

One of my biases is that coming up with answers to evaluation questions is generally not that difficult. The hard part is actually identifying good questions to ask: questions that stakeholders care about; questions that reduce uncertainty; questions that support learning, collaborative relationships and better program results.

--------------------------

Scott is currently a consultant for Oxford Policy Management in the UK but living in Canberra. He was previously Principal Specialist Performance Management and Results in the Department of Foreign Affairs Canberra.

His major roles in evaluation include: Evaluator with ATSIC, Auditor General’s Office Perth and Melbourne, Asian Development Bank Philippines, Vietnam UNDP, Department of Human Services Melbourne, and AusAID Canberra.