
An insightful conversation with Scott Bayley

by Anthea Rutter

What brings a person into the field of evaluation is always an interesting question to ask, particularly as you are never sure of the answer. In this case I did not expect the answer I got.

My crisis of confidence! In the late 1980s I was a program manager in the NT Health and Community Services Department when I came to have serious doubts about whether our programs were making a positive difference. I had been taught evidence-based decision making, and when I asked myself whether we were making a difference, I didn’t know, and it really bothered me. A work colleague suggested I might be interested in reading this new book by Patton on utilisation-focused evaluation. I was immediately hooked, and I knew then that I wanted to work as an evaluator! I subsequently did courses at Darwin Uni in research methods, evaluation and statistics. I then joined ATSIC in Canberra (1991-1992), where I worked as an evaluator in Indigenous affairs, and I have been working in evaluation ever since. Later on, after I had a bit more practical experience, I did my Master’s degree at Murdoch University in Perth and studied evaluation with Ralph Straton.

 

Clearly Scott is a person who thinks hard about his practice, so I was interested in what he regarded as his main areas of interest.

I am interested in theories of change, which are very important in international development. I’m also interested in impact evaluation methods, particularly critical multiplism (CM), which is not well known in Australia. It was developed by Cook and Shadish and is based on a particular view of reality – the idea being that the world is complex, and we can never know it perfectly. The best that we can do as evaluators is to study it from multiple perspectives using multiple methods. CM also holds that causality is probabilistic, not deterministic. Not every smoker gets cancer, but a significant proportion do, and hence we can say smoking causes cancer. To test causal relationships, CM uses three criteria first proposed by John Stuart Mill in 1843. In order to conclude that program A causes outcome B, you need to establish an association between A and B, and you need to show that A occurs in time before B. Finally, you need to rule out alternative explanations for the relationship between A and B. If and only if we can credibly satisfy all three tests can we conclude that program A causes outcome B. The real value of CM is that it asks us to focus on the evidence we need for making causal inferences, rather than getting bogged down in unproductive debates about experiments vs case studies vs surveys.
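To make Mill's three tests concrete, here is a minimal illustrative sketch (mine, not Scott's, and not from the interview): it applies the association, temporal-precedence and alternative-explanation checks to simulated data for a hypothetical program A, outcome B and measured confounder Z.

```python
# Illustrative sketch only: Mill's three tests for "program A causes outcome B",
# applied to simulated data with a hypothetical confounder Z.
# Requires numpy, scipy and statsmodels.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

z = rng.normal(size=n)                        # confounder (e.g. prior need)
a = (z + rng.normal(size=n) > 0).astype(int)  # program participation, measured first
b = 0.5 * a + 0.8 * z + rng.normal(size=n)    # outcome, measured later

# Test 1 - association: are A and B related at all?
r, p = stats.pearsonr(a, b)
print(f"Association between A and B: r = {r:.2f}, p = {p:.3g}")

# Test 2 - temporal precedence: A is measured before B by design in this
# simulation; in real evaluations this is a design question, not a statistic.

# Test 3 - ruling out alternatives: adjust for the known confounder Z and see
# whether the A-B relationship survives (a crude stand-in for stronger designs).
x = sm.add_constant(np.column_stack([a, z]))
fit = sm.OLS(b, x).fit()
print(f"Effect of A on B after adjusting for Z: {fit.params[1]:.2f}")
```

The probabilistic point comes through as well: not every simulated participant improves, but the adjusted coefficient on A estimates the average effect once Z is accounted for.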

My other main interest is evaluation capacity building (ECB). I was doing that in China, Vietnam, Cambodia and Laos for four years with the Asian Development Bank. The international experience with ECB is now quite clear. We can focus our capacity building efforts on leadership’s demand for, and ability to make use of, evaluative feedback; on institutional infrastructure (evaluation policies, resources, staff skills, IT systems etc.); or on the supply of evaluation feedback. The international lesson is that demand is where we need to focus; supply-side strategies (producing more evaluation reports) simply do not work.

 

Clearly Scott has worked in some complex areas requiring a wide range of skills, and I wanted to know, in particular, what he saw as the major challenges to his practice.

Initially, developing my own skills was a big challenge. Evaluation is such a big field with so much to learn! Undertaking cross-cultural evaluations is very complex. There are many potential dimensions to performance, and some of them are not immediately obvious. Speaking truth to power is an issue all evaluators face at some point in their career. I’ve had some tense discussions in Australia when evaluating the economic impact of the Melbourne Grand Prix, the privatisation of a prison, mental health services for suicidal youth, contracting NGOs for service delivery, and the policy advising process in state government agencies. These were all highly controversial evaluations that ultimately helped stakeholders to engage with the issues and make more informed decisions. I have also noticed that the commitment to evaluation of both state and Commonwealth governments waxes and wanes over time; this is very short-sighted, and the public deserves better. We should be aiming to use public monies for best effect.

 

A career as varied as Scott’s must have had some highlights, and I was keen to discover what they were.

I have worked on a wide variety of challenging evaluation topics: the delivery of health and community services in rural Australia, a cost-benefit study of the Melbourne Grand Prix, assessing cement production in China, the effectiveness of petrol sniffing programs for remote Indigenous youth, financial management reforms in Mongolia, quality assuring Australia’s international aid program, and complaint handling systems in government departments. I’ve had the great fortune to have had a number of highly skilled advisors, people who went out of their way to coach and mentor me. They include Gordon Robertson, Des Pearson, Patrick Batho, Ralph Straton, Darryl Cauley, David Andrich, Ron Penney, John Owen, Robert Ho, Rick Cummings, Burt Perrin and Ray Rist. I’ve been exceptionally lucky in that regard.

Being made an AES Fellow was another big highlight.

 

All of us are influenced by particular people or theories that help to define our evaluation practice. Scott’s response was brief and to the point.

The people named above, plus my academic background in social research methods and, later, in public policy analysis [were my main influences].

 

A question I asked all of the Fellows was how the field of evaluation had changed over the course of their careers. His response made me reflect on how we have matured as a profession and expanded our horizons into multiple areas of practice.

One thing I have noticed is that the AES membership has changed. When I first joined, it was all academics and government staff. Now we have a lot more NGOs and private consultants. A great many more Australians are now working in international development; that was quite rare when I first got into evaluation. Another change is the range of new impact evaluation methods that have emerged in the last 10 years. I’ve also noticed that 25 years ago there were various programs that were considered almost impossible to evaluate: environment, community development, Indigenous programs and policy advice, to name a few. These topics were considered too complex and hard to evaluate. Now we routinely do such evaluations. I think the boundaries and work of practising evaluators have evolved significantly over time.

 

All of us, as evaluators, want to ensure that our practice is developing so that we keep up with emerging trends and remain relevant. Scott’s response to this topic was concise and informative.

Evaluators need:

  • People skills – facilitation, negotiation, conflict management, communication
  • Evaluation theory and practice – knowing different models, being familiar with various approaches, plus having an expert understanding of evaluation logic
  • Research skills – broad skills, including plain-English reporting.

In the future we will see more demand for real-time evaluation. I believe evaluation will increasingly adopt action research methods, and appreciative enquiry will become much more common. Value for money is underdone in most of the evaluations that I read these days.

 

I asked Scott about what he saw as the main social issues/problems that evaluators ought to be thinking about and seeking to resolve in the next decade. His response showed a great deal of insight into the issues and the ways we can address them.

I think our communities are experiencing a loss of confidence in government and parliamentary processes. I would like to see government focusing on processes for good policy formulation and evaluation, and AES members should be helping with this so that more informed decisions can be made.

I believe that our theory of change for evaluation itself needs to be better. I don’t think that evaluation has fulfilled what we set out to do in the 60s and 70s. We talk a lot about transparency and how it should drive better program results, but the world doesn’t work that way. We rely on a supply-driven model, focusing on delivering reports rather than building demand for performance feedback and the ability of decision makers to make use of that feedback. Evaluators need to be more involved at the front end of program planning and design.

I see a lot of contracted evaluation work, and often it’s not of very good quality. This is partly because of poorly written terms of reference and inadequate budgets, and partly due to our own skill levels. I worry about the status and credibility of the evaluation field. A few years ago I was against the idea of professional accreditation for evaluators, but now I’m starting to change my mind. I see so many badly written terms of reference and evaluation reports. Accreditation might help to raise the bar. However, we would have to have many more training opportunities for evaluators, and I cannot see that happening in the near future. Still, I think it’s a debate worth having in the AES.

 

The answer to the question of how the AES can position itself to remain relevant in the future is an important one for AES members as well as the Board. Scott’s comments on this displayed a level of maturity and understanding of the situation.

I think it’s important to begin by clarifying the AES’s role and priorities. Is the AES an interest group, an advocacy body or a professional association (or some mixture of all three)? We can then focus on members’ needs and priorities (while recognising the difficulties of working that out!). Individuals, like governments, ebb and flow in their degree of interest in evaluation. Is there an opportunity for the AES to form more alliances and partnerships? I think there is, particularly with external agencies such as IPAA and ANZSOG. It’s hard for the AES to get things done when we rely so heavily on volunteer members; we simply lack the advantages of a well-developed administrative capacity. I’ve been impressed with the Board’s recent work on engaging with and advising Commonwealth government departments such as DoF.

In the Commonwealth government, evaluation was at its peak from 1986 to 1996. In the last eight months there has been more talk of evaluation, with the view that we need to lift the state of current practice, so hopefully we will get more evaluation into decision-making processes. In my view, the main issue for the central agencies (PM&C, Dept of Finance, ANAO, Treasury) is the lack of demand for evaluative feedback and of incentives to drive the continuous improvement of programs. On a positive note, we have seen some discussion recently on issues such as the potential benefits of having an Evaluator General.

 

Before we completed the discussion, Scott candidly shared one of his biases (new evaluators take note!).

One of my biases is that coming up with answers to evaluation questions is generally not that difficult. The hard part is actually identifying good questions to ask: questions that stakeholders care about; questions that reduce uncertainty; questions that support learning, collaborative relationships and better program results. 

Scott is currently a consultant for Oxford Policy Management in the UK, though he is based in Canberra. He was previously Principal Specialist, Performance Management and Results, in the Department of Foreign Affairs and Trade, Canberra. His major evaluation roles include positions with ATSIC; the Auditor-General’s offices in Perth and Melbourne; the Asian Development Bank in the Philippines; UNDP in Vietnam; the Department of Human Services in Melbourne; and AusAID in Canberra.
