by Anthea Rutter
Sue Funnell was one of the early trailblazers in evaluation methods. By her estimate, Sue has been in the profession for over 43 years. Over this time, she has held a number of roles in evaluation, including as the director of her own consulting company. She was a founding member of the AES, served two terms as President, chaired the Awards Committee, and was a presenter and trainer.
I first came across Sue in the 90s when she detailed her approach to program logic. We were also on the AES Board together for a few years. Sue has had a huge influence on the practice of evaluation, so I was very interested to find out how she came into the field.
I joined the Centre for Research and Measurement in Evaluation, NSW Department of Education, in the 70s. It was my first job after finishing a Psychology Honours degree. I started a part-time Master's degree in measurement and evaluation led by Ralph Straton at the University of Sydney and then received an Educational Research and Development Centre scholarship to the University of Illinois in the US. My project was in measurement, but I arrived there to find a hotbed of evaluators: Stake, Hastings and House, amongst others. This consolidated my interest in evaluation.
From that initial focus on measurement and her reputation as a leader in the development of approaches to program logic, what have emerged as Sue’s main areas of interest?
Mainly programs that achieve their results through behaviour change, such as educational and advisory programs and regulatory initiatives. I’m also interested in helping evaluators and commissioners to develop a sound description and understanding of the evaluand, so that they can identify appropriate evaluation questions.
A career as long as Sue's is bound to have challenges. What have been the main ones?
I reckon balancing clients' needs and expectations, particularly relating to time horizons, on the one hand, and my commitment to quality on the other. Also, commissioners of evaluations change constantly and, with this, come changes in their demands on a particular evaluation.
Another challenge is the speed of change in the policy context, which would be even greater now! It would appear that people are more interested in short-term initiatives and results than in longer-term strategic approaches.
As well as challenges, a good career has its highlights – I asked Sue about hers.
Working with Bryan Lenne in the Program Evaluation Unit in the NSW Public Service Board was a game changer. This started me on the path to enhancing program logic approaches, providing a tool to get managers to think about their programs and how to 'connect the dots'. I've received lots of positive feedback as well as criticism of my approach to program logic. I've honed the approach over time, and this culminated in co-writing Purposeful Program Theory: Effective Use of Theories of Change and Logic Models with Patricia Rogers.
As well as this, I set up my own successful company in 1992 and, for 25 years, worked across a wide range of policy areas and a wide range of jurisdictions and levels: local, state, federal, international, and NGOs.
As you’d expect, Sue’s approach to evaluation has had a number of influences, among these:
- Working in the Program Evaluation Unit in the NSW Public Service Board with Bryan Lenne
- Ernie House’s early (and continuing) work on Social Justice
- Hatry's work, 'Comparison is the Name of the Game'
- Undertaking meta evaluations, particularly to do with the evaluation function in organisations
- The Joint Committee Standards on Utility, Feasibility, Accuracy and Propriety
- Patton’s work on Utilisation-Focused Evaluation
- Locally, material coming out of different levels of government around program budgeting, in particular the concepts of appropriateness, effectiveness and efficiency.
Sue also gave an honest appraisal of the strengths and challenges in the growth and development of the practice of evaluation.
In the early days, evaluation was a fledgling field trying to define itself. There was much greater emphasis on evaluation models, such as Stufflebeam's decision-making model and Stake's responsive evaluation. I doubt whether evaluators think much about or use models these days. Perhaps this happens in academia, but I doubt whether models play a great role for practising evaluators.
Evaluation has been strengthened by becoming multi-disciplinary, recognising the need to draw on many fields. A more nuanced understanding of what is gold standard has developed: amongst evaluators, the gold standard is what is fit for purpose. Importantly, applying program logic is neutral with respect to the choice of methodology to address evaluation questions. However, from time to time, there is a push, especially from government, for RCTs to be the only gold standard.
There has, over the years, been a constant tension between the relative emphasis on monitoring and performance indicators on the one hand and evaluation studies on the other. There has also been frequent re-badging of performance information and evaluation approaches by state and federal governments, often with little or nothing new added!
There has been greater participation in evaluation by large companies (such as the Big Four). A lot is done in the name of evaluation that might more accurately be called management review.
When I asked Sue about the main skills or competencies evaluators need to have or develop to keep pace with emerging trends, her first thought was that she had been out of evaluation for a while. But, on reflection, she had some key insights.
Fleet-footedness and adaptability, while minimising compromises to quality, are important.
We can also make greater use of secondary data and possibly rely less on primary data. Social media has probably become a greater source of secondary information, but evaluators need to have the competencies to assess that information over time and draw on a wide range of social media sources, so that they are not influenced unduly by a particular social bubble.
Beyond the skills we need, I asked Sue what issues evaluators ought to be thinking about and seeking to resolve in the next decade.
If evaluators want to contribute to worthwhile social change, then they need to actively address social justice issues and take some stance. This raises the question of whether evaluators should become more socially activist. Perhaps one way to do this is to move a bit away from evaluating individual programs towards evaluating how well government and society are addressing issues. For example, in relation to domestic violence: what has been done in this area, how well is it working, and what more can be done? How can we do it, and how can we evaluate it? However, a vexed question is 'who pays for it?'. I don't have the answer to that.