Realist evaluation in practice: an interview with Brad Astbury
by Eunice Sotelo & Victoria Pilbeam
Many evaluators are familiar with realist evaluation and have come across the realist question: “what works for whom, in what circumstances, and how?” The book Doing Realist Research (2018) offers a deep dive into key concepts, with insights and examples from specialists in the field.
We caught up with Brad Astbury from ARTD Consultants about his book chapter. Before diving in, we quickly toured his industrial-chic coworking office on Melbourne’s Collins Street – brick walls, lounges and endless fresh coffee. As we sipped on our fruit water, he began his story with a language lesson.
Doing Realist Research (2018) was originally intended to be a Festschrift, German for ‘celebration text’, in honour of the recently retired Ray Pawson of Realistic Evaluation fame. Although the book is titled ‘research’, many of the essays in it, like Brad’s, are in fact about evaluation.
The book’s remit is the practice and how-to of realist evaluation and research. Our conversation went wide and deep, from the business of evaluation to the nature of reality.
His first take-home message was to be your own person when applying evaluation ideas.
You don’t go about evaluation like you bought something from Ikea – with a set of rules saying screw here, screw there. I understand why people struggle because there’s a deep philosophy that underpins the realist approach. Evaluators are often time poor, and they’re looking for practical stuff. At least in the book there are some examples, and it’s a good accompaniment to the realist book [by Pawson and Tilley, 1997].
Naturally, we segued into what makes realist evaluation realist.
The signature argument is about context-mechanism-outcome, the logic of inquiry, and the way of thinking informed by philosophy and the realist school of thought. That philosophy is an approach to a causal explanation that pulls apart a program and goes beyond a simple description of how bits and pieces come together, which is what most logic models provide. [The realist lens] focuses on generative mechanisms that bring about the outcome, and looks beneath the empirical, observable realm, like pulling apart a watch. I like the approach because as a kid I used to like pulling things apart. Don’t forget realist evaluation is only 25 years old; there’s room for development and innovation. I get annoyed when people apply it in a prescriptive way – it’s not what Ray or Nick would want. [They would probably say] here’s a set of intellectual resources to support your evaluation and research; go forth and innovate as long as it adheres to principles.
Brad admits it’s not appropriate in every evaluation to go that deep or use an explanatory lens. True to form (Brad previously taught an impact evaluation course at the Centre for Program Evaluation), he cheekily countered the argument that realist evaluation isn’t evaluation but a form of social science research.
Some argue you don’t need to understand how programs work. You just need to make a judgment about whether it’s good or bad, or from an experimental perspective, whether it has produced effects, not how those effects are produced. Evaluation is a broad church; it’s open for debate.
If it’s how and why, it’s realist. If it’s ‘whether’, then that’s less explicitly realist because it’s not asking how effects were produced but whether there were effects and if you can safely attribute those to the program in a classic experimental way. Because of the approach’s flexibility and broadness, you can apply it in different aspects of evaluation.
Brad mused on his book chapter title, “Making claims using realist methods”. He preferred the original, “Will it work elsewhere? Social programming in open systems”. So did we.
The chapter is about external validity, and realist evaluation is good at answering the question of whether you can get something that worked in some place with certain people to work elsewhere. Where realist approaches don’t work well is estimating the magnitude of the effect of a program.
As well as a broad overview of where realist evaluation fits in evaluation practice, Brad provided us with the following snappy tips for doing realist research:
Don’t get stuck on Context-Mechanism-Outcome (CMO)
When learning about realist evaluation, people can get stuck on having a context, mechanism and outcome. The danger of the CMO is using it like a generic program logic template (activities, outputs and outcomes), and listing Cs, Ms and Os, which encourages linear thinking. We need to thoughtfully consider how they’re overlaid to produce an explanation of how outcomes emerge.
A way to overcome this is through ‘bracketing’: set aside the CMO framework, build a program logic and elaborate on the model by introducing mechanisms and context.
Integrate prior research into program theory
Most program theory is built only on the understanding of stakeholders and the experience of the evaluator. This means we’re not being critical of our own and stakeholders’ assumptions about how something works.
A way to overcome this is through ‘abstraction’: through research, we can bring in wider understandings of what family of interventions is involved and use this information to strengthen the program theory. We need to get away from ‘this is a very special and unique program’ to ‘what’s this a case of? Are we looking at incentives? Regulation? Learning?’ As part of this work, realist evaluation requires evaluators to spend a bit more time in the library than other approaches.
Focus on key causal links
Brad looks to the causal links with the greatest uncertainty, or where there are the biggest opportunities to leverage what could help improve the program.
When you look at a realist program theory, you can’t explore every causal link. It’s important to focus your fire, and target evaluation and resources on things that matter most.
When asked for his advice to people interested in realist evaluation, Brad’s response was classic:
Just read the book ‘Realistic Evaluation’ from front to back, multiple times.
As a parting tip, he reminded us to aspire to be a theoretical agnostic. He feels labels can constrain how we do the work.
To a kid with a hammer, every problem can seem like a nail. Sometimes, people just go to the theory and methods that they know best. Rather than just sticking to one approach or looking for a neat theoretical label, just do a good evaluation that is informed by the theory that makes sense for the particular context.
--------------------------
Brad Astbury is a Director at ARTD Consultants. He specialises in evaluation design, methodology, mixed methods and impact evaluation.
Eunice Sotelo, research analyst, and Victoria Pilbeam, consultant, work at Clear Horizon Consulting. They also volunteer as mentors for the Asylum Seeker Resource Centre’s Lived Experience Evaluators Project (LEEP).