By Denika Blacklock
I have been working in development for 15 years and have specialised in M&E for the past 10. In all that time, I have never been asked to design an M&E framework, or undertake an evaluation, for a project that did not focus entirely on a logframe. Understandably so: the logframe is a practical tool for measuring results – particularly quantitative results – in development projects.
However, as the drive for increased development effectiveness and, thankfully, more accountability to stakeholders has progressed, it has become clear that measuring what we have successfully changed or improved (rather than simply what we have done) requires more than just numbers. More concerning is the fact that logframes measure linear progression toward preset targets. Any development practitioner worth their degree can tell you that development – and development projects – is never linear; at our best, we are guessing at what our output targets could conceivably be under ideal conditions, with the resources (money, time) available to us.
I have lately found myself faced with a challenging scenario: developing M&E frameworks for development projects in which ‘innovation’ is the objective, while being required (by organisational and donor rules) to design those frameworks with old tools like logframes and results frameworks, which cannot accommodate actual innovation in development.
The primary problem: logframes require targets. If we set output targets, then the results of activities will be preconceived, and not innovative. Target setting moulds how we design and implement activities. How can a project be true to the idea of fostering innovation in local development with only a logframe at hand to measure progress and success?
My argument was that if the project truly wanted to foster innovation, we needed to ‘see what happens, not decide beforehand what will happen with targets.’ I also argued that a target of ‘x number of new ideas for local development’ was a truly ineffective (if not irresponsible) way of going about being ‘open-minded about measuring innovation.’ There could be 15 innovative ideas that could be implemented, or one or two truly excellent ones. It was not the number of ideas, or how big their pilot activities were, that would determine how successful ‘innovation in local development’ would be, but what those projects could do. The project team was quick to understand that as soon as we set a specific numerical or policy target, the results would no longer be innovative. They would no longer be driven by ideas from government and civil society, but by international good practice and the development-sector requirement that we measure everything.
There was also the issue of how innovation would be defined. It does not necessarily need to be ‘shiny and new’ but it does need to be effective and workable. And whether the ideas ended up being scalable or not, the entire process needed to be something we could learn from. Working out how to measure this using a logframe felt like one gigantic web of complication and headaches.
My approach was to look at all of the methods of development monitoring ‘out there’ (i.e. Google). When it came to tracking policy dialogue (and how policy ideas could be piloted to improve local development), outcome mapping seemed the most appropriate way forward. I created a tool (Step 1, Step 2, etc.) that the project team could use on an annual basis to map the results of policy dialogue to support local development. The tool was based on the type of information the project team had access to, the people they would be allowed to speak to, and their capacity to implement it (context is key). Everyone was very happy with the tool – it was user-friendly, and adaptable across urban and rural governments. The big question was how to link this to the logframe.
In the end, we opted to set targets on learning, such as how many lessons-learned reports the project team would produce during the life of the project (at the mid-term and at the end). At its core, innovation is about learning: what works, what does not, and why. Surprisingly, there was not a lot of pushback on having targets which were not a direct reflection of ‘what had been done’ by the project. Personally, I felt refreshed by the entire process!
I completed the assignment even more convinced than I already was that, despite the push to change what we measure in development, we will never be effective at it unless those driving the development process (donors, big organisations) really commit to moving beyond the ‘safe’ logframe (which allows them to account for every cent spent). As long as we continue to stifle innovation by needing to know – in advance – what the outcome will be, we will only be accountable to those holding the money, and not to those who are supposed to benefit from development. Until this change in mindset happens at the top of the development pyramid, we will remain ‘log-framed’ into a corner that we cannot escape, because we have conditioned ourselves to think that the only success that counts is the success we have predicted.
Denika is a development and conflict analyst, and independent M&E consultant based in Bangkok.
Personal blog: http://theoryinpracticejournal.blogspot.com/
By Liz Smith
At the 2018 AES conference, Ignite presentations were introduced to light some fire in our evaluation bellies. Ignite presentations follow a set formula: five minutes and 20 slides, with each slide advancing automatically after 15 seconds. Presenters have to pitch their idea quickly and concisely.
My thoughts in 2017 when submitting an Ignite conference abstract were: ‘Great idea, let’s get a piece of this action. Let’s push my boundaries and try something new. Woohoo!’ In contrast, my thoughts one week out from #aes18LST were: ‘WTF have I got myself into this time!’
Let me share my lessons from doing my first Ignite presentation.
Effective Ignite presentations have one central theme about which you are passionate
I was arguing for short, plain English evaluation reports. I wanted to offer tips to create readable reports. Over the last two years, Litmus has implemented a company-wide plain English strategy, and was a finalist in New Zealand’s Plain English Awards. This is a topic I am very passionate about and have much (probably too much) to say on.
Work on content first to create a compelling and interesting story
I followed the advice from Ignite gurus and developed my story first. I worked out that 15 seconds equals about 35 words a slide, and I stuck to this rule of thumb. My first writing attempt fit the formula. But it was a shopping list of tips for writing readable reports. Pretty boring, as my critical friends agreed!
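For anyone checking the arithmetic, the word budget falls straight out of the format (the 140-words-per-minute pace is my assumption of a comfortable speaking speed, not an official Ignite figure):

20 slides × 15 seconds per slide = 300 seconds = 5 minutes
140 words per minute ÷ 4 = 35 words per 15-second slide
20 slides × 35 words per slide ≈ a 700-word script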
I decided to use an analogy to create a more compelling presentation around plain English reporting. I was presenting the week of Suffrage 125, the celebration of New Zealand women winning the right to vote in 1893. I set myself the challenge of using women’s suffrage to spark interest in my presentation. Using women’s suffrage as a backdrop resulted in a story that caught and held attention.
As a feminist, I also wanted to shine a light on women’s suffrage and the suffragists’ achievements at #aes18LST.
Three key ideas and two critical friends are a winning formula
Getting to a compelling and interesting Ignite presentation that captured the audience’s attention was challenging. I had two colleagues – Phoebe Balle and Sam Abbato – who advised and cajoled me through the development phases: the phase when many an idea hits the cutting room floor. At times it was a painful but very necessary process. The key tip, which they both constantly reiterated, was that I needed three points to support the central idea. Dump the rest!
Practice, then practice some more, and if needed cheat!
You have no excuse not to practice. In one hour, you can practice your Ignite presentation at least ten times. Practice does pay off. You get a sense of the flow between your script and your slides. And again more Ignite content hits the cutting room floor to burn to ashes.
On the big day, you are supposed to present your Ignite eloquently, without reference to your carefully crafted script (really!). The argument goes that the story will flow better and be less stilted. But be warned: time constraints do not allow for off-piste, off-the-cuff ideas.
I, like some at #aes18LST, cheated. We had our scripts (our comfort blankets) to keep us on track. Not being a purist, I’m okay with this. I believe it is better to give things a go, in whatever way that works for you.
You need to breathe slowly and prepare for the worst
In preparing for my Ignite, I watched others at #aes18LST to refine my presentation. The AES audience was definitely on the presenters’ side. However, I observed that the audience’s anxiety levels mirrored the presenter’s. I found the trick was to breathe and pause, to create an environment conducive to the audience listening.
You also need to prepare for the worst: technology failure. It happened! Kudos to Jade Maloney/Katherine Rich and Joanna Farmer, who presented their Ignites slideless. Their ability to create visual images through words and actions was admirable and entertaining. Missing out on Joanna’s cat-in-a-box picture was a conference low.
#aes19 is your chance to give Ignite a go
The Ignite presentations at #aes18LST were informative and entertaining. I was amazed by how much you could learn from a carefully structured five-minute Ignite presentation.
I am hoping #aes19SYD has the option for this dynamic presentation format. AES conferences offer evaluators a safe environment to present and test their boundaries. So what is your big Ignite theme for 2019? Go on, light some evaluation fires!
You can find more technical tips in the great resources I used for developing my Ignite presentation.
Liz Smith is a Partner at Litmus Limited, a New Zealand-based research and evaluation agency specialising in the health and justice sectors. She was Vice President of the AES from 2013 to 2018.
By Gerard Atkinson
Have you ever felt like you have put in a lot of work on an evaluation, only to find that what you have delivered hasn’t had the reach or engagement you expected? I’m not sure I have met an evaluator who hasn’t felt this way at least once in their career.
It was because of this that late last month I led a session at the 2018 Australasian Evaluation Society conference in Launceston, titled “Evolving the evaluation deliverable”. The aim of the session was to brainstorm ideas about more engaging ways of delivering evaluation findings. We had about 50 people attend, representing a mix of government, consultant and NGO evaluators. Over the course of the hour, we used interactive exercises to come up with fresh and exciting ideas for driving engagement.
A quick history of the deliverable
Since the earliest days of evaluation as a discipline, deliverables have been evolving. We started with the classic report, which then gave birth to a whole range of associated documents, from executive summaries to separate technical appendices to brochures and flyers. With the advent of visual presentation software, reports evolved to become highly visual, with slide decks and infographics becoming the primary deliverable. More recently, the desire to surface insights from databases has led to the creation of dashboards which enable rapid (and in some cases real-time) analysis of information from evaluation activities. The latest developments in this area even extend to intelligent systems for translating data into narrative insights, quite literally graphs that describe themselves.
Defining our scope
To keep the workshop focused, we used existing theoretical frameworks around deliverables in evaluation to guide our thinking. To begin with, we focused on instrumental use of evaluations (i.e. to drive decision making and change in the program being evaluated). We then restricted ourselves to deliverables that are distributive in nature, rather than presentations or directly working with stakeholders. Finally, we acknowledged the many systemic factors that impact on evaluation use, and focused on the goal of increasing self-directed engagement by users.
The ultimate outcome of this process was a guiding principle for our next generation deliverable – to maximise self-directed engagement with evaluation outcomes.
So what did we come up with?
Over the course of the session, we engaged in three creative exercises, each focusing on a particular aspect of the topic. Participants worked in small groups to discuss prompts and put ideas down on paper.
What might the next deliverable look like?
The first creative exercise had participants draw what they thought the next deliverable might look like. This question produced the widest variety of responses and showed the depth of creativity of participants. One group even developed a prototype of a next-generation “chatterbox” deliverable as an example (more on that below). There was a consistent theme of moving beyond purely visual and text-based forms of presentation to incorporate verbal and tactile modes of engagement.
Some of the ideas included:
There was a lot of synergy in this part of the session with Karol Olejniczak’s keynote on “serious games” as a tool for facilitating evaluation activities, and it was good to see how that presentation inspired participants to incorporate that style of thinking and design in a broader context.
How can we integrate it into our existing work?
The second question posed in the workshop addressed how we might align these new deliverables with our existing set. I had participants commence the exercise by having one person come up with an idea, then having other members of the group build on it. The responses to the exercise fell into three broad themes.
What skills are required to design, develop and deliver it?
The final round was the “lightning” round, where participants put forward responses, as fast as they could, to three questions:
What do we have already?
What don’t we have already?
What will we do ourselves and where will we get in help?
In the space of a one-hour workshop, we were able to surface some great insights into how we engage with stakeholders and create some exciting new ideas for deliverables. I hope that people will be able to build on these and develop them into real deliverables that support evaluation communication.
Gerard is a manager with ARTD Consultants, specialising in evaluation strategy, dashboards and data visualisation. He also has a side career as an opera singer.
By the AES blog team
The Launceston conference certainly set us some challenges as evaluators. The corridors of the Hotel Grand Chancellor were abuzz with ideas about how we can transform our practice to make a difference on a global scale, harness the power of co-design on a local level, take up the opportunities presented by gaming, and ensure cultural safety and respect. Since then, the conversations have continued in blogland. Here’s what some of our members had to say.
Elizabeth Smith, Litmus: The shock and awe of transformations: Reflections from AES2018 Conference – on the two challenges that struck a chord: the need to transform evaluation in Indigenous settings and support Indigenous evaluators, and the need to focus globally and act locally to transform the world https://www.linkedin.com/pulse/shock-awe-transformations-reflections-from-aes2018-conference-smith/
Charlie Tulloch, Policy Performance: Australian Evaluation Society Conference: Lessons from Lonnie – on the evolution of AES conferences, from presentations about projects to sharing insights, including from failures and challenges https://www.linkedin.com/pulse/australian-evaluation-society-conference-lessons-from-charlie-tulloch/
Fran Demetriou, Lirata Consulting: AES 2018 conference reflections: power, values, and food – on the experience of an emerging evaluator and all those great food metaphors https://www.aes.asn.au/blog/1474-aes-2018-conference-reflections.html
ARTD team: Transforming evaluation: what we’re taking from #aes18LST – on the very different things that spoke to each of us, from the challenge to ensure cultural safety and respect to leveraging big data and Gill Westhorp’s realist axiology https://artd.com.au/transforming-evaluation-what-we-re-taking-from-aes18lst/16:216/
Natalie Fisher, NSF Consulting: Australasian Evaluation Conference 2018 – Transformations – on measuring transformation (relevance, depth of change, scale of change and sustainability), transforming our mindsets and capabilities, the power balance and how we write reports http://nsfconsulting.com.au/aes-conference-2018/
Joanna Farmer, beyondblue: Evaluating with a mental health lived experience – on the strengths and challenges this brings, and breaking the dichotomy between evaluator and person with lived experience by being explicit about values and tackling power dynamics https://www.linkedin.com/pulse/evaluating-mental-health-lived-experience-joanna-farmer
Byron Pakula, Clear Horizon: The blue marble flying through the universe is not so small... – on Michael Quinn Patton’s take-outs – transformation should hit you between the eyes, and we should assess whether the intervention contributed to the transformation https://www.clearhorizon.com.au/all-blog-posts/the-blue-marble-flying-through-the-universe-is-not-so-small.aspx
David Wakelin, ARTD: AES18 Day 1: How can we transform evaluation? – on how big data may help us transform evaluation and tackle the questions we need to answer, without losing sight of ethics and the people whose voice we need to hear https://artd.com.au/aes18-day-1-how-can-we-transform-evaluation/16:215/
Jade Maloney, ARTD: How will #aes18LST transform you? – on Michael Quinn Patton’s call to action – evaluating transformations requires us to transform evaluation – plus the take-outs from Patton and Kate McKegg’s Principles-Focused Evaluation workshop https://www.aes.asn.au/blog/1466-how-will-aes18lst-transform-you.html
Jess Dart, Clear Horizon: Values-based co-design with a generous portion of developmental evaluation – on Penny Hagan’s tools that integrate design and evaluation, including the rubric and card pack they have developed for assessing co-design capability and conditions. https://www.clearhorizon.com.au/all-blog-posts/values-based-co-design-with-a-generous-portion-of-developmental-evaluation.aspx
AES Blog Working Group: Eunice Sotelo, Jade Maloney, Joanna Farmer and Matt Healy
By Fran Demetriou
The theme of transformations resonated with me. I’m relatively new to evaluation and it’s been an intense journey over the last two years in learning about what evaluation is and how to go about it well. This conference (my first ever evaluation conference) was a pivotal point in that journey.
As an ‘emerging evaluator’, my first question was… ‘what does that mean?’ I participated in one of the emerging evaluator panels, where one of the facilitators, Eunice Sotelo, did some excellent miming of the concept (I can’t do it justice in text, so you’ll have to ask her nicely to demonstrate it). An audience member in the session called us caterpillars, following on from the butterfly references in Michael Quinn Patton’s inspiring opening plenary. I’m not sure we have a working definition of transformation yet, but I’ve got some good imagery.
This caterpillar came to the conference with a good grounding in evaluation, but with a lot more to understand, including where I was at and what I needed to do to develop.
Here’s what I’ve taken away from my first AES conference:
Community spirit and failing forwards
I was struck by the diversity of content in the sessions. There is so much to learn about and so much innovation underway to enable us to better address complex social problems. This felt overwhelming as a newcomer, but I was comforted to find a community of evaluators at the conference who wanted to share, collaborate and learn from one another.
It was great to have so many interactive sessions to enable those connections. As an emerging evaluator, I also appreciated the effort the conference made to welcome us into the community, focus on our development, and provide platforms for hearing our perspectives on opportunities to develop the sector.
The emphasis on learning from failure was valuable. One of my conference highlights was Matt Healey’s interactive session (Learning from failure: A Safe Space Session) where, under the Chatham House Rule, evaluators with various backgrounds, specialisations and levels of experience shared some of those facepalm moments. It was comforting to know others had made similar mistakes to mine, but even more beneficial to learn from others’ mistakes so as to mitigate them in my own practice.
I learned that, as we continue to transform our practice to tackle complex problems, there are going to be failures along the way – and that’s ok, so long as we recognise them, learn and adapt. I went along to the panel session Umbrellas and rain drops: Evaluating systems change lessons and insights from Tasmania and listened as a highly experienced team shared the challenges they have encountered implementing systems change through the Beacon Foundation in Tasmanian schools. For me, it helped surface the importance of having strong relationships with partners and funders who are willing to fail forwards with us.
We have power! Let’s share it, empower others and be ready to let go
The conference reiterated for me the power that we hold as evaluators. We have the power to influence who is included in evaluations, and how – and we need to push back to make sure those who are affected by decisions are involved meaningfully in the process.
Through some enlightening role play, the session We are Women! We are Ready! Amplifying our voice through Participatory Action Research (Tracy McDiarmid and Alejandra Pineda, International Women’s Development Agency) helped me to reflect on the ever-present power dynamics between evaluation stakeholders, and on how to critically assess and address them to ensure stakeholders are included.
I learned that power isn’t just about how you include stakeholders, but what you bring to each evaluation through your own identity, and the often unstated cultural values you hold. A challenge I will be taking back to my practice is to be more critically aware of my own identity and the impact it has on evaluations I work on.
These conversations and discussions were summed up for me in Sharon Gollan and Kathleen Stacey’s plenary, with the galvanising question: “When will YOU challenge the power when it is denying inclusion?”
It’s all about values
Very much connected to power is whose values are heard and counted in an evaluation. I went to several sessions dedicated explicitly to values in evaluations. It was exciting to see both the development of theory and the sharing of practical tools for eliciting values in evaluations.
In their plenary, Sharon Gollan and Kathleen Stacey provided a reminder that the benchmark for doing evaluation has been defined by the dominant culture. This was a powerful insight for me – it seems obvious, but it’s something easily overlooked. The way we undertake evaluation has cultural values embedded deep within it, and we must take care to think about the suitability of our approaches especially with Indigenous communities.
Being able to elicit values at each stage of an evaluation is a separate challenge altogether from understanding that they are important. It was great, then, to have several sessions focused on identifying different types of values, articulating values approaches, specifying where values fit into an evaluation (at the start, after which they permeate everything), and working with these values, especially in culturally appropriate ways.
We like food metaphors
And finally, we must be a hungry bunch, because the sessions were peppered with food references.
Some savoury metaphors included policy being described as spaghetti, with evaluation making it a bento box (Jen Thompson in Traps for young players: a panel session by new evaluators for new evaluators), and a key takeaways slide with a pizza image (Joanna Farmer in When an evaluator benefits: the challenges of managing values and power in evaluating with lived experience).
Pudding was offered up by Jenny Riley and Clare Davies’ appetisingly named Outcomes, Dashboards and Cupcakes, and by Matt Healey’s Ignite session on evaluators as cake, Just add water: The ingredients of an evaluator.
My favourite food reference, reflecting the importance of power and values, was from Lisa Warner, who was quoted by a panellist in Developmental evaluation in Indigenous contexts: transforming power relations at the interface of different knowledge systems: “If you’re not at the table, you’re on the menu”.
I don’t know about you, but I certainly feel well nourished!
I’ll be transforming my work to better address values, power and inclusion, and I look forward to the Emerging Evaluators Special Interest Group kicking off soon, so I can continue learning with and from others.
Thanks for a great first conference, and I look forward to seeing you in Sydney next year!
Fran Demetriou works at Lirata Consulting as an Evaluator, and volunteers as an M&E advisor for the Asylum Seeker Resource Centre’s Mentoring Program.
* Please note that the original version of this article incorrectly attributed the quote “If you’re not at the table, you’re on the menu” to a panellist in Developmental evaluation in Indigenous contexts: transforming power relations at the interface of different knowledge systems. In fact, the panellist was quoting Lisa Warner, who said this in her STEPS team presentation. The post has been updated accordingly.