
This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us. Blog guidelines can be found here.



October 2018
By the AES blog team

The Launceston conference certainly set us some challenges as evaluators. The corridors of the Hotel Grand Chancellor were abuzz with ideas about how we can transform our practice to make a difference on a global scale, harness the power of co-design on a local level, take up the opportunities presented by gaming, and ensure cultural safety and respect. Since then, the conversations have continued in blogland. Here’s what some of our members had to say.

Elizabeth Smith, Litmus: The shock and awe of transformations: Reflections from AES2018 Conference – on the two challenges that struck a chord: the need to transform evaluation in Indigenous settings and support Indigenous evaluators, and the need to focus globally and act locally to transform the world https://www.linkedin.com/pulse/shock-awe-transformations-reflections-from-aes2018-conference-smith/

Charlie Tulloch, Policy Performance: Australian Evaluation Society Conference: Lessons from Lonnie – on the evolution of AES conferences, from presentations about projects to sharing insights, including from failures and challenges https://www.linkedin.com/pulse/australian-evaluation-society-conference-lessons-from-charlie-tulloch/ 

Fran Demetriou, Lirata Consulting: AES 2018 conference reflections: power, values, and food – on the experience of an emerging evaluator and all those great food metaphors https://www.aes.asn.au/blog/1474-aes-2018-conference-reflections.html 

ARTD team: Transforming evaluation: what we’re taking from #aes18LST – on the very different things that spoke to each of us, from the challenge to ensure cultural safety and respect to leveraging big data and Gill Westhorp’s realist axiology https://artd.com.au/transforming-evaluation-what-we-re-taking-from-aes18lst/16:216/ 

Natalie Fisher, NSF Consulting: Australasian Evaluation Conference 2018 – Transformations – on measuring transformation (relevance, depth of change, scale of change and sustainability), transforming our mindsets and capabilities, the power balance and how we write reports http://nsfconsulting.com.au/aes-conference-2018/ 

Joanna Farmer, beyondblue: Evaluating with a mental health lived experience – on the strengths and challenges this brings, and breaking the dichotomy between evaluator and person with lived experience by being explicit about values and tackling power dynamics https://www.linkedin.com/pulse/evaluating-mental-health-lived-experience-joanna-farmer 

Byron Pakula, Clear Horizon: The blue marble flying through the universe is not so small... – on Michael Quinn Patton’s take-outs – transformation should hit you between the eyes, and we should assess whether the intervention contributed to the transformation https://www.clearhorizon.com.au/all-blog-posts/the-blue-marble-flying-through-the-universe-is-not-so-small.aspx

David Wakelin, ARTD: AES18 Day 1: How can we transform evaluation? – on how big data may help us transform evaluation and tackle the questions we need to answer, without losing sight of ethics and the people whose voice we need to hear https://artd.com.au/aes18-day-1-how-can-we-transform-evaluation/16:215/ 

Jade Maloney, ARTD: How will #aes18LST transform you? – on Michael Quinn Patton’s call to action – evaluating transformations requires us to transform evaluation – and the take-outs from Patton and Kate McKegg’s Principles-Focused Evaluation workshop https://www.aes.asn.au/blog/1466-how-will-aes18lst-transform-you.html

Jess Dart, Clear Horizon: Values-based co-design with a generous portion of developmental evaluation – on Penny Hagan’s tools that integrate design and evaluation, including the rubric and card pack they have developed for assessing co-design capability and conditions. https://www.clearhorizon.com.au/all-blog-posts/values-based-co-design-with-a-generous-portion-of-developmental-evaluation.aspx 

We’ve endeavoured to include blogs from all AES members. If you have a blog that didn’t make our list, contact us and we’ll make sure you’re included.

AES Blog Working Group: Eunice Sotelo, Jade Maloney, Joanna Farmer and Matt Healy

October 2018
By Fran Demetriou


The theme of transformations resonated with me. I’m relatively new to evaluation and it’s been an intense journey over the last two years in learning about what evaluation is and how to go about it well. This conference (my first ever evaluation conference) was a pivotal point in that journey.

As an ‘emerging evaluator’, my first question was… ‘what does that mean?’ I participated in one of the emerging evaluators panels, where one of the facilitators, Eunice Sotelo, did some excellent miming of the concept (I can’t justify it with text, so you’ll have to ask her nicely to demonstrate it). An audience member in the session called us caterpillars, following on from butterfly references in Michael Quinn Patton’s inspiring opening plenary. I’m not sure we have a working definition of transformation yet, but I’ve got some good imagery.

This caterpillar came to the conference with a good grounding in evaluation, but with a lot more to understand, including where I was at and what I needed to do to develop.

Here’s what I’ve taken away from my first AES conference:

Community spirit and failing forwards
I was struck by the diversity of content in the sessions. There is so much to learn about and so much innovation underway to enable us to better address complex social problems. This felt overwhelming as a newcomer, but I was comforted to find a community of evaluators at the conference who wanted to share, collaborate and learn from one another.

It was great to have so many interactive sessions to enable those connections. As an emerging evaluator, I also appreciated the effort the conference made to welcome us into the community, focus on our development, and provide platforms for our perspectives on opportunities to develop the sector.

The emphasis on learning from failure was valuable. One of my conference highlights was Matt Healey’s interactive session (Learning from failure: A Safe Space Session) where, under the Chatham House Rule, evaluators with various backgrounds, specialisations and levels of experience shared some of those facepalm moments. It was comforting to know others had made similar mistakes to me, but even more beneficial to learn from others’ mistakes so I can avoid them in my own practice.

I learned that, as we continue to transform our practice to tackle complex problems, there are going to be failures along the way – and that’s ok, so long as we recognise them, learn and adapt. I went along to the panel session Umbrellas and rain drops: Evaluating systems change lessons and insights from Tasmania and listened as a highly experienced team shared the challenges they have encountered implementing systems change through the Beacon Foundation in Tasmanian schools. For me, it helped surface the importance of having strong relationships with partners and funders who are willing to fail forwards with us. 

We have power! Let’s share it, empower others and be ready to let go
The conference reiterated for me the power that we hold as evaluators. We have the power to influence who is included in evaluations, and how – and we need to push back to make sure those who are affected by decisions are involved meaningfully in the process.

Through some enlightening role play, the We are Women! We are Ready! Amplifying our voice through Participatory Action Research session (Tracy McDiarmid and Alejandra Pineda from the International Women’s Development Agency) helped me to reflect on the ever-present power dynamics between evaluation stakeholders, and how to critically assess and address these to ensure stakeholders are meaningfully included.

I learned that power isn’t just about how you include stakeholders, but what you bring to each evaluation through your own identity, and the often unstated cultural values you hold. A challenge I will be taking back to my practice is to be more critically aware of my own identity and the impact it has on evaluations I work on.

These conversations and discussions were summed up for me in Sharon Gollan and Kathleen Stacey’s plenary with the galvanising question: “When will YOU challenge the power when it is denying inclusion?”

It’s all about values
Very much connected to power is whose values are heard and counted in an evaluation. I went to several sessions dedicated explicitly to values in evaluations. It was exciting to see both the development of theory and the sharing of practical tools for eliciting values in evaluations.

In their plenary, Sharon Gollan and Kathleen Stacey provided a reminder that the benchmark for doing evaluation has been defined by the dominant culture. This was a powerful insight for me – it seems obvious, but it’s something easily overlooked. The way we undertake evaluation has cultural values embedded deep within it, and we must take care to think about the suitability of our approaches, especially with Indigenous communities.

Being able to elicit values at each stage of an evaluation is a separate challenge altogether from understanding that they are important. It was great, then, to have several sessions focused on identifying different types of values, articulating values approaches, specifying where values fit into an evaluation (at the start, and then they permeate everything), and how to work with these values, especially in culturally appropriate ways.

We like food metaphors
And finally, we must be a hungry bunch, because the sessions were peppered with food references. 

Some savoury metaphors included policy being described as spaghetti, with evaluation making it a bento box (Jen Thompson in Traps for young players: a panel session by new evaluators for new evaluators), and a key takeaways slide with a pizza image (Joanna Farmer in When an evaluator benefits: the challenges of managing values and power in evaluating with lived experience).

Pudding was offered up by Jenny Riley and Clare Davies’ appetisingly named Outcomes, Dashboards and Cupcakes and by Matt Healey’s ignite session on evaluators as cake, Just add water: The ingredients of an evaluator.

My favourite food reference, reflecting the importance of power and values, was from Lisa Warner, who was quoted by a panellist in Developmental evaluation in Indigenous contexts: transforming power relations at the interface of different knowledge systems: “If you’re not at the table, you’re on the menu”.

What’s next?
I don’t know about you, but I certainly feel well nourished! 

I’ll be transforming my work to better address values, power and inclusion, and I look forward to the Emerging Evaluators Special Interest Group kicking off soon to continue learning with and from others.

Thanks for a great first conference, and I look forward to seeing you in Sydney next year!

Fran Demetriou works at Lirata Consulting as an Evaluator, and volunteers as an M&E advisor for the Asylum Seeker Resource Centre’s Mentoring Program. 
LinkedIn: https://www.linkedin.com/in/francesca-demetriou-975345a5/
Twitter: @Fran_Demetriou

* Please note that the original version of this article incorrectly attributed the quote “If you’re not at the table, you’re on the menu” to a panellist in Developmental evaluation in Indigenous contexts: transforming power relations at the interface of different knowledge systems. In fact, the panellist was quoting Lisa Warner, who said this in her STEPS team presentation. The post has been updated to reflect this.

October 2018
By Gerard Atkinson


Have you ever felt like you have put in a lot of work on an evaluation, only to find that what you have delivered hasn’t had the reach or engagement you expected? I’m not sure I have met an evaluator who hasn’t felt this way at least once in their career.

It was because of this that late last month I led a session at the 2018 Australasian Evaluation Society conference in Launceston, titled “Evolving the evaluation deliverable”. The aim of the session was to brainstorm ideas about more engaging ways of delivering evaluation findings. We had about 50 people attend, representing a mix of government, consultant and NGO evaluators. Over the course of the hour, we used interactive exercises to come up with fresh and exciting ideas for driving engagement.

A quick history of the deliverable
Since the earliest days of evaluation as a discipline, deliverables have been evolving. We started with the classic report, which then gave birth to a whole range of associated documents, from executive summaries to separate technical appendices to brochures and flyers. With the advent of visual presentation software, reports evolved to become highly visual, with slide decks and infographics becoming the primary deliverable. More recently, the desire to surface insights from databases has led to the creation of dashboards which enable rapid (and in some cases real-time) analysis of information from evaluation activities. The latest developments in this area even extend to intelligent systems for translating data into narrative insights, quite literally graphs that describe themselves.
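
To make that last idea concrete, here’s a minimal, hypothetical sketch of the template-based “data-to-narrative” approach – the function, thresholds and survey figures below are invented for illustration, not taken from any particular product:

```python
# Illustrative template-based "data-to-narrative" generation.
# All names, thresholds and figures are hypothetical.

def describe_trend(label, values):
    """Turn a numeric series into a one-sentence narrative insight."""
    change = values[-1] - values[0]
    pct = 100 * change / values[0]
    if abs(pct) < 5:              # treat small movements as noise
        verb = "remained broadly stable"
    elif change > 0:
        verb = f"rose by {pct:.0f}%"
    else:
        verb = f"fell by {abs(pct):.0f}%"
    return f"{label} {verb} over the period ({values[0]} to {values[-1]})."

survey_scores = [62, 64, 71, 78]  # hypothetical quarterly satisfaction scores
print(describe_trend("Participant satisfaction", survey_scores))
# -> Participant satisfaction rose by 26% over the period (62 to 78).
```

A real system layers many such templates over a chart’s underlying data, which is what makes a graph appear to “describe itself”.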

Defining our scope
To keep the workshop focused, we used existing theoretical frameworks around deliverables in evaluation to guide our thinking. To begin with, we focused on instrumental use of evaluations (i.e. to drive decision making and change in the program being evaluated). We then restricted ourselves to deliverables that are distributive in nature, rather than presentations or directly working with stakeholders. Finally, we acknowledged the many systemic factors that impact on evaluation use, and focused on the goal of increasing self-directed engagement by users.

The ultimate outcome of this process was a guiding principle for our next-generation deliverable – to maximise self-directed engagement with evaluation outcomes.

So what did we come up with?
Over the course of the session, we engaged in three creative exercises, each focusing on a particular aspect of the topic. Participants worked in small groups to discuss prompts and put ideas down on paper.

What might the next deliverable look like?
The first creative exercise had participants draw what they thought the next deliverable might look like. This question produced the widest variety of responses and showed the depth of creativity of participants. One group even developed a prototype of a next-generation “chatterbox” deliverable as an example (more on that below). There was a consistent theme of moving beyond purely visual and text-based forms of presentation to incorporate verbal and tactile modes of engagement.

Some of the ideas included:

  • Podcasts including rich stories based on qualitative data, with the ability to splice chapters and information according to the needs and interests of the listener.
  • The “Chatterbox” (pictured) – one of our participants, Rebecca King from Oxfam, put forward the idea of using a chatterbox toy that allows the user to play with the results as a game and select the topics that interest them.
  • Following the childhood theme, building blocks featuring key findings and picture books were also put forward as ideas.
  • At the other end of the spectrum, high-tech solutions were proposed. These included using virtual reality environments to present findings, having QR codes incorporated in deliverables to enable users to dive more deeply into the material (see the sketch after this list), and even a “virtual assistant” à la Siri or Alexa that can guide users through findings in a dialogue-driven fashion.
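
As a small illustration of the QR-code idea above, here’s a sketch using the third-party Python qrcode package (installed alongside Pillow via pip install "qrcode[pil]"); the report URL is a hypothetical placeholder:

```python
# Generate a QR code that deep-links from a printed deliverable to
# supporting material online. The URL is a hypothetical placeholder.
import qrcode

img = qrcode.make("https://example.org/evaluation-report/technical-appendix")
img.save("report_appendix_qr.png")  # embed this image in the report layout
```

Printed next to a finding, a code like this lets readers jump straight to the underlying detail without cluttering the main document.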

[Photo: the “chatterbox” prototype developed in the session]

There was a lot of synergy in this part of the session with Karol Olejniczak’s keynote on “serious games” as a tool for facilitating evaluation activities, and it was good to see how that presentation inspired participants to incorporate that style of thinking and design in a broader context.

How can we integrate it into our existing work?
The second question posed in the workshop addressed how we might align these new deliverables with those we already produce. To commence the exercise, one person in each group put forward an idea, which the other members then built on. The responses fell into three broad themes.

  • Data: We need both the tools and the skills to generate the right types of data to support these deliverables.
  • Dialogue: One of the most interesting insights of the session was that even though these deliverables are distributive in nature, we should design new deliverables that enable a two-way conversation between evaluator and audience.
  • Driving buy-in: Work will need to be done to get user buy-in, whether that is through advertising the benefits of these new channels of delivery, working directly with end users in the design process to create a sense of ownership, or ongoing discussion throughout the delivery process to optimise the deliverable.

What skills are required to design, develop and deliver it?
The final round was a “lightning” round, where participants came up with responses to three questions as fast as they could. Their responses fell into the following categories:

What do we have already?

  • Creativity: evaluators have strong skills in lateral thinking and in engaging with new ways of presenting material.
  • Courage: speaking truth to power is a key skill for evaluators (and is the theme of the upcoming AEA conference); this extends to the ways in which we speak that truth.
  • Networks: our work puts us in contact with a diverse range of sectors and practitioners that we can work with to realise new deliverables.

What don’t we have already?

  • IT skills: participants identified that some of the proposed ideas would require upskilling in the IT platforms that underpin them.
  • Artistic skills: despite existing creativity, participants felt that there were opportunities to hone and finesse skills in graphic design and audio production to make these new deliverables as engaging and professional as possible.
  • New media skills: similarly, new modes of delivery such as podcasts require skills in voice acting that could benefit from greater investment.

What will we do ourselves and where will we get in help?

  • Create the “new age” evaluator: based on the ideas presented, a new type of evaluator would have to emerge that blended traditional methods with new ways of delivering and communicating ideas to stakeholders.
  • Need to partner: participants also felt that we would need to establish and maintain partnerships with specialist professionals, such as graphic designers, to ensure deliverables are both high quality and reflect the latest design trends.

Summary
In the space of a one-hour workshop, we were able to surface some great insights into how we engage with stakeholders and create some exciting new ideas for deliverables. I hope that people will be able to build on these and develop them into real deliverables that support evaluation communication.

Gerard is a manager with ARTD Consultants, specialising in evaluation strategy, dashboards and data visualisation. He also has a side career as an opera singer. 

September 2018
By Jade Maloney


Our world is transforming at a dizzying rate. What does this mean for evaluation and, by extension, evaluators?

That’s the question posed by the 2018 Australasian Evaluation Society conference in Launceston this week. So what do our keynotes think?

Kate McKegg – well known for her work advancing developmental evaluation practice – asks us to think deeply about what we really mean when we say transformation. What might the dimensions be? What exactly is it we are trying to transform: people, places, practices, structures, systems, technologies or something else? Does it have to be global? Or does what occurs at the national, regional, local, family or individual level count? Will we recognise transformation for what it is as it happens and be able to capture it? Can we really deliver transformation or does it have to be experienced?

McKegg’s co-presenter, Michael Quinn Patton (of utilisation-focused, developmental, and now principles-focused evaluation fame), tells us that evaluating transformation means transforming evaluation and lays down a challenge. Is evaluation going to be part of the problem (maintaining the status quo) or part of the solution (supporting and enabling transformation)?

The pair’s pre-conference workshop had everyone buzzing, both those who had read Principles-Focused Evaluation from cover to cover and those who were new to the concept. Participants learned the distinctions between rules – where the focus is on compliance and there is no need for interpretation – and principles – which provide guidance and direction, but need to be interpreted within specific contexts. They also learned about layering principles and that less is more in both number and description.

For Lee-Anne Molony, Managing Director at Clear Horizon, who chaired the session: a quote from William Easterly (The Tyranny of Experts) neatly summed up the value of taking a principles-based approach: ‘It is critical to get the principles of acting right before acting’. This plays out most in good ‘design’, but as evaluators our role is to support the process of ensuring those ‘right principles’ are clarified well enough that they are meaningful and relevant (provide sufficient guidance for decision makers); are able to be adhered to (at least in theory); and that the results they would produce if adhered to are clear (or can be determined).

For Keryn Hassall, one of the participants: principles-focused evaluation offers an opportunity for transforming evaluation practices, and for supporting more sophisticated program management. Principles are the best way to guide decisions in complex, adaptive contexts, and where there are no easy answers to how to solve problems. Programs where the journey is just as important as the destination can look like failure when evaluated using government evaluation guidelines that focus on reporting on specified outcomes. Learning about principles-focused evaluation helps evaluators deepen their role to help program managers deliver meaningful programs.

But principles-focused evaluation is only one of the ideas on the table. Penny Hagen is strengthening the relationship between co-design and evaluation, Karol Olejniczak is getting us to gamify, and Sharon Gollan and Kathleen Stacey are asking us to apply the lens of cultural accountability to ensure evaluation is culturally safe.

With all of this on offer, you’d be hard-pressed not to find a way to transform your practice by the end of the week.

Thanks to aes18 conference convenor Jess Dart for coordinating input from keynotes and Eunice Sotelo for curating the questions.

Jade is a partner at ARTD Consultants.