Tuesday, 23 October 2018
Category: Practice

Evolving the evaluation deliverable: Ideas from #aes18LST workshop participants

By Gerard Atkinson

Have you ever put a lot of work into an evaluation, only to find that what you delivered hasn’t had the reach or engagement you expected? I’m not sure I have met an evaluator who hasn’t felt this way at least once in their career.

It was because of this that late last month I led a session at the 2018 Australasian Evaluation Society conference in Launceston, titled “Evolving the evaluation deliverable”.

The aim of the session was to brainstorm ideas about more engaging ways of delivering evaluation findings. We had about 50 people attend, representing a mix of government, consultant and NGO evaluators. Over the course of the hour, we used interactive exercises to come up with fresh and exciting ideas for driving engagement.

A quick history of the deliverable

Since the earliest days of evaluation as a discipline, deliverables have been evolving. We started with the classic report, which then gave birth to a whole range of associated documents, from executive summaries to separate technical appendices to brochures and flyers. With the advent of visual presentation software, reports evolved to become highly visual, with slide decks and infographics becoming the primary deliverable. More recently, the desire to surface insights from databases has led to the creation of dashboards which enable rapid (and in some cases real-time) analysis of information from evaluation activities. The latest developments in this area even extend to intelligent systems for translating data into narrative insights, quite literally graphs that describe themselves.

Defining our scope

To keep the workshop focused, we used existing theoretical frameworks around deliverables in evaluation to guide our thinking. To begin with, we focused on instrumental use of evaluations (i.e. to drive decision making and change in the program being evaluated). We then restricted ourselves to deliverables that are distributive in nature, rather than presentations or direct work with stakeholders. Finally, we acknowledged the many systemic factors that affect evaluation use, and focused on the goal of increasing self-directed engagement by users.

The ultimate outcome of this process was a guiding principle for our next generation deliverable: to maximise self-directed engagement with evaluation outcomes.

So what did we come up with?

Over the course of the session, we engaged in three creative exercises, each focusing on a particular aspect of the topic. Participants worked in small groups to discuss prompts and put ideas down on paper.

What might the next deliverable look like?

The first creative exercise had participants draw what they thought the next deliverable might look like. This question produced the widest variety of responses and showed the depth of creativity of participants. One group even developed a prototype of a next-generation “chatterbox” deliverable as an example (more on that below). There was a consistent theme of moving beyond purely visual and text-based forms of presentation to incorporate verbal and tactile modes of engagement.

Some of the ideas included:

This part of the session had strong synergy with Karol Olejniczak’s keynote on “serious games” as a tool for facilitating evaluation activities, and it was good to see how that presentation inspired participants to apply that style of thinking and design in a broader context.

How can we integrate it into our existing work?

The second question posed in the workshop addressed how we might align these new deliverables with our existing suite of deliverables. To start the exercise, one person in each group put forward an idea, and the other members of the group then built on it. The responses to the exercise fell into three broad themes.

What skills are required to design, develop and deliver it?

The final round was the “lightning” round, where participants came up with responses to three questions as fast as they could. For each of the three questions, participants put forward responses that fell into the following categories:

What do we have already?

What don’t we have already?

What will we do ourselves and where will we get in help?

Summary

In the space of a one-hour workshop, we were able to surface some great insights into how we engage with stakeholders and to generate some exciting new ideas for deliverables. I hope that people will be able to build on these and develop them into real deliverables that support evaluation communication.

Gerard is a manager with ARTD Consultants, specialising in evaluation strategy, dashboards and data visualisation. He also has a side career as an opera singer.