Evolving the evaluation deliverable: Ideas from #aes18LST workshop participants
By Gerard Atkinson
Have you ever felt like you have put in a lot of work on an evaluation, only to find that what you have delivered hasn’t had the reach or engagement you expected? I’m not sure I have met an evaluator who hasn’t felt this way at least once in their career.
It was because of this that late last month I led a session at the 2018 Australasian Evaluation Society conference in Launceston, titled “Evolving the evaluation deliverable”.
The aim of the session was to brainstorm ideas about more engaging ways of delivering evaluation findings. We had about 50 people attend, representing a mix of government, consultant and NGO evaluators. Over the course of the hour, we used interactive exercises to come up with fresh and exciting ideas for driving engagement.
A quick history of the deliverable
Since the earliest days of evaluation as a discipline, deliverables have been evolving. We started with the classic report, which then gave birth to a whole range of associated documents, from executive summaries to separate technical appendices to brochures and flyers. With the advent of visual presentation software, reports evolved to become highly visual, with slide decks and infographics becoming the primary deliverable. More recently, the desire to surface insights from databases has led to the creation of dashboards which enable rapid (and in some cases real-time) analysis of information from evaluation activities. The latest developments in this area even extend to intelligent systems for translating data into narrative insights, quite literally graphs that describe themselves.
Defining our scope
To keep the workshop focused, we used existing theoretical frameworks around deliverables in evaluation to guide our thinking. To begin with, we focused on instrumental use of evaluations (i.e. to drive decision making and change in the program being evaluated). We then restricted ourselves to deliverables that are distributive in nature, rather than presentations or directly working with stakeholders. Finally, we acknowledged the many systemic factors that impact on evaluation use, and focused on the goal of increasing self-directed engagement by users.
The ultimate outcome of this process was a guiding principle for our next generation deliverable – to maximise self-directed engagement with evaluation outcomes.
So what did we come up with?
Over the course of the session, we engaged in three creative exercises, each focusing on a particular aspect of the topic. Participants worked in small groups to discuss prompts and put ideas down on paper.
What might the next deliverable look like?
The first creative exercise had participants draw what they thought the next deliverable might look like. This question produced the widest variety of responses and showed the depth of creativity of participants. One group even developed a prototype of a next-generation “chatterbox” deliverable as an example (more on that below). There was a consistent theme of moving beyond purely visual and text-based forms of presentation to incorporate verbal and tactile modes of engagement.
Some of the ideas included:
- Podcasts including rich stories based on qualitative data, with the ability to splice chapters and information according to the needs and interests of the listener.
- The “Chatterbox” (pictured) – one of our participants, Rebecca King from Oxfam, put forward the idea of using a chatterbox toy that allows the user to play with the results as a game and select the topics that interest them.
- Following the childhood theme, building blocks featuring key findings and picture books were also put forward as ideas.
- At the other end of the spectrum, high-tech solutions were proposed. These included using virtual reality environments to present findings, incorporating QR codes into deliverables to enable users to dive more deeply into the material, and even a “virtual assistant” à la Siri or Alexa that can guide users through findings in a dialogue-driven fashion.
There was a lot of synergy in this part of the session with Karol Olejniczak’s keynote on “serious games” as a tool for facilitating evaluation activities, and it was good to see how that presentation inspired participants to incorporate that style of thinking and design in a broader context.
How can we integrate it into our existing work?
The second question posed in the workshop addressed how we might align these new deliverables with our existing set of deliverables. Participants began the exercise with one person putting forward an idea, which the other members of the group then built on. The responses to the exercise fell into three broad themes.
- Data: We need both the tools and the skills to generate the right types of data to support these deliverables.
- Dialogue: One of the most interesting insights of the session was that even though these deliverables are distributive in nature, we should design new deliverables that enable a two-way conversation between evaluator and audience.
- Driving buy-in: Work will need to be done to get user buy-in, whether that is through advertising the benefits of these new channels of delivery, working directly with end users in the design process to create a sense of ownership, or through ongoing discussion through the delivery process to optimise the deliverable.
What skills are required to design, develop and deliver it?
The final round was the “lightning” round, where participants came up with responses to three questions as fast as they could. For each question, the responses fell into the following categories:
What do we have already?
- Creativity: evaluators have strong skills in lateral thinking and in engaging with new ways of presenting material.
- Courage: speaking truth to power is a key skill for evaluators (and is the theme of the upcoming AEA conference); this extends to the ways in which we speak that truth.
- Networks: our work puts us in contact with a diverse range of sectors and practitioners that we can work with to realise new deliverables.
What don’t we have already?
- IT skills: participants identified that some of the proposed ideas would require upskilling in the IT platforms that underpin them.
- Artistic skills: despite existing creativity, participants felt that there were opportunities to hone and finesse skills in graphic design and audio production to make these new deliverables as engaging and professional as possible.
- New media skills: similarly, new modes of delivery such as podcasts require skills in voice acting that could benefit from greater investment.
What will we do ourselves and where will we get in help?
- Create the “new age” evaluator: based on the ideas presented, a new type of evaluator would need to emerge, one who blends traditional methods with new ways of delivering and communicating ideas to stakeholders.
- Need to partner: participants also felt that we would need to establish and maintain partnerships with specialist professionals, such as graphic designers, to ensure deliverables are both high quality and reflect the latest design trends.
Summary
In the space of a one-hour workshop, we were able to surface some great insights into how we engage with stakeholders and create some exciting new ideas for deliverables. I hope that people will be able to build on these and develop them into real deliverables that support evaluation communication.
Gerard is a manager with ARTD Consultants, specialising in evaluation strategy, dashboards and data visualisation. He also has a side career as an opera singer.