September 2018
By Ruby Fischer


Evaluations are like diets – you know they’re good for you, you always start off with good intentions and desperate optimism, but eventually you slip back into your old habits.

So how do you stick to them?

Here are 5 tips from AES NSW’s latest seminar on how NGOs can stick with evaluation in our do-more-with-less world.

1. Evaluation is a lifestyle, not a quick fix
Evaluation shouldn’t be an afterthought, and you shouldn’t do it just because everyone else is doing it. Evaluation needs to be embedded in your culture. That means everyone needs to be 100% committed.

And that means, as the evaluation champion, you need to tell the right story. Effective storytelling embeds the evaluation change in your organisation’s DNA. The word evaluation is often met with the ‘I just ate a sour lemon’ look. Positioning your evaluation as being about continuous improvement rather than judgement of the team can help ensure they take the little steps for long-lasting change, like encouraging feedback during service delivery.

It definitely means answering the question, ‘why does this matter?’

2. Find your purpose and motivation
So why does the evaluation matter? Is the evaluation for accountability? Is it for learning and development? Is it answering the questions – how much did we do, how well did we do it, what difference did we make?
Take some time to really get your head around your purpose. This makes it easier to keep your motivation high and helps you to design your evaluation by focusing you on what you need to know.


3. Be realistic
You are probably not going to become a spinach-smoothie-drinking, marathon-running yogi master overnight. Similarly, you are not going to solve Australia’s most wicked social issues, no matter how awesome your program may be. You need to have realistic evaluation expectations given your limited time and budget, and you need to manage funders’ expectations as well.

Ask yourself: what can we reasonably expect to achieve? What outcomes reflect those reasonable expectations, and how do we measure them? Your evaluation needs to be fit for purpose, for community and for resources. Remember, it is better to measure a few things well than to measure many poorly.

4. Phone a friend
Friends are super helpful. Use them. Leverage your network. Do you know experts? What online resources can you use? Hepatitis NSW, who presented a fantastic case study at the AES NSW seminar, checked in with an academic about their survey structure, which improved its readability and questions. Their response rate doubled from 10% to 20%.

Remember partnerships are a two-way street. You’ve got a lot of value to offer too. Don’t forget it!

5. Incentivise
A little incentive goes a long way. It’s no surprise that incentives increase response rates. Used tactically, they’re also really cost effective.
It’s always important to say thank you for the time your client took to provide feedback. The incentive could be a chance to win a voucher or a little present. Know your target group, and you will know an appropriate incentive.

What lessons have you learnt on your evaluation journey?

Ruby is a consultant at Nexus Management Consulting.

A practitioner’s take on developmental evaluation

September 2018
By Zazie Tolmer


Late last year an opportunity came up for a Clear Horizon consultant to work full time as an embedded evaluator in a Collective Impact initiative. I jumped at the opportunity and have been part of the backbone team for the last eight months.

Over that time, the way I approach my practice has changed considerably and I finally feel that I am getting a handle on what it feels like to be a Developmental Evaluator. I have learnt:

To go where the positive energy is – Rather than trying to situate evaluation through planning, I focus on the pockets of people where there is real current interest in drawing on evaluative thinking, for example, wrapping evaluation around a small prototype or developing a theory of change. This provides a place to work from that is immediately useful and creates demand for evaluation. I tried initially to drive the scoping and development of an M&E framework and plan for the initiative, but I did not get very far, fast! Collective Impact initiatives operate deliberately in the complex, and the rationalisation that is required in more standard M&E planning goes against the grain.

To do Russian Doll evaluation – When you don’t have a plan but you want to start, it can help to do a small discrete evaluation project first. For example, an exploratory qualitative study looking at partners’ perceptions and experience of the impacts of the work. Once you have one piece completed, you can start to spring off it into other evaluative projects. It can be really hard to reconcile all the different evaluation needs and tensions on a Collective Impact initiative. I have found that if you produce one evaluative output, the rest of the backbone and partners: a) understand concretely what evaluation can do; and b) are better able to articulate what they might need next from evaluation. In my mind, this approach to evaluation looks like a Russian doll set where you start with the smallest doll and keep building on it and wrapping more evaluation layers around it, until you have built up your evaluation to cover the full initiative.

To listen, keep listening and never stop listening – I have learnt to leave the consultant in me at the door and to be guided by the group rather than taking the lead in my area of ‘expertise’. The group have a much better understanding than I do of what is needed next. My job is really to listen out for where there might be an opportunity for an evaluative piece and to translate this into something that can be delivered within the time and resourcing constraints. I’m also learning to leave ‘me’ out of it. For example, I have stopped thinking of the evaluation pieces as my work and am emphasising quality (honestly the best that can be done with the available resources and time) over rigour (bullet-proof evidence).

Listening also means anticipating. The evaluation work I do for the initiative I am working on includes evaluation pieces that have been identified together and others that are bubbling along in the background ready to be shared when the timing is right. These pieces are more like hunches that sometimes work out and sometimes don’t. When they do, they create good momentum!

At this year’s AES conference in Launceston, the team and I will be presenting on the transformative power of Developmental Evaluation.

Zazie is a principal consultant at Clear Horizon.

August 2018
By AES Blog Working Group


Australasia has some excellent evaluators. More than that, we have an evaluation community full of ideas and a willingness to share. The AES has long provided a place for us to come together, at regional events and the annual conference, to develop our community together. Now we’re taking it online!

The new AES blog will be a space for AES members – both new and experienced – to share their perspectives, reflecting on their theory and practice, and learnings from AES events. Even if you haven’t written a blog before, we welcome your contribution. The AES blog team can help you work out how to share your idea with our community. We are also open to reposting blogs, with original credit.

Seeking your best ideas

In the coming months, the AES blog will be the place to find interesting content that speaks to our Australian evaluation context.

  • Has a recent AES event got you reflecting on your own practice conducting evaluations?
  • Do you have a key learning from your experience commissioning evaluations?
  • Does a recent development in evaluation get you excited?

If the answer is yes, the AES blog team would like to hear from you!

What are we looking for?

To help you put together an article for the AES blog, we’ve developed some guidelines. The full guidelines – word count, publishing etc. – can be found here.

  1. No jargon or bureaucratese. Keep the language simple and straightforward. If you are using technical terms, please explain them for people who may not be familiar with them.
  2. Keep it interesting and conversational. Try to make your post as practical as possible – even when discussing theory, think about how the reader might use the concepts in practice. This might include links to tools or resources, as well as practice tips and tricks.
  3. Write in your own voice. Where appropriate, use first person voice as if you are talking to an acquaintance or friend sitting next to you. Speak for yourself, as an expert (because you are sharing insights and reflections through your writing) and as a learner (because there is always room to grow and improve).
  4. Well-structured. The reader should know what to expect from the blog: the title and introduction should create an expectation of what is to follow. Include a clear conclusion, such as a call to action for comments/sharing, or food for thought. Use section headings to break up the text when it makes sense. Subheadings make it easier for the reader to skim the blog to the section they would most like to read. Keep each section as contained and coherent as possible so the reader does not feel lost for not having read what came before.
  5. Original content. Most of the text must be of your own writing. If quoting another source, give a link or citation, and limit the quote to a few sentences.
  6. Self-edit. Write the content in a program with a good text editor to eliminate most writing errors. While the AES blog team vets all articles, we ask that you provide us as clean a copy as possible.

We’ll be officially launching the blog at the AES Conference in September. This is a great time to share your thoughts on our conference theme – Transformations. Check out our first blog post, from ARTD Director Jade Maloney, reflecting on her experiences at AES conferences gone by.

If you have a blog idea or an article you’d like to share, please send it through to us. Members of the blog team will be attending the AES Conference in Launceston – come and say hi if you want to find out more, or chat through an idea.

AES Blog Working Group
Eunice Sotelo
Jade Maloney
Joanna Farmer
Liz Smith
Matt Healy

August 2018
By Jade Maloney


There’s still a chill in the air, but the days are starting to lengthen, and you can sense the promise of spring. Must nearly be time for another AES conference.

I remember my first one: Canberra, 2009. I was still ‘green’, 18 months after falling out of publishing and into a role in evaluation. Andrew Leigh had just come out with his proposal for a hierarchy of evidence to inform Australian policy making, and there was an afternoon panel, including Leigh himself, to discuss it. The proposal in itself was nothing new (it drew on models from medical research in the US and social policy in the UK), but it added fuel to the still-burning embers of the fire that was (is?) the methodology wars.

I didn’t yet know enough to unpack the arguments for and against the primacy of Randomised Controlled Trials (RCTs), but I couldn’t help thinking that there must be a more nuanced question and answer than the heated audience commentary suggested.

Fast forward to Canberra 2017. I’ve now got my own views about the kinds of questions that RCTs can and cannot answer, and I nearly choke on my chicken as economist Nicholas Gruen says RCTs are not the panacea they’re made out to be. We need to ask the right questions at the right times and choose appropriate methods to answer them. I agree.

The purpose of this trip down memory lane is not to ignite a methodological debate. It’s to say that the conference is a window into what matters to evaluation at the time. It seems to me that the discussion has also matured and that meaningful conversations at the conference – including candid discussions about failures – have an important role in developing the discipline. (I suppose discipline is the right term since we’re not technically a ‘profession’ – although the pathways to professionalisation project will lead us there).

So I’m excited about how the interactive sessions planned for Launceston 2018 will help us reflect on how we are transforming evaluation and what comes next.

As someone who spans the roles of design, implementation and evaluation, I’ll be jumping into sessions on design and discussions about what the rise of co-design means for the evaluator role and required competencies.

As someone who works in disability policy and has lived experience of mental health issues, I’m also keen to find out about how others are implementing participatory and empowerment approaches in practice, like Joanna Farmer’s session on the challenges of managing values and power in evaluating with a lived experience.

And as conference co-convener for Sydney 2019, I’m keen to hear from other AES members what they love about the conference and what else might be possible. Because, after all, while keynotes and panellists can strike the match, only the participants can carry the torch through conversations.

Jade is a partner at ARTD Consultants.