

September 2019
by Gerard Atkinson

There are less than two weeks to go until the International Evaluation Conference #aes19SYD, taking place on 15–19 September here in Sydney. For those presenting at the conference, it’s time to polish up your presentation skills and get your materials ready. In keeping with the theme of “unboxing evaluation”, we’ve unboxed the art of developing effective and engaging presentations and put together an easy guide you can use not just at conferences but in any presentation.

Here are our top tips:

Prepare your thinking.

Preparing for a presentation is entirely different to rehearsal and takes place before you even start making your slides. Effective preparation is about identifying what you want to talk about, doing your research, and building a framework for delivering your presentation. Rehearsal, though important, comes much later.

Create an objective statement.

An objective statement is a single sentence that frames your rationale and scope. A good objective statement clearly articulates the given time period, what the presentation will achieve, and what you want your audience to do as a result. For example, when I teach my one-hour presentations seminar, my objective statement is: “Over the next 50 minutes, I want to cover the key elements of creating and delivering a compelling presentation to inspire you to go make your own.” It’s not setting out to change the world, but it sets out the scope of what to create.

Do your research.

This goes beyond just topic research (which is crucial, of course), and includes understanding such things as:

  • the level of knowledge of the audience
  • the number of people
  • the level of seniority
  • the venue size and layout
  • available technology
  • the time of day.

Develop a presentation framework.

Start building the structure of your presentation as a list or (my favourite) a storyboard. There are many different frameworks and formats out there, and you’ll see quite a few at AES 2019, including the rapid-fire Ignite presentations. My personal favourite framework adapts traditional storytelling techniques by following a format of “Open-Body-Close”. It’s a simple framework but can be adapted to presentations of nearly all formats and lengths. Here’s how it works:

  • The opening section is designed to engage an audience and preview the talk.
  • The body section, which can be repeated for each key point of your presentation, states the point, supports it, then links it logically with the next key point.
  • The closing section reinforces engagement, reviews the topics covered, and gives a call to action to the audience.

By using this framework you can tell many different kinds of stories, for example chronologically or starting broadly and delving deeper into a topic as you go along. You can adapt it to fit the narrative you want to tell.

Kill the deck (if you can).

This is always a controversial tip, but there’s a good reason for it. Slides distract the audience. If you can remove a slide from a presentation, do it. If you need to use slides, remember that they should always be used to underline the point you are trying to make. Photos and (well-designed) charts do this best, followed by diagrams. If you need to use bullet points or text, keep them short and avoid reading them out verbatim.

Use speaker notes.

Scripts can be useful in laying out in exact terms what you want to say in a presentation, but they make it hard to be engaging. Actors train for years to be able to take a script and make it look natural. Instead use speaker notes, which are a shorthand version of a script that outline in abbreviated form the content of each key point. They act as prompts for what you want to say, but allow you to deliver a more natural style of speaking.

Develop useful handouts.

Your slides will not convey the full content of your talk on their own (see above). This means that they shouldn’t be used as handouts. Instead, a handout should be a practical resource that turns the key points of your talk into tools that the audience can use afterwards. Most importantly, distribute handouts after the talk to avoid having distractions during the presentation. 

Rehearse, rehearse, rehearse.

Rehearsal is about replicating your presentation environment as closely as possible. Find a room, set it up as you will on the day, and rehearse the talk as if it were the real thing. It’ll help you get a feel for your timing and flow, and boost your confidence. If you can get some sympathetic co-workers to sit in and give feedback, even better. Repeat this process. The more times you can run through the presentation ahead of time, the more comfortable you will be with the material.

Present with credibility.

Credibility is a combination of confidence, character, and charisma. Confidence comes from research and rehearsal. Character and charisma come from the way you deliver your presentation. Some quick ways to build credibility are to use open body language to engage with the audience, and to vary the way you use your voice (tone, volume, tempo). Both go a long way in engaging the audience and carrying them along with you throughout your presentation.

Handle Q&As at the end.

Question and answer sessions are seen by some as the trickiest part of a presentation because they can be hard to predict. Prior research can help you anticipate and prepare for some of the questions you might be asked. It’s best to keep questions until the end of the presentation, as this helps keep things on track. Let the audience know at the start of the presentation so that they can note down their questions for later. To handle Q&As, here’s a five-step process:

  • Ask: Take a step forward while asking the audience if they have any questions.
  • Select: Choose questioners by gesturing to them with an open palm (rather than pointing) or by using their name, if you know it.
  • Listen: Give questioners your full attention and eye contact, and actively listen to their question.
  • Repeat: Pause, then repeat or rephrase the question to the whole group to show you understand what they’re asking. This also helps when there’s no roving microphone.
  • Answer: Make eye contact with members of the audience while answering.

An alternative (and compatible) approach to managing Q&As effectively comes from Eve Tuck.

  • Ask a neutral person to facilitate the Q&A.
  • At the end of the presentation, invite the audience to talk to each other for a few minutes and share the questions they are thinking of asking.
  • Have the facilitator encourage the audience to consider whether those questions are useful to the broader discussion and best asked during the session, or in another context (e.g. the coffee break).

See Eve’s Twitter feed for the full list of suggestions for Q&As.

I hope these tips help you prepare, construct and deliver your own presentations with confidence. Looking forward to seeing a lot of great presentations at #aes19SYD.


Gerard is a Manager at ARTD Consultants.


Fellows: Anne Markiewicz

June 2019
by Anthea Rutter

Anne and I have been colleagues and friends for many years. I have long been an admirer of her ability as a practical evaluator and I refer to Anne and Ian’s book frequently for my own practice. I caught up with Anne at the AES International Conference in Launceston, Tasmania, where we found time to share some lunch and some great conversation.

I am always intrigued by the many routes which professionals follow to bring them into the field of evaluation. Although I have known Anne for many years, I was unsure of how she came into the field.

When I was an academic in social work, we were starting to pick up contracts in evaluation. I liked project work, so always put my hand up. I liked the organising aspect, as well as adding new knowledge and improving, rather than service delivery. Eventually, I began sub-contracting for some small evaluation companies before starting my business.

As all of us are aware, we are influenced by a number of elements which eventually shape what and who we are. I asked Anne about the influences which have helped define her practice.

Being part of the evaluation community of practice has been an important part of my career. Being a lone evaluator would be tough without opportunities to engage with other evaluators through conferences and AES Network meetings. You need to interact with others to see different people’s take on things and test your ideas. This is essential for informed practice. Being in a relationship with another evaluator also has its benefits for testing your ideas out.

Anne’s last comment made me reflect that I don’t know of a partnership where both parties are evaluators. Over the course of a career, all of us have faced challenges, including AES Fellows. We all have an opportunity to learn from those experiences. I was keen to find out from Anne about the challenges she has faced during her career.

A major challenge in evaluation is managing the political aspect, negotiating the report and findings. People often challenge the findings, so you need all the skills you can muster in terms of negotiation to advocate for and justify your conclusions.

Also, some clients do not fully understand what an evaluation can and can’t do. Expectations that were not part of the original Terms of Reference, and that fall outside the evaluation’s scope and focus, often come up when the draft report is delivered.

After being in a career for over 20 years, there are bound to be a few changes along the way. I was interested to find out from Anne what had changed and where the field of evaluation is today.

When I was a newbie, I wasn’t sure that the field of evaluation was a good fit for me. At that time, over 20 years ago, the field of evaluation had a more quantitative and positivist focus, with a strong public sector performance and financial management leaning. I was not sure whether I fitted. But evaluation has evolved so much since then. There has been a huge paradigm shift from the quant/qual debates to the evolution of a range of evaluation-specific methods: Realist, Most Significant Change, participatory, developmental etc. Evaluators are also more diverse in their professional backgrounds and methodological leanings. The field is so much richer. It will be interesting to see what happens in the next 10 years!

Anne and I discussed the fact that, with all of the changes in the profession over the years, the skills and competencies evaluators need may have changed too. Anne was very specific in her answer and her ideas covered the whole gamut of an evaluation.

Evaluators need foundation skills, including formulating theories of change and evaluation questions, identifying mixed methods data sources and matching data to questions, data collection, analysis and reporting. Evaluators also need foundation skills in how to build organisational systems for both monitoring and evaluation functions. More and more evaluators are being asked to build capacity within organisations for the above competencies and this may be a new skillset for evaluators.

Evaluators also need facilitation skills and conflict resolution skills. And everyone needs an understanding of ethics – you need to know when ethical standards are being upheld and when they are being compromised.

What do you wish you had known before starting out as an evaluator?

I wish I had known how to better predict and manage my workload as an evaluation consultant… particularly to enjoy the lean periods and just relax into them. The peaks and troughs always evened out over the long term but in retrospect I feel that the troughs were not well used to relax and recuperate from the demanding peaks.

The final question to Anne was a bit of crystal ball gazing. I asked what she saw in AES’s future. Again, in true Anne fashion, she was very clear about where and what the AES should be doing.

I think we should attempt to develop a closer role with government bodies. There are a number of opportunities for building a stronger link between the AES and both levels of government. An AES sub-committee once undertook an exercise mapping government bodies across the states and territories and nationally. Though a big job, it points to an opportunity for the AES to provide an avenue for evaluation capacity building in government.

In terms of training, the AES training program could also cater better to advanced evaluators by identifying specialist areas that could be developed and delivered by experienced trainers in those areas. The AES also needs to make sure its partnerships are robust and, not least, consult its members regularly.


Through her company, Anne Markiewicz and Associates, Anne assists organisations to establish Monitoring and Evaluation (M&E) systems and regularly conducts workshops on developing M&E frameworks for AES members. In 2016, she co-authored Developing Monitoring and Evaluation Frameworks with Ian Patrick.


Fellows: John Owen

May 2019
by Anthea Rutter

Interviewing John was a pleasure for me. He was my teacher at the Centre for Program Evaluation back in the 90s. Indeed, John and his colleagues have taught a large number of the members of the AES over the years. John and I have also worked on projects together. Even though we have a shared history, I was curious to find out what brought him into the field of evaluation in the first place.

I was at the Australian Council of Educational Research in 1975. I was asked to be the officer in charge of a national evaluation of a science education curriculum in secondary schools. However, I had no knowledge of evaluation, so in order to do the project, I started reading books about evaluation and how I could translate some of these ideas into a framework, so I could undertake the study. My background was in science, in particular physics, and in my last couple of years at the Melbourne College of Advanced Education, I got interested in science education and taught courses for aspiring teachers.

John’s knowledge in the field of evaluation is vast and I was keen to find out what he regarded as his main area of interest.

Theories of evaluation. I was concerned that traditional evaluation did not make much of a difference to the planning and delivery of interventions, so I became interested in the utilisation of evaluation findings, and how policies and programs can be improved in terms of utilisation. More generally, I was engaged in research and knowledge utilisation, including factors that affected take-up of this kind of knowledge.

I felt strongly that someone who has been in a field for a number of years must have had challenges to his practice, and John was no exception.

I guess even though I had done this project at ACER, I wasn’t aware of the breadth of the thinking about evaluation that was emerging in the 1980s. So when I came back to Melbourne College of Advanced Education, the Principal asked us, around that time, to set up the Centre for Program Evaluation with Gerry Ellsworth. The challenge for us was to decide how a Centre would work and how we could incorporate all of the emerging theories into a coherent package for a teaching course. We knew that we had an opportunity to offer something that was not offered in Australasia. There was a lot of new learning going on. The challenge was to put it together and make sense of it for people about to work in the field of evaluation. The other challenge was political. The course was not just for teachers. We tried to protect ourselves in the institution. My challenge was to actually see myself as a teacher of evaluation, which was different to being one in science education.

When I first came back from ACER, they asked me to be the coordinator of a graduate course in curriculum. I had worked in innovation and change. I managed to integrate my work in innovation and change into the evaluation program.

Apart from challenges, a career as broad as John’s must have had a number of highlights and so I asked John about the major ones.

I guess we are talking about post PhD – for me getting a doctorate was a highlight. After that, I guess, when I became Director of the Centre [for Program Evaluation at The University of Melbourne]. One highlight was working to develop a distance education course in evaluation. Once again, this was something new: we had a new Centre that had been operating for a while, which was innovative, and now we were thinking of an innovative offering in teaching. Actually, I learnt a lot about the evaluation field in Australia from the AES. Getting that course up and running was a highlight. Another highlight was being made a Fellow of the Society – very thrilled, an acknowledgement – and the consequent involvement in the Society. Really enjoyed the Society which has been effective in promoting and maintaining the profession.

For most of us, we are not lone operators and there are a number of influences – individuals as well as different evaluation or research models that have influenced our practice. I wanted to find out from John what he considers were the major influences that really helped to define his practice.

The notion of evaluation for decision making underlies my practice. In terms of people and models, I do remember coming across Dan Stufflebeam’s CIPP model. If I was looking at a conceptual influence it may be that one. He had the notion of context, input, process and product. Another one is that I used to get concerned about evaluating a complex problem, then suddenly I came across program logic ideas. It was not heavily used until the 90s. Possibly Joe Wholey had referred to it. Now I understood how to unpack the intervention. Possibly having a scientific background helped me to understand the logic approach.

For those of us who have been in the evaluation field for a long time we are aware that changes occur in practice and I was keen to get John to reflect on them.

When I first started reading about evaluation it was about measuring impact, using the rigid methods of determining impacts implied by quantitative methods of evaluation. Since then the field has expanded and been influenced by thinkers such as Michael Patton, the emphasis on utilisation by Marvin Alkin, and the expansion of the field – so in a sense evaluative inquiry could be used to influence programs as they were being delivered rather than as an assessment at the end. My book [Program Evaluation: Forms and Approaches] summarises my view of these things.

The notion of skills and competencies is very important to John’s role as a teacher, so I wanted to find out what he saw as the main skills or competencies the evaluator of today needs to keep pace with emerging trends in practice.

First of all, they need skills and competencies. There seems to be a view among certain organisations that anyone can do evaluation. There are two sets of skills. One relates to epistemology, which gets to what knowledge is needed, and the different models that could be used. The second set is methodological: at least an understanding of data management, and being able to be creative in designing methodologies that help you compile the information from which you can make findings and conclusions. Evaluators also need the attitude that they can refine a methodology if they need to. I am sure there are methodologies associated with technology which need to be learned, but there is a basic underlying rationale.

During John’s time as an evaluator and teacher I felt that he must have reflected on some of the social and political issues which we as a profession ought to be thinking about and trying to resolve.

Perhaps if I was going to put some energy into something, it would be this: I think that evaluation in government is still at a basic level. I think in the helping professions, education and social interventions, we have a pretty good track record, but I don’t think that we have tackled the big problem of how government departments deal with evaluation, i.e. feedback loops around collecting data, producing findings and using them. Perhaps it is because these organisations are larger and more complex, but I don’t see much accountability. There is general research showing that little effort goes into using evaluation in designing and delivering programs. So this is a major issue for the AES, and for leaders in government, to be looking at.

The AES has been an important part of John’s life and so I felt he would have views on how the Society can best position itself to still be relevant in the future. I was not wrong!

I have a strong view about this. To position ourselves, we should make more links with societies which have cognate interests, so we can have more influence on the work of evaluation and applied research – by talking to people like auditors and market researchers. There are groups out there who could have an indirect influence on the Society. We need to work on links, partnerships and policies which acknowledge that evaluation could be an umbrella approach useful to other professional organisations. I have long held that position. You hear about the huge conferences which auditors have. We should be in there talking to these people about the fact that our knowledge could benefit them.

What do you wish you had known before starting out as an evaluator?

I would have benefitted from a graduate course/subject in sociology, particularly one that dealt with the sociology of knowledge. Unfortunately such courses were not readily available at university, and even if they had been, they would not have readily meshed with my science studies. Perhaps a course on the philosophy of science would have also been good, so that I could have come to grips with giants like Popper and Russell.


John Owen has 40 years of experience in evaluation and is currently a private consultant. His major roles in evaluation include serving as Director of the Centre for Program Evaluation, teaching, and presenting workshops.


Strategic planning

June 2019
by John Stoney, AES President

I was a policy and program wonk before I became an evaluation tragic. One of the things that excited me about remaining on the AES Board and taking up the President role was that one of the first tasks would be developing our next set of Strategic Priorities for the period 2019–2022.

The AES is in good shape. This I think reflects a number of dynamics, one of which is previous Boards – together with the broader AES leadership teams and members – developing a set of Priorities that have served us well. They have provided a sound foundation and framework to guide all the work that has occurred in the last three years. This has enabled the AES to prosper. (I would also suggest that the other dynamics are hard work, commitment, vision and a generosity of time from the AES office team, successive Conference Convenors and Organising Committees, the various Board Committees, Board members and our general membership).

By a number of metrics, things look good organisationally. As I type, we are on the cusp of having 1,000 members. Our finances are very healthy. We have a highly successful (and expanding) workshop program and have had a succession of successful (both financially and reputationally) conferences. As an organisation, in the last year, we have launched our first Reconciliation Action Plan, engaged in some key Australian Government review processes and looked to practically implement recommendations from the Pathways to Professionalisation Report, as well as exploring various ways to enhance member value.

Having said that, inevitably the world around us is dynamic, providing both potential challenges and opportunities. The task for those of us with stewardship responsibilities for the AES (and, by association, the broader profession) is to navigate our way through and be adaptive during the next 3–5 years in a way that ensures the AES remains in a good place, in good shape and – most importantly – continues to meet the needs of its members.

For that reason, it's important to hear from members and obtain your feedback on what you think the next Strategic Priorities should encompass in terms of goals and priorities under each of the proposed domains.

To that end, a Consultation Paper has been developed and sent to members. As you'll see, the Board and its various Committees sense that the next set of Priorities are an evolution of the current ones. In some instances, the proposed goals and priorities remain consistent; in others, they have changed to reflect developments under the AES Strategic Priorities 2016–2019, plus the current and emerging context.

I would encourage as many AES members as possible to provide their feedback, and to also consider what they may like to actively contribute to in terms of the key roles, activities and potential projects that will be undertaken to implement our next generation of priorities.

If you’re an AES member and haven't received (or have misplaced) your email invite, please don't hesitate to contact us.

Look forward to hearing from you,

John Stoney
An internal Australian Government evaluation practitioner by day and the AES President at all other times, John has been on the AES Board since 2016. He has had responsibility for the Influence domain supported by the membership of the Advocacy and Alliances Committee. When not at work or undertaking AES duties, he takes any opportunity he can to discuss matters of evaluation theory, practice and use with fellow travellers (both evaluative and non-evaluative) over a cup of coffee (and maybe a donut).

May 2019
by Eunice Sotelo & Victoria Pilbeam


Many evaluators are familiar with realist evaluation, and have come across the realist question “what works for whom, in what circumstances and how?” The book Doing Realist Research (2018) offers a deep dive into key concepts, with insights and examples from specialists in the field.

We caught up with Brad Astbury from ARTD Consultants about his book chapter. Before diving in, we quickly toured his industrial chic coworking office on Melbourne’s Collins Street – brick walls, lounges and endless fresh coffee. As we sipped on our fruit water, he began his story with a language lesson.

Doing Realist Research (2018) was originally intended to be a Festschrift, German for ‘celebration text’, in honour of recently retired Ray Pawson of Realistic Evaluation fame. Although the book is titled ‘research’, many of the essays in the book, like Brad’s, are in fact about evaluation.

The book’s remit is the practice and how-to of realist evaluation and research. Our conversation went wide and deep, from the business of evaluation to the nature of reality.

His first take-home message was to be your own person when applying evaluation ideas.

You don’t go about evaluation like you bought something from Ikea – with a set of rules saying screw here, screw there. I understand why people struggle because there’s a deep philosophy that underpins the realist approach. Evaluators are often time poor, and they’re looking for practical stuff. At least in the book there are some examples, and it’s a good accompaniment to the realist book [by Pawson and Tilley, 1997].

Naturally, we segued into what makes realist evaluation realist.

The signature argument is about context-mechanism-outcome, the logic of inquiry, and the way of thinking informed by philosophy and the realist school of thought. That philosophy is an approach to causal explanation that pulls apart a program and goes beyond a simple description of how the bits and pieces come together, which is what most logic models provide.

[The realist lens] focuses on generative mechanisms that bring about the outcome, and looks beneath the empirical, observable realm, like pulling apart a watch. I like the approach because as a kid I used to like pulling things apart.

Don’t forget realist evaluation is only 25 years old; there’s room for development and innovation. I get annoyed when people apply it in a prescriptive way – it’s not what Ray or Nick would want. [They would probably say] here’s a set of intellectual resources to support your evaluation and research; go forth and innovate as long as it adheres to principles.

Brad admits it’s not appropriate in every evaluation to go that deep or use an explanatory lens. True to form (Brad previously taught an impact evaluation course at the Centre for Program Evaluation), he cheekily countered the argument that realist evaluation isn’t evaluation but a form of social science research.

Some argue you don’t need to understand how programs work. You just need to make a judgment about whether it’s good or bad, or from an experimental perspective, whether it has produced effects, not how those effects are produced. Evaluation is a broad church; it’s open for debate.

If it’s how and why, it’s realist. If it’s ‘whether’ then that’s less explicitly realist because it’s not asking how effects were produced but whether there were effects and if you can safely attribute those to the program in a classic experimental way. Because of the approach’s flexibility and broadness, you can apply it in different aspects of evaluation.

Brad mused on his book chapter title, “Making claims using realist methods”. He preferred the original, “Will it work elsewhere? Social programming in open systems”. So did we.

The chapter is about external validity, and realist evaluation is good at answering the question of whether you can get something that worked in one place with certain people to work elsewhere.

Like any theory-driven approach, realist evaluation can answer multiple questions. Most evaluations start with program logics, so we can do a better job at program logics if we insert a realist lens to help support evaluation planning, and develop monitoring and evaluation plans, the whole kit and caboodle.

Where realist approaches don’t work well is estimating the magnitude of the effect of a program.

As well as a broad overview of where realist evaluation fits in evaluation practice, Brad provided us with the following snappy tips for doing realist research:

Don’t get stuck on Context-Mechanism-Outcome (CMO)

When learning about realist evaluation, people can get stuck on having a context, mechanism and outcome. The danger of the CMO is using it like a generic program logic template (activities, outputs and outcomes), and listing Cs, Ms and Os, which encourages linear thinking. We need to thoughtfully consider how they’re overlaid to produce an explanation of how outcomes emerge.

A way to overcome this is through ‘bracketing’: set aside the CMO framework, build a program logic and elaborate on the model by introducing mechanisms and context.

Integrate prior research into program theory

Most program theory is built only on the understanding of stakeholders and the experience of the evaluator. It means we’re not being critical of our own and stakeholders’ assumptions about how something works.

A way to overcome this is through ‘abstraction’: through research, we can bring in wider understandings of what family of interventions is involved and use this information to strengthen the program theory. We need to get away from ‘this is a very special and unique program’ to ‘what’s this a case of? Are we looking at incentives? Regulation? Learning?’ As part of this work, realist evaluation requires evaluators to spend a bit more time in the library than other approaches.

Focus on key causal links

Brad looks to the causal links with the greatest uncertainty, or where there are the biggest opportunities to leverage improvements to the program.

When you look at a realist program theory, you can’t explore every causal link. It’s important to focus your fire, and target evaluation and resources on things that matter most.

When asked for his advice to people interested in realist evaluation, Brad’s response was classic:

Just read the book ‘Realistic Evaluation’ from front to back, multiple times.

As a parting tip, he reminded us to aspire to be a theoretical agnostic. He feels labels can constrain how we do the work.

To a kid with a hammer, every problem can seem like a nail. Sometimes, people just go to the theory and methods that they know best. Rather than just sticking to one approach or looking for a neat theoretical label, just do a good evaluation that is informed by the theory that makes sense for the particular context.


Brad Astbury is a Director at ARTD Consultants. He specialises in evaluation design, methodology, mixed methods and impact evaluation.

Eunice Sotelo, research analyst, and Victoria Pilbeam, consultant, work at Clear Horizon Consulting. They also volunteer as mentors for the Asylum Seeker Resource Centre’s Lived Experience Evaluators Project (LEEP).