
Fellows: Anne Markiewicz

June 2019
by Anthea Rutter

Anne and I have been colleagues and friends for many years. I have long been an admirer of her ability as a practical evaluator and I refer to Anne and Ian’s book frequently for my own practice. I caught up with Anne at the AES International Conference in Launceston, Tasmania, where we found time to share some lunch and some great conversation.

I am always intrigued by the many routes which professionals follow to bring them into the field of evaluation. Although I have known Anne for many years, I was unsure of how she came into the field.

When I was an academic in social work, we were starting to pick up contracts in evaluation. I liked project work, so always put my hand up. I liked the organising aspect, as well as adding new knowledge and improving, rather than service delivery. Eventually, I began sub-contracting for some small evaluation companies before starting my business.

As all of us are aware, we are influenced by a number of elements which eventually shape what and who we are. I asked Anne about the influences which have helped define her practice.

Being part of the evaluation community of practice has been an important part of my career. Being a lone evaluator would be tough without opportunities to engage with other evaluators through conferences and AES Network meetings. You need to interact with others to see different people’s take on things and test your ideas. This is essential for informed practice. Being in a relationship with another evaluator also has its benefits for testing your ideas out.

Anne’s last comment made me reflect that I don’t know a partnership where both parties are evaluators. Over the course of a career, all of us have faced challenges, including AES Fellows. We all have an opportunity to learn from those experiences. I was keen to find out from Anne about the challenges she has faced during her career.

A major challenge in evaluation is managing the political aspect, negotiating the report and findings. People often challenge the findings, so you need all the skills you can muster in terms of negotiation to advocate for and justify your conclusions.

Also, some clients do not fully understand what an evaluation can and can’t do. Expectations that were not part of the original Terms of Reference, and that sit outside the evaluation’s scope and focus, often come up when the draft report is delivered.

After more than 20 years in a career, there are bound to be a few changes along the way. I was interested to find out from Anne what had changed and how the field of evaluation looks today.

When I was a newbie, I wasn’t sure that the field of evaluation was a good fit for me. At that time, over 20 years ago, the field of evaluation had a more quantitative and positivist focus, with a strong public sector performance and financial management leaning. I was not sure whether I fitted. But evaluation has evolved so much since then. There has been a huge paradigm shift from the quant/qual debates to the evolution of a range of evaluation-specific methods: Realist, Most Significant Change, participatory, developmental etc. Evaluators are also more diverse in their professional backgrounds and methodological leanings. The field is so much richer. It will be interesting to see what happens in the next 10 years!

Anne and I discussed the fact that, with all of the changes in the profession over the years, the skills and competencies evaluators need may have shifted too. Anne was very specific in her answer, and her ideas covered the whole gamut of an evaluation.

Evaluators need foundation skills, including formulating theories of change and evaluation questions, identifying mixed methods data sources and matching data to questions, data collection, analysis and reporting. Evaluators also need foundation skills in how to build organisational systems for both monitoring and evaluation functions. More and more evaluators are being asked to build capacity within organisations for the above competencies and this may be a new skillset for evaluators.

Evaluators also need facilitation skills and conflict resolution skills. And everyone needs an understanding of ethics – you need to know when ethical standards are being upheld and when they are being compromised.

What do you wish you had known before starting out as an evaluator?

I wish I had known how to better predict and manage my workload as an evaluation consultant… particularly to enjoy the lean periods and just relax into them. The peaks and troughs always evened out over the long term but in retrospect I feel that the troughs were not well used to relax and recuperate from the demanding peaks.

The final question to Anne was a bit of crystal ball gazing. I asked what she saw in AES’s future. Again, in true Anne fashion, she was very clear about where and what the AES should be doing.

I think we should attempt to develop a closer role with government bodies. There are a number of opportunities for building a stronger link between the AES and both levels of government. An AES sub-committee once undertook an exercise mapping government bodies nationally and across the states and territories. Though a big job, it points to an opportunity for the AES to provide an avenue for evaluation capacity building in government.

In terms of training, the AES training program could also cater better to advanced evaluators by identifying specialist areas that could be developed and delivered by experienced trainers in those areas. The AES also needs to make sure its partnerships are robust and, not least, consult its members regularly.


Through her company, Anne Markiewicz and Associates, Anne assists organisations to establish Monitoring and Evaluation (M&E) systems and regularly conducts workshops on developing M&E frameworks for AES members. In 2016, she co-authored Developing Monitoring and Evaluation Frameworks with Ian Patrick.


Strategic planning

June 2019
by John Stoney, AES President

I was a policy and program wonk before I became an evaluation tragic. One of the things that excited me about remaining on the AES Board and taking up the President role was that one of the first tasks would be developing our next set of Strategic Priorities for the period 2019–2022.

The AES is in good shape. This I think reflects a number of dynamics, one of which is previous Boards – together with the broader AES leadership teams and members – developing a set of Priorities that have served us well. They have provided a sound foundation and framework to guide all the work that has occurred in the last three years. This has enabled the AES to prosper. (I would also suggest that the other dynamics are hard work, commitment, vision and a generosity of time from the AES office team, successive Conference Convenors and Organising Committees, the various Board Committees, Board members and our general membership).

By a number of metrics, things look good organisationally. As I type, we are on the cusp of having 1,000 members. Our finances are very healthy. We have a highly successful (and expanding) workshop program and have had a succession of successful (both financially and reputationally) conferences. As an organisation, in the last year, we have launched our first Reconciliation Action Plan, engaged in some key Australian Government review processes and looked to practically implement recommendations from the Pathways to Professionalisation Report, as well as exploring various ways to enhance member value.

Having said that, inevitably the world around us is dynamic, providing both potential challenges and opportunities. The task for those of us with stewardship responsibilities for the AES (and, by association, the broader profession) is to navigate our way through and be adaptive during the next 3–5 years in a way that ensures the AES remains in a good place, in good shape and – most importantly – continues to meet the needs of its members.

For that reason, it's important to hear from members and obtain your feedback on what you think the next Strategic Priorities should encompass in terms of goals and priorities under each of the proposed domains.

To that end, a Consultation Paper has been developed and sent to members. As you'll see, the Board and its various Committees sense that the next set of Priorities are an evolution of the current ones. In some instances, the proposed goals and priorities remain consistent; in others, they have changed to reflect developments under the AES Strategic Priorities 2016–2019, plus the current and emerging context.

I would encourage as many AES members as possible to provide their feedback, and to also consider what they may like to actively contribute to in terms of the key roles, activities and potential projects that will be undertaken to implement our next generation of priorities.

If you’re an AES member and haven't received (or have misplaced) your email invite, please don't hesitate to contact us.

Look forward to hearing from you,

John Stoney
An internal Australian Government evaluation practitioner by day and the AES President at all other times, John has been on the AES Board since 2016. He has had responsibility for the Influence domain supported by the membership of the Advocacy and Alliances Committee. When not at work or undertaking AES duties, he takes any opportunity he can to discuss matters of evaluation theory, practice and use with fellow travellers (both evaluative and non-evaluative) over a cup of coffee (and maybe a donut).

May 2019
by Eunice Sotelo & Victoria Pilbeam


Many evaluators are familiar with realist evaluation, and have come across the realist question “what works for whom, in what circumstances and how?” The book Doing Realist Research (2018) offers a deep dive into key concepts, with insights and examples from specialists in the field.

We caught up with Brad Astbury from ARTD Consultants about his book chapter. Before diving in, we quickly toured his industrial chic coworking office on Melbourne’s Collins Street – brick walls, lounges and endless fresh coffee. As we sipped on our fruit water, he began his story with a language lesson.

Doing Realist Research (2018) was originally intended to be a Festschrift, German for ‘celebration text’, in honour of recently retired Ray Pawson of Realistic Evaluation fame. Although the book is titled ‘research’, many of the essays in the book, like Brad’s, are in fact about evaluation.

The book’s remit is the practice and how-to of realist evaluation and research. Our conversation went wide and deep, from the business of evaluation to the nature of reality.

His first take-home message was to be your own person when applying evaluation ideas.

You don’t go about evaluation like you bought something from Ikea – with a set of rules saying screw here, screw there. I understand why people struggle because there’s a deep philosophy that underpins the realist approach. Evaluators are often time poor, and they’re looking for practical stuff. At least in the book there are some examples, and it’s a good accompaniment to the realist book [by Pawson and Tilley, 1997].

Naturally, we segued into what makes realist evaluation realist.

The signature argument is about context-mechanism-outcome, the logic of inquiry, and the way of thinking informed by philosophy and the realist school of thought. That philosophy is an approach to a causal explanation that pulls apart a program and goes beyond a simple description of how bits and pieces come together, which is what most logic models provide.

[The realist lens] focuses on generative mechanisms that bring about the outcome, and looks beneath the empirical, observable realm, like pulling apart a watch. I like the approach because as a kid I used to like pulling things apart.

Don’t forget realist evaluation is only 25 years old; there’s room for development and innovation. I get annoyed when people apply it in a prescriptive way – it’s not what Ray or Nick would want. [They would probably say] here’s a set of intellectual resources to support your evaluation and research; go forth and innovate as long as it adheres to principles.

Brad admits it’s not appropriate in every evaluation to go that deep or use an explanatory lens. True to form (Brad previously taught an impact evaluation course at the Centre for Program Evaluation), he cheekily countered the argument that realist evaluation isn’t evaluation but a form of social science research.

Some argue you don’t need to understand how programs work. You just need to make a judgment about whether it’s good or bad, or from an experimental perspective, whether it has produced effects, not how those effects are produced. Evaluation is a broad church; it’s open for debate.

If it’s how and why, it’s realist. If it’s ‘whether’ then that’s less explicitly realist because it’s not asking how effects were produced but whether there were effects and if you can safely attribute those to the program in a classic experimental way. Because of the approach’s flexibility and broadness, you can apply it in different aspects of evaluation.

Brad mused on his book chapter title, “Making claims using realist methods”. He preferred the original, “Will it work elsewhere? Social programming in open systems”. So did we.

The chapter is about external validity, and realist evaluation is good at answering the question of whether you can get something that worked in some place with certain people to work elsewhere.

Like any theory-driven approach, realist evaluation can answer multiple questions. Most evaluations start with program logics, so we can do a better job at program logics if we insert a realist lens to help support evaluation planning, and develop monitoring and evaluation plans, the whole kit and caboodle.

Where realist approaches don’t work well is in estimating the magnitude of a program’s effect.

As well as a broad overview of where realist evaluation fits in evaluation practice, Brad provided us with the following snappy tips for doing realist research:

Don’t get stuck on Context-Mechanism-Outcome (CMO)

When learning about realist evaluation, people can get stuck on having a context, mechanism and outcome. The danger of the CMO is using it like a generic program logic template (activities, outputs and outcomes), and listing Cs, Ms and Os, which encourages linear thinking. We need to thoughtfully consider how they’re overlaid to produce an explanation of how outcomes emerge.

A way to overcome this is through ‘bracketing’: set aside the CMO framework, build a program logic and elaborate on the model by introducing mechanisms and context.

Integrate prior research into program theory

Most program theory is built only on the understanding of stakeholders and the experience of the evaluator. It means we’re not being critical of our own and stakeholders’ assumptions about how something works.

A way to overcome this is through ‘abstraction’: through research, we can bring in wider understandings of what family of interventions is involved and use this information to strengthen the program theory. We need to get away from ‘this is a very special and unique program’ to ‘what’s this a case of? Are we looking at incentives? Regulation? Learning?’ As part of this work, realist evaluation requires evaluators to spend a bit more time in the library than other approaches.

Focus on key causal links

Brad looks to the causal links with greatest uncertainty or where there are the biggest opportunities for leveraging what could help improve the program.

When you look at a realist program theory, you can’t explore every causal link. It’s important to focus your fire, and target evaluation and resources on things that matter most.

When asked for his advice to people interested in realist evaluation, Brad’s response was classic:

Just read the book ‘Realistic Evaluation’ from front to back, multiple times.

As a parting tip, he reminded us to aspire to be a theoretical agnostic. He feels labels can constrain how we do the work.

To a kid with a hammer, every problem can seem like a nail. Sometimes, people just go to the theory and methods that they know best. Rather than just sticking to one approach or looking for a neat theoretical label, just do a good evaluation that is informed by the theory that makes sense for the particular context.


Brad Astbury is a Director at ARTD Consultants. He specialises in evaluation design, methodology, mixed methods and impact evaluation.

Eunice Sotelo, research analyst, and Victoria Pilbeam, consultant, work at Clear Horizon Consulting. They also volunteer as mentors for the Asylum Seeker Resource Centre’s Lived Experience Evaluators Project (LEEP).



Fellows: John Owen

May 2019
by Anthea Rutter

Interviewing John was a pleasure for me. He was my teacher at the Centre for Program Evaluation back in the 90s. Indeed, John and his colleagues have taught a large number of the members of the AES over the years. John and I have also worked on projects together. Even though we have a shared history, I was curious to find out what brought him into the field of evaluation in the first place.

I was at the Australian Council for Educational Research in 1975. I was asked to be the officer in charge of a national evaluation of a science education curriculum in secondary schools. However, I had no knowledge of evaluation, so in order to do the project, I started reading books about evaluation and how I could translate some of these ideas into a framework, so I could undertake the study. My background was in science, in particular physics, and in my last couple of years at the Melbourne College of Advanced Education, I got interested in science education and taught courses for aspiring teachers.

John’s knowledge in the field of evaluation is vast and I was keen to find out what he regarded as his main area of interest.

Theories of evaluation. I was concerned that traditional evaluation did not make much of a difference to the planning and delivery of interventions, so I became interested in the utilisation of evaluation findings, and how policies and programs can be improved in terms of utilisation. More generally, I was engaged in research and knowledge utilisation, including factors that affected take-up of this kind of knowledge.

I felt strongly that someone who has been in a field for a number of years must have had challenges to his practice, and John was no exception.

I guess even though I had done this project at ACER, I wasn’t aware of the breadth of the thinking about evaluation that was emerging in the 1980s. So when I came back to Melbourne College of Advanced Education, the Principal asked us – Gerry Ellsworth and me – to set up the Centre for Program Evaluation around that time. The challenge for us was to decide how a Centre would work and how we could incorporate all of the emerging theories into a coherent package for a teaching course. We knew that we had an opportunity to offer something that was not offered in Australasia. There was a lot of new learning going on. The challenge was to put it together and make sense of it for people about to work in the field of evaluation. The other challenge was political. The course was not just for teachers. We tried to protect ourselves in the institution. My challenge was to see myself as a teacher of evaluation, which was different to being one in science education.

When I first came back from ACER, they asked me to be the coordinator of a graduate course in curriculum. I had worked in innovation and change. I managed to integrate my work in innovation and change into the evaluation program.

Apart from challenges, a career as broad as John’s must have had a number of highlights and so I asked John about the major ones.

I guess we are talking about post PhD – for me getting a doctorate was a highlight. After that, I guess, when I became Director of the Centre [for Program Evaluation at The University of Melbourne]. One highlight was working to develop a distance education course in evaluation. Once again, this was something new: we had a new Centre that had been operating for a while, which was innovative, and now we were thinking of an innovative offering in teaching. Actually, I learnt a lot about the evaluation field in Australia from the AES. Getting that course up and running was a highlight. Another highlight was being made a Fellow of the Society – very thrilled, an acknowledgement – and the consequent involvement in the Society. Really enjoyed the Society which has been effective in promoting and maintaining the profession.

For most of us, we are not lone operators and there are a number of influences – individuals as well as different evaluation or research models that have influenced our practice. I wanted to find out from John what he considers were the major influences that really helped to define his practice.

The notion of evaluation for decision making underlies my practice. In terms of people and models, I do remember coming across Dan Stufflebeam’s CIPP model. If I was looking at a conceptual influence it may be that one. He had the notion of context, input, process and product. Another one is that I used to get concerned about evaluating a complex problem, then suddenly I came across program logic ideas. It was not heavily used until the 90s. Possibly Joe Wholey had referred to it. Now I understood how to unpack the intervention. Possibly having a scientific background helped me to understand the logic approach.

For those of us who have been in the evaluation field for a long time we are aware that changes occur in practice and I was keen to get John to reflect on them.

When I first started reading about evaluation it was about measuring impact, using the rigid methods of determining impacts implied by quantitative approaches to evaluation. Since then the field has expanded, influenced by thinkers such as Michael Patton and by the emphasis on utilisation from Marvin Alkin – so in a sense evaluative inquiry could be used to influence programs as they were being delivered, rather than as an assessment at the end. My book [Program Evaluation: Forms and Approaches] summarises my view of these things.

The notion of skills and competencies are very important to John’s role as a teacher, so I wanted to find out what he saw as the main skills or competencies the evaluator of today needs to keep pace with emerging trends in practice.

First of all, they need skills and competencies. There seems to be a view among certain organisations that anyone can do evaluation. There are two sets of skills: one related to epistemology, which gets to what knowledge is needed and the different models which could be used. The second set are methodological skills – at least an understanding of data management, and being able to be creative in designing methodologies which help you compile the information from which you can make findings and conclusions. Evaluators also need the attitude that they can refine methodology when needed. I am sure there are methodologies associated with technology which need to be learned. But there is a basic underlying rationale.

During John’s time as an evaluator and teacher I felt that he must have reflected on some of the social and political issues which we as a profession ought to be thinking about and trying to resolve.

Perhaps if I was going to put some energy into something: I think that evaluation in government is still at a basic level. I think in the helping professions, education and social interventions, we have a pretty good track record, but I don’t think that we have tackled the big problem of how government departments deal with evaluation, i.e. feedback loops around collecting data, producing findings and using them. Perhaps these organisations are larger and more complex, but I don’t see much accountability. There is general research showing that little effort goes into using evaluation in designing and delivering programs. So this is a major issue for the AES, and for leaders in government, to be looking at.

The AES has been an important part of John’s life and so I felt he would have views on how the Society can best position itself to still be relevant in the future. I was not wrong!

I have a strong view about this. To position ourselves, we should make more links with societies which have cognate interests, so we can influence the work of evaluation and applied research more. By talking to people like auditors and market researchers. There are groups out there who could have an indirect influence on the Society. We need some work to make links and partnerships, and policies which acknowledge that evaluation could be an umbrella approach useful to other professional organisations. I have long held that position. You hear about the huge conferences which auditors have. We should be in there talking to these people and talking to them about the fact that our knowledge could benefit them.

What do you wish you had known before starting out as an evaluator?

I would have benefitted from a graduate course/subject in sociology, particularly one that dealt with the sociology of knowledge. Unfortunately such courses were not readily available at university, and even if they had been, they would not have readily meshed with my science studies. Perhaps a course on the philosophy of science would have also been good, so that I could have come to grips with giants like Popper and Russell.


John Owen has 40 years of experience in evaluation and is currently a private consultant. His major roles in evaluation include Director of the Centre for Program Evaluation, teacher, and workshop presenter.


Fellows: Patricia Rogers

April 2019
by Anthea Rutter

While Patricia Rogers is one of the most recently named Fellows, many of you will be familiar with her work from AES conference keynotes, BetterEvaluation and her report on Pathways to advance professionalisation within the context of the AES (with Greet Peersman). She is Professor of Public Sector Evaluation, RMIT University, and an award-winning evaluator, well known around the world.

While she is one busy lady, I managed to catch her at the last conference in Launceston, which was apt because conferences were a key thread in her reflections. 

Patricia talked to me about her interest in different models of evaluation and her passion for looking for ideas that would make a difference.  One of those ideas was Michael Scriven’s goal-free evaluation.

In 1986 I was working in Sydney but about to move back to Melbourne to work in local government.  The AES conference was on in Sydney – I hadn’t heard about it, but I went to meet up with some people after Michael Scriven’s opening keynote and saw people in uproar over the notion that you could and perhaps should evaluate without reference to the stated goals.

That was my first introduction to the AES.  The following year I went to the AES conference in Canberra and was introduced to program logic, as being done by Brian Lenne, Sue Funnell and others in NSW.

Patricia went on to write a book on program logic with Sue Funnell, Purposeful Program Logic.

What are your main interests now?

I’m interested in all sorts of ways that evaluation, and evaluative thinking, can be more useful. I guess I’m particularly interested in how to develop, represent and use theories of change. At first, I was interested in theories of change in terms of developing performance indicators, but then I learned how useful they could be for helping people have a coherent vision of what they are trying to do, for bringing together diverse evidence, and for supporting adapting learning from successful pilots to other contexts.

Another area of ongoing interest for me is how to address complexity.  Again this stemmed from an AES conference – I can see a common thread here!  I was puzzling over how to make sense of an evaluation involving hundreds of projects with diverse types of evidence.  Michael Patton gave a keynote drawing on Brenda Zimmerman’s ideas about simple, complicated and complex. It gave me a way to distinguish between different types of challenges in evaluation and different strategies to address them.

Who has influenced your wide-ranging interests?

The AES has been pivotal. I was reading down the list of fellows, and I really felt pleased that I know them all and I have worked with a lot of them and respect them. I have learnt from conference sessions, had helpful feedback, plus mentoring and peer support – that sort of generosity and friendship. In terms of individual people, Jerry Winston’s insights into evaluation have been amazing. I met him 30 years ago when I started teaching at Phillip Institute (now RMIT University). His approach around systems, seeing evaluation as a scientific enquiry, and using adult learning principles for evaluation and evaluation capacity building were way ahead of everyone else. In many ways I’m still catching up to and understanding his thinking.

In terms of practice and theory Michael Patton has also resonated with me. I value his consistent focus on making sure evaluation is useful, especially through actively engaging intended users in evaluation processes, his use of both quantitative and qualitative data, and his incorporation of new ideas from management and public administration into his practice.

Evaluation has changed a lot over the 30 years Patricia has been in the field. What has she noticed most?

One of the problems is that while the field of evaluation has changed, the common understanding of evaluation has not always kept up. So there continue to be misconceptions, such as that evaluation is only about measuring whether goals have been achieved. There is also a perception of evaluation as low-quality research – i.e. if you can’t make it as a serious researcher, you do low-quality research which is called evaluation. Whereas good quality evaluation, which needs to be useful and valid and ethical and feasible all at the same time, is enormously challenging and also potentially enormously socially useful. Not just in terms of producing findings, but in supporting the thinking and working together to identify what is of value and how it might be improved.

I agree, evaluation is never an easy endeavour, so it is reassuring to hear from others that it doesn’t always go smoothly, but you can recover. What has been one of your biggest challenges?

One of my biggest disappointments was when I was working with a government department which had commissioned a big evaluation of a new initiative, but the people who had asked for it had moved on. The department was still obliged to do the evaluation to meet Treasury requirements, but they were not at all keen on it. I asked to meet with the senior management and tried to use an appreciative inquiry approach to identify what a good evaluation might be for them, and how we might achieve that.  I asked them, ‘Tell me about an evaluation which has really worked for you.’  There was a long silence, and then they said they couldn’t think of any.  It’s hard when people have had such a negative experience of evaluation that they can’t imagine how it could be useful. In hindsight, I should have called the issue – and either got commitment to the evaluation or walked away.

Patricia and I talked about the skills and competencies evaluators need today so that they can keep up with emerging trends. This led us to Ikigai – finding the intersection of what you like doing, what you are good at, what the world needs and what you can get paid for.

[Ikigai diagram]

Getting this right, we agreed, would help you jump out of bed in the morning.

What do evaluators need today?

Evaluators all need to keep learning about new methods, new processes, and new technologies. It’s not just about summarising surveys and interviews any more. We need to take the leap into digital technology and crowd sourced data and learning. For most people, it would be useful to learn more about how to use new technologies including digital tools to gather, analyse, report data and support evaluative thinking.

Another important skill and competency is managing uncertainty for yourself and your team as situations and requirements will change over time.

Most of us also need to learn more about culturally responsive evaluation and inclusive practice, including being more aware of power dynamics and having strategies for them.

We need to be engaged in ongoing learning about new ways of doing evaluation and new ideas about how to make it work better. That’s why my work is now focused on the BetterEvaluation project, an international collaboration which creates and shares information on ways to do evaluation better.

Beyond continuous learning what do evaluators need to be focused on over the next decade? What issues do they need to resolve?

It’s about democracy. It’s about being inclusive and respectful, supporting deliberative democracy and asking what that means in practice. We should be ensuring that the voices of those less powerful, for example Indigenous groups and migrants, are heard, and that those groups are part of the decision-making.

The last question I asked Patricia, and can I say that this was mainly answered on the run – literally as I walked down with her to the session she was chairing! – was about the AES’s role in the change process.

The AES has an important role to play in improving professional practice in evaluation (including by evaluators and those managing evaluations). My colleague Greet Peersman and I have just produced a report for the AES on Pathways to Professionalisation, which includes discussing the positioning of the AES. We need more people to know about the AES, and we need more people engaged in AES events like the conference and more AES people engaged in public discussions.

How can we make the conference more accessible – for example, through more subsidised places or lower-cost options? How can the AES be more involved in discussions about public policy and service delivery?


Patricia Rogers is Professor of Public Sector Evaluation at RMIT, and currently on three years’ leave to lead the evidence and evaluation hub at the Australia and New Zealand School of Government.