Fellows: Patricia Rogers

April 2019
by Anthea Rutter

While Patricia Rogers is one of the most recently named Fellows, many of you will be familiar with her work from AES conference keynotes, BetterEvaluation, and her report with Greet Peersman on Pathways to advance professionalisation within the context of the AES. She is Professor of Public Sector Evaluation at RMIT University and an award-winning evaluator, well known around the world.

While she is one busy lady, I managed to catch her at the last conference in Launceston, which was apt because conferences were a key thread in her reflections. 

Patricia talked to me about her interest in different models of evaluation and her passion for looking for ideas that would make a difference.  One of those ideas was Michael Scriven’s goal-free evaluation.

In 1986 I was working in Sydney but about to move back to Melbourne to work in local government.  The AES conference was on in Sydney – I hadn’t heard about it, but I went to meet up with some people after Michael Scriven’s opening keynote and saw people in uproar over the notion that you could and perhaps should evaluate without reference to the stated goals.

That was my first introduction to the AES.  The following year I went to the AES conference in Canberra and was introduced to program logic, as being done by Brian Lenne, Sue Funnell and others in NSW.

Patricia went on to write a book with Sue Funnell on program theory and logic models, Purposeful Program Theory.

What are your main interests now?

I’m interested in all sorts of ways that evaluation, and evaluative thinking, can be more useful. I guess I’m particularly interested in how to develop, represent and use theories of change. At first, I was interested in theories of change in terms of developing performance indicators, but then I learned how useful they could be for helping people have a coherent vision of what they are trying to do, for bringing together diverse evidence, and for supporting the adaptation of learning from successful pilots to other contexts.

Another area of ongoing interest for me is how to address complexity.  Again this stemmed from an AES conference – I can see a common thread here!  I was puzzling over how to make sense of an evaluation involving hundreds of projects with diverse types of evidence.  Michael Patton gave a keynote drawing on Brenda Zimmerman’s ideas about simple, complicated and complex. It gave me a way to distinguish between different types of challenges in evaluation and different strategies to address them.

Who has influenced your wide-ranging interests?

The AES has been pivotal. I was reading down the list of Fellows, and I really felt pleased that I know them all; I have worked with a lot of them and respect them. I have learnt from conference sessions, had helpful feedback, and received mentoring and peer support – that sort of generosity and friendship. In terms of individual people, Jerry Winston’s insights into evaluation have been amazing. I met him 30 years ago when I started teaching at Phillip Institute (now RMIT University). His approach around systems, seeing evaluation as a scientific enquiry, and using adult learning principles for evaluation and evaluation capacity building were way ahead of everyone else. In many ways I’m still catching up to and understanding his thinking.

In terms of practice and theory Michael Patton has also resonated with me. I value his consistent focus on making sure evaluation is useful, especially through actively engaging intended users in evaluation processes, his use of both quantitative and qualitative data, and his incorporation of new ideas from management and public administration into his practice.

Evaluation has changed a lot over the 30 years Patricia has been in the field. What has she noticed most?

One of the problems is that while the field of evaluation has changed, the common understanding of evaluation has not always kept up. So misconceptions persist, such as the idea that evaluation is only about measuring whether goals have been achieved. There is also a perception of evaluation as low-quality research – that if you can’t make it as a serious researcher, you do low-quality research and call it evaluation. Whereas good-quality evaluation, which needs to be useful and valid and ethical and feasible all at the same time, is enormously challenging and also potentially enormously socially useful – not just in terms of producing findings, but in supporting the thinking and working together to identify what is of value and how it might be improved.

I agree, evaluation is never an easy endeavour, so it is reassuring to hear from others that it doesn’t always go smoothly, but you can recover. What has been one of your biggest challenges?

One of my biggest disappointments was when I was working with a government department which had commissioned a big evaluation of a new initiative, but the people who had asked for it had moved on. The department was still obliged to do the evaluation to meet Treasury requirements, but they were not at all keen on it. I asked to meet with the senior management and tried to use an appreciative inquiry approach to identify what a good evaluation might be for them, and how we might achieve that. I asked them, ‘Tell me about an evaluation which has really worked for you.’ There was a long silence, and then they said they couldn’t think of any. It’s hard when people have had such a negative experience of evaluation that they can’t imagine how it could be useful. In hindsight, I should have called out the issue – and either got commitment to the evaluation or walked away.

Patricia and I talked about the skills and competencies evaluators need today so that they can keep up with emerging trends. This led us to Ikigai – finding the intersection of what you like doing, what you are good at, what the world needs and what you can get paid for.

[Image: Ikigai diagram]

Getting this right, we agreed, would help you jump out of bed in the morning.

What do evaluators need today?

Evaluators all need to keep learning about new methods, new processes, and new technologies. It’s not just about summarising surveys and interviews any more. We need to take the leap into digital technology and crowdsourced data and learning. For most people, it would be useful to learn more about how to use new technologies, including digital tools, to gather, analyse and report data and to support evaluative thinking.

Another important skill and competency is managing uncertainty for yourself and your team, as situations and requirements will change over time.

Most of us also need to learn more about culturally responsive evaluation and inclusive practice, including being more aware of power dynamics and having strategies to address them.

We need to be engaged in ongoing learning about new ways of doing evaluation and new ideas about how to make it work better. That’s why my work is now focused on the BetterEvaluation project, an international collaboration which creates and shares information on ways to do evaluation better.

Beyond continuous learning, what do evaluators need to focus on over the next decade? What issues do they need to resolve?

It’s about democracy. It’s about being inclusive and respectful, and about supporting deliberative democracy and thinking through what that means. We should be ensuring that the voices of those less powerful, for example Indigenous groups and migrants, are heard, and that they are part of the decision-making.

The last question I asked Patricia, and can I say that this was mainly answered on the run – literally as I walked down with her to the session she was chairing! – was about the AES’s role in the change process.

The AES has an important role to play in improving professional practice in evaluation (including by evaluators and those managing evaluations). My colleague Greet Peersman and I have just produced a report for the AES on Pathways to Professionalisation, which includes a discussion of the positioning of the AES. We need more people to know about the AES, more people engaged in AES events like the conference, and more AES people engaged in public discussions.

How can we make the conference more accessible – for example, through more subsidised places or lower-cost options? How can the AES be more involved in discussions about public policy and service delivery?

-------------------------------------------------------------

Patricia Rogers is Professor of public sector evaluation at RMIT, and currently on three years’ leave to lead the evidence and evaluation hub at the Australia and New Zealand School of Government.


 

Fellows: Anona Armstrong

March 2019
by Anthea Rutter

Although a number of AES members have founded consultancies to channel their evaluation work, it is another thing to think about – and actually achieve – the founding of a professional society. This is exactly what Emeritus Professor Anona Armstrong did. Through her company Evaluation Training & Services, the fledgling society was born in the early 80s. Not only did Anona found the AES, she had the honour and distinction of having a piece of music written for her and performed at the AES International Conference in 1992.

Unfortunately, I wasn’t able to interview Anona in person, but I am very aware of her achievements – Anona and I go back a long way! Though I know some of the earlier history of the society, it was important to get an accurate record, so the first question on our string of emails was:

Forming a society is no mean feat…how did it happen? I guess I sort of anticipated her answer (something along the lines of ‘well…Rome wasn’t built in a day’).

Like all good things the society was formed by slow steps. It started with the first ‘National Evaluation Conference’ which was held in 1982 by the Department of Psychology and the Program in Public Policy Studies at the University of Melbourne.

I compiled a Directory of Australian Evaluators in 1983, and 99 people responded, identifying 52 different categories in which they were conducting evaluation. This Directory became the foundation for a mailing list, and the building blocks for the development of the Australasian Evaluation Society (AES).

Anona also acknowledged that the growth in the membership of the AES was due “in no small way to formal teaching but also to evaluation training provided by AES members”.

Like many evaluators, I was curious about how others began their careers, particularly in the ’80s when evaluation was still a fledgling field in Australia.

So how did you get into evaluation?

In the 80s most people had never heard of evaluation. I was regularly asked ‘What do you mean?’ I remember receiving phone calls from people asking if I could do an evaluation of their property. At the time, evaluation was a novelty. Many government program managers thought that setting performance objectives for government programs could not be done.

Anona was referring to her foray into impact assessment, when she realised that there were no accepted measures. So she looked into the ‘new’ research on social indicators, which gave her the “foundation for measuring impact as well as performance management in the public sector.” It’s no less than what I would expect from someone who founded the AES.

As an evaluator I was also curious about Anona’s thoughts on some of the early writers who influenced the field of evaluation in Australia. I was not disappointed – I got a valuable history lesson on her take on the ascent of evaluation in this country.

Evaluation in Australia owes its origins to the US. Madaus, Stufflebeam and Scriven (1983) traced evaluation in the US back to the educational reforms of the 19th Century, but modern evaluation is usually associated with the 1960s when the US Government’s General Accounting Office (GAO) sponsored evaluation of the large-scale curriculum development projects introduced under President Lyndon Johnson’s ‘War on Poverty’ (Madaus et al., 1983).

By the 70s, new conceptual frameworks were introduced that were specific to evaluation (e.g. goal-free evaluation, Scriven, 1974; naturalistic evaluation, Guba and Lincoln, 1981; needs assessment, Stufflebeam, 1977; systematic evaluation, Rossi, Freeman and Wright, 1979; utilisation-focused and qualitative evaluation, Patton, 1978, 1980).

As a practitioner who has been in the evaluation game for a long time, I was very keen to get Anona’s thoughts on the changes she had seen in the field of evaluation.

A lot has changed! The discipline of evaluation in Australia in those early years was owned by academia and generally regarded as a form of research. The main purposes for evaluation in those days were to improve government programs or to monitor performance.

Evaluation has expanded: from a focus on government programs to all areas of endeavour; from a small field to a core competency of professional roles; and from academia to consulting. There’s also an increased focus on internal organisation evaluation and performance measurement.

Another area in which I was keen to get Anona’s input was the skills that evaluators need now.

The basic evaluation skills have not changed very much, but new skills are required for the use of technology, implementing agile organisations and data mining.

In her speech to the Australasian Evaluation Society International Conference in 2015 on the future of evaluation, Anona expanded on this theme.

Governments are coping with much more complex international and local environments. They face new challenges: globalisation of trade, global conflict, climate change, the gap between rich and poor, rapid advances in information and communications technology, and generational shifts in values. Locally, Australian governments are experiencing a combination of slower economic growth, an aging population, shrinking PAYE taxes, and growth in global companies that can manipulate their finances to minimise their tax. Society, too, is experiencing massive changes evident in global migration, and the influence of social media. At the same time, a more informed electorate is emerging, composed of a diversity of communities with many different values, but all with rising expectations.

Anona then moved on from the original question about what skills are needed to the role of evaluators in the present day.

In this environment, what is the role of evaluators? Well, we still have a role within organisations as designers of programs, determining needs and evaluating performance.

I then took this question further and asked about the areas in which evaluators and the field of evaluation could make a difference.

Anona suggested five directions in which evaluators could have a major impact:

  • Addressing social issues

Now is the time to extend the focus of our activities and use our skills to address some of the larger social issues. Evaluators have a role in establishing the value to society, and to organisations, of actions that count as corporate social responsibility and sustainability. While sustainability has different meanings in different contexts, there is a need for measures that allow comparability at least between entities engaged in the same industry or service.

  • Flexibility of methods

Evaluators have traditionally used social science methods to determine the worth of programs. The debate over qualitative versus quantitative methodologies is surely over. We need the different data provided by both methodologies. The new debate is probably about social science methods versus financial methods. Value for money is becoming a standard component of evaluation. Whether in health, education or social services, it is time that evaluators addressed the reality that financial decisions drive governments as much as political ones, and answering financial questions requires financial analysis.

  • New technology

Evaluators need to take advantage of new data mining and other IT systems that are powerful tools for analysis and communication. This means adding financial analysis, modelling and data mining to the evaluator’s competency frameworks.

  • Growing importance of governance

Across the world, there is a greater emphasis on best practice governance. Corporate governance is defined as the overarching framework of rules, relationships and processes within and by which authority is exercised and controlled in order to achieve objectives. Standards for governance are now issued by governments and professional associations. In the higher education system, universities must meet the governance standards set by TEQSA, called the Threshold Standards.

  • Growing need for evaluation

Trade agreements with New Zealand, several of the 10 ASEAN countries, India, South Korea and China, etc. are opening up new avenues for Australian trade and services. Services generate 72.7% of Australia’s GDP. The services that will be required are financial, education, business, transport, health, and government services. Every new government program will require an evaluation.

As a final point, I asked Anona what role the AES should have in ensuring the future of evaluation. Her response was very much Anona – visionary and straight to the point.

The AES must address some of the big issues facing society, not only Indigenous and failing organisational issues; not only failures, but how to achieve success.

The AES may need to fund some significant projects to market itself. There’s an opportunity to tie the image of AES to the use of agile technology and to the investigation of the new organisation structures, modes of employment, threats to democracy and of unstable government, and cross-border cultural conflicts.

-------------------------------------------------------------
Emeritus Professor Anona Armstrong is the Director of the Centre for Corporate Governance Research at Victoria University of Technology, and Chair of the Board of Southern Cross Institute of Higher Education. She was made a Member of the Order of Australia (AM) in recognition of her contribution to community and education.

The material for this article is taken from interview questions completed by Anona as well as extracts from her welcome address to the Australasian Evaluation Society Conference, Reaching Across Borders, Melbourne, 5-9 September 2015.


 

February 2019
by Alicia McCoy, Alison Rogers, Leanne Kelly


Evaluation in NGOs in Australia has evolved at a fast pace. Ten years ago, the evaluation landscape in the non-profit sector in Australia looked very different from how it does today. There was less evaluation occurring, very few organisations had internal evaluation functions, and funders were often satisfied with output-focused reports. However, in what has now become a rather volatile socio-political environment, organisations are under increasing pressure to measure their outcomes, even their social impact. As a result, more and more organisations are grappling with how to engage with evaluation meaningfully in a way that makes sense for the context in which they work. Organisations’ ability to do this ranges from limited to sophisticated, depending on a variety of factors including financial and human resources, evaluation knowledge and skills, and motivation. Where they exist, internal evaluators in NGOs are required to navigate this multifaceted issue on a daily basis.

It is important that NGO evaluators connect, share, debrief, and celebrate their work in this area. The evaluation profession as a whole deals with issues of complexity to varying degrees. The NGO context comes with its particular set of challenges and can be incredibly complex. It is very useful to discuss and navigate this complexity with those “in the know”. Not only are these individuals practising the complex discipline of evaluation, they are doing so in an organisational environment where they must continually consider factors such as culture, leadership, long-term professional relationships, and complicated systems and processes. These factors can “make or break” evaluation efforts, and building an enabling environment for evaluation to flourish is very much a matter of organisational change – of bringing others along on the journey. It is a challenging, fascinating and at times frustrating context to work in.

One factor that internal evaluators in NGOs must pay close attention to is how to promote evaluation among colleagues. This involves understanding the spectrum of evaluation literacy that exists in their organisation and working with it. Internal evaluators must be able to engage with colleagues who have a good understanding of evaluation, as well as those who don’t. They also need to be able to ‘speak to’ those who have a positive attitude towards evaluation, those who don’t, and those who might understand the importance of evaluation but find it anxiety-provoking.

As internal evaluators working in diverse NGOs, we have found that motivating and enabling others to access, understand and use evaluation information is an important part of our roles. This can include engaging people in discussions about what evaluation means to them and how it relates to organisational achievements, and sharing knowledge about evaluation. We have found that understanding social connections and how people work together is important for building evaluation literacy – making evaluation more appropriate, understandable and accessible. Internal evaluators need to cooperate with teams and have high-level interpersonal skills to adapt and tailor information about evaluation to the local conditions.

The recent AES conference provided us with a great opportunity to discuss this topic with our colleagues across the NGO sector. Interesting ideas were shared about ways that internal evaluators can promote evaluation and support colleagues to access, understand and use evaluation. Some key highlights include:

  • Collaboratively develop a plan to link individuals to the success of the group or common goal. This could include brainstorming to discover expectations, value and issues.
  • Use a system to manage information and hold individuals accountable for their contribution. A joint calendar with targeted reminders could be an example.
  • Create opportunities for engaging and providing encouragement. This could mean using every opportunity to promote evaluation.
  • Tailor your communication style appropriately for multiple audiences. Knowing your audience and adapting information to make it meaningful is essential.
  • Incorporate opportunities for reflection to consider how well the group is functioning. This could be as simple as routinely asking for feedback on the process.

Look out for our article in the first edition of the Canadian Journal of Evaluation in 2019 where we go into more detail on the topic of evaluation literacy in NGOs. (Rogers, A., Kelly, L., & McCoy, A. (2019) Evaluation Literacy: Perspectives of internal evaluators in non-government organisations. Canadian Journal of Program Evaluation. In Press.)

A Slack Workspace (On the inside looking out – Internal and NGO evaluators community of practice) has been created to keep the NGO evaluation conversation alive. Anyone working in an internal evaluation role in an NGO is welcome to join at https://invaluate.slack.com/ 

If you are interested in connecting with us to further discuss evaluation in NGOs, please contact Alicia McCoy.

Alicia McCoy is Head of Research and Evaluation at Beyond Blue and has recently completed her PhD at The University of Melbourne.

Alison Rogers is the Strategic and Innovation Advisor with The Fred Hollows Foundation’s Indigenous Australia Program and a PhD candidate with the Centre for Program Evaluation at The University of Melbourne.

Leanne Kelly is the Research and Development Coordinator at Windermere Child and Family Services and is undertaking a PhD through scholarship with the Alfred Deakin Institute at Deakin University. 

 

Fellows

In evaluation, a good mentor can help you navigate the perplexing terrain of diverse schools of thought on what evaluation is about and how it should be done. Their guidance can help you avoid the pitfalls which can occur when you are translating a plan into practice. And their insight into where the profession of evaluation has been can help you shape where evaluation is going.

The 18 AES Fellows have over 550 years of experience between them. There is certainly a lot we could learn from them.

Over the next year, Anthea Rutter, an AES Fellow herself, as well as Research Fellow at the Centre for Program Evaluation, will be picking their brains on their influences, the challenges they’ve overcome, the skills evaluators will need now and into the future, how the discipline has changed, and where they think it is headed.

What can you expect? Here’s a taster of what’s coming up on the AES blog:

Scott Bayley on evaluation questions: The hard part is actually identifying good questions to ask – questions that stakeholders care about; questions that reduce uncertainty; questions that support learning, collaborative relationships and better program results.

Sue Funnell on evaluator skills and competencies: Fleet footedness and adaptability while minimising compromises to quality.

Colin Sharp on voice: How do we enable evaluation to give a voice to the clients and consumers of human services, so they can talk to the politicians? How do we talk truth to power?

Penny Hawkins on overcoming challenges: The resilience needed to continually defend evaluations in the face of unhealthy challenges can require a lot of effort. This reality is not something that’s taught on evaluation courses but learnt through difficult experiences.

Join us next week as we go back to the founding of the AES with Anona Armstrong.

March 2019
by Anthea Rutter and the AES Blog Working Group


 

November 2018
by Rachel Aston, Ruth Aston, Timoci O’Connor

How often do we really use research to inform our evaluation practice? Many of us tend to use research and evidence to help us understand what we are evaluating, what outcomes we might expect to see and in what time frame, but we don’t often use research to inform how we do evaluation.

At both the Australasian Evaluation Society’s International Conference in September and the Aotearoa New Zealand Evaluation Association conference in July, Rachel and Ruth Aston, Timoci O’Connor and Robbie Francis presented our different perspectives on the discussion—Tim and Rachel as evaluators, Ruth as a researcher, and Robbie as a program practitioner.

Monitoring program design and implementation for continuous improvement

In Ruth’s study of 7,123 complex interventions to reduce exposure to modifiable risk of cardiovascular disease, she found that two program-specific factors can influence the magnitude (amount) of impact of interventions. These were:

  • design: what the intervention looked like
  • implementation: how and how well the design is enacted within a specific context.

Eleven specific indicators make up these two factors, but Ruth found that 80 per cent of reviewed interventions did not monitor many of these indicators. She concluded that often we don’t monitor the indicators that can give us critical information about how to improve the effectiveness of complex interventions.

Research can help us address this practice gap. Evaluative thinking, along with design thinking and implementation science, can help us operationalise and embed the processes, principles and decision-making structures needed to facilitate progressive impact measurement and continuous improvement.

An evaluation practice example

Rachel is currently working on a three-year impact evaluation of a stepped care model for primary mental healthcare services. One of the challenges in this project is that mental health outcomes are not likely to shift over the course of the evaluation. Further, the intervention for the stepped care model is dynamic – it’s being rolled out in three stages over two years but progression towards impact needs to be monitored from day one.

By incorporating the research on the importance of monitoring design and implementation, we are able to look at the quality, fidelity and reach of implementation of the stepped care model. One of the tools we’re using to do this is the Consolidated Framework for Implementation Research (CFIR) – a validated framework that incorporates a large number of validated constructs (developed by Laura Damschroder and her colleagues), https://cfirguide.org/.

The constructs and overall framework can be used to build data collection tools, such as surveys, interview schedules and observation protocols, and to develop coding frameworks for analysis. Using the CFIR and focusing on how, how well and how much the stepped care model has been implemented, we can develop actionable feedback to improve implementation and, consequently, the effectiveness of the model.

A program practitioner’s perspective

Robbie Francis, Director of The Lucy Foundation, described how the Foundation has used information gained from the monitoring and evaluation of the design and implementation of its exciting Coffee Project in Pluma Hidalgo, Mexico. Her reflections reinforce how adaptations can be made to program design and implementation to improve potential for impact. Robbie also provides an important practical message about the place of principles in evaluating the impact of the Coffee Project.

[Video: Robbie Francis, The Lucy Foundation]

Closing comments

We have a role and a duty as evaluators to use the evidence we have at hand to inform and enhance our practice. This includes traditional research, evaluation practice experience, and program practitioner insights.

While this is important for any evaluation, it is arguably more important when evaluating complex interventions aiming to achieve social change. If we are going to continue to invest in and evaluate complex interventions, which seems likely given the challenging nature of the social problems we face today, then we need to think critically about our role as evaluators in advocating for the importance of:

  • effective design
  • attention to implementation science in planning and roll-out
  • effective monitoring
  • developing tools and measures for quality monitoring of implementation and intervention design
  • sharing and disseminating exemplars of quality intervention design and implementation
  • working with policy makers and commissioners to enable evaluations of complex interventions to focus on using design and implementation as proxy indicators of impact in early rollout.

Above all, we need to accept, review and use all forms of evidence we have at our disposal. This will enable us to continually learn, become evidence-informed practitioners and use evaluative thinking in our work for the purposes of improving our practice, and generating useful, accurate and timely evaluation.

Damschroder, L., Aron, D., Keith, R., Kirsh, S., Alexander, J. and Lowery, J. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4(1).

Dr Ruth Aston
Research Fellow, University of Melbourne
Ruth has nine years’ experience in research and project evaluation. Ruth has managed several large-scale evaluations across Australia and internationally. She recently completed her PhD on 'Creating Indicators for Social Change in Public Health'. She also has a strong interest in interdisciplinary research with diverse cultural groups.

Rachel Aston
Senior Consultant, ARTD Consultants
Rachel is an experienced social researcher and evaluator who joined ARTD in 2018. She brings over six years’ experience conducting research and evaluation for government, NGOs and in the higher education sector. Rachel’s academic background is in anthropology and social research.

Timoci O’Connor
Lecturer, University of Melbourne
Timoci has over ten years’ experience in conducting research and evaluation projects in the public health, education, international development and community sectors. He holds a Master of Public Health and is currently doing his PhD exploring the nature of feedback in community-based health interventions utilising mobile technologies and describing its influence on program outcomes. He is I-Kiribati/Fijian.

Robbie Francis
Director, The Lucy Foundation
Robbie Francis is a young woman who has packed a lot into 29 years. Having lived with a physical disability since birth, she has worked in the disability sector for over a decade as a support worker, documentary maker, human rights intern, researcher, consultant and as an advisor. In 2014 Robbie co-founded The Lucy Foundation, a social enterprise committed to empowering people with disabilities by working with local communities to promote education, employment and a culture of disability inclusiveness through sustainable trade.