March 2019
by Anthea Rutter

Although a number of AES members have founded consultancies to channel their evaluation work, it is another thing to think about – and actually achieve – the founding of a professional society. This is exactly what Emeritus Professor Anona Armstrong did. Through her company Evaluation Training & Services, the fledgling society was born in the early 80s. Not only did Anona found the AES, she had the honour and distinction of having a piece of music written for her and performed at the AES International Conference in 1992.

Unfortunately, I wasn’t able to interview Anona in person, but I am very aware of her achievements – Anona and I go back a long way! Though I know some of the earlier history of the society, it was important to get an accurate record, so the first question on our string of emails was:

Forming a society is no mean feat…how did it happen? I guess I sort of anticipated her answer (something along the lines of ’well…Rome wasn’t built in a day’).

Like all good things the society was formed by slow steps. It started with the first ‘National Evaluation Conference’ which was held in 1982 by the Department of Psychology and the Program in Public Policy Studies at the University of Melbourne.

I compiled a Directory of Australian Evaluators in 1983, and 99 people responded, identifying 52 different categories in which they were conducting evaluation. This Directory became the foundation for a mailing list, and the building blocks for the development of the Australasian Evaluation Society (AES).

Anona also acknowledged that the growth in the membership of the AES was due “in no small way to formal teaching but also to evaluation training provided by AES members.”

Like many evaluators, I was curious about how others began their careers, particularly in the ‘80s when evaluation was still a fledgling field in Australia.

So how did you get into evaluation?

In the 80s most people had never heard of evaluation. I was regularly asked ‘What do you mean?’ I remember receiving phone calls from people asking if I could do an evaluation of their property. At the time, evaluation was a novelty. Many government program managers thought that setting performance objectives for government programs could not be done.

Anona was referring to her foray into impact assessment, when she realised that there were no accepted measures. So she looked into the ‘new’ research on social indicators, which gave her the “foundation for measuring impact as well as performance management in the public sector.” It’s no less than what I would expect from someone who founded the AES.

As an evaluator I was also curious about Anona’s thoughts on some of the early writers who influenced the field of evaluation in Australia. I was not disappointed – I got a valuable history lesson on her take on the ascent of evaluation in this country.

Evaluation in Australia owes its origins to the US. Madaus, Stufflebeam and Scriven (1983) traced evaluation in the US back to the educational reforms of the 19th century, but modern evaluation is usually associated with the 1960s, when the US Government’s General Accounting Office (GAO) sponsored evaluation of the large-scale curriculum development projects introduced under President Lyndon Johnson’s ‘War on Poverty’ (Madaus et al., 1983).

By the 70s, new conceptual frameworks were introduced that were specific to evaluation (e.g. goal-free evaluation, Scriven, 1974; naturalistic, Guba and Lincoln, 1981; needs assessment, Stufflebeam, 1977; systematic, Rossi, Freeman and Wright, 1979; utilisation-focused and qualitative evaluation, Patton, 1978, 1980).

As a practitioner who has been in the evaluation game for a long time, I was very keen to get Anona’s thoughts on the changes she had seen in the field of evaluation.

A lot has changed! The discipline of evaluation in Australia in those early years was owned by academia and generally regarded as a form of research. The main purposes for evaluation in those days were to improve government programs or to monitor performance.

Evaluation has expanded: from a focus on government programs to all areas of endeavour; from a small field to a core competency of professional roles; and from academia to consulting. There’s also an increased focus on internal organisation evaluation and performance measurement.

Another area where I was keen to get Anona’s input was the skills that evaluators need now.

The basic evaluation skills have not changed very much, but new skills are required for the use of technology, implementing agile organisations and data mining.

In her speech to the Australasian Evaluation Society International Conference in 2015 on the future of evaluation, Anona expanded on this theme.

Governments are coping with much more complex international and local environments. They face new challenges: globalisation of trade, global conflict, climate change, the gap between rich and poor, rapid advances in information and communications technology, and generational shifts in values. Locally, Australian governments are experiencing a combination of slower economic growth, an aging population, shrinking PAYE taxes, and growth in global companies that can manipulate their finances to minimise their tax. Society, too, is experiencing massive changes evident in global migration, and the influence of social media. At the same time, a more informed electorate is emerging, composed of a diversity of communities with many different values, but all with rising expectations.

Anona then moved on from the original question about the skills needed to what the role of evaluators could be in the present day.

In this environment, what is the role of evaluators? Well, we still have a role within organisations as designers of programs, determining needs and evaluating performance.

I then took this question further and asked about the areas in which evaluators and the field of evaluation could make a difference.

Anona suggested five directions in which evaluators could have a major impact:

  • Addressing social issues

Now is the time to extend the focus of our activities and use our skills to address some of the larger social issues. Evaluators have a role in establishing the value to society, and to organisations, of actions that count as corporate social responsibility and sustainability. While sustainability has different meanings in different contexts, there is a need for measures that allow comparability at least between entities engaged in the same industry or service.

  • Flexibility of methods

Evaluators have traditionally used social science methods to determine the worth of programs. The debate over qualitative versus quantitative methodologies is surely over. We need the different data provided by both methodologies. The new debate is probably about social science methods versus financial methods. Value for money is becoming a standard component of evaluation. Whether in health, education or social services, it is time that evaluators addressed the reality that financial decisions drive governments as much as political ones, and that answering financial questions requires financial analysis.

  • New technology

Evaluators need to take advantage of new data mining and other IT systems that are powerful tools for analysis and communication. This means adding financial analysis, modelling and data mining to the evaluator’s competency frameworks.

  • Growing importance of governance

Across the world, there is a greater emphasis on best practice governance. Corporate governance is defined as the overarching framework of rules, relationships and processes within and by which authority is exercised and controlled in order to achieve objectives. Standards for governance are now issued by governments and professional associations. In the higher education system, universities must meet the governance standards set by TEQSA, known as the Threshold Standards.

  • Growing need for evaluation

Trade agreements with New Zealand, several of the 10 ASEAN countries, India, South Korea and China, etc. are opening up new avenues for Australian trade and services. Services generate 72.7% of Australia’s GDP. The services that will be required are financial, education, business, transport, health, and government services. Every new government program will require an evaluation.

As a final point, I asked Anona what role the AES should have in ensuring the future of evaluation. Her response was very much Anona – visionary and straight to the point.

The AES must address some of the big issues facing society, not only Indigenous and failing organisational issues; not only failures, but how to achieve success.

The AES may need to fund some significant projects to market itself. There’s an opportunity to tie the image of AES to the use of agile technology and to the investigation of the new organisation structures, modes of employment, threats to democracy and of unstable government, and cross-border cultural conflicts.

-------------------------------------------------------------
Emeritus Professor Anona Armstrong is the Director of the Centre for Corporate Governance Research at Victoria University of Technology, and Chair of the Board of Southern Cross Institute of Higher Education. She received a Member of the Order of Australia (AM) in recognition of her contribution to community and education.

The material for this article is taken from interview questions completed by Anona, as well as extracts from her welcome address to the Australasian Evaluation Society International Conference, Reaching Across Borders, Melbourne, 5-9 September 2015.


 


In evaluation, a good mentor can help you navigate the perplexing terrain of diverse schools of thought on what evaluation is about and how it should be done. Their guidance can help you avoid the pitfalls which can occur when you are translating a plan into practice. And their insight into where the profession of evaluation has been can help you shape where evaluation is going.

The 18 AES Fellows have over 550 years of experience between them. There is certainly a lot we could learn from them.

Over the next year, Anthea Rutter, an AES Fellow herself, as well as Research Fellow at the Centre for Program Evaluation, will be picking their brains on their influences, the challenges they’ve overcome, the skills evaluators will need now and into the future, how the discipline has changed, and where they think it is headed.

What can you expect? Here’s a taster of what’s coming up on the AES blog:

Scott Bayley on evaluation questions: The hard part is actually identifying good questions to ask – questions that stakeholders care about, questions that reduce uncertainty, and questions that support learning, collaborative relationships and better program results.

Sue Funnell on evaluator skills and competencies: Fleet-footedness and adaptability while minimising compromises to quality.

Colin Sharp on voice: How do we enable evaluation to give a voice to the clients and consumers of human services, so they can talk to the politicians? How do we talk truth to power?

Penny Hawkins on overcoming challenges: The resilience needed to continually defend evaluations in the face of unhealthy challenges can require a lot of effort. This reality is not something that’s taught on evaluation courses but learnt through difficult experiences.

Join us next week as we go back to the founding of the AES with Anona Armstrong.

March 2019
by Anthea Rutter and the AES Blog Working Group


 

November 2018
by Rachel Aston, Ruth Aston, Timoci O’Connor

How often do we really use research to inform our evaluation practice? Many of us tend to use research and evidence to help us understand what we are evaluating, what outcomes we might expect to see and in what time frame, but we don’t often use research to inform how we do evaluation.

At both the Australasian Evaluation Society’s International Conference in September and the Aotearoa New Zealand Evaluation Association conference in July, Rachel and Ruth Aston, Timoci O’Connor and Robbie Francis presented our different perspectives on this question – Tim and Rachel as evaluators, Ruth as a researcher, and Robbie as a program practitioner.

Monitoring program design and implementation for continuous improvement

In Ruth’s study of 7,123 complex interventions to reduce exposure to modifiable risk of cardiovascular disease, she found that two program-specific factors can influence the magnitude (amount) of impact of interventions. These were:

  • design: what the intervention looks like
  • implementation: how and how well the design is enacted within a specific context.

Eleven specific indicators make up these two factors, but Ruth found that 80 per cent of reviewed interventions did not monitor many of these indicators. She concluded that often we don’t monitor the indicators that can give us critical information about how to improve the effectiveness of complex interventions.

Research can help us address this practice gap. Evaluative thinking, along with design thinking and implementation science, can help us operationalise and embed the processes, principles and decision-making structures that facilitate progressive impact measurement and continuous improvement.

An evaluation practice example

Rachel is currently working on a three-year impact evaluation of a stepped care model for primary mental healthcare services. One of the challenges in this project is that mental health outcomes are not likely to shift over the course of the evaluation. Further, the intervention for the stepped care model is dynamic – it’s being rolled out in three stages over two years, but progression towards impact needs to be monitored from day one.

By incorporating the research on the importance of monitoring design and implementation, we are able to look at the quality, fidelity and reach of implementation of the stepped care model. One of the tools we’re using to do this is the Consolidated Framework for Implementation Research (CFIR), a validated framework incorporating a large number of validated constructs, developed by Laura Damschroder and her colleagues (https://cfirguide.org/).

The constructs and overall framework can be used to build data collection tools, such as surveys, interview schedules and observation protocols, and to develop coding frameworks for analysis. Using the CFIR and focusing on how, how well and how much the stepped care model has been implemented, we can develop actionable feedback to improve implementation and, consequently, the effectiveness of the model.
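To make the idea of a construct-based coding frame a little more concrete, here is a minimal, hypothetical sketch in Python. The domain and construct names are only an illustrative subset of the published CFIR, the coded excerpts are invented, and this is not the project’s actual tool – just one way such a frame could be represented and tallied.

```python
# Hypothetical sketch only: organising a small subset of CFIR-style domains and
# constructs into a qualitative coding frame, then tallying coded interview
# excerpts against each construct to see where implementation feedback clusters.
# Construct names are illustrative; the full framework is at https://cfirguide.org/.
from collections import Counter

coding_frame = {
    "Intervention characteristics": ["Adaptability", "Complexity"],
    "Inner setting": ["Implementation climate", "Leadership engagement"],
    "Process": ["Planning", "Reflecting and evaluating"],
}

# Invented excerpts in the form (construct, interview excerpt).
coded_excerpts = [
    ("Adaptability", "Sites tailored the intake steps to local referral pathways."),
    ("Leadership engagement", "The regional manager chaired every implementation meeting."),
    ("Adaptability", "Clinicians adjusted session length for rural clients."),
]

# Count how many excerpts were coded to each construct.
counts = Counter(construct for construct, _ in coded_excerpts)

for domain, constructs in coding_frame.items():
    print(domain)
    for construct in constructs:
        print(f"  {construct}: {counts.get(construct, 0)} excerpt(s)")
```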

A program practitioner’s perspective

Robbie Francis, Director of The Lucy Foundation, described how the Foundation has used information gained from the monitoring and evaluation of the design and implementation of its exciting Coffee Project in Pluma Hidalgo, Mexico. Her reflections reinforce how adaptations can be made to program design and implementation to improve potential for impact. Robbie also provides an important practical message about the place of principles in evaluating the impact of the Coffee Project.

[Video: Robbie Francis, The Lucy Foundation]

Closing comments

We have a role and a duty as evaluators to use the evidence we have at hand to inform and enhance our practice. This includes traditional research, evaluation practice experience, and program practitioner insights.

While this is important for any evaluation, it is arguably more important when evaluating complex interventions aiming to achieve social change. If we are going to continue to invest in and evaluate complex interventions, which seems likely given the challenging nature of the social problems we face today, then we need to think critically about our role as evaluators in advocating for the importance of:

  • effective design
  • attention to implementation science in planning and roll-out
  • effective monitoring
  • developing tools and measures for monitoring the quality of implementation and intervention design
  • sharing and disseminating exemplars of quality intervention design and implementation
  • working with policy makers and commissioners to enable evaluations of complex interventions to focus on using design and implementation as proxy indicators of impact in early rollout.

Above all, we need to accept, review and use all forms of evidence we have at our disposal. This will enable us to continually learn, become evidence-informed practitioners and use evaluative thinking in our work for the purposes of improving our practice, and generating useful, accurate and timely evaluation.

Damschroder, L., Aron, D., Keith, R., Kirsh, S., Alexander, J. and Lowery, J. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4(1).

Dr Ruth Aston
Research Fellow, University of Melbourne
Ruth has nine years’ experience in research and project evaluation. Ruth has managed several large-scale evaluations across Australia and internationally. She recently completed her PhD on 'Creating Indicators for Social Change in Public Health'. She also has a strong interest in interdisciplinary research with diverse cultural groups.

Rachel Aston
Senior Consultant, ARTD Consultants
Rachel is an experienced social researcher and evaluator who joined ARTD in 2018. She brings over six years’ experience conducting research and evaluation for government, NGOs and in the higher education sector. Rachel’s academic background is in anthropology and social research.

Timoci O’Connor
Lecturer, University of Melbourne
Timoci has over ten years’ experience in conducting research and evaluation projects in the public health, education, international development and community sectors. He holds a Master of Public Health and is currently doing his PhD, exploring the nature of feedback in community-based health interventions utilising mobile technologies and describing its influence on program outcomes. He is I-Kiribati/Fijian.

Robbie Francis
Director, The Lucy Foundation
Robbie Francis is a young woman who has packed a lot into 29 years. Having lived with a physical disability since birth, she has worked in the disability sector for over a decade as a support worker, documentary maker, human rights intern, researcher, consultant and as an advisor. In 2014 Robbie co-founded The Lucy Foundation, a social enterprise committed to empowering people with disabilities by working with local communities to promote education, employment and a culture of disability inclusiveness through sustainable trade.


 

February 2019
by Alicia McCoy, Alison Rogers, Leanne Kelly


Evaluation in NGOs in Australia has evolved at a fast pace. Ten years ago, the evaluation landscape in the non-profit sector in Australia looked very different from how it does today. There was less evaluation occurring, very few organisations had internal evaluation functions, and funders were often satisfied with output-focused reports. However, in what has now become a rather volatile socio-political environment, organisations are under increasing pressure to measure their outcomes – even their social impact. As a result, more and more organisations are grappling with how to engage with evaluation meaningfully, in a way that makes sense for the context in which they work. Organisations’ ability to do this ranges from limited to sophisticated, depending on a variety of factors including financial and human resources, evaluation knowledge and skills, and motivation. Where they exist, internal evaluators in NGOs are required to navigate this multifaceted issue on a daily basis.

It is important that NGO evaluators connect, share, debrief, and celebrate their work in this area. The evaluation profession as a whole deals with issues of complexity to varying degrees. The NGO context comes with its own particular set of challenges and can be incredibly complex. It is very useful to discuss and navigate this complexity with those “in the know”. For not only are these individuals practising the complex discipline of evaluation, they are doing so in an organisational environment where they must continually consider factors such as culture, leadership, long-term professional relationships, and complicated systems and processes. These factors can “make or break” evaluation efforts, and building an enabling environment in which evaluation can flourish is ultimately a matter of organisational change – of bringing others along on the journey. It is a challenging, fascinating and at times frustrating context to work in.

One factor that internal evaluators in NGOs must pay close attention to is how to promote evaluation among colleagues. This involves understanding the spectrum of evaluation literacy that exists in their organisation and working with this. Internal evaluators must be able to engage with colleagues who have a good understanding of evaluation, as well as those who don’t. They also need to be able to ‘speak to’ those who have a positive attitude towards evaluation, those who don’t, and those who might understand the importance of evaluation but find it anxiety-provoking.

As internal evaluators working in diverse NGOs, we have found that motivating and enabling others to access, understand and use evaluation information is an important part of our roles. This can include engaging people in discussions about what evaluation means to them and how it relates to organisational achievements, and sharing knowledge about evaluation. We have found that understanding social connections and how people work together is important for building evaluation literacy – making evaluation more appropriate, understandable and accessible. Internal evaluators need to cooperate with teams and have high-level interpersonal skills to adapt and tailor information about evaluation to local conditions.

The recent AES conference provided us with a great opportunity to discuss this topic with our colleagues across the NGO sector. Interesting ideas were shared about ways that internal evaluators can promote evaluation and support colleagues to access, understand and use evaluation. Some key highlights include:

  • Collaboratively develop a plan to link individuals to the success of the group or common goal. This could include brainstorming to discover expectations, value and issues.
  • Use a system to manage information and hold individuals accountable for their contribution. A joint calendar with targeted reminders could be an example.
  • Create opportunities for engaging and providing encouragement. This could mean using every opportunity to promote evaluation.
  • Tailor your communication style appropriately for multiple audiences. Knowing your audience and adapting information to make it meaningful is essential.
  • Incorporate opportunities for reflection to consider how well the group is functioning. This could be as simple as routinely asking for feedback on the process.

Look out for our article in the first edition of the Canadian Journal of Evaluation in 2019 where we go into more detail on the topic of evaluation literacy in NGOs. (Rogers, A., Kelly, L., & McCoy, A. (2019) Evaluation Literacy: Perspectives of internal evaluators in non-government organisations. Canadian Journal of Program Evaluation. In Press.)

A Slack Workspace (On the inside looking out – Internal and NGO evaluators community of practice) has been created to keep the NGO evaluation conversation alive. Anyone working in an internal evaluation role in an NGO is welcome to join at https://invaluate.slack.com/ 

If you are interested in connecting with us to further discuss evaluation in NGOs, please contact Alicia McCoy.

Alicia McCoy is Head of Research and Evaluation at Beyond Blue and has recently completed her PhD at The University of Melbourne.

Alison Rogers is the Strategic and Innovation Advisor with The Fred Hollows Foundation’s Indigenous Australia Program and a PhD candidate with the Centre for Program Evaluation at The University of Melbourne.

Leanne Kelly is the Research and Development Coordinator at Windermere Child and Family Services and is undertaking a PhD through scholarship with the Alfred Deakin Institute at Deakin University. 

 

November 2018
By Denika Blacklock

I have been working in development for 15 years and have specialised in M&E for the past 10 years. In all that time, I have never been asked to design an M&E framework for, or undertake an evaluation of, a project which did not focus entirely on a logframe. This is understandable: the logframe is a practical tool for measuring results – particularly quantitative results – in development projects.

However, as the drive for increased development effectiveness and, thankfully, more accountability to stakeholders has progressed, it has become clear that measuring what we have successfully done (versus what we have successfully changed or improved) requires more than just numbers. More concerning is the fact that logframes measure linear progression toward preset targets. Any development practitioner worth their degree can tell you that development – and development projects – are never linear, and that at our best we guess at what our output targets could conceivably be under ideal conditions, with the resources (money, time) available to us.

I have lately found myself faced with the challenging scenario of developing M&E frameworks for development projects in which ‘innovation’ is the objective, but I am required to design frameworks with old tools like logframes and results frameworks (organisational/donor requirements) which cannot accommodate actual innovation in development.

[Word cloud image: ‘Thinking outside the logframe’ – sourced from Google]

The primary problem: logframes require targets. If we set output targets, then the results of activities will be preconceived, and not innovative. Target setting moulds how we design and implement activities. How can a project be true to the idea of fostering innovation in local development with only a logframe at hand to measure progress and success?

My argument was that if the project truly wanted to foster innovation, we needed to ‘see what happens, not decide beforehand what will happen with targets’. I also argued that a target of ‘x number of new ideas for local development’ was a truly ineffective (if not irresponsible) way of going about being ‘open-minded about measuring innovation’. There could be 15 innovative ideas that could be implemented, or one or two truly excellent ones. It was not going to be the number of ideas, or how big their pilot activities were, that would determine how successful ‘innovation in local development’ would be, but what those projects could do. The project team was quick to understand that as soon as we set a specific numerical or policy target, the results would no longer be innovative: they would no longer be driven by ideas from government and civil society, but by international good practice and the development requirement that we measure everything.

There was also the issue of how innovation would be defined. It does not necessarily need to be ‘shiny and new’ but it does need to be effective and workable. And whether the ideas ended up being scalable or not, the entire process needed to be something we could learn from. Working out how to measure this using a logframe felt like one gigantic web of complication and headaches.

My approach was to look at all of the methods of development monitoring ‘out there’ (i.e. Google). When it came to tracking policy dialogue (and how policy ideas could be piloted to improve local development), outcome mapping seemed the most appropriate way forward. I created a tool (Step 1, Step 2, etc.) that the project team could use on an annual basis to map the results of policy dialogue to support local development. The tool was based on the type of information the project team had access to, the people that the project team would be allowed to speak to, as well as the capacity within the project team to implement the tool (context is key). Everyone was very happy with the tool – it was user-friendly, and adaptable between urban and rural governments. The big question was how to link this to the logframe.

In the end, we opted for setting targets on learning, such as how many lessons learned reports the project team would undertake during the life of the project (at the mid-term and end of the project). At its core, innovation is about learning: what works, what does not and why. Surprisingly, there was not a lot of pushback on having targets which were not a direct reflection of ‘what had been done’ by the project. Personally, I felt refreshed by the entire process!

I completed the assignment even more convinced than I already was that, despite the push to change what we measure in development, we will never be effective at it unless those driving the development process (donors, big organisations) really commit to moving beyond the ‘safe’ logframe (which allows them to account for every cent spent). As long as we continue to stifle innovation by needing to know – in advance – what the outcome will be, we will only be accountable to those holding the money and not to those who are supposed to benefit from development. Until this change in mindset happens at the top of the development pyramid, we will remain ‘log-framed’ in a corner that we cannot escape from, because we have conditioned ourselves to think that the only success that counts is that which we have predicted.

Denika is a development and conflict analyst, and independent M&E consultant based in Bangkok.
Personal blog: http://theoryinpracticejournal.blogspot.com/
Twitter: @DenikaKarim