3 impact measurement mindsets that may be holding you back

"One of the most rewarding aspects of facilitating face-to-face workshops is the rich discussions that unfold..."

These conversations often reveal deep, practical insights into outcomes measurement that go beyond what you’ll find in textbooks. I’m always struck by the honesty and pragmatism that practitioners from diverse fields bring to the table, and I greatly enjoy the questions and real-world examples shared.

Over the years, as I’ve taught outcomes measurement and listened to the challenges participants face, I’ve noticed three common, and mostly unproductive, ways of thinking that can hold people back.

1. Thinking that the work has to be effortful, difficult and comprehensive to be ‘rigorous’

In research, rigour means being careful, consistent, systematic and disciplined. It means taking the time to think through what you are doing and why. It means asking the right questions and then considering whether the methods and tools you are using are ‘the right fit’ for answering those questions.

It does not mean you need to measure the world or drown in data; in fact, that is often a sign that you have failed to be rigorous.

Some of the best outcomes measurement practices are small-scale, easy to implement and elegantly designed. The ‘ideal’ measurement environment may involve only a handful of well-selected measures, but they are the ones that generate the most meaningful evidence and directly inform decisions. As the saying often attributed to John Maynard Keynes goes, “It is better to be roughly right than precisely wrong.”

2. Thinking there is ‘one way’ to evaluate

One of the most fascinating things about evaluation is that it does not belong to any single school of thought or discipline. Evaluation is an essential part of the work of educators, health practitioners, the community sector and many others. Practitioners from diverse disciplinary backgrounds find themselves playing in the evaluation space, each bringing different tools, methods and mindsets to the process.

This means that learning ‘how to evaluate’ is a bit like taking a Contiki tour across multiple countries, with different concepts, cultures and languages at play. It’s an adventurous, exploratory process, taking you through different intellectual terrains and perspectives. For example, psychology brings confidence and significance measures; anthropology offers tools for deep listening, cultural engagement and healthy doses of self-doubt (with epistemology and phenomenology as close cousins, helping to harness critical lived-experience insights). Economics provides hardline value testing and the reminder that many factors lead to change. Health sciences and clinical research provide the evidence hierarchy and the importance of the counterfactual, while public health modelling can map behaviour change like no one else.

Over the years the evaluation community has also welcomed community development practitioners (brilliant expertise in empowerment evaluation), geographers (context is everything!), social workers and linguists (leave no one behind) and mathematicians (someone needs to make sense of the spreadsheets). In recent times we have also looked to IT to help with sophisticated data infrastructure, and to marketing and communications to work magic with the findings. The bottom line is, there is no ‘one way’ to evaluate.

This means it is not always straightforward or easy. That’s why the Centre for Social Impact at UWA has developed the Outcomes Measurement Workshops to help you on this complicated journey. They involve putting on our ‘multifocal lenses’ and appreciating the many different ways to do evaluative thinking, as well as building the problem-solving skills needed to navigate this diverse world. In this interactive short course, you embark on a transformative journey that will equip you with invaluable tools and insights to revolutionise how you measure, assess and communicate your organisation's impact. If you are looking for rules and templates, you won’t be happy. If you love adventure, exploration and new ways of thinking, you are going to love learning about evaluation (and our two-day ‘around the world’ crash course).

3. Not closing the loop

Over the years we have met many organisations that are deeply committed to outcomes measurement and highly proficient in their measurement activities, yet ultimately fail to see the value or benefits of their efforts. What’s the root cause of this disconnect?

Often this is a sign of compulsive ‘busy work’ that happens without the time to pause and consider what is meaningful.

These are the organisations that collect a whole lot of data but do not have the time, skills or resources to analyse it, or cannot find the space in their workflow to make sense of it (i.e., turn data into evidence). Some organisations don’t even use the data they have, let alone present it back to the people they collected it from.

This can be a sign of a lack of clarity about purpose, of not spending time on framing, or of being so drowned in the volume of data and busyness that any sense-making activity is suffocated (you can’t see the wood for the trees!).

These are very common scenarios, and understandable ones. For many organisations, the focus and priority is on serving people, and outcomes measurement is an afterthought. However, after meeting so many organisations struggling with measuring impact, the answer to that struggle is, reassuringly, usually to slow down, do less and simplify. It doesn’t need to be more work or more difficult. Taking the time to develop an organising structure, selecting the right tools and frameworks, and knowing what you want to end up with is sometimes all that is needed.

Attending an Outcomes Measurement Workshop is one way of taking that time out to reflect. Many participants appreciate this facilitated, dedicated time not only to unlearn some assumptions that might be holding them back, but also to think through best approaches and practices together so they can bring clarity back to their work.

BY LISETTE KALEVELD

This opinion piece is written by Lisette Kaleveld, a social researcher at the Centre for Social Impact at The University of Western Australia (CSI UWA). Lisette has extensive experience leading national, state-wide and local evaluation projects across the mental health, education and disability sectors.

For the past five years, Lisette has been facilitating the Outcomes Measurement Workshops across Australia, helping organisations elevate and showcase their impact. She also serves as the convener for the Australian Evaluation Society (WA Branch).