Measuring development impact isn't just technical, it's political

Written by Anne Buffardi, Tiina Pasanen

Explainer

Over the last decade, much of the discussion around evaluation has focused on which methods are best. But too much focus on methods can mean that we don’t pay enough attention to the difficult politics and relationships involved in development programming and evaluation – like who defines what we even mean by ‘impact’, or how we judge ‘success’.

Four years ago, we set out to develop and test evaluation methods for hard-to-measure interventions. By ‘hard to measure’ we mean development programmes that tackle entrenched issues like women’s economic or social empowerment; that involve multiple components and diverse groups of people and interests; and that operate in challenging settings, such as areas affected by conflict, disasters or economic and political instability. For these programmes, more traditional impact evaluation methods, such as randomised controlled trials, may not be feasible.

But we quickly learned that there are more fundamental challenges that need to be tackled, long before an evaluation design and methods are selected.

Evaluations are trying to do too much

Diverse actors – implementing staff, senior managers, donors, government officials, intended beneficiaries – often identify scores of relevant questions that would help their work.

But answering all of these questions simply isn’t feasible – and sometimes, it’s not even plausible to answer certain questions in the given timeframe.

Another common problem is balancing the pressure to demonstrate results with the need to learn. Learning means looking at what’s not working, as well as what is – and sometimes there are strong incentives not to do this.

All this is amplified in ‘hard to measure’ programmes, with large numbers of people, organisations, questions and learning needs in the mix.

These challenges are not new, but they are persistent. The more we worked with different programmes and shared our experiences at public events, the more we realised that these challenges are felt widely across the sector.

Why does this problem persist?

Prioritising different needs and interests is inherently political. It means saying no: addressing the questions and interests of some groups but not others.

This may be why we still focus so much attention on methods. It lets us feel that there is a technical solution that will produce the answer to development impact. But for ‘hard to measure’ development programmes, it’s rarely that simple. So by focusing predominantly on methods, we shift attention away from the tough discussions, like: ‘What types of changes do we need to see to keep going? When do we need to try a different approach, and what should that look like?’

So what can be done?

There are no easy answers. Prioritising which evaluation questions to address involves difficult decisions and saying no to relevant and interesting questions.

Identifying and improving methods is necessary, but it’s not sufficient to enable evidence-based development policy and practice.

We mustn’t let debates about methods mask more fundamental questions and conversations that need to take place – and there are tools out there to help you do this.

Together, the development community needs to confront the incentives that push us towards technical solutions and shorter-term thinking. This involves making hard choices and being transparent about what those choices are, and who made them.

For example, people responsible for commissioning and funding evaluations need to be clear about which questions and whose perspectives take priority, and then be transparent about how this decision was reached. And when impossible lists of evaluation questions are presented, evaluators need to push back on what is feasible and plausible to answer given the budget and timeframe.

While it’s usually through evaluations that important questions like ‘how do we define impact?’ or ‘how do we determine success?’ arise, these are fundamental matters that relate to, but are larger than, any single programme evaluation. They affect the extent to which development initiatives and evaluations are perceived as credible, and whether their findings are used. So tackling these issues head on, and early, is really important.

The Methods Lab evaluation toolkit provides practical discussion, guidance, tools and templates for programme and evaluation doers, commissioners and donors to understand and navigate evaluation for ‘hard to measure’ development programmes.