2018: time to update the DAC evaluation criteria?

Written by Tiina Pasanen

Explainer

When I see evaluation terms of reference with a totally unrealistic 30+ ‘key’ evaluation questions, I blame the Development Assistance Committee (DAC) evaluation criteria. Well, at least partly.

They are without doubt the most referenced evaluation criteria in international development. The five DAC evaluation areas – relevance, effectiveness, efficiency, impact and sustainability – set the standard for what should be considered and measured.

I’m not the first to suggest that it’s time for a rethink. But for me, the issue is that we’ve stopped seeing the criteria as a useful tool to support our thinking, and started using them instead of thinking.

It’s not the criteria as such, it’s how we use them

Of course the criteria aren’t bad per se. Investigating and understanding the relevance, effectiveness, efficiency, impact and sustainability of our development programmes is important.

The problem is that often it is not plausible, feasible or even appropriate to capture all these dimensions in one evaluation. But many try. And oh, how they try.

So the issue is how the DAC criteria are often applied in practice. As with any tool (like a logframe), when it stops being guidance that supports our work and becomes the only possible way to structure our thinking or evaluation, we are in trouble.

Not every evaluation can, or should, cover all the DAC criteria

Too many evaluation terms of reference (ToR) land on my desk with 30+ evaluation questions, diligently categorised under the five DAC principles – and with a completely unfeasible timeframe and budget. We’ve all seen those, right?

Prioritising is difficult, and evaluation is political: what gets evaluated and what doesn’t? Whose preferences and priorities count? Prioritising also means saying no to interesting questions.

But if we don’t prioritise and focus, our evaluations are unlikely to be useful for, or used by, implementing teams or donors.

To support use, we all need to spend a bit more time thinking through what it is that we actually want or need to know. For example, is the evaluation more about learning or accountability? What is realistic given the budget and resources? What is possible to measure given the issues or concepts being assessed, the length of the programme and the context it operates in? How will the results be used?

Of course not all evaluations try to cover everything. For example, you can criticise impact evaluations for many reasons, but at least they focus on one key dimension – impact – and try to measure it in a systematic, robust manner. But even then, it is essential to do some form of evaluability assessment to understand what is plausible, feasible and useful.

There’s often a gap between ToR and implementation

To be fair, in practice not everything listed in an evaluation ToR gets evaluated.

When I’ve discussed the issue with donors and fellow evaluators, they often admit that the DAC criteria ‘just have to be included according to the organisation’s guidelines’. And some say that their evaluation wouldn’t be taken seriously if the criteria weren’t mentioned.

However, after the winning bid has been chosen, the two parties negotiate. They select which questions to actually focus on, and which will be covered in a very light-touch, unsystematic and often anecdotal way.

For me, this feels unnecessary. Can’t we just skip the pretending part?

What are we missing (because it’s not included in the DAC criteria)?

Finally, the question is not only which of the five dimensions to focus on, but whether these are the most relevant in the first place. The criteria were defined over 15 years ago; perhaps it is time to update them.

For example, we could add dimensions like gender, equity or inclusion of the most vulnerable groups.

Some sectors, such as humanitarian aid, already have guidance on applying the DAC criteria to reflect the context and realities they operate in. Perhaps other sectors could do the same?

But with or without the criteria, the real issue is that people do not do enough, early on, to prioritise what is most important. And so, given the DAC’s respected position in the evaluation field, I don’t think we should ditch the criteria completely. I feel they should be updated to offer a menu of options, and clear guidance, for evaluation commissioners and evaluators to use to support their thinking.

What’s your experience of the DAC evaluation criteria? And what would you like to see in that menu of options?