Monitoring and evaluation: five reality checks for adaptive management

Written by Tiina Pasanen

Explainer

Over the past couple of years, the concept of adaptive management has gained a lot of traction in international development: real-time monitoring, experimenting, quick feedback loops, and – oh yes – ongoing learning. It all sounds really good.

But if we are honest, it also sounds rather fuzzy.

There have been some commendable attempts to build new adaptive programmes – such as SAVI, LASER and PERL – but there’s also been a lot of re-labelling of existing projects as adaptive. And sure, some of them are, but others really aren’t, sorry.

While programmes are quick to call themselves ‘adaptive’, many donors, programme designers or implementers are not really thinking through the practical implications of this for programming, or for monitoring and evaluation (M&E).

Adaptive projects put learning at the centre – they aim to be flexible and responsive to changing contexts and needs, doing more of ‘what works’ and less of what doesn’t. So ongoing M&E is really important. Here are five reality checks.

1. We need bigger M&E budgets  

If adaptive management is done properly, there’s really no way around this. We will probably need to collect more data (for example, adaptive programmes might be running multiple smaller ‘experiments’ simultaneously). But we also need more time to stop and make sense of what’s going on. Although there’s a lot of talk about ‘failing fast’ (it sounds cool), the fact is that analysis takes time and decisions need proper consideration.

For some time now, the rule of thumb has been that M&E costs should be between 5% and 10% of the overall project budget. With adaptive management, this is not enough. The LASER programme recommends dedicating 20-25% of the budget to programme management and M&E. At the UK Evaluation Society conference, we heard about a skills training programme using an adaptive management approach, with 30% of the total budget dedicated to M&E. The presenter asked those of us in the audience whether we had any idea how to deliver it with a smaller budget without compromising on data quality. We didn’t.
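To put those percentages in perspective – a purely illustrative calculation, not a figure from any of the programmes above – on a hypothetical £10 million programme, the traditional 5-10% rule would set aside £0.5-1 million for M&E, whereas 20-25% for programme management and M&E would mean £2-2.5 million, and a 30% allocation would be £3 million. That is a different order of resourcing altogether.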

2. We also need adaptive M&E budgets 

The project simply can’t be adaptive if the budget is not. This sounds obvious, but it’s so often not the reality. Some of the M&E budget should be allocated later, once evidence has been gathered on what is actually needed.

3. We need more people involved in monitoring and analysis

Quick feedback loops and ongoing learning can’t happen in an M&E silo. Not only do we need to collect more data, but we also need technical staff to be involved in data reflection, analysis and decision-making.

Therefore, monitoring and learning need to be included in technical staff job descriptions – and prioritised. Which leads us to…

4. We need to ensure that managers and technical staff have the right competencies

If managers and technical staff are involved in data sense-making, they will need different or additional skills and competencies compared with traditional programme implementation staff.

Furthermore, people need to be comfortable with uncertainty, ‘failure’ and changing plans. Even if a donor has agreed to a ‘safe-to-fail portfolio’, we all know how difficult it is to get out of the ‘we need to prove this was a success’ thinking and admit that some things didn’t work out as planned or hoped.

5. We need to select evaluation approaches carefully

If you look closely at the literature on M&E for adaptive programming, it’s actually much more about monitoring than evaluation. But evaluation is still important.

For good adaptive programming, evaluative thinking needs to be built into the monitoring processes. This means that traditional mid-term evaluations are likely not needed – or at least are not very useful in their traditional form.

Typical process or performance evaluations can work well. But some outcome or impact evaluation approaches (such as randomised controlled trials) may not be suitable, because collecting control group data and trying to keep variables constant is not always possible given the fluid nature of adaptive programmes.

For adaptive programming, we might need more ‘goal-free’ or ‘indicator-free’ evaluation approaches, such as outcome harvesting. And if we want evaluation approaches to be aligned with the learning promoted by adaptive programming, we might want to try developmental evaluation, or switch to approaches that give implementing organisations a bigger role in design, data collection and analysis. These of course have their own challenges, but they create more opportunities for internal capacity-building and uptake of results.