Three ways to incorporate evaluative thinking in monitoring

Written by Tiina Pasanen

Explainer

Monitoring and evaluation are traditionally considered to be interlinked but nevertheless distinct processes.

Monitoring is an ongoing system of gathering information and tracking a project’s performance using pre-selected indicators. Evaluation, by contrast, is about making overall assessments of a project’s effectiveness, outcomes, and whether it has met its objectives and aims. The former is usually done by project staff during the project’s lifespan, often mainly (and sometimes only) for donor reporting purposes. The latter is typically done by external evaluators towards the end of the programme.

However, over the past few years, the line between the two processes has started to blur, with ‘evaluative thinking’ increasingly creeping into monitoring processes and activities.

By evaluative thinking I simply mean ongoing questioning and analysis of (in this case) monitoring data to make evidence-informed improvements and alterations to project-related activities while the project is still running. It is about asking questions such as: ‘What do we see happening?’ ‘What is working?’ ‘What is not working and why?’

While project staff have always assessed how things are going, evaluative thinking is about making these assessments and reflections more structured and – in the case of large multi-project programmes – more systematic across projects and countries.

This is not to say end-of-project evaluations aren't necessary. They still have value, as they give an overall 'bigger' picture and an outsider's perspective on the programme. Evaluative thinking, by contrast, is about learning and improving ongoing programming.

There are a few possible explanations for this development. It could be due to an increased emphasis on learning in international development. But it might (at least partly) be a response to the increasingly common understanding that findings and recommendations from end-of-project evaluations usually come too late to be useful for current programming. It could also reflect growing recognition that programme staff are often very capable of analysing their own work. Let's be honest – the findings we external evaluators come up with, usually with limited involvement and time, aren't always such revelations for those who have actually worked in a programme for a long time.

Either way, it’s a welcome shift, and we should actively encourage it. There are many ways to support programme staff to do their own analysis and reflection on monitoring data. Here are three approaches I’ve come across that may be helpful in fostering evaluative thinking:

Use learning partners to facilitate learning 

This is an approach that several funders, such as DFID and The Mastercard Foundation, have tested and invested in over recent years. The aim of bringing in a semi-external/internal learning partner (rather than an external evaluator or a portfolio manager) is, among other things, to continuously support learning and the use of monitoring and other data while the programmes are running. One strategy to support this is organising regular learning seminars or meetings where the evidence of progress is jointly analysed and discussed by programme staff, learning partners and sometimes also funders.

Use self-assessment scorecards to rate evidence and test programme assumptions

Scorecards have traditionally been used to improve the quality of public services (as in this example and toolkit). But they can also be used by programmes to generate qualitative evidence and embed a culture of learning and reflection, as I discovered from a presentation given by the Making All Voices Count (MAVC) governance programme at the latest UKES conference. While there are several steps involved in using scorecards, the basic idea is for a project team to first gather evidence of outcomes and changes they have seen taking place (positive or negative) and then come together to jointly reflect on and rate the quality of that evidence. The aim is to foster critical thinking, test programme assumptions and jointly develop actions to improve programming.

Use outcome mapping to understand progress towards transformational changes 

RAPID’s long-time favourite, this approach can be especially useful for programmes trying to address complex issues such as women’s empowerment, research influence or advocacy. It breaks outcomes down into smaller, more manageable steps and can help programmes understand which changes are within their control, influence and interest. The aim is to recognise and appreciate smaller behavioural changes that can, in the long run, lead to more substantial transformative changes. However, the key part of outcome mapping is the same as with the self-assessment scorecards above: joint sense-making at regular points, where programme teams analyse the data collected and adapt their plans and strategies based on the evidence.

Whatever approaches a programme chooses to use, it’s also worth reiterating that joint reflection really is key. The common thread running through the tools and approaches above is the importance of regular and structured reflection meetings where programme data is jointly analysed.

This sounds simple but it is not easy. Anyone who has ever worked in international development knows how difficult it is to find the head space and time for joint reflection. It can be very resource intensive to bring people together from several teams and countries. Where many partners are involved, each doing their own thing, there is also always the danger of merely promoting one’s own work and sharing only success stories. Sharing what doesn’t work requires trust, and trust takes time to build. But when facilitated well and conducted in a safe space, joint reflection can create healthy debate on strategic and technical issues (what works and what doesn’t), foster knowledge sharing across projects and trigger new ideas and solutions.