How cognitive biases affect monitoring, evaluation and learning – and what can be done about it

Explainer

Written by Tiina Pasanen

Hero image: digital illustration representing biases and mental shortcuts

Those of us working in monitoring, evaluation and learning (MEL) are used to thinking about biases.

We want to ensure, for example, that the sampling is done correctly in household surveys or that programmes don’t cherry-pick only the most successful case studies for their evaluations.

But most of the biases we think about – or at least I think about – relate to data (whether it is of good quality, reliable and representative) and less to cognitive biases. These are linked to how we as evaluators or implementers notice evidence in the first place; how we interpret and analyse it (individually or jointly); and ultimately, how we draw conclusions and make decisions based on analysis and reflection.

This is especially important given the mounting calls to use monitoring and evaluation data (and other evidence) not just for donor reporting but also for project-level learning, decentralised decision-making and programme adaptation.

While tacit knowledge is invaluable, it does not shield us from cognitive biases: even if we have all the evidence in the world, the biases and mental shortcuts (‘heuristics’) we rely on can still lead to skewed interpretations and decisions. Ultimately, everyone is vulnerable to them.

Motivated reasoning

Decades of psychological research have concluded that, in general, people are not good at making decisions. Our rationality is ‘bounded’ and we engage in motivated reasoning, meaning that our pre-existing views and attitudes unconsciously bias our assessment of the evidence.

This can happen, for instance, when we see data that challenges our existing views. We experience discomfort (‘cognitive dissonance’) at being exposed to conflicting thoughts and come up with reasons to ignore the evidence.

This may be especially relevant – and perhaps prevalent – for those of us working in international development, as motivated reasoning is strongest around views we are particularly invested in and want to protect. And when it comes to our programmes, the effort, time and belief we put into them make it difficult to accept monitoring data or evaluation results which tell us that a project is not progressing as expected.

Selected biases related to decision-making

Research has identified dozens of different biases and mental shortcuts related to decision-making. Here I list a few that may be especially relevant for MEL-related events, such as evaluation validation workshops, sense-making sessions or strategy-testing sessions:

  • Confirmation bias is perhaps the most discussed and well-known bias. It simply means that we tend to search for, notice and interpret information in a way that confirms our existing views or beliefs.
  • Group reinforcement or ‘groupthink’ refers to situations where we privately think otherwise but self-censor to conform to the view of the majority.
  • Illusory correlations happen when we interpret particular characteristics, variables or events as correlated when they are not. This is closely linked to a ‘need for coherence’, which predisposes us to see patterns and causal relationships where none may exist.
  • The availability heuristic refers to situations where we make decisions based on how readily examples come to mind (i.e. how easy they are to remember or imagine).

How to mitigate cognitive biases

While there may never be a way to eliminate our biases, there are ways to mitigate them. These include:

1. Generate awareness

Simply acknowledging that none of us can escape biases can make us more aware of them and ease the introduction of additional measures and processes to address them.

2. Make reflection sessions structured

This report from the Institute for Government lists three things that can happen in group discussions. Though the report focuses on policy formulation, its findings are widely applicable:

  • Discussion can make groups’ views more extreme;
  • Groups tend to focus on what most people already know;
  • Initial contributions can strongly sway group opinion.

People also tend to jump to solutions before the problem has been properly diagnosed and reflected upon. Given all this, having more structured sessions, where the most vocal or powerful people don’t dominate the discussion, can provide a more equal and balanced platform. I recently attended an Action Learning training session, and I feel we could draw many insights from the way Action Learning systematises and structures learning sessions.

3. Use critical friends or ‘red teams’

Having a range of people with different perspectives in the room doesn’t automatically eliminate biases, but at their best, ‘critical friends’ can pose difficult questions, unearth implicit assumptions, challenge groupthink and balance dominating voices.

They are often external experts, but the role of ‘devil’s advocate’ or ‘red team’ can also be given to individuals within a group. When given a clear mandate, they can raise counter-arguments and question the logic of our thinking in ways that might otherwise be seen as unconstructive or even hostile. This may be especially influential, as research suggests that people are more likely to accept critical comments from someone who is part of the same group.

Cognitive biases – and how to run more ‘effective’ MEL workshops and meetings – have been discussed for years. However, given concerns around the polarisation of public debate and the way the internet amplifies such biases, it is more important than ever that we remind ourselves of them.

As MEL experts or workshop facilitators, we also need to put in place structures and systems where critical thinking and diverse views can be introduced before, during and after joint analysis and decision-making sessions. By doing so, we can address some of these biases and raise the quality of both the analysis and decisions.