Monitoring and learning during Covid-19: beyond remote data collection

11 September 2020

In the first weeks and months after Covid-19 hit, I attended and spoke at several online workshops and webinars about how to do monitoring, evaluation and learning (MEL) and research during the crisis.

These sessions were all very helpful, with experienced practitioners discussing valid issues: collecting data remotely, the limits and strengths of phone and SMS surveys, specific ethical considerations, whether smaller sample sizes and less stratification would suffice, and so on.

While I enjoyed these discussions, something started to bother me: MEL considerations during Covid-19 focused mainly or entirely on remote data collection.

This focus was of course natural: in-person data collection obviously had to change given social distancing requirements.

But MEL goes – or at least should go – beyond data collection, and these other elements are still affected by the crisis too. What I saw were two persistent MEL challenges becoming amplified.

Challenge 1: reporting is (still) prioritised over other tasks

What gets monitored and evaluated, and what is prioritised over what, is always political. This is no different during Covid-19.

While many research and implementation programmes quickly pivoted their activities to support the Covid-19 response and/or started to collect additional (and important!) data on the effects of Covid-19 in communities they support, many also continued writing long donor reports based on their existing (and in many cases, at least partly outdated) logframes. In some cases, staff spent weeks during the toughest Covid-19 months producing dozens of documents to meet unchanged deadlines and annual review requirements.

This reinforced an understanding many people working in the MEL field already have: when it comes to MEL, donor reporting is still the priority, regardless of the growing rhetoric around learning. A year ago, I wrote about the obsessive measurement disorder the development field suffers from, and Covid-19 has only strengthened that assessment. We still haven’t moved forward.

While the importance of data should not be understated, especially during Covid-19 (e.g. to help re-orientate or prioritise programme activities), it should also be possible to adapt reporting requirements when drastic changes in the context take place.

Challenge 2: how to be inclusive in making sense of data and results

Good MEL processes go beyond donor reporting. Among other things, they include structured opportunities for programme staff to reflect, make sense of the data and learn. This is no less important during the Covid-19 pandemic.

Many programmes quickly adapted their ways of working, with online workshops and webinars (for joint reflection and learning) becoming commonplace. But online MEL workshops only work when people have affordable and reliable internet connections – something that should not be taken for granted, especially when we want to include staff not based in capitals or big organisations.

This issue is widely recognised. In our ‘Adapting to Covid-19’ webinar (see the slide deck) back in April, many participants called for ‘low-tech’ solutions to ensure that people without the necessary bandwidth to engage in webinars can still be involved.

While we haven’t cracked this yet, I see different types of ‘consultation platforms’ as one option (for example, our Youth Forward Learning Partnership recently used this platform). With these tools, participants don’t have to be online at the same time; they can instead add their inputs and comments over a set period (e.g. within a week). Removing these barriers can hopefully increase engagement from a more diverse range of people.

Monitoring, evaluation and learning in a post-Covid world

These two challenges – what gets monitored and prioritised, and who is involved in making sense of data – are of course not new. The Covid-19 crisis has simply made them more evident (staff stressing over logframe indicators while supporting local communities during a global crisis) and amplified them (making inclusivity in evidence analysis even more challenging).

But the crisis has also highlighted what kind of MEL we need: systems that are intentional, meaningful and fit-for-purpose.

If project activities and strategies can be adjusted, so can MEL processes and purposes. We need to keep supporting evaluative thinking even when it is easier to focus only on data collection.

Finally, measuring and reporting what we do – or should have done according to old plans – should not take over from implementing what needs to be done in the moment, especially when a crisis like Covid-19 is in full force.

Authors

Research Fellow
Tiina Pasanen is a Research Fellow specialised in monitoring, evaluation and learning (MEL) methods [...]