Five challenges of analysing progress – and how we got around them

15 February 2017

There has been no shortage of bad news stories recently. I’ll forgo the links, but you know the ones – stories about how war, corruption and waste slow development efforts and make aid ineffective.

Sadly, it’s often the positive development stories that don’t make the headlines. Nepal has reduced maternal mortality by an estimated 75% since the early 1990s; Peru lowered urban poverty from 42% to 16% between 2001 and 2013; and Burkina Faso has reclaimed 200,000–300,000 hectares of land from the Sahara Desert over the past 30 years. These are just a few stories that deserve more attention.

I’ve spent the past five years researching stories like these as part of ODI’s Development Progress project. These are more than just ‘feel good’ stories and far from straightforward. Here are a few of the biggest challenges we found in trying to better analyse examples of progress like these, and what we did about them:

1. Global goalposts are really ambitious

Both the Millennium Development Goals and Sustainable Development Goals set the bar high – universal primary education, zero hunger. With few countries achieving these, finding positive stories is hard.

So we used a broader framework, looked carefully at starting points, and identified cases that could demonstrate progress at the national or large state level (even if the global goals would not be met). The results showed that global goals are useful, but other factors like political leadership, community action, and targeted technical assistance are often more important to driving change on the ground.

2. Progress hasn’t helped everyone

Negative commentary and evaluations are justified: progress is complex, and often leaves far too many people behind. In every one of the case studies we produced, progress sat alongside serious challenges.

In Kenya, despite more children progressing to higher levels of school, there are significant concerns about the quality of learning. In Sri Lanka, major reductions in unemployment sit alongside worries about the role and risks of migration. We tackled issues like these by embracing the messiness of progress – highlighting the good news while still acknowledging the bad.

3. It’s hard to prove causation

Our research intentionally looked at large-scale development gains, going beyond what could safely be attributed to one programme or intervention. Still, in each case study we identified a number of ‘drivers of progress’.

In the Nepal story, for instance, we found that the drop in maternal mortality was driven by a combination of factors: rising remittances that contributed to a decline in fertility, the actions of a small group of ministry of health officials, increases in public health expenditure, and expanded health facilities in rural areas.

How did we come up with that formula? These findings are judgment calls based on a close look at context, evidence and experience. Looking at macro-level changes meant forgoing attributional precision, as is the case in much of social science research. However, our lessons on financing progress, which aggregate findings from multiple studies, are a good example of how useful this approach can be.

4. There isn’t enough high-quality, timely data

Data was absolutely essential to our project, both in identifying cases and then to support deeper analysis to understand when and how change came about. The inadequacies of development data will hardly be a surprise, and as the SDGs come into play, we are seeing action on this globally.

In our experience, the availability and quality of internationally comparable data for tracking development progress varied in how inadequate it was (for instance, there was far more available on health than on political voice). Overall, the biggest issues we encountered were a lack of disaggregation and, particularly in the service sectors, limited data on the quality of outcomes (rather than on access). We largely filled these gaps by gathering qualitative data through interviews and other means, recognising all the while that this was not representative.

5. It’s hard to get findings to the right audiences

Over the project’s lifespan, we commissioned and managed more than 100 research products. Such breadth and depth of output led to two issues.

The first was synthesising findings in a meaningful way. We attempted to address this by pulling together aggregate analysis for different audiences, rather than one final overarching report. This included research that looked at our findings in specific sectors like education, employment or security; analysis in cross-cutting areas like measurement or finance; and significance for external processes like SDG early implementation.

The second issue was presenting often complex work in a way that would capture attention. We did this by investing heavily in the communications side of our work: producing a range of multimedia, inviting key thinkers to blog when relevant research came out, and partnering with others to hold events exploring both our own – and related – work and its real-world application.

These five points weren’t our only challenges. Others included difficulty in limiting the scope of inquiry, a tendency among researchers to over-focus on critique, concerns about whitewashing problems, and the possibility that positive research could be used by some as propaganda.

In the end, did we overcome these challenges and tell some worthwhile stories? I think so, but I am perhaps just a bit biased. Ultimately, the best judges are those of you who have the chance to delve in and explore the project’s research most relevant to your own work. If you haven’t done this yet, do take a closer look and find out more.


This is the second blog in a three-part series on the Development Progress project. Read Liz Stuart’s piece arguing that with aid under attack, we need stories of development progress more than ever. In the next in the series, Kate Bird draws on 50 case studies to distil some of the key lessons for achieving the Sustainable Development Goals by 2030.