Four years ago I wrote about our first experiences of attempting a longitudinal panel survey in fragile and conflict-affected situations. No one, as far as we knew, had done it before at this scale, with this many countries. Some people working in similar fields thought we were crazy, but we took a deep breath and went ahead. Four years later, we’re publishing the results of a two-wave panel survey covering the Democratic Republic of Congo (DRC), Nepal, Pakistan, Sri Lanka and Uganda.
So: were we crazy?
Not entirely. Firstly, by building a baseline for the panel in six countries and then progressing to a second wave in five (we couldn’t return to South Sudan due to a major escalation of violent armed conflict), we’ve shown that research like this is possible.
And we didn’t just go back to the same places. In locations often presumed to be ‘no-go’ areas for research, we found six out of every seven people we originally interviewed.
We certainly had our fair share of good fortune and frustrating setbacks. Our teams navigated all sorts of obstacles: phones tapped by the secret service; theft of wages and equipment; curfews, blockades and protests. They tracked people across significant distances, waded through rivers in deep ravines, and – literally – climbed mountains. For us, the experience suggests a recalibration is in order. We can and should change our expectations about what data collection is possible in fragile and conflict-affected situations.
Secondly, as our analysis has demonstrated, this panel data generates findings that could not have been produced with a regular cross-sectional approach (where we survey in the same places but with a new sample of people each time). Our research on food security makes this clear. Although across all five countries there was little change in average food security between the two waves of interviews, tracking the food security of each individual over time revealed a far more complicated set of trajectories than is often assumed in donors’ policies and programmes.
Here’s what I’ve learned from the experience:
It’s crucial to stay focused
Not far into the survey development process I banned the word ‘interesting’, instead trying to put the emphasis on our work being ‘useful’. It wasn’t a popular move, but during our attempts to prioritise between different themes and questions, and between ways of analysing the data and presenting our findings, this framing helped to maintain a focus on the questions that governments and donors wanted answered. In the end, I think that has made our findings more operationally useful.
Document, document, document
Trying things that might not work is critical for innovation and progress but it means being prepared to accept and learn from failure. Documenting the process carefully means that, at the very least, you’ll generate lessons that might aid other researchers in the future.
Sometimes a ‘null’ finding is highly valuable, even when it’s frustrating
We didn’t anticipate that – across the six countries – there would be no consistent relationship between the services people access and people’s perceptions of government. In fact, policy has long assumed that quality services lead to better perceptions of government. However, not finding any such relationship made us rethink service delivery and state legitimacy. Instead of seeing it as a relatively simple transactional relationship, we’ve developed an understanding of how it is much more nuanced and contextual – one that takes us beyond just telling people ‘it’s complicated’ and offers more practical solutions.
As it turns out, a bit of boldness can take us a long way. And we weren’t crazy after all.