When will we learn how to learn?

14 April 2011
Articles and blogs
The current popularity of impact evaluation (IE) in development and humanitarian arenas is without precedent. Concerted efforts in recent years have seen IE move from an interesting area of work to the forefront of global efforts to bring evidence to bear on development policy and practice. While this is clearly a good thing, some of these efforts have not practised what they've preached in terms of the use of evidence.

It would seem that the understanding of what is needed for evidence to influence policy and practice has all too often been set aside. A driving assumption appears to have been that certain methodologies – in particular, randomised control trials (RCTs) and variations thereupon – are a gold standard for evidence, and therefore a silver bullet for changing policy and practice. Although the need for this form of technical expertise is clear, the resulting polarised nature of the debate has arguably been to the detriment of wider shared goals of improving policy and practice in development and humanitarian efforts.

Research conducted by ODI, ALNAP, 3ie and others has highlighted the key challenges for impact evaluation going forward. There is a growing understanding that ‘the notion that evidence can be reliably placed in hierarchies is illusory’. The take-up of evaluation findings, and indeed any evidence, in policy and practice is as much a human, social and political process as a technical one. No particular approach yields more truthful or more generalisable findings than any other. No particular methodology guarantees an impact on policy and practice. And none of this is unique to development, by the way – many of these findings echo the perspectives of senior leaders in the medical world, where RCTs originate. As one of the UK’s leading practitioners, Sir Michael Rawlins of the National Institute of Clinical Excellence, put it: ‘Decision makers need to assess and appraise all the available evidence irrespective as to whether it has been derived from RCTs or observational studies, and the strengths and weaknesses of each need to be understood if reasonable and reliable conclusions are to be drawn’.

In order to help policy makers, researchers and practitioners navigate these thorny issues, ODI has recently published a think-piece that outlines key lessons for impact evaluations that make a difference. Drawing on the best available research, the lessons cover institutional readiness, implementation and dissemination, offering a set of pointers that we hope will serve as a useful checklist for planning, implementing and following up on impact evaluations. But these are not intended to be definitive: we warmly invite debate, discussion and reframing of these lessons.

If there is an overarching message, it is that all of us working on and advocating for impact evaluation – regardless of our methodological leanings – need to navigate a dynamic and shifting political terrain. Armed with multiple forms of evidence, we need to be ready to use them in diverse ways to influence, persuade and cajole. We need to find new ways of understanding and engaging the important stakeholders whom we seek to influence, especially those communities who we hope will benefit from development interventions. We will need to accept that learning is not linear, straightforward or automatic. Perhaps more challenging will be to agree that no single methodology or approach has a monopoly on learning. And we need to be honest about when we have been able to influence change, and when we haven’t.

It may be fair to say that the challenge we now face is learning how to learn. A starting point would be a more open and honest dialogue about the challenges we face in improving development and humanitarian efforts, focused on the complex dynamics of how knowledge gets used in policy and practice. This will frequently be uncomfortable – and humbling. But if we do it right the potential benefits are considerable.