Mareike Schomerus, Justice and Security Research Programme, LSE
The session aims to open up new paths of discussion by squaring the very definition of empirical research (acquiring knowledge through observation and experience) with policy requirements. Debating evidence in this way requires constant questioning, redefining success and failure, and adjusting approaches as new plausibilities come to be understood, all of which could change perspectives on finding evidence for what works in security and justice programmes.
The seminar made it clear that evidence carries semantic weight. It is a value-based word whose meaning differs across organisations, hence its deliberate presentation in the title in inverted commas.
So what is evidence? Working definitions generally focus on proving or disproving the methodology behind an intervention: they seek to demonstrate the cause and effect of those interventions to varying levels of quality, from circumstantial hearsay ‘up’ to counterfactual evidence generated through control groups.
But is this presupposed ‘hierarchy of evidence’ useful? Perhaps not. Perhaps the right question is not what is the ‘best method’ of generating the ‘best type’ of evidence, but how each of the many different types of evidence can best be understood to create a fuller picture. Categories to help think about the utility of ‘evidence’ more creatively could be: proof, possibility, principle, and plausibility.
Some of the associated drawbacks and possibilities presented by each of these categories were then discussed.
Upon taking office, Justine Greening said that DFID’s main focus was to be contextually relevant – ‘to do the right things in the right places at the right times.’ This is a nice sentiment, but it shirks the question of how to gauge the right action and the right moment. Usually it entails looking for ‘proof’ that one action in one particular moment has had a specific effect, and then using this ‘proof’ to guide policy elsewhere. This suggests that ‘proof’ from the past can help predict the future, or that it can provide final answers to security and justice issues. This is an obvious leap of faith, if not an outright fallacy. Perhaps we should reject the concept of ‘proof’ as evidence altogether because, in reality, people’s experiences of security and justice are constantly being recalibrated and there are no end points or set interventions that ‘work’.
Embracing possibility, by contrast, means not relying on typical proof at all. Rather it involves collecting myriad ideas and inspirations to create a newer, bigger picture of what is actually going on. Sadly, there is little space for this in the typical donor/NGO programme sphere, where proof-based approaches to business cases and monitoring and evaluation dominate. Yet there needs to be more space for creative thinking to check assumptions rather than repeat actions. Embracing possibility can spur more nuanced and applicable solutions to security and justice challenges.
The defining characteristic of a principled approach to evidence and programming is inflexibility. It attributes values to security and justice interventions that are almost doctrinal – for example, that they are central to state-building and therefore must be implemented at all costs in all places. Yet this principle closes the door to other, non-principled programmatic options that may in fact be better for peace and development – for example, working with non-state security actors. It shirks pragmatism and risks slipping into one-size-fits-all approaches, replete with indicators and ‘evidence’ that measure progress towards principles that in fact have little bearing on people’s actual experiences of security and justice.
Plausibility involves taking a much more detailed look at the intended outcomes of a programme and considering if and how they might fit within the intended context. Many programmes fail to do this and consequently have completely implausible aims. To be plausible in your approach to security and justice evidence allows a fuller range of information to come to the fore and be considered ‘evidence.’ It also takes you a step away from the typical search for cause and effect evidence and permits a more reflective appreciation of what changes may plausibly be happening and why. In doing this it challenges chains of assumption and improves the utility of ‘evidence’ by providing multiple insights.
So given these different lenses, what works?
The short answer is, we don’t know. We haven’t learnt how to record dynamic changes particularly well and there is no lab-style answer anywhere on the horizon. But we do know our measurements need to match the flexibility of the field if ‘evidence’ is to be reflective of actual situations and useful enough to continually challenge our assumptions of what works. Ultimately, it is about using ‘evidence’ to stay broad in our programmatic options and understanding rather than use it to incorrectly whittle security and justice interventions down to a limited proof menu of ‘what works.’ Because it won’t.
This began a rich discussion, with initial questions around the utility of theories of change, of problem-based approaches, and about the idea of donors supporting good teams rather than just good proposals steeped in fashionable data-collection methods and language.
There was concern that theories of change can sometimes amount to little more than a different layout for quite similar, standard cause-and-effect thinking. Sometimes ‘it feels different but isn’t really’, as one participant noted, and is still based on ‘principles’ rather than ‘plausibility’. It was felt that theories of change need to be thought about more critically and creatively, and taken as one part of the solution rather than the solution itself.
The difficulties of measurement were also discussed. How do you measure change? Do you work backwards from a problem (taking into account the subjectivity of who is defining the ‘problem’)? Do you look harder for unexpected changes? Do you work more ‘politically’? There is no clear ‘evidence’ that any of these are the ‘right’ way to undertake measurement, but neither is there any evidence to write them off. Again, the discussion highlighted the importance of remaining open to varied ways of gathering and measuring ‘evidence’.
Another participant asked whether we should also be thinking about incentives for collecting evidence. Why, in fact, are we collecting evidence, whether as organisations or as individuals? Introspectively, it was questioned whether there was actually an appetite for change and learning in the security and justice field, or just an appetite for being seen to be collecting ‘evidence’.
Ultimately, we need to have more curiosity and enquiry about our work. As donors and practitioners, we need to have the courage to intervene in security and justice programmes as and when changes are needed and possibilities arise. The current predilection for sifting for ‘proof’ at pre-set data collection points in programming is not honest or dynamic enough to inform security and justice interventions that accurately meet people’s needs.