Top tips for the UK’s Independent Commission for Aid Impact (ICAI)

9 May 2011

Alison Evans
On Thursday 12 May Andrew Mitchell will launch his flagship Independent Commission for Aid Impact, or ICAI for short. A commitment dating back to the 2009 Conservative Green Paper, the establishment of ICAI represents, if not the final chapter, then a critical chapter in the Secretary of State’s ongoing narrative about accountability and value-for-money in UK aid.

So what will ICAI do? 

ICAI will be responsible for the independent scrutiny of UK aid spending (or at least a representative portion of it) across government departments.  With 95% or more of official development assistance (ODA) routed through DFID, this is primarily a tool to hold DFID’s feet to the fire. The Commission, which sits outside of government, will report through a series of evaluations, value-for-money studies and investigations to the International Development Select Committee (IDC) of the UK Parliament. The IDC will, in turn, demand a management response from the government to evaluation findings and recommendations.

The Commission is led by a Chief Commissioner (Graham Ward) and a Board of Commissioners (John Githongo, Diana Good and Mark Foster), supported by a small full-time Secretariat. The Commission will take strategic decisions on what should be evaluated and when; oversee delivery through a contracted service provider; and report to the IDC. The service provider (to be officially announced next week) is expected to bring a critical mass of evaluation and audit experience to the role, while contracting in specific thematic or geographical expertise as required.

There will be much interest in ICAI’s forward work programme, but before it is made public I thought I’d share some of my top tips for ICAI’s future success:

  1. Independence but not isolation. Independence helps foster a climate of impartiality in evaluation. Independence does not always have to be physical, but it does need to be clearly enshrined in reporting lines and the principles governing evaluation management; ICAI’s design does this. But independence in itself is no guarantee of effectiveness. Handled badly, independence can isolate the evaluator from the evaluatee and lead to a climate of mutual distrust and finger-wagging. In this scenario, impartiality may be protected but at the cost of both relevance and impact. Handled well, the value is there for all to see in the receptivity of those evaluated (and the broader community) to the insights and lessons learned, even if the findings initially make for uncomfortable reading. ICAI needs to get this balance right.
  2. Timeliness and relevance. Given the pace of change in the world these days, effective evaluation requires engagement with real-time decisions and challenges as well as ex-post development results. While gathering evidence after programme maturity is the most reliable method for assessing the full impact of ‘actions on the ground’, there are significant costs to this and an ever-present risk that the world has moved on before the evaluation findings are forthcoming. Blending formative and summative evaluations, and short-term assessments with longer-term impact studies, is probably the best way to overcome this dilemma. If ICAI is to be a success, it will be critical that it proves it can be timely and relevant as well as rigorous and independent.
  3. Horses for courses. There has been a fair bit of existential angst in the evaluation community in recent years about which methods provide the clearest evidence of what works, or not, in development. The debate has become more heated as challenges to aid effectiveness have intensified. ODI has contributed thoughts on this issue for some time and, at the risk of sounding Pollyanna-ish, there seems no better advice to give to ICAI than this: measure what is meaningful; match your methods to the task in hand; uphold the value of complementary and/or mixed methods; and always be open and transparent in your choices. Quality evaluation is not, after all, a one-horse race.
  4. Remember the demand side. ICAI will increase the supply of independent evaluations of UK aid. But supply is actually less of a problem than demand. Demand for robust, reliable policy and programme evaluation remains disappointingly low. This is particularly so in developing economies, where the institutions and incentives for evidence-based policy-making remain relatively weak. While it is not within ICAI’s mandate to address this ailing demand, it can ask why so much policy-making happens without the benefit of verifiable evaluation evidence, and add its voice to the case for changing how evidence is used to inform policy and planning.
  5. Evaluate the evaluators. No public body should be above scrutiny. So while ICAI will be looking to demonstrate value-for-money for the UK taxpayer, ICAI itself must also convince the public of its own cost-effectiveness in a rapidly changing aid and development context.

There is much at stake and ICAI has much to prove. I, for one, will be closely watching the watchdog over the coming months to see how it shapes up.