Thursday 21 July 2011

Evaluating trade assistance programmes

What Aid for Trade said about evaluation

On Monday I attended the final session of the WTO's Aid for Trade meeting, which focused on monitoring and evaluation.

Why do we need M&E?

An obvious question, but worth asking: electorates are skeptical of development aid’s impact and want to see independent, scientific studies demonstrating that it works.

· A recent poll in the Financial Times showed that respondents in most OECD countries considered defence and development aid to be priority areas for spending cuts.

· According to the ESCAP delegate, 67 cents of every Australian dollar ends up in expatriate salaries.

A4T evaluation particularly challenging

The moderator, Michael Roberts (WTO), pointed to the success of evaluation in the health sector (e.g. does a dollar spent on health result in lives saved?), but asked how it can be applied to A4T. The “gold standard” of evaluation seen in health is more difficult to achieve in trade because of the difficulty of assigning causality.

Given these challenges, evaluation should at least be a learning process, with feedback loops in place. Fear of making mistakes should be replaced by analysing and learning from them, as in the private sector (see Tim Harford’s new book “Adapt” on the value of making mistakes).

The wonders of evaluation

Aaditya Mattoo, a World Bank expert on evaluation, illustrated how powerful evaluation of aid programmes can be with an example of randomized trials by the Poverty Action Lab and the work of Esther Duflo.

Problem: kids in rural Kenya were not getting sufficient education. How does the government get the biggest bang for its buck?

The study showed that de-worming kids (in much of the developing world, most kids have intestinal worms, leaving them sick, anemic and more likely to miss school) resulted in 25 percent less absenteeism. The cost (35 cents) compared very favourably with other policy options such as paying for school uniforms or an additional teacher (around US$100).
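
To make the comparison concrete, here is a rough back-of-envelope sketch (in Python). The de-worming figures (35 cents, 25 percent less absenteeism) are the ones cited in the session; the idea that a US$100 option could at best eliminate absenteeism entirely is a hypothetical upper bound added purely to illustrate the calculation.

    # Back-of-envelope cost-effectiveness comparison using the figures above.
    deworming_cost = 0.35        # US$, cost of de-worming cited in the session
    deworming_effect = 25.0      # percentage-point reduction in absenteeism

    alternative_cost = 100.0     # US$, uniforms or an extra teacher
    alternative_effect = 100.0   # hypothetical best case: absenteeism eliminated

    cost_per_point_deworming = deworming_cost / deworming_effect        # ~US$0.014
    cost_per_point_alternative = alternative_cost / alternative_effect  # US$1.00

    print(f"De-worming:  ${cost_per_point_deworming:.3f} per point of absenteeism avoided")
    print(f"Alternative: ${cost_per_point_alternative:.2f} per point (best case)")
    print(f"De-worming is at least {cost_per_point_alternative / cost_per_point_deworming:.0f}x more cost-effective")

Even under that deliberately generous assumption for the alternative, de-worming comes out dozens of times more cost-effective per percentage point of absenteeism avoided.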

However, such randomized trials are difficult to implement in the A4T context. How can you get robust data sets? How can you assign causality when many other factors affect outcomes (e.g. other economic and social policies)?

For this reason, he said, A4T evaluation is at a primitive stage compared with health, but we need to be pragmatic.

The three tyrannies of evaluation

Despite the need for figures showing value for money, Mattoo warned the meeting against what he called the tyrannies of evaluation. These are:

Methodology – i.e. avoiding evaluation because we have not perfected the methodology.

Causality – i.e. insisting on being able to say “this programme was wholly responsible for a 10% rise in incomes”.

In A4T we cannot easily assign causality, but we can draw more qualitative conclusions. For example, an IADB study of a trade capacity building programme could not attribute an increase in the volume of trade to the programme, but made the useful finding that it increased export diversity. A lesson from another study was that A4T aid should not be directed to large companies but to marginal ones.

Measurement – i.e. avoiding evaluating what cannot be measured, so we end up focusing only on what we can measure.

At a side event later in the evening, the Bank launched a new study, Where to Spend the Next Million? Applying Impact Evaluation to Trade Assistance, which looks at how M&E is being applied in A4T.

