Thursday 15 September 2011

Training on project design

Results-orientated Project Design

This week I attended an in-house training on project design. It was a big commitment of time but valuable for learning more about how to strengthen project design.
This type of exercise should help us move towards stronger results-based management and strengthened accountability in Aid for Trade delivery (see my July in-house blog after the A4T Review).
Here are some snapshots of the discussions. They are far from exhaustive - just a few “takeaways” that I found interesting.
Outcome and outputs
What is the difference?
There was an exercise to work out the difference between outcomes and outputs. It became evident that there was a grey area (and thus confusion) between the two. The training was useful in clarifying the difference.
The definition used initially in the training was:
Output: a deliverable from the project in terms of a product (e.g. a market guide) or a service (e.g. a training event)
Outcome: the effect of using the output (e.g. a company enjoys greater sales from having used the guide or attended a training event).
Confusion arose partly due to the definition used by the OECD, in which the output is not just a deliverable but “may also include changes resulting from the intervention”. Outcome is described as “the likely…effects of an intervention’s outputs”. There is strong similarity between these two definitions, making it more likely that we will interpret them differently.
Committing to outcomes entails risks
We learnt from Irene that one agency (GIZ) does not commit to outcomes – presumably because such a commitment would entail unacceptable levels of risk – an agency has considerably more control over achieving outputs than outcomes.
However, as someone pointed out, it is surprising that the German parliament accepts this. Generally speaking, the taxpayer is paying for outcomes (“poverty reduction”, “educating women”, “protecting children from diseases”). Indeed, the MDGs are stated outcomes, and the UN and donors are signed up to delivering these.
Agencies making commitments on outcomes (like the UN) therefore have to assess the risk in the linkage between outputs and outcomes, and prepare appropriate indicators and baselines for measuring whether they have been achieved. Also:
• Sharing best practice and experience, publishing evaluations, and ensuring feedback loops help us to learn about these linkages.
• To what extent do linkages vary according to different economic, social and cultural contexts?
• Buy-in or co-financing to the project from national stakeholders can demonstrate that outputs lead to positive outcomes.
• Evaluations of cost effectiveness should measure the ratio between inputs and outcomes (not outputs).
• “the more specific (in defining the outcome and outputs) you are, the better you can manage a project”

Indicators
The value of my Friends and Links
An indicator is a factor that shows evidence of an outcome being achieved. (e.g. sales increase of an entrepreneur). We reviewed weak and strong indicators, again from project documents.
The IMDIS-type indicators explain outputs but are not so useful as indicators of outcomes. For example, reporting on the number of workshops organized does not tell us anything about the outcome (i.e. companies learning about market trends and changing their business strategy as a result).
Related indicators used by projects, like “Number of buyers contacted”, also provide an incomplete picture. The number of buyers is not that informative. For example, when an agency takes companies to a trade fair, we want to know the quality of buyers’ enquiries (i.e. ones that lead to business), not their number. One high-quality enquiry can be worth more, in terms of sales or a long-term partnership, than 10 vague or low-value enquiries.

Social networking indicators are analogous. Having lots of “Friends” on Facebook doesn’t tell me anything about the quality of those friendships. Similarly, a job-seeking graduate would prefer to be connected to one influential executive than to 10 low-level employees.
Assessing risk
We reviewed risks and assumptions from project documents according to
• measures for the level of probability (of the assumption holding) and
• the likely impact on the project if the assumption doesn’t hold.
If you can apply a quantitative value (say from 1 to 4) to each, multiplying the two gives a risk rating, allowing Management to get a snapshot of risk. An appealing idea, but it shouldn’t be a substitute for careful analysis of the risks and assumptions.
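As an illustrative sketch (the assumption names and scores below are hypothetical, not from the training), the probability-times-impact rating could be computed like this:

```python
def risk_rating(probability: int, impact: int) -> int:
    """Multiply probability and impact, each scored 1 (low) to 4 (high)."""
    for value in (probability, impact):
        if not 1 <= value <= 4:
            raise ValueError("scores must be between 1 and 4")
    return probability * impact

# Hypothetical assumptions from a project logframe:
# (probability the assumption fails, impact on the project if it does)
assumptions = {
    "Market conditions remain stable": (2, 4),
    "National partner provides co-financing": (3, 3),
    "Trainers available in-country": (1, 2),
}

# Rank assumptions by risk rating, highest first, for a management snapshot
for name, (prob, impact) in sorted(
        assumptions.items(), key=lambda kv: -risk_rating(*kv[1])):
    print(f"{risk_rating(prob, impact):2d}  {name}")
```

The single number makes the snapshot easy to scan, but, as noted above, it compresses away the reasoning behind each score.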
Re-planning
Logframes do not need to be written in stone.
They can be revised during implementation, for example if external factors change (e.g. market conditions).
In the 90s, changing the logframe was viewed negatively by evaluators. Now evaluators take the opposite view: changes are welcome, as they show that a project is flexible and adaptive to changing conditions (e.g. in the market, socio-economic conditions). Do we have the scope for that, particularly with respect to what we have agreed with donors? Yes, if they are agreeing to outcomes – how we get there (the type of activities and outputs, or the “road” we follow) should allow for flexibility. The mid-term evaluation is an opportunity to re-evaluate the planned outputs and even the outcome.
Poorly designed projects can be approved due to political or disbursement pressures. However, the organization will pay the price later (outcomes not achieved, financial problems, etc.).
A step too far? Applying logframes to family life
We heard anecdotes from several participants that logframes were being applied to personal lives, including:
• An assessment of whether it was a good idea to get married – the risks and assumptions revolved around the fact that the two were from very different cultures. His fiancée wasn’t impressed.
• Used for planning a family weekend
• Used for resolving a family conflict between in-laws (“Outcome”: harmony between in-laws; “Outputs”: built capacity of in-laws to show kindness and understanding, etc.)

Other takeaways
“we must be in learning mode, not punishment”
“project management is a learnable skill”
“evaluation is mainly a learning exercise, not policing”
Having a scoring system based on a World Bank-type checklist would be “instructive and bring transparency” to the review process.
