How to Evaluate Humanitarian Action

ALNAP/InterWorks course offered at World Health Organization (WHO)

Geneva, 10–11 September 2002

Course Background

The two-day course “How to Evaluate Humanitarian Action” was developed by InterWorks in early 2002 at ALNAP’s request. The course draws on research and materials developed by ALNAP and other sources. It is designed for practitioners, particularly those responsible for evaluation within their organizations and consultants who undertake evaluations. The pilot course was held at the Overseas Development Institute in London in March 2002. For the Geneva course, more than 40 responses were received from two course announcements in July and August. A total of 24 participants attended, including 6 from WHO.

Venue

The course was held at WHO headquarters in Salle A, a room designed for formal meetings with fixed tables in a split U-shape. Participants occupied about two-thirds of the room, and there was plenty of space for separate group discussions. The PowerPoint projector and other equipment were very good. WHO provided coffee and tea for the breaks, and lunch was available in the WHO cafeteria.

Participants

There was a broad range of participants: seven consultants (three sponsored by Church World Service); seven from UN organizations (WHO and UNHCR); five from NGOs (ICRC, IOM, International Rescue Committee – Spain, Novib – Netherlands, MedAir – Switzerland, Swiss Peace Foundation, ZOA – Netherlands); two from donor organizations (CIDA and JICA); and one from academia (King’s College – London). The diverse mix and experience of the participants enriched the discussion.

Course Organization

The course materials and presentation were designed by InterWorks in cooperation with ALNAP. The supporting arrangements at WHO were excellent overall and were organized by the Department of Emergency and Humanitarian Action, particularly by Gaya Gamhewage, Technical Officer of Training, and Archie Joseph, Training Assistant. Sheila Reed from InterWorks was the course manager, and Gaya Gamhewage also facilitated.

Session Contents

A brief description of the sessions follows.

Tuesday, 10 September

Introduction

Dr. Alessandro Loretti, Coordinator of Emergency Health Intelligence and Capacity Building in the Department of Emergency and Humanitarian Action, welcomed the participants and described WHO’s objectives in sponsoring the course. WHO is committed to improving its performance in monitoring and evaluation for the benefit of affected populations, to providing additional tools for public health management, and to increasing its accountability to member countries and international partners.

The participants then introduced themselves, giving their positions, locations, years of experience in humanitarian action, and their main objectives for the course. The participants and facilitators had a total of 129 years of experience to offer the course.

Session 1

The first session provided a “Foundation for Evaluating Humanitarian Action”. Topics discussed included the relationship of evaluation to learning and accountability, the values of humanitarians, and the need to uphold internationally, nationally and locally recognized standards and instruments. Working in four groups, the participants were asked to discuss the main purposes of evaluating humanitarian action and to agree on the top three. These turned out to be: 1) learning; 2) accountability; and 3) assessment of how well goals and objectives were achieved. A few groups noted that considerable debate was required to pinpoint the top three, and in some cases there was no consensus.

Session 2

The goals and characteristics of “Utilization Focused Evaluation” and the various management arrangements for an evaluation were discussed. The criteria promoted by ALNAP for the evaluation of humanitarian action (EHA) were reviewed, and it was noted that others could be added or some removed depending on the goals of the evaluation. The importance of the Terms of Reference was highlighted: it reflects the planning process, the goals and criteria selected, and stakeholder buy-in, and it serves as a critical guide for the evaluator. The four groups reviewed a sample TOR, with each group scrutinizing a different part of it and noting what they found satisfactory and what concerned them.

Session 3

The TOR exercise ran over into this session. Following the report-back on the TOR exercise, the discussion turned to “Team Membership and Management”, including the characteristics of effective teams, the need for team building, and the need to maintain relationships with management and other stakeholders throughout the evaluation.

Session 4

The participants split into groups to discuss four scenarios representing problems in team building and management and then reported their insights to the plenary.

At the end of the day, the facilitators asked the participants to discuss in their groups what had and had not satisfied them about the day of training. Participants voiced their views to one reporter per group, and the four reporters then met with the facilitators to discuss the outcome. On the satisfaction side, participants appreciated the enthusiasm of the facilitators, the relevance of the materials, the excellent participation, the diverse composition of the group, the useful exercises, “a nice learning atmosphere”, and the wide range of experience brought in by both participants and facilitators.

There were also a number of concerns. Participants felt that the TOR and team exercises were too long and that more time should be spent on the criteria. More clarity was needed about the materials to be covered and the goals of the exercises, and examples of different types of evaluations were wanted. The reporters kindly stayed well beyond the scheduled time to discuss ideas for improving the course.

Wednesday, 11 September

Session 5

Sheila briefed the plenary on the four reporters’ comments and addressed some of the concerns and information gaps. She committed to providing support after the workshop, particularly where questions remained unanswered or participants felt stuck at certain stages of an evaluation. “Implementation and Data Collection Strategy” was the first topic, with an emphasis on including affected people and other stakeholders in every evaluation. After a discussion of qualitative and quantitative methods and their strengths and weaknesses, it was recommended to combine the two, in particular bringing some features of quantitative methods into qualitative work. Qualitative and quantitative sampling techniques were reviewed.

Session 6

The session focused on “Participatory Methods” and on dealing with bias. The selection and application of methods to answer questions addressing the criteria were demonstrated. Two case studies were then described. The first, by Gaya, concerned her experience with a Sri Lankan NGO and the evaluation of a project affecting 8,000 children in the military-controlled border zone. Innovative techniques were used to gather information in a society where participation was the norm, including a facilitated survey that helped children recall their feelings about the conflict.

Smruti Patel talked about her experience with participatory evaluations in India and other countries. One innovative technique was to ask villagers to draw two timelines, one indicating when a drought began and one indicating when assistance was received. The comparison revealed a gap, which inspired the village to advocate for better early warning and a more timely response. Smruti’s emphasis was on using evaluation methods that “give back” to the community rather than just extracting information.

Session 7

“Data Analysis Tools”, as well as tools to help organize information, were reviewed, such as the problem tree, capacities and vulnerabilities analysis, and the critical path. Some time was spent discussing the application of the Sphere standards in relation to local standards. A “methods and analysis” exercise consisting of four sets of possible problems was discussed in the groups. The groups presented their solutions for addressing and avoiding the problems and offered many innovative ideas.

Session 8

Gaya presented some background on reaching conclusions, report writing and dissemination. It was emphasized that the evaluation report should be long enough to fully address the issues and to demonstrate how the conclusions were reached from the data. The report must be well written and well formatted in order to gain the attention of people who are inundated with reports every week. As a best practice, results should be presented in a seminar with stakeholders to gain their feedback; if this is not possible, the evaluator should seek individual feedback. The evaluation itself should be evaluated using the ALNAP pro forma, which is also useful as a planning tool.

Conclusion

At the close of the day, participants were asked to reflect on the most important best practice or lesson that they had gained from the course. They mentioned the need to review and strengthen the TOR, the need for stakeholder buy-in, the need to clarify the goal or purpose of the evaluation, and the need to evaluate one’s own evaluations. More time should be spent on developing the TOR and on team building, meaning specifically that more of the budget should be dedicated to evaluations. One participant felt strongly that community involvement still needed to be moved to the forefront of any evaluation strategy.

Participants were then asked to write down one or two actions that they would take as soon as they returned to work to improve evaluations. A few had, during the workshop, already modified TORs for evaluations they were about to undertake. One participant said he would go back to evaluations he had conducted over the past two years to see whether and how the recommendations had been used. Most mentioned sharing the workshop materials or adapting them for training within their own organizations. (All overheads and the workbook were provided on a CD.) Most wished to take time to digest the materials and to use them in their next evaluation.

Course Evaluation

The following analysis is derived from 20 completed participant evaluation forms.

General Course Rating

Of 20 people, 6 rated the course as Excellent, 8 as Very Good, 4 as Good and 2 as Fair.

Average Responses

The responses below are averages from 20 responses, on a scale of 1 to 5, with 5 indicating the highest degree of satisfaction.

  1. Subject matter was adequately covered: 3.8
  2. Content was suitable for my background and experience: 4.0
  3. Programme was well-paced: 3.7
  4. Notebook contents were relevant: 4.7
  5. Participants were encouraged to take an active part: 4.7
  6. The course met my individual objectives: 4.1
  7. The course was relevant to my job: 4.2
  8. I would recommend this programme to my colleagues: 4.2
  9. Facilitation: 4.6
  10. Group exercises: 3.9
  11. Case study exercise: 3.8
  12. Meeting space: 3.2
  13. Meals/refreshments: 4.0
  14. Overall organisation: 4.3
  15. Other participants: 4.5
  16. Was the course length correct? (9 responses) Too short? (11 responses)
  17. Were there just enough participants? (19 responses) Too many? (1 response)

(For questions 18–20, the comments were organized under the most common topics.)

  18. If more than one subject was covered, which received too much or too little time?

Too much time:
  • Group exercises take up time better spent on discussing the teaching material
  • Too much on team building and TOR
  • Teams – too much time
  • Teambuilding too much
  • Too much TORs and teambuilding
  • Team work – too long
  • TOR exercise too long
  • Too much on team building
Too little time:
  • Too little time on practical use of terms like effectiveness – need exercises on these criteria
  • More on process and sharing of experiences
  • Follow-up - too little time
  • Too little – standards and criteria
  • Too little time on qualitative and quantitative surveys
  • More on how communities should participate in the evaluation
  • More time for best practices and sampling
  • Designing criteria and analysis were too little
  • Much too little time on impact assessment methods
  • More on identification of indicators
  • Less on team and more on ways of doing different evaluations
  • More time on criteria and demonstrate methods and tools per criteria
  • More time spent on analysis of an evaluation, both good and poor examples
  • More on participation methods in evaluation

Overall timing:
  • Two days were not enough for this course
  • Day 2 was too rushed
  • Time allocation on day 1 was unbalanced but day 2 was perfect
  • Do it slower over 3 days
  • Try to add more days for digestion of the course
  • 3 day course
  • There are too many methods to cover – either summarize or prolong the course
  • Short time to digest all information we have received

  19. Do you have any suggestions that you feel could improve this course?

Presentations and Power Point slides:
  • Jumping around in the slideshow created unrest – develop them more
  • It was not beneficial in some cases to discuss impromptu issues
  • An introduction at the beginning on what will be included and how
  • Give out copies of the PowerPoint slides right away
  • More flow to the presentation, use participatory methods to convey some of the sessions – less overhead slides
  • Acknowledge sources of information of all material
  • Make a list of unanswered questions to be followed up later
Improvement in Exercises:
  • Analyse in-depth a case study for each step of the evaluation or presentation of a case by a participant
  • More concrete group exercises on substantive issues, more case studies with plenary debriefing
  • Introduce case study exercise to facilitate implementation in the field
  • In groups, re-write the TOR
  • Sharing most recent discussions being held among UN, humanitarian and NGOs
  • Guest speakers on selected topics such as data collection, sampling techniques, data analysis
Pre-course materials:
  • Include some other websites in the pre-course materials, not just ALNAP
  • More references and discussions on pre-course materials
  • Include reading the TOR in the pre-course materials
  • The pre-course reading was relevant and informative
  • Make more linkages with the pre-reading
General venue:
  • More attention to seating arrangements
  • Better layout of the room
General course design:
  • The methods also apply to development
  • The course was too general as participants had relevant experience
  • Divide presentations between two trainers
  • Conduct workshops on more specific topics in evaluation
  • Have a clear target group - either designers, consultants or implementers of projects
  • Make efforts to enlist participants from implementation agencies as well as donor agencies to demystify the process
  • Make it more relevant to humanitarian action
  • Go back to the objectives of the participants and see if they were met

  20. Any other comments?

  • “Excellent facilitators”
  • “Excellent course – enjoyed both the sharing and networking”
  • “Thanks to the team that organized this workshop”
  • “Thank you so much”
  • “Excellent two days and worth attending”

Recommendations

The following recommendations are drawn from the oral and written comments of the participants and from discussions between Gaya and Sheila following the course.

Length of Course: A longer course would avoid a rushed pace and allow more in-depth coverage of the material, but the trade-offs in attracting participants also need to be considered. The two-day course is very attractive because it requires less travel time and fewer days away from the job; however, a two-and-a-half or three-day course may be appropriate. ALNAP is offering a similar three-day course in Japan at the end of September, which may help to indicate the most appropriate time frame.

Another idea is a residential course where the introductions start on the first evening. Two evenings together would allow participants to get to know each other better and promote greater involvement and discussion.

Mix of Participants: The mix of participants including NGOs, donors, UN staff, and consultants should be kept in order to provide a richness of experience and to bring in a wide range of issues.

Pre-course Packages: The design of the course should be flexible and responsive. A pre-course survey can be conducted to determine the existing level of evaluation skills and the needs and priorities for skill building. The backgrounds of the course participants should also be shared beforehand.

A broader list of recommended reading materials can be issued, although past experience shows that pre-reading is not always done. A review of key points from the reading material at the beginning of the course might help, as might more linkages to it within the sessions.

General design: The responses to the pre-course surveys would steer the contents. It will generally be important to devote the better part of one day to the criteria and another day to methodology.

Exercises: In general, donors struggle to determine the goals of an evaluation and how to make evaluations cost-effective. The “implementers” look for a good planning framework that allows them to select the best methods for information collection and analysis. Others want to focus on participatory and community-based evaluations. While these concerns largely overlap, the difficulty in a short course is how to address all levels of involvement, consider all dimensions of evaluation, and promote the relevant skills.

A case study that can be split up and used throughout the course is likely to be more effective than generalized exercises. Alternatively, a number of case studies illustrating different types or stages of evaluation could be assigned as pre-course reading; participants could then choose the one that best applies to their situation for group work during the course.

Facilitation Methods and Overheads: The course should definitely be split between two facilitators, and/or experts should be engaged to speak on certain topics. Skill building can be promoted, for example, by using a range of participatory methods in ways that illustrate how to apply them.

The number of overheads should be reduced, with more focus placed on participatory exercises. Since the overheads are subject to change until the last minute, CDs with the final overheads should be produced at the end of the course. The course notebook might contain the basic overheads to follow during the sessions.
