Strengthening the Research in Participatory Research

Richard Coe

Steve Franzel

ICRAF, Nairobi, Kenya

The problem: ‘The plural of anecdote is not data’[1]

There are numerous and well-documented reasons for using participatory methods in rural development activities. These often require intense involvement of facilitators in communities, necessarily limiting the number of farmers and communities reached by a project. If the objective is empowerment and improving livelihoods in those particular communities then such involvement is acceptable. The ‘research’ involves the participants – individuals and communities – discovering solutions to their problems.

However, in many cases the facilitators will also have broader research objectives. Many projects have the joint aims of (1) facilitating change among the immediate project beneficiaries and (2) providing evidence for efficient targeting and organization of more wide-scale activities. This second aim requires systematic collection of information on technological, institutional or policy changes and the processes that lead to them. This is a typical researcher agenda. However, it has to be carried out in the context of a participatory project. Some effort has to be made to ensure that the information collected is relevant beyond the immediate communities in which it is collected. Without this the result may be case studies whose applicability or generalisability is completely unknown.

For example, in an intensive participatory research exercise, farmers in three villages identify soil fertility as a major problem and decide to test improved fallows, among several other options, as a possible solution. Researchers want to know who in the community is likely to adopt improved fallows and why or why not, so they can provide this information to other communities across the area where soil fertility is perceived to be a problem. Researchers and farmers together develop several hypotheses: (1) farmers on clay-loam soils are more likely to adopt than farmers on sandy soils; (2) farm size does not influence adoption; (3) the poor are more likely to adopt than the better off because they lack cash for buying fertilizer. Data on these issues are not critical for farmers in the three villages, because they will find out for themselves from their own tests whether, for example, improved fallows are a good or bad idea on sandy soils. But the data are important for villages in the broader area, who would find it very useful to know, for example, that tree growth is much better on clay-loam than on sandy soils or that improved fallows are not a suitable practice for poor farmers.

Attempting to meet the two objectives, facilitating change among project beneficiaries and providing lessons of broader applicability that will benefit others, will inevitably lead to compromises. All stakeholders have to agree that these compromises are acceptable.

The solution: Use principles of design and analysis

Many research activities involve surveys of randomly selected farmers or carefully controlled trials comparing test and control treatments. The methods used to design such ‘classical’ studies (sometimes misleadingly called ‘statistical studies’) are based on principles which ensure that sound inferences can be made. These principles are equally important when the study is participatory and community based and includes objectives of community empowerment. This is not to say that we have to use randomized block designs or simple random samples. The context and its constraints, particularly the incorporation of participatory methods, mean that applying the principles can lead to very diverse types of study design. However, the key research design concepts remain the same[2]. These are listed in the following sections.

Much the same is true when it comes to analysis of data. The reasons for, and ideas behind, rigorous analysis of data are much the same in classical and participatory studies. The tools may be different, but they are available and valuable.

Research design concepts needed

Summarised below are the concepts of research design which are needed if the information collected is going to allow firm inferences to be made. Methods are not discussed. The appropriate methods will depend on context and the type of participation desired.

Objectives

Every study needs clear and precise objectives. If the objectives are vague then the results will be also. In this context an important component of the objectives is a statement of who the information is for. Generating information for participating farmers, for other members of the same community or for wider application requires different designs. Almost certainly there will be multiple objectives, and the feasibility of meeting them within a single study needs to be confirmed.

Comparison

Most objectives require comparison – of technology options and of social or biophysical environments. Studies have to be designed so that the conditions to be compared actually occur. Furthermore, effects will be clearest when some observations towards the extremes are included. Casual selection of participants will tend to result in most observations being around the middle of a gradient, with few of the more informative observations which occur towards the ends of the range. The idea of a ‘control’, or baseline against which others are compared, is useful but requires careful implementation. Different stakeholders may have different views about appropriate controls. A trial may thus require more than one control plot, or different farmers may have different control plots.

Confounding

Closely linked to comparison is the idea of confounding. If two conditions vary together then their effects cannot be separated. For example, if we want to compare different ethnic groups and rainfall zones, but the different groups tend to live in different zones, then it is not possible to distinguish their effects. Special effort has to be made to find groups living outside their usual zone to break the confounding.
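As an illustrative sketch (the households and counts here are invented), a simple cross-tabulation at the design stage shows whether two factors can be separated: empty or near-empty cells signal confounding.

```python
from collections import Counter

# Hypothetical survey records: (ethnic_group, rainfall_zone) per household.
records = [
    ("A", "high"), ("A", "high"), ("A", "high"), ("A", "high"),
    ("B", "low"), ("B", "low"), ("B", "low"),
    ("B", "high"),  # the single 'off-zone' household
]

# Cross-tabulate the two factors.
table = Counter(records)
groups = sorted({g for g, _ in records})
zones = sorted({z for _, z in records})

for g in groups:
    for z in zones:
        print(f"group {g}, zone {z}: {table[(g, z)]} households")

# Cells with zero (or very few) households mean the two factors are
# confounded: group and zone effects cannot be separated there.
empty_cells = [(g, z) for g in groups for z in zones if table[(g, z)] == 0]
print("confounded cells:", empty_cells)
```

With these invented data, group A never appears in the low-rainfall zone, so any difference between groups A and B could equally be a difference between zones.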

Uncertainty and variability

Every conclusion reached will be subject to uncertainty because the information on which it is based is incomplete. Some attempt must be made to measure this uncertainty, or we can have no confidence in the conclusions. The design must therefore contain elements of repetition so that the variability is revealed and the uncertainty in results can be deduced. If the objectives require an understanding of variability over time then the design has to specify what is held constant over time and what is allowed to change.

The variability inevitable in a study is not just a source of uncertainty, but also of new information. If explanations for some of the variability can be found then we have richer conclusions. The design therefore has to allow for this possibility.
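As a sketch of how repetition lets uncertainty be quantified, the following uses invented plot yields and a simple bootstrap to attach an interval to an estimated mean; the figures are purely illustrative.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical maize yields (t/ha) from repeated plots of one option.
yields = [1.8, 2.4, 2.1, 3.0, 1.6, 2.7, 2.2, 2.9, 1.9, 2.5]

# Bootstrap: resample with replacement to estimate uncertainty in the mean.
boot_means = []
for _ in range(5000):
    resample = [random.choice(yields) for _ in yields]
    boot_means.append(statistics.mean(resample))
boot_means.sort()

low = boot_means[int(0.025 * len(boot_means))]
high = boot_means[int(0.975 * len(boot_means))]
print(f"mean = {statistics.mean(yields):.2f}, 95% interval approx ({low:.2f}, {high:.2f})")
```

Without the repeated plots there would be a single number and no way to say how far it might be from the truth.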

Heterogeneity and stratification

Every population being studied will be heterogeneous. Sources of heterogeneity which are known or suspected at the start of the study, such as gender, ethnicity or agroecozone, can be used to stratify the study. Stratification allows differences between groups to be revealed and increases the efficiency of data generation.
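A minimal sketch of stratified selection, using an invented sampling frame in which gender and agroecozone together define the strata:

```python
import random
from collections import defaultdict

random.seed(1)  # reproducible illustration

# Hypothetical sampling frame: (farmer_id, stratum), where the stratum
# combines known sources of heterogeneity, e.g. gender x agroecozone.
frame = [(i, ("woman" if i % 2 else "man",
              "highland" if i < 60 else "lowland")) for i in range(100)]

# Group the frame by stratum, then draw a fixed number at random from each,
# guaranteeing that every group is represented in the study.
by_stratum = defaultdict(list)
for farmer, stratum in frame:
    by_stratum[stratum].append(farmer)

n_per_stratum = 5
sample = {s: random.sample(farmers, n_per_stratum)
          for s, farmers in by_stratum.items()}

for s, farmers in sorted(sample.items()):
    print(s, farmers)
```

A casual or fully unrestricted selection could easily miss a small stratum entirely; stratifying first makes the between-group comparisons possible by design.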

Bias

Bias occurs if measured differences (between options, farmers etc) consistently over- or underestimate real differences. Bias may arise through the measurement tools used but also through the design – for example if only wealthy farmers are included. Random selection is the key design idea needed to avoid bias. It may often be impractical to use, but we may well be able to ensure that the data behave as if random sampling had been done.
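The effect of non-random selection can be shown with simulated data: here a population is invented in which wealthier farmers tend to have higher yields, and a ‘wealthy-only’ sample is compared with a simple random sample of the same size.

```python
import random
import statistics

random.seed(7)  # reproducible illustration

# Hypothetical population: wealthier farmers tend to have higher yields.
population = [{"wealthy": w,
               "yield": 2.0 + (0.8 if w else 0.0) + random.gauss(0, 0.3)}
              for w in [True] * 30 + [False] * 70]

true_mean = statistics.mean(p["yield"] for p in population)

# Biased design: include only the wealthy (easy-to-reach) farmers.
convenience = [p["yield"] for p in population if p["wealthy"]]

# Unbiased design: a simple random sample of the whole population.
srs = [p["yield"] for p in random.sample(population, 30)]

print(f"true mean      {true_mean:.2f}")
print(f"wealthy-only   {statistics.mean(convenience):.2f}  (biased upward)")
print(f"random sample  {statistics.mean(srs):.2f}")
```

The wealthy-only estimate is systematically too high, however many farmers are visited; the random sample is noisy but centred on the truth.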

Measurement

Measurements are made to meet specific information requirements. As for the rest of the design, the choice of tools and methods will depend on the detailed objectives, but must be made rationally, not arbitrarily.

The layout hierarchy

Nearly all studies involve a hierarchy of ‘units’ – for example districts, communities within districts, farms within communities, fields within farms, niches within fields and plants within niches. At every level in the hierarchy we have to decide how many units will be included, which units to include and the process for choosing these. These decisions have to be made with full understanding of the design concepts above.
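These decisions can be sketched as a multi-stage selection over an invented hierarchy (the numbers of units taken at each level are arbitrary choices for illustration only):

```python
import random

random.seed(3)  # reproducible illustration

# Hypothetical layout hierarchy: districts -> communities -> farms.
districts = {
    f"district_{d}": {
        f"community_{d}_{c}": [f"farm_{d}_{c}_{f}" for f in range(20)]
        for c in range(8)
    }
    for d in range(4)
}

# Multi-stage design: at each level decide how many units to take and how
# to choose them -- here 2 communities per district, 5 farms per community.
selected = {}
for district, communities in districts.items():
    chosen_communities = random.sample(sorted(communities), 2)
    selected[district] = {
        comm: random.sample(communities[comm], 5) for comm in chosen_communities
    }

total_farms = sum(len(farms)
                  for comms in selected.values()
                  for farms in comms.values())
print(f"{total_farms} farms selected across {len(selected)} districts")
```

The same total number of farms could be reached with many communities and few farms each, or few communities and many farms each; which is better depends on where the variability lies, which is why the design concepts above must inform these choices.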

Quantitative analysis

Much of the data collected in participatory studies is qualitative, and more of it is ‘semi-quantitative’ (eg ranks and scores), not amenable to analysis by the familiar statistical tools used in classical studies. However, there are strong reasons for doing a quantitative analysis. These reasons include:

  1. Compact and informative summaries of voluminous and complex data can be found.
  2. If the research design was adequate then we can be explicit about the uncertainties in the results.
  3. We can unravel complex interactions and influences.
  4. Unexplained variation and surprising observations are highlighted, suggesting new hypotheses or topics for investigation.
  5. Quantitative results can be used in further analyses – eg impact models or mapping for recommendations – in a way which qualitative results cannot.

Well-developed methods exist for the analysis of score and rank data. Other interesting tools are emerging for other types of data – for example, quantitative analysis of influence diagrams. Quantitative analysis of qualitative data (eg transcripts of a group interview) requires the data to be coded. Criticism of quantitative analysis often comes down to two problems which are easily rectified:

  1. Coding responses too soon (eg by using precoded closed questions rather than open questions).
  2. Failing to use the original narrative after coding. The uncoded information can help explain or understand variation or odd observations revealed in quantitative analysis of the coded data.

Both these problems can be overcome by a change of practices rather than any fundamental change of approach or new tools. A third criticism of quantitative analysis is more philosophical: the wish of many participatory researchers to assess only consensus among group members rather than the views of individual members. For example, many PRA manuals recommend having community members agree on a single matrix ranking of, say, preferences among crop varieties, rather than having each member do their own matrix ranking and then assessing differences. It is more constructive to describe and understand heterogeneity than to assume there is none.
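One way to describe, rather than suppress, such heterogeneity is to record each member's ranking separately and compute an agreement measure. The sketch below uses invented rankings and Kendall's coefficient of concordance W (1 indicates full consensus; values near 0 indicate little agreement).

```python
# Hypothetical individual rankings of four crop varieties (1 = most
# preferred) by five group members, recorded separately rather than
# as one consensus matrix.
rankings = {
    "member_1": {"V1": 1, "V2": 2, "V3": 3, "V4": 4},
    "member_2": {"V1": 1, "V2": 3, "V3": 2, "V4": 4},
    "member_3": {"V1": 2, "V2": 1, "V3": 3, "V4": 4},
    "member_4": {"V1": 1, "V2": 2, "V3": 4, "V4": 3},
    "member_5": {"V1": 4, "V2": 3, "V3": 2, "V4": 1},
}

m = len(rankings)                    # number of judges
varieties = sorted(next(iter(rankings.values())))
n = len(varieties)                   # number of items ranked

# Kendall's coefficient of concordance W (no tied ranks):
# W = 12 * S / (m^2 * (n^3 - n)), where S is the sum of squared
# deviations of the rank totals from their mean.
totals = {v: sum(r[v] for r in rankings.values()) for v in varieties}
mean_total = sum(totals.values()) / n
S = sum((t - mean_total) ** 2 for t in totals.values())
W = 12 * S / (m ** 2 * (n ** 3 - n))

print(f"rank totals: {totals}")
print(f"Kendall's W = {W:.2f}")  # 1 = perfect consensus, near 0 = none
```

A single negotiated matrix would have hidden the disagreement visible here (member_5 ranks the varieties in nearly the reverse order of the others); the low W makes that heterogeneity explicit and worth investigating.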


[1] This heading is quoted by Ian Wilson in Wilson IM (2000). Sampling and Qualitative Research (DfID: unpublished theme paper), which describes very clearly why and how ideas of sampling can be used to increase the effectiveness of qualitative research projects.

[2] A very readable book on this subject is King G, Keohane RO, Verba S (1994) Designing social inquiry: scientific inference in qualitative research. Princeton University Press. 247pp.