
M&E as learning: Rethinking the dominant paradigm

JIM WOODHILL[*][1]

This chapter argues that for monitoring and evaluation (M&E) to make a useful contribution to improving the impact of soil and water conservation work, there must be a much greater focus on learning. A learning paradigm challenges the quantitative, indicator-based and externally driven approaches that have characterised M&E in the development field. The chapter proposes five key functions for M&E: accountability, supporting operational management, supporting strategic management, knowledge creation and empowerment. From this perspective on the functions of M&E, current trends and debates are examined, leading to the identification of the key building blocks for a learning-oriented M&E paradigm. The chapter concludes by outlining the elements of a learning system that embodies such a paradigm. The argument of the chapter is not to throw away indicators (both quantitative and qualitative) or to compromise the collection and analysis of good data. Solid learning requires solid information. Rather, this chapter asks those in development initiatives to place the indicator and information management aspects of M&E in a broader context of team and organisational learning. The challenge is to use effective reflective processes that can capture and utilise actors’ wealth of tacit knowledge, which is all too often ignored.

Introduction

Monitoring and evaluation (M&E) is high on the development agenda, as this book testifies. Yet there remains a vast gap between theory and practice. Almost universally, donors, development organizations, project managers and development practitioners want to see better monitoring and evaluation. But this does not come easily. Why is this the case, and what is going wrong?

This chapter offers the building blocks of an alternative M&E paradigm[2] that aligns more closely with practice and with the realities of how people create knowledge, make sense of their situations and adapt to change. Such a paradigm focuses on individual, group and organisational learning, a perspective which has been absent in classical numerical and indicator-driven approaches to M&E. These building blocks emerge from a critical look at M&E in the broader development agenda, which provides important background for the soil and water conservation (SWC) work described in this book, much of which takes place within the context of development cooperation systems and procedures. The intention is to provide a context for the more specific SWC-related aspects of M&E discussed in other chapters.

To varying degrees, most development practitioners now agree that M&E should incorporate more ‘participatory’ approaches, that ‘learning lessons’ is important, that more focus should be on providing management information, that outcomes and impacts need greater emphasis, that M&E must be linked with planning, and that accountability to beneficiaries, partners and donors is critical. All these elements could make M&E more ‘useful’[3] (Patton, 1997) and can be considered elements of a new M&E paradigm. However, outdated assumptions and practices continue to hamper the development of such learning-oriented innovations in M&E. The current gap between theory and practice, it is argued, can only be resolved by shifting the perspective from indicator- and data-driven M&E systems to learning-oriented systems.

The idea of organizational learning and the value of facilitating learning within communities, project teams and professional groups have become well recognized (Argyris and Schön, 1978; Bawden, 1992; Senge, 1992). But sadly learning remains an ambiguous concept, one that many in development equate simply with ‘training’. Unfortunately, the everyday image of learning is much coloured by classroom experiences, with teachers expounding ‘facts’ and students expected to remember and regurgitate these in exams.

The idea of learning that underpins the paradigm of M&E outlined in this chapter is quite different. Learning is viewed not merely as the accumulation of knowledge or skills but as the ability to constantly improve the efficacy of action. The implications of such a learning perspective for M&E are outlined in the remainder of the chapter.

Much development work, including soil and water conservation, has traditionally been supported via time-bound, output-focused projects. However, growing doubts about the effectiveness of projects as such have fed interest in more flexible programme approaches and in the provision of support to build the self-reliance and enabling capacity of key institutions and organizations. Until relatively recently, the theory and practice of M&E in development has been shaped almost exclusively by a concern with projects. This chapter is concerned with M&E in a wider context, as it relates not only to projects but also to programmes, organisational performance and institutional change. Reflecting this broader concern, the chapter uses ‘development initiatives’ as an inclusive term to cover M&E at a project, programme or organisational level.

The argument in the chapter for a learning systems approach to M&E is divided into three main sections. First, a foundation is laid by establishing the key functions of M&E. This leads, secondly, into a critical look at emerging issues and debates within the M&E field. From this critique of current theory and practice, the chapter then outlines eight building blocks for an alternative M&E paradigm. The chapter concludes by discussing design implications for learning-oriented M&E systems.

The Key Functions of M&E

The functions of M&E systems are often taken for granted and not carefully examined. As a foundation for the discussion about alternative approaches to M&E, this section defines the terms being used and proposes five key functions of M&E systems.

Many M&E experts like to make a very clear distinction between monitoring and evaluation. This author does not, instead viewing them as two overlapping spheres of activity and information. ‘Monitoring’ does focus more on the regular collection of data, while evaluation involves making judgements about those data. In theory, monitoring is a regular activity while evaluation is a more periodic occurrence. But even in everyday life, monitoring and evaluation are closely interlinked. When driving and monitoring the speedometer, a driver must simultaneously evaluate whether the speed is appropriate to the road and traffic conditions; leaving that evaluation until later would be downright dangerous. This example illustrates that where monitoring stops and evaluation begins is rather less clear than M&E theory often claims.

The separation of monitoring from evaluation has been partly driven by the classical approach to development projects, in which evaluation was undertaken every now and then by external experts, while monitoring was the task of project implementers. It is exactly this scenario that has left many development initiatives unable to learn effectively, as it disconnects information collection from the sense-making that precedes improved action.

In summary then, monitoring and evaluation is viewed in this chapter as an integrated process of continually gathering and assessing information in order to make judgements about progress towards particular goals and objectives, as well as to identify unintended positive or negative consequences of action. As will be discussed later, M&E must also provide insight into why success or failure has occurred.

The term ‘M&E system’ refers to the complete set of interlinked activities that must be undertaken in a coordinated way to plan for M&E, gather and analyse information, report, and support decision-making and the implementation of improvements.

The alternative paradigm of M&E outlined in this chapter presupposes that any M&E system needs to fulfil the following five functions. This is not simply an assumption but one borne out by the hands-on practice of M&E.

  1. Accountability – demonstrating to donors, beneficiaries and implementing partners that expenditure, actions and results are as agreed or are as can reasonably be expected in a given situation.
  2. Supporting operational management – providing the basic management information needed to direct, coordinate and control the human, financial and physical resources required to achieve any given objective.
  3. Supporting strategic management – providing the information for and facilitating the processes required to set and adjust goals, objectives and strategies and to improve quality and performance.
  4. Knowledge creation – generating new insights that contribute to the established knowledge base in a given field.
  5. Empowerment – building the capacity, self-reliance and confidence of beneficiaries, implementing staff and partners to effectively guide, manage and implement development initiatives.

Within development, accountability, and in particular reporting to donors, has tended to drive most M&E efforts. Such reporting has often been seen by implementers as a tedious administrative task that has to be done but which contributes little to the quality of their efforts or achievements. Furthermore, reporting requirements have tended to focus on the input and activity level and to be descriptive rather than analytical about performance. Inevitably this has fed a focus on quantitative indicators rather than qualitative explanations. While the need to be accountable is clearly important, the way M&E is conceived to meet this function is rather different from what is required for the remaining four functions. However, the broadening of the idea of accountability to include ‘downward accountability’ towards beneficiaries is bringing about some changes in the mechanisms of M&E for this function (Guijt, 2004).

It would seem common sense that M&E should be able to provide the necessary information for operational as well as strategic management. In reality, this is often not the case. Most development initiatives appear to have sufficient monitoring (although often of an informal nature) to manage the operational side of basic activity implementation and financial management. Much rarer, however, are systems that enable development organizations to make a critical analysis of progress towards outcomes and impacts in a participatory and learning-oriented way with beneficiaries, staff and partners.

Strategic management involves asking questions such as: Is the initiative really working towards the correct objectives? Why are failures occurring: is it because of wrong assumptions (an incorrect theory of action[4]) or because of problems with implementation? How can problems be overcome and successes built on? Such questions cannot be answered by a few quantitative indicators but require in-depth discussion and engagement between the different actors in a development initiative or organization.

It is at this point that the boundaries between M&E and management also begin to merge. An important, and sometimes unpleasant, lesson for M&E specialists is that M&E cannot drive management. There needs to be a demand from management for the type of M&E that will enable performance to be assessed and improvements to be made. Unfortunately, M&E system developers often assume that improving M&E will lead to improved management and performance. This is most definitely not a guaranteed causal connection. Partly because of the image of M&E as number counting and dull reporting, many managers do not engage closely with M&E systems or issues and do not consider M&E useful for supporting their management responsibilities. This disengagement becomes a self-fulfilling prophecy, as the lack of management orientation during the design of an M&E system will certainly make it ineffective in fulfilling that function.

The fourth function of M&E is knowledge generation. All human actions are based on a set of underlying assumptions or theories about how the world works. These assumptions or theories may be explicit, but are also often just implicit everyday understandings about what does and does not work. When, for example, a watershed management programme is designed, it hopefully draws on up-to-date knowledge about watershed management. As the programme proceeds, analysis of what is and is not working and investigating why this is so, may challenge or confirm existing theories and assumptions. In so doing, these insights may contribute to new theory. In this way, M&E in the form of action research can contribute to the established knowledge base.

All soil and water conservation initiatives, and indeed sustainable development more generally, take place within contextually specific environmental and socio-political phenomena and processes. This requires those involved to adapt theoretical ideas about SWC to suit their situation and to innovate continually. Not surprisingly, solutions to the complex challenges they face very often emerge from the trial and error of experience. Consequently, structured reflection, documentation and communication about the experiences of a particular development initiative in relation to existing theory become a critical component of society’s overall knowledge process. This aspect of ‘M&E’ is likely to grow in importance. However, as will be discussed below, there is still much to learn about how to generate useful ‘lessons learned’.

The fifth function, and perhaps the most overlooked, is empowerment. This means empowering all stakeholders, whether beneficiaries, managers, staff or implementing partners, to play a constructive role in optimising the impact of the development initiative. As is well known, knowledge is power: involving or not involving different stakeholder groups in generating, analysing and making decisions about the knowledge associated with a development initiative can be, respectively, extremely empowering or disempowering.

A Critical Look at Emerging Issues for M&E Theory and Practice

This section critically examines six issues that are central to the current theory and practice of M&E. It begins by examining the logical framework approach, which has perhaps been the key force shaping development planning and M&E over the last several decades. The current concern with accountability for impact is then discussed, before moving on to the dilemmas of a quantitative indicator-driven approach to M&E. Subsequently, questions are raised about the effectiveness of M&E approaches that claim to be participatory or to be ‘learning lessons’. Finally, the thorny issue of adequately resourcing M&E is raised.

The Eternal Logframe

The logical framework approach (or ‘logframe’) is central to the story of M&E in development and has fed much fierce debate about its advantages and disadvantages (Gasper, 2000). The logframe is now a relatively ‘middle-aged’ procedure, having entered development practice from about 1970 onwards. Over time, and now present in various guises and evolutions, it has become close to a universal tool for development planning. On the surface, the logical framework approach embodies much good common sense. It involves being clear about objectives and how they will be achieved, making explicit the underlying assumptions about cause and effect relationships, identifying potential risks, and establishing how progress will be monitored. Who would want to argue with this?

However, in practice, the logical framework approach also introduced some significant difficulties for those planning and implementing development initiatives.

  1. Lack of flexibility: In theory, a logical framework can be modified and updated regularly. However, once a development initiative has been enshrined in a logframe format and funding has been agreed on this basis, development administrators tend to wield it as an inflexible instrument. Further, while the broad goals and objectives of an initiative may be agreed on ahead of time, it is often not possible or sensible to define specific outputs and activities in advance, as the approach demands.
  2. Lack of attention to relationships: As any development practitioner well knows, it is the relationships between different actors, and the way these relationships are facilitated and supported, that ultimately determine what will be achieved. The logical framework’s focus on output delivery means that often too little attention is given to the processes and relationships that underpin the achievement of development objectives. The outcome mapping methodology developed by IDRC responds to this issue (Earl et al., 2001).
  3. Problem-based planning: The logframe approach begins with clearly defining problems and then works out solutions to them. Alternative approaches to change place much more emphasis on creating a positive vision to work towards rather than simply responding to current problems. Further, experience shows that solving one problem often creates a new one; the logframe approach is not well suited to such iterative problem solving.
  4. Insufficient attention to outcomes: For larger-scale development initiatives, the classic four-level logical framework offers insufficient insight into the crucial ‘outcomes’ level, which is critical to understanding the link between delivering outputs and realising impact.
  5. Oversimplification of M&E: The logframe implies that M&E is simply a matter of establishing a set of quantitative indicators, with associated means of verification and data collection mechanisms. In reality, many more details and different aspects need to be considered if an M&E system is to be effective.
  6. Inappropriateness at programme and organisational levels: The logical framework presupposes a set of specific objectives and a set of clear, linear cause and effect relationships through which to achieve these objectives. While this model may be appropriate for certain aspects of projects, at the programme and organisational level the development path is mostly more complex and less linear. For programmes and organisations there are often cross-cutting objectives best illustrated using a matrix approach rather than a linear hierarchy. For example, an organisation may be interested in its gender or policy advocacy work in relation to a number of content areas, such as watershed management planning and local economic development.

While the core ideas behind the logical framework approach can be used in flexible and creative ways, this is very rarely the practice, and even the basic mechanical steps are often poorly implemented. Consequently, its dominance and poor application have become a significant constraint to more creative and grounded thinking about M&E and the way development initiatives are managed.

The Demand for Accountability and Impact

In all countries, the consequences of free market ideology and policy have led to pressure on public expenditure. The result is much greater scrutiny over the use of public funds for development and environmental programmes. Furthermore, growing public and political scepticism about the results from the last 50 years of international development cooperation (whether justified or not) is forcing development agencies to demand greater accountability and greater evidence of impact for each euro spent. The number of development organizations competing for both public and private funding has also dramatically increased, making accountability an important aspect of being competitive in bidding for funding.