Draft Document
Monitoring the millennium development goals
A catalogue of procedures and an assessment of statistical capacity
By Ludovico Carraro, Salman Khan, Simon Hunt, Georgina Rawle, Matt Robinson, Manos Antoninis – Oxford Policy Management
Copyright 2003, The Department for International Development
Enquiries concerning reproduction should be sent to:
The Department for International Development
Contracts Department
Abercrombie House
Eaglesham Road
East Kilbride
Glasgow G75 8EA
This report has been prepared by Oxford Policy Management
The findings, conclusions and interpretations expressed in this document are those of Oxford Policy Management alone and should in no way be taken to reflect the policies or opinions of DFID.
Comments are welcome and should be addressed to Ludovico Carraro (e-mail: )
May 2003
Preface/Acknowledgements
In the compilation of this report, the authors received valuable help and support from the Paris 21 task team, in particular from Sarah Hennell of DFID, Neil Fantom of the World Bank, and Martin Dyble of Eurostat, who kept in regular touch and provided valuable data and feedback.
In addition, a number of individuals from various organisations provided data and statistics, useful advice and comments, and responses to various queries, all of which helped substantially in the production of the report. In this regard the authors would like to acknowledge the help of: Rachael Beaven from DFID; Colin Mathers, Carla AbouZahr, Mercedes de Onis, Monika Blössner, Tony Burton, José Hueb, Margie Schneider, and Abdul Ghaffar from WHO; Tessa Wardlaw and Roeland Monasch from UNICEF; Douglas Lynd, Olivier Lobe, and Anuja Singh from UNESCO; Desmond Jones from UNAIDS; Habib Khan from the Ministry of Education in Pakistan; Chet Chaulagai from the Ministry of Health in Malawi; Kristi Fair from Macro International in relation to the education data section of the Malawi DHS; Hammad Ali from the Federal Bureau of Statistics in Pakistan; Simon Scott from the OECD; Michael Minges from the ITU; and Loganaden Naiken, in relation to issues concerning the calculation of dietary energy consumption data.
Executive Summary
Background
This short study was commissioned against a background of efforts by the Millennium Development Goals (MDG) Task Force, and others, to improve the quality of MDG indicators.
The particular aim of the study is to provide a better understanding of the monitoring process and its standards and to highlight areas of possible improvement.
There are three main outputs that have been produced as part of the study:
- a catalogue – which documents, for each of the 48 indicators: the agency responsible for reporting the indicator; the definition of the indicator; the availability and timeliness of data; the original sources of the data used in deriving the indicator; the actual construction of the indicator and the process of reporting it; the checks performed on the original source data; and a comparison of the MDG data with other international data sources;
- an international report – which documents the key quality-related issues that arose out of the cataloguing exercise, containing both an international overview of the issues and some suggestions that might be explored to resolve them; and
- a national report – which examines MDG processes in two countries, Pakistan and Malawi.
The Catalogue
The catalogue is a compilation of 48 tables, one for each indicator, that form part of a separate document. The catalogue is very much a work in progress as data, definitions, sources, processes and systems for calculation and dissemination are being constantly updated.
The International Study
The International Study comprises the first part of the present document. On the basis of the work undertaken in developing the catalogue, it identifies five main issues that, if addressed, would substantially improve data quality:
- Definitional issues – need for consistent usage across all users;
- Data availability – can be poor, raising problems for the quality of regional estimates and for the assessment of trends in performance;
- Issues that arise from modelling exercises – using models to estimate missing data raises questions about the value of the estimates, particularly where the models are calibrated on old data;
- Timeliness – indicators that use household surveys deliver data that are on average 3 to 5 years out of date;
- Comparability – cross-sectional and time series comparisons can be difficult not only because of the problems of consistent definitions, data availability, modelled estimates, etc., but also because of different methods used in the calculation of particular indicators (see Table 1 in Appendix 1).
The study discusses five activities that could help to overcome some of these shortcomings:
- an appropriate use of available data from household surveys – especially through the consolidation of existing survey networks and through more proactive use of surveys when data are missing or old;
- changes in the use by international organisations of data reporting questionnaires sent out to national governments – there is scope for improvements in quality control;
- changes in the use of international population data in the calculation of some indicators – in a number of instances a mix of population estimates, rather than a single set of population estimates, is used for the calculation of indicators, with inevitable consequences for the credibility of some indicators in some countries and regions;
- changes in the management of common methodologies and definitions – for some indicators international agencies still strive to achieve consensus on definitions and uniformity in their application; and
- changes in data management practices – in particular, providing footnoted details on data points, the sources of data, and even access to original datasets would help to improve the interpretation and comparison of particular indicators.
The National Study
The National Study comprises the second part of the present document. It provides the precursor to a larger country study to be undertaken later in 2003.
Using the experiences of Pakistan and Malawi, this brief desk study:
- compares nationally used data with the internationally available data under the MDG;
- compares indicators that have been selected for monitoring the PRSP with those under the MDG, and
- briefly discusses statistical capacity in both countries to monitor MDG and PRSP indicators.
Further Issues
Finally, this report identifies a number of areas in which international efforts might be focussed, and suggests some of the issues that might be taken forward in expanded country studies.
Table of contents
Preface/Acknowledgements
Executive Summary
Background
Table of contents
1.Introduction
2.International study
2.1.Salient characteristics of the MDG indicators
2.1.1.Definitional issues
2.1.2.Data availability
2.1.3.Issues that arise from modelling exercises
2.1.4.Timeliness
2.1.5.Comparability
2.2.Key issues to address
2.2.1.An appropriate use of available data from household surveys
2.2.2.The use of international agency questionnaires for data reporting
2.2.3.The use of international population data in the calculation of indicators
2.2.4.The importance of common methodologies and definitions
2.2.5.Data management
3.Country studies: Pakistan and Malawi
3.1.Introduction
3.2.Comparison of national and international data on MDGs
3.3.Comparison of indicators monitoring MDGs and PRSP in Pakistan and Malawi
3.4.Statistical capacity to monitor MDG and PRSP indicators
4.Further Issues
4.1.International study
References
Appendices
1) 48 tables with indicators
2) Pakistan
3) Malawi
Abbreviations
1.Introduction
This report summarises the preliminary findings of an on-going study that assesses the status of the capacity to monitor the Millennium Development Goals (MDG). The study has two components: first, an international assessment of how international lead agencies are monitoring the MDG; and second, some country-specific analysis on how two governments are engaged in the reporting process of the MDG as well as an assessment of their ability and commitment to monitor the MDG indicators.
The aim of the study is to get a better understanding of the monitoring process and its standards, and to highlight possible areas of improvement. Indeed, other reports have already recognised “serious and far-reaching problems in almost all of the millennium development goals indicators in terms of data availability, accuracy, coverage in order to produce global and regional estimates, and consistency over time” (sic) (Inter-agency Expert Group on MDG Indicators, 2002). However, these problems are not comprehensively documented; moreover, there is no systematic account of the methodologies used by the various agencies in compiling the data. It is in these two areas that this study seeks to make a first contribution. This paper therefore does not comment on the international community’s progress towards reaching the goals, nor does it attempt to discuss which indicators are better at monitoring the goals, or what the indicators aim to capture. Instead, it focuses solely on understanding the processes of data reporting and compilation, and the methodologies behind them.
In particular, the international study has developed a catalogue of the various processes in place to monitor the 48 indicators of the MDG.[1] For each indicator the catalogue specifies: the agency responsible for reporting international statistics (as identified by a meeting of the relevant stakeholder institutions in March 2002); definition of the indicator; timeliness and availability of data; original sources; construction/compilation of the indicator (estimates and adjustments); process of reporting; checks performed on the original sources; and a comparison with other international sources.
The country studies (at the moment Malawi and Pakistan) aim at presenting examples of the interaction between lead international agencies and national authorities, and at investigating available national statistical resources and whether these are appropriately used in the reporting process. Moreover, the country studies compare the set of indicators identified by the Poverty Reduction Strategy Papers (PRSP) with the MDG indicators to analyse both to what extent the international monitoring effort is shared by national targets and how national statistical production is institutionalised to produce reliable data.
This paper presents the main findings of the study, divided into the international and country components, while the catalogue and the documentation material of the two country studies are included in separate appendices.
2.International study
MDG monitoring and its coordination by the United Nations Statistics Division (UNSD) began in 2001. Both the monitoring activities and their coordination are still evolving, with the existing framework of responsibilities providing the platform for the further development of responsibilities and systems. Indeed, since this study began (March 2003), a number of changes in systems and methodologies have already taken place, and it is recognised that there might be further changes underway of which the present analysis is unaware. Nonetheless, this summary highlights some general points that may provide a platform to support further innovations and improvements.
We have divided this section into two parts: the first provides some observations on the salient characteristics of the MDG indicators and their compilation that have arisen from this study’s work on the indicator catalogue; the second discusses the key issues to address.
2.1.Salient characteristics of the MDG indicators
The 48 MDG indicators combine those that have been widely used and for which data reporting systems are well established with those that are relatively new, or indeed completely new, to a broad international user group. As a result, the pattern of data coverage and data quality varies considerably across the range of indicators.
For example, for an indicator such as “the prevalence of underweight children below five years of age” the Global Database on Child Growth and Malnutrition (WHO) has been collecting and categorizing relevant data since 1986. However for an indicator such as “the proportion of households with secure tenure”, there is effectively no data available.[2]
The cataloguing exercise both raises and echoes the issues identified by the Inter-agency Expert Group on MDG Indicators, namely:
- Definitional Issues
- Data availability
- Issues that arise from modelling exercises
- Timeliness
- Comparability
2.1.1.Definitional issues
For some indicators – usually those that are relatively new to the debate – there can be a lack of consistency in the use of a common definition. Inevitably, indicators related to new problems are subject to more debate and more revision than others that are better understood and widely used.
This is the case for “the condom use rate of the contraceptive prevalence rate”[3], “number of children orphaned by HIV/AIDS”[4], and “the unemployment rate of 15 to 24 year olds”, where several indicators rather than a single indicator are being used in practice.
Furthermore, indicators that monitor the spread of HIV/AIDS that were identified in 2001 have subsequently been changed. Nevertheless some agencies still refer to the earlier indicators, or use slightly different definitions.[5]
2.1.2.Data availability
For a number of key indicators based on country-reported data or household surveys, data availability (measured as the number of countries with at least one observation after 1995) is relatively poor. This raises serious doubts about the credibility of regional and global estimates. This is particularly true for indicators monitoring goals 2 (education), 3 (gender equality) and 6 (combating HIV/AIDS and malaria). Not only is the number of countries for which data are available relatively low, but information is also missing for particularly large countries (China and/or India).
A further issue is that of the number of observations available for the same country over time. This is essential to assess and monitor changes. In general, for those indicators for which there is a general lack of data in the period since 1995, it is also the case that there is insufficient data to make comparisons over time.
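As an illustration of the availability measure used here, the following is a minimal sketch (in Python, with entirely hypothetical country, indicator and value records, not an extract from any agency database) of counting, for each indicator, the countries with at least one observation after 1995 and the countries with enough post-1995 observations to assess change over time:

```python
import pandas as pd

# Hypothetical extract: one row per country/indicator/year observation.
obs = pd.DataFrame({
    "country":   ["Malawi", "Malawi", "Pakistan", "Pakistan", "India"],
    "indicator": ["net_enrolment"] * 5,
    "year":      [1992, 1998, 1996, 2001, 1993],
    "value":     [58.0, 63.0, 46.0, 49.0, 68.0],
})

# Keep only observations after the 1995 cut-off discussed in the text.
recent = obs[obs["year"] > 1995]

# Countries with at least one post-1995 observation, per indicator.
availability = recent.groupby("indicator")["country"].nunique()

# Countries with at least two post-1995 observations, needed to assess change over time.
per_country = recent.groupby(["indicator", "country"])["year"].nunique()
trend_capable = (per_country >= 2).groupby(level="indicator").sum()

print(availability)    # net_enrolment: 2 countries (Malawi, Pakistan)
print(trend_capable)   # net_enrolment: 1 country (only Pakistan has two post-1995 points)
```

On such a measure, an indicator can look reasonably well covered in the cross-section while still offering very little basis for monitoring trends.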
2.1.3.Issues that arise from modelling exercises
Some of the indicators raise issues because, in the absence of actual observations, estimation models using the limited available data are employed to generate estimates. The two main issues are:
- the appropriateness of the data and models used in the estimation exercises, especially for models of literacy and mortality rates, maternal mortality rates, and malaria prevalence; and
- the fact that some of the methodologies for estimating missing data, in particular approaches that provide estimates based on key informant responses, do not provide “statistical” estimates.
Indicators that rely heavily on theoretical models rather than on observed data appear to provide timely and very comprehensive coverage, but they depend crucially on the accuracy of the predictive model and on whether its specific assumptions hold in each particular country: their precision and quality can often be questioned. For at least some of the MDG indicators, modelling is the appropriate way to address the lack of available data, and provides a more useful solution than a complete data vacuum. Nonetheless, there can be problems with this approach. As an example, if literacy estimates for young people (15-24) are based on a model constructed on a census that is 10 or 15 years old, it is reasonable to have doubts regarding the quality of the estimates.
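To make the point concrete, the following is a minimal sketch (in Python, with hypothetical census years and literacy figures; this is not the method used by any agency) of how a trend fitted to old census observations can be projected well beyond the data on which it is calibrated:

```python
# Naive trend extrapolation of youth literacy from two old census observations.
census_years = [1981, 1991]        # hypothetical census dates
literacy_15_24 = [55.0, 63.0]      # hypothetical observed youth literacy (%)

# Fit a straight line through the two observations and project forward to 2003.
slope = (literacy_15_24[1] - literacy_15_24[0]) / (census_years[1] - census_years[0])
projected_2003 = literacy_15_24[1] + slope * (2003 - census_years[1])

print(f"Projected youth literacy in 2003: {projected_2003:.1f}%")
# The projection (72.6%) rests entirely on the assumption that the 1981-1991
# trend continued for a further 12 years; no post-1991 observation confirms it.
```

The estimate is produced quickly and for every year required, but nothing in the procedure itself reveals how far it may have drifted from reality.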
In some cases indicators are simply derived from the responses of key informants, and in these cases it is impossible to quantify the likely error of the estimates. For example, ’the proportion of the population with access to affordable essential drugs on a sustainable basis’ is an indicator based on interviews with country experts, and it only classifies the proportion with access into four broad categories. Alternatively, this indicator could be provided by facility surveys and/or by health management information systems.
2.1.4.Timeliness
Indicators that depend on household surveys produce data that are on average 3 to 5 years out of date. This occurs mainly because many household surveys are conducted only every five years, and because rounds of the same survey take place in different years in different countries. Furthermore, some of the indicators collected through household surveys in one year refer to the situation of previous years. Finally, there is a lag between the time that data are processed and the time that estimates are released.
Nonetheless, for a number of household surveys (mainly the Multiple Indicator Cluster Surveys – MICS – and the Demographic and Health Surveys – DHS) updates of estimates can be very quick: estimates are reported as soon as they appear in preliminary reports and then in the final published documents.
2.1.5.Comparability
Finally, the issues mentioned above affect the ability to make both cross-sectional and over-time comparisons, for which data quality must meet certain standards. Both comparisons over time and comparisons between countries are difficult for a number of indicators, because of differences in the methodology used to estimate the indicators or differences in the definition of the indicator.
2.2.Key issues to address
Unfortunately, many of the problems outlined in the previous section can only be addressed as more data become available. However, a focus on the construction/collation and reporting procedures used for some core indicators[6] has identified areas of possible immediate improvement, based not on the collection of more data but on better management and use of present resources. This section illustrates the key areas that could be addressed. These include:
- an appropriate use of available data from household surveys;
- changes in the use by international organisations of data reporting questionnaires sent out to national governments;
- changes in the use of international population data in the calculation of some indicators;
- changes in the management of common methodologies and definitions; and
- changes in data management practices.
2.2.1.An appropriate use of available data from household surveys
Growth in the availability of well-conducted household surveys provides an increasing opportunity to improve the reliability, availability and timeliness of a number of MDG indicators. About one-third of the indicators depend wholly or partly on household survey data. Indicators such as the proportion of the population with sustainable access to an improved water source, and to improved sanitation, which previously relied on administrative data, have begun to make extensive use of household surveys (user data), with clear benefits for data quality.
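As a simple illustration of what such user data involve, the following is a minimal sketch (in Python, with hypothetical variable names, weights and records, not the procedure of any particular survey programme) of how a population-level coverage estimate might be computed from household-level survey responses:

```python
import pandas as pd

# Hypothetical household survey microdata.
households = pd.DataFrame({
    "hh_weight":      [1.2, 0.8, 1.5, 1.0],   # sampling/design weight
    "hh_size":        [5, 3, 6, 4],           # members per household
    "improved_water": [1, 0, 1, 1],           # 1 = improved source reported
})

# Weight each household by design weight x household size, so the estimate
# refers to people rather than households.
person_weight = households["hh_weight"] * households["hh_size"]
coverage = (households["improved_water"] * person_weight).sum() / person_weight.sum()

print(f"Estimated share with access to an improved water source: {coverage:.1%}")
```

Because such estimates come directly from reported household behaviour rather than from provider records, they avoid some of the coverage and incentive problems associated with administrative reporting.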