INTERNATIONAL POLICY RESEARCH:

‘EVIDENCE’ FROM CERI/OECD

Tom Schuller

Centre for Educational Research and Innovation

OECD

Paris

Roundtable paper

European Conference on Education Research

Crete

September 2004

INTERNATIONAL POLICY RESEARCH: ‘EVIDENCE’ FROM CERI/OECD [1]

Abstract

This paper discusses, from the inside, issues involved in how OECD’s Centre for Educational Research and Innovation addresses the task of conducting international policy research. It begins with a brief descriptive account of CERI’s work. I then consider three particular issues which relate to how research evidence is compiled. First, I consider why the rhetoric of lifelong learning is only weakly supported by systematic research. Secondly, I suggest that an increasing focus on the outcomes of education raises questions about causality in a policy research context, for example about what kinds of evidence are valued. Thirdly, I ask what might be meant by learning from international experience. I conclude that policy research conducted within the context of an international bureaucracy certainly differs from university-based research or consultancy contract work, but the differences may be less significant than the resemblances.

Key words: international policy research; evidence; causality; randomised controlled trials; mixed methodology; transdisciplinarity.

According to Rinne et al., OECD “has become established as a kind of ‘eminence grise’ of the educational policy of industrialised countries”, and “has claimed for itself a central position in the collection, processing, classification, analysing, storing, supplying and marketing of education policy information – the extensive control of information on education” (2004: 456). The first observation is a clear judgement on the part of the authors, i.e. it is their own view of where OECD stands; the second is more ambiguous: does it mean that OECD has actually achieved this position of control, or merely that it claims it? In either case these are far-reaching claims, which are both flattering and challenging to those of us who work within OECD’s Directorate for Education.

Academic interest in OECD’s role in educational policy analysis and formation is growing (see Papadopoulos 1994 for a historical account of OECD’s education work up to a decade ago). A study similar to the Finnish one cited above is being carried out at the University of Bremen, situated within a wider study of international governance, though the results will not be available until 2006. Both these projects cover OECD work on education at a very general level and over a long timespan. At the other extreme, at the 2003 ECER meeting a roundtable was held specifically on one piece of work, the OECD review of Educational R & D in England. The discussion was subsequently published in the European Educational Research Journal (Wolter et al 2004), with the four roundtable participants expressing divergent views on the nature and quality of the review process. On the one hand, the review was criticised for adopting a linear model of the research-practice relationship, for overriding the complexities of knowledge generation and for undermining the status of education as a discipline. On the other, the commentators (though they did not share a common position) also referred to positive features of the review, such as the quality of the analysis and the attempt to provide a rational framework for discussing the effectiveness of research.

This paper does not attempt either to justify or rebut these claims, or the broader ones cited in the first paragraph. It has a much more limited goal: to give a summary overview of the work of one part of OECD’s education research capacity, the Centre for Educational Research and Innovation (CERI); and by drawing on some parts of that work to offer a perspective on current debates over how evidence is compiled and used in educational policy research. The first sections are largely descriptive and can be skipped by anyone familiar with CERI and OECD or uninterested in such an account. The second part shifts gear into more of an epistemological and methodological discussion, but without high philosophical pretensions: it addresses the issue of evidence within a context of current policy concerns.

Education at OECD

The Directorate for Education was created as a separate directorate in 2002. Education was previously part of a joint directorate with Labour and Social Affairs, but was considered sufficiently significant to warrant a discrete directorate (one of nine within OECD). The principal units within the Directorate are the Education and Training Policy division (ETP), which deals mainly with national and thematic reviews; the Indicators and Analysis division, including the Programme for International Student Assessment (PISA), which is arguably OECD’s highest-profile activity; IMHE (Institutional Management in Higher Education); PEB (Programme on Educational Building); a programme building links with non-member countries such as China and Brazil; and CERI, the Centre for Educational Research and Innovation.

CERI was set up in 1968, with a mandate which is renewed every five years. It is what is known in OECD as a Part 2 operation. This means that Member countries are not obliged, when they join OECD, to take part in (and contribute financially to) its work, but may opt to join. In fact, all Member countries do participate in CERI, with countries such as Chile and Israel having observer status, but the Part 2 status allows a certain degree of latitude in comparison with Part 1 programmes (governed by the regular OECD Committees, including the Education Committee to which the Education and Training Policy Division reports). The CERI Governing Board is made up of nominees from the countries, the nominations being made mostly but not exclusively by ministries of education. Governing Board members may be policy-makers, for instance from the ministry itself, or researchers, for example from a university. Policy-makers outnumber research representatives, but the Board’s composition still marks it out somewhat from Part 1 OECD Committees.

The Board meets twice yearly, and every two years agrees a programme of work for the next biennium. A list of current and planned projects is attached as Annex A.[2] The most recent programme of work, for 2005-6, was approved at the May 2004 meeting of the Governing Board, which also approved the following as key themes cutting across the different CERI projects:

A. Lifelong learning, as an overarching theme

B1. Innovation and knowledge management

B2. Human and social capital

B3. A futures focus

B4. Learning and teaching.

The substantive components of CERI’s title – ‘research’ and ‘innovation’ – have meaning within the OECD context. That is, CERI is concerned with original knowledge accumulation of different kinds, and pursues lines of investigation which are not wholly predetermined; and it is oriented towards innovation in the sense of identifying and analysing new trends and issues in education. The research may be primary, in the sense of commissioning or executing work involving new data-gathering, or secondary and synthetic, bringing together results from existing research in Member countries. In both cases, however, one of CERI’s key characteristics is the aim of developing new tools, frameworks and indicators for the gathering and analysis of data, both quantitative and qualitative. There is an agenda-setting as well as a reporting role, and this extends to the setting of research as well as policy agendas.

CERI operates with an extremely small core staff, as do all the units within the Directorate. At present we have about 13 professional and 10 support staff. Given the range of countries as well as issues to be covered this is very limited, especially when compared with some of the large education research units within universities or elsewhere in Member countries. CERI analysts themselves write reports, but also draw extensively on external consultants to provide thinkpieces, to contribute analyses of individual countries or to synthesise results from several countries.

The major tangible outputs are reports published by OECD (see Annex B for list of recent publications). Dissemination also takes place through conferences and seminars, and through the posting of grey material, ‘declassified’ papers which are not formally published but are made available to the public. As everywhere, the Web is an increasingly important dissemination tool. We are now paying more strategic attention to the whole issue of CERI’s public profile and the dissemination of its work.

Pace the impression which the Finnish paper cited earlier may give, neither the Directorate for Education nor CERI within it is a free-ranging thinktank with its own educational agenda. The programme of work is set within parameters agreed by Member countries, and the outputs are monitored and scrutinised by their representatives. Of course the issues and to some extent the outcomes are shaped by a variety of political pressures, some of them emanating from the political stances of particular countries, some of them generated more generally from the interaction of different countries’ positions and priorities, which are articulated to differing degrees of explicitness. To illustrate this I turn now not to a specific example of a substantive educational project, but to the issue of how the nature of evidence itself is thought about. Although I do so in relation to CERI as an international educational research unit, these questions are common to researchers more generally, especially those engaged in policy-relevant research.

Evidence and policy research

No one seriously believes that policies are developed, implemented or evaluated by reference to research evidence alone, in some kind of aseptic rationalist bubble (see Levin 2004). Occasionally one of those stages (development, implementation, evaluation) may be determined by evidence alone, but in general research evidence, where it figures at all, is just one part of the policy-making process (or rather several parts, since there can obviously be many different pieces of evidence, often conflicting). The extent to which evidence influences policy is subject to a range of factors, from political opportunism to the pressure of public opinion to the quality and capacity of both the producers and suppliers of research, but this is not the place to discuss the role of evidence in educational policy-making in general. Instead I deal with three aspects of the debate on how research evidence is compiled, understood and used. Much of the discussion results from work done as part of a small CERI project on ‘Evidence-based Policy Research’, which forms a strand within our activity on knowledge management.

Identifying and prioritising issues

In order for research to be carried out, and evidence compiled, a research agenda must be identified and, in some measure, legitimised. Agenda-shaping has always been a classic aspect of the exercise of power, whether it is done overtly or covertly (see e.g. Lukes 1963). The agenda not only concerns substance (what is to be researched) but also the techniques to be used, the expected outputs, the timescale on which they are to be delivered and the criteria for evaluation. All of these are of legitimate concern to bodies which fund or sponsor research, including governments, although different bodies will place different emphasis on the various aspects. Arguably, for example, a research council is more likely to give close scrutiny to research methodologies as well as to substance when reviewing the projects it funds, in order to ensure academic credibility, whilst a sponsoring government department will define the policy relevance more tightly.

This is a complex area, and I want here to focus on only two aspects, one substantive and one relating to evaluation. In the first part of the paper I described, without further observation, how lifelong learning was proposed as the overarching framework within which CERI’s research programme should be executed. This may appear a rather bland affirmation of the unrejectable. After all, lifelong learning is hardly a new subject within OECD. The 1996 Education Ministerial meeting adopted lifelong learning as an overall framework for OECD’s work on education, and the 2001 Education Ministerial reaffirmed it. The Education Committee’s programme of thematic reviews has addressed early childhood education and care, transition from initial education to working life, and adult education, and substantial work has been done on strategies to increase incentives to invest in lifelong learning. Yet although most countries support lifelong learning in their policy discourse and rhetoric, the extent to which actual policies are formulated and implemented within this frame of reference is very much open to question.

‘Beyond Rhetoric’, in fact, was the title of a recent major OECD publication on adult learning (OECD 2003), which posed serious questions on this front. Arguably, indeed, we do not even have the frameworks for understanding what progress, if any, is being made on lifelong learning (see e.g. Coffield 2000; Istance 2003; Istance et al 2003). Thus the commitment to it as the overarching theme is potentially significant in itself. However, the key point here is the apparent vicious circle which exists in a policy research context – vicious in the sense that the interaction between existing institutional and sectoral structures, political visibility and data availability combines to make it difficult to establish lifelong learning as a central issue for policy and research. Schools, and to a lesser extent higher education, have an unmistakeable policy home. To my knowledge every country has a ministry of education dealing with primary and secondary education. Schooling is universal and compulsory. There are teachers to be recruited and buildings to be managed. Even in decentralised countries, governments have high-profile responsibilities in respect of schooling. As noted, the same is not true to quite the same extent for higher education, where responsibility is often far more diffuse, and the governmental portfolio may be much more restricted and linked with one or more of a variety of other domains such as science or employment; but it remains the case that the institutional basis of the HE sector is easily identified in the public and political mind. Universities and colleges are easily recognisable, if not easily understood. Both schools and higher education are therefore relatively amenable not only to research as such but to research policy and funding, even though the research may not be good, effectively managed or properly disseminated (see Hargreaves 1999, Foray 2003, OECD 2004 on knowledge management in education).

Lifelong learning, by contrast, lacks an institutional base, a professional identity, an administrative location and a political profile. Moreover, although there is no shortage of general conceptual or philosophical views, some of high canonical status (e.g. Delors 1996; OECD 1996), there is no agreed framework around which knowledge can be cumulatively constructed. This is one reason why the futures orientation of CERI work in this area carries such potential significance: by sketching out possible scenarios, and linking them to empirical trends and issues, we can aspire to generate such a framework. Several CERI projects make greater or lesser reference to lifelong learning: for example, the work on brain sciences and education (see CERI/OECD 2001a) involves three networks of neuroscientific researchers, of which one, based in Japan, is focussing on the lifespan (the other two are on literacy and numeracy); and the work on Schooling for Tomorrow (CERI/OECD 2001b) includes scenarios which envisage the school at the heart of a lifelong learning community. But the task of redressing the balance in educational research so that initial formal education does not dominate to such an extent is a challenging one, as any analysis of the content profile of papers submitted to European or other educational research congresses, and to education research journals, will illustrate. As a mild test of commitment, it would be interesting to review national research agendas, or programmes funded by governments, to get an idea of how far these relate in any meaningful way to a lifelong learning agenda.[3]

The second issue is one of evaluation in relation to policy research. This is important in the light of the influence sometimes attributed to OECD’s work (though the evidence on the extent and nature of that influence is usually somewhat impressionistic). How and by whom are quality and impact to be judged? For academic research the answer may be phrased primarily in terms of peer review and judgement, with no necessary dimension of application (though see e.g. Levin 2004). For policy research, especially that funded directly by governments, this cannot be the response. But it is not immediately evident what the mechanisms are for judging impact, and this is something we wrestle with, both internally and as a matter of accountability. OECD as a whole has recently imposed upon itself a short-term model of impact measurement, with all outputs being given a single-figure score by the end of March in the year following their production. In other words, a report published in December will be evaluated for its impact almost instantaneously. The model does at least separate quality from impact – importantly, since a report may be of high quality and yet, for reasons beyond the authors’ control, have no impact. But whatever form the Organisation’s internal procedures take, the issue remains: how is our work to be assessed and made publicly accountable in a meaningful and appropriate way? I pose this as a question, inviting constructive suggestions! All I would say here is that for CERI the impact should be judged in relation to the research community as well as the policy world.