
Summarising an Investigation into Methods of Evaluating the Effectiveness of Advisory Work in a South Wales LEA. (Towards a Self-Reviewing LEA)

Rod Cunningham (Education Adviser – Mathematics)

Torfaen County Borough Council

Paper presented at the British Educational Research Association Annual Conference, Cardiff University, September 7-10 2000.

Address for correspondence:

Education Department

Torfaen County Borough Council

County Hall

Cwmbran, NP44 2WN

Tel 01633 648181

e-mail

Abstract

The increasing pressure on LEAs to measure the effectiveness of their advisory teams arises from two considerations: the need for external accountability and the concern for advisers to improve their own practice. The performance indicators currently employed summarise data from three different areas: pupil attainment scores, school inspection reports and surveys of school staff opinion about the work of advisers. These performance indicators tend to be focused on outcome rather than process. This paper summarises the shortcomings of present approaches to the measurement of adviser effectiveness and proposes that the assumptions underlying these measurements may need to be examined. It is proposed that advisers should be seen as mediators of, or as providing stimulus for, development in schools rather than as specifically causing improved pupil learning (which is implied by the reductionist model underpinning the present performance indicators). Particular areas where advisers are likely to be influential are: developing a learning culture; improving the quality of teaching; and developing systems of monitoring and evaluation. The paper summarises attempts to date to capture adviser effectiveness based on a revised and more dynamic model. This includes the technique of clustering schools on the basis of longitudinal data and comparisons across subjects and gender. Clusters then provide the basis for a more detailed examination of the complex interplay of factors within schools.

Key Words: Performance Indicators, Advisory Effectiveness, Clustering, Complexity.

Acknowledgement:

Many thanks to teaching and LEA colleagues who gave freely of their time during this study.

1.0 An introduction to the study and its scope

Rationale for the study[1]

There are several reasons why the establishment of measures of adviser effectiveness is important at the present time.

1) Recent policy documents clarify the importance of the role of the Local Education Authority (LEA) in school improvement on the one hand and spell out the need to account for money spent in terms of strict success criteria on the other. If an LEA can devise its own performance indicators (PIs) it can begin to take control of its own process of evaluation.

2) Performance indicators are important for development. Advisers use examples of effective practice to share with teachers and school managers. Information about what works with teachers, and ultimately with pupils, assists advisers to improve their own practice.

Clearly the task of evaluating adviser effectiveness is a difficult one, fraught with methodological, philosophical and ethical difficulties. Simple correlations between amount of adviser time spent in a school and subsequent improvement in pupil achievement are unlikely to hold the key. Change and improvement within educational settings may well be influenced by many factors other than adviser intervention. Such change may also occur in fits and starts at one time and in an incremental and linear way at another. This study aims to explore the complexity of these changes with a view to identifying promising measures of adviser effectiveness.

An Overview

The main arguments in this study have been summarised in the following paragraphs:

An LEA is judged ultimately by the way in which it works with its schools and by the effectiveness of this liaison. If pupils in schools make good progress and this can in some way be linked to the work done by the LEA advisers, then the advisory team will be judged in a positive light. As with the framework for inspection of schools, LEA inspections are concerned primarily with the raising of standards. I argue that, in the absence of longitudinal, pupil-level data, comparisons between the achievement of the same pupils in English and mathematics give some grounds for judging school effectiveness. If a pupil can achieve a good mark in one of these two subjects then they should be able to achieve a reasonable mark in the other. The extent to which the school has managed to minimise subject differentials may therefore be a measure of its effectiveness. Sammons [1999] notes that Headteachers and Heads of Department rank good progress for students of all abilities very highly among factors which they consider ought to be taken into account when judging school effectiveness. Data are also compiled by gender within subjects, which provides a second useful source of comparison. Since differential achievement is perceived as a major issue in the LEA it is also useful to use these data to stimulate discussion about different teaching strategies. As with subject differences, it was argued that a better-performing school would minimise gender differentials. When a pupil-level database is established, a more refined approach would be to measure the progress made by boys and girls in mathematics and English and perhaps link this specifically to strategies introduced by teachers and supported by LEA advisers.
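By way of illustration, the subject-differential comparison can be computed directly from school-level results. The short Python sketch below is mine, not part of the LEA's procedures; the school names and percentages are invented for the example.

# A minimal sketch (not the LEA's actual procedure) of computing a
# subject differential for each school: the gap between the percentage
# of pupils reaching the expected level in mathematics and in English.
# School names and figures are illustrative assumptions.
schools = [
    # (school, % level 4+ in maths, % level 4+ in English)
    ("School A", 57, 62),
    ("School B", 71, 48),
    ("School C", 65, 66),
]

for name, maths, english in schools:
    differential = maths - english
    # On the argument above, a school that minimises the size of this
    # differential treats the two subjects with comparable effectiveness.
    print(f"{name}: maths-English differential = {differential:+d}")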

The comparative data on pupil achievement were viewed alongside data on adviser work to uncover any possible connections. Given the assumptions of the study it was not expected that robust measures of adviser effectiveness would be established in this way. In fact this part of the work called into question some of the measures of school effectiveness at present being used in the LEA. What did appear significant, however, was that the comparative data could be used to cluster schools, and that this clustering formed the basis for deciding which schools would be investigated further. A major part of the study has been the development of the methodology to compare performance indicators.
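No particular clustering algorithm is implied by the argument above. As an indication of one plausible approach, the sketch below groups schools by k-means on standardised subject and gender differentials; the figures, the choice of algorithm and the number of clusters are all assumptions made for illustration.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row describes one school:
# [maths-English differential, boys-girls differential]
# Values are invented for the example, not the LEA's data.
features = np.array([
    [-5.0, 12.0],
    [23.0, -4.0],
    [-1.0, 2.0],
    [18.0, -10.0],
    [-3.0, 15.0],
    [0.0, 1.0],
])

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
# Schools sharing a label form a cluster from which cases can be
# selected for the more detailed interview work described below.
print(labels)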

Outline of the Study

Part One: Preparation for the research project: defining the adviser and LEA roles and discussing underlying assumptions

Government and local authority documents were used to show the development of the adviser role in Torfaen. Attempts were made to operationalise this role to make aspects of it available for evaluation. As part of this work a range of performance indicators used elsewhere in measures of adviser effectiveness was investigated to establish what is at present in use. I then surveyed literature on educational change with particular reference to factors which have a direct impact on pupil achievement. Issues around the measurement of pupil progress were discussed, which then allowed me to suggest areas in which advisory work can influence pupils' progress.

Part Two: A small-scale research project to identify possible performance indicators

In order to explore the possibilities of using outcome and process performance indicators practically and in a local context a small-scale project was undertaken. This project developed out of the discussions in part one. These discussions influenced the methodology to be employed. School-level data provided an overview and assisted in the identification of schools for more detailed work. The results were analysed and displayed to show possible connections between adviser work and the performance of schools.

Research Questions

The study attempts to answer the following questions:

1) Can instances be identified where raised standards of pupil achievement in a school are linked to work undertaken by advisers?

2) Can some account be given of the mechanisms involved in such cases?

3) If the answers to questions 1 and 2 are broadly positive, are the educational change processes involved always linear and incremental, or can such change sometimes be dramatic? (In other words, is there some justification for the assumptions upon which the study is based?)

4) Can some performance indicators be identified from the exploratory work above?

5) Can these performance indicators be used for self-review purposes by the LEA?

The first episode involved the collection of a range of data at school and LEA level. Although data are provided at pupil level, they are not at present organised into a pupil-level database but are collated and analysed at school level. Apart from National Curriculum tests there was the opportunity to organise data about a range of adviser activity and involvement in schools. The results of two borough-wide perception surveys, completed by Headteachers about the services of the LEA, are also available. From this broad database a selection was made of a small number of schools for episode two, for which it appeared valuable to collect more detailed information. This was done by interview (structured and semi-structured) of teachers and headteachers in the selected schools and by discussion with adviser and advisory teacher colleagues. The motivation for this work was an attempt to uncover significant activities or events which have acted as triggers for change and which would suggest specific links with advisory work.

Part Three: Discussion of results, conclusions and implications for further work

Using the data collected in part two an attempt has been made to answer the research questions posed. The assumptions discussed in part one suggest that this is most likely to be possible if a combination of process and outcome indicators is used to build up a picture of adviser effectiveness.

Definitions of Effectiveness and broad assumptions underpinning the study

The following comprise the major assumptions underpinning the study:

1) A major issue in judging school success is pupil progress (not necessarily in academic subjects).

2) Judgements about adviser and LEA effectiveness rest on the success of schools and the part the LEA plays in this.

3) There is a range of performance indicators (PIs) used at present to judge the effectiveness of schools and LEAs. Most of these are outcome PIs, i.e. summative data.

4) It is possible to generate process PIs for schools and probably for LEAs (for example, the extent to which people reflect on their own performance).

5) A mix of outcome and process PIs may be useful for judging the success of schools and LEAs.

6) Schools may be differentially successful for boys/girls in different curriculum areas and differ in effectiveness over time.

7) More successful schools maximise the potential of both boys and girls.

8) The researcher will be an important part of the research landscape and will not be a detached observer.

9) Any sort of monitoring will affect what is being monitored.

3.0 The role of LEAs and of advisers

The Role of the LEA in England and Wales

In 1997 the DFEE consultation paper 'Excellence in Schools' and the corresponding 'Building Excellent Schools Together' from the Welsh Office [1997] outlined a significant, albeit changed, role for LEAs.

An effective LEA will:

Challenge schools to raise standards and act as a voice for parents;

Provide clear performance data that can be readily used by schools;

Offer educational services to schools who choose to use them;

Provide focused support to schools who are under performing;

Focus their efforts on national priorities such as literacy and numeracy;

Work with the DFEE and other LEAs to help celebrate excellence and spread best practice. [DFEE, 1997, p27]

Defining the Adviser's Role in Torfaen

The role of advisers in Torfaen has been developed in response to the policy documents cited above. There are two parts to this role, that of School Development Adviser and of Subject Specific Adviser. In the case of Mathematics the two functions have been described as follows in a working document which formed part of discussions between the mathematics adviser and his line manager, the principal adviser.

School Development Adviser (about 45% of time)

Mathematics Adviser (about 55% of time)

4.0 Possible Adviser Influences on Pupil Learning

It is difficult to decide which effects can be traced back to, or are in some way attributable to, the work of the advisory team. On the one hand these effects may be quite subtle, and on the other complex and dynamic.

A map of possible influences on pupil learning was constructed using the following: Creemers' work on theorising educational effectiveness [Creemers, 1994]; Sammons, Hillman and Mortimore's [1995] key characteristics of effective schools; work on pedagogy (for example, Gipps and MacGilchrist [1999]); and work on formative assessment [Black and Wiliam, 1998]. The issues of focus on learning, quality of teaching and effective monitoring and evaluation were prominent in these sources. My intention was to combine these features with some pupil- and teacher-level outcomes about which data could be collected, perhaps by interview and observation. Outcomes suggested by Gray and Wilcox [1995] appeared to be useful. The idea for a flow diagram came from Creemers [1994, p 119]; however, this appeared to cover too broad a scope for the purposes of looking at adviser effectiveness. Using figure 1, I consider how advisers operate in three areas:

1) to assist schools to establish reflectivity among staff;

2) to help staff improve the quality of their teaching; and

3) to work with schools on improving monitoring and evaluation.

The fourth area (resource provision and external factors) is not under the control of the advisory team. Advisers assist with the efficient use of the resources available but cannot directly influence working conditions or the allocation of resources.

Adviser’s Work in assisting staff to improve the quality of their teaching

John Harland produced a report on The Work and Impact of Advisory Teachers [Harland, 1988] in which he identified four modes of advisory teacher working. These he called:

Provisioning: providing materials and resources.

Hortative: giving oral feedback to teachers on their work.

Role modelling: demonstrating aspects of teaching.

Zetetic: stimulating critical thinking through challenging remarks.

He concluded that these four modes were effective at different stages of an advisory teacher's work and with different teachers. The establishment and maintenance of trust was vital. Provisioning began to establish trust; giving feedback and role modelling developed trust further. Role modelling could lead to a behaviouristic copying of the techniques in question unless accompanied by a discussion about the links between the underlying concepts to be taught and the methods used. The zetetic mode held the greatest promise and also the greatest risk. Harland found that teachers could react badly to challenging comments and could then reject the advisory teacher's support. On the other hand, teachers who rose to the challenge were likely to develop their practice significantly. Harland's study involved a number of extended interviews with key people in the LEA and at school level. The work was small-scale and exploratory in nature. He did not link the interview data to the performance of pupils in the classes of the teachers involved. The generality of his findings could therefore be questioned. The impact referred to is essentially impact on teacher attitudes and not necessarily on pupils' learning. Further work may show, however, that Harland's four modes of working with teachers help explain why professional development is only effective over time. Trust needs to be built up initially before a teacher is ready to start the relatively uncomfortable process of self-reflection, challenge and changing practice.

5.0 Performance Indicators used to measure school and adviser effectiveness

Extensive work has been undertaken on the measurement of school effectiveness (particularly at the London Institute of Education). A number of studies have involved collaboration between members of the International School Effectiveness and Improvement Centre (ISEIC) based at the Institute and Local Education Authorities in the United Kingdom [Goldstein et al., 2000; Sammons and Smees, 1997; Mujtaba and Sammons, 1999; Thomas, 1997; Thomas and Mortimore, 1996; Smees and Thomas, 1999]. Elliot, Smees and Thomas [1998] describe the work some LEAs typically undertake with their schools, which falls into four stages: summarising the data; putting the data in context; analysing added value; and judging improvement over time. The basis of this work is the use of pupil-level data collected over time and analysed using multilevel statistical techniques. Elliot et al. [1998] point out: 'Feedback within a value added framework does not provide schools with conclusive answers but often points to the need for further data and evidence' [p 66]. To this end many studies report the collection of data on pupil attitudes [Thomas, Smees and Boyd, 1998; MacBeath, 1999]. Goldstein and Spiegelhalter [1996] and Goldstein [1996, 1997] point out that multilevel, value-added approaches are problematic for judgmental purposes and that the main value of this work is in raising questions with schools. Even using multilevel analysis, the vast majority of schools would not have performance scores which varied significantly from one another. This raises the question of what such performance scores actually reveal.
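As a rough indication of the kind of analysis these studies employ, the sketch below fits a random-intercept (pupils-within-schools) model to simulated data. The variable names and data are invented; the cited studies work with real pupil-level attainment and considerably richer model specifications.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 30 pupils in each of 20 schools (all figures are assumptions).
rng = np.random.default_rng(0)
n_schools, pupils = 20, 30
school = np.repeat(np.arange(n_schools), pupils)
school_effect = rng.normal(0, 2, n_schools)[school]   # level-2 variation
prior = rng.normal(100, 15, n_schools * pupils)       # prior attainment
outcome = 10 + 0.9 * prior + school_effect + rng.normal(0, 5, len(prior))

df = pd.DataFrame({"school": school, "prior": prior, "outcome": outcome})

# Random intercept for each school; the estimated intercepts are the
# residual school effects after controlling for prior attainment,
# i.e. a simple "value added" measure.
result = smf.mixedlm("outcome ~ prior", df, groups=df["school"]).fit()
print(result.params)

Consistent with Goldstein's caution, the uncertainty intervals around such estimated school effects typically overlap for most schools, which is why the scores are better treated as prompts for discussion than as rankings.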

6.0 Research Design and Data Collection

The system of displaying and analysing the data may require some explanation. This will be done using the example of one junior school. The results of the Year 6 National Curriculum assessments for mathematics and English in the year 1998/1999 are shown below, sorted by gender.

Percentage of pupils achieving Level 4 or above

         maths   English
boys      57       40
girls     60       82

This particular school was concerned about the performance of girls in maths and of boys in English. The boys, as a group, entered the school with lower attainment scores than the girls. The boys' results in mathematics were thought to indicate a reasonable achievement, given their starting point, whereas the only slightly higher percentage of level 4s for girls represented underachievement. It was argued that if the girls could achieve at the level they did in English, then more should be expected of them in maths. These results suggest to the school that the effectiveness of their teaching for girls in mathematics and for boys in English could be called into question and could form the basis of further investigation. Schools achieving overall higher or lower results may exhibit this same pattern. In order to highlight this I used a system of coding. This is shown below.
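The coding system itself aside, the pattern the school identified can be expressed programmatically. The sketch below is purely illustrative: the ten-percentage-point threshold and the field names are my assumptions, not the coding actually used in the study.

# Hypothetical flagging of the gender/subject pattern described above,
# using the figures from the example school. The threshold of 10
# percentage points is an invented assumption.
results = {"boys": {"maths": 57, "english": 40},
           "girls": {"maths": 60, "english": 82}}

flags = []
if results["girls"]["english"] - results["girls"]["maths"] > 10:
    flags.append("girls underachieving in maths relative to English")
if results["boys"]["maths"] - results["boys"]["english"] > 10:
    flags.append("boys underachieving in English relative to maths")
print(flags)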