Empirical research in chemical education – priorities and paradigms
Keith S. Taber
Faculty of Education, University of Cambridge
Chair, Chemical Education Research Group,
Royal Society of Chemistry
Dr. Keith S. Taber
University Lecturer in Education
University of Cambridge Faculty of Education
Homerton Site
Hills Road
Cambridge CB2 2PH
Note: Dr. Taber is unable to attend the symposium in person, and - at the suggestion of the organisers - his contribution is presented by another member (and immediate past chair) of the RSC Chemical Education Research Group committee, Alan Goodwin of Manchester Metropolitan University.
Many thanks are due to Alan, and also to another member of the group, Dr. Vanessa Kind for her kind support.
Empirical research in chemical education – priorities and paradigms
Abstract:
The debate about the utility and value of educational research is one which has been active at several levels. These include the generic level, a level focused within science education, and a third within the community who might identify with the label “chemical education researchers”. A number of threads to this debate may be identified and perhaps separated.
There is a general suspicion that there is a particular problem with the notion of research in education. Some suggest that activity which counts as academic research is of little use to education, and activity that furthers the practice of education cannot be considered genuine “research”. As chemical education researchers we can (and will) argue with this position. However, there is an issue relating to the applied nature of education research that means that such a criticism cannot be ignored.
Less controversially, there does seem to be an issue with the question of how research is disseminated to improve practice (rather than to inform further research): something that seems to be bound up with professional expectations and institutional norms. Put simply, sometimes it does not seem to be anyone’s job to try and apply research findings.
Another key issue relates to the model of research which chemical educators feel should be followed. There are issues here of viewpoints sometimes imposed by professional backgrounds. For example, qualitative research is not always understood by journal referees, so is sometimes evaluated by inappropriate standards.
Within the area of student learning, several decades of work have examined alternative conceptions, which according to some commentators are now producing diminishing returns. This is a useful theme to explore. In the ‘constructivism is dead’ view we can see the outcome of the failure of researchers to establish clear priorities for the field. Within this context we can see the absence of any coherent research programme, with a consequent mis-match of methodological approaches, and even an apparent vagueness about the purpose of the research enterprise.
In this paper we will consider these issues and problems, and ask what lessons are to be learnt for planning future research in chemical education.
Introduction
I was invited to contribute to the symposium to provide a perspective from beyond Germany, and so I wish to begin by giving my personal impressions about some aspects of education, science education, and educational research in the UK. I will then move on to discuss something at once both narrower and broader: the specific case of empirical research in chemical education, but internationally.
A personal view of the UK experience: control and quantify.
Educational researchers in the UK have been through a period of feeling that their work is widely considered to be of little worth. The British Educational Research Association has felt the need to defend its activities, in the face of a Chief Inspector of Schools who ‘knew’ what good education was, and seemed to dismiss any research that might have informed policy (Mortimer, 1999). We seem to be moving beyond this point now, but at the cost of increasing government interference in the activities of the profession. Whereas the UK used to be a somewhat laissez-faire system, where innovations could be readily piloted and implemented (ideas such as Nuffield-sponsored courses based on fresh educational thinking, and a wide range of examination syllabuses to meet the needs of different groups of pupils), we have become increasingly subject to centralised control, statutory and non-statutory ‘guidance’, and a mentality that seems to want to quantify and rank order anything it can readily measure (and by implication downgrade anything it cannot).
We now have a National Curriculum that lays down the science to be taught at different age ranges (DFE/QCA, 1999); and another one which tells us how to train the science teachers who will deliver the school curriculum (DfEE, 1998). It is also worth noting that even though potential trainee teachers have to have ‘passed’ the school leaving examinations in maths and English before they can be enrolled on the course, and despite there being government prescription of what needs to be achieved during a teacher-training course, the trainees are still additionally required to take and pass centrally set ‘skills tests’ of their literacy and numeracy before they complete their training.
In case that is not prescriptive enough, we have approved schemes of work to cover the school science curriculum up to age 14 (when approved examination syllabuses take over), and now an official ‘strategy’ on how teachers should go about teaching the recommended schemes in order to cover the required content.
This does not mean that we have a static system: hardly! It’s always in flux. The ‘standards’ document for training the teachers changes this year. The way 16-18 year olds are examined has just been changed in the past few years. The curriculum for ‘less academic’ 14-18 year olds is likely to change soon. But all of this is led from the centre. The present government seems to be very keen on developing a variety of provision - as long as it can lay down the rules of exactly what that variety should be.
The examination system at age 16 was reorganised some years ago, and since then new national tests have been introduced at age 14, and even in primary schools. The examination system at 18 has been changed in the past few years, in part to make it normal practice for students to take formal examinations at age 17 as a half-way point between the school leaving examinations at age 16 and the university entrance level examinations at age 18.
And it is not only the pupils that are measured and graded, so are the schools, with regular inspections where failure to meet expectations can have very serious outcomes (including management from without, and ultimately school closure).
And higher education is part of this mind-set as well. In the university there are assessments, and scores, for a department’s teaching quality. And there is a separate grading of the department’s research.
Educational research (as in other academic areas) is important enough for it to be put through a quality audit every 4 or so years that impacts on individual staff - the shame of not being included in the exercise because your output will not reflect well on your institution - and on the university. The latter is not just about the prestige: research funding is significantly dependent upon the grade achieved. Each department has to make decisions about which staff are considered research-active: being perceived as successful depends upon both the overall grading and upon having most staff research-active.
When the teacher education provision is assessed for teaching quality (interestingly, an inspection process that is more closely tied to schools inspection than the teaching quality assessment in other areas of the University’s work), the trainee teachers have to be graded by the training partnerships (i.e. the university department with its partner schools). The inspector will observe a sample of trainees at work in schools and make her own gradings. The overall inspection grade awarded to the department depends in part on the university classifying enough trainees in the top grade: but also upon making classifications that match the inspector’s decisions. Unless the institution can recruit, retain and train sufficient numbers of high quality trainees to meet these criteria, it will be in a “damned if it does, damned if it doesn’t” situation.
Strangely, despite this emphasis on achievement, quality, evidence, and the like, education policy still seems to be largely dependent upon the instinctive beliefs of the few who hold the power. The former chief inspector who was judged to have done so much damage to the credibility of educational research has moved on and been replaced (so now his criticisms of the education system are restricted to newspaper columns). However, the legacy of his time in office will not be easily shaken off. In my view this has been a change in the way teachers’ professionalism is understood. Teachers are expected to be more highly skilled than ever, but they are not to be trusted with making deeply educational decisions: the teacher should (according to this new order) be an expert at executing educational policy, but should not have a role in setting the agenda.
This has parallels with what has been happening in ‘continuing professional development’ (or ‘in-service training’). In my 20 years in teaching I have seen a move away from teachers being offered, and funded to attend, a wide range of in-service courses that commonly included opportunities for higher degree study. Money is certainly still available, but largely targeted at categories of course that fit a central agenda. In a similar way I feel that educational research is being brought under more centralised control: with major funds for educational research being channelled through central projects with pre-established time-scales and objectives.
Of course many of the developments are in themselves sound, and I would not wish to suggest that there is no good practice in the UK system. But decisions that used to belong largely to the individual teaching professionals (decisions about what to teach, and when and in how much depth to teach it; decisions about what to study for one’s own professional development; decisions about which aspects of practice to usefully research) seem to be increasingly in the hands of government, or government-sponsored bodies.
I don’t wish to say much more about this, except to make three specific points relating to science education and educational research.
Firstly, despite various drafts, the input of the professional and learned bodies, consultations, and revisions, the national curriculum for science is seriously flawed. In an attempt to make sure that all students followed a course of broad and balanced science, a curriculum was produced which included what were seen as all the key areas of biology, chemistry, physics and earth science. A curriculum full of content.
Now I would suggest that this gives very good coverage for the most able students who wish to proceed to college and university level study of science and related fields. However, much of the content of the curriculum is going to be of little direct relevance to many pupils. Worse, this over-emphasis on content ignores key findings from educational research.
One does not need to be entirely convinced by the Piagetian stage theory of conceptual development to recognise that the Piagetian research programme has explored the very important question of pupils’ readiness to cope with the abstract nature of much science (e.g. Shayer & Adey, 1981). This research suggests that the intrinsic nature of many of the topics in school science means that they are met too early for pupils to meaningfully understand them.
Now I certainly do not believe that pupils’ conceptual development is purely, or even mainly, controlled by genetics, and so suitable programmes should be able to ‘accelerate’ their development to some extent. I also believe that it is possible to teach simple versions of many complex ideas in ways that pupils can get some kind of authentic feel for the concepts involved (cf. Bruner, 1960). However, my own work convinces me that many conceptual problems (and alternative conceptions) in chemistry are largely due to prior teaching and oversimplifying chemical concepts - so I feel that teaching some ideas too early may well do ‘more harm than good’ (Taber, 2000a, 2001a, 2002a).
I may be wrong, but so might the authors of a curriculum that seems to be based on a belief that the most important thing is to include all the key ideas in a subject! It would be useful to have some more research to explore this issue, if this would be possible without seeming to be tied to a pre-established ideological position. (In other words, can any research to investigate whether an over-packed curriculum is detrimental avoid being seen as an attempt to ‘undermine standards’; move away from the current emphasis on ‘back to basics’ and ‘traditional approaches’; and return to a ‘progressive’ approach which has been judged as misguided by policy makers?)
My second, brief, point concerns practical work. It has long been a British tradition that school science is a practical subject. Now many school practicals may have had confused aims, may well have misrepresented the practice of science, and no doubt often failed to demonstrate or persuade. I’m sure some practical work in schools was quite poor - but there was always a good variety.
With the advent of new exams at 16, and then the national curriculum, came a standard way of assessing practical work: or rather assessing so-called investigative practical work. This reduced investigations into a small number of constituent parts, which could be assessed, and scored, separately. It is very hard, then, for teachers not to start considering that the only practical work worth spending time and effort on is that which can be analysed and assessed on the national scheme. In many schools other types of practical work necessarily suffer (especially when there is so much content to work through).
Finally, the most significant research programme in science education over the past two decades has surely been that which follows from the constructivist perspective (discussed in more detail later in the paper). Now some aspects of this have been absorbed into the official orthodoxy. Teachers are expected to know about common misconceptions in a topic, and are encouraged to use diagnostic testing to elicit pupils’ ideas at the start of a topic, and to plan teaching accordingly (DfEE, 1998). This is all good. But the other side of the research programme - that of building up constructivist teaching schemes (Driver & Oldham, 1986) that respond to pupils’ ideas (in the way real science works!), and take time to explore pupils’ ideas through investigative work - does not fit into the busy teaching schemes, where concept areas often get only a few lessons before the class moves on.
Note the irony that constructivism encourages genuine investigative practical work sparked by pupils’ thinking: but a national curriculum that assesses investigations encourages teachers to use standard practicals where they can be sure the pupils have a chance to demonstrate attainment against the set criteria. Genuine investigations are much too open-ended and risky!
One major outcome of much research into learning science is the realisation that significant conceptual learning is a slow process that depends upon learners having time to explore and apply scientific ideas in a range of contexts (e.g. Taber, 2001b). The UK system seems very much ‘forces this week and photosynthesis next’. And don’t let things over-run or they won’t get to ‘do’ acids before the half-term holiday!
Of course there are many imaginative and intelligent teachers who have the vision and courage to do wonderful things within the framework of the curriculum. It is only statutory to cover certain topics at each ‘key stage’, e.g. during ages 11-14: teachers can organise the teaching of the content in any way they wish. In principle this could include extended project work, integrated science topics etc. In practice most schools are ‘playing safe’, especially where they suspect the inspectors will be looking to see if the approved schemes of work are being followed.
Given that most teachers are overworked and feel the pressure of covering the material before the next assessment hurdle, the highly specified system is a significant constraint. And one that is not well informed by educational research.
Crossing the borders?
But then perhaps educational research seldom makes contact with practice to the extent we might wish? Onno de Jong (2000) has discussed the problem of transfer of research findings into the classroom. Put simply, over-simply perhaps, researchers are funded to research, and teachers are paid to teach, so who has responsibility to see that research is used to inform teaching?
Of course there are avenues for researchers to report their findings to teachers (in the UK we have meetings of the Association for Science Education, and its journal the School Science Review, the Royal Society of Chemistry (RSC) periodical Education in Chemistry, etc.). But reading a paper or listening to a presentation is unlikely to help teachers change practice significantly or quickly - and, anyway, researchers are required to publish in the more prestigious research literature (seldom read by teachers, and at a cost beyond school libraries or departmental budgets) if they wish to help their departments in the Research Assessment Exercise mentioned earlier!