CHAPTER FOUR
METHODOLOGY
4.1 INTRODUCTION
The prime aim of this investigation was to develop an instrument to explore the difficulties that students encounter when studying quantum mechanics, and to reveal the mental models students use when solving problems, their interpretations of physical models and their understanding of basic ‘technical’ terminology. The design of this instrument was strongly directed by the materials gathered during the analysis of the preliminary surveys described in Chapter 3. This information assisted in the development of questions and in the selection of appropriate tools for analysis.
4.2 DEVELOPMENT OF THE INSTRUMENT
The development of the instrument involved four stages: (1) selecting the content of the survey questions, (2) identifying the context in which each question could be presented, (3) designing the general layout of the instrument and a suite of tools for analysis of the responses, and (4) taking account of a set of administrative constraints. The development process was iterative, in the sense that at each stage the prior steps were re-evaluated to ensure the integrity of the instrument as a whole.
4.2.1 Content and Context
This section builds upon the recommendations identified in Chapter 3. Each of the four content areas was researched with the aim of finding the most appropriate contexts in which to present the final questions.
A literature search of textbooks, research papers and quiz material was undertaken. This data was combined with ideas obtained informally through discussions with academic staff in order to explore novel ways in which to present each piece of content. These discussions provided a set of recommended contextual environments in which the questions could be set.
The significance of the photoelectric effect
The photoelectric effect is of historical importance in establishing the particle nature of light and is presented in the majority of secondary school and introductory tertiary physics textbooks. The subject matter concerning the two experimental results was explicitly covered in the curriculum of the New South Wales Higher School Certificate and is a focus topic in the Physics 1 and Physics 1A streams at the University of Sydney. The two experimental results covered by secondary and introductory tertiary texts are, first, that ‘for monochromatic light above the threshold frequency, the number of electrons ejected from a metal surface increases as the intensity increases’ and, second, ‘that electrons are only ejected for frequencies of monochromatic light above a certain frequency; and for light below this frequency no electrons will be ejected’ (Young, 1992).
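For reference, the photon-model account of these two results is conventionally summarised by Einstein’s photoelectric equation (standard textbook physics, included here as background rather than as survey material):

```latex
K_{\mathrm{max}} = h f - \phi, \qquad f_0 = \frac{\phi}{h}
```

Here $\phi$ is the work function of the metal surface and $K_{\mathrm{max}}$ the maximum kinetic energy of the ejected electrons: electrons are ejected only for $f > f_0$, and increasing the intensity increases the number of photons arriving per second (hence the number of ejected electrons) without changing $K_{\mathrm{max}}$.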
The key concepts associated with the results lie in the way the particle and wave models of light are used to explain these experimental observations. Textbooks present these ideas in several formats, ranging from textual descriptions through to pictorial representations. The textual descriptions usually describe the wave and particle models of light, then describe the processes and predictions of each. The pictorial models, such as the “bird on a wire” scenario, use a visual setting to describe the processes involved in each of the models.
From the inspection of examination papers and quizzes[1] it is evident that those who wrote these examinations and quizzes expect that a student should be able to clearly describe the observed results of the Photoelectric Effect in terms of the wave and particle models of light.
The nature of the topic and the associated material lends itself to a survey question constructed around an analogy and/or visual model. This would allow exploration both of the students’ understanding of the photoelectric effect and of their interpretation and handling of the particle and wave models of light.
The meaning of Uncertainty
The term “uncertainty” has a variety of meanings in different situations. In everyday language it can mean doubtfulness, a lack of confidence, an uncertain state or unpredictability. Even within the discipline of physics, uncertainty carries different meanings depending on the context in which it is used. For example, the term uncertainty in classical measurement refers to a range of values within which the measured result lies[2], arising from unavoidable discrepancies from measurement to measurement.
In quantum mechanics the idea of “uncertainty” was proposed in response to the wave-particle duality of microscopic objects. Knowing where an object will be at a particular time is a particle-like measurement. But nature tells us that quantum mechanical objects do not necessarily behave like particles: if you travelled back in time and repeated your measurement of the object's position at the same moment, you would not necessarily get the same answer. The results of such a measurement are said to be “uncertain”.
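For reference, this quantum mechanical usage is conventionally expressed by the Heisenberg uncertainty relation (standard background, not survey material):

```latex
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```

where $\Delta x$ and $\Delta p$ are the statistical spreads of repeated position and momentum measurements on identically prepared systems. Unlike the classical measurement uncertainty described above, this spread is intrinsic to the system rather than a consequence of imperfect measurement.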
The preliminary Third Year quiz revealed that there was confusion between the meanings of “uncertainty” and “indeterminism” in the context of quantum mechanics. Therefore it would be appropriate to step back and just investigate the students’ understanding of the term itself, in a context where uncertainty is explicit.
To create the context for a question, an imaginary-world scenario similar to that used by George Gamow (Gamow, 1993)[3], in which quantum effects exist on a larger scale, would be both novel and provide an explicitly quantum mechanical environment. A question could then address the meaning of the term uncertainty in relation to making a measurement.
The nature of Waves
A good understanding of the wave model, and of the set of properties associated with the measurement and description of waves, is crucial and is considered assumed knowledge after the completion of first year study at university. This is especially true in the study of quantum mechanics, because much of the mathematics and many of the associated concepts are dominated by the ideas contained in wave mechanics. The preliminary Third Year quizzes revealed the following result (see Chapter 3).
The students articulated their understanding of ‘What is a wave?’ via one of a small number of paths, with the result that fifty percent of the sample did not include in their response the defining properties of interference and/or diffraction. It is therefore appropriate to repeat the experiment, especially with First Year students, to see if these same two paths are evident earlier in their academic studies and across a broader cross-section of students. The preliminary study had revealed a series of distracters which could be utilised in a tick-a-box response section. It was expressly decided that the accepted ‘correct’ response of interference/diffraction would not be included in the distracters.
The nature of Energy Levels
The idea of an energy level is introduced fairly early in Secondary School science curricula, in topics associated with atomic structure. It first appears in discussions concerning the Bohr model of the atom and is revisited in topics on line spectra and the photoelectric effect. Introductory university level physics and chemistry texts further expand the idea to types of orbitals, then to the band structures associated with semiconductor materials. The types of questions supplied in secondary and undergraduate textbook quizzes are mainly limited to simple calculations concerning the differences in energies between levels, and to questions that ask the student to draw pictures of orbits or energy level diagrams.
It would therefore be interesting to probe what ideas the students hold about these basic ideas and models. Two prominent questions that could be explored are the very basic question, “What is an energy level?”, and one which probes the connection with standing waves, “What does it mean for a wavelength to fit into an atom?”. There had been no preliminary research into these questions, and therefore open-ended dialogue-type questions would be appropriate to elicit a range of student ideas and conceptions.
4.2.2 Guidelines
The development of good questions requires that certain guidelines should be followed to make sure that the final instrument will yield valid and analysable responses.
Design and Layout
The guidelines on good design practice presented in Cohen and Manion (1994) were considered. The points given were adapted to the environment of the current study to assist in the development of the instrument. The following recommended design and layout practices were adopted:
· The survey must look attractive and interesting.
· The question sheet will be well laid out on a light buff coloured paper.
· The answer booklet will include the question and clear instructions with ample space for the student to write and/or draw their responses.
· The placement of a tick in a box will be used where appropriate and questions with parts should be sub-lettered.
· Instructions should be on the survey question sheet and in the answer booklet.
· The first question should be interesting and stimulating.
· The sequencing of the questions needs to be varied to ensure a consistent mix of responses.
Logistics
The survey instrument is subject to a number of logistical and administrative constraints:
· The survey will be conducted in a lecture theatre.
· The survey must be distributed to, and collected from, approximately 90 students per lecture.
· 40 minutes will be given to complete the survey.
· The analysis must quickly provide feedback to the students.
· The questions require the students to construct carefully thought out answers and provide a rich response.
· The survey must provide some mechanism of immediate feedback to the student and lecturer.
Form of the questions
As reported in the preliminary analysis of the First Year multiple-choice questions, the responses give no real measure of a student’s understanding or conceptual development. It was noted that, at best, one could only be sure of those students who “got it wrong”.
Open-ended questions tend to generate many diverse responses, which can result in coding and validity problems in the final analyses. The analysis of diverse responses is also very time consuming. As a prerequisite, the survey requires an initial formative feedback mechanism for lecturers and students; therefore a reasonably quick and easy-to-analyse format will need to be utilised by the researcher. Purely open-ended questions are therefore inappropriate.
To strike a balance that allows an initial quick analysis of responses, it was decided that questions for which good distracters are known be developed in a tick-a-box-and-explain format. Questions for which no information is available to develop distracters should follow an open-ended format, with sufficient prompting to direct and contain the diversity of the responses. This type of open-ended question would hopefully elicit a set of distracters for future tick-a-box-and-explain questions.
Guideline summary
Addressing the constraints and aforementioned guidelines, it was decided that one question would be devoted to each area/topic and that the overall size of the question sheet be confined to one page (double sided). The form and style of each question would be as follows: the “Nature of Waves”, “Photoelectric Effect” and “Uncertainty” questions would take the tick-a-box-and-explain form, because known distracters had been isolated in the preliminary analyses, and the “Structure of the Atom” question would be open-ended.
4.2.3 Intended Analysis
The data collected from the analysis would be recorded in a spreadsheet. Each row would contain all the information pertaining to an individual student, and each column would contain information pertaining to a single item (either a tick response or an agreed category). A one [1] in a cell would indicate that the student’s response met the criteria for that item, whereas a zero [0] would indicate that it did not.
This method of data recording would provide a simple format for comparison between any combination of the items.
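A minimal sketch of this coding scheme, with hypothetical item names and data, illustrates how the binary format supports comparison between any combination of items:

```python
# Sketch of the intended spreadsheet coding (all names and data are
# hypothetical): each row is one student, each column one item, and a
# cell holds 1 if the response met the item's criteria, else 0.
responses = {
    "student_01": {"wave_interference": 1, "wave_motion_only": 0},
    "student_02": {"wave_interference": 0, "wave_motion_only": 1},
    "student_03": {"wave_interference": 1, "wave_motion_only": 1},
}

def item_overlap(data, item_a, item_b):
    """Count students coded 1 on both items, the basic operation
    behind comparisons between any pair of items."""
    return sum(1 for row in data.values() if row[item_a] and row[item_b])

print(item_overlap(responses, "wave_interference", "wave_motion_only"))  # 1
```

Because every cell is already 0 or 1, such overlaps reduce to simple counts, which is what makes this recording format convenient for quick cross-item comparisons.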
Tick-a-box Responses
A simple correctness analysis would be performed by comparing the student’s response to the approved sample tick-a-box response. This analysis would in the first instance provide a fast feedback mechanism for the students (an essential requirement of the formative assessment process) and secondly provide timely statistical information for the lecturers.
Open-ended Responses
Phenomenographic
A phenomenographic analysis would examine how the students think about the concepts presented, seeking to categorise the responses into qualitatively different groupings based solely on what the students say or write.
The responses to each question would be examined independently by each member of the research team in order to identify a provisional set of categories. A meeting would then be held to agree on the suitability of the analysis; this decision would depend on whether the phenomenographic analysis of the responses yielded a set of identifiable, codable categories.
Assuming the analysis was to proceed, a final set of categories and an associated set of shared meanings would be developed. A team of three researchers would then independently re-categorise all the responses in accordance with this set of agreed shared meanings. These analyses would then be compared and any differences resolved by discussion. The phenomenographic analysis would provide a preliminary mapping of the students’ perceptions for each of the concepts/models presented in the students’ response.
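The re-categorisation and comparison step can be sketched as follows (coder names, categories and data are hypothetical); responses on which the independent codings differ are the ones flagged for resolution by discussion:

```python
# Hypothetical sketch of comparing three independent categorisations:
# each coder assigns one agreed category label per student response.
from itertools import combinations

codings = {
    "coder_A": ["property", "image", "evidence", "image"],
    "coder_B": ["property", "image", "property", "image"],
    "coder_C": ["property", "evidence", "evidence", "image"],
}

def disagreements(data):
    """Return indices of responses where any pair of coders differs;
    these are resolved by discussion at a team meeting."""
    coders = list(data.values())
    n = len(coders[0])
    return [i for i in range(n)
            if any(a[i] != b[i] for a, b in combinations(coders, 2))]

print(disagreements(codings))  # [1, 2]
```

Responses 0 and 3 are categorised identically by all three coders, while responses 1 and 2 would be brought to the team for discussion.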
Context
A contextual analysis of the responses is considered important because the context in which a student couches their response may be closely related to the mental models they have constructed. It may also be closely associated with the difficulties that students encounter in answering questions. It was expected from the preliminary analysis that students would offer their responses in a variety of contexts. For example, some would choose to mention properties of a particle (what it has); some would offer a metaphor or pictorial image (what it is like); and some would bring forward experimental evidence (what it does). The appropriate categories would be agreed upon and coded.
Content
The use of terminology by a student is an important component of their response; therefore the appropriateness of a content analysis would be assessed by the research team. A record would be kept of all the terms and ideas presented in each response.
Correctness
The correctness analysis of the written responses would not be considered as important as the preceding analyses. A simple “correct” or “incorrect” would be recorded against each written response; a detailed marking scheme would therefore not be developed for each question. Correctness would be gauged by comparing the student’s response to the sample answer. A response would be considered “correct” if it was equivalent to the sample answer; an additional contradictory statement or a lack of detail would render a response “incorrect”.