Report of an investigation into assessment practice

at Northumbria University

November 2006

Joanne Smailes

Graeme Arnott

Chris Hall

Pat Gannon-Leary

1. Introduction

2. Methodology

2.1. Quantitative analysis of module descriptors

2.2. Utilising the module descriptors

2.3. Qualitative research - Focus groups and interviews

3. Findings

3.1. Methods of Assessment

3.2. Amount of Assessment

3.3. Distribution of Assessment

3.3.1 Assessment timing across Schools

3.3.2 Focus group perceptions

3.4. Frequency of assessment

3.5. Assessment Criteria and Learning Outcomes

3.6. Formative Activities

3.7 Feedback

3.7.1 Feedback practice

3.7.2. Providing distinguishing feedback

3.7.3. Improving the value of feedback

3.8 Student involvement in assessment

4. Conclusion and Recommendations

4.1. Module Descriptors

4.2. Methods of Assessment

4.3. Amount and Frequency of Assessment

4.4. Understanding of Assessment Requirements and Feedback

4.5. Student Involvement in Assessment

4.6. Point for further consideration

Bibliography

Appendix 1: Suggested Format for Module Descriptor

Appendix 2: Schedule of Interview / Focus Group Questions for Students

Appendix 3: Schedule of Interview / Focus Group Questions for Staff

School Acronyms used throughout the report

AS / School of Applied Science
BE / School of Built Environment
CEIS / School of Computing, Engineering & Information Sciences
DES / School of Design
HCES / School of Health, Community & Education Studies
LAW / School of Law
NBS / Northumbria Business School
SASS / School of Arts and Social Sciences
PSS / School of Psychology and Sports Sciences

1. Introduction

Assessment, particularly its role in the process of learning, is a subject of continual contemplation across all educational sectors. One of the objectives in Northumbria’s Learning and Teaching Strategy (2003 – 2006) was a review of the nature and pattern of assessment following the restructuring of the University’s credit framework, which recommended a shift toward 20-credit modules and/or a minimum module structure of 20, 20, 10 and 10 credits in a semester.

In addition, although unrelated, at the time of the preliminary investigations and construction of this report, results from the first two National Student Surveys (2005 and 2006) had been published. The survey included 22 statements, each rated by students on a scale of 1-5 (Definitely disagree – Definitely agree). In the ‘Assessment and Feedback’ section, the scores received by Northumbria were 3.4 and 3.6 respectively. Although these scores indicate student satisfaction, they are lower than expected.

Northumbria University has been recognised for its excellence in assessment practice by the HEFCE through the establishment of a Centre for Excellence in Teaching and Learning (CETL) in Assessment for Learning (AfL). The CETL directors (Professors Liz McDowell and Kay Sambell) led the Impact of Assessment project, the last significant review of practice within the institution, completed in 1994-96. Twelve years on, their findings offer a very useful benchmark for comparison purposes, and this review, whilst providing a current overview of the nature and pattern of assessment and drawing upon some examples of CETL practice, will also seek to highlight potential improvements in efficiency and effectiveness.

2. Methodology

In the context of such developments, this research project was designed around the triangulation of data collection methods in order to develop a detailed picture of assessment practice, and perceptions thereof, at Northumbria University. Data collection methods, supported by literature review, included a review of assessment as recorded in module descriptor information, as well as student focus groups, academic staff focus groups, and individual interviews with staff.

2.1. Quantitative analysis of module descriptors

Initially, a quantitative paper-based investigation of assessment type and timing across all Schools was undertaken. Using University information systems, the total number of active programmes across all Schools was determined. A stratified sample of programmes was then drawn, based on School size (the square root of the total number of programmes within a School) and ensuring that both undergraduate and postgraduate programmes were included. In total, forty-three programmes across ten[1] Schools were selected. The core (i.e. compulsory) modules from each of these programmes, a total of 604, formed the sample from which assessment data was extracted.
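The square-root allocation can be illustrated with a minimal sketch; the School labels and programme counts below are hypothetical and are not the figures used in the study:

    import math

    # Hypothetical programme counts per School (illustrative only; the real
    # counts were obtained from University information systems).
    programmes_per_school = {"School A": 30, "School B": 90, "School C": 12}

    def allocation(total_programmes):
        """Square-root allocation: sample roughly the square root of the number
        of programmes in a School, rounded up so every School contributes."""
        return math.ceil(math.sqrt(total_programmes))

    for school, total in programmes_per_school.items():
        print(school, "- sample", allocation(total), "of", total, "programmes")

    # The requirement that both undergraduate and postgraduate programmes be
    # included applies when the actual programmes are drawn; that step is not shown here.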

Module descriptors feed the quality assurance cycle with data against which modules can be approved for delivery and subsequently monitored, through external examination procedures and through internal and external quality assurance audits. Commonly they record the purpose and scope of a module, its intended learning outcomes, and the means by which these will be achieved, i.e. the syllabus, learning and teaching strategy, student workload, and assessment types and timing.

The important element of the descriptor, for the purposes of this research project, was the Module Summative Assessment statement which indicates all pieces of assessment associated with a module, their types, weighting, and submission points in the academic year (Figure 1).

Fig. 1: Typical Module Summative Assessment information from a Northumbria University module descriptor

For each programme included in the sample, module descriptors for each core module were obtained and the Module Summative Assessment information coded or quantified for incorporation into an Excel spreadsheet for analysis.

The spreadsheet recorded for each module:

  • Module code
  • The semester basis of the module (semester 1, semester 2 or Year Long)
  • The number of assignments associated with the module
  • Assignment type (categorised according to Figure 2)
  • Submission points for the assignments

Fig. 2: Assessment type categories used in the analysis

A / Written assignments, incorporating essays, reports, articles, coursework etc.
B / Practicals and Labs, incorporating practical tests, lab work, lab reports, logbooks and workbooks
C / Portfolios, includes professional, research and practice portfolios
D / Subject Specific Skill Exercises, for example, digital production, model making
E / Exams, i.e. formal exams, class tests, seen exams, phase tests, MCQs
F / Subject Specific Product Creation, products such as a garment or fashion product, a programme, a video
G / Projects and Dissertations
H / Oral Communication based assessment incorporating presentations, vivas, speeches, debates, poster presentations
U / Unspecified

2.2. Utilising the module descriptors

The task of utilising information contained in the descriptors was complicated in a variety of ways. Most significantly, there was great variation in the terminology used by authors to describe assessment type.

On some occasions additional clarification of what was intended by a description in the Module Summative Assessment statement was possible from information contained in the Learning and Teaching Strategy statements. However, this was not consistently available or the relevant sections completed.

In order to avoid investigator bias or error, descriptions received the minimum of interpretation. This, however, resulted in a large number of assessment descriptions, ranging from those that were quite precise, such as ‘specification of a product design’, through the more common forms, such as ‘open-book examination’ or ‘critical essay’, to the vague, for example ‘written assignment’. Consequently, for analysis purposes the range of assessment types was condensed into a more concise set that could accommodate these extremes of detail. These are listed in Figure 2 above.
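As an illustrative sketch only, the coding and condensation steps can be thought of as a lookup from the wording recorded in a descriptor to one of the category codes in Figure 2, feeding a per-module record of the kind described in section 2.1. The wordings, module code and submission points below are hypothetical, and the real analysis was carried out in an Excel spreadsheet rather than in code:

    # Illustrative lookup from raw assessment descriptions to Figure 2 codes.
    CATEGORY_LOOKUP = {
        "critical essay": "A",           # Written assignments
        "written assignment": "A",
        "open-book examination": "E",    # Exams
        "lab report": "B",               # Practicals and labs
        "research portfolio": "C",       # Portfolios
        "poster presentation": "H",      # Oral communication based assessment
    }

    def categorise(description):
        """Map a raw assessment description to a Figure 2 code,
        defaulting to 'U' (Unspecified) when no match is found."""
        return CATEGORY_LOOKUP.get(description.strip().lower(), "U")

    # One row of the analysis spreadsheet (hypothetical values).
    record = {
        "module_code": "XY0123",                      # hypothetical module code
        "semester_basis": "Semester 1",               # Semester 1, Semester 2 or Year Long
        "number_of_assignments": 2,
        "assignment_types": [categorise("critical essay"),
                             categorise("open-book examination")],  # ["A", "E"]
        "submission_points": ["Week 9", "Week 15"],   # hypothetical submission points
    }
    print(record["assignment_types"])  # -> ['A', 'E']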

2.3. Qualitative research - Focus groups and interviews

Themes for discussion in focus groups were identified from a literature review, analysis of the data gathered in the paper-based exercise, and concerns emerging from the National Student Survey. Two principal stakeholder groups were selected with whom to discuss these themes, namely students and academic staff.

The original intention was for members of the student focus groups to originate from the programmes sampled in the paper-based review. Programme leaders in the University were emailed, with a request for student representatives who might be approached to take part in the research. Although there was a good response from the programme leaders, follow-up messages to students produced a very poor response, so alternative methods of procuring a sample had to be found.

The Students’ Union was contacted and asked to email programme representatives requesting their co-operation, stressing that this was an opportunity for them to make their voices heard and influence the assessment process. An incentive of a £10 shopping voucher was also provided. This had a better yield than the original method, but the response rate was still disappointing. Two student focus groups were arranged in the Students’ Union Training Room[2]. Attendance figures were again low, so three students were approached in the Students’ Union building on the day and agreed to take part.

A total of 15 students attended the two focus group sessions (one of 9 students, one of 6). Ten were female and 5 male, and all were undergraduates, ranging from 1st to 4th year.

Although students from all Schools were invited to take part, the programmes ultimately represented were from the subject areas of Drama, Law Exempting Degree, Business and HR Management, History of Modern Art, Design and Film, Business and Marketing, Psychology, Business Information Systems, Law, Geography and Marketing, therefore representing eight of the nine Schools. The majority of the student participants were programme representatives and were therefore accustomed to attending meetings and to presenting the views of others as well as their own.

As focus group numbers were lower than expected, data from student focus groups from a complementary research project, focussing specifically upon feedback, has also been included where relevant. Students who took part in these focus groups came from the following Schools: AS, BE, CEIS, DES, LAW, NBS and SASS.

Staff response for focus group participation was also low. One focus group was held but only a small number of Schools was represented. Therefore this was supplemented by a series of telephone interviews, to improve cross-university representation.

All focus group meetings were tape-recorded and supported by notes made by a non-participant observer.

3. Findings

An important aspect of contemporary educational practice is the stress placed upon assessment not only as an exercise in acknowledging achievement but also as one of the principal means of supporting learning.

Gibbs and Simpson (2004/5) reinforce this approach in their article reviewing assessment arrangements, in which they claim ‘it is not about measurement at all – it is about learning.’ This represents the fundamental shift in assessment practice that has begun to take place in higher education in the last decade.

It is evident that careful assessment design is necessary to ensure that assessment supports learning and that student effort is directed and managed. The number and nature of assessment tasks, their frequency or infrequency, their scheduling and their explanation are all significant factors to consider, as they can individually and collectively influence the effectiveness of assessment. Similarly, carefully constructed approaches to providing feedback in a relevant, timely and meaningful fashion are vital in supporting learning.

These factors feature in the findings of this survey of assessment practice at Northumbria University and will be discussed in terms of assessment methods, assessment load and distribution of student learning effort, formative activity, feedback practice and the involvement of students in the assessment process.

3.1. Methods of Assessment

A wide variety of methods is used to assess learning at Northumbria University. The review of the types of assessment recorded in 604 module descriptors identified 77 different ways in which assessment was described. For analysis purposes these have been condensed into nine categories (see section 2.1). Figure 3 illustrates each assessment method as a proportion of all assessment recorded within the sample.

Fig. 3: Proportional representation of all Assessment Methods used (n=1009).

Written assignments / 37%
Practicals and Labs. / 7%
Portfolios / 10%
Subject specific skill exercises / 4%
Exams / 22%
Subject specific product creation / 2%
Projects and dissertations / 9%
Communication / 8%
Unspecified / 1%

The most utilised assessment methods were written assignments (37% of all assessment) and examinations (22%), although there were significant differences in their employment across the Schools, as illustrated in Fig. 4.

Fig. 4: Assignment types by School

School / Written Assignment / Exams / Practical / Portfolio / Sub Spec Skill / Product Design / Comm / Project
AS / 21.6% / 29.1% / 20.3% / 9.5% / 2.0% / 1.4% / 8.8% / 7.4%
BE / 61.7% / 14.3% / 2.5% / 4.2% / 4.2% / 4.2% / 9.2%
CEIS / 24.4% / 35.0% / 23.9% / 2.4% / 5.7% / 6.5% / 4.1% / 0.8%
DES / 29.0% / 1.4% / 4.3% / 14.5% / 4.3% / 13.0% / 33.3%
HCES / 44.9% / 3.4% / 2.7% / 29.3% / 3.4% / 3.4% / 3.4% / 9.5%
LAW / 68.8% / 3.1% / 9.4%
NBS / 31.1% / 46.4% / 7.3% / 5.3% / 1.3% / 4.6% / 3.3%
SASS / 38.2% / 18.3% / 0.5% / 9.1% / 19.4% / 12.9%
PSS / 36.4% / 21.2% / 12.1% / 6.1% / 6.1% / 3.0% / 6.1% / 9.1%

Four Schools in the survey employed all of the methods. However, there is a tendency for Schools to rely, quite markedly, upon a particular method or a small range of assessment methods. For example, BE heavily utilises written assignments (61.7% of assessment), as does LAW (68.8%). DES and PSS, on the other hand, rely mostly upon a pair of assessment methods: DES places emphasis upon written assignments (29%) and projects (33.3%), whilst the approach in PSS is written assignments (36.4%) and examinations (21.2%). Why such differences occur is offered as an issue for further debate. Is this due to the nature of the discipline?

In two Schools, examinations account for more than one third of the total assessment methods used. However, this is not necessarily reflected across all programmes within these Schools. For example, within one programme in NBS examinations form 65% of the assessment across its core modules, whilst another programme has no form of examination.

It is interesting to note students’ reaction to the use of examinations.

“You could go to every single lecture, and still not do well in the exam.”
“Our teacher gave us pointers to use, saying ‘this will come up and that will come up, so study this’, and it didn’t come up. So students were all complaining about what happened in the exam… if she’s going to be giving us tips, they should be the right tips, not misleading us.”
“On our course, a lot of people were questioning how some lecturers gave out essay questions that were going to be in the exam, and other lecturers didn’t even give topics to revise …so it would be good if there was some sort of criteria as to how much help you would be given, as so many people spent time working on stuff that was just not related to their exam.”

These views are not that dissimilar to those expressed by students in the Impact of Assessment project (McDowell and Sambell, 1999), a decade earlier, indicating some consistent and longstanding views:

“Exams here are so pointless, the questions are so precise. You are never going to need that kind of useless information.”
“If you have come from a working background, exams are not a lot of use to you…They suit people who have come from school, who have been taught to do exams and nothing else. I’m out of that routine.”

Although strong feelings were expressed, students were not necessarily anti-examination: concerns were more to do with the weighting placed upon certain assessment methods. Students expressed a preference for assessment that is not based completely on examinations. In the feedback focus group, SASS students cited friends studying in AS who had many small assignments, in contrast to their:

“one 3,000 word essay …worth 100%. If you don’t do well on that you are screwed.”

They also commented that:

“You need something like that happening all the way through so you have got into the work”
“If you have four phased miniature exams worth 5% each, at least you get the chance to break up your feedback.”

Within focus groups, students generally described their experience of assessment as encompassing a number of methods including presentations, workshops, essays, lab reports, examinations, maps, and supervised professional practice. Most felt that the range of methods used was reasonable, although those who were assessed entirely by examination felt that they would prefer to have a mixture of assessment types.

Staff responses highlighted yet more types of assessment, including an all-day project that started in the morning and finished in the evening, and a week-long project that was issued on a Tuesday morning, and completed by Friday afternoon (assessed by presentation to professionals in the field).

It is clear that, within Northumbria, a wide range of assessment methods is employed, and this is in general appreciated by students. During the analysis process, assessment was described in over 70 different ways, but it was clear that these in essence fell into eight main categories. In this regard, an issue is offered for further debate: should module tutors be restricted to using a more limited range of assessment descriptions?

Recommendation:

Within the pedagogic literature, examinations are well known to be anxiety-associated assessment methods, which in turn are linked with the least desirable (surface) approach to learning (Ramsden, 2003). It is recognised that examinations are a popular requirement for professional exemptions. However, for programmes where examinations make up more than 40% of the overall assessment, it is recommended that a clear rationale be provided for this choice. Additionally, programmes should be encouraged to ensure that a good mix of assessment methods continues to be employed and that their use is spread across a module’s or programme’s duration.

3.2. Amount of Assessment

The design of assessment has a significant influence upon students’ approaches to it, and also upon the time and effort that they devote to the tasks (Brown, 1997; Struyven et al., 2002). Student effort may be affected by various elements of assessment design that influence when a student will study, i.e. the number and frequency of assessment tasks and their hand-in dates, and how a student will study, i.e. the relevance of the assessment and their understanding of its requirements, as Gibbs and Simpson describe in their discussion of conditions that support good assessment (2004/5).

It is important that students are not overburdened by disproportionate amounts of assessment in relation to module size. At Northumbria, this is taken into account through the Notional Student Workload, where there is an expectation that summative assessment activity should be no more than 20% of the workload hours. Additionally, within the Guidelines for Good Assessment Practice there is a recommendation that, for a 10 credit module, there should be a maximum of two pieces of summative assessment. However, as recognised by much of the CETL: AfL activity it is not always possible to separate assessment from learning and teaching and in fact enforcing a distinction between the two is potentially damaging in reinforcing assessment purely as a measurement.