Draft paper: not for citation without authors’ permission
EARLI/Northumbria Assessment Conference, Berlin, August 27th-29th, 2008.
Generating dialogue in coursework feedback: exploring the use of interactive coversheets
Sue Bloxham & Liz Campbell, University of Cumbria, UK
Introduction: the importance of dialogue in developing self-regulation
Assessment feedback is essentially a communication process between students and tutors. According to Sadler (1989), there are three key elements to assessment that lead to learning. Firstly, students must know what the standard or goal is that they are trying to achieve; secondly, they should know how their current achievement compares to those goals; and, finally, they must take action to reduce the gap between the first two. Giving and receiving feedback occurs within complex contexts (Higgins et al., 2001) but should include aspects of each of these key elements; however, it is ultimately the quality of the communication and dialogue between students and tutors which will determine the effectiveness of the feedback.
Although feedback in higher education is considered to be under-researched (Carless, 2006), there is conclusive evidence of dissatisfaction with existing practices (Hounsell, 2008). Nevertheless, recent work is developing our conceptual understanding of how feedback can effectively contribute to student learning (Higgins et al., 2001; Handley et al., 2007; Gibbs & Simpson, 2004-5; Nicol & Macfarlane-Dick, 2006; Brown & Glover, 2006). Sadler’s model leads to an emphasis on developing self-regulation in students (Nicol & Macfarlane-Dick, 2004) such that they have the ability to judge their own performance and make changes appropriately. Consequently, recent research places particular emphasis on the notion of feedforward (Hounsell, 2007), where feedback helps students to understand and reduce the gap, and exploratory studies have examined the extent to which different types of tutor feedback better enable students to do this (Brown & Glover, 2006). This paper examines one such approach.
Socio-constructivist approaches to student learning and assessment (Rust et al., 2005) emphasise how becoming a successful HE student, measured essentially through the capacity to write satisfactory assignments and examinations, is perceived as a complex task and not open to simple tutor instruction or written advice. It involves the learning of tacit knowledge, new social practices and forms of expression, and negotiating the meaning and demands of individual assignments with tutors and peers. Indeed, recent work emphasises how students can only ‘come to know’ the expectations and standards of their subject discipline if they become partners in the assessment process (O'Donovan et al., 2008), if they join the relevant community of practice (Lave & Wenger, 1999). This is assessment as learning, but perhaps not as the term is currently applied.
Therefore, in relation to Sadler’s first element of understanding the standard or goal, the concept of ‘tacit’ knowledge helps explain how students’ acquisition of meaningful knowledge about what is expected in assessments is constrained (O'Donovan et al., 2004). This also impacts on the second element in Sadler’s prescription for student improvement: that students should know how their current achievement compares to the goals and standards. Feedback is essential to this element, but problems often arise because students struggle to interpret or understand the language (Orr, 2005; Price, 2005; Read et al., 2005). Tutors have attempted to make guidance and feedback more transparent by linking it to assessment criteria, learning outcomes, grade descriptors and marking schemes, but such transparency is somewhat undermined by the difficulty of communicating the meaning of such tools (Price & Rust, 1999; Ecclestone, 2001), which are also written in the discourse of the academic discipline and are inaccessible to those outside that community of practice.
So, as Higgins (2000) argues, the failure of communication in relation to feedback has its roots, amongst other things, in the differing and often tacit discourses of academic disciplines from which students are frequently excluded. Furthermore, Ivanic et al. (2000) make the point that feedback from one tutor may not apply to work produced for another tutor, and consequently students are less likely to pay attention to it. Indeed, evidence is growing that frequent engagements with a task are more important than ‘explicit’ criteria in helping students understand the standards and expectations of assessment tasks (Gibbs & Dunbar-Goddet, 2007). The rationale for this is that learning the tacit knowledge apparent in communities of practice can only take place through informal activities such as observation, imitation, participation and dialogue (Lave & Wenger, 1991). It is an active, shared process, not a passive engagement. Therefore, repeated cycles of formative assessment allow students gradually to become part of a subject community because they encourage regular participation, imitation and possibly even dialogue regarding assessment tasks. This informal learning process cannot be short-circuited by simply trying to grasp written criteria.
The limitations of written criteria are emphasised by Woolf (2004), who argues that they only make sense in context. They are devised by staff who have a tacit knowledge of the meaning of terms which is specific to the particular context, and not readily understood by those outside that community of practice. This raises the question of how far students can digest and act on written feedback, which is usually a one-way ‘monologic’ communication (Lillis, 2001; Millar, 2005) located in a discourse to which students do not have access (Carless, 2006).
Research with Oxbridge students, in comparison with those operating in different assessment environments, tends to support the view that frequent oral feedback, with the potential for dialogue, is an important feature in helping students understand what they should be aiming for in their assignments. Likewise, the very different students in Bloxham and West’s (2007) study identified dialogue with tutors as a key aid in negotiating the meaning of both assessment guidance and written feedback. The Equality Challenge Unit report (2008) on black and minority ethnic (BME) students’ attainment found that BME students sought dialogue with tutors in order to help them understand what tutors are looking for and therefore to have confidence in marking. The report recommends that institutions should consider ‘ways in which to strengthen conversations with students about study expectations, standards, performance criteria, assessment and feedback’ (p. 29). Likewise, Caruana and Spurling (2007) also stress the importance of tutor-student dialogue in helping international students understand the expectations of UK assessments.
Students’ perception that dialogue with tutors impacts on their understanding is not unexpected given the above discussion. Certainly, other research (Parmar & Trotter, 2004) supports the view that students value sessions which are organised to allow questions and discussion, as this is where they ‘felt their learning was being cemented’ (p. 163). Northedge (2003a) and Ivanic et al. (2000) also stress the importance of feedback which seeks to engage the student in some form of dialogue. Northedge considers that such a dialogue can be stimulated by asking students questions in written feedback, although Bloxham and West’s research indicated that students prefer interaction in a face-to-face form. The resource requirements for one-to-one tutor-student dialogue are significant, and therefore peer-to-peer feedback is increasingly used to provide this opportunity for discussion. However, Northedge argues that student-to-student discussion may operate at too low a level, as students continue to use an ‘everyday’ discourse and are not obliged to practise using the language of the discipline.
Carless (2006) also stresses the importance of ‘assessment dialogues’ between students and tutors as a means to tackle students’ misunderstandings regarding feedback and assessment processes in general, and the differing perceptions of students and staff. He stresses the role of power and emotion in grading and feedback and the impact this has on students’ interpretation of feedback. Again, this relates to assessment as learning:
Given the centrality of assessment to learning, students need to learn about assessment in the same way that they engage with subject content. Assessment dialogues can help students to clarify ‘the rules of the game’, the assumptions known to lecturers but less transparent to students (Carless, 2006: 230).
In conclusion, the research on learning and feedback emphasises the importance of students coming to understand the gap between the goals and standards they are aiming at and their current achievement. In the complex world of higher education assessment, these standards are complex, discipline-specific (if not tutor-specific) and difficult to teach to newcomers to the academic community. The tacit knowledge involved in recognising the goals and standards is largely learned informally through active participation in the academic community of practice. Consequently, researchers and students are stressing the need for dialogue between tutor and student in the assessment and feedback process, although such practice appears enormously resource-heavy in most current higher education contexts.
Interactive cover sheets
This research examines a process explicitly aimed at increasing the dialogue between tutor and student about assessment whilst not creating an additional workload for staff. It emerged from concerns that staff on the BA/BSc Outdoor Studies Programme were devoting inordinate amounts of time to written feedback whilst students were reporting that they did not receive enough, nor was there evidence that feedback was being used to improve future assignments. The process was designed to shift the power balance in assessment, moving the learner from a passive and powerless role in the feedback process to one in which they could take some responsibility for their interaction with the marker. In addition, tutors were concerned to improve their understanding of the different processes students go through in order to produce a piece of work. It was envisaged that providing some analysis of the background to the writing, as well as the work itself, would give tutors greater insight into how students tackle their writing. Both aspects were intended to enable staff to target their feedback comments more effectively in order to support student understanding of their performance and thus to support self-regulation.
The intervention involved interactive cover sheets (see examples in appendix 1). These sheets, attached to the front of students’ assignments at submission, had a number of purposes and varied in relation to the particular assignment. They typically asked students to undertake some self-assessment, including a prediction of their grade, and, in relation to the seen examination, asked for information about their revision. In all cases, students were invited to ask questions about their assignments, and tutors provided feedback in response to those questions. This latter element was particularly designed to commence a dialogue with students regarding their work and to encourage them to see that they had some power and responsibility in gaining useful feedback. It is this ‘asking questions’ aspect of the coversheets that this paper will focus on.
This approach reverses Northedge’s (2003a) suggestion that it is the tutor who should create the dialogue by asking questions in written feedback. The intention of the interactive cover sheets is that the students can prompt dialogue on the issues of importance to them. In doing this, some of the control passes to the student, and it was hoped that the process would enable them not only to get specific help on matters of concern but also to engage with their feedback and learn from it in terms of the goals and standards of their subject discipline. A pilot intervention took place with volunteer first-year students during 2006-7 and, as a result, when the process was extended to all first-year students in 2007-8, training was given to help them ask themselves critical questions about their work and to seek feedback (through asking questions) on their work. For initial assignments, tutors were advised to provide important feedback to students even if they did not ask for it on their cover sheet. An interim workshop took place with the students midway through the second semester to evaluate previous feedback and identify potential sources of feedback over and above written tutor comments. The students were then warned that they would receive no feedback on their final assignment unless they took action by asking questions.
The research reported here focuses on the 2007-8 experience; the questions asked in the initial three assignments were used to develop the coding system applied to all of the students’ questions. Students (n=23) completed three coursework assignments and one examination using the interactive cover sheet.
The assessments using the interactive cover sheet, in order of completion, were:
· A poster
· An essay
· An exam
· A poster
Students also completed two other assignments towards the end of the year where the new coversheets were not used. In this case, the students submitted their work on a pen drive and received recorded oral feedback in response to the questions they had written in the text or at the end of their assignment. They were invited to ask questions in the certain knowledge that ‘no questions equals no feedback’:
· A project designing a learning resource with commentary
· An ecology investigation project
Furthermore, a number of other assignments were completed, such as class tests, exams, competence tests and a group presentation, which are not included in this study.
Data collection and analysis
Data was collected throughout the year and included:
· Coding and analysis of the questions asked by the students on their coversheets
· Interviews with nine students, conducted by an independent interviewer at the end of their first year but before they had received marks and feedback on the last assignment. The interviewees were an ‘opportunity’ sample based on volunteers; ten interviews were planned, although one student failed to attend. All names have been changed
· A focus group with representatives of staff who taught the year group, facilitated by an independent researcher (n=3)
· Informal feedback from all staff involved in teaching the year group (n=6)
· An exercise with students where they were asked to rank the different categories of questions in terms of importance
· Assessment Experience Questionnaire (Dunbar-Goddet et al.) administered to students in both the 2006-7 and 2007-8 cohorts at the end of their first academic year
The questions asked by students on their coversheets in 2006-7 were discussed and sorted by two independent groups of researchers in order to determine a consistent and replicable method for coding the different types of questions. The early results of this process were used to code students’ questions in 2007-8, although this work remains at an early stage and the results presented here must be viewed in that light. The questions were coded into 11 categories, as set out in Table 1. Coversheet data was also analysed to identify students’ ability to predict their own marks across the assignments.