First catch your student – qualitative approaches to research on information and learning technologies

Liz McDowell and Gwen Marples

Paper presented at the Higher Education Close Up Conference 2, Lancaster University, 16-18 July 2001

Introduction

The impetus for this discussion paper stemmed from a practical problem that was occupying the time, energy, and ingenuity of researchers and steering groups in a number of projects. The basic problem was that a number of projects researching and evaluating technology-based learning in Higher Education were attempting to gather data from students – and the students were proving to be extremely elusive! Researchers were finding themselves frustrated, anxious and even despairing of tracking down relevant students and gaining their co-operation in completing questionnaires or participating in interviews and discussions. Data collection approaches which had worked well in the past were no longer working, and researchers felt that the difficulties they were experiencing were increasing. This was more than an irritating practical problem which a small amount of initiative and creative thinking could resolve; it was becoming a problem of practice, in the sense of raising questions about deeper issues such as ethics, values and validity in research practice. This prompted us to examine the issue and offer our explorations as a discussion paper. Although we draw particularly on studies concerning information & communication technologies (ICTs) and learning technologies, we suspect that many of the issues raised will be relevant to other research on student learning or the student experience in HE.

A wide variety of research approaches have been employed to study the use of ICTs, computer-assisted learning (CAL) and other learning technologies. Established approaches include laboratory-style studies where learners are asked to use a specific technology in an artificial, controlled situation where their responses, progress and comments can be closely observed and recorded. Other studies, over many years, have adopted quasi-experimental approaches in the classroom where the learning, progress and opinions of students using technology-based approaches are compared with those of a control group, or where progress is ascertained through pre- and post-testing of knowledge. In recent years, there has been a growing emphasis on discovering the opinions of teachers and students or understanding their experiences and perspectives. This leads to the need to obtain qualitative and, in many cases, ecologically valid data. The methodological frameworks used vary from survey approaches, through case studies, to in-depth ethnographic-style studies. Many individual academics and course teams have used approaches based on gathering, analysing and interpreting qualitative data to evaluate their local implementations of ICT for learning and teaching. Increasingly, national bodies who fund research and development projects have specifically encouraged qualitative approaches. One such example, the JUBILEE project funded by the JISC (Joint Information Systems Committee) in the UK, is referred to in this paper.

Gathering qualitative data necessitates making contact, sometimes on a sustained basis, with students out in the real world of the university, rather than enticing students into a laboratory. Any standard textbook on research methods discusses the issues involved, such as: gaining access to the real-life context in which the research is going to take place; achieving co-operation and good will; building relationships; and dealing with ethical issues concerning the informed consent of research participants and matters of confidentiality. Research methods texts often raise the issues rather than offering practical suggestions, perhaps because every situation is different. Few research studies openly report any difficulties they may have experienced in gaining and retaining access. More often they report the research as a linear, neat and relatively problem-free process (Punch, 1998). There are examples of descriptive accounts of the messy, difficult and anxiety-provoking process of getting access to and remaining in research sites, such as Walford’s (1991) account of his heroic and persistent attempts to gain access to a City Technology College for his research. In the higher education sector, Ottewill and Brown (1999) have provided a detailed account of their experience with a student focus group in research on resource-based learning. We offer here three examples of research studies relating to the student experience of ICTs and learning technologies in HE to illustrate some of the range of issues arising.

Example 1: The Virtual Gallery module

This was a small-scale local study of a one-semester option module, the Virtual Gallery, in a History of Art degree course. The module involved students in extensive use of the Web and the creation of their own set of Web pages to present a ‘gallery’ of art works of their own choice. The module was treated as a case study and the data collection methods were: interviews with the lecturer and teaching assistant; interviews with students close to the beginning and again at the end of the module; observation and ‘informal’ interviews and discussions during class sessions; and examination of documentary material about the module.

Access here appeared to be no problem. The idea to research the module arose from informal contact between the lecturer and me (the researcher). I then turned up at the first class session of the module. The lecturer explained briefly what I was doing and I gave a little more explanation and asked students for their co-operation. I had a brief written agreement with the lecturer about how the study would be conducted and reported, but the students had nothing in writing. Since the class sessions were in a computer room where for most of the time students were working individually at PCs, it was easy to gather data by hanging around, chatting to individual students and watching what went on. It was a little more difficult to persuade students to be interviewed outside class times, but a sufficient number of volunteers (the whole class was only 10 students) were willing to be interviewed at the beginning. I had even more success at the later stage, when students seemed much keener to talk to me and gave me their phone numbers, which made it much easier to make arrangements to meet. It appeared that I had built up a relationship, or at least stimulated the interest of some of them.

  • Access. I was easily able to gain access to the class, but in the students’ minds I must have been seen as closely associated with the lecturer. This lack of independence may well have affected the quality of the data obtained.
  • Informed consent. The lecturer involved clearly gave consent (and indeed support), but did the students really have the choice not to participate? As classroom observation was involved, they really only had the option to limit their involvement if they wished.
  • Levels of participation. The class was small and all students participated in the research, in that they were observed in class sessions and spoke to me during those sessions. All the students who completed the module said that they were willing to be interviewed. In practice, it proved impossible to make arrangements with some of them. I do not know whether this was because of reluctance on their part or simply due to their genuine unavailability.
  • Building relationships. I did seem to build a successful relationship with the students, sufficient for them to feel comfortable and have some sense of involvement in the research. Being there most weeks for class sessions was a major factor. Several of the students became very interested in talking about their experiences and spent longer with me than I would have expected.
  • Confidentiality. I had an agreement with the lecturer about confidentiality and about publication of the research. Although I did assure the students that no comments made by individuals would be reported back to the lecturer, and that I would take steps in reporting to try to ensure that they were not identifiable, they had to take this on trust. In any case, in such a small group of students on a single module, is confidentiality of this kind really possible?
  • Mutual benefits and disbenefits. Students seemed to appreciate that they were doing something new in this ICT-based module, accepted that it should be researched and were pleased that their views were being taken seriously. They all said that they would like to see a copy of the report (although I do not know whether they read it). On the other hand, did having a researcher around disrupt their learning in any way?

Main Researcher: Liz McDowell

Example 2: EASEIT-Eng

EASEIT-Eng (Evaluative and Advisory Support to encourage innovative teaching – Engineering) is a TLTP3 project, funded by HEFCE and DENI, concerned with the development of a standardised procedure for the evaluation of computer-based engineering learning and teaching materials. A consortium of universities is involved in the project, namely Loughborough, Heriot-Watt, Northumbria, Hull, Hertfordshire and Surrey, together with the LTSN (Learning & Teaching Support Network) Engineering centre. A major part of the project involves evaluating the implementation of computer-assisted learning (CAL) packages in engineering courses, predominantly at undergraduate level. The project methodology is to conduct evaluations of CAL in use in universities across the UK. Evaluations are written up as short case studies made available via a searchable database on the EASEIT-Eng website. The purpose of the database is to help engineering academics make an appropriate judgement about the suitability of a particular CAL package for intended use within their own teaching. I have carried out a number of these evaluations as a member of the evaluation team. Methods for data collection in the evaluation process include: questionnaires, non-participant observation, individual staff interviews (audio-taped) and focus group interviews with small groups of student users (audio-taped). Data collection is usually carried out on a single visit to the department involved, since most evaluations require evaluators to travel to a site away from their home base.

  • Access. Locating relevant users of CAL was the initial problem. The networks of the project team and LTSN enabled some contacts to be made, but a considerable amount of time was spent by evaluators in following up potential contacts and, in effect, ‘cold calling’. Letters to Heads of Engineering Departments did not produce many results. The project then employed a staff member specifically to locate and recruit academic staff using CAL packages, and this increased the success rate. Once a lecturer had agreed to participate, they were used as the main means of accessing students, and in each case I was dependent upon their level of commitment and enthusiasm in recruiting students to the study. Lecturers were usually able and willing to allow observation of the CAL software in use where this formed part of a timetabled session.
  • Informed consent. We made sure that lecturers understood what was involved in participation in the evaluation and gave their consent. However, the situation was less clear for students. Lecturers varied in the extent to which they encouraged students to participate in the questionnaire and interview phases of the evaluation. Although I always gave as full an explanation as possible and stressed that students did not have to take part, this aspect was often rushed.
  • Levels of participation. Where lecturers were more directive, higher levels of student co-operation were achieved: lecturers might ask a particular student group to talk to me, or allow class time for questionnaire completion. Response was lower and more difficult to obtain where the lecturer was more ‘hands-off’ – for example, giving me a few minutes at the end of a lecture to explain what I was doing and recruit student volunteers for the focus groups – or where students were expected to take questionnaires away and return them later, or received them by email.
  • Building relationships. In a one-off data collection exercise, relationship building was limited to the approach I took in obtaining students’ co-operation in completing questionnaires, in obtaining volunteers for focus group interviews, and in the conduct of the interviews themselves. Students’ lack of interest in the topic, or concerns about what would be reported back to the lecturer, had to be overcome in a short space of time. There was a danger of discussions becoming stilted and uninformative.
  • Confidentiality. None of the students were identified by name in any reports but nevertheless reports were made public as a necessary part of the project’s aim to share experiences. This may have meant that lecturers reading the report felt that they could identify the source of some of the student comments.
  • Mutual benefits and disbenefits. In a one-off data collection exercise of this type it is hard to see what immediate gain students would derive from participating, though some did find it interesting to discuss the issues raised and were pleased to have their views heard. Some may have found being observed disruptive, but at least this occurred on a single occasion only.

Main Researcher: Gwen Marples

Example 3: JUBILEE

The JUBILEE project is funded by the JISC. Its aim is to evaluate information-seeking in relation to electronic information services in HE and, on a more limited basis, in Further Education. The project methodology is a development of the multiple case study approach used in an earlier successful project, IMPEL (Impact on People of Electronic Libraries). In each cycle of the project, extended fieldwork visits are made to case study sites where data is collected in a range of ways: from documentary sources, and from interviews, focus groups and questionnaires directed to staff, including lecturers, library & information (LIS) staff and others, and students. User opinions have also been gathered through electronic discussion forums, both asynchronous (email-based) and real-time (‘chat’ rooms). In the first two years of the project, twelve case studies have been completed. The most problematic aspect of the work has been making contact with and obtaining responses from students. Although problems had been encountered in the earlier IMPEL project, the research team believe that the difficulties being experienced now are much greater.

  • Access. In order to gain access to students, the researcher needed the support of a local contact, initially a member of LIS staff or a lecturer. Universities were unwilling to release contact details for students to outsiders. Some departments were willing to release email addresses or to send out emails to students once participation had been agreed.
  • Informed consent. Considerable time was spent in negotiating agreements about how the research would be conducted at university, department or individual staff member level. The procedures to be followed and the levels of discussion and negotiation needed varied from site to site. There was much less time or opportunity available to discuss such matters with students. The questionnaires clearly explained the purpose and nature of the research, and the researchers always gave students an explanation when they were interviewed. However, in order to gain more data, some informal methods were used, such as ‘chatting’ to students in common rooms, where the situation was less clear-cut.
  • Levels of participation. The highest levels of participation in interviews and completion of questionnaires were achieved where lecturers asked students to complete questionnaires in class time and where they themselves made the arrangements for the researchers to meet a group of students for the focus group discussion. Questionnaires left in student pigeon-holes or sent out by email received a low level of response, even though different questionnaire formats and lengths were tried. Additional success was obtained by making direct contact with students in informal settings and by getting agreement from students for telephone interviews rather than relying on arrangements to meet on campus. The electronic discussion forums did not succeed very well in extending participation and response, even though they were promoted through student associations.
  • Building relationships. Although good relationships were built with some staff before, during and after the fieldwork, there was much less opportunity to do this with students. Usually they were only aware of the research at the point when they were asked to participate or to complete a questionnaire. Consideration has been given to offering some small financial incentive to students, but this has not been tested in practice.
  • Confidentiality. Students have been assured that the responses they give will be treated confidentially and that they will not be associated with individuals in any reporting. In what is clearly an external research study covering a number of universities, they are unlikely to be concerned about what they say being reported back to their own lecturers.
  • Mutual benefits and disbenefits. It is unlikely that students would gain very much from some of the data collection activities, such as the questionnaires. Some do say that they enjoy the discussions and interviews or find them interesting, but their involvement on a one-off basis is unlikely to be seen by them as particularly beneficial. On the other hand, students are not being asked to give up very much of their time to participate in the research.

Main Researcher: Pat Gannon-Leary. Reported by: Liz McDowell