RECENT DEVELOPMENTS IN HIGHER EDUCATION IN THE UK
James Hough
Professor James Hough is an independent consultant in education and economics. He was formerly Professor of the Economics of Education and Dean of the Faculty of Education and Humanities at Loughborough University, before becoming a member of Loughborough University Council and Visiting Professor of Economics at the University of Luton.
Notes for CEET Seminar, Monash University Faculty of Education, 11.00 Friday 12 August 2005
(i) Higher Education Institutions:
-Pre-1992 “Traditional” Universities
-Post-1992 “Modern” Universities (mainly ex-Polytechnics)
-Colleges (increasingly called “University Colleges” or “University Institutions”)
-Now 94 universities plus 77 colleges.
-The Open University: over 200,000 students.
-A fairly clear “Pecking Order” of institutions.
-147,000 academic staff (39% female); of Professors, only 14% are female.
(ii) Students:
-Higher Education in the UK has mushroomed: there are now 2.2 million students (1.7 million undergraduates; 1.3 million full-time).
-Student numbers up by 40% since 1994/5.
-The percentage of the relevant age group studying full-time in H.E. has leapt from 4% in the 1960s to 43% today, and is still increasing, ushering in what has been called “the age of mass higher education”[1].
-The Government’s aim: 50% of relevant age group in H.E. by 2010
-Applications per place range from around 20/1 at Nottingham to only 3/1 at Oxbridge.
-Increasingly, students take a “gap year” either before or after their course (or both).
-Currently concern re equity in admissions policies—should students from comprehensive schools be admitted with lower “A Level” grades than students from private schools?[2]
(iii) Funding:
-Public Expenditure on H.E. = 0.7% of GDP, one of the lowest figures for any OECD country (Canada = 1.6%, France = 1.0%, Australia = 0.8%).
-Public funding to universities per student has declined steadily in real terms for the last 20 years or more, due to what the Government terms “efficiency gains”, with what the Government sees as a marked “improvement” in student-teacher ratios.
-With increased student choice of subjects under the “semester” system, courses consisting of mass lectures to audiences of 100-150 students are increasingly common. A student seminar group which previously might have had 12 members now may have 24.
HEFCE is responsible for distributing public funds for higher education in England. In distributing the funds HEFCE aims to meet the needs of students, employers and the nation by promoting high quality teaching and research. HEFCE claims to allocate funds using funding methods and policies that are open and transparent.
HEFCE's overall budget is set by government, so the funding method does not affect the total sum available for distribution to institutions. Annually the Secretary of State for Education and Skills sets out the resources and priorities for higher education. HEFCE provides advice to the Secretary of State on the funding needs of higher education in England.
Most HEFCE funding is distributed as block grants to institutions, allocated according to formulae which take account of certain factors within each institution, including the number and type of students, the subjects taught and the amount and quality of research undertaken there.
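As an illustration of the kind of formula described above, the sketch below computes a block teaching grant from weighted student numbers. The subject groups, cost weights and base rate are invented for the example and are not HEFCE's actual parameters.

```python
# A minimal sketch of a formula-based teaching grant.  The subject groups,
# cost weights and base rate are hypothetical, not HEFCE's actual parameters.

SUBJECT_WEIGHTS = {"classroom": 1.0, "laboratory": 1.7, "clinical": 4.0}
BASE_RATE_PER_WEIGHTED_FTE = 3_500  # assumed £ per weighted full-time-equivalent student

def teaching_grant(student_ftes: dict[str, float]) -> float:
    """Return a block teaching grant from FTE student numbers by subject group."""
    weighted_ftes = sum(SUBJECT_WEIGHTS[group] * fte
                        for group, fte in student_ftes.items())
    return BASE_RATE_PER_WEIGHTED_FTE * weighted_ftes

# Example: an institution teaching mostly classroom-based subjects.
print(teaching_grant({"classroom": 6_000, "laboratory": 1_500, "clinical": 200}))
```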
Summary of allocations 2005-06:
In 2005-06 HEFCE will distribute £6,332 million. Breakdown of the allocations:
Teaching / £4,004 million (of which £282 million is for widening participation)
Research / £1,251 million
Special funding / £428 million
Earmarked capital funding / £649 million
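As a quick arithmetic check, the four headings above account for the whole £6,332 million (figures in £ millions, as quoted above):

```python
# The 2005-06 allocations quoted above sum exactly to the £6,332m total.
allocations_millions = {
    "Teaching": 4_004,                    # includes £282m for widening participation
    "Research": 1_251,
    "Special funding": 428,
    "Earmarked capital funding": 649,
}
assert sum(allocations_millions.values()) == 6_332
```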
(iv) Student Finance:
Has undergone a revolution, consequent on:
The Dearing Report, 1997, “Higher Education in the Learning Society”, which said:
“Students should share responsibility for funding higher education courses”.
The Teaching and Higher Education Act 1998:
-Abolished maintenance grants (which the Government now says was a mistake).
-Introduced tuition fees for UK students, £1,000 p.a.
-Around one-third of students pay full fees, one-third pay around half, one-third pay no fees.
-The Higher Education Act 2004: Tuition fees to be increased to £3,000 per year from October 2006. Universities are hoping for further fee increases, but the Government says not until at least 2009, after a review to be undertaken in 2008.
-Maintenance grants to be reintroduced in 2006, on a limited basis[3].
-Many universities have introduced or significantly increased bursary/scholarship schemes, partly to lessen hardship, partly to attract high quality students.
-Many students now graduate with debts of £10,000-£15,000[4].
(v) Research Assessment Exercise:
Guidelines have been announced for the Research Assessment Exercise 2008, which will be the sixth in a series of such exercises starting in 1986, the results of which are being linked more and more closely to the grants from the university funding bodies.
RAE 2008 will operate similarly to RAE 2001. The main body of the assessment will take place in 2007-8 and outcomes will be published in December 2008. Guidance to assessment panels was published in January 2005[5].
The 2001 Research Assessment Exercise
(From: The Roberts Report, “Review of Research Funding Method”, 2003)
The RAE operates through a process of peer review by experts of high standing covering all subjects. Judgements are made using the professional skills, expertise and experience of the experts; it is not a mechanistic process. All research assessed is allocated to one of 68 ‘units of assessment’ which are discipline-based. For each unit of assessment there is a panel of between nine and 18 experts, mostly from the academic community but with some industrial or commercial members.
The assessment panels awarded a rating on a scale of 1 to 5*, according to how much of the work was judged to reach national or international levels of excellence (see below).
1.Institution Submissions
Each publicly funded university and higher education college in the UK is invited to submit information about their research activity for assessment. The information they supply provides the basis on which judgements are made. Submissions have to be in a standard format, which includes qualitative and quantitative information. Most of the information is provided electronically on specially written software.
The submissions are based around members of staff in each academic unit in which the institution is submitting. It is up to each institution to decide which subjects (and therefore which units of assessment) to submit to, and which members of staff to include in each submission.
For each member of research staff, up to four items of research output may be listed. All forms of research output (books, papers, journals, recordings, products) are treated equally; panels are concerned only with the quality of the research. Similarly, all research (whether applied, basic or strategic) is treated equally. In addition, the HEI must provide information in a number of different categories, as shown below.
Category / Description
Staff information /
- summaries of all academic staff
- details of research-active staff
- research support staff and research assistants
Research output /
- up to four items of research output for each researcher
Textual description /
- information about the research environment, structure and policies
- strategies for research development
- qualitative information on research performance and measures of esteem
Related data /
- amounts and sources of research funding
- numbers of research students
- number and sources of research studentships
- numbers of research degrees awarded
- indicators of peer esteem
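As a toy illustration of the submission structure just described (up to four outputs per researcher, plus the categories in the table above), the sketch below uses field names of my own invention; the real data formats were defined by the RAE's submission software.

```python
# A toy representation of one RAE submission, following the categories in the
# table above.  Field names are my own invention; the actual data formats were
# defined by the RAE's specially written submission software.
from dataclasses import dataclass

@dataclass
class Researcher:
    name: str
    outputs: list[str]  # up to four items of research output (books, papers, ...)

    def __post_init__(self):
        if len(self.outputs) > 4:
            raise ValueError("at most four research outputs per researcher")

@dataclass
class Submission:
    unit_of_assessment: str            # e.g. one of the 68 discipline-based units
    research_active_staff: list[Researcher]
    textual_description: str           # research environment, structure, strategies
    research_income: dict[str, float]  # source -> £ amount
    research_students: int
    research_degrees_awarded: int
```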
2.Assessment Period and Census Date
The 2001 RAE used a census date of 31 March 2001 to capture a ‘snapshot’ of all staff in post on that date. The census date determines the staff eligible to be included in the submissions of each institution. The RAE assesses the quality of research output by eligible staff over a period of seven years for arts and humanities subjects, and five years for all other subjects.
3.Panels’ Judgements
The panels use their professional judgement to form a view of the overall quality of the research in each submission within their unit of assessment using all the evidence presented in the submission.
To assess submissions fairly and consistently within each unit of assessment, each panel draws up a statement describing its working methods and assessment criteria. These are published in advance of submissions being made. This statement shows which aspects of the submission the panel regards as most important, and areas that it wants institutions to comment on in their submissions. The differences in working methods and criteria between panels reflect the need to recognise differences in the way research is conducted and published in the various disciplines.
Panels review all submissions, and read selectively from the research outputs cited. As the panels are concerned with quality, not quantity, information on the total number of publications produced is not requested. Panels do not visit institutions as part of their work.
4.Ratings
The subject panels used a standard scale to award a rating for each submission. Ratings ranged from 1 to 5* (five star), according to how much of the work was judged to reach national or international levels of excellence. The table below shows the definition of each rating.
The rating scale
Rating / Description
5* / Quality that equates to attainable levels of international excellence in more than half of the research activity submitted and attainable levels of national excellence in the remainder
5 / Quality that equates to attainable levels of international excellence in up to half of the research activity submitted and to attainable levels of national excellence in virtually all of the remainder
4 / Quality that equates to attainable levels of national excellence in virtually all of the research activity submitted, showing some evidence of international excellence
3a / Quality that equates to attainable levels of national excellence in over two-thirds of the research activity submitted, possibly showing evidence of international excellence
3b / Quality that equates to attainable levels of national excellence in more than half of the research activity submitted
2 / Quality that equates to attainable levels of national excellence in up to half of the research activity submitted
1 / Quality that equates to attainable levels of national excellence in none, or virtually none, of the research activity submitted
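Read literally, the scale maps the proportions of submitted work at international and at national levels of excellence onto a rating. The sketch below is one possible reading; the translation of verbal phrases such as “virtually all” and “some evidence” into exact numbers is my own simplification, not part of the official definitions.

```python
# One possible reading of the 2001 RAE rating scale.  `intl` is the proportion
# of submitted work at international excellence; `natl` is the proportion at
# national excellence or better.  The numeric thresholds for "virtually all"
# and "some evidence" are my own simplifications, not official definitions.

VIRTUALLY_ALL = 0.95
SOME_EVIDENCE = 0.05

def rae_rating(intl: float, natl: float) -> str:
    if natl >= VIRTUALLY_ALL and intl > 0.5:
        return "5*"   # international excellence in more than half, national in the remainder
    if natl >= VIRTUALLY_ALL and intl > SOME_EVIDENCE:
        return "5"    # international excellence in up to half, national in virtually all of the rest
    if natl >= VIRTUALLY_ALL:
        return "4"    # national excellence in virtually all of the submitted work
    if natl > 2 / 3:
        return "3a"
    if natl > 0.5:
        return "3b"
    if natl > SOME_EVIDENCE:
        return "2"
    return "1"        # national excellence in none, or virtually none, of the work

# Example: 40% internationally excellent, 97% at least nationally excellent -> "5"
print(rae_rating(0.40, 0.97))
```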
From the Panel for Economics and Econometrics:
“International and national excellence is defined in terms of:
Overall quality of nominated published outputs which will be assessed according to criteria which will include:
- Substantive contribution in the broad sense, including research contributions to theory, methodology, policy and practice;
- Originality;
- Technical excellence.
Research depth, vitality and prospects, which are assessed taking into account:
- Postgraduate research activity;
- Indicators of peer-review esteem;
- Development and staff profile of department;
- External research income.
The Panel bases its assessment of submissions primarily on its professionally informed judgement of the quality of research outputs.”
5.Funding Allocations
Each of the funding bodies uses the ratings to allocate research funding by formula to the institutions it funds. The formulae used by each funding body may vary, with the overriding principle of funding selectively – more funding for higher quality research.
The main way of measuring the volume of research is by the number of research active staff submitted to the RAE for assessment.
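As a hedged sketch of how “quality rating times volume” could translate into grant: the quality weights and the unit of resource below are invented for the example, and the actual formulae varied between funding bodies and over time.

```python
# Sketch of selective, formula-based research funding: for each unit of
# assessment, grant = quality weight x research-active staff x unit of
# resource.  The weights and the £ figure are assumptions for illustration,
# not the funding bodies' actual values.

QUALITY_WEIGHTS = {"5*": 3.375, "5": 2.25, "4": 1.0,
                   "3a": 0.0, "3b": 0.0, "2": 0.0, "1": 0.0}
UNIT_OF_RESOURCE = 10_000  # assumed £ per weighted research-active member of staff

def research_grant(submissions: list[tuple[str, int]]) -> float:
    """`submissions`: (rating, research-active staff submitted) pairs,
    one per unit of assessment."""
    return sum(QUALITY_WEIGHTS[rating] * staff * UNIT_OF_RESOURCE
               for rating, staff in submissions)

# Example: one department rated 5 with 30 staff and one rated 4 with 20 staff.
print(research_grant([("5", 30), ("4", 20)]))  # 675,000 + 200,000 = £875,000
```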
A Review of RAE 2001 by Farrant, Billing and Temple, 2003, found:
“RAE 2001 was successful in its primary purpose: ratings of research quality were produced, to timetable, and were used by the funding bodies in determining grant for 2002-3. Furthermore, the ratings have commanded a large measure of confidence among the HEIs, the researchers assessed and the academic community.
However, matching the key principles against institutions' assessments gives a more mixed picture. Contacts' assessment of the RAE's achievement against each of the key principles laid down at the outset is shown below:
RAE 2/99, paragraph 1.3 / %Yes / %No / Yes/No
c / Consistency / 30 / 32 / 0.9
h / Parity / 28 / 25 / 1.1
g / Neutrality / 36 / 25 / 1.4
j / Transparency / 43 / 23 / 1.9
f / Efficiency / 49 / 17 / 2.9
e / Credibility / 55 / 10 / 5.5
d / Continuity / 54 / 8 / 6.8
a / Peer review / 66 / 9 / 7.3
b / Clarity / 53 / 6 / 8.8
The four principles where achievement was judged least satisfactory were:
Consistency. Assessments made through the RAE should be consistent especially across cognate areas and in the calibration of quality ratings against international standards of excellence.
Parity. The RAE is concerned only with assessing the quality of research of participating HEIs, regardless of its type, form or place of output.
Neutrality. The RAE exists to assess the quality of research in HEIs. It should carry out that function without distorting what it is measuring. In other words, the RAE should not encourage or discourage any particular type of activity or behaviour, other than providing a general stimulus to the improvement of research quality overall.
Transparency. The credibility of the RAE is reinforced by transparency about the process for making decisions. Except where there is a need to preserve confidentiality (for example in panels’ discussions or when dealing with the names of nominees for panel membership or with the strategic research plans of institutions) all decisions and decision-making processes will be explained openly.”
(vi) Assessment of Teaching Quality:
The same years have seen periodic attempts to assess Teaching Quality.
Mainly the responsibility of the Quality Assurance Agency for Higher Education (QAA).
1.Institutional audit:
Purpose: Institutional audit in England, since 2003, aims to ensure that institutions are:
- providing higher education, awards and qualifications of an acceptable quality and an appropriate academic standard; and (where relevant)
- exercising their legal powers to award degrees in a proper manner.
All English higher education institutions are being audited between 2003 and 2005; from 2006 audits will take place on a six-year cycle.
Process: Institutional audit combines scrutiny of internal quality assurance systems at institutional level, with a more detailed investigation at discipline level of whether those systems are operating in the manner intended. Before the audit visit, the institution produces self-evaluation documents (SEDs), one for institutional level and one for a sample of disciplines. SEDs are key reference points for audit teams during the visit.
Judgements: The audit team expresses the level of confidence ('broad confidence', 'limited confidence', or 'no confidence') that can reasonably be placed in:
- the soundness of the institution's present and likely future management of the quality of its programmes and the academic standards of its awards.
The audit team also makes a judgement on:
- the accuracy, integrity, completeness and frankness of the information that the institution publishes about the quality of its programmes and the academic standards of its awards.
2.Academic review of subjects:
Purpose for higher education institutions: Academic review of subjects looks at programmes taught at sub-degree, degree and postgraduate levels. The institutions eligible for academic review were visited in the academic years 2002-03 and 2003-04.
Process: The review period extends over a period of about six weeks, from an initial meeting of the review team with the subject provider to the final review team meeting when judgements are made. During the review visit, the review team gathers evidence to test statements made in the institution's SED, and to form robust judgements on the standards and quality of the provision. This is achieved through scrutiny of documentary evidence, meetings with relevant staff and, sometimes, direct observation of teaching.
Students: The review team meets current students during the visit, and their views are treated as one source of evidence among several. The reviewers may also meet former students, their employers and representatives from relevant industries or professions.
Judgements: Since 2001, for each academic review, the team expresses 'confidence', 'limited confidence', or 'no confidence' in:
- academic standards (learning outcomes; the curriculum; student assessment;
student achievement).
The team also makes judgements of 'commendable', 'approved' or 'failing' for:
- the quality of learning opportunities (teaching and learning; student progression; learning resources).
3.Code of Practice for the assurance of academic quality and standards in higher education:
The Code of Practice sets out guidelines on good practice relating to the management of academic standards and quality. Each section of the Code of Practice has precepts or principles that institutions should satisfy, with guidance on how they might meet these precepts. The Code of Practice has 10 sections:
- Postgraduate research programmes
- Collaborative provision
- Students with disabilities
- External examining
- Academic appeals and student complaints on academic matters
- Assessment of students
- Programme approval, monitoring and review
- Career education, information and guidance
- Placement learning
- Student recruitment and admissions
4.Previously: England and Northern Ireland: 1993 to 2001:
From April 1993 to April 1995, teaching quality assessments in England and Northern Ireland looked at the student learning experience and student achievement. Each subject area was judged as 'excellent', 'satisfactory' or 'unsatisfactory'.
From April 1995 to December 2001, universal subject review covered six aspects of provision. Each aspect was graded on a scale of 1 to 4, in ascending order of merit. This summary was called the graded profile.[6]
5.Quality Enhancement:
A separate report by the Teaching Quality Enhancement Committee (TQEC), 2003, concluded:
“There is a widespread perception that the arrangements for QE are complex and fragmented, and insufficiently “user-focused”. Many observers consider that the sector needs to give a higher profile to the process of continuous quality improvement and professional development for all those who support student learning. The committee supports this view, and notes that the primary responsibility for this lies with HE institutions.”
Web Sites: There are many useful web sites relating to all of the above, and many of them are interlinked.
[1] In the late 1960s, although two-thirds of children were born into working-class homes, 80% of university students came from the one-third of children born into middle-class homes. That 80% figure did not decline significantly over the following 20 years. I have not seen a recent definitive figure, but I would guess that the percentage may now have fallen to 75%, or even 70%: so universities are still dominated by the offspring of the middle classes.
[2] “The results of a study of almost 5,000 students by academics at Warwick University … can be seen as justification for universities awarding places to state school pupils with lower grades on the basis that they have greater potential and will do better at university … children from independent schools did less well at university than kids from state schools … on average, independent school pupils do less well.” (Source: The Sunday Times, London, 10 July 2005).