
The Society for Research into Higher Education



The ongoing saga of REF 2028: why doesn’t teaching count for impact?

by Ian McNay

Surprise, surprise…or not.

The initial decisions on REF 2028 (REF 2028/23/01 from Research England et al), based on the report of FRAP – the Future Research Assessment Programme – contain one surprise and one non-surprise among nearly 40 decisions. To take the non-surprise first: the cost analysis report recommends that any future exercise ‘should maintain continuity with rules and processes from previous exercises’ and ‘issue the REF guidance in a timely fashion’ (para 82). The decisions then introduce significant discontinuities in rules and processes, and anticipate issuing final guidance only in winter 2024-25, when four years (more than half) of the assessment period will have passed.

The surprise is, finally, the open recognition of the negative effects of the REF and its predecessors on research culture and staff careers (para 24), identified by respondents to the FRAP consultation about the 2028 exercise. For me, this new humility is a double-edged sword: many of the defects identified have been highlighted in my evidence-based articles (McNay, 2016; McNay, 2022) and, indeed, by the report commissioned by HEFCE (McNay, 1997) on the impact of the 1992 exercise on individual and institutional behaviour:

  • Lack of recognition of a diversity of excellences including work on local or regional issues because of the geographical interpretation of national/international excellence (para 37). Such local work involves different criteria of excellence, perhaps recognised in references to partnership and wider impact.
  • The need for outreach beyond the academic community, such as a dual publication strategy – one article in an academic journal matched with one in a professional journal in practical language and close to utility and application of a project’s findings.
  • Deficient arrangements for assessing interdisciplinary work (paras 60 and 61).
  • The need for a different, ‘refreshed’ approach to appointments to assessment panels (para 28).
  • The ‘negative impact on the authenticity and novelty of research, with individuals’ agendas being shaped by perceptions of what is more suitable to the exercise: favouring short-term inputs and impacts at the expense of longer-term projects…staying away from areas perceived to be less likely to perform well’. ‘The REF encourages …focus on ‘exceptional’ impacts and those which are easily measurable, [with] researchers given ‘no safe space to fail’ when it came to impact’.
  • That last negative arises in large part from the internal management of the exercise, yet the report proposes an even greater corporate approach in future. The evidence-based articles and reports, and the innovative processes and artefacts that arise from our research, will make a reduced contribution to published assessments of the quality of research, though a wider diversity of research outputs is encouraged. More emphasis will be placed on institutional and unit ‘culture’ (para 28), so individuals disappear, uncoupled from consideration of culture-based quality. That culture is controlled by management; I spent several years as a Head of School trying to protect and further develop a collegial enterprise culture, which encouraged research and innovative activities in teaching. The senior management wanted a corporate bureaucratic approach with targets and constant monitoring, which work at Exeter has shown leads to lower output, poorer quality and higher costs (Franco-Santos et al, 2014).

At least 20 per cent of the People, Culture and Environment sub-profile for a unit will be based on an assessment of the Institutional Level (IL) culture, and this element will make up 25 per cent of a unit’s overall quality profile, up from 15 per cent in 2021. This proposed culture-based approach will favour Russell Group universities even further – their accumulated capital has led to them outscoring other universities on ‘environment’ in recent exercises, even when the output scores have gone the other way. Elizabeth Gadd, of Loughborough, had a good piece on this issue in Wonkhe on 28 June 2023. The future may see research-based universities recruiting strongly in the international market to subsidise research from higher student fees, leaving the rest of us to offer access and quality teaching to UK students on fees not adjusted for inflation. Some recognition of excellent research in an unsupportive environment would be welcome, as would reward for improvement, as operated when the polytechnics and colleges joined the research assessment exercises.

The culture of units will be judged by the panels – a separate panel will assess IL cultures – and will be based on a ‘structured statement’ from the management, assessing itself, plus a questionnaire submission. I have two concerns here: can academic panels competent to peer-assess research also judge the quality and contribution of management; and, given behaviours in the first round of impact assessment (Oancea, 2016), how far can we rely on the integrity of these statements?

The Contribution to Knowledge and Understanding sub-profile will make up 50 per cent of a unit’s quality profile – down from 60 per cent last time and 65 per cent in 2014. At least 10 per cent will be based on the structured statement, so Outputs – the one thing in which researchers may have a significant role – are down to at most 40 per cent of what is meant by overall research quality (the FRAP International Committee recommended 33 per cent). Individuals will not be submitted. HESA data will be used to quantify staff, and the number of research outputs that can be submitted will be an average of 2.5 per FTE. There is no upper limit for an individual, and staff with no outputs can be included, as can those who support research by others, or technicians who publish. Research England (and this is mainly about England; the other three countries may do better and will certainly do things differently) is firm that the HESA numbers will not be used as the volume multiplier for funding (still a main purpose of the REF), though it is not clear where that multiplier will come from – Research England is reviewing its approach to strategic institutional research funding. Perhaps staff figures submitted to HESA will carry an indicator of individuals’ engagement with research.

Engagement and Impact broadens the previous element, which covered simply impact. Our masters have discovered that early engagement of external partners in research, and a six-month attachment at 0.2 contract level, allow them to be included, and enhance impact. Wow! Who knew? The work that has impact can be of any quality level, to avoid the current quality-level designations stopping local projects being acknowledged.

The three sub-profiles have fuzzy boundaries and overlap – not just in a linear connection (environment, output, impact) but because, as noted above, engagement, for example, comes from the external environment yet becomes part of the internal culture. The picture becomes more of a Venn diagram, which allows the adoption of a ‘holistic’ approach to ‘responsible research assessment’. We wait to see what both of those mean in practice.

What is clear in that holistic approach is that research has nothing to do with teaching, and impact on teaching still does not count. That has created an issue for me in the past, since my research feeds (not leads) my teaching and vice versa. I use discovery learning and students’ critical incidents as curriculum organisers, and they produce ‘evidence’ similar to that gathered through more formal interview and observation methods. An example: I recently led a workshop on academic governance for a small private HEI with a newly appointed CEO. I used a model of institutional and departmental cultures which influence decision making and ‘governance’ at different levels. That model, developed to support my teaching, is now regarded by some as a theoretical framework and used as a basis for research. Does it therefore qualify for inclusion in impact? The session asked participants to consider the balance among four cultures – collegial, bureaucratic, corporate, entrepreneurial – relating to the degrees of central control of policy development and of policy delivery (McNay, 1995). It then dealt with some issues more didactically, moving to the concept of the learning organisation, where I distributed a 20-item questionnaire (not yet published, but available on request for you to use) allowing each item to be scored out of 10 on behaviours relating to capacity to change, innovate and learn, leading to improved quality. Only one person scored more than 100 out of a possible 200; across the group the modal score was in the low 70s, or just over 35%. That gave the new CEO an agenda, with some issues more easily pursued than others and scores indicating levels of concern and priority. So my role moved into consultancy. There will be impact, but is the research base sufficient, was it even research, and does the use of teaching as a research transmission process (Boyer, 1990) disqualify it?

I hope this shows that the report contains a big agenda, with more to come. SRHE members need to consider what it means to them, but also what it means for research into institutions and departments to help define culture and its characteristics. I will not be doing it, but I hope some of you will. We need to continue to provide an evidence base to inform decisions even if it takes up to 20 years for the findings to have an impact.

SRHE itself might say several things in response to the report:

  • welcome the recognition of previous weaknesses, but note that a major one has not been recorded: the impact of the RAE/REF on teaching, where excellent research has gained extra money but excellent teaching has not, leading to an imbalance of effort within the HE system. The research-teaching nexus also needs incorporating into the holistic view of research. Teaching is a major element in the dissemination of research (Boyer, 1990) and so a conduit to impact, and should be recognised as such, because the relationship between researcher/teacher and those gaining new knowledge and understanding is more intimate and interactive than a reader of an article experiences. Discovery learning, drawing on learners’ experiences in CPD programmes, can be a source of evidence, enhancing the knowledge and understanding of the researcher to incorporate in further work and research publications.
  • welcome the commitment to more diversity of excellences. In particular, welcome the commitment to recognise local and regionally directed research and its significant impact. The arguments about intimacy and interaction apply here, too. Research in partnership is typical of such work and different criteria are needed to evaluate excellence in this context.
  • welcome the intention to review panel membership to reflect the wider view of research now to be adopted.
  • urge earlier clarification of panel criteria, to avoid another 18 months, at least, of trying, without clarity or guidance, to do work that must fit the framework of judgement within which it will be judged.
  • be wary of losing the voice of the researchers in the reduction of emphasis on research and its outputs in favour of presentations on corporate culture.

References

McNay, I (1997) The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: the evidence from a research project. Bristol: HEFCE



Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules, programmes or qualifications. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …


Crossing the Threshold

By Paul Temple

When education students are taught about the difference between norm-referenced and criterion-referenced assessment, the example often given of criterion referencing is the driving test. The skills you need to demonstrate in order to pass the practical test are closely defined, and an examiner can readily tell whether or not you have mastered them. So you have to do a hill start without the car running backwards, reverse around a corner without hitting the kerb or ending up in the middle of the road, and so on. The driving test could then, in principle, have a 100% or a 0% pass rate. (A non-education example of a norm-referenced examination is the football league: to stay in the Premier League, a team doesn’t have to be objectively brilliant, just fractionally better than that year’s weakest three teams.) But the driving test is also a threshold assessment: the examiner expects the candidate to be able to negotiate the town centre one-way system competently, but not to show that they can take part in a Bond movie car-chase. You have to cross the threshold of competent driving: you don’t have to show that you can go beyond it.

This seems a clear enough distinction: so why do so many academics apparently have difficulty with it?


Grade point averages

By Geoff Stoakes

In May, the Higher Education Academy (HEA) published a report on the pilot study into a national grade point average (GPA) system. This study was prompted by the debate around the perceived limitations of the honours degree classification (HDC) system: in particular, insufficient differentiation between levels of student performance, a lack of recognition outside the UK, and limited transparency in how the HDC is calculated by different higher education providers.

In his speech on 1 July 2015 at Universities UK, Jo Johnson, Minister for Universities and Science, highlighted that one of the things he wants to focus on in the forthcoming green paper is how a Teaching Excellence Framework can help improve how degrees are classified. He believes that the standard model of classes of honours on its own is “no longer capable of providing the recognition hardworking students deserve and the information employers require.”