By Ian McNay
A recent report to the Leadership Foundation for HE (Franco-Santos et al., 2014) showed that the ‘agency’ approaches to performance management favoured by managers, which ‘focus on short-term results or outputs through greater monitoring and control’, are ‘associated with … lower levels of institutional research excellence’. A small project I conducted last summer on the REF experience showed such approaches to be pervasive in many of the institutions in the sample. Nevertheless, the main feature of the 2014 REF was grade inflation compared with 2008.
Perhaps such outstanding global successes, setting new paradigms, are under-reported, or are in fields beyond the social sciences where I do most of my monitoring. More likely, the ‘strategic entry policies’ and ‘selectivity in many submissions’ noted by the panel have shifted the profile from 2008. A second successive reduction of 15 per cent in the staff numbers submitted, with 15 institutions opting out and only nine coming in since 2008, will also have had an effect. Welcome back to Cardiff, hidden in a broad social sciences submission last time. The gain from Cardiff’s decision is that it will scoop all of the QR funding for education research in Wales. None of the six other institutions making submissions in 2008, with some 40 staff entered, appeared in the 2014 results. That may be a result of the disruption from mergers, or a set of strategic decisions, given their poor profiles last time.
What has emerged, almost incidentally, is an extended statement of the objectives of the exercise, which now does not just ‘inform funding’. It provides accountability for public investment in research and evidence of the benefit of that investment. The outcomes provide benchmarking information and establish reputational yardsticks for use within the HE sector and for public information. There is still nothing about improving the quality of research, so any such consequence is not built into the design of the exercise, but comes as a collateral benefit.
My analysis here is only of the Education Unit of Assessment [UoA 25]. More can be found at www.ref.ac.uk.
Something that stands out immediately is the effect of the environment and impact scores on the overall UoA profile. Across all submissions, the percentage of 4* ratings was 21.7 for outputs, but 42.9 for impact and 48.4 for environment. At the extreme, both the Open University and Edinburgh more than doubled their 4* score between the output profile and the overall profile. Durham, Glasgow and Ulster came close to doing so, and Cardiff and the Institute of Education added 16 and 18 percentage points respectively to the 4* element of their overall profiles.
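The arithmetic behind those shifts is worth setting out. REF 2014 weighted the sub-profiles at 65 per cent for outputs, 20 per cent for impact and 15 per cent for environment, so a strong impact or environment score can pull the overall figure well above the output figure. With illustrative (not actual) figures of 15 per cent of outputs, 60 per cent of impact and 80 per cent of environment rated at 4*, the overall 4* score would be:
(0.65 × 15) + (0.20 × 60) + (0.15 × 80) = 9.75 + 12 + 12 ≈ 34 per cent
– more than double the output figure, which is the kind of jump seen at the Open University and Edinburgh.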
Seven of the traditional universities, plus the OU, had 100 per cent ratings for environment. All submitted more than 20 FTE staff for assessment – the link between size and a ‘well-found’ department is obvious. Several ‘research-intensive’ institutions that might be expected to be in that group are not there: Cambridge scored 87.5 per cent. Seven units were judged to have no 4* elements in their environment. Durham, Nottingham and Sheffield had a 100 per cent score for impact in their profile; of those, only Nottingham was also in the 100 per cent environment group, making it the only place with a ‘double top’.
In England, 21 traditional universities had an overall 4* score higher than their output score; six did not. Among the less research-intensive universities, 29 dropped between the two profiles, but there were also seven gains in 4* ratings: Edge Hill went from 2.1 per cent [output] to 9 per cent [overall] because of a strong impact profile; Sunderland proved King Lear wrong in his claim that ‘nothing will come of nothing’: they went from zero [output rating] to 5 per cent overall, also because of their impact score. Among the higher-scoring modern universities, Roehampton went down from 31 to 20 at 4*, but still led the modern universities with 71 per cent overall in the combined 3*/4* categories.
I assume there was monitoring of inter-assessor comparability, but there appear to have been some assessors who used a 1–10 scale and added a zero, and others who used simple fractions, for both impact and environment. Many profiles do not get beyond a 50/50 split. For outputs it is different; even with numbers submitted in low single figures, most scores go to the single decimal point allowed.
For me, one of the saddest contrasts was thrown up by alphabetical order. Staffordshire had an output rating of 25 per cent at 4*; Southampton had 18 per cent. When it came to the overall profile, the rankings were reversed: Staffordshire went down to 16 per cent because it had no scores at 3* or 4* in either impact or environment; Southampton went up to 31 per cent. There is a personal element here: in 2001 I was on the continuing education sub-panel. In those days there was a single grade; Staffordshire had good outputs – its excellent work on access was burgeoning – but was held down a grade because of concerns about context issues: it was new and not in a traditional department. Some traditional departments were treated more gently because they had a strong history.
The contribution of such elements was not then quantified, nor openly reported, so today’s openness at least allows us to see the effect. I believed, and still do, that, as with the approach to A level grades, contextual factors should be taken into account, but in the opposite direction: doing well despite a less advantaged context should be rewarded more. My concern about the current structure of REF grading is that, as in 2001, it does the exact opposite. The message for units in modern universities, whose main focus is on teaching, is to look to the impact factor. A research agenda built around impact from the project inception stage may be the best investment for those who wish to continue to participate in future REFs, if any. There is a challenge because much of their research may be about teaching, but the rules of the game bar impact on teaching from inclusion in any claim for impact. That dismisses any link between the two, the Humboldtian concept of harmony, and any claims to research-led teaching. In contrast, on the two criticisms outlined here, the New Zealand Performance-Based Research Fund [PBRF] has, among its aims:
- To increase the quality of basic and applied research
- To support world-leading research-led teaching and learning at degree and postgraduate level.
Might the impending review of REF take this more integrated and developmental approach?
Reference: Franco-Santos, M., Rivera, P. and Bourne, M. (2014) Performance Management in UK Higher Education Institutions. London: Leadership Foundation for Higher Education.
SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich.