By Ian McNay
Recent headlines on the Research Excellence Framework (REF) look a bit over the top once the scores are scrutinised closely, particularly in Education, as detailed in my earlier blog post back in February. So, I won’t take long, but want to add to what I said then to emphasise the impact of the three elements of scoring and the discontinuities between exercises. Andrew Pollard, chair of the Education panel, concentrates on ‘research activity’ in his comment on the BERA website, where 30 per cent was at 4* level. However, output had only 21.7 per cent at that level; the scores were boosted by scores of over 40 per cent for impact and environment.
If you look hard you can find a breakdown of scores by element for 2008, and this shows how things have changed. For environment, in 2008, 5 units scored 50 per cent or higher at 4*, with a top score of 75 per cent; 19 scored 50 per cent or more when 3* and 4* are combined. This time, at 4*, 18 units scored 50 per cent or more and 8 scored 100 per cent. Combining 3* and 4* shows that 52 units scored more than 50 per cent, with 23 scoring 100 per cent across those top two levels. That grade inflation suggests either considerable investment in development or less demanding criteria.
On impact there is no precedent. In 2008 the third element was esteem indicators, where the top score for 4* was 40 per cent, achieved by only two units, with a further 10 getting 30 per cent. For impact in 2014, 13 units scored more than 50 per cent – well above the highest score for esteem. Perhaps we judged our peer academics more harshly than users of research did. Or, perhaps, they were less obsessed with the long-term, large-scale, statistical studies using big data sets which successive panels have set as the acme of quality work, and more concerned with ‘did it make a difference?’ Impact gets only three paragraphs in the panel report, which gives more emphasis to a strong theoretical base in an academic discipline than to Mode 2 approaches, based on problem-solving in a professional context. Perhaps that contributed to the view of the user members and impact assessors who ‘raised queries about the value for money of such a time-consuming exercise’.
When it came to money and QR allocations in England, there were surprises. The biggest cash losers across the board were Manchester, Imperial College, Cambridge, and Leeds. Not, perhaps, what government was expecting from concentration of funding, which had supposedly been increased by changing the funding ratio between 3* and 4* work. They did, of course, have more to lose, but seven of the ten HEIs with the biggest percentage decreases were pre-92 universities. The main gainers in percentage terms were all more recently designated universities, led by Edge Hill, with an increase of over 350 per cent. Congratulations also to the other four who doubled their money: Bedfordshire, Huddersfield, Canterbury Christ Church, and Northumbria, with Anglia Ruskin on 99.5 per cent. In part, elite universities lost out because they cut the numbers submitted drastically, whereas the modern universities had invested in developing staff, and many submitted more people in Education as well as improving their quality profile. Between 2001 and 2014 modern universities in England reduced numbers by just over 15 per cent – about 82 FTE – much of it from institutions not submitting at all, so like-for-like comparisons show an increase of about 24 FTE; pre-92 figures went down by over 30 per cent – 350 FTE. In Scotland, both of the modern universities submitting increased their numbers, whereas between 2008 and 2014 Edinburgh and Strathclyde dropped by over 50 per cent and Glasgow and Stirling by over 30 per cent. That helps explain the shift in the quality profile, but, despite the higher proportions of higher graded work, the very small gain in numbers of 4* outputs came at a very high price for staff now deemed not research active enough.
There were, of course, whingers among the losers. In a letter to THE, one Essex professor claimed that ‘institutions that historically do very well should not face large cuts just because their performance has slipped a bit’. Such a principle, if applied elsewhere, would have seen England in the knockout stages of the cricket world cup instead of Bangladesh – who beat them, sure, but who, historically, do not have as good a record, and should be kept in their place. So, the privileged fight to retain their privileged positions against challengers supporting greater equity. Well, there is a surprise. The same issues will be put to the vote on 7 May.
SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich.