
The Society for Research into Higher Education


The REF: Digging beyond the headlines

By Ian McNay

Recent headlines on the Research Excellence Framework (REF) prove a bit over the top when the scores are scrutinised closely, particularly in Education, as detailed in my earlier blog post back in February. So, I won’t take long, but I want to add to what I said then to emphasise the impact of the three elements of scoring and the discontinuities between exercises. Andrew Pollard, chair of the Education panel, concentrates on ‘research activity’ in his comment on the BERA website, where 30 per cent was at 4* level. However, output had only 21.7 per cent at that level; the overall figure was boosted by scores of over 40 per cent for impact and environment.

If you look hard you can find a breakdown of scores by element for 2008, and these show how things have changed. For environment, in 2008, five units scored 50 per cent or higher at 4*, with a top score of 75 per cent; 19 scored 50 per cent or more when 3* and 4* are combined. This time, 18 units scored 50 per cent or more at 4*, and eight scored 100 per cent. Combining 3* and 4* shows that 52 units scored more than 50 per cent, with 23 scoring 100 per cent across those top two levels. That grade inflation suggests either considerable investment in development or less demanding criteria.

On impact there is no precedent. In 2008 the third element was esteem indicators, where the top score at 4* was 40 per cent, achieved by only two units, with a further ten getting 30 per cent. For impact in 2014, 13 units scored more than 50 per cent – well above the highest score for esteem. Perhaps we judged our peer academics more harshly than users of research did. Or, perhaps, the users were less obsessed with long-term, large-scale statistical studies using big data sets, which successive panels have set as the acme of quality work, and more concerned with ‘did it make a difference?’


REF outcomes: Some comments and analysis on the Education Unit of Assessment

By Ian McNay

A recent report to the Leadership Foundation for HE (Franco-Santos et al, 2014) showed that ‘agency’ approaches to performance management favoured by managers, which ‘focus on short-term results or outputs through greater monitoring and control’, are ‘associated with … lower levels of institutional research excellence’. A small project I conducted last summer on the REF experience showed such approaches to be pervasive across many institutions in the sample. Nevertheless, the main feature of the 2014 REF was the grade inflation over 2008.

Perhaps such outstanding global successes, setting new paradigms, are under-reported, or are in fields beyond the social sciences, where I do most of my monitoring. More likely, the ‘strategic entry policies’ and ‘selectivity in many submissions’ noted by the panel have shifted the profile from 2008. A second successive reduction of 15 per cent in staff numbers submitted, with 15 institutions opting out and only nine coming in since 2008, will also have had an effect. Welcome back to Cardiff, hidden in a broad social sciences submission last time. The gain from Cardiff’s decision is that it will scoop all of the QR funding for education research in Wales. None of the six other institutions making submissions in 2008, with some 40 staff entered, appeared in the 2014 results. That may be a result of the disruptions from mergers, or a set of strategic decisions, given the poor profiles last time.

What has emerged, almost incidentally, is an extended statement of the objectives/ends of the exercise, which now does not just ‘inform funding’. It provides accountability for public investment in research and evidence of the benefit of this investment. The outcomes provide benchmarking information and establish reputational yardsticks for use within the HE sector and for public information. There is still nothing about improving the quality of research, so any such consequence is not built into the design of the exercise, but comes as a collateral benefit.

My analysis here is only of the Education Unit of Assessment [UoA 25]. More can be found at www.ref.ac.uk.

Something that stands out immediately is the effect of the environment and impact scores on the overall UoA profile. Across all submissions, the percentage score for 4* ratings was 21.7 for outputs, but 42.9 for impact and 48.4 for environment. At the extreme, both the Open University and Edinburgh more than doubled their 4* score between the output profile and the overall profile. Durham, Glasgow and Ulster came close to doing so, and Cardiff and the Institute of Education added 16 and 18 points respectively to their 4* overall profile.
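A quick arithmetic check shows how those element scores feed through to the overall figure. Assuming the published REF 2014 element weightings of 65 per cent for outputs, 20 per cent for impact and 15 per cent for environment, the all-submissions 4* figures above combine as:

0.65 × 21.7 + 0.20 × 42.9 + 0.15 × 48.4 ≈ 14.1 + 8.6 + 7.3 ≈ 29.9

so an overall profile of around 30 per cent at 4* can sit alongside an output score of only 21.7 per cent.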

Seven of the traditional universities, plus the OU, had 100 per cent ratings for environment. All submitted more than 20 FTE staff for assessment – the link between size and a ‘well found’ department is obvious. Several ‘research intensive’ institutions that might be expected to be in that group are not there: Cambridge scored 87.5 per cent. Seven were judged to have no 4* elements in their environment. Durham, Nottingham and Sheffield had a 100 per cent score for impact in their profile, making Nottingham the only place with a ‘double top’.

In England, 21 traditional universities raised their overall score above their output score; six did not. Among the less research-intensive universities, 29 dropped between the two profiles, but there were also seven gains in 4* ratings: Edge Hill went from 2.1 per cent [output] to 9 per cent [overall] because of a strong impact profile; Sunderland proved King Lear wrong in his claim that ‘nothing will come of nothing’: they went from zero [output rating] to 5 per cent overall, also because of their impact score. Among the higher scoring modern universities, Roehampton went down from 31 to 20 at 4*, but still led the modern universities with 71 per cent overall in the 3*/4* categories.

I assume there was monitoring of inter-assessor comparability, but there appear to have been some assessors who used a 1–10 scale and added a zero, and others who used simple fractions, for both impact and environment. Many profiles do not get beyond a 50/50 split. For output, it is different; even with numbers submitted in low single figures, most scores go to the single decimal point allowed.

For me, one of the saddest contrasts was thrown up by alphabetic order. Staffordshire had an output rating of 25 per cent at 4*; Southampton had 18 per cent. When it came to the overall profile, the rankings were reversed: Staffordshire went down to 16 because it had no scores at 3* or 4* in either impact or environment; Southampton went up to 31 per cent. There is a personal element here: in 2001 I was on the continuing education subpanel. In those days there was a single grade; Staffordshire had good outputs – its excellent work on access was burgeoning – but it was held down a grade because of concerns about context issues: it was new and not in a traditional department. Some traditional departments were treated more gently because they had a strong history.

The contribution of such elements was not then quantified, nor openly reported, so today’s openness at least allows us to see the effect. I believed, and still do, that, as with the approach to A level grades, contextual factors should be taken into account, but in the opposite direction: doing well despite a less advantaged context should be rewarded more. My concern about the current structure of REF grading is that, as in 2001, it does the exact opposite. The message for units in modern universities, whose main focus is on teaching, is to look to the impact factor. A research agenda built round impact from the project inception stage may be the best investment for those who wish to continue to participate in future REFs, if any. There is a challenge, because much of their research may be about teaching, but the rules of the game bar impact on teaching from inclusion in any claim for impact. That dismisses any link between the two, the Humboldtian concept of harmony, and any claims to research-led teaching. As a contrast, on the two criticisms outlined here, the New Zealand Performance-Based Research Fund [PBRF] has, among its aims:

  • To increase the quality of basic and applied research
  • To support world-leading research-led teaching and learning at degree and postgraduate level.

Might the impending review of REF take this more integrated and developmental approach?

Reference: Franco-Santos, M., Rivera, P. and Bourne, M. (2014) Performance Management in UK Higher Education Institutions, London: Leadership Foundation for Higher Education.

SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich.



News values


By Ian McNay

My interest [obsession?] with the way the press report HE issues has had several items to feed it recently. I had a spat, unpublished, with John Morgan of Times Higher Education over an article on 27 March on student number allocations by HEFCE, headlined ‘No bonanza for those who left places unfilled’. The story opened with the assertion that ‘the big post-92s suffer’, having proved [sic] ‘less popular’, and the third paragraph listed four of them.

Then comes the table giving percentage reductions, where those with the biggest reductions are not post-92s, but Leeds, Bath and Surrey. The article comes to them in the fourth column, with a claim that their reduction was probably ‘strategic’. As a researcher, I looked for evidence of the different reasons behind reductions. There was none, since ‘figures were issued on a “no approach” embargo’ where no questions could be asked of institution staff. So, opinion, based on speculation, based on stereotypical bias, is presented as news reportage.

The reporting of research demonstrating the [not new] finding that state school entrants outperform those from private schools with the same entry qualifications mentioned the recommendation to consider adjusting offers, and produced the usual protective outcry on the web page. Nobody reported the evidence from UCAS statistics that grades are already effectively adjusted by Russell Group universities, where applicants from privileged backgrounds are more likely to get an offer than those with similar qualifications from less advantaged backgrounds.

Finally in this rant is the question: ‘what is newsworthy?’ In recent weeks, the Centre for Leadership and Enterprise at Greenwich has offered commissioned programmes for staff in the Nigerian Ministry of Education, including the permanent secretary, covering issues of policy on teacher development and deployment, vocational provision, standards, and school governance; and for senior staff from Ukraine – both sides of the country and the language divide – on leadership as a new Higher Education Law is developed.

I thought these together were newsworthy: a small centre working with staff from countries with challenging contexts, and offering good news to balance the bad. Apparently I was wrong: the University judged them not worth a press release, or even a mention in its daily coverage on its web pages.

There is, apparently, a ‘London effect’: had we been in Lincoln, or Teesside, or even at the university’s Medway campus, it would have been worth trying to get something into the local press. London journalists are more blasé and world-weary, it appears, so nothing appeared. But at least you now know about it. I am due in Kyiv in October; if I get taken hostage, will that count as news?

SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich.