The Society for Research into Higher Education


The ‘Holy Grail’ of pedagogical research: the quest to measure learning gain

by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans

Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, the Times Higher Education Supplement trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy making is a long and winding road.

Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?

The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we do undoubtedly need a better measure than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction in the weighting of the NSS to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement. The evaluation report argues that:

“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).

Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.

The article ‘Making Sense of Learning Gain in Higher Education’, in a special issue of Higher Education Pedagogies (HEP), highlights these tricky questions. Continue reading

Ian McNay writes…

By Ian McNay

How many Eleanors can you name? Roosevelt, Marx, Bron, Aquitaine, Rigby … add your own. Why am I asking this? Because it is a new metric for widening access. A recent issue of People Management, the journal of the CIPD, reports that in 2014 the University of Oxford admitted more girls named Eleanor than students who had received free school meals. Those who were taught at private schools were 55% more likely to go to Oxbridge than students who had received free school meals. Those two universities have even reduced the proportion of students they admitted who came from lower socio-economic groups in the decade from 2004-05: from 13.3% to 10% at Oxford and from 12.4% to 10.2% at Cambridge. Other Russell Group universities also recorded a fall, according to HESA data. So, second question: how many people do you know who have had free school meals, or whose children have? Not a visible/audible characteristic: they do not wear wristlet identifiers. But your university planning office will have the stats if you want to check its record. Continue reading


A Culture of Publish or Perish? The Impact of the REF on Early Career Researchers

By Charlotte Mathieson

This article aims to highlight some of the ways in which the REF has impacted upon early career researchers, using this as a springboard to think about how the next REF might better accommodate this career group.

In my role at the Institute of Advanced Study at the University of Warwick I work closely with a community of early career researchers and have experienced first-hand the many impacts that this REF has had on my peer group; but I wanted to ensure that this article reflected a broader range of experiences across UK HE, and therefore in preparation I distributed an online survey asking ECRs about their experiences and opinions on the REF 2014.

Survey overview

– 193 responses collected between December 2014 and March 2015
– responses gathered via social media and email from across the UK
– 81.3% had completed PhDs within the last 8 years
– 41.5% were REF returned
– 18.7% were currently PhD students
– 10.9% had left academia since completing a PhD

Five main points emerged as most significant among the responses: Continue reading


Was that a foul REF?

By Rob Cuthbert

The Research Excellence Framework, the UK’s latest version of research quality assessment, reached its conclusion just after the SRHE Research Conference. Publication of the results in mid-December led to exhaustive coverage in all the HE media. 

In the Research Season 2008-2014 the controversy was not so much about who ended up top of the league, but whether the English premier league can still claim to be the best in the world.

Big clubs were even more dominant, with the golden triangle pulling away from the rest and filling the top league positions. But controversy raged about the standard of refereeing, with many more players being labelled world class than ever before. Referees supremo David Sweeney was quick to claim outstanding success, but sponsors and commentators were more sceptical, as the number of goals per game went up by more than 50%.

During the season transfer fees had reached record heights as galactico research stars were poached by the big clubs before the end of the transfer window. To secure their World University League places the leading clubs were leaving nothing to chance. It was a league of two halves. After positions based on research outcomes had been calculated there was a series of adjustments, based on how many people watched the game (impact), and how big your stadium was (environment). This was enough to ensure no surprises in the final league table, with big clubs exploiting their ground advantage to the full. And of course after the end of the season there is usually a further adjustment to ensure that the big clubs get an even bigger share of the funding available. This process, decreed by the game’s governing body, is known as ‘financial fair play’.

Some players had an outstanding season – astronomers were reported to be ‘over the moon’ at the final results, but not everyone was happy: one zoologist confided that he was ‘sick as a parrot’. The small clubs lacked nothing in effort, especially at Northampton, where they responded superbly to their manager’s call to put in 107%. But not everyone can be a winner; research is a results business, and, as always when a team underperforms, some clubs will be quick to sack the manager, and many more will sack the players.

Scepticism about the quality of the league lingers among the game’s governing body, suspicious about high scoring, and there is a risk that the money from the Treasury will finally dry up. The game may not have finished yet, but some … some people are running onto the pitch, they think it’s all over. It is for now.

Rob Cuthbert is Emeritus Professor of Higher Education Management, University of the West of England; Joint Managing Partner, Practical Academics (rob.cuthbert@btinternet.com); Editor, Higher Education Review (www.highereducationreview.com); and Chair, Improving Dispute Resolution Advisory Service (www.idras.ac.uk).


Performance-based research assessment is narrowing and impoverishing the university in New Zealand, UK and Denmark

The article below is reposted from the original piece published at the LSE Impact of Social Sciences blog. It is reposted under a Creative Commons 3.0 licence.

Susan Wright, Bruce Curtis, Lisa Lucas and Susan Robertson provide a basic outline of their working paper on how performance-based research assessment frameworks in different countries operate and govern academic life. They find that assessment methods steer academic effort away from the wider purposes of the university, enhance the powers of leaders, propagate unsubstantiated myths of meritocracy, and demand conformity. But the latest quest for ‘impact’ may, in effect, unmask these operations and diversify ‘what counts’ across contexts.

Our working paper Research Assessment Systems and their Impacts on Academic Work in New Zealand, the UK and Denmark arises from the EU Marie Curie project ‘Universities in the Knowledge Economy’ (URGE) and specifically from its 5th work package, which examined how reform agendas that aimed to steer university research towards the ‘needs of a knowledge economy’ affected academic research and the activities and conduct of researchers. This working paper has focused on Performance-Based Research Assessment systems (PBRAs). PBRAs in the UK, New Zealand and Denmark now act as a quality check, a method of allocating funding competitively between and within universities, and a method for governments to steer universities to meet what politicians consider to be the needs of the economy. Drawing on the studies reported here and the discussions that followed their presentation to the URGE symposium, four main points can be highlighted. Continue reading