Merchant rankers

by Paul Temple

I blogged a while back on THE’s transformation from a publisher of news and opinion on higher education into a producer and vendor of rankings data. Every issue of the magazine, it seems, now comes with the latest rankings publication, often thicker than its parent. The latest one I’ve seen gives the “2019 University Impact Rankings”. You’ve got to admire the ingenuity of THE’s Chief Knowledge Officer, Phil Baty, and his team in dreaming up ever more varied ways of ranking universities; and the cleverness of these latest rankings, which examine contributions to the UN Sustainable Development Goals (SDGs), is that a wider range of universities than just the usual suspects can claim their place in the sun. So expect to see Kyung Hee University in South Korea boasting of its top world ranking under SDG 11, “Sustainable cities and communities”.

The UN has developed 17 SDGs taking in a wide sweep of worthwhile objectives: peace, health, welfare, equalities, sustainability, and more. Probably all universities contribute in different ways to many of these goals, but how should their varying achievements in this field be ranked? Well, the difficulty of adding incommensurables together to produce the single number a league table requires has never yet got in the way of people with a ranking product to sell. So you won’t be surprised to hear that it turned out to be a piece of cake to add a university’s contribution to, say, “good health and wellbeing” to a number reflecting its work on “gender equality” and to its number on “climate action”; to compare that total with the total from a university on the other side of the world which says it contributes to a different set of SDGs; and to come up with a league table. (The University of Auckland came top, since you ask.)
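To make the incommensurability point concrete, here is a minimal sketch of the kind of aggregation being objected to. Everything in it is invented for illustration: the university names, the sub-scores and the flat, unweighted summing are all assumptions of the sketch, and THE’s actual methodology is more elaborate and is not reproduced here.

```python
# Toy illustration of composite ranking: adding incommensurable
# sub-scores (each on its own arbitrary scale) into one total.
# All figures are invented; this is not THE's actual methodology.

scores = {
    # university: {SDG area: score on some arbitrary 0-100 scale}
    "University A": {"health": 82, "gender equality": 91, "climate": 40},
    "University B": {"health": 55, "climate": 95, "cities": 88},
    "University C": {"gender equality": 70, "cities": 60, "health": 77},
}

# Each institution reports against a different subset of goals,
# yet a single league-table number is produced regardless.
totals = {uni: sum(s.values()) for uni, s in scores.items()}

for rank, (uni, total) in enumerate(
        sorted(totals.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {uni}: {total}")
```

The sum produces a tidy-looking ranking, but each total is built from a different, non-comparable set of inputs; that is precisely the objection.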

As I said in my earlier blog about the THE annual university awards, you might think: where’s the harm in universities doing a bit of mild boasting about their contributions to perfectly worthwhile aims? Well, I think there are a couple of problems. One was brought home to me recently at a graduation ceremony, where the speech by the presiding member of the UCL brass was almost entirely about how well UCL and its constituent parts had done in the recent QS rankings. This misleads families and friends, and probably many graduates, into thinking that rankings are some sort of unarguable, football league-style assessment, with a university’s work being counted in the same way as a team’s goals. It also misses an opportunity to tell your own institutional story (“we are a terrific university, and this is why”) rather than sub-contracting the job to someone with a commercial axe to grind. What happened to institutional self-confidence?

The other problem is that the more universities appear to buy into rankings like these, the more THE and other rankers are encouraged to offer consultancies based on their rankings. This is dangerous territory. Rather than claiming, however implausibly, that its consultancy services are entirely separate from its rankings activities, THE goes out of its way to link them. Imagine, then, the marketing director of a university in difficulties of some sort reading the several full-page ads for THE’s consultancy services in the Impact Rankings publication, with their offers of “expert guidance” and “tailored analysis for advancement” drawing on THE’s “deep expertise”, with THE experts becoming “an extension of…universities’ marketing departments”. It wouldn’t be surprising if they thought: “Hmmm, working with these guys might help us move up some of these rankings; at least we’d understand more about how they’re put together, and we might then make some changes in what we do…”

So sets of methodologically worthless data are turned into income streams for rankings producers because university leaderships take them seriously; this in turn drives universities’ policy-making in the direction of moving up one league table or another, which in turn encourages rankers to produce even more league tables in order to exert more power. How on earth did we allow this to happen?

SRHE member Paul Temple, Centre for Higher Education Studies, UCL Institute of Education, University College London.



The ‘Holy Grail’ of pedagogical research: the quest to measure learning gain

by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans

Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, Times Higher Education trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy-making is a long and winding road.

Learning gain is caught between a rock and a hard place: on the one hand, there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply to improve on what we have now?

The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we do undoubtedly need a better measure than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction of the NSS weighting to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement (a point illustrated in the sketch after the quotation below). The evaluation report argues that:

“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).
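On the arithmetic of the weighting point above: correlation is unchanged by positive rescaling, so halving a measure’s weight cannot make the measure itself any more informative about what students learn. A toy simulation makes this visible; the data and variable names here are invented for illustration and have nothing to do with the actual NSS or TEF datasets.

```python
# Toy simulation: down-weighting a noisy proxy does not improve it.
# All data are simulated; none of this is real NSS or TEF analysis.
import numpy as np

rng = np.random.default_rng(seed=0)
true_gain = rng.normal(size=10_000)                  # unobserved learning gain
proxy = 0.2 * true_gain + rng.normal(size=10_000)    # a poor, noise-dominated proxy

r_full = np.corrcoef(proxy, true_gain)[0, 1]
r_half = np.corrcoef(0.5 * proxy, true_gain)[0, 1]   # the "50%" version

print(f"correlation at full weight: {r_full:.3f}")
print(f"correlation at half weight: {r_half:.3f}")   # identical to the line above
```

Both lines print the same (low) correlation: rescaling changes the headline number’s size, not its information content.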

Given the current political climate, the question of why we should measure learning gain has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.

The article ‘Making Sense of Learning Gain in Higher Education’, in a special issue of Higher Education Pedagogies (HEP), highlights these tricky questions.



Beware of slogans

by Alex Buckley

Slogans, over time, become part of the furniture. They start life as radical attempts to change how we think, and can end up victims of their own success. Higher education is littered with ex-slogans: ‘student engagement’, ‘graduate attributes’, ‘technology enhanced learning’, ‘student voice’, ‘quality enhancement’, to name just a few. Hiding in particularly plain sight is ‘teaching and learning’ (and ‘learning and teaching’). We may use the phrase on a daily basis without thinking much about it, but what is the point of constantly talking about teaching and learning in the same breath?