by Rebekah Smith McGloin and Rachel Handforth, Nottingham Trent University
‘Research excellence’ is a ubiquitous concept to which we are mostly habituated in the UK research ecosystem. Yet, at the end of an academic year which saw the publication of the UKRI EDI Strategy, four UKRI council reviews of their investments in PGR and a House of Commons inquiry on Reproducibility and Research Integrity – and following the development of manifestos, concordats, declarations and standards to support Open Research in recent years – it feels timely to engage in some critical reflection on cultures of excellence in research.
The notion of ‘excellence’ has become an increasingly important part of the research ecosystem over the last 20 years (OECD, 2014). The drivers for this are traced to the need to justify the investment of public money in research and the increasing competition for scarce resources (Münch, 2015). University rankings have further hardwired and amplified judgments about degrees of excellence into our collective consciousness (Hazelkorn, 2015).
Jong, Franssen and Pinfield (2021) highlight, however, that the idea of excellence is a ‘boundary object’ (Star and Griesemer, 1989): a nebulous construct which is poorly defined and used in many different ways. It has nevertheless shaped policy, funding and assessment activities since the turn of the century. Ideas of excellence have been enacted through the Research Excellence Framework and the associated allocation of research funding to universities, competitive schemes for grant funding, recruitment to flagship doctoral training partnerships, and individual promotion and reward.
We can trace a number of recent sector-level initiatives, inter alia, that have sought to broaden ideas of research excellence and to challenge systemic and structural inequalities in our research ecosystem. These include: the increase of the impact weighting in REF2021 to 25%; trials of partial randomisation in the selection process for some smaller research grants (eg by the British Academy from 2022); the Concordats and Agreements Review work in 2023 to align and increase the influence, capacity and efficiency of activity to support research culture; and the recent Research England investment in projects designed to address the broken pipeline into research by increasing participation of people from racialised groups in doctoral education.
At the end of June, we are hosting an event at NTU which will focus on redefining cultures of research excellence through the lens of inclusion. The symposium, to be held at our Clifton Campus on Wednesday 28 June, provides an opportunity to re-examine the broad notion of research excellence, in the context of systemic inequalities that have historically locked out certain types of researchers and research agendas and locked in others.
The event focuses on two mutually-reinforcing areas: the possibility of creating more responsive and inclusive research agendas through co-creation between academics and communities; and broadening pathways into research through the inclusive recruitment of PhD and early career researchers. We take the starting position that approaches which focus on advancing equity are critical to achieving excellence in UK research and innovation.
The day will include keynotes from Dr Bernadine Idowu and Professor Kalwant Bhopal, the launch of a new competency-based PGR recruitment framework, based on sector consultation, and a programme of speakers talking about their approaches to diversifying researcher recruitment and engaging the community in setting research agendas.
NTU will be showcasing two new projects that are designed to challenge old ideas of research excellence and forge new ways of thinking. EDEPI (Equity in Doctoral Education through Partnership and Innovation Programme) is a partnership with Liverpool John Moores and Sheffield Hallam Universities and NHS Trusts in the three cities. The project will explore how working with the NHS can improve access and participation in doctoral education for racially-minoritised groups. Co(l)laboratory is a project with the University of Nottingham, based on the Universities for Nottingham civic agreement with local public-sector organisations. Collab will present early lessons from a community-informed approach to cohort-based doctoral training.
Our event is a great opportunity for universities and other organisations who are, in their own ways, redefining cultures of research excellence to share their approaches, challenges and successes. We invite individuals, project teams and organisations working in these areas to join us at the end of June, with the hope of building a community of practice around building inclusive research cultures, within and across the sector.
Dr Rebekah Smith McGloin is Director of the Doctoral School at Nottingham Trent University and is Principal Investigator on the EDEPI and Co(l)laboratory projects.
Dr Rachel Handforth is Senior Lecturer in Doctoral Education and Civic Engagement at NTU.
Historian GR Evans takes the long view of developments in interdisciplinary studies, with particular reference to experience at Cambridge, where progress may at times be slow but is also measured. Many institutions have in recent years developed new academic structures or other initiatives intended to promote interdisciplinary collaboration. We invite further blogs on the topic from other institutional, disciplinary, multidisciplinary or interdisciplinary perspectives.
A recent Times Higher Education article explored ‘academic impostor syndrome’ from the point of view of an academic whose teaching and research crossed conventional subject boundaries, which seemed to have made the author feel like a misfit. She has a point, but perhaps one with broader ramifications.
There is still a requirement of specialist expertise in the qualification of academics. In its Registration Conditions for the grant of degree-awarding powers the Office for Students adopts a requirement which has been in use since the early 1990s. An institution which is an established applicant seeking full degree-awarding powers must still show that it has “A self-critical, cohesive academic community with a proven commitment to the assurance of standards supported by effective quality systems.”
A new applicant institution must show that it has “an emerging self-critical, cohesive academic community with a clear commitment to the assurance of standards supported by effective (in prospect) quality systems.” The evidence to be provided is firmly discipline-based: “A significant proportion (normally around a half as a minimum) of its academic staff are active and recognised contributors to at least one organisation such as a subject association, learned society or relevant professional body.” The contributions of these academic staff are: “expected to involve some form of public output or outcome, broadly defined, demonstrating the research-related impact of academic staff on their discipline or sphere of research activity at a regional, national or international level.”
The establishment of a range of subjects identified as ‘disciplines’ suitable for study in higher education is not much more than a century old in Britain, arriving with the broadening of the university curriculum during the nineteenth century and the creation of new universities to add to Oxford and Cambridge and the existing Scottish universities. Until then the medieval curriculum adapted in the sixteenth century persisted, although Cambridge especially honoured a bent for Mathematics. ‘Research’, first in the natural sciences, then in all subjects, only slowly became an expectation. The higher doctorates did not become research degrees until late in the nineteenth century, and the research PhD was not awarded in Britain until the beginning of the twentieth century, when US universities were beginning to offer doctorates and the PhD was established in the UK as a competitive attraction.
The notion of ‘interdisciplinarity’ is even more recent. The new ‘disciplines’ gained ‘territories’ with the emergence of departments and faculties to specialise in them and supervise the teaching and examining of students choosing a particular subject. In this developing system in universities the academic who did not fully belong, or who made active connections between disciplines still in the process of defining themselves, could indeed seem a misfit. The interdisciplinary was often disparaged as neither one discipline nor another, and regarded by mainstream specialists as inherently imperfect. Taking an interest in more than one field of research or teaching might perhaps be better described as ‘multi-disciplinary’ and requires a degree of cooperativeness among those in charge of the separate disciplines. But it is still not easy for an interdisciplinary combination to become a recognised intellectual whole in its own right, though ‘Biochemistry’ shows it can be done.
Research selectivity and interdisciplinarity
The ‘research selectivity’ exercises which began in the mid-1980s evolved into the Research Assessment Exercises (1986, 1989, 1992, 1996, 2001, 2008), now the Research Excellence Framework. The RAE Panels were made up of established academics in the relevant discipline and by the late 1990s there were complaints that this disadvantaged interdisciplinary researchers. The Higher Education Funding Council for England and the other statutory funding bodies prompted a review, and in November 1997 the University of Cambridge received the consultation paper sent round by HEFCE. A letter in response from Cambridge’s Vice-Chancellor was published, giving answers to questions posed in the consultation paper. Essential, it was urged, were ‘clarity and uniformity of application of criteria’. It suggested that: “… there should be greater interaction, consistency, and comparability between the panels than in 1996, especially in cognate subject areas. This would, inter alia, improve the assessment of interdisciplinary work.”
The letter also suggested “the creation of multidisciplinary sub-panels, drawn from the main panels” or at least that the membership of those panels should include those “capable of appreciating interdisciplinary research and ensuring appropriate consultation with other panels or outside experts as necessary”. Universities should also have some say, Cambridge suggested, about the choice of panel to consider an interdisciplinary submission. On the other hand Cambridge expressed “limited support for, and doubts about the practicality of, generic interdisciplinary criteria or a single interdisciplinary monitoring group”, although the problem was acknowledged.[1]
Interdisciplinary research centres
In 2000 Cambridge set up an interdisciplinary Centre for Research in the Arts, Humanities, and Social Sciences. In a Report proposing CRASSH the University’s General Board pointed to “a striking increase in the number and importance of research projects that cut across the boundaries of academic disciplines both within and outside the natural sciences”. It described these as wide-ranging topics on which work could “only be done at the high level they demand” in an institution which could “bring together leading workers from different disciplines and from around the world … thereby raising its reputation and making it more attractive to prospective staff, research students, funding agencies, and benefactors.”[2]
There have followed various Cambridge courses, papers and examinations using the term ‘interdisciplinary’, for example an Interdisciplinary Examination Paper in Natural Sciences. Acceptance of a Leverhulme Professorship of Neuroeconomics in the Faculty of Economics in 2022 was proposed on the grounds that “this appointment serves the Faculty’s strategy to expand its interdisciplinary profile in terms of research as well as teaching”. It would also comply with “the strategic aims of the University and the Faculty … [and] create a bridge between Economics and Neuroscience and introduce a new interdisciplinary field of Neuroeconomics within the University”. However the relationship between interdisciplinarity in teaching and in research has still not been systematically addressed by Cambridge.
‘Interdisciplinary’ and ‘multidisciplinary’
A Government Report of 2006 moved uneasily between ‘multidisciplinary’ and ‘interdisciplinary’ in its use of vocabulary, with a number of institutional case studies. The University of Strathclyde and King’s College London (Case Study 2) described a “multidisciplinary research environment”. The then Research Councils UK (Case Study 5b) said its Academic Fellowship scheme provided “an important mechanism for building interdisciplinary bridges” and at least two HEIs had “created their own schemes analogous to the Academic Fellowship concept”.
In sum it said that all projects had been successful “in mobilising diverse groups of specialists to work in a multidisciplinary framework and have demonstrated the scope for collaboration across disciplinary boundaries”. Foresight projects, it concluded, had “succeeded in being regarded as a neutral interdisciplinary space in which forward thinking on science-based issues can take place”. But it also “criticised the RAE for … the extent to which it disincentivised interdisciplinary research”. And it believed that Doctoral Training Projects still had a focus on discipline-specific funding, which was “out of step with the growth in interdisciplinary research environments and persistent calls for more connectivity and collaboration across the system to improve problem-solving and optimise existing capacity”.
Crossing paths: interdisciplinary institutions, careers, education and applications was published by the British Academy in 2016. It recognised that British higher education remained strongly ‘discipline-based’, and acknowledged the risks to a young researcher choosing to cross boundaries. Nevertheless, it quoted a number of assurances it had received from universities, saying that they were actively seeking to support or introduce the ‘interdisciplinary’. It provided a set of Institutional Case Studies, including Cambridge’s statement about CRASSH as hosting a range of externally funded interdisciplinary projects. Crossing paths saw the ‘interdisciplinary’ as essentially bringing together existing disciplines in a cluster. It suggested “weaving, translating, convening and collaborating” as important skills needed by those venturing into work involving more than one discipline. It did not attempt to explore the definition of interdisciplinarity or how it might differ from the multi-disciplinary.
Interdisciplinary teaching has been easier to experiment with, particularly at school level where subject-based boundaries may be less rigid. There seems to be room for further hard thought not only on the need for definitions but also on the notion of the interdisciplinary from the point of view of the division of provision for posts in – and custody of – individual disciplines in the financial and administrative arrangements of universities. This work-to-be-done is also made topical by Government and Office for Students pressure to subordinate or remove established disciplines which do not offer the student a well-paid professional job on graduation.
SRHE member GR Evans is Emeritus Professor of Medieval Theology and Intellectual History in the University of Cambridge.
[1] Cambridge University Reporter, 22 April 1998.
[2] Cambridge University Reporter, 25 October 2000.
Dr Richard Davies, co-convenor of SRHE’s Academic Practice network, ran a network event on 26 January 2022 ‘What makes a good SRHE Conference abstract?’. A regular reviewer for the SRHE Conference, Richard also asked colleagues what they look for in a good paper for the conference and shared the findings in a well-attended event.
Writing a submission for a conference is a skill – distinct from writing for journals or public engagement. It is perhaps most like an erudite blog. In the case of the SRHE conference, you have 750 words to show the reviewer that your proposed presentation is (a) worth conference delegates’ attention, and (b) a better fit for this conference than others (we get more submissions than the conference programme can accommodate so it is a bit competitive!).
Think of it as a short paper, not an abstract
It is difficult to summarise a 5,000–6,000-word paper in 750 words and cover literature, methodology, data and findings. As a reviewer, I often find myself unsatisfied with the result. It is better to think of this as a short paper that you can present in 15 minutes at the conference. This means focussing on a specific element of your study which can be communicated in 750 words and following the argument of that focus through precise methodology, a portion of your data, and final conclusions. Sure, tell the reviewers this is part of a larger study, but that you are focusing on a specific element of it. The short paper will then, if well written, be clear and internally coherent. If I find a submission is neither clear nor coherent, then I would usually suggest rejecting it, because if I cannot make sense of it then I assume delegates will not be able to either.
Practical point: get a friend or colleague to read the short paper – do they understand what you are saying? They don’t have to be an expert in higher education or even research. As reviewers, most of us regularly read non-UK English texts; as an international society we are not expecting standard English – just the clarity to understand the points the author is making. Whether UK-based or international, we are not experts in different countries’ higher education systems, so do not assume the reviewer has prior knowledge of the higher education system you are discussing.
Reviewer’s judgement
Although we work to a set of criteria, as with most academic work, there is an element of judgement, and reviewers take a view of your submission as a whole. We want to know: will this be of interest to SRHE conference delegates? Will it raise questions and stimulate discussion? In my own area of philosophy of education, a submission might be philosophically important but not explicitly about higher education; as a result I would tend to suggest it be rejected. It might be suitable for a conference but not this conference.
Practical point: check you are explicitly talking about higher education and how your paper addresses an interesting area of research or practice. Make sure the link is clear – don’t just assume the reviewers will make the connection. Even if we can, we will be wary of suggesting acceptance.
Checking against the criteria
The ‘Call for Papers’ sets out the assessment criteria against which we review submissions. As a reviewer, I read the paper and form a broad opinion; I then review with a focus on each specific criterion. Each submission is different and will meet each criterion (or not) in a different way and to varying degrees. As a reviewer, I interpret the criterion in the light of the purpose and methodology of the submission. As well as clarity and suitability for the conference, I also think about the rigour with which it has been written. This includes engagement with relevant literature, the methodology/methods and the quality of the way the data (if any) are used. I want to know that this paper builds on previous work but adds some original perspective and contribution. I want to know that the study has been conducted methodically and that the author has deliberated about it. Where there are no data, either because it is not an empirical study or because the paper reports the initial phases of what will be an empirical study, I want to know that the author’s argument is reasonable and illuminates significant issues in higher education.
Practical point: reviewers use the criteria to assess and ‘score’ submissions. It is worth going through the criteria and making sure it is clear how you have addressed each one. If you haven’t got data yet, then say so and say why you think the work is worth presenting at this early stage.
Positive news
SRHE welcomes submissions from all areas of research and evaluation in higher education, not just those with lots of data! Each submission is reviewed by two people and then moderated, and further reviewed, if necessary, by network convenors – so you are not dependent on one reviewer’s assessment. Reviewers aim to be constructive in their feedback and to uphold the high standard of presentations we see at the conference, highlighting areas of potential improvement for both accepted and rejected submissions.
Finally, the SRHE conference does receive more submissions than can be accepted, and so some good papers don’t make it. Getting rejected is not a rejection of your study (or you); sometimes it is about clarity of the submission, and sometimes it is just lack of space at the conference.
Dr Richard Davies is an academic, educationalist and informal educator. He is primarily concerned with helping other academics develop their research on teaching and learning in higher education. His own research is primarily in philosophical approaches to higher educational policy and practice. He co-convenes SRHE’s AP (Academic Practice) Network – you can find out more about the network by clicking here.
A spectrum of interesting critical issues related to ‘quality’ was brought to light during the SRHE Academic Practice Network conference on 22-23 June 2021. The conference, ‘Qualifying the debate on quality’, attracted my attention and I was keen to share my perspectives on the implications of having quality teacher educators in order to produce quality classroom teachers.
My substantive work as an Education Officer, supervising principals and teachers in our schools, and secondly as an Adjunct Lecturer teaching student teachers in a Bachelor of Education programme, positioned me as an inside observer of and participant in this phenomenon. My doctoral thesis (2020) explored teacher educators’ perceptions about their continuing professional development and their experiences as they transitioned into and assumed roles as teacher educators. Hence, I am quite pleased to write this blog that captures the essence of my presentation from the conference.
Ascribing the label of “quality” to education has different meanings and interpretations in different conditions and settings. ‘Quality’ depends on geographical boundaries and contexts, with consideration given to quality assurance, regulations and established standards using certain measures (Churchward and Willis, 2018). Attaining ‘quality’ can therefore be elusive, especially when we try to address all the layers within an education system. The United Nations’ Sustainable Development Goal 4 aims to offer ‘quality’ education for all in an inclusive and equitable climate. But this quality education is to be provided by teachers, with no mention (as is generally the case) of the direct input of teacher educators who sit at the apex of the ‘quality chain’. These teacher educators work in higher education institutions and are tasked with the responsibility of formally preparing quality classroom teachers. The classroom teachers in turn would ensure that our students receive this inclusive equitable quality education within schools and other learning institutions.
Although teacher educators’ professional development is now receiving more attention, as reported in the literature, this once forgotten group of professionals, who make up a distinct group within the education sector, still needs constant support and continuous professional development. This attention will enable them to offer an improved quality of service to their student teachers. Without giving teacher educators the support and attention they deserve, quality education cannot be realised in our classrooms. Sharma (2019) reminds us that every child deserves quality classroom teachers.
Responsibilities of teacher educators
An understanding of what teacher educators are expected to do is therefore critical, if we are to recognize their value in the quality chain. Darling-Hammond (2006) opines that teacher educators must have knowledge of their learners and their social context, knowledge of content and of teaching. Furthermore, Kosnik et al (2015) explain that they should have knowledge of pedagogy in higher education, research and government initiatives. Teacher educators must also have knowledge of teachers’ lives, what it is like to teach children and also the teachers of children; they therefore should have had the experience of being teachers (Bahr and Mellor, 2016). In essence, they should be equipped with teachers’ knowledge and skills, in addition to what they should know and do as teacher educators. It appears that the complexity of teacher educators’ work is usually underestimated and devalued. This is evidenced especially when it is taken for granted that good classroom teachers are suitably qualified to become teacher educators and that they do not require formal training and continued differentiated support as they transition and work as teacher educators in higher education.
Improving the quality of teacher educators’ work
Targeted continuing professional development (CPD) of different types and forms, addressing different purposes according to teacher educators’ needs and those of their institutions, is suggested. I have recommended (Antonio, 2019) a multidimensional approach to teacher educators’ CPD. This approach takes into consideration forms of CPD (informal, formal and communities of practice); types of CPD (site-based, standardised and self-directed); and purposes of CPD (transmissive, malleable and transformative), as proposed by Kennedy (2014). Teacher educators must have a voice in determining the combination and nature of their CPD. Notwithstanding, there needs to be a ‘quality barometer’ which gives various stakeholders the opportunity to assist in guiding their development. Their CPD must have relevance in this 21st century era.
Interventions as a necessity
The idea that teacher educators are self-made, good classroom teachers who can transmit these skills and knowledge into higher education institutions without formal training as teacher educators should be examined decisively. Systems need to be established for teacher educators to be formally trained at levels beyond that of ordinary classroom teachers. However, their CPD should be fostered under the experienced supervision of professors who themselves have been proven to be 21st Century aware in the areas of technological pedagogical content knowledge, as well as other soft skills. No one should be left untouched in our quest to provide quality education for all. We must be serious in simultaneously addressing the delivery of quality education at every level of education systems. Our children deserve quality classroom teachers, and quality teacher educators hold the key.
Desirée Antonio is Education Officer, School Administration within the Ministry of Education, Sports and Creative Industries, Antigua and Barbuda. She has been an educator for nearly 40 years. Her current work involves the supervision of teachers and principals, providing professional development and contributing to policy development. She has a keen interest in Continuing Professional Development as a strategy that can be used to assist in responding to the ever-changing challenging and complex environment in which we work as educators.
As an Adjunct Lecturer, University of the West Indies, Five Islands Campus, Desirée teaches student teachers in a Bachelor of Education programme. Her doctoral thesis explored the continuing professional development of teacher educators who work in the region of the Organisation of Eastern Caribbean States. Her involvement over the past year in many webinars and workshops with SRHE inspired her to develop and host an inaugural virtual research symposium on behalf of the Ministry of Education in May 2021, with the next to be held in 2022.
References
Antonio, D (2019) Continuing Professional Development (CPD) of Teacher Educators (TEs) within the ecological environment of the island territories of the Organisation of Eastern Caribbean States (OECS) PhD thesis submitted in accordance with the requirements of the University of Liverpool
Churchward, P, and Willis, J (2018) ‘The pursuit of teacher quality: identifying some of the multiple discourses of quality that impact the work of teacher educators’ Asia-Pacific Journal of Teacher Education, 47(3): 251–264 https://doi.org/10.1080/1359866X.2018.1555792
Kennedy, A (2014) ‘Understanding continuing professional development: the need for theory to impact on policy and practice’ Professional Development in Education, 40(5), 688–697 https://doi.org/10.1080/19415257.2014.955122
Kosnik, C, Menna, L, Dharamshi, P, Miyata, C, Cleovoulou, Y, and Beck, C (2015) ‘Four spheres of knowledge required: an international study of the professional development of literacy/English teacher educators’ Journal of Education for Teaching, 41(April 2015): 52–77 https://doi.org/10.1080/02607476.2014.992634
Sharma, R (2019) ‘Ensuring quality in Teacher Education’ EPRA International Journal of Multidisciplinary Research (IJMR) 5(10)
Trust is the magic ingredient that allows social life to exist, from the smallest informal group to entire nations. High-trust societies tend to be more efficient, as it can be assumed that people will, by and large, do what they’ve agreed without the need for constant checking. Ipsos-MORI carries out an annual “veracity index” survey in Britain to discover which occupational groups are most trusted: “professors”, which I think we can take to mean university academic staff, score highly (trusted by 83% of the population), just below top-scoring doctors and judges, way above civil servants (60%) – and with government ministers playing in a different league on 16%. So most people, then, seem to trust university staff to do a decent job – much more than they trust ministers. It’s therefore a little strange that over the last 35 years the bitterest struggles between universities and governments have been fought in the “quality wars”, with governments claiming repeatedly that university teachers can’t be trusted to do their jobs without state oversight. Disputes about university expansion and funding come and go, but the quality wars just rumble on. Why?
From the mid-1980s (when “quality” was invented) up to the appearance of the 2011 White Paper, Higher Education: Students at the Heart of the System, quality in higher education was (after a series of changes to structures and methods) regulated by the Quality Assurance Agency, which required universities to show that they operated effective quality management processes. This did not involve the inspection of actual teaching: universities were instead trusted to give an honest, verifiable, account of their own quality processes. Without becoming too dewy-eyed about it, the process came down to one group of professionals asking another group of professionals how they did their jobs. Trust was the basis of it all.
The 2011 White Paper intended to sweep this away, replacing woolly notions of trust-based processes with a bracing market-driven discipline. The government promised to “[put] financial power into the hands of learners [to make] student choice meaningful…[it will] remove regulatory barriers [to new entrants to the sector to] improve student choice…[leading to] higher education institutions concentrating on high-quality teaching” (Executive Summary, paras 6-9). On this model, decisions by individual students would largely determine institutional income from teaching, so producing better-quality courses: trust didn’t matter. Market forces can be seen to drive forward quality in other fields through competition – so why not in universities?
Well, of course, for lots of reasons, as critics of the White Paper were quick to point out, naturally to no avail. But having been told that they were to operate in a marketised environment where the usual market mechanisms would deal with quality (good courses expanding, others shrinking or failing), exactly a decade later universities find themselves being subjected to a bureaucratic (I intend the word in its social scientific sense, not as a lazy insult) quality regime, the very antithesis of a market system.
We see this in the latest offensive in the quality wars, just opened by the OfS with its July 2021 “Consultation on Quality and Standards”. This 110-page second-round consultation document sets out a highly-detailed process for assessing quality and standards: you can almost feel the pain of the drafter of section B1 on providing “a high quality academic experience”. What does that mean? It means, for example, ensuring that each course is “coherent”. So what does “coherent” mean? Well, it means, for example, providing “an appropriate balance between breadth and depth”. So what does…? And so on. This illustrates the difficulty of considering academic quality as an ISO 9001 (remember that?) process with check-lists, when probably every member of a course team will – actually, in a university, should – have different, equally valid, views on what (say) “appropriate breadth and depth” means.
Government approaches to quality and standards in university teaching have, then, over the last 30 or so years, moved from a largely trust-based system, to one supposedly driven by market forces, to a bureaucratic, box-ticking one. In all this time, ministers have failed to give convincing examples of the problems that the ever-changing quality regimes were supposed to deal with. (Degree mills and similar essentially fraudulent operations can be dealt with through normal consumer legislation, given the will to do so. I once interviewed an applicant for one of our courses who had worked in a college I hadn’t heard of: had there been any problems about its academic standards, I asked. “Not really”, she replied brightly, “it was a genuine bogus college”.)
Why, then, do the quality wars continue? – and we can be confident that the current OfS proposals do not signal the end of hostilities. It is hard to see this as anything other than ministerial displacement activity. Sorting out the social care crisis, or knife crime, will take real understanding and the redirection of resources: easier by far to make a fuss about a non-problem and then be seen to act decisively to solve it. And to erode trust in higher education a little more.
Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education, London. His latest paper, ‘The University Couloir: exploring physical and intellectual connectivity’, will appear shortly in Higher Education Policy.
What are the key issues in HE quality and standards, right now? Maintaining quality and standards with the massive transition to remote learning? Dealing with the consequences of the 2020 A-levels shambles? The student experience, now that most learning for most students is remote and off-campus? Student mental health and engagement with their studies and their peers? One or more of these, surely, ought to be our ‘new normal’ concerns.
But not for the government. Minister Michelle Donelan assured us that quality and standards were being constantly monitored – by other people – as in her letter of 2 November to vice-chancellors:
“We have been clear throughout this pandemic that higher education providers must at all times maintain the quality of their tuition. If more teaching is moved online, providers must continue to comply with registration conditions relating to quality and standards. This means ensuring that courses provide a high-quality academic experience, students are supported and achieve good outcomes, and standards are protected. We have worked with the Office for Students who are regularly reviewing online tuition. We also expect students to continue to be supported and achieve good outcomes, and I would like to reiterate that standards must be maintained.”
So student health and the student experience are for the institutions to worry about, and get right, with the Office for Students watching. And higher education won’t need a bailout, unlike most other sectors of the market economy, because with standards being maintained there’s no reason for students not to enrol and pay fees exactly as usual. Institutional autonomy is vital, especially when it comes to apportioning the blame.
For government, the new normal was just the same as the old normal. It wasn’t difficult to read the signs. Ever since David Willetts, ministers had been complaining about low quality courses in universities. But with each successive minister the narrative became increasingly threadbare. David, now Lord, Willetts, at least had a superficially coherent argument: greater competition and informed student choice would drive up quality through competition between institutions for students. It was never convincing, but at least it had an answer to why and how quality and standards might be connected with competition in the HE market. Promoting competition by lowering barriers to entry for new HE providers was not a conspicuous success: some of the new providers proved to be a big problem for quality. Information, advice and guidance were key for improving student choice, so it seemed that the National Student Survey would play a significant part, along with university rankings and league tables. As successive ministers took up the charge the eggs were mostly transferred to the Teaching Excellence Framework basket, with TEF being championed by Jo, now Lord, Johnson. TEF began in 2016 and became a statutory requirement in the Higher Education and Research Act 2017, which also required TEF to be subject to an independent review. From the start TEF had been criticised as not actually being about teaching, or excellence, and the review by Dame Shirley Pearce, previously VC at Loughborough, began in 2018. Her review was completed before the end of 2019, but at the time of writing had still not been published.
However the ‘low quality courses’ narrative has just picked up speed. Admittedly it stuttered a little during the tenure of Chris Skidmore, who was twice briefly the universities minister, before and after Jo Johnson’s equally brief second tenure. The ‘Skidmore test’ suggested that any argument about low quality courses should specify at least one of the culprits, if it was not to be a low quality argument. However this was naturally unpopular with the narrative’s protagonists and Skidmore, having briefly been reinstalled as minister after Jo Johnson’s decision to step down, was replaced by Michelle Donelan, who has remained resolutely on-message, even as any actual evidence of low quality receded ever further from view. She announced in a speech to Universities UK at their September 2020 meeting that the once-praised NSS was now in the firing line: “There is a valid concern from some in the sector that good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace. These concerns have been driven by both the survey’s current structure and its usage in developing sector league tables and rankings.”
UUK decided that they had to do something, so they ‘launched a crackdown’ (if you believe Camilla Turner in The Telegraph on 15 November 2020) by proposing, um, a new charter aimed at ensuring institutions take a “consistent and transparent approach to identifying and improving potentially low value or low quality courses”. It’s doubtful if even UUK believed that would do the trick, and no-one else gave it much credence. But with the National Student Survey and even university league tables now deemed unreliable, and the TEF in deep freeze, the government urgently needed some policy-based evidence. It was time for this endlessly tricky problem to be dumped in the OfS in-tray. Thus it was that the OfS announced on 17 November 2020 that: “The Office for Students is consulting on its approach to regulating quality and standards in higher education. Since 2018, our focus has been on assessing providers seeking registration and we are considering whether and how we should develop our approach now that most providers are registered. This consultation is taking place at an early stage of policy development and we would like to hear your views on our proposals.”
Instant commentators were unimpressed. Were the OfS proposals on quality and standards good for the sector? Johnny Rich thought not, in his well-argued blog for the Engineering Professors’ Council on 23 November 2020, and David Kernohan provided some illustrative but comprehensive number-crunching in his Wonkhe blog on 30 November 2020: “Really, the courses ministers want to get rid of are the ones that make them cross. There’s no metric that is going to be able to find them – if you want to arbitrarily carve up the higher education sector you can’t use “following the science” as a justification.” Liz Morrish nailed it on her Academic Irregularities blog on 1 December 2020.
In the time-honoured way established by HEFCE, the OfS consultation was structured in a way which made it easy to summarise responses numerically, but much less easy to interpret their significance and their arguments. The core of the approach was a matrix of criteria, most of which all universities would expect to meet, but it included some ‘numerical baselines’, especially on something beyond the universities’ control – graduate progression to professional and managerial jobs. It also included a proposed baseline for drop-out rates. The danger of this was that it would point the finger at universities which do the most for disadvantaged groups, but here too government and OfS had a cunning plan. Nick Holland, the OfS Competition and Registration Manager, blogged on 2 December 2020 that the OfS would tackle “pockets of low quality higher education provision”, with the statement that “it is not acceptable for providers to use the proportion of students from disadvantaged backgrounds they have as an excuse for poor outcomes.” At a stroke universities with large proportions of disadvantaged students could either be blamed for high drop-out rates, or, if they reduced drop-out rates, they could be blamed for dropping standards. Lose-lose for the universities concerned, but win-win for the low quality courses narrative. The outrider to the low quality courses narrative was an attack on the 50% participation rate (in which Skidmore was equally culpable), which seemed hard to reconcile with a ‘levelling up’ narrative, but Michelle Donelan did her best with her speech to NEON, of all audiences, calling for a new approach to social mobility, which seemed to add up to levelling up by keeping more people in FE. The shape of the baselines became clearer as OfS published Developing an understanding of projected rates of progression from entry to professional employment: methodology and results on 18 December 2020.
After proper caveats about the experimental nature of the statistics, here came the indicator (and prospective baseline measure): “To derive the projected entry to professional employment measure presented here, the proportion of students projected to obtain a first degree at their original provider (also referred to as the ‘projected completion rate’) is multiplied by the proportion of Graduate Outcomes respondents in professional employment or any type of further study 15 months after completing their course (also referred to as the ‘professional employment or further study rate’).” This presumably met the government’s expectations by baking in all the non-quality-related advantages of selective universities in one number. Wonkhe’s David Kernohan despaired, on 18 December 2020, as the proposals deviated even further from anything that made sense: “Deep within the heart of the OfS data cube, a new plan is born. Trouble is, it isn’t very good.”
Is it too much to hope that OfS and government might actually look at the academic research on quality and standards in HE? Well, yes, but there is rather a lot of it. Quality in Higher Education is now in its 26th year, and of course there is so much more. Even further back, in 1986 the SRHE Annual Conference theme was Standards and criteria in higher education, with an associated book edited by one of the founders of SRHE, Graeme Moodie (York). (This was the ‘Precedings’ – at that time the Society’s practice was to commission an edited volume in advance of the annual conference.) SRHE and the Carnegie Foundation subsequently sponsored a series of Anglo-American seminars on ‘Questions of Quality’. One of the seminar participants was SRHE member Tessa, now Baroness, Blackstone, who would later become the Minister for Further and Higher Education, and one of the visiting speakers for the Princeton seminar was Secretary of State for Education Kenneth Baker. At that time the Council for National Academic Awards was still functioning as the validating agency, assuring quality, for about half of the HE sector, with staff including such SRHE notables as Ron Barnett, John Brennan and Heather Eggins. When it was founded SRHE aimed to bring research and policy together; they have now drifted further apart. Less attention to peer review, but more ministers becoming peers.
Rob Cuthbert is Emeritus Professor of Higher Education Management, University of the West of England and Joint Managing Partner, Practical Academics
January 2020 marks the second year of the Office for Students’ (OfS) operations. The OfS represents the latest organisational iteration of state direction of (once) British and (now) English higher education, stretching back to the creation of the University Grants Committee (UGC) in 1919. We therefore have a century’s-worth of experience to draw on: what lessons might there be?
There are, I think, two ways to consider the cavalcade of agencies that have passed through the British higher education landscape since 1919. One is to see in it how higher education has been viewed at various points over the last century. The other is to see it as a set of special cases of the methods used to control public bodies generally. I think that both perspectives can help us to understand what has happened and why.
In the post-war decades, up to the later 1970s, central planning was almost unquestioningly accepted across the political spectrum in Britain as the correct way to direct nationalised industries such as electricity and railways, but also to plan the economy as a whole, as the National Plan of 1965 showed. In higher education, broadly similar methods – predict and provide – were operated by the UGC for universities, and by a partnership of central government and local authorities for the polytechnics and other colleges. A key feature of this mode of regulation was expert judgement, largely insulated from political pressures. As Michael Shattock and Aniko Horvath observe in The Governance of British Higher Education (Bloomsbury, 2020), “In the 1950s it had been the UGC, not officials in the ministry, who initiated policy discussions about the forecast rate of student number expansion and its financial implications, and it was the UGC, not a minister, that proposed founding the 1960s ‘New Universities’” (p18).
Higher education, then, was viewed as a collective national resource, to be largely centrally planned and funded, in a similar way to nationalised industries.
The rejection of central planning methods by the Thatcher governments (1979-1990) affected the control of higher education as it did other areas of national life through the ‘privatisation’ of public enterprises. Instead, resource allocation decisions were to be made by markets, or where normal markets were absent, as with higher education, by using ‘quasi-markets’ to allocate public funds. Accordingly the UGC was abolished by legislation in 1988, and (eventually) national funding bodies were created, the English version being the Higher Education Funding Council for England (HEFCE). Whereas the UGC had a key task of preserving academic standards, by maintaining the ‘unit of resource’ at what was considered to be an adequate level of funding per student (as a proxy for academic standards), HEFCE’s new task, little-noted at the time, became the polar opposite: it was required to drive down unit costs per student, thereby supposedly forcing universities to make the efficiency gains to be expected of normal market forces.
The market, then, had supplanted central planning as an organising principle in British public life (perhaps the lasting legacy of the Thatcher era); and universities discovered that the seemingly technical changes to their funding arrangements had profoundly altered their internal economies.
HEFCE’s main task, however, as with the UGC before it, was to allocate public money to universities, though now applying a different methodology. The next big shift in English higher education policy, under the 2010 coalition government, changed the nature of central direction radically. Under the full-cost fees policy, universities now typically received most of their income from student loans, making HEFCE’s funding role largely redundant. So, after the usual lag between policy change and institutional restructuring, a new agency was created in 2018, the Office for Students (OfS), modelled on the lines of industry regulators for privatised utilities such as energy and telecoms.
In contrast to its predecessor agencies, OfS is neither a planning nor a funding body (except for some special cases). Instead, as with other industry regulators, it assumes that a market exists, but that its imperfect nature (information asymmetry being a particular concern) calls for detailed oversight and possibly intervention, in order to ‘mitigate the risk’ of abuses by providers (universities) which could damage the interests of consumers (students). It has no interest in maintaining a particular pattern of institutional provision, though it does require that external quality assurance bodies validate academic standards in the institutions it registers.
As with utilities, we have seen a shift in Britain, in stages, from central planning and funding, to a fragmented but regulated provision. The underlying assumption is that market forces will have beneficial results, subject to the regulator preventing abuses and ensuring that minimal standards are maintained. This approach is now so widespread in Britain that the government has produced a code to regulate the regulators (presumably anticipating the question, Quis custodiet ipsos custodes?).
Examining the changing pattern of state direction of higher education in England in the post-1945 period, then, we see the demise of central planning and its replacement, first by quasi-markets, and then by as close to the real thing as we are likely to get. Ideas of central funding to support planning goals have been replaced by reliance on a market with government-created consumers, overseen by a regulator, intervening in the detail (see OfS’s long list of ‘reportable events’) of institutional management.
Despite every effort by governments to create a working higher education marketplace, the core features of higher education get in the way of it being a consumer good (for the many reasons that are repeatedly pointed out to and repeatedly ignored by ministers). Central planning has gone, but its replacement depends on central funding and central intervention. I don’t think that we’ve seen the last of formal central planning in our sector.
SRHE member Paul Temple is Honorary Associate Professor, Centre for Higher Education Studies, UCL Institute of Education, University College London. See his latest paper ‘University spaces: Creating cité and place’, London Review of Education, 17 (2): 223–235 at https://doi.org/10.18546
“…problems arise when language goes on holiday. And here we may indeed fancy naming to be some remarkable act of mind, as it were a baptism of an object.”
Ludwig Wittgenstein, Philosophical Investigations, para 38 (original emphasis)
The paradigm shift of students to customers at the heart of higher education has changed strategies, psychological self-images, business models and much else. But are the claims for and against students as customers (SAC), and the related research, as useful, insightful and angst-ridden as we may at first think? There are alarms about changing student behaviours, approaches to learning and the relationship with academic staff, but does the naming ‘customers’ reveal what were already underlying, long-standing problems? Does the concentrated focus on SAC obscure rather than reveal?
One aspect of SAC is the observation that academic performance declines, and learning becomes more surface and instrumental (Bunce, 2017). Another is that SAC inclines students to be narcissistic and aggressive, with HEI management pandering to the demands of students and to their feedback on the NSS, alongside other strategies – such as creating iconic campus buildings – to maintain or improve league table position (Nixon, 2018).
This raises some methodological questions on (a) the research on academic performance and the degree of narcissism/aggression prior to SAC (ie around 1997, with the Dearing Report); (b) the scope and range of the research, given the scale of student numbers, participation rates, the variety of student motivations, the nature of disciplines and their own learning strategies, and the hierarchy of institutions; and (c) the combination of (a) and (b) in the further question of whether SAC changed the outlook of students to their education – or whether we are simply paying more attention and making different interpretations.
Some argue that the mass system in some way created the marketisation of HE and the SAC, with all their attendant problems of changing the pedagogic relationship and cognitive approaches. Given Martin Trow’s definitions of elite, mass and universal systems of HE*, the UK achieved a mass system by the late 1980s to early 1990s with the rapid expansion of the polytechnics; universities were slower to expand student numbers. This expansion came before the introduction of the £1,000 fees by David Blunkett (Secretary of State for Education in the new Blair government) immediately after the Dearing Report, and before the £3,000 top-up fees that followed. It was after the 1997 election that the aspiration became a universal HE system with a 50% participation rate.
If a mass system of HE came about (in a ‘fit of forgetfulness’) by 1991, when did marketisation begin? Marketisation may be a name we give to a practice or context which had existed previously but was tacit, and culturally and historically deeper, hidden from view. The unnamed hierarchy of institutions – Oxbridge, Russell Group, polytechnics, HE colleges, FE colleges – had powerful cultural and socio-political foundations and was a market of sorts (high- to low-value goods, access limited by social/cultural capital and price, etc). That hierarchy was not, however, necessarily top-down: the social benefit delivered by the ‘lower orders’ in that hierarchy would be significant in widening participation. The ‘higher order’ existed (and exists) in an ossified form. And as entry was restricted, competition within the sector did not exist, or did not present existential threats. Such is the longue durée when trying to analyse marketisation and the SAC.
The focus on marketisation should help us realise that over the long term the unit of resource was drastically reduced; state funding was slowly and then rapidly withdrawn, to the point where the level of student enrolment was critical to long-term strategy. That meant not maintaining but increasing student numbers while the potential pool of students fluctuated – with the present demographic trough ending in 2021 or 2022. Marketisation can thus be separated to some extent from the cognitive dissonance or other anxieties of the SAC. HEIs (with exceptions in the long-established hierarchy) were driven by the external forces of the funding regime to develop marketing strategies, branding and gamed feedback systems in response to the competition for students, and to create interest groups – Alliance, Modern, et al. The enrolled students were not the customers in this marketisation but the product or outcome of successful management. Students change into customers only later, when the focus shifts to results, employment and further study rates. Such is the split personality of institutional management here.
Research on SAC in STEM courses has noted an inclination to surface learning and the instrumentalism of ‘getting a good grade in order to get a good job’, but this prompts further questions. I am not sure that this is an increased inclination to surface learning, nor whether surface and deep are uncritical norms we can readily employ. The HEAC definition of deep learning has an element of ‘employability’ in the application of knowledge across differing contexts and disciplines (Howie and Bagnall, 2012). A student in 2019 may face the imperative to get a ‘degree level’ job in order to pay back student loans. This is rational given the student loans regime; and with widening participation, the imperative is not universally felt, given the differing socio-economic backgrounds of students.
(Note that the current loan system is highly regressive as a form of ‘graduate tax’.)
And were STEM students more inclined toward deep or surface learning before they became SAC? Teaching and assessment in STEM may have been poor and may have encouraged surface-level learning (eg through weekly phase tests which were tardily assessed).
What is deep learning in civil engineering when faced with stress testing concrete girders, or in solving quaternion equations in mathematics: is much of STEM actually knowing and processing algorithms? How is such learnable content in STEM equivalent in some cognitive way to the deep learning in modern languages, history, psychology et al? This is not to suggest a hierarchy of disciplines but differences, deep differences, between rules-based disciplines and the humanities.
Learning is complex and individualised, and responsive to, without being entirely determined by, the curriculum and the forms of its delivery. In the research on SAC the assumptions are that teaching and assessment delivery is both relatively unproblematic and designed to encourage deep, non-instrumental learning. Expectations of curriculum delivery and assessment will vary amongst students depending on personal background of schooling and parents, the discipline and personal motivations, and those expectations will often be unrealistic. Consider why they are unrealistic – it is more than the narcissism of being a customer. (There is a very wide range of varieties of customer: as a customer of Network Rail I am more a supplicant than a narcissist.)
The alarm over the changes (?) to students’ view of their learning as SAC in STEM should be put in the context of the previously high drop-out rate of STEM students (relatively higher than non-STEM), which could reach 30% of a cohort. The causes of drop-out were thoroughly examined by Mantz Yorke (Yorke and Longden, 2004), but as regards the SAC issue here, STEM drop-outs were explained by tutors as a lack of the right mathematical preparation. There is comparatively little research on the motivations of students entering STEM courses before they became SAC; such research is not long-term or longitudinal. However, research on the typology of students with differing motivations for learning (the academic, the social, the questioning student etc), ranged across all courses, does exist (a 20-year survey: Beaty, Gibbs and Morgan, 2005). Is it possible that after widening participation to the point of a universal system, motivations towards the instrumental or utilitarian will become more prominent? And is there an implication that an elite HE system pre-SAC was less instrumentalist, less surface learning? The creation of PPE (first at Oxford in 1921, then spreading across the sector) was an attempt to produce a mandarin class, where career ambition was designed into the academic disciplines. That is, ‘to get a good job’ applies here too, but it will be expressed in different, indirect and elevated ways of public service.**
There are some anachronisms in the research on SAC. The acceptance of SAC by management – producing student charters and providing students places on boards, committees and senior management meetings – is not a direct result of students or management considering students as customers. Indeed, it predates SAC by many years and has its origins in the 1960s and 70s.
I am unlikely to get onto the board of Morrisons, but I could for the Co-op – a discussion point on partnerships, co-producers, and membership of a community of learners. The struggle by students to get representation in management took fifty years, from the Wilson government Blue Paper Student Protest (1970) to today. It may have been a concession, but student representation changed the nature of HEIs in the process, prior to SAC. Student charters appear to be mostly a coherent, user-friendly reduction of lengthy academic and other regulations that no party can comprehend without extensive lawyerly study. A number of HEIs produced charters before the SAC era (late 1990s). And iconic university buildings attracted the architectural profession long before SAC – Birmingham’s aspiration to be an independent city state, with its Venetian architecture recalling St Mark’s Square under the supervision of Joseph Chamberlain (1890s), or Jim Stirling’s post-modern Engineering faculty building at Leicester (1963), etc (Cannadine, 2002).
Students have complex legal identities and are a complex and often fissiparous body. They are customers of catering; they are members of a guild or union, learners, activists and campaigners, clients, tenants, volunteers, sometimes disciplined as the accused, or the appellant; they adopt and create new identities psychologically, culturally and sexually. The language of students as customers creates a language game that excludes other concerns: the withdrawal of state funding, the creation of an academic precariat, the purpose of HE for learning and skills supply, an alienation from community through the persuasive self-image of the atomised customer, how deep learning is a creature of disciplines and the changing job market, and how student-academic relations were problematic and have now become formalised ‘complaints’. Students are not the ‘other’ and they are much more than customers.
Phil Pilkington is Chair of Middlesex University Students’ Union Board of Trustees, a former CEO of Coventry University Students’ Union, an Honorary Teaching Fellow of Coventry University and a contributor to WonkHE.
*Martin Trow defined elite, mass and universal systems of HE by participation rates of 10-20%, 20-30% and 40-50% respectively.
** Trevor Pateman, The Poverty of PPE, Oxford, 1968: a pamphlet criticising the course, written by a graduate; it is acknowledged that the curriculum, ‘designed to run the Raj in 1936’, has changed little since that critique. This document is a fragment of another history of higher education worthy of recovery – of complaint and dissatisfaction with teaching – and there were others, who developed the ‘alternative prospectus’ movement in the 1970s and 80s.
References
Beaty L, Gibbs G and Morgan A (2005) ‘Learning orientations and study contracts’, in Marton F, Hounsell D and Entwistle N (eds) The Experience of Learning: Implications for teaching and studying in higher education, 3rd (Internet) edition, Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment
Bunce L (2017) ‘The student-as-consumer approach in HE and its effects on academic performance’, Studies in Higher Education 42(11): 1958-1978
Cannadine D (2002) ‘The Chamberlain Tradition’, in In Churchill’s Shadow, Oxford: Oxford University Press; his biographical sketch of Joe Chamberlain shows his vision of Birmingham as an alternative power base to London
Howie P and Bagnall R (2012) ‘A critique of the deep and surface learning model’, Teaching in Higher Education 18(4); they state that the distinction between forms of learning suffers from “imprecise conceptualisation, ambiguous language, circularity and a lack of definition…”
Nixon E, Scullion R and Hearn R (2018) ‘Her majesty the student: marketised higher education and the narcissistic (dis)satisfaction of the student consumer’, Studies in Higher Education 43(6): 927-943
Yorke M and Longden B (2004) Retention and Student Success in Higher Education, Maidenhead: SRHE/Open University Press
by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans
Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, Times Higher Education trumpeted that ‘Cambridge looks to crack measurement of ‘learning gain’’; however, research-informed policy making is a long and winding road.
Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?
The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we do undoubtedly need a better measure than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction in the weighting of the NSS to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement. The evaluation report argues that:
“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).
Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.