In search of the perfect blend – debunking some myths

by Ling Xiao and Lucy Gill-Simmen

A blended learning approach has been adopted by most UK universities in response to the Covid pandemic. Institutions and higher education educators have committed fully to this approach and invested enormous resources in delivering it. This has helped to maintain a high-quality education and, to some extent, to mitigate the potentially dire impact of the pandemic on students’ learning experience. It has no doubt also accelerated a reshaping of pedagogic approaches, facilitating deep learning, autonomy, personalised learning and more. With the rapid pace of the UK’s vaccine rollout, and the semi-promise by the UK Government that we’ll be back to some kind of normal by the end of June, there is hope for a possible return to campus in September 2021. This therefore marks a time to reflect on what we have learned from the blended learning approach and to work out what to take forward in designing teaching and learning post-pandemic, be it hybrid, hyflex or both.

The Management Education Research Group (MERG) in the School of Business and Management at Royal Holloway, University of London recently held a symposium on ‘Reflecting on Blended Learning, What’s Next?’. It showcased blended learning examples from various universities across the UK and was followed by a panel discussion where we posed the question: what worked and what didn’t work? We found that some of our previous assumptions were brought into question and a number of myths were debunked.

Myth 1: Pre-recorded videos should be formal and flawless

Overnight, and without any training, educators took on the roles of film creator, film editor and videographer. Spending hours, days, weeks and even months developing lecture recordings from the confines of our home working spaces, we were stopping, starting, re-starting and editing out the slightest imperfection. It has to be perfect, right? Not so fast.

For many institutions, recorded video is the primary delivery vehicle for blended learning content, with academics pre-recording short presentations, lectures and informal videos to complement text-based information and communication. Many of us postulated that a formal and meticulous delivery style is required for pre-recorded videos, both to maintain high-quality educational materials and to ensure students perceive us as professionals. Academics’ personal experiences, however, suggest it is vital to keep the human element, as students enjoy, and engage better with, a personalised delivery. A personalised style helps to build relationships with students, which in turn provide the foundations for learning. Mayer (2009) describes this as the personalisation principle of learning through video and recommends a conversational style over a formal style for learning. This also resonates with recent insights from Harvard Business School Professor Francesca Gino, who reflects in her webinar on the power of teaching with vulnerability during Covid-19. She explains the importance of being open, honest and transparent with students, and of sharing one’s own human side, in order to strengthen the educator-learner bond.

Myth 2: Students enjoy learning from the comfort of their homes

Blended learning empowers students to become autonomous learners, since they can engage with their courses when real-time contact with lecturers is not possible. However, such autonomy isn’t all it’s cracked up to be, and turns out to be a lonely road for many students. Instead of relishing staying at home and learning when they want, some students declare they miss the structure, the sense of community and the feeling of belonging they associate with attending university in person.

Universities are more than places for learning; they serve as the centre of their students’ communities. Students learn not only directly from their education but also, just as much, from interaction and collaboration with lecturers and fellow students. It emerged in conversation between students and faculty that students felt it generally took longer to establish a sense of community in an online class than in a traditional face-to-face classroom, but that it could be achieved. So it is up to us as educators to foster a sense of community amongst online learners.

Central to a learning community is the concept of cooperative learning, which has been shown to promote productivity, expose students to interdisciplinary teams, foster idea generation and promote social interaction (Machemer, 2007). One technique is to introduce collaborative learning opportunities that reach beyond online group work and assessment – which in itself may prove problematic for learners. Instead, educators should look to develop co-creation projects such as wikis or blogs where students come together to co-create content. Social annotation platforms such as Google Docs and Padlet enable students to share knowledge and develop their understanding of learning objects by collaborating on notes, commenting on specific parts of materials and so on (Novak et al, 2012; Miller et al, 2018). Padlet, for example, has proved particularly popular with students for collaborative learning, given its ease of use.

Myth 3: It makes sense to measure student engagement merely by participation metrics

After months of instructional design and preparing what we believe is the perfect learning journey, we tend to expect students to learn and engage in the way that we as educators perceive to be optimal for fulfilling the learning outcomes.

We all know that students learn and engage in many different ways, but we often find ourselves trawling the data and metrics to see whether students watched the videos, engaged with the readings we provided, posted on the fora we clearly labelled and participated in the mini quizzes and reflection exercises we created. However, as our hearts sink at what appears at times to be a relatively low uptake, we jump to the conclusion that students aren’t engaging. Here’s the thing: they are; we just don’t see it. Engagement as a construct is far more complex and multi-faceted, and cannot necessarily be measured using the report logs on the VLE.

Student engagement is often labelled the “holy grail of learning” (Sinatra, Heddy and Lombardi, 2015: 1) since it correlates strongly with educational outcomes, including academic achievement and satisfaction. This can lead to a degree of frustration on the part of educators when engagement appears low. However, engagement comes in many forms, often forms that are not directly visible or measurable. For example, cognitive, behavioural and emotional engagement all have very different indicators which are not immediately apparent. Hence new ways of evaluating student engagement in the blended learning environment are needed. Halverson and Graham (2019) propose a conceptual framework for engagement that includes cognitive and emotional indicators, offering examples of research measuring these indicators in technology-mediated learning contexts.

Myth 4: Technology is the make or break for blended learning

The more learning technologies we can add to our learning design, the better, right? Wrong. Some students declared that the VLE had too much going on; they couldn’t keep up with all the apps and technologies they were required to work with to achieve their learning.

Although technology clearly plays a key role in the provision of education (Gibson, 2001; Watson, 2001; Gordon, 2014), it is widely acknowledged that technology should not determine but instead complement theories and practices of teaching. The onset of Covid-19 shifted our focus to technology rather than pedagogy. For example, educators felt an immediate need for breakout room functionality: although breakout rooms can be valuable for discussion, this is not necessarily the case for disciplines such as accounting, which requires students continuously to apply techniques in order to excel at applied tasks. Pedagogy should determine technology. The chosen technology must serve a purpose and facilitate the aim of the pedagogy; it should not be added as bells and whistles to make the learning environment appear more engaging. In our recent research, we provide empirical evidence for the effective pedagogical use of Padlet to support learning and engagement (Gill-Simmen, 2021). Technology has an impact on pedagogy but should not be the driver in a blended or hybrid learning environment. Learning technologies are only applicable and of value when the right content is presented in the right format at the right time.

In summary, we learned these lessons for our future approach to hybrid learning:

  1. Aim for ‘human’ not perfection in instructional design
  2. Students don’t want to learn alone – create opportunities for collaborative learning
  3. Student engagement may not always be measurable – consider tools for assessing emotional, cognitive and behavioural engagement
  4. Technology should support pedagogy, not vice versa – implement only those technologies which facilitate student learning

SRHE member Dr Ling Xiao is the Director of MERG and a Senior Lecturer in Financial Management at Royal Holloway, University of London.  Follow Ling via @DrLingXiao on Twitter.

SRHE member Dr Lucy Gill-Simmen is a Senior Lecturer in Marketing at Royal Holloway, University of London and Program Director for Kaplan, Singapore. Follow Lucy via @lgsimmen on Twitter.

References

Gill-Simmen, L. (2021). Using ‘Padlet’ in Instructional Design to Promote Cognitive Engagement: A Case Study of UG Marketing Students. Journal of Learning Development in Higher Education (in press).

Machemer, P.L. (2007). Student perceptions of active learning in a large cross-disciplinary classroom. Active Learning in Higher Education, 8(1): 9-29.

Mayer, R.E. (2009). Multimedia Learning (2nd edn). Cambridge: Cambridge University Press.



An SRHE playlist

by Leo Goedegebuure

There are many ways of communicating. Text has always been our main medium, but the last year has clearly shown that there are other ways. One of the most popular articles in the recent special issue of Studies in Higher Education on the impact of the pandemic was Amy Metcalfe’s photo-based essay. We had a massive SRHE webinar on the contributions, with a truly global audience. Taking David Bowie’s Sound and Vision to the extreme, we have done the Vision but we haven’t done the Sound.

2021 will be another special year. By the end of the year we will still not be able to come together face-to-face at the 2021 SRHE conference, although an exciting alternative kind of conference is being planned. It will be good to have a decent soundtrack for the event. So we thought we might kick this off with a bit of advance planning – and activity. Last-minute work can be a bit tedious and stressful. So we propose a two-pronged approach. We’ll start by inviting this year’s contributors to the Studies in Higher Education special issue to submit their 5-song playlist in addition to their accepted and online-published article. And we invite all readers of this blog to do the same. What we expect as the outcome of this fun and silly project is a reflection of the diversity of our community in music.

So let me kick this off. The basic model is: song and a brief one-sentence reason why, plus Spotify link.  Here we go:

1 Amy Macdonald – Let’s Start a Band                                   

The obvious opener for a project like this

2 David Bowie – Life on Mars                                                     

The amazing achievement of the Mars Perseverance Rover so far and a tribute to one who left too early

3 Bruce Springsteen – The Ghost of Tom Joad                     

Too many ghosts of 2020 and a brilliant contribution from Tom Morello

4 REM – Nightswimming                                                              

Quietly avoiding restrictions without creating chaos and such a great song

5 Vreemde Kostgangers – Touwtje uit de Deur                   

My Dutch heritage; the literal translation is “A Rope from the Letterbox” reflecting on a time when you could just pull a little rope to enter your neighbour’s house

Amy Metcalfe has already chipped in with her suggestions, which have been included in the playlist:

1 Snow Patrol – Life on Earth

“This is something else.”

2 Foster the People – Imagination

“We can’t change the things we can’t control.”

3 Haelos – Hold On

“Hold on.”

4 The Weeknd – Blinding Lights

“I’ve been on my own for long enough.”

5 Lastlings – Out of Touch

“Don’t want this to fall apart; Is this what you really need?”

There will be more to follow from contributors to the Studies in Higher Education special issue, but everyone is invited to send in their own 5-track playlist to rob.cuthbert@uwe.ac.uk and leo.g@unimelb.edu.au. We will provide updates via the blog at regular intervals, and aim to compile a comprehensive playlist later in the year – which may or may not become lockdown listening, depending on where in the world you are and how your country is faring in the pandemic. We hope you enjoy it.

Leo Goedegebuure is Editor-in-Chief of Studies in Higher Education, Professorial Fellow in the Melbourne Centre for the Study of Higher Education and the Melbourne Sustainable Society Institute, and Honorary Professor in the School of Global, Urban and Social Studies, College of Design and Social Context, RMIT University.



The Social Mobility Index (SMI): A welcome and invitation to debate from the Exeter Centre for Social Mobility

by Anna Mountford-Zimdars and Pallavi Banerjee

There is a new English league table on the block! Welcome! The exciting focus of this new ranking concerns social mobility – the clue is in the name and it is called the Social Mobility Index (SMI). Focusing on social mobility differentiates the SMI from other league tables, which often include dimensions such as prestige, research income, staff qualifications, student satisfaction, and employment outcomes.

The SMI is specifically about an institution’s contribution to supporting disadvantaged learners. It uses the OfS model of access to, progression within and outcomes after higher education. Leaning on a methodology developed for an SMI in the US, the English version contains three dimensions: (1) Access, drawing on the Index of Multiple Deprivation (IMD); (2) Continuation, using progression data into the second year, again drawing on the IMD; and (3) Salaries (adjusted for local purchasing power), using Longitudinal Education Outcomes (LEO) salary data collected one year after graduation.

The SMI report thoughtfully details the rationale for the measures used and is humble in acknowledging that other, more useful measures might be developed. But do the authors’ reflections go far enough? Take the graduate outcome LEO data, for example. These capture salaries 15 months into employment – too early for an outcome measure. They are also not broken down by IMD; there are heaps of missing data in LEO; and those who continue into further study are not captured. Low-IMD students may or may not be earning the same sort of salaries as their more advantaged peers. The regional weightings seem insufficient in light of the dominance of high-salary regions in both the US and the English SMI. These shortcomings make the measure a highly problematic one to use, though the authors are right to endeavour to capture some outcome of higher education.

We would like a bolder SMI. Social mobility is not only about income but about opportunities and choice, and about enabling meaningful contribution to society. This was recognised in Bowen and Bok’s (2000) evaluation of affirmative action, which measured ‘impact’ not only as income but also as civic contribution, health and well-being. Armstrong and Hamilton (2015) show the importance of friendship and marriage formation as a result of shared higher education experiences. The global pandemic has shown that many of the most useful jobs we rely on, such as early years educators, are disgracefully underpaid. The present SMI’s reduction of ‘success’ to a poor measure of economic outcomes needs redressing in light of how far the academic debate has advanced.

Social mobility is also about more than class: it is about equal opportunities for first-generation students, disabled students, men and women, refugees, asylum seekers and global majority ethnic groups, as well as local, regional, national and international contributions. And it is about thinking not only about undergraduate access, progress and success but about postgraduates, staff, and the research and teaching at universities.

A really surprising absence in the introduction of this new SMI is any reference to the Times Higher Education Impact Rankings. These are the only global performance tables that assess universities against the United Nations’ Sustainable Development Goals. First published in 2019, this ranking includes a domain on reducing inequality. The metrics used by the THE ranking are: research on reducing inequalities (27%); first-generation students (23.1%); students from developing countries (15.4%); students and staff with disabilities (11.4%); and institutional measures against discrimination, including outreach and admission of disadvantaged groups (23.1%). The THE ranking celebrates that institutions also contribute to social mobility through what they research and teach. This dimension should be borrowed for an English SMI in light of the importance attached to research-led, research-informed and research-evidenced practices in the higher education sector.

The THE ranking’s use of individual-level measures – students whose parents have no background of higher education (first-generation students) and those with disabilities, including staff – has merit. Yes, individual-level measures are often challenging to ‘operationalise’. But this shouldn’t prevent us from arguing that they are the right measures to aspire to using. The use of first-generation students also highlights that the UK debate, which focuses on area-level measures of disadvantage such as the IMD or POLAR, differs from the international framing, which measures the educational background of the students themselves.

The inclusion of staff in the THE ranking is an interesting domain that merits consideration. Data on the gender pay gap, for example, are easily obtainable in England and would indicate something about organisational culture. Athena Swan awards, the Race Equality Charter and other similar awards, which are indicators of diversity and equality in an institution, could be considered as organisational commitments to the agenda and are straightforward to operationalise.

We warmly welcome the SMI and congratulate Professor David Phoenix for putting the debate centre-stage; his briefing is already stimulating discussion, including Professor Peter Scott’s thoughtful contribution. It is important to think about social mobility and added value as part of the league table world. It is in the nature of league tables that they oversimplify the work done by universities and staff and the achievements of their students.

There is real potential in the idea of an SMI and we hope that our contribution will bring some of these dimensions into the public debate about how we construct the index. This would create an SMI that celebrates good practice by institutions in the space of social mobility and encourages more of it, ultimately making higher education more inclusive and diverse while supporting success for all.

SRHE member Anna Mountford-Zimdars is Professor of Social Mobility and Academic Director of the Centre for Social Mobility at the University of Exeter. Pallavi Amitava Banerjee is a Fellow of the Higher Education Academy. She is an SRHE member and Senior Lecturer in Education in the Graduate School of Education at the University of Exeter.


Some different lessons to learn from the 2020 exams fiasco

by Rob Cuthbert

The problems with the algorithm used for school examinations in 2020 have been exhaustively analysed, before, during and after the event. The Royal Statistical Society (RSS) called for a review, after its warnings and offers of help in 2020 had been ignored or dismissed. Now the Office for Statistics Regulation (OSR) has produced a detailed review of the problems, Learning lessons from the approach to developing models for awarding grades in the UK in 2020. But the OSR report only tells part of the story; there are larger lessons to learn.

The OSR report properly addresses its limited terms of reference in a diplomatic and restrained way. It is far from an absolution – even in its own terms it is at times politely damning – but in any case it is not a comprehensive review of the lessons which should be learned; it is a review of the lessons for statisticians to learn about how other people use statistics. Statistical models are tools, not substitutes for competent management, administration and governance. The report makes many valid points about how the statistical tools were used, and how their use could have been improved, but the key issue is the meta-perspective in which no-one was addressing the big picture sufficiently. An obsession with consistency of ‘standards’ obscured the need to consider the wider human and political implications of the approach. In particular, it is bewildering that no-one in the hierarchy of control was paying sufficient attention to two key differences. First, national ‘standardisation’ or moderation had been replaced by a system which pitted individual students against their classmates, subject by subject and school by school. Second, 2020 students were condemned to live within the bounds not of the nation’s, but their school’s, historical achievements. The problem was not statistical, nor anything to do with the algorithm; the problem was the way the problem itself had been framed – as many commentators pointed out from an early stage. The OSR report (at 3.4.1.1) said:

“In our view there was strong collaboration between the qualification regulators and ministers at the start of the process. It is less clear to us whether there was sufficient engagement with the policy officials to ensure that they fully understood the limitations, impacts, risks and potential unintended consequences of the use of the models prior to results being published. In addition, we believe that, the qualification regulators could have made greater use of  opportunities for independent challenge to the overall approach to ensure it met the need and this may have helped secure public confidence.”

To put it another way: the initial announcement by the Secretary of State was reasonable and welcome. But when Ofqual proposed that ranking students and tying each school’s results to its past record was the only way to do what the SoS wanted, no-one in authority was willing either to change the approach, or to make its implications transparent enough for the public to lose confidence at the start – in time for government and Ofqual to change course.

The OSR report repeatedly emphasises that the key problem was a lack of public confidence, concluding that:

“… the fact that the differing approaches led to the same overall outcome in the four countries implies to us that there were inherent challenges in the task; and these challenges meant that it would have been very difficult to deliver exam grades in a way that commanded complete public confidence in the summer of 2020 …”

“Very difficult”, but, as Select Committee chair Robert Halfon said in November 2020, things could have been much better:

“the “fallout and unfairness” from the cancellation of exams will “have an ongoing impact on the lives of thousands of families”. … But such harm could have been avoided had Ofqual not buried its head in the sand and ignored repeated warnings, including from our Committee, about the flaws in the system for awarding grades.”

As the 2021 assessment cycle comes closer, attention has shifted to this year’s approach to grading, when once again exams will not feature except as a partial and optional extra. When the interim Head of Ofqual, Dame Glenys Stacey, appeared before the Education Select Committee, Schools Week drew some lessons which remain pertinent, but there is more to say. An analysis of 2021 by George Constantinides, a professor of digital computation at Imperial College whose 2020 observations were forensically accurate, has been widely circulated and equally widely endorsed. He concluded in his 26 February 2021 blog that:

“the initial proposals were complex and ill-defined … The announcements this week from the Secretary of State and Ofqual have not helped allay my fears. … Overall, I am concerned that the proposed process is complex and ill-defined. There is scope to produce considerable workload for the education sector while still delivering a lack of comparability between centres/schools.”

The DfE statement on 25 February kicks most of the trickiest problems down the road, and into the hands of examination boards, schools and teachers:

“Exam boards will publish requirements for schools’ and colleges’ quality assurance processes. … The head teacher or principal will submit a declaration to the exam board confirming they have met the requirements for quality assurance. … exam boards will decide whether the grades determined by the centre following quality assurance are a reasonable exercise of academic judgement of the students’ demonstrated performance. …”

Remember in this context that Ofqual acknowledges “it is possible for two examiners to give different but appropriate marks to the same answer”. Independent analyst Dennis Sherwood and others have argued for alternative approaches which would be more reliable, but there is no sign of change.

Two scenarios suggest themselves. In one, where this year’s results are indeed pegged to the history of previous years, school by school, we face the prospect of overwhelming numbers of student appeals, almost all of which will fail, leading no doubt to another failure of public confidence in the system. The OSR report (3.4.2.3) notes that:

“Ofqual told us that allowing appeals on the basis of the standardisation model would have been inconsistent with government policy which directed them to “develop such an appeal process, focused on whether the process used the right data and was correctly applied”.”

Government policy for 2021 seems not to be significantly different:

“Exam boards will not re-mark the student’s evidence or give an alternative grade. Grades would only be changed by the board if they are not satisfied with the outcome of an investigation or malpractice is found. … If the exam board finds the grade is not reasonable, they will determine the alternative grade and inform the centre. … Appeals are not likely to lead to adjustments in grades where the original grade is a reasonable exercise of academic judgement supported by the evidence. Grades can go up or down as the result of an appeal.” (emphasis added)

There is one crucial exception: in 2021 every individual student can appeal. Government no doubt hopes that this year the blame will all be heaped on teachers, schools and exam boards.

The second scenario seems more likely and is already widely expected, with grade inflation outstripping the 2020 outcome. There will be a check, says DfE, “if a school or college’s results are out of line with expectations based on past performance”, but it seems doubtful whether that will be enough to hold the line. The 2021 approach was only published long after schools had supplied predicted A-level grades to UCAS for university admission. Until now there has been a stable relationship between predicted grades and examination outcomes, as Mark Corver and others have shown. Predictions exceed actual grades awarded by consistent margins; this year it will be tempting for schools simply to replicate their predictions in the grades they award. Indeed, it might be difficult for schools not to do so, without leaving their assessments subject to appeal. In the circumstances, the comments of interim Ofqual chief Simon Lebus that he does not expect “huge amounts” of grade inflation seem optimistic. But it might be prejudicial to call this ‘grade inflation’, with its pejorative overtones. Perhaps it would be better to regard predicted grades as indicators of what each student could be expected to achieve at something close to their best – which is in effect what UCAS asks for – rather than when participating in a flawed exam process. Universities are taking a pragmatic view of possible intake numbers for 2021 entry, with Cambridge having already introduced a clause seeking to deny some qualified applicants entry in 2021 if demand exceeds the number of places available.

The OSR report says that Ofqual and the DfE:

“… should have placed greater weight on explaining the limitations of the approach. … In our view, the qualification regulators had due regard for the level of quality that would be required. However, the public acceptability of large changes from centre assessed grades was not tested, and there were no quality criteria around the scale of these changes being different in different groups.” (3.3.3.1)

The lesson needs to be applied this year, but there is more to say. It is surprising that there was apparently such widespread lack of knowledge among teachers about the grading method in 2020 when there is a strong professional obligation to pay attention to assessment methods and how they work in practice. Warnings were sounded, but these rarely broke through to dominate teachers’ understanding, despite the best efforts of education journalists such as Laura McInerney, and teachers were deliberately excluded from discussions about the development of the algorithm-based method. The OSR report (3.4.2.2) said:

“… there were clear constraints in the grade awarding scenario around involvement of service delivery staff in quality assurance, or making the decisions based on results from a model. … However, we consider that involvement of staff from centres may have improved public confidence in the outputs.”

There were of course dire warnings in 2020 to parents, teachers and schools about the perils of even discussing the method, which undoubtedly inhibited debate, but even before then exam processes were not well understood:

“… notwithstanding the very extensive work to raise awareness, there is general limited understanding amongst students and parents about the sources of variability in examination grades in a normal year and the processes used to reduce them.” (3.2.2.2)

My HEPI blog just before A-level results day was aimed at students and parents, but it was read by many thousands of teachers, and anecdotal evidence from the many comments I received suggests it was seen by many teachers as a significant reinterpretation of the process they had been working on. One teacher said to Huy Duong, who had become a prominent commentator on the 2020 process: “I didn’t believe the stuff you were sending us, I thought it [the algorithm] was going to work”.

Nevertheless the mechanics of the algorithm were well understood by many school leaders. FFT Education Datalab was analysing likely outcomes as early as June 2020, and reported that many hundreds of schools had engaged them to assess their provisional grade submissions, some returning with a revised set of proposed grades for further analysis. Schools were seduced, or reduced, to trying to game the system, feeling they could not change the terrifying and ultimately ridiculous prospect of putting all their many large cohorts of students in strict rank order, subject by subject. Ofqual were victims of groupthink; too many people who should have known better simply let the fiasco unfold. Politicians and Ofqual were obsessed with preventing grade inflation, but – as was widely argued, long in advance –  public confidence depended on broader concerns about the integrity and fairness of the outcomes.

In 2021 we run the same risk of loss of public confidence. If that transpires, the government is positioned to blame teacher assessments and probably reinforce a return to examinations in their previous form, despite their known shortcomings. The consequences of two anomalous years of grading in 2020 and 2021 are still to unfold, but there is an opportunity, if not an obligation, for teachers and schools to develop an alternative narrative.

At GCSE level, schools and colleges might learn from emergency adjustments to their post-16 decisions that there could be better ways to decide on progression beyond GCSE. For A-level/BTEC/IB decisions, schools should no longer be forced to apologise for ‘overpredicting’ A-level grades, which might even become a fairer and more reliable guide to true potential for all students. Research evidence suggests that “Bright students from poorer backgrounds are more likely than their wealthier peers to be given predicted A-level grades lower than they actually achieve”. Such disadvantage might diminish or disappear if teacher assessments became the dominant public element of grading; at present too many students suffer the sometimes capricious outcomes of final examinations.

Teachers’ A-level predictions are already themselves moderated and signed off by school and college heads, in ways which must to some extent resemble the 2021 grading arrangements. There will be at least a two-year discontinuity in qualification levels, so universities might also learn new ways of dealing with what might become a permanently enhanced set of differently qualified applicants. In the longer term HE entrants might come to have different abilities and needs, because of their different formation at school. Less emphasis on preparation for examinations might even allow more scope for broader learning.

A different narrative could start with an alternative account of this year’s grades – not ‘standards are slipping’ or ‘this is a lost generation’, but ‘grades can now truly reflect the potential of our students, without the vagaries of flawed public examinations’. That might amount to a permanent reset of our expectations, and the expectations of our students. Not all countries rely on final examinations to assess eligibility to progress to the next stage of education or employment. By not wasting the current crisis we might even be able to develop a more socially just alternative which overcomes some of our besetting problems of socioeconomic and racial disadvantage.

Rob Cuthbert is an independent academic consultant, editor of SRHE News and Blog and emeritus professor of higher education management. He is a Fellow of the Academy of Social Sciences and of SRHE. His previous roles include deputy vice-chancellor at the University of the West of England, editor of Higher Education Review, Chair of the Society for Research into Higher Education, and government policy adviser and consultant in the UK/Europe, North America, Africa, and China.