SRHE Blog

The Society for Research into Higher Education


Some different lessons to learn from the 2020 exams fiasco

by Rob Cuthbert

The problems with the algorithm used for school examinations in 2020 have been exhaustively analysed, before, during and after the event. The Royal Statistical Society (RSS) called for a review, after its warnings and offers of help in 2020 had been ignored or dismissed. Now the Office for Statistics Regulation (OSR) has produced a detailed review of the problems, Learning lessons from the approach to developing models for awarding grades in the UK in 2020. But the OSR report only tells part of the story; there are larger lessons to learn.

The OSR report properly addresses its limited terms of reference in a diplomatic and restrained way. It is far from an absolution – even in its own terms it is at times politely damning – but in any case it is not a comprehensive review of the lessons which should be learned; it is a review of the lessons for statisticians to learn about how other people use statistics. Statistical models are tools, not substitutes for competent management, administration and governance. The report makes many valid points about how the statistical tools were used, and how their use could have been improved, but the key issue lies at the meta-level: no-one was sufficiently addressing the big picture. An obsession with consistency of ‘standards’ obscured the need to consider the wider human and political implications of the approach. In particular, it is bewildering that no-one in the hierarchy of control was paying sufficient attention to two key differences. First, national ‘standardisation’ or moderation had been replaced by a system which pitted individual students against their classmates, subject by subject and school by school. Second, 2020 students were condemned to live within the bounds not of the nation’s, but their school’s, historical achievements. The problem was not statistical, nor anything to do with the algorithm; the problem was the way the problem itself had been framed – as many commentators pointed out from an early stage. The OSR report (at 3.4.1.1) said:

“In our view there was strong collaboration between the qualification regulators and ministers at the start of the process. It is less clear to us whether there was sufficient engagement with the policy officials to ensure that they fully understood the limitations, impacts, risks and potential unintended consequences of the use of the models prior to results being published. In addition, we believe that, the qualification regulators could have made greater use of opportunities for independent challenge to the overall approach to ensure it met the need and this may have helped secure public confidence.”

To put it another way: the initial announcement by the Secretary of State was reasonable and welcome. When Ofqual proposed that ranking students and tying each school’s results to its past record was the only way to do what the SoS wanted, no-one in authority was willing either to change the approach, or to make its implications transparent enough for the public to lose confidence at the start, in time for government and Ofqual to change course.
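
To make the framing concrete, here is a deliberately simplified and hypothetical sketch (not Ofqual’s actual 2020 model, whose details were more elaborate) of centre-based standardisation of the kind described above: the school’s historical grade distribution sets the ceiling, and the teachers’ rank order decides which student receives which of the grades that history permits. All names and numbers are illustrative.

```python
# Hypothetical illustration only: a crude stand-in for centre-based standardisation,
# not the model Ofqual actually used in 2020.

from typing import Dict, List


def allocate_grades(
    ranked_students: List[str],                 # teacher rank order, strongest first
    historical_distribution: Dict[str, float],  # e.g. {"A": 0.2, "B": 0.4, "C": 0.4}
) -> Dict[str, str]:
    """Fill the school's historical grade proportions in rank order."""
    n = len(ranked_students)
    grades: Dict[str, str] = {}
    # Convert the historical proportions into cumulative counts for this cohort.
    boundaries = []
    cumulative = 0.0
    for grade, proportion in historical_distribution.items():
        cumulative += proportion
        boundaries.append((grade, round(cumulative * n)))
    i = 0
    for grade, upper in boundaries:
        while i < min(upper, n):
            grades[ranked_students[i]] = grade
            i += 1
    # Any rounding remainder falls into the lowest grade the school has history of.
    while i < n:
        grades[ranked_students[i]] = boundaries[-1][0]
        i += 1
    return grades


if __name__ == "__main__":
    # A school whose past results in this subject were 20% A, 40% B, 40% C.
    history = {"A": 0.2, "B": 0.4, "C": 0.4}
    cohort = ["Student 1", "Student 2", "Student 3", "Student 4", "Student 5"]
    print(allocate_grades(cohort, history))
    # However able this cohort is, only about one of the five can be awarded an A,
    # because the ceiling is the school's past record, not a national standard.
```

Even in this toy version the two differences highlighted above are visible: students are pitted against their classmates for a fixed share of each grade, and no cohort can collectively exceed its school’s historical achievement.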

The OSR report repeatedly emphasises that the key problem was a lack of public confidence, concluding that:

“… the fact that the differing approaches led to the same overall outcome in the four countries implies to us that there were inherent challenges in the task; and these challenges meant that it would have been very difficult to deliver exam grades in a way that commanded complete public confidence in the summer of 2020 …”

“Very difficult”, but, as Select Committee chair Robert Halfon said in November 2020, things could have been much better:

“the “fallout and unfairness” from the cancellation of exams will “have an ongoing impact on the lives of thousands of families”. … But such harm could have been avoided had Ofqual not buried its head in the sand and ignored repeated warnings, including from our Committee, about the flaws in the system for awarding grades.”

As the 2021 assessment cycle comes closer, attention has shifted to this year’s approach to grading, when once again exams will not feature except as a partial and optional extra. When the interim Head of Ofqual, Dame Glenys Stacey, appeared before the Education Select Committee, Schools Week drew some lessons which remain pertinent, but there is more to say. An analysis of 2021 by George Constantinides, a professor of digital computation at Imperial College whose 2020 observations were forensically accurate, has been widely circulated and equally widely endorsed. He concluded in his 26 February 2021 blog that:

“the initial proposals were complex and ill-defined … The announcements this week from the Secretary of State and Ofqual have not helped allay my fears. … Overall, I am concerned that the proposed process is complex and ill-defined. There is scope to produce considerable workload for the education sector while still delivering a lack of comparability between centres/schools.”

The DfE statement on 25 February kicks most of the trickiest problems down the road, and into the hands of examination boards, schools and teachers:

“Exam boards will publish requirements for schools’ and colleges’ quality assurance processes. … The head teacher or principal will submit a declaration to the exam board confirming they have met the requirements for quality assurance. … exam boards will decide whether the grades determined by the centre following quality assurance are a reasonable exercise of academic judgement of the students’ demonstrated performance. …”

Remember in this context that Ofqual acknowledges “it is possible for two examiners to give different but appropriate marks to the same answer”. Independent analyst Dennis Sherwood and others have argued for alternative approaches which would be more reliable, but there is no sign of change.

Two scenarios suggest themselves. In one, where this year’s results are indeed pegged to the history of previous years, school by school, we face the prospect of overwhelming numbers of student appeals, almost all of which will fail, leading no doubt to another failure of public confidence in the system. The OSR report (3.4.2.3) notes that:

“Ofqual told us that allowing appeals on the basis of the standardisation model would have been inconsistent with government policy which directed them to “develop such an appeal process, focused on whether the process used the right data and was correctly applied”.”

Government policy for 2021 seems not to be significantly different:

“Exam boards will not re-mark the student’s evidence or give an alternative grade. Grades would only be changed by the board if they are not satisfied with the outcome of an investigation or malpractice is found. … If the exam board finds the grade is not reasonable, they will determine the alternative grade and inform the centre. … Appeals are not likely to lead to adjustments in grades where the original grade is a reasonable exercise of academic judgement supported by the evidence. Grades can go up or down as the result of an appeal.” (emphasis added)

There is one crucial exception: in 2021 every individual student can appeal. Government no doubt hopes that this year the blame will all be heaped on teachers, schools and exam boards.

The second scenario seems more likely and is already widely expected, with grade inflation outstripping the 2020 outcome. There will be a check, says DfE, “if a school or college’s results are out of line with expectations based on past performance”, but it seems doubtful whether that will be enough to hold the line. The 2021 approach was only published long after schools had supplied predicted A-level grades to UCAS for university admission. Until now there has been a stable relationship between predicted grades and examination outcomes, as Mark Corver and others have shown. Predictions exceed actual grades awarded by consistent margins; this year it will be tempting for schools simply to replicate their predictions in the grades they award. Indeed, it might be difficult for schools not to do so, without leaving their assessments subject to appeal. In the circumstances, the comments of interim Ofqual chief Simon Lebus that he does not expect “huge amounts” of grade inflation seem optimistic. But it might be prejudicial to call this ‘grade inflation’, with its pejorative overtones. Perhaps it would be better to regard predicted grades as indicators of what each student could be expected to achieve at something close to their best – which is in effect what UCAS asks for – rather than of what they happen to achieve in a flawed exam process. Universities are taking a pragmatic view of possible intake numbers for 2021 entry, with Cambridge having already introduced a clause seeking to deny some qualified applicants entry in 2021 if demand exceeds the number of places available.

The OSR report says that Ofqual and the DfE:

“… should have placed greater weight on explaining the limitations of the approach. … In our view, the qualification regulators had due regard for the level of quality that would be required. However, the public acceptability of large changes from centre assessed grades was not tested, and there were no quality criteria around the scale of these changes being different in different groups.” (3.3.3.1)

The lesson needs to be applied this year, but there is more to say. It is surprising that there was apparently such widespread lack of knowledge among teachers about the grading method in 2020 when there is a strong professional obligation to pay attention to assessment methods and how they work in practice. Warnings were sounded, but these rarely broke through to dominate teachers’ understanding, despite the best efforts of education journalists such as Laura McInerney, and teachers were deliberately excluded from discussions about the development of the algorithm-based method. The OSR report (3.4.2.2) said:

“… there were clear constraints in the grade awarding scenario around involvement of service delivery staff in quality assurance, or making the decisions based on results from a model. … However, we consider that involvement of staff from centres may have improved public confidence in the outputs.”

There were of course dire warnings in 2020 to parents, teachers and schools about the perils of even discussing the method, which undoubtedly inhibited debate, but even before then exam processes were not well understood:

“… notwithstanding the very extensive work to raise awareness, there is general limited understanding amongst students and parents about the sources of variability in examination grades in a normal year and the processes used to reduce them.” (3.2.2.2)

My HEPI blog just before A-level results day was aimed at students and parents, but it was read by many thousands of teachers, and anecdotal evidence from the many comments I received suggests it was seen by many teachers as a significant reinterpretation of the process they had been working on. One teacher said to Huy Duong, who had become a prominent commentator on the 2020 process: “I didn’t believe the stuff you were sending us, I thought it [the algorithm] was going to work”.

Nevertheless the mechanics of the algorithm were well understood by many school leaders. FFT Education Datalab was analysing likely outcomes as early as June 2020, and reported that many hundreds of schools had engaged them to assess their provisional grade submissions, some returning with a revised set of proposed grades for further analysis. Schools were seduced into, or reduced to, trying to game the system, feeling they could not change the terrifying and ultimately ridiculous prospect of putting all their many large cohorts of students in strict rank order, subject by subject. Ofqual were victims of groupthink; too many people who should have known better simply let the fiasco unfold. Politicians and Ofqual were obsessed with preventing grade inflation, but – as was widely argued, long in advance – public confidence depended on broader concerns about the integrity and fairness of the outcomes.

In 2021 we run the same risk of loss of public confidence. If that transpires, the government is positioned to blame teacher assessments and probably reinforce a return to examinations in their previous form, despite their known shortcomings. The consequences of two anomalous years of grading in 2020 and 2021 are still to unfold, but there is an opportunity, if not an obligation, for teachers and schools to develop an alternative narrative.

At GCSE level, schools and colleges might learn, from the emergency adjustments to their post-16 decisions, that there could be better ways to decide on progression beyond GCSE. For A-level/BTEC/IB decisions, schools should no longer be forced to apologise for ‘overpredicting’ A-level grades, which might even become a fairer and more reliable guide to true potential for all students. Research evidence suggests that “Bright students from poorer backgrounds are more likely than their wealthier peers to be given predicted A-level grades lower than they actually achieve”. Such disadvantage might diminish or disappear if teacher assessments became the dominant public element of grading; at present too many students suffer the sometimes capricious outcomes of final examinations.

Teachers’ A-level predictions are already themselves moderated and signed off by school and college heads, in ways which must to some extent resemble the 2021 grading arrangements. There will be at least a two-year discontinuity in qualification levels, so universities might also learn new ways of dealing with what might become a permanently enhanced set of differently qualified applicants. In the longer term HE entrants might come to have different abilities and needs, because of their different formation at school. Less emphasis on preparation for examinations might even allow more scope for broader learning.

A different narrative could start with an alternative account of this year’s grades – not ‘standards are slipping’ or ‘this is a lost generation’, but ‘grades can now truly reflect the potential of our students, without the vagaries of flawed public examinations’. That might amount to a permanent reset of our expectations, and the expectations of our students. Not all countries rely on final examinations to assess eligibility to progress to the next stage of education or employment. By not wasting the current crisis we might even be able to develop a more socially just alternative which overcomes some of our besetting problems of socioeconomic and racial disadvantage.

Rob Cuthbert is an independent academic consultant, editor of SRHE News and Blog and emeritus professor of higher education management. He is a Fellow of the Academy of Social Sciences and of SRHE. His previous roles include deputy vice-chancellor at the University of the West of England, editor of Higher Education Review, Chair of the Society for Research into Higher Education, and government policy adviser and consultant in the UK/Europe, North America, Africa, and China.