
More roadworks on Quality Street

by Paul Temple

Trust is the magic ingredient that allows social life to exist, from the smallest informal group to entire nations. High-trust societies tend to be more efficient, as it can be assumed that people will, by and large, do what they’ve agreed without the need for constant checking. Ipsos MORI carries out an annual “veracity index” survey in Britain to discover which occupational groups are most trusted: “professors”, which I think we can take to mean university academic staff, score highly (trusted by 83% of the population), just below top-scoring doctors and judges, way above civil servants (60%) – and with government ministers playing in a different league on 16%. So most people, then, seem to trust university staff to do a decent job – much more than they trust ministers. It’s therefore a little strange that over the last 35 years the bitterest struggles between universities and governments have been fought in the “quality wars”, with governments claiming repeatedly that university teachers can’t be trusted to do their jobs without state oversight. Disputes about university expansion and funding come and go, but the quality wars just rumble on. Why?

From the mid-1980s (when “quality” was invented) up to the appearance of the 2011 White Paper, Higher Education: Students at the Heart of the System, quality in higher education was (after a series of changes to structures and methods) regulated by the Quality Assurance Agency, which required universities to show that they operated effective quality management processes. This did not involve the inspection of actual teaching: universities were instead trusted to give an honest, verifiable account of their own quality processes. Without becoming too dewy-eyed about it, the process came down to one group of professionals asking another group of professionals how they did their jobs. Trust was the basis of it all.

The 2011 White Paper set out to sweep this away, replacing woolly notions of trust-based processes with a bracing market-driven discipline. The government promised to “[put] financial power into the hands of learners [to make] student choice meaningful…[it will] remove regulatory barriers [to new entrants to the sector to] improve student choice…[leading to] higher education institutions concentrating on high-quality teaching” (Executive Summary, paras 6-9). On this model, decisions by individual students would largely determine institutional income from teaching, so producing better-quality courses: trust didn’t matter. Market forces can be seen to drive quality forward in other fields through competition; why not in universities?

Well, of course, for lots of reasons, as critics of the White Paper were quick to point out, naturally to no avail. But having been told that they were to operate in a marketised environment where the usual market mechanisms would deal with quality (good courses expanding, others shrinking or failing), exactly a decade later universities find themselves being subjected to a bureaucratic (I intend the word in its social scientific sense, not as a lazy insult) quality regime, the very antithesis of a market system.

We see this in the latest offensive in the quality wars, just opened by the OfS with its July 2021 “Consultation on Quality and Standards”. This 110-page second-round consultation document sets out a highly detailed process for assessing quality and standards: you can almost feel the pain of the drafter of section B1 on providing “a high quality academic experience”. What does that mean? It means, for example, ensuring that each course is “coherent”. So what does “coherent” mean? Well, it means, for example, providing “an appropriate balance between breadth and depth”. So what does…? And so on. This illustrates the difficulty of treating academic quality as an ISO 9001 (remember that?) process with checklists, when probably every member of a course team will – actually, in a university, should – have different, equally valid views on what (say) “appropriate breadth and depth” means.

Government approaches to quality and standards in university teaching have, then, over the last 30 or so years, moved from a largely trust-based system, to one supposedly driven by market forces, to a bureaucratic, box-ticking one. In all this time, ministers have failed to give convincing examples of the problems that the ever-changing quality regimes were supposed to deal with. (Degree mills and similar essentially fraudulent operations can be dealt with through normal consumer legislation, given the will to do so. I once interviewed an applicant for one of our courses who had worked in a college I hadn’t heard of: had there been any problems about its academic standards, I asked. “Not really”, she replied brightly, “it was a genuine bogus college”.)

Why, then, do the quality wars continue? – and we can be confident that the current OfS proposals do not signal the end of hostilities. It is hard to see this as anything other than ministerial displacement activity. Sorting out the social care crisis, or knife crime, will take real understanding and the redirection of resources: easier by far to make a fuss about a non-problem and then be seen to act decisively to solve it. And to erode trust in higher education a little more.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education, London. His latest paper, ‘The University Couloir: exploring physical and intellectual connectivity’, will appear shortly in Higher Education Policy.



Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules, programmes or qualifications. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education, and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique is that standardised procedures emphasising consistency and equivalency have been developed in an attempt to uphold standards, arguably to the neglect of any examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …


Learning Analytics, surveillance, and the future of understanding our students

By Vicky Gunn

There has been a flurry of activity around Learning Analytics in Scotland’s higher education sector this past year. Responding no doubt to the seemingly unlimited promises of being able to study our students, we are excitedly wondering just how best to use what the technology has to offer. At Edinburgh University, a professorial-level post has been advertised; at my own institution we are pulling together the various people who run our student experience surveys (who have hitherto been distributed across the institution) into a central unit in Planning, so that we can triangulate surveys, evaluations and other contextual data-sets; elsewhere, systems which enable ‘early warning signals’ with regard to student drop-out have been implemented with gusto.

I am one of the worst of the learning analytics offenders. My curiosity to observe and understand the patterns in activity, behaviour, and perception of the students is just too intellectually compelling. The possibility that we could crunch all of the data about our students into one big stew-pot and then extract answers to meaning-of-student-life questions is a temptation I find too hard to resist (especially when someone puts what is called a ‘dashboard’ in front of me and says, ‘look what happens if we interrogate the data this way’). …