
The Society for Research into Higher Education


Quality and standards in higher education

By Rob Cuthbert

What are the key issues in HE quality and standards, right now? Maintaining quality and standards with the massive transition to remote learning? Dealing with the consequences of the 2020 A-levels shambles? The student experience, now that most learning for most students is remote and off-campus? Student mental health and engagement with their studies and their peers? One or more of these, surely, ought to be our ‘new normal’ concerns.

But not for the government. Minister Michelle Donelan assured us that quality and standards were being constantly monitored – by other people – as in her letter of 2 November to vice-chancellors:

“We have been clear throughout this pandemic that higher education providers must at all times maintain the quality of their tuition. If more teaching is moved online, providers must continue to comply with registration conditions relating to quality and standards. This means ensuring that courses provide a high-quality academic experience, students are supported and achieve good outcomes, and standards are protected. We have worked with the Office for Students who are regularly reviewing online tuition. We also expect students to continue to be supported and achieve good outcomes, and I would like to reiterate that standards must be maintained.”

So student health and the student experience are for the institutions to worry about, and get right, with the Office for Students watching. And higher education won’t need a bailout, unlike most other sectors of the market economy, because with standards being maintained there’s no reason for students not to enrol and pay fees exactly as usual. Institutional autonomy is vital, especially when it comes to apportioning the blame.

For government, the new normal was just the same as the old normal. It wasn’t difficult to read the signs. Ever since David Willetts, ministers had been complaining about low quality courses in universities. But with each successive minister the narrative became increasingly threadbare. David, now Lord, Willetts, at least had a superficially coherent argument: greater competition and informed student choice would drive up quality through competition between institutions for students. It was never convincing, but at least it had an answer to why and how quality and standards might be connected with competition in the HE market. Promoting competition by lowering barriers to entry for new HE providers was not a conspicuous success: some of the new providers proved to be a big problem for quality. Information, advice and guidance were key for improving student choice, so it seemed that the National Student Survey would play a significant part, along with university rankings and league tables. As successive ministers took up the charge the eggs were mostly transferred to the Teaching Excellence Framework basket, with TEF being championed by Jo, now Lord, Johnson. TEF began in 2016 and became a statutory requirement in the Higher Education and Research Act 2017, which also required TEF to be subject to an independent review. From the start TEF had been criticised as not actually being about teaching, or excellence, and the review by Dame Shirley Pearce, previously VC at Loughborough, began in 2018. Her review was completed before the end of 2019, but at the time of writing had still not been published.

However, the ‘low quality courses’ narrative has just picked up speed. Admittedly it stuttered a little during the tenure of Chris Skidmore, who was twice briefly the universities minister, before and after Jo Johnson’s equally brief second tenure. The ‘Skidmore test’ suggested that any argument about low quality courses should specify at least one of the culprits, if it was not to be a low quality argument. This was naturally unpopular with the narrative’s protagonists, and Skidmore, having briefly been reinstalled as minister after Jo Johnson’s decision to step down, was replaced by Michelle Donelan, who has remained resolutely on-message, even as any actual evidence of low quality receded still further from view. She announced in a speech to Universities UK at their September 2020 meeting that the once-praised NSS was now in the firing line: “There is a valid concern from some in the sector that good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace. These concerns have been driven by both the survey’s current structure and its usage in developing sector league tables and rankings.”

UUK decided that they had to do something, so they ‘launched a crackdown’ (if you believe Camilla Turner in The Telegraph on 15 November 2020) by proposing, um, “a new charter aimed at ensuring institutions take a ‘consistent and transparent approach to identifying and improving potentially low value or low quality courses’”. It’s doubtful if even UUK believed that would do the trick, and no-one else gave it much credence. But with the National Student Survey and even university league tables now deemed unreliable, and the TEF in deep freeze, the government urgently needed some policy-based evidence. It was time for this endlessly tricky problem to be dumped in the OfS in-tray. Thus it was that the OfS announced on 17 November 2020 that: “The Office for Students is consulting on its approach to regulating quality and standards in higher education. Since 2018, our focus has been on assessing providers seeking registration and we are considering whether and how we should develop our approach now that most providers are registered. This consultation is taking place at an early stage of policy development and we would like to hear your views on our proposals.”

Instant commentators were unimpressed. Were the OfS proposals on quality and standards good for the sector? Johnny Rich thought not, in his well-argued blog for the Engineering Professors’ Council on 23 November 2020, and David Kernohan provided some illustrative but comprehensive number-crunching in his Wonkhe blog on 30 November 2020: “Really, the courses ministers want to get rid of are the ones that make them cross. There’s no metric that is going to be able to find them – if you want to arbitrarily carve up the higher education sector you can’t use “following the science” as a justification.” Liz Morrish nailed it on her Academic Irregularities blog on 1 December 2020.

In the time-honoured way established by HEFCE, the OfS consultation was structured in a way which made it easy to summarise responses numerically, but much less easy to interpret their significance and their arguments. The core of the approach was a matrix of criteria, most of which all universities would expect to meet, but it included some ‘numerical baselines’, especially on something beyond the universities’ control – graduate progression to professional and managerial jobs. It also included a proposed baseline for drop-out rates. The danger of this was that it would point the finger at universities which do the most for disadvantaged groups, but here too government and OfS had a cunning plan. Nick Holland, the OfS Competition and Registration Manager, blogged on 2 December 2020 that the OfS would tackle “pockets of low quality higher education provision”, with the statement that “it is not acceptable for providers to use the proportion of students from disadvantaged backgrounds they have as an excuse for poor outcomes.” At a stroke universities with large proportions of disadvantaged students could either be blamed for high drop-out rates, or, if they reduced drop-out rates, they could be blamed for dropping standards. Lose-lose for the universities concerned, but win-win for the low quality courses narrative. The outrider to the low quality courses narrative was an attack on the 50% participation rate (in which Skidmore was equally culpable), which seemed hard to reconcile with a ‘levelling up’ narrative, but Michelle Donelan did her best with her speech to NEON, of all audiences, calling for a new approach to social mobility, which seemed to add up to levelling up by keeping more people in FE. The shape of the baselines became clearer as OfS published Developing an understanding of projected rates of progression from entry to professional employment: methodology and results on 18 December 2020.
After proper caveats about the experimental nature of the statistics, here came the indicator (and prospective baseline measure): “To derive the projected entry to professional employment measure presented here, the proportion of students projected to obtain a first degree at their original provider (also referred to as the ‘projected completion rate’) is multiplied by the proportion of Graduate Outcomes respondents in professional employment or any type of further study 15 months after completing their course (also referred to as the ‘professional employment or further study rate’).” This presumably met the government’s expectations by baking in all the non-quality-related advantages of selective universities in one number. Wonkhe’s David Kernohan despaired, on 18 December 2020, as the proposals deviated even further from anything that made sense: “Deep within the heart of the OfS data cube, a new plan is born. Trouble is, it isn’t very good.”

Is it too much to hope that OfS and government might actually look at the academic research on quality and standards in HE? Well, yes, but there is rather a lot of it. Quality in Higher Education is into its 26th year, and of course there is so much more. Even further back, in 1986 the SRHE Annual Conference theme was Standards and criteria in higher education, with an associated book edited by one of the founders of SRHE, Graeme Moodie (York). (This was the ‘Precedings’ – at that time the Society’s practice was to commission an edited volume in advance of the annual conference.) SRHE and the Carnegie Foundation subsequently sponsored a series of Anglo-American seminars on ‘Questions of Quality’. One of the seminar participants was SRHE member Tessa, now Baroness, Blackstone, who would later become the Minister for Further and Higher Education, and one of the visiting speakers for the Princeton seminar was Secretary of State for Education Kenneth Baker. At that time the Council for National Academic Awards was still functioning as the validating agency, assuring quality, for about half of the HE sector, with staff including such SRHE notables as Ron Barnett, John Brennan and Heather Eggins. When it was founded SRHE aimed to bring research and policy together; they have now drifted further apart. Less attention to peer review, but more ministers becoming peers.

Rob Cuthbert is Emeritus Professor of Higher Education Management, University of the West of England and Joint Managing Partner, Practical Academics


From ‘predict and provide’ to ‘mitigate the risk’: thoughts on the state and higher education in Britain

by Paul Temple

January 2020 marks the second year of the Office for Students’ (OfS) operations. The OfS represents the latest organisational iteration of state direction of (once) British and (now) English higher education, stretching back to the creation of the University Grants Committee (UGC) in 1919. We therefore have a century’s worth of experience to draw on: what lessons might there be?

There are, I think, two ways to consider the cavalcade of agencies that have passed through the British higher education landscape since 1919. One is to see in it how higher education has been viewed at various points over the last century. The other way is to see it as special cases of methods of controlling public bodies generally. I think that both perspectives can help us to understand what has happened and why.

In the post-war decades, up to the later 1970s, central planning was almost unquestioningly accepted across the political spectrum in Britain as the correct way to direct nationalised industries such as electricity and railways, but also to plan the economy as a whole, as the National Plan of 1965 showed. In higher education, broadly similar methods – predict and provide – were operated by the UGC for universities, and by a partnership of central government and local authorities for the polytechnics and other colleges. A key feature of this mode of regulation was expert judgement, largely insulated from political pressures. As Michael Shattock and Aniko Horvath observe in The Governance of British Higher Education (Bloomsbury, 2020), “In the 1950s it had been the UGC, not officials in the ministry, who initiated policy discussions about the forecast rate of student number expansion and its financial implications, and it was the UGC, not a minister, that proposed founding the 1960s ‘New Universities’” (p18).

Higher education, then, was viewed as a collective national resource, to be largely centrally planned and funded, in a similar way to nationalised industries.

The rejection of central planning methods by the Thatcher governments (1979-1990) affected the control of higher education as it did other areas of national life through the ‘privatisation’ of public enterprises. Instead, resource allocation decisions were to be made by markets, or where normal markets were absent, as with higher education, by using ‘quasi-markets’ to allocate public funds. Accordingly the UGC was abolished by legislation in 1988, and (eventually) national funding bodies were created, the English version being the Higher Education Funding Council for England (HEFCE). Whereas the UGC had a key task of preserving academic standards, by maintaining the ‘unit of resource’ at what was considered to be an adequate level of funding per student (as a proxy for academic standards), HEFCE’s new task, little-noted at the time, became the polar opposite: it was required to drive down unit costs per student, thereby supposedly forcing universities to make the efficiency gains to be expected of normal market forces.

The market, then, had supplanted central planning as an organising principle in British public life (perhaps the lasting legacy of the Thatcher era); and universities discovered that the seemingly technical changes to their funding arrangements had profoundly altered their internal economies.

HEFCE’s main task, however, as with the UGC before it, was to allocate public money to universities, though now applying a different methodology. The next big shift in English higher education policy, under the 2010 coalition government, changed the nature of central direction radically. Under the full-cost fees policy, universities now typically received most of their income from student loans, making HEFCE’s funding role largely redundant. So, after the usual lag between policy change and institutional restructuring, a new agency was created in 2018, the Office for Students (OfS), modelled on the lines of industry regulators for privatised utilities such as energy and telecoms.

In contrast to its predecessor agencies, OfS is neither a planning nor a funding body (except for some special cases). Instead, as with other industry regulators, it assumes that a market exists, but that its imperfect nature (information asymmetry being a particular concern) calls for detailed oversight and possibly intervention, in order to ‘mitigate the risk’ of abuses by providers (universities) which could damage the interests of consumers (students). It has no interest in maintaining a particular pattern of institutional provision, though it does require that external quality assurance bodies validate academic standards in the institutions it registers.

As with utilities, we have seen a shift in Britain, in stages, from central planning and funding, to a fragmented but regulated provision. The underlying assumption is that market forces will have beneficial results, subject to the regulator preventing abuses and ensuring that minimal standards are maintained. This approach is now so widespread in Britain that the government has produced a code to regulate the regulators (presumably anticipating the question, Quis custodiet ipsos custodes?).

Examining the changing pattern of state direction of higher education in England in the post-1945 period, then, we see the demise of central planning and its replacement, first by quasi-markets, and then by as close to the real thing as we are likely to get. Ideas of central funding to support planning goals have been replaced by reliance on a market with government-created consumers, overseen by a regulator, intervening in the detail (see OfS’s long list of ‘reportable events’) of institutional management.

Despite every effort by governments to create a working higher education marketplace, the core features of higher education get in the way of it being a consumer good (for the many reasons that are repeatedly pointed out to and repeatedly ignored by ministers). Central planning has gone, but its replacement depends on central funding and central intervention. I don’t think that we’ve seen the last of formal central planning in our sector.

SRHE member Paul Temple is Honorary Associate Professor, Centre for Higher Education Studies, UCL Institute of Education, University College London. See his latest paper ‘University spaces: Creating cité and place’, London Review of Education, 17 (2): 223–235 at https://doi.org/10.18546



Customer Services

by Phil Pilkington

“…problems arise when language goes on holiday. And here we may indeed fancy naming to be some remarkable act of mind, as it were a baptism of an object.”

Ludwig Wittgenstein, Philosophical Investigations, para 38 (original emphasis)

The paradigm shift of students to customers at the heart of higher education has changed strategies, psychological self-images, business models and much else. But are the claims for and against students as customers (SAC), and the related research, as useful, insightful and angst-ridden as we may at first think? There are alarms about changing student behaviours and approaches to learning and the relationship towards academic staff, but does the naming ‘customers’ reveal what were already underlying, long-standing problems? Does the concentrated focus on SAC obscure rather than reveal?

One aspect of SAC is the observation that academic performance declines, and learning becomes more surface and instrumental (Bunce, 2017). Another is that SAC inclines students to be narcissistic and aggressive, with HEI management pandering to student demands and their NSS feedback, alongside other strategies – creating iconic campus buildings, for instance – to maintain or improve league table position (Nixon, 2018).

This raises some methodological questions on (a) the research on academic performance and the degree of narcissism/aggression prior to SAC (ie around 1997 with the Dearing Report); (b) the scope and range of the research given the scale of student numbers, participation rates, the variety of student motivations, the nature of disciplines and their own learning strategies, and the hierarchy of institutions; and (c) the combination of (a) and (b) in the further question whether SAC changed the outlook of students to their education – or is it that we are paying more attention and making different interpretations?

Some argue that the mass system in some way created the marketisation of HE and the SAC, with all its attendant problems of changing the pedagogic relationship and cognitive approaches. Given Martin Trow’s definitions of elite, mass and universal systems of HE*, the UK achieved a mass system by the late 1980s to early 1990s with the rapid expansion of the polytechnics; universities were slower to expand student numbers. This expansion predated the £1,000 fees introduced by David Blunkett (Secretary of State for Education in the new Blair government) immediately after the Dearing Report, and the £3,000 top-up fees that followed. It was after the 1997 election that the aspiration was for a universal HE system with a 50% participation rate.

If a mass system of HE came about (in a ‘fit of forgetfulness’) by 1991, when did marketisation begin? Marketisation may be a name we give to a practice or context which had existed previously but was tacit and culturally and historically deeper, hidden from view. The unnamed hierarchy of institutions of Oxbridge, Russell, polytechnics, HE colleges, FE colleges had powerful cultural and socio-political foundations and was a market of sorts (high to low value goods, access limited by social/cultural capital and price, etc). That hierarchy was not, however, necessarily top-down: the impact of social benefit of the ‘lower orders’ in that hierarchy would be significant in widening participation. The ‘higher order’ existed (and exists) in an ossified form. And as entry was restricted, the competition within the sector did not exist or did not present existential threats. Such is the longue durée when trying to analyse marketisation and the SAC.

The focus on marketisation should help us realise that over the long term the unit of resource was drastically reduced; state funding was slowly and then rapidly withdrawn to the point where the level of student enrolment was critical to long term strategy. That meant not maintaining but increasing student numbers when the potential pool of students would fluctuate – with the present demographic trough ending in 2021 or 2022. Marketisation can thus be separated to some extent from the cognitive dissonance or other anxieties of the SAC. HEIs (with exceptions in the long-established hierarchy) were driven by the external forces of the funding regime to develop marketing strategies, branding and gaming feedback systems in response to the competition for students and the creation of interest groups – Alliance, Modern, et al. The enrolled students were not the customers in the marketisation but the product or outcome of successful management. Students change into customers as the focus then shifts to results, employment and further study rates. Such is the split personality of institutional management here.

Research on SAC in STEM courses has noted an inclination to surface learning and the instrumentalism of ‘getting a good grade in order to get a good job’, but this prompts further questions. I am not sure that this is an increased inclination to surface learning, nor whether surface and deep are uncritical norms we can readily employ. The HEAC definition of deep learning has an element of ‘employability’ in the application of knowledge across differing contexts and disciplines (Howie and Bagnall, 2012). A student in 2019 may face the imperative to get a ‘degree level’ job in order to pay back student loans. This is a rational response to the student loans regime; with widening participation, the imperative is not universally felt, given the differing socio-economic backgrounds of students.

(Note that the current loan system is highly regressive as a form of ‘graduate tax’.)

And were STEM students more inclined toward deep or surface learning before they became SAC? Teaching and assessment in STEM may have been poor and may have encouraged surface-level learning (eg through weekly phase tests which were tardily assessed).

What is deep learning in civil engineering when faced with stress testing concrete girders, or in solving quaternion equations in mathematics: is much of STEM actually knowing and processing algorithms? How is such learnable content in STEM equivalent in some cognitive way to the deep learning in modern languages, history, psychology et al? This is not to suggest a hierarchy of disciplines but differences, deep differences, between rules-based disciplines and the humanities.

Learning is complex and individualised, and responsive to, though not entirely determined by, the curriculum and the forms of its delivery. In the research on SAC the assumptions are that teaching and assessment delivery is both relatively unproblematic and designed to encourage deep, non-instrumental learning. Expectations of curriculum delivery and assessment will vary amongst students, depending on personal background of schooling and parents, the discipline, and personal motivations – and the expectations will often be unrealistic. Consider why they are unrealistic: it is more than the narcissism of being a customer. (There is a very wide range of varieties of customer: as a customer of Network Rail I am more a supplicant than a narcissist.)

The alarm over the changes (?) to the students’ view of their learning as SAC in STEM should be put in the context of the previously high drop-out rate of STEM students (relatively higher than non-STEM), which could reach 30% of a cohort. The causes of drop-out were thoroughly examined by Mantz Yorke (Yorke and Longden, 2004), but as regards the SAC issue here, STEM drop-outs were explained by tutors as lack of the right mathematical preparation. There is comparatively little research on the motivations of students entering STEM courses before they became SAC; such research is not long-term or longitudinal. However, research on the typology of students with differing motivations for learning (the academic, the social, the questioning student etc), ranged across all courses, does exist (a 20-year survey: Beatty, 2005). Is it possible that after widening participation to the point of a universal system, motivations towards the instrumental or utilitarian will become more prominent? And is there an implication that an elite HE system pre-SAC was less instrumentalist, less surface learning? The creation of PPE (first at Oxford in 1921, then spreading across the sector) was an attempt to produce a mandarin class, where career ambition was designed into the academic disciplines. That is, ‘to get a good job’ applies here too, but it will be expressed in different, indirect and elevated ways of public service.**

There are some anachronisms in the research on SAC. The acceptance of SAC by management – producing student charters and providing students with places on boards, committees and senior management meetings – is not a direct result of students or management considering students as customers. Indeed, it predates SAC by many years and has its origins in the 1960s and 70s.

I am unlikely to get onto the board of Morrisons, but I could for the Co-op – a discussion point on partnerships, co-producers, membership of a community of learners. The struggle by students to get representation in management has taken fifty years, from the Wilson government Blue Paper Student Protest (1970) to today. It may have been a concession, but student representation changed the nature of HEIs in the process, prior to SAC. Student charters appear to be mostly a coherent, user-friendly reduction of lengthy academic and other regulations that no party can comprehend without extensive lawyerly study. A number of HEIs produced charters before the SAC era (late 1990s). And iconic university buildings attracted the architectural profession long before SAC – Birmingham’s aspiration to be an independent city state, with its Venetian architecture recalling St Mark’s Square under the supervision of Joseph Chamberlain (1890s), or Jim Stirling’s post-modern Engineering faculty building at Leicester (1963), etc (Cannadine, 2004).

Students have complex legal identities and are a complex and often fissiparous body. They are customers of catering, they are members of a guild or union, learners, activists and campaigners, clients, tenants, volunteers, sometimes disciplined as the accused, or the appellant, they adopt and create new identities psychologically, culturally and sexually. The language of students as customers creates a language game that excludes other concerns: the withdrawal of state funding, the creation of an academic precariat, the purpose of HE for learning and skills supply, an alienation from a community by the persuasive self-image as atomised customer, how deep learning is a creature of disciplines and the changing job market, that student-academic relations were problematic and now become formalised ‘complaints’. Students are not the ‘other’ and they are much more than customers.

Phil Pilkington is Chair of Middlesex University Students’ Union Board of Trustees, a former CEO of Coventry University Students’ Union, an Honorary Teaching Fellow of Coventry University and a contributor to WonkHE.

*Martin Trow defined elite, mass and universal systems of HE by participation rates of 10-20%, 20-30% and 40-50% respectively.

** Trevor Pateman, The Poverty of PPE, Oxford, 1968; a pamphlet criticising the course by a graduate; it is acknowledged that the curriculum, ‘designed to run the Raj in 1936’, has changed little since that critique. This document is a fragment of another history of higher education worthy of recovery – of complaint and dissatisfaction with teaching – and there were others who developed the ‘alternative prospectus’ movement in the 1970s and 80s.

References

Beatty L, Gibbs G, and Morgan A (2005) ‘Learning orientations and study contracts’, in Marton, F, Hounsell, D and Entwistle, N, (eds) (2005) The Experience of Learning: Implications for teaching and studying in higher education, 3rd (Internet) edition. Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment.

Bunce, Louise (2017) ‘The student-as-consumer approach in HE and its effects on academic performance’, Studies in Higher Education, 42(11): 1958-1978

Howie P and Bagnall R (2012) ‘A critique of the deep and surface learning model’, Teaching in Higher Education 18(4); they state that the surface/deep distinction suffers from “imprecise conceptualisation, ambiguous language, circularity and a lack of definition…”

Nixon, E, Scullion, R and Hearn, R (2018) ‘Her majesty the student: marketised higher education and the narcissistic (dis)satisfaction of the student consumer’, Studies in Higher Education 43(6): 927-943

Cannadine, David (2004) ‘The Chamberlain Tradition’, in In Churchill’s Shadow, Oxford: Oxford University Press; his biographical sketch of Joe Chamberlain shows his vision of Birmingham as an alternative power base to London.

Yorke M and Longden B (2004) Retention and student success in higher education, Maidenhead: SRHE/Open University Press



The ‘Holy Grail’ of pedagogical research: the quest to measure learning gain

by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans

Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, the Times Higher Education trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy making is a long and winding road.

Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?

The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we do undoubtedly need a better measure than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction in the value of the NSS to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement. The evaluation report argues that:

“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).

Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.

The article Making Sense of Learning Gain in Higher Education, in a special issue of Higher Education Pedagogies (HEP), highlights these tricky questions.



Beware of slogans

By Alex Buckley

Slogans, over time, become part of the furniture. They start life as radical attempts to change how we think, and can end up victims of their own success. Higher education is littered with ex-slogans: ‘student engagement’, ‘graduate attributes’, ‘technology enhanced learning’, ‘student voice’, ‘quality enhancement’, to name just a few. Hiding in particularly plain sight is ‘teaching and learning’ (and ‘learning and teaching’). We may use the phrase on a daily basis without thinking much about it, but what is the point of constantly talking about teaching and learning in the same breath?



Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to specified modules, programmes or qualifications. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …




The Thirty Years Quality War

By Rob Cuthbert

Ten years ago David Watson (2006: 2) said that in England since the 1980s: “the audit society and the accountability culture have collided (apparently) with academic freedom and institutional autonomy”. He called this clash between accountability and autonomy the ‘Quality Wars’ and identified five major casualties: the shrinking of higher education’s sectoral responsibilities; truth, as managers mistook criticism for resistance and staff mistook resistance for criticism; solidarity, because of the rise of the ‘gangs’ – the Russell Group and others; students, as quality assurance became ever less effective at delivering enhancement; and the reputation of UK HE abroad, as our determination to label things unsatisfactory advertised the few deficiencies of our sector and obscured our strengths.

Ten years on, the hostilities continue and the casualties mount.