
The Society for Research into Higher Education



Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to specified modules, programmes, and qualifications. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education, and yet this comparability is being questioned. Despite the high esteem in which the external examiner system is held, growing criticism has resulted in a cautious downgrading of the role. One critique concerns the development of standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of any examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …


Crossing the Threshold

By Paul Temple

When education students are taught about the difference between norm-referenced and criterion-referenced assessment, the example often given of criterion referencing is the driving test. The skills you need to demonstrate in order to pass the practical test are closely defined, and an examiner can readily tell whether or not you have mastered them. So you have to do a hill start without the car running backwards, reverse around a corner without hitting the kerb or ending up in the middle of the road, and so on. The driving test could then, in principle, have a 100% or a 0% pass rate. (A non-education example of a norm-referenced examination is the football league: to stay in the Premier League, a team doesn’t have to be objectively brilliant, just fractionally better than that year’s weakest three teams.) But the driving test is also a threshold assessment: the examiner expects the candidate to be able to negotiate the town centre one-way system competently, but not to show that they can take part in a Bond movie car-chase. You have to cross the threshold of competent driving: you don’t have to show that you can go beyond it.

This seems a clear enough distinction: so why do so many academics apparently have difficulty with it?


Grade point averages

By Geoff Stoakes

In May, the Higher Education Academy (HEA) published a report on its pilot study into a national grade point average (GPA) system. The study was prompted by debate around the perceived limitations of the honours degree classification (HDC) system: in particular, insufficient differentiation between levels of student performance, a lack of recognition outside the UK, and limited transparency in how the HDC is calculated by different higher education providers.

In his speech on 1 July 2015 at Universities UK, Jo Johnson, Minister for Universities and Science, highlighted that one of the things he wants to focus on in the forthcoming green paper is how a Teaching Excellence Framework can help improve how degrees are classified. He believes that the standard model of classes of honours on its own is “no longer capable of providing the recognition hardworking students deserve and the information employers require.”