By Dr Emma Medland
Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance of identified modules, programmes, or qualifications. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.
External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.
The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.
Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as a fluid and negotiated concept that must be applicable to different groups of practitioners across different contexts, which will surely not be well served by a single definition. Instead there have been calls for the development of a shared language or discourse of assessment literacy. This would circumvent the need for over-simplistic dichotomies and could, I believe, be initiated through the identification of the constituent elements of the concept.
The project I have recently completed, thanks to SRHE’s Newer Researcher Award, aimed to extend the findings from a pilot study (Medland 2015) that drew upon Price et al (2012) as a theoretical framework to identify six constituent elements of the concept of assessment literacy. These are: i. Community; ii. Dialogue; iii. Knowledge and Understanding; iv. Programme-wide approach; v. Self-regulation; and vi. Standards. This SRHE-funded project aimed to validate and extend these findings in two stages through:
- The analysis of a sample of cross-institutional external examiner reports; and
- Dialogue with external examiners about how they conceive and enact assessment literacy within their roles, through interviews with examiners involved in the first stage of the project.
Framework analysis, a technique developed by Ritchie and Spencer in the 1980s and described in Ritchie et al (2006), was used to analyse the external examiner reports collected for stage one and the transcribed semi-structured interview data collected for stage two.
As identified in the pilot study, stage one of the project indicated that assessment literacy is a concept in its infancy, unfamiliar to most external examiners. However, the six constituent elements of assessment literacy were apparent in both written reports and transcribed interview recordings focusing on the perceptions of the role, as follows:
Standards were generally described as being embedded within local cultures, and immersion within this community was a key attraction of the role. However, integration within the target community was evidently very difficult to achieve, primarily in view of the restricted time and resources that each external examiner could dedicate to the role, which in turn was informed by the low level of reward and recognition attached to the role across the sector. In addition, a programme-wide overview, whilst central to the development of assessment literacy, was generally not achieved, and was sometimes actively discouraged by institutions.

Nevertheless, the informal dialogue that external examiners engaged in with the programme team, and sometimes with students too, supported their integration into the community and provided insight into the co-constructed nature of standards, although this aspect of practice was generally not shared at the institutional level. It became evident that assessment knowledge and understanding, manifested in the practice-theory relationship, needs to be made more explicit if the external examiner system is to avoid compounding the practice-theory gap present in HE, and that continuing professional development via self-regulation has only very recently been seriously considered (eg https://www.heacademy.ac.uk/hefce-degree-standards).

As such, of the six constituent elements of the concept of assessment literacy identified and validated by the pilot study and this research project, standards and dialogue were the most fully developed and understood. Central questions remain unanswered, generally grounded in the divergence of practices identified by the research and exacerbated by the means of learning the trade, which seems akin to a broken apprenticeship model.
The remaining elements of community, knowledge and understanding, programme-wide approach and self-regulation require attention if the assessment literacy of the external examiners involved in this research is to develop in a more explicit manner. This might be achieved through greater consistency in how the role is perceived and enacted, and greater support from both home and host institutions, so that the apprenticeship model might be fixed or replaced with an alternative means of embracing the diversity of practices. If the external examining system is to avoid further downgrading, then it must acknowledge assessment literacy as being of equal importance to disciplinary expertise, as well as subjecting itself to critique and development. This research project, therefore, aimed to initiate a discussion concerning what a discourse of assessment literacy might look like. It shows how identifying the constituent elements of the concept of assessment literacy might both guide the development of the assessment-related beliefs and understandings of a sample of external examiners and provide a tool for the conceptualisation of assessment literacy.
SRHE member Dr Emma Medland is a Lecturer in Higher Education in the Department of Higher Education at the University of Surrey. She won an SRHE Newer Researcher Award in 2015.
Bloxham, S, and Price, M (2015) ‘External Examining: Fit for purpose?’, Studies in Higher Education, 40 (2), 195-211
Cuthbert, M (2003) ‘The external examiner: How did we get here?’ [Online]. Available: (accessed 31 August 2015)
Medland, E (2015) ‘Examining the Assessment Literacy of External Examiners’, London Review of Education, 13 (3), 21-33
Price, M, Rust, C, O’Donovan, B, and Handley, K (2012) Assessment Literacy: The foundation for improving student learning, Oxford: The Oxford Centre for Staff and Learning Development
Ritchie, J, Spencer, L, and O’Connor, W (2006) ‘Carrying out qualitative analysis’, in Ritchie, J and Lewis, J (eds) Qualitative Research Practice: A guide for social science students and researchers, London: SAGE Publications, pp 219-262