SRHE Blog

The Society for Research into Higher Education



Walk on by: the dilemma of the blind eye

by Dennis Sherwood

Forty years on…

I don’t remember much about my experiences at work some forty-odd years ago, but one event I recall vividly is the discussion provoked by a case study at a training event. The case was simple, just a few lines:

Sam was working late one evening, and happened to walk past Pat’s office. The door was closed, but Sam could hear Pat being very abusive to Alex. Some ten minutes later, Sam saw Alex sobbing.

What might Sam do?

What should Sam do?

Quite a few in the group said “nothing”, on the grounds that whatever was going on was none of Sam’s business. Maybe Pat had good grounds to be angry with Alex and if the local culture was, let’s say, harsh, what’s the problem? Nor was there any evidence that Alex’s sobbing was connected with Pat – perhaps something else had happened in the intervening time.

Others thought that the least Sam could do was to ask if Alex was OK, and offer some comfort – a suggestion countered by the “it’s a tough world” brigade.

The central theme of the conversation was then all about culture. Suppose the culture was supportive and caring. Pat’s behaviour would be out of order, even if Pat was angry, and even if Alex had done something Pat had regarded as wrong.

So what might – and indeed should – Sam do?

Should Sam confront Pat? Or inform Pat’s boss?

What if Sam is Pat’s boss? In that case then, yes, Sam should confront Pat: failure to do so would condone bad behaviour, which, in this culture, would be a ‘bad thing’.

But if Sam is not Pat’s boss, things are much more tricky. If Sam is subordinate to Pat, confrontation is hardly possible. And informing Pat’s boss could be interpreted as snitching or trouble-making. Another possibility is that Sam and Pat are peers, giving Sam ‘the right’ to confront Pat – but only if peer-to-peer honesty and mutual pressure is ‘allowed’. Which it might not be, for many, even benign, cultures are in reality networks of mutual ‘non-aggression treaties’, in which ‘peers’ are monarchs in their own realms – so Sam might deliberately choose to turn a blind eye to whatever Pat might be doing, for fear of setting a precedent that would allow Pat, or indeed Ali or Chris, to poke their noses into Sam’s own domain.

And if Sam is in a different part of the organisation – or indeed from another organisation altogether – then maybe Sam’s safest action is back where we started. To do nothing. To walk on by.

Sam is a witness to Pat’s bad behaviour. Does the choice to ‘walk on by’ make Sam complicit too, albeit at arm’s length?

I’ve always thought that this case study, and its implications, are powerful – which is probably why I’ve remembered it over so long a time.

The truth about GCSE, AS and A level grades in England

I mention it here because it is relevant to the main theme of this blog – a theme that, if you read it, makes you a witness too. Not, of course, to ‘Pat’s’ bad behaviour, but to another circumstance which, in my opinion, is a great injustice doing harm to many people – an injustice that ‘Pat’ has got away with for many years now, not only because ‘Pat’s peers’ have turned a blind eye – and a deaf ear too – but also because all others who have known about it have chosen to ‘walk on by’.

The injustice of which I speak is the fact that about one GCSE, AS and A level grade in every four, as awarded in England, is wrong, and has been wrong for years. Not only that: in addition, the rules for appeals do not allow these wrong grades to be discovered and corrected. So the wrong grades last for ever, as does the damage they do.

To make that real, in August 2025, some 6.5 million grades were awarded, of which around 1.6 million were wrong, with no appeal. That’s an average of about one wrong grade ‘awarded’ to every candidate in the land.

Perhaps you already knew all that. But if you didn’t, you do now. As a consequence, like Sam in that case study, you are a witness to wrong-doing.

It’s important, of course, that you trust the evidence. The prime source is Ofqual’s November 2018 report, Marking Consistency Metrics – An update, which presents the results of an extensive research project in which very large numbers of GCSE, AS and A level scripts were in essence marked twice – once by an ‘assistant’ examiner (as happens in ‘ordinary’ marking each year), and again by a subject senior examiner, whose academic judgement is the ultimate authority, and whose mark, and hence grade, is deemed ‘definitive’, the arbiter of ‘right’.

Each script therefore had two marks and two grades, enabling those grades to be compared. If they were the same, then the ‘assistant’ examiner’s grade – the grade that is on the candidate’s certificate – corresponds to the senior examiner’s ‘definitive’ grade, and is therefore ‘right’; if the two grades are different, then the assistant examiner’s grade is necessarily ‘non-definitive’, or, in plain English, wrong.

You might have thought that the number of ‘non-definitive’/wrong grades would be small and randomly distributed across subjects. In fact, the key results are shown on page 21 of Ofqual’s report as Figure 12, reproduced here:

Figure 1: Reproduction of Ofqual’s evidence concerning the reliability of school exam grades

To interpret this chart, I refer to this extract from the report’s Executive Summary:

The probability of receiving the ‘definitive’ qualification grade varies by qualification and subject, from 0.96 (a mathematics qualification) to 0.52 (an English language and literature qualification).

This states that 96% of Maths grades (all varieties, at all levels), as awarded, are ‘definitive’/right, as are 52% of those for Combined English Language and Literature (a subject available only at A level). Accordingly, by implication, 4% of Maths grades, and 48% of English Language and Literature grades, are ‘non-definitive’/wrong. Maths grades, as awarded, can therefore be regarded as 96% reliable; English Language and Literature grades as 52% reliable.

Scrutiny of the chart will show that the heavy black line in the upper blue box for Maths maps onto about 0.96 on the horizontal axis; the equivalent line for English Language and Literature maps onto 0.52. The measures of the reliability of the grades for each of the other subjects are designated similarly. Ofqual’s report does not give any further numbers, but Table 1 shows my estimates from Ofqual’s Figure 12:

Subject                                                     Probability of           Probability of
                                                            ‘definitive’ grade       ‘non-definitive’ grade
Maths (all varieties)                                       96%                      4%
Chemistry                                                   92%                      8%
Physics                                                     88%                      12%
Biology                                                     85%                      15%
Psychology                                                  78%                      22%
Economics                                                   74%                      26%
Religious Studies                                           66%                      34%
Business Studies                                            66%                      34%
Geography                                                   65%                      35%
Sociology                                                   63%                      37%
English Language                                            61%                      39%
English Literature                                          58%                      42%
History                                                     56%                      44%
Combined English Language and Literature (A level only)    52%                      48%

Table 1: My estimates of the reliability of school exam grades, as inferred from measurements of Ofqual’s Figure 12.

Ofqual’s report does not present any corresponding information for each of GCSE, AS or A level separately, nor any analysis by exam board. Also absent is a measure of the all-subject overall average. Given, however, the maximum value of 96%, and the minimum of 52%, the average is likely to be somewhere in the middle, say, in the seventies; in fact, if each subject is weighted by its cohort, the resulting average over the 14 subjects shown is about 74%. Furthermore, if other subjects – such as French, Spanish, Computing, Art… – are taken into consideration, the overall average is most unlikely to be greater than 82% or less than 66%, suggesting that an overall average reliability of 75% for all subjects is a reasonable estimate.
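
To make the averaging step concrete, here is a minimal Python sketch – my own illustration, not Ofqual’s or the blog’s calculation – that takes the Table 1 estimates, computes a simple mean, and shows how a cohort-weighted mean would be formed. The cohort weights below are invented placeholders; a real weighting would use actual entry numbers per subject.

    # Illustrative sketch: averaging the per-subject reliability estimates in Table 1.
    # The cohort weights are hypothetical placeholders, not Ofqual figures.
    reliability = {
        "Maths (all varieties)": 0.96, "Chemistry": 0.92, "Physics": 0.88,
        "Biology": 0.85, "Psychology": 0.78, "Economics": 0.74,
        "Religious Studies": 0.66, "Business Studies": 0.66, "Geography": 0.65,
        "Sociology": 0.63, "English Language": 0.61, "English Literature": 0.58,
        "History": 0.56, "Combined English Language and Literature": 0.52,
    }

    # Simple (unweighted) mean across the 14 subjects -- comes out at about 71%
    unweighted = sum(reliability.values()) / len(reliability)

    # Cohort-weighted mean: every subject weighted 1.0 except two illustrative
    # large-entry subjects, purely to show the mechanics of the calculation
    weights = {subject: 1.0 for subject in reliability}
    weights["Maths (all varieties)"] = 3.0
    weights["English Language"] = 2.0
    weighted = (sum(reliability[s] * weights[s] for s in reliability)
                / sum(weights.values()))

    print(f"Unweighted mean reliability: {unweighted:.0%}")
    print(f"Illustrative cohort-weighted mean: {weighted:.0%}")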

That’s the evidence that, across all subjects and levels, about 75% of grades, as awarded, are ‘definitive’/right and 25% – one in four – are ‘non-definitive’/wrong – evidence that has been in the public domain since 2018. But evidence that has been much disputed by those with vested interests.

Ofqual’s results are readily explained. We all know that different examiners can, legitimately, give the same answer (slightly) different marks. As a result, the script’s total mark might lie on different sides of a grade boundary, depending on who did the marking. Only one grade, however, is ‘definitive’.

Importantly, there are no errors in the marking studied by Ofqual – in fact, Ofqual’s report mentions ‘marking error’ just once, and then in a rather different context. All the grading discrepancies measured in Ofqual’s research are therefore attributable solely to legitimate differences in academic opinion. And since the range of legitimate marks is far narrower in subjects such as Maths and Physics than in English Literature and History, the probability that an ‘assistant’ examiner’s legitimate mark results in a ‘non-definitive’ grade is much higher for, say, History than for Physics. Hence the sequence of subjects in Ofqual’s Figure 12.
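
This mechanism can be illustrated with a toy simulation – not Ofqual’s methodology, and using invented grade boundaries and mark spreads – showing why a wider range of legitimate marks produces more ‘non-definitive’ grades: the further the assistant examiner’s legitimate mark can drift from the senior examiner’s, the more often a script ends up on the other side of a grade boundary.

    # Toy Monte Carlo sketch (illustrative only, not Ofqual's method): how legitimate
    # differences in marking turn into 'non-definitive' grades at grade boundaries.
    import random

    def grade(mark, boundaries=(40, 50, 60, 70, 80)):
        """Map a total mark to a grade index using invented boundaries."""
        return sum(mark >= b for b in boundaries)

    def simulated_reliability(mark_spread, trials=100_000, max_mark=100):
        """Share of scripts whose assistant-examiner grade matches the 'definitive' grade.

        mark_spread is the typical legitimate difference (in marks) between two
        examiners for the same script -- small for Maths, large for History.
        """
        matches = 0
        for _ in range(trials):
            definitive_mark = random.uniform(0, max_mark)
            assistant_mark = definitive_mark + random.gauss(0, mark_spread)
            if grade(assistant_mark) == grade(definitive_mark):
                matches += 1
        return matches / trials

    print("Narrow legitimate range (Maths-like): ", round(simulated_reliability(1.0), 2))
    print("Wide legitimate range (History-like): ", round(simulated_reliability(5.0), 2))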

As regards appeals, in 2016, Ofqual – in full knowledge of the results of this research (see paragraph 28 of this Ofqual Board Paper, dated 18 November 2015) – changed the rules, requiring that a grade can be changed only if a ‘review of marking’ discovers a ‘marking error’. To quote an Ofqual ‘news item’ of 26 May 2016:

Exam boards must tell examiners who review results that they should not change marks unless there is a clear marking error. …It is not fair to allow some students to have a second bite of the cherry by giving them a higher mark on review, when the first mark was perfectly appropriate. This undermines the hard work and professionalism of markers, most of whom are teachers themselves. These changes will mean a level-playing field for all students and help to improve public confidence in the marking system.

This assumes that the legitimate marks given by different examiners are all equally “appropriate”, and identical in every way.

This assumption, however, is false: if one of those marks corresponds to the ‘definitive’ grade, and another to a ‘non-definitive’ grade, they are not identical at all. Furthermore, as already mentioned, there is hardly any mention of marking errors in Ofqual’s November 2018 report. All the grade discrepancies it identified can therefore only be attributable to legitimate differences in academic opinion, and so cannot be discovered and corrected under the rules that have been in place since 2016.

Over to you…

So, back to that case study.

Having read this far, like Sam, you have knowledge of wrong-doing – not Pat tearing a strip off Alex, but Ofqual awarding some 1.5 million wrong grades every year. All with no right of appeal.

What are you going to do?

You’re probably thinking something like, “Nothing”, “It’s not my job”, “It’s not my problem”, “I’m in no position to do anything, even if I wanted to”.

All of which I understand. No, it’s certainly not your job. And it’s not your problem directly, in that it’s not you being awarded the wrong grade. But it might be your problem indirectly – if you are involved with admissions, and if grades play a material role, you may be accepting a student who is not fully qualified (in that the grade on the certificate might be too high), or – perhaps worse – rejecting a student who is (in that the grade on the certificate is too low). Just to make that last point real, about one candidate in every six with a certificate showing AAA for A level Physics, Chemistry and Biology in fact merited at least one B. If such a candidate takes a place at Med School, for example, not only is that candidate under-qualified, but a place has also been denied to a candidate whose certificate shows AAB but who merited AAA.
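
For what it is worth, that ‘one in six’ figure can be roughly reconstructed from the Table 1 estimates. The sketch below is my own back-of-envelope check, not the author’s or Ofqual’s workings: it assumes the three subject grades are independent, that roughly half of ‘non-definitive’ grades are over-grades, and it ignores the effect of conditioning on the candidate actually holding an AAA certificate.

    # Rough back-of-envelope reconstruction (my assumptions, not the blog's workings).
    reliabilities = {"Physics": 0.88, "Chemistry": 0.92, "Biology": 0.85}  # from Table 1
    over_grade_share = 0.5  # assumption: over- and under-grading equally likely

    p_no_grade_inflated = 1.0
    for p_definitive in reliabilities.values():
        # A subject grade is not inflated if it is definitive, or if it is
        # non-definitive but an under-grade rather than an over-grade.
        p_no_grade_inflated *= p_definitive + (1 - p_definitive) * (1 - over_grade_share)

    p_at_least_one_inflated = 1 - p_no_grade_inflated
    print(f"Share of AAA certificates with at least one inflated grade: "
          f"{p_at_least_one_inflated:.0%}")  # roughly one in six under these assumptions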

And although you, as an individual, are indeed not in a position to do anything about it, you, collectively, surely are.

HE is, by far, the largest and most important user of A levels – and it is relying on a ‘product’ that is only about 75% reliable. HE, collectively, could put significant pressure on Ofqual to fix this, if only by printing “OFQUAL WARNING: THE GRADES ON THIS CERTIFICATE ARE ONLY RELIABLE, AT BEST, TO ONE GRADE EITHER WAY” on every certificate – not my statement, but one made by Ofqual’s then Chief Regulator, Dame Glenys Stacey, in evidence to the 2 September 2020 hearing of the Education Select Committee, and in essence equivalent to the fact that about one grade in four is wrong. That would ensure that everyone is aware that any decision based on a grade, as shown on a certificate, is intrinsically unsafe.

But this – or some other solution – can happen only if your institution, along with others, acts accordingly. And that can happen only if you and your colleagues band together to influence your department, your faculty, your institution.

Yes, that is a bother. Yes, you do have other urgent things to do.

If you do nothing, nothing will happen.

But if you take action, you can make a difference.

Don’t just walk on by.

Dennis Sherwood is a management consultant with a particular interest in organisational cultures, creativity and systems thinking. Over the last several years, Dennis has also been an active campaigner for the delivery of reliable GCSE, AS and A level grades. If you enjoyed this, you might also like https://srheblog.com/tag/sherwood/.



Becoming a professional services researcher in HE – making the train tracks converge

by Charlotte Verney

This blog builds on my presentation at the BERA ECR Conference 2024: at crossroads of becoming. It represents my personal reflections on working in UK higher education (HE) professional services roles while simultaneously gaining research experience through a Masters and a Professional Doctorate in Education (EdD).

Professional service roles within UK HE include recognised professionals from other industries (eg human resources, finance, IT) and HE-specific roles such as academic quality, research support and student administration. Unlike academic staff, professional services staff are not typically required, or expected, to undertake research, yet many do. My own experience spans roles within six universities over 18 years delivering administration and policy that supports learning, teaching and students.

Traversing two tracks

In 2016, at an SRHE Newer Researchers event, I was asked to identify a metaphor to reflect my experience as a practitioner researcher. I chose this image of two train tracks as I have often felt that I have been on two development tracks simultaneously –  one building professional experience and expertise, the other developing research skills and experience. These tracks ran in parallel, but never at the same pace, occasionally meeting on a shared project or assignment, and then continuing on their separate routes. I use this metaphor to share my experiences, and three phases, of becoming a professional services researcher.

Becoming research-informed: accelerating and expanding my professional track

The first phase was filled with opportunities; on my professional track I gained a breadth of experience, a toolkit of management and leadership skills, a portfolio of successful projects and built a strong network through professional associations (eg AHEP). After three years, I started my research track with a masters in international higher education. Studying felt separate to my day job in academic quality and policy, but the assignments gave me opportunities to bring the tracks together, using research and theory to inform my practice – for example, exploring theoretical literature underpinning approaches to assessment whilst my institution was revising its own approach to assessing resits. I felt like a research-informed professional, and this positively impacted my professional work, accelerating and expanding my experience.

Becoming a doctoral researcher: long distance, slow speed

The second phase was more challenging. My doctoral journey was long, taking 9 years with two breaks. Like many part-time doctoral students, I struggled with balance and support, with unexpected personal and professional pressures, and I found it unsettling to simultaneously be an expert in my professional context yet a novice in research. I feared failure, and damaging my professional credibility as I found my voice in a research space.

What kept me going, balancing the two tracks, was building my own research support network and my researcher identity. I did this in several ways: Zoom calls with EdD peers for moral support, joining the Society for Research into Higher Education to find my place in the research field, and joining the editorial team of a practitioner journal to build my confidence in academic writing.

Becoming a professional services researcher: making the tracks converge

Having completed my doctorate in 2022, I’m now actively trying to bring my professional and research tracks together. Without a roadmap, I’ve started in my comfort-zone, sharing my doctoral research in ‘safe’ policy and practitioner spaces, where I thought my findings could have the biggest impact. I collaborated with EdD peers to tackle the daunting task of publishing my first article. I’ve drawn on my existing professional networks (ARC, JISC, QAA) to establish new research initiatives related to my current practice in managing assessment. I’ve made connections with fellow professional services researchers along my journey, and have established an online network  to bring us together.

Key takeaways for professional services researchers

Bringing my professional experience and research tracks together has not been without challenges, but I am really positive about my journey so far, and for the potential impact professional services researchers could have on policy and practice in higher education. If you are on your own journey of becoming a professional services researcher, my advice is:

  • Make time for activities that build your research identity
  • Find collaborators and a community
  • Use your professional experience and networks
  • It’s challenging, but rewarding, so keep going!

Charlotte Verney is Head of Assessment at the University of Bristol. Charlotte is an early career researcher in higher education research and a leader within higher education professional services. Her primary research interests are in the changing nature of administrative work within universities, using research approaches to solve professional problems in higher education management, and using creative and collaborative approaches to research. Charlotte advocates for making the academic research space more inclusive for early career and professional services researchers. She is co-convenor of the SRHE Newer Researchers Network and has established an online network for higher education professional services staff engaged with research.



Gaps in sustainability literacy in non-STEM higher education programmes

by Erika Kalocsányiová and Rania Hassan

Promoting sustainability literacy in higher education is crucial for deepening students’ pro-environmental behaviour and mindset (Buckler & Creech, 2014; UNESCO, 1997), while also fostering social transformation by embedding sustainability at the core of the student experience. In 2022, our group received an SRHE Scoping Award to synthesise the literature on the development, teaching, and assessment of sustainability literacy in non-STEM higher education programmes. We conducted a multilingual systematic review of post-2010 publications from the European Higher Education Area (EHEA), with the results summarised in Kalocsányiová et al (2024).

Out of 6,161 articles that we identified as potentially relevant, 92 studies met the inclusion criteria and are reviewed in the report. These studies involved a total of 11,790 participants and assessed 9,992 university programmes and courses. Our results suggest a significant growth in research interest in sustainability in non-STEM fields since 2017, with 75 studies published compared to just 17 in the preceding seven years. Our analysis also showed that Spain, the United Kingdom, Germany, Turkey, and Austria had the highest concentration of publications, with 25 EHEA countries represented in total.

The 92 reviewed studies were characterised by high methodological diversity: nearly half employed quantitative methods (47%), followed by qualitative studies (40%) and mixed methods research (13%). Curriculum assessments using quantitative content analysis of degree and course descriptors were among the most common study types, followed by surveys and intervention or pilot studies. Curriculum assessments provided a systematic way to evaluate the presence or absence of sustainability concepts within curricula at both single HE institutions and in comparative frameworks. However, they often captured only surface-level indications of sustainability integration into undergraduate and postgraduate programmes, without providing evidence on actual implementation and/or the effectiveness of different initiatives.

Qualitative methods, including descriptive case studies and interviews that focused on barriers, challenges, implementation strategies, and the acceptability of new sustainability literacy initiatives, made up 40% of the current research. Mixed methods studies accounted for 13% of the reviewed articles, often applying multiple assessment tools simultaneously, including quantitative sustainability competency assessment instruments combined with open-ended interviews or learning journals.

In terms of disciplines, Economics, Business, and Administrative Studies held the largest share of reviewed studies (26%), followed by Education (23%). Multiple disciplines accounted for 22% of the reviewed publications, reflecting the interconnected nature of sustainability. Finance and Accounting contributed only 6%, indicating a need for further research. Similarly, Language and Linguistics, Mass Communication and Documentation, and Social Sciences collectively represented only 12% of the reviewed studies. Creative Arts and Design with just 2% was also a niche area. Although caution should be exercised when drawing conclusions from these results, they highlight the need for more research within the underrepresented disciplines. This in turn can help promote awareness among non-STEM students, stimulate ethical discussions on the cultural dimensions of sustainability, and encourage creative solutions through interdisciplinary dialogue.

Regarding factors and themes explored, the studies focused primarily on the acquisition of sustainability knowledge and competencies (27%), curriculum assessment (23%), challenges and barriers to sustainability integration (10%), implementation and evaluation research (10%), changes in students’ mindset (9%), key competences in sustainability literacy (5%), and active student participation in Education for Sustainable Development (5%). In terms of studies discussing acquisition processes, key focus areas included the teaching of Sustainable Development Goals, awareness of macro-sustainability trends, and knowledge of local sustainability issues. Studies on sustainability competencies focussed on systems thinking, critical thinking, problem-solving skills, ethical awareness, interdisciplinary knowledge, global awareness and citizenship, communication skills, and action-oriented mindset. These competencies and knowledge, which are generally considered crucial for addressing the multifaceted challenges of sustainability (Wiek et al., 2011), were often introduced to non-STEM students through stand-alone lectures, workshops, or pilot studies involving new cross-disciplinary curricula.

Our review also highlighted a broad range of pedagogical approaches adopted for sustainability teaching and learning within non-STEM disciplines. These covered case and project-based learning, experiential learning methods, problem-based learning, collaborative learning, reflection groups, pedagogical dialogue, flipped classroom approaches, game-based learning, and service learning. While there is strong research interest in the documentation and implementation of these pedagogical approaches, few studies have so far attempted to assess learning outcomes, particularly regarding discipline-specific sustainability expertise and real-world problem-solving skills.

Many of the reviewed studies relied on single-method approaches, meaning valuable insights into sustainability-focused teaching and learning may have been missed. For instance, studies often failed to capture the complexities surrounding sustainability integration into non-STEM programmes, either by presenting positivist results that require further contextualisation or by offering rich context limited to a single course or study group, which cannot be generalised. The assessment tools currently used also seemed to lack consistency, making it difficult to compare outcomes across programmes and institutions to promote best practices. More robust evaluation designs, such as longitudinal studies, controlled intervention studies, and mixed methods approaches (Gopalan et al, 2020; Ponce & Pagán-Maldonado, 2015), are needed to explore and demonstrate the pedagogical effectiveness of various sustainability literacy initiatives in non-STEM disciplines and their impact on student outcomes and societal change.

In summary, our review suggests good progress in integrating sustainability knowledge and competencies into some core non-STEM disciplines, while also highlighting gaps. Based on the results we have formulated some questions that may help steer future research:

  • Are there systemic barriers hindering the integration of sustainability themes, challenges and competencies into specific non-STEM fields?
  • Are certain disciplines receiving disproportionate research attention at the expense of others?
  • How do different pedagogical approaches compare in terms of effectiveness for fostering sustainability literacy in and across HE fields?
  • What new educational practices are emerging, and how can we fairly assess them and evidence their benefits for students and the environment?

We would also like to encourage other researchers to engage with knowledge produced in a variety of languages and educational contexts. The multilingual search and screening strategy implemented in our review enabled us to identify and retrieve evidence from 25 EHEA countries and 24 non-English publications. If reviews of education research remain monolingual (English-only), important findings and insights will go unnoticed, hindering knowledge exchange, creativity, and innovation in HE.

Dr. Erika Kalocsányiová is a Senior Research Fellow with the Institute for Lifecourse Development at the University of Greenwich, with research centering on public health and sustainability communication, migration and multilingualism, refugee integration, and the implications of these areas for higher education policies.

Rania Hassan is a PhD student and a research assistant at the University of Greenwich. Her research centres on exploring enterprise development activities within emerging economies. As a multidisciplinary and interdisciplinary researcher, Rania is passionate about advancing academia and promoting knowledge exchange in higher education.



Promoting equity and employability using live briefs

by Lucy Cokes

‘Live briefs’ are used in Higher Education programmes, and I suggest that they can help promote equity and employability if they are used in very specific ways. The use of live briefs takes place not only in the creative industries, but also across more practical or core subjects in HE, and has many parallels with a wide range of other teaching tools.

Live briefs have been part of my teaching on the Creative Advertising degrees at Falmouth University for the past 10 years, with live paid briefs used as part of assessment for the last four years. Done right, live briefs – with their ability to test students through an authentic task, develop creative problem-solving skills and, in turn, enhance student satisfaction – are, I passionately believe, a valuable tool.

How live briefs are usually used

A live brief is defined as “a type of design project that is distinct from a typical studio project in its engagement of real clients or users, in real-time settings” (Sara, 2006, p. 1). Often, lecturers believe they are assigning ‘live briefs’, but frequently these are merely ‘simulations’ or ‘mock briefs’ using either outdated or fictional client briefs, which lack a genuine and immediate client need. Distant cousins of live briefs include the use of case studies in teaching, or the use of authentic tasks. However, I believe that the use of a live brief should be the unrivalled method of enhancing students’ employability skills and prospects at university, in comparison to these other approaches. Typically, live briefs are sourced through lecturers’ professional networks and are most frequently presented to students as an extracurricular opportunity. These opportunities have often resulted in students securing paid or unpaid placements at agencies or being offered full-time positions post-graduation. By not fully embedding these live opportunities into assessments, we inadvertently disadvantage those who are already disadvantaged.

How live briefs could be used

Live briefs can, with effort, be integrated into the students’ assessment brief for their modules. Students are often asked to deliver a pitch to the client as part of their assessment, with one of the ideas then chosen by the client. The winning students should ideally be paid for their time, with full guidance from tutors, who provide feedback and project-manage the process. Course leaders need to use caution when explicitly stating that a particular module will contain live paid briefs, as these are often hard to come by. Instead, it is suggested that modules be designed in such a way that live briefs can be ‘plugged in’ when available.

There are many challenges in using live briefs, these include:

  • Planning in good time prior to start of a module.
  • The need to fit timings with pre-established assessment deadlines.
  • Additional time required for lecturers to source the live briefs and manage the ‘clients’.
  • Potential administration constraints with invoicing ‘clients’, paying students and suppliers.

Live briefs seem particularly well-suited to non-profits, small businesses, or government agencies. Experience has shown that these types of organisations tend to see the partnership with a university and its students as more cost-effective and as providing social benefits, whilst also being more flexible around module deadlines.

Organisations benefit from bringing their projects to the university, as they gain a dedicated fresh set of minds working on their problem. The same clients often come back year after year. Chris Thompson of Safer Futures shared that he “…thought the standard, confidence and professionalism of the student pitches and research was exceptional.”

The hierarchy of live briefs has been produced to assist lecturers in deciding how best to use live briefs in their teaching, and to push for the gold star of having paid opportunities embedded into assessment. This is a call for a shift in culture and attitudes toward the use of live briefs, so that we are not inadvertently decreasing social mobility in the UK through their use.

Live Brief Hierarchy

The hierarchy has been designed to help lecturers navigate the options whilst considering the ever-increasing demand for improved employability equity.

Figure 1: Hierarchy of the use of Live Briefs in University Teaching.
Ranked based on perceived equity and employability status.

Working on live briefs enhances students’ employability by improving general employability skills and by providing work they can include in their portfolios and CVs. The approach of using live briefs outside of assessment does not provide equal opportunities to students from diverse backgrounds. Less privileged students often work nearly full-time during evenings and weekends to support themselves financially while studying. Indeed, 55% of UK students now work an average of 13.5 hours a week, meaning they have less availability to participate in extracurricular assignments (BBC, 2023). The Social Mobility Commission has noted that “unpaid internships are damaging for social mobility” (Milburn, 2017). I see a parallel between the use of extracurricular briefs and unpaid internships, so I advocate that we discourage the use of unpaid extracurricular briefs, as they reduce our chances of ‘levelling up’ in the UK.

The Gold Star of Live Briefs

Justyna, BA Creative Advertising graduate, shared her thoughts on working on a paid live brief. “It gave me more motivation to produce the best possible work. But it was mainly because I was excited about the opportunity to actually make a campaign, still as a student. It was a great way of getting work experience and seeing how the industry works. I believe that the campaign I made is one of the most valuable experiences on my CV”.

Embedding live briefs into assessment, producing work for clients, and compensating students for their contributions present significant challenges. However, I believe incremental improvements to the existing practice of utilising live briefs outside of formal assessment without remuneration should be pursued. The deliberate consideration of these options and the effort to implement such changes will gradually shift the culture and attitudes toward the use of live briefs among both university academic staff and external organisations. This progressive adaptation will enhance the integration of live briefs into the curriculum, ultimately benefiting the student experience, learning and employability whilst simultaneously resulting in clear knowledge exchange advantages for the external organisations.

Lucy Cokes is a senior lecturer at Falmouth University, School of Communications. She has been working in higher education for the past ten years and is a Fellow of Advance HE. She leads the Behaviour Change for Good modules on the Advertising courses and started the inhouse agency ‘BE good’ to manage the live projects which have included a number of government funded campaigns around VAWG and Healthy Relationships. Prior to this she ran a highly successful digital marketing agency with 80 staff in the UK across 3 offices.

References

BBC (2023) Most university students working paid jobs, survey shows. [Online] Available at: https://www.bbc.co.uk/news/education-65964375 [Accessed 23 August 2023]

Milburn, A. (2017) Unpaid internships are damaging to social mobility. [Online] Available at: https://www.gov.uk/government/news/unpaid-internships-are-damaging-to-social-mobility [Accessed 22 August 2023].

Sara, R. (2006) Live Project Good Practice: A Guide for the Implementation of Live Projects, s.l.: Centre for Education in the Built Environment



The hidden layers of transparency in UK HE assessment practices

by Chahna Gonsalves and Zhongan Lin

Transparency in assessment practices is a critical component of the UK’s higher education sector, but it is a term that carries many layers of meaning. This blog post explores a study that examined how transparency is framed in assessment policies across 151 UK higher education institutions (HEIs). The findings reveal that while institutions strive for transparency, they often overlook the complexities and multidimensional nature of the concept.

Understanding transparency: more than just clear documentation

Transparency in assessment is often associated with clear documentation of criteria, grading practices, and feedback mechanisms. However, this techno-rational approach, which emphasizes explicit documentation and information dissemination, is just one facet of transparency. Our study highlights the need for a more nuanced understanding that includes socio-cultural practices and socio-material enactments.

Techno-rational approaches: the dominant paradigm

The study found that techno-rational approaches dominate the transparency discourse in HEI policies. These approaches focus on ensuring that assessment criteria, learning outcomes, and grading standards are clearly articulated and accessible. For example, many policies mandate the use of detailed assessment briefs, rubrics, and grade descriptors. While this approach aims to make evaluative processes clear and consistent, it often falls short in addressing the dynamic and interpretive nature of academic standards.

One of the most compelling findings was the over-reliance on explicit standards documents, which presume that written criteria can universally ensure fairness and consistency. This static view overlooks the reality that academic standards are co-constructed within specific social and cultural contexts. Without acknowledging this, policies may fail to convey the nuanced, tacit knowledge necessary for fully understanding and applying assessment criteria.

The limitations of techno-rational transparency

Simply providing clear documentation does not guarantee that all stakeholders will understand or effectively use the information. For instance, non-native English speakers and students with varying levels of academic literacy may struggle with the language used in assessment criteria. Moreover, policies often fail to specify effective methods for disseminating this information, relying heavily on static documents rather than interactive or diverse formats that could enhance understanding.

Socio-cultural practices: engaging stakeholders in meaningful dialogue

Beyond documentation, transparency also involves socio-cultural practices that engage stakeholders in ongoing dialogue and clarification of assessment criteria. Policies that promote discussion between educators and students, co-creation of assessment criteria, and collaborative marking processes can foster a deeper understanding and shared meaning of what is expected. For instance, involving students in the creation of rubrics and providing opportunities for mock marking can enhance their evaluative judgment and assessment literacy.

One interesting insight from the study was the importance of dialogue in building a shared understanding of assessment standards. Policies that encourage discussion about assessment criteria not only help students grasp what is expected but also allow educators to refine and clarify their expectations. This dynamic, interactive process contrasts sharply with the static dissemination of information typical of techno-rational approaches.

Socio-material enactments: the role of tools and artefacts

The study also highlights the importance of socio-material enactments, where transparency is realized through the interaction between social practices and material artifacts. This includes the use of digital platforms, rubrics, exemplars, and other assessment tools that facilitate a tangible understanding of assessment criteria. Effective use of these tools can bridge the gap between educators’ tacit knowledge and students’ understanding, fostering a more comprehensive view of transparency.

For example, the use of digital platforms to share assessment criteria and feedback can significantly enhance transparency. These platforms allow for continuous access and interaction with assessment materials, making it easier for students to understand and engage with the criteria. However, the study found that detailed guidance on such platforms is often scant in policies, pointing to a significant area for improvement.

Who benefits from transparency? A multifaceted audience

Transparency in assessment is not solely for students. It also encompasses other stakeholders, including markers, external examiners, tutors, and even employers. The study found that while most policies address the need for transparency for students and markers, they often neglect other crucial stakeholders. This oversight can lead to inconsistencies in how assessments are interpreted and applied, potentially undermining the fairness and effectiveness of the evaluation process.

A particularly intriguing aspect of the study was the identification of specific roles and responsibilities for promoting transparency. By clearly defining who is responsible for ensuring transparency – whether it be module leaders, programme teams, or tutors – institutions can better align their policies with the needs of various stakeholders. This clarity can help avoid the pitfalls of ambiguous roles and ensure a more consistent application of assessment criteria.

Methodology: building the framework

To develop a comprehensive framework we conducted a detailed content analysis of assessment policy documents from 151 UK HEIs. The data collection process involved systematically retrieving and examining these publicly accessible documents, which included academic manuals, assessment policies, feedback strategies, and codes of practice. We excluded documents that were outdated or inaccessible, resulting in a final corpus of 264 documents. Through both deductive and inductive coding methods, we analysed the texts to identify recurring themes and patterns related to transparency. This process involved categorising the data into three main discourses – techno-rational, socio-cultural, and socio-material – guided by Ajjawi, Bearman, and Boud’s (2021) framework. The iterative coding and categorization helped us build a nuanced understanding of how transparency is conceptualized and communicated in HEI assessment policies.

Towards a holistic framework for transparency

Our study proposes a holistic framework that integrates techno-rational, socio-cultural, and socio-material approaches to transparency. This framework emphasizes the need for clear, accessible documentation, active engagement with stakeholders, and effective use of assessment tools and artifacts. By recognizing the diverse needs of all stakeholders, HEIs can develop more inclusive and effective assessment policies.

One of the key contributions of this study is its challenge to the notion of transparency as a static attribute. Instead, transparency is presented as a dynamic, contextually situated practice that requires continuous negotiation and interaction among stakeholders. This perspective shifts the focus from merely providing information to actively engaging stakeholders in the assessment process.

Figure 1. Framework of assessment transparency in Higher Education

Implications for policy and practice

To improve transparency in assessment, HEIs must move beyond merely publishing information to actively engaging with stakeholders through dialogue and interaction. Policies should be clear about the roles and responsibilities of different stakeholders in ensuring transparency. Furthermore, the use of diverse and interactive dissemination methods can enhance understanding and support students’ academic success.

For policymakers, the study suggests that transparency should be explicitly defined within institutional contexts, with guidelines that emphasize both the dissemination of information and the engagement of stakeholders. Educational practitioners are encouraged to adopt participatory practices in assessment design, involving students in creating and understanding assessment criteria, which is pivotal in promoting transparency.

Conclusion: enhancing transparency for a fairer education system

Transparency in assessment is a complex, multifaceted concept that goes beyond clear documentation. By integrating techno-rational, socio-cultural, and socio-material approaches, HEIs can foster a more inclusive and effective assessment environment. This study underscores the importance of comprehensive policies that not only provide clear information but also engage stakeholders in meaningful ways, ultimately contributing to a fairer and more equitable higher education system.

Reflecting on our roles as stakeholders

As readers, it is crucial to reflect on our roles within the higher education assessment ecosystem. Whether we are students, educators, policymakers, or external examiners, we each play a part in fostering transparency. Understanding the nuances of transparency and actively engaging in dialogue and interaction can help us contribute to more equitable and effective assessment practices. By recognizing and fulfilling our roles, we can collectively enhance the transparency and quality of education in our institutions.

Reference

Gonsalves, C and Lin, Z (2024) ‘Clear in advance to whom? Exploring ‘transparency’ of assessment practices in UK higher education institution assessment policy’ Studies in Higher Education, 1-17. https://doi.org/10.1080/03075079.2024.2381124

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association. Her interest in rubrics and the language of assessment is an extension of her role as Department Education Lead.

Zhonghan Lin is a Doctoral Researcher based at the Center for Language, Discourse and Communication, the School of Education, Communication and Society, King’s College London. Her research interests include urban multilingualism, education in ethnically and linguistically diverse societies, and family language policy.



Doctoral progress reviews: managing KPIs or developing researchers?

by Tim Clark

All doctoral students in the UK are expected to navigate periodic, typically annual, progress reviews as part of their studies (QAA, 2020). Depending on the stage, and the individual institutional regulations, these often play a role in determining confirmation of doctoral status and/or continuation of studies. Given that there were just over 100,000 doctoral students registered in the UK in 2021 (HESA, 2022), it could therefore be argued that the progress review is a relatively prominent, and potentially high-stakes, example of higher education assessment. However, despite this potential significance, guidance relating to doctoral progress reviews is fairly sparse; institutional processes and terminology reflect considerable variations in approach; empirical research to inform design is extremely limited (Dowle, 2023); and, perhaps most importantly, the purpose of these reviews is often unclear or contested.

At the heart of this lack of clarity appears to be a tension surrounding the frequent positioning of progress reviews as primarily institutional tools for managing key performance indicators relating to continuation and completion, as opposed to primarily pedagogical tools for supporting individual students’ learning (Smith McGloin, 2021). Interestingly, however, there is currently very little research regarding effectiveness or practice in relation to either of these aspects. Yet there is growing evidence to support an argument that this lack of clarity regarding purpose may frequently represent a key limitation in terms of engagement and value (Smith McGloin, 2021; Sillence, 2023; Dowle, 2023). As Bartlett and Eacersall (2019) highlight, the common question is ‘why do I have to do this?’

As a relatively new doctoral supervisor and examiner with a research interest in doctoral pedagogy, and in the context of these tensions, I sought to use a pedagogical lens to explore a small group of doctoral students’ experiences of navigating their progress review. My intention for this blog is to share some learning from this work; a more detailed recent paper reporting on the study is also available here (Clark, 2023).

Methods and Approach

This research took place in one post-1992 UK university, where progress assessment consisted of submission of a written report, followed by an oral examination or review (depending on the stage). These progress assessments are undertaken by academic staff with appropriate expertise, who are independent of the supervision team. This was a small-scale study, involving six doctoral students, who were all studying within the humanities or social sciences. Students were interviewed using a semi-structured narrative ‘event-focused’ (Jackman et al, 2022) approach, to generate a rich narrative relating to their experience of navigating through the progress review as a learning event.

In line with the pedagogical focus, the concept of ‘assessment for learning’ was adopted as a theoretical framework (Wiliam, 2011). Narratives were then analysed using an iterative ‘visit and revisit’ (Srivastava and Hopwood, 2009) approach. This involved initially developing short vignettes to consider students’ individual experiences before moving between the research question, data and theoretical framework to consider key themes and ideas.

Findings

The study identified that the students understood their doctoral progress reviews as having significant potential for supporting their learning and development, but that specific aspects of the process were understood to be particularly important. Three key understandings arose from this: firstly, that the oral ‘dialogic’ component of the assessment was seen as most valuable in developing thinking, secondly, that progress reviews offered the potential to reframe and disrupt existing thinking relating to their studies, and finally, that progress reviews have the potential to play an important role in developing a sense of autonomy, permission and motivation.

In terms of design and practice, the value of the dialogic aspect of the assessment was seen to lie in its potential to extend thinking: the assessor, as a methodological and disciplinary ‘expert’, introduces invitational, coaching-style questions that provoke reflection and provide opportunities to justify and explore research decisions. When this approach was taken, students recalled moments where they were able to make ‘breakthroughs’ in their thinking or where they later realised that the discussion was significant in shaping their future research decisions. Alongside this, a respectful and supportive approach was viewed as important in enhancing psychological safety and creating a sense of ownership and permission in relation to their work:

“I think having that almost like mentoring, which is like a mini mentoring or mini coaching session, in these examination spots is just really helpful”

“I’m pootling along and it’s going okay and now this bombshell’s just dropped, but it was helpful because, yeah, absolutely it completely shifted it.”

“It’s my study… as long as I can justify academically and back it up. Why I’ve chosen to do what I’ve done then that’s okay.” 

Implications

Clearly this is a small-scale study with a relatively narrow disciplinary focus; however, its value is intended to lie in its potential to provoke consideration of progress reviews as tools for teaching, learning and researcher development, rather than to assert any generalisable understanding for practice.

This consideration may include questions which are relevant for research leaders, supervisors and assessors/examiners, and for doctoral students. Most notably: is there a shared understanding of the purpose of doctoral progress reviews and why we ‘have’ to do it? And how does this purpose inform design, practice and related training within our institutions?

Within this study it was evident that, in this context, the role of dialogic assessment was significant; given the additional resource required to protect or introduce such an approach, this may be an aspect which warrants further exploration and investigation to support decision making. The study also highlighted the perceived value of carefully constructed questions which invite and encourage reflection and learning, as opposed to seeking solely to ‘test’ it.

Dr Timothy Clark is Director of Research and Enterprise for the School of Education at the University of the West of England, Bristol. His research focuses on aspects of doctoral pedagogy and researcher development.



Lessons from learning analytics

by Liz Moores and Rob Summers

Why bother collecting learning analytics data?

Some of the reported benefits of using learning analytics data include enabling personalised learning and narrowing attainment gaps. Indeed, a quick dip into some of the recent TEF feedback summaries to higher education institutions seems to suggest that use of learning analytics is valued by TEF panels. But can we learn more from the data to influence teaching practice? Aside from the potential benefits for a more personalised learning experience, we think that it’s a good way of understanding the learning process more generally. Over the past few years, we’ve been analysing some of the data generated from Aston University.

Last-minute cramming is not effective in improving attainment

Yes, your parents were correct – it’s much better to work consistently! Early engagement with studies really appears to matter. In fact, the average attainment of those first-year students whose engagement remained at the lowest relative levels throughout the year was very similar to that of students whose engagement was lowest in the first three weeks but became the very highest in the last three weeks. In contrast, those who started off enthusiastically, but then lost interest, were awarded higher average marks than any of the groups that started off slowly, regardless of how much or whether their engagement peaked later. The consistency of the data – in that those who started off with high engagement tended to finish with high engagement – was remarkable. Also noteworthy were the effects of early engagement on attainment. For the chart below, we divided students into activity quintiles based only on their first three weeks of engagement (Q5 being the highest engagement) and into end-of-year mark quintiles (Q5 being the highest attainment). The width of the lines connecting engagement quintile to mark quintile indicates the proportion of students linking the two measures. The results highlight how few students pass from higher activity quintiles to lower mark quintiles, and vice versa.
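
The chart itself is not reproduced here, but the kind of cross-tabulation it summarises is straightforward to sketch. The snippet below is illustrative only – the column names, thresholds and synthetic data are mine, not the authors’ actual pipeline – and simply shows how students might be assigned to engagement and mark quintiles and the flow between them tabulated.

    # Illustrative sketch of the quintile analysis described above -- not the authors'
    # actual code. Assumes one row per student, with hypothetical columns
    # 'early_engagement' (first three weeks) and 'end_of_year_mark'.
    import numpy as np
    import pandas as pd

    def quintile_flow(df: pd.DataFrame) -> pd.DataFrame:
        """Cross-tabulate early-engagement quintiles against end-of-year mark quintiles."""
        df = df.copy()
        labels = ["Q1", "Q2", "Q3", "Q4", "Q5"]
        df["engagement_q"] = pd.qcut(df["early_engagement"], 5, labels=labels)
        df["mark_q"] = pd.qcut(df["end_of_year_mark"], 5, labels=labels)
        # Row-normalised proportions: for each engagement quintile, where do students end up?
        return pd.crosstab(df["engagement_q"], df["mark_q"], normalize="index").round(2)

    # Synthetic demonstration data, for illustration only
    rng = np.random.default_rng(0)
    engagement = rng.gamma(2.0, 10.0, size=1_000)
    marks = 40 + 0.8 * engagement + rng.normal(0, 10, size=1_000)
    demo = pd.DataFrame({"early_engagement": engagement, "end_of_year_mark": marks})
    print(quintile_flow(demo))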

Of course, these results come with the usual caveats that we cannot infer cause and effect (it could be that the lower engagers in the first three weeks were just low-achieving students). However, for us, this highlights the importance of a good induction into academic life – possibly enhanced by some structured engagement exercises to help get first years into good habits (ie tell them how they should be engaging, and the different ways that they can, not just that they should be doing so). There were probably a fair few students represented in this figure who were not even sure what they were supposed to be doing with all their ‘spare’ time.

Behaviour outweighs demographics when predicting attainment

The recent pandemic generated much discussion about digital poverty, suggesting that who we teach might be important – at the very least in terms of access to technology. Our recent evidence suggests that both how you teach and who you teach mattered. However, it is important to note that behaviour outweighed demographics in predicting attainment, albeit that in this case behaviour was probably also influenced by demographics. The gap between disadvantaged students’ attainment and that of their peers widened during online teaching and assessment conditions, and disadvantaged students were also less likely to obtain all 120 module credits on their first try. We also observed changes in their patterns of engagement, although less so for synchronously delivered teaching (as compared to recorded lectures). Students with the lowest engagement were the ones driving the widened gap; those who engaged well with synchronously provided teaching (even if online) fared much better.

So, we should stop teaching online and get people into the classroom early?

No – not necessarily. We don’t want to claim that all online teaching is bad – instead we need to understand what forms of online teaching work, what good looks like, and how our various teaching strategies affect different groups. Anecdotally, many students have appreciated the flexibility of online teaching, particularly where it has included facilities such as the ability to ask questions anonymously. And if you want to reuse those pre-recorded videos, there has been some interesting research from other research groups on ‘watch-parties’. With the cost-of-living crisis, many students will appreciate being able to log into a lecture from home rather than forking out a bus fare or missing out on some part time work. What is important is to understand what works – and for whom.

Professor Liz Moores is Deputy Dean in the College of Health and Life Sciences at Aston University and has research interests in the evaluation of higher education, particularly as applied to widening participation issues.

Dr Rob Summers is research manager at the Centre for Transforming Access and Student Outcomes (TASO). Before joining TASO, Rob worked in the student outreach team at Aston University managing a randomised controlled trial of two post-16 outreach programmes as part of the TASO MIOM (Multi-intervention, Outreach and Mentoring) project.


Leave a comment

The ongoing saga of REF 2028: why doesn’t teaching count for impact?

by Ian McNay

Surprise, surprise…or not.

The initial decisions on REF 2028 (REF 2028/23/01 from Research England et al), based on the report on FRAP – the Future Research Assessment Programme – contain one surprise and one non-surprise among nearly 40 decisions. To take the second first, it recommends, through its cost analysis report, that any future exercise ‘should maintain continuity with rules and processes from previous exercises’ and ‘issue the REF guidance in a timely fashion’ (para 82). It then introduces significant discontinuities in rules and processes, and anticipates giving final guidance only in winter 2024-5, when four years (more than half) of the assessment period will have passed.

The surprise is, finally, the open recognition of the negative effects on research culture and staff careers of the REF and its predecessors (para 24), identified by respondents to the FRAP consultation about the 2028 exercise. For me, this new humility is a double-edged sword: many of the defects identified have been highlighted in my evidence-based articles (McNay, 2016; McNay, 2022), and, indeed, by the report commissioned by HEFCE (McNay, 1997) on the impact on individual and institutional behaviour of the 1992 exercise:

  • Lack of recognition of a diversity of excellences, including work on local or regional issues, because of the geographical interpretation of national/international excellence (para 37). Such local work involves different criteria of excellence, perhaps recognised in references to partnership and wider impact.
  • The need for outreach beyond the academic community, such as a dual publication strategy – one article in an academic journal matched with one in a professional journal in practical language and close to utility and application of a project’s findings.
  • Deficient arrangements for assessing interdisciplinary work (paras 60 and 61)
  • The need for a different, ‘refreshed’, approach to appointments to assessment panels (para 28)
  • The ‘negative impact on the authenticity and novelty of research, with individuals’ agendas being shaped by perceptions of what is more suitable to the exercise: favouring short-term inputs and impacts at the expense of longer-term projects…staying away from areas perceived to be less likely to perform well’. ‘The REF encourages …focus on ‘exceptional’ impacts and those which are easily measurable, [with] researchers given ‘no safe space to fail’ when it came to impact’.
  • That last negative arises in major part from the internal management of the exercise, yet the report proposes an even greater corporate approach in future. The evidence-based articles, reports, innovative processes and artefacts that arise from our research will make a reduced contribution to published assessments of the quality of research, though there is encouragement of a wider diversity of research outputs. More emphasis will be placed on institutional and unit ‘culture’ (para 28), so individuals disappear, uncoupled from consideration of culture-based quality. That culture is controlled by management; I spent several years as a Head of School trying to protect and develop further a collegial enterprise culture, which encouraged research and innovative activities in teaching. The senior management wanted a corporate bureaucracy approach with targets and constant monitoring, which work at Exeter has shown leads to lower output, poorer quality and higher costs (Franco-Santos et al, 2014).

At least 20 per cent of the People, Culture and Environment sub-profile for a unit will be based on an assessment of the Institutional Level (IL) culture, and this element will make up 25 per cent of a unit’s overall quality profile, up from 15 per cent in 2021. This proposed culture-based approach will favour Russell Group universities even further – their accumulated capital has led to them outscoring other universities on ‘environment’ in recent exercises, even when the output scores have been the other way round. Elizabeth Gadd, of Loughborough, had a good piece on this issue in Wonkhe on 28 June 2023. The future may see research-based universities recruiting strongly in the international market to provide subsidy to research from higher student fees, leaving the rest of us to offer access and quality teaching to UK students on fees not adjusted for inflation. Some recognition of excellent research in an unsupportive environment would be welcome, as would reward for improvement, as operated when the polytechnics and colleges joined the research assessment exercises.

The culture of units will be judged by the panels – a separate panel will assess IL cultures – and will be based on a ‘structured statement’ from the management, assessing itself, plus a questionnaire submission. I have two concerns here: can academic panels competent to peer-assess research also judge the quality and contribution of management; and, given behaviours in the first round of impact assessment (Oancea, 2016), how far can we rely on the integrity of these statements?

The Contribution to Knowledge and Understanding sub-profile will make up 50 per cent of a unit’s quality profile – down from 60 per cent last time and 65 per cent in 2014. At least 10 per cent will be based on the structured statement, so Outputs – the one thing that researchers may have a significant role in – are down to only 40 per cent, at most, of what is meant by overall research quality (the FRAP International Committee recommended 33 per cent). Individuals will not be submitted. HESA data will be used to quantify staff, and the number of research outputs that can be submitted will be an average of 2.5 per FTE. There is no upper limit for an individual, and staff with no outputs can be included, as can those who support research by others, or technicians who publish. Research England (and this is mainly about England; the other three countries may do better and will certainly do things differently) is firm that the HESA numbers will not be used as the volume multiplier for funding (still a main purpose of the REF), though it is not clear where that will come from – Research England is reviewing its approach to strategic institutional research funding. Perhaps staff figures submitted to HESA will have an indicator of individuals’ engagement with research.
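Reading those figures as percentage points of a unit’s overall quality profile – as the arithmetic in the paragraph above implies – the split stacks up roughly as follows (the Engagement and Impact share is inferred here as the remainder, not stated explicitly above):

\begin{align*}
\text{People, Culture and Environment} &= 25\%\\
\text{Contribution to Knowledge and Understanding} &= 50\%\\
\quad\text{of which structured statement} &\geq 10\%\\
\quad\text{of which Outputs} &\leq 50\% - 10\% = 40\%\\
\text{Engagement and Impact (inferred remainder)} &= 100\% - 25\% - 50\% = 25\%
\end{align*}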

Engagement and Impact broadens the previous element of impact alone. Our masters have discovered that early engagement of external partners in research – a six-month attachment at 0.2 of a contract allows them to be included – enhances impact. Wow! Who knew? The work that has impact can be of any quality level, to avoid the current quality-level designations stopping local projects being acknowledged.

The three sub-profiles have fuzzy boundaries and overlap – not just in a linear connection (environment, output, impact) but also because, as noted above, engagement, for example, comes from the external environment yet becomes part of the internal culture. It becomes more of a Venn diagram, which allows the adoption of a ‘holistic’ approach to ‘responsible research assessment’. We wait to see what both of those mean in practice.

What is clear in that holistic approach is that research has nothing to do with teaching, and impact on teaching still does not count. That has created an issue for me in the past, since my research feeds (not leads) my teaching and vice versa. I use discovery learning and students’ critical incidents as curriculum organisers, and they produce ‘evidence’ similar to that gathered through more formal interview and observation methods. An example: I recently led a workshop for a small private HEI on academic governance, where there was a newly appointed CEO. I used a model of institutional and departmental cultures which influence decision making and ‘governance’ at different levels. That model, developed to help my teaching, is now regarded by some as a theoretical framework and used as a basis for research. Does it therefore qualify for inclusion in impact? The session asked participants to consider the balance among four cultures – collegial, bureaucratic, corporate and entrepreneurial – relating to the degrees of central control of policy development and of policy delivery (McNay, 1995). It then dealt with some issues more didactically, moving to the concept of the learning organisation, where I distributed a 20-item questionnaire (not yet published, but available on request for you to use) allowing behaviours relating to the capacity to change, innovate and learn, leading to improved quality, to be scored out of 10 per item. Only one person scored more than 100 in total, and across the group the modal score was in the low 70s, or just over 35%. That gave the new CEO an agenda, with some issues more easily pursued than others and with scores indicating levels of concern and priority. So my role moved into consultancy. There will be impact, but is the research base sufficient, was it even research, and does the use of teaching as a research transmission process (Boyer, 1990) disqualify it?
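For anyone checking the percentages in that example, the arithmetic runs roughly as follows (72 is used as an illustrative ‘low 70s’ value, not a reported figure):

\begin{align*}
\text{maximum possible score} &= 20 \text{ items} \times 10 = 200\\
\text{modal score} &\approx 72 \;\Rightarrow\; 72/200 = 36\% \text{ (just over 35\%)}\\
\text{the one score above 100} &\;\Rightarrow\; \text{more than 50\% of the maximum}
\end{align*}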

I hope this shows that the report contains a big agenda, with more to come. SRHE members need to consider what it means to them, but also what it means for research into institutions and departments to help define culture and its characteristics. I will not be doing it, but I hope some of you will. We need to continue to provide an evidence base to inform decisions even if it takes up to 20 years for the findings to have an impact.

SRHE itself might say several things in response to the report:

  • welcome the recognition of previous weaknesses, but note that a major one has not been recorded: the impact of the RAE/REF on teaching, when excellent research has gained extra money but excellent teaching has not, leading to an imbalance of effort within the HE system. The research-teaching nexus also needs incorporating into the holistic view of research. Teaching is a major element in the dissemination of research (Boyer, 1990), and so a conduit to impact, and should be recognised as such; the relationship between researcher/teacher and those gaining new knowledge and understanding is more intimate and interactive than that experienced by a reader of an article. Discovery learning, drawing on learners’ experiences in CPD programmes, can be a source of evidence, enhancing the knowledge and understanding of the researcher to incorporate in further work and research publications.
  • welcome the commitment to more diversity of excellences. In particular, welcome the commitment to recognise local and regionally directed research and its significant impact. The arguments about intimacy and interaction apply here, too. Research in partnership is typical of such work and different criteria are needed to evaluate excellence in this context.
  • welcome the intention to review panel membership to reflect the wider view of research now to be adopted.
  • urge earlier clarification of panel criteria, to avoid researchers spending at least another 18 months trying, without clarity or guidance, to do work that will fit the framework of judgement within which it will be judged.
  • be wary of losing the voice of the researchers in the reduction of emphasis on research and its outputs in favour of presentations on corporate culture.

References

McNay, I (1997) The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: the evidence from a research project. Bristol: HEFCE


1 Comment

Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules/programmes/qualifications etc. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …