SRHE Blog

The Society for Research into Higher Education



From ad hoc to constructive: the ABC levels of GenAI integration in business education

by Qianqian Chai and Xue Zhou

Introduction – the challenge of GenAI integration in business education

Since the release of ChatGPT, Generative Artificial Intelligence (GenAI) has rapidly entered higher education. Business schools, with their strong ties to industry and emphasis on applied skills, provide a particularly important setting for examining GenAI’s role in curriculum design. Yet, while adoption has expanded quickly, the educational outcomes of GenAI integration have not been consistent (Kurtz et al, 2024). Across cases, educators identified both benefits, such as engagement and skills development, and risks, such as overreliance. This unevenness suggests that rather than reflecting a single trajectory of adoption, early practice appears to involve different approaches to integration. The central issue is not whether GenAI is used, but how different approaches shape outcomes.

This blog draws on our recent study of GenAI integration in business modules at a UK Russell Group university (Zhou et al, 2026). Through a qualitative analysis of 17 educator cases across 24 modules, we examined how GenAI was incorporated into curriculum design, and how different approaches were associated with distinct benefits and challenges. Using the lens of constructive alignment (Biggs, 1996), we identified three patterns of integration: ad hoc, blended, and constructive, which together form the ABC levels for understanding how GenAI is integrated into curriculum practice. We use these levels to explore why some approaches appear more educationally effective than others. In particular, this blog will offer research-informed insights into how GenAI can be integrated more effectively and sustainably in business higher education. While the cases are drawn from business education, the patterns identified and principles of constructive integration have wider relevance across disciplines where GenAI is increasingly embedded in curriculum design.

ABC levels of GenAI integration in the business curriculum

Our analysis identified three levels of GenAI integration: ad hoc, blended, and constructive. Table 1 outlines these distinctions across key dimensions.

Table 1 ABC levels of GenAI curriculum integration

Constructive integration represents a qualitatively different approach, grounded in constructive alignment, where intended learning outcomes, teaching activities, and assessment are deliberately designed to develop and evaluate students’ ability to use GenAI critically and effectively. At this level, GenAI is not an optional or supporting tool, but an integral component of disciplinary learning, with a clear pedagogical purpose and coherent role across the curriculum.

By contrast, ad hoc integration is characterised by occasional and isolated use, where GenAI is introduced as an optional or experimental tool without being planned into the broader curriculum design. Blended integration moves beyond this by incorporating GenAI into selected learning activities or tasks, giving it a more purposeful pedagogical role, but its use remains only partially embedded. Both approaches therefore fall short of the coherence and strategic alignment that define constructive integration.

The distinction between these patterns is therefore not simply a matter of more or less GenAI use, but of how GenAI is positioned within the curriculum: as an experiment, as a support, or as a capability to be deliberately developed. Although developed from business education contexts, this typology offers a lens that can be applied more broadly to understand how GenAI is positioned within different disciplinary curricula.

Why constructive integration matters

Across the cases, GenAI integration generated benefits and challenges across students, educators, and institutions. At the student level, reported benefits included stronger engagement, confidence, and employability-related skills, while the main risks centred on overreliance, inequality, ethical concerns, and ineffective outputs. For educators, benefits included efficiency gains, professional learning, and improved teaching performance, but these were accompanied by increased workload and the need to redesign activities and assessments. At the programme level, GenAI enhanced curriculum relevance but raised concerns about academic standards.

Figure 1 shows that these benefits and challenges were not distributed evenly across the three patterns of integration. Constructive integration displayed the strongest and broadest benefits, while ad hoc and blended approaches showed narrower gains alongside more exposed challenges. In other words, the issue is not whether GenAI brings value or risk, but how curriculum design shapes the balance between them.

Figure 1 Trade-offs of GenAI integration: challenges (red) vs benefits (green)

What makes constructive integration different is not the removal of challenge, but the stronger presence of educational value. In the study, constructively integrated cases were linked more clearly to student engagement, capability development, employability, and curriculum relevance because GenAI was embedded through aligned outcomes, activities, and assessment, rather than added on as a tool or support. Importantly, these cases also showed stronger educator development, including pedagogical reflection and confidence, despite workload pressures. This suggests constructive integration enhances both student outcomes and educator learning by embedding AI within coherent curriculum design.

How constructive integration is achieved

Table 2 presents examples from the modules in this study, showing how GenAI was constructively integrated into existing pedagogical strategies without requiring curriculum redesign.

Table 2 Constructive GenAI Integration into Existing Pedagogical Strategies

Taken together, the cases suggest several practical principles for integrating GenAI more coherently within the curriculum. These principles are not specific to business education, but reflect broader curriculum design considerations that can be adapted across disciplines with different pedagogical traditions.

  • Build on existing pedagogical strategies: GenAI should be embedded within approaches already familiar to the discipline, such as project-based or simulation-based learning, without requiring curriculum redesign (Chugh et al, 2023).
  • Sharpen the role of GenAI according to disciplinary purpose: In different contexts, GenAI supported strategic analysis, research and synthesis, reflective thinking, or data interpretation. Its value depends on alignment with module aims (Zhou & Milecka-Forrest, 2021).
  • Make AI use purposeful through assessment and evaluative tasks: In stronger cases, GenAI was connected to tasks that required students to interpret, justify, compare, or critique AI-supported outputs, rather than simply using AI to complete tasks (Biggs & Tang, 2010).
  • Support deeper student engagement through scaffolding: Structured guidance, such as prompting strategies, comparison activities, and reflective tasks, enabled more critical and purposeful use (Cukurova & Miao, 2024).

Overall, constructive integration is less about introducing new tools than about redesigning existing curriculum elements so that GenAI is meaningfully aligned with disciplinary learning.

Conclusion

The ABC levels developed in our study show that GenAI integration in business education does not follow a single trajectory but ranges from ad hoc and blended use to constructive integration. The key difference lies in approach: constructive integration embeds GenAI through aligned outcomes, activities, assessment, and scaffolding. The challenges observed across GenAI integration practices suggest an urgent need to shift from ad hoc GenAI use toward strategic and constructive integration in business education. In this way, higher education can support students’ employability and capability development, strengthen educators’ professional and pedagogical confidence, and enable institutions to sustain coherent, future-facing curricula.

Dr Qianqian Chai is a Lecturer in Business and Management at Queen Mary University of London and Chair of the AI in Education Innovation Sub-committee in the School of the Arts. Her research focuses on AI in higher education, including curriculum design, academic integrity, and policy. q.chai@qmul.ac.uk

Professor Xue Zhou is a Professor in AI in Business Education and Dean of AI at the University of Leicester. Her research interests include digital literacy, digital technology adoption, cross-cultural adjustment, and online professionalism. xue.zhou@le.ac.uk



In defence of SoTL: anchoring educational evaluation and educational research

by Liz Austen

By the end of 2025 I had attended three HE-related conferences: Euro SoTL, the Wonkhe Festival of HE and the SRHE Annual Conference. I presented on similar topics at all three events: what evidence do we generate to help us understand and act to enhance student experiences and outcomes in higher education? During the Wonkhe panel and my SRHE session, I defined two approaches at the disposal of HE practitioners:

Higher Education Evaluation: an approach which helps to understand and explore what works and doesn’t work in a given context and is of value to stakeholders. The aim of evaluation is to generate actionable evidence-informed learning, which encourages, informs and supports continuous improvement of process and impact (Evaluation Collective, 2025)

Higher Education Research: to extend knowledge and understanding in all areas of educational activity and from a wide range of perspectives, including those of learners, educators, policymakers and the public (adapted from BERA, 2024)

At the Wonkhe panel, Clare Loughlin-Chow (CEO of SRHE) helpfully outlined the higher education research topics that were most prevalent in the SRHE journals. Omar Khan (CEO of TASO) then outlined the scope and priorities of TASO, an affiliate member of the government’s What Works Network which focuses on higher education evaluation. My conceptual discussion of evidence generation brought the two together.

At EuroSoTL earlier in the year, my colleagues and I outlined our new institutional approach to the Scholarship of Teaching and Learning:

Scholarship of Teaching and Learning (SoTL): to improve student learning through engagement in the existing knowledge of teaching and learning, developing contextual ideas and innovation in practice, reflecting on practice, applying methodological rigour, working in partnership with students, and sharing of scholarship publicly (adapted from Felten, 2013)

When I attended SRHE in December 2025, SoTL appeared in only one session I attended, and some of that discussion focused on the challenges of bringing SoTL into spaces for educational research. My hand-in-the-air comment – that criticism of SoTL by educational researchers was an example of ‘academic snobbery’ – certainly raised a few eyebrows. This blog post considers the relationship between these three approaches and whether, for the good of our students, it’s time for some reconciliation.

Educational evaluation, SoTL and educational research

Educational research in higher education has developed over the last 60 years. Interestingly, research into teaching and learning is cited as the most theorised area within this body of work (Tight, 2012). Higher education evaluation, sometimes considered as applied research, was recently propelled by the Office for Students’ agenda to ‘evaluate, evaluate, evaluate’ (Office for Students, 2022). SoTL has developed alongside HE research and evaluation, emerging from Boyer’s work in 1990.

The aims of each endeavour are distinct but tied together by the notion of ‘enquiry’. Research seeks to build new knowledge, and evaluation seeks to provide judgment on a contextual problem. SoTL has a narrower focus on teaching and learning than the broader scope of research and evaluation but incorporates prior knowledge and contextual problem solving through focused enquiry (Gray, 2025). SoTL builds on the foundation of social sciences methodology and can integrate disciplinary methodology into practitioners’ teaching and learning enquiry (Riddell, 2026). Educational evaluation often asks questions about the effectiveness of interventions, but in some teaching and learning spaces, the evaluative language of ‘intervention’ isn’t appropriate (Austen, 2025, in Austen and McCaig, 2025). Exploring what works through SoTL enquiry aligns better. Often the bridging term ‘pedagogic research’ is used as integral to SoTL (close to practice) but distinct from educational research (broader anticipated impact). Our chosen SoTL definition uses neither research nor evaluation terminology, but has component parts – knowledge, innovation, method, dissemination – that are central to all.

The essential agents in educational research, evaluation and SoTL are the same – individual students (as partners, as participants and as voice givers), individual staff (academic and professional services), institutional groups or clusters, collaborating HEIs, and third space organisations. Reasons for enquiry are also similar and include sector expectations and shared learning, the desire for institutional enhancement and impact, personal development and career progression. Or as Ashwin & Trigwell (in Evans et al, 2021) note: to inform a wider audience; to inform a group within context; to inform oneself. All research, evaluation and SoTL agents must navigate the practical and ethical considerations of ‘insider’ enquiry if they are exploring their own practices or within their institutional contexts (BERA, 2018; Barnett & Camfield, 2016).

Output pathways are also interconnected. The SoTL staircase (Beckingham, 2023) recognised the variety of outputs encouraged by SoTL and includes those traditionally aligned with research and evaluation (reports and journal articles). Research outputs may be guided by REF criteria, and evaluation outputs by readership. The conclusions in research articles frequently state that more research is needed, and evaluation reports often sit unread in metaphorical desk drawers. In comparison, SoTL practitioners benefit from publications which are close to practice, quicker to publish, and more likely to influence change.

Both educational evaluation and educational research are inherently theoretical, grounded in educational or pedagogic theory or a theory of change. SoTL is more action-focused, less theoretical than research, yet can be more exploratory than evaluation. In 2011, Kanuka questioned SoTL’s credibility due to its lack of theoretical underpinning or reference to existing scholarship. At times, I suggest, educational research can be positioned too far in the opposite direction. The presentations at SRHE were heavily theoretical and sometimes I was left thinking ‘so how would this work actually improve the learning experiences of students?’ In contrast, the breadth of SoTL includes both theory and action, albeit in more pragmatic ways.

Educational researchers and evaluators hold distinct values and specific skill sets (and epistemological disagreements often occur between the two). This commitment to identity can be excluding and may help to explain why SoTL has been challenged. Canning & Masika (2022) caution us on the ‘threat to serious scholarship’ posed by SoTL, which they believe risks devaluing research into higher education learning and teaching. Their criticism of ‘anything goes’ I would frame as an important approach to inclusion. Their criticism of the ‘watered-down version of teaching and learning research’ I frame as SoTL’s recognition of the developmental, particularly in building staff confidence. Where confusion over definitions and scope still occurs, I question whether institutional SoTL has been well grounded or well led.

Conclusion

There is clearly a divide between higher education research and SoTL. There are few recent SRHE blog posts which reference SoTL at all and one that does advises against flag-in-the-sand nomenclature (Sheridan, 2019). Having spent a lot of time in these circles, I believe higher education evaluators are more agnostic, but I include them in this discussion as they bring a new dynamic to this debate.

In this blog I have identified the ways in which research, evaluation and SoTL have their own agendas and yet have much in common. I argue that SoTL emerges as a grounding anchor between higher education research and higher education evaluation. SoTL borrows from both. SoTL feeds into both. SoTL is more than both (Potter, 2025). SoTL’s inherent value is the ability to build a community which improves student experiences and outcomes in an enquiry-led and timely way.

For more details on the approach to SoTL at Sheffield Hallam University see: https://lta.shu.ac.uk/scholarship

References

Riddell, J (2026) ‘Hope circuits in practice: how the scholarship of teaching and learning fuels pedagogical courage and systemic change’ Guest Lecture, Sheffield Hallam University

Liz Austen is Professor of Higher Education Evaluation and Associate Dean Learning, Teaching and Student Success at Sheffield Hallam University. She has worked as an independent Evaluation Consultant on HE sector contracts and is a regular keynote speaker on all things evaluation in HE. Her focus is on evidence informed practice across the student lifecycle. Liz also leads a cross sector HE network called the Evaluation Collective.



The missing middle ground between research-led and practice-led education

by Saeed Talebi and Nick Morton

A peer reviewer recently challenged our pedagogical approach. We had described embedding an industry-led research project on Digital Twin development into our built environment curriculum as ‘research-informed teaching’. The reviewer disagreed: this was ‘practice-led rather than research-informed,’ they argued, because students weren’t producing research outputs themselves.

The comment revealed a conceptual confusion we suspect is widespread in higher education. We often assume that if students aren’t producing original research, then any industry-focused teaching must simply be vocational training with academic window-dressing. This leaves practice-facing disciplines in an awkward position: industry engagement is essential to what we do, but it risks being dismissed as less scholarly. There is, however, a middle ground.

Healey and Jenkins’ (2009) model offers a useful way through this confusion. They identify four modes of engaging undergraduates with research: research-led (learning about current scholarship), research-oriented (learning research methods), research-based (undertaking inquiry), and research-tutored (engaging in research discussions). These are mapped across two dimensions: whether students are positioned as audience or participants, and whether the emphasis falls on research content or processes. The model’s key insight is that students can be meaningfully engaged with research even when they aren’t producing research outputs themselves. The question isn’t simply whether students are ‘doing research’, it’s whether they’re positioned as passive recipients of established knowledge or as active participants in scholarly inquiry.

Practice-led teaching operates on a different logic, though that logic has a closer relationship to applied research than is sometimes acknowledged. Its primary aim is developing professional competence through authentic engagement with messy problems and competing stakeholder priorities. The distinction isn’t whether industry is involved – it can be present in both approaches. The distinction lies in how students are positioned in relation to knowledge. In practice-led education, knowledge tends to be treated as relatively settled. In research-informed education, knowledge is contested, evolving, and open to question. An opportunity arises when these approaches coincide without conscious design, and a risk emerges when they collapse into one another. Research-informed teaching can become performative, referencing staff publications without changing how students learn. Practice-led teaching can slip into employability theatre, where live briefs are added without interrogating what knowledge students are actually developing.

As Professor Hanifa Shah OBE recently argued in Times Higher Education, STEAM education at its best equips students to “move fluidly between analytical and imaginative modes of thinking”, asking critical questions, considering ethical implications, and bringing meaning to innovation. This is precisely the disposition that research-informed teaching seeks to develop. In STEAM disciplines, including architecture, built environment, computing and engineering, emerging technologies create spaces where research and practice intersect meaningfully. Digital Twins and real-time monitoring tools, for example, allow students to work with live systems while engaging critically with the assumptions and ethics embedded within them. Students aren’t merely applying research after the fact, nor mimicking professional routines. They’re learning to question how data is generated, how models simplify reality, and how decisions are shaped by both evidence and judgement. Practice becomes a site of inquiry.

There’s an institutional dimension here too. Across the sector, promotion frameworks, workload models, and teaching quality metrics often reward research visibility and industry engagement without asking how either is translated pedagogically. Academics are encouraged to ‘bring research into teaching’ and ‘embed employability’, yet rarely supported in doing the difficult design work that meaningful integration requires. Recent discussions within the sector have highlighted how delivery models shape the possibilities for integrating academic and workplace learning. These are sector-wide conversations, and they reflect shared challenges around diverse learner cohorts, blended delivery, and the risk of compliance overtaking genuine learning. As a result, many innovative practices remain dependent on individual effort rather than structural support.

None of this means practice-led and research-informed approaches are mutually exclusive. The most effective curricula often blend elements of both. But blending deliberately is quite different from conflating accidentally.

When designing industry-engaged teaching, it’s worth asking honest questions. Are students positioned as inquirers or executors? Are they engaging with contested knowledge or settled practice? Does assessment reward critical reflection or merely competent performance? Is the industry project a vehicle for scholarly inquiry, or is scholarly framing a veneer over vocational training?

The answers won’t always be clear-cut, and that’s fine. But asking the questions helps us design with intention rather than stumbling into confusion – and helps us articulate what we’re doing when a peer reviewer, a sceptical colleague, or a university committee asks us to justify our approach.

Dr Saeed Talebi is an Associate Professor in the Department of Architecture and Built Environment at Birmingham City University and a Senior Fellow of the Higher Education Academy (SFHEA). He has held a number of T&L leadership roles, including Departmental Lead, Course Leader, and Academic Lead for Teaching Excellence and Student Experience. He has a keen interest in pedagogy in higher education, with particular interest in research-informed teaching and the integration of emerging technologies and practice-led projects into built environment curricula to enhance student outcomes and experience. He has also led the delivery of large STEAM research projects.

Professor Nick Morton is the Academic Director of Partnerships and STEAM at Birmingham City University. A Principal Fellow of the Higher Education Academy (PFHEA), he was awarded a National Teaching Fellowship in recognition of his track record in curriculum development. He has held a number of senior leadership roles at BCU, including Associate Dean for Teaching Education and Student Experience, overseeing Computing, Engineering and the Built Environment. He was elected Vice-Chair of the Council of Heads of the Built Environment (CHOBE) in 2012 and is a Fellow of the Royal Institution of Chartered Surveyors (FRICS).



Judgement under pressure: generative AI and the emotional labour of learning

by Joanne Irving-Walton

What AI absorbs and why that matters

Most debates about generative AI in higher education fixate on what it produces: essays, summaries, answers, paraphrases. I find myself increasingly interested in something else – what it absorbs. Over the past year, as conversations about AI have threaded through seminars and tutorials, a pattern has gradually become visible. In those discussions, students rarely begin with content production; instead, they talk about how it helps them get started and steadies them enough to keep going. They use it when the blank page paralyses, when feedback stings and when uncertainty feels exposing. One student described asking AI to “make it feel possible”. Another spoke of feeding tutor comments into the system so they could be “explained more kindly”. A third reflected, almost apologetically, “I don’t want it to do my work… I just need something to push against before I say it out loud and risk looking stupid”.

In each case, AI is not replacing thinking. It is absorbing part of the emotional labour involved in it, and as that labour is redistributed, the texture of judgement shifts. Academic judgement does not tend to emerge from comfort. It develops in the stretch between knowing and not knowing, when confidence dips, stakes feel heightened, and your sense of competence is quietly tested (Barnett, 2007). Staying in that stretch long enough for thinking to clarify demands more than intellectual effort; it requires emotional steadiness, time, space and the capacity to tolerate uncertainty without rushing to resolution (Biesta, 2013). Traditionally, that steadying work has been shared across learning relationships: tutors reframing feedback, peers normalising confusion, supervisors encouraging persistence through doubt. Generative AI now occupies part of that terrain.

I do not think this is inherently a problem. For some students, it is transformative. It marks a shift in where the labour of learning takes place and that change deserves examination rather than alarm.

Four modes of engagement and emotional labour

When students talk about how they use AI, their practices tend to cluster into four overlapping orientations. These are not moral categories so much as shifts in where emotional and cognitive labour is undertaken.

Instrumental engagement appears when students use AI to summarise readings, refine phrasing or impose structure. Here the friction lies in form-making and shaping thought into something communicable. The judgement at stake is procedural: what is proportionate or efficient in this context?

Dialogic engagement emerges when students test interpretations or rehearse arguments. AI becomes a low-stakes sounding board, absorbing some of the vulnerability of articulating something half-formed. The question beneath it is interpretive: what does this mean, and how far do I trust my reading and myself?

Metacognitive engagement is evident when students ask AI to critique their reasoning or compare approaches. What is absorbed here is evaluative tension and the discomfort of examining one’s own argument. The judgement in play is comparative and strategic: which option is stronger, and why?

And then there is affective-regulatory engagement. Here, AI absorbs the anxiety that precedes judgement itself. It breaks tasks into steps, softens feedback, lowers the threshold for beginning, offers reassurance before submission and quietens the internal ruminations and rehearsals of everything that might go wrong. This is not peripheral to learning. It is increasingly central.

Figure: Where the labour of learning now lives

Accessibility, safety and the risk of smoothing too much

For many students, particularly those navigating anxiety, executive dysfunction, neurodivergence or heavy external commitments, this emotional buffering is not indulgence but access (Rose & Meyer, 2002). Breaking tasks into steps or privately rehearsing ideas before speaking can widen participation rather than diminish it.

We should not romanticise struggle. Nor should we imagine that institutional structures have ever been able to hold every student perfectly. For some learners, AI offers another place to rehearse thinking, one that sits alongside, rather than replaces, human dialogue.

But there is a tension here. If AI consistently absorbs the strain of uncertainty before ideas encounter resistance, if feedback is softened before it unsettles, if structure replaces the slow work of wrestling thought into form, then something quieter begins to shift. Much of this work happens privately, in browser tabs and late-night prompts, in spaces students do not always feel comfortable admitting to. That makes it harder for us to see what is being strengthened and what may be thinning. The danger is not comfort, but the quiet disappearance of formative strain.

By formative strain, I do not mean suffering for its own sake, nor simply the “desirable difficulties” described by Bjork and Bjork (2011) or the stretching associated with a Vygotskian zone of proximal development (Vygotsky, 1978). I am referring to the lived experience of remaining with ambiguity, critique and partial understanding long enough for judgement to consolidate; the emotional as well as cognitive work of staying with a problem. If that work is always pre-processed, it may narrow the rehearsal space where judgement forms.

Scaffold or substitute

Much depends on whether AI remains a scaffold or begins to function as a substitute. Used as scaffold, it lowers the emotional threshold just enough for deeper engagement, absorbing anxiety without displacing judgement. Used as substitute, it reduces not only strain but evaluation itself; the work of deciding and committing shifts elsewhere. The distinction lies less in the tool than in how it is woven into the learning environment.

Individual awareness and institutional responsibility

It would be easy, and unfair, to frame this as a matter of individual discernment. Students already carry a great deal. But nor is this simply a matter of institutional correction. We are all navigating new terrain in real time, without a settled script.

If we are serious about judgement formation, then responsibility is shared — and it is evolving. This is less about detection or prohibition than about openness. AI engagement is happening whether we discuss it or not. The question is whether we bring it into the light. That might mean inviting students to reflect on how they used AI in a task, not as confession, but as analysis. It might mean modelling, in our own teaching, what it looks like to question or refine an AI response rather than accept it wholesale. It certainly means acknowledging the emotional labour of learning openly (Newton, 2014), recognising that starting can be harder than finishing and that this, too, is part of learning.

At a structural level, we also need some candour. Systems built on speed, metrics and visible output inevitably amplify the appeal of friction-reducing tools. If polish is rewarded more consistently than process, we should not be surprised when students bypass the stretch between uncertainty and articulation. Cultivating discernment, then, is not a matter of allocating blame. It is a collective project of making the shifting terrain of AI use visible, discussable and educative.

Where the emotional work now lives

Generative AI has not diminished the importance of human judgement. If anything, it has made visible how emotionally mediated that judgement has always been (Immordino-Yang & Damasio, 2007). The interior work of learning – the hesitation, the rehearsal, the private negotiation of uncertainty – has never been fully observable. It has always unfolded, at least in part, elsewhere.

What AI changes is not the existence of that interior space, but its texture. Some of that labour now takes place in dialogue with a system that can stabilise, extend or subtly redirect thinking. That creates an opportunity: we are at a juncture where the emotional dimensions of learning can be surfaced and examined more deliberately than before.

It also carries risk. Students can disappear down an AI rabbit hole just as easily as they once disappeared into rumination. The question is not whether the interior work exists, but how it is shaped and whether it ultimately strengthens judgement or thins it.

References

Barnett, R (2007) A will to learn: Being a student in an age of uncertainty Open University Press

Biesta, GJJ (2013) The beautiful risk of education Paradigm Publishers

Bjork, EL & Bjork, RA (2011) ‘Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning’ in MA Gernsbacher, RW Pew, LM Hough & JR Pomerantz (eds), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64) Worth Publishers

Newton, DP (2014) Thinking with feeling: Fostering productive thought in the classroom Routledge

Rose, DH & Meyer, A (2002) Teaching every student in the digital age: universal design for learning ASCD

Vygotsky, LS (1978) Mind in society: the development of higher psychological processes Harvard University Press

Joanne Irving-Walton is a Principal Lecturer at Teesside University, working across learning and teaching and international partnerships. She is particularly interested in how academic judgement and professional identity develop through the emotional realities of higher education.



Can folk pedagogies help us understand the limited impact of research on higher education?

by Alex Buckley

The SRHE conference is a great place to see our field in all its glory. From the sessions I attended in December 2025, one thing that was abundantly clear was the desire of so many HE researchers to change the world. A distinctive feature of contemporary HE research – reflecting the social sciences more broadly – is the focus on political and ethical issues, with avowedly political and ethical intentions. The improvement of society is often the explicit end, rather than the more humble improvement of our own part of the education system.

Despite this desire to make a difference, higher education research has for many years been held up as an area where the impact of those working in the field is not what it could be. As George Keller said in 1985, “hardly anyone in higher education pays attention to the research and scholarship about higher education”.

Asking the right questions?

There hasn’t been a lot of work on the gap between research and practice in HE – though there is a fair amount in the schools sector from which we can extrapolate, to a greater or lesser extent – but one issue that has received some attention is the fundamental one: are researchers actually asking the right questions?

Viviane Robinson is a researcher who has laid a substantial amount of blame at the feet of researchers, who “have little to offer by way of alternative solutions, when the problems they have been studying are not those of the practitioner” (Robinson, 1993). I have recently used Robinson’s model of Problem-Based Methodology to explore whether research about exams in higher education does engage sufficiently with the challenges that teachers take themselves to face. The results were not encouraging.

One of the more straightforward of Robinson’s criteria for impactful research is that researchers should be addressing teachers’ beliefs, and correcting them where they are erroneous. That’s important, but what if those beliefs are hard to shift? We all have stubborn hunches about how higher education works: good ways of motivating students, how to write feedback that will make students pay attention, how to clearly communicate complex ideas. What if there are teacher beliefs that are deeply embedded, so deeply that we don’t always know we have them, but that aren’t helping us and need to change?

One idea that has been explored in the school sector, but has largely passed us by, is the concept of ‘folk pedagogies’. This idea was developed in the 1990s as an extension of the more famous concept of ‘folk psychologies’: the tacit theories that we all have that allow us to make sense of people’s behaviour. For Jerome Bruner, a natural next step from folk psychologies was the idea that we have intuitive theories about how people learn.

“Watch any mother, any teacher, even any babysitter with a child and you’ll be struck by how much of what they do is steered by notions of ‘what children’s minds are like and how to help them learn,’ even though they may not be able to verbalise their pedagogical principles.” Bruner (1996)

There has been some research in the school sector about the implications of this idea, particularly in terms of how much difference research makes to educational practice. Folk pedagogies have two features that will make them a factor in the impact of education research: they interfere with the uptake of new research-based ideas and approaches, and they are stubborn. On the first point, the idea is that new ideas about higher education will have to replace the old if they are to influence teachers; and on the second, evidence suggests that even where trainee teachers have ostensibly internalised more scientific theories of learning, the folk pedagogies come creeping back.

In the case of higher education, what might these commonsense, intuitive theories look like? They might just be very general ideas about how people learn, applied to the particular context of higher education. Bruner identifies a range of broad folk pedagogical views, such as one which sees ‘children as knowers’, with a focus on the gathering and organising of facts. Perhaps one kind of folk pedagogy of higher education would be the application of that idea specifically to students in universities rather than other sectors: a focus on the selection, organisation and retention of propositional knowledge within degree programmes. Perhaps there are also specific intuitive theories about higher education that influence teachers’ practices. Perhaps there is a folk intuition that university students should not be spoon-fed – that they must take responsibility for their own learning and seek to develop their own views. Perhaps there is a folk intuition that students should encounter challenging views that encourage them to question their own certainties. In the absence of research, we can only speculate (and introspect).

Respecting the ‘folk’

The idea that teachers have deep intuitions about how students learn, that those intuitions can prevent them from acting on more evidence-based beliefs, and that those intuitions are hard to shake; none of those ideas are particularly earth-shattering. They are probably common sense among those researching and enhancing higher education. The value of the idea of ‘folk pedagogies’ lies instead in the way that it encourages us to take those intuitions seriously, both as an object of study and a powerful barrier to change.

Rather than dismissing intuitions about higher education – as ignorant beliefs and hide-bound traditions – we can study them. What are they? Where do they come from? How do they change? The idea of folk pedagogies is not pejorative. There’s no shame in having intuitions about how learning works. As with folk psychological theories, they are necessary parts of how we navigate the world, and something we can’t do without. There is also deep wisdom to be found in those intuitions, even if they are sometimes misleading. Research goes wrong by departing from common sense, at least as much as the other way around.

Acknowledging the existence of folk theories of higher education can help improve the impact of our research in all sorts of ways. We can research them, to understand why teachers and students (and others) do what they do, and the conditions in which deep intuitions can change. It can help us understand where – and why – research has departed so far from common sense as to be of little practical relevance.

It can also help us understand the scale of the challenge. In much of what we do, we’re seeking to modify what university teachers do, which very often means changing how they think. The reality is that we aren’t usually changing superficial, specific beliefs, at least not where the improvements we’re seeking are substantive. We’re changing deep beliefs picked up over a lifetime. Our model of improvement may then need to fit the old adage: if you’re not making progress at a snail’s pace, you’re not making progress. That’s a bit different from annual quality enhancement cycles or short-term strategic initiatives. We can change the world, but it will take time.

References

Bruner, J. (1996). The culture of education. Harvard University Press

Robinson, V. M. J. (1993). Problem-Based Methodology: Research for the Improvement of Practice. Pergamon Press

Dr Alex Buckley is an Associate Professor in the Learning & Teaching Academy at Heriot-Watt University, Scotland. His research is focused on conceptual aspects of research and practice in assessment and feedback.



What is a poem doing in a literature review?

by Nguyen Phuong Le, Kathleen Pithouse-Morgan and Thang Long Nguyen

If the phrase ‘write a poem’ makes your stomach do a tiny backflip, you are in good company. The three of us came to poetry from very different places. Kathleen has been working with poetry in teaching and research for many years, across different countries and contexts. Phuong first encountered poetic inquiry while working with Kathleen as a research assistant, learning her way into the field as a newcomer. Long joined as a critical reader of this blog, bringing curiosity from outside poetry‑based research.

Those different starting points matter. None of us came to this work believing poetry was an obvious or easy fit for literature reviewing.

In our conversations, workshops, and conference sessions, we have seen friends, postgraduate students, supervisors, lecturers, and experienced researchers worry that they are ‘not creative’. Some worry their English is not ‘good enough’. Others feel uneasy because poetry sounds personal, exposing, and even childish, in a higher education context.

Our starting point is simple: use a small, low-stakes poetic process to think with literature, stay engaged, and find your way into the scholarly conversation. When you do this with another person, the process can feel even more doable. You cannot get this wrong, because the point is not to produce a ‘professional’ poem.

Why poetry in a literature review, seriously?

You don’t have to write poems to review literature. Most reviews are written in conventional academic prose. But if you are doing qualitative research, you may already know that knowledge is not only built through tidy argument. It is also built through attention, resonance, discomfort, contradiction, and voice.

Literature reviews can become a performance of mastery: you read fast, extract key points, categorise, critique, cite, and move on. Although these steps seem straightforward, the focus on moving quickly and efficiently may mean we miss what texts invite us to feel, picture, and connect with. The emotional texture of reading disappears, along with much of what makes qualitative work matter: empathy, imagination, and relational engagement.

Poetry calls for slower digestion. It invites you to ask, ‘What stays with me?’. It offers a way to respond before you feel ready to produce polished academic claims. That response can later feed your analytic writing, without needing to look like academic writing at the start.

What do we mean by “collaborative feedback poetry”?

Kathleen and Phuong’s article, ‘Reimagining qualitative literature reviewing through collaborative feedback poetry’ (Pithouse-Morgan & Le, 2025), introduces the term collaborative feedback poetry to describe a literature-reviewing strategy in which people respond to academic texts through short poems and exchange poetic responses with one another.

In such a strategy, collaboration matters. Many researchers struggle not only with the literature and writing, but also with the loneliness of the process. Working alongside someone else shifts the emotional climate. You are no longer trying to “prove” that you understand. You are noticing, articulating, and learning together.

Feedback matters as much as the poem. In academic settings, feedback often points out what is missing, what is weak, and what needs to be fixed. In collaborative feedback poetry, the focus is not on correction but on extension. The poem becomes a doorway, inviting you to walk further into the text rather than retreat from it.

“But I’m not a poet!”

That’s the point.

In the first few minutes of Kathleen’s collaborative feedback poetry sessions, the atmosphere is often tense. People apologise before they write. They say they are not creative, have never written a poem, or worry that their English is not good enough.

What changes things is permission: Permission to know, from the start, that there is no way to get this wrong.

Permission to be simple.

Permission to be incomplete.

Permission to use a home language.

When that permission feels real, participants begin to read, talk, and act differently. The literature starts to feel less like a wall and more like a space they can enter – through poetry, in whatever form it takes.

Phuong has seen these hesitations surface in conference conversations and informal chats with colleagues in Vietnam. After presentations on poetry as a literature‑reviewing practice, people are often interested but quiet. Later, they admit their worry about whether there is a ‘right’ kind of poem, or that writing poetry in a second or third language will expose them as less than capable.

That hesitancy matters. So instead of defending poetry in abstract terms, we slow down and walk through a small example.

Here is one example, a short haiku:

Creative Arts Professors’ Concerns

Pandemic’s harsh fall,

professors’ struggles echo,

incomplete sonnets.

(First published in Pithouse-Morgan & Le, 2025)

Phuong wrote this haiku in response to two papers by creative arts educators in higher education: Holmgren (2018) and Meskin and van der Walt (2022). Holmgren’s paper, written before the COVID-19 pandemic, explores musical interpretation through philosophic poetic inquiry and autoethnodrama. Meskin and van der Walt’s paper, written during the pandemic, uses poetic inquiry and reciprocal found poetry to reflect on disruptions to educator-artists’ academic and creative lives.

Rather than summarising either paper, Phuong read them together and asked: ‘What feeling carries across both texts?’ The answer was interruption – teaching and creative work that could not fully unfold. This is where ‘incomplete sonnets’ came from.

The poem does not replace the literature review. Instead, it marks what stayed with the reader after reading closely. This is not (just) an artistic move, but an act of attention and relation.

When we introduce this process, we usually ask a few simple questions, such as ‘What stayed with you after reading?’ ‘Which words carry that feeling?’ ‘What happens when you space those words out on a page?’ And ‘What occurs when another person reads and responds to your poem?’

We also invite readers to notice what remains with them after reading. Kathleen’s poem Growing Beyond came from that noticing: reading across texts about doctoral students’ poetic inquiry (Chan, 2003; Kang et al, 2022) and attending to what stayed with her. In their poetry, Chan and Kang et al wrote about what it felt like to be doctoral students, including experiences of isolation, marginalisation, and internal struggle. Their work highlights the restorative, reflective, and critical possibilities of poetic inquiry in higher education. The poem opens with an impulse Kathleen recognised in their writing:

A sudden compulsion,

a yearning to express,

to write poetry.

                (First published in Pithouse-Morgan & Le, 2025)

Why the collaborative element carries weight

Higher education research can be intensely individualised. Even when we are part of a student cohort or a research centre, as students or academics, we often read and write alone before submitting work for evaluation or review. Collaborative feedback poetry encourages a different kind of scholarly space. The goal is not to show you are clever, but to practise staying with ideas and emotions in the supportive presence of another.

That matters for students and academics at different levels, and for supervisors and educators trying to teach literature reviewing without turning it into a fear-fest. It also matters for multilingual writers, who are too often made to feel that academic voice counts only when it sounds like confident English.

Collaboration does not remove difficulty; it changes what difficulty feels like. You are not stranded in it. You are accompanied. To us, this companionship feels more welcoming than working alone, not least because, like many of you, we are also trying to find and express our voices within the wider literature.

A takeaway for you

If you want to try this, keep it small. Choose one article. Give yourself ten minutes to jot down words that come to mind as you read, and select phrases from the text that grab your attention. Shape these into a short poem, in any form, with space around the words. Share it with someone you trust. Ask them to respond – not by grading it, but by writing back with their own short poem. Then briefly discuss what the poems say and why that matters.

If you leave with just one idea, let it be this: literature reviewing is not only about demonstrating coverage. It is also about cultivating relationships with ideas, voices, emotions, and sometimes with each other. Collaborative feedback poetry is one way to make these relationships visible and accessible.

By now, we hope you feel encouraged to step into poetic literature reviewing in ways that feel doable and enjoyable. With baby steps, of course.

Acknowledgement

This work was supported by the Leverhulme Trust through the British Academy/Leverhulme Small Research Grants Scheme. (Grant holder: Kathleen Pithouse-Morgan).

Nguyen Phuong Le is a lecturer in English Education at Hanoi National University of Education, Vietnam. She is a graduate of the Master of Arts in Digital Teaching and Learning at the University of Nottingham, UK, and the Bachelor of Arts in English at Northern Kentucky University, US. Passionate about digital education and literature, she has held various positions in research, teaching, and learning across higher education and educational organisations.

Kathleen Pithouse-Morgan is a Professor of Education at the University of Nottingham, UK, and an honorary professor at the University of KwaZulu-Natal, South Africa. She focuses on professional learning and supporting professionals as self-reflexive, creative learners. Passionate about arts-inspired research and teaching, especially using poetic methods, she co-convenes the British Educational Research Association’s Arts-Based Educational Research group.

Thang Long Nguyen is currently a student of the Master of Arts in Sociology at University College Dublin, Ireland. A graduate of Doshisha University in Japan with a Bachelor of Arts in Liberal Arts, he has an interdisciplinary interest in themes of nationalism, and is deeply concerned with the progress of education in the social sciences and humanities in his home country, Vietnam.



Understanding complex ambiguous problems through the lens of Soft Systems Methodology

by Joy Garfield and Amrik Singh

As the future leaders of a society that is increasingly complex and challenging, higher education students need a good grasp of social, political, economic and environmental issues, and need to feel equipped to propose reasonable recommendations. This can seem a daunting prospect for anyone, let alone higher education students who may have little or no prior experience of working in these areas. Students need to understand the world view of the stakeholders and the what, how and whys of the situation being explored. What is the problem situation, how will we understand it, and why are we trying to understand it? Here we describe an approach successfully used in our postgraduate teaching at Aston Business School, UK.

Soft Systems Methodology (SSM) (Checkland, 1986) has been successfully used in many different contexts for complex problem-solving. With its seven-stage structure it provides a framework for structuring/framing wicked problems by initially thinking about what is happening in the real world from the point of view of different stakeholders. An idealised world without any constraints is then explored from different stakeholder perspectives so that different wants/needs for a new system can be considered. Students are encouraged to use empathetic discourse to understand the multiple perspectives of the stakeholders in the problem situation. The comparison between the real world and idealised worlds allows for an eventual accommodation of future ways forward.

Soft Systems Methodology is currently used to teach complex problem-solving to postgraduate students at Aston. The module team have developed a group-based approach that has been found to produce a deeper understanding of concepts and yield better overall results, particularly given that the cohort is made up mostly of international postgraduate students. For most students, English is not their first language, and they are new to complex problem-solving.

Teaching sessions are structured around the different stages of the Soft Systems Methodology. Group work is used so that students support one another in their learning of the concepts and then apply these individually to their chosen assessment topic. The UK criminal justice system is taken as an in-class example, and students are asked to think about a particular complex area to focus on, eg overcrowding in prisons in a particular city. Terminology can be particularly complex and hard to grasp if English is not your first language, so the language used to explain concepts is kept simple and several forms of scaffolding are used to support the learning.

The first task related to SSM involves students identifying the stakeholders and their power/interest in the complex situation. Students are then taught the concept of a rich picture and draw one as a group for their chosen problem situation using whiteboard paper (example below). The rich picture itself enables students to understand the real world, stakeholder issues, conflicts, and relationships, together with who interacts with the problem from outside its boundary. Students present their rich pictures to the wider group for formative feedback.

This helps with constructive feedback and a deeper understanding of the complex issue. The rich pictures may seem simple, but simplifying a complex problem is complex in itself! This helps students to understand and tease apart the complexities of the problem situation. The rich picture depicts the problem situation better than just making notes alone.

For the realisation of the idealised world, students put themselves in the shoes of the stakeholder. This involves empathetic discourse, whereby students interview one another, from different stakeholder perspectives, about what they would want from a system without taking any constraints into consideration. Students are then able to expand these statements as a group to take the different aspects into account. From this, students construct a model which helps depict the transformation activities that the stakeholders wish to conduct to reach their desired output.

By gaining a better understanding of the real world from drawing the rich picture, and thinking about an idealised world and possible transformation activities, students can then gain an understanding of the changes going forward.

Topics chosen by students for their assessment have included: housing refugees in the UK; online exams or in-person exams at university; homelessness; the impact of the pandemic on tourism; child marriages in India; a start-up in France to reduce plastic packaging; finding the appropriate route for a railway between two cities in Germany. These are all complex and ambiguous problems that need to be understood before any potential solutions are proposed.

During the module students develop confidence in the application of SSM and come to a true understanding of the process of accommodating different stakeholder perspectives – especially when consensus is not always possible. What we understand from this journey is that there is no ‘one shoe fits all’ solution when understanding complex ambiguous problems.

Empathy enriches the SSM process by ensuring the human side of systems is as important as the technical side. It helps to create solutions that work not just in theory but in real, messy, human-centric environments. Empathetic discourse is very valuable to understand the voice of the stakeholders. What we have learned from the delivery of the module is that when complex ambiguous problems are human centric, then the solutions are human centric also.

Checkland, P (1986) Systems thinking, systems practice Chichester: Wiley

Dr Joy Garfield is a Senior Teaching Fellow and Director of Learning and Teaching for an academic department at Aston Business School, Aston University, UK.  Her subject discipline area is information systems, particularly systems modelling and complex problem solving. With just over 20 years of experience in academia, she has worked at a number of UK universities. Joy is a Senior Fellow of Advance HE and is currently an external examiner at Sheffield Hallam University and the University of Westminster. 

Dr Amrik Singh is a Senior Teaching Fellow at Aston University, UK. He has over 15 years of academic experience in higher education. He is also a Senior Fellow of Advance HE (SFHEA). His teaching areas include operations management, effective management consultancy, and business operations excellence.



Reflective teaching: the “small shifts” that quietly change everything

by Yetunde Kolajo

If you’ve ever left a lecture thinking “That didn’t land the way I hoped” (or “That went surprisingly well – why?”), you’ve already stepped into reflective teaching. The question is whether reflection remains a private afterthought … or becomes a deliberate practice that improves teaching in real time and shapes what we do next.

In ‘Advancing pedagogical excellence through reflective teaching practice and adaptation’, I explored reflective teaching practice (RTP) in a first-year chemistry context at a New Zealand university, asking a deceptively simple question: How do lecturers’ teaching philosophies shape what they actually do to reflect on and adapt their teaching?

What the study did

I interviewed eight chemistry lecturers using semi-structured interviews, then used thematic analysis to examine two connected strands: (1) teaching concepts/philosophy and (2) lecturer-student interaction. The paper distinguishes between:

  • Reflective Teaching (RT): the broader ongoing process of critically examining your teaching.
  • Reflective Teaching Practice (RTP): the day-to-day strategies (journals, feedback loops, peer dialogue, etc) that make reflection actionable.

Reflection is uneven and often unsystematic

A striking finding is that not all lecturers consistently engaged in reflective practices, and there wasn’t clear evidence of a shared, structured reflective culture across the teaching team. Some lecturers could articulate a teaching philosophy, but this didn’t always translate into a repeatable reflection cycle (before, during, and after teaching). I framed this using Dewey and Schön’s well-known reflection stages:

  • Reflection-for-action (before teaching): planning with intention
  • Reflection-in-action (during teaching): adjusting as it happens
  • Reflection-on-action (after teaching): reviewing to improve next time

Even where lecturers were clearly committed and experienced, reflection could still become fragmented, more like “minor tweaks” than a consistent, evidence-informed practice.

The real engine of reflection: lecturer-student interaction

Interaction isn’t just a teaching technique – it’s a reflection tool.

Student questions, live confusion, moments of silence, a sudden “Ohhh!” – these are data. In the study, the clearest examples of reflection happening during teaching came from lecturers who intentionally built in interaction (eg questioning strategies, pausing for problem-solving).

One example stands out. Denise’s in-class quiz is described as the only instance that embodied all three reflection components: using student responses to gauge understanding, adapting support during the activity, and feeding insights forward into later planning.

Why this matters right now in UK HE

UK higher education is navigating increasing diversity in student backgrounds, expectations, and prior learning alongside sharper scrutiny of teaching quality and inclusion. In that context, reflective teaching isn’t “nice-to-have CPD”; it’s a way of ensuring our teaching practices keep pace with learners’ needs, not just disciplinary content.

The paper doesn’t argue for abandoning lectures. Instead, it shows how reflective practice can help lecturers adapt within lecture-based structures, especially through purposeful interaction that shifts students from passive listening toward more active/constructive engagement (drawing on engagement ideas such as ICAP).

Three “try this tomorrow” reflective moves (small, practical, high impact)

  1. Plan one interaction checkpoint (not ten). Add a single moment where you must learn something from students (a hinge question, poll, mini-problem, or “explain it to a partner”). Use it as reflection-for-action.
  2. Name your in-the-moment adjustment. When you pivot (slow down, re-explain, swap an example), briefly acknowledge it: “I’m noticing this is sticky – let’s try a different route.” That’s reflection-in-action made visible.
  3. End with one evidence-based note to self. Not “Went fine.” Instead: “35% missed X in the quiz – next time: do Y before Z.” That’s reflection-on-action you can actually reuse.

Questions to spark conversation (for you or your teaching team)

  • Where does your teaching philosophy show up most clearly: content coverage, student confidence, relevance, or interaction?
  • Which “data” do you trust most: NSS/module evaluation, informal comments, in-class responses, attainment patterns – and why?

  • If your programme is team-taught, what would a shared reflective framework look like in practice (so reflection isn’t isolated and inconsistent)?

If reflective teaching is the intention, this article is the nudge: make reflection visible, structured, and interaction-led, so adaptation becomes a habit, not a heroic one-off.

Dr Yetunde Kolajo is a Student Success Research Associate at the University of Kent. Her research examines pedagogical decision-making in higher education, with a focus on students’ learning experiences, critical thinking and decolonising pedagogies. Drawing on reflective teaching practice, she examines how inclusive and reflective teaching frameworks can enhance student success.


Leave a comment

Walk on by: the dilemma of the blind eye

by Dennis Sherwood

Forty years on…

I don’t remember much about my experiences at work some forty-odd years ago, but one event I recall vividly is the discussion provoked by a case study at a training event. The case was simple, just a few lines:

Sam was working late one evening, and happened to walk past Pat’s office. The door was closed, but Sam could hear Pat being very abusive to Alex. Some ten minutes later, Sam saw Alex sobbing.

What might Sam do?

What should Sam do?

Quite a few in the group said “nothing”, on the grounds that whatever was going on was none of Sam’s business. Maybe Pat had good grounds to be angry with Alex and if the local culture was, let’s say, harsh, what’s the problem? Nor was there any evidence that Alex’s sobbing was connected with Pat – perhaps something else had happened in the intervening time.

Others thought that the least Sam could do was to ask if Alex was OK, and offer some comfort – a suggestion countered by the “it’s a tough world” brigade.

The central theme of the conversation was then all about culture. Suppose the culture was supportive and caring. Pat’s behaviour would be out of order, even if Pat was angry, and even if Alex had done something Pat had regarded as wrong.

So what might – and indeed should – Sam do?

Should Sam confront Pat? Or inform Pat’s boss?

What if Sam is Pat’s boss? In that case, yes, Sam should confront Pat: failure to do so would condone bad behaviour, which in this culture, would be a ‘bad thing’.

But if Sam is not Pat’s boss, things are much more tricky. If Sam is subordinate to Pat, confrontation is hardly possible. And informing Pat’s boss could be interpreted as snitching or trouble-making. Another possibility is that Sam and Pat are peers, giving Sam ‘the right’ to confront Pat – but only if peer-to-peer honesty and mutual pressure is ‘allowed’. Which it might not be, for many, even benign, cultures are in reality networks of mutual ‘non-aggression treaties’, in which ‘peers’ are monarchs in their own realms – so Sam might deliberately choose to turn a blind eye to whatever Pat might be doing, for fear of setting a precedent that would allow Pat, or indeed Ali or Chris, to poke their noses into Sam’s own domain.

And if Sam is in a different part of the organisation – or indeed from another organisation altogether – then maybe Sam’s safest action is back where we started. To do nothing. To walk on by.

Sam is a witness to Pat’s bad behaviour. Does the choice to ‘walk on by’ make Sam complicit too, albeit at arm’s length?

I’ve always thought that this case study, and its implications, are powerful – which is probably why I’ve remembered it over so long a time.

The truth about GCSE, AS and A level grades in England

I mention it here because it is relevant to the main theme of this blog – a theme that, if you read it, makes you a witness too. Not, of course, to ‘Pat’s’ bad behaviour, but to another circumstance which, in my opinion, is a great injustice doing harm to many people – an injustice that ‘Pat’ has got away with for many years now, not only because ‘Pat’s peers’ have turned a blind eye – and a deaf ear too – but also because all others who have known about it have chosen to ‘walk on by’.

The injustice of which I speak is the fact that about one GCSE, AS and A level grade in every four, as awarded in England, is wrong, and has been wrong for years. Not only that: in addition, the rules for appeals do not allow these wrong grades to be discovered and corrected. So the wrong grades last for ever, as does the damage they do.

To make that real, in August 2025, some 6.5 million grades were awarded, of which around 1.6 million were wrong, with no appeal. That’s an average of about one wrong grade ‘awarded’ to every candidate in the land.

Perhaps you already knew all that. But if you didn’t, you do now. As a consequence, like Sam in that case study, you are a witness to wrong-doing.

It’s important, of course, that you trust the evidence. The prime source is Ofqual’s November 2018 report, Marking Consistency Metrics – An update, which presents the results of an extensive research project in which very large numbers of GCSE, AS and A level scripts were in essence marked twice – once by an ‘assistant’ examiner (as happens in ‘ordinary’ marking each year), and again by a subject senior examiner, whose academic judgement is the ultimate authority, and whose mark, and hence grade, is deemed ‘definitive’, the arbiter of ‘right’.

Each script therefore had two marks and two grades, enabling those grades to be compared. If they were the same, then the ‘assistant’ examiner’s grade – the grade that is on the candidate’s certificate – corresponds to the senior examiner’s ‘definitive’ grade, and is therefore ‘right’; if the two grades are different, then the assistant examiner’s grade is necessarily ‘non-definitive’, or, in plain English, wrong.

You might have thought that the number of ‘non-definitive’/wrong grades would be small and randomly distributed across subjects. In fact, the key results are shown on page 21 of Ofqual’s report as Figure 12, reproduced here:

Figure 1: Reproduction of Ofqual’s evidence concerning the reliability of school exam grades

To interpret this chart, I refer to this extract from the report’s Executive Summary:

The probability of receiving the ‘definitive’ qualification grade varies by qualification and subject, from 0.96 (a mathematics qualification) to 0.52 (an English language and literature qualification).

This states that 96% of Maths grades (all varieties, at all levels), as awarded, are ‘definitive’/right, as are 52% of those for Combined English Language and Literature (a subject available only at A level). Accordingly, by implication, 4% of Maths grades, and 48% of English Language and Literature grades, are ‘non-definitive’/wrong. Maths grades, as awarded, can therefore be regarded as 96% reliable; English Language and Literature grades as 52% reliable.

Scrutiny of the chart will show that the heavy black line in the upper blue box for Maths maps onto about 0.96 on the horizontal axis; the equivalent line for English Language and Literature maps onto 0.52. The measures of the reliability of the grades for each of the other subjects are designated similarly. Ofqual’s report does not give any further numbers, but Table 1 shows my estimates from Ofqual’s Figure 12:

Subject                                                    ‘Definitive’ grade   ‘Non-definitive’ grade
Maths (all varieties)                                            96%                    4%
Chemistry                                                        92%                    8%
Physics                                                          88%                   12%
Biology                                                          85%                   15%
Psychology                                                       78%                   22%
Economics                                                        74%                   26%
Religious Studies                                                66%                   34%
Business Studies                                                 66%                   34%
Geography                                                        65%                   35%
Sociology                                                        63%                   37%
English Language                                                 61%                   39%
English Literature                                               58%                   42%
History                                                          56%                   44%
Combined English Language and Literature (A level only)          52%                   48%

Table 1: My estimates of the reliability of school exam grades, as inferred from measurements of Ofqual’s Figure 12.

Ofqual’s report does not present any corresponding information for each of GCSE, AS or A level separately, nor any analysis by exam board. Also absent is a measure of the all-subject overall average. Given, however, the maximum value of 96%, and the minimum of 52%, the average is likely to be somewhere in the middle, say, in the seventies; in fact, if each subject is weighted by its cohort, the resulting average over the 14 subjects shown is about 74%. Furthermore, if other subjects – such as French, Spanish, Computing, Art… – are taken into consideration, the overall average is most unlikely to be greater than 82% or less than 66%, suggesting that an overall average reliability of 75% for all subjects is a reasonable estimate.
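As a quick arithmetic check, the averaging described above can be reproduced from the Table 1 estimates. The snippet below is illustrative only: it computes the simple (unweighted) mean of the 14 subject reliabilities, which comes out at about 71%; the blog’s figure of roughly 74% additionally weights each subject by cohort size, and since those cohort numbers are not given here, the `weighted_mean` helper is demonstrated only with equal weights.

```python
# Reliability ('definitive' grade) percentages, copied from Table 1.
reliability = {
    "Maths (all varieties)": 96, "Chemistry": 92, "Physics": 88,
    "Biology": 85, "Psychology": 78, "Economics": 74,
    "Religious Studies": 66, "Business Studies": 66, "Geography": 65,
    "Sociology": 63, "English Language": 61, "English Literature": 58,
    "History": 56, "Combined English Language and Literature": 52,
}

def weighted_mean(values, weights):
    """Cohort-weighted mean: sum(w_i * v_i) / sum(w_i)."""
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

values = list(reliability.values())
# Equal weights reduce the formula to the simple unweighted mean.
unweighted = weighted_mean(values, [1] * len(values))
print(f"Unweighted mean reliability: {unweighted:.1f}%")  # ≈ 71.4%
```

With real cohort sizes as weights (larger subjects such as Maths and English counting for more), the same function would give the cohort-weighted figure of about 74% quoted above.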

That’s the evidence that, across all subjects and levels, about 75% of grades, as awarded, are ‘definitive’/right and 25% – one in four – are ‘non-definitive’/wrong – evidence that has been in the public domain since 2018. But evidence that has been much disputed by those with vested interests.

Ofqual’s results are readily explained. We all know that different examiners can, legitimately, give the same answer (slightly) different marks. As a result, the script’s total mark might lie on different sides of a grade boundary, depending on who did the marking. Only one grade, however, is ‘definitive’.

Importantly, there are no errors in the marking studied by Ofqual – in fact, Ofqual’s report mentions ‘marking error’ just once, and then in a rather different context. All the grading discrepancies measured in Ofqual’s research are therefore attributable solely to legitimate differences in academic opinion. And since the range of legitimate marks is far narrower in subjects such as Maths and Physics, as compared to English Literature and History, then the probability that an ‘assistant’ examiner’s legitimate mark might result in a ‘non-definitive’ grade will be much higher for, say, History as compared to Physics. Hence the sequence of subjects in Ofqual’s Figure 12.
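The mechanism described above can be made concrete with a toy simulation of my own devising (the boundary, marks and spreads below are illustrative assumptions, not figures from Ofqual’s report): a script whose ‘definitive’ mark sits near a grade boundary is re-marked with a legitimate spread of examiner opinion, and we count how often the awarded grade lands on the wrong side of the boundary. A narrow spread (Maths-like) flips the grade rarely; a wide spread (essay-subject-like) flips it often.

```python
import random

def flip_probability(true_mark, boundary=60, spread=3, trials=100_000, seed=42):
    """Estimate how often an assistant examiner's legitimate mark
    (uniform within +/- spread of the definitive mark) lands on the
    opposite side of a grade boundary from the definitive mark."""
    rng = random.Random(seed)
    definitive_side = true_mark >= boundary
    flips = 0
    for _ in range(trials):
        awarded = true_mark + rng.uniform(-spread, spread)
        if (awarded >= boundary) != definitive_side:
            flips += 1
    return flips / trials

# A script two marks above the boundary: compare a narrow legitimate
# spread of marks with a wide one.
narrow = flip_probability(true_mark=62, spread=3)   # expected ~1/6
wide = flip_probability(true_mark=62, spread=8)     # expected ~3/8
print(f"narrow spread: {narrow:.1%}, wide spread: {wide:.1%}")
```

The wider the range of legitimate marks, the larger the share of scripts whose awarded grade differs from the ‘definitive’ one, which is exactly the subject ordering seen in Ofqual’s Figure 12.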

As regards appeals, in 2016, Ofqual – in full knowledge of the results of this research (see paragraph 28 of this Ofqual Board Paper, dated 18 November 2015) – changed the rules, requiring that a grade can be changed only if a ‘review of marking’ discovers a ‘marking error’. To quote an Ofqual ‘news item’ of 26 May 2016:

Exam boards must tell examiners who review results that they should not change marks unless there is a clear marking error. …It is not fair to allow some students to have a second bite of the cherry by giving them a higher mark on review, when the first mark was perfectly appropriate. This undermines the hard work and professionalism of markers, most of whom are teachers themselves. These changes will mean a level-playing field for all students and help to improve public confidence in the marking system.

This assumes that the legitimate marks given by different examiners are all equally “appropriate”, and identical in every way.

This assumption, however, is false: if one of those marks corresponds to the ‘definitive’ grade, and another to a ‘non-definitive’ grade, they are not identical at all. Furthermore, as already mentioned, there is hardly any mention of marking errors in Ofqual’s November 2018 report. All the grade discrepancies they identified can therefore only be attributable to legitimate differences in academic opinion, and so cannot be discovered and corrected by the rules that have been in place since 2016.

Over to you…

So, back to that case study.

Having read this far, like Sam, you have knowledge of wrong-doing – not Pat tearing a strip off Alex, but Ofqual awarding some 1.6 million wrong grades every year. All with no right of appeal.

What are you going to do?

You’re probably thinking something like, “Nothing”, “It’s not my job”, “It’s not my problem”, “I’m in no position to do anything, even if I wanted to”.

All of which I understand. No, it’s certainly not your job. And it’s not your problem directly, in that it’s not you being awarded the wrong grade. But it might be your problem indirectly – if you are involved with admissions, and if grades play a material role, you may be accepting a student who is not fully qualified (in that the grade on the certificate might be too high), or – perhaps worse – rejecting a student who is (in that the grade on the certificate is too low). Just to make that last point real, about one candidate in every six with a certificate showing AAA for A level Physics, Chemistry and Biology in fact truly merited at least one B. If such a candidate took a place at Med School, for example, not only is that candidate under-qualified, but a place has also been denied to a candidate with a certificate showing AAB but who merited AAA.

And although you, as an individual, are indeed not in a position to do anything about it, you, collectively, surely are.

HE is, by far, the largest and most important user of A levels – and it is relying on a ‘product’ that is only about 75% reliable. HE, collectively, could put significant pressure on Ofqual to fix this, if only by printing “OFQUAL WARNING: THE GRADES ON THIS CERTIFICATE ARE ONLY RELIABLE, AT BEST, TO ONE GRADE EITHER WAY” on every certificate – not my statement, but one made by Ofqual’s then Chief Regulator, Dame Glenys Stacey, in evidence to the 2 September 2020 hearing of the Education Select Committee, and in essence equivalent to the fact that about one grade in four is wrong. That would ensure that everyone is aware of the fact that any decision, based on a grade as shown on a certificate, is intrinsically unsafe.

But this – or some other solution – can happen only if your institution, along with others, acts accordingly. And that can happen only if you, and your colleagues, band together to influence your department, your faculty, your institution.

Yes, that is a bother. Yes, you do have other urgent things to do.

If you do nothing, nothing will happen.

But if you take action, you can make a difference.

Don’t just walk on by.

Dennis Sherwood is a management consultant with a particular interest in organisational cultures, creativity and systems thinking. Over the last several years, Dennis has also been an active campaigner for the delivery of reliable GCSE, AS and A level grades. If you enjoyed this, you might also like https://srheblog.com/tag/sherwood/.


Leave a comment

The urgent need to facilitate environmental justice learning in HE institutions

by Sally Beckenham

The crises we are facing globally, from climate change and climate change dispossession to drought and food insecurity, are intersecting social and environmental issues, which need to be recognised and addressed accordingly through integrated and holistic measures. This can only be achieved by eschewing the tendency of existing governance and economic systems to silo social and environmental problems, as if they are separate concerns that can be managed – and prioritised – hierarchically. Much of this requires a better understanding of environmental injustice – the ways in which poor, racialised, indigenous and other marginalised communities are overlooked and/or othered in this power hierarchy, such that they must face a disproportionate burden of environmental harm.

This is happening with disconcerting regularity around the world, often going under the radar but sometimes making headlines, as for example in May this year, when institutionalised environmental racism in the U.S. manifested in the placement of a copper mine on land inhabited by and sacred to the Apache indigenous group (Sherman, 2025). With limited political power to challenge it they are left to face dispossession, loss of livelihood and physical and mental health ill-effects (Morton-Ninomiya et al, 2023). We have seen this making headlines closer to home recently too, with evidence suggesting that toxic air in the UK is killing 500 people a week and most affecting those in socioeconomically disadvantaged areas (Gregory, 2025). An environmental problem (such as air pollution) cannot be disentangled from its social causes and effects. Or to put it another way, violence done to the environment is violence done to a particular group of people.

A transformative response to our global challenges that re-centres environmental justice will require a paradigm shift in the ways that we govern, construct our societies, build our communities, run our economies, design our technologies and engage with the non-human world. The role of higher education will be critical to even a modest move in this direction. This is because, as they are probably tired of hearing, this generation of students will shape our collective futures, so it matters that they are literate in the deep entanglement of environmental and social justice challenges. Moreover, as Stickney and Skilbeck caution, “it is inconceivable that we will meet drastic carbon reduction targets without massive coordinated efforts, involving policymakers and educators working in concert at all levels of our governments and education systems” (Stickney and Skilbeck, 2020).

In Ruth Irwin’s article ‘Climate Change and Education’ she alerts us to Heidegger’s treatise in Being and Time (1962) that the effectiveness of a tool’s readiness is ‘hidden’ – only revealed when it ceases to function. Climate might be viewed as a heretofore ‘hidden’ tool, in that it affords opportunities for human action; it has “smoothly enabled our existence without conscious consideration” (Irwin, 2019). Yet its dynamic quality is now an overt, striking, looming spectre threatening the existence of all life on earth; the ‘environment’ writ large is revealing itself through ecological and social breakdown, surfacing our essential reliance upon it as natural beings. Thus unless higher education is competent in dealing with the issues of environmental crisis at all of its registers – social, environmental, political and ecological – the institution of education will be unable to fulfil its fundamental task of knowledge transfer for what is a clear public good (Irwin, 2019). Put another way, “HEIs have a responsibility to develop their educational provision in ways that will support the social transformation needed to mitigate the worst effects of the environmental crisis.” (Owens et al, 2023).

Indeed, HE itself requires a paradigm shift, given that these realities are unfolding alongside widespread scrutiny of higher education institutions, including debates about decolonising the academy (Jivraj, 2020; Mintz, 2021), free speech on university campuses, and how universities are preparing students to meet these pressing issues (Woodgates, 2025). To keep pace with these changes and meet such challenges, educators from across disciplines will need to commit to embedding environmental justice education more widely across programme curricula, session design and teaching practices. It must be recognised as a vital – rather than token – component of environmental education. Doing so fully and effectively also requires us to recognise that environmental justice education encompasses not only subject matter but pedagogical practice. This is the case for all academic disciplines – including those that might seem peripheral to the teaching of environmental issues.

EJE in HE is a developing area of scholarship and field of study that has gathered pace only over the last decade. Much of the research to date has been focused on the US, where studies have shown that environmental justice remains marginal to or excluded from the curricular offerings of most environmental studies programmes – let alone those not directly related to environmental education (Garibay et al, 2016). A report by the North American Association for Environmental Education (NAAEE), which studied the policies of 230 public U.S. HE institutions and 36 state boards of higher education, found that only 6% of institutions with climate change content in their policies referred to climate justice issues and indigenous knowledge practices (MECCE Project & NAAEE, 2023). Other work has shown that STEM education has tended to frame questions around exploitation of natural resources or technological development as disconnected from social and economic inequalities, though this is starting to be challenged (Greenberg et al, 2024).

Emerging research into EJ in HE encompasses pedagogical approaches (Rabe, 2024; Moore, 2024); classroom and teaching practices (Walsh et al, 2022; Cachelin & Nicolosi, 2022; D’Arcangelis & Sarathy, 2015), the relationship between sustainability and climate justice education (Haluza-DeLay, 2013; Kinol et al, 2023) and curriculum development (Garibay et al, 2016). In identifying what EJE looks like these studies foreground the importance of community-engaged learning (CEL), providing students with the opportunity to learn about a socio-environmental problem from those with lived experience; critical thinking with regards to positionality, power structures and (especially indigenous) knowledge systems, and a deep concern with place. These critical components are crucial because tackling an act or acts of environmental injustice against marginalised populations often cannot be achieved without addressing systemic power imbalances.

What also links these studies is an acknowledgement of the complexity of EJE. It is a difficult subject and practice to grapple with for several reasons. Firstly, it means exposing students (and educators) to “an onslaught of bad news” (Cachelin & Nicolosi, 2022), which can elicit feelings of hopelessness and helplessness, so it is little wonder that expressions of anxiety and alarm are growing within these cohorts (Wallace, Greenburg & Clark, 2020) – something that needs to be borne in mind. Secondly, EJE requires us to find a way to meaningfully connect with philosophical, discursive, historical and practical questions about power, ethics and the relationship between human beings and the natural environment, within the disciplinary parameters of a specific curriculum. This means doing difficult work not only to change current systems and processes (Forsythe et al, 2023) but also to make transformative rather than piecemeal efforts. For example, this might mean actively absorbing students into a community partner’s work in an engaged rather than service-learning model, or moving beyond a simple ‘guest lecture’ format to invite more in-depth input into modules or programmes from a community partner.

This is a challenge that we shouldn’t understate for many academics and institutions already coping with high workloads (Smith, 2023), stress (Kinman et al, 2019) and job insecurity across a beleaguered sector (The Independent, 2024; The Guardian, 2025). Through this emerging EJE scholarship literature, we are starting to see that, “promoting opportunities for HE educators to develop and enact critical and transformative environmental pedagogy… is a complex business mediated by a variety of (personal, material and social) factors. It involves negotiating conflict, and understanding and confronting entrenched structures of power, from the local and institutional to the national and global.” (Owens et al, 2023). 

A third (though by no means final) challenge in teaching and learning EJ in higher education is in finding and making space for it in a landscape that is strongly oriented towards sustainability education. Although there is certainly overlap – for example to the extent that the liberal logic underpinning the latter also informs distributive justice – sustainability education has different intellectual and ideological origins to EJ scholarship. Both are valuable, but we should be questioning whether we can justify a lack of explicit EJ practice and framing simply because we are already having sustainability conversations, and instead find space for both. It can be easy to (inadvertently) depoliticise environmental education by avoiding the perceived messiness and complexity of justice in favour of the more technocratic and measurable ‘sustainability’ (Haluza-DeLay, 2013).

My research seeks to develop a better understanding of the state of environmental justice education in the HE landscape, beginning by mapping its development in the UK. This will reveal the extent and means by which EJE is being incorporated across programme curricula, session design and teaching practices in the UK HE context. In doing so we can identify the intersections of EJE with other dominant pedagogies, including sustainability education and solutions-focused approaches. To pursue a provincialising agenda and avoid the parochial perspective that EJE is the preserve of HEIs in the global North, there is also much value in exploring what EJE looks like in HEIs in the global South, and where cross-cultural lessons can be shared. The questions we need to be asking are:

  • How is environmental justice being taught and learnt and where do we go from here?
  • How are educators overcoming the challenges involved in engaging with EJE?
  • What best practices could we champion?

Sharing methods, strategies and pedagogical approaches for EJE cross-institutionally and cross-culturally will be a step towards helping us build a better collective, collaborative response to the urgency of our intersecting socio-environmental crises.

Dr Sally Beckenham is Lecturer in Human Geography and Programme Lead and Admissions Tutor for the BA Human Geography & Environment in the Department of Environment & Geography, University of York. She is also Chair of the Teaching Development Pool and member of the Interdisciplinary Global Development Centre (IGDC). She is an interdisciplinary political geographer with degrees in Modern History, International Politics and International Relations, and welcomes collaboration. Email: sally.beckenham@york.ac.uk Bluesky: @sallybeckenham.bsky.social.