SRHE Blog

The Society for Research into Higher Education



Students in quality assurance – representatives, partners, or even experts?

by Jens Jungblut & Bjørn Stensaker

Throughout Europe, students are often regular members of external quality assurance panels mandated to perform evaluations and accreditations in higher education. While this role has been secured through the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG), we have little knowledge about how students participate in such panels and which roles they take up. In a paper presented at the SRHE conference in Nottingham in December 2025, we addressed this issue – both conceptually and empirically.

One could imagine several roles that students might play as part of an external quality assurance panel. Students are most often seen as representatives of their fellow students. This has implications for how students are appointed to such panels, as various student interest organizations usually have the power to nominate specific students for the task. More recently, the idea of students as partners has also gained interest, where a key assumption is that students should be involved and participate in all aspects and processes related to their own education – including quality assurance. The initiative “Student Partnerships in Quality Scotland” (sparqs) is a well-known example of this inclusive approach (Varwell, 2021). However, one could argue that students may even take on an expertise-based role in quality assurance. This type of role is not based on experience per se but rather on the ability to reflect upon the knowledge possessed and the ability to engage in systematic efforts to learn more – based on these reflections (Ericsson, 2017).

In our paper presented at the SRHE conference, we argue that the role of students participating in quality assurance panels (or any other related processes in higher education) may not be static, restricting students to merely one role at a time (see also Stensaker & Matear, 2024). Rather, we argue – in line with Holen et al (2021) – that the roles students take on are highly dynamic. A consequence is that students may shift rapidly from one role to another, depending on, for example, the evaluation context, the committee setting, or the issue being discussed.

To test our assumptions, we conducted a survey targeting students taking part in European quality assurance processes; to be more specific, we targeted the “Quality Assurance Student Experts Pool” within the European Students’ Union. This group was established in 2009 with the aim of improving the contribution of students to quality assurance in Europe. When included in the pool, students undergo training sessions providing them with relevant background knowledge about quality assurance processes and the ESG. The members of the pool are then called upon by quality assurance agencies throughout Europe to act as student representatives on their quality assurance panels at program, institutional, or national level, performing evaluations, accreditations and other forms of assessment. The Quality Assurance Student Experts Pool therefore represents a unique entity in Europe, as it is the only European structure that recruits and trains students for these roles. Thirty-five students (of a total of 90) responded to our survey.

The students responding have on average been involved in quality assurance for more than four years, and over 60 percent have participated in four or more evaluation or accreditation processes. In line with our expectations, the students indeed report taking on several roles during evaluation processes: they are representatives of students, they feel they are equal partners within the evaluation panels they are part of, and they also see themselves as experts. In our data, we could not identify a clear hierarchy between the different roles. However, our data suggest that students are often perceived as partners, but less often as experts. A possible interpretation is that temporality and experience matter: students may initially be viewed as representatives and partners when starting their work within a panel, and through participating in multiple panels over time they may demonstrate expertise which is in turn recognized by their peers. An interesting feature of the data is that the students in the Quality Assurance Student Experts Pool regularly share knowledge among the members of the pool, and in that way contribute to continuously building the expertise of all members. Expertise is thus not taken for granted or expected as a prerequisite for membership, but rather nurtured, systematised and made available to newer and future members.

We want to thank all the students who took the time to respond to our small questionnaire. While our study is exploratory, we think it provides new insights into student involvement and influence in a setting characterized by a high level of expertise and professionalism, and we hope the findings can help future research further unpack the dynamic nature of students’ roles in quality assurance panels.

Jens Jungblut is a Professor at the Department of Political Science at the University of Oslo. His main research interests include party politics, policy-making, and public governance in the knowledge policy domain (education & research), organizational change in higher education, agenda-setting research, and the role of (academic) expertise in policy advice.

Bjørn Stensaker is a Professor at the Department of Education at the University of Oslo. He has a special research interest in governance, leadership, and organizational change in higher education – including quality assurance. He has published widely on these topics in a range of journals and book series.



Teaching students to use AI: from digital competence to a learning outcome

by Concepción González García and Nina Pallarés Cerdà

Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.

Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?

In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).

Moving beyond static measures of digital competence

Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.

This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.

Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?

A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.

There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation—provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.

This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.

Our study: teaching students to work with AI, not around it

We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.

The crucial difference lay in how generative AI was integrated:

  • In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
      ◦ contextualising questions
      ◦ breaking problems into steps
      ◦ iteratively refining prompts
      ◦ checking their own solutions before turning to the AI.
  • Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
  • In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.

Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.

We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
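
To make this estimation strategy concrete, here is a minimal sketch of such a difference-in-differences model in Python. It is an illustration of the general approach described above, not the authors’ actual code: the file name and column names (`top_level`, `treated`, `post`, `student_id`, `class_group`) are all hypothetical.

```python
# Minimal sketch of a difference-in-differences model with individual fixed
# effects, assuming a long-format panel (one row per student per wave).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("digcomp_panel.csv")  # hypothetical file

# Linear probability model: 'top_level' is 1 if a student reports the highest
# DigComp level for a given competence. C(student_id) absorbs individual fixed
# effects, so the coefficient on treated:post is the DiD estimate -- the change
# over time for the treatment group relative to the control group.
model = smf.ols(
    "top_level ~ post + treated:post + C(student_id)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["class_group"]})

print(model.params["treated:post"])  # eg ~0.15 would be a 15-point gain
```

Clustering the standard errors by class group reflects the fact that, in the study’s design, allocation to treatment happened at the class-group level rather than per student.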

What changed when AI was taught and used in this way?

At the overall sample level, we found statistically significant improvements in three areas:

  • Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
  • Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
  • Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.

In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.

A compensatory effect: AI as a potential leveller, not just an amplifier

When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.

This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.

Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.

From ‘allow or ban’ to ‘how do we teach with AI?’

For higher education policy and practice, the implications are twofold.

First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.

Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.

This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:

  • How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
  • How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
  • Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?

Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.

Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.

Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.



Rethinking metrics, rethinking narratives: why widening access at elite universities requires more than procedural fairness

by Kate Ayres

For many years, the fair access agenda in UK HE has emphasised more transparent and consistent admissions processes, underpinned by clearer criteria and targeted support. As a qualified accountant trained in Lean Six Sigma, I’ve always been drawn to efficiency, clarity, and measurable improvement – principles that shaped much of my work in HE. However, as I moved into more senior roles and worked more closely with institutional decision-makers, I started to ask a different kind of question: why do some reforms, even when implemented well, seem to make little real difference?

That question sits at the centre of my doctoral research. Despite significant reforms, the social composition of Durham University’s student body has remained largely unchanged. From within the institution, it was evident that fairer offer-making was not translating into meaningful shifts in the home-student entrant profile. This revealed an uncomfortable truth: no amount of investment or policy reform can, by itself, reshape the social forces that determine who sees a Durham degree as desirable.

To understand why, we need to stop looking only at what universities do, and start looking at how students behave, and how the wider customer base, or audience, signals who belongs where.

Why aren’t internal reforms enough?

The limited shift in Durham’s home-student body prompts a key question: are our current metrics assuming universities can control demand, when in fact they can only affect the choices of applicants already in their pool?

My research used fourteen years of UCAS admissions data for Durham University to analyse how applicant characteristics, predicted attainment, school type, and socio-economic background intersect with admissions decisions and outcomes. Using multivariate logistic regression and Difference-in-Differences (DiD) analysis, I examined the impact of Durham’s 2019 move from decentralised to centralised admissions.
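
For readers who want a sense of what these two methods look like in practice, the sketch below shows a simplified version in Python. It is an illustration of the general approach, not the study’s actual code; the data file and variable names (`offer`, `contextual`, `post2019`, etc) are hypothetical.

```python
# Illustrative sketch: multivariate logistic regression plus a DiD-style
# interaction around the 2019 centralisation of admissions.
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.read_csv("ucas_applications.csv")  # hypothetical file

# Probability of receiving an offer as a function of applicant
# characteristics (contextual flag, predicted attainment, school type, SES).
logit = smf.logit(
    "offer ~ contextual + predicted_points + C(school_type) + C(ses_group)",
    data=apps,
).fit()
print(logit.summary())

# 'post2019' marks admissions cycles after the reform, so the
# contextual:post2019 interaction captures how offer-making to contextual
# applicants changed relative to the rest of the pool.
did = smf.logit("offer ~ contextual * post2019", data=apps).fit()
print(did.params["contextual:post2019"])
```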

Results

Since the centralisation of admissions in 2019:

  • Contextual students are now 72% more likely to receive an offer, reflecting a major shift in offer-making behaviour.
  • Contextual applicants to selecting departments remain 40% less likely to get offers than those applying to recruiting ones.
  • No improvement is seen in firm-acceptance rates, suggesting culture or fit still shape applicant choices.
  • Insurance-acceptance has risen 21%, showing Durham is increasingly seen as a backup option for these students.
  • Contextual students are now 2% less likely to enrol after receiving an offer, raising concerns about deeper barriers to entry.

Trend Analysis

The findings were initially encouraging: contextual applicants became more likely to receive an offer after centralisation. However, the increased offer rate had very limited effect on who actually enrolled. Contextual applicants were increasingly likely to accept alternative universities ahead of Durham. Meanwhile, the proportion of entrants from higher parental SES groups increased, and independent-school students (already overrepresented) continued to make up around one-third of Durham’s home undergraduate intake in 2023.

Who is in control of demand?

While Durham has a history of taking affirmative action for contextual students, these findings illustrate that the OfS-set POLAR4 ratios will never be achievable for somewhere like Durham because these measures assume that universities themselves control demand. Drawing on Organisational Ecology, I argue that this assumption is flawed.

To understand why improved offer-making did not shift entrant composition, we need to look beyond institutional behaviour and examine the ecosystem dynamics that shape demand. Just as ecosystems rely on diversity, so does HE. No institution can appeal to every audience, nor should it. Organisations operate within ecosystems shaped by social, economic, and political forces, and crucially by their audiences, who ultimately determine demand; it is therefore the audience that defines an organisation’s niche. In HE, applicants gravitate toward universities that align with their social tastes, expectations, and sense of belonging. The most powerful forces shaping demand are thus the social networks and the information transmitted within them, which influence applicants long before they apply: what they hear at school, family expectations, and what peers believe “people like us” do – and where “people like us” go.

Currently, wider systemic shifts are reinforcing and entrenching Durham’s niche, especially among white independent-school applicants:

  1. As Oxbridge intensifies its widening participation initiatives, applicants who traditionally succeed (predominantly white students from independent schools) are increasingly less likely to secure offers.
  2. These applicants seek the closest alternative to the Oxbridge experience, with Durham emerging as a preferred option.
  3. Durham is increasingly accepted as a firm choice because of its perceived “fit” with these applicants’ identity and expectations (as seen in this research).
  4. These applicants typically achieve their predicted grades, making entry more likely.
  5. Their growing presence reinforces existing social narratives about Durham’s student profile.
  6. Consequently, the entrant composition remains socially narrow, and these dynamics may intensify.
  7. The narrative of Durham as a socially exclusive institution persists.
  8. Applicants from non-traditional backgrounds thus perceive a lack of belonging.
  9. As a result, these applicants are less likely to select Durham as their firm choice.

While these dynamics may prompt questions about whether Durham could or should shift away from its position as an “almost-Oxbridge” institution, the evidence suggests that only limited movement is structurally possible. Organisational Ecology predicts that Durham’s niche will remain relatively stable over time, and there are many benefits to sticking with a niche approach. The university may be able to broaden its appeal slightly at the margins, drawing in more students from POLAR4 Q3 and Q4 backgrounds, but POLAR4 Q1 and Q2 students are likely to remain outliers. The real question is therefore not whether Durham can radically transform its appeal, but whether it can create the conditions in which those who do apply feel they can belong and thrive.

This is where the OfS should take action: rather than holding universities accountable for applicant pools (which they do not control), it should focus on the areas where institutional agency is strongest. Improving the lived experience of contextual students, strengthening narrative and cultural inclusion, and raising offer-to-acceptance conversion rates are all within Durham’s sphere of influence. Current patterns, particularly the relatively low acceptance and entry rates among contextual applicants, suggest that cultural barriers remain. Regulators should therefore attend less to the composition of the total entrant pool and more to how effectively institutions support, retain, and attract those who already see themselves as potential members of the community.

Taken together, the wider systemic effects detailed above reinforce, rather than shift, Durham’s niche. Only a proportion of applicants will ever feel an affinity with the institution, which is entirely natural in a diverse HE ecosystem where students gravitate toward environments that resonate with their identities and expectations.

These systemic forces lie largely outside Durham’s control, and changing the feedback loop requires more than procedural reform. It demands narrative change within the social networks where ideas of belonging are first formed, and a commitment to ensuring that the lived experiences of contextual students at Durham are positive and affirming. Building stronger partnerships with schools can help shift these early perceptions, while amplifying the stories and experiences of students from diverse backgrounds can offer powerful, alternative points of identification. Applicants make decisions based not just on information, but on a deep, intuitive sense of whether a place feels like it’s for “people like us”. This cannot be achieved through admissions policy, strategy, or marketing alone. Institutions can also look to examples such as the University of Bristol, which has reshaped its entrant pool by doing exactly this: its efforts have influenced not only who feels able to apply, but who can genuinely imagine themselves thriving within the institution, resulting in a gradual shift in its niche.

Proposal for new metrics

If we evaluate universities on metrics that assume they control demand, we will misread both the problem and the solution. In the short term, universities cannot determine who chooses to apply, but they can influence who feels confident enough to accept an offer, which may, as seen with Bristol, create gradual shifts in the entrant pool over time. Universities can and should work to broaden their niches, yet Organisational Ecology reminds us that institutions rarely move far from their point of peak appeal, meaning Durham’s niche is likely to remain relatively stable and only widen at the margins. Expecting rapid transformation would be like assuming a population adapted to the Arctic could swiftly relocate to the Caribbean: not impossible, but not fast. Any substantial change in who feels an affinity with Durham will likewise unfold slowly, as cultural experiences and social narratives evolve. In the meantime, improving the lived experience of contextual students, and seeing this reflected in rising conversion rates, is the most realistic and meaningful early sign of movement within the niche.

This stability also means that proportion-based performance measures will continue to make the University appear as though it is underperforming, even when it is behaving exactly as expected within its ecological position. Durham faces an added complexity: it will always occupy a relatively small share of the HE market because the physical constraints of Durham City limit expansion, so it cannot broaden its niche simply by admitting more students.

Therefore, metrics focused solely on broad institutional demand will never fully capture the dynamics of access or institutional “progress”. However, rising conversions – from offer to firm acceptance or offer to entry – among contextual students would signal a growing sense of fit, belonging, or affinity. And even if these students never form a majority, improving conversion is a meaningful and realistic way to measure widening participation progress, because it focuses on what an institution can actually influence: the student experience.

To take these social forces seriously, and to acknowledge that a healthy HE system depends on a diversity of institutions meeting the diverse needs of students, we need metrics that reflect audience attraction and demand dynamics. Current proportion-based measures fail to capture these realities. Instead, I propose:

  • Because Russell Group institutions occupy a similar position in the Blau Space (they attract applicants with comparable social, cultural, and educational characteristics), organisational ecology theory suggests they compete in neighbouring, overlapping niches. This means that isolated widening participation initiatives at a single institution may simply redistribute socially advantaged applicants across the group rather than increase diversity overall. Coordinated widening participation strategies across the Russell Group would therefore reduce competitive displacement and support genuine, sector-wide broadening of access.
  • Introduce regulatory metrics that reward successful conversion, for example offer-to-firm-acceptance rates for underrepresented groups, rather than focusing solely on offers or entrant proportions. This would bring cultural belonging into WP evaluation by capturing the fact that where these students accept an offer and enter, there is likely to be a greater sense of affinity – a place where they feel they can “fit”, belong, and succeed.
  • Measure and report the impact of cross-institution outreach among universities with similar audience profiles, recognising that widening participation is driven by sector-level dynamics rather than isolated institutional efforts.
  • Track behavioural demand patterns (such as firm-choice decisions) across groups of institutions to reveal how social signalling influences applicant preferences.

The future of access lies in changing what we measure—and what we tell ourselves

Universities often feel they are held solely accountable for widening access, yet my research demonstrates that applicant perceptions, social networks, and systemic hierarchies play an equally powerful role. The most important conclusion of this research is that access outcomes are co-produced. Universities are not solely responsible for entrant composition; applicants are active agents whose perceptions and choices shape institutional realities. To make real progress, we need approaches that reflect this distributed responsibility, rethinking both the metrics we prioritise and the narratives we reproduce.

Fair admissions processes matter – but without addressing the social dynamics shaping applicant behaviour, procedural fairness alone will never deliver equitable outcomes. By shifting the sector’s focus to behavioural metrics and narrative change, we can begin to challenge the feedback loops that sustain exclusivity and move toward a system where access is genuinely a collaborative effort.

Durham University may never appeal to more than a small share of the applicant pool, but perhaps the real measure of success is ensuring that those who do not fit the perceived mould feel confident enough to accept and enter. Ecosystems flourish through diversity, and so does HE; no single institution can – or should – meet every need. Our responsibility is to keep access fair, to reshape the narratives that limit choice, and to support those who want to join us to feel that they truly belong. In focusing on this conversion (from offer to entrant) we move toward a more honest and sustainable understanding of what widening participation success looks like. We cannot control the applicant pool, but we can influence the student experience, the narratives that spread through their networks, and their confidence in imagining themselves belonging here.

Dr Kate Ayres is a Chartered Management Accountant (CIMA) with a DBA from Durham University, where her research explored market niches and widening participation in UK HE through organisational ecology using quantitative methods. She has worked across finance, academic, and project management roles in UK Higher Education, including positions at Durham University and the University of Oxford. Kate currently serves as an Academic Mentor on the Senior Leaders Apprenticeship at Durham University Business School. Her work brings together analytical insight, organisational experience, and a commitment to improving HE culture. She also co-manages and sings with the Durham University Staff Chamber Choir, which she founded.



Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

by Carmen Cabrera and Ruth Neville

Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs are usually characterised by their close resemblance to human-generated content. While GAI shows great promise to improve the learning experience in various disciplines, its growing uptake also raises concerns about misuse, over-reliance and more generally, its impact on the learning process. In response, multiple UK HE institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox. Students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard its outputs as reliable. High levels of use therefore coexist with low levels of trust.

Using GAI without trusting it

At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet, when asked about the reliability of GAI outputs, many students disagree that GAI can be considered a reliable source of knowledge. This apparent contradiction raises the question: why are students still using tools they do not fully trust? The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate. They are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students’ scepticism about the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

Not all perceptions are equal

While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perception of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning, compared to female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to the different pressures and contexts shaping how students engage with these tools. For international students in particular, GAI can help navigate language barriers or unfamiliar academic conventions. In those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns often observed in relation to academic integrity and risk-taking, where female students report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

Different interpretations of institutional guidance

Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as evidenced by our study, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

Where does the ‘problem’ lie?

Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, calling for a need to promote fairness and reduce differential risk at the institutional level.

These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA), at UCL, where she currently holds an honorary position.

Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.



Reclaiming the academic community: why universities need more than metrics

by Sigurður Kristinsson

For decades, talk of “the academic community” has flowed easily through mission statements, strategy documents, and speeches from university leadership. Yet few stop to consider what this community is or why it matters. As universities increasingly orient themselves toward markets, rankings, and performance metrics, the gap between the ideal of academic community and the lived reality of academic work has widened. But this drift is not merely unfortunate; it threatens the very values that justify the existence of universities in the first place.

This blog explores why academic community is essential to higher education, how contemporary systems undermine it, and what a renewed vision of academic life might require.

What do we mean when we talk about “community”?

The word “community” can be used in two different senses. One is descriptive: communities are simply networks of people connected by place, shared interests or regular interaction. From this sociological standpoint, academic communities consist of overlapping groups (faculty, students, administrators, service professionals) brought together by institutional roles, disciplinary identities, or digital networks, perhaps experiencing a sense of belonging, solidarity, and shared purpose.

But in debates about the purpose and future of universities, “community” is often used in a normative sense: an ideal of how academics ought to relate to one another. In Humboldtian (1810) spirit, contemporary advocates like Fitzpatrick (2021, 2024) and Bennett (1998, 2003) envision academic community as a moral and intellectual culture grounded in shared purpose, generosity, intellectual hospitality, mutual respect, and the collective pursuit of knowledge. From this philosophical perspective, community is not just a cluster of networks to be analyzed empirically but a normative vision of how scholarly life becomes meaningful. This aspirational view stands in stark contrast to the conditions shaping many universities today.

The pressures pulling academic life apart

For several decades, developments in universities around the world have been hostile to academic community. While the precise mechanisms vary, academics report strikingly similar pressures: managerial oversight, performance auditing, intensifying competition, and the steady erosion of collegial structures and shared governance. Five threats to academic community are particularly worrisome:

Organisational (not occupational) professionalism

In her analysis of how managerial logic has co-opted the language of professionalism to justify top-down control in public institutions, Julia Evetts (2003, 2009, 2011) introduced a distinction between occupational and organisational professionalism. Occupational professionalism in academia implies membership in a self-governing community of experts committed to serving society through knowledge. Today, however, universities increasingly define professionalism in organisational terms: compliance with targets, performance indicators, and standardised procedures. The result is a hybrid system: academics retain some autonomy, but it is overshadowed by bureaucratic accountability structures that fragment communal relationships and discourage collective responsibility (Siekkinen et al, 2020).

Managerialism

Managerialism prizes measurable production outputs, standardized procedures, and vertical decision-making. As Metz (2017) argues, these mechanisms degrade communal relationships among academics as well as between them and managers, students, and wider society: decisions are imposed without consultation; bonus systems reward narrow indicators rather than communal priorities; and bureaucratic layers reduce opportunities for collegial dialogue. Managerialism replaces trust with surveillance and collegial judgment with quantification.

Individualism

The rise of competition – over publications, grants, rankings, and prestige – has amplified what Bennett (2003) called “insistent individualism.” Colleagues become rivals or useful instruments. Achievements become personal currency. In such settings, it is easy to see oneself not as part of a community pursuing shared goods but as an isolated producer of measurable outputs. This ethos erodes the solidarity and relationality necessary for any robust academic culture.

Retreat from academic citizenship

Academic citizenship refers to the contributions – committee work, mentoring, governance, public engagement – that sustain universities beyond research and teaching. Yet because these activities are difficult to measure and often unrewarded, they are increasingly neglected (Macfarlane, 2005; Feldt et al, 2024). This neglect fragments institutions and weakens the norms of shared responsibility that should hold academic life together.

Troubled collegiality

Collegiality includes participatory and collective decision-making, a presumption of shared values, absence of hierarchy, supportiveness, a shared commitment to a common good, trust beyond a typical workplace, and professional autonomy. It has long been central to academic identity but has become contested. Some experience collegial labor as invisible and unevenly distributed; others see managerial attempts to measure collegiality as just another way of disciplining staff. Efforts to quantify collegiality may correct some injustices but also risk instrumentalizing it, turning a relational ideal into a bureaucratic category (Craig et al, 2025; Fleming and Harley, 2024; Gavin et al, 2023).

Across all these pressures, a common thread emerges: the forces shaping contemporary academia weaken the relationships required for intellectual work to flourish.

Why academic community matters

If community is eroding, why should we care? The answer lies in the link between community and the values that higher education claims to serve. A helpful framework comes from value theory, which distinguishes between instrumental, constitutive, and intrinsic goods.

Community as instrumentally valuable

Academic community helps produce the outcomes universities care about: research breakthroughs, learning, intellectual development, and democratic engagement. Collaboration makes research stronger. Peer support helps people grow. Shared norms encourage integrity, rigor, and creativity. Without community, academic values become harder to realize.

Community as constitutive of academic values

In many cases, community is not merely a helpful means but a necessary constituent. Scientific knowledge, as philosophers of science like Merton (1979) and Longino (1990) have long emphasized, is inherently social: it requires communal critique, peer review, and collective norms to distinguish knowledge from error. Learning, too, is fundamentally relational, as Vygotsky (1978) and Dewey (1916) argued. You cannot have science or education without community.

Community as intrinsically valuable

Beyond producing useful outcomes, community enriches human life. Belonging, shared purpose, and intellectual companionship are deeply fulfilling. Academic community offers a sense of identity, meaning, and solidarity that transcends individual achievement (Metz, 2017). In this sense, community contributes directly to human flourishing.

How community shapes academic life

Several examples show how academic values depend on community in practice:

Debates about educational values

The pursuit of academic values requires reflection on their meaning. Interpretive arguments about values like autonomy, virtue, or justice in education contribute to conversations that presuppose the collective norms of academic community (Nussbaum, 2010; Ebels-Duggan, 2015). These debates require shared standards of reasoning, openness to critique, and a shared commitment to better understanding.

Scientific knowledge and academic freedom

No individual can produce knowledge alone. Scientific communities ensure that discoveries are evaluated, replicated, and integrated into a larger body of understanding. Likewise, academic freedom is not a personal privilege but a communal norm that protects open inquiry (Calhoun, 2009; Frímannsson et al, 2022). It depends on solidarity among scholars.

Teaching as communal practice

Education flourishes in relational settings. Classrooms become communities in which teachers and students jointly pursue understanding. Weithman (2015) describes this as “academic friendship” – a form of companionship that expands imagination, fosters intellectual virtues, and shapes future citizens.

Across these cases, community is not optional; it is essential to academic values.

Rebuilding academic community: structural and cultural change

Given its importance, how might universities cultivate stronger academic communities?

Structural reform

Universities should try to resist the dominance of market logic. Sector-wide policy changes could help rebalance priorities. Hiring, promotion, and reward systems should value teaching, service, mentorship, and public engagement rather than focusing exclusively on quantifiable research metrics. Without structural support, cultural change will be difficult.

Cultural renewal

A healthier academic culture requires a different mindset—one that foregrounds generosity, relationality, and shared purpose. In Generous Thinking, Fitzpatrick (2021) argues that building real community requires humility, conversation, listening, and collaboration. Community cannot be mandated; it must be practised.

This requires academics to challenge competitive individualism, share work equitably, strengthen trust and dialogue, and reimagine collegiality as a lived practice rather than a managerial tool. Most importantly, it requires us to recognize ourselves as fundamentally relational beings whose professional purpose is intertwined with others.

A moral case for academic community

Academic community is not only epistemically valuable; it is morally significant. Relational moral theories argue that human flourishing depends on identity and solidarity. We become the moral human beings we are through our communal relationships (Metz, 2021).

Applying this to academia reveals that collegiality is grounded in shared identity and shared ends. Since the moral obligations created by academic relationships remain professional, collegial community does not require intrusive intimacy. Far from suppressing dissent or professional autonomy, solidarity requires defending academic freedom and academic values generally.

A relational understanding of morality thus implies that the ideal of academic community promises not only a more fulfilling and coherent sense of occupational purpose, but also a way of relating to others that is morally more satisfying than the current environment of individualistic competition.

Conclusion: the future depends on community

Universities today face an existential challenge. In the rush to satisfy markets, rankings, and managerial demands, they risk undermining the very relationships that make academic life meaningful. Academic community is not a nostalgic ideal; it is the cornerstone of learning, knowledge, virtue, and human flourishing.

If higher education is to reclaim a sense of purpose, it must begin by cultivating the social and moral conditions in which genuine community can grow. This requires structural reforms, cultural renewal, and a shared commitment to relational values. Without such efforts, universities will continue drifting toward fragmentation, losing sight of the goods they exist to protect.

Rebuilding academic community is not merely desirable. It is necessary – for the integrity of scholarship, for the flourishing of those who work within universities, and for the public good that higher education is meant to serve.

Sigurður Kristinsson is a Professor of Philosophy at the University of Akureyri, Iceland. His research applies moral and political philosophy in various contexts of professional practice, increasingly intersecting with the philosophy of higher education with emphasis on the social and democratic role of universities.



Reflective teaching: the “small shifts” that quietly change everything

by Yetunde Kolajo

If you’ve ever left a lecture thinking “That didn’t land the way I hoped” (or “That went surprisingly well – why?”), you’ve already stepped into reflective teaching. The question is whether reflection remains a private afterthought … or becomes a deliberate practice that improves teaching in real time and shapes what we do next.

In Advancing pedagogical excellence through reflective teaching practice and adaptation, I explored reflective teaching practice (RTP) in a first-year chemistry context at a New Zealand university, asking a deceptively simple question: how do lecturers’ teaching philosophies shape what they actually do to reflect on and adapt their teaching?

What the study did

I conducted semi-structured interviews with eight chemistry lecturers, then used thematic analysis to examine two connected strands: (1) teaching concepts/philosophy and (2) lecturer-student interaction. The paper distinguishes between:

  • Reflective Teaching (RT): the broader ongoing process of critically examining your teaching.
  • Reflective Teaching Practice (RTP): the day-to-day strategies (journals, feedback loops, peer dialogue, etc) that make reflection actionable.

Reflection is uneven and often unsystematic

A striking finding is that not all lecturers consistently engaged in reflective practices, and there wasn’t clear evidence of a shared, structured reflective culture across the teaching team. Some lecturers could articulate a teaching philosophy, but this didn’t always translate into a repeatable reflection cycle (before, during, and after teaching). I framed this using Dewey and Schön’s well-known reflection stages:

  • Reflection-for-action (before teaching): planning with intention
  • Reflection-in-action (during teaching): adjusting as it happens
  • Reflection-on-action (after teaching): reviewing to improve next time

Even where lecturers were clearly committed and experienced, reflection could still become fragmented, more like “minor tweaks” than a consistent, evidence-informed practice.

The real engine of reflection: lecturer-student interaction

Interaction isn’t just a teaching technique – it’s a reflection tool.

Student questions, live confusion, moments of silence, a sudden “Ohhh!” – these are data. In the study, the clearest examples of reflection happening during teaching came from lecturers who intentionally built in interaction (eg questioning strategies, pausing for problem-solving).

One example stands out: Denise’s in-class quiz is described as the only instance that embodied all three reflection components – using student responses to gauge understanding, adapting support during the activity, and feeding insights forward into later planning.

Why this matters right now in UK HE

UK higher education is navigating increasing diversity in student backgrounds, expectations, and prior learning alongside sharper scrutiny of teaching quality and inclusion. In that context, reflective teaching isn’t “nice-to-have CPD”; it’s a way of ensuring our teaching practices keep pace with learners’ needs, not just disciplinary content.

The paper doesn’t argue for abandoning lectures. Instead, it shows how reflective practice can help lecturers adapt within lecture-based structures, especially through purposeful interaction that shifts students from passive listening toward more active/constructive engagement (drawing on engagement ideas such as ICAP).

Three “try this tomorrow” reflective moves (small, practical, high impact)

  1. Plan one interaction checkpoint (not ten). Add a single moment where you must learn something from students (a hinge question, poll, mini-problem, or “explain it to a partner”). Use it as reflection-for-action.
  2. Name your in-the-moment adjustment. When you pivot (slow down, re-explain, swap an example), briefly acknowledge it: “I’m noticing this is sticky – let’s try a different route.” That’s reflection-in-action made visible.
  3. End with one evidence-based note to self. Not “Went fine.” Instead: “35% missed X in the quiz – next time: do Y before Z.” That’s reflection-on-action you can actually reuse.

Questions to spark conversation (for you or your teaching team)

  • Where does your teaching philosophy show up most clearly: content coverage, student confidence, relevance, or interaction?
  • Which “data” do you trust most – NSS/module evaluation, informal comments, in-class responses, attainment patterns – and why?

  • If your programme is team-taught, what would a shared reflective framework look like in practice (so reflection isn’t isolated and inconsistent)?

If reflective teaching is the intention, this article is the nudge: make reflection visible, structured, and interaction-led, so adaptation becomes a habit, not a heroic one-off.

Dr Yetunde Kolajo is a Student Success Research Associate at the University of Kent. Her research examines pedagogical decision-making in higher education, with a focus on students’ learning experiences, critical thinking and decolonising pedagogies. Drawing on reflective teaching practice, she examines how inclusive and reflective teaching frameworks can enhance student success.



Collegiality and competition in German Centres of Excellence

by Lautaro Vilches

Collegiality, although threatened by increasing competitive pressures and described as a slippery and elastic concept, remains a powerful ideal underpinning academic and intellectual practices. Drawing on two empirical studies, this blog examines the relationships between collegiality and competition in Centres of Excellence (CoEs) in the Social Sciences and Humanities (SSH) in Germany. These CoEs are conceptualised as a quasi-departmental new university model that contrasts with the ‘university of chairs’, which characterises the old Humboldtian university model, organised around chairs led by professors. Hence my research question: How do academics experience collegiality, and how does it relate to competition, within CoEs in the SSH?

In 2006, the German government launched the Excellence Strategy (then known as the Excellence Initiative), which includes a scheme providing long-term funding for Centres of Excellence. Notably, this scheme extends beyond the traditionally more collaborative Natural Sciences to encompass the Social Sciences and Humanities. Germany therefore offers a unique case to explore transformations of collegiality amidst co-existing and overlapping university models. What, then, are the key features of these models?

In the old model of the ‘university of chairs’, the chair constitutes the central organisational unit of the university, with each one led by a single professor. Central to this model is the idea of collegial leadership, according to which professors govern the university autonomously, a practice that can be traced back to the old scholastic guilds of the Middle Ages. During the eighteenth century, German universities underwent a process of modernisation influenced by Renaissance ideals, culminating in the establishment of the University of Berlin in Prussia in 1810 by Wilhelm von Humboldt. By the late nineteenth century, the Humboldtian model of the university had become highly influential, as it offered an organisational template in which the ideals of academic autonomy, academic freedom and the integration of research and teaching were institutionalised.

Within the university of chairs, collegiality is effectively ‘contained’ and enacted within individual chairs. In this structure, professors have no formal superiors and academic staff are directly subordinate to a single professor (as chair holder) – not an institute or faculty. As a result, the university of chairs is characterised by several small and steep hierarchies.

In recent decades – alongside the rise of the United States as the hegemonic power – the Anglo-American departmental model spread across the world, a shift that is associated with the entrepreneurial transformation of universities as they respond to growing competitive pressures.

Remarkably, CoEs in the SSH in Germany are organised as ‘quasi-departments’ resembling a multidisciplinary Anglo-American department. They are very large in comparison with other collaborative grants, often comprising more than 100 affiliated researchers. They are structured around several ‘Research Areas’ and led by 25 Principal Investigators (mostly professors) who must agree on the implementation of the multidisciplinary and integrated research programme on which the CoE is based.

The historical implications of this new model cannot be overstated. CoEs appear to operate as Trojan horses: cloaked in the prestige of excellence, they have introduced a fundamentally different organisational model into the German university of chairs, an institution that has endured over centuries.

Against the backdrop of these two models, what are the implications for collegiality and its relation to competition? A few clarifications are necessary. First, much of the research on collegiality has focused on governance, overlooking that collegiality is also practised ‘on the ground’. Here, I will define collegiality (a) as a form of ‘leadership and governance’, involving relations among leaders as well as interactions between leaders and those they govern; (b) as an ‘intellectual practice’ that can best be observed in the enactments of collaborative research; and (c) as a form of ‘citizenship’, involving practices that signify belonging to the CoE and its academic community.

Second, adopting this broader understanding requires acknowledging that collegiality is experienced not only by professors (in collegially governing the university) but also by the ‘invisible’ academic demos, namely Early Career Researchers (ECRs). Although often employed in precarious positions, ECRs are nonetheless significant members of the academic community, particularly in CoEs, which explicitly prioritise the training of ECRs as a core objective. Whilst ECRs are committed full time to the CoE and sustain much of its collaborative research activity, professors remain simultaneously bound to the duties of their respective positions as chairs.

A third clarification concerns the normative assumptions underpinning collegiality and its relationship to competition. Collegiality is sometimes idealised as an unambiguously positive value and practice in academia, whilst competition – in contrast – is seen as a threat to collegiality. However, this idealised depiction tends to underplay, for example, the role of hierarchies in academia, and often invokes an indeterminate past – perhaps somewhere in the 1960s – when universities were governed autonomously by male professors and generously funded through block grants, largely protected from competitive pressures or external scrutiny.

These contextual conditions have evidently changed over recent decades: competition, at both the institutional and the individual level, has intensified in academia, and CoE schemes exemplify this shift. CoE members, especially ECRs, are therefore embedded in multiple and overlapping competitions: at the institutional level, through the CoE’s race for excellence; and at the individual level, through competition for a position in the CoE, as well as for the grants, publications and networks necessary for career advancement.

How are collegiality and competition intertwined in the CoE? I identify three complex dynamics:

  • ‘The temporal flourishing of intellectual collegiality’ refers to the blooming of collegiality as part of the collaborative research work in the CoE. ECRs describe extensive engagement in organising, leading or co-leading research seminars (alongside PIs or other postdoctoral researchers), co-editing books, developing digital collaborative platforms, inviting researchers from abroad to join the CoE or organising and participating in informal meetings. Within this dynamic, competition is presented as being located ‘outside’ the CoE, temporarily deactivated. However, at the same time, ECRs remain aware of the omnipresence of competition, which ultimately threatens collegial collaboration when career paths, research topics or publications begin to converge. For this reason, intellectual collegiality and competition stand in an exclusionary relationship.
  • ‘The rise of CoE citizenship for the institutional race for excellence’ captures the strong sense of engagement and commitment shown by ECRs (but also professors) towards the CoE. It is expressed through initiatives aimed at enhancing the CoE’s collective research performance, particularly in anticipation of the competition for renewed excellence funding. This dynamic reveals that CoE citizenship and institutional competition are not oppositional but complementary, as collective engagement is mobilised in the service of competitive success.
  • ‘Collegial leadership adapting to multiple competitions’ highlights the plurality of leadership modes, each one responding to different levels and forms of competition. At the level of professors and decision-making processes at the top, traditional collegial governance is ‘overstretched’. Although professors retain full authority, they struggle to reach consensus and to lead these large multidisciplinary centres effectively. This suggests a growing demand for new skills more closely associated with the figure of the academic manager than with that of the professor. The institutional race for excellence thus places considerable strain on collegial governance rooted in the chair-based system. Accordingly, ECRs describe different and apparently contradictory modes of collegial leadership. For example, the ‘laissez-faire’ mode aligns with the ideals of freedom and autonomy underpinning intellectual collegiality, but also with competition among individuals. They also describe leadership as ‘impositions’, which, on the one hand, erodes trust in professors and decision-making but, on the other hand, intersects with notions of citizenship that compel ECRs to accept decisions, even when imposed. Yet many ECRs value and expect a more ‘inclusive leadership’ that supports the development of intellectual collegiality. Overall, the relationship between collegial leadership and competition is heterogeneous and adaptive, closely intertwined with the preceding dynamics.

How, then, can these dynamics be interpreted together? Overall, the findings suggest that differences between university models matter profoundly for collegiality. Expectations regarding how academics collaborate, participate in governance and decision-making processes and form intellectual communities are embedded in specific institutional contexts.

Regarding the relation between collegiality and competition, I suggest two contrasting interpretations. The first emphasises the flourishing of intellectual collegiality and the emergence of CoE citizenship, understood as a collective, multidisciplinary sense of belonging that is driven by – and complementary to – the institutional race for excellence. The second interpretation, however, views this flourishing as a temporal illusion. From this perspective, competition is omnipresent and stands in a fundamentally exclusionary relationship to collegiality: it threatens intellectual collaboration even when temporarily deactivated; it compels academics to engage in CoE-related work they may not intrinsically value; and it overstretches traditional forms of collegial leadership, promoting managerial modes that erode trust in both academic judgement and decision-making processes. Viewed in this light, competition ultimately poses a threat to collegiality. These rival interpretations may uneasily coexist, and the second one possibly predominates. More research is needed on how organisational contexts affect the relationship between collegiality and competition.

Lautaro Vilches is a researcher at Humboldt University of Berlin and a consultant in higher education. His current research examines the implications of excellence schemes for transforming universities’ organisational arrangements and their effects on academic practices such as collegiality, academic mobility and research collaboration, particularly in the Social Sciences and Humanities. As a consultant he advises universities on advancing strategic change.



Academic writing and spaces of resistance

by Kate Carruthers Thomas

At SRHE’s Annual Conference 2025, I gave a paper which argued that community, collegiality and care were key elements of the writing groups and retreats I’ve facilitated for female academics. I used Massey’s heuristic device of activity space to foreground interactions of gender, space and power in those writing interventions. I concluded that in embodying community, collegiality and care, they can potentially be seen as activity spaces of resistance to the geographies of power operating across universities and the individualised, competitive neo-liberal academy.

Academics must write. Written outputs are one of the principal means by which academics enact professional capital as experts and specialists in their disciplinary fields (French, 2020 p1605). Scholarly publications are central to individual and institutional success in the UK’s Research Excellence Framework (REF). Writing does not automatically or quickly lead to publication, and just finding the time to write productively presents challenges at all career stages. Indeed, as Murray and Newton state: ‘the writing element of research is not universally experienced as a mainstream activity’ (Murray and Newton, 2009 p551).

Applying Massey’s analytical tool of activity space – ‘the spatial network of links and activities, of spatial connections and of locations within which a particular agent operates’ (2005 p55) – to this context, we can imagine the UK HE sector as an activity space shaped by networks and power relationships of disciplines, governance, financial and knowledge capitals, metrics and institutional audit. We can also imagine the sector’s 160 universities as nodes within that wider activity space. Massey coins the term ‘power geometry’ to describe how individuals and groups are differently positioned in relation to different geographies of power in activity spaces. For example, UK universities are more or less powerfully positioned across a spectrum of elite, pre-1992 and post-1992 institutions.

We can also consider each university as an activity space, with its own spatial networks and connections shaped by the wider sector and by regional and local factors. These are enacted within each university through systems of management, workload and performance, creating the environments within which ‘agents’ – staff and students – work and study. Academics in more senior ranks, with higher salaries and research-focused roles, are more likely to produce scholarly publications (McGrail, Rickard and Jones, 2006). And while the relationship between research and teaching is a troubled one across the sector, this tension is exacerbated for academics located in post-1992 institutions, many of which describe themselves as ‘teaching intensive’. Research and publication remain strategic corporate priorities for post-1992s, yet workload allocation is heavily weighted towards teaching and pastoral support.

So, in relation to academic writing and publication, academics are also differentially positioned, more and less powerfully, within the activity space of the university. One of the key factors influencing that positioning is gender. If we scratch the statistical surface of the UK HE landscape we find longstanding gender inequality which is proving glacially slow to shift. Women form an overall majority of UK sector employees in academic and professional services roles, but 49% of academic staff, 33% of Heads of Institution and 31% of Professors are women (Advance HE, 2024). They predominate in part-time, teaching-only and precarious contracts, all of which play a role in slowing or stalling academic career progression. These data cannot be seen in isolation from women’s disproportionate responsibilities for pastoral and informal service roles within the university, and from gendered social roles which place the burden of family, household and caring responsibilities on many women of all working ages.

Academic writing groups and retreats are a popular response to the challenge of writing productively. They can ‘be a method of improving research outputs’ (Wardale, 2015 p1297); demystify the process of scholarly writing (Lee and Boud, 2003 p190); and ‘enable micro-environments in what is perceived of as an otherwise often unfriendly mainstream working environment’ (ibid). Groups and retreats are often targeted at different academic career stages and/or specific groups within the academic workforce. Since 2020, as a critical higher education academic and diversity worker, I have run online writing groups and in-person writing retreats for female academics at all career stages, most employed at my own post-1992 university. Over 140 individuals have participated in one or other of the interventions, and I used a range of methods (survey, interview, focus group) to gather data on their motivations, experiences and outcomes.

The combined data of all three studies show that the primary motivation of every participant was to create protected space for writing, space not made sufficiently available to them within working hours, despite the professional expectation that they will produce scholarly publications. In this context, the meaning of ‘space’ is multi-dimensional: encompassing the temporal, the physical and the intellectual. The consequence of the interaction of protected temporal and physical/virtual space is intellectual space, or what was referred to by several participants as ‘headspace’ – the extended focus and concentration necessary to produce high quality scholarly writing (Couch, Sullivan and Malatsky, 2020).

When I launched the online writing group (WriteSpace) during the UK’s first COVID-19 lockdown, MS Teams software enabled the creation of a virtual ‘writing room’ and a sense of community over distance. Socially-isolated colleagues sought contact with others, even those previously unknown to them. As lockdown restrictions eased and remote, then hybrid, working arrangements ensued, the act of writing alongside others virtually or in-person remained an important way to engage in a shared endeavour. The in-person residential retreats in 2023 and 2024 followed Murray’s structured retreat model (Murray and Newton, 2009 p543). Participants wrote together in one room, for the same time periods over three days. They also ate, walked and socialised together.

Each of the writing interventions was a multi-disciplinary space for female academics at all career stages, including those undertaking part-time doctoral study. Whatever their grade or experience, no one individual’s writing was more important or significant than another’s. These hierarchically flat spaces disrupted the normative power relationships of the workplace and the academy. On the retreats, additional practices of goal setting and review in pairs encouraged ongoing reflection and exchange on writing practices and developing academic identities.

Many participants experienced the facilitation of the groups and retreats as professional care – a colleague taking responsibility for timekeeping, recommending breaks and stimulating reflection on writing practices. The experience of care was extended and heightened at the residential retreats because all meals were provided in a comfortable and peaceful environment and no household chores were required. This was particularly significant in the context of women’s social roles and conditioning to care for others.

Viewing these writing interventions as activity spaces situated within the wider contexts of the university and the UK HE sector foregrounds interactions of power, space and gender in the context of academic writing. The writing interventions were not neutral phenomena. They were deliberately initiated and targeted in response to a gendered imbalance of power in the academy and the university. They were occupied solely by women. They intentionally prioritised temporal, physical and intellectual space for writing over teaching, administrative, pastoral, household and domestic responsibilities. Within them, academic writing became a social practice and a common endeavour.

The interventions do not remove longstanding and pervasive gender inequality across the UK sector, change gendered social roles, resolve the tensions between teaching and research in the contemporary neoliberal academy, nor increase workload allocation for academic writing. However, in embodying community, collegiality and care they can potentially be seen as activity spaces of resistance to the normative geographies of power operating across universities and the wider sector. 

Kate Carruthers Thomas is Associate Professor of Higher Education and Gender at Birmingham City University. Her research is interdisciplinary, drawing on educational, sociological and geographical theories and methods. She also has a track record in creative research dissemination including graphics, poetry and podcasting.



Walk on by: the dilemma of the blind eye

by Dennis Sherwood

Forty years on…

I don’t remember much about my experiences at work some forty-odd years ago, but one event I recall vividly is the discussion provoked by a case study at a training event. The case was simple, just a few lines:

Sam was working late one evening, and happened to walk past Pat’s office. The door was closed, but Sam could hear Pat being very abusive to Alex. Some ten minutes later, Sam saw Alex sobbing.

What might Sam do?

What should Sam do?

Quite a few in the group said “nothing”, on the grounds that whatever was going on was none of Sam’s business. Maybe Pat had good grounds to be angry with Alex, and if the local culture was, let’s say, harsh, what’s the problem? Nor was there any evidence that Alex’s sobbing was connected with Pat – perhaps something else had happened in the intervening time.

Others thought that the least Sam could do was to ask if Alex was OK, and offer some comfort – a suggestion countered by the “it’s a tough world” brigade.

The conversation then centred on culture. Suppose the culture was supportive and caring. Pat’s behaviour would be out of order, even if Pat was angry, and even if Alex had done something Pat regarded as wrong.

So what might – and indeed should – Sam do?

Should Sam confront Pat? Or inform Pat’s boss?

What if Sam is Pat’s boss? In that case, then yes, Sam should confront Pat: failure to do so would condone bad behaviour, which, in this culture, would be a ‘bad thing’.

But if Sam is not Pat’s boss, things are much more tricky. If Sam is subordinate to Pat, confrontation is hardly possible. And informing Pat’s boss could be interpreted as snitching or trouble-making. Another possibility is that Sam and Pat are peers, giving Sam ‘the right’ to confront Pat – but only if peer-to-peer honesty and mutual pressure is ‘allowed’. Which it might not be, for many, even benign, cultures are in reality networks of mutual ‘non-aggression treaties’, in which ‘peers’ are monarchs in their own realms – so Sam might deliberately choose to turn a blind eye to whatever Pat might be doing, for fear of setting a precedent that would allow Pat, or indeed Ali or Chris, to poke their noses into Sam’s own domain.

And if Sam is in a different part of the organisation – or indeed from another organisation altogether – then maybe Sam’s safest action is back where we started. To do nothing. To walk on by.

Sam is a witness to Pat’s bad behaviour. Does the choice to ‘walk on by’ make Sam complicit too, albeit at arm’s length?

I’ve always thought that this case study, and its implications, are powerful – which is probably why I’ve remembered it over so long a time.

The truth about GCSE, AS and A level grades in England

I mention it here because it is relevant to the main theme of this blog – a theme that, if you read it, makes you a witness too. Not, of course, to ‘Pat’s’ bad behaviour, but to another circumstance which, in my opinion, is a great injustice doing harm to many people – an injustice that ‘Pat’ has got away with for many years now, not only because ‘Pat’s peers’ have turned a blind eye – and a deaf ear too – but also because all others who have known about it have chosen to ‘walk on by’.

The injustice of which I speak is the fact that about one GCSE, AS and A level grade in every four, as awarded in England, is wrong, and has been wrong for years. Not only that: in addition, the rules for appeals do not allow these wrong grades to be discovered and corrected. So the wrong grades last for ever, as does the damage they do.

To make that real, in August 2025, some 6.5 million grades were awarded, of which around 1.6 million were wrong, with no appeal. That’s an average of about one wrong grade ‘awarded’ to every candidate in the land.

Perhaps you already knew all that. But if you didn’t, you do now. As a consequence, like Sam in that case study, you are a witness to wrong-doing.

It’s important, of course, that you trust the evidence. The prime source is Ofqual’s November 2018 report, Marking Consistency Metrics – An update, which presents the results of an extensive research project in which very large numbers of GCSE, AS and A level scripts were in essence marked twice – once by an ‘assistant’ examiner (as happens in ‘ordinary’ marking each year), and again by a subject senior examiner, whose academic judgement is the ultimate authority, and whose mark, and hence grade, is deemed ‘definitive’, the arbiter of ‘right’.

Each script therefore had two marks and two grades, enabling those grades to be compared. If they were the same, then the ‘assistant’ examiner’s grade – the grade that is on the candidate’s certificate – corresponds to the senior examiner’s ‘definitive’ grade, and is therefore ‘right’; if the two grades are different, then the assistant examiner’s grade is necessarily ‘non-definitive’, or, in plain English, wrong.
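
To make that comparison concrete, here is a minimal sketch of the matching logic. The grade pairs are invented purely for illustration – this is not Ofqual’s dataset or code, just the arithmetic the paragraph above describes:

```python
# Minimal sketch of the double-marking comparison described above.
# The (assistant, senior) grade pairs are invented for illustration.

scripts = [
    ("A", "A"), ("B", "B"), ("B", "A"),
    ("C", "C"), ("A", "B"), ("C", "C"),
]

# A grade counts as 'definitive'/right when the assistant examiner's grade
# matches the senior examiner's; otherwise it is 'non-definitive'/wrong.
definitive = sum(1 for assistant, senior in scripts if assistant == senior)
reliability = definitive / len(scripts)

print(f"reliability: {reliability:.0%}")   # 4 of 6 pairs match -> 67%
```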

You might have thought that the number of ‘non-definitive’/wrong grades would be small and randomly distributed across subjects. In fact, the key results are shown on page 21 of Ofqual’s report as Figure 12, reproduced here:

Figure 1: Reproduction of Ofqual’s evidence concerning the reliability of school exam grades

To interpret this chart, I refer to this extract from the report’s Executive Summary:

The probability of receiving the ‘definitive’ qualification grade varies by qualification and subject, from 0.96 (a mathematics qualification) to 0.52 (an English language and literature qualification).

This states that 96% of Maths grades (all varieties, at all levels), as awarded, are ‘definitive’/right, as are 52% of those for Combined English Language and Literature (a subject available only at A level). Accordingly, by implication, 4% of Maths grades, and 48% of English Language and Literature grades, are ‘non-definitive’/wrong. Maths grades, as awarded, can therefore be regarded as 96% reliable; English Language and Literature grades as 52% reliable.

Scrutiny of the chart shows that the heavy black line in the upper blue box for Maths maps onto about 0.96 on the horizontal axis; the equivalent line for Combined English Language and Literature maps onto about 0.52. The reliability measures for the grades in each of the other subjects can be read off similarly. Ofqual’s report does not give any further numbers, but Table 1 shows my estimates from Ofqual’s Figure 12:

| Subject | Probability of a ‘definitive’ grade | Probability of a ‘non-definitive’ grade |
| --- | --- | --- |
| Maths (all varieties) | 96% | 4% |
| Chemistry | 92% | 8% |
| Physics | 88% | 12% |
| Biology | 85% | 15% |
| Psychology | 78% | 22% |
| Economics | 74% | 26% |
| Religious Studies | 66% | 34% |
| Business Studies | 66% | 34% |
| Geography | 65% | 35% |
| Sociology | 63% | 37% |
| English Language | 61% | 39% |
| English Literature | 58% | 42% |
| History | 56% | 44% |
| Combined English Language and Literature (A level only) | 52% | 48% |

Table 1: My estimates of the reliability of school exam grades, as inferred from measurements of Ofqual’s Figure 12.

Ofqual’s report does not present any corresponding information for each of GCSE, AS or A level separately, nor any analysis by exam board. Also absent is a measure of the all-subject overall average. Given, however, the maximum value of 96%, and the minimum of 52%, the average is likely to be somewhere in the middle, say, in the seventies; in fact, if each subject is weighted by its cohort, the resulting average over the 14 subjects shown is about 74%. Furthermore, if other subjects – such as French, Spanish, Computing, Art… – are taken into consideration, the overall average is most unlikely to be greater than 82% or less than 66%, suggesting that an overall average reliability of 75% for all subjects is a reasonable estimate.
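
As a rough cross-check of that averaging step, here is a minimal sketch using the Table 1 estimates. The cohort sizes are hypothetical placeholders – the blog does not publish them – so the point is only that the simple mean is about 71%, and that plausible cohort weightings (for example, a large Maths entry) move the figure towards the ~74% cited:

```python
# Probability of a 'definitive' grade per subject, from Table 1.
reliability = {
    "Maths": 0.96, "Chemistry": 0.92, "Physics": 0.88, "Biology": 0.85,
    "Psychology": 0.78, "Economics": 0.74, "Religious Studies": 0.66,
    "Business Studies": 0.66, "Geography": 0.65, "Sociology": 0.63,
    "English Language": 0.61, "English Literature": 0.58,
    "History": 0.56, "Combined English Lang & Lit": 0.52,
}

# Hypothetical cohort sizes (thousands of entries) -- illustrative only.
cohort = {subject: 100 for subject in reliability}   # equal weights by default
cohort["Maths"] = 300                                # eg a large Maths entry

unweighted = sum(reliability.values()) / len(reliability)
weighted = sum(reliability[s] * cohort[s] for s in reliability) / sum(cohort.values())

print(f"unweighted mean:      {unweighted:.1%}")   # ~71.4%
print(f"cohort-weighted mean: {weighted:.1%}")     # ~74.5% with this weighting
```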

That’s the evidence that, across all subjects and levels, about 75% of grades, as awarded, are ‘definitive’/right and 25% – one in four – are ‘non-definitive’/wrong – evidence that has been in the public domain since 2018. But evidence that has been much disputed by those with vested interests.

Ofqual’s results are readily explained. We all know that different examiners can, legitimately, give the same answer (slightly) different marks. As a result, the script’s total mark might lie on different sides of a grade boundary, depending on who did the marking. Only one grade, however, is ‘definitive’.

Importantly, there are no errors in the marking studied by Ofqual – in fact, Ofqual’s report mentions ‘marking error’ just once, and then in a rather different context. All the grading discrepancies measured in Ofqual’s research are therefore attributable solely to legitimate differences in academic opinion. And since the range of legitimate marks is far narrower in subjects such as Maths and Physics than in English Literature and History, the probability that an ‘assistant’ examiner’s legitimate mark might result in a ‘non-definitive’ grade is much higher for, say, History than for Physics. Hence the sequence of subjects in Ofqual’s Figure 12.

As regards appeals, in 2016, Ofqual – in full knowledge of the results of this research (see paragraph 28 of this Ofqual Board Paper, dated 18 November 2015) – changed the rules, requiring that a grade can be changed only if a ‘review of marking’ discovers a ‘marking error’. To quote an Ofqual ‘news item’ of 26 May 2016:

Exam boards must tell examiners who review results that they should not change marks unless there is a clear marking error. …It is not fair to allow some students to have a second bite of the cherry by giving them a higher mark on review, when the first mark was perfectly appropriate. This undermines the hard work and professionalism of markers, most of whom are teachers themselves. These changes will mean a level-playing field for all students and help to improve public confidence in the marking system.

This assumes that the legitimate marks given by different examiners are all equally “appropriate”, and identical in every way.

This assumption, however, is false: if one of those marks corresponds to the ‘definitive’ grade, and another to a ‘non-definitive’ grade, they are not identical at all. Furthermore, as already mentioned, there is hardly any mention of marking errors in Ofqual’s November 2018 report. All the grade discrepancies it identified can therefore be attributed only to legitimate differences in academic opinion, and so cannot be discovered and corrected under the rules that have been in place since 2016.

Over to you…

So, back to that case study.

Having read this far, like Sam, you have knowledge of wrong-doing – not Pat tearing a strip off Alex, but Ofqual awarding some 1.5 million wrong grades every year. All with no right of appeal.

What are you going to do?

You’re probably thinking something like, “Nothing”, “It’s not my job”, “It’s not my problem”, “I’m in no position to do anything, even if I wanted to”.

All of which I understand. No, it’s certainly not your job. And it’s not your problem directly, in that it’s not you being awarded the wrong grade. But it might be your problem indirectly – if you are involved with admissions, and if grades play a material role, you may be accepting a student who is not fully qualified (in that the grade on the certificate might be too high), or – perhaps worse – rejecting a student who is (in that the grade on the certificate is too low). Just to make that last point real, about one candidate in every six with a certificate showing AAA for A level Physics, Chemistry and Biology in fact truly merited at least one B. If such a candidate took a place at Med School, for example, not only is that candidate under-qualified, but a place has also been denied to a candidate with a certificate showing AAB but who merited AAA.
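
One way to reconstruct that ‘one in six’ figure – a sketch under assumptions the blog does not state, namely that the three grades err independently and that a ‘non-definitive’ grade is as likely to be too high as too low – is to combine the Table 1 reliabilities for the three sciences:

```python
# Hedged reconstruction of the 'about one in six' claim for a certificate
# showing AAA in Physics, Chemistry and Biology. Assumptions (mine, not the
# blog's): grades err independently, and a 'non-definitive' grade is equally
# likely to be too high as too low.

p_definitive = {"Physics": 0.88, "Chemistry": 0.92, "Biology": 0.85}  # Table 1

# Each displayed A overstates the 'definitive' grade with probability (1 - p) / 2.
p_all_merited = 1.0
for p in p_definitive.values():
    p_all_merited *= 1 - (1 - p) / 2

p_at_least_one_b = 1 - p_all_merited
print(f"P(AAA awarded, but at least one grade merited a B): {p_at_least_one_b:.3f}")
# ~0.165, ie about one candidate in six
```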

And although you, as an individual, are indeed not in a position to do anything about it, you, collectively, surely are.

HE is, by far, the largest and most important user of A levels – and it is relying on a ‘product’ that is only about 75% reliable. HE, collectively, could put significant pressure on Ofqual to fix this, if only by insisting that “OFQUAL WARNING: THE GRADES ON THIS CERTIFICATE ARE ONLY RELIABLE, AT BEST, TO ONE GRADE EITHER WAY” be printed on every certificate – not my statement, but one made by Ofqual’s then Chief Regulator, Dame Glenys Stacey, in evidence to the 2 September 2020 hearing of the Education Select Committee, and in essence equivalent to the fact that about one grade in four is wrong. That would ensure that everyone is aware that any decision based on a grade, as shown on a certificate, is intrinsically unsafe.

But this – or some other solution – can happen only if your institution, along with others, acts accordingly. And that can happen only if you and your colleagues band together to influence your department, your faculty, your institution.

Yes, that is a bother. Yes, you do have other urgent things to do.

If you do nothing, nothing will happen.

But if you take action, you can make a difference.

Don’t just walk on by.

Dennis Sherwood is a management consultant with a particular interest in organisational cultures, creativity and systems thinking. Over the last several years, Dennis has also been an active campaigner for the delivery of reliable GCSE, AS and A level grades. If you enjoyed this, you might also like https://srheblog.com/tag/sherwood/.