SRHE Blog

The Society for Research into Higher Education



What the experience of neurodivergent PhD students teaches us, and why it makes me angry

by Inger Mewburn

Recently, some colleagues and I released a paper about the experiences of neurodivergent PhD students. It’s a systematic review of the literature to date, which is currently under review, but available via pre-print here.

Doing this paper was an exercise in mixed feelings. It was an absolute joy to work with my colleagues, who knew far more about this topic than me and taught me (finally!) how to do a proper systematic review using Covidence. Thanks to Dr Diana Tan, Dr Chris Edwards, Associate Professor Kate Simpson, Associate Professor Amanda A Webster and Professor Charlotte Brownlow (who got the band together in the first place).

But reading each and every paper published about neurodivergent PhD students provoked strong feelings of rage and frustration. (These feelings only increased, with a tinge of fear added in, when I read of plans for the US health department to make a ‘list’ of autistic people?! Reading what is going on there is frankly terrifying – solidarity to all.) We all know what needs to be done to make research degrees more accessible. Make expectations explicit. Create flexible policies. Value diverse thinking styles. Implement Universal Design Principles… These suggestions appear in report after report, and I’ve ranted about them on the blog here and here, yet real change remains frustratingly elusive. So why don’t these great ideas become reality? Here are some thoughts on the barriers that keep neurodivergent-friendly changes from taking hold.

The myth of meritocracy

Academia clings to the fiction that the current system rewards pure intellectual merit. Acknowledging the need for accessibility requires admitting that the playing field isn’t level. Many senior academics succeeded in the current system and genuinely believe “if I could do it, anyone can… if they work hard enough”. They are either 1) failing to recognise their neurotypical privilege, or 2) not acknowledging the cost of masking their own neurodivergence (I’ll get to this in a moment).

I’ve talked to many academics about things we could do – like getting rid of the dissertation – but too many of us are secretly proud of our own trauma. The harshness of the PhD has been compared to a badge of honour that we wear proudly – and expect others to earn.

Resource scarcity (real and perceived)

Universities often respond to suggestions about increased accessibility measures with budget concerns. The vibe is often: “We’d love to offer more support, but who will pay for it?”. However, many accommodations (like flexible deadlines or allowing students to work remotely) cost little, or even nothing. Frequently, the real issue isn’t resources but priorities of the powerful. There’s no denying universities (in Australia, and elsewhere) are often cash strapped. The academic hunger games are real. However, in the fight for resources, power dynamics dictate who gets fed and who goes without.

I wish we would just be honest about our choices – some people in universities still have huge travel budgets. The catering at some events is still pretty good. Some people seem to avoid every hiring freeze. There are consistent patterns in how resources are distributed. It’s the gaslighting that makes me angry. If we really want to, we can do most things. We have to want to do something about this.

Administrative inertia

Changing established processes in a university is like turning a battleship with a canoe paddle. Approval pathways are long and winding. For example, altering a single line in the research award rules at ANU requires approval from parliament (yes – the politicians actually have to get together and vote. Luckily we are not as dysfunctional in Australia as other places… yet). By the time a solution is implemented, the student who needed it has likely graduated – or dropped out. This creates a vicious cycle where the support staff, who see multiple generations of students suffer the same way, can get burned out and stop pushing for change.

The individualisation of disability

Universities tend to treat neurodivergence as an individual problem requiring individual accommodations rather than recognising systemic barriers. This puts the burden on students to disclose, request support, and advocate for themselves – precisely the executive function and communication challenges many neurodivergent students struggle with.

It’s akin to building a university with only stairs, then offering individual students a piggyback ride instead of installing ramps. I’ve met plenty of people who simply get so exhausted they don’t bother applying for the accommodations they desperately need, and then end up dropping out anyway.

Fear of lowering ‘standards’

Perhaps the most insidious barrier is the mistaken belief that accommodations somehow “lower standards.” I’ve heard academics worrying that flexible deadlines will “give some students an unfair advantage” or that making expectations explicit somehow “spoon-feeds” students.

The fear of “lowering standards” becomes even more puzzling when you look at how PhD requirements have inflated over time. Anyone who’s spent time in university archives knows that doctoral standards aren’t fixed – they’re constantly evolving. Pull a dissertation from the 1950s or 60s off the shelf and you’ll likely find something remarkably slim compared to today’s tomes. Many were essentially extended literature reviews with modest empirical components. Today, we expect multiple studies, theoretical innovations, methodological sophistication, and immediate publishability – all while completing within strict time limits on ever-shrinking funding.

The standards haven’t just increased; they’ve multiplied. So when universities resist accommodations that might “compromise standards,” we should ask: which era’s standards are we protecting? Certainly not the ones that most people supervising today had to meet. The irony is that by making the PhD more accessible to neurodivergent thinkers, we might actually be raising standards – allowing truly innovative minds to contribute rather than filtering them out through irrelevant barriers like arbitrary deadlines or neurotypical communication expectations. The real threat to academic standards isn’t accommodation – it’s the loss of brilliant, unconventional thinkers who could push knowledge boundaries in ways we haven’t yet imagined.

Unexamined neurodiversity among supervisors

Perhaps one of the most overlooked barriers is that many supervisors are themselves neurodivergent but don’t recognise it or acknowledge what’s going on with them! In fact, since starting this research, I’ve formed a private view that you almost can’t succeed in this profession without at least a little neurospiciness.

Academia tends to attract deep thinkers with intense focus on specific topics – traits often associated with autism (‘special interests’ anyone?). The contemporary university is constantly in crisis, which some people with ADHD find provides the stimulation they need to get things done! Yet many supervisors have succeeded through decades of masking and compensating, often at great personal cost.

The problem is not the neurodivergence or the supervisor – it’s how the unexamined neurodivergence becomes embedded in practice, underpinned by an expectation that their students should function exactly as they do, complete with the same struggles they’ve internalised as “normal.”

I want to hold on to this idea for a moment, because maybe you recognise some of these supervisors:

  • The Hyperfocuser: Expects students to match their pattern of intense, extended work sessions. This supervisor regularly works through weekends on research “when inspiration strikes,” sending emails at 2am and expecting quick responses. They struggle to understand when students need breaks or maintain strict work boundaries, viewing it as “lack of passion.” Conveniently, they have ignored those couple of episodes of burnout, never considering that their own work pattern might reflect ADHD or autistic hyperfocus rather than a superior work ethic.
  • The Process Pedant: Requires students to submit written work in highly specific formats with rigid attachment to particular reference styles, document formatting, and organisational structures. Gets disproportionately distressed by minor variations from their preferred system, focusing on these details over content, such that their feedback primarily addresses structural issues rather than ideas. I get more complaints about this than almost any other kind of supervision style – it’s so demoralising to be constantly corrected and not have someone genuinely engage with your work.
  • The Talker: Excels in spontaneous verbal feedback but rarely provides written comments. Expects students to take notes during rapid-fire conversational feedback, remembering all key points. They tend to tell you to do the same thing over and over, or forget what they have said and recommend something completely different next time. Can get mad when questioned over inconsistencies – suggesting you have a problem with listening. This supervisor never considers that their preference for verbal communication might reflect their own neurodivergent processing style, which isn’t universal. Couple this with a poor memory and the frustration of students reaches critical levels. (I confess, being a Talker is definitely my weakness as a supervisor – I warn my students in advance and make an effort to be open to criticism about it!).
  • The Context-Switching Avoider: Schedules all student meetings on a single day of the week, keeping other days “sacred” for uninterrupted research. Becomes noticeably agitated when asked to accommodate a meeting outside this structure, even for urgent matters. Instead of recognising their own need for predictable routines and difficulty with transitions (common in many forms of neurodivergence), they frame this as “proper time management” that students should always emulate. Students who have caring responsibilities suffer the most with this kind of inflexible relationship.
  • The Novelty-Chaser: Constantly introduces new theories, methodologies, or research directions in supervision meetings. Gets visibly excited about fresh perspectives and encourages students to incorporate them into already-developed projects. May send students a stream of articles or ideas completely tangential to their core research, expecting them to pivot accordingly. Never recognises that their difficulty maintaining focus on a single pathway to completion might reflect ADHD-related novelty-seeking. Students learn either 1) to chase butterflies and make little progress or 2) to nod politely at new suggestions while quietly continuing on their original track. The first kind of reaction can lead to a dangerous lack of progress; the second can lead to real friction because, from the supervisor’s point of view, the student ‘never listens’. No one is happy in these set-ups, believe me.
  • The Theoretical Purist: Has devoted their career to a particular theoretical framework or methodology and expects all their students to work strictly within these boundaries. Dismisses alternative approaches as “methodologically unsound” or “lacking theoretical rigour” without substantive engagement. Becomes noticeably uncomfortable when students bring in cross-disciplinary perspectives, responding with increasingly rigid defences of their preferred approach. Fails to recognise their intense attachment to specific knowledge systems and resistance to integrating new perspectives may reflect autistic patterns of specialised interests, or even difficulty with cognitive flexibility. Students learn to frame all their ideas within the supervisor’s preferred language, even when doing so limits their research potential.

Now that I know what I am looking for, I see these supervisory dynamics ALL THE TIME. Add in whatever dash of neuro-spiciness is going on with you and all kinds of misunderstandings and hurt feelings result … Again – the problem is not the neurodivergence of any one person – it’s the lack of self-reflection, coupled with the power dynamics that can make things toxic.

These barriers aren’t insurmountable, but honestly, after decades in this profession, I’m not holding my breath for institutional enlightenment. Universities move at the pace of bureaucracy after all.

So what do we do? If you’re neurodivergent, find your people – that informal network who “get it” will save your sanity more than any official university policy. If you’re a supervisor, maybe take a good hard look at your own quirky work habits before deciding your student is “difficult.” And if you’re in university management, please, for the love of research, let’s work on not making neurodivergent students jump through flaming bureaucratic hoops to get basic support.

The PhD doesn’t need to be a traumatic hazing ritual we inflict because “that’s how it was in my day.” It’s 2025. Time to admit that diverse brains make for better research. And for goodness sake, don’t put anyone on a damn list, ok?

AI disclaimer: This post was developed with Claude from Anthropic because I’m so busy with the burning trash fire that is 2025 it would not have happened otherwise. I provided the concept, core ideas, detailed content, and personal viewpoint while Claude helped organise and refine the text. We iteratively revised the content together to ensure it maintained my voice and perspective. The final post represents my authentic thoughts and experiences, with Claude serving as an editorial assistant and sounding board.

This blog was first published on Inger Mewburn’s  legendary website The Thesis Whisperer on 1 May 2025. It is reproduced with permission here.

Professor Inger Mewburn is the Director of Researcher Development at The Australian National University where she oversees professional development workshops and programs for all ANU researchers. Aside from creating new posts on the Thesis Whisperer blog (www.thesiswhisperer.com), she writes scholarly papers and books about research education, with a special interest in post-PhD employability, research communications and neurodivergence.



Becoming a professional services researcher in HE – making the train tracks converge

by Charlotte Verney

This blog builds on my presentation at the BERA ECR Conference 2024: at crossroads of becoming. It represents my personal reflections on working in UK higher education (HE) professional services roles and simultaneously gaining research experience through a Masters and Professional Doctorate in Education (EdD).

Professional service roles within UK HE include recognised professionals from other industries (eg human resources, finance, IT) and HE-specific roles such as academic quality, research support and student administration. Unlike academic staff, professional services staff are not typically required, or expected, to undertake research, yet many do. My own experience spans roles within six universities over 18 years delivering administration and policy that supports learning, teaching and students.

Traversing two tracks

In 2016, at an SRHE Newer Researchers event, I was asked to identify a metaphor to reflect my experience as a practitioner researcher. I chose the image of two train tracks, as I have often felt that I have been on two development tracks simultaneously – one building professional experience and expertise, the other developing research skills and experience. These tracks ran in parallel, but never at the same pace, occasionally meeting on a shared project or assignment, and then continuing on their separate routes. I use this metaphor to share my experiences, and three phases, of becoming a professional services researcher.

Becoming research-informed: accelerating and expanding my professional track

The first phase was filled with opportunities; on my professional track I gained a breadth of experience, a toolkit of management and leadership skills, a portfolio of successful projects and built a strong network through professional associations (eg AHEP). After three years, I started my research track with a masters in international higher education. Studying felt separate to my day job in academic quality and policy, but the assignments gave me opportunities to bring the tracks together, using research and theory to inform my practice – for example, exploring theoretical literature underpinning approaches to assessment whilst my institution was revising its own approach to assessing resits. I felt like a research-informed professional, and this positively impacted my professional work, accelerating and expanding my experience.

Becoming a doctoral researcher: long distance, slow speed

The second phase was more challenging. My doctoral journey was long, taking 9 years with two breaks. Like many part-time doctoral students, I struggled with balance and support, with unexpected personal and professional pressures, and I found it unsettling to simultaneously be an expert in my professional context yet a novice in research. I feared failure, and damaging my professional credibility as I found my voice in a research space.

What kept me going, balancing the two tracks, was building my own research support network and my researcher identity. Some of the ways I did this were through Zoom calls with EdD peers for moral support, joining the Society for Research into Higher Education to find my place in the research field, and joining the editorial team of a practitioner journal to build my confidence in academic writing.

Becoming a professional services researcher: making the tracks converge

Having completed my doctorate in 2022, I’m now actively trying to bring my professional and research tracks together. Without a roadmap, I’ve started in my comfort-zone, sharing my doctoral research in ‘safe’ policy and practitioner spaces, where I thought my findings could have the biggest impact. I collaborated with EdD peers to tackle the daunting task of publishing my first article. I’ve drawn on my existing professional networks (ARC, JISC, QAA) to establish new research initiatives related to my current practice in managing assessment. I’ve made connections with fellow professional services researchers along my journey, and have established an online network  to bring us together.

Key takeaways for professional services researchers

Bringing my professional experience and research tracks together has not been without challenges, but I am really positive about my journey so far, and for the potential impact professional services researchers could have on policy and practice in higher education. If you are on your own journey of becoming a professional services researcher, my advice is:

  • Make time for activities that build your research identity
  • Find collaborators and a community
  • Use your professional experience and networks
  • It’s challenging, but rewarding, so keep going!

Charlotte Verney is Head of Assessment at the University of Bristol. Charlotte is an early career researcher in higher education research and a leader within higher education professional services. Her primary research interests are in the changing nature of administrative work within universities, using research approaches to solve professional problems in higher education management, and using creative and collaborative approaches to research. Charlotte advocates for making the academic research space more inclusive for early career and professional services researchers. She is co-convenor of the SRHE Newer Researchers Network and has established an online network for higher education professional services staff engaged with research.



Gaps in sustainability literacy in non-STEM higher education programmes

by Erika Kalocsányiová and Rania Hassan

Promoting sustainability literacy in higher education is crucial for deepening students’ pro-environmental behaviour and mindset (Buckler & Creech, 2014; UNESCO, 1997), while also fostering social transformation by embedding sustainability at the core of the student experience. In 2022, our group received an SRHE Scoping Award to synthesise the literature on the development, teaching, and assessment of sustainability literacy in non-STEM higher education programmes. We conducted a multilingual systematic review of post-2010 publications from the European Higher Education Area (EHEA), with the results summarised in Kalocsányiová et al (2024).

Out of 6,161 articles that we identified as potentially relevant, 92 studies met the inclusion criteria and are reviewed in the report. These studies involved a total of 11,790 participants and assessed 9,992 university programmes and courses. Our results suggest a significant growth in research interest in sustainability in non-STEM fields since 2017, with 75 studies published compared to just 17 in the preceding seven years. Our analysis also showed that Spain, the United Kingdom, Germany, Turkey, and Austria had the highest concentration of publications, with 25 EHEA countries represented in total.

The 92 reviewed studies were characterised by high methodological diversity: nearly half employed quantitative methods (47%), followed by qualitative studies (40%) and mixed methods research (13%). Curriculum assessments using quantitative content analysis of degree and course descriptors were among the most common study types, followed by surveys and intervention or pilot studies. Curriculum assessments provided a systematic way to evaluate the presence or absence of sustainability concepts within curricula at both single HE institutions and in comparative frameworks. However, they often captured only surface-level indications of sustainability integration into undergraduate and postgraduate programmes, without providing evidence on actual implementation and/or the effectiveness of different initiatives. Qualitative methods, including descriptive case studies and interviews that focused on barriers, challenges, implementation strategies, and the acceptability of new sustainability literacy initiatives, made up 40% of the current research. Mixed methods studies accounted for 13% of the reviewed articles, often applying multiple assessment tools simultaneously, including quantitative sustainability competency assessment instruments combined with open-ended interviews or learning journals.

In terms of disciplines, Economics, Business, and Administrative Studies held the largest share of reviewed studies (26%), followed by Education (23%). Multiple disciplines accounted for 22% of the reviewed publications, reflecting the interconnected nature of sustainability. Finance and Accounting contributed only 6%, indicating a need for further research. Similarly, Language and Linguistics, Mass Communication and Documentation, and Social Sciences collectively represented only 12% of the reviewed studies. Creative Arts and Design with just 2% was also a niche area. Although caution should be exercised when drawing conclusions from these results, they highlight the need for more research within the underrepresented disciplines. This in turn can help promote awareness among non-STEM students, stimulate ethical discussions on the cultural dimensions of sustainability, and encourage creative solutions through interdisciplinary dialogue.

Regarding factors and themes explored, the studies focused primarily on the acquisition of sustainability knowledge and competencies (27%), curriculum assessment (23%), challenges and barriers to sustainability integration (10%), implementation and evaluation research (10%), changes in students’ mindset (9%), key competences in sustainability literacy (5%), and active student participation in Education for Sustainable Development (5%). In terms of studies discussing acquisition processes, key focus areas included the teaching of Sustainable Development Goals, awareness of macro-sustainability trends, and knowledge of local sustainability issues. Studies on sustainability competencies focussed on systems thinking, critical thinking, problem-solving skills, ethical awareness, interdisciplinary knowledge, global awareness and citizenship, communication skills, and action-oriented mindset. These competencies and knowledge, which are generally considered crucial for addressing the multifaceted challenges of sustainability (Wiek et al., 2011), were often introduced to non-STEM students through stand-alone lectures, workshops, or pilot studies involving new cross-disciplinary curricula.

Our review also highlighted a broad range of pedagogical approaches adopted for sustainability teaching and learning within non-STEM disciplines. These covered case and project-based learning, experiential learning methods, problem-based learning, collaborative learning, reflection groups, pedagogical dialogue, flipped classroom approaches, game-based learning, and service learning. While there is strong research interest in the documentation and implementation of these pedagogical approaches, few studies have so far attempted to assess learning outcomes, particularly regarding discipline-specific sustainability expertise and real-world problem-solving skills.

Many of the reviewed studies relied on single-method approaches, meaning valuable insights into sustainability-focused teaching and learning may have been missed. For instance, studies often failed to capture the complexities surrounding sustainability integration into non-STEM programs, either by presenting positivist results that require further contextualisation or by offering rich context limited to a single course or study group, which cannot be generalised. The assessment tools currently used also seemed to lack consistency, making it difficult to compare outcomes across programmes and institutions to promote best practices. More robust evaluation designs, such as longitudinal studies, controlled intervention studies, and mixed methods approaches (Gopalan et al, 2020; Ponce & Pagán-Maldonado, 2015), are needed to explore and demonstrate the pedagogical effectiveness of various sustainability literacy initiatives in non-STEM disciplines and their impact on student outcomes and societal change.

In summary, our review suggests good progress in integrating sustainability knowledge and competencies into some core non-STEM disciplines, while also highlighting gaps. Based on the results we have formulated some questions that may help steer future research:

  • Are there systemic barriers hindering the integration of sustainability themes, challenges and competencies into specific non-STEM fields?
  • Are certain disciplines receiving disproportionate research attention at the expense of others?
  • How do different pedagogical approaches compare in terms of effectiveness for fostering sustainability literacy in and across HE fields?
  • What new educational practices are emerging, and how can we fairly assess them and evidence their benefits for students and the environment?

We also would like to encourage other researchers to engage with knowledge produced in a variety of languages and educational contexts. The multilingual search and screening strategy implemented in our review enabled us to identify and retrieve evidence from 25 EHEA countries and 24 non-English publications. If reviews of education research remain monolingual (English-only), important findings and insights will go unnoticed, hindering knowledge exchange, creativity, and innovation in HE.

Dr. Erika Kalocsányiová is a Senior Research Fellow with the Institute for Lifecourse Development at the University of Greenwich, with research centering on public health and sustainability communication, migration and multilingualism, refugee integration, and the implications of these areas for higher education policies.

Rania Hassan is a PhD student and a research assistant at the University of Greenwich. Her research centres on exploring enterprise development activities within emerging economies. As a multidisciplinary and interdisciplinary researcher, Rania is passionate about advancing academia and promoting knowledge exchange in higher education.



Reflecting on five years of feedback research and practice: progress and prospects

by Naomi Winstone and David Carless

Over the past few years, feedback research and practice in higher education have experienced sustained research interest and significant advancements. These developments have been propelled by a deeper understanding of student responses to feedback, the impact of cultural and sociomaterial factors, and the affordances and challenges posed by digital assessment and feedback methods. In 2019, we published a book in the SRHE series titled Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach. Five years later, we find it pertinent to reflect on the changes in research, practice, and discourse surrounding feedback processes in higher education since the book’s release.

Shifting paradigms in feedback processes

The book aimed to achieve two primary objectives: to present findings from the SRHE-funded ‘feedback cultures’ project and to synthesise evidence on feedback processes that prioritise student learning – what we called learning-focused feedback. This evidence was then translated into practical guidance and stimulus for reflection. A core distinction made in the book was between an ‘old paradigm’, characterized by the one-way transmission of feedback comments from educators to students, and a ‘new paradigm’, which emphasises student learning through active engagement with feedback processes of different forms, including peer feedback, self-feedback and automated feedback.

The impact of recent developments

The past five years have seen seismic shifts affecting feedback processes. The COVID-19 pandemic demonstrated the feasibility of alternative approaches to assessment and feedback, debunking many myths about insurmountable constraints. It brought issues of relationality and social presence to the forefront. Additionally, the launch of ChatGPT in November 2022 sparked debates on the distinct value of human involvement in feedback processes. Concurrently, higher education has grappled with sector-wide challenges, such as the devaluation of tuition fees in the UK and the intensification of the consumer-provider relationship.

Significant developments in feedback research and practice

Since 2019, feedback research and practice have evolved significantly. Two developments stand out to us as particularly impactful:

1. The ongoing boom of interest in feedback literacy

Feedback literacy research has become a fast-growing trend within research into feedback in higher education. The basis of feedback literacy is that students need a set of competencies which enable them to make the most of feedback opportunities of different kinds. And for students to develop these competencies, teachers need to design opportunities for students to generate, make sense of and use a variety of feedback inputs from peers, the self, teachers, or automated systems.

Student feedback literacy includes the ability to appreciate and judge the value of feedback inputs of different forms. This attribute remains relevant to both human and non-human feedback exchanges. Sometimes feedback inputs are off-target or inaccurate, so responsibility lies with the learner in using information prudently to move work forward. This is particularly pertinent in terms of inputs or feedback from generative AI (GenAI) to which we turn next. Judging the value and accuracy of GenAI inputs, and deciding what further probing or verifying is needed become important learning strategies.

2. Challenges and affordances of GenAI

The potential impact of technological disruption is often overestimated. However, the advent of ChatGPT and other large language models (LLMs) has undeniably generated both excitement and anxiety. In higher education, while assessment design has been the primary concern, discussions around feedback have also intensified.

Given the escalating and unsustainable costs of teaching in higher education, AI is sometimes seen as a panacea. Providing feedback comments – a time-consuming task for academics – could be outsourced to GenAI, theoretically freeing up time for other activities such as teaching, administration, or research. However, we caution against this approach. The mere provision of feedback comments, regardless of their origin, epitomises an old paradigm practice. As argued in our book, a process-oriented approach to feedback means that comments alone do not constitute feedback; they are merely inputs into a feedback process. Feedback occurs only when students engage with and act upon these comments.

Nevertheless, AI offers potential benefits for new paradigm feedback practices. A potential benefit of GenAI feedback is that it can be provided at a time when students need it. And if GenAI can assist educators in drafting feedback comments, it could free up time for more meaningful engagement with students, such as facilitating the implementation of feedback, supporting peer dialogue, and enhancing evaluative expertise. GenAI can also help students generate feedback on their own work, thereby developing their own evaluative judgement. In short, GenAI may not be harmful to feedback processes if we hold true to the principles of new paradigm learning-focused approaches we presented in our book.

Looking ahead: future directions in feedback research and practice

What might the next five years hold for feedback research and practice? Feedback literacy is likely to remain a key research theme because without feedback literacy it is difficult for both teachers and students to derive benefits and satisfaction from feedback processes. The potential and pitfalls of GenAI as a feedback source is likely to be a heavily populated research field. Methodologically, we anticipate a shift towards more longitudinal studies and a greater focus on behavioural outcomes, acknowledging the complexity of feedback impacts. These can be investigated over long-term durations as well as short-term ones because the benefits of complex, higher-order feedback often take time to accrue. As researchers, we are privileged to be part of a dynamic international community, working within a rapidly evolving policy and practice landscape. The field abounds with questions, challenges, and opportunities for exploration. We are excited to see what developments the future holds.

Naomi Winstone is a cognitive psychologist specialising in the processing and impact of instructional feedback, and the influence of dominant discourses of assessment and feedback in policy and practice on the positioning of educators and students in feedback processes. Naomi is Professor of Educational Psychology and Director of the Surrey Institute of Education at the University of Surrey, UK. She is also an Honorary Professor in the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University, Australia. Naomi is a Principal Fellow of the Higher Education Academy and a UK National Teaching Fellow.

David Carless works as a Professor at the Faculty of Education, University of Hong Kong, and is Head of the Academic Unit SCAPE (Social Contexts and Policies in Education). He is one of the pioneers of feedback literacy research and is listed as a top 0.1% cited researcher in the Stanford top 2% list for social sciences. His books include Designing effective feedback processes in higher education: A learning-focused approach, by Winstone and Carless, 2019 published by Routledge. He was the winner of a University Outstanding Teaching Award in 2016. The latest details of his work are on his website: https://davidcarless.edu.hku.hk/.



My Marking Life: The Role of Emotional Labour in delivering Audio Feedback to HE Students

by Samantha Wilkinson

Feedback has been heralded as the most significant single influence on student learning and achievement (Gibbs and Simpson, 2004). Despite this, students critique feedback for being unfit for purpose, considering that it does not help them clarify things they do not understand (Voelkel and Mello, 2014).

Despite written feedback being the norm in Higher Education, the literature highlights the benefit of audio feedback. King et al (2008) contend that audio feedback is often evaluated by students as being ‘richer’ than other forms of feedback.

Whilst there is a growing body of literature evaluating audio feedback from the perspective of students, the experiences of academics providing audio feedback have been explored less (Ekinsmyth, 2010). Sarcona et al (2020) is a notable exception, exploring the instructor perspective, albeit briefly. The authors share how some lecturers in their study found it quick and easy to provide audio feedback, and that they valued the ability to indicate the tone of their feedback. Other lecturers, however, stated how they had to type the notes first to remember what they wanted to say, and then record these for the audio feedback, and thus were doing twice as much work.

Whilst the affectual impact of feedback on students has been well documented in the literature (eg McFarlane and Wakeman, 2011), there is little in the academic literature on the affectual impact of the feedback process on markers (Henderson-Brooks, 2021). Whilst not specifically related to audio feedback, Spaeth (2018) is an exception, articulating that emotional labour is a performance when educators seek to balance the promotion of student learning (care) with the pressures for efficiency and quality control (time). Spaeth (2018) argues that there is a lack of attention directed towards the emotional investment on the part of colleagues when providing feedback.

Here, I bring my voice to this less explored side by examining audio feedback as a performance of emotional labour, based on my experience of trialling audio feedback as a means of providing feedback to university students through Turnitin on the Virtual Learning Environment. This trial was initiated by colleagues at a departmental level as a possible means of addressing the National Student Survey category of ‘perception of fairness’ in relation to feedback. I decided to reflect on my experience of providing audio feedback as part of a reflective practice module ‘FLEX’ that I was undertaking at the time whilst working towards my Masters in Higher Education.

When providing audio feedback, I felt more confident in the mark and feedback I awarded students, when compared to written feedback. I felt my feedback was less likely to be misinterpreted. This is because, when providing audio feedback, I simultaneously scrolled down the script, using it as an oral catalyst. I considered my audio feedback included more examples than conventional written feedback to illustrate points I made. This overcomes one perceived weakness of written feedback: that it is detached from the students’ work (McFarlane and Wakeman, 2011).

In terms of my perceived drawbacks of audio feedback, whilst some academics have found audio feedback to be quicker to produce than written feedback, I found audio feedback was more time-consuming than traditional means; a mistake in the middle of a recording meant the whole recording had to be redone. I toyed with the idea of keeping mistakes in, thinking they would make me appear more human. However, I decided to restart the recording to appear professional. This desire to craft a performance of professionalism may be related to my positionality as a fairly young, female, academic with feelings of imposter syndrome.

I work on compressed hours, working longer hours Monday-Thursday. Working in this way, I have always undertaken feedback outside of core hours, in the evening, due to the relative flexibility of providing feedback (in comparison to needing to be in person at specific times for teaching). I typically have no issue with this. However, providing audio feedback requires a different environment in comparison to providing written feedback:

Providing audio feedback in the evenings when my husband is trying to get our two children to sleep, and with two dogs excitedly scampering around is stressful. I take myself off to the bedroom and sit in bed with my dressing gown on, for comfort. Then I suddenly think how horrified students may be if they knew this was the reality of providing audio feedback. I feel like I should be sitting at my desk in a suit! I know they can’t see me when providing audio feedback, but I feel how I dress may be perceived to reflect how seriously I am taking it. (Reflective diary)                     

I work in an open plan office, with only a few private and non-soundproof pods, so providing audio feedback in the workspace is not easy. Discussing her ‘marking life’, Henderson-Brooks (2021:113) notes the need to get the perfect environment to mark in: “so, I get the chocolates (carrots nowadays), sharpen the pens (warm the screen nowadays), and warn my friends and relatives (no change nowadays) – it is marking time”. Related to this, I would always have a cup of tea (and Diet Coke) to hand, along with chocolate and crisps, to ‘treat’ myself, and make the experience more enjoyable.

When providing feedback, I felt pressure not only to make the right kind of comments, but also in the ‘correct’ tone, as I reflect below:

I feel a need to be constantly 100% enthusiastic. I am worried if I sound tired students may think I was not concentrating enough marking their assessment; if I sound low mood that I am disappointed with them; or sounding too positive that it does not match their mark. (Reflective diary)

I found it emotionally exhausting having to perform the perfect degree of enthusiasm, which I individually tailored to each student and their mark. This is compounded by the fact that I have an autoimmune disease and associated chronic fatigue, which means I get very tired and have little energy. Consequently, performing my words / voice / tone is particularly onerous, as is sitting for long periods of time when providing feedback. Similarly, Ekinsmyth (2010) says that colleagues in her study felt a need to be careful about the words used in, and the tone of, audio feedback. This was exemplified when a student had done particularly well, or had not passed the assignment.

Emotions are key to what is often considered the mundane task of providing assignment feedback to students (Henderson-Brooks, 2021). I have highlighted worries and anxieties when providing audio feedback, related to the emotional labour required in performing the ‘correct’ tone; saying appropriate words; and creating an appropriate environment and atmosphere for delivering audio feedback. I recommend that university colleagues wishing to provide audio feedback to students should:

  1. Publicise to students the purpose of audio feedback so they are more familiar with what to expect and how to get the most out of this mode of feedback. This may alleviate some of the worries of colleagues regarding how to perform for students when providing audio feedback.
  2. Deliver a presentation to colleagues with tips on how to successfully provide audio feedback. This may reduce the worries of colleagues who are unfamiliar with this mode of feedback.
  3. Undertake further research on the embodied, emotional and affective experiences of academics providing audio feedback, to bring to the fore the underexplored voices of assessors, and assist in elevating the status of audio feedback beyond being considered a mere administrative task.

Samantha Wilkinson is a Senior Lecturer in Childhood and Youth Studies at Manchester Metropolitan University. She is a Doctoral College Departmental Lead for PhDs in Education. Prior to this, she was a Lecturer in Human Geography at the same institution. Her research has made contributions regarding the centrality of care, friendship, intra and inter-generational relationships to young people’s lives. She is also passionate about using autoethnography to bring to the fore her experiences in academia, which others may be able to relate to. Twitter handle: @samanthawilko



Doctoral progress reviews: managing KPIs or developing researchers?

by Tim Clark

All doctoral students in the UK are expected to navigate periodic, typically annual, progress reviews as part of their studies (QAA, 2020). Depending on the stage, and the individual institutional regulations, these often play a role in determining confirmation of doctoral status and/or continuation of studies. Given that there were just over 100,000 doctoral students registered in the UK in 2021 (HESA, 2022), it could therefore be argued that the progress review is a relatively prominent, and potentially high stakes, example of higher education assessment.  However, despite this potential significance, guidance relating to doctoral progress reviews is fairly sparse, institutional processes and terminology reflect considerable variations in approach, empirical research to inform design is extremely limited (Dowle, 2023) and perhaps most importantly, the purpose of these reviews is often unclear or contested.

At the heart of this lack of clarity appears to be a tension surrounding the frequent positioning of progress reviews as primarily institutional tools for managing key performance indicators relating to continuation and completion, as opposed to primarily pedagogical tools for supporting individual students’ learning (Smith McGloin, 2021). Interestingly, however, there is currently very little research regarding effectiveness or practice in relation to either of these aspects. Yet there is growing evidence to support an argument that this lack of clarity regarding purpose may frequently represent a key limitation in terms of engagement and value (Smith McGloin, 2021; Sillence, 2023; Dowle, 2023). As Bartlett and Eacersall (2019) highlight, the common question is ‘why do I have to do this?’

In the context of these tensions, as a relatively new doctoral supervisor and examiner with a research interest in doctoral pedagogy, I sought to use a pedagogical lens to explore a small group of doctoral students’ experiences of navigating their progress reviews. My intention for this blog is to share some learning from this work, with a more detailed recent paper reporting on the study also available here (Clark, 2023).

Methods and Approach

This research took place in one post-1992 UK university, where progress assessment consisted of submission of a written report, followed by an oral examination or review (depending on the stage). These progress assessments are undertaken by academic staff with appropriate expertise, who are independent of the supervision team. This was a small-scale study, involving six doctoral students, who were all studying within the humanities or social sciences. Students were interviewed using a semi-structured narrative ‘event-focused’ (Jackman et al, 2022) approach, to generate a rich narrative relating to their experience of navigating through the progress review as a learning event.

In line with the pedagogical focus, the concept of ‘assessment for learning’ was adopted as a theoretical framework (Wiliam, 2011). Narratives were then analysed using an iterative ‘visit and revisit’ (Srivastava and Hopwood, 2009) approach. This involved initially developing short vignettes to consider students’ individual experiences before moving between the research question, data and theoretical framework to consider key themes and ideas.

Findings

The study identified that the students understood their doctoral progress reviews as having significant potential for supporting their learning and development, but that specific aspects of the process were understood to be particularly important. Three key understandings arose from this: firstly, that the oral ‘dialogic’ component of the assessment was seen as most valuable in developing thinking, secondly, that progress reviews offered the potential to reframe and disrupt existing thinking relating to their studies, and finally, that progress reviews have the potential to play an important role in developing a sense of autonomy, permission and motivation.

In terms of design and practice, the value of the dialogic aspect of the assessment was seen to lie in its potential to extend thinking: the assessor, as a methodological and disciplinary ‘expert’, introduced invitational, coaching-style questions to provoke reflection and provide opportunities to justify and explore research decisions. When this approach was taken, students recalled moments where they were able to make ‘breakthroughs’ in their thinking or where they later realised that the discussion was significant in shaping their future research decisions. Alongside this, a respectful and supportive approach was viewed as important in enhancing psychological safety and creating a sense of ownership and permission in relation to their work:

“I think having that almost like mentoring, which is like a mini mentoring or mini coaching session, in these examination spots is just really helpful”

“I’m pootling along and it’s going okay and now this bombshell’s just dropped, but it was helpful because, yeah, absolutely it completely shifted it.”

“It’s my study… as long as I can justify academically and back it up. Why I’ve chosen to do what I’ve done then that’s okay.” 

Implications

Clearly this is a small-scale study, with a relatively narrow disciplinary focus; however, its value is intended to lie in its potential to provoke consideration of progress reviews as tools for teaching, learning and researcher development, rather than to assert any generalisable understanding for practice.

This consideration may include questions which are relevant for research leaders, supervisors and assessors/examiners, and for doctoral students. Most notably: is there a shared understanding of the purpose of doctoral progress reviews and why we ‘have’ to do them? And how does this purpose inform design, practice and related training within our institutions?

Within this study, it was evident that the role of dialogic assessment was significant in this context, and given the additional resource required to protect or introduce such an approach, this may be an aspect which warrants further exploration and investigation to support decision making. In addition, the study also highlighted the perceived value of carefully constructed questions which invite and encourage reflection and learning, as opposed to seeking solely to ‘test’ it.

Dr Timothy Clark is Director of Research and Enterprise for the School of Education at the University of the West of England, Bristol. His research focuses on aspects of doctoral pedagogy and researcher development.



The ongoing saga of REF 2028: why doesn’t teaching count for impact?

by Ian McNay

Surprise, surprise…or not.

The initial decisions on REF 2028 (REF 2028/23/01 from Research England et al), based on the report on FRAP – the Future Research Assessment Programme – contain one surprise and one non-surprise among nearly 40 decisions. To take the second first, it recommends, through its cost analysis report, that any future exercise ‘should maintain continuity with rules and processes from previous exercises’ and ‘issue the REF guidance in a timely fashion’ (para 82). It then introduces significant discontinuities in rules and processes, and anticipates giving final guidance only in winter 2024-5, when four years (more than half) of the assessment period will have passed.

The surprise is, finally, the open recognition of the negative effects on research culture and staff careers of the REF and its predecessors (para 24), identified by respondents to the FRAP consultation about the 2028 exercise. For me, this new humility is a double-edged sword: many of the defects identified have been highlighted in my evidence-based articles (McNay, 2016; McNay, 2022), and, indeed, by the report commissioned by HEFCE (McNay, 1997) on the impact on individual and institutional behaviour of the 1992 exercise:

  • Lack of recognition of a diversity of excellences, including work on local or regional issues, because of the geographical interpretation of national/international excellence (para 37). Such local work involves different criteria of excellence, perhaps recognised in references to partnership and wider impact.
  • The need for outreach beyond the academic community, such as a dual publication strategy – one article in an academic journal matched with one in a professional journal in practical language and close to utility and application of a project’s findings.
  • Deficient arrangements for assessing interdisciplinary work (paras 60 and 61)
  • The need for a different, ‘refreshed’, approach to appointments to assessment panels (para 28)
  • The ‘negative impact on the authenticity and novelty of research, with individuals’ agendas being shaped by perceptions of what is more suitable to the exercise: favouring short-term inputs and impacts at the expense of longer-term projects…staying away from areas perceived to be less likely to perform well’. ‘The REF encourages …focus on ‘exceptional’ impacts and those which are easily measurable, [with] researchers given ‘no safe space to fail’ when it came to impact’.
  • That last negative arises in major part because of the internal management of the exercise, yet the report proposes an even greater corporate approach in future. The evidence-based articles and reports, and innovative processes and artefacts that arise from our research will have a reduced contribution to published assessments on the quality of research, though there is encouragement of a wider diversity of research outputs. More emphasis will be placed on institutional and unit ‘culture’ (para 28), so individuals disappear, uncoupled from consideration of culture-based quality. That culture is controlled by management; I spent several years as a Head of School trying to protect and develop further a collegial enterprise culture, which encouraged research and innovative activities in teaching. The senior management wanted a corporate bureaucracy approach with targets and constant monitoring, which work at Exeter has shown leads to lower output, poorer quality and higher costs (Franco-Santos et al, 2014).

At least 20 per cent of the People, Culture and Environment sub-profile for a unit will be based on an assessment of the Institutional Level (IL) culture, and this element will make up 25 per cent of a unit’s overall quality profile, up from 15 per cent in 2021. This proposed culture-based approach will favour Russell Group universities even further – their accumulated capital has led to them outscoring other universities on ‘environment’ in recent exercises, even when the output scores have been the other way round. Elizabeth Gadd, of Loughborough, had a good piece on this issue in Wonkhe on 28 June 2023. The future may see research-based universities recruiting strongly in the international market to provide subsidy to research from higher student fees, leaving the rest of us to offer access and quality teaching to UK students on fees not adjusted for inflation. Some recognition of excellent research in an unsupportive environment would be welcome, as would reward for improvement, as operated when the polytechnics and colleges joined research assessment exercises.

The culture of units will be judged by the panels – a separate panel will assess IL cultures – and will be based on a ‘structured statement’ from the management, assessing itself, plus a questionnaire submission. I have two concerns here: can academic panels competent to peer-assess research also judge the quality and contribution of management; and, given behaviours in the first round of impact assessment (Oancea, 2016), how far can we rely on the integrity of these statements?

The Contribution to Knowledge and Understanding sub-profile will make up 50 per cent of a unit’s quality profile – down from 60 per cent last time and 65 per cent in 2014. At least 10 per cent will be based on the structured statement, so Outputs – the one thing that researchers may have a significant role in – are down to only 40 per cent, at most, of what is meant by overall research quality (the FRAP International Committee recommended 33 per cent). Individuals will not be submitted. HESA data will be used to quantify staff and the number of research outputs that can be submitted will be an average of 2.5 per FTE. There is no upper limit for an individual, and staff with no outputs can be included, as well as those who support research by others, or technicians who publish. Research England (and this is mainly about England; the other three countries may do better and certainly will do things differently) is firm that the HESA numbers will not be used as the volume multiplier for funding (still a main purpose of the REF), though it is not clear where that will come from – Research England is reviewing its approach to strategic institutional research funding. Perhaps staff figures submitted to HESA will have an indicator of individuals’ engagement with research.
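To make the arithmetic behind these weightings explicit – this is my own reading of the figures in this and the preceding paragraph, with the Engagement and Impact share inferred as the remainder rather than quoted from the report – the overall quality profile appears to break down roughly as:

$$
\begin{aligned}
\text{PCE} + \text{CKU} + \text{E\&I} &\approx 25\% + 50\% + 25\% = 100\%\\[2pt]
\text{IL culture element} &\ge 20\% \times 25\% = 5\% \text{ of the overall profile}\\[2pt]
\text{Outputs} &\le 50\% - 10\% = 40\% \text{ of the overall profile}
\end{aligned}
$$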

Engagement and Impact broadens the previous element, which was simply impact. Our masters have discovered that early engagement of external partners in research, and a six-month attachment at 0.2 of a contract, allows them to be included and enhances impact. Wow! Who knew? The work underpinning impact can be of any quality level, so that the current quality-level designations no longer stop local projects being acknowledged.

The three sub-profiles have fuzzy boundaries and overlap – not just in a linear chain of environment, output and impact, but because, as noted above, engagement, for example, comes from the external environment yet becomes part of the internal culture. It is more of a Venn diagram, which allows the adoption of a ‘holistic’ approach to ‘responsible research assessment’. We wait to see what both of those mean in practice.

What is clear in that holistic approach is that research has nothing to do with teaching, and impact on teaching still does not count. That has created an issue for me in the past, since my research feeds (not leads) my teaching and vice versa. I use discovery learning and students’ critical incidents as curriculum organisers, and they produce ‘evidence’ similar to that gathered through more formal interview and observation methods. An example: I recently led a workshop for a small private HEI on academic governance. There was a newly appointed CEO. I used a model of institutional and departmental cultures which influence decision making and ‘governance’ at different levels. That model, developed to help my teaching, is now regarded by some as a theoretical framework and used as a basis for research. Does it therefore qualify for inclusion in impact? The session asked participants to consider the balance among four cultures – collegial, bureaucratic, corporate and entrepreneurial – relating to the degrees of central control of policy development and of policy delivery (McNay, 1995). It then dealt with some issues more didactically, moving to the concept of the learning organisation, where I distributed a 20-item questionnaire (not yet published, but available on request for you to use) allowing behaviours relating to the capacity to change, innovate and learn, leading to improved quality, to be scored out of 10 per item – a maximum of 200. Only one person scored more than 100 in total, and across the group the modal score was in the low 70s, or just over 35%. That gave the new CEO an agenda, with some issues more easily pursued than others and scores indicating levels of concern and priority. So my role moved into consultancy. There will be impact, but is the research base sufficient, was it even research, and does the use of teaching as a research transmission process (Boyer, 1990) disqualify it?

I hope this shows that the report contains a big agenda, with more to come. SRHE members need to consider what it means to them, but also what it means for research into institutions and departments to help define culture and its characteristics. I will not be doing it, but I hope some of you will. We need to continue to provide an evidence base to inform decisions even if it takes up to 20 years for the findings to have an impact.

SRHE itself might say several things in response to the report:

  • welcome the recognition of previous weaknesses, but note that a major one has not been recorded: the impact of the RAE/REF on teaching, when excellent research has gained extra money but excellent teaching has not, leading to an imbalance of effort within the HE system. The research-teaching nexus also needs incorporating into the holistic view of research. Teaching is a major element in the dissemination of research (Boyer, 1990) and so a conduit to impact, and should be recognised as such, because the relationship between researcher/teacher and those gaining new knowledge and understanding is more intimate and interactive than a reader of an article experiences. Discovery learning, drawing on learners’ experiences in CPD programmes, can be a source of evidence, enhancing the knowledge and understanding of the researcher for incorporation in further work and research publications.
  • welcome the commitment to more diversity of excellences. In particular, welcome the commitment to recognise local and regionally directed research and its significant impact. The arguments about intimacy and interaction apply here, too. Research in partnership is typical of such work and different criteria are needed to evaluate excellence in this context.
  • welcome the intention to review panel membership to reflect the wider view of research now to be adopted.
  • urge earlier clarification of panel criteria, to avoid researchers spending another 18 months, at least, trying, without clarity or guidance, to do work that will have to fit the framework of judgement within which it will eventually be assessed.
  • be wary of losing the voice of the researchers in the reduction of emphasis on research and its outputs in favour of presentations on corporate culture.

References

McNay, I (1997) The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: the evidence from a research project. Bristol: HEFCE


Leave a comment

Redefining cultures of excellence: A new event exploring models for change in recruiting researchers and setting research agendas

by Rebekah Smith McGloin and Rachel Handforth, Nottingham Trent University

‘Research excellence’ is a ubiquitous concept to which we are mostly habituated in the UK research ecosystem. Yet, at the end of an academic year which saw the publication of the UKRI EDI Strategy, four UKRI council reviews of their investments in PGR and a House of Commons inquiry on Reproducibility and Research Integrity, and following on from the development of manifestos, concordats, declarations and standards to support Open Research in recent years, it feels timely to engage in some critical reflection on cultures of excellence in research.

The notion of ‘excellence’ has become an increasingly important part of the research ecosystem over the last 20 years (OECD, 2014). The drivers for this are traced to the need to justify the investment of public money in research and the increasing competition for scarce resources (Münch, 2015).  University rankings have further hardwired and amplified judgments about degrees of excellence into our collective consciousness (Hazelkorn, 2015).

Jong, Franssen and Pinfield (2021) highlight, however, that the idea of excellence is a ‘boundary object’ (Star and Griesemer, 1989). That is, it is a nebulous construct which is poorly defined and is used in many different ways. It has nevertheless shaped policy, funding and assessment activities since the turn of the century. Ideas of excellence have been enacted through the Research Excellence Framework and the associated allocation to universities of funding to support research, competitive schemes for grant funding, recruitment to flagship doctoral training partnerships, and individual promotion and reward.

We can trace a number of recent initiatives at sector level, inter alia, that have sought to broaden ideas of research excellence and to challenge systemic and structural inequalities in our research ecosystem. These include the increase of the impact weighting in REF2021 to 25%; trials of partial randomisation as part of the selection process for some smaller research grants (eg by the British Academy from 2022), sketched below; the Concordats and Agreements Review work in 2023 to align and increase the influence, capacity and efficiency of activity supporting research culture; and the recent Research England investment in projects designed to address the broken pipeline into research by increasing participation of people from racialised groups in doctoral education.
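For readers unfamiliar with partial randomisation, the sketch below illustrates one common two-stage model: peer review first screens proposals against a quality threshold, and awards are then drawn by lottery from the fundable pool. The threshold, scores and number of awards are illustrative assumptions only, not the British Academy’s actual procedure.

```python
import random

def partially_randomised_selection(proposals, threshold=7.0, awards=5, seed=None):
    """proposals: list of (proposal_id, peer_review_score) tuples."""
    rng = random.Random(seed)
    # Stage 1: quality screen - only proposals judged fundable go forward.
    fundable = [pid for pid, score in proposals if score >= threshold]
    # Stage 2: lottery among the fundable pool, up to the number of awards.
    rng.shuffle(fundable)
    return fundable[:awards]

if __name__ == "__main__":
    example = [(f"P{i:02d}", score) for i, score in
               enumerate([8.2, 6.9, 7.5, 9.1, 7.0, 5.4, 8.8, 7.3, 6.2, 7.9])]
    print(partially_randomised_selection(example, threshold=7.0, awards=3, seed=42))
```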

At the end of June, we are hosting an event at NTU which will focus on redefining cultures of research excellence through the lens of inclusion. The symposium, to be held at our Clifton Campus on Wednesday 28 June, provides an opportunity to re-examine the broad notion of research excellence, in the context of systemic inequalities that have historically locked out certain types of researchers and research agendas and locked in others.

The event focuses on two mutually-reinforcing areas: the possibility of creating more responsive and inclusive research agendas through co-creation between academics and communities; and broadening pathways into research through the inclusive recruitment of PhD and early career researchers. We take the starting position that approaches which focus on advancing equity are critical to achieving excellence in UK research and innovation.

The day will include keynotes from Dr Bernadine Idowu and Professor Kalwant Bhopal, the launch of a new competency-based PGR recruitment framework, based on sector consultation, and a programme of speakers talking about their approaches to diversifying researcher recruitment and engaging the community in setting research agendas. 

NTU will be showcasing two new projects that are designed to challenge old ideas of research excellence and forge new ways of thinking. EDEPI (Equity in Doctoral Education through Partnership and Innovation Programme) is a partnership with Liverpool John Moores and Sheffield Hallam Universities and NHS Trusts in the three cities. The project will explore how working with the NHS can improve access and participation in doctoral education for racially-minoritised groups. Co(l)laboratory is a project with the University of Nottingham, based on the Universities for Nottingham civic agreement with local public-sector organisations. Co(l)laboratory will present early lessons from a community-informed approach to cohort-based doctoral training.

Our event is a great opportunity for universities and other organisations which are, in their own ways, redefining cultures of research excellence to share their approaches, challenges and successes. We invite individuals, project teams and organisations working in these areas to join us at the end of June, in the hope of building a community of practice around inclusive research cultures, within and across the sector.

Dr Rebekah Smith McGloin is Director of the Doctoral School at Nottingham Trent University and is Principal Investigator on the EDEPI and Co(l)laboratory projects. 

Dr Rachel Handforth is Senior Lecturer in Doctoral Education and Civic Engagement at NTU.


2 Comments

Will universities fail the Turing Test?

by Phil Pilkington

The recent anxiety over the development of AI programmes that generate unique text suggests that some disciplines face a crisis of passing the Turing Test: that is, you cannot distinguish between the unique AI-generated text and that produced by a human agent. Will this be the next stage in the battle over cheating by students? Will it lead to an arms race of countering the AI programmes to foil the cheating? Perhaps it may force some to redesign the curriculum, the learning and the assessment processes.

Defenders of AI programmes for text generation have produced their own euphemistic consumer guides. Jasper is a ‘writing assistant’, Dr Essay ‘gets to work and crafts your entire essay for you’, and Article Forge (get it?) ‘is one of the best essay writers and does the research for you!’. Other AI essay forgers are available. The best known and most popular is probably GPT-3, with a reported million subscribers (see The Atlantic, 6/12/2022). The promoters of the AI bots make clear that they are cheaper and quicker than using essay mills. They may even be less exploitative of those graduates in Nepal or Nottingham or Newark, New Jersey serving the essay mills. There has been handwringing that this is the ‘end of the essay’, but there have also been AI developments in STEM subjects and in art and design.

AI cannot be uninvented. It is out there, it is cheap and it is readily available. It does not necessarily follow that using it is cheating. Mike Sharples on the LSE Blog tried it out on a student assignment about learning styles. He found some simple errors of reference but made the point that GPT-3 text can be used creatively to help students understand and explore a subject. And Kent University provides guidance on the use of Grammarly, which does not create text ab initio as GPT-3 does, but does ‘write’ text.

Consumer reports on GPT-3 suggest that the output for given assignments is of a 2.2 or even 2.1 standard, albeit with faults in the generated text. These usually take the form of incorrect or inadequate references; some references were to non-existent journals and papers, with dates confused, and so on. However, a student could read through the output text and correct such errors without any cognitive engagement with the subject. Correcting the text would be rather like following an AI protocol. The next stage of AI will probably eliminate the most egregious and detectable of these errors and become the ‘untraceable poison’.

The significant point here is that it is possible to generate essays and assignments without cognitive activity in the generation of the material. This does not necessarily mean a student doesn’t learn something. Reading through the generated text may be part of a learning process, but it is an impoverished form of learning. I would distinguish the ‘learning that’ contained in the generated text from the ‘learning how’ of generating the text. This may be the challenge for the post-AI curriculum: knowing that is not as important as knowing how. What do we expect of the learning outcomes? That we know, for example, the War Aims of Imperial Germany in 1914, or that we know how to find that out, and how those aims relate to other aims and ideological outlooks? AI will provide the material for the former but not the latter.

To say that knowing that (eg the War Aims of Imperial Germany, etc) is a form of surface learning is not to confuse that memory trick with cognitive abilities, or with AI – which has no cognitive output at all. Learning is semantic: it has reference and rule-based meaning; AI text generation is syntactic and has no meaning at all (for the external world) but makes reference only to its own protocols[1]. The Turing Test does not admit this distinction, because in that test the failure to distinguish between the human agent and the AI rests on deceiving the observer.

Studies have shown that students have a scale of cheating (as specified by academic conduct rules). An early SRHE Student Experience Seminar explored the students’ acceptance of some forms of cheating and abhorrence of other forms. Examples of ‘lightweight’ and ‘acceptable’ cheating included borrowing a friend’s essay or notes, in contrast to the extreme horror of having someone sit an exam for them (impersonation). The latter was considered not just cheating for personal advantage but also disadvantaging the entire cohort (Ashworth et al, ‘Guilty in Whose Eyes?’). Where will using AI sit in the spectrum of students’ perception of cheating? Where will it sit within the academic regulations?

I will assume that it will be used both for first drafts and for ‘passing off’ as the entirety of the student’s efforts. Should we embrace the existence of AI bots? They could be our friends and help develop a curriculum that is more creative for students and staff. We will expect and assume students to be honest about their work (as individuals and within groups), but there will be pressures of a practical, cultural and psychological nature, on some students more than others, which will encourage the use of the bots. The need to work as a barista to pay the rent, to cope as a carer, to cope with dyslexia (diagnosed or not), to manage as a non-native speaker, or to overcome the disadvantages of a relatively impoverished secondary education – all distinct from the experience of the gilded and fluently entitled youth in the cohort – will be a stressor encouraging the use of the bots.

Will the use of AI be determined by the types of students’ motivation (another subject of an early SRHE Student Experience Seminar)? There will be those wanting to engage in and grasp (to cognitively possess as it were) the concept formations of the discipline (the semantical), with others who simply want to ‘get through the course’ and secure employment (the syntactical).

And what of stressed academics assessing the AI-generated texts? They could resort to AI bots for that task too. In the competitive, neo-liberal, league-table-driven universities of precarity and publish-or-be-redundant monetised research (add your own epithets here), will AI bots be used to meet increasingly demanding performance targets?

The discovery of the use of AI will be accompanied by a combination of outrage and demands for sanctions (much like the attempts to criminalise essay mills and their use). We can expect some institutions to respond that either it doesn’t happen here or it involves only a tiny minority. But if it does become the ‘untraceable poison’, how will we know? AI bots are not like essay mills. They may be used as a form of deception, as implied by the Turing Test, but they could also be used as a tool for greater understanding of a discipline. We may need a new form of teaching, learning and assessment.

Phil Pilkington’s former roles include Chair of Middlesex University Students’ Union Board of Trustees, and CEO of Coventry University Students’ Union. He is an Honorary Teaching Fellow of Coventry University and a contributor to WonkHE. He chaired the SRHE Student Experience Network for several years and helped to organise events including the hugely successful 1995 SRHE annual conference on The Student Experience; its associated book of ‘Precedings’ was edited by Suzanne Hazelgrove for SRHE/Open University Press.


[1] John Searle (The Rediscovery of the Mind, 1992) produced an elegant thought experiment to refute the existence of AI qua intelligence, or cognitive activity. He created the experiment, the Chinese Room, originally to confront the Mind-Brain identity theorists. It works as a wonderful example of how AI can be seemingly intelligent without having any cognitive content. The Chinese Room is worth following for its simplicity and elegance, and as a lesson in not taking AI seriously as ‘intelligence’.


1 Comment

Critically analysing EdTech investors’ logic in business discourse

by Javier Mármol Queraltó

This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.

In the context of the COVID-19 pandemic, the ongoing digitalisation of education has become a prominent area for social, financial and, increasingly, (critical) educational research. Higher education, as a pivotal social, economic, technological and educational domain, has seen its activities drastically affected, and universities and the multitude of people involved in them have been forced to adapt to the unfolding crisis. HE researchers agree both on the unpreparedness of countries and institutions faced with the pandemic, and on its potentially lasting impact on the educational sector (Goedegebuure and Meek, 2021). Inasmuch as educational technologies (EdTech) have been brought to the fore by their pivotal role in enabling and continuing educational practices across the globe, EdTech companies and investors have also become primary financial beneficiaries of these necessary processes of digitalisation. The extensive use and adoption of EdTech to bridge the gap between HE professionals and students under strict social distancing measures has been welcomed by investors as an opportunity for EdTech to establish itself as a key player within an educational landscape undergoing a process of assetisation (Komljenovic, 2020, 2021). Investors and EdTech are scaffolding new digital markets in HE, reshaping the conceptualisation of universities, HE and the sector more generally (Williamson, 2021; Komljenovic and Robertson, 2016). In this brief entry, I focus on EdTech investors’ discourses, owing to the potential of such discourses to shape the future of educational practices broadly speaking.

Within the ‘Universities and Unicorns’ ESRC-funded project, this exploratory research (see full report) aimed to unveil the ideological uses of linguistic, visual and multimodal devices (eg texts and charts) deployed by EdTech investors in a variety of texts that have the potential, given their circulation and goals, to shape public understandings of the role of educational technologies in the unfolding crisis. The research deployed a framework anchored in Linguistics, specifically cognitive-based approaches to Critical Discourse Studies (CL-CDS; eg Mármol Queraltó, 2021b). A central assumption in this approach is that language encodes construal: the same event or situation can be formulated linguistically in alternative ways, and these formulations can have diverse cognitive effects on readers (Hart, 2011). From a CL-CDS perspective, then, texts can potentially shape the way the public thinks (and subsequently acts) about social topics (cf Watters, 2015).

In order to extract the ideologies underlying the discourse practices of HE investors, we examined qualitatively a variety of texts disseminated in the public and semi-private domains. We investigated, for example, HolonIQ’s explanatory charts, interviews with professionals and blog entries (eg Charles MacIntyre, Alex Latsis, Jan Lynn-Matern), and global financial reports by IBIS Capital, BrightEye Ventures and EdTechX, among several others. Our main goal was to better understand how EdTech investors operationalise discourse to shape imageries of the future of the relationship between HE institutions, EdTech and governance. In line with CDS approaches, we examined the representation of social actors in context using van Leeuwen’s (2008) framework and, more in line with CL-CDS, we also analysed metaphorical expressions indexing Conceptual Metaphors, and Force-dynamics. Force-dynamics is an essential tool for examining how tensions between actors and processes are constructed within business discourse (see Oakley, 2005).

Our study yielded important findings for the critical examination of discourse processes within the EdTech-HE-governance triangle of influences. In terms of social actor representation (whose examination also included metaphor), the main findings are:

  • EdTech investors and companies are rendered as opaque, abstract collectives, and are positively represented as ‘enablers’ and ‘disruptors’ of educational processes.
  • Governments are rendered as generic, collective entities, and depicted as necessary funders of processes of digital transformation.
  • Universities or HE institutions are mainly negatively represented as potential ‘blockers’ of processes of digital transformation, and they are depicted as failing their students due to their lack of scalability and flexibility.
  • Individuals within HE institutions are identified as numbers and increasing percentages within unified collectives, with students routinely cast as beneficiaries in ‘consumer’ and ‘user’ roles, while educators are activated as ‘content providers’.
  • Metaphorically, the EdTech sector is conceptualised as a ‘ship’ on a ‘journey’ towards profit, where HE institutions can be ‘obstacles along a path’ and the global pandemic and other push factors are conceptualised as ‘tailwinds’.
  • The EdTech market is conceptualised as a ‘living organism’ that grows and evolves independent of the actors involved in it. The visual representations observed reinforce these patterns and emphasise the growth of the EdTech market in very positive terms.

The formulation of ‘push’ and ‘pull’ factors is also essential to understanding the discursively constructed ‘internal tensions’ within the sector. In order to examine these factors, we operationalised Force-dynamics analysis and metaphor, which allowed us to arrive at the following findings:

  • Push factors identified by investors as driving the EdTech sector include the COVID-19 global pandemic, the digital acceleration the sector was already experiencing before the pandemic, the increasing number of students requiring access to HE, and investors’ own actions aimed at disrupting the EdTech market.
  • Pull factors encouraging investment in the sector are conceptualised in the shape of financial predictions. The visions put forward by EdTech investors become instrumental in the achievement of those predictions.
  • The representation of the global pandemic is ambivalent and it is rendered both as a negative factor affecting societies and as a positive factor for the EdTech sector. The primary focus is on the positive outcomes of the disruption brought about by the pandemic.
  • Educational platforms are foregrounded in their enabling role and replace HE institutions as the site for educational practice, de-localising educational practices from physical universities.
  • Students and educators are found to be increasingly reframed as ‘users’ and ‘content providers’, respectively. This discursive shift is potentially indicative of the new processes of assetisation of HE.

On the whole, framing business within the ‘journey’ metaphor entails that any entities or processes affecting business are potentially conceptualised as ‘obstacles along the path’, and are therefore attributed negative connotations. In our case, those entities (eg governments and HE institutions) or processes (eg lack of funding) that metaphorically ‘stand in the way of business’ are automatically framed in a negative light, potentially affording a negative reception by the audience and therefore legitimising actions designed to remove those ‘obstacles’ (eg ‘disruptions’). EdTech companies and investors are represented very positively as ‘enablers’ of educational practices disrupted by the COVID-19 pandemic, but also as ‘push factors’ in processes of digital acceleration within the ‘speed of action is speed of motion’ metaphor. In the premised, ever-growing EdTech sector, those actors and processes that ‘slow down’ access to profits (or the processes providing access to profit) are similarly negatively represented.

The conceptualisation of the COVID-19 global pandemic in this context reflects ‘calculated ambivalence’. This ambivalence was expected, as portraying the pandemic solely as a relatively positive factor for the HE sector would be extremely detrimental to EdTech investors’ activities. Our findings show that, while the global pandemic is initially represented as a very negative factor greatly disrupting societies and businesses, those negative impacts tend to be presented in rather vague ways, and on most occasions the result of the disruption brought about by the pandemic is reduced to a change in the modality of education experienced by learners (from in-person to online education). We found no significant mention of the social or personal impacts of the pandemic (eg deaths, or scenarios affecting under-represented social groups); the focus has been mainly on the market and the activities within it. Conversely, while the initial framing of the pandemic is inherently negative, we have seen in several examples above that the pandemic is subtly instrumentalised as a ‘push factor’ which serves to accelerate digital transformation and is hence a positive factor for the EdTech sector. In a global context of restrictions, containment measures and vaccine rollouts, it is especially ideologically relevant to find the pandemic instrumentalised as a ‘catalyst’, or as an important player in an ‘experiment of global proportions’. Framing the pandemic in such ways detaches the audience from its negative connotations, and serves to depict EdTech companies and investors as involved in high-level, complex processes that abstract away the millions of diverse victims of the pandemic. Ultimately, in the ‘journey’ towards profit, COVID-19 is a desired push factor, also realised as a ‘tailwind’, which facilitates the desired digital acceleration.

On the whole, our research demonstrated that social actor representation and the distinction between push and pull factors are crucial sites for the analysis of EdTech discourse. EdTech’s primary focus is on the positive outcomes of the disruption brought about by the pandemic. In this context, educational platforms are foregrounded in their enabling role and replace HE institutions as the site for educational practice, de-localising educational practices from physical universities. Subsequently, students and educators are increasingly reframed as ‘users’ and ‘content providers’ respectively. We argue that this subtle discursive shift is potentially indicative of the new processes of assetisation of HE and reflects, more broadly, a neoliberal logic.

Javier Mármol Queraltó is a PhD candidate in Linguistics at Lancaster University. His current research deals with the multimodal representation of discourses of migration in the British and Spanish online press. He advocates a Cognitive Linguistic approach to Critical Discourse Studies (CL-CDS), and is working on a methodology that can shed light on how public perceptions of social issues might be influenced both by the multimodal constraints of online newspaper discourse and by our shared cognitive capacities. He is also interested in the multimodal and cognitive dimensions of discourses of Brexit outside the UK, news discourses of social unrest, and the marketisation/assetisation processes of HE.