SRHE Blog

The Society for Research into Higher Education



Who gets to decide what counts as knowledge? Big tech, AI, and the future of epistemic agency in higher education

by Mehreen Ashraf, Eimear Nolan, Manuel F Ramirez, Gazi Islam and Dirk Lindebaum

Walk into almost any university today, and you can be sure to encounter the topic of AI and how it affects higher education (HE). AI applications, especially large language models (LLMs), have become part of everyday academic life, used for drafting outlines, summarising readings, and even helping students to ‘think’. For some, the emergence of LLMs is a revolution that makes learning more efficient and accessible. For others, it signals something far more unsettling: a shift in how and by whom knowledge is controlled. This latter point is the focus of our new article published in Organization Studies.

At the heart of our article is a shift in what is referred to as epistemic (or knowledge) governance: the way in which knowledge is created, organised, and legitimised in HE. In plain terms, epistemic governance is about who gets to decide what counts as credible, whose voices are heard, and how the rules of knowing are set. Universities have historically been central to epistemic governance through peer review, academic freedom, teaching, and the public mission of scholarship. But as AI tools become deeply embedded in teaching and research, those rules are being rewritten not by educators or policymakers, but by the companies that own the technology.

From epistemic agents to epistemic consumers

Universities, academics, and students have traditionally been epistemic agents: active producers and interpreters of knowledge. They ask questions, test ideas, and challenge assumptions. But when we rely on AI systems to generate or validate content, we risk shifting from being agents of knowledge to consumers of knowledge. Technology takes on the heavy cognitive work: it finds sources, summarises arguments, and even produces prose that sounds academic. However, this efficiency comes at the cost of profound changes in the nature of intellectual work.

Students who rely on AI to tidy up their essays or generate references will learn less about critically evaluating sources, connecting ideas and constructing arguments – skills that are essential for reasoning through complex problems. Academics who let AI draft research sections, or who feed decision letters and reviewer reports into AI and ask it to produce a ‘revision strategy’, might save time but lose the slow, reflective process that leads to original thought, while undercutting their own agency in the process. And institutions that embed AI into learning systems hand part of their epistemic governance – their authority to define what knowledge is and how it is judged – to private corporations.

This is not about individual laziness; it is structural. As Shoshana Zuboff argued in The age of surveillance capitalism, digital infrastructures do not just collect information, they reorganise how we value and act upon it. When universities become dependent on tools owned by big tech, they enter an ecosystem where the incentives are commercial, not educational.

Big tech and the politics of knowing

The idea that universities might lose control of knowledge sounds abstract, but it is already visible. Jisc’s 2024 framework on AI in tertiary education warns that institutions must not ‘outsource their intellectual labour to unaccountable systems,’ yet that outsourcing is happening quietly. Many UK universities, including the University of Oxford, have signed up to corporate AI platforms to be used by staff and students alike. This, in turn, facilitates the collection of data on learning behaviours that can be fed back into proprietary models.

This data loop gives big tech enormous influence over what is known and how it is known. A company’s algorithm can shape how research is accessed, which papers surface first, or which ‘learning outcomes’ appear most efficient to achieve. That’s epistemic governance in action: the invisible scaffolding that structures knowledge behind the scenes. At the same time, it is easy to see why AI technologies appeal to universities under pressure. AI tools promise speed, standardisation, lower costs, and measurable performance, all seductive in a sector struggling with staff shortages and audit culture. But those same features risk hollowing out the human side of scholarship: interpretation, dissent, and moral reasoning. The risk is not that AI will replace academics but that it will change them, turning universities from communities of inquiry into systems of verification.

The Humboldtian ideal and why it is still relevant

The modern research university was shaped by the 19th-century thinker Wilhelm von Humboldt, who imagined higher education as a public good, a space where teaching and research were united in the pursuit of understanding. The goal was not efficiency: it was freedom. Freedom to think, to question, to fail, and to imagine differently.

That ideal has never been perfectly achieved, but it remains a vital counterweight to market-driven logics that render AI a natural way forward in HE. When HE serves as a place of critical inquiry, it nourishes democracy itself. When it becomes a service industry optimised by algorithms, it risks producing what Žižek once called ‘humans who talk like chatbots’: fluent, but shallow.

The drift toward organised immaturity

Scholars like Andreas Scherer and colleagues describe this shift as organised immaturity: a condition in which sociotechnical systems prompt us to stop thinking for ourselves. AI tools appear to liberate us from labour, but what they are actually doing is narrowing the space for judgement and doubt.

In HE, that immaturity shows up when students skip the reading because ‘ChatGPT can summarise it’, or when lecturers rely on AI-generated slides rather than designing lessons for their own cohort. Each act seems harmless, but collectively they erode our epistemic agency. The more we delegate cognition to systems optimised for efficiency, the less we cultivate the messy, reflective habits that sustain democratic thinking. Immanuel Kant once defined immaturity as ‘the inability to use one’s understanding without guidance from another’. In the age of AI, that ‘other’ may well be an algorithm trained on millions of data points, but answerable to no one.

Reclaiming epistemic agency

So how can higher education reclaim its epistemic agency? The answer lies not only in rejecting AI but also in rethinking our possible relationships with it. Universities need to treat generative tools as objects of inquiry, not as invisible infrastructure. That means embedding critical digital literacy across curricula: not simply training students to use AI responsibly, but teaching them to question how it works, whose knowledge it privileges, and whose it leaves out.

In classrooms, educators could experiment with comparative exercises: have students write an essay on their own, then analyse an AI-generated version of the same task. What’s missing? What assumptions are built in? How were students changed when the AI wrote the essay for them, and when they wrote it themselves? As the Russell Group’s 2024 AI principles note, ‘critical engagement must remain at the heart of learning.’

In research, academics too must realise that their unique perspectives, disciplinary judgement, and interpretive voices matter, perhaps now more than ever, in a system where AI’s homogenisation of knowledge looms. We need to understand that the more we subscribe to optimisation and efficiency as the preferred values of academic work, the more naturally AI’s penetration of HE will unfold.

Institutionally, universities might consider building open, transparent AI systems through consortia, rather than depending entirely on proprietary tools. This isn’t just about ethics; it’s about governance and ensuring that epistemic authority remains a public, democratic responsibility.

Why this matters to you

Epistemic governance and epistemic agency may sound like abstract academic terms, but they refer to something fundamental: the ability of societies and citizens (not just ‘workers’) to think for themselves. If universities lose control over how knowledge is created, validated and shared, we risk not just changing education but weakening democracy. As journalist George Monbiot recently wrote, ‘you cannot speak truth to power if power controls your words.’ The same is true for HE. We cannot speak truth to power if power now writes our essays, marks our assignments, and curates our reading lists.

Mehreen Ashraf is an Assistant Professor at Cardiff Business School, Cardiff University, United Kingdom.

Eimear Nolan is an Associate Professor in International Business at Trinity Business School, Trinity College Dublin, Ireland.

Manuel F Ramirez is Lecturer in Organisation Studies at the University of Liverpool Management School, UK.

Gazi Islam is Professor of People, Organizations and Society at Grenoble Ecole de Management, France.

Dirk Lindebaum is Professor of Management and Organisation at the School of Management, University of Bath.



The challenge of AI declaration in HE – what can we do?

by Chahna Gonsalves

The rapid integration of AI tools like ChatGPT into academic life has raised significant concerns about academic integrity. Universities worldwide are grappling with how to manage this new frontier of technology. My recent research at King’s Business School sheds light on an intriguing challenge: student non-compliance with mandatory AI use declarations. Despite clear institutional requirements to declare AI usage in their coursework, up to 74% of students did not comply. This raises key questions about how we think about academic honesty in the age of AI, and what can be done to improve compliance and foster trust.

In November 2023, King’s Business School introduced an AI declaration section as part of the coursework coversheet. Students were required to either declare their AI use or confirm that they hadn’t used any AI tools in their work. This research, which started as an evaluation of the revised coversheet, was conducted a year after the implementation of this policy, providing insights into how students have navigated these requirements over time. The findings reveal important challenges for both educators and students in adapting to this new reality.

Fear and ambiguity: barriers to transparency

In interviews conducted as part of the study, students frequently voiced their apprehension about how AI declarations might be perceived. One student likened it to “admitting to plagiarism,” reflecting a widespread fear that transparency could backfire. Such fears illustrate a psychological barrier to compliance, where students perceive AI use declarations as risky rather than neutral. This tension is exacerbated by the ambiguity of current policies. Guidelines are often unclear, leaving students uncertain about what to declare and how that declaration will impact their academic standing.

Moreover, the rapid evolution of AI tools has blurred traditional lines of authorship and originality. Before the rise of AI, plagiarism was relatively easy to define. But now, as AI tools generate content that is indistinguishable from human-authored work, what does it mean to be original? The boundaries of academic integrity are being redrawn, and institutions need to adapt quickly to provide clearer guidance. As AI technologies become more integrated into academic practice, we must move beyond rigid policies and have more nuanced conversations about what responsible AI use looks like in different contexts.

Peer influence: AI as the “fourth group member”

A particularly striking finding from the research was the role of peer influence in shaping students’ decisions around AI use and its declaration. In group work contexts, AI tools like ChatGPT have become so normalized that one student referred to ChatGPT as the “fourth man” in group projects. This normalization makes it difficult for students to declare AI use, as doing so might set them apart from their peers who choose not to disclose. The pressure to conform can be overwhelming, and it drives non-compliance as students opt to avoid the risk of being singled out.

The normalising effect of AI usage amongst peers reflects a larger trend in academia, where technological adoption is outpacing institutional policy. This raises an urgent need for universities to not only set clear guidelines but also engage students and faculty in open discussions about AI’s role in academic work. Creating a community of transparency where AI use is openly acknowledged and discussed is crucial to overcoming the current challenges.

Solutions: clearer policies, consistent enforcement, and trust

What can be done to improve compliance with AI declarations? The research offers several recommendations. First, institutions need to develop clearer and more consistent policies around AI use. The ambiguity that currently surrounds AI guidelines must be addressed. Students need to know exactly what is expected of them, and this starts with clear definitions of what constitutes AI use and how it should be declared.

Second, enforcement of these policies needs to be consistent across all courses. Many students reported that AI declarations were emphasized in some modules but barely mentioned in others. This inconsistency breeds confusion and scepticism about the importance of the policy. Faculty training is crucial to ensuring that all educators communicate the same message to students about AI use and its implications for academic integrity.

Finally, building trust between students and institutions is essential. Students must feel confident that declaring AI use will not result in unfair penalties. One approach to building this trust is to integrate AI use into low-stakes formative assessments before moving on to higher-stakes summative assessments. This gradual introduction allows students to become comfortable with AI policies and to see that transparency will not harm their academic performance. In the long run, fostering an open, supportive dialogue around AI use can help reduce the fear and anxiety currently driving non-compliance.

Moving forward: a call for open dialogue and innovation

As AI continues to revolutionize academic work, institutions must rise to the challenge of updating their policies and fostering a culture of transparency. My research suggests that fear, ambiguity, and peer influence are key barriers to AI declaration, but these challenges can be overcome with clearer policies, consistent enforcement, and a foundation of trust. More than just a compliance issue, this is an opportunity for higher education to rethink academic integrity in the age of AI and to encourage ethical, transparent use of technology in learning.

In the end, the goal should not be to police AI use, but to harness its potential for enhancing academic work while maintaining the core values of honesty and originality. Now is the time to open up the conversation and invite both students and educators to reimagine how we define integrity in the evolving landscape of higher education. Let’s make AI part of the learning process—not something to be hidden.

This post is based on my paper Addressing Student Non-Compliance in AI Use Declarations: Implications for Academic Integrity and Assessment in Higher Education in Assessment & Evaluation in Higher Education (Published online: 22 Oct 2024).

I hope this serves as a starting point for broader discussions about how we can navigate the complexities of AI in academic settings. I invite readers to reflect on these findings and share their thoughts on how institutions can better manage the balance between technological innovation and academic integrity. 

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association.



Restraining the uncanny guest: AI ethics and university practice

by David Webster

If GAI is the ‘uncanniest of guests’ in the University, what can we do about any misbehaviour? What do we do with this uninvited guest who behaves badly, won’t leave and seems intent on asserting that it’s their house now anyway? They won’t stay in their room and seem to have their fingers in everything.

Nihilism stands at the door: whence comes this uncanniest of all guests?[1]

Nietzsche saw the emergence of nihilistic worldviews as presaging a century of turmoil and destruction, only after which might more creative responses to the sweeping away of older systems of thought be possible. Generative Artificial Intelligence, uncanny in its own discomforting ways, can be argued to threaten the world of higher education with an upending of the existing conventions and practices that have long been the norm in the sector. Some might welcome this guest, in that there is much wrong in the way universities have created knowledge, taught students, served communities and reproduced social practice. The concern must surely be, though, that GAI is not a creative force, but a repackaging and re-presenting of existing human understanding and belief. We need to think carefully about the way this guest’s behaviour might exert influence in our house.

After decades of seeking to eliminate prejudices and bias, GAI threatens to reinscribe misogyny, racism, homophobia and other unethical discrimination back into the academy. Since the majority of content used to train large language models has been generated by the most prominent and privileged groups in human culture, might we not see a recolonisation, just as universities are starting to push for a more decolonised, inclusive and equitable learning experience?

After centuries of citation tradition and careful attribution of sources, GAI seems intent on shuffling the work of human scholars and presenting it without any clarity as to whence it came. Some news organisations and authors are even threatening to sue OpenAI as they believe their content has been used, without permission, to train the company’s ChatGPT tool.

Furthermore, this seems to be a guest inclined to hallucinate and recount their visions as the earnest truth. The guest has also imbibed substantive propaganda, taken satirical articles as serious factual accounts (hence the glue-on-pizza and rock-eating AI advice), and is targeted by pseudo-science dressed in linguistic frames of respectability. How can we deal with this confident, ambitious, and ill-informed guest who keeps offering to save us time and money?

While there isn’t a simple answer (if I had that, I’d be busy monetising it!), an adaptation of this guest metaphor might help. This is to view GAI rather like an unregulated child prodigy: awash with talent but with a lacuna of discernment. It can do so much, but often doesn’t have the capacity to know what it shouldn’t do, what is appropriate or helpful and what is frankly dangerous.

GAI systems are capable of almost magical-seeming feats, but also lack basic understanding of how the world operates and are blind to all kinds of contextual appreciation. Most adults would take days trying to draw what a GAI system can generate in seconds, and would struggle to match its ‘skills’, but even an artistically-challenged adult like myself, with barely any artistic talent at all, would know how many fingers, noses or arms were appropriate in a picture – no matter how clumsily I rendered them. The idea of GAI as a child prodigy, in need of moral guidance and requiring tutoring and careful curation of the content they are exposed to, can help us better understand just how limited these systems are. This orientation to GAI also helps us see that what we are witnessing is not a finished solution to various tasks currently undertaken by people, but rather a surplus of potential. The child prodigy is capable of so much, but is still a child and, critically, still requires prodigious supervision.

So as universities look to use student-facing chatbots for support and answering queries, to automate their arcane and lengthy internal processes, to sift through huge datasets and to analyse and repackage existing learning content, we need to be mindful of GAI’s immaturity. It offers phenomenal potential in all these areas and, despite the overdone hype, it will drive a range of huge changes to how we work in higher education, but it is far from ready to work unsupervised. GAI needs moral instruction; it needs to be reshaped as it develops, and we might do this by assuming the mindset of a watchful, if also proud, parent.

Professor Dave Webster is Director of Education, Quality & Enhancement at the University of Liverpool. He has a background in teaching philosophy and the study of religion, with ongoing interests in Buddhist thought and the intersections of new religious movements and conspiracy theory. He is also concerned about pedagogy, GAI and the future of Higher Education.


[1] Nietzsche, F, The Will to Power, trans. Walter Kaufmann and R. J. Hollingdale, ed., with commentary, Walter Kaufmann, Vintage, 1968.



Reflecting on five years of feedback research and practice: progress and prospects

by Naomi Winstone and David Carless

Over the past few years, feedback research and practice in higher education have attracted sustained interest and seen significant advances. These developments have been propelled by a deeper understanding of student responses to feedback, the impact of cultural and sociomaterial factors, and the affordances and challenges posed by digital assessment and feedback methods. In 2019, we published a book in the SRHE series titled Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach. Five years later, we find it pertinent to reflect on the changes in research, practice, and discourse surrounding feedback processes in higher education since the book’s release.

Shifting paradigms in feedback processes

The book aimed to achieve two primary objectives: to present findings from the SRHE-funded ‘feedback cultures’ project and to synthesise evidence on feedback processes that prioritise student learning – what we called learning-focused feedback. This evidence was then translated into practical guidance and a stimulus for reflection. A core distinction made in the book was between an ‘old paradigm’, characterised by the one-way transmission of feedback comments from educators to students, and a ‘new paradigm’, which emphasises student learning through active engagement with feedback processes of different forms, including peer feedback, self-feedback and automated feedback.

The impact of recent developments

The past five years have seen seismic shifts affecting feedback processes. The COVID-19 pandemic demonstrated the feasibility of alternative approaches to assessment and feedback, debunking many myths about insurmountable constraints. It brought issues of relationality and social presence to the forefront. Additionally, the launch of ChatGPT in November 2022 sparked debates about the distinct value of human involvement in feedback processes. Concurrently, higher education has grappled with sector-wide challenges, such as the erosion of the real value of tuition fees in the UK and the intensification of the consumer-provider relationship.

Significant developments in feedback research and practice

Since 2019, feedback research and practice have evolved significantly. Two developments stand out to us as particularly impactful:

1. The ongoing boom of interest in feedback literacy

Feedback literacy has become one of the fastest-growing strands of research into feedback in higher education. The basis of feedback literacy is that students need a set of competencies which enable them to make the most of feedback opportunities of different kinds. And for students to develop these competencies, teachers need to design opportunities for students to generate, make sense of and use a variety of feedback inputs from peers, the self, teachers, or automated systems.

Student feedback literacy includes the ability to appreciate and judge the value of feedback inputs of different forms. This attribute remains relevant to both human and non-human feedback exchanges. Sometimes feedback inputs are off-target or inaccurate, so responsibility lies with the learner to use the information prudently to move work forward. This is particularly pertinent for inputs or feedback from generative AI (GenAI), to which we turn next. Judging the value and accuracy of GenAI inputs, and deciding what further probing or verification is needed, become important learning strategies.

2. Challenges and affordances of GenAI

The potential impact of technological disruption is often overestimated. However, the advent of ChatGPT and other large language models (LLMs) has undeniably generated both excitement and anxiety. In higher education, while assessment design has been the primary concern, discussions around feedback have also intensified.

Given the escalating and unsustainable costs of teaching in higher education, AI is sometimes seen as a panacea. Providing feedback comments – a time-consuming task for academics – could be outsourced to GenAI, theoretically freeing up time for other activities such as teaching, administration, or research. However, we caution against this approach. The mere provision of feedback comments, regardless of their origin, epitomises an old paradigm practice. As argued in our book, a process-oriented approach to feedback means that comments alone do not constitute feedback; they are merely inputs into a feedback process. Feedback occurs only when students engage with and act upon these comments.

Nevertheless, AI offers potential benefits for new paradigm feedback practices. A potential benefit of GenAI feedback is that it can be provided at a time when students need it. And if GenAI can assist educators in drafting feedback comments, it could free up time for more meaningful engagement with students, such as facilitating the implementation of feedback, supporting peer dialogue, and enhancing evaluative expertise. GenAI can also help students generate feedback on their own work, thereby developing their own evaluative judgement. In short, GenAI may not be harmful to feedback processes if we hold true to the principles of new paradigm learning-focused approaches we presented in our book.

Looking ahead: future directions in feedback research and practice

What might the next five years hold for feedback research and practice? Feedback literacy is likely to remain a key research theme because without feedback literacy it is difficult for both teachers and students to derive benefits and satisfaction from feedback processes. The potential and pitfalls of GenAI as a feedback source are likely to be a heavily populated research field. Methodologically, we anticipate a shift towards more longitudinal studies and a greater focus on behavioural outcomes, acknowledging the complexity of feedback impacts. These can be investigated over long-term durations as well as short-term ones because the benefits of complex, higher-order feedback often take time to accrue. As researchers, we are privileged to be part of a dynamic international community, working within a rapidly evolving policy and practice landscape. The field abounds with questions, challenges, and opportunities for exploration. We are excited to see what developments the future holds.

Naomi Winstone is a cognitive psychologist specialising in the processing and impact of instructional feedback, and the influence of dominant discourses of assessment and feedback in policy and practice on the positioning of educators and students in feedback processes. Naomi is Professor of Educational Psychology and Director of the Surrey Institute of Education at the University of Surrey, UK. She is also an Honorary Professor in the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University, Australia. Naomi is a Principal Fellow of the Higher Education Academy and a UK National Teaching Fellow.

David Carless works as a Professor at the Faculty of Education, University of Hong Kong, and is Head of the Academic Unit SCAPE (Social Contexts and Policies in Education). He is one of the pioneers of feedback literacy research and is listed as a top 0.1% cited researcher in the Stanford top 2% list for social sciences. His books include Designing effective feedback processes in higher education: A learning-focused approach, by Winstone and Carless, 2019 published by Routledge. He was the winner of a University Outstanding Teaching Award in 2016. The latest details of his work are on his website: https://davidcarless.edu.hku.hk/.



Unmasking the complexities of academic work

by Inger Mewburn

Hang out in any tearoom and you will hear complaints about work – that’s if there even is a tea room at the end of your open plan cubicle farm. Yet surprisingly little is known about the mundane, daily realities of academic work itself – despite the best efforts of many SRHE members.

Understanding the source of academic work unhappiness is important: unhappy academics lead to unhappy students and stressed-out administrators. If we know more about academics’ working lives, we are better placed to care for our colleagues and produce the kind of research and teaching our broader communities expect of us.

To understand more about academics’ working lives, we are embarking on an ambitious research project to survey 5000 working academics and would love you to take part.

Who is doing the ‘academic housework’?

Higher education institutions are major employers and substantial contributors to national economies. Yet there is a notable lack of comprehensive research on the practicalities of academic work, particularly with respect to how we bring our ‘whole self’ to work.

Just about everyone in academia is dealing with some aspect of their lives which affects how they do their work. Some are neurodiverse, with neurodiverse teenagers at home. Others may have a disability and are part of an under-represented group. More of us than you would think face financial precariousness, and just being a woman can result in being given more of the ‘academic housework’. The impact of these various circumstances can be negative or positive from the employer’s point of view. For example, we know that neurodivergent academics spend a lot of energy ‘masking’ to make other people’s work lives easier, often at the expense of their own wellbeing (Jones, 2023). But we also know that including neurodiverse people in research groups can increase scientific productivity. At the same time, many neurodivergent people avoid disclosing for fear of stigma (even the word ‘disclose’ suggests that individuals should feel shame for merely being who they are).

Benefits for our employers can come at a great cost for us as individuals. While a body of literature exists on factors that affect student academic performance in university settings, there is no equivalent focus on university staff. The literature on students helps us design appropriate processes and services to try to even out the playing field and help everyone reach their potential. But we do not show this same compassion towards ourselves. The existing discourse on academics as workers tends to revolve around output metrics and shallow performance measures. This narrow focus fails to capture the full spectrum of academic labour and our lived experiences.

Our research aims to fill this gap by exploring how academics experience their work from their own perspectives. We seek to understand how the production of knowledge occurs, how academic work is constructed and experienced through daily practices, with a specific focus on academic productivity and distraction. We want to see how various bio-demographic factors interrelate and impact feelings like overwhelm and exhaustion.

Why this research matters

The importance of this study is multifaceted:

1. Informing Policy and Practice: By gaining a deeper understanding of academic work patterns, institutions can develop more effective policies to support their staff and enhance productivity and wellbeing.

2. Addressing Inequalities: The COVID-19 pandemic has highlighted and exacerbated existing inequalities in academia. Our research will explore how factors such as gender, caring responsibilities, and neurodiversity impact academic work experiences.

3. Adapting to Change: As the higher education sector continues to evolve, particularly in the wake of the pandemic and the rise of digital technologies like AI, it’s crucial to understand how these changes affect academic work practices.

4. Supporting Well-being: By examining the interplay between productivity, distraction, and work intensity, we can identify strategies to better support academics’ well-being and job satisfaction.

5. Enhancing Knowledge Production: Ultimately, by understanding and improving the conditions of academic work, we can enhance the quality and quantity of knowledge production in higher education and make better classrooms for everyone.

A comprehensive approach

Our study employs a mixed-methods approach, combining a large-scale survey with follow-up interviews. This methodology allows us to capture both broad trends and individual experiences, providing a nuanced picture of academic work life.

The survey covers a wide range of topics, including:

– Perceptions of academic productivity

– Experiences of distraction and focus

– Work distribution across research, teaching, and administration

– Impact of factors such as neurodiversity, caring responsibilities, and chronic conditions

– Use of technology and AI in academic work

– Feelings of belonging and value within the academic community

We are particularly interested in exploring how these factors intersect and influence each other. For instance, how does neurodiversity impact experiences of productivity and distraction? How do caring responsibilities interact with gender in relation to the number of hours worked and where the work takes place? And who thinks AI is helpful to their work and how are people ‘cognitively offloading’ to machines?

Call for participation

The success of this research hinges on wide participation from across the academic community. We are seeking respondents from all career stages, disciplines, and geographical locations. Whether you’re a seasoned professor or a new PhD student, whether you identify as neurodivergent or not, whether you love academic life or find it challenging – your experiences are valuable and needed.

Moreover, this research provides an opportunity for self-reflection. By engaging with the survey questions, you may gain new insights into your own work practices and experiences, potentially leading to personal growth and improved work strategies.

Looking ahead

The findings from this study will be disseminated through various channels, including academic publications, teaching materials, and potentially, policy recommendations. We are committed to making our results accessible and applicable to the wider academic community.

We stand at a critical juncture in higher education. As the sector faces unprecedented challenges and changes, understanding the nature of academic work has never been more important. By participating in this research, you can play a crucial role in shaping the future of academia.

To participate in the survey or learn more about the study, please visit the survey here: https://anu.au1.qualtrics.com/jfe/form/SV_eEeXg1L3RZJJWce.

Professor Inger Mewburn is the Director of Researcher Development at The Australian National University where she oversees professional development workshops and programs for all ANU researchers. Aside from creating new posts on the Thesis Whisperer blog (www.thesiswhisperer.com), she writes scholarly papers and books about research education, with a special interest in post-PhD employability, research communications and neurodivergence.

Reference

Jones, S (2023) ‘Advice for autistic people considering a career in academia’ Autism 27(7) pp 2187–2192



Spotlight on the inclusion process in developing AI guidance and policy

by Lilian Schofield and Joanne J. Zhang

Introduction

When the discourse on ChatGPT started gaining momentum in higher education in 2022, the ‘emotions’ behind educators’ responses, such as feelings of exclusion, isolation, and fear of technological change, were not initially at the forefront. Even educators’ apprehension about the introduction and use of AI in education, itself an emotional response, was not given much attention. This was highlighted by Ng et al (2023), who noted that many AI tools are new to educators, and that many educators may feel overwhelmed by them due to a lack of understanding or familiarity with the technology. The big issues then were debates about banning the use of ChatGPT, ethical and privacy concerns, inclusion issues and concerns about academic misconduct (Cotton et al, 2023; Malinka et al, 2023; Rasul et al, 2023; Zhou & Schofield, 2023).

As higher education institutions started developing AI guidance, again the focus seemed to be geared towards students’ ethical and responsible use of AI, with little guidance for educators. Here we reflect on the process of developing the AI guidance of the School of Business and Management, Queen Mary University of London, through the lens of inclusion and educators’ ‘voice’. We view ‘inclusion’ as the active participation and contribution of educators in the process of co-creating the AI policy alongside multiple voices from students and staff.

Co-creating inclusive AI guidance

Triggered by the lack of clear AI guidance for students and educators, the School of Business and Management at Queen Mary University of London (QMUL) embarked on developing AI guidance for students and staff from October 2023 to March 2024. Led by Deputy Directors of Education Dr Joanne J. Zhang and Dr Darryn Mitussis, the guidance was co-created with staff members through different modes, such as best practice sharing sessions, a staff away day, student-staff consultation, and staff consultation. These experiences helped shape the inclusive, bottom-up approach to developing the AI guidance. The best practice sharing sessions allowed educators to contribute their expertise as well as providing a platform to voice their fears and apprehensions about adopting and using AI for teaching. The sessions acted as a space to share concerns and became a space where educators could find a sense of relief and solidarity. Staff members shared that knowing that others held similar apprehensions was reassuring and reduced the feeling of isolation. This collective space helped promote a more collaborative and supportive environment in which educators could comfortably explore AI applications in their teaching.

Furthermore, the iterative process of developing this guidance engaged different ‘voices’ within and outside the school. For instance, we discussed with the QMUL central team their approach and resources for facilitating AI usage by students and staff. We discussed the Russell Group principles on AI usage and explored different universities’ AI policies and practices. The draft guidance was discussed and endorsed at the Teaching Away Day and education committee meetings. As a result, we suggested three principles for developing effective practices in teaching and learning:

  1. Explore and learn.
  2. Discuss and inform.
  3. Stress test and validate.

Key learning points from our process include providing avenues for voice, whether in support of AI or not, and ensuring educators are active participants in the guidance-making process. This is also reflected in the AI guidance itself, which supports all staff in developing effective practices at their own pace.

Consultation with educators and students was an important avenue for inclusion in the process of developing the AI policy. Open communication and dialogue facilitated staff members’ opportunities to contribute to and shape the AI policy. This consultative approach enhanced the inclusion of educators and strengthened the AI policy.

Practical suggestions

Voice is a powerful tool (Arnot & Reay, 2007). However, educators may feel silenced and isolated without an avenue for their voice. This ‘silence’ and isolation takes us back to the initial challenges experienced at the start of the AI discourse, such as apprehension, fear, and isolation. The need to address these issues is pertinent, especially now, when employers, students and higher education institutions are pushing for AI to be embedded in the curriculum and for graduates to be AI-skilled (Southworth et al, 2023). A co-creative approach to developing AI policies is crucial to enable critique and learning, promoting a sense of ownership and commitment to the successful integration of AI in education.

The process of developing an AI policy can itself address the barriers to educators adopting AI in their practice and act as an enabler of inclusion. It ensures educators’ voices are heard, addresses their fears, and finds effective ways to develop a co-created AI policy. This inclusive, participatory and co-creative approach helped mitigate fears associated with AI by creating a supportive environment where apprehensions could be openly discussed and addressed.

The co-creative approach of developing the policy with educators’ voices plays an important role in AI adoption. Creating avenues, such as the best practice sharing sessions where educators can discuss their experiences with AI, both positive and negative, ensures that voices are heard and concerns are acknowledged and addressed. This collective sharing builds a sense of community and support, helping to alleviate individual anxieties.

Steps that could be taken towards an inclusive approach to developing an inclusive AI guidance and policy are as follows:

  1. Set up a core group – the Director for Education, the chair of the exam board, and educators from different subject areas. Though the development of AI guidance can take a top-down approach, it is important that the group is inclusive of educators’ voices and concerns.
  2. Design multiple avenues for educators’ ‘voices’ to be heard (best practice sharing sessions within and across faculties, teaching away days).
  3. Keep communication channels clear and open for all to contribute.
  4. Engage all staff and students – hearing from students directly is powerful for staff, too; we learned a lot from students and included their voices in the guidance.
  5. Integrate and gain endorsement from the school management team. Promoting educators’ involvement in creating AI guidance legitimises their contributions and ensures that their insights are taken seriously. Additionally, such endorsement ensures that AI guidance is aligned with the needs and ethical considerations of those directly engaged with and affected by the guidance.

Conclusion

As many higher education institutions move towards embedding AI into the curriculum and become clearer in their AI guidance, it is crucial to acknowledge and address the emotional dimensions educators face in adapting to AI technologies in education. Educators’ voices in shaping AI policy and guidance are important in ensuring that they understand the guidance, embrace it and are upskilled, so that the embedding and implementation of AI in teaching and learning can succeed.

Dr. Lilian Schofield is a senior lecturer in Nonprofit Management and the Deputy Director of Student Experience at the School of Business and Management, Queen Mary University of London. Her interests include critical management pedagogy, social change, and sustainability. Lilian is passionate about incorporating and exploring voice, silence, and inclusion into her practice and research. She is a Queen Mary Academy Fellow and has taken up the Learning and Teaching Enhancement Fellowship, where she works on student skills enhancement practice initiatives at Queen Mary University of London.

Dr Joanne J. Zhang is Reader in Entrepreneurship and Deputy Director of Education at the School of Business and Management, Queen Mary University of London, and a visiting fellow at the University of Cambridge. She was named ‘Entrepreneurship Educator of the Year’ in the Triple E European Award 2022. Joanne is also the founding director of the Entrepreneurship Hub and the QM Social Venture Fund – the first student-led social venture fund investing in ‘startups for good’ in the UK. Joanne’s research and teaching interests are entrepreneurship, strategy and entrepreneurship education. She has led and engaged in large-scale research and scholarship projects totalling over £7m. Email: Joanne.zhang@qmul.ac.uk



For meta or for worse…

by Paul Temple

Remember the Metaverse? Oh, come on, you must remember it, just think back a year, eighteen months ago, it was everywhere! Mark Zuckerberg’s new big thing, ads everywhere about how it was going to transform, well, everything! I particularly liked the ad showing a school group virtually visiting the Metaverse forum in ancient Rome, which was apparently going to transform their understanding of the classical world. Well, that’s what $36 bn (yes, that’s billion) buys you. Accenture were big fans back then, displaying all the wide-eyed credulity expected of a global consultancy firm when they reported in January 2023 that “Growing consumer and business interest in the Metaverse [is] expected to fuel [a] trillion dollar opportunity for commerce, Accenture finds”.

It was a little difficult, though, to find actual uses of the Metaverse, as opposed to vague speculations about its future benefits, on the Accenture website. True, they’d used it in 2022 to prepare a presentation for Tuvalu for COP27; and they’d created a virtual “Global Collaboration Village” for the 2023 Davos get-together; and we mustn’t overlook the creation of the ChangiVerse, “where visitors can access a range of fun-filled activities and social experiences” while waiting for delayed flights at Singapore’s Changi airport. So all good. Now tell me that I don’t understand global business finance, but I’d still be surprised if these and comparable projects added up to a trillion dollars.

But of course that was then, in the far-off days of 2023. In 2024, we’re now in the thrilling new world of AI, do keep up! Accenture can now see that “AI is accelerating into a mega-trend, transforming industries, companies and the way we live and work…better positioned to reinvent, compete and achieve new levels of performance.” As I recall, this is pretty much what the Metaverse was promising, but never mind. Possible negative effects of AI? Sorry, how do you mean, “negative”?

It has often been observed that every development in communications and information technology – radio, TV, computers, the internet – has produced assertions that the new technology means that the university as understood hitherto is finished. Amazon is already offering a dozen or so books published in the last six months on the impact of the various forms of AI on education, which, to go by the summaries provided, mostly seem to present it in terms of the good, the bad, and the ugly. I couldn’t spot an "end of the university as we know it" offering, but it has to be along soon.

You’ve probably played around with ChatGPT – perhaps you were one of its 100 million users logging on within two months of its release – maybe to see how students (or you) might use it. I found it impressive, not least because of its speed, but at the same time rather ordinary: neat B-grade summaries of topics of the kind you might produce after skimming the intro sections of a few standard texts but, honestly, nothing very interesting. Microsoft is starting to include ChatGPT in its Office products; so you might, say, ask it to list the action points from the course committee minutes over the last year, based on the Word files it has access to. In other words, to get it to undertake, quickly and accurately, a task that would be straightforward yet tedious for a person: a nice feature, but hardly transformative. (By the way, have you tried giving ChatGPT some text it produced and asking where it came from? It said to me, in essence, I don’t remember doing this, but I suppose I might have: it had an oddly evasive feel.)

So will AI transform the way teaching and learning works in higher education? A recent paper by Strzelecki (2023), reporting on an empirical study of the use of ChatGPT by Polish university students, notes both the potential benefits if it can be carefully integrated into normal teaching methods – creating material tailored to individuals’ learning needs, for example – and the obvious ethical problems that will inevitably arise. If students are able to use AI to produce work which they pass off as their own, it seems to me that that is an indictment of under-resourced, poorly-managed higher education which doesn’t allow a proper engagement between teachers and students, rather than a criticism of AI as such. Plagiarism in work that I marked really annoyed me, because the student was taking the course team for fools, assuming our knowledge of the topic was as limited as theirs. (OK, there may have been some very sophisticated plagiarism which I missed, but I doubt it: a sophisticated plagiarist is usually a contradiction in terms.)

The 2024 Consumer Electronics Show (CES), held in Las Vegas in January 2024, was all about AI. Last year it was all about the Metaverse; this year, although the Metaverse got a mention, it seemed to rank in terms of interest well below the AI-enabled cat flap on display – it stops puss coming in if it’s got a mouse in its jaws – which I’m guessing cost rather less than $36bn to develop. I’ve put my name down for one.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education.



Fair use or copyright infringement? What academic researchers need to know about ChatGPT prompts

by Anita Toh

As scholarly research into and using generative AI tools like ChatGPT becomes more prevalent, it is crucial for researchers to understand the intersections of copyright, fair use, and use of generative AI in research. While there is much discussion about the copyrightability of generative AI outputs and the legality of generative AI companies’ use of copyrighted material as training data (Lucchi, 2023), there has been relatively little discussion about copyright in relation to user prompts. In this post, I share an interesting discovery about the use of copyrighted material in ChatGPT prompts.

Imagine a situation where a researcher wishes to conduct a content analysis on specific YouTube videos for academic research. Does the researcher need to obtain permission from YouTube or the content creators to use these videos?

As per YouTube’s guidelines, researchers do not require explicit copyright permission if they are using the content for "commentary, criticism, research, teaching, or news reporting," as these activities fall under the umbrella of fair use (Fair Use on YouTube – YouTube Help, 2023).

What about this scenario? A researcher wants to compare the types of questions posed by investors on the reality television series, Shark Tank, with questions generated by ChatGPT as it roleplays an angel investor. The researcher plans to prompt ChatGPT with a summary of each Shark Tank pitch and ask ChatGPT to roleplay as an angel investor and ask questions. In this case, would the researcher need to obtain permission from Shark Tank or its production company, Sony Pictures Television?

In my exploration, I discovered that it is indeed crucial to obtain permission from Sony Pictures Television. ChatGPT’s terms of service emphasise that users should “refrain from using the service in a manner that infringes upon third-party rights. This explicitly means the input should be devoid of copyrighted content unless sanctioned by the respective author or rights holder” (Fiten & Jacobs, 2023).
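To make the scenario concrete, here is a minimal, purely illustrative sketch of the kind of prompt involved, written against the OpenAI Python client as a stand-in for typing into the ChatGPT interface by hand (the pitch summary, model name and prompt wording are all hypothetical, and data-use terms differ between the API and the consumer ChatGPT product). The point it illustrates is that the researcher’s summary of copyrighted material is itself the ‘input’ submitted to the service.

# Hypothetical sketch only: shows how a summary of a copyrighted Shark Tank pitch
# would be submitted as a prompt. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The researcher's own summary of one pitch (hypothetical text, derived from the show)
pitch_summary = "Two founders pitch a reusable-packaging startup, asking for $200k for 10% equity."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are an angel investor evaluating a startup pitch."},
        {"role": "user", "content": (
            "Here is a summary of a pitch:\n"
            f"{pitch_summary}\n\n"
            "Role-play as an angel investor and list the questions you would ask the founders."
        )},
    ],
)

# Generated investor questions, to be compared with those asked on the show
print(response.choices[0].message.content)

Whichever interface is used, it is this submitted summary – not ChatGPT’s output – whose copyright status is at issue.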

I therefore initiated communication with Sony Pictures Television, seeking approval to incorporate Shark Tank videos in my research. However, my request was declined by Sony Pictures Television in California, citing “business and legal reasons”. Undeterred, I approached Sony Pictures Singapore, only to receive a reaffirmation that Sony cannot endorse my proposed use of their copyrighted content “at the present moment”. They emphasised that any use of their copyrighted content must strictly align with the Fair Use doctrine.

This raises the question: why does the proposed research not align with fair use? My initial understanding was that the fair use doctrine allows re-users to use copyrighted material without permission from rights holders for news reporting, criticism, review, educational and research purposes (Copyright Act 2021 Factsheet, 2022).

In the absence of further responses from Sony Pictures Television, I searched the web for answers.

Two findings emerged which could shed light on Sony’s reservations:

  • ChatGPT’s terms highlight that “user inputs, besides generating corresponding outputs, also serve to augment the service by refining the AI model” (Fiten & Jacobs, 2023; OpenAI Terms of Use, 2023).
  • OpenAI is currently facing legal action from various authors and artists alleging copyright infringement (Milmo, 2023). They contend that OpenAI used their copyrighted content to train ChatGPT without their consent. Adding to this, the New York Times is also contemplating legal action against OpenAI for the same reason (Allyn, 2023).

These revelations point to a potential rationale behind Sony Pictures Television’s reluctance: while use of their copyrighted content for academic research might be considered fair use, introducing this content into ChatGPT could infringe upon the non-commercial stipulations (What Is Fair Use?, 2016) inherent in the fair use doctrine.

In conclusion, the landscape of copyright laws and fair use in relation to generative AI tools is still evolving. While previously researchers could rely on the fair use doctrine for the use of copyrighted material in their research work, the availability of generative AI tools now introduces an additional layer of complexity. This is particularly pertinent when the AI itself might store or use data to refine its own algorithms, which could potentially be considered a violation of the non-commercial use clause in the fair use doctrine. Sony Pictures Television’s reluctance to grant permission for the use of their copyrighted content in association with ChatGPT reflects the caution that content creators and rights holders are exercising in this new frontier. For researchers, this highlights the importance of understanding the terms of use of both the AI tool and the copyrighted material prior to beginning a research project.

Anita Toh is a lecturer at the Centre for English Language Communication (CELC) at the National University of Singapore (NUS). She teaches academic and professional communication skills to undergraduate computing and engineering students.



What do artificial intelligence systems mean for academic practice?

by Mary Davis

I attended and made a presentation at the SRHE Roundtable event ‘What do artificial intelligence systems mean for academic practice?’ on 19 July 2023. The roundtable brought together a wide range of perspectives on artificial intelligence: philosophical questions, problematic results, ethical considerations, the changing face of assessment and practical engagement for learning and teaching. The speakers represented a range of UK HEI contexts, as well as Australia and Spain, and a variety of professional roles including academic integrity leads, lecturers of different disciplines and emeritus professors.

The day began with Ron Barnett’s fierce defence of the value of authorship and the concerns about what it means to be a writer in a Chatbot world. Ron argued that use of AI tools can lead to an erosion of trust; the essential trust relationship between writer and reader in HE and wider social contexts such as law may disintegrate and with it, society. Ron reminded us of the pain and struggle of writing and creating an authorial voice that is necessary for human writing. He urged us to think about the frameworks of learning such as ‘deep learning’ (Ramsden), agency and internal story-making (Archer) and his own ‘Will to Learn’, all of which could be lost. His arguments challenged us to reflect on the far-reaching social consequences of AI use and opened the day of debate very powerfully.

I then presented the advice I have been giving to students at my institution, based on my analysis of student declarations of AI use, which I had categorised using a traffic light system: appropriate use (eg checking and fixing a text before submission); at-risk use (eg paraphrasing and summarising); and inappropriate use (eg using assignment briefs as prompts and submitting the output as one’s own work). I received helpful feedback from the audience that the traffic lights provided useful navigation for students. Coincidentally, the next speaker, Angela Brew, also used a traffic light system to guide students with AI. She argued for the need to help students develop a scholarly mindset, and for staff to stop teaching as in the 18th century, with universities as the foundations of knowledge. Instead, she proposed that everyone at university should be a discoverer, a learner and a producer of knowledge, as a response to AI use.

Stergios Aidinlis provided an intriguing insight into the practical use of AI as part of a law degree. In his view, generative AI can be an opportunity to make assessment fit for purpose. He presented a three-stage model of learning with AI: in stage 1, students use AI to produce a project pre-mortem on a legal problem as pre-class preparation; in stage 2, they use AI as a mentor to help them solve a legal problem in class; and in stage 3, they use AI to evaluate the technology after class. Stergios recommended Mollick and Mollick (2023) for ideas to help students learn to use AI. His presentation stood out for its practical ideas and made me think about whether suitable AI tools are available to all students to carry out tasks like this.

The next session, by Richard Davies, one of the roundtable convenors, took a philosophical direction in considering what a ‘student’s own work’ actually means and how we assess a student’s contribution. David Boud returned the theme to assessment and argued that three elements are always necessary: assuring that learning outcomes have been met (summative assessment), enabling students to use information to aid learning (formative assessment) and building students’ capacity to evaluate their learning (sustainable assessment). He argued for a major redesign of assessment that still incorporates these elements but avoids tasks that are no longer viable.

Liz Newton presented guidance for students which emphasised positive ways to use AI, such as using it for planning or teaching, which concurred with my session. Maria Burke argued for ethical approaches to the use of AI that incorporate transparency, accountability, fairness and regulation, and promote critical thinking within an AI context. Finally, Tania Alonso presented her ChatGPTeaching project with seven student rules for the use of ChatGPT, such as using it only in areas of the student’s own knowledge.

The roundtable discussion was lively and our varied perspectives and experiences added a lot to the debate; I believe we all came away with new insights and ideas. I especially appreciated the opportunity to look at AI from practical and philosophical viewpoints. I am looking forward to the ongoing sessions and forum discussions. Thanks very much to SRHE for organising this event.

Dr Mary Davis is Academic Integrity Lead and Principal Lecturer (Education and Student Experience) at Oxford Brookes University. She has been a researcher of academic integrity since 2005 and has carried out extensive research on plagiarism, the use of text-matching tools, the development of source use, proofreading and educational responses to academic conduct issues, and has focused her recent research on inclusion in academic integrity. She is on the Board of Directors of the International Center for Academic Integrity and co-chair of the International Day of Action for Academic Integrity.



Understanding the value of EdTech in higher education

by Morten Hansen

This blog is a re-post of an article first published on universityworldnews.com. It is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.

EdTech companies are, on average, priced modestly, although some have earned strong valuations. We know that valuation practices normally reflect investors’ belief in a company’s ability to make money in the future. We are, however, still learning about how EdTech generates value for users, and how to take account of such value in the grand scheme of things.


Valuation and deployment of user-generated data

EdTech companies are not competing with the likes of Google and Facebook for advertisement revenue. That is why phrases such as ‘you are the product’ and ‘data is the new oil’ yield little insight when applied to EdTech. For EdTech companies, strong valuations hinge on the idea that technology can bring use value to learners, teachers and organisations – and that they will eventually be willing to pay for such benefits, ideally in the form of a subscription. EdTech companies try to deliver use value in multiple ways, such as deploying user-generated data to improve their services. User-generated data are the digital traces we leave when engaging with a platform: keyboard strokes and mouse movements, clicks and inactivity.


The value of user-generated data in higher education

The gold standard for unlocking the ‘value’ of user-generated data is to bring about an activity that could not otherwise have arisen. Change is brought about through data feedback loops. Loops consist of five stages: data generation, capture, anonymisation, computation and intervention. Loops can be long or short.


For example, imagine that a group of students is assigned three readings for class. Texts are accessed and read on an online platform. Engagement data indicate that all students spent time reading text 1 and text 2, but nobody read text 3. As a result of this insight, come next semester, text 3 is replaced by a more ‘engaging’ text. That is a long feedback loop.


Now, imagine that one student is reading one text. The platform’s machine learning programme generates a rudimentary quiz to test comprehension. Based on the student’s answers, further readings are suggested or the student is encouraged to re-read specific sections of the text. That is a short feedback loop.
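
To make the stages of a short feedback loop concrete, here is a minimal sketch in Python. It assumes a hypothetical reading platform: the QuizResult structure, the function names and the 0.5 score threshold are invented purely for illustration and are not drawn from any real EdTech product.

```python
# A minimal sketch of a 'short' feedback loop on a hypothetical reading
# platform. All names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class QuizResult:
    student_id: str   # captured from platform engagement (stages 1-2)
    section: str      # which section of the text was quizzed
    score: float      # proportion of correct answers, 0.0 to 1.0

def anonymise(result: QuizResult) -> QuizResult:
    """Stage 3: strip identifying information before computation."""
    return QuizResult(student_id="anon", section=result.section, score=result.score)

def compute_intervention(result: QuizResult) -> str:
    """Stage 4: turn the captured data into a recommendation."""
    if result.score < 0.5:
        return f"Re-read section '{result.section}' before continuing."
    return "Comprehension looks solid; here is a suggested further reading."

def short_feedback_loop(raw_result: QuizResult) -> str:
    """Stages 3-5 in one pass; stage 5 (intervention) is the returned message."""
    return compute_intervention(anonymise(raw_result))

# Example: a student scores 40% on a quiz about one section of text 1.
print(short_feedback_loop(QuizResult("s123", "text 1, section 2", 0.4)))
```

The point of the sketch is simply that, once the quiz data exist, the whole loop from capture to intervention runs without any additional human labour, which is what makes it ‘short’.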


In reality, most feedback loops do not bring about activity that could not have happened otherwise. It is not as if a professor could not learn, through conversation, which texts students like best, which points are comprehended, and so on. What is true, though, is that the basis and quality of such judgments shift. Most importantly, so does the cost structure that underpins judgment.


The more automated feedback loops are, the greater the economies of scale. ‘Automation’ refers to the decoupling of additional feedback loops from additional labour inputs. ‘Economies of scale’ means that the average cost of delivering feedback loops decreases as the company grows.
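
As a rough illustration of that scale argument, the short sketch below spreads an invented fixed platform cost over an increasing number of automated loops. The figures are made up purely to show the shape of the curve; they are not estimates from the project.

```python
# Illustrative arithmetic only: invented numbers, not project data.
# Average cost per feedback loop falls as fixed costs are spread over more loops.

FIXED_COST = 1_000_000    # eg building and maintaining the platform, per year
MARGINAL_COST = 0.002     # eg compute cost of delivering one automated loop

for loops_per_year in (100_000, 1_000_000, 100_000_000):
    average_cost = FIXED_COST / loops_per_year + MARGINAL_COST
    print(f"{loops_per_year:>11,} loops -> average cost per loop: £{average_cost:.4f}")
```

The marginal cost barely moves, but the fixed cost per loop collapses as volume grows, which is why automation and scale reinforce one another.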


Proponents of machine learning and other artificial intelligence approaches argue that the use value of feedback loops improves with scale: the more users engage in the back-and-forth between generating data, receiving intervention and generating new data, the more precise the underlying learning algorithms become in predicting what interventions will ‘improve learning’.


The platform learns and grows with us

EdTech platforms proliferate because they are seen to deliver better value for money than the human-centred alternative. Cloud-based platforms are accessed through subscriptions without transfer of ownership. The economic relationship is underwritten by law and continued payment is legitimated through the feedback loops between humans and machines: the platform learns and grows with us, as we feed it.


Machine learning techniques certainly have the potential to improve the efficiency with which we organise certain learning activities, such as particular types of student assessment and monitoring. However, we do not know which values to mobilise when judging intervention efficacy: ‘value’ and ‘values’ are different things.


In everyday talk, we speak about ‘value’ when we want to justify or critique a state of affairs that has a price: is the price right, too low, or too high? We may disagree on the price, but we do agree that something is for sale. At other times we reject the idea that a thing should be for sale, like a family heirloom, love or education. If people tell us otherwise, we question their values. This is because values are about relationships and politics.


When we ask about the values of EdTech in higher education, we are really asking: what type of relations do we think are virtuous and appropriate for the institution? What relationships are we forging and replacing between machines and people, and between people and people?


When it comes to the application of personal technology, we have valued convenience, personalisation and seamlessness, forging very intimate but easily forgettable machine-human relations. This could happen in the EdTech space as well. Speech-to-text recognition, natural language processing and machine vision are examples of how bonds can be built between humans and computers, aiding feedback loops by making worlds of learning computable.


Deciding which learning relations to make computable, I argue, should be driven by values. Instead of seeing EdTech as a silver bullet that simply drives learning outcomes, it is more useful to think of it as technology that mediates learning relations and processes: what relationships do we value as important for students, and when is technology helpful or unhelpful in establishing them? In this way, values can help guide the way we account for the value of EdTech.

Morten Hansen is a research associate on the Universities and Unicorns project at Lancaster University, and a PhD student at the Faculty of Education, University of Cambridge, United Kingdom. Hansen specialises in education markets and has previously worked as a researcher at the Saïd Business School in Oxford.