SRHE Blog

The Society for Research into Higher Education


The challenge of AI declaration in HE – what can we do?

by Chahna Gonsalves

The rapid integration of AI tools like ChatGPT into academic life has raised significant concerns about academic integrity. Universities worldwide are grappling with how to manage this new frontier of technology. My recent research at King’s Business School sheds light on an intriguing challenge: student non-compliance with mandatory AI use declarations. Despite clear institutional requirements to declare AI usage in their coursework, up to 74% of students did not comply. This raises key questions about how we think about academic honesty in the age of AI, and what can be done to improve compliance and foster trust.

In November 2023, King’s Business School introduced an AI declaration section as part of the coursework coversheet. Students were required to either declare their AI use or confirm that they hadn’t used any AI tools in their work. This research, which started as an evaluation of the revised coversheet, was conducted a year after the implementation of this policy, providing insights into how students have navigated these requirements over time. The findings reveal important challenges for both educators and students in adapting to this new reality.

Fear and ambiguity: barriers to transparency

In interviews conducted as part of the study, students frequently voiced their apprehension about how AI declarations might be perceived. One student likened it to “admitting to plagiarism,” reflecting a widespread fear that transparency could backfire. Such fears illustrate a psychological barrier to compliance, where students perceive AI use declarations as risky rather than neutral. This tension is exacerbated by the ambiguity of current policies. Guidelines are often unclear, leaving students uncertain about what to declare and how that declaration will impact their academic standing.

Moreover, the rapid evolution of AI tools has blurred traditional lines of authorship and originality. Before the rise of AI, plagiarism was relatively easy to define. But now, as AI tools generate content that is indistinguishable from human-authored work, what does it mean to be original? The boundaries of academic integrity are being redrawn, and institutions need to adapt quickly to provide clearer guidance. As AI technologies become more integrated into academic practice, we must move beyond rigid policies and have more nuanced conversations about what responsible AI use looks like in different contexts.

Peer influence: AI as the “fourth group member”

A particularly striking finding from the research was the role of peer influence in shaping students’ decisions around AI use and its declaration. In group work contexts, AI tools like ChatGPT have become so normalized that one student referred to ChatGPT as the “fourth man” in group projects. This normalization makes it difficult for students to declare AI use, as doing so might set them apart from their peers who choose not to disclose. The pressure to conform can be overwhelming, and it drives non-compliance as students opt to avoid the risk of being singled out.

The normalising effect of AI usage amongst peers reflects a larger trend in academia, where technological adoption is outpacing institutional policy. This raises an urgent need for universities to not only set clear guidelines but also engage students and faculty in open discussions about AI’s role in academic work. Creating a community of transparency where AI use is openly acknowledged and discussed is crucial to overcoming the current challenges.

Solutions: clearer policies, consistent enforcement, and trust

What can be done to improve compliance with AI declarations? The research offers several recommendations. First, institutions need to develop clearer and more consistent policies around AI use. The ambiguity that currently surrounds AI guidelines must be addressed. Students need to know exactly what is expected of them, and this starts with clear definitions of what constitutes AI use and how it should be declared.

Second, enforcement of these policies needs to be consistent across all courses. Many students reported that AI declarations were emphasized in some modules but barely mentioned in others. This inconsistency breeds confusion and scepticism about the importance of the policy. Faculty training is crucial to ensuring that all educators communicate the same message to students about AI use and its implications for academic integrity.

Finally, building trust between students and institutions is essential. Students must feel confident that declaring AI use will not result in unfair penalties. One approach to building this trust is to integrate AI use into low-stakes formative assessments before moving on to higher-stakes summative assessments. This gradual introduction allows students to become comfortable with AI policies and to see that transparency will not harm their academic performance. In the long run, fostering an open, supportive dialogue around AI use can help reduce the fear and anxiety currently driving non-compliance.

Moving forward: a call for open dialogue and innovation

As AI continues to revolutionize academic work, institutions must rise to the challenge of updating their policies and fostering a culture of transparency. My research suggests that fear, ambiguity, and peer influence are key barriers to AI declaration, but these challenges can be overcome with clearer policies, consistent enforcement, and a foundation of trust. More than just a compliance issue, this is an opportunity for higher education to rethink academic integrity in the age of AI and to encourage ethical, transparent use of technology in learning.

In the end, the goal should not be to police AI use, but to harness its potential for enhancing academic work while maintaining the core values of honesty and originality. Now is the time to open up the conversation and invite both students and educators to reimagine how we define integrity in the evolving landscape of higher education. Let’s make AI part of the learning process—not something to be hidden.

This post is based on my paper ‘Addressing Student Non-Compliance in AI Use Declarations: Implications for Academic Integrity and Assessment in Higher Education’ in Assessment & Evaluation in Higher Education (published online: 22 Oct 2024).

I hope this serves as a starting point for broader discussions about how we can navigate the complexities of AI in academic settings. I invite readers to reflect on these findings and share their thoughts on how institutions can better manage the balance between technological innovation and academic integrity. 

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association.


Ethicality in academic knowledge production

by Dina Zoe Belluigi

‘Research cultures’, and their problematics, have received sufficient attention to have been given various definitions by authoritative groups within the university/research ecology in the United Kingdom, and amongst scholars in our field of enquiry. Raising questions about ethicality within research cultures, in a recent paper I explored dys/consciousness and its effects on research production and the formation of academic researchers. The empirical component focused on one part of the United Kingdom – Northern Ireland (NI).

How to conceptualise thinking and seeing for the study of UK universities?

The paper begins with a mapping of conceptualisations of consciousness, as applied by those who have studied the dynamics of racism in universities and educational institutions in the United Kingdom and the USA. The mapping includes scholars’ arguments about the persistence of not unconscious but dysconscious racism, the limits of critical consciousness, the necessity of anti-racism, and the constraints on realising decolonisation when faced with janiform approaches to structural, institutional and scientific racism in academia.

Methodological approach

The conceptual mapping served as a sensitising device through which to explore academic research cultures surrounding enquiry on social groups who were, and are, marginalised due to perceptions of their ‘otherness’ to dominantly-placed Northern Irish groups. Difference is indexed through constructions of ‘race’, ‘ethnicity’ and ‘migration’, underpinned by whiteness.

A Critical Discourse Analysis, undergirded by Critical Race Theory, was undertaken of 200 published research items related to this area of enquiry; these extended from 1994 to 2022 and were spread across disciplines. They were sourced from the repositories of the research-intensive universities in Northern Ireland. Qualitative reflections enriched the analysis, including the participation of the related academic-authors, and report-and-respond insights from institutional research officers and non-academic partners of such studies (n=37). Combining these sources made it possible to probe more deeply the ways in which such outlier practices of knowledge production reinforced, evaded or resisted dominant frames and norms of conduct.

Signs of dysconsciousness

The paper’s analysis unpacks five signs of what was interpreted as dysconscious racism and xenophobia in the context. The first sign was the under-study and under-funding of local research enquiry on, about, and by so-called ‘ethnic minorities’ and ‘migrants’. The second was the skewed dynamics within the politics of participation and of authorship, wherein those studied were rarely positioned as authorities of the knowledge produced. Thirdly, the ethicality of authors’ interests and motivations in undertaking such research was found to be complicated and undermined by strategic, and often self-serving, goals imposed by the academic research ecology. The fourth sign was problematics in the data collected and held by public authorities. The article culminates in the fifth sign: that the threats of risk, social sanction and double-speak related to such research were not only exogenous to universities, but endogenous too.

Insights for further explorations

In the current neoliberal milieu, the enablers of research – such as funding, social validation or career rewards – were of such a techno-rational nature that the depth of theorisation, complexity and intellectual debate necessary to challenge existing dysconscious racism and xenophobia remained under-supported. Moreover, the article confirms observations that – rather than enriching or catalysing criticality and plurality within the dominant formations of academic knowledge and of scholars – risk-avoidance of (perceived) controversial issues is compounded when institutions are situated within complicated local socio-political conditions. This places limits on, and indeed de-idealises, the promotional social-responsibility imaging of ‘anchor’ universities.

Participants’ counter-narratives provided insights about the production of enquiry despite, and in some cases because of, such dominant dynamics. Of interest is that many of the authors were women (in far higher proportions than the staff composition of those institutions); and many of the authors self-identified as migrant academics. In addition to external migrants to the British Isles, this included those from the Republic of Ireland and the United Kingdom, providing a sense of how alienated ‘outsiders’ were often made to feel within that academic ‘community’. Avoiding hero narratives, the article points to the politics of authorial agency within academic practices when individuals negotiate insider-outsider, minority-majority dynamics of academic research cultures hostile to such enquiry.

The article concludes by raising questions about the mantle of ethical responsibility to justice, truth, and dissent within such constraining, homogenising conditions. While it is tempting to read this as an exceptional or peculiar case, references to related studies are included throughout the article to demonstrate that similar problematic dynamics within research cultures have been observed across university spaces in the Global North, and warrant further enquiry.

Professor Belluigi is a Council member of SRHE; Professor of Authorship, Representation and Transformation at Queen’s University Belfast; and a Visiting Professor at Nelson Mandela University.


What do artificial intelligence systems mean for academic practice?

by Mary Davis

I attended and made a presentation at the SRHE Roundtable event ‘What do artificial intelligence systems mean for academic practice?’ on 19 July 2023. The roundtable brought together a wide range of perspectives on artificial intelligence: philosophical questions, problematic results, ethical considerations, the changing face of assessment and practical engagement for learning and teaching. The speakers represented a range of UK HEI contexts, as well as Australia and Spain, and a variety of professional roles including academic integrity leads, lecturers of different disciplines and emeritus professors.

The day began with Ron Barnett’s fierce defence of the value of authorship and his concerns about what it means to be a writer in a chatbot world. Ron argued that the use of AI tools can lead to an erosion of trust; the essential trust relationship between writer and reader in HE, and in wider social contexts such as law, may disintegrate and, with it, society. Ron reminded us of the pain and struggle of writing and of creating the authorial voice that is necessary for human writing. He urged us to think about frameworks of learning such as ‘deep learning’ (Ramsden), agency and internal story-making (Archer) and his own ‘Will to Learn’, all of which could be lost. His arguments challenged us to reflect on the far-reaching social consequences of AI use and opened the day of debate very powerfully.

I then presented the advice I have been giving to students at my institution, drawing on my analysis of student declarations of AI use, which I had categorised using a traffic light system: appropriate use (eg checking and fixing a text before submission); at-risk use (eg paraphrasing and summarising); and inappropriate use (eg using assignment briefs as prompts and submitting the output as one’s own work). I received helpful feedback from the audience that the traffic lights provided useful navigation for students. Coincidentally, the next speaker, Angela Brew, also used a traffic light system to guide students with AI. She argued for the need to help students develop a scholarly mindset, and for staff to stop teaching as if in the 18th century, when universities were the foundations of knowledge. Instead, she proposed that everyone at university should be a discoverer, a learner and a producer of knowledge, as a response to AI use.

Stergios Aidinlis provided an intriguing insight into the practical use of AI as part of a law degree. In his view, generative AI can be an opportunity to make assessment fit for purpose. He presented a three-stage model of learning with AI: in stage 1, students use AI to produce a project pre-mortem to tackle a legal problem as pre-class preparation; in stage 2, they use AI as a mentor to help them solve a legal problem in class; and in stage 3, they use AI to evaluate the technology after class. Stergios recommended Mollick and Mollick (2023) for ideas to help students learn to use AI. His presentation stood out in terms of practical ideas, and it made me think about the availability of suitable AI tools for all students to be able to do tasks like this.

The next session, by Richard Davies, one of the roundtable convenors, took a philosophical direction, considering what a ‘student’s own work’ actually means and how we assess a student’s contribution. David Boud returned the theme to assessment and argued that three elements are always necessary: assuring that learning outcomes have been met (summative assessment), enabling students to use information to aid learning (formative assessment), and building students’ capacity to evaluate their own learning (sustainable assessment). He argued for a major redesign of assessment that still incorporates these elements but avoids tasks that are no longer viable.

Liz Newton presented guidance for students which emphasized positive ways to use AI, such as using it for planning or teaching, which concurred with my session. Maria Burke argued for ethical approaches to the use of AI that incorporate transparency, accountability, fairness and regulation, and that promote critical thinking within an AI context. Finally, Tania Alonso presented her ChatGPTeaching project, with seven student rules for the use of ChatGPT, such as proposing its use only in areas of the student’s own knowledge.

The roundtable discussion was lively and our varied perspectives and experiences added a lot to the debate; I believe we all came away with new insights and ideas. I especially appreciated the opportunity to look at AI from practical and philosophical viewpoints. I am looking forward to the ongoing sessions and forum discussions. Thanks very much to SRHE for organising this event.

Dr Mary Davis is Academic Integrity Lead and Principal Lecturer (Education and Student Experience) at Oxford Brookes University. She has been a researcher of academic integrity since 2005 and has carried out extensive research on plagiarism, the use of text-matching tools, the development of source use, proofreading, and educational responses to academic conduct issues; her recent research focuses on inclusion in academic integrity. She is on the Board of Directors of the International Center for Academic Integrity and co-chair of the International Day of Action for Academic Integrity.


Max Weber and the rationalisation of education

by Geoff Hinchliffe

To understand our own times, it can be beneficial to go back in time and take advantage of a fresh perspective from afar. One thinker who was uncannily prescient about some of our current concerns in higher education was Max Weber (1864-1920). Weber has always been held in high esteem by sociologists, of course. But I think what he has to say about the effects of bureaucratisation is of interest to anyone working in higher education at the moment.

Weber thought that the methods and techniques of bureaucracy were all-pervasive in a modern industrial society. These techniques were by no means confined to the state: bureaucracy colonised all forms of commercial and institutional behaviour – including education. These techniques were also accompanied by a certain habit of mind which Weber called rationalisation. In his book The Protestant Ethic, Weber famously invokes the ‘iron cage’ which modern man had constructed for himself, signifying the development of procedures and behaviours necessary for a modern economic order whilst “the rosy blush of its laughing heir, the Enlightenment, seems to be irretrievably fading” (Weber, pp 181-2).

This ‘iron cage’ – the cage of rationalisation – includes: