
The challenge of AI declaration in HE – what can we do?


by Chahna Gonsalves

The rapid integration of AI tools like ChatGPT into academic life has raised significant concerns about academic integrity. Universities worldwide are grappling with how to manage this new frontier of technology. My recent research at King’s Business School sheds light on an intriguing challenge: student non-compliance with mandatory AI use declarations. Despite a clear institutional requirement for students to declare AI use in their coursework, up to 74% did not comply. This raises key questions about how we think about academic honesty in the age of AI, and what can be done to improve compliance and foster trust.

In November 2023, King’s Business School introduced an AI declaration section as part of the coursework coversheet. Students were required to either declare their AI use or confirm that they hadn’t used any AI tools in their work. This research, which started as an evaluation of the revised coversheet, was conducted a year after the implementation of this policy, providing insights into how students have navigated these requirements over time. The findings reveal important challenges for both educators and students in adapting to this new reality.

Fear and ambiguity: barriers to transparency

In interviews conducted as part of the study, students frequently voiced their apprehension about how AI declarations might be perceived. One student likened it to “admitting to plagiarism,” reflecting a widespread fear that transparency could backfire. Such fears illustrate a psychological barrier to compliance, where students perceive AI use declarations as risky rather than neutral. This tension is exacerbated by the ambiguity of current policies. Guidelines are often unclear, leaving students uncertain about what to declare and how that declaration will impact their academic standing.

Moreover, the rapid evolution of AI tools has blurred traditional lines of authorship and originality. Before the rise of AI, plagiarism was relatively easy to define. But now, as AI tools generate content that is indistinguishable from human-authored work, what does it mean to be original? The boundaries of academic integrity are being redrawn, and institutions need to adapt quickly to provide clearer guidance. As AI technologies become more integrated into academic practice, we must move beyond rigid policies and have more nuanced conversations about what responsible AI use looks like in different contexts.

Peer influence: AI as the “fourth group member”

A particularly striking finding from the research was the role of peer influence in shaping students’ decisions around AI use and its declaration. In group work contexts, AI tools like ChatGPT have become so normalized that one student referred to ChatGPT as the “fourth man” in group projects. This normalization makes it difficult for students to declare AI use, as doing so might set them apart from their peers who choose not to disclose. The pressure to conform can be overwhelming, and it drives non-compliance as students opt to avoid the risk of being singled out.

The normalising effect of AI use amongst peers reflects a broader trend in academia, where technological adoption is outpacing institutional policy. This creates an urgent need for universities not only to set clear guidelines but also to engage students and faculty in open discussions about AI’s role in academic work. Creating a community of transparency, where AI use is openly acknowledged and discussed, is crucial to overcoming the current challenges.

Solutions: clearer policies, consistent enforcement, and trust

What can be done to improve compliance with AI declarations? The research offers several recommendations. First, institutions need to develop clearer and more consistent policies around AI use. The ambiguity that currently surrounds AI guidelines must be addressed. Students need to know exactly what is expected of them, and this starts with clear definitions of what constitutes AI use and how it should be declared.

Second, enforcement of these policies needs to be consistent across all courses. Many students reported that AI declarations were emphasized in some modules but barely mentioned in others. This inconsistency breeds confusion and scepticism about the importance of the policy. Faculty training is crucial to ensuring that all educators communicate the same message to students about AI use and its implications for academic integrity.

Finally, building trust between students and institutions is essential. Students must feel confident that declaring AI use will not result in unfair penalties. One approach to building this trust is to integrate AI use into low-stakes formative assessments before moving on to higher-stakes summative assessments. This gradual introduction allows students to become comfortable with AI policies and to see that transparency will not harm their academic performance. In the long run, fostering an open, supportive dialogue around AI use can help reduce the fear and anxiety currently driving non-compliance.

Moving forward: a call for open dialogue and innovation

As AI continues to revolutionize academic work, institutions must rise to the challenge of updating their policies and fostering a culture of transparency. My research suggests that fear, ambiguity, and peer influence are key barriers to AI declaration, but these challenges can be overcome with clearer policies, consistent enforcement, and a foundation of trust. More than just a compliance issue, this is an opportunity for higher education to rethink academic integrity in the age of AI and to encourage ethical, transparent use of technology in learning.

In the end, the goal should not be to police AI use, but to harness its potential for enhancing academic work while maintaining the core values of honesty and originality. Now is the time to open up the conversation and invite both students and educators to reimagine how we define integrity in the evolving landscape of higher education. Let’s make AI part of the learning process—not something to be hidden.

This post is based on my paper, “Addressing Student Non-Compliance in AI Use Declarations: Implications for Academic Integrity and Assessment in Higher Education”, published in Assessment & Evaluation in Higher Education (online: 22 Oct 2024).

I hope this serves as a starting point for broader discussions about how we can navigate the complexities of AI in academic settings. I invite readers to reflect on these findings and share their thoughts on how institutions can better strike the balance between technological innovation and academic integrity.

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association.
