SRHE Blog

The Society for Research into Higher Education



Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

by Carmen Cabrera and Ruth Neville

Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural networks, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs are usually characterised by their close resemblance to human-generated content. While GAI shows great promise for improving the learning experience across disciplines, its growing uptake also raises concerns about misuse, over-reliance and, more generally, its impact on the learning process. In response, many UK higher education (HE) institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in HE learning have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox: students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard the outputs as reliable. High levels of use therefore coexist with low levels of trust.

Using GAI without trusting it

At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet when asked about the reliability of GAI outputs, many students disagree that GAI can be considered a reliable source of knowledge. This apparent contradiction raises a question: why are students still using tools they do not fully trust? The answer lies in convenience. Students are not necessarily using GAI because they believe it is accurate; they are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students’ scepticism about the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

Not all perceptions are equal

While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perception of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning, than female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to different pressures and contexts shaping how students engage with these tools. For international students in particular, GAI can help in navigating language barriers or unfamiliar academic conventions; in those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns often observed in relation to academic integrity and risk-taking, where female students report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

Different interpretations of institutional guidance

Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as our study shows, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

Where does the ‘problem’ lie?

Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, pointing to a need to promote fairness and reduce differential risk at the institutional level.

These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then completed a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA) at UCL, where she currently holds an honorary position.

Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.



The challenge of AI declaration in HE – what can we do?

by Chahna Gonsalves

The rapid integration of AI tools like ChatGPT into academic life has raised significant concerns about academic integrity. Universities worldwide are grappling with how to manage this new frontier of technology. My recent research at King’s Business School sheds light on an intriguing challenge: student non-compliance with mandatory AI use declarations. Despite clear institutional requirements to declare AI usage in their coursework, up to 74% of students did not comply. This raises key questions about how we think about academic honesty in the age of AI, and what can be done to improve compliance and foster trust.

In November 2023, King’s Business School introduced an AI declaration section as part of the coursework coversheet. Students were required to either declare their AI use or confirm that they hadn’t used any AI tools in their work. This research, which started as an evaluation of the revised coversheet, was conducted a year after the implementation of this policy, providing insights into how students have navigated these requirements over time. The findings reveal important challenges for both educators and students in adapting to this new reality.

Fear and ambiguity: barriers to transparency

In interviews conducted as part of the study, students frequently voiced their apprehension about how AI declarations might be perceived. One student likened it to “admitting to plagiarism,” reflecting a widespread fear that transparency could backfire. Such fears illustrate a psychological barrier to compliance, where students perceive AI use declarations as risky rather than neutral. This tension is exacerbated by the ambiguity of current policies. Guidelines are often unclear, leaving students uncertain about what to declare and how that declaration will impact their academic standing.

Moreover, the rapid evolution of AI tools has blurred traditional lines of authorship and originality. Before the rise of AI, plagiarism was relatively easy to define. But now, as AI tools generate content that is indistinguishable from human-authored work, what does it mean to be original? The boundaries of academic integrity are being redrawn, and institutions need to adapt quickly to provide clearer guidance. As AI technologies become more integrated into academic practice, we must move beyond rigid policies and have more nuanced conversations about what responsible AI use looks like in different contexts.

Peer influence: AI as the “fourth group member”

A particularly striking finding from the research was the role of peer influence in shaping students’ decisions around AI use and its declaration. In group work contexts, AI tools like ChatGPT have become so normalised that one student referred to ChatGPT as the “fourth man” in group projects. This normalisation makes it difficult for students to declare AI use, as doing so might set them apart from their peers who choose not to disclose. The pressure to conform can be overwhelming, and it drives non-compliance as students opt to avoid the risk of being singled out.

The normalising effect of AI usage amongst peers reflects a larger trend in academia, where technological adoption is outpacing institutional policy. This raises an urgent need for universities to not only set clear guidelines but also engage students and faculty in open discussions about AI’s role in academic work. Creating a community of transparency where AI use is openly acknowledged and discussed is crucial to overcoming the current challenges.

Solutions: clearer policies, consistent enforcement, and trust

What can be done to improve compliance with AI declarations? The research offers several recommendations. First, institutions need to develop clearer and more consistent policies around AI use. The ambiguity that currently surrounds AI guidelines must be addressed. Students need to know exactly what is expected of them, and this starts with clear definitions of what constitutes AI use and how it should be declared.

Second, enforcement of these policies needs to be consistent across all courses. Many students reported that AI declarations were emphasized in some modules but barely mentioned in others. This inconsistency breeds confusion and scepticism about the importance of the policy. Faculty training is crucial to ensuring that all educators communicate the same message to students about AI use and its implications for academic integrity.

Finally, building trust between students and institutions is essential. Students must feel confident that declaring AI use will not result in unfair penalties. One approach to building this trust is to integrate AI use into low-stakes formative assessments before moving on to higher-stakes summative assessments. This gradual introduction allows students to become comfortable with AI policies and to see that transparency will not harm their academic performance. In the long run, fostering an open, supportive dialogue around AI use can help reduce the fear and anxiety currently driving non-compliance.

Moving forward: a call for open dialogue and innovation

As AI continues to revolutionize academic work, institutions must rise to the challenge of updating their policies and fostering a culture of transparency. My research suggests that fear, ambiguity, and peer influence are key barriers to AI declaration, but these challenges can be overcome with clearer policies, consistent enforcement, and a foundation of trust. More than just a compliance issue, this is an opportunity for higher education to rethink academic integrity in the age of AI and to encourage ethical, transparent use of technology in learning.

In the end, the goal should not be to police AI use, but to harness its potential for enhancing academic work while maintaining the core values of honesty and originality. Now is the time to open up the conversation and invite both students and educators to reimagine how we define integrity in the evolving landscape of higher education. Let’s make AI part of the learning process—not something to be hidden.

This post is based on my paper Addressing Student Non-Compliance in AI Use Declarations: Implications for Academic Integrity and Assessment in Higher Education in Assessment & Evaluation in Higher Education (Published online: 22 Oct 2024).

I hope this serves as a starting point for broader discussions about how we can navigate the complexities of AI in academic settings. I invite readers to reflect on these findings and share their thoughts on how institutions can better manage the balance between technological innovation and academic integrity. 

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association.



For meta or for worse…

by Paul Temple

Remember the Metaverse? Oh, come on, you must remember it, just think back a year, eighteen months ago, it was everywhere! Mark Zuckerberg’s new big thing, ads everywhere about how it was going to transform, well, everything! I particularly liked the ad showing a school group virtually visiting the Metaverse forum in ancient Rome, which was apparently going to transform their understanding of the classical world. Well, that’s what $36 bn (yes, that’s billion) buys you. Accenture were big fans back then, displaying all the wide-eyed credulity expected of a global consultancy firm when they reported in January 2023 that “Growing consumer and business interest in the Metaverse [is] expected to fuel [a] trillion dollar opportunity for commerce, Accenture finds”.

It was a little difficult, though, to find actual uses of the Metaverse, as opposed to vague speculations about its future benefits, on the Accenture website. True, they’d used it in 2022 to prepare a presentation for Tuvalu for COP27; and they’d created a virtual “Global Collaboration Village” for the 2023 Davos get-together; and we mustn’t overlook the creation of the ChangiVerse, “where visitors can access a range of fun-filled activities and social experiences” while waiting for delayed flights at Singapore’s Changi airport. So all good. Now tell me that I don’t understand global business finance, but I’d still be surprised if these and comparable projects added up to a trillion dollars.

But of course that was then, in the far-off days of 2023. In 2024, we’re now in the thrilling new world of AI, do keep up! Accenture can now see that “AI is accelerating into a mega-trend, transforming industries, companies and the way we live and work…better positioned to reinvent, compete and achieve new levels of performance.” As I recall, this is pretty much what the Metaverse was promising, but never mind. Possible negative effects of AI? Sorry, how do you mean, “negative”?

It has often been observed that every development in communications and information technology – radio, TV, computers, the internet – has prompted assertions that the university as understood hitherto is finished. Amazon is already offering a dozen or so books published in the last six months on the impact of the various forms of AI on education, which, to go by the summaries provided, mostly seem to present it in terms of the good, the bad, and the ugly. I couldn’t spot an “end of the university as we know it” offering, but one has to be along soon.

You’ve probably played around with ChatGPT – perhaps you were one of its 100 million users logging on within two months of its release – maybe to see how students (or you) might use it. I found it impressive, not least because of its speed, but at the same time rather ordinary: neat B-grade summaries of topics of the kind you might produce after skimming the intro sections of a few standard texts but, honestly, nothing very interesting. Microsoft is starting to include ChatGPT in its Office products; so you might, say, ask it to list the action points from the course committee minutes over the last year, based on the Word files it has access to. In other words, to get it to undertake, quickly and accurately, a task that would be straightforward yet tedious for a person: a nice feature, but hardly transformative. (By the way, have you tried giving ChatGPT some text it produced and asking where it came from? It said to me, in essence, I don’t remember doing this, but I suppose I might have: it had an oddly evasive feel.)

So will AI transform the way teaching and learning works in higher education? A recent paper by Strzelecki (2023) reporting on an empirical study of the use of ChatGPT by Polish university students notes both the potential benefits if it can be carefully integrated into normal teaching methods – creating material tailored to individuals’ learning needs, for example – as well as the obvious ethical problems that will inevitably arise. If students are able to use AI to produce work which they pass off as their own, it seems to me that that is an indictment of under-resourced, poorly-managed higher education which doesn’t allow a proper engagement between teachers and students, rather than a criticism of AI as such. Plagiarism in work that I marked really annoyed me, because the student was taking the course team for fools, assuming our knowledge of the topic was as limited as theirs. (OK, there may have been some very sophisticated plagiarism which I missed, but I doubt it: a sophisticated plagiarist is usually a contradiction in terms.)

The 2024 Consumer Electronics Show (CES), held in Las Vegas in January 2024, was all about AI. Last year it was all about the Metaverse; this year, although the Metaverse got a mention, it seemed to rank in terms of interest well below the AI-enabled cat flap on display – it stops puss coming in if it’s got a mouse in its jaws – which I’m guessing cost rather less than $36bn to develop. I’ve put my name down for one.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education.



Fair use or copyright infringement? What academic researchers need to know about ChatGPT prompts

by Anita Toh

As scholarly research into, and using, generative AI tools like ChatGPT becomes more prevalent, it is crucial for researchers to understand the intersections of copyright, fair use and the use of generative AI in research. While there is much discussion about the copyrightability of generative AI outputs and the legality of generative AI companies’ use of copyrighted material as training data (Lucchi, 2023), there has been relatively little discussion about copyright in relation to user prompts. In this post, I share an interesting discovery about the use of copyrighted material in ChatGPT prompts.

Imagine a situation where a researcher wishes to conduct a content analysis on specific YouTube videos for academic research. Does the researcher need to obtain permission from YouTube or the content creators to use these videos?

As per YouTube’s guidelines, researchers do not require explicit copyright permission if they are using the content for “commentary, criticism, research, teaching, or news reporting,” as these activities fall under the umbrella of fair use (Fair Use on YouTube – YouTube Help, 2023).

What about this scenario? A researcher wants to compare the types of questions posed by investors on the reality television series, Shark Tank, with questions generated by ChatGPT as it roleplays an angel investor. The researcher plans to prompt ChatGPT with a summary of each Shark Tank pitch and ask ChatGPT to roleplay as an angel investor and ask questions. In this case, would the researcher need to obtain permission from Shark Tank or its production company, Sony Pictures Television?

In my exploration, I discovered that it is indeed crucial to obtain permission from Sony Pictures Television. ChatGPT’s terms of service emphasise that users should “refrain from using the service in a manner that infringes upon third-party rights. This explicitly means the input should be devoid of copyrighted content unless sanctioned by the respective author or rights holder” (Fiten & Jacobs, 2023).

I therefore initiated communication with Sony Pictures Television, seeking approval to incorporate Shark Tank videos in my research. However, my request was declined by Sony Pictures Television in California, citing “business and legal reasons”. Undeterred, I approached Sony Pictures Singapore, only to receive a reaffirmation that Sony cannot endorse my proposed use of their copyrighted content “at the present moment”. They emphasised that any use of their copyrighted content must strictly align with the Fair Use doctrine.

This raises the question: why doesn’t the proposed research align with fair use? My initial understanding was that the fair use doctrine allows re-users to use copyrighted material without permission from the rights holders for news reporting, criticism, review, educational and research purposes (Copyright Act 2021 Factsheet, 2022).

In the absence of further responses from Sony Pictures Television, I searched the web for answers.

Two findings emerged which could shed light on Sony’s reservations:

  • ChatGPT’s terms highlight that “user inputs, besides generating corresponding outputs, also serve to augment the service by refining the AI model” (Fiten & Jacobs, 2023; OpenAI Terms of Use, 2023).
  • OpenAI is currently facing legal action from various authors and artists alleging copyright infringement (Milmo, 2023). They contend that OpenAI had utilized their copyrighted content to train ChatGPT without their consent. Adding to this, the New York Times is also contemplating legal action against OpenAI for the same reason (Allyn, 2023).

These revelations point to a potential rationale behind Sony Pictures Television’s reluctance: while use of their copyrighted content for academic research might be considered fair use, introducing this content into ChatGPT could infringe upon the non-commercial stipulations (What Is Fair Use?, 2016) inherent in the fair use doctrine.

In conclusion, the landscape of copyright laws and fair use in relation to generative AI tools is still evolving. While previously researchers could rely on the fair use doctrine for the use of copyrighted material in their research work, the availability of generative AI tools now introduces an additional layer of complexity. This is particularly pertinent when the AI itself might store or use data to refine its own algorithms, which could potentially be considered a violation of the non-commercial use clause in the fair use doctrine. Sony Pictures Television’s reluctance to grant permission for the use of their copyrighted content in association with ChatGPT reflects the caution that content creators and rights holders are exercising in this new frontier. For researchers, this highlights the importance of understanding the terms of use of both the AI tool and the copyrighted material prior to beginning a research project.

Anita Toh is a lecturer at the Centre for English Language Communication (CELC) at the National University of Singapore (NUS). She teaches academic and professional communication skills to undergraduate computing and engineering students.