SRHE Blog

The Society for Research into Higher Education



Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

by Carmen Cabrera and Ruth Neville

Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs are usually characterised by their close resemblance to human-generated content. While GAI shows great promise to improve the learning experience in various disciplines, its growing uptake also raises concerns about misuse, over-reliance and, more generally, its impact on the learning process. In response, multiple UK HE institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox. Students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard its outputs as reliable. High levels of use therefore coexist with low levels of trust.

Using GAI without trusting it

At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet when asked about the reliability of GAI outputs, many students disagree that GAI can be considered a reliable source of knowledge. This apparent contradiction raises the question of why students are still using tools they do not fully trust. The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate; they are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students’ scepticism about the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

Not all perceptions are equal

While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perceptions of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning, compared to female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to different pressures and contexts shaping how students engage with these tools. For international students in particular, GAI can help navigate language barriers or unfamiliar academic conventions; in those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns observed in relation to academic integrity and risk-taking, where female students often report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

Different interpretations of institutional guidance

Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as evidenced by our study, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

Where does the ‘problem’ lie?

Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, pointing to the need to promote fairness and reduce differential risk at the institutional level.

These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focusing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA) at UCL, where she currently holds an honorary position.

Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.



The challenge of AI declaration in HE – what can we do?

by Chahna Gonsalves

The rapid integration of AI tools like ChatGPT into academic life has raised significant concerns about academic integrity. Universities worldwide are grappling with how to manage this new frontier of technology. My recent research at King’s Business School sheds light on an intriguing challenge: student non-compliance with mandatory AI use declarations. Despite clear institutional requirements to declare AI usage in their coursework, up to 74% of students did not comply. This raises key questions about how we think about academic honesty in the age of AI, and what can be done to improve compliance and foster trust.

In November 2023, King’s Business School introduced an AI declaration section as part of the coursework coversheet. Students were required to either declare their AI use or confirm that they hadn’t used any AI tools in their work. This research, which started as an evaluation of the revised coversheet, was conducted a year after the implementation of this policy, providing insights into how students have navigated these requirements over time. The findings reveal important challenges for both educators and students in adapting to this new reality.

Fear and ambiguity: barriers to transparency

In interviews conducted as part of the study, students frequently voiced their apprehension about how AI declarations might be perceived. One student likened it to “admitting to plagiarism,” reflecting a widespread fear that transparency could backfire. Such fears illustrate a psychological barrier to compliance, where students perceive AI use declarations as risky rather than neutral. This tension is exacerbated by the ambiguity of current policies. Guidelines are often unclear, leaving students uncertain about what to declare and how that declaration will impact their academic standing.

Moreover, the rapid evolution of AI tools has blurred traditional lines of authorship and originality. Before the rise of AI, plagiarism was relatively easy to define. But now, as AI tools generate content that is indistinguishable from human-authored work, what does it mean to be original? The boundaries of academic integrity are being redrawn, and institutions need to adapt quickly to provide clearer guidance. As AI technologies become more integrated into academic practice, we must move beyond rigid policies and have more nuanced conversations about what responsible AI use looks like in different contexts.

Peer influence: AI as the “fourth group member”

A particularly striking finding from the research was the role of peer influence in shaping students’ decisions around AI use and its declaration. In group work contexts, AI tools like ChatGPT have become so normalized that one student referred to ChatGPT as the “fourth man” in group projects. This normalization makes it difficult for students to declare AI use, as doing so might set them apart from their peers who choose not to disclose. The pressure to conform can be overwhelming, and it drives non-compliance as students opt to avoid the risk of being singled out.

The normalising effect of AI usage amongst peers reflects a larger trend in academia, where technological adoption is outpacing institutional policy. This raises an urgent need for universities to not only set clear guidelines but also engage students and faculty in open discussions about AI’s role in academic work. Creating a community of transparency where AI use is openly acknowledged and discussed is crucial to overcoming the current challenges.

Solutions: clearer policies, consistent enforcement, and trust

What can be done to improve compliance with AI declarations? The research offers several recommendations. First, institutions need to develop clearer and more consistent policies around AI use. The ambiguity that currently surrounds AI guidelines must be addressed. Students need to know exactly what is expected of them, and this starts with clear definitions of what constitutes AI use and how it should be declared.

Second, enforcement of these policies needs to be consistent across all courses. Many students reported that AI declarations were emphasized in some modules but barely mentioned in others. This inconsistency breeds confusion and scepticism about the importance of the policy. Faculty training is crucial to ensuring that all educators communicate the same message to students about AI use and its implications for academic integrity.

Finally, building trust between students and institutions is essential. Students must feel confident that declaring AI use will not result in unfair penalties. One approach to building this trust is to integrate AI use into low-stakes formative assessments before moving on to higher-stakes summative assessments. This gradual introduction allows students to become comfortable with AI policies and to see that transparency will not harm their academic performance. In the long run, fostering an open, supportive dialogue around AI use can help reduce the fear and anxiety currently driving non-compliance.

Moving forward: a call for open dialogue and innovation

As AI continues to revolutionize academic work, institutions must rise to the challenge of updating their policies and fostering a culture of transparency. My research suggests that fear, ambiguity, and peer influence are key barriers to AI declaration, but these challenges can be overcome with clearer policies, consistent enforcement, and a foundation of trust. More than just a compliance issue, this is an opportunity for higher education to rethink academic integrity in the age of AI and to encourage ethical, transparent use of technology in learning.

In the end, the goal should not be to police AI use, but to harness its potential for enhancing academic work while maintaining the core values of honesty and originality. Now is the time to open up the conversation and invite both students and educators to reimagine how we define integrity in the evolving landscape of higher education. Let’s make AI part of the learning process—not something to be hidden.

This post is based on my paper Addressing Student Non-Compliance in AI Use Declarations: Implications for Academic Integrity and Assessment in Higher Education in Assessment & Evaluation in Higher Education (Published online: 22 Oct 2024).

I hope this serves as a starting point for broader discussions about how we can navigate the complexities of AI in academic settings. I invite readers to reflect on these findings and share their thoughts on how institutions can better manage the balance between technological innovation and academic integrity. 

Chahna Gonsalves is a Senior Lecturer in Marketing (Education) at King’s College London. She is a Senior Fellow of the Higher Education Academy and an Associate Fellow of the Staff and Educational Development Association.



Restraining the uncanny guest: AI ethics and university practice

by David Webster

If GAI is the ‘uncanniest of guests’ in the university, what can we do about any misbehaviour? What do we do with this uninvited guest who behaves badly, won’t leave and seems intent on asserting that it’s their house now anyway? They won’t stay in their room and seem to have their fingers in everything.

Nihilism stands at the door: whence comes this uncanniest of all guests?[1]

Nietzsche saw the emergence of nihilistic worldviews as presaging a century of turmoil and destruction, only after which might more creative responses to the sweeping away of older systems of thought be possible. Generative Artificial Intelligence, uncanny in its own discomforting ways, might be seen as threatening the world of higher education with an upending of the existing conventions and practices that have long been the norm in the sector. Some might welcome this guest, in that there is much wrong in the way universities have created knowledge, taught students, served communities and reproduced social practice. The concern must surely be, though, that GAI is not a creative force, but a repackaging and re-presenting of existing human understanding and belief. We need to think carefully about the way this guest’s behaviour might exert influence in our house.

After decades of seeking to eliminate prejudices and bias, GAI threatens to reinscribe misogyny, racism, homophobia and other unethical discrimination back into the academy. Since the majority of content used to train large language models has been generated by the most prominent and privileged groups in human culture, might we not see a recolonisation, just as universities are starting to push for a more decolonised, inclusive and equitable learning experience?

After centuries of citation tradition and careful attribution of sources, GAI seems intent on shuffling the work of human scholars and presenting it without any clarity as to whence it came. Some news organisations and authors are even threatening to sue OpenAI as they believe their content has been used, without permission, to train the company’s ChatGPT tool.

Furthermore, this seems to be a guest inclined to hallucinate and recount their visions as the earnest truth. The guest has also imbibed substantive propaganda, taken satirical articles as serious factual accounts (hence the glue pizza and rock-eating AI diet), and is targeted by pseudo-science dressed in linguistic frames of respectability. How can we deal with this confident, ambitious and ill-informed guest who keeps offering to save us time and money?

While there isn’t a simple answer (if I had that, I’d be busy monetising it!), an adaptation of this guest metaphor might help. This is to view GAI rather like an unregulated child prodigy: awash with talent but with a lacuna of discernment. It can do so much, but often doesn’t have the capacity to know what it shouldn’t do, what is appropriate or helpful and what is frankly dangerous.

GAI systems are capable of almost magical-seeming feats, but also lack basic understanding of how the world operates and are blind to all kinds of contextual appreciation. Most adults would take days trying to draw what a GAI system can generate in seconds, and would struggle to match its ‘skills’, but even an artistically challenged adult like myself, with barely any artistic talent at all, would know how many fingers, noses or arms were appropriate in a picture – no matter how clumsily I rendered them. The idea of GAI as a child prodigy, in need of moral guidance and requiring tutoring and careful curation of the content they are exposed to, can help us better understand just how limited these systems are. This orientation to GAI also helps us see that what we are witnessing is not a finished solution to various tasks currently undertaken by people, but rather a surplus of potential. The child prodigy is capable of so much, but is still a child and, critically, still requires prodigious supervision.

So as universities look to use student-facing chatbots for support and answering queries, to automate their arcane and lengthy internal processes, to sift through huge datasets and to analyse and repackage existing learning content, we need to be mindful of GAI’s immaturity. It offers phenomenal potential in all these areas and, despite the overdone hype, it will drive a range of huge changes to how we work in higher education, but it is far from ready to work unsupervised. GAI needs moral instruction; it needs to be reshaped as it develops, and we might do this by assuming the mindset of a watchful, if also proud, parent.

Professor Dave Webster is Director of Education, Quality & Enhancement at the University of Liverpool. He has a background in teaching philosophy, and the study of religion, with ongoing interests in Buddhist thought, and the intersections of new religious movements and conspiracy theory.  He is also concerned about pedagogy, GAI and the future of Higher Education.


[1] Friedrich Nietzsche, The Will to Power, trans. Walter Kaufmann and R. J. Hollingdale, ed. with commentary by Walter Kaufmann, Vintage, 1968.



Spotlight on the inclusion process in developing AI guidance and policy

by Lilian Schofield and Joanne J. Zhang

Introduction

When the discourse on ChatGPT started gaining momentum in higher education in 2022, the ‘emotions’ behind the response of educators, such as feelings of exclusion, isolation and fear of technological change, were not initially at the forefront. Even educators’ apprehension about the introduction and use of AI in education, which is an emotional response, was not given much attention. This was highlighted by Ng et al (2023), who noted that many AI tools are new to educators, who may feel overwhelmed by them due to a lack of understanding or familiarity with the technology. The big issues then were debates about banning the use of ChatGPT, ethical and privacy concerns, issues of inclusion and concerns about academic misconduct (Cotton et al, 2023; Malinka et al, 2023; Rasul et al, 2023; Zhou & Schofield, 2023).

As higher education institutions started developing AI guidance in education, the focus again seemed to be geared towards students’ ethical and responsible use of AI, with little attention to guidance for educators. Here we reflect on the process of developing AI guidance at the School of Business and Management, Queen Mary University of London, through the lens of inclusion and educators’ ‘voice’. We view ‘inclusion’ as the active participation and contribution of educators in the process of co-creating the AI policy, alongside multiple voices from students and staff.

Co-creating inclusive AI guidance

Triggered by the lack of clear AI guidance for students and educators, the School of Business and Management at Queen Mary University of London (QMUL) embarked on developing AI guidance for students and staff from October 2023 to March 2024. Led by Deputy Directors of Education Dr Joanne J. Zhang and Dr Darryn Mitussis, the guidance was co-created with staff members through different modes, such as best practice sharing sessions, a staff away day, student-staff consultation and staff consultation. These experiences helped shape the inclusive, bottom-up approach to developing the AI guidance. The best practice sharing sessions allowed educators to contribute their expertise and provided a platform to voice their fears and apprehensions about adopting and using AI for teaching. The sessions acted as a space to share concerns, offering educators a sense of relief and solidarity. Staff members shared that knowing that others had similar apprehensions was reassuring and reduced the feeling of isolation. This collective space helped promote a more collaborative and supportive environment for educators to comfortably explore AI applications in their teaching.

Furthermore, the iterative process of developing this guidance has engaged different ‘voices’ within and outside the school. For instance, we discussed with the QMUL central team their approach and resources for facilitating AI usage for students and staff. We discussed Russell Group principles on AI usage and explored different universities’ AI policies and practices. The draft guideline was discussed and endorsed at the Teaching Away Day and education committee meetings. As a result, we suggested three principles for developing effective practices in teaching and learning:

  1. Explore and learn.
  2. Discuss and inform.
  3. Stress test and validate.

Key learning points from our process include providing an avenue for educators to use their voice, whether in support of AI or not, and ensuring educators are active participants in the AI guidance-making process. This is also reflected in the AI guidance, which supports all staff in developing effective practices at their own pace.

Consultation with educators and students was an important avenue for inclusion in the process of developing the AI policy. Open communication and dialogue facilitated staff members’ opportunities to contribute to and shape the AI policy. This consultative approach enhanced the inclusion of educators and strengthened the AI policy.

Practical suggestions

Voice is a powerful tool (Arnot & Reay, 2007). However, educators may feel silenced and isolated without an avenue for their voice. This ‘silence’ and isolation takes us back to the initial challenges experienced at the start of the AI discourse, such as apprehension, fear and isolation. The need to address these issues is pertinent, especially now that employers, students and higher education institutions are pushing for AI to be embedded in the curriculum and for graduates to be AI-skilled (Southworth et al, 2023). A co-creative approach to developing AI policies is crucial to enable critique and learning, promoting a sense of ownership and commitment to the successful integration of AI in education.

The process of developing an AI policy can itself address the barriers to educators adopting AI in their practice and act as an enabler of inclusion. It ensures educators’ voices are heard, addresses their fears, and finds effective ways to develop a co-created AI policy. This inclusive, participatory and co-creative approach helped mitigate fears associated with AI by creating a supportive environment where apprehensions could be openly discussed and addressed.

The co-creative approach of developing the policy with educators’ voices plays an important role in AI adoption. Creating avenues, such as the best practice sharing sessions where educators can discuss their experiences with AI, both positive and negative, ensures that voices are heard and concerns are acknowledged and addressed. This collective sharing builds a sense of community and support, helping to alleviate individual anxieties.

Steps that could be taken towards an inclusive approach to developing AI guidance and policy are as follows:

  1. Set up a core group – Director of Education, chair of the exam board and educators from different subject areas. Though the development of AI guidance can take a top-down approach, it is important that the group’s make-up is inclusive of educators’ voices and concerns.
  2. Design multiple avenues for educators’ ‘voices’ to be heard (best practice sharing sessions within and across faculties, teaching away days).
  3. Keep communication channels clear and open for all to contribute.
  4. Engage all staff and students – hearing from students directly is powerful for staff, too; we learned a lot from students and included their voices in the guidance.
  5. Integrate the guidance and gain endorsement from the school management team. Promoting educators’ involvement in creating AI guidance legitimises their contributions and ensures that their insights are taken seriously. Additionally, such endorsement ensures that the AI guidance is aligned with the needs and ethical considerations of those directly engaged with and affected by it.

Conclusion

As many higher education institutions move towards embedding AI into the curriculum and become clearer in their AI guidance, it is crucial to acknowledge and address the emotional dimensions educators face in adapting to AI technologies in education. Educators’ voices in shaping AI policy and guidance are important in ensuring that they understand the guidance, embrace it and are upskilled, so that the embedding and implementation of AI in teaching and learning can succeed.

Dr. Lilian Schofield is a senior lecturer in Nonprofit Management and the Deputy Director of Student Experience at the School of Business and Management, Queen Mary University of London. Her interests include critical management pedagogy, social change, and sustainability. Lilian is passionate about incorporating and exploring voice, silence, and inclusion into her practice and research. She is a Queen Mary Academy Fellow and has taken up the Learning and Teaching Enhancement Fellowship, where she works on student skills enhancement practice initiatives at Queen Mary University of London.

Dr Joanne J. Zhang is Reader in Entrepreneurship and Deputy Director of Education at the School of Business and Management, Queen Mary University of London, and a visiting fellow at the University of Cambridge. She received the ‘Entrepreneurship Educator of the Year’ Triple E European Award in 2022. Joanne is also the founding director of the Entrepreneurship Hub and the QM Social Venture Fund – the first student-led social venture fund investing in ‘startups for good’ in the UK. Joanne’s research and teaching interests are entrepreneurship, strategy and entrepreneurship education. She has led and engaged in large-scale research and scholarship projects totalling over £7m. Email: Joanne.zhang@qmul.ac.uk



What do artificial intelligence systems mean for academic practice?

by Mary Davis

I attended and made a presentation at the SRHE Roundtable event ‘What do artificial intelligence systems mean for academic practice?’ on 19 July 2023. The roundtable brought together a wide range of perspectives on artificial intelligence: philosophical questions, problematic results, ethical considerations, the changing face of assessment and practical engagement for learning and teaching. The speakers represented a range of UK HEI contexts, as well as Australia and Spain, and a variety of professional roles including academic integrity leads, lecturers of different disciplines and emeritus professors.

The day began with Ron Barnett’s fierce defence of the value of authorship and his concerns about what it means to be a writer in a chatbot world. Ron argued that use of AI tools can lead to an erosion of trust; the essential trust relationship between writer and reader in HE and in wider social contexts such as law may disintegrate and, with it, society. Ron reminded us of the pain and struggle of writing and of creating the authorial voice that is necessary for human writing. He urged us to think about frameworks of learning such as ‘deep learning’ (Ramsden), agency and internal story-making (Archer) and his own ‘Will to Learn’, all of which could be lost. His arguments challenged us to reflect on the far-reaching social consequences of AI use and opened the day of debate very powerfully.

I then presented the advice I have been giving to students at my institution, drawing on my analysis of student declarations of AI use, which I had categorised using a traffic light system: appropriate use (eg checking and fixing a text before submission); at-risk use (eg paraphrasing and summarising); and inappropriate use (eg using assignment briefs as prompts and submitting the output as one’s own work). I received helpful feedback from the audience that the traffic lights provided useful navigation for students. Coincidentally, the next speaker, Angela Brew, also used a traffic light system to guide students with AI. She argued for the need to help students develop a scholarly mindset, and for staff to stop teaching as if in the 18th century, with universities as foundations of knowledge. Instead, she proposed that everyone at university should be a discoverer, a learner and a producer of knowledge, as a response to AI use.

Stergios Aidinlis provided an intriguing insight into the practical use of AI as part of a law degree. In his view, generative AI can be an opportunity to make assessment fit for purpose. He presented a three-stage model of learning with AI: stage 1 uses AI to produce a project pre-mortem to tackle a legal problem as pre-class preparation; stage 2 uses AI as a mentor to help students solve a legal problem in class; and stage 3 uses AI to evaluate the technology after class. Stergios recommended Mollick and Mollick (2023) for ideas to help students learn to use AI. The presentation by Stergios stood out in terms of practical ideas and made me think about the availability of suitable AI tools for all students to be able to do tasks like this.

The next session by Richard Davies, one of the roundtable convenors, took a philosophical direction in considering what a ‘student’s own work’ actually means, and how we assess a student’s contribution. David Boud returned the theme to assessment and argued that three elements are always necessary: assuring learning outcomes have been met (summative assessment), enabling students to use information to aid learning (formative assessment) and building students’ capacity to evaluate their learning (sustainable assessment). He argued for a major re-design of assessment, that still incorporates these elements but avoids tasks that are no longer viable.

Liz Newton presented guidance for students which emphasised positive ways to use AI, such as using it for planning or teaching, which concurred with my session. Maria Burke argued for ethical approaches to the use of AI that incorporate transparency, accountability, fairness and regulation, and promote critical thinking within an AI context. Finally, Tania Alonso presented her ChatGPTeaching project with seven student rules for the use of ChatGPT, such as proposing use only for areas of the student’s own knowledge.

The roundtable discussion was lively and our varied perspectives and experiences added a lot to the debate; I believe we all came away with new insights and ideas. I especially appreciated the opportunity to look at AI from practical and philosophical viewpoints. I am looking forward to the ongoing sessions and forum discussions. Thanks very much to SRHE for organising this event.

Dr Mary Davis is Academic Integrity Lead and Principal Lecturer (Education and Student Experience) at Oxford Brookes University. She has been a researcher of academic integrity since 2005 and has carried out extensive research on plagiarism, use of text-matching tools, the development of source use, proofreading and educational responses to academic conduct issues, and has focused her recent research on inclusion in academic integrity. She is on the Board of Directors of the International Center for Academic Integrity and co-chair of the International Day of Action for Academic Integrity.