SRHE Blog

The Society for Research into Higher Education



Teaching students to use AI: from digital competence to a learning outcome

by Concepción González García and Nina Pallarés Cerdà

Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.

Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?

In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).

Moving beyond static measures of digital competence

Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.

This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.

Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?

A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.

There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation—provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.

This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.

Our study: teaching students to work with AI, not around it

We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.

The crucial difference lay in how generative AI was integrated:

  • In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
      • contextualising questions
      • breaking problems into steps
      • iteratively refining prompts
      • checking their own solutions before turning to the AI.
  • Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
  • In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.

Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.

We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
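
For readers who want the estimating equation, a minimal sketch of this specification is the following (the notation is illustrative rather than the exact variable names used in the article):

\[
y_{it} = \alpha_i + \lambda\,\mathrm{Post}_t + \delta\,(\mathrm{Treat}_i \times \mathrm{Post}_t) + \varepsilon_{it}
\]

where \(y_{it}\) indicates whether student \(i\) reports the highest level of a given competence at time \(t\), \(\alpha_i\) is an individual fixed effect (which absorbs the time-invariant treatment indicator), \(\mathrm{Post}_t\) marks the post-course wave, and \(\delta\) is the difference-in-differences estimate of interest.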

What changed when AI was taught and used in this way?

At the overall sample level, we found statistically significant improvements in three areas:

  • Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
  • Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
  • Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.

In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.

A compensatory effect: AI as a potential leveller, not just an amplifier

When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.

This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.

Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.

From ‘allow or ban’ to ‘how do we teach with AI?’

For higher education policy and practice, the implications are twofold.

First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.

Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.

This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:

  • How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
  • How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
  • Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?

Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.

Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.

Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.



How educators can use Gen AI to promote inclusion and widen access

by Eleni Meletiadou

Introduction

Higher education faces a pivotal moment as Generative AI becomes increasingly embedded within academic practice. While AI technologies offer the potential to personalize learning, streamline processes, and expand access, they also risk exacerbating existing inequalities if not intentionally aligned with inclusive values. Building on our QAA-funded project outputs, this blog outlines a strategic framework for deploying AI to foster inclusion, equity, and ethical responsibility in higher education.

The digital divide and GenAI

Extensive research shows that students from marginalized backgrounds often face barriers in accessing digital tools, digital literacy training, and peer networks essential for technological confidence. GenAI exacerbates this divide, demanding not only infrastructure (devices, subscriptions, internet access) but also critical AI literacy. According to previous research, students with higher AI competence outperform peers academically, deepening outcome disparities.

However, the challenge is not merely technological; it is social and structural. WP (Widening Participation) students often remain outside informal digital learning communities where GenAI tools are introduced and shared. Without intervention, GenAI risks becoming a “hidden curriculum” advantage for already-privileged groups.

A framework for inclusive GenAI adoption

Our QAA-funded “Framework for Educators” proposes five interrelated principles to guide ethical, inclusive AI integration:

  • Understanding and Awareness: Foundational AI literacy must be prioritized. Awareness campaigns showcasing real-world inclusive uses of AI (eg Otter.ai for students with hearing impairments) and tiered learning tracks from beginner to advanced levels ensure all students can access, understand, and critically engage with GenAI tools.
  • Inclusive Collaboration: GenAI should be used to foster diverse collaboration, not reinforce existing hierarchies. Tools like Miro and DeepL can support multilingual and neurodiverse team interactions, while AI-powered task management (eg Notion AI) ensures equitable participation. Embedding AI-driven teamwork protocols into coursework can normalize inclusive digital collaboration.
  • Skill Development: Higher-order cognitive skills must remain at the heart of AI use. Assignments that require evaluating AI outputs for bias, simulating ethical dilemmas, and creatively applying AI for social good nurture critical thinking, problem-solving, and ethical awareness.
  • Access to Resources: Infrastructure equity is critical. Universities must provide free or subsidized access to key AI tools (eg Grammarly, ReadSpeaker), establish Digital Accessibility Centers, and proactively support economically disadvantaged students.
  • Ethical Responsibility: Critical AI literacy must include an ethical dimension. Courses on AI ethics, student-led policy drafting workshops, and institutional AI Ethics Committees empower students to engage responsibly with AI technologies.

Implementation strategies

To operationalize the framework, a phased implementation plan is recommended:

  • Phase 1: Needs assessment and foundational AI workshops (0–3 months).
  • Phase 2: Pilot inclusive collaboration models and adaptive learning environments (3–9 months).
  • Phase 3: Scale successful practices, establish Ethics and Accessibility Hubs (9–24 months).

Key success metrics include increased AI literacy rates, participation from underrepresented groups, enhanced group project equity, and demonstrated critical thinking skill growth.

Discussion: opportunities and risks

Without inclusive design, GenAI could deepen educational inequalities, as recent research warns. Students without access to GenAI resources or social capital will be disadvantaged both academically and professionally. Furthermore, impersonal AI-driven learning environments may weaken students’ sense of belonging, exacerbating mental health challenges.

Conversely, intentional GenAI integration offers powerful opportunities. AI can personalize support for students with diverse learning needs, extend access to remote or rural learners, and reduce administrative burdens on staff – freeing them to focus on high-impact, relational work such as mentoring.

Conclusion

The future of inclusive higher education depends on whether GenAI is adopted with a clear commitment to equity and social justice. As our QAA project outputs demonstrate, the challenge is not merely technological but ethical and pedagogical. Institutions must move beyond access alone, embedding critical AI literacy, equitable resource distribution, community-building, and ethical responsibility into every stage of AI adoption.

Generative AI will not close the digital divide on its own. It is our pedagogical choices, strategic designs, and values-driven implementations that will determine whether the AI-driven university of the future is one of exclusion – or transformation.

This blog is based on the recent outputs from our QAA-funded project entitled “Using AI to promote education for sustainable development and widen access to digital skills”.

Dr Eleni Meletiadou is an Associate Professor (Teaching) at London Metropolitan University, specialising in Equity, Diversity, and Inclusion (EDI), AI, inclusive digital pedagogy, and multilingual education. She leads the Education for Social Justice and Sustainable Learning and Development (RILEAS) and the Gender Equity, Diversity, and Inclusion (GEDI) Research Groups. Dr Meletiadou’s work, recognised with the British Academy of Management Education Practice Award (2023), focuses on transforming higher education curricula to promote equitable access, sustainability, and wellbeing. With over 15 years of international experience across 35 countries, she has led numerous projects in inclusive assessment and AI-enhanced learning. She is a Principal Fellow of the Higher Education Academy and serves on several editorial boards. Her research interests include organisational change, intercultural communication, gender equity, and Education for Sustainable Development (ESD). She actively contributes to global efforts in making education more inclusive and future-ready. LinkedIn: https://www.linkedin.com/in/dr-eleni-meletiadou/



Will GenAI narrow or widen the digital divide in higher education?

by Lei Fang and Xue Zhou

This blog is based on our recent publication: Zhou, X, Fang, L, & Rajaram, K (2025) ‘Exploring the digital divide among students of diverse demographic backgrounds: a survey of UK undergraduates’ Journal of Applied Learning and Teaching, 8(1).

Introduction – the widening digital divide

Our recent study (Zhou et al, 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide across all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.

Our study (Zhou et al, 2025) also showed that, while universities offer a wide range of support to WP students, such as laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding, these resources often go unused. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.

Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?

GenAI may widen the digital divide — without proper strategies

While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.

Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.

In addition, the adoption of GenAI currently occurs primarily through informal channels (peers, online communities, or individual experimentation) rather than through structured teaching (Shailendra et al, 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al, 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.

Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.

GenAI can narrow the divide — with the right strategies

Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.

To achieve this, we propose five key strategies:

  • Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom, including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
  • Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles, recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded, such as those in rural or under-resourced areas, to access learning on their own terms.
  • Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
  • Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
  • Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace, the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on the high-impact, relational work, such as mentoring, guidance, and personalised support, that WP students often benefit from most.

Conclusion

Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place; it is now.

Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. lei.fang@qmul.ac.uk

Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. xue.zhou@le.ac.uk



Restraining the uncanny guest: AI ethics and university practice

by David Webster

If GAI is the ‘uncanniest of guests’ in the University, what can we do about any misbehaviour? What do we do with this uninvited guest who behaves badly, won’t leave and seems intent on asserting that it’s their house now anyway? They won’t stay in their room and seem to have their fingers in everything.

Nihilism stands at the door: whence comes this uncanniest of all guests?[1]

Nietzsche saw the emergence of nihilistic worldviews as presaging a century of turmoil and destruction, only after which might more creative responses to the sweeping away of older systems of thought be possible. Generative Artificial Intelligence, uncanny in its own discomforting ways, might be said to threaten the world of higher education with an upending of the existing conventions and practices that have long been the norm in the sector. Some might welcome this guest, in that there is much wrong in the way universities have created knowledge, taught students, served communities and reproduced social practice. The concern must surely be, though, that GAI is not a creative force, but a repackaging and re-presenting of existing human understanding and belief. We need to think carefully about the way this guest’s behaviour might exert influence in our house.

After decades of seeking to eliminate prejudices and bias, GAI threatens to reinscribe misogyny, racism, homophobia and other unethical discrimination back into the academy. Since the majority of content used to train large language models has been generated by the most prominent and privileged groups in human culture, might we not see a recolonisation, just as universities are starting to push for a more decolonised, inclusive and equitable learning experience?

After centuries of citation tradition and careful attribution of sources, GAI seems intent on shuffling the work of human scholars and presenting it without any clarity as to whence it came. Some news organisations and authors are even threatening to sue OpenAI as they believe their content has been used, without permission, to train the company’s ChatGPT tool.

Furthermore, this seems to be a guest inclined to hallucinate and recount their visions as the earnest truth. The guest has also imbibed substantive propaganda, taken satirical articles as serious factual accounts (hence the glue pizza and rock AI diet), and is targeted by pseudo-science dressed in linguistic frames of respectability. How can we deal with this confident, ambitious, and ill-informed guest who keeps offering to save us time and money?

While there isn’t a simple answer (if I had that, I’d be busy monetising it!), an adaptation of this guest metaphor might help. This is to view GAI rather like an unregulated child prodigy: awash with talent but with a lacuna of discernment. It can do so much, but often doesn’t have the capacity to know what it shouldn’t do, what is appropriate or helpful and what is frankly dangerous.

GAI systems are capable of almost magical-seeming feats, but also lack basic understanding of how the world operates and are blind to all kinds of contextual appreciation. Most adults would take days trying to draw what a GAI system can generate in seconds, and would struggle to match its ‘skills’, but even an artistically challenged adult (like myself, with barely any artistic talent at all) would know how many fingers, noses or arms were appropriate in a picture – no matter how clumsily I rendered them. The idea of GAI as a child prodigy, in need of moral guidance and requiring tutoring and careful curation of the content they are exposed to, can help us better understand just how limited these systems are. This orientation to GAI also helps us see that what we are witnessing is not a finished solution to various tasks currently undertaken by people, but rather a surplus of potential. The child prodigy is capable of so much, but is still a child and, critically, still requires prodigious supervision.

So as universities look to use student-facing chatbots for support and answering queries, to automate their arcane and lengthy internal processes, to sift through huge datasets and to analyse and repackage existing learning content, we need to be mindful of GAI’s immaturity. It offers phenomenal potential in all these areas and, despite the overdone hype, it will drive a range of huge changes to how we work in higher education, but it is far from ready to work unsupervised. GAI needs moral instruction; it needs to be reshaped as it develops, and we might do this by assuming the mindset of a watchful, if also proud, parent.

Professor Dave Webster is Director of Education, Quality & Enhancement at the University of Liverpool. He has a background in teaching philosophy, and the study of religion, with ongoing interests in Buddhist thought, and the intersections of new religious movements and conspiracy theory. He is also concerned about pedagogy, GAI and the future of Higher Education.


[1] The Will to Power, trans. Walter Kaufmann and R. J. Hollingdale, ed., with commentary, Walter Kaufmann, Vintage, 1968.               



Fair use or copyright infringement? What academic researchers need to know about ChatGPT prompts

by Anita Toh

As scholarly research into and using generative AI tools like ChatGPT becomes more prevalent, it is crucial for researchers to understand the intersections of copyright, fair use, and use of generative AI in research. While there is much discussion about the copyrightability of generative AI outputs and the legality of generative AI companies’ use of copyrighted material as training data (Lucchi, 2023), there has been relatively little discussion about copyright in relation to user prompts. In this post, I share an interesting discovery about the use of copyrighted material in ChatGPT prompts.

Imagine a situation where a researcher wishes to conduct a content analysis on specific YouTube videos for academic research. Does the researcher need to obtain permission from YouTube or the content creators to use these videos?

As per YouTube’s guidelines, researchers do not require explicit copyright permission if they are using the content for “commentary, criticism, research, teaching, or news reporting,” as these activities fall under the umbrella of fair use (Fair Use on YouTube – YouTube Help, 2023).

What about this scenario? A researcher wants to compare the types of questions posed by investors on the reality television series, Shark Tank, with questions generated by ChatGPT as it roleplays an angel investor. The researcher plans to prompt ChatGPT with a summary of each Shark Tank pitch and ask ChatGPT to roleplay as an angel investor and ask questions. In this case, would the researcher need to obtain permission from Shark Tank or its production company, Sony Pictures Television?

In my exploration, I discovered that it is indeed crucial to obtain permission from Sony Pictures Television. ChatGPT’s terms of service emphasise that users should “refrain from using the service in a manner that infringes upon third-party rights. This explicitly means the input should be devoid of copyrighted content unless sanctioned by the respective author or rights holder” (Fiten & Jacobs, 2023).

I therefore initiated communication with Sony Pictures Television, seeking approval to incorporate Shark Tank videos in my research. However, my request was declined by Sony Pictures Television in California, citing “business and legal reasons”. Undeterred, I approached Sony Pictures Singapore, only to receive a reaffirmation that Sony cannot endorse my proposed use of their copyrighted content “at the present moment”. They emphasised that any use of their copyrighted content must strictly align with the Fair Use doctrine.

This raises the question: Why doesn’t the proposed research align with fair use? My initial understanding is that the fair use doctrine allows re-users to use copyrighted material without permission from the rights holders for news reporting, criticism, review, educational and research purposes (Copyright Act 2021 Factsheet, 2022).

In the absence of further responses from Sony Pictures Television, I searched the web for answers.

Two findings emerged which could shed light on Sony’s reservations:

  • ChatGPT’s terms highlight that “user inputs, besides generating corresponding outputs, also serve to augment the service by refining the AI model” (Fiten & Jacobs, 2023; OpenAI Terms of Use, 2023).
  • OpenAI is currently facing legal action from various authors and artists alleging copyright infringement (Milmo, 2023). They contend that OpenAI had utilized their copyrighted content to train ChatGPT without their consent. Adding to this, the New York Times is also contemplating legal action against OpenAI for the same reason (Allyn, 2023).

These revelations point to a potential rationale behind Sony Pictures Television’s reluctance: while use of their copyrighted content for academic research might be considered fair use, introducing this content into ChatGPT could infringe upon the non-commercial stipulations (What Is Fair Use?, 2016) inherent in the fair use doctrine.

In conclusion, the landscape of copyright laws and fair use in relation to generative AI tools is still evolving. While previously researchers could rely on the fair use doctrine for the use of copyrighted material in their research work, the availability of generative AI tools now introduces an additional layer of complexity. This is particularly pertinent when the AI itself might store or use data to refine its own algorithms, which could potentially be considered a violation of the non-commercial use clause in the fair use doctrine. Sony Pictures Television’s reluctance to grant permission for the use of their copyrighted content in association with ChatGPT reflects the caution that content creators and rights holders are exercising in this new frontier. For researchers, this highlights the importance of understanding the terms of use of both the AI tool and the copyrighted material prior to beginning a research project.

Anita Toh is a lecturer at the Centre for English Language Communication (CELC) at the National University of Singapore (NUS). She teaches academic and professional communication skills to undergraduate computing and engineering students.