SRHE Blog

The Society for Research into Higher Education



Teaching students to use AI: from digital competence to a learning outcome

by Concepción González García and Nina Pallarés Cerdà

Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.

Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?

In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).

Moving beyond static measures of digital competence

Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.

This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.

Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?

A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.

There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation—provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.

This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.

Our study: teaching students to work with AI, not around it

We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.

The crucial difference lay in how generative AI was integrated:

  • In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
      ◦ contextualising questions
      ◦ breaking problems into steps
      ◦ iteratively refining prompts
      ◦ checking their own solutions before turning to the AI.
  • Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
  • In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.

Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.

We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
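
For readers curious about what this estimation looks like in practice, below is a minimal sketch in Python using the linearmodels package. The data layout, column names and file name are hypothetical illustrations for exposition, not the authors' actual code or data: each student appears twice (pre and post), and the coefficient on the treatment-by-post interaction is the difference-in-differences estimate.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical long-format data: one row per student per wave.
# Assumed columns: 'student' (id), 'wave' (0 = pre, 1 = post),
# 'treated' (1 = AI-training group) and 'top_level' (1 if the student
# reports the highest DigComp level for a given competence).
df = pd.read_csv("digcomp_panel.csv")
df["post"] = (df["wave"] == 1).astype(int)
df["treated_x_post"] = df["treated"] * df["post"]

# PanelOLS expects an (entity, time) MultiIndex.
panel = df.set_index(["student", "wave"])

# Linear probability model with individual (student) fixed effects.
# The time-invariant 'treated' dummy is absorbed by the entity effects;
# the coefficient on 'treated_x_post' is the difference-in-differences
# estimate, read as a change in the probability of reporting the top level.
mod = PanelOLS.from_formula(
    "top_level ~ post + treated_x_post + EntityEffects", data=panel
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```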

What changed when AI was taught and used in this way?

At the overall sample level, we found statistically significant improvements in three areas:

  • Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
  • Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
  • Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.

In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.

A compensatory effect: AI as a potential leveller, not just an amplifier

When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.
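
One simple way to probe such heterogeneity in code, continuing the earlier hypothetical sketch, is to split the sample at the median of an assumed pre-course score column and re-estimate the same fixed-effects model in each subsample. Again, this is an illustration under stated assumptions ('baseline_score' is an invented column name), not the authors' estimation code.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Same hypothetical dataset as before, with an additional assumed column
# 'baseline_score' holding each student's pre-course competence measure.
df = pd.read_csv("digcomp_panel.csv")
df["post"] = (df["wave"] == 1).astype(int)
df["treated_x_post"] = df["treated"] * df["post"]

# Median split on the pre-course score.
median = df.loc[df["post"] == 0, "baseline_score"].median()
df["low_start"] = (df["baseline_score"] < median).astype(int)

# Re-estimate the same fixed-effects DiD model in each subsample.
for label, sub in df.groupby("low_start"):
    res = PanelOLS.from_formula(
        "top_level ~ post + treated_x_post + EntityEffects",
        data=sub.set_index(["student", "wave"]),
    ).fit(cov_type="clustered", cluster_entity=True)
    group = "below-median starters" if label == 1 else "above-median starters"
    print(group, res.params["treated_x_post"])
```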

This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.

Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.

From ‘allow or ban’ to ‘how do we teach with AI?’

For higher education policy and practice, the implications are twofold.

First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.

Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.

This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:

  • How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
  • How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
  • Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?

Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.

Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.

Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.



Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

by Carmen Cabrera and Ruth Neville

Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs usually bear a close resemblance to human-generated content. While GAI shows great promise to improve the learning experience in various disciplines, its growing uptake also raises concerns about misuse, over-reliance and, more generally, its impact on the learning process. In response, multiple UK HE institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox: students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard their outputs as reliable. High levels of use therefore coexist with low levels of trust.

Using GAI without trusting it

At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet when asked whether GAI could be considered a reliable source of knowledge, many students disagree. This apparent contradiction raises the question: why are students still using tools they do not fully trust? The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate. They are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students' scepticism about the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

Not all perceptions are equal

While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perception of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning, than female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to the different pressures and contexts shaping how students engage with these tools. For international students in particular, GAI can help navigate language barriers or unfamiliar academic conventions; in those circumstances, it may work as a form of academic support rather than a shortcut. Meanwhile, gender differences in attitudes reflect wider patterns observed in research on academic integrity and risk-taking, where female students often report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

Different interpretations of institutional guidance

Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as our study shows, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even about practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially when guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

Where does the ‘problem’ lie?

Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, pointing to the need to promote fairness and reduce differential risk at the institutional level.

These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA), at UCL, where she currently holds an honorary position.

Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.



How educators can use Gen AI to promote inclusion and widen access

by Eleni Meletiadou

Introduction

Higher education faces a pivotal moment as Generative AI becomes increasingly embedded within academic practice. While AI technologies offer the potential to personalize learning, streamline processes, and expand access, they also risk exacerbating existing inequalities if not intentionally aligned with inclusive values. Building on our QAA-funded project outputs, this blog outlines a strategic framework for deploying AI to foster inclusion, equity, and ethical responsibility in higher education.

The digital divide and GenAI

Extensive research shows that students from marginalized backgrounds often face barriers in accessing digital tools, digital literacy training, and peer networks essential for technological confidence. GenAI exacerbates this divide, demanding not only infrastructure (devices, subscriptions, internet access) but also critical AI literacy. According to previous research, students with higher AI competence outperform peers academically, deepening outcome disparities.

However, the challenge is not merely technological; it is social and structural. WP (Widening Participation) students often remain outside informal digital learning communities where GenAI tools are introduced and shared. Without intervention, GenAI risks becoming a “hidden curriculum” advantage for already-privileged groups.

A framework for inclusive GenAI adoption

Our QAA-funded “Framework for Educators” proposes five interrelated principles to guide ethical, inclusive AI integration:

  • Understanding and Awareness: Foundational AI literacy must be prioritized. Awareness campaigns showcasing real-world inclusive uses of AI (eg Otter.ai for students with hearing impairments) and tiered learning tracks from beginner to advanced levels ensure all students can access, understand, and critically engage with GenAI tools.
  • Inclusive Collaboration: GenAI should be used to foster diverse collaboration, not reinforce existing hierarchies. Tools like Miro and DeepL can support multilingual and neurodiverse team interactions, while AI-powered task management (eg Notion AI) ensures equitable participation. Embedding AI-driven teamwork protocols into coursework can normalize inclusive digital collaboration.
  • Skill Development: Higher-order cognitive skills must remain at the heart of AI use. Assignments that require evaluating AI outputs for bias, simulating ethical dilemmas, and creatively applying AI for social good nurture critical thinking, problem-solving, and ethical awareness.
  • Access to Resources: Infrastructure equity is critical. Universities must provide free or subsidized access to key AI tools (eg Grammarly, ReadSpeaker), establish Digital Accessibility Centers, and proactively support economically disadvantaged students.
  • Ethical Responsibility: Critical AI literacy must include an ethical dimension. Courses on AI ethics, student-led policy drafting workshops, and institutional AI Ethics Committees empower students to engage responsibly with AI technologies.

Implementation strategies

To operationalize the framework, a phased implementation plan is recommended:

  • Phase 1: Needs assessment and foundational AI workshops (0–3 months).
  • Phase 2: Pilot inclusive collaboration models and adaptive learning environments (3–9 months).
  • Phase 3: Scale successful practices, establish Ethics and Accessibility Hubs (9–24 months).

Key success metrics include increased AI literacy rates, participation from underrepresented groups, enhanced group project equity, and demonstrated critical thinking skill growth.

Discussion: opportunities and risks

Without inclusive design, GenAI could deepen educational inequalities, as recent research warns. Students without access to GenAI resources or social capital will be disadvantaged both academically and professionally. Furthermore, impersonal AI-driven learning environments may weaken students’ sense of belonging, exacerbating mental health challenges.

Conversely, intentional GenAI integration offers powerful opportunities. AI can personalize support for students with diverse learning needs, extend access to remote or rural learners, and reduce administrative burdens on staff – freeing them to focus on high-impact, relational work such as mentoring.

Conclusion

The future of inclusive higher education depends on whether GenAI is adopted with a clear commitment to equity and social justice. As our QAA project outputs demonstrate, the challenge is not merely technological but ethical and pedagogical. Institutions must move beyond access alone, embedding critical AI literacy, equitable resource distribution, community-building, and ethical responsibility into every stage of AI adoption.

Generative AI will not close the digital divide on its own. It is our pedagogical choices, strategic designs, and values-driven implementations that will determine whether the AI-driven university of the future is one of exclusion – or transformation.

This blog is based on recent outputs from our QAA-funded project, entitled “Using AI to promote education for sustainable development and widen access to digital skills”.

Dr Eleni Meletiadou is an Associate Professor (Teaching) at London Metropolitan University, specialising in Equity, Diversity, and Inclusion (EDI), AI, inclusive digital pedagogy, and multilingual education. She leads the Education for Social Justice and Sustainable Learning and Development (RILEAS) and the Gender Equity, Diversity, and Inclusion (GEDI) Research Groups. Dr Meletiadou’s work, recognised with the British Academy of Management Education Practice Award (2023), focuses on transforming higher education curricula to promote equitable access, sustainability, and wellbeing. With over 15 years of international experience across 35 countries, she has led numerous projects in inclusive assessment and AI-enhanced learning. She is a Principal Fellow of the Higher Education Academy and serves on several editorial boards. Her research interests include organisational change, intercultural communication, gender equity, and Education for Sustainable Development (ESD). She actively contributes to global efforts in making education more inclusive and future-ready. LinkedIn: https://www.linkedin.com/in/dr-eleni-meletiadou/