SRHE Blog

The Society for Research into Higher Education



Teaching students to use AI: from digital competence to a learning outcome

by Concepción González García and Nina Pallarés Cerdà

Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.

Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?

In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).

Moving beyond static measures of digital competence

Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.

This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.

Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?

A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.

There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation—provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.

This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.

Our study: teaching students to work with AI, not around it

We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.

The crucial difference lay in how generative AI was integrated:

  • In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
      ◦ contextualising questions
      ◦ breaking problems into steps
      ◦ iteratively refining prompts
      ◦ checking their own solutions before turning to the AI.
  • Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
  • In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.

Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.

We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
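Schematically, the estimating equation takes the familiar two-way fixed-effects form (the notation here is illustrative rather than reproduced from the paper):

```latex
\Pr(y_{it} = \text{top level}) = \alpha_i + \lambda_t + \delta\,(\text{Treat}_i \times \text{Post}_t) + \varepsilon_{it}
```

where $y_{it}$ indicates whether student $i$ reports the highest level of a given competence in period $t$, $\alpha_i$ are individual fixed effects, $\lambda_t$ captures the common pre/post change, and $\delta$ is the difference-in-differences estimate: the change over time for the treatment group relative to the control group.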

What changed when AI was taught and used in this way?

At the overall sample level, we found statistically significant improvements in three areas:

  • Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
  • Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
  • Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.

In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.

A compensatory effect: AI as a potential leveller, not just an amplifier

When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.

This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.

Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.

From ‘allow or ban’ to ‘how do we teach with AI?’

For higher education policy and practice, the implications are twofold.

First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.

Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.

This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:

  • How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
  • How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
  • Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?

Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.

Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.

Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.



Will GenAI narrow or widen the digital divide in higher education?

by Lei Fang and Xue Zhou

This blog is based on our recent publication: Zhou, X, Fang, L, & Rajaram, K (2025) ‘Exploring the digital divide among students of diverse demographic backgrounds: a survey of UK undergraduates’ Journal of Applied Learning and Teaching, 8(1).

Introduction – the widening digital divide

Our recent study (Zhou et al, 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide across all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.

Our study (Zhou et al, 2025) also found that while universities offer a wide range of support – laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding for WP students – WP students often do not make use of these resources. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.

Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?

GenAI may widen the digital divide — without proper strategies

While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.

Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.

In addition, the adoption of GenAI currently occurs primarily through informal channels – peers, online communities, or individual experimentation – rather than structured teaching (Shailendra et al, 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al, 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.

Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.

GenAI can narrow the divide — with the right strategies

Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.

To achieve this, we propose five key strategies:

  • Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom, including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
  • Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles, recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded, such as those in rural or under-resourced areas, to access learning on their own terms.
  • Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
  • Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
  • Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace, the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on high-impact relational work – mentoring, guidance, and personalised support – that WP students often benefit from most.

Conclusion

Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place – it is now.

Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. lei.fang@qmul.ac.uk

Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. xue.zhou@le.ac.uk



Digital critical pedagogies: five emergent themes

by Faiza Hyder and Mona Sakr

In this blog we explore the nature of Digital Critical Pedagogies (DCP) – an emergent field of investigation that considers what happens to critical pedagogies in the context of digital learning environments. We present findings from the first strand of a research project that looks at ‘on the ground’ realities of DCP at Middlesex University. We report five themes that emerged from the first project strand, a collaborative literature review: digitally mediated dialogues; creating ‘safe space’ online; interweaving with public pedagogies; digital inclusion; and pedagogical risk-taking. These themes represent useful and practical starting points for advancing DCP practices in higher education.

What are Digital Critical Pedagogies?

Critical pedagogies are a commitment to learning and teaching that centres on meaningful dialogue with and between learners. In Teaching to Transgress, bell hooks (1994) presents dialogue as the key way to connect learning in the classroom with ‘real life’ experiences in ways that prompt further inquiry and insight, and are a step towards self-actualisation. What happens to these connections when we attempt to cultivate them in digital learning environments, through forums or Zoom meetings or social media exchanges? This is the question underpinning the emerging field of Digital Critical Pedagogies (DCP): explorations in developing critical pedagogies in the context of digital encounters.

Our Research at Middlesex University

Our research explores the aspirations and realities of DCP at Middlesex University, which, like the rest of the higher education sector, has made seismic shifts over the course of the pandemic towards digitally mediated learning. We harbour a strong commitment to critical pedagogies and have wondered collectively about the nature of these critical pedagogies in the context of digital learning that can look and feel markedly different.

Our project was designed to develop a better understanding of the platforms and practices that facilitate effective digital critical pedagogies. It is about enabling those working ‘on the ground’ to collaborate in solving problems in response to challenges we face as a university community. It has been supported through funding from the University’s Centre of Academic Practice Enhancement. There are three stages to the project: a collaborative literature review; an interview study; and a design workshop to develop recommendations that we can take forward as an institution to realise our commitment to DCP.

The first of these stages, the literature review, was co-produced with an advisory group of 12 Middlesex University academics from across the university’s disciplines. Five themes emerged from the literature review, best conceptualised as areas of special consideration when exploring and designing DCP. They represent elements of practice to reflect on carefully and develop further as part of DCP practice. They are:

  1. Digitally mediated dialogues
  2. Creating ‘safe space’ online
  3. Interweaving with public pedagogies
  4. Digital inclusion
  5. Pedagogical risk-taking

Digitally mediated dialogues

While open dialogues have a special role to play in all critical pedagogies, dialogues are not a neutral social justice mechanism leaving everyone in them feeling empowered. Dialogues ride on power differentials and inequalities whether they take place in physical or digital spaces (Bali, 2014). In digital spaces, we need to be aware of the way that even the most basic of parameters (such as internet connectivity) shape who can have a voice within dialogue, and we cannot underestimate the importance of this as a consideration in digital critical pedagogies.

Creating ‘safe space’ online

Managing a ‘safe space’ for dialogue online is complex. Part of how we think about the safe space in digital critical pedagogies relates back to the previous theme of dialogue, in that how presence is mediated will impact on the capacity to create a ‘safe space’ for dialogue. Boler (2015) warned that in too much online learning and teaching we end up with ‘drive by difference’ rather than deep and meaningful engagements with diversity. When we divorce ourselves from our physical presence – from our facial expressions, body orientation, gesture and so on – the ways in which we can collaboratively construct a safe space for dialogue change. A teacher cannot ‘read the room’ in the way that they might do when they are in a physical classroom. They cannot see who feels uncomfortable or they might not appreciate the vulnerability that a learner is showing by sharing a particular story or perspective. Boler (2015) suggests that embodied multimodal communication is a key component of enabling spaces for genuine and open dialogue, so the question becomes: is it possible to do the necessary communicative work in an online space?

Interweaving with public pedagogies

Public pedagogies are processes of learning that take place in what Hill (2018) calls ‘digital counterpublics’. These are online spaces, often associated with grassroots movements (such as Black Lives Matter) or with marginalised groups finding their voice. Hill (2018), Ringrose (2018) and Castillo-Montoya et al (2019) all focus on navigating public pedagogies as part of a digital critical pedagogical approach. They investigate what happens when we open up learning and teaching spaces to engage with wider social movements across the world. In this case, the public pedagogies come first and the classroom pedagogies follow.

Digital inclusion

The literature suggests the need for an expanded vision of digital inclusion, and that fostering this expanded digital inclusion is key to digital critical pedagogies. Prata-Linhares et al (2020) document access to and use of digital technologies as part of education during the pandemic and the social distancing measures put in place. Seale and Dutton (2012) conceptualise digital inclusion not just as access and use but also in terms of participation, equity and empowerment. This means that it is not just a question of whether you have access to the physical resources, but also of whether you are empowered to engage digitally as part of your own personal identity and self-expression. Too often, digital inclusion initiatives have to justify their existence by showing that they get individuals online to engage in education or employment, rather than by the authentic empowerment of an individual or group.

Pedagogical Risk Taking

The review highlights the need for pedagogical risk-taking as part of the project of articulating and experimenting with digital critical pedagogies. A commitment to risk-taking is already part of the critical pedagogy described by hooks. Pedersen et al (2018) describe a shift to hybrid (rather than digital or online) pedagogies, because the term ‘hybrid’ emphasises the extent to which the pedagogies are always on the cusp of becoming: more ‘not quite there’ than ‘there’.

Pedagogical risk-taking involves exposure, and this can be intimidating. Communities of practice offer an important way to make pedagogical risk-taking collaborative and supportive, so that everyone feels there is room to fail (as well as succeed).

Anderson (2020), discussing the digital pedagogy pivot we have seen in response to COVID-19, suggests that communities of practice are essential to support collaboration, practice sharing and practice development. Putting communities of practice at the centre of digital critical pedagogies is an active way of pushing back against the discourse of ‘inevitable de-humanisation’ that characterises some writing on digital critical pedagogies (Morris and Stommel, 2018; Boler, 2015).

Next Steps

Across all of the literature, a recurring gap is the voice of learners. Although a few of the articles did carry out interviews with learners, the dominant voice in articulating and understanding digital critical pedagogies is undeniably that of the teacher. There is an urgent need for research that bridges the gap between learner and teacher.

We need careful observational research to identify which learners are heard in different types of digitally mediated communication used in learning and teaching, and to explore some of the following questions:

  • What does a ‘safe space’ look and feel like in the context of digital critical pedagogies? How do we know if we are in a safe space (as opposed to a sanitised space) for dialogue?
  • What are the benefits of interweaving with public pedagogies as part of digital critical pedagogies? We need to know far more about the learners’ experiences when they engage with public pedagogies and the ways that this interweaving can be written into learning, teaching and assessment.

Finally, we think of the themes identified from the review not so much as ‘knowledge’ but as points for reflection on practice. We hope to bring the findings to life both for Middlesex academics and further afield, and are currently putting together a collaborative innovation workshop with teaching academics at the university to develop concrete recommendations about how DCP can be more systematically advanced in the university and in higher education more broadly.

Faiza Hyder has worked as a Primary School teacher for over ten years in various London boroughs including Barnet and Islington. She recently graduated with distinction as a Master’s student at Middlesex University. Faiza currently works as a researcher for ACT (Association for Citizenship Teaching). Her additional research interests include EAL (English as an Additional Language), Immigration and motherhood in migration. Twitter @HyderFaiza

Dr Mona Sakr is Senior Lecturer in Education and Early Childhood. She researches creative, digital and playful pedagogies in a range of educational contexts, from early childhood education to higher education. In relation to higher education, she has published on the use of social media as part of developing critical pedagogies and the use of creative methods (e.g. drawings) for developing insights into learner experience and student feedback. Twitter@DrMonaSakr

References:

hooks, b (1994) Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge.

Morris, SM and Stommel, J (2018) An Urgency of Teachers: The Work of Critical Digital Pedagogy. Hybrid Pedagogy. Accessed 20.12.2021: https://criticaldigitalpedagogy.pressbooks.com/.