by Joanne Irving-Walton
What AI absorbs and why that matters
Most debates about generative AI in higher education fixate on what it produces: essays, summaries, answers, paraphrases. I find myself increasingly interested in something else – what it absorbs. Over the past year, as conversations about AI have threaded through seminars and tutorials, a pattern has gradually become visible. In those discussions, students rarely begin with content production; instead, they talk about how it helps them get started and steadies them enough to keep going. They use it when the blank page paralyses, when feedback stings and when uncertainty feels exposing. One student described asking AI to “make it feel possible”. Another spoke of feeding tutor comments into the system so they could be “explained more kindly”. A third reflected, almost apologetically, “I don’t want it to do my work… I just need something to push against before I say it out loud and risk looking stupid”.
In each case, AI is not replacing thinking. It is absorbing part of the emotional labour involved in it, and as that labour is redistributed, the texture of judgement shifts. Academic judgement does not tend to emerge from comfort. It develops in the stretch between knowing and not knowing, when confidence dips, stakes feel heightened, and your sense of competence is quietly tested (Barnett, 2007). Staying in that stretch long enough for thinking to clarify demands more than intellectual effort; it requires emotional steadiness, time, space and the capacity to tolerate uncertainty without rushing to resolution (Biesta, 2013). Traditionally, that steadying work has been shared across learning relationships: tutors reframing feedback, peers normalising confusion, supervisors encouraging persistence through doubt. Generative AI now occupies part of that terrain.
I do not think this is inherently a problem. For some students, it is transformative. It marks a shift in where the labour of learning takes place, and that change deserves examination rather than alarm.
Four modes of engagement and emotional labour
When students talk about how they use AI, their practices tend to cluster into four overlapping orientations. These are not moral categories so much as shifts in where emotional and cognitive labour is undertaken.
Instrumental engagement appears when students use AI to summarise readings, refine phrasing or impose structure. Here the friction lies in form-making and shaping thought into something communicable. The judgement at stake is procedural: what is proportionate or efficient in this context?
Dialogic engagement emerges when students test interpretations or rehearse arguments. AI becomes a low-stakes sounding board, absorbing some of the vulnerability of articulating something half-formed. The question beneath it is interpretive: what does this mean, and how far do I trust my reading and myself?
Metacognitive engagement is evident when students ask AI to critique their reasoning or compare approaches. What is absorbed here is evaluative tension and the discomfort of examining one’s own argument. The judgement in play is comparative and strategic: which option is stronger, and why?
Affective-regulatory engagement is the fourth orientation. Here, AI absorbs the anxiety that precedes judgement itself. It breaks tasks into steps, softens feedback, lowers the threshold for beginning, offers reassurance before submission and quietens the internal ruminations and rehearsals of everything that might go wrong. This is not peripheral to learning. It is increasingly central.

Figure: Where the labour of learning now lives
Accessibility, safety and the risk of smoothing too much
For many students, particularly those navigating anxiety, executive dysfunction, neurodivergence or heavy external commitments, this emotional buffering is not indulgence but access (Rose & Meyer, 2002). Breaking tasks into steps or privately rehearsing ideas before speaking can widen participation rather than diminish it.
We should not romanticise struggle. Nor should we imagine that institutional structures have ever been able to hold every student perfectly. For some learners, AI offers another place to rehearse thinking, one that sits alongside, rather than replaces, human dialogue.
But there is a tension here. If AI consistently absorbs the strain of uncertainty before ideas encounter resistance, if feedback is softened before it unsettles, if structure replaces the slow work of wrestling thought into form, then something quieter begins to shift. Much of this work happens privately, in browser tabs and late-night prompts, in spaces students do not always feel comfortable admitting to. That makes it harder for us to see what is being strengthened and what may be thinning. The danger is not comfort, but the quiet disappearance of formative strain.
By formative strain, I do not mean suffering for its own sake, nor simply the “desirable difficulties” described by Bjork and Bjork (2011) or the stretching associated with a Vygotskian zone of proximal development (Vygotsky, 1978). I am referring to the lived experience of remaining with ambiguity, critique and partial understanding long enough for judgement to consolidate; the emotional as well as cognitive work of staying with a problem. If that work is always pre-processed, it may narrow the rehearsal space where judgement forms.
Scaffold or substitute
Much depends on whether AI remains a scaffold or begins to function as a substitute. Used as scaffold, it lowers the emotional threshold just enough for deeper engagement, absorbing anxiety without displacing judgement. Used as substitute, it reduces not only strain but evaluation itself; the work of deciding and committing shifts elsewhere. The distinction lies less in the tool than in how it is woven into the learning environment.
Individual awareness and institutional responsibility
It would be easy, and unfair, to frame this as a matter of individual discernment. Students already carry a great deal. But nor is this simply a matter of institutional correction. We are all navigating new terrain in real time, without a settled script.
If we are serious about judgement formation, then responsibility is shared — and it is evolving. This is less about detection or prohibition than about openness. AI engagement is happening whether we discuss it or not. The question is whether we bring it into the light. That might mean inviting students to reflect on how they used AI in a task, not as confession, but as analysis. It might mean modelling, in our own teaching, what it looks like to question or refine an AI response rather than accept it wholesale. It certainly means acknowledging the emotional labour of learning openly (Newton, 2014), recognising that starting can be harder than finishing and that this, too, is part of learning.
At a structural level, we also need some candour. Systems built on speed, metrics and visible output inevitably amplify the appeal of friction-reducing tools. If polish is rewarded more consistently than process, we should not be surprised when students bypass the stretch between uncertainty and articulation. Cultivating discernment, then, is not a matter of allocating blame. It is a collective project of making the shifting terrain of AI use visible, discussable and educative.
Where the emotional work now lives
Generative AI has not diminished the importance of human judgement. If anything, it has made visible how emotionally mediated that judgement has always been (Immordino-Yang & Damasio, 2007). The interior work of learning – the hesitation, the rehearsal, the private negotiation of uncertainty – has never been fully observable. It has always unfolded, at least in part, elsewhere.
What AI changes is not the existence of that interior space, but its texture. Some of that labour now takes place in dialogue with a system that can stabilise, extend or subtly redirect thinking. That creates an opportunity: we are at a juncture where the emotional dimensions of learning can be surfaced and examined more deliberately than before.
It also carries risk. Students can disappear down an AI rabbit hole just as easily as they once disappeared into rumination. The question is not whether the interior work exists, but how it is shaped and whether it ultimately strengthens judgement or thins it.
References
Barnett, R (2007) A will to learn: Being a student in an age of uncertainty Open University Press
Biesta, GJJ (2013) The beautiful risk of education Paradigm Publishers
Bjork, EL & Bjork, RA (2011) ‘Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning’ in MA Gernsbacher, RW Pew, LM Hough & JR Pomerantz (eds), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64) Worth Publishers
Immordino-Yang, MH & Damasio, A (2007) ‘We feel, therefore we learn: The relevance of affective and social neuroscience to education’ Mind, Brain, and Education, 1(1), pp. 3–10
Newton, DP (2014) Thinking with feeling: Fostering productive thought in the classroom Routledge
Rose, DH & Meyer, A (2002) Teaching every student in the digital age: Universal design for learning ASCD
Vygotsky, LS (1978) Mind in society: The development of higher psychological processes Harvard University Press
Joanne Irving-Walton is a Principal Lecturer at Teesside University, working across learning and teaching and international partnerships. She is particularly interested in how academic judgement and professional identity develop through the emotional realities of higher education.