Academics are taught many things over the years. How to write grant applications in a tone of sober optimism. How to disagree politely while eviscerating an argument. How to pretend that Reviewer 2’s comments are ‘helpful’. But we are rarely prepared for the moment when our own work achieves enlightenment and returns to the world under a different name.
It began, as these things often do, with Google Scholar. Browsing innocently, I discovered that a paper I had written many years ago, first author with two colleagues, had been reborn. Here it was miraculously renewed: Freeing the chi of change: The Higher Education Academy and enhancing teaching and learning in higher education. Same title. Same argument. Same metaphors. Different authors. Different journal. Different universe.
This was not mere influence. Nor was it scholarly dialogue. This was something more metaphysical. The article had apparently passed through the cycle of samsara, shedding its original authorship like an old skin, and had re-emerged – serene, confident, and wholly unburdened by attribution.
Opening the paper produced a strange sense of déjà vu. Paragraphs unfolded exactly as I remembered writing them. The argument progressed through familiar analytical levels. The meso level was, once again, mysteriously absent. And there it was: the metaphor of chi – blocked, stagnant, yearning to be freed – flowing unimpeded across two decades and several thousand miles.
One could not help but admire the fidelity. This was not slapdash copying. This was careful stewardship. A lightly paraphrased abstract here, a synonym substituted there. “Examines” had matured into “takes a look at”. “Work intensification” had achieved inner peace as “an increase in workload”. The original prose had been gently guided toward a simpler, more mindful state.
The production values added to the sense of cosmic theatre. Running headers attributed the article to someone else entirely, suggesting either deep enlightenment or mild confusion. Words occasionally developed spontaneous internal spacing, or none at all, as if even the typography were observing a vow of non-attachment. Peer review, meanwhile, appeared to have transcended physical form altogether.
At moments like this, one is tempted to ask philosophical questions. What is authorship, really? If an argument is copied perfectly, does it still belong to its original creator? If a journal publishes without editors in the room to hear it, does it still make a sound? If a metaphor about blocked chi appears in the forest of academic publishing, does anyone notice?
And then there is Google Scholar, calmly indexing it all, like a Zen monk sweeping leaves while entire epistemologies collapse around him.
The emotional journey is predictable. Surprise gives way to irritation, which in turn yields to a kind of exhausted amusement. After all, it is not every day one gets to read one’s own work as if it were new – especially when it has been thoughtfully simplified for contemporary consumption.
Correspondence followed. Screenshots were taken. Appendices multiplied. Examples of verbatim overlap were laid out with the careful precision of a tea ceremony. The original article was cited. The reincarnated article was cited. Karma, it seemed, was being documented.
What lingers after the initial absurdity is not just concern about misconduct, but about the ecosystems that allow such reincarnations to flourish. Journals without editors. Publishers without addresses. Ethics policies without enforcement. A publishing landscape in which the appearance of scholarship is often sufficient, and coherence is optional.
Perhaps this is the true lesson of Eastern philosophy for higher education. When systems lose balance, chi stagnates. When oversight weakens, energies flow in unexpected directions. When scholarly publishing detaches from accountability, articles achieve nirvana without the inconvenience of authorship.
The good news is that the chi remains remarkably resilient. Even when blocked, it finds a way. It circulates. It reincarnates. It reappears – sometimes with better spacing, sometimes with worse.
As for me, I have learned a valuable lesson. Should I ever wish to republish my earlier work, there are evidently paths that require no revision, no peer review, and very little effort. I will not be taking them. But it is oddly comforting to know that my chi, at least, is doing well.
SRHE Fellow Paul Trowler is Emeritus Professor of Higher Education at Lancaster University. His work focuses on teaching, learning, and organisational change, with a long-standing interest in how academic practices operate in everyday settings. More recently, he has been working on doctoral education and the practical use of AI within learning architectures that support research and learning. He continues to write and develop tools that emphasise dialogic, theory-informed approaches rather than transmission-led models.
by Concepción González García and Nina Pallarés Cerdà
Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.
Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?
In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).
Moving beyond static measures of digital competence
Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.
This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.
Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?
A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.
There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation – provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.
This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.
Our study: teaching students to work with AI, not around it
We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.
The crucial difference lay in how generative AI was integrated:
In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
contextualising questions
breaking problems into steps
iteratively refining prompts
and checking their own solutions before turning to the AI.
Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.
Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.
We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
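For readers who want to see what such a specification looks like in practice, here is a minimal sketch in Python (using statsmodels), assuming a long-format dataset with one row per student per wave. The file and column names (‘student_id’, ‘post’, ‘treated’, ‘top_level’) are illustrative assumptions, not the authors’ actual data or code.

```python
# Minimal sketch of a difference-in-differences estimate with individual
# fixed effects, as described above. All names here are hypothetical:
# 'digcomp_panel.csv' holds one row per student per wave, with columns
# 'student_id', 'post' (0 = pre, 1 = post), 'treated' (0/1) and
# 'top_level' (1 if the highest DigComp level is reported).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("digcomp_panel.csv")

# Student dummies absorb 'treated' and all other time-invariant traits,
# so the coefficient on post:treated is the difference-in-differences
# estimate (a linear probability model, since the outcome is binary).
model = smf.ols("top_level ~ post + post:treated + C(student_id)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["student_id"]})
print(result.params["post:treated"])  # estimated effect in probability points
```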
What changed when AI was taught and used in this way?
At the overall sample level, we found statistically significant improvements in three areas:
Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.
In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.
A compensatory effect: AI as a potential leveller, not just an amplifier
When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.
This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.
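The subgroup comparison is straightforward to sketch. Continuing the hypothetical example above (again with assumed column names such as ‘digcomp_score’, not the authors’ code), the same model can be re-estimated within each half of the baseline distribution:

```python
# Continuation of the earlier hypothetical sketch: split students at the
# median of an assumed baseline competence score, then re-estimate the
# same difference-in-differences model within each subgroup.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("digcomp_panel.csv")  # hypothetical long-format data

# One pre-course row per student is assumed; 'digcomp_score' is illustrative.
baseline = df[df["post"] == 0].set_index("student_id")["digcomp_score"]
below = baseline[baseline < baseline.median()].index

for label, ids in [("below median", below),
                   ("above median", baseline.index.difference(below))]:
    sub = df[df["student_id"].isin(ids)]
    fit = smf.ols("top_level ~ post + post:treated + C(student_id)",
                  data=sub).fit(cov_type="cluster",
                                cov_kwds={"groups": sub["student_id"]})
    print(label, round(fit.params["post:treated"], 3))
```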
Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.
From ‘allow or ban’ to ‘how do we teach with AI?’
For higher education policy and practice, the implications are twofold.
First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.
Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.
This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:
How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?
Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.
Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.
Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.
As scholarly research into and using generative AI tools like ChatGPT becomes more prevalent, it is crucial for researchers to understand the intersections of copyright, fair use, and use of generative AI in research. While there is much discussion about the copyrightability of generative AI outputs and the legality of generative AI companies’ use of copyrighted material as training data (Lucchi, 2023), there has been relatively little discussion about copyright in relation to user prompts. In this post, I share an interesting discovery about the use of copyrighted material in ChatGPT prompts.
Imagine a situation where a researcher wishes to conduct a content analysis on specific YouTube videos for academic research. Does the researcher need to obtain permission from YouTube or the content creators to use these videos?
As per YouTube’s guidelines, researchers do not require explicit copyright permission if they are using the content for “commentary, criticism, research, teaching, or news reporting,” as these activities fall under the umbrella of fair use (Fair Use on YouTube – YouTube Help, 2023).
What about this scenario? A researcher wants to compare the types of questions posed by investors on the reality television series, Shark Tank, with questions generated by ChatGPT as it roleplays an angel investor. The researcher plans to prompt ChatGPT with a summary of each Shark Tank pitch and ask ChatGPT to roleplay as an angel investor and ask questions. In this case, would the researcher need to obtain permission from Shark Tank or its production company, Sony Pictures Television?
In my exploration, I discovered that it is indeed crucial to obtain permission from Sony Pictures Television. ChatGPT’s terms of service emphasise that users should “refrain from using the service in a manner that infringes upon third-party rights. This explicitly means the input should be devoid of copyrighted content unless sanctioned by the respective author or rights holder” (Fiten & Jacobs, 2023).
I therefore initiated communication with Sony Pictures Television, seeking approval to incorporate Shark Tank videos in my research. However, my request was declined by Sony Pictures Television in California, citing “business and legal reasons”. Undeterred, I approached Sony Pictures Singapore, only to receive a reaffirmation that Sony cannot endorse my proposed use of their copyrighted content “at the present moment”. They emphasised that any use of their copyrighted content must strictly align with the Fair Use doctrine.
This raises the question: why doesn’t the proposed research align with fair use? My initial understanding is that the fair use doctrine allows re-users to use copyrighted material without permission from the rights holders for news reporting, criticism, review, educational and research purposes (Copyright Act 2021 Factsheet, 2022).
In the absence of further responses from Sony Pictures Television, I searched the web for answers.
Two findings emerged which could shed light on Sony’s reservations:
ChatGPT’s terms highlight that “user inputs, besides generating corresponding outputs, also serve to augment the service by refining the AI model” (Fiten & Jacobs, 2023; OpenAI Terms of Use, 2023).
OpenAI is currently facing legal action from various authors and artists alleging copyright infringement (Milmo, 2023). They contend that OpenAI utilised their copyrighted content to train ChatGPT without their consent. Adding to this, the New York Times is also contemplating legal action against OpenAI for the same reason (Allyn, 2023).
These revelations point to a potential rationale behind Sony Pictures Television’s reluctance: while use of their copyrighted content for academic research might be considered fair use, introducing this content into ChatGPT could infringe upon the non-commercial stipulations (What Is Fair Use?, 2016) inherent in the fair use doctrine.
In conclusion, the landscape of copyright laws and fair use in relation to generative AI tools is still evolving. While previously researchers could rely on the fair use doctrine for the use of copyrighted material in their research work, the availability of generative AI tools now introduces an additional layer of complexity. This is particularly pertinent when the AI itself might store or use data to refine its own algorithms, which could potentially be considered a violation of the non-commercial use clause in the fair use doctrine. Sony Pictures Television’s reluctance to grant permission for the use of their copyrighted content in association with ChatGPT reflects the caution that content creators and rights holders are exercising in this new frontier. For researchers, this highlights the importance of understanding the terms of use of both the AI tool and the copyrighted material prior to beginning a research project.
Anita Toh is a lecturer at the Centre for English Language Communication (CELC) at the National University of Singapore (NUS). She teaches academic and professional communication skills to undergraduate computing and engineering students.
One of the benefits of SRHE membership is exclusive access to the quarterly newsletter, SRHE News, www.srhe.ac.uk/publications/srhe-newsletter. SRHE News typically contains a round-up of recent academic events and conferences, policy developments and new publications, written by editor Rob Cuthbert. To illustrate the contents, here is part of the April 2023 issue on recent developments in Publishing. If you would like to see a sample issue just email rob.cuthbert@uwe.ac.uk or rob.gresham@srhe.ac.uk.
John Sherer (North Carolina) blogged for The Scholarly Kitchen on 23 March 2023 about a recent initiative to publish open access monographs in history, reporting technical problems and author resistance, but also much greater take-up and use, with about three times as many reported individual engagements as even a successful paywalled monograph.
An article on 6 March 2023 by Alexander B Belles and colleagues from Penn State in the Journal of Science Policy and Governance made recommendations about how to handle the US Office of Science and Technology Policy requirement that all federally funded scholarly research be accessible to the public immediately upon publication. The article said: “While this open access policy will ultimately benefit society by increasing the availability of data and research outputs, it could place a heavy burden on researchers due to the relatively high cost of open access alongside an academic culture that tends to favor publishing in high impact subscription journals. We … offer recommendations for agencies, universities, and publishers to mitigate the impacts on researchers.” One recommendation was to consider cancelling publisher subscriptions and diverting the funds to article processing charges.
Jack Grove reported for Times Higher Education/insidehighered.com on 16 March 2023 on the suspiciously remarkable expansion of Swiss open-access publisher MDPI, which published no fewer than 240,500 articles in 2021, “just slightly fewer than Springer Nature and Elsevier’s combined open-access total that year, levying an average article processing charge of 1,258 Swiss francs ($1,364) per paper.” Jack Grove had reported for Times Higher Education on 15 March 2023 that analysis by economist Paolo Crosetto (National Research Institute for Agriculture, Food and Environment, France) showed “the number of MDPI’s special issues continued to rise sharply in 2022. Focusing on 98 MDPI journals with an impact factor, there were 55,985 special issues with a closing date in 2023, as of 23 February, Dr Crosetto told Times Higher Education. That compares with 39,587 open special issues identified at the end of March 2021, although only 10,504 of these eventually published anything. In 2022, 17,777 special issues published content.” Mark Hanson (Exeter) blogged about the predatoriness of MDPI on 25 March 2023.
Web of Science reported on 20 March 2023 that it had this year already disqualified some 50 journals, including an MDPI flagship journal, from having an impact factor in future. Christos Petrou of Scholarly Intelligence blogged for The Scholarly Kitchen on 30 March 2023 about the recent delisting of 50 journals, its implications for publishers, including MDPI, Hindawi and Wiley (which recently acquired Hindawi), and the consequences of the ‘guest editor’ model which underpins the recent growth of MDPI and other journals.
Shaping the field of lifelong education
The editors of the International Journal of Lifelong Education looked back on 40 years of the journal to develop themes which had shaped the field. They chose “citizenship and its learning; learning in, through and for work; and widening participation and higher education”. The article by John Holford (Nottingham) and his co-editors was part of the journal’s retrospective issue 41(6) (2 November 2022).
Books with DOIs are more discoverable on Google Scholar
Lettie Y Conrad (independent) and Michelle Urberg of EBSCO blogged for The Scholarly Kitchen about their funded study to find out how metadata contributes to the successful discovery of academic and research literature via the mainstream web. “Initial results indicated that DOIs have an indirect influence on the discoverability of scholarly books in Google Scholar — however, we found no direct linkage between book DOIs and the quality of Google Scholar indexing or users’ ability to access the full text via search-result links. Although Google Scholar claims to not use DOI metadata in its search index, the results of our mixed-methods study of 100+ books (from 20 publishers) demonstrate that books with DOIs are generally more discoverable than those without DOIs.”
Why journal submissions get rejected
Alex Edmans (London Business School) reflected on his experience as editor of the Review of Finance and analysed his reasons for rejecting nearly 1,000 submissions, in a paper posted on SSRN on 9 February 2023.
The ethics of peer review
The endless lament of journal editors about finding reviewers continued, as Dirk Lindebaum (Grenoble Ecole de Management) and Peter J Jordan (Griffith) mused in Organization (30(2): 396-406) on reviewer disengagement: “… an audit culture in academia and individual incentives (like reduced teaching loads or publication bonuses) have eroded the willingness of individuals to engage in the collective enterprise of peer-reviewing each others’ work on a quid pro quo basis. … it is unethical for potential reviewers to disengage from the review process … we aim to ‘politicise’ the review process and its consequences for the sustainability of the scholarly community. We propose three pathways towards greater reviewer engagement: (i) senior scholars setting the right kind of ‘reviewer’ example; (ii) journals introducing recognition awards to foster a healthy reviewer progression path and (iii) universities and accreditation bodies moving to explicitly recognise reviewing in workload models and evaluations. … the latter point … aligns individual and institutional goals in ‘measurable’ ways. In this way, ironically, the audit culture can be subverted to address the imbalance between individual and collective goals.”
Identity theft prompts scientists worldwide to contemplate legal action
Jack Grove reported for Times Higher Education/insidehighered.com on 10 February 2023 that many leading scientists had been wrongly named as authors or editors on AI-generated papers and predatory journals. Some were considering legal action, which might be supported by UKRIO.
The gaming of citation and authorship
Stuart Macdonald (Leicester) wrote a truly terrifying analysis of the extent of misrepresentation in academic publishing, in Social Science Information (online 7 February 2023): “Many authors in medicine have made no meaningful contribution to the article that bears their names, and those who have contributed most are often not named as authors. Author slots are openly bought and sold. The problem is magnified by the academic publishing industry and by academic institutions, pleased to pretend that peer review is safeguarding scholarship. In complete contrast, the editors of medicine’s leading journals are scathing about just how ineffectual is peer review in medicine. Other disciplines should take note lest they fall into the mire in which medicine is sinking.”
APCs are a heavy burden for middle-income countries
Alicia J Kowaltowski (São Paulo) and colleagues from Brazil blogged for The Scholarly Kitchen on 9 March 2023 about the way article processing charges can be a major problem for middle-income countries like Argentina, Brazil, India, Mexico, and South Africa.
Predatory journals and the mislocated centres of scholarly communication
Franciszek Krawczyk and Emanuel Kulczycki (both Adam Mickiewicz University in Poznań, Poland) argued in their article in Tapuya: Latin American Science, Technology and Society (2021, 4(1)) that so-called predatory journals may have a significant role in enabling otherwise marginalised scholars to maintain their academic careers despite a location on the periphery of mainstream academic debate. “Knowledge production is an important factor in establishing the geopolitical position of countries … we introduce the term “mislocated centres of scholarly communication” to help better understand the emergence of predatory journals, and journals that bear similarities to them, in geopolitical peripheries. Mislocated centers of scholarly communication are perceived in the peripheries as legitimized by the center but are in fact invisible or illegitimate in the center. Thus, we argue the importance of viewing these mislocated centers as the result of unequal power relations in academia. … predatory journals are a geopolitical problem because the geopolitical peripheries of science are much more often harmed by them than the center. Unlike predatory journals, mislocated centers of scholarly communication are not necessarily fraudulent but rather they are geopolitical roles imposed on some journals by a dynamic between center and peripheries.”
Routledge/Taylor & Francis acquire US publisher Stylus
The founder of Stylus Publishing announced in an email to authors on 2 March 2023 that the publisher would be sold to Taylor & Francis and operate as part of its Routledge division, as Doug Lederman reported for insidehighered.com on 3 March 2023. “Founded in 1996, Stylus’ publishing focuses on higher education, covering such areas as teaching and learning, student affairs, professional development, service learning and community engagement, study abroad, assessment, online learning, racial diversity on campus, women’s issues, doctoral education, adult education, and leadership and administration.” The publisher seems mainly to produce practical guides for US HE, with no obvious impact more widely.
Rob Cuthbert is the editor of SRHE News and Blog, emeritus professor of higher education management, Fellow of the Academy of Social Sciences and Fellow of SRHE. He is an independent academic consultant whose previous roles include deputy vice-chancellor at the University of the West of England, editor of Higher Education Review, Chair of the Society for Research into Higher Education, and government policy adviser and consultant in the UK/Europe, North America, Africa, and China. He is current chair of the SRHE Publications Committee and of the Editorial Advisory Board for Studies in Higher Education.
As I am typing away in Microsoft Word, the glaring, red squiggly underline inevitably pops up, bringing up all the insecurities I have with academic English writing, as an ethnically Chinese, bilingual Chinese-English speaker. So what if I speak English with a North American accent? So what if English has been my medium of instruction for my entire life? So what if I graduated with a Master of Arts with honours in English Language from the University of Edinburgh? My fluency in Chinese somehow discredits my English fluency, as if I cannot be equally competent in both. Because I am not white, my English will always somehow be inadequate.
The way others, and I, perceive my English ability reflects how ‘standard English’ as an idea is toxic to the identity-building of those who are not middle-class, cisgender, heterosexual, white men from the Anglophone world. April Baker-Bell talks about the concept of linguistic justice, arguing that promoting a type of ‘correct English’ has inherent white linguistic supremacy. Traditional approaches to language education do not account for the emotional harm, internalised linguistic racism, or the consequences these approaches have on the sense of self and identity of non-white students. Extending Baker-Bell’s theory, how would this apply to the use of proofreading apps?
Apps are created by people who have their own underlying assumptions and worldviews, even if these assumptions are not explicitly written in any of the apps’ documentation. When using these tools, users need to have a sense of the kind of assumptions these apps carry into their corrections. More importantly, as programmes are written by people and applied in a formulaic way, they should not have the power to define their users’ sense of identity, or even their ability to communicate in English. Algorithm-based tools will always fall short in understanding the nuances and eccentricities that make human writing exciting and intriguing. The app should not have authority over its users, and its feedback should never be taken uncritically.
However, proofreading apps could be used as a pedagogical tool when thoughtfully and critically engaged. Evija Trofimova created a resource titled ‘Digital Writing Tools: Spelling, Grammar and Style Checkers,’ which investigates how different proofreading apps can or should be used. Trofimova’s project assesses how each app can be used for the best didactic experiences, with exercise suggestions and classroom activities available for users to begin to see these proofreading apps as a possible pedagogical tool, rather than law enforcement of sorts. Users of proofreading apps should always treat each ‘error’ as a learning opportunity, investigate the rule behind the correction, and actively consider whether it is indeed a correction they want to take up in their writing. If a correction is unexpected, users should be encouraged to investigate why the app suggested it and what the underlying principle is. Crucially, app users need a sense of where to draw the fine line between what is merely conventional (rather than right), what makes writing comprehensible to readers, and what expresses the unique voice and identity of a writer. Writers should explore ways to communicate within and outside of conventions, in a way that best represents them. This sort of creativity goes beyond simply relying on the algorithm of a proofreading app.
It needs to be said more often that English as an academic lingua franca is no one’s first language (see Marion Heron and Doris Dippold, Meaningful Teaching Interaction at the Internationalised University). Just because someone is a monolingual English speaker does not necessarily mean that they are good at academic English writing. Just because someone writes in an unexpected or unconventional way does not necessarily mean that they are wrong. An essential purpose of writing is to communicate. As academics, we need to ask ourselves, how much can someone deviate from a standard and still be comprehensible? How much room can we leave to allow students to be themselves and express themselves fully in their writing? How much are proofreading apps stifling their ability to flourish as writers? We are not teaching students to become machines. There is no point in having different students write the same essay in the exact same way. Rather, it is precisely their unique and different voices that should be celebrated.
In ‘The Danger of a Single Story,’ Chimamanda Ngozi Adichie talks about her childhood in Nigeria, reading books about white, blue-eyed characters who played in the snow, ate apples, and talked a lot about the weather – none of which reflected her experience of the world. Growing up, Adichie struggled with novels whose characters were exclusively white foreigners from the West, as if the Western world were a cultural ground zero. She and other Nigerians were not represented in the literary works she read. Non-white proofreading app users may easily fall into the same impressionable and vulnerable position as Adichie did. The prescriptive ‘corrections’ made by proofreading apps, just like the children’s stories that Adichie read, imply an ideological position that a specific language standard, such as standard British or American English, is somehow superior and ‘correct.’ However, the West is not a cultural ground zero, nor is English a neutral medium of communication. Other varieties of English used in non-Western worlds, far from being inappropriate or incorrect, should be celebrated for their ability to reflect the culture and experiences of non-Western writers. The attempt to make a piece of written work meet certain linguistic standards should not be placed above rhetoric, creativity, and cultural expression.
What makes proofreading apps dangerous is that their underlying assumptions remain invisible to their users. A ‘correct’ grammar may reinforce existing biases in our society, creating linguistic violence, persecution, dehumanisation, and marginalisation, which non-standard English speakers endure when using their own language in schools and everyday life. One thing that stuck with me the most from my undergraduate degree is that proper English changes throughout the ages. What is now deemed suitable was once upon a time a deviant use of the language. And what is deviant now may become mainstream in the future. People have always reacted badly to these linguistic changes, but the changes in usage stick nonetheless. As users of proofreading apps and teachers of students who use these apps, it is important to encourage everyone to think about who the apps were built for and what purposes they were meant to serve. What spelling and grammatical rules do they enforce, and why? In After Whiteness: An Education in Belonging, Willie James Jennings pushes against (theological) education that is ultimately set up for white self-sufficient masculinity, a way of organising life around a persona that distorts authentic identity. This way of being in the world forms cognitive and affective structures that seduce people into its habitation and its meaning-making. When a white Anglophone world is presumed as a norm, and others somehow have to cater to its expectations, it strangles intellectual pursuits from the perspectives of the other. It is the freedom of expression between interlocutors that will create a space for students from all backgrounds to flourish.
Ann Gillian Chu (FHEA) is a PhD (Divinity) candidate at the University of St Andrews. She has taught in higher education contexts in Britain, Canada, and China using a variety of platforms and education tools. As an ethnically Chinese woman who grew up in Hong Kong, Gillian is interested in efforts to decolonise academia, such as exploring ways to make academic conferences more inclusive.
This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.
In a lecture delivered at Stanford University in 2014, provocatively titled Competition is for losers, Peter Thiel argued that ‘[m]onopoly is the condition of every successful business.’ Thiel’s endorsement of monopoly over competition has become business strategy orthodoxy for Big Tech firms, which, as Birch, Cochrane and Ward (2021, p6) argue, have ‘often been willing to accept low revenues in the short- to medium-term with the longer term goal of capturing markets and monopoly rents through their expected future control over data’. Assets are replacing commodities in contemporary capitalism, and an asset can be defined as ‘something that can be owned or controlled, traded, and capitalised as a revenue stream … [and] the point is to get a durable economic rent from them’ by limiting access to the asset (Birch and Muniesa, 2020, p2). We can see these assetisation dynamics emerging in EdTech markets serving UK higher education, and in this article I offer early insights into how these dynamics, driven by the growth in use of digital platforms during the Covid-19 pandemic, are shaping university strategies and practices.
This article reports on findings from the second phase of the Universities and Unicorns: Building Digital Assets in the Higher Education Industry project. The project is led by Dr Janja Komljenovic at Lancaster University and aims to investigate new processes of value creation and extraction – assetisation – in the HE sector as it increasingly digitalises its operations. In Phase 2 of the project, we are conducting a series of university case studies in the UK, alongside investor and company case studies. The university case studies are designed to help us understand how universities work with their commercial partners and what the synergies and tensions are. We are also curious about how universities view changing business models that focus on assetisation.
Importantly, we are not evaluating the use of EdTech in the context of teaching and learning or evaluating the strategies of individual institutions. Our concern is with how the HE sector is evolving in connection with EdTech markets. We are interviewing senior leaders, academic staff, directors of IT departments, IT developers and staff working in procurement, commercialisation and legal departments. We are also collecting a range of documents relating to digital strategy, business and data management plans, technical reports, financial records, and contracts with EdTech companies.
Our fieldwork with universities is a work in progress, and in this blog post I will outline three of our emerging findings, which relate to: (1) the ways that universities think about digital strategy; (2) the value of data from a university perspective; and (3) emerging processes of assetisation.
Digital strategy
None of the universities that we have studied so far have had formal and distinct digital strategies. Rather, digital strategy is embedded in IT, teaching and learning (T&L) and library strategies. In most cases, universities appear to be ‘between’ official strategy documents that cover this area. COVID-19 clearly shifted the short-term focus to tactics – working urgently to adjust and develop digital ecosystems to accommodate new demands of large-scale shifts online – and these universities are just now catching their breath and starting to update their strategies. However, despite this lack of formal strategy, some universities are very clear regarding the use of digital platforms to lead the sector and create value. In these cases, there clearly is an overarching strategy, it just isn’t described or formally presented as such.
Universities see themselves as developing institutional digital ecosystems by joining up platforms and focusing on the interoperability of their systems. Decisions about specific platforms are increasingly shaped by their potential integration into these ecosystems, and how data can be managed and integrated across platforms.
Interestingly, digital strategy is being driven by teaching and research strategy rather than shaping it. In one case, the point was made very strongly that digital is not separate, but rather a way of delivering the core business. Digital platforms are largely being used to deliver existing activity in digital form, rather than to create new forms of economic activity and new sources of value. However, questions are being raised about the relationship between IT and teaching and learning. For example, should IT departments simply support other business functions, or might they lead on digital strategy to enable new possibilities for the university?
The value of digital data
The primary value of digital data for universities appears to be reputational, and responses from our participants thus far have been remarkably consistent in this regard. Digital platforms can help to enhance the university’s brand and extend the business over a wider geographic range. This primacy of reputational, rather than financial, value is a distinctive feature of university perspectives on digital platforms, in contrast to companies.
Engagement with digital platforms was also seen to be valuable insofar as it generates market intelligence, supports student recruitment, changes perceptions of teaching and learning (eg blended approaches) and changes perceptions of students (eg enabling particular cohorts to engage in new ways, with benefits for their learning outcomes). Most interviewees are not thinking about the data generated by digital platforms as an asset, but it is clear that digital content (eg recorded lectures) is being seen in these terms insofar as it can be controlled by intellectual property rights and re-used over time.
Interestingly, our participants clearly hold the view that there is more potential for universities to make use of the digital data generated by platforms they use. However, in the case of learning analytics there is also scepticism regarding what it promises and its true value at this time. Despite a number of trials and experiments, many in UK universities are yet to see the benefits beyond what can be achieved using more prosaic approaches to data analytics.
Assetisation
The universities that we have studied so far do not appear to be using data to develop new products or services that generate value through economic rents; this kind of activity appears limited to commercial providers of digital platforms. However, universities increasingly understand the potential value of the data generated by their staff and students, and they are actively pursuing access to these data in their contractual negotiations with partners.
This is where we are seeing the emergence of assetisation dynamics in EdTech markets, which reflect the business strategies that Thiel promotes in his celebration of monopolies. Even if universities are able to negotiate favourable terms in individual contracts, providing rights to access and use data generated by university users on a given platform, they do not have access to aggregated data collected by companies through the use of this platform by other universities.
There is thus concern about the assetisation of data by commercial providers, for example in relation to the use of aggregated data sets to develop new products and services that automate aspects of academic work (eg assessment). Turnitin is a primary example that came up in many of our discussions. The monopoly created by Turnitin leaves universities with little choice but to use its platform and pay whatever is asked, and relationships with Turnitin have become strained in many cases. The value of Turnitin is based on the data it has collected, and these data could be used to develop new services that automate, and thus substitute for, aspects of teaching currently delivered by lecturers. Work is being pursued through industry bodies to negotiate a fairer distribution of the potential value generated by digital platforms in such cases.
Conclusion
While our university case studies are a work in progress, these three themes are already emerging quite consistently across our research sites. The value of data for universities is primarily reputational: extending the reach of teaching and learning functions, enhancing recruitment and supporting innovation in teaching and learning. Universities see digital strategy and the use of digital platforms as a way to extend their core business, not as a means to create new kinds of economic activity. In this respect, tech sector business strategies focused on creating value from data as an asset are not yet evident in the strategies of universities. However, we are seeing early signs that data is being assetised by EdTech companies, in an effort to extract monopoly rents by locking in users through subscriptions to digital platforms. In this sense, we are curious to see whether monopoly will be a condition of every successful business in the burgeoning HE EdTech space.
Sam Sellar is Dean of Research (Education Futures) and Professor of Education Policy at the University of South Australia. Sam’s research focuses on education policy, large-scale assessments and the datafication of education. Sam also works closely with teacher organisations around the world to understand the impact of digitalisation on teachers’ work. His most recent book is titled Algorithms of education: How datafication and artificial intelligence shape policy (University of Minnesota Press), co-authored with Kalervo N Gulson and P Taylor Webb. Contact here: sam.sellar@unisa.edu.au
References
Birch, K, Cochrane, DT, and Ward, C (2021) ‘Data as asset? The measurement, governance, and valuation of digital personal data by Big Tech’ Big Data & Society, 8(1), 20539517211017308.
Birch, K and Muniesa, F (eds) (2020) Assetization: Turning Things into Assets in Technoscientific Capitalism. Boston: MIT Press
This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.
In the context of the current Covid-19 pandemic, the ongoing digitalisation of education has become a prominent area for social, financial and, increasingly, (critical) educational research. Higher education, as a pivotal social, economic, technological and educational domain, has seen its activities drastically affected, and universities and the multitude of people involved in them have been forced to adapt to the unfolding crisis. HE researchers agree both on the unpreparedness of countries and institutions faced with the pandemic, and on its potential lasting impact on the educational sector (Goedegebuure and Meek, 2021). Inasmuch as educational technologies (EdTech) have been brought to the fore by their pivotal role in enabling and continuing educational practices across the globe, EdTech companies and investors have also become primary financial beneficiaries of these necessary processes of digitalisation. The extensive adoption of EdTech to bridge the gap between HE professionals and students under strict social distancing measures has been welcomed by investors as an opportunity for EdTech companies to establish themselves as key players within an educational landscape undergoing assetisation (Komljenovic, 2020, 2021). Investors and EdTech companies are scaffolding new digital markets in HE, reshaping the conceptualisation of universities, HE and the sector itself more generally (Williamson, 2021; Komljenovic and Robertson, 2016). In this brief entry, I focus on EdTech investors’ discourses, owing to their potential to shape the future of educational practices broadly speaking.
Within the ‘Universities and Unicorns’ ESRC-funded project, this exploratory research (see full report) aimed to unveil the ideological uses of linguistic, visual and multimodal devices (eg texts and charts) deployed by EdTech investors in a variety of texts that have the potential, given their circulation and goals, to shape public understandings of the role of educational technologies in the unfolding crisis. The research deployed a framework anchored in linguistics, specifically cognitive-based approaches to Critical Discourse Studies (CL-CDS; eg Mármol Queraltó, 2021b). A central assumption in this approach is that language encodes construal: the same event or situation can be formulated linguistically in alternative ways, and these formulations can have diverse cognitive effects on readers (Hart, 2011). From a CL-CDS perspective, then, texts can potentially shape the way the public thinks (and subsequently acts) about social topics (cf Watters, 2015).
In order to extract the ideologies underlying the discourse practices of HE investors, we qualitatively examined a variety of texts disseminated in the public and semi-private domains. We investigated, for example, HolonIQ’s explanatory charts, interviews with professionals and blog entries (eg Charles MacIntyre, Alex Latsis, Jan Lynn-Matern), and global financial reports by IBIS Capital, BrightEye Ventures, and EdTechX, among several others. Our main goal was to better understand how EdTech investors operationalised discourse to shape imageries of the future in the relationship between HE institutions, EdTech and governance. In line with CDS approaches, we examined the representations of social actors in context using van Leeuwen’s (2008) framework and, more in line with CL-CDS, we also operationalised the analysis of metaphorical expressions indexing conceptual metaphors, and force dynamics. Force dynamics is an essential tool for examining how the tensions between actors and processes within business discourse are constructed (see Oakley, 2005).
Our study yielded important findings for the critical examination of discourse processes within the EdTech-HE-governance triangle of influences. In terms of social actor representation (whose examination also included metaphor), the main findings are:
EdTech investors and companies are rendered as opaque, abstract collectives, and are positively represented as ‘enablers’ and ‘disruptors’ of educational processes.
Governments are rendered as generic, collective entities, and depicted as necessary funders of processes of digital transformation.
Universities or HE institutions are mainly negatively represented as potential ‘blockers’ of processes of digital transformation, and they are depicted as failing their students due to their lack of scalability and flexibility.
Individuals within HE institutions are identified as numbers and increasing percentages within unified collectives, with students routinely cast as beneficiaries in ‘consumer’ and ‘user’ roles and educators activated as ‘content providers’.
Metaphorically, the EdTech sector is conceptualised as a ‘ship’ on a ‘journey’ towards profit, where HE institutions can be ‘obstacles along a path’ and the global pandemic and other push factors are conceptualised as ‘tailwinds’.
The EdTech market is conceptualised as a ‘living organism’ that grows and evolves independent of the actors involved in it. The visual representations observed reinforce these patterns and emphasise the growth of the EdTech market in very positive terms.
The formulation of ‘push’ and ‘pull’ factors is also essential to understanding the discursively constructed ‘internal tensions’ within the sector. In order to examine these factors, we operationalised force-dynamics analysis and metaphor, which allowed us to arrive at the following findings:
Push factors identified by investors driving the EdTech sector include the Covid-19 global pandemic, the digital acceleration experienced in the sector prior to the pandemic, the increasing number of students requiring access to HE, and investors’ own actions aimed at disrupting the EdTech market.
Pull factors encouraging investment in the sector are conceptualised in the shape of financial predictions. The visions put forward by EdTech investors become instrumental in the achievement of those predictions.
The representation of the global pandemic is ambivalent: it is rendered both as a negative factor affecting societies and as a positive factor for the EdTech sector. The primary focus is on the positive outcomes of the disruption brought about by the pandemic.
Educational platforms are foregrounded in their enabling role and replace HE institutions as the site of educational practice, de-localising education from physical universities.
Students and educators are increasingly reframed as ‘users’ and ‘content providers’, respectively. This discursive shift is potentially indicative of new processes of assetisation of HE.
On the whole, framing business within the ‘journey’ metaphor entails that any entities or processes affecting business are potentially conceptualised as ‘obstacles along the path’, and are therefore attributed negative connotations. In our case, those entities (eg governments and HE institutions) or processes (eg lack of funding) that metaphorically ‘stand in the way of business’ are automatically framed in a negative light, potentially inviting a negative reception by the audience and thereby legitimising actions designed to remove those ‘obstacles’ (eg ‘disruptions’). EdTech companies and investors are represented very positively as ‘enablers’ of educational practices disrupted by the COVID-19 pandemic, but also as ‘push factors’ in processes of digital acceleration within the ‘speed of action is speed of motion’ metaphor. In the premised, ever-growing EdTech sector, those actors and processes that ‘slow down’ access to profits (or the processes providing access to profit) are similarly negatively represented.

The conceptualisation of the COVID-19 global pandemic in this context reflects ‘calculated ambivalence’. This ambivalence was expected: portraying the pandemic solely as a positive factor for the HE sector would be extremely detrimental to EdTech investors’ activities. Our findings show that, while the global pandemic is initially represented as a very negative factor greatly disrupting societies and businesses, those negative impacts tend to be presented in rather vague terms, and on most occasions the result of the disruption brought about by the pandemic is reduced to a change in the modality of education experienced by learners (from in-person to online education). We found no significant mention of the social or personal impacts of the pandemic (eg deaths, or scenarios affecting underrepresented social groups); the focus remained mainly on the market and the activities within it.

Conversely, while the initial framing of the pandemic is inherently negative, we have seen in several examples above that the pandemic is subtly instrumentalised as a ‘push factor’ that serves to accelerate digital transformation and is hence a positive factor for the EdTech sector. In a global context of restrictions, containment measures and vaccine rollouts, it is especially ideologically relevant to find the pandemic instrumentalised as a ‘catalyst’, or as an important player in an ‘experiment of global proportions’. Framing the pandemic in such ways detaches the audience from its negative connotations, and serves to depict EdTech companies and investors as involved in high-level, complex processes that abstract away the millions of diverse victims of the pandemic. Ultimately, in the ‘journey’ towards profit, COVID-19 is a desired push factor, also realised as a ‘tailwind’, which facilitates the desired digital acceleration.
On the whole, our research demonstrates that social actor representation and the distinction between push and pull factors are crucial sites for the analysis of EdTech discourse. EdTech’s primary focus is on the positive outcomes of the disruption brought about by the pandemic. In this context, educational platforms are foregrounded in their enabling role and replace HE institutions as the site of educational practice, de-localising education from physical universities. Subsequently, students and educators are increasingly reframed as ‘users’ and ‘content providers’ respectively. We argue that this subtle discursive shift is potentially indicative of new processes of assetisation of HE and reflects, more broadly, a neoliberal logic.
Javier Mármol Queraltó is a PhD candidate in Linguistics at Lancaster University. His current research deals with the multimodal representations of discourses of migration in the British and Spanish online press. He advocates a Cognitive Linguistic Approach to Critical Discourse Studies (CL-CDS), and is working on a methodology that can shed light on how public perceptions of social issues might be influenced by both the multimodal constraints of online newspaper discourse and our shared cognitive capacities. He is also interested in the multimodal and cognitive dimensions of discourses of Brexit outside the UK, news discourses of social unrest, and the marketisation/assetisation processes of HE.
This blog is a re-post of an article first published on universityworldnews.com. It is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.
EdTech companies are, on average, priced modestly, although some have earned strong valuations. We know that valuation practices normally reflect investors’ belief in a company’s ability to make money in the future. We are, however, still learning about how EdTech generates value for users, and how to take account of such value in the grand scheme of things.
Valuation and deployment of user-generated data
EdTech companies are not competing with the likes of Google and Facebook for advertising revenue. That is why phrases such as ‘you are the product’ and ‘data is the new oil’ yield little insight when applied to EdTech. For EdTech companies, strong valuations hinge on the idea that technology can bring use value to learners, teachers and organisations – and that these will eventually be willing to pay for such benefits, ideally in the form of a subscription. EdTech companies try to deliver use value in multiple ways, such as by deploying user-generated data to improve their services. User-generated data are the digital traces we leave when engaging with a platform: keystrokes and mouse movements, clicks and inactivity.
The value of user-generated data in higher education
The gold standard for unlocking the ‘value’ of user-generated data is to bring about an activity that could not otherwise have arisen. Change is brought about through data feedback loops. Loops consist of five stages: data generation, capture, anonymisation, computation and intervention. Loops can be long or short.
For example, imagine that a group of students is assigned three readings for class. Texts are accessed and read on an online platform. Engagement data indicate that all students spent time reading text 1 and text 2, but nobody read text 3. As a result of this insight, come next semester, text 3 is replaced by a more ‘engaging’ text. That is a long feedback loop.
Now, imagine that one student is reading one text. The platform’s machine-learning programme generates a rudimentary quiz to test comprehension. Based on the student’s answers, further readings are suggested or the student is encouraged to re-read specific sections of the text. That is a short feedback loop.
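Read as a pipeline, the five stages lend themselves to a simple sketch. The snippet below is purely illustrative (the data, the 60-second threshold and the function names are my own assumptions, not any platform’s actual implementation), but it shows how a short loop chains generation, capture, anonymisation, computation and intervention for a single reader.

```python
# A minimal sketch of the five-stage feedback loop, with invented data and names.
from typing import List

def generate(student_id: str) -> dict:
    # Stages 1-2: the platform generates and captures engagement data
    # (time spent per section, in seconds). Stubbed with made-up values.
    return {"student": student_id, "seconds_per_section": {"s1": 310, "s2": 12, "s3": 0}}

def anonymise(event: dict) -> dict:
    # Stage 3: strip direct identifiers before any computation.
    return {k: v for k, v in event.items() if k != "student"}

def compute(event: dict) -> List[str]:
    # Stage 4: a stand-in for the platform's analytics; flag sections
    # read for under 60 seconds as 'skimmed'.
    return [s for s, t in event["seconds_per_section"].items() if t < 60]

def intervene(skimmed: List[str]) -> str:
    # Stage 5: the intervention closes the loop.
    if skimmed:
        return "Suggest re-reading sections: " + ", ".join(skimmed)
    return "Suggest further readings"

# One pass through a short loop for one (hypothetical) reader.
print(intervene(compute(anonymise(generate("student-42")))))
```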
In reality, most feedback loops do not bring about activity that could not have happened otherwise. It is not as if a professor could not learn, through conversation, which texts students like best, which points they comprehend, and so on. What is true, though, is that the basis and quality of such judgments shift. Most importantly, so does the cost structure that underpins judgment.
The more automated feedback loops are, the greater the economy of scale. ‘Automation’ refers to the decoupling of additional feedback loops from additional labour inputs. ‘Economies of scale’ means that the average cost of delivering feedback loops decreases as the company grows.
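A toy calculation makes the point. The figures below are assumed purely for illustration; what matters is the shape of the curve, not the numbers: once loops are decoupled from labour, the average cost per loop falls steadily with scale.

```python
# Illustrative arithmetic only; the cost figures are assumptions, not data.
# When loops are automated, each extra loop adds almost no labour cost,
# so the average cost per loop falls as the platform scales.

fixed_cost = 500_000          # hypothetical cost of building and running the platform
labour_cost_per_loop = 0.01   # hypothetical near-zero marginal labour per automated loop

for n_loops in (1_000, 100_000, 10_000_000):
    avg_cost = fixed_cost / n_loops + labour_cost_per_loop
    print(f"{n_loops:>10,} loops -> average cost per loop: {avg_cost:,.4f}")
```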
Proponents of machine learning and other artificial intelligence approaches argue that the use value of feedback loops improves with scale: the more users engage in the back-and-forth between generating data, receiving intervention and generating new data, the more precise the underlying learning algorithms become in predicting what interventions will ‘improve learning’.
The platform learns and grows with us
EdTech platforms proliferate because they are seen to deliver better value for money than the human-centred alternative. Cloud-based platforms are accessed through subscriptions without transfer of ownership. The economic relationship is underwritten by law and continued payment is legitimated through the feedback loops between humans and machines: the platform learns and grows with us, as we feed it.
Machine learning techniques certainly have the potential to improve the efficiency with which we organise certain learning activities, such as particular types of student assessment and monitoring. However, we do not know which values to mobilise when judging intervention efficacy: ‘value’ and ‘values’ are different things.
In everyday talk, we speak about ‘value’ when we want to justify or critique a state of affairs that has a price: is the price right, too low, or too high? We may disagree on the price, but we do agree that something is for sale. At other times we reject the idea that a thing should be for sale, like a family heirloom, love or education. If people tell us otherwise, we question their values. This is because values are about relationships and politics.
When we ask about the values of EdTech in higher education, we are really asking: what type of relations do we think are virtuous and appropriate for the institution? What relationships are we forging and replacing between machines and people, and between people and people?
When it comes to the application of personal technology, we have valued convenience, personalisation and seamlessness by forging very intimate but easily forgettable machine-human relations. This could happen in the EdTech space as well. Speech-to-text recognition, natural language processing and machine vision are examples of how bonds can be built between humans and computers, aiding feedback loops by making worlds of learning computable.
Deciding on which learning relations to make computable, I argue, should be driven by values. Instead of seeing EdTech as a silver bullet that simply drives learning outcomes, it is more useful to think of it as technology that mediates learning relations and processes: what relationships do we value as important for students, and when is technology helpful or unhelpful in establishing them? In this way, values can help guide the way we account for the value of EdTech.
Morten Hansen is a research associate on the Universities and Unicorns project at Lancaster University, and a PhD student at the Faculty of Education, University of Cambridge, United Kingdom. Hansen specialises in education markets and has previously worked as a researcher at the Saïd Business School in Oxford.
This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.
What makes learning more efficient? And what makes teaching more effective? According to EdTech providers and their champions, it is the digital transformation of higher education. The consulting company Gartner – which releases regular EdTech industry reports – defines this transformation as a shift from a ‘collectively defined’ quality model, in which universities provide their services (theoretically) to anyone, to a model in which quality is personally defined and delivered at scale through MOOCs or other means. In fact, Gartner emphasises the importance of EdTech providing scalable technologies to ensure ‘cost effective education for the benefit of society’. And this seems to be the concern of many EdTech firms themselves: they aim to provide technologies that make life and work more efficient and effective for higher education institutions, managers, faculty, students and staff.
But what does this actually mean?
I am part of a project, led by Dr Janja Komljenovic, looking at how value is increasingly being created in the higher education sector through the transformation of ‘things’ into digital and other assets – it could be students’ data, it could be research, it could be lectures, and so on. Part of our concern about these changes is the way they can end up reconfiguring societal, public or commonly held resources as private assets from which companies can exact an economic rent. An important reason for examining this assetisation process is to analyse exactly how things are turned into private assets, as a way to open them up to public scrutiny, and to political intervention, should we so desire. While assets are constituted by legal forms, like property rights, and technical changes, like digital rights management, they are also the result of broader narratives about how we should or should not understand the world. Epistemic justifications matter. The World Economic Forum highlights what I mean here. It supports the deployment of education technology as a way to “create better systems and data flows”, and this, in turn, means more efficient and effective learning and teaching. But what do ‘efficiency’ and ‘effectiveness’ mean in the case of higher education?
As we have interviewed EdTech providers in our project, we have noticed how they emphasise ‘efficiency’ as one of the key contributions of their technology. For them, efficiency seems to be equated with producing an outcome at lower cost, whereas in common-sense terms it is understood as doing something ‘better’ than before. It is important to see how the concept of efficiency is enrolled in the transformation of higher education into a range of assets. Assetisation in higher education depends on the development and promotion of a set of analytics that can identify efficiencies, understood as cost savings from which someone or some institution can benefit. Key to this assetisation process is the characterisation of efficiency as a common-sense goal for universities, managers, faculty, students, staff and governments; in fact, efficiency can appear to be the very thing that education technologies are turning into an asset. For example, making it cheaper for students to study by enabling them to rent their textbooks rather than having to buy them. Or making it cheaper for universities to subscribe only to those electronic texts – or even those parts of texts – that are actually read and used by their staff and students. But this raises an important question: how do EdTech companies make money if they are simply reducing costs all around?
EdTech companies look to the future for their success. Assets are temporal entities: they entail the creation of a stream of future revenues that can be capitalised in the present, enabling investors to put a value on them that does not depend on being profitable now, or even on generating significant revenues now. Efficiencies in the present often end up as costs deferred to the future, as today’s cost savings compound into increased revenues for someone (eg an EdTech company) down the line. The future revenue expectations of EdTech companies rest on the illusion of efficiency as cost savings at this point in time; for example, students can save on textbooks now but will be induced to subscribe to lifelong learning resources, or their personal data might be exploited in multiple ways, or their reading habits will be used to sell something to universities, or any manner of revenue-generating schemes. Someone is paying in the future.
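For readers unfamiliar with the mechanics, here is a minimal sketch of such capitalisation, under wholly invented assumptions about the revenue stream and the discount rate; real valuation practices are far less mechanical.

```python
# A minimal sketch of capitalising expected future revenues into a present
# valuation. The projected revenues and discount rate are hypothetical,
# chosen only to illustrate the logic described above.

def present_value(revenues, discount_rate):
    # Discount each year's expected revenue back to today.
    return sum(r / (1 + discount_rate) ** t for t, r in enumerate(revenues, start=1))

# Hypothetical company: unprofitable now, but projecting subscription
# revenues (in £m) as today's 'cost savings' convert into future payments.
projected = [0, 2, 5, 12, 25, 40]
print(f"Capitalised value today: £{present_value(projected, 0.10):.1f}m")
```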
EdTech companies have to make money somehow, and how they make money is the interesting question. Ideas about the current and future state of higher education and EdTech matter as they provide imaginaries of what is possible and desirable, which we discuss in this report. Claims to efficiency are part of how they make money; they are part of the way that EdTech companies construct new asset classes out of universities and their students, faculty, and staff. Interrogating how these supposed efficiencies are monetised is critical for getting a grip on the implications of EdTech for higher education in the longer term. It is essential we analyse this dynamic now to allow for timely public scrutiny, democratic debate and social intervention.
Kean Birch is Associate Professor at York University, Canada. He is particularly interested in understanding technoscientific capitalism and draws on a range of perspectives from science & technology studies, economic geography, and economic sociology to study it. More specifically, his research focuses on the restructuring and transformation of the economy & financial knowledges, technoscience & technoscientific innovation, and the relationship between markets & natural environments. Currently, he is researching how different things (e.g. knowledge, personality, loyalty, etc.) are turned into ‘assets’ & how economic rents are then captured from those assets – basically, in processes of assetisation and rentiership.
This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.
Universities worldwide are increasingly interested in digital technologies and how they can support higher education. A recent study by the European University Association found that most European universities are already using or planning to use data-rich products and services, such as artificial intelligence, machine learning, learning analytics, big data, and the internet of things (see Figure 18 on page 36). Indeed, it is precisely these data-rich operations that are central to the idea of the disruptive potential of education technology (edtech), as argued by my colleague, Javier Mármol Queraltó, in the recent UU project report. The discourse of investors and edtech companies promises thoroughly improved higher education based on personalisation, automation and efficiency. But how deliverable are these promises? Who innovates in the space of data-rich operations, for which services and for which users? Who profits? These are some of the questions we address in the Universities and Unicorns project, which aims to understand forms of value and ways of creating it in digital higher education. In this blog post, I will address three possible trends that can be identified from the interim findings of our quantitative analysis. But before proceeding to discuss these trends, I will contextualise our analysis.
We used Crunchbase to build three databases covering 2,012 edtech companies, 1,120 investors in edtech, and 1,962 edtech investment deals. We identified those relevant to the higher education sector; our data reflect the state of the sector as of July 2021. Based on this analysis, we identified four key service models in the higher education edtech industry. First, the business-to-business (B2B) model includes digital platforms serving universities and companies, such as virtual learning environments. Second, the business-to-customer (B2C) model includes platforms targeting individuals directly. Third, the business-to-business-to-customer (B2B2C) model serves institutions that use or further develop the platform to reach individuals, such as Massive Open Online Course (MOOC) or Online Programme Management (OPM) platforms. Finally, the business-to-customer-to-customer (B2C2C) model includes platforms that connect individuals, such as skills- and knowledge-sharing platforms. B2B2C and B2C2C platforms, in particular, act as the kind of infrastructural intermediaries that are so prevalent in other sectors of our social and economic lives.
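To make the taxonomy concrete, here is a toy classifier keyed on who a company sells to and who ultimately uses its platform. The field values and labels are illustrative assumptions of mine, not the coding scheme we applied to the Crunchbase data.

```python
# A toy classifier for the four service models described above.

def service_model(sells_to: str, end_user: str) -> str:
    if sells_to == "institution" and end_user == "institution":
        return "B2B"    # eg virtual learning environments
    if sells_to == "institution" and end_user == "individual":
        return "B2B2C"  # eg MOOC and OPM platforms
    if sells_to == "individual" and end_user == "peer":
        return "B2C2C"  # eg skills- and knowledge-sharing platforms
    return "B2C"        # platforms targeting individuals directly

print(service_model("institution", "individual"))  # -> B2B2C
print(service_model("individual", "peer"))         # -> B2C2C
```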
Our analysis found that half of all investment went into B2B platforms, followed by investment into B2C, while B2C2C and B2B2C together received just under a quarter of all investment. However, platforms with the fastest pace of increasing investment are those targeting individuals directly or through intermediation, ie B2C and B2C2C models. This might indicate emerging parallel or alternative higher education products and services that compete with traditional university provision, especially in the context of lifelong learning.
Digital platforms that say they incorporate data-rich operations in their products and services are not the priority area for investors. While we noticed increasing investment in data-rich platforms, it still accounted for less than a quarter of all investment in such products. Nevertheless, we identified three possible trends that are especially worthy of our attention: (1) data-rich operations are being innovated largely in B2B platforms; (2) there is notable unevenness in the location of the edtech companies, and of the investments, behind platforms that innovate in data-rich operations; and (3) there may be potential for monopolies in data-rich innovation. Let’s delve into each of these possible trends.
Almost all investment in companies developing data-rich operations in their platforms went to the B2B service model. Looking only at higher education institutions as the target customer, fully half of the investment supports data-rich innovation. Most of that went into platforms that act as the institutional digital backbone, indicating that the intention might be to support all institutional functions beyond teaching with data-rich operations, such as artificial intelligence, machine learning and various kinds of analytics beyond learning analytics. There seems to be a trend towards data-rich digital ecosystems at universities that will, in the near future, harvest all user and other data.
There is also high unevenness in where investment in data-rich platforms is allocated. In terms of the number of companies, 239 in our database declare that they offer data-rich operations on their platforms. Almost half of those (101) are based in the USA, 21 in the UK and 19 in India. Companies based in Africa are entirely missing from the list. In terms of investment amounts, 88% of all investment in companies offering data-rich services went to companies based in the USA, 3% each to those based in Norway and the UK, and 6% to the rest of the world. The discrepancy between the number of companies and the size of investment indicates that individual investment amounts are higher in the USA than elsewhere in the world.
Finally, if we compare different indicators of investment in companies that innovate data-rich solutions for higher education institutions, we notice interesting dynamics. Looking at the money raised, half of B2B investment went into companies whose platforms include data-rich operations, yet these account for only 30% of deals and 25% of companies. This indicates that investment in data-rich platforms for higher education institutions is concentrated in a smaller number of companies receiving larger investments. We wonder if this signals potential for monopolies in the future. Moreover, if we compare granted patents, we notice that a higher percentage of companies offering data-rich platforms own patents (30%) than of those offering other kinds of service or product platforms (10%). Digital platforms are typically still protected by a licence, but that differs from the more restrictive protection of a patent. We wonder whether this discrepancy in patent share might indicate the black-boxing of data-rich operations in higher education.
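For readers curious about the mechanics, a sketch of this kind of share comparison, run on a wholly made-up deals table (the column names and figures are illustrative, not our Crunchbase data), might look as follows.

```python
# Comparing investment share, deal share and company share for data-rich
# platforms, on an invented six-deal table.
import pandas as pd

deals = pd.DataFrame({
    "company":     ["A", "B", "C", "D", "E", "F"],
    "data_rich":   [True, True, False, False, False, False],
    "raised_musd": [200, 150, 30, 40, 50, 30],
})

money_share   = deals.loc[deals.data_rich, "raised_musd"].sum() / deals["raised_musd"].sum()
deal_share    = deals["data_rich"].mean()
company_share = deals.groupby("company")["data_rich"].first().mean()

# A money share well above the deal and company shares signals concentration:
# fewer data-rich companies attracting larger individual investments.
print(f"money {money_share:.0%}, deals {deal_share:.0%}, companies {company_share:.0%}")
```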
Our research on digitalising higher education is showing the complex impact of digital technology and datafication on the sector. This impact includes potentially positive and supportive developments, but also many potentially worrying trends. Further research is needed into these trends and into the role of different actors, particularly financial investors and edtech companies. Please follow our project, where we will share the findings from this further work as it unfolds.
Janja Komljenovic is a Senior Lecturer and co-Director of the Centre for Higher Education Research and Evaluation at Lancaster University in the UK. She is also a member of the Research Management Committee of the Centre for Global Higher Education, headquartered at the University of Oxford. Janja’s research focuses on the political economy of knowledge production and higher education markets. She is especially interested in the relationship between the digital economy and the higher education sector, and in the digitalisation, datafication and platformisation of knowledge production and dissemination. Janja has published internationally on higher education policy, markets and education technology.