SRHE Blog

The Society for Research into Higher Education



My Marking Life: The Role of Emotional Labour in Delivering Audio Feedback to HE Students

by Samantha Wilkinson

Feedback has been heralded as the most significant single influence on student learning and achievement (Gibbs and Simpson, 2004). Despite this, students critique feedback as unfit for purpose, finding that it does not help them clarify things they do not understand (Voelkel and Mello, 2014).

Although written feedback remains the norm in Higher Education, the literature highlights the benefits of audio feedback. King et al (2008) contend that audio feedback is often evaluated by students as being ‘richer’ than other forms of feedback.

Whilst there is a growing body of literature evaluating audio feedback from the perspective of students, the experiences of academics providing audio feedback have been explored less (Ekinsmyth, 2010). Sarcona et al (2020) is a notable exception, exploring the instructor perspective, albeit briefly. The authors share how some lecturers in their study found it quick and easy to provide audio feedback, and that they valued the ability to indicate the tone of their feedback. Other lecturers, however, stated how they had to type the notes first to remember what they wanted to say, and then record these for the audio feedback, and thus were doing twice as much work.

Whilst the affective impact of feedback on students has been well documented in the literature (eg McFarlane and Wakeman, 2011), there is little in the academic literature on the affective impact of the feedback process on markers (Henderson-Brooks, 2021). Whilst not specifically related to audio feedback, Spaeth (2018) is an exception, articulating that emotional labour is a performance in which educators seek to balance the promotion of student learning (care) with the pressures for efficiency and quality control (time). Spaeth (2018) argues that there is a lack of attention directed towards the emotional investment on the part of colleagues when providing feedback.

Here, I bring my voice to this less explored side by examining audio feedback as a performance of emotional labour, based on my experience of trialling audio feedback as a means of providing feedback to university students through Turnitin on the Virtual Learning Environment. This trial was initiated by colleagues at a departmental level as a possible means of addressing the National Student Survey category of ‘perception of fairness’ in relation to feedback. I decided to reflect on my experience of providing audio feedback as part of a reflective practice module, ‘FLEX’, that I was undertaking at the time whilst working towards my Master’s in Higher Education.

When providing audio feedback, I felt more confident in the mark and feedback I awarded students than when providing written feedback. I felt my feedback was less likely to be misinterpreted. This is because, when providing audio feedback, I simultaneously scrolled down the script, using it as an oral catalyst. I considered that my audio feedback included more examples than conventional written feedback to illustrate the points I made. This overcomes one perceived weakness of written feedback: that it is detached from the student’s work (McFarlane and Wakeman, 2011).

In terms of my perceived drawbacks of audio feedback, whilst some academics have found audio feedback quicker to produce than written feedback, I found it more time-consuming than traditional means; a mistake in the middle of a recording meant the whole recording had to be redone. I toyed with the idea of keeping mistakes in, thinking they would make me appear more human. However, I decided to restart the recording to appear professional. This desire to craft a performance of professionalism may be related to my positionality as a fairly young female academic with feelings of imposter syndrome.

I work on compressed hours, working longer hours Monday to Thursday. Working in this way, I have always undertaken feedback outside of core hours, in the evening, because of the relative flexibility of providing feedback (in comparison to needing to be present in person at specific times for teaching). I typically have no issue with this. However, providing audio feedback requires a different environment from providing written feedback:

Providing audio feedback in the evenings when my husband is trying to get our two children to sleep, and with two dogs excitedly scampering around is stressful. I take myself off to the bedroom and sit in bed with my dressing gown on, for comfort. Then I suddenly think how horrified students may be if they knew this was the reality of providing audio feedback. I feel like I should be sitting at my desk in a suit! I know they can’t see me when providing audio feedback, but I feel how I dress may be perceived to reflect how seriously I am taking it. (Reflective diary)                     

I work in an open plan office, with only a few private and non-soundproof pods, so providing audio feedback in the workspace is not easy. Discussing her ‘marking life’, Henderson-Brooks (2021:113) notes the need to get the perfect environment to mark in: “so, I get the chocolates (carrots nowadays), sharpen the pens (warm the screen nowadays), and warn my friends and relatives (no change nowadays) – it is marking time”. Related to this, I would always have a cup of tea (and Diet Coke) to hand, along with chocolate and crisps, to ‘treat’ myself, and make the experience more enjoyable.

When providing feedback, I felt pressure not only to make the right kind of comments, but also to deliver them in the ‘correct’ tone, as I reflect below:

I feel a need to be constantly 100% enthusiastic. I am worried if I sound tired students may think I was not concentrating enough marking their assessment; if I sound low mood that I am disappointed with them; or sounding too positive that it does not match their mark. (Reflective diary)

I found it emotionally exhausting having to perform the perfect degree of enthusiasm, which I individually tailored to each student and their mark. This is compounded by the fact that I have an autoimmune disease and associated chronic fatigue, which means I get very tired and have little energy. Consequently, performing my words / voice / tone is particularly onerous, as is sitting for long periods of time when providing feedback. Similarly, Ekinsmyth (2010) says that colleagues in her study felt a need to be careful about the words used in, and the tone of, audio feedback. This was exemplified when a student had done particularly well, or had not passed the assignment.

Emotions are key to the often considered mundane task of providing assignment feedback to students (Henderson-Brooks, 2021).  I have highlighted worries and anxieties when providing audio feedback, related to the emotional labour required in performing the ‘correct’ tone; saying appropriate words; and creating an appropriate environment and atmosphere for delivering audio feedback. I recommend that university colleagues wishing to provide audio feedback to students should:

  1. Publicise to students the purpose of audio feedback so they are more familiar with what to expect and how to get the most out of this mode of feedback. This may alleviate some of the worries of colleagues regarding how to perform for students when providing audio feedback.
  2. Deliver a presentation to colleagues with tips on how to successfully provide audio feedback. This may reduce the worries of colleagues who are unfamiliar with this mode of feedback.
  3. Undertake further research on the embodied, emotional and affective experiences of academics providing audio feedback, to bring to the fore the underexplored voices of assessors, and assist in elevating the status of audio feedback beyond being considered a mere administrative task.

Samantha Wilkinson is a Senior Lecturer in Childhood and Youth Studies at Manchester Metropolitan University. She is a Doctoral College Departmental Lead for PhDs in Education. Prior to this, she was a Lecturer in Human Geography at the same institution. Her research has made contributions regarding the centrality of care, friendship, and intra- and inter-generational relationships to young people’s lives. She is also passionate about using autoethnography to bring to the fore her experiences in academia, which others may be able to relate to. Twitter handle: @samanthawilko



Doctoral progress reviews: managing KPIs or developing researchers?

by Tim Clark

All doctoral students in the UK are expected to navigate periodic, typically annual, progress reviews as part of their studies (QAA, 2020). Depending on the stage, and the individual institutional regulations, these often play a role in determining confirmation of doctoral status and/or continuation of studies. Given that there were just over 100,000 doctoral students registered in the UK in 2021 (HESA, 2022), it could therefore be argued that the progress review is a relatively prominent, and potentially high-stakes, example of higher education assessment. However, despite this potential significance, guidance relating to doctoral progress reviews is fairly sparse, institutional processes and terminology reflect considerable variations in approach, empirical research to inform design is extremely limited (Dowle, 2023), and, perhaps most importantly, the purpose of these reviews is often unclear or contested.

At the heart of this lack of clarity appears to be a tension surrounding the frequent positioning of progress reviews as primarily institutional tools for managing key performance indicators relating to continuation and completion, as opposed to primarily pedagogical tools for supporting individual students’ learning (Smith McGloin, 2021). Interestingly however, there is currently very little research regarding effectiveness or practice in relation to either of these aspects. Yet, there is growing evidence to support an argument that this lack of clarity regarding purpose may frequently represent a key limitation in terms of engagement and value (Smith McGloin, 2021; Sillence, 2023; Dowle, 2023). As Bartlett and Eacersall (2019) highlight, the common question is ‘why do I have to do this?’

In the context of these tensions, as a relatively new doctoral supervisor and examiner with a research interest in doctoral pedagogy, I sought to use a pedagogical lens to explore a small group of doctoral students’ experiences of navigating their progress reviews. My intention for this blog is to share some learning from this work; a more detailed recent paper reporting on the study is also available here (Clark, 2023).

Methods and Approach

This research took place in one post-1992 UK university, where progress assessment consisted of submission of a written report, followed by an oral examination or review (depending on the stage). These progress assessments are undertaken by academic staff with appropriate expertise, who are independent of the supervision team. This was a small-scale study, involving six doctoral students, who were all studying within the humanities or social sciences. Students were interviewed using a semi-structured narrative ‘event-focused’ (Jackman et al, 2022) approach, to generate a rich narrative relating to their experience of navigating through the progress review as a learning event.

In line with the pedagogical focus, the concept of ‘assessment for learning’ was adopted as a theoretical framework (Wiliam, 2011). Narratives were then analysed using an iterative ‘visit and revisit’ (Srivastava and Hopwood, 2009) approach. This involved initially developing short vignettes to consider students’ individual experiences before moving between the research question, data and theoretical framework to consider key themes and ideas.

Findings

The study identified that the students understood their doctoral progress reviews as having significant potential for supporting their learning and development, and that specific aspects of the process were understood to be particularly important. Three key understandings arose from this: firstly, that the oral ‘dialogic’ component of the assessment was seen as most valuable in developing thinking; secondly, that progress reviews offered the potential to reframe and disrupt existing thinking relating to their studies; and finally, that progress reviews have the potential to play an important role in developing a sense of autonomy, permission and motivation.

In terms of design and practice, the value of the dialogic aspect of the assessment was seen as lying in its potential to extend thinking, through the assessor, as a methodological and disciplinary ‘expert’, introducing invitational, coaching-format questions to provoke reflection and provide opportunities to justify and explore research decisions. When this approach was taken, students recalled moments where they were able to make ‘breakthroughs’ in their thinking, or where they later realised that the discussion was significant in shaping their future research decisions. Alongside this, a respectful and supportive approach was viewed as important in enhancing psychological safety and creating a sense of ownership and permission in relation to their work:

“I think having that almost like mentoring, which is like a mini mentoring or mini coaching session, in these examination spots is just really helpful”

“I’m pootling along and it’s going okay and now this bombshell’s just dropped, but it was helpful because, yeah, absolutely it completely shifted it.”

“It’s my study… as long as I can justify academically and back it up. Why I’ve chosen to do what I’ve done then that’s okay.” 

Implications

Clearly this is a small-scale study with a relatively narrow disciplinary focus; however, its value is intended to lie in its potential to provoke consideration of progress reviews as tools for teaching, learning and researcher development, rather than to assert any generalisable understanding for practice.

This consideration may include questions which are relevant for research leaders, supervisors and assessors/examiners, and for doctoral students. Most notably: is there a shared understanding of the purpose of doctoral progress reviews and why we ‘have’ to do them? And how does this purpose inform design, practice and related training within our institutions?

Within this study it was evident that, in this context, the role of dialogic assessment was significant; given the additional resource required to protect or introduce such an approach, this may be an aspect which warrants further exploration and investigation to support decision making. The study also highlighted the perceived value of carefully constructed questions which invite and encourage reflection and learning, as opposed to seeking solely to ‘test’ it.

Dr Timothy Clark is Director of Research and Enterprise for the School of Education at the University of the West of England, Bristol. His research focuses on aspects of doctoral pedagogy and researcher development.



The ongoing saga of REF 2028: why doesn’t teaching count for impact?

by Ian McNay

Surprise, surprise…or not.

The initial decisions on REF 2028 (REF 2028/23/01 from Research England et al), based on the report on FRAP – the Future Research Assessment Programme – contain one surprise and one non-surprise among nearly 40 decisions. To take the second first, it recommends, through its cost analysis report, that any future exercise ‘should maintain continuity with rules and processes from previous exercises’ and ‘issue the REF guidance in a timely fashion’ (para 82). It then introduces significant discontinuities in rules and processes, and anticipates giving final guidance only in winter 2024-5, when four years (more than half) of the assessment period will have passed.

The surprise is, finally, the open recognition of the negative effects of the REF and its predecessors on research culture and staff careers (para 24), identified by respondents to the FRAP consultation about the 2028 exercise. For me, this new humility is a double-edged sword: many of the defects identified have been highlighted in my evidence-based articles (McNay, 2016; McNay, 2022), and, indeed, by the report commissioned by HEFCE (McNay, 1997) on the impact of the 1992 exercise on individual and institutional behaviour:

  • Lack of recognition of a diversity of excellences, including work on local or regional issues, because of the geographical interpretation of national/international excellence (para 37). Such local work involves different criteria of excellence, perhaps recognised in references to partnership and wider impact.
  • The need for outreach beyond the academic community, such as a dual publication strategy – one article in an academic journal matched with one in a professional journal in practical language and close to utility and application of a project’s findings.
  • Deficient arrangements for assessing interdisciplinary work (paras 60 and 61)
  • The need for a different, ‘refreshed’, approach to appointments to assessment panels (para 28)
  • The ‘negative impact on the authenticity and novelty of research, with individuals’ agendas being shaped by perceptions of what is more suitable to the exercise: favouring short-term inputs and impacts at the expense of longer-term projects…staying away from areas perceived to be less likely to perform well’. ‘The REF encourages …focus on ‘exceptional’ impacts and those which are easily measurable, [with] researchers given ‘no safe space to fail’ when it came to impact’.
  • That last negative arises in major part because of the internal management of the exercise, yet the report proposes an even greater corporate approach in future. The evidence-based articles and reports, and the innovative processes and artefacts, that arise from our research will make a reduced contribution to published assessments of the quality of research, though there is encouragement of a wider diversity of research outputs. More emphasis will be placed on institutional and unit ‘culture’ (para 28), so individuals disappear, uncoupled from consideration of culture-based quality. That culture is controlled by management; I spent several years as a Head of School trying to protect and develop further a collegial enterprise culture, which encouraged research and innovative activities in teaching. The senior management wanted a corporate bureaucracy approach with targets and constant monitoring, which work at Exeter has shown leads to lower output, poorer quality and higher costs (Franco-Santos et al, 2014).

At least 20 per cent of the People, Culture and Environment sub-profile for a unit will be based on an assessment of the Institutional Level (IL) culture, and this element will make up 25 per cent of a unit’s overall quality profile, up from 15 per cent in 2021. This proposed culture-based approach will favour Russell Group universities even further – their accumulated capital has led to them outscoring other universities on ‘environment’ in recent exercises, even when the output scores have been the other way round. Elizabeth Gadd, of Loughborough, had a good piece on this issue in Wonkhe on 28 June 2023. The future may see research-intensive universities recruiting strongly in the international market to subsidise research from higher student fees, leaving the rest of us to offer access and quality teaching to UK students on fees not adjusted for inflation. Some recognition of excellent research in an unsupportive environment would be welcome, as would reward for improvement, as operated when the polytechnics and colleges joined the research assessment exercises.

The culture of units will be judged by the panels – a separate panel will assess IL cultures – and will be based on a ‘structured statement’ from the management, assessing itself, plus a questionnaire submission. I have two concerns here: can academic panels competent to peer-assess research also judge the quality and contribution of management; and, given behaviours in the first round of impact assessment (Oancea, 2016), how far can we rely on the integrity of these statements?

The Contribution to Knowledge and Understanding sub-profile will make up 50 per cent of a unit’s quality profile – down from 60 per cent last time and 65 per cent in 2014. At least 10 per cent will be based on the structured statement, so Outputs – the one thing that researchers may have a significant role in – are down to at most 40 per cent of what is meant by overall research quality (the FRAP International Committee recommended 33 per cent). Individuals will not be submitted. HESA data will be used to quantify staff, and the number of research outputs that can be submitted will be an average of 2.5 per FTE. There is no upper limit for an individual, and staff with no outputs can be included, as can those who support research by others, or technicians who publish. Research England (and this is mainly about England; the other three countries may do better and will certainly do things differently) is firm that the HESA numbers will not be used as the volume multiplier for funding (still a main purpose of the REF), though it is not clear where that will come from – Research England is reviewing its approach to strategic institutional research funding. Perhaps staff figures submitted to HESA will have an indicator of individuals’ engagement with research.

Engagement and Impact broadens the previous element of simply impact. Our masters have discovered that early engagement of external partners in research, and a six-month attachment at 0.2 contract level, allows them to be included, and enhances impact. Wow! Who knew? The work that has impact can be of any quality level, to avoid the current quality-level designations stopping local projects from being acknowledged.

The three sub-profiles have fuzzy boundaries and overlap – not just in a linear connection (environment, output, impact) but because, as noted above, engagement, for example, comes from the external environment yet becomes part of the internal culture. It becomes more of a Venn diagram, which allows the adoption of an ‘holistic’ approach to ‘responsible research assessment’. We wait to see what both of those mean in practice.

What is clear in that holistic approach is that research has nothing to do with teaching, and impact on teaching still does not count. That has created an issue for me in the past, since my research feeds (not leads) my teaching and vice versa. I use discovery learning and students’ critical incidents as curriculum organisers, and they produce ‘evidence’ similar to that gathered through more formal interview and observation methods.

An example. I recently led a workshop for a small private HEI on academic governance. There was a newly appointed CEO. I used a model of institutional and departmental cultures which influence decision making and ‘governance’ at different levels. That model, developed to help my teaching, is now regarded by some as a theoretical framework and used as a basis for research. Does it therefore qualify for inclusion in impact? The session asked participants to consider the balance among four cultures – collegial, bureaucratic, corporate and entrepreneurial – relating to the degrees of central control of policy development and of policy delivery (McNay, 1995). It then dealt with some issues more didactically, moving to the concept of the learning organisation, where I distributed a 20-item questionnaire (not yet published, but available on request for you to use) to allow scoring, out of 10 per item, of behaviours relating to capacity to change, innovate and learn, leading to improved quality. Only one person scored more than 100 in total, and across the group the modal score was in the low 70s out of a possible 200, or just over 35%. That gave the new CEO an agenda, with some issues more easily pursued than others and scores indicating levels of concern and priority. So my role moved into consultancy. There will be impact, but is the research base sufficient, was it even research, and does the use of teaching as a research transmission process (Boyer, 1990) disqualify it?

I hope this shows that the report contains a big agenda, with more to come. SRHE members need to consider what it means to them, but also what it means for research into institutions and departments to help define culture and its characteristics. I will not be doing it, but I hope some of you will. We need to continue to provide an evidence base to inform decisions even if it takes up to 20 years for the findings to have an impact.

SRHE itself might say several things in response to the report:

  • welcome the recognition of previous weaknesses, but note that a major one has not been recorded: the impact of the RAE/REF on teaching, when excellent research has gained extra money but excellent teaching has not, leading to an imbalance of effort within the HE system. The research-teaching nexus also needs incorporating into the holistic view of research. Teaching is a major element in the dissemination of research (Boyer, 1990), and so a conduit to impact, and should be recognised as such. That is because the relationship between researcher/teacher and those gaining new knowledge and understanding is more intimate and interactive than a reader of an article experiences. Discovery learning, drawing on learners’ experiences in CPD programmes, can be a source of evidence, enhancing the knowledge and understanding of the researcher to incorporate in further work and research publications.
  • welcome the commitment to more diversity of excellences. In particular, welcome the commitment to recognise local and regionally directed research and its significant impact. The arguments about intimacy and interaction apply here, too. Research in partnership is typical of such work and different criteria are needed to evaluate excellence in this context.
  • welcome the intention to review panel membership to reflect the wider view of research now to be adopted.
  • urge earlier clarification of panel criteria, to avoid at least another 18 months spent trying, without clarity or guidance, to do work that will fit the framework of judgement within which it will be judged.
  • be wary of losing the voice of the researchers in the reduction of emphasis on research and its outputs in favour of presentations on corporate culture.

References

McNay, I (1997) The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: the evidence from a research project. Bristol: HEFCE



Redefining cultures of excellence: A new event exploring models for change in recruiting researchers and setting research agendas

by Rebekah Smith McGloin and Rachel Handforth, Nottingham Trent University

‘Research excellence’ is a ubiquitous concept to which we are mostly habituated in the UK research ecosystem. Yet, at the end of an academic year which saw the publication of the UKRI EDI Strategy, four UKRI council reviews of their investments in PGR, and a House of Commons inquiry on Reproducibility and Research Integrity, and following on from the development of manifestos, concordats, declarations and standards to support Open Research in recent years, it feels timely to engage in some critical reflection on cultures of excellence in research.

The notion of ‘excellence’ has become an increasingly important part of the research ecosystem over the last 20 years (OECD, 2014). The drivers for this are traced to the need to justify the investment of public money in research and the increasing competition for scarce resources (Münch, 2015).  University rankings have further hardwired and amplified judgments about degrees of excellence into our collective consciousness (Hazelkorn, 2015).

Jong, Franssen and Pinfield (2021) highlight, however, that the idea of excellence is a ‘boundary object’ (Star and Griesemer, 1989). That is, it is a nebulous construct which is poorly defined and is used in many different ways. It has nevertheless shaped policy, funding and assessment activities since the turn of the century. Ideas of excellence have been enacted through the Research Excellence Framework and the associated allocation to universities of funding to support research, competitive schemes for grant funding, recruitment to flagship doctoral training partnerships, and individual promotion and reward.

We can trace a number of recent sector-level initiatives that have sought, inter alia, to broaden ideas of research excellence and to challenge systemic and structural inequalities in our research ecosystem. These include the increase of the impact weighting in REF2021 to 25%; trials of systems of partial randomisation in the selection process for some smaller research grants (eg by the British Academy from 2022); the Concordats and Agreements Review work in 2023, to align and increase the influence, capacity and efficiency of activity to support research culture; and the recent Research England investment in projects designed to address the broken pipeline into research by increasing the participation of people from racialised groups in doctoral education.

At the end of June, we are hosting an event at NTU which will focus on redefining cultures of research excellence through the lens of inclusion. The symposium, to be held at our Clifton Campus on Wednesday 28 June, provides an opportunity to re-examine the broad notion of research excellence, in the context of systemic inequalities that have historically locked out certain types of researchers and research agendas and locked in others.

The event focuses on two mutually-reinforcing areas: the possibility of creating more responsive and inclusive research agendas through co-creation between academics and communities; and broadening pathways into research through the inclusive recruitment of PhD and early career researchers. We take the starting position that approaches which focus on advancing equity are critical to achieving excellence in UK research and innovation.

The day will include keynotes from Dr Bernadine Idowu and Professor Kalwant Bhopal, the launch of a new competency-based PGR recruitment framework, based on sector consultation, and a programme of speakers talking about their approaches to diversifying researcher recruitment and engaging the community in setting research agendas. 

NTU will be showcasing two new projects that are designed to challenge old ideas of research excellence and forge new ways of thinking. EDEPI (Equity in Doctoral Education through Partnership and Innovation Programme) is a partnership with Liverpool John Moores and Sheffield Hallam Universities and NHS Trusts in the three cities. The project will explore how working with the NHS can improve access and participation in doctoral education for racially-minoritised groups. Co(l)laboratory is a project with the University of Nottingham, based on the Universities for Nottingham civic agreement with local public-sector organisations. Co(l)laboratory will present early lessons from a community-informed approach to cohort-based doctoral training.

Our event is a great opportunity for universities and other organisations that are, in their own ways, redefining cultures of research excellence to share their approaches, challenges and successes. We invite individuals, project teams and organisations working in these areas to join us at the end of June, in the hope of building a community of practice around inclusive research cultures, within and across the sector.

Dr Rebekah Smith McGloin is Director of the Doctoral School at Nottingham Trent University and is Principal Investigator on the EDEPI and Co(l)laboratory projects. 

Dr Rachel Handforth is Senior Lecturer in Doctoral Education and Civic Engagement at NTU.



Will universities fail the Turing Test?

by Phil Pilkington

The recent anxiety over the development of AI programmes to generate unique text suggests that some disciplines face a crisis of passing the Turing Test: that is, that you cannot distinguish between the unique AI-generated text and that produced by a human agent. Will this be the next stage in the battle over cheating by students? Will it lead to an arms race of countering the AI programmes to foil cheating students? Perhaps it may force some to redesign the curriculum, the learning and the assessment processes.

Defenders of AI programmes for text generation have produced their own euphemistic consumer guides. Jasper is a ‘writing assistant’, Dr Essay ‘gets to work and crafts your entire essay for you’, and Article Forge (get it?) ‘is one of the best essay writers and does the research for you!’. Other AI essay forgers are available. The best known and most popular is probably GPT-3, with a reported million subscribers (see The Atlantic, 6/12/2022). The promoters of the AI bots make clear that they are cheaper and quicker than using essay mills. They may even be less exploitative of those graduates in Nepal or Nottingham or Newark, New Jersey serving the essay mills. There has been handwringing that this is the ‘end of the essay’, but there have also been AI developments in STEM subjects and in art and design.

AI cannot be uninvented. It is out there, it is cheap and it is readily available. It does not necessarily follow that using it is cheating. Mike Sharples on the LSE Blog tried it out for a student assignment on learning styles. He found some simple errors of reference but made the point that GPT-3 text can be used creatively for students’ understanding and exploring of a subject. And Kent University provides guidance on the use of Grammarly, which doesn’t create text ab initio as GPT-3 does, but does ‘write’ text.

Consumer reports on GPT-3 suggest that the output for given assignments is of a 2.2 or even 2.1 standard of essay, albeit with faults in the text generated. These usually take the form of incorrect or inadequate references; some references were to non-existent journals and papers, with dates confused, and so on. However, a student could read through the output text and correct such errors without any cognitive engagement in the subject. Correcting the text would be rather like an AI protocol. The next stage of AI will probably eliminate the most egregious and detectable errors to become the ‘untraceable poison’.

The significant point here is that it is possible to generate essays and assignments without cognitive activity in the generation of the material. This does not necessarily mean a student doesn’t learn something. Reading through the generated text may be part of a learning process, but it is an impoverished form of learning. I would distinguish this as the ‘learning that’ in the generated text rather than the ‘learning how’ of generating the text. This may be the challenge for the post-AI curriculum: knowing that is not as important as knowing how. What do we expect from the learning outcomes? That we know, for example, the War Aims of Imperial Germany in 1914, or that we know how to find that out, or how it relates to other aims and ideological outlooks? AI will provide the material for the former but not the latter.

To say that knowing that (eg the War Aims of Imperial Germany, etc) is a form of surface learning is not to confuse that memory trick with cognitive abilities, or with AI – which has no cognitive output at all. Learning is semantic: it has reference as rule-based meaning. AI text generation is syntactic and has no meaning at all (to the external world) but makes reference only to its own protocols[1]. This is something the Turing Test does not admit, because in that test the failure to distinguish between the human agent and the AI rests on deceiving the observer.

Studies have shown that students have a scale of cheating (as specified by academic conduct rules). An early SRHE Student Experience Seminar explored the students’ acceptance of some forms of cheating and abhorrence of other forms. Examples of ‘lightweight’ and ‘acceptable’ cheating included borrowing a friend’s essay or notes, in contrast to the extreme horror of having someone sit an exam for them (impersonation). The latter was considered not just cheating for personal advantage but also disadvantaging the entire cohort (Ashworth et al, ‘Guilty in Whose Eyes?’). Where will using AI sit in the spectrum of students’ perception of cheating? Where will it sit within the academic regulations?

I will assume that it will be used both for first drafts and for ‘passing off’ as the entirety of the student’s efforts. Should we embrace the existence of AI bots? They could be our friends and help develop the curriculum to be more creative for students and staff. We will expect and assume that students are honest about their work (as individuals and within groups), but there will be pressures of a practical, cultural and psychological nature, on some students more than others, which will encourage the use of the bots. The need to work as a barista to pay the rent, to cope as a carer, to cope with dyslexia (diagnosed or not), to write in a non-native language, or to overcome the disadvantages of a relatively impoverished secondary education – all distinct from the cohort of gilded and fluently entitled youth – will be stressors encouraging the use of the bots.

Will the use of AI be determined by the types of students’ motivation (another subject of an early SRHE Student Experience Seminar)? There will be those wanting to engage in and grasp (to cognitively possess as it were) the concept formations of the discipline (the semantical), with others who simply want to ‘get through the course’ and secure employment (the syntactical).

And what of stressed academics assessing the AI-generated texts? They could resort to AI bots for that task too. In the competitive, neo-liberal, league-table-driven universities of precarity and publish-or-be-redundant monetised research (add your own epithets here), will AI bots be used to meet increasingly demanding performance targets?

The discovery of the use of AI will be accompanied by a combination of outrage and demands for sanctions (much like the attempts to criminalise essay mills and their use). We can expect some institutions to respond that either it doesn’t happen here or it is only a tiny minority. But if it does become the ‘untraceable poison’, how will we know? AI bots are not like essay mills. They may be used as a form of deception, as implied by the Turing Test, but they could also be used as a tool for greater understanding of a discipline. We may need a new form of teaching, learning and assessment.

Phil Pilkington’s former roles include Chair of Middlesex University Students’ Union Board of Trustees, and CEO of Coventry University Students’ Union. He is an Honorary Teaching Fellow of Coventry University and a contributor to WonkHE. He chaired the SRHE Student Experience Network for several years and helped to organise events including the hugely successful 1995 SRHE annual conference on The Student Experience; its associated book of ‘Precedings’ was edited by Suzanne Hazelgrove for SRHE/Open University Press.


[1] John Searle (The Rediscovery of the Mind, 1992) produced an elegant thought experiment to refute the existence of AI qua intelligence, or cognitive activity. He created the experiment, the Chinese Room, originally to face off the Mind-Brain identity theorists. It works as a wonderful example of how AI can be seemingly intelligent without having any cognitive content. The Chinese Room is worth following for its simplicity and elegance, and as a lesson in not taking AI seriously as ‘intelligence’.



Critically analysing EdTech investors’ logic in business discourse

by Javier Mármol Queraltó

This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.

In the context of the current COVID-19 pandemic, the ongoing process of digitalisation of education has become a prominent area for social, financial and, increasingly, (critical) educational research. Higher education, as a pivotal social, economic, technological and educational domain, has seen its activities drastically affected, and universities and the multitude of people involved in them have been forced to adapt to the unfolding crisis. HE researchers agree both on the unpreparedness of countries and institutions faced with the pandemic, and on its potential lasting impact on the educational sector (Goedegebuure and Meek, 2021). Inasmuch as educational technologies (EdTech) have been brought to the fore due to their pivotal role in the enablement and continuation of educational practices across the globe, EdTech companies and investors have also become primary financial beneficiaries of these necessary processes of digitalisation. The extensive use and adoption of EdTech to bridge the gap between HE professionals and students, owing to the application of strict social distancing measures, has been welcomed by investors as an opportunity for EdTech companies to establish themselves as key players within an educational landscape undergoing a process of assetisation (Komljenovic, 2020, 2021). Investors and EdTech are scaffolding new digital markets in HE, reshaping the conceptualisation of universities, HE and the sector itself more generally (Williamson, 2021; Komljenovic and Robertson, 2016). In this brief entry, I focus on EdTech investors’ discourses, owing to the potential of such discourses to shape the future of educational practices broadly speaking.

Within the ‘Universities and Unicorns’ ESRC-funded project, this exploratory research (see full report) aimed to unveil the ideological uses of linguistic, visual and multimodal devices (eg texts and charts) deployed by EdTech investors in a variety of texts that have the potential, owing to their circulation and goals, to shape public understandings of the role of educational technologies in the unfolding crisis. The research deployed a framework anchored in Linguistics, specifically cognitive-based approaches to Critical Discourse Studies (CL-CDS; eg Mármol Queraltó, 2021b). A central assumption in this approach is that language encodes construal: the same event or situation can be linguistically formulated in alternative ways, and these formulations can have diverse cognitive effects on readers (Hart, 2011). From a CL-CDS perspective, then, texts can potentially shape the way that the public think (and subsequently act) about social topics (cf Watters, 2015).

In order to extract the ideologies underlying discourse practices carried out by HE investors, we examined qualitatively a variety of texts disseminated in the public and semi-private domains. We investigated, for example, HolonIQ’s explanatory charts, interviews with professionals and blog entries (eg Charles MacIntyre, Alex Latsis, Jan Lynn-Matern), and global financial reports by IBIS Capital, BrightEye Ventures and EdTechX, among several others. Our main goal was to better understand how EdTech investors operationalised discourse to shape imageries of the future in the relationship between HE institutions, EdTech and governance. In line with CDS approaches, we examined the representations of social actors in context using van Leeuwen’s (2008) framework, and, more in line with CL-CDS, we also operationalised the analysis of metaphorical expressions indexing Conceptual Metaphors, and of Force dynamics. Force dynamics is an essential tool for examining how the tensions between actors and processes within business discourse are constructed (see Oakley, 2005).

Our study yielded important findings for the critical examination of discourse processes within the EdTech-HE-governance triangle of influences. In terms of social actor representation (whose examination also included metaphor), the main findings are:

  • EdTech investors and companies are rendered as opaque, abstract collectives, and are positively represented as ‘enablers’ and ‘disruptors’ of educational processes.
  • Governments are rendered as generic, collective entities, and depicted as necessary funders of processes of digital transformation.
  • Universities or HE institutions are mainly negatively represented as potential ‘blockers’ of processes of digital transformation, and they are depicted as failing their students due to their lack of scalability and flexibility.
  • Individuals within HE institutions are identified as numbers and increasing percentages within unified collectives, with students routinely cast as beneficiaries in ‘consumer’ and ‘user’ roles, while educators are activated as ‘content providers’.
  • Metaphorically, the EdTech sector is conceptualised as a ‘ship’ on a ‘journey’ towards profit, where HE institutions can be ‘obstacles along a path’ and the global pandemic and other push factors are conceptualised as ‘tailwinds’.
  • The EdTech market is conceptualised as a ‘living organism’ that grows and evolves independent of the actors involved in it. The visual representations observed reinforce these patterns and emphasise the growth of the EdTech market in very positive terms.

The formulation of ‘push’ and ‘pull’ factors is also essential to understanding the discursively constructed ‘internal tensions’ within the sector. In order to examine these factors, we operationalised Force-dynamics analysis and metaphor, which allowed us to arrive at the following findings:

  • Push factors identified by investors as driving the EdTech sector include the COVID-19 global pandemic, the digital acceleration being experienced in the sector prior to the pandemic, the increasing number of students requiring access to HE, and investors’ own actions aimed at disrupting the EdTech market.
  • Pull factors encouraging investment in the sector are conceptualised in the shape of financial predictions. The visions put forward by EdTech investors become instrumental in the achievement of those predictions.
  • The representation of the global pandemic is ambivalent: it is rendered both as a negative factor affecting societies and as a positive factor for the EdTech sector. The primary focus is on the positive outcomes of the disruption brought about by the pandemic.
  • Educational platforms are foregrounded in their enabling role and replace HE institutions as the site for educational practice, de-localising educational practices from physical universities.
  • Students and educators are found to be increasingly reframed as ‘users’ and ‘content providers’, respectively. This discursive shift is potentially indicative of the new processes of assetisation of HE.

On the whole, framing business within the ‘journey’ metaphor entails that any entities or processes affecting business are potentially conceptualised as ‘obstacles along the path’, and therefore attributed negative connotations. In our case, those entities (eg governments and HE institutions) or processes (eg lack of funding) that metaphorically ‘stand in the way of business’ are automatically framed in a negative light, potentially affording a negative reception by the audience and therefore legitimising actions designed to remove those ‘obstacles’ (eg ‘disruptions’). EdTech companies and investors are represented very positively as ‘enablers’ of educational practices disrupted by the COVID-19 pandemic, but also as ‘push factors’ in processes of digital acceleration within the ‘speed of action is speed of motion’ metaphor. In the premised, ever-growing EdTech sector, those actors and processes that ‘slow down’ access to profits (or processes providing access to profit) are similarly negatively represented.

The conceptualisation of the COVID-19 global pandemic in this context reflects ‘calculated ambivalence’. This ambivalence was expected, as portraying the pandemic solely as a relatively positive factor for the HE sector would be extremely detrimental to EdTech investors’ activities. Our findings reflect that, while the global pandemic is initially represented as a very negative factor greatly disrupting societies and businesses, those negative impacts tend to be presented in rather vague ways, and on most occasions the result of the disruption brought about by the pandemic is reduced to changes in the modality of education experienced by learners (from in-person to online education). We found no significant mention of the social or personal impacts of the pandemic (eg deaths and scenarios affecting underrepresented social groups); the focus has been mainly on the market and the activities within it. Conversely, while the initial framing of the pandemic is inherently negative, we have seen in several examples above that the pandemic is subtly instrumentalised as a ‘push factor’ which serves to accelerate digital transformation and is hence a positive factor for the EdTech sector. In a global context of restrictions, containment measures and vaccine rollouts, it is especially ideologically relevant to find the pandemic instrumentalised as a ‘catalyst’, or as an important player in an ‘experiment of global proportions’. Framing the pandemic in such ways detaches the audience from its negative connotations, and serves to depict EdTech companies and investors as involved in high-level, complex processes that abstract away the millions of diverse victims of the pandemic. Ultimately, in the ‘journey’ towards profit, the COVID-19 pandemic is a desired push factor, also realised as a ‘tailwind’, which facilitates the desired digital acceleration.

On the whole, our research demonstrated that social actor representation and the distinction between push and pull factors are crucial sites for the analysis of EdTech discourse. EdTech’s primary focus is on the positive outcomes of the disruption brought about by the pandemic. In this context, educational platforms are foregrounded in their enabling role and replace HE institutions as the site for educational practice, de-localising educational practices from physical universities. Subsequently, students and educators are found to be increasingly reframed as ‘users’ and ‘content providers’ respectively. We argue that this subtle discursive shift is potentially indicative of the new processes of assetisation of HE and reflects more broadly a neoliberal logic.

Javier Mármol Queraltó is a PhD candidate in Linguistics at Lancaster University. His current research deals with the multimodal representations of discourses of migration in the British and Spanish online press. He advocates a Cognitive Linguistic Approach to Critical Discourse Studies (CL-CDS), and is working on a methodology that can shed light on how public perceptions of social issues might be influenced by both the multimodal constraints of online newspaper discourse and our shared cognitive capacities. He is also interested in the multimodal and cognitive dimensions of discourses of Brexit outside the UK, news discourses of social unrest, and the marketisation/assetisation processes of HE.



Beware efficiencies! Assetisation as the future defraying of cost savings in the present

by Kean Birch

This blog is based on a presentation to the 2021 SRHE Research Conference, as part of a Symposium on Universities and Unicorns: Building Digital Assets in the Higher Education Industry organised by the project’s principal investigator, Janja Komljenovic (Lancaster). The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged. The project introduces new ways to think about and examine the digitalising of the higher education sector. It investigates new forms of value creation and suggests that value in the sector increasingly lies in the creation of digital assets.

What makes learning more efficient? And what makes teaching more effective? According to EdTech providers and their champions, it is the digital transformation of higher education. The consulting company Gartner – which releases regular EdTech industry reports – defines this transformation as a shift from a ‘collectively-defined’ quality model, in which universities provide their services – theoretically – to anyone, to a model in which quality is personally defined and delivered at scale through MOOCs or other means. In fact, Gartner emphasizes the importance of EdTech providing scalable technologies to ensure ‘cost effective education for the benefit of society’. And this seems to be the concern of many EdTech firms themselves; they aim to provide technologies that make life and work more efficient and effective for higher education institutions, managers, faculty, students, and staff.

But what does this actually mean?

I am part of a project, led by Dr Janja Komljenovic, looking at how value is increasingly being created in the higher education sector through the transformation of ‘things’ into digital and other assets – it could be students’ data, it could be research, it could be lectures, and so on. Part of our concern about these changes is the way they can end up reconfiguring societal, public, or commonly held resources as private assets from which companies can exact an economic rent. An important reason for examining this assetisation process is to analyse exactly how things are turned into private assets as a way to open them up to public scrutiny, and political intervention, should we so desire. While assets are constituted by legal forms, like property rights, and technical changes, like digital rights management, they are also the result of broader narratives about how we should or should not understand the world. Epistemic justifications matter. The World Economic Forum highlights what I mean here. They support the deployment of education technology as a way to “create better systems and data flows”. And this means more efficient and effective learning and teaching. But what do efficiency and effectiveness mean in the case of higher education?

As we have interviewed EdTech providers in our project, we have noticed how they emphasize ‘efficiency’ as one of the key contributions of their technology. Efficiency here seems to be equated with producing an outcome at lower cost, whereas in common-sense terms it is understood as doing something ‘better’ than before. It is important to see how the concept of efficiency is enrolled in the transformation of higher education into a range of assets. Assetisation in higher education depends on the development and promotion of a set of analytics that can identify efficiencies, understood as cost savings that someone or some institution can benefit from. Key to this assetisation process is the characterisation of efficiency as a common-sense goal for universities, managers, faculty, students, staff, and governments; in fact, efficiency can appear to be the very thing that education technologies are turning into an asset. For example, making it cheaper for students to study by enabling them to rent their textbooks rather than having to buy them. Or making it cheaper for universities to pay subscriptions only for those electronic texts – or even parts of those texts – that are actually read and used by their staff and students. But this raises an important question: how do EdTech companies make money if they are simply reducing costs all around?

EdTech companies look to the future for their success. Assets are temporal entities, entailing the creation of a stream of future revenues that can be capitalised in the present, thereby enabling investors to put a value on them that does not depend on their being profitable now, or even generating significant revenues now. Efficiencies in the present often end up as defrayed costs in the future, as those cost savings today compound into increased revenues for someone (eg EdTech) in the future. The future revenue expectations of EdTech companies come from the illusion of efficiency as cost savings at this point in time; for example, students can save on textbooks now but will be induced to subscribe to lifelong learning resources, or their personal data might be exploited in the future in multiple ways, or their reading habits will be used to sell something to universities, or any manner of revenue-generating schemes. Someone is paying in the future.

EdTech companies have to make money somehow, and how they make money is the interesting question. Ideas about the current and future state of higher education and EdTech matter because they provide imaginaries of what is possible and desirable, which we discuss in our report. Claims to efficiency are part of how EdTech companies make money; they are part of the way that such companies construct new asset classes out of universities and their students, faculty, and staff. Interrogating how these supposed efficiencies are monetised is critical for getting a grip on the implications of EdTech for higher education in the longer term. It is essential that we analyse this dynamic now to allow for timely public scrutiny, democratic debate and social intervention.

Kean Birch is Associate Professor at York University, Canada. He is particularly interested in understanding technoscientific capitalism and draws on a range of perspectives from science & technology studies, economic geography, and economic sociology to study it. More specifically, his research focuses on the restructuring and transformation of the economy & financial knowledges, technoscience & technoscientific innovation, and the relationship between markets & natural environments. Currently, he is researching how different things (e.g. knowledge, personality, loyalty, etc.) are turned into ‘assets’ & how economic rents are then captured from those assets – basically, in processes of assetisation and rentiership.



1 Comment

Quality and standards in higher education

by Rob Cuthbert

What are the key issues in HE quality and standards, right now? Maintaining quality and standards with the massive transition to remote learning? Dealing with the consequences of the 2020 A-levels shambles? The student experience, now that most learning for most students is remote and off-campus? Student mental health and engagement with their studies and their peers? One or more of these, surely, ought to be our ‘new normal’ concerns.

But not for the government. Minister Michelle Donelan assured us that quality and standards were being constantly monitored – by other people – as in her letter of 2 November to vice-chancellors:

“We have been clear throughout this pandemic that higher education providers must at all times maintain the quality of their tuition. If more teaching is moved online, providers must continue to comply with registration conditions relating to quality and standards. This means ensuring that courses provide a high-quality academic experience, students are supported and achieve good outcomes, and standards are protected. We have worked with the Office for Students who are regularly reviewing online tuition. We also expect students to continue to be supported and achieve good outcomes, and I would like to reiterate that standards must be maintained.”

So student health and the student experience are for the institutions to worry about, and get right, with the Office for Students watching. And higher education won’t need a bailout, unlike most other sectors of the market economy, because with standards being maintained there’s no reason for students not to enrol and pay fees exactly as usual. Institutional autonomy is vital, especially when it comes to apportioning the blame.

For government, the new normal was just the same as the old normal. It wasn’t difficult to read the signs. Ever since David Willetts, ministers had been complaining about low quality courses in universities. But with each successive minister the narrative became increasingly threadbare. David, now Lord, Willetts at least had a superficially coherent argument: informed student choice and competition between institutions for students would drive up quality. It was never convincing, but at least it had an answer to the question of why and how quality and standards might be connected with competition in the HE market. Promoting competition by lowering barriers to entry for new HE providers was not a conspicuous success: some of the new providers proved to be a big problem for quality. Information, advice and guidance were key for improving student choice, so it seemed that the National Student Survey would play a significant part, along with university rankings and league tables.

As successive ministers took up the charge the eggs were mostly transferred to the Teaching Excellence Framework basket, with TEF being championed by Jo, now Lord, Johnson. TEF began in 2016 and became a statutory requirement in the Higher Education and Research Act 2017, which also required TEF to be subject to an independent review. From the start TEF had been criticised as not actually being about teaching, or excellence, and the review by Dame Shirley Pearce, previously VC at Loughborough, began in 2018. Her review was completed before the end of 2019, but at the time of writing had still not been published.

However, the ‘low quality courses’ narrative has only picked up speed. Admittedly it stuttered a little during the tenure of Chris Skidmore, who was twice briefly the universities minister, before and after Jo Johnson’s equally brief second tenure. The ‘Skidmore test’ suggested that any argument about low quality courses should specify at least one of the culprits, if it was not to be a low quality argument. This was naturally unpopular with the narrative’s protagonists, and Skidmore, having briefly been reinstalled as minister after Jo Johnson’s decision to step down, was replaced by Michelle Donelan, who has remained resolutely on-message even as any actual evidence of low quality receded further from view. She announced in a speech to Universities UK at their September 2020 meeting that the once-praised NSS was now in the firing line: “There is a valid concern from some in the sector that good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace. These concerns have been driven by both the survey’s current structure and its usage in developing sector league tables and rankings.”

UUK decided that they had to do something, so they ‘launched a crackdown’ (if you believe Camilla Turner in The Telegraph on 15 November 2020) by proposing, um, “a new charter aimed at ensuring institutions take a ‘consistent and transparent approach to identifying and improving potentially low value or low quality courses’”. It’s doubtful if even UUK believed that would do the trick, and no-one else gave it much credence. But with the National Student Survey and even university league tables now deemed unreliable, and the TEF in deep freeze, the government urgently needed some policy-based evidence. It was time for this endlessly tricky problem to be dumped in the OfS in-tray. Thus it was that the OfS announced on 17 November 2020 that: “The Office for Students is consulting on its approach to regulating quality and standards in higher education. Since 2018, our focus has been on assessing providers seeking registration and we are considering whether and how we should develop our approach now that most providers are registered. This consultation is taking place at an early stage of policy development and we would like to hear your views on our proposals.”

Instant commentators were unimpressed. Were the OfS proposals on quality and standards good for the sector? Johnny Rich thought not, in his well-argued blog for the Engineering Professors’ Council on 23 November 2020, and David Kernohan provided some illustrative but comprehensive number-crunching in his Wonkhe blog on 30 November 2020: “Really, the courses ministers want to get rid of are the ones that make them cross. There’s no metric that is going to be able to find them – if you want to arbitrarily carve up the higher education sector you can’t use “following the science” as a justification.” Liz Morrish nailed it on her Academic Irregularities blog on 1 December 2020.

In the time-honoured way established by HEFCE, the OfS consultation was structured in a way which made it easy to summarise responses numerically, but much less easy to interpret their significance and their arguments. The core of the approach was a matrix of criteria, most of which all universities would expect to meet, but it included some ‘numerical baselines’, especially on something beyond the universities’ control – graduate progression to professional and managerial jobs. It also included a proposed baseline for drop-out rates.

The danger of this was that it would point the finger at universities which do the most for disadvantaged groups, but here too government and OfS had a cunning plan. Nick Holland, the OfS Competition and Registration Manager, blogged on 2 December 2020 that the OfS would tackle “pockets of low quality higher education provision”, with the statement that “it is not acceptable for providers to use the proportion of students from disadvantaged backgrounds they have as an excuse for poor outcomes.” At a stroke, universities with large proportions of disadvantaged students could either be blamed for high drop-out rates or, if they reduced drop-out rates, be blamed for dropping standards. Lose-lose for the universities concerned, but win-win for the low quality courses narrative.

The outrider to the low quality courses narrative was an attack on the 50% participation rate (in which Skidmore was equally culpable), which seemed hard to reconcile with a ‘levelling up’ narrative, but Michelle Donelan did her best with her speech to NEON, of all audiences, calling for a new approach to social mobility, which seemed to add up to levelling up by keeping more people in FE.

The shape of the baselines became clearer as OfS published Developing an understanding of projected rates of progression from entry to professional employment: methodology and results on 18 December 2020. After proper caveats about the experimental nature of the statistics, here came the indicator (and prospective baseline measure): “To derive the projected entry to professional employment measure presented here, the proportion of students projected to obtain a first degree at their original provider (also referred to as the ‘projected completion rate’) is multiplied by the proportion of Graduate Outcomes respondents in professional employment or any type of further study 15 months after completing their course (also referred to as the ‘professional employment or further study rate’).” This presumably met the government’s expectations by baking in all the non-quality-related advantages of selective universities in one number. Wonkhe’s David Kernohan despaired, on 18 December 2020, as the proposals deviated even further from anything that made sense: “Deep within the heart of the OfS data cube, a new plan is born. Trouble is, it isn’t very good.”
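To see what that indicator amounts to in arithmetic, a purely hypothetical example (illustrative figures, not drawn from OfS data): a provider with a projected completion rate of 80 per cent and a professional employment or further study rate of 70 per cent would receive a projected entry to professional employment measure of

$$ 0.80 \times 0.70 = 0.56, $$

that is, 56 per cent – one number folding together completion and graduate outcomes, with all the advantages that selective universities enjoy on both counts.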

Is it too much to hope that OfS and government might actually look at the academic research on quality and standards in HE? Well, yes, but there is rather a lot of it. Quality in Higher Education is into its 26th year, and of course there is so much more. Even further back, in 1986 the SRHE Annual Conference theme was Standards and criteria in higher education, with an associated book edited by one of the founders of SRHE, Graeme Moodie (York). (This was the ‘Precedings’ – at that time the Society’s practice was to commission an edited volume in advance of the annual conference.) SRHE and the Carnegie Foundation subsequently sponsored a series of Anglo-American seminars on ‘Questions of Quality’. One of the seminar participants was SRHE member Tessa, now Baroness, Blackstone, who would later become the Minister for Further and Higher Education, and one of the visiting speakers for the Princeton seminar was Secretary of State for Education Kenneth Baker. At that time the Council for National Academic Awards was still functioning as the validating agency, assuring quality, for about half of the HE sector, with staff including such SRHE notables as Ron Barnett, John Brennan and Heather Eggins. When it was founded SRHE aimed to bring research and policy together; they have now drifted further apart. Less attention to peer review, but more ministers becoming peers.

Rob Cuthbert is Emeritus Professor of Higher Education Management, University of the West of England and Joint Managing Partner, Practical Academics


Leave a comment

From corona crisis management to ‘new normal’ – a Danish university educational perspective

by Helle Mathiasen

This is one of a series of position statements developed following a conference on ‘Building the Post-Pandemic University’, organised on 15 September 2020 by SRHE member Mark Carrigan (Cambridge) and colleagues. The position statements are being posted as blogs by SRHE but can also be found on The Post-Pandemic University’s excellent and ever-expanding website. The statement by Helle Mathiasen can be found here.

This year, in many ways, we have all been enriched by the transition from campus-based teaching to online teaching. However, it has also been a challenge for most educators and students, as explained in The evaluation of online teaching in Spring 2020 (published in Danish on 18 September 2020, to be translated into English this Autumn). Much traditional teaching had to be changed quickly, which often resulted in a digitalisation of regular campus-based teaching without regard to the changed conditions of communication.

This type of teaching was called emergency teaching, which is important to keep in mind when planning and implementing teaching in the coming semesters. Going forward, the path from emergency teaching to a ‘new normal’ needs to be critically and reflexively explored. Educators rarely had time to reflect critically on the didactic choices they made in haste. The teaching had to be provided immediately, but now we need to take time to reflect on our decisions, as Autumn teaching has already been organised and is currently being implemented. It may still be in a ‘state of crisis’, but it is important to recognise that the solutions planned and implemented this Spring should not necessarily define the ‘new normal’. Surveys of students’ experiences of ‘emergency teaching’ point to serious consequences: low motivation, great frustration and an explicit need for more interaction.

Management is aware of the challenges posed by the digital transformation from technical, organizational, educational and strategic perspectives. 

Using a communication-theoretical approach, we can open up an important discussion of the communicative possibilities of being physically present (f2f) compared with net-mediated communication in its broadest sense. Face-to-face settings offer, so to speak, more options for communicative connectivity than net-mediated communication, whether synchronous or asynchronous. Teaching is, in this theoretical frame, defined as a specific form of communication whose underlying intention is to effect change in the students who direct their attention toward the communication. It is this engineered context which makes possible the activation and continuation of learning processes, and hence knowledge construction.

Together with the communicative perspective on teaching, we can discuss the concept of ‘good teaching’. By good teaching we mean teaching, within the presented theoretical framework, in which students and educators have the opportunity to communicate – both ways, not just one-way communication. It is thus about focusing on the social dimension through communication (dialogue, plenary and group discussions). It is about providing the opportunity for social sparring and reflection, and the opportunity to ‘check’ one’s professionalism with fellow students and educators. It is about being able to immerse oneself professionally and to participate actively in the social community. Being with others on campus is part of students’ identity building and their development into professionals.

Increased online learning risks instrumentalising teaching, reducing it to a more or less rigid template in which time and activities are fixed and spontaneous discussion is constrained. This may mean that independence, autonomy, co-determination skills and academic bildung are given more difficult conditions in which to develop. We must also pay close attention to the fact that online teaching is often better suited to factual knowledge and the lowest taxonomic levels; the higher levels of analysis, synthesis and creativity, as well as deeper professional discussions, are more difficult to make work online.

We need to think about what teaching quality is, and to use the knowledge and research in the field, so that we can offer students a variety of teaching and learning environments that give them the best conditions to learn what the curricula require. That may include online teaching, but in a critically reflective format, and not with an approach in which emergency teaching becomes the ‘new normal’. Access to digital tools and platforms is important, but by no means enough. Attention to the didactic dimension is crucial when redesigning courses for online environments and for mixed f2f and online teaching environments. This requires renewed, concrete attention to supporting educators’ didactic development. It also requires support for students and educators in developing opportunities for unfolding communication and knowledge sharing.

This is an invitation to discuss the communicative and educational perspectives on the ongoing digital transformation.

Helle Mathiasen is Professor at the University of Copenhagen, Department of Science Education, Denmark. Her primary research interest currently lies in the field of communication forums: internet- and computer-mediated forums, various forms of face-to-face communication forums, and hybrid forms. This field is connected with the concepts of learning, teaching, pedagogy and didactics. The current focus of her research is on the organisation of teaching, communication environments, and learning perspectives.


Leave a comment

The Impact of TEF

by George Brown

A report on the SRHE seminar The impact of the TEF on our understanding, recording and measurement of teaching excellence: implications for policy and practice

This seminar demonstrated that the neo-liberal policy and metrics of TEF (Teaching Excellence Framework) were not consonant with excellent teaching as usually understood.

Michael Tomlinson’s presentation was packed with analyses of the underlying policies of TEF. Tanya Lubicz-Nawrocka considered the theme of students’ perceptions of excellent teaching. Her research demonstrated clearly that students’ views of excellent teaching were very different from those of TEF. Stephen Jones provided a vibrant analysis of public discourses. He pointed to the pre-TEF attacks on universities and staff by major conservative politicians and their supporters; these attacks were intended to convince students and their parents that Government action was needed. TEF was born, and with it came US-style neo-liberalism and its consequences. His final slide suggested ways of combating TEF, including promoting the broad purposes of HE teaching. Sal Jarvis succinctly summarised the seminar and took up the theme of purposes. Personal development and civic good were important purposes but were omitted from the TEF framework and metrics.

Like all good seminars, this one prompted memories, thoughts and questions during and after the event. A few of mine are listed below; others may wish to add to them.

None of the research evidence supports the policies and metrics of TEF (eg Gibbs, 2017). The indictment of TEF by the Royal Statistical Society is still relevant (RSS, 2018).

The chairman of the TEF panel is reported to have said that TEF was not supposed to be a “direct measure of teaching” but rather “a measure based on some [my italics] of the outcomes of teaching”.

On the continuum of neo-liberalism and collegiality, TEF is very close to the pole of neo-liberalism, whereas student perspectives are nearer the pole of collegiality, which embraces collaboration between staff and between staff and students. Collaboration will advance excellence in teaching: TEF will not. Collegiality has been shown to increase morale and reinforce academic values in staff and students (Bolden et al, 2012).

Analyses of the underlying values of a metric are important because values shape policy, strategies and metrics. ‘Big data’ analysts need to consider ways of incorporating qualitative data.

With regard to TEF policy and its metrics, the cautionary note attributed to Einstein is apposite: “Not everything that counts can be counted and not everything that is counted counts.”

SRHE member George Brown was Head of an Education Department in a College of Education and Senior Lecturer in Social Psychology of Education at the University of Ulster before becoming Professor of Higher Education at the University of Nottingham. His 250 articles, reports and texts are mostly in Higher and Medical Education, with other work in primary and secondary education. He was senior author of Effective Teaching in Higher Education and Assessing Student Learning in Higher Education and co-founder of the British Educational Research Journal, to which he was an early contributor and reviewer. He was the National Co-ordinator of Academic Staff Development for the Committee of Vice Chancellors and Principals (now Universities UK) and has served on SRHE Council.

References

Bolden, R et al (2012) Academic Leadership: changing conceptions, identities and experiences in UK higher education, London: Leadership Foundation

Gibbs, G (2017) ‘Evidence does not support the rationale of the TEF’, Compass: Journal of Learning and Teaching, 10(2)

Royal Statistical Society (2018) Response to the teaching excellence and student outcomes framework, subject-level consultation