SRHE Blog

The Society for Research into Higher Education

Institutional constraints to higher education datafication: an English case study

by Rachel Brooks

‘Intractable’ datafication?

Over recent years, both policymakers and university leaders have extolled the virtues of moving to a more metricised higher education sector: statistics about student satisfaction with their degree programme are held to improve the decision-making processes of prospective students, while data analytics are purported to help the shift to more personalised learning, for example. Moreover, academic studies have contended that datafication has become an ‘intractable’ part of higher education institutions (HEIs) across the world.

Nevertheless, our research into data use with respect to widening participation in undergraduate ‘sandwich’ courses (where students spend a year on a work placement, typically during the third year of a four-year degree programme) – conducted in ten English HEIs and funded by TASO – indicates that, despite the strong claims about the advantages of making more and better use of data, significant constraints operate in this area of activity at least, limiting the advantages that can accrue through datafication.

Little evidence of widespread data use

Our interviewees were those responsible for sandwich course provision in their HEI. While most thought that data could offer useful insights into the effectiveness of their area of activity, there was little evidence of ‘intractable’ data use. This was for three main reasons. First, in some cases, interviewees explained that no relevant data were collected – in relation to access to sandwich courses and/or the outcomes of such courses. Second, in some HEIs, relevant data were collected but not analysed. Such evidence tends to support the contention that ‘data lakes’ are emerging, as HEIs collect more and more data that often remain untapped. Third, in other cases, appropriate data were collected and analysed, but in a very limited manner. For example, one interviewee explained how data were collected and analysed in relation to the participation of students from under-represented ethnic groups, but not with respect to any other widening participation categories. This limited form of datafication, in which only some social characteristics were datafied, was not, therefore, able to inform any action with respect to the participation of widening participation students generally. Indeed, across all ten HEIs, there was only one example where data were used in a systematic fashion to help analyse who was accessing sandwich courses within the institution, and the extent to which they were representative of the wider student population.

Constraints on data use

Lack of institutional capacity

In explaining this absence of data use, the most commonly identified constraint was the lack of institutional capacity to collect and/or analyse appropriate data. For example, one interviewee commented that they did not have a very good data system for placements – ‘we are still quite Excel-based’. Excel spreadsheets were viewed as limited because they could not be easily shared or updated, and data were relatively hard to manipulate. This, according to the interviewee, made collection of appropriate data laborious, and systematic analysis of the data difficult. Interviewees also pointed to the limited time staff had available to analyse data that the institution had collected.

Prioritisation of ‘externally-facing’ data

Several interviewees described how ‘externally-facing data’ – i.e. that required by regulatory bodies and/or that fed into national and international league tables – was commonly prioritised, leaving little time for information officers to devote to generating and/or analysing data for internal purposes. One interviewee, for example, was unclear about what data, if any, were collected about equity gaps but believed that they were generally only pulled together for high-level reports ‘such as for the TEF’.

Institutional cultures

A further barrier to using data to analyse access to and outcomes of sandwich courses was perceived to be the wider culture of the institution, including its attitude to risk. An interviewee explained that the data collected in their institution was limited to two main variables – subject of study and fee status (home or international) – because of ‘ongoing cautiousness at the university about how some of that data is used and how it’s shared with different teams’.

In addition, many participants outlined the struggles they had faced in gaining access to relevant data, and in influencing decisions about what should be collected and what analyses should be run. Several spoke of having to ‘request’ particular analyses to be run (which could be turned down), leading to a fairly ad hoc and inefficient way of proceeding, and illustrating the relative lack of agency accorded to staff – typically occupying mid-level organisational roles – in accessing and manipulating data.

Reflections

Examining a discrete set of activities within the UK higher education sector – those relating to sandwich courses – provides a useful lens on quotidian practices with respect to the availability and use of data. Despite the strong emphasis on data by government bodies and HEI senior management teams, as well as the claims made about the ‘intractability’ of HEI data use in the academic literature, our research suggests that datafication is perhaps not as widespread as some have claimed. Indeed, it indicates that some areas of activity – even those linked to high profile political and institutional priorities (in this case, employability and widening participation) – have remained largely untouched by ‘intractable’ datafication, with relevant data either not being collected or, where it is collected, not being made available to staff working in pertinent areas.

As a consequence, the extent to which students from widening participation backgrounds were accessing sandwich courses – and then succeeding on them – relative to their peers typically remained invisible. While the majority of our interviewees were able to speculate on the extent of any under-representation and/or poor experience, this was typically on the basis of anecdotal evidence and their own ‘sense’ of how inequalities were played out in this area. Although reflecting on professional experience is obviously important, many inequalities may not be visible to staff (for example, if a student chooses not to talk about their neurodiversity or first-in-family status), even if they have regular contact with those eligible to take a sandwich course. Moreover, given the status often accorded to quantitative data within the senior management teams of universities, the lack of any statistical reporting about inequalities by social characteristic, as they pertain to sandwich courses, makes it highly likely that such issues will struggle to gain the attention of senior leaders. The barriers to the effective use of metrics highlighted above may thus have a direct impact on HEIs’ capacity to recognise and address inequalities.  

The research on which this blog is based was carried out with Jill Timms (University of Surrey) and is discussed in more detail in the article ‘Institutional constraints to higher education datafication: an English case study’, published in Higher Education.

Rachel Brooks is Professor of Higher Education at the University of Oxford and current President of the British Sociological Association. She has conducted a wide range of research on the sociology of higher education; her most recent book is Constructing the Higher Education Student: perspectives from across Europe, published (open access) with Policy Press.


But what do the numbers say? How the movement towards datafication might change English higher education

by Peter Wolstencroft, Elizabeth Whitfield and Track Dinning

“The simple truth is that the average student leaves university with £45,800 of debt and if they have nothing to show for it then we have failed them” (Hansard, 2021). The speaker of these words was the then Minister for Higher and Further Education, Michelle Donelan, and the sentiment underpins many of the current mechanisms used for assessing quality in English HE. The publication by the Office for Students (OfS) of their new expectations for student outcomes (OfS, 2022a) has, once again, triggered a debate about how we measure the quality of a university education and its impact on the students that study in English universities. The stakes have never been higher, as the OfS states: ‘Universities and colleges that perform below these thresholds could face investigation to allow the OfS to understand the reasons for their performance. If, following investigation, performance is not adequately explained by a provider’s context, the OfS has the power to intervene and impose sanctions for a breach of its conditions of registration.’ (OfS, 2022a)

Since the Browne Report (2010) normalised the payment by students of increased fees for undergraduate programmes, universities have faced a balancing act between two separate imperatives that have shaped the relationship between students and universities. These are, first, the educational imperative, which stresses the primacy of the learning experience and the student’s journey through their studies, and, second, the economic imperative, which requires organisations to ensure that their finances allow them to continue to operate. It can be argued that the growing dominance of the latter is rooted in the increased measurement of the sector and how this is used to define the quality of provision at any given university. In practical terms, what this means is that, for English universities, adherence to benchmarking figures and ensuring that targets are met may be a key driver in decision making at all levels of the organisation.

Whilst the datafication of education (Stevenson, 2017) is not a new concept, the new OfS guidelines are likely to exacerbate this approach; indeed, they can be seen as a formalisation of an ongoing process. A key consequence of this shift has been a reimagining of the relationship between the student and the institution. Originally characterised by some in the post-Browne era as one akin to a customer purchasing a product, it evolved into the student being viewed as a consumer who uses a service but who is also an active participant in the learning process, and from there to a co-creator of the process (Tomlinson, 2017). Whilst this apparent balancing of influence has generally been viewed as having a positive effect on the student experience, the shift exemplified by the new regulations means that the performance of students is increasingly measured in quantitative terms. The danger with this approach is that there is the potential for universities to focus on the quantitative measure of success alone, which would neglect all of the wider, but not measured, improvements in the student journey that have occurred since the Browne Report, such as the increasing amount of employer engagement and the amplification of the student voice.

Concerns increased with the publication of the latest expectations from the OfS and their focus on quantitative measures. Whilst other quality mechanisms such as the Teaching Excellence Framework (TEF) and Ofsted inspections rely on a mixture of quantitative measurement and a supporting narrative, the new guidelines focus largely on data and the outputs for each student. Targets are set for continuation and completion rates as well as graduate outcomes, and these targets apply to each of a phalanx of different student subgroups. Many concerns within the sector relate to the vague nature of the wording regarding non-achievement of the targets. Despite the assertion that “(the) OfS only makes a judgement that a provider is not compliant after considering the context in which it is operating” (OfS, 2022a), there is currently no guidance as to how this consideration will be achieved.

English universities have greeted the new guidance with some concern: for example, the latest intervention focuses partially on the salaries students receive fifteen months after completing their programme of study (known as ‘graduate outcomes’). This is controversial as it is a measure that attempts to compare very disparate programmes. The Complete University Guide (2022) quotes average salaries after fifteen months’ employment for accounting and finance as £25,000; optometry is as low as £17,000 and music degrees average £21,000. In contrast, medicine graduates earn an average of £30,000. Aside from the issue of disparities in earnings between subjects, there is also a lack of accounting for regional disparities, with Statista (2021) reporting the median annual earnings for full-time employees in the North West as 30% lower than salaries in London. This inequality and its impact on graduate outcomes has already been cited in the decision by some universities to stop offering programmes despite their educational benefit (Weale, 2022).

The backdrop to the revised guidelines (commonly known as ‘B3’ after the subsection of the document it occupies) has been an ongoing discussion about the desirable outcomes of degree level education. The discussion has increasingly focused on how to root out supposed poor practice. If students invest significant amounts of money in their education, then many assert, with the HE Minister, that they should get ‘value for money’ and a positive outcome when their studies are complete. Defining these terms drives much of the current discussion. What constitutes a ‘low quality degree’ has been one facet of this discussion, but more pertinent is the achievement gap that exists between differing groups of students and differing programmes of study. Whilst this gap has long been known, increased spending has brought greater scrutiny of groups and programmes perceived to be underachieving.

The revised guidelines focus on definitions; the changes might seem relatively minor when looked at in isolation, but when grouped with other changes in the sector they might have profound implications for university procedures. Universities previously had to ensure that benchmark figures for retention and achievement were met for whole cohorts, but the sector will now subdivide student groups using criteria such as gender, sex or background (OfS, 2022b) and explore the performance of each group. This is likely to change the approach for many universities, as often these subgroups are likely to be small in number, which means that one student’s failure to complete their studies is likely to have a proportionally greater impact on the university as a whole. There is therefore a considerable danger that universities which serve large numbers of disadvantaged students will be less inclined to take risks in admission: this will narrow, rather than widen, access and participation.

On the surface the definitions appear straightforward, with universities needing to make sure that a set percentage of their students continue with their studies, complete their studies and are in what is deemed ‘graduate employment’ within 15 months of graduation. However, these figures may lead to a significant change in the way universities manage data and, indeed, deal with students.

Under pressure to meet set benchmarks, universities are likely to focus even more attention on the definition of a student within HE. There is always a set period of time between a student registering and when they are included in official figures. This allows for ‘buyer’s remorse’ when students withdraw early on, and it also allows people to transfer between programmes if they decide that their initial choice was not the one they want to pursue. Students who withdraw from a programme before the cut-off date are not taken into account in the final figures used when calculating retention rates. This focus on benchmarks might affect English HE in the same way as similar measures did when introduced to the further education (FE) sector. Within FE, students were not counted in final figures until they had been enrolled for 42 days. This meant that many organisations completed what became known as a ‘data cleanse’ before the cut-off date, a process where students who were deemed to be at risk of failing their programme were removed from their studies, or moved to a different award.

The danger when introducing new metrics is always that there will be unintended consequences. Whilst trying to measure the quality of a programme of study is clearly worthwhile, the primacy of the data could cause problems. The need to ensure that programmes of study are seen as high quality means that ignoring metrics is often foolhardy and can have detrimental effects on the whole university. Instead, careful analysis is likely, in order to ensure that programmes score as highly as possible in each category. This could lead to a range of ethical dilemmas regarding the amount of support students receive if they are in danger of failing in their studies.

Looking further down the timeline, the shift towards the datafication of the sector is likely to affect the validation of new programmes of study. Whilst employability has been a strand within many programmes for some years, potential graduate outcomes are likely to be viewed as critical to the acceptance of a programme of study, marking a significant shift away from a purely educational analysis of proposed programmes. The challenge is to make sure that programmes of study continue to be challenging and rewarding for students but that they also meet targets, close attainment gaps and ensure positive learning outcomes for graduates.

The new guidelines are another stepping stone in the balancing act between educational and economic imperatives. They set clear targets, but it is the unclear consequences of not meeting these targets that will cause universities most concern. Universities with large numbers of disadvantaged students might need fundamentally to rethink their approach to their student population. If there is no allowance for the incoming student population when measuring outputs, universities will need to review the level of support they provide and face the ethical dilemmas involved. Without greater clarity from the OfS, failure to meet targets may mean that more programmes in subject areas with historically low graduate starting salaries will close, data will increasingly become the key determinant of educational decision making and the relationship between students and universities will once again be redefined.

Dr Peter Wolstencroft is a Deputy Director at Liverpool Business School, part of Liverpool John Moores University. He has held a variety of roles in the sector and together with his co-authors is dedicated to enhancing the student experience for all students and in particular for those for whom higher education is a new experience. He is the author of numerous articles on education and co-authored the bestselling textbook ‘The Trainee Teacher’s Handbook: A companion for initial teacher training’.

Dr Elizabeth Whitfield is an Assistant Academic Registrar: Student Experience at Liverpool John Moores University. She is also a senior fellow of the Higher Education Academy, and a member of the programme team for the postgraduate Certificate in HE at LJMU. Current project and leadership roles focus on the student experience, in particular student communications and digital support schemes.

Dr Track Dinning is a Deputy Director at Liverpool Business School, part of Liverpool John Moores University, a Senior Fellow of Higher Education Academy and a Certified Management and Business Educator.  Her research focuses on Entrepreneurial Education and she utilises her research to develop and enhance the curriculum in the field of employability and enterprise. She has a shared vision with her co writers to ensure a high quality student experience for every student.

References

Department for Business, Innovation and Skills (2010) Securing a Sustainable Future for Higher Education (‘The Browne Report’). Available at: (accessed 1st October 2022)

Hansard (2021) University Tuition Fees Debate, Volume 702

Office for Students (2022a) New Expectations for Student Outcomes. Available at: ‘OfS sets new expectations for student outcomes’ – Office for Students (accessed 1st October 2022)

Office for Students (2022b) Associations Between Characteristics of Students. Available at: https://www.officeforstudents.org.uk/data-and-analysis/associations-between-characteristics-of-students/ (accessed 18th October 2022)

Statista (2021) Median annual earnings for full-time employees in the United Kingdom in 2021, by region. Available at: https://www.statista.com/statistics/416139/full-time-annual-salary-in-the-uk-by-region/ (accessed 8th October 2022)

Stevenson, H (2017) ‘The “Datafication” of Teaching: Can Teachers Speak Back to the Numbers?’ Peabody Journal of Education 92:4, 537-557 DOI:10.1080/0161956X.2017.1349492

The Complete University Guide (2022) What do graduates do and earn? Available at: https://www.thecompleteuniversityguide.co.uk/student-advice/careers/what-do-graduates-do-and-earn (accessed 8th October 2022)

Tomlinson, M (2017) ‘Student perceptions of themselves as ‘consumers’ of higher education’ British Journal of Sociology of Education 38:4, 450-467 DOI:10.1080/01425692.2015.1113856

Weale, S (2022) ‘Philip Pullman leads outcry after Sheffield Hallam withdraws English lit degree’, The Guardian. Available at: https://www.theguardian.com/education/2022/jun/27/sheffield-hallam-university-suspends-low-value-english-literature-degree (accessed 18th October 2022)