
Ian McNay writes…

by Ian McNay

My main concern in this post is academic freedom, free speech and surveillance. I write as one who worked on Open University courses in the 1980s (under a previous Conservative administration) which were investigated for alleged Marxist bias.

Bahram Bekhradnia, in a recent HEPI blog, has identified the small number of complaints about free speech on campus which have provoked a government response, and the ideological base of those making them. The same was true in my experience – single-figure numbers of complaints about courses with, in some cases, 5,000 students a year and many thousands more ‘drop-in’ viewers and listeners for OU/BBC programmes. The allegations were found to be unjustified, but led to significantly increased levels of internal monitoring and accountability. BBC staff were very scared of possible government sanctions.

For one radio programme, Rosemary Deem, an SRHE notable, was barred from contributing because she was, as I then was, a member of the Labour Party. Two was too many. I was forced to accept a distasteful right-winger, who insisted that his contribution – denying Tory cuts to education budgets – could not be criticised, questioned, commented upon or edited. The new rules said that all elements of a course had to be internally balanced – not one programme putting one point of view and a second putting another. Ditto for course units. The monitoring group said the programme was biased, lacked balance, and should not be broadcast. I said that students were intelligent enough to recognise his ‘pedigree’, and it went out.

In 1988, another programme, on the Great Education Reform Bill, was broadcast at 2am. We arrived later that morning to a phone message from the DES demanding a right of reply. The programme had featured a critique by John Tomlinson, then Chief Education Officer of Cheshire – hardly a hotbed of revolution. We pointed out that DES staff had been sitting in our ‘classroom’, and that comments could be made to their tutor, discussed with fellow students in self-help groups, and used, if evidenced, in assessments.

My concern is that, as someone who writes on policy and its impact, my work can be seen as ‘disruptive’ (a basic element of much research and education) and as ‘causing discomfort and inconvenience’ to some people – mainly policy makers. Those terms are from the current draft Police, Crime, Sentencing and Courts Bill, which aims to limit public demonstrations of dissent. Given trends in other countries, and government resistance to a more balanced view of history, I wonder how long it will be before there is more overt intrusion – by OfS? – into controlling the curriculum and suppressing challenging but legitimate views. In the OU, Marxist critique disappeared for years, as self-censorship operated to avoid recurrent problems of justification. It could happen again.

That goes alongside recent developments in Microsoft surveillance which are intrusive and irritating. The university has just had an ‘upgrade’. In my experience, such upgrades, like restructuring, rarely improve things, and often do the opposite. I now get daily emails from Cortana, Microsoft’s virtual assistant, saying things like ‘Two days ago you were asked a question by X. Have you replied?’ The answer is that ‘if you are reading my emails, you will know the answer to that question’. Undeterred, this AI avatar offers me advice on how to organise my coming week, blithely unaware that I have only a 0.2 contract. When it says I have 85% of my time ‘spare’, it is implying that 15% of a full-time week was observable – so of my 20% load, only 5% that week escaped observation (the arithmetic is sketched below). Its daily plan for me is to spend two hours in the morning on ‘focus time…to get your work done’.

The rest is spent not getting my work done, but on email and chats, a break at lunchtime, and two hours to learn a new skill and develop my career. Wow! Do those in charge of the balanced academic workload know about this prescription? It also believes that all emails are ‘developing your network … you added 23 new members to your networks last week’. A computer’s notion of a network must be much less demanding than my criteria for the term. Its autonomous, unaccountable and unexplained treatment of my emails includes frequently deleting one when I click to open it, and designating as ‘junk’ the PDF journal articles relevant to my work sent by Academia.edu. I then have to spend time digging around to find both of these. It also merges emails into a stream, so that finding one of them requires remembering the last one in the stream – often an automatic reply. More time spent digging around.
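For anyone who enjoys checking such claims, here is the ‘85% spare’ arithmetic spelled out – a minimal sketch, assuming (the emails do not say) that Cortana benchmarks against a full-time week; the names and numbers are mine, not Microsoft’s:

```python
# A minimal sketch of the '85% spare' arithmetic above. It assumes Cortana
# benchmarks against a full-time week; names and numbers are illustrative.
FULL_TIME_WEEK = 1.0      # a full-time working week, as a fraction
CONTRACT_FTE = 0.2        # my 0.2 contract
REPORTED_SPARE = 0.85     # Cortana: '85% of your time is spare'

observed = FULL_TIME_WEEK - REPORTED_SPARE   # 0.15 of a full week was visible
unobserved = CONTRACT_FTE - observed         # 0.05 of my load escaped observation

print(f"Visible to the system: {observed:.0%} of a full-time week")
print(f"Of a {CONTRACT_FTE:.0%} load, unobserved: {unobserved:.0%}")
```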

Then there are the constant disruptive phone calls to verify my sign-in. The automated voice advises me that ‘if you have not initiated this verification, you should press such-and-such a key’. I did that, twice – once when two such calls came within 50 seconds of one another, which I thought suspicious. How simple-minded I was! The ‘solution’ was to bar me from access until the systems administrators had sorted things out, which meant a full day or more in each case. The two most recent calls even came when I had not moved to laptop-based work. I now no longer log out, so I do not sign in, but leave the machine on all day, every day – which may not be good ecologically, but it helps my mental health and state of mind.

I accept the need for computer security, even with university-generated messages warning me about emails from sources outside the university – OfS, AdvanceHE, or HEPI and Research Professional via a university subscription – and asking if I trust them. Up to a point, Lord Copper. But balance is the key. I knew there was surveillance – in a previous institution, a NATFHE email was held up to allow management to get its reply in simultaneously with the original being sent. This, though, is blatant and overt. I suppose that is better than it being hidden, but it is neither efficient nor effective. Am I the only one experiencing this – one human being balancing Marvin, the Paranoid Android – or do others have similar experiences? If the latter, what are we going to do about it? It has implications for research in terms of the confidentiality of email interviews, for example.

And, finally, on a lighter note … my local optician has a poster in the window advertising ‘Myopia Management’. That sounds like a module to include in the leadership programmes that some of us run.

SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich.



The post-pandemic panopticon? Critical questions for facial recognition technology in higher education

by Stefanie Demetriades, Jillian Kwong, Ali Rachel Pearl, Noy Thrupkaew, Colin Maclay and Jef Pearlman

This is one of a series of position statements developed following a conference on ‘Building the Post-Pandemic University’, organised on 15 September 2020 by SRHE member Mark Carrigan (Cambridge) and colleagues. The position statements are being posted as blogs by SRHE but can also be found on The Post-Pandemic University’s excellent and ever-expanding website. The authors’ statement can be found here.

The COVID-19 pandemic is vastly accelerating technology adoption in higher education as universities scramble to adapt to the sudden upheaval of academic life. With the tidal shift to online learning and increased pressure to control physical movement on campus, the use of surveillance technology for campus security and classroom monitoring may be particularly appealing. Surveillance methods of varying technological sophistication have long been implemented and normalized in educational settings, from hallway monitors and student IDs to CCTV and automated attendance and exam proctoring. Facial recognition (FR) technology, however, represents a fundamentally new frontier with regard to its capacity for mass surveillance. These increasingly sophisticated tools offer a veneer of control and efficiency in their promise to pluck individuals out of a mass of data and assign categories of identity, behaviour, and risk.

As these systems continue to expand rapidly into new realms and unanticipated applications, however, critical questions of impact, risk, security, and efficacy are often left under-examined by administrators and key decision-makers, even as there is growing pressure from activists, lawmakers, and corporations to evaluate and regulate FR. There are well-documented concerns related to accuracy, efficacy, and privacy; most alarmingly, these FR “solutions” largely come at the expense of historically marginalized members of the campus community, particularly Black and Indigenous people and other people of colour, who already disproportionately bear the greatest risks and burdens of surveillance.

Drawing on a range of scholarship, journalism, technology and policy sources, this project identifies known issues and key categories of concern related to the use and impact of FR and its adjacencies. We offer an overview of their contours with the aim of supporting university administrators and other decision-makers to better understand the potential implications for their communities and conduct a robust exploration of the associated policy and technology considerations before them now and over time.

Extending beyond their educational mandate, universities wield significant power to influence our collective future through the students they prepare, the insights they generate, and the way they behave. In light of this unique dual role of academic and civic leadership, we must begin by recognizing the realities of deeply rooted systemic racism and injustice that are exacerbated by surveillance technologies like FR. Even in seemingly straightforward interventions using stable technologies, adoption interacts with existing systems, policies, communities, cultures and more to generate complexities and unintended consequences.

In the case of facial recognition, algorithmic bias is well documented, with significant consequences for racial injustice in particular. Facial recognition as a tool perpetuates long-existing systems of racial inequality and state-sanctioned violence against Black and Indigenous people and other people of colour, who are historically over-surveilled, over-policed, and over-criminalized. Other marginalized and vulnerable communities, including migrants and refugees, are also put at risk in highly surveilled spaces. Notably, facial recognition algorithms are primarily trained on and programmed by white men, and consequently have five to ten times higher misidentification rates for the faces of Black women (and other racially minoritized groups) than for the faces of white men. The dangers of such inaccuracies are epitomized in recent examples of Black men in Detroit being wrongfully arrested as a result of misidentification by FR algorithms.

With such substantial concerns around racial and social justice weighing heavily, the question then arises: Do the assumed or promised benefits of FR for universities warrant its use? We recognize that universities have legitimate interests and responsibilities to protect the safety of their students and community, ensure secure access to resources, and facilitate equitable academic assessment. Certainly, FR tools are aggressively marketed to universities as offering automated solutions to these challenges. Empirical evidence for these claims, however, proves insufficient or murky – and indeed, often indicates that the use of FR may ultimately contradict or undermine the very goals and interests that universities may be pursuing in its implementation.

With regard to security, for instance, the efficacy of FR technology remains largely untested and unproven. Even as private companies push facial recognition as a means to prevent major crises such as school shootings, there is little evidence that such systems could have prevented past incidents. Studies of other video surveillance systems, such as closed-circuit television (CCTV), have also found little effect on campus safety, and extensive research on school security measures more broadly likewise challenges assumptions that increased surveillance materially improves safety.

As to FR’s advantages for monitoring learning and assessment, researchers have found that overreliance on standardized visual cues of engagement – precisely the kinds of indicators FR depends on – can be ineffective or even detrimental, and there is further evidence that excessive surveillance can erode the trust and cooperation that are crucial to healthy learning environments and positive student outcomes.

Significantly, the adoption of these technologies is unfolding in a context in which institutional capacity to manage digital security risks and privacy concerns is already strained. Indeed, the recent shift to online learning with COVID-19 exposed many of these vulnerabilities and shortfalls in both planning and capacity – for instance, the unanticipated phenomenon of Zoombombing and the widespread privacy and security concerns with that platform. In the case of FR, the high level of technical complexity and rapid pace of development make it all the more challenging for security measures and privacy policies to keep pace with the latest applications, risks, and potential liabilities. Furthermore, the systems and processes underlying FR technologies are extraordinarily opaque and complex, and administrations may not have sufficient information to make informed decisions, particularly when relying on third-party vendors whose data policies may be unclear, unstable, or insufficient.

The pandemic has ruptured the business-as-usual experience of campus life. In doing so, it impels a painful but necessary moment of reflection about the systems we are adopting into our educational landscape in the name of security and efficiency. In this moment of crisis, higher education has a collective opportunity – and, we argue, civic responsibility – to challenge the historical injustice and inherent inequalities that underlie the implementation of facial recognition in university spaces and build a more just post-pandemic university. 

Stefanie Demetriades (PhD, USC Annenberg) studies media, culture, and society, with a focus on cultural processes of meaning making around complex social problems.

Jillian Kwong is a PhD Candidate at the USC Annenberg School for Communication studying the evolving complexities tied to the ethical use, collection, and treatment of personal data in the workplace.

Ali Rachel Pearl teaches writing, literature, and cultural studies as a Postdoctoral Fellow in USC’s Dornsife Undergraduate Honors Program.

Noy Thrupkaew is an independent journalist who writes for publications including The New York Times, The Washington Post, and Reveal Radio.

Colin Maclay is a research professor of Communication and executive director of the Annenberg Innovation Lab at USC.

Jef Pearlman is a Clinical Assistant Professor of Law and Director of the Intellectual Property & Technology Law Clinic at USC.


Learning Analytics, surveillance, and the future of understanding our students

by Vicky Gunn

There has been a flurry of activity around Learning Analytics in Scotland’s higher education sector this past year. Responding, no doubt, to the seemingly unlimited promise of being able to study our students, we are excitedly wondering just how best to use what the technology has to offer. At Edinburgh University, a professorial-level post has been advertised; at my own institution, we are pulling together the various people who run our student experience surveys (hitherto distributed across the institution) into a central unit in Planning, so that we can triangulate surveys, evaluations and other contextual data-sets; elsewhere, systems which enable ‘early warning signals’ with regard to student drop-out have been implemented with gusto.
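To make concrete what the simplest such ‘early warning signal’ amounts to, here is a minimal sketch; the data, field names and one-standard-deviation cut-off are illustrative assumptions of mine, not any vendor’s or institution’s actual algorithm:

```python
# A hypothetical sketch of a crude 'early warning signal' for drop-out risk:
# flag students whose weekly VLE logins fall well below the cohort norm.
# Data, names and the one-standard-deviation threshold are illustrative only.
from statistics import mean, stdev

weekly_logins = {"s001": 14, "s002": 11, "s003": 2, "s004": 9, "s005": 0}

mu = mean(weekly_logins.values())
sigma = stdev(weekly_logins.values())

# Anyone more than one standard deviation below the cohort mean gets flagged.
at_risk = [s for s, n in weekly_logins.items() if n < mu - sigma]

print(f"cohort mean {mu:.1f}, sd {sigma:.1f}; flagged: {at_risk}")
```

Even a toy version like this shows how much judgement hides in the choice of signal and threshold – which is part of what makes the real systems so seductive.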

I am one of the worst of the learning analytics offenders. My curiosity to observe and understand the patterns in the activity, behaviour, and perceptions of our students is just too intellectually compelling. The possibility that we could crunch all of the data about our students into one big stew-pot and then extract answers to meaning-of-student-life questions is a temptation I find too hard to resist (especially when someone puts what is called a ‘dashboard’ in front of me and says, ‘look what happens if we interrogate the data this way’).