by Stefanie Demetriades, Jillian Kwong, Ali Rachel Pearl, Noy Thrupkaew, Colin Maclay and Jef Pearlman
This is one of a series of position statements developed following a conference on ‘Building the Post-Pandemic University’, organised on 15 September 2020 by SRHE member Mark Carrigan (Cambridge) and colleagues. The position statements are being posted as blogs by SRHE but can also be found on The Post-Pandemic University’s excellent and ever-expanding website. The authors’ statement can be found here.
The COVID-19 pandemic is vastly accelerating technology adoption in higher education as universities scramble to adapt to the sudden upheaval of academic life. With the tidal shift to online learning and increased pressure to control physical movement on campus, the use of surveillance technology for campus security and classroom monitoring may be particularly appealing. Surveillance methods of varying technological sophistication have long been implemented and normalized in educational settings, from hallway monitors and student IDs to CCTV and automated attendance and exam proctoring. Facial recognition (FR) technology, however, represents a fundamentally new frontier with regard to its capacity for mass surveillance. These increasingly sophisticated tools offer a veneer of control and efficiency in their promise to pluck individuals out of a mass of data and assign categories of identity, behaviour, and risk.
As these systems continue to expand rapidly into new realms and unanticipated applications, however, critical questions of impact, risk, security, and efficacy are often left under-examined by administrators and key decision-makers, even as there is growing pressure from activists, lawmakers, and corporations to evaluate and regulate FR. There are well-documented concerns related to accuracy, efficacy, and privacy. Most alarmingly, these FR "solutions" largely come at the expense of historically marginalized members of the campus community, particularly Black and Indigenous people and other people of colour who already disproportionately bear the greatest risks and burdens of surveillance.
Drawing on a range of scholarship, journalism, technology and policy sources, this project identifies known issues and key categories of concern related to the use and impact of FR and adjacent technologies. We offer an overview of their contours with the aim of supporting university administrators and other decision-makers to better understand the potential implications for their communities and conduct a robust exploration of the associated policy and technology considerations before them now and over time.
Extending beyond their educational mandate, universities bear significant power to influence our collective future through the students they prepare, the insights they generate, and the way they behave. In light of this unique dual role of both academic and civic leadership, we must begin by recognizing the reality of deeply rooted systemic racism and injustice that are exacerbated by surveillance technologies like FR. Even in seemingly straightforward interventions using stable technologies, adoption interacts with existing systems, policies, communities, cultures and more to generate complexities and unintended consequences.
In the case of facial recognition, algorithmic bias is well documented, with significant consequences for racial injustice in particular. Facial recognition as a tool perpetuates long-existing systems of racial inequality and state-sanctioned violence against Black and Indigenous people and other people of colour, who are historically over-surveilled, over-policed, and over-criminalized. Other marginalized and vulnerable communities, including migrants and refugees, are also put at risk in highly surveilled spaces. Notably, facial recognition algorithms are primarily trained on and programmed by white men, and consequently have five to ten times higher misidentification rates for the faces of Black women (and other racially minoritized groups) than for white men. The dangers of such inaccuracies are epitomized in recent examples of Black men in Detroit being wrongfully arrested as a result of misidentification through FR algorithms.
With such substantial concerns around racial and social justice weighing heavily, the question then arises: Do the assumed or promised benefits of FR for universities warrant its use? We recognize that universities have legitimate interests and responsibilities to protect the safety of their students and community, ensure secure access to resources, and facilitate equitable academic assessment. Certainly, FR tools are aggressively marketed to universities as offering automated solutions to these challenges. Empirical evidence for these claims, however, proves insufficient or murky – and indeed, often indicates that the use of FR may ultimately contradict or undermine the very goals and interests that universities may be pursuing in its implementation.
With regard to security, for instance, the efficacy of FR technology remains largely untested and unproven. Even as private companies push facial recognition as a means to prevent major crises such as school shootings, there is little evidence that such systems could have prevented past incidents. Studies of other video surveillance systems, such as closed-circuit television (CCTV), have also found little effect on campus safety, and extensive research on school security measures more broadly likewise challenges assumptions that increased surveillance materially improves safety.
As to FR's advantages for monitoring learning and assessment, researchers have found that an overreliance on standardized visual cues of engagement (precisely the kinds of indicators FR depends on) can be ineffective or even detrimental. There is further evidence that excessive surveillance can erode the environment of trust and cooperation that is crucial to healthy learning environments and positive student outcomes.
Significantly, the adoption of these technologies is unfolding in a context in which institutional capacity to manage digital security risks and privacy concerns is already strained. Indeed, the recent shift to online learning with COVID-19 exposed many of these vulnerabilities and shortfalls in both planning and capacity, as in, for instance, the unanticipated phenomenon of Zoombombing and widespread privacy and security concerns with the platform. In the case of FR, the high level of technical complexity and rapid pace of development make it all the more challenging for security measures and privacy policies to keep pace with the latest applications, risks, and potential liabilities. Furthermore, the systems and processes underlying FR technologies are extraordinarily opaque and complex, and administrations may not have sufficient information to make informed decisions, particularly when relying on third-party vendors whose data policies may be unclear, unstable, or insufficient.
The pandemic has ruptured the business-as-usual experience of campus life. In doing so, it impels a painful but necessary moment of reflection about the systems we are adopting into our educational landscape in the name of security and efficiency. In this moment of crisis, higher education has a collective opportunity – and, we argue, civic responsibility – to challenge the historical injustice and inherent inequalities that underlie the implementation of facial recognition in university spaces and build a more just post-pandemic university.
Stefanie Demetriades (PhD, USC Annenberg) studies media, culture, and society, with a focus on cultural processes of meaning making around complex social problems.
Jillian Kwong is a PhD Candidate at the USC Annenberg School for Communication studying the evolving complexities tied to the ethical use, collection, and treatment of personal data in the workplace.
Ali Rachel Pearl teaches writing, literature, and cultural studies as a Postdoctoral Fellow in USC’s Dornsife Undergraduate Honors Program.
Noy Thrupkaew is an independent journalist who writes for publications including The New York Times, The Washington Post, and Reveal Radio.
Colin Maclay is a research professor of Communication and executive director of the Annenberg Innovation Lab at USC.
Jef Pearlman is a Clinical Assistant Professor of Law and Director of the Intellectual Property & Technology Law Clinic at USC.