by Roland Bloch and Catherine O’Connell
The changing shape of higher education and consequent changes in the nature of academic labour, employment conditions and career trajectories were significant themes at the SRHE Research Conference in December 2018, where we convened a Symposium on the way metrics are associated with these changes. Metrics are relevant whether we are managers, researchers with an interest in the topic, or academics affected by these performative practices in daily life.
Our Symposium offered a comparative exploration of the role of metrics. Two contrasting cases – the highly developed use of metrics in England, and their nascent use in Germany – enabled us to explore the effects of metrics as shaped by national contexts and by wider forces not bound to specific national trajectories. The multidisciplinary symposium contributions asked:
- Are there limits to the prevailing view of metrics as a neoliberal technology?
- What path dependencies are associated with the development of metrics in national contexts?
- How do new possibilities of data production and exploitation affect the understanding of academic performance and measurement?
- What theories and research orientations are needed to capture the dynamics of metricisation?
Metrics and indicators for research
Alis Oancea’s (Oxford) analysis of national research assessment in England distinguished between micro-, meso- and macro-level metrics. National research evaluation metrics (such as those deployed in the REF in England) operate as meso-level metrics and play a dual role: as targets for micro-performance monitoring and as proxies for the construction of macro-level metrics. Such meso-level metrics legitimate micro performance monitoring practices at organisational level. A performative vocabulary is associated with such performance monitoring and plays out in distinctive ways in organisational governance processes. Macro-metrics (‘post-factum calculations’) are constituted through composite criteria, not least by universities themselves in the form of various league tables and field positioning devices, and enter the public discourse as performative by-products, often to the detriment of research culture. From this perspective it makes more sense to examine the ways metrics are ascribed meaning as part of wider narratives and institutional practices. Undue focus on technical issues associated with metrics can distract from more fundamental debates about how highly formalised, complex performance assessment systems operate in renegotiating the principles underpinning relationships between universities and the state.
Metrics and organisational justice
A comparative Anglo-German study by Catherine O’Connell and Namrata Rao (both Liverpool Hope) examined academics’ perceptions of the fairness of accountability practices associated with metrics. Drawing on concepts from the management literature that identify different dimensions of organisational justice, respondent accounts in English and German contexts emphasised significant concerns with ‘procedural justice’ (the fairness of organisational processes associated with metrics). In those organisational contexts where procedural justice was evaluated in positive terms, metrics were used context-sensitively and aligned with valued enhancement practices associated with teaching and research. English academics’ evaluations of the fairness of teaching and research metrics tended to converge, potentially indicative of the particular organisational strategies in using metrics. While the organisational use of metrics was considerably lower in German universities, there was nevertheless a widespread and decentralised use of indicators for research performance. German academics’ responses to the uses of such indicators were highly divergent, suggesting decentralised and disparate attention to research in relation to teaching. Metrics had a role in enhancing objectivity and transparency in research evaluation and in increasing parity between research and teaching, but might not be emancipatory because of the conditions of tenure in Germany. The comparative study draws attention to institutional practices that can contribute to academic perceptions of fairness and indicates the possibilities and limits of democratic forms of accountability in metricised environments.
The ambivalent use of metrics in German higher education
Roland Bloch and Jakob Hartl (both Martin-Luther-University Halle-Wittenberg) explored ‘metrics-in-the-making’ in German higher education. So far, the German system has been characterised by a scattered use of metrics. There is no centralised nationwide data collection and no systematic use of performance indicators; where indicators are employed at all, they are mostly not sanctioned. An online survey of German academics in education and economics shows a divide between teaching and research: whereas metrics are organisationally advanced in research, teaching is largely defined by input factors such as the number of study places. Furthermore, academics’ perceptions of metrics differ even more strongly. Based on a cluster analysis, five different patterns were identified. At one extreme, academics show a positive attitude towards the organisational use of metrics, either embracing metrics whole-heartedly and stressing their emancipatory value, or displaying a strategic view of metrics, especially for pursuing their academic career. At the other extreme, some refuse all facets of academic governance on normative grounds while others see no sense in using metrics as they feel that their practice is largely determined by their teaching load. In between is a group of professors who simply ignore all organisational attempts at employing metrics, based on their position in the academic hierarchy and their constitutionally granted autonomy. All in all, respondents acknowledged some use of indicators, but struggled to define their practical consequences. Thus, the organisational use of metrics appears to be decoupled from daily routines in research and teaching. Metrics are then primarily used to produce external legitimacy via accountability and transparency. Lacking practical consequences, metrics neither produce metricised subjects nor do they have emancipatory effects.
They may however in the long run alter perceptions of what constitutes good research and good teaching, especially if both are reframed through metrics-based perceptions of academic excellence. Such a reframing is more likely to be advanced by the organisation than by the scientific community.
Shaping academic careers
Drawing on 30 biographical interviews with German professors from various subjects, Alexander Lenger (University of Siegen) identified unintended consequences of ratings, rankings, and scientometric indicators for the academic profession. Within the academic profession a significant shift is taking place, giving rise to professors with an entrepreneurial spirit and managerial skills. The findings suggest a change in the dynamics of academic careers: there may be a changing narrative from science as a calling or a way of living to science as a career. Rather than advancing scientific knowledge, a new mode of legitimation is constituted which favours a strategic perspective on research and teaching. An unintended consequence of the increasing use of metrics may be that only strategic academics will eventually be successful, which may change self-selection processes in academia, and thus the reproduction of the academic profession. The findings highlight the long-term issues that arise if quantification and metrification become the normative point of reference in academia.
Measuring performance or performance measurement?
Drawing on illustrative examples of the uses of learning analytics and proprietary research repositories, Anne Krueger (Humboldt-University Berlin) proposed an analytic perspective for research that highlights the performative effects of the digital infrastructures in academic evaluation currently spreading across universities internationally. Universities make increasing use of digital databases and online platforms to categorise academic work and to evaluate academic performance. Yet digital infrastructures do not only facilitate evaluation practices. Through such infrastructures, definitions of high-quality teaching and research are becoming inscribed into the design of online interfaces and digital databases, and may change the understanding of the very object of measurement. When evaluation becomes data-driven rather than knowledge-driven, it becomes less a tool for strategic governance and more a product of the performativity of digital infrastructure. We should therefore examine not only indicators and metrics but also the digital infrastructures that are supposed to implement them.
Metrics are often portrayed in the research literature as a harm to be eradicated but the symposium contributions emphasise how deeply implicated we are as organisational actors in the use of metrics. Collectively, the symposium contributions reflect the prevalent use of metrics in national contexts even where their use is not mandated at the national level. Such uses can often be directed towards individual rather than collective interests. Several of the papers identified considerable variability in organisational uses of metrics that are associated with varying evaluations of fairness. Organisational level analyses can draw attention to the varied forms of accountability practices that have developed at the organisational level in response to metrics and shed light on those local interpretations and applications of metrics which are regarded as more meaningful, context sensitive and socially just.
Contemporary critiques of metrics tend to assume a prior context guided by meritocratic principles. Contributions to the symposium challenge this nostalgic assumption, suggesting metrics may exert some emancipatory effects, through raising the level of transparency of academic career progression and improving parity of esteem for teaching and research. In his comment on the symposium, Alexander Mitterle (Martin-Luther-University Halle-Wittenberg) reminded us to be aware of what we are talking about when referring to the new performance regimes: metrics construct a space in which different elements are brought under one order; indicators render academic practice accessible and generate truth; quantification translates qualities into quantities and renders them commensurable; and standards normalise categorisations and targets. Metrics as socio-material relationships construct reality and ways of acting in it.
Although the use of metrics is advancing globally, their shape and extent may vary between different HE systems. The symposium papers identified the significant impact of national differences in funding frameworks and employment conditions that affect the trajectory of metrics. Comparative research helps to chart these path dependencies and identify the possibilities and limits of national policy.
Much of the research discussed at the symposium focused on the insider perspective of those who are subjected to metrics-based evaluation. A continued and extended research emphasis on stakeholder evaluation of the uses and efficacy of metrics-based evaluation may indicate the role they play in enhancing or eroding public trust. As Professor Ellen Hazelkorn observed in her keynote lecture, the HE sector should not confuse self-interest with public interest in the use of metrics.
Roland Bloch is a research associate at the Centre for School and Educational Research at Martin-Luther-University Halle-Wittenberg and Catherine O’Connell is Co-Director of the Centre for Education and Policy Analysis at Liverpool Hope University.