SRHE Blog

The Society for Research into Higher Education



Students in quality assurance – representatives, partners, or even experts?

by Jens Jungblut & Bjørn Stensaker

Throughout Europe, students are often regular members of the external quality assurance panels mandated to perform evaluations and accreditations in higher education. While this role has been secured through the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG), we have little knowledge about how students participate in such panels and which roles they take up. In a paper presented at the SRHE conference in Nottingham in December 2025, we addressed this issue – both conceptually and empirically.

One could imagine several roles that students might play as part of an external quality assurance panel. Students are most often seen as representatives of their fellow students. This has implications for how students are appointed to such panels, as various student interest organizations usually have the power to nominate specific students for the task. More recently, the idea of students as partners has also gained interest, where a key assumption is that students should be involved and participate in all aspects and processes related to their own education – including quality assurance. The initiative “student partnerships in quality Scotland” (sparqs) is a well-known example of this inclusive approach (Varwell, 2021). However, one could argue that students may even take on an expertise-based role in quality assurance. This type of role is not based on experience per se but rather on the ability to reflect upon the knowledge possessed and to engage in systematic efforts to learn more, based on these reflections (Ericsson, 2017).

In our paper presented at the SRHE conference we argue that the role of students participating in quality assurance panels (or any other related processes in higher education) may not be static, restricting students to merely one role at a time (see also Stensaker & Matear, 2024). We rather argue – in line with Holen et al (2021) – that the roles students may take on are highly dynamic. A consequence of this would be that students may shift rapidly from one role to another, depending on, for example, the evaluation context, committee setting, or the issue that is being discussed.

To test our assumptions, we conducted a survey targeting students taking part in European quality assurance processes; to be more specific, we targeted the Quality Assurance Student Experts Pool within the European Students’ Union. This group was established in 2009 with the aim of improving the contribution of students to quality assurance in Europe. When included in the pool, students undergo training sessions providing them with relevant background knowledge about quality assurance processes and the ESG. The members of the pool are then called upon by quality assurance agencies throughout Europe to act as student representatives on their quality assurance panels at program, institutional, or national level, performing evaluations, accreditations and other forms of assessment. The Quality Assurance Student Experts Pool therefore represents a unique entity in Europe, as it is the only European structure that brings together and trains students for these roles. Thirty-five students (of a total of 90) responded to our survey.

The students responding have on average been involved in quality assurance for more than four years, and over 60 percent have participated in four or more evaluation or accreditation processes. In line with our expectations, the students indeed report taking on several roles during the evaluation processes: they are representatives of students, they feel they are equal partners within the evaluation panels they are part of, and they also see themselves as experts. In our data, we could not identify a clear hierarchy between the different roles. However, our data suggest that students are often perceived as partners, but less often as experts. A possible interpretation here is that temporality and experience matter: students may initially be viewed as representatives and partners when starting their work within a panel, and through participating in multiple panels over time they may demonstrate expertise which is in turn recognized by their peers on the panels. An interesting feature emerging from the data is that the students in the Quality Assurance Student Experts Pool regularly share knowledge among the members of the pool, and in that way contribute to continuously building the expertise of all members. Expertise is in this way not taken for granted or expected as a prerequisite for membership, but rather nurtured, systematised and made available to newer and future members.

We want to thank all the students who took the time to respond to our questionnaire. While our study is exploratory, we do think it provides new insights regarding student involvement and influence in a setting characterized by a high level of expertise and professionalism, and we hope that the findings can help future research to further unpack the dynamic nature of students’ roles in quality assurance panels.

Jens Jungblut is a Professor at the Department of Political Science at the University of Oslo. His main research interests include party politics, policy-making, and public governance in the knowledge policy domain (education & research), organizational change in higher education, agenda-setting research, and the role of (academic) expertise in policy advice.

Bjørn Stensaker is a Professor at the Department of Education at the University of Oslo. He has a special research interest in governance, leadership, and organizational change in higher education – including quality assurance. He has published widely on these topics in a range of journals and book series.



Collegiality and competition in German Centres of Excellence

by Lautaro Vilches

Collegiality, although threatened by increasing competitive pressures and described as a slippery and elastic concept, remains a powerful ideal underpinning academic and intellectual practices. Drawing on two empirical studies, this blog examines the relationships between collegiality and competition in Centres of Excellence (CoEs) in the Social Sciences and Humanities (SSH) in Germany. These CoEs are conceptualised as a quasi-departmental new university model that contrasts with the ‘university of chairs’, which characterises the old Humboldtian university model, organised around chairs led by professors. Hence my research question: How do academics experience collegiality, and how does it relate to competition, within CoEs in the SSH?

In 2006, the German government launched the Excellence Strategy (then known as the Excellence Initiative), which includes a scheme providing long-term funding for Centres of Excellence. Notably, this scheme extends beyond the traditionally more collaborative Natural Sciences to encompass the Social Sciences and Humanities. Germany therefore offers a unique case for exploring transformations of collegiality amidst co-existing and overlapping university models. What, then, are the key features of these models?

In the old model of the ‘university of chairs’, the chair constitutes the central organisational unit of the university, each one led by a single professor. Central to this model is the idea of collegial leadership, according to which professors govern the university autonomously, a practice that can be traced back to the scholastic guilds of the Middle Ages. During the eighteenth century, German universities underwent a process of modernisation influenced by Renaissance ideals, culminating in the establishment of the University of Berlin in Prussia in 1810 by Wilhelm von Humboldt. By the late nineteenth century, the Humboldtian model of the university had become highly influential, as it offered an organisational template in which the ideals of academic autonomy, academic freedom and the integration of research and teaching were institutionalised.

Within the university of chairs, collegiality is effectively ‘contained’ and enacted within individual chairs. In this structure, professors have no formal superiors and academic staff are directly subordinate to a single professor (as chair holder) – not an institute or faculty. As a result, the university of chairs is characterised by several small and steep hierarchies.

In recent decades – alongside the rise of the United States as the hegemonic power – the Anglo-American departmental model spread across the world, a shift that is associated with the entrepreneurial transformation of universities as they respond to growing competitive pressures.

Remarkably, CoEs in the SSH in Germany are organised as ‘quasi-departments’ resembling a multidisciplinary Anglo-American department. They are very large in comparison with other collaborative grants, often comprising more than 100 affiliated researchers. They are structured around several ‘Research Areas’ and led by 25 Principal Investigators (mostly professors) who must agree on the implementation of the multidisciplinary and integrated research programme on which the CoE is based.

The historical implications of this new model cannot be overstated. CoEs appear to operate as Trojan horses: cloaked in the prestige of excellence, they have introduced a fundamentally different organisational model into the German university of chairs, an institution that has endured over centuries.

Against the backdrop of these two models, what are the implications for collegiality and its relation to competition? A few clarifications are necessary. First, much of the research on collegiality has focused on governance, ignoring that collegiality is also practised ‘on the ground’. Here, I define collegiality (a) as a form of ‘leadership and governance’, involving relations among leaders as well as interactions between leaders and those they govern; (b) as an ‘intellectual practice’ that can best be observed in the enactments of collaborative research; and (c) as a form of ‘citizenship’, involving practices that signify belonging to the CoE and its academic community.

Second, adopting this broader understanding requires acknowledging that collegiality is experienced not only by professors (in collegially governing the university) but also by the ‘invisible’ academic demos, namely Early Career Researchers (ECRs). Although often employed in precarious positions, ECRs are nonetheless significant members of the academic community, in particular in CoEs, which explicitly prioritise the training of ECRs as a core objective. Whilst ECRs are committed full-time to the CoE and sustain much of its collaborative research activity, professors remain simultaneously bound to the duties of their respective positions as chairs.

A third clarification concerns the normative assumptions underpinning collegiality and its relationship to competition. Collegiality is sometimes idealised as an unambiguously positive value and practice in academia, whilst competition – in contrast – is seen as a threat to collegiality. However, this idealised depiction tends to underplay, for example, the role of hierarchies in academia, and often invokes an indeterminate past – perhaps somewhere in the 1960s – when universities were governed autonomously by male professors and generously funded through block grants, largely protected from competitive pressures or external scrutiny.

These contextual conditions have evidently changed over recent decades: competition, in both institutional and individual terms, has intensified in academia, and CoE schemes exemplify this shift. CoE members, especially ECRs, are therefore embedded in multiple and overlapping competitions: at the institutional level, through the CoE’s race for excellence; and at the individual level, through the competition for positions in the CoE, as well as for the grants, publications and networks necessary for career advancement.

How are collegiality and competition intertwined in the CoE? I identify three complex dynamics:

  • ‘The temporal flourishing of intellectual collegiality’ refers to the blooming of collegiality as part of the collaborative research work in the CoE. ECRs describe extensive engagement in organising, leading or co-leading research seminars (alongside PIs or other postdoctoral researchers), co-editing books, developing digital collaborative platforms, inviting researchers from abroad to join the CoE or organising and participating in informal meetings. Within this dynamic, competition is presented as being located ‘outside’ the CoE, temporarily deactivated. However, at the same time, ECRs remain aware of the omnipresence of competition, which ultimately threatens collegial collaboration when career paths, research topics or publications begin to converge. For this reason, intellectual collegiality and competition stand in an exclusionary relationship.
  • ‘The rise of CoE citizenship for the institutional race of excellence’ captures the strong sense of engagement and commitment shown by ECRs (but also professors) towards the CoE. It is expressed through initiatives aimed at enhancing the CoE’s collective research performance, particularly in anticipation of competition for renewed excellence funding. This dynamic reveals that, for the CoE, citizenship and institutional competition are not oppositional but complementary, as collective engagement is mobilised in the service of competitive success.
  • ‘Collegial leadership adapting to multiple competitions’ highlights the plurality of leadership modes, each one responding to different levels and forms of competition. At the level of professors and decision-making at the top, traditional collegial governance is ‘overstretched’. Although professors retain full authority, they struggle to reach consensus and to lead these large multidisciplinary centres effectively. This suggests a growing demand for new skills more closely associated with the figure of an academic manager than with that of a professor. The institutional race for excellence thus places considerable strain on collegial governance rooted in the chair-based system. Accordingly, ECRs describe different and apparently contradictory modes of collegial leadership. For example, the ‘laissez-faire’ mode aligns with the ideals of freedom and autonomy underpinning intellectual collegiality, but also with competition among individuals. They also describe leadership as ‘imposition’, which, on the one hand, erodes trust in professors and decision-making, but, on the other, intersects with notions of citizenship that compel ECRs to accept decisions, even when imposed. Yet many ECRs value and expect a more ‘inclusive leadership’ that supports the development of intellectual collegiality. Overall, the relationship between collegial leadership and competition is heterogeneous and adaptive, closely intertwined with the preceding dynamics.

How, then, can these dynamics be interpreted together? Overall, the findings suggest that differences between university models matter profoundly for collegiality. Expectations regarding how academics collaborate, participate in governance and decision-making processes and form intellectual communities are embedded in specific institutional contexts.

Regarding the relation between collegiality and competition, I suggest two contrasting interpretations. The first emphasises the flourishing of intellectual collegiality and the emergence of CoE citizenship, understood as a collective, multidisciplinary sense of belonging that is driven by – and complementary to – the institutional race for excellence. The second interpretation, however, views this flourishing as a temporal illusion. From this perspective, competition is omnipresent and stands in a fundamentally exclusionary relationship to collegiality: it threatens intellectual collaboration even when temporarily deactivated; it compels academics to engage in CoE-related work they may not intrinsically value; and it overstretches traditional forms of collegial leadership, promoting managerial modes that erode trust in both academic judgement and decision-making processes. Viewed in this light, competition ultimately poses a threat to collegiality. These rival interpretations may uneasily coexist, and the second one possibly predominates. More research is needed on how organisational contexts affect the relationship between collegiality and competition.

Lautaro Vilches is a researcher at Humboldt University of Berlin and a consultant in higher education. His current research examines the implications of excellence schemes for transforming universities’ organisational arrangements and their effects on academic practices such as collegiality, academic mobility and research collaboration, particularly in the Social Sciences and Humanities. As a consultant he advises universities on advancing strategic change.



Risk-based quality regulation – drivers and dynamics in Australian higher education

by Joseph David Blacklock, Jeanette Baird and Bjørn Stensaker

‘Risk-based’ models for quality regulation have become increasingly popular in higher education globally. At the same time, there is limited knowledge of how risk-based regulation can be implemented effectively.

Australia’s Tertiary Education Quality and Standards Agency (TEQSA) started to implement risk-based regulation in 2011, aiming at an approach balancing regulatory necessity, risk and proportionate regulation. Our recently published study analyses TEQSA’s evolution between 2011 and 2024 to contribute to an emerging body of research on the practice of risk-based regulation in higher education.

The challenges of risk-based regulation

Risk-based approaches are seen as a way to create more effective and efficient regulation, targeting resources to the areas or institutions of greatest risk. However, it is widely acknowledged that sector-specificities, political economy and social context exert a significant influence on the practice of risk-based regulation (Black and Baldwin, 2010). Choices made by the regulator also affect its stakeholders and its perceived effectiveness – consider, for example, whose ideas about risk are privileged. Balancing the expectations of these stakeholders with its federal mandate has required considerable compromise on TEQSA’s part.

The evolution of TEQSA’s approaches

Our study uses a conceptual framework suggested by Hood et al (2001) for comparative analyses of risk regulation regimes, charting aspects of context and content respectively. With this as a starting point, we arrive at two theoretical constructs – ‘hyper-regulation’ and ‘dynamic regulation’ – as a way to analyse the development of TEQSA over time. These opposing concepts represent contrasting ways, both theoretical and empirical, in which the risk-based model can be executed within higher education.

From extensive document analysis, independent third-party analysis, and Delphi interviews, we identify three phases to TEQSA’s approach:

  • 2011-2013, marked by practices similar to ‘hyper-regulation’, including suspicion of institutions, burdensome requests for information and a perception that there was little ‘risk-based’ discrimination in use
  • 2014-2018, marked by the use of more indicators of ‘dynamic regulation’, including reduced evidence requirements for low-risk providers, sensitivity to the motivational postures of providers (Braithwaite et al. 1994), and more provider self-assurance
  • 2019-2024, marked by a broader approach to the identification of risks, greater attention to systemic risks, and more visible engagement with Federal Government policy, as well as the disruption of the pandemic.

Across these three periods, we map a series of contextual and content factors to chart those that have remained more constant and those that have varied more widely over time.

Of course, we do not suggest that TEQSA’s actions fit precisely into these timeframes, nor do we suggest that its actions have been guided by a wholly consistent regulatory philosophy in each phase. After the early and very visible adjustment of TEQSA’s approach, there has been an ongoing series of smaller changes, influenced also by the available resources, the views of successive TEQSA commissioners and the wider higher education landscape.

Lessons learned

Our analysis, building on ideas and perspectives from Hood, Rothstein and Baldwin, offers a comparatively simple yet informative taxonomy for future empirical research.

TEQSA’s start-up phase, in which a hyper-regulatory approach was used, can be linked to a contextual need of the Federal Government at the time to support Australia’s international education industry, leading to the rather dominant judicial framing of its role. However, TEQSA’s initial regulatory stance failed to take account of the largely compliant regulatory posture of the universities that enrol around 90% of higher education students in Australia, and of the strength of this interest group. The new agency was understandably nervous about Government perceptions of its performance; however, a broader initial charting of stakeholder risk perspectives could have provided better guardrails. Similarly, a wider questioning of the sources of risk in TEQSA’s first and second phases could have highlighted more systemic risks.

A further lesson for new risk-based regulators is to ensure that the regulator itself has a strong understanding of risks in the sector, to guide its analyses, and can readily obtain the data to generate robust risk assessments.

Our study illustrates that risk-based regulation in practice is as negotiable as any other regulatory instrument. The ebb and flow of TEQSA’s engagement with the Federal Government and other stakeholders provides the context. As predicted by various authors, constant vigilance and regular recalibration are needed by the regulator as the external risk landscape changes and the wider interests of government and stakeholders dictate. The extent to which there is political tolerance for any ‘failure’ of a risk-based regulator is often unstated and always variable.

Joseph David Blacklock is a graduate of the University of Oslo’s Master’s of Higher Education degree, with a special interest in risk-based regulation and government instruments for managing quality within higher education.

Jeanette Baird consults on tertiary education quality assurance and strategy in Australia and internationally. She is Adjunct Professor of Higher Education at Divine Word University in Papua New Guinea and an Honorary Senior Fellow of the Centre for the Study of Higher Education at the University of Melbourne.

Bjørn Stensaker is a professor of higher education at University of Oslo, specializing in studies of policy, reform and change in higher education. He has published widely on these issues in a range of academic journals and other outlets.

This blog is based on our article in Policy Reviews in Higher Education (online 29 April 2025):

Blacklock, JD, Baird, J & Stensaker, B (2025) ‘Evolutionary stages in risk-based quality regulation in Australian higher education 2011–2024’ Policy Reviews in Higher Education, 1–23.


No, it doesn’t make sense to me, either

by Paul Temple

I recently gave a cat-oriented friend a framed copy of a New Yorker cartoon showing a vet’s waiting room. A vet is saying to a man sitting there, “About your cat, Mr Schrödinger, there’s good news and there’s bad news…” Linda put the cartoon in her downstairs loo, and says that half her visitors think it’s hilarious while the rest are completely baffled.

The cartoon really summarises the totality of my knowledge of quantum mechanics, but as it seems to be one of those topics where if you think you understand it, you almost certainly don’t (and you’d be in pretty good company, see below), then my almost boundless ignorance doesn’t feel too bad. But as ideas borrowed from quantum mechanics seem to be colonising areas of discourse that were until recently understandable (we thought) to those of us without doctorates in the subject, perhaps we’d better make an effort.

A recent example of its spread is the paper by our colleague Ron Barnett, ‘Only connect: designing university futures’ in Quality in Higher Education, in which Ron uses the idea taken from quantum mechanics of “entanglement” to consider the university’s relationship with other entities. (And this is where it starts to get tricky.) As Ron notes, entanglement implies that the entities involved are mutually constitutive: one entity cannot be understood without examining the other entities with which it is entangled: “It may be true that one cannot give a description of the modern university without also referring to the economy but the reverse situation also holds: one cannot give a proper description of the economy without referring to a society’s universities. The economy is constitutive of universities, certainly; but universities are also constitutive of the economy”.

So far, so just about OK, yes? But the entanglement idea leads us into territory that is beyond weird: Einstein apparently wrote that “no reasonable definition of reality could be expected to permit” what entanglement implies, but – assuming that quantum computing is going to work, and there are some big bets on it doing so – it turns out that even he was mistaken. What Einstein couldn’t accept, it seems, was that two entangled objects, wherever in the universe they may be, become in effect one, after at first assuming opposite states.

Yes, this is way past anything that we’ve learned to accept as normal. One suggestion of how to think about entanglement asks us to imagine you and a friend tossing entangled coins. (How did they become entangled in the first place? Pass.) If, when you look at your coin, it’s heads, then your friend’s coin will, necessarily, be tails. But if your friend now looks at their coin, it will be heads, which means that your coin will now be tails: back to Schrödinger’s cat, simultaneously both dead and alive. (While the bits in normal computing have a value of either zero or one, qubits in quantum computing can have values of zero and one: Schrödinger’s cat is at the computer keyboard, which incidentally needs to be at a temperature close to absolute zero.)
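The coin analogy above can be mimicked in a few lines of classical code – with the important caveat that this is only a “hidden variable” toy reproducing the anti-correlation, precisely the kind of classical explanation that real quantum experiments rule out. The function name `measure_entangled_pair` is invented here purely for illustration:

```python
import random

def measure_entangled_pair(rng=random.random):
    """Toy classical sketch of the anti-correlated 'entangled coins'
    thought experiment: the coin observed first lands at random, and
    the partner coin is then forced into the opposite state."""
    first = "heads" if rng() < 0.5 else "tails"
    second = "tails" if first == "heads" else "heads"
    return first, second

# However many times we look, the two outcomes are always opposite.
for _ in range(5):
    a, b = measure_entangled_pair()
    assert a != b
```

What the sketch cannot capture, of course, is the genuinely strange part: for real entangled particles there is no “first” outcome fixed in advance, which is exactly what troubled Einstein.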

With Einstein, perhaps, you may think this makes no sense, but earlier this year Google announced a breakthrough in creating an “error correction quantum computer”, having spent hundreds of millions of dollars on the project (Microsoft, Amazon, the Chinese, and others are also on the case), so they obviously think this stuff will work, regardless of the normal rules of the universe.

So, to pursue Ron’s suggestion about the university and the economy being mutually constitutive, it seems to follow that they will be – must be, following the theory – in opposite states. If you were looking for an argument for universities needing to be independent of government, might this be it? Next time a minister inveighs against universities as nests of woke, perhaps someone should explain the quantum aspects of the situation to them: the more regressive government policies become, the more radical universities will necessarily become – it can’t be helped, it’s just to do with entanglement and the structure of the universe. I’m sure they’d appreciate the clarification.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education.



The ongoing saga of REF 2028: why doesn’t teaching count for impact?

by Ian McNay

Surprise, surprise…or not.

The initial decisions on REF 2028 (REF 2028/23/01 from Research England et al), based on the report on FRAP – the Future Research Assessment Programme – contain one surprise and one non-surprise among nearly 40 decisions. To take the second first, it recommends, through its cost analysis report, that any future exercise ‘should maintain continuity with rules and processes from previous exercises’ and ‘issue the REF guidance in a timely fashion’ (para 82). It then introduces significant discontinuities in rules and processes, and anticipates giving final guidance only in winter 2024-5, when four years (more than half) of the assessment period will have passed.

The surprise is, finally, the open recognition of the negative effects on research culture and staff careers of the REF and its predecessors (para 24), identified by respondents to the FRAP consultation about the 2028 exercise. For me, this new humility is a double-edged sword: many of the defects identified have been highlighted in my evidence-based articles (McNay, 2016; McNay, 2022) and, indeed, by the report commissioned by HEFCE (McNay, 1997) on the impact on individual and institutional behaviour of the 1992 exercise:

  • Lack of recognition of a diversity of excellences including work on local or regional issues because of the geographical interpretation of national/international excellence (para 37). Such local work involves different criteria of excellence, perhaps recognised in references to partnership and wider impact.
  • The need for outreach beyond the academic community, such as a dual publication strategy – one article in an academic journal matched with one in a professional journal in practical language and close to utility and application of a project’s findings.
  • Deficient arrangements for assessing interdisciplinary work (paras 60 and 61)
  • The need for a different, ‘refreshed’, approach to appointments to assessment panels (para 28)
  • The ‘negative impact on the authenticity and novelty of research, with individuals’ agendas being shaped by perceptions of what is more suitable to the exercise: favouring short-term inputs and impacts at the expense of longer-term projects…staying away from areas perceived to be less likely to perform well’. ‘The REF encourages …focus on ‘exceptional’ impacts and those which are easily measurable, [with] researchers given ‘no safe space to fail’ when it came to impact’.
  • That last negative arises in major part because of the internal management of the exercise, yet the report proposes an even greater corporate approach in future. The evidence-based articles and reports, and innovative processes and artefacts that arise from our research will have a reduced contribution to published assessments on the quality of research, though there is encouragement of a wider diversity of research outputs. More emphasis will be placed on institutional and unit ‘culture’ (para 28), so individuals disappear, uncoupled from consideration of culture-based quality. That culture is controlled by management; I spent several years as a Head of School trying to protect and develop further a collegial enterprise culture, which encouraged research and innovative activities in teaching. The senior management wanted a corporate bureaucracy approach with targets and constant monitoring, which work at Exeter has shown leads to lower output, poorer quality and higher costs (Franco-Santos et al, 2014).

At least 20 per cent of the People, Culture and Environment sub-profile for a unit will be based on an assessment of the Institutional Level (IL) culture, and this element will make up 25 per cent of a unit’s overall quality profile, up from 15 per cent in 2021. This proposed culture-based approach will favour Russell Group universities even further – their accumulated capital has led to them outscoring other universities on ‘environment’ in recent exercises, even when the output scores have been the other way round. Elizabeth Gadd, of Loughborough, had a good piece on this issue in Wonkhe on 28 June 2023. The future may see research-based universities recruiting strongly in the international market to provide a subsidy to research from higher student fees, leaving the rest of us to offer access and quality teaching to UK students on fees not adjusted for inflation. Some recognition of excellent research in unsupportive environments would be welcome, as would reward for improvement, as operated when the polytechnics and colleges joined research assessment exercises.

The culture of units will be judged by the panels – a separate panel will assess IL cultures – and will be based on a ‘structured statement’ from the management, assessing itself, plus a questionnaire submission. I have two concerns here: can academic panels competent to peer-assess research also judge the quality and contribution of management; and, given behaviours in the first round of impact assessment (Oancea, 2016), how far can we rely on the integrity of these statements?

The Contribution to Knowledge and Understanding sub-profile will make up 50 per cent of a unit’s quality profile – down from 60 per cent last time and 65 per cent in 2014. At least 10 per cent will be based on the structured statement, so Outputs – the one thing that researchers may have a significant role in – are down to at most 40 per cent of what is meant by overall research quality (the FRAP International Committee recommended 33 per cent). Individuals will not be submitted. HESA data will be used to quantify staff, and the number of research outputs that can be submitted will be an average of 2.5 per FTE. There is no upper limit for an individual, and staff with no outputs can be included, as well as those who support research by others, or technicians who publish. Research England (and this is mainly about England; the other three countries may do better and certainly will do things differently) is firm that the HESA numbers will not be used as the volume multiplier for funding (still a main purpose of the REF), though it is not clear where that will come from – Research England is reviewing its approach to strategic institutional research funding. Perhaps staff figures submitted to HESA will have an indicator of individuals’ engagement with research.

Engagement and Impact broadens the previous element of simply impact. Our masters have discovered that early engagement of external partners in research, and a six-month attachment at 0.2 contract level, allows them to be included and enhances impact. Wow! Who knew? The work that has impact can be of any quality level, to avoid the current quality level designations stopping local projects from being acknowledged.

The three sub-profiles have fuzzy boundaries and overlap – and not just in a linear connection of environment, output, impact. As noted above, engagement, for example, comes from the external environment but becomes part of the internal culture. The profile becomes more of a Venn diagram, which allows the adoption of an ‘holistic’ approach to ‘responsible research assessment’. We wait to see what both of those mean in practice.

What is clear in that holistic approach is that research has nothing to do with teaching, and impact on teaching still does not count. That has created an issue for me in the past, since my research feeds (not leads) my teaching and vice versa. I use discovery learning and students’ critical incidents as curriculum organisers, and they produce ‘evidence’ similar to that gathered through more formal interview and observation methods.

An example. I recently led a workshop for a small private HEI on academic governance; there was a newly appointed CEO. I used a model of institutional and departmental cultures which influence decision making and ‘governance’ at different levels. That model, developed to help my teaching, is now regarded by some as a theoretical framework and used as a basis for research. Does it therefore qualify for inclusion in impact? The session asked participants to consider the balance among four cultures – collegial, bureaucratic, corporate, entrepreneurial – relating to the degrees of central control of policy development and of policy delivery (McNay, 1995). It then dealt with some issues more didactically, moving to the concept of the learning organisation, where I distributed a 20-item questionnaire (not yet published, but available on request for you to use) allowing behaviours relating to capacity to change, innovate and learn, leading to improved quality, to be scored out of 10 per item. Only one person scored more than 100 in total, and across the group the modal score was in the low 70s out of a possible 200, or just over 35%. That gave the new CEO an agenda, with some issues more easily pursued than others and scores indicating levels of concern and priority. So my role moved into consultancy. There will be impact, but is the research base sufficient, was it even research, and does the use of teaching as a research transmission process (Boyer, 1990) disqualify it?

I hope this shows that the report contains a big agenda, with more to come. SRHE members need to consider what it means to them, but also what it means for research into institutions and departments to help define culture and its characteristics. I will not be doing it, but I hope some of you will. We need to continue to provide an evidence base to inform decisions even if it takes up to 20 years for the findings to have an impact.

SRHE itself might say several things in response to the report:

  • welcome the recognition of previous weaknesses, but note that a major one has not been recorded: the impact of RAE/REF on teaching, when excellent research has gained extra money but excellent teaching has not, leading to an imbalance of effort within the HE system. The research-teaching nexus also needs incorporating into the holistic view of research. Teaching is a major element in the dissemination of research (Boyer, 1990), and so a conduit to impact, and should be recognised as such. That is because the relationship between researcher/teacher and those gaining new knowledge and understanding is more intimate and interactive than a reader of an article experiences. Discovery learning, drawing on learners’ experiences in CPD programmes, can be a source of evidence, enhancing the knowledge and understanding of the researcher to incorporate in further work and research publications.
  • welcome the commitment to more diversity of excellences. In particular, welcome the commitment to recognise local and regionally directed research and its significant impact. The arguments about intimacy and interaction apply here, too. Research in partnership is typical of such work and different criteria are needed to evaluate excellence in this context.
  • welcome the intention to review panel membership to reflect the wider view of research now to be adopted.
  • urge an earlier clarification of panel criteria, to avoid another 18 months, at least, of trying, without clarity or guidance, to produce work that will fit the framework of judgement within which it will be judged.
  • be wary of losing the voice of the researchers in the reduction of emphasis on research and its outputs in favour of presentations on corporate culture.

References

McNay, I (1997) The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: the evidence from a research project. Bristol: HEFCE



Redefining cultures of excellence: A new event exploring models for change in recruiting researchers and setting research agendas

by Rebekah Smith McGloin and Rachel Handforth, Nottingham Trent University

‘Research excellence’ is a ubiquitous concept to which we are mostly habituated in the UK research ecosystem. Yet, at the end of an academic year which saw the publication of the UKRI EDI Strategy, four UKRI council reviews of their investments in PGR, and a House of Commons inquiry on Reproducibility and Research Integrity, and following on from the development of manifestos, concordats, declarations and standards to support Open Research in recent years, it feels timely to engage in some critical reflection on cultures of excellence in research.

The notion of ‘excellence’ has become an increasingly important part of the research ecosystem over the last 20 years (OECD, 2014). The drivers for this are traced to the need to justify the investment of public money in research and the increasing competition for scarce resources (Münch, 2015).  University rankings have further hardwired and amplified judgments about degrees of excellence into our collective consciousness (Hazelkorn, 2015).

Jong, Franssen and Pinfield (2021) highlight, however, that the idea of excellence is a ‘boundary object’ (Star and Griesemer, 1989): a nebulous construct which is poorly defined and used in many different ways. It has nevertheless shaped policy, funding and assessment activities since the turn of the century. Ideas of excellence have been enacted through the Research Excellence Framework and the associated allocation to universities of funding to support research, competitive schemes for grant funding, recruitment to flagship doctoral training partnerships, and individual promotion and reward.

We can trace a number of recent initiatives at sector level, inter alia, that have sought to broaden ideas of research excellence and to challenge systemic and structural inequalities in our research ecosystem. These include the increase of the impact weighting in REF2021 to 25%; trials of partial randomisation as part of the selection process for some smaller research grants (eg by the British Academy from 2022); the Concordats and Agreements Review work in 2023 to align and increase the influence, capacity and efficiency of activity to support research culture; and the recent Research England investment in projects designed to address the broken pipeline into research by increasing participation of people from racialised groups in doctoral education.

At the end of June, we are hosting an event at NTU which will focus on redefining cultures of research excellence through the lens of inclusion. The symposium, to be held at our Clifton Campus on Wednesday 28 June, provides an opportunity to re-examine the broad notion of research excellence, in the context of systemic inequalities that have historically locked out certain types of researchers and research agendas and locked in others.

The event focuses on two mutually-reinforcing areas: the possibility of creating more responsive and inclusive research agendas through co-creation between academics and communities; and broadening pathways into research through the inclusive recruitment of PhD and early career researchers. We take the starting position that approaches which focus on advancing equity are critical to achieving excellence in UK research and innovation.

The day will include keynotes from Dr Bernadine Idowu and Professor Kalwant Bhopal, the launch of a new competency-based PGR recruitment framework, based on sector consultation, and a programme of speakers talking about their approaches to diversifying researcher recruitment and engaging the community in setting research agendas. 

NTU will be showcasing two new projects that are designed to challenge old ideas of research excellence and forge new ways of thinking. EDEPI (Equity in Doctoral Education through Partnership and Innovation Programme) is a partnership with Liverpool John Moores and Sheffield Hallam Universities and NHS Trusts in the three cities. The project will explore how working with the NHS can improve access and participation in doctoral education for racially-minoritised groups. Co(l)laboratory is a project with the University of Nottingham, based on the Universities for Nottingham civic agreement with local public-sector organisations. Co(l)laboratory will present early lessons from a community-informed approach to cohort-based doctoral training.

Our event is a great opportunity for universities and other organisations that are, in their own ways, redefining cultures of research excellence to share their approaches, challenges and successes. We invite individuals, project teams and organisations working in these areas to join us at the end of June, in the hope of building a community of practice around inclusive research cultures, within and across the sector.

Dr Rebekah Smith McGloin is Director of the Doctoral School at Nottingham Trent University and is Principal Investigator on the EDEPI and Co(l)laboratory projects. 

Dr Rachel Handforth is Senior Lecturer in Doctoral Education and Civic Engagement at NTU.



Interdisciplinarity

by GR Evans

Historian GR Evans takes the long view of developments in interdisciplinary studies, with particular reference to experience at Cambridge, where progress may at times be slow but is also measured. Many institutions have in recent years developed new academic structures or other initiatives intended to promote interdisciplinary collaboration. We invite further blogs on the topic from other institutional, disciplinary, multidisciplinary or interdisciplinary perspectives.

A recent Times Higher Education article explored ‘academic impostor syndrome’ from the point of view of an academic whose teaching and research crossed conventional subject boundaries. That seemed to have made the author feel herself a misfit. She has a point, but perhaps one with broader ramifications.  

There is still a requirement of specialist expertise in the qualification of academics. In its Registration Conditions for the grant of degree-awarding powers, the Office for Students adopts a requirement which has been in use since the early 1990s. An institution which is an established applicant seeking full degree-awarding powers must still show that it has “A self-critical, cohesive academic community with a proven commitment to the assurance of standards supported by effective quality systems.”

A new applicant institution must show that it has “an emerging self-critical, cohesive academic community with a clear commitment to the assurance of standards supported by effective (in prospect) quality systems.” The evidence to be provided is firmly discipline-based: “A significant proportion (normally around a half as a minimum) of its academic staff are active and recognised contributors to at least one organisation such as a subject association, learned society or relevant professional body.” The contributions of these academic staff are: “expected to involve some form of public output or outcome, broadly defined, demonstrating the research-related impact of academic staff on their discipline or sphere of research activity at a regional, national or international level.”

The establishment of a range of subjects identified as ‘disciplines’ suitable for study in higher education is not much more than a century old in Britain, arriving with the broadening of the university curriculum during the nineteenth century and the creation of new universities to add to Oxford and Cambridge and the existing Scottish universities. Until then the medieval curriculum adapted in the sixteenth century persisted, although Cambridge especially honoured a bent for Mathematics. ‘Research’, first in the natural sciences, then in all subjects, only slowly became an expectation. The higher doctorates did not become research degrees until late in the nineteenth century, and the research PhD was not awarded in Britain until the beginning of the twentieth century, when US universities were beginning to offer doctorates and the degree was established as a competitive attraction in the UK.

The notion of ‘interdisciplinarity’ is even more recent. The new ‘disciplines’ gained ‘territories’ with the emergence of departments and faculties to specialise in them and supervise the teaching and examining of students choosing a particular subject. In this developing system in universities the academic who did not fully belong, or who made active connections between disciplines still in the process of defining themselves, could indeed seem a misfit. The interdisciplinary was often disparaged as neither one discipline nor another, and regarded by mainstream specialists as inherently imperfect. Taking an interest in more than one field of research or teaching might perhaps be better described as ‘multi-disciplinary’, and requires a degree of cooperativeness among those in charge of the separate disciplines. But it is still not easy for an interdisciplinary combination to become a recognised intellectual whole in its own right, though ‘Biochemistry’ shows it can be done.

Research selectivity and interdisciplinarity

The ‘research selectivity’ exercises which began in the mid-1980s evolved into the Research Assessment Exercises (1986, 1989, 1992, 1996, 2001, 2008), now the Research Excellence Framework. The RAE Panels were made up of established academics in the relevant discipline, and by the late 1990s there were complaints that this disadvantaged interdisciplinary researchers. The Higher Education Funding Council for England and the other statutory funding bodies prompted a review, and in November 1997 the University of Cambridge received the consultation paper sent round by HEFCE. A letter in response from Cambridge’s Vice-Chancellor was published, giving answers to questions posed in the consultation paper. Essential, it was urged, were ‘clarity and uniformity of application of criteria’. It suggested that: “… there should be greater interaction, consistency, and comparability between the panels than in 1996, especially in cognate subject areas. This would, inter alia, improve the assessment of interdisciplinary work.”

The letter also suggested “the creation of multidisciplinary sub-panels, drawn from the main panels” or at least that the membership of those panels should include those “capable of appreciating interdisciplinary research and ensuring appropriate consultation with other panels or outside experts as necessary”. Universities should also have some say, Cambridge suggested, about the choice of panel to consider an interdisciplinary submission. On the other hand Cambridge expressed “limited support for, and doubts about the practicality of, generic interdisciplinary criteria or a single interdisciplinary monitoring group”, although the problem was acknowledged.[1]

Interdisciplinary research centres

In 2000 Cambridge set up an interdisciplinary Centre for Research in the Arts, Humanities, and Social Sciences. In a Report proposing CRASSH the University’s General Board pointed to “a striking increase in the number and importance of research projects that cut across the boundaries of academic disciplines both within and outside the natural sciences”. It described these as wide-ranging topics on which work could “only be done at the high level they demand” in an institution which could “bring together leading workers from different disciplines and from around the world … thereby raising its reputation and making it more attractive to prospective staff, research students, funding agencies, and benefactors.”[2]

There have followed various Cambridge courses, papers and examinations using the term ‘interdisciplinary’, for example an Interdisciplinary Examination Paper in Natural Sciences. Acceptance of a Leverhulme Professorship of Neuroeconomics in the Faculty of Economics in 2022 was proposed on the grounds that “this appointment serves the Faculty’s strategy to expand its interdisciplinary profile in terms of research as well as teaching”. It would also comply with “the strategic aims of the University and the Faculty … [and] create a bridge between Economics and Neuroscience and introduce a new interdisciplinary field of Neuroeconomics within the University”. However, the relationship between interdisciplinarity in teaching and in research has still not been systematically addressed by Cambridge.

‘Interdisciplinary’ and ‘multidisciplinary’

A Government Report of 2006 moved uneasily between ‘multidisciplinary’ and ‘interdisciplinary’ in its use of vocabulary, with a number of institutional case studies. The University of Strathclyde and King’s College London (Case Study 2) described a “multidisciplinary research environment”. The then Research Councils UK (Case Study 5b) said its Academic Fellowship scheme provided “an important mechanism for building interdisciplinary bridges” and at least two HEIs had “created their own schemes analogous to the Academic Fellowship concept”.

In sum it said that all projects had been successful “in mobilising diverse groups of specialists to work in a multidisciplinary framework and have demonstrated the scope for collaboration across disciplinary boundaries”. Foresight projects, it concluded, had “succeeded in being regarded as a neutral interdisciplinary space in which forward thinking on science-based issues can take place”. But it also “criticised the RAE for … the extent to which it disincentivised interdisciplinary research”.  And it believed that Doctoral Training Projects still had a focus on discipline-specific funding, which was “out of step with the growth in interdisciplinary research environments and persistent calls for more connectivity and collaboration across the system to improve problem-solving and optimise existing capacity”.

Crossing paths: interdisciplinary institutions, careers, education and applications was published by the British Academy in 2016. It recognised that British higher education remained strongly ‘discipline-based’, and recognised the risks to a young researcher choosing to cross boundaries. Nevertheless, it quoted a number of assurances it had received from universities, saying that they were actively seeking to support or introduce the ‘interdisciplinary’. It provided a set of Institutional Case Studies, including Cambridge’s statement about CRASSH, as hosting a range of externally funded interdisciplinary projects. Crossing paths saw the ‘interdisciplinary’ as essentially bringing together existing disciplines in a cluster. It suggested “weaving, translating, convening and collaborating” as important skills needed by those venturing into work involving more than one discipline. It did not attempt to explore the definition of interdisciplinarity or how it might differ from the multi-disciplinary.

Interdisciplinary teaching has been easier to experiment with, particularly at school level where subject-based boundaries may be less rigid. There seems to be room for further hard thought not only on the need for definitions but also on the notion of the interdisciplinary from the point of view of the division of provision for posts in – and custody of – individual disciplines in the financial and administrative arrangements of universities. This work-to-be-done is also made topical by Government and Office for Students pressure to subordinate or remove established disciplines which do not offer the student a well-paid professional job on graduation.

SRHE member GR Evans is Emeritus Professor of Medieval Theology and Intellectual History in the University of Cambridge.


[1] Cambridge University Reporter, 22 April (1998).  

[2] Cambridge University Reporter, 25 October (2000).  



What makes a good SRHE Conference abstract? (some thoughts from a reviewer)

by Richard Davies

Dr Richard Davies, co-convenor of SRHE’s Academic Practice network, ran a network event on 26 January 2022 ‘What makes a good SRHE Conference abstract?’. A regular reviewer for the SRHE Conference, Richard also asked colleagues what they look for in a good paper for the conference and shared the findings in a well-attended event.

Writing a submission for a conference is a skill – distinct from writing for journals or public engagement. It is perhaps most like an erudite blog. In the case of the SRHE conference, you have 750 words to show the reviewer that your proposed presentation is (a) worth conference delegates’ attention, and (b) a better fit for this conference than others (we get more submissions than the conference programme can accommodate so it is a bit competitive!).

Think of it as a short paper, not an abstract

It is difficult to summarise a 5,000-6,000-word paper in 750 words and cover literature, methodology, data and findings. As a reviewer, I often find myself unsatisfied with the result. It is better to think of this as a short paper that you can present in 15 minutes at the conference. This means focussing on a specific element of your study which can be communicated in 750 words, and following the argument of that focus through precise methodology, a portion of your data, and final conclusions. Sure, tell the reviewers this is part of a larger study, but you are focusing on a specific element of it. The short paper will then, if well written, be clear and internally coherent. If I find a submission is neither clear nor coherent, then I would usually suggest rejecting it, because if I cannot make sense of it then I will assume delegates will not be able to either.

Practical point: get a friend or colleague to read the short paper – do they understand what you are saying? They don’t have to be an expert in higher education or even research. As reviewers, most of us regularly read non-UK English texts; as an international society we are not expecting standard English – just clarity to understand the points the author is making. Whether UK-based or international, we are not experts in different countries’ higher education systems, so do not assume the reviewer has prior knowledge of the higher education system you are discussing.

Reviewer’s judgement

Although we work to a set of criteria, as with most academic work, there is an element of judgement, and reviewers take a view of your submission as a whole. We want to know: will this be of interest to SRHE conference delegates? Will it raise questions and stimulate discussion? In my own area of philosophy of education, a submission might be philosophically important but not explicitly about higher education; as a result I would tend to suggest it be rejected. It might be suitable for a conference but not this conference.

Practical point: check you are explicitly talking about higher education and how your paper addresses an interesting area of research or practice. Make sure the link is clear – don’t just assume the reviewers will make the connection. Even if we can, we will be wary of suggesting acceptance.

Checking against the criteria

The ‘Call for Papers’ sets out the assessment criteria against which we review submissions. As a reviewer, I read the paper and form a broad opinion; I then review with a focus on each specific criterion. Each submission is different and will meet each criterion (or not) in a different way and to varying degrees. As a reviewer, I interpret the criterion in the light of the purpose and methodology of the submission. As well as clarity and suitability for the conference, I also think about the rigour with which it has been written. This includes engagement with relevant literature, the methodology/methods and the quality of the way the data (if any) are used. I want to know that this paper builds on previous work but adds some original perspective and contribution. I want to know that the study has been conducted methodically and that the author has deliberated about it. Where there are no data, either because it is not an empirical study or because the paper reports the initial phases of what will be an empirical study, I want to know that the author’s argument is reasonable and illuminates significant issues in higher education.

Practical point: reviewers use the criteria to assess and ‘score’ submissions. It is worth going through the criteria and making sure it is clear how you have addressed each one. If you haven’t got data yet, then say so, and say why you think the work is worth presenting at this early stage.

Positive news

SRHE welcomes submissions from all areas of research and evaluation in higher education, not just those with lots of data! Each submission is reviewed by two people and then moderated, and further reviewed, if necessary, by network convenors – so you are not dependent on one reviewer’s assessment. Reviewers aim to be constructive in their feedback and to uphold the high standard of presentations we see at the conference, highlighting areas of potential improvement for both accepted and rejected submissions.

Finally, the SRHE conference does receive more submissions than can be accepted, and so some good papers don’t make it. Getting rejected is not a rejection of your study (or you); sometimes it is about clarity of the submission, and sometimes it is just lack of space at the conference.

Dr Richard Davies is an academic, educationalist and informal educator. He is primarily concerned with helping other academics develop their research on teaching and learning in higher education. His own research is primarily in philosophical approaches to higher educational policy and practice. He co-convenes SRHE’s AP (Academic Practice) Network – you can find out more about the network by clicking here.



Quality teacher educators for the delivery of quality education

by Desiree Antonio

A spectrum of interesting critical issues related to ‘quality’ were brought to light during the SRHE Academic Practice Network conference on 22-23 June 2021. The conference Qualifying the debate on quality attracted my attention and I was keen to share my perspectives on the implications of having quality teacher educators in order to produce quality classroom teachers.

My substantive work, first as an Education Officer supervising principals and teachers in our schools, and second as an Adjunct Lecturer teaching student teachers in a Bachelor of Education programme, positioned me as an inside observer of and participant in this phenomenon. My doctoral thesis (2020) explored teacher educators’ perceptions about their continuing professional development and their experiences as they transitioned into and assumed roles as teacher educators. Hence, I am quite pleased to write this blog that captures the essence of my presentation from the conference.

Ascribing the label of “quality” to education has different meanings and interpretations in different conditions and settings. ‘Quality’ depends on geographical boundaries and contexts, with consideration given to quality assurance, regulations and established standards using certain measures (Churchward and Willis, 2018). Attaining ‘quality’ can therefore be elusive, especially when we try to address all the layers within an education system. The United Nations sustainable development goal number 4 is aimed at offering ‘quality’ education for all in an inclusive and equitable climate. But this quality education is to be provided by teachers, with no mention (as is generally the case) of the direct input of teacher educators who sit at the apex of the ‘quality chain’. These teacher educators work in higher education institutions and are tasked with the responsibility of formally preparing quality classroom teachers. The classroom teachers in turn would ensure that our students receive this inclusive equitable quality education within schools and other learning institutions.

Teacher educators’ professional development is now receiving more attention in the literature, but this once forgotten group of professionals, who make up a distinct group within the education sector, still needs constant support and continuous professional development. Such attention will enable them to offer an improved quality of service to their student teachers. Without giving teacher educators the support and attention they deserve, quality education cannot be realised in our classrooms. Sharma (2019) reminds us that every child deserves quality classroom teachers.

Responsibilities of teacher educators

An understanding of what teacher educators are expected to do is therefore critical if we are to recognise their value in the quality chain. Darling-Hammond (2006) opines that teacher educators must have knowledge of their learners and their social context, knowledge of content and of teaching. Furthermore, Kosnik et al (2015) explain that they should have knowledge of pedagogy in higher education, research and government initiatives. Teacher educators must also have knowledge of teachers’ lives – what it is like to teach children, and also to teach the teachers of children; they therefore should have had the experience of being teachers (Bahr and Mellor, 2016). In essence, they should be equipped with teachers’ knowledge and skills, in addition to what they should know and do as teacher educators. It appears that the complexity of teacher educators’ work is usually underestimated and devalued. This is evidenced especially when it is taken for granted that good classroom teachers are suitably qualified to become teacher educators and that they do not require formal training and continued differentiated support as they transition and work as teacher educators in higher education.

Improving the quality of teacher educators’ work   

Targeted continuing professional development (CPD), of different types and forms and addressing different purposes according to teacher educators’ needs and those of their institutions, is suggested. I have recommended (Antonio, 2019) a multidimensional approach to teacher educators’ CPD. This approach takes into consideration forms of CPD (informal, formal and communities of practice); types of CPD (site-based, standardised and self-directed); and purposes of CPD (transmissive, malleable and transformative), as proposed by Kennedy (2014). Teacher educators must have a voice in determining the combination and nature of their CPD. Nevertheless, there needs to be a ‘quality barometer’ which gives various stakeholders the opportunity to assist in guiding their development. Their CPD must be relevant to the demands of the 21st century.

Interventions as a necessity

The idea that teacher educators are self-made, that good classroom teachers can simply transfer their skills and knowledge into higher education institutions without formal training as teacher educators, should be challenged decisively. Systems need to be established for teacher educators to be formally trained at levels beyond that of ordinary classroom teachers. Moreover, their CPD should be fostered under the experienced supervision of professors who are themselves demonstrably conversant with 21st-century technological pedagogical content knowledge, as well as other soft skills. No one should be left untouched in our quest to provide quality education for all. We must be serious about simultaneously addressing the delivery of quality education at every level of education systems. Our children deserve quality classroom teachers, and quality teacher educators hold the key.

Desirée Antonio is Education Officer, School Administration within the Ministry of Education, Sports and Creative Industries, Antigua and Barbuda. She has been an educator for nearly 40 years. Her current work involves the supervision of teachers and principals, providing professional development and contributing to policy development. She has a keen interest in Continuing Professional Development as a strategy that can be used to assist in responding to the ever-changing challenging and complex environment in which we work as educators.

As an Adjunct Lecturer at the University of the West Indies, Five Islands Campus, Desirée teaches student teachers in a Bachelor of Education programme. Her doctoral thesis explored the continuing professional development of teacher educators who work in the region of the Organisation of Eastern Caribbean States. Her involvement over the past year in many webinars and workshops with SRHE inspired her to develop and host an inaugural virtual research symposium on behalf of the Ministry of Education in May 2021, with the next to be held in 2022.

References

Antonio, D (2019) Continuing Professional Development (CPD) of Teacher Educators (TEs) within the ecological environment of the island territories of the Organisation of Eastern Caribbean States (OECS) PhD thesis submitted in accordance with the requirements of the University of Liverpool

Bahr, N and Mellor, S (2016) Building Quality in Teaching and Teacher Education ACER Press https://research.acer.edu.au/cgi/viewcontent.cgi?article=1003&context=aer

Churchward, P, and Willis, J (2018) ‘The pursuit of teacher quality: identifying some of the multiple discourses of quality that impact the work of teacher educators’ Asia-Pacific Journal of Teacher Education, 47(3): 251–264 https://doi.org/10.1080/1359866X.2018.1555792

Darling-Hammond, L (2006) ‘Constructing 21st-century teacher education’ Journal of Teacher Education, 57(3): 300–314 https://doi.org/10.1177/0022487105285962

Kennedy, A (2014) ‘Understanding continuing professional development: the need for theory to impact on policy and practice’ Professional Development in Education, 40(5): 688–697 https://doi.org/10.1080/19415257.2014.955122

Kosnik, C, Menna, L, Dharamshi, P, Miyata, C, Cleovoulou, Y and Beck, C (2015) ‘Four spheres of knowledge required: an international study of the professional development of literacy/English teacher educators’ Journal of Education for Teaching, 41: 52–77 https://doi.org/10.1080/02607476.2014.992634

Sharma, R (2019) ‘Ensuring quality in teacher education’ EPRA International Journal of Multidisciplinary Research (IJMR) 5(10)



More roadworks on Quality Street

by Paul Temple

Trust is the magic ingredient that allows social life to exist, from the smallest informal group to entire nations. High-trust societies tend to be more efficient, as it can be assumed that people will, by and large, do what they’ve agreed without the need for constant checking. Ipsos-MORI carries out an annual “veracity index” survey in Britain to discover which occupational groups are most trusted: “professors”, which I think we can take to mean university academic staff, score highly (trusted by 83% of the population), just below top-scoring doctors and judges, way above civil servants (60%) – and with government ministers playing in a different league on 16%. So most people, then, seem to trust university staff to do a decent job – much more than they trust ministers. It’s therefore a little strange that over the last 35 years the bitterest struggles between universities and governments have been fought in the “quality wars”, with governments claiming repeatedly that university teachers can’t be trusted to do their jobs without state oversight. Disputes about university expansion and funding come and go, but the quality wars just rumble on. Why?

From the mid-1980s (when “quality” was invented) up to the appearance of the 2011 White Paper, Higher Education: Students at the Heart of the System, quality in higher education was (after a series of changes to structures and methods) regulated by the Quality Assurance Agency, which required universities to show that they operated effective quality management processes. This did not involve the inspection of actual teaching: universities were instead trusted to give an honest, verifiable, account of their own quality processes. Without becoming too dewy-eyed about it, the process came down to one group of professionals asking another group of professionals how they did their jobs. Trust was the basis of it all.

The 2011 White Paper intended to sweep this away, replacing woolly notions of trust-based processes with a bracing market-driven discipline. The government promised to “[put] financial power into the hands of learners [to make] student choice meaningful…[it will] remove regulatory barriers [to new entrants to the sector to] improve student choice…[leading to] higher education institutions concentrating on high-quality teaching” (Executive Summary, paras 6-9). On this model, decisions by individual students would largely determine institutional income from teaching, so producing better-quality courses: trust didn’t matter. If market forces can be seen to drive forward quality in other fields through competition, why not in universities?

Well, of course, for lots of reasons, as critics of the White Paper were quick to point out, naturally to no avail. But having been told that they were to operate in a marketised environment where the usual market mechanisms would deal with quality (good courses expanding, others shrinking or failing), exactly a decade later universities find themselves being subjected to a bureaucratic (I intend the word in its social scientific sense, not as a lazy insult) quality regime, the very antithesis of a market system.

We see this in the latest offensive in the quality wars, just opened by the OfS with its July 2021 “Consultation on Quality and Standards”. This 110-page second-round consultation document sets out a highly detailed process for assessing quality and standards: you can almost feel the pain of the drafter of section B1 on providing “a high quality academic experience”. What does that mean? It means, for example, ensuring that each course is “coherent”. So what does “coherent” mean? Well, it means, for example, providing “an appropriate balance between breadth and depth”. So what does…? And so on. This illustrates the difficulty of treating academic quality as an ISO 9001 (remember that?) process with checklists, when probably every member of a course team will – actually, in a university, should – have different, equally valid, views on what (say) “appropriate breadth and depth” means.

Government approaches to quality and standards in university teaching have, then, over the last 30 or so years, moved from a largely trust-based system, to one supposedly driven by market forces, to a bureaucratic, box-ticking one. In all this time, ministers have failed to give convincing examples of the problems that the ever-changing quality regimes were supposed to deal with. (Degree mills and similar essentially fraudulent operations can be dealt with through normal consumer legislation, given the will to do so. I once interviewed an applicant for one of our courses who had worked in a college I hadn’t heard of: had there been any problems about its academic standards, I asked. “Not really”, she replied brightly, “it was a genuine bogus college”.)

Why, then, do the quality wars continue? – and we can be confident that the current OfS proposals do not signal the end of hostilities. It is hard to see this as anything other than ministerial displacement activity. Sorting out the social care crisis, or knife crime, will take real understanding and the redirection of resources: easier by far to make a fuss about a non-problem and then be seen to act decisively to solve it. And to erode trust in higher education a little more.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education, London. His latest paper, ‘The University Couloir: exploring physical and intellectual connectivity’, will appear shortly in Higher Education Policy.