SRHE Blog

The Society for Research into Higher Education



Can folk pedagogies help us understand the limited impact of research on higher education?

by Alex Buckley

The SRHE conference is a great place to see our field in all its glory. From the sessions I attended in December 2025, one thing that was abundantly clear was the desire of so many HE researchers to change the world. A distinctive feature of contemporary HE research – reflecting the social sciences more broadly – is the focus on political and ethical issues, with avowedly political and ethical intentions. The improvement of society is often the explicit end, rather than the more humble improvement of our own part of the education system.

Despite this desire to make a difference, higher education research has for many years been held up as a field whose impact is not what it could be. As George Keller said in 1985, “hardly anyone in higher education pays attention to the research and scholarship about higher education”.

Asking the right questions?

There hasn’t been a lot of work on the gap between research and practice in HE – though there is a fair amount in the schools sector from which we can extrapolate, to a greater or lesser extent – but one issue that has received some attention is the fundamental one: are researchers actually asking the right questions?

Viviane Robinson is a researcher who has laid a substantial amount of blame at the feet of researchers, who “have little to offer by way of alternative solutions, when the problems they have been studying are not those of the practitioner” (Robinson 1993). I have recently used Robinson’s model of Problem-Based Methodology to explore whether research about exams in higher education does engage sufficiently with the challenges that teachers take themselves to face. The results were not encouraging.

One of the more straightforward of Robinson’s criteria for impactful research is that researchers should be addressing teachers’ beliefs, and correcting them where they are erroneous. That’s important, but what if those beliefs are hard to shift? We all have stubborn hunches about how higher education works: good ways of motivating students, how to write feedback that will make students pay attention, how to clearly communicate complex ideas. What if there are teacher beliefs that are deeply embedded, so deeply that we don’t always know we have them, but that aren’t helping us and need to change?

One idea that has been explored in the school sector, but has largely passed us by, is the concept of ‘folk pedagogies’. This idea was developed in the 1990s as an extension of the more famous concept of ‘folk psychologies’: the tacit theories that we all have that allow us to make sense of people’s behaviour. For Jerome Bruner, a natural next step from folk psychologies was the idea that we have intuitive theories about how people learn.

“Watch any mother, any teacher, even any babysitter with a child and you’ll be struck by how much of what they do is steered by notions of ‘what children’s minds are like and how to help them learn,’ even though they may not be able to verbalise their pedagogical principles.” (Bruner, 1996)

There has been some research in the school sector about the implications of this idea, particularly in terms of how much difference research makes to educational practice. Folk pedagogies have two features that will make them a factor in the impact of education research: they interfere with the uptake of new research-based ideas and approaches, and they are stubborn. On the first point, the idea is that new ideas about higher education will have to replace the old if they are to influence teachers; and on the second, evidence suggests that even where trainee teachers have ostensibly internalised more scientific theories of learning, the folk pedagogies come creeping back.

In the case of higher education, what might these commonsense, intuitive theories look like? They might just be very general ideas about how people learn, applied to the particular context of higher education. Bruner identifies a range of broad folk pedagogical views, such as one which sees ‘children as knowers’, with a focus on the gathering and organising of facts. Perhaps one kind of folk pedagogy of higher education would be the application of that idea specifically to students in universities rather than other sectors: a focus on the selection, organisation and retention of propositional knowledge within degree programmes. Perhaps there are also specific intuitive theories about higher education that influence teachers’ practices. Perhaps there is a folk intuition that university students should not be spoon-fed – that they must take responsibility for their own learning and seek to develop their own views. Perhaps there is a folk intuition that students should encounter challenging views that encourage them to question their own certainties. In the absence of research, we can only speculate (and introspect).

Respecting the ‘folk’

The idea that teachers have deep intuitions about how students learn, that those intuitions can prevent them from acting on more evidence-based beliefs, and that those intuitions are hard to shake: none of these ideas is particularly earth-shattering. They are probably common sense among those researching and enhancing higher education. The value of the idea of ‘folk pedagogies’ lies instead in the way that it encourages us to take those intuitions seriously, both as an object of study and a powerful barrier to change.

Rather than dismissing intuitions about higher education – as ignorant beliefs and hide-bound traditions – we can study them. What are they? Where do they come from? How do they change? The idea of folk pedagogies is not pejorative. There’s no shame in having intuitions about how learning works. As with folk psychological theories, they are necessary parts of how we navigate the world, and something we can’t do without. There is also deep wisdom to be found in those intuitions, even if they are sometimes misleading. Research goes wrong by departing from common sense, at least as much as the other way around.

Acknowledging the existence of folk theories of higher education can help improve the impact of our research in all sorts of ways. We can research them, to understand why teachers and students (and others) do what they do, and the conditions in which deep intuitions can change. It can help us understand where – and why – research has departed so far from common sense as to be of little practical relevance.

It can also help us understand the scale of the challenge. In much of what we do, we’re seeking to modify what university teachers do, which very often means changing how they think. The reality is that we aren’t usually changing superficial, specific beliefs, at least not where the improvements we’re seeking are substantive. We’re changing deep beliefs picked up over a lifetime. Our model of improvement may then need to fit the old adage: if you’re not making progress at a snail’s pace, you’re not making progress. That’s a bit different from annual quality enhancement cycles or short-term strategic initiatives. We can change the world, but it will take time.

References

Bruner, J. (1996). The Culture of Education. Harvard University Press

Robinson, V. M. J. (1993). Problem-Based Methodology: Research for the Improvement of Practice. Pergamon Press

Dr Alex Buckley is an Associate Professor in the Learning & Teaching Academy at Heriot-Watt University, Scotland. His research is focused on conceptual aspects of research and practice in assessment and feedback.



What is a ‘research culture’?

by GR Evans

Should higher education providers foster a ‘research culture’? As the body responsible for research under the Higher Education and Research Act (2017), UK Research and Innovation offers its own definition. Such a ‘culture’ will encompass ‘the behaviours, values, expectations, attitudes and norms’ of ‘research communities’, influence ‘researchers’ career paths and determine ‘the way that research is conducted and communicated’. The Royal Society adopts the same wording.

Nevertheless, an agreed definition seems elusive. The British Academy points to the ‘impact and value’ that research in the humanities and related disciplines can ‘deliver to policy makers and the wider public’. The Wellcome Trust is critical of ‘current practices’, which it says ‘prioritise outputs at almost any cost’. It encourages ‘curiosity-based ideas’, even if they fail to make discoveries. Cambridge University has an Action Research on Research Culture project in collaboration with the University of Edinburgh, Leiden University, Freie Universität Berlin and ETH Zurich, suggesting an international reach towards defining such a culture. A Concordat and Agreements Review (April 2023), initiated by Universities UK, UKRI and Wellcome, formed a joint attempt to define ‘research culture’. It found that it was not sure ‘what a positive research culture looks like’ or what ‘research culture framework to adopt’.

Research is a relative newcomer to the work of English universities. Under the Oxford and Cambridge Act (1877), s.15, the Commissioners who were to frame new Statutes for each of the two universities were required to ‘have regard to the interests of education, religion, learning and research’. The inclusion of ‘research’ reflected its recent arrival in universities. The prompting had come from German universities, whose influence in linking a doctorate with research had been recognised rather reluctantly. Research-based Doctorates of Philosophy began to be awarded in the USA, with Yale leading the way in 1861.

Oxford and Cambridge took note. Reform of their ancient doctorates was called for in any case. The award of doctorates in Divinity had ceased to depend on advanced scholarship, and had often become more or less honorific as new Bishops began to be granted an automatic Doctorate of Divinity. The transatlantic Doctorates of Philosophy were something new because they were expressly intended for award to younger scholars on the basis of a first research exercise. From the end of the nineteenth century Oxford and Cambridge experimented with postgraduate Bachelors degrees awarded on the basis of a piece of original research. Doctorates for young scholars came next, and in 1921 Oxford granted its first DPhil and Cambridge its first PhD, both expecting original research. After some debate the existing ancient doctorates became ‘higher’ doctorates, to be awarded to more senior scholars, normally on the basis of a significant body of published research. In all this lay the beginnings of an academic ‘research culture’, though well into the twentieth century the Fellows of Colleges did not usually have – or seek – doctorates. Advertised vacancies for academic jobs commonly express a preference for a candidate to have a postgraduate degree but do not require it.

The multiplication of English universities which began in the early nineteenth century was added to considerably from the end of the nineteenth century with the creation of the ‘redbrick’ universities in major cities. It began to be taken for granted that universities would be responsible for research as well as teaching. However, when polytechnics became universities under the Further and Higher Education Act in 1992, they preserved contracts mainly concerned with teaching. That has remained the case with UCU’s ‘Post-1992 National Contract’. An institution may choose to add research to the contracts of its own academics. ‘Teaching-only’, ‘Teaching and Scholarship’ and ‘Teaching-focussed’ academic jobs have become increasingly common.

Some universities now seek to fix the proportion of time their teaching-and-research academics may spend on research. The private ‘alternative providers’ encouraged by Governments in the first decades of the twenty-first century have so far rarely made a significant effort to be research-active, with the Office for Students mentioning only one actively seeking research-degree-awarding powers. Cuts to contracted research time are threatened with the increasing pressure on university budgets, with Kent, for example, lowering it from 40% to 20%.

Doctorates continue to proliferate at DPhil/PhD level, but they may no longer require research as formerly understood. Many providers now offer ‘professional’ doctorates, leading for example to a Doctorate in Business, Education or Engineering, in which the thesis may be replaced partly or wholly by professional experience, and study may take place in conjunction with paid work as a required element.

‘Taught’ Masters degrees and even ‘taught doctorates’ have begun to multiply. For taught doctorates, advanced study may involve taught courses rather than, or in addition to, independent research. The ‘taught’ element may involve lectures on, or exposition of, the skills needed in research, or cover elements of the subject content of the doctorate.

Research expands to include ‘innovation’ and ‘knowledge exchange’

The definition of ‘research’ has been expanding to include ‘innovation’ and ‘knowledge exchange’, both now responsibilities of UKRI. ‘Innovate UK’ had its origins in the ‘Lambert’ Review of Business-University Collaboration (2003). This considered the ‘demand for research from business’ alongside the ‘dual support’ system of university funding, with infrastructure funded from the block grant and funding for research projects dependent on grants and the Research Councils. Lambert ‘proposed a number of principles that should be adopted to encourage world-class business research’. This encouraged the view that the ‘originality’ of research could include ‘innovation’.

Governments have actively encouraged ‘Knowledge Exchange’. The Knowledge Exchange Framework is now the responsibility of Research England within UKRI. It embraces a range of modes of ‘exchange’: partnerships involving collaborative research; contract research; consultancy; working with business; ‘continuing professional development’; intellectual property and its commercialisation; public and community engagement; and local growth and regeneration – some, but not all, having a defined ‘research’ element. In 2020 a Concordat for the advancement of Knowledge Exchange in Higher Education was prompted in part to ‘deliver the UK Government’s R&D 2.4% target’ and also to ‘tackle challenges such as levelling up prosperity across the country’, as Amanda Solloway, then Minister for Science, Research and Innovation, put it in her Foreword.

In 2015 the creation of Degree Apprenticeships added a further recognised strand to ‘teaching’ in higher education, offering a form of ‘professional’ or ‘technical’ research. Providers were to ‘specialise in working with industry and employers’. Their teaching would be ‘hands-on and designed to prepare students for their careers. Their knowledge and research drive industry and the public services to innovate, thrive and meet challenges’.

However, an apprenticeship is first and foremost an employment. The relationship with the exercise of degree-awarding powers has been found to carry a heavy ‘regulatory burden’. Providers complain that they are ‘caught up in a tangle of regulation and unnecessary bureaucracy, which is hampering growth and innovation’. Degree apprenticeships have not yet caught on, for these reasons and because they are found to be ‘costly to deliver’.

Funding for them may be uncertain. The Apprenticeship Levy is a tax dating from 2015 and enforced by the Finance Act (2016). Its operation is one of the responsibilities of the Education and Skills Funding Agency (ESFA). It is paid by employers with a pay bill of over £3m, with Government contributing from it to the training costs for small businesses. However, the Levy does not fund Degree Apprenticeships.

There have been calls for the Lifelong Loan Entitlement (LLE) to include degree apprenticeships. However, the most recent Government Policy Paper (April 2024), which embraces Higher Technical Qualifications (HTQs) and includes ‘modules of technical courses of clear value to employers’, is still working with the Institute for Apprenticeships and Technical Education (IfATE) on the possible application of the LLE where ‘qualifications submitted to the gateway are technical in nature’. There is therefore some way to go before degree apprenticeships can become accepted postgraduate qualifications expressly involving research and with reliable sources of funding.

Funding for an institutional ‘research culture’ goes beyond higher education providers

Taxpayer funding for universities began to be allocated by the academic-led University Grants Committee (UGC) from 1919. It was to take the form of a block grant, which the recipient university might allocate as it chose. At the end of the twentieth century the UGC was replaced first, briefly, by a single Funding Council and then, under the Further and Higher Education Act (1992), by four separate Funding Councils for the nations of the UK, with the Higher Education Funding Council for England taking over the task for England. The new Act stipulated how taxpayer funding for higher education might be applied between teaching and research, or for the support of either.

Under the Thatcher Government public funding for higher education was reduced, leaving the University Grants Committee less to allocate from the 1980s (Shattock, 1984; Shattock, 2008). The decision was taken to vary grants for funding according to the research performance of universities. The resulting ‘quality-related’ (QR) research ‘selectivity’ made it necessary to devise measurements of the research results to be rewarded. In 1986 the UGC sought statements from universities on their subject areas by cost, with samples of five ‘outputs’ from each. Satisfactory research performance came to be shaped largely by measurements of this kind.

A further exercise in ‘research selectivity’ followed in 1989. When the UGC was replaced by the statutory Universities Funding Council, another exercise followed in 1992. Its findings prompted an application for judicial review from the Institute of Dental Surgery, alleging that its performance had not been properly measured. The court accepted that the Institute had had independent status for grant purposes under the Education Reform Act (1988), s.235(1), and the judgment gave a detailed description of the process which had been followed in arriving at the relatively low rating the Institute was challenging. It faulted the Funding Council for its failure to give reasons for a decision which would affect future funding for the Institute of Dental Surgery. That prompted some rethinking of the procedure to be used for rating a higher education provider’s research so as to allocate funding selectively.

The Further and Higher Education Act of 1992 replaced the short-lived first single Funding Council with four national statutory funding bodies. The resulting Higher Education Funding Council for England (HEFCE) conducted its own Research Assessment Exercise (RAE) every few years, amending the procedure and requirements each time, with infrastructure ‘teaching and research’ funding duly allocated on the basis of its results.

After the exercise of 2001 with its 68 Units of Assessment there was growing concern about the fairness of a method of assessment based on disciplinary or subject ‘units’. The Second Report of the House of Commons Science and Technology Committee (April 2002) heard evidence to that effect and recommended that HEFCE ‘ensure that its quality assessment does not discourage or disadvantage interdisciplinary research’, arguing that ‘such research offers some of the most fertile ground for innovation and discovery’. That adjustment proved difficult to achieve.

The RAE was replaced in 2014 by the Research Excellence Framework (REF). Costing £246m in 2014, the REF proved to be vastly more expensive than the RAE, which had cost £66m for the 2008 exercise. It was last held in 2021 with Research England in charge instead of HEFCE. It is scheduled to be repeated in 2029.

The ‘Stern’ Report, Building on Success and Learning from Experience: an Independent Review of the Research Excellence Framework (2016), was commissioned to report on the REF of 2014. It recommended simplification of the REF submission requirements for HEIs, and rethinking of the use to be made by Government of the resulting data. It approved of continuing the long-established dual support system, with a non-hypothecated taxpayer-funded block grant dependent on institutional performance and separate project funding to be sought competitively from the Research Councils, charities and other funders.

Stern, arguing that assessment should better recognise the reality of the ways in which academic research was conducted in HEIs, used the expression ‘research environment’ rather than ‘research culture’. In the light of the problems that the selection of individuals for submission had caused for the ‘career choices, progression and morale’ of academic and research staff, it recommended that ‘all research active staff should be returned in the REF’ and that ‘outputs’ should not be ‘portable’ to other institutions. It discouraged the hiring of ‘tall poppies’ to improve an institution’s standing in research and urged that peer review should be made more transparent. Like the RAE, the REF has encouraged gaming in the recruitment of researchers. However, the REF added the criterion of ‘impact’, broadly conceived in terms of the benefit an institution’s research brought to the economy and society. That addition began to reshape public policy and encourage the framing of a concept of an institutional ‘research culture’.

The separation of research from teaching

The ‘block grant’ lasted for nearly a century until the Higher Education and Research Act of 2017 abolished HEFCE and placed teaching and research in different Departments of State, allocating the responsibilities respectively to new bodies, the Office for Students and UK Research and Innovation. In future, a much-reduced portion of teaching funding was to be allocated to providers by the new Office for Students, to supplement the income now available from higher undergraduate tuition fees. With the abolition of HEFCE, public infrastructure funding for research (laboratories and libraries) was to be allocated by Research England, which was placed within the new UK Research and Innovation. Project funding was to continue to be sought in the form of grants, including those from the Research Councils, which were also moved within UKRI.

Uncertainty about the acceptability of the REF continues despite these radical organisational changes. UKRI published a review of ‘perceptions’ of the 2021 exercise. It found that views were mixed. Among the negatives were the institutional cost and negative effects of repeated measurement, and the potential distortion of the freedom to pursue an inquiry that might not turn out to improve the institution’s ratings, with damaging funding consequences. The review also had something to say on the effect the REF was felt to have on early career researchers. An international Agreement on reforming research assessment was arrived at in July 2022. This called for assessment to ‘reward the originality of ideas, the professional research conduct, and results beyond the state-of-the-art’. There were calls for the abolition of the REF in England, or for changes to be made before it was held again.

Public funding of research beyond higher education

In How we fund higher education providers (May 2023), Research England gives an account of its responsibilities in allocating the taxpayer funding of research. That funding is not limited to providers of higher education: Research England explains that it can fund the research and ‘knowledge exchange’ activities not only of higher education providers (HEPs) but also of ‘other organisations that carry out services in relation to research or knowledge exchange in eligible HEPs’.

Plans for completion of the next REF were deferred to 2029 in response to concerns raised about its content and purpose, in particular how it was to reflect the element of ‘People, Culture and Environment’. It was agreed that a ‘pilot’, conducted in eight disciplinary areas, would be needed to settle the design of ‘indicators’; the pilot was initiated with the help of Technopolis and CRAC-Vitae (part of the Careers Research & Advisory Centre). Vice-Chancellors and other heads of research-active higher education providers funded by Research England were sent a letter explaining the plan, with a link to current expectations. However, there were mixed views about the definition of ‘research culture’.

The need for ‘selectivity’ has continued to require ‘measurement’. This encourages an emphasis on ‘research activity’ rather than the fostering of the still imperfectly defined ‘research culture’.

SRHE member GR Evans is Emeritus Professor of Medieval Theology and Intellectual History in the University of Cambridge.



Yes, but what about the academic research?

by Steven Jones

Review of Influencing Higher Education Policy: A Professional Guide to Making an Impact, edited by Ant Bagshaw and Debbie McVitty (London: Routledge, 2020)

“The existence of Wonkhe won’t save us,” suggests Debbie McVitty (p13), “but it could be a good place to start.”

And so the tone is set for a new Routledge collection about HE ‘wonkery’, a relatively recent phenomenon that has doubtless changed the way in which the sector operates. Wonks are policy analysts, planners and strategists, and HE is blessed with more than its fair share. New policy development? Expect multiple ‘hot takes’ straight to your inbox. HE story in the mainstream media? Expect a range of insider perspectives that allow every possible angle to be explored. No more waiting for trade publications to drop through the letterbox, let alone for academic critiques to satisfy a journal’s peer review process. Thanks mostly to Wonkhe, we now have real-time analysis of everything that ever happens in HE.

In such a context, a book that explores the sector’s influence on policy is timely. Important questions need to be confronted. How can universities maintain integrity in an increasingly hostile regulatory and media environment? What does meaningful policy ‘impact’ look like? Which individuals or groups are most legitimately entitled to advocate on behalf of the sector? And, crucially, how can academic research evidence be communicated to those who most need to engage with it?

This collection, edited by Ant Bagshaw (Nous) and Debbie McVitty (Wonkhe), takes on some of these questions. It’s at its best when mapping legislative processes and regulatory frameworks, as William Hammonds and Chris Hale (both Universities UK) do, or comparing policy contexts, as Cathy Mitchell (Scottish Funding Council) does in relation to performance measurement. Anna Bradshaw (British Academy) and Megan Dunn (Greater London Authority) theorise the relationship between evidence and policy in valuable new ways, while Adam Wright (British Academy) and Rille Raaper (Durham University) perceptively characterise students’ framing within HE policy. Clare Randerson (University of Lincoln) enlightens readers on the OECD’s under-acknowledged role in HE policy-making, while Diane Beech’s (University of Warwick) chapter offers a useful guide to think-tanks, before it spirals into an advert for the Higher Education Policy Institute.

However, many questions remain unanswered. Partly, this is because some of the book’s contributors spend more time celebrating their own influence than critically evaluating the assumptions that underpin their proposed solutions. Indeed, many university staff will feel perplexed by McVitty’s opening assertion. Who is the ‘us’ that the existence of Wonkhe won’t save? What then makes Wonkhe a good place to start? And why does the ‘us’ need saving anyway?

This is not the sort of detail on which the book’s contributors tend to dwell. Rather, the style is choppy and pacey. In many chapters, soundbites are favoured over deeper reflection. Blunt recommendations (often bullet pointed and emboldened) are ubiquitous, generally urging ‘us’ to do things differently.

As usual in HE wonk discourses, academics hold a strange and curious place. Often we’re problematised. Sometimes we’re patronised. But mostly we’re just ignored. Rarely is it acknowledged that academic research might actually have anything useful to contribute. Universities are assumed to be desperately in need of some wonk savviness to overcome their policy naivety. Why would any institution turn to its own academic expertise when it can commission all-knowing external consultants? Scholarship isn’t part of the solution. If anything, it’s part of the problem.

Take Iain Mansfield’s (Policy Exchange) list of the “additional constraints” (p87) that he argues make policy influence tougher in HE than in other sectors. Among the subheadings presented is ‘left-leaning’. Here, academics and students are homogenised as anti-consumer, anti-rankings and generally difficult. There’s even a censorious mention of “cultural attitudes to issues such as class, race and gender” (p88). Another of Mansfield’s subheadings is ‘non-independence of research’. Here, the focus is academics’ perceived partiality. With so few scholarly sources cited, it is difficult to know on what evidence suspicion rests. However, the implication is clear: academics can’t be trusted to research themselves or their professional environment with objectivity. “Although any sector is subject to vested interests and unconscious bias,” Mansfield snipes, “only in HE are those same people writing the research” (p90).

Elsewhere in the collection, Josie Cluer (EY) and Sean Byrne (Ealing, Hammersmith and West London College) make the case that “only by understanding, predicting, and being ready for the Politics – with a capital P – will [wonks] be able to influence the policies that will support the sector to thrive.” Among the few examples offered is that of vice-chancellors’ pay. Here the implication is that the sector should have better managed recent negative media coverage that resulted in “a series of uncomfortable moments” (p22). While brand management matters greatly, and while the authors are right to suggest that some universities are suboptimal when it comes to shielding their reputation, the issue of senior pay is surely more nuanced than the single-paragraph analysis suggests. Being ‘ready for the Politics’ (with or without a capital P) requires universities to develop carefully thought-out internal policies, consistent with their claimed civic role and open to public scrutiny. The message implied by this book, in places, is that HE can continue its merry march toward the market, just so long as it remembers to buy in the right kind of spin.

Granted, the editors pre-empt some of these criticisms, emphasising that engagement with academic literature is not their priority, and that the collection essentially functions as a “professional guide” (p xvii). However, the analysis presented is often alarmingly thin. The first of Colette Fletcher’s (University of Winchester) five ‘lessons’ on how to influence policy – “have the confidence to be yourself” (p134) – captures something of the book’s tendency to drift into feelgood self-help rhetoric where close-up, critical analysis might be more appropriate.

Nonetheless, contributors are clearly satisfied that they have what it takes to save the sector. Everyone should get behind the wonks’ solutions, not least us pesky, prejudiced academics. Indeed, what Influencing Higher Education Policy arguably does best is highlight the growing challenge to the ways in which scholarly work is undertaken and disseminated. Wonkhe’s central role in bringing multiple perspectives to HE debates, usually in super-fast time, should be welcomed. But in such an environment, the book reminds us how easily academics and their research can be marginalised.

Without question, the UK HE sector needs to become better at influencing policy. Bagshaw is right to say we’ve relied on “benign amateurism” (p169) for too long. And without question, this collection includes several chapters’ worth of considered reflections and constructive recommendations. But elsewhere the book lapses into glib strategising where it could be reconnecting with universities’ core purpose. The best way to improve the sector’s standing is to ensure that it operates according to the highest possible ethical standards, and makes policy recommendations firmly grounded in empirical evidence.

One of the book’s contributors quotes Richard Branson to illustrate a policy point: “it’s amazing what doors can open if you reach out to people with a smile” (p139). But who knows, perhaps it’s even more amazing what doors can open if you reach out to people with rigorous academic research?

SRHE member Steven Jones is a Professor of Higher Education at the Manchester Institute of Education. His research focuses on equity issues around students’ access, experiences and outcomes. Steven is a Principal Fellow of the Higher Education Academy and he teaches on the University of Manchester’s PGCert in Higher Education. The views expressed here are his own.



Performance-based research assessment is narrowing and impoverishing the university in New Zealand, UK and Denmark


The article below is reposted from the original piece published at the LSE Impact of Social Sciences blog. It is reposted under a Creative Commons 3.0 licence.

 

Susan Wright, Bruce Curtis, Lisa Lucas & Susan Robertson provide a basic outline of their working paper on how performance-based research assessment frameworks in different countries operate and govern academic life. They find that assessment methods steer academic effort away from the wider purposes of the university, enhance the powers of leaders, propagate unsubstantiated myths of meritocracy, and demand conformity. But the latest quest for ‘impact’ may in effect unmask these operations and diversify ‘what counts’ across contexts.

Our working paper Research Assessment Systems and their Impacts on Academic Work in New Zealand, the UK and Denmark arises from the EU Marie Curie project ‘Universities in the Knowledge Economy’ (URGE) and specifically from its 5th work package, which examined how reform agendas that aimed to steer university research towards the ‘needs of a knowledge economy’ affected academic research and the activities and conduct of researchers. This working paper has focused on Performance-Based Research Assessment systems (PBRAs). PBRAs in the UK, New Zealand and Denmark now act as a quality check, a method of allocating funding competitively between and within universities, and a method for governments to steer universities to meet what politicians consider to be the needs of the economy. Drawing on the studies reported here and the discussions that followed their presentation to the URGE symposium, four main points can be highlighted. Continue reading