
Statistical illogic: the fallacy of Jacob Bernoulli and others

by Paul Alper

Bernoulli’s Fallacy: Statistical Illogic and the Crisis of Modern Science by Aubrey Clayton.

“My goal with this book is not to broker a peace treaty; my goal is to win the war.”    (Preface p xv)

“We should no more be teaching p-values in statistics courses than we should be teaching phrenology in medical schools.” (p239)

It is possible, even probable, that many a PhD thesis or journal article in the softer sciences has got by on a misunderstanding of probability and statistics. Clayton’s book aims to expose the shortcomings of a fallacy first attributed to the 17th century mathematician Jacob Bernoulli but relied on repeatedly for centuries afterwards, despite the 18th century work of the statistician Thomas Bayes, and exemplified in the work of RA Fisher, the staple of so many social science primers on probability and statistics.

In the midst of the frightening Cold War, on 12 February 1960 I attended a special lecture at the University of Wisconsin-Madison by Fisher, the most prominent statistician of the 20th century, who was then touring the United States and other countries. I had never heard of him; indeed, despite being in grad school, my undergraduate experience was entirely deterministic: apply a voltage then measure a current, apply a force then measure an acceleration, and so on. Not a hint, not a mention of variability, noise, or random disturbance. The general public’s common currency did not then include such terms as random sample, statistical significance, or margin of error.

However, Fisher was speaking on the hot topic of that day: was smoking a cause of cancer? Younger readers may wonder how in the world this was a debatable subject when, in hindsight, it is so strikingly obvious. Well, it was not obvious in 1960, and the history of inflight smoking indicates how difficult it was to turn the tide, and how many years it took. Fisher’s tour of the United States was sponsored by the tobacco industry, but it would be wrong to conjecture that he was being hypocritical. And not just because he was a smoker himself.

Fisher believed that mere observation was insufficient for concluding that A causes B; it could be that B causes A, or that some C is responsible for both A and B. He insisted upon experimental and not merely observational evidence. According to Fisher, it could be that some underlying physical condition led people both to smoke and to develop cancer, or that some other cause, such as pollution, was to blame. On his account, to link smoking experimentally to cancer, some children would have to be assigned at random to smoke and others required not to smoke, and the incidence of cancer in the two groups noted as time went by.

However, according to Clayton, Fisher himself, just like Jacob Bernoulli, had it backwards when it came to analysing experiments. If Fisher and Bernoulli could make this mistake, it is easy for others to fall into the trap, because ordinary language keeps tripping us up. Clayton expends much effort on showing examples, such as the famous Prosecutor’s Fallacy. The fallacy was exemplified in the UK by the infamous Meadow case, discussed at length by Clayton, in which a prosecution expert witness made an unsustainable assertion about the probability of innocence being “one in 73 million”.
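
The arithmetic behind that figure, and behind its misreading, is easy to make explicit. Below is a minimal sketch in Python, using the widely reported numbers from the case rather than Clayton’s own worked example; the prior probability assigned to the rival explanation (double infanticide) is a purely hypothetical round figure for illustration.

    # Illustrative arithmetic of the prosecutor's fallacy, using the widely
    # reported figures from the case (not Clayton's own worked example).

    p_one_death = 1 / 8543   # claimed chance of one unexplained infant death

    # The expert squared this, wrongly treating the two deaths as
    # independent events, to reach the notorious "one in 73 million":
    p_evidence_given_innocence = p_one_death ** 2
    print(f"1 in {1 / p_evidence_given_innocence:,.0f}")   # 1 in 72,982,849

    # Even taken at face value, that number is P(evidence | innocence),
    # not P(innocence | evidence). Bayes' theorem also needs a prior for
    # the rival explanation; suppose double infanticide occurs in, say,
    # 1 in 100 million families (a hypothetical figure for illustration):
    p_double_murder = 1 / 100_000_000
    p_innocence = p_evidence_given_innocence / (
        p_evidence_given_innocence + p_double_murder)
    print(f"P(innocence | two deaths) = {p_innocence:.2f}")   # about 0.58

Even with the dubious “one in 73 million” taken at face value, innocence comes out more likely than not once the rarity of the alternative explanation is considered: the tiny probability of the evidence given innocence is not the probability of innocence.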

The Bayesian way of looking at things is to consider the probability that a person is guilty, given the evidence. This is not the same as the probability of the evidence, given that the person is guilty, which is the ‘frequentist’ approach adopted by Fisher, and the two can be wildly different numerically. Another example comes from the medical world, where there is confusion between the probability of having a disease given a positive test, and the probability of a positive test given the disease:

        Prob (Disease | Test Positive) – the Bayesian way of looking at things

and

        Prob (Test Positive | Disease) – the frequentist approach

The patient is interested in the former but is often quoted the latter, known as the sensitivity of the test; the two can be markedly different depending on the base rate of the disease. Suppose the base rate is one in 1,000, the sensitivity is 90%, and the test also has a 10% false positive rate. Then for every 1,000 people tested, about 100 healthy people will wrongly test positive, while only about one person actually has the disease. A Bayesian would therefore conclude, correctly, that a positive test is roughly 100 times more likely to be a false positive than a true one; in other words, the hypothesis that the person has the disease is not well supported by the evidence. A frequentist, however, might mistakenly say that if you test positive there is a 90% chance that you have the disease.
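
The arithmetic is easy to check. Here is a minimal sketch in Python, assuming the rates quoted above together with the 10% false positive rate (ie 90% specificity) that the prose leaves implicit:

    # Bayes' theorem for the diagnostic test described above. Assumed
    # rates: base rate 1 in 1,000; sensitivity 90%; and a 10% false
    # positive rate (ie 90% specificity), which the text leaves implicit.

    base_rate = 1 / 1000             # P(disease)
    sensitivity = 0.90               # P(test positive | disease)
    false_positive_rate = 0.10       # P(test positive | no disease)

    # Total probability of a positive result:
    p_positive = (sensitivity * base_rate
                  + false_positive_rate * (1 - base_rate))

    # What the patient actually wants to know:
    p_disease_given_positive = sensitivity * base_rate / p_positive
    print(f"P(disease | positive) = {p_disease_given_positive:.3f}")   # 0.009

    # Of every 1,000 people tested, roughly one true positive and about
    # 100 false positives: a positive result is about 100 times more
    # likely to be false than true, despite the "90% accurate" test.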

The quotation from page xv of Clayton’s preface, which begins this essay, shows how determined Clayton, a Bayesian, is to counter Bernoulli’s fallacy and set things straight. Fisher’s frequentist approach still finds favour among social scientists because his setup, however flawed, was an easy recipe to follow: assume a straw-man hypothesis such as ‘no effect’, take data to obtain a so-called p-value and, in the mechanical manner suggested by Fisher, if the p-value is low enough, reject the straw man. The winner is then declared to be the opposite of the straw man, namely that the effect, hypothesis, contention or claim is real.
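
For readers who know that recipe only in words, here is a minimal sketch in Python of the mechanical routine on simulated data; the two-sample t-test and the 5% threshold are the conventional choices, not anything prescribed by Clayton.

    # A minimal sketch of the null-hypothesis significance testing recipe:
    # set up a "no effect" straw man, compute a p-value, reject if small.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=0.0, scale=1.0, size=50)     # simulated data
    treatment = rng.normal(loc=0.4, scale=1.0, size=50)   # true effect: 0.4

    # Straw-man (null) hypothesis: the two group means are equal.
    t_stat, p_value = stats.ttest_ind(treatment, control)

    # The mechanical step: reject the straw man if p is low enough, and
    # declare the opposite, namely that the effect is real.
    if p_value < 0.05:
        print(f"p = {p_value:.4f}: reject 'no effect'; effect declared real")
    else:
        print(f"p = {p_value:.4f}: fail to reject 'no effect'")

    # Note the inversion Clayton objects to: p is the probability of data
    # at least this extreme given the null, not the probability of the
    # null given the data.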

Fisher, a founder and not just a follower of the eugenics movement, was, as I once wrote, “a genius, and difficult to get along with.” Upon reflection, I changed the conjunction to an implication: “a genius, therefore difficult to get along with.” His son-in-law at the time, back on 12 February 1960, was George Box, also a famous statistician and the author of the famous statistical aphorism “all models are wrong, but some are useful”, who had just been appointed head of the University of Wisconsin’s statistics department. Unlike Fisher, Box was a very agreeable and kindly person; as evidence of those qualities, I note that he was on the committee that approved my PhD thesis, a writing endeavour of mine which I hope is never unearthed for future public consumption.

All of that was a long time ago, well before the Soviet Union collapsed and today’s military rise of Russia. Tobacco use and sales throughout the world are much reduced, while cannabis acceptance is on the rise. Statisticians have since moved on to consider and solve much weightier computational problems under the rubric of so-called Data Science. I was in my mid-twenties, and I doubt there were many people younger than I at that Fisher presentation, so I am on track to be the last one alive to have heard a lecture by Fisher disputing smoking as a cause of cancer. He died in Australia in 1962, a month after my 26th birthday, but his legacy, reputation and contribution live on, and hence the fallacy of Bernoulli as well.

Paul Alper is an emeritus professor at the University of St. Thomas, having retired in 1998. For several decades he regularly contributed Notes from North America to Higher Education Review. He is almost the exact age of Woody Allen and the Dalai Lama and thus was fortunate to be too young for some wars and too old for others. In the 1990s he was awarded a Nike sneaker endorsement, which resulted in his paper ‘Imposing Views, Imposing Shoes: A Statistician as a Sole Model’, in The American Statistician, August 1995, Vol 49, No 3, pages 317-319.


Some different lessons to learn from the 2020 exams fiasco

by Rob Cuthbert

The problems with the algorithm used for school examinations in 2020 have been exhaustively analysed, before, during and after the event. The Royal Statistical Society (RSS) called for a review, after its warnings and offers of help in 2020 had been ignored or dismissed. Now the Office for Statistics Regulation (OSR) has produced a detailed review of the problems, Learning lessons from the approach to developing models for awarding grades in the UK in 2020. But the OSR report only tells part of the story; there are larger lessons to learn.

The OSR report properly addresses its limited terms of reference in a diplomatic and restrained way. It is far from an absolution – even in its own terms it is at times politely damning – but in any case it is not a comprehensive review of the lessons which should be learned; it is a review of the lessons for statisticians to learn about how other people use statistics. Statistical models are tools, not substitutes for competent management, administration and governance. The report makes many valid points about how the statistical tools were used, and how their use could have been improved, but the key issue is one of meta-perspective: no-one was addressing the big picture sufficiently. An obsession with consistency of ‘standards’ obscured the need to consider the wider human and political implications of the approach. In particular, it is bewildering that no-one in the hierarchy of control was paying sufficient attention to two key differences. First, national ‘standardisation’ or moderation had been replaced by a system which pitted individual students against their classmates, subject by subject and school by school. Second, 2020 students were condemned to live within the bounds not of the nation’s, but their school’s, historical achievements. The problem was not statistical, nor anything to do with the algorithm; the problem was the way the problem itself had been framed – as many commentators pointed out from an early stage. The OSR report (at 3.4.1.1) said:

“In our view there was strong collaboration between the qualification regulators and ministers at the start of the process. It is less clear to us whether there was sufficient engagement with the policy officials to ensure that they fully understood the limitations, impacts, risks and potential unintended consequences of the use of the models prior to results being published. In addition, we believe that, the qualification regulators could have made greater use of opportunities for independent challenge to the overall approach to ensure it met the need and this may have helped secure public confidence.”

To put it another way: the initial announcement by the Secretary of State was reasonable and welcome. When Ofqual proposed that ranking students and tying each school’s results to its past record was the only way to do what the SoS wanted, no-one in authority was willing either to change the approach, or to make the implications sufficiently transparent for the public to lose confidence at the start, in time for government and Ofqual to change their approach.

The OSR report repeatedly emphasises that the key problem was a lack of public confidence, concluding that:

“… the fact that the differing approaches led to the same overall outcome in the four countries implies to us that there were inherent challenges in the task; and these challenges meant that it would have been very difficult to deliver exam grades in a way that commanded complete public confidence in the summer of 2020 …”

“Very difficult”, but, as Select Committee chair Robert Halfon said in November 2020, things could have been much better:

“the “fallout and unfairness” from the cancellation of exams will “have an ongoing impact on the lives of thousands of families”. … But such harm could have been avoided had Ofqual not buried its head in the sand and ignored repeated warnings, including from our Committee, about the flaws in the system for awarding grades.”

As the 2021 assessment cycle comes closer, attention has shifted to this year’s approach to grading, when once again exams will not feature except as a partial and optional extra. When the interim Head of Ofqual, Dame Glenys Stacey, appeared before the Education Select Committee, Schools Week drew some lessons which remain pertinent, but there is more to say. An analysis of 2021 by George Constantinides, a professor of digital computation at Imperial College whose 2020 observations were forensically accurate, has been widely circulated and equally widely endorsed. He concluded in his 26 February 2021 blog that:

“the initial proposals were complex and ill-defined … The announcements this week from the Secretary of State and Ofqual have not helped allay my fears. … Overall, I am concerned that the proposed process is complex and ill-defined. There is scope to produce considerable workload for the education sector while still delivering a lack of comparability between centres/schools.”

The DfE statement on 25 February kicks most of the trickiest problems down the road, and into the hands of examination boards, schools and teachers:

“Exam boards will publish requirements for schools’ and colleges’ quality assurance processes. … The head teacher or principal will submit a declaration to the exam board confirming they have met the requirements for quality assurance. … exam boards will decide whether the grades determined by the centre following quality assurance are a reasonable exercise of academic judgement of the students’ demonstrated performance. …”

Remember in this context that Ofqual acknowledges “it is possible for two examiners to give different but appropriate marks to the same answer”. Independent analyst Dennis Sherwood and others have argued for alternative approaches which would be more reliable, but there is no sign of change.

Two scenarios suggest themselves. In one, where this year’s results are indeed pegged to the history of previous years, school by school, we face the prospect of overwhelming numbers of student appeals, almost all of which will fail, leading no doubt to another failure of public confidence in the system. The OSR report (3.4.2.3) notes that:

“Ofqual told us that allowing appeals on the basis of the standardisation model would have been inconsistent with government policy which directed them to “develop such an appeal process, focused on whether the process used the right data and was correctly applied”.”

Government policy for 2021 seems not to be significantly different:

“Exam boards will not re-mark the student’s evidence or give an alternative grade. Grades would only be changed by the board if they are not satisfied with the outcome of an investigation or malpractice is found. … If the exam board finds the grade is not reasonable, they will determine the alternative grade and inform the centre. … Appeals are not likely to lead to adjustments in grades where the original grade is a reasonable exercise of academic judgement supported by the evidence. Grades can go up or down as the result of an appeal.” (emphasis added)

There is one crucial exception: in 2021 every individual student can appeal. Government no doubt hopes that this year the blame will all be heaped on teachers, schools and exam boards.

The second scenario seems more likely and is already widely expected, with grade inflation outstripping the 2020 outcome. There will be a check, says DfE, “if a school or college’s results are out of line with expectations based on past performance”, but it seems doubtful whether that will be enough to hold the line. The 2021 approach was only published long after schools had supplied predicted A-level grades to UCAS for university admission. Until now there has been a stable relationship between predicted grades and examination outcomes, as Mark Corver and others have shown. Predictions exceed actual grades awarded by consistent margins; this year it will be tempting for schools simply to replicate their predictions in the grades they award. Indeed, it might be difficult for schools not to do so, without leaving their assessments subject to appeal. In the circumstances, the comments of interim Ofqual chief Simon Lebus that he does not expect “huge amounts” of grade inflation seem optimistic. But it might be prejudicial to call this ‘grade inflation’, with its pejorative overtones. Perhaps it would be better to regard predicted grades as indicators of what each student could be expected to achieve at something close to their best – which is in effect what UCAS asks for – rather than when participating in a flawed exam process. Universities are taking a pragmatic view of possible intake numbers for 2021 entry, with Cambridge having already introduced a clause seeking to deny some qualified applicants entry in 2021 if demand exceeds the number of places available.

The OSR report says that Ofqual and the DfE:

“… should have placed greater weight on explaining the limitations of the approach. … In our view, the qualification regulators had due regard for the level of quality that would be required. However, the public acceptability of large changes from centre assessed grades was not tested, and there were no quality criteria around the scale of these changes being different in different groups.” (3.3.3.1)

The lesson needs to be applied this year, but there is more to say. It is surprising that there was apparently such widespread lack of knowledge among teachers about the grading method in 2020 when there is a strong professional obligation to pay attention to assessment methods and how they work in practice. Warnings were sounded, but these rarely broke through to dominate teachers’ understanding, despite the best efforts of education journalists such as Laura McInerney, and teachers were deliberately excluded from discussions about the development of the algorithm-based method. The OSR report (3.4.2.2) said:

“… there were clear constraints in the grade awarding scenario around involvement of service delivery staff in quality assurance, or making the decisions based on results from a model. … However, we consider that involvement of staff from centres may have improved public confidence in the outputs.”

There were of course dire warnings in 2020 to parents, teachers and schools about the perils of even discussing the method, which undoubtedly inhibited debate, but even before then exam processes were not well understood:

“… notwithstanding the very extensive work to raise awareness, there is general limited understanding amongst students and parents about the sources of variability in examination grades in a normal year and the processes used to reduce them.” (3.2.2.2)

My HEPI blog just before A-level results day was aimed at students and parents, but it was read by many thousands of teachers, and anecdotal evidence from the many comments I received suggests it was seen by many teachers as a significant reinterpretation of the process they had been working on. One teacher said to Huy Duong, who had become a prominent commentator on the 2020 process: “I didn’t believe the stuff you were sending us, I thought it [the algorithm] was going to work”.

Nevertheless the mechanics of the algorithm were well understood by many school leaders. FFT Education Datalab was analysing likely outcomes as early as June 2020, and reported that many hundreds of schools had engaged them to assess their provisional grade submissions, some returning with a revised set of proposed grades for further analysis. Schools were seduced, or reduced, to trying to game the system, feeling they could not change the terrifying and ultimately ridiculous prospect of putting all their many large cohorts of students in strict rank order, subject by subject. Ofqual were victims of groupthink; too many people who should have known better simply let the fiasco unfold. Politicians and Ofqual were obsessed with preventing grade inflation, but – as was widely argued, long in advance –  public confidence depended on broader concerns about the integrity and fairness of the outcomes.

In 2021 we run the same risk of loss of public confidence. If that transpires, the government is positioned to blame teacher assessments and probably reinforce a return to examinations in their previous form, despite their known shortcomings. The consequences of two anomalous years of grading in 2020 and 2021 are still to unfold, but there is an opportunity, if not an obligation, for teachers and schools to develop an alternative narrative.

At GCSE level, schools and colleges might learn from emergency adjustments to their post-16 decisions that there could be better ways to decide on progression beyond GCSE. For A-level/BTEC/IB decisions, schools should no longer be forced to apologise for ‘overpredicting’ A-level grades, which might even become a fairer and more reliable guide to true potential for all students. Research evidence suggests that “Bright students from poorer backgrounds are more likely than their wealthier peers to be given predicted A-level grades lower than they actually achieve”. Such disadvantage might diminish or disappear if teacher assessments became the dominant public element of grading; at present too many students suffer the sometimes capricious outcomes of final examinations.

Teachers’ A-level predictions are already themselves moderated and signed off by school and college heads, in ways which must to some extent resemble the 2021 grading arrangements. There will be at least a two-year discontinuity in qualification levels, so universities might also learn new ways of dealing with what might become a permanently enhanced set of differently qualified applicants. In the longer term HE entrants might come to have different abilities and needs, because of their different formation at school. Less emphasis on preparation for examinations might even allow more scope for broader learning.

A different narrative could start with an alternative account of this year’s grades – not ‘standards are slipping’ or ‘this is a lost generation’, but ‘grades can now truly reflect the potential of our students, without the vagaries of flawed public examinations’. That might amount to a permanent reset of our expectations, and the expectations of our students. Not all countries rely on final examinations to assess eligibility to progress to the next stage of education or employment. By not wasting the current crisis we might even be able to develop a more socially just alternative which overcomes some of our besetting problems of socioeconomic and racial disadvantage.

Rob Cuthbert is an independent academic consultant, editor of SRHE News and Blog and emeritus professor of higher education management. He is a Fellow of the Academy of Social Sciences and of SRHE. His previous roles include deputy vice-chancellor at the University of the West of England, editor of Higher Education Review, Chair of the Society for Research into Higher Education, and government policy adviser and consultant in the UK/Europe, North America, Africa, and China.



The Impact of TEF

by George Brown

A report on the SRHE seminar The impact of the TEF on our understanding, recording and measurement of teaching excellence: implications for policy and practice

This seminar demonstrated that the neo-liberal policy and metrics of TEF (Teaching Excellence Framework) were not consonant with excellent teaching as usually understood.

Michael Tomlinson’s presentation was packed with analyses of the underlying policies of TEF. Tanya Lubicz-Nawrocka considered the theme of students’ perceptions of excellent teaching. Her research demonstrated clearly that students’ views of excellent teaching were very different from those of TEF. Stephen Jones provided a vibrant analysis of public discourses. He pointed to the pre-TEF attacks on universities and staff by major Conservative politicians and their supporters, intended to convince students and their parents that Government action was needed. TEF was born, and with it the advent of US-style neo-liberalism and its consequences. His final slide suggested ways of combating TEF, including promoting the broad purposes of HE teaching. Sal Jarvis succinctly summarised the seminar and took up the theme of purposes. Personal development and civic good were important purposes but were omitted from the TEF framework and metrics.

Like all good seminars, this one prompted memories, thoughts and questions during and afterwards. A few of mine are listed below; others may wish to add to them.

  • None of the research evidence supports the policies and metrics of TEF (eg Gibbs, 2017). The indictment of TEF by the Royal Statistical Society is still relevant (RSS, 2018).
  • The chairman of the TEF panel is reported to have said that TEF was not supposed to be a “direct measure of teaching” but rather “a measure based on some [my italics] of the outcomes of teaching”.
  • On the continuum between neo-liberalism and collegiality, TEF is very close to the pole of neo-liberalism, whereas student perspectives are nearer the pole of collegiality, which embraces collaboration between staff and between staff and students. Collaboration will advance excellence in teaching: TEF will not. Collegiality has been shown to increase morale and reinforce academic values in staff and students (Bolden et al, 2012).
  • Analyses of the underlying values of a metric are important because values shape policy, strategies and metrics.
  • ‘Big data’ analysts need to consider ways of incorporating qualitative data.
  • With regard to TEF policy and its metrics, the cautionary note attributed to Einstein is apposite: “Not everything that counts can be counted and not everything that is counted counts.”

SRHE member George Brown was Head of an Education Department in a College of Education and Senior Lecturer in Social Psychology of Education in the University of Ulster before becoming Professor of Higher Education at the University of Nottingham.  His 250 articles, reports and texts are mostly in Higher and Medical Education, with other work in primary and secondary education. He was senior author of Effective Teaching in Higher Education and Assessing Student Learning in Higher Education and co-founder of the British Education Research Journal, to which he was an early contributor and reviewer. He was the National Co-ordinator of Academic Staff Development for the Committee of Vice Chancellors and Principals (now Universities UK) and has served on SRHE Council.

References

Bolden, R et al (2012) Academic Leadership: changing conceptions, identities and experiences in UK higher education London: Leadership Foundation

Gibbs, G (2017) ‘Evidence does not support the rationale of the TEF’, Compass: Journal of Learning and Teaching, 10(2)

Royal Statistical Society (2018) Response to the teaching excellence and student outcomes framework, subject-level consultation


Notes from North of the Tweed: Valuing our values?

By Vicky Gunn

In a recent publication, Mariana Mazzucato [1] pushes the reader to engage with a key dilemma of modern-day capitalist economics. ‘Value extraction’ often occurs after a government has valued work upfront through state investment and accountability regimes. The original investment was a result of the collective possibilities afforded by a mature taxation system and an understanding that accountability can drive positive social and economic outcomes (as well as perverse ones). The value that is extracted is then distributed to those who already hold both financial and social capital, rather than redistributed back into the systems which produced the initial work with state support in the first place. This means that under the social contract between the State and its workers (at all levels), the State effectively pump-primes activity, only to watch the fruits of these labours be inequitably shared.

I find this to be a useful, powerful and troubling argument when considering the current relationship between State-funded activity and the governance of UK HE. As a recipient of multiple grants from bodies such as the Higher Education Academy (now AdvanceHE) and the Quality Assurance Agency (now a co-regulatory body in a landscape dominated by the Office for Students), I have observed a similar pattern of activity. What this means is that after a period of state funding (ie taxpayers’ money), these agencies are forced, through a change in funding models, to assess the value of their pre-existing assets. The change in funding models is normally a result of a political shift in how they are valued by the various governments that established and maintained them. The pre-existing assets are research and policy outputs and activities undertaken in good faith for the purposes of open-source communication, to ensure the widest possible dissemination and discussion, with an attendant build-up of expertise. After valuing these assets, necessary rebranding may obscure the value of this state-funded work behind impenetrable websites in which multiple prior outputs (tangible assets) are pulled into one pdf. Simultaneously, the agencies offer intangible assets based on relationships and expertise networks back to membership subscribers through gateways – paywalls. This looks like the unregulated conversion of a value network established through the collaboration of state and higher education into a revenue-generating system, restricting access to those able to pay [2]. If so, it represents a form of value extraction which is limited in how and where it redistributes what was once a part of the common weal.

Scottish HE has attempted to avoid this aspect of changes in the regulatory framework in two ways:

  • Firstly, by maintaining its Quality Enhancement Framework (QEF) in a recognisable form [3]. Thus: the state continues to oversee the funding of domiciled Scottish student places; the Scottish Funding Council remains an arms-length funding and policy agency which commissions the relevant quality assurance agency; Universities Scotland continues as a lobbying ‘influencer’ that mediates the worst excesses of external interventions; and the pesky Office for Students is held back at the border, whilst we all trundle away trying to second-guess what role metrics will play in the quality assurance of an enhancement-led sector over the next five to ten years. Strategic cooperation and value co-creation remain core principles. And all of this with Brexit uncertainty.
  • Secondly, by refocusing the discussion around higher educational enhancement in the light of a skills agenda predicated not on unfettered economic growth, but on inclusive and sustainable economic growth [4].

Two recent outputs from this context demonstrate the value of this approach: the Creative Disciplines Collaborative Cluster’s Toolkit for Measuring Impact and the Intangibles Collaborative Cluster’s recent publication [5]. Both of these projects were valued for the opportunity they provided for collaborative problem solving across Scottish HEIs. Their outputs recognise that it is now more important than ever to demonstrate the impact of what we do. Technological advances in rapid, annualised data generation are driving demands to assess the value of our higher education. The prospect of this demand requiring disciplinary engagement means that academics leading their subjects (not just Heads of Quality, DVCs Student Experience, VPs Learning and Teaching) need to be more aware of frameworks of accountability than before. Underneath the production of these outputs has remained a belief in the value of cooperation over the values of competition.

However, none of this means that those of us trying to maintain a narrative of higher education as the widest possible state good can rest on our laurels. If we are to seize this particular moment there are some crucial tensions to problematise and, where appropriate, resolve. We need formal discussion around the following:

  • What is to be valued through State influence in Scottish HE? How does the ‘what is to be valued’ question relate to the values and value of this education socially, culturally and economically?
  • How are these values and value to be valued through the accountability framework for higher education in Scotland?
  • What will the disruptions created by a new regulatory framework in England (based on a particular understanding of value and values) mean for how Scottish institutions continue to engage with the QEF, when they will probably also have to respond to a framework that would like to see itself as UK-wide?
  • How can we protect years of enhancement work from asset stripping and value extraction? How can we continue with an enhancement framework with social, cultural, and economic benefits for Scotland and its wider relationship with the world, at the same time as supporting reinvestment into the enhancement of Scotland’s higher education?
  • There is a push to revalue ‘success’ as simple economic outcomes, away from inter-relational outcomes that capture intangible but nonetheless critical aspects of that education – social coherence, wellbeing, cultural confidence and vitality, collective expertise, innovation, responsible prosperity. That path of value extraction may result in more not less inequality: how can we mitigate it?
  • How can all of this be done without merely retreating to the local? Bruno Latour has noted how locality is a cultural player in the current political inability to engage effectively with the planetary issue of the day: climate crisis [6]. He notes the sense of security in the local’s boundaries and a perception across Europe that we somehow abandoned the local in the push to be global. The local is important. Yet, he clarifies, climate regime change means withdrawal into the local in terms of value and values – without interaction across political boundaries at a global level – is tantamount to wilful recklessness. How we can enable higher education to secure the local and the global simultaneously is surely the big question with which we are grappling. How can Scotland’s HE leaders engage to ensure the value and values we embody through our accountability regime do not get mired in local growth agendas unable to measure the impact of that growth within a global ecology?

Sitting within a creative arts small specialist institution, these questions seem both overwhelmingly large (how can a minnow lead such a conversation, surely only a BIG university can do this?) and absolutely essential. In the creative arts our students are, in their own frames of reference, already challenging us on the questions of value, values, environmental sustainability and inequality through their artistry, designerly ethics, and architectural wisdoms. I have, however, yet to hear such a recognisable conversation occurring coherently across the various players (political, policy, institutional) in the wider sector, except in activities related to the localities of cultural policy, the creative economy, and HEI community engagement [7].

Perhaps it is time for sector leaders, social, cultural, and economic policy-makers, and student representatives to work together to identify the parameters of these questions and how we can move forward to resolve them responsibly.

SRHE member Professor Vicky Gunn is Head of Learning and Teaching at Glasgow School of Art.

Notes

  1. Mazzucato, M (2018) The Value of Everything: Making and Taking in the Global Economy, Penguin, p xv
  2. Allee, V (2008) ‘Value network analysis and value conversion of tangible and intangible assets’, Journal of Intellectual Capital, 9(1): 5-25
  3. This 2016 description of the sector’s regulatory framework of enhancement remains broadly the same: https://wonkhe.com/blogs/analysis-devolved-yet-not-independent-tef-and-teaching-accountability-in-scotland/
  4. See the Scottish Funding Council’s latest strategic framework: http://www.sfc.ac.uk/about-sfc/strategic-framework/strategic-framework.aspx
  5. Enhancement Themes outputs: Creative Disciplines Collaborative Cluster: https://www.enhancementthemes.ac.uk/current-enhancement-theme/defining-and-capturing-evidence/the-creative-disciplines
     Intangibles Collaborative Cluster: https://www.enhancementthemes.ac.uk/current-enhancement-theme/defining-and-capturing-evidence/the-intangibles-beyond-the-metrics
  6. Latour, B (2018) Down to Earth: Politics in the New Climatic Regime, Polity Press, p 26
  7. Gilmore, A and Comunian, R (2016) ‘Beyond the campus: higher education, cultural policy and the creative economy’, International Journal of Cultural Policy, 22: 1-9



The TEF and HERB cross the devolved border (Part 2): the paradoxes of jurisdictional pluralism

By Vicky Gunn

Higher Education teaching policy is a devolved matter in Scotland, yet the TEF has amplified the paradoxes created by the jurisdictional plurality that currently exists in the UK. Given the accountability role it plays for Whitehall, TEF’s UK-wide scope suggests an uncomfortable political geography. This is being accentuated as the Higher Education and Research Bill (at Westminster) establishes the new research funding contours across the UK. To understand how jurisdictional plurality plays out, one needs to consider that Higher Education in Scotland is simultaneously subject to:

  • Scottish government higher educational policy, led by the Minister for Further Education, Higher Education and Science, Shirley-Anne Somerville (SNP), and managed through the Scottish Funding Council (or whatever emerges out of the recent decisions from ScotGov regarding Enterprise and Innovation), which in turn aligns with Scottish domestic social, cultural, and economic policies. The main HE teaching policy steers, as suggested by recent legislation and commissions, have been to maintain the assurance and enhancement focus (established in the Further and Higher Education (Scotland) Act 2005) and to tighten the links between higher education and social mobility (Commission for Widening Access 2015), and between the economic value of graduates and skills development (Enterprise and Skills Review 2016).
  • Non-devolved Westminster legislation (especially relating to Home Office and immigration matters). In addition to this is the rapidly moving legislative context that governs how higher education protects its students and staff for health and safety and social inclusion purposes as well as preventing illegal activity (Consumer Protection, Counter-terrorism etc.).
