January 2020 marks the second year of the Office for Students’ (OfS) operations. The OfS represents the latest organisational iteration of state direction of (once) British and (now) English higher education, stretching back to the creation of the University Grants Committee (UGC) in 1919. We therefore have a century’s worth of experience to draw on: what lessons might there be?
There are, I think, two ways to consider the
cavalcade of agencies that have passed through the British higher education
landscape since 1919. One is to see in it how higher education has been viewed at
various points over the last century. The other is to see those agencies as special cases of the methods used to control public bodies generally. I think that both perspectives can help us to understand what has happened and why.
In the post-war decades, up to the later
1970s, central planning was almost unquestioningly accepted across the
political spectrum in Britain as the correct way to direct nationalised
industries such as electricity and railways, but also to plan the economy as a
whole, as the National Plan of 1965
showed. In higher education, broadly similar methods – predict and provide – were
operated by the UGC for universities, and by a partnership of central
government and local authorities for the polytechnics and other colleges. A key
feature of this mode of regulation was expert judgement, largely insulated from
political pressures. As Michael Shattock and Aniko Horvath observe in The Governance of British Higher Education (Bloomsbury,
2020), “In the 1950s it had been the UGC, not officials in the ministry, who
initiated policy discussions about the forecast rate of student number
expansion and its financial implications, and it was the UGC, not a minister,
that proposed founding the 1960s ‘New Universities’” (p18).
Higher education, then, was viewed as a
collective national resource, to be largely centrally planned and funded, in a
similar way to nationalised industries.
The rejection of central planning methods by
the Thatcher governments (1979-1990) affected the control of higher education
as it did other areas of national life, through the ‘privatisation’ of public
enterprises. Instead, resource allocation decisions were to be made by markets,
or where normal markets were absent, as with higher education, by using ‘quasi-markets’
to allocate public funds. Accordingly the UGC was abolished by legislation in
1988, and (eventually) national funding bodies were created, the English
version being the Higher Education Funding Council for England (HEFCE). Whereas
the UGC had a key task of preserving academic standards, by maintaining the ‘unit
of resource’ at what was considered to be an adequate level of funding per
student (as a proxy for academic standards), HEFCE’s new task, little-noted at
the time, became the polar opposite: it was required to drive down unit costs
per student, thereby supposedly forcing universities to make the efficiency
gains to be expected of normal market forces.
The market, then, had supplanted central
planning as an organising principle in British public life (perhaps the lasting
legacy of the Thatcher era); and universities discovered that the seemingly technical
changes to their funding arrangements had profoundly altered their internal
economies.
HEFCE’s main task, however, as with the UGC
before it, was to allocate public money to universities, though now applying a
different methodology. The next big shift in English higher education policy,
under the 2010 coalition government, changed the nature of central direction
radically. Under the full-cost fees policy, universities now typically received
most of their income from student loans, making HEFCE’s funding role largely
redundant. So, after the usual lag between policy change and institutional restructuring,
a new agency was created in 2018, the Office for Students (OfS), modelled on
the lines of industry regulators for privatised utilities such as energy and
telecoms.
In contrast to its predecessor agencies, OfS
is neither a planning nor a funding body (except for some special cases).
Instead, as with other industry regulators, it assumes that a market exists,
but that its imperfect nature (information asymmetry being a particular concern)
calls for detailed oversight and possibly intervention, in order to ‘mitigate
the risk’ of abuses by providers (universities) which could damage the
interests of consumers (students). It has no interest in maintaining a
particular pattern of institutional provision, though it does require that external
quality assurance bodies validate academic standards in the institutions it
registers.
As with utilities, we have seen a shift in
Britain, in stages, from central planning and funding, to a fragmented but
regulated provision. The underlying assumption is that market forces will have
beneficial results, subject to the regulator preventing abuses and ensuring
that minimal standards are maintained. This approach is now so widespread in
Britain that the government has produced a code to regulate the regulators
(presumably anticipating the question,
Quis custodiet ipsos custodes?).
Examining the changing pattern of state
direction of higher education in England in the post-1945 period, then, we see
the demise of central planning and its replacement, first by quasi-markets, and
then by as close to the real thing as we are likely to get. Ideas of central
funding to support planning goals have been replaced by reliance on a market
with government-created consumers, overseen by a regulator that intervenes in the detail of institutional management (see OfS’s long list of ‘reportable events’).
Despite every effort by governments to create
a working higher education marketplace, the core features of higher education
get in the way of its being a consumer good (for the many reasons repeatedly pointed out to, and repeatedly ignored by, ministers). Central planning has gone, but its replacement depends on central
funding and central intervention. I don’t think that we’ve seen the last of
formal central planning in our sector.
SRHE member Paul
Temple is Honorary Associate Professor,
Centre for Higher Education Studies, UCL Institute of Education, University
College London. See his latest paper ‘University spaces: Creating cité and place’, London Review of Education, 17 (2): 223–235 at https://doi.org/10.18546
“…problems arise when language goes on holiday. And here we may indeed fancy naming to be some remarkable act of mind, as it were a baptism of an object.”
Ludwig Wittgenstein, Philosophical
Investigations, para 38 (original emphasis)
The paradigm shift of students to customers at the heart of higher
education has changed strategies, psychological self-images, business models
and much else. But are the claims for and against students as customers (SAC), and the related research, as useful, insightful and angst-ridden as we may at first think? There are alarms about changing student behaviours, approaches to learning and the relationship with academic staff, but does the naming ‘customers’ reveal what were already underlying, long-standing problems? Does the concentrated focus on SAC obscure rather
than reveal?
One aspect of SAC is the observation that academic performance
declines, and learning becomes more surface and instrumental (Bunce, 2017).
Another is that SAC inclines students to be narcissistic and aggressive, with HEI management pandering to student demands and to their NSS feedback, and pursuing other strategies, such as creating iconic campus buildings, to maintain or improve league table position (Nixon, 2018).
This raises some methodological questions on (a) the research on
academic performance and the degree of narcissism/aggression prior to SAC (ie around 1997 with the Dearing Report); (b) the scope and range of
the research given the scale of student numbers, participation rates, the
variety of student motivations, the nature of disciplines and their own
learning strategies, and the hierarchy of institutions; and (c) the combination
of (a) and (b) in the further question of whether SAC changed students’ outlook on their education – or is it that
we are paying more attention and making different interpretations?
Some argue that the mass system in some way created the marketisation of HE and the SAC, with all their attendant problems of changing the pedagogic relationship and cognitive approaches. Given Martin Trow’s definitions of
elite, mass and universal systems of HE*, the UK achieved a mass system by the
late 1980s to early 1990s with the rapid expansion of the polytechnics;
universities were slower to expand student numbers. This expansion was before
the introduction of the £1,000 tuition fees by David Blunkett (Secretary of State for Education in the new Blair government) immediately after the Dearing Report, and the £3,000 top-up fees that followed under the Higher Education Act 2004. It was after the
1997 election that the aspiration was for a universal
HE system with a 50% participation rate.
If a mass system of HE came about (in a ‘fit of forgetfulness’) by 1991, when did marketisation begin? Marketisation may be a name we give to a
practice or context which had existed previously but was tacit and culturally and
historically deeper, hidden from view. The unnamed hierarchy of institutions of
Oxbridge, the Russell Group, polytechnics, HE colleges and FE colleges had powerful cultural
and socio-political foundations and was a market of sorts (high to low value
goods, access limited by social/cultural capital and price, etc). That hierarchy was not, however,
necessarily top-down: the social benefit delivered by the ‘lower orders’ of that hierarchy would be significant in widening participation. The ‘higher
order’ existed (and exists) in an ossified form. And as entry was restricted,
the competition within the sector did not exist or did not present existential
threats. Such is the longue durée when trying to analyse marketisation and the
SAC.
The focus on marketisation should help us realise that over the long
term the unit of resource was drastically reduced; state funding was slowly and
then rapidly withdrawn to the point where the level of student enrolment was
critical to long term strategy. That meant not maintaining but increasing
student numbers when the potential pool of students would fluctuate – with the present demographic trough ending in 2021
or 2022. Marketisation can thus be separated to some extent from the cognitive
dissonance or other anxieties of the SAC. HEIs (with exceptions in the
long-established hierarchy) were driven by the external forces of the funding
regime to develop marketing strategies, branding and gaming feedback systems in
response to the competition for students and the creation of interest groups – Alliance,
Modern, et al. The enrolled students
were not the customers in the marketisation but the product or outcome of
successful management. Students change into customers as the focus shifts to results, employment and further-study rates. Such is the split personality of institutional
management here.
Research on SAC in STEM courses has noted an inclination towards surface learning and the instrumentalism of ‘getting a good grade in order to get a good job’, but this prompts further questions. I am not sure that this is an
increased inclination to surface learning, nor whether surface and deep are
uncritical norms we can readily employ. The HEAC definition of deep learning
has an element of ‘employability’ in the application of knowledge across
differing contexts and disciplines (Howie and Bagnall, 2012). A student in 2019
may face the imperative to get a ‘degree level’ job in order to pay back
student loans. This is rational in relation to the student loans regime and to widening participation, which means the imperative is not universally felt, given the differing socio-economic backgrounds of students.
(Note that the current loan system is highly regressive as a form of
‘graduate tax’.)
And were STEM students more inclined toward deep or surface learning
before they became SAC? Teaching and
assessment in STEM may have been poor and may have encouraged surface-level learning (eg through weekly phase tests which were tardily assessed).
What is deep learning in civil engineering when faced with stress
testing concrete girders or in solving quaternion equations in mathematics: is
much of STEM actually knowing and processing algorithms? How is such learnable
content in STEM equivalent in some cognitive way to the deep learning in modern
languages, history, psychology et al?
This is not to suggest a hierarchy of disciplines but differences, deep
differences, between rules-based disciplines and the humanities.
Learning is complex and individualised, and responsive to, without
entirely determining, the curriculum and the forms of its delivery. In the
research on SAC the assumptions are that teaching and assessment delivery is both
relatively unproblematic and designed to encourage deep, non-instrumental
learning. Expectations of curriculum delivery and assessment will vary amongst students depending on personal background (schooling and parents), discipline and personal motivations, and those expectations will often be unrealistic. Consider why they are unrealistic: there is more to this than the narcissism of being a customer. (There is a very wide range of varieties of customer: as a
customer of Network Rail I am more a supplicant than a narcissist.)
The alarm over the changes (?) to the students’ view of their learning
as SAC in STEM should be put in the context of the previously high drop-out
rate of STEM students (relatively higher than non-STEM) which could reach 30%
of a cohort. The causes of drop-out were thoroughly examined by Mantz Yorke (Yorke and Longden, 2004), but as regards the SAC issue here, STEM drop-outs were explained by tutors as a lack of the right mathematical preparation. There is comparatively little research on the motivations of students entering STEM courses before they became SAC, and such research as exists is neither long-term nor longitudinal.
However, research on the typology of students with differing motivations for
learning (the academic, the social, the questioning student, etc), ranging across all courses, does exist (a 20-year survey by Liz Beatty, 2005). Is it possible that after widening
participation to the point of a universal system, motivations towards the
instrumental or utilitarian will become more prominent? And is there an
implication that an elite HE system pre-SAC was less instrumentalist, less
surface learning? The creation of PPE (first at Oxford in 1921, then spreading
across the sector) was an attempt to produce a mandarin class, where career
ambition was designed into the academic disciplines. That is, ‘to get a good
job’ applies here too, but it is expressed in the different, indirect and elevated language of public service.**
There are some anachronisms in the research on SAC. The acceptance of
SAC by management, through producing student charters and giving students places on boards, committees and senior management meetings, is not a direct result of students or management
considering students as customers. Indeed, it predates SAC by many years and
has its origins in the 1960s and 70s.
I am unlikely to get onto the board of Morrisons, but I could for the
Co-op – a discussion point on partnerships, co-producers, membership of a community
of learners. The struggle by students to get representation in management has taken
fifty years from the Wilson government Blue Paper Student Protest (1970) to today. It may have been a concession, but
student representation changed the nature of HEIs in the process, prior to SAC.
Student Charters appear to be mostly a coherent, user-friendly reduction of
lengthy academic and other regulations that no party can comprehend without
extensive lawyerly study. A number of HEIs produced charters before the SAC era
(late 1990s). And iconic university buildings had been attracting the architectural profession long before SAC: Birmingham’s aspiration to be an independent city state, with its Venetian architecture recalling St Mark’s Square, under the supervision of Joseph Chamberlain (1890s), or Jim Stirling’s post-modern Engineering faculty building at Leicester (1963), etc (Cannadine, 2004).
Students have complex legal identities and are a complex and often fissiparous
body. They are customers of catering; they are members of a guild or union, learners, activists and campaigners, clients, tenants and volunteers; they are sometimes disciplined as the accused, or appear as the appellant; and they adopt and create new identities psychologically, culturally and sexually. The language of students as customers
creates a language game that excludes other concerns: the withdrawal of state
funding, the creation of an academic precariat, the purpose of HE for learning
and skills supply, an alienation from community through the persuasive self-image of the atomised customer, the way deep learning is a creature of disciplines and of the changing job market, and the way student-academic relations, already problematic, now become formalised ‘complaints’.
Students are not the ‘other’ and they are much more than customers.
Phil Pilkington
is Chair of Middlesex University Students’ Union Board of Trustees, a former
CEO of Coventry University Students’ Union, an Honorary Teaching Fellow of
Coventry University and a contributor to WonkHE.
*Martin Trow defined elite, mass and universal systems of HE by participation rates of 10-20%, 20-30% and 40-50% respectively.
** Trevor
Pateman, The Poverty of PPE, Oxford, 1968; a pamphlet criticising the course by
a graduate; it is acknowledged that the curriculum, ‘designed to run the Raj in
1936’, has changed little since that critique. This document is a fragment of
another history of higher education worthy of recovery, one of complaint and dissatisfaction with teaching; there were others who developed the ‘alternative prospectus’ movement in the 1970s and 80s.
References
Beatty L, Gibbs G and Morgan A (2005) ‘Learning orientations and study contracts’, in Marton F, Hounsell D and Entwistle N (eds) The Experience of Learning: Implications for teaching and studying in higher education, 3rd (Internet) edition, Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment
Bunce L (2017) ‘The student-as-consumer approach in HE and its effects on academic performance’, Studies in Higher Education 42(11): 1958-1978
Cannadine D (2004) ‘The Chamberlain Tradition’, in In Churchill’s Shadow, Oxford: Oxford University Press; his biographical sketch of Joe Chamberlain shows his vision of Birmingham as an alternative power base to London
Howie P and Bagnall R (2012) ‘A critique of the deep and surface learning model’, Teaching in Higher Education 18(4); they describe the deep/surface distinction as suffering from “imprecise conceptualisation, ambiguous language, circularity and a lack of definition…”
Nixon E, Scullion R and Hearn R (2018) ‘Her majesty the student: marketised higher education and the narcissistic (dis)satisfaction of the student consumer’, Studies in Higher Education 43(6): 927-943
Yorke M and Longden B (2004) Retention and student success in higher education, Maidenhead: SRHE/Open University Press
by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans
Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, Times Higher Education trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’. However, research-informed policy making is a long and winding road.
Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?
The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we undoubtedly need better measures than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction of the NSS weighting to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement. The evaluation report argues that:
“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).
Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.
Slogans, over time, become part of the furniture. They start life as radical attempts to change how we think, and can end up victims of their own success. Higher education is littered with ex-slogans: ‘student engagement’, ‘graduate attributes’, ‘technology enhanced learning’, ‘student voice’, ‘quality enhancement’, to name just a few. Hiding in particularly plain sight is ‘teaching and learning’ (and ‘learning and teaching’). We may use the phrase on a daily basis without thinking much about it, but what is the point of constantly talking about teaching and learning in the same breath?
Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules/programmes/qualifications etc. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.
External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.
The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.
Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …
Ten years ago David Watson (2006, p2) said that in England since the 1980s “the audit society and the accountability culture have collided (apparently) with academic freedom and institutional autonomy”. He called this clash between accountability and autonomy the ‘Quality Wars’ and identified five major casualties: the shrinking of higher education’s sectoral responsibilities; truth, with managers mistaking criticism for resistance and staff mistaking resistance for criticism; solidarity, because of the rise of the ‘gangs’, the Russell Group and others; students, as quality assurance became ever less effective at delivering enhancement; and the reputation of UK HE abroad, as our determination to label things unsatisfactory advertised the few deficiencies of our sector and obscured our strengths.
Ten years on, the hostilities continue and the casualties mount.