by Roland Bloch and Catherine O’Connell
The changing shape of higher education and consequent changes in the nature of academic labour, employment conditions and career trajectories were significant …
by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans
Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, the Times Higher Education Supplement trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy-making is a long and winding road.
Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?
The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we do undoubtedly need better measures than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction in the weighting of the NSS to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty percent of a poor measure is still poor measurement. The evaluation report argues that:
“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).
Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.
The Australian federal government has indicated its intention to introduce partial funding based on yet to be defined performance measures.
The Mid-Year Economic and Fiscal Outlook (MYEFO) updates the Australian government’s economic and fiscal outlook and budgetary position from the previous budget, revising the budget aggregates to take account of all decisions made since the budget was released. The 2017-2018 MYEFO papers state that the Government intends to “proceed with reforms to the higher education [HE] sector to improve transparency, accountability, affordability and responsiveness to the aspirations of students and future workforce needs” (see links below). Among these reforms are performance targets for universities to determine the growth in their Commonwealth Grant Scheme funding for bachelor degrees from 2020, to be capped at the growth rate of the 18-64 year old population, and, from 1 January 2019, “a new allocation mechanism based on institutional outcomes and industry needs for sub-bachelor and postgraduate Commonwealth Supported Places”.
In March we co-delivered a seminar at SRHE based on our complementary research studies into doctoral support, supervision, and relationships. Recognising that many and varied players contribute to supporting doctoral researchers along the way, we spoke to the idea of the ‘ecology’ of doctoral study. Drawing on both our research and practice, we raise issues of:
Boundaries, for example: Who is responsible for which aspects of doctoral development? …
By Ian McNay
My views on ‘professional’ and ‘professionalism’ in HE have been tested in several ways recently. One of my doctoral students has just got his award for a study on the topic. A major part of his work was a case study in a modern university, with a survey of teaching professionals holding HEA fellowship status gained either via a PGCE or through a reflective portfolio of experience. The survey group presented a homogeneous, monochrome picture of what Hoyle, many years ago, labelled ‘restricted’ professionals: classroom-bound, with little engagement in the wider professional context, focused on subject and students, with punctuality and smart dress as professional characteristics. That reflected the response I got from some academics when I was appointed as a head of school: I met each one of my staff and, as part of the conversation, asked their view on development issues and future possibilities for the school. The response of several can be summarised by two: ‘I don’t have a view; not my role and above my pay grade’, and ‘You’re the boss. Tell me what to do and I’ll do it’.
Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules/programmes/qualifications etc. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.
External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.
The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.
Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …