by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans
Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, the Times Higher Education Supplement trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy making is a long and winding road.
Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?
The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we undoubtedly need better measures than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction of the NSS’s weighting to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty per cent of a poor measure is still poor measurement. The evaluation report argues that:
“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).
Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in measurement focused primarily on employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.
The article Making Sense of Learning Gain in Higher Education, in a special issue of Higher Education Pedagogies (HEP), highlights these tricky questions. Measuring learning gain is particularly complex as students enter higher education with varying levels of qualifications, skills, knowledge and personal attributes, all of which will impact on their performance at different stages of their university journey. Yet we currently lack any consistent means of comparing students’ abilities at the start of HE programmes of study. Without a reliable baseline we cannot measure gain, however well we succeed in developing process and progress indicators, or output and outcome measures.
The multidisciplinary, multiple- and mixed-methods approaches evident in the HEP special issue articles reflect the points raised by Debbie Cotton et al regarding the quality of higher education pedagogic research more widely. Yes, there is a need for rigour, but there is difficulty in articulating that in an area that draws on so many disciplinary approaches to research. The 2021 REF panel C criteria for Education illustrate this point: the Education Unit of Assessment covers research employing ‘a wide range of theoretical frameworks and methodologies drawn from disciplinary traditions, including, but not limited to: anthropology, applied linguistics, economics, geography, history, humanities, mathematics, statistics, philosophy, political science, psychology, science and sociology.’ As well as research traditions, disciplines perhaps need to be clearer about the pedagogies that work best in their subject-specific domains before they can start to consider how best to measure the learning gains of the students who engage with these pedagogies.
Cotton et al’s depiction of higher education pedagogic research as ‘the Cinderella of academia’ may go some way towards explaining the sector’s lack of ready expertise when it comes to tackling the sticky issue of developing an effective measure of learning gain. Evans et al introduce their HEP feature article by explaining that their motivation was to champion best practice, rigorous research and ethical practice, to move the sector away from methodological malaise and to improve the validity, robustness and reliability of the work taking place. They conclude that the sum of the parts does not make a whole, as is evident in the current lack of an integrated approach within higher education to exploring learning gain. Integrated design requires researchers, practitioners, policymakers and professional services colleagues to work together from the outset, and leadership to facilitate the integration of approaches across an institution. As noted by Evans et al (2017), ideally “‘integrated academics’ are needed who can take the best of research, appraise it critically, apply it through implementing contextually appropriate pedagogies, and through good design, use outcomes from practice to inform research”.
Students differ from one another in their learning motivations and behaviours, which also vary over time. Therefore, student development and learning cannot be monotonic; that is, learning and development do not smoothly and steadily increase, never decreasing. This means that the concept of learning gain as ‘distance travelled’ is problematic, as it confounds achievement with order (coming first, second, or last). Secondly, the closer together attributes are – say, for example, the difference between a first- and a second-year student, or the differences between final-year students across all universities – the more inconsistency there will be. These inconsistencies mean that when researchers try to measure differences between students, their measurements will be inaccurate because of the large levels of error variance between students. Simply put, they will largely be measuring spurious noise, which can produce both false positive and false negative results.
Therefore, while it may theoretically be possible to define and track the rank order of students over long time periods, it does not follow that this will provide a measure of distance travelled, or that such a measure could be used to compare the performance of universities or of students in their progression. Measuring the learning gained by a student at a specific point in time (for example, at the end of a programme of study) will tell us little or nothing about that student’s learning journey, their progress relative to their peers, or their potential for future development. And that is before even beginning to consider how to account for the different entry qualifications, previous educational opportunities, and personal circumstances of students.
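To make this concrete, here is a minimal simulation sketch; it is not drawn from the article, and the numbers for the spread of ‘true’ ability and of measurement error are purely illustrative. It shows that when students’ true scores sit close together relative to the error in any single assessment, their observed rank order shifts substantially between two testing occasions even though nothing real has changed.

```python
# Illustrative sketch only: assumed cohort size and spreads, not empirical values.
import random

random.seed(42)

N_STUDENTS = 100
TRUE_SPREAD = 1.0    # assumed spread of 'true' ability (students close together)
ERROR_SPREAD = 3.0   # assumed measurement error (large relative to true spread)

true_scores = [random.gauss(0, TRUE_SPREAD) for _ in range(N_STUDENTS)]

def observe(scores):
    """Simulate one noisy assessment of every student."""
    return [s + random.gauss(0, ERROR_SPREAD) for s in scores]

def rank(scores):
    """Return each student's rank position (0 = highest observed score)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for position, student in enumerate(order):
        ranks[student] = position
    return ranks

# Two 'testing occasions' for the same cohort, same true abilities
ranks_a = rank(observe(true_scores))
ranks_b = rank(observe(true_scores))

# How far does each student's rank move between occasions, purely from noise?
mean_shift = sum(abs(a - b) for a, b in zip(ranks_a, ranks_b)) / N_STUDENTS
print(f"Average rank shift between occasions: {mean_shift:.1f} places out of {N_STUDENTS}")
```

Under these assumptions the average student moves dozens of places between two occasions for no substantive reason, which is the sense in which rank-order or ‘distance travelled’ comparisons can end up measuring noise rather than learning.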
Evans et al also remind us of the recent progress in the development of theoretical foundations that support high impact pedagogies (HIPs), especially in the educational, social and biological sciences. Research into HIPs in the US and the UK shows that pedagogies promoting deep engagement with, and ownership of, learning are commonly identified within inquiry-based learning contexts, which involve students actively in research and have students co-authoring their assessment experiences. Translating this work into valid and practical learning gain measurement requires robust research designs, supported by transparent reporting of information, that substantiate findings and permit replication. When students are included as partners, learning gain measurement becomes a pedagogical philosophy that drives programme design and evaluation. Such an approach can advance shared understandings of concepts, measurement, instrumentation and transparency.
Higher education has multiple purposes, with different values placed on those purposes by different stakeholders. Scalise et al note that this requires a common commitment to ensuring ownership of learning gain approaches at all levels. Knitting together relationships among knowledge, concepts and skills, then embedding evidence-driven learning gain approaches within curriculum design and delivery, could result in learning gain measures that inform and reflect pedagogy, rather than merely a metric-chasing tool presented as ‘proof’ of quality. Integral learning gain approaches have the potential to lift the lid on learning processes by exploring what students know, and in what ways; what works well, why, when and for whom; and how learning and challenge can be personalised.
As noted by Evans et al, the question of ethics looms large; this is not just about informed consent and the ethical use of data, but about the very question of why and how data is being collected. The validity question is paramount. Asking students generic questions that may bear little direct relevance to their actual studies is likely to result in ‘garbage in, garbage out’ in terms of what can be inferred. In the case of reliability, we are obsessed with the idea that ‘more data is better’, but that is simply not true; what matters is having the appropriate numbers to satisfy the requirements of the test. All this comes back to questions of how we want lecturers and students to use their time while in higher education. What knowledge, understanding and skills do we value most, and how best can we develop and measure them?
Training is needed to support shared understandings of initiatives; Evans et al ask a number of important questions in this regard: How can we work together to identify systems and processes that are fit for purpose, and train students and staff effectively in the ethical use of data? How can we use data effectively to support enhancements in pedagogy requiring nimble data mining and analysis? There is a wealth of existing data that could potentially be used to evaluate gainful learning in students, including the assessments that students already complete for their academic programme of study, not to mention the data being fed into learning analytics algorithms and displayed via student ‘dashboards’. Linking multiple qualitative and quantitative data sets is becoming increasingly possible, albeit with major General Data Protection Regulation (GDPR) implications that HEIs are grappling with, but the lack of consistency in the application of tools to measure constructs (e.g. self-efficacy) is not helpful.
Assessment criteria vary across programmes, institutions and nations; they reflect different structures and values, making comparisons difficult. This can only be resolved through a compendium of reliable and valid learning tools and measures for higher education, embedded within the curriculum and compared at the appropriate level. On the other hand, over-standardising programmes drives maladaptive behaviours: it will not support the drive for increasingly flexible programmes of study or the development of agile competencies for a changing workforce. Keeping the broader purpose of higher education at the forefront of the learning gain question will help to avoid confusing the symptoms of problems with their causes; it will support future planning in the face of change, and then we can be assured that our collective actions will contribute positively to higher education, leaving it a better place.
At the most general level we seem to have a certain degree of consensus on the potential value of measuring learning gain, though the nature of the gains and the reasons for undertaking the measurement vary widely. In terms of where we go next, the pedagogical learning gain angle, specifically in relation to exploring how curriculum design may impact students differentially, must be our main focus if we are to address a key issue in HE: equity. We owe it to our students and colleagues to invest in robust measurement to explore which approaches to learning are most appropriate in specific contexts, so that all learners within HE – students and lecturers alike – are enabled to do their best. This should be the hallmark of high-quality pedagogical research.
Camille Kandiko Howson is Senior Lecturer in Higher Education and Academic Head of Student Engagement at King’s College London.
Corony Edwards is an independent consultant, having formerly been Head of Education Quality and Enhancement at the University of Exeter.
Alex Forsythe is Senior Lecturer at the University of Liverpool, Senior Fellow of the Higher Education Academy, a Chartered Occupational Psychologist and Head of Professional Certification for the Association of Business Psychology.
Carol Evans is Professor in Higher Education within Southampton Education School at the University of Southampton and co-director of the Centre for Higher Education at Southampton (CHES).