By Vicky Gunn
There has been a flurry of activity around Learning Analytics in Scotland's higher education sector this past year. Responding, no doubt, to the seemingly unlimited promise of being able to study our students, we are excitedly wondering just how best to use what the technology has to offer. At Edinburgh University, a professorial-level post has been advertised; at my own institution we are pulling together the various people who run our student experience surveys (hitherto distributed across the institution) into a central unit in Planning, so that we can triangulate surveys, evaluations and other contextual data-sets; elsewhere, systems that enable 'early warning signals' with regard to student drop-out have been implemented with gusto.
I am one of the worst of the learning analytics offenders. My curiosity to observe and understand the patterns in the activity, behaviour, and perceptions of our students is just too intellectually compelling. The possibility that we could crunch all of the data about our students into one big stew-pot and then extract answers to meaning-of-student-life questions is a temptation I find too hard to resist (especially when someone puts what is called a 'dashboard' in front of me and says, 'look what happens if we interrogate the data this way').
Additionally, from a quality enhancement perspective, the possibility that we could get ‘meaningful’ data on which to develop learning enhancement interventions looks like a no-brainer. This thought is even more seductive when one realises that with a little imagination and some clever code-writing, we could generate reports that feed directly into quality assurance reviews. Bingo! No more dreary annual reports where I have to write everything. Instead, press a button and my dataset will talk for me to QAA, HESA, HEPI, the Scottish Funding Council, and any other governmental agency that requires accountability.
In my utopian hope that research will out, the fantasy above rests on an uncritical belief that the creation, collection, collation, and curation of datasets is, without question, a good thing. And that, of course, is the problem. What we are actually venturing into is large-scale observation of our students, with all of the ethical issues that this entails. Put like this, my penchant for learning analytics could be viewed more cynically within the current negative discourse around surveillance. This discourse has become more dominant since Edward Snowden dropped the seeming bombshell that those in power were watching everyone else, in power and not, all of the time. I am not unaware of the strength of feeling and zeitgeist concern that this discourse manifests, especially as the students at Glasgow University have just elected and installed Edward Snowden as their (absent-in-body-but-present-in-cyberspace) Rector.
So how can learning analytics deliver on its promises without cultivating a dystopian world in which data (be they student grades, forms of competency measurement, experience surveys, library and online virtual learning environment usage statistics, evaluations, etc.) rather than pastoral conversations drive how universities engage with their students? This is hardly a new question. A degree of de-personalisation of learning, or at least a de-interpersonalisation of learning, comes with the massification of higher education. This distinction is important, because learning analytics has the power to facilitate personalised learning pathways, and these could be critical to student engagement and learning outcomes. In short, this is potentially a really important positive of having data from which to understand the choices our students make. Such a process is, nonetheless, quite distinct from encouraging interpersonal activity between academics and their students. 'Personalisation' of learning, as used in current discussions about learning analytics, is therefore misleading if you take it to mean more student-staff interaction. It isn't about making things more 'personal' (warm and cuddly); it's about focusing local and global resources institutionally on a particular set of learning needs (interventionist) and enabling students to make better choices about their own progression through a programme. The two need not be separated, but they can be. We thus need to be very clear about what we mean as our approaches and technologies progress.
Having said all of this, it is clear that the growth of learning analytics needs a few up-front protocols of protection as soon as possible. Otherwise it is too easy to get swept away, naively and blithely, by the assumption that what we are doing is neutral, if not inherently positive. We should especially be considering now:
- Ethical consent structures that enable students to know what is being gathered, and when and how it will be used, as well as opportunities for students to opt in or out;
- Critical, rather than compliance-centred, staff development in research ethics for all involved in working with the data, however that working is done;
- Longitudinal studies of the possible socio-cultural, political, economic, spatial and temporal skews in both the design of the instruments used for learning analytics and their implementation, so that policy informed by analysis of the data is scrutinised as far as possible for bias, partiality, and technological determinism;
- Limiting the use of outcomes from learning analytics to educational development purposes only, and not reputation management, until we have a better sense of the influence reputational needs will have on how we report our findings (such a limit is unlikely to be acceptable, but the debate is warranted);
- Training senior leaders to have the courage to take the decision to switch the instruments off and delete the data in the face of a catastrophic breakdown in trust or a breach of security. (This one is so controversial I find it almost incomprehensible to contemplate, but while such a risk may seem unlikely, we need to address it.)
I finish with two parting shots on this topic, both relating to teaching. Firstly, it is not just students whose lives are increasingly gathered in and analysed through data-sets; it is academics too. The progression towards teaching metrics is just a small part of the growing body of material on what we do, when, and how. There is nothing inevitable about this drive for information: we are sufficiently intelligent, and sufficiently able to predict preferable and unwanted futures, to establish some safeguards.
Secondly, there is a real need for a sector-wide discussion on the future of the quangos that oversee teaching quality assurance. Arguably, the era of descriptive case studies (which form a fundamental part of the quality bureaucracy universities submit as part of institutional review) is coming to an end, potentially quite abruptly. Learning analytics will indicate trends and outcomes over time that will make current case-study designs look too simplistic. As resources for universities fluctuate, pressure to demonstrate – effectively rather than merely adequately – the impact of investment will change the sentiment and type of the evidence we are expected to deliver to our governments. Some institutions will likewise want to benchmark themselves against institutions beyond national boundaries. Increasingly, we will need an assurance body that can check that the learning analytics processes we have in place are fit for purpose (primarily concerned with learning and teaching enhancement), ethically established and maintained, and secure. This is quite a different proposition from the ideology and associated practices of audit and accountability, on whose wave the QAA entered the higher education fray in 1997. Just as the universities are changing under the weight of technological promises, so will the ideologies of quality on which quangos have come to rely.
SRHE member Dr Vicky Gunn is Director of the Learning and Teaching Centre at the University of Glasgow. Follow her on Twitter: @StacyGray45.