
The Society for Research into Higher Education



What is an academic judgement?

By Geoff Hinchliffe

Academics make academic judgements virtually every working day. But what exactly is an academic judgement? As a starting point, one might have recourse to appropriate statutory documents: for example, the 2004 Higher Education Act mentions that a student complaint does not count as a ‘qualifying’ complaint if it relates to matters pertaining to an ‘academic judgment’ (Higher Education Act, 2004, p5, Section 12). The Office of the Independent Adjudicator (OIA) helps to provide a gloss on the term:

“Academic judgment is not any judgment made by an academic; it is a judgment that is made about a matter where the opinion of an academic expert is essential. So for example a judgment about marks awarded, degree classification, research methodology, whether feedback is correct or adequate, and the content or outcomes of a course will normally involve academic judgment.” (OIA, 2018, Section 30.2)

But although it is heartening to see that some deference is paid to academic judgment, little light is thrown on what it actually is. This vagueness can, of course, be useful: for example, Cambridge University’s (2018) complaints procedure quotes the OIA definition without further elaboration. Provided no-one is prepared to question the nature of academic judgement, who are we to complain? But, at the risk of disturbing sleeping dogs, I propose to enquire more closely into what constitutes an academic judgement.

Two points are worth making at the outset. The first is that academic judgements should not be construed as the special preserve of those designated as ‘academics’. Students also make academic judgements along the same lines as academics, so it’s not the case that academics make special judgements that students couldn’t possibly understand. The second point is that the object of judgement – what is being judged – may vary considerably, but the kind of judgement being made is still of the same type. The elements of judgement remain the same whether the object of scrutiny is a first-year undergraduate essay or a paper in a leading journal.

It seems to me that there are four basic types of academic judgement and they frequently operate in combination – it is this that gives the whole business of judgement its mystique and rarefied quality.

1. Process Judgements

When we evaluate any kind of process that has (or is supposed to have) a definable outcome we tend to use a set of criteria, although these may be carefully delineated or may operate as background guidelines. Examples include following a clinical procedure (where the criteria are strict) or laboratory protocols (likewise). But process judgements could also include the evaluation of the methodology in a piece of empirical research, in which we consider the suitability of the methodology used and its success or otherwise. They are also entailed in the evaluation of essays in terms of structure and argumentation leading (hopefully) to a conclusion. In this case, we evaluate the process of writing and structuring an essay or report; we consider, for example, whether the author has ‘signposted’ the argument so that its trajectory has some kind of sense and cohesion. Process judgements tend to be ‘rule-governed’. But as I have indicated, in some cases the rules are pretty clear and in others there is much room for flexibility. So, regarding essays, there are no fixed rules for demonstrating a process of argumentation; but neither are there no rules at all. In the film Pirates of the Caribbean it is explained that the Pirate’s Code need not be adhered to strictly at all times because it is not so much a fixed code as ‘guidelines’. Very sound advice too.

For those interested in philosophy, process judgements are roughly akin to what Wittgenstein thinks of as ‘following a rule’ (see Wittgenstein, 1958, paras 201-2). In this case, what Wittgenstein has in mind are the rules for using words so that they have a meaning that is understood. But since meanings are never fixed (unless through prior stipulation), the rules are indeed ‘guidelines’. Who would have thought a pirate would be reading Wittgenstein?

2. Epistemic Judgements

In the case of epistemic judgements we are assessing claims to knowledge. That is, we are assessing whether the claimant has sufficient evidence and reasons for making a knowledge claim. Of course, we are also interested in the context – that is, whether and to what extent the claimant is aware of relevant context that may affect the claims they are making. Furthermore, we are often reluctant to make positive judgements if the claim simply asserts a proposition – to the effect that such-and-such is the case – even if the proposition is true. We need to see the evidence and reasoning that back up the knowledge claim – bare assertions are usually not enough.

There are two further features of epistemic judgements: first, they are objective, in the sense of being propositional – they purport to say ‘how the world is’. Second, they are universal in the sense that in making such a judgement I am claiming that everyone will reach the same conclusion as I do. Of course, others may disagree, but the idea is that, in principle, these disagreements are adjudicable (Steinberger, 2018, p38).

Notice that if I am marking a student paper then I am assessing the kinds of epistemic judgements the student is making – whether the claims made are true and whether they are well founded. It’s not the case that the student is doing one thing and I am doing something else – the writer of the paper and the assessor are both making the same kind of judgements. That is, the student needs to be in the habit of assessing their own epistemic judgements as to evidence and reasoning in exactly the same way that I, the assessor, am doing. The process is the same: the only difference is that in the one case the outcome is a paper and in the other, the outcome is a mark.

3. Reflective Judgements

This kind of judgement is tricky to explain but I think readers will see its importance. By ‘reflective’ I don’t mean the reflective judgement beloved of writers of practitioner manuals where ‘reflective’ means ‘self-reflective’. Thus interpreted, reflection is usually reflection on a procedure and one’s part in it. In other words, practitioner self-reflective judgements are really a kind of process judgement.

What I have in mind as ‘reflective’ is when we think of a piece of data, a theory, or a concept in functional or relational terms. We look for a broader framework within which phenomena can be better understood. Thus, Kant thought that when we reflect on a natural phenomenon we situate nature in a purposive or teleological framework, in order to provide a kind of interpretive unity (Kant, 2000, p67). More generally, we can think of reflective judgements as contextual: we look for links and relationships in order to make sense of the object of study, to bring some sense of order and unity to bear. Reflective judgements can be highly creative when links are made between phenomena that weren’t thought of before. Quite a lot of Foucault’s work was of this type – for example, the way in which he drew together different kinds of formative behaviours under the notion of the disciplinary: in seeing how behaviours in schools, prisons, hospitals and the like were produced and reproduced, he was able to fashion a new concept – the ‘disciplinary’ – which gives us real insight into how modernity works.

Peter Steinberger (2018, pp47-50) explains the nature of reflective judgements rather well, in my view – he sees reflective judgements as different from what I have called epistemic judgements (following Kant, he calls the latter ‘determinate’ judgements). What a reflective judgement does is to provide an interpretive context in which different knowledge claims can be related and thus better understood. Reflective judgements operate at the level of meaning. When we ask our students to make the links both within and across modules we are asking them to think reflectively.

4. Normative Judgements

We need to be clear that normative judgements are not the same as ethical judgements. I see the latter as delivering a verdict on the worth or rightness of a person or action. As such, ethical judgements play (or should play) a minor role in academic research and production. It is irritating if a historian gives us ethical verdicts on her subject matter (Henry was a good king, but King John was a bad one) – ethical judgements are almost always unnecessary.

But normative judgements are something else. They involve according due sensitivity to the values and norms associated with the subject matter under consideration. By their very nature, normative judgements are contextual. For example, if a student fails to appreciate the nature of religiosity in fifteenth-century England (say, by seeing it in terms of ignorance and superstition because of a modernist, secular approach which the student brings to bear), then it is the normative judgement that has gone awry. Or, if a research methodology fails to take due account of the requirements of confidentiality, again, it is a normative judgement that is deficient. Normative judgements often operate in combination with other judgements (especially with process and reflective judgements) and this is one of the reasons why academic judgement can be complex.

Conclusion

If I am right, there are four basic elements to an academic judgement. Typically, in any assessment all four elements operate together – process, epistemic, reflective and normative. The judgements we use are precisely those that we want our students to develop. We can see straight away that attempts to categorise and tabulate all of these elements may be helpful but are unlikely to be comprehensive. The precise nature of the judgement will vary according to subject matter, and no set of assessment criteria that I have seen comes anywhere near to doing full justice to the complexity of the judgements involved. Moreover, most of those outside academia (government ministers, MPs, media people and the like) are just clueless about this complexity.

Complex, yes: but not so complex that we can’t attempt to say what is involved in giving an academic judgement. But the above sketch cannot be the last word – if I have succeeded in suggesting some initial ‘guidelines’ then that is a start.

SRHE member Geoff Hinchliffe teaches undergraduates in the School of Education at the University of East Anglia. This blog is partly based on a paper he gave at the 2018 SRHE Research Conference.

Bibliography

Cambridge University (2018) Student Complaints https://www.studentcomplaints.admin.cam.ac.uk/general-points-about-procedures/academic-judgment

Higher Education Act (2004) http://www.legislation.gov.uk/ukpga/2004/8/pdfs/ukpga_20040008_en.pdf

Kant, I (1933) Critique of Pure Reason, trans. Kemp Smith, N, London: Macmillan

Kant, I (2000) Critique of the Power of Judgement, trans. Guyer, P and Matthews, E, Cambridge: Cambridge University Press

OIA (2018) Guidance Note on the OIA Rules http://www.oiahe.org.uk/rules-and-the-complaints-process/guidance-note-on-the-oias-rules.aspx#para30

Steinberger, P J (2018) Political Judgement, Cambridge: Polity Press

Wittgenstein, L (1958) Philosophical Investigations, Oxford: Blackwell



Having faith in the university

by Søren SE Bengtsen and Ronald Barnett

A heightened gap between the university and society is now evident. On the policy level, discourses of excellence, world-classness and value-for-money press upon universities while, on the societal level, there are calls for impact, skills, employability and marketable knowledge. Additionally, in a post-truth and fake-news era, universities struggle to establish their legitimacy, and some students even report that they may actually be doing themselves a disservice by taking a higher education degree. All this is symptomatic of a sudden, society-wide – even worldly – loss of faith in the university.



Metrics in higher education: technologies and subjectivities

by Roland Bloch and Catherine O’Connell

The changing shape of higher education and consequent changes in the nature of academic labour, employment conditions and career trajectories were significant …



The ‘Holy Grail’ of pedagogical research: the quest to measure learning gain

by Camille Kandiko Howson, Corony Edwards, Alex Forsythe and Carol Evans

Just over a year ago, learning gain was ‘trending’. Following a presentation at the SRHE Annual Research Conference in December 2017, the Times Higher Education Supplement trumpeted that ‘Cambridge looks to crack measurement of “learning gain”’; however, research-informed policy-making is a long and winding road.

Learning gain is caught between a rock and a hard place — on the one hand there is a high bar for quality standards in social science research; on the other, there is the reality that policy-makers are using the currently available data to inform decision-making. Should the quest be to develop measures that meet the threshold for the Research Excellence Framework (REF), or simply improve on what we have now?

The latest version of the Teaching Excellence and Student Outcomes Framework (TEF) remains wedded to the possibility of better measures of learning gain, and has been fully adopted by the OfS. And we undoubtedly need better measures than those currently used. An interim evaluation of the learning gain pilot projects concludes: ‘data on satisfaction from the NSS, data from DLHE on employment, and LEO on earnings [are] all … awful proxies for learning gain’. The reduction in the weighting of the NSS to 50% in the most recent TEF process makes it no better a predictor of how students learn. Fifty per cent of a poor measure is still poor measurement. The evaluation report argues that:

“The development of measures of learning gain involves theoretical questions of what to measure, and turning these into practical measures that can be empirically developed and tested. This is in a broader political context of asking ‘why’ measure learning gain and, ‘for what purpose’” (p7).

Given the current political climate, this has been answered by the insidious phrase ‘value for money’. This positioning of learning gain will inevitably result in the measurement of primarily employment data and career-readiness attributes. The sector’s response to this narrow view of HE has given renewed vigour to the debate on the purpose of higher education. Although many experts engage with the philosophical debate, fewer are addressing questions of the robustness of pedagogical research, methodological rigour and ethics.

The article Making Sense of Learning Gain in Higher Education, in a special issue of Higher Education Pedagogies (HEP), highlights these tricky questions.



It’s all about performance

by Marcia Devlin

The Australian federal government has indicated its intention to introduce partial funding based on yet-to-be-defined performance measures.

The Mid-Year Economic and Fiscal Outlook (MYEFO) updates the Australian government’s economic and fiscal outlook and budgetary position from the previous budget, and revises the budget aggregates to take account of all decisions made since the budget was released. The 2017-2018 MYEFO papers state that the Government intends to “proceed with reforms to the higher education [HE] sector to improve transparency, accountability, affordability and responsiveness to the aspirations of students and future workforce needs” (see links below). Among these reforms are performance targets for universities to determine the growth in their Commonwealth Grant Scheme funding for bachelor degrees from 2020, to be capped at the growth rate of the 18-64 year old population, and, from 1 January 2019, “a new allocation mechanism based on institutional outcomes and industry needs for sub-bachelor and postgraduate Commonwealth Supported Places”.



How likely are BTEC students to enter higher education?

By Pallavi Amitava Banerjee

 

Business and Technology Education Council (BTEC) qualifications are seen by some as prized qualifications for the labour market, drawing on work-based scenarios. Providers claim these career-based qualifications are designed …



Boundaries, Buddies, and Benevolent Dictators within the Ecology of Doctoral Study

by Kay Guccione and Søren Bengtsen

In March we co-delivered a seminar at SRHE based on our complementary research studies into doctoral support, supervision, and relationships. Recognising that many and varied players contribute to supporting doctoral researchers along the way, we spoke to the idea of the ‘Ecology’ of doctoral study. Drawing on both our research and practice areas, we raise issues of:

Boundaries, for example: Who is responsible for which aspects of doctoral development? …




Professional and Professionalism

By Ian McNay

My views on ‘professional’ and ‘professionalism’ in HE have been tested in several ways recently. One of my doctoral students has just got his award for a study on the topic. A major part of his work was a case study in a modern university, with a survey of teaching professionals holding HEA fellowship status gained either through a PGCE or through a reflective portfolio of experience. The survey group presented a homogeneous, monochrome picture of what Hoyle, many years ago, labelled ‘restricted’ professionals – classroom-bound with little engagement in the wider professional context, focused on subject and students, with punctuality and smart dress as professional characteristics. That reflected the response I got from some academics when I was appointed as a head of school: I met each one of my staff and, as part of the conversation, asked their view on development issues and future possibilities for the school. The responses of several can be summarised by two: ‘I don’t have a view; not my role and above my pay grade’, and ‘You’re the boss. Tell me what to do and I’ll do it.’ …



Examining the Examiner: Investigating the assessment literacy of external examiners

By Dr Emma Medland

Quality assurance in higher education has become increasingly dominant worldwide, but has recently been subject to mounting criticism. Research has highlighted challenges to the comparability of academic standards and regulatory frameworks. The external examining system is a form of professional self-regulation involving an independent peer reviewer from another HE institution, whose role is to provide quality assurance in relation to identified modules, programmes, qualifications and the like. This system has been a distinctive feature of UK higher education for nearly 200 years and is considered best practice internationally, being evident in various forms across the world.

External examiners are perceived as a vital means of maintaining comparable standards across higher education and yet this comparability is being questioned. Despite high esteem for the external examiner system, growing criticisms have resulted in a cautious downgrading of the role. One critique focuses on developing standardised procedures that emphasise consistency and equivalency in an attempt to uphold standards, arguably to the neglect of an examination of the quality of the underlying practice. Bloxham and Price (2015) identify unchallenged assumptions underpinning the external examiner system and ask: ‘What confidence can we have that the average external examiner has the “assessment literacy” to be aware of the complex influences on their standards and judgement processes?’ (Bloxham and Price 2015: 206). This echoes an earlier point raised by Cuthbert (2003), who identifies the importance of both subject and assessment expertise in relation to the role.

The concept of assessment literacy is in its infancy in higher education, but is becoming accepted into the vernacular of the sector as more research emerges. In compulsory education the concept has been investigated since the 1990s; it is often dichotomised into assessment literacy or illiteracy and described as a concept frequently used but less well understood. Both sectors describe assessment literacy as a necessity or duty for educators and examiners alike, yet both sectors present evidence of, or assume, low levels of assessment literacy. As a result, it is argued that developing greater levels of assessment literacy across the HE sector could help reverse the deterioration of confidence in academic standards.

Numerous attempts have been made to delineate the concept of assessment literacy within HE, focusing for example on the rules, language, standards, and knowledge, skills and attributes surrounding assessment. However, assessment literacy has also been described as …




Ian McNay writes …

By Ian McNay

The news from Ukraine is that, at least in Odesa (one ‘s’ in Ukrainian) market, my country is known as ‘Bye, Bye, Britain’. I was there as part of a project on developing leadership training. At the rectors’ round table, we were thanked by the British Council rep. for being honest. We were discussing HE governance, and lessons from the UK, without doing the usual thing of pretending our approach is wonderful and everybody should imitate it. We learn from mistakes more than from things that went well, perhaps because they imply that there is a need to learn.

One challenge in Ukraine is nostalgia for the old days. When I first went there 20 years ago, I asked an undergraduate class for their models of good leaders. The first three answers I got were Hitler, Stalin and Thatcher, which led to a discussion of the difference between ‘strong’ and ‘good’. That preference for strength over everything else is still there. In a survey of the ex-Soviet republics, the question was asked: ‘would you rather have democracy or a dictator who solves problems?’ Ukraine topped the table of those opting for the second, with over 50% choosing efficient despotism. The Czech Republic scored only 13%.

This is relevant to us because Theresa May has been claiming to be strong and has resisted the operations of democracy. At organisational level, since power tends to corrupt, the signs are not good: a recent survey of UK managers for the Chartered Institute of Personnel and Development revealed that only 8 per cent claimed to have a strong personal moral compass – leaving the rest, presumably, susceptible to corruption. Even UK university managers would score better than that, despite the disappearance of collegial democracy.

Wouldn’t they?

Did you notice…? The Universities UK blog reported a survey of the teams who prepared the institutional submissions to the Teaching Excellence Framework, and found that even they were dismissive of its validity and reliability – basic requirements for us as researchers. Seventy-two per cent of those most closely involved in the exercise did not believe that it ‘accurately assesses teaching and learning excellence’. Only 2 per cent – 2 per cent – thought it did. Even they might change their view, since the views of students – those ‘at the heart of the system’ and the alleged beneficiaries of the exercise – are to be given a lower weighting, because their voice, through the NSS, gave the ‘wrong’ message. More weight will now be given to post-graduation data on jobs and earnings, which are more heavily conditioned by accidents of birth and employer prejudice than by the quality of teaching and learning. So much for promoting social mobility, another claimed objective. Russell Group universities will benefit, since they scored poorly on the NSS and recruit more of those privileged by birth. That couldn’t be a reason for the change, surely? That would suggest that corruptive pressure had been applied to the reward process, as in the awarding of Olympic Games to cities or the football World Cup to countries. Or in awarding Olympic medals – gold, silver, bronze – in boxing. Or bonuses to bankers. Still, footballers and bankers are now our benchmarks, according to the head of the world’s leading university, so we still have some way to fall.

Don’t we?

‘That way madness lies’ (I have just played Lear in a local ‘Best of the Bard’ concoction).

Recent reports from some universities suggest grade inflation is just as much an issue as inflation in the cost of living. UK-wide figures are not yet available for the latest batch of graduates, but in 2016, 73 per cent of first degree graduates got a first (24%) or an upper second (49%), with the gender split favouring women by 75/71. Four years previously, the figure had been ‘only’ 66 per cent. So, despite expansion lowering entry tariffs, more ‘value’ is added to compensate. If 50 per cent of an age cohort now study for a degree, that means that 12 per cent of the age group got a first class degree – 24 per cent of the 50 per cent who study. A few years ago, when I passed the 11+, only 11 per cent of the age group in my home town did so.
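For anyone who wants to check that last step, here is a minimal sketch of the cohort arithmetic in Python. The figures are those quoted above, and it assumes, as the comparison implicitly does, that everyone who enrols for a degree graduates:

```python
# A minimal sketch of the cohort arithmetic quoted above.
# Assumption: everyone who studies for a degree graduates.
participation_rate = 0.50  # share of the age cohort studying for a degree
first_class_rate = 0.24    # share of first-degree graduates awarded a first (2016)

share_of_cohort_with_first = participation_rate * first_class_rate
print(f"{share_of_cohort_with_first:.0%} of the age group gained a first")
# Output: 12% of the age group gained a first
```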

Did you notice the figures for ‘alternative providers’ from HESA, interesting in the light of the recent report from the HE Commission? Of the 6,200 graduates they produced (2,000 more than the previous year), 58 per cent got ‘good’ degrees. No inflation there – it was 61 per cent in 2015. Fourteen per cent got firsts, and women again outperformed men, by nine percentage points – 63/54.

The Commission’s report goes well beyond simply comparing the provision of full-time first degrees, emphasising the potential role of apprenticeships in adding to the diversity of routes, urging flexibility of funding to allow flexibility of study patterns across the sector, and outlining the greater part employers should play in developing work-related and work-relevant provision. I was interested that, of over 120 names on the attendance list, only six were from mainstream universities, and three of those had given evidence to the enquiry. Does the sector not think there is a challenge from the alternatives? Will they just wait for the demographic upturn early in the next decade, and then supply the same-old to a similar sub-set of the market? Are they aware that some of that demographic upturn is of children of EU immigrants, who may well choose to return to their parents’ home country to study where fees are much lower, if they exist at all? And that nearly all recent growth in demand has been from BAME applicants, who suffer from admissions decisions which imply unconscious (I hope) bias, particularly in elitist universities, as work by Vicki Boliver and Tariq Modood and statistics from UCAS show?

Finally, and still on my campaign for equity, I have a plea. At a recent symposium, participants commented on the inequity, at a global level, of the monopoly role of the English language, which has an exclusionary impact on those outside the Anglo-Saxon countries. Some national governments are bothered about its impact on knowledge transfer within the country that sponsored the work that produces journal articles. My suggestion is that any journal with ‘international’ in its title or its statement of aims should publish abstracts in, preferably, three languages, but at least two: the second being the author’s first language or that of the host institution of the research reported; the third another global language, probably Spanish. So, if you sit on a journal’s editorial board, or review submitted articles, can I urge you to make representations about this? It would enhance awareness across a broader landscape of HE, and allow those beyond the current privileged language enclave initial access to relevant work and some follow-up contact with authors, since email addresses are now commonly given. It would also support the Society’s role in encouraging newer researchers. Simples!

SRHE Fellow Ian McNay is emeritus professor at the University of Greenwich