
The Society for Research into Higher Education



Make the tacit explicit: how to improve information on university webpages for potential doctoral applicants

by Dangeni, James Burford and Sophia Kier-Byfield

Working out how to apply for a doctoral programme can be a challenging process for many potential applicants. As countless YouTube videos, blog posts and Twitter threads attest, there is much confusion and plenty of (sometimes conflicting) advice on the internet about what to do, whom to contact, and how to contact them. Some applicants find this process so challenging that they turn to a range of paid services that help them to learn how to contact a potential doctoral supervisor or develop a research proposal. There is clearly much demand for guidance on how to make a successful application to doctoral study, yet for many academics and professional services staff, doctoral admissions is a familiar and routine process in which quick assessments can be made about the enquiries of a given applicant.

We began our recent exploratory research project, ‘Opening up the Black Box of Pre-application Doctoral Communications’, with an interest in the somewhat opaque processes that occur prior to formal doctoral admissions, but which often form a crucial part of the pathway to applying. We were concerned that the mysteries of the pre-application stage may have Equity, Diversity and Inclusion (EDI) implications, making it easier for some to navigate toward doctoral study than others. We conducted a study to examine the pre-application stage of doctoral admissions in a single university context, the University of Warwick.

At the start of our study, we conducted a search for public-facing, institutional webpages relevant to doctoral admissions. Webpages are one of the key spaces where potential doctoral applicants can gather information about the application process; these include institutional pages (eg produced by a Doctoral College) and departmental pages (eg departmental guidance or a potential supervisor’s webpage).

We aimed to identify and characterise information aimed at doctoral applicants prior to their making formal applications to study. Our primary goal in conducting this review was to understand: (a) the nature of pre-admissions information on university webpages; (b) whether this pre-admissions information was consistent across the institution; and, (c) whether the detail was a sufficient and adequate explanation of key pre-application steps to potential applicants.

This blog post offers our top six tips for stakeholders involved in doctoral admissions, so that potential applicants can find all the information they need on public-facing webpages.

  1. Avoid complicated web designs, texts and duplicated material

All the webpages we reviewed provided ‘opening pages’ covering the basic details and specifications of the programme, but the departmental introductions varied widely in their level of detail. Some departments included short paragraphs; others offered more elaborate introductions that oriented students to the department’s research areas, its ranking in UK league tables, and student testimonials, spread across multiple tabs and long paragraphs, sometimes with broken links. Such layouts can be confusing on a computer screen, and institutions and departments should also consider that potential applicants may use phones or tablets to access the information, so webpage design should be tablet/phone friendly. It is also necessary to check whether the page is accessible, eg for visually impaired visitors or those with learning difficulties.

  2. Display a checklist and flowchart for the pre-admissions process

We found two categories of admission information across department webpages. The first was a link signposting applicants to the central university portal for application advice and guidance, which provides an overview of the pre-application procedure for potential applicants to follow. The second was commonly more tailored to a department’s specific procedures and was often accessed via a ‘how to apply’ section. However, we noted that several departments did not undertake much departmental-level ‘translation’ of general admissions information, perhaps simply linking applicants to the central university portal. We do not believe this gives potential applicants sufficient information to know how to get started and what to do in local contexts. In particular, information and explanations related to decision-making were rare: very few departments explained details such as evaluation criteria, who is involved, the maximum cohort size each year, and the timeframe for decision-making. Therefore, we suggest that departmental webpages should consider displaying a checklist of key steps and a flowchart explaining the timeframe, the decision-making process and who is involved.

  3. Outline what is expected from applicants in terms of locating a supervisor before applying

Most departmental webpages advise applicants to contact prospective supervisors in advance of the application to discuss research interests and compatibility, although some do not require a nominated supervisor at the application stage. Our web review identified that most departments do consider this to be a key pre-application step, and some provide relevant information and guidance on how to identify a supervisor. Departmental webpages therefore need to indicate clearly what is expected of applicants when contacting prospective supervisors. Additionally, institutions/departments should encourage academic staff to keep their profile webpages up to date with consistent information, eg projects currently supervised, research interests (topic, methodology/approach, country contexts) and capacity to take on new students, all of which would help applicants in the preparation and communication stages.

  4. Explain what counts as a ‘good’ research proposal

Another key category of pre-application information concerns how to draft a research proposal. Most departments require a research proposal for an application to be considered, and proposal-related guidance falls into two categories. Most departments link to the central university portal for application advice and guidance, which sets out the general structure of a research proposal (eg an overview of the research question(s), the main objective of the research, the potential contribution to the existing field/literature, research techniques, suggested data collection procedures and an outline timeline) and a list of department requirements. In contrast, several departments offer a webpage or a link to department-specific guidance, providing an outline/structure with a word count and details of what to include. We suggest that clear guidance on a ‘good’ research proposal (or its disciplinary equivalent) is necessary for applicants, including information on expected sections and length, as well as the evaluation criteria for the proposal.

  5. Include clear departmental contact information for potential applicants

As emphasised on the central university portal for application advice and guidance, one of the most important points to consider is whether the academic department shares the academic interests of the applicant. While all departments suggest applicants make contact before proceeding any further with their application, the formats and channels for making initial contact and sending enquiries vary across departments. Though all departments provide email addresses for general enquiries, only some provide contact details for the Academic Director of PGR (who manages the department’s PGR programme) and relevant professional services staff. We suggest that institutions/departments include clear contact information for potential applicants, including which queries should be directed to which named members of staff and how long the wait for a response may be.

  6. Welcome applicants from underrepresented groups implicitly and explicitly

Only two departments across all faculties featured EDI-related information on their pre-admission webpages. The first was a statement explaining why plenty of information was provided (‘in order to demystify the admissions process, as part of our commitment to enhancing inclusivity in doctoral education’). The second featured a video clip which sought to detail the principles of an inclusive working and learning environment and to welcome applications from individuals who identify with any of the protected characteristics defined by the Equality Act 2010. Websites which clearly communicate all required information serve an EDI function in that they do not require applicants to draw upon tacit knowledge to make sense of pre-application steps that have not been carefully explained. In addition to clear and accessible information, welcome statements that articulate a departmental position on inclusion can be helpful in that they directly acknowledge those under-represented in higher education. Such statements could be written in collaboration with existing minoritised students.

Our aim is to share the findings from our institutional case study, but also to encourage reflection, review and conversation amongst colleagues about pre-application practices. We highly recommend involving staff and students in review processes as much as possible to ensure that webpages are readable, relevant and useful.

Further information

Two linked Pre-Application Doctoral Communications Research Projects have been carried out by a research team based at the University of Warwick including Dr James Burford (PI), Dr Emily Henderson (Co-I), Dr Sophia Kier-Byfield, Dr Dangeni and Ahmad Akkad. The projects were funded by Warwick’s Enhancing Research Culture Fund. The team have produced a suite of open access resources including project briefings. For more information on the project see the website (www.warwick.ac.uk/padc) or #PADC_project on Twitter.

Dr Dangeni is a Professional Development Advisor at Newcastle University, where her teaching and research focus broadly on teaching and learning provision in the wider context of the internationalisation of higher education. She is particularly interested in research and practices around international students’ access, engagement and success in postgraduate taught (PGT) and postgraduate research (PGR) settings.

Dr James Burford is an Associate Professor at the University of Warwick. James’ research interests include doctoral education and the academic profession, higher education internationalisation and academic mobilities. Dr Sophia Kier-Byfield is a Postdoctoral Research Fellow at the University of Warwick, where she works on the ‘Opening Up the Black Box of Pre-Application Doctoral Communications’ projects. Her research interests broadly concern equity in higher education, feminisms in academia and inclusive pedagogies.




Some different lessons to learn from the 2020 exams fiasco

by Rob Cuthbert

The problems with the algorithm used for school examinations in 2020 have been exhaustively analysed, before, during and after the event. The Royal Statistical Society (RSS) called for a review, after its warnings and offers of help in 2020 had been ignored or dismissed. Now the Office for Statistics Regulation (OSR) has produced a detailed review of the problems, Learning lessons from the approach to developing models for awarding grades in the UK in 2020. But the OSR report only tells part of the story; there are larger lessons to learn.

The OSR report properly addresses its limited terms of reference in a diplomatic and restrained way. It is far from an absolution – even in its own terms it is at times politely damning – but in any case it is not a comprehensive review of the lessons which should be learned; it is a review of the lessons for statisticians to learn about how other people use statistics. Statistical models are tools, not substitutes for competent management, administration and governance. The report makes many valid points about how the statistical tools were used, and how their use could have been improved, but the key issue is the meta-perspective: no-one was addressing the big picture sufficiently. An obsession with consistency of ‘standards’ obscured the need to consider the wider human and political implications of the approach. In particular, it is bewildering that no-one in the hierarchy of control was paying sufficient attention to two key differences. First, national ‘standardisation’ or moderation had been replaced by a system which pitted individual students against their classmates, subject by subject and school by school. Second, 2020 students were condemned to live within the bounds not of the nation’s, but their school’s, historical achievements. The problem was not statistical, nor anything to do with the algorithm; the problem was with the way the problem itself had been framed – as many commentators pointed out from an early stage. The OSR report (at 3.4.1.1) said:

“In our view there was strong collaboration between the qualification regulators and ministers at the start of the process. It is less clear to us whether there was sufficient engagement with the policy officials to ensure that they fully understood the limitations, impacts, risks and potential unintended consequences of the use of the models prior to results being published. In addition, we believe that, the qualification regulators could have made greater use of opportunities for independent challenge to the overall approach to ensure it met the need and this may have helped secure public confidence.”

To put it another way: the initial announcement by the Secretary of State was reasonable and welcome. When Ofqual proposed that ranking students and tying each school’s results to its past record was the only way to do what the SoS wanted, no-one in authority was willing either to change the approach, or to make the implications sufficiently transparent for the public to lose confidence at the start, in time for government and Ofqual to change their approach.

The OSR report repeatedly emphasises that the key problem was a lack of public confidence, concluding that:

“… the fact that the differing approaches led to the same overall outcome in the four countries implies to us that there were inherent challenges in the task; and these challenges meant that it would have been very difficult to deliver exam grades in a way that commanded complete public confidence in the summer of 2020 …”

“Very difficult”, but, as Select Committee chair Robert Halfon said in November 2020, things could have been much better:

“the “fallout and unfairness” from the cancellation of exams will “have an ongoing impact on the lives of thousands of families”. … But such harm could have been avoided had Ofqual not buried its head in the sand and ignored repeated warnings, including from our Committee, about the flaws in the system for awarding grades.”

As the 2021 assessment cycle comes closer, attention has shifted to this year’s approach to grading, when once again exams will not feature except as a partial and optional extra. When the interim Head of Ofqual, Dame Glenys Stacey, appeared before the Education Select Committee, Schools Week drew some lessons which remain pertinent, but there is more to say. An analysis of 2021 by George Constantinides, a professor of digital computation at Imperial College whose 2020 observations were forensically accurate, has been widely circulated and equally widely endorsed. He concluded in his 26 February 2021 blog that:

“the initial proposals were complex and ill-defined … The announcements this week from the Secretary of State and Ofqual have not helped allay my fears. … Overall, I am concerned that the proposed process is complex and ill-defined. There is scope to produce considerable workload for the education sector while still delivering a lack of comparability between centres/schools.”

The DfE statement on 25 February kicks most of the trickiest problems down the road, and into the hands of examination boards, schools and teachers:

“Exam boards will publish requirements for schools’ and colleges’ quality assurance processes. … The head teacher or principal will submit a declaration to the exam board confirming they have met the requirements for quality assurance. … exam boards will decide whether the grades determined by the centre following quality assurance are a reasonable exercise of academic judgement of the students’ demonstrated performance. …”

Remember in this context that Ofqual acknowledges “it is possible for two examiners to give different but appropriate marks to the same answer”. Independent analyst Dennis Sherwood and others have argued for alternative approaches which would be more reliable, but there is no sign of change.

Two scenarios suggest themselves. In one, where this year’s results are indeed pegged to the history of previous years, school by school, we face the prospect of overwhelming numbers of student appeals, almost all of which will fail, leading no doubt to another failure of public confidence in the system. The OSR report (3.4.2.3) notes that:

“Ofqual told us that allowing appeals on the basis of the standardisation model would have been inconsistent with government policy which directed them to “develop such an appeal process, focused on whether the process used the right data and was correctly applied”.”

Government policy for 2021 seems not to be significantly different:

“Exam boards will not re-mark the student’s evidence or give an alternative grade. Grades would only be changed by the board if they are not satisfied with the outcome of an investigation or malpractice is found. … If the exam board finds the grade is not reasonable, they will determine the alternative grade and inform the centre. … Appeals are not likely to lead to adjustments in grades where the original grade is a reasonable exercise of academic judgement supported by the evidence. Grades can go up or down as the result of an appeal.” (emphasis added)

There is one crucial exception: in 2021 every individual student can appeal. Government no doubt hopes that this year the blame will all be heaped on teachers, schools and exam boards.

The second scenario seems more likely and is already widely expected, with grade inflation outstripping the 2020 outcome. There will be a check, says DfE, “if a school or college’s results are out of line with expectations based on past performance”, but it seems doubtful whether that will be enough to hold the line. The 2021 approach was only published long after schools had supplied predicted A-level grades to UCAS for university admission. Until now there has been a stable relationship between predicted grades and examination outcomes, as Mark Corver and others have shown. Predictions exceed actual grades awarded by consistent margins; this year it will be tempting for schools simply to replicate their predictions in the grades they award. Indeed, it might be difficult for schools not to do so, without leaving their assessments subject to appeal. In the circumstances, the comments of interim Ofqual chief Simon Lebus that he does not expect “huge amounts” of grade inflation seem optimistic. But it might be prejudicial to call this ‘grade inflation’, with its pejorative overtones. Perhaps it would be better to regard predicted grades as indicators of what each student could be expected to achieve at something close to their best – which is in effect what UCAS asks for – rather than when participating in a flawed exam process. Universities are taking a pragmatic view of possible intake numbers for 2021 entry, with Cambridge having already introduced a clause seeking to deny some qualified applicants entry in 2021 if demand exceeds the number of places available.

The OSR report says that Ofqual and the DfE:

“… should have placed greater weight on explaining the limitations of the approach. … In our view, the qualification regulators had due regard for the level of quality that would be required. However, the public acceptability of large changes from centre assessed grades was not tested, and there were no quality criteria around the scale of these changes being different in different groups.” (3.3.3.1)

The lesson needs to be applied this year, but there is more to say. It is surprising that there was apparently such widespread lack of knowledge among teachers about the grading method in 2020 when there is a strong professional obligation to pay attention to assessment methods and how they work in practice. Warnings were sounded, but these rarely broke through to dominate teachers’ understanding, despite the best efforts of education journalists such as Laura McInerney, and teachers were deliberately excluded from discussions about the development of the algorithm-based method. The OSR report (3.4.2.2) said:

“… there were clear constraints in the grade awarding scenario around involvement of service delivery staff in quality assurance, or making the decisions based on results from a model. … However, we consider that involvement of staff from centres may have improved public confidence in the outputs.”

There were of course dire warnings in 2020 to parents, teachers and schools about the perils of even discussing the method, which undoubtedly inhibited debate, but even before then exam processes were not well understood:

“… notwithstanding the very extensive work to raise awareness, there is general limited understanding amongst students and parents about the sources of variability in examination grades in a normal year and the processes used to reduce them.” (3.2.2.2)

My HEPI blog just before A-level results day was aimed at students and parents, but it was read by many thousands of teachers, and anecdotal evidence from the many comments I received suggests it was seen by many teachers as a significant reinterpretation of the process they had been working on. One teacher said to Huy Duong, who had become a prominent commentator on the 2020 process: “I didn’t believe the stuff you were sending us, I thought it [the algorithm] was going to work”.

Nevertheless, the mechanics of the algorithm were well understood by many school leaders. FFT Education Datalab was analysing likely outcomes as early as June 2020, and reported that many hundreds of schools had engaged them to assess their provisional grade submissions, some returning with a revised set of proposed grades for further analysis. Schools were seduced, or reduced, to trying to game the system, feeling they could not change the terrifying and ultimately ridiculous prospect of putting all their many large cohorts of students in strict rank order, subject by subject. Ofqual were victims of groupthink; too many people who should have known better simply let the fiasco unfold. Politicians and Ofqual were obsessed with preventing grade inflation, but – as was widely argued, long in advance – public confidence depended on broader concerns about the integrity and fairness of the outcomes.

In 2021 we run the same risk of loss of public confidence. If that transpires, the government is positioned to blame teacher assessments and probably reinforce a return to examinations in their previous form, despite their known shortcomings. The consequences of two anomalous years of grading in 2020 and 2021 are still to unfold, but there is an opportunity, if not an obligation, for teachers and schools to develop an alternative narrative.

At GCSE level, schools and colleges might learn from emergency adjustments to their post-16 decisions that there could be better ways to decide on progression beyond GCSE. For A-level/BTEC/IB decisions, schools should no longer be forced to apologise for ‘overpredicting’ A-level grades, which might even become a fairer and more reliable guide to true potential for all students. Research evidence suggests that “Bright students from poorer backgrounds are more likely than their wealthier peers to be given predicted A-level grades lower than they actually achieve”. Such disadvantage might diminish or disappear if teacher assessments became the dominant public element of grading; at present too many students suffer the sometimes capricious outcomes of final examinations.

Teachers’ A-level predictions are already themselves moderated and signed off by school and college heads, in ways which must to some extent resemble the 2021 grading arrangements. There will be at least a two-year discontinuity in qualification levels, so universities might also learn new ways of dealing with what might become a permanently enhanced set of differently qualified applicants. In the longer term HE entrants might come to have different abilities and needs, because of their different formation at school. Less emphasis on preparation for examinations might even allow more scope for broader learning.

A different narrative could start with an alternative account of this year’s grades – not ‘standards are slipping’ or ‘this is a lost generation’, but ‘grades can now truly reflect the potential of our students, without the vagaries of flawed public examinations’. That might amount to a permanent reset of our expectations, and the expectations of our students. Not all countries rely on final examinations to assess eligibility to progress to the next stage of education or employment. By not wasting the current crisis we might even be able to develop a more socially just alternative which overcomes some of our besetting problems of socioeconomic and racial disadvantage.

Rob Cuthbert is an independent academic consultant, editor of SRHE News and Blog and emeritus professor of higher education management. He is a Fellow of the Academy of Social Sciences and of SRHE. His previous roles include deputy vice-chancellor at the University of the West of England, editor of Higher Education Review, Chair of the Society for Research into Higher Education, and government policy adviser and consultant in the UK/Europe, North America, Africa, and China.




Making admissions better in Australia

By Marcia Devlin

In Australia, the federal government has been focused on improving the transparency of higher education admissions. I have been concerned about, and have written about, this matter for some years, particularly the confusion among prospective students and their families around exclusive admissions criteria being used as a proxy for quality.

The government-appointed Higher Education Standards Panel (HESP) were asked to consider and report on how the admissions policies and processes of higher education providers could be made clearer, easier to access and more useful, to inform the choices and decisions of prospective students and their families.

In the context of an increased variety of pathways through which a prospective student can apply or be accepted into higher education in Australia, the HESP found that prospective students, their families and others, including schools, are finding it increasingly difficult to understand the full range of study options and opportunities available, and to understand how they can best take advantage of these options to meet their education and career objectives.

The HESP made 14 recommendations.




Universities must act collectively to remedy lower offer rates for ethnic minority applicants

By Vikki Boliver

The Runnymede Trust has just launched its publication Aiming Higher: Race, Inequality and Diversity in the Academy, which shines a spotlight on ethnic inequalities in UK universities. The report brings together 15 short essays written by academics and policy makers which make clear that radical change is needed to address ethnic inequalities in university admissions, student experiences, degree attainment, graduate labour market outcomes, and access to academic positions, especially at senior levels.

In my contribution to the Runnymede publication (see chapter 5) I focus on the issue of ethnic inequalities in university admissions chances. Although British ethnic minorities are more likely to go to university than their White British peers, some ethnic minority groups – notably the Black Caribbean, Black African, Pakistani and Bangladeshi groups – remain strikingly underrepresented in the UK’s most academically selective institutions, including Russell Group universities. Of course this is partly due to ethnic inequalities in secondary school attainment, which mean that members of these groups are less likely to have the high grades required for entry to highly selective universities. But we also know, from analysing university admissions data, that British ethnic minority applicants are less likely to be offered places at highly selective universities even when they have the same grades and ‘facilitating subjects’ at A-level as White British applicants.