The Society for Research into Higher Education

Peer Observation of Teaching – does it know what it is?

by Maureen Bell

What does it feel like to have someone observing you perform in your teaching role? Suppose they tick off a checklist of teaching skills and make a judgement as to your capability, a judgement that the promotions committee then considers in its deliberations on your performance? How does it feel to go back to your department and join the peer who has written the judgement? Peer Observation of Teaching (POT) is increasingly being suggested and used as a tool for the evaluation, rather than collaborative development, of teaching practice.

Can POT for professional development co-exist with and complement POT for evaluation? Or are these diametrically opposed philosophies and activities such that something we might call Peer Evaluation of Teaching (PET) has begun to undermine the essence of POT?

I used to think the primary purpose of peer observation of teaching (POT) was the enhancement of teaching and learning. I thought it was a promising process for in-depth teaching development. More recently I have been thinking that POT has been hijacked by university quality assurance programs and re-dedicated to the appraisal of teaching by academic promotions committees. The principles and outcomes of POT for appraisal are, after all, quite opposite to those that were placed at the heart of the original POT philosophy and approach – collegial support, reflective practice and experiential learning.

In 1996 I introduced a POT program into my university’s (then) introduction to teaching course for academic staff. Participants were observed by each other, and myself as subject coordinator, and were required to reflect on feedback and plan further action. It wasn’t long before I realised that I could greatly improve participants’ experience by having them work together, experiencing at different times the roles of both observer and observed. I developed the program such that course participants worked in groups to observe each other teach and to share their observations, feedback and reflections. A significant feature of the program was a staged workshop-style introduction to peer observation which involved modelling, discussion and practice. I termed this collegial activity ‘peer observation partnerships’.

The program design was influenced by my earlier experiences of action research in the school system and by the evaluation work of Webb and McEnerney (1995) indicating the importance of training sessions, materials, and meetings. Blackwell (1996), too, in Higher Education Quarterly described POT as stimulating reflection on and improvement of teaching. Early results of my program, published in IJAD in 2001, reported POT as promoting the development of skills, knowledge and ideas about teaching, as a vehicle for ongoing change and development, and as a means of building professional relationships and a collegial approach to teaching.

My feeling then was that a collegial POT process would eventually be broadly accepted as a key strategy for teaching development in universities. Surely universities would see POT as a high value, low cost, professional development activity. This motivated me to publish Peer Observation Partnerships in Higher Education through the Higher Education Research and Development Society of Australasia (HERDSA).

Gosling’s model, which appeared in 2002, posed three categories of POT, in summary: evaluation, development, and fostering collaboration. Until then I had not considered the possibility that POT could be employed as an evaluation tool, mainly because to my mind observers did not need a particular level of teaching expertise. Early career teachers were capable of astute observation, and of discussing the proposed learning outcomes for the class along with the activity observed. I saw evaluation as requiring appropriate expertise to assess teaching quality against a set of reliable and valid criteria. Having been observed by an Inspector of Schools in my career as a secondary school teacher, I had learned from experience the difference between ‘expert observation’ and ‘peer observation’.

Looking back, I discovered that the tension between POT as a development activity and POT as an evaluation tool had always existed. POT had been mooted as a form of peer review and as a staff appraisal procedure in Australia since the late eighties and early nineties, when universities were experiencing pressure to introduce procedures for annual staff appraisal. The emphasis at that time was evaluative – a performance management approach seeking efficiency and linking appraisal to external rewards and sanctions. Various researchers and commentators c.1988-1993, including Lonsdale, Abbott, and Cannon, sought an alternative approach which emphasised collegial professional development. At that time action research involving POT was prevalent in the school system, using the Action Research Planner of Kemmis and McTaggart. Around this time Jarzabkowski and Bone from The University of Sydney developed a detailed guide for Peer Appraisal of Teaching. They defined the term ‘peer appraisal’ as a method of evaluation that could both provide feedback on teaching for personal development and provide information for institutional or personnel purposes. ‘Observer expertise in the field of teaching and learning’ was a requirement.

In American universities various peer-review-through-observation projects had emerged in the early nineties. A scholarly discussion of peer review of teaching was taking place under the auspices of the American Association for Higher Education Peer Review of Teaching project and the national conference, ‘Making Learning Visible: Peer-review and the Scholarship of Teaching’ (2000), brought together over 200 participants. The work of both Centra and Hutchings in the 90s, and Bernstein and others in the 2000s advocated the use of peer review for teaching evaluation.

In 2002 I was commissioned by what was then the Generic Centre (UK) to report on POT in Australian universities. At that time several universities provided guidelines or checklists for voluntary peer observation, while a number of Australian universities were accepting peer review reports of teaching observations for promotion and appointment. Soon after that I worked on a government funded Peer Review of Teaching project led by the University of Melbourne, again reviewing POT in Australian universities. One of the conclusions of the report was that POT was not a common professional activity. Many universities however listed peer review of teaching as a possible source of evidence for inclusion in staff appraisal and confirmation and promotion applications.

My last serious foray into POT was an intensive departmental program developed with Paul Cooper, then Head of one of our schools in the Engineering Faculty. Along with my earlier work, the outcomes of this program, published in IJAD (2013), confirmed my view that a carefully designed and implemented collegial program could overcome problems such as those reported back in 1998 by Martin in Innovations in Education and Teaching International, 35(2). Meanwhile my own head of department asked me to design a POT program that would provide ‘formal’ peer observation reports to the promotions and tenure committee. I acquiesced, although I was concerned that once POT became formalised for evaluation purposes in this way, the developmental program would be undermined.

Around 2008 my university implemented the formal POT strategy with trained, accredited peer observers and reporting templates. POT is now accepted in the mix of evidence for promotions and is compulsory for tenure applications. In the past year I’ve been involved in a project to review existing peer observation of teaching activities across the institution, which has found little evidence of the use of developmental POT.

The Lonsdale report (see above) proposed a set of principles for peer review of teaching and for the type of evidence that should be used in decisions about promotion and tenure: fairness, such that decisions are objective; openness, such that criteria and process are explicit and transparent; and consistency between standards and criteria applied in different parts of the institution and from year to year. It always seemed to me that the question of criteria and standards would prove both difficult and contentious. How does a promotions committee decipher or interpret a POT report? What about validity and reliability? What if the POT reports don’t align with student evaluation data? And what does it mean for the dynamics of promotion when one of your peer’s observations might influence your appraisal?

In 2010 Chamberlain et al. reported on a study exploring the relationship between annual peer appraisal of teaching practice and professional development. This quote from a participant in the study stays with me, “… the main weakness as far as I’m concerned is that it doesn’t know what it is. Well, what is its purpose?”

POT for professional development is an activity that is collegial, subjective, and reflective. My view is that POT for professional development can only co-exist with a version of POT for evaluation that is re-named, re-framed and standardised. And let’s call it what it really is – Peer Evaluation of Teaching (PET).

Dr Maureen Bell is Editor of HERDSA NEWS, Higher Education Research and Development Society of Australasia; HERDSA Fellow; Senior Fellow, University of Wollongong Australia.


Mind the Gap – Gendered and Caste-based Disparities in Access to Conference Opportunities

In an interview with Conference Inference [1] editor Emily Henderson, Nidhi S. Sabharwal discussed inequalities of access to conference opportunities in India.

Figure 1: Participation in Conferences by Gender (in a high-prestige institution)

EH: Nidhi, can you explain first of all where conferences come into your wider research on inequalities in Indian higher education?

NS: Equitable access to professional development opportunities such as conferences is an indicator of institutional commitment to achieving diversity and inclusion of diverse social groups on campuses. Continue reading


Doing academic work

by Rob Cuthbert

Summer holidays may not be what they were, but even so it is the time of year when universities tend to empty of students and (some) staff – an opportunity to reflect on why we do what we do. What do universities do? They do academic work, of course. What exactly does that involve? Well, as far as teaching is concerned, there are six stages in the ‘value chain’. For every teaching programme a university will: Continue reading


The deaf delegate – experiences of space and time in the conference (BSL version included)

By Dai O’Brien

In this post, Dai O’Brien discusses spatial and temporal challenges that deaf academics face when attending conferences, and presents some preliminary thoughts from his funded research project on deaf academics. This post is accompanied by a filmed version of this post in British Sign Language.

Access the British Sign Language version of this post here.

Attending conferences is all about sharing information, making those contacts which can help you with research ideas, writing projects and so on. This is the ideal. However, Continue reading


What is Times Higher Education for?

By Paul Temple

Have you been to a THE Awards bash? If not, it’s worth blagging an invite – your University must be on the shortlist for Herbaceous Border Strategy Team of the Year, or some such, as the business model obviously depends on getting as many universities as possible onto the shortlists, and then persuading each university to cough up to send along as many of its staff as possible. A night out at a posh Park Lane hotel for staff whose work most likely is normally unnoticed by the brass: where’s the harm? I went once – once is enough – mainly I think because our Marketing Director wanted to see if I really possessed a dinner jacket. (She was generous enough to say that I “scrubbed up nicely”.)

I mention this because THE itself seems to be becoming less a publication dealing with higher education news and comment and more a business aimed at extracting cash from higher education institutions, with the weekly magazine merely being a marketing vehicle in support of this aim. The Awards events are the least bothersome aspect of this. The THE rankings – highly valued as “how not to use data” examples by teachers of basic quantitative methods courses – have now entered the realm of parody (“Emerging Economy Universities with an R in their names”) although the associated conferences and double-page advertising spreads in the magazine rake in a nice bit of revenue, one imagines. THE might fairly respond by saying that nobody makes these universities come to their conferences or buy corporate advertising in their pages, and anyway they weren’t the ones who decided that the marketization of higher education worldwide would be a good idea. True, but their profit-making activities give the ratchet another turn, making it harder for universities trying to survive in a competitive market to say no to marketing blandishments, and so helping to move yet more spending away from teaching and research: something regularly lampooned by Laurie Taylor in – remind me where his Poppleton column appears?

The newer, more problematic, development is THE then selling itself as a branding consultancy to the same universities that it is including in its rankings and maybe covering in its news or comment pages. Now it goes without saying that a journal with the standards of THE would never allow the fact that it was earning consultancy fees from a university to influence that university’s position in the rankings that it publishes or how it was covered editorially. It would be unthinkable: not least because it would at a stroke undermine the whole basis of the rankings themselves. Audit firms similarly assure us that the fact that they are earning consultancy fees from a company could never affect the audit process affecting that company. The causes of misleading audit reports – on Carillion, say – should be sought elsewhere, we’re told.

But wait a minute, what’s this on the THE website? “THE is the data provider underpinning university excellence in every continent across the world. As the company behind the world’s most influential university ranking, and with almost five decades of experience as a source of analysis and insight on higher education, we have unparalleled expertise on the trends underpinning university performance globally. Our data and benchmarking tools are used by many of the world’s most prestigious universities to help them achieve their strategic goals.” This seems to be saying that the data used to create the THE rankings are available, at a price, to allow universities to improve their own performance. Leaving aside the old joke about a consultant being someone who borrows your watch to tell you the time, referring to the data used to produce rankings and in the following sentence proposing using the same data to help universities achieve their strategic goals (and I’d be surprised if these goals didn’t include rising in the aforementioned rankings) will suggest to potential clients that these two THE activities are linked. Otherwise why mention them in the same breath? This is skating on thin ethical ice.

SRHE member Paul Temple, Centre for Higher Education Studies, UCL Institute of Education, University College London.



Why the UK must up its game when it comes to recruiting international students

By Sylvie Lomer & Terri Kim

This article was first published on The Conversation on 5 June 2018

International students make billions of pounds for the UK economy and help open up a window on the world to domestic students. That’s apparently why universities are supposed to recruit them, according to government policy. Yet international students are at risk because of the government’s ‘hostile environment’ to migration and because of the way the sector recruits them.


This is a risky proposition for a sector that relies on reputation, as future students could see this country as using them as cash-cows instead of valued partners. An alternative vision of ethical student recruitment would not only be morally sound, it would be economically and educationally sustainable too.

More is not always better

Success is often defined as growth. Policy on international students has in the past often set goals for increased numbers of students. For many institutions increasing numbers is a key indicator of success.

This growth can only be sustained if the supply of students keeps expanding. But population growth in the UK’s single most important market, China, is slowing down.

True, economic growth in key countries (such as China and India) which send students to the UK suggests growing middle classes. Middle class students tend to seek international education to gain an advantage in tough job markets. And – more importantly – they can afford it. But as the middle classes expand, so too does the domestic provision of higher education in such “sending” countries. Historically, the UK has been seen as “the” destination for quality higher education. But as education quality in the “sending” countries improves, the UK will gradually lose this advantage. So the UK cannot define its success in recruiting international students exclusively based on growth.

New competitors

Competitive success means outdoing other providers and growing the market share. For the last decade, the UK has held second place to the US, recruiting 11% of globally mobile students (see below graphic).

Global market share of internationally mobile students for leading study destinations, 2016. IIE/Project Atlas (2017)

But rival countries are constantly changing their strategies and policies on recruitment and new competitors are entering the market. Japan, South Korea, India, China and Malaysia now all attract significant numbers of students. Seeking to gain market share against competitors then becomes a perpetual arms race.

No perfect number

There is no perfect number or ratio of international to home students. For a start, international students are concentrated in particular subjects, like business studies (see below graphic).

International student numbers by subject area, 2016-17. HESA 2018

International students are also concentrated in particular universities, from as few as 15 non-EU students at universities such as Leeds Trinity to over 11,000 at institutions like University College London. Some have suggested that “too many international students” affects the “quality” of the university experience. This implies that all international students are less academically able than home students, ignoring their achievements and capacity to study in second and third languages. A more positive but equally simplistic assumption is that because there are international students in a classroom, beneficial “intercultural” exchanges will happen.

This flawed simplicity of the imagined impact of international students was made clear in a survey by the UK Home Office which asked British home students whether international students had a positive or negative impact on their “university experience”. The survey had to be withdrawn after criticism that it was flawed and “open to abuse”. By positioning international students at odds with home students, the survey deepens a sense of exclusion within UK universities, rather than inclusion. Initiatives like this create the impression that universities are xenophobic and hostile places for international students. They should be egalitarian, diverse and hospitable environments for learning.

What would success look like?

Universities need to decide for themselves what successful international student recruitment looks like. For some, this will mean large populations in particular courses. Other institutions may be more strategic in considering numbers and distribution, linked to curricular aims, graduate outcomes and teaching approaches. Raw numbers are not a helpful indicator for this decision.

The government’s role should be to support universities by establishing a welcoming environment for international students. Committing to secure funding for higher education, rather than proposing frequent changes would offer the sector the stability to engage in long term financial planning, including – but not exclusively reliant on – international recruitment. The sector and the government need to commit to developing international student recruitment ethically. Currently, international students achieve fewer good degrees than home students do, yet pay significantly higher fees.

International students can come to study in the UK in the full expectation of experiencing a “British” education, only to find themselves on a course with an entirely international cohort, potentially of students from the same country. They can also start the application process, expecting to be welcomed as a guest, and find instead a confusing, expensive visa process and a hostile media and political environment. A commitment to ethical international student recruitment would start from the premise that international education should equally benefit all students. It would mean universities putting international recruitment in service to education. And it would mean the government leading the way on valuing international students as part of a sustainable internationalised higher education sector.

Sylvie Lomer is a Lecturer in Policy and Practice at the University of Manchester. SRHE member Terri Kim is Reader in Comparative Higher Education, Cass School of Education and Communities, University of East London.

Brenda Leibowitz


Brenda Leibowitz 1957 – 2018

It is with much sadness that the SRHE community notes the passing of Brenda Leibowitz, a South African scholar in academic development and higher education. Her recent work on academic staff development features twice in the SRHE/Routledge book series; first, a chapter in the edited 2016 book “Researching Higher Education: International perspectives on theory, policy and practice”, and then, with Vivienne Bozalek and Peter Kahn, a 2017 book “Theorising learning to teach in higher education”. She also presented her work at the SRHE annual conference and will be known to many in the community for her engagement across a wide range of higher education conferences in South Africa and abroad.

Brenda’s engaged scholarship over nearly 30 years was strongly rooted in her activist commitment to realising a democratic and transformed South Africa through education and higher education. She began teaching in secondary schools designated for ‘coloured’ pupils, and this sharpened sense both of the inequities of apartheid and of the possibilities in education led to a most formative stint in the Academic Development Centre at the University of the Western Cape (UWC), where her practice and emerging scholarship focused on language issues in the university. She followed this with a period of curriculum work as a Director in the national Department of Education, completing a PhD from the University of Sheffield, and moved from here to nearly a decade directing the Centre for Teaching and Learning at Stellenbosch University. Here her work moved from a focus on student development to staff development, bringing with it a critical edge and an exceptionally strong commitment to collaboration and empowerment. In 2014 her scholarship was recognised with her appointment to a chair in Teaching and Learning at the University of Johannesburg (and more recently, with the award of a National Research Foundation (NRF) funded South African Research Chair Initiative (SARChI) position on Post School Education and Training).

Brenda was one of the Principal Investigators on an ESRC Newton/NRF funded project entitled “Southern African Rurality in Higher Education” (SARiHE), which began in 2016 and will complete work in 2019. Brenda’s long-term interest in social justice in higher education especially for students from rural backgrounds in South Africa helped to secure funding for this project. The Southern African University Learning and Teaching (SAULT) forum, which she helped to build, has also been important in this project and has facilitated the involvement of academics and academic developers from across nine Southern African countries.

Brenda was absolutely prolific in her deep scholarship, and pulled many others along in her wake. She published across national and international journals, book chapters and books. A flavour of the evolution of her distinctive scholarship can be seen from a perusal of some of her article titles that drew on direct quotes from her research participants:

* “Why now after all these years you want to listen to me?” Using journals in teaching history at a South African university. The History Teacher, 1996 

* “Communities isn’t just about trees and shops”: Students from two South African universities engage in dialogue about ‘community’ and ‘community work’. Journal of Community and Applied Social Psychology, 2008 

* What’s Inside the Suitcases? An investigation into the powerful resources students and lecturers bring to teaching and learning. Higher Education Research and Development, 2009 

* “Ah, but the whiteys love to talk about themselves”: Discomfort as a pedagogy for change. Race, Ethnicity and Education, 2010 

* “It’s been a wonderful life”: Accounts of the interplay between structure and agency by “good” university teachers. Higher Education, 2012 

The title of her most recent paper, with colleague Vivienne Bozalek, ‘Toward a Slow scholarship of teaching and learning in the South’ is also revealing. ‘Slow scholarship’ foregrounds qualities such as thoughtfulness, attentiveness, the valuing of relationships, creativity, and depth of engagement – qualities that embody so well Brenda’s own scholarship and her way of being in the world.

In her research, Brenda leaves an extraordinary written record of scholarship; however, more importantly, there are the many, many lives that this extraordinary educator and scholar touched and influenced deeply. Brenda had an openness and generosity of spirit that allowed her to traverse boundaries and bring together collaborative teams across all the usual divisions of discipline, social background and institutional type. She had a solid compass that never deviated from its pointing towards the long arc of social justice, but she accomplished all she did with notable humility and serious interest in others and their educational and research journeys.

The period of late apartheid bred a distinctive sort of higher education researcher, many of these working in academic development at UWC in the 1990s. In this group of hugely influential higher education scholars, including Chrissie Boughey and Melanie Walker, Brenda made a distinctive and important contribution, cut much too short by her cancer diagnosis. We will remember her with love and admiration.

Jenni Case, Lisa Lucas and Delia Marshall