SRHE Blog

The Society for Research into Higher Education

Peer Observation of Teaching – does it know what it is?


by Maureen Bell

What does it feel like to have someone observing you perform in your teaching role? Suppose they tick off a checklist of teaching skills and make a judgement as to your capability, a judgement that the promotions committee then considers in its deliberations on your performance? How does it feel to go back to your department and join the peer who has written the judgement? Peer Observation of Teaching (POT) is increasingly being suggested and used as a tool for the evaluation, rather than collaborative development, of teaching practice.

Can POT for professional development co-exist with and complement POT for evaluation? Or are these diametrically opposed philosophies and activities such that something we might call Peer Evaluation of Teaching (PET) has begun to undermine the essence of POT?

I used to think the primary purpose of POT was the enhancement of teaching and learning. I thought it was a promising process for in-depth teaching development. More recently I have been thinking that POT has been hijacked by university quality assurance programs and re-dedicated to the appraisal of teaching by academic promotions committees. The principles and outcomes of POT for appraisal are, after all, quite the opposite of those placed at the heart of the original POT philosophy and approach – collegial support, reflective practice and experiential learning.

In 1996 I introduced a POT program into my university’s (then) introduction to teaching course for academic staff. Participants were observed by each other and by me as subject coordinator, and were required to reflect on the feedback and plan further action. It wasn’t long before I realised that I could greatly improve participants’ experience by having them work together, experiencing at different times the roles of both observer and observed. I developed the program so that course participants worked in groups to observe each other teach and to share their observations, feedback and reflections. A significant feature of the program was a staged workshop-style introduction to peer observation, involving modelling, discussion and practice. I termed this collegial activity ‘peer observation partnerships’.

The program design was influenced by my earlier experiences of action research in the school system and by the evaluation work of Webb and McEnerney (1995), which indicated the importance of training sessions, materials, and meetings. Blackwell (1996), writing in Higher Education Quarterly, likewise described POT as stimulating reflection on and improvement of teaching. Early results of my program, published in IJAD in 2001, reported POT as promoting the development of skills, knowledge and ideas about teaching, as a vehicle for ongoing change and development, and as a means of building professional relationships and a collegial approach to teaching.

My feeling then was that a collegial POT process would eventually be broadly accepted as a key strategy for teaching development in universities. Surely universities would see POT as a high-value, low-cost professional development activity. This motivated me to publish Peer Observation Partnerships in Higher Education through the Higher Education Research and Development Society of Australasia (HERDSA).

Gosling’s model appeared in 2002, proposing three categories of POT, in summary: evaluation, development, and fostering collaboration. Until then I had not considered the possibility that POT could be employed as an evaluation tool, mainly because to my mind observers did not need a particular level of teaching expertise. Early career teachers were capable of astute observation, and of discussing the proposed learning outcomes for the class along with the activity observed. Evaluation, by contrast, I saw as requiring appropriate expertise to assess teaching quality against a set of reliable and valid criteria. Having been observed by an Inspector of Schools in my career as a secondary school teacher, I had learned from experience the difference between ‘expert observation’ and ‘peer observation’.

Looking back, I discovered that the tension between POT as a development activity and POT as an evaluation tool had always existed. POT had been mooted as a form of peer review and as a staff appraisal procedure in Australia since the late eighties and early nineties, when universities were experiencing pressure to introduce procedures for annual staff appraisal. The emphasis at that time was evaluative – a performance management approach seeking efficiency and linking appraisal to external rewards and sanctions. Various researchers and commentators c.1988-1993, including Lonsdale, Abbott, and Cannon, sought an alternative approach which emphasised collegial professional development. At that time action research involving POT was prevalent in the school system, using the Action Research Planner of Kemmis and McTaggart. Around this time Jarzabkowski and Bone from The University of Sydney developed a detailed guide for Peer Appraisal of Teaching. They defined the term ‘peer appraisal’ as a method of evaluation that could provide feedback on teaching for personal development as well as information for institutional or personnel purposes. ‘Observer expertise in the field of teaching and learning’ was a requirement.

In American universities various peer-review-through-observation projects had emerged in the early nineties. A scholarly discussion of peer review of teaching was taking place under the auspices of the American Association for Higher Education Peer Review of Teaching project, and the national conference, ‘Making Learning Visible: Peer-review and the Scholarship of Teaching’ (2000), brought together over 200 participants. The work of Centra and Hutchings in the 1990s, and of Bernstein and others in the 2000s, advocated the use of peer review for teaching evaluation.

In 2002 I was commissioned by what was then the Generic Centre (UK) to report on POT in Australian universities. At that time several universities provided guidelines or checklists for voluntary peer observation, while a number of Australian universities were accepting peer review reports of teaching observations for promotion and appointment. Soon after that I worked on a government-funded Peer Review of Teaching project led by the University of Melbourne, again reviewing POT in Australian universities. One of the conclusions of the report was that POT was not a common professional activity. Many universities, however, listed peer review of teaching as a possible source of evidence for inclusion in staff appraisal and in confirmation and promotion applications.

My last serious foray into POT was an intensive departmental program developed with Paul Cooper, then Head of one of our schools in the Engineering Faculty. Along with my earlier work, the outcomes of this program, published in IJAD (2013), confirmed my view that a carefully designed and implemented collegial program could overcome problems such as those reported back in 1998 by Martin in Innovations in Education and Teaching International, 35(2). Meanwhile my own head of department asked me to design a POT program that would provide ‘formal’ peer observation reports to the promotions and tenure committee. I acquiesced, although I was concerned that once POT became formalised for evaluation purposes in this way, the developmental program would be undermined.

Around 2008 my university implemented the formal POT strategy with trained, accredited peer observers and reporting templates. POT is now accepted in the mix of evidence for promotions and is compulsory for tenure applications. In the past year I’ve been involved in a project to review existing peer observation of teaching activities across the institution, which has found little evidence of the use of developmental POT.

The Lonsdale report (see above) proposed a set of principles for peer review of teaching and for the type of evidence that should be used in decisions about promotion and tenure: fairness, such that decisions are objective; openness, such that criteria and process are explicit and transparent; and consistency of standards and criteria across different parts of the institution and from year to year. It always seemed to me that the question of criteria and standards would prove both difficult and contentious. How does a promotions committee decipher or interpret a POT report? What about validity and reliability? What if the POT reports don’t align with student evaluation data? And what does it mean for the dynamics of promotion when one of your peers’ observations might influence your appraisal?

In 2010 Chamberlain et al. reported on a study exploring the relationship between annual peer appraisal of teaching practice and professional development. This quote from a participant in the study stays with me: “… the main weakness as far as I’m concerned is that it doesn’t know what it is. Well, what is its purpose?”

POT for professional development is an activity that is collegial, subjective, and reflective. My view is that POT for professional development can only co-exist with a version of POT for evaluation that is re-named, re-framed and standardised. And let’s call it what it really is – Peer Evaluation of Teaching (PET).

Dr Maureen Bell is Editor of HERDSA NEWS, Higher Education Research and Development Society of Australasia; HERDSA Fellow; and Senior Fellow, University of Wollongong, Australia.
