
The Society for Research into Higher Education



Are academic papers written with students cited as often as academic papers written with colleagues?

By James Hartley

This note compares the citation rates for publications written by the author alone with (i) those written with colleagues and (ii) those written with undergraduates. Although the citation rates for papers written alone or with colleagues are higher than those for papers written with undergraduates, the data suggest that some teacher-student papers can make a substantial contribution to the research literature.

Introduction

As an academic author I often wonder whether my papers with undergraduates are cited as often as my papers with colleagues. On the one hand, colleagues are often more experienced and generally more familiar with academic writing and publishing than undergraduates. On the other, undergraduates in the UK sometimes co-author papers arising from the research that they carried out in their final year under the supervision of academic staff. To answer this question I used Google Scholar to compare how often a sample of my single-authored publications was cited relative to (i) those written with colleagues and (ii) those written with undergraduate students.

Some data

Table 1 shows the results.

Table 1.  Median citation rates for 11 papers written (i) by the author alone, (ii) with colleagues, and (iii) with undergraduates.  (Data from Google Scholar 22/10/2017)

          Author alone   Author with colleagues   Author with undergraduates
Median         87                  83                         22
Range        61-140              63-393                     7-124
N              11                  11                         11

(A list of all 33 publications and their citation data is available from the author on request.)
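
For readers who want to run the same comparison on their own publication lists, here is a minimal sketch in Python. The citation counts below are invented placeholders, chosen only so that each group's median and range agree with Table 1; the real counts would be looked up by hand in Google Scholar, as they were for this note.

```python
# Minimal sketch: median, range and count of citation rates per
# authorship group. The counts are placeholders, NOT the author's
# data; only each group's median and range match Table 1.
from statistics import median

groups = {
    "Author alone": [61, 70, 75, 80, 85, 87, 90, 95, 100, 120, 140],
    "Author with colleagues": [63, 70, 75, 80, 83, 83, 90, 120, 150, 200, 393],
    "Author with undergraduates": [7, 10, 12, 15, 20, 22, 30, 40, 60, 90, 124],
}

for label, counts in groups.items():
    print(f"{label:28s} median={median(counts):>4} "
          f"range={min(counts)}-{max(counts)}  n={len(counts)}")
```

Since statistics.median handles lists of any length, the same script works unchanged for authors with more or fewer papers per group.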

Conclusions

As Table 1 shows, the results are clear (for this particular author). There is little overall difference between the median citation rates for papers written alone and for papers written with colleagues. However, the papers written with undergraduates are cited considerably less often.

What does this imply?  Is it a waste of time to publish with one’s undergraduate students?  I think not for at least three overlapping reasons.

  1. The issues studied by students indicate what they think is interesting and important. Most of them focus on their experiences as learners – attending lectures, taking notes, and writing essays (e.g., see Hartley & Cameron, 1967; Hartley & Marshall, 1974).
  2. Preliminary studies conducted with students can form the basis for subsequent, more substantive work on the same concerns – either by the author alone, or with other colleagues and students (see, for example, Hartley & Davies, 1978; Trueman & Hartley, 1996).
  3. The fact that some of these jointly-authored publications are still being cited today by other authors (see, e.g., Miyatsu, Nguyen & McDaniel, 2018) suggests that some preliminary work with students can make a seminal contribution to a particular field.

References

Hartley, J. & Cameron, A. (1967). Some observations on the efficiency of lecturing. Educational Review, 20, 30-37.

Hartley, J. & Davies, I. K. (1978). Note-taking: A critical review. Programmed Learning & Educational Technology, 15, 3, 207-224.

Hartley, J. & Marshall, S. (1974). On notes and notetaking. Universities Quarterly, 28, 225-235.

Miyatsu, T., Nguyen, K. & McDaniel, M. A. (2018).  Five popular study strategies: Their pitfalls and optimal implementations. Perspectives on Psychological Science, 13, 3, 390-407.

Trueman, M. & Hartley, J. (1996).  A comparison between the time-management strategies and academic performance of mature and traditional-entry students in higher education.  Higher Education, 32, 2, 199-215.

SRHE member James Hartley is emeritus professor in the School of Psychology at Keele University.  He may be contacted at j.hartley@keele.ac.uk



It’s all about performance

by Marcia Devlin

The Australian federal government has indicated its intention to introduce partial funding based on yet-to-be-defined performance measures.

The Australian government’s Mid-Year Economic and Fiscal Outlook (MYEFO) updates the economic and fiscal outlook presented in the previous budget, revising the budget aggregates to take account of all decisions made since the budget was released. The 2017-2018 MYEFO papers state that the Government intends to “proceed with reforms to the higher education [HE] sector to improve transparency, accountability, affordability and responsiveness to the aspirations of students and future workforce needs” (see links below). Among these reforms are performance targets for universities that will determine the growth in their Commonwealth Grant Scheme (CGS) funding for bachelor degrees from 2020, capped at the growth rate of the 18-64 year old population, and, from 1 January 2019, “a new allocation mechanism based on institutional outcomes and industry needs for sub-bachelor and postgraduate Commonwealth Supported Places”.

The MYEFO papers contain no information about these performance targets or institutional outcomes. Department of Education and Training (DET) webpages provide some additional detail, including that “From 2020, access to growth in CGS funding for bachelor degree courses will be performance based” and that “… performance indicators and performance targets will be agreed in 2018”. The website further indicates that data gathered in 2019 on 2018 performance will be used to determine the funding available in 2020. The information goes on to indicate that performance outcomes will only affect CGS funding for bachelor degree courses at public universities that previously had access to demand-driven funding. Access to growth will be based on each university’s achievement of performance objectives “such as attrition, low SES participation and workforce preparedness of graduates” (DET, 2018). Finally, the website states that indicators will be subjected to consultation with the sector.

I’m reminded of a scheme which many HERDSA Connect readers will remember – the Learning and Teaching Performance Fund (LTPF). The LTPF was set up to reward institutions that best demonstrated excellence in learning and teaching. It specified that funding allocations would be determined once institutions met specific teaching-related requirements, including probation and promotion practices and policies that included effectiveness as a teacher as a criterion for academics who teach, and systematic student evaluation of teaching and subjects, the results of which had to inform probation and promotion decisions for those academics.

Once the hurdle requirements outlined above were met, funding allocations were made on the basis of a set of performance indicators using a complex adjustment methodology. The performance indicators were derived from: the Graduate Destination Survey (GDS), which recorded graduates’ employment status, the type of work they were undertaking and any further study; the Course Experience Questionnaire (CEQ), which recorded graduates’ satisfaction with their generic skills and with teaching, as well as overall satisfaction; and the annual collection of university statistics on student progress rates by the then Department of Education, Science and Training (DEST).

My view around that time, when I was an academic developer and a PhD student, was that overall the LTPF was a good thing, because it focused attention on learning and teaching at a sectoral and institutional level in a way not previously seen in Australia. My keynote paper at a Vice-Chancellor’s Learning and Teaching Colloquium in 2007 explained that view. My view now is less naïve, having had the opportunity to better understand the complexity and particular challenges of the higher education landscape in Australia. These include the difficulty of offering quality higher education in a highly competitive mass education context with ever-increasing student diversity, and the pace and scale of change in a digital context. Add to that some of the unintended consequences of federal higher education policies – policies with cost-reduction intentions and a primary focus on the economic contributions of graduates. Performance measures now make me very nervous.

Marcia Devlin is Deputy Vice-Chancellor, Senior Vice President and Professor of Learning Enhancement at Victoria University, Australia. This article was commissioned by and published in HERDSA CONNECT 40/3, Spring 2018: http://www.herdsa.org.au

Links

Morrison, S. and Cormann, M. (2017). Mid-Year Economic and Fiscal Outlook 2017-18. Canberra: Commonwealth of Australia.

Devlin, M. (2007). The scholarship of teaching in Australian higher education: A national imperative. Keynote Paper, Vice-Chancellor’s Learning and Teaching Colloquium 2007, University of the Sunshine Coast.

Department of Education and Training.

 



Peer Observation of Teaching – does it know what it is?

by Maureen Bell

What does it feel like to have someone observing you perform in your teaching role? Suppose they tick off a checklist of teaching skills and make a judgement as to your capability, a judgement that the promotions committee then considers in its deliberations on your performance? How does it feel to go back to your department and join the peer who has written the judgement? Peer Observation of Teaching (POT) is increasingly being suggested and used as a tool for the evaluation, rather than collaborative development, of teaching practice.

Can POT for professional development co-exist with and complement POT for evaluation? Or are these diametrically opposed philosophies and activities such that something we might call Peer Evaluation of Teaching (PET) has begun to undermine the essence of POT?

I used to think the primary purpose of POT was the enhancement of teaching and learning. I thought it was a promising process for in-depth teaching development. More recently I have been thinking that POT has been hijacked by university quality assurance programs and re-dedicated to the appraisal of teaching by academic promotions committees. The principles and outcomes of POT for appraisal are, after all, quite the opposite of those that were placed at the heart of the original POT philosophy and approach – collegial support, reflective practice and experiential learning.

In 1996 I introduced a POT program into my university’s (then) introduction-to-teaching course for academic staff. Participants were observed by each other, and by me as subject coordinator, and were required to reflect on feedback and plan further action. It wasn’t long before I realised that I could greatly improve participants’ experience by having them work together, experiencing at different times the roles of both observer and observed. I developed the program so that course participants worked in groups to observe each other teach and to share their observations, feedback and reflections. A significant feature of the program was a staged, workshop-style introduction to peer observation which involved modelling, discussion and practice. I termed this collegial activity ‘peer observation partnerships’.

The program design was influenced by my earlier experiences of action research in the school system and by the evaluation work of Webb and McEnerney (1995), indicating the importance of training sessions, materials, and meetings. Blackwell (1996), too, in Higher Education Quarterly, described POT as stimulating reflection on and improvement of teaching. Early results of my program, published in IJAD in 2001, reported POT as promoting the development of skills, knowledge and ideas about teaching, as a vehicle for ongoing change and development, and as a means of building professional relationships and a collegial approach to teaching.

My feeling then was that a collegial POT process would eventually be broadly accepted as a key strategy for teaching development in universities. Surely universities would see POT as a high-value, low-cost professional development activity. This motivated me to publish Peer Observation Partnerships in Higher Education through the Higher Education Research and Development Society of Australasia (HERDSA).

Gosling’s model appeared in 2002, proposing three categories of POT – in summary: evaluation, development, and fostering collaboration. Until then I had not considered the possibility that POT could be employed as an evaluation tool, mainly because to my mind observers did not need a particular level of teaching expertise. Early career teachers were capable of astute observation, and of discussing the proposed learning outcomes for the class along with the activity observed. I saw evaluation, by contrast, as requiring appropriate expertise to assess teaching quality against a set of reliable and valid criteria. Having been observed by an Inspector of Schools in my career as a secondary school teacher, I had learned from experience the difference between ‘expert observation’ and ‘peer observation’.

Looking back, I discovered that the tension between POT as a development activity and POT as an evaluation tool had always existed. POT had been mooted as a form of peer review and as a staff appraisal procedure in Australia since the late eighties and early nineties, when universities were experiencing pressure to introduce procedures for annual staff appraisal. The emphasis at that time was evaluative – a performance management approach seeking efficiency and linking appraisal to external rewards and sanctions. Various researchers and commentators c.1988-1993, including Lonsdale, Abbott, and Cannon, sought an alternative approach which emphasised collegial professional development. At that time action research involving POT was prevalent in the school system, using the Action Research Planner of Kemmis and McTaggart. Around this time Jarzabkowski and Bone from The University of Sydney developed a detailed guide for Peer Appraisal of Teaching. They defined ‘peer appraisal’ as a method of evaluation that could both provide feedback on teaching for personal development and supply information for institutional or personnel purposes. ‘Observer expertise in the field of teaching and learning’ was a requirement.

In American universities various peer-review-through-observation projects had emerged in the early nineties. A scholarly discussion of peer review of teaching was taking place under the auspices of the American Association for Higher Education Peer Review of Teaching project, and the national conference, ‘Making Learning Visible: Peer-review and the Scholarship of Teaching’ (2000), brought together over 200 participants. The work of Centra and Hutchings in the 1990s, and of Bernstein and others in the 2000s, advocated the use of peer review for teaching evaluation.

In 2002 I was commissioned by what was then the Generic Centre (UK) to report on POT in Australian universities. At that time several universities provided guidelines or checklists for voluntary peer observation, while a number of Australian universities were accepting peer review reports of teaching observations for promotion and appointment. Soon after that I worked on a government-funded Peer Review of Teaching project led by the University of Melbourne, again reviewing POT in Australian universities. One of the conclusions of the report was that POT was not a common professional activity. Many universities, however, listed peer review of teaching as a possible source of evidence for inclusion in staff appraisal, confirmation and promotion applications.

My last serious foray into POT was an intensive departmental program developed with Paul Cooper, then Head of one of our schools in the Engineering Faculty. Along with my earlier work, the outcomes of this program, published in IJAD (2013), confirmed my view that a carefully designed and implemented collegial program could overcome problems such as those reported back in 1998 by Martin in Innovations in Education and Teaching International, 35(2). Meanwhile my own head of department asked me to design a POT program that would provide ‘formal’ peer observation reports to the promotions and tenure committee. I acquiesced, although I was concerned that once POT became formalised for evaluation purposes in this way, the developmental program would be undermined.

Around 2008 my university implemented the formal POT strategy with trained, accredited peer observers and reporting templates. POT is now accepted in the mix of evidence for promotions and is compulsory for tenure applications. In the past year I’ve been involved in a project to review existing peer observation of teaching activities across the institution, which has found little evidence of the use of developmental POT.

The Lonsdale report (see above) proposed a set of principles for peer review of teaching and for the type of evidence that should be used in decisions about promotion and tenure: fairness, such that decisions are objective; openness, such that criteria and process are explicit and transparent; and consistency between standards and criteria applied in different parts of the institution and from year to year. It always seemed to me that the question of criteria and standards would prove both difficult and contentious. How does a promotions committee decipher or interpret a POT report? What about validity and reliability? What if the POT reports don’t align with student evaluation data? And what does it mean for the dynamics of promotion when a peer’s observations might influence your appraisal?

In 2010 Chamberlain et al. reported on a study exploring the relationship between annual peer appraisal of teaching practice and professional development. This quote from a participant in the study stays with me: “… the main weakness as far as I’m concerned is that it doesn’t know what it is. Well, what is its purpose?”

POT for professional development is an activity that is collegial, subjective, and reflective. My view is that POT for professional development can only co-exist with a version of POT for evaluation that is re-named, re-framed and standardised. And let’s call it what it really is – Peer Evaluation of Teaching (PET).

Dr Maureen Bell is Editor of HERDSA NEWS, Higher Education Research and Development Society of Australasia; HERDSA Fellow; Senior Fellow, University of Wollongong Australia.



Mind the Gap – Gendered and Caste-based Disparities in Access to Conference Opportunities

In an interview with Conference Inference [1] editor Emily Henderson, Nidhi S. Sabharwal discussed inequalities of access to conference opportunities in India.

Figure 1: Participation in Conferences by Gender (in a high-prestige institution)

EH: Nidhi, can you explain first of all where conferences come into your wider research on inequalities in Indian higher education?

NS: Equitable access to professional development opportunities such as conferences is an indicator of institutional commitment to achieving diversity and inclusion of diverse social groups on campuses. Continue reading




Doing academic work

by Rob Cuthbert

Summer holidays may not be what they were, but even so it is the time of year when universities tend to empty of students and (some) staff – an opportunity to reflect on why we do what we do. What do universities do? They do academic work, of course. What exactly does that involve? Well, as far as teaching is concerned, there are six stages in the ‘value chain’. For every teaching programme a university will: Continue reading



The deaf delegate – experiences of space and time in the conference (BSL version included)

By Dai O’Brien

In this post, Dai O’Brien discusses spatial and temporal challenges that deaf academics face when attending conferences, and presents some preliminary thoughts from his funded research project on deaf academics. The post is accompanied by a filmed version in British Sign Language.

Access the British Sign Language version of this post here.

Attending conferences is all about sharing information, making those contacts which can help you with research ideas, writing projects and so on. This is the ideal. However, Continue reading



What is Times Higher Education for?

By Paul Temple

Have you been to a THE Awards bash? If not, it’s worth blagging an invite – your University must be on the shortlist for Herbaceous Border Strategy Team of the Year, or some such, as the business model obviously depends on getting as many universities as possible onto the shortlists, and then persuading each university to cough up to send along as many of its staff as possible. A night out at a posh Park Lane hotel for staff whose work most likely is normally unnoticed by the brass: where’s the harm? I went once – once is enough – mainly I think because our Marketing Director wanted to see if I really possessed a dinner jacket. (She was generous enough to say that I “scrubbed up nicely”.)

I mention this because THE itself seems to be becoming less a publication dealing with higher education news and comment and more a business aimed at extracting cash from higher education institutions, with the weekly magazine merely being a marketing vehicle in support of this aim. The Awards events are the least bothersome aspect of this. The THE rankings – highly valued as “how not to use data” examples by teachers of basic quantitative methods courses – have now entered the realm of parody (“Emerging Economy Universities with an R in their names”) although the associated conferences and double-page advertising spreads in the magazine rake in a nice bit of revenue, one imagines. THE might fairly respond by saying that nobody makes these universities come to their conferences or buy corporate advertising in their pages, and anyway they weren’t the ones who decided that the marketization of higher education worldwide would be a good idea. True, but their profit-making activities give the ratchet another turn, making it harder for universities trying to survive in a competitive market to say no to marketing blandishments, and so helping to move yet more spending away from teaching and research: something regularly lampooned by Laurie Taylor in – remind me where his Poppleton column appears?

The newer, more problematic, development is THE then selling itself as a branding consultancy to the same universities that it is including in its rankings and maybe covering in its news or comment pages. Now it goes without saying that a journal with the standards of THE would never allow the fact that it was earning consultancy fees from a university to influence that university’s position in the rankings that it publishes or how it was covered editorially. It would be unthinkable: not least because it would at a stroke undermine the whole basis of the rankings themselves. Audit firms similarly assure us that the fact that they are earning consultancy fees from a company could never affect the audit process affecting that company. The causes of misleading audit reports – on Carillion, say – should be sought elsewhere, we’re told.

But wait a minute, what’s this on the THE website? “THE is the data provider underpinning university excellence in every continent across the world. As the company behind the world’s most influential university ranking, and with almost five decades of experience as a source of analysis and insight on higher education, we have unparalleled expertise on the trends underpinning university performance globally. Our data and benchmarking tools are used by many of the world’s most prestigious universities to help them achieve their strategic goals.” This seems to be saying that the data used to create the THE rankings are available, at a price, to help universities improve their own performance. Leaving aside the old joke about a consultant being someone who borrows your watch to tell you the time: referring in one sentence to the data used to produce the rankings, and in the next proposing to use the same data to help universities achieve their strategic goals (and I’d be surprised if those goals didn’t include rising in the aforementioned rankings), will suggest to potential clients that these two THE activities are linked. Otherwise why mention them in the same breath? This is skating on thin ethical ice.

SRHE member Paul Temple, Centre for Higher Education Studies, UCL Institute of Education, University College London.