SRHE Blog

The Society for Research into Higher Education

Spotlight on the inclusion process in developing AI guidance and policy


by Lilian Schofield and Joanne J. Zhang

Introduction

When the discourse on ChatGPT started gaining momentum in higher education in 2022, the ‘emotions’ behind educators’ responses, such as feelings of exclusion, isolation, and fear of technological change, were not initially at the forefront. Even educators’ apprehension about the introduction and use of AI in education, itself an emotional response, was given little attention. Ng et al (2023) highlighted this, noting that many AI tools are new to educators, who may feel overwhelmed by them due to a lack of understanding or familiarity with the technology. The big issues then were debates about banning the use of ChatGPT, ethical and privacy concerns, issues of inclusion, and concerns about academic misconduct (Cotton et al, 2023; Malinka et al, 2023; Rasul et al, 2023; Zhou & Schofield, 2023).

As higher education institutions started developing AI guidance, the focus again seemed to be geared towards students’ ethical and responsible use of AI, with little attention to guidance for educators. Here we reflect on the process of developing AI guidance at the School of Business and Management, Queen Mary University of London, through the lens of inclusion and educators’ ‘voice’. We view ‘inclusion’ as the active participation and contribution of educators in co-creating the AI policy alongside multiple voices from students and staff.

Co-creating inclusive AI guidance

Triggered by the lack of clear AI guidance for students and educators, the School of Business and Management at Queen Mary University of London (QMUL) embarked on developing AI guidance for students and staff from October 2023 to March 2024. Led by Deputy Directors of Education Dr Joanne J. Zhang and Dr Darryn Mitussis, the guidance was co-created with staff members through different modes, such as best practice sharing sessions, a staff away day, student-staff consultation, and staff consultation. These experiences helped shape the inclusive, bottom-up approach to developing the AI guidance. The best practice sharing sessions allowed educators to contribute their expertise as well as providing a platform to voice their fears and apprehensions about adopting and using AI for teaching. The sessions became a space where educators could share concerns and find a sense of relief and solidarity. Staff members shared that knowing others had similar apprehensions was reassuring and reduced the feeling of isolation. This collective space helped promote a more collaborative and supportive environment in which educators could comfortably explore AI applications in their teaching.

Furthermore, the iterative process of developing this guidance engaged different ‘voices’ within and outside the school. For instance, we discussed with the QMUL central team their approach and resources for supporting AI use by students and staff. We discussed the Russell Group principles on AI use and explored different universities’ AI policies and practices. The draft guidance was discussed and endorsed at the Teaching Away Day and at education committee meetings. As a result, we suggested three principles for developing effective practices in teaching and learning:

  1. Explore and learn.
  2. Discuss and inform.
  3. Stress test and validate.

Key learning points from our process include providing an avenue for educators to use their voice, whether in support of AI or not, and ensuring educators are active participants in the guidance-making process. This is also reflected in the AI guidance itself, which supports all staff in developing effective practices at their own pace.

Consultation with educators and students was an important avenue for inclusion in the process of developing the AI policy. Open communication and dialogue gave staff members opportunities to contribute to and shape the policy. This consultative approach enhanced the inclusion of educators and strengthened the AI policy.

Practical suggestions

Voice is a powerful tool (Arnot & Reay, 2007). Without an avenue for their voice, however, educators may feel silenced and isolated. This ‘silence’ and isolation take us back to the challenges experienced at the start of the AI discourse, such as apprehension, fear, and isolation. Addressing these issues is pertinent, especially now that employers, students and higher education institutions are pushing for AI to be embedded in the curriculum and for graduates to be AI-skilled (Southworth et al, 2023). A co-creative approach to developing AI policies is crucial to enable critique and learning, promoting a sense of ownership and commitment to the successful integration of AI in education.

The process of developing an AI policy can itself address the barriers to educators adopting AI in their practice and act as an enabler of inclusion. It ensures educators’ voices are heard, addresses their fears, and works towards a genuinely co-created policy. This inclusive, participatory and co-creative approach helped mitigate fears associated with AI by creating a supportive environment in which apprehensions could be openly discussed and addressed.

Developing the policy co-creatively, with educators’ voices at its centre, plays an important role in AI adoption. Creating avenues such as the best practice sharing sessions, where educators can discuss their positive and negative experiences with AI, ensures that voices are heard and that concerns are acknowledged and addressed. This collective sharing builds a sense of community and support, helping to alleviate individual anxieties.

Steps that could be taken towards an inclusive approach to developing AI guidance and policy are as follows:

  1. Set up the core group – the Director for Education, the chair of the exam board, and educators from different subject areas. Though the development of AI guidance can follow a top-down approach, it is important that the group is set up to be inclusive of educators’ voices and concerns.
  2. Design multiple avenues for educators’ ‘voices’ to be heard (best practice sharing sessions within and across faculties, teaching away days).
  3. Keep communication channels clear and open for all to contribute.
  4. Engage all staff and students – hearing from students directly is powerful for staff, too; we learned a lot from students and included their voices in the guidance.
  5. Integrate the guidance and gain endorsement from the school management team. Promoting educators’ involvement in creating AI guidance legitimises their contributions and ensures that their insights are taken seriously. Such endorsement also ensures that the guidance is aligned with the needs and ethical considerations of those directly engaged with and affected by it.

Conclusion

As many higher education institutions move towards embedding AI into the curriculum and become clearer in their AI guidance, it is crucial to acknowledge and address the emotional dimensions educators face in adapting to AI technologies in education. Educators’ voices in shaping AI policy and guidance are important in ensuring that they understand the guidance, embrace it and are upskilled, so that the embedding and implementation of AI in teaching and learning can succeed.

Dr Lilian Schofield is a Senior Lecturer in Nonprofit Management and the Deputy Director of Student Experience at the School of Business and Management, Queen Mary University of London. Her interests include critical management pedagogy, social change, and sustainability. Lilian is passionate about incorporating and exploring voice, silence, and inclusion in her practice and research. She is a Queen Mary Academy Fellow and holds a Learning and Teaching Enhancement Fellowship, through which she works on student skills enhancement initiatives at Queen Mary University of London.

Dr Joanne J. Zhang is Reader in Entrepreneurship and Deputy Director of Education at the School of Business and Management, Queen Mary University of London, and a visiting fellow at the University of Cambridge. She won the ‘Entrepreneurship Educator of the Year’ Triple E European Award in 2022. Joanne is also the founding director of the Entrepreneurship Hub and the QM Social Venture Fund, the first student-led social venture fund investing in ‘startups for good’ in the UK. Joanne’s research and teaching interests are entrepreneurship, strategy and entrepreneurship education. She has led and engaged in large-scale research and scholarship projects totalling over £7m. Email: Joanne.zhang@qmul.ac.uk
