Russell Group issues guidance on staff and student use of AI

Research-intensive universities seek to ensure ethical use of ChatGPT and other AI programs

The Russell Group has published a set of principles for universities to help students and staff become “AI literate”, while ensuring the “ethical and responsible use” of artificial intelligence programs such as ChatGPT.

The principles state that staff should be equipped to support students in using generative-AI tools effectively and appropriately in their learning, and that teaching and assessment should be adapted to incorporate the ethical use of AI.

Universities should also “ensure academic rigour and integrity is upheld”, and “work collaboratively to share best practice, as the technology and its application in education evolves”.

“AI breakthroughs are already changing the way we work and it’s crucial students get the new skills they need to build a fulfilling career,” said Tim Bradshaw, chief executive of the Russell Group. “University staff also need support, as they look at how AI can be used to enhance their teaching and help bring subjects to life.”

‘Useful principles’

Rose Stephenson, director of policy and advocacy at the Higher Education Policy Institute, told Research Professional News that it was a “useful set of principles” that the sector would find helpful.

“These principles demonstrate an approach beyond what was considered in the early days of the internet, including the focus on the ethics of AI,” she said. “Such comprehensive principles will support higher education providers to navigate and embrace AI technology.”

Andrew Brass, head of the school of health sciences at the University of Manchester, said students were already using AI technology, so “the question for us as educators is: how do you best prepare them for this and what are the skills they need to have to know how to engage with generative AI sensibly?”

He added: “We want to embrace new tools as a way of helping people learn more effectively, for instance, in providing formative feedback to help students recognise gaps in their knowledge. But if students are going to use generative AI, they must apply a critical evaluation to it and understand the weaknesses of the tools.”

Gavin McLachlan, vice-principal and chief information officer at the University of Edinburgh, said: “Universities will now have a responsibility to ensure their students are AI literate, both to support their use of these tools in their learning but also more widely to equip them with the skills they need to use these tools appropriately through their careers—because it seems very likely every job and sector will be transformed by AI to some extent.”