Furore over use of AI to assess research proposals

Grant applicants allege use of ChatGPT-like service in generating feedback on Australia’s prestigious Discovery proposals

The Australian Research Council faces allegations that at least one of its peer reviewers has used artificial intelligence engines to generate assessments in the current round of grants under the Discovery Projects competition.

The allegations surfaced on 30 June via the Twitter account @ARC_Tracker, which said that “several [applicants] say assessor reports are generic regurgitation of their apps [applications]”.

One affected researcher, who asked to remain anonymous, said the words “regenerate response” at the bottom of a piece of assessor feedback had alerted them to the possible use of an AI tool such as ChatGPT.

They were shown the feedback for their project during the rejoinder phase of the current 2024 grant round.

The affected researcher told Research Professional News that if their proposal had been fed to ChatGPT, it was an “issue”.

“There’s original ideas in a grant proposal,” they said. “It looks like they fed my proposal into the machine, so now it’s in its model.”

Slippery slope

The researcher, who has acted as an ARC assessor in the past, said that in their view the use of AI to generate assessor responses would amount to “academic misconduct”.

“It reads like the assessment was written by someone who doesn’t understand the science and has just paraphrased the application,” they said.

“This is a serious thing, it’s a slippery slope.”

The response read as if it had been “regurgitated” from the original application, the researcher said. “It’s very bland. It doesn’t have any opinions.”

“Assessors are asked to give you an opinion as an expert. It would have been better to decline.”

The Discovery Projects grants are among the ARC’s most significant programmes, offering researchers up to A$500,000 a year for five years. In 2023, A$221 million was awarded under the programme.

The council declined to respond to questions about whether it was aware of the use of artificial intelligence by assessors, but after the allegations were made public on social media, it issued a statement on the “confidentiality obligations” of assessors.

“Like many organisations,” it said, “the Australian Research Council is considering a range of issues regarding the use of generative artificial intelligence [tools] that use algorithms to create new content (such as ChatGPT) and that may present confidentiality and security challenges for research and for grant programme administration.

“While we are undertaking this work, we would like to remind all peer reviewers of their obligations to ensure the confidentiality of information received as part of National Competitive Grants Program processes.”

Referring to a 2018 national code of research conduct, the statement said that “release of material that is not your own outside of the closed Research Management System, including into generative AI tools, may constitute a breach of confidentiality. As such, the ARC advises that peer reviewers should not use AI as part of their assessment activities.”

In response to further questions, a spokesperson also referred to the council’s conflict of interest and confidentiality policy. They said that “while generative AI tools are not named in this [policy] document, the common principles of confidentiality apply across both existing and emerging channels through which confidential information may be inappropriately disclosed”.

The affected researcher said they had asked their university research office to lodge a complaint with the ARC and were waiting for a response. Rejoinders are due on 6 July, and they did not know whether to address the assessment in question in their rejoinder.

They were also concerned about whether the score from that assessor was fair, if generative AI had indeed been used in the assessment.

Applicants see assessors’ comments but not their scores.

“[The AI] quite likes my grant application but I don’t know if that reflects my score,” they said.

Unlike with other assessments, where clear mistakes could be corrected, any negative views held by that assessor remained unknown, they said.

They said the words “regenerate response” had been the “red flag” that alerted them, and they were concerned that other researchers might be affected.

“If people [assessors] are doing this in a widespread way, I think that’s a bit worrying.”

The operator of the @ARC_Tracker Twitter account told Research Professional News that the ARC’s existing policies did not cover the use of AI specifically.

The ARC needed to work on “how to best ensure assessments are high quality”, they said. “They’re asleep at the wheel on this issue. Did they not see this coming?”

They referred to a 2021 open letter to then education minister Alan Tudge, signed by more than 1,000 Australian researchers, calling on the minister to address several issues with ARC processes, including through an “improved system for responding to peer reviews [and] consequences for inappropriate and unprofessional reviewer comments”.

Assessors who repeatedly engaged in such behaviour should “be reported to their university’s deputy vice-chancellor for research,” the letter said.