Australian Research Council bans AI in assessment

Use of generative artificial intelligence by grant assessors could “compromise integrity”, funding body says

The Australian Research Council has banned the use of artificial intelligence by grant assessors, following allegations that some assessor reports were being written by publicly available AI engines.

The allegations were first aired by the Twitter account ARC Tracker in late June. One researcher told Research Professional News that the words “regenerate response” at the bottom of a piece of feedback were a “red flag” that alerted them to the possible use of AI. They said their research office had put in a complaint, and that the suspect report had been removed from the assessment after the complaint was lodged.

Several other researchers believe that assessments of their applications for ARC Discovery grants were partly written by AI.

A new policy for assessors, issued by the ARC on 7 July, specifically addresses the use of AI. It says that AI’s “risks include IT security, intellectual integrity and property protection, and the loss of confidential information. When information is entered into generative AI tools it enters the public domain and can be accessed by unspecified third parties. The content is therefore not reliable and can lead to disputes about the true authorship of what is generated.”

Elsewhere, the policy says: “Release of material into generative AI tools constitutes a breach of confidentiality and peer reviewers, including all detailed and general assessors, must not use generative AI as part of their assessment activities.”

The policy warns that the use of generative AI “may compromise the integrity of the ARC’s peer review process”. In cases where the use of generative AI by assessors is suspected, the ARC “will remove that assessment from its assessment process”, it says.

“If, following an investigation, an assessor is found to have breached the code during ARC assessment, the ARC may impose consequential actions in addition to any imposed by the employing institution.”

Those writing grant applications are advised to exercise “caution” in using AI tools but are not banned from doing so. The deputy vice-chancellor for research or their equivalent at an administering organisation “is required to certify applications on submission to the ARC. This includes certification that all participants are responsible for the authorship and intellectual content of the application.”

Underlying issues

However, the researcher who operates the ARC Tracker account told Research Professional News that the move did not fully address the underlying issues. While it is good to see the ARC taking action, they said, “the ‘don’t do it’ approach isn’t really going to stop the problem. What would be much more effective—and what researchers already proposed—is for grant applicants to flag such ‘unprofessional’ reviews, including zombie reviews.”

“Zombie reviews” are those that simply regurgitate the content of an application, whether written by AI or not.

The ARC has not specified any mechanism for detecting AI-written assessments. “There could be dozens of other ChatGPT assessments slipping through undetected,” the ARC Tracker operator said.

When asked whether any use of AI had been discovered, an ARC spokesperson said the council would not disclose any “confidential information”.

The new policy is “effective immediately and applies to all current and future funding schemes”, they said.

The policy says the ARC “will continue to engage actively with domestic and international counterparts on this issue and will maintain a watching brief on the uses of generative AI and update this policy as required”.