Government promises ‘stronger guardrails’ for AI development

Australia will strengthen laws for “high-risk” applications of artificial intelligence, the science minister says

The Australian government has promised to increase oversight of artificial intelligence, after a consultation with academia, industry and the public found widespread fears that current rules and regulations are inadequate.

After consulting in 2023 on how to ensure AI use is “safe and responsible”, the government said on 17 January that it had heard that “existing laws likely do not adequately prevent AI-facilitated harms before they occur” and that more needed to be done to deal with harms that do occur.

“Australians understand the value of AI, but they want to see the risks identified and tackled,” said Ed Husic, minister for industry and science. “We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

The government insists that work is already underway to strengthen laws, including around privacy and misinformation, but it also promises to look at “mandatory safeguards” for those working with AI in what it calls “high-risk settings”.

Its consultation response cites filtering spam emails as a low-risk use of AI, whereas higher-risk uses include enabling self-driving vehicles or predicting whether someone is likely to commit an offence.

While industry groups responding to the consultation favoured strengthening existing laws, academic groups leaned towards new legislation or a dedicated AI act, following the lead of the EU and other countries. There were also calls for Australia to do more to build up its capacity to develop AI.

‘Sensible first step’

The Australian Academy of Science said the moves were a “sensible first step” and it welcomed the fact that there seemed to be an attempt to avoid “unnecessary or disproportionate burdens for the research and development sector, the community and regulators”.

It said the government must now heed calls to develop a national strategy for AI uptake in science and invest in high-performance computing.

It also said the government should implement the Unesco recommendation on open science. “Since AI is trained on available data, keeping scientific data and peer-reviewed publications behind paywalls impacts the ability of these systems to leverage the most reliable information,” the academy said.