“Wait-and-see attitude” is not enough, technologists say
Australia needs to move to ensure that artificial intelligence is “used for good”, a report has said.
Leading technologists have warned that if Australia does not step up, it may miss the opportunity to influence the development of AI, particularly around ethics and standards.
The Australian Academy of Technology and Engineering and the Australian Institute for Machine Learning at the University of Adelaide issued the Responsible AI report on 23 November.
It raises questions about Australia’s level of spending on AI research, suggesting a figure of A$1 billion would be more appropriate than the current A$100 million set aside for integrating AI into industry.
Launching the report, academy chief executive Kylie Walker said Australia needed around 100,000 more “digitally skilled” workers but was only graduating about 7,000 a year with the “right skills”.
“We need a culture of research and risk-taking and university-industry collaborative research,” Walker said.
Stela Solar, director of the National AI Centre at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), said Australia had a “deep researcher capability [in AI] that is world-leading”.
She said the growth of the centre’s Responsible AI Network to 2,000 members showed that “businesses want to do AI well”.
But according to Australian Institute for Machine Learning director Simon Lucey, the nation risks taking a “wait-and-see attitude”. Lucey said Australia’s economy was “too simple” and AI technology could be “the real silver bullet” for developing further, if a national strategy could be created.
The report identifies university-industry partnerships, more investment and a stronger skills base as key to Australia’s success in developing and using AI.
Academy board member and CSIRO researcher Elanor Huntington wrote in the report that “we need to do more to support the growth of our broader AI ecosystem and double down on our winners to create globally competitive products”.
“We need to ensure that Australia captures its fair share of [AI’s] benefits.”
Applications and gaps
The report includes short editorials by several leading AI researchers on potential applications and on gaps in research.
Michael Milford, joint director of the Queensland University of Technology’s Centre for Robotics, wrote that Australia had an opportunity to bridge a key gap in AI development.
AI still does not “know when it doesn’t know” the answer to a problem, he said. Developing AI that can recognise its own uncertainty would make it possible to deploy robots in a much wider range of situations.
“Further investment at scale in Australian research will enable us to bridge the divide between basic, blue-sky AI research and introspective, trusted embodied intelligence in autonomous systems of all varieties.”
Carolyn Semmler and Lana Tikhomirov, psychology researchers at the University of Adelaide, wrote that the use of AI in healthcare needed to take account of “socio-technical systems”, with special care around issues like the datasets used to train AI tools.
“We need to take a radically different approach to the design of AI systems. This can be achieved by understanding how expert human decision-makers like doctors do their work,” they wrote.
A group of University of Queensland computer scientists also raised concerns about AI datasets, writing that “given the current lack of regulation in most jurisdictions of the world on the use of public data to train AI systems, its usage may be regarded as legally compliant but it may also be considered ethically questionable as it poses risks”.