The Australian government has released its interim response to last year’s AI consultation paper, seeking a careful balance between driving innovation and empowering workers on one hand, and mitigating the risks associated with the technology on the other.
The Department of Industry, Science and Resources opened the consultation in June 2023 and received more than 500 submissions before it closed in August.
Now, the government has reviewed the submissions and formed an interim response, saying that while the technology has the potential to massively improve the lives of Australians and stimulate the economy, there is still a lack of public trust in artificial intelligence (AI).
“The potential for AI systems and applications to help improve wellbeing, quality of life and grow our economy is well known. It’s been estimated that adopting AI and automation could add an additional $170 billion to $600 billion a year to Australia’s GDP by 2030,” said the response.
“While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly. This acts as a handbrake on business adoption and public acceptance.
“Surveys have shown that only one-third of Australians agree Australia has adequate guardrails to make the design, development and deployment of AI safe.”
As a result of its findings, the Australian government has announced that it will apply mandatory safeguards to high-risk AI tools, while low-risk and no-risk AI tools can continue to be used without restriction, maximising the extent to which they can help Australians in day-to-day operations.
“In considering the right regulatory approach to implementing safety guardrails, the government’s underlying aim will be to help ensure that the development and deployment of AI systems in Australia in legitimate, but high-risk settings, is safe and can be relied upon while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded.”
The government added that its first step in developing these safeguards is to identify the risks AI tools present, which mandatory safeguards would appropriately address those risks, and how best to implement them.
On top of this, the government is working with industry professionals to “develop a voluntary AI Safety Standard, implementing risk-based guardrails for industry”.
It has also said it is establishing an expert advisory body to oversee the development of future AI guardrails, and that it will develop a voluntary labelling and watermarking scheme for AI-generated material.
Alongside its tighter approach to AI risks, the government hopes to harness the technology within Australian society and maximise the benefits it presents.
Drawing on the submissions, the government has said that greater investment in AI is crucial to bringing Australia to the forefront of the technology and maximising the nation’s output.
In line with this, the government has dedicated $75.7 million in funding to AI initiatives, which it said complements substantial private-sector investment: $4.4 billion since 2013, including $1.9 billion in 2022 and $1.8 billion in 2021.
“Building on these important investments, the Australian government will continue to consider opportunities to support the adoption and development of AI and other automation technologies in Australia, including the need for an AI Investment Plan. This complements efforts to ensure that Australia has in place the necessary guardrails to build trust and confidence in the use of AI.”