The Australian government has announced plans to introduce artificial intelligence (AI) safety legislation to manage the development and use of AI.
While AI can be an incredibly powerful tool, its tendency to hallucinate and its potential to generate misinformation mean that its misuse, intentional or not, can have devastating effects.
High-risk AI systems could also create security risks, expose personal data, exacerbate bias and discrimination, and, in a worst-case scenario, mislead leaders of government or major companies.
In a release issued today (5 September) by Minister for Industry and Science Ed Husic, the government announced two new initiatives to make AI use safer.
“Australians want stronger protections on AI, we’ve heard that, we’ve listened,” said Minister Husic.
“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails. From today, we’re starting to put those protections in place.”
The first is the new Voluntary AI Safety Standard, which will come into effect immediately. The standard aims to provide businesses using high-risk AI with “practical guidance” so they can “implement best practice” to protect themselves and others.
The standard will be updated as the technology and global standards develop.
The second announcement is the introduction of new guardrails to guide AI use and development.
According to the Tech Council, generative AI alone could bolster the Australian economy by $45 billion to $115 billion by 2030.
However, while businesses are keen to adopt the technology, the government has faced repeated calls to regulate AI to ensure its use in business is both productive and safe.
In response, the government has announced a Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, which was informed by a government-appointed AI expert group.
The paper sets out a proposed definition of high-risk AI, introduces 10 proposed mandatory guardrails, and outlines “three regulatory options” for enforcing them.
“The three regulatory approaches could be:
The paper is open for consultation for four weeks, closing at 5pm on Friday, 4 October 2024.
“Business has called for greater clarity around using AI safely, and today, we’re delivering,” said Minister Husic.
“We need more people to use AI, and to do that, we need to build trust.”