
Australian government evaluates AI impact on Australian Consumer Law

As part of its analysis of AI use, the Australian government is investigating whether Australian Consumer Law (ACL) remains suitable as businesses adopt AI.

Daniel Croft
Tue, 15 Oct 2024

Building on previous consultations on Safe and responsible AI in Australia, the new review is part of the country’s overarching analysis of how AI and generative AI can be used productively, safely, and responsibly.

“In the 2024–25 budget, the Australian government invested $39.9 million over five years for the development of policies and capability to support the adoption and use of AI, including work to clarify and strengthen existing laws.

“As part of this work, Treasury is leading a priority review (the review) into the implications of AI on the ACL. The Department of Health and Aged Care and the Attorney-General’s Department are undertaking similar AI reviews into health and aged-care sector regulation and copyright law,” the review said.


The government released the Review of AI and the Australian Consumer Law discussion paper to guide industry and stakeholders in submitting their thoughts on AI and its influence on consumer law.

“We are seeking your views on whether the Australian Consumer Law (ACL) remains suitable to protect consumers who use artificial intelligence (AI) [and] support the safe and responsible use of AI by businesses,” it said.

The paper intends to gauge stakeholder views on “how well adapted the ACL is to support Australian consumers and businesses to manage potential consumer law risks of AI-enabled goods and services, the application of well-established ACL principles to AI-enabled goods and services, the remedies available to consumers of AI-enabled goods and services under the ACL, and the mechanisms for allocating liability among manufacturers and suppliers of AI-enabled goods and services”.

Submissions for the paper will be open until Tuesday, 12 November 2024.

The consultation comes as the government evaluates the overall impact of AI, announcing last month that it was looking to introduce “mandatory guardrails”.

In a release by Minister for Industry and Science Ed Husic on 5 September, the government announced two new initiatives to make AI use safer.

“Australians want stronger protections on AI, we’ve heard that, we’ve listened,” said Minister Husic.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails. From today, we’re starting to put those protections in place.”

Husic announced a new Voluntary AI Safety Standard, which came into effect immediately. The standard aims to provide businesses using high-risk AI with “practical guidance” so they can “implement best practice” to protect themselves and others.

The standard will be updated as the technology and global standards develop.

Additionally, the government has proposed mandatory guardrails for using AI in high-risk environments.

The government also launched a federal inquiry into the impact of AI systems in government, overseen by the Joint Committee of Public Accounts and Audit.

“The committee will specifically examine the adoption and use of artificial intelligence (AI) systems and processes by public sector entities to conduct certain functions, including but not limited to the delivery of services, to help achieve their objectives,” said the committee.

Despite recognising the danger AI presents, including its potential use to spread misinformation, a parliamentary inquiry has pushed back on legislation that would ban the use of AI-generated content in election campaigns.

The select committee on adopting AI recommended that the government develop new legislation and regulations regarding the use of AI as the next federal election looms.

However, the Senate committee, while recognising the risk generative AI poses in spreading disinformation and its potential use in election campaigns, said the election was too close and that there would not be enough time to develop properly considered legislation.

At the same time, the Liberal Party launched the first Australian political advert entirely generated by AI, featuring a deepfake of ACT Chief Minister Andrew Barr.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
