Yesterday (17 January 2024), the Australian government released its interim response to the AI consultation paper from last year, which received 500 submissions from industry and beyond.
The government's general stance is that it wants the technology to drive innovation: low-risk artificial intelligence (AI) should remain largely unrestricted to maximise its benefits, while high-risk AI presents a danger for which it plans to mandate safeguards.
Here is how the industry has responded.
Professor Geoff Webb
Department of data science and AI, faculty of information technology, Monash University
“The Australian government’s interim response to its AI consultation process strikes a measured balance between controlling risks and not stifling innovation. AI has the potential to add hundreds of billions of dollars to the Australian economy. But these potential benefits come with many varied and substantial risks.
“By focusing on high-risk applications of the technology, the government has taken a sensible next step in what will be a long-term process as these technologies continue their breakneck pace of evolution.
“An important next focus will need to be education, both of the public, so that they understand how to interact safely with the new technologies, and of experts, so that we can best deploy the technologies in the national interest.”
Josh Griggs
Interim chief executive, Australian Computer Society
“Given the profound changes AI will make to the workforce in coming years, ACS welcomes the federal government’s response and looks forward to working with the proposed Temporary Expert Advisory Group to ensure Australia has regulation that’s fit for purpose over the coming decades.
“Consulting with experts and industry leaders will be critical in ensuring that any regulation reaps the benefits of AI while mitigating the real risks presented by misuse of the emerging technology.
“In the 2023 Digital Pulse report, ACS forecast that 75 per cent of Australian workers will see their roles changed by AI, so the impact will be felt across the economy. It’s important that, alongside well-designed legislation, industry is supported through frameworks and individuals are supported with skills development.
“Our submission to the Safe and Responsible AI discussion paper also highlighted that we need to be careful that regulation is consistent and co-ordinated to enhance Australia’s capability in a safe and responsible manner. We believe a holistic approach should be a priority for the advisory group when it meets.
“We look forward to working with the federal government, industry, educators and all key stakeholders to ensure Australia maximises the benefits from AI and associated technologies over the coming decade.”
Niusha Shafiabady
Associate professor of computational and artificial intelligence, Charles Sturt University
“The discussion paper talks about issues with AI that everyone who reads the news has heard about, such as misinformation and disinformation, and about collaboration with industry to ‘develop options for voluntary labelling and watermarking of AI-generated materials’.
“It also mentions that firms like ‘Microsoft, Google, Salesforce and IBM’ are ‘already adopting ethical principles’ in using AI.
“It is good fun to read the executive order on the safe, secure and trustworthy development and use of AI that US President Joe Biden signed on 30 October 2023, and to compare it with the Australian government’s discussion paper.
“To me, the threats AI and technology pose to people are of two types: long-term and short-term. The paper our government has released ignores the long-term threats altogether. AI is changing our education system and the way kids grow up and learn at school.
“AI will displace many jobs. To what extent are we going to allow it to be integrated into our lives? Are we thinking strategically, or putting our faith in big firms because they say they are ‘already adopting ethical principles’? How are we going to create mandatory guardrails for ‘testing’, ‘transparency’ and ‘accountability’ through collaboration with industry? Who are the industry experts who will verify whether an AI system is ‘biased’ or not?
“The first question we should ask ourselves, in my opinion, is how the AI systems are being created and who the people developing them are. These days, the so-called ‘AI experts’ are often people who have learnt to use free or paid toolboxes distributed by big firms like Microsoft, Google, Salesforce and IBM.
“What are these tools doing? Do the developers even know that there are ways to avoid issues like ‘bias’ in AI systems? Do they have enough knowledge and training to develop systems that are less liable to make mistakes? Can the big firms that governments are paying millions of dollars to for their services be trusted?
“This paper, Safe and Responsible AI in Australia, is itself stored on Google’s servers.
“We need regulations and enforcement, not just talk about good ideas. Misinformation and disinformation are serious threats. We need regulations that mandate watermarking of fake material.
“In the US government’s executive order, they specifically mention what they are implementing and what needs to be done. Here, we are placing our faith, and the fate of the people, in industry’s good intentions. Sorry, but this won’t work. If we don’t take the threat of technology seriously and come up with mandatory regulations, we will feel the blow as a nation. It is time to act now.”