A 222-page report on the adoption of AI in Australia has just been tabled in the Senate – here’s what it recommends.
A report from the Senate’s select committee on adopting AI was tabled today (27 November). While the whole thing runs to 222 pages – including dissenting opinions from two Liberal National Party senators and additional comments from the Australian Greens and independent Senator David Pocock – the report can be summed up by the committee’s 13 main recommendations.
Regulating AI in Australia
The initial focus of the committee’s report is on what it calls “high-risk AI uses”, particularly when it comes to deepfakes, privacy and data security, and biases and discrimination.
As Dr Darcy Allen, Professor Chris Berg, and Dr Aaron Lane noted in their submission, “biases in generative AI models are, in part, a reflection of the biases inherent in humans”.
“These models are trained on vast datasets … Unsurprisingly, biases from the datasets become embedded in the models. This is [an AI system] capturing the prevailing tendencies, preferences, and prejudices of the data it has been trained on,” they said.
The committee has three recommendations in this area, and we’ll be quoting them verbatim for clarity:
1. That the Australian government introduce new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI, in line with Option 3 presented in the government’s Introducing mandatory guardrails for AI in high-risk settings: proposals paper.
2. That, as part of the dedicated AI legislation, the Australian government adopt a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.
3. That the Australian government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).
Developing a local AI industry
The report said: “The essential challenge for Australia is to develop its AI industry through policies that maximise the widespread opportunities afforded by AI technologies, while ensuring appropriate protections are in place.”
According to the committee, AI is a transformative technology, one being developed by organisations large and small across both the private and public sectors. The committee also identified sovereign AI capability as a key point in many of the submissions it received.
There’s only one recommendation in this area, but it’s a broad and inclusive one:
4. That the Australian government continue to increase the financial and non-financial support it provides in support of sovereign AI capability in Australia, focusing on Australia’s existing areas of comparative advantage and unique First Nations perspectives.
AI’s impact on workers and industry
Perhaps unsurprisingly, this is where the bulk of the committee’s recommendations sit, addressing the benefits and risks of AI for employers and employees, as well as for the wider industry.
The committee notes that creative industries are at particular risk, while the healthcare sector could see both immense benefits and “very serious risks” from the increasing adoption of AI.
Overall productivity was identified as an area that could see considerable improvement. According to a submission from Microsoft’s corporate vice president, Steven Worrall, “Australia has an incredible foundation to build on. Forecasts predict that AI could create 200,000 new jobs and contribute up to $115 billion annually to our economy.”
The committee has six recommendations in this area:
5. That the Australian government ensure that the final definition of high-risk AI clearly includes the use of AI that impacts on the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is adopted.
6. That the Australian government extend and apply the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.
7. That the Australian government ensure that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.
8. That the Australian government continue to consult with creative workers, rightsholders and their representative organisations through the CAIRG [the Copyright and Artificial Intelligence Reference Group] on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia.
9. That the Australian government require the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and that the use of such works is appropriately licensed and paid for.
10. That the Australian government urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.
Automating the decision-making process
AI is increasingly being used in automated decision-making, or ADM. This brings considerable benefits and efficiency boosts, but it also carries risks when it comes to transparency and accountability.
The Law Council of Australia noted in its submission that “transparency is critical for the responsible use of ADM by Australian organisations, both in the public sector and private sector”.
Biases built into any AI-based decision-making process are also of concern.
“AI draws inferences from patterns in existing data,” the ARC Centre of Excellence for Automated Decision-Making and Society said in its submission to the committee. “When biases are embedded in the data used to train models, models tend to perpetuate those biases …”
The committee has two recommendations in this area:
11. That the Australian government implement the recommendations pertaining to automated decision-making in the review of the Privacy Act, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
12. That the Australian government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be harmonious with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.
Environmental impacts
We already know that the data centres powering generative AI come at a huge environmental cost – something many submissions to the committee discussed.
Dr Catherine Foley, Australia’s chief scientist, said that “[training] a model like GPT-3…[is estimated] to use about 1½ thousand megawatt hours … [which is] the equivalent of watching about 1½ million hours of Netflix”. By that maths – 1,500 megawatt-hours spread across 1.5 million viewing hours – an hour of streaming is being counted as roughly one kilowatt-hour of energy use.
Another submission, this time from the Department of Industry, Science and Resources, noted that “a single data centre may consume energy equivalent to heating 50,000 homes for a year”.
The committee’s final recommendation focuses on making AI growth sustainable:
13. That the Australian government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.
You can find an HTML version of the full report – and it is worth a read – on the Parliament of Australia website.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.