
Government use of AI under the microscope in new federal inquiry

The use of AI tools in government is set to be reviewed as part of a new federal inquiry to ensure that adequate measures are in place for appropriate and ethical use.

Daniel Croft
Fri, 13 Sep 2024

On Thursday (12 September), the Joint Committee of Public Accounts and Audit announced that it would be launching an inquiry into the use of AI systems by the government.

“The committee will specifically examine the adoption and use of artificial intelligence (AI) systems and processes by public sector entities to conduct certain functions, including but not limited to the delivery of services, to help achieve their objectives,” said the committee.

The committee has laid out terms of reference for the inquiry, which outline its key objectives in evaluating the use of the technology.


Key areas include the purposes for which AI is being used by public sector entities; whether current internal governance structures ensure the ethical and responsible use of the technology; the public sector’s internal capability to “effectively adopt and utilise AI”; sovereign capability issues, given that most AI tools are sourced from overseas; and the adequacy of existing legislative and regulatory frameworks, among others.

The committee has said that it would be utilising submissions made in a previous inquiry into Commonwealth financial statements 2022–23, but it has invited parties to make further submissions before 25 October.

The inquiry comes as the government announced earlier this month that it would be exploring the possibility of AI safety legislation and mandatory guardrails to manage the development and use of AI.

In a release by Minister for Industry and Science Ed Husic on 5 September, the government announced two new initiatives to make AI use safer.

“Australians want stronger protections on AI, we’ve heard that, we’ve listened,” said Minister Husic.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails. From today, we’re starting to put those protections in place.”

The first is the new Voluntary AI Safety Standard, which will come into effect immediately. The standard aims to provide businesses using high-risk AI with “practical guidance” so they can “implement best practice” to protect themselves and others.

The standard will be updated as the technology and global standards develop.

The second announcement is the introduction of new guardrails to guide AI use and development.

According to the Tech Council, generative AI alone could bolster the Australian economy by $45 billion to $115 billion by 2030.

However, while businesses are keen to implement the technology, the government has been asked repeatedly to comment on or introduce AI regulation to ensure that its use in business is productive and safe.

In response, the government has announced a Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, which was informed by a government-appointed AI expert group.

The paper sets out a proposed definition of high-risk AI, introduces 10 proposed mandatory guardrails, and outlines “three regulatory options” to enforce them.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
