
AEC says AI-generated misinformation likely to plague next federal election

The Australian Electoral Commission (AEC) expects AI-generated misinformation and deepfakes to be a challenge for the 2025 federal election, but it has revealed that it does not have the technology to combat them.

Daniel Croft
Tue, 21 May 2024

Already, tools like OpenAI’s ChatGPT and DALL-E, as well as Meta AI and others, allow users to create (at least somewhat) convincing images, video, audio and text quickly and easily. While these tools are largely used for productivity or fun, they can also be used to maliciously spread misinformation.

Speaking to a parliamentary committee inquiry into artificial intelligence (AI) on Monday (20 May), AEC commissioner Tom Rogers said that while these tools have delivered “amazing productivity benefits”, elections around the globe have already been plagued by AI-generated fake posts.

“The AEC does not possess the legislative tools or internal technical capability to deter, detect or then adequately deal with false AI-generated content concerning the election process – such as content that covers where to vote, how to cast a formal vote and why the electoral process may not be secure or trustworthy,” Rogers said.


“We’re seeing increased use of those sorts of tactics in elections around the world.

“I don’t think we’re going to be immune to that. So we could expect things like that to occur at the next election.”

One such example is a robocall deepfake of US President Joe Biden’s voice that circulated early this year, urging US citizens not to vote.

At this stage, Rogers stressed, the AEC does not have the capability to combat AI-generated misinformation plaguing electoral campaigns, particularly as some content, such as deepfakes and voice-cloned robocalls, is not technically illegal.

“If those messages were authorised, duly authorised, they do not fall afoul of the electoral act currently,” he said.

Greens Senator David Shoebridge also pointed out that deepfakes were not unlawful and that stronger powers to fight them were necessary.

“It’s not unlawful to produce deepfakes in our election system. Now, just think about what that might mean,” he said.

“If there’s a barrage of deepfake voice recordings, going into marginal seats in the last few days before the election, that’s a very real and present danger.

“It’s one thing to see an unethical player lying about their political opponent and creating a false narrative about their political opponent. It’s quite another thing to get your political opponent to lie themselves and to generate the lies being told out of the mouth of your political opponent.

“That’s what deepfakes will permit unless we take some urgent regulatory action.”

Rogers did say that the federal government’s consideration of mandatory watermarks for AI-generated content would be a step in the right direction, alongside stricter codes for tech platforms, political codes of conduct and a national digital literacy campaign, particularly as AI tools are “improving the quality of disinformation to make it more undetectable”.

He added that the AEC is working with tech companies to reduce the danger that AI-generated content presents.

Meta is one such company; the AEC has already met with it to discuss the dangers of AI and the capabilities it gives users to create misinformation.

“Just to give Meta some praise, after that meeting [three weeks ago], a whole bunch of us sat around a table unscientifically and tried to get their new AI search tool to spit forward false information about the Australian election by asking a whole range of questions, and we couldn’t do it,” said Rogers, who added that the AEC is currently conducting further checks.

While instances of AI-generated misinformation relating to the upcoming election are likely to be reported to the AEC before anywhere else, Rogers said the commission will not be responsible for policing that misinformation.

“I acknowledge it will come to us because, at election time, anything that’s got the word ‘election’ in it eventually comes to the AEC,” he said.

“But for a lot of that, we would refer that off to others, potentially the Australian Federal Police. People that are affected by that content may have other remedies, such as civil or injunctive relief.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
