
Australia needs to adopt deepfake use regulations, says RMIT expert

RMIT expert Dr Shahriar Kaisar says Australia needs to follow in the footsteps of the US and develop guidelines for the use of deepfake technology.

Daniel Croft
Fri, 22 Mar 2024

Deepfake technology presents a unique but critical problem when it comes to digital authenticity and combating fraud.

The technology can be used to mimic individuals and commit financial fraud or, even worse, to influence political decisions such as elections.

“In an era dominated by digital content, the rise of deepfake technology presents an alarming threat to the authenticity of information on the internet,” says Dr Shahriar Kaisar, lecturer of information systems at RMIT.


“Deepfakes include, but are not limited to, fabricated speeches and manipulated visuals of public figures that can seriously impact people, businesses and even countries.”

Additionally, technologies like artificial intelligence are making deepfakes rapidly easier to produce, lowering the barrier to entry for scammers and other individuals looking to commit malicious acts.

As a result, experts like Kaisar are calling for Australia to join other parts of the globe in developing regulations for the use of deepfake technology.

“As the technology behind deepfakes becomes more accessible, concerns mount regarding their ability to cause issues,” Kaisar continued.

“From issues such as scamming people and spreading misinformation, to influencing elections, undermining trust in media or even the potential to start a war.

“US policymakers are working on forming regulations around the use of deepfakes, but there are no moves in Australia to introduce specific legislation to address the misuse of deepfakes.

“Regulations and awareness campaigns are crucial as many people are still unaware of the technology and could be the next victim of a scam.

“Only through collective vigilance can we unveil the deceptive realities of deepfakes and safeguard our digital world.”

Deepfakes are becoming increasingly realistic and are being used in a variety of ways. As mentioned above, the technology is an ideal tool for fraudsters, as well as for those looking to influence political decisions.

However, the technology’s realism is also being exploited to create pornographic material, with the faces of celebrities and other high-profile people superimposed onto explicit content.

According to a 2019 study by Sensity AI, 96 per cent of all deepfake videos are pornographic.

Kaisar added that “although it is becoming increasingly difficult to detect deepfakes, there are a few signs you can look out for to determine a video’s authenticity”, such as:

  • Unnatural facial expressions
  • Odd eye movements or poorly synced lip movements
  • Inconsistent lighting and shadows
  • Unusual blinking behaviours
  • Unnatural background sounds
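To make the blinking cue above a little more concrete, the sketch below shows a crude, illustrative heuristic for flagging videos with unusually low blink rates using OpenCV’s stock Haar cascades. This is not a detection method described by Kaisar or RMIT; it is a minimal assumption-laden example (the input file name “sample.mp4” is hypothetical), and real deepfake detection is considerably more sophisticated.

```python
# Illustrative sketch only: a crude blink-rate heuristic that flags videos
# which may warrant closer inspection. Assumes OpenCV is installed and a
# local file "sample.mp4" exists (hypothetical name).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("sample.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0

frames_with_face = 0
frames_eyes_closed = 0  # frames with a face but no detectable open eyes

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue
    frames_with_face += 1
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) == 0:
        frames_eyes_closed += 1

cap.release()

if frames_with_face:
    # People typically blink roughly 15-20 times per minute; an unusually low
    # share of eyes-closed frames is one weak signal worth a second look.
    closed_ratio = frames_eyes_closed / frames_with_face
    minutes = frames_with_face / fps / 60
    print(f"Eyes-closed frame ratio: {closed_ratio:.3f} "
          f"over ~{minutes:.1f} min of face footage")
```

In practice, a heuristic like this would only be one input among many; combining several of the visual and audio cues listed above gives a far more reliable picture than any single check.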

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
