RMIT expert Dr Shahriar Kaisar says Australia needs to follow in the footsteps of the US and develop guidelines for the use of deepfake technology.
Deepfake technology poses a unique and critical challenge to digital authenticity and the fight against fraud.
The technology can be used to impersonate individuals and commit financial fraud, or, worse, to influence political processes such as elections.
“In an era dominated by digital content, the rise of deepfake technology presents an alarming threat to the authenticity of information on the internet,” says Dr Shahriar Kaisar, lecturer in information systems at RMIT.
“Deepfakes include, but are not limited to, fabricated speeches and manipulated visuals of public figures that can seriously impact people, businesses and even countries.”
Additionally, advances in artificial intelligence are making deepfakes rapidly easier to produce, lowering the barrier to entry for scammers and others looking to commit malicious acts.
As a result, experts such as Kaisar are calling for Australia to join other parts of the globe in developing regulations for the use of deepfake technology.
“As the technology behind deepfakes becomes more accessible, concerns mount regarding their ability to cause issues,” Kaisar continued.
“From issues such as scamming people and spreading misinformation, to influencing elections, undermining trust in media or even the potential to start a war.
“US policymakers are working on forming regulations around the use of deepfakes, but there are no moves in Australia to introduce specific legislation to address the misuse of deepfakes.
“Regulations and awareness campaigns are crucial as many people are still unaware of the technology and could be the next victim of a scam.
“Only through collective vigilance can we unveil the deceptive realities of deepfakes and safeguard our digital world.”
Deepfakes are becoming increasingly realistic and are being used in a variety of ways. The technology is an ideal tool for fraudsters, as well as for those seeking to influence political outcomes.
However, that realism is also being exploited to create non-consensual pornographic content, with the faces of celebrities and other high-profile people superimposed onto explicit material.
According to a 2019 study by Sensity AI, 96 per cent of all deepfake videos are pornographic.
Kaisar added that “although it is becoming increasingly difficult to detect deepfakes, there are a few signs you can look out for to determine a video’s authenticity, such as: