Artificial intelligence-generated content and that created by real people are almost impossible to tell apart, according to a new study.
A global survey of more than 3,000 people has revealed that AI-generated content has become so convincing that most people cannot tell it apart from the real thing.
According to the academics behind the study, while much work has gone into automated techniques to spot AI forgeries, there’s been a lack of focus on how well humans can spot such fakes.
Surveying people in the US, China, and Germany, the researchers found that "state-of-the-art forgeries are almost indistinguishable from 'real' media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated".
The study was presented at the 45th IEEE Symposium on Security and Privacy in San Francisco this week by Dr Lea Schönherr and Professor Dr Thorsten Holz, two of the eight people who worked on the project.
“Artificially generated content can be misused in many ways,” Holz said in an interview with TechXplore.
“We have important elections coming up this year, such as the elections to the EU Parliament or the presidential election in the US. AI-generated media can be used very easily to influence political opinion. I see this as a major threat to our democracy.”
Those polled were shown images, text, and audio, with half the content being real and the other half AI-generated. The images were portraits, the text consisted of news articles, and the audio was taken from books. The study also took into account the media literacy and political leanings of the participants, but despite these differences, most people believed that the AI-generated content was the real thing.
“We were surprised that there are very few factors that can be used to explain whether humans are better at recognising AI-generated media or not. Even across different age groups and factors such as educational background, political attitudes or media literacy, the differences are not very significant,” Holz said.
Schönherr described the study as “a race against time”.
“Media created with newly developed AI generation methods are becoming increasingly difficult to recognise using automatic methods,” Schönherr said. “That’s why it ultimately depends on whether a human can make appropriate assessments.”
Schönherr also believes the study holds some important data points for cyber security research, especially when it comes to combating phishing and social engineering.
“It is conceivable that the next generation of phishing emails will be personalised to me and that the text will match me perfectly,” Schönherr said.
You can find the full study here.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.