
OpenAI developing AI-generated image detector

OpenAI has announced that it is developing a tool to detect images that have been generated using its own AI image generator, DALL-E 3.

Daniel Croft
Thu, 09 May 2024

The artificial intelligence (AI) tool, which generates images from text prompts, has become increasingly popular, with organisations opting to use it rather than pay for human-made images or photography.

Now, to combat the use of undisclosed AI imagery, which could convey potentially misleading messages, OpenAI has announced the launch of a tool to detect whether content was created using DALL-E 3.

“This tool predicts the likelihood that an image was generated by OpenAI’s DALL·E 3,” said OpenAI in a blog post.


The company says the tool is still being tested for efficacy, but early testing has shown roughly 98 per cent accuracy in detecting DALL-E-generated images, while incorrectly flagging less than 0.5 per cent of real images as AI-generated.

The company added that it is also testing how augmentations such as compression and colour changes reduce the tool’s effectiveness, and eventually plans to add “tamper-resistant watermarking” to AI-generated content – a “digital signal” marking content as AI-generated that is difficult to remove.

The move comes as AI-generated images and content are being used increasingly to spread disinformation.

Several websites already run AI-powered false news pages that post fake articles for shock value.

The impact of AI images and deepfakes is also a major concern, with a study conducted by security firm McAfee noting Australians’ growing worry that the technology could influence elections.

According to the research – which polled 7,000 people around the world in early 2024 – 43 per cent of Australians listed election interference as a key concern when it comes to the impact of AI-powered technology.

That’s a 66 per cent increase within a 12-month period.

“Deepfakes can be made by anyone in an afternoon. The tools to create cloned audio and deepfake video are readily available and take only a few hours to master, and it takes just seconds to convince you that it’s all real,” said Tyler McGee, head of APAC at McAfee, in a statement.

“This is raising critical questions about the authenticity of content, especially in a year where so many elections are happening, including here in Australia. Democracy is on the ballot this year thanks to AI.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
