
Iranian ChatGPT accounts banned for generating US election disinformation

OpenAI has revealed that it has banned a swarm of Iranian ChatGPT accounts for creating false and misleading content relating to the US elections.

Daniel Croft
Mon, 19 Aug 2024

In a post uploaded to OpenAI’s site yesterday (18 August), the company said it detected accounts that were “generating content for a covert Iranian influence operation identified as Storm-2035”.

“We have banned these accounts from using our services, and we continue to monitor for any further attempts to violate our policies.

“The operation used ChatGPT to generate content focused on a number of topics – including commentary on candidates on both sides in the US presidential election – which it then shared via social media accounts and websites,” it said.


Storm-2035, alongside six other Iranian threat groups, was identified in a Microsoft report earlier this month as conducting “cyber-enabled influence operations” relating to the US election.

The Microsoft report, which was released on 9 August, specified that Storm-2035 was making use of AI to generate disinformation and misinformation, which would then be spread on social media.

The group reportedly established four websites that posed as legitimate news outlets, all of which have been active since at least 2020.

OpenAI also identified that the group was using ChatGPT to create two main types of content: short social media comments and long-form articles.

“The first workstream produced articles on US politics and global events, published on five websites that posed as both progressive and conservative news outlets,” said OpenAI.

“The second workstream created short comments in English and Spanish, which were posted on social media.”

Alongside spreading misinformation relating to the US presidential election, OpenAI said the threat actors also generated content relating to the conflict in Gaza and Israel’s presence at the recently concluded Paris 2024 Olympics.

Some content, though far less common, also related to Scottish independence and “politics in Venezuela, the rights of Latinx communities in the US (both in Spanish and English)”.

OpenAI has said that the campaign failed to achieve “meaningful audience engagement”, with posts generating few, if any, likes, shares or comments.

“We similarly did not find indications of the web articles being shared across social media,” it said.

Despite this, OpenAI said it takes incidents like this incredibly seriously.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
