ChatGPT creator OpenAI has announced measures to stop its artificial intelligence (AI) tools being used for disinformation and political manipulation ahead of the elections taking place this year.
Concerns over AI’s ability to spread misinformation and disinformation have been a major topic of discussion among regulatory bodies and developers, with OpenAI chief executive Sam Altman previously stating that this was one of his main concerns with the technology.
In its 2024 Global Risks Report, the World Economic Forum warned that AI could disrupt politics and elections through misinformation.
Additionally, a number of major elections are taking place throughout the year, including in the US, the UK, the EU and India. Together, these countries and regions represent half the world’s population and 60 per cent of global gross domestic product (GDP).
To combat the potential misuse of ChatGPT and its other tools, OpenAI has said it will launch measures to prevent them from being used to spread misinformation.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” said OpenAI in a blog post this week.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” it added.
“We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.”
OpenAI has said it is adopting a number of other measures, the first of which aims to anticipate and prevent abuse such as misleading deepfakes, chatbots impersonating candidates or political representatives, and content designed to discourage or prevent people from participating in the democratic process.
As part of this, it does not currently allow users to create their own GPTs for the purposes of lobbying or political campaigning.
Additionally, OpenAI has said its DALL-E image generation AI will not be usable for political campaigns.
It has also said it will introduce greater transparency around AI-generated content, so that people do not mistake AI-generated information or imagery for real, authentic material.
Finally, through its work with the National Association of Secretaries of State (NASS), ChatGPT will direct users to CanIVote.org when asked procedural questions about the US elections, such as where to vote.
Altman has previously said one of his greatest concerns with AI is its ability to influence elections and the democratic process.
“My areas of greatest concern [are] the more general abilities for these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation,” he said during a meeting with Congress last year.
“Given that we’re going to face an election next year, and these models are getting better, I think this is a significant area of concern, [and] I think there are a lot of policies that companies can voluntarily adopt.”