Facebook and Instagram’s parent company, Meta, has announced a new policy that will require advertisers running political content to disclose whether it was created or altered using AI.
The policy, which comes into effect next year, will see the social media giant add labels to posts when an advertiser discloses that they were created or altered using artificial intelligence (AI).
“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI,” said Meta on its Facebook blog.
“This policy will go into effect in the new year and will be required globally.”
While advertisers won’t need to disclose alterations that are inconsequential to the claim an ad makes, such as resizing or sharpening an image, Meta has laid out the scenarios in which disclosure is required.
“Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to:

- Depict a real person as saying or doing something they did not say or do;
- Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
- Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
Taking to Meta’s Twitter clone Threads, former UK deputy prime minister and Meta’s president of global affairs Nick Clegg said the new policy is part of Meta’s push to prevent disinformation in the political space.
“This builds on Meta’s industry-leading transparency measures for political ads,” he said.
“These advertisers are required to complete an authorisation process and include a ‘Paid for by’ disclaimer on their ads, which are then stored in our public Ad Library for seven years.”
The move comes only days after Meta announced it would ban political advertisers and campaigns from using its own generative AI advertising products.
“As we continue to test new generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for housing, employment or credit or social issues, elections, or politics, or related to health, pharmaceuticals or financial services aren’t currently permitted to use these generative AI features,” the company wrote.
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries.”
The use of generative AI tools such as ChatGPT, DALL-E, and Elon Musk’s new AI chatbot Grok has raised major concerns about the generation of false information and its potential to influence major global events such as elections and political campaigns.
OpenAI chief executive Sam Altman testified before the US Congress earlier this year about the risks of unregulated AI development, saying his chief concern was its potential use to spread disinformation.
“My areas of greatest concern [are] the more general abilities for these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation,” he said.
“Given that we’re going to face an election next year, and these models are getting better, I think this is a significant area of concern, [and] I think there are a lot of policies that companies can voluntarily adopt.”