Powered by MOMENTUM MEDIA
Spain to fine those who do not label AI-generated content

A new bill introduced by the Spanish government will see companies that post AI content without labelling it as such face massive fines as the country attempts to reduce the impact of AI deepfakes and misinformation.

Daniel Croft
Wed, 12 Mar 2025

According to Spain’s Digital Transformation Minister Oscar Lopez, the new bill is based on the EU’s AI Act, which mandates transparency for high-risk AI systems.

The bill, which is yet to pass the lower house, classifies a breach of the proposed legislation, such as failing to appropriately or sufficiently label AI-generated content, as a “serious offence”.

The bill will also outlaw the use of subliminal influence, such as through sound and visuals, citing cases in which chatbots influence those with gambling addictions or encourage children to do dangerous challenges.

Additionally, organisations will no longer be allowed to use AI to classify people using their biometric data, such as to judge their likelihood of committing crimes or to grant them benefits. Authorities will be exempt from this rule and still be able to use AI for real-time biometric surveillance in the interest of national security.

Companies that breach this rule may face fines of up to €35 million or 7 per cent of their global annual turnover.

“AI is a very powerful tool that can be used to improve our lives ... or to spread misinformation and attack democracy,” said Lopez.

If passed, the new legislation will be regulated by the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which is part of the nation’s Department of Digital Transformation.

The only exception to AESIA’s enforcement of the new legislation is in specific instances where crime, credit ratings, insurance, elections, data privacy or capital market systems are involved, in which case, the relevant government bodies will take responsibility.

AI technology has intensified the issues of misinformation and deepfakes as they become easier to generate.

In the lead-up to the 2024 US election, X owner, Tesla CEO, and now right-hand man of US President Donald Trump, Elon Musk, joined X users in posting misinformation on the platform using his Grok AI.

In a post on his X (formerly Twitter) account, Musk shared an image of former vice president Kamala Harris dressed in a red suit adorned with the communist hammer and sickle emblem, in response to one of her campaign ads that said: “Donald Trump vows to be a dictator on day one.”

Alongside the image, Musk wrote: “Kamala vows to be a communist dictator on day one. Can you believe she wears that outfit.”

Musk did nothing to indicate that the image was AI-generated, presenting it as though Harris were actually wearing the outfit. The image, however, is very obviously AI-generated and barely resembles Harris. Additionally, the claim that she vowed to be a communist dictator has no real source and is thus misinformation.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.