Cyber Daily

AI development should require a license, says UK's Labour Party

Organisations looking to develop artificial intelligence (AI) should be required to hold a license to do so.

That is the view of the UK Labour Party’s digital spokesperson, Lucy Powell, who has entered the debate over how best to regulate AI development and prevent the technology from being used maliciously or having dangerous consequences.

“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled,” Powell told The Guardian.

“[The] kind of model we should be thinking about [is one] where you have to have a license in order to build these models,” she said. “These seem to me to be the good examples of how this can be done.”

While Chancellor Jeremy Hunt said that he wanted the UK to “win the race” of AI development, UK Prime Minister Rishi Sunak has expressed his concerns over its rapid advancement, saying that it presented an “existential threat”.

Furthermore, only months after the UK government published its white paper on AI regulation, industry experts are already calling it outdated, further exemplifying the dramatic rate at which the technology is evolving.

The UK government’s most recent moves to introduce regulations for AI development come just weeks after Sam Altman, chief executive of ChatGPT creator OpenAI, testified before the US Congress calling for government regulation of AI, saying it was necessary for government bodies to step in to curb the dangers the technology creates.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

“For a very new technology, we need a new framework.”

Altman said that the ability of AI to manipulate elections was just one example of the dangers that could arise.

Following the presentation, experts from the Centre for AI Safety (CAIS) in San Francisco said that AI has the potential to lead to human extinction if not regulated.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” CAIS said in a one-sentence statement.

Daniel Croft
Tue, 06 Jun 2023

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
