Leaders of the AI industry are warning that the development of artificial intelligence could lead to human extinction.
In a statement released by the San Francisco-based Center for AI Safety (CAIS), experts said that minimising the risk of AI wiping out the human race should be treated as a matter of the utmost importance worldwide.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said CAIS in the one-sentence release.
We just put out a statement:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://t.co/N9f6hs4bpa (1/6)

— Dan Hendrycks (@DanHendrycks) May 30, 2023
CAIS outlined a number of ways in which the human race could be wiped out at the hands of AI, such as widespread misinformation derailing society, AI tools being weaponised to create weapons of mass destruction, and humanity becoming completely dependent on machines.
AI is far from capable of annihilating the human race, but it still presents risks at its current level of sophistication, according to Princeton University computer scientist Arvind Narayanan.
“Current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI,” Narayanan said, speaking with the BBC.
“Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable.”
“[This could] drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide.”
Similarly, Professor Geoff Webb of the Department of Data Science and AI in Monash University's Faculty of Information Technology said concerns about mass extinction had been blown out of proportion.
“Extraordinary recent advances in AI have led to alarming predictions of existential threats to humanity,” he said.
“While there are many risks associated with the new technologies, there are also many benefits.
“We need to do our utmost to control the risks while ensuring Australia shares in the many benefits. There are good reasons to be concerned, but predictions of the ‘end of humanity’ are overblown.”
Despite Webb's scepticism, support for the CAIS statement has flooded in from hundreds of scientists and industry leaders, including Sam Altman, chief executive of ChatGPT creator OpenAI, and Demis Hassabis, CEO of Google DeepMind.
The statement comes just a week after Altman appeared before US Congress to discuss the implications of AI development, urging lawmakers to regulate AI.
“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” Altman told US Congress.
“We try to be very clear about what the downside case is and the work that we have to do to mitigate that.”
Despite these concerns, Altman maintained that the benefits of AI currently outweigh the risks and that, with proper regulation, AI could transform humanity for the better.
“OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives but also that it creates serious risks that we have to work together to manage,” he said.