
OpenAI to shift from AGI to ‘superintelligence’, according to Altman

Only a month after ChatGPT had its second birthday, OpenAI CEO Sam Altman has set the company’s sights beyond artificial general intelligence (AGI) to focus on superintelligence.

Daniel Croft
Tue, 07 Jan 2025

In a blog post, Altman said that OpenAI is certain of the development of AGI and expects the first AI agents to join the workforce this year. As a result, the company is now looking further than AGI alone.

“We are beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word,” said Altman.

“We love our current products, but we are here for the glorious future.”


Superintelligence refers to artificial intelligence that exceeds the capacity of the human brain. A superintelligent model could have effectively unlimited memory, operate faster than a human, and display superior reasoning and a greater capacity for knowledge: an artificial brain smarter than the smartest human.

“With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” said Altman.

“We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximising broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.”

As Altman acknowledges in his blog post, the focus on superintelligence and superhuman AI landed the CEO in hot water in late 2023, when the OpenAI board fired him after hearing the company was working on potentially dangerous artificial intelligence.

The AI in question is called Q* (pronounced "Q Star"), reportedly an OpenAI project pursuing artificial general intelligence (AGI), or superintelligence, which the company has described as AI smarter than human beings.

Q* had reportedly been making serious progress, with the model performing tasks that could revolutionise AI.

Current AI models, like OpenAI’s GPT-4, are excellent at writing because they predict the most likely next words, but as a result, their answers vary and are not always correct.
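The point about prediction can be illustrated with a toy sketch (this is an illustration of weighted next-word sampling in general, not OpenAI's actual model or data; the words and probabilities below are made up):

```python
import random

# Hypothetical next-word distribution a model might assign when asked
# "What is the capital of France?" -- only "Paris" is actually correct.
next_word_probs = {"Paris": 0.6, "Lyon": 0.25, "Rome": 0.15}

def sample_answer(probs, rng):
    """Pick one word at random, weighted by the model's probabilities."""
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the demo is reproducible
answers = {sample_answer(next_word_probs, rng) for _ in range(100)}

# Repeated runs of the same prompt yield several different answers,
# even though only one of them is correct.
print(answers)
```

Because each word is drawn from a probability distribution rather than computed exactly, the same question can produce different, and sometimes wrong, answers; a maths problem, by contrast, has a single verifiable result.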

Researchers believe that an AI tool capable of solving and properly understanding mathematical problems, where there is only one correct answer, would be a major breakthrough in the development of superintelligence.

This is exactly what Q* reportedly began to do. While it was only solving equations at a school level and was extremely resource-heavy, researchers considered it a significant step.

It’s worth noting that this is not like a calculator that can solve certain equations as you enter them. ChatGPT can already do that, thanks to the massive library of data it has access to. A superintelligent model like Q* would instead learn and properly understand the underlying process.

The reason the OpenAI board was displeased with Altman is that news of these developments had been kept quiet.

“A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong,” Altman said in his latest blog post.

“The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.

“I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility. I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
