Many users have expressed reluctance to hop onto the AI bandwagon, keeping it at arm’s length over privacy, security, and existential concerns. Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has warned that there’s a 99.999999% probability AI will end humanity, a p(doom) estimate that amounts to near-certain catastrophe.
And if recent reports are anything to go by, major AI labs may be on the cusp of hitting the coveted AGI (artificial general intelligence) benchmark. More specifically, OpenAI and Anthropic predict that AGI could be achieved within this decade.
Despite the potential threat AGI could pose to humanity, OpenAI CEO Sam Altman claims the danger won’t manifest at the AGI moment itself. Instead, he expects the milestone to whoosh by with surprisingly little societal impact.
However, former OpenAI chief scientist Ilya Sutskever expressed concern about AI surpassing human cognitive capabilities.
As a precaution, the executive recommended building “a doomsday bunker” where researchers at the firm could take cover in the event of an unprecedented rapture following the release of AGI (via The Atlantic).
During a meeting with key scientists at OpenAI in the summer of 2023, Sutskever said:
“We’re definitely going to build a bunker before we release AGI.”
Sutskever’s comment about the bunker was first reported in Karen Hao’s upcoming book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Interestingly, this wasn’t the only time the AI scientist referenced the safety bunker.
The executive often brought up the bunker during OpenAI’s internal discussions and meetings. According to one researcher, multiple people shared Sutskever’s fears about AGI and the havoc it could wreak on humanity.
While Sutskever, now founder of Safe Superintelligence Inc., declined to comment on the matter, his remarks raise real concern, especially since he was intimately involved in the development of ChatGPT and other flagship AI products. Are we truly ready for a world with AI systems more powerful and smarter than humans?
This news comes after DeepMind CEO Demis Hassabis indicated that Google could be on the verge of achieving AGI following the release of new updates to its Gemini models. He raised concerns that society isn’t ready, admitting that the prospect of AGI keeps him awake at night.
Elsewhere, Anthropic CEO Dario Amodei admitted that the company doesn’t fully understand how its own models work. The executive added that society should be concerned about this lack of understanding and the threats it could pose.