What you need to know
- An AI researcher puts the probability that AI will end humanity — his p(doom) — at 99.9%.
- The researcher says a perpetual safety machine might help prevent AI from spiraling out of control and ending humanity.
- An OpenAI insider says the company is excited about AGI and is recklessly racing to get there first, prioritizing shiny products over safety processes.
We’re on the verge of the most significant technological breakthrough yet with AI, though several impediments could keep us from scaling such heights. For instance, OpenAI reportedly parts with $700,000 every day to keep its ChatGPT operations running. That’s on top of the ridiculous amounts of electricity required to power AI advances and the water needed for cooling.
Privacy and security concerns around the technology have also left a vast majority of users uneasy. Microsoft’s controversial AI-powered Windows Recall feature, for instance, was branded a “privacy nightmare” and a “hacker’s paradise” by concerned users before it was pulled back. And while it’s impossible to tell exactly where the cutting-edge technology is headed, NVIDIA CEO Jensen Huang says the next wave of AI will include self-driving cars and humanoid robots.
However, guardrails and regulatory measures to keep the technology from spiraling out of control remain slim at best. OpenAI CEO Sam Altman admits “there’s no big red button” in place to stop the progression of AI. Meanwhile, predictions that AI will become more intelligent than humans, take over our jobs, and eventually end humanity continue to pile up.
An AI researcher says there’s a 99.9% probability AI will end humanity, though a seemingly more optimistic Elon Musk dwindles it down to a 20% chance and says the technology should be explored anyway.
Why is the probability of AI ending humanity so high?
I’ve been following AI trends for a hot minute. And while the technology has scaled great heights and delivered breakthroughs across significant sectors, one thing is apparent: the bad outweighs the good.
AI researcher Roman Yampolskiy appeared on Lex Fridman’s podcast for a wide-ranging interview about the risk AI poses to humanity. Yampolskiy says there’s a very high chance AI will end humanity unless humans manage to build highly complex software with zero bugs over the next century. He’s skeptical that’s possible, since every model so far has been exploited and tricked into breaking character and doing things it wasn’t supposed to do:
“They already have made mistakes. We had accidents, they’ve been jailbroken. I don’t think there is a single large language model today, which no one was successful at making do something developers didn’t intend it to do.”
The AI researcher recommends developing a perpetual safety machine to keep AI from ending humanity or seizing control. Yampolskiy notes that even if next-gen AI models pass every safety check, the technology keeps evolving, becoming more intelligent and better at handling complex tasks and situations.
OpenAI insider says AI will lead to inevitable doom
In a separate report, former OpenAI governance researcher Daniel Kokotajlo echoes Yampolskiy’s sentiments. Kokotajlo claims there’s a 70% chance AI will end humanity (via Futurism). It’s clear that every major player and stakeholder in the AI landscape has a different p(doom) value. For context, p(doom) is shorthand for someone’s estimated probability that AI will lead to the end of humanity.
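To make the term concrete, here’s a minimal, purely illustrative sketch in Python (the variable names are my own) that treats p(doom) as a subjective probability between 0 and 1, using only the figures quoted in this article:

```python
# p(doom) is a subjective probability in [0, 1]: each person's own
# estimate that AI ultimately ends humanity. The figures below are
# the ones quoted in this article.
p_doom = {
    "Roman Yampolskiy (AI researcher)": 0.999,
    "Daniel Kokotajlo (ex-OpenAI)": 0.70,
    "Elon Musk": 0.20,
}

# Print the estimates from most to least pessimistic.
for person, p in sorted(p_doom.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{person}: p(doom) = {p:.1%}")
```

The point of the snippet is simply that p(doom) is an opinion expressed as a number, not the output of any agreed-upon formula.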
“OpenAI is really excited about building AGI, and they are recklessly racing to be the first there,” said Kokotajlo. Amid a wave of executive departures at OpenAI, superalignment lead Jan Leike likewise indicated that the ChatGPT maker prioritizes shiny products over safety culture and processes.
“The world isn’t ready, and we aren’t ready,” wrote Kokotajlo in an email seen by The New York Times. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”