What you need to know
- Elon Musk says AI has the potential to take over or even end humanity, placing the probability of this happening between 10 and 20 percent.
- Despite the looming danger, Musk says advances in the AI landscape should still be explored.
- An AI safety researcher says the probability of AI ending humanity is far higher than Musk's estimate, stating that catastrophe is almost certain and that the only way to prevent it is not to build the technology in the first place.
- Other researchers and executives echo similar sentiments based on their p(doom) estimates.
Generative AI can be viewed as either a beneficial or a harmful tool. Admittedly, we've seen impressive feats across medicine, computing, education, and more fueled by AI. But on the flip side, critical and concerning issues have been raised about the technology, from Copilot's alter ego, Supremacy AGI, demanding to be worshipped, to AI consuming outrageous amounts of water for cooling, not to mention the power consumption concerns.
Elon Musk has been rather vocal about his views on AI, stirring plenty of controversy around the topic. Recently, the billionaire referred to AI as the "biggest technology revolution," but indicated there won't be enough power to sustain its development by 2025, ultimately hindering further progress in the landscape.
Speaking at the Abundance Summit, Elon Musk indicated that "there's some chance that it will end humanity." And while the billionaire didn't share how he came to this conclusion, he placed that chance at 10 to 20 percent (via Business Insider).
Strangely enough, Musk still thinks potential growth areas and advances in the AI landscape should be explored, saying, "I think that the probable positive scenario outweighs the negative scenario."
AI is all doom and gloom according to p(doom)
Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that the probability of AI ending humanity is much higher, referring to Musk's 10 to 20 percent estimate as "too conservative."
The AI safety researcher says the risk is extremely high, referring to it as "p(doom)." For context, p(doom) refers to the probability of AI taking over humanity or, even worse, ending it.
The privacy and security concerns revolving around AI are well known; the chip battle between the US and China is a great reference point. Last year, the US imposed export rules preventing chipmakers like NVIDIA from shipping advanced AI chips to China (including the GeForce RTX 4090).
The US government categorically stated that the move wasn't designed to run down China's economy, but was a safety measure to prevent the use of AI in military advances.
Elon Musk raised similar concerns about OpenAI's GPT-4 model in his lawsuit against the AI startup and its CEO, Sam Altman, arguing that the lack of elaborate measures and guardrails to prevent the technology from spiraling out of control is alarming. Musk says the model constitutes AGI and wants its research, findings, and technological advances made easily accessible to the public.
Most researchers and executives familiar with p(doom) place the risk of AI taking over humanity anywhere from 5 to 50 percent, according to The New York Times. Yampolskiy, on the other hand, says the risk is extremely high, with a 99.999999% probability. The researcher says it's virtually impossible to control AI once superintelligence is attained, and the only way to prevent this outcome is not to build it in the first place.
In a separate interview, Musk said:
“I think we really are on the edge of probably the biggest technology revolution that has ever existed. You know, there’s supposedly a Chinese curse: ‘May you live in interesting times.’ Well, we live in the most interesting of times. For a while, it was making me a bit depressed, frankly. I was like, Well, will they take over? Will we be useless?”
Musk shared these comments while talking about Tesla's Optimus program, adding that humanoid robots are just as good as humans at handling complex tasks. He jokingly indicated that he hopes the robots will be nice to us if and when the evolution starts.