Despite reports claiming that major AI labs, including OpenAI, Anthropic, and Google, are struggling to develop more advanced AI systems because scaling laws are running up against a shortage of high-quality training data, generative AI continues to scale new heights. OpenAI CEO Sam Altman recently indicated that AGI (artificial general intelligence) might be achieved sooner than anticipated, and that superintelligence is only “a few thousand days away.”
Beyond privacy and security concerns around AI, many users have expressed reservations about the technology’s potential to cause existential catastrophe. Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, puts the probability that AI will end humanity at 99.999999%. He has claimed the only way to avoid that outcome is not to build AI in the first place.
While there’s a critical need for guardrails and regulation to keep AI from spiraling out of control, Ethereum co-founder Vitalik Buterin proposes a “global soft pause button” on the world’s compute hardware to prevent the technology from overtaking humanity.
According to Buterin:
“The goal would be to have the capability to reduce worldwide available compute by ~90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare. The value of 1-2 years should not be overstated: a year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency. Ways to implement a ‘pause’ have been explored, including concrete proposals like requiring registration and verifying location of hardware.”
The Canadian computer programmer says clever cryptographic trickery could serve as a more advanced way to address AI risks. He proposes that industrial-scale AI hardware be fitted with a trusted chip that would keep running only if it receives three signatures each week from major international bodies, including at least one non-military body.
“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices,” Buterin added. He indicated that the need to fetch a fresh signature online every week would also discourage extending the scheme to consumer hardware.
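To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how such a weekly-signature gate could work. The body names, the quorum logic, and the choice of Ed25519 are assumptions for illustration, not part of Buterin’s proposal, and a real design would also need the zero-knowledge and blockchain components he mentions.

```python
# Hypothetical sketch of a weekly-signature "soft pause" gate (illustrative names).
# A trusted chip lets industrial AI hardware keep running only if it sees valid
# signatures from 3 international bodies (at least one non-military) over the
# current week number. Signing the week number, not a device ID, makes the
# authorization device-independent: one signature set unlocks every device.

import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

SECONDS_PER_WEEK = 7 * 24 * 3600

def current_epoch() -> bytes:
    """The message every body signs: the global week number."""
    return str(int(time.time()) // SECONDS_PER_WEEK).encode()

class TrustedChip:
    """Illustrative model of the chip fitted to industrial AI hardware."""

    def __init__(self, signer_keys: dict[str, Ed25519PublicKey],
                 nonmilitary: set[str], required: int = 3):
        self.signer_keys = signer_keys   # body name -> public key
        self.nonmilitary = nonmilitary   # names of non-military bodies
        self.required = required

    def may_run(self, signatures: dict[str, bytes]) -> bool:
        epoch = current_epoch()
        valid = set()
        for body, sig in signatures.items():
            key = self.signer_keys.get(body)
            if key is None:
                continue
            try:
                key.verify(sig, epoch)   # raises on a bad signature
                valid.add(body)
            except InvalidSignature:
                pass
        # Need the full quorum, including at least one non-military signer.
        return len(valid) >= self.required and bool(valid & self.nonmilitary)

# --- usage sketch ---
bodies = {name: Ed25519PrivateKey.generate()
          for name in ("body_a", "body_b", "civilian_body")}
chip = TrustedChip({n: k.public_key() for n, k in bodies.items()},
                   nonmilitary={"civilian_body"})
weekly_sigs = {n: k.sign(current_epoch()) for n, k in bodies.items()}
assert chip.may_run(weekly_sigs)   # all three signed this week's epoch
```

Note the design choice: because each body signs the shared week number rather than any device identifier, a given week’s signatures either authorize every chip or none of them, which is the all-or-nothing property Buterin describes.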
OpenAI CEO Sam Altman has claimed AI will become smart enough to solve the problems created by its own rapid advance, including threats to humanity’s survival. Interestingly, he has also claimed the safety concerns won’t materialize at the coveted AGI moment, which he expects to whoosh by with “surprisingly little” societal impact. Still, the executive says AI should be regulated like airplanes, with an international agency ensuring these advances are safety-tested.
Buterin touts the approach for several reasons: it could slow the transition at the first signs of catastrophic damage, while having a negligible impact on developers in the meantime.