Vitalik Buterin suggests a “global soft pause button” on industrial-scale hardware to prevent catastrophic harm caused by AI



Despite reports claiming that major AI labs, including OpenAI, Anthropic, and Google, are struggling to develop more advanced systems as scaling gains plateau amid a shortage of high-quality training data, generative AI continues to scale new heights. OpenAI CEO Sam Altman recently indicated that AGI (artificial general intelligence) might be achieved sooner than anticipated, and that superintelligence is only “a few thousand days away.”

Aside from privacy and security concerns around AI, many users have expressed reservations that the technology could potentially lead to existential doom. According to Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, there’s a 99.999999% probability that AI will end humanity. The researcher claimed the only way to avoid that outcome is not to build AI in the first place.
