OpenAI scientists wanted a doomsday bunker to hide from AGI



Many users remain reluctant to hop onto the AI bandwagon, keeping it at arm’s length over privacy, security, and existential concerns. Roman Yampolskiy, AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has warned that there’s a 99.999999% probability — his “p(doom)” — that AI will end humanity.

And if recent reports are anything to go by, major AI labs may be on the precipice of hitting the coveted AGI (artificial general intelligence) benchmark. More specifically, OpenAI and Anthropic predict that AGI could be achieved within this decade.


