“IMO, an AGI race is a very risky gamble, with huge downside,” wrote OpenAI safety researcher Steven Adler as he announced his exit from the AI startup after four years of service. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.”
The safety concerns raised in Adler’s departure message echo a claim by Roman Yampolskiy, AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, that there is a 99.999999% probability of AI ending humanity.
“Some personal news: After four years working on safety across @openai, I left in mid-November. It was a wild ride with lots of chapters – dangerous capability evals, agent safety/control, AGI and online identity, etc. – and I’ll miss many parts of it.” — January 27, 2025
According to the researcher’s p(doom) estimate, the only way to avert inevitable doom is not to build AI in the first place. That option may already be off the table, given OpenAI and SoftBank’s $500 billion Stargate bet on building data centers across the United States to power advanced AI development.
Adler isn’t the first employee to depart from the ChatGPT maker over safety concerns. Last year, Jan Leike, OpenAI’s head of alignment and superalignment co-lead, announced his departure. The former alignment lead disclosed that he’d disagreed with OpenAI’s leadership over core priorities for next-gen AI models, security, monitoring, and more.
Perhaps more concerning, Leike indicated that safety processes had taken a back seat as shiny products like AGI took precedence. Additionally, OpenAI reportedly rushed GPT-4o’s launch, giving its safety team less than a week to run tests. Sources close to the matter indicated that OpenAI sent invitations to the product’s launch celebration party before the safety team had finished testing. While an OpenAI spokesperson admitted that GPT-4o’s launch was rushed and stressful for the safety team, he claimed the company didn’t cut any safety corners to meet the tight deadline.
Anthropic CEO says AI could extend the human lifespan to 150 years by 2037
While speaking at the World Economic Forum in Davos, Switzerland, Anthropic CEO Dario Amodei claimed generative AI could double human lifespans within five to ten years (via tsarnick on X):
“It is my guess that by 2026 or 2027, we will have AI systems that are broadly better than almost all humans at almost all things. I see a lot of positive potential.”
“Anthropic CEO Dario Amodei says he would bet even odds that by 2037, human lifespan will have been extended to 150 years by AI” pic.twitter.com/9AN3zRBxuq — January 27, 2025
The executive highlighted major potential improvements across tech, the military, and health. He believes AI could extend human lifespans, despite warnings that the technology poses an existential threat.
According to Anthropic CEO Dario Amodei:
“If I had to guess, and you know, this is not a very exact science, my guess is that we can make 100 years of progress in areas like biology in five or ten years if we really get this AI stuff right. If you think about, you know, what we might expect humans to accomplish in an area like biology in 100 years, I think a doubling of the human lifespan is not at all crazy. And then if AI is able to accelerate that, we may be able to get that in five to ten years. So that’s kind of the grand vision. At Anthropic, we are thinking about, you know, what’s the first step towards that vision, right? If we’re two or three years away from the enabling technologies for that.”
Amodei admits that “this is not a very exact science,” meaning AI’s progression might take a different route, especially amid claims that scaling laws have begun to plateau due to a lack of high-quality content for training AI models. Interestingly, OpenAI CEO Sam Altman claims AI will be smart enough to solve the problems caused by rapid advances in the landscape, including the potential destruction of humanity.
OpenAI is racing toward AGI, especially after Altman confirmed that his team knows how to build AGI and that it could be achieved sooner than anticipated on current hardware. He also hinted that the company could be shifting its focus to superintelligence, which could trigger a tenfold surge in scientific breakthroughs, making each year as revolutionary as a decade.