OpenAI safety researcher: AGI is a gamble with a huge downside


“IMO, an AGI race is a very risky gamble, with huge downside,” wrote OpenAI safety researcher Steven Adler as he announced his departure from the AI startup after four years. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.”

The safety concerns raised in Adler’s departure message echo those of Roman Yampolskiy, AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, who has claimed a 99.999999% probability of AI ending humanity.

Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?
Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.

Ex-OpenAI researcher Steven Adler
