AI safety researcher Roman Yampolskiy has put the probability of AI ending humanity at 99.999999%, arguing that the only way to avoid that outcome is not to build advanced AI in the first place.
Interestingly, even as generative AI grows more capable, Anthropic's CEO, Dario Amodei, has admitted that the company doesn't fully understand how its own AI models work.
In an essay recently published on his website, Amodei wrote:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”
He describes how this lack of understanding poses a significant risk that could lead to harmful outcomes. To prevent those scenarios, Amodei recommends that AI labs lean harder into interpretability research before AI advances to a level that might be impossible for humans to control.
These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.
Dario Amodei, Anthropic CEO
However, Anthropic's CEO admits that considerable work remains before interpretability reaches the level needed to maintain control over these rapidly evolving systems.
A recent report suggested that OpenAI is cutting corners in its AI development, significantly reducing the time allocated to safety testing. The report further claimed that the ChatGPT maker is taking this risky approach to maintain its lead over rivals.
It isn't the first time the ChatGPT maker has been put on the spot over safety concerns. A significant portion of the company's founding team has departed, citing safety issues, with some saying that "shiny" products like AGI had taken precedence over safety culture and processes.
Every major tech corporation is seemingly racing for a share of the generative AI boom, with Microsoft alone committing $80 billion to AI investments. Even so, investors remain concerned about the technology, especially given its enormous capital demands and the lack of a clear path to profitability.
Over the past few months, people have raised privacy and security concerns as the technology becomes more prevalent and gains broad adoption, and, perhaps more importantly, questions about its long-term impact on humanity.
Tech executives, including Microsoft co-founder Bill Gates, predict that AI will replace humans in most tasks, but what stands out most (at least for me) is the technology's potential threat to humanity.