What you need to know
- A recent report indicated OpenAI, Google, and Anthropic are struggling to develop advanced AI models.
- The issue was attributed to a lack of sufficient high-quality data for training, causing the models to fall short of expectations, ultimately forcing Anthropic and OpenAI to postpone the launch of their next-gen AI models.
- OpenAI CEO Sam Altman recently shared a message on X indicating “there is no wall,” potentially refuting the allegations.
Recently, a report emerged indicating OpenAI, Google, and Anthropic are struggling to develop advanced AI models. The struggle was attributed to a lack of high-quality content for AI model training and the cost of chasing the AI hype.
Investors have raised concern over Microsoft’s continued heavy investment in AI despite the dismal profit returns. Anthropic CEO Dario Amodei predicts AI labs could spend $100 million training advanced AI models by the end of the year. He added that this figure could skyrocket to $100 billion in the coming years.
Amodei’s predictions aren’t farfetched. OpenAI CEO Sam Altman recently indicated that superintelligence could be only “a few thousand days away.” However, while vaguely referring to an audacious AI dream (which could also be code for AGI), Altman indicated it would take “$7 trillion and many years to build 36 semiconductor plants and additional data centers” to achieve the feat.
The reported scaling struggles center on AI labs delaying the launch of their flagship advanced AI models, including OpenAI’s Orion model and Anthropic’s long-anticipated Claude 3.5 Opus model. According to people with knowledge of the matter, Anthropic has since softened the wording about the product launch on its website from later this year to coming soon.
The struggle continues for advanced AI model development
Anthropic CEO Dario Amodei admits many factors could prevent AI from advancing, including a lack of sufficient data for training. However, he’s optimistic that there is always a way around such issues, including leveraging synthetic data. For context, synthetic data consists of images and text generated using computers, mimicking human-created content.
It’s reported that OpenAI and Anthropic’s next-gen AI models fell short of expectations, prompting the delayed releases. Granted, the models sport better capabilities and performance than their predecessors, but only by a small margin.
The arms race among AI labs toward the coveted AGI (artificial general intelligence) benchmark is becoming fiercer, and with these issues abounding, there are concerns about whether they can hit the threshold within the stipulated timeline.
Former OpenAI Chief Scientist and Safe Superintelligence Inc. founder Ilya Sutskever admitted that scaling advanced AI models has seemingly hit a ceiling. Interestingly, OpenAI CEO Sam Altman recently shared a cryptic message on X, indicating “there is no wall” (via Business Insider).
While the executive didn’t categorically address the claims of OpenAI reaching a knowledge cap for training AI models, it’s no coincidence that he posted the message shortly after the report started making the rounds on social media.