Waiting For ChatGPT to Improve? You Might Be Waiting a While, Here’s Why


Summary

  • ChatGPT’s release cadence is slowing down, moving towards annual updates.
  • Transformer technology and diminishing returns are causing LLM development to slow.
  • A lack of training data and uncertain profit models challenge the future of AI projects like ChatGPT.


ChatGPT changed the way many people around the world live and work, but those with a keen eye on model cadence have noticed it’s slowed down as of late. What’s going on with LLM development, and are we headed for an AI dark age in 2025 and beyond?



ChatGPT: A Timeline

When OpenAI launched ChatGPT in November 2022, initially powered by its GPT-3.5 model, it took the search and AI industries by storm. Until Meta's Threads launched in 2023, ChatGPT was the fastest-growing application of all time, adding 100 million users to its roster in less than three months.

Since then, the company has moved from roughly a six-month cadence between new models toward annual updates instead. While only about four months separated the launch of ChatGPT and GPT-4, it then took from March 2023 until December 2024 for o1 to arrive.

With o3 lacking a firm launch date, there's no real telling when we might see OpenAI's next big model. Some early testers have already gotten their hands on it, but that doesn't give much of a signal as to when the next evolution in LLMs will reach the public. So what are some of the reasons LLM development has started to slow, and will the tech world's investment pay dividends in the end?


Autobots, Roll Out

Transformers are the fundamental architecture that first transformed (for lack of a better term) the AI industry, starting with the landmark 2017 paper "Attention Is All You Need." Because a transformer's attention mechanism boils down to enormous, highly parallel matrix multiplications, it maps neatly onto GPU compute platforms like Nvidia's CUDA, turning even fairly basic graphics cards built for image rendering into AI-friendly processors.
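To make that concrete, here's a minimal NumPy sketch of scaled dot-product attention, the core transformer operation from that paper. (The function name and toy dimensions are our own, for illustration; production implementations run this same math as batched GPU kernels.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One head of attention: Q, K, V are (seq_len, d_k) matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to every other
    # Row-wise softmax turns raw scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of the value vectors

# Toy example: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Every step above is plain matrix math, which is exactly the workload GPUs were already built to chew through.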

But while the earliest large language models (LLMs), with their smaller context windows, were able to take great advantage of that GPU parallelism, we've lately seen diminishing returns. Like an accelerated version of Moore's Law (admittedly a drastic simplification of the tech in service of brevity), GPUs have started to plateau in AI performance output despite year-on-year increases in transistor density and VRAM.

Even Nvidia’s keynote at CES this year was met with tepid reactions, as it became clear we’ve already hit the “evolutionary” phase of AI hardware, rather than the “revolutionary” leaps some were expecting given the trajectory of the past few years.


Front face of the NVIDIA Project DIGITS on display at CES 2025.
Justin Duino / How-To Geek

We’re not yet as close to pushing GPU-based AI hardware to its theoretical physical limit as we are with some classical CPUs. (Note: this doesn’t include newer 3D-stacked approaches.) However, the major gains we’ve seen over the past five years in GPUs and the transformer architectures they support are starting to slow to a crawl, rather than keeping up the sprint some in the industry were hoping for, à la classical computing between the 1980s and early 2000s.

Scraping the Bottom of the Barrel

Another significant hurdle that many LLM companies are facing right now, including OpenAI with ChatGPT, is a lack of training data. Since every major LLM (Google's Gemini, Anthropic's Claude, and OpenAI's ChatGPT) has already ingested and trained on what could effectively be considered the entirety of public information available on the open web, companies are running into a brick wall of input-to-output returns.


Without much new data left to train the next generation of models on, some developers have turned to what's known as "recursive" training. In these setups, AI output is used to train other AI, but the results have been a mixed bag at best. While simpler concepts and tasks can be taught recursively, pushing results beyond those of models trained on human-generated data runs straight into hallucination. If you thought AIs could hallucinate before, try feeding an AI's output to another AI and see what comes back. In short, a not-insignificant portion of it is made up on the spot.
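For a toy illustration of why that drift happens (a stand-in statistical example, not anyone's actual training pipeline), watch what happens when each "generation" of a simple model is fitted only to samples drawn from the previous generation:

```python
import numpy as np

rng = np.random.default_rng(42)

# The "real" data: 50 samples from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(20):
    # Fit a model (here, just a Gaussian) to the current dataset...
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # ...then draw the next generation's "training set" from the model
    # itself instead of from the real distribution.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Run over enough generations, the fitted spread tends to wander and shrink as tail information is lost each round, a drastically simplified cousin of the "model collapse" effect researchers have documented when LLMs train on LLM output.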

The race for AI and LLM supremacy has seen a staggering pile of money poured into the industry, set to total over $1 trillion in the next few years as forecast by a recent Goldman Sachs analysis. However, even with all that cash on call, the sunk cost of training and maintaining an LLM like ChatGPT is still in search of a profit channel to keep the lights on.


Training, operating, and serving queries to an LLM costs considerably more than a standard Google search. Some estimates suggest a single ChatGPT request can use ten times the compute and power of a Google query, though the real numbers are a well-kept secret at OpenAI. Until recently, all the major players in Big Tech approached AI with the standard operating playbook: “1. Dump in more cash than your competitors. 2. Capture the highest market share possible. 3. ??? 4. Profit.”
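As a rough back-of-envelope check on what that tenfold gap means at scale (the per-query figures below are commonly cited public estimates, and the daily volume is a hypothetical round number, not a disclosed figure):

```python
# Commonly cited public estimates, not official numbers from either company.
GOOGLE_WH_PER_QUERY = 0.3    # classic search query
CHATGPT_WH_PER_QUERY = 3.0   # ~10x, in line with the estimates above

queries_per_day = 100_000_000  # hypothetical daily volume, for scale

search_kwh = queries_per_day * GOOGLE_WH_PER_QUERY / 1000
chat_kwh = queries_per_day * CHATGPT_WH_PER_QUERY / 1000
print(f"Search: ~{search_kwh:,.0f} kWh/day vs. chat: ~{chat_kwh:,.0f} kWh/day")
```

At that assumed volume, the same traffic costs roughly 300,000 kWh per day to serve as chat rather than 30,000 kWh as search. The absolute numbers are illustrative; the ratio is the point.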

But the world of AI is anything but standard. As compute costs have, not coincidentally, skyrocketed right alongside Nvidia’s stock price, the actual profit model for recouping those costs still seems foggy at best.

Two robotic hands tearing a dollar bill with the ChatGPT logo in the center.
Lucas Gouveia / How-To Geek


OpenAI charges $20 per month for ChatGPT Plus, which unlocks access to its most advanced and most recent models. But even with 11 million paying subscribers, according to a report from The Information citing OpenAI’s COO, the company is still considering new subscription tiers for more advanced LLMs that could run as high as $2,000 per month, depending on capability.
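For scale, here's the quick arithmetic on those reported figures (subscriber count and price straight from the report above; everything else follows from them):

```python
subscribers = 11_000_000   # paying subscribers, per The Information
price_per_month = 20       # USD, ChatGPT Plus

monthly_revenue = subscribers * price_per_month
yearly_revenue = monthly_revenue * 12
print(f"${monthly_revenue:,}/month, or ~${yearly_revenue / 1e9:.1f}B/year")
```

That works out to about $2.6 billion a year: real money by almost any standard, but a figure that has to cover training runs and inference bills that are themselves reported to run into the billions.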

This problem is compounded by diminishing returns in results. As many people find free models like GPT-4o “good enough” for what they need (“enough” being subjective to each user and use case, of course), the selling point of a monthly subscription loses its value. That fear of potentially lost capital has slowed AI investment compared to previous years, which means slowed development output in kind.

When Will ChatGPT Make Its Next Leap?

As OpenAI prepares to launch its o3 model, industry analysts expect it may be the only new public release we see from the company in all of 2025. Many would be happy to be proven wrong, but given the issues mentioned above, it’s looking more likely by the day.


But, ultimately, is that such a bad thing? As the leaderboard at Chatbot Arena shows, model iterations that once jumped hundreds of points between releases only months apart have barely moved a few dozen points in over a year. We’re reaching the peak of what LLMs are capable of even in their most performant environments, and while scaled corporate applications are still ripe for the picking, what an LLM can do for the average user seems to be inching toward its practical limit.

So, when will you get your hands on the next version of ChatGPT? Only time will tell. But while we wait, models like o1 and GPT-4o are still plenty powerful for whipping up a grocery list sorted by aisle, helping you remember which book a specific quote came from, or whatever else you like to use your favorite chatbot for most often.


