GPT-4o mini just unshackled the free version of ChatGPT



OpenAI announced the release of its newest snack-sized generative model, dubbed GPT-4o mini, which is both less resource intensive and cheaper to operate than its standard GPT-4o model, allowing developers to integrate the AI technology into a far wider range of products.

It’s a big upgrade for developers and apps, but it also expands the capabilities and reduces the limitations of the free version of ChatGPT. GPT-4o mini is available through the ChatGPT web and mobile apps to users on the Free, Plus, and Team tiers starting today, while ChatGPT Enterprise subscribers will gain access next week. GPT-4o mini will replace the company’s existing small model, GPT-3.5 Turbo, for end users beginning today.

The older model is still available to developers through the API if they don’t want to switch to 4o mini just yet. The company says it will retire the older model eventually but has not yet set a date.

GPT-4o has been available to free ChatGPT accounts since May, but with usage limits during periods of high demand. According to the updated FAQ page, GPT-4o proper still has those limits in place, but you’ll now get downgraded to GPT-4o mini rather than GPT-3.5 when you hit your cap. In theory, that’s a big win for those who haven’t upgraded to ChatGPT Plus.

We’re continuing to make advanced AI accessible to all with the launch of GPT-4o mini, now available in the API and rolling out in ChatGPT today. https://t.co/sTxtOfUapJ

— OpenAI (@OpenAI) July 18, 2024

Per data from Artificial Analysis, OpenAI’s newest AI model scored 82% on the MMLU reasoning benchmark, beating Gemini 1.5 Flash by 3% and Claude 3 Haiku by 7%. For reference, the highest MMLU score to date was set by Gemini Ultra, Google’s top-of-the-line model, at 90%.

What’s more, OpenAI claims that GPT-4o mini is 60% cheaper to operate than GPT-3.5 Turbo. Developers will pay 15 cents per million input tokens and 60 cents per million output tokens. OpenAI says GPT-4o mini is “the most capable and cost-efficient small model available today,” per CNBC.
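To put those prices in perspective, here is a minimal sketch of what a single call would cost at the quoted rates (the helper function and the example token counts are our own illustration, not OpenAI’s):

```python
# Quoted GPT-4o mini pricing: $0.15 per million input tokens,
# $0.60 per million output tokens.
INPUT_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PER_M = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one GPT-4o mini call."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 2,000-token prompt that produces a 500-token reply
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000600
```

At those rates, even millions of short requests per day stay in the tens of dollars, which is the point of a small model.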

Where do those cost savings come from? Not every task that can be enhanced by AI needs the full weight and capability of a flagship model like GPT-4o, Claude, or Gemini. Like swatting flies with a sledgehammer, using a full-size LLM for simple but high-volume tasks is overkill and wastes both money and compute resources. That is where small LLMs such as Google’s Gemini 1.5 Flash, Meta’s Llama 3 8B, or Anthropic’s Claude 3 Haiku come in: they can perform these simple, repetitive tasks faster and more cost-efficiently than their larger counterparts.

According to OpenAI, GPT-4o mini has the same 128,000-token context window (roughly a book’s worth of content) as the full-size model and the same October 2023 knowledge cutoff, though the company did not specify the new model’s exact size. The API currently offers only text and vision capabilities, with audio and video support coming in the future.
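For developers, trying the new model is mostly a matter of swapping in the new model name. A minimal sketch using the official `openai` Python SDK (the prompt text and helper name are our own; an actual call requires the SDK installed and an `OPENAI_API_KEY` in the environment):

```python
# Sketch of a chat-completion request targeting GPT-4o mini.
def build_request(prompt: str) -> dict:
    """Assemble the chat-completion parameters for GPT-4o mini."""
    return {
        "model": "gpt-4o-mini",  # drop-in replacement for gpt-3.5-turbo
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("Summarize this article in one sentence.")

# With the SDK installed and a key configured, the live call would be:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(**params)
# print(reply.choices[0].message.content)
```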

The announcement comes just a few weeks after OpenAI provided a long-awaited update on the advanced Voice Mode it is building into GPT-4o. That update indicated a small alpha release was still on track for late July, with a wider rollout planned for this fall.







