OpenAI's GPT-4.5 could be the last of its kind

Illustration: Lindsey Bailey/Axios
GPT-4.5, OpenAI's big new model, represents a significant step forward for AI's industry leader. It could also be the end of an era.
The big picture: 4.5 is "a giant, expensive model," as OpenAI CEO Sam Altman put it. The company has also described it as "our last non-chain-of-thought model," meaning — unlike the newer "reasoning" models — it doesn't take its time to respond or share its "thinking" process.
Why it matters: The pure bigger-is-better approach to model pre-training now faces enormous costs, a dwindling supply of good data and diminishing returns, which is why the industry has begun exploring other routes to improving each new generation of AI.
Between the lines: Building and powering the massive data centers needed to train and run the latest models has become an enormous burden, while assembling ever-bigger datasets has grown harder, since today's models already draw on nearly all the data available on the public internet.
Yes, but: Although pre-training may have hit a wall, most of the industry remains bullish on new gains to be made with reasoning.
Catch up quick: OpenAI Thursday released an early version of GPT-4.5, a major update to the large language model underlying ChatGPT that OpenAI says will be better at recognizing patterns and drawing connections.
- This is OpenAI's largest model yet — though the company declined to offer details about its size or the computing resources it took to train it.
- While OpenAI isn't sharing details, the cost is clearly substantial: developers are being charged 30 times as much to use GPT-4.5 as they currently pay for GPT-4o.
- OpenAI says GPT-4.5 should hallucinate less, follow instructions better and deliver interactions that feel more natural.
OpenAI turned on the new model Thursday, but only for subscribers to the $200-per-month ChatGPT Pro plan and for developers who use OpenAI's API.
- Next week it will become available to other paid subscribers, including those on the $20-per-month ChatGPT Plus plan, with paid enterprise and educational customers getting access the following week.
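For developers, the early release is reached through the same chat completions interface as OpenAI's other models. The sketch below assumes the model identifier "gpt-4.5-preview" and the standard Python client, neither of which is specified in this story.

```python
# Minimal sketch of calling the early GPT-4.5 release via OpenAI's API.
# The model identifier "gpt-4.5-preview" is an assumption based on OpenAI's
# usual preview naming; check your account's model list before relying on it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier for the early release
    messages=[
        {"role": "user", "content": "In two sentences, what changed in this release?"}
    ],
)

print(response.choices[0].message.content)
```

Given the roughly 30-fold price gap noted above, developers will likely reserve calls like this for tasks where the extra capability justifies the cost.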
What they're saying: Altman wrote in a post on X, "Good news: it is the first model that feels like talking to a thoughtful person to me. ... It's a different kind of intelligence and there's a magic to it I haven't felt before."
- One skill the new model seems to have mastered is "reading the room," such as sensing when a user would rather have a conversation than be handed a pile of facts.
Zoom in: Box CEO Aaron Levie, whose company has been testing GPT-4.5, says it shines in certain areas, such as accurately extracting the right information from very large datasets. In such tasks, Levie told Axios, GPT-4.5 is about 20% better.
- "We're very much in the camp of not diminishing returns yet," Levie said in an interview, adding that updates like GPT-4.5 are "continuing to drive new step function improvements on reasoning capabilities, logic, math, a bunch of things that really matter to our world in the enterprise."
- Levie said it makes sense to use models like GPT-4o for some tasks, such as summarizing documents, especially given how much the cost of that model has come down.
- "But if you go to a bank or a large law firm and they need to run mission critical operations on their data, then they would absolutely pay the five or 10 times increase on these more powerful models — because it's still far cheaper than their alternative of just throwing humans at the problem."
Yes, but: Levie said the next era of gains will likely come from improving the reasoning that sits on top of large language models.
- "If the foundation model is extremely powerful and then you're doing chain-of-thought thinking on top of that model, then you get very, very high-impact results," he said.
- Altman has already said that the next big release, GPT-5, will integrate reasoning capabilities from its inception.
Between the lines: Former OpenAI chief research officer Bob McGrew says the question isn't either/or, but comes down to where AI companies commit their resources.
- "That o1 is better than GPT-4.5 on most problems tells us that pre-training isn't the optimal place to spend compute in 2025," McGrew said in a thread on X.
- "There's a lot of low-hanging fruit in reasoning still. But pre-training isn't dead, it's just waiting for reasoning to catch up to log-linear returns."
