Axios AI+

December 11, 2025
I really wanted to go to last night's launch event for the San Francisco Chronicle's book on the Valkyries' inaugural season, but went to Google's holiday PR party instead.
🤖 Situational awareness: Time magazine named "the architects of AI" as 2025's Person of the Year.
- Also breaking this morning: Disney will invest $1 billion in OpenAI and license hundreds of characters to Sora.
Today's AI+ is 1,105 words, a 4-minute read.
1 big thing: New models could increase cyber risks
OpenAI says the cyber capabilities of its frontier AI models are accelerating, warning in a report shared first with Axios that upcoming models are likely to pose a "high" risk.
Why it matters: The models' advances could significantly expand the number of people able to carry out cyberattacks.
Driving the news: OpenAI said it has already seen an increase in capabilities in recent releases, particularly as models become able to operate autonomously for longer stretches, paving the way for brute force attacks.
- The company notes that GPT-5 scored 27% on a capture-the-flag exercise in August, while GPT-5.1-Codex-Max scored 76% last month.
- "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework."
Catch up quick: OpenAI issued a similar warning about bioweapons risk in June, then released ChatGPT Agent in July, which it rated "high" on its risk scale.
- "High" is the second-highest level, below the "critical" level at which models are unsafe to be released publicly.
Reality check: The company didn't say when to expect the first models rated "high" for cybersecurity risk, or which types of future models could pose such a risk.
What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview.
- The kinds of brute force attacks that rely on this extended run time are also easier to defend against, Matin says.
- "In any defended environment this would be caught pretty easily," he added.
The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.
- That helps both attackers and defenders.
Yes, but: Last month Anthropic revealed the first documented case of a foreign government using AI to fully automate a cyber operation.
- Increasingly powerful generative AI models are also helping fuel other types of crime, from expense receipt fraud to deepfake-assisted extortion efforts.
In response to the increased capabilities, OpenAI says it is stepping up efforts to work across the industry on cybersecurity threats, including through the Frontier Model Forum it launched with other leading labs in 2023.
- The company says it will establish a separate Frontier Risk Council, an advisory group that will "bring experienced cyber defenders and security practitioners into close collaboration" with OpenAI's teams.
- In addition to ongoing private conversations, OpenAI told Axios it is planning some sort of event to help ensure a "shared understanding" of the threat landscape.
- OpenAI is also privately testing Aardvark, a tool developers can use to find security gaps in their products. Developers must apply for access to Aardvark, which has already found critical vulnerabilities, OpenAI said.
2. Wall Street awaits a $3 trillion IPO gusher
Some of the world's biggest startups — including two AI darlings — have signaled possible IPOs next year, blockbuster offerings that would mint some $3 trillion worth of new public companies.
Why it matters: These companies generate little or no profit yet carry towering valuations. An AI-obsessed market that's happy to overlook all that risks repeating the mistakes of the dot-com era.
State of play: Elon Musk's SpaceX has told its investors that it's planning to go public next year.
- The company is seeking a $1.5 trillion valuation — the richest listing in history, per Bloomberg.
- OpenAI has an implied valuation of over $500 billion, fueling speculation about a future stock listing.
- Other AI, crypto-infrastructure and frontier-tech "centicorns" (companies valued at $100 billion or more) are reportedly weighing 2026 listings, including OpenAI rival Anthropic.
Zoom in: Markets are near all-time highs, and there's strong investor enthusiasm surrounding AI, space and crypto companies.
- "Feed the ducks while they're quacking," said Steve Sosnick, chief strategist at Interactive Brokers.
Friction point: Yet investors are also wary that a bubble may have already formed in the shares of existing AI companies, even before new, richly valued stocks join the frenzy.
- And a whiff of weakness in the AI trade is enough to make investors skittish these days.
- Oracle has made an enormous bet on an AI data center buildout, and its shares tumbled more than 10% in early trading this morning after the company reported disappointing quarterly results.
What they're saying: If "valuations get too ridiculous," we could get a "WeWork moment," noted Jay Ritter, director of the IPO initiative and emeritus professor at the University of Florida.
- WeWork was valued at $47 billion by SoftBank in 2019, when it was set to go public. But institutional investors decided the coworking office-space company was worth nowhere near that much, and WeWork later filed for bankruptcy.
- Sky-high valuations aren't just a market concern — they test the limits of how much speculative hope investors are willing to underwrite in an era defined by AI, space ambition and cheap private capital.
Yes, but: Any potential IPO boom will have its winners and losers.
- "Some will underperform, and a few will turn out to be the next Nvidia or Alphabet," Ritter said.
- A wave of big companies going public wouldn't necessarily be a bad thing, as capital markets are "supposed to be about allowing ordinary investors to participate in the growth of corporate prosperity," said Sosnick of Interactive Brokers.
3. Training data
- Former Meta policy chief Nick Clegg is joining London-based Hiro Capital as a general partner, while Meta's departing chief AI scientist Yann LeCun will serve on an advisory board for the firm. (Tech.eu)
- The Information reports that DeepSeek is training new models on Nvidia chips smuggled into China, while Nvidia tells Bloomberg it has received no tips or substantiation of that claim.
- Sources report tension between Meta's new TBD lab and longer-standing teams at the company over access to resources, among other issues. (NYT)
- Google has promoted Amin Vahdat to chief technologist for AI infrastructure, a new position reporting to CEO Sundar Pichai. (Semafor)
4. + This
In-N-Out, the popular Southern California burger chain, has quietly stopped using order number 67 to avoid the inevitable chaos it sparks among younger customers. Meanwhile, Arby's isn't afraid of the youth.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.