Axios AI+

November 17, 2025
Sometimes the best day is just a quiet day at home. Yesterday was that day. Today's AI+ is 1,138 words, a 4.5-minute read.
1 big thing: World models move beyond language
Move over large language models — the new frontier in AI is world models that can understand and simulate reality.
Why it matters: Such models are key to creating useful AI for everything from robotics to video games.
- For all the book smarts of LLMs, they currently have little sense of how the real world works.
Driving the news: Some of the biggest names in AI are working on world models, including Fei-Fei Li, whose World Labs announced Marble, its first commercial release.
- Machine learning veteran Yann LeCun reportedly plans to launch a world model startup when he leaves Meta in the coming months.
- Google and Meta are also developing world models, both for robotics and to make their video models more realistic.
- Meanwhile, OpenAI has posited that building better video models could also be a pathway toward a world model.
- Tangentially related, the New York Times reported Monday that Jeff Bezos has started a new AI company focused on engineering and manufacturing, where he'll serve as co-CEO. "Project Prometheus" is seeded with more than $6 billion in funding.
As with the broader AI race, it's also a global battle.
- Chinese tech companies, including Tencent, are developing world models that include an understanding of both physics and three-dimensional data.
- Last week, the United Arab Emirates-based Mohamed bin Zayed University of Artificial Intelligence, a growing player in AI, announced PAN, its first world model.
What they're saying: "I've been not making friends in various corners of Silicon Valley, including at Meta, saying that within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today," LeCun said last month at a symposium at the Massachusetts Institute of Technology, as noted in a Wall Street Journal profile.
How they work: World models learn by watching video or digesting simulation data and other spatial inputs, building internal representations of objects, scenes and physical dynamics.
- Instead of predicting the next word, as a language model does, they predict what will happen next in the world, modeling how things move, collide, fall, interact and persist over time.
- The goal is to create models that understand concepts like gravity, occlusion, object permanence and cause-and-effect without having been explicitly programmed on those topics.
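The difference between next-token and next-state prediction can be sketched in a few lines of code. This is a purely illustrative toy, not any lab's actual architecture: the "world" is a falling ball, and the learned dynamics are replaced by hand-written physics so the prediction loop is easy to see.

```python
# Toy sketch: a world model predicts the next *state* of an environment,
# not the next word. State here is a ball's (height, velocity); a real
# model would learn these dynamics from video rather than hard-code them.
from dataclasses import dataclass

G = 9.8   # gravity, m/s^2
DT = 0.1  # timestep, seconds

@dataclass
class BallState:
    height: float    # meters above the ground
    velocity: float  # m/s, positive = upward

def predict_next_state(state: BallState) -> BallState:
    """One step of next-state prediction: apply gravity, respect the floor."""
    v = state.velocity - G * DT
    h = state.height + v * DT
    if h <= 0:  # crude cause-and-effect: balls don't pass through floors
        h, v = 0.0, 0.0
    return BallState(h, v)

def rollout(state: BallState, steps: int) -> list[BallState]:
    """Imagine a short future by chaining one-step predictions."""
    trajectory = [state]
    for _ in range(steps):
        state = predict_next_state(state)
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    path = rollout(BallState(height=2.0, velocity=0.0), steps=30)
    print(f"final height: {path[-1].height:.2f} m")
```

The rollout loop is the key idea: chaining one-step predictions lets a model "imagine" futures, which is what makes world models useful for planning in robotics and games.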
Context: There's a related but distinct concept called a "digital twin," in which companies create a digital version of a specific place or environment, often fed with real-time data from sensors that allows for remote monitoring or predictive maintenance.
Between the lines: Data is one of the key challenges. Those building large language models have been able to get most of what they need by scraping the breadth of the internet.
- World models also need a massive amount of information, but their data isn't consolidated or as readily available.
- "One of the biggest hurdles to developing world models has been the fact that they require high-quality multimodal data at massive scale in order to capture how agents perceive and interact with physical environments," Encord president and co-founder Ulrik Stig Hansen said in an email interview.
- Encord offers one of the largest open source datasets for world models, with 1 billion data pairs across images, videos, text, audio and 3D point clouds as well as a million human annotations assembled over months.
- But even that is just a baseline, Hansen said. "Production systems will likely need significantly more."
What we're watching: While world models are clearly needed for a variety of uses, whether they can advance as rapidly as language models remains uncertain.
- Though they're clearly benefiting from a fresh wave of interest and investment.
2. Investors sour on Big Tech's debt amid AI race
Oracle's $3.5 billion, 30-year bond has dropped roughly 8% since its October peak and is now trading at just 65 cents on the dollar.
Why it matters: It's a sign of growing investor unease over Big Tech's borrowing binge to fund AI infrastructure.
Zoom in: Oracle's credit risk has widened faster than the overall investment-grade market's, according to Bank of America analysts.
- Five-year credit default swaps (insurance-like contracts that protect investors against a default on a company's debt) have widened to around 80 basis points, the highest in about two years.
- BofA flags this as a warning that investors aren't comfortable with how Big Tech is financing its AI buildout.
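The basis-point arithmetic is simple to make concrete. The 80 bps spread is from the reporting above; the $10 million notional is purely illustrative:

```python
# Rough arithmetic: a CDS spread quoted in basis points translates into
# an annual premium on the notional amount insured (1 bp = 0.01%).
def cds_annual_premium(notional: float, spread_bps: float) -> float:
    """Annual cost of default protection at a given spread."""
    return notional * spread_bps / 10_000

# At 80 bps, insuring $10M of Oracle debt costs $80,000 a year.
premium = cds_annual_premium(10_000_000, 80)
print(f"${premium:,.0f} per year")
```

A widening spread means that yearly premium keeps climbing, which is the market's way of pricing in higher perceived default risk.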
Zoom out: Financial conditions have loosened, helped by lower interest rates and a rally in risk assets.
- Even as credit spreads have widened recently amid some AI bubble concerns, they remain near historically low levels.
- Still, tech companies' bond spreads and credit default swap spreads are widening, making it more expensive for investors to insure against defaults on that debt.
- Bank of America says that trend reflects concern that tech companies may not have enough cash to finance the "AI capex arms race."
The bottom line: Just two weeks ago, bond investors were clamoring for their piece of the AI pie, with Meta's latest debt issuance four times oversubscribed.
- A drop in demand, coupled with a selloff in Big Tech stocks, could signal that investors are questioning how much is too much to spend on an AI buildout without a clear path to returns on that investment.
3. The age of AI-powered cyberattacks is here
The dam holding back foreign spies from automating cyberattacks with AI tools has officially broken.
Why it matters: Imagine a world where Chinese spies can tamper with a U.S. water system or steal a major AI vendor's plans for its next model upgrade — all with just a few clicks. That future is no longer hypothetical.
- "Guys wake the f up," Sen. Chris Murphy (D-Conn.) said on X. "This is going to destroy us — sooner than we think — if we don't make AI regulation a national priority tomorrow."
Threat level: As AI models get smarter, state-backed hacking powered by AI will too.
- "This is simply the tip of the iceberg and a clear indication of the future threat landscape," said John Watters, CEO and managing partner at cybersecurity firm iCounter.
The big picture: Cybersecurity experts have warned for months that fully autonomous cyberattacks — in which AI agents execute an entire operation with minimal human input — were 12 to 18 months away.
- That timeline just shrank.
4. Training data
- Here's a look at the AI infrastructure race, broken down into six charts. (WSJ)
- Apple will require developers to disclose when they are sending data to third-party AI engines and get users' permission before doing so. (Cult of Mac)
5. + This
The middle schooler is long past sharing my joy for "Sesame Street," but I do think he will like this, from Count von Count.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.