Robotaxis are learning to drive in an AI-simulated world

Illustration: Lindsey Bailey/Axios
The more that autonomous vehicles can learn to drive in the virtual world, the less time they need to practice on physical streets.
Why it matters: Simulation could accelerate robotaxi rollouts — Waymo is now operating in 10 cities — provided AI-generated training data truly mirrors reality. Not all experts are convinced.
The big picture: So-called world models — AI systems that simulate physical reality — are essential to developing everything from self-driving cars to robots and video games.
- Though the concept dates back decades, world models are becoming more important as AI advances beyond text toward general intelligence for the physical world.
How they work: World models learn by digesting video or other sensor data to create a digital twin of the environment, modeling motion, interactions and predicting what happens next.
- AV developers like Waymo and Waabi use them to train vehicles on complex scenarios long before their tires hit public roads.
- The goal is to design a system that can reason through situations it's never seen before.
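The core loop described above — learn the environment's dynamics from observed data, then predict what happens next in situations the system has never seen — can be sketched in miniature. This is a toy illustration only: the one-dimensional linear dynamics, the function names, and the least-squares fit are all assumptions for clarity, not how Waymo's or Waabi's models actually work.

```python
# Toy world-model sketch: fit simple dynamics from observed transitions,
# then roll the model forward to simulate unseen "what happens next" states.

def fit_dynamics(transitions):
    """Least-squares fit of x_next = a * x + b from (x, x_next) pairs."""
    n = len(transitions)
    sx = sum(x for x, _ in transitions)
    sy = sum(y for _, y in transitions)
    sxx = sum(x * x for x, _ in transitions)
    sxy = sum(x * y for x, y in transitions)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def rollout(a, b, x0, steps):
    """Use the learned model to simulate future states from a start state."""
    states = [x0]
    for _ in range(steps):
        states.append(a * states[-1] + b)
    return states

# Observed trajectory: speed roughly halving each step (vehicle braking).
observed = [(8.0, 4.0), (4.0, 2.0), (2.0, 1.0)]
a, b = fit_dynamics(observed)

# Predict from a starting state never seen in the data.
predicted = rollout(a, b, 16.0, 3)
print(predicted)  # → [16.0, 8.0, 4.0, 2.0]
```

Real world models do the same thing at vastly larger scale, replacing the scalar state with learned video and sensor representations, and the linear fit with deep networks.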
Zoom in: Waymo says its Waymo World Model, built on Google DeepMind's Genie 3, can simulate rare edge cases — from tornadoes to wandering elephants — that are nearly impossible to capture in real-world testing.
- This "what if" capability helps Waymo rapidly improve its technology to safely scale its robotaxis across more cities, the company says.
Yes, but: While simulation is essential to teach AVs to drive, even the best simulators can't anticipate every random hazard, says Carnegie Mellon Emeritus Prof. Philip Koopman, an expert in embodied AI safety.
Friction point: "You only know world models are good enough through testing, and we are not doing enough testing," warns Missy Cummings, George Mason University robotics professor and a former safety advisor at the National Highway Traffic Safety Administration.
- "The use of simulated data to both build and validate world models is the ultimate in hubris and will most certainly lead to deaths for AI used in safety-critical systems," she says.
- "GIGO [garbage in, garbage out] is the operative word here."
Relying on synthetic data to design embodied AI systems like robotaxis creates risks that aren't fully understood, Koopman adds.
- "You have AI talking to AI. Where's the ground truth?" he asks.
What they're saying: Waymo points to its safety record:
- "Compared to human drivers over 127 million rider-only miles, the Waymo Driver has been involved in 10x fewer serious injury or worse crashes and 13x fewer crashes with injuries to pedestrians," the company says, citing its own peer-reviewed research.
- Waabi, meanwhile, defends the realism of its AI-generated world and says it sets an industry standard.
Between the lines: Using its own paired testing method, the Waabi World simulator achieved a 99.7% "realism score" — a measure of how closely a robotaxi's behavior in the real world matches its behavior in simulation, founder and CEO Raquel Urtasun said.
- While other companies don't share similar measurements, Waymo says internal benchmarks show its own world model "embodies the highest level of realism to date."
What we're watching: Regulators may eventually use simulated driving tests to determine whether AV systems are safe to deploy.
- But proving a simulator's realism should be a prerequisite in the meantime, Urtasun said.
The bottom line: Robotaxis are only as good as the AI model that trained them.
