Austin shooting puts spotlight on Waymo

When a Waymo robotaxi blocked an ambulance from reaching the scene of last week's mass shooting in Austin, it wasn't a remote Waymo agent coming to the rescue.
- Instead, it was a local police officer who got in and manually drove the car out of the way.
Why it matters: The incident exposes the limits of remote supervision systems in AV networks when vehicles — or the public — need urgent intervention.
- Few situations are more complex than a mass shooting, like the one that unfolded in Austin.
By the numbers: The city of Austin has received complaints about 172 incidents involving Waymos since July 2024, according to the city's autonomous vehicle dashboard.
- Of those, 33.7% were deemed safety issues.
- The city's data reflects "only occurrences which have been directly reported to the city" through 311 calls and reporting by other city departments, like police, EMS and fire.
Zoom in: A Waymo robotaxi was stopped sideways in a road near the Austin shooting scene, attempting to make a U-turn amid the chaos, with an ambulance behind it trying to get by, according to a bystander video circulating on social media and confirmed by Waymo and Austin-Travis County EMS.
- With Waymo's remote assistants thousands of miles away, an Austin police officer arrived about 30 seconds into the video of the standoff, and used his cellphone to scan a QR code on the side of the vehicle to contact Waymo specialists.
- He was able to disengage the autonomous driving system and manually drive the car into a nearby parking garage.
The impact: The ambulance sought another route — and officials say the incident didn't have serious consequences.
The big picture: Congress has started asking questions about how remote assistance works in autonomous vehicles.
The bottom line: "While humans are not perfect drivers, they have an amazing ability to muddle through a novel situation with high uncertainty, which is precisely where machine learning is at its worst," said Philip Koopman, a Carnegie Mellon emeritus professor and expert in embodied AI safety.

