Hackers are coming for AI in the physical world
Add Axios as your preferred source to
see more of our stories on Google.

Photo illustration: Axios Visuals; Photo: Courtesy of SentinelOne
The models underpinning self-driving cars, humanoids and other physical applications of AI are about to become prime targets for hackers over the next year, SentinelOne CEO Tomer Weingarten warned.
Why it matters: Few people outside the depths of the security industry are ready for a world where Waymos are hijacked or warehouse robots are tricked into rerouting merchandise.
Driving the news: SentinelOne was one of the pioneers of the AI security world, emerging before ChatGPT even hit the consumer market.
- Now, while everyone is studying ways to protect models from data poisoning or prompt injections, where hidden text instructions trick a large language model into doing something bad, Weingarten is worried about the even bigger wave that's coming next.
What they're saying: "We forget that there are more and more real-world applications of these models," Weingarten told Axios.
- "You can inject malicious commands through visual processing and through audio processing, so the moment we open up our systems to receive inputs that are not just textual, suddenly there's a whole new class of threats," he said.
- "That is very, very worrisome."
How it works: When Weingarten says those "real-world applications," he's talking about the self-driving Waymos that crisscross San Francisco and the humanoid robots that many technology companies are developing.
- Each of those is underpinned by a "multimodal model" — AI systems that process multiple types of inputs, including text, video, audio and images.
- An attack on one of these models could target any of those data sources.
Zoom in: As an example, Weingarten mentioned the possibility of someone holding up a sign on the road that means nothing to a Waymo passenger but does mean something to the AI model that is using visual data to operate the car.
- "That can present a complete new way to basically compromise the Waymo, just by the interpretation of the camera," he said.
Threat level: Despite the billions poured into cybersecurity companies, Weingarten cautioned that there still aren't enough people studying what new kinds of multimodal threats could look like.
- "I don't think anybody is really parsing through that right now," he said.
- "When something happens in the real world, I think that's where consequences become vividly real," he said. "I hope it doesn't happen in the next year, but that's one vector I would really like people to pay more attention to."
Of note: SentinelOne has been pushing to position itself as a dominant vendor in securing AI applications and the environments in which AI tools operate — meaning the company has a vested interest in seeing customers prepare for multimodal attacks.
- But Weingarten said his teams also care deeply about securing against both attacks on AI systems and AI-enabled attacks.
- On Monday, the company released open-source security tools designed to protect OpenClaw, the AI agent that anyone can download.
What to watch: Weingarten also said he expects to see more AI model poisoning attacks, where hackers tamper with underlying training data, as well as security incidents tied to "vibe coding" over the next year.
Go deeper: The age of AI-powered cyberattacks is here
