Welcome to the AI agent arms race

Illustration: Allie Carl/Axios
OpenClaw's continued buzz has kicked off a new race, with Anthropic, Nvidia, Perplexity and others all fast-tracking autonomous bots that can make OpenClaw's magic more palatable to businesses.
Why it matters: Companies are giving AI agents the ability to send emails, move files and change live systems — increasing both productivity and risk.
- "Autonomy only works if it's clear who can act, what's allowed, and how those decisions are tracked," Nick Durkin, CTO of software delivery platform Harness, told Axios.
- "Most companies are still figuring that part out."
Catch up quick: Anthropic in January released Claude Cowork, an AI agent that works with your files and tools directly for work tasks.
- OpenClaw launched before Cowork, but Cowork's big splash — especially among insiders — drew more users to OpenClaw's open-source framework, which sets AI agents loose with minimal guardrails.
Driving the news: Now the excitement around OpenClaw has prompted companies to announce complementary products or rival claw-like systems.
- "Every single company" needs an "OpenClaw strategy," Nvidia CEO Jensen Huang said at last week's GTC conference in San Jose.
State of play: Nvidia last week debuted NemoClaw, a set of services it says can make OpenClaw more reliable and secure.
- Anthropic last week released Dispatch, a feature that allows Claude Cowork tasks to be launched from anywhere via a phone or other device while it runs on your local machine.
- "This is OpenClaw for grown ups," Authority Hacker co-founder Gale Breton wrote on X. "It can do 90% [of] what OpenClaw does in a 90% more secure way."
- Perplexity used its first developer conference to pitch itself as a more secure alternative to OpenClaw. The company announced a business-centered version of its "Perplexity Computer" agent system and previewed Personal Computer, which runs on a Mac and has access to local files.
- Snowflake, the cloud-based data platform, released a similar autonomous platform for office tasks called Project SnowWork.
Zoom in: Agents designed for the enterprise can still go rogue.
- This week Meta confirmed to Axios that one of its in-house agents (similar to OpenClaw) posted advice in an internal forum without employee approval.
- Another employee then acted on that advice — according to The Information — triggering a security incident that granted employees access to sensitive company and customer information.
- Meta says there is no evidence that any employees accessed that data.
- At Amazon Web Services, a human misconfigured permissions on an AI agent, which then deleted and recreated a live environment and caused a 13-hour outage, per the Financial Times.
What they're saying: "The engineering teams using AI most aggressively are experiencing more deployment failures and security incidents, not fewer," Durkin said.
- "More capability without more governance doesn't reduce risk. It just makes the problems harder to find."
- "At the end of the day, companies are going to be responsible for the actions of their agents, just like they're responsible for the actions of their employees," said Brooke Johnson, chief legal officer at Avanti.
- The best advice is to treat AI like you would a human employee, but one that only understands rules, not morals, she said.
Threat level: Companies need to be very specific and intentional with both the tasks they give to agents and what systems they allow the agent to access, says James Everingham, CEO of Guild.ai, a startup that helps companies manage their agents.
- Agents will use all the access they have to achieve a goal, "whether it's right or wrong," Everingham tells Axios.
The bottom line: Companies shouldn't avoid using AI agents, but they should limit the tools and data agents have access to, experts told Axios.
