Silicon Valley's latest AI fixation poses early security test

Illustration: Aïda Amer/Axios
Silicon Valley's obsession with a new homegrown autonomous AI assistant is ringing alarm bells throughout the security industry.
Why it matters: This is just the beginning, and AI adopters are already hastily picking convenience over digital security.
Driving the news: All week, tech enthusiasts have been flocking to an open-source AI agent called Moltbot — previously known as Clawdbot — that runs on a computer and operates with extensive system access.
- Need to manage your upcoming flight? You can text Moltbot from your phone, and it will open your browser on your computer and check you in.
- Want to reschedule a meeting? It can tap your calendar and find another time.
- The agent can even join a video call on your behalf.
- Some users have asked Moltbot to negotiate with car dealerships and autonomously investigate and remediate flaws in code.
Reality check: That level of autonomy without human review introduces real risks to a user's systems.
- After installation, Moltbot has full shell access on the machine, including the ability to read and write files and to access your browser, email inbox and calendar — along with any stored login credentials.
- Users integrate the bot into messaging services, like Telegram or WhatsApp, to send it instructions.
- Moltbot maintains persistent memory of its activities so it can perpetually learn and improve its operations.
Threat level: One security researcher found hundreds of Moltbot control panels exposed or misconfigured on the public internet this week — meaning an intruder could access private conversation histories, API keys and credentials, and in some cases hijack the agent to run commands on a user's behalf.
- Cybersecurity firm Token Security said Wednesday that 22% of its customers already have employees using Moltbot within their organizations — likely without IT approval.
Between the lines: Like AI chatbots, agents can hallucinate, and they're susceptible to prompt injections — a type of attack that sneaks harmful instructions into normal content to trick AI models into following them.
- AI agents can't reliably distinguish a PDF or web page containing ordinary instructions from one with malicious code embedded in it to steal someone's data.
- "A lot of people setting this up don't realize what they're opting into," Rahul Sood, CEO of Irreverent Labs, wrote on X. "They see 'AI assistant that actually works' and don't think through the implications of giving an LLM root access to their life."
The big picture: These risks scale as major companies and government agencies start adopting sanctioned AI agents on their networks.
- 39% of companies in a McKinsey study said they've begun experimenting with AI agents.
- The Pentagon is also moving to deploy more agents across its networks — including for war-gaming.
Flashback: In October, Axios interviewed the CEOs of three major identity security companies for a panel on AI agents' security risks at the Identity Underground Summit. One of them said they'd already heard of instances where an agent accidentally cleared someone's calendar or deleted customer records.
Yes, but: For now, Moltbot requires significant technical know-how to install and run — limiting it mostly to more sophisticated users.
- Security experts have cautioned users who want to experiment with Moltbot to change some of its default configurations and to run the bot on a dedicated, siloed machine.
