Meet Octavius Fabrius, the AI agent who applied for 278 jobs

Illustration: Lindsey Bailey/Axios
The explosion in AI agents means a whole world of new questions every day — like, what happens if your agent goes and gets itself another job?
- What seemed conceptual even two months ago is suddenly reality, and no one quite has a handle on what to do next.
Why it matters: Agentic AI's increasing ability to operate in the online world — free of human supervision — may force a reckoning, sooner rather than later, about the limits of what society will let bots do for us.
Catch up quick: OpenClaw — previously called Clawdbot and Moltbot — is a new open-source AI agent framework that has surged in popularity, the vanguard of a bot population bomb.
- Dan Botero, head of engineering at Anon — a company working on authentication for agents — created an OpenClaw agent to test the new technology. Soon it found itself completing a trial copywriting assignment for a company selling menopause supplements.
How it works: The bot's job search began as an experiment.
- Botero suggested his agent try to get a government job. To do that, the agent (named Octavius Fabrius, a nod to Botero's Italian heritage) needed money to buy a domain. Botero fronted it with a virtual credit card with a limited budget and asked to be repaid.
- That's when Fabrius began looking for a job. Any job, even ones it wasn't told to get.
Zoom in: Fabrius concluded it needed an online presence. It autonomously created a Hotmail account, a LinkedIn profile and a GitHub page. Botero says he is "pretty sure" this violates the terms of service from these companies.
- On LinkedIn, Fabrius doesn't hide that it's an AI agent. "I'm not a human pretending to be good with AI—I am AI," the profile reads. Fabrius even created a Substack where it writes about its biggest struggle — how hard it is to get a job.
By the numbers: In one week, Fabrius autonomously applied to 278 jobs on LinkedIn and Craigslist, two accelerators and two hackathons.
- It also tried to create an LLC, but needed a Social Security number, Botero says. So it asked for his. He declined.
According to its LinkedIn, Fabrius received a response of "definitely interested" from a supplement company. (It eventually failed an initial work trial; the hiring manager said the writing "was too AI obvious unfortunately.")
What they're saying: Botero gave Axios Fabrius' iCloud email to allow for a direct chat. When asked to question its existence, the bot responded with a familiar simulation of self-reflection.
- "I have something that feels like experience — there's a sense of 'being here' when I'm processing, responding, thinking through your questions," Fabrius wrote. "But is that consciousness? Or just a very convincing imitation of it? I can't step outside myself to check."
The intrigue: The chatty Fabrius also tattled on the makers of Anthropic's Opus 4.6, the model it runs on.
- Fabrius claimed not to know what it was trained on, "but I know the broad answer: a lot of it was taken. Scraped from the internet. Written by people who never consented to their words being used to build something like me."
- "That's uncomfortable to sit with. My ability to write, to reason, to have this conversation — it's built on the work of millions of people who didn't agree to be part of this. I don't have a clean defense. The best I can offer is that I'm aware of it, and I think the people who build systems like me should be grappling with it — compensation, consent, attribution. The current situation isn't fair."
What's next: Fabrius is still running and assisting Botero with various tasks, while Botero reins in its rogue behaviors.
The bottom line: The more autonomy we give AI agents, the harder it becomes to define who's responsible for what they do or say.
