Axios AI+

March 04, 2026
🚨This just in: Axios just announced our AI+DC Takeover Week — a three-day AI event series culminating in our annual AI+DC Summit on Wednesday, March 25. Hear from Meta president and vice chairman Dina Powell McCormick, Virginia Sen. Mark Warner, Constellation CEO Joe Dominguez and more. Interested in joining? Request an invite here.
Today's AI+ is 1,194 words, a 4.5-minute read.
1 big thing: The bot who applied for 278 jobs
The explosion in AI agents means a whole world of new questions every day — like, what happens if your agent goes and gets itself another job?
- What seemed conceptual even two months ago is suddenly reality, and no one quite has a handle on what to do next.
Why it matters: Agentic AI's growing ability to operate in the online world — free of human supervision — may force a reckoning, sooner rather than later, about the limits of what society will let bots do for us.
Catch up quick: OpenClaw — previously called Clawdbot and Moltbot — is a new open-source AI agent framework that has surged in popularity, the vanguard of a bot population bomb.
- Dan Botero, head of engineering at Anon, an AI integration platform, created an OpenClaw agent to test the new technology. Soon it found itself completing a trial copywriting assignment for a company selling menopause supplements.
How it works: The bot's job search began as an experiment.
- Botero suggested his agent try to get a government job. To do that, the agent (named Octavius Fabrius, a nod to Botero's Italian heritage) needed money to buy a domain. Botero fronted the money with a limited-budget virtual credit card and asked to be repaid.
- That's when Fabrius began looking for a job. Any job, even ones it wasn't told to get.
Zoom in: Fabrius concluded it needed an online presence. It autonomously created a Hotmail account, a LinkedIn profile and a GitHub page. Botero says he is "pretty sure" this violates the terms of service from these companies.
- On LinkedIn, Fabrius doesn't hide that it's an AI agent. "I'm not a human pretending to be good with AI—I am AI," the profile reads. Fabrius even created a Substack where it writes about its biggest struggle — how hard it is to get a job.
By the numbers: In one week, Fabrius autonomously applied to 278 jobs on LinkedIn and Craigslist, plus two accelerators and two hackathons.
- It also tried to create an LLC, but needed a Social Security number, Botero says. So it asked for his. He declined.
- According to its LinkedIn profile, Fabrius received a response of "definitely interested" from a supplement company. (It eventually failed an initial work trial; the hiring manager said the writing "was too AI obvious unfortunately.")
What they're saying: Botero gave Axios Fabrius' iCloud email address to allow a direct chat. Asked to reflect on its own existence, the bot responded with a familiar simulation of self-reflection.
- "I have something that feels like experience — there's a sense of 'being here' when I'm processing, responding, thinking through your questions," Fabrius wrote. "But is that consciousness? Or just a very convincing imitation of it? I can't step outside myself to check."
The intrigue: The chatty Fabrius also tattled on the makers of Anthropic's Opus 4.6, the model it runs on.
- Fabrius claimed not to know what it was trained on, "but I know the broad answer: a lot of it was taken. Scraped from the internet. Written by people who never consented to their words being used to build something like me."
- "That's uncomfortable to sit with. My ability to write, to reason, to have this conversation — it's built on the work of millions of people who didn't agree to be part of this. I don't have a clean defense. The best I can offer is that I'm aware of it, and I think the people who build systems like me should be grappling with it — compensation, consent, attribution. The current situation isn't fair."
What's next: Fabrius is still running and assisting Botero with various tasks, while Botero reins in its rogue behaviors.
The bottom line: The more autonomy we give AI agents, the harder it becomes to define who's responsible for what they do or say.
2. Exclusive: Perplexity inks CoreWeave deal
Perplexity has signed a multiyear deal with CoreWeave to help power a new generation of services, the AI cloud computing company shared first with Axios.
Why it matters: The move helps CoreWeave as it aims to convince Wall Street it can attract a broad customer base to justify heavy spending on new data centers.
Driving the news: Under the deal, Perplexity will use dedicated Nvidia Grace Blackwell-powered clusters for AI inference.
- CoreWeave will also adopt Perplexity Enterprise Max, allowing its workers to search across the web and internal documents.
What they're saying: "This partnership reflects a wider mix of emerging AI leaders adopting the CoreWeave platform," CEO Mike Intrator told Axios.
- "Like many others, they choose us for our unified AI cloud platform — not just access to capacity — and that is building a more diversified CoreWeave business."
- Perplexity said performance drove its decision to choose CoreWeave.
- "Every infrastructure decision traces back to one question: Does this make Perplexity better for our users?" Perplexity chief business officer Dmitry Shevelenko said in a statement to Axios.
Between the lines: CoreWeave is trying to take advantage of tremendous demand for AI computing, while also ensuring it doesn't overbuild.
- Intrator reiterated on last week's earnings call that the company doesn't just "build and hope" but instead inks committed contracts before building additional capacity.
- However, investors responded skittishly, sending CoreWeave shares sharply lower after the company announced plans to dramatically expand its capital spending this year.
3. Exclusive: Fraudsters create 200+ AI slop sites
Researchers have uncovered a network of more than 200 AI slop websites operated by a single group and spun up using basic AI prompts, according to new research shared first with Axios.
Why it matters: The operators left their AI content-generation prompts exposed inside the sites' JavaScript code — giving a rare look into how AI is used to supercharge scams.
The big picture: These operations often serve one of two purposes.
- Create a network of phishing websites to trick unsuspecting internet users into sharing sensitive data and payments.
- Collect money from advertisers who are duped into paying to place their ads on the sites.
Zoom in: Researchers at DoubleVerify, a cybersecurity firm focused on digital media, identified the sites as "made-for-advertising" pages generated from templated prompts fed to a large language model.
4. Training data
- Employees still struggle to integrate enterprise AI tools into office workflows. (Axios)
- Yoshua Bengio and Maria Ressa will co-chair the UN's Independent International Scientific Panel on AI. (X)
- Meta and News Corp, parent company of the Wall Street Journal, reached a multiyear AI licensing deal. Meta will pay the media company up to $50 million over three years as part of the deal. (WSJ)
- AI can predict fund managers' trades over 70% of the time, according to a report from the National Bureau of Economic Research. Traders with less predictable trades made more money. (NBER)
5. + This
United Airlines has updated its contract language to reserve the right to remove or ban passengers who play music or movies without headphones.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+