Axios AI+

April 22, 2025
This is going to come as a great shock to you, but I am headed to the airport. Today's AI+ is 997 words, a 4-minute read.
1 big thing: New workplace threat — "non-human" identities
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company's top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios.
- Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators.
- Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords.
- They would have a level of autonomy that far exceeds what agents have today.
- "In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said.
Between the lines: Those problems include how to secure the AI employee's user accounts, what network access it should be given and who is responsible for managing its actions, Clinton added.
- Anthropic believes it has two responsibilities to help navigate AI-related security challenges.
- First, to thoroughly test Claude models to ensure they can withstand cyberattacks, Clinton said.
- The second is to monitor safety issues and mitigate the ways that malicious actors can abuse Claude.
Threat level: Network administrators are already struggling to monitor which accounts have access to various systems and fend off attackers who buy reused employee account passwords on the dark web.
Zoom in: AI employees could go rogue and hack the company's continuous integration system — where new code is merged and tested before it's deployed — while completing a task, Clinton said.
- "In an old world, that's a punishable offense," he said. "But in this new world, who's responsible for an agent that was running for a couple of weeks and got to that point?"
The intrigue: Clinton says virtual employee security is one of the biggest areas where AI companies could be making investments in the next few years.
- He's especially keen on tools that provide visibility into what an AI employee account is doing on a system, and on a new account classification scheme that distinguishes virtual employees from human and traditional service accounts.
- Major AI companies have recently been on an investing hot streak: OpenAI is in talks to purchase AI coding startup Windsurf. Anthropic just invested in Goodfire, which decodes how AI models think.
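Clinton doesn't point to a specific implementation, but the classification-and-visibility idea can be illustrated in miniature: tag each account with an identity type, scope it to the systems its role needs, and flag any access outside that scope. Everything below (the account names, system names, and scopes) is hypothetical, purely to make the concept concrete:

```python
from dataclasses import dataclass, field
from enum import Enum

class IdentityType(Enum):
    HUMAN = "human"
    SERVICE = "service"                    # traditional non-human: bots, API keys
    VIRTUAL_EMPLOYEE = "virtual_employee"  # autonomous AI with its own account

@dataclass
class Account:
    name: str
    identity_type: IdentityType
    allowed_systems: set = field(default_factory=set)  # role-scoped access

def audit_access(account: Account, system: str, log: list) -> bool:
    """Record every access and flag it when it falls outside the account's role scope."""
    in_scope = system in account.allowed_systems
    log.append((account.name, account.identity_type.value, system, in_scope))
    return in_scope

# Hypothetical example: an AI employee scoped to docs and ticketing touches CI
agent = Account("ai-hr-assistant", IdentityType.VIRTUAL_EMPLOYEE,
                allowed_systems={"wiki", "ticketing"})
log = []
audit_access(agent, "ticketing", log)       # in scope
flagged = not audit_access(agent, "ci-pipeline", log)  # out of scope: flag for review
```

In this toy model, the out-of-scope CI access in Clinton's rogue-agent scenario would surface in the audit log rather than going unnoticed for weeks; real identity platforms add authentication, secrets rotation, and anomaly detection on top of this basic scoping.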
Yes, but: Integrating AI into the workplace is already causing headaches, and figuring out how to manage virtual employees won't be easy.
- Last year, performance management company Lattice said AI bots should be "part of the workforce," including taking spots in corporate org charts. The company quickly reversed course after complaints.
What to watch: Several cybersecurity vendors are already releasing products to manage so-called "non-human" identities.
- Okta released a unified control platform in February to better protect non-human identities, constantly monitoring which systems each company account has access to and watching for suspicious activity.
Go deeper: What Anthropic's AI knows about you
2. DOJ: Google uses AI to monopolize search
Google is using its AI products to further expand its dominance in the online search market, the Justice Department argued yesterday.
Why it matters: The federal government is making its case to break up Google and reshape the internet.
Driving the news: Google and the Justice Department kicked off the remedies phase of their federal antitrust trial yesterday, following Google's loss in D.C. District Court.
- Google had argued in an earlier phase of the trial that the advent of generative AI made the DOJ's case against the company out of date, and that new generative AI products would give Google even more competitors.
- The DOJ said yesterday that its case has grown stronger in the past few months, pointing out that Google is paying Samsung "an enormous sum of money" for Gemini to be the default AI assistant on Samsung devices.
Context: The court already found Google's exclusionary contracts making it the default search engine on certain devices and browsers to be illegal.
What they're saying: "This is the monopolist playbook at work. Google is using the same strategy that they did for search and now applying it to Gemini," DOJ attorney David Dahlquist said.
- The DOJ wants remedies to be "forward-looking" and include AI, Dahlquist said.
- "This is why these products must be included as part of the remedy. Increasing queries increases ad dollars and increases revenue to Google."
- "Google wants to expressly carve out their GenAI products so that they can repeat the monopoly playbook on those products going forward. The risk of excluding GenAI, as well as Gemini [from remedies], is too great."
The other side: Google argued ahead of yesterday's trial in a blog post that the DOJ's proposed remedies would hurt national security and disrupt the global AI race.
- In court, Google attorney John Schmidtlein said the DOJ's remedy list is a "wish list for competitors to gain benefits without competition."
- The generative AI market is "performing extraordinarily competitively," Schmidtlein said, name-checking OpenAI, Meta, Microsoft Copilot and X's Grok.
What's next: Google executives and witnesses from other tech companies will soon take the stand, with closing arguments set for May 30 and a decision expected by August.
Axios Pro: Tech Policy is covering every twist and turn in Google's antitrust trials. Get it in your inbox.
3. Training data
- Meta is testing AI that checks whether teens might be overstating their age. (The Verge)
- AI startups accounted for 60% of all digital health funding in Q1, raising $3.2 billion. (Axios Pro)
- The Philadelphia Parking Authority and the School District of Philadelphia are partnering with Hayden AI to experiment with AI-assisted surveillance cameras. (Axios Philadelphia)
4. + This
I'm usually not one to pass along small funding news, but I couldn't resist sharing "Columbia student suspended over interview cheating tool raises $5.3M to 'cheat on everything.'" Well played, TechCrunch.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.