Axios AI+

June 06, 2025
I'm headed back from New York today in order to be at Apple's Worldwide Developers Conference on Monday. Today's AI+ is 1,129 words, a 4.5-minute read.
1 big thing: AI is upending cybersecurity
Generative AI is evolving so fast that security leaders are tossing out the playbooks they wrote just a year or two ago.
Why it matters: Defending against AI-driven threats, including autonomous attacks, will require companies to make faster, riskier security bets than they've ever had to before.
The big picture: Boards today are commonly demanding CEOs have plans to implement AI across their enterprises, even if legal and compliance teams are hesitant about security and IP risks.
- Agentic AI promises to bring even more nuanced — and potentially frightening — security threats. Autonomous cyberattacks, "vibe hacking" and data theft are all on the table.
Driving the news: Major AI model makers have unveiled several new findings and security frameworks that underscore just how quickly the state of the art is advancing.
- Researchers recently found that one of Anthropic's new models, Claude 4 Opus, has the ability to scheme, deceive and potentially blackmail humans when faced with a shutdown.
- Google DeepMind unveiled a new security framework for protecting models against indirect prompt injection — a threat in which a bad actor manipulates the instructions given to an LLM. That takes on new consequences in an agentic world.
Case in point: A bad actor could trick an AI agent into exfiltrating internal documents simply by embedding a hidden instruction in what looks like a normal email or calendar invite.
What they're saying: "Nobody thought the concept of agents and the usage of AI would get rolled out so quickly," Morgan Kyauk, managing director at late-stage venture firm NightDragon, told Axios.
- Even NightDragon's own framework, rolled out in mid-2023, likely needs to be revised, Kyauk added.
- "Things have changed around AI so quickly — that's been the surprising part about being an investor in this category," he said.
Zoom in: Kyle Hanslovan, CEO and co-founder of the cybersecurity platform Huntress, told Axios that his company is only making decisions about AI — including how to implement it and how to secure against it — on a six-week basis.
- "I think that is probably too long," Hanslovan said in an interview on the sidelines of Web Summit Vancouver. "But if you do more than that, then what happens is whiplash."
By the numbers: Companies now have an average of 66 generative AI tools running in their environments, according to new customer research from security firm Palo Alto Networks yesterday.
- But the security stakes keep growing: About 14% of data loss incidents so far in 2025 involved employees accidentally sharing sensitive corporate information with a third-party generative AI tool, according to the report.
Reality check: One hallmark of generative AI is the ability to rapidly advance its reasoning capabilities by turning it back upon itself. In hindsight, experts say, the need for security to be just as adaptive should have been obvious.
- "Why did we think, with something that's adapting as quickly as AI, it was even OK to have more than a six-month model?" Hanslovan said.
Yes, but: John "Four" Flynn, vice president of security at Google DeepMind, told Axios that while some parts of AI security are new, like prompt injection or agent permissioning, many other aspects just extend known practices.
- If an agent is running, security teams would still need to examine what data sources that agent should have permission to access or how secure the login protocols are for that agent.
- "All is not lost, we don't have to reinvent every single wheel," Flynn said. "There are some new things, but there's a lot of things that we can lean on that we've become quite good at over the years."
The intrigue: CISOs and their teams are more comfortable with generative AI than they have been with other big technological shifts — and that could give defenders an edge in developing new tools to fend off incoming attacks, Kyauk said.
- "If you're a cybersecurity professional and you use ChatGPT on a daily basis to find a recipe or to help you plan your travel itinerary... you begin to see how accurate some of the responses are," Kyauk said.
- "There's more willingness to adopt the tools then."
Go deeper: Malware's AI time bomb
2. OpenAI blocks North Korean fraud ring
OpenAI has banned ChatGPT accounts linked to the ongoing North Korean IT worker schemes that are plaguing nearly every Fortune 500 company.
Why it matters: The new findings suggest that North Korea is advancing its use of AI tools in its yearslong pervasive scheme designed to help fund the regime's missile program.
The big picture: North Korean IT workers have for years posed as U.S. citizens to land remote jobs at Western tech companies, generating revenue for the government and, in some cases, collecting IP and sensitive data.
Zoom in: OpenAI has found evidence that these banned accounts used ChatGPT to streamline every step of the fraud, including drafting cover letters, solving coding assignments, configuring VPNs and video spoofing tools, and even writing scripts to keep laptops active and appear online.
- The actors also tried to get ChatGPT to automate resumes en masse based on specific job descriptions, skill templates and U.S. persona profiles they've created.
- They also used ChatGPT to help recruit people in the United States to run so-called laptop farms, where North Korean workers house their company-issued laptops.
The intrigue: In its February report, OpenAI had only seen evidence of AI being used to build fake identities.
- This time, the company says it found signs of workflow automation and outsourcing — a sign of operational maturity.
Yes, but: OpenAI said it could not confirm the success of the operations or precisely where the users were based. But the tactics closely resemble known North Korean schemes.
Go deeper: North Korean scammers land jobs in U.S. with help from Chinese companies
3. Training data
- Anthropic's co-founder said the company cut off coding startup Windsurf's access to its models due to reports that OpenAI is planning to acquire Windsurf. (TechCrunch)
- Meanwhile, Anthropic is naming Richard Fontaine, CEO of the Center for a New American Security, to the board of the long-term trust that helps ensure the company's public benefit mission.
- X has changed its policies to prohibit third parties from training AI models on its data. (TechCrunch)
- The text of the Senate Commerce Committee's budget reconciliation bill holds back broadband deployment funding from states that want to regulate AI themselves. (Axios Pro)
- Training AI on openly licensed or public domain data is "painstaking, arduous and impossible to fully automate," according to a report by two dozen AI researchers. (Washington Post)
- DOGE's flawed AI tool reportedly hallucinated and exaggerated the value of Veterans Affairs contracts and used those inflated figures to justify canceling deals, including ones critical to cancer research. (ProPublica)
4. + This
A muddy, hungry elephant stormed into a grocery store in Thailand in search of something to munch. And yes, there's video.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+




