Axios Future of Cybersecurity

April 21, 2026
Happy Tuesday! Welcome back to Future of Cybersecurity.
✈️ Hello from the friendly skies, where I'm writing while en route to both D.C. and Nashville this week!
- 📍 Heading to SANS AI Cyber Summit or Vanderbilt's Asness Summit? Let's get coffee!
- 👔 If you're a CEO or a CEO's team, request access to Axios CEO Jim VandeHei's new weekly C-suite newsletter.
🚨 Situational awareness: CISA is not on the list of more than 40 organizations that have access to Anthropic's Mythos model, Axios has learned. An Anthropic official previously said the company had briefed both the agency and the Commerce Department before its release.
📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 2,098 words, an 8-minute read.
1 big thing: Faster attacks, quicker exploits are the real AI cyber shift
The real leap in Anthropic's and OpenAI's latest cyber-capable models isn't that they can hack in entirely new ways, but that they can do it faster, at greater scale and with a growing ability to turn vulnerabilities into working exploits, early users tell Axios.
Why it matters: The models may only represent one big step forward today, rather than a leap into the unknown. But if their current trajectory holds, they may still outstrip defenses designed for human-scale attacks.
Driving the news: OpenAI last week joined Anthropic in rolling out a cyber-focused model, GPT-5.4-Cyber, with access limited to vetted partners.
- Early adopters say the models aren't radically more capable than previous generations, but their speed and ability to generate proof-of-concept exploits are changing the equation.
Threat level: "When the attackers move at machine speed, and the defenders move at human speed, we don't lose the game — it's game over," Illumio CEO and founder Andrew Rubin told Axios.
- Rubin argued that many current defenses aren't built for that shift: "A security strategy that relies on occasional patching and keeping threats outside the perimeter is a recipe for disaster."
- Executives at Cisco and Zscaler said the biggest gains show up in how the models handle complexity, including analyzing large codebases, identifying vulnerabilities and linking them together for full attack plans.
- Cisco, which is testing both models, found they can "chain together vulnerabilities to build an exploit chain," said Anthony Grieco, the company's chief security and trust officer.
- Dhawal Sharma, executive vice president of AI security at Zscaler, said that the models are already uncovering issues "humans have not found for years, decades" and that "AI can facilitate lateral movement at lightning speed."
Between the lines: New research and early user testing suggest the models are at a tipping point in their ability not just to find flaws, but to validate and exploit them.
- Anthropic's Mythos Preview completed 73% of all expert-level cybersecurity tasks in testing by the U.K.'s AI Security Institute and was the first model to complete a 32-step simulated attack, from initial reconnaissance to full network takeover, in some runs.
- OpenAI's model stands out not just for finding bugs, but for quickly testing and generating working exploits, said Isaac Evans, CEO of Semgrep, which received an OpenAI grant to evaluate the system.
- "The model can cut through its own hallucinations in a way previous generations couldn't," Evans said while describing an internal case where it proved a supposed false positive was actually a real vulnerability.
- Socket, another grant recipient, said in a blog post that OpenAI's model identified a malicious package tied to the Axios JavaScript library hack in six seconds.
Zoom in: Cisco and Zscaler are already using the models internally to scan products and systems for vulnerabilities, with plans to integrate them into customer-facing tools like threat intelligence and red teaming.
- But the tools still depend on experienced operators. At Cisco, the models work best when "you marry them with a mature organization, mature red teamers and a harness," Grieco said.
Yes, but: Running these models requires a hefty token budget that not all companies — or even attackers — have. In some tests, the U.K. AI Security Institute used a 100-million-token budget.
What to watch: Anthropic CEO Dario Amodei told the Financial Times he expects open-source models and Chinese developers to be able to replicate Mythos' cyber capabilities within six to 12 months.
Editor's note: This story has been corrected to reflect that Andrew Rubin's analysis was based on conversations with industry peers (not his own experience using Mythos).
2. Vercel hack tied to compromised AI tool
A hacker gained access to Vercel's systems by compromising an employee account through a breached third-party AI platform, the cloud application company said Sunday, after the unidentified attacker claimed to be selling the stolen data.
Why it matters: Vercel sits deep in the modern web stack, powering app infrastructure and maintaining the popular Next.js framework — both widely used by developers, including those building with AI-assisted coding tools.
Driving the news: Vercel said Sunday it had identified a breach where a hacker got "unauthorized access to certain internal Vercel systems," affecting a "limited subset of customers."
- A user claiming ties to the ShinyHunters cybercrime group posted on BreachForums offering a purported dataset for sale.
Zoom in: Vercel CEO Guillermo Rauch said the attack began with the compromise of an AI platform, Context.ai, used by a Vercel employee, allowing the attacker to access the employee's Google Workspace account and use it as a foothold into internal systems.
- Vercel said customer credentials are encrypted but variables not designated as sensitive may have been exposed after the attacker gained deeper access.
- "We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI," Rauch said. "They moved with surprising velocity and in-depth understanding of Vercel."
Threat level: Vercel urged customers to review activity logs and rotate API keys, tokens, database credentials and signing keys, particularly those not designated as sensitive.
- Some security researchers warned the breach could have downstream effects beyond Vercel's direct customers due to its widely used open-source projects. Rauch said the company has not seen evidence that the hacker tampered with those.
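Rotating that much material by hand doesn't scale, so teams typically script it. A minimal Python sketch of the pattern Vercel describes: flag every variable that wasn't marked sensitive and issue it a fresh random value. The data structure and helper name here are hypothetical illustrations, not Vercel's actual API.

```python
import secrets


def rotate_unmarked(env_vars):
    """Replace the value of every variable not flagged as sensitive.

    env_vars: dict mapping name -> {"value": str, "sensitive": bool}
    (a hypothetical structure for illustration, not Vercel's API).
    Returns a dict of the rotated names and their new values.
    """
    rotated = {}
    for name, meta in env_vars.items():
        if not meta["sensitive"]:  # these were potentially exposed
            new_value = secrets.token_urlsafe(32)
            meta["value"] = new_value
            rotated[name] = new_value
    return rotated


env = {
    "DATABASE_URL": {"value": "postgres://old", "sensitive": True},
    "API_KEY": {"value": "old-key", "sensitive": False},
}
changed = rotate_unmarked(env)
print(sorted(changed))  # ['API_KEY'] -- only non-sensitive vars rotate
```

Real rotation also means updating the consuming services and revoking the old credentials, which no script can fully automate.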
Between the lines: AI companies and their tech suppliers are increasingly becoming targets for hackers who recognize that by targeting them, they can hit hundreds of victims at once.
3. Exclusive: Cursor's push to secure vibe coding
Cursor, the popular AI coding platform, has tapped a new security partner to reduce the risk of developers pulling vulnerable or malicious open-source code into their projects.
Why it matters: As AI tools generate more code, security teams worry that vulnerable or malicious components may spread faster than they can be reviewed or fixed.
Driving the news: Cursor is launching a partnership with open-source security company Chainguard today that aims to limit that risk by steering AI-generated code toward vetted open-source components.
- Cursor will embed Chainguard's products into its platform so that images and code libraries pulled into users' projects are less likely to include hidden malware or known vulnerabilities.
- Developers can turn on the Chainguard integration "through simple natural language instructions," with little or no additional setup, Cursor said in a news release.
Threat level: Hackers are increasingly targeting open-source software as a way to compromise not just one company, but potentially millions of systems at once.
- A recent wave of supply-chain attacks involved hackers injecting malicious code into new versions of open-source libraries.
- "AI agents are making dependency decisions at a scale and speed no security team can manually review," Dan Lorenc, CEO and co-founder of Chainguard, said in a statement. "As organizations adopt agentic development, the biggest blocker is no longer how fast code can be generated — it's whether that code can be trusted."
Between the lines: Cursor — along with Anthropic's Claude Code and OpenAI's Codex — has unlocked a cornucopia of vibe-coded software.
- But that code still relies on open-source packages that can contain vulnerabilities or be compromised by attackers.
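One defense that does work at machine speed is integrity pinning: a package is accepted only if its bytes hash to the value recorded in a lockfile at install time, so a tampered release fails the check automatically. A minimal Python sketch, assuming npm's Subresource Integrity string format ("sha512-" followed by a base64 digest):

```python
import base64
import hashlib


def verify_integrity(data: bytes, integrity: str) -> bool:
    """Check package bytes against a lockfile-style integrity string.

    `integrity` follows npm's Subresource Integrity format:
    "<algo>-<base64 digest>", e.g. "sha512-...". Returns True only
    when the recomputed digest matches the pinned one.
    """
    algo, _, expected = integrity.partition("-")
    digest = hashlib.new(algo, data).digest()
    return base64.b64encode(digest).decode() == expected


# Example with made-up tarball bytes: the pin is computed once at
# install time, then any later modification fails verification.
blob = b"fake-tarball-bytes"
pinned = "sha512-" + base64.b64encode(hashlib.sha512(blob).digest()).decode()
print(verify_integrity(blob, pinned))                 # True
print(verify_integrity(blob + b"tampered", pinned))   # False
```

Pinning only catches changes after the pin is recorded; it can't flag a package that was malicious from its first release, which is where curated sources like Chainguard's come in.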
The bottom line: Security experts expect AI to eventually help find and fix vulnerabilities faster. But for now, companies are scrambling to make sure it doesn't introduce new ones.
4. Altman's iris-scanning tech lands biz deals
A company co-founded by OpenAI's Sam Altman and known for its iris-scanning orbs announced new and expanded integrations Friday with companies including Zoom, DocuSign, Tinder, Okta, Shopify and VanEck as it looks to grow its user base.
Why it matters: World, formerly known as Worldcoin, has struggled to convince everyday internet users to sign up for its identity verification system.
- But as AI agents proliferate, companies are increasingly looking for ways to verify not just who users are, but whether a real human is behind an online interaction at all.
Driving the news: World upgraded the protocol behind its identity tool, World ID, and is open-sourcing it so any app can integrate it as an authentication layer.
- The company is also launching a standalone World ID app, where users can store credentials and use them to log in to other services.
Between the lines: The announcement bundles together a range of previously introduced ideas — from AI agent verification tools to non-biometric sign-in options — as World tries to push its technology into more mainstream use.
How it works: World ID is designed to function more like a CAPTCHA replacement than a traditional identity system, said Tiago Sada, chief product officer at Tools for Humanity, which develops World.
- The protocol has three tiers for how users can validate their identities: taking a selfie, submitting an official government-issued ID, or visiting an orb in person for an iris scan.
Zoom in: World is now leaning on partnerships to drive adoption.
- Zoom plans to integrate World ID to help verify participants on video calls and guard against deepfake impersonation.
- DocuSign is testing World ID as a way to confirm that a real human — not a bot or compromised account — is behind a digital signature.
- Okta and Vercel are working with World on tools to verify that a real human approved certain actions taken by AI systems.
- Tinder is expanding a previous pilot in Japan to the U.S., allowing users to verify that a real person is behind a profile.
5. Former incident responder pleads guilty
A Florida-based man formerly employed as a ransomware negotiator pleaded guilty yesterday to conspiring with a Russian ransomware gang to leak information about his clients.
The big picture: The guilty plea is one of the final acts in a bizarre case in which a group of legitimate ransomware negotiators — hired by companies to negotiate extortion payments — also worked with a cybercriminal gang to carry out attacks.
Catch up quick: In February, the U.S. Department of Justice charged Angelo Martino with conducting at least 10 ransomware attacks against organizations between 2023 and 2025.
- Martino worked with the notorious Black Cat ransomware gang, according to prosecutors, and leaked information he acquired as a negotiator to help hackers extort more money from the victims.
Zoom in: Martino pleaded guilty yesterday to one count of "conspiracy to obstruct, delay or affect commerce or the movement of any article or commodity in commerce by extortion."
- He faces a maximum penalty of 20 years in prison.
- His co-conspirators — Ryan Goldberg of Georgia and Kevin Martin of Texas — submitted guilty pleas in December. They're scheduled for sentencing on April 30.
The intrigue: Law enforcement has seized $10 million worth of assets that prosecutors say Martino acquired with the proceeds of his crimes, including cryptocurrencies, vehicles, a food truck and a luxury fishing boat.
What's next: Martino is scheduled to be sentenced July 9.
6. Catch up quick
@ D.C.
🤖 The National Security Agency has had early access to Anthropic's Mythos Preview despite the Pentagon's blacklist. (Axios)
✍🏻 National cyber director Sean Cairncross said President Trump is expected to sign more cybersecurity executive orders "relatively soon." (Semafor)
🏛️ The Senate cleared a short-term extension of Section 702 of the Foreign Intelligence Surveillance Act to April 30 as lawmakers address warrant requirements and privacy concerns. (Axios)
@ Industry
👀 The National Institute of Standards and Technology will no longer provide scores and other enrichment details about every vulnerability submitted to its database. (Dark Reading)
💰 Cybersecurity startup Artemis raised a $70 million Series A round led by Felicis. (Fortune)
🧳 Entry-level cyber workers are getting left behind as hiring managers seek out more experienced professionals to oversee AI-powered security programs. (Wall Street Journal)
@ Hackers and hacks
🇮🇷 Iranian hackers are still pursuing new espionage campaigns targeting government officials in the U.S. and Israel. (New York Times)
💔 Lovable, a fast-growing AI coding company, denied reports of an internal data leak, arguing that the ability to see the prompts and source code of some users' projects was the result of "intentional behavior." (The Register)
🧑🏻⚖️ A Tennessee man who pleaded guilty to hacking the U.S. Supreme Court and other agencies was sentenced to a year of probation. (TechCrunch)
7. 1 fun thing
🐈⬛ 🌳 10/10 recommend taking your cat for a walk outside to see some new birds — even if they appear as unimpressed as my girl, Lola.
- She had fun! I think!
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity







