Axios Future of Cybersecurity

May 05, 2026
Happy Tuesday! Welcome back to Future of Cybersecurity.
📬 Have thoughts, feedback or scoops to share? [email protected].
🇨🇦 Heading to Web Summit Vancouver next week? Join Axios and IBM at an evening reception with senior tech leaders, innovators and partners on Monday. Register here.
Today's newsletter is 1,572 words, a 6-minute read.
1 big thing: Trumpworld shifts on AI security after Mythos
President Trump campaigned on repealing Biden-era AI safety and security rules, dismissing them as burdensome, innovation-killing "red tape."
Why it matters: After Mythos, that hard line has softened — suggesting the White House may embrace some of the guardrails it once opposed as AI systems become more capable and potentially more dangerous.
Driving the news: David Sacks, special adviser to Trump, told Politico on Friday that holding back the public release of Anthropic's Mythos Preview model was the "right decision."
- "I would not want the hackers to have access to it before the defenders," he said.
State of play: The Office of the National Cyber Director (ONCD) has been holding meetings with tech and cyber companies, as well as tech trade associations, over the last week about broader AI security issues, as my colleague Ashley Gold and I reported yesterday.
- During some of these meetings, the office has been discussing an AI security framework that was already in the works before Mythos, according to a source familiar with the matter.
- One of the items in that framework is requiring the Pentagon to lead red-teaming for AI deployments for federal, state and local governments, two sources told us.
- However, it's now unclear whether that framework will be updated to reflect advances from Mythos and OpenAI's GPT-5.5 model. The office is also said to be weighing whether to issue the framework as an executive order, a source added.
Catch up quick: The White House is also working on an update to the Biden administration's national security memo on the use of AI in national security agencies, Bloomberg reported last week.
- The New York Times reported yesterday that the administration is discussing an executive order that would establish a working group to run safety and security tests before new AI models are rolled out.
Flashback: This approach is a stark change from the "move fast, break things" mentality of early Trump 2.0.
- Trump ripped up the Biden administration's AI executive order on his first day back in office, as his advisers argued that many of the safety and security requirements would slow down innovation.
- The Biden executive order called for AI companies to submit the results of their internal security and safety tests for new models before they're released to the public.
What to watch: The Trump administration's approach may be sliding back toward the one it abandoned last year.
- What's not entirely clear is what the new focus on AI safety and security means for Anthropic: the administration wants to work with the company on Mythos but isn't currently prepared to lift the Pentagon's supply-chain risk designation.
- The White House is looking at executive actions that would allow federal agencies to sidestep the designation so they can use Mythos, as Axios reported last week.
- However, the current two-track approach — one at ONCD focused broadly on AI security issues and another focused on reconciling the supply-chain risk designation — may be converging, sources said.
2. AI models are catching up to Mythos
New research suggests that OpenAI's GPT-5.5 model — aka Spud — is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview.
Why it matters: The head start that cyber defenders were promised when Mythos was unveiled last month is disappearing faster than expected.
Driving the news: The U.K. AI Security Institute said Thursday that GPT-5.5 was able to complete a 32-step simulated corporate cyberattack in 2 out of 10 test runs. Mythos did the same in 3 out of 10 runs.
- Before Mythos, no AI model had ever successfully completed that test.
- GPT-5.5 also outperformed Mythos on a range of capture-the-flag tasks that test how well a model can find vulnerabilities, reverse-engineer software and exploit web-based applications.
Between the lines: When Mythos was announced, Anthropic estimated it would be another six to 18 months before another AI company released a model with similar cyber capabilities.
- Now, that assumption is being tested, calling into question how much time government officials, critical infrastructure operators and cybersecurity companies have to beef up their defenses.
Yes, but: The powerful cyber capabilities of both Mythos and GPT-5.5 aren't available to everyone.
- Anthropic has given access to Mythos to only around 40 organizations, including the 12 members of its information-sharing partnership Project Glasswing.
- OpenAI has placed strict guardrails on the public versions of its models and is only giving access to models with fewer guardrails to vetted cyber defenders through its Trusted Access program.
What to watch: Last week, the Wall Street Journal reported that the White House had urged Anthropic not to broaden access to Mythos over national security concerns.
- Meanwhile, OpenAI has been helping federal agencies, state and local governments, and international allies sign up for its program that gives cyber defenders access to versions of GPT-5.4 and 5.5 with fewer cyber guardrails.
3. ICYMI: Army maps out AI cyber defenses
The Army brought in more than a dozen technology and cybersecurity companies last week to advise it on where to invest in automated cyber defenses.
Why it matters: AI advances are forcing military leaders to rethink their defenses, and they can't afford to move at a typical bureaucratic pace.
Driving the news: The Army hosted its second AI tabletop exercise last week with C-suite leaders from Wiz, Amazon Web Services, Darktrace Federal, Google, OpenAI, CrowdStrike, SentinelOne, Booz Allen Hamilton, Palo Alto Networks, Veria Labs, Mattermost and Microsoft, officials told reporters Wednesday.
- Two additional industry participants asked not to be named. Government representatives from U.S. Cyber Command and Army Cyber Command, as well as other Pentagon leaders, also attended.
- Participants walked through a hypothetical Indo-Pacific crisis, designed and conducted by the Special Competitive Studies Project, examining how AI agents could help fend off continuous cyberattacks, said Brandon Pugh, principal cyber adviser to the Army secretary.
- Much of the discussion focused on how newer AI models are accelerating vulnerability discovery and how those same tools could be used to defend Army systems.
- Following the exercise, the Army plans to begin fielding and testing "two potential units" of agentic AI tools.
What they're saying: "We don't have the luxury of sitting around or having long acquisition pipelines," Pugh told reporters. "We need these capabilities now and we don't need to start from scratch."
- "We can leverage what exists in industry and perhaps pivot and fine-tune it for the Army's specific needs."
The big picture: The exercise comes as Washington scrambles to prepare for a wave of AI-driven cyberattacks following the release of Anthropic's advanced Mythos Preview model and OpenAI's GPT-5.4-Cyber last month.
- In part to allow wider government access to Mythos, the White House is working on a plan to sidestep the Pentagon's decision to designate Anthropic a supply-chain risk.
- OpenAI and Anthropic have both been briefing lawmakers on the cyber implications of their latest cyber-capable models.
Between the lines: The Army used the exercise to better understand how AI could secure its networks, not how it might be deployed offensively on the battlefield, officials said.
- "We are in the very nascent stage of figuring out how to defend the AI we're using," Lt. Gen. Christopher Eubank, commanding general of Army Cyber Command, told reporters. "What we thought was probably '12 to 18 months out' has arrived today when it comes to AI and agentic AI."
Zoom in: Officials added that the exercise explored deception tactics — including how to lure and manipulate adversarial AI agents — and how those techniques could pair with automated defenses.
- The group also discussed which processes could eventually be fully automated.
- "It's not just about augmenting the human," Eubank said. "If it is, then we're going to be more behind than we believe."
What's next: Army Cyber Command plans to use its internal testing lab to rapidly pilot new AI tools — potentially in 30- to 90-day cycles — before moving them into formal procurement.
4. Catch up quick
@ D.C.
✍🏻 The U.S. Center for AI Standards and Innovation has signed deals with Google DeepMind, Microsoft and xAI to test their models pre-deployment. (Axios)
⏰ The Cybersecurity and Infrastructure Security Agency is weighing a new policy that would require federal agencies to patch known exploited vulnerabilities within three days — down from the typical two- to three-week time frame — in light of new advanced AI models. (Reuters)
👀 Tom Parker, a security services lead at IBM, is a possible contender to lead CISA as the Department of Homeland Security eyes candidates with little to no government experience. (Nextgov)
@ Industry
🔒 OpenAI rolled out a new security mode that allows people to swap email logins for passkeys. (Axios)
🤖 CISA and the NSA, alongside partners in Australia, Canada, New Zealand and the United Kingdom, released new guidance on how companies can securely deploy agentic AI systems. (CISA)
@ Hackers and hacks
⚠️ Defenders are scrambling to ward off severe data compromises after attackers found a way to target a critical and effectively unpatched flaw in Linux systems. (Ars Technica)
🇨🇺 Chinese hackers breached Cuba's embassy in Washington and spied on communications between dozens of diplomats earlier this year. (Bloomberg)
5. 1 fun thing
🥸 Children across the U.K. are apparently outwitting online safety measures by inputting fake birthdays, using borrowed IDs and ... drawing fake mustaches on their faces.
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity