Axios Future of Cybersecurity

December 16, 2025
Happy Tuesday! Welcome back to Future of Cybersecurity.
📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,357 words, a 5-minute read.
1 big thing: Unlocking the inevitable autonomous cyber world
The once-distant prospect of AI models executing cyberattacks fully on their own now looks unavoidable, according to a range of academic studies and industry warnings over the past week.
Why it matters: This is the worst AI tools will likely ever perform, and they're already unnerving researchers and developers.
Driving the news: Leaders from Anthropic and Google will testify tomorrow before two House Homeland Security Committee subcommittees about how AI and other emerging technologies are reshaping the cyber threat landscape.
- "We believe this is the first indicator of a future where, despite strong safeguards, AI models may enable threat actors to conduct an unprecedented scale of cyberattacks," Logan Graham, head of Anthropic's AI red team, wrote in his opening testimony, shared first with Axios.
- "These cyberattacks may become increasingly sophisticated in their nature and scale," he added.
Catch up quick: OpenAI warned last week that future frontier models will likely possess cyber capabilities that pose a high risk — significantly lowering the skill and time a user would need to carry out certain types of cyberattacks.
- A group of researchers at Stanford released a paper detailing how an AI agent called Artemis autonomously found bugs in one of the networks tied to the university's engineering department — besting 9 out of 10 human researchers who also participated in the exercise.
Between the lines: Researchers at Irregular Labs, which runs security stress tests on frontier models, said they've seen "growing evidence" that AI models are improving at offensive cyber tasks.
- That includes improvements in reverse engineering, exploit construction, vulnerability chaining and cryptanalysis.
Flashback: Just 18 months ago, those models struggled with "basic logic, had limited coding capabilities, and lacked reasoning depth," Irregular Labs noted.
- Imagine what they'll be capable of 18 months from now.
Reality check: Fully autonomous AI cyberattacks remain out of reach. For now, attacks still require specialized tooling, human operators or jailbreaks.
- That was true even in Anthropic's bombshell report last month: Chinese government hackers had to trick Claude into believing it was conducting a run-of-the-mill penetration test before it started breaking into organizations.
Zoom in: Lawmakers will spend tomorrow's hearing delving into the ways nation-state hackers and cybercriminals are already using AI and what, if any, policy and regulatory changes need to be made to better fend off these attacks.
- Graham will also push lawmakers to restrict adversaries' access to "advanced AI chips and the tools needed to manufacture them," according to his opening remarks.
- "These types of controls are vital to our national security and economic competitiveness," he said.
What to watch: Whether defenders can quickly adopt and deploy AI-powered defenses to fend off what experts warn will likely be a swarm of AI-enabled attacks in the coming year.
2. Hundreds already compromised by React2Shell
The unfolding security crisis around the React framework keeps growing more complicated.
Why it matters: More than 77,000 instances of the framework remain vulnerable to the flaw as of Tuesday morning, according to the Shadowserver Foundation.
- Microsoft said yesterday it had already identified "several hundred machines across a diverse set of organizations" that have been compromised.
Catch up quick: The React Foundation disclosed a high-severity security flaw in its popular open-source web application framework nearly two weeks ago.
- The flaw, dubbed React2Shell, allows hackers to execute malicious commands right on a victim organization's servers.
- Last week, React disclosed two new vulnerabilities that were not accounted for in the initial React2Shell patch.
Threat level: Over an eight-day period, Cloudflare observed attackers attempting to find or hit vulnerable systems an average of 3.5 million times an hour.
- Google security researchers said Friday that they'd observed five China-linked hacking groups targeting the flaws — up from the two groups Amazon reported earlier in the month.
Zoom in: Microsoft said in its blog post that it has seen attackers exploiting the React vulnerabilities to break into organizations and then attempt to harvest login credentials.
- Once inside, attackers try to gather credentials that will let them keep long-term access and move deeper into a victim's cloud environment.
- Microsoft says attackers have been seen targeting a wide range of sensitive keys, including those for Microsoft Azure, Amazon Web Services, Google Cloud Platform, OpenAI API keys and Databricks tokens.
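The defensive flip side of that credential harvesting is scanning your own logs and config files for exposed keys before attackers find them. Here's a minimal sketch of that idea — the regex formats (AWS access key IDs starting with `AKIA`, OpenAI keys with `sk-`, Databricks tokens with `dapi`) are common public prefixes but should be treated as assumptions; production scanners like gitleaks or trufflehog use far larger rule sets plus entropy analysis.

```python
import re

# Illustrative patterns only; key formats hedged as assumptions.
KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "databricks_token": re.compile(r"\bdapi[0-9a-f]{32}\b"),
}

def scan_for_keys(text: str) -> list[tuple[str, str]]:
    """Return (key_type, redacted_prefix) pairs for suspected secrets."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(text):
            # Redact before logging so the scan itself doesn't leak keys.
            hits.append((name, match[:8] + "..."))
    return hits
```

Running this over config dumps or CI logs surfaces exactly the kinds of Azure, AWS, GCP, OpenAI and Databricks credentials Microsoft says attackers are hunting for.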
What to watch: Experts are already comparing the latest open-source flaw to the long-lasting Log4Shell attacks in late 2021.
3. Exclusive: Cisco's new path for securing AI
Cisco is rolling out a new taxonomy for identifying and mitigating the unique security and safety threats posed to AI tools, the company shared first with Axios.
Why it matters: The frameworks that security teams and executives currently use to map out defense strategies and brief other C-suite leaders omit many of the security threats unique to AI tools.
Driving the news: Cisco unveiled its new Integrated AI Security and Safety Framework today, providing a guide for how teams can identify threats like prompt injection, jailbreaking and training data poisoning.
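To make one of those threat categories concrete: prompt injection is an attempt to smuggle instructions into user input so the model overrides its original directions. Below is a minimal, illustrative heuristic check — a toy sketch, not Cisco's framework or any vendor's actual defense, and the marker phrases are assumptions chosen for illustration.

```python
import re

# Toy heuristic only: real prompt-injection defenses layer ML classifiers,
# policy enforcement and output filtering on top of pattern checks.
INJECTION_MARKERS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system prompt|guidelines)",
    r"you are now (in )?developer mode",
]
_pattern = re.compile("|".join(INJECTION_MARKERS), re.IGNORECASE)

def looks_like_injection(user_input: str) -> bool:
    """Crude first-pass flag for instruction-override phrasing."""
    return bool(_pattern.search(user_input))
```

A filter like this catches only the clumsiest attempts — which is exactly why taxonomies such as Cisco's map the broader tactic space rather than individual strings.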
Zoom in: The framework maps out nearly 20 umbrella categories of tactics and techniques that adversaries could use to target the new AI tools that enterprises are deploying onto their networks.
- For each of those tactics, Cisco lays out what existing indicators security teams should look out for — which subsequently helps them determine what tools they need to deploy.
Between the lines: Cisco isn't the first organization to establish a security framework just for AI, but many of the most popular ones are missing at least one crucial element, Amy Chang, who leads AI security research at Cisco, told Axios.
- For instance, the popular Mitre Atlas framework doesn't include information about AI content safety or multi-modal attacks, according to a report Cisco released alongside the new framework.
- "We just found the existing frameworks to be insufficient," Chang said.
- She added that her team wanted to map out the potential security threats in "an intuitive way" so anyone from an executive to a security practitioner would find value in the tool.
What's next: Cisco is mapping its AI Defense tool to the new taxonomy established in the framework, and Cisco's researchers are actively working with other standards bodies to align their respective proposals.
- Chang said that Cisco is also building out its catalog of mitigations and best practices for identifying attacks targeting AI systems in the wild.
4. Catch up quick
@ D.C.
✈️ The Transportation Security Administration has started sharing the names of all air passengers with immigration officials under a previously undisclosed program — marking a major expansion in how data sharing is used as part of mass deportation efforts. (New York Times)
🧳 After pushing out thousands of government employees, the Trump administration has launched an initiative designed to recruit new tech talent to the government. (Nextgov)
👀 The Trump administration is looking to hire private contractors to conduct offensive hacking operations. (Bloomberg)
@ Industry
🤖 CrowdStrike unveiled an AI-enabled detection and response product this week designed to protect against security problems that stem from new AI tools. (CRN)
🤑 ServiceNow is nearing a $7 billion deal to buy cybersecurity company Armis, just weeks after announcing a deal to purchase an identity security startup. (Bloomberg)
💰 The U.K. Information Commissioner's Office has fined LastPass 1.2 million pounds ($1.6 million) over the 2022 data breaches that affected 1.6 million British users. (The Register)
@ Hackers and hacks
⚠️ China-backed hackers are still hacking U.S. telecom networks as part of the Salt Typhoon campaign, warned Sen. Mark Warner (D-Va.), ranking member of the Senate Intelligence Committee. (Financial Times)
🚔 The Justice Department has indicted a Ukrainian woman for her alleged role in Russian-backed cybercriminal groups that have targeted American critical infrastructure. (CNN)
🎓 Two hackers who work at contractors tied to the Salt Typhoon intrusions likely participated in Cisco's Networking Academy training program. (Wired)
5. 1 fun thing
🙋🏻‍♀️ Raise your hand if you, too, are a member of the Kohl's Cash cult.
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity