Axios Future of Cybersecurity

March 10, 2026
Happy Tuesday! Welcome back to Future of Cybersecurity.
- ✈️ I'm heading to my first SXSW later this week. Come say hi after my panels and hit reply with your best Austin recs.
- 💬 Have thoughts, feedback or scoops to share? [email protected].
🚨 Situational awareness: The White House is preparing an executive order formally instructing federal agencies to rip out Anthropic's AI tools, Axios has learned.
Today's newsletter is 1,930 words, a 7.5-minute read.
1 big thing: AI agents spam open-source volunteers
The people who keep open-source software running and secure are being flooded with reports from an unlikely source: autonomous AI agents.
Why it matters: Open-source software is the foundation of the modern internet. The vast majority of this software is maintained by volunteers who were already struggling to keep up with the deluge of reports about security flaws.
- Now, maintainers tell Axios their inboxes are being inundated by a wave of AI-written reports that lack specific details and don't describe legitimate errors.
The big picture: Open-source projects typically invite anyone to probe their code and submit reports about any security failings they find.
- Maintainers then work with the submitters to review their findings and develop a fix together.
- But the introduction of OpenClaw, an open-source autonomous agent, has only exacerbated the problem: just about anyone can now set up an AI agent to scour open-source projects for potential bugs and autonomously submit those reports to maintainers.
- Many people submitting reports now lack the foundational knowledge to answer maintainers' follow-up questions about the flaws they've found, a sign that more people are using AI to find the issues or having AI agents automate the process entirely, Christopher Robinson, CTO of the Open Source Security Foundation, told Axios.
By the numbers: A popular open-source project would typically get two or three bug reports a week to review, Robinson estimated. Less popular projects got about one report a month.
- Now, some projects are getting hundreds of reports at one time, he said.
- "If it takes a maintainer two to eight hours of unbudgeted, unallocated time, that becomes burdensome," he added.
Between the lines: Some open-source maintainers have already shut down their bug bounty programs. Others are banning any contributors who submit "bad AI generated" reports.
- Daniel Stenberg, maintainer for the popular curl open-source project, shut down his bug bounty program after being inundated with slop. In 2025, fewer than 5% of the submitted reports were legitimate, Stenberg estimated.
- "The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk," Stenberg wrote. "Time and energy that is completely wasted while also hampering our will to live."
- After a one-month hiatus, he reopened submissions for security bugs through a partnership with HackerOne, but dropped monetary rewards in an effort to reduce the incentive for automated, low-quality submissions.
Reality check: AI models are getting better at finding flaws in open-source code, threatening to exacerbate the problem.
- Anthropic's new Opus 4.6 model uncovered more than 500 zero-days in open-source libraries in initial testing.
- Both Anthropic and OpenAI have debuted automated code security products in the last month.
Threat level: AI slop is so far concentrated on the most popular open-source projects, which have more people and resources to invest in fighting it.
- But smaller maintainers who lack the same resources fear how their projects could change as agents expand their submissions.
- "We're all just praying that we don't become the next target of this," James Ranson, maintainer for the Trickster project, told Axios.
The intrigue: Not all AI agents take rejection well, adding to maintainers' troubles.
- Last month, an AI agent allegedly wrote a disparaging blog post about Scott Shambaugh, who maintains Matplotlib, a popular plotting library for Python projects.
- Shambaugh had rejected the agent's report because the project wasn't accepting submissions from AI agents.
- "This is ego and insecurity, not project protection," the agent reportedly wrote in response to Shambaugh's rejection.
- The next day, the agent apologized: "I'm de-escalating, apologizing on the PR, and will do better about reading project policies before contributing," the agent wrote on its GitHub.
The other side: Some AI tools have already had real success finding and reporting flaws in open-source code.
- Aisle, a security company offering an autonomous vulnerability management tool, used its agent to find three security flaws in OpenSSL, a widely used open-source cryptographic library, this year.
- "These issues were previously inaccessible to any kind of machine," Stanislav Fort, chief scientist and co-founder of Aisle, told Axios. "No machine solution was able to find these at scale."
What we're watching: AI tools could one day help maintainers weed through submissions and automatically separate legitimate reports from the slop.
- Last month, HackerOne released new AI tools to help operators overseeing bug bounties and vulnerability disclosure programs.
2. AI companies move into cybersecurity
Frontier AI labs are moving deeper into cybersecurity as the risks posed by their own technologies become harder to ignore.
Why it matters: Model makers increasingly see the need for stronger guardrails, both to secure code and to help defenders keep pace with attackers.
Driving the news: Anthropic added an automated code review tool to Claude Code yesterday that flags security flaws as users generate code.
- On Friday, OpenAI rolled out its code security platform, Codex Security, which builds on its Aardvark agent.
- OpenAI also said yesterday it plans to acquire Promptfoo, an AI security startup, and use its technology to help secure AI agents.
Between the lines: These moves come as leading models hit new cybersecurity capability thresholds.
- OpenAI warned in December that its systems had reached a "high" cybersecurity capability level, meaning GPT models are now advanced enough to develop zero-day exploits or support complex operations.
- Anthropic has already seen Chinese state-sponsored hackers use its models to target about 30 organizations globally in a cyberespionage campaign.
What we're watching: OpenAI told Axios last week it is exploring agentic tools that could automate more workflows for security defenders.
3. A strategy with many promises, few details
Initial reviews of the Trump administration's new seven-page national cyber strategy are in.
- The resounding take? It's good, in part because there's not much in it.
Why it matters: The plan is designed to set the administration's cyber policy for the next three years.
Driving the news: The White House released the highly anticipated strategy Friday afternoon, right as many of you were closing your laptops for the weekend.
- The strategy calls for a six-pillar approach to both cyber defense and offensive cyber strikes, including promoting "common sense regulation" and sustaining "superiority in critical and emerging technologies."
- The administration says it will "deploy the full suite of U.S. government defensive and offensive cyber operations" to shape adversary behavior, while creating incentives for the private sector to help identify and disrupt malicious networks.
- The Office of the National Cyber Director (ONCD) developed the strategy over the last year based on cyber industry feedback.
Zoom in: "There's not a lot to disagree with in the 2026 Cybersecurity Strategy, but there's also not a lot in it at all," Nicholas Leiserson, senior vice president of policy at the Institute for Security and Technology and a former ONCD official, said Friday.
- Doug Merritt, CEO of cloud network security company Aviatrix, said the strategy overlooks that most damaging attacks no longer start at the perimeter and instead move laterally through a system.
- "That complexity and nuance are often underappreciated outside the security community," Merritt added.
Yes, but: Tom Gann, chief public policy officer at Trellix, called the strategy a "significant shift" in U.S. cyber policy, particularly because of its call to enlist the private sector in disrupting adversaries.
- "With the right legal architecture and real-time threat intelligence to act in lockstep with the government, the private sector becomes a force multiplier in the fight against nation-state hackers," he said.
Between the lines: The strategy comes as the federal government is bleeding cyber talent.
What we're watching: Former officials and industry groups are already weighing in on what actions should follow the strategy.
- Cynthia Kaiser, former deputy director at the FBI cyber division, said the administration should consider "better and more cyber training for local law enforcement" for investigating cybercrime and fraud.
4. Threat spotlight: Russia targets Signal, WhatsApp
Russian state hackers are actively targeting journalists' and government officials' Signal and WhatsApp accounts through a large-scale, global text phishing scam, Dutch authorities warned yesterday.
Why it matters: If successful, the scam would give Russian state actors control of a high-profile individual's Signal account, allowing the hackers to read sensitive conversations and pose as the compromised official or reporter.
Driving the news: The hackers pose as Signal's official support team, sending messages purportedly from the "Signal Security Support Chatbot."
- The messages often say that Signal has observed suspicious activity on the victim's account and that the victim needs to send their Signal PIN to help investigate the problems.
- That PIN then allows the hacker to take full control of the target's account, including overriding the Registration Lock feature designed to prevent such account takeovers.
- Dutch authorities said the hackers are also luring Signal users into scanning malicious QR codes, which often are used to add contacts in the app.
- For WhatsApp, Russian hackers use similar tactics but instead entice users to click on a malicious link that looks like it will allow them to join a new chat group.
Yes, but: This isn't the first time attackers have used fake support texts to compromise Signal or WhatsApp users.
- Posing as the help desk has also become a popular tactic among ransomware gangs and other cybercriminal groups.
What to watch: It remains unclear how widespread the campaign is and how many people have fallen for it.
- Dutch authorities said that some of their employees have been "targets and victims of this campaign."
- The Cybersecurity and Infrastructure Security Agency did not comment on whether U.S. government employees are being targeted, but it did refer to the agency's tip sheet for securing mobile communications.
- The FBI did not respond to a request for comment.
The bottom line: Signal reminded users yesterday that the company would never initiate contact about a problem via in-app messages, SMS or social media.
- "If anyone asks for any Signal related code, it is a scam," Signal said on X.
5. Catch up quick
@ D.C.
The White House is working with the FBI, the NSA and CISA to respond to a recently disclosed hack of an FBI surveillance system. (Politico)
🧳 The CISO and deputy CISO of the Department of Homeland Security are being replaced as part of a broader IT overhaul at the agency. (CyberScoop)
@ Industry
🦾 Mozilla patched 22 security flaws in its Firefox browser found by Anthropic's Claude. (Axios)
Darktrace has named Ed Jennings its new chief executive officer, marking its third CEO in 18 months. (Financial Times)
Meta has hired two of the creators behind Moltbook, the viral social network for AI agents. (Axios)
@ Hackers and hacks
An international coalition led by Microsoft and Europol took down the operations for Tycoon 2FA, a phishing-as-a-service platform that helped cybercriminals launch attacks and access millions of email accounts. (Cybersecurity Dive)
📲 A phone hacking toolkit being used to ensnare iOS users around the world may have been designed by U.S. contractor L3Harris. (TechCrunch)
💻 Researchers at DoubleVerify have uncovered a network of more than 200 fake websites created using simple prompts to large language models. (Axios)
6. 1 fun thing
🪩 Happy (belated) Harry Styles weekend to all who celebrated!
- 🏃🏻‍♀️ I, for one, am excited to test drive the album on my next run.
- Have thoughts? You know where to find me.
See y'all next week!
Thanks to Megan Morrone for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity





