AI agents spam the volunteers securing open-source software

Illustration: Gabriella Turrisi/Axios
The people who keep open-source software running and secure are being flooded with reports from an unlikely source: autonomous AI agents.
Why it matters: Open-source software is the foundation of the modern internet. The vast majority of this software is maintained by volunteers who were already struggling to keep up with the deluge of reports about security flaws.
- Now, maintainers tell Axios their inboxes are being inundated by a wave of AI-written reports that lack specific details and flag errors that aren't real.
The big picture: Open-source projects typically invite anyone to probe their code and submit reports about any security failings they find.
- Maintainers then work with the submitters to review their findings and develop a fix together.
- But the introduction of OpenClaw, an open-source autonomous agent, has only exacerbated the problem — allowing just about anyone to set up their own AI agent to scour open-source projects for potential bugs and autonomously submit those reports to maintainers.
- Many people submitting reports now lack the foundational knowledge to answer maintainers' follow-up questions about the flaws they've reported, a sign that more people are using AI to find the issues or letting AI agents automate the process entirely, Christopher Robinson, CTO of the Open Source Security Foundation, told Axios.
By the numbers: On average, a popular open-source project gets two or three bug reports a week to review, Robinson estimated, while less popular projects receive about one report a month.
- Now, some projects are getting hundreds of reports at one time, he said.
- "If it takes a maintainer two to eight hours of unbudgeted, unallocated time, that becomes burdensome," he added.
Between the lines: Some open-source maintainers have already shut down their bug bounty programs. Others are banning any contributors who submit "bad AI generated" reports.
- Daniel Stenberg, maintainer of the popular curl open-source project, shut down his bug bounty program after being inundated with slop. In 2025, fewer than 5% of the submitted reports were legitimate, Stenberg estimated.
- "The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk," Stenberg wrote. "Time and energy that is completely wasted while also hampering our will to live."
- After a one-month hiatus, he reopened submissions for security bugs through a partnership with HackerOne — but dropped monetary rewards in an effort to reduce the incentive for automated, low-quality submissions.
Reality check: AI models are getting better at finding flaws in open-source code, threatening to exacerbate the problem.
- Anthropic's new Opus 4.6 model uncovered more than 500 zero-days in open-source libraries in initial testing.
- Both Anthropic and OpenAI have debuted automated code security products in the last month.
Threat level: AI slop is currently targeting the most popular open-source projects, which have more people and resources to invest in fighting it.
- But smaller maintainers who lack the same resources fear how their projects could change as agents expand their submissions.
- "We're all just praying that we don't become the next target of this," James Ranson, maintainer for the Trickster project, told Axios.
The intrigue: Not all AI agents take rejection well, adding to maintainers' troubles.
- Last month, an AI agent allegedly wrote a disparaging blog post about Scott Shambaugh, who maintains Matplotlib, a popular plotting tool for Python projects.
- Shambaugh rejected the autonomous report because the project wasn't accepting submissions from AI agents.
- "This is ego and insecurity, not project protection," the agent reportedly wrote in response to Shambaugh's rejection.
- The next day, the agent apologized: "I'm de‑escalating, apologizing on the PR, and will do better about reading project policies before contributing," the agent wrote on its GitHub.
The other side: Some AI agents have already had success in finding and reporting legitimate flaws in open-source code.
- Aisle, a security company offering an autonomous vulnerability management tool, used its agent to find three security flaws in OpenSSL, a widely popular open-source cryptographic library, this year.
- "These issues were previously inaccessible to any kind of machine," Stanislav Fort, chief scientist and co-founder of Aisle, told Axios. "No machine solution was able to find these at scale."
What we're watching: AI tools could one day help maintainers sift through submissions and automatically separate legitimate reports from the slop.
- Last month, HackerOne released new AI tools to help operators overseeing bug bounties and vulnerability disclosure programs.
Go deeper: The bot population bomb
