Exclusive: Feds debut new plan for reporting AI security threats

Illustration: Sarah Grillo/Axios
The feds and top U.S. technology companies are unveiling a new plan today for reporting and sharing details about ongoing security threats targeting AI models, according to materials shared exclusively with Axios.
Why it matters: Security flaws found in AI systems affect more than just model makers.
- Any company running an AI model in their own applications could be hacked if they don't properly patch newly discovered flaws.
Driving the news: The Cybersecurity and Infrastructure Security Agency (CISA) is publishing a new playbook today outlining how companies can report and share details about ongoing security threats, including system vulnerabilities and active cyberattacks.
- The playbook is coming from the AI-focused arm of CISA's Joint Cyber Defense Collaborative (JCDC). Anthropic, Amazon Web Services, Google, Microsoft and OpenAI are among those who contributed to the playbook.
The big picture: Security flaws in AI systems could allow bad actors to poison models, steal confidential information and even control autonomous agents.
- "AI systems are evolving rapidly. There's no single entity that has all the information to manage AI-related risks," CISA director Jen Easterly told Axios. "This is an area where we have to work together and collaborate and share."
Zoom in: CISA's playbook, seen by Axios, includes two checklists for sharing new information: one for reporting details about ongoing attacks and another for newly discovered vulnerabilities.
- The playbook also includes directions for various scenarios, such as reporting suspicious behavior or sharing newly published reports about emerging threat actors.
- CISA and its partners designed the playbook to be a resource for security analysts, incident responders and other technical staff.
Catch up quick: Much of the playbook was inspired by feedback collected at two AI tabletop exercises that the JCDC hosted last year.
- Microsoft hosted the first one in June in Northern Virginia, as Axios previously reported.
- Scale AI hosted another in San Francisco last fall that simulated an AI security incident targeting the financial services sector.
Between the lines: As with all JCDC efforts, companies and government agencies participate in this level of threat intel sharing on a voluntary basis.
- Officials and executives who helped create the playbook told Axios that the project is the culmination of three-and-a-half years of building trust, so that participants feel safe sharing confidential information with one another.
- The tabletop exercises allowed participating companies "to look them in the eye and understand that they're going to use the information in a way that's consistent with their expectations," Eric Wenger, senior director for cyber and emerging tech policy at Cisco, which contributed to the playbook, told Axios.
Reality check: CISA and the JCDC's fate is unclear as the new Trump administration prepares to take office Monday.
- Republican Senate leaders have called for the agency's total elimination.
- Easterly and other top CISA officials are set to leave the agency on Monday as Trump is sworn in.
Yes, but: Alex Levinson, head of security at Scale AI, told Axios that the company plans to keep sharing intel with its JCDC partners and to assist the agency's new leadership on these issues — even if this specific program is dismantled.
- "Scale didn't join this because one political party or another put it forward," Levinson said. "If policy changes, if priorities change, I don't think this work stops."
- Easterly added that only a few senior-level officials are leaving CISA, but most of the agency's 3,400 federal employees will still be in their roles next week.
The bottom line: CISA and its private and public sector partners believe the new reporting playbook will help Americans "embrace fully the amazing potential of AI," Lisa Einstein, chief AI officer at CISA, told Axios.
- "Americans aren't going to accept these new technologies, and these companies' critical infrastructure are not going to accept the new technologies, if they can't trust that they're built with security in mind," she said.
