Exclusive: OpenAI, Anthropic meet with House committee over advanced cyber models

House Homeland Security Chair Andrew Garbarino during a hearing in March. Photo: Tom Williams/CQ-Roll Call via Getty Images
OpenAI and Anthropic briefed House Homeland Security Committee staff on their new cyber-capable AI models and their implications for cybersecurity, Axios has learned.
Why it matters: This is one of the first briefings that lawmakers have had with the AI giants about the cyber threats posed by their new models, including to under-resourced critical infrastructure sectors.
State of play: Anthropic has held off on a public release of its Mythos Preview model due to its ability to quickly find and exploit critical security flaws.
- OpenAI decided on a tiered approach for releasing its GPT-5.4-Cyber model.
- Both companies are working with federal agencies to get them access to the models.
Driving the news: OpenAI and Anthropic briefed staffers on Thursday in two separate classified briefings, a committee aide told Axios.
- The aide described the briefings as "proactive engagement with these companies on recent frontier model developments," including their implications for critical infrastructure cybersecurity.
- The briefings also touched on a recent White House memo accusing China of "industrial-scale" campaigns to distill and copy American AI models, the aide said.
- OpenAI said the briefing was one of several the company held with Senate and House committees last week — in addition to a briefing with the White House.
- An Anthropic spokesperson said that the company regularly briefs "congressional staff on model capabilities and their national security implications," adding that last week's briefing was part of "that ongoing engagement."
The big picture: House Homeland Security Chair Andrew Garbarino (R-N.Y.) has been hosting ongoing private roundtables with tech and AI executives, according to the Washington Post, and has been discussing this work with Rep. Jay Obernolte (R-Calif.), who introduced a bill laying out a federal framework for AI this week.
- The committee has also held several hearings on the implications of generative AI models for national security, including nation-state cyberattacks.
What they're saying: "Productive partnerships between industry and government are essential to help us stay ahead of the evolving threat landscape, ensure the government is prepared to securely harness AI for its defensive capabilities, and support and protect American AI development as adversaries like China seek to gain an advantage by any means," Garbarino told Axios in a statement.
- Garbarino added that these engagements help the committee identify risks and make sure "Congress is asking the right questions."
Between the lines: Members of the committee said another briefing last week, on jailbroken AI models — which have been manipulated to bypass their built-in safety and security guardrails — gave them new urgency on regulating AI.
- Members were shown ways such tools could be used to carry out a school shooting or a bombing.
- "What I just saw in there, with just a short amount of time typing in questions, is very scary. These models are very powerful," Rep. August Pfluger (R-Texas) told reporters following the briefing.
- "We see how powerful it is, and it should be used for good, but guardrails need to be attached... Congress and the executive branch need to work with our industry partners to make sure that we keep kids safe."
The bottom line: Rep. Andy Ogles (R-Tenn.) said following that earlier briefing:
- "What's extraordinary about this presentation is how most of it is readily off the shelf and easy to access, and it increases the probability that the wrong person gets it... It's rather frightening, and it underscores the fact that AI is advancing so rapidly and Congress is light years behind."
Go deeper: New AI tools speed up known hacking tactics, early testers say

