Axios Future of Cybersecurity Thought Bubble

December 17, 2025
👋🏻 Surprise! I'm back with some takeaways from today's congressional hearing on AI and cybersecurity.
📬 Have scoops, thoughts or feedback? [email protected].
Today's newsletter is 477 words, a 2-minute read.
1 big thing: Congress wakes up to AI cyber threat
Members of Congress know AI is about to push U.S. cybersecurity off a cliff — they just don't know what to do to stop the fall.
Why it matters: AI companies and security researchers are already warning that models' capabilities are at an inflection point.
- For Congress, not moving fast enough could be the same as not moving at all.
Driving the news: Lawmakers from both parties came back to the same point during a highly anticipated hearing on cybersecurity and emerging technology today: They're still figuring out how regulating AI threats would even work.
- "This is such an exploratory exercise for so many of us that are not experts," said Rep. Josh Brecheen (R-Okla.), chair of the House Homeland Security Committee's oversight subcommittee.
Reality check: This is a pretty typical conundrum for Congress. Remember that federal data privacy law they've been trying to pass since the 2018 Cambridge Analytica scandal?
Yes, but: The stakes are arguably higher this time around and the actual threats are harder to predict.
- Not only can identities be stolen and copied via deepfakes, but adversaries could use AI to automate attacks on the electric grid, water utilities and hospital systems.
Between the lines: Amid intense competition with China and with each other, U.S. model makers are pushing increasingly powerful generative AI models out to the masses — including to adversaries who can weaponize those tools.
- The Trump administration wants minimal regulation and maximum velocity in AI development.
- Some in Congress are more skeptical, but they're not aligned on what the regulatory framework should look like.
Zoom in: Leaders from Anthropic, Google, Quantum Xchange and Seven Hill Ventures called on lawmakers to improve information-sharing channels between industry and federal agencies.
- Industry leaders will need "good and quick and sensitive channels" to share novel information about AI threats as they uncover them, Logan Graham, head of Anthropic's frontier red team, told lawmakers.
- In addition to alerting the government, companies need channels to proactively share information with one another, Graham added.
- Royal Hansen, Google's vice president for privacy, safety and security engineering, also pushed for industry-wide security standards for building and deploying AI responsibly.
What's next: Rep. Andy Ogles (R-Tenn.) teased that the Homeland Security Committee's cyber subcommittee, which he chairs, could form a working group with industry to tackle the effect of AI on the threat landscape.
- "This is that moment in time that we'll point to in this space: Did we heed the warning? Were we listening? Were we paying attention?" Ogles said. "You've got our attention."
☀️ See y'all Tuesday!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.