AI security needs a rework, hacker group says

A prominent group of hackers warns that without a fundamental overhaul of current security practices, AI vulnerabilities will continue to pose serious risks.
Why it matters: Well-intentioned hackers say it's still too easy to probe AI systems and tools — and if they can get in, imagine what the bad guys can do.
Driving the news: Organizers of the DEF CON hacker conference released their first "Hackers' Almanack" last week, detailing key takeaways and findings from the summer's annual hacker gathering.
- The report, published in partnership with the Cyber Policy Initiative at the University of Chicago, comes as top AI executives, heads of state, academics and nonprofit leaders gather in Paris this week to discuss a range of AI safety and security topics.
Zoom in: Governments around the world have been calling for AI companies to lean on red teaming — where ethical hackers try to break into a system so the organization can find and fix its weaknesses — to improve AI security and safety.
- But that approach doesn't account for the "unknown unknowns" that AI model operators are constantly looking for, Sven Cattell, an organizer of DEF CON's AI Village, wrote in the Almanack.
- Unlike flaws in traditional software, AI vulnerabilities emerge unpredictably, making one-off red-teaming exercises insufficient.
- Instead, DEF CON organizers argue that AI security should follow the model of traditional cybersecurity, where stakeholders come together to systematically track and address flaws through something like the Common Vulnerabilities and Exposures (CVE) system. The CVE program, run by the nonprofit research organization MITRE, assigns each publicly disclosed flaw a unique identifier so it can be tracked and fixed across the industry.
- "The goal of AI security is not to make it impossible to break a system, but to make any such break expensive and short lived," Cattell wrote.
The big picture: DEF CON's call for change comes as tech companies and the Trump administration move away from prioritizing AI safety in policy discussions.
