Generative AI blitz hits cyber industry's biggest conference
- Sam Sabin, author of Axios Codebook

Illustration: Sarah Grillo/Axios
This year's RSA Conference has become a hot spot for AI security product announcements.
Driving the news: Companies big and small are rolling out new products this week that incorporate generative AI.
- But so far, most of the products have been fairly simple, with security firms opting simply to train their own large language models on their stores of threat intelligence and attack data.
What's happening: Google unveiled plans Monday to introduce Sec-PaLM, its own security-focused large language model, which will help defenders collect details about ongoing breaches and contextualize threat intelligence.
- SecurityScorecard, a company that rates internal security programs against those of other organizations, announced plans today to embed ChatGPT into its programs so customers can more easily find specific ratings data.
- Veracode and Recorded Future made similar announcements earlier this month, bringing generative AI into their own products: Veracode's tool will suggest fixes for flaws in code and open-source repositories, while Recorded Future trained GPT to help threat analysts better interpret security risks.
Zoom out: Ever since OpenAI's ChatGPT entered the scene last fall, companies have been scrambling to figure out how they, too, can profit from the latest tech craze.
The big picture: Until this week, cybersecurity firms had been slower than other industries to embed generative AI into their systems.
- The first big leap didn't come until late March, when Microsoft announced Security Copilot, a GPT-4-enabled bot that helps defenders pull in alerts, notifications and other information during incidents.
What they're saying: "It felt natural to us to start talking about Google and its approach to AI related to security at the largest security conference of the year," Eric Doerr, vice president of security engineering at Google Cloud, told Axios.
- "If we showed up and everyone else is talking about generative AI and we weren't, that would be very strange," he added.
Between the lines: Generative AI's impact on cybersecurity is likely to be much bigger than what we'll see at RSA throughout the week.
- Generative AI has the potential to enable security products to better detect advanced phishing attacks, proactively scan networks for suspicious activity, and automatically "fight back" against ongoing attacks, Avivah Litan, distinguished vice president analyst at Gartner, told Axios.
- Most current uses of AI in security are still reactive to threats rather than proactive, Litan added.
Yes, but: Gartner and other consulting firms recommend that companies hold off on using ChatGPT for code generation, code security scanning and secure code reviews, since large language models still struggle to write clean code and are prone to generating misinformation.
- "You have to treat an AI model as a new vector, so anything going in and out of the model directly needs special toolsets to scan for vulnerabilities," Litan said.
Be smart: Cybersecurity vendors aren't exempt from marketing hype cycles when new technology emerges.
- "If you buy these security products with AI in them, you have no visibility into what the tool is doing and if it's performing as advertised," Litan said.
- Those interested in new cyber AI products should ask vendors for specific examples and metrics to back up claims about how these tools will benefit them, she added.
Editor's note: This story has been corrected to show that Recorded Future trained GPT, not ChatGPT, for threat analysis.