Security teams embrace agentic AI

Illustration: Annelise Capossela/Axios
Companies and their cybersecurity teams are leaning into the new agentic world, experts say.
Why it matters: Agentic AI can reduce workload and boost response times, but if it misfires, it could expose systems to serious threats.
The big picture: While chatbots respond to prompts, agentic AI goes a step further and takes approved actions based on its own findings.
- As with any technological evolution, getting security teams to adopt AI takes time and education.
- Building confidence in new AI-enabled security tools also comes with a unique threat: If an AI tool gets something wrong, it leaves an opening for spies and cybercriminals to break in.
Driving the news: Microsoft unveiled plans Monday to start previewing 11 new AI agents in Security Copilot next month.
- CrowdStrike added agentic AI to its security tools last month, and Trend Micro rolled out autonomous agents and its own AI brain to customers last year.
Flashback: Just two years ago, major corporations were blocking employees from even opening ChatGPT for fears of data leaks.
Yes, but: The tides have turned, and security is one of the clearest use cases for generative AI — especially since the industry has long had a dearth of available workers and faces high burnout rates.
- More than 70% of CISOs said in a survey last summer that their organizations are considered either "innovators," "early adopters" or "early majority" adopters of new AI technologies, which could be influencing their newfound trust in AI tools.
- Half of the CISOs in that same survey also said they have developed some AI use cases or were piloting potential new AI projects for their teams.
Between the lines: Many security teams just want agentic AI to help sort through the thousands of threat notifications they receive daily and determine which ones are legitimate threats to their organizations.
- When Microsoft customers first started playing around with its Security Copilot, they would stick to prescriptive use cases, like summarizing a recent incident, Dorothy Li, corporate VP of Microsoft Security Copilot, told Axios.
- As they've become more comfortable, some users now let Copilot automate as much of their workflow as possible, she added, which inspired Microsoft to bring autonomous agents into the mix.
- Many of those use cases involve responding to phishing alerts and notifications about vulnerabilities across the various tools in their stacks.
Zoom in: Last month, CrowdStrike added an agentic capability to its security-focused large language model that automatically triages notifications for customers' security operations teams.
- Once implemented, the new tool can eliminate more than 40 hours of manual work per week, CrowdStrike estimates.
- CrowdStrike tests its new agentic capabilities internally against its own analysts' findings to ensure the tools are accurate and don't take inappropriate actions before they're deployed.
- That testing is key to building trust with customers, who include security teams in major corporations, Elia Zaitsev, chief technology officer at CrowdStrike, told Axios.
- "Everything in the generative AI space, in particular, by pretty much every measurement I've seen, is being adopted quicker than any technology out there," Zaitsev said.
Reality check: A healthy dose of skepticism about AI's promise for security teams still remains, Zaitsev added.
- "People need to see those hard, quantifiable metrics," he said. "They need to see there's real ROI."
What we're watching: Now that companies are giving AI the green light in their systems, expect even more cyber vendors to make splashy announcements about their own agentic capabilities.
