Microsoft injects AI agents into security tools

Illustration: Aïda Amer/Axios
Microsoft said Monday it will soon roll out 11 new AI agents for its security-focused Copilot aimed at offloading some of the most repetitive tasks that bog down cybersecurity teams.
Why it matters: Microsoft is the latest major vendor to embed autonomous AI agents directly into its security suite in an effort to reduce burnout for cyber pros and boost efficiency through AI-powered automation.
The big picture: Security professionals have long hoped that AI could help close the cybersecurity workforce gap and ease analyst burnout.
- The U.S. has only enough cyber professionals to fill 83% of available cyber jobs, according to federal data.
- Security teams spend about three hours a day just responding to alerts, with some teams seeing more than 4,400 alerts daily, according to research from Vectra AI.
- While many legacy cybersecurity vendors have released AI copilots or assistants, only a small group have rolled out agents that can take autonomous action.
Zoom in: Starting next month, Microsoft will make six of its own new agents and five agents from partner companies available for preview in Security Copilot — which is already integrated into all of Microsoft's security tools.
- Each agent focuses on a different task: One specifically combs through potential phishing emails. Another can craft notification letters to send to different regulators after a data breach.
- Customers can configure each agent's level of access and autonomy, including whether the agent acts under its own identity (with a unique username and password) or as an extension of a human account.
- Each agent also provides a map of its reasoning so human users can review its decisions and override or correct its choices.
Case in point: If an agent wrongly flags a training email as phishing, the security team can label it a false positive and instruct the agent not to flag messages from that vendor again.
Between the lines: Microsoft says the new agents are a direct response to customer feedback.
- Agents are "an inflection point for us," Vasu Jakkal, corporate VP of security at Microsoft, told Axios at a media preview event on Thursday. "Copilot was more like question-answer, and [customers] always asked us, 'Well, we would like it to one-click and get that done.'"
- Microsoft first made Security Copilot widely available last year, and Jakkal said customers quickly began asking for more autonomous functionality.
- Partners rolling out agents in Copilot include OneTrust, Aviatrix, BlueVoyant, Tanium and Fletch.
What they're saying: "There's just opportunity everywhere," Dorothy Li, corporate VP of Microsoft Security Copilot, told Axios.
- "These are the [tasks] that had the highest amount of pain, the most volume, and where agents can make the most impact today, and that's where we chose to start."
- Microsoft also anticipates that it will roll out more security agents in the near future, Li added.
The intrigue: Microsoft also relied on an internal generative AI red team to pressure test the new agents for potential security risks.
- The red team worked closely with product teams throughout the entire development lifecycle, said Victoria Westerhoff, director of AI safety and security red teaming at Microsoft.
Go deeper: Malware's AI time bomb
