Shadow AI creates new headaches for company IT teams

Illustration: Aïda Amer/Axios
Company IT teams know employees are using AI tools without approval — and they're racing to protect their networks.
Why it matters: Cybersecurity vendors are making shadow AI a priority this year, rolling out new tools to tackle a problem that's surprisingly difficult to mitigate.
Driving the news: The rise of China-based DeepSeek has sparked fresh concerns over data privacy and security at U.S. companies, with security executives warning that employees could download the app and feed corporate data into its open-source model.
- Both the Pentagon and the U.S. Navy have banned DeepSeek, citing "potential security and ethical concerns."
The big picture: Employees using unauthorized AI tools at work isn't new, but now the phenomenon has a name — shadow AI.
- It's the latest iteration of long-standing shadow IT problems, where employees bypass official channels to use unapproved tech.
- Examples include staff using DeepSeek's free version to compile internal memos and developers turning to ChatGPT for coding help — without IT's knowledge or oversight.
By the numbers: Companies typically have 67 generative AI tools running across their systems, but 90% lack proper licensing or approval, according to cybersecurity firm Prompt Security.
- The firm also found that 65% of employees using ChatGPT rely on its free tier, where data can be used to train models — raising concerns about corporate information leakage.
Between the lines: Businesses worry that employees could input sensitive data into AI tools, which could then be absorbed into training datasets.
- There's also concern that large language models might expose restricted information to employees who wouldn't typically have access.
- DeepSeek presents an additional risk: According to its privacy policy, queries are processed on servers in China — putting data under the jurisdiction of Chinese laws, a long-standing concern for U.S. security experts.
Yes, but: Banning AI tools outright hasn't worked, forcing security teams to focus on governing AI use instead.
- Many are investing in tools that apply guardrails — preventing data leaks and controlling inputs — rather than blocking access altogether.
What they're saying: "We preached for more than 12 months to CISOs," Prompt Security CEO Itamar Golan told Axios. "You think your employees are using mostly ChatGPT, Gemini, or Microsoft Copilot. But we detect thousands of tools, including many from countries with no guarantees around legal and data privacy."
The intrigue: The fight against shadow AI mirrors past efforts to control shadow IT and unauthorized cloud applications, Shannon Murphy, senior manager of global security and risk strategy at Trend Micro, told Axios.
- "AI applications are really just cloud applications," Murphy said. "Tools already exist to monitor usage and assess risk."
- But Daniel Kendzior, global data and AI security practice lead at Accenture, warned that AI security challenges will extend to mobile devices this year, as employees increasingly use AI apps on their phones.
- "It requires fundamentally different tools and a new approach to sourcing and [data] provenance because the landscape has changed dramatically in just 18 months," Kendzior told Axios.
Zoom in: Cisco is making a big bet on AI security. This month, it launched a suite of AI-driven security tools aimed at tackling shadow AI — signaling a broader 2025 trend in enterprise security.
- "We have an amazing amount of visibility, but we can also enforce controls," DJ Sampath, vice president of product and AI software at Cisco, told Axios. "If a model tries to access the internet or an API, we can lock it down."
- Cisco also introduced new algorithmic red-teaming tools to test corporate AI models for security flaws.
What we're watching: The rise of AI agents and the continued adoption of China-linked AI models are likely to make shadow AI an even bigger headache for IT teams this year.
Go deeper: DeepSeek's models are easier to manipulate than U.S. counterparts, research finds
