Workers are spilling secrets to chatbots

Illustration: Allie Carl/Axios
Sensitive corporate data appeared in more than 4% of generative AI prompts and over 20% of uploaded files in the second quarter of this year, according to new research from Harmonic Security released Thursday.
The big picture: The problem isn't new, but as workplace genAI use grows, many employers still lack AI policies or don't enforce the ones they have, leaving employees to use chatbots in secret or without proper training.
By the numbers: Harmonic Security sampled a million prompts and 20,000 files submitted to 300 genAI tools and AI-enabled SaaS applications between April and June.
- 43,700 of the prompts (4.4%) and 4,400 of the uploaded files (22%) contained sensitive information; the quick check below confirms the arithmetic.
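For readers who want to verify the rates, here's a minimal sketch using the sample sizes Harmonic reports. The variable names are ours, not Harmonic's.

```python
# Back-of-the-envelope check of the reported rates, using
# the sample sizes and counts cited above.
total_prompts = 1_000_000   # prompts sampled, April-June
total_files = 20_000        # uploaded files sampled
sensitive_prompts = 43_700
sensitive_files = 4_400

prompt_rate = sensitive_prompts / total_prompts * 100
file_rate = sensitive_files / total_files * 100

print(f"Sensitive prompts: {prompt_rate:.1f}%")  # 4.4%
print(f"Sensitive files:   {file_rate:.1f}%")    # 22.0%
```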
Between the lines: Personal and free chatbot accounts make up a large share of corporate data exposure.
- Nearly half (47.42%) of sensitive uploads to Perplexity were from users with standard (non-enterprise) accounts.
- About a quarter of the prompts containing sensitive information came through the free version of ChatGPT, and another 15% were submitted via free Google Gemini accounts.
Zoom in: Overall, including free and paid tiers, ChatGPT was by far the biggest source of prompt-based information exposure, followed by Microsoft Copilot and Google Gemini.
Code was the most common type of sensitive data sent to chatbots.
- Harmonic says code was "especially prevalent in ChatGPT, Claude, DeepSeek and Baidu Chat."
- The share of prompts containing proprietary code was disproportionately high in Claude, which is often regarded as the best AI tool for coders.
- Sensitive prompts to ChatGPT involved M&A planning, financial modeling, and investor communications.
The intrigue: Tools that feel safe — like document editors or design platforms — may now include genAI features trained on user data, creating exposure risk that bypasses traditional controls, Harmonic says.
- Harmonic found that Canva, Replit, Grammarly, and other tools with LLMs embedded inside them were used for legal strategy, internal emails, client data, and code.
- Corporate systems often failed to flag these tools as AI at all, a gap the toy sketch below illustrates.
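To make that gap concrete, here is a toy illustration (ours, not Harmonic's tooling) of the kind of domain-based check many traditional controls rely on. The domain list and function name are illustrative assumptions.

```python
# Toy illustration: a naive control that flags traffic to known AI
# chatbot domains misses genAI features embedded in tools that look
# like ordinary SaaS applications.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def is_flagged_as_ai(destination: str) -> bool:
    """Return True if the destination matches a known AI tool domain."""
    return destination in KNOWN_AI_DOMAINS

# A prompt sent straight to ChatGPT gets flagged...
print(is_flagged_as_ai("chat.openai.com"))    # True
# ...but the same text pasted into an LLM-backed editor does not.
print(is_flagged_as_ai("app.grammarly.com"))  # False
```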
The fine print: The organizations in Harmonic's sample have all deployed the company's data-security tools, meaning exposure at companies without such safeguards could be even higher.
