Exclusive: New open-source wrapper keeps AI data private

Illustration: Natalie Peeples/Axios
A group of former Apple and Databricks engineers is launching an open-source standard on Wednesday to keep company data private when using AI tools, according to a release shared exclusively with Axios.
Why it matters: There's currently no standard way to secure the data flowing through the new servers companies are setting up to run AI copilots and autonomous agents.
- That leaves the sensitive information users are sharing with their new AI tools vulnerable to leaks and hacks.
Zoom in: Confident Security, which emerged from stealth earlier this year and offers enterprise encryption for AI models, is releasing OpenPCC, the first open-source encryption standard that protects prompts, outputs, and logs sent to the AI copilots and agents now flooding into workplaces.
- Jonathan Mortensen, founder and CEO of Confident Security, compared the new standard to a wrapper for AI models — the models can keep operating in the same way, but OpenPCC puts a new layer on top that keeps the information secret from potential onlookers.
- Confident Security is releasing technical tools that developers can use to build the standard into AI systems, including software development kits, tools to verify data privacy, and a demo server showing how it works in practice.
How it works: OpenPCC encrypts prompt data before it is sent to AI models, so the private information inside can't be read by providers or intermediaries along the way.
- Today, most AI systems still process prompts in plaintext, meaning sensitive data can be stored, logged, or accessed by model providers. That has raised serious privacy concerns as companies feed AI tools trade secrets, regulated data, and other confidential material.
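The flow described above can be sketched in a few lines: the client encrypts a prompt before it leaves the device, the host only ever handles ciphertext, and decryption happens inside the trusted boundary. This is a conceptual illustration only, not the actual OpenPCC API, and the toy keystream below is not secure cryptography; a real implementation would use authenticated encryption such as AES-GCM plus hardware attestation.

```python
# Hypothetical sketch of the "encrypted wrapper" idea -- NOT the real
# OpenPCC API, and the keystream below is illustrative, not secure.
import hashlib
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key || nonce || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """XOR the plaintext with a fresh keystream; return (nonce, ciphertext)."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct


def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Reverse the XOR inside the trusted boundary."""
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(key, nonce, len(ciphertext))))


# Client side: a session key is negotiated with an attested server,
# and the prompt is encrypted before it is sent over the wire.
session_key = secrets.token_bytes(32)
nonce, wire_payload = encrypt(session_key, b"Q3 revenue forecast: confidential")

# Only inside the trusted boundary is the prompt decrypted for inference;
# the host's logs see ciphertext, never plaintext.
prompt = decrypt(session_key, nonce, wire_payload)
assert prompt == b"Q3 revenue forecast: confidential"
```

The key design point the standard addresses: because encryption happens on the client, the model operator never holds readable prompts or outputs, which is what distinguishes this from transport-only protection like TLS.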
Threat level: Without standardized encryption, security leaders are left to rely on manual workarounds, such as running models locally or redacting documents before use.
- "The bar has been raised, given the usefulness of AI and the sensitivity of the data you might be willing to give it," Mortensen told Axios.
Between the lines: Existing AI privacy standards are proprietary and work only on their vendors' own platforms.
- Apple's Private Cloud Compute only works for generative models running on iPhone, iPad and Mac.
- Meta has a similar encryption standard that secures AI tools running on WhatsApp.
The intrigue: OpenPCC will let any developer add encryption to their tools — similar to how HTTPS encrypts web traffic to keep it private.
Yes, but: Adopting OpenPCC right now requires companies to use specific hardware, like modern Intel chips and Nvidia GPUs, and to rework how their systems handle encryption.
- That can be a heavy technical lift for smaller teams or legacy environments.
What's next: Mortensen is in the early stages of standing up a separate foundation to oversee the OpenPCC project, following the governance model of other major open-source projects.
Editor's note: This story's explanation of how OpenPCC works in relation to existing security protocols has been corrected.
Go deeper: Hot new protocol glues together AI and apps
