OpenAI weighs encryption for temporary chats
Sam Altman says OpenAI is strongly considering adding encryption to ChatGPT, likely starting with temporary chats.
Why it matters: Users are sharing sensitive data with ChatGPT, but those conversations lack the legal confidentiality of consultations with a doctor or lawyer.
- "We're, like, very serious about it," the OpenAI CEO said during a dinner with reporters last week. But, he added, "We don't have a timeline to ship something."
- An OpenAI spokesperson declined further comment.
How it works: Temporary chats don't appear in chat history and aren't used to train OpenAI's models, though OpenAI says it may keep a copy for up to 30 days for safety purposes.
- That makes temporary chats a likely first step for encryption.
- Temporary and deleted chats are currently subject to a May federal court order forcing OpenAI to retain their contents.
Yes, but: Encrypted messaging keeps providers from reading content because only the endpoints hold the keys. With a chatbot, the provider itself is an endpoint, which complicates true end-to-end encryption.
- In this case, OpenAI is a party to the conversation: encrypting the data in transit isn't enough to keep sensitive information off OpenAI's servers, where it remains available to share with law enforcement, as the sketch after this list illustrates.
- Apple has addressed this challenge, at least in part, with its "Private Cloud Compute" for Apple Intelligence, which allows queries to run on Apple servers without making the data broadly available to the company.
- Adding full encryption to all of ChatGPT would also pose complications, since many of its features, including long-term memory, require OpenAI to retain access to user data.
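To make the endpoint problem concrete, here is a toy Python sketch, not OpenAI's actual design: all names are hypothetical, and a one-time-pad XOR stands in for TLS. It shows why transport encryption alone doesn't help. The provider terminates the encrypted channel and must decrypt the prompt to run the model on it, so the plaintext still exists on its servers.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (one-time-pad XOR) standing in for TLS."""
    return bytes(b ^ k for b, k in zip(data, key))

# --- client side ---
prompt = b"my sensitive medical question"
session_key = os.urandom(len(prompt))          # key shared with the server, as in TLS
ciphertext = xor_cipher(prompt, session_key)   # data is encrypted "in transit"

# --- provider side ---
# The provider holds the session key because it is an endpoint of the channel.
plaintext_on_server = xor_cipher(ciphertext, session_key)
assert plaintext_on_server == prompt

# At this point the prompt exists in plaintext on the provider's servers,
# where it can be logged, retained under a court order, or subpoenaed.
print(plaintext_on_server.decode())
```

A genuinely end-to-end design would need decryption and inference to happen somewhere the provider can't read, e.g. on-device, or, as with Apple's Private Cloud Compute, on servers walled off from the company itself.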
The big picture: Altman and OpenAI have advocated for some protection from government access to certain data, especially when people are relying on ChatGPT for medical and legal advice — protections that apply when you speak to a licensed professional.
- "If you can get better versions of those [medical and legal chats] from an AI, you ought to be able to have the same protections for the same reason," Altman said, echoing comments he has recently made.
OpenAI hasn't yet seen a large number of demands for customer data from law enforcement.
- "The numbers are still very small for us, like double digits a year, but growing," he said. "It will only take one really big case for people to say, like, all right, we really do have to have a different approach here."
Between the lines: Altman said this issue wasn't originally on his radar but became a priority once he realized how people are using ChatGPT and how much sensitive data they are sharing.
- "People pour their heart out about their most sensitive medical issues or whatever to ChatGPT," Altman said. "It has radicalized me into thinking that AI privilege is a very important thing to pursue."
What to watch: Altman predicted that some form of protection will emerge, adding that lawmakers have been somewhat receptive and generally favor privacy protections.
- "I don't know how long it will take," he said. "I think society has got to evolve."
Go deeper: Generative AI's privacy problem
