Axios AI+

August 18, 2025
School starts today for our now-seventh grader. (Wish us luck.) Today's AI+ is 1,035 words, a 4-minute read.
1 big thing: OpenAI is "very serious" about encryption
Sam Altman says OpenAI is strongly considering adding encryption to ChatGPT, likely starting with temporary chats.
Why it matters: Users are sharing sensitive data with ChatGPT, but those conversations lack the legal confidentiality of consultations with a doctor or lawyer.
- "We're, like, very serious about it," the OpenAI CEO said during a dinner with reporters last week. But, he added, "We don't have a timeline to ship something."
- An OpenAI spokesperson declined further comment.
How it works: Temporary chats don't appear in a user's history and aren't used to train models, though OpenAI says it may keep a copy for up to 30 days for safety reasons.
- That makes temporary chats a likely first step for encryption.
- Temporary and deleted chats are currently subject to a May federal court order forcing OpenAI to retain their contents.
Yes, but: End-to-end encrypted messaging keeps providers from reading content because only the endpoints hold the keys. With chatbots, the provider itself functions as an endpoint, complicating true end-to-end encryption.
- In this case, OpenAI would be a party to the conversation: encrypting the data in transit isn't enough to keep OpenAI from holding sensitive information it could be compelled to share with law enforcement. (See the sketch after this list.)
- Apple has addressed this challenge, at least in part, with its "Private Cloud Compute" for Apple Intelligence, which allows queries to run on Apple servers without making the data broadly available to the company.
- Adding full encryption to all of ChatGPT would also pose complications, as many of its services, including long-term memory, require OpenAI to maintain access to user data.
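To make the endpoint problem concrete, here's a minimal sketch in Python (using the `cryptography` package; the shared key setup and the `run_model` stub are hypothetical, not OpenAI's actual architecture) of why transport encryption alone doesn't give chatbot conversations Signal-style confidentiality:

```python
# Conceptual sketch: even with the conversation encrypted in transit,
# a chatbot server must decrypt each prompt to answer it, so the
# plaintext exists on the provider's side.
from cryptography.fernet import Fernet

def run_model(text: str) -> str:
    # Stand-in for inference; a real provider would run the model here.
    return f"(model reply to {len(text)} characters of plaintext)"

key = Fernet.generate_key()   # shared key, akin to a TLS session key
channel = Fernet(key)

# Client side: the prompt is encrypted before it leaves the device.
prompt = "I have a sensitive medical question..."
ciphertext = channel.encrypt(prompt.encode())

# Server side: generating a response requires decryption, so the
# plaintext is available to log, retain under a court order, or
# hand over to law enforcement.
plaintext = channel.decrypt(ciphertext).decode()
reply = run_model(plaintext)

# The reply is re-encrypted for the trip back, but unlike Signal-style
# messaging, the provider was itself one of the endpoints.
response = channel.encrypt(reply.encode())
```

Closing that gap means keeping the server-side plaintext from being retainable at all, which is what designs like Apple's Private Cloud Compute aim to do.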
The big picture: Altman and OpenAI have advocated for shielding certain user data from government access, especially when people rely on ChatGPT for medical and legal advice, seeking the kind of confidentiality that applies when you speak to a licensed professional.
- "If you can get better versions of those [medical and legal chats] from an AI, you ought to be able to have the same protections for the same reason," Altman said, echoing comments he has recently made.
OpenAI hasn't yet seen a large number of demands for customer data from law enforcement.
- "The numbers are still very small for us, like double digits a year, but growing," he said. "It will only take one really big case for people to say, like, all right, we really do have to have a different approach here."
Between the lines: Altman said this issue wasn't originally on his radar but that it has become a priority after he realized how people are using ChatGPT and how much sensitive data they are sharing.
- "People pour their heart out about their most sensitive medical issues or whatever to ChatGPT," Altman said. "It has radicalized me into thinking that AI privilege is a very important thing to pursue."
What to watch: Altman predicted some sort of protections will emerge, adding that lawmakers have been somewhat receptive and generally favor privacy protections.
- "I don't know how long it will take," he said. "I think society has got to evolve."
Go deeper: Generative AI's privacy problem
2. Where AI-driven job cuts are hitting first
Artificial intelligence is not taking your job just yet, according to MIT's State of AI in Business 2025 report. Instead, AI is predominantly replacing outsourced, offshore workers.
Why it matters: As U.S. workers feel the pain of a tough job market coupled with a white-collar bloodbath, any disruption from AI is so far landing farther afield, the MIT findings suggest, even though AI's longer-term risk is much greater.
What they're saying: "There doesn't seem to be any layoffs. … Jobs most impacted were already low priority or outsourced," Aditya Challapally, leader of the Connected AI group at MIT Media Lab, tells Axios.
- Instead of replacing workers, organizations are finding real gains from "replacing BPOs [business process outsourcing] and external agencies, not cutting internal staff," according to the report.
Zoom out: While about 3% of jobs could be replaced by AI in the short term, Challapally said that share could grow to nearly 27% over the longer term.
- Industries that are considered advanced adopters of AI see the nearest-term labor impact.
- Over 80% of executives surveyed within tech and media, the only two sectors that showed clear signs of AI disruption, anticipate reduced hiring volumes in the next two years.
- Still, most companies surveyed are backfilling workers with AI rather than outright replacing them.
By the numbers: For now, companies aren't firing employees so much as canceling contracts for outsourced labor, a strategy that's producing financial gains.
- Back-office automations also have a higher return on investment, with $2 million to $10 million in BPO expenditures eliminated for the firms studied by MIT researchers.
- One company studied saved $8 million a year by spending $8,000 on an AI tool (see the quick math below).
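As a back-of-envelope check on that last figure (the multiple below is our arithmetic on the reported numbers, not a figure from the report itself):

```python
# Back-of-envelope ROI from the reported figures (illustrative only).
tool_cost = 8_000           # annual spend on the AI tool, in dollars
annual_savings = 8_000_000  # reported yearly savings, in dollars

roi_multiple = annual_savings / tool_cost
print(f"Savings are {roi_multiple:,.0f}x the tool's cost")  # -> 1,000x
```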
Between the lines: An estimated 50% of AI budgets flow to sales and marketing.
- That could indicate that front-office tools are getting more investment even though back-office tools save more money.
- It can also be harder to measure front-office AI-driven successes. (It's difficult to tell if AI helped you close more sales in a year, for example.)
Be smart: For investors betting on AI to drive productivity gains, the report offers both hope and risk.
- 95% of organizations investing in generative AI are getting zero return on that investment.
- But companies are seeing "significant increased productivity," Challapally says.
The bottom line: If AI boosts productivity in a way that helps companies cut costs without causing mass layoffs, it could be a Goldilocks scenario for investors — fueling earnings growth while avoiding the economic drag of widespread job losses.
3. Training data
- OpenAI has hired a raft of Democratic Party veterans in its push to win California's approval for its for-profit reorganization. (Politico)
- AI-loving CEOs are fighting to get their top managers to use the new tools. (New York Times)
- Anthropic is letting its Claude chatbot end conversations in certain cases where it determines they are harmful or abusive. (TechCrunch)
4. + This
Here's a link to restore at least some faith in humanity: A NASCAR pit crew helped a rival after he lost a tire.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+