Axios AI+

December 16, 2025
We've asked you to tell us how you use AI, but now we want to know how you don't use it. Reply to this email and tell us what red lines you draw around AI use. Or what did AI use to be great for and now ... not so much?
Today's AI+ is 1,153 words, a 4.5-minute read.
1 big thing: Mattel and OpenAI delay making toys together
Mattel's quiet delay of its first OpenAI-powered toy is the latest sign that enthusiasm for AI is colliding with rising safety, privacy and regulatory concerns.
Why it matters: The slower-than-anticipated rollout reflects mounting unease over AI's impact on teens and a series of missteps by other makers of AI toys.
Driving the news: Mattel, which has been silent about its plans since announcing the collaboration in June, confirmed to Axios that it won't hit its original target to announce a product during 2025.
- "We don't have anything planned for the holiday season," a Mattel representative told Axios.
- The company also reiterated that its first product, when it does arrive, is aimed at "older customers and families," noting that OpenAI's developer interface only supports those 13 and older.
- Mattel didn't offer a further update on its plans but said it sees AI as a complement to, rather than a replacement for, traditional play, and that any products will comply with safety and privacy regulations.
The big picture: Much has changed since June, when Mattel and OpenAI announced their tie-up.
- There has been increased focus on the interactions between AI and vulnerable audiences, including youth, amid reports of chatbots helping fuel delusions and suicidal thoughts.
- Early AI-enabled toys have also had a variety of issues, from talking dirty and sharing their takes on Taiwanese sovereignty to being rendered nonfunctional when their companies go belly up.
What they're saying: A number of regulators and consumer safety organizations have warned that chatbots are categorically unsafe, even for older teens.
- Toys that embed AI pose additional dangers, the U.S. PIRG Education Fund warned in a report last month.
- "We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the group said.
Beyond saying inappropriate things, generative AI-equipped toys pose other risks, including the potential for emotional dependence on a gadget that positions itself as a friendly companion.
- There are also privacy issues, given that generative AI often relies on customers providing personal information. Young people can't be expected to understand the consequences of sharing their data.
Between the lines: Toymakers, especially brand-name ones, have to weigh the experiences generative AI can offer against the reputational risk if interactions go wrong.
- On the positive side, generative AI could pave the way for more interactive play, including longer conversations and deeper personalization.
2. Trump taps Big Tech for AI federal workforce
The White House unveiled a new AI-focused "Tech Force," looking to tap employees from the nation's biggest tech firms in the latest sign of the industry's embrace of President Trump.
Why it matters: The White House appears to be trying to pick up the pieces after Elon Musk's DOGE swept through the federal government and wiped out significant existing tech expertise.
- The administration listed a who's who of Big Tech as partners for the program, including Adobe, Amazon Web Services, IBM, Meta, Microsoft, Nvidia, Oracle, Palantir and Musk's xAI.
How it works: Big Tech firms will be able to lend workers for two-year stints to help modernize the federal government — and retain any equity or stock options — before returning to their employer, Office of Personnel Management director Scott Kupor said in a call with reporters yesterday.
- The program is also available to people outside these big firms.
- The administration wants to recruit 1,000 employees, with salaries between $135,000 and $195,000 a year.
- To avoid conflicts of interest, employees will have to take a leave of absence from the companies and adhere to government ethics rules. It's not clear if some would have to leave their employers entirely.
For security clearances, Kupor said, employees would have to go through the normal processes, but he added that he has secured commitments from agencies that they will do this as "efficiently" as possible.
- Kupor pointed to the Defense Department as one agency he believes is ripe for this program, and said virtually all agencies were participating.
What they're saying: "Tech Force will primarily recruit early-career technologists from traditional recruiting channels, along with experienced engineering managers from private sector partners," the program's website states.
- Kupor said the aim was to get more early-career workers into the federal workforce.
Between the lines: Attracting talent to the federal workforce has long been difficult.
3. Exclusive: Whole Foods to deploy AI food recycling
A new kind of AI-supercharged composting bin will turn fruit and vegetable scraps at Whole Foods into chicken feed — which will then help produce the grocer's own eggs.
Why it matters: The technology can shrink waste volumes by up to 80%, according to its maker, startup Mill, cutting greenhouse gas emissions from food waste and saving Whole Foods money.
- "Waste is one of the largest sectors of the economy that most folks in our industry overlook," Mill co-founder and CEO Matt Rogers told Axios in an interview yesterday.
Driving the news: Amazon and Mill are partnering to roll out the bins across Whole Foods stores by 2027, the companies tell Axios exclusively.
- Amazon, which owns Whole Foods, is also investing an undisclosed amount in Mill, founded in 2020.
Follow the money: Mill has raised a total of $250 million since its founding in 2020, the startup also shared exclusively with Axios.
- In addition to Amazon, via its Climate Pledge Fund, other investors include Prelude, Breakthrough Energy Ventures, Lower Carbon, Energy Impact Partners and Google Ventures.
4. OpenAI comms chief Hannah Wong to depart
OpenAI chief communications officer Hannah Wong has decided to step down and will depart the company at the end of January.
Why it matters: Wong was the AI giant's first CCO and guided the company through the launch of ChatGPT, heightened regulatory scrutiny, controversies, a slew of deals and lawsuits.
The big picture: Her departure comes as the company pushes ahead on a variety of fronts, from $1.4 trillion in committed infrastructure spending to a flurry of new products such as Sora to a corporate restructuring that could lead to an IPO in the coming years.
5. Training data
- OpenAI has hired longtime Google business executive Albert Lee as head of corporate development. (The Information)
- Nvidia released Nemotron 3 as an open model and, unusually for such releases, the chipmaker is also sharing much of the underlying training data. (Wired)
- Investors and executives at public companies have differing expectations about returns from AI, according to a new survey. (Axios)
6. + This
Well, AI has clearly gone mainstream, with Merriam-Webster choosing "slop" as the 2025 word of the year.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+