Axios AI+

March 26, 2025
Hi from D.C. (though I am en route to the airport back to the West Coast). Today's AI+ is 864 words, a 3.5-minute read.
1 big thing: OpenAI's Chris Lehane on the AI race
The winner of the AI race will make decisions that could set industry norms and influence global AI policy for years to come, OpenAI's chief global affairs officer, Chris Lehane, told Axios' Ina Fried yesterday at the Axios What's Next Summit in Washington, D.C.
Why it matters: Lehane says beating China in the AI race is so important that we should not tie the hands of AI makers by limiting their use of data under copyright laws that China won't observe.
- "Whoever ends up winning ends up building the AI rails for the world," Lehane said.
Between the lines: Lehane argued that OpenAI plays a critical role in ensuring that the U.S. is leading in AI.
- The requirement to remain competitive in this space, Lehane told Fried, infuses every AI regulation debate right now.
- "There's a growing recognition and understanding that we do need to make sure that we are leading as a country on innovation ... from a U.S. competitiveness perspective," Lehane said.
Zoom out: Lehane insisted that there are plenty of laws already on the books that govern what AI companies can and can't do.
- He said OpenAI and some of the other big tech companies are already doing everything they can to build models that align with the average person's top concerns about AI — protecting kids, limiting deepfakes, identifying AI-generated content.
- OpenAI has already struck licensing deals with publishers, including the Associated Press, Axel Springer and Axios, while pushing for a broader industry conversation around fair compensation and transparency.
Zoom in: In a recent White House memo, Lehane and OpenAI argued that AI companies should be able to train their models on copyrighted material as a matter of national security and that the government should codify this right under the "fair use" principle.
- Asked about what material OpenAI trains its models on, Lehane said the company uses "data that is appropriately accessible and available," a common phrase used by AI companies to describe their broad use of internet data that might or might not be protected by copyright.
- When Fried pushed Lehane on whether OpenAI was training on copyrighted material, Lehane again stressed the importance of the race with China.
- "That is a bit of a zero-sum game. And do you want the world built on autocratic, authoritarian AI, where there's not going to be any copyright, there's not going to be any fair use ... you're not going to have any freedoms?"
Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios into four local cities and providing some AI tools. Axios has editorial independence.
2. Social Security nominee's AI fixation
One buzzword came up a lot during the confirmation hearing yesterday for Frank Bisignano, the president's nominee for Social Security commissioner: AI.
Why it matters: Just like a private sector CEO, Bisignano appears to believe in the magic of artificial intelligence.
- "One of the greatest efficiency opportunities we have is using artificial intelligence," he told senators. "That doesn't mean we use it for answering the phone. It means we use it to learn how to do our work better. "
Where it stands: A push to use more AI tools was already underway in the previous administration, and the agency has been using versions of AI for years.
- Advocates have privacy concerns — already flaring over DOGE's data access — and worry that AI would be used to replace the humans tasked with talking to beneficiaries about complicated issues.
Zoom in: Bisignano said AI doesn't need to be customer-facing, like a chatbot, but could help staffers work more efficiently.
Zoom out: Social Security has notoriously bad customer service; the 20-plus-minute wait time to get a human to answer a call came up repeatedly yesterday.
- AI could help: A 2023 study from well-regarded researchers at Stanford and MIT found it boosts productivity inside call centers — not necessarily by automating call answering, but by helping agents get information faster while they're working.
Reality check: At the moment the agency's problems seem to be less about moving fast on AI, and more about the impact of huge cuts and fast-paced policy changes.
3. Training data
- Michael Kratsios, who served in the first Trump administration as chief technology officer, won Senate confirmation Tuesday to be the new head of the White House Office of Science and Technology Policy. (Axios)
- Google released the first of its Gemini 2.5 "thinking models," which the company says is capable of handling more complex problems and giving more accurate answers. (Demis Hassabis on X)
- OpenAI added a new image-making capability to ChatGPT yesterday based on its GPT-4o model, and the company's release notes say that — unlike some previous OpenAI image tools — it will allow users to create images of adult public figures (except for those who opt out). (OpenAI, Simon Willison)
4. + This
OpenAI CEO Sam Altman touted the way ChatGPT's new image generator better renders text and follows prompts. Those appear to be significant advances — but call me a nitpicker, I'm still stuck on the fact that one of the hands has only four fingers.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+




