AI firms push to use copyrighted content freely

Illustration: Sarah Grillo/Axios
A sharp divide over AI engines' free use of copyrighted material has emerged as a key conflict among the firms and groups that recently flooded the White House with advice on its forthcoming "AI Action Plan."
Why it matters: Copyright infringement claims were among the first legal challenges following ChatGPT's launch, with multiple lawsuits now winding their way through the courts.
Driving the news: In their White House memos, OpenAI and Google argue that their use of copyrighted material for AI is a matter of national security — and if that use is limited, China will gain an unfair edge in the AI race.
- "If the [Chinese] developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over," OpenAI said in its proposal.
- The companies make clear they would like to see executive or legislative action that guarantees their ability to train models on copyrighted material.
Yes, but: Fair use does permit limited use of copyrighted material without permission. At issue here is whether fair use principles are broad enough to cover the wholesale consumption of massive datasets in AI training.
The other side: Groups representing actors, filmmakers and publishers (among other creative professionals) used their filings, public statements and editorials to reject those arguments.
- The News/Media Alliance, a trade group that represents newspapers and other publishers, called for strong protection of copyrights.
- "OpenAI's submission to the Trump administration's AI Action Plan would undermine the American economy," N/MA CEO Danielle Coffey said in a statement to Axios. "Loosening standards for everyone else's creative IP might be convenient for them in the short run, but the long-run implications are bad for everyone."
- More than 400 Hollywood entertainers signed a letter warning that "America's global AI leadership must not come at the expense of our essential creative industries."
- The group, which included Paul McCartney, Ben Stiller, Lilly Wachowski, Cynthia Erivo (and dozens of other well-known actors, directors and filmmakers), noted that the U.S. arts and entertainment industry supports over 2.3 million jobs "while providing the foundation for American democratic influence and soft power abroad."
Zoom in: In its filing, startup Vermillio makes its own case for maintaining copyright protection. The company, which focuses on monetizing protected IP, uses OpenAI's ChatGPT to help bolster its case.
- "There are several areas in this document where the proposals seem to overstep bounds and could be seen as trampling on rights guaranteed to Americans," Vermillio quotes ChatGPT as saying.
- In addition to the possibility of overreach on its fair use argument, ChatGPT calls out the preemption of state laws and the risk of government surveillance and censorship as areas where OpenAI's recommendations could infringe on citizens' rights.
- "Overall, the proposals attempt to centralize power in AI companies and the federal government while diminishing state control, intellectual property protections and privacy rights."
The big picture: Publishers, writers, artists and others have filed suit against OpenAI, Microsoft, Google and other companies arguing their training and operation of generative AI systems violates intellectual property law.
- In the U.S., lawsuits are dominating the debate, while in the U.K. explicit legal protections for AI training are already on the table.
My thought bubble: There are options beyond denying AI companies access to copyrighted material or granting them totally free rein.
- The most obvious is licensing: AI companies can pay — and many already do — for such content. OpenAI has struck many such deals, and Microsoft, Google and Meta have also licensed content they consider critical to their work.
- And while the threat from China and other adversaries is real, the "if we don't do it, they will" approach could be used to argue for the abandonment of all sorts of protections.
- For example, Chinese companies have access to more sensor data thanks to the country's widespread surveillance of citizens, a factor that could give them an advantage in building AI systems grounded in the physical world. Does that mean the U.S. should also allow widespread surveillance?
- U.S. tech giants have long invoked foreign threats to argue that U.S. regulations should be loosened or not enforced against them, whether the issue is antitrust, intellectual property or something else.
Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios into four local cities and providing some AI tools. Axios has editorial independence.
