Copyright law is AI's 2024 battlefield
Looming fights over copyright in AI are likely to set the new technology's course in 2024 faster than legislation or regulation.
Driving the news: The New York Times filed a lawsuit against OpenAI and Microsoft on December 27, claiming their AI systems' "widescale copying" constitutes copyright infringement.
The big picture: After a year of lawsuits from creators seeking to protect their works from being gobbled up and repackaged by generative AI tools, the new year could see significant rulings that alter the course of AI innovation.
Why it matters: The copyright decisions coming down the pike — over both the use of copyrighted material in the development of AI systems and the status of works created by or with the help of AI — are crucial to the technology's future and could determine winners and losers in the market.
What they're saying: "Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside," James Grimmelmann, professor of digital and information law at Cornell, tells Axios.
- "Every time a new technology comes out that makes copying or creation easier, there's a struggle over how to apply copyright law to it," Grimmelmann says.
- According to Grimmelmann, "There are definitely cases in which copyright owners attempt to veto technologies that they see as creating too much of a risk of infringement [and that] winds up really hampering or distorting innovation."
- "If copyright law says that some kinds of AI models are legal and others aren't, it will steer innovation down a path determined not by what uses of AI are beneficial to society but one based on irrelevant technical details of the training process," Grimmelmann says.
- "If courts determine that training the systems constitutes infringement...I think that's going to have a really significant impact," University of Miami Professor of Law Andres Sawicki tells Axios.
Yes, but: Some observers believe the copyright system can adapt to an AI world.
- Jerry Levine, general counsel for ContractPodAi, a generative AI tool that helps lawyers analyze legal documents, tells Axios he thinks there are "major issues" with the Times' case, but that "copyright law is well equipped to solve these issues."
- Levine predicts that it will be incumbent on the generative AI providers to prevent copyright infringement. He suggests that if a response might violate a copyright, the tool would be able to offer to summarize the text and link to the original, instead of reproducing the entire copyrighted work.
Between the lines: The biggest risk to AI innovation might lie in a ruling that limits the generative AI field to players with the resources to fight lawsuits and license large amounts of data.
- Sawicki points out that we're already seeing licensing deals from the big players, like Microsoft and OpenAI's partnership with the Associated Press to license its news stories. If such licensing becomes mandatory, he suggests, AI development may become limited to players with deep pockets.
- "Smaller startups, academic researchers, prominent individuals, open source movements aren't going to be able to do that," he says.
In the absence of other rules governing the new technology, copyright law could end up resolving legal questions around generative AI that aren't really about copyright at all.
- In 2019, after a report that IBM had trained one of its image-recognition systems on Flickr photos users had shared under a Creative Commons license, critics saw a privacy violation. But since the U.S. lacks comprehensive privacy laws, copyright law became the fallback remedy.
- Ryan Merkley, CEO of Creative Commons at the time, argued in a blog post that "copyright is not a good tool to protect individual privacy, to address research ethics in AI development, or to regulate the use of surveillance tools employed online. Those issues rightly belong in the public policy space."
The bottom line: "Copyright law is not really the way to worry about the big societal effects" of AI, Grimmelmann tells Axios. "We shouldn't be using copyright law as labor policy to figure out the role of humans in a world of automation. We shouldn't be using copyright law to protect privacy or to protect against dangerous content. Copyright was not built for that."