January 02, 2024

Hi, it's Ryan. I'm still trying to vacuum up glitter from New Year's Eve. But at least I'm not hunting for Nvidia H100 chips — they're now retailing for around $100,000 each. Today's AI+ is 1,296 words, a 5-minute read.

1 big thing: Copyright is AI's new battlefield

Illustration: Sarah Grillo/Axios

Looming fights over copyright in AI are likely to set the new technology's course in 2024 faster than legislation or regulation, reports Megan Morrone.

Driving the news: The New York Times filed a lawsuit against OpenAI and Microsoft on December 27, claiming their AI systems' "widescale copying" constitutes copyright infringement.

The big picture: After a year of lawsuits from creators seeking to protect their work from being gobbled up and repackaged by generative AI tools, the new year could see significant rulings that alter the progress of AI innovation.

Why it matters: The copyright decisions coming down the pike — over both the use of copyrighted material to train AI systems and the copyright status of works created by or with the help of AI — are crucial to the technology's future and could determine winners and losers in the market.

What they're saying: "Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside," James Grimmelmann, professor of digital and information law at Cornell, tells Axios.

  • "Every time a new technology comes out that makes copying or creation easier, there's a struggle over how to apply copyright law to it," Grimmelmann says.
  • "If copyright law says that some kinds of AI models are legal and others aren't, it will steer innovation down a path determined not by what uses of AI are beneficial to society but one based on irrelevant technical details of the training process," Grimmelmann says.

Yes, but: Some observers believe the copyright system can adapt to an AI world.

  • Jerry Levine, general counsel for ContractPodAI, a generative AI tool that helps lawyers analyze legal documents, tells Axios he thinks there are "major issues" with the Times' case, but that "copyright law is well equipped to solve these issues."
  • Levine predicts the onus will fall on generative AI providers to prevent copyright infringement. If a response might violate a copyright, he suggests, a tool could offer to summarize the text and link to the original rather than reproduce the entire copyrighted work.

Between the lines: The biggest risk to AI innovation might lie in a ruling that limits the generative AI field to players with the resources to fight lawsuits and license large amounts of data.

  • University of Miami Professor of Law Andres Sawicki tells Axios that if licensing of copyrighted content becomes mandatory, AI development may become limited to players with deep pockets.

In the absence of other rules governing the new technology, copyright law could end up resolving legal questions around generative AI that aren't really about copyright at all.

  • In 2019, after a report that IBM had trained one of its image-recognition systems on Flickr photos users had shared under a Creative Commons license, critics saw a privacy violation. But because the U.S. lacks comprehensive privacy laws, copyright law became the fallback remedy.
  • Ryan Merkley, CEO of Creative Commons at the time, argued in a blog post that "copyright is not a good tool to protect individual privacy, to address research ethics in AI development, or to regulate the use of surveillance tools employed online. Those issues rightly belong in the public policy space."

The bottom line: "Copyright law is not really the way to worry about the big societal effects" of AI, Grimmelmann tells Axios.

  • "We shouldn't be using copyright law as labor policy to figure out the role of humans in a world of automation. We shouldn't be using copyright law to protect privacy or to protect against dangerous content. A copyright was not built for that."

2. AI gets real in 2024

Illustration: Brendan Lynch/Axios

2024 will be the year the AI industry gets serious about trying to deliver results across a wide slice of business and life, moving beyond the hype surrounding the successes of ChatGPT and chipmaker Nvidia, Megan Morrone and I report.

Why it matters: After 2023's surges of enthusiasm and fear, everyone using AI will be looking for proof that it's making their lives or work better.

  • AI providers are hunting for profitable business models that can support expensive-to-run generative AI systems.
  • Business leaders want to move beyond AI brainstorming and pilot phases and begin delivering leaps in efficiency, productivity and creativity.

The big picture: The industry's rise in the second half of 2023 created a mismatch between AI's sky's-the-limit potential and its challenging realities — hallucinating chatbots, hard-to-obtain GPU chips, liabilities around copyright and concerns about bias and accuracy.

  • With the arrival of smaller, specialized AI models and the availability of more AI tools on mobile devices, the landscape could improve.

Zoom in:

  • AI at work: For millions of workers, 2024 will be the year of the AI copilot. But for many others, it could be the year that AI-linked job losses move from theory to reality.
  • In daily life: More AI tools will run on mobile phones, transforming our relationships and hobbies as much as business. "[Google's new] Gemini nano is built for this," notes AI analyst Nina Schick.
  • Enterprise: The EU AI Act provides some regulatory exemptions for open models, and VCs are racing to invest in open-source providers. Look for companies to focus on new ways to apply AI models to their own products, rather than on general products like ChatGPT.
  • Markets: Wedbush analyst Daniel Ives predicts a "new bull market for the tech sector" driven by a 20-25% increase in cloud and AI spending.

The prospect of AI breakthroughs in health generates the most enthusiasm among both the general population and experts.

  • 2024 should see more early disease detection and personalized medical treatment plans with the help of AI "by analyzing patient data, improving surgical precision and enhancing post-operative monitoring," per Omar Arnaout, a neurosurgeon at Brigham and Women's Hospital.
  • Marc Succi, a Mass General Brigham radiologist, predicts that patients will increasingly turn to AI chatbots for medical triage advice, leading the biggest tech companies to acquire health-tech firms.

Zoom out: There's little prospect of comprehensive federal AI legislation during the 2024 election cycle — but state houses are becoming active AI legislators.

Be smart: No one knows exactly how quickly AI will move from Silicon Valley to Main Street.

  • Even as Bill Gates argues that today's AI advances will trigger "a massive technology boom later this decade," he notes that mass-market adoption may still be two years away.

What's next: OpenAI is set to open its delayed GPT store — an app store for customized versions of the chatbot — in early 2024.

3. Chief Justice Roberts urges humility on AI

U.S. Supreme Court Chief Justice John Roberts used his year-end report on the federal judiciary to focus on how artificial intelligence should be used with "caution and humility," Mike Allen and Ashley May report.

What they're saying: "I predict that judicial work — particularly at the trial level — will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them," Roberts wrote.

Read the 13-page report.

4. Training data

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.