Axios AI+

August 21, 2023
Hi, it's Ryan. Today's newsletter is 1,060 words, a 4-minute read.
1 big thing: Artists break down AI stereotypes
Image: Wes Cockx/Google DeepMind
As we begin to glimpse what AI will mean for art and its audiences, Google DeepMind gave Axios an exclusive early look at a new exhibition, "Visualizing AI," in which 13 artists explore the risks and visualize the opportunities of AI.
Why it matters: We need more ways to engage with AI, and art about these technologies can help us think beyond stereotypical images of mazes of code and godlike robots.
There's a dreamy consistency across the artworks, which include commentaries on AI and work produced using AI.
- The artists range from former monk Champ Panupong Techawongthawon to Martina Stiftinger, who has helped shape the visual design of many of the biggest tech companies.
It's the video art that stands out most — presenting AI as fluid, complex, and endlessly transforming.
- XK Studio takes us into a deeply rich 3D view of the world — suggesting new perspectives AI advances could open for us — and Brooklyn's Wes Cockx invites us to imagine "Large Models" in constant movement.
- Portugal's Nidia Dias takes us into the school science lessons most of us never had — extracting ecosystems from their real-life locations and spinning them around for an immersive experience of their biodiversity.
- All images and videos are free for download, and DeepMind says it paid participating artists.
Driving the news: U.S. District Judge Beryl Howell on Friday ruled that art produced by generative AI cannot be registered for copyright in the U.S.
- While noting that AI advances mean we are approaching "new frontiers" in copyright, the court insisted that "human authorship is a bedrock requirement of copyright."
- Thousands of books under copyright have been used without authors' permission to train generative AI models (see our next item below).
Zoom out: Our relationship with AI is already complicated, and likely to get more complicated.
- AI optimists such as Marc Andreessen have called the development and proliferation of AI a "moral obligation." But different kinds of AI could both help diagnose and cure your cancer and transform or eliminate your job — meaning there's no easy or right way to react.
The big picture: AI debates today are dominated by awe over the speed of advances in the generative AI field and demands that the new technologies be regulated.
- But many of the metaphors and analogies used in those debates fall short. Artists may be able to add texture to our understanding of AI in ways that innovators and regulators cannot.
What's happening: Advocates seeking to reduce excessive hype and fear around AI are issuing reports and guides designed to nudge media outlets, tech companies and others leading AI debates into offering more varied and better balanced perspectives on AI.
- AI researchers say that stereotyping in stock images, for example, can negatively impact public perceptions of AI — including by focusing on the technology rather than the people affected by it, or by exaggerating AI's capabilities.
Yes, but: While art can broaden our understanding of AI, some generative AI art is already undermining the livelihoods of artists, according to the team behind Glaze, a non-profit "cloaking tool" from the University of Chicago that works to prevent AI mimicry of artists' copyrighted digital works.
- "Popular independent artists find low quality facsimiles of their artwork online, often with their names still embedded in the metadata from model prompts," writes Ben Zhao of Glaze.
Our thought bubble: Absent from the DeepMind exhibit is a whole tradition of artistic work — from Janelle Shane's curation of AI mistakes to Simone Giertz's "Sh---y Robots" — that mocks AI or takes a more jaundiced view of its prospects.
Flashback: Ina Fried reported in October on the push to have AI art treated as serious art.
2. An AI training library full of copyright books
Illustration: Natalie Peeples/Axios
A number of popular generative AI models, including Meta's open source Llama, were trained in part on pirated versions of books from leading authors, according to a new investigation in The Atlantic.
Why it matters: At least 170,000 books, "the majority published in the past 20 years," are in Llama's training data, per the story by Alex Reisner.
- BloombergGPT and GPT-J, from the EleutherAI nonprofit, also trained on the same dataset, per The Atlantic.
The details: The copied books were roughly one-third fiction and two-thirds non-fiction, and were contained in a dataset labeled Books3, which formed part of a much bigger compendium of training data called the Pile that was freely available online from 2020 until recently.
- The list of copied books is full of titles by bestselling and acclaimed authors, including Stephen King, Margaret Atwood, Haruki Murakami, and Jonathan Franzen.
- More than 30,000 titles are from Penguin Random House.
Of note: Sarah Silverman and two other authors sued Meta and OpenAI last month for copyright infringement in allegedly using their books to train AI models.
The intrigue: The developer who claimed responsibility for releasing the Books3 dataset said he did it to give others "OpenAI-grade training data."
- While it appears the authors did not give permission for their works to be used to train these AI models, some developers are likely to claim fair use. Others may not have been aware that material they were using was under copyright. The law governing the use of copyright data in training AI remains unsettled.
What they're saying: EleutherAI executive director Stella Biderman told The Atlantic that the company is "creating a version of the Pile that exclusively contains documents licensed for that use."
3. Training data
- Microsoft CEO Satya Nadella told Fast Company that AI could add $50 billion-$100 billion to Microsoft's annual revenue by 2027. (Fast Company)
- There's an intensifying race between the U.S. and China to integrate AI into military capabilities. (Wall Street Journal)
- Just 4% of Americans can answer all nine of these questions about AI, cybersecurity and Big Tech correctly. (Pew Research)
- RIP: John Warnock, the Adobe co-founder who helped spark the desktop publishing revolution, died at 82.
- Trading places: Meta AI policy director Kevin Bankston is leaving the company after four years.
- Errata: In Friday's newsletter, when we wrote about Amazon Web Services' use of "user content" to improve its AI, we should have made clear that the company explicitly says it does not use "personal data" and allows customers to opt out of any AI-related use of their content.
4. + This
Bryson Stott rocked a pencil bat at the plate in Sunday's Philadelphia Phillies vs. Washington Nationals game, part of the Little League Classic.
Thanks to Scott Rosenberg for editing and Bryan McBournie for copy editing this newsletter.