Axios AI+

May 06, 2024
Ryan here. As RSA kicks off in San Francisco, I've headed to the Milken Global Conference in LA. Today's AI+ is 1,153 words, a 4-minute read.
1 big thing: Hollywood's AI disclosure dilemma
Generative AI has hit Hollywood, but you have to look hard to see it.
Why it matters: With no laws or standards governing when and how to tell viewers about AI's involvement in the creative process, film and TV makers are winging it — and further eroding the line between reality and fiction.
Driving the news: Media companies and content creators keep getting caught not disclosing their use of generative AI.
- Netflix's recently released true-crime documentary "What Jennifer Did" included images that appear to have been created or altered with generative AI, as first reported by Futurism.
- Fans of HBO's "True Detective" also noticed posters in the background of one scene that showed telltale signs of AI.
- The directors of the horror film "Late Night with the Devil" had to go on the defensive to explain their use of AI for three still images in the movie.
Documentary fans argue that AI-generated images inject falsehoods into the historical record, while fans of fictional dramas say that AI art takes jobs from human artists and spoils their enjoyment of the films.
- "True Detective's" showrunner tweeted that the posters were meant to poke fun at generative AI, but she later deleted those tweets.
- The "Late Night with the Devil" directors called their use an experiment.
The big picture: Setting aside the many unresolved AI copyright questions, viewers simply don't like being fooled.
- "People crave authenticity," Subramaniam Vincent, director of journalism and media ethics at Santa Clara University, tells Axios. There's a "creeping fear" that the images and media we see every day are not real, Vincent adds.
AI was a sticking point in last year's Hollywood strikes, in particular the use of actors' likenesses in films without their consent.
- "No generative AI in the entertainment industry, period," actor and filmmaker Justine Bateman told Ina Fried on stage at the Axios AI+ summit in San Francisco last year.
- "Technology should solve a problem," she said. AI is "labor replacement, not a tool for filmmakers."
The other side: Film and video are technologies, and Hollywood has been altering images from its earliest days.
- If you think of generative AI as the latest in a long tradition of special effects, it looks less like doomsday for truth and more like Hollywood business as usual.
Reality check: Disclosing or labeling the use of AI is hard. AI-generated media rarely exists without some human input, and any disclosure requirement isn't going to work if it's a simple, binary "AI or not AI" label.
- Most of what we see on television and in the movies is already a collaboration between humans and machines. We don't expect disclosure for the use of special effects or digital editing.
- "You don't need AI to manipulate photos or video in stunningly and meaningfully misleading ways," Denise Howell, technology lawyer and host of the podcast "Uneven Distribution," tells Axios.
Flashback: Long before generative AI, film critics questioned Errol Morris' use of slow-motion re-enactments of the 1976 murder of a police officer in his 1988 documentary "The Thin Blue Line."
- Morris argued that viewers would recognize the re-creation as staged, since he obviously could not have been at the scene with a camera.
- "We assemble our picture of reality from details. We don't take in reality whole," Morris wrote in the New York Times 20 years after "The Thin Blue Line" came out.
- Regarding the issues at Netflix, Howell tells Axios, "sensationalist true-crime storytelling has been popular and widespread for a long time even without today's generative AI tools."
State of play: Vincent, a former engineer as well as an ethics professor, says Hollywood is acting like Silicon Valley: using generative AI without disclosing it, then apologizing when it gets caught.
- "They're doing the same thing software engineers have been doing for a long time: release the software, release the technology, then fix the bugs," Vincent says.
- "People are obviously keeping an eye on broader policy and regulatory initiatives," Covington & Burling lawyer Adrian Perry tells Axios. Perry works directly with film studios, content distribution platforms and other kinds of content creators. "I think a lot of this is being kind of handled ad hoc on a deal-by-deal basis."
- He tells Axios that in the future we might see warnings about AI-generated content the way we see mature content warnings in TV and film now.
2. How Big Tech requires users to label AI
Tech platform policies on how users label AI-generated images and video are a moving target.
Why it matters: The explosion of generative AI online is making it harder for people to determine what is and isn't real on social media.
Between the lines: Tech giants have guidelines in place for their own platforms, and are joining coalitions to set industry standards for labeling.
- Meta, Google and TikTok all say that content made with their own AI tools will be labeled automatically.
- But content that's made with other AI tools and posted on these platforms is harder to label.
YouTube currently requires disclosure for any "content that is meaningfully altered or synthetically generated when it seems realistic."
- "Beauty filters" are OK, but generating a whole new face is not.
- If users don't label their own generative AI work, YouTube says it will apply the label for them and may remove the content or suspend the user from YouTube's partner program.
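YouTube's rule, as described above, amounts to a small decision table. As a toy illustration (not an official YouTube API; the field and function names are invented for this sketch), it can be modeled like this:

```python
# Toy sketch of the disclosure rule described above -- not an official
# YouTube API; the field and function names are invented for illustration.
from dataclasses import dataclass


@dataclass
class Upload:
    looks_realistic: bool
    synthetically_generated: bool
    meaningfully_altered: bool
    creator_disclosed: bool = False


def needs_ai_label(u: Upload) -> bool:
    """A label is required for realistic content that is synthetic or meaningfully altered."""
    return u.looks_realistic and (u.synthetically_generated or u.meaningfully_altered)


def platform_action(u: Upload) -> str:
    """Honor the creator's own disclosure, or apply the label for them."""
    if not needs_ai_label(u):
        return "no label"
    return "creator label" if u.creator_disclosed else "platform-applied label"
```

Under this framing, a "beauty filter" is neither synthetic nor meaningfully altered, so no label is required; a realistic, fully generated face without disclosure gets a platform-applied label. TikTok's and Meta's policies below differ mainly in the predicate, not the structure.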
TikTok requires labeling for any content that "contains realistic images, audio, and video."
- That means content that's been edited "beyond minor corrections or enhancements."
- This includes showing a subject doing something they never actually did or saying something they never said. It also includes any use of face-swapping apps.
Meta released new guidelines earlier this month based on feedback from its Oversight Board.
- Meta's original guidelines restricted only content manipulated to depict fake speech; the updated policy also covers content altered to falsify someone's actions.
- Meta says it will use "industry-shared signals of AI images," advice from fact checkers and self-disclosure when it starts labeling content in May.
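The "industry-shared signals" Meta mentions are provenance markers embedded in media metadata (standards such as C2PA content credentials). As a loose, stdlib-only sketch of why metadata-based detection is fragile, the code below scans a PNG's tEXt chunks for generator hints; the "parameters" key is an assumption based on what some open-source image generators write, and stripping the metadata defeats the check entirely:

```python
# Minimal sketch: scan PNG metadata for generative-AI hints. This is NOT
# how any platform's production detector works -- real signals (e.g. C2PA)
# are cryptographically signed manifests, not plain text chunks.
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"


def png_text_chunks(data: bytes) -> dict:
    """Walk a PNG byte stream and collect its tEXt chunks into a dict."""
    assert data.startswith(PNG_SIG), "not a PNG"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # uncompressed Latin-1 "key\0value" pairs
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out


def looks_ai_generated(data: bytes) -> bool:
    """Naive heuristic: look for generator hints in the metadata."""
    chunks = png_text_chunks(data)
    return "parameters" in chunks or "ai" in chunks.get("Software", "").lower()


def chunk(ctype: bytes, body: bytes) -> bytes:
    """Build one PNG chunk (used here to fabricate a demo file in memory)."""
    return struct.pack(">I", len(body)) + ctype + body + \
        struct.pack(">I", zlib.crc32(ctype + body))


# Fabricate a minimal chunk stream carrying a generator's text chunk.
demo = PNG_SIG + chunk(b"IHDR", b"\x00" * 13) + \
    chunk(b"tEXt", b"parameters\x00steps: 20, sampler: euler") + \
    chunk(b"IEND", b"")
```

Because re-encoding or screenshotting a file discards this metadata, platforms combine such signals with classifiers, fact-checker input and self-disclosure rather than relying on metadata alone.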
Yes, but: It's unclear whether any of these companies have the moderation support to enforce these rules and standards.
- A recent study from the Stanford Internet Observatory found that Facebook's recommendation algorithm was surfacing AI-generated content (usually unlabeled) from accounts people don't follow, because engagement on those posts was so high.
3. Training data
- Warren Buffett finds AI scary, and is keeping his investments mostly at arm's length. (Axios)
- Tom Kalil, a former Schmidt Futures chief innovation officer and Democratic administration official, launched Renaissance Philanthropy today to connect high-net-worth individuals and foundations with technologists, scientists and innovators. (Axios Pro)
- Elon Musk's plan for news on X is more AI-drafted news summaries, supplemented by chat. (Big Technology)
- OpenAI's head of people, Diane Yoon, and its head of nonprofit and strategic initiatives, Chris Clark, both left the company last week. (The Information)
4. + This
You can pay to be chauffeured in your own car over America's scariest bridge — in Maryland, not far from the bridge that collapsed in April.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+