AI-generated music strikes chord and discord
- Ina Fried, author of Axios Login

Illustration: Annelise Capossela/Axios
AI's arrival on the music scene is inspiring wildly diverging responses from stars and performers, with some inviting the technology to share the stage and others preferring to remain a solo act.
Why it matters: The dawn of generative AI is raising all manner of legal issues, and the music industry is providing key early tests of the limits of existing protections for intellectual property.
Driving the news:
- Last week, a song with AI-generated vocals that sounded convincingly like Drake's, "Heart on My Sleeve," became an overnight hit before being pulled down from several streaming services.
- Grimes, meanwhile, has invited fans to create their own tracks using an AI version of her voice and has offered to split the royalties.
- Composer and musician Holly Herndon has offered up Holly+, an AI version of herself.
- Then there's singer-songwriter Dan Bern's number, "AI Songwriting App," which repeatedly hurls the F-word at said innovation.
The big picture: The music industry has often been at the forefront of intellectual property issues, including questions about sampling and fair use as well as how to credit and compensate multiple parties whose work goes into a finished track.
- For example, there are clear rules establishing the conditions under which an artist can record a song written by someone else. Rooted in an era when singers were often not also songwriters, those conventions today allow artists to record and perform cover versions.
Yes, but: It's clear that existing rules don't help us navigate scenarios such as the recent Drake impersonation.
Between the lines: Both the writers and performers of a song can protect their work via copyright, and U.S. law offers strong protections, with owners able to demand that infringing works be taken offline.
- AI impersonations provide a new twist, enabling the duplication of artists' voices and styles "performing" works they never actually sang.
- A musician might be able to bring a suit claiming that an AI rendition makes unfair use of their likeness — but legally, that's a much more expensive and time-consuming challenge than a copyright claim.
Zoom out: The legal questions the music industry faces will soon emerge in a much wider range of cases, from video impersonations of celebrities to deepfakes of everyday individuals.
- They will turn up in viral videos, phone messages that seem to be from relatives in distress, and other cases we can't yet imagine.
Having a legal means to protect digital versions of oneself is critically important but not adequately addressed under current law, says Tom Graham, the head of AI company Metaphysic, which makes tools that enable such creations.
- "Anything that involves a lawsuit, even a letter of demand, is beyond the reach of regular people," Graham tells Axios. Graham has applied for trademark protection for a digital avatar of his likeness, hoping that approach could offer some protection.
Of note: There are separate issues regarding the training of the engines that crank out generative AI content.
- Drake's label, for example, has said it believes AI engines that are trained using copyrighted music are infringing its rights — a contention also being leveled by publishers of written works, as well as stock photo giant Getty Images.
- UMG said in a media statement, "The training of generative AI using our artists' music" represented "both a breach of our agreements and a violation of copyright law."
The bottom line: Graham says the platforms creating and distributing these technologies need to do more to address the eminently foreseeable problems they generate.
- "Everybody needs to spend more time and money and attention trying to mitigate those harms," he said.