AI Deepfakes Beware: Copyright Infringement Can Cost Up To $150,000 Per Copied Work

Drake’s track with an AI 2Pac verse didn’t last long. A day after the Tupac Shakur estate threatened to sue Drake for using an AI imitation of the late rapper’s voice on “Taylor Made Freestyle,” he took down the recording. In using 2Pac’s voice, though, Drake opened yet another important debate about generative AI that reveals just how risky the business is — and how rightsholders may have more power to shape it than they realize.

So let’s get legal! In the cease-and-desist letter he sent on behalf of the Shakur estate, lawyer Howard King referenced both Shakur’s personality rights, which encompass publicity rights — what some states refer to as likeness rights — plus the copyrights to the rapper’s recordings and songs. Most coverage focused on the former issue, since personality rights are relatively straightforward: Shakur’s estate controls the rights to the rapper’s distinctive style. The second gets complicated, since the recording copyrights — and potentially the song copyrights — have less to do with Drake’s use of 2Pac-style vocals than with how he was able to create them in the first place.

To create such a convincing imitation of 2Pac, an AI model would almost certainly have to ingest — and, in the course of doing so, copy — a significant number of Shakur’s recordings. So King, in his letter, demanded from Drake “a detailed explanation for how the sound-alike was created and the persons or company that created it, including all recordings and other data ‘scraped’ or used.” Any answer Drake gave would have taken the issue into legal terra incognita — an AI’s ingestion of recordings and songs would implicate copyright, although it’s not clear whether this could be done without a license under fair use. The stakes would be high, though. As opposed to a California right of publicity violation, which would be relatively easy to prove and incur limited damages, copyright infringement is federal and comes with statutory damages of up to $150,000 per work infringed. That means a company that ingests 20 works to create one would be liable for a maximum of $3 million.
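To make the exposure concrete, here is a minimal back-of-the-envelope sketch of that statutory-damages math (the $150,000 figure is the per-work cap for willful infringement cited above; the 20-work catalog is purely illustrative):

```python
# Maximum U.S. statutory damages for willful copyright infringement
# is capped at $150,000 per work infringed (17 U.S.C. § 504(c)(2)).
MAX_STATUTORY_PER_WORK = 150_000  # USD, willful-infringement cap

def max_exposure(works_infringed: int) -> int:
    """Upper bound on statutory damages for a given number of works."""
    return works_infringed * MAX_STATUTORY_PER_WORK

# The article's example: ingesting 20 works to create one.
print(max_exposure(20))  # prints 3000000, i.e. $3 million
```

Actual awards can be far lower — statutory damages start at $750 per work, and courts set the figure within the range — so this is a ceiling, not a prediction.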

For the last year, music creators and rightsholders have been talking about generative AI as something that’s coming — the deals they’ll negotiate, the terms they’ll set, the business they’ll do — once the right agreements are in place. But technology companies tend to beg forgiveness rather than ask permission, and it seems some of them have already ingested a considerable amount of music for AI without a license. Think about it: None of the major labels have announced deals for AI companies to ingest their catalogs of recordings, but enough recordings have been ingested to make AI vocal imitations of Drake, 2Pac, Snoop — even Frank Sinatra doing Lil Jon’s “Skeet skeet.” That means that a company or companies could be in big trouble. Or that they have a first-mover advantage over their rivals. Or both.

Part of the reason technology companies forge ahead is that deals that involve new technology get complicated. In this case, how do you value a license you’re not sure you need? If you think that companies need a license to ingest music for the purposes of allowing users to make AI vocal imitations — as seems likely — the price for that license is going to be relatively high, with complicated terms, because rightsholders would presumably want to be compensated on an ongoing basis. (It’s insanely difficult to create a fair one-time license to ingest a catalog of music: first, since copyright law controls copying, the licensor would forfeit any control not specified in the contract; second, it would be hard for a potential buyer to raise the kind of money a seller might want, so the economics of ongoing payments make more sense.) If you think that ingestion would fall under fair use — which is very possible in some edge cases but much less so generally — why would you pay a high fee, much less constrain yourself with complicated terms?

The legal cases that will tip the scales in one direction or the other will proceed at the speed of litigation, which moves more slowly than culture, much less technology. The first big case will be against Anthropic, which Universal Music, Concord, ABKCO and other music publishers sued in October for training an AI on lyrics to compositions they control. (Universal’s agreement with YouTube on AI principles might make a ruling that this is fair use somewhat less likely, since it shows that major rightsholders are willing to license their music.) There are already other cases in other parts of the media business — The New York Times sued OpenAI and Microsoft in December, for example — and one of them could set an important precedent.

Until that happens — and maybe after, too — there will be settlements. Very few rightsholders have much of an interest in stopping AI — some could in some cases, but it’s a losing battle. What they really want to do is leverage the power they have to destroy, or at least delay, a nascent business in order to shape it. (“The power to destroy a thing is the absolute control over it,” in the words of Paul Atreides, Padishah Emperor of the Known Universe, who might be exaggerating but certainly has a point.) That will give them real power — not only to monetize music with AI but to shape the terms of engagement in a way that, let’s face it, is likely to favor big companies with big catalogs. It will be interesting to see what they do with it.

Powered by Billboard.
