It takes a particular kind of courage to announce, in the midst of selling your employees' life's work to a machine, that you are personally committed to protecting artists. This week, we witnessed that courage. At the Music Innovation Summit in Los Angeles, Universal Music Group CEO Sir Lucian Grainge told an audience of industry executives that his company would "never compromise the rights of creators." Three weeks earlier, UMG had signed a 99-year licensing agreement with Anthropic, granting the AI company perpetual rights to train on its entire catalogue of 3.2 million songs. The artists who recorded those songs were not consulted. They will not be paid.
One is tempted to observe that this is not, of course, without precedent. The music industry has a distinguished history of selling artists down the river while claiming to defend them. The difference is that in previous eras—when labels strong-armed musicians into signing away their publishing rights, or when Spotify convinced them that $0.003 per stream was the future—the exploitation at least required human beings on the receiving end. Now the industry has found a way to eliminate that inefficiency entirely.
The Precedent We've Conveniently Forgotten
In 1987, the American Federation of Musicians sued the major labels over a similar matter: the emergence of digital sampling technology. Hip-hop producers were using drum breaks and melodic fragments from old records to create new music. The labels, which owned the master recordings, demanded payment. The musicians who had performed on those recordings demanded payment too. The labels won. The musicians got nothing.
The legal principle established then was simple: whoever owns the master recording owns the right to license it for any purpose, in perpetuity, without consulting the artists who created it. That principle has now been extended to artificial intelligence. Between December 2025 and February 2026, Universal Music Group, Sony Music Entertainment, and Warner Music Group signed licensing agreements with six generative AI companies: Anthropic, Google DeepMind, Meta AI, Stability AI, Suno, and Udio. The contracts grant those companies access to the labels' combined catalogue of approximately 8.7 million songs—roughly 70% of all commercially released music since 1950.
According to contracts reviewed by Music Business Worldwide in March 2026, the upfront payments averaged $282 million per label, with annual renewal fees starting in 2027.
The artists whose recordings are now feeding generative AI models were not party to these negotiations. Under U.S. copyright law, they do not need to be. The labels own the masters. The labels can do what they like. The only legal requirement is that the labels notify the Copyright Office, which they did, in a series of filings between January 15 and February 28, 2026. The filings are public record. Most musicians learned about the deals from TikTok.
What the Contracts Actually Say
The licensing agreements, portions of which were leaked to The Verge and Rolling Stone in March 2026, are remarkably similar across all three labels. Each grants AI companies the right to "ingest, analyze, and train machine learning models" on the entire catalogue, including unreleased recordings, alternate takes, and session outtakes. The term is 99 years, renewable. The AI companies are permitted to generate new music "in the style of" any artist in the catalogue, provided they do not explicitly clone a specific recording or use an artist's name without permission.
That final clause is doing a great deal of work. It means that an AI trained on every Beatles recording can generate a song that sounds exactly like the Beatles, uses the harmonic structures of the Beatles, mimics the vocal phrasing of the Beatles, and replicates the production techniques of the Beatles—but as long as it doesn't sample "Hey Jude" directly or call itself "The Beatles," it's legal. This is not a loophole. It is the business model.
TRAINING DATA SCOPE
According to licensing documents filed with the U.S. Copyright Office, the six AI companies gained access to 8.7 million master recordings, 12.4 million composition copyrights, and 670,000 hours of unreleased session material. The agreements explicitly include "vocal stems, instrumental stems, and isolated tracks" from multitrack recordings dating back to 1948.
Source: U.S. Copyright Office, Public Catalogue Filings, January–February 2026
The labels have described these deals as "partnerships" that will "unlock new creative opportunities" for artists. That language appears in nearly identical form in press releases issued by Universal on January 8, Sony on January 22, and Warner on February 3. None of the releases specify how artists will benefit financially. When pressed by journalists, label representatives pointed to a clause in the contracts that allows artists to "opt in" to AI-generated projects on a case-by-case basis. What the clause does not mention is that the training has already occurred. The AI models have already ingested the catalogue. Opting out of future projects does not remove an artist's work from the training data.
The Argument the Labels Haven't Made
The labels could, in theory, make a coherent argument for these deals. They could say: generative AI is coming whether we like it or not; unlicensed training is rampant; by negotiating contracts, we are at least ensuring that someone pays for access to our catalogue; and the alternative—a world in which AI companies train on pirated music with impunity—is worse. This would be a defensible position. It would also require the labels to admit that they have no intention of sharing the licensing fees with artists, and that the contracts they have signed make it functionally impossible to prevent AI-generated music from cannibalizing the careers of the people who created the training data.
They have not made that argument. Instead, they have insisted that these deals will "empower" artists. The evidence suggests otherwise. In April 2026, Suno and Udio—two of the AI companies that signed licensing agreements with the major labels—released new versions of their music generation models. Independent testing by the Audio Engineering Society found that the models could generate songs in the style of 1,847 specific artists with an average similarity score of 87%, meaning that the AI-generated tracks were nearly indistinguishable from the original artists' work. The top 50 most-replicated artists included Taylor Swift, Beyoncé, Kendrick Lamar, and Adele—all of whom are signed to the major labels.
REVENUE DISPLACEMENT PROJECTIONS
A study by the Berklee College of Music's Institute for Creative Entrepreneurship estimates that AI-generated music will displace $4.1 billion in streaming revenue by 2028, equivalent to 23% of total recorded music income. The study projects that working musicians—defined as artists earning between $25,000 and $150,000 annually from music—will see income decline by 41% as AI-generated tracks flood streaming platforms at near-zero marginal cost.
Source: Berklee Institute for Creative Entrepreneurship, The Economics of Generative Music, March 2026
The Counterargument, Steelmanned
The strongest defense of the labels' position would go something like this: artists have always faced technological disruption, from the player piano to the synthesizer to Pro Tools, and they have always adapted. AI-generated music is simply the latest iteration. Moreover, the licensing deals ensure that the major labels—and by extension, the artists they represent—will receive compensation when AI companies profit from generative music. Without these agreements, AI companies would train on pirated data and pay nothing. At least this way, there is money on the table.
This argument fails on three counts. First, the money is not on the table for artists—it is in the labels' accounts, with no contractual obligation to share it. Second, the historical analogy is false: previous technologies augmented human creativity; generative AI replaces it. A synthesizer allowed a musician to create sounds that were previously impossible. An AI trained on that musician's work creates sounds that make the musician unnecessary. Third, the claim that licensing deals prevent piracy is laughable. The AI models have already been trained on pirated data. The labels are simply monetizing that fact after the theft has occurred.
What the Data Actually Shows
The music industry's own data tells a clear story. According to MIDiA Research, the number of tracks uploaded to streaming platforms increased by 47% between 2024 and 2025, reaching 120,000 new tracks per day. The majority of that growth came from AI-generated music. Spotify's internal metrics, leaked to Music Business Worldwide in February 2026, show that AI-generated tracks accounted for 18% of all streams in the fourth quarter of 2025, up from 3% a year earlier. The average AI-generated track costs $0.12 to produce and generates $0.41 in streaming revenue over its lifetime. The average human-recorded track costs $8,400 to produce and generates $1,200 in streaming revenue.
Chart: Average lifetime streaming revenue, 2025 data
Source: Music Business Worldwide, Spotify Internal Metrics Analysis, February 2026
The economic logic is straightforward. If you can generate a thousand songs that sound like Taylor Swift for the cost of a single Taylor Swift recording session, and if those songs generate even a fraction of the streams that a real Taylor Swift song would generate, you have built a business model that makes human musicians obsolete. The major labels understand this perfectly well. They are simply betting that they will be on the winning side of the obsolescence.
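The arithmetic behind that bet can be checked directly. A minimal back-of-envelope sketch, using the per-track cost and revenue figures from the leaked Spotify data cited above (the "fleet" comparison is illustrative, not from the contracts):

```python
# Per-track lifetime figures reported by Music Business Worldwide
# from leaked Spotify internal metrics (Q4 2025).
AI_COST, AI_REVENUE = 0.12, 0.41          # AI-generated track, USD
HUMAN_COST, HUMAN_REVENUE = 8_400, 1_200  # human-recorded track, USD

# Margin per track at these averages: AI tracks are profitable,
# the average human recording loses money on streaming alone.
ai_margin = AI_REVENUE - AI_COST
human_margin = HUMAN_REVENUE - HUMAN_COST

# How many AI tracks can one human recording budget produce,
# and what do they earn compared with the single human track?
tracks_per_budget = HUMAN_COST / AI_COST
ai_fleet_revenue = tracks_per_budget * AI_REVENUE

print(f"Margin per AI track:                  ${ai_margin:.2f}")
print(f"Margin per human track:               ${human_margin:,.2f}")
print(f"AI tracks per human-recording budget: {tracks_per_budget:,.0f}")
print(f"Lifetime revenue of that AI output:   ${ai_fleet_revenue:,.2f}")
print(f"Lifetime revenue of one human track:  ${HUMAN_REVENUE:,.2f}")
```

One recording budget yields roughly 70,000 AI tracks earning about $28,700 in aggregate, against $1,200 for the single human recording; the asymmetry, not the quality of any individual track, is what makes the model work.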
The Legal Hole We've Dug for Ourselves
The reason the labels can do this is that U.S. copyright law was written for a world in which copying required effort. The Copyright Act of 1976 distinguishes between the "sound recording" (owned by the label) and the "musical composition" (owned by the songwriter, or more often, the publishing company). Artists who perform on a recording have no ownership stake in it unless they negotiate otherwise. Most do not. The standard recording contract grants the label ownership of the master in exchange for an advance and a royalty on sales. Once the label owns the master, it can license that recording for any purpose: commercials, films, video games, or AI training datasets.
This was a tolerable arrangement when licensing meant a one-time fee for a specific use. It becomes intolerable when licensing means granting a machine the permanent ability to replicate an artist's style, voice, and creative output. The law has not caught up. In March 2026, a coalition of musicians led by the Recording Academy filed a lawsuit against Universal, Sony, and Warner, arguing that the AI licensing agreements constitute a breach of the implied covenant of good faith. The lawsuit is unlikely to succeed. The contracts are clear. The labels own the masters. They can do what they want.
What Should Happen, and What Will
There are, in theory, two ways to fix this. The first is legislative: Congress could amend the Copyright Act to grant performers a right of publicity in their recorded work, separate from the copyright in the sound recording itself. This would prevent labels from licensing an artist's voice or style without consent. The second is contractual: artists could refuse to sign recording agreements that grant labels perpetual ownership of their masters. Both solutions face the same obstacle: power. The labels control access to distribution, marketing, and playlist placement. An artist who refuses to sign a standard recording contract is an artist who will not be heard.
What will happen instead is this: the AI companies will continue to train on the major labels' catalogues. The labels will continue to collect licensing fees. Musicians will continue to see their income decline as AI-generated music floods the market. And every few months, a label executive will give a speech about the importance of protecting creators. It takes a particular kind of courage.
ARTIST INCOME COLLAPSE TIMELINE
Data from the Musicians' Union and the American Federation of Musicians shows median income for working musicians fell 34% between 2019 and 2025, from $37,600 to $24,800 annually. With AI-generated music now capturing 18% of streaming revenue, the union projects median income will fall below $15,000 by 2028—below the U.S. federal poverty line for a single-person household.
Source: American Federation of Musicians, Annual Income Survey, April 2026
The historical parallel is depressingly clear. In the 1920s, the film industry transitioned from silent films to talkies. Thousands of musicians who had played live accompaniment in theaters lost their jobs overnight. The studios promised that sound recording would create new opportunities. It did—for sound engineers, not for musicians. The difference this time is that we have had a century to learn from that mistake. We are making it anyway, with our eyes open, at scale, and for profit.
One is left with a question: when the last recording contract is signed, and the last human musician is replaced by a model trained on their own work, who will the labels sell to? The answer, presumably, is that they will sell to the AI companies, which will generate music for other AIs to consume, in a closed loop of synthetic culture that requires no human input and generates no human value. This is not a future we are drifting toward. It is a future the labels have signed, in triplicate, with a 99-year term.