Wednesday, April 8, 2026
The Editorial · Deeply Researched · Independently Published
feature
◆  THE GREAT EXPROPRIATION

The Machine That Dreams for Us: How AI Became Art's Landlord

Major labels want AI licensing revenue. Independent artists want their work back. The Copyright Office wants more time. History suggests we've seen this before.

Photo: Bryn Young via Unsplash

It takes a particular kind of courage to train an artificial intelligence on the complete works of thousands of artists without their consent, and then announce that you are personally committed to compensating them fairly. This month, we witnessed that courage in abundance. The chief executives of three major record labels sent a letter to the United States Copyright Office expressing their 'deep concern' about unauthorised AI training — the same AI training that their own subsidiaries have quietly licensed for undisclosed sums. One is tempted to observe that the fox, having eaten most of the hens, has developed a passionate interest in henhouse security.

The letter, cosigned by Universal Music Group, Sony Music Entertainment, and Warner Music Group, called for 'robust protections' and 'meaningful licensing frameworks.' What it did not mention is that all three companies have already concluded licensing agreements with major AI developers — deals whose terms remain confidential but whose existence was confirmed in February by the Financial Times. The independent artists whose work was scraped into the same training datasets have received no such deals, no such confidentiality, and no such protections. They have received, instead, the distinct pleasure of watching machines produce music that sounds remarkably like their own.

The Precedent We've Forgotten

This is not, of course, without precedent. In 1909, the United States Congress passed a Copyright Act that included a provision known as the compulsory mechanical license. The player piano industry had argued that perforated piano rolls were not 'copies' of musical compositions — they were merely instructions for machines. The Supreme Court, in the case of White-Smith Music Publishing Co. v. Apollo Co., had agreed. Congress intervened, creating a licensing scheme that allowed anyone to record a cover of a published song for a fixed fee.

The music industry howled. The arrangement, they said, would destroy the market for original compositions. Why would anyone commission new works when anyone could legally reproduce the old ones for pennies? What actually happened was rather different: the compulsory license created the modern recording industry. It standardised the revenue flow. It made possible the democratisation of music distribution. It also, not incidentally, made a great deal of money for people who owned existing copyrights.

The parallel to our current situation is imperfect but instructive. The AI companies argue that training on copyrighted works constitutes 'fair use' — a transformative act that produces something new, rather than a copy of the original. The copyright holders argue that this is theft dressed in algorithmic clothing. Both sides have lawyers, which means both sides have precedents. Neither side has mentioned the player piano.

◆ Finding 01

THE SCALE OF THE TRAINING DATA

A 2024 study by researchers at the University of Pennsylvania found that LAION-5B, a dataset used to train popular image generators including Stable Diffusion, contains approximately 47 million copyrighted images from stock photography services alone. The dataset includes at least 2.3 million images from Getty Images and 1.1 million from Shutterstock, neither of which licensed their content for AI training.

Source: University of Pennsylvania, 'Auditing Copyright in Large-Scale Datasets,' February 2024

The Argument They Haven't Made

The most honest case for AI training on copyrighted works would go something like this: we have built something extraordinary, and we built it on the backs of every artist who ever posted their work online, and we are sorry but not sorry enough to stop. The machine learns from everything; that is how it becomes good. If we had asked permission first, we would still be asking. The value we have created exceeds the harm we have caused. Pay us now, and we will figure out compensation later.

This argument has the virtue of honesty and the defect of being politically inadmissible. Instead, we get 'fair use' — a legal doctrine designed for criticism, education, and parody, now stretched to cover the wholesale ingestion of human creative output for commercial purposes. The stretching has become so extreme that one wonders whether the doctrine can bear it. In January, a federal judge in Delaware ruled that training AI on copyrighted material could constitute infringement. In March, a different judge in California ruled that it could not. The Copyright Office has scheduled hearings. The appeals will follow. We are, as the lawyers say, in a period of uncertainty.

Maria Schneider, the jazz composer who has become perhaps the most visible advocate for artists' rights in the AI era, has been making this point for two years. She testified before Congress in May 2024, before the Copyright Office in September, and before the Senate Judiciary Committee in January 2026. Each time, she has made the same essential argument: the creative economy is being restructured in real time, and the restructuring is being done by and for the companies that own the machines, not the humans who fed them.

What the Money Reveals

$2.1 Billion
Estimated value of AI music licensing deals signed by major labels in 2025

This figure, reported by Music Business Worldwide, represents agreements between Universal, Sony, and Warner with AI developers. Independent artists have received approximately zero.

The business logic is grimly elegant. Major labels own vast catalogues of existing recordings. AI companies need those recordings for training data. The labels license the data. The AI produces new music that competes with independent artists — the very artists who cannot afford to license their own work because they don't have catalogues large enough to interest the AI companies. The machine, trained on everyone, benefits the few.

In February, Sony announced an AI music tool called 'Soundverse AI' that can generate tracks 'in the style of' its signed artists. The terms of service specify that users cannot generate music that infringes on Sony's own copyrights — a protection not extended to the independent artists whose styles the system also learned. When asked about this asymmetry, a Sony spokesperson said the company was 'committed to protecting all creators.' The statement did not explain how.

◆ Finding 02

THE LABOUR MARKET IMPACT

The Freelancers Union reported in March 2026 that 43 percent of its members in creative fields have lost income to AI-generated competition in the past 18 months. Illustrators reported the highest displacement rate at 61 percent, followed by copywriters at 52 percent and musicians at 38 percent. Average hourly rates for commercial illustration work have fallen 34 percent since January 2024.

Source: Freelancers Union, 'The AI Displacement Survey,' March 2026

The Counterargument, Steelmanned

The strongest case for the AI companies runs as follows: Copyright was designed to incentivise creation by granting temporary monopolies. AI learns from human work in roughly the same way humans do — by exposure, pattern recognition, and synthesis. If a human artist can legally study the works of others and develop a style influenced by them, why should a machine be treated differently? The output is new; the training is transformative; the doctrine of fair use was designed precisely for this kind of productive borrowing. Moreover, requiring permission for all training data would make AI development impossible, which would harm not only technology companies but the consumers and creators who will eventually benefit from these tools.

This is not an absurd argument. It is, in fact, the argument that has persuaded several federal judges, numerous law professors, and the majority of Silicon Valley. It has two serious problems. First, the analogy between human and machine learning obscures more than it reveals: a human who studies Picasso cannot produce ten thousand Picasso-style images per hour at near-zero marginal cost. Second, the 'eventual benefit' to creators remains theoretical while the immediate harm is documented and mounting. The argument asks artists to subsidise their own displacement on the promise of future gains that may never arrive.

What Should Happen

A reasonable policy framework would include three elements. First, mandatory disclosure: AI companies should be required to publish the sources of their training data, allowing artists to know whether their work was used. Second, opt-out mechanisms with teeth: artists should be able to remove their work from training datasets and receive compensation for past use, with penalties for non-compliance. Third, collective licensing: rather than individual negotiations that favour large catalogue holders, a system modelled on the performing rights organisations could collect and distribute royalties to all creators whose work trains AI systems.

The European Union's AI Act, which took effect in February 2025, requires some of these disclosures but lacks enforcement mechanisms. The United States has done less: the Copyright Office's ongoing study of AI and copyright has produced reports but no regulations. The most significant action has come from lawsuits — Getty Images against Stability AI, Sarah Silverman against Meta, and a growing class action by visual artists against Midjourney. The courts are making policy by default because the legislators will not.

The Verdict That Writes Itself

In 1909, Congress decided that the player piano should not destroy the music business but should be required to pay for it. The arrangement was imperfect, controversial, and bitterly contested. It also worked. The mechanical license created a framework that allowed new technology to flourish while preserving some economic value for the humans whose creativity made that technology valuable.

We face a similar choice today, but with one crucial difference: in 1909, the fight was between technology companies and existing rights holders. In 2026, the major rights holders have already cut their deals. The fight is now between the machine and the humans who have no catalogue, no leverage, and no seat at the table — the independent artists, the freelancers, the creators whose work built the machine and who now find themselves competing against it.

The machine does not dream. It calculates probabilities based on patterns it absorbed from human dreamers. This is an extraordinary technical achievement. It is also, in its present form, a theft so large that we have not yet developed the language to describe it. The fox, as noted, is now very interested in henhouse security. One suspects the surviving hens will want a better arrangement.
