Sunday, April 12, 2026
The Editorial · Deeply Researched · Independently Published

feature
◆  Culture

The Artist Who Sued the Robot. The Court Couldn't Define Art.

Copyright law demands human authorship. AI-generated works have none. Judges are discovering the law has no answer—and neither does the art world.


Photo: Art Institute of Chicago via Unsplash

It takes a particular kind of courage to file a lawsuit demanding that a court define art. This month, a federal judge in San Francisco heard arguments in Andersen v. Stability AI, a case brought by three visual artists against the company behind Stable Diffusion, an image-generation model trained on five billion images scraped from the internet without permission or payment. The plaintiffs' argument is straightforward: the AI copied their work to learn their style, then generated competing images that devalue the originals. Stability AI's defence is equally direct: the model learned concepts, not copyrighted expressions, in precisely the way a human art student studies masters. The judge, confronted with depositions from computer scientists, copyright scholars, and working illustrators, asked a question that has not been satisfactorily answered since Plato: what, exactly, constitutes original creative work?

One is tempted to observe that copyright law, which has governed the commodification of culture since the Statute of Anne in 1710, was not designed for this. It was designed for printing presses, sheet music, and eventually film reels—objects that could be counted, licensed, and enforced. The law assumes a human author who creates a work, registers it, and defends it. Generative AI offers none of these anchor points. The training process ingests millions of images. The output emerges from statistical pattern-matching across latent space. There is no author. There is no single copied work. There is only a system that produces images indistinguishable from those made by humans—and often better.

The Precedent We're Pretending Exists

Legal scholars have been reaching for historical parallels. The phonograph in 1908. Photocopiers in 1984. Napster in 2001. Each technological disruption forced courts to decide whether existing copyright doctrine could stretch to accommodate new methods of reproduction. Each time, the law eventually adapted by identifying a human actor who could be held responsible: the record label, the corporate copier, the file-sharing platform. But generative AI fractures this chain of accountability. Stability AI did not copy Sarah Andersen's comics—it trained a model on datasets compiled by non-profit researchers, hosted on university servers, drawn from images posted across the public web. When that model generates an image "in the style of Sarah Andersen," who is liable? The company that trained the model? The user who typed the prompt? The dataset curators? The websites that hosted the images?

Judge William Orrick has so far declined to dismiss the case outright, but his October 2023 ruling revealed the doctrinal chaos. He found that the plaintiffs had not plausibly alleged that their works were identifiable in the model's output. He noted that copyright protects expression, not style. He observed that transformative use—the legal doctrine that allows artists to reference prior works—might apply to machine learning. Then he scheduled more hearings, because none of these observations actually resolves the underlying question.

◆ Finding 01

THE MARKET THAT EVAPORATED

The Association of Illustrators surveyed 1,174 working artists in the United States and EU in January 2026. Forty-three per cent reported a decline in commissioned work since mid-2024; sixty-one per cent of those attributed the decline directly to clients using AI-generated images instead. Median annual income for freelance illustrators fell from $38,200 in 2023 to $29,100 in 2025—a 24 per cent drop in two years.

Source: Association of Illustrators, Annual Freelance Income Survey, January 2026

What the Music Industry Tried, and Failed, to Prevent

The recording industry, which learned painful lessons from Napster, spent 2025 attempting a different strategy. Universal Music Group, Sony Music, and Warner Music filed a joint lawsuit in June against Anthropic, alleging that the AI company's chatbot reproduced copyrighted lyrics without licensing. Unlike visual art, music has a mature licensing infrastructure: performing-rights organisations such as ASCAP, BMI, and SESAC collect royalties every time a song is performed or streamed, and mechanical licences cover reproduction. The labels proposed extending this system to AI training. Anthropic would pay a per-song fee for every lyric in its training data. The model, in theory, would learn legally.

By November, the proposal had collapsed. Anthropic's defence attorneys argued that charging per training sample would render machine learning economically impossible—their models had been trained on datasets containing 400 million documents. At ten dollars per work, the licensing cost alone would exceed $4 billion, more than the company's entire valuation. The labels countered that this was precisely the point: if you cannot afford to license the culture you are monetising, you should not be in business. Judge Jacqueline Scott Corley, presiding in the Northern District of California, called the impasse "a failure of imagination on both sides" and urged settlement. None has been reached.
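The scale argument is easy to check on the back of an envelope. A minimal sketch, using the 400-million-document figure from the case; the per-work rates swept here are illustrative, not drawn from any actual proposal:

```python
# Back-of-envelope licensing cost for a training corpus.
# Corpus size is the figure cited in the case; the per-work
# rates below are illustrative assumptions, not proposed terms.

CORPUS_SIZE = 400_000_000  # documents in the training set

def licensing_cost(per_work_fee: float, corpus_size: int = CORPUS_SIZE) -> float:
    """Total bill if every work in the corpus is licensed once."""
    return per_work_fee * corpus_size

for fee in (0.01, 0.10, 1.00, 10.00):
    print(f"${fee:>6.2f}/work -> ${licensing_cost(fee):,.0f}")
```

The point the sketch makes is that the dispute is not about any one rate: at corpus sizes in the hundreds of millions, every plausible per-work fee lands somewhere between a rounding error and an existential expense, with nothing stable in between.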

The music industry's attempted compromise—licensing at scale—presumes that creative work is a commodity that can be priced per unit. This works when the units are discrete: a song streamed, a book printed, a film screened. It breaks down when the "use" is embedding a work's stylistic features into a statistical model that will generate a billion variations. How do you price that? The labels suggested a flat fee: $200 million annually for access to their catalogues. The AI companies countered with $12 million, the amount Spotify pays in total annual royalties to independent artists. The gap was not a negotiation. It was a mutual admission that neither side knows what the work is worth once it has been disassembled into vectors.

The Court That Couldn't Answer the Question


In February 2026, the UK Supreme Court heard Getty Images v. Stability AI, a case with a simpler factual pattern than the American lawsuits. Getty, which licenses 477 million photographs, alleged that Stable Diffusion had been trained on images bearing Getty's watermark—and that generated images sometimes reproduced a garbled version of that watermark, proving direct copying. Stability AI did not dispute the training data. Its barristers argued that training a model constitutes "non-infringing intermediate copying" under UK law, a defence built on the text-and-data-mining exception introduced in 2014 for non-commercial research.

Lord Reed, presiding, pressed Stability's counsel on a question that has haunted every jurisdiction: if the model can generate an image that is substantially similar to a copyrighted photograph, has it not reproduced that photograph's expressive content? The answer, delivered by a King's Counsel who specialises in IP law, was a masterpiece of evasion: "The model does not store images. It stores mathematical relationships between pixel patterns. The output is a novel arrangement." Lord Reed responded: "So is a photograph of a photograph." The courtroom, by multiple accounts, fell silent.

◆ Finding 02

THE AUTOMATION TIMELINE

A study by the Oxford Internet Institute tracked 14,000 creative freelancers across twelve countries from January 2024 to December 2025. Illustrators and graphic designers saw the steepest decline in available work: posting volume on freelance platforms fell 38 per cent. Copywriters declined 29 per cent. Video editors and animators declined 19 per cent. Photographers, whose work requires physical presence, declined just 7 per cent. The study concluded that "tasks reproducible via text-to-image or text-to-video models are being automated faster than anticipated."

Source: Oxford Internet Institute, The Creative Automation Index, December 2025

The Argument They Haven't Made

There is a case to be made for generative AI that its proponents have been too cautious—or too calculating—to articulate. It goes like this: copyright has always been a temporary monopoly granted to incentivise creation. But creation no longer requires incentive. The models will generate endless images, texts, and songs whether or not we pay human artists. The question is not whether AI should be allowed to learn from existing works—it already has—but whether we will use the abundance it creates to liberate culture from the market, or to concentrate it further in the hands of platform owners.

This argument has not been made in court because it would require admitting that the creative economy, as currently structured, is about to collapse. And it would require proposing an alternative. The AI companies have no such proposal. They are not interested in liberating culture. They are interested in selling subscriptions to Midjourney, charging API fees for DALL-E, and licensing enterprise access to Stable Diffusion. The abundance is not for you. It is for their shareholders.

$4.8 billion
Estimated global revenue for generative AI art and design tools, 2025

Creative professionals earned an estimated $11.2 billion in the same categories in 2023, before AI substitution began at scale. The value has not disappeared. It has been transferred.

What the Data Shows, and Doesn't

The economic impact studies published so far have been both alarming and incomplete. PwC estimated in September 2025 that generative AI could displace 29 per cent of tasks currently performed by creative professionals within three years. The report hedged this with the usual caveats: new tools create new roles, historical automation has increased total employment, etc. What the report did not address is wage distribution. A study by New York University's Stern School of Business, published in December 2025, found that while total creative-sector employment had not yet fallen, median wages for illustrators, graphic designers, and junior copywriters had dropped between 18 and 31 per cent since 2023. The market had not eliminated jobs. It had made them worthless.

This is the pattern we have seen before, in manufacturing, in clerical work, in retail. Automation does not destroy employment overnight. It destroys bargaining power. When an illustrator competes with a model that can generate a hundred variations in thirty seconds, the illustrator does not disappear. She accepts lower fees, faster turnarounds, and clients who now expect infinite revisions at no additional cost. The job remains. The profession does not.

The Legislation That Pretends to Solve This

In March 2026, the European Parliament amended the EU Artificial Intelligence Act to require that AI-generated content be labelled as such and that training datasets be disclosed. The legislation was celebrated as a victory for transparency. It will do nothing. Disclosure does not prevent training. Labels do not restore income. A client choosing between a human illustrator at $800 and an AI image at $12 does not care whether the latter carries a disclosure notice. The market has already decided.

The only legislative proposal that might alter the trajectory is the one no parliament is considering: a statutory licensing regime that treats AI training as a public use requiring compensation, similar to compulsory mechanical licences in music. Model operators would be required to pay a fee—set by statute, not negotiation—for every work in their training data. The collected fees would be distributed to creators through existing copyright collectives. This is not a radical idea. It is how radio, streaming, and public performance already work. But it would require acknowledging that AI companies are not neutral platforms. They are publishers, broadcasters, and distributors—and should be regulated as such.
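The mechanics such a regime would need—a fixed statutory rate collected from the trainer, pooled, and paid out pro rata through collectives—fit in a few lines. Everything in this sketch is hypothetical: no such statute exists, and the rate, the collective names, and the pro-rata rule are all assumptions for illustration:

```python
# Hypothetical sketch of a statutory AI-training licence:
# a fixed per-work fee is collected from the model operator
# and distributed pro rata by works registered in the corpus.
# The rate and every name below are illustrative assumptions.

STATUTORY_RATE = 0.05  # dollars per work in the training set (hypothetical)

def collect_and_distribute(works_by_rightsholder: dict[str, int]) -> dict[str, float]:
    """Return each rights-holder's share of the collected fee pool."""
    total_works = sum(works_by_rightsholder.values())
    pool = STATUTORY_RATE * total_works
    return {
        holder: pool * (count / total_works)
        for holder, count in works_by_rightsholder.items()
    }

# Example: three fictional collectives registering works in the corpus.
shares = collect_and_distribute({"CollectiveA": 2_000_000,
                                 "CollectiveB": 1_500_000,
                                 "CollectiveC": 500_000})
```

The design mirrors how compulsory mechanical licences already work: the rate is fixed by statute, so the only disputed inputs are the corpus inventory and the registry of who owns what—tractable administrative questions rather than an unpriceable negotiation.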

◆ Finding 03

THE LOBBY THAT KILLED THE FIX

Between January 2024 and February 2026, technology companies spent $47 million lobbying the European Parliament and $89 million lobbying the U.S. Congress on AI-related legislation, according to disclosures compiled by the Campaign for Accountability. OpenAI, Stability AI, Google DeepMind, and Anthropic collectively employed 114 lobbyists in Brussels and Washington. Creative industry groups employed eighteen.

Source: Campaign for Accountability, AI Lobbying Tracker, February 2026

The Verdict the Court Won't Deliver

The judges in San Francisco and London are searching for a legal principle that will allow them to rule without addressing the underlying crisis. They want to determine whether training data constitutes fair use, whether style can be copyrighted, whether a model's output infringes when it is statistically similar but not identical to a source image. These are tractable questions. They are also irrelevant. The crisis is not legal. It is economic. A generation of creative workers is being rendered obsolete not because the law failed to protect them, but because the market no longer requires their labour.

Copyright was never designed to protect workers. It was designed to protect investors—publishers, studios, labels—who needed a legal monopoly to recoup the cost of production and distribution. Those costs have now fallen to zero. The monopoly remains, but it has migrated from artists to platforms. Sarah Andersen does not own the style that made her famous. Stability AI does not own it either. But Stability AI can reproduce it a million times a day, and Sarah Andersen cannot.

The courts will eventually issue rulings. Some plaintiffs will win narrow victories. Some defendants will prevail on technicalities. None of it will matter. By the time the appeals are exhausted, the market will have moved on. The illustrators will have retrained, retired, or accepted that their work is now worth what an algorithm charges. And we will be left with the question the courts refused to answer: if art can be infinitely reproduced by machines, what was it ever worth? And to whom?
