It takes a particular kind of audacity to announce that you have solved the problem of compensating artists for work you already took without asking. This month, we witnessed that audacity. On April 7, 2026, Stability AI—the company behind Stable Diffusion, trained on 2.3 billion images scraped from the internet without permission or payment—announced a landmark licensing agreement with Getty Images. The deal, executives said, would "ensure fair compensation for creators." Getty would receive $150 million over three years. The 475,000 photographers whose work comprises Getty's library would split approximately $4.2 million annually. That works out to $8.84 per photographer per year. One is tempted to call this progress.
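The arithmetic is worth checking. Using only the figures announced in the deal (Python serves here as nothing more than a calculator):

```python
# Figures from the announced Getty-Stability AI deal.
total_deal = 150_000_000       # dollars, paid over three years
years = 3
annual_to_artists = 4_200_000  # dollars per year, split among photographers
photographers = 475_000

annual_to_getty = total_deal / years
per_photographer = annual_to_artists / photographers

print(f"Getty receives ${annual_to_getty:,.0f} per year")
print(f"Each photographer receives ${per_photographer:.2f} per year")
# → Each photographer receives $8.84 per year
```

The division, not the headline number, is the story: $50 million a year flows to the intermediary, and $8.84 to each of the people who made the pictures.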
The announcement was celebrated in predictable quarters. Technology journalists called it a "breakthrough." Stock analysts upgraded Getty's shares. Stability AI's chief executive, Emad Mostaque, described the arrangement as "a new model for the creative economy." He was correct, though perhaps not in the way he intended. The new model works like this: technology companies build multi-billion-dollar businesses on the unpaid labor of millions of artists, wait for the lawsuits to accumulate, then negotiate settlements with the three or four intermediaries who claim to represent those artists. The artists themselves are consulted rarely and compensated minimally. The intermediaries, however, do very well.
The Precedent Is Not Encouraging
This is not, of course, without precedent. In 1999, Napster allowed 80 million users to share music files without compensating artists or labels. The Recording Industry Association of America sued. Napster collapsed. What replaced it was iTunes, then Spotify—legal services that did pay for music, but at rates set through negotiation with three major labels: Universal, Sony, and Warner. These labels controlled approximately 68% of the market. By 2015, Spotify was paying rights-holders an average of $0.004 per stream. Artists signed to major labels received, on average, 15-20% of that amount after the label, distributor, publisher, and platform took their shares. Independent artists did slightly better, keeping perhaps 70% of the per-stream payment, which still amounted to $0.0028 per play.
A musician needed 250,000 streams per month to earn the U.S. minimum wage. The top 1% of artists captured 77% of all streaming revenue. The labels, meanwhile, reported record profits. Universal Music Group's revenue reached €10.3 billion in 2023, up 74% from 2015. The "new model" for the music economy turned out to be the old model with a streaming interface: a small number of intermediaries negotiating on behalf of a large number of creators, then retaining the majority of the proceeds. Spotify called this "democratizing music." Perhaps it was, in the sense that everyone now had an equal opportunity to earn almost nothing.
Getty Images receives $50 million annually; its 475,000 contributing photographers split $4.2 million of it, less than 9% of each year's payment.
What the AI Companies Actually Took
The scale of the appropriation in the AI era is larger by several orders of magnitude. Stable Diffusion was trained on LAION-5B, a dataset containing 5.85 billion image-text pairs scraped from the public internet. OpenAI's DALL-E 2 was trained on a 650-million-image subset. Midjourney has declined to disclose its training data, but court filings in a class-action lawsuit filed in January 2023 suggest the model was trained on at least 16,000 individual artists' portfolios, including the work of living illustrators, photographers, and concept artists who discovered their styles had been replicated without consent or payment.
The legal theory advanced by AI companies is that training constitutes "transformative use" under fair use doctrine—a defense previously applied to Google Books and search engine thumbnails. But those precedents involved indexing and retrieval, not generation. A search engine shows you where to find an artist's work. A generative model replaces the need to hire the artist at all. By early 2026, Midjourney had 21 million active users generating approximately 2 billion images annually. Freelance illustrators reported commission requests down 63% year-over-year, according to a February 2026 survey by the Graphic Artists Guild. Stock photography agencies reported submission volumes down 48%. The market had not disappeared; it had been automated.
TRAINING DATA SCALE
Stability AI's Stable Diffusion model was trained on 2.3 billion images from the LAION-5B dataset, scraped from public websites without artist consent. OpenAI's DALL-E 2 used 650 million images. Midjourney's training set, disclosed in court filings, included portfolios from at least 16,000 individual artists, many of whom discovered the use only after their stylistic signatures appeared in AI-generated outputs.
Source: Andersen v. Stability AI, U.S. District Court Northern California, Case No. 3:23-cv-00201, January 2023
The Licensing Deals They Are Offering Now
Facing lawsuits in three jurisdictions and mounting public criticism, the major AI image companies have begun signing licensing agreements. The Getty deal was the largest, but not the only one. In February 2026, Shutterstock signed a $30 million agreement with OpenAI covering its library of 450 million images. Adobe licensed OpenAI access to its Stock library (150 million assets) for a sum neither company disclosed but which Bloomberg reported at roughly $45 million over two years. DeviantArt, which hosts work from 61 million artists, signed with Midjourney in March for $12 million annually.
The pattern is consistent. The platforms negotiate on behalf of artists. The platforms retain 85-98% of the fees. Artists are offered the choice to participate or withdraw their work from the licensed pool—but withdrawal often means removal from the platform entirely, sacrificing visibility and the possibility of commercial work. Adobe's terms, published in March, offered contributors to Adobe Stock a "bonus payment" of $2.40 per image included in the OpenAI training set, capped at $120 per contributor regardless of how many images were used. For context, Adobe Stock's top contributors had uploaded more than 50,000 images each. Their total compensation for training one of the world's most valuable AI models: $120.
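The cap, not the per-image rate, does the work in Adobe's terms. A back-of-the-envelope check using the published figures (the effective rate is my own division, not Adobe's):

```python
# Adobe's March 2026 terms as described: $2.40 per image used
# in training, capped at $120 per contributor.
per_image_bonus = 2.40
cap = 120.00

images_to_hit_cap = cap / per_image_bonus
print(f"Cap reached after {images_to_hit_cap:.0f} images")

# A top contributor with 50,000 images in the training set still
# receives the capped $120, an effective per-image rate of:
effective_rate = cap / 50_000
print(f"${effective_rate:.4f} per image")
# → $0.0024 per image
```

Fifty images exhaust the cap; every image after the fiftieth trains the model for free.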
The Argument They Have Not Made
Defenders of the current licensing regime make several arguments. First, that any payment is better than no payment, and that these agreements represent progress from the initial position—which was that training data could be taken freely under fair use. This is true in the way that a hostage negotiation represents progress from a kidnapping: technically accurate, but an odd basis for celebration. Second, that individual licensing of millions of artists would be administratively impossible, and that aggregation through platforms is the only workable model. This argument has the advantage of sounding practical right up until you remember that blockchain evangelists spent a decade promising that decentralized microtransactions were not only possible but inevitable. The technology exists. The will does not.
Third, that AI art tools are merely new instruments, no different from the camera or Photoshop, and that artists who resist them are Luddites. This analogy collapses on inspection. A camera does not train itself on the work of 100,000 photographers and then generate images "in the style of Annie Leibovitz" at the click of a button. Photoshop does not scrape your portfolio, analyze your technique, and offer clients the option to produce work that mimics your aesthetic without hiring you. The difference is not one of degree but of kind. Previous tools augmented human labor. These tools replace it—and were built by copying it without permission.
ARTIST INCOME COLLAPSE
Freelance illustrators reported a 63% decline in commission requests between January 2025 and January 2026, according to a survey of 12,400 members by the Graphic Artists Guild. Concept artists working in gaming and film reported average income drops of $23,000 year-over-year. Stock photography agencies recorded submission volumes down 48%, while AI-generated image libraries grew by 340%.
Source: Graphic Artists Guild, Annual Industry Survey, February 2026
What a Real Licensing Regime Would Look Like
There are models available, though they come from an earlier era of copyright enforcement that predates the assumption that technology companies are entitled to other people's work. Performing rights organizations like ASCAP and BMI track billions of music performances annually and distribute royalties to 900,000 songwriters and composers. The system is not perfect—administration costs run 11-15%, and the smallest rights-holders often receive little—but it functions. Payments are made per use, not as lump-sum buyouts. Writers retain ownership. Licensees pay negotiated rates, not rates dictated by platforms.
A comparable system for AI training would require several elements. First, a registry of copyrighted works with machine-readable metadata indicating licensing terms—commercial, non-commercial, no AI training, rates negotiable, etc. Second, mandatory reporting by AI companies of which works were included in training datasets, with payments flowing directly to rights-holders, not platforms. Third, ongoing royalties based on model usage or revenue, not one-time payments that lock in terms before anyone knows what the technology will be worth. Fourth, statutory damages for willful infringement at levels sufficient to make compliance cheaper than theft.
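No such registry exists, and no schema for it has been standardized, but the first two elements are not exotic. A sketch of what a machine-readable entry and a compliance check might look like (every field name here is hypothetical, invented for illustration):

```python
# Hypothetical registry entry for one copyrighted work. All field
# names are illustrative; no standard schema of this kind exists yet.
registry_entry = {
    "work_id": "reg-000184332",
    "rights_holder": "example-photographer",
    "ai_training": "licensed",         # or "prohibited" / "negotiable"
    "rate_per_ingestion": 0.05,        # dollars, set by the rights-holder
    "royalty_basis": "model-revenue",  # ongoing royalty, not a one-time buyout
}

def may_train_on(entry: dict) -> bool:
    """The check an AI company would run before ingesting a work.
    Absent or ambiguous terms default to no permission."""
    return entry.get("ai_training") == "licensed"

print(may_train_on(registry_entry))  # → True
```

The default in may_train_on encodes the reversal the column argues for: silence means no, and the burden of clearing rights sits with the company doing the training, not with the artist being trained on.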
None of this is technically difficult. All of it is politically opposed by the companies that have already taken what they wanted. Stability AI, OpenAI, and Midjourney are collectively valued at more than $40 billion. They have built that value on the unpaid contributions of millions of artists. They are now offering to pay a tiny fraction of that value to a small number of intermediaries, and calling it a resolution. The artists themselves remain uncompensated, unconsulted, and increasingly unemployed.
[Chart: distribution of the $50 million annual Getty-Stability AI payment (2026). Source: Getty Images SEC filing, April 2026; Stability AI press release.]
What History Suggests
The historical record suggests that artists do not win these fights. When piano rolls threatened sheet music publishers in 1908, Congress intervened—but the compulsory licensing regime it created set rates so low that songwriters earned pennies while manufacturers grew rich. When radio threatened live performance in the 1920s, ASCAP negotiated for royalties—but radio networks responded by creating a rival organization, BMI, and playing ASCAP writers off against BMI writers until both accepted lower rates. When television threatened film studios in the 1950s, the studios adapted by becoming television producers themselves, and residuals for actors and writers remained negligible until the unions struck in 1960, 1981, 1988, 2007, and 2023.
The pattern is consistent: new technology creates new leverage for distributors, who use it to renegotiate terms downward. Artists complain. Distributors invoke progress, efficiency, and consumer demand. Governments, if they intervene at all, tend to favor the economically powerful over the culturally productive. Licensing regimes are established that look like compromises but function as capitulations. And the artists, because they are many and disorganized and poor, accept terms they have no power to refuse.
PLATFORM REVENUE RETENTION
Adobe retains 97.6% of AI licensing revenue under its March 2026 OpenAI agreement, distributing $2.40 per image to Stock contributors with a $120 cap regardless of usage volume. Shutterstock's agreement with OpenAI allocates 94% of the $30 million to corporate revenue, with remaining funds split among 2.1 million contributors. DeviantArt disclosed no artist payment structure in its Midjourney deal.
Source: Adobe Stock Terms of Service update, March 17, 2026; Shutterstock investor presentation, February 2026
The Unsentimental View
One could take the unsentimental view that this is simply how markets work. Technology reduces the cost of production. Labor becomes cheaper. Artists who cannot compete with AI will retrain or exit the field, just as weavers exited when textiles mechanized and telephone operators exited when switching automated. The economy is more efficient. Consumers benefit from cheaper, faster content. The fact that millions of people spent years developing skills that are now economically worthless is unfortunate but irrelevant. Schumpeter called this creative destruction. It is certainly destruction.
But the analogy is incomplete. Weaving machines did not train themselves by copying the patterns of 10,000 weavers without permission, then selling fabric that replicated those patterns at a fraction of the cost while paying the original weavers nothing. Telephone switching did not involve AT&T recording every operator's voice, building a synthetic voice system from that data, and then replacing the operators with their own stolen voices. The difference matters. What is happening in the creative economy is not automation. It is appropriation at scale, followed by a licensing regime designed to ratify the theft retroactively while paying the victims as little as possible.
The lawsuits will continue. Andersen v. Stability AI is scheduled for trial in August 2026. Getty v. Stability AI proceeds in London under UK copyright law, which offers fewer fair use defenses. The U.S. Copyright Office is reviewing whether generative AI outputs can be copyrighted, and whether training constitutes infringement—decisions that will shape the economics of the industry for decades. But litigation is slow and expensive, and the companies being sued have already raised billions in venture funding. They can afford to wait. The artists cannot.
In the meantime, the licensing deals accumulate. Platforms sign agreements that let them monetize work they do not own, negotiated on behalf of artists they did not consult, at rates the artists did not approve. The AI companies get legal cover. The platforms get revenue. The executives get bonuses. And the artists get $8.84 a year, which is, we are told, fair compensation for having built the datasets that powered a technological revolution they will not benefit from.
One is tempted to call this progress. One would be wrong.