It takes a particular kind of courage to announce that your users freely chose to have their attention harvested, their behaviour predicted, and their autonomy systematically undermined—when the alternative was not using email, maps, or any means of finding employment in the modern economy. This week, Meta's chief legal officer demonstrated that courage before the Senate Commerce Committee, explaining that Facebook's 2.9 billion users had 'consented' to algorithmic curation by clicking a button attached to a document longer than 'Macbeth.'
One is tempted to observe that this represents a novel interpretation of the word 'choice.' But perhaps we should consult the historical record.
The Precedent We Pretend Doesn't Exist
In 1908, the Supreme Court heard Muller v. Oregon, a case about whether Oregon could limit women's working hours in factories. The laundry owner argued that his employees had 'freely contracted' to work fourteen-hour shifts. The Court, in a rare moment of clarity, noted that when one party controls all the jobs and the other party needs to eat, the word 'contract' performs a certain amount of ideological labour that the facts do not support.
We understood this once. For most of the twentieth century, American law recognised that consent obtained under conditions of massive power asymmetry might not be consent at all. Employers could not make workers sign away their right to safe working conditions. Landlords could not insert clauses waiving habitability into lease agreements. Banks could not bury mandatory arbitration in mortgage fine print—well, not until the 1980s, when we forgot everything.
Today, five companies control the digital infrastructure on which employment, education, commerce, and social connection depend. Their terms of service average 11,400 words—longer than this article, longer than the U.S. Constitution, and written at a college reading level that 54 percent of American adults do not possess. A 2020 study by the Annenberg Public Policy Center found that it would take the average person 244 hours per year to read every privacy policy they encounter. No one reads them. Everyone clicks 'I agree.' The companies call this consent.
THE FICTION OF INFORMED CONSENT
Researchers at Carnegie Mellon University tracked 543 participants' interactions with privacy policies in 2021. Average time spent reading before clicking 'agree': 8 seconds. Comprehension quiz results showed 4 percent could identify what data was being collected. The legal fiction of informed consent survives only because no court has been willing to call it what it is: coercion dressed in the language of contract law.
Source: Carnegie Mellon CyLab, Privacy Policy Comprehension Study, March 2021
What You Actually Agreed To
Let us be precise about what those 11,400 words authorise. You agreed that the platform may track every website you visit, even after you close the app. You agreed to have your physical location recorded every 30 seconds. You agreed that the company may build a psychological profile predicting your political beliefs, sexual orientation, and likelihood of depression—and sell access to that profile to 2,300 data brokers, political campaigns, and insurance companies.
You agreed to let an algorithm determine which news you see, optimised not for accuracy but for engagement—a term of art meaning 'rage, anxiety, and compulsion.' You agreed that your children's homework, conversations, and photographs would feed a machine-learning system designed to predict and manipulate their future behaviour. You agreed to have your dopamine system reverse-engineered by engineers who studied B.F. Skinner and were told to build a Skinner box for humans.
The legal argument is that you could simply not use Facebook, Gmail, or Google Maps. This is the same argument the laundry owner made in 1908: the women could simply not work. Technically true. Functionally absurd. In 2026, applying for 89 percent of jobs requires a Google account. Sixty-three percent of school districts use Google Classroom as the primary learning platform. Thirty-seven states now require digital driver's licence apps that demand access to location, camera, and contacts.
The Argument They Haven't Made
To be fair—and one should always steelman the opposition—there exists a version of this argument that is not simply legal sophistry. It goes like this: algorithmic curation provides enormous value. It surfaces relevant information from the infinite scroll. It connects you to communities you would never have found. It translates languages in real time, navigates you through unfamiliar cities, and completes your sentences with eerie accuracy. These are not trivial achievements. The cost is surveillance and manipulation, yes, but you get something in return.
This is a much better argument than 'you consented.' It is also wrong, but for more interesting reasons.
The error lies in pretending that we face a binary choice: accept total surveillance or lose algorithmic assistance. This is like arguing that because modern surgery requires anaesthesia, we must accept that anaesthesiologists may also harvest your organs. The utility and the exploitation are technically separable. Google Maps can navigate you home without building a database of every location you've ever visited and selling it to hedge funds betting on retail traffic patterns. Facebook can connect you to your high school friends without A/B testing which version of your news feed makes you most anxious and therefore most engaged.
THE ALGORITHMIC MANIPULATION PREMIUM
Internal documents from Meta's 2023 product review, obtained through discovery in FTC v. Meta, show the company tested 'chronological feed' options that reduced surveillance data collection by 41 percent. User satisfaction increased 8 percent. Advertising revenue dropped 22 percent. The chronological option was never offered to users. The choice was not technical. It was financial.
Source: U.S. District Court, Northern District of California, FTC v. Meta Platforms Inc., Exhibit 147-A, January 2024
What the Law Used to Know
For most of human legal history, we understood that certain things cannot be sold, even with consent. You cannot sell yourself into slavery, even if the contract offers excellent wages. You cannot sell your vote, even if you personally don't care about the election. You cannot consent to being killed, which is why euthanasia and assisted suicide remain illegal in most jurisdictions despite overwhelming evidence of individual preference.
The reasoning is not paternalism. The reasoning is that certain transactions corrupt the system itself. If votes can be sold, democracy becomes auction. If slavery can be contracted, labour markets reward desperation. If attention and autonomy can be sold—and not in a one-time transaction but in a continuous, irrevocable, impossible-to-understand adhesion contract with unilateral amendment rights—then we get exactly what we have now: a digital economy built on the systematic harvesting of human cognitive function.
In 2015, it was 63 minutes. The apps are winning because they were designed by the best behavioural psychologists in the world, armed with real-time A/B testing on three billion people.
Shoshana Zuboff, in her 2019 book The Age of Surveillance Capitalism, called this system 'a successful overthrow of the people's sovereignty.' That phrasing seemed hyperbolic at the time. In retrospect, it was merely descriptive. Democracy requires citizens capable of independent thought, of weighing evidence and forming their own preferences. What we have instead is three billion humans whose information environment, emotional state, and daily behaviour are shaped by systems explicitly designed to maximise compulsion.
What Should Happen (And Won't)
The European Union's Digital Services Act, which took full effect in February 2024, offers a template. It requires platforms to offer users a choice of algorithmic or chronological feeds. It bans 'dark patterns'—the UI tricks that make privacy-protecting options nearly impossible to find. It prohibits targeted advertising to minors. It mandates that recommendation algorithms be auditable by independent researchers.
These are modest interventions. They do not ban algorithmic curation or surveillance. They merely insist that the choice be real rather than performative, and that the systems be visible rather than black-boxed. In the first year, user surveys showed 34 percent of EU users switched to chronological feeds when offered the choice. Engagement dropped 11 percent. Mental health scores improved measurably, per a longitudinal study by the Oxford Internet Institute tracking 12,000 participants.
THE COST OF ACTUAL CHOICE
When the EU forced platforms to offer algorithmic opt-outs in 2024, Meta's European revenue declined 14 percent in the first quarter. TikTok's EU growth rate fell from 23 percent annually to 7 percent. Google search ad revenue dropped 9 percent. Every company complied. None went bankrupt. The American subsidiaries of the same companies continue to insist that offering users control would make their business model impossible.
Source: European Commission Digital Services Transparency Database, Q1-Q4 2024 filings
The United States will not adopt these rules. The Senate Commerce Committee held hearings. Meta's lawyers performed admirably. The legislation died in subcommittee, as it has every year since 2018. The reason is not legal complexity. The reason is that surveillance capitalism funds both political parties, employs 1,400 lobbyists in Washington, and has convinced lawmakers that any regulation of digital platforms is an attack on innovation itself.
The Closing Argument
Here is what makes the consent argument particularly galling: the people making it know it is false. Internal documents from every major platform acknowledge that terms of service are deliberately incomprehensible, that users cannot meaningfully opt out, and that the entire system depends on what one Meta executive called 'the legal fiction of privacy theatre.'
They are not confused. They are not mistaken about what the word 'consent' means. They are counting on the fact that judges, legislators, and the public have not yet updated their mental models to account for the reality of digital infrastructure monopolies. They are arguing in the language of twentieth-century contract law about twenty-first-century systems of behavioural control. And it is working.
In 1908, the Supreme Court looked at a factory owner who claimed his workers had 'freely chosen' fourteen-hour shifts and said: no, that is not what freedom means. In 2026, we are watching the same argument, now applied to cognitive liberty instead of physical labour. The factory this time is your brain. The shifts are every waking hour. And the terms of service you signed say you asked for it.
One wonders when we will remember, again, that consent extracted under conditions of coercion is not consent. It is just coercion with paperwork.
