It takes a particular kind of audacity to build a machine designed to manipulate human attention, extract billions in profit from that manipulation, and then insist—with a straight face—that users consented to the arrangement. This week, Meta announced new "transparency features" allowing users to "better understand" how its algorithms work. The company framed this as a commitment to user autonomy. One is tempted to ask: autonomy to do what, exactly? To understand the cage you're in is not the same as having the key.
The consent fiction—the idea that users have meaningfully agreed to what happens to them on digital platforms—has become the foundational myth of the attention economy. We clicked "I agree." We scrolled past the terms of service. We granted permissions we didn't read. And therefore, the logic goes, we chose this. But consent obtained through deliberate obfuscation, cognitive overload, and design patterns explicitly engineered to bypass rational decision-making is not consent. It is something else entirely. The law calls it adhesion. Behavioral scientists call it manipulation. The rest of us might call it what it is: a con.
The Precedent We've Forgotten
This is not, of course, without precedent. In 1914, Henry Ford introduced the assembly line and with it a new understanding of labor efficiency. Workers were not asked whether they consented to the pace of the line; their consent was implied by their presence in the factory. It took decades of labor organizing, strikes, and eventually federal regulation to establish that workers had rights the factory could not simply engineer away. The eight-hour workday, workplace safety standards, the weekend—none of these emerged from corporate benevolence. They emerged from a collective recognition that consent under conditions of severe power asymmetry is not really consent at all.
We are living through the digital equivalent of that moment, except this time the assembly line is invisible and the product being manufactured is you. Or rather, a model of you—your preferences, vulnerabilities, attention patterns, emotional triggers—accurate enough to predict and shape your behavior. The factory is your phone. The shift never ends. And the consent form, should you ever try to read it, runs to 15,000 words of legal language designed by teams of attorneys whose job is to maximize corporate latitude while minimizing legal liability.
THE UNREADABLE CONTRACT
In 2022, researchers at the University of Copenhagen analyzed the terms of service agreements for the 50 most popular digital platforms. Average reading time, assuming college-level comprehension: 4 hours and 32 minutes. Estimated time users actually spent reading them: 8 seconds. Facebook's privacy policy alone requires graduate-level reading ability and 18 minutes of sustained attention.
Source: University of Copenhagen, Digital Rights Laboratory, June 2022
The platforms know this. They have always known this. The illegibility is not a bug; it is the core feature. Because if users actually understood what they were agreeing to—that their attention would be auctioned in real time to the highest bidder, that their emotional responses would be tested and optimized for engagement regardless of accuracy or harm, that their children's developing brains would be exposed to variable reinforcement schedules calibrated to maximize dopamine release—some percentage of them might say no. And the business model cannot afford no.
The Argument They Haven't Made
The defense offered by platform executives, when pressed, takes a predictable form: users are free to leave. If you don't like the terms, don't use the service. This argument might carry weight if digital platforms were optional luxuries rather than essential infrastructure. But when your employer communicates via Slack, your child's school assigns homework through Google Classroom, your local government posts updates only on Facebook, and professional opportunity requires a LinkedIn presence, the choice to abstain is not really a choice. It is economic and social exile.
Moreover, the platforms have spent two decades building moats around user data and social graphs precisely to make switching costs prohibitively high. Your contacts are on WhatsApp. Your photos are in iCloud. Your professional network is on LinkedIn. Migrating to an alternative platform—assuming one exists with comparable functionality—means abandoning years of accumulated social capital and digital infrastructure. This is not a competitive market. It is a series of walled gardens with deliberately high exit costs, and calling the decision to stay "consent" is like calling a hostage's cooperation voluntary.
What Cognitive Science Reveals
The deeper problem is that the platforms have become sophisticated in exploiting cognitive vulnerabilities that users cannot consciously defend against. This is not speculation. It is documented in the platforms' own internal research. In 2021, whistleblower Frances Haugen released thousands of pages of internal Facebook documents showing that the company's researchers had identified specific harms—increased anxiety and depression among teenage girls, political polarization driven by algorithmic recommendations, the spread of vaccine misinformation—and that senior executives had chosen not to mitigate those harms when doing so would reduce engagement metrics.
THE DELIBERATE EXPLOITATION
Internal Meta research from 2019, disclosed in the Facebook Papers, found that 13.5% of teen girls in the UK said Instagram made their suicidal thoughts worse, and 17% said it worsened their eating disorders. The company's response was not to redesign the product but to commission additional research on how to expand its youth user base. A 2020 presentation to executives was titled "Teen Mental Health Deep Dive."
Source: Facebook Papers, Wall Street Journal investigation, September 2021
This is where the consent argument collapses entirely. Informed consent requires that the consenting party understand what they are agreeing to. But the platforms have deliberately withheld the information necessary for that understanding—indeed, they have spent billions ensuring users do not understand how the systems work, because that understanding would threaten the business model. When a doctor prescribes medication, they are required by law to disclose known side effects. When a platform deploys an algorithm known to increase depression in adolescents, no such disclosure is required. We have built a regulatory framework that treats clicks as consent and calls it freedom.
[Data graphic. Average across major platforms (Meta, Google, Amazon) as of 2024: location, contacts, search history, purchase behavior, biometric data, and thousands of behavioral micro-signals users never knowingly disclosed.]
The scale of data extraction is itself a kind of cognitive violation. Most users could not list ten categories of data their devices collect; the actual number runs into the hundreds. Location tracking continues even when location services are "disabled." Microphones activate for "voice assistant optimization." Biometric data—typing patterns, scrolling speed, facial expressions captured by front-facing cameras—trains models that predict emotional states and psychological vulnerabilities. None of this appears in the eight seconds users spend skimming the consent form. All of it is technically legal.
The Counterargument, Steelmanned
The strongest defense of the current system goes something like this: Personalized services require personal data. Users enjoy those services—targeted recommendations, social connection, free access to tools that would otherwise cost thousands of dollars. The trade is transparent: you give us your data, we give you Gmail, Google Maps, Instagram, and the ability to video-call your grandmother in Bangalore for free. That trade has created enormous consumer surplus and democratized access to information and connection. Yes, the consent forms are long, but adults are responsible for their own decisions. Infantilizing users by suggesting they cannot consent ignores their agency.
This argument is seductive. It is also incomplete. The problem is not that users lack agency in principle; it is that the platforms have systematically designed environments to override that agency in practice. Behavioral economics has demonstrated for decades that human decision-making is predictably irrational—we discount future harms, we are vulnerable to framing effects, we follow social proof even when it contradicts our interests. Casinos know this, which is why they remove clocks and pump in oxygen. Platforms know this too, which is why infinite scroll eliminates stopping cues and why notification badges are red (the color that triggers the strongest urgency response).
THE DESIGN INTENT
In 2018 testimony before the U.S. Senate, former Google design ethicist Tristan Harris detailed how platforms employ "persuasive technology" techniques developed in behavioral psychology labs. Variable reward schedules (the same mechanism behind slot machine addiction), social reciprocity triggers, and "fear of missing out" exploits are built into core product features. Internal A/B testing measures not user satisfaction but "time on site" and "daily active users"—metrics that incentivize compulsion over well-being.
Source: Senate Commerce Committee Hearing on Tech Platform Responsibility, April 2018
The distinction matters. A market economy assumes rational actors making informed choices. But when one party to the transaction employs neuroscientists and behavioral psychologists to exploit the cognitive vulnerabilities of the other party—and when that exploitation is the business model—we are no longer talking about a market transaction. We are talking about a system of extraction that resembles nothing so much as the predatory lending practices that led to the 2008 financial crisis, except this time the asset being stripped is not home equity but human attention and autonomy.
What Regulation Requires
If the consent framework is broken, what should replace it? The answer is not complicated, though the politics will be. We already have regulatory models for industries that pose cognitive or physical harm: pharmaceuticals, financial services, tobacco, gambling. All of these sectors are required to disclose risks, limit predatory practices, and in some cases, restrict access to vulnerable populations. None of this eliminates consumer choice. It simply ensures that choice is exercised under conditions that are not deliberately designed to override rational decision-making.
Applied to digital platforms, this would mean several concrete changes. First, a ban on manipulative design patterns—infinite scroll, autoplay, notification badges—that exploit cognitive vulnerabilities. These are not essential to platform functionality; they are engagement optimization tactics dressed up as user experience. Second, mandatory algorithmic transparency: platforms should be required to disclose, in plain language, what signals their algorithms optimize for. If the answer is "maximizing time on site regardless of content quality or user well-being," users deserve to know that.
Third, data minimization and portability. Platforms should collect only the data necessary for the service users explicitly requested, and users should be able to export their data and move to competitors without losing their social graphs. This is not radical; it is how telephone number portability works, and it transformed competition in telecommunications. Fourth, special protections for minors. If we do not allow casinos to target children, we should not allow platforms to deploy the same psychological techniques on adolescents whose prefrontal cortices are still developing.
The European Union's Digital Services Act and Digital Markets Act, whose obligations began applying in 2023 and 2024 respectively, move in this direction—banning certain dark patterns, requiring algorithmic transparency for very large platforms, and imposing interoperability requirements. Early enforcement has been uneven, but the framework exists. The United States, by contrast, remains in a state of regulatory paralysis, with Congress unable to pass even modest privacy protections despite bipartisan public support.
The Stakes We're Ignoring
The stakes here are not abstract. Cognitive liberty—the right to control one's own mental processes and attention—is a foundational prerequisite for autonomy. If external actors can reliably predict and manipulate your behavior without your knowledge or meaningful consent, then the autonomy we claim as the basis of liberal democracy becomes a polite fiction. You cannot have self-governance without a self that is, to some meaningful degree, self-determining.
This is not hypothetical. Research published in Nature in 2024 found that exposure to platform algorithms significantly altered political attitudes, increased affective polarization, and reduced trust in institutions—effects that persisted even after platform use ended. The researchers described it as "externally induced belief change at scale." In other words: mass manipulation, but legal because users clicked "I agree" on a form designed to be incomprehensible.
THE MEASURED HARM
A 2023 study by the Oxford Internet Institute tracked 12,000 participants across six countries over two years. Those in the highest quartile of social media use showed measurable decreases in deliberative reasoning, increased susceptibility to misinformation, and reduced ability to maintain attention on long-form content. The effects were dose-dependent and appeared within six months of heavy use.
Source: Oxford Internet Institute, Digital Behavior and Cognition Study, March 2023
The platforms will argue, as they always do, that these are unintended consequences, that they are working to address the problems, that regulation will stifle innovation. But the internal documents tell a different story. They knew. They measured. They optimized for engagement anyway, because the alternative was leaving money on the table. This is not market failure; it is the market working exactly as designed, extracting maximum value with minimum accountability.
There is a grim historical irony here. The liberal democratic tradition emerged in part as a response to totalitarian systems that sought to control not just behavior but thought itself. We built elaborate safeguards—free speech, free assembly, privacy rights—to protect the inner life of the citizen from state intrusion. And then we handed that inner life over to a handful of corporations who discovered that manipulating it was vastly more profitable than respecting it. We did this voluntarily, we are told. We consented. The consent form was on page 47, subsection 12(c), in 8-point font. We simply failed to read it.
One is tempted to observe that this is an odd definition of freedom—being free to be manipulated, so long as the manipulation is profitable and the consent form is theoretically readable. But then, perhaps that is the point. In an attention economy, the product being sold is not the service. It is you. And you cannot meaningfully consent to your own commodification when the consent form was written by the people doing the selling. The factory workers of 1914 learned this. Two generations of labor organizing taught them that contracts signed under conditions of extreme power asymmetry are not really contracts at all. We are overdue for the same realization. The assembly line is already running. The question is how many more years we will spend pretending we chose to be on it.