It takes a particular kind of genius to persuade two billion people that they are freely choosing to spend four hours daily on applications specifically engineered to prevent them from leaving. This week, Meta's quarterly earnings call offered shareholders the satisfying news that 'time spent' on Instagram had increased another seven percent year-over-year. The company attributed this success to 'AI-powered content recommendations.' One is tempted to observe that the tobacco industry, in its final decades of credibility, employed similar euphemisms for addiction.
The comparison is not hyperbole. It is understatement. Cigarettes, for all their murderous efficiency, could only destroy your lungs. The algorithmic attention economy is after something more intimate: the capacity to form your own intentions in the first place. What Philip Morris did to bronchial tissue, recommendation systems are doing to volition itself — and unlike lung cancer, this particular pathology leaves no visible trace on an X-ray.
The Precedent We've Forgotten
In 1957, a market researcher named James Vicary claimed he had flashed the words 'Drink Coca-Cola' and 'Eat Popcorn' during a film screening at rates too fast for conscious perception. Sales, he reported, had increased dramatically. The claim was later exposed as fabrication, but the scandal it produced was instructive: the American public was genuinely horrified by the prospect that their decisions might be manipulated below the threshold of awareness. The FCC moved to ban subliminal advertising. Congress held hearings. The notion that corporations might bypass conscious deliberation to influence behaviour was treated, quite correctly, as an existential threat to consumer autonomy.
We are now, it is worth noting, living in the world Vicary falsely claimed to have created — except that the manipulation is not subliminal but supraliminal, not hidden but ambient, not brief but continuous. And somehow, the horror has dissipated. Perhaps because the manipulation arrived gradually, like a frog in warming water. Perhaps because the manipulators had the good sense to call it 'personalization.'
THE SCALE OF ALGORITHMIC DECISION-MAKING
The average smartphone user now encounters approximately 35,000 algorithmically mediated micro-decisions daily, from content ranking to notification timing to predictive text completion. Research from Carnegie Mellon's Human-Computer Interaction Institute found that users are consciously aware of algorithmic intervention in fewer than 3% of these encounters.
Source: Carnegie Mellon University, Human-Computer Interaction Institute, 'Algorithmic Awareness Study,' February 2026
The Argument They Haven't Made
The technology industry's defence, such as it is, runs roughly as follows: users freely choose to engage with these platforms; users can leave at any time; the recommendations merely surface content users would have wanted anyway; and besides, advertising has always tried to influence behaviour, so what's different? This argument has the superficial plausibility of a man explaining that he didn't push you off the cliff — he merely made standing on solid ground feel unbearable until you jumped.
Let us steelman the case. Recommendation algorithms, the industry might argue, are simply tools that reduce cognitive load. In a world of infinite content, they help users find what they want faster. This is a service, not a manipulation. The user remains the principal; the algorithm is merely an agent executing inferred preferences.
The problem with this defence is that it requires us to ignore what the algorithms are actually optimised for. They are not optimised for user satisfaction, or user flourishing, or even user preference-fulfilment. They are optimised for engagement — which is to say, for the maximisation of time spent and attention captured, regardless of whether that attention serves the user's actual interests or their stated intentions. An algorithm that helped you find what you wanted and then let you leave would be, from the platform's perspective, a catastrophic failure.
What History Suggests
The philosopher Harry Frankfurt, in his classic 1971 essay 'Freedom of the Will and the Concept of a Person,' drew a crucial distinction between first-order desires (wanting something) and second-order desires (wanting to want something). A drug addict, Frankfurt observed, might have a first-order desire for heroin while simultaneously having a second-order desire not to want heroin. The tragedy of addiction is precisely this conflict between what we want in the moment and what we want to want upon reflection.
The attention economy has industrialised this tragedy. The entire apparatus of algorithmic recommendation is designed to generate first-order desires — the urge to click, scroll, watch, engage — that systematically override second-order desires. You intended to check the weather and emerged forty minutes later having watched seventeen videos about celebrity feuds. The algorithm did not malfunction. It performed exactly as designed, exploiting the gap between your momentary impulses and your considered preferences.
THE INTENTION-ACTION GAP
A 2025 study tracking 12,000 users found that 67% of social media sessions lasted longer than users had intended, with the average 'overrun' being 23 minutes. Among users aged 18-24, the figure rose to 78%. Researchers identified specific design patterns — autoplay, infinite scroll, notification timing — that reliably triggered extended sessions against stated user intentions.
Source: Oxford Internet Institute, 'Digital Autonomy and Platform Design,' November 2025
This is not, one hastens to add, a matter of weak-willed users failing to exercise self-control. The platforms employ thousands of engineers whose explicit job is to defeat user self-control. They run continuous A/B tests to identify which design patterns most effectively override conscious intention. They have, quite literally, optimised against human autonomy for the past two decades. To blame users for losing this arms race is rather like blaming a chess amateur for losing to a supercomputer.
The Rights We Haven't Named
The legal scholar Nita Farahany has proposed the concept of 'cognitive liberty' — the right to mental self-determination, including protection from non-consensual manipulation of mental states. The term has a slightly science-fictional ring to it, conjuring images of brain-computer interfaces and neural implants. But cognitive liberty does not require exotic technology to be violated. It merely requires systems sophisticated enough to predict and exploit cognitive vulnerabilities at scale. We have had such systems for years. We simply lacked the vocabulary to name what they were doing.
[Figure omitted: the monetised value of human attention captured through algorithmic systems, extracted primarily through design patterns that exploit cognitive biases.]
The European Union's AI Act, whose prohibitions on manipulative systems began to apply in 2025, includes provisions against 'AI systems that deploy subliminal techniques beyond a person's consciousness' to 'materially distort behaviour.' This is a start, though the act's focus on subliminal manipulation may prove too narrow. The more pressing threat is not manipulation below consciousness but manipulation that operates precisely at the threshold of awareness — visible enough to seem like choice, calibrated enough to reliably override it.
What Should Be Done
Three interventions suggest themselves, none of them sufficient alone. First, the radical transparency of algorithmic systems: not the performative 'transparency reports' that platforms currently publish, but mandatory disclosure of optimisation objectives. Users deserve to know, in plain language, that the system recommending content to them has been designed to maximise their time spent, not their satisfaction or wellbeing. Truth in advertising, as it were.
Second, the creation of meaningful exit options. The network effects that lock users into dominant platforms are not natural phenomena but designed dependencies. Interoperability requirements — allowing users to communicate across platforms without losing their social connections — would introduce genuine choice into markets currently structured to eliminate it.
Third, and most fundamentally, the recognition of cognitive liberty as a protected right. This would require courts and legislatures to acknowledge what the technology industry has known for two decades: that human attention is a finite resource, that its extraction is an economic activity, and that manipulating mental states for profit without meaningful consent is a harm that law should address.
The Verdict
There is a species of techno-optimist who will object that all this concern about algorithmic manipulation is merely moral panic — the latest iteration of fears about television, rock music, and comic books. Technologies change; humans adapt; the kids are alright. One is sympathetic to this view, up to a point. Previous technologies did not, however, employ machine learning systems that improve continuously at predicting and exploiting cognitive vulnerabilities. Previous technologies could not run real-time experiments on billions of users to identify which stimuli most reliably override conscious intention. Previous technologies were not, in short, explicitly designed to defeat human agency as a matter of commercial strategy.
The attention economy is not an industry like other industries. It is an extractive enterprise whose raw material is human volition itself. We regulate industries that extract fossil fuels, that sell addictive substances, that dispose of toxic waste. The idea that we should not regulate industries that extract and monetise the capacity for autonomous decision-making — on the grounds that users 'choose' to participate — requires a definition of choice that would embarrass a Jesuit.
James Vicary's subliminal advertising hoax terrified a nation in 1957. Today, we live inside a system vastly more sophisticated, vastly more effective, and vastly more profitable than anything Vicary imagined — and we have been persuaded to call it 'personalization.' The algorithm ate our agency. The least we might do is notice.
