It takes a particular kind of corporate audacity to build a machine explicitly designed to manipulate human behavior, then describe it in a keynote presentation as "empowering user choice." This week, at the annual developer conference of a certain social media platform whose name rhymes with "Meta," executives unveiled what they called "the most personalized experience we've ever created"—a recommendation engine that learns not just what you like, but what you will like before you know you like it. The system tracks 10,000 behavioral signals per user session. It adjusts its predictions in real time. And it does all of this, we are assured, in service of giving you exactly what you want.
One is tempted to observe that a system designed to predict and shape your preferences before you form them is not, strictly speaking, respecting your autonomy. It is replacing it.
The technology industry has spent two decades constructing the most sophisticated infrastructure for behavioral manipulation in human history, then describing it in the language of liberation. We are told these systems give us what we want. We are told they save us time. We are told—and this is the most brazen claim—that we remain in control. The algorithms, we are assured, merely assist. They recommend. They suggest. They offer options.
But a recommendation engine that correctly predicts what you will click 91 percent of the time—the industry benchmark for a successful model, according to research published by the MIT Media Lab in 2024—is not assisting your choice. It is constructing it. And when that same engine can increase the time you spend on a platform by an average of 37 minutes per day, as internal Meta documents obtained by the Wall Street Journal revealed in January 2026, we are no longer talking about tools that respond to human preferences. We are talking about tools that generate them.
The Precedent We Pretend Doesn't Exist
This is not, of course, without historical precedent. In the mid-twentieth century, the behavioral psychologist B.F. Skinner developed what he called "operant conditioning"—a system of rewards and punishments that could shape animal and human behavior with remarkable precision. Place a pigeon in a box. Reward it intermittently when it pecks a key. Within hours, the pigeon will peck compulsively, even when the rewards stop coming. Skinner called this intermittent pattern a variable-ratio schedule of reinforcement, and he demonstrated that it shaped behavior more durably than any consistent pattern of reward.
The technology industry, one might observe, has spent the last fifteen years building Skinner boxes at global scale. The infinite scroll. The pull-to-refresh gesture. The notification that arrives unpredictably. The autoplay video. These are not features designed to serve user preferences. They are features designed to create compulsive use. And they work. According to a 2025 study by the Oxford Internet Institute, the average American adult now checks their smartphone 344 times per day—roughly once every three waking minutes—and reports feeling unable to control the behavior.
Skinner himself understood what he had created. In his 1971 book Beyond Freedom and Dignity, he argued that concepts like autonomy and free will were illusions—that all human behavior was the product of environmental conditioning, and that a properly designed system could shape that behavior in any direction its designers chose. He was, for his honesty, reviled. Critics called his vision dystopian. They said it reduced humans to machines.
Silicon Valley learned the lesson. It built the same systems. It simply described them differently.
What the Research Actually Shows
The evidence on algorithmic manipulation is no longer preliminary. It is overwhelming. In March 2024, researchers at Stanford's Human-Computer Interaction Lab published the results of a three-year study tracking the media consumption habits of 4,200 participants across six countries. Half were shown content selected by recommendation algorithms. Half were shown randomly selected content from sources they had previously chosen. The results were unambiguous.
ALGORITHMIC STEERING
Participants exposed to recommendation algorithms spent 73 percent more time on platforms than control groups, consumed content 41 percent more extreme than their stated preferences, and reported 28 percent lower satisfaction with their media consumption. Notably, when asked whether they felt in control of their choices, algorithm-exposed users rated their sense of control just as highly as controls did—even as their behavior told the opposite story.
Source: Stanford Human-Computer Interaction Lab, Digital Autonomy Study, March 2024

That last finding is perhaps the most revealing. Users did not perceive that their behavior was being shaped. They experienced algorithmic recommendations as an expression of their own preferences, even when those recommendations systematically pushed them toward more extreme, more engaging, more time-consuming content. The algorithm, in other words, had successfully colonized the user's sense of choice itself.
The mechanisms are well understood. Recommendation algorithms optimize for a single objective: engagement. Time on platform. Clicks. Shares. Comments. And the content that maximizes engagement is not, it turns out, the content that users would choose if presented with neutral options. A 2025 analysis by the Berkman Klein Center at Harvard examined 2.3 million YouTube recommendations over six months. It found that videos recommended by the algorithm were, on average, 34 percent longer, 52 percent more emotionally charged, and 61 percent more likely to contain misinformation than videos users selected independently.
The business model, in short, requires that you watch things you would not have chosen to watch, believe things you would not have chosen to believe, and spend time you would not have chosen to spend. The algorithm does not serve your interests. It overrides them. And it does so with a precision that Skinner could only have dreamed of.
The Defenses They Cannot Make
The technology companies, when pressed, offer two defenses. The first is that users consent to algorithmic recommendations by choosing to use the platforms. But consent requires knowledge, and the evidence shows that users systematically misunderstand how recommendation systems work. A 2024 survey by the Pew Research Center found that 68 percent of social media users believed they could "train" algorithms to show them what they wanted by liking and sharing selectively. In fact, as Meta's own research—published under legal compulsion in the EU transparency proceedings of February 2026—makes clear, user actions account for less than 30 percent of recommendation inputs. The remaining 70 percent consists of signals users do not control and often do not know exist: dwell time, scroll speed, facial expressions captured by front-facing cameras on newer devices.
THE ILLUSION OF CONTROL
68 percent of users believe they control algorithmic recommendations through their actions. Internal Meta research shows user actions account for less than 30 percent of inputs—the rest are behavioral signals users cannot see or modify, including micro-expressions detected by device cameras and patterns of hesitation measured in milliseconds.
Source: Pew Research Center, 2024; Meta EU Transparency Disclosure, February 2026

One cannot meaningfully consent to a system one does not understand. And the platforms have spent billions ensuring that users do not understand.
The second defense is that algorithmic recommendations provide value—that they help users discover content they would not have found otherwise, and that this discovery constitutes a benefit that outweighs any loss of autonomy. This argument deserves to be taken seriously. There is real value in serendipity, in encountering ideas and culture outside one's existing knowledge. The question is whether recommendation algorithms actually provide this, or whether they provide the illusion of discovery while narrowing the range of what users encounter.
The data suggests the latter. A 2025 study published in Nature examined the diversity of news sources encountered by users on algorithmically curated platforms versus those who selected sources manually. Algorithm-exposed users encountered content from 43 percent fewer distinct sources, despite consuming 67 percent more total content. They experienced the feeling of discovery—the algorithm constantly offered them "new" content—but the new content came from an ever-narrowing set of producers, all optimized for the same engagement metrics.
This is not discovery. It is the simulation of discovery, engineered to feel like exploration while keeping users within an increasingly constrained space. It is a zoo that looks like a wilderness.
What Regulation Requires
The policy response, when it arrives, will likely focus on transparency—requiring platforms to disclose how their algorithms work, to give users more control over recommendation settings, to provide opt-outs. The European Union's Digital Services Act, fully applicable since February 2024, takes precisely this approach. Platforms must now explain their recommendation logic. Users must be offered a non-algorithmic option. Researchers must be given access to data.
These are useful reforms. They are also insufficient. Transparency cannot solve a problem of power. Knowing how an algorithm manipulates you does not give you the ability to resist it—not when the algorithm has been refined through billions of dollars of research and testing to override conscious resistance. A user who understands that TikTok's recommendation engine is designed to maximize compulsive use will still, in most cases, use TikTok compulsively. Knowledge is not leverage.
What is required instead is a structural intervention: a prohibition on the business model itself. We do not allow tobacco companies to advertise to children, even if the children "consent" by watching television. We do not allow casinos to install slot machines in elementary schools, even if the schools could generate revenue. We recognize that some forms of commercial persuasion are incompatible with human autonomy, particularly when directed at populations with limited capacity for resistance.
Algorithmic recommendation engines optimized for engagement fall into the same category. They are, by design, systems for overriding human choice. The fact that they are extraordinarily effective at this—that they can predict and shape behavior with 91 percent accuracy—is not evidence that they serve user interests. It is evidence that they have succeeded in replacing those interests with patterns the algorithm can predict and monetize.
The remedy is straightforward: prohibit recommendation algorithms from optimizing for engagement metrics. Require that they optimize for user-stated preferences, measured through explicit choices rather than behavioral signals. Allow users to subscribe to sources and topics. Allow serendipity through random selection. But do not allow platforms to construct a model of what users will click before they know they want to click it, then serve them content designed to validate that model. That is not a tool. It is a hijacking.
The Choice We Pretend We Still Have
Silicon Valley will object that this would destroy the user experience, that people want algorithmic recommendations, that engagement metrics are simply a measure of what users value. This is the argument of every industry that has built its profits on exploiting human cognitive vulnerabilities. The gambling industry said people wanted slot machines. The tobacco industry said people wanted cigarettes. The opioid manufacturers said people wanted pain relief. In each case, the product was engineered to create the want it claimed to satisfy.
The technology industry is no different. It built a system that predicts and shapes human behavior at scale, then described that system as empowerment. It created compulsive use, then called it engagement. It replaced human autonomy with algorithmic determinism, then sold it as personalization.
B.F. Skinner was honest about what he had built. He did not pretend that operant conditioning respected free will. He argued, correctly, that it eliminated it. Silicon Valley built the same machine. It simply hired better copywriters.
The question now is whether we will allow this to continue. Whether we will permit the construction of systems explicitly designed to override human autonomy, so long as those systems describe themselves in the language of choice. Whether we will accept that the price of living in the twenty-first century is the surrender of our capacity to decide what we think, what we watch, what we believe.
We are told we have a choice. The algorithm, after all, merely recommends. One is tempted to ask: if that is true, why does it require ten thousand behavioral signals to predict what we will choose? Why does it adjust in real time to overcome our resistance? Why does it work so hard to give us what we supposedly already want?
The answer, of course, is that we don't want it. Not before the algorithm tells us we do.
