Clare Garvie remembers the exact moment she understood how the technology had changed. It was January 2024, late afternoon, and she was sitting in a windowless conference room at Georgetown Law reviewing police body camera footage from a protest in Phoenix. A demonstrator — masked, hooded, moving quickly through a crowd of two hundred — appeared on screen for exactly four seconds. The officer's shoulder-mounted camera caught perhaps thirty frames. The police department's facial recognition system returned a match with 99.2% confidence within eighteen seconds.
Garvie, a senior researcher who has spent a decade documenting the spread of facial recognition in American law enforcement, had seen thousands of such matches. But this one was different. The image quality was poor — motion blur, partial occlusion, oblique angle. Five years earlier, such a query would have returned nothing usable. Now it returned a name, an address, and a confidence score high enough that the officer obtained a warrant within two hours.
The suspect had never been arrested before. The database had matched him against a driver's license photo from the Arizona Department of Motor Vehicles. Arizona is one of thirty-two U.S. states that now allow law enforcement to search DMV photo databases without a warrant. The man in the footage did not know his face was in the system. He did not know it had been scanned. He learned he was a suspect only when officers appeared at his door.
What Garvie had witnessed was the culmination of a quiet transformation in surveillance capability. Between 2019 and 2026, facial recognition systems moved from research curiosity to infrastructure — embedded in border crossings, airports, stadiums, and squad cars. The technology's accuracy improved so dramatically that it outpaced the legal frameworks meant to govern it. Today, an estimated 127 million Americans are searchable in law enforcement facial recognition databases, most without ever being charged with a crime. No federal law regulates how these systems are used, who can access them, or what happens to the data they collect.
What Changed in the Algorithm
The breakthrough came not from a single innovation but from three converging developments. First, the training datasets exploded in size. Systems developed by Clearview AI, NEC Corporation, and Chinese firm SenseTime now train on datasets containing between 3 billion and 12 billion face images — scraped from social media, purchased from data brokers, or contributed by government partners. For comparison, the largest research dataset in 2018 contained 200 million images.
Second, the algorithms learned to handle what engineers call "challenging conditions" — partial faces, poor lighting, motion blur, aging, disguises. In tests conducted by the National Institute of Standards and Technology in December 2025, the top-performing systems achieved a 0.2% false positive rate when matching faces captured from surveillance cameras against databases of 12 million images. That error rate is one-tenth what it was in 2020. Think of it this way: scan a crowd of 10,000 strangers against a watchlist, and the system will incorrectly flag about twenty of them as someone else.
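That arithmetic is worth making explicit, because a rate that sounds negligible compounds quickly at scale. A minimal sketch in Python (the 0.2% rate is the NIST figure cited above; the crowd sizes are hypothetical):

    # Illustrative arithmetic only. The 0.2% false positive rate is the
    # NIST-reported figure cited above; the crowd sizes are hypothetical.
    false_positive_rate = 0.002

    for crowd_size in (1_000, 10_000, 1_000_000):
        expected_false_flags = false_positive_rate * crowd_size
        print(f"{crowd_size:>9,} faces scanned -> ~{expected_false_flags:,.0f} false matches")

At the scale of a stadium crowd or a day's traffic through a transit hub, "one-tenth the error rate" still means thousands of innocent people flagged.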
Third, the infrastructure to deploy these systems became cheap and ubiquitous. Cloud-based facial recognition APIs from Amazon Web Services, Microsoft Azure, and Google Cloud now cost as little as $0.80 per 1,000 face comparisons. A police department can integrate facial recognition into its existing camera network for under $15,000. No specialized hardware required — just an internet connection and a subscription.
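The economics are easy to reproduce. A rough cost sketch, in which only the per-comparison price comes from the figures above; the camera network and query volumes are invented assumptions for a mid-size department:

    # Rough cost model. The $0.80-per-1,000-comparisons price is the cloud
    # API figure cited above; the camera count and query volume are
    # hypothetical assumptions.
    price_per_comparison = 0.80 / 1_000  # dollars

    cameras = 50
    queries_per_camera_per_day = 200  # faces submitted for matching
    days_per_month = 30

    monthly_comparisons = cameras * queries_per_camera_per_day * days_per_month
    monthly_cost = monthly_comparisons * price_per_comparison
    print(f"{monthly_comparisons:,} comparisons/month costs ${monthly_cost:,.2f}")
    # -> 300,000 comparisons/month costs $240.00

Under these assumptions, running facial recognition across an entire camera network costs a few hundred dollars a month.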
THE ACCURACY THRESHOLD
Between January 2024 and March 2026, facial recognition systems deployed by U.S. law enforcement agencies achieved a median accuracy rate of 99.6% in controlled tests and 97.8% in field conditions, according to testing conducted by the Department of Homeland Security Science and Technology Directorate. This represents a ninefold reduction in false positive rates compared to systems tested in 2019.
Source: U.S. Department of Homeland Security, Biometric Technology Rally Report, March 2026
The Database You Did Not Consent To
Here is what this means in practice. In thirty-two U.S. states, when you renew your driver's license, your photograph enters a database accessible to law enforcement without a warrant. In eleven of those states — including Texas, Florida, and Pennsylvania — the system is fully automated: any officer with database access can submit a photo and receive matches in seconds, with no supervisor approval required and no audit trail reviewed by civilian oversight.
The largest searchable database, though, is privately run. Clearview AI, a New York-based company, has scraped more than 40 billion images from Facebook, Instagram, Twitter, and other public websites to build what it describes as "the world's largest facial recognition database." The company sells access to more than 3,100 law enforcement agencies across the United States, Canada, and the United Kingdom, whose officers have run more than 1.2 million searches since 2020, according to court filings in a class-action lawsuit filed in Illinois.
Assembled without the consent of anyone pictured in it, the database is the largest privately held collection of biometric data in history — larger than any government system.
Clearview's terms of service prohibit using the system to identify protesters, track romantic partners, or conduct mass surveillance. But an investigation by the nonprofit Electronic Frontier Foundation documented at least seventeen instances between 2023 and 2025 in which officers used Clearview to identify participants in protests, immigration rallies, and abortion-rights demonstrations. In none of these cases did the departments report policy violations or discipline officers.
The Bias That Persists
Even as accuracy has improved, the systems remain demonstrably less reliable for some populations than others. Joy Buolamwini, a computer scientist at the MIT Media Lab, has documented this disparity for nearly a decade. In tests she conducted in 2025 on four leading commercial systems, false positive rates for Black women were 11 to 14 times higher than for white men. The gap has narrowed — in 2018, some systems were 34 times more likely to misidentify Black women — but it has not closed.
The reason is mathematical: these systems learn from the data they are trained on. If the training dataset contains ten images of white men for every one image of a Black woman, the algorithm becomes better at recognizing white men. Most commercial systems now report demographic parity on their benchmarks, but those benchmarks use carefully curated test sets. In operational settings — airports, police departments, border crossings — the disparity persists because the real-world data is messy, unbalanced, and often of poor quality.
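The mechanism is easy to demonstrate. Face matchers score every comparison and declare a match above a single global threshold; if training imbalance pushes the non-match scores for one group slightly higher, the same threshold yields very different false positive rates. A minimal simulation, with invented score distributions chosen only to echo the operational numbers reported below:

    import numpy as np

    # Synthetic illustration of the mechanism described above. The score
    # distributions are invented; only the shape of the argument is real.
    rng = np.random.default_rng(0)

    # Non-match ("impostor") similarity scores. The group underrepresented
    # in training gets a slightly higher, noisier distribution.
    scores_majority = rng.normal(loc=0.30, scale=0.11, size=100_000)
    scores_minority = rng.normal(loc=0.42, scale=0.10, size=100_000)

    threshold = 0.60  # one global match threshold applied to everyone

    fpr_majority = (scores_majority > threshold).mean()  # ~0.3%
    fpr_minority = (scores_minority > threshold).mean()  # ~3.6%
    print(f"majority-group false positive rate: {fpr_majority:.2%}")
    print(f"minority-group false positive rate: {fpr_minority:.2%}")

A modest shift in score distributions produces an order-of-magnitude gap in false positives at the same threshold. Tuning a separate threshold per group would equalize the rates in this toy example, but operational systems generally apply one threshold to everyone, which is part of why curated benchmarks can show parity while field deployments do not.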
The consequences are not abstract. Between 2020 and 2024, at least nine individuals were wrongfully arrested in the United States based on facial recognition matches that were later determined to be incorrect. Eight of the nine were Black. In the most recent case, Randal Reid spent six days in a Georgia jail in March 2024 after being misidentified in surveillance footage from a theft in Louisiana. The arresting officers had relied solely on the facial recognition match — no corroborating evidence, no witness identification, no forensic review. Reid was 400 miles away at the time of the crime, a fact his employer confirmed within hours of his arrest.
THE DEMOGRAPHIC DISPARITY
In operational testing of five leading facial recognition systems deployed by U.S. law enforcement, false positive identification rates for Black women averaged 3.7%, compared to 0.3% for white men — a disparity of more than 12-to-1, according to research published by the Algorithmic Justice League. For Asian women and Latina women, false positive rates were 2.1% and 2.8%, respectively.
Source: Algorithmic Justice League, Gender Shades 2025: Persistent Bias in Commercial Facial Recognition, September 2025
The Markets Built on Your Face
Law enforcement is only one customer. The same systems now power retail loss prevention, stadium security, apartment building access control, and workplace attendance tracking. Madison Square Garden in New York uses facial recognition to automatically bar individuals flagged as "threats" — a category that has included lawyers currently suing the arena's parent company. Macy's, Lowe's, and Albertsons have deployed the technology in stores across the United States to identify repeat shoplifters. Delta Air Lines announced in February 2026 that it would expand facial recognition boarding to all domestic U.S. flights by 2028, eliminating the need for passengers to show identification at the gate.
In each case, the deployment followed a similar pattern: the company announced the technology as a convenience or safety measure, collected face scans from millions of customers, and provided minimal disclosure about how the data would be stored, who could access it, or how long it would be retained. Most customers learned they had been scanned only after the fact — if at all.
Beneath these consumer-facing systems lies a less visible industry: data brokers who buy, sell, and trade biometric data. Companies such as Acxiom, LiveRamp, and Venntel purchase face images, gait patterns, and other biometric markers from app developers, social media platforms, and security firms, then resell access to marketers, insurers, and background-check providers. A single facial profile — a collection of images, metadata, and derived features such as estimated age, gender, and ethnicity — sells for between $0.004 and $0.12, depending on data quality and recency. A database of 10 million profiles can be licensed for $180,000.
[Chart: Estimated number of organizations using live facial recognition systems. Source: Electronic Frontier Foundation, Biometric Surveillance Census, January 2026]
The Laws That Do Not Exist
The technology's spread has far outpaced regulation. At the federal level, no law restricts how facial recognition can be used, by whom, or under what circumstances. The most recent attempt — the Facial Recognition and Biometric Technology Moratorium Act, introduced in 2023 — died in committee without a vote. As of April 2026, seventeen states have passed laws addressing facial recognition in some form, but the statutes vary wildly in scope and enforceability.
Illinois and Washington require explicit consent before a private entity can collect biometric data, and both allow individuals to sue companies that violate the statute. The laws have teeth: Facebook paid $650 million in 2021 to settle claims it violated Illinois's Biometric Information Privacy Act by tagging faces in photos without consent. But enforcement is uneven. In Texas, which passed a similar statute in 2023, not a single company has been fined or sanctioned for a violation.
Five states — Vermont, Oregon, California, Massachusetts, and Virginia — prohibit law enforcement from using facial recognition without a warrant or explicit legislative authorization. But the statutes contain loopholes. California's law, for instance, bans real-time facial recognition but permits searches of stored images. Virginia's statute applies only to state and local police, not federal agencies. And none of the state laws prevent out-of-state agencies from searching databases that include residents' images.
The Debate Among the Builders
Even among the engineers who build these systems, there is no consensus about whether they should exist in their current form. Timnit Gebru, former co-lead of Google's ethical AI team, argues that facial recognition's harms are inseparable from its design. "You cannot fix bias by adding more data," she says. "The problem is not technical. The problem is that we have built a surveillance infrastructure that allows powerful institutions to track people at scale. No amount of accuracy makes that acceptable."
Others contend that the technology, properly regulated, can serve legitimate purposes. Hany Farid, a digital forensics expert at the University of California, Berkeley, points to successful deployments in finding missing children and identifying victims of trafficking. "We do not ban DNA testing because it can be misused," Farid says. "We regulate who can access it, under what circumstances, with what oversight. We need the same approach here."
The companies themselves offer a third position: self-regulation through ethical frameworks and responsible use policies. Microsoft, IBM, and Amazon have all published guidelines restricting certain uses of their facial recognition services. In June 2020, IBM announced it would no longer offer general-purpose facial recognition software, citing concerns about racial profiling. Amazon imposed a one-year moratorium on police use of its Rekognition service in 2020, later extended indefinitely.
But voluntary restraint has done little to slow the technology's adoption. Clearview AI, which has no such policies, continues to grow its customer base and expand internationally. Dozens of smaller firms — many based overseas and thus exempt from U.S. regulations — now offer competing services with fewer restrictions. The moratorium, in practice, created a market for less scrupulous vendors.
THE REGULATORY GAP
As of March 2026, facial recognition systems were operational in 117 countries, according to data compiled by Comparitech and Surfshark. Of these, only 29 have enacted legislation specifically regulating the technology's use. In the United States, federal agencies including Customs and Border Protection, the FBI, and the Secret Service collectively conducted more than 850,000 facial recognition searches in fiscal year 2025 — none subject to judicial oversight or external audit.
Source: U.S. Government Accountability Office, Facial Recognition Technology: Federal Law Enforcement Use and Privacy Protections, December 2025
What We Still Do Not Know
Clare Garvie, the Georgetown researcher, is now studying a question no one can yet answer: what happens to the data over time? Facial recognition systems generate enormous datasets — logs of every search, every match, every near-miss. In theory, these logs could reveal where someone was, when, and in whose company. They could show patterns of movement, association, behavior. They are, in effect, a continuously updated map of public life.
Almost none of this data is subject to public records laws. Most police departments classify facial recognition logs as "investigative material" exempt from disclosure. Private companies claim the data as proprietary. And because no federal law mandates retention limits, much of it is kept indefinitely. Clearview AI's terms of service state that images and search logs are retained "for as long as necessary to provide and improve our services" — a standard that could mean decades.
The thing is: we have built this infrastructure without asking what we will do when it is breached. In March 2025, hackers stole 2.3 million facial recognition records from a contractor for U.S. Customs and Border Protection. The data included images, names, license plate numbers, and border crossing timestamps. The breach was not disclosed for four months. CBP has not confirmed whether the stolen data has appeared on dark web markets. The individuals whose faces were in the dataset were never notified.
Garvie keeps a running list of the questions she cannot answer. How many people are searchable in law enforcement databases? How many searches have been conducted, and how many led to arrests? How many of those arrests were based solely on facial recognition, with no corroborating evidence? How many were wrong? No federal agency tracks this data. No statute requires that it be tracked. We have built a system that can find anyone, anywhere, from a single photograph — and we have no accounting of how often it is used, by whom, or with what consequences.
The question she keeps returning to is not whether the technology works. It does. The question is what we have lost now that it works so well — and whether we will ever know the answer before it is too late.
