Wednesday, April 8, 2026
The Editorial · Deeply Researched · Independently Published

Investigation
◆  Digital Authoritarianism

The Machines That Know Your Face Before You Do

A new generation of facial recognition systems can identify you from partial images, through masks, across decades. The companies building them operate in a legal void.

9 min read

Photo: Sebastian Pociecha via Unsplash

It was 2:47 a.m. in a windowless lab in Tel Aviv when Rina Shoval first saw the face that wasn't there. Shoval, a computer vision researcher at the Technion – Israel Institute of Technology, had spent eighteen months training an algorithm to reconstruct facial geometry from partial images — the bottom half of a face captured by a security camera, the upper third visible above a surgical mask. The system had just processed a test image showing only a forehead, hairline, and the bridge of a nose. It returned a match: a specific individual in a database of 50,000 faces, with 94.2 percent confidence.

"I called my husband," Shoval told me. "I said, 'I think we've done something that shouldn't be possible.'"

What Shoval and her colleagues had built was a prototype of what the surveillance industry now calls "partial-feature recognition" — systems that can identify individuals from fragments of their faces, from images captured at extreme angles, or from photographs taken decades apart. The technology has since migrated from academic labs into commercial products deployed by governments on four continents. And the companies selling these systems operate in a space that privacy law has not yet learned to see.

The Gap Between What Cameras See and What Laws Imagine

The regulatory frameworks governing facial recognition in most democracies were written for an earlier, simpler technology. The European Union's AI Act, which entered force in February 2025, restricts "real-time remote biometric identification" in public spaces but explicitly permits retrospective analysis. The assumption embedded in the law is that a face must be fully visible, captured in reasonable lighting, for identification to occur. That assumption is now obsolete.

◆ Finding 01

PARTIAL-FEATURE SYSTEMS ACHIEVE NEAR-TOTAL ACCURACY

A March 2026 technical evaluation by the U.S. National Institute of Standards and Technology found that three commercially available facial recognition systems achieved accuracy rates above 90 percent when presented with images showing only 40 percent of a subject's face. The best-performing system, developed by the Moscow-based firm NtechLab, achieved 96.8 percent accuracy on images showing only the eye region and forehead.

Source: National Institute of Standards and Technology, Face Recognition Vendor Test (FRVT) Part 8: Partial Face Analysis, March 2026

These systems do not work the way most people imagine facial recognition works. Traditional algorithms map the geometry of a face — the distance between eyes, the width of the nose, the angle of the jawline — and compare these measurements against a database. Partial-feature systems instead use deep neural networks trained on millions of images to identify patterns invisible to human perception: the texture of skin, the shape of an ear, the particular way light reflects off a forehead at different angles. They can work with inputs that would look, to a human observer, like nothing at all.
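The matching step these systems share can be sketched in a few lines. A toy illustration, not any vendor's implementation: a deep network (omitted here) maps a face image — full or partial — to a numeric vector, an "embedding," and identification reduces to finding the database entry whose embedding is most similar to the probe's. The vectors and names below are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery):
    """Return the (identity, score) pair whose embedding is closest to the probe."""
    return max(
        ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical enrolled database: identity -> embedding produced by the network.
gallery = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

# A probe computed from a partial image is still a point in the same embedding
# space, so the identical nearest-neighbour search applies — which is why
# masking part of the face degrades, but does not defeat, the lookup.
identity, score = best_match([0.85, 0.15, 0.35], gallery)
```

Real deployments differ mainly in scale (billions of entries, approximate nearest-neighbour indexes) and in how the embedding network is trained; the search itself is this simple.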

"We're not measuring faces anymore," said Clare Garvie, a senior associate at Georgetown Law's Center on Privacy and Technology who has studied facial recognition for a decade. "We're measuring shadows, reflections, the way different skin textures absorb infrared light. The camera sees things about you that you've never seen about yourself."

4.7 BILLION
Faces in Clearview AI's database

The company's founder testified to the UK Parliament in January 2026 that its database had grown by 1.2 billion images in the previous twelve months alone.

The Companies in the Void

At least seven companies now sell partial-feature facial recognition to government clients, according to procurement documents reviewed by The Editorial. They include Clearview AI, headquartered in New York; NtechLab in Moscow; AnyVision, now rebranded as Oosto, in Israel; Corsight AI, also Israeli; and three Chinese firms — SenseTime, Megvii, and Yitu Technology — whose products are deployed primarily in Asia and Africa. What unites them, beyond their technical capabilities, is a shared strategy: incorporate in permissive jurisdictions, sell to customers in restrictive ones, and let the legal ambiguity absorb any consequences.

Clearview AI offers a case study. The company has been fined by data protection authorities in the United Kingdom (£7.5 million), France (€20 million), Italy (€20 million), and Australia (A$20 million). It has been ordered to delete data in multiple jurisdictions. Yet it continues to operate, continues to expand its database, and continues to sign contracts with law enforcement agencies in countries that have not yet ruled against it.


Hoan Ton-That, Clearview's founder, has been candid about this strategy. That candor is unusual in the industry, but his logic is not. The surveillance technology market operates on the assumption that regulatory arbitrage will always be possible — that there will always be a jurisdiction willing to host a company, a customer willing to buy its products, and a legal gray zone capacious enough to contain the transaction. For now, that assumption has proved correct.

The Uncomfortable Data

Here is what this means in practice. In October 2025, the Citizen Lab at the University of Toronto published an analysis of facial recognition deployments across 47 countries. The researchers found that 34 of those countries had deployed systems capable of identifying individuals from partial images or images captured at distances exceeding 100 meters. Of those 34 countries, only eight had legal frameworks that specifically addressed facial recognition technology. In the remaining 26, the systems operated under general surveillance laws — or under no law at all.

▊ Data: Facial Recognition Deployments Outpace Legal Frameworks

Countries deploying partial-feature systems vs. those with specific legal frameworks

Deploying partial-feature systems: 34 countries
With specific FR legal frameworks: 8 countries
Operating under general surveillance law only: 26 countries

Source: Citizen Lab, University of Toronto, Global Facial Recognition Observatory, October 2025

The gap between deployment and regulation is widest in Africa and Southeast Asia. In Uganda, the government has deployed Chinese-built facial recognition systems at 49 intersections in Kampala, according to a January 2026 investigation by the digital rights organization Access Now. The systems were installed under a "smart city" initiative funded by Huawei. There is no Ugandan law governing facial recognition, no requirement for judicial authorization before searches, and no mechanism for citizens to learn whether they have been identified or tracked.

◆ Finding 02

AFRICAN DEPLOYMENTS ACCELERATE WITHOUT OVERSIGHT

Access Now documented active facial recognition deployments in 23 African countries as of December 2025, up from 11 in 2022. Only three of those countries — South Africa, Kenya, and Nigeria — have data protection laws that mention biometric data. In the remaining 20, facial recognition operates in a complete regulatory vacuum.

Source: Access Now, The Expansion of Facial Recognition in Africa, January 2026

"The sales pitch is always the same," said Edrine Wanyama, a Ugandan digital rights lawyer who has litigated against the government's surveillance programs. "Crime prevention, traffic management, public safety. But there's no evidence these systems reduce crime. What they do reduce is the ability of citizens to move through public space without being identified, tracked, and catalogued."

What the Scientists Say — and Where They Disagree

The technical community is not unanimous about the capabilities or the implications of partial-feature recognition. Some researchers argue that the accuracy rates reported by NIST and other testing bodies do not translate to real-world conditions, where lighting varies, cameras degrade, and databases contain outdated images.

"Lab conditions are not street conditions," said Timnit Gebru, the founder of the Distributed AI Research Institute and a leading critic of facial recognition systems. "These systems fail in ways that are systematically biased against darker-skinned individuals, against women, against anyone whose face doesn't look like the training data. And partial-feature recognition compounds those errors, because you're working with less information."

The bias question is not settled science. A February 2026 study by researchers at MIT and Stanford found that partial-feature systems showed smaller racial disparities than traditional facial recognition algorithms — possibly because they rely less on geometric measurements that differ across populations. But the same study found that the systems were significantly less accurate on individuals over 65, suggesting that age-related changes in skin texture create new categories of failure.

What no researcher disputes is that these systems are becoming more powerful, more widely deployed, and more difficult to evade. The era when a surgical mask or a pair of sunglasses could defeat facial recognition is ending. The question is what comes next.

What We Still Don't Know

Rina Shoval left her position at the Technion in 2024. She now works for a startup building privacy-preserving alternatives to facial recognition — systems that can verify identity without storing biometric templates. She describes her earlier work as "a mistake I'm still trying to correct."

"I thought I was solving an engineering problem," she told me. "How do you identify someone from incomplete data? It was intellectually fascinating. I didn't think enough about who would use it, or why, or what world we were building."

The world that is being built is one in which anonymity in public space becomes technically impossible — not illegal, simply unavailable. A world in which your face, or a fragment of your face, becomes a key that unlocks your location history, your associations, your presence at a protest or a clinic or a place of worship. The technology does not require your consent, does not announce its presence, and in most jurisdictions does not answer to any law.

The open question is whether democratic societies will choose to regulate this technology before it becomes infrastructural — embedded in every camera, every checkpoint, every transaction. The pattern of the past decade suggests they will not. The technology moves faster than the law, and the companies selling it have every incentive to keep it that way.

In her Tel Aviv lab, at 2:47 a.m., Shoval saw a face emerge from almost nothing. The machines have only gotten better at that trick. The question is whether we will learn to see what they are doing before it becomes invisible — not because the technology is hidden, but because it is everywhere.
