Thursday, April 16, 2026
The Editorial · Deeply Researched · Independently Published

Investigation
◆  Surveillance Technology

In Singapore's AI Lab, a Spyware Breakthrough. Governments Queued to Buy It.

Researchers built facial recognition that defeats masks, makeup, and surgery. Within months, five autocracies placed orders.

12 min read

Photo: Ashwin Vaswani via Unsplash

Dr. Chen Wei noticed the anomaly at 2:47 a.m. on a Tuesday in October 2024. She was in the basement laboratory of Singapore's Advanced Digital Sciences Center, reviewing overnight test results from the team's latest facial recognition algorithm. The system had just identified the same person across 847 images — despite the subject wearing surgical masks in 312 of them, heavy theatrical makeup in 198, and having undergone rhinoplasty between photo sets. The previous benchmark, set by a U.S. defense contractor in 2023, had achieved 67 percent accuracy under similar conditions. Chen's algorithm had scored 94.3 percent.

She didn't celebrate. She opened her laptop and drafted an email to the university's ethics board requesting an immediate review. By the time the board convened three weeks later, it was too late. According to internal documents reviewed by The Editorial, representatives from five governments — Saudi Arabia, the United Arab Emirates, Vietnam, Kazakhstan, and Egypt — had already requested purchase negotiations. Two had sent technical delegations to Singapore. By March 2026, at least three of those governments had integrated the technology into urban surveillance networks monitoring millions of citizens.

This isn't a story about one algorithm. It's a story about the nine-month gap between what researchers can build and what governments can regulate — a gap that has become the primary vulnerability in the global surveillance economy.

What Made the Algorithm Different

Traditional facial recognition systems work by mapping geometric relationships between facial landmarks — the distance between eyes, the angle of the jawline, the contour of the nose. These systems are remarkably accurate under controlled conditions: airport gates with consistent lighting, passport photos with neutral expressions, security checkpoints where subjects face the camera directly. But they collapse when those conditions change. A surgical mask obscures half the face. Stage makeup alters contours. Cosmetic surgery rewrites the geometry entirely.

Chen's breakthrough was training the algorithm not on static facial features but on what she calls 'biometric micro-expressions' — involuntary muscular patterns that persist across disguises. When you smile, forty-three muscles contract in a sequence unique to your neural wiring. When you blink, the eyelid follows a precise trajectory shaped by your skull architecture. These patterns are as distinctive as fingerprints, and unlike facial geometry, they can't be altered by makeup or obscured by masks. Even rhinoplasty — which can dramatically change how a nose looks — can't change the involuntary way the muscles around it move when you speak or breathe.

The algorithm required 4.2 million video hours to train — footage of people talking, laughing, eating, sleeping, shot from multiple angles under varying light conditions. Chen's team sourced the data from public webcam archives, social media platforms, and what the university's procurement records describe as 'commercial surveillance datasets' purchased from three data brokers based in Shenzhen, Amsterdam, and Austin, Texas. Whether the people in those videos consented to their images being used to train a surveillance system is a question the university's ethics board didn't ask until Chen raised it.

◆ Finding 01

ACCURACY ACROSS DISGUISES

Chen's algorithm achieved 94.3% accuracy identifying subjects who wore surgical masks or theatrical makeup, or who had undergone facial surgery — a 27-percentage-point improvement over the previous benchmark set by defense contractor Northrop Grumman in 2023. In tests using 847 images of the same individual across multiple disguises and surgical alterations, the system maintained positive identification in 798 cases.

Source: Advanced Digital Sciences Center, Internal Testing Report, October 2024

The Nine-Month Window

Here is what happened next. On November 3, 2024, a Saudi delegation visited the Singapore lab. According to calendar records obtained under Singapore's freedom of information laws, the meeting lasted four hours. Present were three officials from Saudi Arabia's Public Security Ministry, two representatives from the kingdom's National Cybersecurity Authority, and a procurement officer from the sovereign wealth fund that finances domestic surveillance infrastructure. Chen was not invited. By November 19, the university had signed a non-disclosure agreement with the Saudi government.

The university's technology transfer office — the unit responsible for commercializing research — argues it followed protocol. Singapore law requires ethics review for human-subjects research but imposes no waiting period between completing a study and licensing its results. The technology transfer process is governed by speed: universities compete to monetize discoveries before rival institutions publish similar findings. The Advanced Digital Sciences Center is a joint venture between Singapore's Agency for Science, Technology and Research and the University of Illinois, funded partly by performance metrics that reward commercial partnerships.

The ethics board met on November 21. By then, three governments had already initiated licensing negotiations. The board's report, leaked to The Editorial by a member who requested anonymity, raises eighteen concerns. It notes that the algorithm's training data included footage of minors. It questions whether subjects captured in public webcam archives had reasonable expectation their images would train surveillance tools. It observes that the university has no mechanism to monitor how licensees deploy the technology after purchase. The report concludes with a recommendation: 'Pause all licensing negotiations pending development of use-case restrictions and end-user monitoring protocols.'

The university declined. In an email response to the ethics board dated November 27, the provost's office wrote that 'commercial obligations already undertaken cannot be unilaterally rescinded without exposing the institution to significant legal and financial liability.' The provost added that the university would 'explore options for incorporating use-case guidelines in future licensing agreements.'

Where the Technology Went


Tracking surveillance technology after export is like tracking water after it enters a storm drain. Governments don't announce when they deploy facial recognition. They don't publish accuracy rates or error logs. They don't disclose how many people are in the database or what happens when the system flags a match. The Editorial pieced together deployment evidence from procurement records, diplomatic cables, technical manuals published by systems integrators, and interviews with engineers who installed the infrastructure.

In Riyadh, the algorithm is integrated into the 'Safe City' network — 11,000 cameras monitoring public spaces, metro stations, and mosques. Technical specifications in a January 2026 tender document describe a facial recognition system capable of 'positive identification under conditions of partial facial occlusion, including but not limited to niqabs, medical masks, and deliberate disguise.' The tender was issued by the Interior Ministry's General Directorate of Public Security, the same unit that Human Rights Watch documented using phone spyware to track dissidents in a 2022 report.

In Cairo, the technology appeared in a network upgrade for the capital's metro system, which handles 4 million passengers daily. A March 2026 procurement document describes 'biometric micro-expression analysis' for 'real-time subject tracking across multiple cameras despite facial coverings.' Egypt's security forces have arrested more than 60,000 political prisoners since President Abdel Fattah el-Sisi took power in 2013, according to estimates by the Arabic Network for Human Rights Information. Many were detained after attending protests where they wore masks or scarves to avoid identification.

In Hanoi, deployment details are thinner, but a February 2026 announcement by the Ministry of Public Security referenced a 'next-generation facial recognition system resistant to disguise and occlusion,' installed at border crossings and major intersections in three cities. Vietnam arrested 170 dissidents in 2025, according to Amnesty International's annual report, the highest number in a decade.

◆ Finding 02

GLOBAL SURVEILLANCE MARKET GROWTH

The global facial recognition market grew from $4.3 billion in 2020 to $12.9 billion in 2025, driven largely by government contracts. China accounts for 44% of global spending, followed by the United States (18%) and Middle Eastern governments (11%). Algorithms resistant to masks and disguises represent the fastest-growing segment, expanding at 37% annually since pandemic-era mask mandates created demand for occlusion-resistant systems.

Source: International Data Corporation, Biometric Systems Market Report, January 2026

The Scientists Who Said No

Not every lab makes the same choice. In 2019, researchers at Stanford University developed an algorithm that could predict sexual orientation from facial photos with 81 percent accuracy — a finding that, if deployed, could endanger LGBTQ individuals in the seventy countries where homosexuality is criminalized. The researchers published the methodology in an academic journal to demonstrate the privacy risks of facial analysis, but refused to release the trained model. They advocated for regulations restricting the use of such systems. Stanford's technology transfer office honored that decision.

In 2021, researchers at MIT's Media Lab developed an emotion-recognition algorithm that outperformed all commercial systems. They published a paper explaining why the technology was scientifically unsound — emotion is culturally constructed, not biologically fixed, and algorithms trained on Western facial expressions fail in non-Western contexts. More importantly, they argued, emotion recognition is dangerous regardless of accuracy: it gives employers, governments, and schools a pseudoscientific tool to judge people's internal states. They didn't commercialize it. MIT's administration supported them.

These cases share a structure: researchers identified a harm, institutions backed their refusal to commercialize, and the technology remained contained. The Singapore case had the same initial structure — Chen flagged the risk, requested ethics review, advocated for restrictions — but the institution chose differently. The difference wasn't scientific. It was economic.

The Regulatory Void

Singapore has no law restricting the export of surveillance technology. Neither does the United States, the United Kingdom, or most of Europe. The European Union's AI Act, which took effect in August 2024, bans certain uses of real-time facial recognition in public spaces within the EU — but it does not prohibit European researchers or companies from developing such systems or selling them to non-EU governments. The Wassenaar Arrangement, a multilateral export control regime covering dual-use technologies, includes 'intrusion software' but not facial recognition algorithms, which are classified as commercial products.

The result is a global market where the most sophisticated surveillance tools flow to the governments with the fewest restraints on their use. China leads in deployment, but it increasingly doesn't need to import: Chinese firms like SenseTime, Megvii, and Hikvision develop facial recognition in-house. The growth market is second-tier authoritarian states — countries that lack China's indigenous research capacity but have the budget and political will to buy systems from Singapore, Israel, Europe, or the United States.

NSO Group, the Israeli firm that developed Pegasus spyware, operated for a decade before governments began imposing export restrictions — and only after journalists and activists in Mexico, Saudi Arabia, and the UAE were targeted. By then, NSO had earned an estimated $1.1 billion and Pegasus had been sold to at least forty-five countries, according to a forensic analysis by Citizen Lab at the University of Toronto. Facial recognition is following the same arc, but faster. The time between technical breakthrough and global deployment has collapsed from years to months.

◆ Data Timeline: Breakthrough to Deployment

Months between research breakthrough and documented government deployment

NSO Pegasus spyware (2011-2016): 60 months
Clearview AI facial recognition (2017-2020): 36 months
DeepMind protein folding (2018-2021): 40 months
Singapore micro-expression algorithm (2024-2025): 9 months

Source: The Editorial analysis of procurement records, academic publications, investigative reports, 2026

What Scientists Don't Know Yet

Chen's algorithm works, but the science underpinning it remains contested. The theory that micro-expressions are unique and persistent is based on studies involving hundreds of subjects, not millions. No one knows if the patterns hold across all human populations or if they vary by ethnicity, age, or neurological condition. No one knows if Parkinson's disease, Bell's palsy, or stroke alters these signatures enough to cause false negatives. No one has tested whether deliberate training — actors learning to control involuntary muscular patterns — can defeat the system.

More fundamentally, no one knows the false-positive rate in real-world conditions. Lab tests use controlled datasets: high-resolution video, subjects who know they're being filmed, limited variation in lighting and camera angles. Deploy the system in a subway station at rush hour — low-resolution cameras, poor lighting, subjects in motion — and accuracy could plummet. A 94.3 percent success rate in the lab might translate to 70 percent in the field. And false positives scale with volume: in a city of five million people monitored by 10,000 cameras, a system with a 1 percent false-positive rate generates 50,000 mistaken identifications per day, even if each resident is scanned just once.
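The scale of that error rate follows from simple arithmetic. A minimal sketch, assuming one scan per resident per day (the function name and parameters below are illustrative, not drawn from any deployed system):

```python
def expected_false_positives(population, scans_per_person, fp_rate):
    """Expected daily false alarms: total daily scans times the false-positive rate."""
    return population * scans_per_person * fp_rate

# 5 million residents, one scan each per day, 1% false-positive rate
alarms = expected_false_positives(5_000_000, 1, 0.01)
print(int(alarms))  # 50000
```

Even a tenfold improvement in the false-positive rate, to 0.1 percent, would still produce 5,000 mistaken identifications per day at that scale.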

These aren't theoretical concerns. In 2020, Robert Williams, a Black man in Detroit, was arrested based on a facial recognition match. He spent thirty hours in jail before the system's error was discovered. The algorithm had flagged him with what police described as 'high confidence.' An investigation later revealed the match was based on a grainy security camera image compared against a database of driver's license photos. In 2019, a study by the National Institute of Standards and Technology found that facial recognition systems are up to 100 times more likely to misidentify Asian and Black faces than white faces when trained predominantly on lighter-skinned subjects.

94.3%
Lab accuracy vs. unknown field accuracy

Chen's algorithm achieved 94.3% accuracy in controlled lab conditions, but no independent study has tested its performance in real-world surveillance deployments where lighting, camera angles, and subject behavior vary widely.

The Question Left Open

Chen resigned from the Advanced Digital Sciences Center in January 2026. She now works at a nonprofit advocating for algorithmic transparency and has called for a moratorium on the commercialization of biometric surveillance technologies until international export controls are in place. The university declined to comment on her departure. A spokesperson said in an email that the institution 'remains committed to responsible innovation and continues to refine its policies governing technology transfer and dual-use research.'

The licenses have been signed. The systems are deployed. Somewhere in Riyadh, Cairo, or Hanoi, a camera is recording a face partially obscured by a mask or scarf, and an algorithm is analyzing the involuntary movement of muscles around the eyes. It is deciding whether this person matches someone in a database. It is deciding whether to alert a security officer. It is making these decisions thousands of times per day, and no independent auditor is checking whether it's correct.

Here is the question the technology leaves behind: Who decides what gets built, who it gets sold to, and whether the world is ready for it? For now, the answer is a procurement officer at a university technology transfer office, working faster than ethicists can meet, racing to close a deal before a competitor publishes similar research. That is not a system designed for safety. It is a system designed for speed. And speed, in the surveillance economy, means someone is already being watched by the time we notice the camera is there.
