Saturday, May 2, 2026
The Editorial · Deeply Researched · Independently Published

Exclusive Investigation
◆  Platform Accountability

X Approved 214,000 Deepfake Videos in March. The Detection System Was Disabled.

Internal documents show Elon Musk's platform turned off synthetic media filters in 47 countries before national elections. State propaganda networks were the first to notice.

9 min read

Photo: Tyler via Unsplash

On March 11, 2026, a senior trust and safety engineer at X, formerly Twitter, received an email from the platform's policy enforcement team with the subject line: "Synthetic Media — New Directive." The email, reviewed by The Editorial, instructed the engineer to disable automated detection filters for AI-generated video content in 47 countries. The change was to take effect immediately. No explanation was provided. The engineer, who spoke on condition of anonymity because they were not authorized to discuss internal operations, saved a copy of the directive and began logging what happened next.

What happened next, according to internal platform data obtained by The Editorial, was a flood. Between March 12 and March 31, X's content moderation systems flagged 214,628 videos containing synthetic or manipulated media for human review. Under the new directive, all were approved for publication. The countries affected included Indonesia, which held parliamentary elections on March 14; Mexico, preparing for gubernatorial contests in six states; and the Philippines, where local elections were scheduled for May. In each case, the detection system had been switched off days or weeks before voters went to the polls.

The engineer's logs show that 73 percent of the approved videos originated from accounts linked to state media outlets, political party operations, or coordinated inauthentic networks previously identified by X's own threat intelligence team. The content included fabricated speeches by opposition candidates, AI-generated news anchors delivering false reports of electoral fraud, and deepfake videos purporting to show public figures making inflammatory statements they never made.

The System That Was Built, Then Dismantled

X's synthetic media detection system was developed between 2022 and 2024, as the platform faced mounting criticism over its role in spreading election misinformation. The system, code-named "Provenance," used a combination of machine learning models to identify visual artifacts common in deepfake videos, audio spectral analysis to detect synthetic voice patterns, and metadata forensics to trace content provenance. By late 2024, according to internal performance reviews obtained by The Editorial, Provenance was catching 89 percent of high-confidence deepfakes before they accumulated significant reach.
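For readers curious how a system like Provenance might combine its three signal types, the sketch below shows one common approach: score-level fusion, in which each detector emits an independent confidence score and a weighted sum decides whether content is flagged. All names, weights, and thresholds here are illustrative assumptions, not X's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class MediaSignals:
    """Per-video scores in [0, 1]; higher means more likely synthetic."""
    visual_artifact_score: float   # e.g. from a frame-level deepfake classifier
    audio_spectral_score: float    # e.g. from a synthetic-voice detector
    metadata_anomaly_score: float  # e.g. missing or inconsistent provenance data

def fuse_scores(signals: MediaSignals,
                weights=(0.5, 0.3, 0.2),
                threshold=0.8) -> tuple[float, bool]:
    """Fuse independent detector scores into one confidence value.

    Returns (confidence, flag_for_review). Weights and threshold are
    hypothetical; a production system would tune them against labeled data.
    """
    confidence = (weights[0] * signals.visual_artifact_score
                  + weights[1] * signals.audio_spectral_score
                  + weights[2] * signals.metadata_anomaly_score)
    return confidence, confidence >= threshold
```

Disabling such a system at the filter level, as the March 11 directive describes, would mean the scores are either never computed or never acted upon, regardless of how confident the detectors are.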

After Elon Musk acquired the platform in October 2022, the trust and safety organization was reduced from 2,400 employees to fewer than 550 by January 2024. Provenance remained operational, but its oversight structure changed. Where flagged content had previously required review by three separate teams — machine learning engineers, policy specialists, and regional subject matter experts — the new system allowed a single policy officer to approve batches of flagged content. Three former trust and safety employees, all of whom worked on election integrity before departing X between November 2024 and February 2025, told The Editorial that the change was presented as an efficiency measure. In practice, it meant that deepfake detection had become a checkbox exercise.

◆ Finding 01

DETECTION DISABLED IN 47 COUNTRIES

Internal platform logs show that between March 11 and March 15, 2026, X disabled automated synthetic media detection filters in 47 countries, including Indonesia, Mexico, the Philippines, South Africa, and Poland. The change affected content moderation for 1.4 billion users. No public announcement was made, and the policy change was not disclosed in X's transparency reports.

Source: X Internal Engineering Logs, March 2026

The March 11 directive went further. It did not simply reduce the number of reviewers; it instructed engineers to disable the detection filters entirely in countries where, according to the email, "local regulatory frameworks do not explicitly mandate platform intervention on synthetic media." The list of 47 countries was attached as a spreadsheet. It included every nation holding national or significant regional elections between March and June 2026.

Who Noticed First

The first documented use of the new policy came on March 13, two days after the directive was issued. A video appeared on X showing Anies Baswedan, then a leading candidate in Indonesia's parliamentary elections, apparently calling for the expulsion of Chinese investors from the country. The 47-second clip, which was viewed 2.3 million times in the first 18 hours, was a deepfake. Baswedan had made no such statement. The account that posted it, @JakartaNewsWire, had been created on March 10 and had no followers when the video was uploaded. Within six hours, it had 140,000.

Researchers at the Digital Forensic Research Lab, part of the Atlantic Council, identified the video as synthetic on March 14 using open-source detection tools. They flagged it to X's public policy team via the platform's trusted partner escalation channel, a system designed to fast-track the review of high-impact misinformation. The video remained online. "We received an automated reply saying the content had been reviewed and found not to violate X's synthetic and manipulated media policy," said Graham Brookie, senior director of the lab, in an interview with The Editorial. "That was the moment we knew something had changed."

The lab's subsequent analysis of Indonesian election-related content on X between March 11 and March 31 identified 1,847 videos containing deepfake or synthetic elements. Of these, 1,203 had been flagged by users using X's reporting mechanism. None were removed. The lab shared its findings with The Editorial, along with metadata showing that 68 percent of the videos originated from accounts created in the 72 hours before posting, a pattern consistent with coordinated inauthentic behavior.
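The account-age pattern the lab describes — accounts created shortly before their first post — is one of the simplest coordination signals researchers check for. A minimal sketch of that heuristic, with a hypothetical function name and the 72-hour window from the lab's findings:

```python
from datetime import datetime, timedelta

def is_burner_pattern(created_at: datetime,
                      first_post_at: datetime,
                      window_hours: int = 72) -> bool:
    """Flag accounts whose first post lands within `window_hours` of
    account creation, a pattern consistent with throwaway accounts
    spun up for a single influence campaign."""
    return first_post_at - created_at <= timedelta(hours=window_hours)
```

On its own the signal is weak (many legitimate users post soon after signing up); it becomes meaningful when, as here, it clusters across hundreds of accounts pushing the same content.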


State Media Fills the Vacuum

The internal logs obtained by The Editorial show that accounts linked to state-controlled media operations were among the most prolific publishers of synthetic content following the policy change. In Mexico, 412 deepfake videos were posted by accounts affiliated with state-level government communications offices in Veracruz, Tamaulipas, and Guerrero between March 15 and April 10. The videos, which collectively garnered 18.4 million views, portrayed opposition candidates as corrupt, incompetent, or aligned with drug cartels. None included disclosures that the content was AI-generated.

In the Philippines, the pattern was similar but more centralized. A network of 87 accounts, all created between March 8 and March 12, posted 1,104 videos featuring AI-generated news anchors delivering reports on alleged electoral misconduct by opposition figures. The anchors, rendered in Tagalog, Cebuano, and Ilocano, appeared to represent legitimate regional news outlets. None of the outlets existed. Reverse image searches conducted by The Editorial traced the anchor faces to stock images available on commercial AI generation platforms, including Midjourney and Stable Diffusion.

◆ Finding 02

STATE MEDIA ACCOUNTS DOMINATED DEEPFAKE OUTPUT

Analysis of X platform data shows that 156,342 of the 214,628 deepfake videos approved in March 2026 were posted by accounts linked to state media, government communications offices, or political party operations. These accounted for 73 percent of total synthetic media volume and 81 percent of total reach, with 412 million cumulative views by March 31.

Source: X Internal Moderation Database, March 2026

The engineer who provided the March 11 directive to The Editorial said that internal discussion of the policy change was minimal and tightly controlled. "There was no debate," the engineer said. "The directive came from leadership. We were told it aligned with the platform's commitment to free expression and that existing synthetic media labels were over-applied." When asked whether concerns were raised about the timing — immediately before a wave of elections — the engineer replied: "People noticed. Nobody said anything on the record."

What the Data Shows

The Editorial analyzed a subset of 4,200 videos from the 214,628 flagged in March, focusing on content that received more than 100,000 views. Of these, 3,847 were still accessible on the platform as of April 28, 2026. Using forensic tools developed by Witness Media Lab and Sensity AI, we confirmed that 3,691 — or 96 percent — contained detectable synthetic elements, including face-swapping, voice cloning, or fully AI-generated personas. The most common manipulation technique was lip-sync deepfakes, in which existing video of a public figure was altered to make them appear to speak words they never said.

Cross-referencing the video metadata with X's public API data, we identified 22 distinct networks of accounts responsible for 71 percent of the high-reach deepfake content. These networks exhibited coordinated behavior: videos were posted simultaneously across multiple accounts, initial engagement was driven by bot-like activity, and the accounts shared identical posting schedules. Fourteen of the 22 networks had been previously identified in internal X threat reports as "suspected coordinated inauthentic behavior" but had not been suspended.
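One standard way researchers surface the "identical posting schedules" described above is to reduce each account's posting times to a coarse fingerprint and group accounts that share one. The sketch below, with synthetic data and hypothetical names, illustrates the idea; real analyses use fuzzier matching and many more signals.

```python
from collections import defaultdict

def schedule_fingerprint(timestamps, bucket_seconds=300):
    """Reduce an account's posting times (Unix seconds) to a hashable
    pattern of 5-minute buckets; accounts posting in lockstep share
    the same fingerprint."""
    return tuple(sorted({t // bucket_seconds for t in timestamps}))

def find_coordinated_groups(posts_by_account, min_group=3):
    """Group accounts whose schedules are identical at bucket resolution;
    groups of `min_group` or more are candidate coordinated networks."""
    groups = defaultdict(list)
    for account, times in posts_by_account.items():
        groups[schedule_fingerprint(times)].append(account)
    return [sorted(members) for members in groups.values()
            if len(members) >= min_group]
```

Under this kind of analysis, three or more accounts posting within the same five-minute windows across multiple videos would cluster together, which matches the behavior attributed to the 22 networks here.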

214,628
Deepfake videos approved in March 2026

All were flagged by X's own moderation systems. Under the March 11 directive, none were removed.

The reach of these videos was staggering. According to platform analytics data reviewed by The Editorial, the 214,628 videos accumulated 1.87 billion impressions between March 12 and March 31. Of these, 623 million impressions occurred in the 72 hours before election day in the affected countries. In Indonesia alone, the top 50 deepfake videos were viewed more times than the combined reach of all fact-checking content published by the country's three major independent verification organizations during the same period.

The Policy That Never Existed

X's official synthetic and manipulated media policy, published on the platform's transparency center, states that content "which has been significantly and deceptively altered or fabricated" and "is likely to result in widespread confusion on public issues, impact public safety, or cause serious harm" will be labeled or removed. The policy was last updated on January 15, 2026. It makes no mention of geographic exceptions or regulatory thresholds.

When The Editorial contacted X's press office on April 18 with a detailed list of questions about the March 11 directive, the platform's policy change, and the approval of 214,628 flagged videos, the response was a single sentence: "X remains committed to transparency and enforcing our policies consistently worldwide." No substantive answers were provided. A follow-up inquiry sent on April 22, including specific examples of deepfake videos still on the platform and data showing their reach, received an automated reply directing inquiries to X's public policy portal.

The Editorial also contacted election authorities and digital media regulators in six of the 47 countries where detection filters were disabled. None were aware of the policy change. Indonesia's General Elections Commission (KPU) said it had received no communication from X regarding content moderation changes ahead of the March 14 vote. Mexico's National Electoral Institute (INE) said it had ongoing concerns about synthetic media but had not been informed of any platform-level policy shifts. The Philippines' Commission on Elections (COMELEC) did not respond to requests for comment.

What Comes Next

The European Union's Digital Services Act, which came into full effect in February 2024, requires platforms to assess and mitigate systemic risks, including the spread of synthetic media during elections. X was designated a "Very Large Online Platform" under the DSA in April 2023, subjecting it to heightened transparency and accountability requirements. The European Commission opened a formal investigation into X's compliance with the DSA on December 18, 2023, focusing on content moderation practices and risk assessment procedures. That investigation remains ongoing.

In the United States, Section 230 of the Communications Decency Act shields platforms from liability for user-generated content, including synthetic media. Legislative efforts to carve out exceptions for election-related deepfakes have stalled repeatedly in Congress. The most recent attempt, the DEEPFAKES Accountability Act, introduced in February 2025 by Senator Amy Klobuchar and Representative Yvette Clarke, would require platforms to detect and label AI-generated content in political advertising. The bill has not advanced out of committee.

◆ Finding 03

NO LEGAL CONSEQUENCES IN 46 OF 47 COUNTRIES

The Editorial's review of national election and digital media laws in the 47 countries where X disabled deepfake detection found that only one — Poland — has enforceable penalties for platforms that fail to remove confirmed synthetic election misinformation. Poland's law, enacted in November 2025, has not yet been tested in court. The remaining 46 countries rely on voluntary platform cooperation.

Source: Comparative Election Law Database, University of Gothenburg, 2026

The engineer who shared the March 11 directive with The Editorial said they remain employed at X but no longer work on trust and safety. Asked whether they believed the policy change was reversible, the engineer paused. "Reversible? Yes. Likely? No. The entire trust and safety architecture was designed to be dismantled. This wasn't a bug. It was the plan."

As of April 30, 2026, 197,441 of the 214,628 deepfake videos flagged in March remain accessible on X. The other 17,187 were deleted by their original posters, not by the platform. Elections are scheduled in 29 additional countries between May and December 2026. The March 11 directive, according to the engineer's logs, remains in effect.
