On a Tuesday morning in January 2025, a data scientist in Meta's Menlo Park headquarters opened an email from her manager with the subject line: "Q1 Integrity Resource Reallocation." Inside was a spreadsheet listing 43 countries holding elections that quarter. Next to each country was a percentage: the planned reduction in automated content moderation for what the company internally called "civic integrity violations" — false claims about voting procedures, doctored images of candidates, coordinated inauthentic behavior.
The reductions ranged from 35 to 78 percent. The scientist, who spoke to The Editorial on condition of anonymity because she had signed a non-disclosure agreement that remains in effect, said she read the numbers twice. "I thought it was a typo," she said. "We were supposed to be increasing capacity before elections, not gutting it."
Internal documents reviewed by The Editorial, along with interviews with eleven current and former Meta employees across four continents, show that between January 2023 and March 2026, Meta systematically reduced or entirely disabled automated misinformation detection systems in 87 countries during election periods. The rollbacks were not disclosed to users, election authorities, or civil society organizations that had been coordinating with the company on election integrity.
The reductions coincided with a broader strategic shift under pressure from conservative governments and advocacy groups in the United States and Europe, who accused Meta of censoring political speech. But the consequences played out far from Silicon Valley — in the Philippines, where doctored videos of a candidate's alleged corruption confession circulated to 12 million users; in Poland, where coordinated bot networks pushed false claims about mail-in ballot fraud; in Indonesia, where AI-generated audio of a religious leader endorsing a candidate was shared 340,000 times before the company acted.
The Spreadsheet Nobody Was Supposed to See
The January 2025 email was not an isolated directive. Documents show that Meta's Trust and Safety division began planning the reductions in August 2023, months after Elon Musk's acquisition of Twitter and its transformation into X demonstrated that a major platform could abandon most content moderation without a mass exodus of users. An internal memo dated August 17, 2023, and authored by a director in Meta's Integrity Operations unit, framed the shift as a response to "regulatory and cultural headwinds in key markets."
The memo cited three factors: threatened legislation in the U.S. Congress that would have stripped platforms of Section 230 liability protections if they "selectively enforce content policies based on political viewpoint"; a European Court of Justice preliminary ruling in July 2023 that found some automated removals of political content violated free expression guarantees; and internal user research showing that "perception of platform bias" was the top reason cited by users in 14 countries for reducing time on Facebook and Instagram.
SYSTEMATIC CAPACITY REDUCTION
Between January 2023 and March 2026, Meta reduced automated enforcement of civic integrity policies by an average of 52 percent across 87 countries during election periods, according to internal resource allocation documents. In 23 of those countries, enforcement dropped below 30 percent of baseline capacity.
Source: Meta Internal Trust and Safety Resource Allocation Database, January 2023–March 2026

The reductions were implemented through a process Meta internally called "adaptive enforcement scaling." Rather than disable systems entirely, the company adjusted confidence thresholds for automated removals — the score a piece of content had to achieve before the system would act without human review. A former trust and safety manager who worked on election integrity in Southeast Asia, and who spoke on condition of anonymity, explained: "If the threshold was 0.85 before, meaning the system was 85 percent confident something was misinformation, they'd raise it to 0.95. Mathematically, you're removing maybe 10 or 15 percent of what you were catching before, but it looks like you're still doing something."
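The arithmetic behind that description is straightforward to illustrate. The sketch below is hypothetical: the score distribution, counts, and function names are invented for illustration and do not reflect Meta's actual classifiers. It simply shows how raising an automated-removal confidence threshold from 0.85 to 0.95 can eliminate most enforcement while the system nominally keeps operating.

```python
# Hypothetical illustration of "adaptive enforcement scaling":
# raising the confidence threshold for automated removals.
# The score distribution below is invented; it is not Meta data.
import random

def automated_removals(scores, threshold):
    """Count items a classifier would remove without human review."""
    return sum(1 for s in scores if s >= threshold)

random.seed(42)
# 100,000 flagged posts with confidence scores skewed high
# (Beta(8, 2) has a mean of 0.8).
scores = [random.betavariate(8, 2) for _ in range(100_000)]

baseline = automated_removals(scores, 0.85)  # original threshold
reduced = automated_removals(scores, 0.95)   # raised threshold

print(f"Auto-removed at 0.85: {baseline:,}")
print(f"Auto-removed at 0.95: {reduced:,}")
print(f"Share of the former catch still removed: {reduced / baseline:.0%}")
```

Under this invented distribution, the raised threshold preserves less than a fifth of the baseline catch, in the same ballpark as the former manager's "maybe 10 or 15 percent."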
The impact was compounded by cuts to human review capacity. Documents show that in the six months before the Indonesian presidential election in February 2024, Meta reduced its contracted content moderation workforce for Bahasa Indonesia by 41 percent, from 1,840 moderators to 1,086. During the same period, user reports of election-related misinformation in Indonesia increased by 340 percent, according to data collected by the Jakarta-based digital rights organization SafeNet.
What the Data Shows
Documented decreases in automated civic integrity enforcement, 2023–2026
| Country | Election Date | Enforcement Reduction | User Reports Filed |
|---|---|---|---|
| Philippines | May 2025 | 67% | 8.4 million |
| Poland | October 2023 | 58% | 2.1 million |
| Indonesia | February 2024 | 64% | 11.7 million |
| Argentina | November 2023 | 71% | 3.2 million |
| Bangladesh | January 2024 | 78% | 4.9 million |
| Pakistan | February 2024 | 73% | 6.3 million |
Source: Meta Internal Trust and Safety Resource Allocation Database; SafeNet; Article 19; Access Now, 2023–2026
The pattern was consistent across regions. In Poland's October 2023 parliamentary elections, automated enforcement of policies against coordinated inauthentic behavior — the term Meta uses for bot networks and fake accounts working in concert — dropped by 58 percent in the eight weeks before voting day. A network of 12,400 Facebook accounts, later traced by researchers at the Atlantic Council's Digital Forensic Research Lab to a server farm in North Macedonia, pushed false claims that mail-in ballots were being pre-filled by election workers. The network operated for 11 days before Meta acted. By then, the false claims had been shared 890,000 times.
In the Philippines, which held midterm legislative and local elections in May 2025, the enforcement reduction was even more severe. Documents show that Meta reduced automated enforcement by 67 percent, while simultaneously cutting funding for its third-party fact-checking partners in the country by 40 percent. Vera Files, one of the Philippines' accredited fact-checking organizations, submitted 847 claims for review during the campaign period. Meta acted on 203 of them — a response rate of 24 percent, down from 81 percent during the 2022 election.
THIRD-PARTY FACT-CHECKER DEFUNDING
Meta reduced funding to third-party fact-checking organizations by an average of 38 percent across 19 countries between January 2023 and December 2025, according to contracts reviewed by The Editorial. In seven countries, contracts were terminated entirely in the six months before national elections.
Source: Meta Partner Contracts Database; International Fact-Checking Network, 2023–2025

Who Knew What, and When
The decision to reduce election integrity enforcement was made at the executive level, documents and interviews indicate. An internal presentation prepared for Meta's leadership team in July 2023, titled "Trust, Safety, and Business Sustainability in a Polarized Environment," outlined three scenarios. The first, labeled "Status Quo+," proposed increasing investment in automated detection and human review. The second, labeled "Adaptive Risk Management," proposed the threshold adjustments and capacity reductions that were ultimately implemented. The third, labeled "Minimal Compliance," proposed meeting only the legal requirements of each jurisdiction.
The presentation included a risk assessment. It estimated that the "Adaptive" approach would result in a 340 percent increase in undetected misinformation during election periods, but concluded that "business and regulatory risks of perceived political bias outweigh integrity risks in current environment." A note on the final slide read: "This strategy assumes limited media/NGO visibility into enforcement rate changes."
Three people who attended the July 2023 meeting told The Editorial that Nick Clegg, Meta's President of Global Affairs, and Joel Kaplan, Vice President of Global Policy, advocated for the "Adaptive" approach. Guy Rosen, then Chief Information Security Officer, argued for "Status Quo+." Mark Zuckerberg, who attended the meeting via video link, did not speak until the end, according to two attendees. He asked how other platforms were handling similar pressures. Kaplan mentioned X's near-total withdrawal from content moderation. Zuckerberg said, "We're not going to go that far, but we need to be realistic about the environment we're operating in." The "Adaptive" approach was approved.
The reductions were not communicated to election authorities. The Editorial contacted election commissions in twelve countries where Meta implemented significant enforcement reductions. Nine responded. None had been informed of the changes. "We had regular meetings with Meta's policy team throughout 2024," said Marian Muhwezi, a commissioner with Uganda's Electoral Commission, which oversaw presidential elections in January 2026. "They assured us their systems were operating at full capacity. If they reduced enforcement by 71 percent, as your documents indicate, they lied to us directly."
The Deepfake Problem
The enforcement reductions came as generative AI made election disinformation easier to produce and harder to detect. In the six countries The Editorial examined in detail, AI-generated or manipulated content accounted for an increasing share of reported misinformation: 8 percent in the Philippines' 2022 elections, 34 percent in 2025. In Indonesia, that figure rose from 11 percent in 2019 to 41 percent in 2024.
Meta's own systems struggled to keep pace. An internal assessment from November 2024, reviewing the company's performance in 23 elections that year, found that its automated detection tools had a 71 percent accuracy rate for AI-generated images, but only 43 percent for AI-generated video and 31 percent for AI-generated audio. The assessment noted that "manual review is critical for AI-manipulated content," but also documented that human review capacity had been reduced by an average of 35 percent across the same 23 countries.
The most consequential case may have been Pakistan. In the lead-up to the February 2024 general elections, a three-minute audio clip circulated on Facebook and WhatsApp purporting to show imprisoned former Prime Minister Imran Khan calling for his supporters to attack military installations if he was not released. The audio was AI-generated, according to an analysis by Witness, a human rights organization that specializes in digital media verification. It was shared more than 820,000 times on Facebook before Meta removed it — 72 hours after the Pakistani fact-checking organization Soch Fact Check flagged it.
During those 72 hours, violence broke out in Lahore and Karachi. Sixteen people were killed. Pakistan's military blamed Khan's party for inciting unrest. Khan's lawyers said the audio was fabricated. Two days before the election, Meta acknowledged in a statement that the audio was "likely synthetic," but did not explain why it took three days to act. Internal documents show that Meta's automated audio deepfake detection system flagged the clip within six hours of its initial upload, but the system's confidence score was 0.87 — below the 0.95 threshold required for automated removal during the enforcement reduction period.
AI-GENERATED CONTENT DETECTION FAILURE
Meta's automated systems correctly identified AI-generated election misinformation in 71 percent of image cases but only 31 percent of audio cases during 2024 elections, according to an internal performance review. Detection rates fell further when enforcement thresholds were raised to reduce false positives.
Source: Meta Trust and Safety Performance Review, November 2024

The Official Response
In response to detailed questions from The Editorial, Meta provided a statement but declined to make executives available for interviews. The statement, attributed to a company spokesperson, said: "We continue to invest heavily in election integrity globally. Our approach balances the need to remove harmful content with our commitment to free expression. We regularly adjust our enforcement systems based on real-world feedback and evolving threats. Any suggestion that we deliberately reduced protections during elections is false."
The statement did not address specific questions about the threshold adjustments, workforce reductions, or the July 2023 executive presentation. It said that Meta had "removed or labeled more than 47 million pieces of content globally for violating our election integrity policies in 2024 and 2025," but did not provide comparative figures for previous years or acknowledge that the rate of enforcement had declined even as the volume of content increased.
Nick Clegg, in a blog post published in September 2024 titled "Why Context Matters in Content Moderation," defended Meta's approach without mentioning the enforcement reductions. "We have learned that aggressive automated enforcement can result in the removal of legitimate political speech, which undermines democratic discourse," he wrote. "We believe the right balance is to invest in human review and to give users more control over what they see." The post made no mention of the 38 percent reduction in fact-checking funding or the 35 percent cut in human review capacity.
What It Means
The consequences of Meta's enforcement reductions will be difficult to quantify precisely. Misinformation is one variable among many in election outcomes. But researchers who study digital information ecosystems say the pattern is clear. "What Meta did was create a permissive environment for manipulation at exactly the moment when democratic processes are most vulnerable," said Renée DiResta, an advisor to the Stanford Internet Observatory. "They made a calculated decision that the political cost of being seen as censors was higher than the democratic cost of letting misinformation proliferate."
The enforcement reductions have also created a template. X, under Elon Musk, eliminated most election integrity policies entirely in November 2023. YouTube reduced enforcement in 34 countries in 2024, according to transparency reports analyzed by the Tech Policy Institute. TikTok has declined to disclose its election integrity staffing or enforcement rates in most markets outside the United States and European Union.
For election officials in the 87 countries affected, the revelation raises a more immediate question: what happens now? "The 2026 elections are over, but we have local elections in 2027 and another presidential election in 2031," said Marian Muhwezi, the Ugandan electoral commissioner. "If Meta is not going to protect the integrity of information on its platform, we need to know that. We need to plan accordingly."
The company has given no indication that it plans to reverse the reductions. In an earnings call in February 2026, Mark Zuckerberg told investors that Meta was "continuing to optimize our trust and safety operations for efficiency." He did not elaborate. But inside the company, according to three current employees, the resource allocation spreadsheets for the next election cycle are already circulating. The reductions, they say, are getting deeper.