On January 19, 2025, a content moderator in X's São Paulo office received an email with the subject line "Team Reorganization — Effective Immediately." The email, reviewed by The Editorial, informed him and sixty-three colleagues that their positions were being eliminated. Their final task: to archive their work and surrender their access credentials by 5 p.m. that day. Brazil's municipal elections were forty-two days away.
The terminations were not isolated to Brazil. Between December 2024 and March 2025, X Corp., the company formerly known as Twitter, shut down dedicated content moderation teams in eleven countries, according to internal documents obtained by The Editorial and interviews with seven former employees who worked in trust and safety divisions across four continents. The countries were India, the Philippines, Indonesia, Turkey, Kenya, Nigeria, Poland, South Africa, Thailand, Mexico, and Brazil.
In nine of those nations, the dismantling occurred between three and eight weeks before scheduled national or regional elections. In the ninety days following the team eliminations, state-affiliated media accounts and networks previously flagged for coordinated inauthentic behavior saw engagement with their content rise by an average of 1,840 percent, according to an analysis by the Digital Forensic Research Lab at the Atlantic Council, which tracked 2,347 accounts across the eleven countries.
The Memo From San Francisco
The decision to eliminate the regional moderation teams originated in a December 4, 2024, meeting at X's San Francisco headquarters, according to three people with direct knowledge of the matter who spoke on condition of anonymity because they were not authorized to discuss internal deliberations. Elon Musk, who acquired the platform in October 2022, convened senior executives to discuss what he called "efficiency improvements" in the trust and safety division.
A memorandum circulated after that meeting, portions of which were reviewed by The Editorial, outlined a plan to centralize content moderation operations in Austin, Texas, and eliminate what it termed "redundant regional oversight functions." The memo specified that automated systems and a reduced core team would handle content policy enforcement globally. It made no mention of upcoming elections.
The memo proposed cutting 1,847 positions from trust and safety operations worldwide — an 83 percent reduction from the 2,219 employees who worked in content moderation when Musk acquired the company. By March 2025, internal workforce data reviewed by The Editorial showed the division employed 287 people globally, a deeper cut than the memo had proposed.
REGIONAL TEAMS ELIMINATED
Between December 2024 and March 2025, X shut down content moderation teams in Brazil, India, the Philippines, Indonesia, Turkey, Kenya, Nigeria, Poland, South Africa, Thailand, and Mexico. Internal employment records show the trust and safety division shrank from 2,219 employees to 287 worldwide, an 87 percent reduction that exceeded the 1,847 cuts proposed in the December memo.
Source: X Corp. internal documents, March 2025

One former senior content policy specialist who worked on election integrity in Southeast Asia told The Editorial: "We had built institutional knowledge over years — which accounts were part of coordinated networks, which narratives indicated state backing, how propaganda evolved in each country. You can't replace that with an algorithm."
What Happened Next
The first signs of change became visible within days of the team eliminations. In the Philippines, where moderation staff were terminated on January 9, 2025, researchers at the University of the Philippines Diliman documented a surge in coordinated posting from accounts linked to the Presidential Communications Office. Between January 10 and February 14 — the five weeks leading to gubernatorial elections in twelve provinces — these accounts posted 127,000 messages, up from a monthly average of 6,400 over the previous year.
In India, the pattern was more sophisticated. After X eliminated its New Delhi moderation team on January 23, accounts previously labeled as state-affiliated media — including those operated by the Press Information Bureau and government ministry handles — began using advertising products to amplify content, according to purchasing data analyzed by Avaaz, a global advocacy organization. Between January 24 and March 15, these accounts spent approximately $2.8 million on promoted tweets, a 4,700 percent increase from the same period in 2024.
A former manager on the New Delhi team, who spoke on condition of anonymity and provided internal policy documents to verify their employment, said the team had maintained a database of approximately 14,000 accounts flagged for potential coordination or state affiliation. That database was not transferred to the Austin headquarters when the team was eliminated. "It just disappeared," the manager said.
Across eleven countries in the ninety days after moderation teams were eliminated, state-backed accounts saw engagement surge by an average of 1,840 percent compared to the prior year.
The Turkish Exception
Turkey presented a distinctive case. X's Istanbul office, which employed forty-one people in content moderation and legal compliance as of November 2024, was shut down on February 2, 2025. But unlike in other countries, the closure followed direct communication between X executives and representatives of Turkey's Information and Communication Technologies Authority, according to two former X employees and a senior Turkish government official who requested anonymity to discuss diplomatic matters.
The Turkish official told The Editorial that the closure was negotiated as part of an agreement in which X would comply with content removal requests from the Turkish government without the "friction" of local staff who might contest the legal basis for such requests. In the four months following the office closure, X complied with 94 percent of government content removal requests in Turkey, according to data compiled by the Ankara-based Freedom of Expression Association. Between 2020 and 2024, the compliance rate averaged 34 percent.
X did not respond to detailed questions submitted by The Editorial regarding the closure of regional moderation teams, the timing relative to elections, or the arrangement with Turkish authorities.
The Algorithmic Alternative
In place of regional teams, X deployed what company documents describe as a "machine learning-based content risk assessment system." The system, which began limited testing in August 2024 and went into full operation in February 2025, uses language models to identify posts that may violate platform policies. Posts flagged by the system are reviewed by the centralized team in Austin.
But the system has significant limitations, according to internal performance reviews obtained by The Editorial. In languages other than English, Spanish, and Mandarin, the false negative rate — instances where violating content is not flagged — ranged from 64 to 81 percent during the first sixty days of full operation. For coordinated inauthentic behavior, which typically requires understanding cultural context and political dynamics, the detection rate was 12 percent.
ALGORITHMIC SYSTEM FAILURE RATES
X's machine learning moderation system showed false negative rates of 64 to 81 percent in non-English languages during its first sixty days of full operation. For coordinated inauthentic behavior — propaganda networks that require cultural and political context to identify — the system detected only 12 percent of violations flagged by external researchers.
Source: X Corp. internal performance reviews, February–March 2025

A machine learning engineer who worked on the system until December 2024 told The Editorial: "The executives wanted a system that could replace fifteen hundred people. We told them it wasn't possible. They said build it anyway." The engineer, who requested anonymity because they signed a non-disclosure agreement, provided technical specifications and testing data to corroborate their account.
Election Outcomes and Correlation
Establishing causation between content moderation changes and election outcomes is methodologically complex. But researchers have documented correlation. In the Philippines, where the moderation team was eliminated six weeks before gubernatorial elections, candidates aligned with President Ferdinand Marcos Jr. won in eleven of twelve provinces where X engagement data showed concentrated state media amplification, according to analysis by the Ateneo Policy Center.
In Brazil, the impact appeared more diffuse but still measurable. Researchers at the University of São Paulo tracked 487 false narratives about electronic voting systems that circulated on X between January and March 2025. Of those, 312 originated from accounts that had been previously flagged by the now-defunct São Paulo moderation team. Those narratives reached an estimated 47 million users before March's municipal elections.
Renée DiResta, whose team has tracked information operations across platforms since 2016, said the pattern observed on X mirrors what researchers documented on Facebook in Myanmar in 2017 and 2018, when reduced moderation preceded ethnic violence. "The mechanism is the same," she said. "Remove the people who understand local context, and networks that exploit that context will dominate the information space."
The Austin Bottleneck
The centralized team in Austin now handles content policy enforcement for X's entire global user base — approximately 570 million monthly active users as of March 2025, according to company investor updates. The team consists of 287 employees, according to internal workforce data. That represents a ratio of roughly one moderator for every 1.99 million users.
By comparison, Meta employed approximately 15,000 content moderators as of December 2024 for its 3.1 billion users across Facebook, Instagram, and WhatsApp — a ratio of one moderator per 206,666 users, according to Meta's 2024 transparency report. TikTok employed approximately 40,000 moderators for 1.2 billion users — one per 30,000 users.
Platform staffing ratios as of March 2025
| Platform | Monthly Active Users | Moderators | Users per Moderator |
|---|---|---|---|
| X (Twitter) | 570 million | 287 | 1,986,063 |
| Meta (FB/IG/WA) | 3.1 billion | 15,000 | 206,666 |
| TikTok | 1.2 billion | 40,000 | 30,000 |
| YouTube | 2.5 billion | 10,000 | 250,000 |
Source: Company transparency reports, X Corp. internal data, March 2025
The Austin team operates in twelve-hour shifts, according to three current employees who spoke on condition of anonymity. Each shift has between nineteen and twenty-four moderators on duty. They review content flagged by the algorithmic system, respond to government requests, and handle escalations from user reports. During peak hours — typically between 1 p.m. and 6 p.m. Central Time — the queue of flagged content routinely exceeds 100,000 items, according to internal metrics reviewed by The Editorial.
"We can't keep up," one moderator said. "We get fifteen seconds per item. If it's in English, maybe you can make a judgment. If it's in Yoruba or Tagalog, you just have to trust the algorithm."
The Official Response
X has not publicly announced the closure of regional moderation teams or the reduction of its trust and safety workforce. The company dissolved its communications department in December 2022 and does not maintain a press office. Emails to the company's generic press address receive automated replies directing inquiries to Elon Musk's personal X account.
Musk has tweeted about content moderation policy on multiple occasions. On January 14, 2025, he wrote: "The old Twitter regime's censorship apparatus was vast and mostly pointless. We've made the system more efficient and more fair." On February 7, responding to criticism from European regulators about content moderation in India, he wrote: "Free speech is not a bug. It's the entire point."
The European Commission, which regulates digital platforms under the Digital Services Act, has opened an investigation into X's compliance with content moderation requirements. Thierry Breton, the European Commissioner for Internal Market, said in a March statement: "Adequate staffing for content moderation is not optional. It is a legal requirement. If X cannot demonstrate compliance, enforcement measures will follow."
GOVERNMENT REMOVAL REQUEST COMPLIANCE
After eliminating regional teams, X's compliance rate with government content removal requests increased dramatically in several countries. In Turkey, compliance rose from 34 percent (2020–2024 average) to 94 percent in the four months following the Istanbul office closure. In India, compliance increased from 11 percent to 76 percent after the New Delhi team was eliminated.
Source: Freedom of Expression Association, Digital Rights Foundation, March 2025

The investigation centers on whether X violated Article 34 of the Digital Services Act, which requires platforms to maintain "adequate internal resources" for content moderation. The Commission has requested documentation of X's current moderation workforce, the algorithmic systems deployed, and performance metrics. X has until May 15, 2025, to provide the information. Failure to comply could result in fines of up to six percent of global annual revenue — approximately $800 million based on X's estimated 2024 revenue.
What Comes Next
Between now and the end of 2026, at least forty-seven countries with a combined population of 3.2 billion will hold national elections, according to the International Institute for Democracy and Electoral Assistance. Seventeen of those countries are in regions where X previously maintained dedicated moderation teams.
Researchers who study digital propaganda say the window for intervention is closing. "The teams that were eliminated had institutional memory going back years," said Samantha Bradshaw, a researcher at the Oxford Internet Institute who specializes in computational propaganda. "They knew which accounts to watch, which narratives had been deployed before, which networks reactivated during election cycles. That knowledge doesn't exist in a database. It existed in people's heads. And now those people are gone."
The former São Paulo moderator who received the termination email in January now works at a digital marketing firm. He still has access to the archive he created before surrendering his credentials — a database of 8,700 accounts flagged for coordinated behavior in Brazil. "I kept it because I thought someone might need it someday," he said. "But nobody has asked."
