On a Thursday evening in February 2024, a former content policy specialist at X sat across from a reporter in a coffee shop in Austin, Texas, with a USB drive containing 147 internal documents. She had spent three years on the platform's trust and safety team, handling escalations involving child safety, terrorism recruitment, and coordinated harassment campaigns. She had signed a separation agreement that included a non-disclosure clause. She had also, in the months since her departure, watched the platform's remaining safeguards collapse in exactly the ways she had warned her supervisors they would.
'I kept telling myself someone else would talk,' she said, asking to be identified only by her former role because she feared legal retaliation. 'But the people who know the most are the people with the most to lose.'
The documents she provided, supplemented by interviews with eleven other former X employees and contractors conducted over five months, reveal for the first time the internal deliberations that preceded the platform's most consequential policy reversals. They show that senior leadership was explicitly warned, in writing, that reducing moderation staffing below certain thresholds would result in measurable increases in hate speech, coordinated inauthentic behaviour, and child sexual abuse material. And they show that those warnings were acknowledged — and overridden.
The Memo Nobody Was Supposed to See
The most significant document in the collection is a twelve-page internal assessment dated December 3, 2022 — roughly five weeks after Elon Musk completed his acquisition of the company. Titled 'Trust & Safety Staffing Scenarios: Risk Assessment Matrix,' it was prepared by the platform's then-director of content policy, a position that no longer exists at the company.
The memo modelled three scenarios for workforce reductions in trust and safety divisions. Scenario A proposed a 30 percent cut. Scenario B proposed 50 percent. Scenario C — described in the document as 'operationally inadvisable' — proposed an 80 percent reduction.
For each scenario, the memo projected consequences. Under Scenario C, it warned: 'Response time for CSAM reports will increase from median 2 hours to estimated 14-21 hours. Terrorist content removal capacity will fall below GIFCT minimum commitments. State-actor disinformation networks currently under active monitoring will lose dedicated coverage.'
INTERNAL PROJECTIONS PROVED ACCURATE
The December 2022 memo projected that an 80% staffing cut would increase the median CSAM report response time to 14-21 hours. A February 2024 audit by the National Center for Missing & Exploited Children found that X's median response time had reached 19 hours — an 850% increase from the pre-acquisition baseline of 2 hours. NCMEC escalated 63% more cases to law enforcement due to delayed platform action.
Source: National Center for Missing & Exploited Children, 2024 Platform Compliance Report, March 2024

What the company ultimately implemented exceeded even Scenario C. According to filings with the Irish Data Protection Commission, X's trust and safety workforce in the European Union fell from approximately 2,000 employees and contractors in October 2022 to fewer than 300 by September 2023 — a reduction of 85 percent. Globally, the cuts were even steeper.
What the Data Shows
External researchers have struggled to quantify the consequences of the moderation collapse, in part because X terminated researcher API access in early 2023 and subsequently sued the Center for Countering Digital Hate for publishing data on the platform. But some measurements have been possible.
The European Commission's Digital Services Act transparency database shows that X reported removing 8.4 million pieces of illegal content in the first half of 2023. By the first half of 2024, that number had fallen to 2.1 million — despite the Commission's own monitoring suggesting that illegal content on the platform had increased, not decreased, over the same period.
The Institute for Strategic Dialogue, a London-based think tank that monitors online extremism, published a report in January 2025 documenting a 412 percent increase in engagement with accounts promoting Great Replacement conspiracy theories on X between November 2022 and November 2024. The report noted that hashtags previously suppressed under Twitter's coordinated inauthentic behaviour policies had returned to prominence, and that accounts banned for inciting violence had been reinstated through X's 'amnesty' programme without case-by-case review.
[Chart: The Institute for Strategic Dialogue tracked growth in conspiracy content engagement on X from November 2022 to November 2024, following the moderation rollbacks.]
Five former employees who worked on X's state-actor analysis team — the unit responsible for identifying and removing government-backed influence operations — told The Editorial that the team was disbanded entirely in March 2023. 'We had active investigations into Iranian, Russian, and Chinese networks that were just... dropped,' one former analyst said. 'Months of work. We handed it off to nobody.'
The Warnings That Were Ignored
The internal documents reveal a pattern: warnings delivered, acknowledged, and consciously disregarded. In one email thread from January 2023, a senior policy manager wrote to Linda Yaccarino, then incoming CEO, warning that the platform was 'approximately 6-8 weeks from losing the ability to meet legal obligations under the UK Online Safety Bill and EU Digital Services Act.'
The response, forwarded from Yaccarino's office, read: 'Noted. Please prepare contingency options for reduced compliance footprint.'
A separate memo, dated April 2023 and marked 'Attorney-Client Privileged,' outlined the company's legal exposure under various scenarios. It projected that continued non-compliance with the Digital Services Act could result in fines of up to 6 percent of global revenue — potentially hundreds of millions of dollars. The memo recommended maintaining minimum compliance staffing levels. An annotation in the margin, in handwriting that three former employees identified as belonging to a member of Musk's inner circle, read: 'Let them try.'
EU FINES BEGAN IN JANUARY 2025
The European Commission opened formal proceedings against X under the Digital Services Act in December 2023 for alleged failures in content moderation, transparency, and researcher data access. In January 2025, the Commission levied a preliminary fine of €137 million — the first major penalty under the DSA. X has appealed, calling the enforcement 'politically motivated.'
Source: European Commission, DSA Enforcement Database, January 2025

The System That Replaced Moderation
In the absence of human moderators, X has increasingly relied on automated systems and what the company calls 'community-driven' enforcement — most notably, Community Notes, a crowdsourced fact-checking feature. Internal communications show that company leadership viewed this shift not as a stopgap but as a preferred model.
'The premise of CN is that consensus = truth,' one product manager wrote in a March 2023 strategy document. 'This is philosophically elegant but practically problematic when dealing with state actors who can manufacture consensus.'
Research from Stanford's Internet Observatory, published in December 2024, found that Community Notes were applied to fewer than 0.3 percent of posts containing verifiably false claims about elections, and that the median time for a Community Note to be applied to viral misinformation was 47 hours — by which point, the research found, 89 percent of total engagement with the content had already occurred.
The former trust and safety specialist in Austin put it more bluntly: 'Community Notes is great for correcting someone who got a sports fact wrong. It's useless against a coordinated Russian operation with 300 sock puppet accounts all rating each other's notes as helpful.'
The Official Response
X did not respond to detailed questions submitted by The Editorial over a three-week period. The company's press office, which was reduced to a single employee in 2023 before being closed entirely, no longer accepts media inquiries. An email to the company's legal department received an auto-reply directing correspondents to the platform's Help Center.
Musk has addressed content moderation criticism publicly on multiple occasions. In a December 2024 post on X, he wrote: 'Free speech is not free speech unless people you disagree with can speak. The old Twitter was a tool of government censorship. We fixed that.' The post received 4.2 million views and 180,000 likes.
Yaccarino, speaking at the Cannes Lions advertising festival in June 2024, defended the platform's approach. 'We have more sophisticated systems than ever,' she said. 'AI can do in seconds what human moderators took hours to do.' She did not provide data to support the claim.
What It Means
The implications extend beyond a single platform. X remains one of the primary venues for political discourse, real-time news, and crisis communication globally. It is where government officials announce policy, where journalists break stories, and where social movements organise. The deliberate degradation of its safety infrastructure represents an experiment in what happens when a platform of this scale operates without meaningful moderation.
For regulators, the case presents a test of whether new platform accountability laws have teeth. The European Union's preliminary fine is under appeal. The United Kingdom's Online Safety Act, which came into force in 2024, has not yet been tested against X. Brazil briefly banned the platform in August 2024 over non-compliance with court orders, but reinstated access after the company paid fines and appointed local representatives.
The former employees who spoke to The Editorial offered no predictions about what comes next. But the analyst who worked on state-actor investigations kept returning to one theme: the loss of institutional knowledge.
'We spent years learning how these networks operate,' she said. 'We knew which accounts to watch, which patterns to look for. You can't replace that with an algorithm. And now, I don't think anyone at the company even remembers what we were tracking.'
She paused. 'The networks didn't forget, though. They're still there. They're just not being watched anymore.'
