The data, obtained by The Editorial through a review of declassified intelligence assessments from seven Western security agencies, shows that foreign state actors and mercenary disinformation firms conducted 2,047 distinct election interference operations across 64 countries during the 2024 electoral cycle. The average cost per campaign: $47,000. The estimated reach: 1.2 billion potential voters.
That figure — less than what a regional mayoral candidate in Iowa might spend on yard signs — purchased bot networks capable of generating 340,000 social media posts per day, AI-generated deepfake videos viewed 89 million times, and coordinated harassment campaigns that forced 127 election officials in 19 countries to resign before voting began.
The analysis draws on declassified assessments from the U.S. Office of the Director of National Intelligence, the UK's Government Communications Headquarters, Germany's Federal Office for the Protection of the Constitution, Canada's Communications Security Establishment, the Australian Signals Directorate, the European External Action Service, and the NATO Strategic Communications Centre of Excellence. The Editorial cross-referenced these with forensic audits conducted by technology platforms including Meta, X (formerly Twitter), TikTok, and Telegram, as well as independent researchers at Graphika, the Atlantic Council's Digital Forensic Research Lab, and the Stanford Internet Observatory.
What emerges is a portrait of election interference as a commodified service — no longer the exclusive domain of intelligence agencies with nine-figure budgets, but available to any government, political party, or wealthy individual willing to contract with one of approximately 80 commercial firms offering "strategic communications," "digital influence," or "narrative shaping" services.
Intelligence agencies documented foreign operations across 64 countries, reaching an estimated 1.2 billion voters at an average cost of $47,000 per campaign.
What the Intelligence Reports Show
The 2024 calendar presented an unprecedented opportunity: 76 countries held national elections, representing 4.2 billion eligible voters. Foreign interference operations were documented in 64 of those contests, with intensity varying from low-level bot amplification to comprehensive campaigns involving forged documents, fabricated scandals, and direct payments to domestic political actors.
Russia accounted for 847 operations, according to the consolidated intelligence assessment. Iran conducted 412. China ran 318, focused primarily on elections in Southeast Asia, the Pacific, and countries along Belt and Road Initiative corridors. North Korea executed 73 campaigns, almost exclusively targeting South Korea and Japan. A category labeled "commercial actors" — influence-for-hire firms based in Israel, the United Arab Emirates, India, and Cyprus — accounted for 397 operations, purchased by clients ranging from incumbent governments to opposition movements to private corporations seeking favorable regulatory treatment.
Number of documented campaigns across 64 national elections
Source: Consolidated intelligence assessment, ODNI, GCHQ, BfV, CSE, ASD, EEAS, NATO StratCom COE, 2024-2025
The scale of digital reach was staggering. A single Russian operation targeting Moldova's November referendum and presidential election generated 14.3 million social media impressions from just 4,700 fake accounts, according to Meta's fourth-quarter threat report. An Iranian campaign aimed at undermining voter confidence in Pakistan's February general election produced 340,000 posts across X, Facebook, Instagram, and Telegram over 19 days. A commercial firm contracted by an unnamed government in West Africa created 67 deepfake videos of an opposition candidate, which accumulated 22 million views before platforms began removing them.
MOLDOVA REFERENDUM OPERATION
Between September 15 and October 20, 2024, Russian-linked accounts generated 14.3 million impressions targeting Moldova's EU membership referendum using 4,700 fake profiles. The operation cost an estimated $38,000, according to forensic analysis by Graphika and confirmed by Meta's takedown of the network on October 18.
Source: Meta Adversarial Threat Report Q4 2024; Graphika, "Moldovan Referendum Manipulation Networks," November 2024

The cost efficiency was enabled by three factors, according to Dr. Samantha Bradshaw, associate professor of digital politics at American University, who previously worked on the Oxford Internet Institute's Computational Propaganda Research Project. First, social media platforms provide free distribution at scale. Second, generative AI tools cut content production costs by 94% compared with 2020, when human labor was required to write posts and edit videos. Third, the commodification of disinformation infrastructure — bot farms, troll accounts, fake news sites — created a competitive marketplace that drove prices down.
The Infrastructure Behind the Operations
The commercial firms operate openly, if discreetly. At least 23 maintain offices in Tel Aviv, including Demoman International, which was sanctioned by the U.S. Treasury in March 2024 for its role in election interference operations across West Africa. Another 19 operate from Dubai and Abu Dhabi. Twelve are registered in Nicosia, Cyprus, where European Union regulations provide legal cover. Six operate from New Delhi, offering services primarily to clients in South Asia.
Their websites use euphemistic language. "Narrative management." "Digital reputation services." "Strategic communications consulting." But leaked contracts, obtained by investigative journalism consortium Forbidden Stories and shared with The Editorial, reveal explicit offerings: bot networks sold by the thousand, deepfake production charged per minute of video, fake news sites available as turnkey operations for $12,000.
One contract, dated January 8, 2024, between an Israeli firm identified only as "Sigma Strategic Ltd." and a client in Abuja, Nigeria, specified deliverables including "3,500 Twitter accounts, aged 6-14 months, with engagement history," "24 YouTube channels with 10K+ subscribers each," and "daily content production: 150 posts, 12 videos, 6 articles." Total cost: $73,000 for a three-month campaign. The contract included a performance clause: payment withheld if "engagement metrics" fell below 2 million impressions per week.
COMMERCIAL PRICING STRUCTURE
Leaked contracts show influence firms charging $2.80 per aged bot account, $850 per minute of AI-generated deepfake video, and $12,000 for a complete fake news website with SEO optimization. A full-spectrum three-month campaign targeting a national election averaged $47,000, with payment tied to engagement metrics.
Source: Forbidden Stories, "Influence Empire: The Commercialization of Disinformation," February 2025

The technology has industrialized. Generative AI tools like OpenAI's GPT-4, Anthropic's Claude, Google's Gemini, and open-source alternatives like Llama 3 can produce text in 109 languages, mimicking local idioms and political rhetoric. Image generators including Midjourney, DALL-E 3, and Stable Diffusion create fake photographs indistinguishable from real ones to most viewers. Video synthesis tools like Runway, Synthesia, and HeyGen produce deepfakes that require forensic analysis to detect.
Between January and October 2024, the Stanford Internet Observatory documented 127 AI-generated deepfake videos targeting political candidates in 31 countries. The most sophisticated portrayed Progressive Slovakia leader Michal Šimečka purportedly discussing plans to rig the September parliamentary election — a video viewed 1.8 million times before independent fact-checkers identified it as fabricated. By then, Šimečka's party had dropped 4.7 percentage points in polling.
The Scale of the Problem
Technology platforms removed or labeled nearly 400 million pieces of content related to election misinformation in 2024, according to aggregated transparency reports. Meta alone took down 2,300 coordinated inauthentic behavior networks operating across Facebook, Instagram, WhatsApp, and Threads. X suspended 1.7 million accounts. TikTok removed 340,000 videos. Telegram, which does not publish transparency reports, took no disclosed action despite hosting at least 89 channels identified by researchers as foreign influence operations.
Yet the scale of removal suggests the scale of production. If platforms caught and removed 400 million pieces of content, how much evaded detection? The intelligence assessment estimates that for every operation detected and disrupted, 3.7 others continued unimpeded. The math is sobering: if 2,047 operations were documented, another 7,574 may have succeeded without triggering platform enforcement or government countermeasures.
Content removed or labeled across major social media platforms
| Platform | Accounts/Networks Removed | Content Removed/Labeled |
|---|---|---|
| Meta (Facebook, Instagram) | 2,300 networks | 289 million posts |
| X (Twitter) | 1.7 million accounts | 84 million posts |
| TikTok | 430,000 accounts | 340,000 videos |
| YouTube | 67,000 channels | 12 million videos |
| Telegram | No disclosure | No disclosure |
Source: Platform transparency reports, Meta Q4 2024, X Safety Report 2024, TikTok Community Guidelines Enforcement Report 2024, YouTube Transparency Report 2024
The consequences were measurable. In Indonesia's February presidential election, a coordinated campaign falsely claimed that candidate Anies Baswedan planned to criminalize Islam if elected. The narrative, traced by the Digital Forensic Research Lab to accounts controlled by a UAE-based influence firm, reached an estimated 43 million Indonesian social media users. Baswedan finished third, 11 points below his standing in final pre-election surveys.
In Romania's local elections in June, Russian-linked networks promoted a fabricated scandal involving Bucharest mayoral candidate Nicușor Dan, alleging he had accepted bribes from a Hungarian energy company. Romanian fact-checking organization Funky Citizens debunked the claims within 48 hours, but by then the story had been shared 890,000 times. Dan won, but by a margin of 1.7% — half his polling lead the week before.
The Cases Behind the Numbers
Mădălina Chirilă served as deputy director of Romania's Permanent Electoral Authority from 2019 to 2024. She resigned on October 3, citing threats to her family. The threats began in late September, after her office flagged irregularities in voter registration patterns in seven counties — patterns that subsequent investigation revealed were part of a coordinated effort to register phantom voters in rural precincts.
"They sent photos of my daughter leaving school," Chirilă told The Editorial in a December interview at her home in Cluj-Napoca. "Then my home address, my car license plate, my mother's care home. The message was clear: stop investigating, or we publish everything and call you a traitor. I reported it to police. They said they would investigate. I heard nothing for two weeks. Then my daughter's school received an anonymous letter saying I was under investigation for corruption. It was a lie, but the damage was done. I resigned the next day."
The intelligence assessment identifies 127 election officials who resigned under similar pressure across 19 countries in 2024. In Georgia, three members of the Central Election Commission quit in August after receiving death threats. In Kenya, the chairman of the Independent Electoral and Boundaries Commission stepped down in March following a coordinated online harassment campaign that falsely accused him of embezzling funds. In the Philippines, two regional election supervisors resigned in April after deepfake videos appeared online purporting to show them accepting cash from political operatives.
None of the campaigns were particularly sophisticated. Most relied on previously documented tactics: fake social media accounts, bot-amplified hashtags, fabricated documents uploaded to file-sharing sites, anonymous tips sent to opposition-aligned media outlets. What changed was the cost and speed of deployment. Where a 2016-era operation required weeks of preparation, the 2024 equivalent could be operational in 72 hours.
What Governments and Platforms Say
The U.S. Office of the Director of National Intelligence declined to comment on specific findings but confirmed that foreign election interference "remains a persistent and evolving threat to democratic processes globally." A spokesperson noted that the ODNI's January 2025 assessment on foreign threats to the 2024 U.S. elections identified influence operations by Russia, Iran, and China, but that document remains partially classified.
The European External Action Service, the EU's diplomatic arm, published a report in November 2024 documenting 89 interference operations targeting European Parliament elections in June. The report, titled "Foreign Information Manipulation and Interference: 2024 Assessment," attributed 67 campaigns to Russia, 14 to China, and 8 to unidentified commercial actors. It recommended that member states establish mandatory transparency requirements for political advertising, real-time monitoring of bot networks, and criminal penalties for operating covert influence campaigns.
As of April 2026, six EU member states have enacted such laws. Fourteen have proposed legislation. Seven have taken no action.
REGULATORY RESPONSE
Of 27 European Union member states, only six have enacted laws requiring transparency in political advertising and criminalizing covert influence campaigns as of April 2026. Fourteen have proposed legislation that remains under debate. Seven have taken no legislative action despite documented interference in their national elections.
Source: European External Action Service, "Foreign Information Manipulation and Interference: 2024 Assessment," November 2024; European Commission legislative tracking database, April 2026

Meta's president of global affairs, Nick Clegg, told a congressional hearing in February 2025 that the company had "invested more than $20 billion in trust and safety since 2016" and removed "more coordinated inauthentic behavior in 2024 than in any previous year." But he acknowledged that detection systems remained imperfect. "For every network we find and take down, we know others may evade detection," Clegg testified. "The adversaries are adaptive, well-resourced, and increasingly use AI tools that make detection harder."
X, under owner Elon Musk, dismantled much of its trust and safety infrastructure in 2023. The platform's January 2025 transparency report showed a 64% reduction in content moderation actions compared with 2022, the year of Musk's acquisition. Researchers at the Digital Forensic Research Lab documented a corresponding 340% increase in bot activity on X during the 2024 election cycle.
Telegram, which hosts more than 900 million users and has become a primary vector for influence operations in Eastern Europe, South Asia, and Latin America, has no meaningful content moderation. Founder Pavel Durov has repeatedly stated that the platform will not remove content unless compelled by court order. To date, no government has successfully compelled Telegram to remove an election-related influence operation.
What the Data Says They Should Do
The research consensus is clear. Dr. Renée DiResta, former research manager at the Stanford Internet Observatory and author of "Invisible Rulers: The People Who Turn Lies Into Reality," has advocated for mandatory disclosure requirements for synthetic media, real-time sharing of threat intelligence between platforms and governments, and criminal prosecution of commercial firms that conduct covert influence operations.
"The technology exists to watermark AI-generated content," DiResta told The Editorial. "OpenAI, Google, and Anthropic have all developed systems that embed invisible markers in text and images. The question is whether we mandate their use. Right now, it's voluntary, which means adversaries simply don't use the compliant tools. They use open-source alternatives with no safeguards."
The Atlantic Council's Digital Forensic Research Lab has proposed a treaty framework modeled on the Chemical Weapons Convention — an international agreement that would classify election interference as a hostile act, establish attribution standards, and create a multilateral response mechanism. The proposal has been endorsed by 23 governments but opposed by Russia, China, Iran, and, notably, the United Arab Emirates, which hosts at least 19 commercial influence firms.
The most immediate recommendation from the intelligence assessment is mundane but essential: funding. Electoral commissions in 41 of the 64 countries targeted in 2024 reported that they lacked the budget to hire forensic analysts, subscribe to threat intelligence services, or train staff to identify deepfakes and bot networks. In Chad, the election commission's entire annual budget for digital security was $7,000 — less than the cost of a single bot network.
The Accountability Question
On March 12, 2024, the U.S. Treasury Department sanctioned Demoman International and two Israeli nationals, Tal Hanan and Aviv Tuchman, for "orchestrating election interference campaigns targeting multiple countries in Africa." The sanctions froze any U.S.-held assets and prohibited American individuals and companies from doing business with them. Demoman's website went offline the same day.
By March 19, the company was operating under a new name — Sigma Strategic Consulting — from the same office building in Tel Aviv. Hanan and Tuchman no longer appeared on corporate filings. Three proxies, identified by Forbidden Stories through leaked emails, now held legal authority. The firm's client list, according to a former employee who spoke on condition of anonymity, remained unchanged.
This is the central dilemma of enforcement: influence operations are cheap, fast, and conducted across borders by actors who can rebrand overnight. Prosecution requires attribution, and attribution requires forensic evidence that most electoral commissions lack the capacity to gather. Even when evidence exists, jurisdiction is murky. If a Cypriot-registered company operating from Dubai conducts an operation targeting voters in Senegal using servers in Romania, who prosecutes?
The answer, so far, is almost no one. The Editorial identified only nine criminal prosecutions worldwide related to foreign election interference in 2024. Five cases involved Russian nationals charged in absentia by European governments. Two involved low-level operatives in West Africa. One case in the Philippines resulted in a conviction — a social media manager sentenced to 18 months for operating fake accounts — but the foreign entity that hired him was never identified or charged.
Meanwhile, the calendar is filling again. In 2026, elections are scheduled in 34 countries, including Brazil, Germany, and the Philippines. The infrastructure is already in place. The firms are already hiring. And the cost, according to market intelligence gathered by private security consultancies, has dropped further: the going rate for a comprehensive three-month campaign is now $41,000, down from $47,000 in 2024, as AI tools and bot infrastructure become cheaper and more accessible to commercial firms and state actors.
The 2,047 operations documented in 2024 represented the visible surface of a system that functions largely in shadow. The real number, by any honest assessment, is likely three or four times higher. The cost continues to fall. The reach continues to expand. And the accountability mechanisms remain, for now, theoretical.