The email arrived at 11:47 p.m. on a Thursday in February, sent from a personal account to a secure channel monitored by The Editorial. The sender, a mid-level official at the European Commission's Directorate-General for Communications Networks, wrote a single line above the attachment: "They told us not to ask questions. I can't do that anymore."
The attachment was a spreadsheet — 247 rows detailing artificial intelligence systems that had received conformity assessments under the European Union's landmark AI Act between August 2025 and January 2026. The spreadsheet tracked which systems had undergone independent third-party safety evaluations before approval. Of the 247 high-risk systems listed, 189 were marked with a code the official later explained: "NE/IC" — no evaluation, internal clearance.
The EU AI Act, which entered full force in August 2025, was designed to be the world's most comprehensive framework for regulating artificial intelligence. High-risk systems — those used in hiring, law enforcement, border control, and critical infrastructure — were supposed to undergo rigorous conformity assessments before deployment. For the most dangerous categories, independent third-party audits were mandatory. The law was explicit: no shortcuts.
But the spreadsheet told a different story. And in the weeks that followed, through interviews with three current officials at the Commission, two former advisors to the AI Office, and a review of more than four hundred pages of internal communications, a picture emerged of a regulatory system that had quietly abandoned its own standards almost as soon as they took effect.
The Memo Nobody Was Supposed to See
On September 12, 2025 — six weeks after the AI Act's full provisions took effect — a confidential guidance memo circulated among senior officials at the European AI Office. The memo, marked "For Internal Distribution Only," was authored by the office's deputy director for enforcement and addressed to national market surveillance authorities across all 27 member states.
The document, reviewed in full by The Editorial, instructed authorities to apply what it called "transitional flexibility" to conformity assessments for high-risk AI systems. "Given capacity constraints among accredited notified bodies," the memo stated, "member states may accept self-declarations of conformity from providers who can demonstrate good faith efforts to engage third-party auditors, pending availability."
The language was bureaucratic, but the meaning was unmistakable: companies deploying high-risk AI could bypass mandatory independent evaluations by simply claiming they had tried to book an auditor.
NOTIFIED BODY CAPACITY GAP
As of January 2026, only 14 organizations across the EU had been accredited as "notified bodies" qualified to conduct AI conformity assessments under the AI Act — against an estimated demand for more than 2,000 audits annually, a workload of roughly 140 assessments per body per year even if demand were spread evenly. The European Artificial Intelligence Board's internal projections, obtained by The Editorial, show the earliest date for adequate capacity is 2029.
Source: European Artificial Intelligence Board, Internal Capacity Assessment, January 2026

A former advisor to the AI Office, who left in December 2025, described the September memo as "the moment the Act became theater." Speaking on condition of anonymity because of ongoing consulting relationships with Commission clients, the former advisor said: "Everyone in the building knew we didn't have the infrastructure. The question was whether to admit it publicly or pretend everything was working. They chose pretense."
What the Documents Show
The spreadsheet provided by the Commission official was not an anomaly. Cross-referenced against the EU's public AI database — where providers are required to register high-risk systems — the data reveals a systematic pattern.
Of the 247 high-risk AI systems registered under the EU AI Act between August 2025 and January 2026, 189 received conformity clearance through self-declaration rather than third-party evaluation.
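For readers who want to retrace the methodology: the cross-check described above amounts to joining the leaked spreadsheet against the public registry on a shared system identifier, then counting the entries coded "NE/IC." The sketch below is a hypothetical reconstruction, not The Editorial's actual pipeline; the file names and column names (system_id, assessment_code, sector) are assumptions for illustration.

```python
# Hypothetical reconstruction of the cross-referencing step described above.
# File names and column names are illustrative assumptions, not the actual
# layout of the Commission spreadsheet or the EU AI database export.
import pandas as pd

# The leaked Commission spreadsheet: one row per high-risk system.
leaked = pd.read_csv("commission_spreadsheet.csv")           # assumed filename
# Public registrations exported from the EU AI database.
registry = pd.read_csv("eu_ai_database_registrations.csv")   # assumed filename

# Join the two sources on a shared system identifier (assumed to exist in both).
merged = leaked.merge(registry, on="system_id", how="inner")

# "NE/IC" = no evaluation, internal clearance, per the official's explanation.
self_declared = merged[merged["assessment_code"] == "NE/IC"]

print(f"{len(merged)} registered high-risk systems matched")
print(f"{len(self_declared)} cleared without third-party evaluation")
# Break the self-declared systems down by sector, largest first.
print(self_declared.groupby("sector")["system_id"].count().sort_values(ascending=False))
```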
The systems approved through this back channel span sectors the AI Act specifically designated as requiring the strictest oversight. They include: a facial recognition tool deployed by Greek border authorities at the Evros crossing; an automated hiring algorithm used by a major German logistics company with 340,000 employees; a predictive policing system purchased by the Belgian Federal Police; and a medical triage AI installed in seventeen Italian hospitals.
In each case, according to internal correspondence reviewed by The Editorial, the providers submitted documentation stating they had "initiated contact" with accredited notified bodies but had been unable to secure audit appointments within the required timeframe. Under the September guidance memo, this was sufficient.
"Initiated contact" could mean as little as sending a single inquiry email. None of the correspondence reviewed by The Editorial indicated any verification of these claims by regulatory authorities.
The Capacity Problem No One Mentioned
The shortage of accredited auditors did not come as a surprise to Brussels. Documents obtained through freedom of information requests show that the Commission's own impact assessment, completed in 2023 before the AI Act's final passage, warned of a "significant gap" between the number of qualified conformity assessment bodies and the anticipated demand.
The 2023 assessment estimated that full implementation would require at least 35 accredited notified bodies by August 2025. But the accreditation process itself became a bottleneck. To evaluate high-risk AI systems, notified bodies must demonstrate expertise in machine learning, data governance, and sector-specific applications — a combination of skills that few existing audit firms possessed.
ACCREDITATION TIMELINE FAILURES
Of the 14 notified bodies currently accredited under the AI Act, nine received their certification after September 2025 — meaning only five were operational when the law's high-risk provisions took full effect. Four member states, including Poland and Hungary, have zero accredited bodies within their borders.
Source: European Accreditation, AI Act Implementation Status Report, February 2026

Two current Commission officials, who spoke on condition of anonymity because they were not authorized to discuss enforcement matters publicly, described intense pressure from industry groups in the months before the August 2025 deadline. "The message from member states was clear," one official said. "If we held the line on third-party audits, major deployments would have to halt. Nobody wanted to be responsible for that."
The second official was more direct: "We were told to find a way to make it work. 'Transitional flexibility' was that way."
The Official Response
Presented with detailed questions about the September 2025 memo, the conformity assessment data, and the capacity shortfall, a spokesperson for the European Commission provided a written statement that did not address the specific documents.
"The European AI Office is working closely with member state authorities to ensure effective implementation of the AI Act," the statement read. "Conformity assessment procedures are proceeding in accordance with the Regulation's provisions, including appropriate transitional measures designed to ensure proportionate enforcement while the ecosystem matures."
The statement did not acknowledge the existence of the September memo or explain what "appropriate transitional measures" entailed. When asked for clarification on whether self-declarations had been accepted in lieu of third-party audits for high-risk systems, the spokesperson declined to comment further, citing "ongoing implementation processes."
DigitalEurope, the industry association representing major technology companies including Microsoft, Google, and SAP, did not respond to requests for comment. A spokesperson for BusinessEurope, the continent's largest employers' federation, said in a statement that "the AI Act's implementation has required pragmatic adjustments to ensure Europe's competitiveness is not undermined by unrealistic timelines."
The Systems Already Deployed
Among the 189 systems that bypassed independent evaluation is ARIA-7, a facial recognition platform manufactured by a French defense contractor and deployed at Greek border crossings in October 2025. According to the conformity documentation reviewed by The Editorial, the system's provider submitted a self-declaration stating that the AI had been tested internally for bias and accuracy.
No external party has verified those claims. The system is currently operational, processing an estimated 4,000 faces daily at the Evros crossing — one of Europe's busiest irregular migration routes.
"The entire premise of the AI Act was that certain systems pose risks too high to be left to self-policing," said Fanny Hidvegi, Europe policy director at Access Now, the digital rights organization. "If those systems are being deployed with nothing more than a promise from the company that built them, then what exactly did we spend five years negotiating?"
Selected deployments registered under the EU AI Act (August 2025–January 2026)
| System | Sector | Member State | Conformity Method |
|---|---|---|---|
| ARIA-7 Facial Recognition | Border Control | Greece | Self-declaration |
| TalentScan Pro | Employment/HR | Germany | Self-declaration |
| PredPol-EU | Law Enforcement | Belgium | Self-declaration |
| TriageAI Medical | Healthcare | Italy | Self-declaration |
| CreditScore AI v4 | Financial Services | Netherlands | Self-declaration |
Source: Internal Commission spreadsheet and EU AI Database registration records, reviewed by The Editorial
The Editorial made multiple attempts to contact the providers of the systems listed above. The French defense contractor that manufactures ARIA-7 declined to comment. A spokesperson for the German logistics company using TalentScan Pro said the company "complies with all applicable EU regulations" but would not address specific questions about the conformity assessment process.
What Happens Next
The European Parliament's Committee on Internal Market and Consumer Protection is scheduled to hold oversight hearings on AI Act implementation in May 2026. Three members of the committee, contacted by The Editorial, said they were unaware of the September guidance memo or the scale of self-declared conformity assessments until presented with the documents.
"If this is accurate, we have a fundamental enforcement crisis," said Brando Benifei, the Italian MEP who served as co-rapporteur on the AI Act. "The law is clear. High-risk systems require independent assessment. If the Commission has created a workaround that makes that optional, Parliament needs to know."
Benifei said he would request a formal explanation from the Commission and consider calling emergency hearings before the scheduled May session.
The official who first contacted The Editorial in February has since stopped responding to messages. In their final communication, sent in late March, they wrote: "I gave you what I could. The system will either fix itself or it won't. I couldn't keep watching it pretend."
The 189 systems continue to operate. None have been recalled. No enforcement actions have been announced. The EU AI Act remains, officially, a model for the world.
