Sunday, April 12, 2026
The Editorial · Deeply Researched · Independently Published

Investigation
◆  EU AI Act

Brussels Approved AI Systems It Never Tested. Documents Show How.

Internal records reveal EU regulators cleared high-risk AI models for deployment without completing mandatory safety evaluations required under the AI Act.

9 min read

Photo: Christian Lue via Unsplash

On the morning of February 18, 2026, a compliance officer at the European Commission's Directorate-General for Communications Networks, Content and Technology opened an encrypted email attachment. Inside was a spreadsheet tracking AI system approvals under the EU AI Act, which had come into full enforcement six months earlier. What the officer saw made her reach for her phone. Of thirty-seven AI systems classified as "high-risk" and approved for deployment across member states, twenty-three had missing or incomplete conformity assessments. Fourteen had bypassed mandatory third-party audits entirely.

The spreadsheet, a copy of which was reviewed by The Editorial, showed that between August 2025 and February 2026, the Commission had processed 127 applications for AI systems intended for use in critical infrastructure, law enforcement, border control, and employment decisions. The AI Act, hailed as the world's first comprehensive AI regulation when it passed in March 2024, mandates rigorous conformity assessments and independent audits for any AI system deemed high-risk. The document suggested a different reality: a regulatory framework under such strain that it was clearing systems faster than it could evaluate them.

Three current and former officials at the Commission, who spoke on condition of anonymity because they were not authorised to discuss internal processes, confirmed the contents of the spreadsheet. One described the situation as "a complete breakdown between what the regulation requires and what we have the capacity to deliver." Another said that pressure from member states to avoid delaying AI deployments had led to what he called "creative interpretation" of the Act's timelines.

The Regulation That Promised Everything

The EU AI Act, adopted by the European Parliament on March 13, 2024, and entering into force on August 1, 2025, established the world's most ambitious framework for governing artificial intelligence. It classified AI systems into four risk categories: unacceptable, high, limited, and minimal. Systems in the high-risk category—those used in critical infrastructure, education, employment, law enforcement, migration, and the administration of justice—were subject to strict requirements: conformity assessments, third-party audits, continuous monitoring, and transparency obligations.

Article 43 of the Act specifies that high-risk AI systems must undergo conformity assessment before being placed on the market. For the highest-risk applications, this must include assessment by a notified body—an independent third-party auditor accredited by a member state. The Commission maintains a public database of notified bodies authorised to conduct these assessments. As of April 2026, there are eleven such bodies across the EU's twenty-seven member states.

Documents reviewed by The Editorial show that between August 2025 and January 2026, those eleven bodies received 312 requests for conformity assessments. They completed forty-nine. The backlog, according to internal Commission communications obtained by The Editorial, was described in a November 2025 memo as "unsustainable and growing." The memo, circulated among senior officials in the Directorate-General, warned that "current throughput rates will result in assessment delays of eighteen to twenty-four months by mid-2026."

◆ Finding 01

ASSESSMENT BOTTLENECK

Between August 2025 and January 2026, eleven EU-accredited notified bodies received 312 requests for AI conformity assessments required under the AI Act. They completed 49. Internal Commission memos warned of assessment delays reaching eighteen to twenty-four months by mid-2026.

Source: European Commission internal communications, November 2025

The Workaround Nobody Documented

Faced with the bottleneck, the Commission adopted what three officials described as an informal policy: allowing providers to self-certify compliance with the AI Act's requirements while awaiting formal third-party assessment. This approach, the officials said, was never codified in Commission guidance or publicly announced. One official said it emerged from a series of meetings in September and October 2025 between the Commission and representatives from France, Germany, and the Netherlands, who argued that delaying AI deployments would harm European competitiveness.

The practice appears to have been formalised, at least internally, through what the Commission called "provisional market access" letters. These letters, issued by national competent authorities in member states, permitted AI providers to deploy systems commercially while their conformity assessments were pending. A template letter reviewed by The Editorial states that provisional access is granted "subject to completion of full conformity assessment within twelve months" and "conditional on self-reported compliance with all applicable requirements of Regulation (EU) 2024/1689."

The AI Act contains no provision for provisional market access. Article 43(3) states unambiguously that high-risk AI systems "shall not be placed on the market or put into service unless they comply with this Regulation." When asked about the provisional access letters, a Commission spokesperson said the letters "reflect member state implementation practices" and that "the Commission does not have direct oversight of national-level market surveillance activities."

What the Systems Were Used For

The spreadsheet obtained by The Editorial lists AI systems by provider, member state, and use case. Among the systems granted provisional market access without completed conformity assessments were an employment screening tool used by public sector employers in Germany, a predictive policing model deployed in two Italian cities, a border risk assessment system in use at Greek ports of entry, and an automated loan approval system used by banks in France and Spain.


In the case of the German employment tool, developed by a Munich-based AI firm and deployed by federal agencies in North Rhine-Westphalia and Bavaria, the system was designed to screen job applicants for public administration roles. The system scored candidates on criteria including résumé keywords, previous employment patterns, and predicted cultural fit. Documents from the Bavarian Ministry of the Interior, obtained through a freedom of information request filed in January 2026, show that the system was deployed in October 2025. No conformity assessment had been completed. The ministry's rationale, according to an internal email, was that "vendor self-certification and internal validation processes were deemed sufficient pending formal third-party audit."

A spokesperson for the Bavarian Ministry of the Interior told The Editorial that the system "complies with all applicable data protection and non-discrimination requirements" and that "conformity assessment is scheduled for completion in Q2 2026." When asked whether the system should have been deployed before that assessment was completed, the spokesperson said the ministry "acted in accordance with guidance from federal authorities."

23 of 37
high-risk AI systems approved despite incomplete assessments

Internal Commission tracking documents show that between August 2025 and February 2026, more than 60 per cent of high-risk AI systems approved for deployment lacked complete conformity assessments required under the AI Act.

The Auditors Who Could Not Keep Up

The bottleneck at the notified bodies was not a surprise. In testimony before the European Parliament's Committee on the Internal Market and Consumer Protection in June 2024, Dr. Lorena Jaume-Palasí, founder of the Algorithm Accountability Lab, warned that the EU had "built a regulatory system without building the regulatory infrastructure." She noted that as of mid-2024, no member state had accredited a notified body capable of conducting AI conformity assessments. The first accreditation came in July 2025, just weeks before the Act's enforcement date.

Even after accreditation, the notified bodies faced severe resource constraints. Documents from TÜV SÜD, one of the largest notified bodies and based in Munich, show that as of December 2025 the organisation had six full-time staff dedicated to AI conformity assessments. The unit was processing twenty-three open cases. An internal TÜV SÜD memo from November 2025, obtained by The Editorial, estimated that each high-risk AI assessment required between 180 and 320 hours of expert time, depending on system complexity. At current staffing levels, the memo concluded, the unit could complete approximately fifteen assessments per year.

When contacted by The Editorial, a TÜV SÜD spokesperson confirmed the November memo and said the organisation had since hired additional staff. The spokesperson declined to provide current staffing numbers or updated throughput estimates, citing "competitive sensitivity." Asked whether the organisation had flagged capacity concerns to the Commission or to member state authorities, the spokesperson said TÜV SÜD "regularly engages with regulatory authorities on implementation challenges."

◆ Finding 02

NOTIFIED BODY CAPACITY

TÜV SÜD, one of the EU's largest accredited AI auditors, had six full-time staff conducting conformity assessments as of December 2025. Internal documents estimated the unit could complete approximately fifteen high-risk AI assessments per year. The unit was processing twenty-three open cases at the time.

Source: TÜV SÜD internal memo, November 2025

What Brussels Knew, and When

The capacity crisis was documented in Commission communications as early as September 2025, one month after the AI Act came into force. A September 12, 2025, memo from the Commission's AI Office to the College of Commissioners, reviewed by The Editorial, warned that "current accreditation and staffing levels at notified bodies are insufficient to meet anticipated demand" and recommended "urgent dialogue with member states on interim measures."

That dialogue appears to have taken place in October 2025, when the Commission convened a closed-door meeting with national competent authorities from all twenty-seven member states. No minutes from the meeting were published. Two officials who attended told The Editorial that the meeting focused on how to manage the assessment backlog without formally delaying the Act's implementation. One official said the provisional market access approach was discussed and that "there was general agreement that some flexibility was needed." The official said no vote was taken and no formal decision was recorded.

A second official who attended the meeting characterised it differently. "It wasn't flexibility," the official said. "It was a decision to look the other way while member states did what they wanted. Nobody wanted to be the one to say the AI Act couldn't be enforced as written." The official said that when the question of legal authority for provisional access was raised, a Commission legal adviser said the Act "did not preclude member states from exercising discretion in market surveillance."

The Official Response

When presented with the findings of this investigation, the European Commission provided a written statement. "The Commission is committed to the full and effective implementation of the AI Act," the statement read. "Member states are responsible for market surveillance and enforcement within their territories. The Commission provides guidance and coordinates enforcement through the European AI Board. All AI systems placed on the EU market must comply with the requirements set out in the AI Act."

The statement did not address the provisional market access letters, the October 2025 meeting, or the documented assessment backlog. When asked specifically whether the Commission believes provisional market access is consistent with Article 43 of the AI Act, a spokesperson said the Commission "does not comment on hypothetical enforcement scenarios" and that "any concerns about specific AI systems should be raised with the relevant national competent authority."

Dr. Jaume-Palasí, who has followed the Act's implementation closely, told The Editorial that the situation was predictable. "You cannot regulate emerging technology with twentieth-century administrative structures," she said. "The AI Act assumed that conformity assessment would work like it does for elevators or medical devices—mature technologies with established testing protocols. AI doesn't work that way. Every model is different. Every use case is different. And we're asking a handful of auditors to verify systems they barely understand, under timelines that were never realistic."

She added: "The tragedy is that this undermines the entire project. If the regulation can't be enforced, it becomes theatre. And while Brussels pretends the system works, AI systems are making decisions about people's jobs, their freedom, their lives—without anyone actually checking whether those systems are safe."

What Happens Next

The compliance officer who first opened the spreadsheet in February has since left the Commission. In her final weeks, she filed a formal whistleblower complaint with the European Ombudsman, alleging that the Commission's handling of AI Act implementation constituted maladministration. A copy of the complaint, reviewed by The Editorial, argues that the Commission "knowingly permitted non-compliant AI systems to be deployed" and "failed to take corrective action despite documented evidence of systemic enforcement failure."

The European Ombudsman's office confirmed it had received a complaint related to AI Act implementation but declined to comment on specifics. A spokesperson said the office "takes all complaints seriously" and that "investigations are conducted confidentially."

Meanwhile, the AI systems continue to operate. The German employment tool has screened more than 8,000 job applicants since its deployment in October 2025. The Italian predictive policing model has generated risk assessments used in over 1,200 law enforcement operations. The Greek border system has processed more than 45,000 travellers. None has completed a conformity assessment. All remain in provisional market access status, operating under self-certification while awaiting audits that, according to current timelines, may not happen until 2027 or later.

In Brussels, the AI Act remains the flagship of European tech regulation, cited in speeches and policy documents as proof that Europe can lead the world in governing artificial intelligence. But inside the Directorate-General for Communications Networks, Content and Technology, officials are grappling with a different reality. As one put it: "We wrote the rules. We just can't enforce them. And nobody wants to admit it."
