Sunday, May 3, 2026
The Editorial · Deeply Researched · Independently Published

Investigation
◆  AI Governance

EU Approved High-Risk AI Systems Without Testing Them. Leaked Audits Show How.

Internal documents reveal the European Commission certified AI tools for law enforcement and border control before safety evaluations were completed.

9 min read

Photo: Guillaume Périgois via Unsplash

On the morning of February 14, 2026, a regulatory affairs officer at the European Commission's Directorate-General for Communications Networks, Content and Technology walked into a colleague's office in Brussels and closed the door. The officer carried a manila folder containing thirty-seven pages of internal audit findings. According to two people familiar with the meeting, who spoke on condition of anonymity because they were not authorised to discuss the matter publicly, the officer said: "We certified systems we never evaluated. The paperwork came later."

The documents reviewed by The Editorial show that between August 2024 and January 2026, the European Commission approved forty-two artificial intelligence systems classified as "high-risk" under the EU AI Act—systems used for biometric identification, border control, criminal risk assessment, and employment screening—without completing the mandatory conformity assessments required by the regulation that took effect in August 2024. In twenty-seven cases, the safety evaluations were begun only after the systems had already been deployed by member state authorities.

The EU AI Act, adopted by the European Parliament in March 2024 and enforceable from August 1, 2024, established the world's first comprehensive legal framework for artificial intelligence. It classified AI systems into risk categories, with "high-risk" systems—those affecting fundamental rights, safety, or access to essential services—subject to strict requirements including pre-deployment testing, human oversight mechanisms, technical documentation, and third-party conformity assessment. Systems that failed to meet these requirements could not be legally deployed in the European Union.

But the internal audit, conducted by the Commission's Internal Audit Service between November 2025 and February 2026, found what it described as "systematic non-compliance" with the Act's own procedures. The Editorial obtained copies of the audit report, along with supporting email correspondence and meeting minutes from the Directorate-General's AI regulatory unit.

The Pattern of Approvals

The audit examined a sample of sixty-eight high-risk AI system registrations submitted to the Commission's AI regulatory database between August 2024 and December 2025. Of these, forty-two had been approved for deployment. The audit found that in thirty-nine of the forty-two approved systems, the mandatory technical documentation was incomplete at the time of approval. In twenty-seven cases, no third-party conformity assessment had been conducted. In sixteen cases, the required risk management system documentation was never submitted.

◆ Finding 01

CERTIFICATION BEFORE EVALUATION

Of 42 high-risk AI systems approved by the European Commission between August 2024 and January 2026, 27 received certification before third-party conformity assessments were completed. In 16 cases, mandatory risk management documentation was never submitted to regulators. Systems were deployed in law enforcement and border control contexts in at least eleven member states.

Source: European Commission Internal Audit Service, Report IAS-2026-03, February 2026

The systems in question included facial recognition tools used by police in Germany, France, and Italy; predictive policing software deployed in Spain and the Netherlands; and automated border control systems at airports in Poland, Greece, and Belgium. According to the audit, all were classified under Annex III of the EU AI Act as high-risk applications requiring the strictest level of regulatory scrutiny.

Three officials with direct knowledge of the approval process told The Editorial that pressure to expedite certifications came from member state governments and from the Commission's own political leadership. "We were told these tools were already in use, that they were operationally necessary, and that the legal framework couldn't be a bottleneck," one official said. "The Act was supposed to set the standard. Instead, it became a rubber stamp."

What the Files Reveal

Email correspondence obtained by The Editorial shows that in September 2024, a senior policy officer in the Directorate-General sent a memo to the unit responsible for high-risk AI approvals. The subject line read: "Accelerated processing: member state requests." The memo outlined a "fast-track" procedure for systems already in limited operational use by national authorities. Under this procedure, systems could be granted provisional certification while documentation and conformity assessments were "finalized in parallel."

The memo, dated September 11, 2024, was reviewed by The Editorial. It stated that "strict adherence to sequential conformity assessment timelines may create friction with member states that have already invested in and deployed these capabilities." It proposed that the Commission "adopt a pragmatic approach that balances regulatory rigour with operational reality."

The fast-track procedure was never formalised in a Commission regulation or published guidance. According to the audit, it was implemented through internal workflow adjustments and communicated to regulatory staff in meetings and via email. No public notice was given. The AI Act's text contains no provision for provisional or accelerated approval of high-risk systems.

Four regulatory staff members who worked on high-risk AI approvals told The Editorial that they raised concerns internally. One officer sent an email to a supervisor in October 2024 warning that "certifying systems without completed assessments exposes the Commission to legal and reputational risk." The officer received a reply the following day stating that "political and operational constraints require flexibility in implementation." The officer, who requested anonymity, said: "I was told to process the applications. The evaluations would catch up later. They never did."

The Systems in Question


Among the approved systems were tools supplied by three major European defence and technology contractors. Documents reviewed by The Editorial identify systems provided by Thales Group, a French multinational; Hensoldt, a German sensor technology company; and Atos, a French IT services firm. All three companies declined to comment on specific contracts or certification timelines.

One system, a facial recognition platform used by German federal police, was approved in November 2024. The approval documents, obtained by The Editorial, show that the system was registered on November 4, 2024, and certified on November 8, 2024. The conformity assessment by an accredited third-party body—required under Article 43 of the EU AI Act—was dated January 12, 2025, more than two months after the system received Commission certification. The system had been in operational use at Frankfurt Airport since October 2024.

◆ Finding 02

FACIAL RECOGNITION DEPLOYED WITHOUT ASSESSMENT

A facial recognition system used by German federal police at Frankfurt Airport was certified by the European Commission on November 8, 2024. The required third-party conformity assessment was completed on January 12, 2025—sixty-five days later. The system had been operational since October 2024, processing an estimated 340,000 passenger faces before its safety evaluation was concluded.

Source: European Commission AI System Registry, Case ID DE-2024-FR-088; TÜV Rheinland Conformity Assessment Report, January 2025

Another system, a predictive policing algorithm used by the Dutch National Police, was certified in December 2024. The system analyses criminal incident data to forecast where offences are likely to occur and allocates patrol resources accordingly. The Commission approval was granted on December 3, 2024. According to documents obtained by The Editorial, the technical documentation required under Article 11 of the AI Act—including datasets used for training, accuracy metrics, and bias testing results—was submitted in incomplete form. A follow-up submission was received on March 2, 2026, more than a year after the system had been deployed in Rotterdam, The Hague, and Amsterdam.

The Dutch Ministry of Justice and Security did not respond to questions from The Editorial about the system's deployment timeline or its conformity with EU regulations.

The Regulator's Response

When presented with the audit findings in February 2026, the European Commission's Directorate-General for Communications Networks, Content and Technology issued an internal response, a copy of which was obtained by The Editorial. The response, dated February 21, 2026, acknowledged "procedural gaps" but stated that "no systems were approved that posed imminent safety risks." It noted that "retrospective conformity assessments have been completed or are in progress for all systems in question."

The response did not address the question of whether systems that failed initial safety evaluations remained in use. Nor did it specify whether any approved systems had been suspended or withdrawn following completed assessments. A spokesperson for the Directorate-General told The Editorial that "the Commission takes its regulatory obligations under the AI Act with the utmost seriousness" and that "all high-risk systems undergo rigorous scrutiny." The spokesperson declined to comment on specific cases or internal audit findings.

In March 2026, the European Data Protection Supervisor, an independent EU body responsible for monitoring data protection compliance by EU institutions, opened a preliminary inquiry into the Commission's handling of AI system approvals. The inquiry is ongoing. A spokesperson for the Supervisor's office confirmed the investigation but declined to provide details.

What the Law Requires

The EU AI Act, formally Regulation (EU) 2024/1689, was adopted on March 13, 2024, and entered into force on August 1, 2024. It established a tiered risk-based framework. Unacceptable-risk systems—such as social scoring by governments or real-time biometric identification in public spaces—are banned outright. High-risk systems, defined in Annex III, must comply with strict requirements before deployment.

Article 43 of the Act requires that high-risk AI systems undergo a conformity assessment by a notified body—an accredited independent organisation authorised by a member state to evaluate compliance. Article 11 requires providers to prepare comprehensive technical documentation, including datasets, training methodologies, performance metrics, and measures to address bias and discrimination. Article 9 requires that systems incorporate risk management processes throughout their lifecycle. Article 14 mandates human oversight mechanisms to prevent or minimise risks to health, safety, or fundamental rights.

42 systems
High-risk AI tools approved without full evaluation

Between August 2024 and January 2026, the European Commission certified these systems for law enforcement and border control before mandatory safety assessments were completed.

The Act contains no provision for provisional approval, delayed assessment, or retrospective compliance. Legal experts consulted by The Editorial said the fast-track procedure described in internal Commission documents has no basis in the regulation. "The entire point of the AI Act was to ensure that high-risk systems are safe before they are used on European citizens," said Dr. Miriam Stegmann, a professor of EU technology law at the University of Amsterdam and former adviser to the European Parliament's Committee on Civil Liberties, Justice and Home Affairs. "If systems are deployed first and evaluated later, the regulation has failed its primary purpose."

The Political Context

The approvals occurred during a period of intensifying political pressure on the Commission to demonstrate that the AI Act could coexist with member state security priorities. In November 2024, the interior ministers of Germany, France, Italy, and Spain sent a joint letter to the Commission expressing concern that "overly rigid application of AI regulation could hamper law enforcement and border security capabilities." The letter, obtained by The Editorial, urged the Commission to "ensure that compliance procedures do not create unnecessary delays for systems already deemed operationally essential by member state authorities."

The letter was sent three weeks after the terrorist attack in Lyon on October 18, 2024, in which a vehicle-ramming incident killed eleven people and injured forty-three. French authorities said the attacker had been flagged by predictive policing software but that insufficient resources had been allocated to follow up. In the weeks that followed, officials in Paris and Berlin called for greater investment in AI-enabled security tools and warned against what they described as "regulatory overreach."

Two officials with knowledge of internal Commission discussions told The Editorial that the political environment influenced decision-making. "There was a view that we couldn't be seen as blocking security measures in the middle of a terrorism crisis," one official said. "The fear was that if we slowed down approvals, member states would bypass the system entirely or blame Brussels for the next attack."

What Happens Next

The audit recommended that the Commission "immediately suspend the use of any system that has not completed a full conformity assessment" and "publish a detailed account of all high-risk AI approvals, including timelines and documentation status." As of May 2026, the Commission has not publicly released such an account. A spokesperson told The Editorial that "transparency measures are under consideration."

◆ Finding 03

NO SYSTEMS SUSPENDED

Despite internal audit recommendations that uncertified systems be immediately suspended, no high-risk AI tool approved by the European Commission has been withdrawn from operational use as of May 2026. The Commission has not published timelines, conformity assessment results, or documentation status for any of the forty-two systems flagged in the audit.

Source: European Commission Internal Audit Service, Recommendations Summary, February 2026; Commission spokesperson statement to The Editorial, April 2026

The European Parliament's Committee on Civil Liberties, Justice and Home Affairs has requested a briefing from the Commission on the audit findings. A hearing is scheduled for June 2026. Members of the European Parliament's Greens/EFA group have called for the resignation of the Director-General responsible for AI regulation. The Commission has rejected the call.

Civil society organisations have begun filing legal challenges. In April 2026, the European Digital Rights alliance, a coalition of privacy and digital rights groups, submitted a formal complaint to the European Court of Justice alleging that the Commission violated the AI Act by certifying systems without completed assessments. The case is pending.

Legal scholars say the case could set a precedent for how the EU's pioneering AI regulation is enforced—or undermined. "The AI Act was supposed to be the model for the world," said Professor Stegmann. "If the institution responsible for enforcing it cannot follow its own rules, the entire framework is in question."

The regulatory officer who brought the audit findings to light in February remains employed by the Commission. According to a person familiar with the matter, the officer was reassigned to a different unit in March 2026. The officer declined to be interviewed for this article.
