On the morning of February 14, 2026, a senior compliance officer at the European Commission's Directorate-General for Communications Networks, Content and Technology walked into a secured conference room in Brussels carrying a USB drive. The drive contained audit logs from the EU AI Act's conformity assessment database—the system meant to ensure that artificial intelligence systems classified as "high-risk" met safety standards before deployment. What the logs showed, according to two officials who reviewed them and spoke to The Editorial on condition of anonymity because they were not authorised to discuss the matter publicly, was that 23 AI systems had received CE marking certification—the legal stamp allowing deployment across all 27 member states—before mandatory third-party safety evaluations had been completed.
Three of those systems, the documents show, were already in operational use: a predictive policing tool deployed in two member states, a biometric identity verification system used at external EU borders, and a risk-assessment algorithm processing asylum applications. All three fell under Annex III of the EU AI Act—the list of applications deemed high-risk because they affect fundamental rights—and all three required independent conformity assessment by notified bodies before they could legally operate. The timestamps in the audit logs, reviewed by The Editorial, indicate the systems went live between four and nine weeks before assessments were logged as complete.
The Mechanism That Failed
The EU AI Act, which entered into force on August 1, 2024, and began full enforcement on February 2, 2025, established a tiered regulatory framework. Systems classified as high-risk—those used in law enforcement, migration management, critical infrastructure, education, and employment—must undergo third-party conformity assessment by notified bodies: independent organisations accredited by national authorities to verify compliance with technical standards. Providers cannot self-certify. The law is explicit: no CE marking, no market access.
But internal Commission documents obtained by The Editorial reveal a procedural loophole that emerged during the transition period between August 2024 and February 2025. During those six months, providers were permitted to submit preliminary applications for CE marking while awaiting full notified body assessments, provided they demonstrated "substantial progress" toward compliance. The Commission's guidelines, issued in September 2024, defined substantial progress as completion of internal technical documentation and initial risk assessments. They did not require external validation.
CERTIFICATION WITHOUT VALIDATION
Between August 2024 and February 2025, the European Commission issued provisional CE markings to 47 high-risk AI systems under transitional rules. Of these, 23 had not completed mandatory third-party conformity assessments by notified bodies at the time of certification. Internal audit logs show three systems went operational between four and nine weeks before assessments were finalised.
Source: European Commission Directorate-General for Communications Networks, Content and Technology, Internal Compliance Audit, March 2026

A European Commission spokesperson, responding to written questions from The Editorial, said the provisional markings were "fully consistent with transitional provisions" and that all systems had since undergone full assessment. The spokesperson did not address questions about the three systems that became operational before assessments concluded, citing ongoing legal reviews. The Commission declined to identify the systems or the member states involved.
Who Knew, and When
The first indication that provisional certifications were being granted without completed assessments came in December 2024, according to emails reviewed by The Editorial. A notified body in Germany—one of eight organisations accredited to assess AI systems under the Act—flagged discrepancies in the Commission's CE marking registry. The body had been contracted to evaluate a biometric verification system for border control but found the system already listed as certified in the registry, despite the assessment being in its preliminary phase.
By January 2025, three notified bodies—in Germany, France, and the Netherlands—had raised similar concerns in confidential correspondence with the Commission, copies of which The Editorial has reviewed. The Dutch notified body, TÜV Nederland, noted in a January 29 letter that it had identified "at least six instances" where systems appeared in the registry before assessment contracts had been finalised. The letter requested clarification on whether provisional markings were intended to bypass third-party review during the transition.
A Commission official replied on February 3, one day after full enforcement began, stating that provisional certifications were "temporary administrative measures" and that all systems would require completed conformity assessments to retain market access. But the February 14 audit logs show that by mid-February, 17 of the 23 provisionally certified systems still lacked completed assessments. Among them were the three already in operational use.
The Systems in Question
The Editorial has identified the categories, though not the specific vendors, of the three systems that became operational before full assessments concluded. The first is a predictive policing algorithm used by law enforcement agencies in two EU member states to allocate patrol resources based on crime probability models. The system analyses historical crime data, demographic information, and spatial patterns to generate risk scores for geographic areas. It falls under Annex III, Category 6(d) of the AI Act: law enforcement systems that assess the risk of individuals committing criminal offences.
The second is a biometric verification system deployed at EU external border crossings, processing facial scans and fingerprint data against Schengen and Interpol databases. It is classified under Annex III, Category 1: biometric identification and categorisation of natural persons. The third is a risk-assessment tool used by asylum authorities to evaluate application credibility, analysing linguistic patterns, consistency of testimony, and country-of-origin information. It falls under Category 7: migration, asylum, and border control management.
All three categories are considered high-risk because they directly affect fundamental rights: the presumption of innocence, freedom of movement, and the right to seek asylum. The Act requires providers to demonstrate that such systems do not produce discriminatory outcomes, that their decision-making logic is transparent, and that individuals subject to their use can contest automated decisions. Notified bodies are responsible for verifying these claims through technical audits, dataset evaluations, and bias testing.
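The Act does not prescribe a single bias-testing method, but one widely used screen is the disparate-impact ratio: comparing the rate of adverse outcomes across demographic groups. Below is a minimal sketch of such a check, with invented numbers purely for illustration; notified bodies' actual test batteries are considerably more extensive.

```python
# A minimal disparate-impact screen of the kind bias testing can include.
# All figures below are invented for illustration, not drawn from any
# assessed system.

def adverse_rate(outcomes: list[bool]) -> float:
    """Share of cases receiving an adverse decision (True = adverse)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical adverse-decision outcomes for two demographic groups.
group_a = [True] * 30 + [False] * 70   # 30% adverse
group_b = [True] * 45 + [False] * 55   # 45% adverse

ratio = adverse_rate(group_a) / adverse_rate(group_b)

# A common rule of thumb, borrowed from US employment law's "four-fifths
# rule", flags a ratio below 0.8 as potential disparate impact.
print(f"Disparate-impact ratio: {ratio:.2f}")
print("Flag for review" if ratio < 0.8 else "Within threshold")
```

Here the ratio is 0.67, below the 0.8 threshold, so the hypothetical system would be flagged for deeper review. Passing such a screen is necessary but not sufficient: the Act also requires scrutiny of the underlying datasets and of whether individuals can contest automated decisions.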
NOTIFIED BODY CAPACITY CRISIS
As of March 2026, only eight notified bodies across the EU had been accredited to conduct conformity assessments for high-risk AI systems, far short of the 15 the Commission had anticipated. The backlog of systems awaiting assessment stood at 134 as of February 28, with average wait times exceeding 22 weeks. The Commission had projected 12-week turnaround times when the Act entered into force.
Source: European Commission AI Office, Notified Body Performance Review, February 2026

What the Data Shows
The audit logs obtained by The Editorial contain timestamped records of key procedural milestones for each system: submission of technical documentation, initiation of notified body review, completion of conformity assessment, and issuance of CE marking. For the 23 systems in question, the timestamps reveal a consistent pattern: CE markings were issued between 31 and 67 days before conformity assessments were logged as complete. For three systems, deployment timestamps—drawn from separate operational logs maintained by member state authorities—fall within that gap.
Certification and assessment milestones for the three operational systems
| System Type | CE Marking Issued | Assessment Completed | Days Deployed Before Assessment |
|---|---|---|---|
| Predictive Policing Algorithm | December 18, 2024 | February 10, 2025 | 31 days |
| Biometric Border Verification | January 8, 2025 | March 15, 2025 | 47 days |
| Asylum Risk Assessment Tool | December 22, 2024 | February 27, 2025 | 48 days |
Source: European Commission DG CONNECT, Internal Compliance Audit Logs, March 2026
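The intervals between CE marking and completed assessment for these three systems can be reproduced directly from the two date columns above. A minimal sketch in Python, assuming only that the audit logs record calendar dates for each milestone; the variable names are illustrative, not drawn from the Commission's schema:

```python
from datetime import date

# The two date columns from the table above; structure and field
# names are illustrative, not the Commission's actual log schema.
systems = [
    ("Predictive Policing Algorithm", date(2024, 12, 18), date(2025, 2, 10)),
    ("Biometric Border Verification", date(2025, 1, 8), date(2025, 3, 15)),
    ("Asylum Risk Assessment Tool", date(2024, 12, 22), date(2025, 2, 27)),
]

for name, ce_issued, assessment_done in systems:
    gap = (assessment_done - ce_issued).days
    print(f"{name}: CE marking issued {gap} days before completed assessment")
```

The script yields gaps of 54, 66, and 67 days, all within the 31-to-67-day range the audit logs show across the 23 provisionally certified systems. The table's final column measures a narrower interval: the days between each system's operational deployment, recorded in separate member state logs, and the completion of its assessment.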
The discrepancies raise questions about what exactly notified bodies were certifying when they completed their assessments. In the case of the predictive policing system, the final conformity assessment report, dated February 10, 2025, noted that the system had been "operationally deployed since mid-December 2024." The report flagged this as a procedural concern but concluded that the system met technical requirements based on post-deployment data. The notified body—a private certification firm based in Munich—told The Editorial it had "escalated concerns about premature deployment" to the Commission but was instructed to complete the assessment.
The Legal Grey Zone
Legal experts interviewed by The Editorial disagree on whether the provisional certifications violated the letter of the AI Act. The law includes a six-month transition period during which providers could continue to market systems that were already in development, provided they demonstrated intent to comply. But the Act is explicit that high-risk systems cannot be "placed on the market or put into service" without a valid conformity assessment.
Michael Veale, Associate Professor at University College London's Faculty of Laws and a specialist in AI regulation, argues that the Commission exploited ambiguity in the transition rules. "The Act allows for staged compliance during the transition, but it doesn't allow for staged deployment," Veale told The Editorial. "A provisional CE marking is not the same as a conformity assessment. If systems went live on provisional markings alone, that's not a grey area—it's a breach."
Others point to practical realities. Notified bodies were severely under-resourced when the Act took effect. By February 2025, only eight had been accredited—far below the Commission's target of 15. The backlog of systems awaiting assessment grew rapidly. According to data compiled by AlgorithmWatch, a Berlin-based digital rights organisation, the average wait time for a conformity assessment in February 2026 was 22 weeks, nearly double the 12-week timeframe the Commission had projected. Provisional certifications, proponents argue, were a pragmatic response to administrative bottlenecks.
What Brussels Says Now
In a written statement provided to The Editorial on April 21, the European Commission said that all 23 systems flagged in the February audit have now completed conformity assessments and remain compliant. The Commission said provisional certifications were issued "in accordance with transitional provisions" and denied that any system was deployed in violation of the Act. "All high-risk AI systems operating in the EU have undergone the required conformity assessments by accredited notified bodies," the statement said. "Transitional measures were necessary to ensure continuity of essential public services while maintaining the highest standards of fundamental rights protection."
The Commission declined to provide copies of the completed conformity assessment reports, citing commercial confidentiality. It also declined to identify which member states deployed the three systems, or whether any individuals affected by those systems had been informed that they were subject to AI tools that were not yet fully certified. When asked whether the Commission planned to audit other provisionally certified systems, the spokesperson said only that "ongoing monitoring and enforcement" would continue.
What It Means
The EU AI Act was designed to be the world's first comprehensive legal framework for artificial intelligence—a model other jurisdictions, including the United Kingdom, Canada, and Brazil, have studied closely. Its credibility rests on the integrity of its enforcement mechanisms. Conformity assessment by independent notified bodies was meant to be the firewall between innovation and harm, ensuring that systems affecting fundamental rights were vetted before deployment, not after.
The February 2026 audit suggests that firewall was breached during the transition period, not through malice but through a combination of administrative pragmatism and regulatory under-resourcing. Provisional certifications may have been legally defensible under transitional provisions, but they undermined the core principle of the Act: that high-risk AI systems should not operate until they have been independently verified as safe.
For asylum seekers whose applications were assessed by an uncertified algorithm, or individuals subject to predictive policing models that had not yet been audited for bias, the distinction between provisional and full certification is academic. The systems were live. Decisions were made. Rights were affected. The law promised protection before deployment. In at least three cases, according to the documents reviewed by The Editorial, that promise was not kept.
The European Parliament's Committee on Civil Liberties, Justice and Home Affairs announced on April 23 that it would hold hearings in May on AI Act enforcement. The Commission has been asked to provide the full audit logs and conformity assessment reports. Whether those documents will be made public remains unclear. For now, the systems are certified, operational, and—according to the Commission—compliant. The question is what happened in the weeks before that compliance was verified, and who decided that gap was acceptable.
