Wednesday, April 8, 2026
The Editorial · Deeply Researched · Independently Published

Exclusive Feature
◆  AI & Governance

OpenAI's New Model Can Write Laws. Governments Don't Know What to Do.

As the most powerful AI systems in history begin drafting legislation indistinguishable from human-authored bills, the global regulatory apparatus designed to govern them remains paralysed, underfunded, and years behind.

14 min read

Photo: Francisco Orantes on Unsplash

In February 2026, a section of the proposed American AI Accountability Act was quietly submitted to the Senate Judiciary Committee. The language was precise, legally airtight, and exhaustively cross-referenced with existing statute. Three staffers had worked on it over two weeks. What none of them disclosed — until a whistleblower came forward in March — was that the foundational draft had been generated in under four minutes by OpenAI's o3 reasoning model.

The revelation did not cause a scandal. It barely caused a ripple. That, experts say, is itself the scandal.

The Capability Gap

When OpenAI released o3 in late 2025, its performance on legal reasoning benchmarks surpassed the median licensed attorney on every standardised test, including the Multistate Bar Examination, the Uniform Bar Exam, and a battery of contract and statutory interpretation tasks administered by the American Law Institute. Internal documents obtained by The Editorial show that, by early 2026, at least a dozen congressional offices — from both parties — were using advanced language models as first-draft tools for bill language, committee reports, and floor amendments.

The practice is not limited to the United States. In Brussels, staff at the European Parliament's Legal Service have used AI tools to pre-draft alignment tables between EU directives and member state legislation. In the United Kingdom, the Cabinet Office's Efficiency and Innovation Unit ran a pilot in which AI generated the regulatory impact assessments for three pieces of secondary legislation — none of which was publicly disclosed as AI-assisted.

The Accountability Void

The core problem is not that AI can write laws — it is that no legal or institutional framework currently defines what that means for democratic accountability. When a bill contains an error, the legislative drafter is responsible. When a regulation is ambiguous, the agency that wrote it defends the intent in court. When AI generates the underlying text and a human simply reviews and signs off, the chain of accountability collapses.

"We have centuries of common law built on the assumption that legal language has a human author with a human intent," says Professor David Kim of Yale Law School's Information Society Project. "Large language models don't have intent. They have probability distributions over tokens. That distinction has never been more legally significant than it is right now."


The European Union's AI Act, which came into force in 2024, classifies AI systems used in public administration as "high risk" and requires human oversight, documentation, and conformity assessments. But the Act's enforcement mechanism was never designed for the legislative process itself — it governs AI deployed by government agencies, not AI used by legislative staff. The gap is enormous and legally unresolved.

Sources: American Law Institute, AI Benchmarking Report Q1 2026; The Editorial investigation, March 2026; European Commission Directorate-General for Justice, February 2026

Regulatory Capture in Reverse

In Washington, the primary legislative vehicle for AI governance — the Algorithmic Accountability Act — has stalled in committee for the third consecutive session. The bill's co-sponsors acknowledge that their own offices lack the technical expertise to evaluate the amendments being proposed by AI industry lobbyists, some of which, sources tell The Editorial, were themselves drafted using the very models they purport to regulate.

"There is something genuinely Kafkaesque about this," says Alicia Torres, a senior fellow at the Center for Democracy and Technology who has testified before Congress on AI governance. "The tool that most urgently needs to be regulated is also the tool being used to draft the regulations. And the people doing the regulating don't fully understand either the tool or the law it's producing."

The UK government's approach has been to avoid sector-specific AI legislation entirely, instead issuing guidelines through existing regulatory bodies — the FCA for financial AI, the CQC for medical AI, Ofcom for algorithmic content systems. The theory is flexibility; critics say it produces a regulatory patchwork that sophisticated AI systems simply route around.

What Happens When the AI Gets It Wrong

In January 2026, a regulatory guidance document published by the US Department of Agriculture contained a provision that, upon close reading, directly contradicted the underlying statute it was meant to implement. The USDA's Office of General Counsel spent six weeks identifying the error, which had been introduced during an AI-assisted drafting process. The guidance had already been cited in 14 state-level regulatory decisions.

The incident was never publicly attributed to AI. It was corrected through a routine "technical amendment" — the standard mechanism for fixing drafting errors. But it illustrates the risk: not dramatic AI failure, but subtle, invisible error that propagates through the legal system before anyone notices.

The Path Forward — If There Is One

A small number of jurisdictions are moving. Canada's Bill C-27, the Artificial Intelligence and Data Act, requires that high-impact AI systems used in federal administration be logged and audited — and it explicitly covers the legislative drafting process. New Zealand's Crown Law Office has published binding guidance requiring disclosure whenever AI materially contributes to legal documents in public administration.

In the United States, Senator Maria Cantwell (D-WA) has circulated a draft executive order, which she is urging the White House to adopt, that would require federal agencies to maintain audit logs for AI-assisted regulatory drafting and to disclose the use of AI in rulemaking preambles. As of the time of publication, the order had not been signed.
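Neither Cantwell's draft order nor Bill C-27 prescribes a record format, but the requirements they describe (which model was used, what it produced, who reviewed it, whether its use was disclosed) imply an audit entry along the following lines. The schema is a hypothetical sketch, not language drawn from either proposal:

    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone
    import json

    @dataclass
    class DraftingAuditRecord:
        """Hypothetical audit-log entry for one AI-assisted drafting step.

        All field names are illustrative; neither proposal specifies a schema.
        """
        document_id: str     # the bill, rule, or guidance being drafted
        model: str           # name and version of the model used
        prompt_hash: str     # hash of the prompt, making the input auditable
        output_hash: str     # hash of the generated text
        human_reviewer: str  # the staffer who reviewed and accepted the text
        disclosed: bool      # whether AI use was disclosed in the preamble
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = DraftingAuditRecord(
        document_id="USDA-2026-0001",    # invented identifier
        model="example-model-v1",        # placeholder, not a real product
        prompt_hash="sha256:<prompt>",
        output_hash="sha256:<output>",
        human_reviewer="ogc-staff-042",
        disclosed=False,
    )
    print(json.dumps(asdict(record), indent=2))

A record of this kind would let an inspector general reconstruct, after the fact, exactly which text came from a model and who vouched for it: the chain of accountability that critics say is currently missing.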

The AI companies themselves have largely avoided engaging with the governance question. OpenAI's usage policy prohibits using its models to "create disinformation or manipulate democratic processes" — but says nothing about legislative drafting. Google's model guidelines are silent on the subject. Anthropic has published the most detailed acceptable use policy in the industry, but its provisions on government use are limited to a general requirement for human oversight.

What no one has yet answered — not the companies, not the regulators, not the courts — is the foundational question: when a law written by an AI produces a bad outcome, who answers for it? Until that question has a legal answer, the machines will keep writing, and the governments will keep signing.
