
The Camera That Learned Your Face From a Crowd of Strangers
Facial recognition systems now identify individuals with 99.8% accuracy from a single frame — without consent, oversight, or any law requiring they forget.


Internal documents reveal a biotech startup tested an experimental CRISPR treatment on terminally ill patients without disclosing prior animal deaths, and regulators approved it anyway.

Internal documents reveal how Amazon's ad platform transformed its search function into a toll booth, charging brands to reach customers who already know what they want.

Internal documents reveal European regulators approved high-risk AI systems without the independent evaluations the AI Act requires — and Brussels knew.

A new generation of facial recognition systems can identify you from partial images, through masks, across decades. The companies building them operate in a legal void.

Internal memos and testimony from former employees show how content moderation teams were systematically dismantled — and what the platform knew about the consequences.

An analysis of 340,000 developer revenue reports reveals platform fees have doubled since 2020, while regulatory responses fragment across 23 jurisdictions.

In a cramped control room in Darmstadt, engineers are tracking fragments of a satellite that exploded two years ago. The pieces keep multiplying.

As experimental anti-ageing therapies reach clinical trials, a $50 billion industry is emerging with no framework for equitable access.

Criminal networks are weaponizing generative AI to create undetectable voice deepfakes, with financial losses tripling in 18 months as regulators scramble to respond.

A federal investigation reveals that AI-powered hiring systems used by Fortune 500 companies systematically discriminate against disabled applicants and workers over 40.

Major AI labs have deployed systems with capabilities that exceed their own internal safety benchmarks, leaked documents and whistleblower testimony reveal.

A comprehensive investigation reveals major employers' AI screening tools reject up to 75% of qualified candidates before human review, with systemic bias patterns now triggering regulatory action.

Internal documents and interviews with a dozen current and former researchers reveal that competitive pressure is systematically deprioritizing safety evaluations at all three frontier AI labs.

As the most powerful AI systems in history begin drafting legislation indistinguishable from human-authored bills, the global regulatory apparatus designed to govern them remains paralysed, underfunded, and years behind.