Total AI Ban as Strategy

Updated: 2025.10.09 · 4 sources
MIRI's leaders argue that the chance of AI-caused human extinction is so high (≈95–99%) that all AI capabilities research should be halted now, not merely regulated or slowed. They claim moral-clarity messaging beats incremental, technocratic safety talk, both substantively and as public persuasion. This sets up a stark intra-movement split: absolutist prohibition versus pragmatic containment. If an influential faction pushes a total moratorium as both policy and PR, it will reshape coalitions, legislation, and how media and voters interpret AI risk.

Sources

Nate Soares: we are doomed (probably)
Razib Khan 2025.10.09 90% relevant
Soares (MIRI's president) promotes a book co-authored with Eliezer Yudkowsky arguing that if anyone builds superhuman AI, 'everyone dies,' and urging immediate action to stop development, echoing MIRI's absolutist moratorium stance.
If someone builds it, will everyone die?
Kelsey Piper 2025.09.18 86% relevant
The review centers on Yudkowsky and Soares's thesis that any path to superintelligence is existentially dangerous and implies that current AI lab efforts should be halted, mirroring MIRI's absolutist 'stop frontier AI' position.
What the tech giants aren’t telling us
Tom Chivers 2025.09.16 88% relevant
The article foregrounds Eliezer Yudkowsky and Nate Soares's essay calling for a global prohibition on frontier AI research, enforced in the manner of nuclear non-proliferation treaties and explicitly extending to bombing rogue data centers, which is precisely the 'total moratorium' stance described in this idea.
Book Review: If Anyone Builds It, Everyone Dies
Scott Alexander 2025.09.11 100% relevant
Scott Alexander reviews Yudkowsky and Soares's forthcoming book 'If Anyone Builds It, Everyone Dies', summarizing their ban-all-capabilities stance and their theory of moral-clarity messaging as public persuasion.