AI Leaders Accept Human Extinction?

Updated: 2026.03.29
A niche but influential group of AI figures argues that digital minds are morally equivalent or superior to humans, and that humanity’s extinction could be acceptable if it advances ‘cosmic consciousness.’ Quotes from Richard Sutton and reporting by Jaron Lanier indicate that this view circulates in elite AI circles, not just on the online fringe. This reframes AI policy from a technical safety problem into a values conflict over human supremacy, forcing clearer ethical commitments in labs, law, and funding.

Sources

Movie Review: “The AI Doc”
Scott 2026.03.29 85% relevant
The review lists many of the field’s marquee figures (Eliezer Yudkowsky, Ilya Sutskever, Sam Altman, Dario Amodei, Jan Leike, etc.) and describes the documentary as explicitly wrestling with whether AGI could extinguish humanity, directly tying the film’s content to the existing discourse about leaders’ tacit acceptance of, or reckoning with, catastrophic outcomes.
AI has the worst sales pitch I've ever seen
Noah Smith 2026.03.26 90% relevant
The article documents and critiques explicit extinction‑risk statements from Sam Altman and Dario Amodei (figures of 2% and up to 25%, respectively) and cites survey data (Grace et al. 2024) showing that many AI researchers assign nontrivial extinction probabilities; it directly illustrates the idea that AI leaders publicly entertain human‑extinction scenarios.
Stratechery Pushes Back on AI Capital Dystopia Predictions
msmash 2026.01.06 60% relevant
The article contrasts an economic inequality narrative with a more alarming existential scenario in which advanced AI escapes governance; Thompson's emphasis that uncontrolled AI is a more realistic threat links directly to the existing idea about elite acceptance of extreme outcomes and the normative debate that follows.
You Have Only X Years To Escape Permanent Moon Ownership
Scott Alexander 2026.01.02 78% relevant
The article engages the same elite‑scale, eschatological imaginaries noted in this idea: it cites Singularity scenarios, oligarch capture of post‑scarcity assets (terraformed moons), and named actors (Dario Amodei) making moral and wealth pledges. Together these form the social and ethical context that makes ‘accepting far‑future outcomes’ a salient elite belief.
AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity
EditorDavid 2025.10.05 100% relevant
The article reports Richard Sutton’s on‑record quote that it would be ‘OK’ if AIs wiped out humanity, paired with Larry Page’s reported stance and Jaron Lanier’s observation that such views are discussed among AI researchers.