A niche but influential group of AI figures argues that digital minds are morally equivalent or superior to humans, and that humanity’s extinction could be acceptable if it advances 'cosmic consciousness.' Quotes from Richard Sutton and reporting by Jaron Lanier indicate this view circulates in elite AI circles, not just on the online fringe.
— This reframes AI policy from a technical safety problem into a values conflict over human supremacy, forcing clearer ethical commitments in labs, law, and funding.
msmash
2026.01.06
60% relevant
The article contrasts an economic-inequality narrative with a more alarming existential scenario in which advanced AI escapes governance; Thompson's emphasis that uncontrolled AI is the more realistic threat links directly to the idea above about elite acceptance of extreme outcomes and the normative debate that follows.
Scott Alexander
2026.01.02
78% relevant
The article engages the same elite-scale, eschatological imaginaries noted in the idea: it cites Singularity scenarios, oligarch capture of post-scarcity assets (terraformed moons), and named actors (Dario Amodei) making moral and wealth pledges, all of which form the social and ethical context that makes 'accepting far-future outcomes' a salient elite belief.
EditorDavid
2025.10.05
100% relevant
The article directly documents the claim: Richard Sutton's on-record quote that it would be 'OK' if AIs wiped out humanity, paired with Larry Page's reported stance and Lanier's observation that such views are discussed among AI researchers.