AI Doom’s Eleven Hidden Assumptions

Updated: 2025.10.01
The author argues that AI‑apocalypse predictions rest on at least eleven specific claims about intelligence, among them that it is unitary, general‑purpose, unbounded, already present in AIs, rapidly scaling to human/superhuman levels, and coupled to agency and hostile goals. He contends that breaking even one link in this chain collapses the case for a high p(doom), and that several links are mistaken, especially ‘intelligence as a single continuum’ and the assumption that goals form automatically. The result is a checklist that forces doomer arguments into testable sub‑claims, sharpening public and policy debates about AI risk and regulation.

Sources

A 'Godfather of AI' Remains Concerned as Ever About Human Extinction
msmash 2025.10.01 60% relevant
Bengio’s claims (that a 1% extinction risk is unacceptable, that the timeline is 3–10 years, and that industry race incentives are at work) exemplify the high‑stakes doomer stance that the referenced idea dissects into specific assumptions requiring evidence; his interview keeps that debate salient.
If Anything Changes, All Value Dies?
Robin Hanson 2025.09.16 78% relevant
Hanson challenges key doomer premises—unpredictable post‑training goals, early value lock‑in, and inevitable misalignment leading to human extinction—by arguing these assumptions would equally condemn any changing descendants, not just AI. This mirrors the 'break any link, collapse p(doom)' critique of stacked assumptions.
AI Doomerism Is Bullshit
David Pinsof 2025.01.27 100% relevant
Pinsof explicitly lists "at least eleven claims" that he says doomers assume about intelligence and AI, and promises to rebut them.