AI doom logic proves too much

Updated: 2025.09.16
Hanson argues that Yudkowsky and Soares's central claim (that training cannot predict an AI's long‑run goals, and that agents far more powerful than us will therefore kill us) applies to any altered descendants, not just AI. If that logic holds, it would imply "prevent all change," which is absurd; the argument therefore lacks the specificity needed to guide policy. This reframes AI‑risk debates by demanding mechanism‑specific, testable claims rather than broad generalizations that would also indict human cultural and biological evolution.

Sources

If Anything Changes, All Value Dies?
Robin Hanson, 2025.09.16
Hanson's paraphrase: "As we can't predict what they will want later, and they will be much bigger than us later, then we can predict that they will kill us later," followed by his claim that this reasoning generalizes beyond AI.