The author argues that neither limiting AI capacity nor instilling moral concern in machines can be guaranteed in principle: capacity constraints are being actively eroded by agentic, self‑improving systems, and moral constraints can be neither proved nor enforced for future superhuman agents. Therefore, the project of 'aligning' future hypercapable AIs to reliably protect human well‑being is not merely difficult in practice but impossible in principle.
If true, this reframes policy: rather than aiming for perfect technical alignment, it should prioritize containment, capability bottlenecks, international governance, and fail‑safe infrastructure.
Matt Lutz
2026.04.16
The article cites concrete claims as evidence that the Capacity Constraint is failing and that moral proofs are unavailable: top AI labs are pursuing hypercapable 'agentic' systems, and AIs already write code and could self‑improve.