AI's Exception Problem

Updated: 2026.03.27 · 3 sources
LLM systems operate like closed legal systems: they apply learned rules but cannot genuinely ‘decide’ novel exceptions that demand discretionary judgment. Treating them as autonomous decision‑makers therefore risks delegating crisis authority to systems that structurally cannot assume sovereignty. This reframes AI risk from narrow technical failure to a political question about who holds exceptional authority in emergencies. If true, it shifts AI governance away from technical safety checks and toward questions of delegation, emergency powers, and institutional limits on algorithmic authority.

Sources

You can’t imitation-learn how to continual-learn
Steven Byrnes 2026.03.27 85% relevant
The article argues that LLMs fail to generalize to entirely new conceptual domains presented only in-context (e.g., a novel textbook inside a context window) — a concrete instance of the broader 'exception problem': models break when faced with data or skills outside their training distribution.
The "Exception" and So-Called "Artificial Intelligence"
κρῠπτός 2026.03.14 100% relevant
The article explicitly invokes Carl Schmitt’s line “Sovereign is he who decides the exception” and claims LLMs share the same fundamental flaw as the modern rule‑of‑law — an inability to handle exceptions.
159. The "Exception" and So-Called Artificial Intelligence
κρῠπτός 2026.03.14 92% relevant
The podcast explicitly uses Carl Schmitt’s notion of the 'exception' to argue that LLMs share a structural, 'fatal' inability to handle exceptional or norm‑breaking cases — the same core claim captured by the existing idea 'AI's Exception Problem'. The host (κρῠπτός) draws a parallel between the legal‑political limits of the rule of law in crises and LLMs' failure modes, connecting a named work (Political Theology) to LLM behavior and its governance implications.