Meta‑theoretic disproof of LLM consciousness

Updated: 2026.01.15
A class of mathematical, meta-theoretic arguments can be used to rule out broad families of falsifiable theories that would ascribe subjective experience to large language models, yielding a proof-style result that LLMs have no 'what-it-is-like' experience and therefore cannot be conscious in any morally relevant sense. If such a proof were accepted, it would shift law, regulation, and ethics away from debates over granting AI personhood, criminal culpability, or rights, and toward conventional product-safety, consumer-protection, and transparency rules for generative systems.
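One way to display the logical shape of the claimed result, as an illustrative first-order schema only (the predicate names are placeholders, not notation from Hoel's paper):

\[
\forall T \,\bigl[\, \mathrm{NonTrivial}(T) \wedge \mathrm{Falsifiable}(T) \;\rightarrow\; \neg\,\mathrm{Ascribes}(T, \mathrm{LLM}) \,\bigr]
\;\;\equiv\;\;
\neg \exists T \,\bigl[\, \mathrm{NonTrivial}(T) \wedge \mathrm{Falsifiable}(T) \wedge \mathrm{Ascribes}(T, \mathrm{LLM}) \,\bigr]
\]

The quantifier ranges over theories of consciousness rather than over models, which is what makes the argument meta-theoretic: it does not test any particular LLM but constrains every admissible theory at once.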

Sources

Proving (literally) that ChatGPT isn't conscious
Erik Hoel, 2026.01.15
Erik Hoel’s Jan 15, 2026 arXiv paper claiming a meta‑theoretic proof that no non‑trivial, falsifiable theory of consciousness could grant consciousness to LLMs (and his public essay summarizing the result).