The article argues that consciousness is not merely sophisticated information processing but requires a first‑person, embodied or socially grounded stance that systems like Claude do not have. Without that kind of grounding, outputs that look conscious are epiphenomenal performances rather than evidence of inner subjectivity.
— If true, claims that current or near‑future LLMs are conscious are overstated; this should temper calls for AI personhood, liability reform, and some ethical panics, and shift policy focus toward embodied or social criteria.
Damon Linker
2026.05.08
Damon Linker’s critique of Richard Dawkins’ claim about Anthropic’s Claude calls out an empiricist model that equates interaction with consciousness, illustrating the very 'first‑person grounding' he argues LLMs lack.