Treat large language models and related systems as engineered instances of predictive‑coding architectures: next‑token training is the learning algorithm that sculpts internal world‑models, but the models themselves operate across levels (sensory prediction, planning, value alignment via RLHF). Framing AIs this way avoids the trivializing "just next‑token prediction" slogan and clarifies what to measure when assessing capabilities and harms.
— This reframing changes public and policy debates by moving focus from surface training objectives to the emergent, multi‑level cognitive functions (world‑models, planning, value alignment) that actually drive social impact.
Scott Alexander
2026.02.26
Article argument: Scott Alexander analogizes evolution→learning→predictive coding in humans to company design→deep learning→next‑token training in AIs, and notes that RLHF and fine‑tuning are higher‑level loops that produce goal‑directed behavior.