Model Synthesis Explains Human Reasoning

Updated: 2025.09.02 · 2 sources
A new paper argues that people tackle open-ended problems by assembling small, task-specific probabilistic programs from relevant pieces of knowledge, then performing Bayesian updates within that tiny model. Rather than reasoning over all knowledge at once, a 'problem-conditioned language model' selects which variables and assumptions to include. This reframes both cognition and AI design around assembling ad-hoc models on demand, and it bears on how we build, evaluate, and constrain 'reasoning' systems.
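The two-stage idea can be sketched in plain Python: a relevance filter pulls a handful of variables out of a larger knowledge base into a tiny model, and exact Bayesian updating is then done by enumerating that model's joint. This is a minimal sketch under loud assumptions: the keyword match below is a toy stand-in for the paper's problem-conditioned language model, and every variable name, prior, and the noisy-OR likelihood is invented for illustration, not taken from the paper.

```python
import re
from itertools import product

# Toy knowledge base: many candidate variables, of which only the
# question-relevant ones get pulled into an ad-hoc model. Every name,
# prior, and keyword here is invented for this sketch.
KNOWLEDGE_BASE = {
    "rain":       {"prior": 0.30, "keywords": {"rain", "wet", "grass"}},
    "sprinkler":  {"prior": 0.20, "keywords": {"sprinkler", "wet", "grass"}},
    "earthquake": {"prior": 0.01, "keywords": {"earthquake", "alarm"}},
    "burglary":   {"prior": 0.02, "keywords": {"burglary", "alarm"}},
}

def synthesize_model(question):
    """Keep only variables whose keywords overlap the question's words.

    A keyword match is a crude stand-in for the paper's
    problem-conditioned language model, which decides which variables
    and assumptions belong in the ad-hoc model.
    """
    words = set(re.findall(r"\w+", question.lower()))
    return {name: spec["prior"]
            for name, spec in KNOWLEDGE_BASE.items()
            if spec["keywords"] & words}

def grass_wet_likelihood(state, observed_wet):
    """Noisy-OR likelihood for the single observation 'the grass is wet'."""
    p_dry = 0.95                  # leak: grass is rarely wet for no reason
    if state.get("rain"):
        p_dry *= 0.10             # rain almost always wets the grass
    if state.get("sprinkler"):
        p_dry *= 0.20             # the sprinkler usually wets the grass
    return (1 - p_dry) if observed_wet else p_dry

def posterior(priors, likelihood, evidence):
    """Exact Bayesian update by enumerating the tiny model's joint.

    Brute-force enumeration is tractable only because the synthesized
    model contains a handful of variables, which is the point of
    building a small model in the first place.
    """
    names = list(priors)
    marginal = {n: 0.0 for n in names}
    total = 0.0
    for assignment in product([False, True], repeat=len(names)):
        state = dict(zip(names, assignment))
        p = 1.0
        for n in names:
            p *= priors[n] if state[n] else 1 - priors[n]
        p *= likelihood(state, evidence)
        total += p
        for n in names:
            if state[n]:
                marginal[n] += p
    return {n: marginal[n] / total for n in names}

priors = synthesize_model("Why is the grass wet: rain or the sprinkler?")
print(sorted(priors))   # ['rain', 'sprinkler'] -- earthquake, burglary excluded
print(posterior(priors, grass_wet_likelihood, evidence=True))
```

With the toy numbers above, observing wet grass raises P(rain) from 0.30 to about 0.66 and P(sprinkler) from 0.20 to about 0.41, while the irrelevant variables never enter the computation at all; the heavy lifting in the actual paper is done by the language-model-guided selection step, not the enumeration.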

Sources

What Is Man, That Thou Art Mindful Of Him?
Scott Alexander 2025.09.02 60% relevant
The dialogue stresses humans' limited 'context window' (seven ± two items) and the value of 'Thinking Mode' (scratchpads) for assembling a line of reasoning, which parallels the claim that effective cognition comes from building small task-specific models rather than reasoning over all knowledge at once.
Links for 2025-07-19
Alexander Kruel 2025.07.19 100% relevant
The arXiv paper 'Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models' linked in the post.