Selecting AGI’s General Goal

Updated: 2025.10.06
The article argues that truly general intelligence requires learning guided by a general objective, analogous to humans' hedonic reward system. If LLMs are extended with ongoing learning, the central challenge becomes which overarching goal their rewards should optimize. This reframes AI alignment as a concrete design decision, choosing the objective function, rather than only controlling model behavior after the fact.
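
To make the design decision concrete, here is a minimal sketch (not from the article) of a reward-driven learning loop in Python. The learner is a simple epsilon-greedy bandit; the point is that `reward_fn` is an explicit, swappable parameter, and swapping it changes what the same agent learns. The `hedonic` and `novelty` objectives are hypothetical stand-ins for candidate general goals.

```python
import random

def learn(reward_fn, n_actions=4, steps=2000, epsilon=0.1, lr=0.1):
    # Estimated value of each action, updated from reward alone.
    q = [0.0] * n_actions
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: q[i])
        r = reward_fn(a)             # the chosen general goal scores the action
        q[a] += lr * (r - q[a])      # incremental value update
    return q

# Two hypothetical candidate goals: same learner, different objectives.
def hedonic(a):
    # Reward grows with the action index (a stand-in "pleasure" signal).
    return random.gauss(a * 0.5, 1.0)

def novelty(a):
    # Reward shrinks with the action index (a stand-in "curiosity" signal).
    return random.gauss(2.0 - a * 0.5, 1.0)

print(learn(hedonic))  # learned values peak at the last action
print(learn(novelty))  # learned values peak at the first action
```

The same loop produces opposite behavior depending on the objective it is handed; the question the article raises is which reward function to give a far more capable learner.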

Sources

Artificial General Intelligence will likely require a general goal, but which one?
Lionel Page, 2025.10.06
Drawing on Richard Sutton's interview with Dwarkesh Patel, Lionel Page argues that LLMs lack a learning goal and that AGI will need a general reward, prompting the question: which one?