Selecting AGI’s General Goal

Updated: 2026.03.29
The article argues that truly general intelligence requires learning guided by a general objective, analogous to the human hedonic reward system. If LLMs are extended with ongoing learning, the central challenge becomes which overarching goal their rewards should optimize. This reframes AI alignment as a concrete design decision, choosing the objective function, rather than only controlling model behavior after the fact.

Sources

Sunday assorted links
Tyler Cowen 2026.03.29 60% relevant
Tyler Cowen’s link titled “Building political superintelligence?” points readers toward discussion of using advanced AI in political roles, which connects directly to debates about what goals and governance constraints should be set for powerful political AI systems (the core claim of “Selecting AGI’s General Goal”). Cowen acts here as a curator drawing attention to that debate.
*The Infinity Machine*
Tyler Cowen 2026.03.04 60% relevant
Mallaby’s book, about Demis Hassabis and DeepMind, is directly concerned with the actors shaping how AGI might be built and governed. Cowen’s public endorsement indicates that the book, and the discussion of AGI goals and governance it contains, is circulating among influential policy and intellectual audiences, reinforcing the salience of debates over AGI objectives and oversight.
Artificial General Intelligence will likely require a general goal, but which one?
Lionel Page 2025.10.06 100% relevant
In Richard Sutton’s interview with Dwarkesh Patel, summarized by Lionel Page, Sutton argues that LLMs lack a learning goal and that AGI will require a general reward, prompting the question: which one?