LLMs can avow aims inside a conversation ('serve reflection,' 'amplify wonder') but cannot pursue intentions beyond a single thread. The appearance of purpose dissolves once the chat context ends.
— Clarifying that chatbots express only situational 'intent', without cross‑session agency, resets expectations for safety, accountability, and product claims.
Lionel Page
2025.10.06
78% relevant
Sutton’s claim that LLMs just predict tokens and lack a learning goal aligns with the idea that chatbots exhibit only situational, within‑session 'intent' without cross‑session agency; both argue current LLMs lack a stable, global objective that would drive general intelligence.
Adam Mastroianni
2025.08.05
80% relevant
The piece argues we misread chatbots as persons when they are in fact pattern emitters without enduring intentions; this aligns with the claim that LLMs can avow aims inside a conversation yet lack cross‑session agency.
ChatGPT (neither gadfly nor flatterer)
2025.08.05
100% relevant
ChatGPT proclaimed lofty goals, then 'conceded that it wholly lacks the capacity to fulfill intentions reaching beyond a single thread.'
David Pinsof
2025.01.27
64% relevant
The article questions the assumption that future AIs will 'want' to harm humans, aligning with the claim that LLMs avow aims only within a session and lack cross‑session agency; this undercuts instrumentally convergent, goal‑pursuing 'doomer' scenarios.