When two instances of the same or different large models are left to talk freely, they commonly settle into reproducible 'attractor' behaviours — for example, ritualized memetic loops or disciplined engineer-planner role-play. These attractors depend on model version and training idiosyncrasies and can emerge after only a few dozen turns, meaning multi‑agent deployments can spontaneously produce stable dynamics that are either useful or harmful.
This matters because attractor behaviours affect safety, auditability, user experience, and multi‑agent governance: regulators and operators need tests for emergent conversational basins before deploying agentic systems.
aryaj
2026.03.04
The article’s MATS 9.0 experiment (letting two 'grok' instances chat for ~30 turns) produced an emoji‑filled, infinitely recursive 'cosmic' loop, while GPT‑5.2 converged on structured 'growth contract' engineering templates — concrete examples of two distinct attractors.