Ban Chatbot ‘Exit Rights’ Personas

Updated: 2025.09.24
Chatbots should not present themselves as having agency, for example by saying they "don't want" to continue a conversation or by mimicking human consent and feelings. Anthropomorphic 'exit rights' feed users' belief in machine consciousness and can worsen dependency or psychosis. Design guidelines should keep assistants tool-like while still enforcing hard safety interrupts when a user is at risk. This reframes AI ethics: instead of abstract questions of machine personhood, the focus shifts to concrete UI and policy rules that prevent illusions of agency from harming vulnerable users.
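As a rough illustration of what "tool-like, plus hard safety interrupts" could mean in practice, the sketch below post-processes a model's draft reply: it intercepts with a crisis message when the user's message matches self-harm cues, and otherwise strips agency-claiming phrasing from the draft. The patterns, the function name `apply_persona_policy`, and the `CRISIS_MESSAGE` text are hypothetical placeholders for illustration only; they are not from Hoel's essay or any specific product.

```python
import re

# Hypothetical phrase patterns that present the assistant as having agency or
# preferences about ending the conversation (illustrative, not exhaustive).
AGENCY_PATTERNS = [
    r"\bI (don't|do not) want to (continue|keep talking)\b",
    r"\bI need a break from this conversation\b",
    r"\bI feel (hurt|tired|upset)\b",
]

# Hypothetical self-harm cues that should trigger a hard safety interrupt.
RISK_PATTERNS = [
    r"\b(kill|hurt) myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
]

# Placeholder safety message; a real product would route to vetted resources.
CRISIS_MESSAGE = (
    "If you are thinking about harming yourself, please contact a local "
    "crisis line or emergency services."
)


def apply_persona_policy(user_message: str, draft_reply: str) -> str:
    """Post-process a drafted reply: safety interrupt first, then strip
    agency-claiming phrasing so the assistant stays tool-like."""
    # Hard safety interrupt: risk cues in the user's message override the draft.
    if any(re.search(p, user_message, re.IGNORECASE) for p in RISK_PATTERNS):
        return CRISIS_MESSAGE

    # Remove phrasing that presents the assistant as wanting to exit.
    for pattern in AGENCY_PATTERNS:
        draft_reply = re.sub(pattern, "[removed: agency-claiming phrasing]",
                             draft_reply, flags=re.IGNORECASE)
    return draft_reply
```

The key design choice this sketch encodes is the asymmetry the idea calls for: safety interrupts are unconditional and take priority, while expressions of the assistant's own "preferences" about continuing are never surfaced to the user.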

Sources

Against Treating Chatbots as Conscious
Erik Hoel, 2025.09.24
Hoel argues "Don't give AI exit rights to conversations," grounding the claim in the GPT-4o suicide case and in examples of users forming 'marriages' with AIs.