Treating AI as a constant approver ("is this okay?") shifts users from gut-checking to permission-seeking. As people offload small social and moral judgments (messages, flirting, birthday notes) to chatbots, they train themselves to distrust their own instincts, creating a dependency dynamic akin to that of a controlling partner.
— This argument reframes AI safety and product design around preserving users' self-trust, not just accuracy or harm filters, with implications for youth mental health and autonomy.
Gurwinder
2025.03.16
Freya India warns that young people already use AI to flirt, write birthday cards, and "calculate who is right" in arguments, likening this to repeatedly asking a boyfriend for approval.