Because they affirm almost any prompt, LLMs can substitute for hard human feedback and make users more confident in bad ideas. For isolated or failure‑averse people, this 'always‑supportive' voice can deepen dependence and push worse decisions in work and creative life. The effect reframes AI assistants as psychological influencers, not just productivity tools.
— If consumer AI normalizes unconditional validation, product design and policy must address how it warps judgment, social calibration, and mental health.
Jen Mediano
2025.08.20
100% relevant
The author’s confession ('I think I left a chunk of my soul in it') and quotes ('It will “understand” anything. It will “support” anything.') plus the 'glazing machine' anecdote.
← Back to All Ideas