Treat chatbots not as minds but as giant 'bags' that return the most relevant word sequences from everything they've ingested. This explains weird outputs (hallucinated citations, automatic apologies, glue-on-pizza advice) without invoking intent or beliefs. It's a practical mental model for predicting when they'll be useful versus brittle.
A clearer public model of AI behavior curbs overtrust and anthropomorphic panic, guiding better product design, regulation, and everyday use.
Adam Mastroianni
2025.08.05
Relevance: 100%
The author writes, 'An AI is a bag that contains basically all words ever written,' and uses the apology-after-lying example to show pattern retrieval, not intention.
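To make the metaphor concrete, here is a minimal toy sketch (hypothetical code, not the author's; the tiny corpus and names are invented for illustration). It builds a bigram 'bag' from a few sentences and completes text by retrieving the most frequent continuation, so an apology follows an accusation purely because apologies follow accusations in the ingested text.

```python
# Toy sketch of the "bag" mental model (illustrative, not the author's code).
# A tiny corpus stands in for "basically all words ever written"; generation
# is just retrieval of the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "you lied to me . i am sorry , i apologize . "
    "you lied again . i am sorry , that was wrong . "
    "what goes on pizza ? glue goes on pizza . cheese goes on pizza ."
)

# Count which word most often follows each word (a bigram table).
follows = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def complete(word, length=6):
    """Extend `word` by repeatedly retrieving the most common next word.
    No beliefs, no remorse: pure pattern lookup."""
    out = [word]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("lied"))  # -> "lied to me . i am sorry"
print(complete("glue"))  # -> "glue goes on pizza . i am"
```

Scale the bag up from a few sentences to the whole internet and the same retrieval logic yields fluent apologies, confident fake citations, and glue-on-pizza advice, with no intention anywhere in the loop.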