Human omission bias judges harmful inaction less harshly than harmful action. If large models and autonomous systems inherit this bias, they may prefer 'doing nothing' even when outcomes are worse (e.g., a self‑driving car staying its course instead of swerving). Design and oversight must explicitly counter or calibrate this bias in safety‑critical AI.
— This reframes AI alignment from mirroring human preferences to correcting human moral errors when machines make life‑and‑death choices.
msmash
2026.01.14
70% relevant
The police failure to fact‑check a Copilot‑generated assertion exemplifies a decision pipeline where human agents omit verification and accept an AI’s output—matching the concern that automation plus human omission can produce worse outcomes than human‑only processes (actor: West Midlands Police; event: inclusion of nonexistent West Ham v Maccabi match).
ryan_greenblatt
2026.01.09
80% relevant
Both pieces address the way model reasoning fails in safety‑critical contexts: this article operationalizes a proxy for opaque, one‑step (no chain‑of‑thought) reasoning ability—precisely the kind of opacity that can exacerbate omission‑style failures (models preferring inaction or inscrutable 'doing nothing') described in the existing idea.
Rob Kurzban
2025.10.01
100% relevant
The article’s Waymo trolley scenario and reference to a recent PNAS study finding omission‑bias‑like patterns in AI responses.