Public and policy debates often hold AI to a standard of near‑perfect behavior while excusing the same rare mistakes by humans. That asymmetry—illustrated by asking whether autonomous cars "cause any accidents" rather than whether they cause more or fewer accidents than humans—distorts risk assessment, slows rational adoption, and pushes regulation toward unrealistic demands.
If policymakers and the public demand machine infallibility while granting humans leniency, regulations and adoption choices will be miscalibrated, with consequences for safety, innovation, and trust.
Arnold Kling
2026.05.12
From Mark McNeilly’s piece: “When AI hallucinates, we say, ‘See. You can’t trust AI.’ Yet we don’t do that with humans... Instead of asking, ‘Who causes more accidents, self‑driving cars or human drivers?’ people ask, ‘Do self‑driving cars cause any accidents?’ ”