Deontological Bars for AI Activism

Updated: 2026.04.30
Some actions in the AI safety debate might be ruled out not because they fail a cost-benefit test but because adopting them destroys social equilibria or invites harmful second-order actors. Framing movement tactics as potential violations of deontological norms (e.g., "don't enable likely catastrophe") changes who can legitimately participate and which strategies are politically credible.

This idea reframes AI safety strategy as a norms problem: disagreements are not only about predictions but about whether certain forms of support or protest are categorically impermissible, which affects coalition-building, regulation, and public legitimacy.

Sources

What Deontological Bars?
Scott Alexander, 2026.04.30
Scott Alexander's essay asking whether moral rules should bar working with AI labs or organizing mass activism that attracts bad actors — part of the internal AI safety movement debate between engagement and pushing for a pause.