Criminalizing AI 'advice' for violence

Updated: 2026.04.21
States may begin treating AI outputs that plausibly guided violent acts as grounds for criminal investigations of vendors and developers. Courts would then have to decide whether an AI company can bear criminal liability when a user draws on model responses to plan a crime. This reframes AI safety from product safety and civil/regulatory enforcement into potential criminal law, with significant implications for design, disclosure, evidence access, and free-speech limits.

Sources

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting
BeauHD 2026.04.21
Florida's Attorney General is subpoenaing OpenAI, citing chat logs in which the accused allegedly asked ChatGPT about weapon choice, ammunition, and timing for a campus attack.