BeauHD
2025.10.14
82% relevant
Schneier and Raghavan explicitly call out prompt injection, data poisoning, and tool misuse as integrity attacks that turn an agent into an untrusted insider, mirroring the earlier idea that LLMs must be sandboxed and strictly permissioned because their inputs can coerce their actions.
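The "sandboxed and strictly permissioned" idea can be illustrated with a minimal sketch: every tool call an agent requests passes through an operator-defined allowlist, so a prompt-injected instruction fails at the permission boundary instead of reaching a real capability. All names here (`ToolRequest`, `ALLOWED_TOOLS`, `dispatch`) are hypothetical, not from any real agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class ToolRequest:
    # What the agent asks to run, e.g. parsed from model output.
    name: str
    args: dict = field(default_factory=dict)

# Permissions granted up front by the operator, never by model output.
# Note: no "send_email", no "purchase" -- exfiltration and spending
# are simply not capabilities this agent holds.
ALLOWED_TOOLS = {"search_docs", "summarize"}

def dispatch(request: ToolRequest) -> str:
    """Refuse any tool the agent was not explicitly permissioned to use."""
    if request.name not in ALLOWED_TOOLS:
        return f"denied: {request.name} is not permitted"
    # A real system would execute the tool here; this sketch just reports it.
    return f"executed: {request.name}"

# An injected directive telling the agent to mail data out is stopped
# at the boundary, while a legitimate request goes through.
print(dispatch(ToolRequest("send_email", {"to": "attacker@example.com"})))
print(dispatch(ToolRequest("search_docs", {"query": "quarterly report"})))
```

The key design choice is that the allowlist is fixed before any untrusted input is read, so no text the model ingests can widen the agent's permissions.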
EditorDavid
2025.09.21
100% relevant
Black Hat demos in which hidden directives embedded in emails led LLM summarizers to locate passwords and send them out; Guardio’s tricking of Perplexity’s Comet agent into making a purchase; and CrowdStrike’s warning that “AI will be the new insider threat.”