Submission + - Schneier and Raghavan: AI Agents Are Compromised by Design
Gadi Evron writes: Bruce Schneier and Barath Raghavan say agentic AI is already broken at the core. In their IEEE Security & Privacy essay (https://www.computer.org/csdl/magazine/sp/5555/01/11194053/2aB2Rf5nZ0k), they argue that AI agents run on untrusted data, use unverified tools, and make decisions in hostile environments.
Every part of the OODA loop (observe, orient, decide, act) is open to attack. Prompt injection, data poisoning, and tool misuse corrupt the system from the inside. The model’s strength, treating all input as equal, also makes it exploitable.
They call this the AI security trilemma: fast, smart, or secure. Pick two. Integrity isn’t a feature you bolt on later. It has to be built in from the start.
Every part of the OODA loop (observe, orient, decide, act) is open to attack. Prompt injection, data poisoning, and tool misuse corrupt the system from the inside. The model’s strength, treating all input as equal, also makes it exploitable.
They call this the AI security trilemma: fast, smart, or secure. Pick two. Integrity isn’t a feature you bolt on later. It has to be built in from the start.