How Not to Detect Prompt Injections with an LLM

Published in AISec 2025 (18th ACM Workshop on Artificial Intelligence and Security), 2025

LLM-integrated applications and agents are vulnerable to prompt injection attacks, in which adversaries embed malicious instructions within seemingly benign user inputs to manipulate the LLM’s intended behavior. Recent defenses based on known-answer detection (KAD) have achieved near-perfect performance by using an LLM to classify inputs as clean or contaminated. In this work, we formally characterize the KAD framework and uncover a structural vulnerability in its design that invalidates its core security premise. We design a methodical adaptive attack, DataFlip, to exploit this fundamental weakness. It consistently evades KAD defenses with detection rates as low as 1.5% while reliably inducing malicious behavior with success rates of up to 88%, without needing white-box access to the LLM or any optimization procedures.

[Paper] [GitHub]