RedVisor: Reasoning-Aware Prompt Injection Defense via Zero-Copy KV Cache Reuse
Large Language Models (LLMs) are increasingly vulnerable to Prompt Injection (PI) attacks, where adversarial instructions hidden within retrieved contexts hijack the model’s execution flow. Current defenses typically face a critical trade-off: prevention-based fine-tuning often degrades general utility via the “alignment tax”, while detection-based filtering incurs prohibitive latency and memory costs. To bridge this gap, we propose RedVisor, a unified framework that synthesizes the explainability of detection systems with the seamless integration of prevention strategies. To the best of our knowledge, RedVisor is the first approach to leverage fine-grained reasoning paths to simultaneously detect attacks and guide the model’s safe response. We implement this via a lightweight, removable adapter positioned atop the frozen backbone. This adapter serves a dual function: it first generates an explainable analysis that precisely localizes the injection and articulates the threat, which then explicitly conditions the model to reject the malicious command. Uniquely, the adapter is active only during this reasoning phase and is effectively muted during the subsequent response generation. This architecture yields two distinct advantages: (1) it mathematically preserves the backbone’s original utility on benign inputs; and (2) it enables a novel KV Cache Reuse strategy, eliminating the redundant prefill computation inherent to decoupled pipelines. We further pioneer the integration of this defense into the vLLM serving engine with custom kernels. Experiments demonstrate that RedVisor outperforms state-of-the-art defenses in detection accuracy and throughput while incurring negligible utility loss.
💡 Research Summary
RedVisor addresses the growing vulnerability of large language models (LLMs) to Prompt Injection (PI) attacks, where malicious instructions are hidden within retrieved contexts and hijack the model’s execution flow. Existing defenses fall into two categories: prevention‑based methods that fine‑tune or augment the backbone but incur a substantial “alignment tax” (utility degradation), and detection‑based methods that employ external classifiers or secondary LLMs, which double memory usage and introduce latency spikes. The paper proposes a unified two‑phase framework that combines the explainability of detection with the seamless integration of prevention, without sacrificing the original utility of the frozen backbone.
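The KV-cache-reuse idea behind the two-phase design can be illustrated with a toy sketch. This is not the paper's implementation (the names `ToyKVCache`, `prefill`, and `two_phase_generate` are illustrative): the point is that because the backbone stays frozen, the KV cache built while prefilling the context for Phase 1 (inspection) remains valid for Phase 2 (response), so the context is prefilled exactly once.

```python
# Conceptual sketch of zero-copy KV cache reuse across the two phases.
# All names here are hypothetical stand-ins, not the paper's API.

class ToyKVCache:
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)


def prefill(tokens, cache):
    # Stand-in for the backbone's prefill: one (key, value) pair per token.
    for t in tokens:
        cache.append(("k", t), ("v", t))
    return cache


def two_phase_generate(context_tokens):
    cache = ToyKVCache()
    prefill(context_tokens, cache)            # Phase 1: single prefill of the context
    reasoning_tokens = ["<reason>", "</reason>"]  # adapter-guided trace (toy)
    prefill(reasoning_tokens, cache)          # reasoning tokens simply extend the cache
    # Phase 2 decodes against `cache` directly; the context is never re-prefilled,
    # unlike a decoupled detector + generator pipeline, which prefills it twice.
    prefills_over_context = 1
    return cache, prefills_over_context
```

A decoupled detection pipeline would instead run two independent models over the same context, doubling the prefill work that this sketch performs once.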
The core technical contribution is a lightweight, removable adapter placed exclusively at the top layer of the frozen LLM. During Phase 1 (Inspection), the adapter is activated and receives an input prefixed with a system‑style security directive (I_sys). It generates a fine‑grained reasoning trace R that localizes any injected adversarial command, explains why it is malicious, and issues a “reject” signal. The adapter implements a gated parallel self‑attention mechanism: a multi‑head attention block captures long‑range dependencies, while a small gating network (GateNet) computes a token‑wise scalar gate α ∈ [0, 1] that scales the adapter’s output before it is added back to the backbone’s hidden states, so the adapter can be fully muted when it is not needed.
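The gated parallel adapter can be sketched in a few lines of numpy. This is a single-head simplification under stated assumptions (the paper uses multi-head attention, and the exact form of GateNet is not given here): a sigmoid gate α per token scales an attention branch that is added to the frozen layer's hidden states, so a gate driven to zero recovers the backbone's output exactly, which is the mechanism behind the utility-preservation claim.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_adapter(h, Wq, Wk, Wv, gate_w, gate_b):
    """Single-head sketch of a gated parallel self-attention adapter.

    h: (T, d) hidden states from the frozen top layer.
    Returns h plus an attention branch scaled by a token-wise
    sigmoid gate alpha in (0, 1); alpha -> 0 mutes the adapter,
    leaving the backbone's output untouched.
    """
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(h.shape[-1])) @ v   # (T, d) adapter branch
    alpha = 1.0 / (1.0 + np.exp(-(h @ gate_w + gate_b)))  # (T, 1) token-wise gate
    return h + alpha * attn
```

Driving the gate bias strongly negative (α ≈ 0) makes the output numerically equal to the input hidden states, mirroring how the adapter is "effectively muted" outside the reasoning phase.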