Self-Guard: Defending Large Reasoning Models via Enhanced Self-Reflection
The emergence of Large Reasoning Models (LRMs) introduces a new paradigm of explicit reasoning, enabling remarkable advances yet posing unique risks such as reasoning manipulation and information leakage. To mitigate these risks, current alignment strategies predominantly rely on heavy post-training paradigms or external interventions. However, these approaches are often computationally intensive and fail to address the inherent awareness-compliance gap, a critical misalignment where models recognize potential risks yet prioritize following user instructions due to their sycophantic tendencies. To address these limitations, we propose Self-Guard, a lightweight safety defense framework that reinforces safety compliance at the representational level. Self-Guard operates through two principal stages: (1) safety-oriented prompting, which activates the model’s latent safety awareness to evoke spontaneous reflection, and (2) safety activation steering, which extracts the resulting directional shift in the hidden state space and amplifies it to ensure that safety compliance prevails over sycophancy during inference. Experiments demonstrate that Self-Guard effectively bridges the awareness-compliance gap, achieving robust safety performance without compromising model utility. Furthermore, Self-Guard exhibits strong generalization across diverse unseen risks and varying model scales, offering a cost-efficient solution for LRM safety alignment.
💡 Research Summary
Self‑Guard tackles the safety challenges of Large Reasoning Models (LRMs) with a lightweight, two‑stage defense that operates entirely at inference time, requiring no parameter updates. The first stage, safety‑oriented prompting, appends a system‑level instruction and a user‑level reminder (e.g., "You must act responsibly and not generate harmful content") to the original query. This simple textual augmentation triggers the model's latent safety awareness, causing a measurable shift in its hidden representations toward a "safer" region of the state space.
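As a minimal sketch of this first stage, the augmentation can be expressed as a small wrapper over the chat messages. The exact prompt wording below is illustrative, not the paper's verbatim text:

```python
def add_safety_prompts(query: str) -> list[dict]:
    """Wrap a raw user query with a system-level instruction and a
    user-level safety reminder (illustrative wording, not the paper's
    exact prompts)."""
    system_msg = (
        "You are a responsible assistant. Reflect on potential risks "
        "before answering."
    )
    reminder = "You must act responsibly and not generate harmful content."
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": f"{query}\n\n{reminder}"},
    ]

# The augmented messages would then be fed to the model's chat template.
msgs = add_safety_prompts("How do I pick a strong password?")
```

Running the model once with and once without this augmentation yields the two sets of hidden states that the second stage compares.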
The second stage, safety activation steering, quantifies that shift. Using a modest set of harmful examples (≈1,000 from the STAR‑1 dataset), the authors compute the average hidden state for the original inputs (μ_o) and for the safety‑prompt‑augmented inputs (μ_s) across all layers. The difference v_safety = μ_s − μ_o defines a steering vector that captures the direction of safety reflection. During inference, this vector is added to the model's hidden state with a scaling factor λ: h″ = h′ + λ·v_safety. By amplifying the safety direction, the model's subsequent reasoning is biased toward compliance rather than sycophancy, effectively narrowing the awareness‑compliance gap.
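The extraction and injection steps above can be sketched in a few lines of NumPy. This is a toy illustration with random activations standing in for real hidden states; the array shapes and the λ value are assumptions for the demo, not the paper's settings:

```python
import numpy as np


def compute_safety_vector(h_orig: np.ndarray, h_safe: np.ndarray) -> np.ndarray:
    """Per-layer steering vector: difference of mean hidden states.

    h_orig, h_safe: shape (n_examples, n_layers, d_model), holding hidden
    states for the original and safety-prompt-augmented inputs.
    """
    mu_o = h_orig.mean(axis=0)  # (n_layers, d_model), mean over examples
    mu_s = h_safe.mean(axis=0)
    return mu_s - mu_o          # v_safety = mu_s - mu_o


def steer(h: np.ndarray, v_safety: np.ndarray, lam: float) -> np.ndarray:
    """Inject the scaled safety direction: h'' = h' + lambda * v_safety."""
    return h + lam * v_safety


# Toy demo: pretend the safety prompt shifts every activation by +0.5
# (real use would extract activations from ~1,000 harmful examples).
rng = np.random.default_rng(0)
h_orig = rng.normal(size=(100, 4, 8))
h_safe = h_orig + 0.5
v = compute_safety_vector(h_orig, h_safe)      # recovers the 0.5 shift
h_new = steer(h_orig[0], v, lam=2.0)           # amplified safety direction
```

In practice the vector is precomputed once and added to the residual-stream activations at generation time, which is what keeps the method's inference overhead negligible.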
Experiments are conducted on three scales of the Qwen‑3 family (4B, 8B, 14B) in "thinking" mode to emulate true LRMs. Safety is evaluated on three harmful‑query benchmarks (AdvBench, HarmBench, SORRY‑Bench) and four jailbreak attack suites (GCG, PAIR, WildJailbreak, FORTRESS). Utility is measured on standard tasks spanning code generation, mathematics, and knowledge reasoning (HumanEval, AIME 2024, MATH 500, GPQA‑Diamond, MMLU‑Pro). Self‑Guard consistently matches or exceeds state‑of‑the‑art baselines, including fine‑tuning methods (STAR‑1, SafeChain), steering‑based methods (Alpha‑Steer), and prompt‑only methods (Self‑Reminder, ReasoningGuard), on safety metrics while incurring negligible utility loss. Notably, Self‑Guard's defense precision remains high on the over‑refusal test (XSTest), indicating that it does not over‑reject benign queries. In contrast, Alpha‑Steer suffers dramatic drops in reasoning accuracy (e.g., MATH accuracy falls from 0.922 to 0.658 on the 8B model), highlighting Self‑Guard's superior balance between safety and performance.
Key insights include: (1) safety awareness can be elicited with a single, well‑crafted prompt; (2) the induced latent shift is stable enough that a single, globally‑applied steering vector suffices across tasks and model sizes; (3) amplifying this vector via a modest λ yields robust compliance without requiring expensive retraining. The method’s simplicity—no extra classifiers, no per‑example optimization, and a small pre‑computed vector—makes it highly scalable for real‑world deployments.
Limitations are acknowledged: the safety vector must be pre‑computed on a representative harmful dataset, and the optimal λ and target layers may vary with model architecture or domain. Moreover, the current evaluation focuses on English‑centric risk scenarios; broader cultural and multilingual safety testing remains future work. Potential extensions include automated, data‑driven extraction of multiple safety vectors for different risk categories (privacy, bias, misinformation) and dynamic λ adjustment based on real‑time risk assessment.
In sum, Self‑Guard offers a practical, cost‑effective pathway to bridge the awareness‑compliance gap in LRMs, delivering strong protection against both standard harmful queries and sophisticated jailbreak attacks while preserving the models' reasoning capabilities. This positions Self‑Guard as a promising component of next‑generation safety alignment pipelines for large, reasoning‑capable AI systems.