Chain-of-Thought Hijacking
Reading time: 1 minute
...
📝 Original Info
- Title: Chain-of-Thought Hijacking
- ArXiv ID: 2510.26418
- Date: 2025-10-30
- Authors: ** 논문에 명시된 저자 정보가 제공되지 않았습니다. (저자 명단이 필요하면 원문을 확인해 주세요.) **
📝 Abstract
Large Reasoning Models (LRMs) improve task performance through extended inference-time reasoning. While prior work suggests this should strengthen safety, we find evidence to the contrary. Long reasoning sequences can be exploited to systematically weaken them. We introduce Chain-of-Thought Hijacking, a jailbreak attack that prepends harmful instructions with extended sequences of benign puzzle reasoning. Across HarmBench, CoT Hijacking achieves attack success rates of 99\%, 94\%, 100\%, and 94\% on Gemini 2.5 Pro, ChatGPT o4 Mini, Grok 3 Mini, and Claude 4 Sonnet. To understand this mechanism, we apply activation probing, attention analysis, and causal interventions. We find that refusal depends on a low-dimensional safety signal that becomes diluted as reasoning grows: mid-layers encode the strength of safety checking, while late layers encode the refusal outcome. These findings demonstrate that explicit chain-of-thought reasoning introduces a systematic vulnerability when combined with answer-prompting cues. We release all evaluation materials to facilitate replication.💡 Deep Analysis
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.