Cost-Minimized Label-Flipping Poisoning Attack to LLM Alignment
Reading time: 2 minutes
...
📝 Original Info
- Title: Cost-Minimized Label-Flipping Poisoning Attack to LLM Alignment
- ArXiv ID: 2511.09105
- Date: 2025-11-12
- Authors: **Not available (no author information provided in the paper)**
📝 Abstract
Large language models (LLMs) are increasingly deployed in real-world systems, making it critical to understand their vulnerabilities. While data poisoning attacks during RLHF/DPO alignment have been studied empirically, their theoretical foundations remain unclear. We investigate the minimum-cost poisoning attack required to steer an LLM's policy toward an attacker's target by flipping preference labels during RLHF/DPO, without altering the compared outputs. We formulate this as a convex optimization problem with linear constraints, deriving lower and upper bounds on the minimum attack cost. As a byproduct of this theoretical analysis, we show that any existing label-flipping attack can be post-processed via our proposed method to reduce the number of label flips required while preserving the intended poisoning effect. Empirical results demonstrate that this cost-minimization post-processing can significantly reduce poisoning costs over baselines, particularly when the reward model's feature dimension is small relative to the dataset size. These findings highlight fundamental vulnerabilities in RLHF/DPO pipelines and provide tools to evaluate their robustness against low-cost poisoning attacks.
💡 Deep Analysis
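The abstract frames the attack as a convex program with linear constraints: choose which preference labels to flip so the aligned model moves toward the attacker's target at minimum cost. The sketch below is only a rough illustration of that idea, not the paper's actual formulation: it assumes a linear reward model, relaxes the binary flip decisions to a linear program, and uses hypothetical names (`delta_phi`, `theta_target`, `tau`) that do not come from the paper.

```python
# Illustrative sketch only: an LP relaxation of minimum-cost label flipping.
# Assumes a linear reward model r(x) = theta^T phi(x); delta_phi[i] is the
# feature difference phi(chosen_i) - phi(rejected_i) for preference pair i.
# Variable names and the specific constraint are hypothetical, not the paper's.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, d = 200, 16                        # number of preference pairs, feature dimension
delta_phi = rng.normal(size=(n, d))   # hypothetical per-pair feature differences
theta_target = rng.normal(size=d)     # hypothetical attacker target direction

# Alignment of each *clean* label with the attacker's target direction.
align = delta_phi @ theta_target      # shape (n,)

# Hypothetical margin the attacker wants; chosen so that some flips are needed.
tau = 0.5 * np.abs(align).sum()

# Flipping pair i changes its contribution from align[i] to -align[i].
# Require  sum_i (1 - 2 z_i) * align[i] >= tau  with flip variables z_i in [0, 1],
# which rearranges to the linear constraint  2 * align @ z <= sum(align) - tau.
c = np.ones(n)                        # objective: minimize the number of flips
A_ub = 2 * align[np.newaxis, :]
b_ub = np.array([align.sum() - tau])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * n, method="highs")

# Round the relaxed solution to a concrete flip set (a heuristic for illustration).
flips = np.where(res.x > 0.5)[0]
print(f"LP relaxation value: {res.fun:.2f}, rounded flip count: {len(flips)}")
```

In this toy version, the LP naturally spends its flips on the pairs whose clean labels disagree most with the target direction, which is the intuition behind the paper's claim that an existing label-flipping attack can be post-processed to use fewer flips while preserving the intended poisoning effect.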
📄 Full Content
Reference
This content was AI-processed from open-access ArXiv data.