CAPO: Confidence Aware Preference Optimization Learning for Multilingual Preferences

Reading time: 1 minute

📝 Original Info

  • Title: CAPO: Confidence Aware Preference Optimization Learning for Multilingual Preferences
  • ArXiv ID: 2511.07691
  • Date: 2025-11-10
  • Authors: Not listed in the provided information.

📝 Abstract

Preference optimization is a critical post-training technique used to align large language models (LLMs) with human preferences, typically by fine-tuning on ranked response pairs. While methods like Direct Preference Optimization (DPO) have proven effective in English, they often fail to generalize robustly to multilingual settings. We propose a simple yet effective alternative, Confidence-Aware Preference Optimization (CAPO), which replaces DPO's fixed treatment of preference pairs with a dynamic loss scaling mechanism based on a relative reward. By modulating the learning signal according to the confidence in each preference pair, CAPO enhances robustness to noisy or low-margin comparisons, typically encountered in multilingual text. Empirically, CAPO outperforms existing preference optimization baselines by at least 16% in reward accuracy, and improves alignment by widening the gap between preferred and dispreferred responses across languages.
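The abstract describes CAPO as replacing DPO's fixed treatment of preference pairs with a loss scaled by a confidence derived from the relative reward margin. The paper's exact scaling function is not given here, so the sketch below is only illustrative: it uses the standard DPO implicit-reward margin and, as an assumed confidence weight, the sigmoid of that margin, so low-margin (noisy) pairs contribute a weaker learning signal. The function names and the `beta` default are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss: -log sigmoid of the implicit reward margin."""
    # Implicit reward margin under the DPO parameterization:
    # beta * [(log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l))]
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin))

def capo_loss(logp_w: float, logp_l: float,
              ref_logp_w: float, ref_logp_l: float,
              beta: float = 0.1) -> float:
    """Illustrative confidence-aware variant: scale the DPO loss by an
    assumed confidence weight in (0, 1) based on the relative reward margin."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Hypothetical choice of confidence; the paper's actual function may differ.
    confidence = sigmoid(margin)
    return -confidence * math.log(sigmoid(margin))
```

Because the confidence weight lies in (0, 1), the CAPO-style loss is always no larger than the corresponding DPO loss for the same pair, and the down-weighting is strongest exactly when the margin is small or negative, i.e. when the comparison is least reliable.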

💡 Deep Analysis

📄 Full Content

Reference

This content is AI-processed based on open access ArXiv data.
