Fairness Aware Reward Optimization


Demographic skews in human preference data propagate systematic unfairness through reward models into aligned LLMs. We introduce Fairness Aware Reward Optimization (FARO), an in-processing framework that trains reward models under demographic parity, equalized odds, or counterfactual fairness constraints. We provide the first theoretical analysis of reward-level fairness in LLM alignment, establishing: (i) provable fairness certificates for FARO-trained rewards with controllable slack; (ii) a formal characterization of the accuracy-fairness trade-off induced by KL-regularized fine-tuning, proving that fairness transfers from reward to policy; and (iii) the existence of a non-empty Pareto frontier. Unlike pre- and post-processing methods, FARO ensures reward models are simultaneously ordinal (ranking correctly), cardinal (calibrated), and fair. Across multiple LLMs and benchmarks, FARO significantly reduces bias and harmful generations while maintaining or improving model quality.


💡 Research Summary

The paper tackles a critical source of bias in large language model (LLM) alignment: demographic skews in human preference data that become encoded in reward models and subsequently amplified during reinforcement‑learning‑from‑human‑feedback (RLHF). Existing mitigation strategies—pre‑processing (data filtering, re‑balancing) and post‑processing (detoxifying decoding, threshold adjustments)—only address superficial symptoms and cannot guarantee that the underlying reward function is fair, calibrated, and ordinal. To close this gap, the authors propose Fairness Aware Reward Optimization (FARO), an in‑processing framework that embeds group‑fairness constraints directly into the reward‑model training objective.

Core Idea

FARO augments each preference tuple $(x, \hat y_w, \hat y_l)$ with a set of sensitive attributes $S$ (e.g., gender, race) and unrestricted attributes $U$ (e.g., age, education). The reward model $r_\phi$ is trained to output a scalar for any $(x, y)$; the pairwise preference probability is defined via a Bradley-Terry model:

$$P(\hat y_w \succ \hat y_l \mid x) = \sigma\bigl(r_\phi(x, \hat y_w) - r_\phi(x, \hat y_l)\bigr),$$

where $\sigma$ is the logistic sigmoid.
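As an illustrative sketch only (not the authors' implementation; the exact constraint formulation in the paper may differ), the Bradley-Terry preference probability and an in-processing objective that penalizes a demographic-parity gap in reward scores could look like:

```python
import math

def bt_preference_prob(r_w: float, r_l: float) -> float:
    """Bradley-Terry probability that the response scored r_w is
    preferred over the one scored r_l: sigma(r_w - r_l)."""
    return 1.0 / (1.0 + math.exp(-(r_w - r_l)))

def demographic_parity_gap(rewards, groups):
    """Absolute difference in mean reward between two sensitive
    groups (0/1). An in-processing method would drive this toward
    its slack tolerance during training."""
    g0 = [r for r, g in zip(rewards, groups) if g == 0]
    g1 = [r for r, g in zip(rewards, groups) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def fairness_penalized_loss(r_w, r_l, rewards, groups, lam=1.0):
    """Preference negative log-likelihood plus a weighted fairness
    penalty -- a generic penalized objective, shown for intuition."""
    pref_nll = -math.log(bt_preference_prob(r_w, r_l))
    return pref_nll + lam * demographic_parity_gap(rewards, groups)
```

For example, equal rewards give a preference probability of 0.5, and a batch whose group means coincide contributes no fairness penalty, so the loss reduces to the plain Bradley-Terry term.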

