LEASE: Offline Preference-based Reinforcement Learning with High Sample Efficiency
Offline preference-based reinforcement learning (PbRL) provides an effective way to overcome the challenges of reward design and the high cost of online interaction. However, because labeling preferences requires real-time human feedback, acquiring sufficient preference labels is challenging. To address this, this paper proposes an offLine prEference-bAsed RL with high Sample Efficiency (LEASE) algorithm, in which a learned transition model is leveraged to generate unlabeled preference data. Since the pretrained reward model may assign incorrect labels to unlabeled data, we design an uncertainty-aware mechanism to safeguard the reward model's performance, retaining only high-confidence, low-variance data. Moreover, we derive a generalization bound for the reward model to analyze the factors influencing reward accuracy, and demonstrate that the policy learned by LEASE has a theoretical improvement guarantee. The developed theory is based on state-action pairs and can be easily combined with other offline algorithms. Experimental results show that LEASE achieves performance comparable to baselines with fewer preference labels and without online interaction.
💡 Research Summary
The paper introduces LEASE (Offline prEference‑bAsed RL with high Sample Efficiency), a novel framework for offline preference‑based reinforcement learning (PbRL) that dramatically reduces the amount of human‑generated preference data required while preserving strong performance guarantees. Offline PbRL traditionally suffers from two major drawbacks: (1) collecting preference labels is costly because it demands real‑time human feedback, and (2) with limited labeled data the learned reward model can be noisy, leading to unstable policy learning.
LEASE tackles these issues by leveraging a learned transition model to generate a large pool of unlabeled trajectory pairs and by employing an uncertainty‑aware selection mechanism to filter the generated data before it is used to update the reward model. Concretely, the method first trains a dynamics model on the available offline dataset (states, actions, next‑states). Using this model, it rolls out multiple trajectories, producing many candidate trajectory pairs that lack human preference labels. A pretrained reward model (trained on the small set of real preference data) then assigns pseudo‑labels to these pairs. Because the reward model may produce erroneous labels, LEASE constructs an ensemble of reward networks and computes both the mean prediction (confidence) and variance (uncertainty) for each candidate pair. Only pairs with high mean confidence and low variance are retained for further training, effectively creating a clean, expanded preference dataset.
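The ensemble-based selection step described above can be sketched in a few lines. The function below is an illustrative reconstruction, not the paper's implementation: the threshold values and the array layout (one preference probability per ensemble member per candidate pair) are assumptions for the sake of the example.

```python
import numpy as np

def pseudo_label_and_filter(probs, conf_thresh=0.9, var_thresh=0.01):
    """Select confidently pseudo-labeled trajectory pairs.

    probs : (n_pairs, n_ensemble) array, where probs[i, k] is ensemble
    member k's predicted probability that segment A of pair i is preferred
    over segment B. Thresholds here are illustrative, not the paper's.
    """
    mean = probs.mean(axis=1)              # ensemble mean prediction
    var = probs.var(axis=1)                # ensemble variance (uncertainty)
    conf = np.maximum(mean, 1.0 - mean)    # confidence in the implied label
    keep = (conf >= conf_thresh) & (var <= var_thresh)
    labels = (mean < 0.5).astype(int)      # 0 -> A preferred, 1 -> B preferred
    return keep, labels

# Toy demonstration with three candidate pairs:
probs = np.array([
    [0.96, 0.97, 0.95, 0.98],  # confident, low variance -> kept
    [0.55, 0.45, 0.60, 0.40],  # near 0.5, uncertain -> discarded
    [0.99, 0.99, 0.05, 0.99],  # high ensemble disagreement -> discarded
])
keep, labels = pseudo_label_and_filter(probs)
print(keep)  # [ True False False]
```

Only the first pair survives both criteria; the second fails the confidence test and the third fails the variance test, which is exactly the filtering behavior the ablations in the summary attribute to the combined criterion.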
The theoretical contributions are twofold. First, the authors derive a generalization bound for the reward model at the state‑action level. Assuming realizability (the true reward lies within the function class) and using the distribution of state‑action pairs, they bound the expected squared error of the learned reward in terms of the amount of labeled data, model complexity, and the variance introduced by pseudo‑labeling. This bound clarifies how the transition‑model‑generated data can reduce the generalization error by covering more of the state‑action space. Second, they prove a policy‑improvement guarantee: if the reward model’s error is below a certain threshold, the policy obtained by maximizing the learned reward’s expected return will improve upon the behavior policy. This result bridges the gap between reward‑model accuracy and downstream policy performance, a connection that prior offline PbRL work lacked.
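For context, reward models in PbRL (including the pretrained model analyzed by these bounds) are conventionally fit with the Bradley-Terry preference model; the sketch below is the standard formulation from the PbRL literature, not an equation reproduced from the paper:

```latex
P_\psi\!\left[\sigma^0 \succ \sigma^1\right]
  = \frac{\exp \sum_t \hat{r}_\psi(s^0_t, a^0_t)}
         {\exp \sum_t \hat{r}_\psi(s^0_t, a^0_t)
        + \exp \sum_t \hat{r}_\psi(s^1_t, a^1_t)},
\qquad
\mathcal{L}(\psi) = -\mathbb{E}_{(\sigma^0,\sigma^1,y)}
  \Big[ y \log P_\psi\!\left[\sigma^0 \succ \sigma^1\right]
      + (1-y) \log P_\psi\!\left[\sigma^1 \succ \sigma^0\right] \Big],
```

where $\sigma^0, \sigma^1$ are trajectory segments, $y$ is the (human or pseudo) preference label, and $\hat{r}_\psi$ is the learned state-action reward whose expected squared error the generalization bound controls.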
Empirically, LEASE is evaluated on the D4RL benchmark suite (e.g., HalfCheetah, Walker2d, Hopper). The experiments use only 5–10 % of the full preference dataset that baseline methods typically require. Compared against state‑of‑the‑art offline PbRL algorithms such as OPAL, PT, OPPO, and FTB, LEASE achieves comparable or superior average returns across most tasks. Ablation studies demonstrate that (a) removing the transition model (i.e., no data augmentation) sharply degrades performance, (b) discarding the ensemble‑based uncertainty filter leads to unstable learning due to noisy pseudo‑labels, and (c) using only confidence or only variance for selection is insufficient; both criteria together yield the best results. Moreover, LEASE’s training time is reduced by roughly 30–40 % because the transition model is a simple neural network and the selection step is computationally cheap.
In summary, LEASE offers a practical solution to the high sample‑complexity problem of offline PbRL: it (1) dramatically cuts human labeling cost by synthesizing high‑quality preference data, (2) provides rigorous theoretical guarantees on reward generalization and policy improvement, and (3) validates these claims with extensive experiments showing strong performance with far fewer preference labels. Future directions suggested include incorporating transition‑model uncertainty directly into the selection criterion and extending the framework to richer forms of human feedback such as scalar ratings or natural‑language explanations.