Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning


In offline reinforcement learning, a policy is learned using a static dataset in the absence of costly feedback from the environment. In contrast to the online setting, using only static datasets poses additional challenges, such as policies generating out-of-distribution samples. Model-based offline reinforcement learning methods try to overcome these by learning a model of the underlying dynamics of the environment and using it to guide policy search. While beneficial, with limited datasets, errors in the model and value overestimation for out-of-distribution states can worsen performance. Current model-based methods apply some notion of conservatism to the Bellman update, often implemented using uncertainty estimation derived from model ensembles. In this paper, we propose Constrained Latent Action Policies (C-LAP), which learns a generative model of the joint distribution of observations and actions. We cast policy learning as a constrained objective to always stay within the support of the latent action distribution, and use the generative capabilities of the model to impose an implicit constraint on the generated actions. This eliminates the need for additional uncertainty penalties on the Bellman update and significantly decreases the number of gradient steps required to learn a policy. We empirically evaluate C-LAP on the D4RL and V-D4RL benchmarks and show that C-LAP is competitive with state-of-the-art methods, especially outperforming them on datasets with visual observations.


💡 Research Summary

The paper introduces Constrained Latent Action Policies (C‑LAP), a novel model‑based offline reinforcement learning (RL) framework that tackles two central challenges of offline RL: distribution shift and value overestimation. Traditional model‑based offline methods learn a dynamics model from a fixed dataset and generate imagined trajectories to train a policy. However, limited data leads to model errors, causing the policy to visit out‑of‑distribution (OOD) states and actions, which in turn results in overly optimistic value estimates. Existing solutions typically add uncertainty‑based penalties (e.g., ensembles) to the Bellman update or constrain the learned policy to stay close to the behavior policy. These approaches increase computational cost and require careful tuning of penalty coefficients.

C‑LAP departs from this paradigm by jointly modeling the distribution of observations and actions rather than learning only a conditional dynamics model p(oₜ|oₜ₋₁, aₜ₋₁). It introduces a recurrent latent action state‑space model comprising latent states sₜ and latent actions uₜ. The generative process factorizes as:

  • Latent state prior pθ(sₜ|sₜ₋₁, uₜ₋₁) (deterministic transition)
  • Latent action prior pθ(uₜ|sₜ) (Gaussian)
  • Observation decoder pθ(oₜ|sₜ)
  • Action decoder pθ(aₜ|sₜ, uₜ)
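As a rough illustration of this factorization (not the authors' code; the network shapes and the random linear maps standing in for learned networks are invented for the sketch), one step of the generative rollout can be mocked up as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
S, U, O, A = 8, 2, 4, 2  # latent state, latent action, observation, action dims

# Stand-ins for the learned networks: random linear maps (hypothetical).
W_trans = rng.normal(size=(S, S + U))   # latent state transition
W_prior = rng.normal(size=(2 * U, S))   # latent action prior -> Gaussian params
W_obs = rng.normal(size=(O, S))         # observation decoder
W_act = rng.normal(size=(A, S + U))     # action decoder

def generative_step(s_prev, u_prev):
    """One generative step: sample (s_t, u_t, o_t, a_t) given (s_{t-1}, u_{t-1})."""
    s = np.tanh(W_trans @ np.concatenate([s_prev, u_prev]))  # p(s_t | s_{t-1}, u_{t-1})
    mu, log_sigma = np.split(W_prior @ s, 2)                 # p(u_t | s_t) = N(mu, sigma)
    u = mu + np.exp(log_sigma) * rng.normal(size=U)
    o = W_obs @ s                                            # p(o_t | s_t)
    a = np.tanh(W_act @ np.concatenate([s, u]))              # p(a_t | s_t, u_t)
    return s, u, o, a

s, u, o, a = generative_step(np.zeros(S), np.zeros(U))
```

Note that actions are generated, not conditioned on: sampling a latent action from the prior and decoding it yields an action that resembles those in the dataset.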

The model is trained by maximizing an evidence lower bound (ELBO) on the joint log‑likelihood of observation‑action sequences, augmented with reward and termination likelihoods. This yields a generative model capable of sampling realistic observation‑action trajectories while staying within the support of the original dataset.
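The per-step structure of such an ELBO can be sketched with diagonal-Gaussian likelihood heads; this is a generic single-latent sketch (one KL term stands in for the separate state and action KL terms, and all distribution parameters are passed in rather than produced by networks):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Diagonal-Gaussian log-density, summed over dimensions."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2)

def gauss_kl(mu_q, sig_q, mu_p, sig_p):
    """KL(N(mu_q, sig_q) || N(mu_p, sig_p)) for diagonal Gaussians."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p) ** 2) / (2 * sig_p**2) - 0.5)

def step_elbo(o, a, r, dec_o, dec_a, dec_r, post, prior):
    """Per-step ELBO term: reconstruction log-likelihoods of observation,
    action, and reward, minus the KL between posterior and prior latents.
    dec_* , post, prior are (mean, std) tuples."""
    recon = (gauss_logpdf(o, *dec_o)
             + gauss_logpdf(a, *dec_a)
             + gauss_logpdf(r, *dec_r))
    return recon - gauss_kl(*post, *prior)
```

Summing this term over a trajectory (plus a termination likelihood, omitted here) gives the training objective described above.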

Policy learning occurs in the latent action space. A policy πψ(ûₜ|sₜ) is constrained to produce latent actions that lie within the support of the latent action prior. Concretely, the prior is assumed Gaussian, N(μθ(sₜ), σθ(sₜ)). The policy outputs a bounded latent variable ûₜ, which is rescaled by the prior's mean and standard deviation to obtain the latent action uₜ; the action decoder then maps (sₜ, uₜ) to an environment action. Since every action the policy can produce is decoded from an in-support latent, the constraint is enforced implicitly, without uncertainty penalties on the Bellman update.
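Assuming the bounded latent is obtained with a tanh squash and rescaled by the prior's parameters (an assumption about the exact mechanism, made for illustration), the constraint can be sketched as:

```python
import numpy as np

def constrained_latent_action(policy_raw, mu, sigma):
    """Squash an unbounded policy output into (-1, 1) and rescale by the
    latent action prior's parameters, so the resulting latent action u
    always lies within one standard deviation of the prior mean."""
    u_hat = np.tanh(policy_raw)   # bounded latent, u_hat in (-1, 1)
    return mu + sigma * u_hat     # u in (mu - sigma, mu + sigma)

u = constrained_latent_action(np.array([3.0, -3.0]), np.zeros(2), np.ones(2))
```

Because the policy can only move within the prior's support, the decoded environment actions stay close to the data distribution by construction.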

