Return Augmented Decision Transformer for Off-Dynamics Reinforcement Learning

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

We study offline off-dynamics reinforcement learning (RL), which uses data from an easily accessible source domain to enhance policy learning in a target domain with limited data. Our approach centers on return-conditioned supervised learning (RCSL), particularly Decision Transformer (DT)-type frameworks, which predict actions conditioned on desired return guidance and the complete trajectory history. Previous works address the dynamics-shift problem by augmenting the rewards of source-domain trajectories to match the optimal trajectories in the target domain. However, this strategy is not directly applicable to RCSL owing to (1) the unique form of the RCSL policy class, which explicitly depends on the return, and (2) the absence of a straightforward representation of the optimal trajectory distribution. We propose the Return Augmented (REAG) method for DT-type frameworks, in which we augment the return in the source domain by aligning its distribution with that in the target domain. We provide a theoretical analysis demonstrating that the RCSL policy learned with REAG achieves the same level of suboptimality as would be obtained without a dynamics shift. We introduce two practical implementations, REAG-DARA and REAG-MV. Thorough experiments on D4RL datasets and various DT-type baselines demonstrate that our methods consistently enhance the performance of DT-type frameworks in off-dynamics RL.


💡 Research Summary

Offline reinforcement learning (RL) often suffers from a dynamics mismatch when data are collected in an easily accessible source environment but the policy must be deployed in a target environment with different transition dynamics. This paper tackles the off‑dynamics RL problem from the perspective of Return‑Conditioned Supervised Learning (RCSL), focusing on Decision Transformer (DT)‑type models that condition actions on a desired return‑to‑go. Existing dynamics‑aware reward‑augmentation methods (e.g., DARC, DARA) cannot be directly applied because DT policies explicitly depend on the return, and the optimal trajectory distribution in the target domain is not readily expressible.

The authors propose Return Augmented (REAG), a framework that modifies the returns of source trajectories so that their distribution aligns with that of the target domain. Formally, each source trajectory τ with original cumulative return g(τ) is transformed by a function ψ, yielding ψ(g(τ)) as the conditioning signal during DT training. Two concrete instantiations are introduced:

  1. REAG‑DARA – adapts the dynamics‑aware reward‑augmentation idea. Binary classifiers are trained to distinguish source from target transitions, yielding the log‑ratio correction Δr(s,a,s′) = log[q_sas(M_T|s,a,s′)/q_sas(M_S|s,a,s′)] − log[q_sa(M_T|s,a)/q_sa(M_S|s,a)]. The transformed return is then ψ(g_t) = ∑_{h≥t} (r_h + Δr(s_h,a_h,s_{h+1})).
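
The classifier-based return transformation above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `eps` smoothing, and the assumption that the two domain classifiers have already been trained (so we are given, per transition, their predicted probability that the sample comes from the target domain) are all choices made here for clarity.

```python
import numpy as np

def delta_r(p_sas_target, p_sa_target, eps=1e-8):
    """Dynamics-ratio reward correction
    Δr = log[q(M_T|s,a,s')/q(M_S|s,a,s')] - log[q(M_T|s,a)/q(M_S|s,a)],
    computed from each classifier's probability of the 'target' label."""
    log_ratio_sas = np.log(p_sas_target + eps) - np.log(1.0 - p_sas_target + eps)
    log_ratio_sa = np.log(p_sa_target + eps) - np.log(1.0 - p_sa_target + eps)
    return log_ratio_sas - log_ratio_sa

def transformed_returns_to_go(rewards, p_sas_target, p_sa_target):
    """psi(g_t) = sum_{h >= t} (r_h + Δr_h): returns-to-go of the
    reward-augmented source trajectory, used as the DT conditioning signal."""
    augmented = np.asarray(rewards, dtype=float) + delta_r(
        np.asarray(p_sas_target), np.asarray(p_sa_target))
    # Suffix sums via a reversed cumulative sum.
    return np.cumsum(augmented[::-1])[::-1]

# When both classifiers are maximally uncertain (p = 0.5), Δr vanishes and
# the transformed returns reduce to the ordinary returns-to-go.
print(transformed_returns_to_go([1.0, 0.0, 2.0], [0.5] * 3, [0.5] * 3))
# [3. 2. 2.]
```

In this sketch the source trajectory's original return-to-go sequence is simply relabeled before DT training, so the policy is conditioned on returns whose distribution matches the target domain rather than the source domain.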
