Cross-Domain Offline Policy Adaptation via Selective Transition Correction


It remains a critical challenge to adapt policies across domains with mismatched dynamics in reinforcement learning (RL). In this paper, we study cross-domain offline RL, where an offline dataset from another similar source domain can be accessed to enhance policy learning upon a target domain dataset. Directly merging the two datasets may lead to suboptimal performance due to potential dynamics mismatches. Existing approaches typically mitigate this issue through source domain transition filtering or reward modification, which, however, may lead to insufficient exploitation of the valuable source domain data. Instead, we propose to modify the source domain data into the target domain data. To that end, we leverage an inverse policy model and a reward model to correct the actions and rewards of source transitions, explicitly achieving alignment with the target dynamics. Since limited data may result in inaccurate model training, we further employ a forward dynamics model to retain corrected samples that better match the target dynamics than the original transitions. Consequently, we propose the Selective Transition Correction (STC) algorithm, which enables reliable usage of source domain data for policy adaptation. Experiments on various environments with dynamics shifts demonstrate that STC achieves superior performance against existing baselines.


💡 Research Summary

The paper tackles the problem of cross‑domain offline reinforcement learning, where a source domain and a target domain share the same state and action spaces but differ in transition dynamics. Directly mixing source and target datasets often harms performance because the source transitions are generated under dynamics that do not match the target environment. Existing offline domain‑adaptation methods typically mitigate this mismatch by filtering out source transitions that appear dissimilar or by penalising their rewards. While effective at reducing bias, such strategies discard a large portion of the potentially useful source data, limiting sample efficiency.

To address this limitation, the authors propose Selective Transition Correction (STC), a three‑step pipeline that transforms source transitions so that they become compatible with the target dynamics, and then selectively retains only those transformed samples that are trustworthy.

Phase I – Model training on the target dataset.

  1. Inverse policy model \(f_{\text{inv}}(s, s')\): trained to predict the action that most likely caused the observed next state under the target dynamics. The loss is a simple L2 regression on \((s, a, s')\) triples from the target offline data.
  2. Reward model \(r(s, a)\): a parametric function approximating the target reward function, learned by minimizing the squared error on target \((s, a, r)\) tuples.
  3. Forward dynamics model \(f_{\text{fwd}}(s, a)\): predicts the next state given a state‑action pair, also trained on the target dataset.
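Phase I amounts to three supervised regressions on the target dataset. A minimal sketch, using ordinary least squares on a synthetic linear-dynamics dataset in place of the paper's neural networks (the data, dimensions, and all names here are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "target domain" dataset with linear dynamics s' = [s, a] @ W
# and a linear toy reward, so least squares can fit all three models exactly.
N, S_DIM, A_DIM = 512, 3, 2
s = rng.normal(size=(N, S_DIM))
a = rng.normal(size=(N, A_DIM))
W = rng.normal(size=(S_DIM + A_DIM, S_DIM))
s_next = np.hstack([s, a]) @ W
r = s.sum(axis=1) + a.sum(axis=1)

def fit(X, Y):
    """Least-squares stand-in for gradient-based model training."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

# 1. Inverse policy model: (s, s') -> a
W_inv = fit(np.hstack([s, s_next]), a)
# 2. Reward model: (s, a) -> r
w_r = fit(np.hstack([s, a]), r)
# 3. Forward dynamics model: (s, a) -> s'
W_fwd = fit(np.hstack([s, a]), s_next)
```

With linear dynamics the action is itself a linear function of \((s, s')\), so the inverse model is recoverable exactly here; in the paper's setting all three are neural networks trained on a limited offline dataset, which is precisely why the selection step below is needed.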

Phase II – Source transition correction.
For each source transition \((s_{\text{src}}, a_{\text{src}}, s'_{\text{src}}, r_{\text{src}})\):

  • The inverse policy model generates a corrected action \(\hat a_{\text{src}} = f_{\text{inv}}(s_{\text{src}}, s'_{\text{src}})\).
  • The reward model is used to adjust the original reward via a first‑order Taylor expansion around the original action:
    \[\hat r_{\text{src}} \approx r_{\text{src}} + \nabla_a r(s_{\text{src}}, a_{\text{src}})^{\top} \left(\hat a_{\text{src}} - a_{\text{src}}\right).\]
  • Because limited target data can make the inverse model inaccurate, the forward dynamics model acts as a filter: the corrected pair \((s_{\text{src}}, \hat a_{\text{src}})\) is retained only if its predicted next state matches \(s'_{\text{src}}\) more closely than the prediction under the original action \(a_{\text{src}}\); otherwise the original transition is kept.
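The correction-and-selection step can be sketched as follows. The three models are toy closed-form stand-ins for the trained Phase I models (target dynamics assumed to be \(s' = s + a\) purely for illustration), and recomputing the reward with the learned model stands in for the paper's Taylor-expansion adjustment:

```python
import numpy as np

# Toy stand-ins for the Phase I models (illustrative assumptions only;
# the target dynamics are taken to be s' = s + a).
def f_inv(s, s_next):      # inverse policy model
    return s_next - s

def f_fwd(s, a):           # forward dynamics model
    return s + a

def reward_model(s, a):    # learned target reward
    return float(s.sum() - (a * a).sum())

def correct_and_select(s, a, s_next, r):
    """Correct a source transition; keep the correction only if the
    corrected action explains the observed next state better than the
    original action does under the target forward model."""
    a_hat = f_inv(s, s_next)
    err_orig = np.linalg.norm(f_fwd(s, a) - s_next)
    err_corr = np.linalg.norm(f_fwd(s, a_hat) - s_next)
    if err_corr <= err_orig:
        return s, a_hat, s_next, reward_model(s, a_hat)
    return s, a, s_next, r  # fall back to the original transition

# A source transition generated under mismatched dynamics s' = s + 2a:
s = np.zeros(2)
a = np.array([1.0, -1.0])
s_next = s + 2.0 * a
_, a_hat, _, r_hat = correct_and_select(s, a, s_next, r=0.0)
```

Here the corrected action \([2, -2]\) reproduces the observed next state exactly under the target dynamics, so the corrected sample is retained; the selection test is what guards against inverse-model error when target data is scarce.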
