Enabling Off-Policy Imitation Learning with Deep Actor Critic Stabilization

Reading time: 1 minute

📝 Original Info

  • Title: Enabling Off-Policy Imitation Learning with Deep Actor Critic Stabilization
  • ArXiv ID: 2511.07288
  • Date: 2025-11-10
  • Authors: Author information was not provided in the processed metadata. Refer to the original paper for author names and affiliations.

📝 Abstract

Learning complex policies with Reinforcement Learning (RL) is often hindered by instability and slow convergence, a problem exacerbated by the difficulty of reward engineering. Imitation Learning (IL) from expert demonstrations bypasses this reliance on rewards. However, state-of-the-art IL methods, exemplified by Generative Adversarial Imitation Learning (GAIL) (Ho et al.), suffer from severe sample inefficiency. This is a direct consequence of their foundational on-policy algorithms, such as TRPO (Schulman et al.). In this work, we introduce an adversarial imitation learning algorithm that incorporates off-policy learning to improve sample efficiency. By combining an off-policy framework with auxiliary techniques, specifically double Q-network based stabilization and value learning without reward-function inference, we demonstrate a reduction in the samples required to robustly match expert behavior.

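The sketch below illustrates the general recipe the abstract describes, not the authors' actual implementation: an off-policy actor-critic update in which a GAIL-style discriminator's confusion signal acts as a surrogate reward inside the critic target (no explicit reward function is recovered), and twin Q-networks with a minimum over targets provide the double-Q stabilization. All module names, hyperparameters, and the TD3-style update structure are assumptions for illustration.

```python
# Hypothetical sketch of off-policy adversarial imitation learning with
# clipped double-Q stabilization. Not the paper's code; names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class TwinCritic(nn.Module):
    """Two independent Q-networks; taking the minimum of their targets
    curbs value overestimation (the 'double Q' stabilization)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.q1 = mlp(obs_dim + act_dim, 1)
        self.q2 = mlp(obs_dim + act_dim, 1)

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        return self.q1(x), self.q2(x)

def update_step(actor, critic, critic_target, discriminator,
                replay_batch, expert_batch,
                critic_opt, actor_opt, disc_opt,
                gamma=0.99, tau=0.005):
    # Transitions come from an off-policy replay buffer, which is where the
    # sample-efficiency gain over on-policy GAIL/TRPO is meant to come from.
    obs, act, next_obs, done = replay_batch

    # --- Discriminator: separate expert (s, a) pairs from policy pairs ---
    exp_obs, exp_act = expert_batch
    d_policy = discriminator(torch.cat([obs, act], dim=-1))
    d_expert = discriminator(torch.cat([exp_obs, exp_act], dim=-1))
    disc_loss = (F.binary_cross_entropy_with_logits(d_policy, torch.zeros_like(d_policy))
                 + F.binary_cross_entropy_with_logits(d_expert, torch.ones_like(d_expert)))
    disc_opt.zero_grad(); disc_loss.backward(); disc_opt.step()

    # --- Critic: log D(s, a) serves as a surrogate reward, so no reward
    #     function is ever explicitly inferred or hand-engineered ---
    with torch.no_grad():
        surrogate_r = F.logsigmoid(discriminator(torch.cat([obs, act], dim=-1)))
        next_act = actor(next_obs)
        tq1, tq2 = critic_target(next_obs, next_act)
        target = surrogate_r + gamma * (1.0 - done) * torch.min(tq1, tq2)
    q1, q2 = critic(obs, act)
    critic_loss = F.mse_loss(q1, target) + F.mse_loss(q2, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # --- Actor: deterministic policy gradient through the first Q-head ---
    actor_loss = -critic(obs, actor(obs))[0].mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # --- Polyak averaging keeps the target critic slowly moving ---
    with torch.no_grad():
        for p, tp in zip(critic.parameters(), critic_target.parameters()):
            tp.mul_(1.0 - tau).add_(tau * p)
```

Reusing replayed transitions for both the discriminator and the critic, rather than discarding them after each on-policy rollout, is the design choice that the abstract's sample-efficiency claim rests on; the clipped double-Q target is what keeps that reuse from destabilizing the value estimates.
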
💡 Deep Analysis

📄 Full Content

Reference

This content is AI-processed based on open access ArXiv data.
