When Object-Centric World Models Meet Policy Learning: From Pixels to Policies, and Where It Breaks

Reading time: 1 minute

📝 Original Info

  • Title: When Object-Centric World Models Meet Policy Learning: From Pixels to Policies, and Where It Breaks
  • ArXiv ID: 2511.06136
  • Date: 2025-11-08
  • Authors: Not listed in the provided source information.

📝 Abstract

Object-centric world models (OCWM) aim to decompose visual scenes into object-level representations, providing structured abstractions that could improve compositional generalization and data efficiency in reinforcement learning. We hypothesize that explicitly disentangled object-level representations, by localizing task-relevant information, can enhance policy performance across novel feature combinations. To test this hypothesis, we introduce DLPWM, a fully unsupervised, disentangled object-centric world model that learns object-level latents directly from pixels. DLPWM achieves strong reconstruction and prediction performance, including robustness to several out-of-distribution (OOD) visual variations. However, when used for downstream model-based control, policies trained on DLPWM latents underperform compared to DreamerV3. Through latent-trajectory analyses, we identify representation shift during multi-object interactions as a key driver of unstable policy learning. Our results suggest that, although object-centric perception supports robust visual modeling, achieving stable control requires mitigating latent drift.
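The abstract attributes unstable policy learning to representation shift (latent drift) during multi-object interactions, detected via latent-trajectory analyses. The paper's actual diagnostic is not reproduced here; the sketch below is only a minimal, hypothetical illustration of one way to quantify per-object latent drift along a trajectory, assuming slot-structured latents of shape (timesteps, slots, dims). All names and shapes are assumptions, not the authors' interface.

```python
import numpy as np

def latent_drift(slot_latents):
    """Mean per-slot drift between consecutive timesteps.

    slot_latents: array of shape (T, K, D) -- T timesteps, K object slots,
    D-dimensional latent per slot (hypothetical layout, not the paper's API).
    Returns an array of length T-1 with the average slot displacement per step.
    """
    step_diffs = np.linalg.norm(np.diff(slot_latents, axis=0), axis=-1)  # (T-1, K)
    return step_diffs.mean(axis=1)

# Toy usage: a smooth random trajectory with a simulated "interaction" jump at t=50.
T, K, D = 100, 4, 16
traj = np.cumsum(0.01 * np.random.randn(T, K, D), axis=0)
traj[50:] += 0.5 * np.random.randn(K, D)  # abrupt representation shift when objects interact
drift = latent_drift(traj)
print("drift around the interaction:", drift[48:53])
```

A spike in such a drift signal around interaction events would be consistent with the kind of representation shift the abstract describes, though the paper may measure it differently.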

Reference

This content is AI-processed based on open access ArXiv data.
