Learning Policy Representations for Steerable Behavior Synthesis

Notice: This research summary and analysis were automatically generated using AI. For authoritative details, please refer to the original arXiv source.

Given a Markov decision process (MDP), we seek to learn representations for a range of policies to facilitate behavior steering at test time. As policies of an MDP are uniquely determined by their occupancy measures, we propose modeling policy representations as expectations of state-action feature maps with respect to occupancy measures. We show that these representations can be approximated uniformly for a range of policies using a set-based architecture. Our model encodes a set of state-action samples into a latent embedding, from which we decode both the policy and its value functions corresponding to multiple rewards. We use a variational generative approach to induce a smooth latent space, and further shape it with contrastive learning so that latent distances align with differences in value functions. This geometry permits gradient-based optimization directly in the latent space. Leveraging this capability, we solve a novel behavior synthesis task, where policies are steered to satisfy previously unseen value function constraints without additional training.
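The test-time steering idea in the abstract can be illustrated with a minimal sketch: given a latent policy embedding z and a differentiable value predictor decoded from it, take gradient steps on z until a value constraint is met. The predictor below is a toy quadratic stand-in (the names `value_fn`, `steer`, and the target vector are illustrative assumptions, not the paper's model).

```python
import numpy as np

TARGET = np.array([1.0, -2.0])  # hypothetical "best" embedding

def value_fn(z):
    """Toy differentiable value predictor: v(z) = -||z - TARGET||^2."""
    return -np.sum((z - TARGET) ** 2)

def value_grad(z):
    """Analytic gradient of the toy value predictor."""
    return -2.0 * (z - TARGET)

def steer(z, constraint=-0.01, lr=0.1, steps=500):
    """Gradient-ascend the latent z until v(z) meets the constraint."""
    for _ in range(steps):
        if value_fn(z) >= constraint:
            break
        z = z + lr * value_grad(z)
    return z

z0 = np.zeros(2)          # initial policy embedding
z_steered = steer(z0)
print(value_fn(z_steered) >= -0.01)  # True: constraint satisfied
```

In the paper's setting the gradient would come from a learned value decoder rather than a closed-form function, but the optimization loop over the latent space has the same shape.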


💡 Research Summary

The paper tackles the problem of learning a compact, manipulable representation of policies in a Markov decision process (MDP) that enables direct steering of behavior at test time. The authors observe that a policy is uniquely characterized by its discounted state-action occupancy measure dπ(s,a). They therefore define a policy representation hπ as the expectation of a state-action feature map f under dπ: hπ = E_{dπ}[f(s,a)].
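Since dπ(s,a) = (1-γ) Σ_t γ^t Pr(s_t=s, a_t=a), the representation hπ = E_{dπ}[f(s,a)] can be estimated from sampled trajectories by averaging discount-weighted features. A minimal sketch, assuming a toy concatenation feature map (the names `feature_map` and `estimate_policy_representation` are illustrative, not from the paper):

```python
import numpy as np

def feature_map(state, action):
    """Toy feature map f(s, a): concatenate state and action vectors."""
    return np.concatenate([state, action])

def estimate_policy_representation(trajectories, gamma=0.99):
    """Monte-Carlo estimate of h_pi = E_{d_pi}[f(s, a)].

    Uses h_pi = (1 - gamma) * sum_t gamma^t * E[f(s_t, a_t)],
    averaged over the sampled trajectories.
    """
    total = None
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            phi = (1.0 - gamma) * gamma**t * feature_map(s, a)
            total = phi if total is None else total + phi
    return total / len(trajectories)

# Tiny synthetic example: two 5-step trajectories with 3-dim states
# and 1-dim actions, so the representation has dimension 3 + 1 = 4.
rng = np.random.default_rng(0)
trajs = [[(rng.standard_normal(3), rng.standard_normal(1)) for _ in range(5)]
         for _ in range(2)]
h = estimate_policy_representation(trajs)
print(h.shape)  # (4,)
```

In the paper this expectation is approximated by a learned set-based encoder over state-action samples rather than a fixed feature map, but the quantity being represented is the same.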

