A Robust Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics


This paper develops a model-based framework for continuous-time policy evaluation (CTPE) in reinforcement learning, incorporating both Brownian and Lévy noise to model stochastic dynamics influenced by rare and extreme events. The approach formulates the policy evaluation problem as solving a partial integro-differential equation (PIDE) for the value function with unknown coefficients. A key challenge in this setting is accurately recovering the unknown coefficients in the stochastic dynamics, particularly when driven by Lévy processes with heavy-tailed effects. To address this, the authors propose a robust numerical approach that effectively handles both unbiased and censored trajectory datasets. This method combines maximum likelihood estimation with an iterative tail-correction mechanism, improving the stability and accuracy of coefficient recovery. Additionally, they establish a theoretical bound on the policy evaluation error in terms of the coefficient recovery error. Numerical experiments demonstrate the effectiveness and robustness of the method in recovering heavy-tailed Lévy dynamics and verify the theoretical error analysis for policy evaluation.


💡 Research Summary

This paper tackles the problem of continuous‑time policy evaluation (CTPE) in reinforcement learning by explicitly modeling the underlying dynamics with both Brownian motion and a symmetric 2α‑stable Lévy process. The state evolution is described by the stochastic differential equation
\( dX_t = b(X_t)\,dt + \Sigma(X_t)\,dW_t + \sigma(X_t)\,dL^{\alpha}_t, \)
where \(W_t\) is a standard Wiener process, \(L^{\alpha}_t\) is the symmetric 2α-stable Lévy process with α∈(0,1) (so the stability index 2α lies in (0,2)), and the drift and diffusion coefficients \(b, \Sigma, \sigma\) are unknown functions of the state. The objective is to estimate the discounted value function
\( V(x)=\mathbb{E}\big[\int_0^\infty e^{-\beta t}\, r(X_t)\, dt \,\big|\, X_0 = x\big], \)
where \(\beta > 0\) is the discount rate and \(r\) is the running reward under the evaluated policy.
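As a rough illustration of these definitions (not the paper's method), the sketch below simulates trajectories of the jump-diffusion with an Euler–Maruyama scheme, drawing α-stable increments via SciPy's `levy_stable`, and forms a plain Monte Carlo estimate of the discounted value over a finite horizon. The coefficients `b`, `Sigma`, `sigma`, the reward `r`, and all parameter values in the usage example are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import levy_stable

def simulate_value(x0, b, Sigma, sigma, r, beta, stab, T=5.0, dt=0.05,
                   n_paths=300, seed=None):
    """Monte Carlo estimate of V(x0) ~ E[ int_0^T e^{-beta t} r(X_t) dt | X_0 = x0 ]
    for dX = b(X) dt + Sigma(X) dW + sigma(X) dL, where dL is an increment of a
    symmetric stable Levy process with stability index `stab` (2*alpha in the
    paper's notation). Truncating the horizon at T introduces a bias of order
    e^{-beta T}; all choices here are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.full(n_paths, x0, dtype=float)
    v = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        v += np.exp(-beta * t) * r(x) * dt          # accumulate discounted reward
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increment
        # Self-similarity of strictly stable processes: L_{t+dt}-L_t ~ dt^{1/stab} S(stab)
        dL = dt ** (1.0 / stab) * levy_stable.rvs(stab, 0.0, size=n_paths,
                                                  random_state=rng)
        x = x + b(x) * dt + Sigma(x) * dW + sigma(x) * dL
    return v.mean()

# Hypothetical example: mean-reverting drift, constant noise levels, bounded reward.
v_hat = simulate_value(
    0.0,
    b=lambda x: -x,
    Sigma=lambda x: 0.3 * np.ones_like(x),
    sigma=lambda x: 0.1 * np.ones_like(x),
    r=np.cos,
    beta=1.0,
    stab=1.5,
    seed=0,
)
```

Note the heavy tails enter only through the stable increments `dL`; with a bounded reward the discounted integral stays finite even though the state increments have infinite variance for `stab` < 2.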

