Dynamic programming principle for one kind of stochastic recursive optimal control problem and Hamilton-Jacobi-Bellman equations


In this paper, we study a class of stochastic recursive optimal control problems with obstacle constraints on the cost functional, where the cost functional is described by the solution of a reflected backward stochastic differential equation. We establish the dynamic programming principle for this class of optimal control problems and show that the value function is the unique viscosity solution of the obstacle problem for the corresponding Hamilton-Jacobi-Bellman equation.


💡 Research Summary

The paper investigates a class of stochastic recursive optimal control problems in which the performance criterion is given by the solution of a reflected backward stochastic differential equation (RBSDE). The RBSDE incorporates an obstacle (a lower barrier) that forces the cost process to stay above a prescribed function, and an associated increasing process ensures minimal reflection. The authors first formulate the controlled state dynamics as a standard stochastic differential equation driven by a Brownian motion, and then define the cost triple $(Y, Z, K)$ through the RBSDE with terminal condition $\Phi(X_T)$, driver $f$, and obstacle $h(t, X_t)$. The control objective is to minimize the initial value $Y_t$ over all admissible, progressively measurable controls.
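The setup described above can be sketched as follows. This is the standard RBSDE formulation under the usual assumptions (Lipschitz coefficients, square integrability); the precise conditions and notation may differ slightly from the paper's:

```latex
% Controlled forward SDE on [t, T], driven by a Brownian motion B:
dX_s = b(s, X_s, u_s)\,ds + \sigma(s, X_s, u_s)\,dB_s, \qquad X_t = x.

% Reflected BSDE defining the cost triple (Y, Z, K):
Y_s = \Phi(X_T) + \int_s^T f(r, X_r, Y_r, Z_r, u_r)\,dr
      + K_T - K_s - \int_s^T Z_r\,dB_r,

% Obstacle constraint and minimal (Skorokhod) reflection condition,
% with K increasing, continuous, and K_t = 0:
Y_s \ge h(s, X_s), \qquad
\int_t^T \bigl(Y_s - h(s, X_s)\bigr)\,dK_s = 0.
```

The increasing process $K$ pushes $Y$ upward only when it touches the barrier $h(s, X_s)$, which is what "minimal reflection" means in this context.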

The central contribution is a rigorous proof of the dynamic programming principle (DPP) for this setting. By employing comparison theorems for RBSDEs, continuity properties of the value functional, and a careful topological description of the admissible control set, the authors show that the value function is deterministic, satisfies the DPP, and is the unique viscosity solution of the obstacle problem for the associated Hamilton-Jacobi-Bellman equation.
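The resulting HJB equation takes the form of an obstacle problem. The following is a sketch in common notation; the second-order operator $\mathcal{L}^u$ and the exact arguments of the driver are standard conventions, not quoted from the paper:

```latex
% Value function: W(t, x) = infimum of Y_t^{t,x;u} over admissible controls u.
% Obstacle problem for the HJB equation:
\min\Bigl\{\, W(t,x) - h(t,x),\;
  -\partial_t W(t,x)
  - \inf_{u \in U}\bigl[\, \mathcal{L}^u W(t,x)
      + f\bigl(t, x, W(t,x), \sigma^\top(t,x,u)\, D_x W(t,x), u\bigr) \bigr]
\Bigr\} = 0,

% with terminal condition and generator:
W(T, x) = \Phi(x), \qquad
\mathcal{L}^u \varphi
  = \tfrac{1}{2}\operatorname{tr}\bigl(\sigma\sigma^\top(t,x,u)\, D_x^2 \varphi\bigr)
    + b(t,x,u)^\top D_x \varphi.
```

The minimum of the two terms encodes the obstacle: wherever the value function sits strictly above the barrier $h$, the usual HJB equation holds, and on the contact set the constraint $W = h$ is active.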

