Stochastic Perron's method for Hamilton-Jacobi-Bellman equations

We show that the value function of a stochastic control problem is the unique solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, completely avoiding the proof of the so-called dynamic programming principle (DPP). Using Stochastic Perron’s method we construct a super-solution lying below the value function and a sub-solution dominating it. A comparison argument easily closes the proof. The program has the precise meaning of verification for viscosity-solutions, obtaining the DPP as a conclusion. It also immediately follows that the weak and strong formulations of the stochastic control problem have the same value. Using this method we also capture the possible face-lifting phenomenon in a straightforward manner.


💡 Research Summary

The paper tackles a fundamental problem in stochastic optimal control: establishing that the value function of a control problem is the unique viscosity solution of the associated Hamilton‑Jacobi‑Bellman (HJB) equation. Traditionally, this result is obtained by first proving the dynamic programming principle (DPP), then using it to derive the HJB equation, proving existence of a viscosity solution, and finally invoking a comparison principle to obtain uniqueness. The DPP proof, however, is technically demanding; it requires delicate measurability arguments, regularization of controls, and often a separate treatment of the strong formulation (fixed probability space) and the weak formulation (probability space also variable).

The authors bypass the DPP entirely by employing Stochastic Perron’s method, a stochastic analogue of the classical Perron technique used in deterministic PDE theory. The core idea is to construct two families of candidate functions:

  1. Stochastic super‑solutions (upper barriers) – functions that dominate the value function; their pointwise infimum is shown to be a viscosity sub‑solution of the HJB equation (the “≤’’ inequality in the viscosity sense).
  2. Stochastic sub‑solutions (lower barriers) – functions that lie below the value function; their pointwise supremum is a viscosity super‑solution (the “≥’’ inequality).
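
The comparison step mentioned in the abstract then squeezes the value function between the two limits. Schematically, writing $v_-$ for the supremum of stochastic sub‑solutions and $v_+$ for the infimum of stochastic super‑solutions (this notation is ours, not the paper's), the closing argument reads:

```latex
% Sandwich argument closing Stochastic Perron's method (notation is ours).
% Every stochastic sub-solution w^- lies below V, every stochastic
% super-solution w^+ above, hence
v_- := \sup_{w^-} w^- \;\le\; V \;\le\; \inf_{w^+} w^+ =: v_+ .
% The Perron construction shows that v_- is a viscosity super-solution of
% the HJB equation and v_+ a viscosity sub-solution; the comparison
% principle (sub-solution \le super-solution) then gives
v_+ \;\le\; v_- ,
% so the chain collapses: v_- = V = v_+, and V is the unique viscosity solution.
```

Note that no optimality or measurable-selection argument enters anywhere in this chain, which is exactly what lets the method avoid the DPP.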

Both families are built directly from the control problem without invoking optimality. For the super‑solution side, one fixes an admissible control α and computes the expected discounted cost Φ = Φ^α under that control; any such Φ dominates the value function (taking the infimum over all controls would recover the value function itself). By applying Itô’s formula and the martingale property, one shows that Φ satisfies the super‑solution inequality for the HJB equation in the viscosity sense; written with a discount rate β, controlled generator 𝓛^a, and running cost f (notation not fixed in this summary), this reads

$$\beta\,\Phi(x)\;-\;\inf_{a\in A}\big[\mathcal{L}^{a}\Phi(x)+f(x,a)\big]\;\ge\;0.$$
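
A minimal sketch of the Itô/martingale step, assuming controlled diffusion dynamics $dX_t = b(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dW_t$ and an infinite-horizon discounted cost (these model choices are our assumptions; the summary does not fix a setup):

```latex
% Ito's formula applied to the discounted candidate e^{-\beta t}\Phi(X_t)
% under an arbitrary admissible control \alpha (dynamics assumed above):
d\big(e^{-\beta t}\Phi(X_t)\big)
   = e^{-\beta t}\big(\mathcal{L}^{\alpha_t}\Phi - \beta\Phi\big)(X_t)\,dt
   + e^{-\beta t}\,\nabla\Phi(X_t)^{\top}\sigma(X_t,\alpha_t)\,dW_t .
% If \beta\Phi - \mathcal{L}^{a}\Phi - f(\cdot,a) \ge 0 for every action a, then
M_t := e^{-\beta t}\Phi(X_t) + \int_0^t e^{-\beta s} f(X_s,\alpha_s)\,ds
% is a local supermartingale. Taking expectations and letting t \to \infty:
\Phi(x) \;\ge\; \inf_{\alpha}\,
   \mathbb{E}\Big[\int_0^{\infty} e^{-\beta s} f(X_s,\alpha_s)\,ds\Big]
   \;=\; V(x).
```

The same supermartingale/submartingale dichotomy is what defines the two stochastic families probabilistically, so the barriers never require smoothness of the value function.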

