Dynamic Programming for Epistemic Uncertainty in Markov Decision Processes


In this paper, we propose a general theory of ambiguity-averse MDPs, which treats the uncertain transition probabilities as random variables and evaluates a policy via a risk measure applied to its random return. This ambiguity-averse MDP framework unifies several models of MDPs with epistemic uncertainty for specific choices of risk measures. We extend the concepts of value functions and Bellman operators to our setting. Based on these objects, we establish the consequences of dynamic programming principles in this framework (existence of stationary policies, value and policy iteration algorithms), and we completely characterize law-invariant risk measures compatible with dynamic programming. Our work draws connections among several variants of MDP models and fully delineates what is possible under the dynamic programming paradigm and which risk measures require leaving it.


💡 Research Summary

The paper introduces a unified framework called “ambiguity‑averse Markov Decision Processes (MDPs)” to handle epistemic uncertainty in transition dynamics. Instead of assuming a fixed transition kernel, the authors model the kernel as a random variable P̃ drawn from a distribution ν over the set of feasible kernels. For any policy π, the discounted return µᵀVπ,P̃ becomes a random variable, and the decision maker evaluates it through a risk measure ρ, yielding the objective ρ(µᵀVπ,P̃). This formulation subsumes many existing uncertain‑MDP models: robust MDPs (ρ = essential infimum), optimistic MDPs (ρ = essential supremum), multi‑model MDPs (ρ = expectation), percentile optimization (ρ = Value‑at‑Risk), and others such as CVaR or the entropic risk measure.
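As a concrete illustration of this objective, the sketch below samples transition kernels for a toy two‑state MDP under a fixed policy, computes the exact discounted return µᵀVπ,P̃ for each draw, and applies several of the risk measures listed above. The Dirichlet distribution over kernels and all numbers are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Toy 2-state MDP under a fixed policy pi: illustrates evaluating
# rho(mu^T V_{pi, P~}) for several choices of risk measure rho.
rng = np.random.default_rng(0)
gamma = 0.9
r_pi = np.array([1.0, 0.0])   # policy-induced per-state rewards (assumed)
mu = np.array([0.5, 0.5])     # initial state distribution (assumed)

def policy_value(P_pi):
    """Exact discounted value: V = (I - gamma * P_pi)^{-1} r_pi."""
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Sample kernels P~ from nu; here nu puts Dirichlet rows on each state.
returns = np.array([
    mu @ policy_value(rng.dirichlet(np.ones(2), size=2))
    for _ in range(1000)
])

objectives = {
    "robust (ess-inf)":     returns.min(),
    "optimistic (ess-sup)": returns.max(),
    "multi-model (mean)":   returns.mean(),
    "percentile (VaR_0.1)": np.quantile(returns, 0.1),
    "CVaR_0.1":             returns[returns <= np.quantile(returns, 0.1)].mean(),
}
for name, val in objectives.items():
    print(f"{name}: {val:.3f}")
```

Each line of output is one possible objective value for the same policy; the spread between the robust and optimistic numbers is a direct measure of the epistemic uncertainty in ν.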

Two types of kernel uncertainty are distinguished: static kernels, where a single realization of ˜P is sampled once and then held fixed for the entire horizon, and resampled kernels, where an i.i.d. kernel is drawn at each time step. The authors show that these two settings have markedly different computational implications; for example, static multi‑model MDPs are NP‑hard, whereas the resampled version can be solved by dynamic programming.
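The behavioural gap between the two settings can be seen in a small Monte Carlo sketch (the two-kernel support and all numbers are assumptions): under static sampling the return distribution is a mixture over persistent regimes, while under resampling the per-step draw marginalizes out to the mean kernel, so the returns are less dispersed and tail-sensitive risk measures give different answers.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, T, n_runs = 0.9, 60, 4000
r = np.array([1.0, 0.0])                      # reward depends on state only
kernels = [np.array([[0.9, 0.1], [0.2, 0.8]]),
           np.array([[0.5, 0.5], [0.5, 0.5]])]  # two equally likely kernels

def rollout(static):
    """Simulate one discounted return, starting in state 0."""
    s, total = 0, 0.0
    P = kernels[rng.integers(2)]              # static: drawn once, held fixed
    for t in range(T):
        if not static:                        # resampled: fresh i.i.d. draw
            P = kernels[rng.integers(2)]
        total += gamma**t * r[s]
        s = rng.choice(2, p=P[s])
    return total

static_returns = np.array([rollout(True) for _ in range(n_runs)])
resampled_returns = np.array([rollout(False) for _ in range(n_runs)])
print("static std:   ", static_returns.std())
print("resampled std:", resampled_returns.std())
```

The static returns carry an extra between-kernel variance component (which regime was drawn persists forever), which is exactly why worst-case and percentile objectives behave so differently in the two settings.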

The core technical contribution is the extension of Bellman operators to the ambiguity‑averse setting. Under three axioms on the risk measure—monotonicity, translation invariance, and law invariance—the paper proves that the policy‑specific Bellman operator Tπ,ν,ρ is monotone and a γ‑contraction in the ℓ∞‑norm. Consequently, this operator admits a unique fixed point, the ambiguity‑averse value function Vπ,ν,ρ; stationary optimal policies exist, and classic algorithms such as value iteration and policy iteration retain their convergence guarantees.
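A minimal sketch of value iteration with an ambiguity‑averse Bellman operator, here instantiated with ρ = essential infimum over a hypothetical two‑kernel support (all numbers are assumptions); swapping `min` for `max` or `mean` gives the optimistic and expectation variants:

```python
import numpy as np

gamma = 0.9
# Hypothetical 2-state, 2-action MDP with two candidate kernels (nu uniform).
R = np.array([[1.0, 0.5], [0.0, 2.0]])       # R[s, a]
P1 = np.array([[[0.9, 0.1], [0.5, 0.5]],
               [[0.3, 0.7], [0.1, 0.9]]])    # P1[s, a, s']
P2 = np.array([[[0.6, 0.4], [0.2, 0.8]],
               [[0.8, 0.2], [0.4, 0.6]]])
kernels = [P1, P2]

def bellman(V):
    """Optimal Bellman operator with rho = ess-inf: max over actions
    of the worst-case (min over kernels) one-step lookahead."""
    Q = np.stack([R + gamma * (P @ V) for P in kernels])  # (kernel, s, a)
    return Q.min(axis=0).max(axis=1)

# Value iteration: gamma-contraction, so it converges geometrically.
V = np.zeros(2)
for _ in range(1000):
    V_new = bellman(V)
    done = np.max(np.abs(V_new - V)) < 1e-12
    V = V_new
    if done:
        break
print("fixed point:", V)
```

The contraction property is what licenses stopping at a small successive difference: the distance to the unique fixed point shrinks by a factor γ per sweep, regardless of which of the three admissible risk measures is plugged in.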

However, the authors demonstrate that the class of law‑invariant risk measures compatible with this dynamic‑programming structure is extremely limited. By imposing either a continuity condition or a law‑invariance plus monotonicity assumption, they prove that the only admissible risk measures are the essential infimum, essential supremum, and the expectation. In other words, risk measures like Conditional Value‑at‑Risk, entropic risk, or any non‑linear distortion of the distribution cannot be embedded within a standard Bellman recursion; to use them one must abandon the classic DP paradigm and resort to alternative approaches such as state augmentation, nested risk formulations, or direct gradient‑based policy optimization.

Table 1 in the paper summarizes how each existing model maps onto the ambiguity‑averse framework, indicating whether static or resampled kernels are used, whether the dynamic‑programming conditions hold, whether stationary optimal policies exist, and whether the problem is polynomial‑time tractable. This table makes clear that only robust, optimistic, and risk‑neutral (expectation) models satisfy all DP conditions; all other risk‑sensitive formulations either lose tractability or require non‑DP solution methods.

The paper’s contributions are twofold: (1) a comprehensive theoretical unification of uncertain‑MDP models via risk‑adjusted objectives, and (2) a precise characterization of the limitations imposed by dynamic programming on admissible risk measures. The results guide practitioners: if one wishes to retain the computational elegance of DP, the choice of risk measure is essentially forced to be one of the three classic forms. To incorporate richer risk attitudes, one must explore beyond DP—e.g., by augmenting the state space, employing nested risk measures, or using reinforcement‑learning style policy gradient methods. The authors provide full proofs in the appendices and a detailed literature review, establishing a solid foundation for future work on risk‑aware decision making under model uncertainty.

