Bellman Optimality of Average-Reward Robust Markov Decision Processes with a Constant Gain


Learning and optimal control in robust Markov decision processes (MDPs) have received increasing attention, yet most existing theory, algorithms, and applications focus on finite-horizon or discounted models. Long-run average-reward formulations, while natural in many operations research and management contexts, remain underexplored, primarily because the dynamic programming foundations are technically challenging and only partially understood, with several fundamental questions still open. This paper takes a step toward a general framework for average-reward robust MDPs by analyzing the constant-gain setting. We study the average-reward robust control problem with possible information asymmetries between the controller and an S-rectangular adversary. Our analysis centers on the constant-gain robust Bellman equation, examining both the existence of solutions and their relationship to the optimal average reward. Specifically, we identify when solutions to the robust Bellman equation characterize the optimal average reward and stationary policies, and we provide one-sided weak-communication conditions that ensure such solutions exist. These findings expand the dynamic programming theory for average-reward robust MDPs and lay a foundation for robust dynamic decision making under long-run average criteria in operational environments.


💡 Research Summary

This paper tackles a fundamental gap in the theory of robust Markov decision processes (RMDPs) under the long‑run average‑reward criterion. While discounted‑reward robust MDPs have been extensively studied, the average‑reward setting remains largely unresolved because the associated Bellman optimality equations are technically challenging and their relationship to the optimal average value is not fully understood. The authors focus on the constant‑gain formulation, where the Bellman equation contains an unknown scalar “gain” α that represents the optimal average reward.
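
Concretely, the constant-gain robust Bellman equation takes the following form in the S-rectangular setting (a standard rendering; the excerpt does not reproduce the paper's exact notation, so the symbols v, Δ(A), and 𝒫_s below are assumptions):

\[
\alpha + v(s) \;=\; \max_{\pi \in \Delta(A)} \;\min_{(p,\,r) \in \mathcal{P}_s} \;\sum_{a \in A} \pi(a)\left[ r(s,a) + \sum_{s' \in S} p(s' \mid s, a)\, v(s') \right], \qquad s \in S,
\]

where v is a relative-value (bias) function, Δ(A) is the set of randomized decision rules at a state, and 𝒫_s is the adversary's uncertainty set of transition-reward pairs at state s. A pair (α, v) solving this equation is the central object of study: α is the candidate optimal average reward, and maximizing decision rules assemble into a candidate optimal stationary policy.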

Model and Problem Statement
The authors consider a finite state space S and a finite action space A, with bounded rewards r(s,a) ∈ ℝ for every state-action pair.
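
To make the computational object concrete, below is a minimal numerical sketch of relative value iteration aimed at the constant-gain equation. It deliberately simplifies the paper's setting: the uncertainty set is a finite family of candidate models, treated (s,a)-rectangularly rather than S-rectangularly, and convergence of the iteration is exactly the kind of behavior that hinges on weak-communication-type conditions such as those the paper studies. All function and variable names are illustrative, not the authors'.

```python
import numpy as np


def robust_relative_value_iteration(P_models, R_models, n_iters=5000, tol=1e-10):
    """Relative value iteration for an average-reward robust MDP.

    Simplified (s,a)-rectangular illustration: the adversary picks,
    independently for every state-action pair, one of M candidate models.

    P_models : array (M, S, A, S) of candidate transition kernels
    R_models : array (M, S, A) of candidate reward functions
    Returns (gain_estimate, h), with h a relative-value (bias) vector.
    """
    M, S, A, _ = P_models.shape
    h = np.zeros(S)
    gain = 0.0
    for _ in range(n_iters):
        # Q[m, s, a] = r_m(s, a) + sum_{s'} P_m(s' | s, a) * h(s')
        Q = R_models + P_models @ h     # shape (M, S, A)
        worst_Q = Q.min(axis=0)         # adversary: worst model per (s, a)
        Th = worst_Q.max(axis=1)        # controller: best action per state
        gain = Th[0] - h[0]             # gain estimate at reference state 0
        h_new = Th - Th[0]              # re-anchor so the iterates stay bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return gain, h


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, S, A = 3, 4, 2
    P = rng.random((M, S, A, S))
    P /= P.sum(axis=-1, keepdims=True)  # normalize rows into distributions
    R = rng.random((M, S, A))
    gain, h = robust_relative_value_iteration(P, R)
    print(f"estimated optimal average reward (gain): {gain:.4f}")
```

At a fixed point, gain + h(s) equals the robust Bellman operator applied to h at every state, so (gain, h) solves the simplified constant-gain equation; whether such a fixed point exists, and whether the resulting gain equals the optimal average reward, are precisely the questions the paper addresses in the more general S-rectangular setting.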

