The simplex method is strongly polynomial for deterministic Markov decision processes
We prove that the simplex method with the highest-gain/most-negative-reduced-cost pivoting rule converges in strongly polynomial time for deterministic Markov decision processes (MDPs) regardless of the discount factor. For a deterministic MDP with n states and m actions, we prove the simplex method runs in $O(n^3 m^2 \log^2 n)$ iterations if the discount factor is uniform and $O(n^5 m^3 \log^2 n)$ iterations if each action has a distinct discount factor. Previously, the simplex method was known to run in polynomial time only for discounted MDPs where the discount was bounded away from 1 [Ye11]. Unlike in the discounted case, the algorithm does not greedily converge to the optimum, and we require a more complex measure of progress. We identify a set of layers in which the values of the primal variables must lie and show that the simplex method always makes progress optimizing one layer; when the upper layer is updated, the algorithm makes a substantial amount of progress. In the case of nonuniform discounts, we define a polynomial number of “milestone” policies and we prove that, while the objective function may not improve substantially overall, the value of at least one dual variable is always making progress towards some milestone, and the algorithm reaches the next milestone in a polynomial number of steps.
💡 Research Summary
The paper establishes that the simplex method, when equipped with the highest‑gain (or most‑negative‑reduced‑cost) pivot rule, solves deterministic Markov decision processes (MDPs) in strongly polynomial time, irrespective of the discount factors. A deterministic MDP consists of n states and m actions, each action leading to a unique successor state. The authors first formulate the optimal control problem as a linear program (LP) whose primal variables correspond to the actions and whose dual variables are the state values. The pivot rule selects the non‑basic variable with the largest improvement in the objective, which in the deterministic setting reduces to the action with the greatest “gain”: the reduced cost $r_{sa} + \gamma_a v_{s'} - v_s$ of an action $a$ that moves from state $s$ to state $s'$ with reward $r_{sa}$ and discount $\gamma_a$.
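As a concrete illustration of the pivot rule (this is a sketch, not code from the paper; the tuple encoding and all names are assumptions made for the example):

```python
def highest_gain_action(actions, v):
    """Select the pivot under the highest-gain rule.

    `actions` lists tuples (s, a_id, succ, reward, gamma): source state,
    action id, successor state, immediate reward, and per-action discount.
    `v` maps each state to its current value.  Returns the best action tuple
    and its gain r_sa + gamma_a * v[s'] - v[s]; (None, 0.0) means no action
    has positive gain, i.e. the current basis is optimal.
    """
    best, best_gain = None, 0.0
    for act in actions:
        s, a_id, succ, reward, gamma = act
        gain = reward + gamma * v[succ] - v[s]
        if gain > best_gain:
            best, best_gain = act, gain
    return best, best_gain

# Example: two states, three actions, current values all zero.
acts = [(0, "a", 1, 1.0, 0.9), (0, "b", 0, 0.5, 0.9), (1, "c", 0, 2.0, 0.9)]
best, gain = highest_gain_action(acts, {0: 0.0, 1: 0.0})  # picks "c", gain 2.0
```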
Traditional analyses of the simplex method rely on the magnitude of the objective‑function decrease per pivot. In deterministic MDPs this decrease can be arbitrarily small, especially when discount factors approach one, making such analyses insufficient. To overcome this, the authors introduce a novel progress measure based on layers: the range of possible primal‑variable values is partitioned into a polynomial number of intervals (layers). Within a layer, any pivot guarantees an increase of at least $1/(nm)$ in at least one primal variable; when a pivot moves a variable to a higher layer, the overall value improves by a factor of $\Omega(1/n)$. This layered structure is the source of the $\log^2 n$ factor in the total bound.
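The paper's actual layer construction is more delicate, but a minimal geometric-layering sketch conveys the idea; the ratio $1 + 1/n$ and the function below are illustrative assumptions, not the paper's exact definition:

```python
import math

def layer_index(value, lo, n):
    """Illustrative geometric layering (assumed form, not the paper's).

    Layer k covers [lo * c**k, lo * c**(k+1)) with c = 1 + 1/n, so values in
    the same layer agree up to a (1 + 1/n) factor, and climbing one layer is
    a multiplicative gain of roughly 1/n.  O(n log R) layers cover any range
    of relative width R, which is how logarithmic factors enter the bound.
    """
    c = 1.0 + 1.0 / n
    return math.floor(math.log(value / lo) / math.log(c))
```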
For the case of a uniform discount factor (all actions share the same $\gamma$), the layer analysis alone suffices: the authors prove that the number of pivots never exceeds $O(n^3 m^2 \log^2 n)$. Since each pivot can be performed in $O(m)$ arithmetic operations, the total number of operations is bounded by a polynomial in n and m alone, independent of the magnitudes of the rewards and discounts, which is precisely the definition of a strongly polynomial algorithm.
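For intuition, on a deterministic MDP each simplex pivot corresponds to switching a single state's action. A toy sketch of this loop for a uniform discount follows; the naive fixed-point policy evaluation and the tolerance are assumptions made for brevity, not the paper's LP machinery:

```python
def evaluate(policy, succ, reward, gamma, n, iters=2000):
    """Evaluate a deterministic policy by fixed-point iteration (gamma < 1)."""
    v = [0.0] * n
    for _ in range(iters):
        v = [reward[s][policy[s]] + gamma * v[succ[s][policy[s]]]
             for s in range(n)]
    return v

def simplex_highest_gain(succ, reward, gamma, n):
    """Simplex with the highest-gain rule, viewed as single-switch policy
    iteration: each pivot adopts the one action of largest positive gain."""
    policy = [0] * n  # an arbitrary initial basic feasible policy
    while True:
        v = evaluate(policy, succ, reward, gamma, n)
        best, best_gain = None, 1e-9  # small tolerance absorbs float noise
        for s in range(n):
            for a in range(len(succ[s])):
                gain = reward[s][a] + gamma * v[succ[s][a]] - v[s]
                if gain > best_gain:
                    best, best_gain = (s, a), gain
        if best is None:
            return policy, v  # no improving action: optimal
        policy[best[0]] = best[1]  # the pivot: switch one state's action
```

On a two-state example (state 0 can stay for reward 0 or move to state 1 for reward 1; state 1 can stay for reward 2 or move back for reward 0) with gamma = 0.9, the loop terminates at the policy that moves 0 to 1 and keeps 1 in place, with values 19 and 20.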
When discount factors are non‑uniform (each action may have its own $\gamma_a$), the layer argument alone does not guarantee progress because the relative importance of actions can shift dramatically. To handle this, the paper defines a polynomial‑size set of milestone policies, ordered according to the discount hierarchy, which serve as way‑points for the algorithm. Although the objective value may not improve substantially between milestones, the authors show that at least one dual variable (a state value) makes guaranteed progress of at least $1/(nm)$ toward the next milestone. Consequently, the algorithm must reach each successive milestone in a polynomial number of pivots, leading to an overall bound of $O(n^5 m^3 \log^2 n)$ pivots for the non‑uniform case.
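The dual variables tracked by the milestone argument are the state values of successive policies, which remain well defined with per-action discounts. A small evaluation sketch (fixed-point iteration, assuming every discount is below 1; names are illustrative):

```python
def evaluate_nonuniform(policy, succ, reward, gamma, n, iters=2000):
    """Evaluate a deterministic policy where each action (s, a) carries its
    own discount gamma[s][a] < 1, by fixed-point iteration."""
    v = [0.0] * n
    for _ in range(iters):
        v = [reward[s][policy[s]]
             + gamma[s][policy[s]] * v[succ[s][policy[s]]]
             for s in range(n)]
    return v

# Example: a single state with a self-loop of reward 1 and discount 0.5
# has value 1 / (1 - 0.5) = 2.
values = evaluate_nonuniform([0], [[0]], [[1.0]], [[0.5]], 1)
```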
The main contributions are threefold: (1) a proof that the simplex method with the highest‑gain rule is strongly polynomial for deterministic MDPs regardless of discount factors, extending beyond the previously known regime where $\gamma$ is bounded away from 1; (2) the introduction of the layered progress measure, which replaces the traditional reliance on objective‑function decrease; and (3) the milestone‑policy framework that ensures steady dual‑variable advancement in the presence of heterogeneous discounts.
These results have significant theoretical implications: they demonstrate that a classic, widely used algorithm, the simplex method, can be guaranteed to run efficiently on a broad class of decision‑making problems that were previously thought to be challenging when the discount factor is close to one. Moreover, the techniques of layering and milestones may be adaptable to other LP‑based algorithms, to stochastic MDPs, or even to integer programming contexts where progress measures are hard to define. Future work could explore extensions to stochastic transition structures, alternative pivot rules such as Bland’s rule, and empirical studies of practical performance on large‑scale deterministic MDP instances.