Curse of dimensionality reduction in max-plus based approximation methods: theoretical estimates and improved pruning algorithms

Notice: This research summary and analysis were automatically generated using AI technology. For complete accuracy, please refer to the original arXiv source.

Max-plus based methods have been recently developed to approximate the value function of possibly high dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus “basis functions”) taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and $k$-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates, quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse of dimensionality free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.


💡 Research Summary

This paper investigates the use of max‑plus algebra for approximating the value function of high‑dimensional optimal control problems, focusing on both theoretical limits and practical algorithmic improvements. The authors begin by reviewing the dynamic programming formulation of optimal control, which leads to a Hamilton‑Jacobi‑Bellman (HJB) partial differential equation. Traditional grid‑based methods suffer from the curse of dimensionality because the computational effort grows exponentially with the state dimension d. Max‑plus based methods avoid this exponential blow‑up by representing the value function as a supremum (max‑plus linear combination) of a finite set of basis functions drawn from a prescribed dictionary.
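The algebraic fact underlying this representation is that the evolution semigroup of such optimal control problems is max-plus linear: propagating the pointwise maximum of two functions yields the pointwise maximum of their propagations. A minimal numerical sketch of this property (not from the paper; the 1-D sup-convolution semigroup and all parameter values are illustrative assumptions):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 601)   # 1-D state grid (illustrative)
t = 0.5                            # time horizon (illustrative)

def S(f):
    # Hypothetical Lax-Oleinik-type sup-convolution semigroup:
    #   (S_t f)(x) = sup_y [ f(y) - |x - y|^2 / (2t) ]
    return np.max(f[None, :] - (x[:, None] - x[None, :]) ** 2 / (2 * t), axis=1)

f = -np.abs(x - 1.0)
g = -2.0 * np.abs(x + 1.0)

# Max-plus linearity: S_t(sup(f, g)) = sup(S_t f, S_t g)
lhs = S(np.maximum(f, g))
rhs = np.maximum(S(f), S(g))
assert np.allclose(lhs, rhs)
```

Because the semigroup commutes with pointwise maxima, propagating a max-plus linear combination of basis functions reduces to propagating each basis function separately.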

The paper concentrates on a common choice of basis functions: quadratic forms of the type
\(w_{p}(x) = -\tfrac{c}{2}|x|^{2} + p^{\top}x\),
where the parameter \(c > 0\) is fixed and \(p\) varies in \(\mathbb{R}^{d}\). Any c-semiconvex function can be expressed exactly as the supremum of infinitely many such functions, but in practice only a finite number n can be kept. The central question is how the approximation error depends on n and on the dimension d.
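Concretely, keeping n basis functions means choosing n slopes \(p_i\) together with max-plus coefficients \(a_i = \inf_x \,(\psi(x) - w_{p_i}(x))\), so that each shifted basis function touches ψ from below. A small 1-D sketch (the target function, grid, and dictionary of slopes are illustrative choices, not from the paper):

```python
import numpy as np

c = 1.0
x = np.linspace(-2.0, 2.0, 2001)   # discretization grid
psi = x ** 4                        # convex (hence c-semiconvex) target function

def maxplus_approx(slopes):
    """Sup of basis functions w_p(x) = -(c/2)x^2 + p*x + a_p touching psi from below."""
    approx = np.full_like(x, -np.inf)
    for p in slopes:
        w = -0.5 * c * x ** 2 + p * x
        a = np.min(psi - w)          # max-plus coefficient: largest a with w + a <= psi
        approx = np.maximum(approx, w + a)
    return approx

for n in (5, 20, 80):
    slopes = np.linspace(-35.0, 35.0, n)   # crude finite dictionary of slopes p
    err = np.max(psi - maxplus_approx(slopes))
    print(f"n = {n:3d}  sup-error = {err:.4f}")
```

Each coefficient is an infimum (a max-plus "scalar product"), so the approximation is always a lower bound of ψ, and the error shrinks as the dictionary of kept slopes is refined.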

By reformulating the approximation problem as the selection of n affine minorants that best cover a given convex (or semiconvex) function, the authors connect it to continuous versions of the facility‑location and k‑center combinatorial optimization problems, where the cost is measured by a Bregman distance. They prove two asymptotic error estimates. For a C², c‑semiconvex function ψ, the optimal L¹ error behaves as
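In the k-center view, keeping n basis functions amounts to covering ψ with n "centers" so as to minimize the worst-case gap, which suggests a farthest-point-style greedy pruning scheme. A hypothetical sketch (the greedy rule, tangent construction, and test function are illustrative, not the paper's actual pruning algorithm):

```python
import numpy as np

c = 1.0
x = np.linspace(-2.0, 2.0, 801)
psi = x ** 4                        # c-semiconvex test function

# Large dictionary: one basis function tangent from below at each grid point.
# With phi = psi + (c/2)x^2 convex, the tangent of phi at x_j gives
# w_j(x) = -(c/2)x^2 + p_j x + a_j,  p_j = phi'(x_j),  a_j = phi(x_j) - p_j x_j.
p = 4 * x ** 3 + c * x
a = (psi + 0.5 * c * x ** 2) - p * x
W = -0.5 * c * x[None, :] ** 2 + p[:, None] * x[None, :] + a[:, None]

def greedy_prune(n):
    """Keep n dictionary elements, each step fixing the worst-covered grid point."""
    chosen = [0]                     # seed with an arbitrary element
    approx = W[0].copy()
    for _ in range(n - 1):
        worst = np.argmax(psi - approx)    # grid point with the largest gap
        j = int(np.argmax(W[:, worst]))    # dictionary element that best covers it
        chosen.append(j)
        approx = np.maximum(approx, W[j])
    return chosen, float(np.max(psi - approx))

_, err10 = greedy_prune(10)
_, err40 = greedy_prune(40)
print(err10, err40)   # the sup-norm gap shrinks as more elements are kept
```

This mirrors the classical 2-approximation heuristic for k-center: each new center is placed where the current covering is worst.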

\[
\min_{p_i,\,a_i}\; \Big\| \psi - \max_{1 \le i \le n} \big( a_i + w_{p_i} \big) \Big\|_{L^1} \;\sim\; \frac{C(\psi)}{n^{2/d}} \qquad (n \to \infty),
\]

where the constant \(C(\psi)\) depends on an integral of the Hessian of ψ. Equivalently, reaching an accuracy ε requires on the order of \(\varepsilon^{-d/2}\) basis functions: the curse of dimensionality is attenuated, but not eliminated.
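For a C² target in dimension d = 1, an error of order \(n^{-2}\) is expected, which can be checked numerically with equally spaced tangency points. A sanity-check sketch under illustrative choices of ψ, c, and the interval (not code from the paper):

```python
import numpy as np

c = 1.0

def l1_error(n, lo=-1.0, hi=1.0, m=20001):
    """L1 gap between psi(x) = x^4 and the sup of n tangent basis functions."""
    x = np.linspace(lo, hi, m)
    psi = x ** 4
    pts = np.linspace(lo, hi, n)           # equally spaced tangency points
    p = 4 * pts ** 3 + c * pts             # slopes: derivative of psi + (c/2)x^2
    a = pts ** 4 + 0.5 * c * pts ** 2 - p * pts
    approx = (-0.5 * c * x[None, :] ** 2
              + p[:, None] * x[None, :] + a[:, None]).max(axis=0)
    dx = x[1] - x[0]
    return float(np.sum(psi - approx) * dx)

e20, e40 = l1_error(20), l1_error(40)
print(e20 / e40)   # ratio should approach 2**2 = 4 as n grows (d = 1)
```

Doubling the number of basis functions divides the L¹ error by roughly four, consistent with the \(n^{-2/d}\) rate in dimension one.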

