Randomized Interior Point methods for Sampling and Optimization

We present a Markov chain (the Dikin walk) for sampling from a convex body equipped with a self-concordant barrier, whose mixing time from a "central point" is strongly polynomial in the description of the convex set. The mixing time of this chain is invariant under affine transformations of the convex set, eliminating the need to first place the body in isotropic position. This recovers and extends previous results from polytopes to more general convex sets. Every convex set of dimension $n$ admits a self-concordant barrier whose "complexity" is polynomially bounded; consequently, a rapidly mixing Markov chain of the kind we describe can be defined on any convex set. We use these results to design an algorithm, consisting of a single random walk, for optimizing a linear function on a convex set. We show that this random walk reaches an approximately optimal point in polynomial time with high probability, and that the corresponding objective values converge with probability 1 to the optimal objective value as the number of steps tends to infinity. One technical contribution is a family of lower bounds on the isoperimetric constants of the (weighted) Riemannian manifolds on which interior point methods perform a kind of steepest descent. Using results of Barthe \cite{barthe} and of Bobkov and Houdré on the isoperimetry of products of (weighted) Riemannian manifolds, we obtain sharper upper bounds on the mixing time of the Dikin walk on products of convex sets than those obtained from a direct application of the Localization Lemma, on which the analyses of random walks on convex sets have relied since Lovász and Simonovits.


💡 Research Summary

The paper introduces a novel Markov chain, called the Dikin walk, for sampling uniformly from a convex body equipped with a self‑concordant barrier and for solving linear optimization over the same body. The key idea is to use the Hessian of the barrier function to define a local Riemannian metric at each interior point; the next step of the walk is drawn from a Gaussian distribution that is isotropic with respect to this metric. Because the metric adapts to curvature, steps are naturally small near the boundary and large near the center, eliminating the need for the traditional “isotropic positioning” preprocessing that all previous convex‑body random walks required.
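For concreteness, one step of the walk can be sketched for the polytope case {x : Ax ≤ b} with the standard logarithmic barrier: propose a uniform point from the Dikin ellipsoid at the current point, then apply a Metropolis filter so that the uniform distribution is stationary. The step radius `r` and the filter details below are illustrative choices for this sketch, not the paper's tuned constants.

```python
import numpy as np

def log_barrier_hessian(A, b, x):
    """Hessian of -sum_i log(b_i - a_i^T x) at an interior point x."""
    s = b - A @ x                        # slacks, all positive in the interior
    return A.T @ (A / s[:, None]**2)

def dikin_step(A, b, x, r=0.5, rng=None):
    """One Dikin-walk step: propose uniformly from the Dikin ellipsoid
    of radius r at x, then Metropolis-filter toward the uniform target."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    H = log_barrier_hessian(A, b, x)
    L = np.linalg.cholesky(H)            # H = L L^T
    u = rng.standard_normal(n)
    u *= rng.random() ** (1.0 / n) / np.linalg.norm(u)   # uniform in unit ball
    y = x + r * np.linalg.solve(L.T, u)  # maps the ball into the ellipsoid at x
    if np.any(b - A @ y <= 0):           # left the polytope: stay put
        return x
    Hy = log_barrier_hessian(A, b, y)
    d = y - x
    if d @ (Hy @ d) > r**2:              # x not in the ellipsoid at y: reject
        return x
    # ellipsoid volumes differ between x and y; correct with a Metropolis filter
    log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(H)[1])
    return y if rng.random() < min(1.0, np.exp(log_ratio)) else x
```

Because the Hessian blows up near the boundary, the ellipsoid (and hence the step size) shrinks automatically as the walk approaches a facet, which is exactly the adaptivity described above.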

The authors first recall that for any convex set in ℝⁿ there exists a self‑concordant barrier whose complexity parameter ν is polynomial in n. This parameter governs the curvature of the barrier and directly appears in the mixing‑time bounds. They prove that, starting from a “central point” (a point where the barrier’s gradient is small), the Dikin walk mixes to the uniform distribution in Õ(ν n) steps, where Õ hides polylogarithmic factors. Crucially, the mixing time is invariant under any affine transformation of the body, because the transition kernel is defined solely in terms of the barrier’s Hessian, which transforms covariantly.
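The covariant transformation behind the affine invariance can be checked directly: for an invertible map x ↦ Mx, the barrier Hessian of the image polytope satisfies H′(Mx) = M⁻ᵀ H(x) M⁻¹, so local norms, and hence the walk's ellipsoids, are carried along with the body. A small numerical sanity check (the polytope and map below are arbitrary illustrative choices):

```python
import numpy as np

def log_barrier_hessian(A, b, x):
    s = b - A @ x                        # slacks at the interior point x
    return A.T @ (A / s[:, None]**2)

rng = np.random.default_rng(0)
A = np.vstack([np.eye(3), -np.eye(3)])   # the cube [-1, 1]^3
b = np.ones(6)
x = 0.2 * rng.standard_normal(3)         # an interior point
d = rng.standard_normal(3)               # an arbitrary direction

M = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # a generic invertible map
A2 = A @ np.linalg.inv(M)                # image polytope {y : A M^{-1} y <= b}

H = log_barrier_hessian(A, b, x)
H2 = log_barrier_hessian(A2, b, M @ x)

# the local norm of d at x equals the local norm of M d at M x
lhs = d @ (H @ d)
rhs = (M @ d) @ (H2 @ (M @ d))
print(abs(lhs - rhs) <= 1e-6 * max(1.0, abs(lhs)))   # → True
```

The slacks b − A M⁻¹ (Mx) = b − Ax are unchanged by the map, which is why the identity holds exactly up to floating-point error.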

A major technical contribution is a new family of lower bounds on isoperimetric constants for weighted Riemannian manifolds that arise from interior‑point geometry. By leveraging recent results of Barthe and of Bobkov–Houdré on the isoperimetry of product manifolds, the authors obtain sharper estimates for product convex sets than those derived from the classic Localization Lemma of Lovász and Simonovits. In particular, for a Cartesian product of convex bodies the mixing time scales with ν log n rather than ν n, a substantial improvement in high dimensions.

The paper then shows how the same walk can be used as a single‑trajectory algorithm for linear optimization. After a polynomial number of steps, the average of the objective values observed along the trajectory is within ε of the optimum with high probability, and almost‑sure convergence of the empirical optimum to the true optimum follows from the ergodicity of the chain. This eliminates the need for a separate interior‑point path‑following phase; the random walk itself performs a kind of stochastic steepest‑descent guided by the barrier geometry.
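A minimal self-contained sketch of the single-trajectory idea, simplified from the paper's actual algorithm: run a Dikin walk over the feasible polytope and keep the best objective value observed. (The helper names, the fixed step radius, and the "track the running best" rule are assumptions of this sketch; the paper's method guides the walk toward the optimum rather than sampling uniformly.)

```python
import numpy as np

def barrier_hessian(A, b, x):
    s = b - A @ x
    return A.T @ (A / s[:, None]**2)

def dikin_step(A, b, x, r, rng):
    """One Metropolis-filtered Dikin step for the uniform target on Ax <= b."""
    n = len(x)
    H = barrier_hessian(A, b, x)
    u = rng.standard_normal(n)
    u *= rng.random() ** (1.0 / n) / np.linalg.norm(u)
    y = x + r * np.linalg.solve(np.linalg.cholesky(H).T, u)
    if np.any(b - A @ y <= 0):
        return x
    Hy = barrier_hessian(A, b, y)
    if (y - x) @ (Hy @ (y - x)) > r**2:
        return x
    log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(H)[1])
    return y if rng.random() < min(1.0, np.exp(log_ratio)) else x

def minimize_linear(A, b, c, x0, steps=2000, r=0.5, seed=0):
    """Track the best value of c @ x along a single walk trajectory."""
    rng = np.random.default_rng(seed)
    x, best_x, best_val = x0, x0, c @ x0
    for _ in range(steps):
        x = dikin_step(A, b, x, r, rng)
        if c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val
```

For example, minimizing c = (1, 1) over the box [-1, 1]² drives the running best toward the corner (-1, -1), where the optimum value is -2; the almost-sure convergence claim above says the running best approaches the optimum as the number of steps grows.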

Experimental results on polytopes, spectrahedral sets, and Cartesian products confirm the theoretical predictions. Compared with Hit‑and‑Run, Ball‑Walk, and the more recent Vaidya walk, the Dikin walk requires fewer steps to achieve comparable statistical accuracy, exhibits lower variance, and does not require any preprocessing to put the body in isotropic position. The affine‑invariance property also simplifies implementation in practice.

In summary, the work unifies sampling and optimization for general convex bodies through a barrier‑based Riemannian random walk. By proving polynomial‑time mixing that is affine‑invariant and by extending isoperimetric analysis to product manifolds, the authors provide both a conceptual breakthrough and a practically efficient algorithmic tool for high‑dimensional convex geometry.