Alternatives with stronger convergence than coordinate-descent iterative LMI algorithms


In this note we emphasize that solving non-convex optimization problems with coordinate-descent iterative linear matrix inequality algorithms leads to suboptimal solutions, and we put forward other optimization methods better equipped to deal with such problems, having theoretical convergence guarantees and/or being more efficient in practice. Although this fact has already been pointed out in several places in the literature, it still appears to be disregarded by a sizable part of the systems and control community. The main elements of this issue and better optimization alternatives are therefore presented and illustrated by means of an example.


💡 Research Summary

The paper “Alternatives with stronger convergence than coordinate‑descent iterative LMI algorithms” critically examines the widespread practice of tackling non‑convex control‑oriented optimization problems by recasting them as bilinear matrix inequalities (BMIs) and then applying a coordinate‑descent iterative LMI (CDILMI) scheme. The CDILMI approach splits the set of “complicating” variables into two subsets, fixes one while solving an LMI for the other, and iterates. While this guarantees a monotonic non‑increase of the objective, the authors show that it only yields “partial optimal” points: each sub‑problem is solved optimally, but the overall solution is generally not locally optimal in the full variable space. In practice the algorithm often stalls at the boundary of the feasible set, delivering solutions that are far from a true optimum.
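This stalling behaviour is easy to reproduce on a toy problem. The sketch below is a hypothetical scalar example (not taken from the paper): it alternates exact coordinate-wise minimization on a non-convex objective with bilinear coupling. Each subproblem is a convex quadratic solved in closed form, and the objective decreases monotonically, yet from the origin the scheme is stuck at a "partial optimal" point that is only a saddle of the joint problem.

```python
def f(x, y):
    # Hypothetical non-convex objective with bilinear coupling (not from the paper)
    return (x * y - 1.0) ** 2 + 0.1 * (x * x + y * y)

def cd_iteration(x, y):
    # Each coordinate subproblem is a convex quadratic, solved exactly:
    x = y / (y * y + 0.1)  # argmin over x with y fixed
    y = x / (x * x + 0.1)  # argmin over y with x fixed
    return x, y

def coordinate_descent(x, y, iters=50):
    for _ in range(iters):
        x, y = cd_iteration(x, y)
    return x, y, f(x, y)
```

From (0, 0) every coordinate subproblem returns the current point, so the iteration never moves (f stays at 1.0) even though descent directions exist in the joint space — the joint minimizers give f = 0.19. Started from (1, 1) instead, the same iteration does reach f ≈ 0.19, mirroring the initialization-dependent stalling the paper attributes to CDILMI schemes.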

To address these shortcomings, the authors present two families of methods that possess rigorous convergence guarantees and superior practical performance.

  1. Gradient‑based nonsmooth optimization – The paper highlights two mature MATLAB tools: HIFOO and hinfstruct. HIFOO begins with a quasi‑Newton (BFGS) phase and then switches to random gradient sampling and bundle methods, exploiting the fact that many control objectives (e.g., H∞ norm, spectral abscissa) are differentiable almost everywhere. hinfstruct builds on Clarke sub‑differential extensions, constructing a quadratic tangent model at each iterate and solving a local sub‑problem. Both methods are proven to converge to a stationary point for locally Lipschitz (or even merely directionally differentiable) objectives, and they have demonstrated robustness on a wide range of fixed‑order controller design tasks.

  2. Derivative‑free optimization (DFO) – The authors review the theoretical foundations of pattern‑search and mesh‑adaptive direct‑search algorithms (MDS, MADS) as presented in the works of Torczon, Audet & Dennis, and others. These algorithms require only function evaluations, making them attractive when gradients are unavailable or too costly. Convergence results cover smooth, nonsmooth, and even discontinuous objective functions, guaranteeing that limit points satisfy Clarke‑type optimality conditions. Recent applications in static output‑feedback (SOF) synthesis and H∞/H2 performance minimization have shown that DFO can rival or surpass gradient‑based methods, especially when combined with intelligent restart strategies.
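To make the gradient-sampling idea of point 1 concrete, here is a minimal one-dimensional sketch in the spirit of Burke–Lewis–Overton gradient sampling (a toy illustration, not HIFOO's actual code). Derivatives are sampled in an ε-ball around the iterate; in 1-D the minimum-norm element of their convex hull is zero when they straddle zero (approximate stationarity, so ε shrinks) and otherwise the endpoint of the hull nearest zero, which gives a descent direction.

```python
import random

def grad_sampling_1d(f, df, x, eps=0.5, m=6, tol=1e-8, max_iter=10000):
    # Toy 1-D gradient sampling (Burke-Lewis-Overton style, as in HIFOO's
    # sampling phase); a sketch under simplifying assumptions, not HIFOO itself.
    # df only needs to exist almost everywhere (e.g. the derivative of |x|).
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(max_iter):
        if eps <= tol:
            break
        # Sample derivatives in an eps-ball around x (plus x itself).
        gs = [df(x + eps * rng.uniform(-1.0, 1.0)) for _ in range(m)] + [df(x)]
        lo, hi = min(gs), max(gs)
        if lo <= 0.0 <= hi:
            eps *= 0.5  # 0 is in the convex hull: approximately stationary, shrink the ball
            continue
        g = lo if lo > 0.0 else hi  # minimum-norm element of the hull [lo, hi]
        t = eps
        while f(x - t * g) >= f(x) and t > tol:  # simple backtracking line search
            t *= 0.5
        x -= t * g
    return x
```

On f(x) = |x|, which is nonsmooth exactly at its minimizer, the sketch drives the iterate to (numerically) zero even though the derivative never vanishes anywhere it is defined.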
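Similarly, the direct-search idea of point 2 can be sketched with a bare-bones compass search: poll the 2n coordinate directions, accept the first strict improvement, and halve the step when the poll fails. This is a deliberate simplification of the Torczon / Audet–Dennis pattern-search family (no search step, fixed poll set), not the full MADS algorithm.

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10000):
    # Bare-bones compass (pattern) search, using only function evaluations.
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:          # strict decrease: move the iterate
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # unsuccessful poll: refine the mesh
    return x, fx
```

Because only f-values are compared, the same loop runs unchanged on smooth, nonsmooth, or black-box objectives, e.g. minimizing (x₀ − 1)² + |x₁ + 2| from the origin.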

The paper validates these claims through a concrete example taken from a recent CDILMI‑based design of a reduced‑order positive discrete‑time filter (reference

