Duality Gap, Computational Complexity and NP Completeness: A Survey
We survey research that studies the connection between the computational complexity of optimization problems on the one hand, and the duality gap between the primal and dual optimization problems on the other. To our knowledge, this is the first survey connecting these two important areas. We further look at a similar phenomenon in finite model theory relating complexity and optimization.
💡 Research Summary
The paper surveys the relationship between the duality gap of optimization problems and their computational complexity, focusing especially on connections to NP‑completeness. It begins by recalling that the duality gap—the difference between the optimal values of a primal problem and its dual—has been observed for decades, but a systematic study of its link to algorithmic difficulty has been lacking. The authors ask two central questions: (1) Does the existence of polynomial‑time algorithms for both the primal and its dual guarantee a zero duality gap? (2) Conversely, does a non‑zero duality gap imply that at least one of the two problems is NP‑hard?
The paper first formalizes decision versions of optimization problems (denoted D₁(r)) and reviews standard complexity classes NP, CoNP, and P. It introduces the notion of “tight duals” (TD): a pair of optimization problems that are each other’s dual and have zero duality gap. Lemma 9 shows that TD ⊆ NP ∩ CoNP, establishing that any problem with a tight dual lies in the intersection of NP and its complement. However, the relationship between TD and P remains open; the authors note that while P ⊆ NP ∩ CoNP, it is unknown whether TD ⊆ P or P ⊆ TD.
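The inclusion TD ⊆ NP ∩ CoNP can be sketched in two lines: for the decision question "is the primal optimum at most r?", a feasible primal point serves as a yes-certificate, while a feasible dual point serves as a no-certificate once the gap is zero (the notation below follows the standard textbook setup rather than the paper's exact symbols):

```latex
% Yes-certificate: a feasible x with f(x) <= r shows min f <= r.
\exists\, x \ \text{feasible}: \; f(x) \le r \;\Longrightarrow\; \text{YES}.
% No-certificate: a feasible (u,v) with \theta(u,v) > r shows, by weak
% duality, min f \ge \theta(u,v) > r; zero duality gap guarantees such
% a (u,v) exists whenever the true answer is NO.
\exists\, (u,v) \ \text{feasible}: \; \theta(u,v) > r \;\Longrightarrow\; \text{NO}.
```

Membership in NP comes from the first certificate, membership in CoNP from the second; both certificates are single feasible points, hence polynomially verifiable.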
The authors distinguish weak duality (the universal inequality θ(u,v) ≤ f(x) for any feasible primal x and dual (u,v)) from strong duality (equality holds for some feasible pair). Theorem 13 confirms weak duality for Lagrangian duals. Theorem 17 provides sufficient conditions for strong duality in convex programs that satisfy a constraint qualification (existence of an interior feasible point). Corollary 18 follows: if strong duality fails (i.e., a duality gap exists), then at least one of the primal or dual is non‑convex.
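Weak duality for the Lagrangian dual follows almost immediately from the definitions; a short derivation, using the usual textbook notation (which may differ slightly from the paper's):

```latex
% Primal: \min f(x) \ \text{s.t.}\ g(x) \le 0,\ h(x) = 0.
L(x,u,v) = f(x) + u^{\top} g(x) + v^{\top} h(x), \qquad
\theta(u,v) = \inf_{x} L(x,u,v).
% For any feasible x (g(x) \le 0, h(x) = 0) and any u \ge 0:
\theta(u,v) \;\le\; L(x,u,v) \;=\; f(x) + u^{\top} g(x) \;\le\; f(x).
```

The inequality holds universally, which is why weak duality needs no convexity assumptions, whereas strong duality (equality for some feasible pair) does.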
The paper then links non‑convexity to NP‑hardness. Theorem 19, based on a reduction from Subset‑Sum, states that any non‑convex optimization problem is NP‑hard. This result is reinforced by the concept of “hidden convexity”: a non‑convex problem may have a convex dual with zero gap, but if that convex dual is itself NP‑hard, the original problem remains hard. The authors illustrate this with the Standard Quadratic Program (SQP), which is non‑convex yet admits an exact reformulation as a copositive program; both formulations are NP‑hard because the decision version embeds the maximum‑clique problem.
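The clique embedding behind the SQP example is the classical Motzkin–Straus theorem: for the adjacency matrix A of a graph with clique number ω(G), the maximum of xᵀAx over the probability simplex equals 1 − 1/ω(G). A brute-force sketch for tiny graphs (the grid search and the helper `sqp_max_on_simplex` are illustrative, not from the paper):

```python
import itertools

def sqp_max_on_simplex(A, steps=90):
    """Grid-search max of x^T A x over the probability simplex (tiny n only)."""
    n = len(A)
    best = float("-inf")
    # Enumerate lattice points x_i = k_i / steps with sum x_i = 1.
    for ks in itertools.product(range(steps + 1), repeat=n - 1):
        if sum(ks) > steps:
            continue
        x = [k / steps for k in ks] + [(steps - sum(ks)) / steps]
        val = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        best = max(best, val)
    return best

# Triangle K3: clique number 3, Motzkin-Straus value 1 - 1/3.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# Path P3: clique number 2, value 1 - 1/2.
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]

print(sqp_max_on_simplex(K3))  # ≈ 0.6667
print(sqp_max_on_simplex(P3))  # ≈ 0.5
```

Since approximating the clique number is hard, this simplex-constrained quadratic program inherits that hardness, which is exactly the embedding the summary refers to.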
From these observations, Theorem 21 concludes that, under the constraint‑qualification assumption, the presence of a duality gap in Lagrangian duals implies that either the primal or the dual is NP‑hard. This establishes a clear direction: lack of strong duality → computational intractability.
Section 5 tackles the converse: does strong duality guarantee polynomial‑time solvability? The authors note that for simple cases such as linear programming, strong duality holds and interior‑point methods solve both the primal and the dual in polynomial time. Recent work
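The linear-programming case can be checked on a toy instance. A minimal sketch with a hypothetical two-variable LP and its dual, verifying via hand-picked optimal vertices that the gap is zero (the instance and helper functions are illustrative, not from the paper):

```python
# Primal:  max 3x + 2y   s.t.  x + y <= 4,  x <= 2,  x, y >= 0
# Dual:    min 4u + 2v   s.t.  u + v >= 3,  u >= 2,  u, v >= 0
# (dual of  max c^T x, Ax <= b, x >= 0  is  min b^T y, A^T y >= c, y >= 0)

def primal_feasible(x, y):
    return x + y <= 4 and x <= 2 and x >= 0 and y >= 0

def dual_feasible(u, v):
    return u + v >= 3 and u >= 2 and v >= 0

x, y = 2, 2          # primal optimum (a vertex of the feasible polygon)
u, v = 2, 1          # dual optimum

assert primal_feasible(x, y) and dual_feasible(u, v)
primal_val = 3 * x + 2 * y
dual_val = 4 * u + 2 * v
print(primal_val, dual_val)  # prints "10 10": equal values, zero duality gap
```

Weak duality alone already guarantees primal_val ≤ dual_val here; the equality of the two values is exactly LP strong duality.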