A parallel approximation algorithm for mixed packing and covering semidefinite programs


We present a parallel approximation algorithm for a class of mixed packing and covering semidefinite programs that generalizes the class of positive semidefinite programs considered by Jain and Yao [2011]. As a corollary, we obtain a faster approximation algorithm for positive semidefinite programs, with better dependence of the parallel running time on the approximation factor than that of Jain and Yao [2011]. Our algorithm and analysis are along similar lines to those of Young [2001], who considered analogous linear programs.


💡 Research Summary

The paper addresses the problem of efficiently approximating mixed packing‑and‑covering semidefinite programs (SDPs) in a parallel computing setting. A mixed packing‑and‑covering SDP consists of a positive semidefinite matrix variable X∈Sⁿ₊ together with two families of linear matrix constraints: “packing” constraints of the form A_i·X ≤ b_i and “covering” constraints of the form C_j·X ≥ d_j. This formulation generalizes the class of positive SDPs studied by Jain and Yao (2011), which only contain packing constraints, and captures many practical optimization problems such as network design, spectrum allocation, and resource‑balancing where both upper‑ and lower‑bound requirements coexist.
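Using the notation of the summary above, the feasibility version of such a program can be written as follows (a generic formulation; the paper may state it with additional normalizations):

```latex
% Mixed packing-and-covering SDP feasibility problem.
% Here M \cdot X = \mathrm{Tr}(M^{\mathsf T} X), and the A_i, C_j are
% positive semidefinite constraint matrices.
\begin{align*}
  \text{find } \quad & X \in \mathbb{S}^n_+ \\
  \text{such that} \quad & A_i \cdot X \le b_i \quad \text{for all } i \quad \text{(packing)} \\
                         & C_j \cdot X \ge d_j \quad \text{for all } j \quad \text{(covering)}
\end{align*}
```

When only packing constraints are present and the objective is to maximize a positive linear functional of X, this reduces to the positive SDPs of Jain and Yao.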

The authors build on two strands of prior work. Jain and Yao gave a parallel approximation algorithm for positive SDPs with a running time that scales as O(ε⁻⁴·polylog) in the approximation parameter ε. Young (2001) introduced a potential‑function based framework for linear programs with mixed packing and covering constraints, achieving an O(ε⁻²) dependence on ε. The main contribution of this paper is to combine Young’s potential‑function technique with matrix‑valued exponential updates, thereby extending the O(ε⁻²) behavior to the semidefinite setting.
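A potential of the smoothed-max/min kind typical of Young-style analyses looks like the following (a generic sketch for intuition only; the paper's exact potential may differ):

```latex
% Soft maximum over packing constraints and soft minimum over covering
% constraints; the algorithm grows X while keeping the soft minimum
% increasing at least as fast as the soft maximum.
\begin{align*}
  \Phi(X) &= \ln \sum_i \exp\!\big(A_i \cdot X / b_i\big)
             && \text{(soft max of packing loads)} \\
  \Psi(X) &= -\ln \sum_j \exp\!\big(-\,C_j \cdot X / d_j\big)
             && \text{(soft min of covering loads)}
\end{align*}
```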

Algorithmically, the method maintains a weight w_i for each constraint. At each iteration the algorithm computes a matrix exponential of a weighted combination of the constraint matrices and uses it to update the candidate solution and the weights.
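One such update step can be sketched as follows. This is a minimal illustrative sketch of a matrix-multiplicative-weights-style iteration, not the paper's exact algorithm; the function name, the step size `eta`, and the sign convention (covering matrices push the solution up, packing matrices push it down) are assumptions for illustration.

```python
import numpy as np

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    exp(M) = V diag(exp(lambda)) V^T."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(vals)) @ vecs.T

def mmw_step(packing, covering, weights_p, weights_c, eta=0.1):
    """One illustrative step: form a weighted combination of the
    constraint matrices, exponentiate it, and normalize the result
    to trace 1 to obtain the next candidate solution X."""
    # Covering constraints (X should be large against them) enter with +,
    # packing constraints (X should be small against them) with -.
    M = sum(w * C for w, C in zip(weights_c, covering)) \
      - sum(w * A for w, A in zip(weights_p, packing))
    E = sym_expm(eta * M)    # positive definite by construction
    return E / np.trace(E)   # normalized candidate, Tr(X) = 1

# Tiny usage example with one packing and one covering constraint.
A = [np.array([[1.0, 0.0], [0.0, 2.0]])]   # packing matrix
C = [np.array([[1.0, 1.0], [1.0, 1.0]])]   # covering matrix
X = mmw_step(A, C, weights_p=[1.0], weights_c=[1.0])
```

The trace normalization keeps every iterate a density-matrix-like candidate; the weights would then be updated multiplicatively according to how violated each constraint is under X.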