On the Lagrangian Biduality of Sparsity Minimization Problems
Recent results in Compressive Sensing have shown that, under certain conditions, the solution to an underdetermined system of linear equations with sparsity-based regularization can be accurately recovered by solving convex relaxations of the original problem. In this work, we present a novel primal-dual analysis on a class of sparsity minimization problems. We show that the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the sparsity minimization problems can be used to derive interesting convex relaxations: the bidual of the $\ell_0$-minimization problem is the $\ell_1$-minimization problem; and the bidual of the $\ell_{0,1}$-minimization problem for enforcing group sparsity on structured data is the $\ell_{1,\infty}$-minimization problem. The analysis provides a means to compute per-instance non-trivial lower bounds on the (group) sparsity of the desired solutions. In a real-world application, the bidual relaxation improves the performance of a sparsity-based classification framework applied to robust face recognition.
💡 Research Summary
This paper revisits the classic sparsity‑driven optimization problems that arise in compressive sensing and related fields, focusing on the ℓ₀‑minimization problem and its group‑sparse extension, the ℓ₀,₁‑minimization problem. The authors adopt a two‑stage Lagrangian duality approach: first they form the standard Lagrangian for the non‑convex problem and derive its dual (the “first dual”). Then, they take the dual of that dual, yielding what they term the “Lagrangian bidual.” Remarkably, the bidual of the ℓ₀ problem collapses to the well‑known convex ℓ₁‑minimization problem, while the bidual of the ℓ₀,₁ problem becomes the ℓ₁,∞ mixed‑norm (group‑sparsity) minimization problem. This establishes a rigorous theoretical bridge between the original non‑convex formulations and their popular convex relaxations, showing that the latter can be interpreted as exact biduals of the former.
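The flavor of the ℓ₀ result can be written compactly. The box bound M on the entries of x is our addition for concreteness (some explicit compactness assumption is needed for the duality argument to go through; the paper's exact constraint set may differ):

```latex
\begin{align*}
\text{(P)}\quad & \min_{x}\ \|x\|_0 \quad \text{s.t.}\ Ax = b,\ \|x\|_\infty \le M
  && \text{(non-convex primal)} \\
\text{(P$^{**}$)}\quad & \min_{x}\ \tfrac{1}{M}\,\|x\|_1 \quad \text{s.t.}\ Ax = b,\ \|x\|_\infty \le M
  && \text{(Lagrangian bidual)}
\end{align*}
```

Weak duality then gives val(P**) ≤ val(P) directly: any feasible x satisfies ‖x‖₁ ≤ M‖x‖₀, so the convex ℓ₁ program can never report a value above the true sparsity.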
Beyond the equivalence results, the paper demonstrates that the optimal Lagrange multipliers obtained in the bidual formulation provide per‑instance lower bounds on the true sparsity (or group sparsity) of the solution. These bounds are non‑trivial: they are often tighter than generic worst‑case guarantees and can be computed efficiently from the bidual solution. The authors validate the usefulness of these bounds on synthetic data, showing that the estimated lower bounds are close to the actual ℓ₀ or ℓ₀,₁ values even in the presence of measurement noise.
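To make the lower bound concrete, here is a minimal sketch (our own illustration, not the paper's code) of the weak‑duality argument in the box‑constrained setting: any x with Ax = b and |xᵢ| ≤ M satisfies ‖x‖₁ ≤ M‖x‖₀, so the ℓ₁ optimum divided by M, rounded up, certifies a per‑instance lower bound on the sparsity of every feasible solution.

```python
import numpy as np
from scipy.optimize import linprog

def l1_sparsity_lower_bound(A, b, M):
    """Certified lower bound on ||x||_0 over {x : Ax = b, |x_i| <= M},
    obtained from the l1 relaxation via ||x||_1 <= M * ||x||_0."""
    m, n = A.shape
    # Split x = u - v with 0 <= u, v <= M; the LP objective sum(u + v)
    # equals ||x||_1 at the optimum (one of u_i, v_i is zero there).
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, M)] * (2 * n), method="highs")
    assert res.success
    l1_opt = res.fun
    # ceil(l1_opt / M) <= ||x||_0 for every feasible x (weak duality);
    # the small slack guards against floating-point noise in the LP.
    return int(np.ceil(l1_opt / M - 1e-9)), l1_opt

# Synthetic demo: a 3-sparse ground truth with entries bounded by M = 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x0 = np.zeros(50)
x0[[3, 17, 42]] = [1.0, -2.0, 1.5]   # ||x0||_0 = 3, ||x0||_1 = 4.5
b = A @ x0
bound, l1_opt = l1_sparsity_lower_bound(A, b, M=2.0)
```

Here `bound` is guaranteed to be at most 3 (the true sparsity) and at least 1 whenever b ≠ 0, illustrating how the bidual value yields an instance‑specific certificate rather than a worst‑case guarantee.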
To illustrate practical impact, the authors embed the bidual relaxation into a sparsity‑based classification pipeline for robust face recognition. In this setting, each subject’s training images form a group, and the ℓ₁,∞ bidual replaces the standard ℓ₁ reconstruction step. Experiments on benchmark face datasets with varying illumination and occlusion demonstrate that the bidual‑based classifier achieves higher recognition rates (approximately 3–5 % improvement) and reduces computational load by using the sparsity lower bounds to discard implausible class candidates early.
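The group‑sparse reconstruction step at the heart of such a pipeline is itself a linear program: min Σ_g max_{i∈g} |xᵢ| subject to Ax = b. The sketch below is our own illustration (dictionary, group sizes, and dimensions are synthetic assumptions, not the paper's experimental setup):

```python
import numpy as np
from scipy.optimize import linprog

def group_l1_inf(A, b, groups):
    """Solve min sum_g max_{i in g} |x_i|  s.t.  A x = b  as an LP.

    groups: list of index arrays, one per class/group.
    Variables: x (free) plus one slack t_g per group, with the
    constraints -t_g <= x_i <= t_g for every i in group g."""
    m, n = A.shape
    G = len(groups)
    c = np.concatenate([np.zeros(n), np.ones(G)])  # minimize sum of t_g
    rows = []
    for g, idx in enumerate(groups):
        for i in idx:
            r = np.zeros(n + G); r[i] = 1.0;  r[n + g] = -1.0  #  x_i - t_g <= 0
            rows.append(r)
            r = np.zeros(n + G); r[i] = -1.0; r[n + g] = -1.0  # -x_i - t_g <= 0
            rows.append(r)
    A_ub = np.vstack(rows)
    A_eq = np.hstack([A, np.zeros((m, G))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * G,
                  method="highs")
    assert res.success
    return res.x[:n], res.fun

# Synthetic demo: 3 groups of 10 atoms; b lies in the span of group 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 30))
groups = [np.arange(10), np.arange(10, 20), np.arange(20, 30)]
x0 = np.zeros(30)
x0[2], x0[7] = 1.0, -0.5
b = A @ x0
x, obj = group_l1_inf(A, b, groups)
```

In an SRC‑style classifier, the recovered coefficients `x` would then be restricted to each group in turn and the test sample assigned to the class with the smallest reconstruction residual.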
Overall, the work contributes a novel analytical tool—Lagrangian biduality—that simultaneously explains why common convex relaxations work and equips practitioners with instance‑specific sparsity certificates. The methodology is general enough to be extended to other non‑convex regularizers (e.g., ℓ₀,₂ or structured non‑linear penalties), suggesting a broad avenue for future research in both theory and applications.