Sparse solution of overdetermined linear systems when the columns of $A$ are orthogonal
In this paper, we consider the problem of obtaining the best $k$-sparse solution of $Ax=y$ subject to the constraint that the columns of $A$ are orthogonal. The naive approach to this problem has exponential complexity, and $\ell_1$ regularization methods such as the Lasso yield only approximate solutions. We show that when the columns of $A$ are orthogonal, an exact solution can be obtained with far less computational effort than brute-force search.
💡 Research Summary
The paper addresses the problem of finding a best‑k‑sparse solution to an overdetermined linear system Ax = y where the matrix A ∈ ℝ^{m×n} (with m > n) has mutually orthogonal columns. Formally, the goal is to minimize the Euclidean residual ‖y − Ax̂‖₂ subject to the sparsity constraint ‖x̂‖₀ ≤ k. In the general case this is an NP‑hard combinatorial problem; exhaustive search requires evaluating $\binom{n}{k}$ subsets, and convex relaxations such as the Lasso (ℓ₁‑regularized least squares) only provide approximate solutions without guarantees of exact sparsity optimality.
The key observation exploited by the authors is that orthogonality of the columns implies AᵀA = D, a diagonal matrix whose i‑th diagonal entry equals the squared norm of column a_i. Consequently, the least‑squares coefficient for any single column is simply the projection of y onto that column, divided by the squared column norm: x̂_i = (a_iᵀ y)/‖a_i‖₂². Because these coefficients are decoupled, including column a_i in the support reduces the squared residual ‖y − Ax̂‖₂² by exactly (a_iᵀ y)²/‖a_i‖₂², so the exact best k‑sparse solution is obtained by keeping the k columns with the largest such reductions, replacing the search over $\binom{n}{k}$ subsets with one matrix–vector product and a sort.
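The decoupling argument above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: it builds a random matrix with orthogonal columns, selects the k columns with the largest residual reduction (a_iᵀy)²/‖a_i‖₂², and checks the result against brute-force least squares over every size-k subset.

```python
# Sketch (not the authors' implementation) of the exact k-sparse solver
# for Ax = y when A has mutually orthogonal columns, verified against
# brute-force subset search. All variable names are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 12, 6, 2

# Orthogonal (not orthonormal) columns: orthonormalize via QR, then
# rescale each column so the column norms differ.
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))
A = Q * rng.uniform(1.0, 3.0, size=n)  # columns remain orthogonal
y = rng.standard_normal(m)

# A^T A is diagonal, so per-column least-squares coefficients decouple:
# x_i = (a_i^T y) / ||a_i||^2.
col_sq = np.sum(A * A, axis=0)
coeff = (A.T @ y) / col_sq

# Including column i reduces ||y - Ax||^2 by (a_i^T y)^2 / ||a_i||^2;
# the optimal support is therefore the k columns with the largest gain.
gain = (A.T @ y) ** 2 / col_sq
support = np.argsort(gain)[-k:]
x_fast = np.zeros(n)
x_fast[support] = coeff[support]

# Brute-force baseline: solve least squares on every size-k subset.
best_res, x_brute = np.inf, None
for S in itertools.combinations(range(n), k):
    S = list(S)
    c, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = np.linalg.norm(y - A[:, S] @ c)
    if r < best_res:
        best_res = r
        x_brute = np.zeros(n)
        x_brute[S] = c

assert np.allclose(x_fast, x_brute)
```

The fast path costs one matrix–vector product plus a sort, versus $\binom{n}{k}$ least-squares solves for the brute-force search, which matches the complexity gap the summary describes.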