Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection
We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further demonstrate the wide applicability of our techniques by analyzing greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.
💡 Research Summary
The paper tackles two closely related problems that arise in high‑dimensional linear prediction: (i) subset selection, where one must choose at most k variables from a large pool V to best predict a target variable Z, and (ii) dictionary selection, where a small dictionary D of size d is to be built so that, for many target variables Z₁,…,Zₛ, the average R² obtained by using at most k dictionary atoms per target is maximized. Both problems can be expressed as maximizing a set function
f(S) = b_Sᵀ C_S⁻¹ b_S,
where C is the covariance matrix of the candidate variables, b contains the covariances between each candidate and the target, and S ⊆ V is the chosen subset. The objective f(S) equals the squared multiple correlation R² of the optimal linear predictor built from S.
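As a concrete sanity check of this identity, the following sketch (with hypothetical synthetic data; variable names are ours, not the paper's) computes f(S) = b_Sᵀ C_S⁻¹ b_S directly from the covariance matrix and verifies that it matches the R² of an ordinary least-squares fit on the selected columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples of 5 candidate variables X and a target z
# (purely illustrative; not from the paper's experiments).
n = 1000
X = rng.standard_normal((n, 5))
z = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.2]) + 0.3 * rng.standard_normal(n)

# Standardize so z has unit variance, matching the R^2 interpretation.
X = (X - X.mean(0)) / X.std(0)
z = (z - z.mean()) / z.std()

C = (X.T @ X) / n          # covariance matrix of the candidates
b = (X.T @ z) / n          # covariances of each candidate with the target

def f(S):
    """f(S) = b_S^T C_S^{-1} b_S: the R^2 of the best linear
    predictor of z built from the variables indexed by S."""
    S = list(S)
    CS = C[np.ix_(S, S)]
    bS = b[S]
    return bS @ np.linalg.solve(CS, bS)

S = [0, 1, 4]
# Cross-check against the residual of a least-squares fit on X[:, S].
coef, res, *_ = np.linalg.lstsq(X[:, S], z, rcond=None)
r2 = 1.0 - res[0] / n      # Var(z) = 1 after standardization
assert abs(f(S) - r2) < 1e-8
```

Because z is standardized, f(S) lies in [0, 1] and is monotone in S: adding a variable can never decrease the achievable R².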
A major obstacle in analyzing greedy heuristics for these problems is that f is not submodular in general; classical guarantees rely on strong assumptions such as low pairwise coherence (μ ≪ 1/k) or Restricted Isometry Property (RIP) conditions, which are rarely satisfied in practice, especially when C is near‑singular. To overcome this, the authors introduce the submodularity ratio γ_{U,k}(f), a scalar that quantifies how close a monotone set function is to being submodular. Formally, for any current set L and any disjoint candidate set S with |S| ≤ k,
γ_{U,k}(f) = min_{L ⊆ U, S: |S| ≤ k, S ∩ L = ∅} [ Σ_{x∈S} (f(L ∪ {x}) − f(L)) ] / [ f(L ∪ S) − f(L) ],
i.e., the worst-case ratio between the sum of the individual marginal gains of the elements of S and their joint marginal gain. For a submodular function this ratio is at least 1; the smaller γ is, the further f is from submodular. The paper's main guarantee states that forward greedy selection achieves a (1 − e^{−γ})-factor approximation to the best k-subset, and that γ is in turn lower-bounded by a smallest sparse eigenvalue of the covariance matrix C.
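On tiny instances the submodularity ratio can be evaluated by exhaustive enumeration. The sketch below (a brute-force check we wrote for illustration; the covariance values are hypothetical, and the paper itself does not propose this exponential-time procedure) minimizes the ratio of singleton gains to the joint gain over all admissible pairs (L, S):

```python
import itertools
import numpy as np

# Tiny hypothetical instance: 4 candidate variables with covariance C
# (positive definite) and target covariances b, chosen only for illustration.
C = np.array([[1.0, 0.6, 0.2, 0.0],
              [0.6, 1.0, 0.3, 0.1],
              [0.2, 0.3, 1.0, 0.5],
              [0.0, 0.1, 0.5, 1.0]])
b = np.array([0.5, 0.45, 0.3, 0.2])

def f(S):
    """R^2 objective f(S) = b_S^T C_S^{-1} b_S, with f(empty) = 0."""
    S = sorted(S)
    if not S:
        return 0.0
    return b[S] @ np.linalg.solve(C[np.ix_(S, S)], b[S])

def submodularity_ratio(U, k, eps=1e-12):
    """Brute-force gamma_{U,k}: minimize the sum of singleton marginal
    gains over the joint gain, across L subset of U and disjoint S with
    |S| <= k. Exponential time -- usable only on tiny ground sets."""
    U = list(U)
    n = len(C)
    gamma = np.inf
    for r in range(len(U) + 1):
        for L in itertools.combinations(U, r):
            rest = [x for x in range(n) if x not in L]
            for s in range(1, k + 1):
                for S in itertools.combinations(rest, s):
                    denom = f(set(L) | set(S)) - f(L)
                    if denom <= eps:   # ratio defined only for positive gain
                        continue
                    num = sum(f(set(L) | {x}) - f(L) for x in S)
                    gamma = min(gamma, num / denom)
    return gamma

g = submodularity_ratio(U=range(4), k=2)
```

A value g ≥ 1 would certify submodular behavior on this instance; correlated variables can drive it below 1, which is exactly the regime the paper's (1 − e^{−γ}) guarantee is designed to cover.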