Randomized Dimensionality Reduction for k-means Clustering
We study the topic of dimensionality reduction for $k$-means clustering. Dimensionality reduction encompasses the union of two approaches: \emph{feature selection} and \emph{feature extraction}. A feature selection based algorithm for $k$-means clustering selects a small subset of the input features and then applies $k$-means clustering on the selected features. A feature extraction based algorithm for $k$-means clustering constructs a small set of new artificial features and then applies $k$-means clustering on the constructed features. Despite the significance of $k$-means clustering as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for $k$-means clustering are not known. On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress towards a better understanding of dimensionality reduction for $k$-means clustering. Namely, we present the first provably accurate feature selection method for $k$-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and it improves upon the existing results in terms of time complexity and number of features needed to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and it also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal $k$-means objective value.
💡 Research Summary
The paper addresses the problem of reducing the dimensionality of data before applying k‑means clustering, a task that becomes increasingly challenging as modern datasets grow in both size and number of features. While many heuristic approaches exist, prior to this work there were no provably accurate feature‑selection methods for k‑means, and only two provably accurate feature‑extraction methods: one based on random projections and another based on the singular value decomposition (SVD). The authors make three main contributions.
First, they introduce the first theoretically guaranteed feature‑selection algorithm for k‑means (Theorem 11). The algorithm proceeds in two stages. Given a data matrix A∈ℝ^{m×n} and a target number of clusters k, it first computes a matrix Z∈ℝ^{n×k} whose columns approximately span the top‑k right singular subspace of A, using a fast randomized SVD routine. Using the squared ℓ₂‑norms of the rows of Z as importance scores, the algorithm then samples r = O(k log k / ε²) columns of A with replacement, with probabilities proportional to these scores. The sampled columns form a reduced matrix C∈ℝ^{m×r}. The total running time is O(mnk ε⁻¹ + k log k ε⁻² log(k log k ε⁻¹)). With constant probability, the optimal k‑means cost on the reduced data, when lifted back to the original space, is at most (3 + ε) times the optimal cost on the full data. This result bridges the gap between feature selection and clustering approximation, providing a concrete trade‑off between the number of selected features, computational effort, and clustering quality.
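The two-stage sampling scheme described above can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's implementation: it uses an exact SVD where the paper uses a fast randomized one, and the rescaling of sampled columns by 1/√(r·pᵢ) is the standard convention for this kind of importance sampling, assumed here rather than taken from the summary.

```python
import numpy as np

def kmeans_feature_selection(A, k, r, rng=None):
    """Sample r columns of A with probabilities proportional to the
    squared row norms of a top-k right singular basis (a sketch of the
    paper's feature-selection idea; exact SVD used for simplicity)."""
    rng = np.random.default_rng(rng)
    # Z: n x k matrix whose columns span the top-k right singular
    # subspace of A (the paper computes this approximately and fast).
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Z = Vt[:k].T                    # n x k
    scores = np.sum(Z ** 2, axis=1) # squared l2-norms of rows of Z
    p = scores / scores.sum()       # sampling probabilities
    idx = rng.choice(A.shape[1], size=r, replace=True, p=p)
    # Rescale sampled columns (standard importance-sampling convention,
    # assumed here) so the sketch is unbiased.
    C = A[:, idx] / np.sqrt(r * p[idx])
    return C, idx
```

One would then run any k-means solver on the rows of C and carry the resulting cluster assignment back to the rows of A.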
Second, the authors improve the classic random‑projection based feature‑extraction method, reducing both the running time and the number of features that must be extracted relative to the previously known folklore result.
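For context, the baseline random-projection approach being improved upon can be sketched as follows. This is the standard dense Gaussian (Johnson–Lindenstrauss-style) construction, assumed here as the "folklore" baseline; the paper's method attains a smaller target dimension and a faster projection than this sketch.

```python
import numpy as np

def random_projection_for_kmeans(A, d, rng=None):
    """Project the rows of A (m x n) down to d dimensions with a dense
    Gaussian sketch. Baseline JL-style construction, not the paper's
    improved variant."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    R = rng.normal(size=(n, d)) / np.sqrt(d)  # scaled Gaussian matrix
    return A @ R                              # m x d sketch of the data
```

Clustering the projected rows then serves as a proxy for clustering the original data, with the approximation guarantee controlling how much the optimal k-means cost can degrade.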