Feature Clustering for Accelerating Parallel Coordinate Descent
Large-scale L1-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for L1-regularized problems, we introduce a novel family of algorithms called block-greedy coordinate descent that includes, as special cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and Thread-Greedy. We give a unified convergence analysis for the family of block-greedy algorithms. The analysis suggests that block-greedy coordinate descent can better exploit parallelism if features are clustered so that the maximum inner product between features in different blocks is small. Our theoretical convergence analysis is supported with experimental results using data from diverse real-world applications. We hope that the algorithmic approaches and convergence analysis we provide will not only advance the field, but will also encourage researchers to systematically explore the design space of algorithms for solving large-scale L1-regularized problems.
💡 Research Summary
The paper addresses the computational challenges of solving large‑scale L1‑regularized loss minimization problems, which are ubiquitous in high‑dimensional applications such as compressed sensing, text classification, and genomic regression. While coordinate descent (CD) methods are attractive for their simplicity and low memory footprint, their parallel scalability has been limited. Existing parallel CD variants, including Shotgun (random simultaneous updates), Greedy CD (sequential greedy updates), Thread‑Greedy (each thread picks its own greedy coordinate), and SCD (stochastic CD), each exploit a different trade‑off between parallelism and convergence speed, but none provides a unified framework that can systematically balance these aspects.
The authors introduce Block‑Greedy Coordinate Descent (BGCD), a family of algorithms that subsumes the aforementioned methods as special cases. In BGCD, the set of features is partitioned into a predetermined number of blocks (or clusters). Within each block, the algorithm selects the coordinate that would produce the largest decrease in the objective (the “greedy” choice) and updates all selected coordinates across blocks in parallel. The key theoretical contribution is a unified convergence analysis showing that the expected decrease in the objective per iteration degrades as the maximum inner product between features in different blocks grows. Consequently, clustering features so that this inter‑block correlation is small lets more blocks be updated in parallel without sacrificing convergence.
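To make the block‑greedy update concrete, here is a minimal sketch for the lasso objective (1/2)‖Xw − y‖² + λ‖w‖₁. The random partition into blocks, the function names, and the greedy criterion (largest proposed coordinate change) are illustrative assumptions on our part; the paper instead clusters features so that inner products between features in different blocks are small, and its greedy rule is stated in terms of the guaranteed objective decrease.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator S_t(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def block_greedy_cd(X, y, lam, n_blocks=4, n_iters=100, seed=0):
    """Sketch of block-greedy coordinate descent for the lasso
    (1/2)||Xw - y||^2 + lam * ||w||_1.

    Assumption: a random partition of features into blocks; the paper
    instead clusters features to minimize inter-block inner products.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    resid = y.copy()                      # residual y - Xw
    blocks = np.array_split(rng.permutation(d), n_blocks)
    col_sq = (X ** 2).sum(axis=0)         # per-coordinate curvature ||X_j||^2

    for _ in range(n_iters):
        updates = []
        for blk in blocks:                # one greedy pick per block
            # Closed-form coordinate minimizer for each feature in the block:
            # w_j <- S(X_j^T resid + ||X_j||^2 w_j, lam) / ||X_j||^2
            g = X[:, blk].T @ resid + col_sq[blk] * w[blk]
            w_new = soft_threshold(g, lam) / col_sq[blk]
            # Greedy proxy: coordinate with the largest proposed change.
            j_local = np.argmax(np.abs(w_new - w[blk]))
            updates.append((blk[j_local], w_new[j_local]))
        # All proposals were computed from the same residual, so applying
        # them together mimics one parallel round (one update per block).
        for j, wj in updates:
            resid -= X[:, j] * (wj - w[j])
            w[j] = wj
    return w
```

Setting `n_blocks=1` reduces this to sequential greedy CD, while taking one block per feature and updating a random subset of blocks would resemble Shotgun; this is the sense in which the existing variants are special cases of the block‑greedy family.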