A Relaxed Randomized Averaging Block Extended Bregman-Kaczmarz Method for Combined Optimization Problems
Randomized Kaczmarz-type methods are widely used for their simplicity and efficiency in solving large-scale linear systems and optimization problems. However, their applicability is limited when dealing with inconsistent systems or incorporating structural information such as sparsity. In this work, we propose a \emph{relaxed randomized averaging block extended Bregman-Kaczmarz} (rRABEBK) method for solving a broad class of combined optimization problems. The proposed method integrates an averaging block strategy with two relaxation parameters to accelerate convergence and enhance numerical stability. We establish a rigorous convergence theory showing that rRABEBK achieves linear convergence in expectation, with explicit constants that quantify the effect of the relaxation mechanism, and a provably faster rate than the classical randomized extended Bregman-Kaczmarz method. Our method can be readily adapted to sparse least-squares problems and extends to both consistent and inconsistent systems without modification. Complementary numerical experiments corroborate the theoretical findings and demonstrate that rRABEBK significantly outperforms existing Kaczmarz-type algorithms in terms of both iteration complexity and computational efficiency, highlighting its practical and theoretical advantages.
💡 Research Summary
This paper proposes a novel iterative algorithm named the relaxed randomized averaging block extended Bregman-Kaczmarz (rRABEBK) method, designed to solve a broad class of combined optimization problems. The target problem involves minimizing a potentially non-smooth, strongly convex objective function (like an L1-regularized term for sparsity) subject to a linear constraint, where the right-hand side is defined as the projection of the observed data onto the range of the coefficient matrix. This formulation elegantly handles inconsistent linear systems and promotes structural properties like sparsity in the solution.
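The page does not reproduce the paper's formulas, but under notation standard in this literature the combined optimization problem described above can be written as (the symbols below are an assumed reconstruction, not quoted from the paper):

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{subject to} \quad
A x = P_{\mathcal{R}(A)}(b),
```

where $f$ is strongly convex and possibly non-smooth, e.g. $f(x) = \lambda \|x\|_1 + \tfrac{1}{2}\|x\|_2^2$ for sparsity, and $P_{\mathcal{R}(A)}(b)$ denotes the orthogonal projection of the observed data $b$ onto the range of $A$. Replacing $b$ by its projection is what makes the formulation meaningful even when the system $Ax = b$ is inconsistent.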
The work is situated within the rich lineage of Kaczmarz-type methods. While the classical randomized Kaczmarz (RK) method is efficient for consistent systems, it fails for inconsistent ones. The randomized extended Kaczmarz (REK) method overcomes this by maintaining an auxiliary variable. Separately, the Bregman-Kaczmarz (BK) framework generalizes the projection step using Bregman distances, allowing it to handle non-smooth objective functions. The randomized extended Bregman-Kaczmarz (REBK) method combines these two ideas for the combined optimization problem. This paper significantly advances REBK along two axes.
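For reference, the classical RK step that this whole lineage builds on can be sketched as follows. This is a minimal illustration for a consistent system, not the authors' code; sampling rows with probability proportional to their squared norms is the standard choice from the randomized Kaczmarz literature.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=5000, seed=0):
    """Classical randomized Kaczmarz for a consistent system Ax = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample rows with probability proportional to their squared norms.
    row_norms2 = np.sum(A**2, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane <a_i, x> = b_i.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x
```

Each step is an orthogonal projection onto a single row's hyperplane, which is why the method fails on inconsistent systems: the hyperplanes have no common point, and the iterates keep oscillating instead of converging.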
First, it introduces an averaging block strategy. Instead of updating based on a single randomly selected row per iteration, the method selects a block of rows. It performs independent Bregman-Kaczmarz updates for each row in the block in the dual (variable) space and then averages these dual updates. This block-averaging approach reduces the variance inherent in single-row updates, enhances computational throughput through potential parallelism, and improves numerical stability.
Second, and most crucially, the method incorporates two relaxation parameters (ω_k and γ_k) into the update rules for the auxiliary variable (z) and the primary variable (x), respectively. This yields the “relaxed” variant, rRABEBK. These parameters provide fine-grained control over the step sizes, allowing the algorithm to balance aggressive correction with robustness. They are key to achieving accelerated performance in practice.
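The two ingredients above can be combined into a rough sketch of the iteration. This is an assumed reconstruction, not the paper's implementation: the soft-thresholding mirror step corresponds to the assumed objective f(x) = λ‖x‖₁ + ½‖x‖₂², constant relaxation parameters `omega` and `gamma` stand in for the iteration-dependent ω_k and γ_k, and the block sampling scheme is a plausible choice rather than the one specified in the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    """Mirror step (gradient of f*) for f(x) = lam*||x||_1 + 0.5*||x||_2^2."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def rrabebk(A, b, lam=0.1, block=10, omega=1.5, gamma=1.5,
            n_iters=2000, seed=0):
    """Hypothetical sketch of a relaxed averaging block extended
    Bregman-Kaczmarz iteration with constant relaxation parameters."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1) + 1e-12
    col_norms2 = np.sum(A**2, axis=0) + 1e-12
    z = b.copy()       # auxiliary variable; drives toward b - P_range(A)(b)
    xs = np.zeros(n)   # dual (mirror) variable x*
    x = soft_threshold(xs, lam)
    for _ in range(n_iters):
        # Relaxed averaged block update of z using columns of A,
        # as in the extended (REK/REBK) scheme.
        J = rng.choice(n, size=min(block, n), replace=False)
        z -= omega * (A[:, J] @ ((A[:, J].T @ z) / col_norms2[J])) / len(J)
        # Independent Bregman-Kaczmarz steps for each row in the block,
        # averaged in the dual space and relaxed by gamma.
        I = rng.choice(m, size=min(block, m), replace=False)
        resid = (A[I] @ x - (b[I] - z[I])) / row_norms2[I]
        xs -= gamma * (A[I].T @ resid) / len(I)
        # Map back to the primal space through the mirror step.
        x = soft_threshold(xs, lam)
    return x
```

Note how the averaging happens on the dual variable `xs`: the per-row corrections are accumulated and scaled by `1/len(I)` before the single mirror step back to `x`, which is where the variance reduction comes from. Setting `omega = gamma = 1` recovers the non-relaxed averaging block variant.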
The authors establish a rigorous convergence theory for rRABEBK. They prove that the sequence of iterates converges linearly in expectation to the unique solution of the combined optimization problem. The analysis provides explicit constants that quantify the influence of the relaxation parameters, the block size, and the problem’s conditioning (via the matrix’s scaled condition number). A key theoretical result is that with appropriately chosen relaxation parameters, rRABEBK achieves a provably faster convergence rate than the basic REBK method.
Numerical experiments comprehensively validate the theoretical claims and demonstrate practical superiority. Tests are conducted on both consistent and inconsistent linear systems, as well as sparse least-squares problems (simulating compressed sensing scenarios). The proposed rRABEBK is compared against state-of-the-art methods including RK, REK, REBK, and the non-relaxed RABEBK. Metrics include iteration count and CPU time. The results consistently show that rRABEBK significantly outperforms all competitors, converging in far fewer iterations and less computation time. The relaxation mechanism is shown to be particularly effective, often doubling the speedup gained from the block-averaging strategy alone.
In conclusion, the rRABEBK method represents a substantial advancement in randomized Kaczmarz-type algorithms. It successfully integrates block processing, averaging, and relaxation within the powerful Bregman framework. The method offers strong theoretical convergence guarantees, handles inconsistency and non-smooth regularizers natively, and delivers marked practical improvements in computational efficiency, making it a highly attractive solver for large-scale inverse problems in signal processing, imaging, and machine learning.