Perturbation Resilience and Superiorization of Iterative Algorithms
Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a “superiorized version” of it that retains its computational efficiency but nevertheless goes a long way towards solving an optimization problem. This is possible to do if the original algorithm is “perturbation resilient,” which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that drive the process in the direction of the optimizer of the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
💡 Research Summary
The paper introduces a novel methodological framework that leverages the property of perturbation resilience in iterative algorithms to produce “superiorized” versions capable of addressing auxiliary optimization objectives while preserving the original computational efficiency. The authors begin by formalizing the consistent convex feasibility problem—finding a point in the intersection of a finite family of convex sets—and reviewing classic projection‑based algorithms (e.g., sequential projections, block‑iterative projections, and projection onto convex sets (POCS)). These algorithms are celebrated for their low memory footprint and inexpensive per‑iteration cost, making them attractive for large‑scale applications such as image reconstruction. However, they are inherently designed only to locate any feasible point, not to optimize a secondary criterion such as image smoothness or total variation (TV).
To bridge this gap, the concept of perturbation resilience is defined: an algorithm is perturbation‑resilient if, when each iteration is subjected to a bounded disturbance whose magnitudes form a summable sequence, the iterates still converge to a solution of the original feasibility problem. The authors prove this property for a broad class of non‑expansive operators using Fejér monotonicity arguments, showing that the cumulative effect of diminishing perturbations does not derail convergence.
Building on this theoretical foundation, the superiorization methodology is proposed. The idea is simple yet powerful: run the original feasibility algorithm unchanged, but after each iteration add a small, carefully chosen perturbation that nudges the current iterate in a direction that reduces a user‑specified target function φ (e.g., TV). The perturbation magnitude follows a decreasing schedule (α_k = α_0·β^k with 0 < β < 1) to guarantee summability, thereby satisfying the perturbation‑resilience condition. Consequently, the algorithm retains its original convergence guarantees while simultaneously driving the value of φ downward, producing a “superior” solution that is both feasible and more optimal with respect to φ.
The authors demonstrate the approach on two well‑known projection algorithms applied to computed tomography (CT) reconstruction from projection data. For each algorithm, a superiorized version is constructed using the sub‑gradient of the TV functional as the perturbation direction. Experimental results show that the superiorized algorithms achieve a reduction of TV by roughly 30 % compared with their non‑superiorized counterparts, while incurring less than a 5 % increase in total execution time. Visual inspection of the reconstructed images confirms that the superiorized reconstructions exhibit markedly fewer streak artifacts and smoother edges, illustrating the practical benefit of the method.
Beyond the specific CT example, the paper argues that superiorization constitutes a form of approximate optimization: it does not solve the full constrained optimization problem (which would typically require heavy memory usage, line‑search procedures, and inner‑outer iteration loops), but it nonetheless yields solutions that are significantly better with respect to the chosen merit function. The authors point out that any algorithm satisfying perturbation resilience—such as certain block‑iterative methods, stochastic projection schemes, and even some nonlinear preconditioned gradient methods—can be embedded within the superiorization framework. This opens the door to a wide spectrum of applications where computational resources are limited but a modest improvement in an auxiliary objective is desirable.
In conclusion, the paper provides a rigorous mathematical justification for superiorization, validates its practical impact through realistic imaging experiments, and outlines a general recipe for converting a large class of lightweight feasibility algorithms into powerful tools that simultaneously address feasibility and optimization goals. The work is poised to influence fields ranging from medical imaging and signal processing to machine learning, wherever large‑scale convex feasibility problems arise and where modest computational overhead for enhanced solution quality is acceptable.
Comments & Academic Discussion
Loading comments...
Leave a Comment