External Division of Two Bregman Proximity Operators for Poisson Inverse Problems

This paper presents a novel method for recovering sparse vectors from linear models corrupted by Poisson noise. The contribution is twofold. First, an operator defined via the external division of two Bregman proximity operators is introduced to promote sparse solutions while mitigating the estimation bias induced by classical $\ell_1$-norm regularization. This operator is then embedded into the already established NoLips algorithm, replacing the standard Bregman proximity operator in a plug-and-play manner. Second, the geometric structure of the proposed external-division operator is elucidated through two complementary reformulations, which provide clear interpretations in terms of the primal and dual spaces of the Poisson inverse problem. Numerical tests show that the proposed method exhibits more stable convergence behavior than conventional Kullback-Leibler (KL)-based approaches and achieves significantly superior performance on synthetic data and an image restoration problem.


💡 Research Summary

This paper tackles the challenging problem of recovering sparse vectors from linear measurements corrupted by Poisson noise, a setting that appears in applications such as PET imaging and low‑light photography. Classical approaches formulate the inverse problem as the minimization of a data‑fidelity term based on the Kullback‑Leibler (KL) divergence together with an ℓ₁‑norm regularizer to promote sparsity. While the KL term accurately reflects the Poisson likelihood, its gradient is not Lipschitz continuous, preventing the use of standard proximal‑gradient methods. The NoLips algorithm overcomes this difficulty by replacing the Euclidean distance in the proximal step with a Bregman divergence generated by a Legendre function, thereby guaranteeing convergence under a Lipschitz‑like convexity condition.
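To fix ideas, here is a minimal sketch of one NoLips-style step under the Boltzmann–Shannon entropy. The function names are ours, and the actual algorithm also involves a step-size condition tied to the relative-smoothness constant, which is omitted here:

```python
import numpy as np

def grad_h(x):
    # Gradient of h(u) = sum(u_j log u_j): coordinate-wise log(x_j) + 1
    return np.log(x) + 1.0

def grad_h_conj(y):
    # Gradient of the conjugate h*, i.e. the inverse mirror map: exp(y - 1)
    return np.exp(y - 1.0)

def nolips_step(x, grad_f, bregman_prox, lam):
    """One NoLips iteration: a gradient step taken in the dual (mirror)
    space, followed by a Bregman proximity step for the regularizer."""
    y = grad_h(x) - lam * grad_f(x)
    return bregman_prox(grad_h_conj(y))
```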

However, the $\ell_1$ regularizer introduces a well-known bias: large coefficients are systematically shrunk toward zero, degrading the quality of the recovered signal. To mitigate this bias without sacrificing sparsity, the authors propose a new operator constructed by the external division of two Bregman proximity operators. Specifically, they take the Boltzmann–Shannon entropy $h(u)=\sum_j u_j \log u_j$ as the Legendre generating function and use shifted $\ell_1$-norms $\|\cdot - a\mathbf{1}_n\|_1$ with a small positive shift $a$. The external-division operator is

$$T_{h,\omega,\eta_1,\eta_2,a}(x) \;=\; \omega\,\mathrm{Prox}_h^{\eta_1\|\cdot-a\mathbf{1}_n\|_1}(x) \;-\; (\omega-1)\,\mathrm{Prox}_h^{\eta_2\|\cdot-a\mathbf{1}_n\|_1}(x),$$

where $\omega>1$ and $\eta_2$ is chosen as $\eta_2=\log\!\big((\omega-1)/(\omega e^{-\eta_1}-1)\big)$, which requires $\eta_1<\log\omega$. Because the coefficients $\omega$ and $-(\omega-1)$ sum to one, the output lies on the line through the two proximity points but outside the segment joining them, hence the name external division. Proposition 1 provides an explicit coordinate-wise formula showing that, with this choice of $\eta_2$, the two multiplicative scalings cancel exactly ($\omega e^{-\eta_1}-(\omega-1)e^{-\eta_2}=1$), so the operator reduces to the identity on sufficiently large entries, thereby eliminating bias, while on small entries it behaves like a soft-shrinkage that enforces sparsity around the shift $a$.
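As a concrete illustration, a small NumPy sketch of the construction follows. The closed-form multiplicative prox comes from a standard computation for the Boltzmann–Shannon entropy; the function names are ours, and the exact constants in the paper's Proposition 1 may differ:

```python
import numpy as np

def bregman_prox_shifted_l1(x, eta, a):
    """Bregman prox of eta * ||u - a*1||_1 under h(u) = sum(u_j log u_j).

    Coordinate-wise, minimizing eta*|u - a| + u*log(u/x) - u + x gives a
    multiplicative shrinkage: scale by exp(-eta) or exp(+eta), and clip to
    the shift a whenever the scaled value would cross it (sparsity at a).
    """
    up, down = x * np.exp(-eta), x * np.exp(eta)
    return np.where(up > a, up, np.where(down < a, down, a))

def external_division(x, omega, eta1, a):
    """External division of two Bregman proximity operators.

    eta2 is set as in the paper, which needs omega * exp(-eta1) > 1; the
    affine weights omega and -(omega - 1) then make large entries pass
    through unchanged (identity), removing the l1 shrinkage bias.
    """
    assert omega * np.exp(-eta1) > 1.0, "requires eta1 < log(omega)"
    eta2 = np.log((omega - 1.0) / (omega * np.exp(-eta1) - 1.0))
    p1 = bregman_prox_shifted_l1(x, eta1, a)
    p2 = bregman_prox_shifted_l1(x, eta2, a)
    return omega * p1 - (omega - 1.0) * p2
```

For instance, with omega=2.0, eta1=0.3, a=0.01, entries sufficiently above a are returned unchanged, while entries near a are pinned exactly to a.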

The operator is embedded into the NoLips framework via a plug-and-play (PnP) strategy: the standard Bregman proximity step in NoLips is simply replaced by $T_{h,\omega,\eta_1,\eta_2,a}$. The resulting iteration reads

$$x_{k+1} \;=\; T_{h,\omega,\eta_1,\eta_2,a}\Big(\nabla h^{*}\big(\nabla h(x_k)-\lambda\nabla f(x_k)\big)\Big),$$

with the data-fidelity term $f(x)=D_\phi(Ax,b)$ based on the same entropy $\phi$. The authors argue for this choice because $D_\phi(Ax,b)$ remains well-defined even when $Ax$ contains zeros (using the convention $0\log 0=0$), which is common for sparse $x$.
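Putting the pieces together, a rough sketch of the resulting PnP iteration might look as follows, reusing nolips_step and external_division from the sketches above. The fixed step size lam and the eps safeguard in the KL gradient are our simplifications; the paper's actual step-size rule is not shown:

```python
import numpy as np

def kl_grad(x, A, b, eps=1e-12):
    # Gradient of f(x) = D_phi(Ax, b) = sum((Ax) log(Ax/b) - Ax + b),
    # i.e. A^T log(Ax / b); eps guards against log(0) when Ax has zeros.
    return A.T @ np.log((A @ x + eps) / b)

def pnp_nolips(x0, A, b, omega, eta1, a, lam, n_iter=200):
    """PnP-NoLips: nolips_step with its Bregman prox swapped for T."""
    T = lambda z: external_division(z, omega, eta1, a)
    x = x0.copy()
    for _ in range(n_iter):
        x = nolips_step(x, lambda u: kl_grad(u, A, b), T, lam)
    return x
```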

Two complementary reformulations illuminate the geometry of the new operator. The first (Corollary 1) shows that $T$ can be expressed as an affine combination of two shifted soft-shrinkage operators applied in the dual space: $x$ is mapped to the dual via the mirror map $\nabla h$, shrunk there, and mapped back with $(\nabla h)^{-1}$. In this view, sparsity is induced in the dual space, while bias reduction is achieved by the affine combination in the primal space. The second reformulation (Proposition 3) introduces scalar functions $S_1$ and $S_2$ (soft-shrinkage operators with different thresholds) together with a logarithmic correction term.
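The dual-space picture is easy to verify numerically for a single prox: under the Boltzmann–Shannon entropy the mirror map is essentially a logarithm, so the entropic prox coincides with ordinary soft-shrinkage applied in the log domain around log(a). A quick check, reusing bregman_prox_shifted_l1 from the sketch above (the exact constants in the paper's Corollary 1 may differ):

```python
import numpy as np

def soft_shrink(z, eta):
    # Standard Euclidean soft-shrinkage with threshold eta
    return np.sign(z) * np.maximum(np.abs(z) - eta, 0.0)

x, eta, a = np.array([0.01, 0.5, 3.0]), 0.4, 0.1
primal = bregman_prox_shifted_l1(x, eta, a)
# Shrink in the dual (log) domain around log(a), then map back with exp
dual = np.exp(np.log(a) + soft_shrink(np.log(x) - np.log(a), eta))
assert np.allclose(primal, dual)
```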

