Gaussian surrogates do well on Poisson inverse problems
In imaging inverse problems with Poisson-distributed measurements, it is common to use objectives derived from the Poisson likelihood. But performance is often evaluated by mean squared error (MSE), which raises a practical question: how much does a Poisson objective matter for MSE, even at low dose? We analyze the MSE of Poisson and Gaussian surrogate reconstruction objectives under Poisson noise. In a stylized diagonal model, we show that the unregularized Poisson maximum-likelihood estimator can incur large MSE at low dose, while Poisson MAP mitigates this instability through regularization. We then study two Gaussian surrogate objectives: a heteroscedastic quadratic objective motivated by the normal approximation of Poisson data, and a homoscedastic quadratic objective that yields a simple linear estimator. We show that both surrogates can achieve MSE comparable to Poisson MAP in the low-dose regime, despite departing from the Poisson likelihood. Numerical computed tomography experiments indicate that these conclusions extend beyond the stylized setting of our theoretical analysis.
💡 Research Summary
The paper investigates the relationship between reconstruction objectives derived from the Poisson likelihood and the mean‑squared error (MSE) performance in low‑dose imaging inverse problems, where measurements follow a Poisson distribution. While it is common to optimize Poisson‑based likelihoods (e.g., Richardson‑Lucy/ML‑EM) for photon‑limited data, the authors point out that minimizing the Poisson negative log‑likelihood does not guarantee minimal MSE, especially in ill‑posed settings where small singular values amplify noise.
A stylized diagonal forward model is introduced to enable closed‑form analysis. In this model, the Poisson maximum‑likelihood estimator (MLE) has the simple form \( \hat{x}_{j} = y_{j}/(s a_{j}) \). Its MSE decomposes into a variance term that scales with \( 1/(s a_{j}) \) and a truncation bias term. When the forward operator is ill‑conditioned (i.e., \( a_{j} \) decays rapidly), the variance term dominates for many modes, leading to large MSE at low dose (small \( s \)). The authors derive a dose‑dependent optimal resolution \( d(s) \) that balances variance and bias, showing that at extremely low dose (\( s \ll 1 \)) the variance dominates regardless of resolution.
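The per‑mode variance behavior can be checked with a quick Monte Carlo simulation. This is a sketch, not the paper's code: the signal, singular‑value decay, and dose values below are illustrative, and the predicted MSE is the variance term only (for a retained mode the per‑mode MLE is unbiased, so no truncation bias appears).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized diagonal model: y_j ~ Poisson(s * a_j * x_j).
# Symbols follow the summary; the numbers are illustrative.
x = np.array([1.0, 1.0, 1.0, 1.0])    # true signal per mode
a = np.array([1.0, 0.5, 0.1, 0.01])   # rapidly decaying singular values
s = 5.0                               # dose parameter

n_trials = 200_000
y = rng.poisson(s * a * x, size=(n_trials, len(x)))
x_mle = y / (s * a)                   # per-mode Poisson MLE  x_hat_j = y_j/(s a_j)

emp_mse = ((x_mle - x) ** 2).mean(axis=0)
pred_mse = x / (s * a)                # variance term, scaling as 1/(s a_j)

print(emp_mse)                        # grows sharply as a_j decays
print(pred_mse)
```

The empirical MSE tracks the \( x_{j}/(s a_{j}) \) prediction closely, and the last (small‑singular‑value) mode dominates the error, matching the text's claim about ill‑conditioning at low dose.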
To mitigate this, the paper examines two regularized approaches. First, Poisson MAP with isotropic Tikhonov regularization yields a per‑mode closed‑form solution (Eq. (12)). The resulting MSE reduction factor depends on the effective regularization level \( \gamma_{j} = \tau/(s a_{j})^{2} \) and is given by Eq. (14). Stronger regularization (larger \( \gamma_{j} \)) shrinks the high‑variance low‑dose modes more aggressively.
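For a per‑mode objective of the form \( s a_{j} x - y_{j} \log(s a_{j} x) + (\tau/2) x^{2} \), stationarity gives a quadratic equation with a nonnegative closed‑form root, consistent with the summary's description of a per‑mode solution (a sketch; the paper's Eq. (12) may be parameterized differently):

```python
import numpy as np

def poisson_map_tikhonov(y, s, a, tau):
    """Per-mode Poisson MAP with isotropic Tikhonov regularization.

    Minimizes  s*a*x - y*log(s*a*x) + (tau/2)*x**2  over x >= 0.
    Setting the derivative to zero gives  tau*x**2 + s*a*x - y = 0,
    whose nonnegative root is returned.
    """
    sa = np.asarray(s) * np.asarray(a)
    return (-sa + np.sqrt(sa**2 + 4.0 * tau * np.asarray(y))) / (2.0 * tau)

# Shrinkage toward zero grows with gamma = tau/(s*a)**2: for fixed counts y,
# a smaller s*a (low dose, small singular value) means stronger shrinkage.
print(poisson_map_tikhonov(y=4.0, s=1.0, a=1.0, tau=0.5))  # prints 2.0 (vs. MLE 4.0)
```

As \( \tau \to 0 \) the root recovers the unregularized MLE \( y_{j}/(s a_{j}) \), and at \( y_{j} = 0 \) the estimate is exactly zero.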
Second, the authors study Gaussian surrogate objectives. The heteroscedastic surrogate, derived from the normal approximation of Poisson counts, still contains logarithmic terms and is non‑quadratic, but supplementary analysis shows it can outperform the Poisson MLE in low‑dose regimes. More strikingly, a homoscedastic Gaussian surrogate, which assumes a constant unit variance irrespective of the signal, leads to a simple quadratic loss (Eq. (15)) and a linear MAP estimator (Eq. (17)). Its per‑mode MSE ratio relative to the Poisson MLE is given by Eq. (18), which, in the low‑dose limit, reduces the MSE by a factor of \( (1+\gamma_{j})^{-2} \). This factor is smaller than the Poisson MAP reduction, indicating that the homoscedastic surrogate can achieve even greater shrinkage of the variance.
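The variance‑shrinkage factor is easy to verify: minimizing a quadratic loss \( \tfrac{1}{2}(y_{j} - s a_{j} x)^{2} + \tfrac{\tau}{2} x^{2} \) per mode gives the linear estimator \( \hat{x}_{j} = s a_{j} y_{j} / ((s a_{j})^{2} + \tau) \), i.e. the MLE shrunk by \( 1/(1+\gamma_{j}) \) (a sketch under that assumed per‑mode form; the constants below are illustrative, and \( \gamma_{j} \) matches the summary's definition):

```python
import numpy as np

rng = np.random.default_rng(1)

# Homoscedastic quadratic surrogate per mode:
#   x_hat = s*a*y / ((s*a)**2 + tau) = (y/(s*a)) / (1 + gamma),
# with gamma = tau/(s*a)**2 as in the summary.
x_true, a, s, tau = 1.0, 0.1, 2.0, 0.5
sa = s * a
gamma = tau / sa**2

y = rng.poisson(sa * x_true, size=500_000)
x_mle = y / sa
x_lin = sa * y / (sa**2 + tau)        # linear MAP estimator

var_ratio = x_lin.var() / x_mle.var()
print(var_ratio, (1 + gamma) ** -2)   # the two values agree
```

Because the estimator is an exact rescaling of the MLE, its variance is reduced by exactly \( (1+\gamma_{j})^{-2} \); whether the added bias is acceptable is what the paper's Eq. (18) quantifies.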
The theoretical findings are validated on 2‑D parallel‑beam computed tomography (CT) using both the LoDoPaB‑CT dataset and a Shepp‑Logan phantom. Three reconstruction strategies are compared: (1) Poisson MAP with one‑step‑late EM updates, (2) regularized homoscedastic Gaussian MAP (HG MAP), and (3) penalized weighted least squares (PWLS) with various weight choices, including an oracle variance, a plug‑in estimate, and a plug‑in based on an FBP reconstruction. All methods are run to convergence, and the Tikhonov regularization parameter \( \tau \) is tuned to minimize MSE on a validation set.
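The one‑step‑late EM update used in strategy (1) can be sketched for a generic nonnegative system matrix. This follows the standard OSL‑EM form (Green, 1990) with a Tikhonov gradient; a tiny dense matrix stands in for the CT projector, and the implementation details here are not taken from the paper:

```python
import numpy as np

def osl_em(y, A, tau, n_iter=500, x0=None):
    """One-step-late EM for Poisson MAP with a Tikhonov penalty.

    Multiplicative update (penalty gradient evaluated at the current iterate):
        x <- x * A^T(y / (A x)) / (A^T 1 + tau * x)
    With tau = 0 this reduces to ML-EM / Richardson-Lucy.
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.copy()
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # guard against division by zero
        x = x * (A.T @ ratio) / (sens + tau * x)
    return x

# Tiny diagonal demo: with tau = 0 the update converges to the MLE y/(s a).
A = np.array([[2.0]])
y = np.array([5.0])
print(osl_em(y, A, tau=0.0, n_iter=50))        # prints [2.5]
```

On a diagonal system the fixed point of this update satisfies \( \tau x^{2} + s a x - y = 0 \), i.e. it converges to the same per‑mode Poisson MAP solution discussed in the analysis above.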
Results show that at very low counts (average expected count per detector bin ≈ 10), homoscedastic LS and regularized HG MAP achieve MSE comparable to or slightly better than Poisson MAP. Visual differences are minor, and the MSE gap disappears as the dose increases (average counts ≈ 1000), where all methods converge to similar performance. The experiments confirm that, in low‑count regimes, the key to good MSE performance is not exact modeling of the Poisson likelihood but effective regularization and variance shrinkage, which can be provided by simple quadratic surrogates.
In conclusion, the paper demonstrates both analytically and empirically that properly regularized Gaussian surrogate objectives can match or surpass Poisson‑based MAP estimators in terms of MSE for low‑dose Poisson inverse problems. This insight suggests that computationally cheaper linear estimators based on homoscedastic quadratic losses are viable alternatives in photon‑limited imaging applications, offering both theoretical guarantees and practical performance benefits.