In imaging inverse problems with Poisson-distributed measurements, it is common to use objectives derived from the Poisson likelihood. But performance is often evaluated by mean squared error (MSE), which raises a practical question: how much does a Poisson objective matter for MSE, even at low dose? We analyze the MSE of Poisson and Gaussian surrogate reconstruction objectives under Poisson noise. In a stylized diagonal model, we show that the unregularized Poisson maximum-likelihood estimator can incur large MSE at low dose, while Poisson MAP mitigates this instability through regularization. We then study two Gaussian surrogate objectives: a heteroscedastic quadratic objective motivated by the normal approximation of Poisson data, and a homoscedastic quadratic objective that yields a simple linear estimator. We show that both surrogates can achieve MSE comparable to Poisson MAP in the low-dose regime, despite departing from the Poisson likelihood. Numerical computed tomography experiments indicate that these conclusions extend beyond the stylized setting of our theoretical analysis.
When photon or electron counts are low, the Poisson noise model is standard [1,2,3], and it is natural to optimize a Poisson likelihood instead of a Gaussian quadratic data term for reconstruction [4,5,6]. But likelihood and reconstruction error are different objectives: maximizing the correct likelihood does not guarantee minimal MSE, particularly in ill-posed problems where small eigenvalues amplify noise. This raises a practical question: how much MSE do we lose (or perhaps gain) by using objectives based on the Gaussian likelihood when the data are Poisson? At high dose the Poisson distribution converges to a Gaussian, but we are interested in what happens at low dose. We show that even in this regime, properly regularized Gaussian surrogates can be surprisingly competitive with Poisson objectives in terms of MSE. In a diagonal model that allows for closed-form analysis, we show that the unregularized Poisson MLE incurs large MSE at low dose: small diagonal entries push individual modes into an effective low-count regime where single-photon events create variance spikes (Section 2.1).
We then discuss two Gaussian surrogates. First, we consider a heteroscedastic objective motivated by the normal approximation of the Poisson distribution. While our analysis shows that it can achieve smaller MSE than the Poisson MLE (Proposition A.1), it still retains some of the practical difficulties of Poisson modeling, since the log-likelihood remains non-quadratic. We then study a homoscedastic Gaussian surrogate (Section 2.2.2) that departs entirely from Poisson statistics. We show that it yields provably smaller MSE than the unregularized Poisson MLE in the same low-dose regime, and we compare it with Poisson maximum a posteriori (MAP) estimation with Tikhonov regularization (Section 2.2.1). The homoscedastic surrogate is particularly attractive: its quadratic objective yields a linear estimator, making it both computationally simple and analytically tractable.
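To make the linearity of the homoscedastic surrogate concrete, the following minimal NumPy sketch (illustrative only; the function name, Tikhonov penalty $\lambda\|x\|^2$, and solver choice are ours, not prescribed by the paper) computes the closed-form estimator of the regularized quadratic objective $\min_x \|sAx - y\|_2^2 + \lambda \|x\|_2^2$:

```python
import numpy as np

def homoscedastic_estimator(A, y, s, lam):
    """Tikhonov-regularized least squares under the homoscedastic
    Gaussian surrogate: argmin_x ||s*A@x - y||^2 + lam*||x||^2.
    The normal equations give a linear (closed-form) estimator."""
    n = A.shape[1]
    # Solve (s^2 A^T A + lam I) x = s A^T y
    return np.linalg.solve(s**2 * A.T @ A + lam * np.eye(n), s * A.T @ y)
```

Because the objective is quadratic, the estimator is linear in $y$, which is what makes its MSE analytically tractable.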
Numerical experiments on computed tomography (Section 3) confirm that these findings extend beyond the diagonal setting. We find that simple regularized quadratic solvers, including ordinary least squares, match Poisson MAP in MSE across all tested count levels.
Poisson noise arises in photon-limited imaging modalities, including fluorescence deblurring, emission tomography, astronomical imaging, and denoising [7,8,9]. The Poisson MLE is often computed using the Richardson-Lucy algorithm [10,11], which corresponds to ML-EM in emission tomography [12,13,14]. In practice, ML-EM reconstructions are regularized to mitigate variance explosion, for example via early stopping or explicit penalties [15]. Poisson MAP is commonly tackled via one-step-late (OSL) MAP-EM updates [16]. OSL is widely used in tomography, including with TV-type regularization [17,18].
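For reference, the multiplicative ML-EM/Richardson-Lucy update mentioned above can be sketched as follows (a minimal NumPy illustration under our measurement model; the initialization, iteration count, and numerical floor `eps` are our choices, not taken from the cited works):

```python
import numpy as np

def richardson_lucy(A, y, s, n_iter=50, eps=1e-12):
    """ML-EM / Richardson-Lucy iterations for y_j ~ Poisson((s*A@x)_j).
    Multiplicative updates preserve nonnegativity; early stopping acts
    as an implicit regularizer against variance explosion."""
    m, n = A.shape
    x = np.ones(n)                    # strictly positive initialization
    sens = s * A.T @ np.ones(m)       # sensitivity term (sA)^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(s * A @ x, eps)   # data / current prediction
        x = x * (s * A.T @ ratio) / np.maximum(sens, eps)
    return x
```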
An alternative line of work relies on variance-stabilizing transformations, most notably the Anscombe transform [19], which approximates Poisson noise as additive Gaussian noise with nearly constant variance. These approximations lead to weighted least squares (WLS) and penalized WLS formulations [20,21,22], which are widely used in tomography.
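The Anscombe transform itself, and one common way it induces WLS weights, can be sketched as follows (the variance floor in `wls_weights` is a standard heuristic, not a construction from the cited references):

```python
import numpy as np

def anscombe(y):
    """Anscombe transform: z = 2*sqrt(y + 3/8) has approximately unit
    variance when y ~ Poisson(lam), for lam not too small."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def wls_weights(y):
    """Heuristic WLS weights 1/var_j, with the Poisson variance
    estimated by the observed count (floored to avoid division by 0)."""
    return 1.0 / np.maximum(y, 1.0)
```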
We consider measurements $y = (y_1, \dots, y_m) \in \mathbb{N}^m$ modeled as independent Poisson counts
$$y_j \sim \mathcal{P}\big((sAx^\star)_j\big), \qquad j = 1, \dots, m,$$
where $\mathcal{P}(\lambda)$ denotes the Poisson distribution with parameter $\lambda$, $A : \mathbb{R}^n \to \mathbb{R}^m$ is a forward operator such that $(Ax^\star)_j \ge 0$ for all $j = 1, \dots, m$, $x^\star \in \mathbb{R}^n$ is the unknown signal, and $s > 0$ is the dose. A key quantity in our analysis is the expected number of counts in measurement $j$:
$$\mu_j := s\,(Ax^\star)_j.$$
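Drawing synthetic data from this model is straightforward; the following sketch (illustrative, with our function name and fixed seed) is used by the later snippets:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(A, x_star, s):
    """Draw y_j ~ Poisson(mu_j) with mu_j = s*(A @ x_star)_j."""
    mu = s * A @ x_star
    return rng.poisson(mu)
```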
In this paper, we focus on the low-dose regime: $\mu_j \ll 1$. We aim to characterize the mean-squared error (MSE),
$$\mathrm{MSE}(\hat{x}) := \mathbb{E}\,\big\|\hat{x} - x^\star\big\|_2^2.$$
In particular, we study how the MSE depends on the dose s, the operator A, and, importantly, the reconstruction strategy.
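Outside the diagonal setting analyzed below, the MSE has no closed form, but it can always be estimated by Monte Carlo. A self-contained sketch (our names and trial count; any estimator with signature `(A, y, s)` can be plugged in, e.g. `lambda A, y, s: homoscedastic_estimator(A, y, s, lam=1.0)` from the earlier snippet):

```python
import numpy as np

def monte_carlo_mse(estimator, A, x_star, s, n_trials=1000, seed=0):
    """Estimate MSE(x_hat) = E||x_hat - x_star||_2^2 by averaging the
    squared error over independent draws y_j ~ Poisson(s*(A@x_star)_j)."""
    rng = np.random.default_rng(seed)
    mu = s * A @ x_star
    err = 0.0
    for _ in range(n_trials):
        y = rng.poisson(mu)
        err += np.sum((estimator(A, y, s) - x_star) ** 2)
    return err / n_trials
```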
We begin by considering the Poisson MLE to estimate $x^\star$ in the low-dose regime ($\mu_j \ll 1$ for many $j$), a straightforward starting point since it is derived from the Poisson model:
$$x_{\mathrm{MLE,P}} \in \arg\min_{x \in X_+} L_P(x; y),$$
where $X_+ := \{x \in \mathbb{R}^n_+ : (Ax)_j \ge 0,\ j = 1, \dots, m\}$ and
$$L_P(x; y) := \sum_{j=1}^m \Big[ s(Ax)_j - y_j \log s(Ax)_j \Big].$$
We interpret $L_P(x; y)$ as an extended-valued function on $X_+$ by setting $L_P(x; y) = +\infty$ whenever $s(Ax)_j = 0$ for some $j$ with $y_j > 0$. Moreover, we adopt the convention $0 \log 0 := 0$, i.e., $y_j \log s(Ax)_j := 0$ when $y_j = 0$ and $s(Ax)_j = 0$.
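These conventions translate directly into code; a minimal sketch of the objective (our function name, evaluating $L_P$ exactly as defined above):

```python
import numpy as np

def poisson_nll(x, A, y, s):
    """L_P(x; y) = sum_j [ s*(A@x)_j - y_j*log(s*(A@x)_j) ], with the
    conventions of the text: +inf if some prediction is 0 while y_j > 0,
    and y_j*log(...) := 0 whenever y_j = 0."""
    pred = s * A @ x
    if np.any((pred == 0) & (y > 0)):
        return np.inf
    obs = y > 0                      # terms with y_j = 0 contribute no log
    return np.sum(pred) - np.sum(y[obs] * np.log(pred[obs]))
```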
To analyze the MSE of $x_{\mathrm{MLE,P}}$, we study a stylized model in which $A$ is linear and diagonal (and $m = n$). In the Gaussian setting, many linear inverse problems can be reduced to diagonal form without breaking Gaussianity. This is not true for Poisson random variables, whose distribution is not preserved by a linear change of coordinates. Nonetheless, the diagonal model still provides useful insight, as it allows an explicit analysis of how the MSE depends on the dose $s$ and the conditioning of $A$. We leave the much more involved non-diagonal case to future work.
We consider the setting where only the first $d \le m$ components of the measurement vector are observed:
$$y_j \sim \mathcal{P}\big(s\, a_j x^\star_j\big), \qquad j = 1, \dots, d,$$
where $a_j$ denotes the $j$-th diagonal entry of $A$. The remaining $m - d$ components are not observed.
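In this diagonal model the variance spike of the unregularized MLE can be seen directly: per mode, $\hat{x}_j = y_j/(s a_j)$ is unbiased with $\mathrm{MSE}_j = x^\star_j/(s a_j)$, which blows up as $a_j \to 0$. A minimal numerical sketch (our parameter values; the Tikhonov weight $\lambda = 1$ is purely illustrative) contrasts this with the regularized quadratic surrogate:

```python
import numpy as np

# Per mode: unregularized MLE x_hat_j = y_j/(s*a_j) is unbiased with
# MSE_j = x_star_j/(s*a_j), exploding as a_j -> 0 at fixed low dose s.
rng = np.random.default_rng(0)
s, x_star, lam = 0.5, 1.0, 1.0
for a in [1.0, 0.1, 0.01]:
    y = rng.poisson(s * a * x_star, size=100_000)
    mle = y / (s * a)                          # unregularized Poisson MLE
    ridge = s * a * y / ((s * a) ** 2 + lam)   # min (s*a*x - y)^2 + lam*x^2
    print(a, np.mean((mle - x_star) ** 2), np.mean((ridge - x_star) ** 2))
```

At $a = 0.01$, almost every draw is $y_j = 0$, and the rare single-photon events $y_j = 1$ produce estimates of size $1/(s a_j)$, which is exactly the variance-spike mechanism described in the introduction; the regularized estimator trades this variance for a bounded bias.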