This paper presents a hierarchical Bayesian model for reconstructing sparse images from observations that are obtained through a linear transformation and corrupted by additive white Gaussian noise. Our hierarchical Bayesian model is well suited to naturally sparse imaging applications, as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayesian priors. We propose a prior based on a weighted mixture of a positive exponential distribution and a mass at zero. The hyperparameters of this prior are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all parameters are available. Our algorithm therefore provides more information than previously proposed sparse reconstruction methods, which only give a point estimate. The performance of our hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and on real data collected from a tobacco virus sample with a prototype MRFM instrument.
For several decades, image deconvolution has attracted increasing interest [2], [47]. Image deconvolution reconstructs images from observations provided by optical or other devices and may include denoising, deblurring, or restoration. Applications are numerous, including astronomy [49], medical imaging [48], remote sensing [41], and photography [55]. More recently, a new imaging technology, called Magnetic Resonance Force Microscopy (MRFM), has been developed (see [38] and [29] for reviews). This nondestructive method improves the detection sensitivity of standard magnetic resonance imaging (MRI) [46]. Three-dimensional MRI at 4-nm spatial resolution has recently been achieved by the IBM MRFM prototype for imaging the proton density of a tobacco virus [8]. Because of the potential atomic-level resolution of this technology, the resulting 2-dimensional or 3-dimensional images are naturally sparse in the standard pixel basis. Indeed, as the observed objects are molecules, most of the image is empty space. In this paper, a hierarchical Bayesian model is proposed to reconstruct such images.
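For concreteness, the measurement model referred to above can be written, in our notation (the paper's own symbols may differ), as

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad \mathbf{n} \sim \mathcal{N}\left(\mathbf{0}, \sigma^{2}\mathbf{I}\right),$$

where $\mathbf{y}$ collects the observations, $\mathbf{H}$ is the known linear operator (for MRFM, convolution with the instrument point spread function), $\mathbf{x}$ is the sparse nonnegative image to be reconstructed, and $\sigma^{2}$ is the unknown noise variance.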
Sparse signal and image deconvolution has motivated research in many scientific applications, including spectral analysis in astronomy [4], seismic signal analysis in geophysics [7], [45], and deconvolution of ultrasonic B-scans [39]. We propose here a hierarchical Bayesian model based on selecting appropriate prior distributions for the unknown image and the other unknown parameters. The image prior is a weighted mixture of a single-sided exponential distribution and a mass at zero. When the nonzero part of such a prior is chosen to be a centered normal distribution, the prior reduces to a Bernoulli-Gaussian process. This distribution has been widely used in the literature to build Bayesian estimators for sparse deconvolution problems (see [5], [16], [24], [28], [33] or, more recently, [3] and [17]). However, choosing a distribution with a heavier tail may strengthen the sparsity-inducing property of the prior. Combining a Laplacian distribution with an atom at zero results in the so-called LAZE prior. This distribution has been used in [27] to solve a general denoising problem in a non-Bayesian quasi-maximum-likelihood estimation framework. In [52], [54], this prior has also been used for sparse reconstruction of noisy images, including MRFM images. The principal weakness of these previous approaches is their sensitivity to the hyperparameters that determine the prior distribution, e.g., the LAZE mixture coefficient and the relative weighting of the prior and the likelihood function. The hierarchical Bayesian approach proposed in this paper circumvents these difficulties. Specifically, a new prior composed of a mass at zero and a single-sided exponential distribution is introduced, which accounts for the positivity and sparsity of the pixels in the image. Conjugate priors are placed on the hyperparameters of the image prior; it is this step that makes our approach hierarchical Bayesian. The full Bayesian posterior can then be derived from samples generated by Markov chain Monte Carlo (MCMC) methods [44].
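In this notation, the pixel-wise prior described above can be sketched as the mixture

$$f(x_i \mid w, a) = (1 - w)\,\delta(x_i) + \frac{w}{a}\,\exp\!\left(-\frac{x_i}{a}\right)\mathbf{1}_{\mathbb{R}^{+}}(x_i),$$

where $w \in [0,1]$ is the prior probability that pixel $x_i$ is nonzero, $a > 0$ is the scale of the exponential component, $\delta(\cdot)$ is the Dirac delta, and $\mathbf{1}_{\mathbb{R}^{+}}(\cdot)$ enforces positivity. Replacing the single-sided exponential by a centered Gaussian recovers the Bernoulli-Gaussian model, while a two-sided Laplacian component yields the LAZE prior mentioned above.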
Estimating the hyperparameters of the prior distribution described above is the most difficult task, and poor estimation leads to instability. Empirical Bayes (EB) and Stein unbiased risk estimation (SURE) solutions were proposed in [52], [54] to deal with this issue. However, instability was observed, especially at higher signal-to-noise ratios (SNR). In the Bayesian estimation framework, two approaches are available for estimating these hyperparameters. The first couples MCMC methods with an expectation-maximization (EM) algorithm or a stochastic EM algorithm [30], [32] to maximize a penalized likelihood function. The second defines non-informative prior distributions for the hyperparameters, introducing a second level of hierarchy into the Bayesian formulation. This latter, fully Bayesian approach, adopted in this paper, has been successfully applied to signal segmentation [11], [14], [15] and to semi-supervised unmixing of hyperspectral imagery [13].
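To make the second, fully Bayesian approach concrete, the following sketch shows conjugate hyperparameter updates of the kind such a hierarchy permits. It assumes a uniform prior on $w$, an inverse-gamma IG($\alpha_0$, $\alpha_1$) prior on $a$, and a Jeffreys prior on the noise variance; these are plausible conjugate choices for the model above, not necessarily the exact ones used in the paper.

```python
import numpy as np

def sample_hyperparameters(x, y, H, rng, alpha0=1.0, alpha1=1.0):
    """One Gibbs draw of (w, a, sigma2) given the current image x.

    Assumed conjugate hierarchy (ours, for illustration):
      w      ~ Uniform(0, 1)       -> Beta posterior
      a      ~ IG(alpha0, alpha1)  -> inverse-gamma posterior
      sigma2 ~ Jeffreys 1/sigma2   -> inverse-gamma posterior
    """
    nonzero = x > 0
    n1 = int(nonzero.sum())          # number of nonzero pixels
    n0 = x.size - n1                 # number of zero pixels

    # w | x ~ Beta(1 + n1, 1 + n0)
    w = rng.beta(1 + n1, 1 + n0)

    # a | x ~ IG(alpha0 + n1, alpha1 + sum of nonzero pixel values);
    # an IG(alpha, beta) draw is the reciprocal of a Gamma(alpha, 1/beta) draw
    a = 1.0 / rng.gamma(alpha0 + n1, 1.0 / (alpha1 + x[nonzero].sum()))

    # sigma2 | x, y ~ IG(N / 2, ||y - H x||^2 / 2)
    r = y - H @ x
    sigma2 = 1.0 / rng.gamma(y.size / 2.0, 2.0 / (r @ r))

    return w, a, sigma2
```

Because every conditional above is a standard distribution, these updates are exact draws, which is what makes the Gibbs strategy attractive for this model.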
Only a few papers have been published on the reconstruction of images from MRFM data [6], [8], [56], [58]. In [21], several techniques based on linear filtering and maximum-likelihood principles were proposed that do not exploit image sparsity. More recently, Ting et al. introduced sparsity-penalized reconstruction methods for MRFM (see [54] or [53]). The reconstruction problem was formulated as a decomposition into a deconvolution step and a denoising step, yielding an iterative thresholding framework. In [54], the hyperparameters are estimated using penalized log-likelihood criteria, including the SURE approach [50]. Despite promising results, especially at low SNR, penalized likelihood approaches require iterative maximization algorithms that are often slow to converge and can get stuck in local maxima [10]. In contrast to [54], the fully Bayesian approach developed in this paper estimates the hyperparameters jointly with the unknown image from their posterior distribution, avoiding this manual tuning and the instabilities noted above.
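The overall sampler alluded to in the abstract then alternates between the image and the hyperparameters. The skeleton below is a sketch of that alternation; `sample_pixels` is a hypothetical stand-in for the per-pixel conditional draws (a Bernoulli choice between the mass at zero and a positive-truncated Gaussian), not a function defined in the paper.

```python
import numpy as np

def gibbs_reconstruct(y, H, n_iter=1000, burn_in=200, seed=0):
    """Alternating Gibbs sampler skeleton for sparse reconstruction."""
    rng = np.random.default_rng(seed)
    x = np.zeros(H.shape[1])               # start from the empty image
    w, a, sigma2 = 0.5, 1.0, 1.0           # arbitrary initialization
    kept = []
    for t in range(n_iter):
        # hypothetical per-pixel draws from the zero-mass / truncated-Gaussian mixture
        x = sample_pixels(x, y, H, w, a, sigma2, rng)
        # conjugate hyperparameter draws (see the sketch above)
        w, a, sigma2 = sample_hyperparameters(x, y, H, rng)
        if t >= burn_in:
            kept.append(x.copy())
    # a point estimate, e.g. the posterior mean; the MAP estimate described
    # in the abstract would instead keep the sample maximizing the posterior
    return np.mean(kept, axis=0)
```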