Closed form solution of the maximum entropy equations with application to fast radio astronomical image formation.
In this paper we analyze maximum entropy image deconvolution. We show that, given the Lagrange multiplier, a closed-form solution can be obtained for the image parameters. Using this solution we provide a better understanding of some of the known behavior of the maximum entropy algorithm. The solution also yields a very efficient implementation of the maximum entropy deconvolution technique used in the AIPS package, requiring only the computation of a single dirty image and the inversion of an elementary function per pixel.
Radio astronomical imaging using Earth-rotation synthesis radio telescopes is an ill-posed problem due to the irregular sub-Nyquist sampling of the Fourier domain. Over the last 40 years many deconvolution techniques have been developed to solve this problem, all based on models for the radio image. Among these techniques are the CLEAN method by Hogbom (1974); the maximum entropy method (MEM) by Frieden (1972), Gull and Daniell (1978), Ables (1974) and Wernecke (1977); extensions of CLEAN to support multi-resolution and wavelets by Bhatnagar and Cornwell (2004), Cornwell (2008) and Cornwell et al. (2008); nonnegative least squares by Briggs (1995); parametric imaging by Leshem and van der Veen (2000) and Ben-David and Leshem (2008); and sparse L1 reconstruction by Levanda and Leshem (2008) and Wiaux et al. (submitted 2009). While there is substantial practical experience with these algorithms, we still lack a comprehensive theoretical analysis. This problem will become more critical for the next generation of radio interferometers to be built in the coming two decades, such as the Square Kilometre Array (SKA), the Low Frequency Array (LOFAR), the Allen Telescope Array (ATA), the Long Wavelength Array (LWA) and the Atacama Large Millimeter Array (ALMA). These radio telescopes will include many more stations, will have significantly increased sensitivity, and some will operate at much lower frequencies than previous instruments; they will therefore be more sensitive to modeling and calibration errors.
The maximum entropy image formation technique is one of the two most popular deconvolution techniques. The maximum entropy principle was first proposed by Jaynes (1957), and Jaynes (1982) provides a good overview of the philosophy behind the idea. Since then it has been used in a wide spectrum of imaging problems. The basic idea behind MEM is the following: among all images consistent with the measured data and the noise distribution, consider only those that satisfy the positivity demand, i.e., for which the sky brightness is a positive function. From these, select the one that is most likely to have been created randomly. This idea was also proposed by Frieden (1972) for optical images and applied to radio astronomical imaging by Gull and Daniell (1978). Other approaches, based on the differential entropy, were proposed by Ables (1974) and Wernecke (1977). An extensive collection of papers discussing the various methods and aspects of maximum entropy can be found in Roberts (1984). Narayan and Nityananda (1986) provide an overview of maximum entropy techniques and of the various options for choosing the entropy measure. Interestingly, that paper gives a closed-form solution for the noiseless case, but not for the general case.
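The selection principle described above can be stated as a constrained optimization problem. Using the entropy measure of Gull and Daniell (1978), one standard formulation (the notation here is ours, introduced for illustration) is:

```latex
\max_{\mathbf{b} \geq 0} \; H(\mathbf{b}) = -\sum_i b_i \log \frac{b_i}{m_i}
\qquad \text{subject to} \qquad
\chi^2(\mathbf{b}) = \sum_k \frac{\bigl| V_k - \hat{V}_k(\mathbf{b}) \bigr|^2}{\sigma_k^2} \leq \epsilon ,
```

where $\mathbf{b}$ is the sky brightness image, $m_i$ a positive prior image, $V_k$ the measured visibilities, $\hat{V}_k(\mathbf{b})$ the model visibilities obtained by Fourier transforming $\mathbf{b}$, and $\sigma_k^2$ the noise variance on the $k$-th visibility. The Lagrange multiplier attached to the $\chi^2$ constraint is the quantity referred to throughout this paper.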
The approach of Gull and Daniell (1978) begins with a prior image and iterates between maximizing the entropy function and updating the χ2 fit to the data. The computation of the image from the prior is done analytically, but at each step the model visibilities are updated through a two-dimensional Fourier transform. This type of algorithm is known as a fixed-point algorithm, since it iterates a function until it converges to a fixed point. While this approach usually converges for the maximum entropy problem, the convergence can be slow (see Narayan and Nityananda 1986). Hence, improved methods based on Newton's method and the conjugate gradient technique have been proposed (Cornwell and Evans 1985; Sault 1990; Skilling and Bryan 1984). These methods perform direct optimization of the entropy function subject to the χ2 constraint.
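The fixed-point scheme can be sketched as follows. This is a minimal one-dimensional toy illustration, not the tuned algorithm: the flat prior, the sampling mask, the fixed Lagrange multiplier `lam`, and the absence of step control are all our simplifying assumptions.

```python
import numpy as np

def mem_fixed_point(v_obs, mask, prior, lam, n_iter=200):
    """Toy 1-D maximum-entropy fixed-point iteration (illustrative only).

    v_obs : observed visibilities (arbitrary values where unsampled)
    mask  : boolean sampling pattern in the Fourier domain
    prior : positive prior image m
    lam   : Lagrange multiplier trading entropy against the chi^2 fit
    """
    b = prior.copy()
    for _ in range(n_iter):
        v_model = np.fft.fft(b)                # model visibilities
        resid = np.where(mask, v_obs - v_model, 0.0)
        grad = np.real(np.fft.ifft(resid))     # chi^2 gradient direction (up to scale)
        b = prior * np.exp(lam * grad)         # entropy stationarity condition
    return b
```

Each pass costs one forward and one inverse FFT, which is why the per-iteration cost is dominated by the Fourier transforms; the slow overall convergence noted by Narayan and Nityananda (1986) comes from the number of such passes required.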
In this paper we provide a closed-form solution for the maximum entropy image formation problem. This solution yields a novel short proof of the uniqueness of the maximum entropy solution and allows us to give a theoretical explanation for the failure of the maximum entropy algorithm in the presence of strong point sources. The explicit expressions for the solution let us quantify the effect of the free parameters involved in the maximum entropy algorithm. Using the closed-form solution we also develop a new technique for solving the maximum entropy deconvolution with a fixed number of operations per pixel, apart from an initial step of gridding, deconvolution and the computation of a single dirty image, as described in Taylor et al. (1999). We believe this paper is a significant step forward in understanding image deconvolution techniques.
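To convey the flavor of a per-pixel elementary-function inversion, here is a sketch under a stand-in assumption: the exact scalar equation is derived later in the paper, and the specific form `log(b) + beta*b = beta*d` used below, the weight `beta`, and the helper name `invert_pixel` are hypothetical choices of ours, not the paper's equation. The point is only that a monotone scalar equation of this kind is invertible with a handful of Newton steps per pixel, independent of image size.

```python
import numpy as np

def invert_pixel(d, beta=1.0, n_newton=30):
    """Solve log(b) + beta*b = beta*d for b > 0 by Newton's method.

    A hypothetical per-pixel equation standing in for the elementary
    function inverted in the closed-form MEM solution. Works
    element-wise on an array d (e.g. a dirty image).
    """
    b = np.maximum(d, 1e-6)                 # positive starting point
    for _ in range(n_newton):
        f = np.log(b) + beta * b - beta * d   # residual of the scalar equation
        fp = 1.0 / b + beta                   # its derivative in b (always > 0)
        b = np.maximum(b - f / fp, 1e-12)     # Newton step, clipped to stay positive
    return b
```

Because the left-hand side is strictly increasing in b, the root is unique and Newton's method converges quickly from any positive starting point; applied to a whole dirty image at once, this gives the constant-work-per-pixel behavior described above.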
We begin with a short description of the maximum entropy algorithm. Following the standard convention in this field (Gull and Daniell 1978; Cornwell and Evans 1985; Sault 1990), we present the one-dimensional case; similar results hold for the two-dimensional case, at the cost of more complicated notation.
…(Full text truncated)…