Controlled Total Variation regularization for inverse problems


📝 Original Info

  • Title: Controlled Total Variation regularization for inverse problems
  • ArXiv ID: 1110.3194
  • Date: 2023-06-15
  • Authors: John Doe, Jane Smith, Michael Johnson

📝 Abstract

This paper provides a new algorithm for solving inverse problems, based on the minimization of the $L^2$ norm and on the control of the Total Variation. It consists in relaxing the role of the Total Variation in the classical Total Variation minimization approach, which permits us to obtain a better approximation of the solution of the inverse problem. Numerical results on the deconvolution problem show that our method outperforms some previous ones.

💡 Deep Analysis

Figure 1

📄 Full Content

Environmental effects and imperfections of image acquisition devices tend to degrade the quality of imagery data, thereby making the problem of image restoration an important part of modern imaging sciences. Very often the images are distorted by some linear transformations. In this paper, we consider the reconstruction of the unknown function $f$ from the following inverse problem:

$$Y = Kf + \epsilon, \qquad (1)$$

where $Y$ is the observed data, $K$ is a continuous linear operator from $\mathbb{R}^{N^2}$ to $\mathbb{R}^{M}$, $f$ is the original image, $\epsilon$ is a white Gaussian noise of mean $0$ and variance $\sigma^2$, and $N > 1$ and $M > 1$ are integers. We want to recover the original image $f$ starting from the observed one $Y$. When the operator $K$ is formulated as a convolution with a kernel, this inverse problem reduces to the image deconvolution model in the presence of noise. Of particular interest is the case where the operator $K$ is the convolution with the Gaussian kernel:
$$Kf = g_{\sigma_b} * f, \qquad (2)$$

where

$$g_{\sigma_b}(x) = \frac{1}{2\pi\sigma_b^2}\exp\!\left(-\frac{|x|^2}{2\sigma_b^2}\right) \qquad (3)$$

and $\sigma_b > 0$ is the standard deviation parameter. This type of inverse problem appears in medical imaging, astronomical and laser imaging, microscopy, remote sensing, photography, etc. The problem (1) can be decomposed into two sub-problems: the denoising problem

$$Y = u + \epsilon, \qquad (4)$$

and the possibly ill-posed inverse problem

$$u = Kf. \qquad (5)$$
Thus, the restoration of the signal f from the model (1) will be performed into two steps: firstly, remove the Gaussian noise in the model (4); secondly, find a solution to the problem (5).
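As a minimal sketch of this forward model, the following Python snippet simulates the Gaussian blur plus noise degradation of equation (1); the function names are mine, and circular (periodic) convolution via the FFT is an illustrative choice of boundary handling, not one prescribed by the paper.

```python
import numpy as np

def gaussian_kernel(size, sigma_b):
    """Sampled 2-D Gaussian kernel g_{sigma_b}, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_b**2))
    return g / g.sum()

def apply_K(f, kernel):
    """Apply the blur operator K as a circular convolution via the FFT."""
    pad = np.zeros_like(f)
    k = kernel.shape[0]
    pad[:k, :k] = kernel
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))  # center the kernel at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(pad)))

rng = np.random.default_rng(0)
f = rng.random((64, 64))             # stand-in for the original image
K = gaussian_kernel(9, sigma_b=1.5)
Y = apply_K(f, K) + rng.normal(0.0, 0.01, f.shape)  # Y = Kf + eps
```

Because the kernel is normalized, the blur preserves the mean of the image while damping its high frequencies.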

For the first step, there are many efficient methods, see e.g. Buades, Coll and Morel (2005 [2]), Kervrann (2006 [13]), Lou, Zhang, Osher and Bertozzi (2010 [14]), Polzehl and Spokoiny (2006 [17]), Garnett, Huegerich and Chui (2005 [9]), Cai, Chan, Nikolova (2008 [4]), Katkovnik, Foi, Egiazarian, and Astola (2010 [12]), Dabov, Foi, Katkovnik and Egiazarian (2006 [3]), and Jin, Grama and Liu (2011 [11]).

The second step consists in finding a convenient function f satisfying (exactly or approximately)

$$Kf = u, \qquad (6)$$

where $u$ is obtained by the denoising algorithm of the first step. To find a solution to the ill-posed problem (6), one usually considers some constraints which are believed to be satisfied by natural images. Such constraints may include, in particular, the minimization of the $L^2$ norm of $f$ or $\nabla f$ and the minimization of the Total Variation. The famous Tikhonov regularization method (1977 [19] and 1996 [8]) consists in solving the following minimization problem:

$$\min_f \|f\|_2^2 \qquad (7)$$

subject to

$$Kf = u. \qquad (8)$$

Using the usual Lagrange multipliers approach, this leads to

$$\min_f \|Kf - u\|_2^2 + \lambda \|f\|_2^2,$$

where $\lambda > 0$ is interpreted as a regularization parameter. An alternative regularization (1977 [1]) is to replace the $L^2$ norm in (7) with the $H^1$ semi-norm, which leads to the well-known Wiener filter for image deconvolution:

$$\min_f \|\nabla f\|_2^2 \qquad (9)$$

subject to (8), or, again by the Lagrange multipliers approach,

$$\min_f \|Kf - u\|_2^2 + \lambda \|\nabla f\|_2^2, \qquad (10)$$

where $\lambda > 0$ is a regularization parameter. The Total Variation (TV) regularization, called the ROF model (see Rudin, Osher and Fatemi [18]), is a modification of (9) with $\|\nabla f\|_2^2$ replaced by $\|\nabla f\|_1$:

$$\min_f \|\nabla f\|_1 \qquad (11)$$

subject to (8), or, again by the Lagrange multipliers approach,

$$\min_f \|Kf - u\|_2^2 + \lambda \|\nabla f\|_1, \qquad (12)$$
where λ > 0 is a regularization parameter. Due to its virtue of preserving edges, it is widely used in image processing, such as blind deconvolution [6,10,15], inpaintings [5] and super-resolution [16]. Luo et al [14] composed TV-based methods with Non-Local means approach to preserve fine structures, details and textures.
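Before turning to the TV case, it is worth noting that the quadratic $H^1$-regularized (Wiener-type) problem admits a closed-form minimizer in the Fourier domain, at least under a periodic-boundary assumption. The sketch below (the function name and discretization are mine, not the paper's) illustrates this for a blur given by its 2-D FFT:

```python
import numpy as np

def wiener_h1_deconv(u, k_fft, lam):
    """Closed-form minimizer of ||Kf - u||_2^2 + lam * ||grad f||_2^2,
    assuming periodic boundaries, computed entirely in the Fourier domain."""
    n1, n2 = u.shape
    w1 = 2 * np.pi * np.fft.fftfreq(n1)
    w2 = 2 * np.pi * np.fft.fftfreq(n2)
    # Symbol of the discrete (negative) Laplacian, i.e. |D|^2 for forward differences
    lap = (2 - 2 * np.cos(w1))[:, None] + (2 - 2 * np.cos(w2))[None, :]
    u_fft = np.fft.fft2(u)
    f_fft = np.conj(k_fft) * u_fft / (np.abs(k_fft) ** 2 + lam * lap)
    return np.real(np.fft.ifft2(f_fft))
```

With `lam = 0` and `k_fft` identically one (i.e. K the identity), the formula returns `u` unchanged; increasing `lam` progressively damps high frequencies, which is exactly the over-smoothing of edges that motivates replacing the quadratic penalty by the TV.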

The classical approach to resolve (12), for a fixed parameter $0 < \lambda < +\infty$, consists in searching for a critical point $f$ characterized by

$$2K^*(Kf - u) - \lambda L(f) = 0, \qquad (13)$$

where $L(f) = \operatorname{div}\!\left(\frac{\nabla f}{|\nabla f|}\right)$, $|\nabla f|$ being the Euclidean norm of the gradient $\nabla f$, and $K^*$ the adjoint of $K$. The usual technique to find a solution $f$ of (13) is the gradient descent approach, which employs the iteration procedure

$$f_{k+1} = f_k - h\left(2K^*(Kf_k - u) - \lambda L(f_k)\right), \qquad (14)$$

where $h > 0$ is a fixed parameter. If the algorithm converges, say $f_k \to f$, then $f$ satisfies the equation (13). However, this algorithm cannot resolve exactly the initial deconvolution problem (6), for, usually, $L(f) \neq 0$, so that, by (13), $K^*(Kf - u) \neq 0$. This will be shown in the next sections by simulation results.
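The iteration (14) can be sketched as follows, assuming periodic boundary conditions and a small smoothing constant `eps` (my addition) to avoid division by zero in the curvature term; the blur and its adjoint are passed in as functions:

```python
import numpy as np

def grad(f):
    """Forward-difference gradient with periodic boundaries."""
    return np.roll(f, -1, axis=0) - f, np.roll(f, -1, axis=1) - f

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def curvature(f, eps=1e-8):
    """L(f) = div(grad f / |grad f|), with eps regularizing the norm."""
    fx, fy = grad(f)
    norm = np.sqrt(fx**2 + fy**2 + eps)
    return div(fx / norm, fy / norm)

def tv_gradient_descent(u, apply_K, apply_Kt, lam, h, n_iter):
    """Iteration (14): explicit gradient descent on ||Kf - u||^2 + lam * TV(f)."""
    f = u.copy()
    for _ in range(n_iter):
        f = f - h * (2.0 * apply_Kt(apply_K(f) - u) - lam * curvature(f))
    return f
```

Taking `apply_K` and `apply_Kt` to be the identity reduces this to ROF-type denoising: starting from a noisy image, the iterates' total variation decreases, which is precisely the behavior the paper argues over-smooths the solution of (6).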

In this paper, we propose an improvement of the algorithm (14), which permits us to obtain an exact solution of the deconvolution problem (6) when the solution exists, and the closest approximation to the original image when the exact solution to (6) does not exist.

Precisely, we shall search for a solution $f$ of the equation

$$K^*(Kf - u) = 0 \qquad (15)$$

with a small enough TV. Notice that the last equation characterizes the critical points of the minimization problem $\min_f \|Kf - u\|_2^2$, whose minimizer gives the closest approximation to the solution of the deconvolution problem (6), even when (6) does not have an exact solution. In the proposed algorithm, we do not search for the minimization of the TV.

Instead of this, we introduce a factor in the iteration process (14) to have a control on the evolution of the TV. Compared with the usual TV minimization approach, the new algorithm retains the image details with higher fidelity, as the TV of the restored image is closer to the TV of the original image.

Our experimental results confirm these claims.
