The TV-Stokes model is a two-step variational method for image denoising: the first step estimates a divergence-free tangent field using total variation regularization, and the second step reconstructs the image from this field. Although effective in practice, its mathematical structure and potential for parallelization have remained unexplored. In this work, we establish a rigorous functional-analytic foundation for the TV-Stokes model. We formulate both steps in appropriate infinite-dimensional function spaces, derive their dual formulations, and analyze the compatibility and mathematical consistency of the coupled system. In particular, we identify analytical inconsistencies in the original formulation and demonstrate how an alternative model resolves them. We also examine the orthogonal projection onto the divergence-free subspace, proving its existence in the continuous setting and establishing consistency with its discrete counterpart. Building on this theoretical framework, we develop the first domain decomposition method for the TV-Stokes model by applying overlapping Schwarz-type iterations to the dual formulations of both steps. Although the divergence-free constraint gives rise to a global projection operator in the continuous model, we show that it becomes locally computable in the discrete setting. This insight enables a fully parallelizable algorithm suitable for large-scale image processing in memory-constrained environments. Numerical experiments confirm the correctness of the domain decomposition approach and demonstrate its practicality for parallel image reconstruction.
Total variation (TV) minimization is a variational regularization technique, first introduced in [1], to address ill-posed inverse problems in image processing, owing to its ability to preserve discontinuities in the solution. Let $\Omega \subset \mathbb{R}^2$ be an open, bounded and simply connected domain with Lipschitz boundary, representing the image domain. For a vector $\vec{v} \in L^1(\Omega, \mathbb{R}^c)$, $c \in \mathbb{N}$, we define the total variation of $\vec{v}$ in $\Omega$ by
$$
TV(\vec{v}) := \sup\left\{ \int_\Omega \vec{v} \cdot \operatorname{div} \vec{p} \, dx \; : \; \vec{p} \in C_0^1(\Omega, \mathbb{R}^{2\times c}), \ |\vec{p}_i| \le 1 \text{ a.e. in } \Omega, \ i = 1, \dots, c \right\},
$$
where $|\vec{p}_i| \le 1$ holds almost everywhere (a.e.) in $\Omega$ for $i = 1, \dots, c$, $\operatorname{div} : C_0^1(\Omega, \mathbb{R}^{2\times c}) \to C_0^1(\Omega, \mathbb{R}^c)$ denotes the column-wise divergence, and $|\cdot|$ denotes the standard Euclidean vector norm. Here $D\vec{v}$ denotes the distributional gradient of $\vec{v}$, and $BV(\Omega, \mathbb{R}^c)$ is the space of all $L^1$-functions of bounded variation, i.e. $BV(\Omega, \mathbb{R}^c) := \{\vec{v} \in L^1(\Omega, \mathbb{R}^c) : TV(\vec{v}) < \infty\}$. Equipped with the norm $\|\vec{v}\|_{BV} := \|\vec{v}\|_{L^1} + TV(\vec{v})$, the space $BV(\Omega, \mathbb{R}^c)$ is a Banach space [2, Thm. 10.1.1]. For a short overview of different ways to define the total variation for vector-valued functions, we refer the reader to [3].
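On a pixel grid, this supremum definition reduces to the familiar sum of discrete gradient magnitudes. The following is a minimal sketch for a grayscale image ($c = 1$) using forward differences; the function name and boundary handling are our own choices, not taken from the paper.

```python
import numpy as np

def total_variation(v):
    """Isotropic discrete total variation of a 2-D (grayscale) array,
    using forward differences with replicated last row/column."""
    dx = np.diff(v, axis=1, append=v[:, -1:])  # horizontal differences
    dy = np.diff(v, axis=0, append=v[-1:, :])  # vertical differences
    return np.sqrt(dx**2 + dy**2).sum()

# Piecewise-constant image: TV = jump height x discrete edge length.
v = np.zeros((4, 4))
v[:, 2:] = 1.0
print(total_variation(v))  # a jump of 1 along 4 rows -> 4.0
```

For a piecewise-constant image the discrete TV measures exactly the total jump across edges, which is what makes TV regularization edge-preserving.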
While total variation is known to preserve discontinuities, it is also well known that reconstructions obtained by total variation minimization may suffer from the so-called staircase effect. In the context of image restoration, this effect generates blocky, non-natural structures in the solution [4]. To overcome this limitation and achieve more natural reconstructions, several higher-order regularization strategies have been proposed, such as the total generalized variation [5] and second-order approaches [6]. Another strategy is the TV-Stokes model [7], on which we focus in this work. This model is a two-step variational method designed to mitigate the staircase effect while preserving edges in image denoising. In the first step, a divergence-free tangent field $\vec{\tau}$ is computed by solving
$$
\min_{\substack{\vec{\tau} \in BV(\Omega,\mathbb{R}^2) \cap L^2(\Omega,\mathbb{R}^2) \\ \operatorname{div} \vec{\tau} = 0}} \int_\Omega |D\vec{\tau}| + \frac{\delta}{2} \int_\Omega |\vec{\tau} - \vec{\tau}_0|^2 \, dx, \tag{1}
$$
where $\vec{\tau}_0 \in L^2(\Omega, \mathbb{R}^2)$ is the given tangent field of an observed image $d_0$, and $\delta > 0$ weights the relative importance of the two terms. In the second step, the reconstructed image is recovered from the obtained divergence-free field $\vec{\tau}$ by solving
$$
\min_{d \in BV(\Omega,\mathbb{R}) \cap L^2(\Omega,\mathbb{R})} \int_\Omega |Dd| - \int_\Omega \frac{\vec{\tau}^\perp}{|\vec{\tau}^\perp|} \cdot Dd \quad \text{subject to} \quad \|d - d_0\|_{L^2(\Omega)}^2 \le \sigma^2 |\Omega|, \tag{2}
$$
where $\sigma > 0$ denotes the standard deviation of the noise present in the observed image $d_0$.
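In the TV-Stokes setting, the tangent field of an image is its gradient rotated by 90 degrees, so $\vec{\tau}_0$ is divergence-free by construction (up to discretization). The sketch below illustrates this with our own choice of periodic forward differences, under which the discrete mixed derivatives commute exactly; all names are ours, not the paper's.

```python
import numpy as np

def tangent_field(d0):
    """Tangent field tau0 = (-d_y, d_x): the image gradient rotated by
    90 degrees, computed with periodic forward differences."""
    dx = np.roll(d0, -1, axis=1) - d0
    dy = np.roll(d0, -1, axis=0) - d0
    return np.stack([-dy, dx])

def divergence(tau):
    """Matching periodic forward-difference divergence."""
    tx, ty = tau
    return (np.roll(tx, -1, axis=1) - tx) + (np.roll(ty, -1, axis=0) - ty)

rng = np.random.default_rng(0)
d0 = rng.standard_normal((8, 8))
tau0 = tangent_field(d0)
# Mixed discrete derivatives commute, so div(tau0) vanishes up to rounding.
print(np.abs(divergence(tau0)).max())
```

This is why the constraint $\operatorname{div} \vec{\tau} = 0$ in the first step is consistent with the data $\vec{\tau}_0$: the minimization is restricted to the subspace in which the noise-free tangent field already lives.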
In [7], the minimization problems (1) and (2) are solved numerically using an explicit time-marching scheme. To improve computational efficiency, dual formulations of these problems were later proposed in [8], resulting in an iterative algorithm based on a variant of Chambolle's projection method [9]. While the dual approach significantly accelerates convergence and performs well for small- to medium-scale images, its applicability to large-scale problems is limited. This is primarily due to the iterative nature of the algorithms required to solve both steps of the TV-Stokes model, which leads to high computational costs when applied to high-resolution data. It should be noted that the dual formulations in [8] were presented without specifying the underlying function spaces. To address this gap, we include a careful derivation of the dual problem with explicit function space considerations in this paper.
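To give a concrete picture of the dual approach, the sketch below implements Chambolle's projection iteration [9] for the scalar ROF problem $\min_d TV(d) + \frac{1}{2\lambda}\|d - d_0\|^2$, the basic building block that the schemes in [8] adapt to the two TV-Stokes steps. All function and parameter names are ours, and the step size $\tau = 1/8$ follows Chambolle's convergence condition.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with homogeneous Neumann boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad
    (the iteration below keeps the last column of px / row of py at zero)."""
    dx = px.copy()
    dx[:, 1:] -= px[:, :-1]
    dy = py.copy()
    dy[1:, :] -= py[:-1, :]
    return dx + dy

def chambolle_rof(d0, lam, tau=0.125, iters=200):
    """Chambolle's fixed-point iteration on the dual variable p;
    the primal solution is recovered as d = d0 - lam * div(p)."""
    px = np.zeros_like(d0)
    py = np.zeros_like(d0)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - d0 / lam)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return d0 - lam * div(px, py)

# Denoise a noisy square; the result has markedly smaller total variation.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = chambolle_rof(noisy, lam=0.3)
```

Running such global iterations over the full image is precisely what limits scalability, which motivates replacing the global solve by coupled subdomain solves.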
To overcome the scalability limitations of these existing solvers, particularly in the context of high-resolution or large-scale imaging problems, it is natural to consider parallelization strategies. Domain decomposition methods provide a principled framework for this purpose, allowing the global problem to be reformulated as a collection of coupled subproblems defined on overlapping or non-overlapping subdomains.
One of the central challenges in developing domain decomposition algorithms for TV minimization stems from the intrinsic properties of the TV functional: it is both non-differentiable and lacks additivity over disjoint domain partitions. More precisely, if the domain $\Omega$ is split into disjoint subdomains $\Omega_1$ and $\Omega_2$ with common interface $\Gamma := \partial\Omega_1 \cap \partial\Omega_2$, then the total variation of a function $d$ over $\Omega$ satisfies the decomposition formula (cf. [10, Theorem 3.84])
$$
TV(d, \Omega) = TV(d, \Omega_1) + TV(d, \Omega_2) + \int_{\Gamma} |d^+ - d^-| \, d\mathcal{H}^1,
$$
where $\mathcal{H}^1$ denotes the one-dimensional Hausdorff measure, and $d^+$ and $d^-$ denote the traces of $d$ on the interface from the interior and exterior, respectively. This interface term captures the magnitude of the jumps across subdomains, highlighting the fact that preserving continuity, or controlled discontinuities, across interfaces is essential when designing effective decomposition methods.
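A one-dimensional toy example makes the interface term tangible: splitting a discrete signal into two blocks loses exactly the jump across the cut, and the Hausdorff-measure term restores it. The example below is our own illustration, not taken from the paper.

```python
import numpy as np

def tv1d(d):
    """Total variation of a 1-D signal: the sum of absolute jumps."""
    return np.abs(np.diff(d)).sum()

d = np.array([0.0, 1.0, 1.0, 3.0, 2.0, 2.0])
k = 3  # cut the domain between samples k-1 and k

whole = tv1d(d)                    # TV over the full domain
parts = tv1d(d[:k]) + tv1d(d[k:])  # TV over the two subdomains
interface = abs(d[k] - d[k - 1])   # |d+ - d-| across the cut

print(whole, parts + interface)  # 4.0 4.0 -- both sides of the formula agree
```

Dropping the interface term would let a decomposition method introduce arbitrary jumps at subdomain boundaries, which is precisely the failure mode effective decomposition methods must avoid.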
It is important to note that for widely used methods such as those in [11-14], the question of convergence to a global minimizer remains open when applied to non-smooth and non-additive problems, as a general convergence theory is still lacking. Nonetheless, in [15] and [16], subspace correction techniques from [13,14] have been successfully applied to smoothed approximations of total variation minimization problems.
The first domain decomposition techniques tailored to the minimization of TV appeared in [17-21], where convergence of the energy and monotonicity properties were proved. However,