Near-optimal compressed sensing guarantees for total variation minimization

Consider the problem of reconstructing a multidimensional signal from an underdetermined set of measurements, as in the setting of compressed sensing. Without any additional assumptions, this problem is ill-posed. However, for signals such as natural images or movies, the minimal total variation estimate consistent with the measurements often produces a good approximation to the underlying signal, even if the number of measurements is far smaller than the ambient dimensionality. This paper extends recent reconstruction guarantees for two-dimensional images to signals of arbitrary dimension d > 1 and to isotropic total variation problems. To be precise, we show that a multidimensional signal x can be reconstructed from O(s·d·log(N^d)) linear measurements using total variation minimization, to within a factor of the best s-term approximation error of its gradient. The reconstruction guarantees we provide are optimal up to polynomial factors in the spatial dimension d.


💡 Research Summary

The paper addresses the fundamental question of how many linear measurements are required to reliably recover a high‑dimensional signal when the reconstruction is performed via total variation (TV) minimization. While compressed sensing theory provides sharp guarantees for ℓ₁‑based sparse recovery, many natural signals—images, videos, medical volumes—are not sparse in the pixel domain but have sparse gradients. For two‑dimensional images, recent work has shown that TV minimization can recover such signals from O(s log N) measurements, where s is the number of non‑zero gradient entries. This manuscript extends those results to arbitrary dimension d > 1 and to both anisotropic and isotropic TV formulations.

The authors first formalize the d‑dimensional discrete gradient operator D, which maps a signal x ∈ ℝᴺ (N = N₁·…·N_d) to a tensor of forward differences along each axis. The isotropic TV norm is defined as the sum of ℓ₂‑norms of the gradient vectors at each voxel, while the anisotropic TV norm sums the absolute values of each directional difference. The reconstruction problem is then posed as

  \hat{x} = arg min_z ‖D z‖₁ subject to ‖A z − y‖₂ ≤ ε,

where A ∈ ℝ^{m×N} is a random measurement matrix (sub‑Gaussian or sub‑exponential entries) and y = A x + e is the noisy measurement vector, with noise level ‖e‖₂ ≤ ε.
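As a concrete illustration of these definitions, the forward-difference gradient operator and the two TV norms can be computed directly. This is a minimal numpy sketch; the function names and the zero-padding convention at the trailing boundary are our own choices, not the paper's.

```python
import numpy as np

def gradient(x):
    """Forward differences of a d-dimensional array along each axis.

    Returns a list of d arrays; each is zero-padded at the trailing
    boundary so every component has the same shape as x.
    """
    grads = []
    for axis in range(x.ndim):
        g = np.zeros_like(x, dtype=float)
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(0, x.shape[axis] - 1)
        g[tuple(sl)] = np.diff(x, axis=axis)
        grads.append(g)
    return grads

def tv_anisotropic(x):
    """Sum of absolute values of all directional differences."""
    return sum(np.abs(g).sum() for g in gradient(x))

def tv_isotropic(x):
    """Sum over voxels of the l2 norm of the local gradient vector."""
    stacked = np.stack(gradient(x))      # shape (d, *x.shape)
    return np.sqrt((stacked ** 2).sum(axis=0)).sum()

# A piecewise-constant 2-D image has a sparse gradient:
img = np.zeros((4, 4))
img[:, 2:] = 1.0                         # one vertical edge, 4 unit jumps
print(tv_anisotropic(img))               # -> 4.0
print(tv_isotropic(img))                 # -> 4.0 (gradient is axis-aligned)
```

For this axis-aligned edge the two norms coincide; they differ whenever a voxel has nonzero differences along more than one axis.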

A central contribution is the introduction of a “TV‑RIP” (Restricted Isometry Property for Total Variation). The authors prove that if the number of measurements satisfies

  m ≥ C · s · d · log(N),

with C an absolute constant, then with high probability A obeys the TV‑RIP of order s with a small isometry constant δ. The proof leverages Gaussian width calculations and a chaining argument adapted to the geometry of the gradient sparsity model. This result shows that the measurement complexity scales only linearly with the spatial dimension d, a crucial insight for 3‑D/4‑D imaging applications.
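To make the scaling concrete, the bound can be evaluated numerically; the constant C = 2 below is an arbitrary placeholder (the paper's constant is unspecified here), and the Gaussian ensemble is one example of a sub-Gaussian measurement matrix.

```python
import math
import numpy as np

def num_measurements(s, d, N, C=2):
    """m >= C * s * d * log(N), where N = N_1 * ... * N_d is the
    total ambient dimension.  C = 2 is a placeholder constant."""
    return math.ceil(C * s * d * math.log(N))

# The measurement budget grows linearly in the spatial dimension d:
for d in (2, 3, 4):
    print(d, num_measurements(s=5, d=d, N=64 ** d))

# A sub-Gaussian measurement ensemble: i.i.d. Gaussian entries,
# normalized so that E[||A z||^2] = ||z||^2 for any fixed z.
rng = np.random.default_rng(0)
m, N = num_measurements(5, 2, 64 ** 2), 64 ** 2
A = rng.standard_normal((m, N)) / math.sqrt(m)
```

Note that m stays tiny compared with N = 64^d: the budget depends on N only logarithmically.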

Using TV‑RIP, the paper derives a deterministic error bound for the TV‑minimization solution:

  ‖x − \hat{x}‖₂ ≤ C₁ · ‖D x − (D x)_s‖₁ / √s + C₂ · ε,

where (D x)_s denotes the best s‑term approximation of the gradient (i.e., keep the s largest entries), ε is the ℓ₂‑norm of measurement noise, and C₁, C₂ depend only on δ. This inequality mirrors the classic ℓ₁‑sparse recovery bound but replaces the signal sparsity term with the sparsity of its gradient, directly reflecting the piecewise‑smooth nature of many real‑world signals.
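The compressibility term in this bound can be evaluated directly for a concrete signal. The sketch below (our own helper name, 1-D case, synthetic signal) shows that for a piecewise-constant signal with small additive noise the tail ‖Dx − (Dx)_s‖₁/√s drops sharply once s reaches the number of jumps.

```python
import numpy as np

def gradient_tail(x, s):
    """||Dx - (Dx)_s||_1 / sqrt(s): l1 mass of the gradient outside
    its s largest-magnitude entries, scaled as in the error bound."""
    dx = np.diff(x)                      # 1-D forward differences
    mags = np.sort(np.abs(dx))[::-1]     # magnitudes, descending
    return mags[s:].sum() / np.sqrt(s)

rng = np.random.default_rng(1)
# Piecewise-constant signal (3 jumps) plus small noise: its gradient
# is compressible -- 3 large entries, the rest near zero.
x = np.repeat([0.0, 1.0, -0.5, 2.0], 64) + 0.01 * rng.standard_normal(256)
for s in (1, 3, 10, 50):
    print(s, gradient_tail(x, s))
```

With s at or above the jump count, only the small noise-induced differences remain in the tail, so the bound predicts accurate recovery from few measurements.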

The authors also argue optimality: an information‑theoretic counting argument shows that any algorithm recovering signals with s‑sparse gradients in d dimensions must use at least Ω(s d log N) measurements. Hence the proposed O(s d log N) bound is near‑optimal, matching the lower bound up to factors polynomial in d, and the logarithmic dependence on the ambient dimension is unavoidable.

Experimental validation is performed on synthetic d‑dimensional tensors, 3‑D computed tomography volumes, and 4‑D video sequences. Random Gaussian measurements are taken, and TV‑minimization is solved via a primal‑dual algorithm. The reconstructions achieve PSNR and SSIM values comparable to those obtained with the full set of measurements, confirming that the theoretical sample complexity accurately predicts practical performance. Moreover, isotropic TV yields smoother edge preservation, whereas anisotropic TV better captures sharp directional features.
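The experiments above rely on a primal-dual solver; as a self-contained toy illustration, the noiseless anisotropic TV program min ‖Dz‖₁ s.t. Az = y can equivalently be posed as a linear program in one dimension. This reformulation and the scipy-based sketch are our own, not the paper's method, and the problem sizes are deliberately tiny.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m = 50, 30                            # ambient dim, measurement count

# Piecewise-constant signal: its gradient has s = 2 nonzero entries.
x = np.concatenate([np.zeros(20), np.full(15, 1.0), np.full(15, -0.5)])

A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian measurements
y = A @ x

# D: (N-1) x N forward-difference matrix, (Dz)_i = z_{i+1} - z_i.
D = np.eye(N - 1, N, k=1) - np.eye(N - 1, N)

# LP in variables (z, t): minimize sum(t) s.t. -t <= Dz <= t, Az = y,
# so at the optimum t = |Dz| entrywise and sum(t) = ||Dz||_1.
c = np.concatenate([np.zeros(N), np.ones(N - 1)])
A_ub = np.block([[D, -np.eye(N - 1)], [-D, -np.eye(N - 1)]])
b_ub = np.zeros(2 * (N - 1))
A_eq = np.hstack([A, np.zeros((m, N - 1))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=(None, None), method="highs")
x_hat = res.x[:N]
# Relative error; near zero when TV minimization recovers x exactly.
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Here m = 30 comfortably exceeds the O(s log N) requirement for s = 2 jumps, so exact recovery is expected with high probability over the draw of A.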

In conclusion, the paper delivers the first near‑optimal, dimension‑agnostic recovery guarantees for total variation minimization in compressed sensing. It establishes that a multidimensional signal with an s‑sparse gradient can be reconstructed from O(s d log N) linear measurements, with an error proportional to the best s‑term gradient approximation. This result bridges a gap between theory and practice for high‑dimensional imaging, providing a solid foundation for future work on structured measurement operators, noise models, and hybrid approaches that combine TV regularization with learned priors.