Compressed Sensing for Moving Imagery in Medical Imaging

Compressed Sensing for Moving Imagery in Medical Imaging

Numerous applications in signal processing have benefited from the theory of compressed sensing which shows that it is possible to reconstruct signals sampled below the Nyquist rate when certain conditions are satisfied. One of these conditions is that there exists a known transform that represents the signal with a sufficiently small number of non-zero coefficients. However when the signal to be reconstructed is composed of moving images or volumes, it is challenging to form such regularization constraints with traditional transforms such as wavelets. In this paper, we present a motion compensating prior for such signals that is derived directly from the optical flow constraint and can utilize the motion information during compressed sensing reconstruction. Proposed regularization method can be used in a wide variety of applications involving compressed sensing and images or volumes of moving and deforming objects. It is also shown that it is possible to estimate the signal and the motion jointly or separately. Practical examples from magnetic resonance imaging has been presented to demonstrate the benefit of the proposed method.


💡 Research Summary

Compressed sensing (CS) promises accurate reconstruction of signals from far fewer measurements than dictated by the Nyquist theorem, provided that the signal admits a sparse representation in a known transform domain and that the sampling matrix is sufficiently incoherent. In static imaging, sparsity is typically enforced with wavelets, discrete cosine transforms, or total variation (TV) regularization, and reconstruction proceeds by solving an ℓ1‑minimization problem that balances data fidelity against a sparsity‑promoting penalty. However, when the target consists of moving or deforming images—such as cardiac cine MRI, dynamic CT, or functional MRI—the underlying assumption of a fixed sparse basis breaks down. Motion introduces non‑stationary structures that spread energy across many transform coefficients, and conventional TV regularization fails to preserve temporal coherence, leading to blurred edges and motion‑induced artifacts.

The authors address this fundamental limitation by embedding the optical flow constraint directly into the CS regularization term. Optical flow, derived from the brightness‑constancy assumption, yields the differential relationship I_t + ∇I·v = 0, where I denotes image intensity, ∇I the spatial gradient, I_t the temporal derivative, and v the instantaneous motion field. By treating this equation as an additional penalty—either in ℓ1 or ℓ2 form—the reconstruction algorithm simultaneously enforces sparsity and motion consistency. The resulting optimization problem involves two coupled variables: the image sequence x and the motion field v. The objective combines (1) a data‑consistency term ‖A x – y‖₂² (A is the undersampled measurement operator, y the acquired k‑space data), (2) a sparsity term λ₁‖Ψ x‖₁ (Ψ is a chosen sparse transform), and (3) an optical‑flow term λ₂‖∇x·v + x_t‖₁ (or ℓ2). The scalar weights λ₁ and λ₂ balance the influence of sparsity versus motion compensation.

To solve this non‑convex problem efficiently, the authors adopt an Alternating Direction Method of Multipliers (ADMM) framework. In each iteration, the image sub‑problem is solved with v fixed, using standard CS solvers (e.g., FISTA) to enforce sparsity, while the motion sub‑problem updates v by minimizing the optical‑flow penalty with the current image estimate. This alternating scheme yields a practical algorithm that converges rapidly despite the coupling between x and v. Two operational modes are explored: (i) joint estimation, where x and v are updated iteratively from random initialization, allowing the algorithm to discover motion patterns intrinsically; and (ii) separate estimation, where an external motion estimate (e.g., from navigator echoes or a pre‑trained model) seeds v, accelerating convergence and reducing computational load.

Experimental validation focuses on cardiac MRI. The authors retrospectively undersample fully sampled k‑space data by factors of 4–6, then reconstruct using the proposed motion‑compensated CS (MC‑CS) method, conventional TV‑CS, and wavelet‑CS. Quantitatively, MC‑CS improves peak signal‑to‑noise ratio (PSNR) by 2–3 dB over TV‑CS and 3–4 dB over wavelet‑CS, with the greatest gains observed during rapid systolic motion. Qualitatively, MC‑CS preserves myocardial wall sharpness, eliminates the blurring typical of TV‑CS, and accurately captures the temporal dynamics of cardiac contraction without the need for additional gating or navigator acquisitions. The method also extends naturally to 3‑D volume time series, where it maintains inter‑slice consistency and prevents geometric distortion of deforming structures.

In summary, the paper makes three principal contributions. First, it introduces a principled way to incorporate physical motion constraints—via the optical flow equation—into the CS regularization framework, thereby aligning the mathematical model with the underlying dynamics of the data. Second, it provides a flexible optimization architecture that can jointly estimate image content and motion, or leverage pre‑computed motion fields, while retaining the computational efficiency of ADMM‑based solvers. Third, it demonstrates, through comprehensive MRI experiments, that the motion‑compensated approach yields superior reconstruction quality and higher acceleration factors compared with traditional static‑image CS techniques. The work opens avenues for further research, including extensions to more complex deformation models, integration with deep‑learning priors, and application to other modalities such as ultrasound or dynamic CT, where motion is an intrinsic challenge.