Automatic regularization parameter choice for tomography using a double model approach

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Image reconstruction in X-ray tomography is an ill-posed inverse problem, particularly with limited available data. Regularization is thus essential, but its effectiveness hinges on the choice of a regularization parameter that balances data fidelity against a priori information. We present a novel method for automatic parameter selection based on the use of two distinct computational discretizations of the same problem. A feedback control algorithm dynamically adjusts the regularization strength, driving an iterative reconstruction toward the smallest parameter that yields sufficient similarity between reconstructions on the two grids. The effectiveness of the proposed approach is demonstrated using real tomographic data.


💡 Research Summary

The paper addresses the long‑standing problem of automatically selecting the regularization parameter α in X‑ray computed tomography (CT) reconstruction, especially when data are sparse or noisy. Instead of relying on traditional heuristics such as the L‑curve, discrepancy principle, or data‑driven learning, the authors cast the task as a closed‑loop control problem. The key idea is to solve the same inverse problem on two geometrically distinct discretizations (a primary grid A and a rotated version Aθ) using the same sinogram. Because discretization errors differ between the two grids, under‑regularized reconstructions (small α) produce markedly different images, yielding a low structural similarity index (SSIM). As α increases, noise and grid‑specific artifacts are suppressed, and the two reconstructions converge toward a common, grid‑independent solution, causing SSIM to rise monotonically.
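The feedback signal above is the structural similarity between the two reconstructions. As an illustration only, here is a simplified single-window SSIM over flattened images in pure Python; the paper's actual metric is the standard windowed SSIM, and the constants (K₁ = 0.01, K₂ = 0.03) follow the common SSIM convention rather than anything stated in the summary:

```python
import math

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM between two flattened images.

    A stand-in for the windowed SSIM used as the feedback signal:
    compares means, variances, and covariance of the two images.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs give a similarity of 1; under-regularized reconstructions on the two grids, differing in noise and grid-specific artifacts, score lower.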

A proportional controller operates in the logarithmic domain of α:
log10(α_{k+1}) = log10(α_k) + K_p·(S_ref – S_k),
where S_k is the SSIM measured at iteration k, S_ref is a user‑defined target similarity, and K_p (set to 0.5) determines the step size. The algorithm terminates when the absolute error |S_k – S_ref| stays below ε = 0.05 for N = 5 consecutive iterations, preventing premature stopping due to transient SSIM fluctuations.
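The update rule and stopping criterion above can be sketched as follows. The SSIM evaluation (which in the paper requires reconstructing on both grids at each α) is abstracted into a callable `ssim_of_alpha`, a hypothetical placeholder for that expensive step:

```python
import math

def update_alpha(alpha, s_k, s_ref=0.95, k_p=0.5):
    """One proportional-control step in the log10 domain of alpha.

    If the measured similarity s_k is below the target s_ref, alpha is
    increased (more regularization); if above, alpha is decreased.
    """
    return 10 ** (math.log10(alpha) + k_p * (s_ref - s_k))

def run_controller(ssim_of_alpha, alpha0, s_ref=0.95, k_p=0.5,
                   eps=0.05, n_stable=5, max_iter=200):
    """Iterate until |S_k - s_ref| < eps for n_stable consecutive steps.

    The consecutive-step requirement guards against stopping on a
    transient SSIM fluctuation, as described in the summary.
    """
    alpha, stable = alpha0, 0
    for _ in range(max_iter):
        s_k = ssim_of_alpha(alpha)
        stable = stable + 1 if abs(s_k - s_ref) < eps else 0
        if stable >= n_stable:
            break
        alpha = update_alpha(alpha, s_k, s_ref, k_p)
    return alpha, s_k
```

With any surrogate in which SSIM rises monotonically in log₁₀ α (the monotonicity the paper relies on), the controller climbs from a small α₀ until the similarity settles inside the ±ε band around S_ref.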

Experiments were conducted on two publicly available 2‑D slices (Walnut and Pine Cone) extracted from 3‑D cone‑beam scans, using both total variation (TV) and Tikhonov (L2) regularizers. The controller was initialized with α₀ = 10⁻⁶ and a random rotation angle θ ∈ (10°, 20°). For the Walnut/TV case, the controller increased α from 10⁻¹⁰ to approximately 2.3 × 10⁻⁶, achieving the target SSIM of 0.95 and preserving sharp internal boundaries. When α was deliberately set much higher (≈2.3 × 10⁻⁴), SSIM rose to >0.99 but fine structural detail vanished, illustrating that maximal similarity does not guarantee optimal reconstruction. Similar behavior was observed for Walnut/Tikhonov, Pine Cone/TV, and Pine Cone/Tikhonov, demonstrating that the method is robust to different regularizers and to more complex, high‑frequency structures.

A comparative study with the L‑curve and discrepancy principle (both applied as one‑shot, open‑loop selections) showed that those heuristics tend to pick larger α values, yielding very smooth images with SSIM >0.98 but substantial loss of detail (low gradient energy). In contrast, the proposed controller deliberately stops at a lower SSIM (e.g., 0.95), balancing noise suppression against detail preservation. Plotting SSIM versus a proxy for image detail (∥∇x∥₂) reveals a Pareto‑like curve: the classical methods sit in a high‑consistency, low‑detail region, while the controller operates near the knee, where a modest SSIM reduction yields a large gain in recovered detail.
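The detail proxy ∥∇x∥₂ used in the Pareto plot can be computed, for example, as the L2 norm of forward-difference gradients; the summary does not specify the exact discretization, so the forward-difference choice here is an assumption:

```python
import math

def gradient_energy(img):
    """Detail proxy ||grad x||_2 on a 2-D image given as nested lists.

    Sums squared forward differences along rows and columns and takes
    the square root; smooth (over-regularized) images score low, while
    images with preserved edges and fine structure score high.
    """
    rows, cols = len(img), len(img[0])
    total = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:  # vertical forward difference
                total += (img[i + 1][j] - img[i][j]) ** 2
            if j + 1 < cols:  # horizontal forward difference
                total += (img[i][j + 1] - img[i][j]) ** 2
    return math.sqrt(total)
```

Plotting this quantity against SSIM for a sweep of α values would trace the Pareto-like trade-off curve described above.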

The authors conclude that discretization‑induced inconsistencies, traditionally viewed as numerical error, can be turned into a valuable internal sensing signal. By letting the user specify a target consistency level, the method provides transparent, task‑specific regularization without requiring explicit noise estimates or ground truth. Limitations include the reliance on SSIM as a proxy for quality (which may not capture all perceptual aspects) and the assumption of monotonicity, which could break for certain rotation angles or extreme noise levels. Future work is suggested on formalizing monotonicity conditions, extending to severely ill‑posed scenarios (limited‑angle, metal artifacts), and exploring richer feedback metrics or more sophisticated controllers (PID, adaptive gains). Overall, the study demonstrates that feedback control offers a natural and effective paradigm for managing the stability‑detail trade‑off in large‑scale inverse problems.

