Visible Singularities Guided Correlation Network for Limited-Angle CT Reconstruction


Limited-angle computed tomography (LACT) offers reduced radiation dose and shortened scanning time, but traditional reconstruction algorithms exhibit inherent limitations in this setting. Most current deep learning-based LACT reconstruction methods focus on multi-domain fusion or the introduction of generic priors, and fail to fully align with the core imaging characteristics of LACT, such as the directionality of artifacts and the directional loss of structural information caused by the absence of projection angles in certain directions. Inspired by the theory of visible and invisible singularities, and taking these characteristics into account, we propose a Visible Singularities Guided Correlation network for LACT reconstruction (VSGC). The design of VSGC consists of two core steps: first, extract visible-singularity (VS) edge features from LACT images and focus the model's attention on them; second, establish correlations between the VS edge features and other regions of the image. Additionally, a multi-scale loss function with an anisotropic constraint guides the model toward convergence from multiple perspectives. Qualitative and quantitative validations on both simulated and real datasets verify the effectiveness and feasibility of the proposed design. In particular, compared with alternative methods, VSGC delivers markedly better performance at small angular ranges, with a PSNR improvement of 2.45 dB and an SSIM enhancement of 1.5%. The code is publicly available at https://github.com/yqx7150/VSGC.


💡 Research Summary

The paper addresses the challenging problem of limited‑angle computed tomography (LACT), where missing projection views lead to direction‑dependent artifacts and loss of structural information. Classical reconstruction methods (FBP, TV‑based iterative schemes) either produce severe streaking artifacts or suffer from over‑smoothing and high computational cost. Recent deep‑learning approaches have focused on multi‑domain fusion or generic priors, but they do not explicitly exploit the intrinsic physics of LACT: edges to which some measured ray is tangent (visible singularities, VS) are well‑preserved, while edges tangent only to the missing rays (invisible singularities, IVS) become blurred.
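The VS/IVS dichotomy above reduces to a one-line angular criterion: a singularity is visible exactly when its normal direction (modulo 180°) falls inside the measured angular wedge. A minimal sketch of this check, not taken from the paper (the exact convention, normal vs. tangent direction, depends on scanner geometry):

```python
import numpy as np

def is_visible(edge_normal_deg, scan_start_deg, scan_end_deg):
    """Return True when an edge singularity is 'visible': its normal
    direction (mod 180 degrees) lies inside the measured angular wedge.
    Handles wedges that wrap around the 180-degree boundary."""
    theta = edge_normal_deg % 180.0
    lo, hi = scan_start_deg % 180.0, scan_end_deg % 180.0
    if lo <= hi:
        return lo <= theta <= hi
    return theta >= lo or theta <= hi

# A vertical edge (normal at 0 deg) is visible in a [0, 30] deg scan,
# while a horizontal edge (normal at 90 deg) is invisible:
print(is_visible(0, 0, 30), is_visible(90, 0, 30))  # True False
```

Under this criterion, enlarging the scanned wedge monotonically grows the set of visible edge orientations, which is why small angular ranges are the hardest regime for LACT.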

Inspired by microlocal analysis and the VS/IVS theory, the authors propose the Visible Singularities Guided Correlation network (VSGC). VSGC consists of two sequential modules: (1) Visible Singularities Wavelet Dense (VSWD) for extracting and enhancing VS edge features, and (2) Universal Multi‑scale Cross‑region Correlation Attention (UMCA) for propagating those features to the rest of the image.

In VSWD, the input limited‑angle reconstruction L_α f is first combined with Sobel‑derived x‑ and y‑direction edge maps to form an edge‑enhanced feature map F_e. A 2‑D discrete wavelet transform decomposes F_e into a low‑frequency subband (LL, dominated by IVS) and three high‑frequency subbands (LH, HL, HH, dominated by VS). Adaptive convolution kernels are applied to each subband, and the inverse wavelet transform reconstructs a refined feature map that strongly emphasizes VS high‑frequency components while preserving global context. A hierarchical encoder‑decoder further aggregates VS information across scales.
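The VSWD pipeline can be sketched with a single-level Haar transform in plain NumPy. This is an illustrative stand-in, not the paper's implementation: a fixed `hf_gain` factor replaces the learned adaptive subband kernels, a simple additive edge fusion replaces the network's feature combination, and the hierarchical encoder‑decoder is omitted.

```python
import numpy as np

def sobel_edges(img):
    """x- and y-direction Sobel gradients via padded shifts (no SciPy)."""
    p = np.pad(img, 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return gx, gy

def haar_dwt2(x):
    """Single-level 2-D Haar decomposition (even-sized input assumed)."""
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    ll = (p + q + r + s) / 4          # low-frequency approximation
    lh = (p - q + r - s) / 4          # horizontal detail
    hl = (p + q - r - s) / 4          # vertical detail
    hh = (p - q - r + s) / 4          # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    x = np.zeros((2 * h, 2 * w))
    x[0::2, 0::2] = ll + lh + hl + hh
    x[0::2, 1::2] = ll - lh + hl - hh
    x[1::2, 0::2] = ll + lh - hl - hh
    x[1::2, 1::2] = ll - lh - hl + hh
    return x

def vswd_sketch(recon, hf_gain=2.0):
    """Edge-enhance, decompose, amplify VS-dominated high-frequency
    subbands (stand-in for the learned adaptive kernels), reconstruct."""
    gx, gy = sobel_edges(recon)
    fe = recon + 0.1 * (np.abs(gx) + np.abs(gy))
    ll, lh, hl, hh = haar_dwt2(fe)
    return haar_idwt2(ll, hf_gain * lh, hf_gain * hl, hf_gain * hh)
```

The Haar pair is perfectly reconstructing, so with `hf_gain=1.0` the sketch returns the edge-enhanced map unchanged; larger gains selectively boost the VS-dominated LH/HL/HH subbands.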

UMCA introduces a Multi‑scale Cross‑region Correlation Attention (MCA) mechanism. MCA computes a self‑attention matrix between the VS feature map and all spatial locations, then retains only the top‑k scores per row to focus on the most significant global correlations. Parallel convolutional pathways capture local, fine‑grained relationships within image patches. This combination enables the network to transfer the well‑preserved VS structures to the poorly reconstructed IVS regions, effectively reducing streaking artifacts and restoring missing details.
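The top‑k pruning at the heart of MCA can be sketched as a row-sparse attention. `topk_attention` below is a hypothetical single-head, NumPy-only stand-in; the paper's multi-scale structure and parallel convolutional pathways are omitted.

```python
import numpy as np

def topk_attention(q, k, v, topk):
    """Self-attention that keeps only the top-k scores per query row,
    masking the rest to -inf before the softmax, so each output position
    attends to just its k strongest global correlations."""
    scores = q @ k.T / np.sqrt(q.shape[1])            # (N, N) similarities
    idx = np.argpartition(scores, -topk, axis=1)[:, -topk:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=1)         # unmask top-k entries
    masked = scores + mask
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # sparse attention weights
    return w @ v
```

With `topk` equal to the sequence length this reduces to ordinary softmax attention; with `topk=1` each row simply copies the value of its single best match, which makes the pruning behavior easy to verify.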

Training is guided by a novel multi‑scale loss comprising four components: (i) an anisotropic weighted term that assigns higher penalties to regions severely affected by limited‑angle geometry, (ii) an SSIM term to preserve structural similarity, (iii) an edge‑gradient term to sharpen boundaries, and (iv) a perceptual term (VGG‑based) to avoid over‑smoothing. The loss operates at multiple resolutions, encouraging convergence from both global and local perspectives.
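The multi-resolution structure of the loss can be sketched as follows. This is an assumption-laden simplification, not the paper's objective: only the anisotropically weighted term (i) and the edge-gradient term (iii) are shown, the SSIM and VGG perceptual terms are omitted, and the weight map `aniso_w` and the scale count are hypothetical.

```python
import numpy as np

def down2(x):
    """2x average pooling (even-sized input assumed)."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def grad_mag(x):
    """Gradient magnitude via central differences."""
    gy, gx = np.gradient(x)
    return np.hypot(gx, gy)

def multiscale_loss(pred, target, aniso_w, scales=3, lam_edge=0.1):
    """Anisotropically weighted L1 plus an edge-gradient term, summed over
    several resolutions; aniso_w up-weights the IVS-degraded regions."""
    loss = 0.0
    for _ in range(scales):
        loss += np.mean(aniso_w * np.abs(pred - target))
        loss += lam_edge * np.mean(np.abs(grad_mag(pred) - grad_mag(target)))
        pred, target, aniso_w = down2(pred), down2(target), down2(aniso_w)
    return loss
```

Evaluating the same terms at coarser scales penalizes large-scale streak artifacts that a single-resolution pixel loss weights only weakly.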

Extensive experiments were conducted on simulated phantoms (angular ranges 10°, 20°, 30°) and real clinical datasets (breast tomosynthesis, C‑arm neuro‑imaging). VSGC was benchmarked against FBP, anisotropic TV, directional TV, AEDS, and several state‑of‑the‑art deep models (DIOR, IRON, MIST‑Net, MSDDRNet, SIAR‑GAN, etc.). Quantitatively, VSGC achieved up to 2.45 dB higher PSNR and 1.5% higher SSIM in the most challenging small-angular-range scenarios. Qualitatively, visual results showed markedly reduced streaks and clearer reconstruction of fine vascular and bone structures.

The authors release the source code and pretrained weights, facilitating reproducibility and further research. They also discuss the broader applicability of the VS‑guided design to other inverse problems with incomplete data, such as low‑dose imaging or accelerated MRI. In summary, the paper provides a theoretically grounded, practically effective deep‑learning framework that directly leverages the physics of visible singularities to overcome the inherent limitations of limited‑angle CT reconstruction.

