Continuity-driven Synergistic Diffusion with Neural Priors for Ultra-Sparse-View CBCT Reconstruction
The clinical application of cone-beam computed tomography (CBCT) is constrained by the inherent trade-off between radiation exposure and image quality. Ultra-sparse angular sampling, employed to reduce dose, introduces severe undersampling artifacts and inter-slice inconsistencies, compromising diagnostic reliability. Existing reconstruction methods often struggle to balance angular continuity with spatial detail fidelity. To address these challenges, we propose a Continuity-driven Synergistic Diffusion with Neural priors (CSDN) for ultra-sparse-view CBCT reconstruction. Neural priors are introduced as a structural foundation to encode a continuous threedimensional attenuation representation, enabling the synthesis of physically consistent dense projections from ultra-sparse measurements. Building upon this neural-prior-based initialization, a synergistic diffusion strategy is developed, consisting of two collaborative refinement paths: a Sinogram Refinement Diffusion (Sino-RD) process that restores angular continuity and a Digital Radiography Refinement Diffusion (DR-RD) process that enforces inter-slice consistency from the projection image perspective. The outputs of the two diffusion paths are adaptively fused by the Dual-Projection Reconstruction Fusion (DPRF) module to achieve coherent volumetric reconstruction. Extensive experiments demonstrate that the proposed CSDN effectively suppresses artifacts and recovers fine textures under ultra-sparse-view conditions, outperforming existing state-of-the-art techniques.
💡 Research Summary
The paper addresses the critical challenge of reconstructing high‑quality cone‑beam computed tomography (CBCT) images from ultra‑sparse angular views, a scenario that dramatically reduces patient radiation dose but typically yields severe streak artifacts, loss of fine detail, and inter‑slice inconsistencies. Existing methods either operate in the sinogram domain, the image domain, or combine both, yet they fail to simultaneously enforce the two fundamental continuity constraints inherent to CBCT physics: (1) angular continuity of the projection data (the smoothness required by the Radon transform) and (2) inter‑slice continuity along the axial direction dictated by the cone‑beam geometry.
To overcome these limitations, the authors propose Continuity‑driven Synergistic Diffusion with Neural priors (CSDN). The framework consists of three major components:
-
Neural Prior (NP) – An implicit neural representation (INR) that maps 3‑D spatial coordinates to attenuation coefficients, effectively learning a continuous attenuation field. The NP is trained using the ultra‑sparse measurements together with a physics‑based loss derived from the Beer‑Lambert law, thereby producing dense, physically consistent synthetic projections that serve as a robust initialization for subsequent refinement.
-
Dual‑Path Synergistic Diffusion – Two conditional diffusion processes are applied in parallel:
- Sinogram Refinement Diffusion (Sino‑RD) operates in the sinogram space. Starting from the dense sinogram generated by the NP, it iteratively denoises a sequence of Gaussian‑perturbed sinograms while conditioning on the sparse real measurements and on continuity regularizers that enforce smooth angular variation. This restores missing angular information and reduces streaking.
- Digital Radiography Refinement Diffusion (DR‑RD) works directly on the projection images (digital radiographs). It leverages the same diffusion backbone but incorporates a 3‑D contextual module that promotes consistency between adjacent slices, thereby correcting inter‑slice discontinuities that are not addressed by sinogram‑only methods.
-
Dual‑Projection Reconstruction Fusion (DPRF) – The outputs of Sino‑RD and DR‑RD are adaptively fused. DPRF employs a learned attention or weighting mechanism that evaluates the reliability of each path per view and per region, then combines them to produce a final sinogram which is back‑projected (e.g., via FDK or iterative reconstruction) to obtain the 3‑D volume.
Experimental validation is performed on several ultra‑sparse configurations (as few as 8–12 views). Quantitative metrics (PSNR, SSIM, RMSE) show that CSDN outperforms state‑of‑the‑art baselines such as GMSD, CT‑SDM, DOLCE, and DCDS by 2–3 dB in PSNR and noticeable gains in structural similarity. Visual inspection confirms the suppression of streak artifacts, preservation of fine textures (e.g., small nodules, vascular structures), and the elimination of slice‑to‑slice distortion. Ablation studies demonstrate that removing the neural prior or using only one diffusion path degrades performance, confirming the necessity of both components.
Strengths of the work include:
- A principled incorporation of physical continuity constraints through an implicit neural field, providing a physics‑consistent initialization.
- A synergistic dual‑diffusion scheme that simultaneously restores angular smoothness and axial coherence, a combination rarely explored in prior literature.
- Adaptive fusion that leverages the complementary strengths of sinogram‑ and projection‑domain refinements, leading to superior reconstruction quality.
Limitations are also acknowledged: training the neural prior and running two diffusion processes are computationally intensive, which may hinder real‑time clinical deployment. The diffusion hyper‑parameters (noise schedule, number of steps) are sensitive and may need retuning for different scanner geometries or acquisition protocols. Moreover, the current implementation assumes a standard circular trajectory; extending to non‑circular or limited‑arc trajectories will require additional research.
In conclusion, CSDN introduces a novel paradigm that unifies continuity modeling and diffusion‑based refinement for ultra‑sparse‑view CBCT reconstruction. By jointly addressing angular and inter‑slice continuity, it achieves a compelling balance between dose reduction and diagnostic‑grade image quality, paving the way for safer longitudinal imaging in radiotherapy and other clinical contexts. Future work will focus on model acceleration, broader geometric generalization, and integration with multimodal imaging pipelines.
Comments & Academic Discussion
Loading comments...
Leave a Comment