End-to-end reconstruction of OCT optical properties and speckle-reduced structural intensity via physics-based learning

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Inverse scattering in optical coherence tomography (OCT) seeks to recover both structural images and intrinsic tissue optical properties, including refractive index, scattering coefficient, and anisotropy. This inverse problem is challenging due to attenuation, speckle noise, and strong coupling among parameters. We propose a regularized end-to-end deep learning framework that jointly reconstructs optical parameter maps and speckle-reduced OCT structural intensity for layer visualization. Trained with Monte Carlo-simulated ground truth, our network incorporates a physics-based OCT forward model that generates predicted signals from the estimated parameters, providing physics-consistent supervision for parameter recovery and artifact suppression. Experiments on a synthetic corneal OCT dataset demonstrate robust optical map recovery under noise, improved resolution, and enhanced structural fidelity. This approach enables quantitative multi-parameter tissue characterization and highlights the benefit of combining physics-informed modeling with deep learning for computational OCT.


💡 Research Summary

The paper addresses the longstanding challenge of quantitative inverse scattering in optical coherence tomography (OCT), where the goal is to recover intrinsic tissue optical parameters—refractive index (n), scattering coefficient (µs), and anisotropy factor (g)—as well as a speckle‑reduced structural intensity map from a single raw B‑mode OCT image. Traditional methods rely on simplistic attenuation models, exponential fitting, or iterative regularization, which are highly sensitive to noise, require manual segmentation, and often oversmooth the data. Recent purely data‑driven deep learning approaches suffer from a lack of ground‑truth optical parameters and consequently generalize poorly to new tissue types or imaging conditions.

To overcome these limitations, the authors propose a physics‑regularized end‑to‑end deep learning framework that jointly estimates all four quantities in a single forward pass. The architecture consists of a multi‑branch U‑Net encoder‑decoder. The raw OCT image (1024 × 1024) is fed into a shared encoder, after which four independent decoder branches predict n(x,y), µs(x,y), g(x,y), and a speckle‑reduced intensity Ĩ(x,y). Separate branches prevent interference among the tightly coupled optical parameters while preserving high‑frequency spatial details via skip connections.
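The shared-encoder, four-branch design described above can be sketched as follows. The layer widths, depths, and upsampling choices here are illustrative assumptions (this summary does not specify the paper's exact U-Net configuration); only the overall structure, one shared encoder feeding four independent decoder branches with skip connections, follows the text.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One decoder branch: upsample the shared code, fuse the skip, predict one map."""
    def __init__(self, ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, kernel_size=4, stride=2, padding=1),
            nn.ReLU())
        self.head = nn.Conv2d(2 * ch, 1, kernel_size=3, padding=1)

    def forward(self, z, skip):
        u = self.up(z)                                 # back to input resolution
        return self.head(torch.cat([u, skip], dim=1))  # skip-connection fusion

class MultiBranchOCTNet(nn.Module):
    """Shared encoder with four independent decoders for n, mu_s, g, and intensity."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU())
        self.branches = nn.ModuleList(Branch(ch) for _ in range(4))

    def forward(self, x):
        skip = self.enc1(x)   # full-resolution features (skip source)
        z = self.enc2(skip)   # shared latent code at half resolution
        return [b(z, skip) for b in self.branches]  # [n, mu_s, g, despeckled I]
```

Because each branch has its own weights after the shared encoder, gradients for one parameter map cannot overwrite features specialized for another, which is the interference-prevention argument made above.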

The novelty lies in embedding a differentiable OCT forward model, derived from the Extended Huygens–Fresnel (EHF) theory, directly into the loss function. The forward model analytically combines three contributions—single scattering, forward multiple scattering, and their coherent cross‑term—using depth‑dependent beam waists wH(z) and wS(z) that depend on system optics and the anisotropy‑dependent RMS scattering angle θRMS ≈ √(2(1−g)).
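As a rough illustration of how such a forward model can supervise parameter recovery, the sketch below evaluates an EHF-style depth profile with the three terms named above (single scattering, coherent cross term, forward multiple scattering) and compares it to a measured profile via a mean-squared physics-consistency loss. The term coefficients and the simplified beam-waist model are assumptions in the spirit of the standard Thrane-style EHF formulation, not the paper's exact expressions.

```python
import numpy as np

def ehf_signal(z, mu_s, g, w0=10e-6, n=1.38):
    """EHF-style mean-square heterodyne signal vs. depth z (meters).

    Illustrative sketch: w_H is held constant and w_S is broadened by the
    RMS scattering angle; the paper's depth-dependent waists may differ.
    """
    theta_rms = np.sqrt(2.0 * (1.0 - g))             # RMS angle from anisotropy g
    w_H = w0 * np.ones_like(z)                       # coherent (heterodyne) waist
    w_S = np.sqrt(w_H**2 + (z * theta_rms / n)**2)   # scattering-broadened waist
    s = np.exp(-mu_s * z)                            # ballistic survival fraction
    single = s**2                                    # single-scattering term
    cross = 2.0 * s * (1.0 - s) / (1.0 + w_S**2 / w_H**2)  # coherent cross term
    multiple = (1.0 - s)**2 * (w_H**2 / w_S**2)      # forward multiple scattering
    return single + cross + multiple

def physics_loss(I_meas, z, mu_s_pred, g_pred):
    """Physics-consistency loss: measured profile vs. forward model at the
    network's predicted parameters (MSE, as a stand-in for the paper's loss)."""
    return np.mean((I_meas - ehf_signal(z, mu_s_pred, g_pred)) ** 2)
```

In the actual framework this comparison would be written in a differentiable tensor library so that the mismatch backpropagates into the parameter-estimation branches; NumPy is used here only to keep the sketch self-contained.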

