Decoding Orbital Angular Momentum in Turbid Tissue-like Scattering Medium via Fourier-Domain Deep Learning


Structured light beams carrying orbital angular momentum (OAM), such as Laguerre-Gaussian modes, are promising tools for high-capacity optical communications and advanced biomedical imaging. However, multiple scattering in turbid media distorts their phase and amplitude, complicating the retrieval of topological charge. We introduce VortexNet, a deep learning architecture that integrates an Angular Fourier Transform to explicitly extract rotational symmetries of OAM beams from experimentally acquired intensity and interference patterns. By transforming spatial information into the angular frequency domain, VortexNet isolates azimuthal features that persist despite scattering, enabling accurate topological charge classification even in complex optical environments. The results reveal that OAM-specific angular correlations can survive multiple scattering and be decoded through angular-domain learning. This establishes a new paradigm for structured-light analysis in complex media, where deep learning enables the recovery of topological information beyond the reach of classical optics, paving the way for resilient photonic systems in communication, sensing, and imaging.


💡 Research Summary

The paper tackles the long‑standing challenge of retrieving the orbital angular momentum (OAM) topological charge of Laguerre‑Gaussian (LG) beams after they have traversed highly scattering, tissue‑like media. Conventional interferometric, diffractive, or modal‑decomposition techniques fail once multiple scattering randomizes the phase front and destroys recognizable intensity structures. To overcome this, the authors introduce VortexNet, a deep‑learning classifier that explicitly operates in the angular‑frequency domain.

Key methodological innovation
Before feeding the data to a convolutional neural network, each recorded intensity or on‑axis interference image is transformed by an Angular Fourier Transform (AFT). The AFT converts the azimuthal coordinate ϕ into its frequency components, thereby isolating the ℓ‑dependent rotational symmetry that survives even in dense speckle fields. The resulting angular spectrum is reshaped into a 2‑D tensor (angular frequency × radial coordinate) and processed by a conventional CNN consisting of five convolutional blocks, batch normalization, global average pooling, and a final soft‑max layer that outputs one of six classes (ℓ = 0 … 5).
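The summary does not give the authors' implementation of the AFT, but the preprocessing step it describes can be sketched in a few lines of NumPy: resample the camera image onto a polar grid and take an FFT along the azimuthal axis, yielding an (angular frequency × radius) tensor. The nearest-neighbour resampling, grid sizes, and the synthetic three-lobe test pattern below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def angular_fourier_transform(img, n_r=64, n_theta=128):
    """Resample a square image onto a polar grid centred on the image
    centre, then FFT along the azimuthal axis.  Output rows index the
    angular frequency, columns the radial coordinate."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    radii = np.linspace(0, min(cy, cx), n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    # Nearest-neighbour polar resampling: polar[i, j] = img at (r_i, theta_j)
    ys = np.round(cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    xs = np.round(cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    polar = img[ys, xs]                           # (n_r, n_theta)
    spec = np.abs(np.fft.fft(polar, axis=1))      # FFT over the azimuth phi
    return spec.T                                 # (angular freq, radius)

# Synthetic "petal" pattern: an l = 3 azimuthal modulation on a ring,
# standing in for an on-axis interference image
n = 128
y, x = np.mgrid[0:n, 0:n]
phi = np.arctan2(y - (n - 1) / 2, x - (n - 1) / 2)
r = np.hypot(y - (n - 1) / 2, x - (n - 1) / 2)
img = (1 + np.cos(3 * phi)) * np.exp(-((r - 30) ** 2) / 100)

spec = angular_fourier_transform(img)
profile = spec[1:20].sum(axis=1)   # skip the DC row, pool over radius
ell = 1 + int(np.argmax(profile))
print(ell)                          # dominant angular frequency: 3
```

The point of the transform is visible in the toy example: the ℓ-lobed azimuthal modulation collapses onto a single row of the angular spectrum, which is exactly the kind of feature a downstream CNN can latch onto even when speckle blurs the lobes in the spatial domain.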

Experimental and simulation framework
A 640 nm continuous‑wave laser is coupled into a single‑mode fiber, expanded, and modulated by a spatial light modulator to generate LG beams with ℓ = 0–5. The beams pass through a controllable scattering slab that mimics biological tissue; the optical depth z/l* (slab thickness z in units of the transport mean free path l*) is varied from 0 (free space) to 16 (fully diffusive regime). For each depth, both raw intensity images and on‑axis interference (petal) patterns are recorded, yielding more than 10 000 samples per class. Complementary Monte‑Carlo photon‑transport simulations reproduce the same conditions and validate the experimental data.

Training strategy
The dataset is augmented with random rotations and crops to enforce rotational invariance. The network is trained with the Adam optimizer (learning rate = 1e‑3) for 50 epochs, using categorical cross‑entropy loss. No explicit phase information is supplied; the model relies solely on the angular‑frequency representation produced by the AFT.
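Putting the architecture and training details above together, a rough PyTorch sketch might look like the following. The framework choice, channel widths, and input resolution are assumptions (the summary states only five conv blocks, batch normalization, global average pooling, a six-way softmax, Adam at 1e-3, and categorical cross-entropy), so treat this as a minimal stand-in, not the authors' code.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # One of five convolutional blocks: conv -> batch norm -> ReLU -> pool
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class VortexNetSketch(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        chans = [1, 16, 32, 64, 64, 64]       # assumed widths
        self.features = nn.Sequential(
            *[conv_block(chans[i], chans[i + 1]) for i in range(5)]
        )
        self.head = nn.Linear(chans[-1], n_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))                # global average pooling
        return self.head(x)                   # logits; softmax lives in the loss

model = VortexNetSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()               # categorical cross-entropy

# One illustrative step on random stand-in data; real inputs would be
# AFT spectra shaped as (angular frequency x radius) single-channel images
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 6, (8,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

In a full run this step would repeat over the augmented dataset for the stated 50 epochs; note that `CrossEntropyLoss` expects raw logits, which is why the network ends with a linear head rather than an explicit softmax.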

Performance results
VortexNet achieves 100 % classification accuracy for the highest‑order mode (ℓ = 5) across the entire range of optical depths, including the fully diffusive case where conventional methods completely fail. For lower orders, accuracy remains perfect in the quasi‑ballistic regime (z/l* ≤ 4) and the “vortex‑memory” regime (z/l* ≈ 9.6), dips to 40‑70 % around the annular‑blur region (z/l* ≈ 6.5), and degrades further at the deepest scattering (z/l* = 16). Confusion matrices and Integrated Gradients (IG) visualizations reveal how the network’s attention shifts with depth: from the classic five‑lobed petal pattern in low‑scattering conditions, to thin radial spokes in the blur regime, to subtle star‑shaped speckle harmonics in the vortex‑memory regime, and finally to minute residual lobes at the core in the fully diffusive case.
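The Integrated Gradients attributions referenced above are compact enough to sketch. The version below assumes a zero baseline and a simple averaged-gradient path integral (the paper's exact IG settings are not stated in the summary); the linear stand-in model is only there to make the snippet self-contained.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=32):
    """Integrated Gradients: average the gradient of the target logit
    along a straight path from a baseline (default: all zeros) to the
    input, then scale by (x - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    grads = torch.zeros_like(x)
    for a in torch.linspace(0, 1, steps):
        xi = (baseline + a * (x - baseline)).detach().requires_grad_(True)
        model(xi)[0, target].backward()       # gradient of one logit w.r.t. input
        grads += xi.grad
    return (x - baseline) * grads / steps

# Sanity check on a linear "model": for f(x) = Wx with a zero baseline,
# the attributions sum exactly to the target logit (completeness axiom).
lin = torch.nn.Linear(4, 2, bias=False)
x = torch.randn(1, 4)
attr = integrated_gradients(lin, x, target=1)
```

Applied to a trained classifier instead of `lin`, the same routine produces per-pixel attribution maps like those discussed above, whose structure shifts from petal lobes to radial spokes to speckle harmonics as the optical depth grows.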

Physical interpretation
The results confirm that OAM‑specific angular correlations survive multiple scattering as a form of “phase memory.” High‑order vortices imprint stronger azimuthal signatures into the speckle field, making them more resilient to decorrelation. The AFT isolates these signatures, allowing the CNN to exploit information that is invisible to the naked eye or to traditional optics.

Limitations and future directions
The current dataset covers only six topological charges, limiting direct applicability to real‑world OAM multiplexing systems that may require dozens or hundreds of channels. The scattering model is static and isotropic; dynamic or anisotropic media (e.g., flowing blood) are not addressed. Expanding the training set, incorporating time‑varying speckle sequences, and combining VortexNet with optical pre‑conditioning techniques such as wavefront shaping or adaptive optics are promising avenues to push the depth limit further and enable real‑time communication or imaging.

Implications

  1. Optical communications – By decoding OAM states without coherent detection, VortexNet could extend OAM‑based multiplexing to turbulent free‑space links or multimode fibers where scattering is severe.
  2. Biomedical imaging – The ability to read OAM after deep tissue penetration opens new contrast mechanisms, potentially allowing angular‑momentum‑sensitive fluorescence or Raman probing at depths previously inaccessible.
  3. Sensing – Persistent angular signatures could be harnessed for ultra‑sensitive refractive‑index or mechanical‑stress sensing in scattering environments.

In summary, the study demonstrates that a physics‑guided preprocessing step (angular Fourier transform) combined with deep learning can recover hidden rotational order from heavily scrambled optical fields. VortexNet thus establishes a new paradigm for structured‑light analysis in turbid media, showing that OAM information is not destroyed by scattering but merely encoded in a domain that neural networks can learn to decode. This insight paves the way for robust, data‑driven photonic systems across communication, imaging, and sensing applications.

