Accelerated Time-Domain Simulation of Complex Photonic Structures with a Data-Aware Fourier Neural Operator

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Efficient and accurate time-domain simulation of electromagnetic fields in complex photonic devices is critical for designing broadband and ultrafast optical components, yet it is often limited by the high computational cost of conventional numerical methods such as FDTD. While machine-learning approaches show promise for accelerating these simulations, existing models still struggle to simultaneously capture the dynamic field evolution and generalize to complex geometries. In this paper, we introduce the Data-Aware Fourier Neural Operator (DA-FNO), a novel neural operator for electromagnetic simulation. Applied autoregressively, the model iteratively predicts the time-domain evolution of all field components and terminates automatically upon energy convergence. It not only generalizes to complex, randomized geometries but also shows good predictive consistency across the optical C-band (1530–1565 nm) when evaluated on the test set. In a representative configuration, it achieves an 11× speedup over conventional methods while maintaining about 95% accuracy across the C-band. This approach provides a new pathway for C-band photonic simulations, potentially facilitating the research, development, and inverse design of novel photonic devices.


💡 Research Summary

The paper introduces a novel neural operator, the Data‑Aware Fourier Neural Operator (DA‑FNO), designed to accelerate time‑domain electromagnetic simulations of complex photonic structures while preserving high accuracy. Traditional finite‑difference time‑domain (FDTD) methods, though accurate, suffer from severe computational burdens due to the Courant‑Friedrichs‑Lewy (CFL) stability constraint, especially for broadband or large‑scale problems. Recent machine‑learning approaches have attempted to alleviate this cost, yet they either fail to capture the full temporal dynamics, cannot generalize to irregular geometries, or accumulate errors over long‑term predictions.

DA‑FNO builds upon the vanilla Fourier Neural Operator (FNO) by incorporating two physics‑motivated modifications. First, a learnable 3 × 3 convolutional layer precedes each Fourier layer, explicitly mixing the three TM‑polarized field components (Ez, Hx, Hy). This convolution mimics the local coupling dictated by Maxwell’s curl equations, ensuring that the network respects the intrinsic physical relationships among the fields rather than treating each component as an independent channel. Second, instead of the fixed rectangular low‑pass truncation used in standard FNOs, the authors propose a data‑aware spectral mode selection scheme. By Fourier‑transforming all training snapshots, normalizing their amplitudes, and selecting modes in descending magnitude until a cumulative energy threshold θ is reached (e.g., θ = 0.9), the network retains the high‑frequency modes essential for resolving fine scattering and diffraction features that emerge in complex geometries.
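The data-aware mode-selection scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `select_modes`, the snapshot array layout `(N, H, W)`, and the per-snapshot normalization are assumptions.

```python
import numpy as np

def select_modes(snapshots, theta=0.9):
    """Pick the spectral modes covering a cumulative-energy fraction theta.

    snapshots : array of shape (N, H, W) -- training field snapshots.
    Returns a boolean mask over the (H, W) spectral grid; True marks a
    retained Fourier mode.
    """
    # 2-D FFT of every snapshot; work with normalized amplitudes so each
    # snapshot contributes equally to the aggregate spectrum.
    spectra = np.abs(np.fft.fft2(snapshots, axes=(-2, -1)))
    spectra /= spectra.sum(axis=(-2, -1), keepdims=True)
    avg = spectra.mean(axis=0)  # dataset-averaged spectral energy

    # Sort modes by descending amplitude and keep the smallest set whose
    # cumulative energy reaches theta -- unlike a fixed rectangular low-pass
    # cut, this can retain isolated high-frequency modes.
    order = np.argsort(avg, axis=None)[::-1]
    cum = np.cumsum(avg.ravel()[order])
    k = int(np.searchsorted(cum, theta)) + 1

    mask = np.zeros(avg.size, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(avg.shape)
```

In training, such a mask would replace the standard FNO's rectangular truncation: only the masked modes receive learnable spectral weights, and all others are zeroed before the inverse FFT.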

The model operates autoregressively. Five consecutive time steps {sₜ₋₄,…,sₜ} are stacked as input channels, each containing the three field components plus the spatial coordinates and permittivity distribution ε, thereby embedding geometric priors. This tensor is lifted to a high‑dimensional latent space via a shallow fully‑connected network (P), processed through four DA‑FNO layers (convolution → Fourier transform → spectral weighting → inverse Fourier → residual addition → SELU activation), and finally projected back to the physical space by another linear layer (Q) to produce the next state sₜ₊₁. After each prediction, the oldest state is discarded and the new one appended, forming a rolling window for the next iteration.
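The rolling-window rollout can be sketched as below. The `model` callable is a hypothetical stand-in for the trained DA-FNO (lift P → four DA-FNO layers → projection Q); only the window bookkeeping is shown.

```python
from collections import deque

def autoregressive_rollout(model, init_states, n_steps):
    """Roll the operator forward, predicting s_{t+1} from the last 5 states.

    model       : callable mapping a list of 5 states -> the next state
                  (placeholder for the trained DA-FNO).
    init_states : the 5 seed states {s_{t-4}, ..., s_t} from the solver.
    """
    window = deque(init_states, maxlen=5)  # maxlen drops the oldest state
    trajectory = []
    for _ in range(n_steps):
        s_next = model(list(window))
        trajectory.append(s_next)
        window.append(s_next)  # slide the window: discard oldest, append new
    return trajectory
```

Using `deque(maxlen=5)` makes the "discard the oldest state and append the new one" step automatic; in practice each state would also carry the coordinate and permittivity channels so the geometric prior travels with the window.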

A key practical feature is the automatic termination criterion based on electromagnetic energy. The instantaneous domain energy Uₜ₊₁ = ∫½(E·D + H·B) dS is computed after each step; the simulation stops when Uₜ₊₁ falls below δ × U_max, where U_max is the maximum energy observed so far and δ is a small convergence factor (e.g., 10⁻²). Because the network is not bound by CFL, a coarse time step of m Δt (with m = 15 in the main experiments) can be used, yielding an 11‑fold speedup over conventional FDTD while maintaining about 95 % accuracy (average relative L1 error ≈ 0.235 ± 0.027 on the test set).
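The termination test can be illustrated as follows. This is a simplified sketch assuming a 2-D TM grid and non-dispersive media, so that E·D = ε|E|² and H·B = μ|H|²; the function names and the unit-μ default are assumptions.

```python
import numpy as np

def domain_energy(Ez, Hx, Hy, eps, mu=1.0):
    """Instantaneous EM energy U = sum 1/2 (eps*Ez^2 + mu*(Hx^2 + Hy^2))."""
    return 0.5 * np.sum(eps * Ez**2 + mu * (Hx**2 + Hy**2))

def should_stop(u_now, u_max, delta=1e-2):
    """Terminate once the energy falls below delta times the running maximum."""
    return u_now < delta * u_max
```

After each predicted step the driver would update `u_max = max(u_max, u_now)` and break out of the rollout loop when `should_stop` returns `True`, so the network itself decides when the fields have decayed.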

The authors generate a comprehensive dataset using an in‑house FDTD solver with a Gaussian pulse centered at 1550 nm, covering the full optical C‑band (1530‑1565 nm) and randomizing the geometry of each sample (various rectangles, circles, and composite shapes). Training is performed on three coarse time‑step settings (12 Δt, 15 Δt, 20 Δt) with a fixed convergence factor δ = 10⁻⁴. Performance is evaluated using the Average Relative L1 Error (ARL1E) across all three field components and time steps. Compared to the vanilla FNO (ARL1E ≈ 0.970) and a convolution‑only baseline (ARL1E ≈ 0.246), DA‑FNO with θ = 0.9 achieves the lowest error while using a comparable number of parameters. Ablation studies varying θ demonstrate a clear trade‑off: lower θ reduces the number of selected spectral modes but slightly degrades accuracy, confirming the importance of retaining high‑frequency information.
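The evaluation metric can be sketched as below. This is a plausible reading of the Average Relative L1 Error, not the paper's exact definition: the array layout `(T, C, H, W)` and the small `eps` guard against division by zero are assumptions.

```python
import numpy as np

def arl1e(pred, true, eps=1e-12):
    """Average Relative L1 Error over time steps and field components.

    pred, true : arrays of shape (T, C, H, W) -- time steps x field
                 components (Ez, Hx, Hy) x spatial grid.
    For each (step, component) pair, compute |pred - true|_1 / |true|_1
    over the grid, then average the ratios.
    """
    num = np.abs(pred - true).sum(axis=(-2, -1))
    den = np.abs(true).sum(axis=(-2, -1)) + eps
    return float((num / den).mean())
```

Under this reading, the reported DA-FNO score of ≈ 0.235 means the predicted fields deviate from the FDTD reference by roughly 23.5% in relative L1 norm, averaged over all steps and components, versus ≈ 0.970 for the vanilla FNO.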

A preliminary extension to 3‑D extruded geometries shows that the same architecture can learn volumetric features, suggesting scalability to full‑3D photonic simulations. The paper also provides the first 200 training samples and full implementation code, facilitating reproducibility.

In summary, DA‑FNO integrates physics‑aware local convolutions with data‑driven spectral mode selection to overcome the limitations of existing neural operators for electromagnetic time‑domain problems. It delivers substantial computational acceleration (≈ 11×) without sacrificing the fidelity required for broadband C‑band photonic design, and it generalizes robustly to unseen, irregular geometries. This work represents a significant step toward ML‑augmented photonic simulation pipelines, enabling faster forward modeling, inverse design, and large‑scale optimization of next‑generation optical components.

