Improved Physics-Driven Neural Network to Solve Inverse Scattering Problems
This paper presents an improved physics-driven neural network (IPDNN) framework for solving electromagnetic inverse scattering problems (ISPs). A new Gaussian-localized oscillation-suppressing window (GLOW) activation function is introduced to stabilize convergence and enable a lightweight yet accurate network architecture. A dynamic scatter subregion identification strategy is further developed to adaptively refine the computational domain, preventing missed detections and reducing computational cost. Moreover, transfer learning is incorporated to extend the solver’s applicability to practical scenarios, integrating the physical interpretability of iterative algorithms with the real-time inference capability of neural networks. Numerical simulations and experimental results demonstrate that the proposed solver achieves superior reconstruction accuracy, robustness, and efficiency compared with existing state-of-the-art methods.
💡 Research Summary
This paper presents a significant advancement in solving electromagnetic inverse scattering problems (ISPs) through the development of an Improved Physics-Driven Neural Network (IPDNN) framework. ISPs, which aim to reconstruct the properties of unknown objects from scattered field measurements, are inherently ill-posed, nonlinear, and computationally intensive. The proposed IPDNN enhances the authors’ prior Physics-Driven Neural Network (PDNN) by introducing three key innovations to address these challenges, leading to superior reconstruction accuracy, robustness, and computational efficiency.
The first major contribution is the novel Gaussian-localized oscillation-suppressing window (GLOW) activation function. Unlike conventional activation functions such as ReLU or Tanh, GLOW is designed with properties specifically beneficial for learning the complex nonlinear mappings in ISPs. Its derivative increases linearly for small inputs but decays exponentially for large ones. This characteristic allows it to preserve subtle features while suppressing the influence of outliers with large amplitudes, thereby stabilizing convergence and improving reconstruction precision. Crucially, GLOW’s effectiveness is robust to hyperparameter variations and network architecture, enabling the use of a lightweight network with a single fully connected layer without sacrificing performance. This reduces memory usage dramatically compared with more complex architectures.
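The paper's exact GLOW formula is not reproduced in this summary, but the stated property (a derivative that grows linearly for small inputs and decays exponentially for large ones) is satisfied by a Gaussian-derivative window. The sketch below is an illustrative assumption in that spirit, not the authors' definition: it takes f'(x) = x·exp(−x²/(2σ²)) and integrates it to a bounded activation. The function names and the scale parameter sigma are hypothetical.

```python
import numpy as np

def glow_like(x, sigma=1.0):
    # Hypothetical GLOW-style activation: bounded, smooth, and flat far from 0,
    # obtained by integrating the Gaussian-derivative window below.
    return sigma**2 * (1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

def glow_like_grad(x, sigma=1.0):
    # Derivative grows ~linearly near 0 (g(x) ≈ x) and is exponentially
    # suppressed for |x| >> sigma, damping large-amplitude outliers.
    return x * np.exp(-x**2 / (2.0 * sigma**2))
```

Under this assumed form, small pre-activations pass through with a near-linear gradient (preserving subtle features), while the gradient contribution of large-amplitude inputs vanishes exponentially, which is one plausible mechanism for the convergence stabilization the paper reports.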
Second, the paper introduces a dynamic scatter subregion identification strategy. Instead of fixing the region of interest at initialization (as in the original PDNN), this mechanism periodically analyzes the network’s predicted permittivity distribution during training. It dynamically updates a binary mask identifying “active grids” likely containing scatterers. The forward electromagnetic calculations are then performed only on these active grids. This approach serves a dual purpose: it recovers subregions that might have been missed in the initial estimate, preventing detection failures, and it drastically reduces computational cost by focusing calculations only on relevant areas.
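A minimal sketch of the dynamic subregion idea follows. The threshold rule, update period, and function names are illustrative assumptions rather than the paper's exact algorithm: periodically, the predicted permittivity contrast is thresholded into a binary "active grid" mask, lightly dilated so that neighboring cells are not missed, and the forward electromagnetic solve is then restricted to the active cells.

```python
import numpy as np

def update_active_mask(pred_contrast, rel_threshold=0.1, pad=1):
    """Return a boolean mask of grid cells likely to contain scatterers.

    Cells whose predicted contrast magnitude exceeds a fraction of the current
    maximum are marked active; a cheap dilation by `pad` cells guards against
    missing scatterer boundaries (assumed heuristic, not the paper's rule).
    """
    mask = np.abs(pred_contrast) > rel_threshold * np.abs(pred_contrast).max()
    dilated = mask.copy()
    for shift in range(1, pad + 1):
        for axis in (0, 1):
            dilated |= np.roll(mask, shift, axis=axis)
            dilated |= np.roll(mask, -shift, axis=axis)
    return dilated
```

In use, the forward model would iterate only over `np.argwhere(mask)`, so the cost of each update scales with the active fraction of the domain rather than the full grid, consistent with the cost reduction described above.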
Third, the framework incorporates transfer learning to enhance practical applicability in real-world scenarios such as nondestructive testing. The network is first pre-trained using data from a known, defect-free (sound) object. The learned weights are then transferred and fine-tuned for the downstream task of defect detection within a similar object. This leverages prior structural knowledge, accelerating convergence, improving image quality, and enabling effective reconstruction even with a reduced number of transmitters, bridging the gap between the interpretability of physics-based iterative methods and the real-time inference capability of neural networks.
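The pre-train-then-fine-tune workflow can be sketched with a toy stand-in for the network. Everything below is an illustrative assumption (synthetic data, a single linear layer, gradient-descent training loop); the point is only the weight transfer: fit on the defect-free "sound" object first, then warm-start the defect reconstruction from those weights so far fewer updates are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(W, X, Y, lr=0.1, steps=200):
    """Gradient-descent fit of a linear layer Y ≈ X @ W (toy network stand-in)."""
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / len(X)
        W = W - lr * grad
    return W

# Synthetic measurements of the sound (defect-free) object.
X_sound = rng.normal(size=(64, 8))
W_true = rng.normal(size=(8, 4))
Y_sound = X_sound @ W_true

# Stage 1: pre-train from scratch on the sound object.
W_pre = train(np.zeros((8, 4)), X_sound, Y_sound)

# Stage 2: fine-tune on the defective object, warm-started from W_pre.
# The defect is modeled as a small perturbation of the sound-object response.
Y_defect = Y_sound + 0.05 * rng.normal(size=Y_sound.shape)
W_ft = train(W_pre, X_sound, Y_defect, steps=50)  # far fewer steps than stage 1
```

Because the defective object differs only slightly from the pre-training target, the warm-started model begins near the new optimum, which mirrors the paper's observation that transfer learning reduces the required iterations (and tolerates fewer transmitters).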
The methodology is comprehensively validated through numerical simulations and experimental tests using the Fresnel Institute datasets. Results demonstrate that the GLOW activation function yields sharper boundaries, better contrast, and higher fidelity to ground truth compared to other activations, alongside faster and more stable convergence. The dynamic identification strategy effectively minimizes computational overhead. The transfer learning approach is shown to reduce the required number of iterations and transmitters while maintaining high reconstruction quality. In conclusion, the IPDNN framework establishes a new state-of-the-art for solving ISPs, offering a powerful, efficient, and practical solver that combines physical principles with the adaptive learning power of deep neural networks.