Physics-Driven Neural Network for Solving Electromagnetic Inverse Scattering Problems

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

In recent years, deep learning-based methods have been proposed for solving inverse scattering problems (ISPs), but most of them rely heavily on data and offer limited generalization. In this paper, a new solving scheme is proposed in which the solution is iteratively updated along with the physics-driven neural network (PDNN), whose parameters are optimized by minimizing a loss function that incorporates constraints from the collected scattered fields and prior information about the scatterers. Unlike data-driven neural network solvers, training the PDNN requires only the collected scattered fields and the computation of the scattered fields corresponding to the predicted solutions, which avoids the generalization problem. Moreover, to improve imaging efficiency, the subregion enclosing the scatterers is identified before reconstruction. Numerical and experimental results demonstrate that the proposed scheme achieves high reconstruction accuracy and strong stability, even when dealing with composite lossy scatterers.


💡 Research Summary

The paper introduces a novel physics‑driven neural network (PDNN) framework for solving electromagnetic inverse scattering problems (ISPs). Traditional ISP solvers rely on linear approximations (Born, Rytov) or iterative nonlinear optimization (Born‑Iterative, DBIM, CSI), which are computationally intensive and sensitive to initial guesses. Recent data‑driven deep learning approaches (DeepNIS, U‑Net, SOM‑Net) achieve fast reconstructions but require large training datasets and suffer from poor generalization to unseen scatterer configurations.

To overcome these limitations, the authors embed the governing electromagnetic equations directly into the loss function of a neural network, eliminating the need for any pre‑collected training data. The PDNN takes as input the measured scattered fields (real and imaginary parts) and outputs a predicted complex contrast distribution (relative permittivity). Its architecture consists of three 3×3 convolutional layers, three residual blocks, and two fully‑connected layers, using ReLU/LeakyReLU activations to capture the highly nonlinear mapping between fields and material parameters.
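The summary specifies only the layer types (three 3×3 convolutions, three residual blocks, two fully connected layers, ReLU/LeakyReLU activations), not their widths or exact ordering. A minimal PyTorch sketch under those assumptions, with the channel count, hidden width, and residual-block design chosen for illustration, might look like this:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Channel-preserving conv-act-conv block with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class PDNN(nn.Module):
    """Sketch: 3 conv layers -> 3 residual blocks -> 2 FC layers.
    Input: real/imag scattered-field data on a grid (2 channels);
    output: real/imag parts of the contrast on the same grid."""
    def __init__(self, grid=64, ch=32):
        super().__init__()
        self.grid = grid
        self.features = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            ResidualBlock(ch), ResidualBlock(ch), ResidualBlock(ch),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(ch * grid * grid, 512), nn.ReLU(),
            nn.Linear(512, 2 * grid * grid),
        )

    def forward(self, x):
        y = self.head(self.features(x))
        return y.view(-1, 2, self.grid, self.grid)
```

The two output channels are interpreted as the real and imaginary parts of the predicted contrast, matching the complex-valued permittivity described above.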

The loss function is composed of three terms:

  1. Data fidelity (L_Data) – the L1 norm of the difference between the measured scattered fields and those recomputed from the current prediction using the Method of Moments (MoM). This term enforces strict agreement with the physical measurement.

  2. Lower‑bound regularization (L_Bound) – a ReLU‑based penalty that forces the real part of the predicted permittivity to stay above unity, reflecting the physical fact that the scatterer cannot have a permittivity lower than the background.

  3. Total‑variation regularization (L_TV) – promotes spatial smoothness and reduces noise amplification. Hyperparameters α and β balance the influence of the regularization terms and are tuned empirically.
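The three terms combine into a single scalar objective. A NumPy sketch under the assumptions above, where `forward_solver` is a stand-in for the MoM field computation and the weights `alpha`/`beta` are illustrative, not the paper's values:

```python
import numpy as np

def pdnn_loss(eps_pred, E_meas, forward_solver, alpha=1e-3, beta=1e-4):
    """Three-term PDNN loss (sketch). `forward_solver` stands in for the
    MoM computation of scattered fields from the predicted permittivity."""
    # 1) Data fidelity: L1 mismatch between measured and recomputed fields.
    E_pred = forward_solver(eps_pred)
    L_data = np.abs(E_pred - E_meas).sum()
    # 2) Lower bound: penalize Re(eps) dropping below the background (= 1).
    L_bound = np.maximum(1.0 - eps_pred.real, 0.0).sum()  # ReLU(1 - Re eps)
    # 3) Total variation: anisotropic TV on the real part, for smoothness.
    L_tv = (np.abs(np.diff(eps_pred.real, axis=0)).sum()
            + np.abs(np.diff(eps_pred.real, axis=1)).sum())
    return L_data + alpha * L_bound + beta * L_tv
```

Note that the bound and TV penalties vanish for a physically admissible, piecewise-constant prediction that matches the data, so the loss is zero exactly when all three constraints are satisfied.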

During each iteration, the current prediction is fed to the PDNN, which produces an updated contrast map. MoM is then used to compute the scattered fields for this updated map, the loss is evaluated, and back‑propagation updates the network weights. This iterative scheme mirrors classical ISP solvers but benefits from the global, physics‑consistent optimization provided by the neural network, reducing the risk of getting trapped in local minima.
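The iterate-evaluate-update cycle can be illustrated with a heavily simplified toy: a random linear operator `A` replaces the MoM forward map, and subgradient descent on the pixels replaces back-propagation through the network weights. Everything here (operator, sizes, step size) is an assumption for demonstration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 128
A = rng.standard_normal((n_meas, n_pix))       # stand-in for the MoM forward map
x_true = np.zeros(n_pix); x_true[20:30] = 0.8  # true contrast profile
y_meas = A @ x_true                            # "measured" scattered fields

x = np.zeros(n_pix)                            # initial contrast estimate
losses = []
for _ in range(200):
    r = A @ x - y_meas                         # recompute fields, form residual
    losses.append(np.abs(r).sum())             # L1 data-fidelity loss
    grad = A.T @ np.sign(r)                    # subgradient of the L1 loss
    x -= 1e-3 * grad                           # update (network weights in PDNN)
    x = np.maximum(x, 0.0)                     # enforce contrast >= background
```

In the actual scheme the update step adjusts the PDNN weights rather than the pixels directly, which is what provides the global, physics-consistent regularization described above.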

A key practical contribution is the identification of a reduced imaging sub‑region to lower computational cost. An initial U‑Net reconstruction (trained offline on synthetic data) is processed with statistical thresholding, morphological closing, and dilation to generate a binary mask (B_Dilation) that encloses all potential scatterer voxels. Subsequent MoM calculations are confined to this mask, cutting the number of grid points by roughly 70 % without sacrificing reconstruction fidelity.
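The threshold-close-dilate pipeline maps directly onto standard morphology routines. A sketch using `scipy.ndimage`, where the threshold rule (mean plus one standard deviation of the contrast) and the dilation count are assumed values, not taken from the paper:

```python
import numpy as np
from scipy import ndimage

def extract_subregion(eps_init, n_sigma=1.0, dilate_iter=3):
    """Binary mask enclosing likely scatterer cells (sketch).
    eps_init: coarse initial reconstruction (e.g. from the offline U-Net)."""
    contrast = np.abs(eps_init - 1.0)              # deviation from background
    thresh = contrast.mean() + n_sigma * contrast.std()
    mask = contrast > thresh                       # statistical thresholding
    mask = ndimage.binary_closing(mask)            # fill small holes
    mask = ndimage.binary_dilation(mask, iterations=dilate_iter)  # margin
    return mask
```

The dilation adds a safety margin so that scatterer cells sitting just below the threshold are still enclosed by the mask.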

The authors evaluate the method on a 0.15 m × 0.15 m domain discretized into 64 × 64 cells, illuminated by 36 transmitters and observed by 36 receivers placed on a circle of radius 20 λ at 4 GHz. Four test scenarios are considered: a square object, two adjacent circles, a concentric ring, and a composite lossy dielectric. Compared with Born‑Iterative, DBIM, and DeepNIS, PDNN achieves lower relative errors (often below 10 %) and markedly better edge and interior reconstruction, especially for high‑contrast and lossy materials.
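The summary quotes relative errors without defining the metric; one common choice, assumed here for concreteness, is the normalized Frobenius-norm error of the reconstructed permittivity map:

```python
import numpy as np

def relative_error(eps_pred, eps_true):
    """Relative reconstruction error (a common definition; the paper's
    exact metric is not spelled out in this summary)."""
    return np.linalg.norm(eps_pred - eps_true) / np.linalg.norm(eps_true)
```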

Experimental validation with a real‑world antenna array confirms the method’s robustness to measurement noise and its computational advantage: reconstruction times are reduced by a factor of five relative to conventional iterative solvers while maintaining high image quality.

In conclusion, the paper makes three major contributions: (1) a fully physics‑driven learning scheme that eliminates dependence on large training datasets, (2) a loss formulation that simultaneously enforces data fidelity, physical lower bounds, and spatial smoothness, and (3) an efficient sub‑region extraction technique that dramatically reduces the cost of forward field evaluations. Remaining challenges include the still‑significant MoM cost per iteration and the reliance on an initial U‑Net estimate for sub‑region detection. Future work may integrate faster forward solvers (e.g., FFT‑based or GPU‑accelerated methods) and develop adaptive region‑updating strategies to move toward real‑time, fully data‑independent electromagnetic imaging.

