Enforcing hidden physics in physics-informed neural networks


Physics-informed neural networks (PINNs) represent a new paradigm for solving partial differential equations (PDEs) by integrating physical laws into the learning process of neural networks. However, ensuring that such frameworks fully reflect the physical structure embedded in the governing equations remains an open challenge, particularly for maintaining robustness across diverse scientific problems. In this work, we address this issue by introducing a simple, generalized, yet robust irreversibility-regularized strategy that enforces hidden physical laws as soft constraints during training, thereby recovering the missing physics associated with irreversible processes in the conventional PINN. This approach ensures that the learned solutions consistently respect the intrinsic one-way nature of irreversible physical processes. Across a wide range of benchmarks spanning traveling wave propagation, steady combustion, ice melting, corrosion evolution, and crack growth, we observe substantial performance improvements over the conventional PINN, demonstrating that our regularization scheme reduces predictive errors by more than an order of magnitude, while requiring only minimal modification to existing PINN frameworks.


💡 Research Summary

The paper addresses a fundamental limitation of physics‑informed neural networks (PINNs): while PINNs enforce the explicit terms of a governing partial differential equation (PDE) through a residual loss, they do not automatically respect implicit physical constraints such as the Second Law of Thermodynamics or other irreversibility conditions that are not directly encoded in the PDE. To bridge this gap, the authors propose a simple, general, and computationally inexpensive regularization strategy that enforces hidden irreversibility as a soft constraint during training.

The core idea is to quantify the directional or temporal monotonicity that characterizes many irreversible processes. For a solution field $u(\beta)$ defined over a generalized coordinate vector $\beta = (\beta_1,\dots,\beta_n)$ (including space, time, and possibly parameters), a sign $s_k \in \{+1,-1\}$ is assigned to each coordinate direction. The irreversibility condition reads $s_k\,\partial u/\partial \beta_k \ge 0$ for all points in the domain. The authors define a point-wise violation measure
$$V_k(\beta;\theta)=\mathrm{ReLU}\!\left(-s_k\,\frac{\partial \hat u(\beta;\theta)}{\partial \beta_k}\right),$$
where $\hat u$ is the neural network approximation with parameters $\theta$. This measure is zero when the condition is satisfied and positive otherwise. By averaging $V_k$ over a set of collocation points (the same points used for the PDE residual), they obtain an irreversibility loss $L_{\text{irr}}$. The total training loss becomes

$$L_{\text{total}}(\theta) = L_{\text{PINN}}(\theta) + \lambda\, L_{\text{irr}}(\theta),$$

where $L_{\text{PINN}}$ is the conventional PINN loss (PDE residual plus boundary and initial-condition terms) and $\lambda$ weights the soft constraint.
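The irreversibility term above can be computed with automatic differentiation. The following is a minimal sketch, not the authors' implementation: it assumes a PyTorch setup, and the function name `irreversibility_loss` and its arguments are illustrative. It evaluates $V_k = \mathrm{ReLU}(-s_k\,\partial\hat u/\partial\beta_k)$ at a batch of collocation points and averages over points and directions.

```python
import torch

def irreversibility_loss(model, beta, signs):
    """Mean violation of the monotonicity condition s_k * du/dbeta_k >= 0.

    model: maps an (N, n) batch of coordinates to an (N, 1) field prediction
    beta:  (N, n) tensor of collocation points (space, time, parameters)
    signs: length-n sequence of +1/-1, one sign per coordinate direction
    """
    beta = beta.clone().requires_grad_(True)
    u = model(beta)  # (N, 1) predicted solution field
    # Gradient of the scalar sum gives du/dbeta_k row-wise: shape (N, n)
    grads = torch.autograd.grad(u.sum(), beta, create_graph=True)[0]
    s = torch.tensor(signs, dtype=grads.dtype)
    # V_k = ReLU(-s_k * du/dbeta_k): zero wherever monotonicity holds
    violation = torch.relu(-s * grads)
    return violation.mean()
```

In training, this term would simply be added to the usual PINN objective, e.g. `loss = pde_loss + lam * irreversibility_loss(model, collocation_pts, signs)`, reusing the same collocation points as the residual loss.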

