Deep backward schemes for high-dimensional nonlinear PDEs

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

We propose new machine learning schemes for solving high-dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously the solution and its gradient by deep neural networks. These approximations are performed at each time step from the minimization of loss functions defined recursively by backward induction. The methodology is extended to variational inequalities arising in optimal stopping problems. We analyze the convergence of the deep learning schemes and provide error estimates in terms of the universal approximation of neural networks. Numerical results show that our algorithms give very good results up to dimension 50 (and likely beyond), for both PDEs and variational inequality problems. For the resolution of PDEs, our results are very similar to those obtained by the recent method of \cite{weinan2017deep} when the latter converges to the right solution and does not diverge. Numerical tests indicate that the proposed methods do not get stuck in poor local minima, as can be the case with the algorithm designed in \cite{weinan2017deep}, and no divergence is experienced. The only limitation seems to be the inability of the considered deep neural networks to represent a solution whose structure is too complex in high dimension.


💡 Research Summary

The paper introduces two novel deep‑learning algorithms, DBDP1 and DBDP2, for solving high‑dimensional nonlinear partial differential equations (PDEs) and related variational inequalities (optimal stopping problems). The authors start from the classical probabilistic representation of a semilinear PDE via a backward stochastic differential equation (BSDE). After discretising the forward diffusion and the BSDE with an Euler scheme on a time grid, they depart from the “global‑loss” strategy of the Deep BSDE method (Weinan et al., 2017) and instead adopt a backward dynamic programming (BDP) perspective.
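The Euler discretisation of the forward diffusion mentioned above can be sketched as follows. This is a minimal NumPy illustration, not code from the paper; the function name `euler_paths` and its signature are assumptions, and for simplicity `sigma` is taken scalar or component-wise (a full matrix diffusion would need a matrix-vector product).

```python
import numpy as np

def euler_paths(mu, sigma, x0, T, N, M, d, rng=None):
    """Simulate M Euler-Maruyama paths of dX = mu(t, X) dt + sigma(t, X) dW
    on a uniform grid of N steps over [0, T], in dimension d.
    Returns the paths X (N+1, M, d) and the Brownian increments dW (N, M, d)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    X = np.empty((N + 1, M, d))
    X[0] = x0
    dW = np.sqrt(dt) * rng.standard_normal((N, M, d))
    for i in range(N):
        t = i * dt
        # One Euler step; sigma is assumed scalar or component-wise here.
        X[i + 1] = X[i] + mu(t, X[i]) * dt + sigma(t, X[i]) * dW[i]
    return X, dW

# Example: 10-dimensional driftless unit-volatility diffusion (Brownian motion).
X, dW = euler_paths(lambda t, x: 0.0 * x, lambda t, x: 1.0,
                    x0=0.0, T=1.0, N=20, M=1024, d=10)
```

These simulated paths and increments are the training data on which the local losses at each time step are evaluated.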

In the BDP framework the problem is decomposed into N local optimisation tasks, one for each time step. At step i the algorithm seeks neural-network approximations of the value function $u(t_i,\cdot)$ and, optionally, its gradient $\sigma^\top \nabla u(t_i,\cdot)$. The loss at step i measures the discrepancy between the network-based prediction at time $t_i$ and the one-step-ahead Euler update that uses the already-computed approximation at time $t_{i+1}$.

DBDP1 employs two independent feed-forward networks $U_i$ and $Z_i$ to learn the value and the gradient simultaneously. The loss at step i is the expected squared error of the one-step Euler update of the BSDE,
$$\hat{L}_i(\theta) = \mathbb{E}\Big|\,\widehat{U}_{i+1}(X_{t_{i+1}}) - U_i(X_{t_i}) + f\big(t_i, X_{t_i}, U_i(X_{t_i}), Z_i(X_{t_i})\big)\,\Delta t_i - Z_i(X_{t_i})\cdot\Delta W_i \Big|^2,$$
where $\widehat{U}_{i+1}$ is the approximation obtained (and then frozen) at the previous backward stage. DBDP2, by contrast, uses a single network for the value function and obtains the gradient $\sigma^\top \nabla u$ by automatic differentiation of that network.
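One local optimisation stage of this backward scheme can be sketched in PyTorch. This is an illustrative sketch, not the authors' code: the names (`dbdp1_loss`, `U_next`, etc.) are hypothetical, `U_i` and `Z_i` are the trainable value and gradient networks at time $t_i$, and `U_next` is the frozen network trained at the following time step.

```python
import torch

def dbdp1_loss(U_i, Z_i, U_next, f, t_i, X_i, X_next, dW_i, dt):
    """One-step DBDP1-style loss at time t_i (hypothetical names).

    U_i, Z_i : trainable networks for u(t_i, .) and sigma^T grad u(t_i, .)
    U_next   : network for u(t_{i+1}, .), already trained and held fixed
    X_i, X_next : simulated forward diffusion at t_i and t_{i+1}, shape (M, d)
    dW_i     : Brownian increments over [t_i, t_{i+1}], shape (M, d)
    """
    u = U_i(X_i)                       # value prediction, shape (M, 1)
    z = Z_i(X_i)                       # gradient prediction, shape (M, d)
    with torch.no_grad():
        target = U_next(X_next)        # frozen one-step-ahead approximation
    # Euler update of the BSDE: Y_{t_{i+1}} ~ Y_{t_i} - f dt + Z . dW
    pred = u - f(t_i, X_i, u, z) * dt + (z * dW_i).sum(dim=1, keepdim=True)
    # Mean squared discrepancy, minimised over the parameters of U_i and Z_i
    return ((target - pred) ** 2).mean()
```

In a full run this loss would be minimised by a stochastic gradient method at each step i, proceeding backward from the terminal condition, with the optimised `U_i` then serving as the frozen target of step i-1.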

