Deep learning methods for inverse problems using connections between proximal operators and Hamilton-Jacobi equations

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Inverse problems are important mathematical problems that seek to recover model parameters from noisy data. Since inverse problems are often ill-posed, they require regularization or incorporation of prior information about the underlying model or unknown variables. Proximal operators, ubiquitous in nonsmooth optimization, are central to this because they provide a flexible and convenient way to encode priors and build efficient iterative algorithms. They have also recently become key to modern machine learning methods, e.g., in plug-and-play methods with learned denoisers and in deep neural architectures for learning the priors of proximal operators. The latter was developed partly due to recent work characterizing proximal operators of nonconvex priors as subdifferentials of convex potentials. In this work, we propose to leverage connections between proximal operators and Hamilton-Jacobi partial differential equations (HJ PDEs) to develop novel deep learning architectures for learning the prior. In contrast to other existing methods, we learn the prior directly without recourse to inverting the prior after training. We present several numerical results that demonstrate the efficiency of the proposed method in high dimensions.


💡 Research Summary

The paper introduces a novel deep‑learning framework for learning the regularization prior in inverse problems by exploiting the well‑known mathematical relationship between proximal operators and Hamilton‑Jacobi (HJ) partial differential equations. Traditional proximal‑based methods either learn the proximal map itself or require a post‑training inversion step to retrieve the underlying prior, which can be computationally costly and analytically opaque. In contrast, the authors propose to recover the prior directly from the viscosity solution of an HJ equation, thereby eliminating the need for any inversion after training.
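For context, the classical connection the summary alludes to is the Hopf–Lax (Moreau envelope) formula: the value function of the proximal minimization solves a first-order HJ equation with the prior as initial data, and its gradient encodes the proximal map. In standard notation (with \(J\) the prior and \(t>0\) the proximal step),

\[
u(x,t) \;=\; \min_{v\in\mathbb{R}^n}\left\{ J(v) + \frac{1}{2t}\|x-v\|^2 \right\}
\]

is (for suitable \(J\)) the viscosity solution of

\[
\frac{\partial u}{\partial t}(x,t) + \frac{1}{2}\|\nabla_x u(x,t)\|^2 = 0, \qquad u(x,0) = J(x),
\]

and the spatial gradient recovers the proximal operator via \(\nabla_x u(x,t) = \big(x - \operatorname{prox}_{tJ}(x)\big)/t\). This is the standard identity from viscosity-solution theory, stated here for orientation; the paper's specific architecture built on it is not reproduced in this summary.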

The authors begin by recalling that for a proper lower semicontinuous function \(J:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}\) the proximal operator with step \(t>0\) is defined as

\[
\operatorname{prox}_{tJ}(x) \;=\; \operatorname*{arg\,min}_{u\in\mathbb{R}^n} \left\{ \frac{1}{2t}\|x-u\|^2 + J(u) \right\}.
\]
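As a concrete illustration of this definition (not the paper's learned-prior method), the proximal operator of the \(\ell_1\) norm \(J(u)=\|u\|_1\) has the well-known closed form of soft-thresholding. The sketch below, using an assumed helper name `prox_l1`, checks the closed form against a brute-force grid minimization of the defining objective:

```python
import numpy as np

def prox_l1(x, t):
    """Proximal operator of J(u) = ||u||_1 with step t: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Brute-force check in 1D: minimize (1/(2t)) * (x - u)^2 + |u| over a fine grid.
x, t = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 600001)          # step 1e-5
objective = (0.5 / t) * (x - grid) ** 2 + np.abs(grid)
u_star = grid[np.argmin(objective)]

print(prox_l1(x, t))  # closed form: sign(1.3) * max(1.3 - 0.5, 0) = 0.8
print(u_star)         # grid minimizer agrees up to grid resolution
```

The agreement between the two values is exactly the definition above specialized to the \(\ell_1\) prior; in the paper's setting, the prior \(J\) is instead represented by a neural network rather than given in closed form.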

