FNWoS: Fractional Neural Walk-on-Spheres Methods for High-Dimensional PDEs Driven by $α$-stable Lévy Process on Irregular Domains


In this paper, we develop a highly parallel and derivative-free fractional neural walk-on-spheres method (FNWoS) for solving high-dimensional fractional Poisson equations on irregular domains. We first propose a simplified fractional walk-on-spheres (FWoS) scheme that replaces the high-dimensional normalized weight integral with a constant weight and adopts a correspondingly simpler sampling density, substantially reducing per-trajectory cost. To mitigate the slow convergence of standard Monte Carlo sampling, FNWoS is then proposed by integrating this simplified FWoS estimator, derived from the Feynman-Kac representation, with a neural network surrogate. By amortizing sampling effort over the entire domain during training, FNWoS achieves more accurate evaluation at arbitrary query points with dramatically fewer trajectories than classical FWoS. To further enhance efficiency in regimes where the fractional order $α$ is close to 2 and trajectories become excessively long, we introduce a truncated path strategy with a prescribed maximum step count. Building on this, we propose a buffered supervision mechanism that caches training pairs and progressively refines their Monte Carlo targets during training, removing the need to precompute a highly accurate training set and yielding the buffered fractional neural walk-on-spheres method (BFNWoS). Extensive numerical experiments, including tests on irregular domains and problems with dimensions up to $1000$, demonstrate the accuracy, scalability, and computational efficiency of the proposed methods.


💡 Research Summary

The paper introduces a novel computational framework—Fractional Neural Walk‑on‑Spheres (FNWoS) and its buffered variant (BFNWoS)—for solving high‑dimensional fractional Poisson equations driven by symmetric α‑stable Lévy processes on irregular domains. Classical deterministic discretizations (finite element, finite difference, spectral) struggle with the non‑local nature of the fractional Laplacian and become infeasible beyond three dimensions. Existing stochastic approaches based on the Feynman‑Kac representation, particularly the walk‑on‑spheres (WoS) method, avoid the curse of dimensionality but still suffer from high per‑trajectory cost, because each step requires evaluating a normalized weight integral and sampling from a complex density, both of which depend on the dimension and the radius of the inscribed sphere.

The authors first propose a simplified WoS (FWoS) scheme. Two key simplifications are made: (i) the normalized weight ωₖ, originally an integral involving the Green's function, is replaced by a closed‑form constant that depends only on the sphere radius, the space dimension d, and the fractional order α; (ii) the sampling density Qₖ for the next jump point is replaced by a much simpler distribution that can be generated by drawing a uniform direction on the unit sphere and a jump distance J obtained via the inverse regularized incomplete Beta function. These changes eliminate high‑dimensional integrals and reduce each step to a cheap spherical‑coordinate computation, dramatically lowering the per‑trajectory cost.
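The simplified per-step sampling described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it combines a uniform direction on the unit sphere with the jump-distance formula J(ξ, r, α) = r·I⁻¹(1−ξ; α/2, 1−α/2) quoted later in the summary, using `scipy.special.betaincinv` for the inverse regularized incomplete Beta function. The function name and example parameters are hypothetical.

```python
import numpy as np
from scipy.special import betaincinv

def sample_jump(r, alpha, dim, rng):
    """Sketch of one simplified FWoS sampling step:
    a uniform direction on the unit sphere in `dim` dimensions, and a
    jump distance J = r * I^{-1}(1 - xi; alpha/2, 1 - alpha/2), with
    I^{-1} the inverse regularized incomplete Beta function."""
    u = rng.standard_normal(dim)
    u /= np.linalg.norm(u)        # uniform direction on the unit sphere
    xi = rng.uniform()            # xi ~ U(0, 1)
    jump = r * betaincinv(alpha / 2, 1 - alpha / 2, 1 - xi)
    return jump * u               # displacement to the next point

rng = np.random.default_rng(0)
step = sample_jump(r=0.5, alpha=1.5, dim=5, rng=rng)
```

Note that each step costs only one Gaussian draw, one uniform draw, and one inverse-Beta evaluation, independent of any high-dimensional integral.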

Building on this, the FNWoS method integrates the simplified FWoS estimator into a deep neural network surrogate vθ(x). The network is trained by minimizing the mean‑squared error between its output and the FWoS estimate over a set of randomly sampled domain points. Because Monte Carlo sampling is performed only during training, its cost is amortized over the whole domain: after training, evaluating the solution at any query point requires only a single forward pass through the network, with no additional stochastic simulation. Empirical results show that, for a 10‑dimensional unit ball, FNWoS achieves the same L² error as the classical FWoS while using two orders of magnitude fewer trajectories (≈100 vs. ≈10 000).
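The amortization idea can be illustrated with a minimal stand-in: fit a surrogate to noisy per-point targets (here synthetic noise on a known 1D function plays the role of FWoS estimates), so that later queries need only a forward pass. The paper uses a deep network; this sketch uses a tiny one-hidden-layer numpy MLP with manual gradient descent, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D stand-in: "true" solution u(x) = sin(pi x); noisy Monte Carlo
# targets play the role of per-point FWoS estimates.
xs = rng.uniform(-1, 1, size=(256, 1))
targets = np.sin(np.pi * xs) + 0.1 * rng.standard_normal(xs.shape)

# One-hidden-layer MLP surrogate v_theta(x), trained by MSE.
W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(2000):
    h = np.tanh(xs @ W1 + b1)            # hidden layer
    v = h @ W2 + b2                      # surrogate output v_theta(x)
    err = v - targets
    losses.append(float(np.mean(err ** 2)))
    gv = 2 * err / xs.shape[0]           # dL/dv for the mean-squared error
    gW2 = h.T @ gv; gb2 = gv.sum(0)
    gh = (gv @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = xs.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2       # plain gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
```

After training, evaluating the surrogate at any query point is a single matrix-vector pass; no further trajectories are simulated.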

When α approaches 2, Lévy trajectories become very long, leading to excessive computational effort. To address this, the authors introduce a truncation strategy that caps the maximum number of steps per trajectory (Kₘₐₓ). Truncation introduces bias, so they develop a buffered supervision mechanism: during training, pairs (x, u_FWoS(x)) are stored in a buffer and progressively refined as the network improves. Early epochs use low‑precision Monte‑Carlo targets; later epochs replace them with higher‑precision estimates computed on the fly. This eliminates the need for a pre‑computed high‑accuracy dataset and yields the Buffered Fractional Neural WoS (BFNWoS) algorithm, which retains accuracy even with aggressive truncation.
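One simple way to realize such progressive refinement is a running mean over per-epoch estimates, so each cached target sharpens as training proceeds. The following sketch is an assumption about the mechanism, not the paper's code; the class and variable names are hypothetical, and a known function with synthetic noise again stands in for truncated-path Monte Carlo estimates.

```python
import numpy as np

class SupervisionBuffer:
    """Minimal sketch of buffered supervision: cache query points once,
    then refine each cached Monte Carlo target as a running mean over
    the cheap (truncated-path) estimates produced in later epochs."""
    def __init__(self, points):
        self.points = points                   # cached training points x
        self.targets = np.zeros(len(points))   # running-mean MC targets
        self.counts = np.zeros(len(points))

    def refine(self, new_estimates):
        # Incremental mean: target <- target + (estimate - target)/(n+1),
        # so early low-precision targets are progressively sharpened.
        self.counts += 1
        self.targets += (new_estimates - self.targets) / self.counts
        return self.targets

rng = np.random.default_rng(1)
buf = SupervisionBuffer(points=rng.uniform(-1, 1, size=8))
true_values = np.sin(buf.points)               # stand-in for u(x)
for epoch in range(400):
    noisy = true_values + 0.2 * rng.standard_normal(8)  # per-epoch estimate
    refined = buf.refine(noisy)
```

The running mean drives the variance of each cached target down as 1/n across epochs, which is why no pre-computed high-accuracy dataset is needed.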

The paper provides rigorous probabilistic representations for both regular balls (Lemma 2.1, Theorem 2.1) and arbitrary irregular domains (Theorem 2.6), showing how the simplified weight and density lead to a conditional expectation formula that separates boundary and interior contributions. The authors also detail the spherical‑coordinate implementation, the jump‑distance formula J(ξ, r, α) = r·I⁻¹(1−ξ; α/2, 1−α/2), and the handling of a thin boundary layer ε to avoid excessive stopping steps near the domain boundary.
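Putting the jump-distance formula, the ε boundary layer, and the Kₘₐₓ cap together, a single truncated trajectory can be sketched as below. The unit ball is used for concreteness (the paper's Theorem 2.6 covers irregular domains via inscribed spheres); `fwos_path` and its default parameters are illustrative, and `scipy.special.betaincinv` implements I⁻¹.

```python
import numpy as np
from scipy.special import betaincinv

def fwos_path(x0, alpha, R=1.0, eps=1e-3, k_max=500, rng=None):
    """Sketch of one truncated FWoS trajectory in the ball |x| < R.
    Stops when the walker enters the thin eps boundary layer (to avoid
    excessive stopping steps near the boundary) or after k_max steps."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.array(x0, dtype=float)
    for k in range(k_max):
        r = R - np.linalg.norm(x)       # radius of the inscribed sphere
        if r < eps:
            return x, k, True           # reached the eps boundary layer
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        xi = rng.uniform()
        # Jump distance J(xi, r, alpha) = r * I^{-1}(1 - xi; a/2, 1 - a/2)
        jump = r * betaincinv(alpha / 2, 1 - alpha / 2, 1 - xi)
        x = x + jump * u
    return x, k_max, False              # truncated at k_max steps

x_end, steps, hit_layer = fwos_path(np.zeros(10), alpha=1.5,
                                    rng=np.random.default_rng(2))
```

Trajectories truncated at Kₘₐₓ are the ones whose targets the buffered supervision mechanism later corrects.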

Extensive numerical experiments validate the approach:

  • 2D irregular domains: FNWoS and BFNWoS achieve 10⁻³ L² error with 30–50 % fewer trajectories than classical WoS.
  • 10D unit ball: FNWoS reaches comparable accuracy with only 100 trajectories, whereas standard WoS needs ≈10 000.
  • 100D and 1000D: Using 8 GPUs, BFNWoS solves the problem in under 30 minutes with 10⁻³ error, demonstrating near‑linear scaling with the number of GPUs.
  • α close to 2: Truncation at Kₘₐₓ = 500 incurs negligible error increase, confirming the effectiveness of the buffered supervision in correcting truncation bias.

Overall, the paper makes three major contributions: (1) a mathematically sound simplification of the WoS scheme that removes high‑dimensional integrals; (2) an amortized neural‑network surrogate that turns a Monte‑Carlo estimator into a fast, query‑ready model; and (3) a practical truncation‑plus‑buffering strategy that maintains accuracy while drastically reducing runtime for near‑local regimes. The methods are fully parallelizable, GPU‑friendly, and applicable to domains of arbitrary geometry and dimension, opening a pathway for solving a broad class of non‑local PDEs in scientific computing, finance, and physics where traditional methods are infeasible.

