The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions
Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields, yet they often suffer from spectral bias and the computational expense of non-convex optimization. We introduce the Vekua Layer (VL), a differentiable spectral method grounded in the classical theory of Generalized Analytic Functions. By restricting the hypothesis space to the kernel of the governing differential operator – specifically utilizing Harmonic and Fourier-Bessel bases – the VL transforms the learning task from iterative gradient descent to a strictly convex least-squares problem solved via linear projection. We evaluate the VL against Sinusoidal Representation Networks (SIRENs) on homogeneous elliptic Partial Differential Equations (PDEs). Our results demonstrate that the VL achieves machine precision ($\text{MSE} \approx 10^{-33}$) on exact reconstruction tasks and exhibits superior stability in the presence of incoherent sensor noise ($\text{MSE} \approx 0.03$), effectively acting as a physics-informed spectral filter. Furthermore, we show that the VL enables “holographic” extrapolation of global fields from partial boundary data via analytic continuation, a capability absent in standard coordinate-based approximations.
💡 Research Summary
The paper introduces the Vekua Layer (VL), a novel differentiable spectral module for implicit neural representations (INRs) that leverages the classical theory of Generalized Analytic Functions to solve homogeneous elliptic partial differential equations (PDEs) exactly. By embedding basis functions that lie in the kernel of the governing differential operator—harmonic polynomials for Laplace’s equation and Fourier‑Bessel functions for the Helmholtz equation—the VL restricts the hypothesis space to physically admissible solutions. Consequently, learning reduces from a non‑convex, gradient‑based optimization problem to a strictly convex linear least‑squares projection.
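The kernel property can be checked numerically: a single Fourier‑Bessel mode should annihilate the Helmholtz operator up to discretization error. Below is a minimal sketch using a central finite‑difference Laplacian; the helper name, evaluation point, and step size are illustrative, not from the paper.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n

def helmholtz_residual(f, x, y, k, h=1e-4):
    """Central finite-difference estimate of (Laplacian + k^2) f at (x, y)."""
    lap = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
           - 4.0 * f(x, y)) / h**2
    return lap + k**2 * f(x, y)

# A single Fourier-Bessel mode J_n(k r) cos(n theta), which lies in the
# kernel of the Helmholtz operator (Laplacian + k^2).
k, n = 20.0, 3
mode = lambda x, y: jv(n, k * np.hypot(x, y)) * np.cos(n * np.arctan2(y, x))
res = helmholtz_residual(mode, 0.4, 0.3, k)  # ~0 up to finite-difference error
```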
The authors formulate the approximation as a linear combination of basis functions with learnable coefficients w. Given noisy boundary observations D = {(xᵢ, uᵢ)}, they assemble the feature matrix Φ by evaluating each basis function at the observation points and solve w* = V Σ_τ⁻¹ Uᵀ u via truncated singular value decomposition (TSVD), where Σ_τ⁻¹ inverts only the singular values above a threshold τ. This projection naturally filters out components orthogonal to the physical manifold, providing built‑in denoising.
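A minimal NumPy/SciPy sketch of this projection for the Helmholtz case might look as follows; the function names, the wavenumber k = 5, the basis size N = 4, and the truncation threshold τ are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n

def fourier_bessel_features(xy, k, N):
    """Feature matrix Phi whose B = 2N + 1 columns all solve the Helmholtz
    equation: J_n(k r) * {cos, sin}(n theta), for n = 0..N."""
    r = np.hypot(xy[:, 0], xy[:, 1])
    theta = np.arctan2(xy[:, 1], xy[:, 0])
    cols = [jv(0, k * r)]
    for n in range(1, N + 1):
        cols.append(jv(n, k * r) * np.cos(n * theta))
        cols.append(jv(n, k * r) * np.sin(n * theta))
    return np.stack(cols, axis=1)  # shape (M, 2N + 1)

def tsvd_solve(Phi, u, tau=1e-10):
    """Least-squares coefficients w* = V Sigma_tau^{-1} U^T u, discarding
    singular values below tau * sigma_max."""
    U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    s_inv = np.where(s > tau * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ u))

# Recover the coefficients of a synthetic Helmholtz field from scattered samples.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(300, 2))
Phi = fourier_bessel_features(xy, k=5.0, N=4)
w_true = rng.standard_normal(Phi.shape[1])
w_hat = tsvd_solve(Phi, Phi @ w_true)
```

Because the problem is a strictly convex least-squares projection, `tsvd_solve` returns the global optimum in one shot, with no iterative training loop.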
Four experiments benchmark VL against a standard Sinusoidal Representation Network (SIREN) across distinct challenges:
- Spectral Bias (Helmholtz, k = 20) – VL recovers the exact Bessel‑function solution with machine‑precision error (MSE ≈ 10⁻³³), while SIREN suffers phase and amplitude errors (MSE ≈ 0.4).
- Holographic Extrapolation – Training on only two sides of a square domain, VL uniquely determines the full harmonic field (u = x² − y²) via analytic continuation (MSE ≈ 10⁻³¹). SIREN fails dramatically on unseen boundaries (MSE ≈ 1.81).
- Robustness to Sensor Noise – Adding 20 % Gaussian noise to Helmholtz boundary data, VL still reconstructs the clean interior field (MSE ≈ 0.03) thanks to orthogonality between noise and Bessel modes. SIREN overfits the noise (MSE ≈ 0.65).
- Complex Chaotic Field – Approximating a superposition of 30 random Bessel modes with an under‑parameterized basis (N = 15), VL instantly finds the optimal projection (MSE ≈ 10⁻⁹), whereas SIREN gets trapped in local minima (MSE ≈ 0.6).
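A toy version of the holographic-extrapolation experiment can be sketched with a harmonic polynomial basis (the real and imaginary parts of zⁿ, all of which are exactly harmonic). The target field u = x² − y² and the two-sided boundary sampling follow the paper's setup; the degree N = 3 and the 50-point grids are illustrative assumptions.

```python
import numpy as np

def harmonic_features(xy, N):
    """Columns 1, Re z^n, Im z^n for n = 1..N -- all exactly harmonic."""
    z = xy[:, 0] + 1j * xy[:, 1]
    cols = [np.ones(len(z))]
    for n in range(1, N + 1):
        zn = z**n
        cols.extend([zn.real, zn.imag])
    return np.stack(cols, axis=1)

# Observe u = x^2 - y^2 on only two sides of the square [-1, 1]^2 ...
t = np.linspace(-1.0, 1.0, 50)
sides = np.concatenate([
    np.stack([-np.ones_like(t), t], axis=1),  # left side,   x = -1
    np.stack([t, -np.ones_like(t)], axis=1),  # bottom side, y = -1
])
u_obs = sides[:, 0]**2 - sides[:, 1]**2

# ... project onto the harmonic basis (minimum-norm least squares) ...
w, *_ = np.linalg.lstsq(harmonic_features(sides, N=3), u_obs, rcond=None)

# ... and evaluate on the unseen right side, x = +1.
right = np.stack([np.ones_like(t), t], axis=1)
u_pred = harmonic_features(right, N=3) @ w
u_true = right[:, 0]**2 - right[:, 1]**2
```

Because every basis function is globally harmonic, fitting the boundary data implicitly continues the field analytically into the rest of the domain, which is what a purely local coordinate network like SIREN cannot do.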
Complexity analysis shows that VL’s dominant cost is the SVD of Φ, scaling as O(M·B²) with B = 2N + 1. Since N ≪ M in practice, inference runs in milliseconds (≈ 0.001 s), delivering a ≈ 10⁴‑fold speedup over SIREN’s ≈ 20 s training time.
The discussion acknowledges that VL’s strength—embedding exact physical priors—also limits its applicability to linear, homogeneous elliptic PDEs where analytic bases are known. Non‑linear or variable‑coefficient problems cannot be solved by VL alone, but the layer can serve as a powerful spectral component within larger architectures (e.g., DeepONets).
In conclusion, the Vekua Layer demonstrates that for a well‑defined class of PDEs, moving the physics from the loss function into the network architecture eliminates spectral bias, guarantees global optimality, provides intrinsic noise filtering, and achieves orders‑of‑magnitude computational gains. Future work aims to extend the framework to non‑linear equations, integrate with operator learning models, and design composite bases for multi‑physics scenarios.