On Non-Linear operators for Geometric Deep Learning
This work studies operators mapping vector and scalar fields defined over a manifold $\mathcal{M}$, and which commute with its group of diffeomorphisms $\text{Diff}(\mathcal{M})$. We prove that in the case of scalar fields $L^p_\omega(\mathcal{M},\mathbb{R})$, those operators correspond to point-wise non-linearities, recovering and extending known results on $\mathbb{R}^d$. In the context of Neural Networks defined over $\mathcal{M}$, this indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields $L^p_\omega(\mathcal{M},T\mathcal{M})$, we show that those operators are solely the scalar multiplications. This indicates that $\text{Diff}(\mathcal{M})$ is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of $\mathcal{M}$.
💡 Research Summary
The paper investigates the class of nonlinear operators acting on scalar‑ and vector‑valued fields defined over a smooth Riemannian manifold 𝓜 that commute with the full diffeomorphism group Diff(𝓜). The authors formalize the problem by considering the Banach spaces Lᵖ_ω(𝓜,ℝ) and Lᵖ_ω(𝓜,T𝓜) (the latter being the space of Lᵖ‑integrable sections of the tangent bundle) and define the natural action of a diffeomorphism φ∈Diff(𝓜) on these spaces: for scalars, (L_φ f)(x) = f(φ(x)); for vector fields, (L_φ f)(x) = (dφ_x)⁻¹ f(φ(x)), where the inverse differential maps the vector f(φ(x)) back into the tangent space T_x𝓜. The central question is: which nonlinear maps M satisfy M∘L_φ = L_φ∘M for every φ?
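As a concrete illustration (ours, not the paper's), the sketch below instantiates both actions in the simplest case 𝓜 = ℝ, where a diffeomorphism is a smooth bijection with nonvanishing derivative and dφ_x is just multiplication by φ′(x). The names `phi`, `L_scalar`, and `L_vector` are hypothetical.

```python
import numpy as np

# A diffeomorphism of R: phi'(x) = 1 + 0.5/cosh(x)^2 > 0, so phi is invertible.
phi = lambda x: x + 0.5 * np.tanh(x)
dphi = lambda x: 1.0 + 0.5 / np.cosh(x)**2  # derivative of phi

def L_scalar(f):
    """Scalar action: (L_phi f)(x) = f(phi(x))."""
    return lambda x: f(phi(x))

def L_vector(X):
    """Vector action: (L_phi X)(x) = (dphi_x)^(-1) X(phi(x))."""
    return lambda x: X(phi(x)) / dphi(x)

x = np.linspace(-2.0, 2.0, 5)
print(L_scalar(np.sin)(x))  # sin(phi(x))
print(L_vector(np.cos)(x))  # cos(phi(x)) / phi'(x)
```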
Two main theorems answer this question. Theorem 1 (scalar case) states that any Lipschitz continuous operator M: Lᵖ_ω(𝓜,ℝ) → Lᵖ_ω(𝓜,ℝ) that commutes with all diffeomorphisms must be a pointwise non‑linearity: there exists a Lipschitz function ρ:ℝ→ℝ such that (Mf)(x) = ρ(f(x)) for almost every x ∈ 𝓜. Theorem 2 (vector case) shows that the only such operators on Lᵖ_ω(𝓜,T𝓜) are the scalar multiplications Mf = λf with λ ∈ ℝ.
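The easy direction of Theorem 1 (that pointwise nonlinearities do commute with every L_φ) is simple to sanity-check numerically; the theorem's content is that nothing else does. The following sketch is our own illustration on 𝓜 = ℝ, not code from the paper; the operators `M_point` and `M_shift` are hypothetical examples.

```python
import numpy as np

phi = lambda x: x + 0.5 * np.tanh(x)        # a diffeomorphism of R (phi' > 0)
L = lambda f: (lambda x: f(phi(x)))         # scalar action (L_phi f)(x) = f(phi(x))

rho = np.tanh                               # a Lipschitz pointwise nonlinearity
M_point = lambda f: (lambda x: rho(f(x)))   # pointwise operator (M f)(x) = rho(f(x))
M_shift = lambda f: (lambda x: f(x + 1.0))  # non-pointwise operator: a translation

f, x = np.sin, np.linspace(-2.0, 2.0, 5)

# The pointwise nonlinearity commutes with the action of this (indeed any) phi:
print(np.allclose(M_point(L(f))(x), L(M_point(f))(x)))  # True
# The translation commutes with shifts, but not with a generic diffeomorphism:
print(np.allclose(M_shift(L(f))(x), L(M_shift(f))(x)))  # False
```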
Comments & Academic Discussion
Loading comments...
Leave a Comment