Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov-Arnold Networks

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please consult the original arXiv source.

Kolmogorov–Arnold Networks (KANs) have recently emerged as a structured alternative to standard MLPs, yet a principled theory for their training dynamics, generalization, and privacy properties remains limited. In this paper, we analyze gradient descent (GD) for training two-layer KANs and derive general bounds that characterize their training dynamics, generalization, and utility under differential privacy (DP). As a concrete instantiation, we specialize our analysis to logistic loss under an NTK-separable assumption, where we show that polylogarithmic network width suffices for GD to achieve an optimization rate of order $1/T$ and a generalization rate of order $1/n$, with $T$ denoting the number of GD iterations and $n$ the sample size. In the private setting, we characterize the noise required for $(ε,δ)$-DP and obtain a utility bound of order $\sqrt{d}/(nε)$ (with $d$ the input dimension), matching the classical lower bound for general convex Lipschitz problems. Our results imply that polylogarithmic width is not only sufficient but also necessary under differential privacy, revealing a qualitative gap between non-private (sufficiency only) and private (necessity also emerges) training regimes. Experiments further illustrate how these theoretical insights can guide practical choices, including network width selection and early stopping.


💡 Research Summary

This paper develops a unified theoretical framework for training two‑layer Kolmogorov‑Arnold Networks (KANs) with gradient descent (GD) and its differentially private variant (DP‑GD). KANs differ from standard multilayer perceptrons (MLPs) by assigning a learnable univariate function to each edge; in the concrete model studied, these edge functions are represented by B‑spline bases and a bounded activation σ (e.g., tanh or sigmoid). The authors collect all spline coefficients into a single parameter vector Θ∈ℝ^{m p (d+1)} where m is the hidden width, p the spline degree, and d the input dimension.
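To make this parameterization concrete, the following is a minimal sketch of a two‑layer KAN forward pass (not the paper's exact construction): each edge carries a learnable spline expansion evaluated on a tanh‑squashed input, B‑spline bases are computed by the standard Cox–de Boor recursion, and the 1/√m output scaling, the knot placement, and all names are illustrative assumptions.

```python
import numpy as np

def bspline_basis(x, knots, j, p):
    """Cox-de Boor recursion: value of the j-th degree-p B-spline at x."""
    if p == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = 0.0
    if knots[j + p] != knots[j]:
        left = (x - knots[j]) / (knots[j + p] - knots[j]) \
            * bspline_basis(x, knots, j, p - 1)
    right = 0.0
    if knots[j + p + 1] != knots[j + 1]:
        right = (knots[j + p + 1] - x) / (knots[j + p + 1] - knots[j + 1]) \
            * bspline_basis(x, knots, j + 1, p - 1)
    return left + right

def kan_forward(x, theta, knots, m, p):
    """Toy two-layer KAN: hidden unit i sums learnable edge functions
    phi_{ij}(x_j) = sum_k theta[i, j, k] * B_k(tanh(x_j)); the output
    averages hidden units with a 1/sqrt(m) scaling (NTK-style convention)."""
    d = x.shape[0]
    z = np.tanh(x)                      # bounded activation maps inputs into the knot range
    num_basis = theta.shape[2]
    out = 0.0
    for i in range(m):
        for j in range(d):
            for k in range(num_basis):
                out += theta[i, j, k] * bspline_basis(z[j], knots, k, p)
    return out / np.sqrt(m)

# tiny usage: d = 3 inputs, m = 4 hidden units, degree-2 splines, 5 basis functions
rng = np.random.default_rng(0)
d, m, p, num_basis = 3, 4, 2, 5
knots = np.linspace(-1.001, 1.001, num_basis + p + 1)   # uniform knots (illustrative)
theta = rng.normal(size=(m, d, num_basis))              # plays the role of Θ
y = kan_forward(rng.normal(size=d), theta, knots, m, p)
```

Collecting `theta` into one flat vector recovers a single parameter vector of the kind the paper optimizes; the Gaussian initialization above mirrors the random initialization assumed in the analysis.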

The analysis is built on three pillars: (1) optimization, (2) generalization, and (3) differential privacy. A key structural assumption is NTK‑separability: the expected Neural Tangent Kernel (NTK) Gram matrix has a positive minimum eigenvalue γ>0. This condition is weaker than the positive‑definiteness assumptions used in earlier KAN work and holds in many realistic settings.
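NTK‑separability can be probed numerically on a sample: form the Gram matrix H with entries H_ij = ⟨∇_Θ f(x_i), ∇_Θ f(x_j)⟩ at initialization and check that its smallest eigenvalue is positive. The sketch below uses a generic two‑layer tanh network as a stand‑in for the KAN model; the architecture, sizes, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 3, 200
X = rng.normal(size=(n, d))                 # n training inputs
W = rng.normal(size=(m, d))                 # first-layer weights at initialization
a = rng.choice([-1.0, 1.0], size=m)         # fixed second layer

# Jacobian of f(x; W) = (1/sqrt(m)) * sum_r a_r * tanh(w_r . x) w.r.t. W,
# one row per sample: J[i] = d f(x_i) / d vec(W).
pre = X @ W.T                               # (n, m) pre-activations
deriv = a * (1.0 - np.tanh(pre) ** 2)       # (n, m): a_r * tanh'(w_r . x_i)
J = (deriv[:, :, None] * X[:, None, :]).reshape(n, m * d) / np.sqrt(m)

H = J @ J.T                                 # empirical NTK Gram matrix (n, n)
lam_min = np.linalg.eigvalsh(H)[0]          # gamma > 0 <=> the sample is NTK-separable
```

For distinct inputs in general position, `lam_min` is positive with high probability over the initialization, which is the empirical counterpart of the γ > 0 condition.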

Optimization. The authors introduce a “reference‑point complexity” C_S(Θ*) = 2ηT L_S(Θ*) + ‖Θ(0)−Θ*‖², which combines the training loss of a reference solution Θ* with the squared distance from initialization. Using a self‑bounded convex loss (logistic loss satisfies the required properties) and a constant step size η, they prove that if the network width satisfies m ≥ polylog(n,T) (polylogarithmic in the sample size n and iteration count T), then with high probability over random Gaussian initialization the training loss after T steps obeys

 L_S(Θ_T) ≤ O(1/(γ² η T)).

Thus a polylogarithmic width already yields the classic O(1/T) convergence rate, contrasting with prior work that required polynomial widths.
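The flavor of this result can be reproduced on a toy problem. Below, plain GD with a constant step size is run on the logistic loss over a linearly separable dataset; logistic regression stands in for the linearized (NTK‑regime) KAN, and all constants and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)                       # separable labels in {-1, +1}

def logistic_loss(w):
    # the self-bounded convex loss appearing in the analysis
    return float(np.mean(np.logaddexp(0.0, -y * (X @ w))))

eta, T = 0.5, 2000
w = np.zeros(d)                               # deterministic init for the toy run
losses = []
for t in range(T):
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))     # sigmoid(-margin_i) per sample
    grad = -(X.T @ (y * s)) / n               # gradient of the empirical logistic loss
    w -= eta * grad
    losses.append(logistic_loss(w))
```

With a step size below the inverse smoothness constant the training loss decreases monotonically, and on separable data it keeps shrinking toward zero, consistent with the O(1/(γ² η T)) bound above.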

Generalization. Leveraging the fact that GD stays close to the initialization (implicit regularization), the authors bound the Rademacher complexity of the trajectory‑constrained hypothesis class. Under the same NTK‑separability and the condition η T ≳ n, they obtain a fast‑rate population risk bound

 L(Θ_T) ≤ O(1/(γ⁴ n)),

which is a 1/n rate (up to constants and logarithmic factors) rather than the usual O(1/√n). The bound explicitly depends on the separability margin γ but not on the width beyond the polylog requirement, confirming that increasing m beyond polylogarithmic size yields diminishing returns for generalization.

Differential Privacy. For DP‑GD, Gaussian noise ξ_k∼𝒩(0,σ²I) is added at each iteration. The authors perform a trajectory‑wise sensitivity analysis that incorporates the NTK‑separability condition and the polylog width bound. They show that choosing

 σ ≈ (γ √d)/(n ε)

ensures (ε,δ)‑DP for the whole training process. Under this calibration and with η T ≈ γ² n ε √d, the averaged population risk over T steps satisfies

 E[(1/T) ∑_{k=1}^{T} L(Θ_k)] ≤ O(√d/(n ε)),

up to γ‑dependent constants and logarithmic factors. This matches the classical √d/(nε) lower bound for general convex Lipschitz problems under differential privacy, and it is in this private regime that the polylogarithmic width requirement becomes necessary as well as sufficient — the qualitative gap between private and non‑private training highlighted in the abstract.
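Schematically, DP‑GD amounts to clipping each gradient to bound its sensitivity and then perturbing it with isotropic Gaussian noise before the update. The sketch below is a generic illustration, not the paper's exact algorithm; the clipping step, the toy objective, and all names are assumptions.

```python
import numpy as np

def dp_gd(grad_fn, theta0, eta, T, clip, sigma, rng):
    """Noisy GD: theta_{k+1} = theta_k - eta * (clip(g_k) + xi_k),
    with xi_k ~ N(0, sigma^2 I). Calibrating sigma on the order of
    gamma * sqrt(d) / (n * eps), as above, gives (eps, delta)-DP for
    the whole trajectory (illustrative calibration)."""
    theta = theta0.copy()
    for _ in range(T):
        g = grad_fn(theta)
        norm = np.linalg.norm(g)
        if norm > clip:                     # bound the per-step sensitivity
            g = g * (clip / norm)
        theta = theta - eta * (g + rng.normal(0.0, sigma, size=theta.shape))
    return theta

# usage on a toy quadratic: minimize ||theta||^2 / 2 under small noise
rng = np.random.default_rng(0)
theta_T = dp_gd(lambda th: th, np.ones(5), eta=0.1, T=200,
                clip=10.0, sigma=0.01, rng=rng)
```

With small noise the iterates contract toward the minimizer; larger σ (i.e., smaller ε or n) inflates the stationary error, which is exactly the privacy–utility trade‑off the √d/(nε) bound quantifies.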
