Asymptotic expansions for spectral convergence of compact self-adjoint operators on general spectral subsets, with application to kernel Gram matrices
We study the spectral convergence of compact, self-adjoint operators on a separable Hilbert space under operator norm perturbations, and derive asymptotic expansions for their eigenvalues and eigenprojections. Our analysis focuses on eigenvalues indexed by a general subset, with minimal restrictions on their selection. The usefulness of the provided expansions is illustrated by an application to kernel Gram matrices, for which we derive concentration inequalities as well as weak convergence results that, in contrast to the existing literature, rely primarily on assumptions on the kernel that are easy to check.
💡 Research Summary
The paper investigates the spectral convergence of compact, self‑adjoint operators on a separable Hilbert space when they are perturbed in operator norm. Building on classical analytic perturbation theory (Rellich, Kato) and recent statistical extensions, the authors develop asymptotic expansions for both eigenvalues and eigenprojections that are valid for any finite index set J of the spectrum, rather than being restricted to isolated single eigenvalues. The only structural requirement on J is a positive “outer spectral gap” γ_J = min_{k∈J,ℓ∉J}|λ_k−λ_ℓ|>0, which guarantees that the chosen eigenvalues are separated from the rest of the spectrum.
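The outer spectral gap is a simple quantity to compute for a given spectrum and index set. The sketch below (a toy illustration with a hypothetical spectrum, not taken from the paper) shows that a tight cluster can have a small internal gap while its outer gap γ_J stays bounded away from zero:

```python
import numpy as np

# Hypothetical spectrum of a compact self-adjoint operator (decaying eigenvalues).
lam = np.array([1.0, 0.5, 0.45, 0.2, 0.1, 0.05])

def outer_spectral_gap(lam, J):
    """Outer spectral gap gamma_J = min over k in J, l not in J of |lam_k - lam_l|."""
    J = set(J)
    inside = [lam[k] for k in J]
    outside = [lam[l] for l in range(len(lam)) if l not in J]
    return min(abs(a - b) for a in inside for b in outside)

# J = {1, 2} picks the cluster {0.5, 0.45}: the internal gap is only 0.05,
# but the outer gap to the rest of the spectrum is 0.25.
print(outer_spectral_gap(lam, {1, 2}))  # -> 0.25
```

Since the projection bound below depends only on this outer gap, grouping a tight cluster into a single index set J is exactly what keeps the bound non-vacuous.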
The main technical contribution is Theorem 3.3, which shows that the difference between the perturbed and original eigenprojections can be written as a leading linear term Ŝ_J plus a remainder bounded by 8K·(‖Ĥ−H‖_op/γ_J)², where K is the number of distinct eigenvalues within J. This bound depends solely on the outer gap γ_J and remains stable even when the eigenvalues inside J form a tight cluster, a situation where traditional bounds that rely on internal gaps become vacuous.
For eigenvalues, two regimes are treated. In the “well‑separated” case (each distinct eigenvalue in J has a sufficiently large internal gap γ_{J_j}), the vector of eigenvalue differences (λ̂_k−λ_k)_{k∈J_j} is approximated by the eigenvalues of the matrix (⟨(Ĥ−H)ψ_k,ψ_ℓ⟩)_{k,ℓ∈J_j}. Theorem 3.6 provides a precise error bound that involves the internal gaps and the magnitudes of the eigenvalues themselves. In the “clustered” case, where internal gaps are small, the authors focus on the sum Σ_{k∈J}(λ̂_k−λ_k). Theorem 3.7 shows that this sum can be approximated by Σ_{k∈J}⟨(Ĥ−H)ψ_k,ψ_k⟩, with an error that depends only on the outer gap γ_J, the size of J, and the largest eigenvalue in the set. This reflects the cancellation of fluctuations within a cluster and yields a robust approximation even for highly degenerate spectra.
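The clustered regime can be checked numerically on a finite-dimensional toy example (a sketch under assumed data, not the paper's setting): for a symmetric matrix H with a tight eigenvalue cluster and a small symmetric perturbation E, the summed eigenvalue shifts over the cluster should match the first-order approximation Σ_{k∈J}⟨Eψ_k,ψ_k⟩ up to second order in ‖E‖:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy self-adjoint operator H with a tight eigenvalue cluster {0.50, 0.49}.
lam = np.array([1.0, 0.50, 0.49, 0.1])
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]  # random orthonormal eigenbasis
H = Q @ np.diag(lam) @ Q.T

# Small symmetric perturbation E; the perturbed operator is H_hat = H + E.
E = rng.standard_normal((4, 4))
E = 1e-3 * (E + E.T) / 2
H_hat = H + E

# eigvalsh returns ascending eigenvalues; sort descending to match lam.
lam_hat = np.sort(np.linalg.eigvalsh(H_hat))[::-1]
J = [1, 2]  # indices of the cluster

# Summed shifts over the cluster vs. the linear approximation.
exact = sum(lam_hat[k] - lam[k] for k in J)
approx = sum(Q[:, k] @ E @ Q[:, k] for k in J)
print(exact, approx)  # agree up to a remainder of order ||E||^2 / gamma_J
```

Note that the individual shifts λ̂_k−λ_k inside the cluster are poorly approximated (the internal gap 0.01 is comparable to ‖E‖), but their sum is stable, which is the content of the clustered-case result.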
The theoretical framework is then applied to kernel Gram matrices. Assuming a compact metric space M with probability measure P and a continuous, symmetric, positive‑semi‑definite kernel h satisfying Mercer’s conditions, the integral operator H on L²(P) has an orthonormal eigenbasis {ϕ_k} with eigenvalues {λ_k}. The associated reproducing kernel Hilbert space (RKHS) ℋ is constructed, and the population covariance operator on ℋ is introduced.
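The Gram-matrix setting can be illustrated with a minimal simulation (a sketch with an assumed Gaussian kernel and P = Uniform[0,1], not the paper's examples): the eigenvalues of the normalized Gram matrix (1/n)·(h(X_i, X_j))_{i,j} stabilize as n grows, consistent with convergence to the spectrum of the integral operator H:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x, y, sigma=0.5):
    """Gaussian kernel: continuous, symmetric, positive semi-definite."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# Sample X_1, ..., X_n i.i.d. from P = Uniform[0, 1]; the eigenvalues of the
# normalized Gram matrix approximate those of the integral operator on L^2(P).
tops = {}
for n in (100, 1000):
    X = rng.uniform(0, 1, n)
    K = h(X[:, None], X[None, :]) / n
    tops[n] = np.sort(np.linalg.eigvalsh(K))[::-1][:3]
    print(n, tops[n])
```

The paper's contribution in this setting is to quantify such convergence via concentration inequalities and weak convergence results under easily checkable kernel assumptions.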