On the randomized SVD in infinite dimensions
Randomized methods, such as the randomized SVD (singular value decomposition) and Nyström approximation, are an effective way to compute low-rank approximations of large matrices. Motivated by applications to operator learning, Boullé and Townsend (FoCM, 2023) recently proposed an infinite-dimensional extension of the randomized SVD for a Hilbert-Schmidt operator $A$ that invokes randomness through a Gaussian process with a covariance operator $K$. While the non-isotropy introduced by $K$ allows one to incorporate prior information on $A$, an unfortunate choice may lead to unfavorable performance and large constants in the error bounds. In this work, we introduce a novel infinite-dimensional extension of the randomized SVD that does not require such a choice and enjoys error bounds that match those for the finite-dimensional case. Our extension implicitly uses isotropic random vectors, reflecting a choice commonly made in the finite-dimensional case. In fact, the theoretical results of this work show how the usual randomized SVD applied to a discretization of $A$ approaches our infinite-dimensional extension as the discretization gets refined, both in terms of error bounds and the Wasserstein distance. We also present and analyze a novel extension of the Nyström approximation for self-adjoint positive semi-definite trace class operators.
💡 Research Summary
The paper addresses the problem of extending randomized low‑rank approximation techniques, in particular the randomized singular value decomposition (SVD) and the Nyström method, from finite‑dimensional matrices to infinite‑dimensional Hilbert–Schmidt operators. Recent work by Boullé and Townsend (FoCM 2023) introduced an infinite‑dimensional randomized SVD that draws random test functions from a Gaussian process with a prescribed covariance operator K on the input space. While this non‑isotropic choice allows the incorporation of prior information about the target operator A, it also introduces interaction constants between K and the right singular vectors of A. An unfavourable choice of K can dramatically inflate error bounds and increase sampling cost.
The authors propose a different infinite‑dimensional formulation that completely avoids the need for a user‑chosen covariance K. Their key observation is that, for a Hilbert–Schmidt operator A : H₁ → H₂, the columns of the sketch matrix Y = AΩ in the finite‑dimensional setting are independent Gaussian vectors with mean zero and covariance AA*. This property can be lifted to the Hilbert‑space setting by defining a centered Gaussian measure μ_{AA*} on H₂ with covariance operator AA*. Sampling independent vectors y₁,…,y_{k+p} from μ_{AA*} yields exactly the same statistical structure as in the finite‑dimensional case, but the randomness now enters isotropically: the Gaussian coefficients of the samples in the basis of left singular vectors of A are i.i.d. standard normal. Algorithm 1 implements this idea: draw the sketch Y from N_{H₂}(0, AA*), compute an orthonormal basis Q for its range, apply A* to Q, and return the factorized approximation Â = Q(A*Q)* = QQ*A. Although the algorithm appears idealised because sampling from μ_{AA*} requires knowledge of the singular vectors of A, the authors later show that a practical discretisation of A combined with standard Gaussian test matrices reproduces the same output in the limit of mesh refinement.
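The finite‑dimensional analogue of this sketch‑and‑orthonormalize procedure is easy to state in code. The sketch below is illustrative only (it is not the paper's implementation, and the helper name `randomized_svd_sketch` is ours); for a matrix A, drawing Y = AΩ with a standard Gaussian Ω produces exactly the columns-with-covariance-AA* structure described above, so Â = QQᵀA plays the role of Q(A*Q)* in the operator setting.

```python
import numpy as np

def randomized_svd_sketch(A, k, p=5, seed=None):
    """Finite-dimensional analogue of the paper's Algorithm 1:
    sketch Y = A @ Omega (columns ~ N(0, A A^T)), orthonormalize,
    and return Q and B = Q^T A so that A ~ Q @ B has rank <= k + p."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))  # isotropic Gaussian test matrix
    Y = A @ Omega                            # sketch: columns ~ N(0, A A^T)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for range(Y)
    B = Q.T @ A                              # project A onto that range
    return Q, B

# Usage on a matrix with rapidly decaying singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40)) @ np.diag(0.5 ** np.arange(40)) \
    @ rng.standard_normal((40, 80))
Q, B = randomized_svd_sketch(A, k=10, seed=1)
err = np.linalg.norm(A - Q @ B)  # small relative to ||A|| for decaying spectra
```

Because the spectrum of this test matrix decays geometrically, the rank-15 approximation error is tiny relative to the norm of A, matching the tail-sum error bounds discussed next.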
Error analysis mirrors the classical proofs for the finite‑dimensional randomized SVD. Expressing each sketch column as y_j = ∑_i ω_{ij} σ_i u_i, where {u_i} are the left singular vectors and {σ_i} the singular values of A, the coefficient matrix Ω = (ω_{ij}) is a standard Gaussian matrix. Theorems 1 and 2 establish high‑probability and expectation bounds for the Hilbert–Schmidt norm and the operator norm of the approximation error that are identical to those of Halko, Martinsson, and Tropp (2011).
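For orientation, the expectation bound in the Hilbert–Schmidt norm takes the familiar Halko–Martinsson–Tropp form (sketched here for target rank k and oversampling p ≥ 2; the precise statements and constants are those of the paper's Theorems 1 and 2):

$$
\mathbb{E}\,\|A - QQ^*A\|_{\mathrm{HS}} \;\le\; \left(1 + \frac{k}{p-1}\right)^{1/2} \left(\sum_{j>k} \sigma_j^2\right)^{1/2},
$$

where σ_{k+1} ≥ σ_{k+2} ≥ ⋯ are the tail singular values of A, so the approximation is within a modest factor of the best rank‑k error.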