Optimal learning rates for Kernel Conjugate Gradient regression
We prove rates of convergence in the statistical sense for kernel-based least squares regression using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is directly related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. The rates depend on two key quantities: first, on the regularity of the target regression function, and second, on the intrinsic dimensionality of the data mapped into the kernel space. Lower bounds on attainable rates depending on these two quantities were established in earlier literature, and we obtain upper bounds for the considered method that match these lower bounds (up to a log factor) if the true regression function belongs to the reproducing kernel Hilbert space. If this assumption is not fulfilled, we obtain similar convergence rates provided additional unlabeled data are available. The order of the learning rates matches state-of-the-art results that were recently obtained for least squares support vector machines and for linear regularization operators.
💡 Research Summary
The paper investigates the statistical convergence properties of kernel‑based least‑squares regression when the solution is computed by a conjugate‑gradient (CG) algorithm and regularization is achieved through early stopping. The authors view the number of CG iterations as an implicit regularization parameter: each iteration expands the Krylov subspace and yields a solution that approximates the Tikhonov‑regularized estimator (T+λI)⁻¹T, but without the need to solve a linear system for a prescribed λ. By stopping the iteration at an appropriately chosen time t, one can control the bias‑variance trade‑off and prevent over‑fitting.
Two central quantities drive the analysis. First, the regularity of the target regression function f* is expressed by a source condition f* = T^r g with r∈(0,1] and ‖g‖_{L2} bounded; r measures the smoothness of f* relative to the kernel covariance operator T. When r=1 the target lies in the reproducing kernel Hilbert space (RKHS) itself. Second, the intrinsic dimensionality of the data in the feature space is captured by the effective dimension N(λ) = Tr[(T+λI)⁻¹T], which counts how many eigendirections of T are significant at the regularization scale λ.
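The effective dimension N(λ) = Tr[(T+λI)⁻¹T] can be estimated empirically from the eigenvalues of the kernel matrix, since the eigenvalues of K/n approximate those of the covariance operator T. The sketch below uses assumed Gaussian toy data and is an empirical plug-in estimate, not the population quantity used in the paper's bounds.

```python
import numpy as np

def effective_dimension(K, lam):
    """Plug-in estimate of N(lambda) = Tr[(T + lam I)^{-1} T] using the
    spectrum of K/n as a stand-in for the spectrum of T."""
    n = K.shape[0]
    evals = np.linalg.eigvalsh(K) / n
    evals = np.clip(evals, 0.0, None)     # guard tiny negative round-off
    return float(np.sum(evals / (evals + lam)))

# hypothetical toy data with a Gaussian kernel
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2)

# N(lambda) grows as lambda shrinks, bounded above by n
n_loose = effective_dimension(K, 1e-1)
n_tight = effective_dimension(K, 1e-4)
```

Each spectral term λᵢ/(λᵢ+λ) lies in (0,1) and decreases in λ, so N(λ) interpolates between 0 and the rank of K; its growth rate as λ→0 is exactly the "intrinsic dimensionality" that, together with the source exponent r, determines the attainable learning rates.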