Minimal Subsampled Rank-1 Lattices for Multivariate Approximation with Optimal Convergence Rate
In this paper we show error bounds for randomly subsampled rank-1 lattices. We pay particular attention to the ratio of the size of the subset to the size of the initial lattice, which is decisive for the computational complexity. In the special case of Korobov spaces, we achieve the optimal polynomial sampling complexity whilst having the smallest initial lattice possible. We further characterize the frequency index set for which a given lattice is reconstructing by using the reciprocal of the worst-case error achieved using the lattice in question. This connects existing approaches used in proving error bounds for lattices. We make detailed comments on the implementation and test different algorithms using the subsampled lattice in numerical experiments.
💡 Research Summary
This paper investigates the use of randomly subsampled rank‑1 lattices for multivariate function approximation, focusing on achieving optimal convergence rates while minimizing the size of the underlying full lattice. The authors begin by reviewing the classical rank‑1 lattice algorithm, which approximates Fourier coefficients using all n points of a lattice generated by a vector z∈ℤᵈ, and its kernel‑based counterpart. Both methods share the same worst‑case error bound, expressed in terms of a quantity Sₙ(z) that can be minimized by component‑by‑component (CBC) constructions.
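The lattice rule above reduces every Fourier‑coefficient approximation to a single length‑n FFT of the sample values, because h·x_i ≡ i·(h·z mod n)/n (mod 1) on a rank‑1 lattice. A minimal NumPy sketch of this idea (the function names are illustrative, not taken from the paper):

```python
import numpy as np

def lattice_points(z, n):
    """Rank-1 lattice: x_i = (i*z mod n)/n for i = 0, ..., n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] % n) / n

def lattice_fourier_coeffs(f_vals, z, n, freqs):
    """Approximate Fourier coefficients c_h by the lattice rule.
    Since h.x_i = i*(h.z mod n)/n (mod 1), every coefficient is an
    entry of one length-n FFT of the samples."""
    fhat = np.fft.fft(f_vals) / n            # (1/n) sum_i f(x_i) e^{-2 pi i i k / n}
    idx = (np.asarray(freqs) @ np.asarray(z)) % n  # k = h.z mod n per frequency h
    return fhat[idx]

# usage: for f(x) = e^{2 pi i h0.x}, the rule recovers coefficient 1 at h0
z, n = np.array([1, 3]), 16
X = lattice_points(z, n)
h0 = np.array([1, 2])
f_vals = np.exp(2j * np.pi * (X @ h0))
coeffs = lattice_fourier_coeffs(f_vals, z, n, np.array([[1, 2], [0, 0]]))
```

Here the exactness at h0 relies on the lattice being reconstructing for the frequencies involved, which is the topic of the next paragraph.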
A central contribution is the rigorous connection between the worst‑case error of any algorithm that uses the lattice points and the set of frequencies for which the lattice possesses the “reconstructing property” (i.e., no two distinct frequencies have the same inner product with z modulo n). Theorem 3.3 shows that, given a lattice X, the frequencies satisfying r(h) < (e_wor‑app(A_X))⁻² automatically enjoy this property. Consequently, the frequency index set can be derived directly from the achievable error, linking the two previously separate approaches (CBC construction of reconstructing lattices and error‑based vector selection).
The paper then turns to subsampling: selecting a subset J⊂{0,…,n−1} and forming the point set X_J. To guarantee a unique least‑squares solution, the authors require |J|≥|B| and that the full lattice X be reconstructing on the frequency set B. They propose a least‑squares approximation S_{X_J}^B, which can be solved efficiently using FFT‑based diagonalization of the kernel matrix. Probabilistic error bounds for random J are derived, and the analysis is specialized to weighted Korobov spaces (dominating mixed smoothness α>½).
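A minimal sketch of this subsampling step under the stated assumptions: draw a random subset J, restrict the lattice to X_J, and solve the least‑squares system for the coefficients supported on B. For clarity the paper's FFT‑accelerated solver is replaced here by a dense `np.linalg.lstsq` call; all helper names are illustrative:

```python
import numpy as np

def lattice_points(z, n):
    """Rank-1 lattice: x_i = (i*z mod n)/n for i = 0, ..., n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] % n) / n

def subsampled_least_squares(f, z, n, B, m, rng=None):
    """Least-squares approximation S_{X_J}^B on a random subsample
    X_J of size m, assuming m >= |B| and the full lattice is
    reconstructing for B."""
    rng = np.random.default_rng(rng)
    J = rng.choice(n, size=m, replace=False)       # random subset J
    XJ = lattice_points(z, n)[J]
    A = np.exp(2j * np.pi * (XJ @ np.asarray(B).T))  # Fourier matrix on X_J
    coeffs, *_ = np.linalg.lstsq(A, f(XJ), rcond=None)
    return coeffs

# usage: a trigonometric polynomial supported on B is recovered exactly
B = np.array([[0, 0], [1, 2]])
f = lambda pts: 1.0 + np.exp(2j * np.pi * (pts @ np.array([1, 2])))
coeffs = subsampled_least_squares(f, [1, 3], 16, B, m=8, rng=0)
```

Exact recovery here reflects the uniqueness condition above: with |J| ≥ |B| and a reconstructing full lattice, the restricted Fourier matrix retains full column rank.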
The main theoretical result (Theorem 5.3 and Corollary 5.4) establishes that for any ε∈(0,α) one can achieve e_wor‑app(S_{X_J}^B) ≲ |J|^{−α+ε}, while the size of the original lattice satisfies |J|^{2√(1−ε/α)} ≲ n ≲ |J|^{2/√(1−ε/α)}. Thus the sampling error |J|^{−α+ε} is essentially optimal (matching the lower bound for unrestricted point sets), and the initial lattice size stays close to the benchmark n ≍ |J|², with both exponents approaching 2 as ε→0. This improves on earlier work, in which n had to grow with a substantially larger exponent in |J| that depended on the smoothness.
Implementation details are discussed, including fast CBC construction of z, FFT‑based kernel inversion, and practical strategies for choosing J (e.g., deterministic selections of size ⌈√n log √n⌉). Numerical experiments in dimensions d=2,3,5 and smoothness levels α=1,2,3 confirm the predicted convergence rates and demonstrate significant reductions in computational time and memory compared to using the full lattice. Experiments also illustrate that violating the reconstructing property leads to a dramatic loss of accuracy, underscoring the theoretical conditions.
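The CBC construction mentioned above can be sketched naively: greedily choose each component of z to minimize the Korobov worst‑case error given the components already fixed. This brute‑force version costs O(d·n²) per lattice, whereas the fast CBC referenced in the paper achieves O(d·n log n); the kernel normalization below (via the Bernoulli polynomial B₂) is one common convention for Korobov spaces, not necessarily the paper's:

```python
import numpy as np

def cbc_korobov(n, d):
    """Naive component-by-component construction of a generating
    vector z for an n-point rank-1 lattice, minimizing
    e^2(z) = -1 + (1/n) sum_k prod_j (1 + 2 pi^2 B_2({k z_j / n}))
    in one common Korobov normalization. O(d n^2), for illustration
    only; fast CBC does the same greedy step in O(n log n)."""
    B2 = lambda x: x**2 - x + 1/6        # Bernoulli polynomial B_2
    k = np.arange(n)
    prod = np.ones(n)                    # running product over chosen components
    cands = np.arange(1, n)              # candidate components 1, ..., n-1
    z = []
    for _ in range(d):
        # kernel values for every candidate at once: shape (n-1, n)
        vals = 1 + 2 * np.pi**2 * B2((np.outer(cands, k) % n) / n)
        errs = -1 + (prod * vals).mean(axis=1)
        best = int(cands[np.argmin(errs)])
        z.append(best)
        prod = prod * (1 + 2 * np.pi**2 * B2((best * k % n) / n))
    return z

# usage: construct a small 2-dimensional generating vector
z = cbc_korobov(16, 2)
```

In practice one would restrict the candidates to integers coprime with n and use the FFT‑based fast CBC; the greedy structure is the same.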
In conclusion, the paper provides a comprehensive framework that unifies error‑based lattice design with frequency‑set reconstruction, shows that minimal subsampled rank‑1 lattices can attain optimal approximation rates, and offers concrete algorithms and empirical evidence. Future directions suggested include extensions to non‑periodic domains, adaptive subsampling schemes, and strategies to mitigate the curse of dimensionality in very high‑dimensional settings.