Optimization of gridding algorithms for FFT by vector optimization
The Fast Fourier Transform (FFT) is widely used in applications such as MRI, CT, and interferometry; however, because it requires uniformly sampled data, practical use on non-uniform samples depends on gridding techniques. The performance of these algorithms strongly depends on the choice of the gridding kernel, with the first prolate spheroidal wave function (PSWF) regarded as optimal. This work redefines kernel optimality through the lens of vector optimization (VO), introducing a rigorous framework that characterizes optimal kernels as Pareto-efficient solutions of an error shape operator. We establish the continuity of this operator, study the existence of solutions, and propose a novel methodology to construct kernels tailored to a desired target error function. The approach is implemented numerically via interior-point optimization. Comparative experiments demonstrate that the proposed kernels outperform both the PSWF and state-of-the-art methods (MIRT-NUFFT) in specific regions of interest, achieving orders-of-magnitude improvements in mean absolute error. These results confirm the potential of VO-based kernel design to provide customized accuracy profiles aligned with application-specific requirements. Future research will extend this framework to multidimensional cases and relative error minimization, with potential integration of machine learning for adaptive target error selection.
💡 Research Summary
The paper addresses the fundamental problem of reconstructing non‑uniformly sampled data using the Fast Fourier Transform (FFT) by focusing on the design of the gridding kernel and its accompanying compensation (deapodization) function. While the first prolate spheroidal wave function (PSWF) has long been regarded as the globally optimal kernel, the authors argue that many practical applications—such as magnetic resonance imaging (MRI), computed tomography (CT), and radio‑interferometry—require high accuracy only in specific regions of the spectrum or field‑of‑view. Consequently, a notion of optimality that is tailored to a desired error profile is needed.
The authors begin by formalising the gridding process. A non-uniform signal $u_n$ sampled at times $t_n$ is convolved with a compact-support kernel $C$ to produce a uniformly sampled intermediate signal $u^\ast_k$. An FFT is then applied, and the result is multiplied by a compensation (deapodization) function $h$ to obtain the approximate spectrum $y^\ast(x)$. The squared error between the true non-uniform DFT $y(x)$ and its approximation $y^\ast(x)$ can then be bounded in terms of $C$ and $h$; this bound underlies the error shape operator whose Pareto-efficient solutions the paper identifies as optimal kernels.
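The spread-FFT-deapodize pipeline summarised above can be sketched in a few lines of NumPy. The sketch below uses a truncated Gaussian as a stand-in for the kernel $C$ (the paper instead designs $C$ by vector optimization, and the classical baseline is the PSWF); the oversampling factor, kernel half-width, and Gaussian width heuristic are illustrative Dutt-Rokhlin-style choices, not values taken from the paper. Sample locations are mapped to $[0, 2\pi)$ rather than $[0,1)$ purely for notational convenience.

```python
import numpy as np

def gridding_nufft(u, x, M, sigma=2, msp=12):
    """Approximate f(k) = sum_j u[j] * exp(-1j*k*x[j]) for k = -M/2 .. M/2-1,
    with non-uniform sample locations x[j] in [0, 2*pi).

    sigma : oversampling factor of the intermediate uniform grid.
    msp   : kernel half-width in fine-grid points (truncation of the Gaussian).
    """
    Mr = sigma * M                                        # oversampled grid length
    tau = np.pi * msp / (M**2 * sigma * (sigma - 0.5))    # Gaussian width heuristic
    h = 2 * np.pi / Mr                                    # fine-grid spacing
    ftau = np.zeros(Mr, dtype=complex)
    # Spreading step: convolve the point samples with the compact kernel C,
    # accumulating onto the nearest 2*msp+1 uniform grid points (periodically).
    for uj, xj in zip(u, x):
        m0 = int(xj / h)
        for m in range(m0 - msp, m0 + msp + 1):
            ftau[m % Mr] += uj * np.exp(-(xj - m * h) ** 2 / (4 * tau))
    F = np.fft.fft(ftau)                                  # uniform FFT on fine grid
    k = np.arange(-M // 2, M // 2)
    # Deapodization step: the compensation function h divides out the kernel's
    # analytic Fourier transform, sqrt(4*pi*tau) * exp(-tau*k**2).
    return (2 * np.pi / Mr) * np.exp(tau * k**2) / np.sqrt(4 * np.pi * tau) * F[k % Mr]
```

With `msp = 12` and twofold oversampling this sketch agrees with the directly evaluated non-uniform DFT to roughly 10 decimal digits; the paper's point is precisely that replacing the generic kernel with an optimized one reshapes where in the spectrum this residual error concentrates.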