Fast Convolutional Nets With fbfft: A GPU Performance Evaluation
We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA’s cuFFT library, and another based on a Facebook-authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available as open source, and are faster than NVIDIA’s cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time-domain convolutions outperform Fourier frequency-domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.
💡 Research Summary
This paper investigates the performance of convolutional neural network (CNN) training on modern NVIDIA GPUs and proposes two fast Fourier transform (FFT) based convolution implementations. The first implementation builds on NVIDIA’s cuFFT library, while the second, called fbfft, is a Facebook‑authored open‑source FFT that the authors claim outperforms cuFFT by more than 1.5× on the problem sizes relevant to deep learning. Both implementations are integrated into the Torch framework and are released as part of the fbcuda/fbcunn libraries.
The authors begin by reviewing the mathematical equivalence between spatial‑domain convolution (or cross‑correlation) and frequency‑domain multiplication, noting that the latter reduces asymptotic complexity from O(N² k²) to O(N² log N) for an N × N input and a k × k kernel. They then discuss practical considerations on GPUs: the need to pad inputs and kernels to a common FFT size, the cost of zero‑padding, the overhead of multiple kernel launches, and the memory traffic associated with forward FFT, transposition, batched GEMM (via cuBLAS), inverse FFT, and final cropping.
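The equivalence sketched above can be checked numerically: a "valid" cross-correlation computed directly in the spatial domain matches the result of padding both operands to a common FFT size, multiplying spectra elementwise (with the kernel spectrum conjugated), inverting, and cropping. The sizes below (n = 8, k = 3) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
x = rng.standard_normal((n, n))
w = rng.standard_normal((k, k))

# Direct spatial-domain cross-correlation ("valid" mode): O(n^2 * k^2) work.
out = n - k + 1
direct = np.empty((out, out))
for i in range(out):
    for j in range(out):
        direct[i, j] = np.sum(x[i:i + k, j:j + k] * w)

# Frequency-domain version: pad both operands to a common FFT size, multiply
# spectra elementwise, invert, and crop. Conjugating the kernel spectrum turns
# circular convolution into cross-correlation; since n >= out + k - 1 there is
# no wraparound in the cropped region.
X = np.fft.rfft2(x, s=(n, n))
W = np.fft.rfft2(w, s=(n, n))
freq = np.fft.irfft2(X * np.conj(W), s=(n, n))[:out, :out]

assert np.allclose(direct, freq)
```

The same padding-plus-crop recipe underlies both GPU pipelines; the zero-padding of the k × k kernel up to the full FFT size is exactly the overhead the authors account for when deciding which regime wins.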
In the cuFFT‑based pipeline, the authors exploit cuFFT’s Cooley‑Tukey radix‑2/3/5/7 kernels and, when the size does not factor into these radices, fall back to the Bluestein algorithm, which is considerably slower. To mitigate this, they deliberately pad inputs and kernels to the nearest size that cuFFT can handle efficiently (typically a power of two or a product of small primes). After forward FFTs of both inputs and filters, they transpose the data from the BDHW layout to HWBD to feed cuBLAS’s highly tuned GEMM routine, then transpose the result back before the inverse FFT. The transposition is performed out‑of‑place using cuBLAS’s `cublasCgeam`, though the authors note that an in‑place custom transpose could further reduce memory traffic.
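The padding heuristic described above can be sketched as a search for the smallest size at or above the required one whose prime factors all lie in {2, 3, 5, 7}, the radices cuFFT handles with fast Cooley‑Tukey kernels. The helper name `next_fast_fft_size` is ours, not part of cuFFT or the paper.

```python
def next_fast_fft_size(n):
    """Smallest size >= n that factors entirely into the radices {2, 3, 5, 7},
    keeping cuFFT on its fast Cooley-Tukey path and off the Bluestein fallback.
    (Illustrative sketch; cuFFT itself does not expose such a helper.)"""
    size = n
    while True:
        m = size
        for p in (2, 3, 5, 7):
            while m % p == 0:
                m //= p
        if m == 1:      # fully factored into allowed radices
            return size
        size += 1

# e.g. a size-13 FFT would hit Bluestein; padding to 14 = 2 * 7 stays fast.
```

The extra zeros cost some redundant arithmetic, but the authors find this is usually cheaper than paying the Bluestein penalty on an awkward size.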
fbfft is designed to avoid cuFFT’s limitations. It implements batched 1‑D and 2‑D FFT kernels directly in CUDA, making heavy use of shared memory, registers, and warp‑level primitives. Hermitian symmetry of real‑valued inputs is exploited to store only half the complex spectrum, halving memory requirements. The implementation also fuses transposition with the FFT kernels where possible, eliminating separate global‑memory transposes. By carefully choosing thread‑block dimensions based on the FFT size, fbfft maintains high occupancy even for small batches, achieving hardware utilization rates above 75 % in many cases.
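The Hermitian-symmetry saving mentioned above is easy to demonstrate: the DFT of a real signal satisfies X[f] = conj(X[N − f]), so only N/2 + 1 complex bins need to be stored, roughly halving the memory footprint of each spectrum. The numpy calls below stand in for the real-to-complex transforms fbfft implements in CUDA.

```python
import numpy as np

N = 16
x = np.random.default_rng(1).standard_normal(N)

full = np.fft.fft(x)    # N complex coefficients
half = np.fft.rfft(x)   # N//2 + 1 coefficients suffice for real input

# The stored half matches the lower bins exactly...
assert half.shape[0] == N // 2 + 1
assert np.allclose(full[:N // 2 + 1], half)
# ...and the discarded upper bins are recoverable by conjugate symmetry.
assert np.allclose(full[N // 2 + 1:], np.conj(half[1:N // 2][::-1]))
```

Because both inputs and filters are real, this saving applies to every forward transform in the pipeline, and the pointwise products in the frequency domain shrink correspondingly.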
A key contribution is the autotuning framework. For each convolution configuration (minibatch size S, input channels f, output channels f′, spatial size n, kernel size k), the system enumerates the feasible FFT sizes, benchmarks the candidate strategies, and caches the fastest choice so that subsequent layers with the same configuration pay no tuning cost.