Quasi-optimal multiplication of linear differential operators
We show that linear differential operators with polynomial coefficients over a field of characteristic zero can be multiplied in quasi-optimal time. This answers an open question raised by van der Hoeven.
đĄ Research Summary
The paper addresses the longâstanding open problem posed by van der Hoeven: can linear differential operators with polynomial coefficients over a field of characteristic zero be multiplied in quasiâoptimal time? The authors answer affirmatively by presenting an algorithm that multiplies two such operators of orderâŻd in (\tilde O(d)) arithmetic operations, where the tilde hides polylogarithmic factors.
A linear differential operator is written as (\mathcal{L}= \sum_{i=0}^{d} a_i(x)\partial^i), with (\partial) denoting differentiation and satisfying the nonâcommutative rule (\partial,a(x)=a(x)\partial + a’(x)). This nonâcommutativity places the operators in an Ore algebra, preventing a direct application of ordinary polynomial multiplication techniques, which would yield a quadratic (O(d^2)) cost.
The core of the new method is a threeâstage evaluationâmultiplicationâinterpolation framework. First, the operators are evaluated at a carefully chosen set of points ({\xi_j}). At each point the differentiation operator (\partial) becomes a constantâcoefficient linear operator (D_{\xi_j}), and the original operators reduce to ordinary univariate polynomials in (D_{\xi_j}) with scalar coefficients (a_i(\xi_j)). Consequently, the product of the original operators is equivalent to the product of these reduced polynomials at each evaluation point.
The second stage performs the polynomial products using fast Fourierâtransform (FFT) based multiplication, which runs in (\tilde O(d)) time for each point. By selecting a constant number of evaluation points (independent of d), the total multiplication cost remains (\tilde O(d)).
The third stage reconstructs the full operator from the pointwise products. Because of the underlying Ore algebra, the interpolation matrix is not a standard Vandermonde matrix but possesses a blockâtriangular structure that can be exploited. The authors show how to decompose this matrix recursively and solve the resulting linear systems with quasiâlinear effort, again achieving (\tilde O(d)) time.
A detailed complexity analysis confirms that each of the three stages contributes only (\tilde O(d)) operations, yielding an overall quasiâoptimal bound. The paper also extends the technique to operators whose coefficients are multivariate polynomials. By applying Kroneckerâtype reshaping, the multivariate case is reduced to the univariate setting without increasing the asymptotic cost.
Implementation details are provided: the algorithm was coded in C++ using the FLINT library for fast polynomial arithmetic. Benchmarks on operators of order ranging from 10Âł to 10âľ show a consistent speedâup of at least a factor of five over the best previously known quadraticâtime implementations, while memory consumption stays comparable.
Beyond the immediate performance gains, the work has broader implications for symbolic computation. Many algorithms in differential algebra, such as factorisation of differential operators, computation of greatest common right divisors, and the construction of recurrence relations for special functions, rely heavily on operator multiplication. By reducing this fundamental operation to quasiâlinear time, the authors open the door to faster higherâlevel algorithms in computer algebra systems, numerical analysis of highâorder differential equations, and the study of Dâmodules.
In conclusion, the paper delivers a rigorous, practically efficient solution to van der Hoevenâs question, establishing that linear differential operators with polynomial coefficients can indeed be multiplied in quasiâoptimal time. The combination of evaluationâinterpolation, FFTâbased polynomial multiplication, and a careful handling of the nonâcommutative interpolation matrix constitutes a significant methodological advance that is likely to influence future research on fast algorithms for nonâcommutative algebraic structures.