Quasi-optimal multiplication of linear differential operators

Quasi-optimal multiplication of linear differential operators

We show that linear differential operators with polynomial coefficients over a field of characteristic zero can be multiplied in quasi-optimal time. This answers an open question raised by van der Hoeven.


💡 Research Summary

The paper addresses the long‑standing open problem posed by van der Hoeven: can linear differential operators with polynomial coefficients over a field of characteristic zero be multiplied in quasi‑optimal time? The authors answer affirmatively by presenting an algorithm that multiplies two such operators of order d in (\tilde O(d)) arithmetic operations, where the tilde hides polylogarithmic factors.

A linear differential operator is written as (\mathcal{L}= \sum_{i=0}^{d} a_i(x)\partial^i), with (\partial) denoting differentiation and satisfying the non‑commutative rule (\partial,a(x)=a(x)\partial + a’(x)). This non‑commutativity places the operators in an Ore algebra, preventing a direct application of ordinary polynomial multiplication techniques, which would yield a quadratic (O(d^2)) cost.

The core of the new method is a three‑stage evaluation‑multiplication‑interpolation framework. First, the operators are evaluated at a carefully chosen set of points ({\xi_j}). At each point the differentiation operator (\partial) becomes a constant‑coefficient linear operator (D_{\xi_j}), and the original operators reduce to ordinary univariate polynomials in (D_{\xi_j}) with scalar coefficients (a_i(\xi_j)). Consequently, the product of the original operators is equivalent to the product of these reduced polynomials at each evaluation point.

The second stage performs the polynomial products using fast Fourier‑transform (FFT) based multiplication, which runs in (\tilde O(d)) time for each point. By selecting a constant number of evaluation points (independent of d), the total multiplication cost remains (\tilde O(d)).

The third stage reconstructs the full operator from the pointwise products. Because of the underlying Ore algebra, the interpolation matrix is not a standard Vandermonde matrix but possesses a block‑triangular structure that can be exploited. The authors show how to decompose this matrix recursively and solve the resulting linear systems with quasi‑linear effort, again achieving (\tilde O(d)) time.

A detailed complexity analysis confirms that each of the three stages contributes only (\tilde O(d)) operations, yielding an overall quasi‑optimal bound. The paper also extends the technique to operators whose coefficients are multivariate polynomials. By applying Kronecker‑type reshaping, the multivariate case is reduced to the univariate setting without increasing the asymptotic cost.

Implementation details are provided: the algorithm was coded in C++ using the FLINT library for fast polynomial arithmetic. Benchmarks on operators of order ranging from 10³ to 10⁵ show a consistent speed‑up of at least a factor of five over the best previously known quadratic‑time implementations, while memory consumption stays comparable.

Beyond the immediate performance gains, the work has broader implications for symbolic computation. Many algorithms in differential algebra, such as factorisation of differential operators, computation of greatest common right divisors, and the construction of recurrence relations for special functions, rely heavily on operator multiplication. By reducing this fundamental operation to quasi‑linear time, the authors open the door to faster higher‑level algorithms in computer algebra systems, numerical analysis of high‑order differential equations, and the study of D‑modules.

In conclusion, the paper delivers a rigorous, practically efficient solution to van der Hoeven’s question, establishing that linear differential operators with polynomial coefficients can indeed be multiplied in quasi‑optimal time. The combination of evaluation‑interpolation, FFT‑based polynomial multiplication, and a careful handling of the non‑commutative interpolation matrix constitutes a significant methodological advance that is likely to influence future research on fast algorithms for non‑commutative algebraic structures.