Accelerated Approximation of the Complex Roots of a Univariate Polynomial (Extended Abstract)

Highly efficient and even nearly optimal algorithms have been developed for the classical problem of univariate polynomial root-finding (see, e.g., \cite{P95}, \cite{P02}, \cite{MNP13}, and the bibliography therein), but this is still an area of active research. By combining several powerful techniques developed in this area, we devise new nearly optimal algorithms whose substantial merit is their simplicity, which is important for implementation.


💡 Research Summary

The paper addresses the classic problem of finding all complex roots of a univariate polynomial, focusing on achieving near‑optimal computational complexity while keeping the algorithmic structure simple enough for practical implementation. The authors build on a series of powerful techniques that have appeared in the literature—Newton’s method, isolation ratios, power‑sum computations, and fast Fourier transform (FFT) based multipoint evaluation—to construct a three‑stage procedure that can isolate, refine, and certify each root with provably low cost.

The first concept introduced is the isolation ratio of a disc D(c,r) that contains a single root of the polynomial p(x). If the disc is (1+η)-isolated (η>0), the authors show that increasing the isolation ratio to at least 5d² (where d is the degree) guarantees quadratic convergence of Newton's iteration started at the disc's centre. This is formalised in Theorem 1, which essentially states that once the radius is reduced to Δ = O(rη/d²) the disc becomes 5d²-isolated, and Newton's method then converges quadratically right from the centre.
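
For intuition, the isolation ratio used above can be computed directly when the roots are known (a toy helper of our own; the paper's algorithm of course never sees the roots):

```python
def isolation_ratio(roots, c, r):
    """Largest factor f such that the concentric disc D(c, f*r) contains
    no roots beyond those already inside D(c, r); D(c, r) is then
    f-isolated in the sense used above.  Toy helper for illustration."""
    outside = [abs(z - c) for z in roots if abs(z - c) > r]
    return min(outside) / r if outside else float("inf")
```

For example, for roots {0.2, 3, āˆ’4} the unit disc around 0 is 3-isolated, comfortably above the (1+η) hypothesis.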

To achieve such a reduction without expensive root-finding, the paper exploits the fact that the power sum s₁ = Σ z_j of the roots inside the disc equals the root itself when there is only one root. The authors approximate s₁ by evaluating the logarithmic derivative p′(ω)/p(ω) at the q-th roots of unity ω_j = exp(2πi j/q) and forming the weighted average

  s₁* = (1/q) ∑_{j=0}^{qāˆ’1} ω_j² · p′(ω_j)/p(ω_j),

which discretises the Cauchy integral s₁ = (1/2πi) ∮ z · p′(z)/p(z) dz over the unit circle.

Choosing q = Θ(log d) guarantees that the error |s₁* āˆ’ s₁| is smaller than the target Δ. The crucial observation is that evaluating p and p′ at all the ω_j reduces to FFTs of size q; the elementwise ratios and the weighted average then cost only O(q) further operations, for a total of O(q log q) = O(log d · log log d) arithmetic operations. The authors also discuss how to construct a reduced-degree polynomial p_q(x) (degree ≤ qāˆ’1) that agrees with p at the q-th roots of unity, obtained from the original coefficients with only d additions, which further streamlines the evaluation.
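
A minimal NumPy sketch of this estimate (our function names; for clarity the folded polynomials are evaluated directly rather than by a size-q FFT, which does not change the result):

```python
import numpy as np

def fold_mod_xq_minus_1(coeffs, q):
    """Fold p(x) modulo x^q - 1: coefficients whose indices agree mod q are
    summed (d additions), and the folded polynomial agrees with p at every
    q-th root of unity."""
    folded = np.zeros(q, dtype=complex)
    for k, c in enumerate(coeffs):
        folded[k % q] += c
    return folded

def power_sum_estimate(coeffs, q):
    """Estimate s1, the sum of the roots of p inside the unit disc, by
    discretising the Cauchy integral at the q-th roots of unity:
    s1* = (1/q) * sum_j w_j^2 * p'(w_j) / p(w_j)."""
    d = len(coeffs) - 1                       # coeffs[k] is the coefficient of x^k
    dcoeffs = [k * coeffs[k] for k in range(1, d + 1)]
    w = np.exp(2j * np.pi * np.arange(q) / q)
    p_vals = np.polyval(fold_mod_xq_minus_1(coeffs, q)[::-1], w)
    dp_vals = np.polyval(fold_mod_xq_minus_1(dcoeffs, q)[::-1], w)
    return np.sum(w**2 * dp_vals / p_vals) / q
```

For p(x) = (x āˆ’ 0.2)(x āˆ’ 3)(x + 4), whose only root in the unit disc is 0.2, the estimate with q = 16 agrees with 0.2 to roughly seven digits; the error decays geometrically in q at a rate governed by the isolation.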

Having obtained a refined centre ĉ = s₁*, the algorithm shrinks the disc to radius Δ and thereby obtains a 5d²-isolated sub-disc. Newton's iteration x_{k+1} = x_k āˆ’ p(x_k)/p′(x_k) is then applied starting from ĉ. Because the disc is now sufficiently isolated, the iteration converges quadratically, and the number of Newton steps needed to achieve a prescribed absolute error ε is O(log log(1/ε)). Consequently, for a single root the total arithmetic cost is

  O(log d · log log d) + O(log log(1/ε))

operations, which is essentially optimal up to polylogarithmic factors.
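
The refinement stage is then an ordinary Newton loop; in this sketch (our naming), the stopping test on |step| stands in for the paper's certified error bound:

```python
import numpy as np

def newton_refine(coeffs, x0, eps, max_iter=60):
    """Refine an approximation x0 of a simple root of p (coeffs ascending)
    by Newton's iteration x <- x - p(x)/p'(x).  Quadratic convergence is
    guaranteed once x0 lies in a sufficiently isolated disc, so
    O(log log(1/eps)) iterations suffice."""
    p = np.array(coeffs[::-1], dtype=complex)   # descending order for polyval
    dp = np.polyder(p)
    x = complex(x0)
    for _ in range(max_iter):
        step = np.polyval(p, x) / np.polyval(dp, x)
        x -= step
        if abs(step) < eps:                     # the last step bounds the error
            break
    return x
```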

The paper extends the analysis to all d roots. Assuming we are given d discs, each (1+η)-isolated and containing a distinct simple root, the same procedure can be run in parallel on each disc. By replacing the uniform roots of unity with equally spaced points on each disc's boundary and employing the Moenck–Borodin multipoint evaluation algorithm, the authors achieve a total cost of

  O(d · log² d · log log(1/ε))

arithmetic operations for approximating all roots within a fixed absolute error. This matches the best known bounds for the problem while avoiding the heavy machinery of earlier near‑optimal algorithms.
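
Run disc by disc, the whole pipeline can be sketched as follows (our naming; the paper instead batches the boundary evaluations across all d discs with Moenck–Borodin fast multipoint evaluation, which this per-disc sketch does not attempt):

```python
import numpy as np

def approximate_root_in_disc(coeffs, c, r, q=16, eps=1e-12):
    """Approximate the single root of p inside the isolated disc D(c, r):
    a power-sum estimate over q equally spaced boundary points, followed
    by Newton refinement from that estimate."""
    p = np.array(coeffs[::-1], dtype=complex)   # descending order for polyval
    dp = np.polyder(p)
    w = np.exp(2j * np.pi * np.arange(q) / q)
    z = c + r * w                               # points on the disc boundary
    # power sum of t(x) = p(c + r*x), whose roots in the unit disc are (root - c)/r
    s1 = np.sum(w**2 * r * np.polyval(dp, z) / np.polyval(p, z)) / q
    x = c + r * s1                              # refined centre
    for _ in range(40):                         # Newton refinement
        step = np.polyval(p, x) / np.polyval(dp, x)
        x -= step
        if abs(step) < eps:
            break
    return x

def approximate_all_roots(coeffs, discs, **kw):
    """Apply the single-disc routine to each (centre, radius) pair."""
    return [approximate_root_in_disc(coeffs, c, r, **kw) for c, r in discs]
```

For p(x) = (x āˆ’ 0.2)(x āˆ’ 3)(x + 4) with the three unit discs centred at 0, 3, and āˆ’4, the routine recovers all three roots to high accuracy.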

A substantial portion of the paper is devoted to Boolean (bit) complexity analysis. Assuming the input coefficients are known exactly, the authors introduce a working precision λ that depends on the desired output precision ℓ, the coefficient magnitude τ, and the degree d. They show that the dominant Boolean cost comes from the FFT-based evaluations and the subsequent Newton steps, yielding an overall bound of

  Õ_B(d²τ + dℓ)

where Õ_B hides polylogarithmic factors. For refining a root to L bits of precision, the cost becomes Õ_B(d²τ + dL). When all roots are refined simultaneously, the authors employ fast multipoint evaluation (again via Moenck–Borodin) to keep the Boolean cost within the same asymptotic range.

The authors also discuss extensions, such as combining their method with the initial isolation algorithm of Mehlhorn et al.

