Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By “accurate” we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: Most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as $x+y+z$, dot products, or indeed any enumerable set which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
💡 Research Summary
The paper develops a comprehensive theory for when high‑accuracy numerical algorithms can be constructed for evaluating multivariate polynomials and for performing linear‑algebraic operations on structured matrices, while also meeting polynomial‑time efficiency. The authors adopt the Traditional Model (TM) as their baseline arithmetic framework: each elementary operation (+, –, ×, ÷) is assumed to be performed with a relative error bounded by the machine epsilon u, and an algorithm is deemed “accurate” if the final result’s relative error is strictly less than one, guaranteeing at least one correct leading digit.
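As a concrete reading of this model, here is a small Python sketch (our illustration, not code from the paper) that perturbs each basic operation by a relative error of size at most u and checks the paper's "accurate" criterion; the function names are ours.

```python
import random

def tm_op(op, a, b, u=2.0**-53):
    """Traditional Model: each basic operation is computed with a relative
    perturbation of size at most u, i.e. fl(a op b) = (a op b) * (1 + delta),
    |delta| <= u."""
    delta = random.uniform(-u, u)
    return op(a, b) * (1.0 + delta)

def accurate(computed, exact):
    """An answer is 'accurate' in the paper's sense if its relative error
    is strictly below 1, i.e. it has at least some correct leading digits."""
    if exact == 0:
        return computed == 0
    return abs(computed - exact) / abs(exact) < 1.0

# A single multiplication is always accurate in the TM:
# |delta| <= u < 1 bounds the relative error well below 1.
z = tm_op(lambda a, b: a * b, 3.0, 7.0)
print(accurate(z, 21.0))  # True
```

The interesting question the paper asks is what happens when many such perturbed operations are composed, since subtraction of nearly equal intermediates can inflate these per-operation errors without bound.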
The first major contribution is a set of necessary and sufficient conditions that decide, for any given polynomial f(x₁,…,xₙ), whether an accurate TM algorithm exists using only the basic operations. By representing the polynomial as a computation graph, the authors show that the key obstacles to accuracy are (i) heterogeneous degree patterns that cause intermediate values to approach zero, and (ii) sign‑alternating coefficients that amplify rounding errors. If all terms share the same total degree (the homogeneous case), or if each degree class has coefficients of uniform sign, the polynomial can be normalized and evaluated in a carefully ordered sequence that avoids catastrophic cancellation, yielding relative error below 1 in time polynomial in n. Conversely, when degrees are heterogeneous and signs alternate, no ordering of the basic operations can prevent loss of significance, and the paper proves that no accurate TM algorithm exists.
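To see why evaluation order matters under cancellation, a minimal IEEE double-precision illustration (ours, not from the paper): the same three-term sum computed in two orders, one of which loses every digit.

```python
# Summing 1, 2^-53, and -1. The exact answer is 2^-53.
tiny = 2.0 ** -53  # half an ulp of 1.0

bad = (1.0 + tiny) - 1.0   # 1.0 + tiny rounds back to 1.0 (ties-to-even)
good = (1.0 - 1.0) + tiny  # the cancellation happens exactly; tiny survives

print(bad)   # 0.0 -- relative error exactly 1: no correct digits
print(good)  # 1.1102230246251565e-16, i.e. 2^-53, fully accurate
```

For this toy sum a safe ordering obviously exists; the paper's negative results concern polynomials where no ordering of the basic operations avoids such loss of significance.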
The second part extends the analysis to structured matrices such as Toeplitz, circulant, symmetric Toeplitz, and block‑diagonal forms. The authors exploit the limited degrees of freedom inherent in these structures to design high‑accuracy LU, QR, and SVD factorizations. Their “block‑recursive” scheme decomposes a large matrix into small blocks, applies exact 2×2 (or similarly sized) factorizations to each block, and then recombines the results recursively. Because the block size and recursion depth are bounded by a polynomial in the input size, the overall algorithm runs in polynomial time. Moreover, for many Toeplitz‑type matrices the authors achieve O(n log n) complexity by using fast convolution techniques that preserve the required accuracy. These results demonstrate that structure can be leveraged not only for speed but also for guaranteeing a correct leading digit in the computed solution.
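The fast-convolution remark can be made concrete with the standard circulant matrix–vector product via the FFT. This is a routine sketch of the speed side only (ours, not the paper's algorithm): a plain floating-point FFT delivers normwise rather than the componentwise accuracy the paper is concerned with.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the n x n circulant matrix whose first column is c by x
    in O(n log n), using the diagonalization C = F^{-1} diag(F c) F
    (the convolution theorem)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Small check: with x = e_1, the product C @ e_1 is the first column of C,
# i.e. c itself.
c = np.array([1.0, 2.0, 3.0, 4.0])
e1 = np.array([1.0, 0.0, 0.0, 0.0])
print(circulant_matvec(c, e1))
```

The same diagonalization underlies fast Toeplitz solvers (a Toeplitz matrix embeds in a circulant of twice the size), which is where the O(n log n) figure for Toeplitz-type matrices comes from.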
When the TM analysis yields a negative answer, the paper proposes extending the primitive operation set with additional accurate kernels. Examples include a ternary addition x + y + z, an exact inner product ⟨x,y⟩, or precise implementations of elementary transcendental functions (exp, log, sin). By treating the extended set as an enumerable library, the authors redefine the necessary‑sufficient conditions and construct an automated decision procedure. Given any problem specification, the procedure either synthesizes a high‑accuracy algorithm using the minimal extra kernels or produces a formal proof that no such algorithm can exist even with the extended library. This framework provides a systematic way to identify which extra operations are essential for a particular computational task.
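As one illustration of such a kernel, here is a compensated implementation of the ternary addition x + y + z built from Knuth's error-free TwoSum transformation (a standard technique, not necessarily the paper's own construction):

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly, in IEEE arithmetic."""
    s = a + b
    bp = s - a
    e = (a - bp) + (b - (s - bp))
    return s, e

def add3(x, y, z):
    """Accurate x + y + z: track the exact rounding errors of each partial
    sum and fold them back in at the end (compensated summation)."""
    s, e1 = two_sum(x, y)
    s, e2 = two_sum(s, z)
    return s + (e1 + e2)

# The naive left-to-right sum loses the tiny middle term entirely,
# while the compensated kernel recovers it:
print((1.0 + 2.0**-53) - 1.0)      # 0.0
print(add3(1.0, 2.0**-53, -1.0))   # 1.1102230246251565e-16, i.e. 2^-53
```

An accurate x + y + z primitive of this kind is exactly the sort of library extension the paper contemplates: once it is available as a single "accurate operation," the decision procedure can exploit it when the basic four operations alone do not suffice.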
Finally, the authors compare the TM with a bit‑string model in which real numbers are represented by finite binary strings and operation cost is measured in bit operations. They show that some problems impossible to solve accurately in TM become feasible in the bit model when the word length is chosen sufficiently large, because the relative‑error bound can be enforced by increasing precision rather than by changing the algorithmic structure. The paper quantifies the relationship between TM error bounds and required bit precision, establishing a mapping that clarifies the trade‑off between accuracy and computational effort across the two models.
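The precision-versus-structure trade-off can be mimicked with Python's decimal module, where the working precision plays the role of the word length in the bit-string model (an illustrative example of ours, not taken from the paper):

```python
from decimal import Decimal, getcontext

def cancel_demo(prec):
    """Evaluate (1 + 1e-30) - 1 at a chosen working precision.
    The exact answer is 1e-30; with too few digits it is lost entirely."""
    getcontext().prec = prec
    return (Decimal(1) + Decimal("1e-30")) - Decimal(1)

print(cancel_demo(15) == 0)                 # True: the tiny term is rounded away
print(cancel_demo(40) == Decimal("1e-30"))  # True: enough digits recover it exactly
```

In the TM the precision u is fixed and the algorithm's structure must do all the work; in the bit model one may instead grow the word length with the input, which is why some TM-impossible problems become feasible there.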
In summary, the paper delivers (1) a rigorous characterization of when accurate polynomial evaluation is possible under the Traditional Model, (2) efficient high‑accuracy algorithms for a broad class of structured linear‑algebra problems, (3) a systematic method for extending the primitive operation set and automatically deciding the existence of accurate algorithms, and (4) a comparative analysis of TM versus bit‑string arithmetic that illuminates the fundamental limits of numerical accuracy and efficiency. These contributions provide a solid foundation for developers of scientific‑computing libraries, guiding them on which primitive operations to implement, how to exploit matrix structure, and when to resort to higher‑precision or alternative arithmetic models.