The complexity of multiple-precision arithmetic
In studying the complexity of iterative processes it is usually assumed that the arithmetic operations of addition, multiplication, and division can be performed in certain constant times. This assumption is invalid if the precision required increases as the computation proceeds. We give upper and lower bounds on the number of single-precision operations required to perform various multiple-precision operations, and deduce some interesting consequences concerning the relative efficiencies of methods for solving nonlinear equations using variable-length multiple-precision arithmetic. A postscript describes more recent developments.
💡 Research Summary
The paper addresses a fundamental gap in the traditional analysis of iterative numerical methods: the assumption that elementary arithmetic operations (addition, multiplication, division) execute in constant time. This assumption breaks down when the required precision grows during the computation, as is common in high‑precision scientific computing, cryptography, and numerical root‑finding. To remedy this, the authors introduce a model in which the basic unit of work is a “single‑precision operation” – a fixed‑size hardware operation (e.g., 32‑ or 64‑bit). A multiple‑precision operation is then expressed as a sequence of these single‑precision steps, and its cost is measured by the number of such steps required.
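This cost model can be made concrete, as an illustration only (not code from the paper), by representing operands as little-endian arrays of fixed-size limbs and counting one unit of work per limb-level operation; `BASE` and `add_limbs` below are hypothetical names:

```python
# Minimal sketch of the cost model: operands are little-endian arrays of
# base-2**32 "limbs", and we charge one unit of work per single-precision
# (limb-level) operation.

BASE = 2 ** 32

def add_limbs(a, b):
    """Add two little-endian limb arrays; return (sum_limbs, op_count)."""
    n = max(len(a), len(b))
    result, carry, ops = [], 0, 0
    for i in range(n):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)   # low limb of the partial sum
        carry = s // BASE         # carry into the next limb
        ops += 1                  # one single-precision add per limb
    if carry:
        result.append(carry)
    return result, ops

# An n-limb addition performs Theta(n) single-precision operations.
digits, ops = add_limbs([BASE - 1, BASE - 1], [1])
```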
The first part of the paper establishes upper and lower bounds for the elementary operations on n‑bit operands. Addition and subtraction take Θ(n) single‑precision operations, reflecting the linear cost of carry propagation. Multiplication is treated in depth: the classical O(n²) schoolbook method, Karatsuba's O(n^{log₂3}) ≈ O(n^{1.585}) algorithm, the Toom‑Cook family (Toom‑3 runs in O(n^{1.465})), and FFT‑based multiplication, whose Schönhage–Strassen form costs O(n log n log log n). The authors show that for sufficiently large n the FFT‑based approach dominates, and they discuss the hidden constants and memory‑access overhead that affect practical performance. Division is reduced to multiplication via Newton's iteration for the reciprocal: a naive analysis at full fixed precision gives O(M(n) log n) single‑precision steps, where M(n) denotes the cost of an n‑bit multiplication, but doubling the working precision at each iteration makes the total cost telescope to O(M(n)); division inherits this bound.
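A minimal Python sketch of Karatsuba's divide‑and‑conquer multiplication (illustrative, not the paper's own presentation; the `cutoff` threshold is an assumed tuning parameter):

```python
def karatsuba(x, y, cutoff=64):
    """Multiply nonnegative integers x and y by Karatsuba's method:
    three recursive half-size products instead of four, giving
    O(n**log2(3)) ~ O(n**1.585) single-precision operations."""
    if x.bit_length() <= cutoff or y.bit_length() <= cutoff:
        return x * y                                # schoolbook base case
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask                    # x = x1 * 2**half + x0
    y1, y0 = y >> half, y & mask                    # y = y1 * 2**half + y0
    z2 = karatsuba(x1, y1, cutoff)                  # high product
    z0 = karatsuba(x0, y0, cutoff)                  # low product
    z1 = karatsuba(x1 + x0, y1 + y0, cutoff) - z2 - z0   # cross terms
    return (z2 << (2 * half)) + (z1 << half) + z0
```

In practice the `cutoff` is tuned so that small operands fall through to the hardware multiplier, which is exactly the kind of hidden-constant trade-off the paper's analysis quantifies.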
Having established these primitives, the paper turns to the solution of nonlinear equations. It compares fixed‑precision Newton's method, in which every iteration works at the full target precision and thus incurs the full cost of an n‑bit multiplication, with a variable‑precision Newton scheme that adapts the working precision to the current error. In the variable‑precision setting, early iterations use low precision, and the required precision roughly doubles at each step, matching Newton's quadratic convergence. The total work to reach t‑bit final accuracy is then O(M(t)), because the costs of the successively doubled precisions form a geometric series dominated by the last step; this is a marked improvement over the O(M(t) log t) cost of the fixed‑precision approach, which spends the full M(t) on each of its O(log t) iterations. Similar analyses are performed for the secant, bisection, and fixed‑point iteration methods, showing that while the secant method may be cheaper initially, Newton's method becomes superior as the target precision grows.
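The precision‑doubling schedule can be sketched with Python's decimal module; the example below solves x² − a = 0 by Newton's method, with the function name and the guard‑digit choice as illustrative assumptions:

```python
from decimal import Decimal, getcontext

def var_precision_sqrt(a, digits):
    """Variable-precision Newton iteration for f(x) = x**2 - a.
    Early steps run at low precision and the working precision doubles
    each step, so the total cost is dominated by the final full-precision
    step: O(M(t)) rather than O(M(t) log t)."""
    a = Decimal(a)
    prec = 8
    getcontext().prec = prec
    x = Decimal(float(a) ** 0.5)       # cheap low-precision starting value
    while prec < digits:
        prec = min(2 * prec, digits)   # precision roughly doubles per step
        getcontext().prec = prec + 2   # a couple of guard digits
        x = (x + a / x) / 2            # one Newton step at this precision
    getcontext().prec = digits
    return +x                          # round to the target precision
```

Each Newton step only needs as much precision as the error it is about to achieve, which is why the low-precision early iterations cost almost nothing.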
The paper also discusses practical implications: variable‑precision algorithms reduce both CPU time and memory footprint, making them attractive for large‑scale simulations where high precision is required only in the final stages. The authors provide concrete guidelines for algorithm designers, emphasizing the choice of multiplication algorithm (Karatsuba vs. FFT) based on the anticipated precision range, and recommending Newton‑Raphson reciprocal computation for division to exploit the same multiplication kernel.
In a postscript, the authors survey developments that have occurred since the original publication. They highlight new hybrid multiplication schemes that combine Karatsuba’s divide‑and‑conquer with FFT for medium‑size operands, the exploitation of GPUs and FPGAs for massive parallelism in multiple‑precision arithmetic, and advances in the computation of transcendental functions (logarithms, exponentials, trigonometric functions) using optimized series expansions and argument reduction techniques. They also note the emergence of robust multiple‑precision libraries such as MPFR and ARPREC, which implement automatic precision management based on the theoretical bounds described earlier.
Overall, the paper provides a rigorous complexity framework for multiple‑precision arithmetic, translates that framework into actionable insights for solving nonlinear equations, and situates the work within the broader evolution of high‑precision computing. Its blend of theoretical bounds, algorithmic strategies, and practical recommendations makes it a cornerstone reference for researchers and practitioners dealing with variable‑length high‑precision computations.