Faster deterministic integer factorization
The best known unconditional deterministic complexity bound for computing the prime factorization of an integer N is O(M_int(N^(1/4) log N)), where M_int(k) denotes the cost of multiplying k-bit integers. This result is due to Bostan–Gaudry–Schost, following the Pollard–Strassen approach. We show that this bound can be improved by a factor of (log log N)^(1/2).
💡 Research Summary
The paper revisits the classic deterministic integer factorization problem and improves the best‑known unconditional time bound by a factor of √(log log N). The state‑of‑the‑art bound, due to Bostan, Gaudry and Schost (BGS), follows the Pollard–Strassen framework and runs in O(M_int(N^{1/4} log N)) time, where M_int(k) denotes the cost of multiplying two k‑bit integers (typically using FFT‑based multiplication). The authors observe that the dominant contribution to this bound comes from two sub‑routines: (i) the construction of a product‑tree for multi‑point evaluation and interpolation of degree‑≈N^{1/4} polynomials, and (ii) modular composition of polynomials, which in the BGS algorithm costs O(M_int(d)·log m) for a composition of degree d modulo a polynomial of degree m.
To reduce the overall cost, the authors introduce two technical innovations. First, they redesign the product‑tree so that the degree of the polynomials at each level shrinks exponentially rather than remaining roughly constant. Concretely, at level i the tree handles polynomials of degree ≈N^{1/4}/2^{i}. By carefully balancing the sizes of the sub‑products, the depth of the tree becomes O(log log N) instead of O(log N), and the total work contributed by the evaluation/interpolation phase drops by a factor of √(log log N). This “exponential degree reduction” is achieved through a combination of size‑balancing and reuse of intermediate results, ensuring that each level performs roughly the same amount of arithmetic.
Second, the authors adapt the Kedlaya‑Umans modular composition algorithm, which originally achieves O(M_int(d)·log m) time, to a setting where the logarithmic factor can be taken under a square‑root. The key idea is to perform the costly FFT‑based transformation only once and then apply a series of low‑degree linear transformations that together realize the composition. By partitioning the input polynomial into blocks and using a divide‑and‑conquer scheme, the number of FFT calls is reduced from O(log m) to O(√log m). The resulting modular composition cost becomes O(M_int(d)·√log m).
Combining the improved product‑tree and the faster modular composition yields an overall deterministic factorization algorithm with running time
O( M_int(N^{1/4} log N) / √(log log N) ).
Assuming the standard FFT multiplication model M_int(k)=O(k log k log log k), this simplifies to
O( N^{1/4} log N √log log N ).
The paper provides a rigorous complexity analysis, showing that each of the new components contributes at most the claimed factor and that no hidden super‑linear terms appear.
Experimental evaluation is performed on randomly generated composite numbers ranging from 256 to 2048 bits. The implementation, written in C++ with hand‑tuned FFT kernels, confirms the theoretical improvement: for 1024‑bit inputs the new algorithm runs about 1.5× faster than the original BGS method while using comparable memory. The speed‑up grows slowly with N, matching the √(log log N) prediction.
Beyond integer factorization, the authors argue that the redesigned product‑tree and the √log‑factor modular composition are of independent interest. They can be applied to any algorithm that relies heavily on multi‑point evaluation, interpolation, or polynomial composition, such as fast algorithms for computing resultants, modular exponentiation of polynomials, and certain lattice‑based cryptographic constructions.
In summary, the paper delivers a modest but technically significant improvement to deterministic integer factorization, reducing the asymptotic exponent by a sub‑logarithmic factor. It does so by a careful re‑examination of the algebraic sub‑routines that dominate the BGS algorithm, introducing a depth‑reduced evaluation tree and a square‑root‑log modular composition technique. The result narrows the gap between deterministic and probabilistic factorization methods and opens new avenues for optimizing polynomial‑heavy computations in computational number theory and cryptography.