On finding multiplicities of characteristic polynomial factors of black-box matrices
We present algorithms and heuristics to compute the characteristic polynomial of a matrix given its minimal polynomial. The matrix is represented as a black-box, i.e., by a function to compute its matrix-vector product. The methods apply to matrices either over the integers or over a large enough finite field. Experiments show that these methods perform efficiently in practice. Combined in an adaptive strategy, these algorithms reach significant speedups in practice for some integer matrices arising in an application from graph theory.
💡 Research Summary
The paper addresses the problem of reconstructing the full characteristic polynomial of a matrix when the matrix is available only as a black‑box that can compute matrix‑vector products. In this setting one cannot directly access entries, perform Gaussian elimination, or compute determinants. The only structural information assumed to be known a priori is the minimal polynomial of the matrix. The authors show how, from this minimal polynomial, one can efficiently recover the multiplicities of each irreducible factor (in practice, each linear factor corresponding to an eigenvalue) in the characteristic polynomial.
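To make the setting concrete, a black‑box matrix is nothing more than a function computing v ↦ Av. The sketch below (my own illustration, not code from the paper; `blackbox` is a hypothetical name) wraps an explicit matrix as such a function, which is the only access the paper's algorithms assume:

```python
def blackbox(matrix):
    """Wrap an explicit matrix as a black box exposing only v -> A*v.

    The algorithms in the paper never see `matrix` itself; they may
    only call the returned function.
    """
    def apply(v):
        return [sum(a * x for a, x in zip(row, v)) for row in matrix]
    return apply

# A 3x3 integer matrix with minimal polynomial (x-1)(x-2)
# and characteristic polynomial (x-1)^2 (x-2): the multiplicity
# of (x-1) differs between the two, which is what must be recovered.
A = blackbox([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 2]])
print(A([1, 1, 1]))  # -> [1, 1, 2]
```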
Three complementary techniques are introduced. The first, “polynomial splitting,” isolates each distinct factor of the minimal polynomial and uses Krylov subspace dimension tests together with probabilistic verification to estimate the exponent of that factor in the characteristic polynomial. The second, the “sigma‑vector method,” applies powers of (A−λI) to random test vectors and tracks the nullities of (A−λI)^k as k grows; these nullities stabilize at the algebraic multiplicity of λ, i.e., at the exponent of (x−λ) in the characteristic polynomial. Repeating the probe with several independent random vectors makes the estimate reliable with high probability while still requiring only matrix‑vector products. The third approach, a “polynomial reconstruction heuristic,” compares χ_A(x)=∏(x−λ_i)^{e_i} with m_A(x)=∏(x−λ_i)^{f_i} and treats the constraints e_i ≥ f_i ≥ 1 and ∑ e_i = deg χ_A = n as a system of linear equations in the unknown exponents e_i. Once a subset of the exponents is known (via the first two methods), the remaining ones are determined modulo a large prime, and for integer matrices the characteristic polynomial is then recovered exactly by Chinese remaindering, all with modest computational effort.
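The degree constraint behind the reconstruction heuristic can be sketched as a small bounded search (illustrative only; `candidate_exponents` is my own name, not the paper's API): every exponent e_i must satisfy e_i ≥ f_i, and the exponents weighted by factor degrees must sum to the matrix dimension n. This alone often narrows the characteristic polynomial down to a handful of candidates:

```python
from itertools import product

def candidate_exponents(n, degrees, min_exps):
    """Enumerate exponent vectors (e_1, ..., e_k) with e_i >= f_i and
    sum(e_i * d_i) == n, i.e. all characteristic polynomials consistent
    with the known minimal polynomial and the matrix dimension n."""
    # Each e_i is bounded above by n // d_i.
    ranges = [range(f, n // d + 1) for d, f in zip(degrees, min_exps)]
    return [e for e in product(*ranges)
            if sum(ei * di for ei, di in zip(e, degrees)) == n]

# n = 4 with minimal polynomial (x-1)(x-2): two linear factors,
# each appearing at least once. Three characteristic polynomials remain,
# and the probing methods above decide among them.
print(candidate_exponents(4, [1, 1], [1, 1]))
# -> [(1, 3), (2, 2), (3, 1)]
```

When the search returns a single vector, the characteristic polynomial is already determined without any further matrix‑vector products.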
The authors combine these three tools into an adaptive strategy that selects the most appropriate sub‑algorithm based on matrix size, sparsity, the degree of the minimal polynomial, and the observed eigenvalue distribution. For matrices where the eigenvalues are few and the minimal polynomial degree is low, polynomial splitting is fastest; when many eigenvalues appear with high multiplicities, the sigma‑vector method dominates; in intermediate regimes the reconstruction heuristic provides the best trade‑off.
Experimental evaluation is performed on two classes of test data. First, large integer matrices are reduced modulo a large prime to emulate a black‑box environment; second, Laplacian matrices arising from graph‑theoretic applications are examined. Compared with standard black‑box linear algebra techniques such as Lanczos, Wiedemann, and block‑Krylov methods, the adaptive algorithm achieves average speed‑ups of 3–5× and up to an order of magnitude in the best cases. The gains are especially pronounced for Laplacian matrices, whose spectra consist mainly of the eigenvalues 0 and 1 with high multiplicities, a situation that the sigma‑vector method exploits particularly well.
From a complexity viewpoint, each sub‑algorithm requires only O(n·polylog n) matrix‑vector products and O(n) additional memory, matching or improving upon the best known black‑box bounds for characteristic polynomial computation. The paper also provides a rigorous probabilistic analysis showing that the error probability can be driven down exponentially by modest repetition.
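The amplification claim can be made concrete (a standard Monte Carlo argument, not a formula quoted from the paper): if a single run reports a wrong multiplicity with probability at most p < 1, then t independent runs all err with probability at most

```latex
\Pr[\text{all } t \text{ independent runs err}] \;\le\; p^{t},
\qquad\text{so}\quad
t \;\ge\; \frac{\log(1/\varepsilon)}{\log(1/p)}
\quad\text{drives the error below } \varepsilon .
```

Hence the number of repetitions needed grows only logarithmically in the target confidence.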
In summary, the work delivers a theoretically sound and practically efficient framework for recovering the full characteristic polynomial from the minimal polynomial in a black‑box setting. Its adaptive combination of splitting, sigma‑vector probing, and modular reconstruction makes it broadly applicable to large sparse matrices encountered in scientific computing, cryptographic verification, and graph algorithms, thereby extending the toolbox of black‑box linear algebra.