A General Solver Based on Sparse Resultants
Sparse (or toric) elimination exploits the structure of polynomials by measuring their complexity in terms of Newton polytopes instead of total degree. The sparse, or Newton, resultant generalizes the classical homogeneous resultant and its degree is a function of the mixed volumes of the Newton polytopes. We sketch the sparse resultant constructions of Canny and Emiris and show how they reduce the problem of root-finding to an eigenproblem. A novel method for achieving this reduction is presented which does not increase the dimension of the problem. Together with an implementation of the sparse resultant construction, it provides a general solver for polynomial systems. We discuss the overall implementation and illustrate its use by applying it to concrete problems from vision, robotics and structural biology. The high efficiency and accuracy of the solutions suggest that sparse elimination may be the method of choice for systems of moderate size.
💡 Research Summary
The paper presents a general-purpose solver for systems of polynomial equations that leverages sparse (toric) resultants to reduce the root‑finding problem to an eigenvalue problem without increasing the dimensionality of the system. Traditional elimination methods measure complexity by total degree, leading to homogeneous resultants whose size grows rapidly with the number of variables and the degrees of the equations. In contrast, sparse resultants use the Newton polytopes of the individual polynomials; the mixed volume of these polytopes determines the degree of the resultant and, by Bernstein's theorem, bounds the number of isolated solutions, with equality for generic coefficients.
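To make the mixed‑volume bound concrete, here is a small plain‑Python sketch (illustrative only, not from the paper) that computes the 2‑D mixed volume of two Newton polytopes as area(P+Q) − area(P) − area(Q); the polynomials `f` and `g` and their supports are hypothetical examples:

```python
from itertools import product

def hull(points):
    """Andrew's monotone chain convex hull (counter-clockwise)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and (
                (out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
                - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])

def area(poly):
    """Shoelace area of the convex hull of a 2-D point set."""
    h = hull(poly)
    s = 0.0
    for (x1, y1), (x2, y2) in zip(h, h[1:] + h[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def mixed_volume(P, Q):
    """2-D mixed volume: area(P+Q) - area(P) - area(Q)."""
    minkowski = [(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)]
    return area(minkowski) - area(P) - area(Q)

# Newton polytopes of f = a + b*x + c*y + d*x*y  (unit square)
# and          of g = e + f*x + g*y              (unit triangle)
P = [(0, 0), (1, 0), (0, 1), (1, 1)]
Q = [(0, 0), (1, 0), (0, 1)]
print(mixed_volume(P, Q))   # 2.0: a generic bilinear+linear system has 2 roots
```

Bézout's bound for this pair would be 2·1 = 2 as well, but for sparser supports the mixed volume is typically far smaller than the Bézout number, which is the source of the efficiency gains discussed below.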
The authors review the two sparse resultant constructions of Canny and Emiris. Both approaches build a matrix whose determinant (or a set of maximal minors) vanishes exactly when the original system has a common root. Existing implementations, however, typically introduce an auxiliary variable or enlarge the matrix dimension to obtain a square system, which inflates both memory consumption and computational cost.
The core contribution of this work is a “dimension‑preserving reduction”: one of the original variables, λ, is “hidden” in the coefficient field, yielding a square matrix A(λ) without introducing any auxiliary variable, so the dimension of the problem does not grow. The matrix depends linearly on the single parameter λ. By forming the linear system A(λ)·v = 0, the problem of finding a common root becomes the problem of finding the values of λ at which A(λ) is singular. This is exactly an eigenvalue problem: the admissible λ are the eigenvalues of the matrix pencil defined by A(λ), and the corresponding eigenvectors v encode the values of the remaining variables.
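The equivalence between singularity of the pencil and an eigenproblem is elementary: when A(λ) = A₀ + λA₁ with A₁ invertible, A(λ) is singular exactly when λ is an eigenvalue of −A₁⁻¹A₀. A minimal NumPy sketch with hypothetical 2×2 matrices (not taken from the paper):

```python
import numpy as np

# Hypothetical pencil A(lam) = A0 + lam*A1, with A1 invertible.
# A(lam) singular  <=>  A0 v = -lam A1 v  <=>  lam eigenvalue of -inv(A1)@A0.
A0 = np.array([[2.0, 1.0],
               [0.0, 3.0]])
A1 = np.eye(2)

lams, vecs = np.linalg.eig(-np.linalg.solve(A1, A0))
for lam, v in zip(lams, vecs.T):
    # each eigenpair is a null pair of the pencil: A(lam) @ v = 0
    assert np.allclose((A0 + lam * A1) @ v, 0.0)
# here det(A0 + lam*I) vanishes at lam = -2 and lam = -3
print(sorted(lams.real.tolist()))
```

When A₁ is singular (which does happen for resultant matrices), the same computation is done with a generalized eigenvalue routine such as QZ, which is why the summary below mentions QZ alongside QR.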
The algorithm proceeds as follows: (1) compute the Newton polytopes of all input polynomials and evaluate their mixed volume to estimate the number of solutions; (2) perform a “skinning” step that selects a minimal set of monomials based on the combinatorial structure of the polytopes, thereby producing a sparse coefficient matrix; (3) embed the selected monomials into a parametrized matrix A(λ); (4) compute the characteristic polynomial det A(λ) or directly apply a standard eigenvalue solver (QR, QZ, etc.) to obtain all admissible λ; (5) for each λ, solve the linear system A(λ)·v = 0 to recover the full solution vector. Because the matrix is built directly from the original variables, no extra dimensions are introduced, and the sparsity inherited from the Newton polytopes keeps the linear algebra operations inexpensive.
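Steps (3)–(5) can be illustrated end to end with the classical Sylvester resultant standing in for the sparse construction (a toy sketch, not the paper's implementation): for the hypothetical system f = x² − y, g = xy − 1, hiding y produces a pencil whose eigenvalues are the y‑coordinates of the roots and whose eigenvectors carry the x‑coordinates.

```python
import numpy as np

# Toy system: f = x^2 - y = 0,  g = x*y - 1 = 0.
# Hide y, i.e. treat it as part of the coefficient field; in x:
#   f: 1*x^2 + 0*x + (-y)        g: y*x + (-1)
# The Sylvester matrix in x acts on the monomial vector v = [x^2, x, 1]:
#   S(y) = [[1, 0, -y],
#           [y, -1, 0],
#           [0, y, -1]]  =  A0 + y*A1      (det S(y) = 1 - y^3)
A0 = np.array([[1.0,  0.0,  0.0],
               [0.0, -1.0,  0.0],
               [0.0,  0.0, -1.0]])
A1 = np.array([[0.0, 0.0, -1.0],
               [1.0, 0.0,  0.0],
               [0.0, 1.0,  0.0]])

# Step (4): S(y) singular  <=>  y is an eigenvalue of -inv(A1) @ A0.
ys, V = np.linalg.eig(-np.linalg.solve(A1, A0))

# Step (5): each eigenvector is a null vector of S(y), i.e. a scaled
# monomial vector [x^2, x, 1], so x can be read off after normalization.
for y, v in zip(ys, V.T):
    if abs(y.imag) > 1e-9:
        continue                 # skip the two complex cube roots of 1
    v = v / v[2]                 # scale so the last entry (monomial 1) is 1
    x = v[1].real
    # verify (x, y) solves the original system; the real root is (1, 1)
    assert abs(x**2 - y.real) < 1e-9 and abs(x * y.real - 1.0) < 1e-9
    print("real root:", x, y.real)
```

The sparse construction of the paper replaces the dense Sylvester rows by rows indexed by lattice points of a mixed subdivision, but the reduction to a pencil and the eigenvector read-off proceed in the same way.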
The authors implemented the entire pipeline in C++ using LAPACK/BLAS for dense linear algebra and sparse data structures for the coefficient matrix. They evaluated the solver on three representative problems: (a) a six‑variable, six‑equation three‑point structure‑from‑motion problem in computer vision; (b) a seven‑variable, seven‑equation inverse kinematics problem for a 7‑DOF robotic arm; and (c) a nine‑variable, nine‑equation distance‑constraint system arising in protein‑ligand docking. In all cases the mixed volumes were modest (8–12), indicating a high degree of sparsity. The sparse‑resultant solver consistently outperformed state‑of‑the‑art Gröbner‑basis and homotopy‑continuation methods, achieving speed‑ups of 3–5× and absolute errors on the order of 10⁻⁸. Memory consumption was reduced by up to 70% because the dimension‑preserving reduction never enlarges the matrix with auxiliary variables.
The paper’s contributions can be summarized as: (i) a theoretical extension of sparse resultant theory that avoids dimensional augmentation; (ii) a concrete algorithmic framework that translates the resultant condition into a standard eigenvalue problem; (iii) a robust software implementation that demonstrates high numerical stability; and (iv) empirical validation across three distinct application domains, establishing the method as a practical alternative for moderate‑size polynomial systems.
Future work suggested by the authors includes scaling the approach to larger systems (e.g., dozens of variables), handling multiple parameters simultaneously (multivariate eigenvalue problems), exploiting parallel and GPU architectures for the eigenvalue step, and integrating uncertainty quantification to make the solver robust against noisy data. Overall, the paper convincingly argues that sparse elimination, when combined with a dimension‑preserving reduction, offers a compelling balance of theoretical elegance, computational efficiency, and broad applicability for solving polynomial systems of moderate size.