Linear Systems and Eigenvalue Problems: Open Questions from a Simons Workshop


This document presents a series of open questions arising in matrix computations, i.e., the numerical solution of linear algebra problems. It is a result of working groups at the workshop *Linear Systems and Eigenvalue Problems*, which was organized as part of the Simons Institute for the Theory of Computing program on *Complexity and Linear Algebra* in Fall 2025. The complexity and numerical solution of linear algebra problems and related fields form a crosscutting area between theoretical computer science and numerical analysis. The value of the particular problem formulations here is that they were produced via discussions between researchers from both groups. The open questions are organized in five categories: iterative solvers for linear systems, eigenvalue computation, low-rank approximation, randomized sketching, and other areas including tensors, quantum systems, and matrix functions.


💡 Research Summary

This document compiles a curated list of open research questions that emerged from the “Linear Systems and Eigenvalue Problems” workshop held at the Simons Institute for the Theory of Computing in the fall of 2025. The authors, representing both theoretical computer science (TCS) and numerical linear algebra (NLA) communities, organized 55 questions into five thematic sections: iterative solvers for linear systems, eigenvalue computation, low‑rank approximation, randomized sketching, and extensions to tensors, quantum Hamiltonians, and matrix functions.

Section 2 – Iterative Solvers
The authors call for a systematic suite of parameterized benchmark problems derived from partial differential equations (PDEs) such as diffusion, convection‑diffusion, and Helmholtz. These benchmarks should span a wide range of matrix properties (e.g., symmetric diagonally dominant (SDD), symmetric diagonally dominant M‑matrices (SDDM), and more general weakly‑coupled diagonally dominant matrices) and be amenable both to TCS‑style near‑linear‑time algorithms and to classical NLA preconditioners (BoomerAMG, GenEO, etc.). Two concrete tasks are posed: (1) develop software that can generate families of discretized PDE systems with tunable coefficients, and (2) design and analyze algorithms that solve each family in linear or nearly linear time, possibly extending graph‑Laplacian techniques to nonsymmetric or indefinite systems. The section also highlights gaps in multigrid theory beyond standard model problems, asking for O(n) convergence proofs for broader matrix classes and for a deeper understanding of how smoothing and approximation properties interact with modern preconditioners.
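As a minimal sketch of task (1), the snippet below generates a one-dimensional convection–diffusion family with tunable coefficients via central finite differences. The function name and parameterization are illustrative, not from the source; real benchmark suites would cover 2D/3D domains and variable coefficients.

```python
import numpy as np

def convection_diffusion_1d(n, nu=1.0, beta=0.0):
    """Tridiagonal finite-difference discretization of
    -nu*u'' + beta*u' on a uniform grid with n interior points
    and Dirichlet boundaries. A toy instance of a tunable PDE
    benchmark family: nu and beta are the knobs."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    diag = 2.0 * nu / h**2
    lower = -nu / h**2 - beta / (2.0 * h)  # central difference for u'
    upper = -nu / h**2 + beta / (2.0 * h)
    for i in range(n):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = lower
        if i < n - 1:
            A[i, i + 1] = upper
    return A

# Pure diffusion (beta=0) yields a symmetric, weakly diagonally
# dominant M-matrix; increasing beta makes the system nonsymmetric.
A = convection_diffusion_1d(50, nu=1.0, beta=0.0)
B = convection_diffusion_1d(50, nu=1.0, beta=5.0)
```

Sweeping `nu` and `beta` moves the matrices across the SDD/SDDM/nonsymmetric classes the section enumerates, which is exactly the kind of tunability the benchmark task asks for.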

Section 3 – Eigenvalue Problems
Key challenges include derandomizing the “pseudospectral shattering” technique that underlies recent probabilistic QR convergence proofs, i.e., constructing deterministic matrix perturbations that achieve the same eigenvalue spread as random perturbations. Additional questions address the trade‑off between numerical precision and algorithmic accuracy for tridiagonal and bidiagonal matrices, the distribution of Ritz values in Krylov subspaces, and the robustness of MR³‑type algorithms for non‑Hermitian spectra. The authors also ask for provably efficient methods to compute the largest or rightmost eigenvalues of large, possibly non‑normal matrices.
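To make the shattering idea concrete, the sketch below perturbs a Jordan block (whose exact eigenvalues all coincide, so the minimum eigenvalue gap is zero) by a small random complex matrix and measures the resulting gap. The perturbation size `gamma` is illustrative; the derandomization question asks for a deterministic `E` achieving comparable gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# A maximally defective matrix: one eigenvalue of multiplicity n.
J = np.diag(np.ones(n - 1), k=1)

def min_gap(M):
    """Smallest pairwise distance between the eigenvalues of M."""
    ev = np.linalg.eigvals(M)
    d = np.abs(ev[:, None] - ev[None, :])
    mask = ~np.eye(M.shape[0], dtype=bool)
    return d[mask].min()

gamma = 1e-3                          # perturbation size (illustrative)
E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E *= gamma / np.linalg.norm(E, 2)
shattered_gap = min_gap(J + E)        # positive with high probability
```

The random perturbation “shatters” the multiple eigenvalue into well-separated simple eigenvalues, which is what enables the probabilistic convergence analyses the section refers to.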

Section 4 – Low‑Rank Approximation and Column Selection
The workshop participants seek a theoretical foundation for when greedy column‑selection schemes (QR with column pivoting, LU with complete pivoting) succeed, especially on matrices with special structure such as Laplacians, block‑low‑rank, or hierarchically‑semiseparable forms. They introduce the “discrete Lehmann representation” and “orthogonal rows” concepts as potential analytical tools, and request a precise relationship between volume sampling and optimal subset selection. Clarifying the role of incoherence and random row subsampling in guaranteeing approximation quality is also highlighted.
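A minimal numpy sketch of the greedy pivoting idea: repeatedly pick the column of largest residual norm and project it out, which reproduces the pivot choices of QR with column pivoting. The function name is illustrative, and no claims from the source about when this succeeds are encoded here.

```python
import numpy as np

def greedy_column_select(A, k):
    """Greedy column pivoting: pick the column with the largest
    residual norm, deflate it from the remaining columns, repeat.
    A minimal sketch of the greedy schemes discussed above."""
    R = A.astype(float).copy()
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        picked.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)   # project out the chosen direction
    return picked

rng = np.random.default_rng(0)
# An exactly rank-3 matrix: 3 greedily chosen columns should
# reconstruct it (the open questions concern harder instances).
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
cols = greedy_column_select(A, 3)
```

On generic low-rank inputs the greedy choice recovers a spanning column subset; the open questions ask precisely when such guarantees extend to structured matrices and how greedy selection compares with volume sampling.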

Section 5 – Randomized Sketching
A novel notion called “injection” is presented, which relaxes the usual subspace‑embedding guarantees while still enabling accurate least‑squares solutions with fewer samples. The authors ask whether injections can replace full embeddings in a broad class of algorithms, and they query the optimality of existing sparse embeddings (e.g., CountSketch, sparse Johnson‑Lindenstrauss transforms). Structured embeddings such as subsampled randomized Hadamard transforms are examined for their trade‑offs between computational speed and approximation error, with an open problem of characterizing their exact limits.
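The sketch-and-solve pattern behind these questions can be illustrated with a CountSketch-style sparse embedding: one random ±1 entry per column. The sketch size `m` below is an illustrative constant, not a bound from the source.

```python
import numpy as np

def countsketch(m, n, rng):
    """Sparse sketching matrix with a single random +/-1 entry per
    column (the CountSketch construction mentioned above)."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(1)
n, d, m = 2000, 10, 400            # tall least-squares problem, m << n
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
S = countsketch(m, n, rng)
# Sketch-and-solve: solve the much smaller sketched problem.
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
```

A subspace embedding guarantees the sketched residual is within a (1+ε) factor of optimal; the “injection” question asks how much of this guarantee can be relaxed while keeping accurate least-squares solutions at even smaller sketch sizes.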

Section 6 – Tensors, Quantum Systems, and Matrix Functions
Open problems extend to tensor decompositions, specifically Tensor‑Train (TT) formats: can one devise algorithms with provable, probabilistic error bounds rather than purely heuristic guarantees? In the quantum realm, the authors propose studying eigenvalue problems arising from 2‑local Hamiltonians, seeking reductions to linear systems that can be tackled by existing solvers. Finally, they define a class of matrix functions that can be computed using a fixed number of matrix‑matrix products (e.g., the matrix sign function) and request a systematic analysis of their approximation properties and computational complexity.
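The matrix sign function is a natural member of the multiplication-only class mentioned above: the Newton–Schulz iteration X ← X(3I − X²)/2 uses only matrix-matrix products. The sketch below is illustrative; the prescaling by the spectral norm is one simple way to land in the iteration's convergence region.

```python
import numpy as np

def sign_newton_schulz(A, iters=30):
    """Newton-Schulz iteration for the matrix sign function.
    Each step costs two matrix-matrix products and no inverses,
    so sign(A) is computable with a fixed number of matmuls once
    the spectrum is scaled into the convergence region."""
    X = A / np.linalg.norm(A, 2)       # prescale (illustrative choice)
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = 0.5 * X @ (3.0 * I - X @ X)
    return X

# Symmetric test matrix with known sign: sign(A) = V sign(D) V^T.
rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((6, 6)))[0]
d = np.array([2.0, 1.0, 0.5, -0.5, -1.0, -3.0])
A = V @ np.diag(d) @ V.T
S = sign_newton_schulz(A)
```

Characterizing which other matrix functions admit such multiplication-only schemes, and with what approximation quality and complexity, is the systematic analysis the section requests.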

Overall, the paper serves as a roadmap for collaborative research between TCS and NLA. By pinpointing concrete benchmark generation, algorithmic design, and rigorous analysis tasks, it aims to bridge the gap between asymptotic complexity results and practical, high‑performance numerical software. The authors anticipate that addressing these questions will shape the next decade of advances in scalable linear‑system solvers, robust eigenvalue algorithms, and efficient low‑rank approximations across scientific computing, data science, and quantum simulation.

