Computational Complexity and Numerical Stability of Linear Problems
We survey classical and recent developments in numerical linear algebra, focusing on two issues: computational complexity (arithmetic cost) and numerical stability (behavior under roundoff error). We give a brief account of algebraic complexity theory as well as the general error analysis for matrix multiplication and related problems. We emphasize the central role played by the matrix multiplication problem and discuss historical and modern approaches to its solution.
💡 Research Summary
The paper provides a comprehensive survey of the interplay between computational complexity and numerical stability in linear algebra, with matrix multiplication serving as the central theme. It begins by formalizing two notions of cost: total arithmetic complexity (L_tot), which counts all basic operations, and multiplicative complexity (L), which counts only multiplications (including divisions). By modeling a bilinear map ϕ: U × V → W as a third‑order tensor t ∈ U ⊗ V ⊗ W, the authors show that the tensor rank R(t) bounds the multiplicative complexity from above, and more precisely that L(ϕ) ≤ R(t) ≤ 2 L(ϕ). This relationship allows one to study algorithmic limits through algebraic properties of tensors.
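As a concrete illustration of rank bounding multiplicative complexity (a sketch not taken from the paper), consider complex multiplication viewed as a bilinear map ℝ² × ℝ² → ℝ². Its naive evaluation uses 4 multiplications, but the corresponding tensor has rank 3, and Gauss's classical identity achieves exactly 3:

```python
def complex_mult_rank3(a, b, c, d):
    """Compute (a + bi)(c + di) with 3 multiplications instead of 4.

    Each product m1, m2, m3 corresponds to one rank-one term in a
    rank-3 decomposition of the complex-multiplication tensor.
    """
    m1 = a * c
    m2 = b * d
    m3 = (a + b) * (c + d)
    # real part = ac - bd, imaginary part = ad + bc
    return (m1 - m2, m3 - m1 - m2)

# (1 + 2i)(3 + 4i) = -5 + 10i
print(complex_mult_rank3(1, 2, 3, 4))
```

Three multiplications here is optimal: the rank of this tensor is known to be exactly 3, so no bilinear algorithm can do better.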
The authors then focus on the matrix multiplication tensor h_{n,n,n}. For n = 2 the rank is known to be 7 (Winograd), while for n = 3 the exact rank remains unknown, with the best bounds 19 ≤ R ≤ 23. The matrix multiplication exponent ω(F) is defined as the infimum of τ such that L_tot for n × n matrix multiplication is O(n^τ). Classical bounds give 2 ≤ ω ≤ 3; Strassen’s algorithm (2 × 2 matrices with 7 multiplications) reduces the upper bound to ω ≤ log₂ 7 ≈ 2.807.
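Strassen's seven bilinear products for the 2 × 2 case can be written out directly; the sketch below (standard formulas, not code from the paper) multiplies one 2 × 2 block, and applying it recursively to blocks yields the O(n^{log₂ 7}) algorithm:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen, 1969).

    The entries a..h may themselves be matrices, which is what makes
    the recursive block algorithm with exponent log2(7) possible.
    """
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Note the trade-off the survey's stability theme turns on: Strassen uses 7 multiplications but 18 additions/subtractions versus the classical 8 and 4, which affects both the constant in L_tot and the roundoff behavior.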
Two major families of techniques that have pushed ω below 3 are examined in depth. The first is the “laser method,” which packs several small tensors into a single “laser” tensor and exploits its border rank R̄(t). Border rank allows an approximate decomposition t₁(ε) = ∑_{i=1}^r u_i(ε) ⊗ v_i(ε) ⊗ w_i(ε) that converges to t as ε → 0, often with r < R(t). If R̄(h_{e,e,e}) ≤ r, then ω ≤ log_e r, leading to the current best bound ω < 2.376. The method relies on symmetrization, rectangular matrix embeddings, and recursive tensor powers.
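The gap between rank and border rank can be seen numerically in a standard small example (a sketch with NumPy, not drawn from the paper): the tensor t = e₁⊗e₁⊗e₂ + e₁⊗e₂⊗e₁ + e₂⊗e₁⊗e₁ has rank 3, yet two ε-dependent rank-one terms approximate it arbitrarily well, so R̄(t) ≤ 2:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def outer3(u, v, w):
    """Rank-one third-order tensor u ⊗ v ⊗ w."""
    return np.einsum('i,j,k->ijk', u, v, w)

# Target tensor of rank 3
t = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

def t_approx(eps):
    """Two rank-one terms: (1/eps) * [(e1 + eps*e2)^{⊗3} - e1^{⊗3}].

    Expanding in eps gives t + O(eps), so the approximation error
    vanishes as eps → 0 even though no exact rank-2 decomposition exists.
    """
    u = e1 + eps * e2
    return (outer3(u, u, u) - outer3(e1, e1, e1)) / eps

for eps in (1e-1, 1e-3, 1e-5):
    print(eps, np.abs(t_approx(eps) - t).max())
```

The printed error shrinks linearly with ε, illustrating why border-rank decompositions can use fewer terms than the rank while still yielding exact-arithmetic bounds on ω via limits, at the price of the instability issues the survey discusses.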
The second family is group‑theoretic algorithms. By embedding matrix multiplication into the group algebra ℂ[G] of a suitable finite group G, the problem reduces to multiplication in ℂ[G], which the Fourier transform block‑diagonalizes into smaller matrix multiplications; bounds on ω then follow from the representation theory of G (the Cohn–Umans framework).