New lower bounds for the rank of matrix multiplication

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

The rank of the matrix multiplication operator for n × n matrices is one of the most studied quantities in algebraic complexity theory. I prove that the rank is at least 3n² − o(n²). More precisely, for any integer p ≤ n − 1, the rank is at least (3 − 1/(p + 1))n² − (1 + 2p·C(2p, p − 1))n. The previous lower bound, due to Bläser, was 5n²/2 − 3n (the case p = 1). The new bounds improve Bläser's bound for all n > 84. I also prove lower bounds for rectangular matrices that are significantly better than the previous ones.


💡 Research Summary

The paper addresses one of the central problems in algebraic complexity theory: determining tight lower bounds for the rank of the matrix multiplication tensor Mₙ, which encodes the bilinear map (A, B) ↦ AB for n × n matrices. The rank of this tensor equals the minimum number of scalar multiplications required by any exact bilinear algorithm for matrix multiplication, so a lower bound on rank translates directly into a complexity lower bound. Historically, the trivial lower bound is n² (the number of entries in the product), while non‑trivial improvements have been scarce. Strassen’s early work gave n² + 1, and the best previously known bound was due to Bläser (2003), who proved rank(Mₙ) ≥ 5n²/2 − 3n. This bound, however, remains far from the best known upper bounds (e.g., O(n^{2.81}) from Strassen’s algorithm, and exponents below 2.38 from the Coppersmith‑Winograd family of algorithms).
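To make the rank-as-multiplication-count correspondence concrete, the classic example is Strassen's scheme, which multiplies 2 × 2 matrices using 7 rather than 8 scalar multiplications, witnessing rank(M₂) ≤ 7. The sketch below is a standard illustration (not taken from this paper) that checks the seven-product scheme against naive multiplication:

```python
import random

def strassen_2x2(A, B):
    """Strassen's 7-multiplication scheme for 2x2 matrices: each m_i is one
    scalar multiplication; all other operations are additions/subtractions."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    """Definition of the product: 8 scalar multiplications."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Both schemes agree on random integer inputs.
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert strassen_2x2(A, B) == naive_2x2(A, B)
```

The seven products m₁, …, m₇ are exactly seven rank-one terms of the tensor M₂, which is what "rank(M₂) ≤ 7" means.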

The main contribution of the present work is a family of new lower bounds that dominate Bläser’s result for all n > 84. For any integer p with 1 ≤ p ≤ n − 1, the author shows

  rank(Mₙ) ≥ (3 − 1/(p + 1)) n² − (1 + 2p·C(2p, p − 1)) n,

where C(2p, p − 1) denotes the binomial coefficient “2p choose p − 1”. When p = 1 the formula reduces exactly to Bläser’s bound, confirming that the new result genuinely generalizes the earlier work. As p grows, the leading coefficient 3 − 1/(p + 1) approaches 3, while the coefficient of the linear correction term grows exponentially in p (like 4^p up to polynomial factors); letting p increase slowly with n, slowly enough that the correction term stays o(n²), yields the headline bound 3n² − o(n²). Already the case p = 2 gives (8/3)n² − 17n, which exceeds Bläser’s 5n²/2 − 3n exactly when n > 84, establishing a strict improvement for every such n.
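The tradeoff in p is easy to check numerically. The sketch below evaluates the stated bound in exact rational arithmetic and confirms that p = 1 reproduces Bläser's bound while p = 2 strictly beats it for every n > 84 (with equality at n = 84):

```python
from fractions import Fraction
from math import comb

def new_bound(n, p):
    """The paper's bound: (3 - 1/(p+1)) n^2 - (1 + 2p*C(2p, p-1)) n."""
    lead = (Fraction(3) - Fraction(1, p + 1)) * n * n
    corr = (1 + 2 * p * comb(2 * p, p - 1)) * n
    return lead - corr

def blaser_bound(n):
    """Bläser's earlier bound: 5n^2/2 - 3n."""
    return Fraction(5, 2) * n * n - 3 * n

# p = 1 reproduces Bläser's bound exactly.
assert all(new_bound(n, 1) == blaser_bound(n) for n in range(2, 100))

# p = 2 ties Bläser at n = 84 and strictly beats it for all larger n,
# since (8/3)n^2 - 17n > (5/2)n^2 - 3n iff n^2/6 > 14n iff n > 84.
assert new_bound(84, 2) == blaser_bound(84)
assert all(new_bound(n, 2) > blaser_bound(n) for n in range(85, 1000))
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity near the crossover point n = 84.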

The proof technique is based on a refined “partial tensor projection” combined with representation‑theoretic flattenings. The author constructs a p‑dimensional subspace Vₚ of the ambient space ℂ^{n² × n² × n²} by grouping rows and columns of the input matrices into blocks of size p. Projecting Mₙ onto Vₚ yields a new tensor Tₚ that lives in a linear space of dimension (3 − 1/(p + 1)) n². The crucial observation is that the rank of Tₚ cannot be smaller than the dimension of its ambient space, because the projection respects the action of the symmetric group S_{2p} on the block indices. By carefully analyzing the invariant subspaces under this group action, the author computes the exact dimension loss caused by overlapping blocks; this loss is precisely the term (1 + 2p·C(2p, p − 1)) n. The argument uses elementary combinatorial identities together with standard facts about tensor flattenings (i.e., viewing a 3‑tensor as a matrix after fixing one mode).
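As generic background on flattenings (the snippet below is an illustration of the notion, not the paper's construction): viewing the n × n matrix multiplication tensor as a matrix after fixing one mode yields an n² × n⁴ matrix of full rank n², which already recovers the trivial lower bound rank(Mₙ) ≥ n². A small exact-arithmetic check:

```python
from fractions import Fraction

def matmul_tensor(n):
    """The n x n matrix multiplication tensor: the coefficient of
    A[i][j]*B[k][l] in (AB)[p][q] is 1 iff j == k, i == p, l == q."""
    N = n * n
    T = [[[0] * N for _ in range(N)] for _ in range(N)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                T[i * n + j][j * n + l][i * n + l] = 1
    return T

def flatten_first_mode(T):
    """View the 3-tensor as an N x N^2 matrix by grouping the last two modes."""
    N = len(T)
    return [[T[a][b][c] for b in range(N) for c in range(N)] for a in range(N)]

def matrix_rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# Each flattening of M_n has full rank n^2, giving rank(M_n) >= n^2.
for n in (2, 3):
    assert matrix_rank(flatten_first_mode(matmul_tensor(n))) == n * n
```

Since the rank of a tensor is bounded below by the rank of any of its flattenings, this is the standard way such matrix-level facts feed into tensor lower bounds.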

Beyond square matrices, the paper extends the method to rectangular matrix multiplication tensors M_{l,m,n} (multiplying an l × m matrix by an m × n matrix). By selecting p relative to the smallest of the three dimensions, the same projection‑flattening machinery yields bounds of the same shape as in the square case: a leading quadratic term whose coefficient approaches 3, minus a correction term of order p·C(2p, p − 1) times the largest dimension. This significantly improves on the previously best known rectangular bounds, which were essentially linear in the sum of the dimensions.

An additional contribution is the observation that the derived lower bounds also apply to the border rank of the matrix multiplication tensor. Since the construction of Vₚ and the dimension calculations are algebraic and remain valid under limits of tensors, the same inequality holds for the border rank, which is the asymptotic analogue of rank relevant for fast matrix multiplication algorithms.

The paper concludes with a discussion of implications and open problems. The new bounds push the known lower limit for matrix multiplication complexity up to roughly 3 n², narrowing the gap to the best known upper bounds. The author suggests that combining the present projection technique with the “laser method” (a powerful tool for constructing fast algorithms) could potentially yield even stronger lower bounds for higher‑order tensors. Moreover, optimizing the choice of p as a function of n, and exploring alternative group actions that might reduce the combinatorial loss term, are highlighted as promising directions for future research.

Overall, the work provides a significant step forward in quantifying the inherent difficulty of matrix multiplication, offering both a concrete improvement over the longstanding Bläser bound and a versatile framework that can be adapted to a variety of matrix dimensions and to the study of border rank.

