Efficient $k$-Sign Consistency Verification of Hankel Matrices via Schur Polynomials
We consider the problem of certifying (strict) $k$-sign consistency of a matrix, that is, whether all of its $k$-th order minors share the same (strict) sign. Although this problem is generally of combinatorial complexity, we show that for Hankel matrices it can be significantly simplified: our sufficient condition requires checking only the $k$-th order minors of a reshaped Hankel matrix with $k$ rows. Remarkably, when applied to the Hankel operator, this sufficient condition is also necessary. Comparable results were known only in the setting of (strictly) $k$-positive Hankel matrices and operators, in which all minors of order up to $k$ have the same (strict) sign. More concretely, we derive a formula expressing the $k$-th order minors of Hankel matrices as nonnegative integer linear combinations of $k$-th order minors with consecutive row indices. Our derivation uses Schur polynomial theory to show that the $k$-th order minors of any matrix are nonnegative integer linear combinations of row-consecutive $k$-th order minors, meaning minors formed from distinct columns whose consecutive row indices need not coincide across columns. For Hankel matrices, these minors coincide – up to sign changes arising from column swaps – with the usual $k$-th order minors with consecutive row indices. Our main result then follows by showing that the sum of certain signed nonnegative integer coefficients equals the corresponding Littlewood–Richardson coefficients. In our problem, the nonnegativity of these coefficients ensures that negatively signed column permutations are cancelled by positively signed ones. Our results also extend naturally to Toeplitz matrices and operators, and we present a partial analogue for circulant matrices.
💡 Research Summary
The paper tackles the problem of verifying (strict) k‑sign consistency of a matrix, i.e., whether all its k‑th order minors share the same (strict) sign. While this verification is combinatorially hard in general, the authors show that for Hankel matrices the task can be dramatically simplified. Their main contribution is a formula that expresses every k‑th order minor of a Hankel matrix as a non‑negative integer linear combination of k‑th order minors taken from a reshaped Hankel matrix that has exactly k rows. Consequently, it suffices to check only those minors of the reduced matrix to certify k‑sign consistency of the original matrix.
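To make the combinatorial cost of the naive verification concrete, here is a minimal brute-force checker (an illustration, not the paper's algorithm): it enumerates every choice of $k$ rows and $k$ columns and tests whether all nonzero $k$-th order minors share a sign. The moment sequence $h_t = 1/(t+1)$ is an illustrative choice; the resulting Hankel (Hilbert-type) matrix is classically totally positive, hence $k$-sign consistent for every $k$.

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row (exact for Fractions)."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_k_sign_consistent(A, k):
    """True iff all nonzero k-th order minors of A share the same sign."""
    signs = set()
    for rows in combinations(range(len(A)), k):
        for cols in combinations(range(len(A[0])), k):
            d = det([[A[i][j] for j in cols] for i in rows])
            if d != 0:
                signs.add(1 if d > 0 else -1)
    return len(signs) <= 1

# Hankel matrix H[i][j] = h_{i+j} built from the sequence h_t = 1/(t+1)
seq = [Fraction(1, t + 1) for t in range(9)]
H = [[seq[i + j] for j in range(5)] for i in range(5)]
print(is_k_sign_consistent(H, 2))  # True: the Hilbert-type matrix is totally positive
```

Even at this toy size the loop visits $\binom{5}{2}\binom{5}{2} = 100$ minors; the count grows combinatorially with the dimensions, which is exactly the cost the paper's criterion avoids.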
The technical backbone of the work relies on algebraic combinatorics, specifically Schur polynomials, Kostka numbers, and Littlewood–Richardson coefficients. First, the authors prove a general statement: for any matrix, each k‑th order minor can be written as a non‑negative integer combination of “row‑consecutive” k‑minors, in which each column contributes a block of k consecutive rows, but the starting rows need not coincide across columns. This is established by expanding determinants via the alternating (bialternant) polynomial representation of Schur functions and applying the product rule for Schur polynomials, $s_\lambda s_\mu = \sum_\gamma c^\gamma_{\lambda\mu} s_\gamma$. The coefficients $c^\gamma_{\lambda\mu}$ are precisely the Littlewood–Richardson numbers, which are known to be non‑negative integers. This non‑negativity guarantees that any sign changes caused by column permutations cancel out.
When the matrix is Hankel, its special structure forces all row‑consecutive minors to coincide (up to a sign determined by column swaps) with ordinary minors formed from consecutive rows of a k‑row reshaped Hankel matrix. The general expansion therefore collapses to a linear combination of these consecutive‑row minors. The authors show that the signed contributions from column permutations combine into the non‑negative Littlewood–Richardson coefficients, with every negatively signed permutation cancelled by a positively signed one. This yields the desired expression and the efficient verification criterion.
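The criterion can be illustrated numerically. The sketch below reads the "reshaped Hankel matrix with k rows" as the k-row Hankel matrix built from the same defining sequence (an assumption about the paper's construction) and checks, by brute force with exact arithmetic, that the k-minors of the reshaped matrix and of the full matrix indeed share a sign for a totally positive example:

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; exact for Fraction entries
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minor_signs(A, k):
    """Set of signs attained by the nonzero k-th order minors of A."""
    signs = set()
    for rows in combinations(range(len(A)), k):
        for cols in combinations(range(len(A[0])), k):
            d = det([[A[i][j] for j in cols] for i in rows])
            if d != 0:
                signs.add(1 if d > 0 else -1)
    return signs

k = 2
seq = [Fraction(1, t + 1) for t in range(9)]             # h_t = 1/(t+1)
H = [[seq[i + j] for j in range(5)] for i in range(5)]   # 5x5 Hankel, H[i][j] = h_{i+j}
R = [[seq[i + j] for j in range(len(seq) - k + 1)]       # assumed k-row reshape
     for i in range(k)]
print(minor_signs(R, k), minor_signs(H, k))  # both {1} for this example
```

For this sequence both sign sets are $\{+1\}$: the minors of the small k-row matrix certify the sign of every k-minor of the full matrix, which is the content of the sufficient condition.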
For the infinite‑dimensional Hankel operator, the same reshaped k‑row matrix captures all k‑minors of the operator, making the sufficient condition also necessary. Hence the paper provides a necessary and sufficient characterization of k‑sign consistency for Hankel operators.
The results extend naturally to Toeplitz matrices because a Toeplitz matrix is obtained from a Hankel matrix by reversing the order of columns; the same argument applies after this simple transformation. A partial analogue is presented for circulant matrices, though a full characterization remains open.
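The Hankel-to-Toeplitz transformation is elementary enough to sketch directly. Reversing the column order of a Hankel matrix $H[i][j] = h_{i+j}$ produces a Toeplitz matrix (constant along diagonals), and reversing $k$ columns multiplies every $k$-minor by the fixed sign $(-1)^{k(k-1)/2}$, which is why sign consistency transfers; the Fibonacci-style sequence below is just sample data.

```python
seq = [1, 2, 3, 5, 8, 13, 21]  # arbitrary sample sequence h_0, ..., h_6
n = 4
H = [[seq[i + j] for j in range(n)] for i in range(n)]  # Hankel: H[i][j] = h_{i+j}
T = [row[::-1] for row in H]                            # reverse the column order

# T is Toeplitz: T[i][j] = seq[i - j + n - 1], constant along each diagonal
assert all(T[i][j] == seq[i - j + n - 1] for i in range(n) for j in range(n))
print(T[0])
```

The same argument then certifies k-sign consistency of T from the Hankel result, up to the global sign factor above.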
The paper also situates its findings within the broader theory of total positivity. While total positivity (all minors non‑negative) can be certified by checking only consecutive minors up to order k, k‑sign consistency is a more general property. The authors demonstrate that, for Hankel structures, the verification cost drops from combinatorial in both dimensions (on the order of $\binom{m}{k}\binom{n}{k}$ minors) to checking only the k‑th order minors of a k‑row reshaped matrix, making the test tractable for large-scale applications such as signal processing, system identification, and moment problems where Hankel matrices naturally arise.
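The savings can be quantified with a back-of-the-envelope count. The figures below assume the reduced test checks all k-minors of the k-row reshaped matrix, which for an $m \times n$ Hankel matrix has $m + n - k$ columns (one per length-$k$ window of the defining sequence $h_0, \dots, h_{m+n-2}$); the exact count the paper requires may differ, so this only contrasts orders of magnitude.

```python
from math import comb

m = n = 100  # a 100 x 100 Hankel matrix
k = 3

# Brute force: every choice of k rows and k columns of the full matrix
brute = comb(m, k) * comb(n, k)

# Reduced test: one choice of rows (all k of them) in the k-row reshape,
# and any k of its m + n - k columns
reduced = comb(m + n - k, k)

print(brute, reduced)  # ~2.6e10 minors versus ~1.3e6
```

Even for modest k the reduction is several orders of magnitude, which is what makes the criterion usable at realistic problem sizes.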
In summary, the work bridges determinant theory, Schur polynomial algebra, and combinatorial representation theory to produce a practically useful criterion for k‑sign consistency of Hankel (and related) matrices, offering both theoretical insight and algorithmic efficiency.