We give a polynomial-time dynamic programming algorithm for solving the linear complementarity problem with tridiagonal or, more generally, Hessenberg P-matrices. We briefly review three known tractable matrix classes and show that none of them contains all tridiagonal P-matrices.
Given a matrix M ∈ R^{n×n} and a vector q ∈ R^n, the linear complementarity problem LCP(M, q) is to find vectors w, z ∈ R^n such that

w − Mz = q,  w, z ≥ 0,  w^T z = 0.    (1)
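The three conditions in (1) can be checked mechanically. A minimal sketch in Python (our own illustration, not from the note; the name `solves_lcp` and the list-based representation of M are assumptions):

```python
def solves_lcp(M, q, w, z, tol=1e-9):
    """Check w - M z = q, w >= 0, z >= 0, and w^T z = 0, up to tol.
    M is a list of rows; q, w, z are plain lists of numbers."""
    n = len(q)
    for i in range(n):
        lhs = w[i] - sum(M[i][j] * z[j] for j in range(n))
        if abs(lhs - q[i]) > tol:          # w - M z = q violated in row i
            return False
    if any(wi < -tol for wi in w) or any(zi < -tol for zi in z):
        return False                        # nonnegativity violated
    return abs(sum(wi * zi for wi, zi in zip(w, z))) <= tol  # complementarity

# Tiny example with M = I_2 and q = (1, -1): taking w = (1, 0), z = (0, 1)
# gives w - I z = (1, -1) = q, both vectors nonnegative, w^T z = 0.
M = [[1, 0], [0, 1]]
q = [1, -1]
print(solves_lcp(M, q, [1, 0], [0, 1]))  # True
```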
It is NP-complete in general to decide whether such vectors exist [2]. But if M is a P-matrix (meaning that all principal minors, i.e. determinants of principal submatrices, are positive), then there are unique solution vectors w, z for every right-hand side q [10].
It is unknown whether these vectors can be found in polynomial time [7].
The matrix M = (m_ij)_{i,j=1}^n is tridiagonal if m_ij = 0 for |j − i| > 1. More generally, M is lower Hessenberg if m_ij = 0 for j − i > 1, and M is upper Hessenberg if M^T is lower Hessenberg; see Figure 1.
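These zero patterns translate directly into code; a small sketch (the helper names are ours):

```python
def is_tridiagonal(M):
    """m_ij = 0 whenever |j - i| > 1 (0-based indices)."""
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n)
               if abs(j - i) > 1)

def is_lower_hessenberg(M):
    """m_ij = 0 whenever j - i > 1; tridiagonal implies lower Hessenberg."""
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n)
               if j - i > 1)

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]   # tridiagonal (and hence lower Hessenberg)
B = [[1, 1, 0],
     [1, 1, 1],
     [1, 1, 1]]    # lower Hessenberg, but not tridiagonal
print(is_tridiagonal(A), is_lower_hessenberg(B), is_tridiagonal(B))
# prints: True True False
```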
In this note we show that LCP(M, q) can be solved in polynomial time if M is a lower (or upper) Hessenberg P-matrix. Polynomial-time results already exist for other classes of matrices, most notably Z-matrices [1], hidden Z-matrices [6], and transposed hidden K-matrices [9]. Section 6 shows that none of these classes contains all tridiagonal P-matrices.

Figure 1: A tridiagonal matrix (left) and a lower Hessenberg matrix (right); the nonzero entries are enclosed in bold lines.
For the remainder of this note, we fix a P-matrix M ∈ R n×n and a vector q ∈ R n .
For B ⊆ [n] := {1, 2, . . . , n}, we let M_B be the n × n matrix whose ith column is the ith column of −M if i ∈ B, and the ith column of the n × n identity matrix I_n otherwise. M_B is invertible for every set B, a direct consequence of M having nonzero principal minors. We call B a basis and M_B the associated basis matrix.
The complementary pair (w(B), z(B)) associated with the basis B is defined by

w(B)_i = (M_B^{-1} q)_i if i ∉ B, and w(B)_i = 0 otherwise,    (2)

and

z(B)_i = (M_B^{-1} q)_i if i ∈ B, and z(B)_i = 0 otherwise,    (3)

for all i ∈ [n].
Lemma 2.1. For every basis B ⊆ [n], the following two statements are equivalent.
(i) The pair (w(B), z(B)) solves LCP(M, q), meaning that w = w(B), z = z(B) satisfy (1).
(ii) M_B^{-1} q ≥ 0.
If both statements hold, B is called an optimal basis for LCP(M, q).

Proof. As a consequence of (2) and (3), w = w(B) and z = z(B) already satisfy w − Mz = q and w^T z = 0, for every B. Moreover, w, z ≥ 0 if and only if w(B), z(B) ≥ 0; this in turn is equivalent to M_B^{-1} q ≥ 0.
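Lemma 2.1 yields an effective basis test. A minimal sketch in Python (our own code, not the authors'; 0-based indices instead of the note's 1-based ones, and exact rational arithmetic so sign checks are reliable):

```python
from fractions import Fraction

def basis_matrix(M, B):
    """M_B: column i is column i of -M for i in B, else the ith unit vector."""
    n = len(M)
    return [[Fraction(-M[i][j]) if j in B else Fraction(int(i == j))
             for j in range(n)] for i in range(n)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination over the rationals (exact)."""
    n = len(b)
    aug = [list(A[i]) + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if aug[r][c] != 0)  # pivot row
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                f = aug[r][c] / aug[c][c]
                aug[r] = [a - f * e for a, e in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def is_optimal_basis(M, q, B):
    """Basis test from Lemma 2.1: B is optimal iff M_B^{-1} q >= 0."""
    return all(x >= 0 for x in solve(basis_matrix(M, B), q))

# Example: for the tridiagonal P-matrix below and q = (-1, -1),
# the empty basis fails the test while the full basis passes.
M = [[2, -1], [-1, 2]]
q = [-1, -1]
print(is_optimal_basis(M, q, set()), is_optimal_basis(M, q, {0, 1}))  # False True
```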
From now on, we assume w.l.o.g. that LCP(M, q) is nondegenerate, meaning that (M_B^{-1} q)_i ≠ 0 for all B ⊆ [n] and all i ∈ [n]. We can achieve this e.g. through a symbolic perturbation of q. In this case, we obtain the following

Lemma 2.2. There is a unique optimal basis B for LCP(M, q).

Proof. Let w, z be solution vectors of LCP(M, q), and set B̄ := {i ∈ [n] : w_i = 0}. Since w^T z = 0, we have z_i = 0 if i ∈ [n] \ B̄. Hence, the vectors w, z satisfy w = w(B̄) and z = z(B̄), so that B̄ satisfies statement (ii) of Lemma 2.1 and is therefore an optimal basis. Uniqueness of w, z [10] implies via Lemma 2.1 that (w(B), z(B)) = (w(B̄), z(B̄)) for every optimal basis B. But then (2) and (3) show that (M_B^{-1} q)_i = 0 for every index i in the symmetric difference of B and B̄. Under nondegeneracy, there can be no such i, hence B = B̄.
For K ⊆ [n], let M_{KK} be the principal submatrix of M consisting of all entries m_ij with i, j ∈ K. Furthermore, let q_K be the subvector of q consisting of all entries q_i, i ∈ K.
By definition, the submatrix M_{KK} is also a P-matrix, and LCP(M_{KK}, q_K) is easily seen to inherit nondegeneracy from LCP(M, q). Hence, Lemma 2.2 allows us to make the following

Definition 3.1. For k ∈ [n], let B(k) ⊆ [k] denote the unique optimal basis of LCP(M_{[k][k]}, q_{[k]}). We also set B(-1) = B(0) = ∅.
Let M be a lower Hessenberg matrix. Then we have the following

Theorem 4.1. Let M ∈ R^{n×n} be a lower Hessenberg P-matrix, and let k ∈ [n]. Then

B(k) = B(ℓ) ∪ {ℓ + 2, ℓ + 3, . . . , k},

where ℓ ∈ {−1, 0, . . . , k − 1} is the largest index such that ℓ + 1 ∉ B(k). In particular, B(k) is one of the k + 1 candidate sets B(ℓ) ∪ {ℓ + 2, . . . , k}, ℓ ∈ {−1, 0, . . . , k − 1}.

Proof. Let ℓ be as in the statement; by the choice of ℓ, we have ℓ + 1 ∉ B(k) and {ℓ + 2, . . . , k} ⊆ B(k). Since M is lower Hessenberg, column ℓ + 1 of the basis matrix (M_{[k][k]})_{B(k)} is the unit vector e_{ℓ+1}, and every column j ≥ ℓ + 2 is a column of −M_{[k][k]} that vanishes in its first ℓ entries (m_ij = 0 for j − i > 1). As a consequence, the system of k equations

(M_{[k][k]})_{B(k)} x = q_{[k]}    (4)

includes the ℓ equations

(M_{[ℓ][ℓ]})_{B(k) ∩ [ℓ]} x[ℓ] = q_{[ℓ]},    (5)

where x[ℓ] denotes the subvector of the first ℓ entries of x. Since B(k) is the optimal basis of LCP(M_{[k][k]}, q_{[k]}), the unique solution x of (4) satisfies x ≥ 0; see Lemma 2.1. Vice versa, the unique partial solution x[ℓ] ≥ 0 of subsystem (5) shows that B(k) ∩ [ℓ] = B(ℓ), the unique optimal basis of LCP(M_{[ℓ][ℓ]}, q_{[ℓ]}). Together with the choice of ℓ, the statement of the theorem follows.
We remark that a variant of Theorem 4.1 for upper Hessenberg matrices can be obtained by considering lower right principal submatrices M KK .
A basis test is a procedure to decide whether a given basis B ⊆ [n] is optimal for LCP(M, q). According to Lemma 2.1, a basis test can be implemented in polynomial time, using Gaussian elimination. In the sequel, we will therefore adopt the number of basis tests as a measure of algorithmic complexity. Here is our main result.
Theorem 5.1. Let M ∈ R^{n×n} be a lower Hessenberg P-matrix. The optimal basis B = B(n) of LCP(M, q) can be found with at most n(n+1)/2 basis tests.
Proof. We successively compute the optimal bases B(-1), B(0), . . . , B(n), where B(-1) = B(0) = ∅. To determine B(k), k > 0, we simply test the k + 1 candidates for B(k) that are given by Theorem 4.1. In fact, we already know B(k) after testing k of the candidates: if all of them fail, the remaining candidate must be the optimal one. This algorithm therefore requires a total of at most 1 + 2 + · · · + n = n(n+1)/2 basis tests.
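The dynamic program can be sketched in code. The following Python snippet is our own illustration, not the authors' implementation: it uses 0-based indices (the 1-based tail {ℓ+2, . . . , k} becomes `range(l + 1, k)`), exact rational arithmetic, and repeats the basis-test helpers so that it is self-contained; the names `hessenberg_plcp` and `lcp_solution` are assumptions.

```python
from fractions import Fraction

def basis_matrix(M, B):
    """M_B: column i is column i of -M for i in B, else the ith unit vector."""
    n = len(M)
    return [[Fraction(-M[i][j]) if j in B else Fraction(int(i == j))
             for j in range(n)] for i in range(n)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination over the rationals (exact)."""
    n = len(b)
    aug = [list(A[i]) + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if aug[r][c] != 0)
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                f = aug[r][c] / aug[c][c]
                aug[r] = [a - f * e for a, e in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def is_optimal_basis(M, q, B):
    """Basis test from Lemma 2.1: B is optimal iff M_B^{-1} q >= 0."""
    return all(x >= 0 for x in solve(basis_matrix(M, B), q))

def hessenberg_plcp(M, q):
    """Dynamic program of Theorem 5.1: for each k, test the candidates
    B(l) together with a tail of consecutive indices, and keep the winner."""
    n = len(q)
    Bk = {-1: set(), 0: set()}          # B(-1) = B(0) = empty set
    for k in range(1, n + 1):
        Mk, qk = [row[:k] for row in M[:k]], q[:k]
        for l in range(k - 1, -2, -1):  # at most k + 1 candidates
            cand = Bk[l] | set(range(l + 1, k))
            if is_optimal_basis(Mk, qk, cand):
                Bk[k] = cand
                break
    return Bk[n]

def lcp_solution(M, q, B):
    """Recover (w, z) from an optimal basis, mirroring (2) and (3)."""
    x = solve(basis_matrix(M, B), q)
    w = [Fraction(0) if i in B else x[i] for i in range(len(q))]
    z = [x[i] if i in B else Fraction(0) for i in range(len(q))]
    return w, z

# Example: a tridiagonal P-matrix with q < 0 forces the full basis.
M = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
q = [-1, -1, -1]
print(hessenberg_plcp(M, q))  # {0, 1, 2}
```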
Using an O(n^3) Gaussian elimination procedure for each basis test, we obtain an O(n^5) algorithm; this is certainly not best possible. Faster algorithms are available if M is a tridiagonal Z-matrix [5] or K-matrix [4, 3], but to our knowledge, the above algorithm is the first one to handle tridiagonal (and lower Hessenberg) P-matrices in polynomial time. The case of upper Hessenberg matrices is analogous.