We show the following. \begin{theorem} Let $M$ be a finite-state ergodic time-reversible Markov chain with transition matrix $P$ and conductance $\phi$. Let $\lambda \in (0,1)$ be an eigenvalue of $P$. Then, $$\phi^2 + \lambda^2 \leq 1.$$ \end{theorem} This strengthens the well-known~\cite{HLW, Dod84, AM85, Alo86, JS89} inequality $\lambda \leq 1 - \phi^2/2$. We obtain our result by a slight variation of the proof method of \cite{JS89, HLW}; the same method was used earlier in \cite{RS06} to obtain the same inequality for random walks on regular undirected graphs.
A Markov chain is a sequence of random variables $\{X_t\}_{t \geq 1}$ taking values in a finite set such that for all $t \geq 2$,
\[ \Pr[X_t = x_t \mid X_{t-1} = x_{t-1}, \ldots, X_1 = x_1] \;=\; \Pr[X_t = x_t \mid X_{t-1} = x_{t-1}]. \]
Let the state space of the Markov chain be $[n]$ and let $P = (P_{ij})$ be its $n \times n$ transition matrix: $P_{ij} = \Pr[X_t = i \mid X_{t-1} = j]$. We will assume that the Markov chain is ergodic, that is, irreducible (for every pair of states $i, j \in [n]$, $P^s_{ij} > 0$ for some $s$) and aperiodic (for every state $i \in [n]$, $\gcd\{s : P^s_{ii} > 0\} = 1$). Then, the Markov chain has a unique stationary distribution $\pi$: $P\pi = \pi$. We say that the Markov chain is time-reversible if it satisfies the following detailed balance condition: for all $i, j \in [n]$,
\[ P_{ij}\,\pi_j \;=\; P_{ji}\,\pi_i. \]
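To make these conventions concrete, here is a minimal numerical sketch (using NumPy; the three-state birth-death chain below is an illustrative example, not taken from the note) that recovers the stationary distribution and checks detailed balance under the column convention $P_{ij} = \Pr[X_t = i \mid X_{t-1} = j]$:

```python
import numpy as np

# Illustrative 3-state birth-death chain (birth-death chains are always
# time-reversible). Column convention, as in the note:
# P[i, j] = Pr[X_t = i | X_{t-1} = j], so each *column* sums to 1.
P = np.array([
    [0.5, 0.3, 0.0],
    [0.5, 0.4, 0.6],
    [0.0, 0.3, 0.4],
])
assert np.allclose(P.sum(axis=0), 1.0)

# Stationary distribution: the eigenvector of P for eigenvalue 1, normalized.
w, V = np.linalg.eig(P)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Detailed balance P_ij * pi_j == P_ji * pi_i says this matrix is symmetric.
D = P * pi[None, :]     # D[i, j] = P_ij * pi_j
print(np.allclose(D, D.T))  # prints True
```

For this chain the stationary distribution works out to $\pi = (2/7,\, 10/21,\, 5/21)$, and detailed balance holds because the chain is birth-death (tridiagonal).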
All Markov chains considered in this note will be assumed to be finite-state, ergodic and time-reversible. The conductance of a Markov chain with state space $[n]$ is defined to be
\[ \phi \;=\; \min_{\substack{S \subseteq [n] \\ 0 < \pi(S) \leq 1/2}} \frac{\sum_{i \in S,\; j \notin S} P_{ji}\,\pi_i}{\pi(S)}, \qquad \text{where } \pi(S) = \sum_{i \in S} \pi_i. \]
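For a small chain, the minimization over subsets can be carried out by brute force. A minimal sketch (NumPy; the helper name `conductance` and the example chain are illustrative assumptions, not from the note):

```python
import itertools
import numpy as np

def conductance(P, pi):
    """Brute-force conductance: minimize the escape ratio
    sum_{i in S, j not in S} P[j, i] * pi[i]  /  pi(S)
    over all S with 0 < pi(S) <= 1/2 (column convention for P)."""
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            piS = sum(pi[i] for i in S)
            if piS > 0.5:
                continue
            Sc = [j for j in range(n) if j not in S]
            flow = sum(pi[i] * P[j, i] for i in S for j in Sc)
            best = min(best, flow / piS)
    return best

# Example: a 3-state reversible chain (column-stochastic) and its
# stationary distribution.
P = np.array([[0.5, 0.3, 0.0],
              [0.5, 0.4, 0.6],
              [0.0, 0.3, 0.4]])
pi = np.array([2/7, 10/21, 5/21])
print(conductance(P, pi))   # 0.5 for this chain (attained at S = {0})
```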
The following theorem plays a central role in the theory of rapidly mixing Markov chains.
\begin{theorem}[\cite{JS89}]
Let $\lambda < 1$ be an eigenvalue of the transition matrix of an ergodic time-reversible Markov chain with conductance $\phi$. Then, $\lambda \leq 1 - \frac{\phi^2}{2}$.
\end{theorem}
In this note we strengthen this inequality slightly.
\begin{theorem}
Let $\lambda \in (0, 1)$ be an eigenvalue of the transition matrix of an ergodic time-reversible Markov chain with conductance $\phi$. Then,
\[ \phi^2 + \lambda^2 \leq 1. \]
\end{theorem}
Such an inequality was derived by Radhakrishnan and Sudan \cite{RS06} for the special case of random walks on regular undirected graphs. The purpose of this note is to show that their arguments (which were a slight variation on the arguments in \cite{JS89, HLW}) apply to finite-state ergodic time-reversible Markov chains as well.
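Before the proof, the inequality can be sanity-checked numerically. The sketch below (NumPy; the construction via random symmetric weight matrices is an illustrative choice, not from the note) draws random reversible chains, computes $\phi$ by brute force and the second-largest eigenvalue $\lambda$, and confirms $\phi^2 + \lambda^2 \leq 1$ whenever $\lambda \in (0,1)$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def conductance(P, pi):
    # Brute-force minimum of the escape ratio over S with pi(S) <= 1/2.
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            piS = sum(pi[i] for i in S)
            if piS > 0.5:
                continue
            flow = sum(pi[i] * P[j, i]
                       for i in S for j in range(n) if j not in S)
            best = min(best, flow / piS)
    return best

ok = True
for _ in range(25):
    # Random walk on a random positively weighted complete graph with
    # self-loops: always reversible, with pi proportional to the degrees.
    W = rng.random((4, 4))
    W = (W + W.T) / 2                  # symmetric weights
    d = W.sum(axis=0)
    P = W / d[None, :]                 # column-stochastic transition matrix
    pi = d / d.sum()
    lam = np.sort(np.real(np.linalg.eigvals(P)))[-2]
    if 0 < lam < 1:                    # theorem applies only for lam in (0,1)
        phi = conductance(P, pi)
        ok = ok and (phi**2 + lam**2 <= 1 + 1e-9)
print(ok)
```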
\emph{Proof.} Let $\pi$ be the stationary distribution of the chain with transition matrix $P$, and let $f, g \in \mathbb{R}^n$. We will be thinking of $f, g, \pi$ as vectors in $\mathbb{R}^n$. Let
\[ \langle f, g \rangle \;=\; \sum_{i \in [n]} \pi_i f_i g_i. \tag{1} \]
Say that $f$ is \emph{proper} if $f \geq 0$, $f \neq 0$, and $\pi(\{i : f_i > 0\}) \leq \frac12$.
We have the following two claims.
\textbf{Claim 1.} For any proper $f$,
\[ \phi\,\langle f, f \rangle \;\leq\; \sqrt{\langle f, f \rangle^2 - \langle f, P^{\mathsf T} f \rangle^2}. \tag{2} \]
\textbf{Claim 2.} For $\lambda \in (0, 1)$, there exists a proper $f$ such that
\[ \langle f, P^{\mathsf T} f \rangle \;\geq\; \lambda\,\langle f, f \rangle. \tag{3} \]
Using (2) and (3), we obtain
\[ \phi\,\langle f, f \rangle \;\leq\; \sqrt{\langle f, f \rangle^2 - \lambda^2 \langle f, f \rangle^2} \;=\; \langle f, f \rangle \sqrt{1 - \lambda^2}, \]
from which the theorem follows.
\emph{Proof of Claim 1.} Permute the co-ordinates of $f$ so that $f_1 \geq f_2 \geq \cdots \geq f_n \geq 0$; since $f$ is proper, $\pi([k]) \leq \frac12$ whenever $f_k > 0$, and $f_n = 0$. Writing $w_{ij} = P_{ji}\,\pi_i$ (so $w_{ij} = w_{ji}$ by detailed balance), we have
\[ \phi\,\langle f, f \rangle \;\leq\; \sum_{i<j} w_{ij}(f_i^2 - f_j^2) \;\leq\; \Bigl(\sum_{i<j} w_{ij}(f_i - f_j)^2\Bigr)^{\!1/2} \Bigl(\sum_{i<j} w_{ij}(f_i + f_j)^2\Bigr)^{\!1/2} \;\leq\; \sqrt{\langle f, f \rangle^2 - \langle f, P^{\mathsf T} f \rangle^2}. \]
To see the first inequality, we observe that
\[ \sum_{i<j} w_{ij}(f_i^2 - f_j^2) \;=\; \sum_{k=1}^{n-1} (f_k^2 - f_{k+1}^2) \sum_{i \leq k < j} w_{ij} \;\geq\; \phi \sum_{k=1}^{n-1} (f_k^2 - f_{k+1}^2)\,\pi([k]) \;=\; \phi\,\langle f, f \rangle, \]
where the middle step uses $\sum_{i \leq k < j} w_{ij} \geq \phi\,\pi([k])$, valid whenever $f_k^2 - f_{k+1}^2 > 0$ (for then $f_k > 0$, so $\pi([k]) \leq \frac12$). The second inequality holds since $f_i^2 - f_j^2 = (f_i - f_j)(f_i + f_j)$ (by the Cauchy-Schwarz inequality), and the third follows from
\[ \sum_{i<j} w_{ij}(f_i - f_j)^2 \;=\; \langle f, f \rangle - \langle f, P^{\mathsf T} f \rangle \quad\text{and}\quad \sum_{i<j} w_{ij}(f_i + f_j)^2 \;\leq\; \langle f, f \rangle + \langle f, P^{\mathsf T} f \rangle. \]
The calculations up to this point are identical to those in \cite{JS89, HLW}; the calculations below are similar to those in \cite{RS06}.
\emph{Proof of Claim 2.} Let $g \in \mathbb{R}^n$ be a right eigenvector of $P^{\mathsf T}$ with eigenvalue $\lambda \in (0, 1)$. We may assume $\sum_{i : g_i > 0} \pi_i \leq \frac12$ (otherwise consider $-g$). By renaming the co-ordinates we may assume that $g_1 \geq g_2 \geq \cdots \geq g_n$; let $r$ be the largest index with $g_r > 0$ (such an $r$ exists because $\sum_i \pi_i g_i = 0$ and $g \neq 0$).
Let $f$ be such that $f_i = g_i$ for $i \in [r]$ and $f_i = 0$ otherwise; then $f$ is proper. Since $f \geq g$ co-ordinate-wise and the entries of $P^{\mathsf T}$ are non-negative, for all $i \in [r]$,
\[ (P^{\mathsf T} f)_i \;\geq\; (P^{\mathsf T} g)_i \;=\; \lambda g_i \;=\; \lambda f_i. \]
Then,
\[ \langle f, P^{\mathsf T} f \rangle \;=\; \sum_{i=1}^{r} \pi_i f_i (P^{\mathsf T} f)_i \;\geq\; \lambda \sum_{i=1}^{r} \pi_i f_i^2 \;=\; \lambda\,\langle f, f \rangle. \]
This establishes Claim 2 and completes the proof of the theorem.
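The truncation step in the proof of Claim 2 can also be checked numerically. A minimal sketch (NumPy; the three-state chain is an illustrative example, not from the note): take an eigenvector $g$ of $P^{\mathsf T}$ with eigenvalue $\lambda \in (0,1)$, flip its sign if necessary, truncate to its positive part $f$, and verify $\langle f, P^{\mathsf T} f\rangle \geq \lambda\,\langle f, f\rangle$:

```python
import numpy as np

# Illustrative 3-state reversible chain (column convention) and its
# stationary distribution.
P = np.array([[0.5, 0.3, 0.0],
              [0.5, 0.4, 0.6],
              [0.0, 0.3, 0.4]])
pi = np.array([2/7, 10/21, 5/21])

# Eigenvector g of P^T for the second-largest eigenvalue lam (in (0,1) here).
w, V = np.linalg.eig(P.T)
k = np.argsort(np.real(w))[-2]
lam, g = np.real(w[k]), np.real(V[:, k])
if pi[g > 0].sum() > 0.5:       # ensure pi(positive part of g) <= 1/2
    g = -g

f = np.where(g > 0, g, 0.0)     # truncate g to its positive part
lhs = pi @ (f * (P.T @ f))      # <f, P^T f>  in the pi-weighted inner product
rhs = lam * (pi @ f**2)         # lam * <f, f>
print(lhs >= rhs - 1e-12)       # prints True, as Claim 2 predicts
```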