Kemeny's Constant for Markov Processes


The mean time taken by an irreducible Markov chain on a finite state space to hit a target chosen at random according to the stationary distribution does not depend on the initial state of the chain. This mean time is known as Kemeny’s constant. We present a new approach, based on time reversal and a mean occupation time formula, and use it to prove an analogous result for continuous-time Markov processes. We also present a second approach, based on work of N.~Eisenbaum and H.~Kaspi, when all states are regular. Examples are provided.


💡 Research Summary

The paper revisits the classical result known as Kemeny’s constant – the fact that for an irreducible Markov chain (or more generally a Markov process) on a finite or suitably recurrent state space, the expected time to hit a target state chosen at random according to the stationary distribution does not depend on the starting state. The author, P. J. Fitzsimmons, supplies two distinct proof strategies and extends the result to continuous‑time Hunt processes, thereby broadening the scope beyond the traditional discrete‑time, finite‑state setting.

Section 1 (Introduction) frames the problem and cites earlier work (Kemeny, Doyle, Pinsky, etc.). The “Kemeny function” is defined as
\(K(x)=\int_E \mathbb{E}_x[T_y]\,\pi(dy)\),
where \(T_y\) denotes the hitting time of the state \(y\) and \(\pi\) is the stationary distribution, so that \(K(x)\) is the expected time to reach a target drawn at random from \(\pi\) when the process starts at \(x\).
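In the discrete-time, finite-state setting, the statement that \(K(x)\) does not depend on \(x\) can be checked numerically. The sketch below (not from the paper; the 3-state transition matrix is an arbitrary example) computes the stationary distribution, the mean hitting times \(m_{ij}=\mathbb{E}_i[T_j]\) with the convention \(m_{jj}=0\), and then verifies that \(K(i)=\sum_j \pi_j\, m_{ij}\) is the same for every starting state \(i\):

```python
import numpy as np

# Hypothetical irreducible 3-state chain (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Mean hitting times m[i, j] = E_i[T_j], with m[j, j] = 0.
# For each target j, solve (I - Q) h = 1 where Q is P with
# row and column j removed (first-step analysis).
m = np.zeros((n, n))
for j in range(n):
    idx = [i for i in range(n) if i != j]
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    m[idx, j] = np.linalg.solve(A, np.ones(n - 1))

# Kemeny function K(i) = sum_j pi_j * m[i, j]: constant in i.
K = m @ pi
print(K)
```

All entries of `K` agree (up to floating-point error), illustrating the independence of the starting state in the simplest setting covered by the paper.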

