Cover times, blanket times, and majorizing measures
Authors: Jian Ding, James R. Lee, Yuval Peres
Jian Ding∗ (U.C. Berkeley), James R. Lee†∗ (University of Washington), Yuval Peres (Microsoft Research)

∗ A substantial portion of this work was completed during visits of the author to Microsoft Research.
† Partially supported by NSF grant CCF-0915251 and a Sloan Research Fellowship.

Abstract. We exhibit a strong connection between cover times of graphs, Gaussian processes, and Talagrand's theory of majorizing measures. In particular, we show that the cover time of any graph $G$ is equivalent, up to universal constants, to the square of the expected maximum of the Gaussian free field on $G$, scaled by the number of edges in $G$. This allows us to resolve a number of open questions. We give a deterministic polynomial-time algorithm that computes the cover time to within an $O(1)$ factor for any graph, answering a question of Aldous and Fill (1994). We also positively resolve the blanket time conjectures of Winkler and Zuckerman (1996), showing that for any graph, the blanket and cover times are within an $O(1)$ factor. The best previous approximation factor for both these problems was $O((\log\log n)^2)$ for $n$-vertex graphs, due to Kahn, Kim, Lovász, and Vu (2000).

Contents
1 Introduction
  1.1 Related work
  1.2 Preliminaries
  1.3 Outline
2 Gaussian processes and local times
  2.1 The blanket time
  2.2 An asymptotically strong upper bound
  2.3 Geometry of the resistance metric
  2.4 The Gaussian free field
3 Majorizing measures
  3.1 Trees, measures, and functionals
  3.2 Separated trees
  3.3 Computing an approximation to $\gamma_2$ deterministically
  3.4 Tree-like properties of the Gaussian free field
4 The cover time
  4.1 A tree-like sub-process
  4.2 The coupling
  4.3 Tree-like percolation
  4.4 The local times
  4.5 Additional applications
5 Open problems and further discussion

1 Introduction

Let $G = (V,E)$ be a finite, connected graph, and consider the simple random walk on $G$. Writing $\tau_{\mathrm{cov}}$ for the first time at which every vertex of $G$ has been visited, let $\mathbb{E}_v \tau_{\mathrm{cov}}$ denote the expectation of this quantity when the random walk is started at some vertex $v \in V$. The following fundamental parameter is known as the cover time of $G$:
$$t_{\mathrm{cov}}(G) = \max_{v \in V} \mathbb{E}_v \tau_{\mathrm{cov}}. \qquad (1)$$
We refer to the books [2, 36] and the survey [37] for relevant background material.

We also recall the discrete Gaussian free field (GFF) on the graph $G$. This is a centered Gaussian process $\{\eta_v\}_{v \in V}$ with $\eta_{v_0} = 0$ for some fixed $v_0 \in V$. The process is characterized by the relation $\mathbb{E}(\eta_u - \eta_v)^2 = R_{\mathrm{eff}}(u,v)$ for all $u,v \in V$, where $R_{\mathrm{eff}}$ denotes the effective resistance on $G$.
Equivalently, the covariances $\mathbb{E}(\eta_u \eta_v)$ are given by the Green kernel of the random walk killed at $v_0$. (We refer to Sections 1.2 and 1.3 for background on electrical networks and Gaussian processes.)

The next theorem represents one of the primary connections put forward in this work. We use the notation $\asymp$ to denote equivalence up to a universal constant factor.

Theorem 1.1. For any finite, connected graph $G = (V,E)$, we have
$$t_{\mathrm{cov}}(G) \asymp |E| \left( \mathbb{E} \max_{v \in V} \eta_v \right)^2,$$
where $\{\eta_v\}_{v \in V}$ is the Gaussian free field on $G$.

The utility of such a characterization will become clear soon. Despite being an intensively studied parameter of graphs, a number of basic questions involving the cover time have remained open. We now highlight two of these, whose resolution we discuss subsequently.

The blanket time. For a node $v \in V$, let $\pi(v) = \frac{\deg(v)}{2|E|}$ denote the stationary measure of the random walk, and let $N_v(t)$ be a random variable denoting the number of times the random walk has visited $v$ up to time $t$. Now define $\tau^{\circ}_{\mathrm{bl}}(\delta)$ to be the first time $t \geq 1$ at which
$$N_v(t) \geq \delta\, t\, \pi(v) \qquad (2)$$
holds for all $v \in V$. In other words, $\tau^{\circ}_{\mathrm{bl}}(\delta)$ is the first time at which all nodes have been visited at least a $\delta$ fraction as much as we expect at stationarity. Using the same notation as in (1), define the $\delta$-blanket time as
$$t^{\circ}_{\mathrm{bl}}(G,\delta) = \max_{v \in V} \mathbb{E}_v \tau^{\circ}_{\mathrm{bl}}(\delta). \qquad (3)$$
Clearly for $\delta \in (0,1)$, we have $t^{\circ}_{\mathrm{bl}}(G,\delta) \geq t_{\mathrm{cov}}(G)$. Winkler and Zuckerman [54] made the following conjecture.

Conjecture 1.1. For every $0 < \delta < 1$, there exists a $C$ such that for every graph $G$, one has $t^{\circ}_{\mathrm{bl}}(G,\delta) \leq C \cdot t_{\mathrm{cov}}(G)$. In other words, for every fixed $\delta \in (0,1)$, one has $t_{\mathrm{cov}}(G) \asymp t^{\circ}_{\mathrm{bl}}(G,\delta)$.
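Conjecture 1.1 is easy to probe empirically on small examples. The sketch below is our own illustration, not code from the paper; the 8-cycle, the value $\delta = 0.25$, and the trial count are arbitrary choices. It estimates $t_{\mathrm{cov}}$ and the $\delta$-blanket time for the discrete-time walk; the cover time of the $n$-cycle is exactly $n(n-1)/2$, which gives a check on the simulation.

```python
import random

def walk_times(n, delta, rng, t_max=200_000):
    """One simple-random-walk trajectory on the n-cycle, started at vertex 0.
    Returns (tau_cov, tau_bl): the first time every vertex has been visited,
    and the first time t >= 1 with N_v(t) >= delta * t * pi(v) for all v,
    where pi(v) = 1/n on the cycle."""
    visits = [0] * n
    pos, covered = 0, 1
    visits[0] = 1
    tau_cov = None
    for t in range(1, t_max):
        pos = (pos + rng.choice((-1, 1))) % n
        if visits[pos] == 0:
            covered += 1
        visits[pos] += 1
        if tau_cov is None and covered == n:
            tau_cov = t
        # the blanket condition forces every vertex to be visited,
        # so it can only hold once the walk has covered the cycle
        if tau_cov is not None and all(v >= delta * t / n for v in visits):
            return tau_cov, t
    raise RuntimeError("t_max too small")

rng = random.Random(0)
n, delta, trials = 8, 0.25, 3000
samples = [walk_times(n, delta, rng) for _ in range(trials)]
t_cov = sum(c for c, _ in samples) / trials
t_bl = sum(b for _, b in samples) / trials
print(f"t_cov ~ {t_cov:.1f} (exact: {n*(n-1)/2}), t_bl({delta}) ~ {t_bl:.1f}")
```

Pathwise $\tau^{\circ}_{\mathrm{bl}}(\delta) \geq \tau_{\mathrm{cov}}$, and the conjecture (now Theorem 1.2) asserts the two averages stay within a universal factor as $n$ grows.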
Kahn, Kim, Lovász, and Vu [30] showed that for every fixed $\delta \in (0,1)$, one can take $C \asymp (\log\log n)^2$ for $n$-node graphs, but whether there is a universal constant, independent of $n$, remained open for every value of $\delta > 0$.

In order to bound $t^{\circ}_{\mathrm{bl}}(G,\delta)$, we introduce the following stronger notion. Let $\tau_{\mathrm{bl}}(\delta)$ be the first time $t \geq 1$ such that for every $u,v \in V$, we have
$$\frac{N_u(t)/\pi(u)}{N_v(t)/\pi(v)} \geq \delta,$$
i.e. the first time at which all the values $\{N_u(t)/\pi(u)\}_{u \in V}$ are within a factor of $\delta$. As in [30], we define the strong $\delta$-blanket time as
$$t_{\mathrm{bl}}(G,\delta) = \max_{v \in V} \mathbb{E}_v \tau_{\mathrm{bl}}(\delta).$$
Clearly one has $t^{\circ}_{\mathrm{bl}}(G,\delta) \leq t_{\mathrm{bl}}(G,\delta)$ for every $\delta \in (0,1)$.

The second question we highlight is computational in nature.

Question 1.2 ([2, 30]). Is there a deterministic, polynomial-time algorithm that approximates $t_{\mathrm{cov}}(G)$ within a constant factor? In other words, is there a quantity $A(G)$ which can be computed deterministically, in time polynomial in $|V|$, such that $A(G) \asymp t_{\mathrm{cov}}(G)$?

It is crucial that one asks for a deterministic procedure, since a randomized algorithm can simply simulate the chain and output the empirical mean of the observed times at which the graph is first covered. This is guaranteed to produce an accurate estimate with high probability in polynomial time, since the mean and standard deviation of $\tau_{\mathrm{cov}}$ are $O(|V|^3)$ [6]. A result of Matthews [43] can be used to produce a deterministically computable bound which is within a $\log|V|$ factor of $t_{\mathrm{cov}}(G)$. Subsequently, [30] showed how one could compute a bound which lies within an $O((\log\log|V|)^2)$ factor of the cover time.

Before we state our main theorem and resolve the preceding questions, we briefly review the $\gamma_2$ functional from Talagrand's theory of majorizing measures [48, 50].

Majorizing measures and Gaussian processes.
Consider a compact metric space $(X,d)$. Let $M_0 = 1$ and $M_k = 2^{2^k}$ for $k \geq 1$. For a partition $P$ of $X$ and an element $x \in X$, we will write $P(x)$ for the unique $S \in P$ containing $x$. An admissible sequence $\{\mathcal{A}_k\}_{k \geq 0}$ of partitions of $X$ is such that $\mathcal{A}_{k+1}$ is a refinement of $\mathcal{A}_k$ for $k \geq 0$, and $|\mathcal{A}_k| \leq M_k$ for all $k \geq 0$. Talagrand defines the functional
$$\gamma_2(X,d) = \inf \sup_{x \in X} \sum_{k \geq 0} 2^{k/2} \operatorname{diam}(\mathcal{A}_k(x)), \qquad (4)$$
where the infimum is over all admissible sequences $\{\mathcal{A}_k\}$.

Consider now a Gaussian process $\{\eta_i\}_{i \in I}$ over some index set $I$. This is a stochastic process such that every finite linear combination of the random variables is normally distributed. For the purposes of the present paper, one may assume that $I$ is finite. We will assume that all Gaussian processes are centered, i.e. $\mathbb{E}(\eta_i) = 0$ for all $i \in I$. The index set $I$ carries a natural metric which assigns, for $i,j \in I$,
$$d(i,j) = \sqrt{\mathbb{E}\,|\eta_i - \eta_j|^2}. \qquad (5)$$
The following result constitutes a primary consequence of the majorizing measures theory.

Theorem (MM) (Majorizing measures theorem [48]). For any centered Gaussian process $\{\eta_i\}_{i \in I}$,
$$\gamma_2(I,d) \asymp \mathbb{E} \sup\{\eta_i : i \in I\}.$$

We remark that the upper bound of the preceding theorem, i.e. $\mathbb{E}\sup\{\eta_i : i \in I\} \leq C\,\gamma_2(I,d)$ for some constant $C$, goes back to work of Fernique [24, 25]. Fernique formulated this result in the language of measures (whence the name "majorizing measures" arises), while the formulation of $\gamma_2$ given in (4) is due to Talagrand. The fact that the two notions are related is non-trivial; we refer to [50, §2] for a thorough discussion of the connection between them.

Commute times, hitting times, and cover times. In order to relate the majorizing measures theory to cover times of graphs, we recall the following natural metric.
For any two nodes $u,v \in V$, use $H(u,v)$ to denote the expected hitting time from $u$ to $v$, i.e. the expected time for a random walk started at $u$ to hit $v$. The expected commute time between two nodes $u,v \in V$ is then defined by
$$\kappa(u,v) = H(u,v) + H(v,u). \qquad (6)$$
It is immediate that $\kappa(u,v)$ is a metric on any finite, connected graph. A well-known fact [11] is that $\kappa(u,v) = 2|E|\,R_{\mathrm{eff}}(u,v)$, where $R_{\mathrm{eff}}(u,v)$ is the effective resistance between $u$ and $v$, when $G$ is considered as an electrical network with unit conductances on the edges.

We now restate our main result in terms of majorizing measures. For a metric $d$, we write $\sqrt{d}$ for the distance $\sqrt{d}(u,v) = \sqrt{d(u,v)}$.

Theorem 1.2 (Cover times, blanket times, and majorizing measures). For any graph $G = (V,E)$ and any $0 < \delta < 1$, we have
$$t_{\mathrm{cov}}(G) \asymp \gamma_2(V, \sqrt{\kappa})^2 \asymp |E| \cdot \left[\gamma_2(V, \sqrt{R_{\mathrm{eff}}})\right]^2 \asymp_{\delta} t_{\mathrm{bl}}(G,\delta),$$
where $\asymp_{\delta}$ denotes equivalence up to a constant depending on $\delta$.

Clearly this yields a positive resolution to Conjecture 1.1. Moreover, we prove the preceding theorem in the setting of general finite-state reversible Markov chains. See Theorem 1.9 for a statement of our most general theorem.

We now address some additional consequences of the main theorem. First, observe that by combining Theorem 1.2 with Theorem (MM), we obtain Theorem 1.1.

Theorem 1.3 (Cover times and the Gaussian free field). For any graph $G = (V,E)$ and any $0 < \delta < 1$, we have
$$t_{\mathrm{cov}}(G) \asymp |E| \left( \mathbb{E} \max_{v \in V} \eta_v \right)^2 \asymp_{\delta} t_{\mathrm{bl}}(G,\delta),$$
where $\{\eta_v\}$ is the Gaussian free field on $G$.

In fact, in Section 2.2, we exhibit the following strong asymptotic upper bound.

Theorem 1.4. For every graph $G = (V,E)$, if $t_{\mathrm{hit}}(G)$ denotes the maximal hitting time in $G$, and $\{\eta_v\}_{v \in V}$ is the Gaussian free field on $G$, then
$$t_{\mathrm{cov}}(G) \leq \left( 1 + C\sqrt{\frac{t_{\mathrm{hit}}(G)}{t_{\mathrm{cov}}(G)}} \right) \cdot |E| \cdot \left( \mathbb{E} \sup_{v \in V} \eta_v \right)^2,$$
where $C > 0$ is a universal constant.

In Section 3, we prove the following theorem which, in conjunction with Theorem 1.2, resolves Question 1.2.

Theorem 1.5. Let $(X,d)$ be a finite metric space, with $n = |X|$. If, for any two points $x,y \in X$, one can deterministically compute $d(x,y)$ in time polynomial in $n$, then one can deterministically compute a number $A(X,d)$ in polynomial time, for which $A(X,d) \asymp \gamma_2(X,d)$.

A "comparison theorem" follows immediately from Theorem 1.2, and the fact that $\gamma_2(X,d) \leq L\,\gamma_2(X,d')$ whenever $d \leq L\,d'$ (see (4)).

Theorem 1.6 (Comparison theorem for cover times). Suppose $G$ and $G'$ are two graphs on the same set of nodes $V$, and $\kappa_G$ and $\kappa_{G'}$ are the distances induced by the respective commute times. If there exists a number $L \geq 1$ such that $\kappa_G(u,v) \leq L \cdot \kappa_{G'}(u,v)$ for all $u,v \in V$, then $t_{\mathrm{cov}}(G) \leq O(L) \cdot t_{\mathrm{cov}}(G')$.

Finally, our work implies that there is an extremely simple randomized algorithm for computing the cover time of a graph, up to constant factors. To this end, consider a graph $G = (V,E)$ whose vertex set we take to be $V = \{1,2,\ldots,n\}$. Let $D$ be the diagonal degree matrix, i.e. such that $D_{ii} = \deg(i)$ and $D_{ij} = 0$ for $i \neq j$, and let $A$ be the adjacency matrix of $G$. We define the following normalized Laplacian,
$$\mathcal{L}_G = \frac{D - A}{\operatorname{tr}(D)}.$$
Let $\mathcal{L}_G^{+}$ denote the Moore–Penrose pseudoinverse of $\mathcal{L}_G$. Note that both $\mathcal{L}_G$ and $\mathcal{L}_G^{+}$ are positive semi-definite. We have the following characterization.

Theorem 1.7. For any connected graph $G$, it holds that
$$t_{\mathrm{cov}}(G) \asymp \mathbb{E}\left\| \sqrt{\mathcal{L}_G^{+}}\, g \right\|_\infty^2,$$
where $g = (g_1, \ldots, g_n)$ is an $n$-dimensional Gaussian, i.e. such that $\{g_i\}$ are i.i.d. $N(0,1)$ random variables.

The preceding theorem yields an $O(n^\omega)$-time randomized algorithm for approximating $t_{\mathrm{cov}}(G)$, where $\omega \in [2, 2.376)$ is the best-possible exponent for matrix multiplication [13]. Using the linear-system solvers of Spielman and Teng [47] (see also [45]), along with ideas from Spielman and Srivastava [46], we present an algorithm that runs in near-linear time in the number of edges of $G$.

Theorem 1.8 (Near-linear time randomized algorithm). There is a randomized algorithm which, given an $m$-edge connected graph $G = (V,E)$, runs in time $O(m (\log m)^{O(1)})$ and outputs a number $A(G)$ such that
$$t_{\mathrm{cov}}(G) \asymp \mathbb{E}[A(G)] \asymp \left( \mathbb{E}\,A(G)^2 \right)^{1/2}.$$

1.1 Related work

Cover times of finite graphs have been studied for over 30 years. We refer to [2, 37, 36] for the basic theory. Works of Feige showed that the cover time for any $n$-node graph is at least $(1-o(1))\, n \log n$ [22], and at most $\frac{4}{27} n^3$ [21]. Both of these bounds are asymptotically tight, with the tight example for the lower bound given by the complete graph on $n$ nodes.

The connection between cover times, commute times, and the theory of electrical networks was laid out in [11]. In general, the electrical viewpoint provides a powerful methodology for analyzing random walks (see, for example, [15, 53, 39]). Indeed, this point of view will be central to the present work.

A fundamental bound of Matthews [43] shows that
$$t_{\mathrm{cov}}(G) \leq \max_{u,v \in V} H(u,v)\, (1 + \log n),$$
where we recall that $H(u,v)$ is the expected hitting time from $u$ to $v$. Using the straightforward lower bound $t_{\mathrm{cov}}(G) \geq \max_{u,v \in V} H(u,v)$, this fact provides a deterministic $O(\log n)$-approximation to $t_{\mathrm{cov}}(G)$ in $n$-node graphs. Matthews also proved the lower bound,
$$t_{\mathrm{cov}}(G) \geq \max_{S \subseteq V} \min_{u \neq v \in S} H(u,v) \log(|S| - 1). \qquad (7)$$
In [30], it is shown that taking the maximum of the lower bound in (7) and the maximal hitting time $\max_{u,v \in V} H(u,v)$ is an $O((\log\log n)^2)$-approximation for $t_{\mathrm{cov}}$.
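On a small graph, all of the quantities above are exactly computable: hitting times solve a linear system, and effective resistances come from the pseudoinverse of the graph Laplacian, so both the commute-time identity $\kappa(u,v) = 2|E|\,R_{\mathrm{eff}}(u,v)$ and the Matthews bounds can be verified directly. The following sketch is our own illustration, not code from the paper; the 6-cycle is an arbitrary test case, where $H(u,v) = k(n-k)$ for vertices at distance $k$ and $t_{\mathrm{cov}} = n(n-1)/2$.

```python
import numpy as np

def hitting_times(adj):
    """H[u, v] = expected hitting time of v from u for the simple random
    walk on the graph with adjacency matrix adj (unit conductances)."""
    n = len(adj)
    P = adj / adj.sum(axis=1)[:, None]          # transition matrix
    H = np.zeros((n, n))
    for v in range(n):
        idx = [u for u in range(n) if u != v]
        # h(u) = 1 + sum_w P(u, w) h(w) with h(v) = 0
        M = np.eye(n - 1) - P[np.ix_(idx, idx)]
        H[idx, v] = np.linalg.solve(M, np.ones(n - 1))
    return H

def effective_resistance(adj):
    L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    Lp = np.linalg.pinv(L)                      # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp     # R(u,v) = (e_u-e_v)^T L+ (e_u-e_v)

# 6-cycle: H(u, v) = k(n - k) for vertices at graph distance k
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

H = hitting_times(adj)
R = effective_resistance(adj)
m = adj.sum() / 2                               # number of edges

assert abs(H[0, 1] - 1 * (n - 1)) < 1e-8        # distance 1
assert abs(H[0, 3] - 3 * (n - 3)) < 1e-8        # distance 3
assert np.allclose(H + H.T, 2 * m * R)          # commute-time identity

# Matthews: max H(u,v) <= t_cov <= max H(u,v) * (1 + log n)
lower, upper = H.max(), H.max() * (1 + np.log(n))
print(f"t_cov of the {n}-cycle lies in [{lower:.2f}, {upper:.2f}]; exact value {n*(n-1)/2}")
```

The bracket $[\max H,\ \max H \cdot (1+\log n)]$ is exactly the deterministic $O(\log n)$-approximation described above.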
Recently, Feige and Zeitouni [23] have shown that on trees, one can obtain a very strong bound: for every $\varepsilon > 0$, there is a $(1+\varepsilon)$-approximation obtainable by a deterministic, polynomial-time algorithm.

The cover time has also been studied for many specific families of graphs. Kahn, Linial, Nisan, and Saks [31] established an $O(n^2)$ upper bound for regular graphs. Broder and Karlin [9] proved that the cover time of constant-degree expander graphs is $O(n \log n)$. For planar graphs of maximum degree $d$, Jonasson and Schramm [29] showed that the cover time is at least $c_d\, n(\log n)^2$ and at most $6n^2$. The order of the cover time on lattices was determined by Aldous [1] and Zuckerman [55]. The latter paper also calculated the order of the cover time on regular trees. Furthermore, for a few families of specific examples, the asymptotics of the cover time have been calculated more precisely. These include the work of Aldous [4] for regular trees, Dembo, Peres, Rosen, and Zeitouni [14] for the 2-dimensional discrete torus, and Cooper and Frieze [12] for the giant component of various random graphs.

Finally, we remark on an upper bound of Barlow, Ding, Nachmias, and Peres [7] which was part of the motivation for the present work. Consider a connected graph $G = (V,E)$ and the metric space $(V, \kappa)$, where we recall the commute distance from (6). For each $h \in \mathbb{Z}$, let $A_h \subseteq V$ be a set of minimal size whose $2^h$-neighborhood (in the metric $\kappa$) covers $V$. Then,
$$t_{\mathrm{cov}}(G) \leq O(1) \cdot \left( \sum_{h \in \mathbb{Z}} 2^{h/2} \sqrt{\log |A_h|} \right)^2. \qquad (8)$$
It turns out that this upper bound is tight (up to a universal constant) for a number of concrete examples with approximately "homogeneous" geometry (we refer to [7] for examples, mostly related to various random graphs arising from percolation). For instance, the results of the present paper imply that the right-hand side of (8) is equivalent to $t_{\mathrm{cov}}(G)$ for any vertex-transitive graph $G$. Furthermore, the formula (8) resembles the Dudley entropy integral [16], which gives a tight bound for Gaussian processes with stationary increments. This suggests, in particular, a connection between the cover time of graphs and majorizing measures.

1.2 Preliminaries

To begin, we introduce some fundamental notions from random walks and electrical networks.

Electrical networks and random walks. A network is a finite, undirected graph $G = (V,E)$, together with a set of non-negative conductances $\{c_{xy} : x,y \in V\}$ supported exactly on the edges of $G$, i.e. $c_{xy} > 0 \iff xy \in E$. The conductances are symmetric, so that $c_{xy} = c_{yx}$ for all $x,y \in V$. We will write $c_x = \sum_{y \in V} c_{xy}$, and $\mathcal{C} = \sum_{x \in V} c_x$ for the total conductance. We will often use the notation $G(V)$ for a network on the vertex set $V$; in this case, the associated conductances are implicit. In the few cases when there are multiple networks under consideration simultaneously, we will use the notation $c^G_{xy}$ to refer to the conductances in $G$.

Associated to such a network is the canonical discrete-time random walk on $G$, whose transition probabilities are given by $p_{xy} = c_{xy}/c_x$ for all $x,y \in V$. It is easy to see that this defines the transition matrix of a reversible Markov chain on $V$, and that every finite-state reversible Markov chain arises in this way (see [2, §3.2]). The stationary measure of a vertex is precisely $\pi(x) = c_x/\mathcal{C}$.

Associated to such an electrical network are the classical quantities $C_{\mathrm{eff}}, R_{\mathrm{eff}} : V \times V \to \mathbb{R}_{\geq 0}$, which are referred to, respectively, as the effective conductance and effective resistance between pairs of nodes. We refer to [36, Ch. 9] for a discussion of the connection between electrical networks and the corresponding random walk. For now, it is useful to keep in mind the following fact [11]: for any $x,y \in V$,
$$R_{\mathrm{eff}}(x,y) = \frac{\kappa(x,y)}{\mathcal{C}}, \qquad (9)$$
where the commute time $\kappa$ is defined as before (6).

For convenience, we will work exclusively with continuous-time Markov chains, where the transition rates between nodes are given by the probabilities $p_{xy}$ from the discrete chain. One way to realize the continuous-time chain is by making jumps according to the discrete-time chain, where the times spent between jumps are i.i.d. exponential random variables with mean 1. We refer to these random variables as the holding times. See [2, Ch. 2] for background and relevant definitions.

Cover times, local times, and blanket times. We will now define various stopping times for the continuous-time random walk. First, we observe that if $\tau^\star_{\mathrm{cov}}$ is the first time at which the continuous-time random walk has visited every node of $G$, then for every vertex $v$, $\mathbb{E}_v \tau^\star_{\mathrm{cov}} = \mathbb{E}_v \tau_{\mathrm{cov}}$, where we recall that the latter quantity refers to the discrete-time chain. Thus we may also define the cover time with respect to the continuous-time chain, i.e. $t_{\mathrm{cov}}(G) = \max_{v \in V} \mathbb{E}_v \tau^\star_{\mathrm{cov}}$.

In fact, it will be far more convenient to work with the cover and return time defined as follows. Let $\{X_t\}_{t \in [0,\infty)}$ be the continuous-time chain, and define
$$\tau^{\circlearrowleft}_{\mathrm{cov}} = \inf\{ t \geq \tau^\star_{\mathrm{cov}} : X_t = X_0 \}. \qquad (10)$$
For concreteness, we define the cover and return time of $G$ as $t^{\circlearrowleft}_{\mathrm{cov}}(G) = \max_{v \in V} \mathbb{E}_v \tau^{\circlearrowleft}_{\mathrm{cov}}$, but the following fact shows that the choice of initial vertex is not of great importance for us (see [2, Ch. 5, Lem. 25]):
$$\tfrac{1}{2}\, t^{\circlearrowleft}_{\mathrm{cov}}(G) \;\leq\; t_{\mathrm{cov}}(G) \;\leq\; t^{\circlearrowleft}_{\mathrm{cov}}(G) \;\leq\; 3 \min_{v \in V} \mathbb{E}_v \tau^{\circlearrowleft}_{\mathrm{cov}}. \qquad (11)$$

For a vertex $v \in V$ and time $t$, we define the local time $L^v_t$ by
$$L^v_t = \frac{1}{c_v} \int_0^t \mathbf{1}_{\{X_s = v\}}\, ds, \qquad (12)$$
where we recall that $c_v = \sum_{u \in V} c_{uv}$. For $\delta \in (0,1)$, we define $\tau^\star_{\mathrm{bl}}(\delta)$ as the first time $t > 0$ at which
$$\min_{u,v \in V} \frac{L^u_t}{L^v_t} \geq \delta.$$
Furthermore, the continuous-time strong $\delta$-blanket time is defined to be
$$t^\star_{\mathrm{bl}}(G,\delta) = \max_{v \in V} \mathbb{E}_v \tau^\star_{\mathrm{bl}}(\delta). \qquad (13)$$

Asymptotic notation. For expressions $A$ and $B$, we will use the notation $A \lesssim B$ to denote that $A \leq C \cdot B$ for some constant $C > 0$. If we wish to stress that the constant $C$ depends on some parameter, e.g. $C = C(p)$, we will use the notation $A \lesssim_p B$. We use $A \asymp B$ to denote the conjunction $A \lesssim B$ and $B \lesssim A$, and we use the notation $A \asymp_p B$ similarly.

1.3 Outline

We first state our main theorem in full generality. We use only the language of effective resistances, since this is most natural in the context to follow.

Theorem 1.9. For any network $G = (V,E)$ and any $0 < \delta < 1$,
$$t_{\mathrm{cov}}(G) \asymp \mathcal{C} \left[ \gamma_2(V, \sqrt{R_{\mathrm{eff}}}) \right]^2 \asymp_\delta t_{\mathrm{bl}}(G,\delta) \asymp_\delta t^\star_{\mathrm{bl}}(G,\delta),$$
where $\mathcal{C}$ is the total conductance of $G$.

We now present an overview of our main arguments, and lay out the organization of the paper.

Hints of a connection. First, it may help the reader to have some intuition about why cover times should be connected to Gaussian processes and particularly the theory of majorizing measures. A first hint goes back to work of Aldous [3], where it is shown that the hitting times of Markov chains are approximately distributed as exponential random variables. It is well-known that an exponential variable can be represented as the sum of the squares of two Gaussians. Observing that the cover time is just the maximum of all the hitting times, one might hope that the cover time can be related to the maximum of a family of Gaussians.
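The representation used in this heuristic is elementary to check numerically: if $g_1, g_2$ are independent standard Gaussians, then $(g_1^2 + g_2^2)/2$ is exponential with mean 1. A quick sanity check of our own (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
g1, g2 = rng.standard_normal(N), rng.standard_normal(N)
X = (g1**2 + g2**2) / 2          # claim: X ~ Exponential(mean 1)

# compare sample statistics with the exact Exp(1) values
print(f"mean   = {X.mean():.3f}   (Exp(1): 1.000)")
print(f"P(X>1) = {(X > 1).mean():.3f}   (Exp(1): {np.exp(-1):.3f})")
print(f"P(X>2) = {(X > 2).mean():.3f}   (Exp(1): {np.exp(-2):.3f})")
```

This is just the statement that the chi-squared distribution with two degrees of freedom is exponential.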
This point of view is strengthened by some quantitative similarities. Let $\{\eta_i\}_{i \in I}$ be a centered Gaussian process, and let $d(i,j)$ be the natural metric on $I$ from (5). The following two lemmas are central to the proof of the majorizing measures theorem (Theorem (MM)). We refer to [35, 50] for their utility in the majorizing measures theory. The next lemma follows directly from the definition of the Gaussian density; see, for instance, [42, Lem. 5.1.3, Eq. (5.18)].

Lemma 1.10 (Gaussian concentration). For every $i,j \in I$ and $\alpha > 0$,
$$\mathbb{P}(\eta_i - \eta_j \geq \alpha) \leq \exp\left( -\frac{\alpha^2}{2\, d(i,j)^2} \right).$$

The next result can be found in [35, Thm. 3.18].

Lemma 1.11 (Sudakov minoration). For every $\alpha > 0$, if $I' \subseteq I$ is such that $i,j \in I'$ and $i \neq j$ implies $d(i,j) \geq \alpha$, then
$$\mathbb{E} \sup_{i \in I'} \eta_i \gtrsim \alpha \sqrt{\log |I'|}.$$

Now, let $G = (V,E)$ be a network, and consider the associated continuous-time random walk $\{X_t\}$ with local times $L^v_t$. We define also the inverse local times
$$\tau_v(t) = \inf\{ s : L^v_s \geq t \}.$$
An analog of the following lemma was proved in [30] for the discrete-time chain; the continuous-time version can be similarly proved, though we will not do so here, as it will not be used in the arguments to come. In interpreting the next lemma, it helps to recall that $L^u_{\tau_u(t)} = t$.

Lemma 1.12 (Concentration for local times). For all $u,v \in V$ and any $\alpha > 0$ and $t > 0$, we have
$$\mathbb{P}_u\left( L^u_{\tau_u(t)} - L^v_{\tau_u(t)} \geq \alpha \right) \leq \exp\left( -\frac{\alpha^2}{4\, t\, R_{\mathrm{eff}}(u,v)} \right),$$
where $\mathbb{P}_u$ denotes the measure for the random walk started at $u$.

Thus local times satisfy sub-Gaussian concentration, where now the distance $d$ is replaced by $\sqrt{t \cdot R_{\mathrm{eff}}}$. On the other side, the classical bound of Matthews [43] provides an analog to Lemma 1.11.

Lemma 1.13 (Matthews bound). For every $\alpha > 0$, if $V' \subseteq V$ is such that $u,v \in V'$ and $u \neq v$ implies $H(u,v) \geq \alpha$, then
$$t_{\mathrm{cov}}(G) \geq \alpha \log(|V'| - 1).$$
Of course, the similar structure of these lemmas offers no formal connection, but merely a hint that something deeper may be happening. We now discuss a far more concrete connection between local times and Gaussian processes.

The isomorphism theorems. The distribution of the local times for a Borel right process can be fully characterized by certain associated Gaussian processes; results of this flavor go by the name of Isomorphism Theorems. Several versions have been developed by Ray [44] and Knight [33], Dynkin [18, 17], Marcus and Rosen [40, 41], Eisenbaum [19], and Eisenbaum, Kaspi, Marcus, Rosen, and Shi [20]. In what follows, we present the second Ray–Knight theorem in the special case of a continuous-time random walk. It first appeared in [20]; see also Theorem 8.2.2 of the book by Marcus and Rosen [42] (which contains a wealth of information on the connection between local times and Gaussian processes). It is easy to verify that the continuous-time random walk on a connected graph is indeed a recurrent strongly symmetric Borel right process.

Theorem 1.14 (Generalized Second Ray–Knight Isomorphism Theorem). Fix $v_0 \in V$ and define the inverse local time
$$\tau(t) = \inf\{ s : L^{v_0}_s > t \}. \qquad (14)$$
Let $T_0$ be the hitting time to $v_0$, and let $\Gamma_{v_0}(x,y) = \mathbb{E}_x(L^y_{T_0})$. Denote by $\eta = \{\eta_x : x \in V\}$ a mean-zero Gaussian process with covariance $\Gamma_{v_0}(x,y)$. Let $\mathbb{P}_{v_0}$ and $\mathbb{P}_\eta$ be the measures on the processes $\{L^x_{T_0}\}$ and $\{\eta_x\}$, respectively. Then under the measure $\mathbb{P}_{v_0} \times \mathbb{P}_\eta$, for any $t > 0$,
$$\left\{ L^x_{\tau(t)} + \tfrac{1}{2} \eta_x^2 : x \in V \right\} \;\stackrel{\mathrm{law}}{=}\; \left\{ \tfrac{1}{2}\left( \eta_x + \sqrt{2t} \right)^2 : x \in V \right\}. \qquad (15)$$

Thus to every continuous-time random walk, we can associate a Gaussian process $\{\eta_v\}_{v \in V}$. As discussed in Section 2.4, we have the relationship $d(u,v) = \sqrt{R_{\mathrm{eff}}(u,v)}$, where $d(u,v) = \sqrt{\mathbb{E}|\eta_u - \eta_v|^2}$.
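One readily testable consequence of (15): taking expectations of both sides and using $\mathbb{E}\,\eta_x^2 = \Gamma_{v_0}(x,x)$ gives $\mathbb{E}_{v_0} L^x_{\tau(t)} = t$ for every $x \in V$. The sketch below is our own; the triangle graph, the level $t = 2$, and the trial count are arbitrary choices. It simulates the continuous-time walk (unit conductances, so $c_v = 2$ on the triangle) and checks this identity.

```python
import random

def local_times_at_tau(t_level, rng):
    """Continuous-time simple random walk on the triangle K3, started at
    v0 = 0, with mean-1 exponential holding times.  Run until the local
    time at v0 (= occupation time / c_{v0}, c_{v0} = 2) reaches t_level,
    i.e. until tau(t_level), and return (L^1, L^2) at that moment."""
    occ = [0.0, 0.0, 0.0]              # total occupation time per vertex
    pos = 0
    while True:
        hold = rng.expovariate(1.0)
        if pos == 0 and (occ[0] + hold) / 2.0 >= t_level:
            # local time at v0 hits t_level during this sojourn: stop there
            break
        occ[pos] += hold
        pos = rng.choice([v for v in range(3) if v != pos])
    return occ[1] / 2.0, occ[2] / 2.0  # L^x = occ[x] / c_x

rng = random.Random(1)
t, trials = 2.0, 4000
s1 = s2 = 0.0
for _ in range(trials):
    a, b = local_times_at_tau(t, rng)
    s1 += a
    s2 += b
print(f"E L^1_tau(t) ~ {s1/trials:.3f}, E L^2_tau(t) ~ {s2/trials:.3f}  (theory: {t})")
```

Matching the full multivariate law in (15) is of course much stronger than this first-moment check; the paper's lower-bound argument is precisely about extracting more than first-moment information from the identity.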
In particular, the process $\{\eta_v\}_{v \in V}$ is the Gaussian free field on the network $G$.

Using the Isomorphism Theorem in conjunction with concentration bounds for Gaussian processes, we already have enough machinery to prove the following upper bound in Section 2.1:
$$t_{\mathrm{cov}}(G) \leq t_{\mathrm{bl}}(G,\delta) \lesssim_\delta \mathcal{C}\,[\gamma_2(V,d)]^2 = \mathcal{C} \left[ \gamma_2(V, \sqrt{R_{\mathrm{eff}}}) \right]^2. \qquad (16)$$
We also show how to prove a matching lower bound in terms of $\gamma_2$, but for a slightly different notion of "blanket time." Thus (16) proves the first half of Theorem 1.9.

The lower bound for cover times is quite a bit more difficult to prove. Of course, the cover and return time relates to the event $\{\exists v : L^v_{\tau(t)} = 0\}$, and unfortunately the correspondence (15) seems too coarse to provide lower bounds on the probability of this event directly. To this end, we need to show that for the right value of $t$ in Theorem 1.14, we often have $\eta_x \approx -\sqrt{2t}$ for some $x \in V$. The main difficulty is that we will have to show that there is often a vertex $x \in V$ with $|\eta_x + \sqrt{2t}|$ being much smaller than the standard deviation of $\eta_x$. In doing so, we will use the full power of the majorizing measures theory, as well as the special structure of the Gaussian processes arising from the Isomorphism Theorem.

The discrete Gaussian free field and a tree-like subprocess. In Section 2.4 (see (35)), we recall that the Gaussian processes arising from the Isomorphism Theorem are not arbitrary, but correspond to the Gaussian free field (GFF) associated with $G$. Special properties of such processes will be essential to our proof of Theorem 1.9.
In particular, if we use $R_{\mathrm{eff}}(v,S)$ to denote the effective resistance between a point $v$ and a set of vertices $S \subseteq V$, then we have the relationship
$$\sqrt{R_{\mathrm{eff}}(v,S)} = \operatorname{dist}_{L^2}\!\left( \eta_v,\, \operatorname{aff}(\{\eta_w\}_{w \in S}) \right), \qquad (17)$$
where $\operatorname{aff}(\cdot)$ denotes the affine hull, and $\operatorname{dist}_{L^2}$ is the $L^2$ distance in the Hilbert space underlying the process $\{\eta_v\}_{v \in V}$. In Section 2.3, we prove a number of properties of the effective resistance metric (e.g. Foster's network theorem); combined with (17), this yields some properties unique to processes arising from a GFF.

Next, in Section 3, we recall that one of the primary components of the majorizing measures theory is that every Gaussian process $\{\eta_i\}_{i \in I}$ contains a "tree-like" subprocess which controls $\mathbb{E} \sup_{i \in I} \eta_i$. After a preprocessing step that ensures our trees have a number of additional features, we use the structure of the GFF to select a representative subtree with very strong independence properties that will be essential to our analysis of cover times.

Restructuring the randomness and a percolation argument. The majorizing measures theory is designed to control the first moment $\mathbb{E} \sup_{i \in I} \eta_i$ of the supremum of a Gaussian process. In analyzing (15) to prove a lower bound on the cover times, we actually need to employ a variant of the second moment method. The need for this, and a detailed discussion of how it proceeds, are presented at the beginning of Section 4.

Towards this end, we want to associate events to the leaves of our "tree-like" subprocess which can be thought of as "open events" in a percolation process on the tree. For general trees, it is known that the second moment method gives accurate estimates for the probability of having an open path to a leaf [38].
While our trees are not regular, they are "regularized" by the majorizing measure, and we do a somewhat standard analysis of such a process in Section 4.3. The real difficulty involves setting up the right filtration on the probability space corresponding to our tree so that the percolation argument yields the desired control on the cover times. This requires a delicate definition of the events associated to each edge, and the ensuing analysis forms the technical core of our argument in Section 4.

Algorithmic issues. In order to complete the proof of Theorem 1.5 and thus resolve Question 1.2, we present a deterministic algorithm which computes an approximation to $\gamma_2(X,d)$ for any metric space $(X,d)$. This is achieved in Section 3.3. While the algorithm is fairly elementary to describe, its analysis requires a number of tools from the majorizing measures theory. We remark that, in combination with Theorem 1.9, this yields the following result.

Theorem 1.15. For any finite-state, reversible Markov chain presented as a network $G = (V,E)$ with given conductances $\{c_{xy}\}$, there is a deterministic, polynomial-time algorithm which computes a value $A(G)$ such that $A(G) \asymp t_{\mathrm{cov}}(G)$.

Observe that for general reversible chains, the cover time is not necessarily bounded by a polynomial in $|V|$, and thus even randomized simulation of the chain does not yield a polynomial-time algorithm for approximating $t_{\mathrm{cov}}(G)$.

Finally, in Section 4.5, we prove Theorems 1.7 and 1.8 in the setting of arbitrary reversible Markov chains, leading to a near-linear time randomized algorithm for computing cover times.

2 Gaussian processes and local times

We now discuss properties of the Gaussian processes arising from the isomorphism theorem (Theorem 1.14).
In Section 2.1, we show that the isomorphism theorem, combined with concentration properties of Gaussian processes, is already enough to get strong control on blanket times and related quantities. In Section 2.3, we prove some geometric properties of the resistance metric on networks that will be crucial to our work on the cover time in Sections 3 and 4. Finally, in Section 2.4, we recall the definition of the Gaussian free field and show how the geometry of such a process relates to the geometry of the underlying resistance metric.

2.1 The blanket time

We first remark that the covariance matrix of the Gaussian process arising from the isomorphism theorem can be calculated explicitly in terms of the resistance metric on the network $G(V)$. Throughout this section, the process $\{\eta_x\}_{x\in V}$ refers to the one resulting from Theorem 1.14 with $v_0 \in V$ some fixed (but arbitrary) vertex, $\tau(t)$ refers to the inverse local time defined in (14), and $T_0$ is the hitting time to $v_0$.

Lemma 2.1. For every $x,y\in V$,
\[
\Gamma_{v_0}(x,y) = \mathbb{E}_x\big(L^y_{T_0}\big) = \tfrac12\big(R_{\mathrm{eff}}(x,v_0) + R_{\mathrm{eff}}(v_0,y) - R_{\mathrm{eff}}(x,y)\big).
\]
In particular, $\mathbb{E}(\eta_x - \eta_y)^2 = R_{\mathrm{eff}}(x,y)$.

Proof. To prove the lemma, we use the cycle identity for hitting times (see, e.g., [36, Lem. 10.10]), which asserts that
\[
H(x,v_0) + H(v_0,y) + H(y,x) = H(x,y) + H(y,v_0) + H(v_0,x). \tag{18}
\]
Averaging both sides of (18) and recalling (9) yields
\[
H(x,v_0) + H(v_0,y) + H(y,x) = \frac{C}{2}\big[R_{\mathrm{eff}}(x,v_0) + R_{\mathrm{eff}}(v_0,y) + R_{\mathrm{eff}}(x,y)\big].
\]
Now, we subtract $C\,R_{\mathrm{eff}}(x,y) = H(x,y) + H(y,x)$ from both sides, giving
\[
H(x,v_0) + H(v_0,y) - H(x,y) = \frac{C}{2}\big[R_{\mathrm{eff}}(x,v_0) + R_{\mathrm{eff}}(v_0,y) - R_{\mathrm{eff}}(x,y)\big].
\]
Finally, we conclude using the identity (see, e.g., [2, Ch. 2, Lem. 9])
\[
\mathbb{E}_x\big(L^y_{T_0}\big) = \frac{1}{C}\big(H(x,v_0) + H(v_0,y) - H(x,y)\big). \qquad\square
\]
We now relate the blanket time of the random walk to the expected supremum of its associated Gaussian process. The following is a central facet of the theory of concentration of measure; see, for example, [34, Thm. 7.1, Eq. (7.4)].

Lemma 2.2. Consider a Gaussian process $\{\eta_x : x\in V\}$ and define $\sigma = \sup_{x\in V}\big(\mathbb{E}(\eta_x^2)\big)^{1/2}$. Then for $\alpha > 0$,
\[
\mathbb{P}\Big(\Big|\sup_{x\in V}\eta_x - \mathbb{E}\sup_{x\in V}\eta_x\Big| > \alpha\Big) \leqslant 2\exp\big(-\alpha^2/2\sigma^2\big).
\]

We are now ready to establish the upper bound on the strong blanket time $t^\star_{\mathrm{bl}}(G,\delta)$, for any fixed $0 < \delta < 1$. Note that this will naturally yield an upper bound on $t_{\mathrm{bl}}(\delta)$.

Theorem 2.3. Consider a network $G(V)$ and its total conductance $C = \sum_{x\in V} c_x$. For any fixed $0 < \delta < 1$, the blanket time $t^\star_{\mathrm{bl}}(G,\delta)$ of the random walk on $G(V)$ satisfies
\[
t^\star_{\mathrm{bl}}(G,\delta) \lesssim_\delta C\cdot\Big(\mathbb{E}\sup_{x\in V}\eta_x\Big)^2,
\]
where $\{\eta_x\}$ is the associated Gaussian process from Theorem 1.14.

Proof. We first prove that for some $A_\delta > 0$,
\[
t^\star_{\mathrm{bl}}(\delta) \leqslant A_\delta\, C\left(\Big(\mathbb{E}\sup_{x\in V}\eta_x\Big)^2 + \sup_{x\in V}\mathbb{E}(\eta_x^2)\right). \tag{19}
\]
Fix a vertex $v_0\in V$ and consider the local times $\{L^x_{\tau(t)} : x\in V\}$, where for $t > 0$ we write $\tau(t) = \inf\{s : L^{v_0}_s > t\}$. Let $\sigma = \sup_{x\in V}\sqrt{\mathbb{E}(\eta_x^2)}$ and $\Lambda = \mathbb{E}\sup_x\eta_x$. Use $\{\eta^L_x\}$ to denote the copy of the Gaussian process corresponding to the left-hand side of (15), and $\{\eta^R_x\}$ to denote the i.i.d. process corresponding to the right-hand side. Fix $\beta > 0$, and set $t = t(\beta) = \beta(\Lambda^2 + \sigma^2)$. By Theorem 1.14, we get that
\[
\mathbb{P}\Big(\min_x L^x_{\tau(t)} \leqslant \sqrt{\delta}\,t\Big) \leqslant \mathbb{P}\left(\inf_x \tfrac12\big(\eta^R_x + \sqrt{2t}\big)^2 \leqslant \frac{1+\sqrt{\delta}}{2}\,t\right) + \mathbb{P}\left(\sup_x \tfrac12\big(\eta^L_x\big)^2 \geqslant \frac{1-\sqrt{\delta}}{2}\,t\right).
\]
Therefore,
\[
\mathbb{P}\Big(\min_x L^x_{\tau(t)} \leqslant \sqrt{\delta}\,t\Big) \leqslant \mathbb{P}\big(\inf_x \eta^R_x \leqslant -a_\delta\sqrt{t}\big) + \mathbb{P}\big(\sup_x |\eta^L_x| \geqslant b_\delta\sqrt{t}\big),
\]
where $a_\delta = \sqrt{2} - \sqrt{1+\sqrt{\delta}}$ and $b_\delta = \sqrt{1-\sqrt{\delta}}$. Applying Lemma 2.2, we obtain that if $\beta \geqslant \beta_0(\delta)$ for some $\beta_0(\delta) > 0$, then
\[
\mathbb{P}\Big(\min_x L^x_{\tau(t)} \leqslant \sqrt{\delta}\,t\Big) \leqslant 6\exp(-\gamma_\delta\beta), \tag{20}
\]
where $\gamma_\delta = \tfrac12\big(a_\delta^2 \wedge b_\delta^2\big).$
On the other hand, we have
\[
\mathbb{P}\Big(\max_x L^x_{\tau(t)} \geqslant t/\sqrt{\delta}\Big) \leqslant \mathbb{P}\left(\max_x \tfrac12\big(\eta^R_x + \sqrt{2t}\big)^2 \geqslant t/\sqrt{\delta}\right) = \mathbb{P}\big(\max_x \eta^R_x \geqslant a'_\delta\sqrt{t}\big),
\]
where $a'_\delta = \sqrt{1/\delta - 1}$. Applying Lemma 2.2 again for $\beta \geqslant \beta_0(\delta)$, we get that
\[
\mathbb{P}\Big(\max_x L^x_{\tau(t)} \geqslant t/\sqrt{\delta}\Big) \leqslant 2\exp(-\gamma'_\delta\beta), \tag{21}
\]
where $\gamma'_\delta = (a'_\delta)^2/2$. Note that assuming $\min_x L^x_{\tau(t)} \geqslant \sqrt{\delta}\,t$ and $\max_x L^x_{\tau(t)} \leqslant t/\sqrt{\delta}$, we have $\tau(t) = \sum_x c_x L^x_{\tau(t)} \leqslant Ct/\sqrt{\delta}$, as well as $\min_{x,y} L^x_{\tau(t)}/L^y_{\tau(t)} \geqslant \delta$. It then follows that $\tau^\star_{\mathrm{bl}} \leqslant \tau(t) \leqslant Ct/\sqrt{\delta}$. Therefore, we can deduce that
\[
\big\{\tau^\star_{\mathrm{bl}} \geqslant Ct/\sqrt{\delta}\big\} \subseteq \Big\{\min_x L^x_{\tau(t)} \leqslant \sqrt{\delta}\,t\Big\} \cup \Big\{\max_x L^x_{\tau(t)} \geqslant t/\sqrt{\delta}\Big\}.
\]
Combined with (20) and (21), this yields
\[
\mathbb{P}\big(\tau^\star_{\mathrm{bl}} \geqslant Ct/\sqrt{\delta}\big) \leqslant 6\exp(-\gamma_\delta\beta) + 2\exp(-\gamma'_\delta\beta).
\]
It then follows that $t^\star_{\mathrm{bl}} \leqslant A_\delta C(\Lambda^2 + \sigma^2)$ for some $A_\delta > 0$ which depends only on $\delta$, establishing (19).

It remains to prove that $\sigma = O(\Lambda)$. To this end, let $x^*$ be such that $\mathbb{E}\eta_{x^*}^2 = \sigma^2$. We have
\[
\Lambda \geqslant \mathbb{E}\max(\eta_{v_0}, \eta_{x^*}) = \mathbb{E}\max(0, \eta_{x^*}) = \frac{\sigma}{\sqrt{2\pi}}. \tag{22}
\]
This completes the proof for the continuous-time case. $\square$

Remark 1. An interesting question is the asymptotic behavior of the $\delta$-blanket time as $\delta \to 1$, namely the dependence on $\delta$ of $A_\delta$ in (19). As implied in the proof, we can see that
\[
A_\delta \lesssim \frac{1}{\gamma_\delta} + \frac{1}{\gamma'_\delta} \lesssim \frac{1}{(1-\delta)^2}.
\]
These asymptotics are tight for the complete graph; see e.g. [54, Cor. 2].

We next extend the proof of the preceding theorem to the case of the discrete-time random walk. The next lemma contains the main estimate required for this extension.

Lemma 2.4. Let $G(V)$ be a network and write $\gamma_2 = \gamma_2(V, \sqrt{R_{\mathrm{eff}}})$. Then for all $u \geqslant 16$, we have
\[
\sum_{v\in V} e^{-u\cdot c_v\gamma_2^2} \lesssim e^{-u/8}.
\]

Proof. By definition of the $\gamma_2$ functional, we can choose a sequence of partitions $\mathcal{A}_k$ with $|\mathcal{A}_k| \leqslant 2^{2^k}$ such that
\[
\gamma_2 \geqslant \frac12\sup_{v\in V}\sum_{k\geqslant 0} 2^{k/2}\,\mathrm{diam}\big(\mathcal{A}_k(v)\big).
\]
For $v\in V$, let $k_v = \min\{k : \{v\}\in\mathcal{A}_k\}$.
It is clear that $R_{\mathrm{eff}}(u,v) \geqslant 1/c_v$ for all $u\neq v$, and hence $\big(\mathrm{diam}(\mathcal{A}_{k_v-1}(v))\big)^2 \geqslant 1/c_v$. Therefore, we see that
\[
\sum_{v\in V} e^{-u\cdot c_v\gamma_2^2} = \sum_{k=0}^{\infty}\sum_{v\,:\,k_v = k+1} e^{-u\cdot c_v\gamma_2^2} \leqslant \sum_{k=1}^{\infty} 2^{2^{k+1}} e^{-u2^k/4} \lesssim e^{-u/8},
\]
completing the proof. $\square$

Theorem 2.5. Consider a network $G(V)$ and its total conductance $C = \sum_{x\in V} c_x$. For any fixed $0 < \delta < 1$, the discrete blanket time $t_{\mathrm{bl}}(G,\delta)$ of the random walk on $G(V)$ satisfies
\[
t_{\mathrm{bl}}(G,\delta) \lesssim_\delta C\cdot\Big(\mathbb{E}\sup_{x\in V}\eta_x\Big)^2,
\]
where $\{\eta_x\}$ is the associated Gaussian process from Theorem 1.14.

Proof. We now consider the embedded discrete-time random walk of the continuous-time counterpart (i.e., the corresponding jump chain; see [2, Ch. 2]). Let $N^v_t$ be such that $c_v\cdot N^v_t$ is the number of visits to vertex $v$ up to continuous time $t$; i.e., $N^v_t$ is a discrete-time analogue of the local time $L^v_t$. Fix a vertex $v_0\in V$ and consider the local times $\{L^x_{\tau(t)} : x\in V\}$. Let $\sigma = \sup_{x\in V}\sqrt{\mathbb{E}(\eta_x^2)}$ and $\Lambda = \mathbb{E}\sup_x\eta_x$. Again, set $t = \beta(\Lambda^2 + \sigma^2)$. Let $\tau_{\mathrm{bl}}(\delta)$ denote the first time at which $N^x_t \geqslant \delta t/C$ for every $x\in V$. Assuming that $\min_x N^x_{\tau(t)} \geqslant \delta^{1/4}t$ and $\max_x N^x_{\tau(t)} \leqslant t/\delta^{3/4}$, we have $\tau(t) = \sum_x c_x N^x_{\tau(t)} \leqslant Ct/\delta^{3/4}$, and thus $\min_x N^x_{\tau(t)} \geqslant \delta\tau(t)/C$. It then follows that $\tau_{\mathrm{bl}}(\delta) \leqslant \tau(t) \leqslant Ct/\delta^{3/4}$. Therefore, we deduce that
\[
\Big\{\tau_{\mathrm{bl}}(\delta) \geqslant \frac{Ct}{\delta^{3/4}}\Big\} \subseteq \Big\{\min_x N^x_{\tau(t)} \leqslant \delta^{1/4}t\Big\} \cup \Big\{\max_x N^x_{\tau(t)} \geqslant t/\delta^{3/4}\Big\}.
\]
Therefore we have
\[
\mathbb{P}\Big(\tau_{\mathrm{bl}}(\delta) \geqslant \frac{Ct}{\delta^{3/4}}\Big) \leqslant \mathbb{P}\Big(\min_x L^x_{\tau(t)} \leqslant \sqrt{\delta}\,t \text{ or } \max_x L^x_{\tau(t)} \geqslant t/\sqrt{\delta}\Big) + \mathbb{P}\Big(\forall x: \sqrt{\delta}\,t \leqslant L^x_{\tau(t)} \leqslant t/\sqrt{\delta},\ \text{and}\ \min_x N^x_{\tau(t)} \leqslant \delta^{1/4}t \text{ or } \max_x N^x_{\tau(t)} \geqslant t/\delta^{3/4}\Big).
\]
Note that we have already bounded the first term in (20) and (21). The second term can be bounded by a simple application of a large deviation inequality for the sum of i.i.d. exponential variables.
Precisely,
\[
\sum_{x\in V}\mathbb{P}\Big(\sqrt{\delta}\,t \leqslant L^x_{\tau(t)} \leqslant t/\sqrt{\delta},\ N^x_{\tau(t)} \leqslant \delta^{1/4}t \text{ or } N^x_{\tau(t)} \geqslant t/\delta^{3/4}\Big) \lesssim \sum_{x\in V} e^{-\tilde a_\delta\cdot c_x t}
\]
for some constant $\tilde a_\delta > 0$ depending only on $\delta$. Recall that Theorem (MM) implies $\mathbb{E}\sup_x\eta_x \asymp \gamma_2(V,\sqrt{R_{\mathrm{eff}}})$. By (22), we see that $\sigma \leqslant \sqrt{2\pi}\,\Lambda$. Altogether, we get that $t \asymp \beta\Lambda^2 \asymp \beta\,\gamma_2(V,\sqrt{R_{\mathrm{eff}}})^2$. Applying Lemma 2.4, we conclude that there exists $\tilde\beta_0(\delta) > 0$ depending only on $\delta$ such that for all $\beta \geqslant \tilde\beta_0(\delta)$, we have
\[
\mathbb{P}\big(\tau_{\mathrm{bl}}(G,\delta) \geqslant Ct/\delta^{3/4}\big) \lesssim e^{-\tilde b_\delta\beta},
\]
where $\tilde b_\delta$ is a constant depending only on $\delta$. This immediately yields the desired upper bound on the blanket time for the discrete-time random walk. $\square$

We next exhibit a lower bound on a variation of the blanket time (considered in [30]). It is apparent that the lower bound on the cover time, which will be proved in Section 4, is an automatic lower bound on the blanket time. In what follows, though, we try to give a simple argument that can be regarded as a warm-up. For the convenience of the analysis, we consider the following notion. For $0 < \varepsilon < 1$, define
\[
t^*_{\mathrm{bl}}(G,\varepsilon) = \max_{w\in V}\inf\big\{s : \mathbb{P}_w\big(\forall u,v\in V: L^u_t \leqslant 2L^v_t\big) \geqslant \varepsilon \text{ for all } t \geqslant s\big\}. \tag{23}
\]

Theorem 2.6. Consider a network $G(V)$ and its total conductance $C = \sum_{x\in V} c_x$. For any fixed $0 < \varepsilon < 1$, we have
\[
t^*_{\mathrm{bl}}(G,\varepsilon) \gtrsim_\varepsilon C\cdot\Big(\mathbb{E}\sup_{x\in V}\eta_x\Big)^2.
\]

In order to prove Theorem 2.6, we will use the next simple lemma. We will also require this estimate in Section 4.

Lemma 2.7. Let $\tau(t)$ be the inverse local time at vertex $v_0$, as defined in (14). Let $C$ be the total conductance and let $D = \max_{x,y\in V}\sqrt{R_{\mathrm{eff}}(x,y)}$. Then, for all $\beta > 0$ and $t \geqslant D^2/\beta^2$,
\[
\mathbb{P}_{v_0}\big(\tau(t) \leqslant \beta Ct\big) \leqslant 3\beta.
\]

Proof. We use $\mathbb{P}_v$ to denote the measure on random walks started at a vertex $v\in V$, and we use $\mathbb{E}_v$ similarly. Let $p_\delta = \min_v\{\mathbb{P}_v(\tau(t) \leqslant \delta Ct)\}$ for some $\delta > 0$.
Using the strong Markov property, we get that for all $v\in V$,
\[
\mathbb{P}_v\big(\tau(t) \geqslant k\delta Ct\big) \leqslant (1-p_\delta)^k.
\]
In particular, $\mathbb{E}_v\tau(t) \leqslant \delta Ct/p_\delta$. By Theorem 1.14, it follows easily that $\mathbb{E}_{v_0}\tau(t) = Ct$. Since $\mathbb{E}_v\tau(t) \geqslant \mathbb{E}_{v_0}(\tau(t))$, we deduce that $p_\delta \leqslant \delta$. Let $u = u(\delta)$ be such that $\mathbb{P}_u(\tau(t) \leqslant \delta Ct) = p_\delta$. Let $Y, Z$ be random variables with the law of $\tau(t)$ when the random walk is started at $u$ and $v_0$, respectively. Clearly,
\[
Y \stackrel{\mathrm{law}}{=} Z + T_{v_0}, \tag{24}
\]
where $T_{v_0}$ is distributed as the hitting time to $v_0$ when the random walk is started at $u$, and $T_{v_0}$ is independent of $Z$. Since $R_{\mathrm{eff}}(u,v_0) \leqslant D^2$, we have $\mathbb{E}_u T_{v_0} \leqslant CD^2$ (by (9)), and this yields $\mathbb{P}_u\big(T_{v_0} \geqslant CD^2/\beta\big) \leqslant \beta$. Using the assumption $t \geqslant D^2/\beta^2$ and (24), we conclude that
\[
\mathbb{P}(Z \leqslant \beta Ct) \leqslant \mathbb{P}\big(Z \leqslant 2\beta Ct - CD^2/\beta\big) \leqslant \mathbb{P}(Y \leqslant 2\beta Ct) + \mathbb{P}\big(T_{v_0} \geqslant CD^2/\beta\big) \leqslant p_{2\beta} + \beta \leqslant 3\beta,
\]
as required. $\square$

We are now ready to establish the lower bound on $t^*_{\mathrm{bl}}(G,\varepsilon)$.

Proof of Theorem 2.6. We consider the associated Gaussian process as in the proof of Theorem 2.3. Let $\sigma = \sup_{x\in V}\sqrt{\mathbb{E}\eta_x^2}$ and $\Lambda = \mathbb{E}\sup_x\eta_x$. Observe that the maximal hitting time is a simple lower bound on $t^*_{\mathrm{bl}}(G,\varepsilon)$, up to a constant depending only on $\varepsilon$. In light of Lemma 2.1, we see that $t^*_{\mathrm{bl}}(G,\varepsilon) \gtrsim_\varepsilon C\cdot\sigma^2$. Therefore, we can assume in what follows that
\[
\Lambda^2 \geqslant 100\log(4/\varepsilon)\,\varepsilon^{-2}\sigma^2. \tag{25}
\]
Let $t^* = \tfrac12\Lambda^2$. By Lemma 2.2, we get
\[
\mathbb{P}\left(\inf_{x\in V}\tfrac12\big(\eta^R_x + \sqrt{2t^*}\big)^2 \leqslant \log(4/\varepsilon)\,\sigma^2\right) \geqslant \mathbb{P}\left(\Big|\sup_{x\in V}\eta^R_x - \Lambda\Big| \leqslant \sqrt{2\log(4/\varepsilon)}\,\sigma\right) \geqslant 1 - \frac{\varepsilon}{2}.
\]
Applying Theorem 1.14, we obtain
\[
\mathbb{P}\Big(\inf_{x\in V} L^x_{\tau(t^*)} \leqslant \log(4/\varepsilon)\,\sigma^2\Big) \geqslant 1 - \frac{\varepsilon}{2}.
\]
By the triangle inequality, we have $D \leqslant 2\sigma$. Recalling the assumption (25), we can apply Lemma 2.7 and deduce that
\[
\mathbb{P}\big(\tau(t^*) \leqslant \varepsilon Ct^*/6\big) \leqslant \varepsilon/2.
\]
Writing $t_0 = \varepsilon Ct^*/6$, we can then obtain
\[
\mathbb{P}\Big(\inf_{x\in V} L^x_{t_0} \leqslant \log(4/\varepsilon)\,\sigma^2,\ \tau(t^*) \geqslant t_0\Big) \geqslant 1 - \varepsilon.
\]
Also, we see that $\sup_{x\in V} L^x_{t_0} \geqslant \varepsilon\Lambda^2/12$ whenever $\tau(t^*) \geqslant t_0$. Using assumption (25) again, we conclude that
\[
\mathbb{P}_{v_0}\big(\exists\, x,y\in V : L^x_{t_0} \geqslant 2L^y_{t_0}\big) \geqslant 1 - \varepsilon.
\]
This implies that $t^*_{\mathrm{bl}}(G,\varepsilon) \geqslant t_0$, completing the proof. $\square$

2.2 An asymptotically strong upper bound

Finally, we show a strong upper bound for the asymptotics of $t_{\mathrm{cov}}$ on a sequence of graphs $\{G_n\}$, assuming $t_{\mathrm{hit}}(G_n) = o(t_{\mathrm{cov}}(G_n))$.

Theorem 2.8. For any graph $G = (V,E)$ with $v_0\in V$, let $t_{\mathrm{hit}}(G)$ be the maximal hitting time in $G$ and let $\{\eta_v\}_{v\in V}$ be the GFF on $G$ with $\eta_{v_0} = 0$. Then, for a universal constant $C > 0$,
\[
t_{\mathrm{cov}}(G) \leqslant \left(1 + C\sqrt{\frac{t_{\mathrm{hit}}(G)}{t_{\mathrm{cov}}(G)}}\right)\cdot |E|\cdot\Big(\mathbb{E}\sup_{v\in V}\eta_v\Big)^2.
\]

Proof. Theorem 2.5 asserts that
\[
t_{\mathrm{cov}}(G) \lesssim |E|\,\big(\mathbb{E}\max_v\eta_v\big)^2. \tag{26}
\]
Write $\sigma^2 = \max_v\mathbb{E}\eta_v^2$. Note that $\sigma^2$ corresponds to the diameter of $V$ in the effective resistance metric; thus $t_{\mathrm{hit}}(G) \asymp |E|\sigma^2$. Denote $S = \sum_v d_v\eta_v^2$, where $d_v$ is the degree of vertex $v$. By a generalized Hölder inequality and moment estimates for Gaussian variables (here we use that $\mathbb{E}X^6 = 15$ for a standard Gaussian variable $X$), we obtain
\[
\mathbb{E}S^3 \leqslant \sum_{u,v,w} d_u d_v d_w\,\mathbb{E}\big(\eta_u^2\eta_v^2\eta_w^2\big) \leqslant \sum_{u,v,w} d_u d_v d_w\,\mathbb{E}(\eta_u^6)^{1/3}\,\mathbb{E}(\eta_v^6)^{1/3}\,\mathbb{E}(\eta_w^6)^{1/3} \leqslant 15\,|E|^3\sigma^6.
\]
An application of Markov's inequality then yields
\[
\mathbb{P}\big(S \geqslant \alpha|E|\sigma^2\big) \leqslant \frac{15}{\alpha^3}. \tag{27}
\]
Write $Q = \sum_v d_v\eta_v$. Clearly, $Q$ is a centered Gaussian with variance bounded by $4|E|^2\sigma^2$, and therefore
\[
\mathbb{P}\big(|Q| \geqslant \alpha|E|\sigma\big) \leqslant 2e^{-\alpha^2/8}. \tag{28}
\]
For $\beta > 0$, let $t = \tfrac12\big(\mathbb{E}\max_v\eta_v + \beta\sigma\big)^2$. Noting that $\tau(t) = \sum_v d_v L^v_{\tau(t)}$ and recalling the isomorphism theorem (Theorem 1.14), we get that
\[
\tau(t) \preceq 2|E|t + \sqrt{2t}\,|Q| + \tfrac12 S,
\]
where $\preceq$ denotes stochastic domination. Combined with (27) and (28), we deduce that
\[
\mathbb{P}\Big(\tau(t) \geqslant 2|E|t + \sqrt{2t}\,\beta|E|\sigma + \beta|E|\sigma^2\Big) \leqslant \frac{12}{(\beta-2)^2} + 2e^{-\beta^2/8}.
\]
(29)

We now turn to bounding the probability that $\tau_{\mathrm{cov}} > \tau(t)$. Observe that on the event $\{\tau_{\mathrm{cov}} > \tau(t)\}$, there exists $v\in V$ such that $L^v_{\tau(t)} = 0$. It is clear that for all $v\in V$, we have $\mathbb{P}\big(\eta_v^2 \geqslant \beta\sigma^2/2\big) \leqslant 2e^{-\beta/4}$. Since $\{\eta_v\}_{v\in V}$ and $\{L^v_{\tau(t)}\}_{v\in V}$ are two independent processes, we obtain
\[
\mathbb{P}\Big(\{\tau_{\mathrm{cov}} > \tau(t)\} \setminus \Big\{\exists\, v\in V : L^v_{\tau(t)} + \tfrac12\eta_v^2 < \beta\sigma^2/2\Big\}\Big) \leqslant 2e^{-\beta/4}. \tag{30}
\]
On the other hand, we deduce from the concentration of Gaussian processes (Lemma 2.2) that
\[
\mathbb{P}\Big(\inf_v\big(\sqrt{2t} + \eta_v\big)^2 \leqslant \beta\sigma^2/2\Big) \leqslant 2e^{-\beta/8}.
\]
Applying the isomorphism theorem again and combining with (30), we get that
\[
\mathbb{P}\big(\tau_{\mathrm{cov}} > \tau(t)\big) \leqslant 4e^{-\beta/8}.
\]
Combined with (29), it follows that
\[
\mathbb{P}\Big(\tau_{\mathrm{cov}} \geqslant 2|E|t + \sqrt{2t}\,\beta|E|\sigma + \beta|E|\sigma^2\Big) \leqslant \frac{15}{\beta^3} + 2e^{-\beta^2/8} + 4e^{-\beta/8}.
\]
Since $t = \tfrac12\big(\mathbb{E}\max_v\eta_v + \beta\sigma\big)^2$, we can deduce that for some universal constant $C_1 > 0$,
\[
t_{\mathrm{cov}}(G) \leqslant |E|\big(\mathbb{E}\sup_v\eta_v\big)^2 + C_1|E|\big(\sigma^2 + \sigma\,\mathbb{E}\sup_v\eta_v\big).
\]
Recalling (26), we complete the proof. $\square$

2.3 Geometry of the resistance metric

We now discuss some relevant properties of the resistance metric on a network $G(V)$.

Effective resistances and network reduction. For a subset $S\subseteq V$, define the quotient network $G/S$ to have vertex set $(V\setminus S)\cup\{v_S\}$, where $v_S$ is a new vertex disjoint from $V$. The conductances in $G/S$ are defined by $c^{G/S}_{xy} = c_{xy}$ if $x,y\notin S$, and $c_{v_S x} = \sum_{y\in S} c_{xy}$ for $x\notin S$. Now, given $v\in V$ and $S\subseteq V$, we put
\[
R_{\mathrm{eff}}(v,S) \triangleq R^{G/S}_{\mathrm{eff}}(v, v_S), \tag{31}
\]
where the latter effective resistance is computed in $G/S$. For two disjoint sets $S,T\subseteq V$, we define
\[
R_{\mathrm{eff}}(S,T) \triangleq R^{G/S}_{\mathrm{eff}}(v_S, T),
\]
and the resistance is defined to be $0$ if $S\cap T \neq \emptyset$. It is straightforward to check that $R_{\mathrm{eff}}(S,T) = R_{\mathrm{eff}}(T,S)$.
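The quotient construction in (31) is directly computable. The following minimal sketch (the example network and helper names are our own, not the paper's) glues $S$ into a single vertex $v_S$, recomputes the weighted Laplacian, and reads off $R_{\mathrm{eff}}(v,S)$ via the pseudoinverse; two sanity checks confirm that a one-point set recovers the usual two-point resistance, and that gluing (shorting) can only decrease resistance, so $R_{\mathrm{eff}}(v,S) \leqslant \min_{w\in S} R_{\mathrm{eff}}(v,w)$.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted Laplacian of a network given as (x, y, conductance) triples."""
    L = np.zeros((n, n))
    for x, y, c in edges:
        L[x, y] -= c; L[y, x] -= c
        L[x, x] += c; L[y, y] += c
    return L

def r_eff_point(L, x, y):
    Lp = np.linalg.pinv(L)
    e = np.zeros(len(L)); e[x], e[y] = 1.0, -1.0
    return e @ Lp @ e

def r_eff_set(n, edges, v, S):
    """R_eff(v, S): glue S into a single new vertex v_S and measure R_eff(v, v_S)."""
    S = set(S)
    keep = [x for x in range(n) if x not in S]
    new = {x: i for i, x in enumerate(keep)}
    vS = len(keep)                       # index of the glued vertex
    q_edges = []
    for x, y, c in edges:
        a, b = new.get(x, vS), new.get(y, vS)
        if a != b:                       # edges inside S disappear in G/S
            q_edges.append((a, b, c))
    return r_eff_point(laplacian(vS + 1, q_edges), new[v], vS)

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 0.5)]
L = laplacian(4, edges)
# A one-point set gives back the usual two-point resistance.
print(np.isclose(r_eff_set(4, edges, 0, {3}), r_eff_point(L, 0, 3)))
# Shorting only helps: R_eff(v, S) <= min over w in S of R_eff(v, w).
print(r_eff_set(4, edges, 0, {2, 3})
      <= min(r_eff_point(L, 0, 2), r_eff_point(L, 0, 3)) + 1e-12)
```

Both checks print `True`; the second is an instance of Rayleigh monotonicity under shorting, which is also the mechanism behind Lemma 2.13 below.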
The following network reduction lemma was discovered by Campbell [10] under the name "star-mesh transformation" (see also, e.g., [39, Ex. 2.47(d)]). We give a proof for completeness.

Lemma 2.9. For a network $G(V)$ and a subset $\widetilde V\subseteq V$, there exists a network $\widetilde G(\widetilde V)$ such that for all $u,v\in\widetilde V$, we have $\tilde c_v = c_v$ and $R^{\widetilde G}_{\mathrm{eff}}(u,v) = R_{\mathrm{eff}}(u,v)$. We call $\widetilde G(\widetilde V)$ the reduced network. Furthermore, if $\widetilde V = V\setminus\{x\}$, then we have the formula
\[
\tilde c_{yz} = c_{yz} + c^{*,x}_{yz}, \quad\text{where}\quad c^{*,x}_{yz} = \frac{c_{xy}c_{xz}}{\sum_{w\in V\setminus\{x\}} c_{xw}}. \tag{32}
\]

Proof. Let $P$ be the transition kernel of the discrete-time random walk $\{S_t\}$ on the network $G$, and let $P_{\widetilde V}$ be the transition kernel of the induced random walk on $\widetilde V$, namely for $u,v\in\widetilde V$,
\[
P_{\widetilde V}(u,v) = \mathbb{P}_u\big(S_{T^+_{\widetilde V}} = v\big), \quad\text{where}\quad T^+_A \triangleq \min\{t\geqslant 1 : S_t\in A\} \text{ for } A\subseteq V.
\]
In other words, $P_{\widetilde V}$ is the chain watched in the subset $\widetilde V$. We observe that $P_{\widetilde V}$ is a reversible Markov chain on $\widetilde V$ (see, e.g., [2, 36]). It is clear that the chain $P_{\widetilde V}$ has the same invariant measure as that of $P$ restricted to $\widetilde V$, up to scaling by a constant. Therefore, there exists a (unique) network $\widetilde G(\widetilde V)$ corresponding to the Markov chain $P_{\widetilde V}$ such that $\tilde c_u = c_u$ for all $u\in\widetilde V$.

We next show that the effective resistances are preserved in $\widetilde G(\widetilde V)$. To this end, we use the following identity relating effective resistance and the random walk (see, e.g., [39, Eq. (2.5)]):
\[
\mathbb{P}_v\big(T^+_v > T_u\big) = \frac{1}{c_v\,R_{\mathrm{eff}}(u,v)}, \tag{33}
\]
where $T_u = \min\{t\geqslant 0 : S_t = u\}$. Since $P_{\widetilde V}$ is a watched chain on the subset $\widetilde V$, we see that $P^{\widetilde V}_v(T^+_v > T_u) = \mathbb{P}_v(T^+_v > T_u)$ for all $u,v\in\widetilde V$. This yields $R^{\widetilde G}_{\mathrm{eff}}(u,v) = R_{\mathrm{eff}}(u,v)$.

To prove the second half of the lemma, we let $\widetilde G(\widetilde V)$ be the network defined by (32).
A straightforward calculation yields that
\[
\tilde c_v = c_v - c_{xv} + \sum_{y\in V\setminus\{x\}} c^{*,x}_{vy} = c_v - c_{xv} + \sum_{y\in V\setminus\{x\}}\frac{c_{xv}c_{xy}}{\sum_{z\in V\setminus\{x\}} c_{xz}} = c_v.
\]
Let $P^{\widetilde G}$ be the transition kernel for the random walk on the network $\widetilde G(\widetilde V)$. Then
\[
P^{\widetilde G}(u,v) = \frac{\tilde c_{uv}}{\tilde c_u} = \frac{1}{c_u}\left(c_{uv} + \frac{c_{ux}c_{xv}}{\sum_{y\in V\setminus\{x\}} c_{xy}}\right).
\]
On the other hand, the watched chain $P_{\widetilde V}$ satisfies
\[
P_{\widetilde V}(u,v) = \frac{c_{uv}}{c_u} + \frac{c_{ux}}{c_u}\cdot\frac{c_{xv}}{\sum_{y\in V\setminus\{x\}} c_{xy}}.
\]
Altogether, we see that $P^{\widetilde G}(u,v) = P_{\widetilde V}(u,v)$, completing the proof. $\square$

Well-separated sets. The following result is an important property of the resistance metric, crucial for our analysis.

Proposition 2.10. Consider a network $G(V)$ and its associated resistance metric $(V, R_{\mathrm{eff}})$. Suppose that for some subset $S\subseteq V$, there is a partition $S = B_1\cup B_2\cup\cdots\cup B_m$ which satisfies the following properties.

1. For all $i = 1,2,\ldots,m$ and for all $x,y\in B_i$, we have $R_{\mathrm{eff}}(x,y) \leqslant \varepsilon/48$.
2. For all $i\neq j\in\{1,2,\ldots,m\}$, for all $x\in B_i$ and $y\in B_j$, we have $R_{\mathrm{eff}}(x,y) \geqslant \varepsilon$.

Then there is a subset $I\subseteq\{1,2,\ldots,m\}$ with $|I|\geqslant m/2$ such that for all $i\in I$,
\[
R_{\mathrm{eff}}(B_i, S\setminus B_i) \geqslant \varepsilon/24.
\]

In order to prove Proposition 2.10, we need the following two ingredients.

Lemma 2.11. Suppose the network $H(W)$ can be partitioned into two disjoint parts $A$ and $B$ such that for some $\varepsilon > 0$ and some vertices $u\in A$ and $v\in B$, we have

1. $R^H_{\mathrm{eff}}(u,v) \geqslant \varepsilon$, and
2. $R^H_{\mathrm{eff}}(u,x) \leqslant \varepsilon/12$ for all $x\in A$, and $R^H_{\mathrm{eff}}(v,x) \leqslant \varepsilon/12$ for all $x\in B$.

Then, $R^H_{\mathrm{eff}}(A,B) \geqslant \varepsilon/6$.

Proof. Recall that by Thomson's principle (see, e.g., [39, Ch. 2.4]), the effective resistance satisfies
\[
R_{\mathrm{eff}}(x,y) = \min_f \mathcal{E}(f), \quad\text{where}\quad \mathcal{E}(f) = \frac12\sum_{x,y} f^2(x,y)\,r_{xy},
\]
and the minimum is over all unit flows from $x$ to $y$. Here, $r_{xy} = 1/c_{xy}$ is the edge resistance for $\{x,y\}$. Suppose now that $R^H_{\mathrm{eff}}(A,B) < \varepsilon/6$.
Then there exists a unit flow $f_{AB}$ from the set $A$ to the set $B$ such that $\mathcal{E}(f_{AB}) < \varepsilon/6$. For $x\in A$, let $q_x$ be the amount of flow sent out from vertex $x$ in $f_{AB}$, and for $x\in B$, let $q_x$ be the amount of flow sent into vertex $x$. Note that $\sum_{x\in A} q_x = \sum_{x\in B} q_x = 1$. Analogously, by assumption (2), there exist flows $\{f_{ux} : x\in A\}$ and $\{f_{xv} : x\in B\}$ such that $f_{xy}$ is a unit flow from $x$ to $y$ and $\mathcal{E}(f_{xy}) \leqslant \varepsilon/12$. We next build the flow
\[
f = f_{AB} + \sum_{w\in A} q_w f_{uw} + \sum_{z\in B} q_z f_{zv}.
\]
We see that $f$ is indeed a unit flow from $u$ to $v$. Furthermore, by Cauchy–Schwarz,
\[
\mathcal{E}(f) = \frac12\sum_{x,y} f^2(x,y)\,r_{xy} = \frac12\sum_{x,y} r_{xy}\left(f_{AB}(x,y) + \sum_{w\in A} q_w f_{uw}(x,y) + \sum_{z\in B} q_z f_{zv}(x,y)\right)^2
\]
\[
\leqslant \frac32\sum_{x,y} r_{xy}\left(f^2_{AB}(x,y) + \sum_{w\in A} q_w f^2_{uw}(x,y) + \sum_{z\in B} q_z f^2_{zv}(x,y)\right) = 3\left(\mathcal{E}(f_{AB}) + \sum_{w\in A} q_w\,\mathcal{E}(f_{uw}) + \sum_{z\in B} q_z\,\mathcal{E}(f_{zv})\right) < \varepsilon.
\]
This contradicts assumption (1), completing the proof. $\square$

Lemma 2.12. For any network $G(V)$, the following holds. If there is a subset $S\subseteq V$ and a value $\varepsilon > 0$ such that $R_{\mathrm{eff}}(u,v) \geqslant \varepsilon$ for all $u,v\in S$, then there is a subset $S'\subseteq S$ with $|S'| \geqslant |S|/2$ such that for every $v\in S'$,
\[
R_{\mathrm{eff}}\big(v, S\setminus\{v\}\big) \geqslant \varepsilon/4.
\]

Proof. Consider the reduced network $\widetilde G$ on the vertex set $S$, as defined in Lemma 2.9. Let the new conductances be denoted $\tilde c_{xy}$ for $x,y\in S$. By Lemma 2.9, our initial assumption that $R_{\mathrm{eff}}(u,v) \geqslant \varepsilon$ for all $u,v\in S$ implies that $R^{\widetilde G}_{\mathrm{eff}}(u,v) \geqslant \varepsilon$ for all $u,v\in S$. Let $n = |S|$. Foster's theorem [26] (see also [53]) states that
\[
\frac12\sum_{u\neq v\in S} R^{\widetilde G}_{\mathrm{eff}}(u,v)\,\tilde c_{uv} = n - 1.
\]
Combined with the fact that $R^{\widetilde G}_{\mathrm{eff}}(u,v) \geqslant \varepsilon$, this yields
\[
\frac12\sum_{u\neq v\in S} \tilde c_{uv} \leqslant \frac{n}{\varepsilon}.
\]
In particular, there exists a subset $S'\subseteq S$ with $|S'| \geqslant n/2$ such that for all $v\in S'$,
\[
\sum_{u\in S\setminus\{v\}} \tilde c_{uv} \leqslant \frac{4}{\varepsilon}.
\]
It follows that for every $v\in S'$ we have $C^{\widetilde G}_{\mathrm{eff}}\big(v, S\setminus\{v\}\big) \leqslant 4/\varepsilon$, hence
\[
R_{\mathrm{eff}}\big(v, S\setminus\{v\}\big) = R^{\widetilde G}_{\mathrm{eff}}\big(v, S\setminus\{v\}\big) \geqslant \varepsilon/4. \qquad\square
\]

Proof of Proposition 2.10. For each $i\in\{1,2,\ldots,m\}$, choose some $v_i\in B_i$. By assumption (2), $R_{\mathrm{eff}}(v_i,v_j) \geqslant \varepsilon$ for $i\neq j$. Thus, applying Lemma 2.12, we find a subset $I\subseteq\{1,2,\ldots,m\}$ with $|I| \geqslant m/2$ such that for every $i\in I$, we have
\[
R_{\mathrm{eff}}\big(v_i, \{v_1,\ldots,v_m\}\setminus\{v_i\}\big) \geqslant \varepsilon/4. \tag{34}
\]
We claim that this subset $I$ satisfies the conclusion of the proposition. To this end, fix $i\in I$, and let $\widetilde G$ be the quotient network formed by gluing $\{v_1,\ldots,v_m\}\setminus\{v_i\}$ into a single vertex $\tilde v$. By (34), we have $R^{\widetilde G}_{\mathrm{eff}}(v_i,\tilde v) \geqslant \varepsilon/4$. Now let
\[
\widetilde B = \{\tilde v\}\cup\bigcup_{j\neq i}\big(B_j\setminus\{v_j\}\big).
\]
Consider any $x\in\widetilde B$ with $x\neq\tilde v$. Then $x\in B_j$ for some $j\neq i$, hence by assumption (1) we conclude that
\[
R^{\widetilde G}_{\mathrm{eff}}(x,\tilde v) \leqslant R_{\mathrm{eff}}(x,v_j) \leqslant \varepsilon/48.
\]
We may now apply Lemma 2.11 to the sets $B_i$ and $\widetilde B$ in $\widetilde G$ (with respective vertices $v_i$ and $\tilde v$) to conclude that
\[
R^{\widetilde G}_{\mathrm{eff}}\big(B_i,\widetilde B\big) \geqslant \varepsilon/24.
\]
But the preceding line immediately yields
\[
R_{\mathrm{eff}}\big(B_i, S\setminus B_i\big) \geqslant \varepsilon/24,
\]
finishing the proof. $\square$

We end this section with the following simple lemma.

Lemma 2.13. For any network $G(V)$, if $A, B_1, B_2\subseteq V$ are disjoint, then
\[
R_{\mathrm{eff}}(A, B_1\cup B_2) \geqslant \frac{R_{\mathrm{eff}}(A,B_1)\cdot R_{\mathrm{eff}}(A,B_2)}{R_{\mathrm{eff}}(A,B_1) + R_{\mathrm{eff}}(A,B_2)}.
\]

Proof. By considering the quotient graph, the lemma can be reduced to the case where $A = \{u\}$. Let $\{S_t\}$ be the discrete-time random walk on the network, and define $T_B = \min\{t\geqslant 0 : S_t\in B\}$ and $T^+_B = \min\{t\geqslant 1 : S_t\in B\}$ for $B\subseteq V$. It is clear that for a random walk started at $u$, we have
\[
\mathbb{P}_u\big(T^+_u > T_{B_1\cup B_2}\big) \leqslant \mathbb{P}_u\big(T^+_u > T_{B_1}\big) + \mathbb{P}_u\big(T^+_u > T_{B_2}\big).
\]
Combined with (33), this gives
\[
\frac{1}{R_{\mathrm{eff}}(u, B_1\cup B_2)} \leqslant \frac{1}{R_{\mathrm{eff}}(u,B_1)} + \frac{1}{R_{\mathrm{eff}}(u,B_2)},
\]
yielding the desired inequality. $\square$

2.4 The Gaussian free field

We recall the graph Laplacian $\Delta : \ell^2(V)\to\ell^2(V)$ defined by
\[
\Delta f(x) = c_x f(x) - \sum_y c_{xy} f(y).
\]
Consider a connected network $G(V)$. Fix a vertex $v_0\in V$, and consider the random process $X = \{\eta_v\}_{v\in V}$, where $\eta_{v_0} = 0$ and $X$ has density proportional to
\[
\exp\Big(-\tfrac12\langle X, \Delta X\rangle\Big) = \exp\left(-\frac14\sum_{u,v} c_{uv}\,|\eta_u - \eta_v|^2\right). \tag{35}
\]
The process $X$ is called the Gaussian free field (GFF) associated with $G$. The next lemma is known; see, e.g., Theorem 9.20 of [28]. We include the proof for completeness.

Lemma 2.14. For any connected network $G(V)$, if $X = \{\eta_v\}_{v\in V}$ is the associated GFF, then for all $u,v\in V$,
\[
\mathbb{E}(\eta_u - \eta_v)^2 = R_{\mathrm{eff}}(u,v). \tag{36}
\]

Proof. From (35), and the fact that the Laplacian is positive semi-definite, it is clear that $X$ is a Gaussian process. Let $\Gamma_{v_0}(u,v) = \mathbb{E}_u L^v_{T_0}$, where $T_0$ is the hitting time for $v_0$ as in Theorem 1.14. From Lemma 2.1, we have
\[
\Gamma_{v_0}(u,v) = \tfrac12\big(R_{\mathrm{eff}}(v_0,u) + R_{\mathrm{eff}}(v_0,v) - R_{\mathrm{eff}}(u,v)\big). \tag{37}
\]
Let $\widetilde\Delta$ and $\widetilde\Gamma_{v_0}$, respectively, be the matrices $\Delta$ and $\Gamma_{v_0}$ with the row and column corresponding to $v_0$ removed. Appealing to (35), if we can show that $\widetilde\Delta\widetilde\Gamma_{v_0} = I$, it follows that $\Gamma_{v_0}$ is the covariance matrix for $X$. In this case, comparing (37) to
\[
\mathbb{E}(\eta_u\eta_v) = \tfrac12\big(\mathbb{E}\eta_u^2 + \mathbb{E}\eta_v^2 - \mathbb{E}(\eta_u - \eta_v)^2\big)
\]
and using $\eta_{v_0} = 0$, we see that (36) follows. In order to demonstrate $\widetilde\Delta\widetilde\Gamma_{v_0} = I$, we consider $u,v$ such that $v_0\notin\{u,v\}$.
Conditioning on the first step of the walk from $u$ gives
\[
c_u\,\Gamma_{v_0}(u,v) = c_u\,\mathbb{E}_u L^v_{T_0} = \mathbf{1}_{\{u=v\}} + \sum_w c_{uw}\,\mathbb{E}_w L^v_{T_0} = \mathbf{1}_{\{u=v\}} + \sum_w c_{uw}\,\Gamma_{v_0}(w,v). \tag{38}
\]
On the other hand, by the definition of the Laplacian,
\[
(\Delta\Gamma_{v_0})(u,v) = c_u\,\Gamma_{v_0}(u,v) - \sum_w c_{uw}\,\Gamma_{v_0}(w,v) = \mathbf{1}_{\{u=v\}},
\]
where the latter equality is precisely (38). Thus $\widetilde\Delta\widetilde\Gamma_{v_0} = I$, completing the proof. $\square$

A geometric identity. In what follows, for a set of points $Y$ lying in some Hilbert space, we use $\mathrm{aff}(Y)$ to denote their affine hull, i.e. the closure of $\{\sum_{i=1}^n \alpha_i y_i : n\geqslant 1,\ y_i\in Y,\ \sum_{i=1}^n\alpha_i = 1\}$. Of course, when $Y$ contains the origin, $\mathrm{aff}(Y)$ is simply the linear span of $Y$.

Lemma 2.15. For any network $G(V)$, if $X = \{\eta_v\}_{v\in V}$ is the GFF associated with $G$, then for any $w\in V$ and subset $S\subseteq V$,
\[
\sqrt{R_{\mathrm{eff}}(w,S)} = \mathrm{dist}_{L^2}\big(\eta_w, \mathrm{aff}(\{\eta_u\}_{u\in S})\big).
\]

Proof. Since the statement of the lemma is invariant under translation, we may assume that the GFF is defined with respect to some $v_0\in S$. In this case, by the definition in (35), the GFF for $G/S$ has density proportional to
\[
\exp\left(-\frac14\left(\sum_{u,v\notin S} c_{uv}\,|\eta_u - \eta_v|^2 + \sum_{u\notin S} c_{v_S u}\,|\eta_u|^2\right)\right),
\]
i.e. the GFF on $G/S$ is precisely the initial Gaussian process $X$ conditioned on the linear subspace $A_S = \{\eta_v = \eta_{v_0} = 0 : v\in S\}$. Using (31) and Lemma 2.14, we have
\[
R_{\mathrm{eff}}(w,S) = R^{G/S}_{\mathrm{eff}}(w, v_S) = \mathbb{E}\big[|\eta_w - \eta_{v_0}|^2 \,\big|\, A_S\big] = \mathbb{E}\big[|\eta_w|^2 \,\big|\, A_S\big].
\]
To compute the latter expectation, write $\eta_w = Y + Y'$, where $Y'\in\mathrm{span}(\{\eta_v\}_{v\in S})$ and $\mathbb{E}(YY') = 0$. It follows immediately that
\[
\mathrm{dist}_{L^2}\big(\eta_w, \mathrm{aff}(\{\eta_u\}_{u\in S})\big) = \sqrt{\mathbb{E}[Y^2]} = \sqrt{\mathbb{E}\big[|\eta_w|^2\,\big|\,A_S\big]},
\]
completing the proof. $\square$

3 Majorizing measures

We now review the relevant parts of the majorizing measures theory. One is encouraged to consult the book [52] for further information. In Section 1, we saw Talagrand's $\gamma_2$ functional.
For our purposes, it will be more convenient to work with a different quantity that is equivalent to the functional $\gamma_2$, up to universal constants. In Section 3.2, we discuss separated trees, and prove a number of standard properties of such objects. In Section 3.3, we present a deterministic algorithm for computing $\gamma_2(X,d)$ for any finite metric space $(X,d)$. Finally, in Section 3.4, we specialize the theory of Gaussian processes and trees to the case of GFFs. There, we will use the geometric properties proved in Sections 2.3 and 2.4.

Before we begin, we attempt to give some rough intuition about the role of trees in the majorizing measures theory. A good reference for this material is [27]. A tree of subsets of $X$ is a finite collection $\mathcal{F}$ of subsets of $X$ with the property that for all $A,B\in\mathcal{F}$, either $A\cap B = \emptyset$, or $A\subseteq B$, or $B\subseteq A$. A set $B$ is a child of $A$ if $B\subseteq A$, $B\neq A$, and
\[
C\in\mathcal{F},\ B\subseteq C\subseteq A \implies C = B \text{ or } C = A.
\]
We assume that $X\in\mathcal{F}$, and $X$ is referred to as the root of the tree $\mathcal{F}$. For each $A\in\mathcal{F}$, we use $N(A)$ to denote the number of children of $A$. A branch of $\mathcal{F}$ is a sequence $A_1\supset A_2\supset\cdots$ such that each $A_{k+1}$ is a child of $A_k$. A branch is maximal if it is not contained in a longer branch. We will assume additionally that every maximal branch terminates in a singleton set $\{x\}$ for some $x\in X$.

Let $\{\eta_x\}_{x\in X}$ be a centered Gaussian process with $X$ finite, and let $d(x,y) = \sqrt{\mathbb{E}(\eta_x - \eta_y)^2}$. The basic premise of the tree interpretation of the majorizing measures theory is that one can assign a measure of "size" to any tree of subsets in $X$, and this size provides a lower bound on $\mathbb{E}\sup_{x\in X}\eta_x$. The majorizing measures theorem then asserts that the value of the optimal such tree is within absolute constants of the expected supremum.
The size of the tree (see (39)) can be defined using only the metric structure of $(X,d)$, without reference to the underlying Gaussian process. Thus many of the theorems in this section are stated for general metric spaces. The tree of subsets is meant to capture the structure of $(X,d)$ at all scales simultaneously. In general, to obtain a multi-scale lower bound on the expected supremum of the process, one arranges matters so that the diameter of the subsets decreases exponentially as one goes down the tree, and all subsets at one level of the tree are separated by a constant fraction of their diameter (see Definitions 3.1 and 3.8 below). This allows a certain level of independence between different branches of the tree, which is exploited in the lower bounds. Much of this section is devoted to proving that one can construct a near-optimal tree with a number of regularity properties that will be crucial to our approach in Section 4.

3.1 Trees, measures, and functionals

Let $(X,d)$ be an arbitrary metric space.

Definition 3.1. For values $q\in\mathbb{N}$, $\alpha,\beta > 0$, and $r\geqslant 2$, a tree of subsets $\mathcal{F}$ in $X$ is called a $(q,r,\alpha,\beta)$-tree if to each $A\in\mathcal{F}$ one can associate a number $n(A)\in\mathbb{Z}$ such that the following three conditions are satisfied.

1. For all children $B$ of $A$, we have $n(B) \leqslant n(A) - q$.
2. If $B$ and $B'$ are two distinct children of $A$, then $d(B,B') \geqslant \beta r^{n(A)-1}$.
3. $\mathrm{diam}(A) \leqslant \alpha r^{n(A)}$.

We will refer to a $(q,r,4,\tfrac12)$-tree as simply a $(q,r)$-tree. The $r$-size of a tree of subsets $\mathcal{F}$, written $\mathrm{size}_r(\mathcal{F})$, is defined as the infimum of
\[
\sum_{k\geqslant 1} r^{n(A_k)}\sqrt{\log_+ N(A_k)} \tag{39}
\]
over all possible maximal branches $A_1\supset A_2\supset\cdots$ of $\mathcal{F}$, where we use the notation $\log_+ x = \log x$ for $x\neq 0$, and $\log_+(0) = 0$.

To connect trees of subsets with the $\gamma_2$ functional, we recall the relationship with majorizing measures. The next result is from [51, Thm.
1.1].

Theorem 3.2. For every finite metric space $(X,d)$, we have
\[
\gamma_2(X,d) \asymp \inf_\mu\,\sup_{x\in X}\int_0^\infty\left(\log\frac{1}{\mu(B(x,\varepsilon))}\right)^{1/2}\,d\varepsilon,
\]
where $B(x,\varepsilon)$ is the closed ball of radius $\varepsilon$ about $x$, and the infimum is over all finitely supported probability measures $\mu$ on $X$.

We will also need the following theorem due to Talagrand (see Proposition 4.3 of [50] and also Theorem T5 of [27]). We will employ it now and also in Section 3.3.

Theorem 3.3. There is a value $r_0\geqslant 2$ such that the following holds. Let $(X,d)$ be a finite metric space, and $r\geqslant r_0$. Assume there is a family of functions $\{\varphi_i : X\to\mathbb{R}_+ : i\in\mathbb{Z}\}$ such that the following conditions hold for some $\beta > 0$.

1. $\varphi_i(x) \geqslant \varphi_{i-1}(x)$ for all $i\in\mathbb{Z}$ and $x\in X$.
2. If $t_1, t_2, \ldots, t_N\in B(s, r^j)$ are such that $d(t_i, t_{i'}) \geqslant r^{j-1}$ for $i\neq i'$, then
\[
\varphi_j(s) \geqslant \beta r^j\sqrt{\log N} + \min\{\varphi_{j-2}(t_i) : i = 1,2,\ldots,N\}.
\]

Under these conditions,
\[
\gamma_2(X,d) \lesssim_{r,\beta} \sup_{x\in X,\ i\in\mathbb{Z}} \varphi_i(x).
\]

The preceding two theorems allow us to present the following connection between trees and $\gamma_2$. Such a connection is well known (see, e.g., [49]), but we record the proofs here for completeness, and for the precise quantitative bounds we will use in future sections.

Lemma 3.4. There is a value $r_0\geqslant 2$ such that for every finite metric space $(X,d)$ and every $r\geqslant r_0$, we have
\[
\gamma_2(X,d) \lesssim_r \sup\big\{\mathrm{size}_r(\mathcal{F}) : \mathcal{F} \text{ is a } (1,r,4,\tfrac12)\text{-tree in } X\big\}. \tag{40}
\]

Proof. First, for a subset $S\subseteq X$, let
\[
\theta(S) = \sup\big\{\mathrm{size}_r(\mathcal{F}) : \mathcal{F} \text{ is a } (1,r,4,\tfrac12)\text{-tree in } S\big\}.
\]
Then, for every $i\in\mathbb{Z}$ and $x\in X$, define
\[
\varphi_i(x) = \theta\big(B(x, 2r^i)\big),
\]
where $B(x,R)$ is the closed ball of radius $R$ about $x\in X$. We now wish to verify that the conditions of Theorem 3.3 hold for $\{\varphi_i\}$. Condition (1) is immediate. Assume that $r\geqslant 8$. Given $t_1, t_2, \ldots, t_N$ as in condition (2) of Theorem 3.3, consider the set $A = B(s, 2r^j)$, which has diameter bounded by $4r^j$, and the disjoint subsets of $A$ given by $A_i = B(t_i, 2r^{j-2})$, which each have diameter bounded by $4r^{j-2}$ and which satisfy $d(A_i, A_j) \geqslant r^{j-1}/2$ for $i\neq j$. We also have $A_i\subseteq A$ for each $i\in\{1,\ldots,N\}$. Taking the tree of subsets with root $A$, $n(A) = j$, and children $\{A_i\}_{i=1}^N$, and inside each $A_i$ a tree which achieves value at least $\theta(A_i) = \theta(B(t_i, 2r^{j-2})) = \varphi_{j-2}(t_i)$, we see immediately that
\[
\varphi_j(s) = \theta\big(B(s, 2r^j)\big) \geqslant r^j\sqrt{\log N} + \min\{\varphi_{j-2}(t_i) : i = 1,2,\ldots,N\},
\]
confirming condition (2) of Theorem 3.3. Applying the theorem, it follows that $\gamma_2(X,d) \lesssim_r \theta(X)$, proving (40).

We will need the upper bound (40) to hold for $(2,r,4,\tfrac12)$-trees. Toward this end, we state a version of [49, Thm. 3.1]. The theorem there is only proved for $\alpha = 1$ and $\beta = \tfrac12$, but it is straightforward to see that it works for all values $\alpha,\beta > 0$, since the proof merely proceeds by choosing an appropriate subtree of the given tree; the values $\alpha$ and $\beta$ are not used.

Theorem 3.5. For every metric space $(X,d)$, the following holds. For every $\alpha,\beta,r > 0$ and $q\in\mathbb{N}$, and for every $(1,r,\alpha,\beta)$-tree $\mathcal{F}$ in $X$, there exists a $(q,r,\alpha,\beta)$-tree $\mathcal{F}'$ in $X$ such that
\[
\mathrm{size}_r(\mathcal{F}) \lesssim q\cdot\mathrm{size}_r(\mathcal{F}').
\]

Combining Theorem 3.5 with Lemma 3.4 yields the following upper bound using $(2,r)$-trees.

Corollary 3.6. There is a value $r_0\geqslant 2$ such that for every finite metric space $(X,d)$ and every $r\geqslant r_0$, we have
\[
\gamma_2(X,d) \lesssim_r \sup\big\{\mathrm{size}_r(\mathcal{F}) : \mathcal{F} \text{ is a } (2,r,4,\tfrac12)\text{-tree in } X\big\}. \tag{41}
\]

Now we move on to a lower bound on $\gamma_2$.

Lemma 3.7.
There is a value $r_0 > 2$ such that for every finite metric space $(X,d)$ and every $r \geq r_0$, we have
$$\gamma_2(X,d) \gtrsim \sup\{\mathrm{size}_r(\mathcal{F}) : \mathcal{F} \text{ is a } (1, r, 8, \tfrac16)\text{-tree}\}.$$

Proof. We will show that for any probability measure $\mu$ on $X$ and any $(1, r, 8, \frac16)$-tree $\mathcal{F}$ in $X$, we have
$$\mathrm{size}_r(\mathcal{F}) \lesssim_r \sup_{x \in X} \int_0^\infty \left( \log \frac{1}{\mu(B(x,\varepsilon))} \right)^{1/2} d\varepsilon.$$
The basic idea is that if $A_1, A_2, \ldots, A_k$ are children of $A$ in $\mathcal{F}$, then the sets $B(A_i, \frac{1}{20} r^{n(A)-1})$ are disjoint by property (2) of Definition 3.1, where we write $B(S, R) = \{x \in X : d(x, S) \leq R\}$. Thus one of these sets $A_i$ has $\mu(B(A_i, \frac{1}{20} r^{n(A)-1})) \leq 1/N(A)$. Thus we may find a finite sequence of sets, starting with $A^{(0)} = X$, such that $A^{(i+1)}$ is a child of $A^{(i)}$ and
$$\mu\big(B(A^{(i+1)}, \tfrac{1}{20} r^{n(A^{(i)})-1})\big) \leq 1/N(A^{(i)}).$$
Since every maximal branch in a tree of subsets terminates in a singleton, the sequence ends with some set $A' = A^{(h)} = \{x\}$. By construction, we have
$$\mu\big(B(x, \tfrac{1}{20} r^{n(A')-1})\big) \leq \frac{1}{N(A')}.$$
Thus, assuming $r \geq 40$,
$$r^{n(A')-2} \sqrt{\log_+ N(A')} \leq \int_{r^{n(A')-2}}^{\frac{1}{20} r^{n(A')-1}} \sqrt{\log \frac{1}{\mu(B(x,\varepsilon))}}\; d\varepsilon. \tag{42}$$
By property (1) of Definition 3.1, the intervals $(r^{n(A)-2}, \frac{1}{20} r^{n(A)-1})$ are disjoint for different sets $A \in \mathcal{F}$ with $x \in A$; thus summing (42) yields
$$\mathrm{size}_r(\mathcal{F}) \lesssim_r \sum_{A \in \mathcal{F} : x \in A} r^{n(A)-2} \sqrt{\log_+ N(A)} \leq \int_0^\infty \sqrt{\log \frac{1}{\mu(B(x,\varepsilon))}}\; d\varepsilon,$$
completing the proof.

3.2 Separated trees

Let $(X,d)$ be an arbitrary metric space. Consider a finite, connected, graph-theoretic tree $T = (V, E)$ (i.e., a connected, acyclic graph) such that $V \subseteq X$, with a fixed root $z \in V$, and a mapping $s : V \to \mathbb{Z}$. Abusing notation, we will sometimes use $T$ for the vertex set of $T$. For a vertex $x \in T$, we use $T_x$ to denote the subtree rooted at $x$, and we use $\Gamma(x)$ to denote the set of children¹ of $x$ with respect to the root $z$.
Finally, we write $\Delta(x) = |\Gamma(x)| + 1$ for all $x \in T$. Let $\mathcal{L}$ be the set of leaves of $T$. For any $v \in T$, let $P(v) = \{z, \ldots, v\}$ denote the set of nodes on the unique path from the root to $v$. For a pair of nodes $u, v \in T$, we use $P(u,v)$ to denote the sequence of nodes on the unique path from $u$ to $v$. If $u$ is the parent of $v$, we write $u = p(v)$; in particular, we write $z = p(z)$. For any such pair $(T, s)$ and $r \geq 2$, we define the value of $(T, s)$ by
$$\mathrm{val}_r(T, s) = \inf_{\ell \in \mathcal{L}} \sum_{v \in P(\ell)} r^{s(v)} \sqrt{\log \Delta(v)}. \tag{43}$$
The following definition will be central.

¹Formally, these are precisely the neighbors of $x$ in $T$ whose unique path to the root $z$ passes through $x$.

Definition 3.8. For a value $r \geq 2$, we say that the pair $(T, s)$ is an $r$-separated tree in $(X,d)$ if it satisfies the following conditions for all $x \in T$.

1. For all $y \in \Gamma(x)$, $s(y) \leq s(x) - 2$.
2. For all distinct $u, v \in \Gamma(x)$, we have $d(x, T_u) \geq \frac12 r^{s(x)-1}$ and $d(T_u, T_v) \geq \frac12 r^{s(x)-1}$.
3. $\mathrm{diam}(T_x) \leq 4 r^{s(x)}$.

We remark that our separated tree is a slightly different version of the $(2,r)$-tree introduced in the preceding section. The main difference is that the nodes of our separated tree are points in the metric space $X$, whereas a node in a $(2,r)$-tree is a subset of $X$. Our definition is tailored for the application in Section 4. Not surprisingly, we have a similar version of the above theorem for separated trees.

Theorem 3.9. For some $r_0 > 2$ and every $r \geq r_0$, and any metric space $(X,d)$, we have
$$\sup_T \mathrm{val}_r(T, s) \asymp_r \gamma_2(X,d),$$
where the supremum is over all $r$-separated trees in $X$.

Theorem 3.9 follows from Corollary 3.6 and the following lemma.

Lemma 3.10. Consider $r \geq 8$ and any metric space $(X,d)$. For any $(2,r)$-tree $\mathcal{F}$, there is an $r$-separated tree $T$ such that $\mathrm{size}_r(\mathcal{F}) = \mathrm{val}_r(T)$.
Also, for any $r$-separated tree $T$, there is a $(2,r)$-tree $\mathcal{F}$ such that $\mathrm{size}_r(\mathcal{F}) \geq \mathrm{val}_r(T) - r\, \mathrm{diam}(X)$.

Proof. We only prove the first half of the statement, since the second half can be obtained by reversing the construction. (The additive factor $-r\, \mathrm{diam}(X)$ is due to the slight difference between the definitions of the value of a separated tree and the size of a $(2,r)$-tree; see (43) and (39).)

Let $\mathcal{F}$ be a $(2,r)$-tree on $(X,d)$. For each $A \in \mathcal{F}$ with $N(A) \geq 1$, we select one child $c(A)$ and an arbitrary point $v_A \in c(A)$. We now construct the separated tree $T$. Its vertex set is a subset of $\{v_A : A \in \mathcal{F}\}$. The root of $T$ is $v_X$, and its children are $\{v_B : B \text{ is a child of } X \text{ with } B \neq c(X)\}$. In general, if $v_A$ is a node of $T$, then its children are the points $\{v_B : B \text{ is a child of } A \text{ with } B \neq c(A)\}$. Finally, for $v_A \in T$, we put $s(v_A) = n(A)$.

Let us first verify that $T$ is an $r$-separated tree. Condition (1) of Definition 3.8 holds because if $y$ is a child of $v_A \in T$, then $y = v_B$ for some child $B$ of $A$ (in $\mathcal{F}$), which implies $s(y) = n(B) \leq n(A) - 2 = s(v_A) - 2$. Secondly, if $v_A$ is a node with children $v_{B_1}, v_{B_2}, \ldots, v_{B_k}$, then clearly by Definition 3.1,
$$d(v_A, T_{v_{B_i}}) \geq d(c(A), B_i) \geq \tfrac12 r^{s(v_A)-1}, \qquad d(T_{v_{B_i}}, T_{v_{B_j}}) \geq d(B_i, B_j) \geq \tfrac12 r^{s(v_A)-1},$$
verifying condition (2) of Definition 3.8. Thirdly, if $x_A \in T$, then for any child $x_B$ of $x_A$ we know that $B$ is a child of $A$, hence
$$\mathrm{diam}(T_{x_B}) \leq \mathrm{diam}(B) \leq 4 r^{n(A)} = 4 r^{s(x_A)},$$
using property (3) of a $q$-tree. This verifies condition (3) of Definition 3.8.

Finally, observe that for every non-leaf node $v_A \in T$, we have $\Delta(v_A) = |\Gamma(v_A)| + 1 = N(A)$, and for leaves we have $\log \Delta(v_A) = \log_+ N(A) = 0$. It follows that $\mathrm{val}_r(T, s) = \mathrm{size}_r(\mathcal{F})$, completing the proof.
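Both (39) and (43) are simple recursive minimizations over root–leaf branches, so the correspondence of Lemma 3.10 is easy to experiment with. The following Python sketch is our own illustration (the tuple representation and function names are not from the paper): a node is a pair (label, children), with label $n(A)$ for a tree of subsets and $s(v)$ for a separated tree.

```python
import math

def log_plus(x):
    # log+(x) = log x for x > 0 and log+(0) = 0, as in (39)
    return math.log(x) if x > 0 else 0.0

def size_r(node, r):
    # node = (n(A), children); infimum over maximal branches of
    # sum_k r^{n(A_k)} sqrt(log+ N(A_k)), where N(A) = number of children
    n, children = node
    term = r ** n * math.sqrt(log_plus(len(children)))
    if not children:
        return term  # a leaf contributes r^n * sqrt(log+ 0) = 0
    return term + min(size_r(child, r) for child in children)

def val_r(node, r):
    # node = (s(v), children); infimum over leaves of
    # sum_{v in P(ell)} r^{s(v)} sqrt(log Delta(v)), Delta(v) = |Gamma(v)| + 1
    s, children = node
    term = r ** s * math.sqrt(math.log(len(children) + 1))
    if not children:
        return term  # log Delta = log 1 = 0 at a leaf
    return term + min(val_r(child, r) for child in children)

# A toy tree: root at scale 2 with three leaf children at scale 0.
tree = (2, [(0, []), (0, []), (0, [])])
print(size_r(tree, 4))  # r^2 * sqrt(log 3) = 16 * sqrt(log 3)
print(val_r(tree, 4))   # r^2 * sqrt(log 4), since Delta(root) = 4
```

The two outputs differ on this toy input because $\Delta(v) = |\Gamma(v)| + 1$ while $N(A)$ counts children; in the construction of Lemma 3.10 one child of each $A$ is consumed as $c(A)$, which is exactly why $\Delta(v_A) = N(A)$ there and the two quantities agree.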
3.2.1 Additional structure

We now observe that we can take our separated trees to have some additional properties. Say that an $r$-separated tree $(T, s)$ is $C$-regular for some $C \geq 1$ if it satisfies, for every $v \in T \setminus \mathcal{L}$,
$$\Delta(v) \geq \exp\left( C^2 r^2\, 4^{s(z)-s(v)} \right). \tag{44}$$

Lemma 3.11. For every $C \geq 1$ and $r \geq 4$, and for every $r$-separated tree $(T, s)$ in $X$, if $\mathrm{val}_r(T, s) \geq 4C r^{s(z)+1}$, then there is a $C$-regular $r$-separated tree $(T', s')$ in $X$ with
$$\tfrac12 \mathrm{val}_r(T, s) \leq \mathrm{val}_r(T', s') \leq \mathrm{val}_r(T, s).$$

Proof. Consider the following operation on an $r$-separated tree $(T, s)$. For $x \in T \setminus \mathcal{L}$, define a new $r$-separated tree $(T', s') = \Phi_x(T, s)$ as follows. Let $u$ be the child of $x$ of minimal value and let $S$ contain the remaining children, so that
$$\mathrm{val}_r(T_u, s|_{T_u}) \leq \mathrm{val}_r(T_v, s|_{T_v}) \quad \text{for all } v \in S, \tag{45}$$
where $T_u$ is the subtree of $T$ rooted at $u$ (containing all its descendants), and $s|_{T_u}$ is the restriction of $s$ to the subtree $T_u$. Let $T'$ be the tree that results from deleting all the nodes in $S$, as well as the subtrees under them, and then contracting the edge $(x, u)$. We also put $s'(x) = s(u)$ and $s'(y) = s(y)$ for all other $y \in T'$.

As long as there is a node $x \in T \setminus \mathcal{L}$ which violates (44) (for the current $(T', s')$), we iterate this procedure (namely, we replace $(T', s')$ by $\Phi_x(T', s')$). It is clear that we end with a $C$-regular tree $(T', s')$. Note that different choices of $x$ at each stage will lead to different outcomes, but the following proof shows that all of them satisfy the required condition.
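To make the pruning operation $\Phi_x$ concrete, here is a small Python sketch (our own illustration, not from the paper): while a node violates the regularity bound (44), we keep only its child of minimal value, as in (45), and contract the edge, inheriting the child's scale. To see anything other than total collapse on toy input, the example below uses small values of $C$ and $r$ that violate the lemma's hypotheses ($C \geq 1$, $r \geq 4$) but exercise the same mechanics.

```python
import math

def val_r(node, r):
    # The value (43) of a tree given as (s(v), children).
    s, children = node
    term = r ** s * math.sqrt(math.log(len(children) + 1))
    return term + (min(val_r(c, r) for c in children) if children else 0.0)

def regularize(node, r, C, s_z):
    # Iterate Phi_x down the tree: while the current node violates the
    # C-regularity bound (44), contract onto the child subtree of minimal
    # value (the choice (45)) and inherit its scale s(u).
    s, children = node
    while children and len(children) + 1 < math.exp(C**2 * r**2 * 4 ** (s_z - s)):
        s, children = min(children, key=lambda c: val_r(c, r))
    return (s, [regularize(c, r, C, s_z) for c in children])

# A unary root violates (44) (Delta = 2 < e) and is contracted onto its child:
print(regularize((2, [(0, [])]), 2, 0.5, 2))  # -> (0, [])
# A root with three children (Delta = 4 >= e) is kept as-is:
print(regularize((2, [(0, []), (0, []), (0, [])]), 2, 0.5, 2))
```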
It is also straightforward to verify that for any $\ell \in \mathcal{L}'$ (the leaf set of $T'$), we have
$$\begin{aligned}
\sum_{v \in P_{T'}(\ell)} r^{s'(v)} \sqrt{\log \Delta_{T'}(v)}
&\geq \sum_{v \in P_T(\ell)} r^{s(v)} \sqrt{\log \Delta_T(v)} - C r \sum_{v \in P_T(\ell)} r^{s(v)}\, 2^{s(z)-s(v)} \\
&\geq \sum_{v \in P_T(\ell)} r^{s(v)} \sqrt{\log \Delta_T(v)} - C r^{s(z)+1} \sum_{k=0}^\infty 2^{2k} r^{-2k} \\
&\geq \sum_{v \in P_T(\ell)} r^{s(v)} \sqrt{\log \Delta_T(v)} - 2C r^{s(z)+1} \\
&\geq \mathrm{val}_r(T, s) - 2C r^{s(z)+1} \;\geq\; \tfrac12 \mathrm{val}_r(T, s),
\end{aligned}$$
where in the second line we have used property (1) of Definition 3.8, in the third line we have used $r \geq 4$, and in the final line we have used our assumption that $\mathrm{val}_r(T, s) \geq 4C r^{s(z)+1}$.

It remains to prove that $\mathrm{val}_r(T, s) \geq \mathrm{val}_r(T', s')$. The issue here is that it is possible that $\mathcal{L}' \subsetneq \mathcal{L}$. However, by our choice of $u$ at each stage (as in (45)), it is guaranteed that $\ell \in \mathcal{L}'$ for a certain $\ell \in \mathcal{L}$ such that $\mathrm{val}_r(T, s) = \sum_{v \in P(\ell)} r^{s(v)} \sqrt{\log \Delta(v)}$. This completes the proof.

We next study the subtrees of separated trees. In what follows, we continue denoting by $s|_{T'}$ the restriction of $s$ to $T'$ for $T' \subseteq T$, and we use a subscript $T'$ to refer to the subtree $T'$.

Lemma 3.12. For every $r$-separated tree $(T, s)$, there is a subtree $T' \subseteq T$ such that $(T', s|_{T'})$ is an $r$-separated tree satisfying the following conditions.

1. $\mathrm{val}_r(T, s) \asymp \mathrm{val}_r(T', s|_{T'})$.
2. For every $v \in T' \setminus \mathcal{L}_{T'}$, $\Delta_{T'}(v) = \Delta(v)$.
3. For every $v \in T' \setminus \mathcal{L}_{T'}$ and $w \in \mathcal{L}_{T'} \cap T_v$,
$$\sum_{u \in P(v,w)} r^{s(u)} \sqrt{\log \Delta_{T'}(u)} \geq \tfrac12 r^{s(p(v))} \sqrt{\log \Delta_{T'}(p(v))}. \tag{46}$$

Proof. We construct the subtree $T'$ in the following way. We examine the vertices $v \in T$ in breadth-first search order (that is, we order the vertices so that their distances to the root are non-decreasing).
If $v$ has not been deleted yet and for some $\ell \in \mathcal{L} \cap T_v$,
$$\sum_{u \in P(v,\ell)} r^{s(u)} \sqrt{\log \Delta_T(u)} \leq r^{s(p(v))} \sqrt{\log \Delta_T(p(v))}, \tag{47}$$
we delete all the descendants of $v$. Let $T'$ be the subtree obtained at the end of the process. It is clear that $(T', s|_{T'})$ is a separated tree, and it remains to verify the required properties.

By the construction of our subtree $T'$, we see that whenever a vertex is deleted, all its siblings are deleted. So for a node $v \in T' \setminus \mathcal{L}_{T'}$, all the children of $v$ in $T$ are preserved in $T'$, yielding property (2).

Note that if $v \in \mathcal{L}_{T'} \setminus \mathcal{L}$, there exists $\ell \in \mathcal{L} \cap T_v$ such that (47) holds. Therefore, we see that
$$\sum_{u \in P(z,v)} r^{s(u)} \sqrt{\log \Delta_{T'}(u)} = \sum_{u \in P(z,v) \setminus \{v\}} r^{s(u)} \sqrt{\log \Delta_T(u)} \geq \tfrac12 \sum_{u \in P(z,\ell)} r^{s(u)} \sqrt{\log \Delta_T(u)} \geq \tfrac12 \mathrm{val}_r(T, s).$$
This verifies property (1) (noting that the reverse inequality is trivial).

Take $v \in T' \setminus \mathcal{L}_{T'}$ and $w \in \mathcal{L}_{T'} \cap T_v$. If $w \in \mathcal{L}$, we see that (46) holds for $v$ and $w$, since (47) does not hold for $v$ and $\ell = w$ (otherwise all the descendants of $v$ would have been deleted and $v$ would be a leaf node in $T'$). If $w \notin \mathcal{L}$, there exists $\ell_0 \in \mathcal{L} \cap T_w$ such that
$$\sum_{u \in P(w,\ell_0)} r^{s(u)} \sqrt{\log \Delta_T(u)} \leq r^{s(p(w))} \sqrt{\log \Delta_T(p(w))}.$$
Recall that (47) fails with $\ell = \ell_0$. Altogether, we conclude that
$$\sum_{u \in P(v,w)} r^{s(u)} \sqrt{\log \Delta_{T'}(u)} = \sum_{u \in P(v,\ell_0)} r^{s(u)} \sqrt{\log \Delta_T(u)} - \sum_{u \in P(w,\ell_0)} r^{s(u)} \sqrt{\log \Delta_T(u)} \geq \tfrac12 \sum_{u \in P(v,\ell_0)} r^{s(u)} \sqrt{\log \Delta_T(u)} \geq \tfrac12 r^{s(p(v))} \sqrt{\log \Delta_T(p(v))},$$
establishing property (3) and completing the proof.

Finally, we observe that separated trees are stable in the following sense.

Lemma 3.13. Fix $0 < \delta < 1$. Suppose that $(T, s)$ is an $r$-separated tree in $X$, and for every node $v \in V$ we delete all but $\lceil \delta \cdot |\Gamma(v)| \rceil$ of its children. Denote by $T'$ the induced tree on the connected component containing $z(T)$.
Then $(T', s|_{T'})$ is an $r$-separated tree and $\mathrm{val}_r(T, s) \asymp_\delta \mathrm{val}_r(T', s|_{T'})$.

Proof. It is clear that properties (1), (2), and (3) of separated trees are preserved for the induced tree $T'$ with $s|_{T'}$, so $(T', s|_{T'})$ is an $r$-separated tree. Furthermore, for every leaf $\ell$ of $T'$,
$$\sum_{v \in P(\ell)} r^{s(v)} \sqrt{\log \Delta_{T'}(v)} \geq \sum_{v \in P(\ell)} r^{s(v)} \sqrt{\log(1 + \lceil \delta \cdot |\Gamma(v)| \rceil)} \geq c(\delta) \sum_{v \in P(\ell)} r^{s(v)} \sqrt{\log(1 + |\Gamma(v)|)} \geq c(\delta)\, \mathrm{val}_r(T, s),$$
where $c(\delta)$ is a constant depending only on $\delta$. It follows that $\mathrm{val}_r(T', s|_{T'}) \geq c(\delta)\, \mathrm{val}_r(T, s)$, completing the proof, since the reverse direction is obvious.

3.3 Computing an approximation to $\gamma_2$ deterministically

We now present a deterministic algorithm for computing an approximation to $\gamma_2$.

Theorem 3.14. Let $(X,d)$ be a finite metric space with $n = |X|$. If, for any two points $x, y \in X$, one can compute $d(x,y)$ in time polynomial in $n$, then one can compute a number $A(X,d)$ in polynomial time for which
$$A(X,d) \asymp \gamma_2(X,d).$$

Proof. Fix $r \geq 16$. First, let us assume that $1 \leq d(x,y) \leq r^M$ for all $x \neq y \in X$ and some $M \in \mathbb{N}$. Fix $x_0 \in X$. Our algorithm constructs functions $\varphi_0, \varphi_1, \ldots, \varphi_M : X \to \mathbb{R}_+$, and we will return the value $A(X,d) = \varphi_M(x_0)$.

First put $\varphi_1(x) = \varphi_0(x) = 0$ for all $x \in X$. Next, we show how to construct $\varphi_j$ given $\varphi_0, \varphi_1, \ldots, \varphi_{j-1}$. For $x \in X$ and $R > 0$, we use $B(x,R) \triangleq \{y \in X : d(x,y) \leq R\}$. First, we construct a maximal $\frac13 r^{j-1}$-net $N_j$ in $X$ in the following way. Supposing that $y_1, \ldots, y_k$ have already been chosen, let $y_{k+1}$ be a point satisfying
$$\varphi_{j-2}(y_{k+1}) = \max\left\{ \varphi_{j-2}(y) : y \in X \setminus \bigcup_{i=1}^k B(y_i, \tfrac13 r^{j-1}) \right\},$$
as long as some point of $X \setminus \bigcup_{i=1}^k B(y_i, \frac13 r^{j-1})$ remains. For $x \in X$, set
$$g_j(x) = y_{\min\{k \,:\, d(x, y_k) \leq \frac13 r^{j-1}\}}.$$
Now we define $\varphi_j(x)$ for $x \in X$.
Suppose that $B(x, 2r^j) \cap N_j = \{y_{\ell_1}, y_{\ell_2}, \ldots, y_{\ell_h}\}$ with $\ell_1 \leq \ell_2 \leq \cdots \leq \ell_h$, and define:

I. $\varphi_j(x) = \varphi_{j-1}(x)$ if $B(g_j(x), 4r^j) \setminus B(g_j(x), \frac{1}{16} r^{j-2})$ is empty.

II. Otherwise,
$$\varphi_j(x) = \max\left\{ \max_{k \leq h} \left( r^j \sqrt{\log k} + \min_{i \leq k} \varphi_{j-2}(y_{\ell_i}) \right),\; \max\{\varphi_{j-1}(z) : z \in B(x, \tfrac13 r^{j-1})\} \right\}. \tag{48}$$

Now we verify that $\{\varphi_j\}_{j=0}^M$ satisfies the conditions of Theorem 3.3. The monotonicity condition (1) is satisfied by construction. We will now verify condition (2), starting with the following lemma.

Lemma 3.15. For any $j \geq 0$, if $d(s,t) \leq r^j$ and $B(g_j(s), 4r^j) \setminus B(g_j(s), \frac{1}{16} r^{j-2})$ is empty, then $\varphi_j(s) = \varphi_j(t)$.

Proof. We prove this by induction on $j$. Clearly it holds vacuously for $j \leq 2$. Assume that it holds for $\varphi_0, \varphi_1, \ldots, \varphi_{j-1}$, where $j > 2$. By the assumption of the lemma and the fact that $s \in B(g_j(s), \frac13 r^{j-1})$, we have
$$d(s, g_j(s)) \leq \tfrac{1}{16} r^{j-2}, \tag{49}$$
which implies that $B(s, 2r^j) \setminus B(s, \frac18 r^{j-2})$ is also empty. Furthermore, we have $g_j(s) = g_j(t)$: otherwise $d(g_j(s), g_j(t)) \geq \frac13 r^{j-1}$, and we would conclude that
$$2r^j \geq d(g_j(t), s) \geq d(g_j(s), g_j(t)) - d(s, g_j(s)) \geq \tfrac13 r^{j-1} - \tfrac{1}{16} r^{j-2} \geq \tfrac18 r^{j-1},$$
contradicting the fact that $B(s, 2r^j) \setminus B(s, \frac18 r^{j-2})$ is empty. It follows that
$$B(s, 2r^j) \setminus B(s, \tfrac18 r^{j-2}) = \emptyset \quad \text{and} \quad B(t, 2r^j) \setminus B(t, \tfrac18 r^{j-2}) = \emptyset. \tag{50}$$
Since $g_j(s) = g_j(t)$, we conclude that both $\varphi_j(s)$ and $\varphi_j(t)$ are defined by case (I) above, hence
$$\varphi_j(s) = \varphi_{j-1}(s) \quad \text{and} \quad \varphi_j(t) = \varphi_{j-1}(t). \tag{51}$$
So we are done by induction unless $B(g_{j-1}(s), 4r^{j-1}) \setminus B(g_{j-1}(s), \frac{1}{16} r^{j-3})$ is non-empty, in which case $\varphi_{j-1}(s)$ and $\varphi_{j-1}(t)$ are defined by case (II).
But from (50) and $d(s,t) \leq r^j$, we see that $B(t, 2r^{j-1}) = B(s, 2r^{j-1})$ and $B(s, \frac13 r^{j-2}) = B(t, \frac13 r^{j-2})$ as well. This implies that $\varphi_{j-1}(s)$ and $\varphi_{j-1}(t)$ see the same maximization in (48), hence $\varphi_{j-1}(s) = \varphi_{j-1}(t)$, and by (51) we are done.

Now let $s, t_1, \ldots, t_N \in X$ be as in condition (2), and let $B(s, 2r^j) \cap N_j = \{y_{\ell_1}, y_{\ell_2}, \ldots, y_{\ell_h}\}$ with $\ell_1 \leq \ell_2 \leq \cdots \leq \ell_h$. If $B(g_j(s), 4r^j) \setminus B(g_j(s), \frac{1}{16} r^{j-2})$ is empty, then $N = 1$, and Lemma 3.15 implies that $\varphi_j(s) = \varphi_j(t_1) \geq \varphi_{j-2}(t_1)$, where the latter inequality follows from monotonicity. Thus we may assume that $\varphi_j(s)$ is defined by case (II). To every $t_i$ we can associate a distinct point $g_j(t_i) \in B(s, 2r^j) \cap N_j$, and by construction we have $\varphi_{j-2}(g_j(t_i)) \geq \varphi_{j-2}(t_i)$, since $\varphi_{j-2}(y_k)$ is non-increasing as $k$ increases. Using this property again in conjunction with the definition (48), we have
$$\varphi_j(s) \geq r^j \sqrt{\log N} + \min\{\varphi_{j-2}(y_{\ell_i}) : i = 1, \ldots, N\} \geq r^j \sqrt{\log N} + \min\{\varphi_{j-2}(g_j(t_i)) : i = 1, \ldots, N\} \geq r^j \sqrt{\log N} + \min\{\varphi_{j-2}(t_i) : i = 1, \ldots, N\},$$
completing our verification of condition (2) of Theorem 3.3. Applying Theorem 3.3, we see that
$$\gamma_2(X,d) \lesssim \sup_{x \in X,\, i \in \mathbb{Z}} \varphi_i(x) = \varphi_M(x_0) = A(X,d). \tag{52}$$

To prove the matching lower bound, we first build a tree $T$ whose vertex set is a subset of $X \times \mathbb{Z}$. The root of $T$ is $(x_0, M)$. In general, if $(x, j)$ is already a vertex of $T$ with $j \geq 1$, then we add children to $(x, j)$ according to the maximizer of (48). If $\varphi_j(x) = \varphi_{j-1}(z)$, then we make $(z, j-1)$ the only child of $(x, j)$. Otherwise, we put the nodes $(y_1, j-2), \ldots, (y_h, j-2)$ as children of $(x, j)$, where $\{y_i\} \subseteq N_j$ are the nodes that achieve the maximum in (48).
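The core of this procedure, the greedy net construction together with the recursion (48), can be sketched in a few lines of Python. This is our own simplified illustration, not the algorithm verbatim: it omits the case (I) shortcut and replaces the term $\max\{\varphi_{j-1}(z) : z \in B(x, \frac13 r^{j-1})\}$ in (48) by $\varphi_{j-1}(x)$, which preserves the monotonicity $\varphi_j \geq \varphi_{j-1}$ but need not reproduce the exact values the proof analyzes; all names are ours.

```python
import math

def approx_gamma2(points, dist, r=16):
    # Sketch of the recursion behind Theorem 3.14 on a finite metric space.
    # points: list of point labels; dist(x, y): the metric (values >= 1).
    D = max(dist(x, y) for x in points for y in points if x != y)
    M = max(1, math.ceil(math.log(D, r)))
    phi2 = {x: 0.0 for x in points}  # plays the role of phi_{j-2}
    phi1 = {x: 0.0 for x in points}  # plays the role of phi_{j-1}
    for j in range(2, M + 1):
        # Greedy (r^{j-1}/3)-net: repeatedly take an uncovered point
        # maximizing phi_{j-2}, as in the construction of N_j.
        net, remaining = [], set(points)
        while remaining:
            y = max(remaining, key=lambda p: phi2[p])
            net.append(y)
            remaining = {p for p in remaining if dist(p, y) > r**(j - 1) / 3}
        phi = {}
        for x in points:
            # Net points y_{l_1}, ..., y_{l_h} inside B(x, 2 r^j),
            # listed in their greedy selection order.
            near = [y for y in net if dist(x, y) <= 2 * r**j]
            best = phi1[x]  # simplified stand-in for the second term of (48)
            for k in range(1, len(near) + 1):
                best = max(best, r**j * math.sqrt(math.log(k))
                                 + min(phi2[y] for y in near[:k]))
            phi[x] = best
        phi2, phi1 = phi1, phi
    return phi1[points[0]]
```

On the two-point space $\{0, 300\}$ with the line metric and $r = 16$, this returns $r^3\sqrt{\log 2} \approx 3410$: an $O_r(1)$-factor overshoot of $\gamma_2 \asymp 300$, as expected of a fixed-$r$ approximation.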
Let the pair $(T', s)$ be constructed from $T$ in the following way: we replace every maximal path of the form $(x, j_0), (x, j_0 - 1), \ldots, (x, j_0 - k)$ by the vertex $x$ and put $s(x) = j_0 - k$. It follows immediately from the construction that
$$\mathrm{val}_r(T', s) \gtrsim \varphi_M(x_0) - r\, \mathrm{diam}(X,d) \gtrsim \varphi_M(x_0), \tag{53}$$
where the latter inequality follows from (52), since $\varphi_M(x_0) \gtrsim \gamma_2(X,d) \gtrsim \mathrm{diam}(X,d)$. Note that the correction term involving $\mathrm{diam}(X,d)$ in (53) arises simply because of the use of $\Delta(v) = |\Gamma(v)| + 1$ in the definition (43).

We next build a $(1, r, 8, \frac16)$-tree $\mathcal{F}$ which essentially captures the structure of the tree $T$. The sets in $\mathcal{F}$ will be balls in $X$, with the node $(x, j) \in T$ being associated with the set $B(x, 4r^j)$ in $\mathcal{F}$, which will have label $n(B(x, 4r^j)) = j$. We construct $\mathcal{F}$ recursively. The root of $\mathcal{F}$ is $B(x_0, 4r^M)$ (which is equal to $X$), and we define $n(B(x_0, 4r^M)) = M$. In general, if $\mathcal{F}$ contains the set $B(x, 4r^j)$ corresponding to the node $(x, j) \in T$, and if $(x, j)$ has children $(y_1, j-2), (y_2, j-2), \ldots, (y_h, j-2) \in T$, we add the sets $B(y_i, 4r^{j-2})$ as children of $B(x, 4r^j)$ in $\mathcal{F}$, with $n(B(y_i, 4r^{j-2})) = j - 2$. Likewise, if $(z, j-1)$ is the child of $(x, j)$, then we add the set $B(z, 4r^{j-1})$ as the unique child of $B(x, 4r^j)$ in $\mathcal{F}$ and put $n(B(z, 4r^{j-1})) = j - 1$. We continue in this manner until $T$ is exhausted.

We now verify that $\mathcal{F}$ is indeed a $(1, r, 8, \frac16)$-tree. First, note that if $(z, j-1)$ is a child of $(x, j)$ in $T$, then clearly $B(z, 4r^{j-1}) \subseteq B(x, 4r^j)$, since this can only happen if $d(x, z) \leq \frac13 r^{j-1}$. Also, if $(y_1, j-2), \ldots, (y_h, j-2)$ are the children of $(x, j)$, then by the construction of the maps in (48) we have $d(y_i, x) \leq 2r^j$, hence $B(y_i, 4r^{j-2}) \subseteq B(x, 4r^j)$, recalling that $r \geq 16$. Furthermore, for $i \neq k$, since $y_i, y_k \in N_j$ we have $d(y_i, y_k) \geq \frac13 r^{j-1}$, so $B(y_i, 4r^{j-2}) \cap B(y_k, 4r^{j-2}) = \emptyset$, verifying that $\mathcal{F}$ is indeed a tree of subsets. In fact, we have the estimate
$$d\big(B(y_i, 4r^{j-2}),\, B(y_k, 4r^{j-2})\big) \geq \tfrac13 r^{j-1} - 8r^{j-2} \geq \tfrac16 r^{j-1} = \tfrac16 r^{n(B(x, 4r^j)) - 1},$$
using $r \geq 16$. This verifies that property (2) of a $(1, r, 8, \frac16)$-tree is satisfied. Property (1) of a $(1, r, 8, \frac16)$-tree follows immediately by construction. Finally, to verify property (3), note that for any set in our tree of subsets $\mathcal{F}$, corresponding to a node of the form $(x, j) \in T$, we have $\mathrm{diam}(B(x, 4r^j)) \leq 8r^j$ and $n(B(x, 4r^j)) = j$.

By construction, we have $\mathrm{val}_r(T', s) \lesssim \mathrm{size}_r(\mathcal{F}) + r\, \mathrm{diam}(X,d)$, and Lemma 3.7 yields $\gamma_2(X,d) \gtrsim \mathrm{size}_r(\mathcal{F}) + \mathrm{diam}(X,d)$ (using $\gamma_2(X,d) \gtrsim \mathrm{diam}(X,d)$). Combining this with (53) shows that
$$\gamma_2(X,d) \gtrsim \mathrm{val}_r(T', s) \gtrsim \varphi_M(x_0) = A(X,d).$$
Together with (52), this shows that $\gamma_2(X,d) \asymp A(X,d)$.

The only thing left is to remove the dependence of our running time on $M$. Since there are at most $n^2$ distinct distances in $(X,d)$, only $O(n^2)$ of the maps $\varphi_0, \varphi_1, \ldots, \varphi_M$ are distinct. More precisely, suppose that there is no pair $u, v \in X$ satisfying $d(u,v) \in [r^{j-3}, r^{j+1}]$ for some $j \in \mathbb{Z}$. In that case, $\varphi_j(x)$ is defined by case (I) for all $x \in X$, and thus $\varphi_j \equiv \varphi_{j-1}$. Obviously, we may skip the computation of the intermediate non-distinct maps (and it is easy to see which maps to skip by precomputing the values of $j$ such that there are $u, v \in X$ with $d(u,v) \in [r^{j-3}, r^{j+1}]$).
Since there are only $O(n^2)$ non-trivial values of $j$, this completes the proof.

3.4 Tree-like properties of the Gaussian free field

Finally, we consider how the resistance metric (and hence the Gaussian free field) allows us to obtain trees with special properties. Consider a network $G(V)$ and the associated metric space $(V, \sqrt{R_{\mathrm{eff}}})$. Let $(T, s)$ be an $r$-separated tree in $G$. We say that $(T, s)$ is strongly $r$-separated if, for every non-root node $v \in T$, we have the inequality
$$\sqrt{R_{\mathrm{eff}}(v, T \setminus T_v)} \geq \tfrac{1}{20} r^{s(p(v))-1}, \tag{54}$$
where $p(v)$ denotes the parent of $v$ in $T$.

Lemma 3.16. For any network $G(V)$ and any $r \geq 96$, let $(T_0, s)$ be an arbitrary $r$-separated tree on the space $(V, \sqrt{R_{\mathrm{eff}}})$. Then there is an induced strongly $r$-separated tree $(T, s)$ such that $|\Gamma_T(v)| \geq |\Gamma_{T_0}(v)|/2$ for all $v \in T \setminus \mathcal{L}_T$. Furthermore,
$$\mathrm{val}_r(T, s) \asymp \mathrm{val}_r(T_0, s). \tag{55}$$

Proof. Consider any non-leaf node $v \in T_0$ with children $c_1, \ldots, c_k$, where $k \geq 1$. If $k = 1$, let $S_v = \{c_1\}$. Otherwise, we wish to apply Proposition 2.10 to the sets $\{T_{c_i}\}_{i=1}^k$. By property (2) of separated trees, we get that for all $x \in T_{c_i}$, $y \in T_{c_j}$ with $i \neq j$,
$$R_{\mathrm{eff}}(x, y) \geq \left( \tfrac12 r^{s(v)-1} \right)^2 = \tfrac14 r^{2(s(v)-1)}.$$
Combined with property (3) of separated trees, Proposition 2.10 yields a subset $S_v \subseteq \{c_1, \ldots, c_k\}$ with $|S_v| \geq k/2$ such that for $c \in S_v$ we have
$$R_{\mathrm{eff}}\big(T_c,\, T_v \setminus (T_c \cup \{v\})\big) \geq \tfrac14 r^{2(s(v)-1)} \cdot \tfrac{1}{24} \geq \tfrac{1}{96}\, r^{2(s(v)-1)}.$$
Applying Lemma 2.13 with $A = T_c$, $B_1 = T_v \setminus (T_c \cup \{v\})$, and $B_2 = \{v\}$, we get that
$$R_{\mathrm{eff}}(T_c, T_v \setminus T_c) \geq \tfrac{1}{100}\, r^{2(s(v)-1)}. \tag{56}$$
Next, consider the induced $r$-separated tree $(T, s)$ that arises from deleting, for every non-leaf node $v \in T_0$, all the children not in $S_v$, as well as all their descendants.
It is clear that for all $v \in T \setminus \mathcal{L}_T$ we have $|\Gamma_T(v)| \geq |\Gamma_{T_0}(v)|/2$. Lemma 3.13 then yields that $\mathrm{val}_r(T, s) \asymp \mathrm{val}_r(T_0, s)$.

It remains to verify that $(T, s)$ is strongly $r$-separated. Define $D_0 = 1$ and, for $h \geq 1$,
$$D_h = D_{h-1} \left( 1 - D_{h-1}^2 r^{-4h} \right).$$
It is straightforward to verify that $D_h \geq 1/2$ for all $h \geq 0$, since $r \geq 2$. We now prove, by induction on the height of $T$, that for every node $u$ at depth $h \geq 1$ in $T$,
$$\sqrt{R_{\mathrm{eff}}(u, T \setminus T_u)} \geq \tfrac{1}{10} r^{s(p(u))-1} D_{h-1}. \tag{57}$$
By the preceding remarks, this verifies (54), completing the proof of the lemma.

Let $z = z(T)$ be the root, and let $v$ be some child of $z$. Let $u \in T_v$ be a node at depth $h$ in $T_v$ (and hence at depth $h+1$ in $T$). By (56), we have
$$\sqrt{R_{\mathrm{eff}}(u, T \setminus T_v)} \geq \sqrt{R_{\mathrm{eff}}(T_v, T \setminus T_v)} \geq \tfrac{1}{10} r^{s(p(v))-1}. \tag{58}$$
If $u = v$, then the preceding inequality yields (57). Otherwise $u \neq v$ and $h \geq 1$. By the induction hypothesis (57) applied to $u$ and $T_v$, we have
$$\sqrt{R_{\mathrm{eff}}(u, T_v \setminus T_u)} \geq \tfrac{1}{10} r^{s(p(u))-1} D_{h-1}. \tag{59}$$
Since $u \in T_v$ is a node at depth $h$, we get from property (1) of a separated tree that $s(p(v)) \geq s(p(u)) + 2h$, and therefore
$$\tfrac{1}{10} r^{s(p(u))-1} D_{h-1} \leq r^{-2h} \cdot \tfrac{1}{10} r^{s(p(v))-1} D_{h-1}. \tag{60}$$
Now, using (58) and (59), we apply Lemma 2.13 with $A = \{u\}$, $B_1 = T_v \setminus T_u$, and $B_2 = T \setminus T_v$, yielding
$$\sqrt{R_{\mathrm{eff}}(u, T \setminus T_u)} \geq \frac{\tfrac{1}{10} r^{s(p(u))-1} D_{h-1} \cdot \tfrac{1}{10} r^{s(p(v))-1}}{\sqrt{\big( \tfrac{1}{10} r^{s(p(u))-1} D_{h-1} \big)^2 + \big( \tfrac{1}{10} r^{s(p(v))-1} \big)^2}} \geq \tfrac{1}{10} r^{s(p(u))-1} D_{h-1} \cdot \frac{1}{\sqrt{1 + (D_{h-1} r^{-2h})^2}} \geq \tfrac{1}{10} r^{s(p(u))-1} D_{h-1} \left( 1 - D_{h-1}^2 r^{-4h} \right),$$
where the second transition follows from (60) and the third transition follows from the fact that $(1+x^2)^{-1/2} \geq 1 - x^2$. This completes the proof.

Good trees inside the GFF.
Consider a Gaussian free field $\{\eta_x\}_{x \in V}$ corresponding to a network $G(V)$, with the associated metric space $(V, d)$, where $d(x,y) = \big(\mathbb{E}(\eta_x - \eta_y)^2\big)^{1/2}$.

Proposition 3.17. For some $r_0 > 2$ and any $r \geq r_0$ and $C \geq 1$, there exists a constant $K = K(C, r)$, depending only on $C$ and $r$, such that the following holds. For an arbitrary Gaussian free field $\{\eta_x\}_{x \in V}$ with $\gamma_2(V, d) \geq K\, \mathrm{diam}(V)$, there exists an $r$-separated tree $(T, s)$ with set of leaves $\mathcal{L}$ such that the following properties hold.

(a) $\mathrm{val}_r(T, s) \asymp_{r,C} \gamma_2(V, d)$.
(b) For every non-root $v \in T$, $\mathrm{dist}_{L^2}\big(\eta_v, \mathrm{aff}(\{\eta_u\}_{u \notin T_v})\big) \geq \frac{1}{20} r^{s(p(v))-1}$.
(c) For every $v \in T \setminus \mathcal{L}$, $\Delta(v) \geq \exp\big( C^2 r^2\, 4^{s(z)-s(v)} \big)$.
(d) For every $v \in T \setminus \mathcal{L}$ and $w \in \mathcal{L} \cap T_v$,
$$\sum_{u \in P(v,w)} r^{s(u)} \sqrt{\log \Delta(u)} \geq \tfrac12 r^{s(p(v))} \sqrt{\log \Delta(p(v))}.$$

We call such a tree $T$ a $C$-good $r$-separated tree.

Proof. By the definition of the GFF, we have $d = \sqrt{R_{\mathrm{eff}}}$ for some network $G(V)$. Applying Theorem 3.9, there exists an $r$-separated tree $(T_0, s_0)$ such that $\mathrm{val}_r(T_0, s_0) \asymp_r \gamma_2(V, d)$. Recalling property (3) of Definition 3.8 and the assumption that $\gamma_2(V, d) \geq K\, \mathrm{diam}(V)$, we can select $K$ large enough that the hypothesis of Lemma 3.11 is satisfied for the separated tree $(T_0, s_0)$. Applying Lemma 3.11, we then get a $2C$-regular separated tree $(T_1, s_1)$ with $\mathrm{val}_r(T_1, s_1) \asymp_{r,C} \mathrm{val}_r(T_0, s_0)$. At this point, using Lemma 3.16, we obtain a $C$-regular strongly $r$-separated tree $(T_2, s_2)$ such that $\mathrm{val}_r(T_2, s_2) \asymp_r \gamma_2(V, d)$. That is to say, the tree $(T_2, s_2)$ satisfies properties (a) and (c). Furthermore, by Lemma 2.15, we see that property (b) holds for $(T_2, s_2)$, because it is equivalent to the strongly $r$-separated property (54).
Finally, Lemma 3.12 implies that there exists a subtree $T \subseteq T_2$ with $\mathrm{val}_r(T, s_2|_T) \asymp_{r,C} \mathrm{val}_r(T_2, s_2)$ such that property (d) holds for $T$, and properties (a) and (c) are preserved (note that by property (2) of Lemma 3.12, the degrees of non-leaf nodes are preserved). Observe that property (b) is preserved by taking subtrees. Writing $s = s_2|_T$, we conclude that the separated tree $(T, s)$ satisfies all the required properties, completing the proof.

4 The cover time

We now turn to our main theorem.

Theorem 4.1. For any network $G(V)$ with total conductance $\mathcal{C} = \sum_{x \in V} c_x$, we have
$$t_{\mathrm{cov}}(G) \asymp \mathcal{C} \left[ \gamma_2(V, \sqrt{R_{\mathrm{eff}}}) \right]^2.$$

Combined with Theorem 2.3, this also yields a positive answer to the strong conjecture of Winkler and Zuckerman [54].

Corollary 4.2. For every $\delta \in (0,1)$ and any network $G(V)$ with total conductance $\mathcal{C} = \sum_{x \in V} c_x$,
$$t_{\mathrm{cov}}(G) \asymp \mathcal{C} \left[ \gamma_2(V, \sqrt{R_{\mathrm{eff}}}) \right]^2 \asymp_\delta t_{\mathrm{bl}}(G, \delta).$$

For the remainder of this section, we denote
$$\mathcal{S} = \gamma_2(V, \sqrt{R_{\mathrm{eff}}}). \tag{61}$$
It is clear that for all $0 < \delta < 1$ we have $t_{\mathrm{cov}}(G) \leq t_{\mathrm{bl}}(G, \delta)$, and $t_{\mathrm{bl}}(G, \delta) \lesssim_\delta \mathcal{C} \mathcal{S}^2$ by Theorem 2.3. Thus, in order to prove the preceding corollary and Theorem 4.1, we need only show that
$$t_{\mathrm{cov}}(G) \gtrsim \mathcal{C} \mathcal{S}^2. \tag{62}$$

Let $\{W_t\}$ be the continuous-time random walk on $G(V)$, and let $\{L^v_t\}_{v \in V}$ be the local times, as defined in Section 2. Applying the isomorphism theorem (Theorem 1.14) with some fixed $v_0 \in V$, we have
$$\left\{ L^x_{\tau(t)} + \tfrac12 \eta_x^2 : x \in V \right\} \stackrel{\mathrm{law}}{=} \left\{ \tfrac12 \big( \eta_x + \sqrt{2t} \big)^2 : x \in V \right\} \tag{63}$$
for some associated Gaussian process $\{\eta_x\}_{x \in V}$. By Lemma 2.14, this process is a Gaussian free field, and for every $x, y \in V$ we have
$$d(x,y) \triangleq \sqrt{\mathbb{E}|\eta_x - \eta_y|^2} = \sqrt{R_{\mathrm{eff}}(x,y)}. \tag{64}$$
Let $D = \max_{x,y \in V} d(x,y)$ be the diameter of the Gaussian process.

Proof outline. Let $\{L > 0\}$ be the event $\{L^x_{\tau(t)} > 0 : x \in V\}$.
Consider a set $S \subseteq \mathbb{R}^V$, and let $S_L$ and $S_R$ be the events that the left- and right-hand sides of (63), respectively, fall into $S$. Our goal is to find such a set $S$ so that for some $t \asymp \mathcal{S}^2$ we have
$$\mathbb{P}(S_R) - \mathbb{P}(S_L \cap \{L > 0\}) \geq c \tag{65}$$
for some universal constant $c > 0$. In this case, with probability at least $c$, the set of uncovered vertices $\{v : L^v_{\tau(t)} = 0\}$ is non-empty. Using the fact that the inverse local time $\tau(t)$ is $\gtrsim \mathcal{C} t$ with probability at least $1 - c/2$, we will conclude that $t_{\mathrm{cov}}(G) \gtrsim \mathcal{C} \mathcal{S}^2$.

Thus we are left to give a lower bound on $\mathbb{P}(S_R)$ and an upper bound on $\mathbb{P}(S_L \cap \{L > 0\})$. Since the structure of the local times process $\{L^x_t\}$ conditioned on $\{L > 0\}$ can be quite unwieldy, we will only use first-moment bounds for the latter task. Calculating a lower bound on $\mathbb{P}(S_R)$ will require a significantly more delicate application of the second-moment method, but here we will be able to exploit the full power of Gaussian processes and the majorizing measures theory.

Before defining the set $S \subseteq \mathbb{R}^V$, we describe it in broad terms. By (64) and Theorem (MM), we know that for some $t_0 \asymp \mathcal{S}^2$ we should have $\mathbb{E} \inf_{x \in V} \eta_x = -\mathbb{E} \sup_{x \in V} \eta_x$ close to $-\sqrt{2t_0}$. By Lemma 2.2, we know that the standard deviation of $\inf_{x \in V} \eta_x$ is $O(D)$. Thus we can expect that, with probability bounded away from 0, for the right choice of $t_0 \asymp \mathcal{S}^2$, some value on the right-hand side of (63) is $O(D)$ for $t = t_0$.

Now, when $\mathbb{E} \sup_{x \in V} \eta_x \gg D$, it is intuitively true that for $t = \varepsilon t_0$ and $\varepsilon > 0$ small, there should be many points $x \in V$ with $\eta_x \approx -\sqrt{2t}$. If these points have some level of independence, then we should expect that, with probability bounded away from 0, there is some $x \in V$ with $|\eta_x + \sqrt{2t}|$ very small (much smaller than $O(D)$). Our set $S$ will represent the existence of such a point.
On the other hand, we will argue that if all the local times $\{L^x_{\tau(t)}\}$ are positive, then the probability for the left-hand side to take such a low value is small.

4.1 A tree-like sub-process

First, observe that by the commute time identity, $t_{\mathrm{cov}}(G) \ge C \max_{x,y \in V} R_{\mathrm{eff}}(x,y) = C D^2$. Thus in proving Theorem 4.1, we may assume that

  $S \ge K D$,   (66)

for any universal constant $K \ge 1$. In particular, by an application of Proposition 3.17, we can assume the existence of an $r$-separated tree $(T, s)$ in $(V, d)$, for some fixed $r \ge 128$, with root $z = v_0$, and such that for some constant $C \ge 1$ and $\theta = \theta(C)$, properties (67), (70), (71), and (72) below are satisfied. We will choose $C$ sufficiently large later, independently of any other parameters.

For each $u \in T$, let $h_u$ denote the height of $u$, where we order the tree so that $h_z = 0$ for the root $z$. Recalling that $\mathcal{L}$ is the set of leaves of $T$, for each $v \in \mathcal{L}$, let $P(v) = \{f_v(0), f_v(1), \ldots, f_v(h_v)\}$ be the set of nodes on the path from $z = f_v(0)$ to $v = f_v(h_v)$, where $f_v(i)$ is the parent of $f_v(i+1)$ for $0 \le i < h_v$.

First, we can require that for every $v \in \mathcal{L}$,

  $\sigma_v \ge \tfrac{1}{\theta} S$,   (67)

where

  $\chi_v(k) \triangleq r^{s(f_v(k))} \sqrt{\log \Delta(f_v(k))}$,   (68)

  $\sigma_v \triangleq \sum_{k=0}^{h_v - 1} \chi_v(k)$.   (69)

Furthermore, we can require that the tree $T$ satisfies, for every $v \in \mathcal{L}$,

  $\sum_{i=j+1}^{h_v - 1} \chi_v(i) \ge C \cdot 2^j \cdot r^{s(f_v(j))}$,   (70)

as well as

  $\Delta(f_v(k)) \ge \exp\!\left( C^2 r^2 4^k \right)$.   (71)

Finally, we require that for every $v \in T$,

  $\mathrm{dist}_{L^2}\!\left( \eta_v, \mathrm{aff}(\{\eta_u\}_{u \notin T_v}) \right) \ge \tfrac{1}{20}\, r^{s(p(v)) - 1}$.   (72)

All these requirements are justified by Proposition 3.17.

The distinguishing event. For $u, v \in \mathcal{L}$, let $h_{uv}$ be the height of the least common ancestor of $u$ and $v$. We will use $\deg_\downarrow(v) = |\Gamma(v)|$ to denote the number of children of $v$.
Define

  $m_u = \prod_{k=0}^{h_u - 1} \deg_\downarrow(f_u(k))$, and $m_{uv} = \prod_{k=0}^{h_{uv} - 1} \deg_\downarrow(f_u(k))$.   (73)

First, we fix

  $\varepsilon = \frac{1}{2^{10}\, r\, \theta}$.   (74)

For every $v \in \mathcal{L}$, consider the events

  $E_v(\varepsilon) = \left\{ |\eta_v - \varepsilon S| \le 50\, r^{s(p(v))}\, m_v^{-3/4} \right\}$.   (75)

Instead of arguing directly about the events $E_v(\varepsilon)$, we will couple them to leaf events of a "percolation" process on $T$. In particular, in Section 4.2, we will prove the following lemma.

Lemma 4.3. For all $v \in \mathcal{L}$, there exist events $\mathcal{E}_v$ such that the following properties hold.

1. $\mathcal{E}_v \subseteq E_v(\varepsilon) = \left\{ |\eta_v - \varepsilon S| \le 50\, r^{s(p(v))}\, m_v^{-3/4} \right\}$.
2. $\mathbb{P}(\mathcal{E}_v) \ge \frac{1}{2} m_v^{-7/8}$.
3. $\mathbb{P}(\mathcal{E}_u \cap \mathcal{E}_v) \le m_{uv}^{1/8} (m_u m_v)^{-7/8}$.

In Section 4.3, we will prove that for any events $\{\mathcal{E}_v\}_{v \in \mathcal{L}}$ satisfying properties (2) and (3) of Lemma 4.3, we have

  $\mathbb{P}\left( \bigcup_{u \in \mathcal{L}} \mathcal{E}_u \right) \ge \frac{1}{8}$.   (76)

Thus for $t = \frac{1}{2}\varepsilon^2 S^2$, we have

  $\mathbb{P}\left( \exists v \in V : \tfrac{1}{2}(\eta_v + \sqrt{2t})^2 \le 50^2\, r^{2 s(p(v))}\, m_v^{-3/2} \right) \ge \frac{1}{8}$.   (77)

In light of the discussion surrounding (65), the reader should think of

  $\mathcal{S} = \left\{ s \in \mathbb{R}^V : s_v \le 50^2\, r^{2 s(p(v))}\, m_v^{-3/2} \text{ for some } v \in V \right\}$,

and then (77) gives the desired lower bound on $\mathbb{P}(\mathcal{S}_R)$.

We now turn to an upper bound on $\mathbb{P}(\mathcal{S}_L \cap \{L > 0\})$. The next lemma is proved in Section 4.4.

Lemma 4.4. For $t \ge \frac{1}{2}\varepsilon^2 S^2$,

  $\mathbb{P}\left( \bigcup_{v \in \mathcal{L}} \left\{ 0 < L^v_{\tau(t)} \le 50^2 \cdot r^{2 s(p(v))}\, m_v^{-3/2} \right\} \right) \le \frac{1}{16}$.   (78)

From (78) and (77), we conclude that with probability at least $1/16$, we must have $L^v_{\tau(t)} = 0$ for some $v \in V$ and $t = \frac{1}{2}\varepsilon^2 S^2$, else (63) is violated. This implies that

  $\mathbb{P}_{v_0}\left( \tau_{\mathrm{cov}} > \tau(\tfrac{1}{2}\varepsilon^2 S^2) \right) \ge \frac{1}{16}$.   (79)

To finish our proof of (62), and hence of Theorem 4.1, we will apply Lemma 2.7 with $\beta = \frac{1}{96}$. In particular, we may choose $K = 96/\varepsilon$ in (66), and then applying Lemma 2.7 yields

  $\mathbb{P}\left( \tau(\tfrac{1}{2}\varepsilon^2 S^2) \le \frac{C \varepsilon^2 S^2}{192} \right) \le \frac{1}{32}$.

Combining this with (79) yields

  $\mathbb{P}_{v_0}\left( \tau_{\mathrm{cov}} > \frac{C \varepsilon^2 S^2}{192} \right) \ge \frac{1}{16}$.
In particular, $\tau_{\mathrm{cov}} \gtrsim C \varepsilon^2 S^2$. This completes the proof of (62), and hence of Theorem 4.1.

4.2 The coupling

The present section is devoted to the proof of Lemma 4.3. Toward this end, we will try to find a leaf $v \in \mathcal{L}$ for which $\eta_v \approx \varepsilon S$. As in Lemma 4.3(1), the level of closeness we desire is gauged according to a proper scale, $r^{s(p(v))}$, as well as to the number of other leaves we expect to see at this scale, which is represented roughly by $m_v^{-3/4}$ (the value $3/4$ is not essential here, and any other value in $(1/2, 1)$ would suffice).

Our goal is to find such a leaf by starting at the root of the tree, and arguing that some of its children should be somewhat close to the target $\varepsilon S$. This closeness is achieved using the fact that, by the definition of an $r$-separated tree, the children are separated in the Gaussian distance, and thus exhibit some level of independence. We will continue in this manner inductively, arguing that the children which are somewhat close to the target have their own children which we could expect to be even closer, and so on. We aim to shrink these windows around the target more and more, so that they are small enough once we reach the leaves.

There are a number of difficulties involved in executing this scheme. In particular, conditioning on the exact values of the children of the root could determine the entire process, making future levels moot. Thus we must first select a careful filtering which allows us to reserve some randomness for later levels. This is done in Section 4.2.1. Furthermore, the intermediate targets have to be arranged according to the variances along the root-leaf paths in our tree. This corresponds to the fact that, although we have a uniform lower bound on each $\sigma_v$ (from (67)), the summation defining the $\sigma_v$'s could put different weights on the various levels (recall (69)).
The targets also have to take into account random "noise" from the filter described above, and thus the targets themselves must be random. This "window analysis" is performed in Section 4.2.2.

4.2.1 Restructuring the randomness

We know that $\eta_z = 0$, since $z = v_0$ is the root of $T$ (and the starting point of the associated random walk). Fix a depth-first ordering of $T$ (one starts at the root and explores as far as possible along each branch before backtracking). Write $u \prec v$ if $u$ is explored before $v$, and $u \preceq v$ if $u \prec v$ or $u = v$. For $u \ne z$, we write $u^-$ for the vertex preceding $u$ in the DFS order.

Let $F = \mathrm{span}(\{\eta_x : x \in T\})$. For a node $v \in T$, let $F_v = \mathrm{span}(\{\eta_u\}_{u \preceq v})$ and $F_v^- = \mathrm{span}(\{\eta_u\}_{u \prec v})$.

We next associate a centered Gaussian process $\{\xi_x : x \in T\}$ to $\{\eta_x : x \in T\}$ in the following inductive way. Define $\xi_z = 0$. Now, assuming we have defined $\xi_u$ for $u \prec v$, we define $\xi_v$ by writing $\eta_v = \zeta_v + \xi_v$, where $\zeta_v \in F_{v^-}$ and $\xi_v \perp F_{v^-}$. Observe that, by construction, $\{\xi_u\}_{u \preceq v}$ forms an orthogonal basis in $L^2$ for $F_v$. Applying (72), we have for all $u \in T$,

  $\|\xi_u\|_2 = \mathrm{dist}_{L^2}\!\left( \eta_u, \mathrm{span}(\{\eta_w\}_{w \prec u}) \right) \ge \mathrm{dist}_{L^2}\!\left( \eta_u, \mathrm{span}(\{\eta_w\}_{w \notin T_u}) \right) \ge \tfrac{1}{20}\, r^{s(p(u)) - 1}$,   (80)

where we used the fact that the span and the affine hull are the same since $\xi_z = 0$.

For $v \in \mathcal{L}$, define the subspaces

  $F_{v,k} = \mathrm{span}\!\left( \{\xi_u : f_v(k) \prec u \preceq f_v(k+1)\} \right)$,
  $F_{v,k}^- = \mathrm{span}\!\left( \{\xi_u : f_v(k) \prec u \prec f_v(k+1)\} \right)$.

For $0 \le k \le h_v - 1$, define inductively $\tilde\eta_{v,0} = 0$, and

  $\tilde\eta_{v,k+1} = \tilde\eta_{v,k} + \mathrm{proj}_{F_{v,k}}(\eta_v)$.   (81)

Note that the subspaces $\{F_{v,k}\}_{k=0}^{h_v - 1}$ are mutually orthogonal, and together they span $F_v$. Thus,

  $\tilde\eta_{v,h_v} = \eta_v$.   (82)

Furthermore, by the definition of the subspace $F_{v,k}^-$, we can decompose

  $\tilde\eta_{v,k+1} - \tilde\eta_{v,k} = \tilde\zeta_{v,k} + \tilde\xi_{v,k}$,   (83)

where $\tilde\zeta_{v,k} \in F_{v,k}^-$ and $\tilde\xi_{v,k} \perp F_{v,k}^-$.
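The inductive construction above is Gram-Schmidt orthogonalization in the $L^2$ inner product $\mathbb{E}[XY]$, carried out in DFS order. In matrix terms (a numerical aside, not part of the proof; it assumes NumPy and that the vertices are indexed in DFS order), if $\Sigma = M M^{\mathsf{T}}$ is the Cholesky factorization of the covariance of $(\eta_v)$, then $\|\xi_v\|_2^2 = M_{vv}^2$ is exactly the residual variance of $\eta_v$ after projecting onto the span of the earlier variables:

```python
import numpy as np

# Sequential orthogonal decomposition eta_v = zeta_v + xi_v, realized by
# the Cholesky factorization Sigma = M M^T. The covariance below is an
# arbitrary positive-definite example standing in for the GFF covariance.
Sigma = np.array([[2.0, 1.0, 0.5],
                  [1.0, 3.0, 1.0],
                  [0.5, 1.0, 1.5]])
M = np.linalg.cholesky(Sigma)  # eta = M w, with w standard Gaussian

for v in range(1, 3):
    # ||xi_v||_2^2 = M[v, v]^2 equals the variance of eta_v left over
    # after projecting onto span{eta_u : u < v} (the Schur complement).
    cond_var = Sigma[v, v] - Sigma[v, :v] @ np.linalg.inv(Sigma[:v, :v]) @ Sigma[:v, v]
    assert abs(M[v, v] ** 2 - cond_var) < 1e-12
```

Since $M$ is lower triangular, $\xi_v = M_{vv} w_v$ is automatically orthogonal to every $\eta_u$ with $u \prec v$, which is the property the construction needs.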
The next lemma states that $\tilde\xi_{v,k}$ has variance at least comparable to that of $\tilde\zeta_{v,k}$.

Lemma 4.5. For every $v \in \mathcal{L}$ and $k = 0, 1, \ldots, h_v - 1$, we have the estimates

  $\|\tilde\zeta_{v,k}\|_2 \le 8\, r^{s(f_v(k))}$,   (84)

and

  $\tfrac{1}{64}\, r^{s(f_v(k)) - 1} \le \|\tilde\xi_{v,k}\|_2 \le 8\, r^{s(f_v(k))}$.   (85)

Proof. Writing the telescoping sum

  $\eta_v = \sum_{j=0}^{h_v - 1} \left( \eta_{f_v(j+1)} - \eta_{f_v(j)} \right)$,

we see that

  $\left\| \mathrm{proj}_{F_{v,k}}(\eta_v) \right\|_2 \le \sum_{j=k}^{h_v - 1} \|\eta_{f_v(j+1)} - \eta_{f_v(j)}\|_2 \le \sum_{j=k}^{h_v - 1} 4\, r^{s(f_v(j))} \le 8\, r^{s(f_v(k))}$,   (86)

where we used properties (1) and (3) of the separated tree, and have assumed $r \ge 2$. Thus by orthogonality and (83), we have

  $\|\tilde\zeta_{v,k}\|_2 \le \|\tilde\eta_{v,k+1} - \tilde\eta_{v,k}\|_2 = \left\| \mathrm{proj}_{F_{v,k}}(\eta_v) \right\|_2 \le 8\, r^{s(f_v(k))}$,

and precisely the same conclusion holds for $\tilde\xi_{v,k}$.

Next, we establish a lower bound on $\|\tilde\xi_{v,k}\|_2$. From (81) and (83),

  $\tilde\xi_{v,k} = \mathrm{proj}_{F_{v,k}}(\eta_v) - \mathrm{proj}_{F_{v,k}^-}(\eta_v)$   (87)
    $= \sum_{j=k}^{h_v - 1} \left[ \mathrm{proj}_{F_{v,k}}(\eta_{f_v(j+1)} - \eta_{f_v(j)}) - \mathrm{proj}_{F_{v,k}^-}(\eta_{f_v(j+1)} - \eta_{f_v(j)}) \right]$
    $= \left[ \mathrm{proj}_{F_{v,k}}(\eta_{f_v(k+1)} - \eta_{f_v(k)}) - \mathrm{proj}_{F_{v,k}^-}(\eta_{f_v(k+1)} - \eta_{f_v(k)}) \right]$
      $+ \sum_{j=k+1}^{h_v - 1} \left[ \mathrm{proj}_{F_{v,k}}(\eta_{f_v(j+1)} - \eta_{f_v(j)}) - \mathrm{proj}_{F_{v,k}^-}(\eta_{f_v(j+1)} - \eta_{f_v(j)}) \right]$.

Observe that the term in brackets is precisely

  $\mathrm{proj}_{F_{v,k}}(\eta_{f_v(k+1)}) - \mathrm{proj}_{F_{v,k}^-}(\eta_{f_v(k+1)}) = \xi_{f_v(k+1)}$,

since $\eta_{f_v(k)} \perp F_{v,k}$. In particular, we arrive at

  $\|\tilde\xi_{v,k}\|_2 \ge \|\xi_{f_v(k+1)}\|_2 - \sum_{j=k+1}^{h_v - 1} \|\eta_{f_v(j+1)} - \eta_{f_v(j)}\|_2 \ge \tfrac{1}{32}\, r^{s(f_v(k)) - 1} - 2\, r^{s(f_v(k+1))} \ge \tfrac{1}{32}\, r^{s(f_v(k)) - 1} - 2\, r^{s(f_v(k)) - 2} \ge \tfrac{1}{64}\, r^{s(f_v(k)) - 1}$,

where in the second line we have used (80) and properties (1) and (2) of the separated tree, and in the final line we have used $r \ge 128$.

4.2.2 Defining the events $\mathcal{E}_v$

Recall that our goal now is to find many leaves $v \in \mathcal{L}$ with $\eta_v \approx \varepsilon S$.
Now, writing

  $\eta_v = \sum_{k=0}^{h_v - 1} \mathrm{proj}_{F_{v,k}}(\eta_v) = \sum_{k=0}^{h_v - 1} (\tilde\zeta_{v,k} + \tilde\xi_{v,k})$,

our "ideal" goal would be to hit a window around the target by getting the $k$-th term of this sum close to

  $a_v(k) \triangleq \varepsilon S\, \frac{\chi_v(k)}{\sigma_v}$,

for $k = 0, 1, \ldots, h_v - 1$. We will use the variance of the $\tilde\xi_{v,k}$ variables (recall Lemma 4.5) to lower bound the probability that some points get closer to the desired target. On the other hand, we will treat the $\tilde\zeta_{v,k}$ variables as noise which has to be bounded in absolute value. This noise cannot always be countered in a single level, but it can be countered on average along the path to the leaf; this is the content of (70). We will amortize this cost over future targets as follows. Let $b_v(0) = 0$, and for $k = 0, 1, \ldots, h_v - 2$, define

  $\rho_v(k) = \tilde\zeta_{v,k} + \tilde\xi_{v,k} - a_v(k) + b_v(k)$,

  $b_v(k+1) = \sum_{i=0}^{k} \frac{\chi_v(k+1)}{\sum_{\ell=i+1}^{h_v - 1} \chi_v(\ell)}\, \rho_v(i)$.

Clearly $\rho_v(0) = \tilde\zeta_{v,0} + \tilde\xi_{v,0} - a_v(0)$ represents how much we miss our first target. A similar fact holds for the final target, as the next lemma argues; in between, the errors are spread out proportionally to the contribution to $\mathrm{val}_r(T, s)$ of each of the remaining levels (represented by the $\chi_v(k)$ values). Here $b_v(k)$ represents the error that is meant to be absorbed at the $k$-th level.

Lemma 4.6. For every $v \in \mathcal{L}$, $\rho_v(h_v - 1) = \eta_v - \varepsilon S$.

Proof. We have

  $\sum_{k=0}^{h_v - 2} b_v(k+1) = \sum_{k=0}^{h_v - 2} \sum_{i=0}^{k} \frac{\chi_v(k+1)}{\sum_{\ell=i+1}^{h_v - 1} \chi_v(\ell)}\, \rho_v(i) = \sum_{i=0}^{h_v - 2} \rho_v(i) \sum_{k=i}^{h_v - 2} \frac{\chi_v(k+1)}{\sum_{\ell=i+1}^{h_v - 1} \chi_v(\ell)} = \sum_{i=0}^{h_v - 2} \rho_v(i)$.   (88)

Also note that

  $\sum_{k=0}^{h_v - 1} \rho_v(k) = \sum_{k=0}^{h_v - 1} \left( \tilde\zeta_{v,k} + \tilde\xi_{v,k} - a_v(k) + b_v(k) \right) = \eta_v - \varepsilon S + \sum_{k=0}^{h_v - 1} b_v(k)$.

Combined with $b_v(0) = 0$ and (88), it follows that $\rho_v(h_v - 1) = \eta_v - \varepsilon S$, completing the proof.
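The amortization rule can be checked mechanically. The sketch below is an illustration only: the increments and weights are arbitrary rationals standing in for $\tilde\zeta_{v,k} + \tilde\xi_{v,k}$ and $\chi_v(k)$, and the recursion for $b_v$ is run exactly as defined above, confirming the conclusion of Lemma 4.6:

```python
from fractions import Fraction as F

# Verify Lemma 4.6 on arbitrary data: the rule spreading each miss rho(i)
# over the remaining levels forces rho(h-1) = eta_v - eps*S exactly.
chi = [F(3), F(1), F(4), F(2)]      # stands in for chi_v(0..h-1), h = 4
delta = [F(5), F(-2), F(1), F(7)]   # stands in for zeta_{v,k} + xi_{v,k}
sigma = sum(chi)
epsS = F(6)                         # stands in for eps * S
a = [epsS * c / sigma for c in chi] # per-level targets a_v(k)

h = len(chi)
b = [F(0)] * h                      # b_v(0) = 0
rho = [F(0)] * h
for k in range(h):
    rho[k] = delta[k] - a[k] + b[k]
    if k + 1 < h:
        # b_v(k+1) = sum_{i<=k} chi(k+1) / (sum_{l>i} chi(l)) * rho(i)
        b[k + 1] = sum(chi[k + 1] / sum(chi[i + 1:]) * rho[i]
                       for i in range(k + 1))

eta_v = sum(delta)
assert rho[h - 1] == eta_v - epsS   # the identity of Lemma 4.6
```

Exact rational arithmetic makes the telescoping identity (88) visible without rounding error; any choice of positive weights and real increments yields the same conclusion.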
We now define the events

  $\mathcal{A}_v(k) = \left\{ |\tilde\zeta_{v,k}| \le \varepsilon\theta \chi_v(k) \right\}$,
  $\mathcal{B}_v(k) = \left\{ |\rho_v(k)| \le w_v(k) \right\}$,

where, for $0 \le k \le h_v - 2$, $w_v(k)$ is selected so that

  $\mathbb{P}\left( \mathcal{B}_v(k) \mid \tilde\zeta_{v,k} + b_v(k) \right) = \deg_\downarrow(f_v(k))^{-1/8}$.   (89)

We emphasize that the window $w_v(k)$ is not deterministic. And, for $k = h_v - 1$, we select $w_v(k)$ so that

  $\mathbb{P}\left( \mathcal{B}_v(k) \mid \tilde\zeta_{v,k} + b_v(k) \right) = \deg_\downarrow(f_v(k))^{-1/8}\, m_v^{-3/4}$.   (90)

Remark 2. Here, $w_v(k)$ can be thought of as representing the window size around the random target. The value of $w_v(k)$ is chosen to make the probabilities in (89) and (90) exact, allowing us to couple seamlessly to the percolation process in Section 4.3. The key fact, proved in Lemma 4.7, is that the window sizes actually satisfy a deterministic upper bound, assuming that all the "good" events on the path from the root to $f_v(k)$ occurred. Thus one should think of the true window size as the bounds specified in (94) and (95), while the random value is for the purpose of the coupling.

For $0 \le k \le \ell \le h_v - 1$, define

  $\mathcal{A}_v(k, \ell) \triangleq \bigcap_{i=k}^{\ell} \mathcal{A}_v(i)$ and $\mathcal{B}_v(k, \ell) \triangleq \bigcap_{i=k}^{\ell} \mathcal{B}_v(i)$.   (91)

Since $\tilde\xi_{v,k} \in \sigma(F_{v,k} \setminus F_{v,k}^-)$ (see, e.g., (87)), we see that the event $\mathcal{B}_v(k)$ is conditionally independent of $\sigma(F_{f_v(k+1)}^-)$ given the value of $\tilde\zeta_{v,k} + b_v(k)$. This implies that for all events $E_0 \in \sigma(F_{f_v(k+1)}^-)$ such that $E_0 \cap \mathcal{A}_v(0, k) \cap \mathcal{B}_v(0, k-1) \ne \emptyset$,

  $\mathbb{P}\left( \mathcal{B}_v(k) \mid \mathcal{A}_v(0, k), \mathcal{B}_v(0, k-1), E_0 \right) = \begin{cases} \deg_\downarrow(f_v(k))^{-1/8}, & 0 \le k < h_v - 1, \\ \deg_\downarrow(f_v(k))^{-1/8}\, m_v^{-3/4}, & k = h_v - 1. \end{cases}$   (92)

Finally, for $v \in \mathcal{L}$, we define the event

  $\mathcal{E}_v = \mathcal{A}_v(0, h_v - 1) \cap \mathcal{B}_v(0, h_v - 1)$.   (93)

Window analysis. We will now show that our final window $w_v(h_v - 1)$ is small enough. Observe that our choice of $w_v(k)$ is not deterministic. Nevertheless, we will give an absolute upper bound.
The bound is essentially the natural one: for any node $u$ in the tree and any child $v$ of $u$, the standard deviation of $\eta_u - \eta_v$ is $O(r^{s(u)})$. This follows from property (3) of the $r$-separated tree (recall Definition 3.8).

Lemma 4.7. For every $v \in \mathcal{L}$ and $k = 0, 1, \ldots, h_v - 2$, if $\mathcal{A}_v(0, k)$ and $\mathcal{B}_v(0, k-1)$ hold, then

  $w_v(k) \le 50\, r^{s(f_v(k))}$.   (94)

Furthermore, if $\mathcal{A}_v(0, h_v - 1)$ and $\mathcal{B}_v(0, h_v - 2)$ hold, then

  $w_v(h_v - 1) \le 50\, r^{s(f_v(h_v - 1))}\, m_v^{-3/4}$.   (95)

Proof. For $k = 0$, we have $\rho_v(0) = \tilde\zeta_{v,0} + \tilde\xi_{v,0} - a_v(0)$. By (67), we have

  $a_v(0) = \varepsilon S\, \chi_v(0)/\sigma_v \le \theta\varepsilon \chi_v(0) = \theta\varepsilon\, r^{s(f_v(0))} \sqrt{\log \Delta(f_v(0))}$.   (96)

Furthermore, from Lemma 4.5, we know that for all $k \ge 0$,

  $\tfrac{1}{64}\, r^{s(f_v(k)) - 1} \le \|\tilde\xi_{v,k}\|_2 \le 8\, r^{s(f_v(k))}$.   (97)

Now, consider a value $w > 0$ such that

  $w \le a_v(0) + \varepsilon\theta \chi_v(0) \le 2\theta\varepsilon\, r^{s(f_v(0))} \sqrt{\log \Delta(f_v(0))}$.   (98)

Using (97) and recalling the Gaussian density, we have

  $\mathbb{P}\left( |\rho_v(0)| \le w \mid \mathcal{A}_v(0) \right) \ge \mathbb{P}\left( |\rho_v(0)| \le w \mid \tilde\zeta_{v,0} = -\varepsilon\theta \chi_v(0) \right)$
    $= \mathbb{P}\left( |\tilde\xi_{v,0} - a_v(0) - \varepsilon\theta \chi_v(0)| \le w \right)$
    $\ge \frac{1}{2} \cdot \frac{w}{\sqrt{2\pi} \cdot 8\, r^{s(f_v(0))}} \exp\left( -\tfrac{1}{2}(128\, \varepsilon r \theta)^2 \log \Delta(f_v(0)) \right)$
    $= \frac{w}{16\sqrt{2\pi}\, r^{s(f_v(0))}}\, \Delta(f_v(0))^{-\frac{1}{2}(128\, \varepsilon r \theta)^2}$.   (99)

Recalling assumption (71), we have $\sqrt{\log \Delta(f_v(0))} \ge C r \ge 16\sqrt{2\pi}\, 2^{10}\, r$, by choosing $C$ large enough. In particular,

  $\varepsilon\theta \chi_v(0) \ge \left( 16\sqrt{2\pi}\, 2^{10}\, \varepsilon\theta r \right) r^{s(f_v(0))} = 16\sqrt{2\pi}\, r^{s(f_v(0))}$,

recalling (74). Thus setting $w = 16\sqrt{2\pi}\, r^{s(f_v(0))}$ satisfies (98), and applying (99) we have

  $\mathbb{P}\left( |\rho_v(0)| \le 16\sqrt{2\pi}\, r^{s(f_v(0))} \mid \mathcal{A}_v(0) \right) \ge \Delta(f_v(0))^{-\frac{1}{2}(128\, \varepsilon r\theta)^2} \ge \deg_\downarrow(f_v(0))^{-1/8}$,

where we have used $\frac{1}{2}(128\, \varepsilon r\theta)^2 = \frac{1}{128}$ and $\Delta(f_v(0)) \ge 16$ from (71). Therefore,

  $w_v(0) \le 16\sqrt{2\pi}\, r^{s(f_v(0))} \le 50\, r^{s(f_v(0))}$,

recalling the definition of $w_v(0)$ from (89).

Now suppose that (94) holds for all $k \le \ell < h_v - 2$, and consider the case $k = \ell + 1$.
If the events $\{\mathcal{B}_v(j) : 0 \le j \le \ell\}$ hold, then

  $|\rho_v(j)| \le w_v(j) \le 50\, r^{s(f_v(j))}$,

where the first inequality is from the definition of $\mathcal{B}_v(j)$, and the second is from the induction hypothesis. Using (70), it follows that

  $|b_v(k)| \le \sum_{i=0}^{k-1} \frac{\chi_v(k)}{\sum_{\ell=i+1}^{h_v - 1} \chi_v(\ell)}\, |\rho_v(i)| \le \frac{2}{C}\, \chi_v(k)$.   (100)

Recall that $\rho_v(k) = \tilde\zeta_{v,k} + \tilde\xi_{v,k} - a_v(k) + b_v(k)$. Similarly to the $k = 0$ case, we obtain that for $0 < w \le 2\theta\varepsilon\, r^{s(f_v(k))} \sqrt{\log \Delta(f_v(k))}$, we have

  $\mathbb{P}\left( |\rho_v(k)| \le w \mid \mathcal{A}_v(i), \mathcal{B}_v(i) \text{ for all } 0 \le i < k,\ \mathcal{A}_v(k) \right) \ge \mathbb{P}\left( \left| \tilde\xi_{v,k} - a_v(k) - \varepsilon\theta\chi_v(k) - \tfrac{2}{C}\chi_v(k) \right| \le w \right) \ge \frac{1}{2} \cdot \frac{w}{\sqrt{2\pi} \cdot 8\, r^{s(f_v(k))}}\, \Delta(f_v(k))^{-\frac{1}{2}(128\, r)^2 (\varepsilon\theta + C^{-1})^2}$.

Now, by choosing $C \ge 1024\, r$ and recalling (74), we see that $\frac{1}{2}(128\, r)^2(\varepsilon\theta + C^{-1})^2 \le \frac{1}{32}$. Since $\Delta(f_v(k)) \ge 16$ (again, by (71)), we conclude that

  $\mathbb{P}\left( |\rho_v(k)| \le 16\sqrt{2\pi}\, r^{s(f_v(k))} \mid \mathcal{A}_v(i), \mathcal{B}_v(i) \text{ for all } 0 \le i < k,\ \mathcal{A}_v(k) \right) \ge \deg_\downarrow(f_v(k))^{-1/8}$.

This implies $w_v(k) \le 16\sqrt{2\pi}\, r^{s(f_v(k))} \le 50\, r^{s(f_v(k))}$, where we recall once again the definition of $w_v(k)$ from (89). An almost identical argument yields

  $w_v(h_v - 1) \le 50\, r^{s(f_v(h_v - 1))}\, m_v^{-3/4}$.

The next lemma states that the events $\mathcal{E}_v$ defined in (93) satisfy requirement (1) of Lemma 4.3.

Lemma 4.8. If $\mathcal{E}_v$ occurs, then $|\eta_v - \varepsilon S| \le w_v(h_v - 1) \le 50\, r^{s(f_v(h_v - 1))}\, m_v^{-3/4}$.

Proof. This follows directly from Lemma 4.6, the identity (82), and the definition of $\mathcal{B}_v(k)$.

The first moment. We now give a lower bound on the probability of the event $\mathcal{E}_v$.

Lemma 4.9. For every $v \in \mathcal{L}$, $\mathbb{P}(\mathcal{E}_v) \ge \frac{1}{2} m_v^{-7/8}$.

Proof.
We have

  $\mathbb{P}(\mathcal{E}_v) = \prod_{k=0}^{h_v - 1} \mathbb{P}\left( \mathcal{A}_v(k) \mid \mathcal{A}_v(0, k-1), \mathcal{B}_v(0, k-1) \right) \mathbb{P}\left( \mathcal{B}_v(k) \mid \mathcal{A}_v(0, k), \mathcal{B}_v(0, k-1) \right)$
    $= m_v^{-3/4} \prod_{k=0}^{h_v - 1} \deg_\downarrow(f_v(k))^{-1/8} \prod_{k=0}^{h_v - 1} \mathbb{P}\left( \mathcal{A}_v(k) \mid \mathcal{A}_v(0, k-1), \mathcal{B}_v(0, k-1) \right)$
    $= m_v^{-7/8} \prod_{k=0}^{h_v - 1} \mathbb{P}(\mathcal{A}_v(k))$,   (101)

where the second line follows from (92), and the third line from the fact that $\mathcal{A}_v(k)$ is independent of $\{\mathcal{A}_v(i), \mathcal{B}_v(i) : 0 \le i < k\}$. Using (84), we have

  $\mathbb{P}(\mathcal{A}_v(k)) \ge 1 - \frac{2}{\sqrt{2\pi}} \int_{\varepsilon\theta \chi_v(k)}^{\infty} \exp\left( -\frac{x^2}{128\, r^{2 s(f_v(k))}} \right) dx \ge 1 - 2\, \Delta(f_v(k))^{-\frac{1}{128}\varepsilon^2\theta^2} \ge 1 - 2 \exp\left( -\tfrac{1}{128}\, 2^{-20}\, C^2\, 4^k \right)$,

where we have used (71), the definition (74) of $\varepsilon$, and $\chi_v(k) = r^{s(f_v(k))}\sqrt{\log \Delta(f_v(k))}$. Clearly, by choosing $C$ a large enough constant, we have

  $\prod_{k=0}^{h_v - 1} \mathbb{P}(\mathcal{A}_v(k)) \ge \frac{1}{2}$,

completing the proof.

The second moment. Finally, we bound the probability of $\mathcal{E}_u \cap \mathcal{E}_v$ for $u \ne v$.

Lemma 4.10. For every $u, v \in \mathcal{L}$,

  $\mathbb{P}(\mathcal{E}_u \cap \mathcal{E}_v) \le m_{uv}^{1/8} (m_u m_v)^{-7/8}$.

Proof. Assume, without loss of generality, that $u \prec v$. It is clear from (101) that $\mathbb{P}(\mathcal{E}_u) \le m_u^{-7/8}$. Also, we have

  $\mathbb{P}(\mathcal{E}_v \mid \mathcal{E}_u) \le \mathbb{P}\left( \mathcal{A}_v(0, h_v - 1), \mathcal{B}_v(0, h_v - 1) \mid \mathcal{E}_u \right) \le \prod_{k=h_{uv}}^{h_v - 1} \mathbb{P}\left( \mathcal{B}_v(k) \mid \mathcal{E}_u, \mathcal{A}_v(0, k), \mathcal{B}_v(0, k-1) \right)$.

Now recall that $\mathcal{E}_u \in \sigma(F_{f_v(h_{uv}+1)}^-) \subseteq \sigma(F_{f_v(k+1)}^-)$ for all $k \ge h_{uv}$. By (92), we obtain

  $\prod_{k=h_{uv}}^{h_v - 1} \mathbb{P}\left( \mathcal{B}_v(k) \mid \mathcal{E}_u, \mathcal{A}_v(0, k), \mathcal{B}_v(0, k-1) \right) = m_v^{-3/4} \prod_{k=h_{uv}}^{h_v - 1} \deg_\downarrow(f_v(k))^{-1/8} = m_{uv}^{1/8}\, m_v^{-7/8}$.

Altogether, we conclude that

  $\mathbb{P}(\mathcal{E}_u \cap \mathcal{E}_v) = \mathbb{P}(\mathcal{E}_u)\, \mathbb{P}(\mathcal{E}_v \mid \mathcal{E}_u) \le m_{uv}^{1/8} (m_u m_v)^{-7/8}$,

as required.

The main coupling lemma, Lemma 4.3, is an immediate corollary of Lemmas 4.8, 4.9, and 4.10.

4.3 Tree-like percolation

Lemma 4.11 below yields (76).
Its proof is a variant of the well-known second moment method for percolation on trees (see [38]). First, we define a measure $\nu$ on $\mathcal{L}$ via $\nu(u) = m_u^{-1}$. Observe that $\nu$ is a probability measure on $\mathcal{L}$, i.e.,

  $\sum_{u \in \mathcal{L}} \nu(u) = 1$.   (102)

To see this, construct a unit flow from the root to the leaves, where each non-leaf node splits its incoming flow equally among its children. Clearly the amount that reaches a leaf $u$ is precisely $\nu(u)$.

Lemma 4.11. Suppose that to each $v \in \mathcal{L}$, we associate an event $\mathcal{E}_v$ such that the following bounds hold.

1. $\mathbb{P}(\mathcal{E}_v) \ge \frac{1}{2} m_v^{-7/8}$ for all $v \in \mathcal{L}$.
2. $\mathbb{P}(\mathcal{E}_u \cap \mathcal{E}_v) \le m_{uv}^{1/8} (m_u m_v)^{-7/8}$ for all $u, v \in \mathcal{L}$.

Define $Z = \sum_{u \in \mathcal{L}} m_u^{-1/8} \mathbf{1}_{\mathcal{E}_u}$. Then $\mathbb{P}(Z > 0) \ge \frac{1}{8}$.

Proof. By assumption (1),

  $\mathbb{E} Z \ge \sum_{u \in \mathcal{L}} \frac{1}{2} m_u^{-1/8} m_u^{-7/8} = \frac{1}{2} \sum_{u \in \mathcal{L}} m_u^{-1} = \frac{1}{2}$,

where the last equality follows from (102). By assumption (2), we have

  $\mathbb{E} Z^2 = \sum_{u,v \in \mathcal{L}} (m_u m_v)^{-1/8}\, \mathbb{P}(\mathcal{E}_u \cap \mathcal{E}_v) \le \sum_{u,v \in \mathcal{L}} m_{uv}^{1/8} (m_u m_v)^{-1}$.

In order to estimate the second moment, we first fix $u$ and sum over $v$. To be more precise, let

  $\mathcal{L}_h(u) = \{ v \in \mathcal{L} : h_{uv} = h \}$,

where we recall that $h_u$ is the height of a node $u$, and $h_{uv}$ is the height of the least common ancestor of $u$ and $v$. We can then partition $\mathcal{L} = \bigcup_{h \ge 0} \mathcal{L}_h(u)$ and obtain, for every $u \in \mathcal{L}$,

  $\sum_{v \in \mathcal{L}} m_{uv}^{1/8} m_v^{-1} = \sum_{h=0}^{h_u} \sum_{v \in \mathcal{L}_h(u)} m_{uv}^{1/8} m_v^{-1} = \sum_{h=0}^{h_u} \left( \prod_{i=0}^{h-1} \deg_\downarrow(f_u(i))^{1/8} \right) \sum_{v \in \mathcal{L}_h(u)} m_v^{-1} = \sum_{h=0}^{h_u} \left( \prod_{i=0}^{h-1} \deg_\downarrow(f_u(i))^{1/8} \right) \nu(\mathcal{L}_h(u))$.

Recalling the flow representation of the measure $\nu$, we see that

  $\nu(\mathcal{L}_h(u)) = \frac{\deg_\downarrow(f_u(h)) - 1}{\deg_\downarrow(f_u(h))} \prod_{i=0}^{h-1} \deg_\downarrow(f_u(i))^{-1}$.
Therefore,

  $\sum_{v \in \mathcal{L}} m_{uv}^{1/8} m_v^{-1} = \sum_{h=0}^{h_u} \frac{\deg_\downarrow(f_u(h)) - 1}{\deg_\downarrow(f_u(h))} \prod_{i=0}^{h-1} \deg_\downarrow(f_u(i))^{-7/8} \le \sum_{h=0}^{h_u} \prod_{i=0}^{h-1} \deg_\downarrow(f_u(i))^{-7/8} \le 2$,

where the last transition follows from (71), for $C$ chosen sufficiently large. Applying the second moment method, we deduce that

  $\mathbb{P}(Z > 0) \ge \frac{(\mathbb{E} Z)^2}{\mathbb{E} Z^2} \ge \frac{1}{8}$,

completing the proof.

4.4 The local times

We now prove Lemma 4.4, in order to complete the analysis of the left-hand side of (63).

Lemma 4.12. Consider the local times $L^v_{\tau(t)}$ as defined in Theorem 1.14. For $v \in \mathcal{L}$, define

  $\tilde{\mathcal{E}}_v = \left\{ 0 < L^v_{\tau(t)} \le 50^2 \cdot r^{2 s(f_v(h_v - 1))}\, m_v^{-3/2} \right\}$.

Then, for any $t > 0$,

  $\mathbb{P}\left( \bigcup_{v \in \mathcal{L}} \tilde{\mathcal{E}}_v \right) \le \frac{1}{16}$.

Proof. Note that the random walk is at vertex $v_0$ at time $\tau(t)$. Hence, given that $L^v_{\tau(t)} > 0$, the random walk contains at least one excursion which starts at $v$ and ends at $v_0$. Therefore, given that $L^v_{\tau(t)} > 0$, we see that $c_v L^v_{\tau(t)}$ stochastically dominates the random variable

  $L = \int_0^{T_{v_0}} \mathbf{1}_{\{X_t = v\}}\, dt$,

where $X_t$ is a random walk on the network started at $v$, and $T_{v_0}$ is the hitting time of $v_0$.

By definition, every time the random walk hits $v$, it takes an exponential time for the walk to leave. Also, the probability that the random walk hits $v_0$ before returning to $v$ can be related to the effective resistance (see, for example, [39]). Formally, when the random walk $W_t$ is at vertex $v$, it waits until a Poisson clock $\sigma$ with rate 1 rings, and then moves to a neighbor (possibly $v$ itself) selected proportionally to the edge conductances. Define $T_v^+ = \min\{t \ge \sigma : X_t = v\}$. Then we have the continuous-time version of (33),

  $\mathbb{P}_v(T_v^+ > T_{v_0}) = \frac{1}{c_v R_{\mathrm{eff}}(v, v_0)}$.

By the strong Markov property, $L$ follows the law of the sum of a geometric number of i.i.d. exponential variables.
Thus $L$ follows the law of an exponential variable with $\mathbb{E} L = c_v R_{\mathrm{eff}}(v, v_0)$. Recalling property (72) of our separated tree $T$, we see that

  $R_{\mathrm{eff}}(v, v_0) = \mathbb{E}(\eta_v - \eta_{v_0})^2 \ge 2^{-10}\, r^{2 s(f_v(h_v - 1)) - 2}$.

Thus,

  $\mathbb{P}\left( 0 < L^v_{\tau(t)} \le 50^2 \cdot r^{2 s(f_v(h_v-1))}\, m_v^{-3/2} \right) \le \mathbb{P}\left( L \le c_v \cdot 50^2 \cdot r^{2 s(f_v(h_v-1))}\, m_v^{-3/2} \right) \le \frac{50^2 \cdot r^{2 s(f_v(h_v-1))}\, m_v^{-3/2}}{R_{\mathrm{eff}}(v, v_0)} \le 2^{11} \cdot 50^2 \cdot r^2\, m_v^{-3/2} \le \frac{1}{16}\, m_v^{-1}$,

where the last transition uses (71) for $C$ chosen large enough, along with $m_v \ge \exp(C^2 r^2)$. Therefore, we conclude that

  $\mathbb{P}\left( \bigcup_{v \in \mathcal{L}} \tilde{\mathcal{E}}_v \right) \le \frac{1}{16} \sum_{v \in \mathcal{L}} m_v^{-1} = \frac{1}{16}$,

where we used, from (102), the fact that $\sum_{v \in \mathcal{L}} m_v^{-1} = 1$, completing the proof.

4.5 Additional applications

We now prove a generalization of Theorem 1.7. Suppose that $V = \{1, 2, \ldots, n\}$, and let $G(V)$ be a network with conductances $\{c_{ij}\}$. We define real, symmetric $n \times n$ matrices $D$ and $A$ by

  $D_{ij} = \begin{cases} c_i, & i = j, \\ 0, & \text{otherwise}, \end{cases}$  and  $A_{ij} = c_{ij}$.

We write

  $L_G = \frac{D - A}{\mathrm{tr}(D)}$,   (103)

and $L_G^+$ for the pseudoinverse of $L_G$.

Theorem 4.13. For any connected network $G(V)$,

  $t_{\mathrm{cov}}(G) \asymp \mathbb{E} \left\| \sqrt{L_G^+}\, g \right\|_\infty^2$,

where $g = (g_1, \ldots, g_n)$ is a standard $n$-dimensional Gaussian.

Proof. If $\kappa$ denotes the commute time in $G$, then the following formula is well-known (see, e.g., [32]):

  $\kappa(i,j) = \left\langle e_i - e_j,\, L_G^+(e_i - e_j) \right\rangle$,

where $\{e_1, \ldots, e_n\}$ are the standard basis vectors in $\mathbb{R}^n$. Using the fact that $L_G^+$ is self-adjoint and positive semi-definite, this yields

  $\kappa(i,j) = \left\| \sqrt{L_G^+}\, e_i - \sqrt{L_G^+}\, e_j \right\|^2$.

Let $g = (g_1, \ldots, g_n) \in \mathbb{R}^n$ be a standard $n$-dimensional Gaussian, and consider the Gaussian process $\{\eta_i : i = 1, \ldots, n\}$, where $\eta_i = \langle g, \sqrt{L_G^+}\, e_i \rangle$.
One verifies that for all $i, j \in V$,

  $\mathbb{E}\,|\eta_i - \eta_j|^2 = \left\| \sqrt{L_G^+}(e_i - e_j) \right\|^2 = \kappa(i,j)$,

thus by Theorem (MM),

  $\gamma_2(V, \sqrt{\kappa}) \asymp \mathbb{E} \max_{i \in V} \eta_i = \mathbb{E} \max_{i \in V} \langle g, \sqrt{L_G^+}\, e_i \rangle = \mathbb{E} \max_{i \in V} \langle \sqrt{L_G^+}\, g, e_i \rangle \asymp \mathbb{E} \left\| \sqrt{L_G^+}\, g \right\|_\infty$.   (104)

By Theorem 1.9, $[\gamma_2(V, \sqrt{\kappa})]^2 \asymp t_{\mathrm{cov}}(G)$. Finally, one can use Lemma 2.2 to conclude that

  $\left( \mathbb{E} \left\| \sqrt{L_G^+}\, g \right\|_\infty \right)^2 \asymp \mathbb{E} \left\| \sqrt{L_G^+}\, g \right\|_\infty^2$,

completing the proof.

Theorem 4.14. There is a randomized algorithm which, given any connected network $G(V)$ with $m = |\{(x,y) : c_{xy} \ne 0\}|$, runs in time $O(m (\log m)^{O(1)})$ and outputs a number $A(G)$ such that

  $t_{\mathrm{cov}}(G) \asymp \mathbb{E}[A(G)] \asymp \left( \mathbb{E}[A(G)^2] \right)^{1/2}$.

Proof. In [46, §4], it is shown how to compute a $k \times n$ matrix $Z$, in expected time $O(m (\log m)^{O(1)})$, with $k = O(\log n)$, and such that for every $i, j \in V$,

  $\kappa(i,j) \le \| Z(e_i - e_j) \|^2 \le 2\, \kappa(i,j)$.   (105)

We can associate the Gaussian process $\{\eta_i\}_{i \in V}$, where $\eta_i = \langle g, Z e_i \rangle$ and $g$ is a standard $k$-dimensional Gaussian. Letting $d(i,j) = \sqrt{\mathbb{E}\,|\eta_i - \eta_j|^2}$, we see from (105) that $\sqrt{\kappa} \le d \le \sqrt{2\kappa}$, and therefore $\gamma_2(V, \sqrt{\kappa}) \asymp \gamma_2(V, d)$. It follows (see (104)) that

  $\mathbb{E} \| Z g \|_\infty^2 \asymp \mathbb{E} \left\| \sqrt{L_G^+}\, g \right\|_\infty^2 \asymp t_{\mathrm{cov}}(G)$,

where the last equivalence is the content of Theorem 4.13. The output of our algorithm is thus $A(G) = \| Z g \|_\infty^2$, where $g$ is a standard $k$-dimensional Gaussian vector. The fact that $\mathbb{E}[A(G)] \asymp (\mathbb{E}[A(G)^2])^{1/2}$ follows from Lemma 2.2.

5 Open problems and further discussion

We now present two open questions that arise naturally from the present work. The first concerns obtaining a better deterministic approximation of the cover time.

Question 5.1. Is there, for every $\varepsilon > 0$, a deterministic, polynomial-time algorithm that approximates $t_{\mathrm{cov}}(G)$ up to a factor of $1 + \varepsilon$?

Note that the preceding question has been solved by Feige and Zeitouni [23] in the case of trees.
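To make the estimator of Theorems 4.13 and 4.14 concrete, here is a small Monte Carlo sketch (an illustration only, not the near-linear-time algorithm: it assumes NumPy and uses the dense pseudoinverse of $L_G$ in place of the sparse matrix $Z$ of [46], so it runs in time polynomial in $n$ rather than near-linear in $m$, and returns a quantity that matches $t_{\mathrm{cov}}(G)$ only up to universal constants):

```python
import numpy as np

def cover_time_estimate(conductances, trials=200, rng=None):
    """Monte Carlo estimate of E || sqrt(L_G^+) g ||_inf^2 (Theorem 4.13).

    `conductances` is a symmetric nonnegative (n x n) array with zero
    diagonal. A dense-matrix sketch: the algorithm of Theorem 4.14 would
    replace sqrt(L_G^+) by the k x n matrix Z of [46] with k = O(log n)."""
    rng = np.random.default_rng(rng)
    A = np.asarray(conductances, dtype=float)
    D = np.diag(A.sum(axis=1))
    L = (D - A) / np.trace(D)                 # L_G as in (103)
    # Symmetric square root of the pseudoinverse via eigendecomposition.
    w, U = np.linalg.eigh(np.linalg.pinv(L))
    sqrt_Lp = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T
    # Average the squared sup of the Gaussian process over many draws of g.
    g = rng.standard_normal((trials, A.shape[0]))
    return float(np.mean(np.max(np.abs(g @ sqrt_Lp), axis=1) ** 2))
```

For example, `cover_time_estimate(np.ones((5, 5)) - np.eye(5))` estimates the equivalent of $t_{\mathrm{cov}}$ for the complete graph $K_5$ up to the universal constants of Theorem 4.13.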
The second question involves concentration of $\tau_{\mathrm{cov}}$ around its expected value. Under the assumption that

  $\lim_{n \to \infty} \frac{t_{\mathrm{cov}}(G_n)}{t_{\mathrm{hit}}(G_n)} = \infty$,

where $t_{\mathrm{hit}}$ denotes the maximal hitting time, Aldous [5] proves that $\frac{\tau_{\mathrm{cov}}(G_n)}{t_{\mathrm{cov}}(G_n)}$ converges to 1 in probability. We ask whether it is possible to obtain sharper concentration.

Question 5.2. Is the standard deviation of $\tau_{\mathrm{cov}}$ bounded by the maximal hitting time $t_{\mathrm{hit}}$? Furthermore, does $\frac{\tau_{\mathrm{cov}} - t_{\mathrm{cov}}}{t_{\mathrm{hit}}}$ exhibit exponential decay with constant rate?

It is interesting to consider the extent to which Theorem 2.8 is sharp. Consider a family of graphs $\{G_n\}$. We point out that the asymptotic formula

  $t_{\mathrm{cov}}(G_n) \sim |E(G_n)| \cdot \left( \mathbb{E} \sup_{v \in V} \eta_v \right)^2$   (106)

holds both for the family of complete graphs and for the family of regular trees, where we write $a_n \sim b_n$ for $\lim a_n / b_n = 1$, and $E(G_n)$ denotes the set of edges of $G_n$. Here, $\{\eta_v\}$ is the GFF associated to $G_n$ with $\eta_{v_0} = 0$ for some fixed vertex $v_0$.

To see this, note that the GFF on the $n$-vertex complete graph satisfies $\mathrm{Var}\, \eta_v = \frac{2}{n}$ and $\mathbb{E}(\eta_v \eta_u) = \frac{1}{n}$ for $v_0 \notin \{u, v\}$. Therefore, we can write $\eta_v = \xi + \xi_v$ for every $v \ne v_0$, where $\xi$ and all the $\{\xi_v\}_{v \in V}$ are i.i.d. Gaussian variables with variance $\frac{1}{n}$. It is now clear that $\mathbb{E} \sup_v \eta_v \sim \sqrt{2 \log n / n}$. Combined with the facts that $t_{\mathrm{cov}}(G_n) \sim n \log n$ and $|E(G_n)| = \frac{n(n-1)}{2}$, this confirms (106) for complete graphs.

Fix $b \ge 2$ and consider the regular $b$-ary tree $T_m$ of height $m$, with $n = \frac{b^{m+1} - 1}{b - 1}$ vertices. It is shown in [4] that $t_{\mathrm{cov}}(T_m) \sim 2 m n \log n$. On the other hand, Biggins [8] proved that the corresponding GFF satisfies $\mathbb{E} \sup_v \eta_v \sim \sqrt{2 m \log n}$. Since the number of edges in $T_m$ is $n - 1$, we infer that (106) holds for regular trees. It is clearly very interesting to understand the generality under which (106) holds.
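The complete-graph computation above is easy to reproduce numerically (a Monte Carlo sketch assuming NumPy, using the representation $\eta_v = \xi + \xi_v$ with i.i.d. variance-$\frac{1}{n}$ variables; since the maximum of $n$ Gaussians approaches $\sigma\sqrt{2\log n}$ only slowly, we check agreement with $\sqrt{2 \log n / n}$ to within 25% at a moderate $n$):

```python
import numpy as np

# For the GFF on the complete graph K_n pinned at v0, the text shows
# eta_v = xi + xi_v with xi and the xi_v i.i.d. N(0, 1/n), so
# E sup_v eta_v ~ sqrt(2 log n / n) as n grows.
rng = np.random.default_rng(0)
n, trials = 4000, 400
xi = rng.normal(scale=1 / np.sqrt(n), size=(trials, 1))       # common part
xi_v = rng.normal(scale=1 / np.sqrt(n), size=(trials, n - 1)) # per-vertex part
sup = np.max(xi + xi_v, axis=1)   # sup over v != v0 (recall eta_{v0} = 0)
empirical = sup.mean()
predicted = np.sqrt(2 * np.log(n) / n)
assert abs(empirical / predicted - 1) < 0.25  # rough agreement at n = 4000
```

The residual gap at $n = 4000$ comes from the familiar second-order correction to the maximum of $n$ i.i.d. Gaussians, which vanishes only as $n \to \infty$.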
Acknowledgements

We are grateful to Martin Barlow and Asaf Nachmias for helpful discussions in the early stages of this work. We thank Jay Rosen and an anonymous referee for a very thorough reading of the manuscript, along with numerous insightful comments. We also thank Nike Sun, Russ Lyons, Saran Ahuja, and Yoshihiro Abe for useful comments.

References

[1] D. Aldous. Probability approximations via the Poisson clumping heuristic, volume 77 of Applied Mathematical Sciences. Springer-Verlag, New York, 1989.

[2] D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. In preparation, available at http://www.stat.berkeley.edu/aldous/RWG/book.html.

[3] D. J. Aldous. Markov chains with almost exponential hitting times. Stochastic Process. Appl., 13(3):305-310, 1982.

[4] D. J. Aldous. Random walk covering of some special trees. J. Math. Anal. Appl., 157(1):271-283, 1991.

[5] D. J. Aldous. Threshold limits for cover times. J. Theoret. Probab., 4(1):197-211, 1991.

[6] R. Aleliunas, R. M. Karp, R. J. Lipton, L. Lovász, and C. Rackoff. Random walks, universal traversal sequences, and the complexity of maze problems. In 20th Annual Symposium on Foundations of Computer Science (San Juan, Puerto Rico, 1979), pages 218-223. IEEE, New York, 1979.

[7] M. T. Barlow, J. Ding, A. Nachmias, and Y. Peres. The evolution of the cover time. Preprint, available at http://arxiv.org/abs/1001.0609.

[8] J. D. Biggins. Chernoff's theorem in the branching random walk. J. Appl. Probability, 14(3):630-636, 1977.

[9] A. Z. Broder and A. R. Karlin. Bounds on the cover time. J. Theoret. Probab., 2(1):101-120, 1989.

[10] G. A. Campbell. Cisoidal oscillations. Trans. Amer. Inst. Elec. Engrs., (30), 1911.

[11] A. K. Chandra, P. Raghavan, W. L. Ruzzo, R. Smolensky, and P. Tiwari.
The electrical resistance of a graph captures its commute and cover times. Comput. Complexity, 6(4):312-340, 1996/97.

[12] C. Cooper and A. Frieze. The cover time of the giant component of a random graph. Random Structures Algorithms, 32(4):401-439, 2008.

[13] D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. J. Symbolic Comput., 9(3):251-280, 1990.

[14] A. Dembo, Y. Peres, J. Rosen, and O. Zeitouni. Cover times for Brownian motion and random walks in two dimensions. Ann. of Math. (2), 160(2):433-464, 2004.

[15] P. G. Doyle and J. L. Snell. Random walks and electric networks, volume 22 of Carus Mathematical Monographs. Mathematical Association of America, Washington, DC, 1984.

[16] R. M. Dudley. The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. J. Functional Analysis, 1:290-330, 1967.

[17] E. B. Dynkin. Gaussian and non-Gaussian random fields associated with Markov processes. J. Funct. Anal., 55(3):344-376, 1984.

[18] E. B. Dynkin. Local times and quantum fields. In Seminar on stochastic processes, 1983 (Gainesville, Fla., 1983), volume 7 of Progr. Probab. Statist., pages 69-83. Birkhäuser Boston, Boston, MA, 1984.

[19] N. Eisenbaum. Une version sans conditionnement du théorème d'isomorphisme de Dynkin. In Séminaire de Probabilités, XXIX, volume 1613 of Lecture Notes in Math., pages 266-289. Springer, Berlin, 1995.

[20] N. Eisenbaum, H. Kaspi, M. B. Marcus, J. Rosen, and Z. Shi. A Ray-Knight theorem for symmetric Markov processes. Ann. Probab., 28(4):1781-1796, 2000.

[21] U. Feige. A tight lower bound on the cover time for random walks on graphs. Random Structures Algorithms, 6(4):433-438, 1995.

[22] U. Feige. A tight upper bound on the cover time for random walks on graphs. Random Structures Algorithms, 6(1):51-54, 1995.

[23] U.
F eige and O. Zeitouni. Deterministic approximati on for the co ver time of trees. Prep rin t, a v ailable at htt p://arxiv 1.library .cornell.edu/abs/0909.2005 ,. [24] X. F ernique. R´ egularit´ e d e pro cessus gaussiens. Invent. Math. , 12:30 4–320, 1971. [25] X. F ernique. Regularit ´ e des tra jectoires des fonctions al ´ eatoires gaussiennes. In ´ Ec ole d’ ´ Et´ e de Pr ob abilit´ es de Saint-Flour, IV-1974 , pages 1–96. Lecture Notes in Math., V ol. 480. Springer, Berlin, 1975. [26] R. M. F oster. T he a verag e imp edan ce of an electrical netw ork. In R eissner Anniversary Volume, Contributions to Applie d M e chanics , pages 333–340. J. W. Ed w ards, Ann Arb or, Mic h igan, 1948. [27] O. Gu ´ edon and A. Zv a vitc h. Sup rem um of a pro cess in terms of trees. In Ge ometric asp e cts of functional analysis , vo lume 1807 of L e ctur e Notes in Math. , pages 136–14 7. Spr inger, Berlin, 2003. [28] S. Janson. Gaussian H ilb ert sp ac es , vo lume 129 of Cambridge T r acts in Mathematics . Cam- bridge Univ ersit y Press, Cam bridge, 1997. [29] J. Jonasson a nd O. Sc hramm. On the co ve r time of planar graphs. Ele ctr on. Comm. Pr ob ab. , 5:85–9 0 (electronic), 2000. [30] J. K ah n , J. H. Kim, L. Lo v´ asz, and V. H. V u. The co v er time, th e blanke t time, and the Matthews boun d. In 41s t Annual Symp osium on Foundations of Computer Scienc e (Re dondo Be ach, CA, 2000 ) , pages 467–475 . IEEE Comput. So c. Press, Los Alamitos, CA, 2000 . [31] J. D. Kahn, N. Linial, N. Nisan, and M. E. Saks. On the co ver time of rand om wa lks on graphs. J. The or et. Pr ob ab. , 2(1):12 1–128, 1989. [32] D. J. Klein and M. Randi´ c. Resistance distance. J. Math. Chem. , 12(1-4):8 1–95, 1993 . Applied graph theory and discrete mathematics in c hemistry (S ask atoon, SK, 1991) . [33] F. B. Kn igh t. Random w alks and a so journ densit y pro cess of B rownian motion. T r ans. Amer. Math. So c. , 109:56 –86, 1963. [34] M. Ledoux. 
The c onc entr ation of me asur e phenomenon , vo lume 89 of Mathematic al Surveys and M ono gr aphs . American Mathematical So ciet y , Pro vidence, RI, 2001. 53 [35] M. Ledoux and M. T alagrand. Pr ob ability in Banach sp ac es , v olume 23 of Er gebni sse der Math- ematik und ihr er Gr enzgebiete (3) [R esults i n Mathematics and R elate d Ar e as (3)] . S pringer- V erlag, Berlin, 1991. Isop erimetry an d pro cesses. [36] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov chains and mixing times . American Math- ematical So ciet y , Providence, RI, 2009. With a chapter by James G. Propp and Da vid B. Wilson. [37] L. Lo v´ asz. Random walks on graphs: a su rv ey . In Combinatorics, Paul E r d˝ os is eighty, Vol. 2 (Keszthely, 1993) , v olume 2 of Bolyai So c. Math. Stud. , pages 353 –397. J´ anos Boly ai Math. So c., Budap est, 1996. [38] R. Lyo ns. Rand om w alks, capacit y an d p ercolation on trees. A nn. Pr ob ab. , 20(4):204 3–2088, 1992. [39] R. Ly ons, w ith Y. P eres. Pr ob ability on T r e es and Networks . In preparation. Curr en t v ers ion a v ailable at http://m ypage.iu.e du/~rdlyons/prbtree/book.pdf , 200 9. [40] M. B. Marcus and J. Rosen. Sample path prop erties of the lo cal times of strongly symmetric Mark o v pro cesses via Gaussian p ro cesses. Ann. P r ob ab. , 20(4):1603 –1684, 1992 . [41] M. B. Marcus a nd J. Rosen. Gaussian pro cesses a nd local times of symmetric L ´ evy processes. In L´ evy pr o c esses , pages 67–88. Birkh¨ auser Boston, Boston, MA, 2001. [42] M. B. Marcus and J. Rosen. M arkov pr o c esses, Gaussian pr o c esses, and lo c al times , v olume 100 of Cambridge Studies in A dvanc e d Mathematics . Cam bridge Universit y Press, Cambridge, 2006. [43] P . Matthews. Co v ering p roblems for Mark o v c hains. A nn. Pr ob ab. , 16(3):1 215–122 8, 1988. [44] D. Ra y . So journ times of diffusion pro cesses. Il linois J. Math. , 7:615–6 30, 1963. [45] D. Spielman. 
Algorithms, graph theory , and linear equations in Lapla cian matrices. T o app ear, Pr o c e e dings of the Internationa l Congr e es of Mathematicians, Hyd erabad, India, 2010 . [46] D. S pielman and N. Sriv asta v a. Gr aph sp arsification b y effectiv e resistances. Av ailable at http://a rxiv.org/ abs/0803.0929 , 20 08. [47] D. Spielman and S .-H. T eng. Nearly-linear time algorithms for precondition- ing and solving symmetric, diagonally dominan t linear systems. Av ailable at http://a rxiv.org/ abs/cs.NA/0607105 , 20 06. [48] M. T alagrand. Regularit y of Gaussian pro cesses. A cta Math. , 159(1-2):99 –149, 1987. [49] M. T alagrand. Em b edd ing subspaces of L p in l N p . I n Ge ometric asp e cts of functional analysis (Isr ael, 1992–1 994) , volume 77 of Op er. The ory A dv. Appl. , p ages 311–32 5. Birkh¨ auser, Basel, 1995. [50] M. T alagrand. Ma jorizing measures: the generic c haining. Ann. Pr ob ab. , 24(3 ):1049–1 103, 1996. 54 [51] M. T alagrand. Ma j orizing measures without measures. Ann. P r ob ab. , 29(1):41 1–417, 2001. [52] M. T alagrand. The ge neric chaining . Springer Monographs in Mathematics. S pringer-V erlag, Berlin, 2005. Upp er and lo w er b ounds of stochastic pro cesses. [53] P . T etali. Random wa lks and the effectiv e resistance of n etw orks. J. The or et. Pr ob ab. , 4 (1):101– 109, 1991. [54] P . Winkler and D. Z uc k erman. Multiple co ver time. R andom Structur es Algorith ms , 9(4 ):403– 411, 1996. [55] D. Z uc k erman. A tec hniqu e for lo wer b ound ing the cov er time. SIAM J. D iscr ete Math. , 5(1):8 1–87, 1992. 55