Fluctuations of Discrete-Time Random Walks

Denis Denisov and Vitali Wachtel

February 23, 2026

Abstract

These notes are devoted to fluctuations of one-dimensional random walks. We discuss various approaches to first-passage times and to the corresponding conditional distributions. After a discussion of some classical methods, such as the reflection principle for simple random walks and the Wiener-Hopf factorisation, we proceed to the universality approach, which has been developed in the recent past. Considering the one-dimensional case allows us to avoid some technical obstacles and to present the core of this method in a more transparent way. It turns out that the universality method is much more robust than the Wiener-Hopf factorisation and allows one to consider walks with non-identically distributed or even dependent increments.

1 Introduction

Let $S(n)$ be a real-valued random walk with independent increments $\{X_k\}$, so that
$$S(n) = X_1 + X_2 + \cdots + X_n, \qquad n \ge 1.$$
We are concerned with the so-called oscillating case, that is, when the walk $S(n)$ is such that, almost surely,
$$\limsup_{n\to\infty} S(n) = \infty \quad\text{and}\quad \liminf_{n\to\infty} S(n) = -\infty.$$
This implies that the sets $(-\infty, -x]$ and $(x, \infty)$ are recurrent for every $x \in \mathbb{R}$ and, in particular, that the stopping times
$$\tau_x := \inf\{n \ge 1 : x + S(n) \le 0\}, \qquad x \ge 0,$$
are almost surely finite. We will be interested in asymptotic properties of the distributions of the stopping times $\tau_x$ and of the corresponding conditional probabilities $P(x + S(n) \ge y, \tau_x > n)$ and $P(x + S(n) = y, \tau_x > n)$. These characteristics are among the most classical objects of study in probability theory.

The main purpose of these notes is to describe existing approaches (classical ones as well as rather new ones) to the problems mentioned above. We shall start with some special classes of random walks, where explicit calculations play an important role in the asymptotic analysis.
Then we describe the basic principles of the Wiener-Hopf factorisation, which is apparently the most powerful approach to fluctuations of one-dimensional random walks with i.i.d. increments and of one-dimensional Levy processes. The most crucial requirement in this approach is the classical duality lemma for $S(n)$. Thus, there is no real hope that the factorisation techniques can be adapted to cases where the increments are not identically distributed or even not independent.

The major part of this text will be devoted to a rather new approach, which we call the universality method. Assume that there exists a scaling sequence $\{c_n\}$ such that $S(nt)/c_n$ converges to a stable process. (In these notes we shall consider only the case when one has Brownian motion in the limit.) Then it is rather natural to expect that the behaviour of $\tau_x$ should be similar to the behaviour of the corresponding first exit time of the limiting process. This is rather obvious in the case when the starting point $x$ is of the order of the scaling $c_n$. But if $x$ grows more slowly, or is even fixed, then such a transfer is not clear. We provide a number of technical tools which allow one to transfer knowledge on the exit times of the limiting process into results on exit times and corresponding conditional distributions for the pre-limiting walk $S(n)$. Since these tools are based on functional limit theorems and martingale techniques only, the universality method is more robust than the Wiener-Hopf factorisation and provides results for random walks with non-identically distributed increments and for Markov chains. Furthermore, this method also works in the multi-dimensional setting, where factorisation techniques do not work. Actually, we initiated the development of the universality approach in our papers [6, 7, 8], where multi-dimensional random walks in cones have been considered.
These notes are based on several mini-courses that were given by the authors in Munich (2014), Novosibirsk (2016), Melbourne (2018) and Wroclaw (2023).

The structure of the notes is as follows. In Section 2 we consider simple symmetric random walks, where one can obtain, by using the classical André reflection principle, explicit expressions for $P(\tau_x = n)$ and for $P(x + S(n) = y, \tau_x > n)$. Using these explicit formulas we also derive several limit theorems, which should provide intuition for the more general results obtained in later sections.

Section 3 is devoted to left-continuous random walks. This class of walks is known for the fact that it allows a closed-form expression for $P(\tau_x > n)$ in terms of local probabilities of $S(n)$. This relation can be used to find tail asymptotics for $\tau_x$ and to prove conditional limit theorems.

Section 4 deals with the Wiener-Hopf factorisation, which is the most classical and the most powerful approach to fluctuations of one-dimensional random walks with independent, identically distributed increments. Here we introduce the notion of dual stopping times, find a dual for $\tau_0$ and show that $\tau_x$ with $x > 0$ does not possess a dual stopping time. Then we derive factorisation identities corresponding to pairs of dual stopping times and show their handiness by deriving asymptotics for $P(\tau_x > n)$ under the assumption $\lim_{n\to\infty} P(S(n) > 0) = \varrho \in (0, 1)$. In this section we follow primarily the approach of Greenwood and Shaked [15].

In Section 5 we consider random walks with independent but not necessarily identically distributed increments. The Wiener-Hopf factorisation does not apply in this setting. Instead, one can use the so-called universality method, which combines functional limit theorems and martingale techniques and allows one to study first-passage times of stochastic processes.
As a result, we obtain asymptotics for $\tau_x$ and prove conditional limit theorems for random walks satisfying the Lindeberg condition, i.e. the minimal condition under which the central limit theorem holds. The most important peculiarity of the universality approach is that it does not use generating functions and Fourier transforms. The presentation in this section follows rather closely the paper [4].

In Section 6 we specialize the results of Section 5 to the case of i.i.d. increments. Here we also give a probabilistic construction of a positive harmonic function for random walks killed at leaving the positive half-axis.

Section 7 is devoted to the proof of the local conditional limit theorem. Although one can prove such theorems via the Wiener-Hopf factorisation and Fourier transforms, we suggest here an alternative method, which combines the standard, unconditional, local limit theorem and the integral conditional limit theorem considered in Section 5. This method is more robust and works in many situations where the Wiener-Hopf factorisation is not applicable.

In Section 8 we show that the universality method also applies in the case when the increments of the walk are not even independent. There we consider a discrete-time Markov chain from the domain of attraction of Brownian motion, derive tail asymptotics for the first time at which the chain becomes non-positive, and prove the corresponding limit theorems.

2 Simple Random Walks

In this section we shall always assume that the walk $\{S(n)\}$ is simple, that is,
$$P(X_1 = 1) = p \quad\text{and}\quad P(X_1 = -1) = q = 1 - p$$
for some $p \in (0, 1)$. This implies that the trajectories of $S(n)$ are continuous on $\mathbb{Z}$. This continuity property allows one to calculate many characteristics of the walk $S(n)$ explicitly. We shall start with the stopping times $\tau_x$.
Combining the continuity with the spatial homogeneity, we conclude that, for every $x \ge 1$, $\tau_x$ is a sum of $x$ independent copies of the stopping time $\tau_1$. Let $f(s)$ be the generating function of $\tau_1$:
$$f(s) = E s^{\tau_1}, \qquad s \in [0, 1].$$
After the first step, the new position of the walk started at $1$ is either $0$ (with probability $q$) or $2$ (with probability $p$). In the first case $\tau_1 = 1$, while in the second case the process is restarted from $2$. Then the generating function $f(s)$ satisfies
$$f(s) = qs + ps\,(f(s))^2.$$
Solving this quadratic equation for $f(s)$, we obtain
$$E s^{\tau_1} = \frac{1 - \sqrt{1 - 4pq s^2}}{2ps}. \tag{1}$$
Thus, for every $x \ge 1$,
$$E s^{\tau_x} = \left( \frac{1 - \sqrt{1 - 4pq s^2}}{2ps} \right)^{\!x}. \tag{2}$$
In the case when the walk starts at zero we have $P(\tau_0 = 1) = q$ and
$$P(\tau_0 = 2k) = p\, P(\tau_1 = 2k - 1) \tag{3}$$
for all $k \ge 1$. Consequently,
$$E s^{\tau_0} = qs + \sum_{k=1}^{\infty} s^{2k} P(\tau_0 = 2k) = qs + ps \sum_{k=1}^{\infty} s^{2k-1} P(\tau_1 = 2k - 1) = qs + ps\, E s^{\tau_1} = qs + \frac{1 - \sqrt{1 - 4pq s^2}}{2}. \tag{4}$$
These exact expressions for generating functions allow one to obtain explicit expressions for $P(\tau_x = n)$ for every $n$.

In what follows we shall concentrate on oscillating walks. This means that we restrict ourselves to the symmetric case $p = q = 1/2$. For this choice of the parameter $p$ we have
$$E[s^{\tau_1}] = \frac{1}{s}\left(1 - \sqrt{1 - s^2}\right) \quad\text{and}\quad E[s^{\tau_x}] = \frac{1}{s^x}\left(1 - \sqrt{1 - s^2}\right)^x \quad\text{for } x \ge 1.$$
Using now the equality
$$(1 - z)^{-1/2} = \sum_{k=0}^{\infty} \binom{2k}{k} \frac{1}{2^{2k}}\, z^k, \tag{5}$$
we conclude that
$$1 - (1 - z)^{1/2} = \int_0^z \frac{1}{2}(1 - u)^{-1/2}\, du = \frac{1}{2} \sum_{k=0}^{\infty} \binom{2k}{k} \frac{1}{2^{2k}} \int_0^z u^k\, du = \frac{1}{2} \sum_{k=0}^{\infty} \binom{2k}{k} \frac{1}{2^{2k}} \frac{z^{k+1}}{k + 1}.$$
Consequently,
$$E s^{\tau_1} = \frac{1}{s}\left(1 - \sqrt{1 - s^2}\right) = \frac{1}{s} \sum_{k=0}^{\infty} \binom{2k}{k} \frac{1}{2^{2k+1}} \frac{s^{2k+2}}{k + 1}$$
and
$$P(\tau_1 = 2k + 1) = \frac{1}{k + 1} \binom{2k}{k} 2^{-2k-1}, \qquad k \ge 0.$$
Applying Stirling's formula, one obtains
$$P(\tau_1 = 2k + 1) \sim \frac{1}{2\sqrt{\pi}}\, k^{-3/2}, \qquad k \to \infty.$$
Moreover,
$$P(\tau_1 > n) = \sum_{k \ge n/2} P(\tau_1 = 2k + 1) \sim \sqrt{\frac{2}{\pi}}\, n^{-1/2}, \qquad n \to \infty.$$
(6)

More generally, for any fixed $x \ge 1$, one has
$$P(\tau_x > n) \sim x \sqrt{\frac{2}{\pi}}\, n^{-1/2}, \qquad n \to \infty. \tag{7}$$
Let us now turn to the remaining case $x = 0$. To obtain asymptotics for $P(\tau_0 > n)$ we can first use the equalities (3) and then apply (6). But one can also obtain a closed-form expression for the probability $P(\tau_0 > n)$. Letting $p = q = \frac{1}{2}$ in (4), one easily obtains
$$\sum_{n=0}^{\infty} P(\tau_0 > n)\, s^n = \frac{1 - E[s^{\tau_0}]}{1 - s} = \frac{1 - s + \sqrt{1 - s^2}}{2(1 - s)} = \frac{1}{2} + \frac{1}{2} \frac{\sqrt{1 - s^2}}{1 - s} = \frac{1}{2} + \frac{1}{2} \frac{1 + s}{\sqrt{1 - s^2}}.$$
Combining this representation with (5), we conclude that
$$P(\tau_0 > 2k) = P(\tau_0 > 2k + 1) = \binom{2k}{k} \frac{1}{2^{2k+1}}, \qquad k \ge 1.$$
This formula is a particular case of the following proposition, which follows from the classical reflection principle for Dyck paths.

Proposition 2.1. Let $\{S(n)\}$ be a symmetric simple random walk. Then, for all $x, y \ge 1$,
$$P(x + S(n) = y, \tau_x > n) = P(S(n) = y - x) - P(S(n) = y + x) \tag{8}$$
and
$$P(x + S(n) \ge y, \tau_x > n) = P(S(n) \in [y - x, y + x)). \tag{9}$$

Proof. The equality (8) is the probabilistic formulation of the reflection principle. For completeness we now give a probabilistic proof of this fact. By the Markov property,
$$P(x + S(n) = y, \tau_x > n) = P(x + S(n) = y) - P(x + S(n) = y, \tau_x \le n) = P(x + S(n) = y) - P(x + S(n) = y, \tau_x < n) = P(x + S(n) = y) - \sum_{k=1}^{n-1} P(\tau_x = k)\, P(S(n - k) = y)$$
and
$$0 = P(x + S(n) = -y, \tau_x > n) = P(x + S(n) = -y) - P(x + S(n) = -y, \tau_x \le n) = P(x + S(n) = -y) - P(x + S(n) = -y, \tau_x < n) = P(x + S(n) = -y) - \sum_{k=1}^{n-1} P(\tau_x = k)\, P(S(n - k) = -y)$$
for all $x, y \ge 1$. Taking the difference and using the symmetry of the distribution of the walk $\{S(n)\}$, we obtain
$$P(x + S(n) = y, \tau_x > n) = P(x + S(n) = y) - P(x + S(n) = -y).$$
Thus, (8) is proved.
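Both the explicit law of $\tau_1$ and the reflection identity (8) lend themselves to a direct machine check. The following sketch is our own illustration, not part of the original argument: it computes the law of the killed walk by exact dynamic programming over rationals and compares it with (8) and with the Catalan-type formula for $P(\tau_1 = 2k+1)$.

```python
from fractions import Fraction
from math import comb

def p_sn(n, z):
    # P(S(n) = z) for the symmetric simple walk
    if (n + z) % 2 or abs(z) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + z) // 2), 2 ** n)

def killed_walk(x, n_max):
    # alive[n][y] = P(x + S(n) = y, tau_x > n); pmf[n] = P(tau_x = n)
    alive, pmf = [{x: Fraction(1)}], {}
    for n in range(1, n_max + 1):
        nxt, hit = {}, Fraction(0)
        for pos, p in alive[-1].items():
            for y in (pos - 1, pos + 1):
                if y >= 1:
                    nxt[y] = nxt.get(y, Fraction(0)) + p / 2
                else:
                    hit += p / 2        # walk absorbed at <= 0: tau_x = n
        alive.append(nxt)
        pmf[n] = hit
    return alive, pmf

# reflection identity (8): P(x+S(n)=y, tau_x>n) = P(S(n)=y-x) - P(S(n)=y+x)
for x in (1, 2, 3):
    alive, _ = killed_walk(x, 10)
    for n in (4, 7, 10):
        for y in range(1, n + x + 1):
            assert alive[n].get(y, Fraction(0)) == p_sn(n, y - x) - p_sn(n, y + x)

# explicit law of tau_1: P(tau_1 = 2k+1) = binom(2k, k) 2^{-2k-1} / (k+1)
_, pmf1 = killed_walk(1, 21)
for k in range(11):
    assert pmf1[2 * k + 1] == Fraction(comb(2 * k, k), (k + 1) * 2 ** (2 * k + 1))
print("reflection identity (8) and the law of tau_1 verified exactly")
```

All comparisons here are exact (no floating point), so the asserts test the identities themselves, not a numerical approximation.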
The second claim follows from (8) by summation:
$$P(x + S(n) \ge y, \tau_x > n) = \sum_{z=y}^{\infty} P(x + S(n) = z, \tau_x > n) = \sum_{z=y}^{\infty} \left[ P(S(n) = z - x) - P(S(n) = z + x) \right] = P(S(n) \ge y - x) - P(S(n) \ge y + x) = P(S(n) \in [y - x, y + x)).$$

Corollary 2.2. One has
$$P(\tau_0 > n) = \frac{1}{2} P(S(n - 1) \in \{0, 1\})$$
and, for every $x \ge 1$,
$$P(\tau_x > n) = P(S(n) \in [1 - x, x + 1)).$$

Proof. The second claim of the corollary follows from (9) with $y = 1$. To get the first one it suffices to combine the second claim with (3).

Combining these exact expressions with the de Moivre-Laplace theorem, one can easily obtain various asymptotic relations for the simple symmetric random walk. First we give an alternative proof of the tail asymptotics for the stopping times $\tau_x$.

Corollary 2.3. As $n \to \infty$,
$$P(\tau_0 > n) \sim \frac{1}{\sqrt{2\pi}} \frac{1}{\sqrt{n}}$$
and, uniformly in $x = o(\sqrt{n})$,
$$P(\tau_x > n) \sim \sqrt{\frac{2}{\pi}} \frac{x}{\sqrt{n}}. \tag{10}$$
Also, there exists a constant $C$ such that
$$P(\tau_x > n) \le C \frac{x + 1}{\sqrt{n}} \tag{11}$$
for all $x \ge 0$ and $n \ge 1$.

Proof. Again, the first asymptotic relation is a combination of (3) and of the second relation with $x = 1$. According to Corollary 2.2, for every $x \ge 1$,
$$P(\tau_x > n) = \sum_{z=1-x}^{x} P(S(n) = z). \tag{12}$$
By the de Moivre-Laplace theorem,
$$P(S(n) = z) = 0 \quad\text{if } n + z \text{ is odd} \tag{13}$$
and
$$\sup_{z:\, n + z \text{ even}} \left| P(S(n) = z) - \frac{2}{\sqrt{2\pi n}}\, e^{-z^2/2n} \right| = o\!\left(\frac{1}{\sqrt{n}}\right). \tag{14}$$
Using these relations and noting that the summation interval $[1 - x, x]$ in (12) contains exactly $x$ points $z$ such that $n + z$ is even, we obtain (10).

To prove (11) we use a concentration inequality that ensures the existence of $C$ such that $P(S(n) = y) \le \frac{C}{2\sqrt{n}}$ for all $y$ and $n$. Then,
$$P(\tau_x > n) = P(S(n) \in [1 - x, x + 1)) = \sum_{y=1-x}^{x} P(S(n) = y) \le C \frac{1 + x}{\sqrt{n}}.$$
This completes the proof of the corollary.

Corollary 2.4. If $\frac{x}{\sqrt{n}} \to v > 0$ then
$$P(\tau_x > n) \to \sqrt{\frac{2}{\pi}} \int_0^v e^{-u^2/2}\, du.$$

Proof.
Combining Corollary 2.2 with the integral de Moivre-Laplace theorem, we obtain
$$P(\tau_x > n) = P(S(n) \le x) - P(S(n) \le -x) = \Phi\!\left(\frac{x}{\sqrt{n}}\right) - \Phi\!\left(-\frac{x}{\sqrt{n}}\right) + o(1) = 2 \int_0^{x/\sqrt{n}} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du + o(1).$$
This gives the desired relation.

Corollary 2.5. If $x = o(\sqrt{n})$ as $n \to \infty$ then
$$P\!\left( \frac{x + S(n)}{\sqrt{n}} > v \,\Big|\, \tau_x > n \right) \to e^{-v^2/2}, \qquad v > 0.$$

Proof. We first give a proof for the case $x \ge 1$. According to (9),
$$P(x + S(n) \ge \lfloor v\sqrt{n} \rfloor, \tau_x > n) = \sum_{z=\lfloor v\sqrt{n} \rfloor - x}^{\lfloor v\sqrt{n} \rfloor + x - 1} P(S(n) = z).$$
Noting that the summation interval here also contains exactly $x$ points $z$ such that $z + n$ is even, and applying (13) and (14), we conclude that
$$P(x + S(n) \ge \lfloor v\sqrt{n} \rfloor, \tau_x > n) \sim \sqrt{\frac{2}{\pi}} \frac{x}{\sqrt{n}}\, e^{-v^2/2}.$$
Combining this with Corollary 2.3, we complete the proof for $x \ge 1$. If $x = 0$ then we have
$$P(S(n) \ge \lfloor v\sqrt{n} \rfloor, \tau_0 > n) = \frac{1}{2} P(1 + S(n - 1) \ge \lfloor v\sqrt{n} \rfloor, \tau_1 > n - 1)$$
and
$$P(\tau_0 > n) = \frac{1}{2} P(\tau_1 > n - 1).$$
Combining these equalities with the convergence in the case $x = 1$, we finish the proof.

One of the important properties of (10) is the fact that the dependence on $x$ and $n$ factorizes. We shall see later that this happens for all oscillating random walks, and that the function which describes the dependence on the starting point plays an important role in studying many properties of walks conditioned to stay positive. The continuity of the trajectories of a simple random walk implies that the function
$$V(x) = x$$
satisfies the following harmonicity property:
$$E[V(x + S(1)); \tau_x > 1] = E[x + S(1); \tau_x > 1] = E[x + S(1)] = x, \qquad x \ge 2,$$
and
$$E[V(1 + S(1)); \tau_1 > 1] = E[1 + S(1); \tau_1 > 1] = 2\, P(1 + S(1) = 2) = 1.$$
More generally,
$$x = E[x + S(n); \tau_x > n], \qquad x, n \ge 1.$$
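Corollaries 2.2 and 2.4 can also be illustrated numerically. The sketch below is an illustration we add here (the value $n = 10000$ and the tolerance $0.02$ are arbitrary choices): it evaluates the exact finite-$n$ probability from Corollary 2.2 and compares it with the limit of Corollary 2.4, using that $\sqrt{2/\pi}\int_0^v e^{-u^2/2}\,du = \operatorname{erf}(v/\sqrt{2})$.

```python
from math import comb, erf, sqrt

def p_tau_gt(x, n):
    # Corollary 2.2: P(tau_x > n) = P(S(n) in [1-x, x+1)) for the symmetric simple walk
    total = 0.0
    for z in range(1 - x, x + 1):
        if (n + z) % 2 == 0 and abs(z) <= n:
            total += comb(n, (n + z) // 2) / 2 ** n
    return total

n = 10000
for v in (0.3, 0.5, 1.0):
    x = int(v * sqrt(n))
    # Corollary 2.4 limit: sqrt(2/pi) * int_0^v exp(-u^2/2) du = erf(v / sqrt(2))
    assert abs(p_tau_gt(x, n) - erf(v / sqrt(2))) < 0.02
print("finite-n probabilities match the Corollary 2.4 limit within 0.02")
```

The discrepancy at $n = 10000$ is already of order $n^{-1/2}$, in line with the rate suggested by (14).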
Using this function, one can define a new transition kernel on $\mathbb{Z}_+$ (the Doob $h$-transform of the original random walk):
$$\widehat{p}(x, x + 1) = \frac{V(x + 1)}{V(x)}\, P(x + S(1) = x + 1, \tau_x > 1) = \frac{x + 1}{x} \cdot \frac{1}{2},$$
and, similarly,
$$\widehat{p}(x, x - 1) = \frac{V(x - 1)}{V(x)}\, P(x + S(1) = x - 1, \tau_x > 1) = \frac{x - 1}{x} \cdot \frac{1}{2}.$$
This transition kernel produces a Markov chain $\{\widehat{S}(n)\}$ on the positive integers which is commonly referred to as the random walk conditioned to stay positive. This terminology is justified by the following observation. For every $x_0 \ge 0$ we have
$$P(x_0 + S(k+1) = x + 1 \mid x_0 + S(k) = x, \tau_{x_0} > n) = \frac{P(x_0 + S(k+1) = x + 1,\ x_0 + S(k) = x,\ \tau_{x_0} > n)}{P(x_0 + S(k) = x, \tau_{x_0} > n)} = \frac{P(x_0 + S(k) = x, \tau_{x_0} > k)\, \frac{1}{2}\, P(\tau_{x+1} > n - k - 1)}{P(x_0 + S(k) = x, \tau_{x_0} > k)\, P(\tau_x > n - k)} \longrightarrow \frac{1}{2} \frac{x + 1}{x} = \widehat{p}(x, x + 1)$$
as $n \to \infty$, where we used the asymptotics (10) in the last line. Similarly one shows that
$$P(x_0 + S(k+1) = x - 1 \mid x_0 + S(k) = x, \tau_{x_0} > n) \to \widehat{p}(x, x - 1) \quad\text{as } n \to \infty.$$
Using Proposition 2.1 one can obtain a limit theorem for the chain $\{\widehat{S}(n)\}$.

Proposition 2.6. For every fixed $x \ge 1$,
$$P_x\!\left( \frac{\widehat{S}(n)}{\sqrt{n}} \ge v \right) \to \sqrt{\frac{2}{\pi}} \int_v^{\infty} u^2 e^{-u^2/2}\, du, \qquad v > 0.$$

Proof. By the definition of the chain $\{\widehat{S}(n)\}$,
$$P_x(\widehat{S}(n) \ge y) = \sum_{z=y}^{\infty} P_x(\widehat{S}(n) = z) = \frac{1}{V(x)} \sum_{z=y}^{\infty} V(z)\, P(x + S(n) = z, \tau_x > n) = \frac{1}{x} \sum_{z=y}^{\infty} z\, P(x + S(n) = z, \tau_x > n) = \frac{y}{x} P(x + S(n) \ge y, \tau_x > n) + \frac{1}{x} \sum_{z=y+1}^{\infty} (z - y)\, P(x + S(n) = z, \tau_x > n).$$
Using the summation by parts formula and applying Proposition 2.1, we obtain
$$P_x(\widehat{S}(n) \ge y) = \frac{y}{x} P(x + S(n) \ge y, \tau_x > n) + \frac{1}{x} \sum_{u=y+1}^{\infty} P(x + S(n) \ge u, \tau_x > n) = \frac{y}{x} P(x + S(n) \ge y, \tau_x > n) + \frac{1}{x} \sum_{u=y+1}^{\infty} P(S(n) \in [u - x, u + x)) = \frac{y}{x} P(x + S(n) \ge y, \tau_x > n) + \frac{1}{x} \sum_{z=y-x+1}^{y+x} P(x + S(n) \ge z).$$
(15)

As we have seen in the proof of Corollary 2.5,
$$P(x + S(n) \ge y, \tau_x > n) \sim x \sqrt{\frac{2}{\pi n}}\, e^{-v^2/2}$$
provided that $y \sim v\sqrt{n}$. Therefore,
$$\frac{y}{x} P(x + S(n) \ge y, \tau_x > n) \sim \sqrt{\frac{2}{\pi}}\, v\, e^{-v^2/2}. \tag{16}$$
Furthermore, by the central limit theorem,
$$\sum_{z=y-x+1}^{y+x} P(x + S(n) \ge z) \sim 2x \int_v^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du. \tag{17}$$
Plugging (16) and (17) into (15), we conclude that
$$P_x(\widehat{S}(n) \ge y) \sim \sqrt{\frac{2}{\pi}} \left( v\, e^{-v^2/2} + \int_v^{\infty} e^{-u^2/2}\, du \right)$$
provided that $y \sim v\sqrt{n}$. Integrating by parts completes the proof.

3 Left-continuous random walks

There is a further class of lattice random walks where one can obtain some rather explicit expressions for the distribution of $\tau_x$. It turns out that it is sufficient to assume that the walk can only move downwards in a continuous manner. More precisely, it suffices to assume that
$$P(X_1 \in \{-1, 0, 1, \ldots\}) = 1.$$
If this condition holds then we shall say that the walk $\{S(n)\}$ is left-continuous. Similarly to the case of the simple random walk, we have $x + S(\tau_x) = 0$ for $x \ge 1$, while $S(\tau_0) \in \{-1, 0\}$ and $S(\tau_0) = -1$ is possible only on the event $\{\tau_0 = 1\}$.

Proposition 3.1. For all $x, n \ge 1$ one has
$$P(\tau_x = n) = \frac{x}{n}\, P(S(n) = -x). \tag{18}$$

Proof. We shall use induction over $n$. Assume first that $n = 1$. It is clear that
$$P(\tau_1 = 1) = P(X_1 \le -1) = P(X_1 = -1).$$
Furthermore, the left-continuity implies that
$$P(\tau_x = 1) = 0 = P(X_1 = -x) \quad\text{for all } x > 1.$$
Thus, (18) holds for $n = 1$ and all $x \ge 1$. To perform the induction step, we assume that (18) is valid for some $n \ge 1$. Combining this induction assumption with the Markov property, we obtain
$$P(\tau_x = n + 1) = \sum_{k=-1}^{\infty} P(X_1 = k)\, P(\tau_{x+k} = n) = \sum_{k=-1}^{\infty} P(X_1 = k)\, \frac{x + k}{n}\, P(S(n) = -x - k) = \frac{x}{n}\, P(S(n + 1) = -x) + \frac{1}{n}\, E[X_1; S(n + 1) = -x].$$
Since all the $X_k$ are independent and identically distributed,
$$E[X_1; S(n + 1) = -x] = E[X_k; S(n + 1) = -x] \quad\text{for all } k \le n + 1.$$
This implies that
$$E[X_1; S(n + 1) = -x] = \frac{1}{n + 1}\, E[S(n + 1); S(n + 1) = -x] = \frac{-x}{n + 1}\, P(S(n + 1) = -x).$$
Consequently,
$$P(\tau_x = n + 1) = \frac{x}{n}\, P(S(n + 1) = -x) - \frac{x}{n(n + 1)}\, P(S(n + 1) = -x) = \frac{x}{n + 1}\, P(S(n + 1) = -x).$$
Thus, the proof is complete.

Remark 3.2. There exist several different proofs of this proposition. Here we followed an elementary approach due to [20].

Proposition 3.1 connects the mass functions of the random variables $\tau_x$ and $S(n)$ with each other. In particular, asymptotics for $P(\tau_x = n)$ can be obtained from the local central limit theorem for lattice random variables. To formulate this result we first recall the notion of aperiodicity of lattice distributions. Let $X$ be a random variable with values in $\mathbb{Z}$. We set
$$d := \gcd\{k - j : P(X = k)\, P(X = j) > 0\}.$$
This number is called the period of the distribution of $X$. If $d = 1$ then we shall say that the distribution of $X$ is aperiodic. If $d > 1$ then there exists an integer $a \in [0, d)$ such that the distribution of
$$Y := \frac{X - a}{d}$$
is aperiodic. Let $\{Y_k\}$ be a sequence of i.i.d. random variables with finite positive variance $\sigma_Y^2$ and with an aperiodic distribution. Then one has the following local version of the central limit theorem:
$$\sup_{y \in \mathbb{Z}} \left| \sqrt{n}\, P\!\left( \sum_{k=1}^n Y_k = y \right) - \frac{1}{\sqrt{2\pi \sigma_Y^2}} \exp\!\left( -\frac{(y - n E Y_1)^2}{2 n \sigma_Y^2} \right) \right| \to 0. \tag{19}$$
Let now $\{X_k\}$ be a sequence of i.i.d. random variables with finite positive variance $\sigma^2$ and with period $d > 1$. Then the random variables $Y_k := \frac{X_k - a}{d}$ have expectation $\frac{E X_1 - a}{d}$ and variance $\frac{\sigma^2}{d^2}$. Noting now that
$$P\!\left( \sum_{k=1}^n X_k = x \right) = P\!\left( \sum_{k=1}^n (a + d Y_k) = x \right) = P\!\left( \sum_{k=1}^n Y_k = \frac{x - an}{d} \right)$$
and using (19), we conclude that
$$\sup_{x \in D_n} \left| \sqrt{n}\, P\!\left( \sum_{k=1}^n X_k = x \right) - \frac{d}{\sqrt{2\pi \sigma^2}} \exp\!\left( -\frac{(x - n E X_1)^2}{2 n \sigma^2} \right) \right| \to 0 \tag{20}$$
and
$$P\!\left( \sum_{k=1}^n X_k = x \right) = 0 \quad\text{for all } x \notin D_n, \tag{21}$$
where
$$D_n := \left\{ x \in \mathbb{Z} : \frac{x - an}{d} \in \mathbb{Z} \right\}.$$
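Identity (18) (often called the hitting-time theorem, or Kemperman's formula) holds for any left-continuous walk, without moment assumptions, and can be verified exactly on a small example. The increment law below is chosen by us purely for illustration.

```python
from fractions import Fraction

# A left-continuous (skip-free downward) increment law; the weights are an
# arbitrary illustrative choice and are not taken from the notes.
STEP = {-1: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

def s_dist(n):
    # exact distribution of S(n) as a dict {value: probability}
    d = {0: Fraction(1)}
    for _ in range(n):
        nd = {}
        for z, p in d.items():
            for k, w in STEP.items():
                nd[z + k] = nd.get(z + k, Fraction(0)) + p * w
        d = nd
    return d

def tau_pmf(x, n_max):
    # P(tau_x = n): kill the walk started at x once it reaches <= 0
    alive, pmf = {x: Fraction(1)}, {}
    for n in range(1, n_max + 1):
        nxt, hit = {}, Fraction(0)
        for pos, p in alive.items():
            for k, w in STEP.items():
                y = pos + k
                if y <= 0:
                    hit += p * w
                else:
                    nxt[y] = nxt.get(y, Fraction(0)) + p * w
        pmf[n] = hit
        alive = nxt
    return pmf

# hitting-time identity (18): P(tau_x = n) = (x / n) P(S(n) = -x)
for x in (1, 2, 3):
    pmf = tau_pmf(x, 12)
    for n in range(1, 13):
        assert pmf[n] == Fraction(x, n) * s_dist(n).get(-x, Fraction(0))
print("identity (18) verified exactly for x = 1, 2, 3 and n <= 12")
```

Since the walk in this example has positive drift, the check also illustrates that (18) does not rely on the oscillating-case assumption.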
In the case when $E X_1 = 0$, $\sigma^2 := E X_1^2 \in (0, \infty)$ and the distribution of $X_1$ is aperiodic, (20) implies that
$$P(S(n) = -x) \sim \frac{1}{\sigma \sqrt{2\pi n}} \quad\text{as } n \to \infty$$
for every fixed $x$. Consequently, as $n \to \infty$,
$$P(\tau_x = n) \sim \frac{x}{\sigma \sqrt{2\pi}}\, n^{-3/2} \tag{22}$$
and, by summing up $P(\tau_x = n)$,
$$P(\tau_x > n) \sim \frac{x}{\sigma} \sqrt{\frac{2}{\pi}}\, n^{-1/2}. \tag{23}$$
If $X_1$ has period $d > 1$, $E X_1 = 0$ and $P(X_1 \ge -1) = 1$, then $a = d - 1$. Indeed, the assumptions $E X_1 = 0$ and $P(X_1 \ge -1) = 1$ imply that $P(X_1 = -1) > 0$. Consequently, $-1 = a + dm$ for some $m \in \mathbb{Z}$. Recalling that, by definition, $a \in [0, d)$, we conclude that $m = -1$ and $a = d - 1$. This implies that
$$D_n = \left\{ x \in \mathbb{Z} : \frac{x - (d-1)n}{d} \in \mathbb{Z} \right\} = \left\{ x \in \mathbb{Z} : \frac{x + n}{d} \in \mathbb{Z} \right\}.$$
Combining this with (20) and (21), we obtain, for every $x \ge 1$,
$$P(S(n) = -x) \sim \frac{d}{\sigma \sqrt{2\pi n}} \quad\text{as } n \to \infty,\ n \in E_x,$$
and $P(S(n) = -x) = 0$ for all $n \notin E_x$, where $E_x := \{x + dm,\ m \ge 0\}$. Combining this with Proposition 3.1, we finally conclude that
$$P(\tau_x = n) \sim \frac{x d}{\sigma \sqrt{2\pi}}\, n^{-3/2} \quad\text{as } n \to \infty,\ n \in E_x,$$
and that (23) remains valid also in the case of periodic distributions.

The left-continuity also allows one to show that Corollary 2.5 remains valid for left-continuous random walks.

Proposition 3.3. Assume that $E X_1 = 0$ and $\sigma^2 = E X_1^2 \in (0, \infty)$. Then, for every fixed $x \ge 1$,
$$P\!\left( \frac{x + S(n)}{\sigma \sqrt{n}} \ge v \,\Big|\, \tau_x > n \right) \to e^{-v^2/2}.$$

Proof. To simplify the proof a bit, we shall additionally assume that the distribution of $X_1$ is aperiodic. Let $y = y_n := \lfloor v\sigma\sqrt{n} \rfloor$. Repeating the arguments from the proof of the reflection principle in Proposition 2.1, we have
$$P(x + S(n) \ge y, \tau_x > n) = P(x + S(n) \ge y) - \sum_{k=1}^{n-1} P(\tau_x = k)\, P(S(n - k) \ge y)$$
and
$$0 = P(x + S(n) \le -y, \tau_x > n) = P(x + S(n) \le -y) - \sum_{k=1}^{n-1} P(\tau_x = k)\, P(S(n - k) \le -y).$$
Taking the difference, we obtain
$$P(x + S(n) \ge y, \tau_x > n) = P(x + S(n) \ge y) - P(x + S(n) \le -y) + \sum_{k=1}^{n-1} P(\tau_x = k) \left[ P(S(n - k) \le -y) - P(S(n - k) \ge y) \right] = P(S(n) \in [y - x, y)) + P(S(n) \in (-y - x, -y]) + P(S(n) \ge y) - P(S(n) \le -y) + \sum_{k=1}^{n-1} P(\tau_x = k) \left[ P(S(n - k) \le -y) - P(S(n - k) \ge y) \right].$$
By the local central limit theorem (20),
$$P(S(n) \in [y - x, y)) + P(S(n) \in (-y - x, -y]) \sim \frac{x\sqrt{2}}{\sigma\sqrt{\pi n}}\, e^{-v^2/2},$$
since $y \sim v\sigma\sqrt{n}$. This implies that
$$P(x + S(n) \ge y, \tau_x > n) - \frac{x\sqrt{2}}{\sigma\sqrt{\pi n}}\, e^{-v^2/2} = o(n^{-1/2}) + P(S(n) \ge y) - P(S(n) \le -y) + \sum_{k=1}^{n-1} P(\tau_x = k) \left[ P(S(n - k) \le -y) - P(S(n - k) \ge y) \right] = o(n^{-1/2}) + \left( P(S(n) \ge y) - P(S(n) \le -y) \right) P(\tau_x \ge n) + \sum_{k=1}^{n-1} P(\tau_x = k)\, \gamma_{n,k}(y),$$
where
$$\gamma_{n,k}(y) := P(S(n) \ge y) - P(S(n) \le -y) + P(S(n - k) \le -y) - P(S(n - k) \ge y).$$
Combining (23) with the central limit theorem, we infer that
$$\left( P(S(n) \ge y) - P(S(n) \le -y) \right) P(\tau_x \ge n) = o(n^{-1/2}).$$
Consequently,
$$\left| P(x + S(n) \ge y, \tau_x > n) - \frac{x\sqrt{2}}{\sigma\sqrt{\pi n}}\, e^{-v^2/2} \right| \le o(n^{-1/2}) + \sum_{k=1}^{n-1} P(\tau_x = k)\, \gamma_{n,k}, \tag{24}$$
where $\gamma_{n,k} := \sup_y |\gamma_{n,k}(y)|$. By the central limit theorem, $\gamma_{n,k} \to 0$ as both $n \to \infty$ and $n - k \to \infty$. Combining this with (22), we conclude that
$$\sum_{k \in (\varepsilon n, n-1]} P(\tau_x = k)\, \gamma_{n,k} = o(n^{-1/2}) \tag{25}$$
for every $\varepsilon \in (0, 1)$. To deal with smaller values of $k$ we notice that, by a telescoping argument,
$$\gamma_{n,k} \le 2 \sup_y |P(S(n) \ge y) - P(S(n - k) \ge y)| \le 2 \sum_{j=n-k}^{n-1} \sup_y |P(S(j + 1) \ge y) - P(S(j) \ge y)|.$$
For the summands on the right-hand side one has the following bound:
$$\Delta_j := \sup_y |P(S(j + 1) \ge y) - P(S(j) \ge y)| \le \frac{C}{j}. \tag{26}$$
We postpone the proof of this bound and notice that it implies that
$$\gamma_{n,k} \le C \frac{k}{n} \quad\text{for all } k \le n/2.$$
Therefore,
$$\sum_{k \le \varepsilon n} P(\tau_x = k)\, \gamma_{n,k} \le \frac{C}{n} \sum_{k \le \varepsilon n} k\, P(\tau_x = k) \le C' \varepsilon^{1/2} n^{-1/2}, \tag{27}$$
where in the last step we have used (22). Plugging (25) and (27) into (24) and letting $\varepsilon \to 0$, we conclude that
$$P(x + S(n) \ge y, \tau_x > n) \sim \frac{x\sqrt{2}}{\sigma\sqrt{\pi n}}\, e^{-v^2/2} \quad\text{since } y \sim v\sigma\sqrt{n}.$$
Thus, it remains to prove (26). Let $\varphi(t) = E e^{itX_1}$ be the characteristic function of $X_1$. By the inversion formula, for a fixed $A > y$,
$$P(S(j + 1) \in [y, A]) - P(S(j) \in [y, A]) = \sum_{k=y}^{A} \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-itk} \left( \varphi^{j+1}(t) - \varphi^{j}(t) \right) dt = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-ity}\, \frac{1 - e^{-it(A - y + 1)}}{1 - e^{-it}}\, \varphi^{j}(t)\,(\varphi(t) - 1)\, dt.$$
Noting that $\max_{t \in [-\pi, \pi]} \frac{|t|}{|1 - e^{it}|} = \frac{\pi}{2}$, we obtain
$$|P(S(j + 1) \in [y, A]) - P(S(j) \in [y, A])| \le \frac{1}{2} \int_{-\pi}^{\pi} |\varphi(t)|^{j}\, \frac{|1 - \varphi(t)|}{|t|}\, dt.$$
Letting now $A \to \infty$, we conclude that
$$\Delta_j \le \frac{1}{2} \int_{-\pi}^{\pi} |\varphi(t)|^{j}\, \frac{|1 - \varphi(t)|}{|t|}\, dt.$$
Since $E X_1 = 0$ and $\sigma^2 = E X_1^2 < \infty$, we have $|1 - \varphi(t)| \le \frac{\sigma^2 t^2}{2}$ and, consequently,
$$\Delta_j \le \frac{\sigma^2}{4} \int_{-\pi}^{\pi} |t|\, |\varphi(t)|^{j}\, dt.$$
Furthermore, the finiteness of the second moment implies the existence of $\varepsilon, \delta > 0$ such that
$$|\varphi(t)| \le e^{-\varepsilon t^2}, \qquad |t| \le \delta.$$
Therefore,
$$\int_{-\delta}^{\delta} |t|\, |\varphi(t)|^{j}\, dt \le 2 \int_0^{\delta} t\, e^{-\varepsilon j t^2}\, dt \le \frac{1}{\varepsilon j}.$$
The aperiodicity of $X_1$ implies that there exists $q < 1$ such that $|\varphi(t)| \le q$ for all $|t| \in (\delta, \pi]$. Therefore,
$$\int_{\{t : |t| \in (\delta, \pi]\}} |t|\, |\varphi(t)|^{j}\, dt \le C q^{j}.$$
This completes the proof of (26) and of the proposition.

Remark 3.4. One can see that the reflection-type arguments work well not only for the simple symmetric random walk but for non-symmetric random walks as well. The approach via the reflection principle works for general random walks too, and very similar arguments can even give Berry-Esseen type estimates for the rate of convergence; see [5] for further details.
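The periodicity bookkeeping used in this section ($a = d - 1$, so that the support of $S(n)$ lies in $D_n = \{x : (x + n)/d \in \mathbb{Z}\}$ for a left-continuous walk) can be verified on a concrete example. The increment law below, supported on $\{-1, 2\}$ so that the period is $d = 3$ and the mean is zero, is our own illustrative choice.

```python
from fractions import Fraction

# mean-zero left-continuous walk with period d = 3: X in {-1, 2}
STEP = {-1: Fraction(2, 3), 2: Fraction(1, 3)}

def s_dist(n):
    # exact distribution of S(n) as a dict {value: probability}
    d = {0: Fraction(1)}
    for _ in range(n):
        nd = {}
        for z, p in d.items():
            for k, w in STEP.items():
                nd[z + k] = nd.get(z + k, Fraction(0)) + p * w
        d = nd
    return d

# support check: P(S(n) = x) > 0 forces (x + n)/d to be an integer,
# in line with D_n = {x : (x + n)/d in Z} for a = d - 1
for n in range(1, 15):
    for x, p in s_dist(n).items():
        if p > 0:
            assert (x + n) % 3 == 0
print("support of S(n) lies in D_n for n <= 14")
```

For this law $P(X_1 = -1) > 0$ indeed gives $-1 = a - d$ with $a = 2$, matching the argument above.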
4 Dual stopping times and Wiener-Hopf factorisation

In this section we describe the principal elements of the Wiener-Hopf factorisation. This method seems to be the most powerful tool in the analysis of fluctuations of one-dimensional walks with independent, identically distributed increments. In contrast to the simple and left-continuous cases, for general random walks one has no closed-form expressions for the distributions of the stopping times $\tau_x$, but the Wiener-Hopf factorisation allows one to obtain explicit expressions for generating functions of $\tau_0$. These equalities can serve as a starting point in the tail analysis of $\tau_x$. The standard references for the Wiener-Hopf factorisation are the books of Spitzer [19] and of Borovkov [2]. We shall present below a slightly different approach to the factorisation, which has been suggested by Greenwood and Shaked [15].

Let us start with the one-dimensional case, that is, $\{X_n\}$ are independent, identically distributed real-valued random variables. Besides the stopping time $\tau_0$ we define
$$\tau^+ := \inf\{n \ge 1 : S(n) > 0\}$$
and
$$U_0^+ := 0, \qquad U_{k+1}^+ := \inf\{n > U_k^+ : S(n) > S(U_k^+)\} \quad\text{for every } k \ge 0.$$
The random variables $\{U_k^+\}_{k \ge 1}$ are called strict ascending ladder epochs. Clearly, $U_1^+ = \tau^+$. Furthermore, it is immediate from the Markov property that $\tau_k^+ := U_k^+ - U_{k-1}^+$ are independent copies of $\tau^+$. The independent random variables
$$\chi_k^+ := S(U_k^+) - S(U_{k-1}^+)$$
are called strict ascending ladder heights. Similarly we define descending ladder variables. First we define
$$U_0^- := 0, \qquad U_{k+1}^- := \inf\{n > U_k^- : S(n) \le S(U_k^-)\} \quad\text{for every } k \ge 0.$$
These random times are called weak descending ladder epochs. Furthermore, the variables $\tau_k^- := U_k^- - U_{k-1}^-$ are independent copies of $\tau_0$. Finally, we define the weak descending ladder heights by the equalities
$$\chi_k^- := S(U_k^-) - S(U_{k-1}^-), \qquad k \ge 1.$$
For the ladder epochs $U_k^+$ one has
$$\sum_{k=1}^{\infty} P\big(\tau_1^+ + \cdots + \tau_k^+ = n\big) = P\big(\tau_1^+ + \cdots + \tau_k^+ = n \text{ for some } k \ge 1\big) = P\big(U_k^+ = n \text{ for some } k \ge 1\big) = P\big(S(n) > S(n-1), S(n) > S(n-2), \ldots, S(n) > S(1), S(n) > 0\big) = P\big(X_n > 0,\ X_n + X_{n-1} > 0,\ \ldots,\ X_n + X_{n-1} + \cdots + X_1 > 0\big) = P(\tau_0 > n),$$
where in the last step we have used the classical duality lemma for random walks. Multiplying both sides by $s^n$ and summing over $n \ge 1$, we obtain
$$\sum_{n=1}^{\infty} s^n P(\tau_0 > n) = \sum_{n=1}^{\infty} s^n \sum_{k=1}^{\infty} P\big(\tau_1^+ + \cdots + \tau_k^+ = n\big) = \sum_{k=1}^{\infty} \left( E\big[s^{\tau^+}\big] \right)^k = \frac{1}{1 - E[s^{\tau^+}]} - 1.$$
Since $\sum_{n=0}^{\infty} s^n P(\tau_0 > n) = \frac{1 - E[s^{\tau_0}]}{1 - s}$ and $P(\tau_0 > 0) = 1$, this yields
$$\frac{1 - E[s^{\tau_0}]}{1 - s} = \frac{1}{1 - E[s^{\tau^+}]},$$
or, equivalently,
$$\big(1 - E[s^{\tau_0}]\big)\big(1 - E[s^{\tau^+}]\big) = 1 - s. \tag{28}$$
If the distribution of $X_1$ is symmetric and has no atoms, then $\tau_0$ and $\tau^+$ have the same distribution, and it follows from (28) that
$$E[s^{\tau_0}] = 1 - \sqrt{1 - s}.$$
Therefore,
$$P(\tau_0 > n) = \binom{2n}{n} 2^{-2n} \quad\text{for every } n \ge 1.$$
In general, without the symmetry assumption, (28) gives one equation for the two unknown generating functions $E[s^{\tau_0}]$ and $E[s^{\tau^+}]$. One cannot determine the distributions of $\tau_0$ and $\tau^+$, but (28) allows us to extract some useful asymptotic relations. For instance:

(a) If $E[\tau^+] < \infty$, then $P(\tau_0 = \infty) > 0$.

(b) $P(\tau^+ > n)$ is regularly varying with index $-\gamma$ for some $\gamma \in (0, 1)$ if and only if $P(\tau_0 > n)$ is regularly varying with index $-1 + \gamma$.

The last statement can be used, for example, to derive asymptotics for $\tau^+$ in the case of left-continuous walks, for which the distribution of $\tau_0$ has been studied in the previous section.

We now show that (28) is valid also for pairs of stopping times which are dual to each other. Let us rigorously define this notion of duality for random walks in $\mathbb{R}^d$. From now on we assume that $\{X_k\}$ are independent, identically distributed random vectors in $\mathbb{R}^d$.
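The key duality identity derived above, $P(\tau_0 > n) = P(U_k^+ = n \text{ for some } k \ge 1)$, can be confirmed by exhaustive enumeration of all $\pm 1$ paths of moderate length. The asymmetric value $p = 1/3$ below is an arbitrary choice of ours; any $p \in (0,1)$ works.

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 3)   # an arbitrary asymmetric step-up probability
q = 1 - p

def duality_check(n):
    # compare P(tau_0 > n) with P(n is a strict ascending ladder epoch)
    lhs = rhs = Fraction(0)
    for steps in product((1, -1), repeat=n):
        w = Fraction(1)
        for s in steps:
            w *= p if s == 1 else q
        path, pos = [], 0
        for s in steps:
            pos += s
            path.append(pos)
        if min(path) > 0:                               # tau_0 > n
            lhs += w
        if all(path[-1] > v for v in [0] + path[:-1]):  # strict ascending ladder epoch at n
            rhs += w
    return lhs, rhs

for n in range(1, 11):
    lhs, rhs = duality_check(n)
    assert lhs == rhs
print("P(tau_0 > n) = P(n is a strict ascending ladder epoch) for n = 1,...,10")
```

The enumeration makes the time-reversal argument concrete: both events have the same probability path by path after reversing the increments, which is exactly the content of the duality lemma.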
Furthermore, without restricting generality, we shall assume that all the random variables are defined on the following "standard" space of elementary events:
$$\Omega = \{(x_1, x_2, \ldots),\ x_k \in \mathbb{R}^d\}.$$
Let $\theta_k$ denote the $k$-fold shift:
$$\theta_k(x_1, x_2, \ldots) = (x_{k+1}, x_{k+2}, \ldots), \qquad (x_1, x_2, \ldots) \in \Omega.$$
For every $n \ge 1$ we also define the reversal
$$r_n(x_1, x_2, \ldots, x_n, x_{n+1}, \ldots) = (x_n, x_{n-1}, \ldots, x_1, x_{n+1}, \ldots), \qquad (x_1, x_2, \ldots) \in \Omega.$$
Let $\tau$ be a stopping time for the walk $\{S(n)\}$. We define its iterates recursively (superscripts are used here to avoid confusion with the first-passage times $\tau_x$):
$$\tau^{(0)}(\omega) \equiv 0, \qquad \tau^{(1)}(\omega) = \tau(\omega), \qquad \tau^{(k)}(\omega) := \tau^{(k-1)}(\omega) + \tau\big(\theta_{\tau^{(k-1)}(\omega)}(\omega)\big), \quad k \ge 2.$$
Let $M_\tau$ denote the random set $\{\tau^{(k)},\ k \ge 0\}$. We shall say that a stopping time $\eta$ is dual to the stopping time $\tau$ if
$$\{\omega : n \in M_\tau\} = \{\omega : \eta(r_n(\omega)) > n\} \quad\text{for every } n \ge 0. \tag{29}$$
This implies that if $\tau$ and $\tau'$ are dual to the same $\eta$, then $M_\tau(\omega) = M_{\tau'}(\omega)$ for every $\omega$ and, consequently, $\tau = \tau'$. Moreover, every $\tau$ is dual to at most one $\eta$. Theorem 1 in [15] proves that if $\tau$ is dual to $\eta$ then $\eta$ is dual to $\tau$. (This fact is not obvious, since the definition (29) is not symmetric.) So we can speak of pairs of dual stopping times.

It is easy to check that the stopping times $\tau_0$ and $\tau^+$ are dual to each other. By symmetry arguments, $\inf\{n \ge 1 : S(n) < 0\}$ and $\inf\{n \ge 1 : S(n) \ge 0\}$ are also dual to each other. We next show that $\tau_x$ has no dual provided that $x$ is large enough. To this end we prove the following property of the duality.

Lemma 4.1. A stopping time $\tau$ has a dual if and only if, for each $n$ and $\omega$, $n \in M_\tau(\omega)$ implies that $j \in M_\tau(\theta_{n-j}(\omega))$ for every $j \in \{1, 2, \ldots, n\}$.

Proof. Assume first that $\tau$ possesses a dual stopping time $\eta$. Then, by definition,
$$n \in M_\tau(\omega) \iff \eta(r_n(\omega)) > n \iff \eta(r_n(\omega)) > j \text{ for all } j \in \{1, 2, \ldots, n\}.$$
Since $\eta$ is a stopping time,
\[
\{\omega : \eta(r_n(\omega)) > j\} = \{\omega : \eta(r_j(\theta_{n-j}(\omega))) > j\}, \quad j \le n.
\]
Using the definition of the duality once again, we have
\[
\{\omega : \eta(r_n(\omega)) > j\} = \{\omega : j \in M_\tau(\theta_{n-j}(\omega))\}, \quad j \le n.
\]
Consequently, we have the desired property:
\[
\{\omega : n \in M_\tau(\omega)\} = \bigcap_{j=1}^n \{\omega : j \in M_\tau(\theta_{n-j}(\omega))\}. \tag{30}
\]
Let us now assume that (30) holds. Set
\[
A_n := \{\omega : n \in M_\tau(r_n(\omega))\}, \quad n \ge 0.
\]
Since $\tau$ is a stopping time, whether $\omega = (x_1, x_2, \ldots)$ belongs to $A_n$ depends on $x_1, x_2, \ldots, x_n$ only. If we now show that the sequence $A_n$ is monotone decreasing, then we may define a stopping time $\eta$ by the level sets
\[
\{\eta > n\} = A_n, \quad n \ge 0.
\]
Clearly, this stopping time will be dual to $\tau$.

To show that $A_n$ decreases, we notice that, according to (30),
\[
A_n = \bigcap_{j=1}^n \{\omega : j \in M_\tau(\theta_{n-j}(r_n(\omega)))\}.
\]
Noting that $\theta_{n-j}(r_n(\omega)) = (x_j, x_{j-1}, \ldots, x_1, x_{n+1}, \ldots)$ and recalling that to decide whether $\omega$ belongs to $\{\omega : j \in M_\tau(r_j(\omega))\}$ one needs to know $x_1, x_2, \ldots, x_j$ only, we conclude that
\[
A_n = \bigcap_{j=1}^n \{\omega : j \in M_\tau(r_j(\omega))\} = \bigcap_{j=1}^n A_j.
\]
So, the sequence $A_n$ is decreasing and the proof is complete.

Let $x > 0$ be such that
\[
\tau_x = \inf\{n \ge 1 : S(n) \le -x\} > \inf\{n \ge 1 : S(n) < 0\}
\]
with positive probability. This restriction implies that $P(X_1 > -x) > 0$. Thus, with positive probability it happens that $n \in M_{\tau_x}$ but $X_n > -x$ and, consequently, $\tau_x(\theta_{n-1}(\omega)) > 1$, that is, $1 \notin M_{\tau_x}(\theta_{n-1}(\omega))$. Applying Lemma 4.1, we infer that $\tau_x$ has no dual.

Lemma 4.2. Let $\tau$ and $\eta$ be dual stopping times for a $d$-dimensional random walk $\{S(n)\}$. Let $T$ be a geometrically distributed random variable independent of $\{S(n)\}$, $P(T \ge n) = u^n$ for some $u \in (0, 1]$. Define the measures $H_{\eta,u}$ and $G_{\tau,u}$ by the equalities
\[
H_{\eta,u}(A) = P(S(\eta) \in A,\ \eta \le T), \qquad
G_{\tau,u}(A) = \sum_{n=0}^\infty P(S(n) \in A,\ \tau > n,\ T \ge n)
\]
for every Borel subset $A$ of $\mathbb R^d$.
Then
\[
G_{\tau,u} = \sum_{k=0}^\infty H_{\eta,u}^{(*k)}
\quad\text{and}\quad
(\delta_0 - H_{\eta,u}) * G_{\tau,u} = \delta_0 \ \text{ for } u \in (0, 1),
\]
where $\delta_0$ is a unit mass at zero.

Proof. Let $A$ be a Borel subset of $\mathbb R^d$. It is immediate from the definition of the duality that
\[
P(S(n) \in A,\ \tau > n) = \sum_{k=1}^\infty P\left(S(\eta_1 + \eta_2 + \cdots + \eta_k) \in A,\ \eta_1 + \eta_2 + \cdots + \eta_k = n\right)
\]
for every $n \ge 1$. Multiplying this by $u^n$ and then summing over all $n$, we get
\[
G_{\tau,u}(A) = \sum_{n=0}^\infty u^n P(S(n) \in A,\ \tau > n)
= \delta_0(A) + \sum_{k=1}^\infty \sum_{n=1}^\infty u^n P\left(S(\eta_1 + \cdots + \eta_k) \in A,\ \eta_1 + \cdots + \eta_k = n\right).
\]
Applying now the Markov property, we get
\[
G_{\tau,u}(A) = \sum_{k=0}^\infty H_{\eta,u}^{(*k)}(A). \tag{31}
\]
For every $u < 1$, the measures $G_{\tau,u}$ and $H_{\eta,u}^{(*k)}$ are finite. This implies that (31) can be written as $G_{\tau,u} = \delta_0 + H_{\eta,u} * G_{\tau,u}$. Thus, the second claim of the lemma is an immediate consequence of the first one.

We next prove a Spitzer-Pollaczek factorisation related to a pair of dual stopping times.

Theorem 4.3. Let $\tau$ and $\eta$ be dual stopping times for the walk $\{S(n)\}$, and let $F$ denote the distribution of $X_1$. Then, for every $u \in (0, 1]$,
\[
\delta_0 - uF = (\delta_0 - H_{\eta,u}) * (\delta_0 - H_{\tau,u}).
\]

Proof. We first assume that $u < 1$. Since $T$ is geometrically distributed and independent of the walk $\{S(n)\}$,
\[
P(S(T) \in A,\ \tau \ge T)
= (1-u)\delta_0(A) + u P(S(T) \in A,\ \tau \ge T \mid T \ge 1)
\]
\[
= (1-u)\delta_0(A) + u P(S(T+1) \in A,\ \tau \ge T+1)
= (1-u)\delta_0(A) + u (K_{\tau,u} * F)(A), \tag{32}
\]
where $K_{\tau,u}(A) = P(S(T) \in A,\ \tau > T)$. Furthermore,
\[
P(S(T) \in A,\ \tau \ge T) = K_{\tau,u}(A) + P(S(T) \in A,\ \tau = T).
\]
Noting that
\[
P(S(T) \in A,\ \tau = T)
= \sum_{n=0}^\infty P(T = n) P(S(n) \in A,\ \tau = n)
= (1-u) \sum_{n=0}^\infty P(T \ge n) P(S(n) \in A,\ \tau = n)
\]
\[
= (1-u) P(S(\tau) \in A,\ \tau \le T)
= (1-u) H_{\tau,u}(A),
\]
we have
\[
P(S(T) \in A,\ \tau \ge T) = K_{\tau,u}(A) + (1-u) H_{\tau,u}(A).
\]
Combining this with (32), we conclude that
\[
\frac{1}{1-u} K_{\tau,u} * (\delta_0 - uF) = \delta_0 - H_{\tau,u}, \quad u < 1.
\]
It follows from the definition of $K_{\tau,u}$ that
\[
\frac{1}{1-u} K_{\tau,u}(A)
= \frac{1}{1-u} \sum_{n=0}^\infty P(T = n) P(S(n) \in A,\ \tau > n)
= \sum_{n=0}^\infty P(T \ge n) P(S(n) \in A,\ \tau > n)
\]
\[
= \sum_{n=0}^\infty P(S(n) \in A,\ \tau > n,\ T \ge n) = G_{\tau,u}(A).
\]
Hence $G_{\tau,u} * (\delta_0 - uF) = \delta_0 - H_{\tau,u}$; convolving both sides with $\delta_0 - H_{\eta,u}$ and applying Lemma 4.2, we arrive at the claimed identity. This completes the proof in the case $u < 1$. Letting $u \uparrow 1$, we obtain the desired equality also in the case $u = 1$.

One can rewrite the statement of Theorem 4.3 in terms of double transforms. Let $\varphi(\lambda)$ denote the characteristic function of the vector $X_1$, that is, $\varphi(\lambda) = E\, e^{i(\lambda, X_1)}$, $\lambda \in \mathbb R^d$. Then, for every pair $(\tau, \eta)$ of dual stopping times we have
\[
1 - u\varphi(\lambda) = \left(1 - E\left[u^\tau e^{i(\lambda, S(\tau))}\right]\right)\left(1 - E\left[u^\eta e^{i(\lambda, S(\eta))}\right]\right). \tag{33}
\]
Letting here $\lambda = 0$, we obtain
\[
1 - u = \left(1 - E[u^\tau]\right)\left(1 - E[u^\eta]\right).
\]
This equality generalises (28) to arbitrary pairs of dual stopping times.

Notice that the equation (33) contains two unknown transforms; formally, we cannot determine both transforms from that equation. To solve it, we now derive a further factorisation of the function $1 - u\varphi(\lambda)$. Let us partition $\mathbb R^d$ into half-spaces $W_1$ and $W_2$. Then, for every $u < 1$ one has
\[
-\log(1 - u\varphi(\lambda)) = \sum_{n=1}^\infty \frac{u^n}{n} \varphi^n(\lambda)
= \sum_{n=1}^\infty \frac{u^n}{n} E\, e^{i(\lambda, S(n))}
\]
\[
= \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_1\right] + \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_2\right].
\]
Consequently, for every $u < 1$,
\[
1 - u\varphi(\lambda) = \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_1\right]\right\}
\times \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_2\right]\right\}. \tag{34}
\]
It turns out that the components of the factorisations in (33) and in (34) are equal to each other. More precisely, one has the following result.

Theorem 4.4. Let $\tau$ and $\eta$ be dual stopping times for the walk $\{S(n)\}$.
If the values of $S(\tau)$ and $S(\eta)$ belong to two disjoint half-spaces $W_\tau$ and $W_\eta$ with $W_\tau \cup W_\eta = \mathbb R^d$, then
\[
1 - E\left[u^\tau e^{i(\lambda, S(\tau))}\right] = \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\tau\right]\right\} \tag{35}
\]
and
\[
1 - E\left[u^\eta e^{i(\lambda, S(\eta))}\right] = \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\eta\right]\right\}. \tag{36}
\]

Proof. Set $\psi_\tau(u, \lambda) = E\left[u^\tau e^{i(\lambda, S(\tau))}\right]$ and $\psi_\eta(u, \lambda) = E\left[u^\eta e^{i(\lambda, S(\eta))}\right]$. For every $u < 1$ we have
\[
-\log(1 - \psi_\tau(u, \lambda)) = \sum_{n=1}^\infty \frac{1}{n} \psi_\tau^n(u, \lambda)
\quad\text{and}\quad
-\log(1 - \psi_\eta(u, \lambda)) = \sum_{n=1}^\infty \frac{1}{n} \psi_\eta^n(u, \lambda).
\]
Combining this with the factorisations (33) and (34), we obtain
\[
\sum_{n=1}^\infty \frac{1}{n} \psi_\tau^n(u, \lambda) + \sum_{n=1}^\infty \frac{1}{n} \psi_\eta^n(u, \lambda)
= \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\tau\right] + \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\eta\right].
\]
The measures with Fourier transforms $\psi_\tau^n$ are concentrated on $W_\tau$, and those with transforms $\psi_\eta^n$ on $W_\eta$. Using the bijection between measures and their Fourier transforms, we conclude that
\[
\sum_{n=1}^\infty \frac{1}{n} \psi_\tau^n(u, \lambda) = \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\tau\right]
\]
and
\[
\sum_{n=1}^\infty \frac{1}{n} \psi_\eta^n(u, \lambda) = \sum_{n=1}^\infty \frac{u^n}{n} E\left[e^{i(\lambda, S(n))};\ S(n) \in W_\eta\right] \tag{37}
\]
for every $u < 1$. This gives the desired equalities.

Specialising Theorem 4.4 to the stopping times $\tau_+$ and $\tau_0$ for a one-dimensional walk $\{S(n)\}$ and letting $\lambda = 0$, we obtain
\[
1 - E[u^{\tau_+}] = \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} P(S(n) > 0)\right\}
\]
and
\[
1 - E[u^{\tau_0}] = \exp\left\{-\sum_{n=1}^\infty \frac{u^n}{n} P(S(n) \le 0)\right\}. \tag{38}
\]
These exact equalities are quite complicated, since we have no convenient exact expressions for $P(S(n) > 0)$ and $P(S(n) \le 0)$. But typically we know the asymptotic behaviour of these probabilities. The most classical case is when $E[X_1] = 0$ and $\mathrm{Var}[X_1] =: \sigma^2 \in (0, \infty)$; then, by the central limit theorem, $P(S(n) \le 0) \to 1/2$. For the stopping time $\tau_0$ we have the following result.

Theorem 4.5. Assume that $P(S(n) > 0) \to \varrho \in (0, 1)$ as $n \to \infty$. Then there exists a slowly varying function $\ell(n)$ such that
\[
P(\tau_0 > n) = n^{\varrho - 1} \ell(n).
\]

Proof.
It follows from (38) that
\[
\sum_{n=0}^\infty s^n P(\tau_0 > n) = \frac{1 - E[s^{\tau_0}]}{1 - s}
= \exp\left\{-\log(1-s) - \sum_{n=1}^\infty \frac{s^n}{n} P(S(n) \le 0)\right\}
\]
\[
= \exp\left\{\sum_{n=1}^\infty \frac{s^n}{n} P(S(n) > 0)\right\}
= (1-s)^{-\varrho} \exp\left\{\sum_{n=1}^\infty \frac{s^n}{n} \left(P(S(n) > 0) - \varrho\right)\right\}.
\]
The assumption $P(S(n) > 0) \to \varrho$ implies that the function
\[
L(x) := \exp\left\{\sum_{n=1}^\infty \frac{(1 - 1/x)^n}{n} \left(P(S(n) > 0) - \varrho\right)\right\}
\]
is slowly varying at infinity. Combining this with the equality
\[
\sum_{n=0}^\infty s^n P(\tau_0 > n) = (1-s)^{-\varrho}\, L\left(\frac{1}{1-s}\right)
\]
and applying the Tauberian theorem (see, for example, Theorem XIII.5.5 in [14]), we conclude that, as $n \to \infty$,
\[
P(\tau_0 > n) \sim \frac{1}{\Gamma(\varrho)}\, n^{\varrho - 1} L(n).
\]
This is equivalent to the claim of the theorem.

In the case of zero drift and finite variance, the condition of the above theorem holds with $\varrho = 1/2$. Furthermore, one can show that
\[
\sum_{n=1}^\infty \frac{1}{n} \left| P(S(n) > 0) - \frac{1}{2} \right| < \infty
\]
for every random walk with zero drift and finite variance. This summability property implies that $\ell(n)$ is asymptotically equivalent to a positive constant $C_0$ and, consequently,
\[
P(\tau_0 > n) \sim \frac{C_0}{\sqrt n} \quad\text{as } n \to \infty.
\]
As we have mentioned before, the stopping time $\tau_x$ with $x > 0$ has no dual. This indicates that it is not possible to obtain a factorisation which involves $\tau_x$. But the knowledge of the tail behaviour of $\tau_0$ allows one to conclude that for every oscillating random walk there exists a positive function $H(x)$ such that
\[
\lim_{n \to \infty} \frac{P(\tau_x > n)}{P(\tau_0 > n)} = H(x) \quad\text{for every } x > 0.
\]
We shall give a probabilistic proof of this equality in Proposition 6.4. The function $H$ is the renewal function of the weak descending ladder heights:
\[
H(x) = \sum_{k=0}^\infty P\left(\chi_1^- + \chi_2^- + \cdots + \chi_k^- < x\right).
\]
The Wiener-Hopf factorisation is a very powerful tool in studying first-passage problems for one-dimensional processes with independent stationary increments (random walks and Lévy processes).
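For a symmetric atomless walk, the constant in the $C_0/\sqrt n$ asymptotics can be cross-checked against the exact formula $P(\tau_0>n)=\binom{2n}{n}2^{-2n}$ obtained from (28). A minimal sketch under assumed parameters (standard Gaussian increments, 100,000 Monte Carlo paths; these choices are ours, not the text's) compares the simulated tail with the exact binomial value and with the $n^{-1/2}$ scaling predicted by $\varrho = 1/2$:

```python
import math
import random

def survival_estimate(n, num_paths=100_000, seed=7):
    """Monte Carlo estimate of P(tau_0 > n) = P(S(1) > 0, ..., S(n) > 0)
    for a walk with standard Gaussian increments."""
    random.seed(seed)
    alive = 0
    for _ in range(num_paths):
        s = 0.0
        for _ in range(n):
            s += random.gauss(0.0, 1.0)
            if s <= 0.0:
                break
        else:  # the walk never went below zero
            alive += 1
    return alive / num_paths

def exact_symmetric(n):
    """P(tau_0 > n) = C(2n, n) 2^{-2n}, valid for symmetric atomless laws."""
    return math.comb(2 * n, n) / 4 ** n
```

Here `survival_estimate(25)` should be close to `exact_symmetric(25)` (about 0.112), and the ratio of the tails at $n=100$ and $n=25$ should be close to $1/2$, in line with $P(\tau_0>n)\sim C_0 n^{-1/2}$.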
We gave above just one example of its usage, but there is a very large number of papers which use factorisation identities to study boundary-crossing problems with one or two boundaries. The most classical references for the basics of the Wiener-Hopf factorisation for random walks are the textbooks by Spitzer [19] and by Borovkov [2]. For the case of Lévy processes we refer to Doney [11] and to Kyprianou [17].

5 A Universality Approach to Exit Times

In this section we describe an alternative approach to first-passage times for discrete-time random walks. This approach is based on the following universality idea: if the random walk $\{S(n)\}$ belongs to the domain of attraction of the Brownian motion, then the tail behaviour of first-passage times of $\{S(n)\}$ should be similar to that of the Brownian motion. It turns out that this idea is quite robust and allows one to consider random walks with time-inhomogeneous increments. It is worth recalling that the duality lemma holds only for identically distributed increments and, consequently, the Wiener-Hopf factorisation is not applicable in the case when $\{X_k\}$ have different distributions.

Let $\{S(n)\}$ be a one-dimensional random walk with independent increments $\{X_k\}$: $S(n) = X_1 + \cdots + X_n$, $n \ge 1$. We shall assume that
\[
E X_k = 0 \quad\text{and}\quad \sigma_k^2 := E X_k^2 \in (0, \infty) \ \text{ for every } k \ge 1. \tag{39}
\]
Set
\[
B_0^2 := 0 \quad\text{and}\quad B_n^2 = \sum_{k=1}^n \sigma_k^2, \quad n \ge 1.
\]
We shall also assume that the classical Lindeberg condition holds:
\[
L_n(\varepsilon) := \frac{1}{B_n^2} \sum_{k=1}^n E\left[X_k^2;\ |X_k| \ge \varepsilon B_n\right] \to 0 \quad\text{as } n \to \infty \tag{40}
\]
for every $\varepsilon > 0$. It is well known that this condition is necessary and sufficient for the validity of the functional central limit theorem.
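To make the Lindeberg condition (40) concrete, the function $L_n(\varepsilon)$ can be evaluated in closed form for simple discrete increment laws. The sketch below uses an illustrative i.i.d. law of our own choosing (not from the text): $X = \pm1$ with probability $0.495$ each and $X = \pm10$ with probability $0.005$ each, so that $E X = 0$ and $\sigma^2 = 1.99$:

```python
import math

# Illustrative i.i.d. increment law (an assumption, not from the text):
# X = +/-1 w.p. 0.495 each, +/-10 w.p. 0.005 each.
SIGMA2 = 0.99 * 1.0 + 0.01 * 100.0  # E[X^2] = 1.99

def second_moment_tail(t):
    """E[X^2; |X| >= t] for the two-magnitude law above."""
    e = 0.0
    if t <= 1.0:
        e += 0.99 * 1.0
    if t <= 10.0:
        e += 0.01 * 100.0
    return e

def lindeberg(n, eps):
    """L_n(eps) = (1/B_n^2) * sum_k E[X_k^2; |X_k| >= eps*B_n]; in the
    i.i.d. case this reduces to E[X^2; |X| >= eps*B_n] / sigma^2."""
    b_n = math.sqrt(SIGMA2 * n)
    return second_moment_tail(eps * b_n) / SIGMA2
```

Once $\varepsilon B_n$ exceeds the largest increment magnitude, $L_n(\varepsilon)$ vanishes, which illustrates why (40) holds automatically for walks with bounded increments.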
More precisely, if we set
\[
s(t) := S(k) + X_{k+1} \frac{t - B_k^2}{\sigma_{k+1}^2}, \quad t \in [B_k^2, B_{k+1}^2],\ k \ge 0,
\]
and
\[
s_n(t) := \frac{s(t B_n^2)}{B_n}, \quad t \in [0, 1],
\]
then $s_n$ converges weakly on $C[0, 1]$ towards the standard Brownian motion $W$ if and only if (40) holds.

Since we want to use similarities between the walk $S(n)$ and the Brownian motion $W(t)$, let us first take a look at first-passage times for $W(t)$. Define
\[
\tau_x^{(bm)} = \inf\{t > 0 : x + W(t) \le 0\}, \quad x > 0.
\]
Using the reflection principle, one can show that
\[
P\left(\tau_x^{(bm)} > 1\right) = P\left(\min_{0 \le s \le 1} W(s) > -x\right)
= 1 - 2 P(W(1) \ge x) = 2 \int_0^x \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\, du.
\]
This implies that
\[
P\left(\tau_x^{(bm)} > 1\right) \sim \sqrt{\frac{2}{\pi}}\, x \quad\text{as } x \to 0.
\]
Thus, by the scaling property of the Brownian motion, for every fixed $x > 0$,
\[
P\left(\tau_x^{(bm)} > t\right) \sim \sqrt{\frac{2}{\pi}} \frac{x}{\sqrt t}
= \sqrt{\frac{2}{\pi}} \frac{-E\left[W\left(\tau_x^{(bm)}\right)\right]}{\sqrt t}, \quad t \to \infty. \tag{41}
\]
It follows from the functional central limit theorem that
\[
P\left(\tau_{u B_n} > B_n^2\right) = P\left(\min_{k \le n} S(k) > -u B_n\right)
\sim P\left(\min_{t \in [0,1]} W(t) > -u\right) = P\left(\tau_u^{(bm)} > 1\right)
\]
for every fixed $u > 0$. Furthermore, since every convergence takes place with a certain rate, we have
\[
P\left(\tau_{u_n B_n} > B_n^2\right) \sim P\left(\tau_{u_n}^{(bm)} > 1\right) \sim \sqrt{\frac{2}{\pi}}\, u_n
\]
provided that $u_n \to 0$ sufficiently slowly. But one cannot immediately infer from the central limit theorem that the same relation is valid in the case of a fixed starting point, which corresponds to $u_n = x B_n^{-1}$. So, our main purpose will be to find a way of deriving tail asymptotics for first-passage times of $S(n)$ from the functional central limit theorem.

It turns out that, in contrast to the previous sections, we can consider not only fixed but also moving boundaries. For a real-valued sequence $\{g(n)\}$ we define the stopping time
\[
T_g := \inf\{n \ge 1 : S(n) \le g(n)\}.
\]
In particular, if $g(n) \equiv -x$, then $T_g = \tau_x$. We shall only assume that the boundary $\{g(n)\}$ satisfies
\[
g(n) = o(B_n).
\]
(42)

This condition means that, from the point of view of the central limit theorem, the boundary is asymptotically zero.

Theorem 5.1. Assume that (39), (40) and (42) hold. If, in addition, $P(T_g > n) > 0$ for every $n$, then
\[
P(T_g > n) \sim \sqrt{\frac{2}{\pi}} \frac{U_g(B_n^2)}{B_n},
\]
where
\[
U_g(B_n^2) = E[S(n) - g(n);\ T_g > n] \sim E[-S(T_g);\ T_g \le n]
\]
is a positive slowly varying function.

In other words, we have
\[
P(T_g > n) \sim \sqrt{\frac{2}{\pi}} \frac{E[-S(T_g);\ T_g \le n]}{B_n}.
\]
Comparing this with (41), we see that the only difference is a "partial" expected value of $-S(T_g)$. It may happen that $E[-S(T_g);\ T_g \le n]$ does not converge to a positive constant. One example of this type will be discussed later, and further examples can be found in [4].

We split the proof of Theorem 5.1 into several blocks; each block corresponds to one subsection below.

5.1 Upper bounds for the tail of $T_g$.

Lemma 5.2. For every $n \ge 1$ and every $x \in \mathbb R$,
\[
P(S(n) > x,\ T_g > n) \ge P(S(n) > x)\, P(T_g > n).
\]

Proof. For $x \le g(n)$ the inequality is trivial. For $x > g(n)$ we use induction. If $n = 1$ and $x > g(1)$, then
\[
P(S(1) > x,\ T_g > 1) = P(S(1) > x) \ge P(S(1) > x)\, P(T_g > 1).
\]
Assume that the inequality holds for some $n \ge 1$. For every $x > g(n+1)$ we then have
\[
P(S(n+1) > x,\ T_g > n+1)
= \int_{\mathbb R} P(S(n) + y > x,\ S(n) + y > g(n+1),\ T_g > n)\, P(X_{n+1} \in dy)
\]
\[
= \int_{\mathbb R} P(S(n) + y > x,\ T_g > n)\, P(X_{n+1} \in dy).
\]
Using the induction assumption, we obtain
\[
P(S(n+1) > x,\ T_g > n+1)
\ge \int_{\mathbb R} P(y + S(n) > x)\, P(T_g > n)\, P(X_{n+1} \in dy)
\]
\[
= P(S(n+1) > x)\, P(T_g > n) \ge P(S(n+1) > x)\, P(T_g > n+1).
\]

Lemma 5.3. For every $n \ge 1$,
\[
P(T_g > n) \le \frac{E[S(n) - g(n);\ T_g > n]}{E[S(n) - g(n);\ S(n) > g(n)]}.
\]
Furthermore, if $g(n) = o(B_n)$, then there exists a constant $C > 0$ such that
\[
P(T_g > n) \le C\, \frac{E[S(n) - g(n);\ T_g > n]}{B_n} = C\, \frac{U_g(B_n^2)}{B_n}.
\]

Proof.
Write
\[
E[S(n) - g(n);\ T_g > n] = \int_0^\infty P(S(n) - g(n) > x,\ T_g > n)\, dx.
\]
Using Lemma 5.2 yields
\[
E[S(n) - g(n);\ T_g > n] \ge P(T_g > n) \int_0^\infty P(S(n) - g(n) > x)\, dx
= P(T_g > n)\, E[S(n) - g(n);\ S(n) > g(n)].
\]
Thus, the first claim is proved. To prove the second claim, we have to estimate the denominator from below. Since $\frac{S(n)}{B_n}$ converges towards $W(1)$ and $g(n) = o(B_n)$, the sequence $\frac{S(n) - g(n)}{B_n}$ converges to the same limit. Then, by Fatou's lemma,
\[
\liminf_{n \to \infty} \frac{1}{B_n} E[S(n) - g(n);\ S(n) - g(n) > 0]
= \liminf_{n \to \infty} E\left[\left(\frac{S(n) - g(n)}{B_n}\right)^+\right]
\ge E\left[W^+(1)\right] = \int_0^\infty x\, \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx = \frac{1}{\sqrt{2\pi}}.
\]
This implies the existence of a positive constant $c$ such that
\[
E[S(n) - g(n);\ S(n) > g(n)] \ge \frac{1}{c} B_n \quad\text{for all } n \ge 1.
\]
This completes the proof of the lemma.

5.2 Some martingale identities.

Lemma 5.4. For every $m \ge 1$,
\[
E[S(m) - g(m);\ T_g > m] = -E[S(T_g);\ T_g \le m] - g(m)\, P(T_g > m)
\]
and
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] = -E[S(T_g);\ T_g \le \nu_m] - E[g(\nu_m);\ T_g > \nu_m],
\]
where
\[
\nu_m = \min\left\{m,\ \inf\{k \ge 1 : S(k) - g(k) > B_m\}\right\}.
\]

Proof. Let $\theta$ be a bounded stopping time. By the optional stopping theorem applied to the martingale $S(n)$, we have
\[
0 = E[S(T_g \wedge \theta)] = E[S(T_g);\ T_g \le \theta] + E[S(\theta);\ T_g > \theta].
\]
Consequently,
\[
E[S(\theta);\ T_g > \theta] = -E[S(T_g);\ T_g \le \theta]
\]
and
\[
E[S(\theta) - g(\theta);\ T_g > \theta] = -E[S(T_g);\ T_g \le \theta] - E[g(\theta);\ T_g > \theta].
\]
Taking here $\theta = m$ and $\theta = \nu_m$, we obtain the desired equalities.

Set
\[
G(n) := \max_{k \le n} |g(k)|.
\]

Corollary 5.5.
For all integers $n \ge m \ge 1$, the following estimates hold:
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] - E[S(n) - g(n);\ T_g > n] \le 2 G(n)\, P(T_g > \nu_m),
\]
\[
E[S(m) - g(m);\ T_g > m] - E[S(n) - g(n);\ T_g > n] \le 2 G(n)\, P(T_g > m),
\]
\[
\left| E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] - E[S(n) - g(n);\ T_g > n] \right|
\le 2 G(n)\, P(T_g > \nu_m) + E\left[-(S(T_g) - g(T_g));\ \nu_m < T_g \le n\right]
\]
and
\[
\max_{m \le k \le n} \left| E[S(k) - g(k);\ T_g > k] - E[S(n) - g(n);\ T_g > n] \right|
\le 2 G(n)\, P(T_g > m) + E\left[-(S(T_g) - g(T_g));\ m < T_g \le n\right].
\]

Proof. By Lemma 5.4,
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] - E[S(n) - g(n);\ T_g > n]
\]
\[
= -E[S(T_g);\ T_g \le \nu_m] - E[g(\nu_m);\ T_g > \nu_m] + E[S(T_g);\ T_g \le n] + g(n)\, P(T_g > n)
\]
\[
= E[S(T_g);\ \nu_m < T_g \le n] - E[g(\nu_m);\ T_g > \nu_m] + g(n)\, P(T_g > n)
\]
\[
= E\left[S(T_g) - g(T_g);\ \nu_m < T_g \le n\right] + E[g(T_g) - g(\nu_m);\ \nu_m < T_g \le n] + E[g(n) - g(\nu_m);\ T_g > n].
\]
This equality implies the first and the third estimates, due to $S(T_g) - g(T_g) \le 0$ and
\[
\left| (g(T_g) - g(\nu_m))\, \mathbf 1\{\nu_m < T_g \le n\} \right| \le 2 G(n)\, \mathbf 1\{\nu_m < T_g \le n\},
\qquad
\left| (g(n) - g(\nu_m))\, \mathbf 1\{T_g > n\} \right| \le 2 G(n)\, \mathbf 1\{T_g > n\}.
\]
Similarly,
\[
E[S(k) - g(k);\ T_g > k] - E[S(n) - g(n);\ T_g > n]
= E\left[S(T_g) - g(T_g);\ k < T_g \le n\right] + E[g(T_g) - g(k);\ k < T_g \le n] + (g(n) - g(k))\, P(T_g > n).
\]
This yields the second bound. To get the last bound, we notice that
\[
\left| E[S(k) - g(k);\ T_g > k] - E[S(n) - g(n);\ T_g > n] \right|
\le 2 G(n)\, P(T_g > k) + E[g(T_g) - S(T_g);\ k < T_g \le n].
\]
Noting that the right-hand side is decreasing in $k$, we complete the proof.

We next provide upper bounds for the expected value of the overshoot $S(T_g) - g(T_g)$, which appears on the right-hand sides of the last two estimates of Corollary 5.5. Set
\[
\varepsilon_n := \inf\left\{\varepsilon > 0 : L_n(\varepsilon) \le \varepsilon^2\right\}.
\]
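The quantity $\varepsilon_n$ can be computed numerically once $L_n$ is available in closed form. A small sketch under our own illustrative assumptions (i.i.d. increments $X = \pm1$ w.p. $0.495$ each and $\pm10$ w.p. $0.005$ each, so $\sigma^2 = 1.99$; the grid search and its step size are ad-hoc choices) locates $\varepsilon_n = \inf\{\varepsilon > 0 : L_n(\varepsilon) \le \varepsilon^2\}$:

```python
import math

SIGMA2 = 1.99  # E[X^2] for the illustrative two-magnitude law

def lindeberg(n, eps):
    """Exact L_n(eps) for X = +/-1 w.p. 0.495 each, +/-10 w.p. 0.005 each."""
    t = eps * math.sqrt(SIGMA2 * n)          # threshold eps * B_n
    tail = (0.99 if t <= 1.0 else 0.0) + (1.0 if t <= 10.0 else 0.0)
    return tail / SIGMA2

def eps_n(n, grid_step=1e-3):
    """Grid approximation of eps_n = inf{eps > 0 : L_n(eps) <= eps^2};
    terminates since L_n <= 1 while eps^2 eventually exceeds 1."""
    eps = grid_step
    while lindeberg(n, eps) > eps * eps:
        eps += grid_step
    return eps
```

For $n$ large enough that $10/B_n$ is small, $L_n(\varepsilon)$ jumps to $0$ just past $\varepsilon = 10/B_n$, so $\varepsilon_n \approx 10/B_n \to 0$, matching the remark after the definition that the Lindeberg condition forces $\varepsilon_n \to 0$.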
If the Lindeberg condition holds then, clearly, $\varepsilon_n \to 0$ as $n \to \infty$.

Lemma 5.6. For all $n \ge m \ge 1$ one has
\[
E\left[-(S(T_g) - g(T_g));\ \nu_m < T_g \le n\right] \le 2\left(G(n) + \varepsilon_n B_n\right) P(T_g > \nu_m)
\]
and
\[
E\left[-(S(T_g) - g(T_g));\ m < T_g \le n\right] \le 2\left(G(n) + \varepsilon_n B_n\right) P(T_g > m).
\]

Proof. We prove the first inequality only; the proof of the second one is very similar and, in some parts, even simpler. Since $S(T_g - 1) > g(T_g - 1)$,
\[
-(S(T_g) - g(T_g)) = -X_{T_g} - (S(T_g - 1) - g(T_g - 1)) + g(T_g) - g(T_g - 1)
\le -X_{T_g} + g(T_g) - g(T_g - 1).
\]
Consequently,
\[
E\left[-(S(T_g) - g(T_g));\ \nu_m < T_g \le n\right]
\le E\left[-X_{T_g} + g(T_g) - g(T_g - 1);\ \nu_m < T_g \le n\right]
\le E\left[-X_{T_g};\ \nu_m < T_g \le n\right] + 2 G(n)\, P(T_g > \nu_m).
\]
Thus, it remains to show that
\[
E\left[-X_{T_g};\ \nu_m < T_g \le n\right] \le 2 \varepsilon_n B_n\, P(T_g > \nu_m).
\]
Fix some $\varepsilon > 0$. Then
\[
E\left[-X_{T_g};\ \nu_m < T_g \le n\right]
\le \varepsilon B_n\, P(T_g > \nu_m) + E\left[-X_{T_g};\ -X_{T_g} > \varepsilon B_n,\ \nu_m < T_g \le n\right].
\]
For the expected value on the right-hand side we have
\[
E\left[-X_{T_g};\ -X_{T_g} > \varepsilon B_n,\ \nu_m < T_g \le n\right]
= \sum_{j=2}^n E\left[-X_j;\ -X_j > \varepsilon B_n,\ T_g = j > \nu_m\right]
\]
\[
\le \sum_{j=2}^n E\left[-X_j;\ -X_j > \varepsilon B_n\right] P(T_g > j-1,\ \nu_m \le j-1)
\le P(T_g > \nu_m) \sum_{j=2}^n E\left[-X_j;\ -X_j > \varepsilon B_n\right].
\]
Applying now the Markov inequality, we obtain
\[
E\left[-X_{T_g};\ -X_{T_g} > \varepsilon B_n,\ \nu_m < T_g \le n\right]
\le P(T_g > \nu_m)\, \frac{1}{\varepsilon B_n} \sum_{j=1}^n E\left[X_j^2;\ |X_j| > \varepsilon B_n\right]
= P(T_g > \nu_m)\, \frac{L_n(\varepsilon)}{\varepsilon}\, B_n.
\]
Consequently,
\[
E\left[-X_{T_g};\ \nu_m < T_g \le n\right]
\le \left(\frac{L_n(\varepsilon)}{\varepsilon^2} + 1\right) \varepsilon B_n\, P(T_g > \nu_m).
\]
Letting $\varepsilon \to \varepsilon_n$ and recalling the definition of $\varepsilon_n$, we get
\[
E\left[-X_{T_g};\ \nu_m < T_g \le n\right] \le 2 \varepsilon_n B_n\, P(T_g > \nu_m).
\]
This completes the proof of the lemma.

Lemma 5.7. If $m \le n$ and $B_m \ge R\, G(n)$ for some sufficiently large $R$, then there exists a constant $C$ such that
\[
P(T_g > m) \le P(T_g > \nu_m) \le C\, \frac{E[S(n) - g(n);\ T_g > n]}{B_m}.
\]

Proof. By the definition of $\nu_m$, $S(\nu_m) - g(\nu_m) > B_m$ on the event $\{\nu_m < m\}$.
This implies that
\[
P(T_g > \nu_m) = P(T_g > \nu_m = m) + P(T_g > \nu_m,\ \nu_m < m)
\le P(T_g > m) + P(S(\nu_m) - g(\nu_m) > B_m,\ T_g > \nu_m).
\]
We have shown in Lemma 5.3 that
\[
P(T_g > m) \le C_1\, \frac{E[S(m) - g(m);\ T_g > m]}{B_m}.
\]
Furthermore, by the Markov inequality,
\[
P(S(\nu_m) - g(\nu_m) > B_m,\ T_g > \nu_m) \le \frac{E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m]}{B_m}.
\]
According to Corollary 5.5,
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] \le E[S(n) - g(n);\ T_g > n] + 2 G(n)\, P(T_g > \nu_m)
\]
and
\[
E[S(m) - g(m);\ T_g > m] \le E[S(n) - g(n);\ T_g > n] + 2 G(n)\, P(T_g > m).
\]
Using these bounds and noting that $P(T_g > m) \le P(T_g > \nu_m)$, we have
\[
P(T_g > \nu_m) \le (C_1 + 1)\, \frac{E[S(n) - g(n);\ T_g > n]}{B_m} + 2 (C_1 + 1)\, \frac{G(n)}{B_m}\, P(T_g > \nu_m).
\]
If we choose $R > 4(C_1 + 1)$, then the assumption $B_m \ge R\, G(n)$ implies that
\[
P(T_g > \nu_m) \le 2 (C_1 + 1)\, \frac{E[S(n) - g(n);\ T_g > n]}{B_m}.
\]
Thus, the proof is complete.

Corollary 5.8. Under the conditions of Lemma 5.7,
\[
\left| E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] - E[S(n) - g(n);\ T_g > n] \right|
\le C\, \frac{G(n) + \varepsilon_n B_n}{B_m}\, E[S(n) - g(n);\ T_g > n]
\]
and
\[
\max_{m \le k \le n} \left| E[S(k) - g(k);\ T_g > k] - E[S(n) - g(n);\ T_g > n] \right|
\le C\, \frac{G(n) + \varepsilon_n B_n}{B_m}\, E[S(n) - g(n);\ T_g > n].
\]

Proof. Applying Lemma 5.7 to the right-hand sides of the inequalities in Lemma 5.6 and in Corollary 5.5, we obtain the desired estimates.

5.3 Estimates in the boundary problem.

In this paragraph we are going to use the central limit theorem to obtain a representation for the probability $P(T_g > n)$, which will then lead to the claim of Theorem 5.1. For all $1 \le k \le m < n$ and $y > 0$ we set
\[
Q_{k,n}(y) := P\left(y + \min_{k \le i \le n} (Z_i - Z_k) > 0\right), \quad\text{where } Z_j := S(j) - g(j).
\]
Applying the strong Markov property, one obtains
\[
P(T_g > n) = P\left(\min_{1 \le j \le n} Z_j > 0\right)
= P\left(T_g > \nu_m,\ Z_{\nu_m} + \min_{\nu_m \le j \le n} (Z_j - Z_{\nu_m}) > 0\right)
= E\left[Q_{\nu_m, n}(Z_{\nu_m});\ T_g > \nu_m\right].
\]
(43)

To analyse the right-hand side in (43) one needs good estimates for $Q_{k,n}$. We will obtain such bounds from the central limit theorem. The next lemma is an immediate consequence of the central limit theorem.

Lemma 5.9. For each $n \ge 1$ we can define $\{S(n)\}_{n \ge 1}$ and a Brownian motion $W_n(t)$ on a common probability space so that
\[
P\left(\max_{t \le B_n^2} |s(t) - W_n(t)| > \pi_n B_n\right)
= P\left(\max_{t \le 1} \left| s_n(t) - \frac{1}{B_n} W_n(t B_n^2) \right| > \pi_n\right) \le \pi_n
\]
for some sequence $\pi_n \downarrow 0$.

Set
\[
B_{k,n}^2 := B_n^2 - B_k^2 > 0 \quad\text{and}\quad \varepsilon_{k,n} := \frac{\pi_n B_n + G(n)}{B_{k,n}}.
\]
Define also
\[
Q(y) := P\left(y + \min_{t \le 1} W(t) > 0\right) = 2 \int_0^{y^+} \varphi(u)\, du, \quad y \in \mathbb R,
\]
where $\varphi$ is the standard normal density. In the next lemma we compare $Q_{k,n}$ and $Q$.

Lemma 5.10. For all $k < n$ and $y \ge 0$ one has
\[
\left| Q_{k,n}(y) - Q\left(\frac{y}{B_{k,n}}\right) \right| \le \pi_n + 4 \varphi(0)\, \varepsilon_{k,n}.
\]

Proof. For every $k < n$ we define
\[
q_{k,n}(y) := P\left(y + \min_{k \le j \le n} (S(j) - S(k)) > 0\right)
= P\left(y + \min_{B_k^2 \le t \le B_n^2} \left(s(t) - s(B_k^2)\right) > 0\right).
\]
Noting that $|(Z_j - Z_k) - (S(j) - S(k))| \le 2 G(n)$ and setting $y_\pm = y \pm 2 G(n)$, we obtain
\[
q_{k,n}(y_-) \le Q_{k,n}(y) \le q_{k,n}(y_+).
\]
Applying Lemma 5.9, we have
\[
q_{k,n}(y_+) \le \pi_n + P\left(y_+ + \min_{B_k^2 \le t \le B_n^2} \left(W_n(t) - W_n(B_k^2)\right) > -2 \pi_n B_n\right)
\]
\[
= \pi_n + P\left(\frac{y_+ + 2 \pi_n B_n}{B_{k,n}} + \min_{t \le 1} W(t) > 0\right)
= Q\left(\frac{y}{B_{k,n}} + 2 \varepsilon_{k,n}\right) + \pi_n.
\]
It is immediate from the definition of $Q$ that
\[
Q\left(\frac{y}{B_{k,n}} + 2 \varepsilon_{k,n}\right) \le Q\left(\frac{y}{B_{k,n}}\right) + 4 \varphi(0)\, \varepsilon_{k,n}.
\]
As a result we have
\[
Q_{k,n}(y) \le Q\left(\frac{y}{B_{k,n}}\right) + 4 \varphi(0)\, \varepsilon_{k,n} + \pi_n.
\]
Similar arguments give the lower bound
\[
Q_{k,n}(y) \ge Q\left(\frac{y}{B_{k,n}}\right) - 4 \varphi(0)\, \varepsilon_{k,n} - \pi_n,
\]
which finishes the proof of the lemma.

Lemma 5.11. If $m$ is such that $B_m \le \frac{3}{5} B_n$, then, for all $k \le m$,
\[
\left| B_n Q_{k,n}(y) - 2 y \varphi(0) \right|
\le \left(3 \pi_n + \frac{2 G(n)}{B_n}\right) B_n + 2 y\, \frac{B_m^2}{B_n^2} + y\, \mathbf 1\{y \ge 3 B_m\}.
\]

Proof. By the triangle inequality,
\[
\left| B_n Q_{k,n}(y) - 2 y \varphi(0) \right|
\le B_n \left| Q_{k,n}(y) - Q\left(\frac{y}{B_{k,n}}\right) \right|
+ \left| B_n Q\left(\frac{y}{B_{k,n}}\right) - 2 y \varphi(0) \right|.
\]
If $B_m \le \frac{3}{5} B_n$ then $B_{k,n} \ge B_{m,n} \ge \frac{4}{5} B_n$.
Using this bound and noting that $\varphi(0) = \frac{1}{\sqrt{2\pi}} < \frac{2}{5}$, we obtain
\[
\pi_n + 4 \varphi(0)\, \varepsilon_{k,n}
\le \pi_n + 4 \cdot \frac{2}{5} \cdot \frac{\pi_n B_n + G(n)}{\frac{4}{5} B_n}
\le 3 \pi_n + \frac{2 G(n)}{B_n}.
\]
Combining these estimates with Lemma 5.10, we conclude that
\[
B_n \left| Q_{k,n}(y) - Q\left(\frac{y}{B_{k,n}}\right) \right| \le \left(3 \pi_n + \frac{2 G(n)}{B_n}\right) B_n.
\]
Thus, it remains to show that
\[
\left| B_n Q\left(\frac{y}{B_{k,n}}\right) - 2 y \varphi(0) \right| \le 2 y\, \frac{B_m^2}{B_n^2} + y\, \mathbf 1\{y \ge 3 B_m\}. \tag{44}
\]
Since $Q(y) \le 2 y \varphi(0)$ and $B_{k,n} \ge \frac{4}{5} B_n$,
\[
B_n Q\left(\frac{y}{B_{k,n}}\right) - 2 y \varphi(0)
\le 2 y \varphi(0) \left(\frac{B_n}{B_{k,n}} - 1\right)
\le y\, \frac{B_n^2 - B_{k,n}^2}{B_{k,n}(B_{k,n} + B_n)}
= \frac{y\, B_k^2}{\frac{4}{5}\left(1 + \frac{4}{5}\right) B_n^2}
\le y\, \frac{B_m^2}{B_n^2}. \tag{45}
\]
Clearly,
\[
B_n Q\left(\frac{y}{B_{k,n}}\right) - 2 y \varphi(0) \ge -2 y \varphi(0) \ge -y.
\]
Combining this with (45), we see that (44) holds for $y > 3 B_m$. Assume now that $y \le 3 B_m$. Noting that
\[
\varphi(x) = \varphi(0)\, e^{-x^2/2} \ge \varphi(0)\left(1 - \frac{x^2}{2}\right),
\]
we obtain
\[
Q(y) = 2 \int_0^y \varphi(x)\, dx \ge 2 \varphi(0) \int_0^y \left(1 - \frac{x^2}{2}\right) dx
\ge 2 y \varphi(0) - \varphi(0)\, \frac{y^3}{3}.
\]
Thus,
\[
B_n Q\left(\frac{y}{B_{k,n}}\right) - 2 y \varphi(0)
\ge B_n Q\left(\frac{y}{B_n}\right) - 2 y \varphi(0)
\ge -B_n\, \frac{\varphi(0)}{3} \left(\frac{y}{B_n}\right)^3
\ge -\frac{\varphi(0)}{3}\, \frac{(3 B_m)^2}{B_n^2}\, y
= -3 \varphi(0)\, \frac{B_m^2}{B_n^2}\, y
\ge -2\, \frac{B_m^2}{B_n^2}\, y.
\]
This completes the proof of the lemma.

Plugging the inequality of Lemma 5.11 into (43), we conclude that if $B_m \le \frac{3}{5} B_n$, then
\[
\left| B_n P(T_g > n) - 2 \varphi(0)\, E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m] \right|
\le \left(3 \pi_n + \frac{2 G(n)}{B_n}\right) B_n\, P(T_g > \nu_m)
\]
\[
+\ 2\, \frac{B_m^2}{B_n^2}\, E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m]
+ E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]. \tag{46}
\]
Thus, we need to show that every summand on the right-hand side is negligibly small in comparison with $E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m]$ for an appropriately chosen $m = m(n)$.

5.4 Proof of Theorem 5.1.

The Lindeberg condition (40) implies that $\sigma_n^2 = o(B_n^2)$. Combining this property with the assumption $G(n) = o(B_n)$, we infer that there exists $m(n) < n$ such that
\[
\frac{B_{m(n)}}{B_n} \to 0 \quad\text{and}\quad \frac{G(n) + (\varepsilon_n + \pi_n) B_n}{B_{m(n)}} \to 0.
\]
(47)

Applying Corollary 5.8 with this $m(n)$, we conclude that
\[
E\left[S(\nu_{m(n)}) - g(\nu_{m(n)});\ T_g > \nu_{m(n)}\right] \sim E[S(n) - g(n);\ T_g > n] \tag{48}
\]
and
\[
\max_{m(n) \le k \le n} \left| \frac{E[S(n) - g(n);\ T_g > n]}{E[S(k) - g(k);\ T_g > k]} - 1 \right| = o(1). \tag{49}
\]
The latter relation implies that the function $U_g(x)$ is slowly varying. Combining the first relation in (47) with (48), we obtain
\[
2\, \frac{B_{m(n)}^2}{B_n^2}\, E\left[S(\nu_{m(n)}) - g(\nu_{m(n)});\ T_g > \nu_{m(n)}\right]
= o\left(E[S(n) - g(n);\ T_g > n]\right). \tag{50}
\]
Furthermore, applying first Lemma 5.7 and then using the second relation in (47), we conclude that
\[
\left(3 \pi_n + \frac{2 G(n)}{B_n}\right) B_n\, P\left(T_g > \nu_{m(n)}\right)
\le C\, \frac{3 \pi_n B_n + 2 G(n)}{B_{m(n)}}\, E[S(n) - g(n);\ T_g > n]
= o\left(E[S(n) - g(n);\ T_g > n]\right). \tag{51}
\]
It remains to estimate the last summand in (46). Noting that
\[
S(\nu_m) - g(\nu_m) = S(\nu_m - 1) - g(\nu_m - 1) + X_{\nu_m} + g(\nu_m - 1) - g(\nu_m)
\le B_m + X_{\nu_m} + 2 G(m),
\]
the assumption $G(m) = o(B_m)$ implies that
\[
S(\nu_m) - g(\nu_m) \le \frac{3}{2} B_m + X_{\nu_m}
\]
for all sufficiently large $m$. Consequently,
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]
\le \sum_{j=1}^m E\left[\frac{3}{2} B_m + X_j;\ T_g > j-1,\ X_j > \frac{3}{2} B_m\right]
\]
\[
\le 2 \sum_{j=1}^m E\left[X_j;\ X_j > \frac{3}{2} B_m\right] P(T_g > j-1).
\]
Using Lemma 5.3 and applying Potter's bound for slowly varying functions, we obtain
\[
P(T_g > j-1) \le C_1\, \frac{U_g(B_{j-1}^2)}{B_{j-1}}
\le C_2\, \frac{B_m^{1/3}\, U_g(B_m^2)}{B_{j-1}^{4/3}}
\le C_3\, \frac{B_m^{1/3}\, U_g(B_m^2)}{B_j^{4/3}}
\]
for all $j \le m$. Consequently,
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]
\le 2 C_3\, B_m^{1/3}\, U_g(B_m^2) \sum_{j=1}^m B_j^{-4/3}\, E\left[X_j;\ X_j > \frac{3}{2} B_m\right].
\]
Applying now the Markov inequality, we obtain
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]
\le \frac{4}{3} C_3\, B_m^{-2/3}\, U_g(B_m^2) \sum_{j=1}^m B_j^{-4/3}\, E\left[X_j^2;\ |X_j| > B_m\right].
\]
Set now
\[
a_j := E\left[X_j^2;\ |X_j| > B_m\right] \quad\text{and}\quad A_j = \sum_{k=1}^j a_k.
\]
Noting that $A_j \le B_j^2$ for all $j$, we get
\[
\sum_{j=1}^m B_j^{-4/3}\, E\left[X_j^2;\ |X_j| > B_m\right]
\le \sum_{j=1}^m A_j^{-2/3}\, a_j
= \sum_{j=1}^m \frac{A_j - A_{j-1}}{A_j^{2/3}}
\le \int_0^{A_m} x^{-2/3}\, dx = 3 A_m^{1/3}.
\]
Consequently,
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]
\le C\, B_m^{-2/3} A_m^{1/3}\, U_g(B_m^2).
\]
Noting that $B_m^{-2/3} A_m^{1/3} = L_m^{1/3}(1) \to 0$ by the Lindeberg condition and using (49), we conclude that
\[
E[S(\nu_m) - g(\nu_m);\ T_g > \nu_m,\ S(\nu_m) - g(\nu_m) > 3 B_m]
= o\left(E[S(n) - g(n);\ T_g > n]\right). \tag{52}
\]
Plugging (50), (51) and (52) into (46), we obtain
\[
B_n P(T_g > n) \sim 2 \varphi(0)\, U_g(B_n^2) = \sqrt{\frac{2}{\pi}}\, U_g(B_n^2).
\]
Thus, the proof is complete.

5.5 Conditional functional limit theorem and some properties of the function $U_g$

Our approach to the tail behaviour of the stopping time $T_g$ was based on the comparison with the Brownian motion. The same scheme allows one to obtain a functional limit theorem for conditioned random walks. To formulate this result we have to introduce the limit. Let $B = \{B(t),\ t \in [0, 1]\}$ be the standard Brownian motion. It has been shown by Durrett, Iglehart and Miller [13] that, as $\varepsilon \to 0$, the family of processes
\[
\left\{B \ \Big|\ \min_{s \le 1} B(s) > -\varepsilon\right\}, \quad \varepsilon > 0,
\]
converges weakly on the space $C[0, 1]$ of continuous functions on the time interval $[0, 1]$. The limit is called the Brownian meander, and we shall denote it by $M = \{M(t),\ t \in [0, 1]\}$.

Theorem 5.12. Under the conditions of Theorem 5.1, the distribution of $s_n$ conditioned on $\{T_g > n\}$ converges weakly on $C[0, 1]$ towards the Brownian meander $M$. In particular,
\[
P\left(S(n) > g(n) + x B_n \mid T_g > n\right) \to e^{-x^2/2}, \quad x \ge 0.
\]
The universality approach allows one to avoid the most standard approach to the proof of functional limit theorems, which consists in proving convergence of finite-dimensional distributions and in showing tightness. Instead, one can work directly with bounded continuous functionals on $C[0, 1]$.
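The Rayleigh limit in Theorem 5.12 can be probed by brute-force conditioning. The sketch below runs under assumptions of our own choosing (standard Gaussian increments, constant boundary $g \equiv 0$, $n = 200$, 40,000 paths; all ad-hoc): it keeps only the paths with $T_g > n$ and records the rescaled endpoint $S(n)/B_n$:

```python
import math
import random

def conditioned_endpoints(n=200, num_paths=40_000, seed=3):
    """Simulate walks with N(0,1) increments and boundary g == 0; return
    the rescaled endpoints S(n)/sqrt(n) of the paths with T_g > n."""
    random.seed(seed)
    endpoints = []
    for _ in range(num_paths):
        s = 0.0
        for _ in range(n):
            s += random.gauss(0.0, 1.0)
            if s <= 0.0:
                break
        else:  # the walk stayed positive up to time n
            endpoints.append(s / math.sqrt(n))
    return endpoints
```

Theorem 5.12 predicts that the fraction of surviving paths whose endpoint exceeds $x$ approaches $e^{-x^2/2}$; at $x = 1$ this is about $0.61$, markedly larger than the unconditioned value $P(W(1) > 1) \approx 0.16$, which illustrates how strongly the conditioning pushes the walk upwards.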
The proof of Theorem 5.12 can be found in [4]. Durrett [12] used the classical method via convergence of finite-dimensional distributions and tightness to prove a conditional functional limit theorem for some classes of null-recurrent Markov chains.

As we have mentioned before, the only difference between Theorem 5.1 and (41) is the appearance of the slowly varying function $U_g$. We now show that it may happen that this function is not asymptotically constant even in the case when the boundary is constant. If $g(n) \equiv -x$ for some $x \ge 0$, then $S(\tau_x) \le -x \le 0$. (Recall that in the case of constant boundaries we use the notation $\tau_x$ instead of $T_g$.) Applying the monotone convergence theorem, we conclude that
\[
E[-S(\tau_x);\ \tau_x \le n] \to E[-S(\tau_x)] =: V_x \in (0, \infty].
\]
Let us next derive a necessary condition for $V_x < \infty$. The monotonicity of $E[-S(\tau_x);\ \tau_x \le n]$ and Theorem 5.1 imply that
\[
P(\tau_x > n) \ge \frac{C_0}{B_n}, \quad n \ge 1,
\]
for some constant $C_0 > 0$.

Lemma 5.13. Let $x \ge 0$ be fixed and assume that the conditions of Theorem 5.1 hold. For every $\varepsilon > 0$ there exists $N_\varepsilon$ such that
\[
E[-x - S(\tau_x)] \ge \frac{C_0}{4} \left(1 - e^{-\varepsilon^2/8}\right) \sum_{n = N_\varepsilon + 1}^\infty \frac{1}{B_n}\, E[-X_n;\ -X_n > \varepsilon B_n].
\]

Proof. We know from Theorem 5.12 that, for every $\varepsilon > 0$,
\[
P\left(\frac{x + S(n)}{B_n} < \frac{\varepsilon}{2} \,\Big|\, \tau_x > n\right) \to 1 - e^{-\varepsilon^2/8}.
\]
Thus, there exists $N_\varepsilon$ such that
\[
P\left(\frac{x + S(n)}{B_n} < \frac{\varepsilon}{2} \,\Big|\, \tau_x > n\right) \ge \frac{1}{2}\left(1 - e^{-\varepsilon^2/8}\right)
\quad\text{for all } n \ge N_\varepsilon.
\]
For $E[-x - S(\tau_x)]$ we have the following representation:
\[
E[-x - S(\tau_x)] = \sum_{n=1}^\infty E[-x - S(n);\ \tau_x = n]
= \sum_{n=1}^\infty E[-x - S(n-1) - X_n;\ \tau_x > n-1,\ x + S(n-1) + X_n \le 0].
\]
We next notice that if $x + S(n-1) \le \frac{\varepsilon}{2} B_{n-1}$ and $X_n < -\varepsilon B_n$, then
\[
-x - S(n-1) - X_n > -\frac{\varepsilon}{2} B_n - X_n
= \left(-\frac{\varepsilon}{2} B_n + \frac{-X_n}{2}\right) + \frac{-X_n}{2} > \frac{-X_n}{2}.
\]
Consequen tly , E [ − x − S ( n − 1) − x n ; τ x > n − 1 , x + S ( n − 1) + X n < 0] ≥ E − X n 2 ; − X n > εB n P x + S ( n − 1) < ε 2 B n − 1 , τ x > n − 1 and E [ − x − S ( τ x )] ≥ ∞ X n = N ε +1 1 − e − ε 2 / 8 4 P ( T 0 > n − 1) E [ − X n ; − X n > εB n ] ≥ C 0 1 − e − ε 2 / 8 4 ∞ X n = N ε +1 1 B n E [ − X n ; − X n > εB n ] . Using this lemma we can construct an example with E [ − S ( τ x )] = ∞ for ev ery x ≥ 0 . Let X n ha ve the follo wing distribution: P ( X n = ± √ n ) = p n 2 and P ( X n = ± a n ) = 1 − p n 2 , where p n = 1 n log ( n + 2) and a n = r 1 − np n 1 − p n . It is easy to see that E X n = 0 and E X 2 n = 1 for every n ≥ 1 . Th us, B 2 n = n . Next, noting that a n ∈ (0 , 1) for all n , one infers that, for n > ε − 2 , L n ( ε ) = 1 n n X k =1 E [ X 2 k ; | X k | > εn ] = 1 n X k ∈ ( ε 2 n,n ] k p k = 1 n X k ∈ ( ε 2 n,n ] 1 log( k + 2) = O 1 log n . 40 So, the Lindeb erg condition holds and, consequently , { X n } satisfies all the con- ditions of Theorem 5.1. Then, for ev ery x > 0 there exists a slowly v arying function U x suc h that P ( τ x > n ) ∼ r 2 π U x ( n ) n 1 / 2 as n → ∞ . Notice next that, for every N ≥ 1 , ∞ X k = N +1 1 B k E − X k ; X k < − B k 2 = ∞ X k = N +1 1 √ k E " − X k ; X k < − √ k 2 # = 1 2 ∞ X k = N +1 1 k log ( k + 2) = ∞ . Applying no w Lemma 5.13 with ε = 1 2 , w e infer that E [ − S ( τ x )] = ∞ for every x ≥ 0 . Surprisingly , this is not p ossible in the case of i.i.d. summands. If all X n ha ve the same distribution with E X n = 0 and E X 2 n = σ 2 ∈ (0 , ∞ ) then ∞ X n = N +1 1 B n E [ − X n ; − X n > εB n ] = ∞ X n = N +1 1 √ nσ 2 E − X 1 ; − X 1 > εσ √ n ≤ 1 √ σ 2 E " − X 1 ∞ X n =1 1 √ n 1 − X 1 > εσ √ n # ≤ C E X 2 1 < ∞ . So, Lemma 5.13 does not pro duce infinite low er bound. In the next section w e shall sho w that E [ − S ( τ x )] is finite for every x provided that E [ X 1 ] = 0 and E X 2 1 = σ 2 ∈ (0 , ∞ ) . 6 Conditioned random w alks with i.i.d. incre- men ts. 
In this section we concentrate primarily on the case when the increments {X_k} are independent and identically distributed. If we additionally assume that

E X_1 = 0 and E X_1² =: σ² ∈ (0, ∞), (53)

then the Lindeberg condition is valid and we may apply Theorem 5.1. Specialising this result to the case of i.i.d. increments and of constant boundaries, we obtain

P(τ_x > n) ∼ sqrt(2/(π σ²)) U_x(σ² n)/√n as n → ∞, (54)

where the slowly varying function U_x is given by

U_x(σ² n) = E[S(n); τ_x > n] = E[−S(τ_x); τ_x ≤ n].

It is easy to see that U_x is monotone increasing and that

lim_{n→∞} U_x(σ² n) = E[−S(τ_x)] ∈ (0, ∞].

Our first purpose is to show that the function V(x) := E[−S(τ_x)] is finite and to determine the behaviour of this function as x → ∞. We represent V(x) as follows:

V(x) = E[−S(τ_x)] = x − E[x + S(τ_x)] =: x + f(x). (55)

Proposition 6.1. Assume that (53) holds. Then the function f(x) defined in (55) is finite. Moreover, f(x) = o(x) as x → ∞.

To prove this proposition we shall construct an appropriate positive supermartingale. To formulate the corresponding result we introduce the following notation:

a(x) := −E[x + X_1; x + X_1 ≤ 0] = ∫_x^∞ P(X_1 ≤ −y) dy,
ā(x) := ∫_x^∞ P(X_1 > y) dy,
b(x) := ∫_x^∞ a(y) dy and m(x) := ∫_0^x b(y) dy.

Using integration by parts, we obtain

E[(x + X_1)²; x + X_1 < 0] = ∫_{−∞}^{−x} (y + x)² dP(X_1 ≤ y)
= −2 ∫_{−∞}^{−x} (y + x) P(X_1 ≤ y) dy = 2 ∫_x^∞ (y − x) P(X_1 ≤ −y) dy
= 2 ∫_x^∞ ∫_x^y dz P(X_1 ≤ −y) dy = 2 ∫_x^∞ ∫_z^∞ P(X_1 ≤ −y) dy dz = 2 ∫_x^∞ a(z) dz.

The assumption E X_1² < ∞ implies that the function a(z) is integrable. Consequently,

b(x) = (1/2) E[(x + X_1)²; X_1 < −x] → 0 as x → ∞.

This, in its turn, implies that m(x)/x → 0 as x → ∞.

Lemma 6.2.
If (53) is valid then ther e exist p ositive c onstants A and R W ( x ) = x + Am ( x ) + R is sup erharmonic for S ( n ) kil le d at τ x . In other wor ds, E [ W ( x + S ( n )); τ x > n ] ≤ W ( x ) for al l x ≥ 0 and al l n ≥ 1 . (56) W e p ostp one the pro of of Lemma 6.2 and show that (56) yields the claim of Prop osition 6.1. Indeed, (56) implies that W ( x ) ≥ E [ W ( x + S ( n )); τ x > n ] ≥ E [ x + S ( n ); τ x > n ] = x − E [ x + S ( τ x ) ; τ x ≤ n ] . Letting here n → ∞ , we conclude that x + f ( x ) ≤ W ( x ) = x + Am ( x ) + R. In other w ords, f ( x ) ≤ Am ( x ) + R < ∞ and f ( x ) = o ( x ) due to the fact that m ( x ) = o ( x ) . Pr o of of L emma 6.2. W e w ant to show that ∆( x ) := E [ W ( x + X 1 ); τ x > 1] − W ( x ) ≤ 0 for all x ≥ 0 . Let F ( x ) denote the distribution function of X 1 and set F ( x ) := 1 − F ( x ) . Since x = E [ x + X 1 ] , we hav e ∆( x ) = E [ x + X 1 + Am ( x + X ) + R ; X 1 > − x ] − x − Am ( x ) − R = − E [ x + X 1 ; X 1 ≤ − x ] − RF ( − x ) − Am ( x ) F ( − x ) + A E [ m ( x + X 1 ) − m ( x ); X 1 > − x ] = a ( x ) − R F ( − x ) − Am ( x ) F ( − x ) + A Z ∞ − x ( m ( x + y ) − m ( x )) dF ( y ) . (57) Recalling that m ( x ) = R x 0 b ( z ) dz , we hav e Z ∞ 0 ( m ( x + y ) − m ( x )) dF ( y ) = Z ∞ 0 Z y 0 b ( x + z ) dz dF ( y ) = Z ∞ 0 b ( x + z ) F ( z ) dz and Z 0 − x ( m ( x + y ) − m ( x )) dF ( y ) = − Z 0 − x Z 0 y b ( x + z ) dz dF ( y ) = − Z 0 − x b ( x + z ) F ( z ) dz + F ( − x ) Z 0 − x b ( x + z ) dz = − Z 0 − x b ( x + z ) F ( z ) dz + F ( − x ) m ( x ) . 43 Th us, for the integral in (57) w e ha ve Z ∞ − x ( m ( x + y ) − m ( x )) dF ( y ) = Z 0 − x ( m ( x + y ) − m ( x )) dF ( y ) + Z ∞ 0 ( m ( x + y ) − m ( x )) dF ( y ) = m ( x ) F ( − x ) − Z 0 − x b ( x + y ) F ( y ) dy + Z ∞ 0 b ( x + y ) F ( y ) dy = m ( x ) F ( − x ) + Z x 0 b ( x − y ) a ′ ( y ) dy − Z ∞ 0 b ( x + y ) a ′ ( y ) dy , where we hav e used the equalities a ′ ( x ) = − F ( − x ) , a ′ ( x ) = − F ( x ) . 
Then, integrating by parts, we obtain Z ∞ − x ( m ( x + y ) − m ( x )) dF ( y ) = m ( x ) F ( − x ) + b (0) a ( x ) − b ( x ) a (0) − Z x 0 a ( x − y ) a ( y ) dy + b ( x )¯ a (0) − Z ∞ 0 a ( x + y )¯ a ( y ) dy . W e next notice that b ( x )¯ a (0) − b ( x ) a (0) = b ( x ) E [ X ] = 0 and b (0) = 1 2 E h X − 1 2 i . Consequen tly , Z ∞ − x ( m ( x + y ) − m ( x )) dF ( y ) = m ( x ) F ( − x ) + 1 2 E [( X − 1 ) 2 ] a ( x ) − Z x 0 a ( x − y ) a ( y ) dy − Z ∞ 0 a ( x + y )¯ a ( y ) dy . Plugging this into (57) and noting that the integrals are nonnegative, we obtain ∆( x ) = a ( x ) − R F ( − x ) + A 2 E [( X − 1 ) 2 ] a ( x ) − A Z x 0 a ( x − y ) a ( y ) dy − A Z ∞ 0 a ( x + y )¯ a ( y ) dy ≤ a ( x ) − RF ( − x ) − A Z x 0 a ( x − y ) a ( y ) dy . Since a ( x ) is decreasing, Z x 0 a ( x − y ) a ( y ) dy = 2 Z x/ 2 0 a ( x − y ) a ( y ) dy ≥ 2 a ( x )( b (0) − b ( x/ 2)) . 44 Using this bound and choosing A = 4 E [ ( X − 1 ) 2 ] = 2 b (0) , we hav e ∆( x ) ≤ − RF ( − x ) + a ( x ) − 2 Aa ( x )( b (0) − b ( x/ 2)) = − RF ( − x ) + a ( x )(2 Ab ( x/ 2) − 2 Ab (0) + 1) = − RF ( − x ) + a ( x )(2 Ab ( x/ 2) − 3) . Since b ( x ) → 0 there exists x 0 > 0 such that 2 Ab ( x 0 / 2) = 3 . Therefore, ∆( x ) ≤ 0 for all x ≥ x 0 . If F ( − x 0 ) > 0 then we can choose R = a (0) /F ( − x 0 ) . F or this c hoice w e ha ve ∆( x ) ≤ − RF ( − x 0 ) + a (0) = 0 for all x ≤ x 0 . Finally supp ose that F ( − x 0 ) = 0 . This means that P ( X 1 > − x 0 ) = 1 and, consequen tly , a ( x 0 ) = 0 . If we take R = 3 x 0 then, applying the mean v alue theorem, we hav e for all x < x 0 , ∆( x ) ≤ a ( x ) − RF ( − x ) = a ( x 0 ) − ( a ( x 0 ) − a ( x )) − RF ( x ) = ( x 0 − x ) F ( − ξ ) − 3 x 0 F ( − x ) ≤ x 0 F ( − ξ ) − 3 x 0 F ( − x ) for some ξ ∈ ( x, x 0 ) . Noting that F ( − ξ ) ≤ F ( − x ) , we complete the proof. Com bining now (54) with Proposition 6.1, we conclude that P ( τ x > n ) ∼ r 2 π σ 2 V ( x ) n 1 / 2 as n → ∞ , (58) for every fixed x . 
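The asymptotics (58) can be illustrated numerically (this check is ours, not part of the original text). For the simple symmetric walk one has σ = 1, and for integer x ≥ 1 the walk leaves (0, ∞) exactly at 0, so V(x) = E[−S(τ_x)] = x. A Monte Carlo sketch with arbitrarily chosen parameters:

```python
import numpy as np

# Monte Carlo check of (58): P(tau_x > n) ~ sqrt(2/(pi*sigma^2)) * V(x) / sqrt(n)
# for the simple symmetric walk, where sigma = 1 and V(x) = x for integer x >= 1.
rng = np.random.default_rng(1)
n, trials = 256, 60_000

def survival(x):
    # empirical P(tau_x > n): fraction of walks with x + S(k) > 0 for all k <= n
    steps = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
    mins = np.cumsum(steps, axis=1, dtype=np.int32).min(axis=1)
    return np.mean(x + mins > 0)

p1, p2 = survival(1), survival(2)
print(np.sqrt(n) * p1, np.sqrt(2 / np.pi))  # ~ sqrt(2/pi) since V(1) = 1
print(p2 / p1)                              # ~ 2 = V(2)/V(1)
```

Both the normalisation sqrt(2/π) and the linear dependence on the starting point are visible already at moderate n.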
So, the function V(x) describes the dependence of the tail of τ_x on the starting position x. It turns out that the function V has a further, rather important, property.

Lemma 6.3. Assume that (53) holds. Then the function V(x) = −E[S(τ_x)] is harmonic for S(n) killed at τ_x. That is,

E[V(x + S(1)); τ_x > 1] = V(x) for all x > 0.

Proof. Since E[X_1] = 0,

x = ∫_{y > −x} (x + y) P(X_1 ∈ dy) + ∫_{y ≤ −x} (x + y) P(X_1 ∈ dy)
= E[x + S(1); τ_x > 1] + ∫_{y ≤ −x} (x + y) P(X_1 ∈ dy). (59)

Recall the definition of the function f(x) in (55). By the Markov property,

−f(x) = E[x + S(τ_x)] = ∫_{y > −x} E[x + y + S(τ_{x+y})] P(X_1 ∈ dy) + ∫_{y ≤ −x} (x + y) P(X_1 ∈ dy)
= E[−f(x + S(1)); τ_x > 1] + ∫_{y ≤ −x} (x + y) P(X_1 ∈ dy). (60)

Taking the difference of (59) and (60), we have

V(x) = x + f(x) = E[x + S(1) + f(x + S(1)); τ_x > 1] = E[V(x + X_1); τ_x > 1].

Thus, the lemma is proved.

We next relate the harmonic function constructed above to the weak descending ladder epochs and heights {(τ⁻_k, χ⁻_k)}, which have been introduced in the section on the Wiener-Hopf factorisation.

Proposition 6.4. Let S(n) be a random walk with independent, identically distributed increments. If the function P(τ_0 > n) is regularly varying with index −ϱ ∈ (−1, 0) then, for every x > 0,

lim_{n→∞} P(τ_x > n)/P(τ_0 > n) = H(x),

where H(x) is the renewal function of the weak descending ladder heights {χ⁻_k}, that is,

H(x) = 1 + Σ_{m=1}^∞ P(χ⁻_1 + χ⁻_2 + ... + χ⁻_m < x), x ≥ 0.

Proof. Decomposing the trajectory of the random walk according to ladder epochs, one easily obtains the representation

τ_x = τ⁻_1 + τ⁻_2 + ... + τ⁻_{η(x)}, (61)

where η(x) := inf{k ≥ 1 : χ⁻_1 + χ⁻_2 + ... + χ⁻_k ≥ x}. Since {(τ⁻_k, χ⁻_k)} are independent, the σ-algebras σ(τ⁻_1, τ⁻_2, . . .
, τ⁻_m, 1{η(x) ≤ m}) and σ(τ⁻_{m+1}, τ⁻_{m+2}, . . .) are independent for every m. This allows us to apply Theorem 1 from Korshunov [16], which implies that

E min{τ_x, n} ∼ E η(x) E min{τ_0, n} as n → ∞. (62)

Noting that

E min{Y, n} = Σ_{k=1}^n k P(Y = k) + n P(Y > n)
= Σ_{k=1}^n k (P(Y > k − 1) − P(Y > k)) + n P(Y > n)
= Σ_{k=0}^{n−1} (k + 1) P(Y > k) − Σ_{k=1}^n k P(Y > k) + n P(Y > n)
= Σ_{k=0}^{n−1} P(Y > k)

for every integer-valued random variable Y, we can rewrite (62) as follows:

Σ_{k=0}^{n−1} P(τ_x > k) ∼ E η(x) Σ_{k=0}^{n−1} P(τ_0 > k) as n → ∞. (63)

The assumption that P(τ_0 > n) is regularly varying with index −ϱ implies that

Σ_{k=0}^{n−1} P(τ_0 > k) ∼ n P(τ_0 > n)/(1 − ϱ) as n → ∞. (64)

Combining this with (63), we infer that

Σ_{k ∈ [an, n)} P(τ_x > k) ∼ E η(x) (1 − a^{1−ϱ})/(1 − ϱ) · n P(τ_0 > n)

for every a ∈ (0, 1). Using now the fact that the sequence P(τ_x > k) decreases, we get the bounds

P(τ_x > n)/P(τ_0 > n) ≤ E η(x) (1 − a^{1−ϱ})/((1 − a)(1 − ϱ)) + o(1)

and

P(τ_x > an)/P(τ_0 > n) ≥ E η(x) (1 − a^{1−ϱ})/((1 − a)(1 − ϱ)) + o(1).

Consequently,

lim sup_{n→∞} P(τ_x > n)/P(τ_0 > n) ≤ E η(x) (1 − a^{1−ϱ})/((1 − a)(1 − ϱ))

and

lim inf_{n→∞} P(τ_x > n)/P(τ_0 > n) ≥ E η(x) (1 − a^{1−ϱ})/((1 − a)(1 − ϱ)) a^ϱ.

Letting here a → 1, we conclude that

lim_{n→∞} P(τ_x > n)/P(τ_0 > n) = E η(x).

It remains to notice that

E η(x) = Σ_{k=0}^∞ P(η(x) > k) = 1 + Σ_{k=1}^∞ P(χ⁻_1 + χ⁻_2 + ... + χ⁻_k < x) = H(x).

If the moment conditions in (53) are valid then, as we have already shown,

P(τ_x > n) ∼ sqrt(2/(π σ²)) V(x) n^{−1/2},

where V(x) = E[−S(τ_x)]. Combining this with Proposition 6.4, we conclude that

V(x) = V(0) H(x) = E[χ⁻_1] H(x), x ≥ 0.
(65)

Using now Lemma 6.3, we infer that the function H(x) is also harmonic for S(n) killed at leaving [0, ∞):

H(x) = E[H(x + S(1)); τ_x > 1], x > 0. (66)

Our next purpose is to show that (66) remains valid without the moment assumptions in (53).

Proposition 6.5. Assume that the random walk S(n) is oscillating:

lim inf_{n→∞} S(n) = −∞ and lim sup_{n→∞} S(n) = ∞ a.s.

Then the function H(x) is harmonic for S(n) killed at leaving [0, ∞), that is, (66) holds.

Proof. The assumption lim inf_{n→∞} S(n) = −∞ implies that all ladder heights χ⁻_k are well-defined. Then, by the definition of H,

H(x) = 1 + E[Σ_{k=1}^∞ 1{χ⁻_1 + χ⁻_2 + ... + χ⁻_k < x}]
= 1 + E[Σ_{k=1}^∞ 1{χ⁻_1 + ... + χ⁻_k < x}; S(1) > 0]
+ E[Σ_{k=1}^∞ 1{χ⁻_1 + ... + χ⁻_k < x}; S(1) ∈ (−x, 0]]
+ E[Σ_{k=1}^∞ 1{χ⁻_1 + ... + χ⁻_k < x}; S(1) ≤ −x].

The last term equals zero because χ⁻_1 ≥ x on the event {S(1) ≤ −x}. Furthermore, using the Markov property, we obtain

E[Σ_{k=1}^∞ 1{χ⁻_1 + ... + χ⁻_k < x}; S(1) > 0] = E[H(x + S(1)) − H(S(1)); S(1) > 0]

and

E[Σ_{k=1}^∞ 1{χ⁻_1 + ... + χ⁻_k < x}; S(1) ∈ (−x, 0]] = E[H(x + S(1)); S(1) ∈ (−x, 0]].

Consequently,

H(x) = E[H(x + S(1)); x + S(1) > 0] + 1 − E[H(S(1)); S(1) > 0].

Thus, it remains to show that

E[H(S(1)); S(1) > 0] = 1. (67)

By the total probability law,

H(x) = 1 + Σ_{k=1}^∞ P(χ⁻_1 + χ⁻_2 + ... + χ⁻_k < x)
= 1 + Σ_{k=1}^∞ Σ_{n=1}^∞ P(U⁻_k = n, S(n) ∈ (−x, 0])
= 1 + Σ_{n=1}^∞ P(S(n) ≤ S(j) for all j ≤ n, S(n) ∈ (−x, 0]),

where U⁻_k denotes the k-th weak descending ladder epoch. Applying now the duality lemma, we obtain

H(x) = 1 + Σ_{n=1}^∞ P(S(j) ≤ 0 for all j ≤ n, S(n) ∈ (−x, 0]).
Therefore,

E[H(S(1)); S(1) > 0] = ∫_0^∞ H(y) P(S(1) ∈ dy)
= P(S(1) > 0) + Σ_{n=1}^∞ ∫_0^∞ P(S(j) ≤ 0 for all j ≤ n, S(n) ∈ (−y, 0]) P(S(1) ∈ dy)
= P(S(1) > 0) + Σ_{n=1}^∞ P(S(j) ≤ 0 for all j ≤ n, S(n + 1) > 0)
= P(τ⁺ = 1) + Σ_{n=1}^∞ P(τ⁺ = n + 1) = P(τ⁺ < ∞). (68)

Noting now that the assumption lim sup_{n→∞} S(n) = ∞ implies that P(τ⁺ < ∞) = 1, we complete the proof of (67).

Remark 6.6. Formula (68) implies that E[H(S(1)); S(1) > 0] < 1 provided that lim_{n→∞} S(n) = −∞ with probability one. Consequently,

E[H(x + S(1)); x + S(1) > 0] < H(x), x > 0

for any random walk which tends to −∞. ⋄

Under the conditions of Proposition 6.4 one also has the following uniform upper bound.

Lemma 6.7. Assume that the conditions of Proposition 6.4 hold. Then there exists a constant C such that, uniformly in x > 0,

P(τ_x > n)/P(τ_0 > n) ≤ C H(x), n ≥ 0.

Proof. Combining the representation (61) with the Markov inequality, we obtain

P(τ_x ≥ n) = P(Σ_{k=1}^{η(x)} τ⁻_k ≥ n) = P(Σ_{k=1}^{η(x)} (τ⁻_k ∧ n) ≥ n) ≤ (1/n) E[Σ_{k=1}^{η(x)} (τ⁻_k ∧ n)].

Applying now the Wald identity and recalling that E[η(x)] = H(x), we conclude that

P(τ_x ≥ n) ≤ H(x) E[τ_0 ∧ n]/n. (69)

Due to (64),

E[τ_0 ∧ n] = Σ_{k=0}^{n−1} P(τ_0 > k) ∼ n P(τ_0 > n)/(1 − ϱ) as n → ∞.

Combining this with (69), we infer that there exists a constant C such that

P(τ_x ≥ n) ≤ C H(x) P(τ_0 ≥ n) for all x > 0 and all n ≥ 1.

Thus, the proof is complete.

The existence of an increasing harmonic function for the walk killed at τ_x allows one to obtain an alternative upper bound for the tail of the stopping time τ_x, uniform in the starting point x.

Lemma 6.8. For every oscillating random walk one has

P(τ_x > n) ≤ H(x)/E[H(S(n)); S(n) > 0], n ≥ 1 (70)

for all x ≥ 0.
If (53) is valid then there exists a constant C such that

P(τ_x > n) ≤ C V(x)/√n, n ≥ 1 (71)

for all x ≥ 0.

Proof. Applying Lemma 5.2 to the stopping time τ_x, we have

P(x + S(n) > y | τ_x > n) ≥ P(S(n) > y) for every y ∈ R.

Combining this with the fact that H increases, we conclude that

E[H(x + S(n)) | τ_x > n] ≥ E[H(S(n)); S(n) > 0].

Since, by the harmonicity of H, E[H(x + S(n)); τ_x > n] = H(x), this is equivalent to (70).

If (53) holds, then we can combine (65) and (70) to conclude that

P(τ_x > n) ≤ V(x)/E[V(S(n)); S(n) > 0], n ≥ 1.

Thus, it remains to bound the denominator from below. Recall that V(x) ∼ x as x → ∞. Combining this with the central limit theorem, we obtain

lim inf_{n→∞} E[V(S(n)); S(n) > 0]/√n ≥ lim inf_{n→∞} V(√n) P(S(n) > √n)/√n = 1 − Φ(1/σ) > 0,

where Φ denotes the standard normal distribution function. This completes the proof of the lemma.

The existence of a positive harmonic function V for S(n) killed at leaving (0, ∞) allows one to perform the Doob h-transform and to define a random walk conditioned to stay positive at all times. This is a Markov process given by the transition kernel

P̂(x, dy) = (V(y)/V(x)) P(x + S(1) ∈ dy, τ_x > 1), x, y > 0.

Let P̂ denote the corresponding probability measure. The connection between this new measure and the conditioning on {τ_x > n} is given by the following lemma.

Lemma 6.9. Assume that (53) holds. Then, for every fixed k ≥ 1 and every A ∈ σ(S(1), S(2), . . . , S(k)),

P̂_x(A) = lim_{n→∞} P(A | τ_x > n).

Proof. By the Markov property at time k,

P(A, τ_x > n) = ∫_0^∞ P(x + S(k) ∈ dz, A ∩ {τ_x > k}) P(τ_z > n − k).

Consequently,

P(A | τ_x > n) = ∫_0^∞ P(x + S(k) ∈ dz, A ∩ {τ_x > k}) P(τ_z > n − k)/P(τ_x > n).

Using (58), we conclude that

lim_{n→∞} P(τ_z > n − k)/P(τ_x > n) = V(z)/V(x) for every z > 0.
Furthermore, combining the finiteness of E X_1² and (71), we infer that the Lebesgue dominated convergence theorem applies. As a result,

lim_{n→∞} P(A | τ_x > n) = ∫_0^∞ P(x + S(k) ∈ dz, A ∩ {τ_x > k}) V(z)/V(x).

By the definition of P̂, the integral on the right-hand side equals P̂_x(A). Thus, the proof is complete.

Bertoin and Doney [1] have shown that the measure P̂ can also be obtained by conditioning the walk on the event

D_{N,x} = {the sequence {x + S(n)} hits (N, ∞) before (−∞, 0]}.

More precisely, they have proven that

lim_{N→∞} P(A | D_{N,x}) = P̂_x(A)

for every A ∈ σ(S(1), S(2), . . . , S(k)) with some fixed k.

One can derive limit theorems for the walk S(n) under the measure P̂ from Theorem 5.12 specialised to the case of i.i.d. increments. In Section 8 we shall formulate and prove a more general result, which is valid for Doob h-transforms of Markov chains.

7 Local Limit Theorem for Conditional Distributions

Specialising Theorem 5.12 to the case of i.i.d. increments, we obtain

P((x + S(n))/(σ√n) > y | τ_x > n) → e^{−y²/2} for every y > 0. (72)

It is then rather natural to expect that the local probabilities also behave regularly and that one has

P(x + S(n) ∈ (y, y + 1] | τ_x > n) = P((x + S(n))/(σ√n) ∈ (y/(σ√n), (y + 1)/(σ√n)] | τ_x > n)
≈ exp(−y²/(2σ²n)) − exp(−(y + 1)²/(2σ²n)) ≈ (y/(σ²n)) exp(−y²/(2σ²n)).

In this section we derive this relation rigorously for lattice random walks.

Theorem 7.1. Assume that X_1 takes values on Z, has period d, and E X_1 = 0, E X_1² = σ² ∈ (0, ∞). Then, for every fixed x ≥ 0,

sup_{y: y−x ∈ D_n} |√n P(x + S(n) = y | τ_x > n) − (d y/(σ²√n)) e^{−y²/(2σ²n)}| → 0, n → ∞.

Furthermore, P(x + S(n) = y | τ_x > n) = 0 for all y such that y − x ∉ D_n.
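The local limit theorem can likewise be checked by simulation (this sketch is ours, not part of the original text). For the simple symmetric walk one has d = 2 and σ² = 1, and the parameters below are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of Theorem 7.1 for the simple symmetric walk (d = 2, sigma^2 = 1):
# P(x + S(n) = y | tau_x > n) should be close to (d*y/(sigma^2*n)) * exp(-y^2/(2*sigma^2*n)).
rng = np.random.default_rng(2)
x, n, trials = 1, 256, 80_000
steps = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
paths = x + np.cumsum(steps, axis=1, dtype=np.int32)
end = paths[paths.min(axis=1) > 0, -1]   # x + S(n) on the event {tau_x > n}

y = 15                                    # y - x must lie in D_n: here y is odd since n is even
emp = np.mean(end == y)
thry = (2 * y / n) * np.exp(-y**2 / (2 * n))
print(emp, thry)                          # pointwise local probabilities
print(np.mean(end) / np.sqrt(n), np.sqrt(np.pi / 2))  # Rayleigh mean of the meander endpoint
```

The second printed pair compares the mean of the rescaled endpoint with the mean sqrt(π/2) of the Rayleigh distribution, consistent with (72).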
This local limit theorem can be proved by using recursive formulae for conditional local probabilities, which follow from the Wiener-Hopf factorisation identities; see [21]. The proof we present below is a simplified version of the proof in [7], where multidimensional walks in cones have been considered. Since factorisation techniques do not apply in higher dimensions, the proof strategy below is more robust than the strategy in [21].

We start by deriving some upper bounds for local probabilities.

Lemma 7.2. There exist constants C_1 and C_2 such that

P(x + S(n) = y, τ_x > n) ≤ C_1 n^{−1/2} P(τ_x > n/2) ≤ C_2 V(x)/n.

Proof. It is immediate from (20) and (21) that there exists a constant C_0 such that

P(S(j) = z) ≤ C_0/√j uniformly in z ∈ Z. (73)

Therefore, applying the Markov property at time m = ⌊n/2⌋, we obtain

P(x + S(n) = y, τ_x > n) = Σ_{z>0} P(x + S(m) = z, τ_x > m) P(z + S(n − m) = y, τ_z > n − m)
≤ Σ_{z>0} P(x + S(m) = z, τ_x > m) P(S(n − m) = y − z)
≤ (C_0/√(n − m)) P(τ_x > m).

Thus, the first inequality is proved. The second one follows from (71).

Lemma 7.3. For all x, y > 0 we have

P(x + S(n) = y, τ_x > n) ≤ C_3 (x + 1)(y + 1)/n^{3/2}.

Proof. By the Markov property at time m = ⌊n/2⌋,

P(x + S(n) = y, τ_x > n) = Σ_{z>0} P(x + S(m) = z, τ_x > m) P(z + S(n − m) = y, τ_z > n − m).

We now reverse the time in the second probability:

P(z + S(n − m) = y, τ_z > n − m) = P(y − S(n − m) = z, τ′_y > n − m),

where τ′_y = inf{k ≥ 1 : y − S(k) ≤ 0}; set V′(y) = E[S(τ′_y)]. Applying Lemma 7.2 to the random walk {−S(n)}, we get

P(x + S(n) = y, τ_x > n) ≤ (C_1/n) V′(y) P(τ_x > n/2) ≤ C_2 V′(y) V(x)/n^{3/2}.

It remains to recall that V(x) ∼ x and V′(y) ∼ y as x, y → ∞ and to notice that this implies the bound V(x) V′(y) ≤ C (x + 1)(y + 1).

Lemma 7.4.
Ther e exist c onstants C , a > 0 such that lim sup n →∞ sup | y | >u √ n √ n P ( S ( n ) = y ) ≤ C e − au 2 and lim sup n →∞ sup x,z ∈ M n,u √ n P ( x + S ( n ) = z , τ x ≤ n ) ≤ C e − au 2 wher e M n,u := { z : z ≥ u √ n } . Pr o of. Set again m = ⌊ n 2 ⌋ . F or every y with | y | ≥ u √ n we hav e P ( S ( n ) = y ) = P ( S ( n ) = y , | S ( m ) | ≥ u √ n/ 2) + P ( S ( n ) = y , | S ( m ) | < m √ n/ 2) = P ( S ( n ) = y , | S ( m ) | ≥ u √ n/ 2) + P S ( n ) = y , | S ( n ) − S ( m ) | ≥ u √ n/ 2 . Using the Mark ov prop erty and taking in to accoun t (73), we obtain P ( S ( n ) = y , | S ( n ) | ≥ u √ n/ 2) = X z : | z | >u √ n/ 2 P ( S ( m ) = z ) P ( S ( n − m ) = y − z ) ≤ C √ n − m X z : | z | >u √ n/ 2 P ( S ( m ) = z ) = C √ n − m P ( | S ( m ) | ≥ u √ n/ 2) 54 and P ( S ( n ) = y , | S ( n ) − S ( m ) | ≥ u √ n/ 2) = X z : | z − y | >u √ n/ 2 P ( S ( m ) = z ) P ( S ( n − m ) = y − z ) ≤ C √ m X z : | z − y | >u √ n/ 2 P ( S ( n − m ) = y − z ) = C √ m P ( | S ( n − m ) | ≥ u √ n/ 2) . Applying now the cen tral limit theorem, we obtain the first estimate. T o pro ve the second estimate, we notice that if z ≥ u √ n then P ( x + S ( n ) = z , τ x ≤ n/ 2) = n/ 2 X k =1 0 X y = −∞ P ( x + S ( n ) = y , τ x = k ) P ( y + S ( n − k ) = z ) ≤ max n 2 ≤ k ≤ n sup | y |≥ u √ n P ( S ( k ) = y ) . F urthermore, x ≥ u √ n then, in verting the time, w e obtain P x + S ( n ) = z , n 2 < τ x ≤ n ≤ P ( z − S ( n ) = x, τ ′ z ≤ n/ 2) ≤ max n 2 ≤ k ≤ n sup | y |≥ u √ n P ( S ( k ) = y ) . Using now the first claim, w e finish the pro of. Pr o of of The or em 7.1. Assume first that y is such that y − x / ∈ D n . Then, by (21), P ( x + S ( n ) = y , τ x > n ) ≤ P ( S ( n ) = x − y ) = 0 . Th us, it remains to consider v alues y suc h that y − x ∈ D n . Fix ε > 0 and A > 0 . 
Since z e − z 2 / 2 go es to zero as z → 0 and z → ∞ , it suffices to sho w that lim ε → 0 lim sup n →∞ n max y ≤ ε √ n P ( x + S ( n ) = y , τ x > n ) = 0 , (74) lim A →∞ lim sup n →∞ n max y ≥ A √ n P ( x + S ( n ) = y , τ x > n ) = 0 (75) and lim ε → 0 lim sup n →∞ sup y ∈ [ ε √ n,A √ n ] ∩ ( x + D n ) n P ( x + S ( n ) = y , τ x > n ) − y √ n e − y 2 / 2 √ n P ( τ x > n ) = 0 . (76) 55 Equation (74) is immediate from Lemma (7.3). T o sho w (75) we set m = ⌊ n 2 ⌋ and notice that P ( x + S ( n ) = y , τ x > n ) = P x + S ( n ) = y , τ x > u, | S ( m ) | ≤ A √ n 2 + P x + S ( n ) = y , τ x > n, | S ( m ) | > A √ n 2 . Using Marko v prop erty at time m and using first (73) and then Theorem 72, w e obtain P x + S ( n ) = y , τ x > n, | S ( m ) | > A √ n 2 ≤ X z > 0 P ( x + S ( m ) = z , τ x > m ) P ( z + S ( n − m ) = y ) ≤ C 1 √ n P | S ( m ) | > A √ n 2 , τ x > m ≤ C 2 n P | S ( m ) | > A √ n 2 τ x > m ≤ C 3 n e − A 2 / 2 . Applying Lemma 7.4 we obtain, P x + S ( n ) = y , τ x > n, | S ( m ) | ≤ A √ n ≤ P ( τ x > m ) sup z >A √ n/ 2 P ( S ( n − m ) = z ) ≤ C V ( x ) n e − aA 2 . Com bining these bounds and letting first n → ∞ and then A → ∞ , w e obtain (75). W e no w turn to the cen tral part: y ∈ [ ε √ n, A √ n ] ∩ ( x + D n ) . F or this range of y w e set m = ε 3 n and write the following representation: P ( x + S ( n ) = y , τ x > n ) = X z > 0 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y , τ z > m ) . Set R 1 = z : | z − y | < ε 2 √ n . W e hav e X z / ∈ R 1 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( n ) = y , τ z > m ) ≤ X z / ∈ R 1 P ( x + S ( n − m ) = z , τ x > n − m ) P ( S ( m ) = y − z ) ≤ P ( τ x > n − m ) max | w | >ε/ 2 √ n P ( S ( m ) = w ) . 56 Since m ∼ ε 3 n, ε 2 √ n ∼ ε 2 p m ε 3 = 1 2 ε 1 / 2 √ m , using Lemma 7.4, w e conclude that X z / ∈ R 1 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y , τ z > m ) ≤ C V ( x ) √ n · C √ ε 3 n e − a/ε = C V ( x ) n ε − 3 / 2 e − a/ε . 
(77) W e can no w inter that this part is negligible due to the fact that ε − 3 / 2 e − a/ε → 0 as ε → 0 . It remains to consider z ∈ R 1 . Here w e use P ( z + S ( m ) = y , τ z > m ) = P ( z + S ( m ) = y ) − P ( z + S ( m ) = y , τ z ≤ m ) . Applying Lemma 7.4 once again, w e hav e | P ( z + S ( m ) = y , τ x > m ) − P ( z + S ( m ) = y ) | ≤ C √ n ε − 3 / 2 e − a/ε , uniformly in z ∈ R 1 , y > ε √ n . Therefore, X z ∈ R 1 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y , τ z > m ) − X z ∈ R 1 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y ) ≤ C V ( x ) n ε − 3 / 2 e − a/ε . Com bining this with (77), w e obtain P ( x + S ( n ) = y , τ x > n ) − X z > 0 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y ) ≤ C V ( x ) n ε − 3 / 2 e − a/ε . W e kno w from (20), (21) that P ( z + S ( m ) = y ) = d √ 2 π m e − ( y − z ) 2 / 2 m + o 1 √ m uniformly in z and y suc h that y − z ∈ D m and P ( z + S ( m ) = y ) = 0 if y − z / ∈ D m . 57 Therefore, X z > 0 P ( x + S ( n − m ) = z , τ x > n − m ) P ( z + S ( m ) = y ) = X z > 0 P ( x + S ( n − m ) = z , τ x > n − m ) d √ 2 π n ε − 3 / 2 e − ( y − z ) 2 / 2 ε 3 n + o 1 n = d ε 3 / 2 √ 2 π n E exp − ( y − x − S ( n − m )) 2 2 ε 3 n ; τ x > n − m + o 1 n = d P ( τ x > n − m ) ε 3 / 2 √ 2 π n E " exp ( − ( S ( n − m ) + x − y ) 2 2 ε 3 1 − ε 3 ( n − m ) | τ x > n − m # + o 1 n . It follows from (72) that E " exp ( − ( S ( n − m ) + x − y ) 2 2 ε 3 1 − ε 3 ( n − m ) τ x > n − m # = Z ∞ 0 ue − u 2 / 2 exp ( − 1 − ε 3 2 ε 3 u − y √ n − w 2 ) + o (1) . It remains to notice that, as ε → 0 , the follo wing con vergence takes place 1 √ 2 π ε − 3 / 2 Z ∞ 0 ue − u 2 / 2 exp ( − 1 − ε 3 2 ε 3 ( u − v ) 2 ) du → v e − v 2 / 2 . 8 Mark o v c hains In this section w e shall demonstrate that the universalit y method used in Sec- tions 5 and 6 w orks also in the case when the incremen ts of the walk are no longer indep endent. 
We shall consider a time-homogeneous real-valued Markov chain {X(n)} and study the first time when this chain becomes non-positive:

τ := inf{n ≥ 1 : X(n) ≤ 0}.

Let ξ(x) denote the jump of the chain {X(n)} from the state x, that is,

P(ξ(x) ∈ B) = P(X_1 − x ∈ B | X_0 = x).

We shall use the standard convention for Markov chains and denote by P_x the distribution of the chain conditioned on X_0 = x. We shall assume that

E[ξ(x)] = 0 and E[ξ²(x)] = 1 for all x. (78)

Furthermore, we shall assume that there exists a positive random variable Y with a finite second moment such that, for all x,

P(|ξ(x)| > y) ≤ P(Y > y), y ≥ 0. (79)

We shall also assume, without loss of generality, that the tail of Y is regularly varying with index −2.

Assumptions (78) and (79) ensure that one can apply martingale versions of the central limit theorem to the chain {X(n)}. This implies that the sequence

x^{(n)}(t) := (X(⌊nt⌋) + (nt − ⌊nt⌋)(X(⌊nt⌋ + 1) − X(⌊nt⌋)))/√n

converges weakly on C[0, 1] towards the standard Brownian motion B(t). Thus, one may expect that the tail behaviour of τ should be similar to the tail behaviour of the corresponding exit time for B(t). The moment assumptions in (78) are imposed only to simplify the technical arguments.

A rather similar universality idea has been used in the monograph [3] to study stopping times for chains with asymptotically zero drift. This class of chains is characterised by the following moment assumptions on the jumps:

E[ξ(x)] ∼ μ/x and E[ξ²(x)] ∼ b ∈ (0, ∞) as x → ∞.

Under these assumptions one compares discrete-time chains with Bessel processes. This fact underlines that the universality method is not restricted to the Brownian motion. Below we apply the universality idea to the behaviour of the Green functions of a killed discrete-time Markov chain and of the killed Brownian motion.
This t yp e of the universalit y has b een earlier used in [9] to construct harmonic functions for m ultidimensional random walks killed at lea ving a cone. The approach in [3] is more sophisticated and do es not use Green functions. The approach described b elow is a simplified version of the work [10], where m ulti-dimensional Marko v chains in cones hav e b een considered. 8.1 Construction of the p ositiv e harmonic function As w e hav e seen in the analysis of random w alks, the dep endence of τ from the starting p oint x is describ ed by an appropriate p ositive harmonic function for a killed random walk. So, we first construct suc h a function for X ( n ) killed at lea ving half-axis (0 , ∞ ) . T o this end w e shall apply the univ ersality idea to Green functions. Let us start with an example, which illustrates the role of Green functions. Assume for a moment that { X ( n ) } is integer-v alued. Set G ( x, y ) = ∞ X n =0 P x ( X n = y , τ > n ) , x, y > 0 59 and a ∗ ( x ) := E x [ X 1 ; X 1 > 0] − x, x > 0 . Then the function U ∗ ( x ) := x + ∞ X y =1 G ( x, y ) a ∗ ( y ) , x > 0 (80) is harmonic for { X ( n ) } killed at the stopping time τ . Indeed, E x [ U ∗ ( X 1 ); τ > 1] = ∞ X z =1 P x ( X 1 = z ) U ∗ ( z ) = E x [ X 1 ; X 1 > 0] + ∞ X z =1 P x ( X 1 = z , τ > 1) ∞ X n =0 ∞ X y =1 P z ( X n = y , τ > n ) a ∗ ( y ) = x + a ∗ ( x ) + ∞ X n =0 ∞ X y =1 P x ( X n +1 = y , τ > n + 1) a ∗ ( y ) = x + ∞ X y =1 G ( x, y ) a ∗ ( y ) . The first summand in the definition of U ∗ is the harmonic function for B ( t ) killed at leaving (0 , ∞ ) ; the second one is a correction. Since w e ha ve no information ab out the Green function G ( x, y ) we cannot even guarantee that the series P ∞ y =1 G ( x, y ) a ∗ ( y ) con verges. T o av oid this problem we can use instead the Green function of the Bro wnian motion given by 2 min( x, y ) which (in view of in v ariance principle) w ould giv e approximately the required harmonic function. 
This suggests to make use of the follo wing function U ∗∗ ( x ) := x + 2 ∞ X y =1 min( x, y ) a ∗ ( y ) . W e will need then to correct this function a little bit more to get rid of errors of the diffusion approximation to obtain the final harmonic function. W e first construct a sup erharmonic function W ( x ) using the ma joran t of a ∗ ( x ) given by a ( x ) = − E [ x − Y ; x − Y ≤ 0] = Z ∞ x P ( Y ≥ y ) dy . Instead of the series P ∞ y =1 min( x, y ) a ∗ ( y ) w e then shall use the correction m ( x ) = Z ∞ 0 min( x, y ) a ( y ) dy . 60 Putting b ( x ) = Z ∞ x a ( y ) dy w ee can rewrite the correction term applying the integration b y parts as follo ws m ( x ) = Z x 0 y a ( y ) dy + xb ( x ) = − Z x 0 y b ′ ( y ) dy + xb ( x ) = Z x 0 b ( y ) dy . Lemma 8.1. Ther e exist p ositive c onstants A and R such that function W ( x ) = x + R + Am ( x + R ) is sup erharmonic for X kil le d at τ . In other wor ds E x [ W ( X (1)); τ > 1] ≤ W ( x ) for al l x ≥ 0 (81) or, e quivalently, the se quenc e W ( X ( n ))1 { τ > n } , n ≥ 0 is a sup ermartingale. Pr o of. W e w ant to sho w that ∆( x ) := E x [ W ( X (1)); τ > 1] − W ( x ) ≤ 0 for all x ≥ 0 . for a suitable choice of A and R . Recalling that ξ ( x ) denotes the jump from the state x , w e ha ve ∆( x ) = E [ W ( x + ξ ( x )) − W ( x )] ≤ E [ W ( x + ξ ( x )) − W ( x ); | ξ ( x ) | < x + R ] + E [ W ( x + ξ ( x )) − W ( x ); ξ ( x ) ≥ x + R ] =: ∆ 1 ( x ) + ∆ 2 ( x ) . By the mean v alue theorem, ∆ 1 ( x ) = W ′ ( x ) E [ ξ ( x ); | ξ ( x ) | < x + R ] + 1 2 E [ W ′′ ( x + θ ξ ( x )) ξ 2 ( x ); | ξ ( x ) | < x + R ] . It is immediate from the definition of W ( y ) that W ′ ( y ) = 1 + Ab ( y + R ) and W ′′ ( y ) = − Aa ( y + R ) . Using these equalities and noting that a ( y + R ) decreases, w e obtain the estimate ∆ 1 ( x ) ≤ (1 + Ab ( x + R )) E [ ξ ( x ); | ξ ( x ) | < x + R ] − 1 2 a ( x + 2 R ) E [ ξ 2 ( x ); | ξ ( x ) | < x + R ] . 
It follows from the assumptions (78) and (79) that
$$
\mathbf{E}[\xi(x);|\xi(x)|<x+R]=-\mathbf{E}[\xi(x);|\xi(x)|\ge x+R]\le\mathbf{E}[Y;Y\ge x+R]=a(x+R)
$$
and
$$
\mathbf{E}[\xi^2(x);|\xi(x)|<x+R]=1-\mathbf{E}[\xi^2(x);|\xi(x)|\ge x+R]\ge 1-\mathbf{E}[Y^2;Y\ge R].
$$
Applying these bounds and noting that the function $b(y)$ is decreasing, we conclude that
$$
\Delta_1(x)\le(1+Ab(R))a(x+R)-\frac{A}{2}\bigl(1-\mathbf{E}[Y^2;Y\ge R]\bigr)a(x+2R).
$$
The assumption that the tail of $Y$ is regularly varying with index $-2$ implies that $a(y)$ is regularly varying with index $-1$. Thus, we can choose $R$ so large that
$$
a(x+2R)\ge\tfrac14\,a(x+R)\quad\text{for all }x\ge 0.
$$
Furthermore, $\mathbf{E}[Y^2;Y\ge R]\le\tfrac12$ for all sufficiently large $R$. Therefore, there exists $R_0$ such that
$$
\Delta_1(x)\le\Bigl(1+Ab(R)-\frac{A}{16}\Bigr)a(x+R).
$$
Furthermore, recalling that $m'(y)=b(y)$ is decreasing, we obtain
$$
\begin{aligned}
\Delta_2(x)&=\mathbf{E}[\xi(x);\xi(x)\ge x+R]+A\,\mathbf{E}[m(x+R+\xi(x))-m(x+R);\xi(x)\ge x+R]\\
&\le(1+Ab(R))a(x+R).
\end{aligned}
$$
Combining this with the bound for $\Delta_1(x)$, we conclude that
$$
\Delta(x)\le\Bigl(2+2Ab(R)-\frac{A}{16}\Bigr)a(x+R),\qquad x\ge 0,
$$
for all $R\ge R_0$. Since $\lim_{y\to\infty}b(y)=0$, there exists $R_1\ge R_0$ such that $b(R)<\tfrac{1}{64}$ for all $R\ge R_1$. Then, for all $R\ge R_1$ and all $A\ge 64$ we have $\Delta(x)\le 0$, $x\ge 0$. Thus, the proof is complete.

We can now make use of this superharmonic function to find a positive harmonic function for the killed Markov chain.

Lemma 8.2. Assume that (78) and (79) hold. Then the function
$$
V(x)=x-\mathbf{E}_x X(\tau)
$$
is positive harmonic for $\{X(n)\}$ killed at leaving $(0,\infty)$. Moreover,
$$
V(x)\le W(x)\quad\text{for all }x>0 \tag{82}
$$
and
$$
\lim_{x\to\infty}\frac{V(x)}{x}=1. \tag{83}
$$

Proof. The supermartingale property of the sequence $W(X(n))1\{\tau>n\}$ implies that
$$
\mathbf{E}_x[X(n);\tau>n]\le\mathbf{E}_x[W(X(n));\tau>n]\le W(x).
$$
(84)

Applying the optional stopping theorem to the martingale $X(n)$, we obtain
$$
\begin{aligned}
\mathbf{E}_x[X(n);\tau>n]&=\mathbf{E}_x[X(\tau\wedge n);\tau>n]=\mathbf{E}_x[X(\tau\wedge n)]-\mathbf{E}_x[X(\tau\wedge n);\tau\le n]\\
&=x-\mathbf{E}_x[X(\tau);\tau\le n],\qquad x>0. 
\end{aligned}\tag{85}
$$
Therefore,
$$
\mathbf{E}_x[-X(\tau);\tau\le n]\le W(x)-x=R+Am(x+R),\qquad x>0.
$$
Since $-X(\tau)\ge 0$, by the monotone convergence theorem we obtain
$$
0\le\mathbf{E}_x[-X(\tau)]\le R+Am(x+R).
$$
Thus, the function $V(x)$ is well defined. Moreover, since $W(x)-x=o(x)$ as $x\to\infty$, we also obtain (83).

The harmonicity follows from the following application of the Markov property and the assumption $\mathbf{E}_x[X(1)-x]=0$:
$$
\begin{aligned}
\mathbf{E}_x[V(X(1));\tau>1]&=\int_0^{\infty}\mathbf{P}_x(X(1)\in dy)\,\bigl(y-\mathbf{E}_y X(\tau)\bigr)\\
&=\mathbf{E}_x[X(1);\tau>1]-\mathbf{E}_x[X(\tau);\tau>1]\\
&=x-\mathbf{E}_x[X(1);\tau=1]-\mathbf{E}_x[X(\tau);\tau>1]=x-\mathbf{E}_x X(\tau)=V(x),
\end{aligned}
$$
where in the last step we used that $X(\tau)=X(1)$ on $\{\tau=1\}$. Due to (85),
$$
V(x)=\lim_{n\to\infty}\mathbf{E}_x[X(n);\tau>n].
$$
Applying now (84), we obtain (82). Thus, the proof is finished.

8.2 Coupling of the Markov chain with a simple random walk

To find asymptotics and upper bounds for $\mathbf{P}_x(\tau>n)$ we make use of the following coupling, which follows from Sakhanenko [18, Corollary 3].

Proposition 8.3. There exist sequences $\gamma_n=o(\sqrt n)$ and $\pi_n\to 0$ such that for each $n$ and $x$ one can construct a Markov chain $\{X(k)\}_{k=0}^{n}$ starting from $x$ and a symmetric simple random walk $\{S(k)\}_{k=0}^{n}$ on the same probability space in such a way that
$$
\mathbf{P}_x\Bigl(\sup_{k\le n}|X(k)-x-S(k)|>\gamma_n\Bigr)\le\pi_n. \tag{86}
$$

Remark 8.4. Here $\pi_n=D(\gamma_n)$ and
$$
D(\gamma_n)\le n\,\mathbf{E}\,h_3\Bigl(\frac{2Y}{\gamma_n}\Bigr)+n\,\mathbf{E}\,h_3\Bigl(\frac{2S(1)}{\gamma_n}\Bigr)
\quad\text{with }h_3(x)=\min(|x|^2,|x|^3).
$$
A similar statement can be made for the linearly interpolated Markov chain $x^{(n)}(t)$ and a Brownian motion $(B(t))_{t\ge 0}$ instead of a simple random walk.

8.3 Upper bounds for $\mathbf{P}_x(\tau>n)$

We first prove a crude estimate for $\mathbf{P}_x(\tau>n)$, which will later be used to obtain a rather sharp upper bound for this probability.

Lemma 8.5.
There exist a constant $C$ and a sequence $\delta_n\to 0$ such that
$$
\mathbf{P}_x(\tau>n)\le C\varepsilon+\delta_n\quad\text{for any }x\in[0,\varepsilon\sqrt n].
$$

Proof. To prove this, we make use of Proposition 8.3 and construct $X(k)$ and $x+S(k)$ on the same probability space. Let
$$
\widetilde\tau_x:=\inf\{k\ge 1:\,x+S(k)\le 0\}.
$$
Then, for sequences $\gamma_n=o(\sqrt n)$ and $\pi_n\to 0$,
$$
\begin{aligned}
\mathbf{P}_x(\tau>n)&\le\mathbf{P}\Bigl(x+\min_{1\le k\le n}(X(k)-x)>0,\ \max_{k\le n}|X(k)-x-S(k)|\le\gamma_n\Bigr)+\pi_n\\
&\le\mathbf{P}\Bigl(x+\gamma_n+\min_{1\le k\le n}S(k)>0\Bigr)+\pi_n=\mathbf{P}(\widetilde\tau_{x+\gamma_n}>n)+\pi_n.
\end{aligned}
$$
Applying (11), we hence obtain
$$
\mathbf{P}_x(\tau>n)\le C\,\frac{x+1+\gamma_n}{\sqrt n}+\pi_n\le C\varepsilon+C\,\frac{1+\gamma_n}{\sqrt n}+\pi_n.
$$
Thus, the desired inequality holds with $\delta_n=C(1+\gamma_n)/\sqrt n+\pi_n$.

Lemma 8.6. There exists a constant $C$ such that
$$
\mathbf{P}_x(\tau>n)\le C\,\frac{x+1}{\sqrt n} \tag{87}
$$
for any $x\ge 0$ and $n\ge 1$.

Proof. Since $\mathbf{P}_x(\tau>n)$ is decreasing in $n$, it is sufficient to prove the claim for $n=2^m$, $m\ge 1$. For every fixed $\varepsilon>0$ we have
$$
\mathbf{P}_x(\tau>n)\le\mathbf{P}_x(\tau>n,X(n/2)>\varepsilon\sqrt n)+\mathbf{P}_x(\tau>n,X(n/2)\le\varepsilon\sqrt n).
$$
Applying the Markov inequality to the first summand and recalling that the superharmonic function $W(x)$ constructed in Lemma 8.1 satisfies $x\le W(x)$ for all $x\ge 0$, we obtain
$$
\mathbf{P}_x(\tau>n,X(n/2)>\varepsilon\sqrt n)\le\frac{\mathbf{E}_x[X(n/2);\tau>n/2]}{\varepsilon\sqrt{n/2}}\le\frac{\mathbf{E}_x[W(X(n/2));\tau>n/2]}{\varepsilon\sqrt{n/2}}.
$$
Applying Lemma 8.1, we obtain the bound
$$
\mathbf{P}_x(\tau>n,X(n/2)>\varepsilon\sqrt n)\le\frac{W(x)}{\varepsilon\sqrt{n/2}}.
$$
It follows from the definition of $W$ that there exists a constant $C$ such that $W(x)\le C(x+1)$ for all $x\ge 0$. Therefore,
$$
\mathbf{P}_x(\tau>n,X(n/2)>\varepsilon\sqrt n)\le\sqrt 2\,C\,\frac{x+1}{\varepsilon\sqrt n}. \tag{88}
$$
Next, using the Markov property and applying Lemma 8.5, we obtain
$$
\begin{aligned}
\mathbf{P}_x(\tau>n,X(n/2)\le\varepsilon\sqrt n)&=\int_0^{\varepsilon\sqrt n}\mathbf{P}_x(\tau>n/2,X(n/2)\in dy)\,\mathbf{P}_y(\tau>n/2)\\
&\le C\,\mathbf{P}_x(\tau>n/2)(\varepsilon+\delta_n).
\end{aligned}
$$
The combination of the latter and the former bounds gives
$$
\mathbf{P}_x(\tau>n)\le\sqrt 2\,C\,\frac{x+1}{\varepsilon\sqrt n}+C\,\mathbf{P}_x(\tau>n/2)(\varepsilon+\delta_n).
$$
Now pick $\varepsilon$ such that $\widetilde\varepsilon:=2C\varepsilon<1/2$.
Let $n_0$ be such that $\delta_n\le\varepsilon$ for all $n\ge n_0$. We then obtain
$$
\mathbf{P}_x(\tau>n)\le\frac{x+1}{\varepsilon^2\sqrt n}+\widetilde\varepsilon\,\mathbf{P}_x(\tau>n/2).
$$
Now iterate this inequality $N$ times to obtain
$$
\mathbf{P}_x(\tau>n)\le\frac{x+1}{\varepsilon^2}\sum_{i=0}^{N-1}\widetilde\varepsilon^{\,i}\frac{1}{\sqrt{n/2^i}}+\widetilde\varepsilon^{\,N}\mathbf{P}_x(\tau>n/2^N).
$$
Take $N=\frac{\log n}{2\log(1/\widetilde\varepsilon)}$, so that $\widetilde\varepsilon^{\,N}=n^{-1/2}$, to obtain, for $n/2^N>n_0$,
$$
\mathbf{P}_x(\tau>n)\le\frac{x+1}{\varepsilon^2\sqrt n}\,\frac{1}{1-2\widetilde\varepsilon}+\frac{1}{\sqrt n}.
$$
The claim then follows.

8.4 Asymptotics for $\mathbf{P}_x(\tau>n)$ and a conditional limit theorem

We can now extend Corollary 2.3 and Corollary 2.5 from the symmetric simple random walk to Markov chains satisfying our assumptions.

Theorem 8.7. Assume that (78) and (79) are valid. Fix any sequence $\delta_n\downarrow 0$ and any $x_0>0$. Then one has the following statements.

(i) Uniformly in $x\in[x_0,\delta_n\sqrt n]$,
$$
\mathbf{P}_x(\tau>n)\sim\sqrt{\frac{2}{\pi}}\,\frac{V(x)}{\sqrt n},\qquad n\to\infty.
$$

(ii) For any $v\ge 0$, uniformly in $x\in[x_0,\delta_n\sqrt n]$,
$$
\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}>v\Bigm|\tau>n\Bigr)\to e^{-v^2/2}\quad\text{as }n\to\infty.
$$

Proof. Take $m=m(n)$ such that $m(n)/n\to 0$ sufficiently slowly. In particular, we shall assume, without loss of generality, that $x\le\sqrt{\delta_n m}$. Fix also $\varepsilon\in(0,1)$ and $A>1$. Using the Markov property at time $m$, we have
$$
\begin{aligned}
\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}>v,\tau>n\Bigr)&=P_1+P_2+P_3\\
&:=\int_0^{\varepsilon\sqrt m}\mathbf{P}_x(X(m)\in dy,\tau>m)\,\mathbf{P}_y\Bigl(\frac{X(n-m)}{\sqrt n}>v,\tau>n-m\Bigr)\\
&\quad+\int_{\varepsilon\sqrt m}^{A\sqrt m}\mathbf{P}_x(X(m)\in dy,\tau>m)\,\mathbf{P}_y\Bigl(\frac{X(n-m)}{\sqrt n}>v,\tau>n-m\Bigr)\\
&\quad+\int_{A\sqrt m}^{\infty}\mathbf{P}_x(X(m)\in dy,\tau>m)\,\mathbf{P}_y\Bigl(\frac{X(n-m)}{\sqrt n}>v,\tau>n-m\Bigr).
\end{aligned}
$$
Applying Lemma 8.6 we obtain
$$
\begin{aligned}
P_1&\le\int_0^{\varepsilon\sqrt m}\mathbf{P}_x(X(m)\in dy,\tau>m)\,\mathbf{P}_y(\tau>n-m)\\
&\le C\int_0^{\varepsilon\sqrt m}\mathbf{P}_x(X(m)\in dy,\tau>m)\,\frac{1+y}{n^{1/2}}\le 2C\varepsilon\sqrt{\frac{m}{n}}\,\mathbf{P}_x(\tau>m)
\end{aligned}
$$
for all $m\ge 1/\varepsilon^2$. Using Lemma 8.6 one more time and noting that $V(x)\ge x\ge x_0$ for all $x\ge x_0$, we conclude that
$$
P_1\le\varepsilon C_1\sqrt{\frac{m}{n}}\Bigl(1+\frac{x}{\sqrt m}\Bigr)\le\varepsilon C_2\,\frac{V(x)}{\sqrt n}. \tag{89}
$$
To bound $P_3$ we apply Lemma 8.6 again.
This gives
$$
\begin{aligned}
P_3&\le\frac{C}{n^{1/2}}\int_{A\sqrt m}^{\infty}\mathbf{P}_x(X(m)\in dy,\tau>m)(1+y)\\
&\le\frac{C}{n^{1/2}}\,\mathbf{E}_x[1+X(m);\tau>m,X(m)>A\sqrt m]\le\frac{2C}{n^{1/2}}\,\mathbf{E}_x[X(m);\tau>m,X(m)>A\sqrt m].
\end{aligned}
$$
Next, for all $x\le\sqrt{\delta_n m}$,
$$
\begin{aligned}
\mathbf{E}_x[X(m);\tau>m,X(m)>A\sqrt m]&\le x\,\mathbf{P}_x(\tau>m)+\mathbf{E}_x[X(m)-x;\tau>m,X(m)>A\sqrt m]\\
&\le C\,\frac{(1+x)^2}{\sqrt m}+\frac{\mathbf{E}_x[(X(m)-x)^2;\tau>m,\ X(m)-x>(A-\delta_n^{1/2})\sqrt m]}{(A-\delta_n^{1/2})\sqrt m}.
\end{aligned}
$$
The assumption $x\le\sqrt{\delta_n m}$ also implies that $\frac{1+x}{\sqrt m}\le 2\delta_n^{1/2}$. Using the fact that $(X(n)-x)^2-n$ is a martingale, we have
$$
\begin{aligned}
0=\mathbf{E}_x\bigl[(X(\tau\wedge m)-x)^2-\tau\wedge m\bigr]&=\mathbf{E}_x[(X(m)-x)^2-m;\tau>m]+\mathbf{E}_x[(X(\tau)-x)^2-\tau;\tau\le m]\\
&\ge\mathbf{E}_x[(X(m)-x)^2-m;\tau>m]-\mathbf{E}_x[\tau;\tau\le m].
\end{aligned}
$$
Therefore,
$$
\mathbf{E}_x[(X(m)-x)^2;\tau>m]\le m\,\mathbf{P}_x(\tau>m)+\mathbf{E}_x[\tau;\tau\le m]\le C\,V(x)\sqrt m.
$$
Then,
$$
P_3\le C\,\frac{V(x)}{n^{1/2}}\Bigl(\frac1A+\delta_n\Bigr). \tag{90}
$$
By taking $m(n)$ sufficiently large (but still $o(n)$), we can ensure, using Proposition 8.3, that
$$
\mathbf{P}_y\Bigl(\frac{X(n-m)}{\sqrt n}>v,\tau>n-m\Bigr)\sim\mathbf{P}\Bigl(\frac{y+S(n-m)}{\sqrt n}>v,\widetilde\tau_y>n-m\Bigr), \tag{91}
$$
uniformly in $y\in[\varepsilon\sqrt m,A\sqrt m]$. Corollary 2.5 then implies that
$$
\mathbf{P}\Bigl(\frac{y+S(n-m)}{\sqrt n}>v,\widetilde\tau_y>n-m\Bigr)\sim\sqrt{\frac2\pi}\,\frac{y}{n^{1/2}}\,e^{-v^2/2},
$$
uniformly in $y\in[\varepsilon\sqrt m,A\sqrt m]$. Then,
$$
P_2\sim\sqrt{\frac{2}{\pi n}}\,e^{-v^2/2}\int_{\varepsilon\sqrt m}^{A\sqrt m}\mathbf{P}_x(X(m)\in dy,\tau>m)\,y.
$$
Using the bounds for $P_1$ and $P_3$, we obtain
$$
\Bigl|P_2-\sqrt{\frac{2}{\pi n}}\,e^{-v^2/2}\int_0^{\infty}\mathbf{P}_x(X(m)\in dy,\tau>m)\,y\Bigr|
\le\sqrt{\frac{2}{\pi n}}\,e^{-v^2/2}\int_{y\notin[\varepsilon\sqrt m,\,A\sqrt m]}\mathbf{P}_x(X(m)\in dy,\tau>m)\,y
\le C\,\frac{V(x)}{n^{1/2}}\Bigl(\varepsilon+\frac1A+\delta_n\Bigr). \tag{92}
$$
Combining (89), (90) and (92), the statement of part (ii) follows:
$$
\Bigl|\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}>v,\tau>n\Bigr)-\sqrt{\frac2\pi}\,\frac{e^{-v^2/2}}{n^{1/2}}\,\mathbf{E}_x[X(m);\tau>m]\Bigr|\le C\,V(x)\,\frac{\varepsilon+\frac1A+\delta_n}{n^{1/2}},
$$
uniformly in $x\le\sqrt{\delta_n m}$. Letting $\varepsilon\to 0$ and $A\to\infty$, and recalling that, by (85), $\mathbf{E}_x[X(m);\tau>m]\to V(x)$ as $m\to\infty$, we obtain
$$
\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}>v,\tau>n\Bigr)\sim\sqrt{\frac2\pi}\,\frac{V(x)}{n^{1/2}}\,e^{-v^2/2} \tag{93}
$$
and, taking $v=0$,
$$
\mathbf{P}_x(\tau>n)\sim\sqrt{\frac2\pi}\,\frac{V(x)}{\sqrt n}.
$$
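As a numerical sanity check (ours, not part of the notes), Theorem 8.7 can be tested in the special case of the symmetric simple random walk, the case already covered by Corollaries 2.3 and 2.5. There $V(x)=x$, because the walk leaves $(0,\infty)$ exactly at level $0$, so $\mathbf{E}_x X(\tau)=0$. The sketch below (the helper `survival_distribution` and all tolerances are ours) computes $\mathbf{P}_x(X(n)=y,\tau>n)$ by dynamic programming and compares it with parts (i) and (ii).

```python
import math

# Illustration (ours): Theorem 8.7 for the symmetric simple random walk,
# where the harmonic function is V(x) = x since E_x X(tau) = 0.

def survival_distribution(x: int, n: int) -> dict[int, float]:
    """Return {y: P_x(X(n) = y, tau > n)} for the walk killed at (-inf, 0]."""
    p = {x: 1.0}
    for _ in range(n):
        q: dict[int, float] = {}
        for y, w in p.items():
            for z in (y - 1, y + 1):
                if z > 0:  # paths entering (-inf, 0] are killed
                    q[z] = q.get(z, 0.0) + 0.5 * w
        p = q
    return p

n = 1000
# Part (i): P_x(tau > n) ~ sqrt(2/pi) * V(x) / sqrt(n), V(x) = x.
for x in (1, 3, 10):
    surv = sum(survival_distribution(x, n).values())
    asym = math.sqrt(2 / math.pi) * x / math.sqrt(n)
    assert abs(surv / asym - 1) < 0.05, (x, surv, asym)

# Part (ii): P_x(X(n)/sqrt(n) > v | tau > n) -> exp(-v^2 / 2).
x, v = 1, 1.0
p = survival_distribution(x, n)
tail = sum(w for y, w in p.items() if y > v * math.sqrt(n)) / sum(p.values())
assert abs(tail - math.exp(-v * v / 2)) < 0.05, tail
print("consistent with Theorem 8.7 for the simple random walk")
```

With $n=1000$ the relative errors are already at the percent level for moderate $x$; the tolerances above are deliberately loose to absorb lattice-parity and continuity-correction effects of order $n^{-1/2}$.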
8.5 Markov chain conditioned to stay positive

Similarly to the case of random walks, the harmonic function $V(x)$ can be used to perform the Doob $h$-transform. Let $\widehat{\mathbf{P}}$ denote the corresponding measure:
$$
\widehat{\mathbf{P}}_x(\widehat X(n)\in B)=\int_B\frac{V(y)}{V(x)}\,\mathbf{P}_x(X(n)\in dy,\tau>n)
$$
for all $x,n>0$ and all Borel subsets $B$ of $(0,\infty)$. As in the case of walks with i.i.d. increments, the measure $\widehat{\mathbf{P}}$ can be obtained by conditioning of the original chain.

Lemma 8.8. For every $x>0$ and for every $A\in\sigma(X(1),X(2),\ldots,X(k))$ with some fixed $k\ge 1$,
$$
\widehat{\mathbf{P}}_x(A)=\lim_{n\to\infty}\mathbf{P}_x(A\mid\tau>n).
$$

The proof of this lemma is verbatim the proof of Lemma 6.9, and we omit it.

Using Theorem 8.7 one can obtain a limit theorem for the chain $\{X(n)\}$ under the new measure $\widehat{\mathbf{P}}$. The following result is thus an extension of Proposition 2.6, where simple random walks have been considered.

Theorem 8.9. Assume that the assumptions (78) and (79) are valid. Then, for every fixed $x>0$,
$$
\lim_{n\to\infty}\widehat{\mathbf{P}}_x\Bigl(\frac{X(n)}{\sqrt n}\ge v\Bigr)=\sqrt{\frac2\pi}\int_v^{\infty}u^2e^{-u^2/2}\,du\quad\text{for every }v>0.
$$

Proof. The claim is equivalent to
$$
\lim_{n\to\infty}\widehat{\mathbf{P}}_x\Bigl(\frac{X(n)}{\sqrt n}\in(v_1,v_2]\Bigr)=\sqrt{\frac2\pi}\int_{v_1}^{v_2}u^2e^{-u^2/2}\,du\quad\text{for all }v_2>v_1>0. \tag{94}
$$
By the definition of the measure $\widehat{\mathbf{P}}$ and by (83),
$$
\begin{aligned}
\widehat{\mathbf{P}}_x\bigl(X(n)\in(v_1\sqrt n,v_2\sqrt n]\bigr)&=\frac{1}{V(x)}\int_{v_1\sqrt n}^{v_2\sqrt n}V(y)\,\mathbf{P}_x(X(n)\in dy,\tau>n)\\
&=\frac{1+o(1)}{V(x)}\int_{v_1\sqrt n}^{v_2\sqrt n}y\,\mathbf{P}_x(X(n)\in dy,\tau>n)\\
&=(1+o(1))\,\frac{\mathbf{P}_x(\tau>n)}{V(x)}\int_{v_1\sqrt n}^{v_2\sqrt n}y\,\mathbf{P}_x(X(n)\in dy\mid\tau>n).
\end{aligned}
$$
Applying now Theorem 8.7(i), we conclude that
$$
\begin{aligned}
\widehat{\mathbf{P}}_x\bigl(X(n)\in(v_1\sqrt n,v_2\sqrt n]\bigr)&=\sqrt{\frac2\pi}\,(1+o(1))\int_{v_1\sqrt n}^{v_2\sqrt n}\frac{y}{\sqrt n}\,\mathbf{P}_x(X(n)\in dy\mid\tau>n)\\
&=\sqrt{\frac2\pi}\,(1+o(1))\int_{v_1}^{v_2}u\,\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}\in du\Bigm|\tau>n\Bigr).
\end{aligned}
$$
Since the limiting law in Theorem 8.7(ii) is absolutely continuous,
$$
\lim_{n\to\infty}\int_{v_1}^{v_2}u\,\mathbf{P}_x\Bigl(\frac{X(n)}{\sqrt n}\in du\Bigm|\tau>n\Bigr)=\int_{v_1}^{v_2}u\cdot ue^{-u^2/2}\,du.
$$
This completes the proof of (94) and, consequently, the proof of the theorem.

References

[1] J.
Bertoin and R. A. Doney. On conditioning a random walk to stay nonnegative. Ann. Probab., 22(4):2152–2167, 1994.

[2] A. A. Borovkov. Probability Theory. Universitext. Springer, London, 2013.

[3] D. Denisov, D. Korshunov, and V. Wachtel. Markov Chains with Asymptotically Zero Drift: Lamperti's Problem. New Mathematical Monographs. Cambridge University Press, 2025.

[4] D. Denisov, A. Sakhanenko, and V. Wachtel. First passage times for random walks with nonidentically distributed increments. Ann. Probab., 46(6):3313–3350, 2018.

[5] D. Denisov, A. Tarasov, and V. Wachtel. Berry–Esseen inequality for random walks conditioned to stay positive. Submitted, 2025.

[6] D. Denisov and V. Wachtel. Conditional limit theorems for ordered random walks. Electronic Journal of Probability, 15:292–322, 2010.

[7] D. Denisov and V. Wachtel. Random walks in cones. Ann. Probab., 43(3):992–1044, 2015.

[8] D. Denisov and V. Wachtel. Alternative constructions of a harmonic function for a random walk in a cone. Electronic Journal of Probability, 24, 2019.

[9] D. Denisov and V. Wachtel. Random walks in cones revisited. Ann. Inst. Henri Poincaré Probab. Stat., 60(1):126–166, 2024.

[10] D. Denisov and K. Zhang. Markov chains in the domain of attraction of Brownian motion in cones. J. Theoret. Probab., 38(1):Paper No. 14, 34 pp., 2025.

[11] R. A. Doney. Fluctuation Theory for Lévy Processes, volume 1897 of Lecture Notes in Mathematics. Springer, Berlin, 2007. Lectures from the 35th Summer School on Probability Theory held in Saint-Flour, July 6–23, 2005; edited and with a foreword by Jean Picard.

[12] R. Durrett. Conditioned limit theorems for some null recurrent Markov processes. Ann. Probab., 6(5):798–828, 1978.

[13] R. T. Durrett, D. L. Iglehart, and D. R. Miller. Weak convergence to Brownian meander and Brownian excursion. Ann.
Probability, 5(1):117–129, 1977.

[14] W. Feller. An Introduction to Probability Theory and Its Applications, Vol. II. John Wiley & Sons, Inc., New York–London–Sydney, second edition, 1971.

[15] P. Greenwood and M. Shaked. Dual pairs of stopping times for random walk. Ann. Probab., 6(4):644–650, 1978.

[16] D. A. Korshunov. An analogue of Wald's identity for random walks with infinite mean. Sibirsk. Mat. Zh., 50(4):836–840, 2009.

[17] A. E. Kyprianou. Fluctuations of Lévy Processes with Applications. Universitext. Springer, Berlin–Heidelberg, second edition, 2014.

[18] A. Sakhanenko. Simple method of obtaining estimates in the invariance principle. In Probability Theory and Mathematical Statistics: Proceedings of the Fifth Japan–USSR Symposium, held in Kyoto, Japan, July 8–14, 1986, pages 430–443. Springer, 1988.

[19] F. Spitzer. Principles of Random Walk. Graduate Texts in Mathematics. Springer, New York, second edition, 1964.

[20] R. van der Hofstad and M. Keane. An elementary proof of the hitting time theorem. The American Mathematical Monthly, 115(8):753–756, 2008.

[21] V. A. Vatutin and V. Wachtel. Local probabilities for random walks conditioned to stay positive. Probab. Theory Related Fields, 143(1-2):177–217, 2009.