
Large deviations for the branching random walk with heavy-tailed associated random walk – a principle of one big jump

Jakob Stonner

March 18, 2026

Abstract

We prove a version of Nagaev's theorem for the branching random walk with heavy-tailed associated random walk. For a branching random walk on $\mathbb{R}$ we consider the random measure $Z_n = \sum_{|u|=n} e^{-V_u}\delta_{V_u}$ where $V_u$, $|u| = n$, denote the positions of the particles in the $n$-th generation. Under the assumption that $\mathbb{E}[Z_1(\cdot)]$ is a probability distribution with regularly varying tail, we prove that $Z_n((n\mathbb{E}[X] + t_n, \infty)) = W n \mathbb{P}(X > t_n)(1 + o(1))$ in $L^1$ as $n \to \infty$, where $W$ is a non-zero random variable, $t_n \uparrow \infty$ grows suitably fast, and $X$ has law $\mathbb{E}[Z_1(\cdot)]$. The result is explained probabilistically by a principle of one big jump for the branching random walk.

Keywords: Branching random walk, large deviations, regular variation
MSC subject classification: 60J80 · 60F10

1 Introduction

In this article we prove a version of Nagaev's theorem for the branching random walk. Nagaev's theorem [Nag79] states that, for a random walk $(S_n)_{n\in\mathbb{N}}$ with zero mean, unit variance, and regularly varying tail of index $-p < -2$ (i.e., $\mathbb{P}(S_1 > \lambda t)/\mathbb{P}(S_1 > t) \to \lambda^{-p}$ as $t \to \infty$ for all $\lambda > 0$) we have

\[ \mathbb{P}(S_n > n\mathbb{E}[S_1] + t) = n\,\mathbb{P}(S_1 > t)(1 + o(1)) \tag{1.1} \]

as $n \to \infty$ uniformly in $t \geq a\sqrt{n \log n}$, for any fixed $a > \sqrt{p-2}$. This formula is explained probabilistically by the principle of one big jump: due to its heavy tail, the 'cheapest' way for a random walk to deviate substantially from its mean is for one of its increments to make a 'big jump'.

To present our version of this result for the branching random walk, let us first define the model. The branching random walk is a collection of particles $u \in \mathcal{I}$ which have positions $(V_u)_{u\in\mathcal{I}}$ in $\mathbb{R}$ that are constructed in the following way. Let $\xi = \sum_{j=1}^N \delta_{X_j}$ be a point process on the real numbers.
We start with a single particle located at $0$, forming generation $0$. Now in the $(n+1)$-th step of the construction of the process, each particle $u$ from generation $n$ receives an independent identically distributed (i.i.d.) copy of $\xi$ that we shift by the position $V_u$ of $u$. Then the $(n+1)$-th generation is formed by particles that are located according to the shifted point processes assigned to the particles of the $n$-th generation. A precise model definition will be given in the next section. Let

\[ m(\theta) := \mathbb{E}\Big[\int e^{-\theta x}\,\xi(\mathrm{d}x)\Big] = \mathbb{E}\Big[\sum_{j=1}^N e^{-\theta X_j}\Big], \quad \theta \in \mathbb{R}, \tag{1.2} \]

denote the Laplace transform of the intensity measure of $\xi$. We assume that $m(1) = 1$ and $m(\theta) = \infty$ for all $\theta < 1$. Define

\[ Z_n := \sum_{|u|=n} e^{-V_u}\,\delta_{V_u}, \quad n \in \mathbb{N}_0. \tag{1.3} \]

Here the sum ranges over all particles in generation $n$. The random measure $Z_n$ is a natural object to describe the positions of particles in large generations (see e.g. [Big92]). Moreover, its intensity measure $\mathbb{E}[Z_n(\cdot)]$ is the law of the $n$-th step of a random walk with increment law $F := \mathbb{E}[Z_1(\cdot)]$, which we call the associated random walk. This random walk is an essential tool for the investigation of the model (see e.g. [Lyo97, AS14, Shi15]). We assume further that $F$ has a regularly varying tail of index $-p$ for some $p > 2$. Then our version of (1.1) for the branching random walk reads

\[ Z_n(n\mathbb{E}[S_1] + t_n, \infty) = W n\,\mathbb{P}(S_1 > t_n)(1 + o(1)) \quad \text{in } L^1, \]

for every suitably fast growing sequence $t_n \uparrow \infty$ (see Theorem 2.5 below). Here $W$ is the limit of Biggins' martingale $W_n := Z_n(\mathbb{R})$ as $n \to \infty$. The intuition behind this result is that, just as for the random walk, the 'cheapest' way for any individual particle's position in the branching random walk to deviate substantially from its expected position is to allow exactly one of its increments to make a 'big jump'.
The main contribution to $Z_n(n\mathbb{E}[S_1] + t_n, \infty)$ then comes from such particles.

1.1 Notation and overview

We write $\mathbb{N} = \{1, 2, \ldots\}$ and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. For real numbers $a$ and $b$ we denote $a \wedge b = \min\{a, b\}$ and $a \vee b = \max\{a, b\}$. For sequences $(a_n)_{n\in\mathbb{N}}$, $(b_n)_{n\in\mathbb{N}}$ of real numbers we write $a_n \in o(b_n)$ or $a_n \ll b_n$ if $a_n/b_n \to 0$ as $n \to \infty$. We also write $a_n \in O(b_n)$ if $\limsup_{n\to\infty} a_n/b_n < \infty$. Moreover, $a_n \sim b_n$ means that $a_n/b_n \to 1$ as $n \to \infty$. Analogously, if $f, g : \mathbb{R} \to \mathbb{R}$ are functions (such that $g(t)$ is eventually positive for large $t$), then $f(t) \sim g(t)$, $f(t) \in o(g(t))$ and $f(t) \in O(g(t))$ are defined similarly. We use set notation with $O(g(t))$ and $o(g(t))$, so e.g. $f(t) \in O(g_1(t)) \cap O(g_2(t))$ means $f(t) \in O(g_1(t))$ and $f(t) \in O(g_2(t))$. For random variables $X, Y$ we write $X \preceq Y$ and say that $Y$ stochastically dominates $X$ if we have $\mathbb{P}(X \geq t) \leq \mathbb{P}(Y \geq t)$ for all sufficiently large $t \geq 0$.

The rest of this article is organized as follows. In Section 2 we define the model, state our result, and provide examples. The proof is presented in Sections 3 and 4. We give a short summary of the proof in Section 3.2.

2 Model setup and main results

2.1 Branching Random Walk

In the following we define the branching random walk model with increment point process $\xi = \sum_{j=1}^N \delta_{X_j}$. Here $(X_j)_{j\in\mathbb{N}}$ is a sequence of $\mathbb{R}$-valued random variables and $N$ takes values in $\mathbb{N}_0 \cup \{\infty\}$. The branching random walk is a collection of particles $u$ that have positions $V_u$ in $\mathbb{R} \cup \{\infty\}$. We label the particles using the Ulam-Harris tree $\mathcal{I} := \bigcup_{n\in\mathbb{N}_0} \mathbb{N}^n$ where $\mathbb{N}^0$ by convention contains only the empty word $\varnothing$. For $u = (u_1, \ldots, u_n) \in \mathbb{N}^n$ and $v = (v_1, \ldots, v_m) \in \mathbb{N}^m$ with $n, m \in \mathbb{N}_0$ we write $uv := (u_1, \ldots, u_n, v_1, \ldots, v_m) \in \mathbb{N}^{n+m}$ for the concatenation of $u$ and $v$. Similarly, for $u = (u_1, \ldots, u_n) \in \mathbb{N}^n$ and $j \in \mathbb{N}$ we write $uj := (u_1, \ldots, u_n, j)$ and call $u$ the parent of $uj$. Moreover, for $u \in \mathbb{N}^n$ with $n \in \mathbb{N}_0$ we write $|u| = n$ and call it the generation of $u$. There is a partial order on $\mathcal{I}$ given by $u \leq v \Leftrightarrow |u| \leq |v|$ and $(v_1, \ldots, v_{|u|}) = u$ where $v = (v_1, \ldots, v_{|v|})$. We write $u < v$ if $u \leq v$ and $u \neq v$, $u, v \in \mathcal{I}$. If $u \leq v$ we call $u$ an ancestor of $v$.

The branching random walk is now constructed as follows. The process starts with a single particle labeled $\varnothing$ at position $V_\varnothing := 0$. Let $(\xi_u)_{u\in\mathcal{I}}$ be a family of i.i.d. copies of $\xi$ and denote

\[ \xi_u = \sum_{j=1}^{N(u)} \delta_{X_j(u)}, \quad u \in \mathcal{I}. \]
For $u \in \mathcal{I}$ we define recursively the position $V_{uj}$ of the particle with label $uj$, $j \in \mathbb{N}$, by

\[ V_{uj} := \begin{cases} V_u + X_j(u) & j \leq N(u), \\ \infty & \text{else}, \end{cases} \]

with the convention $\infty + x = \infty$ for all $x \in \mathbb{R} \cup \{\infty\}$. Then the collection $(V_u)_{u\in\mathcal{I}}$ is called the branching random walk with increment point process $\xi$. Further, $(V_u)_{u\in\mathcal{I}}$ is called a CMJ process if $\xi$ is almost surely supported in $[0, \infty)$. A CMJ process $(V_u)_{u\in\mathcal{I}}$ is naturally interpreted as a population model where $u \in \mathcal{I}$ labels an individual that is born at time $V_u$ (cf. [Ner81, Jag89]). We occasionally write $V(u)$ instead of $V_u$ if it is notationally more convenient. Further, we write $\Delta V(uj) := X_j(u) = V(uj) - V(u)$ for the displacement of the particle $uj$ relative to its parent $u$.

If $\Phi : \mathbb{R}^{\mathcal{I}} \to \mathbb{R}$ is a measurable map and $u \in \mathcal{I}$ we define the shifted map

\[ [\Phi]_u : \mathbb{R}^{\mathcal{I}} \to \mathbb{R}, \quad (x_v)_{v\in\mathcal{I}} \mapsto \Phi\big((x_{uv} - x_u)_{v\in\mathcal{I}}\big). \]

In other words, the operator $[\,\cdot\,]_u$ shifts the considered quantity into the sub-tree rooted at $u$ and shifts the positions to make the new root $u$ sit at the origin. For instance, for $u \in \mathcal{I}$ and $W_1 = \sum_{|u|=1} e^{-V_u}$ we have

\[ [W_1]_u = \sum_{|v|=1} e^{-(V_{uv} - V_u)} = \sum_{j=1}^{N(u)} e^{-X_j(u)}. \]

2.2 Assumptions, examples and result statement

Let $\xi$ be a point process on $\mathbb{R}$ with intensity measure $\mu(B) := \mathbb{E}[\xi(B)]$, $B \subseteq \mathbb{R}$ Borel. Throughout we assume that $\xi$ is almost surely locally finite. We define the Laplace transform $m$ by (1.2) and set

\[ m'(\theta) := -\int x\, e^{-\theta x}\,\mu(\mathrm{d}x), \quad \theta \in \mathbb{R}, \]

if the integral exists. Note that $m'$ arises from $m$ by formally differentiating (indeed $m'$ equals the derivative of $m$ if both are well-defined). In the following we consider the assumptions

\[ m(1) = 1, \quad m(\theta) = \infty \text{ for all } \theta < 1, \tag{A1} \]
\[ m'(1) \in (-\infty, 0). \tag{A2} \]

2.1 Remark. In this article we are interested in values $\theta$ from the boundary of the domain of definition of $m$, $D := \{\theta \in \mathbb{R} : m(\theta) < \infty\}$. Note that if $D$ is nonempty, then it is an interval.
Generally we are interested in the left-most $\theta > 0$ such that $m(\theta) < \infty$. If for a given point process $\xi = \sum_{j=1}^N \delta_{X_j}$ there exists $\theta > 0$ with $m(\theta) < \infty$ and $m(\theta') = \infty$ for all $\theta' < \theta$, then we may reduce to (A1) by switching to the point process

\[ \tilde{\xi} = \sum_{j=1}^N \delta_{\theta X_j + \log m(\theta)}. \]

Therefore, if $D$ is non-empty, $\inf D > 0$, and $\inf D \in D$, then we may assume (A1) without loss of generality.

Let $(V_u)_{u\in\mathcal{I}}$ be a branching random walk with increment point process $\xi$, starting from $0$. Note that (A1) implies $\mathbb{E}[\xi(\mathbb{R})] = \infty$, whence the branching process is supercritical. Let

\[ W_n := \sum_{|u|=n} e^{-V_u}, \quad n \in \mathbb{N}_0, \tag{2.1} \]

denote Biggins' martingale. It is well-known (see [Big92, Lyo97]) that this martingale has a non-zero limit $W$ on survival of the branching process if and only if (A2) holds and

\[ \mathbb{E}[W_1 \log^+ W_1] < \infty \tag{2.2} \]

with $\log^+ x := 0 \vee \log x$. We shall assume that there exists $\gamma \in (1, 2)$ such that

\[ \mathbb{E}[W_1^\gamma] < \infty \quad \text{and} \quad m(\gamma) < 1. \tag{A3} \]

Under this assumption $W_n$ converges to $W$ in $L^\gamma$ by [Liu00, Thm 2.1]. Moreover, by assumption (A1) the measure $F(\mathrm{d}x) := e^{-x}\mu(\mathrm{d}x)$ defines a heavy-tailed distribution, i.e., it lacks a finite exponential moment. One possible natural further assumption is therefore that its tail $\overline{F}(t) := F(t, \infty)$ is regularly varying at infinity:

\[ \overline{F}(t) \underset{t\to\infty}{\sim} t^{-p}\ell(t) \tag{A4} \]

for some $p > 0$ and a function $\ell$ slowly varying at $\infty$ (i.e., $\ell(\lambda t)/\ell(t) \to 1$ as $t \to \infty$, for all $\lambda > 0$). The random walk $(S_n)_{n\in\mathbb{N}_0}$ with increment law $\mathbb{P}(S_1 \in (\cdot)) = F$ will be called associated with the branching random walk $(V_u)_{u\in\mathcal{I}}$. Subject to (A2), $F$ has finite, positive mean

\[ c := \int x\,F(\mathrm{d}x) = \mathbb{E}[S_1] = -m'(1) \in (0, \infty). \]

If $p > 2$ (which will be the case throughout the paper), then (A4) and (A1) further imply that $F$ has finite variance

\[ \sigma^2 := \int (x - c)^2\,F(\mathrm{d}x) \in (0, \infty). \]
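Returning to the reduction described in Remark 2.1, the following one-line verification (not spelled out in the text above) shows why the tilted point process $\tilde{\xi}$ satisfies (A1). Writing $\tilde{m}$ for its Laplace transform,

\[ \tilde{m}(\vartheta) = \mathbb{E}\Big[\sum_{j=1}^N e^{-\vartheta(\theta X_j + \log m(\theta))}\Big] = m(\theta)^{-\vartheta}\, m(\vartheta\theta), \]

so $\tilde{m}(1) = m(\theta)^{-1} m(\theta) = 1$, while for $\vartheta < 1$ we have $\vartheta\theta < \theta$ and hence $m(\vartheta\theta) = \infty$, giving $\tilde{m}(\vartheta) = \infty$ as required.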
Further, we obtain a stronger result by additionally assuming that there exists $\gamma > 1$ such that

\[ \mathbb{E}[Z_1(t)^\gamma] \in O(\overline{F}(t)^\gamma) \quad \text{as } t \to \infty, \tag{A5} \]

where $Z_1(t) := \int_t^\infty e^{-x}\,\xi(\mathrm{d}x) = \sum_{|u|=1} e^{-V_u}\mathbb{1}\{V_u > t\}$, $t \in \mathbb{R}$. Note that, by Jensen's inequality, if (A5) is true for some $\gamma > 1$ then it is also true for any $\gamma' \in (1, \gamma)$. Therefore, if (A5) and (A3) hold, we may assume without loss of generality that both conditions hold for the same $\gamma \in (1, 2)$.

2.2 Remark. Assumption (A5) may be viewed as a strong uniform integrability condition. By de la Vallée-Poussin's theorem, $(Z_1(t)/\overline{F}(t))_{t\in\mathbb{R}}$ is uniformly integrable if and only if there exists a function $G : [0, \infty) \to [0, \infty)$ with $G(t)/t \to \infty$ such that

\[ \sup_{t\in\mathbb{R}} \mathbb{E}\Big[G\Big(\frac{Z_1(t)}{\overline{F}(t)}\Big)\Big] < \infty. \]

Assumption (A5) states that we can choose $G(x) = x^\gamma$ for some $\gamma > 1$, therefore imposing that $(Z_1(t)/\overline{F}(t))_{t\in\mathbb{R}}$ is uniformly integrable. In particular, assumption (A5) is never satisfied if $\xi(\mathbb{R}) < \infty$ almost surely, since then we have $Z_1(t)/\overline{F}(t) \to 0$ as $t \to \infty$ almost surely.

2.3 Example. (a) Let $\xi$ be a Poisson point process with intensity measure $b\,e^x x^{-(p+1)}\ell(x)\mathbb{1}_{(1,\infty)}(x)\,\mathrm{d}x$ for some $b > 0$, $p > 2$ and a function $\ell$ slowly varying at infinity. Then we have

\[ m(\theta) = b \int_1^\infty e^{-x(\theta-1)} x^{-(p+1)}\ell(x)\,\mathrm{d}x, \]

which is finite if and only if $\theta \geq 1$. Moreover, choosing $b^{-1} = \int_1^\infty x^{-(p+1)}\ell(x)\,\mathrm{d}x$ guarantees (A1). Assumption (A4) follows from Karamata's theorem (see e.g. [BGT87, Proposition 1.5.8]). Since $p > 2$ this implies (A2). Moreover, Campbell's formula for the variance of integrals with respect to Poisson point processes (e.g. [Kal21, Lemma 15.22 (iii)]) gives that

\[ \mathbb{V}[W_1] = \int_0^\infty (e^{-x})^2\,\mathbb{E}[\xi(\mathrm{d}x)] = m(2) < \infty. \]

This implies (A3).
Further, for all $\gamma \in (1, 2)$ we have, by Jensen's inequality, Campbell's formula, and subadditivity of $x \mapsto x^{\gamma/2}$,

\[ \mathbb{E}[Z_1(t)^\gamma] \leq \mathbb{E}[Z_1(t)^2]^{\gamma/2} = \big(\mathbb{V}[Z_1(t)] + \mathbb{E}[Z_1(t)]^2\big)^{\gamma/2} = \Big(\int_t^\infty e^{-2x}\,\mathbb{E}[\xi(\mathrm{d}x)] + \overline{F}(t)^2\Big)^{\gamma/2} \leq e^{-\gamma t/2}\overline{F}(t)^{\gamma/2} + \overline{F}(t)^\gamma. \]

This shows that (A5) holds. Consequently, $\xi$ is an example of a point process where (A1) through (A5) are satisfied.

(b) The following example is inspired by the model considered in [DMM17], where a Yule process is used instead of a Cox process. Let $f$ be a $[0, 1)$-valued random variable, $b > 0$, and let $\xi$ be a Cox process with random intensity measure $b\,e^{fx}\,\mathrm{d}x$ on $[0, \infty)$. Then we have

\[ m(\theta) = b\,\mathbb{E}\Big[\int_0^\infty e^{-x(\theta - f)}\,\mathrm{d}x\Big] = b\,\mathbb{E}[(\theta - f)^{-1}] \]

whenever $\theta > f$ almost surely, and $m(\theta) = \infty$ otherwise. Now assume that we have

\[ \mathbb{P}(1 - f \leq t) = t^{p+1}\ell(t), \quad t \leq \varepsilon, \]

for $\varepsilon \in (0, 1)$, $p > 0$ and $\ell$ slowly varying at zero. Then one can show that $m(1)$ is finite, and therefore we can choose $b > 0$ so that $m(1) = 1$. As in (a) one shows that (A3) is satisfied. Moreover, one can show that $t \mapsto \mathbb{E}[e^{-t(1-f)}]$ is regularly varying at $\infty$ with index $-(p+1)$ (details are provided in Appendix B). Therefore,

\[ \overline{F}(t) = b \int_t^\infty \mathbb{E}[e^{-x(1-f)}]\,\mathrm{d}x \]

is regularly varying with index $-p$ by Karamata's theorem. However, using $\mathbb{E}[Z_1(t) \mid f] = b\,e^{-t(1-f)}/(1-f)$ and Jensen's inequality for conditional expectations we find, for every $\gamma > 1$,

\[ \mathbb{E}[Z_1(t)^\gamma] \geq \mathbb{E}\big[\mathbb{E}[Z_1(t) \mid f]^\gamma\big] = b^\gamma\,\mathbb{E}\Big[\frac{e^{-\gamma t(1-f)}}{(1-f)^\gamma}\Big] \geq b^\gamma\,\mathbb{E}\Big[\frac{e^{-\gamma t(1-f)}}{1-f}\Big] = b^{\gamma-1}\overline{F}(\gamma t). \]

This implies that $\mathbb{E}[Z_1(t)^\gamma]/\overline{F}(t)^\gamma \to \infty$ as $t \to \infty$. We conclude that (A1) through (A4) hold but (A5) is not satisfied.

(c) Let $N$ be an $\mathbb{N}_0$-valued random variable such that $\mathbb{E}[N^{1/2}] = 1$ and

\[ \mathbb{P}(N \geq t) \underset{t\to\infty}{\sim} t^{-1/2}(\log t)^{-(p+1)}. \]

Define $\xi = N\,\delta_{\frac{1}{2}\log N}$.
Then we have $m(\theta) = \mathbb{E}[N^{1-\theta/2}]$, so that (A1) is satisfied. Moreover, a short calculation shows that

\[ \overline{F}(t) = \mathbb{E}[N^{1/2}\mathbb{1}\{N > e^{2t}\}] \underset{t\to\infty}{\sim} c\,t^{-p} \]

for some $c \in (0, \infty)$, whence (A4) holds with $\ell = 1$. However, we have $W_1 = N^{1/2}$, so (2.2) holds but (A3) is violated. Thus $\xi$ is an example of a point process that satisfies (A1), (A2) and (A4) but not (A3) and (A5). This highlights the interest of replacing assumption (A3) by (2.2).

2.4 Remark. A situation in which (A1) through (A5) hold may arise in CMJ processes that have no Malthusian parameter, i.e., where there is no $\theta > 0$ such that $m(\theta) = 1$. If there is a $\theta > 0$ such that $m(\theta) < 1$ and $m(\theta') = \infty$ for all $\theta' < \theta$ (take for instance Example 2.3 (a) or (b) with smaller $b > 0$), then we can use the transformation in Remark 2.1 to get a point process $\tilde{\xi}$ that satisfies (A1). Note that $\tilde{\xi}$ may not be supported in $[0, \infty)$ anymore.

Now we state our result. For $n \in \mathbb{N}$, let the random measure $Z_n$ be defined by (1.3) and let $Z_n(t) := Z_n(t, \infty)$, $t \in \mathbb{R}$, denote its tail.

2.5 Theorem. Assume (A1) through (A4) with $p > 2$ and fix $a > \sqrt{p-2}$ and a sequence $(t_n)_{n\in\mathbb{N}}$ with $t_n \geq a\sigma\sqrt{n \log n}$ for all $n \in \mathbb{N}$. Further suppose that one of the following conditions is satisfied:

(I) There are $\delta \in (0, \gamma - 1)$, $q \in (1 + \delta, \gamma)$, and a sequence $(r(n))_{n\in\mathbb{N}}$ with $r(n) \in o(n) \cap o(t_n)$ such that

\[ \limsup_{n\to\infty}\ e^{-(1+\delta)r(n)} \sum_{j=1}^{n-1} e^{r(j)} < \infty, \quad \text{and} \tag{2.3} \]
\[ \limsup_{n\to\infty}\ \frac{e^{-r(n)(1 - (1+\delta)/q)}}{n\overline{F}(t_n)} < \infty. \tag{2.4} \]

(II) (A5) holds.

Then we have

\[ \frac{Z_n(nc + t_n)}{n\,\mathbb{P}(S_1 > t_n)} \xrightarrow[n\to\infty]{L^1} W \tag{2.5} \]

where $W$ is the limit of Biggins' martingale. Moreover, in case (II) the convergence holds in $L^q$ for any $q < 2 - \gamma^{-1}$.

2.6 Remark. (a) We can decompose

\[ Z_n(t) =: \sum_{|u|=n} e^{-V_u}\mathbb{1}\Big\{V_u > t,\ \exists\, v \leq u : \Delta V(v) > h_n \geq \max_{w \leq u,\, w \neq v} \Delta V(w)\Big\} + \mathrm{rest}_n(t) \]

for some fixed sequence $(h_n)_{n\in\mathbb{N}}$.
The rest term collects contributions to $Z_n(t)$ due to particles which have no or at least two 'large jumps' among the displacements of their ancestors, where a displacement is considered to be 'large' if it exceeds $h_n$. Using the many-to-one lemma (Lemma 3.1 below) and the proof of Nagaev's theorem given in [DDS08] (see Equation (7) and Proposition 8.1 in [DDS08]), it follows that for $h_n := \sigma\sqrt{n/(a \log n)}$ we have $\mathbb{E}[\mathrm{rest}_n(t)] \in o(n\,\mathbb{P}(S_1 > t))$ as $n \to \infty$ uniformly in $t \geq nc + a\sigma\sqrt{n \log n}$. It follows that only particles which have exactly one 'large jump' among the displacements of their ancestors contribute to $Z_n(t)$ asymptotically. Hence, (2.5) may be explained heuristically by a principle of one big jump.

(b) One may compare the uniformity of Theorem 2.5 to that of Nagaev's theorem (see (1.1), and Theorem 3.3 below). The statement of Theorem 2.5 in case (II) is equivalent to

\[ \sup_{t \geq a\sigma\sqrt{n\log n}} \mathbb{E}\Big[\Big|\frac{Z_n(nc + t)}{n\,\mathbb{P}(S_1 > t)} - W\Big|\Big] \xrightarrow[n\to\infty]{} 0. \]

It is plausible that under certain conditions (2.5) holds almost surely along any fixed sequence $t_n \geq a\sigma\sqrt{n\log n}$. However, if $\xi(\mathbb{R})$ is finite almost surely (as in Example 2.3 (c)) then for every $n \in \mathbb{N}$ there exists a large $t > 0$ with $Z_n(nc + t) = 0$, so that

\[ \sup_{t \geq a\sigma\sqrt{n\log n}} \Big|\frac{Z_n(nc + t)}{n\overline{F}(t)} - W\Big| \geq |W|. \]

This shows that an almost sure convergence with the same kind of uniformity as in Nagaev's theorem does not hold in general.

(c) Note that, in view of Potter's theorem (e.g. [BGT87, Theorem 1.5.6]) and $r(n) \in o(n)$, condition (2.4) in case (I) implies $\log t_n \ll n$. Moreover, the following facts are true.

(i) If $(r(n))_{n\in\mathbb{N}}$ is increasing and $e^{-\delta r(n)} n \to 0$ as $n \to \infty$, then (2.3) holds.

(ii) If we have

\[ \liminf_{n\to\infty} \frac{r(n)}{\log t_n} > \frac{p}{1 - \frac{1+\delta}{\gamma}}, \]

then (2.4) follows. Indeed, there exist $q \in (1, \gamma)$ and $p' > p$ such that for all sufficiently large $n$ we have

\[ r(n) \geq \frac{p'}{1 - \frac{1+\delta}{q}} \log t_n. \]
Therefore we have

\[ \frac{e^{-r(n)(1-(1+\delta)/q)}}{n\overline{F}(t_n)} \leq e^{-p'\log t_n - \log \overline{F}(t_n)}. \]

By Potter's theorem, the exponent on the right-hand side goes to $-\infty$ as $n \to \infty$, whence (2.4) holds.

(iii) If $(t_n)_{n\in\mathbb{N}}$ is increasing and satisfies $\log t_n \ll n$, then all conditions of case (I) hold. Indeed, there then exists an increasing sequence $(r(n))_{n\in\mathbb{N}}$ with $\log t_n \ll r(n) \ll t_n \wedge n$. In particular, for every $C \in (0, \infty)$ we have $r(n) \geq C \log t_n$ eventually. Using $t_n \geq a\sigma\sqrt{n\log n}$ this yields, for every $\delta > 0$,

\[ e^{-\delta r(n)} \leq e^{-\delta C \log(a\sigma\sqrt{n\log n})} = (a\sigma)^{-\delta C}(n\log n)^{-\delta C/2}, \]

yielding $e^{-\delta r(n)} n \to 0$ if $C > 2/\delta$. Therefore, from (i) and (ii) we conclude (2.3) and (2.4).

(d) The following argument shows that in some cases a growth limitation on $(t_n)_{n\in\mathbb{N}}$ is crucial for the conclusion of Theorem 2.5 to hold. Suppose (2.5) holds for all $(t_n)_{n\in\mathbb{N}}$ with $t_n \geq a\sigma\sqrt{n\log n}$ (as in case (II)). Fix $q > p$ and consider the sum

\[ Y_n := \sum_{|u|=n} e^{-V_u}(V_u)_+^q. \]

We find for all $t \geq 0$

\[ Y_n \geq \sum_{|u|=n} e^{-V_u}(V_u)_+^q\,\mathbb{1}\{V_u > t\} \geq t^q Z_n(t) \geq t^q Z_n(nc + t). \]

Now let $(a_n)_{n\in\mathbb{N}}$ be an arbitrary sequence of positive numbers. We find

\[ \mathbb{P}(Y_n \geq a_n) \geq \mathbb{P}(t_n^q Z_n(nc + t_n) \geq a_n) = \mathbb{P}\Big(\frac{Z_n(nc + t_n)}{n\overline{F}(t_n)} \geq \frac{a_n}{n\overline{F}(t_n)t_n^q}\Big) \xrightarrow[n\to\infty]{} 1 \tag{2.6} \]

if we choose $t_n$ appropriately. Suppose that there are infinitely many $n \in \mathbb{N}$ such that $\mathbb{P}(Y_n < \infty) = 1$. Then there exist sequences $(n_j)_{j\in\mathbb{N}}$ and $(a_{n_j})_{j\in\mathbb{N}}$ such that $n_j \uparrow \infty$ and $\mathbb{P}(Y_{n_j} \geq a_{n_j}) \leq 1/2$ for all $j \in \mathbb{N}$. If we now extend $(a_{n_j})_{j\in\mathbb{N}}$ to a sequence $(a_n)_{n\in\mathbb{N}}$ by filling in the missing values arbitrarily, then we obtain

\[ \liminf_{n\to\infty} \mathbb{P}(Y_n \geq a_n) \leq \frac{1}{2}, \]

contradicting (2.6). Therefore, we have $Y_n = \infty$ with positive probability for all but finitely many $n \in \mathbb{N}$. Under the assumption $\xi(\mathbb{R}) < \infty$ almost surely, for instance, this is not possible since every generation is finite almost surely.
Therefore, there are cases in which (2.5) is not true for arbitrarily fast growing $t_n \uparrow \infty$.

3 Proof of Theorem 2.5

3.1 Preliminaries

In the following we state well-known results that will be used in the proof of Theorem 2.5.

The associated random walk and the coming generation

3.1 Lemma (Many-to-one formula, [BK97, Shi15]). Let $(V_u)_{u\in\mathcal{I}}$ be a branching random walk with $m(1) = \mathbb{E}[\sum_{|u|=1} e^{-V_u}] = 1$. Let $(S_n)_{n\in\mathbb{N}}$ be a random walk with increment distribution $\mathbb{P}(S_1 \in \mathrm{d}x) = F(\mathrm{d}x) = e^{-x}\mu(\mathrm{d}x)$ starting at $0$. Then for every $n \in \mathbb{N}$ and bounded, measurable $f : \mathbb{R}^n \to \mathbb{R}$ we have

\[ \mathbb{E}\Big[\sum_{|u|=n} e^{-V_u} f(V_{u_1}, \ldots, V_{u_n})\Big] = \mathbb{E}[f(S_1, \ldots, S_n)], \tag{3.1} \]

where we write $u = (u_1, \ldots, u_n)$ and $\mathbb{E}[\,\cdot\,]$ denotes the expectation with respect to $\mathbb{P}$. In particular, for every Borel subset $B \subseteq \mathbb{R}$ we have $\mathbb{E}[Z_n(B)] = F^{*n}(B) = \mathbb{P}(S_n \in B)$.

Following [Shi15], we call $(S_n)_{n\in\mathbb{N}}$ the associated random walk of the branching random walk $(V_u)_{u\in\mathcal{I}}$. Instead of summing over all the $n$-th generation particles in (3.1), we are interested in summing over the random set

\[ \mathcal{C}_t := \{u \in \mathcal{I} \mid V_u > t,\ \forall\, v < u : V_v \leq t\} \tag{3.2} \]

of particles whose position exceeds $t \geq 0$ for the first time along their ancestral line. In the context of CMJ processes, $\mathcal{C}_t$ is known as the coming generation at time $t$ (see [Jag89]). If we define the stopping time

\[ \tau_t := \inf\{n \in \mathbb{N} : S_n > t\}, \quad t \geq 0, \tag{3.3} \]

then it is straightforward to show that (3.1) implies

\[ \mathbb{E}\Big[\sum_{u\in\mathcal{C}_t} e^{-V_u} f(|u|, V_u)\Big] = \mathbb{E}[f(\tau_t, S_{\tau_t})\,\mathbb{1}\{\tau_t < \infty\}] \tag{3.4} \]

for every bounded, measurable $f : \mathbb{N}_0 \times \mathbb{R} \to \mathbb{R}$ (see the proof of [BK05, Theorem 10]). Note that if $\mathbb{E}[S_1] > 0$ (which is the case under assumption (A2)) we have $\tau_t < \infty$ almost surely.
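The many-to-one identity (3.1) is easy to check numerically for a toy model (this sketch is not part of the paper; the binary-branching law with Gaussian displacements is an ad hoc choice made so that $m(1) = 1$, in which case the associated step $S_1$ is an exponential tilting of the displacement law):

```python
import math
import random

random.seed(0)

# Toy BRW: each particle has 2 children displaced by log 2 + Normal(s2/2, s2),
# so m(1) = 2 * exp(-log 2) * E[exp(-G)] = 1 with G ~ Normal(s2/2, s2).
s2 = 1.0
mu0 = math.log(2) + s2 / 2

def lhs(t, trials):
    # Monte Carlo estimate of E[ sum_{|u|=1} e^{-V_u} 1{V_u > t} ]
    acc = 0.0
    for _ in range(trials):
        for _ in range(2):
            v = random.gauss(mu0, math.sqrt(s2))
            if v > t:
                acc += math.exp(-v)
    return acc / trials

def rhs(t):
    # Tilting a Gaussian: here S_1 ~ Normal(log 2 - s2/2, s2), so P(S_1 > t)
    # is available in closed form via the complementary error function.
    m = math.log(2) - s2 / 2
    return 0.5 * math.erfc((t - m) / math.sqrt(2 * s2))

t = 0.5
est = lhs(t, 200_000)
print(est, rhs(t))  # the two values should agree up to Monte Carlo error
```

Both sides evaluate $\mathbb{E}[Z_1(t,\infty)] = \mathbb{P}(S_1 > t)$, the $n = 1$ case of Lemma 3.1.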
Corresponding to $(\mathcal{C}_t)_{t\geq 0}$ there exists a filtration $(\mathcal{F}^{\mathcal{C}}_t)_{t\geq 0}$ such that, for all $t, s \geq 0$ and $u \in \mathcal{I}$, $\mathbb{1}\{u \in \mathcal{C}_t\}$ is $\mathcal{F}^{\mathcal{C}}_s$-measurable if and only if $t \leq s$; see [Jag89, Kyp00].

3.2 Theorem ([Kyp00, Theorem 9]). Under the assumptions (A1), (A2) and (2.2), the process

\[ Y_t := \sum_{u\in\mathcal{C}_t} e^{-V_u}, \quad t \geq 0, \]

is a unit mean martingale that converges almost surely and in $L^1$ to the limit $W$ of Biggins' martingale (2.1).

In the context of CMJ processes the process $(Y_t)_{t\geq 0}$ is known as Nerman's martingale (cf. [Ner81]), which is also how we will refer to it.

Random walks with regularly varying tail

Let $(S_n)_{n\in\mathbb{N}}$ be a centered random walk with regularly varying tail of index $-p \leq 0$, i.e.,

\[ \lim_{t\to\infty} \frac{\mathbb{P}(S_1 > \lambda t)}{\mathbb{P}(S_1 > t)} = \lambda^{-p} \tag{3.5} \]

for all $\lambda > 0$ (see [BGT87]). In particular, the distribution of $S_1$ is sub-exponential, that is, for all fixed $n \in \mathbb{N}$ we have, as $t \to \infty$,

\[ \mathbb{P}(S_n > t) = n\,\mathbb{P}(S_1 > t)(1 + o(1)), \tag{3.6} \]

see [BGT87, Appendix 4]. Nagaev's theorem is a version of (3.6) where $n$ and $t$ approach $\infty$ simultaneously.

3.3 Theorem ([Nag79, Theorem 1.9]). Let $(S_n)_{n\in\mathbb{N}}$ be a random walk with regularly varying tail of index $-p < -2$, mean zero and unit variance. Fix $a > \sqrt{p-2}$. Then we have

\[ \mathbb{P}(S_n > t) = n\,\mathbb{P}(S_1 > t)(1 + o(1)) \tag{3.7} \]

as $n \to \infty$ uniformly in $t \geq a\sqrt{n\log n}$, i.e.,

\[ \sup_{t\geq a\sqrt{n\log n}} \Big|\frac{\mathbb{P}(S_n > t)}{n\,\mathbb{P}(S_1 > t)} - 1\Big| \xrightarrow[n\to\infty]{} 0. \]

See also [DDS08] for a modern (and more general) proof of this result. Note that regularly varying distributions are in particular long-tailed, i.e., $S_1$ with regularly varying tail satisfies

\[ \mathbb{P}(S_1 > t + x) \underset{t\to\infty}{\sim} \mathbb{P}(S_1 > t) \tag{3.8} \]

for all $x \in \mathbb{R}$.
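Both the Nagaev approximation (3.7) and the one-big-jump mechanism behind it can be observed in a quick Monte Carlo experiment (again illustrative only, with Pareto increments and the parameter choices $p = 3$, $n = 30$, $t = 40$ picked for demonstration):

```python
import random

random.seed(1)

p = 3.0           # tail index: P(X > x) = x^{-p} for x >= 1 (Pareto increments)
mu = p / (p - 1)  # mean of a Pareto(1, p) variable
n, t = 30, 40.0   # number of steps and deviation level
trials = 200_000

def pareto():
    # inverse-CDF sampling of a Pareto(1, p) variable
    return random.random() ** (-1.0 / p)

hits = big_jump = 0
for _ in range(trials):
    steps = [pareto() for _ in range(n)]
    if sum(steps) > n * mu + t:       # walk deviates by t above its mean
        hits += 1
        if max(steps) > t / 2:        # one single increment carries the deviation
            big_jump += 1

empirical = hits / trials
# Nagaev approximation: n * P(X - mu > t) = n * (t + mu)^{-p}
nagaev = n * (t + mu) ** (-p)
print(empirical / nagaev, big_jump / max(hits, 1))
```

The first printed ratio should be of order one, and almost every exceedance path should contain one dominating jump.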
Moreover, if $(S_n)_{n\in\mathbb{N}}$ is a random walk with regularly varying tail of index $-p < -2$ that has mean $c = \mathbb{E}[S_1]$ and variance $\mathbb{V}[S_1] = \sigma^2$, then Theorem 3.3 combined with (3.8) implies that we have

\[ \mathbb{P}(S_n > nc + t) = n\,\mathbb{P}(S_1 > t)(1 + o(1)) \]

as $n \to \infty$ uniformly in $t \geq a\sigma\sqrt{n\log n}$.

3.2 Overview

Our proof of Theorem 2.5 has three parts. First (Section 3.3) we group the $n$-th generation particles $u \in \mathcal{I}$ with $V_u > nc + t_n$ according to their ancestor whose position first exceeds $r(n) < nc + t_n$. This gives the decomposition

\[ Z_n(nc + t_n) = \sum_{\substack{u\in\mathcal{C}_{r(n)} \\ |u|\leq n}} e^{-V_u}\,[Z_{n-|u|}(nc + t_n - V_u)]_u \tag{3.9} \]

where $\mathcal{C}_{r(n)}$ is defined in (3.2) and we have

\[ [Z_n(t)]_u = \sum_{|v|=n} e^{-(V_{uv} - V_u)}\,\mathbb{1}\{V_{uv} - V_u > t\}, \quad u \in \mathcal{I},\ n \in \mathbb{N},\ t \in \mathbb{R}. \]

This decomposition is possible since every particle $u$ from generation $n$ with $V_u > nc + t_n$ must itself be in $\mathcal{C}_{r(n)}$ or have an ancestor in $\mathcal{C}_{r(n)}$. Then we use the many-to-one lemma to get rid of summands in (3.9) that correspond to particles with large generations or large overshoots (see Lemma 3.6). In the second part (Section 3.4, Lemma 3.8) we use a weak law of large numbers to approximate the right-hand side of (3.9) by

\[ \sum_{\substack{u\in\mathcal{C}_{r(n)} \\ |u|\leq n}} e^{-V_u}\,\mathbb{E}\big[[Z_{n-|u|}(nc + t_n - V_u)]_u \,\big|\, V_u\big] = \sum_{\substack{u\in\mathcal{C}_{r(n)} \\ |u|\leq n}} e^{-V_u}\,\overline{F}_{n-|u|}\big(t_n - (V_u - |u|c)\big), \]

where $\overline{F}_n(t) := \mathbb{P}(S_n - nc > t)$, $t \in \mathbb{R}$, is the tail of the distribution of $S_n - nc$. Finally, in the third part (Section 3.5) we use Theorem 3.3 to approximate $\overline{F}_n(t) \approx n\overline{F}(t)$ for large $n$ and $t$, so that we obtain

\[ Z_n(nc + t_n) \approx \sum_{\substack{u\in\mathcal{C}_{r(n)} \\ |u|\leq n}} e^{-V_u}(n - |u|)\,\overline{F}\big(t_n - (V_u - |u|c)\big). \]

Then we use regular variation of $\overline{F}$ to end up with an approximation to Nerman's martingale, which converges to the limit of Biggins' martingale.
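The limit object $W$ in this outline is easy to simulate. The following sketch (not from the paper; the binary-branching Gaussian displacement law is a hypothetical choice satisfying $m(1) = 1$ and $m(2) < 1$, so that $W_n \to W$ in $L^2$) tracks Biggins' martingale $W_n = \sum_{|u|=n} e^{-V_u}$ across generations:

```python
import math
import random

random.seed(2)

# Binary branching; each child is displaced by log 2 + Normal(s2/2, s2).
# Then m(1) = 1, and m(2) = exp(s2)/2 < 1 for s2 = 0.25, so Biggins'
# martingale converges in L^2 to a non-degenerate limit W.
s2 = 0.25
sd = math.sqrt(s2)

weights = [1.0]   # e^{-V_u} over the current generation; W_0 = 1
values = []
for gen in range(1, 15):
    nxt = []
    for w in weights:
        for _ in range(2):
            g = random.gauss(s2 / 2, sd)
            nxt.append(w * math.exp(-(math.log(2) + g)))
    weights = nxt
    values.append(sum(weights))   # W_gen

print(values[-3:])  # the last few values of W_n should have nearly stabilized
```

Since the increments $W_{n+1} - W_n$ have variance of order $m(2)^n$, the printed values are expected to differ only slightly.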
3.3 Proof of Theorem 2.5 – Part 1: Preparation

For the rest of this section, let (A1) through (A4) be satisfied, fix $a > \sqrt{p-2}$ and a sequence $(t_n)_{n\in\mathbb{N}}$ with $t_n \geq a\sigma\sqrt{n\log n}$ for all $n \in \mathbb{N}$. Occasionally we need to distinguish the cases (I) and (II) from Theorem 2.5. The proof is split into several lemmas to improve readability.

First we argue that it suffices to show that (2.5) holds in probability. Note that the many-to-one formula (Lemma 3.1) and Nagaev's theorem (Theorem 3.3) give

\[ \mathbb{E}\Big[\frac{Z_n(nc + t_n)}{n\overline{F}(t_n)}\Big] = \frac{\mathbb{P}(S_n > nc + t_n)}{n\,\mathbb{P}(S_1 > t_n)} \xrightarrow[n\to\infty]{} 1 = \mathbb{E}[W]. \]

Therefore, the claimed convergence in $L^1$ follows from convergence in probability (see e.g. [Kal21, Theorem 5.12]). In case (II), the claimed convergence in $L^q$ for all $q < 2 - \gamma^{-1}$ in turn follows from convergence in probability combined with Proposition 3.9 below.

We define two sequences $(m(n))_{n\in\mathbb{N}}$ and $(r(n))_{n\in\mathbb{N}}$. In case (I), $r(n)$ is already defined. In case (II) we choose for $r(n)$ any sequence with $r(n) \uparrow \infty$ and $r(n) \in o(n) \cap o(t_n)$, for instance

\[ r(n) := \frac{n \wedge t_n}{\log n}. \]

In both cases, let

\[ m(n) := \Big\lceil \frac{2r(n)}{c} \Big\rceil, \quad n \in \mathbb{N}. \tag{3.10} \]

This definition is tailored to Lemma 3.7 below. Furthermore, fix $\varepsilon > 0$ such that $(1 - \varepsilon)a > \sqrt{p-2}$. During the proof we will often use that, due to regular variation of $\overline{F}$ (by assumption (A4)), we have

\[ \frac{\overline{F}((1-\varepsilon)t_n)}{\overline{F}(t_n)} \xrightarrow[n\to\infty]{} (1-\varepsilon)^{-p}. \]

Therefore, there exists a constant $K \in (0, \infty)$ such that for all $n \in \mathbb{N}$ we have

\[ \frac{\overline{F}((1-\varepsilon)t_n)}{\overline{F}(t_n)} \leq K. \tag{3.11} \]

To simplify notation, for individuals $u$ with $|u| \leq n$ we define

\[ X_u(n) := [Z_{n-|u|}(nc + t_n - V_u)]_u = \sum_{|v|=n-|u|} e^{-(V_{uv} - V_u)}\,\mathbb{1}\{V_{uv} > nc + t_n\}, \]

whereas for $|u| > n$ we set $X_u(n) := 0$.
We claim that for all $n \in \mathbb{N}$ with $r(n) < nc + t_n$ we have the decomposition

\[ Z_n(nc + t_n) = \sum_{|u|=n} e^{-V_u}\,\mathbb{1}\{V_u > nc + t_n\} = \sum_{u\in\mathcal{C}_{r(n)}} e^{-V_u} X_u(n) \tag{3.12} \]

where $\mathcal{C}_{r(n)}$ is defined in (3.2). Indeed, every non-zero summand in the (implicit) double sum on the right-hand side corresponds to an individual $u$ with $|u| = n$ and $V_u > nc + t_n$ and thus belongs to the sum on the left-hand side. Conversely, every individual $u$ with $|u| = n$ and $V_u > nc + t_n$ has an ancestor $v \leq u$ with $v \in \mathcal{C}_{r(n)}$ due to $r(n) < nc + t_n$, so its corresponding summand appears in $X_v(n)$. Using the many-to-one formula we obtain, for every fixed individual $u$ with $|u| \leq n$,

\[ \mathbb{E}[X_u(n) \mid V_u] = \overline{F}_{n-|u|}\big(t_n - (V_u - |u|c)\big), \tag{3.13} \]

where $\overline{F}_n(t) = \mathbb{P}(S_n - nc > t)$, $t \in \mathbb{R}$, should be recalled.

3.4 Definition. Let $(X(n, T))_{n\in\mathbb{N}, T>0}$ be a stochastic process and $f : \mathbb{N} \times (0, \infty) \to (0, \infty)$ be a function. We write $X(n, T) \in o^{\mathbb{P}}_{T,n}(f(n, T))$ if, for all $\delta > 0$,

\[ \lim_{T\to\infty} \limsup_{n\to\infty}\ \mathbb{P}\Big(\Big|\frac{X(n, T)}{f(n, T)}\Big| \geq \delta\Big) = 0. \tag{3.14} \]

Moreover, if $(Y(n, T))_{n\in\mathbb{N}, T>0}$ is another stochastic process then we write $X(n, T) = Y(n, T) + o^{\mathbb{P}}_{T,n}(f(n, T))$ if $X(n, T) - Y(n, T) \in o^{\mathbb{P}}_{T,n}(f(n, T))$.

3.5 Remark. (a) Note that

\[ \lim_{T\to\infty} \limsup_{n\to\infty}\ \mathbb{E}\Big[\Big|\frac{X(n, T)}{f(n, T)}\Big|\Big] = 0 \]

implies $X(n, T) \in o^{\mathbb{P}}_{T,n}(f(n, T))$ by Markov's inequality, which we will frequently use without further comment.

(b) If $(X_n)_{n\in\mathbb{N}}$ and $(Y_n)_{n\in\mathbb{N}}$ are sequences of random variables and we have $X_n = X(n, T) + o^{\mathbb{P}}_{T,n}(1)$ and $Y_n = X(n, T) + o^{\mathbb{P}}_{T,n}(1)$, then it follows that $X_n - Y_n \to 0$ in probability as $n \to \infty$. Indeed, for every $\delta > 0$ and $T > 0$ we have

\[ \mathbb{P}(|X_n - Y_n| > \delta) \leq \mathbb{P}(|X_n - X(n, T)| > \delta/2) + \mathbb{P}(|Y_n - X(n, T)| > \delta/2). \]

Applying $\limsup_{n\to\infty}$ and then taking $T \to \infty$ gives the claim.

3.6 Lemma.
For fixed $T > 0$ let

\[ \mathcal{C}^T_t := \{u \in \mathcal{C}_t : V_u \leq t + T\}, \quad t > 0. \tag{3.15} \]

Then the following hold, with $m(n)$, $r(n)$ as above:

\[ Z_n(nc + t_n) = \sum_{\substack{u\in\mathcal{C}^T_{r(n)} \\ |u|\leq m(n)}} e^{-V_u} X_u(n) + o^{\mathbb{P}}_{T,n}(n\overline{F}(t_n)), \tag{3.16} \]

\[ Y_{r(n)} = \sum_{\substack{u\in\mathcal{C}^T_{r(n)} \\ |u|\leq m(n)}} e^{-V_u} + o^{\mathbb{P}}_{T,n}(1). \tag{3.17} \]

The proof of Lemma 3.6 builds on the following estimates for random walks.

3.7 Lemma. Let $(S_n)_{n\in\mathbb{N}}$ be the associated random walk. Fix $a > \sqrt{p-2}$ and a sequence $(t_n)_{n\in\mathbb{N}}$ such that $t_n \geq a\sigma\sqrt{n\log n}$ for all $n \in \mathbb{N}$. Further let $(r(n))_{n\in\mathbb{N}}$, $(m(n))_{n\in\mathbb{N}}$ be sequences that satisfy $r(n), m(n) \uparrow \infty$, $m(n) \in \mathbb{N}$, $m(n) \in o(n)$, $r(n) \in o(n) \cap o(t_n)$, and

\[ \limsup_{n\to\infty} \frac{r(n)}{m(n)} < c. \]

(a) Let $\tau_r := \inf\{n \in \mathbb{N} : S_n > r\}$, $r \in \mathbb{R}$. Then

\[ \mathbb{P}(S_n > nc + t_n,\ \tau_{r(n)} > m(n)) \in o(n\overline{F}(t_n)). \tag{3.18} \]

(b) Let $R_s := S_{\tau_s} - s$ be the overshoot at $s > 0$. Then we have

\[ \lim_{T\to\infty} \limsup_{n\to\infty}\ \frac{\mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T)}{n\overline{F}(t_n)} = 0. \tag{3.19} \]

Proof of Lemma 3.7. (a) As a consequence of the central limit theorem we have

\[ \mathbb{P}(S_{m(n)} \leq r(n)) = \mathbb{P}\Big(\frac{S_{m(n)} - m(n)c}{\sqrt{m(n)}} \leq \sqrt{m(n)}\Big(\frac{r(n)}{m(n)} - c\Big)\Big) \xrightarrow[n\to\infty]{} 0. \tag{3.20} \]

We write $\mathbb{P}_x$ for the law of the random walk starting from $x \in \mathbb{R}$, where $\mathbb{P}_0 = \mathbb{P}$. Using the Markov property at time $m(n)$ we obtain, for all $n \in \mathbb{N}$ such that $n - m(n) > 0$,

\[ \begin{aligned} \mathbb{P}(S_n > nc + t_n,\ \tau_{r(n)} > m(n)) &\leq \mathbb{P}(S_n > nc + t_n,\ S_{m(n)} \leq r(n)) \\ &= \mathbb{E}\big[\mathbb{1}\{S_{m(n)} \leq r(n)\}\,\mathbb{P}_{S_{m(n)}}(S_{n-m(n)} > nc + t_n)\big] \\ &\leq \mathbb{P}(S_{m(n)} \leq r(n))\,\mathbb{P}_{r(n)}(S_{n-m(n)} > nc + t_n) \\ &= \mathbb{P}(S_{m(n)} \leq r(n))\,\overline{F}_{n-m(n)}(m(n)c + t_n - r(n)) \\ &\leq \mathbb{P}(S_{m(n)} \leq r(n))\,\overline{F}_{n-m(n)}(t_n - r(n)). \end{aligned} \tag{3.21} \]

Recall that we fixed $\varepsilon \in (0, 1)$ such that $a(1-\varepsilon) > \sqrt{p-2}$. By Theorem 3.3 there exists $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$ and $t \geq (1-\varepsilon)a\sigma\sqrt{n\log n}$ we have

\[ \frac{\overline{F}_n(t)}{n\overline{F}(t)} \leq 2. \]
(3.22)

Since $r(n) \in o(t_n)$ there exists $n_1 \in \mathbb{N}$ such that $t_n - r(n) \ge (1-\varepsilon)t_n$ for all $n \ge n_1$. Therefore, for $n \ge n_1$ we have
\[
t_n - r(n) \ge (1-\varepsilon)t_n \ge (1-\varepsilon)a\sigma\sqrt{n \log n} \ge (1-\varepsilon)a\sigma\sqrt{(n-m(n))\log(n-m(n))}.
\]
Thus for $n \ge n_1$ such that $n - m(n) \ge n_0$ (which is the case for all sufficiently large $n$ due to $m(n) \in o(n)$) we find
\[
\frac{F_{n-m(n)}(t_n - r(n))}{nF(t_n)} = \frac{n-m(n)}{n} \cdot \frac{F(t_n - r(n))}{F(t_n)} \cdot \frac{F_{n-m(n)}(t_n - r(n))}{(n-m(n))F(t_n - r(n))} \le 2\,\frac{F((1-\varepsilon)t_n)}{F(t_n)}.
\]
By (3.11), the right-hand side is bounded by $2K$. Thus we conclude that $F_{n-m(n)}(t_n - r(n)) \in O(nF(t_n))$. Combining this with (3.20), we infer (3.18) from (3.21).

(b) We have
\[
\begin{aligned}
\mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T) ={}& \mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T,\ \tau_{r(n)} > m(n)) \\
&+ \mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T,\ \tau_{r(n)} \le m(n),\ S_{\tau_{r(n)}} > \varepsilon t_n) \\
&+ \mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T,\ \tau_{r(n)} \le m(n),\ S_{\tau_{r(n)}} \le \varepsilon t_n).
\end{aligned} \tag{3.23}
\]
From (a) we infer that the first summand on the right-hand side is in $o(n\,\mathbb{P}(S_1 > t_n))$. Note that since $S_{\tau_{r(n)}} > \varepsilon t_n$ implies $R_{r(n)} > T$ for all but finitely many $n$, the second summand on the right-hand side of (3.23) is bounded by $\mathbb{P}(\tau_{r(n)} \le m(n),\ S_{\tau_{r(n)}} > \varepsilon t_n)$. Further, we find
\[
\mathbb{P}(\tau_{r(n)} \le m(n),\ S_{\tau_{r(n)}} > \varepsilon t_n) \le \sum_{j=1}^{m(n)} \mathbb{E}[\mathbf{1}\{S_{j-1} \le r(n)\}\, \mathbb{P}_{S_{j-1}}(S_1 > \varepsilon t_n)] \le F(\varepsilon t_n - r(n))\, m(n).
\]
Since $F$ is regularly varying and we have $r(n) \in o(t_n)$ and $m(n) \in o(n)$, this is in $o(nF(t_n))$. Finally, using the Markov property, the third summand on the right-hand side of (3.23) is given by
\[
\sum_{j=1}^{m(n)} \mathbb{E}[\mathbf{1}\{\tau_{r(n)} = j,\ r(n) + T < S_j < \varepsilon t_n\}\, \mathbb{P}_{S_j}(S_{n-j} > nc + t_n)]
= \sum_{j=1}^{m(n)} \mathbb{E}[\mathbf{1}\{\tau_{r(n)} = j,\ r(n) + T < S_j < \varepsilon t_n\}\, F_{n-j}(t_n + jc - S_j)].
\]
If $S_j \le \varepsilon t_n$, then by (3.22) and (3.11) we have, for all $n$ such that $n - m(n) \ge n_0$,
\[
F_{n-j}(t_n + jc - S_j) \le F_{n-j}((1-\varepsilon)t_n) \le 2(n-j)F((1-\varepsilon)t_n) \le 2KnF(t_n).
\]
Therefore,
\[
\sum_{j=1}^{m(n)} \mathbb{E}[\mathbf{1}\{\tau_{r(n)} = j,\ S_j > r(n) + T,\ S_j \le \varepsilon t_n\}\, F_{n-j}(t_n + jc - S_j)] \le 2KnF(t_n)\, \mathbb{P}(R_{r(n)} > T).
\]
Since $R_{r(n)}$ converges in distribution as $n \to \infty$ (see e.g. [Kal21, Lemma 12.22]), we get
\[
\lim_{T \to \infty} \limsup_{n \to \infty} \frac{\mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T,\ \tau_{r(n)} \le m(n),\ S_{\tau_{r(n)}} \le \varepsilon t_n)}{nF(t_n)} = 0.
\]
This concludes the proof of (b).

Proof of Lemma 3.6. Let $(\mathcal{G}_n)_{n \in \mathbb{N}}$ be the natural filtration of $(S_n)_{n \in \mathbb{N}}$. By conditioning on $\mathcal{F}_{\mathcal{C}_{r(n)}}$ and then using (3.13) we obtain, for all $T > 0$,
\[
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} X_u(n)\, \mathbf{1}\{V_u > r(n) + T\}\Bigr]
= \mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} F_{n-|u|}\bigl(t_n - (V_u - |u|c)\bigr)\, \mathbf{1}\{V_u > r(n) + T,\ |u| \le n\}\Bigr].
\]
Now we apply the many-to-one formula for stopping lines (3.4) and get
\[
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} F_{n-|u|}\bigl(t_n - (V_u - |u|c)\bigr)\, \mathbf{1}\{V_u > r(n) + T,\ |u| \le n\}\Bigr]
= \mathbb{E}\bigl[F_{n-\tau_{r(n)}}\bigl(t_n - (S_{\tau_{r(n)}} - \tau_{r(n)}c)\bigr)\, \mathbf{1}\{S_{\tau_{r(n)}} > r(n) + T,\ \tau_{r(n)} \le n\}\bigr].
\]
Using the strong Markov property, this equals
\[
\mathbb{E}\bigl[\mathbb{P}(S_n > nc + t_n \mid \mathcal{G}_{\tau_{r(n)}})\, \mathbf{1}\{S_{\tau_{r(n)}} > r(n) + T,\ \tau_{r(n)} \le n\}\bigr]
= \mathbb{P}(S_n > nc + t_n,\ S_{\tau_{r(n)}} > r(n) + T,\ \tau_{r(n)} \le n),
\]
where $\mathcal{G}_{\tau_{r(n)}}$ denotes the pre-$\tau_{r(n)}$ $\sigma$-algebra. With $R_t = S_{\tau_t} - t$ it follows that
\[
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} X_u(n)\, \mathbf{1}\{V_u > r(n) + T\}\Bigr]
= \mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T,\ \tau_{r(n)} \le n)
\le \mathbb{P}(S_n > nc + t_n,\ R_{r(n)} > T).
\]
Thus with Lemma 3.7 (b) and (3.12) we infer (cf.
Remark 3.5 (a))
\[
Z_n(nc + t_n) = \sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} X_u(n)\, \mathbf{1}\{V_u \le r(n) + T\} + o_{P_{T,n}}(nF(t_n)).
\]
To restrict the sum further, we infer similarly
\[
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u} X_u(n)\, \mathbf{1}\{m(n) < |u|\}\Bigr] = \mathbb{P}(S_n > nc + t_n,\ m(n) < \tau_{r(n)} \le n),
\]
which by Lemma 3.7 (a) is in $o(nF(t_n))$. Therefore, we obtain (3.16).

Analogously we approximate Nerman's martingale $Y_t = \sum_{u \in \mathcal{C}_t} e^{-V_u}$. Indeed, we have
\[
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u}\, \mathbf{1}\{V_u > r(n) + T\}\Bigr] = \mathbb{P}(R_{r(n)} > T),
\qquad
\mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_{r(n)}} e^{-V_u}\, \mathbf{1}\{m(n) < |u|\}\Bigr] = \mathbb{P}(\tau_{r(n)} > m(n)) \le \mathbb{P}(S_{m(n)} \le r(n)).
\]
By (3.20) and the fact that the overshoot $R_t$ converges in distribution as $t \to \infty$, both sums inside the expectations on the left-hand side are in $o_{P_{T,n}}(1)$. Thus we conclude (3.17).

3.4 Proof of Theorem 2.5 – Part 2: A weak law of large numbers

The goal of this section is to prove a weak law of large numbers for the right-hand side of (3.16).

3.8 Lemma. For all sufficiently large $T > 0$ we have
\[
\frac{1}{nF(t_n)} \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u}\bigl(X_u(n) - \mathbb{E}[X_u(n) \mid V_u]\bigr) \xrightarrow[n \to \infty]{\mathbb{P}} 0. \tag{3.24}
\]

We prove the cases (I) and (II) of Theorem 2.5 separately, starting with case (II).

3.4.1 Proof of Lemma 3.8 in case (II)

To ensure stochastic domination for a weak law of large numbers in this case we use the following proposition. Its proof is more involved and therefore postponed to Section 4.

3.9 Proposition. Suppose that (A1) through (A5) hold with $\gamma \in (1,2)$. Define $\eta := 1 - \gamma^{-1}$ and fix $a > \sqrt{p-2}$. Then for all $(t_n)_{n \in \mathbb{N}}$ with $t_n \ge a\sigma\sqrt{n \log n}$ for all $n \in \mathbb{N}$ we have
\[
\mathbb{E}[Z_n(nc + t_n)^{1+\eta}] \in O\bigl((nF(t_n))^{1+\eta}\bigr) \quad \text{as } n \to \infty.
\]

Proof of Lemma 3.8 in the case (II), admitting Proposition 3.9.
From [Big98, Theorem 3] we know that assumption (A1) implies
\[
\lim_{n \to \infty} \min_{|u|=n} V_u = \infty \tag{3.25}
\]
almost surely on survival, whence $N(t) := \sum_{u \in \mathcal{I}} \mathbf{1}\{V_u \le t\}$ is finite almost surely for all $t \ge 0$ (here we use that the point process $\xi$ is almost surely locally finite). Since $|\mathcal{C}^T_t| \le N(t+T)$, it follows that $\mathcal{C}^T_t$ is almost surely finite for all $T > 0$.

Let $T > 0$ satisfy $\mu(-\infty,T] > 1$ (this is true for all sufficiently large $T$ by (A1)). From the proof of [Ner81, Lemma 3.6] we get that for some constant $c_0 > 0$ we have, almost surely on survival,
\[
\liminf_{t \to \infty} \frac{|\mathcal{C}^T_t|}{N(t)} \ge c_0.
\]
As $N(t) \uparrow \infty$ it follows that $|\mathcal{C}^T_t| \to \infty$ as $t \to \infty$ almost surely on survival.

We shall use [Ner81, Proposition 3.8] to infer
\[
\frac{1}{|\mathcal{C}^T_{r(n)}|} \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} \frac{e^{r(n)-V_u}}{nF(t_n)}\bigl(X_u(n) - \mathbb{E}[X_u(n) \mid V_u]\bigr) \xrightarrow[n \to \infty]{\mathbb{P}} 0. \tag{3.26}
\]
Note that the summands are independent given $\mathcal{F}_{\mathcal{C}_{r(n)}}$ (see [Jag89, Theorem 4.14]), and $e^{r(n)-V_u} \le 1$ for all $u \in \mathcal{C}^T_{r(n)}$. To conclude (3.26) it suffices to show that $X_u(n)/(nF(t_n))$, $u \in \mathcal{C}^T_{r(n)}$, is uniformly stochastically dominated, conditional on $\mathcal{F}_{\mathcal{C}_{r(n)}}$ and for all $n \in \mathbb{N}$, by some integrable random variable. This in turn follows if we show that there exist $C \in (0,\infty)$ and $\eta > 0$ such that for all $n \in \mathbb{N}$ we have
\[
\sup_{u \in \mathcal{C}^T_{r(n)}} \mathbb{E}\Bigl[\Bigl(\frac{X_u(n)}{nF(t_n)}\Bigr)^{1+\eta} \,\Big|\, \mathcal{F}_{\mathcal{C}_{r(n)}}\Bigr] \le C. \tag{3.27}
\]
Indeed, if a family of non-negative random variables $(X_i)_{i \in I}$ satisfies $\sup_{i \in I} \mathbb{E}[X_i^{1+\eta}] \le C$, then by Markov's inequality we have, for all $i \in I$,
\[
\mathbb{P}(X_i > t) \le \frac{C}{t^{1+\eta}} \wedge 1, \qquad t > 0.
\]
Therefore, if the right-hand side is taken to define the tail probability of a random variable $Y$, then $(X_i)_{i \in I}$ is uniformly stochastically dominated by $Y$, and $Y$ is integrable. Thus we see that (3.27) is sufficient to show that $X_u(n)/(nF(t_n))$, $u \in \mathcal{C}^T_{r(n)}$, is stochastically dominated. Recall that $\varepsilon > 0$ is chosen such that $(1-\varepsilon)a > \sqrt{p-2}$.
By Proposition 3.9 there exist $\eta > 0$ and $C \in (0,\infty)$ such that
\[
\sup_{n \in \mathbb{N}} \mathbb{E}\Bigl[\Bigl(\frac{Z_n(nc + (1-\varepsilon)t_n)}{nF((1-\varepsilon)t_n)}\Bigr)^{1+\eta}\Bigr] \le C.
\]
Moreover, by $r(n) \in o(t_n)$ there exists $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$ we have $t_n - (r(n) + T) \ge (1-\varepsilon)t_n$. Then for all $n \ge n_0$ and $u \in \mathcal{C}^T_{r(n)}$ with $|u| \le n$ we find, due to $V_u \le r(n) + T$ and $t \mapsto Z_j(t)$ being decreasing for all $j \in \mathbb{N}$,
\[
X_u(n) = [Z_{n-|u|}(nc + t_n - V_u)]_u \le [Z_{n-|u|}((n-|u|)c + t_n - (r(n)+T))]_u \le [Z_{n-|u|}((n-|u|)c + (1-\varepsilon)t_n)]_u.
\]
Now, since $[Z_j(t)]_u$ conditional on $\mathcal{F}_{\mathcal{C}_{r(n)}}$ has the same law as $Z_j(t)$ for all $j \in \mathbb{N}$, $t \in \mathbb{R}$ and $u \in \mathcal{C}_{r(n)}$, we get for all $n \ge n_0$ and $u \in \mathcal{C}^T_{r(n)}$ with $|u| < n$ (if $|u| = n$ then $X_u(n) = 0$)
\[
\mathbb{E}\Bigl[\Bigl(\frac{X_u(n)}{nF(t_n)}\Bigr)^{1+\eta} \,\Big|\, \mathcal{F}_{\mathcal{C}_{r(n)}}\Bigr]
\le \Bigl(\frac{F((1-\varepsilon)t_n)}{F(t_n)}\Bigr)^{1+\eta} \mathbb{E}\Bigl[\Bigl(\frac{Z_{n-|u|}((n-|u|)c + (1-\varepsilon)t_n)}{(n-|u|)F((1-\varepsilon)t_n)}\Bigr)^{1+\eta}\Bigr] \le K^{1+\eta} C,
\]
where in the last step we used (3.11). This shows (3.27). Thus we conclude (3.26) using [Ner81, Proposition 3.8] (note that restricting the sum further to $u \in \mathcal{C}^T_{r(n)}$ with $|u| \le m(n)$ has no effect).

Borrowing another trick from [Ner81] we see that, for all $n \in \mathbb{N}$,
\[
e^{-(r(n)+T)}|\mathcal{C}^T_{r(n)}| \le \sum_{u \in \mathcal{C}^T_{r(n)}} e^{-V_u} \le Y_{r(n)}. \tag{3.28}
\]
Since $Y_{r(n)}$ converges almost surely by Theorem 3.2, the claim follows.

3.4.2 Proof of Lemma 3.8 in case (I)

In the case (I) we stochastically bound $X_u(n)$ instead of $X_u(n)/(nF(t_n))$ and use a suitable Marcinkiewicz–Zygmund-type weak law of large numbers to get convergence in probability with a rate. Its proof is a combination of the proof of [Ner81, Proposition 4.1] with a Marcinkiewicz–Zygmund weak law of large numbers. We present the details in Appendix A.

3.10 Theorem. Let $(X_{kj})_{j \le n_k, k \in \mathbb{N}}$ be a family of non-negative random variables such that
$X_{k1}, \ldots, X_{kn_k}$ are independent for all $k \in \mathbb{N}$. Further assume that there exists a non-negative random variable $X$ such that

(i) $X_{kj} \preceq X$ for all $k, j$,

(ii) $\mathbb{P}(X > t) \in o(t^{-p})$ as $t \to \infty$ for some $p \in (1,2)$,

(iii) $\limsup_{k \to \infty} n_k^{-q} \sum_{j \le k} n_j < \infty$ for some $q > 0$.

Then we have
\[
\frac{1}{n_k^{q/p}} \sum_{j=1}^{n_k} \bigl(X_{kj} - \mathbb{E}[X_{kj}]\bigr) \xrightarrow[k \to \infty]{\mathbb{P}} 0.
\]

In our case we have $n_k = |\mathcal{C}^T_{r(k)}|$ with $\mathcal{C}^T_{r(k)}$ from (3.15). To get condition (iii) of the above theorem is the purpose of the following lemma.

3.11 Lemma. Let $(r(n))_{n \in \mathbb{N}}$ satisfy (2.3) with $\delta \in (0, \gamma - 1)$. Then for all $\delta' > \delta$ and all sufficiently large $T > 0$ we have, almost surely on survival,
\[
\limsup_{n \to \infty} |\mathcal{C}^T_{r(n)}|^{-(1+\delta')} \sum_{j=1}^{n-1} |\mathcal{C}^T_{r(j)}| < \infty.
\]

Proof. From (3.28) above we know that for all $T > 0$ there exists a finite random variable $Y$ such that, almost surely, $|\mathcal{C}^T_t| \le Y e^t$ for all $t \ge 0$. We also need to establish a lower bound. To that end consider the point process
\[
\tilde{\xi} := \sum_{u \in \mathcal{C}_0} \delta_{V_u}
\]
(with $\mathcal{C}_0$ from (3.2)), which is almost surely supported in $[0,\infty)$. It is well known (see [BK97, BK05, ABM12]) that the branching random walk $(V_u)_{u \in \mathcal{I}}$ has an embedded instance of a branching random walk with reproduction point process $\tilde{\xi}$, i.e., there exists a random injective map $\iota\colon \mathcal{I} \to \mathcal{I}$ such that $(\tilde{V}_u)_{u \in \mathcal{I}} := (V_{\iota(u)})_{u \in \mathcal{I}}$ is a branching random walk with reproduction point process $\tilde{\xi}$. An individual $u \in \mathcal{I}$ appears in the embedded branching random walk if and only if $V_u > V_v$ for all $v < u$ (i.e., its position is maximal among the positions of all its ancestors or, in other words, its position is a strictly ascending ladder height in the sequence of positions in its lineage). From (3.25) it follows that the embedded process survives if and only if the original one survives. In particular, the embedded branching process is supercritical.
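The membership criterion just described — an individual belongs to the embedded branching random walk if and only if its position is a strict record (strictly ascending ladder height) along its lineage — can be sketched in a few lines of code. The lineage values below are made up for illustration; only the record-extraction logic reflects the text.

```python
def strict_ascending_ladder_heights(positions):
    """Return the positions along a lineage that strictly exceed
    every earlier position (the strictly ascending ladder heights)."""
    records, running_max = [], float("-inf")
    for v in positions:
        if v > running_max:   # strict record: V_u > V_v for all ancestors v < u
            records.append(v)
            running_max = v
    return records

# Hypothetical positions along one line of descent.
lineage = [0.4, -0.1, 0.9, 0.7, 1.5, 1.2, 2.0]
print(strict_ascending_ladder_heights(lineage))  # [0.4, 0.9, 1.5, 2.0]
```

Note that a particle whose position falls below the running maximum simply does not appear in the embedded process; its later descendants may still appear once they set a new record.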
Therefore, for all sufficiently large $T > 0$ we have $\tilde{\mu}[0,T] := \mathbb{E}[\tilde{\xi}[0,T]] > 1$. Fix one such $T > 0$. Using (3.4) we infer (cf. [BK05, Theorem 10])
\[
\mathbb{E}\Bigl[\int e^{-x}\, \tilde{\xi}(\mathrm{d}x)\Bigr] = \mathbb{E}\Bigl[\sum_{u \in \mathcal{C}_0} e^{-V_u}\Bigr] = \mathbb{P}(\tau_0 < \infty) = 1,
\]
so the embedded branching process has Malthusian parameter $1$. Moreover, it is easy to see that $\tilde{\mathcal{C}}_t := \{u \in \mathcal{I} : \max_{v < u} \tilde{V}_v \le t < \tilde{V}_u\}$ satisfies $\iota(\tilde{\mathcal{C}}^T_t) \subseteq \mathcal{C}^T_t$, and, as before, there exists $c_0 > 0$ such that
\[
\liminf_{t \to \infty} \frac{|\tilde{\mathcal{C}}^T_t|}{\tilde{N}(t)} \ge c_0,
\]
where $\tilde{N}(t) := \sum_{u \in \mathcal{I}} \mathbf{1}\{\tilde{V}_u \le t\}$ (here we use $\tilde{\mu}[0,T] > 1$). Now we invoke [Big95, Theorem 2] to get¹
\[
\liminf_{t \to \infty} \frac{\log \tilde{N}(t)}{t} \ge 1
\]
almost surely on survival.

¹ Note that the cited reference contains a non-lattice assumption which it inherits from Nerman's work [Ner81]. It is straightforward to obtain the same result in the lattice case via the same proof, using [Gat00] instead of [Ner81].

Combining these facts, we have
\[
\liminf_{t \to \infty} \frac{\log |\mathcal{C}^T_t|}{t} \ge 1
\]
almost surely on survival. Let $\delta' > \delta$ and take $\eta \in (0,1)$ such that $(1-\eta)(1+\delta') > 1 + \delta$. Almost surely on survival, for all sufficiently large $t$ we have $|\mathcal{C}^T_t| \ge e^{(1-\eta)t}$, so that for all sufficiently large $n$
\[
|\mathcal{C}^T_{r(n)}|^{-(1+\delta')} \le e^{-(1-\eta)(1+\delta')r(n)} \le e^{-(1+\delta)r(n)}.
\]
Therefore, the claim follows from (2.3).

Proof of Lemma 3.8 in the case (I). Let $\delta \in (0, \gamma-1)$ and $q \in (1+\delta, \gamma)$ satisfy (2.4). Fix $q' \in (q, \gamma)$ and let $\delta' > \delta$ satisfy $(1+\delta')/q' = (1+\delta)/q$. We will use Theorem 3.10 to show that
\[
|\mathcal{C}^T_{r(n)}|^{-\frac{1+\delta'}{q'}} \sum_{u \in \mathcal{C}^T_{r(n)}} e^{r(n)-V_u}\bigl(X_u(n) - \mathbb{E}[X_u(n) \mid \mathcal{F}_{\mathcal{C}_{r(n)}}]\bigr) \xrightarrow[n \to \infty]{\mathbb{P}} 0. \tag{3.29}
\]
To that end we have to stochastically dominate each summand $X_u(n)$, $u \in \mathcal{C}_{r(n)}$, conditional on $\mathcal{F}_{\mathcal{C}_{r(n)}}$, uniformly by an integrable random variable $Y$ that satisfies $\mathbb{P}(Y > t) \in o(t^{-q'})$. Similar to the proof in case (II) (see (3.27)) it suffices to show that, for some $C \in (0,\infty)$ and all $n \in \mathbb{N}$, we have
\[
\sup_{u \in \mathcal{C}^T_{r(n)}} \mathbb{E}[X_u(n)^\gamma \mid \mathcal{F}_{\mathcal{C}_{r(n)}}] \le C.
\]
(3.30)

As the tail of the dominating random variable $Y$ we then take $\mathbb{P}(Y > t) = Ct^{-\gamma} \wedge 1$, which is in $o(t^{-q'})$ as claimed. For every $t \in \mathbb{R}$ and $n \in \mathbb{N}$ we have $Z_n((t,\infty)) \le Z_n(\mathbb{R}) = W_n$, where $(W_n)_{n \in \mathbb{N}_0}$ is Biggins' martingale, see (2.1). By assumption (A3) and [Liu00, Thm 2.1] we have $W_n \to W$ in $L^\gamma$, which implies that $\sup_{n \in \mathbb{N}_0} \mathbb{E}[W_n^\gamma] \le \mathbb{E}[W^\gamma] < \infty$. Since for each $u \in \mathcal{C}_{r(n)}$ the law of $X_u(n)$ given $\mathcal{F}_{\mathcal{C}_{r(n)}}$ is that of $Z_{n-|u|}(nc + t_n - V_u)$, we arrive at (3.30). Thus we have established (i) and (ii) of Theorem 3.10, while (iii) follows from Lemma 3.11. Therefore, the theorem yields (3.29).

Finally, we find
\[
\begin{aligned}
\frac{1}{nF(t_n)} \sum_{u \in \mathcal{C}^T_{r(n)}} e^{-V_u}\bigl(X_u(n) - \mathbb{E}[X_u(n) \mid V_u]\bigr)
={}& \frac{e^{-r(n)(1-(1+\delta)/q)}}{nF(t_n)} \bigl(e^{-r(n)}|\mathcal{C}^T_{r(n)}|\bigr)^{\frac{1+\delta'}{q'}} \\
&\times |\mathcal{C}^T_{r(n)}|^{-\frac{1+\delta'}{q'}} \sum_{u \in \mathcal{C}^T_{r(n)}} e^{r(n)-V_u}\bigl(X_u(n) - \mathbb{E}[X_u(n) \mid V_u]\bigr).
\end{aligned}
\]
Using (2.4), (3.28), and Theorem 3.2 we conclude (3.24).

3.5 Proof of Theorem 2.5 – Part 3: Conclusion

Now we can finish the proof of our main result.

Proof of Theorem 2.5. Fix $T > 0$ such that (3.24) holds. Combining Lemma 3.6 with Lemma 3.8 and (3.13) yields
\[
\begin{aligned}
Z_n(nc + t_n) &= \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u} X_u(n) + o_{P_{T,n}}(nF(t_n)) \\
&= \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u}\, \mathbb{E}[X_u(n) \mid V_u] + o_{P_{T,n}}(nF(t_n)) \\
&= \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u} F_{n-|u|}\bigl(t_n - (V_u - |u|c)\bigr) + o_{P_{T,n}}(nF(t_n)).
\end{aligned}
\]
Therefore, by (3.17) and Remark 3.5 (b) it suffices to show that
\[
\sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u} F_{n-|u|}\bigl(t_n - (V_u - |u|c)\bigr) = nF(t_n) \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u} + o_{P_{T,n}}(nF(t_n)). \tag{3.31}
\]
Abbreviate the left-hand side by $S_{n,T}$ and define
\[
Y_{n,T} := \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u}.
\]
Let $\delta \in (0,1)$ be such that $(1-\delta)a > \sqrt{p-2}$.
By Theorem 3.3 there exists $n_0$ such that for all $n \ge n_0$
\[
\sup_{x \ge a(1-\delta)\sigma\sqrt{n \log n}} \Bigl|\frac{F_n(x)}{nF(x)} - 1\Bigr| < \delta.
\]
Recall that for all $u \in \mathcal{C}^T_{r(n)}$ we have $V_u \le r(n) + T$. Since $r(n) \in o(t_n)$, we have for all but finitely many $n$
\[
t_n - (V_u - |u|c) \ge t_n - (r(n) + T) \ge (1-\delta)t_n. \tag{3.32}
\]
Therefore, for all large $n$ such that $n - m(n) \ge n_0$ we have
\[
S_{n,T} \le \sum_{\substack{u \in \mathcal{C}^T_{r(n)} \\ |u| \le m(n)}} e^{-V_u} F_{n-|u|}\bigl((1-\delta)t_n\bigr) \le (1+\delta)\, nF((1-\delta)t_n)\, Y_{n,T}.
\]
Thus we get, for all $\eta > 0$,
\[
\mathbb{P}\Bigl(\frac{S_{n,T}}{nF(t_n)} - Y_{n,T} > \eta\Bigr)
\le \mathbb{P}\Bigl(Y_{n,T}\Bigl((1+\delta)\frac{F((1-\delta)t_n)}{F(t_n)} - 1\Bigr) > \eta\Bigr)
\le \mathbb{P}\Bigl(Y_{r(n)}\Bigl((1+\delta)\frac{F((1-\delta)t_n)}{F(t_n)} - 1\Bigr) > \eta\Bigr).
\]
In view of Theorem 3.2 and (A4) we obtain
\[
\limsup_{n \to \infty} \mathbb{P}\Bigl(\frac{S_{n,T}}{nF(t_n)} - Y_{n,T} > \eta\Bigr) \le \mathbb{P}\bigl(W\bigl((1+\delta)(1-\delta)^{-p} - 1\bigr) > \eta\bigr),
\]
which goes to zero as $\delta \downarrow 0$. Hence we obtain
\[
\mathbb{P}\Bigl(\frac{S_{n,T}}{nF(t_n)} - Y_{n,T} > \eta\Bigr) \xrightarrow[n \to \infty]{} 0.
\]
To get the lower limit we use
\[
t_n - (V_u - |u|c) \le t_n + m(n)c \le (1+\delta)t_n
\]
for all but finitely many $n$, due to $m(n) \in o(t_n)$. This gives, as above,
\[
S_{n,T} \ge (1-\delta)\, nF((1+\delta)t_n)\, Y_{n,T}.
\]
The same reasoning as above shows that
\[
\mathbb{P}\Bigl(\frac{S_{n,T}}{nF(t_n)} - Y_{n,T} < -\eta\Bigr) \xrightarrow[n \to \infty]{} 0.
\]
Thus (3.31) follows and the proof is complete.

4 Proof of Proposition 3.9

In this section we prove Proposition 3.9. The proof rests on the spinal decomposition theorem, which we will set up now. For a more detailed account the reader is referred to [Lyo97, BK04, Shi15]. Recall that we label particles in a branching random walk by elements of the Ulam–Harris tree $\mathcal{I} = \bigcup_n \mathbb{N}^n$ (see Section 2.1). Therefore, the law of the positions $(V_u)_{u \in \mathcal{I}}$ of a branching random walk defines a measure on the space of marked trees $\mathcal{I}^* := (\mathbb{R} \cup \{\infty\})^{\mathcal{I}}$.
We further consider the set $\partial\mathcal{I}$ of infinite lines of descent in the Ulam–Harris tree, which we may formally identify with the set of sequences $(u_n)_{n \in \mathbb{N}_0} \subseteq \mathcal{I}$ such that $u_0 = \varnothing$ and for all $n \in \mathbb{N}_0$ there exists $j \in \mathbb{N}$ with $u_{n+1} = u_n j$. Recall that $\mathcal{F}_n$ is the $\sigma$-algebra of the positions $(V_u)_{|u| \le n}$ of a branching random walk up to generation $n \in \mathbb{N}_0$. We use Biggins' martingale to obtain a probability measure $Q$ on $\mathcal{I}^*$ with
\[
\frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\Big|_{\mathcal{F}_n} = W_n, \qquad n \in \mathbb{N}_0. \tag{4.1}
\]
The corresponding expectation is denoted by $\mathbb{E}_Q$. It turns out that $Q$ is also the law of the positions of a certain spinal branching random walk, which we will describe in the following.

We consider two types of particles, which we call spine and off-spine, respectively. Off-spine particles displace their offspring according to the original point process $\xi$, while spine particles do so according to the size-biased point process $\hat{\xi}$, whose law has the Radon–Nikodým density $W_1$ with respect to the law of $\xi$ under $\mathbb{P}$. The process starts with a spine particle denoted by $w_0 := \varnothing$. The positions $(V_u)_{u \in \mathcal{I}}$ in this multi-type branching random walk are constructed as usual (see Section 2.1). For every $n \in \mathbb{N}_0$, the next spine particle $w_{n+1}$ is chosen among the direct offspring $(w_n j)_{j \in \mathbb{N}}$ of $w_n$ such that $w_n j = w_{n+1}$ with probability proportional to $e^{-(V(w_n j) - V(w_n))}$, $j \in \mathbb{N}$. All other offspring particles of $w_n$ are declared off-spine.

Let $B$ denote the law of $((V_u)_{u \in \mathcal{I}}, (w_n)_{n \in \mathbb{N}_0})$, which is a probability measure on $\mathcal{I}^* \times \partial\mathcal{I}$. We continue to use the notation $\mathcal{F}_n$ for the $\sigma$-algebra of the positions up to generation $n \in \mathbb{N}_0$ on this space as well. Note that $\mathcal{F}_n$ contains no information about the spine. Further, let $\mathcal{G}_n$ denote the $\sigma$-algebra of the spine $(w_j)_{j \le n}$, the positions $(V(w_j))_{j \le n}$, and the positions of all its immediate offspring $(V(w_n j) - V(w_n))_{j \in \mathbb{N}}$, $n \in \mathbb{N}_0$.
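The change of measure (4.1) can be illustrated numerically at generation one: integrating a functional of the first-generation positions against the weights $e^{-V_u}$ reproduces the law of the associated random walk. The toy model below (two children with i.i.d. $\mathcal{N}(\log 2 + \tfrac12, 1)$ displacements, chosen so that $\mathbb{E}[\sum_{|u|=1} e^{-V_u}] = 1$, in which case the associated walk step is $\mathcal{N}(\log 2 - \tfrac12, 1)$) is our own illustration, not an example from the paper.

```python
import math
import random

random.seed(1)
MU = math.log(2) + 0.5   # mean displacement of each of the two children
N_SAMPLES = 200_000
x = 1.0                  # test level

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Monte Carlo estimate of E[ sum_{|u|=1} e^{-V_u} 1{V_u > x} ].
acc = 0.0
for _ in range(N_SAMPLES):
    for _child in range(2):          # two children, i.i.d. N(MU, 1)
        v = random.gauss(MU, 1.0)
        if v > x:
            acc += math.exp(-v)
mc = acc / N_SAMPLES

# For this point process the associated random walk step is N(MU - 1, 1).
exact = 1.0 - normal_cdf(x - (MU - 1.0))
print(mc, exact)  # the two values agree up to Monte Carlo error
```

The exponential tilting by $e^{-v}$ of the Gaussian displacement density shifts its mean down by the variance, which is how the step law $\mathcal{N}(\mathrm{MU}-1,1)$ arises in this sketch.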
For $u \in \mathcal{I}$ and $j \in \mathbb{N}$ we write $\Omega(uj) := \{ui : j \ne i \in \mathbb{N}\} \subseteq \mathcal{I}$ for the set of siblings of $uj$.

4.1 Theorem (Spinal decomposition theorem). With $Q$ and $B$ as above, $Q$ is equal to the marginal law of $B$ on $\mathcal{I}^*$. As customary in the literature we identify $Q$ with $B$, so we use $Q$ as a probability measure on $\mathcal{I}^* \times \partial\mathcal{I}$. Furthermore, the following hold.

(i) Conditionally on $\mathcal{G}_n$, for all $n \in \mathbb{N}$ the processes $(V(uv) - V(u))_{v \in \mathcal{I}}$, $u \in \Omega(w_n)$, are independent and have the distribution of a branching random walk with reproduction point process $\xi$.

(ii) For every $u \in \mathcal{I}$ with $|u| = n \in \mathbb{N}_0$ we have
\[
Q(w_n = u \mid \mathcal{F}_n) = \frac{e^{-V_u}}{W_n}.
\]

(iii) The law of $(V(w_n))_{n \in \mathbb{N}_0}$ under $Q$ is that of the associated random walk of the branching random walk $(V_u)_{u \in \mathcal{I}}$ (see Section 3.1).

With the spinal decomposition theorem at hand we can now prove Proposition 3.9.

Proof of Proposition 3.9. We start with some preliminary observations. Recall that $\eta = 1 - \gamma^{-1}$ and note that $1 + \eta \le \gamma$. Therefore, by (A3) we have $\mathbb{E}[W_1^{1+\eta}] < \infty$, and (A5) combined with Jensen's inequality gives
\[
\mathbb{E}[Z_1(t)^{1+\eta}] \le \mathbb{E}[Z_1(t)^\gamma]^{(1+\eta)/\gamma} \in O(F(t)^{1+\eta}). \tag{4.2}
\]
Moreover, for any function $f$ that is regularly varying at infinity we have $O(f(t)) = O(f(\lambda t))$ for all $\lambda > 0$. Since by (A4), for any $q > 0$, the function $t \mapsto F(t)^q$ is regularly varying at infinity with index $-pq$, we conclude that
\[
O(F(t)^q) = O(F(\lambda t)^q) \quad \text{for all } \lambda > 0. \tag{4.3}
\]
Furthermore, by Potter's theorem (e.g. [BGT87, Theorem 1.5.6]), for every $\delta > 0$ there exists $c > 0$ such that for all sufficiently large $t$ we have $F(t)^q \ge ct^{-pq-\delta}$. This implies that for every $\delta > 0$ we have
\[
e^{-\delta t} \in o(F(t)^q). \tag{4.4}
\]
Finally, by Theorem 4.1 (iii) we have, for every $\theta > 0$ and $n \in \mathbb{N}$,
\[
\mathbb{E}_Q[e^{-\theta V(w_n)}] = \mathbb{E}[e^{-\theta S_n}] = \mathbb{E}[e^{-\theta S_1}]^n = m(1+\theta)^n.
\]
(4.5)

Using (A1), (A2) and (A3) combined with the convexity of $m$ we see that $m(1+\theta) < 1$ for all $\theta \in (0, \gamma-1)$.

Now we prove the proposition. Let $t, M \in (0,\infty)$ and $n \in \mathbb{N}$. Using Theorem 4.1 (ii) we have
\[
Z_n(t) = W_n \sum_{|u|=n} \frac{e^{-V_u}}{W_n}\, \mathbf{1}\{V_u > t\} = W_n\, Q(V(w_n) > t \mid \mathcal{F}_n). \tag{4.6}
\]
By switching to the measure $Q$ and using properties of conditional expectations we find
\[
\begin{aligned}
\mathbb{E}[Z_n(t)^{1+\eta} \mathbf{1}\{W_n \le M\}]
&= \mathbb{E}_Q\Bigl[Z_n(t)^\eta\, \frac{Z_n(t)}{W_n}\, \mathbf{1}\{W_n \le M\}\Bigr] \\
&= \mathbb{E}_Q[Z_n(t)^\eta\, Q(V(w_n) > t \mid \mathcal{F}_n)\, \mathbf{1}\{W_n \le M\}] \\
&= \mathbb{E}_Q[Z_n(t)^\eta\, \mathbf{1}\{V(w_n) > t,\ W_n \le M\}].
\end{aligned}
\]
Now we condition on the spine and use Jensen's inequality for conditional expectations to get
\[
\begin{aligned}
\mathbb{E}[Z_n(t)^{1+\eta} \mathbf{1}\{W_n \le M\}]
&= \mathbb{E}_Q[\mathbb{E}_Q[Z_n(t)^\eta \mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n]\, \mathbf{1}\{V(w_n) > t\}] \\
&\le \mathbb{E}_Q[\mathbb{E}_Q[Z_n(t) \mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n]^\eta\, \mathbf{1}\{V(w_n) > t\}].
\end{aligned} \tag{4.7}
\]
Note that the inner conditional expectation exists since $0 \le Z_n(t) \le W_n \le M$. Decomposition along the spine yields
\[
Z_n(t) = e^{-V(w_n)} \mathbf{1}\{V(w_n) > t\} + \sum_{j=1}^n \sum_{u \in \Omega(w_j)} e^{-V_u} [Z_{n-j}(t - V_u)]_u,
\]
where $Z_0(t) = \mathbf{1}\{t < 0\}$. Therefore, we get
\[
\mathbb{E}_Q[Z_n(t) \mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n]
= e^{-V(w_n)} \mathbf{1}\{V(w_n) > t\}\, \mathbb{E}_Q[\mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n]
+ \sum_{j=1}^n \sum_{u \in \Omega(w_j)} e^{-V_u}\, \mathbb{E}_Q\bigl[[Z_{n-j}(t - V_u)]_u\, \mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n\bigr].
\]
Bounding $\mathbf{1}\{W_n \le M\} \le 1$ and applying the many-to-one formula results in
\[
\mathbb{E}_Q[Z_n(t) \mathbf{1}\{W_n \le M\} \mid \mathcal{G}_n]
\le e^{-V(w_n)} \mathbf{1}\{V(w_n) > t\} + \sum_{j=1}^n \sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}(t - V_u - (n-j)c),
\]
where $F_j(t) := \mathbb{P}(S_j - jc > t)$, $j > 0$, $F_0(t) := \mathbf{1}\{t < 0\}$ should be recalled. Now we plug this back into (4.7), use that $x \mapsto x^\eta$ is subadditive, and take $M \uparrow \infty$ to get
\[
\mathbb{E}[Z_n(nc + t_n)^{1+\eta}]
\le \mathbb{E}_Q\bigl[e^{-\eta V(w_n)} \mathbf{1}\{V(w_n) > nc + t_n\}\bigr]
+ \sum_{j=1}^n \mathbb{E}_Q\Bigl[\mathbf{1}\{V(w_n) > nc + t_n\}\Bigl(\sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\Bigr].
\]
(4.8)

The first summand on the right-hand side is bounded by $e^{-\eta t_n}$, which is in $o((nF(t_n))^{1+\eta})$ by (4.4). The rest of the proof is concerned with estimating the second summand. We proceed in several steps. For brevity, we write $t'_n := nc + t_n$ and
\[
\mathcal{W}_n(t) := \sum_{u \in \Omega(w_n)} e^{-\Delta V_u} \mathbf{1}\{\Delta V_u > t\}, \quad t \in \mathbb{R}, \qquad
\mathcal{W}_n := \sum_{u \in \Omega(w_n)} e^{-\Delta V_u}, \quad n \in \mathbb{N},
\]
where $\Delta V_u = V_u - V(w_{n-1})$ is the displacement of $u \in \Omega(w_n)$ with respect to its parent $w_{n-1}$.

Step 1: The summand in (4.8) with $j = n$. The summand on the right-hand side of (4.8) with $j = n$ is given by
\[
\begin{aligned}
\mathbb{E}_Q\Bigl[\mathbf{1}\{V(w_n) > t'_n\}\Bigl(\sum_{u \in \Omega(w_n)} e^{-V_u} \mathbf{1}\{V_u > t'_n\}\Bigr)^\eta\Bigr]
&= \mathbb{E}_Q\bigl[\mathbf{1}\{V(w_n) > t'_n\}\, e^{-\eta V(w_{n-1})}\, \mathcal{W}_n(t'_n - V(w_{n-1}))^\eta\bigr] \\
&= \mathbb{E}_Q\bigl[e^{-\eta V(w_{n-1})}\, \mathbb{E}_Q[\mathbf{1}\{V(w_n) > t'_n\}\, \mathcal{W}_n(t'_n - V(w_{n-1}))^\eta \mid \mathcal{G}_{n-1}]\bigr] \\
&= \mathbb{E}_Q[e^{-\eta V(w_{n-1})}\, \varphi(t'_n - V(w_{n-1}))],
\end{aligned} \tag{4.9}
\]
where
\[
\varphi(t) := \mathbb{E}_Q[\mathbf{1}\{V(w_1) > t\}\, \mathcal{W}_1(t)^\eta], \qquad t \in \mathbb{R}. \tag{4.10}
\]
We have, for all $t \in \mathbb{R}$,
\[
\varphi(t) \le \mathbb{E}_Q[\mathcal{W}_1^\eta] \le \mathbb{E}_Q[W_1^\eta] = \mathbb{E}[W_1^{1+\eta}] < \infty.
\]
Furthermore, using (4.6) with $n = 1$ we see that
\[
\varphi(t) = \mathbb{E}_Q\Bigl[\frac{Z_1(t)}{W_1}\, \mathcal{W}_1(t)^\eta\Bigr] \le \mathbb{E}_Q\Bigl[\frac{Z_1(t)^{1+\eta}}{W_1}\Bigr] = \mathbb{E}[Z_1(t)^{1+\eta}].
\]
By (4.2) this is in $O(F(t)^{1+\eta})$ as $t \to \infty$. Thus, for the right-hand side of (4.9) we infer, using that $\varphi$ is decreasing,
\[
\begin{aligned}
\mathbb{E}_Q[e^{-\eta V(w_{n-1})} \varphi(t'_n - V(w_{n-1}))]
&\le \mathbb{E}_Q[e^{-\eta V(w_{n-1})} \varphi(t_n - V(w_{n-1}))] \\
&\le \mathbb{E}_Q[e^{-\eta V(w_{n-1})} \mathbf{1}\{V(w_{n-1}) > t_n/2\}]\, \mathbb{E}[W_1^{1+\eta}] + \varphi(t_n/2)\, \mathbb{E}_Q[e^{-\eta V(w_{n-1})} \mathbf{1}\{V(w_{n-1}) \le t_n/2\}] \\
&\le e^{-\eta t_n/2}\, \mathbb{E}[W_1^{1+\eta}] + \varphi(t_n/2)\, \mathbb{E}_Q[e^{-\eta V(w_{n-1})}].
\end{aligned}
\]
The first summand on the right-hand side is in $o(F(t_n)^{1+\eta})$ by (4.4).
Regarding the second summand, we use (4.5) and (4.3) to get
\[
\varphi(t_n/2)\, \mathbb{E}_Q[e^{-\eta V(w_{n-1})}] = \varphi(t_n/2)\, m(1+\eta)^{n-1} \le \varphi(t_n/2) \in O(F(t_n/2)^{1+\eta}) = O(F(t_n)^{1+\eta}).
\]
Combined, we have shown that the $j = n$ summand on the right-hand side of (4.8) is in $O(F(t_n)^{1+\eta})$.

Step 2: The summands in (4.8) with $j \in \{1, \ldots, n-1\}$. We claim that there exists $C \in (0,\infty)$ such that for all sufficiently large $n \in \mathbb{N}$ and $j < n$ we have
\[
\mathbb{E}_Q\Bigl[\mathbf{1}\{V(w_n) > t'_n\}\Bigl(\sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\Bigr] \le C\, (nF(t_n))^{1+\eta}\, m(1+\eta/2)^j. \tag{4.11}
\]
Note that, using (4.8) and $m(1+\eta/2) < 1$, this claim combined with Step 1 completes the proof.

Let $\varepsilon > 0$ be such that $a(1-\varepsilon) > \sqrt{p-2}$ and $\varepsilon/2 \le 1-\varepsilon$. By Theorem 3.3 combined with (3.6) there exists $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$, $j \le n$ and $t \ge (1-\varepsilon)a\sigma\sqrt{n \log n}$ we have
\[
F_j(t) \le 2jF(t). \tag{4.12}
\]
Indeed, first we take $n_1$ such that for all $n \ge n_1$ and $t \ge (1-\varepsilon)a\sigma\sqrt{n \log n}$ we have $F_n(t) \le 2nF(t)$. Then, by (3.6), there exists $t_0$ such that $F_j(t) \le 2jF(t)$ for all $j \le n_1$ and $t \ge t_0$. Now take $n_0 \ge n_1$ such that $(1-\varepsilon)a\sigma\sqrt{n_0 \log n_0} \ge t_0$, and (4.12) follows. Moreover, by (4.3) we may choose $n_0$ large enough to ensure the validity of
\[
F((1-\varepsilon)t_n) \le F(\varepsilon t_n/2) \le KF(t_n) \tag{4.13}
\]
for some constant $K \in (0,\infty)$ and all $n \ge n_0$ (the first inequality is due to $\varepsilon/2 \le 1-\varepsilon$). For the rest of the proof we assume that $n \ge n_0$ holds. We proceed with the estimation of the left-hand side of (4.11). Let $j < n$.
Using conditioning on $\mathcal{G}_j$, Theorem 4.1 (iii) and the fact that $F_{n-j}$ is decreasing, we find
\[
\begin{aligned}
&\mathbb{E}_Q\Bigl[\mathbf{1}\{V(w_n) > t'_n\}\Bigl(\sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\Bigr] \\
&\qquad = \mathbb{E}_Q\Bigl[\Bigl(\sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\, Q(V(w_n) > t'_n \mid \mathcal{G}_j)\Bigr] \\
&\qquad = \mathbb{E}_Q\Bigl[e^{-\eta V(w_{j-1})}\Bigl(\sum_{u \in \Omega(w_j)} e^{-\Delta V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\, F_{n-j}\bigl(t_n - (V(w_j) - jc)\bigr)\Bigr] \\
&\qquad \le \mathbb{E}_Q\Bigl[e^{-\eta V(w_{j-1})}\Bigl(\sum_{u \in \Omega(w_j)} e^{-\Delta V_u} F_{n-j}(t_n - V_u)\Bigr)^\eta\, F_{n-j}(t_n - V(w_j))\Bigr].
\end{aligned} \tag{4.14}
\]
First we estimate the inner term in brackets. We split the sum into particles $u \in \Omega(w_j)$ with $V_u > \varepsilon t_n$ or $V_u \le \varepsilon t_n$. For $V_u > \varepsilon t_n$ we estimate $F_{n-j}(t_n - V_u) \le 1$. For $V_u \le \varepsilon t_n$ we use again that $F_{n-j}$ is decreasing and get, by (4.12) and (4.13),
\[
F_{n-j}(t_n - V_u) \le F_{n-j}((1-\varepsilon)t_n) \le 2(n-j)F((1-\varepsilon)t_n) \le 2KnF(t_n).
\]
Therefore, we have
\[
\sum_{u \in \Omega(w_j)} e^{-\Delta V_u} F_{n-j}(t_n - V_u)
\le \sum_{\substack{u \in \Omega(w_j) \\ V_u > \varepsilon t_n}} e^{-\Delta V_u} + \sum_{\substack{u \in \Omega(w_j) \\ V_u \le \varepsilon t_n}} e^{-\Delta V_u}\, 2KnF(t_n)
\le \mathcal{W}_j(\varepsilon t_n - V(w_{j-1})) + 2KnF(t_n)\, \mathcal{W}_j.
\]
Since $x \mapsto x^\eta$ is increasing and subadditive, we get from (4.14)
\[
\begin{aligned}
\mathbb{E}_Q\Bigl[\mathbf{1}\{V(w_n) > t'_n\}\Bigl(\sum_{u \in \Omega(w_j)} e^{-V_u} F_{n-j}\bigl(t_n - (V_u - jc)\bigr)\Bigr)^\eta\Bigr]
\le{}& \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n - V(w_{j-1}))^\eta\, F_{n-j}(t_n - V(w_j))\bigr] \\
&+ (2KnF(t_n))^\eta\, \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, F_{n-j}(t_n - V(w_j))\bigr].
\end{aligned} \tag{4.15}
\]
It suffices to bound each summand on the right-hand side of this inequality by the right-hand side of (4.11). Our strategy to do so is to split the expectation according to the values of $V(w_{j-1})$ and $V(w_j)$. To organize this we define the events
\[
A_{j,n} := \{V(w_{j-1}) \le \varepsilon t_n/2\}, \qquad B_{j,n} := \{V(w_j) \le \varepsilon t_n\}, \qquad j \le n.
\]
Step 2.1: Second summand on the right-hand side of (4.15).
To control the second summand on the right-hand side of (4.15) it suffices to show
\[
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, F_{n-j}(t_n - V(w_j))\bigr] \le C\, nF(t_n)\, m(1+\eta/2)^j
\]
for some $C \in (0,\infty)$, all sufficiently large $n$ and all $j < n$. On the event $B_{j,n}$ we have, by (4.12) and (4.13),
\[
F_{n-j}(t_n - V(w_j)) \le F_{n-j}((1-\varepsilon)t_n) \le 2(n-j)F((1-\varepsilon)t_n) \le 2KnF(t_n). \tag{4.16}
\]
Therefore, we get, using (4.5),
\[
\begin{aligned}
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{B_{j,n}}\bigr]
&\le 2KnF(t_n)\, \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\bigr] \\
&= 2KnF(t_n)\, \mathbb{E}_Q[e^{-\eta V(w_{j-1})}]\, \mathbb{E}_Q[\mathcal{W}_j^\eta] \\
&\le 2KnF(t_n)\, m(1+\eta)^{j-1}\, \mathbb{E}[W_1^{1+\eta}].
\end{aligned}
\]
The right-hand side has the desired form. Furthermore, on $B_{j,n}^c$ we estimate $F_{n-j}(t_n - V(w_j)) \le 1$ and split into $A_{j,n}$ and $A_{j,n}^c$. On $A_{j,n}^c$ we estimate $e^{-\eta V(w_{j-1})} \le e^{-\eta\varepsilon t_n/4}\, e^{-\eta V(w_{j-1})/2}$ and obtain
\[
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{B_{j,n}^c \cap A_{j,n}^c}\bigr]
\le e^{-\eta\varepsilon t_n/4}\, \mathbb{E}_Q[e^{-\eta V(w_{j-1})/2}\, \mathcal{W}_j^\eta]
\le e^{-\eta\varepsilon t_n/4}\, m(1+\eta/2)^{j-1}\, \mathbb{E}[W_1^{1+\eta}].
\]
By (4.4), this is also of the desired form. On $B_{j,n}^c \cap A_{j,n}$ we necessarily have $\Delta V(w_j) > \varepsilon t_n/2$, so that
\[
\begin{aligned}
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{B_{j,n}^c \cap A_{j,n}}\bigr]
&\le \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j^\eta\, \mathbf{1}\{\Delta V(w_j) > \varepsilon t_n/2\}\bigr] \\
&= \mathbb{E}_Q[e^{-\eta V(w_{j-1})}]\, \mathbb{E}_Q[\mathcal{W}_1^\eta\, \mathbf{1}\{V(w_1) > \varepsilon t_n/2\}] \\
&= m(1+\eta)^{j-1}\, \mathbb{E}_Q\Bigl[\mathcal{W}_1^\eta\, \frac{Z_1(\varepsilon t_n/2)}{W_1}\Bigr]
\le m(1+\eta)^{j-1}\, \mathbb{E}[W_1^\eta\, Z_1(\varepsilon t_n/2)],
\end{aligned}
\]
where we have used (4.5) and (4.6) with $n = 1$ in the penultimate step. By Hölder's inequality ($p = (1+\eta)/\eta$, $q = 1+\eta$), we infer
\[
\mathbb{E}[W_1^\eta\, Z_1(\varepsilon t_n/2)] \le \mathbb{E}[W_1^{1+\eta}]^{\frac{\eta}{1+\eta}}\, \mathbb{E}[Z_1(\varepsilon t_n/2)^{1+\eta}]^{\frac{1}{1+\eta}},
\]
which by (4.2) is in $O(F(\varepsilon t_n/2)) = O(F(t_n))$. This finishes the second summand on the right-hand side of (4.15). Now we turn to the first summand and proceed similarly.
Step 2.2: First summand on the right-hand side of (4.15). Again, we use the events $A_{j,n}$ and $B_{j,n}$ to split the expectation into several parts according to the values of $V(w_{j-1})$ and $V(w_j)$. As before, we get
\[
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n - V(w_{j-1}))^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{A_{j,n}^c}\bigr]
\le e^{-\varepsilon\eta t_n/4}\, \mathbb{E}_Q[e^{-\eta V(w_{j-1})/2}\, \mathcal{W}_j^\eta]
\le e^{-\varepsilon\eta t_n/4}\, m(1+\eta/2)^{j-1}\, \mathbb{E}[W_1^{1+\eta}].
\]
Further, by (4.16) we see that
\[
\begin{aligned}
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n - V(w_{j-1}))^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{A_{j,n} \cap B_{j,n}}\bigr]
&\le 2KnF(t_n)\, \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n/2)^\eta\bigr] \\
&= 2KnF(t_n)\, m(1+\eta)^{j-1}\, \mathbb{E}_Q[\mathcal{W}_1(\varepsilon t_n/2)^\eta].
\end{aligned}
\]
By Hölder's inequality (with $p = 1/(1-\eta) = \gamma$, $q = 1/\eta$; recall that $\eta = 1 - \gamma^{-1}$) we have, for all $t > 0$,
\[
\mathbb{E}_Q[\mathcal{W}_1(t)^\eta] = \mathbb{E}[W_1\, \mathcal{W}_1(t)^\eta] \le \mathbb{E}[W_1\, Z_1(t)^\eta] \le \mathbb{E}[W_1^\gamma]^{1/\gamma}\, \mathbb{E}[Z_1(t)]^\eta = \mathbb{E}[W_1^\gamma]^{1/\gamma}\, F(t)^\eta.
\]
Thus, $\mathbb{E}_Q[\mathcal{W}_1(\varepsilon t_n/2)^\eta] \in O(F(\varepsilon t_n/2)^\eta) = O(F(t_n)^\eta)$. Finally, we have
\[
\begin{aligned}
\mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n - V(w_{j-1}))^\eta\, F_{n-j}(t_n - V(w_j))\, \mathbf{1}_{A_{j,n} \cap B_{j,n}^c}\bigr]
&\le \mathbb{E}_Q\bigl[e^{-\eta V(w_{j-1})}\, \mathcal{W}_j(\varepsilon t_n/2)^\eta\, \mathbf{1}\{\Delta V(w_j) > \varepsilon t_n/2\}\bigr] \\
&= \mathbb{E}_Q[e^{-\eta V(w_{j-1})}]\, \mathbb{E}_Q\bigl[\mathcal{W}_j(\varepsilon t_n/2)^\eta\, \mathbf{1}\{\Delta V(w_j) > \varepsilon t_n/2\}\bigr] \\
&= m(1+\eta)^{j-1}\, \varphi(\varepsilon t_n/2)
\end{aligned}
\]
with $\varphi$ from (4.10). We have shown earlier that $\varphi(t) \in O(F(t)^{1+\eta})$; thus, combined with (4.3), the claimed bound follows. This concludes the proof.

A A Marcinkiewicz–Zygmund-type weak law of large numbers

Recall that we write $X \preceq Y$ if for all sufficiently large $t \ge 0$ we have $\mathbb{P}(X > t) \le \mathbb{P}(Y > t)$. To prove the following Marcinkiewicz–Zygmund-type weak law of large numbers one can adapt [Kal21, Theorem 6.17] in a straightforward way.

A.1 Theorem. Let $(X_j)_{j \in \mathbb{N}}$ be a sequence of independent random variables.
Suppose further that $\mathbb{E}[X_j] = 0$ and that there exists a random variable $X \ge 0$ such that $|X_j| \preceq X$ for all $j \in \mathbb{N}$, and that we have $\mathbb{P}(X > t) \in o(t^{-p})$ as $t \to \infty$ for some $p \in (1,2)$. Then we have $n^{-1/p} \sum_{j=1}^n X_j \to 0$ in probability as $n \to \infty$.

Proof of Theorem 3.10. We modify the proof of [Ner81, Proposition 4.1]. Assume without loss of generality that $\mathbb{E}[X_{kj}] = 0$ for all $k, j$ and that the whole family $(X_{kj})_{j \le n_k, k \in \mathbb{N}}$ is independent. By Theorem A.1 we have
\[
\frac{\sum_{i=1}^k \sum_{j=1}^{n_i} X_{ij}}{\bigl(\sum_{i=1}^k n_i\bigr)^{1/p}} \xrightarrow[k \to \infty]{\mathbb{P}} 0.
\]
Then we find
\[
\frac{1}{n_k^{q/p}} \sum_{j=1}^{n_k} X_{kj}
= \frac{\sum_{i=1}^k \sum_{j=1}^{n_i} X_{ij}}{\bigl(\sum_{i=1}^k n_i\bigr)^{1/p}} \biggl(\frac{\sum_{i=1}^k n_i}{n_k^q}\biggr)^{1/p}
- \frac{\sum_{i=1}^{k-1} \sum_{j=1}^{n_i} X_{ij}}{\bigl(\sum_{i=1}^{k-1} n_i\bigr)^{1/p}} \biggl(\frac{\sum_{i=1}^{k-1} n_i}{n_k^q}\biggr)^{1/p}.
\]
Since the right-hand side converges to zero in probability, the claim is proved.

B Details on Example 2.3 (b)

Here we prove the remaining claims of Example 2.3 (b). Suppose that we have $\mathbb{P}(1-f \le t) = t^{p+1}\ell(t)$ for all $t \le \varepsilon \in (0,1)$, where $\ell$ is slowly varying at zero.

Claim 1: $m(1) < \infty$. We have
\[
m(1) = \mathbb{E}[(1-f)^{-1}] = \mathbb{E}[(1-f)^{-1} \mathbf{1}\{1-f > \varepsilon\}] + \mathbb{E}[(1-f)^{-1} \mathbf{1}\{1-f \le \varepsilon\}].
\]
The first summand on the right-hand side is bounded by $1/\varepsilon$. For the second one we use the identity $t^{-1} = \int_t^\infty x^{-2}\, \mathrm{d}x$ and find, using Fubini's theorem,
\[
\begin{aligned}
\mathbb{E}[(1-f)^{-1} \mathbf{1}\{1-f \le \varepsilon\}]
&= \mathbb{E}\Bigl[\int_{1-f}^\infty x^{-2}\, \mathbf{1}\{1-f \le \varepsilon\}\, \mathrm{d}x\Bigr]
= \int_0^\infty x^{-2}\, \mathbb{P}(1-f \le \varepsilon \wedge x)\, \mathrm{d}x \\
&= \int_0^\varepsilon x^{-2}\, \mathbb{P}(1-f \le x)\, \mathrm{d}x + \int_\varepsilon^\infty x^{-2}\, \mathbb{P}(1-f \le \varepsilon)\, \mathrm{d}x \\
&= \int_0^\varepsilon x^{p-1}\ell(x)\, \mathrm{d}x + \mathbb{P}(1-f \le \varepsilon)\, \varepsilon^{-1}.
\end{aligned}
\]
Now note that $\tilde{\ell}(x) := \ell(1/x)$ is slowly varying at $\infty$, and the first summand transforms to
\[
\int_0^\varepsilon x^{p-1}\ell(x)\, \mathrm{d}x = \int_{\varepsilon^{-1}}^\infty y^{-p-1}\tilde{\ell}(y)\, \mathrm{d}y,
\]
which is finite by [BGT87, Proposition 1.5.10] since $p > 0$. Therefore, $m(1) < \infty$ as claimed.

Claim 2: $t \mapsto \mathbb{E}[e^{-t(1-f)}]$ is regularly varying at $\infty$ with index $-(p+1)$.
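Before giving the proof, here is a quick numerical sanity check (our own sketch, not part of the argument) in the simplest admissible case $\ell \equiv 1$, $\varepsilon = 1$, i.e. $\mathbb{P}(1-f \le t) = t^{p+1}$ on $[0,1]$. The claim then predicts $\mathbb{E}[e^{-t(1-f)}] \sim \Gamma(p+2)\, t^{-(p+1)}$ as $t \to \infty$; the parameter values below are arbitrary.

```python
import math

def laplace_transform(t, p, n=200_000):
    """E[e^{-tX}] for X with density (p+1) x^p on [0,1]
    (i.e. P(X <= s) = s^{p+1}), computed by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        xk = (k + 0.5) * h
        total += math.exp(-t * xk) * (p + 1.0) * xk**p
    return total * h

p, t = 1.5, 80.0
approx = laplace_transform(t, p)
predicted = math.gamma(p + 2.0) * t ** (-(p + 1.0))
print(approx / predicted)  # close to 1 for large t
```

Substituting $u = tx$ in the integral shows the ratio equals the regularized incomplete gamma function $\gamma(p+2, t)/\Gamma(p+2)$, which tends to $1$ as $t \to \infty$, in line with the claimed index $-(p+1)$.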
We have
\[
\mathbb{E}[e^{-t(1-f)}] = \mathbb{E}\big[e^{-t(1-f)} \mathbf{1}_{\{1-f > \varepsilon\}}\big] + \mathbb{E}\big[e^{-t(1-f)} \mathbf{1}_{\{1-f \le \varepsilon\}}\big].
\]
The first summand on the right-hand side is bounded by $e^{-\varepsilon t}$ and hence of lower order once we show that there is a regularly varying part. For the second summand we use the identity $e^{-t} = \int_t^{\infty} e^{-x}\,\mathrm{d}x$ and find, using Fubini's theorem,
\[
\mathbb{E}\big[e^{-t(1-f)} \mathbf{1}_{\{1-f \le \varepsilon\}}\big]
= \mathbb{E}\Big[\int_{t(1-f)}^{\infty} e^{-x}\, \mathbf{1}_{\{1-f \le \varepsilon\}}\,\mathrm{d}x\Big]
= \int_0^{\infty} e^{-x}\, P(1-f \le \varepsilon \wedge x/t)\,\mathrm{d}x
= \int_0^{\varepsilon t} e^{-x}\, P(1-f \le x/t)\,\mathrm{d}x + \int_{\varepsilon t}^{\infty} e^{-x}\, P(1-f \le \varepsilon)\,\mathrm{d}x.
\]
Once again, the second summand on the right-hand side is bounded by $e^{-\varepsilon t}$. The first one is further evaluated as
\[
\int_0^{\varepsilon t} e^{-x}\, P(1-f \le x/t)\,\mathrm{d}x = t^{-(p+1)} \int_0^{\varepsilon t} e^{-x} x^{p+1} \ell(x/t)\,\mathrm{d}x.
\]
We claim that, as $t \to \infty$,
\[
\int_0^{\varepsilon t} e^{-x} x^{p+1} \ell(x/t)\,\mathrm{d}x \sim \ell(1/t)\, \Gamma(p+2),
\]
where $\Gamma$ denotes the gamma function. Note that for each fixed $x > 0$ we have $\ell(x/t)/\ell(1/t) \to 1$ as $t \to \infty$; thus the claim follows from the dominated convergence theorem once we find a majorant. Define the function $\tilde{\ell}(x) = \ell(1/x)$ for $x \ge 1/\varepsilon$ and $\tilde{\ell}(x) = \ell(\varepsilon)$ for $x < 1/\varepsilon$. Then $\tilde{\ell}$ is slowly varying at $\infty$ and bounded away from $0$ and $\infty$ on all compact intervals in $(0,\infty)$ (since otherwise we would run into trouble with $P(1-f \le t) = t^{p+1} \ell(t)$). By Potter's theorem (e.g. [BGT87, Theorem 1.5.6]) there exists $C \in (0,\infty)$ such that
\[
\frac{\tilde{\ell}(t/x)}{\tilde{\ell}(t)} \le C x \quad \text{for all } t/x,\, t > 0.
\]
Now we see that
\[
\frac{1}{\ell(1/t)} \int_0^{\varepsilon t} e^{-x} x^{p+1} \ell(x/t)\,\mathrm{d}x
= \int_0^{\varepsilon t} e^{-x} x^{p+1}\, \frac{\tilde{\ell}(t/x)}{\tilde{\ell}(t)}\,\mathrm{d}x
\le C \int_0^{\infty} e^{-x} x^{p+2}\,\mathrm{d}x < \infty.
\]
This gives the desired majorant. We have thus shown that, as $t \to \infty$,
\[
\mathbb{E}[e^{-t(1-f)}] \sim t^{-(p+1)} \ell(1/t)\, \Gamma(p+2).
\]
Therefore, the claim is proved.

Acknowledgement

The author wishes to express sincere gratitude to Alicja Kołodziejska and Matthias Meiners for carefully reading an early version of this manuscript. This work was financially supported by DFG grant ME 3625/5-1.
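As an informal numerical illustration of Claim 2 from Appendix B, the asymptotic relation $\mathbb{E}[e^{-t(1-f)}] \sim t^{-(p+1)} \ell(1/t)\, \Gamma(p+2)$ can be checked in a concrete instance. The sketch below is an aside, not part of the argument: it assumes $\varepsilon = 1$ and $\ell \equiv 1$, so that $P(1-f \le t) = t^{p+1}$ on $[0,1]$ and $1-f$ has density $(p+1)x^p$, with the purely illustrative choice $p = 3/2$; names such as `laplace_transform` are ad hoc.

```python
import math

# Illustrative check of Claim 2 in the assumed concrete instance
# eps = 1, l(t) = 1: P(1 - f <= t) = t^(p+1) on [0, 1], so 1 - f has
# density (p+1) * x**p.  Claim 2 then predicts, as t -> infinity,
#     E[exp(-t * (1 - f))] ~ Gamma(p + 2) * t**(-(p + 1)).
P = 1.5  # illustrative tail index p > 0


def laplace_transform(t, n=200_000):
    """Midpoint-rule approximation of
    E[exp(-t(1-f))] = int_0^1 exp(-t*x) * (P+1) * x**P dx."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h  # midpoint of the k-th subinterval
        total += math.exp(-t * x) * (P + 1.0) * x**P * h
    return total


def predicted(t):
    """Right-hand side of the claimed asymptotic relation."""
    return math.gamma(P + 2.0) * t ** (-(P + 1.0))


for t in (10.0, 100.0, 1000.0):
    print(f"t = {t:6.0f}: ratio = {laplace_transform(t) / predicted(t):.4f}")
```

The printed ratios approach $1$ as $t$ grows, consistent with regular variation of index $-(p+1)$; the exact ratio here equals the normalized lower incomplete gamma function $\gamma(p+1, t)/\Gamma(p+1)$, which increases to $1$.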
References

[ABM12] Gerold Alsmeyer, J.D. Biggins, and Matthias Meiners. The functional equation of the smoothing transform. The Annals of Probability, 40(5):2069–2105, 2012.
[AS14] Elie Aïdékon and Zhan Shi. The Seneta–Heyde scaling for the branching random walk. The Annals of Probability, 42(3):959–993, 2014.
[BGT87] N.H. Bingham, C.M. Goldie, and J.L. Teugels. Regular Variation. Cambridge University Press, 1987.
[Big92] J.D. Biggins. Uniform convergence of martingales in the branching random walk. The Annals of Probability, 20(1):137–151, 1992.
[Big95] J.D. Biggins. The growth and spread of the general branching random walk. The Annals of Applied Probability, 5(4):1008–1024, 1995.
[Big98] J.D. Biggins. Lindley-type equations in the branching random walk. Stochastic Processes and their Applications, 75:105–133, 1998.
[BK97] J.D. Biggins and A.E. Kyprianou. Seneta–Heyde norming in the branching random walk. The Annals of Probability, 25(1):337–360, 1997.
[BK04] J.D. Biggins and A.E. Kyprianou. Measure change in multitype branching. Advances in Applied Probability, 36:544–581, 2004.
[BK05] J.D. Biggins and A.E. Kyprianou. Fixed points of the smoothing transform: the boundary case. Electronic Journal of Probability, 10(17):609–631, 2005.
[DDS08] D. Denisov, A.B. Dieker, and V. Shneer. Large deviations for random walks under subexponentiality: the big-jump domain. The Annals of Probability, 36(5):1946–1991, 2008.
[DMM17] Steffen Dereich, Cécile Mailler, and Peter Mörters. Nonextensive condensation in reinforced branching processes. The Annals of Applied Probability, 27(4), 2017.
[Gat00] Dimitris Gatzouras. On the lattice case of an almost-sure renewal theorem for branching random walks. Advances in Applied Probability, 32(3):720–737, 2000.
[Jag89] Peter Jagers. General branching processes as Markov fields. Stochastic Processes and their Applications, 32:183–212, 1989.
[Kal21] Olav Kallenberg. Foundations of Modern Probability. Springer, 3rd edition, 2021.
[Kyp00] A. Kyprianou. Martingale convergence and the stopped branching random walk. Probability Theory and Related Fields, 116:405–419, 2000.
[Liu00] Quansheng Liu. On generalized multiplicative cascades. Stochastic Processes and their Applications, 86:263–286, 2000.
[Lyo97] Russell Lyons. A simple path to Biggins' martingale convergence for branching random walk. Statistics & Probability Letters, 79(8):1129–1133, 1997.
[Nag79] S.V. Nagaev. Large deviations of sums of independent random variables. The Annals of Probability, 7(5):745–789, 1979.
[Ner81] Olle Nerman. On the convergence of supercritical general (C-M-J) branching processes. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 57:365–395, 1981.
[Shi15] Zhan Shi. Branching Random Walks. Springer, 2015.
