Generalized Coverage Processes with Infinitely Divisible Finite Dimensional Distributions

George Makatis and Michael A. Zazanis*
Department of Statistics, Athens University of Economics and Business, Athens, Greece

Abstract: In this paper we define a class of coverage processes with infinitely divisible finite dimensional distributions and a particular type of correlation structure that can be thought of as generalizations of the classical Ornstein–Uhlenbeck process and which includes coverage processes such as the $M/GI/\infty$ process. We show how such processes arise naturally as limits of superpositions of independent ON/OFF Markov processes with different parameters by formulating an appropriate limit theorem. Various examples of processes of this type are given.

Keywords: superposition of ON/OFF processes, multivariate probability laws, Ornstein–Uhlenbeck process, coverage process, $M/GI/\infty$ process

1 Introduction

We define a new family of processes which we term Generalized Coverage Processes. These are stationary, infinitely divisible, real-valued processes defined on $\mathbb{R}$ with a special correlation structure similar to that of classical coverage processes such as the $M/G/\infty$ process (see Hall [8]). Stationary infinitely divisible processes have been examined by Maruyama [15]. See also Lee [14] and Barndorff-Nielsen [4], who examine stationary infinitely divisible processes closer in spirit to those examined in this paper. The classical Ornstein–Uhlenbeck process is the solution of the stochastic differential equation

\[ dX_t = -aX_t\,dt + dW_t \tag{1} \]

where $\{W_t;\, t \ge 0\}$ is standard Brownian motion and $X_0$ is a Gaussian random variable independent of $\{W_t;\, t \ge 0\}$. If $a > 0$ then the process has a stationary version $\{X_t;\, t \ge 0\}$ when $X_0$ is Gaussian with zero mean and variance equal to $\frac{1}{2a}$.

*Corresponding author.
zazanis@aueb.gr

One particularly fruitful generalization of the classical Ornstein–Uhlenbeck process consists in replacing the driving Brownian motion term in (1) by a general Lévy process. The study of such processes and of their applications in many areas, most notably mathematical finance, has attracted a great deal of attention over the last three decades. We refer the reader to Sato [23] for background on these Generalized Ornstein–Uhlenbeck (GOU) processes and to Barndorff-Nielsen and Shephard [3] for applications in finance and economics. We note in particular that the class of GOU processes is intimately related to self-decomposable distributions (see [26] and [10]). In this paper we propose an alternative generalization of the stationary, Wiener-process-driven Ornstein–Uhlenbeck process which is intimately connected to coverage processes and to limits of superpositions of ON/OFF processes.

2 The covariance structure of the classical Ornstein-Uhlenbeck process and an algebraic identity

The stationary version of the process (1) is a zero-mean Gaussian process $\{X_t\}$ and the joint characteristic function of $(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ for $t_1 < t_2 < \cdots < t_n$ is given by

\[ E\exp\Big(i\sum_{i=1}^n \theta_i X_{t_i}\Big) = \exp\Big(-\frac{1}{4a}\sum_{i=1}^n \theta_i^2 - \frac{1}{2a}\sum_{1\le i<j\le n} \theta_i\theta_j\, e^{-a(t_j-t_i)}\Big). \tag{2} \]

Throughout, $\psi(\theta) := \log\varphi(\theta)$ denotes the characteristic exponent of an infinitely divisible law on $\mathbb{R}$, written in the Lévy–Khinchine form with triplet $(\beta, \sigma^2, \nu)$,

\[ \psi(\theta) = i\beta\theta - \tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}\setminus\{0\}} \big(e^{i\theta x} - 1 - i\theta x\,\mathbf{1}(|x|<1)\big)\,\nu(dx). \tag{8} \]

If $f$ is a concave function on $[0,\infty)$ then, for $0 \le x \le y$ and $h > 0$,

\[ f(x+h) - f(x) - f(y+h) + f(y) \ge 0 \tag{9} \]

(see for instance [19]). As a consequence of this property we can see that the concave function $f$ is supermodular, i.e. if $0 \le x_1 < x_2 < x_3 < x_4$ the following inequality holds:

\[ f(x_3 - x_1) - f(x_3 - x_2) - f(x_4 - x_1) + f(x_4 - x_2) \ge 0. \tag{10} \]

Indeed, to see that (9) implies (10) it suffices to take $x = x_3 - x_2$, $y = x_3 - x_1$ and $h = x_4 - x_3$. The following Lemma will play a central role in the sequel.

Lemma 2. Assuming that $\varphi(\theta)$ is the characteristic function of an infinitely divisible law on $\mathbb{R}$ with characteristic exponent given by (8), for any $n \in \mathbb{N}$ and nonnegative reals $a_{ij}$, $1 \le i \le j \le n$, the function
\[ \varphi_n(\theta_1,\ldots,\theta_n) := \exp\Big( \sum_{1\le i\le j\le n} a_{ij}\, \psi(\theta_i + \cdots + \theta_j) \Big) \tag{11} \]

is the characteristic function of a probability measure on $\mathbb{R}^n$. Furthermore, $\varphi_n$ is infinitely divisible and its Lévy triplet $(\boldsymbol{\beta}, \boldsymbol{\Sigma}, \nu_n)$ is given by $\boldsymbol{\beta} = (\beta_1,\ldots,\beta_n)$ with

\[ \beta_k = \beta \sum_{i=1}^{k} \sum_{j=k}^{n} a_{ij}, \qquad k = 1,\ldots,n, \tag{12} \]

\[ \Sigma_{kl} := \sigma^2 \sum_{1\le i\le (k\wedge l),\; (k\vee l)\le j\le n} a_{ij}, \qquad k,l = 1,\ldots,n. \tag{13} \]

The Lévy measure $\nu_n$ on $\mathbb{R}^n$ corresponding to $\varphi_n$ can be characterized as follows. Let $u_{ij}$, $1 \le i \le j \le n$, be the collection of vectors in $\mathbb{R}^n$ such that $u_{ij} = (0,\ldots,0,1,\ldots,1,0,\ldots,0)^\top$ (i.e. its $k$th component is 1 if $i \le k \le j$ and 0 otherwise). Then, for any $0 < r < R$ and any bounded, continuous function $\phi : \mathbb{R}^n \to \mathbb{R}$ vanishing outside of the set $\{x : r < \|x\| < R\}$,

\[ \int_{\mathbb{R}^n} \phi(x)\, \nu_n(dx) = \sum_{1\le i\le j\le n} a_{ij} \int_{\mathbb{R}} \phi(s\, u_{ij})\, \nu(ds). \tag{14} \]

Proof. Let $\{Z_{ij}(t);\, t \ge 0\}$, $1 \le i \le j \le n$, be independent Lévy processes, all with the same characteristic exponent $\psi(\theta)$ (and thus all with triplet $(\beta, \sigma^2, \nu)$), so that, for $\theta \in \mathbb{R}$, $E\,e^{i\theta Z_{ij}(t)} = e^{t\psi(\theta)}$, $t \ge 0$. Define the random element $Y = (Y_1,\ldots,Y_n)^\top$ of $\mathbb{R}^n$ by

\[ Y = \sum_{1\le i\le j\le n} Z_{ij}(a_{ij})\, u_{ij}. \tag{15} \]

If $\theta = (\theta_1,\ldots,\theta_n)^\top$ then $\theta^\top u_{ij} = \theta_i + \cdots + \theta_j$ and, in view of (15), $Y$ has characteristic function

\[ E[e^{i\theta^\top Y}] = E\Big[\exp\Big(i \sum_{1\le i\le j\le n} \theta^\top u_{ij}\, Z_{ij}(a_{ij})\Big)\Big] = \prod_{1\le i\le j\le n} E\,e^{i(\theta_i+\cdots+\theta_j) Z_{ij}(a_{ij})} = \prod_{1\le i\le j\le n} e^{a_{ij}\psi(\theta_i+\cdots+\theta_j)} = \varphi_n(\theta_1,\ldots,\theta_n), \tag{16} \]

where we have used the independence of the Lévy processes $\{Z_{ij}(t);\, t \ge 0\}$ and the fact that they all have the same characteristic exponent $\psi(\theta)$. Equation (16) agrees with (11) and therefore $\varphi_n(\theta_1,\ldots,\theta_n)$ is the characteristic function of $Y$.
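The Gaussian part (13) of the triplet can be sanity-checked numerically: $\boldsymbol{\Sigma}$ is exactly $\sigma^2 \sum_{i\le j} a_{ij}\, u_{ij} u_{ij}^\top$, so its quadratic form reproduces $\sigma^2 \sum_{i\le j} a_{ij} (\theta_i + \cdots + \theta_j)^2$. A minimal sketch (the dimension, coefficients, and seed below are arbitrary illustrative values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 5, 1.3
# arbitrary nonnegative coefficients a_ij, 1 <= i <= j <= n
a = {(i, j): rng.uniform(0.0, 1.0) for i in range(1, n + 1) for j in range(i, n + 1)}

def u(i, j):
    """Vector u_ij: k-th component is 1 for i <= k <= j, else 0."""
    v = np.zeros(n)
    v[i - 1:j] = 1.0
    return v

# Sigma = sigma^2 * sum_ij a_ij u_ij u_ij^T agrees entrywise with eq. (13)
Sigma = sigma2 * sum(a[i, j] * np.outer(u(i, j), u(i, j)) for (i, j) in a)
Sigma13 = np.array([[sigma2 * sum(a[i, j] for (i, j) in a
                                  if i <= min(k, l) and max(k, l) <= j)
                     for l in range(1, n + 1)]
                    for k in range(1, n + 1)])
assert np.allclose(Sigma, Sigma13)

# the quadratic form theta' Sigma theta equals sigma^2 sum a_ij (theta_i+...+theta_j)^2
theta = rng.normal(size=n)
lhs = theta @ Sigma @ theta
rhs = sigma2 * sum(a[i, j] * theta[i - 1:j].sum() ** 2 for (i, j) in a)
assert abs(lhs - rhs) < 1e-9
print("triplet check passed")
```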
Note also that $Y$, as defined in (15), is the sum of the $n(n+1)/2$ independent, infinitely divisible random elements of $\mathbb{R}^n$ appearing in (15), with characteristic functions $E[e^{i\theta^\top u_{ij} Z_{ij}(a_{ij})}] = e^{\psi(\theta_i+\cdots+\theta_j)\,a_{ij}}$, and therefore $Y$ is an infinitely divisible random element of $\mathbb{R}^n$. The Lévy triplet of $\varphi_n$ can be obtained from (8): $\boldsymbol{\beta} = (\beta_1,\ldots,\beta_n)^\top$ is given by

\[ \theta^\top \boldsymbol{\beta} = \sum_{k=1}^n \theta_k \beta_k = \sum_{1\le i\le j\le n} a_{ij}\, \beta\, (\theta_i + \cdots + \theta_j) = \sum_{k=1}^n \theta_k\, \beta \sum_{i=1}^{k} \sum_{j=k}^{n} a_{ij}, \]

thus establishing (12). Similarly,

\[ \frac{1}{2}\,\theta^\top \boldsymbol{\Sigma}\, \theta = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \Sigma_{ij}\, \theta_i \theta_j = \frac{1}{2} \sum_{1\le i\le j\le n} \sigma^2 a_{ij} (\theta_i + \cdots + \theta_j)^2 . \]

Then, from identity (5) we obtain (13). Finally, the random variable $u_{ij} Z_{ij}(a_{ij})$ has Lévy (and therefore probability) measure concentrated on the one-dimensional linear subspace of $\mathbb{R}^n$, $V_{ij} := \{x : x = s\,u_{ij},\ s \in \mathbb{R}\}$. This justifies (14) and completes the proof.

Theorem 3 (GCID). Suppose that $\varphi(\theta)$ is the characteristic function of an infinitely divisible law on $\mathbb{R}$ and denote by $\psi(\theta) := \log\varphi(\theta)$ its characteristic exponent given by (8). Denote by $X$ a real random variable with the corresponding distribution, i.e. $\varphi(\theta) = E\,e^{i\theta X}$. Suppose also that $H$ is a probability distribution function on $\mathbb{R}$ such that $H(0) = 0$ and $H$ is concave. Then, for $n \in \mathbb{N}$ and $\theta_i, t_i \in \mathbb{R}$, $i = 1,2,\ldots,n$, $t_1 \le t_2 \le \cdots \le t_n$, the family of functions $\varphi_n(\theta_1,\ldots,\theta_n; t_1,\ldots,t_n)$ defined by

\[ \log \varphi_n(\theta_1,\ldots,\theta_n; t_1,\ldots,t_n) = \sum_{1\le i\le j\le n} \psi(\theta_i + \cdots + \theta_j)\, \big[ H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i) \big] \tag{17} \]

are characteristic functions of consistent, infinitely divisible finite-dimensional distributions. For each fixed $n$, in the above expression $t_0$ is to be interpreted as $-\infty$ and $t_{n+1}$ as $+\infty$.
The family of characteristic functions defined in (17) corresponds to the finite-dimensional distributions of a stationary process $\{X_t;\, t \in \mathbb{R}\}$ with marginal distribution $E\,e^{i\theta X_t} = \varphi(\theta)$. When the random variable $X$ has finite second moment the process $\{X_t\}$ has covariance

\[ \mathrm{Cov}(X_t, X_{t+h}) = -\psi''(0)\, [1 - H(|h|)]. \tag{18} \]

Remark 1. We point out that, for $n = 1$, (17) contains only one term, namely $\log\varphi_1(\theta_1; t_1) = \psi(\theta_1)$, while for $n = 2$, after simple calculations taking into account the conventions regarding $t_0$ and $t_{n+1}$, we obtain

\[ \log\varphi_2(\theta_1, \theta_2; t_1, t_2) = \psi(\theta_1)\, H(t_2 - t_1) + \psi(\theta_1 + \theta_2)\, [1 - H(t_2 - t_1)] + \psi(\theta_2)\, H(t_2 - t_1). \tag{19} \]

Proof. Note that

\[ a_{ij} := H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i) \ge 0 \tag{20} \]

since $H$ is concave and hence supermodular. Therefore, using Lemma 2, we conclude that the right-hand side of (17) is in fact the logarithm of the characteristic function of an infinitely divisible probability law on $\mathbb{R}^n$. It remains to show that this family of distributions satisfies the Kolmogorov consistency conditions. Denote by $F(x_1,\ldots,x_n; t_1,\ldots,t_n)$ the distribution function corresponding to the characteristic function defined by (17). In order to establish that these distributions satisfy the consistency conditions it suffices to show that, for every $n \ge 2$ and every sequence of values $t_1 < \cdots < t_k < \cdots < t_n$,

\[ \log \varphi_n(\theta_1,\ldots,\theta_{k-1}, \theta_k, \theta_{k+1},\ldots,\theta_n;\, t_1,\ldots,t_n)\big|_{\theta_k = 0} = \log \varphi_{n-1}(\theta_1,\ldots,\theta_{k-1}, \theta_{k+1},\ldots,\theta_n;\, t_1,\ldots,t_{k-1}, t_{k+1},\ldots,t_n). \tag{21} \]

(The case where $k = 1$ or $k = n$ is similar and its discussion is omitted.) In view of (17) both sides of (21) consist of terms of the form

\[ \psi(\theta_i + \cdots + \theta_j)\, a_{ij}, \qquad 1 \le i \le j \le n, \tag{22} \]

where $a_{ij}$ is given by (20).
If $k \notin \{i, i+1, j-1, j\}$ then the term (22) appears on both the right- and the left-hand side of (21). When $k = i$ or $k = i+1$, on the left of (21) we have the two terms

\[ \psi(\theta_k + \cdots + \theta_j)\, [H(t_j - t_{k-1}) - H(t_j - t_k) - H(t_{j+1} - t_{k-1}) + H(t_{j+1} - t_k)], \tag{23} \]
\[ \psi(\theta_{k+1} + \cdots + \theta_j)\, [H(t_j - t_k) - H(t_j - t_{k+1}) - H(t_{j+1} - t_k) + H(t_{j+1} - t_{k+1})]. \]

Setting $\theta_k = 0$ gives $\psi(\theta_k + \cdots + \theta_j) = \psi(\theta_{k+1} + \cdots + \theta_j)$. Hence the two terms in (23) are combined into one as follows:

\[ \psi(\theta_{k+1} + \cdots + \theta_j)\, \big[ H(t_j - t_{k-1}) - H(t_j - t_k) - H(t_{j+1} - t_{k-1}) + H(t_{j+1} - t_k) + H(t_j - t_k) - H(t_j - t_{k+1}) - H(t_{j+1} - t_k) + H(t_{j+1} - t_{k+1}) \big] = \psi(\theta_{k+1} + \cdots + \theta_j) \times \big[ H(t_j - t_{k-1}) - H(t_{j+1} - t_{k-1}) - H(t_j - t_{k+1}) + H(t_{j+1} - t_{k+1}) \big]. \tag{24} \]

The expression in (24) is equal to the corresponding term on the right-hand side of (21). The same can be shown when $k = j-1$ and $k = j$, which gives the term containing $\psi(\theta_i + \cdots + \theta_{k-1})$, and thus we establish (21). The stationarity of the process $\{X_t;\, t \in \mathbb{R}\}$ is immediate from the fact that $\log\varphi_n(\theta_1,\ldots,\theta_n;\, t_1 + \tau,\ldots,t_n + \tau) = \log\varphi_n(\theta_1,\ldots,\theta_n;\, t_1,\ldots,t_n)$ for any $\tau \in \mathbb{R}$, as can be readily verified from (17). To establish (18), use (19) with $\theta_2 = \theta$, $\theta_1 = -\theta$ and $h = t_2 - t_1$ to obtain

\[ \varphi_2(-\theta, \theta;\, t, t+h) = E\big[e^{i\theta(X_{t+h} - X_t)}\big] = \exp\{\psi(0)\,[1 - H(h)] + \psi(\theta)\, H(h) + \psi(-\theta)\, H(h)\} = \exp\{H(h)\,[\psi(\theta) + \psi(-\theta)]\}, \tag{25} \]

where in the last equation we have taken into account that $\psi(0) = 0$. We have also assumed that $h > 0$; the situation where $h$ is negative can be dealt with similarly. Differentiating this expression twice and evaluating at $\theta = 0$ we obtain

\[ E(X_t - X_{t+h})^2 = -\frac{d^2}{d\theta^2}\, \varphi_2(-\theta, \theta;\, t, t+h)\Big|_{\theta = 0} = -2\, H(h)\, \psi''(0). \]
(26)

From the stationarity of the process $X_t$ we have $\mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Var}(X_t) - \frac{1}{2} E(X_t - X_{t+h})^2$. Taking into account the fact that $-\psi''(0) = \mathrm{Var}(X_t)$, together with (26), establishes (18). We also note that $H$ is necessarily continuous and therefore the process is necessarily mean-square continuous.

Remark 2. A special case of particular importance arises when $H(t) = 1 - e^{-\mu t}$ for some $\mu > 0$. In that case

\[ \log\varphi_n(\theta_1,\ldots,\theta_n;\, t_1,\ldots,t_n) = \sum_{1\le i\le j\le n} \psi(\theta_i + \cdots + \theta_j)\, \big(1 - e^{-\mu(t_i - t_{i-1})}\big)\, e^{-\mu(t_j - t_i)}\, \big(1 - e^{-\mu(t_{j+1} - t_j)}\big). \tag{27} \]

Note that the correlation structure in the above expression is of the form (4), which corresponds to that of an Ornstein-Uhlenbeck process (3). In particular, if second moments exist, the covariance function is given by $\mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Var}(X_0)\, e^{-\mu|h|}$.

Remark 3. Suppose that $\{X_t;\, t \in \mathbb{R}\}$ is a real-valued, zero-mean, stationary Gaussian process with covariance function $C(t) := E[X_s X_{s+t}]$, $t \ge 0$, with $C(0) > 0$ and $\lim_{t\to\infty} C(t) = 0$. Define the correlation function $r(t) := C(t)/C(0)$, $t \ge 0$. If the covariance function $C$ is convex then the Gaussian process $\{X_t;\, t \in \mathbb{R}\}$ belongs to the family GCID with $\psi(\theta) := -\frac{C(0)}{2}\theta^2$ and $H(t) := 1 - r(t)$. Then, with the standard convention that $t_0 = -\infty$ and $t_{n+1} = +\infty$,

\[ E\big[e^{i\sum_{i=1}^n \theta_i X_{t_i}}\big] = e^{-\frac{1}{2}\sum_{i,j=1}^n \theta_i\theta_j C(|t_i - t_j|)} = e^{\frac{C(0)}{2}\sum_{1\le i\le j\le n} (\theta_i + \cdots + \theta_j)^2 [H(t_{j+1} - t_{i-1}) - H(t_{j+1} - t_i) - H(t_j - t_{i-1}) + H(t_j - t_i)]}. \tag{28} \]

It is clear that the above equality holds for any stationary Gaussian process for which $\lim_{t\to\infty} C(t) = 0$, as a result of the identity (5). However, only when $r(t)$ is convex (and therefore $H$ concave) do we have

\[ H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i) \ge 0. \]

Lemma 4.
For any probability distribution $H$ on $\mathbb{R}$ such that $H(0) = 0$ and $H$ is concave, the following identity holds for any $n$:

\[ \sum_{1\le i\le j\le n} (\theta_i + \cdots + \theta_j)\, [H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i)] = \theta_1 + \cdots + \theta_n. \tag{29} \]

Therefore, if $\psi(\theta)$ contains a drift term, i.e. if $\psi(\theta) = i\beta\theta + \psi_0(\theta)$ where

\[ \psi_0(\theta) := -\tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}\setminus\{0\}} \big(e^{i\theta x} - 1 - i\theta x\,\mathbf{1}(|x|<1)\big)\,\nu(dx), \]

then (17) becomes

\[ \log\varphi_n(\theta_1,\ldots,\theta_n) = i\beta(\theta_1 + \cdots + \theta_n) + \sum_{1\le i\le j\le n} \psi_0(\theta_i + \cdots + \theta_j)\, [H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i)]. \]

Proof. The left-hand side of (29) can be written as

\[ \sum_{r=1}^n \theta_r \sum_{1\le i\le r\le j\le n} [H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i)] \tag{30} \]

and the inner sum in (30) can be written as

\[ \sum_{1\le i\le r} \Big\{ \sum_{r\le j\le n} [H(t_j - t_{i-1}) - H(t_{j+1} - t_{i-1})] + \sum_{r\le j\le n} [H(t_{j+1} - t_i) - H(t_j - t_i)] \Big\}. \tag{31} \]

The two inner sums in (31) are telescoping and reduce to $H(t_r - t_{i-1}) - H(t_{n+1} - t_{i-1}) = H(t_r - t_{i-1}) - H(\infty) = H(t_r - t_{i-1}) - 1$ and $H(t_{n+1} - t_i) - H(t_r - t_i) = H(\infty) - H(t_r - t_i) = 1 - H(t_r - t_i)$, respectively. (In the above we have taken into account the convention $t_{n+1} = +\infty$.) Thus (31) becomes

\[ \sum_{1\le i\le r} \{ H(t_r - t_{i-1}) - H(t_r - t_i) \} = H(t_r - t_0) - H(t_r - t_r) = H(t_r - (-\infty)) - H(0) = H(\infty) - 0 = 1 \tag{32} \]

and hence (30) reduces to $\sum_{r=1}^n \theta_r$, and the proof is complete.

Proposition 5. If $\{X_t;\, t \ge 0\}$ and $\{Y_t;\, t \ge 0\}$ are independent GCID processes with characteristic exponents $\psi(\theta)$ and $c\,\psi(\theta)$ respectively (where $c > 0$) and correlation structure functions $H_1$ and $H_2$ respectively, then $\{Z_t;\, t \ge 0\}$, where $Z_t = X_t + Y_t$, is also GCID, with characteristic exponent $(1+c)\psi(\theta)$ and correlation structure function $H(t) := \frac{1}{1+c} H_1(t) + \frac{c}{1+c} H_2(t)$.
Proof. From Theorem 3 and the independence of the processes $\{X_t\}$, $\{Y_t\}$, we see that $\log E[\exp(i(\theta_1 Z_{t_1} + \cdots + \theta_n Z_{t_n}))]$ is a sum of terms of the form

\[ \psi(\theta_i + \cdots + \theta_j)\, [H_1(t_j - t_{i-1}) - \cdots] + c\,\psi(\theta_i + \cdots + \theta_j)\, [H_2(t_j - t_{i-1}) - \cdots] = (1+c)\,\psi(\theta_i + \cdots + \theta_j)\, \Big[ \frac{1}{1+c} H_1(t_j - t_{i-1}) + \frac{c}{1+c} H_2(t_j - t_{i-1}) - \cdots \Big]. \]

Clearly $(1+c)\psi$ is the characteristic exponent of an infinitely divisible distribution when $\psi$ is. Also $H := \frac{1}{1+c} H_1 + \frac{c}{1+c} H_2$ is a concave distribution function on $[0,\infty)$ with $H(0) = 0$ whenever $H_1$ and $H_2$ are.

We end the section by defining a subclass of GCID processes with spectrally positive Lévy marginal distribution and correlation structure similar to that of the classical Ornstein-Uhlenbeck process.

Definition 1. A process $\{X_t;\, t \in \mathbb{R}\}$ will be called an Exponential Spectrally Positive Coverage Process (ESPC) if

(a) For all $t \in \mathbb{R}$, $X_t$ has a spectrally positive Lévy distribution with

\[ \psi(\theta) := \log E\big[e^{i\theta X_t}\big] = \int_0^\infty (e^{i\theta x} - 1)\, \nu(dx), \]

where $\int_0^\infty x\,\nu(dx) < \infty$ (but $\int_0^\infty \nu(dx)$ may or may not be finite).

(b) The finite dimensional distributions of $\{X_t\}$, for $t_1 < \cdots < t_n$, are given by

\[ \log E\big[e^{i(\theta_1 X_{t_1} + \theta_2 X_{t_2} + \cdots + \theta_n X_{t_n})}\big] = \sum_{1\le j\le k\le n} \int_0^\infty \big(e^{i(\theta_j + \cdots + \theta_k)x} - 1\big)\, \nu(dx)\; \big(1 - e^{-\mu(t_j - t_{j-1})}\big)\, e^{-\mu(t_k - t_j)}\, \big(1 - e^{-\mu(t_{k+1} - t_k)}\big). \tag{33} \]

The class of ESPC processes is a subclass of the GCID processes defined in Theorem 3. The fact that (33) indeed defines a consistent family of finite dimensional distributions is a direct consequence of Theorem 3. Note that the mean value of the above process is given by $E X_t = \int_0^\infty x\,\nu(dx)$, its variance by $\mathrm{Var}(X_t) = \int_0^\infty x^2\,\nu(dx)$ (provided this last integral is finite), and its covariance by $\mathrm{Cov}(X_{t_1}, X_{t_2}) = e^{-\mu(t_2 - t_1)} \int_0^\infty x^2\,\nu(dx)$ under the same proviso.
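As a quick numerical check (with arbitrary illustrative times and rate of our own choosing, not taken from the paper), the product-form weights appearing in (33) coincide with the generic coefficients of (17) when $H(t) = 1 - e^{-\mu t}$, and for each index $r$ the weights of the pairs $(j,k)$ with $j \le r \le k$ sum to one, as the identity of Lemma 4 requires:

```python
import numpy as np

mu = 0.8
t = np.array([0.3, 1.1, 1.4, 2.9, 3.5])       # illustrative sampling times
n = len(t)
H = lambda s: 1.0 - np.exp(-mu * s)           # note H(+inf) = 1 since exp(-inf) = 0
tt = np.concatenate(([-np.inf], t, [np.inf])) # conventions t_0 = -inf, t_{n+1} = +inf

# generic coefficient of (17) versus the product form of (33)
for j in range(1, n + 1):
    for k in range(j, n + 1):
        a_jk = (H(tt[k] - tt[j - 1]) - H(tt[k] - tt[j])
                - H(tt[k + 1] - tt[j - 1]) + H(tt[k + 1] - tt[j]))
        prod = ((1 - np.exp(-mu * (tt[j] - tt[j - 1])))
                * np.exp(-mu * (tt[k] - tt[j]))
                * (1 - np.exp(-mu * (tt[k + 1] - tt[k]))))
        assert abs(a_jk - prod) < 1e-12 and a_jk > 0

# Lemma 4 normalization: coefficients of the pairs covering r sum to 1
for r in range(1, n + 1):
    s = sum((H(tt[k] - tt[j - 1]) - H(tt[k] - tt[j])
             - H(tt[k + 1] - tt[j - 1]) + H(tt[k + 1] - tt[j]))
            for j in range(1, r + 1) for k in range(r, n + 1))
    assert abs(s - 1.0) < 1e-12
print("coefficient checks passed")
```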
The connection of these processes to limits of superpositions of ON/OFF processes will be discussed in Section 5.

4 Processes with Poisson and compound Poisson marginals. The M/GI/∞ process and its generalizations

In this section we consider processes that fall into the framework of Theorem 3 arising from infinite server queues with Poisson arrivals and general service times. In the first subsection we consider the number of customers in the system and thus obtain a stationary process with Poisson marginals, while in the second we will introduce "marked" $M/GI/\infty$ queues where to each customer a real-valued mark is associated. In this fashion we obtain stationary processes with compound Poisson marginal distributions.

4.1 The number of customers in the system

Suppose that $\{T_l;\, l \in \mathbb{Z}\}$ is a Poisson process on the real line with intensity $\lambda$. $T_l$ denotes the arrival epoch of the $l$th customer and $\sigma_l$ the customer's service requirement. We assume that $\{\sigma_l;\, l \in \mathbb{Z}\}$ is an i.i.d. sequence of positive random variables, independent of the Poisson process $\{T_l\}$, with distribution function $G(x) := P(\sigma_l \le x)$ and finite mean $m = \int_0^\infty (1 - G(x))\,dx$. We will use $\overline{G} := 1 - G$ to designate the corresponding survival function. The number of customers in the $M/GI/\infty$ system at time $t$ is then

\[ X_t = \sum_{l \in \mathbb{Z}} \mathbf{1}(T_l \le t < T_l + \sigma_l), \qquad t \in \mathbb{R}. \]

A particularly useful point of view is to consider a Poisson process $M$ on the upper half plane $\mathbb{R} \times \mathbb{R}_+$ with points $\{M_l;\, l \in \mathbb{Z}\}$ defined by $M_l := (T_l, \sigma_l)$ and mean measure $\mu(dt \times dx) := \lambda\,dt \times G(dx)$. The number of customers in the system at time $t$ is the number of points of $M$ in the wedge $A := \{(s,x) \in \mathbb{R} \times \mathbb{R}_+ : s < t < s + x\}$, and this is a Poisson random variable with mean $\mu(A) = \lambda \int_0^\infty \overline{G}(x)\,dx = \lambda m$.

5 Limits of superpositions of ON/OFF processes

Consider a source which stays OFF for an exponential period with rate $\lambda$ and then turns ON for an independent, exponentially distributed period with rate $\mu$; while ON it generates fluid at a constant rate $r > 0$, the intensity of the source. The triplet $(\lambda, \mu, r)$ characterizes this ON/OFF source completely.
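For concreteness, here is a small numerical sketch of the two-state Markov dynamics behind such a source (the rates below are arbitrary values of our own choosing): the closed-form transition function satisfies the semigroup property, has the expected generator, and relaxes to the stationary ON probability $\lambda/(\lambda+\mu)$:

```python
import numpy as np

lam, mu = 0.4, 1.5          # OFF -> ON rate lam, ON -> OFF rate mu (illustrative)
theta = lam + mu

def P(t):
    """Transition matrix of the ON/OFF chain at time t (state 0 = OFF, 1 = ON)."""
    pi = np.array([[mu, lam], [mu, lam]]) / theta      # rank-1 stationary projection
    return pi + np.exp(-theta * t) * (np.eye(2) - pi)

# Chapman-Kolmogorov (semigroup) property and stochasticity of the rows
for s, t in [(0.2, 0.7), (1.0, 3.5)]:
    assert np.allclose(P(s) @ P(t), P(s + t))
    assert np.allclose(P(t).sum(axis=1), 1.0)

# generator check: P'(0) equals Q = [[-lam, lam], [mu, -mu]]
Q = np.array([[-lam, lam], [mu, -mu]])
h = 1e-7
assert np.allclose((P(h) - np.eye(2)) / h, Q, atol=1e-5)

# for large t the chain is ON with the stationary probability lam / (lam + mu)
assert np.allclose(P(50.0)[0], [mu / theta, lam / theta])
print("stationary ON probability:", lam / theta)
```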
In this section we give results that show how the class of processes defined by (17) can arise as a limit of superpositions of ON/OFF processes of this type with varying parameters. Consider a collection of $n$ sources, each of which generates intermittent traffic: suppose that the $j$th source is inactive during an exponential period with rate $\lambda_j$ and then becomes active for an independent period of time, exponentially distributed, with rate $\mu$, the same for all sources. When the $j$th source is ON it generates fluid at rate $r_j$, whereas when it is OFF it does not generate any fluid. We consider the effect of the superposition of a large number of such sources under the assumption that the sources are independent and that each individual source generates traffic infrequently. In order to study the asymptotic behavior of such a superposition, let us introduce a triangular array of independent, Markovian ON/OFF processes, $\{\zeta_{nj}(t);\, t \ge 0\}$, $j = 1,\ldots,n$, $n \in \mathbb{N}$, with corresponding triplets $(\lambda_{nj}, \mu, r_{nj})$. Note that we have made the simplifying assumption that the ON rates are the same for all elements of the array. For simplicity we will assume the stationarity of the elements of the family $\{\zeta_{nj}(t)\}$ under the probability measure $P$. Denote by

\[ X_n(t) = \sum_{j=1}^n \zeta_{nj}(t), \qquad t \in [0,T], \quad n = 1,2,\ldots \tag{42} \]

the sequence of superpositions of the ON/OFF processes. These have sample paths in the space $D[0,T]$ consisting of the càdlàg functions on $[0,T]$, i.e. all functions that are continuous from the right and have left limits. At a fixed point in time, $t$, we obtain a double array of independent random variables

\[ \zeta_{nj}(t) = \begin{cases} 0 & \text{with prob. } \dfrac{\mu}{\lambda_{nj} + \mu}, \\[4pt] r_{nj} & \text{with prob. } \dfrac{\lambda_{nj}}{\lambda_{nj} + \mu}, \end{cases} \tag{43} \]

with corresponding characteristic functions

\[ E\,e^{i\theta \zeta_{nj}(t)} = \frac{\mu}{\lambda_{nj} + \mu} + \frac{\lambda_{nj}}{\lambda_{nj} + \mu}\, e^{i\theta r_{nj}}. \]

A double array $\{\chi_{nj}\}$, $j = 1, \ldots$
$, n$, $n \in \mathbb{N}$, of independent real random variables is called uniformly asymptotically negligible, or u.a.n. (alternatively null, see [12]), if for every $\epsilon > 0$

\[ \lim_{n\to\infty} \max_{1\le j\le n} P(|\chi_{nj}| > \epsilon) = 0. \tag{44} \]

If we denote by $F_{nj}(x) := P(\chi_{nj} \le x)$, $x \in \mathbb{R}$, the distribution functions of the elements of the triangular array and by $\varphi_{nj}(\theta) := \int_{\mathbb{R}} e^{i\theta x}\, F_{nj}(dx)$, $\theta \in \mathbb{R}$, the corresponding characteristic functions, a necessary and sufficient condition for the triangular array to be u.a.n. is (see [7, p.305])

\[ \lim_{n\to\infty} \max_{1\le j\le n} |\varphi_{nj}(\theta) - 1| = 0 \quad \text{for all } \theta \in \mathbb{R}. \tag{45} \]

The following result (Theorem 24 in [7, p.311]) gives necessary and sufficient conditions for the convergence in distribution of the row sums of such a double array.

Theorem 6. Let $\{\chi_{nj}\}$, $j = 1,\ldots,n$, $n \in \mathbb{N}$, be a u.a.n. array of nonnegative real random variables. In order that the sequence of row sums, $W_n := \sum_{j=1}^n \chi_{nj}$, converge in distribution it is necessary and sufficient that there exist $\gamma \ge 0$ and a Lévy measure $\nu$ on $\mathbb{R}_+$ satisfying the conditions

C1. $\displaystyle \nu[x,\infty) = \lim_{n\to\infty} \sum_{j=1}^n P(\chi_{nj} \ge x)$ for all $x > 0$ with $\nu\{x\} = 0$,

C2. $\displaystyle \gamma = \lim_{\epsilon\to 0} \limsup_{n\to\infty} \sum_{j=1}^n E[\chi_{nj}\,\mathbf{1}(\chi_{nj} \le \epsilon)] = \lim_{\epsilon\to 0} \liminf_{n\to\infty} \sum_{j=1}^n E[\chi_{nj}\,\mathbf{1}(\chi_{nj} \le \epsilon)]$.

When these conditions are satisfied, the sequence $\{W_n\}$ converges in distribution to an infinitely divisible random variable with characteristic function $\exp\big( i\gamma\theta + \int_0^\infty (e^{i\theta x} - 1)\,\nu(dx) \big)$.

We will discuss sufficient conditions for the array $\{\zeta_{nj}(t)\}$ to be u.a.n. (for each value of $t$) and for the sequence of processes defined in (42) to converge in distribution to a limiting process. Since we assume for the sake of simplicity that all sources have the same ON rate, $\mu$, the triangular array of input processes $\{\zeta_{nj}\}$ is described by a corresponding triangular array of parameters $(\lambda_{nj}, r_{nj})$, $j = 1,\ldots,n$, $n \in \mathbb{N}$.
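A concrete array of this kind is easy to manufacture (the construction below is ours, not the paper's, whose own example is deferred to the Appendix): take $\lambda_{nj} = c/n$ and let the $r_{nj}$ run over the quantiles of a standard exponential law, so the weighted empirical tails converge to those of $\nu(dx) = c\,e^{-x}dx$, and the product of the two-point characteristic functions approaches the infinitely divisible limit $\exp\{\frac{1}{\mu}\int_0^\infty (e^{i\theta x}-1)\,\nu(dx)\}$ described in Proposition 8 below:

```python
import numpy as np

# Toy array: lam_nj = c/n, r_nj = Exp(1) quantiles, so nu(dx) = c e^{-x} dx.
c, mu, n = 2.0, 1.0, 100_000
lam = np.full(n, c / n)                                # max_j lam_nj -> 0 (cf. A.1)
r = -np.log(1.0 - np.arange(1, n + 1) / (n + 1.0))     # Exp(1) quantiles

# weighted empirical tails converge to c * exp(-x)  (cf. A.4)
for x in (0.1, 0.5, 1.0, 3.0):
    assert abs(lam[r >= x].sum() - c * np.exp(-x)) < 1e-3

# first moment: sum_j lam_nj r_nj ~ integral x nu(dx) = c  (cf. A.3, A.6)
m1 = float((lam * r).sum())
assert abs(m1 - c) < 0.01

# marginal convergence: product of the two-point characteristic functions versus
# exp{(1/mu) int (e^{i theta x} - 1) nu(dx)} = exp{(c/mu) i theta / (1 - i theta)}
theta = 0.7
p_on = lam / (lam + mu)
cf_n = np.prod(1.0 - p_on + p_on * np.exp(1j * theta * r))
cf_limit = np.exp((c / mu) * (1.0 / (1.0 - 1j * theta) - 1.0))
assert abs(cf_n - cf_limit) < 5e-3
print("array checks passed")
```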
Let $\delta_x$ denote the Dirac measure on the real line placing a unit mass at $x \in \mathbb{R}$, defined by $\delta_x(A) = 1$ if $x \in A$ and $0$ otherwise, for every Borel set $A \subset \mathbb{R}$. We will describe the limiting behavior of the input process (42) as $n \to \infty$ by considering the corresponding behavior of the sequence of measures

\[ \nu_n(\cdot) = \sum_{j=1}^n \lambda_{nj}\, \delta_{r_{nj}}(\cdot). \tag{46} \]

We will show that, when the sequence of measures $\{\nu_n\}$ converges weakly to a measure $\nu$ on $(0,\infty)$, under some additional assumptions the sequence of processes $\{X_n\}$ defined in (42) converges weakly to a limit process $X$ of the GCID type defined in Theorem 3. Suppose that $\nu$ is a $\sigma$-finite measure on $(0,\infty)$ which satisfies the condition $\int_0^\infty (x \wedge 1)\,\nu(dx) < \infty$ and thus is a Lévy measure on $(0,\infty)$; in particular $\nu[x,\infty) < \infty$ for each $x > 0$. We assume that the array $\{(\lambda_{nj}, r_{nj})\}$ satisfies the conditions

(A.1) $\displaystyle \max_{1\le j\le n} \lambda_{nj} \to 0$ as $n \to \infty$,

(A.2) $\displaystyle \lim_{\epsilon\downarrow 0} \limsup_{n\to\infty} \sum_{j=1}^n \lambda_{nj} r_{nj}\, \mathbf{1}(r_{nj} \le \epsilon) = 0$,

(A.3) $\displaystyle \sup_{n\in\mathbb{N}} \sum_{j=1}^n \lambda_{nj} r_{nj}^p \le C$ for $p \in [1,4]$ and for some $C > 0$.

In addition assume that the double array of parameters $\{(\lambda_{nj}, r_{nj})\}$, $j = 1,\ldots,n$, $n \in \mathbb{N}$, is such that the sequence of empirical measures $\{\nu_n\}$ converges weakly to $\nu$, i.e.

(A.4) $\displaystyle \nu_n[x,\infty) = \sum_{j=1}^n \lambda_{nj}\, \mathbf{1}(r_{nj} \ge x) \to \nu[x,\infty)$ as $n \to \infty$, for every $x > 0$ for which $\nu\{x\} = 0$.

The next assumption concerns the measure $\nu$:

(A.5) $\displaystyle \int_0^\infty x^q\, \nu(dx) < \infty$ for $q = 1, 2$.

Finally, we will assume that

(A.6) $\displaystyle \int_{[x,\infty)} y\, \nu_n(dy) = \sum_{j=1}^n \lambda_{nj} r_{nj}\, \mathbf{1}(r_{nj} \ge x) \to \int_{[x,\infty)} y\, \nu(dy)$ as $n \to \infty$, for every $x > 0$ for which $\nu\{x\} = 0$.

Proposition 7. Assumptions (A.4) and (A.6) taken together imply the following:

(A.6b) $\displaystyle \int_0^\infty \big(e^{i\theta y} - 1\big)\, \nu_n(dy) = \sum_{j=1}^n \lambda_{nj}\big(e^{i\theta r_{nj}} - 1\big) \to \int_0^\infty \big(e^{i\theta y} - 1\big)\, \nu(dy)$ as $n \to \infty$, for all $\theta \in \mathbb{R}$.

Proof. Define the sequence of measures $\{\widetilde{\nu}_n\}$, $n = 1,2,\ldots$, on $(0,\infty)$ via $\widetilde{\nu}_n(dy) := y\, \nu_n(dy)$.
These measures are finite and the sequence $\{\widetilde{\nu}_n\}$ converges weakly to the measure $\widetilde{\nu}$ defined by $\widetilde{\nu}(dy) := y\,\nu(dy)$, as a result of (A.6). Further, $\widetilde{\nu}$ is finite as a result of (A.5). Define the family of functions $h_\theta : \mathbb{R} \to \mathbb{C}$, indexed by $\theta \in \mathbb{R}$,

\[ h_\theta(x) := \begin{cases} \dfrac{e^{i\theta x} - 1}{x} & \text{if } x \ne 0, \\[4pt] i\theta & \text{if } x = 0. \end{cases} \]

It can be readily seen that, for each $\theta$, $h_\theta$ is continuous and bounded, and hence from Helly's second theorem

\[ \lim_{n\to\infty} \int_0^\infty h_\theta(y)\, \widetilde{\nu}_n(dy) = \int_0^\infty h_\theta(y)\, \widetilde{\nu}(dy). \]

This is however equivalent to the convergence in (A.6b).

An example of a double array of parameters and corresponding ON/OFF processes satisfying Assumptions A.1-A.6 and Conditions C.1 and C.2 is given in the Appendix. The following proposition gives sufficient conditions for the convergence of the row sums of the triangular array $\{\zeta_{nj}(t)\}$.

Proposition 8. Suppose that there exists a Lévy measure $\nu$ on $(0,\infty)$ for which the triangular array of parameters $(\lambda_{nj}, r_{nj})$ satisfies Assumptions A.1, A.2, and A.4. Then for each fixed $t$,

\[ X_n(t) := \sum_{j=1}^n \zeta_{nj}(t) \xrightarrow{d} X_t \quad \text{as } n \to \infty. \tag{47} \]

$X_t$ is infinitely divisible with

\[ \log E\big[e^{i\theta X_t}\big] = \int_0^\infty \big(e^{i\theta x} - 1\big)\, \frac{1}{\mu}\,\nu(dx). \]

Proof. Since

\[ \max_{1\le j\le n} \big| E\,e^{i\theta\zeta_{nj}(t)} - 1 \big| = \max_{1\le j\le n} \frac{\lambda_{nj}}{\lambda_{nj} + \mu}\, \big| e^{i\theta r_{nj}} - 1 \big| \le \frac{2}{\mu} \max_{1\le j\le n} \lambda_{nj} \to 0 \quad \text{as } n \to \infty, \tag{48} \]

from Assumption A.1 and (45) the array $\{\zeta_{nj}(t)\}$ is null for each fixed $t$. For each $x > 0$,

\[ \sum_{j=1}^n P(\zeta_{nj}(t) \ge x) = \sum_{j=1}^n \frac{\lambda_{nj}}{\lambda_{nj} + \mu}\, \mathbf{1}(r_{nj} \ge x) = \frac{1}{\mu} \sum_{j=1}^n \lambda_{nj}\, \mathbf{1}(r_{nj} \ge x) - \sum_{j=1}^n \frac{\lambda_{nj}^2}{\mu(\lambda_{nj} + \mu)}\, \mathbf{1}(r_{nj} \ge x). \tag{49} \]

The second term on the right-hand side of the last equation vanishes as $n \to \infty$ because

\[ 0 \le \sum_{j=1}^n \frac{\lambda_{nj}^2}{\mu(\lambda_{nj} + \mu)}\, \mathbf{1}(r_{nj} \ge x) \le \Big( \max_{1\le j\le n} \lambda_{nj} \Big) \frac{1}{\mu^2} \sum_{j=1}^n \lambda_{nj}\, \mathbf{1}(r_{nj} \ge x) \to 0 \]

for all $x > 0$, by Assumptions A.1 and A.4. Hence, again by Assumption A.4,

\[ \lim_{n\to\infty} \sum_{j=1}^n P(\zeta_{nj}(t) \ge x) = \frac{1}{\mu}\, \nu[x,\infty), \]

where $\frac{1}{\mu}\nu(\cdot)$ is a Lévy measure on $(0,\infty)$.
Also,

\[ \sum_{j=1}^n E\big[\zeta_{nj}(t)\, \mathbf{1}(\zeta_{nj}(t) \le \epsilon)\big] = \sum_{j=1}^n \frac{\lambda_{nj}}{\lambda_{nj} + \mu}\, r_{nj}\, \mathbf{1}(r_{nj} \le \epsilon) \le \frac{1}{\mu} \sum_{j=1}^n \lambda_{nj} r_{nj}\, \mathbf{1}(r_{nj} \le \epsilon), \]

and hence A.2 implies C.2 with $\gamma = 0$. Hence, as an immediate consequence of Theorem 6,

\[ \lim_{n\to\infty} \log E\big[e^{i\theta X_n(t)}\big] = \frac{1}{\mu} \int_0^\infty \big(e^{i\theta x} - 1\big)\, \nu(dx) \quad \text{for any } \theta \in \mathbb{R}. \tag{50} \]

The following theorem in Billingsley [5, p.142] guarantees the weak convergence of a family of processes in $D[0,T]$.

Theorem 9. Let $\{X_n(t);\, t \in [0,T]\}$ be a sequence of real-valued processes and $\{X(t);\, t \in [0,T]\}$ a process with sample paths in $D[0,T]$, such that the finite dimensional distributions converge,

\[ (X_n(t_1), \ldots, X_n(t_k)) \xrightarrow{d} (X(t_1), \ldots, X(t_k)) \quad \text{for all } k \in \mathbb{N} \text{ and } t_i \in [0,T], \tag{51} \]

and further that

\[ X(T) - X(T - \delta) \xrightarrow{d} 0 \quad \text{as } \delta \to 0. \tag{52} \]

Finally, suppose that for $t_1 \le t_2 \le t_3$ the inequality

\[ E\big[ |X_n(t_2) - X_n(t_1)|^\beta\, |X_n(t_3) - X_n(t_2)|^\beta \big] \le K (t_3 - t_1)^{1+\alpha} \tag{53} \]

holds for some $\alpha > 0$, $\beta > 1$, and $K > 0$. Then $X_n \xrightarrow{d} X$ in $D[0,T]$.

Theorem 10. Suppose that there exists a Lévy measure $\nu$ on $(0,\infty)$ for which the triangular array of parameters $(\lambda_{nj}, r_{nj})$ satisfies Assumptions A.1-A.6. Then the row sums of the triangular array of processes $\{\zeta_{nj}(t);\, t \ge 0\}$ satisfy

\[ X_n(t) := \sum_{j=1}^n \zeta_{nj}(t) \xrightarrow{d} X(t), \tag{54} \]

where $\{X(t);\, t \ge 0\}$ is an ESPC process (see Definition 1) with finite dimensional distributions given by

\[ \log E\big[e^{i(\theta_1 X(t_1) + \theta_2 X(t_2) + \cdots + \theta_m X(t_m))}\big] = \frac{1}{\mu} \sum_{1\le j\le k\le m} \int_0^\infty \big(e^{i(\theta_j + \cdots + \theta_k)x} - 1\big)\, \nu(dx) \times \big(1 - e^{-\mu(t_j - t_{j-1})}\big)\, e^{-\mu(t_k - t_j)}\, \big(1 - e^{-\mu(t_{k+1} - t_k)}\big). \tag{55} \]

In the above expression $t_0 = -\infty$ and $t_{m+1} = +\infty$, according to the standard convention in this paper.

Proof. The convergence of the marginal distributions has already been proved (under weaker assumptions) in Proposition 8. We will split the proof in two parts. Part 1.
Convergence of the finite dimensional distributions. We begin by determining the joint distribution of the process $\{X_n(t)\}$ at times $t_1 < t_2 < \cdots < t_m$. Set $\xi_{nj}(t) = \zeta_{nj}(t)/r_{nj}$ (so that the processes $\xi_{nj}(t)$ take values in the set $\{0,1\}$). The joint characteristic function is given by

\[ E\,e^{i\sum_{k=1}^m \theta_k X_n(t_k)} = E\,e^{i\sum_{j=1}^n r_{nj} \sum_{k=1}^m \theta_k \xi_{nj}(t_k)} = E \prod_{j=1}^n e^{i\sum_{k=1}^m r_{nj}\theta_k \xi_{nj}(t_k)} = \prod_{j=1}^n E\, e^{i\sum_{k=1}^m r_{nj}\theta_k \xi_{nj}(t_k)}, \tag{56} \]

the last equation following from the independence of the sources. Writing $\xi_k := \xi_{nj}(t_k)$ for the generic element of the array and $r$ in place of $r_{nj}$, and taking into account the fact that $e^{i\theta_k r \xi_k} = 1 + \xi_k (e^{ir\theta_k} - 1)$ since $\xi_k$ takes only the values 0 and 1,

\[ E\,e^{i\sum_{k=1}^m r\theta_k \xi_k} = E \prod_{k=1}^m e^{ir\theta_k \xi_k} = E \prod_{k=1}^m \big( 1 + \xi_k (e^{ir\theta_k} - 1) \big) = 1 + \sum_{1\le l_1\le m} E[\xi_{l_1}]\, \big(e^{ir\theta_{l_1}} - 1\big) + \sum_{k=2}^m \sum_{1\le l_1 < \cdots < l_k \le m} E[\xi_{l_1}\cdots\xi_{l_k}] \prod_{s=1}^k \big(e^{ir\theta_{l_s}} - 1\big). \]

Part 2. Tightness. It can be shown that, for some $K > 0$,

\[ E\big[ (X_n(t_2) - X_n(t_1))^2 (X_n(t_3) - X_n(t_2))^2 \big] \le K (t_3 - t_1)^2 \quad \text{for all } 0 < t_1 < t_3 < T. \]

Finally, there remains to establish (52), and to this end it suffices to show that the limit process $X$ satisfies $P(|X(T) - X(T-\delta)| > \epsilon) \to 0$ as $\delta \to 0$ for any $\epsilon > 0$. From Markov's inequality we have

\[ P(|X(T) - X(T-\delta)| > \epsilon) \le \frac{E(X(T) - X(T-\delta))^2}{\epsilon^2} \le \frac{1}{\epsilon^2}\, 2 H(\delta) \int_0^\infty x^2\, \frac{1}{\mu}\,\nu(dx), \tag{62} \]

where in the last inequality we have used (25) and (26), together with the facts that $\psi(\theta) = \int_0^\infty (e^{i\theta x} - 1)\frac{1}{\mu}\nu(dx)$ and $\int_0^\infty x^2\,\nu(dx) < \infty$ by virtue of Assumption A.5. Here $H(\delta) = 1 - e^{-\mu\delta}$, and since $\lim_{\delta\to 0} H(\delta) = 0$, (52) follows from (62).

The moment conditions in Assumptions A.3 and A.5 are needed in establishing the tightness of the family of processes $\{X_n(t)\}$ because of inequality (53) in Theorem 9. In particular, A.5 restricts the possible limit processes $\{X(t)\}$ to those with finite first and second moments.
It may be possible to circumvent this restriction by using an alternative to (53) which does not involve moments (see for instance Theorem 13.5 in [5, p.142]) and also to modify the proof of Part 2 of Theorem 10 in order to relax these requirements. We will not pursue this direction here.

6 Further examples of GCID processes

Here we discuss further the class of Gaussian GCID processes and give a few additional examples, as well as examples of non-Gaussian processes.

Proposition 11. Suppose that $\{X_t;\, t \in \mathbb{R}\}$ is a GCID process with $\psi(\theta) = i\mu\theta - \frac{1}{2}\sigma^2\theta^2$ and given $H$ satisfying the conditions of Theorem 3. Then $\{X_t\}$ is a stationary Gaussian process with covariance function

\[ \mathrm{Cov}(X_{t_i}, X_{t_j}) = \sigma^2 \big(1 - H(t_j - t_i)\big), \qquad t_i \le t_j. \tag{63} \]

If in addition $H$ satisfies

\[ \lim_{h\downarrow 0} H(h)\, |\log h|^{1+\beta} = 0 \quad \text{for some } \beta > 0, \tag{64} \]

then $\{X_t\}$ has a.s. continuous sample paths.

Proof. From Theorem 3 and Lemma 4,

\[ \log\varphi_n(\theta_1,\ldots,\theta_n;\, t_1,\ldots,t_n) = i\mu(\theta_1 + \cdots + \theta_n) - \frac{\sigma^2}{2} \sum_{1\le i\le j\le n} (\theta_i + \cdots + \theta_j)^2\, a_{ij} \tag{65} \]

with $a_{ij} = [H(t_j - t_{i-1}) - H(t_j - t_i) - H(t_{j+1} - t_{i-1}) + H(t_{j+1} - t_i)]$. From (6) and (7),

\[ \sum_{1\le i\le j\le n} (\theta_i + \cdots + \theta_j)^2\, a_{ij} = \sum_{1\le i\le n,\, 1\le j\le n} \theta_i\theta_j\, b_{ij} \]

with, for $i \le j$,

\[ b_{ij} = \sum_{k=1}^{i} \sum_{l=j}^{n} \big[ H(t_l - t_{k-1}) - H(t_l - t_k) - H(t_{l+1} - t_{k-1}) + H(t_{l+1} - t_k) \big] = 1 - H(t_j - t_i), \]

and $b_{ji} = b_{ij}$, where in the above we use the telescoping nature of the sums and the conventions $t_0 = -\infty$, $t_{n+1} = +\infty$, $H(0) = 0$, $H(+\infty) = 1$. Thus

\[ \log E\big[e^{i\sum_{k=1}^n \theta_k X_{t_k}}\big] = i\mu \sum_{k=1}^n \theta_k - \frac{\sigma^2}{2} \sum_{k=1}^n \theta_k^2 - \sigma^2 \sum_{1\le i<j\le n} \theta_i\theta_j\, \big(1 - H(t_j - t_i)\big), \]

i.e. $\{X_t\}$ is a stationary Gaussian process with covariance given by (63). A stationary Gaussian process with covariance function $C$ has a.s. continuous sample paths provided that

\[ \int_1^\infty p\big(e^{-x^2}\big)^{1/2}\, dx < \infty, \quad \text{where } p(u) := \sup_{0\le t\le u} \big( C(0) - C(t) \big) \tag{67} \]

(see [1, p.14-15]). A sufficient condition, easier to check, is

\[ p(u) \le \frac{C}{|\log u|^{1+\beta}} \tag{68} \]

for some $C > 0$ and $\beta > 0$. Using (26), equation (67) gives $p(u) = \sup_{0\le t\le u} \sigma^2 H(t) = \sigma^2 H(u)$. Therefore (64) implies that (68) holds and hence that the sample paths of the process are a.s. continuous.
This completes the proof. □

The classical Ornstein–Uhlenbeck process is a Gaussian GCID process with $H(t) = 1 - e^{-\alpha t}$ (with $\alpha > 0$); it satisfies (64) since $\lim_{h \downarrow 0} H(h)/h = \alpha$, and therefore has (as is well known) continuous sample paths w.p. 1. On the other hand, its GCID counterpart with Poisson marginals, the M/M/∞ process, has sample paths that w.p. 1 are piecewise constant and have jumps of size 1. Thus it is clear that path behavior depends both on the nature of the marginal distributions and on the type of correlation structure, which is determined by $H$.

Another instance of a Gaussian GCID process which also satisfies the condition of Proposition 11, and therefore has a.s. continuous sample paths, is the following generalization of the Slepian–Shepp process (see Slepian [25] and Shepp [24]), which we discuss in the next example.

Example 12. Let $H(x) = \min(x^\alpha, 1) = x^\alpha \wedge 1$ where $\alpha \in (0, 1]$, and $\psi(\theta) = -\frac{1}{2}\sigma^2\theta^2$. Then (17) gives
$$\log \varphi_n(\theta_1, \ldots, \theta_n; t_1, \ldots, t_n) = -\frac{1}{2}\sigma^2 \sum_{1 \le i \le j \le n} (\theta_i + \cdots + \theta_j)^2 \left[ (t_j - t_{i-1})^\alpha \wedge 1 - (t_j - t_i)^\alpha \wedge 1 - (t_{j+1} - t_{i-1})^\alpha \wedge 1 + (t_{j+1} - t_i)^\alpha \wedge 1 \right]. \tag{69}$$
Then, arguing as in the proof of Proposition 11, (69) can be written as
$$-\frac{1}{2}\sigma^2 \left( \sum_{i=1}^n \theta_i^2 + 2 \sum_{1 \le i < j \le n} \theta_i \theta_j \left( 1 - (t_j - t_i)^\alpha \wedge 1 \right) \right),$$
so that $\operatorname{Cov}(X_{t_i}, X_{t_j}) = \sigma^2 \left( 1 - (t_j - t_i)^\alpha \wedge 1 \right)$ for $t_i \le t_j$. Condition (64) is satisfied since $\lim_{h \downarrow 0} h^\alpha |\log h|^{1+\beta} = 0$ for every $\beta > 0$.

Example 13. We take again $H(x) = x^\alpha \wedge 1$ with $\alpha \in (0, 1]$. However, this time we choose $\psi(\theta) = -\log(1 - i\theta)$. Then
$$\log \varphi_n(\theta_1, \ldots, \theta_n; t_1, \ldots, t_n) = -\sum_{1 \le i \le j \le n} \log\left[1 - i(\theta_i + \cdots + \theta_j)\right] \left[ (t_j - t_{i-1})^\alpha \wedge 1 - (t_j - t_i)^\alpha \wedge 1 - (t_{j+1} - t_{i-1})^\alpha \wedge 1 + (t_{j+1} - t_i)^\alpha \wedge 1 \right]. \tag{71}$$
This defines a process which has the same covariance structure as the Gaussian process of Example 12 but has Gamma marginals.
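The covariance of Example 12 can be checked numerically: for $\alpha \le 1$ the function $1 - x^\alpha \wedge 1$ is convex and decreasing on $(0, \infty)$, hence positive semidefinite by Pólya's criterion, so the finite dimensional distributions can be sampled directly. A minimal numerical sketch (using NumPy; the helper name `slepian_type_cov` is ours, not from the paper):

```python
import numpy as np

# Example 12 sketch: the Slepian-type Gaussian GCID with H(x) = x^alpha /\ 1
# has covariance sigma^2 (1 - |t-s|^alpha /\ 1), which vanishes for |t-s| >= 1.
def slepian_type_cov(ts, alpha=1.0, sigma2=1.0):
    gaps = np.abs(np.subtract.outer(ts, ts))
    return sigma2 * (1.0 - np.minimum(gaps ** alpha, 1.0))

ts = np.linspace(0.0, 3.0, 61)
for alpha in (1.0, 0.5):
    C = slepian_type_cov(ts, alpha, sigma2=2.0)
    # a valid covariance matrix: symmetric PSD up to round-off
    assert np.linalg.eigvalsh(C).min() > -1e-9
    # points more than one time unit apart are uncorrelated (H(t) = 1, t >= 1)
    assert C[0, -1] == 0.0

# draw one Gaussian sample of the finite dimensional distribution
rng = np.random.default_rng(0)
C = slepian_type_cov(ts, 0.5) + 1e-9 * np.eye(len(ts))  # tiny jitter for stability
path = rng.multivariate_normal(np.zeros(len(ts)), C)
assert path.shape == (61,)
```

For $\alpha = 1$ this reproduces the triangular covariance of the classical Slepian process.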
In particular, in the case $\alpha = 1$ and assuming $t_n - t_1 \le 1$, the right hand side of (71) reduces to
$$-\sum_{j=1}^{n-1} \log\left[1 - i(\theta_1 + \cdots + \theta_j)\right](t_{j+1} - t_j) \;-\; \sum_{i=2}^{n} \log\left[1 - i(\theta_i + \cdots + \theta_n)\right](t_i - t_{i-1}) \;-\; \log\left[1 - i(\theta_1 + \cdots + \theta_n)\right]\left[1 - (t_n - t_1)\right].$$
The distribution of $X_{t+h} - X_t$ can be obtained from (25). Its characteristic function is
$$\exp\left\{ H(h)\left( -\log(1 - i\theta) - \log(1 + i\theta) \right) \right\} = \left( \frac{1}{1 + \theta^2} \right)^{H(h)}.$$
This is in fact the characteristic function of the difference of two independent Gamma-distributed random variables. In particular, note that
$$\varphi(\theta_1, \theta_2) = E\left[ e^{i\theta_1 X_{t_1} + i\theta_2 X_{t_2}} \right] = \left( \frac{1}{1 - i(\theta_1 + \theta_2)} \right)^{1 - H(t_2 - t_1)} \left( \frac{1}{1 - i\theta_1} \right)^{H(t_2 - t_1)} \left( \frac{1}{1 - i\theta_2} \right)^{H(t_2 - t_1)}. \tag{72}$$
Thus (71) defines a type of multidimensional exponential distribution. If the distribution $H$ has finite support, say $H(t) = 1$ for $t \ge \tau$, then $X_{t_1}, X_{t_2}$ are independent for $t_2 - t_1 > \tau$, as can be seen from (72).

Example 14. This is an illustration of Proposition 5. If in (17) we take $\psi(\theta) = -\frac{1}{2}\sigma^2\theta^2$ and $H(t) := 1 - \sum_{k=1}^K p_k e^{-\alpha_k t}$, where $\sigma > 0$, $p_k > 0$, $\alpha_k > 0$, $k = 1, \ldots, K$, and $\sum_{k=1}^K p_k = 1$, then the resulting process is the superposition of $K$ independent (zero mean) stationary Ornstein–Uhlenbeck processes $\{Y_t^{(k)};\, t \ge 0\}$, $k = 1, \ldots, K$, where $E[Y_{t+s}^{(k)} Y_s^{(k)}] = \sigma^2 p_k e^{-\alpha_k t}$, $t > 0$, $k = 1, \ldots, K$.

Unlike Proposition 11, where the question of path continuity can be addressed based on known results about Gaussian processes, in general the question of path continuity for GCID processes not corresponding to already known processes is more difficult to settle. It will be addressed in a future paper.
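The factorization (72) in Example 13 is the characteristic function of $(Z_0 + Z_1,\, Z_0 + Z_2)$ with independent $Z_0 \sim \text{Gamma}(1 - H(d))$ and $Z_1, Z_2 \sim \text{Gamma}(H(d))$ (unit scale), $d = t_2 - t_1$, so both marginals are $\text{Gamma}(1) = \text{Exp}(1)$ and $\operatorname{Cov}(X_{t_1}, X_{t_2}) = \operatorname{Var}(Z_0) = 1 - H(d)$. A quick Monte Carlo check of these implied moments (a sketch using NumPy; the variable names are ours):

```python
import numpy as np

# (72) = c.f. of (Z0 + Z1, Z0 + Z2), with independent unit-scale Gamma variables
# Z0 ~ Gamma(1 - H(d)) and Z1, Z2 ~ Gamma(H(d)), d = t2 - t1.  Hence each
# marginal is Exp(1) and Cov(X_{t1}, X_{t2}) = Var(Z0) = 1 - H(d).
rng = np.random.default_rng(1)
alpha, d, n = 0.5, 0.25, 1_000_000
Hd = min(d ** alpha, 1.0)          # H(x) = x^alpha /\ 1 as in Example 13; here 0.5

z0 = rng.gamma(1.0 - Hd, size=n)
x1 = z0 + rng.gamma(Hd, size=n)
x2 = z0 + rng.gamma(Hd, size=n)

assert abs(x1.mean() - 1.0) < 0.01                      # Exp(1) marginal mean
assert abs(x1.var() - 1.0) < 0.02                       # Exp(1) marginal variance
assert abs(np.cov(x1, x2)[0, 1] - (1.0 - Hd)) < 0.02    # Cov = 1 - H(d)
```

The same additive construction also makes the independence claim transparent: if $H(d) = 1$ then $Z_0$ has zero shape parameter, i.e. $Z_0 = 0$ and the two coordinates are independent.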
A positive answer to the question of path regularity may be obtained by a straightforward application of the Kolmogorov–Chentsov criterion, according to which, if a process $\{X_t\}$ satisfies the inequality
$$E\left[ |X_{t_2} - X_{t_1}|^\alpha\, |X_{t_3} - X_{t_2}|^\alpha \right] \le |t_3 - t_1|^{1+\beta}$$
for $0 < t_1 < t_2 < t_3$ and $\alpha > 0$, $\beta > 0$, when $t_3 - t_1$ is sufficiently small, then w.p. 1 it possesses paths that have no discontinuities of the second kind [6]. Assuming that the GCID process has finite fourth moments and, furthermore, assuming for the sake of simplicity that $\psi'(0) = E X_t = 0$, we may obtain from (17)
$$E\left[ (X_{t_2} - X_{t_1})^2 (X_{t_3} - X_{t_2})^2 \right] = \psi^{(4)}(0)\, a_0 + \psi''(0)^2 \left[ (a_0 + a_1)(a_0 + a_2) + 2 a_0^2 \right]$$
with
$$a_1 := H(t_3 - t_1) + H(t_2 - t_1) - H(t_3 - t_2), \qquad a_2 := H(t_3 - t_1) + H(t_3 - t_2) - H(t_2 - t_1),$$
and $a_0 := H(t_2 - t_1) + H(t_3 - t_2) - H(t_3 - t_1)$. In the special case where $H$ is twice continuously differentiable in a neighborhood of the origin with $H'(0) = \ell > 0$ and $H''(0) = b < 0$, there exists $h$ such that whenever $0 < t < h$, $H(t) = \ell t + \frac{1}{2} b t^2 + o(t^2)$. Then, for $t_3 - t_1 < h$,
$$E\left[ (X_{t_2} - X_{t_1})^2 (X_{t_3} - X_{t_2})^2 \right] = \left( -b\, \psi^{(4)}(0) + 4\ell^2 \psi''(0)^2 \right)(t_2 - t_1)(t_3 - t_2) + o(h^2) \le \frac{1}{4}\left( -b\, \psi^{(4)}(0) + 4\ell^2 \psi''(0)^2 \right)(t_3 - t_1)^2 + o(h^2),$$
and thus Kolmogorov's criterion is satisfied with $\alpha = 2$ and $\beta = 1$.

A Appendix

A.1 Bounds for ON/OFF Markovian Processes

Lemma 15. Let $\{\zeta_t\}$ be a Markovian ON/OFF source taking the values 0 and $r$, assumed stationary, and suppose that $u < t < s$. Then,
$$E\left[ (\zeta_t - \zeta_u)^2 (\zeta_s - \zeta_t)^2 \right] \le \frac{r^4}{4}\, \lambda\mu\, (s - u)^2, \tag{73}$$
$$\left| E\left[ (\zeta_u - \zeta_t)(\zeta_t - \zeta_s) \right] \right| \le \frac{r^2}{4}\, \lambda\mu\, (s - u)^2, \tag{74}$$
$$E\left[ (\zeta_t - \zeta_u)^2 \right] \le \frac{2\lambda\mu r^2}{\lambda + \mu}\, (t - u). \tag{75}$$

Proof. The product $(\zeta_t - \zeta_u)(\zeta_s - \zeta_t)$ is non-zero only when $\zeta_u = 0, \zeta_t = r, \zeta_s = 0$, or $\zeta_u = r, \zeta_t = 0, \zeta_s = r$. In both cases the product is equal to $-r^2$.
Then
$$E\left[ (\zeta_t - \zeta_u)^2 (\zeta_s - \zeta_t)^2 \right] = r^4 P(\zeta_u = 0) P(\zeta_t = r \mid \zeta_u = 0) P(\zeta_s = 0 \mid \zeta_t = r) + r^4 P(\zeta_u = r) P(\zeta_t = 0 \mid \zeta_u = r) P(\zeta_s = r \mid \zeta_t = 0)$$
and thus
$$E\left[ (\zeta_t - \zeta_u)^2 (\zeta_s - \zeta_t)^2 \right] = \frac{\lambda\mu r^4}{(\lambda + \mu)^2} \left( 1 - e^{-(\lambda+\mu)(t-u)} \right) \left( 1 - e^{-(\lambda+\mu)(s-t)} \right).$$
Taking into account the fact that
$$0 \le 1 - e^{-(\lambda+\mu)x} \le (\lambda + \mu)x \quad \text{when } x > 0, \tag{76}$$
we see that
$$E\left[ (\zeta_t - \zeta_u)^2 (\zeta_s - \zeta_t)^2 \right] \le \frac{r^4 \lambda\mu}{(\lambda+\mu)^2}\, (\lambda+\mu)^2 (t-u)(s-t) \le r^4 \lambda\mu\, \frac{1}{4}(s-u)^2,$$
where the last inequality follows from the fact that $(t-u)(s-t) \le \frac{1}{4}(s-u)^2$. Inequality (74) is established by taking again into account the fact that $\zeta_u - \zeta_t$ and $\zeta_t - \zeta_s$ take values in the set $\{-r, 0, r\}$. Then,
$$\left| E\left[ (\zeta_u - \zeta_t)(\zeta_t - \zeta_s) \right] \right| \le E\left[ |\zeta_u - \zeta_t|\, |\zeta_t - \zeta_s| \right] = r^{-2}\, E\left[ (\zeta_u - \zeta_t)^2 (\zeta_t - \zeta_s)^2 \right]$$
(the last equality following from the fact that the process $\{\zeta_t\}$ takes values in $\{0, r\}$, so that each increment has $|x| = x^2/r$), and hence we can use (73) to establish (74). The last inequality can be established by the same type of argument as the first by noting that
$$E\left[ (\zeta_t - \zeta_u)^2 \right] = r^2 P(\zeta_u = 0) P(\zeta_t = r \mid \zeta_u = 0) + r^2 P(\zeta_u = r) P(\zeta_t = 0 \mid \zeta_u = r) = \frac{2\lambda\mu r^2}{(\lambda+\mu)^2} \left( 1 - e^{-(\lambda+\mu)(t-u)} \right).$$
Using (76) we establish (75). □

Proposition 16. There is $M > 0$ such that, for all $k \in \{2, \ldots, n\}$, the quantity $L_{nj}(k)$ defined in (60) satisfies the bound
$$\left| E\left[ \xi_{nj}(t_{l_1}) \cdots \xi_{nj}(t_{l_k}) \right] - \frac{\lambda_{nj}}{\mu}\, e^{-\mu(t_{l_k} - t_{l_1})} \right| \le M \lambda_{nj}^2. \tag{77}$$
Also, the quantity
$$R_{nj} := \sum_{k=2}^{m} \sum_{1 \le l_1 < \cdots < l_k \le m} \left( e^{ir\theta_{l_1}} - 1 \right) \cdots \left( e^{ir\theta_{l_k}} - 1 \right) \left( E\left[ \xi_{nj}(t_{l_1}) \cdots \xi_{nj}(t_{l_k}) \right] - \frac{\lambda_{nj}}{\mu}\, e^{-\mu(t_{l_k} - t_{l_1})} \right) \tag{78}$$
satisfies, for some $M' > 0$ not depending on $n$ or $j$,
$$|R_{nj}| \le M' \lambda_{nj}^2. \tag{79}$$

Proof. Expanding the product in (59) we may obtain the bound
$$\left| E\left[ \xi_{nj}(t_{l_1}) \cdots \xi_{nj}(t_{l_k}) \right] - \pi_{nj}\, e^{-\alpha_{nj}(t_{l_k} - t_{l_1})} \right| \le \pi_{nj}^2 M_1$$
for some positive $M_1$.
Hence, taking into account (58),
$$\left| E\left[ \xi_{nj}(t_{l_1}) \cdots \xi_{nj}(t_{l_k}) \right] - \frac{\lambda_{nj}}{\mu}\, e^{-\mu(t_{l_k} - t_{l_1})} \right| \le M_1 \pi_{nj}^2 + \pi_{nj}\, e^{-\mu(t_{l_k} - t_{l_1})} \left( 1 - e^{-\lambda_{nj}(t_{l_k} - t_{l_1})} \right) + \frac{\lambda_{nj}}{\mu}\, \pi_{nj}\, e^{-\mu(t_{l_k} - t_{l_1})}.$$
Using the inequalities $\pi_{nj} = \frac{\lambda_{nj}}{\mu + \lambda_{nj}} \le \frac{\lambda_{nj}}{\mu}$ and $0 \le 1 - e^{-\lambda_{nj}(t_{l_k} - t_{l_1})} \le \lambda_{nj}(t_{l_k} - t_{l_1})$, elementary considerations give the bound (77).

For (79), use the triangle inequality in (78) and take into account (77). Also use the inequality $|e^{i\theta r} - 1| \le 2 \wedge |\theta r|$ to write
$$\left| \left( e^{ir\theta_{l_1}} - 1 \right) \cdots \left( e^{ir\theta_{l_k}} - 1 \right) \right| \le r |\theta_{l_1}|\, 2^{k-1}.$$
Hence the modulus of each term on the right hand side of (78) is bounded by $r |\theta_{l_1}|\, 2^{k-1} M \lambda_{nj}^2$, and summing over the at most $2^m$ choices of indices gives $|R_{nj}| \le M\, 2^{2m}\, r \max_l |\theta_l|\, \lambda_{nj}^2$, which establishes (79). □

Since the limit exists and is finite, the sequence is bounded and thus A.3 is satisfied.

We now turn to the double array of stationary processes $\{\zeta_{nj}(t)\}$ corresponding to the array of parameters $(\lambda_{nj}, r_{nj})$ in the framework of Section 5. Fixing $t$ and omitting it from the notation, it is easy to see that the array of random variables $\{\zeta_{nj}\}$ satisfies C.1 with respect to the Lévy measure $\mu^{-1}\nu(dx)$ on $(0, \infty)$. Indeed, from (84) it follows immediately that
$$\sum_{j=1}^{n} P(\zeta_{nj} \ge x) = \sum_{j=1}^{n} \frac{n^{-\alpha}}{n^{-\alpha} + \mu}\, \mathbf{1}\!\left( b^{j n^{-\alpha}} \ge x \right) \to \frac{1}{\mu}\, \nu[x, \infty).$$
We will see that Condition C.2 is satisfied with $\gamma = 0$. Indeed,
$$\sum_{j=1}^{n} E\left[ \zeta_{nj} \mathbf{1}(\zeta_{nj} \le \epsilon) \right] = \sum_{j=1}^{n} \frac{\lambda_{nj} r_{nj}}{\lambda_{nj} + \mu}\, \mathbf{1}(r_{nj} \le \epsilon) = \sum_{j=1}^{n} \frac{n^{-\alpha}\, b^{j n^{-\alpha}}}{n^{-\alpha} + \mu}\, \mathbf{1}\!\left( b^{j n^{-\alpha}} \le \epsilon \right) = \sum_{n^\alpha C \le j \le n} \frac{n^{-\alpha}\, b^{j n^{-\alpha}}}{n^{-\alpha} + \mu} = \frac{b^C - b^{n^{1-\alpha}}}{n^\alpha \left( 1 - b^{n^{-\alpha}} \right)}\, \frac{1}{n^{-\alpha} + \mu} \tag{87}$$
where $C := \frac{\log \epsilon}{\log b}$ and we have used the fact that $b^{j n^{-\alpha}} \le \epsilon \Leftrightarrow j \ge n^\alpha C$. Since $0 < \alpha, b < 1$, the last term in (87) converges, as $n \to \infty$, to $\frac{b^C}{\mu \log(1/b)} = \frac{\epsilon}{\mu \log(1/b)}$. Thus C.2 is satisfied in this situation.

References

[1] Adler, R. J. (1990). An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes. IMS Lecture Notes–Monograph Series, 12.
[2] Applebaum, D. (2009). Lévy Processes and Stochastic Calculus, 2nd edn.
Cambridge University Press, Cambridge.
[3] Barndorff-Nielsen, O. E. and N. Shephard (2001). Non-Gaussian Ornstein–Uhlenbeck-based models and some of their uses in financial economics. J. Roy. Statist. Soc. Ser. B Statist. Methodol. 63(2), 167–241.
[4] Barndorff-Nielsen, O. E. (2011). Stationary infinitely divisible processes. Brazilian Journal of Probability and Statistics 25(3), 294–322.
[5] Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
[6] Chentsov, N. N. (1956). Weak convergence of stochastic processes whose trajectories have no discontinuities of the second kind and the "heuristic" approach to the Kolmogorov–Smirnov tests. Theory of Probability & Its Applications 1(1), 140–144.
[7] Fristedt, B. and L. Gray (1997). A Modern Approach to Probability Theory. Birkhäuser, Boston.
[8] Hall, P. (1988). Introduction to the Theory of Coverage Processes. John Wiley & Sons, New York.
[9] Heath, D., S. Resnick, and G. Samorodnitsky (1998). Heavy tails and long range dependence in ON/OFF processes and associated fluid models. Mathematics of Operations Research 23(1), 145–165.
[10] Jurek, Z. J. and W. Vervaat (1983). An integral representation for selfdecomposable Banach space valued random variables. Z. Wahrscheinlichkeitstheorie verw. Gebiete 62, 247–262.
[11] Kaj, I. and M. S. Taqqu (2008). Convergence to fractional Brownian motion and to the Telecom process: the integral representation approach. Progress in Probability, Vol. 60, 383–427. Birkhäuser Verlag, Basel.
[12] Kallenberg, O. (2021). Foundations of Modern Probability, 3rd edn. Springer, New York.
[13] Konstantopoulos, T. and S. J. Lin (1998). Macroscopic models for long-range dependent network traffic. Queueing Systems Theory Appl. 28, 215–243.
[14] Lee, P. M. (1967).
Infinitely divisible stochastic processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 7, 147–160.
[15] Maruyama, G. (1970). Infinitely divisible processes. Theory of Probability and its Applications 15(1), 3–23.
[16] Maulik, K. and S. Resnick (2003). Small and large time scale analysis of a network traffic model. Queueing Systems 43, 221–250.
[17] Mikosch, T., S. Resnick, H. Rootzén and A. Stegeman (2002). Is network traffic approximated by stable Lévy motion or fractional Brownian motion? The Annals of Applied Probability 12(1), 23–68.
[18] Mikosch, T. and G. Samorodnitsky (2007). Scaling limits for cumulative input processes. Mathematics of Operations Research 32(4), 890–919.
[19] Niculescu, C. P. and L.-E. Persson (2006). Convex Functions and Their Applications: A Contemporary Approach. Springer.
[20] Pipiras, V. and M. S. Taqqu (2000). The limit of a renewal reward process with heavy-tailed rewards is not a linear fractional stable motion. Bernoulli 6(4), 607–614.
[21] Resnick, S. and E. van den Berg (2000). Weak convergence of high-speed network traffic models. Journal of Applied Probability 37(2), 575–597.
[22] Resnick, S. and G. Samorodnitsky (2003). Limits of ON/OFF hierarchical product models for data transmission. The Annals of Applied Probability 13(4), 1355–1398.
[23] Sato, K. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
[24] Shepp, L. A. (1971). First passage time for a particular Gaussian process. The Annals of Mathematical Statistics 42(3), 946–951.
[25] Slepian, D. (1961). First passage time for a particular Gaussian process. Annals of Mathematical Statistics 32(2), 610–612.
[26] Wolfe, S. J. (1982). On a continuous analogue of the stochastic difference equation $X_n = \rho X_{n-1} + B_n$. Stoch. Proc. Appl. 12, 301–312.
[27] Wolpert, R. L. and M. S. Taqqu (2005).
Fractional Ornstein–Uhlenbeck Lévy processes and the Telecom process: upstairs and downstairs. Signal Processing 85, 1523–1545.