EXACT MAXIMUM LIKELIHOOD ESTIMATOR FOR DRIFT FRACTIONAL BROWNIAN MOTION AT DISCRETE OBSERVATION

By Yaozhong Hu^{*,1}, Weilin Xiao^2 and Weiguo Zhang^2

University of Kansas and South China University of Technology

This paper deals with the problems of consistency and strong consistency of the maximum likelihood estimators of the mean and variance of drift fractional Brownian motions observed at discrete time instants. A central limit theorem for these estimators is also obtained by using the Malliavin calculus.

1. Introduction. Long memory processes have been widely applied in various fields, such as finance, hydrology, network traffic analysis and so on. Fractional Brownian motions form one special class of long memory processes when the Hurst parameter H > 1/2. The stochastic calculus for these processes is by now well established (see [2]). When a long memory model is used to describe some phenomenon, it is important to identify the parameters in the model. In this paper we consider the following simple model:

    Y_t = \mu t + \sigma B^H_t,  t \ge 0,    (1.1)

where \mu and \sigma are constants to be estimated from discrete observations of the process Y. Our method works for fractional Brownian motions of all parameters, so we assume that (B^H_t, t \ge 0) is a fractional Brownian motion with Hurst parameter H \in (0, 1); we do not discuss the case H = 1/2, the standard Brownian motion, since it is well known. This means that (B^H_t, t \ge 0) is a mean-zero Gaussian process with the following covariance structure:

    E[B^H_t B^H_s] = \frac{1}{2} \left( t^{2H} + s^{2H} - |t - s|^{2H} \right).

We assume that the process is observed at discrete time instants (t_1, t_2, ..., t_N).
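The covariance above completely determines the law of any finite collection of observations, so discrete samples of model (1.1) can be drawn exactly from the Gaussian distribution it defines. The following sketch does so via a Cholesky factor; it assumes NumPy, the function names are ours, and it is only an illustration (the simulations in Section 4 use Paxson's method instead).

```python
import numpy as np

def fbm_covariance(N, h, H):
    """Covariance matrix of (B^H_h, ..., B^H_{Nh}) built from
    E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    t = h * np.arange(1, N + 1, dtype=float)
    T, S = np.meshgrid(t, t, indexing="ij")
    return 0.5 * (T**(2 * H) + S**(2 * H) - np.abs(T - S)**(2 * H))

def sample_drift_fbm(N, h, H, mu, sigma, rng):
    """One discrete path (Y_h, ..., Y_{Nh}) of Y_t = mu*t + sigma*B^H_t,
    drawn exactly using a Cholesky factor of the covariance matrix."""
    t = h * np.arange(1, N + 1, dtype=float)
    L = np.linalg.cholesky(fbm_covariance(N, h, H))
    return mu * t + sigma * (L @ rng.standard_normal(N))
```

Exact Cholesky sampling costs O(N^3) and is only practical for moderate N, which is one reason faster approximate synthesizers such as Paxson's method exist.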
To simplify notation we assume t_k = kh, k = 1, 2, ..., N, for some fixed length h > 0.

-----
* Corresponding author: hu@math.ku.edu
1 Supported by NSF Grant DMS-0504783.
2 Supported by National Natural Science Funds for Distinguished Young Scholar 70825005.
AMS 2000 subject classifications: Primary 62G05; secondary 60H07.
Keywords and phrases: maximum likelihood estimation, fractional Brownian motions, discrete observation, strong consistency, central limit theorem, Malliavin calculus.
-----

Thus the observation vector is Y = (Y_{t_1}, Y_{t_2}, ..., Y_{t_N})'. We will obtain the maximum likelihood estimators \hat\mu_N and \hat\sigma^2_N of \mu and \sigma^2, respectively, and study their asymptotic behavior, in particular the almost sure convergence and a central limit type theorem.

The first reason we chose to study (1.1) is that it is simple and we can obtain explicit estimators. The second reason is that it is widely applied in various fields: the logarithm of the geometric fractional Brownian motion, which is popular in finance, is of the form (1.1). This paper is also complementary to the work [6], where the parameter estimation problem (with continuous time observation) for fractional Ornstein-Uhlenbeck processes is studied. The parameter estimation problem for long memory processes has been well studied (see [1], [3], [4], [9], [11]). Although most of that work requires the process to be stationary, their ideas could be adapted to analyze the model (1.1). We shall, however, use the method of [6], which seems to us the simplest; it is based on a result of [8] and uses the Malliavin calculus.

We introduce the notation

    Y = \mu t + \sigma B^H_t,    (1.2)

where, here and for the rest of the paper, t = (h, 2h, ..., Nh)' and B^H_t = (B^H_h, ..., B^H_{Nh})'.
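The computations to come repeatedly exploit the self-similarity of fractional Brownian motion, Cov(B^H_{ih}, B^H_{jh}) = h^{2H} Cov(B^H_i, B^H_j), which follows directly from the covariance formula above and lets the covariance matrix of B^H_t factor into h^{2H} times an h-free matrix. A quick numerical check (NumPy assumed; the helper name is ours):

```python
import numpy as np

def fbm_cov(t, s, H):
    """E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

# Self-similarity: Cov(B^H_{ih}, B^H_{jh}) = h^{2H} * Cov(B^H_i, B^H_j),
# so the covariance matrix of (B^H_h, ..., B^H_{Nh}) is h^{2H} times a
# matrix that depends only on the integer indices i, j.
for H in (0.25, 0.55, 0.75):
    for h in (0.1, 2.0):
        for i, j in ((1, 1), (2, 5), (3, 4)):
            assert np.isclose(fbm_cov(i * h, j * h, H),
                              h**(2 * H) * fbm_cov(i, j, H))
```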
The joint probability density function of Y is

    h(Y) = (2\pi\sigma^2)^{-N/2} |\Gamma_H|^{-1/2} \exp\left\{ -\frac{1}{2\sigma^2} (Y - \mu t)' \Gamma_H^{-1} (Y - \mu t) \right\},

where

    \Gamma_H = \left[ \mathrm{Cov}(B^H_{ih}, B^H_{jh}) \right]_{i,j=1,\dots,N} = \frac{h^{2H}}{2} \left[ i^{2H} + j^{2H} - |i - j|^{2H} \right]_{i,j=1,\dots,N}.

The maximum likelihood estimators of \mu and \sigma^2 from the observation Y are given by

    \hat\mu = \frac{t' \Gamma_H^{-1} Y}{t' \Gamma_H^{-1} t},    (1.3)

    \hat\sigma^2 = \frac{1}{N} \cdot \frac{(Y' \Gamma_H^{-1} Y)(t' \Gamma_H^{-1} t) - (t' \Gamma_H^{-1} Y)^2}{t' \Gamma_H^{-1} t}.    (1.4)

In Section 2 we show that \hat\mu and \hat\sigma^2 converge to \mu and \sigma^2 both in mean square and almost surely. In Section 3 we prove a central limit type theorem. In Section 4 we give some simulations to demonstrate the estimators \hat\mu and \hat\sigma^2.

2. Consistency. In this section we consider the L^2 consistency and the strong consistency of the maximum likelihood estimators of both \mu and \sigma^2. Let us first consider the L^2 consistency of (1.3).

THEOREM 2.1. The estimator \hat\mu (defined by (1.3)) of \mu is unbiased and converges in probability to \mu as N \to \infty.

PROOF. Substituting Y = \mu t + \sigma B^H_t into (1.3), we have

    \hat\mu = \mu + \sigma \frac{t' \Gamma_H^{-1} B^H_t}{t' \Gamma_H^{-1} t}.    (2.1)

Thus E[\hat\mu] = \mu, and hence \hat\mu is unbiased. On the other hand, we have

    Var[\hat\mu] = \sigma^2 E\left[ \frac{t' \Gamma_H^{-1} B^H_t (B^H_t)' \Gamma_H^{-1} t}{(t' \Gamma_H^{-1} t)^2} \right] = \sigma^2 \frac{t' \Gamma_H^{-1} \Gamma_H \Gamma_H^{-1} t}{(t' \Gamma_H^{-1} t)^2} = \frac{\sigma^2}{t' \Gamma_H^{-1} t}.

Denote M = (m_{ij})_{i,j=1,\dots,N}, where m_{ij} = \frac{1}{2}(i^{2H} + j^{2H} - |i - j|^{2H}), and denote by m^{-1}_{ij} the entries of the inverse matrix M^{-1} of M. Since \Gamma_H = h^{2H} M and t = h(1, 2, \dots, N)', we may write

    Var[\hat\mu] = \frac{h^{2H} \sigma^2}{t' M^{-1} t} = \frac{h^{2H-2} \sigma^2}{\sum_{i,j=1}^N ij \, m^{-1}_{ij}}.

We shall use the following inequality (with x = \mathbf{N} := (1, 2, \dots, N)'):

    x' M^{-1} x \ge \frac{\|x\|_2^2}{\lambda_{\max}},

where \lambda_{\max} is the largest eigenvalue of the matrix M. Thus we have

    Var[\hat\mu] \le \frac{\sigma^2 h^{2H-2} \lambda_{\max}}{\|\mathbf{N}\|_2^2}.

Since \|\mathbf{N}\|_2^2 = 1^2 + 2^2 + \cdots + N^2 = \frac{N(N+1)(2N+1)}{6}, we know that \|\mathbf{N}\|_2^2 is of order N^3. On the other hand, by the Gershgorin circle theorem (see [7], Theorem 8.1.3),

    \lambda_{\max} \le \max_{i=1,\dots,N} \sum_{j=1}^N |m_{ij}| \le C N^{2H+1},

where C is a positive constant whose value may differ from one occurrence to the next. Consequently, we have

    Var[\hat\mu] \le C \sigma^2 h^{2H-2} N^{-3} N^{2H+1} = C N^{2H-2},

which converges to zero as N \to \infty since H < 1. \square

Next we study the estimator \hat\sigma^2 defined by (1.4).

THEOREM 2.2. We have

    E[\hat\sigma^2] = \frac{N-1}{N} \sigma^2  and  Var[\hat\sigma^2] \to 0 as N \to \infty.    (2.2)

PROOF. Replacing Y by \mu t + \sigma B^H_t in (1.4), we have

    \hat\sigma^2 = \frac{\sigma^2}{N} \left[ (B^H_t)' \Gamma_H^{-1} B^H_t - \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right].

Thus

    E[\hat\sigma^2] = \frac{\sigma^2}{N} \left( N - \frac{t' \Gamma_H^{-1} E[B^H_t (B^H_t)'] \Gamma_H^{-1} t}{t' \Gamma_H^{-1} t} \right) = \frac{N-1}{N} \sigma^2.    (2.3)

To compute the variance of \hat\sigma^2 we also need to compute E[(\hat\sigma^2)^2]:

    E[(\hat\sigma^2)^2]
    = \frac{\sigma^4}{N^2} E\left[ \left( (B^H_t)' \Gamma_H^{-1} B^H_t - \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right)^2 \right]
    = \frac{\sigma^4}{N^2} \left\{ E\left[ ((B^H_t)' \Gamma_H^{-1} B^H_t)^2 \right] - 2 E\left[ (B^H_t)' \Gamma_H^{-1} B^H_t \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right] + E\left[ \left( \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right)^2 \right] \right\}
    = \frac{\sigma^4}{N^2} \left\{ E\left[ ((B^H_t)' \Gamma_H^{-1} B^H_t)^2 \right] - 2 E\left[ (B^H_t)' \Gamma_H^{-1} B^H_t \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right] + 3 \right\},    (2.4)

where the last term equals 3 because t' \Gamma_H^{-1} B^H_t is a centered Gaussian variable with variance t' \Gamma_H^{-1} t, whose normalized fourth moment is 3.

Denote X = \Gamma_H^{-1/2} B^H_t. Then E[XX'] = \Gamma_H^{-1/2} E[B^H_t (B^H_t)'] \Gamma_H^{-1/2} = I. Therefore X is a standard Gaussian vector of dimension N. For any \lambda small enough and \varepsilon \in \mathbb{R}, let us compute

    E\left[ \exp\left( \lambda (B^H_t)' \Gamma_H^{-1} B^H_t + \varepsilon t' \Gamma_H^{-1} B^H_t \right) \right]
    = E\left[ \exp\left( \lambda |X|^2 + \varepsilon t' \Gamma_H^{-1/2} X \right) \right]
    = \frac{1}{(2\pi)^{N/2}} \int_{\mathbb{R}^N} e^{-\frac{|x|^2}{2} + \lambda |x|^2 + \varepsilon t' \Gamma_H^{-1/2} x} \, dx.

A standard completion of squares yields

    E\left[ \exp\left( \lambda (B^H_t)' \Gamma_H^{-1} B^H_t + \varepsilon t' \Gamma_H^{-1} B^H_t \right) \right] = (1 - 2\lambda)^{-N/2} \exp\left\{ \frac{\varepsilon^2 t' \Gamma_H^{-1} t}{2(1 - 2\lambda)} \right\} =: f(\lambda, \varepsilon).
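For a concrete dimension, the expansion of f(\lambda, \varepsilon) can be carried out symbolically and its coefficients read off directly. A sketch assuming SymPy is available, with the symbol c abbreviating t' \Gamma_H^{-1} t (this abbreviation is ours):

```python
import sympy as sp

lam, eps, c = sp.symbols("lam eps c", positive=True)
N = 5  # any concrete dimension works for the check

# f(lam, eps) = (1 - 2*lam)^(-N/2) * exp(eps^2 * c / (2*(1 - 2*lam))).
f = (1 - 2 * lam)**sp.Rational(-N, 2) * sp.exp(eps**2 * c / (2 * (1 - 2 * lam)))

# Expand to second order in eps, then to second order in lam.
g = f.series(eps, 0, 3).removeO()
g = g.series(lam, 0, 3).removeO()
g = sp.expand(g)

coeff_lam2 = g.coeff(lam, 2).coeff(eps, 0)      # N(N+2)/2
coeff_lam_eps2 = g.coeff(lam, 1).coeff(eps, 2)  # (N+2)c/2
```

Multiplying these coefficients by the factorials from the exponential moment expansion recovers the second moments of the two quadratic forms.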
We are only interested in the coefficients of \lambda^2 and \lambda\varepsilon^2 in the above expression f(\lambda, \varepsilon). We have

    f(\lambda, \varepsilon) = \left( 1 + N\lambda + \frac{N(N+2)}{2} \lambda^2 + \cdots \right) \left[ 1 + \frac{\varepsilon^2 t' \Gamma_H^{-1} t}{2} (1 + 2\lambda + \cdots) + \cdots \right]
    = 1 + N\lambda + \frac{N(N+2)}{2} \lambda^2 + \cdots + \frac{N+2}{2} \lambda \varepsilon^2 \, t' \Gamma_H^{-1} t + \cdots.

Writing Q = (B^H_t)' \Gamma_H^{-1} B^H_t and L = t' \Gamma_H^{-1} B^H_t, so that f(\lambda, \varepsilon) = \sum_{j,k \ge 0} E[Q^j L^k] \lambda^j \varepsilon^k / (j! \, k!), and comparing the coefficients of \lambda^2 and \lambda\varepsilon^2, we have

    E\left[ ((B^H_t)' \Gamma_H^{-1} B^H_t)^2 \right] = N(N+2),    (2.5)
    E\left[ (B^H_t)' \Gamma_H^{-1} B^H_t \, (t' \Gamma_H^{-1} B^H_t)^2 \right] = (N+2)(t' \Gamma_H^{-1} t).

Hence, we have

    E\left[ (B^H_t)' \Gamma_H^{-1} B^H_t \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right] = \frac{(N+2)(t' \Gamma_H^{-1} t)}{t' \Gamma_H^{-1} t} = N + 2.    (2.6)

Using (2.3), (2.4), (2.5) and (2.6), we obtain

    Var[\hat\sigma^2] = E[(\hat\sigma^2)^2] - (E[\hat\sigma^2])^2 = \frac{\sigma^4}{N^2} \left[ N(N+2) - 2(N+2) + 3 - (N-1)^2 \right] = \frac{2(N-1)}{N^2} \sigma^4,    (2.7)

which converges to 0. This proves the theorem. \square

Now we can show the strong consistency of the MLEs \hat\mu and \hat\sigma^2 as N \to \infty.

THEOREM 2.3. The estimators \hat\mu and \hat\sigma^2 defined by (1.3) and (1.4), respectively, are strongly consistent; that is,

    \hat\mu \to \mu  a.s. as N \to \infty,    (2.8)
    \hat\sigma^2 \to \sigma^2  a.s. as N \to \infty.    (2.9)

PROOF. Let us prove the convergence of \hat\mu first, by means of the Borel-Cantelli lemma. To this end, we show that

    \sum_{N \ge 1} P\left( |\hat\mu - \mu| > \frac{1}{N^\varepsilon} \right) < \infty    (2.10)

for some \varepsilon > 0. Take 0 < \varepsilon < 1 - H. Then from Chebyshev's inequality and Nelson's hypercontractivity inequality [5], we have

    P\left( |\hat\mu - \mu| > \frac{1}{N^\varepsilon} \right) \le N^{p\varepsilon} E(|\hat\mu - \mu|^p) \le C_p N^{p\varepsilon} \left( E(|\hat\mu - \mu|^2) \right)^{p/2} \le C'_p \sigma^p h^{(H-1)p} N^{p\varepsilon + (H-1)p}.

For p sufficiently large we have p\varepsilon + (H-1)p < -1. Thus (2.10) is proved, which implies (2.8) by the Borel-Cantelli lemma. In the same way, we can show (2.9). \square

3. Asymptotics. Now we are interested in central limit type theorems for the estimators \hat\mu and \hat\sigma^2.
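The estimators (1.3)-(1.4), whose consistency was just established, are straightforward to evaluate numerically. A minimal sketch, assuming NumPy; the function names are ours, and solving linear systems with \Gamma_H instead of forming its inverse is our implementation choice:

```python
import numpy as np

def gamma_H(N, h, H):
    """Covariance matrix of (B^H_h, ..., B^H_{Nh}):
    Gamma_H = (h^{2H}/2) * [i^{2H} + j^{2H} - |i-j|^{2H}]_{i,j}."""
    i = np.arange(1, N + 1, dtype=float)
    I, J = np.meshgrid(i, i, indexing="ij")
    return 0.5 * h**(2 * H) * (I**(2 * H) + J**(2 * H) - np.abs(I - J)**(2 * H))

def mle_drift_fbm(Y, h, H):
    """Exact MLEs (1.3)-(1.4) of mu and sigma^2 from Y = (Y_h, ..., Y_{Nh})."""
    N = len(Y)
    t = h * np.arange(1, N + 1, dtype=float)
    G = gamma_H(N, h, H)
    Gi_Y = np.linalg.solve(G, Y)   # Gamma_H^{-1} Y
    Gi_t = np.linalg.solve(G, t)   # Gamma_H^{-1} t
    tGt = t @ Gi_t
    mu_hat = (t @ Gi_Y) / tGt
    sigma2_hat = ((Y @ Gi_Y) * tGt - (t @ Gi_Y)**2) / (N * tGt)
    return mu_hat, sigma2_hat
```

A cheap sanity check of any implementation: on a noiseless path Y = \mu t the formulas return \hat\mu = \mu and \hat\sigma^2 = 0 up to rounding, as substitution into (1.3)-(1.4) shows.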
First, from (2.1) it is easy to see that

    \sqrt{t' \Gamma_H^{-1} t} \, (\hat\mu - \mu) \xrightarrow{L} N(0, \sigma^2)

as N tends to infinity. We now study \hat\sigma^2.

THEOREM 3.1. We have

    \sqrt{\frac{N}{2}} \left( \hat\sigma^2 - \sigma^2 \right) \xrightarrow{L} N(0, \sigma^4)  as N \to \infty.    (3.1)

PROOF. To simplify notation we assume H > 1/2; the case H < 1/2 is similar. We define

    G_N = \sqrt{\frac{N}{2}} (\hat\sigma^2 - \sigma^2) = \frac{\sigma^2}{\sqrt{2N}} \left[ (B^H_t)' \Gamma_H^{-1} B^H_t - \frac{(t' \Gamma_H^{-1} B^H_t)^2}{t' \Gamma_H^{-1} t} \right] - \sqrt{\frac{N}{2}} \sigma^2.

From (2.7) it is obvious that E[G_N^2] converges to \sigma^4. Thus, by Theorem 4 of [8], to show (3.1) it suffices to show that \|DG_N\|^2_{\mathcal{H}} converges in L^2(\Omega) to a constant. First, using the definition of the Malliavin derivative, we obtain

    D_s G_N = \sqrt{\frac{2}{N}} \sigma^2 \left[ (D_s B^H_t)' \Gamma_H^{-1} B^H_t - \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_s B^H_t}{t' \Gamma_H^{-1} t} \right],

where D_s (B^H_t)' = (1_{[0,h]}(s), 1_{[0,2h]}(s), \dots, 1_{[0,Nh]}(s)). Therefore, with T = Nh and \alpha_H = H(2H-1), we have

    \|DG_N\|^2_{\mathcal{H}} = \frac{2\sigma^4}{N} \alpha_H \int_0^T \int_0^T |u - s|^{2H-2} \left[ (D_s B^H_t)' \Gamma_H^{-1} B^H_t - \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_s B^H_t}{t' \Gamma_H^{-1} t} \right] \left[ (D_u B^H_t)' \Gamma_H^{-1} B^H_t - \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_u B^H_t}{t' \Gamma_H^{-1} t} \right] du \, ds

    = \frac{2\sigma^4}{N} \alpha_H \int_0^T \int_0^T |u - s|^{2H-2} \left[ (D_s B^H_t)' \Gamma_H^{-1} B^H_t \cdot (D_u B^H_t)' \Gamma_H^{-1} B^H_t - 2 (D_s B^H_t)' \Gamma_H^{-1} B^H_t \cdot \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_u B^H_t}{t' \Gamma_H^{-1} t} + \frac{(t' \Gamma_H^{-1} B^H_t)^2 \cdot t' \Gamma_H^{-1} D_s B^H_t \cdot t' \Gamma_H^{-1} D_u B^H_t}{(t' \Gamma_H^{-1} t)^2} \right] du \, ds

    = 2\sigma^4 \left[ A^{(1)}_T - 2 A^{(2)}_T + A^{(3)}_T \right].

Since both (D_s B^H_t)' \Gamma_H^{-1} B^H_t and (D_u B^H_t)' \Gamma_H^{-1} B^H_t are Gaussian random variables, we can write

    E\left( |A^{(1)}_T - E A^{(1)}_T|^2 \right)
    = \frac{2 \alpha_H^2}{N^2} \int_{[0,T]^4} E\left[ (D_s B^H_t)' \Gamma_H^{-1} B^H_t \cdot (D_r B^H_t)' \Gamma_H^{-1} B^H_t \right] E\left[ (D_u B^H_t)' \Gamma_H^{-1} B^H_t \cdot (D_v B^H_t)' \Gamma_H^{-1} B^H_t \right] |s - u|^{2H-2} |r - v|^{2H-2} \, ds \, dr \, du \, dv
    = \frac{2 \alpha_H^2}{N^2} \int_{[0,T]^4} \left[ (D_s B^H_t)' \Gamma_H^{-1} D_r B^H_t \cdot (D_u B^H_t)' \Gamma_H^{-1} D_v B^H_t \right] |s - u|^{2H-2} |r - v|^{2H-2} \, ds \, dr \, du \, dv.
Let \Gamma_H^{-1} = (\Gamma^{-1}_{ij})_{i,j=1,\dots,N}, \Gamma_H = (\Gamma_{ij})_{i,j=1,\dots,N}, and let \delta_{lk} be the Kronecker symbol. We shall use the identities

    \alpha_H \int_0^{ih} \int_0^{i'h} |s - u|^{2H-2} \, ds \, du = \Gamma_{ii'}  and  \sum_{j=1}^N \Gamma^{-1}_{ij} \Gamma_{ji'} = \delta_{ii'}.

Then we have

    E\left( |A^{(1)}_T - E A^{(1)}_T|^2 \right)
    = \frac{2}{N^2} \int_{[0,T]^4} \sum_{i,j=1}^N \sum_{i',j'=1}^N 1_{[0,ih]}(s) \Gamma^{-1}_{ij} 1_{[0,jh]}(r) \cdot 1_{[0,i'h]}(u) \Gamma^{-1}_{i'j'} 1_{[0,j'h]}(v) \cdot \alpha_H |s - u|^{2H-2} \, \alpha_H |r - v|^{2H-2} \, ds \, dr \, du \, dv
    = \frac{2}{N^2} \sum_{i,j=1}^N \sum_{i',j'=1}^N \Gamma^{-1}_{ij} \Gamma^{-1}_{i'j'} \Gamma_{ii'} \Gamma_{jj'}
    = \frac{2}{N^2} \sum_{i,j'=1}^N \delta^2_{ij'} = \frac{2}{N},

which converges to 0 as N \to \infty. Now we deal with A^{(2)}_T:

    E\left( |A^{(2)}_T - E A^{(2)}_T|^2 \right)
    = \frac{2 \alpha_H^2}{N^2} \int_{[0,T]^4} E\left[ (D_s B^H_t)' \Gamma_H^{-1} B^H_t \cdot \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_u B^H_t}{t' \Gamma_H^{-1} t} \right] E\left[ (D_r B^H_t)' \Gamma_H^{-1} B^H_t \cdot \frac{t' \Gamma_H^{-1} B^H_t \cdot t' \Gamma_H^{-1} D_v B^H_t}{t' \Gamma_H^{-1} t} \right] |s - v|^{2H-2} |u - r|^{2H-2} \, dv \, ds \, du \, dr
    = \frac{2 \alpha_H^2}{N^2} \int_{[0,T]^4} \frac{(D_s B^H_t)' \Gamma_H^{-1} t \cdot t' \Gamma_H^{-1} D_u B^H_t}{t' \Gamma_H^{-1} t} \cdot \frac{(D_r B^H_t)' \Gamma_H^{-1} t \cdot t' \Gamma_H^{-1} D_v B^H_t}{t' \Gamma_H^{-1} t} |s - v|^{2H-2} |u - r|^{2H-2} \, dv \, ds \, du \, dr
    = \frac{2}{N^2} \int_{[0,T]^4} \frac{\sum_{i,j=1}^N \sum_{i',j'=1}^N 1_{[0,ih]}(s) \Gamma^{-1}_{ij} \, jh \cdot i'h \, \Gamma^{-1}_{i'j'} 1_{[0,j'h]}(u)}{t' \Gamma_H^{-1} t} \cdot \frac{\sum_{k,l=1}^N \sum_{k',l'=1}^N 1_{[0,kh]}(r) \Gamma^{-1}_{kl} \, lh \cdot k'h \, \Gamma^{-1}_{k'l'} 1_{[0,l'h]}(v)}{t' \Gamma_H^{-1} t} \cdot \alpha_H |s - v|^{2H-2} \, \alpha_H |u - r|^{2H-2} \, dv \, ds \, du \, dr
    = \frac{2}{N^2} \sum \frac{\Gamma^{-1}_{ij} \, jh \cdot \Gamma^{-1}_{i'j'} \, i'h \cdot \Gamma^{-1}_{kl} \, lh \cdot \Gamma^{-1}_{k'l'} \, k'h \cdot \Gamma_{il'} \Gamma_{j'k}}{(t' \Gamma_H^{-1} t)^2},

where the summation is over 1 \le i, j, i', j', k, l, k', l' \le N. Summing first over 1 \le i, l' \le N and then over 1 \le j', k \le N, we have

    E\left( |A^{(2)}_T - E A^{(2)}_T|^2 \right) = \frac{2}{N^2} \sum_{j,l,k',i'=1}^N \frac{jh \cdot lh \cdot k'h \cdot i'h \cdot \Gamma^{-1}_{k'j} \Gamma^{-1}_{i'l}}{(t' \Gamma_H^{-1} t)^2} = \frac{2}{N^2},

which converges to 0 as N \to \infty.
As for A^{(3)}_T, we have

    E\left( |A^{(3)}_T - E A^{(3)}_T|^2 \right)
    = \frac{\alpha_H^2}{N^2} \int_{[0,T]^4} \frac{t' \Gamma_H^{-1} D_s B^H_t \cdot t' \Gamma_H^{-1} D_{s'} B^H_t}{(t' \Gamma_H^{-1} t)^2} \cdot \frac{t' \Gamma_H^{-1} D_u B^H_t \cdot t' \Gamma_H^{-1} D_{u'} B^H_t}{(t' \Gamma_H^{-1} t)^2} \cdot \mathrm{Var}\left[ (t' \Gamma_H^{-1} B^H_t)^2 \right] |s - u|^{2H-2} |s' - u'|^{2H-2} \, ds \, du \, ds' \, du',

and, since \mathrm{Var}[(t' \Gamma_H^{-1} B^H_t)^2] = 2 (t' \Gamma_H^{-1} t)^2,

    = \frac{2 \alpha_H^2}{N^2} \left[ \int_{[0,T]^2} \frac{t' \Gamma_H^{-1} D_s B^H_t \cdot t' \Gamma_H^{-1} D_u B^H_t}{t' \Gamma_H^{-1} t} |s - u|^{2H-2} \, ds \, du \right]^2
    = \frac{2}{N^2} \left[ \sum_{i,j=1}^N \sum_{i',j'=1}^N \frac{ih \, \Gamma^{-1}_{ij} \cdot i'h \, \Gamma^{-1}_{i'j'}}{t' \Gamma_H^{-1} t} \Gamma_{jj'} \right]^2 = \frac{2}{N^2},

which converges to 0 as N \to \infty. Since (a + b + c)^2 \le 3(a^2 + b^2 + c^2), we have

    E\left( \|DG_N\|^2_{\mathcal{H}} - E\|DG_N\|^2_{\mathcal{H}} \right)^2 \le C \left[ E(A^{(1)}_T - E A^{(1)}_T)^2 + E(A^{(2)}_T - E A^{(2)}_T)^2 + E(A^{(3)}_T - E A^{(3)}_T)^2 \right] \to 0.

This completes the proof of the theorem. \square

4. Simulation. This section contains numerical simulations of the estimators obtained in this paper. The fractional Brownian motions are simulated by Paxson's method [10].

TABLE 1
The means and standard deviations of the estimators (\mu = 0.7880, \sigma^2 = 0.8116)

                   H = 0.25              H = 0.45              H = 0.55              H = 0.75
              \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2
Mean           0.7862   0.8152        0.7884   0.8153        0.7911   0.8126        0.7678   0.7910
Std. dev.      0.0116   0.0830        0.0112   0.0937        0.0514   0.0692        0.0974   0.0736

TABLE 2
The means and standard deviations of the estimators (\mu = 1.5880, \sigma^2 = 1.8116)

                   H = 0.25              H = 0.45              H = 0.55              H = 0.75
              \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2
Mean           1.5863   1.8694        1.5882   1.8719        1.5961   1.8647        1.5925   1.7864
Std. dev.      0.0148   0.1724        0.0456   0.1786        0.0710   0.1567        0.1879   0.1644

TABLE 3
The means and standard deviations of the estimators (\mu = 3.5880, \sigma^2 = 5.8116)

                   H = 0.25              H = 0.45              H = 0.55              H = 0.75
              \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2  \hat\mu  \hat\sigma^2
Mean           3.5861   5.8133        3.5810   5.8192        3.5837   5.8229        3.5834   5.8346
Std. dev.      0.0314   0.1648        0.0792   0.1737        0.0905   0.1031        0.0526   0.1026

From these numerical computations we see that the estimators perform well both for H > 1/2 and for H < 1/2.

Acknowledgements. We thank David Nualart for helpful discussions.

REFERENCES

[1] Beran, J. (1994). Statistics for Long-Memory Processes. Chapman and Hall, New York. MR1304490
[2] Biagini, F., Hu, Y., Øksendal, B. and Zhang, T. (2008). Stochastic Calculus for Fractional Brownian Motion and Applications. Springer, New York. MR2387368
[3] Fox, R. and Taqqu, M. S. (1986). Large-sample properties of parameter estimates for strongly dependent stationary Gaussian time series. Ann. Statist. 14 517-532. MR0840512
[4] Hannan, E. J. (1973). The asymptotic theory of linear time-series models. J. Appl. Probability 10 130-145. MR0365960
[5] Hu, Y. (2000). A unified approach to several inequalities for Gaussian and diffusion measures. Séminaire de Probabilités XXXIV. Lecture Notes in Math. 1729 329-335. Springer, Berlin. MR1768072
[6] Hu, Y. and Nualart, D. (2009). Parameter estimation for fractional Ornstein-Uhlenbeck processes. Preprint.
[7] Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations, 3rd ed. Johns Hopkins University Press, Baltimore and London. MR1417720
[8] Nualart, D. and Ortiz, S. (2008). Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stochastic Processes Appl. 118 614-628. MR2394845
[9] Palma, W. (2007). Long-Memory Time Series: Theory and Methods. Wiley-Interscience, Hoboken, NJ.
[10] Paxson, V. (1997). Fast, approximate synthesis of fractional Gaussian noise for generating self-similar network traffic. Computer Communications Review 27 5-18.
[11] Privault, N. and Réveillac, A. (2008). Stein estimation for the drift of Gaussian processes using the Malliavin calculus. Ann. Statist. 36 2531-2550. MR2458197

Yaozhong Hu
Department of Mathematics
University of Kansas, 405 Snow Hall
Lawrence, Kansas, USA
E-mail: hu@math.ku.edu

Weilin Xiao and Weiguo Zhang
School of Business Administration
South China University of Technology
Guangzhou, China
E-mail: xiao@math.ku.edu