Conditional mode regression: Application to functional time series prediction

We consider $\alpha$-mixing observations and deal with the estimation of the conditional mode of a scalar response variable $Y$ given a random variable $X$ taking values in a semi-metric space. We provide a convergence rate in $L^p$ norm of the estimator. A useful and typical application to functional time series prediction is given.

Authors: Sophie Dabo-Niang, Ali Laksaci

Electronic Journal of Statistics
ISSN: 1935-7524

Conditional mode regression: Application to functional time series prediction

Sophie Dabo-Niang¹ and Ali Laksaci*²

¹ Labo. GREMARS, Maison de Recherche, Univ. Lille 3, BP 60149, 59653 Villeneuve d'Ascq cedex, Lille, France. e-mail: sophiedabo@univ-lille3.fr
² Département de Mathématiques, Univ. Djillali Liabès, BP 89, 22000 Sidi Bel Abbès, Algérie. e-mail: alilak@yahoo.fr
* Corresponding author.

Abstract: We consider $\alpha$-mixing observations and deal with the estimation of the conditional mode of a scalar response variable $Y$ given a random variable $X$ taking values in a semi-metric space. We provide a convergence rate in $L^p$ norm of the estimator. A useful and typical application to functional time series prediction is given.

AMS 2000 subject classifications: Primary 62G05, 62G08; secondary 62G20.
Keywords and phrases: Kernel estimation, Conditional mode, Functional random variables, Semi-metric space, Small balls probability.

imsart-ejs ver. 2008/01/24, file: ejs_2008_347.tex, date: May 29, 2018

1. Introduction

Let us introduce $n$ pairs of random variables $(X_i, Y_i)_{i=1,\dots,n}$ that we suppose drawn from the pair $(X, Y)$, valued in $\mathcal{F} \times \mathbb{R}$, where $\mathcal{F}$ is a semi-metric space. Let $d$ denote the semi-metric. Assume that there exists a regular version of the conditional probability of $Y$ given $X$, which is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ and has a bounded density. Assume that for a given $x$ there is some compact subset $S := (\alpha_x, \beta_x)$ such that the conditional density of $Y$ given $X = x$ has a unique mode $\theta(x)$ on $S$. In the remainder of the paper, $x$ is fixed in $\mathcal{F}$ and $N_x$ denotes a neighborhood of $x$. Let $f^x$ (resp.
$f^{x(j)}$) be the conditional density (resp. the $j$th-order derivative of the conditional density) of the variable $Y$ given $X = x$. We define the kernel estimator $\widehat{f}^x$ of $f^x$ as follows:

$$\widehat{f}^x(y) = \frac{h_H^{-1} \sum_{i=1}^n K(h_K^{-1} d(x, X_i))\, H(h_H^{-1}(y - Y_i))}{\sum_{i=1}^n K(h_K^{-1} d(x, X_i))}, \quad \forall y \in \mathbb{R},$$

with the convention $0/0 = 0$. The functions $K$ and $H$ are kernels, and $h_K = h_{K,n}$ (resp. $h_H = h_{H,n}$) is a sequence of positive real numbers. Note that a similar estimate was already introduced in the special case where $X$ is a real random variable by many authors, Rosenblatt (1969) and Youndjé (1996) among others. For the functional case, see Ferraty et al. (2006a). A natural and usual estimator of $\theta(x)$, denoted $\widehat{\theta}(x)$, is given by:

$$\widehat{\theta}(x) = \arg\sup_{y \in S} \widehat{f}^x(y). \quad (1)$$

Note that this estimate $\widehat{\theta}(x)$ is not necessarily unique, so the remainder of the paper concerns any value $\widehat{\theta}(x)$ satisfying (1). The main goal of this paper is to study the nonparametric estimate $\widehat{\theta}(x)$ of $\theta(x)$ when the explanatory variable $X$ is valued in the space $\mathcal{F}$, of eventually infinite dimension, and when the observations $(Y_i, X_i)_{i \in \mathbb{N}}$ are strongly mixing. The motivation for this mode regression model is its interest in some nonparametric estimation problems where mode regression provides better estimations than classical mean regression (see for instance Collomb et al. (1987), Quintela & Vieu (1997), Ould-Saïd (1997), Berlinet et al. (1998), or Louani & Ould-Saïd (1999), for the multivariate case). Currently, the progress of informatics tools permits the recovery of increasingly bulky data. These large data sets are available essentially through real-time monitoring, and computers can manage such databases. The object of statistical study can then be curves (consecutive discrete recordings are aggregated and viewed as sampled values of a random curve), not numbers or vectors.
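To make the estimator concrete, here is a minimal numerical sketch of $\widehat{f}^x$ and $\widehat{\theta}(x)$. The box kernel for $K$, the Gaussian kernel for $H$, the grid search over $S$, and all function names are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def conditional_mode(x, X, Y, d, h_K, h_H, grid):
    """Kernel estimate of the conditional mode theta(x) = argmax_y f^x(y).

    x    : a fixed point of the semi-metric space F
    X, Y : the observed pairs (X_i, Y_i); X may be any sequence of "curves"
    d    : semi-metric d(x, X_i) between elements of F
    grid : grid of candidate y values covering S = (alpha_x, beta_x)
    """
    dist = np.array([d(x, Xi) for Xi in X])
    # K with support (0, 1) and 0 < C' < K < C, cf. (H6): here a box kernel.
    Kvals = ((dist > 0) & (dist < h_K)).astype(float)
    denom = Kvals.sum()
    if denom == 0.0:                       # convention 0/0 = 0
        return grid[0]
    # f_hat^x(y) = sum_i K_i H((y - Y_i)/h_H) / (h_H sum_i K_i), Gaussian H
    U = (grid[:, None] - np.asarray(Y)[None, :]) / h_H
    Hvals = np.exp(-0.5 * U ** 2) / np.sqrt(2.0 * np.pi)
    f_hat = (Hvals * Kvals[None, :]).sum(axis=1) / (h_H * denom)
    return grid[np.argmax(f_hat)]
```

With scalar "curves", $d(a, b) = |a - b|$ and $Y_i$ a deterministic function of $X_i$, the estimated mode tracks that function; in a genuinely functional setting only the choice of $d$ changes.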
Functional data analysis (FDA) (see Bosq 2000, Ferraty and Vieu 2006, Ramsay and Silverman 1997, 2002) can help to analyze such high-dimensional data sets. The statistical problems involved in the modelization of functional random variables have received increasing interest in the recent literature (see for example Dabo-Niang 2002, Ferraty & Vieu 2004, Dabo-Niang & Rhomari 2004, Masry 2005 for the nonparametric context). In this functional area, the first results concerning conditional mode estimation were obtained by Ferraty et al. (2006a). They established the almost complete convergence of the kernel estimator in the i.i.d. case. This last result has been extended to the dependent case by Ferraty et al. (2005). Ezzahrioui and Ould-Saïd (2006a, 2006b) have studied the asymptotic normality of the kernel estimator of the conditional mode for both the i.i.d. and the strong mixing cases. The monograph of Ferraty & Vieu (2006b) presents an important collection of statistical tools for nonparametric prediction of functional variables. Recently, Dabo-Niang & Laksaci (2007) stated the convergence in $L^p$ norm of the conditional mode function in the independent case. In this paper, we consider the case where the data are both dependent and of functional nature. We prove the $p$-integrated consistency by giving upper bounds for the estimation error. We show how our results can be applied to the prediction of functional time series, by cutting the past of the time series into continuous paths. As an application, we applied our method to some environmental data. The paper is organized as follows: the following section is devoted to fixing notations and hypotheses. We state our results in Section 3.
Section 4 is devoted to an application to a time series prediction problem.

2. Notation and Assumptions

We begin by recalling the definition of the strong mixing property. For this we introduce the following notation. Let $\mathcal{F}_i^k(Z)$ denote the $\sigma$-algebra generated by $\{Z_j,\ i \le j \le k\}$.

Definition 1. Let $\{Z_i,\ i = 1, 2, \dots\}$ denote a sequence of rv's. Given a positive integer $n$, set
$$\alpha(n) = \sup\left\{ |\mathbb{P}(A \cap B) - \mathbb{P}(A)\,\mathbb{P}(B)| : A \in \mathcal{F}_1^k(Z) \text{ and } B \in \mathcal{F}_{k+n}^{\infty}(Z),\ k \in \mathbb{N}^* \right\}.$$
The sequence is said to be $\alpha$-mixing (strong mixing) if the mixing coefficient $\alpha(n) \to 0$ as $n \to \infty$.

There exist many processes fulfilling the strong mixing property. We quote, here, the usual ARMA processes, which are geometrically strongly mixing, i.e., there exist $\rho \in (0, 1)$ and $a > 0$ such that, for any $n \ge 1$, $\alpha(n) \le a\rho^n$ (see, e.g., Jones (1978)). The threshold models, the EXPAR models (see Ozaki (1979)), the simple ARCH models (see Engle (1982)), their GARCH extension (see Bollerslev (1986)) and the bilinear Markovian models are geometrically strongly mixing under some general ergodicity conditions.

Throughout the paper, when no confusion is possible, we will denote by $C$ or $C'$ some strictly positive generic constants, and we will use the notation $B(x, h) = \{x' \in \mathcal{F} : d(x', x) < h\}$. Our nonparametric model will be quite general in the sense that we will just need the following assumptions:

(H1) $\mathbb{P}(X \in B(x, r)) = \phi_x(r) > 0$.

(H2) $(X_i, Y_i)_{i \in \mathbb{N}}$ is an $\alpha$-mixing sequence whose coefficients satisfy: $\exists a > 0$, $\exists c > 0$ : $\forall n \in \mathbb{N}$, $\alpha(n) \le c\, n^{-a}$.

(H3) $\forall i \ne j$, $\displaystyle 0 < \sup_{i \ne j} \mathbb{P}\left[(X_i, X_j) \in B(x, h) \times B(x, h)\right] = O\left( \frac{(\phi_x(h))^{(a+1)/a}}{n^{1/a}} \right).$
(H4) $\forall (y_1, y_2) \in S \times S$, $\forall (x_1, x_2) \in N_x \times N_x$,
$$|f^{x_1}(y_1) - f^{x_2}(y_2)| \le C\left( d(x_1, x_2)^{b_1} + |y_1 - y_2|^{b_2} \right), \quad b_1 > 0,\ b_2 > 0.$$

(H5) $f^x$ is $j$-times continuously differentiable with respect to $y$ on $S$, such that $f^{x(l)}(\theta(x)) = 0$ for $1 \le l < j$, and $|f^{x(j)}(y)| < \infty$ for all $y \in S$.

(H6) $K$ is a function with support $(0, 1)$ such that $0 < C' < K(t) < C < \infty$.

(H7) $H$ is a function which satisfies:
(i) there exists an integrable function $g$ such that $|H(t) - H(s)| \le C\, g(|t - s|)$;
(ii) $\int |t|^{b_2} H(t)\, dt < \infty$ and $\int H(t)\, dt = 1$.

The concentration property (H1) is less restrictive than the fractal condition introduced by Gasser et al. (1998) and is known to hold for several continuous-time processes (see for instance Bogachev (1999) for a Gaussian measure, Li & Shao (2001) for a general Gaussian process, and Dabo-Niang & Laksaci (2007) for more discussion). In order to establish the same convergence rate as in the i.i.d. case (see Dabo-Niang & Laksaci (2006)), we reinforce the mixing by introducing (H2) and (H3). Note that we can establish the convergence results without these mixing assumptions; however, the convergence rate expression will be perturbed: it will contain the covariance term of the observations. Assumptions (H4) and (H5) are regularity conditions which characterize the functional space of our model and are needed to evaluate the bias term in our asymptotic developments. It should be noted that the flatness of the function $f^x$ around the mode $\theta(x)$, controlled by the number of vanishing derivatives at $\theta(x)$ (assumption (H5)), has a great influence on the asymptotic rates of our estimate (see Theorem 1).
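Assumption (H1) involves the small-ball probability $\phi_x(h) = \mathbb{P}(X \in B(x, h))$, which the paper treats analytically. Purely as an illustration (this is not something the paper does), one can look at its empirical counterpart in a sample of curves:

```python
import numpy as np

def empirical_small_ball(x, sample, d, h):
    """Empirical counterpart of phi_x(h) = P(d(X, x) < h) from (H1).

    Illustrative only: `d` is any semi-metric and `sample` a collection of
    observed curves X_1, ..., X_n; the paper never estimates phi_x itself.
    """
    dist = np.array([d(x, Xi) for Xi in sample])
    return float(np.mean(dist < h))
```

Plotting $h \mapsto$ `empirical_small_ball(x, sample, d, h)` for decreasing $h$ gives a feel for how fast $\phi_x(h)$ decays, which is what drives the dispersion term $(n\,\phi_x(h_K))^{-1/(2j)}$ in Theorem 1.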
Assumptions (H6) and (H7) are standard technical conditions in nonparametric estimation. They are imposed for the sake of simplicity and brevity of the proofs.

3. Main results

In this section, we establish the $p$-mean rate of convergence of the estimate $\widehat{\theta}(x)$ to $\theta(x)$.

Theorem 1. Under the hypotheses (H1)-(H7), we have, for all $p \in [j, \infty)$,
$$\left( \mathbb{E}\,|\widehat{\theta}(x) - \theta(x)|^p \right)^{1/p} = O\left( h_K^{b_1/j} + h_H^{b_2/j} \right) + O\left( \left( \frac{1}{n\,\phi_x(h_K)} \right)^{\frac{1}{2j}} \right),$$
whenever
$$\exists \eta > 0, \quad C\, n^{\frac{2+p-a}{a+1-p} + \eta} \le \phi_x(h_K) \le C'\, n^{\frac{1}{1-a}} \quad (2)$$
holds, with
$$a > \max\left( p + 1,\ \frac{4 + p + \sqrt{(4+p)^2 - 4 - 8p}}{2} \right).$$

Proof. Let us write the following Taylor expansion of the function $f^x$ under (H5):
$$f^x(\widehat{\theta}(x)) = f^x(\theta(x)) + \frac{1}{j!}\, f^{x(j)}(\theta^*)\, (\theta(x) - \widehat{\theta}(x))^j,$$
for some $\theta^*$ between $\theta(x)$ and $\widehat{\theta}(x)$. By simple analytic arguments, we can show that
$$|\widehat{\theta}(x) - \theta(x)|^j \le \frac{j!}{\min_{y \in (\alpha_x, \beta_x)} f^{x(j)}(y)}\, \sup_{y \in (\alpha_x, \beta_x)} |\widehat{f}^x(y) - f^x(y)|.$$
The Minkowski inequality permits us to write, for all $p \ge j$ (where $\|U\|_p := (\mathbb{E}|U|^p)^{1/p}$ denotes the $L^p$ norm):
$$\left\| \sup_{y \in (\alpha_x, \beta_x)} |\widehat{f}^x(y) - f^x(y)| \right\|_p \le \left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in (\alpha_x, \beta_x)} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right\|_p + \left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in (\alpha_x, \beta_x)} |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \right\|_p + \sup_{y \in (\alpha_x, \beta_x)} |f^x(y)| \left( \mathbb{P}\left( \sum_{i=1}^n W_{ni}(x) = 0 \right) \right)^{1/p},$$
where
$$W_{ni}(x) = \frac{K(h_K^{-1} d(x, X_i))}{\sum_{i=1}^n K(h_K^{-1} d(x, X_i))} \quad \text{and} \quad H_i(y) = h_H^{-1} H(h_H^{-1}(y - Y_i)).$$
Then Theorem 1 is a consequence of the following lemmas.

Lemma 1. We get, under the hypotheses (H1), (H4), (H6) and (H7(ii)):
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in \mathbb{R}} |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \right\|_p = O(h_H^{b_2}) + O(h_K^{b_1}).$$
Lemma 2. Under the hypotheses (H1)-(H3), (H6) and (H7(i)), we have, for all $p \in [j, \infty)$,
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in \mathbb{R}} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right\|_p = O\left( \left( \frac{1}{n\,\phi_x(h_K)} \right)^{\frac{1}{2}} \right),$$
whenever
$$\exists \eta > 0, \quad C\, n^{\frac{2+p-a}{a+1-p} + \eta} \le \phi_x(h_K) \le C'\, n^{\frac{1}{1-a}} \quad (3)$$
holds, with $a > \max\left( p + 1,\ \left( 4 + p + \sqrt{(4+p)^2 - 4 - 8p} \right)/2 \right)$.

Lemma 3. Under the conditions of Lemma 2, we have:
$$\left( \mathbb{P}\left( \sum_{i=1}^n W_{ni}(x) = 0 \right) \right)^{1/p} = o\left( \left( \frac{1}{n\,\phi_x(h_K)} \right)^{\frac{1}{2}} \right).$$

4. Application

The most important application of conditional mode estimation when the observations are dependent and of functional nature is the prediction of future values of some process by taking into account the whole past continuously. Indeed, let $(Z_t)_{t \in [0, b[}$ be a continuous-time real-valued random process. From $Z_t$ we may construct $N$ functional random variables $(X_i)_{i=1,\dots,N}$ defined by:
$$\forall t \in [0, b[, \quad X_i(t) = Z_{N^{-1}((i-1)b + t)},$$
and a real characteristic $Y_i = G(X_{i+1})$. The above consistency result permits to predict the characteristic $Y_N$ by the conditional mode estimate $\widehat{Y} = \widehat{\theta}(X_N)$, obtained by using the $N-1$ pairs of r.v. $(X_i, Y_i)_{i=1,\dots,N-1}$.

5. Appendix

Proof of Lemma 1. By the definition of the $L^p$ norm, we have
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in (\alpha_x, \beta_x)} |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \right\|_p = \mathbb{E}^{1/p}\left[ \sum_{i=1}^n W_{ni}(x)\, 1\!\mathrm{I}_{B(x, h_K)}(X_i) \sup_{y \in (\alpha_x, \beta_x)} |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \right]^p,$$
where $1\!\mathrm{I}$ is the indicator function.
If we consider the change of variables $t = \frac{y - z}{h_H}$, then we get, under (H7(ii)) and (H4),
$$|\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \le \int H(t)\, \left| f^{X_i}(y - t h_H) - f^x(y) \right| dt,$$
$$1\!\mathrm{I}_{B(x, h_K)}(X_i)\, |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \le C \int \left( |t h_H|^{b_2} + |h_K|^{b_1} \right) H(t)\, dt.$$
We deduce from $\sum_{i=1}^n W_{ni}(x) = 1$ that
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in (\alpha_x, \beta_x)} |\mathbb{E}(H_i(y) \mid X_i) - f^x(y)| \right\|_p \le C \int \left( |t h_H|^{b_2} + |h_K|^{b_1} \right) H(t)\, dt.$$
This proves the lemma.

Proof of Lemma 2. It is easy to see that
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in \mathbb{R}} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right\|_p \le C\, \mathbb{E}^{1/p}\left[ \left( \sup_j W_{nj} \right)^{p/2} \left( \sum_{i=1}^n W_{ni}^{1/2}(x) \sup_{y \in \mathbb{R}} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right)^p \right].$$
By (H7(i)), the definition of $H_i$ and the boundedness of the conditional density with respect to the two variables, we have, for all $i$:
$$|H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| = h_H^{-1} \left| H(h_H^{-1}(y - Y_i)) - \int H(h_H^{-1}(y - z))\, f^{X_i}(z)\, dz \right| \le h_H^{-1} \int \left| H(h_H^{-1}(y - Y_i)) - H(h_H^{-1}(y - z)) \right| f^{X_i}(z)\, dz \le C \int h_H^{-1}\, g\!\left( \left| h_H^{-1}(Y_i - z) \right| \right) dz.$$
It follows from the usual change of variables $h_H^{-1}(Y_i - z) = t$ that
$$|H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \le C \int g(|t|)\, dt.$$
Moreover, by the Cauchy-Schwarz inequality, we can write
$$\sum_{i=1}^n W_{ni}^{1/2}(x) \le \sqrt{n}.$$
So, we deduce that
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in (\alpha_x, \beta_x)} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right\|_p \le C \sqrt{n} \left( \int g(|t|)\, dt \right) \mathbb{E}^{1/p}\left( \sup_j W_{nj} \right)^{p/2}.$$
We first evaluate the quantity $\mathbb{E}^{1/p}\left( \sup_j W_{nj} \right)^{p/2}$.
For this, set $U = K(h_K^{-1} d(X_n, x))$, $u = \mathbb{E}(U)$ and $V = \sum_{j=1}^{n-1} K(h_K^{-1} d(X_j, x))$; it is clear that $C' \phi_x(h_K) \le u \le C \phi_x(h_K)$ and $\mathbb{E}(V) = (n-1)u$. If we consider the random variable $Z_{n-1} = \min(1, C/V)$, then, for all $j$,
$$W_{nj}(x) = K(h_K^{-1} d(X_j, x)) / (U + V) \le Z_{n-1}.$$
We have, for all $c > 0$,
$$Z_{n-1}^{p/2} = Z_{n-1}^{p/2}\, 1\!\mathrm{I}_{\{V < c\}} + Z_{n-1}^{p/2}\, 1\!\mathrm{I}_{\{V \ge c\}} \le 1\!\mathrm{I}_{\{V < c\}} + \left( \frac{C}{c} \right)^{p/2},$$
and we take $c = (n-1)u/2$, so that it remains to control $\mathbb{P}(V < (n-1)u/2) \le \mathbb{P}(|V - \mathbb{E}(V)| \ge (n-1)u/2)$. By an exponential inequality for strongly mixing sequences (see Rio (1999)), we have, for all $\lambda > 0$ and $r > 1$,
$$\mathbb{P}\left( \frac{1}{(n-1)\,\mathbb{E}K_1}\, |V - \mathbb{E}(V)| \ge 4\lambda \right) \le A_1 + A_2,$$
where $K_i = K(h_K^{-1} d(X_i, x))$,
$$A_1 = \left( 1 + \frac{\lambda^2 (n-1)^2 (\mathbb{E}K_1)^2}{r\, \mathrm{Var}(V)} \right)^{-r/2} \quad \text{and} \quad A_2 = \frac{4(n-1)}{r} \left( \frac{r}{\lambda\, (n-1)\, \mathbb{E}K_1} \right)^{a+1}.$$
Set $\lambda = (\lambda_0/4) \sqrt{\dfrac{\log(n-1)}{(n-1)\,\phi_x(h_K)}}$; we get
$$A_2 \le C\, (n-1)\, r^a\, (n-1)^{-(a+1)/2}\, \phi_x(h_K)^{-(a+1)/2}\, (\log(n-1))^{-(a+1)/2}.$$
By taking $r = O((\log(n-1))^2)$ and by using the left part of inequality (3), we can find $\eta' > 0$ such that
$$A_2 \le C\, n^{-\eta'}\, n^{-p/2}\, \phi_x(h_K)^{-p/2}. \quad (4)$$
For $A_1$, we must evaluate asymptotically the quantity
$$\mathrm{Var}(V) = \sum_{i,j=1}^{n-1} \mathrm{Cov}(K_i, K_j) := s_n^{2*} + (n-1)\, \mathrm{Var}(K_1), \quad \text{where} \quad s_n^{2*} = \sum_{i \ne j}^{n-1} \mathrm{Cov}(K_i, K_j).$$
In the sequel, we use techniques developed by Masry (1986) to give the asymptotic behavior of $s_n^{2*}$. Define the sets
$$S_1 = \{(i, j) : 1 \le i - j \le m_n\} \quad \text{and} \quad S_2 = \{(i, j) : m_n + 1 \le i - j \le n - 1\},$$
where the sequence $m_n$ is chosen such that $m_n \to \infty$. We denote by $J_{1,n}$ and $J_{2,n}$ the sums of the covariances over $S_1$ and $S_2$, respectively. Then
$$J_{1,n} = \sum_{S_1} |\mathrm{Cov}(K_i, K_j)| \le \sum_{S_1} |\mathbb{E} K_i K_j - \mathbb{E} K_i\, \mathbb{E} K_j|.$$
Because of (H1), (H3) and (H6), we can write
$$J_{1,n} \le C\, n\, m_n\, \phi_x(h_K) \left( \left( \frac{\phi_x(h_K)}{n} \right)^{1/a} + \phi_x(h_K) \right).$$
On the other hand, to study the sum over $S_2$, we use the Davydov-Rio inequality (see Rio (1999)) in the $L^\infty$ case.
This leads, for all $i \ne j$, to
$$|\mathrm{Cov}(K_i, K_j)| \le C\, \alpha(|i - j|),$$
and therefore we get
$$J_{2,n} = \sum_{S_2} |\mathrm{Cov}(K_i, K_j)| \le C\, n^2\, m_n^{-a}.$$
By choosing $m_n = \left( \frac{\phi_x(h_K)}{n} \right)^{-1/a}$ and using the right part of (3), we obtain
$$\sum_{i \ne j}^{n-1} \mathrm{Cov}(K_i, K_j) = O(n\, \phi_x(h_K)). \quad (5)$$
Now, for all $i = 1, \dots, n-1$, we can write $\mathrm{Var}(K_1) = \mathbb{E}(K_1^2) - (\mathbb{E}K_1)^2$. By using (H1) we get
$$\mathrm{Var}(K_1) \le C\left( \phi_x(h_K) + (\phi_x(h_K))^2 \right).$$
Finally, this last result combined with (5) leads directly to
$$\mathrm{Var}(V) = O((n-1)\, \phi_x(h_K)). \quad (6)$$
This allows us to deduce that
$$A_1 \le C \left( 1 + \frac{\lambda_0^2 \log(n-1)}{16 r} \right)^{-r/2} = C \exp\left( -\frac{r}{2} \log\left( 1 + \frac{\lambda_0^2 \log(n-1)}{16 r} \right) \right).$$
Choosing $r = C (\log(n-1))^2$ yields
$$A_1 \le C \exp\left( -\frac{\lambda_0^2 \log n}{32} \right) = C\, n^{-\lambda_0^2/32}.$$
As $p < \infty$, for $\lambda_0$ large enough,
$$A_1 \le C\, n^{-\lambda_0^2/32} \le C\, n^{-\nu}\, n^{-p/2}\, \phi_x(h_K)^{-p/2} \quad \text{for some } \nu > 0. \quad (7)$$
So, by combining the results (4) and (7), we get, for $\lambda_0$ large enough,
$$\mathbb{P}(V < (n-1)u/2) = O\left( \left( \frac{1}{n\, \phi_x(h_K)} \right)^{p/2} \right), \quad (8)$$
which implies that
$$\mathbb{E}^{1/p}(Z_{n-1}^{p/2}) = O\left( \left( \frac{1}{n\, \phi_x(h_K)} \right)^{1/2} \right).$$
So,
$$\left\| \sum_{i=1}^n W_{ni}(x) \sup_{y \in \mathbb{R}} |H_i(y) - \mathbb{E}(H_i(y) \mid X_i)| \right\|_p = O\left( \left( \frac{1}{n\, \phi_x(h_K)} \right)^{1/2} \right).$$

Proof of Lemma 3. We set
$$Z = \frac{1}{n\, \mathbb{E}K_1} \sum_{i=1}^n K_i;$$
then
$$\mathbb{P}\left( \sum_{i=1}^n W_{ni}(x) = 0 \right) = \mathbb{P}(Z = 0).$$
It is clear that, for all $\epsilon < 1$, we have
$$\mathbb{P}(Z = 0) \le \mathbb{P}(Z \le 1 - \epsilon) \le \mathbb{P}(|Z - 1| \ge \epsilon).$$
By following arguments similar to those involved in the proof of (8), we have
$$\mathbb{P}(|Z - 1| \ge \epsilon) = o\left( \left( \frac{1}{n\, \phi_x(h_K)} \right)^{p/2} \right).$$
This implies that
$$\left( \mathbb{P}\left( \sum_{i=1}^n W_{ni}(x) = 0 \right) \right)^{1/p} = o\left( \left( \frac{1}{n\, \phi_x(h_K)} \right)^{1/2} \right).$$
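Turning the prediction scheme of Section 4 into code gives the following discrete-time sketch. The sup-norm semi-metric, the box and Gaussian kernels, and the default characteristic $G$ (the last value of the next path) are illustrative assumptions of ours, not the paper's prescriptions:

```python
import numpy as np

def slice_into_curves(z, n_curves):
    """Cut one long trajectory (Z_t) into n_curves consecutive paths
    X_1, ..., X_N of equal length, as in Section 4 (discrete-time version)."""
    z = np.asarray(z, dtype=float)
    L = len(z) // n_curves
    return z[: L * n_curves].reshape(n_curves, L)

def mode_predict(z, n_curves, h_K, h_H, grid, G=None):
    """Predict Y_N = G(X_{N+1}) by theta_hat(X_N), using the N-1 pairs
    (X_i, Y_i = G(X_{i+1})), i = 1, ..., N-1.

    Assumes at least one past curve falls in the ball B(X_N, h_K).
    """
    if G is None:
        G = lambda path: path[-1]                 # illustrative characteristic
    X = slice_into_curves(z, n_curves)            # X_1, ..., X_N
    Y = np.array([G(X[i + 1]) for i in range(n_curves - 1)])
    dist = np.abs(X[:-1] - X[-1]).max(axis=1)     # sup-norm semi-metric d(X_i, X_N)
    Kvals = (dist < h_K).astype(float)            # box kernel on [0, h_K)
    # Unnormalized kernel conditional density of Y given X = X_N, Gaussian H
    U = (grid[:, None] - Y[None, :]) / h_H
    f_hat = (np.exp(-0.5 * U ** 2) * Kvals[None, :]).sum(axis=1)
    return grid[np.argmax(f_hat)]
```

On a series whose past paths resemble the current one, the predictor returns the most frequent continuation value among those similar paths, which is exactly the conditional-mode idea behind $\widehat{Y} = \widehat{\theta}(X_N)$.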
Acknowledgements: The authors thank Professors F. Ferraty and Ph. Vieu of the LSP, University Paul Sabatier, and N. Rhomari of the University of Oujda, for helpful discussions and comments.

References

Bogachev, V.I. (1999), Gaussian Measures. Math. Surveys and Monographs, 62, Amer. Math. Soc.

Bollerslev, T. (1986), Generalized autoregressive conditional heteroskedasticity. J. Econom., 31, 307-327.

Bosq, D. (2000), Linear Processes in Function Spaces: Theory and Applications. Lecture Notes in Statistics, 149, Springer.

Collomb, G., Härdle, W. and Hassani, S. (1987), A note on prediction via conditional mode estimation. J. Statist. Plann. and Inf., 15, 227-236.

Dabo-Niang, S. and Rhomari, N. (2003), Estimation non paramétrique de la régression avec variable explicative dans un espace métrique. C. R., Math., Acad. Sci. Paris, 336, No. 1, 75-80.

Dabo-Niang, S. and Rhomari, N. (2002), Nonparametric regression estimation when the regressor takes values in a metric space. Technical Report 2002-9, LSTA, Univ. Paris 6, 2001, http://www.ccr.jussieu.fr/lsta/R2002_9.pdf.

Dabo-Niang, S. and Laksaci, A. (2007), Estimation non paramétrique du mode conditionnel pour variable explicative fonctionnelle. C. R., Math., Acad. Sci. Paris, 344, 49-52.

Dabo-Niang, S. and Laksaci, A. (2006), Nonparametric estimation of conditional quantiles when the regressor is valued in a semi-metric space. Submitted.

Engle, R.F. (1982), Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica, 50, 987-1007.

Ezzahrioui, M. and Ould-Saïd, E. (2005), Asymptotic normality of nonparametric estimators of the conditional mode function for functional data. LMPA, August 2005 (preprint).
Ezzahrioui, M. and Ould-Saïd, E. (2006b), On the asymptotic properties of a nonparametric estimator of the conditional mode function for functional dependent data. LMPA, January 2006 (preprint).

Ferraty, F. and Vieu, Ph. (2004), Nonparametric models for functional data, with application in regression, time-series prediction and curve discrimination. J. Nonparametric Stat., 16, No. 1-2, 111-125.

Ferraty, F., Laksaci, A. and Vieu, P. (2005), Functional times series prediction via conditional mode. C. R., Math., Acad. Sci. Paris, 340, No. 5, 389-392.

Ferraty, F., Laksaci, A. and Vieu, P. (2006a), Estimating some characteristics of the conditional distribution in nonparametric functional models. Statist. Inf. for Stoch. Proc., 9, No. 2, 47-76.

Ferraty, F. and Vieu, Ph. (2006b), Nonparametric Functional Data Analysis. Springer-Verlag, New York.

Jones, D.A. (1978), Nonlinear autoregressive processes. Proc. Roy. Soc. London A, 360, 71-95.

Li, W.V. and Shao, Q.M. (2001), Gaussian processes: inequalities, small ball probabilities and applications. In Stochastic Processes: Theory and Methods, Ed. C.R. Rao and D. Shanbhag. Handbook of Statistics, 19, North-Holland, Amsterdam.

Louani, D. and Ould-Saïd, E. (1999), Asymptotic normality of kernel estimators of the conditional mode under strong mixing hypothesis. J. Nonparametric Stat., 11(4), 413-442.

Masry, E. (1986), Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans. Inform. Theory, 32, 254-267.

Masry, E. (2005), Nonparametric regression estimation for dependent functional data: asymptotic normality. Stochastic Process. Appl., 115, No. 1, 155-177.
Ozaki, T. (1979), Nonlinear time series models for nonlinear random vibrations. Technical report, Univ. of Manchester.

Ramsay, J.O. and Silverman, B.W. (1997), Functional Data Analysis. Springer, New York.

Ramsay, J.O. and Silverman, B.W. (2002), Applied Functional Data Analysis: Methods and Case Studies. Springer-Verlag, New York.

Rio, E. (1999), Théorie asymptotique des processus faiblement dépendants. Mathématiques & Applications, 31, Springer-SMAI.

Ould-Saïd, E. (1997), A note on ergodic processes prediction via estimation of the conditional mode function. Scand. J. Stat., 24, No. 2, 231-239.

Quintela del Rio, A. and Vieu, Ph. (1997), A nonparametric conditional mode estimate. Nonparametric Statistics, 8, 253-266.

Rosenblatt, M. (1969), Conditional probability density and regression estimators. In Multivariate Analysis II, Ed. P.R. Krishnaiah. Academic Press, New York and London.

Youndjé, E. (1993), Estimation non paramétrique de la densité conditionnelle par la méthode du noyau. Thèse de Doctorat, Université de Rouen.
