Additive Regression Model for Continuous Time Processes

Reading time: 6 minutes
...

📝 Original Info

  • Title: Additive Regression Model for Continuous Time Processes
  • ArXiv ID: 0706.1154
  • Date: 2007-06-11
  • Authors: Researchers from original ArXiv paper

📝 Abstract

In the setting of the additive regression model for continuous time processes, we establish the optimal uniform convergence rates and the optimal asymptotic quadratic error of the additive regression estimator. To build our estimate, we use the marginal integration method.

📄 Full Content

Multivariate regression function estimation is an important problem which has been extensively treated for discrete time processes. It is well known from (11) that additive regression models offer a solution to the curse of dimensionality in nonparametric multivariate regression estimation, which is characterized by a loss in the rate of convergence of the regression function estimator as the dimension of the covariates increases. When an additive model fits well, it allows one to attain the univariate rate. For continuous time processes, (2) obtained the optimal rate for the multivariate regression estimator, which is the same as in the i.i.d. case. He even proved that, for processes with irregular paths, it is possible to reach the parametric rate. This rate, called the superoptimal rate, does not depend on the dimension of the variables, but the required conditions on the processes are very strong. That is why it is relevant to study additive models as a remedy for the curse of dimensionality.
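To make the loss of rate concrete: under k-times differentiability, the classical minimax rate for full d-dimensional nonparametric regression is of order n^{-k/(2k+d)}, while an additive model attains the univariate rate n^{-k/(2k+1)}. A minimal numerical sketch (the sample size n and smoothness k below are hypothetical, and the rates quoted are the standard discrete-time ones, not taken from this paper):

```python
# Illustration (not from the paper): dimension dependence of nonparametric rates.
def full_rate(n, k, d):
    """Classical minimax rate n^(-k/(2k+d)) for d-dimensional regression."""
    return n ** (-k / (2 * k + d))

def additive_rate(n, k):
    """Univariate rate n^(-k/(2k+1)) attained by a well-specified additive model."""
    return n ** (-k / (2 * k + 1))

n, k = 10_000, 2  # hypothetical sample size and smoothness
for d in (1, 3, 10):
    print(f"d={d:2d}  full: {full_rate(n, k, d):.4f}  additive: {additive_rate(n, k):.4f}")
```

The printed values shrink much more slowly for the full estimator as d grows, while the additive rate is unaffected by the dimension.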

Let Z_t = (X_t, Y_t), t ∈ ℝ, be an ℝ^d × ℝ-valued measurable stochastic process defined on a probability space (Ω, 𝒜, P). Denote by ψ a given real measurable function. We consider the additive regression function associated with ψ(Y), defined by

$$m_\psi(x) = \mu + \sum_{l=1}^{d} m_l(x_l) := m_{\psi,\mathrm{add}}(x). \tag{1}$$

Let K_1, K_2, K_3, and K be kernels defined on ℝ, ℝ^{d−1}, ℝ^d, and ℝ^d, respectively. We denote by f_T the estimate of f, the density function of the covariate X (see (1)), that is,

where (h_T) is a positive real function. In estimating the regression function defined in (1), we use the following two estimators (see for example (3) and (5))
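The displayed estimators did not survive extraction. Under the standard kernel framework, the density and regression estimators are time-averaged kernel smoothers; a discretized sketch (the Epanechnikov kernel, the one-dimensional setting, and all data below are assumptions for illustration, not the paper's exact estimators):

```python
import numpy as np

# Hypothetical discretization of continuous-time kernel estimators: the time
# average (1/T) ∫_0^T ... dt is approximated over observed sample points.
def kernel_density(x, X, h, K):
    """Kernel estimate of the density f of X at point x (d = 1 here)."""
    return np.mean(K((x - X) / h)) / h

def nw_regression(x, X, Y, h, K, psi=lambda y: y):
    """Nadaraya-Watson-type estimate of m_psi(x) = E[psi(Y) | X = x]."""
    w = K((x - X) / h)
    return np.sum(w * psi(Y)) / np.sum(w)

# Epanechnikov kernel: continuous with compact support [-1, 1], as in (K.1).
K = lambda u: 0.75 * (1 - u**2) * (np.abs(u) <= 1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 5000)
Y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(5000)
print(nw_regression(0.5, X, Y, h=0.1, K=K))  # close to sin(0.5 * pi) = 1
```

Both estimates use a bandwidth h playing the role of (h_T) above; the boundedness of ψ in (C.1) is what keeps the numerator well behaved.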

and

where (h_{j,T}), j = 1, 2, are positive real functions. Let q_1, …, q_d be d density functions defined on ℝ. Set q(x) = ∏_{l=1}^{d} q_l(x_l) and q_{−l}(x_{−l}) = ∏_{j≠l} q_j(x_j). To estimate the additive components of the regression function, we use the marginal integration method (see (6) and (8)). We then obtain

in such a way that the following two equalities hold,

In view of (6) and (7), we note that η_l and m_l are equal up to an additive constant. Therefore, η_l is also an additive component, fulfilling a different identifiability condition.
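Marginal integration itself reduces to averaging the full regression surface over the nuisance directions with weight q_{−l}. A two-dimensional numerical sketch (the surface m, the weight q_2, and the grid are hypothetical; a known additive surface stands in for the pilot estimate):

```python
import numpy as np

# A true additive surface standing in for the pilot estimate of m(x1, x2).
mu = 2.0
m1 = lambda x: x**2
m2 = np.sin
m = lambda x1, x2: mu + m1(x1) + m2(x2)

# Integration grid and weight density q2 (uniform on [-1, 1], so q2 = 1/2).
grid = np.linspace(-1.0, 1.0, 4001)
dx = grid[1] - grid[0]
q2 = np.full_like(grid, 0.5)

def eta1(x1):
    """Marginal integration: eta_1(x1) = ∫ m(x1, u) q2(u) du (Riemann sum)."""
    return np.sum(m(x1, grid) * q2) * dx

# eta_1 recovers m1 up to the additive constant mu + ∫ m2 q2,
# matching the identifiability remark above.
print(eta1(0.5) - eta1(0.0))  # differences of eta_1 equal differences of m1
```

Since the constant ∫ m2 q2 cancels in differences, eta1(0.5) − eta1(0.0) equals m1(0.5) − m1(0.0) = 0.25 up to discretization error.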

From (4) and (5), a natural estimate of this l-th component is given by

from which we deduce the estimate m_{ψ,T,add} of the additive regression function,

Before stating our results, we introduce some additional notation and our assumptions. Let C_1, …, C_d be d compact intervals of ℝ and set C = C_1 × … × C_d. For every subset E of ℝ^q, q ≥ 1, and any δ > 0, introduce the δ-neighborhood E_δ of E, namely E_δ = {x : inf_{y∈E} ‖x − y‖_{ℝ^q} < δ}, with ‖·‖_{ℝ^q} standing for the Euclidean norm on ℝ^q.
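As a quick illustration of the δ-neighborhood (the point set E below is a hypothetical finite stand-in for a compact set):

```python
import numpy as np

def in_delta_neighborhood(x, E, delta):
    """True iff inf_{y in E} ||x - y|| < delta, with E a finite point set."""
    E = np.atleast_2d(E)
    return np.min(np.linalg.norm(E - x, axis=1)) < delta

E = np.array([[0.0, 0.0], [1.0, 0.0]])  # finite stand-in for a compact set in R^2
print(in_delta_neighborhood(np.array([1.2, 0.0]), E, delta=0.3))  # True
print(in_delta_neighborhood(np.array([2.0, 0.0]), E, delta=0.3))  # False
```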

(C.1) There exists a positive constant M such that |ψ(y)| ≤ M < ∞.

(C.2) The function m_ψ is k-times continuously differentiable, k ≥ 1, and

Denote by f_ℓ, ℓ = 1, …, d, the density functions of X^ℓ, ℓ = 1, …, d. The functions f and f_ℓ, ℓ = 1, …, d, are supposed to be continuous, bounded, and

where ‖·‖ is a norm on ℝ^d and L is a positive constant.

The kernels K_1, K_2, K_3 and K are assumed to fulfill the following conditions. (K.1) K_1, K_2, K_3 and K are continuous, respectively, on the compact supports

The density functions q_ℓ, ℓ = 1, …, d, satisfy the following assumption. (Q.1) For any 1 ≤ ℓ ≤ d, q_ℓ has k continuous and bounded derivatives, with compact support included in C_ℓ.

There exists a set Γ ∈ 𝔅_{ℝ²} containing D = {(s, t) ∈ ℝ² : s = t} such that (D.1) f_{(X_s,Y_s),(X_t,Y_t)} − f_{(X_s,Y_s)} f_{(X_t,Y_t)} exists everywhere for (s, t) ∈ Γ^c,

We work under the following conditions on the smoothing parameters h_T and h_{j,T}, j = 1, 2,

, for a fixed 0 < c′ < ∞,

Throughout this work, we use the α-mixing dependence structure, where the associated coefficient is defined, for any two σ-fields 𝒜 and ℬ, by
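The defining formula did not survive extraction; the standard strong-mixing coefficient, which is presumably what is intended here, reads

```latex
\alpha(\mathcal{A}, \mathcal{B})
  = \sup_{A \in \mathcal{A},\, B \in \mathcal{B}}
    \bigl| P(A \cap B) - P(A)\, P(B) \bigr|.
```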

For every Borel set I in ℝ₊, the σ-algebra generated by (Z_t, t ∈ I) is denoted by σ(Z_t, t ∈ I).


The proofs of our theorems are split into two steps. First, we consider the case where the density is assumed to be known. Subsequently, we treat the general case when f is unknown.

Denote by η̃_l, m̃_{ψ,T}(x) and m̃_{ψ,T,l}(x) the versions of η_l, m_{ψ,T}(x) and m_{ψ,T,l}(x) associated with a known density (formally, we replace f_T by f in the expressions (3) and (4), and m_{ψ,T,l}(x) by m̃_{ψ,T,l}(x) in (8)).

Introduce now the following quantities (see (4) for the discrete case); we establish the proof for the first component,

The following lemma is of particular interest for establishing the result of Theorem 1. Note that (19) is “only” instrumental in the proof of (20).

Lemma 1 Under the assumptions (C.1)–(C.2), (F.1)–(F.2), (K.1), (Q.1) and (H.2), we have

Proof: According to Fubini’s Theorem and under the additive model assumption, we have

Setting v_1 h_{1,T} = x_1 − u_1 and using a Taylor expansion, we get, by (C.2) and (K.1),
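The dropped display at this step presumably carries out the standard kernel bias expansion; a hedged reconstruction under (C.2) and (K.1) (the exact form in the paper may differ):

```latex
\int K_1(v_1)\, m_1(x_1 - v_1 h_{1,T})\, dv_1
  = m_1(x_1)
  + \sum_{j=1}^{k-1} \frac{(-h_{1,T})^{j}}{j!}\, m_1^{(j)}(x_1)
      \int v_1^{j} K_1(v_1)\, dv_1
  + O\!\left(h_{1,T}^{k}\right),
```

so that, under the bandwidth condition (H.2), the bias term is negligible at the stated rate.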

Under (H.2), it follows that,


…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
