Asymptotic near-efficiency of the Gibbs-energy (GE) and empirical-variance estimating functions for fitting Matérn models -- II: Accounting for measurement errors via conditional GE mean
Consider one realization of a continuous-time Gaussian process $Z$ which belongs to the Matérn family with known ``regularity'' index $\nu > 0$. For estimating the autocorrelation range and the variance of $Z$ from $n$ observations on a fine grid, we studied in Girard (2016) the GE-EV method, which simply retains the empirical variance (EV) and equates it to a candidate ``Gibbs energy'' (GE), i.e.~the quadratic form ${\bf z}^T R^{-1} {\bf z}/n$, where ${\bf z}$ is the vector of observations and $R$ is the autocorrelation matrix for ${\bf z}$ associated with a candidate range. The present study considers the case where the observation is ${\bf z}$ plus a Gaussian white noise whose variance is known. We propose to simply bias-correct EV and to replace GE by its conditional mean given the observation. We show that the ratio of the large-$n$ mean squared error of the resulting CGEM-EV estimate of the range parameter to that of its maximum likelihood estimate, and the analogous ratio for the variance parameter, behave as in the no-noise case: as the grid step tends to $0$, they both converge to a constant, a function of $\nu$ only, which is surprisingly close to $1$ provided $\nu$ is not too large. We also obtain, for all $\nu$, convergence to $1$ of the analogous ratio for the microergodic parameter.
💡 Research Summary
The paper addresses the problem of estimating the range (θ) and variance (τ²) parameters of a continuous‑time Gaussian process Z whose covariance belongs to the Matérn family, when the observations are contaminated by additive Gaussian white noise of known variance σ²ₑ. In a previous work (Girard 2016) the author introduced the GE‑EV method, which simply equates the empirical variance (EV) of the raw observations to a candidate “Gibbs energy” (GE) defined as the quadratic form zᵀR⁻¹z/n, where R is the autocorrelation matrix corresponding to a candidate range. That approach was shown to be asymptotically near‑efficient: the ratio of the mean‑squared error (MSE) of the GE‑EV estimator to that of the maximum‑likelihood estimator (MLE) converges, as the sampling grid becomes dense, to a constant depending only on ν that is close to 1.
The present study extends this framework to the realistic situation where the observed vector y = z + ε consists of the true process values plus independent Gaussian noise ε ∼ N(0, σ²ₑ I). Two modifications are introduced. First, the empirical variance is bias‑corrected by subtracting the known noise variance: EV̂ = (‖y‖²/n) – σ²ₑ, which yields an unbiased estimator of τ². Second, the Gibbs energy is replaced by its conditional expectation given the noisy observations, i.e. E[zᵀR⁻¹z/n | y], computed under the candidate model. Equating this conditional GE mean to the bias‑corrected EV defines the CGEM‑EV estimator.
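The two modifications described above can be sketched numerically. The following is a minimal illustration, not the paper's asymptotic setting: it assumes a Matérn model with ν = 1/2 (the exponential correlation), a synthetic grid, and uses the standard Gaussian conditioning identities to evaluate E[zᵀR⁻¹z/n | y] = (mᵀR⁻¹m + tr(R⁻¹C))/n, where m and C are the conditional mean and covariance of z given y under the candidate model. All variable names and the grid setup are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting (illustrative): Matérn with nu = 1/2, i.e. exponential correlation.
n = 120
t = np.linspace(0.0, 1.0, n)
theta_true, tau2_true, sigma_e2 = 0.2, 2.0, 0.5  # range, variance, known noise variance

def corr_matrix(theta):
    """Exponential (Matérn nu = 1/2) autocorrelation matrix on the grid t."""
    return np.exp(-np.abs(t[:, None] - t[None, :]) / theta)

R_true = corr_matrix(theta_true)
z = rng.multivariate_normal(np.zeros(n), tau2_true * R_true)
y = z + rng.normal(0.0, np.sqrt(sigma_e2), n)  # noisy observation y = z + eps

# Step 1: bias-corrected empirical variance, an unbiased estimate of tau^2.
ev_hat = y @ y / n - sigma_e2

# Step 2: conditional mean of the Gibbs energy z^T R^{-1} z / n given y,
# under a candidate model (theta_c, tau2_c).  Since z | y is Gaussian with
#   m = tau2 R (tau2 R + sigma_e2 I)^{-1} y
#   C = tau2 R - tau2 R (tau2 R + sigma_e2 I)^{-1} tau2 R
# we have E[z^T R^{-1} z / n | y] = (m^T R^{-1} m + tr(R^{-1} C)) / n.
def conditional_ge_mean(theta_c, tau2_c):
    R = corr_matrix(theta_c)
    S = tau2_c * R + sigma_e2 * np.eye(n)
    K = tau2_c * R @ np.linalg.solve(S, np.eye(n))  # smoothing matrix
    m = K @ y
    C = tau2_c * R - K @ (tau2_c * R)
    R_inv = np.linalg.inv(R)
    return (m @ R_inv @ m + np.trace(R_inv @ C)) / n

# CGEM-EV sets the conditional GE mean equal to ev_hat at the fitted range.
print("bias-corrected EV:", ev_hat)
print("conditional GE mean at true range:", conditional_ge_mean(theta_true, tau2_true))
```

The conditional GE mean is always nonnegative (a quadratic form in the conditional mean plus the trace of a product of positive semi-definite matrices), so the CGEM-EV equation is well posed whenever the bias-corrected EV is positive.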