A random-projection based procedure to test if a stationary process is Gaussian
In this paper we address the statistical problem of testing whether a stationary process is Gaussian, where the observation consists of a finite sample path of the process. Using a random projection technique introduced and studied by Cuesta-Albertos et al. (2007) in the framework of goodness-of-fit tests for functional data, we construct decision rules. These rules are based on the whole distribution of the process, not merely on its marginal distribution at a fixed order. The main idea is to test Gaussianity on the marginal distribution of random linear combinations of the process, which leads to consistent decision rules. Numerical simulations demonstrate the relevance of our approach.
💡 Research Summary
The paper tackles the problem of testing whether a stationary stochastic process is Gaussian when only a finite sample path is observed. Traditional Gaussianity tests for time series usually focus on marginal distributions at a fixed lag or on low‑order moments, which may miss dependence structures present in the whole trajectory. To overcome this limitation, the authors adapt the random‑projection methodology introduced by Cuesta‑Albertos et al. (2007) for functional data goodness‑of‑fit testing.
The procedure works as follows. Let $X = (X_1,\dots,X_n)$ denote the observed path of the stationary process. Independent random weight vectors $\theta^{(1)},\dots,\theta^{(K)}$ are drawn from a standard multivariate normal distribution and normalized so that $\|\theta^{(k)}\|=1$. For each projection the scalar $Z_k = \langle X,\theta^{(k)}\rangle$ is computed. If the underlying process is Gaussian, every $Z_k$ is a linear combination of jointly Gaussian variables and therefore itself follows a normal distribution. Conversely, if the process is non-Gaussian, at least one of the projected variables will deviate from normality.
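The projection step can be sketched in Python. The sliding-window construction (so that each random direction yields a whole projected series rather than a single scalar) and the helper name `project_path` are illustrative assumptions, not the authors' exact code:

```python
import numpy as np

def project_path(x, theta):
    """Project sliding windows of the path x onto a unit direction theta.

    The window length d is theta's length; each window
    (x_t, ..., x_{t+d-1}) is reduced to the scalar product
    <window, theta>.  If the process is Gaussian, the projected
    series has a Gaussian marginal distribution.
    """
    x = np.asarray(x, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(x, theta.size)
    return windows @ theta

rng = np.random.default_rng(42)
d, n = 5, 500
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)     # normalize so ||theta|| = 1
x = rng.standard_normal(n)         # a Gaussian white-noise path for illustration
z = project_path(x, theta)         # projected series of length n - d + 1
```

Repeating this with $K$ independently drawn directions produces the collection of projected series on which the univariate normality tests are run.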
Each projected series $\{Z_k\}_{k=1}^K$ is subjected to a conventional univariate normality test (Anderson-Darling, Shapiro-Wilk, Kolmogorov-Smirnov, etc.). The resulting p-values are then combined into a single test statistic using methods such as Fisher's combination, Stouffer's Z-score, or a Bonferroni-adjusted minimum p-value. The combined statistic is compared with a critical value that corresponds to a pre-specified significance level $\alpha$. If the statistic exceeds the critical value, the null hypothesis of Gaussianity is rejected.
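A minimal sketch of the combination step, assuming Shapiro-Wilk as the univariate test; `combine_projection_pvalues` is a hand-rolled illustration (SciPy also ships a ready-made `scipy.stats.combine_pvalues`):

```python
import numpy as np
from scipy import stats

def combine_projection_pvalues(pvals, method="fisher"):
    """Combine per-projection normality p-values into a single p-value.

    fisher:     -2 * sum(log p_k) follows chi-square(2K) under H0.
    bonferroni: K times the minimum p-value, capped at 1.
    """
    pvals = np.asarray(pvals, dtype=float)
    if method == "fisher":
        stat = -2.0 * np.log(pvals).sum()
        return stats.chi2.sf(stat, df=2 * pvals.size)
    if method == "bonferroni":
        return min(1.0, pvals.size * pvals.min())
    raise ValueError(f"unknown method: {method}")

# Example: Shapiro-Wilk p-values from K = 5 projected series
rng = np.random.default_rng(1)
pvals = [stats.shapiro(rng.standard_normal(200)).pvalue for _ in range(5)]
p_combined = combine_projection_pvalues(pvals, method="fisher")
reject = p_combined < 0.05  # reject Gaussianity at level alpha = 0.05
```

One design caveat: Fisher's combination assumes independent p-values, while projections of the same path are dependent, so the Bonferroni variant (valid under arbitrary dependence) is the more conservative choice in practice.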
The authors prove two key theoretical properties. First, the random projections preserve the full covariance structure of the process, guaranteeing that the distribution of each projection reflects the Gaussianity of the original process. Second, the test is consistent: as the sample size $n$ tends to infinity, the probability of correctly rejecting a non-Gaussian process approaches one, while the test maintains the nominal size $\alpha$ under the Gaussian null. The proofs rely on the independence of the projection vectors, the central limit theorem for linear combinations, and standard asymptotic results for the chosen univariate normality tests.
A comprehensive simulation study evaluates the performance of the method against classic tests based on residuals, autocorrelations, or marginal kurtosis. The authors consider a variety of stationary models, including AR(1), MA(1), ARMA(2,2), GARCH(1,1), and processes obtained by applying nonlinear transformations to Gaussian series. For each model, sample sizes of 100, 500, and 1000 are examined with 1,000 Monte-Carlo replications. The random-projection test consistently shows higher power, especially for processes with nonlinear dependence or heteroskedasticity, while preserving the prescribed type-I error rate. Power increases with the number of projections $K$, but the computational cost grows only linearly ($O(Kn)$), making the approach feasible for moderate to large datasets.
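A power experiment of this kind can be mimicked in a few lines. The window length, number of projections, Bonferroni cutoff, and the squared-Gaussian alternative below are illustrative assumptions; note also that Shapiro-Wilk p-values are only approximate when the projected series is serially dependent, which is why the paper's own calibration arguments matter:

```python
import numpy as np
from scipy import stats

def reject_gaussianity(x, d=5, n_proj=10, alpha=0.05, rng=None):
    """One run of the projection test: Bonferroni over n_proj Shapiro-Wilk p-values."""
    rng = np.random.default_rng() if rng is None else rng
    windows = np.lib.stride_tricks.sliding_window_view(np.asarray(x, float), d)
    pvals = []
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # unit-norm random direction
        pvals.append(stats.shapiro(windows @ theta).pvalue)
    return n_proj * min(pvals) < alpha          # Bonferroni-adjusted minimum p-value

def empirical_rejection_rate(simulate, n_rep=100, **kw):
    """Fraction of Monte-Carlo replications in which Gaussianity is rejected."""
    rng = np.random.default_rng(0)
    return float(np.mean([reject_gaussianity(simulate(rng), rng=rng, **kw)
                          for _ in range(n_rep)]))

# Power against an obviously non-Gaussian alternative: squared Gaussian noise
power = empirical_rejection_rate(lambda rng: rng.standard_normal(500) ** 2)
```

Swapping in Gaussian ARMA simulators for the `simulate` argument gives the corresponding empirical size under the null.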
Real‑world applications to financial log‑returns and daily temperature records illustrate the practical relevance. Conventional marginal tests fail to reject Gaussianity for these series, whereas the proposed method detects significant departures at the 5 % level, highlighting hidden structure that is only revealed when the whole trajectory is examined.
Implementation is straightforward: the projection step requires only generation of standard normal vectors and a dot‑product with the observed series; existing software for univariate normality testing can be reused without modification. The authors also discuss bootstrap refinements and averaging over multiple random projection sets to improve stability.
In summary, the paper introduces a novel, theoretically sound, and computationally efficient framework for testing Gaussianity of stationary processes. By leveraging random linear combinations of the entire observed path, the method captures global distributional features that marginal‑based tests overlook, achieving superior power across a broad class of alternatives. The approach is readily adaptable to existing statistical environments and opens avenues for extensions to multivariate, non‑stationary, or functional time‑series contexts.