Sparse VARs Do Not Imply Sparse Local Projections: Robust Inference for High-Dimensional Granger Causality


This paper studies multi-horizon Granger causality using high-dimensional local projections in sparse Vector Autoregressive (VAR) systems. Since local projection coefficients are nonlinear transformations of the underlying VAR parameters, existing approaches, such as de-biased least absolute shrinkage and selection operator (LASSO) and post-double-selection methods applied directly to local projections, lack a general justification, as sparsity of the VAR does not always propagate to higher horizons. We propose a two-step framework that avoids imposing sparsity at each horizon and delivers valid inference without relying on heteroskedasticity- and autocorrelation-consistent (HAC) corrections. We establish large sample theory for the proposed estimators and develop feasible Wald tests. Monte Carlo experiments demonstrate improved size control across horizons relative to existing methods. An application to large financial systems illustrates horizon-specific connectedness.


💡 Research Summary

The paper tackles the problem of conducting valid multi‑horizon Granger‑causality inference in high‑dimensional time‑series settings where the underlying data‑generating process is a sparse vector autoregression (VAR). While sparsity of VAR coefficients can be exploited for regularized estimation, the authors point out a crucial obstacle: the coefficients of local projections (LPs), which are the objects of interest for horizon‑specific causality, are nonlinear functions of the VAR parameters. Because LP coefficients involve powers of the VAR companion matrix, even a very sparse VAR can generate dense LP coefficients at longer horizons. Consequently, existing approaches that impose sparsity directly on each LP (e.g., de‑biased LASSO applied to each horizon, post‑double‑selection, or HAC‑adjusted inference) lack a solid theoretical foundation and often suffer from severe size distortions and bias, especially for h ≥ 5 or 10.
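To make the obstacle concrete, here is a minimal Python illustration (not taken from the paper): even when the VAR coefficient matrix A has only about two nonzero entries per row, its powers A^h, which drive the LP coefficients, fill in rapidly with the horizon.

```python
# Minimal sketch (not from the paper): powers of a sparse, stable VAR(1)
# coefficient matrix lose their sparsity as the horizon grows.
import numpy as np

rng = np.random.default_rng(0)
N = 50
A = np.zeros((N, N))
nz = rng.choice(N * N, size=2 * N, replace=False)   # ~2 nonzeros per row
A.flat[nz] = rng.uniform(-0.3, 0.3, size=2 * N)
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()       # rescale for stability

P = np.eye(N)
for h in range(1, 11):
    P = P @ A                                       # Psi_h = A^h for a VAR(1)
    print(f"h = {h:2d}: nonzero share of A^h = {np.mean(np.abs(P) > 1e-10):.2f}")
```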

To overcome this, the authors propose a two‑step estimation and de‑biasing framework. In the first step, a regularized VAR is estimated using a LASSO‑type procedure followed by element‑wise thresholding to enforce approximate sparsity both on the coefficient matrices and on the innovation covariance Σ_u. The fitted VAR yields estimated residuals and a sparse estimate of Σ_u, which are then used to construct the impulse‑response matrices Ψ_h = J Â^h J′ (where Â is the estimated companion matrix).
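A hedged sketch of what this first step could look like in Python, assuming a VAR(1) for simplicity; the equation‑wise LASSO, the hard‑thresholding rule, and the tuning constants `lam` and `tau` below are illustrative placeholders, not the paper's actual procedure.

```python
# Illustrative first step (VAR(1) case): equation-wise LASSO, then element-wise
# thresholding of the coefficient matrix and of the residual covariance Sigma_u.
import numpy as np
from sklearn.linear_model import Lasso

def step1_sparse_var(Y, lam=0.05, tau=0.02):
    """Y: (T, N) array of observations. Returns thresholded A_hat, Sigma_u."""
    X, Z = Y[:-1], Y[1:]                        # regressors y_{t-1}, targets y_t
    N = Y.shape[1]
    A_hat = np.zeros((N, N))
    for i in range(N):                          # one LASSO regression per equation
        A_hat[i] = Lasso(alpha=lam, fit_intercept=False).fit(X, Z[:, i]).coef_
    A_hat[np.abs(A_hat) < tau] = 0.0            # element-wise thresholding
    U = Z - X @ A_hat.T                         # estimated VAR residuals
    Sigma_u = U.T @ U / U.shape[0]
    Sigma_u[np.abs(Sigma_u) < tau] = 0.0        # sparse estimate of Sigma_u
    return A_hat, Sigma_u

def psi_h(A_hat, h):
    """Impulse responses: for a VAR(1) the companion matrix is A itself,
    so Psi_h = J A_hat^h J' reduces to A_hat^h."""
    return np.linalg.matrix_power(A_hat, h)
```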

In the second step, the authors exploit the closed‑form relationship between the LP coefficient β_h and the VAR moments: β_h = (E[x_t x_t′])^{-1} E[x_t y_{t+h}′], where both moment matrices are functions of the VAR parameters and Σ_u. Plugging the first‑step estimates into this formula yields an LP estimator that never imposes sparsity at individual horizons; combined with the de‑biasing correction, it supports feasible Wald tests for multi‑horizon Granger causality without HAC adjustments.
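A hedged Python sketch of this plug‑in step, again specialized to a VAR(1): `solve_discrete_lyapunov` recovers Γ_0 = E[x_t x_t′] from the fitted Â and Σ̂_u, and the cross moment E[x_t y_{t+h}′] equals Γ_0 Ψ_h′ under the VAR. The function name is hypothetical, and the de‑biasing correction is omitted, so this is a simplification rather than the paper's construction.

```python
# Illustrative second step (VAR(1) case): plug first-step estimates into the
# closed-form LP moments to obtain beta_hat_h without LP-level sparsity.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def step2_plugin_lp(A_hat, Sigma_u, h):
    # Gamma_0 solves Gamma_0 = A Gamma_0 A' + Sigma_u (stationary covariance)
    Gamma0 = solve_discrete_lyapunov(A_hat, Sigma_u)
    Psi = np.linalg.matrix_power(A_hat, h)      # Psi_h = A^h
    Gamma_h = Gamma0 @ Psi.T                    # E[x_t y_{t+h}'] under the VAR
    return np.linalg.solve(Gamma0, Gamma_h)     # beta_hat_h = Gamma_0^{-1} Gamma_h
```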

