Robust Bayesian estimation in conditionally heteroscedastic time series models


Outliers can seriously distort statistical inference by inducing excessive sensitivity in the likelihood function, thereby compromising the reliability of Bayesian estimation. To address this issue, we develop a robust Bayesian estimation method for conditionally heteroscedastic time series models by extending the density power divergence (DPD) framework to the Bayesian setting. The resulting DPD-based posterior distribution, controlled by a tuning parameter, achieves a smooth balance between efficiency and robustness. We establish the asymptotic properties of the proposed estimator; specifically, the DPD-based posterior is shown to satisfy a Bernstein-von Mises type theorem, converging to a normal distribution centered at the minimum DPD estimator (MDPDE). Furthermore, the corresponding Bayes estimator, defined as the posterior mean under the DPD-based posterior (EDPE), is asymptotically equivalent to the MDPDE. Monte Carlo simulations based on GARCH(1,1) models confirm that the proposed EDPE performs well under both uncontaminated and contaminated data, maintaining robustness where the ordinary Bayes estimator becomes severely biased. An empirical application to BTC-USD returns further demonstrates the practical advantages of the proposed robust Bayesian framework for financial time series analysis.


💡 Research Summary

This paper tackles the well-known vulnerability of Bayesian inference to outliers in conditionally heteroscedastic time-series models such as GARCH. Classical Bayesian updating relies on the log-likelihood, which can be dramatically distorted by a few extreme observations, leading to severely biased posterior means. To mitigate this, the authors extend the density power divergence (DPD) framework, originally proposed for robust frequentist estimation, to the Bayesian setting, constructing a pseudo-posterior that replaces the log-likelihood with the DPD-based objective function $Q_{\gamma,n}(\theta)$. The tuning parameter $\gamma \ge 0$ governs the trade-off between efficiency and robustness: $\gamma = 0$ recovers the ordinary posterior, while larger $\gamma$ down-weights the influence of outlying data points.
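To make the efficiency-robustness trade-off concrete, here is a minimal Python sketch of the empirical DPD loss for a Gaussian GARCH(1,1) model. This is an illustration, not the paper's code: the function names, the initialization of the recursion at the unconditional variance, and the Gaussian closed form for $\int f^{1+\gamma}$ are assumptions on my part. The minimizer of this loss is the MDPDE (the paper's $Q_{\gamma,n}$ is the objective to be maximized, i.e., the same criterion up to sign), and the $\gamma = 0$ branch reduces to the average negative log-likelihood.

```python
import math

def garch_sigma2(x, omega, alpha, beta):
    """Feasible volatility recursion sigma~_t^2 for GARCH(1,1),
    started at the unconditional variance (one common initialization)."""
    s2 = [omega / max(1e-12, 1.0 - alpha - beta)]
    for t in range(1, len(x)):
        s2.append(omega + alpha * x[t - 1] ** 2 + beta * s2[t - 1])
    return s2

def dpd_loss(x, omega, alpha, beta, gamma):
    """Average DPD loss under Gaussian innovations; its minimizer is the MDPDE.
    For a N(0, s2) density f, the integral of f^(1+gamma) has the closed form
    (2*pi*s2)^(-gamma/2) / sqrt(1+gamma).  gamma == 0 falls back to the
    average negative log-likelihood."""
    total = 0.0
    for xt, s2 in zip(x, garch_sigma2(x, omega, alpha, beta)):
        if gamma == 0.0:
            total += 0.5 * math.log(2 * math.pi * s2) + xt ** 2 / (2 * s2)
        else:
            c = (2 * math.pi * s2) ** (-gamma / 2)
            f_gamma = c * math.exp(-gamma * xt ** 2 / (2 * s2))
            total += c / math.sqrt(1 + gamma) - (1 + 1 / gamma) * f_gamma
    return total / len(x)
```

The mechanism behind the robustness claim is visible here: a single extreme observation shifts the $\gamma = 0$ loss by an amount growing with $x_t^2$, whereas for $\gamma > 0$ the term $f^\gamma$ vanishes at extreme points, so the outlier's contribution stays bounded.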

The methodological development proceeds as follows. The conditionally heteroscedastic model is written as $X_t = \sigma_t(\theta)\,\varepsilon_t$, where $\sigma_t^2(\theta)$ follows a recursive specification (e.g., GARCH). Because the recursion requires initial values, the authors work with a feasible approximation $\tilde\sigma_t^2(\theta)$ and the corresponding conditional density $\tilde f_\theta(X_t \mid \mathcal F_{t-1})$. The DPD between the true conditional density and the model density is defined, and the minimum DPD estimator (MDPDE) $\hat\theta_{\gamma,n}$ is obtained by maximizing the empirical DPD criterion $\tilde Q_{\gamma,n}(\theta)$. The robust pseudo-posterior is then
$$\pi_{\gamma,n}(\theta \mid X_1,\dots,X_n) \propto \pi(\theta)\,\exp\{\tilde Q_{\gamma,n}(\theta)\},$$
where $\pi(\theta)$ denotes the prior; the Bayes estimator (EDPE) is the mean of this pseudo-posterior.
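Because only the DPD objective replaces the log-likelihood, the pseudo-posterior can be sampled with off-the-shelf MCMC. Below is an illustrative random-walk Metropolis sketch for the simplest special case, a constant-scale model $X_t = \sigma\varepsilon_t$ (so the volatility recursion collapses to a single $\sigma^2$ and the example stays short). The flat prior on $\log\sigma^2$, the step size, and all function names are my illustrative choices, not the paper's specification.

```python
import math
import random

def dpd_loss(x, sigma2, gamma):
    """Average DPD loss for i.i.d. N(0, sigma2) data; requires gamma > 0.
    In the GARCH case sigma2 would be the recursive sigma~_t^2(theta)."""
    c = (2 * math.pi * sigma2) ** (-gamma / 2)
    integral = c / math.sqrt(1 + gamma)  # closed-form Gaussian integral of f^(1+gamma)
    total = sum(integral - (1 + 1 / gamma) * c * math.exp(-gamma * xt ** 2 / (2 * sigma2))
                for xt in x)
    return total / len(x)

def sample_dpd_posterior(x, gamma, n_iter=3000, step=0.15, seed=0):
    """Random-walk Metropolis on log sigma2 targeting the pseudo-posterior
    pi(theta) * exp(-n * H_{gamma,n}(theta)), with a flat prior on log sigma2."""
    rng = random.Random(seed)
    n = len(x)
    log_s2 = 0.0
    log_target = -n * dpd_loss(x, math.exp(log_s2), gamma)
    draws = []
    for _ in range(n_iter):
        prop = log_s2 + rng.gauss(0.0, step)
        lt = -n * dpd_loss(x, math.exp(prop), gamma)
        if math.log(rng.random()) < lt - log_target:  # symmetric proposal
            log_s2, log_target = prop, lt
        draws.append(math.exp(log_s2))
    return draws
```

The EDPE is then approximated by the mean of the retained draws, e.g. `sum(draws[1000:]) / len(draws[1000:])` after discarding a burn-in period.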

