Expected Kullback-Leibler-based characterizations of score-driven updates

Score-driven (SD) models are a standard tool in statistics and econometrics, with applications in hundreds of published articles in the past decade. We provide an information-theoretic characterization of SD updates based on reductions in the expected Kullback-Leibler (EKL) divergence relative to the true – but unknown – data-generating density. EKL reductions occur if and only if the expected update direction aligns with the expected score; i.e., their inner product should be positive. This equivalence condition uniquely identifies SD updates (including scaled or clipped variants) as being EKL reducing, even in non-concave, multivariate, and misspecified settings. We further derive explicit bounds on admissible learning rates in terms of score moments, linking SD methods to adaptive optimization techniques. By contrast, alternative performance measures in the literature impose stronger conditions (e.g., concave logarithmic densities) and do not characterize SD updates: other updating rules may improve these measures, while SD updates need not. Our results provide a rigorous justification for SD models and establish EKL as their natural information-theoretic foundation.


💡 Research Summary

The paper provides a rigorous information‑theoretic foundation for score‑driven (SD) models by characterizing when an update reduces the expected Kullback‑Leibler (EKL) divergence of the model density from the true data‑generating density. The authors define EKL as a double integral over two independent draws from the true density: one draw supplies the observation that triggers the parameter update, and a second independent draw evaluates the updated model’s fidelity. This “two‑sample” perspective captures the expected gain from repeatedly updating and re‑evaluating the model.
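As a concrete illustration of the two‑sample construction, below is a minimal Monte Carlo sketch for a toy Gaussian location model (the model choice, the learning rate, and all function names are illustrative assumptions, not taken from the paper). One batch of draws from the true density triggers the score‑driven update; an independent batch evaluates the updated model. Since the log p term is common to both sides of the comparison, a reduction in cross‑entropy is equivalent to a reduction in EKL.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): true density p = N(mu_true, 1),
# model f(y; theta) = N(theta, 1), so the score d/dtheta log f = (y - theta).
mu_true, theta_pred, alpha = 1.0, 0.0, 0.1

def log_f(y, theta):
    """Log-density of the model N(theta, 1) at y."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

def cross_entropy_mc(update, n=200_000):
    """Two-sample Monte Carlo estimate of E_{y1} E_{y2} [-log f(y2; theta(y1))]:
    y1 triggers the parameter update, y2 independently evaluates fidelity.
    Differences of this quantity across updates equal differences in EKL."""
    y1 = rng.normal(mu_true, 1.0, n)   # draws that drive the update
    y2 = rng.normal(mu_true, 1.0, n)   # independent draws for evaluation
    return -np.mean(log_f(y2, update(y1)))

no_update = lambda y1: np.full_like(y1, theta_pred)
sd_update = lambda y1: theta_pred + alpha * (y1 - theta_pred)  # step along the score

# The SD update should give the smaller value, i.e., an EKL reduction.
print(cross_entropy_mc(no_update), cross_entropy_mc(sd_update))
```

Flipping the sign of alpha (stepping against the score) makes the expected update direction anti‑aligned with the expected score, and the estimated EKL worsens, matching the equivalence stated below.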

The central theoretical contribution is an “if and only if” result: a sufficiently small parameter change yields an EKL improvement if and only if the expected update direction has a positive inner product with the expected score. Formally, EKL(p‖f_{t|t}) < EKL(p‖f_{t|t−1}) ⇔ ⟨E_p[Δ_t], E_p[∇_t]⟩ > 0, where Δ_t denotes the update direction from f_{t|t−1} to f_{t|t} and ∇_t the score of the model log‑density at f_{t|t−1}. Because a pure SD step takes Δ_t proportional to ∇_t, its expected direction automatically aligns with the expected score, which is why SD updates (including scaled or clipped variants) are EKL reducing even in non‑concave, multivariate, and misspecified settings.
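The mechanism behind the equivalence, and the link to learning‑rate bounds, can be sketched with a first‑order expansion. The notation below is a reconstruction under stated assumptions (Δ_t for the update direction, ∇_t for the score, α for the learning rate); the smoothness constant L and the closing step‑size bound are an illustrative descent‑lemma form, not the paper’s exact bound.

```latex
% Sketch, not verbatim from the paper. Write the update as
% f_{t|t} = f_{t|t-1} + \alpha\,\Delta_t, where \Delta_t may depend on the
% triggering draw, and let \nabla_t be the score of \log f at f_{t|t-1}.
% Independence of the two draws factorizes the first-order term into a
% product of expectations:
\[
\operatorname{EKL}\bigl(p \,\|\, f_{t|t}\bigr)
  = \operatorname{EKL}\bigl(p \,\|\, f_{t|t-1}\bigr)
    - \alpha \bigl\langle \mathbb{E}_p[\Delta_t],\, \mathbb{E}_p[\nabla_t] \bigr\rangle
    + O(\alpha^2)
\]
% Hence, for sufficiently small \alpha > 0, EKL strictly decreases iff
% \langle \mathbb{E}_p[\Delta_t], \mathbb{E}_p[\nabla_t] \rangle > 0.
% For the pure SD step \Delta_t = \nabla_t, the inner product equals
% \lVert \mathbb{E}_p[\nabla_t] \rVert^2, which is positive whenever the
% expected score is nonzero. If the expected log-density is L-smooth in the
% parameter, a descent-lemma argument bounds admissible learning rates by
% second moments of the score (illustrative form):
%   0 < \alpha < 2 \lVert \mathbb{E}_p[\nabla_t] \rVert^2
%                 \big/ \bigl( L\, \mathbb{E}_p \lVert \nabla_t \rVert^2 \bigr).
```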

