Codifference as a measure of dispersion and dependence for mixture models

Codifference is a commonly used measure of dependence for stable vectors and processes, for which the covariance is infinite. However, we argue that it can also be applied to other heavy-tailed distributions, and that it provides useful information for non-Gaussian distributions in general, regardless of the tails. Motivated by this, we analyse the codifference under as few assumptions as possible about the studied model. This leads us to propose its natural domain and three natural variants. Using the wide class of variable scale mixture distributions, we argue that the codifference can be interpreted as a measure of bulk properties, one that ignores the tails far more than the covariance does. It can also detect forms of non-linear memory that the covariance cannot. Finally, we derive the asymptotic distribution of its estimator.
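To make the bulk-versus-tails claim concrete, here is a minimal sketch of our own (not an experiment from the paper), using the classical codifference variant at unit arguments: for a symmetric random variable, the codifference of X with itself reduces to the dispersion measure τ(X, X) = −2 ln E[e^{iX}], which coincides with the variance in the Gaussian case but stays stable under heavy tails, since only the bounded function e^{iX} is ever averaged.

```python
import numpy as np

rng = np.random.default_rng(1)

def codifference_dispersion(x):
    # For symmetric X the codifference of X with itself reduces to
    # tau(X, X) = -2 ln E[e^{iX}]; in the Gaussian case this equals Var(X),
    # but only the bounded function e^{ix} is averaged, so heavy tails
    # cannot destabilise the estimate.
    return -2.0 * np.log(np.mean(np.exp(1j * x)).real)

# Student-t with 1.5 degrees of freedom has infinite variance: the sample
# variance never settles, while the codifference-based dispersion does.
for n in (10_000, 100_000, 1_000_000):
    x = rng.standard_t(df=1.5, size=n)
    print(n, codifference_dispersion(x), x.var())
```

Running this, the middle column stabilises as n grows while the sample variance fluctuates wildly, which is the sense in which the codifference reads off the bulk of the distribution and ignores the tails.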


💡 Research Summary

The paper revisits the codifference—a dependence measure originally devised for symmetric α‑stable distributions where the covariance is infinite—and demonstrates that it can be meaningfully applied to a far broader class of heavy‑tailed and non‑Gaussian models. The authors begin by defining a minimal admissible class D of random variables: those possessing a strictly positive‑definite Lebesgue density (or a degenerate atom at zero) and whose characteristic function is real‑valued, positive, and tends to zero at infinity. This setting guarantees that the characteristic function behaves like a quasi‑norm, enabling the construction of a “log‑characteristic‑function” (lcf):

\[
\ell_X(\theta) = -\ln \varphi_X(\theta) = -\ln \mathbb{E}\bigl[e^{i\theta X}\bigr],
\]

which is well defined and non-negative on D, because \(\varphi_X\) is real-valued, positive, and bounded by one.

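The abstract also highlights the asymptotic distribution of the codifference estimator. The following is a rough NumPy sketch of the natural plug-in estimator (our own illustration, assuming the classical codifference variant; the name `empirical_codifference` is ours), together with the Gaussian sanity check that codifference reduces to covariance:

```python
import numpy as np

def empirical_codifference(x, y):
    # Plug-in estimator of the classical codifference
    # tau(X, Y) = ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}],
    # with every expectation replaced by a sample mean.
    phi_diff = np.mean(np.exp(1j * (x - y)))
    phi_x = np.mean(np.exp(1j * x))
    phi_y = np.mean(np.exp(-1j * y))
    return (np.log(phi_diff) - np.log(phi_x) - np.log(phi_y)).real

# Sanity check: for a centred bivariate Gaussian, tau(X, Y) = Cov(X, Y).
rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
print(empirical_codifference(x, y))  # approx 0.6
print(np.cov(x, y)[0, 1])            # approx 0.6
```

Because the estimator averages only bounded functions of the data, it remains well behaved for heavy-tailed inputs, which is what makes a central-limit-type result for it plausible even when the covariance does not exist.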
