Random vectors in the presence of a single big jump
Multidimensional distributions with heavy tails have recently attracted attention in several papers in applied probability. However, most works of the last decades focus on multivariate regular variation, while the remaining heavy-tailed distribution classes have not been studied extensively. For multivariate subexponentiality several approaches can be found, but none of them has become widely established. Motivated by the single-big-jump principle and by the multivariate subexponentiality suggested by Samorodnitsky and Sun (2016), we introduce multivariate long-tailed, dominatedly varying, and consistently varying distribution classes. We examine the closure properties of these classes with respect to product convolution, scale mixture, and convolution of random vectors. In particular, for the classes of multivariate subexponential and dominatedly varying distributions, we provide the asymptotic behavior of the random vector and of its normalized Lévy measure through their linear combinations, which leads to their characterization. Furthermore, we study the single big jump in finite and random sums of random vectors, permitting some dependence structures that contain independence as a special case. Finally, we present an application to the asymptotic evaluation of the present value of the total claims in a risk model with a common Poisson counting process, general financial factors, and independent, identically distributed claims with a common multivariate subexponential distribution.
💡 Research Summary
The paper “Random vectors in the presence of a single big jump” develops a comprehensive theory of multivariate heavy‑tailed distributions that goes beyond the well‑studied multivariate regular variation (MRV) and the existing multivariate subexponential frameworks. The authors first review the classical one‑dimensional heavy‑tail classes—K (heavy‑tailed), L (long‑tailed), S (subexponential), D (dominatedly varying), C (consistently varying), and regularly varying R₋α—and recall their inclusion relations (S⊂L⊂K, D⊂K, C⊂D∩L, etc.). They then lift these concepts to the multivariate setting by fixing a family 𝓡 of increasing, convex cones A⊂ℝᵈ₊ that do not contain the origin, and defining the tail of a random vector X on A as F̄_A(x) = P(X ∈ xA).
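For reference, the one‑dimensional classes listed above admit the following standard definitions from the classical heavy‑tail literature; the paper may state them with minor variations in notation, and the tail is written F̄ = 1 − F:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For a distribution $F$ on $[0,\infty)$ with tail $\overline{F}=1-F$:
\begin{align*}
F \in \mathcal{K} &\iff \int_0^\infty e^{\varepsilon x}\,F(dx)=\infty
  \ \text{for all}\ \varepsilon>0, \\
F \in \mathcal{L} &\iff \lim_{x\to\infty}
  \frac{\overline{F}(x-y)}{\overline{F}(x)} = 1 \ \text{for all}\ y>0, \\
F \in \mathcal{S} &\iff \lim_{x\to\infty}
  \frac{\overline{F^{*2}}(x)}{\overline{F}(x)} = 2, \\
F \in \mathcal{D} &\iff \limsup_{x\to\infty}
  \frac{\overline{F}(bx)}{\overline{F}(x)} < \infty
  \ \text{for some (equivalently, all)}\ b\in(0,1), \\
F \in \mathcal{C} &\iff \lim_{b\uparrow 1}\liminf_{x\to\infty}
  \frac{\overline{F}(bx)}{\overline{F}(x)} = 1, \\
F \in \mathcal{R}_{-\alpha} &\iff \lim_{x\to\infty}
  \frac{\overline{F}(xy)}{\overline{F}(x)} = y^{-\alpha}
  \ \text{for all}\ y>0,\ \alpha>0,
\end{align*}
with the inclusions
$\mathcal{R}_{-\alpha}\subset\mathcal{C}\subset\mathcal{D}\cap\mathcal{L}$
and $\mathcal{S}\subset\mathcal{L}\subset\mathcal{K}$.
\end{document}
```

Here $F^{*2}$ denotes the convolution of $F$ with itself; the subexponential condition $\overline{F^{*2}}(x)\sim 2\,\overline{F}(x)$ is exactly the one‑dimensional expression of the single‑big‑jump principle that the paper extends to random vectors.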