Counting models with excessive zeros ensuring stochastic monotonicity


Standard count models such as the Poisson and Negative Binomial models often fail to capture the large proportion of zero claims commonly observed in insurance data. To address this issue of excessive zeros, zero-inflated and hurdle models introduce additional parameters that explicitly account for excess zeros, thereby improving the joint representation of zero and positive claim outcomes. These models have further been extended with random effects to accommodate longitudinal dependence and unobserved heterogeneity. However, their consistency with fundamental probabilistic principles in insurance, particularly stochastic monotonicity, has not been formally examined. This paper provides a rigorous analysis showing that standard counting random-effect models for excessive zeros may violate this property, leading to inconsistencies in posterior credibility. We then propose new classes of counting random-effect models that both accommodate excessive zeros and ensure stochastic monotonicity, thereby providing fair and theoretically coherent credibility adjustments as claim histories evolve.


💡 Research Summary

This paper addresses a fundamental yet overlooked issue in the actuarial modeling of claim frequencies that exhibit a large proportion of zero counts. While zero‑inflated and hurdle count models have become standard tools for handling excess zeros, their extensions with random effects—intended to capture longitudinal dependence and unobserved heterogeneity—have never been examined for compliance with stochastic monotonicity, also known as the credibility order. Stochastic monotonicity requires that, as a policyholder’s observed claim history becomes more adverse, the posterior (a‑posteriori) distribution of future claim frequency should shift in a direction that reflects higher risk; formally, conditional expectations of any non‑decreasing function of the future claim count must be non‑decreasing in the observed history.
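The credibility order is easiest to see in a model that does satisfy it. The following minimal sketch (an illustration, not a model from the paper; the Gamma parameters `a` and `b` are arbitrary choices) uses the classical Poisson-gamma credibility setup, where the posterior mean of the claim rate after observing `y` claims has the closed form `(a + y) / (b + 1)` and is therefore non-decreasing in the observed count:

```python
# Illustrative sketch (not the paper's model): Poisson-gamma credibility.
# Y1 | lam ~ Poisson(lam), lam ~ Gamma(shape=a, rate=b).
# The posterior mean E[lam | Y1 = y] = (a + y) / (b + 1) is non-decreasing
# in y, so the predictive mean of the future count respects the
# credibility order (stochastic monotonicity).

def posterior_mean(y, a=2.0, b=1.0):
    """Posterior mean of the Poisson rate after observing y claims."""
    return (a + y) / (b + 1.0)

means = [posterior_mean(y) for y in range(5)]
# A more adverse history never lowers the predicted future frequency.
assert all(m1 <= m2 for m1, m2 in zip(means, means[1:]))
print(means)
```

Zero-inflated and hurdle extensions break this closed-form structure, which is why the monotonicity question studied in the paper is non-trivial.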

The authors first formalize a general Poisson‑hurdle mixture model (Model 1) with two latent random effects: one governing the binary “hurdle” (whether any claim occurs) and another governing the count of claims conditional on crossing the hurdle. They then specialize to a bivariate normal random‑effect specification (Model 2) that allows correlation between the two latent components. Using likelihood‑ratio ordering and multivariate total positivity of order two (MTP₂), they derive conditions under which the model preserves monotonicity for histories with at least one positive claim (Lemma 3). However, the crucial comparison between a zero‑claim history (Y₁=0) and a single‑claim history (Y₁=1) is shown to be problematic. Theorem 1 demonstrates that, even when the correlation is non‑negative, letting the variance of the hurdle effect shrink to zero can reverse the expected future claim count, i.e., E[Y₂ | Y₁=0] > E[Y₂ | Y₁=1], so a claim‑free history leads to a higher predicted future claim frequency than a one‑claim history, violating the credibility order.
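The degenerate case behind Theorem 1 can be checked numerically. The sketch below is an illustration under assumed parameter values (constant hurdle probability `p = 0.3`, count effect `theta ~ N(0, 1)`, rate `lam = exp(theta)`), not the paper's exact construction: when the hurdle effect has zero variance, observing Y₁ = 0 carries no information about the count effect, while observing Y₁ = 1 favors small rates among the positive counts, so the predictive mean after one claim drops below the predictive mean after a claim-free period:

```python
import math

# Illustrative sketch of the Theorem 1 degeneracy (assumed parameters,
# not taken from the paper).  Hurdle effect variance -> 0, so the hurdle
# probability p is a constant; the count effect theta ~ N(0, 1) drives a
# zero-truncated Poisson rate lam = exp(theta).

def trunc_pois_mean(lam):
    """Mean of a zero-truncated Poisson(lam)."""
    return lam / (1.0 - math.exp(-lam))

p = 0.3                                        # constant hurdle probability
grid = [-4.0 + 8.0 * i / 400 for i in range(401)]
w = [math.exp(-t * t / 2.0) for t in grid]     # N(0,1) weights (unnormalized)
lams = [math.exp(t) for t in grid]

# P(Y1 = 1 | theta) = p * lam * exp(-lam) / (1 - exp(-lam)),
# a decreasing function of lam: one claim favors small rates.
lik1 = [p * l * math.exp(-l) / (1.0 - math.exp(-l)) for l in lams]

# E[Y2 | theta] = p * (mean of the zero-truncated Poisson).
ey2 = [p * trunc_pois_mean(l) for l in lams]

# Y1 = 0 has likelihood 1 - p, constant in theta: posterior = prior.
e_given_0 = sum(wi * e for wi, e in zip(w, ey2)) / sum(w)
e_given_1 = (sum(wi * li * e for wi, li, e in zip(w, lik1, ey2))
             / sum(wi * li for wi, li in zip(w, lik1)))

print(e_given_0, e_given_1)  # expected: e_given_1 < e_given_0 (reversal)
```

The reversal arises because the one-claim likelihood weight `lam * exp(-lam) / (1 - exp(-lam))` is decreasing in the rate, so the posterior shifts toward small rates while a zero-claim history leaves the prior untouched.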

