On the tails of log-concave density estimators


It is shown that the nonparametric maximum likelihood estimator of a univariate log-concave probability density satisfies desirable consistency properties in the tail regions. Specifically, let $P$ and $f$ denote the true underlying distribution and density, respectively. If $\hat{f}_n$ is the estimated log-concave density, and $\hat{\varphi}_n = \log \hat{f}_n$, then we specify sequences $(b_n)_{n\in\mathbb{N}}$ such that $P([b_n,\infty)) \to 0$ at a specific speed, ensuring that the absolute errors or absolute relative errors of $\hat{f}_n$, $\hat{\varphi}_n$ and $\hat{\varphi}_n'$ converge to zero uniformly on sets $[a, b_n]$. The main tools, besides characterizations of $\hat{f}_n$, are exponential and maximal inequalities for truncated moments of log-concave distributions, which are of independent interest.


💡 Research Summary

The paper investigates the behavior of the nonparametric maximum likelihood estimator (MLE) for a univariate log-concave density $f(x) = \exp(\varphi(x))$, with $\varphi$ concave, in the extreme tails of the distribution. While previous work has established consistency and rates of convergence on any fixed compact subset of the support, the authors address the gap concerning the estimator's performance near the boundaries, where the true density may vanish or decay rapidly.
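Log-concavity of $f = \exp(\varphi)$ means $\varphi$ is concave, which on a uniform grid shows up as non-positive second differences of $\log f$. The following sketch (the helper `is_log_concave` is an illustrative name, not from the paper) checks this numerically for the standard normal density, which is log-concave, and the Cauchy density, which is not:

```python
import numpy as np

def is_log_concave(density, xs):
    """Numerically check log-concavity on the grid xs: the second
    differences of log(density) must all be non-positive."""
    phi = np.log(density(xs))
    return bool(np.all(np.diff(phi, n=2) <= 1e-9))

# Standard normal: log f(x) = -x^2/2 - log(sqrt(2*pi)) is concave.
gaussian = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
xs = np.linspace(-5, 5, 201)
print(is_log_concave(gaussian, xs))  # True

# Cauchy: log f(x) = -log(pi) - log(1 + x^2) is convex for |x| > 1,
# so this heavy-tailed density is not log-concave.
cauchy = lambda x: 1.0 / (np.pi * (1 + x**2))
print(is_log_concave(cauchy, xs))  # False
```

The heavy-tailed Cauchy example also motivates the paper's tail focus: log-concave densities necessarily have sub-exponential tails, so the tail behavior of $\hat{f}_n$ is tightly constrained by the model class itself.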

The main contributions are three theorems that give uniform convergence results on sequences of intervals $[a, b_n]$, where the right endpoints $b_n$ grow into the tail so that $P([b_n, \infty)) \to 0$ at a controlled rate. The proofs rest on characterizations of the estimator $\hat{f}_n$ together with new exponential and maximal inequalities for truncated moments of log-concave distributions.

