Generalized double Pareto shrinkage


We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys’ priors. While it has a spike at zero like the Laplace density, it also has a Student’s $t$-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.


💡 Research Summary

The paper introduces a new Bayesian shrinkage prior, the Generalized Double Pareto (GDP) distribution, for use in linear regression models, especially in high‑dimensional settings where sparsity is desired. With scale $\xi$ and shape $\alpha$, the GDP density is
$$f(\theta \mid \xi, \alpha) = \frac{1}{2\xi}\left(1 + \frac{|\theta|}{\alpha\xi}\right)^{-(\alpha+1)},$$
which combines a spike at zero, as in the Laplace density, with heavy, Student's $t$-like tails.
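The Laplace scale-mixture construction mentioned above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the parameterization used here ($\lambda \sim \mathrm{Gamma}(\alpha, \text{rate}=\eta)$ with $\eta = \alpha\xi$, then $\theta \mid \lambda \sim \mathrm{Laplace}(0, 1/\lambda)$) is an assumption of this example.

```python
import numpy as np

def sample_gdp(n, alpha, eta, rng):
    """Draw n variates whose marginal is GDP with shape alpha, scale xi = eta/alpha,
    via the Laplace scale mixture:
        lambda ~ Gamma(shape=alpha, rate=eta)
        theta | lambda ~ Laplace(location 0, scale 1/lambda)
    """
    lam = rng.gamma(alpha, 1.0 / eta, size=n)  # NumPy's gamma takes (shape, scale=1/rate)
    return rng.laplace(0.0, 1.0 / lam)

def gdp_pdf(x, alpha, eta):
    """Closed-form marginal density, (1/(2*xi)) * (1 + |x|/(alpha*xi))^(-(alpha+1)),
    with xi = eta/alpha, so alpha*xi = eta."""
    xi = eta / alpha
    return (1.0 / (2.0 * xi)) * (1.0 + np.abs(x) / eta) ** -(alpha + 1.0)
```

Integrating out $\lambda$ analytically recovers the GDP density above; as a sanity check, for $\alpha = 3$, $\eta = 2$ the mixture gives $E|\theta| = E[1/\lambda] = \eta/(\alpha-1) = 1$, which Monte Carlo draws from `sample_gdp` reproduce. Smaller $\alpha$ yields heavier, more $t$-like tails.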

