Generalized Beta Mixtures of Gaussians
In recent years, a rich variety of shrinkage priors have been proposed that have great promise in addressing massive regression problems. In general, these new priors can be expressed as scale mixtures of normals, but have more complex forms and better properties than traditional Cauchy and double exponential priors. We first propose a new class of normal scale mixtures through a novel generalized beta distribution that encompasses many interesting priors as special cases. This encompassing framework should prove useful in comparing competing priors, considering properties and revealing close connections. We then develop a class of variational Bayes approximations through the new hierarchy presented that will scale more efficiently to the types of truly massive data sets that are now encountered routinely.
💡 Research Summary
This paper introduces a unifying framework for continuous shrinkage priors in high‑dimensional regression by defining a three‑parameter generalization of the beta distribution, called the Three‑Parameter Beta (TPB). The TPB density multiplies the classic beta kernel by a factor $\{1+(\phi-1)x\}^{-(a+b)}$, where $a$ controls concentration near zero, $b$ controls tail heaviness, and $\phi$ acts as a global shrinkage parameter. By mixing a normal distribution over its scale with a TPB‑distributed mixing variable, the authors obtain the TPB‑Normal (TPB‑N) prior: the shrinkage weight follows $\kappa \sim \mathcal{TPB}(a, b, \phi)$ and the coefficient is drawn as $\theta \mid \kappa \sim \mathcal{N}(0, \kappa^{-1} - 1)$, a family that recovers the horseshoe prior at $a = b = 1/2$, $\phi = 1$.
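A minimal sketch of the TPB density and of drawing from the implied normal scale mixture, assuming the Gamma–Gamma representation of the TPB‑N hierarchy ($\lambda \sim \mathcal{Ga}(b, \phi)$, $\tau \mid \lambda \sim \mathcal{Ga}(a, \lambda)$, $\theta \mid \tau \sim \mathcal{N}(0, \tau)$, with Gammas parameterized by shape and rate); function names are illustrative, not from the paper:

```python
import math
import random


def tpb_pdf(x, a, b, phi):
    """Three-Parameter Beta density on (0, 1):
    f(x) = Gamma(a+b)/(Gamma(a)Gamma(b)) * phi^b
           * x^(b-1) * (1-x)^(a-1) * {1+(phi-1)x}^(-(a+b)).
    Setting phi = 1 reduces this to the Beta(b, a) density."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b)) * phi ** b
    return (const * x ** (b - 1) * (1 - x) ** (a - 1)
            * (1 + (phi - 1) * x) ** (-(a + b)))


def sample_tpbn(a, b, phi, n, rng=random):
    """Draw n coefficients from the TPB-N prior via the assumed
    Gamma-Gamma scale hierarchy (rate parameterization):
    lambda ~ Ga(b, phi), tau | lambda ~ Ga(a, lambda),
    theta | tau ~ N(0, tau)."""
    draws = []
    for _ in range(n):
        # random.gammavariate takes (shape, scale), so rate r -> scale 1/r.
        lam = rng.gammavariate(b, 1.0 / phi)
        tau = rng.gammavariate(a, 1.0 / lam)
        draws.append(rng.gauss(0.0, math.sqrt(tau)))
    return draws
```

With `a = b = 0.5` and `phi = 1.0`, the draws should behave like horseshoe samples: a sharp spike of near‑zero coefficients together with occasional very large ones, which is exactly the shrink‑small/leave‑large behavior the paper's framework is designed to tune via $(a, b, \phi)$.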