Wrong Priors
📝 Original Info
- Title: Wrong Priors
- ArXiv ID: 0709.1067
- Date: 2009-11-13
- Authors: Researchers mentioned in the ArXiv original paper
📝 Abstract
All priors are not created equal. There are right and there are wrong priors. That is the main conclusion of this contribution. I use a cooked-up example designed to create drama, and a typical textbook example, to show the pervasiveness of wrong priors in standard statistical practice.
📄 Full Content
Consider bivariate normals with unit covariance matrix and mean vector restricted to a region of the Euclidean plane. Specifically, for given values a and b, the experiment consists of choosing (x, y) at random on the Euclidean plane with,
Figure 1: the prior π₁ is the true uniform over the model. The picture is not drawn to scale; the actual peaks should be more than 40 times taller than the ones displayed.
The unknown parameters are a, b ∈ R, but c = 0.1 is assumed known, and ε₁ and ε₂ are independent standard normals. The problem consists of learning the parameters θ = (a, b) from n independent observations (x₁, y₁), . . . , (xₙ, yₙ). We want to compare the performance of two priors on (a, b): the naive "ignorant" prior π₀, which takes a and b independently from N(0, 100), and the uniform prior over the model manifold, π₁, given by
π₁(a, b) = √(det g(a, b)) / Z,    (1)
where Z is a finite normalization constant. Equation (1) is just the normalized volume form of the model, computed trivially as √(det g)/Z with g the information matrix (minus the expected values of the second derivatives of the log-likelihood). This prior puts positive mass on the entire (a, b) plane (except on the line a = b, but that region has measure 0), yet it is very far from uniform, as shown in figure 1. Notice also that there are two peaks, because the likelihood is invariant under the exchange of a with b; the volume prior respects this symmetry.
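For readers who want the definitions spelled out, the block below simply restates, in LaTeX, the information matrix and volume prior described in the previous paragraph; the log-density symbol is introduced here purely for notation.

```latex
% Information matrix (Fisher metric) of the model at \theta = (a, b):
g_{ij}(\theta) \;=\; -\,\mathbb{E}_{\theta}\!\left[
    \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}}\,
    \log p(x, y \mid \theta)\right]

% Volume ("uniform over the model") prior and its normalization constant:
\pi_{1}(a, b) \;=\; \frac{\sqrt{\det g(a, b)}}{Z},
\qquad
Z \;=\; \int_{\mathbb{R}^{2}} \sqrt{\det g(a, b)}\;\mathrm{d}a\,\mathrm{d}b .
```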
With the help of the free MCMC package [1,2] it only takes a few lines of code to realize the inadequacy of the naive prior for this example. The results of the MCMC simulations are summarized in figure 2. The true parameters were fixed at a = 0.025 and b = -0.01, and independent samples were drawn from the distribution with those parameters. With the naive flat prior, the posteriors after observing 100, 500 and 1000 samples were essentially identical to the N(0, 100) priors, i.e. nothing was learned from the data. With 10000 observations the program was able to learn the values (0.048 ± 0.24, 0.039 ± 0.24) for the true parameters. In contrast, after just 100 observations the posterior with the true uniform prior estimates the parameters very precisely as (0.025 ± 0.020, -0.032 ± 0.016): still an order of magnitude more accuracy than the flat-prior posterior achieves with two orders of magnitude more data!
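The author's simulation code is not reproduced here, and the model's explicit mean map did not survive this extraction, so the following is only a generic random-walk Metropolis sketch in Python (not the paper's code): `mean_map` and `data` are hypothetical stand-ins, and `log_prior_flat` encodes the naive N(0, 100) prior (taking 100 as the standard deviation).

```python
import numpy as np

def log_likelihood(theta, data, mean_map):
    """Gaussian log-likelihood with unit covariance; mean_map(a, b) returns the mean (u, v)."""
    u, v = mean_map(theta[0], theta[1])
    return -0.5 * np.sum((data - np.array([u, v])) ** 2)

def metropolis(log_post, theta0, n_steps=20000, step=0.05, seed=0):
    """Plain random-walk Metropolis sampler over theta = (a, b)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        out[i] = theta
    return out

# --- hypothetical placeholders: the paper's actual mean map and data are not in this extraction ---
mean_map = lambda a, b: (np.tanh(a + b), np.tanh(a * b))          # stand-in for the model's (u, v) map
log_prior_flat = lambda th: -0.5 * np.sum(th ** 2) / 100.0 ** 2   # naive prior: a, b ~ N(0, 100)
# A log_prior_volume would instead add 0.5 * log(det g(a, b)) for the model's information matrix g.

data = np.zeros((100, 2))   # placeholder for the n observed (x, y) pairs
log_post = lambda th: log_prior_flat(th) + log_likelihood(th, data, mean_map)
samples = metropolis(log_post, theta0=(0.0, 0.0))
print("posterior mean:", samples.mean(axis=0), " posterior sd:", samples.std(axis=0))
```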
To understand why the naive flat prior is so bad and the volume prior so good, let's identify the transformed region of means (u, v) given by,
as (a, b) range over the entire plane. An easy way to find the shape of this region is to pick points (a, b) at random on the plane and plot the corresponding (u, v) points. Figure 3 shows 10000 (u, v) points obtained from 10000 (a, b) points uniformly distributed inside a circle of radius 3 centered at the origin. Notice that lots of points disappear into the origin! Now take another 10000 (a, b) points, this time distributed according to π₁ with the density given in (1). Figure 4 shows these (a, b) points. Notice that they are all highly concentrated about two points close to the origin. The corresponding (u, v) points are shown in figure 5. Got it?
The computation of the exact equation of the leaf boundary in figure 5 is a nice exercise in simple optimization: find the max and min of v subject to the constraint that u = t. The max is given by the red (dark) curve R(t) in figure 6, with
The min is given by the green (light) curve,
with 0 < t < 1 in both cases.
Notice that there is a non-removable corner singularity at t = 0, even though the leaf is a piece of Euclidean space and so has zero curvature at every point.
Perhaps the first non-trivial example of a multiparameter Bayesian model is simple logistic regression (see [3, p.88]). Twenty animals were tested, five at each of four dose levels (see Table 1). The standard model for this kind of data is,
yᵢ | θᵢ ∼ Binomial(nᵢ, θᵢ),    i = 1, . . . , k,
where θᵢ is the probability of death for animals given dose xᵢ. The standard logistic dose-response relation is:
logit(θᵢ) = log( θᵢ / (1 − θᵢ) ) = a + b xᵢ.
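The explicit formulas below are lost in this extraction, but the ingredients are standard, so a numerical sketch is easy to write. The Python snippet that follows (not from the paper) evaluates the binomial-logistic log-likelihood and the Fisher-information volume element √det g on a grid of (a, b) values; the dose and death counts are the usual textbook bioassay data cited as [3], assumed here because the paper's Table 1 is not preserved, and the grid ranges are chosen only for illustration.

```python
import numpy as np

# Bioassay data as in the standard textbook example [3]; the paper's Table 1 is not
# preserved in this extraction, so these values are an assumption.
x = np.array([-0.86, -0.30, -0.05, 0.73])   # dose levels
n = np.array([5, 5, 5, 5])                  # animals tested per dose
y = np.array([0, 1, 3, 5])                  # deaths observed

def theta(a, b):
    """Logistic dose-response: P(death | dose x_i) under parameters (a, b)."""
    return 1.0 / (1.0 + np.exp(-(a + b * x)))

def log_likelihood(a, b):
    """Binomial log-likelihood of the observed deaths (binomial coefficients dropped)."""
    t = theta(a, b)
    return np.sum(y * np.log(t) + (n - y) * np.log(1.0 - t))

def sqrt_det_g(a, b):
    """Volume element sqrt(det g) of the Fisher information for this model.
    For binomial-logistic data, g = sum_i n_i t_i (1 - t_i) * [[1, x_i], [x_i, x_i^2]]."""
    t = theta(a, b)
    w = n * t * (1.0 - t)
    g = np.array([[np.sum(w),      np.sum(w * x)],
                  [np.sum(w * x),  np.sum(w * x ** 2)]])
    return np.sqrt(np.linalg.det(g))

# Evaluate the (unnormalized) volume prior and its posterior on an illustrative grid of (a, b).
aa, bb = np.meshgrid(np.linspace(-4, 8, 200), np.linspace(-5, 30, 200))
prior = np.vectorize(sqrt_det_g)(aa, bb)
post = prior * np.exp(np.vectorize(log_likelihood)(aa, bb))
print("grid cell where sqrt(det g) is largest:",
      np.unravel_index(prior.argmax(), prior.shape))
```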
The joint distribution of (y₁, . . . , yₖ) is a function of the unknown parameters (a, b), and straightforward (but tedious) calculations give the volume element dV = √(det g) da db in the (a, b) parameterization as
This is a strange-looking density (see figure 7). In particular, this prior is proper and it assigns a correlation of about 0.5 between a and b. This correlation is known a priori, from the underlying geometry alone. In fact, the volume prior provides a better fit to the data than the standard diffuse naive prior that models a and b as independent variables with large variances. Figure 8 shows the results of the posterior simulations with both priors: left panel with the naive prior, right panel with the volume prior. The red (dark) middle curves are the logistic curves associated with the mean posterior values of (a, b) (computed from 100 thousand posterior samples). The pictures also show 500 logistic curves obtained by sampling 500 (a, b) pairs from the available posterior samples. There is clearly more spread of logistic curves on the right panel than on the left. This is compatible with the fact that the volume prior samples uniformly over the manifold. Just as in the cooked-up example, the over-spread (a, b) points cover only a small region of the manifold.

Often, though, the total volume of the model manifold M is infinite; there is no uniform distribution over M. However, the underlying information geometry provides the following class of priors, given as scalar density fields defined invariantly on M by,
where p ∈ M, t is a probability distribution guessing the actual distribution of the data, δ and ν are scalar parameters in [0, 1], α > 0 is large enough so that Z < ∞, and I_δ(p : t) is the δ-information deviation between the (unnormalized) distributions p and t, given by,
where the integral is over the whole data space manifold. This family of priors exists for any regular model and it has many remarkable properties. In particular, this family maximizes a simple and objective notion of ignorance; for details see my A geometric theory of ignorance. The hyperparameters can be estimated with priors of the same kind or with a nonparametric prior of the Dirichlet-process type (which could itself be seen as part of this family if we allow M to be infinite dimensional). There are still many open problems, but the road ahead seems clear: more geometry.