An illustration of the risk of borrowing information via a shared likelihood


A concrete, stylized example illustrates that inferences may be degraded, rather than improved, by incorporating supplementary data via a joint likelihood. In the example, the likelihood is assumed to be correctly specified, as is the prior over the parameter of interest; all that is necessary for the joint modeling approach to suffer is misspecification of the prior over a nuisance parameter.


💡 Research Summary

The paper presents a simple yet powerful illustration of how borrowing information through a joint likelihood can backfire when the prior on a nuisance parameter is misspecified, even if the likelihood and the prior on the parameter of interest are correct. The setting involves a binary parameter θ∈{0,1} to be estimated from a single observation Y∼N(θ,1). In addition, a second, independent observation X∼N(θ+μ,1) is available, where μ is an unknown shift that acts as a nuisance parameter. The prior on θ is uniform (mass ½ on each value) and the loss function is 0–1 classification loss, so the Bayes estimator is the posterior mode.
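Under the uniform prior and 0–1 loss, the Bayes rule based on Y alone reduces to a simple threshold. A short derivation (a sketch using only the quantities defined above, with φ denoting the standard normal density):

\[
\hat{\theta}_y
= \arg\max_{\theta\in\{0,1\}} \pi(\theta)\,\phi(Y-\theta)
= \mathbf{1}\{Y > \tfrac{1}{2}\},
\]

since with π(0)=π(1)=½ the comparison φ(Y−1) > φ(Y) is equivalent to (Y−1)² < Y², i.e. Y > ½.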

Two estimators are compared: ˆθ_y, which uses only Y, and ˆθ_xy, which uses both X and Y via the joint likelihood f(X,Y|θ). The risk R(ˆθ)=E[1{ˆθ≠θ}], i.e. the probability of misclassifying θ, is evaluated for each estimator.
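The comparison can be reproduced with a small Monte Carlo sketch. The details below are assumptions for illustration: the misspecified prior on μ is taken to be a point mass at 0 (so the joint rule becomes 1{X+Y > 1}), while the true shift `mu_true` is nonzero; the paper's exact prior choices may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed data-generating process: theta ~ Uniform{0,1},
# Y ~ N(theta, 1), X ~ N(theta + mu_true, 1).
# The joint model wrongly assumes mu = 0 (point-mass prior).
mu_true = 3.0

theta = rng.integers(0, 2, n)
Y = theta + rng.standard_normal(n)
X = theta + mu_true + rng.standard_normal(n)

# Y-only Bayes rule under 0-1 loss: declare theta = 1 iff Y > 1/2.
theta_y = (Y > 0.5).astype(int)

# Joint rule under the (wrong) assumption mu = 0:
# the posterior mode is 1 iff X + Y > 1.
theta_xy = (X + Y > 1.0).astype(int)

risk_y = np.mean(theta_y != theta)    # ~ Phi(-1/2) ~ 0.309
risk_xy = np.mean(theta_xy != theta)  # larger when mu_true != 0
print(f"risk using Y only : {risk_y:.3f}")
print(f"risk using X and Y: {risk_xy:.3f}")
```

With `mu_true = 3`, the joint estimator's risk is roughly 0.46 versus about 0.31 for the Y-only rule: borrowing the extra observation through the shared likelihood strictly degrades the inference, exactly as the example warns.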

