On the estimation of a parameter with incomplete knowledge on a nuisance parameter


In this paper we consider the problem of estimating a parameter of a probability distribution when we have some prior information on a nuisance parameter. We start with the very simple case where the value of the nuisance parameter is known exactly; the complete likelihood is the classical tool in this case. We then consider the case where we are given a prior probability distribution on this nuisance parameter; the marginal likelihood is the classical tool here. Next, we consider the case where we only know a fixed number of its moments; here, we may use the maximum entropy (ME) principle to assign a prior law, taking us back to the previous case. Finally, we consider the case where we know only its median. To our knowledge, there is no classical tool for this case. We therefore propose a new tool, based on a recently proposed alternative to the marginal probability distribution. The new criterion is obtained by first remarking that the marginal distribution can be viewed as the mean value of the original distribution over the prior probability law of the nuisance parameter, and then using the median in place of the mean. In this paper, we first summarize the classical tools used for the first three cases, then give the precise definition of this new criterion and its properties, and finally present a few examples to show the differences between these cases.

Key Words: Nuisance parameter, Bayesian inference, Maximum Entropy, Marginalization, Incomplete knowledge, Mean and Median of the Likelihood over the prior distribution


💡 Research Summary

The paper addresses the classic problem of estimating a parameter of interest, θ, when the statistical model also contains a nuisance parameter, ν, whose knowledge is incomplete. It systematically distinguishes four scenarios of increasingly vague information about ν and shows how each leads to a different inference tool.

  1. Exact knowledge of ν. When ν is known with certainty (ν = ν₀), the full likelihood L(θ,ν)=p(x|θ,ν) can be used directly: the maximum‑likelihood estimator (MLE) is θ̂_ML = arg max_θ L(θ,ν₀). Formally, this is the marginal approach under the degenerate prior π(ν)=δ(ν−ν₀), for which the marginal likelihood reduces to L(θ,ν₀). This case serves as a baseline.

  2. A full prior distribution for ν. If a proper prior π(ν) is available, the standard Bayesian marginalisation is applied: the marginal likelihood p(x|θ)=∫L(θ,ν)π(ν)dν replaces the full likelihood. The posterior for θ becomes p(θ|x)∝p(x|θ)π(θ). The paper discusses analytical and numerical strategies (Laplace approximation, Monte‑Carlo integration) for evaluating the integral and notes that all classical Bayesian tools are recovered.

  3. Only a finite set of moments of ν is known. When only moments such as E[ν] or E[ν²] are available, the maximum entropy (ME) principle converts the moment constraints into a prior π(ν) — the least‑committal distribution compatible with them — after which the marginalisation of case 2 applies unchanged.

  4. Only the median of ν is known. No classical tool covers this case. The paper's contribution is a new criterion: noting that the marginal likelihood is the mean value of L(θ,ν) over the prior law π(ν), it uses the median of L(θ,ν) over the prior in place of the mean, and develops the definition and properties of this median‑based criterion. Minimal code sketches of all four cases follow this list.
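
A minimal sketch of cases 1 and 2, assuming a hypothetical Gaussian model x ~ N(θ, ν²) with the scale ν as nuisance parameter and a Gamma prior on ν; the model, prior, and all names here are illustrative choices, not the paper's examples. It contrasts maximising the full likelihood at a known ν₀ with maximising a Monte‑Carlo estimate of the marginal likelihood p(x|θ) = ∫ L(θ,ν) π(ν) dν.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=50)  # synthetic data from the toy model

# Case 1: nuisance scale known exactly -> maximise the full likelihood L(theta, nu0).
nu0 = 1.5

def neg_log_lik(theta, nu):
    return -stats.norm.logpdf(x, loc=theta, scale=nu).sum()

theta_ml = optimize.minimize_scalar(lambda t: neg_log_lik(t, nu0)).x

# Case 2: a full prior pi(nu) is available -> marginal likelihood
# p(x|theta) = E_pi[L(theta, nu)], estimated by Monte-Carlo over draws of nu.
nu_draws = stats.gamma(a=4.0, scale=0.4).rvs(size=2000, random_state=rng)

def neg_log_marginal_lik(theta):
    log_l = np.array([-neg_log_lik(theta, nu) for nu in nu_draws])
    # log-mean-exp of the per-draw log-likelihoods, for numerical stability
    return -(np.logaddexp.reduce(log_l) - np.log(len(nu_draws)))

theta_marg = optimize.minimize_scalar(neg_log_marginal_lik,
                                      bounds=(0.0, 5.0), method="bounded").x
print(f"MLE with known nu: {theta_ml:.3f}, marginal-likelihood MLE: {theta_marg:.3f}")
```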
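For case 3, a sketch of turning a single moment constraint into a maximum‑entropy prior. The constraint E[ν] = m on (0, ∞) is an assumed example: on the positive half‑line the ME distribution with fixed mean is the exponential law (with fixed mean and variance on the real line it would be Gaussian), after which marginalisation proceeds exactly as in case 2.

```python
import numpy as np
from scipy import stats

# Case 3: only a moment of nu is known, say E[nu] = m with nu > 0.
# The ME principle then assigns the exponential prior pi(nu) = Exp(scale=m).
m = 1.5
me_prior = stats.expon(scale=m)

# Back to case 2: draw nu from the ME prior and marginalise.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=50)
nu_draws = me_prior.rvs(size=2000, random_state=rng)
theta = 2.0  # evaluate the marginal log-likelihood at one candidate theta
log_l = np.array([stats.norm.logpdf(x, loc=theta, scale=nu).sum() for nu in nu_draws])
log_marginal = np.logaddexp.reduce(log_l) - np.log(len(nu_draws))
print(f"log p(x | theta={theta}) under the ME prior: {log_marginal:.2f}")
```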
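Finally, a sketch of the case‑4 criterion: the median of L(θ,ν) over the prior law, in place of the mean used by the marginal likelihood. One helpful observation (our gloss, not a claim from the paper): the median commutes with monotone maps, so when L(θ,·) is monotone in ν the median of L over π(ν) is just L evaluated at the median of ν, which suggests why knowing only the median of ν can suffice. The Gaussian model and exponential draws below are the same illustrative choices as above.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
x = rng.normal(loc=2.0, scale=1.5, size=50)
nu_draws = stats.expon(scale=1.5).rvs(size=2000, random_state=rng)

def neg_log_median_lik(theta):
    # Median of L(theta, nu) over pi(nu). Since log is monotone, the median
    # of the log-likelihoods equals the log of the median likelihood.
    log_l = np.array([stats.norm.logpdf(x, loc=theta, scale=nu).sum() for nu in nu_draws])
    return -np.median(log_l)

theta_med = optimize.minimize_scalar(neg_log_median_lik,
                                     bounds=(0.0, 5.0), method="bounded").x
print(f"median-likelihood estimate of theta: {theta_med:.3f}")
```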

