Statistical Decisions Using Likelihood Information Without Prior Probabilities
This paper presents a decision-theoretic approach to statistical inference that satisfies the likelihood principle (LP) without using prior information. Unlike the Bayesian approach, which also satisfies the LP, we do not assume knowledge of the prior distribution of the unknown parameter. With respect to the information that can be obtained from an experiment, our solution is more efficient than Wald's minimax solution. However, with respect to the information assumed to be known before the experiment, our solution demands less input than the Bayesian solution.
💡 Research Summary
The paper proposes a decision‑theoretic framework that adheres to the Likelihood Principle (LP) while completely dispensing with prior probability distributions. Traditional Bayesian inference satisfies the LP but requires a subjective or empirically justified prior, which may be unavailable or unreliable in many practical settings. Wald’s minimax approach, on the other hand, avoids priors by focusing on the worst‑case risk over the entire parameter space, but it typically ignores the specific information contained in the observed data and can be overly conservative. The authors bridge this gap by constructing a “likelihood‑based decision rule” that uses only the observed data’s likelihood function and a pre‑specified loss function.
The core construction is as follows. For an observed sample $x$ and a parametric model with likelihood $L(\theta \mid x)$, the conditional risk of an action $a$ is defined as a likelihood-weighted average of the loss $\ell(\theta, a)$:

$$
R(a \mid x) \;=\; \frac{\int_\Theta L(\theta \mid x)\,\ell(\theta, a)\,d\theta}{\int_\Theta L(\theta \mid x)\,d\theta}.
$$
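To make the definition concrete, here is a minimal numerical sketch of this conditional risk evaluated on a parameter grid. The Bernoulli likelihood, squared-error loss, and grid resolution are illustrative assumptions of ours, not a worked example from the paper; the final step of picking the risk-minimizing action follows the decision-theoretic framing above.

```python
import numpy as np

def conditional_risk(actions, theta_grid, likelihood, loss):
    """Likelihood-weighted average loss R(a | x) for each candidate action.

    theta_grid : uniform grid over the parameter space (the uniform spacing
                 lets the grid step cancel in the normalization)
    likelihood : L(theta | x) evaluated on theta_grid (any constant factor
                 cancels after normalization)
    loss       : callable loss(theta, a)
    """
    weights = likelihood / likelihood.sum()          # normalized likelihood weights
    return np.array([(weights * loss(theta_grid, a)).sum() for a in actions])

# Illustrative assumptions: k successes in n Bernoulli trials,
# squared-error loss for estimating theta.
n, k = 10, 7
theta = np.linspace(1e-6, 1 - 1e-6, 1001)
lik = theta**k * (1 - theta)**(n - k)                # L(theta | x), up to a constant
sq_loss = lambda t, a: (t - a) ** 2

actions = np.linspace(0.0, 1.0, 101)
risk = conditional_risk(actions, theta, lik, sq_loss)
best = actions[np.argmin(risk)]
print(f"action minimizing conditional risk: {best:.3f}")
# -> ~0.670, close to the mean (k+1)/(n+2) = 8/12 of the normalized likelihood
```

For squared-error loss the minimizer of $R(a \mid x)$ is the mean of the normalized likelihood, which here is a Beta$(k+1,\, n-k+1)$ density with mean $(k+1)/(n+2) \approx 0.667$; the printed grid value lands at the nearest grid point.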