On the elicitation of continuous, symmetric, unimodal distributions

In this brief note, we highlight some difficulties that can arise when fitting a continuous, symmetric, unimodal distribution to a set of an expert's judgements. A simple analysis shows it is possible to fit a Cauchy distribution to an expert's beliefs when their beliefs actually follow a normal distribution. This example stresses the need for careful distribution fitting and for feedback to the expert about what the fitted distribution implies about their beliefs.


💡 Research Summary

The paper addresses a subtle but important problem that arises when translating expert judgements into a formal probability distribution. In many decision‑making contexts—risk analysis, Bayesian inference, policy design—practitioners often assume that the expert's belief can be represented by a continuous, symmetric, unimodal (CSU) distribution. This assumption is attractive because it imposes a simple shape (single peak, mirror symmetry) while allowing a wide range of possible tails. The authors demonstrate, however, that the limited information typically elicited from an expert (a few quantiles, a mean, perhaps a variance) is insufficient to uniquely identify a CSU distribution. In particular, they show that a normal distribution and a Cauchy distribution can both satisfy the same set of elicited quantiles, yet they differ dramatically in tail behavior.

The authors construct a simulation in which an expert's true belief follows a standard normal distribution. The expert is asked only for three quantiles: the 5%, 50% (median), and 95% points. Using these three numbers, the authors fit a Cauchy distribution by solving the quantile equations: the location parameter is set to the median, and the scale parameter is chosen so that the 5% and 95% points match. The fitted Cauchy reproduces the supplied quantiles almost exactly, but its heavy tails assign far more probability to extreme outcomes than the true normal belief. Consequently, any downstream analysis that relies on the fitted distribution—such as calculating Value‑at‑Risk, setting safety margins, or performing Bayesian updating—will be biased toward excessive conservatism or, paradoxically, toward under‑estimating the likelihood of moderate deviations.
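The quantile-matching fit and the resulting tail mismatch can be reproduced in a few lines of Python. This is a sketch rather than the authors' code; it uses only the standard normal's 5%/50%/95% quantiles and the closed-form quantile function of the Cauchy distribution.

```python
import math

# Expert's three elicited quantiles, assuming their true belief is standard normal.
q05, q50, q95 = -1.6449, 0.0, 1.6449  # 5%, 50%, 95% points of N(0, 1)

# A symmetric Cauchy has quantile function x0 + gamma * tan(pi * (p - 0.5)).
# Matching the median gives x0 = q50; matching the 95% point gives gamma.
x0 = q50
gamma = (q95 - x0) / math.tan(math.pi * (0.95 - 0.5))  # ~0.26

def normal_sf(x):
    """Upper-tail probability P(X > x) for the standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_sf(x, x0, gamma):
    """Upper-tail probability P(X > x) for the fitted Cauchy."""
    return 0.5 - math.atan((x - x0) / gamma) / math.pi

# Both distributions reproduce the elicited quantiles exactly, yet the
# fitted Cauchy puts roughly 20x more mass beyond x = 3 than the normal.
for x in (2.0, 3.0, 4.0):
    print(f"P(X > {x}): normal = {normal_sf(x):.5f}, Cauchy = {cauchy_sf(x, x0, gamma):.5f}")
```

The expert cannot distinguish these two fits from the three numbers they supplied, which is exactly why the paper argues for feeding the implied tail probabilities back to them.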

Two root causes are identified. First, experts rarely receive feedback about what their supplied numbers imply for the shape of the underlying distribution. Without seeing a visual representation of the implied tail weight, they cannot correct misconceptions about the extremity of their own judgments. Second, the statistical fitting routine often imposes only the minimal CSU constraints, ignoring any prior knowledge about tail thickness. In the absence of explicit tail‑shape priors, the algorithm may select a heavy‑tailed distribution like the Cauchy simply because it satisfies the symmetry and unimodality requirements.

To mitigate these issues, the authors propose a three‑step practical protocol. (1) Conduct iterative elicitation sessions that gather additional information about tail beliefs, such as direct probability assessments for extreme events or qualitative statements about “how heavy” the tails should be. (2) Provide immediate, graphical feedback to the expert showing the fitted distribution, highlighting the probability mass in the tails versus the central region, and asking the expert to confirm or adjust. (3) Adopt a Bayesian framework where a prior distribution over possible CSU families is specified (e.g., a prior favoring normal‑like tails). Expert responses are then used to update this prior, yielding a posterior that reflects both the elicited quantiles and the prior tail assumptions. This approach makes the tail‑shape an explicit component of the model rather than an implicit by‑product of limited data.
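Step (3) can be sketched as a discrete Bayesian update over candidate tail shapes. Everything here beyond the two tail probabilities is an illustrative assumption, not the paper's specification: the two-family candidate set, the 0.8/0.2 prior, the expert's stated value, and the Gaussian noise model on the log scale are all hypothetical choices made for the sketch.

```python
import math

# Tail probabilities P(X > 3) for two CSU candidates fitted to the same
# 5%/50%/95% quantiles of a standard normal (normal vs Cauchy fit).
tail_probs = {"normal": 0.00135, "cauchy": 0.02758}
prior = {"normal": 0.8, "cauchy": 0.2}  # prior favouring normal-like tails

stated = 0.002  # expert's direct assessment of P(X > 3) -- hypothetical

def likelihood(family, stated, sigma=1.0):
    """Assumed noise model: stated log-probability = true log tail
    probability + Gaussian noise with standard deviation sigma."""
    d = math.log(stated) - math.log(tail_probs[family])
    return math.exp(-0.5 * (d / sigma) ** 2)

# Bayes' rule over the two candidate families.
unnorm = {f: prior[f] * likelihood(f, stated) for f in prior}
total = sum(unnorm.values())
posterior = {f: w / total for f, w in unnorm.items()}
print(posterior)  # the extra tail judgement strongly favours the normal fit
```

The point of the sketch is structural: one additional, explicitly elicited tail judgement is enough to make the tail shape part of the model rather than an accidental by-product of the quantile fit.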

The conclusion emphasizes that careful distribution fitting and transparent feedback are essential for preserving the integrity of expert‑driven analyses, especially in domains where tail risk is critical. The paper calls for further research on elicitation methods that can handle asymmetric or multimodal beliefs and on systematic ways to encode expert uncertainty about distributional shape into the prior. By doing so, practitioners can avoid the pitfall of inadvertently fitting a Cauchy when the expert’s true belief is much more benign, thereby improving the reliability of risk assessments and policy recommendations.

