Partially Identified Ambiguity
This paper develops a theory of learning under ambiguity induced by the decision maker’s beliefs about the collection of data correlated with the true state of the world. Within our framework, two classical results on Bayesian learning extend to the setting with ambiguity: experiments are equivalent to distributions over posterior beliefs, and Blackwell’s more informative and more valuable orders coincide. When applied to the setting of robust Bayesian analysis, our results clarify the source of time inconsistency in the Gamma-minimax problem and provide an argument in favor of the conditional Gamma-minimax criterion. We also apply our results to a persuasion game to illustrate that our model provides a natural benchmark for communication under ambiguity.
💡 Research Summary
The paper develops a formal theory of learning when the decision‑maker (DM) faces “partially identified ambiguity.” The DM knows only a coarse partition Φ of the true state space Θ and a reduced‑form prior τ that assigns probabilities to the cells of Φ. A set of priors P is said to be partially identified by (τ, Φ) if it consists of all probability distributions on Θ that respect the cell‑wise probabilities τ(ϕ) while allowing any conditional distribution within each cell. This structure sits between the extremes of full identification (a singleton prior) and full ambiguity (the whole simplex Δ(Θ)).
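The cell-wise constraint described above can be made concrete with a small sketch. The example below is purely illustrative (the states, partition, and probabilities are ours, not the paper's): it checks whether a candidate prior belongs to the partially identified set, i.e., matches τ on every cell of Φ while leaving the within-cell conditionals free.

```python
# Toy illustration (not from the paper): Theta = {t1, t2, t3},
# partition Phi = {{t1, t2}, {t3}}, reduced-form prior tau = (0.6, 0.4).
# The partially identified set P consists of all priors on Theta whose
# cell probabilities match tau; the split of mass inside {t1, t2} is free.

TAU = {frozenset({"t1", "t2"}): 0.6, frozenset({"t3"}): 0.4}

def in_partially_identified_set(p, tau=TAU, tol=1e-9):
    """Check whether a prior p (dict: state -> prob) respects tau cell-wise."""
    if any(v < -tol for v in p.values()):          # probabilities nonnegative
        return False
    if abs(sum(p.values()) - 1.0) > tol:           # probabilities sum to one
        return False
    # each cell of the partition must carry exactly its tau-mass
    return all(abs(sum(p[s] for s in cell) - mass) <= tol
               for cell, mass in tau.items())

# Any split of the 0.6 between t1 and t2 is admissible...
assert in_partially_identified_set({"t1": 0.25, "t2": 0.35, "t3": 0.4})
# ...but moving mass across cells is not.
assert not in_partially_identified_set({"t1": 0.5, "t2": 0.2, "t3": 0.3})
```

The singleton-prior extreme corresponds to Φ being the finest partition (every cell pins down one state); full ambiguity corresponds to the trivial partition with τ placing all mass on the single cell Θ.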
The authors introduce the notion of a “consistent experiment.” An experiment is a Blackwell experiment π:Θ→Δ(Y), mapping each state to a distribution over observable signals in Y. The DM updates each prior in P by Bayes’ rule (prior‑by‑prior updating), producing a posterior set P(y) for each signal realization y. Consistency requires that the collection of posterior sets satisfies a set‑valued martingale property the authors call Aumann‑plausibility: the Aumann expectation of the posterior sets equals the prior set. In other words, averaging the posterior beliefs (with respect to the set‑valued expectation) must reproduce the original ambiguity.
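The condition described above can be sketched in formulas; the notation here is ours and the paper's formal statement may differ (in particular, with multiple priors the signal marginal λ itself varies with the prior, a subtlety suppressed below).

```latex
% Single-prior Bayes-plausibility: posteriors average back to the prior.
\sum_{y \in Y} \lambda(y)\, \mu_y = \mu_0
% Set-valued analogue (Aumann-plausibility, sketched): the Aumann
% expectation of the posterior sets P(y) recovers the prior set P,
% where the Aumann integral collects the expectations of all
% measurable selections y \mapsto p_y with p_y \in P(y):
\int_Y P(y)\, d\lambda(y)
  \;=\; \Bigl\{ \textstyle\int_Y p_y \, d\lambda(y) \;:\;
        p_y \in P(y) \text{ for all } y \Bigr\}
  \;=\; P
```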
The first main result shows that, under prior‑by‑prior updating, an experiment is consistent if and only if it generates Aumann‑plausible distributions over posterior beliefs, and this happens exactly when the prior set is partially identified. This generalizes the classic Bayesian result that any experiment yields a Bayes‑plausible distribution of posteriors when there is a single prior. The proof hinges on the linear constraints imposed by the partition Φ; only under those constraints does the martingale property survive the set‑valued setting.
The second main result adds a stronger structural condition called “maximal partial identification.” Here the prior set admits a Minkowski decomposition into extreme subsets with disjoint supports (essentially a partition of Θ into independent sub‑priors). Under this condition, consistency and Aumann‑plausibility become fully equivalent: every Aumann‑plausible posterior distribution can be generated by some consistent experiment, and vice versa. This extends the “splitting” characterization of Aumann and Maschler (1995) to the multiple‑prior environment.
With these foundations the authors revisit two classic topics. First, they prove that Blackwell’s two definitions of informativeness—(i) the more‑informative order (existence of a garbling) and (ii) the more‑valuable order (higher expected utility for all monotone value functions)—coincide even under partially identified ambiguity, provided the DM’s value function is of the Gilboa‑Schmeidler max‑min form. Thus the familiar Blackwell ordering remains valid when the DM’s beliefs are set‑valued.
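The two orders compared above can be sketched as follows; the notation is illustrative rather than the paper's.

```latex
% (i) More informative: \pi' is a garbling of \pi, i.e., there is a
% stochastic map g : Y \to \Delta(Y') with
\pi'(y' \mid \theta) \;=\; \sum_{y \in Y} g(y' \mid y)\, \pi(y \mid \theta)
   \quad \text{for all } \theta,\, y'.
% (ii) More valuable: under the Gilboa-Schmeidler max-min form, each
% signal realization y is worth
v(y) \;=\; \max_{a \in A} \; \min_{p \in P(y)} \; \mathbb{E}_p\!\bigl[u(a,\theta)\bigr],
% and \pi is more valuable than \pi' if its ex-ante value is weakly
% higher in every such decision problem.
```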
Second, they address the time‑inconsistency of the Gamma‑minimax rule. The standard Gamma‑minimax criterion selects an ex‑ante experiment that minimizes the worst‑case expected loss over the whole prior set, while the conditional Gamma‑minimax selects the optimal action after observing the signal. When the prior set is not a singleton, these criteria diverge. By exploiting the Aumann‑plausibility characterization, the authors define a time‑consistent “Gamma*‑minimax” rule that retains the robustness properties of Gamma‑minimax and, crucially, coincides with the conditional Gamma‑minimax action. This provides a normative argument in favor of the conditional criterion in settings with partial identification.
Finally, the theory is applied to a persuasion (information design) game. The sender (e.g., a prosecutor) can only generate signals that are partially identified with respect to the underlying state (e.g., evidence can reveal involvement but not intent). Consistency and Aumann‑plausibility impose constraints on the sender’s feasible signaling strategies. Because the set of feasible posteriors is exactly the set of Aumann‑plausible distributions, the authors can employ the standard concavification technique of Kamenica and Gentzkow (2011) to solve for equilibrium. The example illustrates how legal or technological limits on data generation shape both beliefs and strategic communication, yielding a more realistic model of ambiguous persuasion.
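For reference, the concavification technique invoked above is standard in the single-prior Bayesian persuasion model of Kamenica and Gentzkow (2011); the paper's contribution is that the same machinery applies once feasible posteriors are pinned down by Aumann-plausibility. In the single-prior form:

```latex
% v(\mu): sender's value at posterior \mu;  cav v: its concavification,
% the smallest concave function majorizing v. The sender's optimal value
% at prior \mu_0 is attained by splitting \mu_0 into Bayes-plausible
% posteriors:
V(\mu_0) \;=\; (\operatorname{cav} v)(\mu_0)
   \;=\; \sup \Bigl\{ \sum_{y} \lambda(y)\, v(\mu_y) \;:\;
       \sum_{y} \lambda(y)\, \mu_y = \mu_0 \Bigr\}
```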
Overall, the paper makes three substantive contributions: (1) it formalizes partially identified ambiguity as a bridge between econometric partial identification and decision‑theoretic ambiguity; (2) it introduces Aumann‑plausibility as the appropriate martingale condition for set‑valued beliefs, thereby extending classic Bayesian learning results; and (3) it shows how this framework resolves known puzzles in robust decision‑making (Gamma‑minimax) and information design (persuasion under ambiguity). The results are broadly relevant to dynamic economics, robust statistics, and any domain where data are informative but not fully identifying.