Weighted Garbling
We introduce an information order on experiments based on weighted garbling, a generalization of the standard notion of garbling. In this order, an experiment is more informative than another if the latter is a weighted garbling of the former. We show that this is equivalent to ordinary garbling conditional on a payoff-irrelevant event. We also characterize the order in terms of induced posterior belief distributions, showing that it depends only on their support. Our main results provide two decision-theoretic characterizations of this order. First, in static decision problems, one experiment dominates another if and only if its value of information is at least a fixed fraction of the other’s across all problems. Second, in a class of stopping time problems with a hidden Markov process and repeated experimentation, one experiment dominates another if and only if it yields weakly higher expected payoffs for every problem with a regular prior.
💡 Research Summary
The paper introduces a new preorder on information structures called “weighted garbling,” which relaxes the classic Blackwell garbling relation. In the Blackwell framework, one experiment (or signal structure) is more informative than another if the latter can be obtained by adding noise that carries no additional state information; this yields a very strong partial order because it requires dominance of expected pay‑offs in every possible static decision problem. Many pairs of experiments are incomparable under this order, motivating the authors to propose a more permissive notion.
A weighted garbling of experiment Π′ into Π is defined by two components: (i) a non‑negative weight function γ on the signal space of Π′, and (ii) a standard Blackwell garbling kernel ϕ. Formally, for every state θ and Borel set X, the conditional distribution of Π satisfies
π_θ(X) = ∫_{S′} γ(s′) · ϕ(X | s′) π′_θ(ds′).
The “size” β of the weighted‑garbling relationship is the essential supremum of γ (the smallest possible upper bound on γ across all admissible representations). By construction β≥1, with β=1 exactly recovering ordinary Blackwell garbling. Intuitively, γ first re‑weights the signals of Π′ in a state‑independent way, producing a new experiment γΠ′; then ϕ garbles this intermediate experiment in the usual Blackwell sense to obtain Π.
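For finite signal spaces the definition reduces to matrix algebra. The following minimal sketch (hypothetical numbers, not taken from the paper) builds a weighted garbling Π of a three‑signal experiment Π′, with the weights γ chosen so that Σ_{s′} γ(s′)π′_θ(s′) = 1 in every state, which keeps Π a valid experiment:

```python
import numpy as np

# Hypothetical finite experiment Pi' (rows: states, columns: signals).
pi_prime = np.array([[0.50, 0.25, 0.25],
                     [0.25, 0.25, 0.50]])

# State-independent weights gamma, chosen so that for every state theta
# sum_s' gamma(s') * pi'_theta(s') = 1, so the result is a valid experiment.
gamma = np.array([2/3, 2.0, 2/3])

# Ordinary Blackwell garbling kernel phi (rows sum to 1); identity for simplicity.
phi = np.eye(3)

# Weighted garbling: pi_theta(x) = sum_s' gamma(s') * phi(x|s') * pi'_theta(s')
pi = (pi_prime * gamma) @ phi

beta = gamma.max()   # the "size" of this representation (ess sup of gamma)
print(pi)            # each row sums to 1, so pi is a valid experiment
print(beta)          # 2.0, i.e. beta^{-1} = 1/2 for this representation
```

Note that β here is the size of one admissible representation; the size of the weighted‑garbling relation itself is the smallest such value over all representations.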
The authors provide several equivalent characterizations. First, weighted garbling is shown to be equivalent to “conditional informativeness”: there exists a payoff‑irrelevant event E such that, conditional on E, Π′ becomes Blackwell‑more informative than Π. The maximal probability of such an event is precisely β⁻¹, linking the size directly to the probability of the conditioning event.
Second, a belief‑based characterization is given. In the Blackwell order, Π′ dominates Π if, for every full‑support prior, the posterior belief distribution induced by Π′ is a mean‑preserving spread of that induced by Π. For weighted garbling, the requirement weakens: there must exist a distribution over posteriors that is “close” to the posterior distribution of Π′ and that is a mean‑preserving spread of Π’s posterior distribution. When signal spaces are finite, this reduces to a simple geometric condition: the convex hull of Π′’s posterior beliefs must contain the convex hull of Π’s posterior beliefs. In binary‑state settings this means the extreme posterior beliefs generated by Π′ lie outside (or at least as far from ½ as) those generated by Π. This condition is far easier to verify than checking a full mean‑preserving spread.
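In the binary‑state case this support test is a one‑line interval check. The sketch below (illustrative experiments, not from the paper) computes posterior beliefs by Bayes’ rule and verifies that Π’s posteriors lie inside the hull of Π′’s:

```python
import numpy as np

def posteriors(pi, prior):
    """Posterior P(theta = 0 | signal) for a binary-state experiment.
    pi: 2 x n array of conditional signal probabilities."""
    joint = pi * prior[:, None]           # joint P(theta, s)
    return joint[0] / joint.sum(axis=0)   # Bayes' rule

prior = np.array([0.5, 0.5])
pi_prime = np.array([[0.50, 0.25, 0.25],
                     [0.25, 0.25, 0.50]])  # hypothetical experiment Pi'
pi       = np.array([[0.60, 0.40],
                     [0.40, 0.60]])         # hypothetical experiment Pi

q_prime = posteriors(pi_prime, prior)   # posteriors 2/3, 1/2, 1/3
q       = posteriors(pi, prior)         # posteriors 0.6, 0.4

# With one-dimensional beliefs, convex-hull containment is an interval check:
dominates = q_prime.min() <= q.min() and q.max() <= q_prime.max()
print(dominates)   # True: Pi' dominates Pi under the support test
```

Unlike a full mean‑preserving‑spread check, this condition involves no search over joint distributions, which is the tractability gain the authors emphasize.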
The paper’s first major decision‑theoretic result (Theorem 4) connects weighted garbling to the worst‑case ratio of values of information across all static decision problems. If Π is a β‑weighted garbling of Π′, then for every decision problem A, the value of information satisfies V_A(Π′) ≥ β⁻¹ · V_A(Π). Conversely, if such a uniform lower bound exists, Π must be a weighted garbling of Π′ with size at most the reciprocal of the bound. Thus weighted garbling guarantees that Π′ delivers at least a fixed fraction (β⁻¹) of the informational benefit of Π in every possible static problem.
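The bound is easy to check numerically in finite problems. This sketch (hypothetical numbers) computes the value of information V_A for a simple match‑the‑state problem and verifies the inequality for a pair in which Π admits a size‑2 weighted‑garbling representation from Π′ (γ = [2/3, 2, 2/3], ϕ the identity):

```python
import numpy as np

def value_of_information(pi, prior, u):
    """Expected-payoff gain from observing experiment pi before acting.
    pi: states x signals conditional probabilities; u: actions x states payoffs."""
    joint = pi * prior[:, None]                    # P(theta, s)
    p_s = joint.sum(axis=0)                        # signal marginals
    post = joint / p_s                             # posteriors, states x signals
    v_info = (p_s * (u @ post).max(axis=0)).sum()  # best action per signal
    v_prior = (u @ prior).max()                    # best action under prior alone
    return v_info - v_prior

prior = np.array([0.5, 0.5])
u = np.eye(2)   # "match the state": payoff 1 iff action equals state

pi_prime = np.array([[0.50, 0.25, 0.25],
                     [0.25, 0.25, 0.50]])
# Pi below is a weighted garbling of Pi' of size beta = 2
# (gamma = [2/3, 2, 2/3], phi = identity):
pi = np.array([[1/3, 1/2, 1/6],
               [1/6, 1/2, 1/3]])
beta = 2.0

V_prime = value_of_information(pi_prime, prior, u)
V = value_of_information(pi, prior, u)
print(V_prime, V)   # 0.125 and 0.0833...; V_prime >= V / beta as Theorem 4 requires
```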
The second major result (Theorem 5) studies a class of optimal‑stopping problems with a hidden Markov state that evolves over time. The decision maker may repeat the same experiment many times before making a single irreversible action at a terminal date T. Under regularity conditions on the prior (not too extreme) and on the Markov transition, the authors prove that if Π′ dominates Π in the weighted‑garbling sense, then there exists a finite horizon T′ such that for all T ≥ T′, Π′ yields a weakly higher expected payoff than Π in every such dynamic problem. The converse also holds: if Π′ is not a weighted garbling of Π, one can construct a dynamic problem (with a regular prior) where Π yields a strictly higher payoff for arbitrarily large horizons. This dynamic characterization shows that weighted garbling captures the idea that an experiment is more useful when the agent can acquire information repeatedly.
Practically, the size β can be computed from data when signal spaces are finite by solving a linear program that minimizes the supremum of γ subject to the weighted‑garbling constraints. This makes the concept readily applicable in empirical work where researchers compare information policies, design surveys, or evaluate monitoring technologies.
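A sketch of that linear program (hypothetical experiments, SciPy’s `linprog`): under the substitution ψ(x, s′) = γ(s′)·ϕ(x | s′), so that γ(s′) = Σ_x ψ(x, s′), the bilinear weighted‑garbling constraints become linear in ψ, and minimizing the maximum of γ is a standard LP.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical finite experiments (rows: states, columns: signals).
pi_prime = np.array([[0.18, 0.80, 0.02],   # Pi', three signals
                     [0.02, 0.80, 0.18]])
pi = np.array([[0.8, 0.2],                 # Pi, two signals
               [0.2, 0.8]])

n_th, n_sp = pi_prime.shape
n_x = pi.shape[1]
n_psi = n_x * n_sp

# Variables: psi[x, s'] = gamma(s') * phi(x|s') (flattened) and t = max gamma.
# Minimize t subject to
#   sum_{s'} psi[x, s'] * pi'_theta(s') = pi_theta(x)   (weighted-garbling identity)
#   sum_x psi[x, s'] <= t                               (t bounds each gamma(s'))
c = np.zeros(n_psi + 1)
c[-1] = 1.0

A_eq = np.zeros((n_th * n_x, n_psi + 1))
b_eq = np.zeros(n_th * n_x)
for th in range(n_th):
    for x in range(n_x):
        for sp in range(n_sp):
            A_eq[th * n_x + x, x * n_sp + sp] = pi_prime[th, sp]
        b_eq[th * n_x + x] = pi[th, x]

A_ub = np.zeros((n_sp, n_psi + 1))
for sp in range(n_sp):
    for x in range(n_x):
        A_ub[sp, x * n_sp + sp] = 1.0
A_ub[:, -1] = -1.0
b_ub = np.zeros(n_sp)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n_psi + 1))
print(res.status, res.fun)   # status 0: res.fun is the optimal size beta
```

In this instance Π is not an ordinary Blackwell garbling of Π′ (Π assigns probability ½ to the extreme posterior 0.8, which Π′ reaches only with probability 0.1), so the optimal size exceeds 1; its reciprocal is the largest probability of a payoff‑irrelevant event conditional on which Π′ Blackwell‑dominates Π.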
The paper situates its contribution within a rich literature on information comparison. It extends Blackwell’s classic results, relates to conditional‑informativeness notions (Lehmann, Persico), to second‑order stochastic dominance orders (Ganuza & Penalva), and to recent dynamic information comparison frameworks (Moscarini & Smith, Azrieli, Mu et al.). Unlike those works, weighted garbling offers a preorder that is both more permissive than Blackwell’s and more tractable than many prior‑dependent orders, while retaining clear economic interpretations in both static and dynamic settings.
In summary, the authors develop a comprehensive theory of weighted garbling: a mathematically elegant relaxation of Blackwell garbling, a simple geometric belief‑based test, a uniform bound on static information values, and a dynamic optimal‑stopping characterization. The framework bridges theoretical rigor with empirical feasibility, providing a valuable tool for economists and statisticians interested in comparing, designing, or evaluating information structures.