Aggregate Efficiency in Games
We show that, in large population games, decentralized information aggregation generically corrects for individual-level biases. This establishes a new testable aggregate efficiency benchmark where the behavior of boundedly rational agents mimics that of fully rational agents. However, we find that structural economic forces such as strategic network formation and profit-maximizing platforms can systematically select pathological environments to exploit individuals’ biases, thereby causing aggregate inefficiencies. We characterize these inefficiencies in monopoly and labor markets. Our findings therefore suggest that policy should shift focus from correcting individuals’ behavior to monitoring and regulating information structures.
💡 Research Summary
The paper investigates whether boundedly rational agents who suffer from correlation neglect can collectively achieve outcomes as efficient as those of fully rational agents in large-population games. The authors introduce the notion of “aggregate efficiency” – a benchmark analogous to the allocative efficiency of perfect competition – and prove that the set of correlation patterns inducing welfare-altering aggregate biases has measure zero, so that efficiency obtains for generic information structures. In other words, when agents draw signals from a large, randomly correlated sample, individual biases tend to cancel out, and aggregate behavior mirrors the predictions of rational expectations.
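To see the cancellation logic in miniature, here is a small Monte Carlo sketch (ours, not the paper's model; the exchangeable-correlation sampling scheme and all parameter values are illustrative assumptions). Each agent observes a correlated sample of others' binary actions and, neglecting the correlation, reads the raw empirical frequency as if the sample were i.i.d. Individual estimates come out overdispersed relative to the i.i.d. benchmark, yet their population average still recovers the true share:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.6        # true population share taking action 1 (assumption)
rho = 0.3          # within-sample correlation (assumption)
k = 10             # observations per agent
n_agents = 100_000

# Exchangeable correlated Bernoulli sampling: with probability rho an
# observation copies a single shared draw, otherwise it is independent.
common = rng.random(n_agents) < theta
idio = rng.random((n_agents, k)) < theta
copy = rng.random((n_agents, k)) < rho
samples = np.where(copy, common[:, None], idio)

# Correlation neglect: each agent treats the raw frequency as an i.i.d. estimate.
estimates = samples.mean(axis=1)

print("std of individual estimates:", estimates.std())                   # overdispersed
print("i.i.d. benchmark std:       ", np.sqrt(theta * (1 - theta) / k))  # what agents assume
print("population-average estimate:", estimates.mean(), "true share:", theta)
```

The overdispersion is the individual-level inferential error; the unbiased population average is the seed of the aggregate-efficiency result.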
To formalize agents’ learning and decision‑making, the authors develop a new equilibrium concept called Correlated Sampling Equilibrium with Statistical Inference (CoSESI). CoSESI captures four key ingredients: (1) agents do not know the true joint distribution of the signals they observe; (2) they hold a possibly misspecified subjective model; (3) they use the empirical frequency of observed actions as a sufficient statistic; and (4) they choose the optimal action given their inferred belief about the population’s action share. CoSESI nests Nash equilibrium, the SESI framework of Salant and Cherry (2020), and the analogy‑based expectation equilibrium of Jehiel (2005), while also accommodating classic biases such as the hot‑hand fallacy and the gambler’s fallacy.
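As a rough numerical illustration of how such an equilibrium can be computed, the sketch below (our construction, not the paper's definition of CoSESI) iterates a damped best-response map in a congestion-style game with strategic substitutes; the threshold rule, the correlation scheme, and all parameters are assumptions made for concreteness:

```python
import numpy as np

def cosesi_share(rho, cutoff=0.4, k=10, n_agents=20_000, iters=150, damp=0.7, seed=1):
    """Damped iteration toward a consistent aggregate share s (a CoSESI-style
    fixed point). Agents (1) don't know the joint distribution of what they
    observe, (2) act on a misspecified i.i.d. model, (3) use the empirical
    frequency of a correlated sample of actions as their statistic, and
    (4) best-respond; here, a congestion game: take action 1 iff the
    inferred share of others doing so is below `cutoff` (an assumption)."""
    rng = np.random.default_rng(seed)
    s = 0.5  # initial guess for the aggregate action share
    for _ in range(iters):
        common = rng.random(n_agents) < s
        idio = rng.random((n_agents, k)) < s
        copy = rng.random((n_agents, k)) < rho
        estimates = np.where(copy, common[:, None], idio).mean(axis=1)
        s_new = (estimates < cutoff).mean()   # best responses under substitutes
        s = damp * s + (1 - damp) * s_new     # damping stabilizes the iteration
    return s

for rho in (0.0, 0.3, 0.6):
    print(f"rho={rho:.1f}: consistent share = {cosesi_share(rho):.3f}")
```

Varying rho changes the dispersion of agents' (neglected) correlated samples and hence the fixed point, which is exactly the channel the genericity result disciplines.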
The central theoretical result (Theorem 1) shows that, for a unit-mass population facing a binary choice with strategic externalities (either substitutes or complements), the set of correlation parameters at which equilibrium exhibits a systematic aggregate bias has measure zero in the space of all possible correlation structures. This “measure-zero” result provides a rigorous justification for the empirical observation that individual-level biases often disappear in large markets (e.g., Crockett et al., 2021).
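In schematic notation (ours, not the paper's): writing $P$ for the space of correlation parameters, $s^{\mathrm{CoSESI}}(\rho)$ for the equilibrium aggregate action share at parameter $\rho$, and $s^{\mathrm{NE}}$ for the rational-expectations (Nash) share, the claim reads roughly

$$\lambda\bigl(\{\rho \in P : s^{\mathrm{CoSESI}}(\rho) \neq s^{\mathrm{NE}}\}\bigr) = 0,$$

where $\lambda$ denotes Lebesgue measure on $P$.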
However, the paper emphasizes that the mere rarity of pathological equilibria does not guarantee their absence in practice. When an economic agent or institution can control the information environment – through strategic network formation, platform algorithms, or monopoly pricing – non-generic correlation structures can be deliberately selected. The authors term the resulting phenomenon “information failure,” drawing a parallel to traditional market failures caused by market power or externalities.
Two concrete applications illustrate how structural forces generate such failures. In monopoly markets for luxury goods, the seller can engineer highly clustered consumer encounters (e.g., by restricting sales to exclusive clubs). Consumers, neglecting the correlation among observed purchases, underestimate demand and consequently purchase less, allowing the monopolist to raise prices. The profit gain disappears only when the sample size grows enough for the correlation pattern to become generic. In a two‑sided labor market, homophilic referral networks create positively correlated signals about job availability. Correlation neglect then amplifies matching frictions, reducing overall employment relative to the rational‑expectations benchmark. The employment gap can be proportional to the intensity of the friction.
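A deliberately crude toy (ours, not the paper's model) can show the pricing channel. Assume, for concreteness, that willingness to pay scales with a consumer's inferred purchase share, so that underestimating demand depresses purchases, matching the first step of the story above; the seller then chooses both a price and a clustering level ρ for the samples consumers see. All functional forms and parameters are illustrative assumptions:

```python
import numpy as np

def demand(price, rho, k=8, n=10_000, iters=60, damp=0.5, seed=3):
    """Equilibrium purchase share when consumers infer the purchase share
    from a correlated sample of others' purchases, neglect the correlation,
    and buy iff taste * inferred_share exceeds the price."""
    rng = np.random.default_rng(seed)
    taste = rng.uniform(0.0, 2.0, n)  # heterogeneous taste for popularity (assumption)
    s = 0.5                            # initial guess for the purchase share
    for _ in range(iters):
        common = rng.random(n) < s                 # shared draw: the clustering channel
        idio = rng.random((n, k)) < s              # independent draws
        copy = rng.random((n, k)) < rho            # rho = seller-chosen clustering
        est = np.where(copy, common[:, None], idio).mean(axis=1)
        s = damp * s + (1 - damp) * (taste * est > price).mean()
    return s

for rho in (0.0, 0.5, 0.9):                        # seller sweeps the clustering level
    prices = np.linspace(0.1, 1.0, 10)
    profit = [(p, p * demand(p, rho)) for p in prices]
    p_star, pi_star = max(profit, key=lambda t: t[1])
    print(f"rho={rho:.1f}: p*={p_star:.2f}, profit={pi_star:.3f}")
```

Sweeping ρ shifts the correlation-neglecting demand curve and with it the profit-maximizing price, which is the sense in which the information structure becomes a choice variable for the seller.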
To make the benchmark empirically useful, the authors propose a simple statistical test (Proposition 3) that distinguishes generic from non‑generic environments using only a finite sample of observed actions. The test compares the empirical covariance structure against the null hypothesis of a generic correlation pattern.
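The summary does not reproduce the paper's statistic, but one way the idea could be operationalized (our construction, not Proposition 3 itself) is a permutation test for excess within-sample clustering of observed actions:

```python
import numpy as np

def excess_clustering(samples):
    """Variance of within-group frequencies minus the independent-sampling
    (binomial) benchmark; positive values indicate clustered observations.
    samples: (n_groups, k) array of observed 0/1 actions."""
    k = samples.shape[1]
    theta = samples.mean()
    return samples.mean(axis=1).var() - theta * (1 - theta) / k

def permutation_test(samples, n_perm=2000, seed=0):
    """Null: actions are exchangeable across groups (no engineered clustering).
    Reshuffling actions across groups simulates the null distribution."""
    rng = np.random.default_rng(seed)
    flat = samples.ravel()
    obs = excess_clustering(samples)
    null = np.array([
        excess_clustering(rng.permutation(flat).reshape(samples.shape))
        for _ in range(n_perm)
    ])
    return obs, (null >= obs).mean()   # one-sided p-value

# Example: clustered samples (rho = 0.5) should be flagged.
rng = np.random.default_rng(4)
theta, rho, groups, k = 0.6, 0.5, 500, 10
common = rng.random(groups) < theta
idio = rng.random((groups, k)) < theta
copy = rng.random((groups, k)) < rho
stat, p = permutation_test(np.where(copy, common[:, None], idio))
print(f"excess clustering = {stat:.4f}, p = {p:.3f}")
```

Under the exchangeable null, reshuffling actions across observation groups destroys any engineered clustering, so a large observed statistic relative to the permuted ones flags a non-generic (clustered) environment.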
Policy implications follow naturally: rather than focusing on “correcting” individual cognition (through education or nudges), regulators should monitor and, where necessary, intervene in the information structure itself. Potential tools include transparency requirements for platform recommendation algorithms, antitrust scrutiny of exclusive club‑membership schemes, and regulations that limit the ability of firms to engineer highly correlated observation environments.
In sum, the paper makes three major contributions: (1) it establishes a rigorous aggregate‑efficiency benchmark that shows when bounded rationality is harmless at the macro level; (2) it identifies the precise mechanisms—strategic network formation and platform design—through which information structures can be weaponized to sustain aggregate inefficiencies; and (3) it provides a practical statistical methodology for detecting such environments. These insights bridge behavioral economics, network theory, and competition policy, offering a fresh perspective on how to safeguard market outcomes in the age of algorithmic information curation.