Mechanisms for Making Crowds Truthful
We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likelihoods of reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios, where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.
💡 Research Summary
The paper tackles a fundamental problem in large‑scale online feedback systems: how to incentivize self‑interested agents to incur the cost of observing a hidden quality signal and then report it truthfully, despite the many motivations for dishonest reporting. The authors adopt a game‑theoretic framework in which agents receive a private signal about a common product or service, and their reports are publicly observable. The central idea is to design a reward scheme that (i) reimburses the reporting cost and (ii) makes truthful reporting a Nash equilibrium.
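In symbols (notation ours, introduced for illustration; the paper's own notation may differ): write C for the reporting cost, s_i for agent i's privately observed signal, r_{-i} for the other agents' reports, and R for the reward. The two requirements then read as a participation constraint and an incentive‑compatibility constraint:

```latex
% Participation: truthful reporting at least covers the reporting cost
\mathbb{E}\big[ R(s_i, r_{-i}) \mid s_i \big] \;\geq\; C
% Incentive compatibility: no false report s' does better than the truth
\mathbb{E}\big[ R(s_i, r_{-i}) \mid s_i \big] \;>\; \mathbb{E}\big[ R(s', r_{-i}) \mid s_i \big]
\qquad \forall\, s' \neq s_i
```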
In the baseline setting, the environment is one of pure adverse selection: each agent’s signal is correlated with the distribution of other agents’ reports. By exploiting this correlation, the authors construct a “correlation‑based reward” function R(i, r_i, r_{-i}) that depends on an agent’s own report r_i and the vector of all other reports r_{-i}. The reward is calibrated so that the expected payoff of reporting the true signal covers the reporting cost, while any unilateral deviation (i.e., lying) yields a strictly lower expected payoff. This is achieved by normalizing a proper scoring rule (e.g., a generalized logarithmic or Laplacian score) with respect to the posterior belief distribution over other agents’ reports. The resulting mechanism is incentive compatible: truthful reporting is a Nash equilibrium in the single‑agent deviation sense.
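To make the scoring‑rule construction concrete, here is a minimal sketch in the spirit of the logarithmic variant, assuming a toy binary world with a two‑type product and conditionally independent signals; PRIOR_HIGH, ACCURACY, and COST are illustrative values, not parameters from the paper:

```python
import math

# Toy binary world; all numbers are illustrative assumptions, not from the paper.
# A product is HIGH quality with prior PRIOR_HIGH; each agent's private signal
# matches the true quality with probability ACCURACY.
PRIOR_HIGH = 0.5
ACCURACY = 0.8
COST = 0.1   # cost of observing and reporting

def p_ref_given_own(own: int, ref: int) -> float:
    """P(reference agent's report = ref | my signal = own), assuming the
    reference agent reports truthfully (signals conditionally independent)."""
    joint = sum(
        (PRIOR_HIGH if q == 1 else 1 - PRIOR_HIGH)
        * (ACCURACY if own == q else 1 - ACCURACY)
        * (ACCURACY if ref == q else 1 - ACCURACY)
        for q in (0, 1)
    )
    marginal = sum(
        (PRIOR_HIGH if q == 1 else 1 - PRIOR_HIGH)
        * (ACCURACY if own == q else 1 - ACCURACY)
        for q in (0, 1)
    )
    return joint / marginal

def reward(report: int, ref_report: int) -> float:
    """Logarithmic-scoring reward: pay the log-probability that the agent's
    report assigns to the reference report under the truthful posterior."""
    return math.log(p_ref_given_own(report, ref_report))

def expected_payoff(signal: int, report: int) -> float:
    """Expected net payoff of submitting `report` when the true signal is
    `signal`; the expectation over the reference report uses the truth."""
    return sum(
        p_ref_given_own(signal, j) * reward(report, j) for j in (0, 1)
    ) - COST

for s in (0, 1):
    truth, lie = expected_payoff(s, s), expected_payoff(s, 1 - s)
    print(f"signal={s}: truth={truth:+.4f}  lie={lie:+.4f}  gap={truth - lie:+.4f}")
```

Because the logarithmic score is proper, the truthful expected payoff strictly exceeds the lying payoff for both signal values. The raw log scores are negative, but any positive affine rescaling (scale * score + shift) preserves this ordering, so scale and shift can be chosen afterwards so that the truthful expected reward also covers COST.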
The novelty of the work lies in extending this analysis to collusion scenarios. The authors systematically categorize collusion structures: (a) a small subset of agents forming a coalition and submitting a coordinated false report, (b) the entire population acting as a single coalition, and (c) multiple coalitions that cooperate internally but compete externally. For each structure they define “collusion‑resistance”: a reward scheme is collusion‑resistant if no coalition can improve the expected payoff of any of its members by deviating jointly from truthful reporting, and no member suffers a loss by leaving the coalition (or by reporting truthfully).
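Stated formally (again in our illustrative notation), for every coalition C and every joint misreporting strategy σ that the coalition could apply to its signals, truthful reporting must weakly dominate for each member:

```latex
\mathbb{E}\big[ R_i(s_{\mathcal{C}},\, r_{-\mathcal{C}}) \mid s \big]
\;\geq\;
\mathbb{E}\big[ R_i(\sigma(s_{\mathcal{C}}),\, r_{-\mathcal{C}}) \mid s \big]
\qquad \forall\, i \in \mathcal{C},\ \forall\, \sigma \neq \mathrm{id}
```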
To construct such robust mechanisms, the paper employs Automated Mechanism Design (AMD). The AMD framework translates the design problem into a constrained optimization: the variables are the parameters of the reward function, the objective is to minimize total expected payments (or maximize efficiency), and the constraints encode (i) individual incentive compatibility, (ii) group‑wise incentive compatibility (no profitable joint deviation), and (iii) group‑exit incentives (no gain from leaving the coalition). The authors formulate these constraints as linear or integer inequalities using the known signal distribution and reporting cost, and then solve the resulting program with standard solvers.
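As a concrete illustration of the AMD step, here is a minimal linear program for the binary‑signal case, solved with scipy. The posterior probabilities reuse the toy world from the sketch above, and COST and MARGIN are assumed values; only the individual incentive‑compatibility and participation constraints are encoded, and the group‑wise constraints (ii)–(iii) would be added as further rows of the same linear form:

```python
import numpy as np
from scipy.optimize import linprog

# Toy posterior P(reference report | own signal), from the earlier sketch.
P = {0: {0: 0.68, 1: 0.32},
     1: {0: 0.32, 1: 0.68}}
P_SIGNAL = {0: 0.5, 1: 0.5}  # marginal distribution of an agent's own signal
COST = 0.1                   # reporting cost to be covered
MARGIN = 0.05                # required strict advantage of truth over lying

# Decision variables tau[(report, ref_report)] >= 0, flattened in this order.
keys = [(0, 0), (0, 1), (1, 0), (1, 1)]
idx = {k: i for i, k in enumerate(keys)}

# Objective: minimize the expected payment at the truthful equilibrium.
c = np.zeros(len(keys))
for s in (0, 1):
    for j in (0, 1):
        c[idx[(s, j)]] += P_SIGNAL[s] * P[s][j]

A_ub, b_ub = [], []
for s in (0, 1):
    lie = 1 - s
    # (IC)  E[tau(lie, .) | s] - E[tau(s, .) | s] <= -MARGIN
    row = np.zeros(len(keys))
    for j in (0, 1):
        row[idx[(lie, j)]] += P[s][j]
        row[idx[(s, j)]] -= P[s][j]
    A_ub.append(row)
    b_ub.append(-MARGIN)
    # (participation)  -E[tau(s, .) | s] <= -COST
    row = np.zeros(len(keys))
    for j in (0, 1):
        row[idx[(s, j)]] -= P[s][j]
    A_ub.append(row)
    b_ub.append(-COST)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
print(f"minimal expected payment: {res.fun:.4f}")
for k in keys:
    print(f"tau{k} = {res.x[idx[k]]:.4f}")
```

The solver rewards agreement with the reference report (the diagonal entries) just enough to cover the reporting cost while preserving the truth‑telling margin. Adding collusion constraints shrinks the feasible set, and in some scenarios may leave it empty, which is how the existence question for each scenario can be settled.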
Empirical evaluation is performed via extensive simulations. The authors generate synthetic environments with binary and multinomial signal spaces, varying the number of agents from 10 to 1,000, and compare their collusion‑resistant mechanism against classic proper‑scoring mechanisms (logarithmic, quadratic, and Laplacian scores). Metrics include the proportion of truthful reports, total expected payment, and robustness to collusion. Results show that the proposed mechanism consistently raises truthful reporting rates by 20–30% and reduces overall payments by roughly 10–15% relative to the baselines. Even under full‑coalition collusion, the group‑wise incentive constraints ensure that no member’s expected payoff from lying exceeds the payoff from truthful reporting, effectively neutralizing the incentive to collude.
In conclusion, the paper makes three key contributions: (1) it formalizes the use of signal‑report correlation to achieve incentive compatibility in pure adverse‑selection settings; (2) it extends the design to guarantee collusion‑resistance across a spectrum of coalition structures; and (3) it demonstrates that Automated Mechanism Design can efficiently compute reward functions that satisfy these stringent constraints. The work bridges a gap between theoretical mechanism design and practical online platforms, where feedback manipulation and coordinated attacks are real threats. The resulting solution is scalable and provably robust, and can be adapted to diverse domains such as e‑commerce reviews, ride‑sharing ratings, and crowdsourced quality assessments. Suggested future directions include dynamic environments where signals evolve over time, multi‑dimensional quality attributes, and field trials on live platforms.