Models of Manipulation on Aggregation of Binary Evaluations
We study a general aggregation problem in which a society has to determine its position on each of several issues, based on the positions of the members of the society on those issues. There is a prescribed set of feasible evaluations, i.e., permissible combinations of positions on the issues. Among other things, this framework admits the modeling of preference aggregation, judgment aggregation, classification, clustering and facility location. An important notion in aggregation of evaluations is strategy-proofness. In the general framework discussed here, several definitions of strategy-proofness may be considered. We present three natural *general* definitions of strategy-proofness and analyze the possibility of designing an anonymous, strategy-proof aggregation rule under these definitions.
💡 Research Summary
The paper investigates a very general aggregation problem in which a society must decide on a collection of binary issues based on the individual binary evaluations submitted by its members. Formally, there are n agents and m binary issues; each agent i reports a vector x_i ∈ {0,1}^m. Not every possible vector is admissible – the set of feasible collective evaluations, denoted 𝔽 ⊆ {0,1}^m, is fixed in advance and may encode logical constraints, spatial constraints, or any other domain‑specific restrictions. An aggregation rule f maps the profile of individual reports to a collective outcome f(x_1,…,x_n) ∈ 𝔽. The central question is whether one can design an anonymous rule (i.e., a rule that treats all agents symmetrically) that is also strategy‑proof, meaning that no agent (or group of agents) can benefit by misreporting their true evaluations.
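The model above can be made concrete with a small sketch. The encoding below is illustrative, not taken from the paper: the names `nearest_feasible_majority` and `hamming` are my own, and projecting the issue-wise majority onto the nearest feasible vector is just one plausible example of an aggregation rule f : ({0,1}^m)^n → 𝔽.

```python
def hamming(x, y):
    """Number of coordinates on which two binary evaluations disagree."""
    return sum(a != b for a, b in zip(x, y))

def nearest_feasible_majority(profile, F):
    """Illustrative rule: take the issue-wise majority of the reports,
    then project it onto the nearest feasible vector in F
    (ties broken lexicographically). Anonymous by construction."""
    m = len(profile[0])
    majority = tuple(int(sum(x[j] for x in profile) * 2 > len(profile))
                     for j in range(m))
    return min(sorted(F), key=lambda v: hamming(v, majority))

# Example: 3 agents, 2 issues; the feasible set forbids (1, 1).
F = [(0, 0), (0, 1), (1, 0)]
profile = [(1, 1), (1, 0), (0, 1)]
print(nearest_feasible_majority(profile, F))  # (0, 1)
```

The raw majority here is the infeasible (1, 1); the projection step, together with its tie-breaking, is exactly where the constraint structure of 𝔽 starts to interact with incentives.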
The authors introduce three natural, increasingly strong notions of strategy‑proofness. The first, Individual Strategy‑Proofness (SP‑I), requires that no single agent can change the outcome to a more preferred one by altering his report. The second, Group Strategy‑Proofness (SP‑G), extends this requirement to any coalition of agents: no group can jointly misreport in a way that makes every member strictly better off. The third, Robust Strategy‑Proofness (SP‑R), combines SP‑I and SP‑G and additionally demands that any deviation that does not affect the outcome is irrelevant – essentially a “strongly strategy‑proof” condition familiar from social choice theory.
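For small instances, these definitions can be checked exhaustively. The sketch below tests SP-G (which subsumes SP-I via singleton coalitions) under the assumption, made here purely for illustration, that an agent prefers outcomes closer to her truthful report in Hamming distance; the paper's preference model may differ. As a sanity check it confirms that a constant rule is immune to any manipulation.

```python
from itertools import product, combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def sp_g_violation(rule, n, m):
    """Exhaustive search for a coalitional manipulation: a coalition S
    and a joint misreport after which EVERY member of S ends strictly
    closer (in Hamming distance) to the outcome. Singleton coalitions
    correspond to SP-I violations. Feasible only for tiny n and m."""
    vectors = list(product([0, 1], repeat=m))
    for profile in product(vectors, repeat=n):
        honest = rule(profile)
        for size in range(1, n + 1):
            for S in combinations(range(n), size):
                for lies in product(vectors, repeat=size):
                    deviated = list(profile)
                    for i, lie in zip(S, lies):
                        deviated[i] = lie
                    outcome = rule(tuple(deviated))
                    if all(hamming(outcome, profile[i]) < hamming(honest, profile[i])
                           for i in S):
                        return profile, S, lies
    return None

# Sanity check: a constant rule is trivially immune to any manipulation.
constant = lambda profile: (0,) * len(profile[0])
print(sp_g_violation(constant, n=3, m=2))  # None
```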
The paper’s main contributions are impossibility and possibility theorems that delineate exactly when an anonymous, strategy‑proof rule can exist under each definition. The key impossibility results are:
- Theorem 1 (SP‑I + Anonymity) – For any number of agents n ≥ 3 and at least two issues (m ≥ 2), if the feasible set 𝔽 is non‑trivial (i.e., it does not contain all 2^m vectors), then there is no anonymous rule that satisfies SP‑I and is not a constant rule. The proof adapts the classic “pivotal voter” argument: anonymity forces the rule to treat all agents symmetrically, but any non‑constant rule inevitably creates a situation where a single agent can swing a coordinate in his favor, violating SP‑I.
- Theorem 2 (SP‑G + Anonymity) – Under the same conditions, the only anonymous rules that satisfy SP‑G are either dictatorial (which violates anonymity) or constant (always output the same feasible vector). Hence, any meaningful collective decision mechanism that is both anonymous and group‑strategy‑proof is impossible.
- Theorem 3 (SP‑R Characterization) – A rule that satisfies the strongest notion SP‑R must be both monotone (changing a coordinate from 0 to 1 cannot turn a 1 into a 0 in the outcome) and distance‑preserving with respect to Hamming distance. Such a rule can exist only when the feasible set 𝔽 has a linear structure (e.g., it is a hyperplane defined by a single linear equation). In most practical settings—judgment aggregation, preference aggregation, clustering—𝔽 is highly non‑linear, so SP‑R is unattainable.
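A concrete illustration of why a non-product feasible set causes trouble is the classic discursive dilemma from judgment aggregation (the example and names below are mine, not the paper's): with issues p, q, and the conjunction p ∧ q, the feasible set is the non-linear 𝔽 = {(a, b, a ∧ b)}, and issue-wise majority can land outside it.

```python
from itertools import product

# Feasible set: positions on (p, q, p AND q) that are logically consistent.
F = {(a, b, a & b) for a, b in product([0, 1], repeat=2)}

judges = [(1, 0, 0),   # accepts p, rejects q, hence rejects p AND q
          (0, 1, 0),   # rejects p, accepts q, hence rejects p AND q
          (1, 1, 1)]   # accepts both, hence accepts p AND q

# Issue-by-issue majority over the three judges.
collective = tuple(int(sum(x[j] for x in judges) > len(judges) / 2)
                   for j in range(3))

print(collective)       # (1, 1, 0)
print(collective in F)  # False: the collective judgment is infeasible
```

Any repair of this infeasible outcome (premise-based, conclusion-based, or distance-based) must overrule some majority, and it is precisely such repair steps that open the door to the manipulations the theorems formalize.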
The authors also identify special cases where positive results are possible. When each issue is independent (𝔽 = Π_j 𝔽_j) and the feasible set for each coordinate is unrestricted, the coordinate‑wise majority rule satisfies SP‑I and anonymity, though it fails SP‑G. When there is only a single issue (m = 1), the classic majority rule meets all three notions – consistent with the fact that the Gibbard–Satterthwaite impossibility does not apply to binary decisions (cf. May's theorem). Moreover, the paper discusses randomized mechanisms: by introducing probability into the outcome (e.g., selecting each coordinate to be 1 with a probability that depends on the profile), one can achieve expected‑utility strategy‑proofness even when deterministic rules cannot.
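The positive case for independent issues can be verified by brute force on a tiny instance. As before, the Hamming-based preference model is an assumption made for illustration: with an unrestricted product feasible set and an odd number of agents, issue-wise majority admits no profitable single-agent deviation, because each coordinate is a separate binary majority vote in which voting truthfully is weakly dominant.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def majority(profile):
    """Issue-wise majority; with an odd number of agents there are no ties."""
    n, m = len(profile), len(profile[0])
    return tuple(int(sum(x[j] for x in profile) > n / 2) for j in range(m))

def sp_i_violation(rule, n, m):
    """Brute-force search for a single-agent manipulation, assuming (for
    illustration only) that agents prefer outcomes closer to their
    truthful report in Hamming distance."""
    vectors = list(product([0, 1], repeat=m))
    for profile in product(vectors, repeat=n):
        honest = rule(profile)
        for i, truth in enumerate(profile):
            for lie in vectors:
                deviated = profile[:i] + (lie,) + profile[i + 1:]
                if hamming(rule(deviated), truth) < hamming(honest, truth):
                    return profile, i, lie
    return None

# Unrestricted product feasible set: no SP-I violation exists.
print(sp_i_violation(majority, n=3, m=2))  # None
```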
Beyond the abstract theory, the paper connects the model to several concrete domains. In judgment aggregation, logical consistency constraints make 𝔽 a complex Boolean formula; the impossibility theorems explain why many well‑known aggregation procedures (e.g., premise‑based or conclusion‑based rules) are vulnerable to manipulation. In preference aggregation, each candidate’s pairwise comparison can be encoded as a binary issue, and the same impossibility results apply, reinforcing the need for either restricted agendas or non‑anonymous mechanisms. In machine‑learning tasks such as binary classification or clustering, the collective label assignment can be viewed as an aggregation of annotators’ binary votes; the results highlight the difficulty of designing fair, manipulation‑resistant consensus algorithms when annotators have strategic incentives. Finally, in facility location, binary variables may encode whether a facility is placed at a particular site; the analysis shows that naïve majority‑based location rules can be easily manipulated by a coalition of agents.
Given the pervasive impossibility landscape, the authors propose three practical design directions. First, agenda design: by carefully selecting or restricting the feasible set 𝔽 to a product structure, one can recover positive results. Second, randomization and incentive design: adding stochastic elements or imposing costs on misreporting can deter manipulation in expectation. Third, multi‑stage mechanisms: an initial aggregation followed by a correction or re‑voting phase can mitigate strategic behavior while preserving anonymity in the first stage.
In conclusion, the paper provides a unified theoretical framework that captures a wide variety of aggregation problems under a single binary‑evaluation model. It rigorously demonstrates that, except in highly constrained settings, anonymity and any reasonable form of strategy‑proofness are mutually exclusive. This insight forces designers of collective decision‑making systems to either relax anonymity, limit the feasible set, or adopt probabilistic or incentive‑based mechanisms. The work opens several avenues for future research, including extensions to multi‑valued issues, dynamic agendas, and empirical studies of how real agents behave under the proposed mechanisms.