Combining Voting Rules Together

We propose a simple method for combining voting rules that performs a run-off between the winners of the different base rules. We prove that this combinator has several good properties. For instance, even if just one of the base voting rules has a desirable property like Condorcet consistency, the combination inherits this property. In addition, we prove that combining voting rules in this way can make finding a manipulation computationally more difficult. Finally, we study the impact of this combinator on approximation methods that find close-to-optimal manipulations.


💡 Research Summary

The paper introduces a novel method for aggregating multiple voting rules by running a runoff among the winners produced by each individual rule. This “combination operator” is conceptually simple: first apply several base voting rules (such as Plurality, Borda, Copeland, Maximin, etc.) to a preference profile, collect the set of winners, and then conduct a second‑stage election that selects a final winner from this set. The authors explore three major dimensions of this construction: preservation of desirable social‑choice properties, impact on the computational difficulty of strategic manipulation, and the behavior of approximation algorithms that seek near‑optimal manipulations.
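The two-stage construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact definitions: the profile encoding (one ranking per voter, most-preferred first), the lexicographic tie-breaking, and the use of a simple plurality runoff among finalists are all assumptions made for the example.

```python
from collections import Counter

def plurality_winner(profile):
    """Candidate with the most first-place votes (ties broken lexicographically)."""
    counts = Counter(ranking[0] for ranking in profile)
    return max(sorted(counts), key=lambda c: counts[c])

def borda_winner(profile):
    """Candidate with the highest Borda score (m-1 points for first place, etc.)."""
    m = len(profile[0])
    scores = Counter()
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += m - 1 - pos
    return max(sorted(scores), key=lambda c: scores[c])

def combined_winner(profile, rules):
    """Stage 1: collect each base rule's winner. Stage 2: runoff among them."""
    finalists = sorted({rule(profile) for rule in rules})
    if len(finalists) == 1:
        return finalists[0]
    # Runoff: restrict every ballot to the finalists, then apply plurality,
    # which for two finalists is exactly a pairwise majority vote.
    restricted = [[c for c in ranking if c in finalists] for ranking in profile]
    return plurality_winner(restricted)

# Example profile: 5 voters over candidates A, B, C.
profile = [list("ABC"), list("ABC"), list("BCA"), list("CBA"), list("BAC")]
```

On this profile Plurality and Borda disagree (A and B respectively), so the runoff decides between them by majority.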

Preservation and Enhancement of Axiomatic Properties
The authors prove that the combination operator inherits many classic axioms from its components. Non‑dictatorship is guaranteed because no single voter can dominate the outcome of all base rules simultaneously. If each underlying rule satisfies Independence of Irrelevant Alternatives (IIA), the combined rule also satisfies IIA; similarly, monotonicity is preserved. The most striking result concerns Condorcet consistency: if at least one of the base rules is Condorcet‑consistent, the combined rule will always elect the Condorcet winner whenever one exists. Consequently, a system that mixes a Condorcet‑consistent rule (e.g., Copeland or Maximin) with rules that lack this property (e.g., Plurality or Borda) still enjoys the strong guarantee of selecting the Condorcet winner. This demonstrates that a single “good” rule can lift the entire combination to a higher standard of fairness.
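The reasoning behind the Condorcet result is short: a Condorcet-consistent base rule guarantees that the Condorcet winner (when one exists) reaches the runoff stage, and a pairwise-majority runoff between finalists is then won by that candidate by definition, since it beats every other candidate head-to-head. A hedged helper for detecting a Condorcet winner, using the same illustrative profile encoding as above, might look like:

```python
def condorcet_winner(profile):
    """Return the candidate who beats every rival in a pairwise majority
    comparison, or None if no such candidate exists. Illustrative sketch;
    rankings list the most-preferred candidate first."""
    candidates = profile[0]
    for c in candidates:
        # c beats d iff a strict majority of voters rank c above d.
        if all(2 * sum(r.index(c) < r.index(d) for r in profile) > len(profile)
               for d in candidates if d != c):
            return c
    return None
```

Note that a Condorcet winner need not exist: a preference cycle (the Condorcet paradox) makes the function return `None`, in which case the consistency guarantee is vacuous.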

Computational Complexity of Manipulation
The paper then turns to strategic behavior. For many single rules, computing a beneficial manipulation is polynomial‑time solvable (e.g., Plurality, Borda). However, the authors show that when these rules are combined, the manipulation problem becomes NP‑hard. They give a concrete reduction for the combination of Plurality and Borda, proving that deciding whether a coalition of voters can cast insincere ballots to make a preferred candidate win is computationally intractable. The hardness persists when more than two rules are combined, indicating that the runoff stage introduces a combinatorial explosion that blocks efficient manipulation. This result positions the combination operator as a practical “complexity shield” against strategic voting.
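To see why NP-hardness matters in practice, consider the naive alternative: exhaustively trying every joint ballot a coalition could cast. The sketch below (an assumption-laden illustration, not the paper's reduction) does exactly that for an arbitrary rule, and its running time is exponential in both the coalition size and the number of candidates, which is precisely the blow-up that hardness results say cannot be avoided in general.

```python
from collections import Counter
from itertools import permutations, product

def plurality(profile):
    """Plurality with lexicographic tie-breaking (illustrative base rule)."""
    counts = Counter(r[0] for r in profile)
    return max(sorted(counts), key=lambda c: counts[c])

def exists_manipulation(rule, sincere, k, target):
    """Brute-force check: can k manipulators cast some joint ballots so that
    `target` wins under `rule`? Tries all (m!)^k joint ballots."""
    candidates = sincere[0]
    ballots = list(permutations(candidates))
    for joint in product(ballots, repeat=k):
        if rule(sincere + [list(b) for b in joint]) == target:
            return True
    return False
```

With two sincere voters ranking B first, a single manipulator cannot make A win under plurality, but a coalition of two can force the tie that lexicographic tie-breaking resolves in A's favor.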

Effect on Approximation Algorithms
Because exact manipulation may be infeasible, prior work often relies on greedy or other approximation algorithms that achieve near‑optimal utility for manipulators. The authors evaluate several such heuristics on randomly generated preference profiles with varying numbers of candidates (3–10) and voters (20–200). Their experiments reveal a dramatic degradation of approximation quality under the combined rule. When the candidate set grows beyond six, the gap between the utility achieved by the greedy algorithm and the optimal manipulation often exceeds 30%. In many instances the heuristic fails to find any successful manipulation even though one exists. This empirical evidence suggests that the combination operator not only raises worst‑case complexity but also weakens the performance guarantees of common approximation techniques.
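The kind of greedy heuristic evaluated here can be sketched as follows for Borda: each manipulator ranks the preferred candidate first and the remaining candidates in ascending order of their current score, so the strongest rivals receive the fewest points. This is a hedged sketch of a standard greedy strategy from the manipulation literature, not the authors' exact experimental setup; function names and the tie-breaking are illustrative.

```python
from collections import Counter

def borda_scores(profile, candidates):
    """Borda scores for a profile (m-1 points for first place, etc.)."""
    m = len(candidates)
    scores = Counter({c: 0 for c in candidates})
    for r in profile:
        for pos, c in enumerate(r):
            scores[c] += m - 1 - pos
    return scores

def greedy_borda_manipulation(sincere, k, target):
    """Append k greedy manipulator ballots one at a time; return True iff
    `target` wins Borda (lexicographic tie-breaking) afterwards."""
    candidates = sorted(sincere[0])
    profile = list(sincere)
    for _ in range(k):
        scores = borda_scores(profile, candidates)
        # Rank rivals by ascending current score: strong rivals get few points.
        rest = sorted((c for c in candidates if c != target),
                      key=lambda c: scores[c])
        profile.append([target] + rest)
    final = borda_scores(profile, candidates)
    return max(sorted(final), key=lambda c: final[c]) == target
```

The heuristic is fast but incomplete: when it returns `False`, a successful manipulation may still exist, which is exactly the failure mode the experiments quantify.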

Practical Implications and Limitations
From an implementation standpoint, the operator requires only a second‑stage runoff, making it compatible with existing electronic voting platforms as a plug‑in module. It offers policymakers a flexible tool: by selecting which base rules to combine, they can prioritize specific axioms (e.g., Condorcet consistency) while still retaining familiar rules for transparency. However, the authors acknowledge trade‑offs. The two‑stage process adds cognitive load for voters who must understand that their ballot influences multiple intermediate outcomes. Moreover, the theoretical analysis focuses on a bounded number of candidates; extending the hardness proofs to unbounded or probabilistic models remains open. Finally, the paper explores only the “winner‑runoff” style of combination; alternative schemes such as weighted averaging or multi‑round elimination could yield different trade‑offs and merit future study.

In summary, the work makes three key contributions: (1) it defines a simple yet powerful method for merging voting rules and proves that this method preserves or even upgrades important social‑choice properties; (2) it demonstrates that the method can transform otherwise easy manipulation problems into NP‑hard ones, providing a computational barrier to strategic voting; and (3) it shows that approximation algorithms, which are often effective for single rules, lose much of their efficacy under the combined rule. These findings open a new design space for voting systems that seek to balance fairness, resistance to manipulation, and practical implementability.