Building Robust Crowdsourcing Systems with Reputation-aware Decision Support Techniques


Crowdsourcing refers to the arrangement in which contributions are solicited from a large group of unrelated people. Because of this, crowdsourcers (or task requesters) often face uncertainty about workers' capabilities, which in turn affects the quality and timeliness of the results obtained. Trust is a mechanism people use to facilitate interactions in human societies where risk and uncertainty are common. The crucial challenge in building a robust crowdsourcing system is making trust-aware task delegation decisions that efficiently utilize the capacities of workers (or trustee agents) to achieve high social welfare. This book presents research addressing this challenge. It goes beyond the existing trust management research framework by removing a widespread assumption implicitly adopted by existing research: that a trustee agent can process an unlimited number of interaction requests per discrete time unit without compromising its performance as perceived by the task requesters (or truster agents). Decision support in crowdsourcing is re-formalized as a multi-agent trust game based on the principles of the Congestion Game, which is solved by two trust-aware interaction decision-making approaches: 1) the Social Welfare Optimizing approach for Reputation-aware Decision-making (SWORD), and 2) the Distributed Request Acceptance approach for Fair utilization of Trustee agents (DRAFT). SWORD is designed for centralized systems, while DRAFT is designed for fully distributed systems. Theoretical analyses show that the social welfare produced by these two approaches can be made closer to optimal by adjusting only one key parameter. With these two approaches, the research framework for crowdsourcing systems can be enriched to handle more realistic scenarios where workers have varied and limited capabilities.


💡 Research Summary

The paper tackles the fundamental problem of trust‑aware task delegation in crowdsourcing platforms where workers (trustees) have limited processing capacities. Traditional trust management research often assumes that a trustee can handle an unlimited number of interaction requests per time unit without degrading performance, an assumption that does not hold in real‑world crowdsourcing where workers experience fatigue, expertise limits, and quality decay under overload. To address this gap, the authors reformulate decision support as a multi‑agent trust game grounded in the theory of Congestion Games. In this formulation, each worker’s utility decreases as the number of assigned tasks approaches or exceeds its capacity, mirroring the congestion‑induced cost in classic games.
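The congestion-style utility described above can be sketched in a few lines. The linear decay shape and the 0.5 weighting below are illustrative assumptions — the summary only states that a worker's utility decreases as its load approaches or exceeds capacity, without fixing a particular cost function:

```python
def expected_utility(reputation: float, load: int, capacity: int) -> float:
    """Illustrative congestion-style utility for a single worker.

    `reputation` is the worker's probability of completing a task well
    when unloaded; the utility decays as the assigned `load` approaches
    or exceeds `capacity`, mirroring congestion costs in classic
    Congestion Games. The linear decay is an assumption for
    illustration only.
    """
    if load <= capacity:
        congestion_penalty = load / capacity          # fills toward 1.0
    else:
        congestion_penalty = 1.0 + (load - capacity) / capacity  # overload
    return reputation * max(0.0, 1.0 - 0.5 * congestion_penalty)
```

For example, a worker with reputation 0.9 and capacity 10 yields full expected utility when idle, half of it at full capacity, and nothing once its load reaches twice its capacity.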

Two algorithmic solutions are proposed. The first, SWORD (Social Welfare Optimizing approach for Reputation‑aware Decision‑making), is a centralized mechanism. It collects global information about workers’ current reputation scores, capacities, and requesters’ task values and quality requirements. Using a Lagrangian relaxation with a single tunable parameter λ, SWORD solves an optimization problem that maximizes expected social welfare while respecting capacity constraints. The parameter λ controls the trade‑off between overall welfare and fairness (i.e., balanced workload distribution). Theoretical analysis shows that the welfare gap between SWORD’s solution and the true optimum is bounded by O(1/λ), and the algorithm converges to a Nash equilibrium that is also Pareto‑efficient.
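The role of λ in trading welfare against balanced workloads can be illustrated with a toy greedy allocator. This is a sketch under stated assumptions, not SWORD itself: the book derives the actual rule from a Lagrangian relaxation, whereas here each task simply goes to the worker maximizing reputation-weighted value minus a λ-scaled congestion term:

```python
def greedy_assign(task_values, workers, lam=1.0):
    """Toy centralized allocation in the spirit of SWORD's lambda trade-off.

    `task_values` is a list of task values; each worker is a dict with
    'reputation' and 'capacity'. Each task is assigned to the worker
    that maximizes (reputation * value) minus lam times its current
    load/capacity ratio. The scoring rule is an illustrative
    assumption, not the paper's actual optimization.
    """
    load = [0] * len(workers)
    assignment = []
    for value in task_values:
        def score(i):
            congestion = load[i] / workers[i]['capacity']
            return workers[i]['reputation'] * value - lam * congestion
        best = max(range(len(workers)), key=score)
        load[best] += 1
        assignment.append(best)
    return assignment, load
```

With λ = 0 every task chases the highest-reputation worker regardless of load; raising λ spreads tasks across workers, which is the welfare-versus-fairness dial the paper analyzes.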

The second solution, DRAFT (Distributed Request Acceptance approach for Fair utilization of Trustee agents), is designed for fully decentralized environments. Each worker independently decides whether to accept an incoming request based on its current load, reputation, and a locally computed acceptance threshold θ. Acceptance probability follows a sigmoid function of the difference between θ and the current load, allowing workers to probabilistically reject overloads. Rejected requests are returned to the requester and may be forwarded to other workers. DRAFT requires only local state, yet the authors prove that it still drives the system toward a global equilibrium with a welfare gap bounded by O(1/θ).
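The local acceptance rule can be sketched directly from the description above. The sigmoid of (θ − load) is stated in the summary; the `steepness` parameter is an added assumption to make the curve concrete:

```python
import math
import random

def accept_probability(load: int, theta: float, steepness: float = 1.0) -> float:
    """Sigmoid acceptance probability for a DRAFT-style worker.

    The probability of accepting an incoming request falls off as the
    worker's current `load` exceeds its local threshold `theta`. The
    `steepness` parameter is an illustrative assumption.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (theta - load)))

def decide(load: int, theta: float, rng=random.random) -> bool:
    """Probabilistically accept (True) or reject (False) a request."""
    return rng() < accept_probability(load, theta)
```

A lightly loaded worker accepts almost surely, an overloaded one almost surely rejects, and at load = θ the decision is a coin flip; rejected requests return to the requester for forwarding, as described above.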

Both algorithms are supported by rigorous proofs of convergence, approximation guarantees, and equilibrium properties. Extensive simulations explore a range of scenarios: varying numbers of workers (100–1000), heterogeneous capacity distributions (normal, exponential), and dynamic reputation trajectories. Results indicate that SWORD improves social welfare by 12–15 % over baseline centralized allocations, while DRAFT achieves 10–12 % improvement in fully distributed settings, with a reduction in request retransmission rates to below 20 %. Sensitivity analyses confirm that adjusting λ or θ allows system operators to prioritize different objectives such as maximizing welfare, ensuring fairness, or minimizing latency.

The paper’s key insight is that incorporating both trust (reputation) and capacity constraints yields a more realistic and effective crowdsourcing allocation framework. By demonstrating that a single scalar parameter can tune system performance across multiple dimensions, the work offers practical guidance for platform designers. The authors conclude by outlining future directions: refining dynamic reputation update mechanisms, extending the model to multi‑objective optimization (cost, quality, time), and validating the approaches with real‑world data from platforms like Amazon Mechanical Turk or CrowdWorks. Overall, the study advances the state of the art in trustworthy, capacity‑aware crowdsourcing system design.