The Labor Economics of Paid Crowdsourcing


Crowdsourcing is a form of “peer production” in which work traditionally performed by an employee is outsourced to an “undefined, generally large group of people in the form of an open call.” We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker’s reservation wage, the smallest wage a worker is willing to accept for a task and the key parameter in our labor supply model. We show that the reservation wages of a sample of workers from Amazon’s Mechanical Turk (AMT) are approximately log-normally distributed, with a median wage of $1.38/hour. At the median wage, the point elasticity of extensive labor supply is 0.43. We discuss how to use our calibrated model to make predictions in applied work. Two experimental tests of the model show that many workers respond rationally to offered incentives. However, a non-trivial fraction of subjects appear to set earnings targets. These “target earners” consider not just the offered wage, which is what the rational model predicts, but also their proximity to earnings goals. Interestingly, a number of workers clearly prefer earning total amounts evenly divisible by 5, presumably because these amounts make good targets.


💡 Research Summary

The paper “The Labor Economics of Paid Crowdsourcing” develops a formal labor‑supply model for workers on paid crowdsourcing platforms, using Amazon’s Mechanical Turk (AMT) as the empirical testbed. The authors begin by positing that each worker has a reservation wage – the lowest hourly compensation they are willing to accept for a given task. To estimate this unobservable parameter, they design a two‑stage experimental protocol. In the first stage, participants receive a base payment plus a marginal increase for each additional unit of work completed; the point at which a participant stops working reveals a threshold wage. In the second stage, the payment schedule is altered to verify that the identified threshold behaves consistently across incentive structures. By inverting the observed stopping points, the authors recover individual reservation wages.
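The inversion step is mechanical once the payment schedule is known: a worker who stops after k tasks accepted the implied hourly wage of task k but declined the lower wage of task k+1, which brackets their reservation wage. Below is a minimal sketch of that logic; the per-task payments and the three-minute task duration are illustrative assumptions, not the paper's actual schedule.

```python
# Illustrative sketch (not the authors' code): recovering reservation-wage
# bounds from a stopping point under a declining marginal-wage schedule.
TASK_MINUTES = 3                                   # assumed time per task
payments = [0.30, 0.25, 0.20, 0.15, 0.10, 0.05]    # assumed per-task pay (USD)

# Implied hourly wage for each successive task.
hourly = [p * 60 / TASK_MINUTES for p in payments]

def reservation_wage_bounds(tasks_completed: int):
    """A worker who stops after `tasks_completed` tasks accepted the wage of
    the last task done but rejected the next, lower offer, so their
    reservation wage lies between those two implied hourly rates."""
    if tasks_completed == 0:
        return (hourly[0], float("inf"))   # rejected even the first offer
    if tasks_completed >= len(hourly):
        return (0.0, hourly[-1])           # accepted every offer
    return (hourly[tasks_completed], hourly[tasks_completed - 1])

lo, hi = reservation_wage_bounds(4)
print(f"reservation wage in [${lo:.2f}, ${hi:.2f}] per hour")
```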

The estimated reservation wages are approximately log‑normally distributed, with a median of $1.38 per hour and a mean of around $1.55 per hour. This low median reflects the “micro‑task” nature of AMT work and the fact that many workers treat these tasks as supplemental income rather than primary employment. Using the calibrated reservation‑wage distribution, the authors compute the point elasticity of extensive labor supply, i.e., the responsiveness of the decision to participate (versus not) to changes in the offered wage. At the median wage, the elasticity is 0.43, meaning a 10% increase in the hourly rate would raise the proportion of workers who choose to participate by only about 4.3%. This modest elasticity suggests that wage increases alone are insufficient to dramatically expand the labor pool on such platforms.
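In this framework a worker participates whenever the offered wage w exceeds their reservation wage, so the participation rate is the reservation-wage CDF F(w) and the extensive-margin elasticity is ε(w) = w·f(w)/F(w). The sketch below fits a log-normal to synthetic data and evaluates this formula at the median; the dispersion parameter is chosen so the result lands near the reported 0.43 and is not the paper's estimate.

```python
# Illustrative sketch: fit a log-normal to reservation wages and compute the
# point elasticity of extensive labor supply at the median wage.
# The sample is synthetic; only the $1.38 median and ~0.43 elasticity
# figures come from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wages = rng.lognormal(mean=np.log(1.38), sigma=1.86, size=5000)  # synthetic

shape, loc, scale = stats.lognorm.fit(wages, floc=0)
dist = stats.lognorm(shape, loc=loc, scale=scale)

w = dist.median()
# Participation share at wage w is F(w): workers with reservation wage <= w
# accept the task, so elasticity = w * f(w) / F(w).
elasticity = w * dist.pdf(w) / dist.cdf(w)
print(f"median ${w:.2f}/hr, extensive-margin elasticity {elasticity:.2f}")
```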

Beyond the rational wage‑maximizing behavior predicted by the model, the experiments uncover a sizable subgroup of “target earners.” These participants do not base their effort solely on the marginal wage; instead, they set internal earnings goals (e.g., $5, $10) and adjust their labor supply to reach or stay near these targets. The data show a pronounced preference for earnings that are multiples of five dollars, indicating a cognitive bias toward round numbers. When a worker is close to a self‑selected target, effort intensifies; once the target is surpassed, many workers reduce their speed or stop altogether, even if the marginal wage remains attractive.
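A simple diagnostic for this round-number preference is to measure how often final payouts land exactly on a $5 multiple; under target earning we would expect visible bunching there. A hedged sketch with made-up earnings data:

```python
# Illustrative check (synthetic data): share of total earnings landing on a
# multiple of $5.
import numpy as np

earnings = np.array([4.85, 5.00, 5.00, 7.30, 10.00, 10.00, 12.45, 15.00])

# Distance to the nearest $5 multiple, robust to float rounding.
remainder = earnings % 5.0
dist_to_multiple = np.minimum(remainder, 5.0 - remainder)
on_target = dist_to_multiple < 0.01

print(f"share ending on a $5 multiple: {on_target.mean():.0%}")  # 62%
```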

The authors discuss several implications for platform design and policy. First, because reservation wages are heterogeneous and log‑normally distributed, platform operators should avoid relying on a single average wage figure when setting payment levels. Second, the low extensive‑margin elasticity implies that non‑monetary incentives—such as reputation scores, badges, skill‑building opportunities, or gamified progress bars—may be more effective at attracting and retaining workers than modest wage increases. Third, recognizing the target‑earning behavior, designers could structure payment schemes that align with round‑number goals (e.g., offering bonuses when a worker’s cumulative earnings cross a $5 or $10 threshold) to sustain participation. Fourth, the findings suggest that crowd work can be modeled with standard labor‑economics tools, but extensions are needed to capture behavioral deviations like goal‑oriented labor supply.
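As a concrete illustration of the third point, a payment rule could add a small bonus each time cumulative earnings cross a round-number threshold. The function below is a hypothetical scheme for exposition, not something proposed or tested in the paper.

```python
# Hypothetical payment rule (not from the paper): a small bonus each time
# cumulative earnings cross a round-number threshold, so pay schedules line
# up with workers' $5/$10 targets.
def payout_with_threshold_bonus(base_earnings: float,
                                threshold: float = 5.0,
                                bonus: float = 0.25) -> float:
    """Return base pay plus `bonus` for every full `threshold` dollars earned."""
    crossings = int(base_earnings // threshold)
    return base_earnings + crossings * bonus

print(payout_with_threshold_bonus(12.40))  # 12.40 + 2 * 0.25 = 12.90
```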

The study’s limitations are acknowledged. All data come from AMT, a platform with a predominantly U.S. and Indian worker base; generalizing to other crowdsourcing markets (e.g., Clickworker, Prolific, Chinese platforms) requires further validation. The experimental design captures short‑term responses; long‑term dynamics such as learning, fatigue, or worker churn are not addressed. Moreover, the model assumes a single task type with homogeneous effort, whereas real platforms host a wide variety of tasks differing in skill requirements and time intensity.

In conclusion, the paper makes three major contributions: (1) it introduces a novel, behavior‑based method for estimating workers’ reservation wages; (2) it quantifies the elasticity of extensive labor supply in a paid crowdsourcing context, showing that wage changes have limited impact on participation; and (3) it uncovers systematic target‑earning behavior, highlighting the importance of cognitive biases in labor decisions. These insights provide a foundation for more efficient compensation schemes, better prediction of labor supply, and a richer integration of behavioral economics into the design of digital gig platforms.

