The AI Penalty: People Reduce Compensation for Workers Who Use AI
We investigate whether and why people adjust compensation for workers who use AI tools. Across 13 studies (N = 4,956), participants consistently lowered compensation for workers who used AI compared to those who did not. This “AI penalty” is robust across work scenarios and tasks, worker statuses, forms and timing of compensation, methods of eliciting compensation, and perceptions of output quality. Moreover, the effect emerges both in hypothetical compensation scenarios and in real monetary compensation of gig workers. We find that perceived effort and perceived agency – the degree to which an individual serves as the originating source of the core intellectual or creative contribution in a task – explain decisions to reduce compensation for AI users. However, the penalty is not inevitable. Workers who strategically retain creative agency over core tasks recover most of the AI penalty, and employment contracts that make compensation reductions impermissible provide a structural means of reducing the AI penalty.
💡 Research Summary
The paper “The AI Penalty: People Reduce Compensation for Workers Who Use AI” investigates whether, and why, people adjust monetary compensation for workers who employ artificial‑intelligence tools. Across thirteen experiments involving a total of 4,956 participants, the authors consistently find that workers who use AI receive lower compensation than comparable workers who do not use AI. This “AI penalty” appears robust across a wide variety of contexts: different task domains (graphic design, social‑media copywriting, etc.), worker statuses (full‑time vs. freelance), forms of compensation (hypothetical one‑off payments, real bonuses to gig workers), decision‑making settings (comparative vs. non‑comparative), and even when AI use is framed as a productivity boost.
The authors ground their hypothesis in equity theory, arguing that evaluators compare a worker’s perceived inputs (effort, expertise, and especially creative agency) with the outcomes the worker receives (pay). AI use is argued to diminish two key inputs: (1) perceived effort, because AI is associated with efficiency gains; and (2) perceived creative agency, because AI is seen as an autonomous contributor that “takes over” the core intellectual work. When inputs are judged low relative to outcomes, evaluators restore equity by lowering pay.
Empirical evidence begins with two initial studies (N = 303 and 359) in which participants allocated a hypothetical payment to a graphic designer. When the designer was described as intending to use AI, the average payment fell from roughly $45–$47 to $33–$35 (large effect sizes, d ≈ ‑1.1). The pattern replicates across a series of follow‑up studies. Study 10 (N = 230) moves beyond hypothetical scenarios: real gig workers produce social‑media posts with or without AI, and a second group of “managers” allocates actual cash bonuses. Managers give AI users $0.35 on average versus $0.65 for non‑AI users, even after controlling for independent performance ratings. Study 11 (N = 505) shows the same effect when each manager evaluates a single worker in isolation, confirming that the penalty does not depend on direct comparison.
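The effect sizes reported here are standardized mean differences (Cohen’s d). As a rough illustration of what d ≈ ‑1.1 means for the initial studies, the value can be recovered from group summary statistics; the group sizes and the pooled standard deviation of about $11 below are assumptions for illustration, not figures taken from the paper:

```python
import math

def cohen_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d from two groups' summary statistics, using the pooled SD."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical numbers in the ballpark of the initial studies:
# AI-condition mean ≈ $34, no-AI mean ≈ $46, assumed SD ≈ $11 in each group.
d = cohen_d(34, 11, 150, 46, 11, 153)
print(round(d, 2))  # -1.09
```

A $12 drop against an $11 spread is thus a difference of more than one standard deviation, which is conventionally considered a large effect.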
Further experiments (Studies 3–6) test boundary conditions. Whether the worker is a permanent employee or a freelancer, whether the evaluator has a long history with the worker, and whether AI use is explicitly linked to productivity gains: none of these eliminates the penalty. In Study 6, 66% of participants report an intention to increase pay when AI is framed as productivity‑enhancing, yet the share who actually increase pay drops from 79% (no AI) to 52% (AI).
To rule out a generic “external‑help” penalty, Study 9 compares AI assistance with human assistance. Human assistance actually raises compensation (d = 0.35), whereas AI assistance lowers it (d = ‑0.94), indicating that the effect is specific to AI’s perceived autonomy rather than mere assistance.
Mediation analyses reveal that perceived effort and perceived creative agency both mediate the relationship between AI use and lower pay. Structural equation modeling confirms that the AI → reduced effort/agency → lower compensation pathway is statistically significant.
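The product‑of‑coefficients logic behind such a mediation analysis can be sketched with simulated data. Everything below is illustrative: the variable names and the coefficients in the data‑generating process are invented, not the paper’s estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
ai_use = rng.integers(0, 2, n).astype(float)  # 0 = no AI, 1 = AI
# Hypothetical data-generating process: AI use lowers perceived effort,
# and perceived effort raises pay (coefficients chosen for illustration).
effort = 5.0 - 2.0 * ai_use + rng.normal(0, 1, n)
pay = 20.0 + 3.0 * effort - 1.0 * ai_use + rng.normal(0, 1, n)

def ols(y, *xs):
    """Least-squares coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(effort, ai_use)[1]             # path a: AI -> mediator (effort)
b = ols(pay, ai_use, effort)[2]        # path b: mediator -> pay, AI held fixed
c = ols(pay, ai_use)[1]                # total effect of AI on pay
c_prime = ols(pay, ai_use, effort)[1]  # direct effect, controlling for effort
indirect = a * b                       # mediated (indirect) effect

# In linear models, a*b equals the drop in the AI coefficient, c - c'.
assert abs(indirect - (c - c_prime)) < 1e-8
print(f"total {c:.2f}, direct {c_prime:.2f}, indirect {indirect:.2f}")
```

A large negative indirect effect alongside a much smaller direct effect is the pattern the mediation claim implies: most of the pay reduction flows through the lowered perception of effort (and, analogously, of agency).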
Importantly, the penalty is not inevitable. Study 12 demonstrates that when AI users retain core creative control (e.g., they generate the concept and only use AI for polishing), the compensation gap shrinks dramatically. Moreover, contractual safeguards that forbid compensation reductions for AI‑assisted work can structurally block the penalty.
In sum, the research provides robust experimental evidence that AI adoption, despite its productivity promise, can trigger a bias that devalues the human worker’s contribution and leads to lower pay. The findings have practical implications for managers, policymakers, and designers of AI‑augmented work systems: to avoid exacerbating wage inequality, organizations should explicitly protect workers’ creative agency in AI‑human collaborations and consider contractual or policy mechanisms that prevent unjustified compensation cuts.