Does Algorithmic Uncertainty Sway Human Experts? Evidence from a Field Experiment in Selective College Admissions

Notice: This research summary and analysis were automatically generated using AI technology. For complete accuracy, please refer to the original arXiv source.

Algorithmic predictions are inherently uncertain: even models with similar aggregate accuracy can produce different predictions for the same individual, raising concerns that high-stakes decisions may become sensitive to arbitrary modeling choices. In this paper, we define algorithmic reliance as the extent to which a decision outcome depends on whether a more favorable versus less favorable algorithmic prediction is presented to the decision-maker. We estimate this in a randomized field experiment (n=19,545) embedded in a selective U.S. college admissions cycle, in which admissions officers reviewed each application alongside an algorithmic score while we randomly varied whether the score came from one of two similarly accurate prediction models. Although the two models performed similarly in aggregate, they frequently assigned different scores to the same applicant, creating exogenous variation in the score shown. Surprisingly, we find little evidence of algorithmic reliance: presenting a more favorable score does not meaningfully increase an applicant’s probability of admission on average, even when the models disagree substantially. These findings suggest that, in this expert, high-stakes setting, human decision-making is largely invariant to arbitrary variation in algorithmic predictions, underscoring the role of professional discretion and institutional context in mediating the downstream effects of algorithmic uncertainty.


💡 Research Summary

Purpose and Context
The paper investigates whether the inherent uncertainty of algorithmic predictions—specifically, predictive multiplicity where equally accurate models produce different scores for the same individual—affects high‑stakes human decisions. The authors focus on selective college admissions, a domain where decisions are holistic, lack an objective ground truth, and are ultimately made by trained admissions officers. They introduce the concept of “algorithmic reliance” as the causal effect of presenting a more favorable versus a less favorable algorithmic score on the probability of a favorable decision (admission).

Definition of Algorithmic Reliance
For any pair of algorithmic predictions a_H (more favorable) and a_L (less favorable), algorithmic reliance is defined as

AR(a_H, a_L) = E[Y(a_H)] − E[Y(a_L)],

where Y(a) ∈ {0, 1} indicates whether the applicant receives the favorable decision (admission) when score a is shown to the reviewer. A positive AR means the decision shifts toward admission when the more favorable score is presented.
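Because the shown score is randomized between the two models, algorithmic reliance can be estimated as a simple difference in admission rates between the two experimental arms. Below is a minimal sketch of that estimator; the data are simulated for illustration and are not the paper's actual results, and the function name is our own:

```python
import random

def estimate_ar(admits_high, admits_low):
    """Estimate algorithmic reliance as a difference in admission rates.

    admits_high: 0/1 admission decisions when the more favorable score a_H was shown
    admits_low:  0/1 admission decisions when the less favorable score a_L was shown
    Returns the difference in mean admission rates between the two arms.
    """
    return sum(admits_high) / len(admits_high) - sum(admits_low) / len(admits_low)

# Simulated data mimicking the paper's null-like finding: both arms admit
# applicants at roughly the same base rate, so the estimate is near zero.
random.seed(0)
high_arm = [1 if random.random() < 0.20 else 0 for _ in range(10_000)]
low_arm = [1 if random.random() < 0.20 else 0 for _ in range(10_000)]
print(round(estimate_ar(high_arm, low_arm), 3))
```

Under the randomized design this difference in means identifies AR(a_H, a_L) directly; an estimate near zero corresponds to the paper's finding that decisions are largely invariant to which model's score is shown.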

