DISCRIMINATION IN THE AGE OF ALGORITHMS
February 5, 2019
Jon Kleinberg+
Jens Ludwig*
Sendhil Mullainathan**
Cass R. Sunstein***
ABSTRACT
The law forbids discrimination. But the ambiguity of human decision-making often makes it
extraordinarily hard for the legal system to know whether anyone has actually discriminated. To
understand how algorithms affect discrimination, we must therefore also understand how they affect the
problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just
cognitively but even mathematically. Yet for the task of proving discrimination, processes involving
algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do
not happen automatically. But with appropriate requirements in place, the use of algorithms will make it
possible to more easily examine and interrogate the entire decision process, thereby making it far easier
to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms
also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not
only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive
force for equity.
+ Tisch University Professor, Cornell University.
* Edwin A. and Betty L. Bergman Distinguished Service Professor, University of Chicago.
** Roman Family University Professor of Computation and Behavioral Science, University of Chicago.
***Robert Walmsley University Professor, Harvard University.
Thanks to Michael Ridgway for his assistance with data analysis; to Justin McCrary for helpful discussions; to Solon Barocas,
James Grenier, Saul Levmore, Karen Levy, Eric Posner, Manish Raghavan, and David Robinson for valuable comments; to the
MacArthur, Simons, and Russell Sage Foundations for financial support for this work on algorithmic fairness; to the Program
on Behavioral Economics and Public Policy at Harvard Law School; and to Tom and Susan Dunn, Ira Handler and the Pritzker
Foundation for support of the University of Chicago Urban Labs more generally. Thanks to Andrew Heinrich and Cody
Westphal for superb research assistance. All opinions and any errors are obviously ours alone.
I. INTRODUCTION
The law forbids discrimination, but it can be exceedingly difficult to find out whether human beings have
discriminated.1 Accused of violating the law, people might well dissemble. Some of the time, they
themselves might not even be aware that they have discriminated. Human decisions are frequently opaque
to outsiders, and they may not be much more transparent to insiders. A defining preoccupation of
discrimination law, to which we shall devote considerable attention, is how to handle the resulting
problems of proof.2 Those problems create serious epistemic challenges, and they produce predictable
disagreements along ideological lines.
Our central claim here is that when algorithms are involved, proving discrimination will be easier — or at
least it should be, and can be made to be. The law forbids discrimination by algorithm, and that
prohibition can be implemented by regulating the process through which algorithms are designed. This
implementation could codify the most common approach to building machine learning classification
algorithms in practice, and add detailed recordkeeping requirements. Such an approach would provide
valuable transparency about the decisions and choices made in building algorithms — and also about the
tradeoffs among relevant values.
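To make the recordkeeping idea concrete, the sketch below shows one hypothetical form such a record might take. The `AlgorithmRecord` structure, its field names, and the example values are our illustrative assumptions, not requirements drawn from existing law or from any actual regulatory scheme.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    """Hypothetical audit record for a machine-learning classifier,
    capturing the design choices a regulator might want preserved."""
    objective: str             # the outcome the algorithm is trained to predict
    candidate_predictors: list # all features considered during development
    predictors_used: list      # features retained in the final model
    training_data_path: str    # archived copy of the training sample
    validation_metrics: dict   # out-of-sample performance, overall and by group

# An invented example record for a hiring classifier.
record = AlgorithmRecord(
    objective="predict on-the-job performance rating",
    candidate_predictors=["education", "experience", "test_score", "gender"],
    predictors_used=["education", "experience", "test_score"],
    training_data_path="archive/hiring_train_v1.csv",
    validation_metrics={"auc_overall": 0.71, "auc_women": 0.70, "auc_men": 0.72},
)

# With such a record, an examiner can verify mechanically which candidate
# features were excluded from the final model:
excluded = set(record.candidate_predictors) - set(record.predictors_used)
print(excluded)  # {'gender'}
```

The point of the sketch is only that the choices made in building an algorithm — objective, candidate features, final features, training sample — are discrete and storable, so a recordkeeping requirement has a well-defined object.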
We are keenly aware that these propositions are jarring, and that they will require considerable
elaboration. They ought to jar because in a crucial sense algorithms are not decipherable — one cannot
determine what an algorithm will do by reading the underlying code. This is more than a cognitive
limitation; it is a mathematical impossibility. To know what an algorithm will do, one must run it.3 The
task at hand, though, is to take an observed gap, such as differences in hiring rates by gender, and to
decide whether the gap should be attributed to discrimination as the law defines it. Such attributions need
not require that we read the code. Instead, they can be accomplished by examining the data given to the
algorithm and probing its outputs, a process that (we will argue) is eminently feasible. The opacity of the
algorithm does not prevent us from scrutinizing its construction or experimenting with its behavior — two
activities that are impossible with humans.4
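The kind of probing we have in mind can be illustrated with a minimal sketch: treat the model as a black box that can only be queried, and test whether its output changes when a protected attribute is flipped. The `black_box_score` function and the synthetic applicant data are invented for illustration; real audits would use the archived model and data described above.

```python
import random

def black_box_score(applicant):
    """Stand-in for an opaque hiring model: the auditor cannot read its
    internals, only query it with inputs and observe outputs."""
    return 0.4 * applicant["test_score"] + 0.6 * applicant["experience"]

# Generate a synthetic applicant pool for probing.
random.seed(0)
applicants = [
    {"test_score": random.random(), "experience": random.random(),
     "gender": random.choice(["F", "M"])}
    for _ in range(1000)
]

# Counterfactual probing: flip the protected attribute for each applicant
# and measure the largest resulting change in the model's output.
max_change = 0.0
for a in applicants:
    flipped = dict(a, gender=("M" if a["gender"] == "F" else "F"))
    max_change = max(max_change, abs(black_box_score(a) - black_box_score(flipped)))

print(max_change)  # 0.0: in this sketch the score is invariant to gender
```

No such experiment is possible with a human decision-maker: one cannot present the same applicant twice with only the protected attribute changed and observe both judgments.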
Crucially, policy changes are needed to realize these benefits, such as the requirement that all the
components of an algorithm (including the training data) must be stored and made available for
examination and experimentation. It is important to see that without
…(Full text truncated)…