As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms utilised in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and the fact that the term 'bias' in machine learning has a different meaning from its common usage. An example of a real-world 'classifier', the Harm Assessment Risk Tool (HART), is examined through identification of its technical features: the classification method, the training data and the test data, the features and the labels, validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy; (b) fairness and equality before the law; (c) transparency and accountability; and (d) informational privacy and freedom of expression, in order to demonstrate how its technical features have important normative dimensions that bear directly on the extent to which the system can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
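To make the technical vocabulary used above concrete, the following is a minimal, hypothetical sketch (in Python with scikit-learn, on synthetic data that bears no relation to HART's actual inputs or model) of the elements examined later in the paper: a classification method, training and test data, features and labels, and simple performance measures used for validation.

```python
# Minimal, hypothetical sketch of a classifier workflow; the data are synthetic
# and the model is illustrative only -- it does not reproduce HART.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Features (X) and labels (y): 1,000 synthetic 'cases', each described by
# 10 numeric features and labelled with one of two outcome classes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Training data vs. test data: the classifier learns from one subset and is
# validated on another subset it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Classification method: here a random forest, an ensemble of decision trees.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # the 'learning' step

# Performance measures: overall accuracy on held-out cases, plus a confusion
# matrix separating the different kinds of error (false positives/negatives).
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
```

The fitted model captures correlations between features and labels in the training data; nothing in this process establishes causation, which is one reason why the accuracy and fairness of such classifiers require separate, explicit scrutiny.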
The recent 'machine learning' data revolution has not just resulted in game-playing programs like AlphaGo, or personal assistants like Siri, but in a multitude of applications aimed at improving efficiency in different organizations by automating decision-making processes. These applications (either proposed or deployed) may result in decisions being taken by algorithms about individuals which directly affect access to key societal opportunities, such as education, jobs, insurance and credit. For example, job applications are often screened by intelligent software1 for shortlisting before being seen by human recruiters;2 college admissions are increasingly informed by predictive analytics,3 as are mortgage applications4 and insurance claims.5 In those circumstances, individuals are usually engaging voluntarily with the organisation utilising the software, although they may not be aware of its use (Eubanks 2018). However, predictive analytics are increasingly in use in circumstances where the individual's engagement with the organisation is largely or wholly involuntary, e.g. in areas of State intervention such as child safeguarding and access to welfare benefits.6 A controversial area of application is that of criminal justice, where use of intelligent software may not directly affect the wider population, but the outcomes of the decisions made are of critical importance both to the rights of those subject to the criminal justice system and to the public's perception of the impartiality and fairness of the process.7 Software tools are already used in US and UK courts and correctional agencies8 to support decision-making about releasing an offender before trial (e.g. custody and bail decisions), and to determine the nature and length of the punishment meted out to a defendant (e.g. sentencing and parole). Such software, usually referred to as 'risk assessment tools',1 has become the focus of public attention and discussion because of claims about its associated risks (Angwin et al. 2016 and Wexler 2017) and benefits.9
1 A note on terminology: throughout the paper we will use the expressions "intelligent software" and "intelligent algorithms" to refer to the products of AI methods, and specifically of machine learning, while we will use the terms "machine decision" and "machine decision-making" to indicate the application of machine learning techniques to decision-making processes.
In a democratic society, when a human decision-maker, such as a judge, custody officer, or social worker, evaluates an individual for the purposes of making a decision about their entitlement to some tangible benefit or burden with real consequences for the relevant individual, it is understood that they should do so in accordance with a commonly understood set of normative principles, for example:
• justice (e.g. equality before the law, due process, impartiality, fairness);
• lawfulness (e.g. compliance with legal rules);
• protection of rights (e.g. freedom of expression, the right to privacy).
It is expected that not only will a decision-maker normally act in conformity with those principles, but also that the exercise of their decision-making powers can, when challenged, be exposed and subjected to meaningful oversight. This requires that the system within which they operate is subject to certain normative obligations, for example, that it is transparent and accountable, and as such facilitates public participation and engagement. Many decisions will not be subject to specific review or scrutiny processes as a matter of course. Thus, the legitimacy of the system within which they are made rests both upon the reliance we are willing to place on the decision-makers internalising and applying the normative principles (i.e. our having a rational basis on which to develop trust); and upon the system's effective observance of the normative obligations which provide the capacity and capability to criticise and question (i.e. our having a meaningful ability to exercise distrust). Decision-making systems in which the ability of the public to effectively develop trust or exercise distrust is significantly attenuated are likely to be perceived as illegitimate, and probably dysfunctional.
Ensuring that machine decision-making operates in accordance with normative principles, and takes place within a system that incorporates normative obligations, will be an important challenge, particularly as the precise nature and degree of observance of those principles and obligations may vary according to cultural and disciplinary expectations.10 The current generation of intelligent algorithms makes decisions based on rules learnt from examples, rather than explicit programming, and often these rules are not readily interpretable by humans. There is thus no guarantee that intelligent algorithms will necessarily internalise accurately, or apply effectively, the relevant normative principles; nor that the system within which they are embedded will effectively observe the corresponding normative obligations.
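The distinction between explicit programming and rules learnt from examples can be illustrated with a small hypothetical sketch (Python; the feature names, thresholds and data are invented for illustration and do not describe any deployed tool). A hand-written rule can be read, quoted and challenged directly; a model fitted to labelled examples encodes its decision logic in internal parameters, here thousands of split thresholds spread across an ensemble of trees, for which no equally readable statement exists.

```python
# Hypothetical contrast between an explicitly programmed rule and rules learnt
# from labelled examples; names, thresholds and data are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def explicit_rule(prior_offences: int, age: int) -> str:
    # An explicitly programmed decision: the rule is stated in advance and can
    # be read, debated and amended in ordinary language.
    return "high" if prior_offences > 3 and age < 25 else "low"

# A learnt alternative: the model induces its own rules from example cases.
X, y = make_classification(n_samples=500, n_features=8, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# The learnt 'rules' are distributed across 200 trees as numeric split points;
# counting the decision nodes gives a sense of how far this sits from a single
# human-readable rule.
n_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print("decision nodes in the learnt model:", n_nodes)
```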