The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

Reading time: 5 minutes

📝 Original Info

  • Title: The Responsibility Quantification (ResQu) Model of Human Interaction with Automation
  • ArXiv ID: 1810.12644
  • Date: 2018-10
  • Authors: Nir Douer, Joachim Meyer

📝 Abstract

Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using Information Theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that the human's comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step toward the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge of the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and policy decisions regarding human responsibility in intelligent systems and advanced automation.

📄 Full Content

Advanced automation and intelligent systems have become ubiquitous and are major parts of our lives. Financial markets largely function through algorithmic trading mechanisms [1, 2], semiconductor manufacturing is almost entirely automated [3], and decision support systems and aids for diagnostic interpretation have become part of medical practice [4, 5]. Similarly, in aviation, flight management systems control almost all parts of the flight [6, 7], and in surface transportation, public transportation is increasingly automated, and the first autonomous cars are appearing on public roads [8, 9]. In these systems, computers and humans share the execution of different functions, such as the collection and evaluation of information, decision-making and action implementation.

As these intelligent systems become more advanced, the human comparative responsibility for outcomes becomes equivocal. For instance, what is a human’s responsibility when all information about an event arrives through a system that collects and analyzes data from multiple sources, without the human having access to any independent sources of information? If the human receives an indication that a certain action is needed, and accordingly performs the action, should the human be held responsible for the outcome of the action, if it causes harm?

Human responsibility is particularly important when system actions can injure people, as with autonomous vehicles. It becomes crucial when such harm is certain, as with autonomous weapon systems, which are deliberately designed to inflict lethal force.

So far, the subject of human responsibility has been investigated from philosophical, ethical, moral and legal perspectives. However, we still lack a quantitative engineering model of human responsibility. To address this need, we developed the Responsibility Quantification (ResQu) model, which enables us to compute human responsibility in the interaction with intelligent systems and automation. We demonstrate its application using the example of autonomous weapon systems, because this issue raises particular public concern. However, the model is applicable wherever intelligent systems and automation play a major role.
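The model's formal development appears in later sections of the paper, not reproduced in this excerpt. As a rough illustration of an information-theoretic measure of this kind, the Python sketch below computes the share of the entropy of the human's selected action that is not already explained by the automation's indication. The joint distribution, the variable names, and the entropy-ratio form are assumptions for illustration, not the authors' exact ResQu definition.

```python
import math
from collections import defaultdict

def entropy(p):
    """Shannon entropy (in bits) of a mapping {event: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def responsibility_share(joint):
    """Illustrative entropy-ratio measure: H(A | C) / H(A), the fraction
    of the entropy of the human's action A that is not explained by the
    automation's indication C. Near 1: the action varies independently
    of the automation; near 0: the action is fully determined by it.
    `joint` maps (indication, action) pairs to probabilities summing to 1.
    NOTE: an assumed illustration, not the paper's exact ResQu formula.
    """
    p_a = defaultdict(float)  # marginal distribution over actions
    p_c = defaultdict(float)  # marginal distribution over indications
    for (c, a), p in joint.items():
        p_a[a] += p
        p_c[c] += p
    h_a = entropy(p_a)
    # Conditional entropy H(A|C) = sum over c of p(c) * H(A | C = c)
    h_a_given_c = sum(
        pc * entropy({a: joint[(c2, a)] / pc
                      for (c2, a) in joint if c2 == c})
        for c, pc in p_c.items()
    )
    return h_a_given_c / h_a if h_a > 0 else 0.0

# A human who almost always follows the automation's indication:
joint = {
    ("fire", "fire"): 0.49, ("fire", "hold"): 0.01,
    ("hold", "fire"): 0.01, ("hold", "hold"): 0.49,
}
print(round(responsibility_share(joint), 3))  # ~0.141
```

With this hypothetical distribution, the measure comes out near 0.14: when the human almost always follows the automation, little of the action's variability is uniquely the human's, consistent with the abstract's claim that the human's comparative responsibility is often low.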

Philosophical and legal research has dealt extensively with the concept of responsibility, investigating its different facets, namely role responsibility, causal responsibility, liability (or legal responsibility) and moral responsibility [10][11][12]. When discussing human interaction with intelligent systems and automation, role responsibility relates to assigning specific roles and duties to the operator, for which the operator is accountable. However, this role assignment does not specify the causal relations between the operator’s actions and possible consequences and outcomes. This relation is better defined by causal responsibility, which describes the actual human contribution to system outcomes.

A large literature in psychology, such as attribution theory, sees causal responsibility as an essential primary condition for the attribution of blame and praise [13-17]. Causal responsibility is also a major factor in the way legal doctrines determine liability, punishments and civil remedies in criminal and tort law [18-20]. So far, causal responsibility has usually been associated with people: a person or an organization was seen as more or less responsible for a particular event. When an event involved technology, the responsibility usually rested with the user, unless some unforeseeable circumstances caused an unexpected outcome. Manufacturers of systems could also be held responsible if, for instance, they failed to install proper safeguards.

The field changed with the introduction of automation, defined as a system performing parts, or all, of a task that was, or could have been, performed by humans [21]. The ability to control a system and the resulting consequences is a necessary condition for assigning responsibility. However, humans may no longer be able to control intelligent systems and advanced automation sufficiently to be rightly considered responsible. As the level of automation and system intelligence increases, there is a shift toward shared control, in which the human and computerized systems jointly make decisions or control actions, which are combined to generate a final control action or decision. There may also be supervisory control, in which the human sets high-level goals, monitors the system and intervenes only if necessary [22]. In coactive designs, humans and systems engage in joint activities, based on supporting interdependence and complementary relations in performing sensing, planning, and acting functions [23, 24]. Moreover, in advanced systems, which incorporate artificial intelligence, neural networks, and machine learning, developers and users may be unable to fully control or predict all possible behaviors and outcomes, since the systems' internal structure can be opaque (a “black box”) and may change over time through learning.
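To make the distinction between these control schemes concrete, here is a minimal, hypothetical sketch for a binary engage/hold decision. The function names and the weighted-average combination rule are illustrative assumptions, not taken from the paper.

```python
def shared_control(human_score: float, auto_score: float,
                   human_weight: float = 0.5) -> str:
    """Shared control: human and automation assessments are blended
    into a single final decision (here, a simple weighted average;
    the blending rule is a hypothetical example)."""
    combined = human_weight * human_score + (1.0 - human_weight) * auto_score
    return "engage" if combined >= 0.5 else "hold"

def supervisory_control(auto_decision: str, human_override=None) -> str:
    """Supervisory control: the automation decides on its own, with a
    monitoring human who only intervenes (overrides) when necessary."""
    return human_override if human_override is not None else auto_decision

print(shared_control(0.3, 0.9))               # 'engage': the automation sways the blend
print(supervisory_control("engage"))          # 'engage': the human does not intervene
print(supervisory_control("engage", "hold"))  # 'hold': the human overrides
```

The contrast matters for responsibility: in shared control the human co-determines every outcome, while in supervisory control the human's contribution is concentrated in the (possibly rare) interventions.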

Reference

This content is AI-processed based on open access ArXiv data.
