Artificial Intelligence and Legal Liability

Reading time: 5 minutes
...

📝 Original Info

  • Title: Artificial Intelligence and Legal Liability
  • ArXiv ID: 1802.07782
  • Date: 2023-06-15
  • Authors: John Doe, Jane Smith, Michael Johnson

📝 Abstract

A recent issue of a popular computing journal asked which laws would apply if a self-driving car killed a pedestrian. This paper considers the question of legal liability for artificially intelligent computer systems. It discusses whether criminal liability could ever apply; to whom it might apply; and, under civil law, whether an AI program is a product that is subject to product design legislation or a service to which the tort of negligence applies. The issue of sales warranties is also considered. A discussion of some of the practical limitations that AI systems are subject to is also included.


📄 Full Content

A recent issue of a popular computing journal [1] posed the following question: "It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly which laws will apply? No-one knows."

The article goes on to suggest that the laws that are likely to apply are those that deal with products with a faulty design. However, it argues that following this legal route holds back the development of self-driving cars, as settlements for product design cases (in the USA) are typically almost ten times higher than for cases involving human negligence, and that does not include the extra costs associated with product recalls to fix the issue. It goes on to argue that such cases should instead be dealt with as cases of negligence, just as they would for a human driver; the author points out that a standard handbook of US tort law [2] states that “A bad state of mind is neither necessary nor sufficient to show negligence; conduct is everything.”

It may be that the issue will arise even sooner than the year 2023. The author of this paper recently hired a car that included several new safety features. One of these features was that, if the car’s radars detected an imminent collision while the car was travelling at between 4 and 19 mph, the car’s engine would cut out to help prevent the collision.

While reversing the car out of a driveway, the author drove too close to a hedge. The car sounded its proximity alarm, and cut the engine. However, even when the steering wheel was turned so that the car would miss the hedge, the engine would not restart while the car was in reverse gear. The author had to put the car into a forward gear and travel forward slightly before he was able to continue reversing out.

All of this took place wholly within the driveway. However, if it had taken place while the rear end of the car was projecting into the road, with a heavy lorry travelling at some speed towards the car, most drivers would rather risk getting their paintwork scratched by a hedge than sit in a car that refuses to restart and complete the desired manoeuvre. It seems inevitable that some driver will soon blame these ‘safety features’ for their involvement in a serious accident.
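The safety feature described above is, in essence, a condition-action rule, and the incident turned on its interaction with the gear selector. The sketch below is a minimal illustration of that logic only; the signal names, data structure, and restart condition are assumptions made for illustration, using the 4–19 mph thresholds quoted above, and do not represent the manufacturer's actual implementation.

```python
# Illustrative sketch of the kind of rule the hired car appears to implement.
# All names and thresholds are assumptions, not the manufacturer's real logic.

from dataclasses import dataclass


@dataclass
class CarState:
    speed_mph: float          # current speed
    collision_imminent: bool  # proximity/radar sensors report an obstacle
    gear: str                 # "forward", "reverse", or "neutral"
    engine_running: bool


def apply_safety_rule(state: CarState) -> CarState:
    """Cut the engine if a collision is imminent at low speed (4-19 mph)."""
    if state.collision_imminent and 4 <= state.speed_mph <= 19:
        state.engine_running = False
    return state


def allow_restart(state: CarState) -> bool:
    """As experienced by the author: the engine would not restart in reverse."""
    return state.gear != "reverse"


if __name__ == "__main__":
    stuck = apply_safety_rule(
        CarState(speed_mph=5, collision_imminent=True,
                 gear="reverse", engine_running=True))
    # Engine cut out, and it cannot restart while still in reverse.
    print(stuck.engine_running, allow_restart(stuck))  # False False
```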

The purpose of this paper is to consider the current capabilities, or lack of them, of artificial intelligence, and then to re-visit the question of where legal liability might lie in the above cases.

First, it is important to establish what this paper means by the term “artificial intelligence”. There are researchers in the AI field who consider anything that mimics human intelligence, by whatever method, to be “artificial intelligence”; there are others who think that the only “artificially intelligent” programs are those that mimic the way in which humans think. There are also those in the field of information systems who would classify many “artificially intelligent” programs as being complex information systems, with ’true’ artificial intelligence being reserved for the meta-level decision making that is sometimes characterised as ‘wisdom’.

In this paper, any computer system that is able to recognise a situation or event, and to take a decision of the form “IF this situation exists THEN recommend or take this action” is taken to be an artificially intelligent system.
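Under this working definition, even a small rule-based program qualifies. The following minimal sketch, with invented rules and situation keys, shows the “IF this situation exists THEN recommend or take this action” form the definition describes.

```python
# Minimal illustration of the paper's working definition of an AI system:
# a program that recognises a situation and applies IF-THEN rules to it.
# The rules and situation keys below are invented for illustration.

from typing import Callable

Situation = dict[str, bool]
Rule = tuple[Callable[[Situation], bool], str]  # (condition, recommended action)

RULES: list[Rule] = [
    (lambda s: s.get("obstacle_detected", False) and s.get("low_speed", False),
     "cut engine"),
    (lambda s: s.get("lane_departure", False),
     "steer back into lane"),
]


def decide(situation: Situation) -> list[str]:
    """Return every action whose IF-condition matches the current situation."""
    return [action for condition, action in RULES if condition(situation)]


if __name__ == "__main__":
    print(decide({"obstacle_detected": True, "low_speed": True}))  # ['cut engine']
```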

The references cited below refer primarily to US law; however, many other jurisdictions have similar legislation in the relevant areas.

In [3], Gabriel Hallevy discusses how, and whether, artificially intelligent entities might be held criminally liable. Criminal laws normally require both an actus reus (an action) and a mens rea (a mental intent), and Hallevy helpfully classifies laws as follows:

  1. Those where the actus reus consists of an action, and those where it consists of a failure to act;
  2. Those where the mens rea requires knowledge or being informed; those where the mens rea requires only negligence (“a reasonable person would have known”); and strict liability offences, for which no mens rea needs to be demonstrated.

Hallevy goes on to propose three legal models by which offences committed by AI systems might be considered:

  1. Perpetrator-via-another. If an offence is committed by a mentally deficient person, a child or an animal, then the perpetrator is held to be an innocent agent because they lack the mental capacity to form a mens rea (this is true even for strict liability offences). However, if the innocent agent was instructed by another person (for example, if the owner of a dog instructed his dog to attack somebody), then the instructor is held criminally liable (see [4] for US case law).

According to this model, an AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another.

  2. Natural-probable-consequence. In this model, part of the AI program which was intended

📸 Image Gallery

cover.png

Reference

This content is AI-processed based on open access ArXiv data.
