Are Minds Computable?
📝 Original Info
- Title: Are Minds Computable?
- ArXiv ID: 1110.3002
- Date: 2013-05-14
- Authors: Stuart Kauffman
📝 Abstract
This essay explores the limits of Turing machines concerning the modeling of minds and suggests alternatives for going beyond those limits.
📄 Full Content
Universal Turing machines (UTMs) have been a central concept in computer science and A.I. (Denning, 2010). UTMs are used to define computable functions. They are one of the theoretical foundations of modern computers. Still, they have limitations. Since A.I. has been working with computers based on the concept of a UTM, it is affected by the same limitations. Alternatives to classical A.I. (e.g. Rumelhart et al., 1986;Brooks, 1991) also suffer from these limitations, since these do not depend on representations (Gershenson, 2004).
The main limitations of UTMs related to modeling minds are two:
1. UTMs are closed.
2. UTMs compute, i.e. produce an output, only once they halt.
UTMs are closed because once an initial condition and a program are set, the result is deterministic. However, in many phenomena the data (on the tape or in the program) changes at runtime. One example of this can be seen with coupled Turing machines (Copeland, 1997; Copeland and Sylvan, 1999). Certainly, if the interactions and the updating scheme between Turing machines (where the computation of one machine affects the data of another) are deterministic and precisely defined, these could be modeled by a UTM. Nevertheless, in most cases they are known only a posteriori, because of computational irreducibility (Wolfram, 2002). Thus, one cannot define the output of an "open" Turing machine, since its inputs during runtime are unknown a priori. As opposed to UTMs, minds are constantly receiving inputs.
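The idea of coupled machines whose joint behavior must be run to be known can be illustrated with a minimal sketch (my own, not from the essay): two toy machines, each updating a circular binary tape with the elementary cellular-automaton rule 110, which Wolfram discusses as computationally irreducible, and each writing one symbol onto the other's tape at every step. The rules are trivial and deterministic, yet the coupled trajectory is obtained only by actually running the system.

```python
def step(tape, rule):
    """Apply a local update rule to a circular binary tape (list of 0/1)."""
    return [rule(tape[i - 1], tape[i], tape[(i + 1) % len(tape)])
            for i in range(len(tape))]

def rule110(left, center, right):
    """Elementary cellular-automaton rule 110, read off its rule number."""
    return (110 >> (left * 4 + center * 2 + right)) & 1

def run_coupled(tape_a, tape_b, steps):
    """Run two machines that overwrite one cell of each other at runtime."""
    for _ in range(steps):
        tape_a, tape_b = step(tape_a, rule110), step(tape_b, rule110)
        # Coupling: each machine injects a symbol into the other's tape,
        # so neither trajectory is a function of its own tape alone.
        tape_a[0], tape_b[0] = tape_b[-1], tape_a[-1]
    return tape_a, tape_b

a, b = run_coupled([0] * 7 + [1], [1] + [0] * 7, 20)
```

Because the coupling here is fully specified, a single UTM could still simulate the pair, which is precisely the essay's caveat; the point of the sketch is that even then there is no shortcut to the outcome other than running the computation.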
Computations carried out by UTMs have to halt before an output can be extracted from them. However, there are many computations that do not halt (Wegner, 1998; Denning, 2011). For example, biological computation (Mitchell, 2011) is not about computing a function, but about continuous information processing that sustains living systems (Maturana and Varela, 1980). Minds also fall within this category. A mind does not compute a function and halt; a mind is constantly processing information.
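The contrast between the two modes of computation can be sketched in a few lines (an illustration of my own, assuming Python; the function names are hypothetical): a function-style computation halts with its answer, while a stream-style process consumes an unbounded input and never defines a final output, only an evolving one.

```python
from itertools import count, islice

def halting_square(x):
    """Function-style computation: one input, one output, then halt."""
    return x * x

def running_average(stream):
    """Open-ended processing: yields an updated result for every new input."""
    total, n = 0, 0
    for x in stream:            # no built-in termination condition
        total, n = total + x, n + 1
        yield total / n

# The input stream count(1) = 1, 2, 3, ... is unbounded; we can only
# sample the ongoing process, never wait for it to "finish".
first_five = list(islice(running_average(count(1)), 5))
```

In the essay's terms, `halting_square` is what a UTM computes, while `running_average` over an endless stream has no output "at the end", because there is no end; any result we read off is a snapshot of a process still underway.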
From the above arguments, it can be concluded that UTMs cannot compute minds. Thus, A.I. as it stands now cannot model minds completely. Still, this does not imply that minds cannot be computed. The question now is: are there (non-Turing) mechanisms capable of computing a process similar to a mind? If we see computation as the transformation of information (Gershenson, 2007), then human minds are already computing processes. Thus, the answer is affirmative, since human minds can be seen as computers, but this answer is not very useful. Minds are computable, but how? A more pragmatic framing of the question would be: how can we describe computations of processes similar to minds? Such a description should enable us to better understand cognitive processes and also to build more sophisticated artificial cognitive systems.
As argued above, interactions (Gershenson, 2011) are the missing element in UTMs. This is probably due to a reductionist bias in science. Classical science, since the times of Galileo, Descartes, Newton, and Laplace, has tried to simplify and separate in order to predict (Kauffman, 2000). This is understandable, since traditional modeling is limited by the amount of information included in the model, which has naturally led to neglecting interactions. However, modern computers, themselves a product of reductionist science, have enabled us to include much more information in our models, opening the possibility of including interactions.
Computing with interactions (Wegner, 1998) implies models of computation that are open and that do not halt. Interestingly, modern computers, while based on UTMs, are able to perform interactive computation. It should be noted that Turing computability, which is theoretical, is different from practical computability (in a physical computer). For example, there are Turing-computable functions that are not computable in practice simply because there is not enough time and/or memory in the universe. On the other hand, computers can compute non-Turing-computable functions, such as halting functions, with the aid of an "oracle" (a posteriori). Interactions are another example, since computers may receive inputs while processing information and may not halt. This is positive for A.I., since it implies that modeling minds with current computers does not pose a hardware challenge.
The remaining challenge is related to reductionism. By necessity, many A.I. systems rely on interactions, be it with humans or with other artificial systems. Examples include