What is Nature-like Computation? A Behavioural Approach and a Notion of Programmability

The aim of this paper is to propose an alternative behavioural definition of computation (and of a computer) based simply on whether a system is capable of reacting to the environment (the input), as reflected in a measure of programmability. This definition is intended to have relevance beyond the realm of digital computers, particularly vis-à-vis natural systems. This will be done by using an extension of a phase transition coefficient previously defined in an attempt to characterise the dynamical behaviour of cellular automata and other systems. The transition coefficient measures the sensitivity of a system to external stimuli, and will be used to define the susceptibility of a system to being (efficiently) programmed.


💡 Research Summary

The paper tackles the long‑standing question of what it means for a system to compute, proposing a behavioural definition that hinges on a system’s ability to react to external stimuli. Rather than relying on the traditional view of computation as symbolic manipulation or execution of explicit algorithms, the authors argue that any physical or natural system can be regarded as a computer if it exhibits a measurable degree of programmability – that is, if its output can be meaningfully steered by varying its input.

To operationalise this idea, the authors extend a metric they previously introduced called the transition coefficient (σ). The coefficient quantifies how small perturbations in a system’s initial conditions amplify (or dampen) over time. Formally, for a system S and a set of initial states I, one computes the trajectory of each state up to a fixed horizon t. The distance δ(i₁,i₂) between two initial states is compared with the distance δₜ(i₁,i₂) after t steps; the ratio captures the sensitivity of the dynamics to the perturbation. Averaging this ratio over many pairs of initial conditions yields σ(S). A high σ indicates that the system is highly sensitive to inputs, producing a wide repertoire of distinct behaviours; a low σ signals robustness or randomness that does not translate input differences into output differences.
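The averaging procedure described above can be sketched in a few lines of Python. The choice of Hamming distance, the fixed horizon t, and the random sampling of state pairs are illustrative assumptions rather than the paper's exact construction; since the coefficient was originally developed for cellular automata, an elementary CA step function is included as the demo system.

```python
import random

def eca_step(rule):
    """Return the update function of an elementary cellular automaton
    (periodic boundary) for the given Wolfram rule number."""
    table = [(rule >> i) & 1 for i in range(8)]
    def step(cells):
        n = len(cells)
        return tuple(
            table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)
        )
    return step

def hamming(a, b):
    """Number of positions at which two equal-length states differ."""
    return sum(x != y for x, y in zip(a, b))

def transition_coefficient(step, states, t, pairs=200, seed=0):
    """Estimate sigma: the mean ratio of the distance between two
    trajectories after t steps to the distance between their initial
    conditions, averaged over randomly sampled pairs of states."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(pairs):
        i1, i2 = rng.sample(states, 2)
        d0 = hamming(i1, i2)
        if d0 == 0:  # an identical pair carries no sensitivity information
            continue
        f1, f2 = i1, i2
        for _ in range(t):
            f1, f2 = step(f1), step(f2)
        ratios.append(hamming(f1, f2) / d0)
    return sum(ratios) / len(ratios)
```

Running the estimator on, say, Rule 110 over a pool of random 16-cell states (`transition_coefficient(eca_step(110), states, t=10)`) yields a single scalar summarising input sensitivity; swapping in a different rule number lets one compare CA rules side by side under the same protocol.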

Programmability is then defined in terms of σ: a system is deemed programmable if its transition coefficient exceeds an empirically chosen threshold θ. This threshold demarcates the region where input variations can be harnessed to generate desired behaviours efficiently. Importantly, the definition does not require the system to be Turing‑complete; it merely requires that the mapping from inputs to observable outputs be sufficiently rich and controllable.
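The threshold criterion amounts to a one-line predicate. In the sketch below, the default value of θ and the sample σ values are purely illustrative, since the paper treats θ as an empirically chosen cut-off.

```python
def is_programmable(sigma, theta=0.5):
    """A system counts as programmable when its transition
    coefficient exceeds the empirically chosen threshold theta."""
    return sigma > theta

# Hypothetical coefficients for two systems judged against the same theta:
print(is_programmable(0.85))  # True: input variations can steer the output
print(is_programmable(0.12))  # False: input differences are washed out
```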

The authors validate the framework using cellular automata (CA), a well‑studied class of discrete dynamical systems. Rule 110, known to be computationally universal, yields σ≈0.85, comfortably above typical thresholds, confirming its high programmability. In contrast, Rule 30, which exhibits chaotic, noise‑like patterns, produces σ≈0.12, indicating low programmability despite its complex appearance. These results demonstrate that the transition coefficient captures a nuanced notion of computational capability that aligns with established theoretical classifications while offering a behavioural perspective.

Beyond CA, the paper explores how σ can be applied to natural and engineered systems. Chemical reaction networks, neuronal assemblies, and models of opinion dynamics are examined as case studies. For instance, a catalytic reaction whose product distribution changes dramatically with slight variations in catalyst concentration shows a high σ, suggesting that the reaction network can be “programmed” by adjusting environmental parameters. Similarly, neural circuits display high σ when synaptic weights are modulated, reflecting the brain’s capacity to reconfigure activity patterns in response to sensory input. Social opinion models reveal that a small seed of dissent can either explode or die out depending on network topology, a phenomenon that is quantitatively captured by the transition coefficient.

The authors argue that these examples substantiate the claim that natural processes perform computation in a meaningful sense: they map inputs (environmental conditions, stimuli) to outputs (chemical products, neural firing patterns, collective opinions) in a way that can be measured, predicted, and, crucially, controlled. The transition coefficient thus provides a bridge between abstract computational theory and empirical science, offering a tool to assess the “computational power” of any dynamical system without recourse to symbolic representations.

A further contribution of the paper is the discussion of computational cost. While a high σ signals rich programmability, it also implies that the system may be highly sensitive, making precise control difficult. Therefore, efficient programming requires a balance: σ must be sufficiently large to allow diverse behaviours, yet the system’s dynamics must remain within a regime where input‑output mappings are stable enough for practical manipulation. This trade‑off mirrors classic complexity considerations (time/space resources) but is framed in terms of behavioural sensitivity and controllability.

In conclusion, the paper proposes a unifying, quantitative framework for understanding computation across digital, biological, chemical, and social domains. By defining computation through the lens of programmability measured by the transition coefficient, the authors extend the concept of a computer beyond silicon‑based machines to any system that can be steered by its environment. The work opens avenues for interdisciplinary research, inviting scholars to apply the metric to diverse systems, refine the theoretical underpinnings of σ, and explore the limits of programmability in natural and engineered contexts.