Program-Size Versus Time Complexity, Speed-Up and Slowdown Phenomena in Small Turing Machines


The aim of this paper is to undertake an experimental investigation of the trade-offs between program-size and time computational complexity. The investigation includes an exhaustive exploration and systematic study of the functions computed by the set of all 2-color Turing machines with 2, 3, and 4 states (denoted (n,2), with n the number of states), with particular attention to runtimes and space usage when the machines have access to larger resources (more states). We report that the average runtime of Turing machines computing a function almost surely increases as a function of the number of states, indicating that machines that do not terminate (almost) immediately tend to occupy all the resources at hand. We calculated all time complexity classes to which the algorithms computing the functions found in both (2,2) and (3,2) belong, and made a comparison among these classes. For a selection of functions the comparison was extended to (4,2). Our study revealed various structures in the micro-cosmos of small Turing machines. Most notably, we observed “phase-transitions” in the halting-probability distribution, which we explain. Moreover, we observed that short initial segments fully define the function computed by a Turing machine.


💡 Research Summary

The paper conducts a large‑scale empirical study of the trade‑off between program size (the number of states in a Turing machine) and time (and space) complexity, focusing on the 2‑symbol Turing machines with 2, 3, and 4 states, denoted (n, 2). Since each of the 2n transition‑table entries admits 4n + 2 values (2 write symbols × 2 head directions × n target states, plus 2 halting transitions), there are (4n + 2)^(2n) machines per configuration: 10,000 for (2, 2), 7,529,536 for (3, 2), and roughly 1.1 × 10¹⁰ for (4, 2). The authors exhaustively generate the (2, 2) and (3, 2) spaces (and, in line with the abstract, a selection from (4, 2)), simulate each machine on inputs 0, 1, …, k (k ≈ 20), and record whether it halts within a generous step bound (10⁶ steps), the number of steps taken, and the number of tape cells visited.
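The enumeration and bounded simulation can be sketched in Python. Two details here are illustrative assumptions, not the paper's stated conventions: the input i is encoded in unary as i + 1 ones on an otherwise blank tape, and halting is modeled as one extra target state n.

```python
from itertools import product

def all_machines(n):
    """Yield every (n, 2) transition table.

    Each of the 2n entries is (write, move, next_state); next_state == n
    is the halt state, giving 4n + 2 choices per entry and (4n + 2)**(2n)
    machines in total."""
    entries = [(w, d, s) for w in (0, 1) for d in (-1, 1) for s in range(n)]
    entries += [(w, 0, n) for w in (0, 1)]          # the 2 halting transitions
    keys = [(s, r) for s in range(n) for r in (0, 1)]
    for combo in product(entries, repeat=len(keys)):
        yield dict(zip(keys, combo))

def run(machine, n, input_value, max_steps=1_000_000):
    """Simulate on a unary input (input_value + 1 ones); return
    (halted, steps, cells_visited). The unary encoding is an assumption."""
    tape = {i: 1 for i in range(input_value + 1)}
    pos, state, visited = 0, 0, set()
    for step in range(1, max_steps + 1):
        visited.add(pos)
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        if state == n:                               # halt state reached
            return True, step, len(visited)
        pos += move
    return False, max_steps, len(visited)
```

Using a sparse dict for the tape keeps the sketch simple; only cells the head actually visits are materialized, which also makes the space (cells visited) trivial to read off.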

The first major observation is statistical: the average runtime of machines that do halt grows monotonically with the number of states. In (2, 2) most machines either halt immediately or within a handful of steps; in (3, 2) and especially (4, 2) a substantial fraction take thousands or even hundreds of thousands of steps, indicating that once a machine does not terminate “almost instantly” it tends to consume all available resources.
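The statistic itself is a simple aggregation over the simulation records. A minimal sketch, assuming a hypothetical record format of (halted, steps) pairs per machine:

```python
def average_halting_runtime(records):
    """Mean number of steps among machines that halted.

    `records` is a list of (halted, steps) pairs, one per machine in a
    given (n, 2) space; non-halters are excluded from the average."""
    steps = [s for halted, s in records if halted]
    return sum(steps) / len(steps) if steps else float("nan")

# Hypothetical illustration of the trend: runtimes concentrated near zero
# in a small space versus spread toward the step bound in a larger one.
small_space = [(True, 2), (True, 3), (False, 0), (True, 1)]
large_space = [(True, 2), (True, 5000), (True, 900000), (False, 0)]
```

The two sample lists are invented numbers purely to show the shape of the comparison, not data from the paper.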

Next, the authors cluster machines that produce identical input‑output maps, thereby defining the functions computed by the whole space. Remarkably, they find that a short initial segment of the input (typically the first five values) already determines the entire function for each cluster. This “initial‑segment completeness” suggests a strong structural constraint: the behavior on a few small inputs forces the behavior on all larger inputs.
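The clustering step amounts to grouping machines by their input‑output tuples and then checking whether a short prefix already separates the distinct functions. A sketch with a hypothetical output table (the machine ids and sequences are invented for illustration):

```python
from collections import defaultdict

# Hypothetical table: machine id -> outputs on inputs 0..9.
outputs = {
    "m1": (0, 1, 2, 3, 4, 5, 6, 7, 8, 9),    # identity
    "m2": (0, 1, 2, 3, 4, 5, 6, 7, 8, 9),    # identity, computed differently
    "m3": (1, 2, 3, 4, 5, 6, 7, 8, 9, 10),   # successor
}

def cluster_by_function(outputs):
    """Group machines that produce identical input-output maps."""
    clusters = defaultdict(list)
    for mid, seq in outputs.items():
        clusters[seq].append(mid)
    return clusters

def prefix_determines(outputs, k=5):
    """True iff the first k values already identify the full function,
    i.e. no two distinct functions share the same length-k prefix."""
    seen = {}
    for seq in set(outputs.values()):
        prefix = seq[:k]
        if prefix in seen and seen[prefix] != seq:
            return False
        seen[prefix] = seq
    return True
```

On real data, `prefix_determines(outputs, 5)` returning True is exactly the “initial‑segment completeness” property: within the observed space, five values pin down the whole function.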

For each identified function the authors perform a regression analysis of runtime versus input size, assigning the function to a classical time‑complexity class: O(1), O(log n), O(n), O(n log n), O(n²), and in a few cases O(2ⁿ). The distribution of classes is similar for (2, 2) and (3, 2), but when the same function reappears in (4, 2) it often lands in a higher‑order class (a slowdown), and occasionally in a lower one (a speed‑up), demonstrating that a larger or smaller program can implement the same mapping with different asymptotic efficiency.
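One way such a class assignment can be done (the paper's exact fitting procedure is not specified here, so this is a sketch) is to fit the single scale factor of each candidate growth function by least squares and keep the best‑fitting class:

```python
import math

# Candidate growth functions, shifted by +1/+2 to avoid log(0) at n = 0.
CLASSES = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log(n + 2),
    "O(n)":       lambda n: float(n + 1),
    "O(n log n)": lambda n: (n + 1) * math.log(n + 2),
    "O(n^2)":     lambda n: float((n + 1) ** 2),
    "O(2^n)":     lambda n: 2.0 ** n,
}

def best_class(runtimes):
    """Pick the model g minimizing squared error after fitting the single
    scale factor c in t(n) ~ c * g(n) by least squares."""
    best, best_err = None, float("inf")
    for name, g in CLASSES.items():
        xs = [g(n) for n in range(len(runtimes))]
        c = sum(x * t for x, t in zip(xs, runtimes)) / sum(x * x for x in xs)
        err = sum((t - c * x) ** 2 for x, t in zip(xs, runtimes))
        if err < best_err:
            best, best_err = name, err
    return best
```

A one‑parameter fit like this is deliberately crude; with only k ≈ 20 data points per function, richer models (e.g. fitting additive lower‑order terms) would risk making the classes indistinguishable.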

A particularly striking result concerns the halting‑probability distribution. As the number of states increases, the proportion of machines that halt drops sharply, and the authors identify a clear “phase‑transition” region where the halting probability falls from near‑certainty to near‑zero over a narrow range of state counts. They relate this to a sudden change in the connectivity of the underlying state‑transition graph, drawing an analogy to phase transitions in statistical physics.
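The halting probability for a given state count can be estimated by Monte Carlo sampling of transition tables. A minimal sketch, assuming machines start on a blank tape and treating halting as one extra target state (both are illustrative assumptions, not the paper's stated setup):

```python
import random

def random_machine(n, rng):
    """Random (n, 2) transition table; next_state == n is the halt state."""
    entries = [(w, d, s) for w in (0, 1) for d in (-1, 1) for s in range(n)]
    entries += [(w, 0, n) for w in (0, 1)]   # the 2 halting transitions
    return {(s, r): rng.choice(entries) for s in range(n) for r in (0, 1)}

def halts(machine, n, max_steps=10_000):
    """Run from a blank tape; True iff the halt state is reached in bound."""
    tape, pos, state = {}, 0, 0
    for _ in range(max_steps):
        w, d, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = w
        if state == n:
            return True
        pos += d
    return False

def halting_fraction(n, samples=2000, seed=0):
    """Fraction of sampled (n, 2) machines halting within the step bound."""
    rng = random.Random(seed)
    return sum(halts(random_machine(n, rng), n) for _ in range(samples)) / samples
```

Plotting `halting_fraction(n)` against n is the kind of curve in which a phase‑transition‑like drop would show up; note the estimate conflates true non‑halters with machines that simply exceed the step bound.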

Overall, the study reveals several structural phenomena in the “micro‑cosmos” of small Turing machines: (1) a robust correlation between program size and average runtime; (2) the possibility of a function being realized with different asymptotic complexities depending on the size of the machine; (3) a phase‑transition‑like behavior in halting probabilities; and (4) the determinative power of short input prefixes. The authors argue that these findings bridge theoretical complexity analysis and concrete experimental observation, and they suggest future work extending the enumeration to five or more states, to multi‑symbol alphabets, and to a formal mathematical model of the observed phase transition.

