The Machine as Data: A Computational View of Emergence and Definability
Turing’s (1936) paper on computable numbers has played its role in underpinning different perspectives on the world of information. On the one hand, it encourages a digital ontology, with a perceived flatness of computational structure comprehensively hosting causality at the physical level and beyond. On the other (the main point of Turing’s paper), it gives an insight into the way in which higher-order information arises and leads to a loss of computational control, while demonstrating how that control can be re-established, in special circumstances, via suitable type reductions. We examine the classical computational framework more closely than is usual, drawing out lessons for the wider application of information-theoretical approaches to characterizing the real world. The problem that arises across a range of contexts is that of characterizing the balance of power between the complexity of informational structure (with emergence, chaos, randomness and ‘big data’ prominently on the scene) and the means available (simulation, codes, statistical sampling, human intuition, semantic constructs) to bring this information back into the computational fold. We proceed via appropriate mathematical modelling to a more coherent view of the computational structure of information, relevant to a wide spectrum of areas of investigation.
💡 Research Summary
The paper revisits Alan Turing’s seminal 1936 article “On Computable Numbers, with an Application to the Entscheidungsproblem” and uses it as a springboard to explore how modern computational theory can account for the emergence of higher‑order information and the accompanying loss of algorithmic control. The author distinguishes two strands that run through Turing’s work. The first is the familiar digital ontology that treats the physical world as a flat substrate in which every causal process can be simulated by a Turing machine. The second, often overlooked, is Turing’s insight that certain computational processes generate new structures—what we now call emergence—that cannot be captured directly by a single, fixed‑type computation.
To bridge these strands the author introduces a type‑theoretic framework. Types are hierarchical collections of computable functions; moving from a higher type to a lower one is called type reduction. This reduction inevitably discards information, but under specific conditions the discarded information can be recovered, or at least the system’s behaviour can be brought back under algorithmic control. The paper formalises this “control re‑establishment” in two steps. First, when a complex system devolves into apparent randomness or chaos, statistical sampling, compression, and dimensionality‑reduction techniques are used to isolate a lower‑type subsystem that remains definable. Second, the classic Turing‑machine model is applied to this subsystem, restoring decidability and predictability. In this view, type reduction is the search for an intersection between algebraic definability (the existence of a closed‑form description) and logical definability (the existence of a decision procedure).
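The summary leaves “types” informal; one standard way to make the idea concrete is the finite type hierarchy over the natural numbers, in which a type reduction trades a higher-type functional for a lower-type code. The notation below is assumed textbook convention, not taken from the paper:

```latex
% Finite type hierarchy over the natural numbers (textbook notation,
% assumed here rather than quoted from the paper):
\[
  T_0 = \mathbb{N}, \qquad T_{n+1} = \{\, F : T_n \to \mathbb{N} \,\}.
\]
% A type reduction replaces a higher-type object F \in T_{n+1} by a
% lower-type stand-in, e.g. a type-0 index e with F = \Phi_e for some
% fixed enumeration (\Phi_e) of computable functionals; such an index
% exists only when F is itself computable at the lower level.
```

On this reading, the paper’s “loss of computational control” corresponds to climbing the hierarchy, and re-establishing control corresponds to finding such a lower-type index.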
The author models emergence using infinite trees whose nodes represent computation steps and whose branches encode possible state transitions. Certain self‑replicating or explosively growing patterns on these trees are identified as emergent phenomena. Although such patterns lie beyond the decidability frontier of ordinary Turing machines, the type‑reduction process can compress them into bounded sub‑trees that are again decidable. This provides a rigorous mathematical account of how emergent behaviour can be “tamed” without discarding the richness of the original system.
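The paper’s tree construction is only described abstractly here, so the following Python sketch is schematic: the state space, the transition rule, and the depth bound are all invented for illustration. It shows the basic contrast the paragraph relies on: the full tree is infinite, while a depth-bounded subtree is finite and hence any property of it is decidable by exhaustive search.

```python
# Schematic illustration (not the paper's construction): a lazily
# generated, potentially infinite computation tree, with "type
# reduction" modelled as truncation to a bounded, fully explorable
# subtree. The transition rule is made up for the example.

def transitions(state):
    """Possible next states; a stand-in for a machine's step relation."""
    return [2 * state, 2 * state + 1]   # binary branching on integers

def bounded_subtree(root, depth):
    """Enumerate the finite subtree of all paths of length <= depth.

    The full tree is infinite, so questions about it may require
    unbounded search; the depth-bounded subtree is finite, so any
    property of it can be checked exhaustively.
    """
    frontier, seen = [(root, 0)], []
    while frontier:
        state, d = frontier.pop()
        seen.append((state, d))
        if d < depth:
            frontier.extend((s, d + 1) for s in transitions(state))
    return seen

if __name__ == "__main__":
    nodes = bounded_subtree(root=1, depth=3)
    print(len(nodes), "nodes in the depth-3 subtree")  # 15 nodes
```

The only point of the sketch is the change in quantifier structure: properties of the infinite tree may need unbounded search, while properties of the truncated tree reduce to a finite case check.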
Beyond the formal theory, the paper discusses the practical roles of big data and human intuition. Big data supplies the high‑dimensional raw material from which lower‑type representations can be extracted via statistical summarisation and dimensionality reduction. Human intuition, framed as an informal semantic construct, guides the selection of relevant features and constrains the search space, effectively acting as a heuristic that complements algorithmic reduction. When combined, these resources enable scientists to translate otherwise intractable emergent dynamics into tractable simulations and predictions.
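As a concrete stand-in for the statistical summarisation mentioned above (the paper names no specific technique), principal component analysis via the singular value decomposition extracts a low-dimensional representation from high-dimensional samples. The synthetic data and the choice of two components below are assumptions for the example:

```python
# Minimal PCA-via-SVD sketch: compress high-dimensional observations
# into a low-dimensional summary. Synthetic data; k=2 components is
# an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
# 500 samples in 50 dimensions that secretly live near a 2-D plane
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

Xc = X - X.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                        # k-dimensional representation

explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"{explained:.1%} of variance kept in {k} dimensions")
```

With the synthetic data above, almost all of the variance survives in two coordinates, which is the sense in which the lower-dimensional representation retains the recoverable structure.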
The final section surveys applications across physics, biology, social science, and artificial intelligence. Quantum wave‑function collapse is likened to a type‑reduction event; cellular self‑replication is modelled as a pattern on an infinite tree; rapid opinion cascades in social networks are interpreted as emergent bursts that can be re‑controlled through statistical sampling; and large language models’ unpredictable text generation is presented as a modern illustration of the loss‑and‑re‑gain of computational control.
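Purely as an illustration of the opinion-cascade case (none of this appears in the paper), a toy simulation can show how a small random sample recovers a population-level quantity after a cascade, without tracking any individual trajectory:

```python
# Illustrative only (not from the paper): a crude opinion cascade,
# after which a small random sample estimates the population-level
# opinion share. The update rule and parameters are arbitrary.
import random

random.seed(1)
N = 10_000
opinions = [random.random() < 0.45 for _ in range(N)]  # initial minority view

# Cascade: each updated agent adopts the majority view of five
# randomly chosen peers (an assumed, simplistic update rule).
for _ in range(5 * N):
    i = random.randrange(N)
    peers = random.sample(range(N), 5)
    opinions[i] = sum(opinions[p] for p in peers) >= 3

true_share = sum(opinions) / N
sample = random.sample(range(N), 200)
estimate = sum(opinions[i] for i in sample) / len(sample)
print(f"true share {true_share:.3f}, sampled estimate {estimate:.3f}")
```

The only point is that the macro-level share is re-acquired from 200 samples rather than from the full histories of all 10,000 agents.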
In sum, the paper argues that Turing’s original work already contains the seeds of a unified computational view of emergence and definability. By extending his ideas with modern type theory, statistical learning, and semantic heuristics, the author offers a coherent framework that balances the raw complexity of informational structures against the limited but powerful tools—simulation, coding, sampling, intuition—that scientists can bring to bear. This framework not only clarifies the theoretical limits of digital ontology but also provides actionable guidance for handling the “complexity versus control” dilemma that pervades contemporary science and technology.