A Framework for Heterotic Computing
Computational devices that combine two or more different parts, one controlling the operation of the other, derive their power from the interaction in addition to the capabilities of the parts. Non-classical computation has tended to consider only single computational models (neural, analog, quantum, chemical, biological), neglecting to account for the contribution of the experimental controls. In this position paper, we propose a framework suitable for analysing combined computational models, from abstract theory to practical programming tools. We focus on the simplest example: one system controlled by another through a sequence of operations in which only one system is active at a time, the output from one system becoming the input to the other for the next step, and vice versa. We outline the categorical machinery required for handling diverse computational systems in such combinations, with their interactions explicitly accounted for. Drawing on prior work in refinement and retrenchment, we suggest an appropriate framework for developing programming tools from the categorical framework. We place this work in the context of two contrasting concepts of “efficiency”: theoretical comparisons to determine relative computational power do not always reflect the practical comparison of real resources for a finite-sized computational task, especially when the inputs include (approximations of) real numbers. Finally, we outline the limitations of our simple model, and identify some of the extensions that will be required to treat more complex interacting computational systems.
💡 Research Summary
The paper addresses a growing gap in the study of unconventional computation: while many research efforts focus on single, isolated computational models—such as neural, analog, quantum, chemical, or biological systems—they often overlook the fact that real experimental setups frequently involve multiple models interacting, with one model controlling or feeding another. The authors coin the term “heterotic computing” for such composite systems and argue that a rigorous framework is needed to capture both the capabilities of the individual parts and the computational power that emerges from their interaction.
To make the problem tractable, the authors concentrate on the simplest non‑trivial architecture: two subsystems, A and B, operating in a strictly alternating fashion. At any discrete time step only one subsystem is active; its output becomes the input to the other subsystem at the next step, and vice versa. This “alternating activation” pattern isolates the control‑flow aspect while still allowing richer computational behavior to emerge than either subsystem can produce alone.
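The alternating‑activation pattern can be sketched as a simple driver loop. This is an illustrative sketch, not code from the paper; `step_a` and `step_b` are hypothetical stand‑ins for the two subsystems:

```python
def run_heterotic(step_a, step_b, x, n_steps):
    """Strictly alternating activation: only one subsystem is active
    at each discrete step; its output is the next subsystem's input."""
    for i in range(n_steps):
        x = step_a(x) if i % 2 == 0 else step_b(x)
    return x

# Toy stand-ins for the two subsystems (purely illustrative):
double = lambda x: 2 * x   # "subsystem A"
inc = lambda x: x + 1      # "subsystem B"

print(run_heterotic(double, inc, 1, 4))  # runs A, B, A, B on input 1
```

The point of the sketch is that the interaction itself is a first‑class part of the computation: the driver loop, not either subsystem, determines the overall behavior.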
The core of the paper is a categorical formalisation of this pattern. Each subsystem is represented as an object in a category, its internal operations as morphisms, and the data‑flow between subsystems as functors or natural transformations. By treating the alternating sequence as a composition of morphisms, the whole heterotic system can be collapsed into a single composite morphism, enabling formal reasoning about correctness, refinement, and equivalence.
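A consequence of this categorical view is that the whole alternating run collapses to ordinary composition. As a minimal sketch (with plain Python functions standing in for morphisms; the paper itself works abstractly, not in any particular language):

```python
from functools import reduce

def compose(*fs):
    """Compose morphisms left to right: compose(f, g)(x) == g(f(x))."""
    return reduce(lambda f, g: lambda x: g(f(x)), fs, lambda x: x)

# The alternating sequence A;B;A;B flattened into one composite morphism:
double = lambda x: 2 * x
inc = lambda x: x + 1
composite = compose(double, inc, double, inc)

print(composite(1))  # same result as stepping the loop four times
```

Once the run is a single morphism, equational reasoning (associativity of composition, identity laws) applies to the heterotic system as a whole, which is what enables the formal comparisons of correctness and equivalence mentioned above.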
Building on this categorical backbone, the authors import the concepts of refinement and retrenchment from software engineering theory. Refinement captures the usual stepwise development from high‑level specifications to low‑level implementations, preserving correctness. Retrenchment relaxes the strictness of refinement to accommodate physical imperfections—noise, approximations of real numbers, non‑determinism—that are inevitable in experimental control hardware. The paper defines precise relational conditions under which a heterotic implementation can be said to retrench from its abstract specification.
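The contrast between refinement and retrenchment can be illustrated concretely. The following is a hedged sketch, not the paper's formal relational conditions: a strict refinement check demands exact agreement with the abstract specification, while a retrenchment check allows disagreement provided a "concession" relation (here, a hypothetical noise bound) holds instead:

```python
def refines(abstract_op, concrete_op, inputs):
    """Strict refinement check: the concrete operation must agree
    exactly with the abstract specification on every input."""
    return all(concrete_op(x) == abstract_op(x) for x in inputs)

def retrenches(abstract_op, concrete_op, inputs, concession):
    """Retrenchment check: where exact agreement fails, the
    concession relation must hold between the two outputs."""
    return all(
        concrete_op(x) == abstract_op(x) or concession(abstract_op(x), concrete_op(x))
        for x in inputs
    )

# Abstract spec over reals vs. a concrete implementation limited to
# three decimal places of measurement precision (illustrative only):
spec = lambda x: x * 0.1
impl = lambda x: round(x * 0.1, 3)
within_eps = lambda a, c: abs(a - c) < 1e-3  # the concession

inputs = [1, 2, 3, 1 / 3]
print(refines(spec, impl, inputs))              # exact agreement fails
print(retrenches(spec, impl, inputs, within_eps))  # but the bound holds
```

This mirrors the motivation given above: physical imperfections such as rounding and measurement noise make strict refinement unattainable, while retrenchment still yields a precise, checkable relationship between specification and implementation.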
From the theoretical model, a pathway toward practical programming tools is sketched. The authors propose a domain‑specific language (DSL) architecture in which each subsystem has its own sub‑language (e.g., a quantum DSL for the quantum part, an analog DSL for the analog part). A well‑typed interface language mediates the exchange of data, and the categorical semantics provide a basis for automatic code generation, type‑checking, and formal verification. This modular DSL approach aims to let developers compose heterogeneous components without sacrificing the guarantees offered by formal methods.
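The role of the interface language can be sketched as a well‑typed conversion between the data types of two sub‑DSLs. All names here (`QubitResult`, `AnalogSignal`, the 0 V / 5 V encoding) are illustrative assumptions, not constructs from the paper:

```python
from dataclasses import dataclass

@dataclass
class QubitResult:
    bit: int          # a measurement outcome, 0 or 1

@dataclass
class AnalogSignal:
    voltage: float    # volts

def quantum_to_analog(q: QubitResult) -> AnalogSignal:
    """Interface-language conversion: a typed bridge from the quantum
    sub-DSL's output type to the analog sub-DSL's input type."""
    if q.bit not in (0, 1):
        raise TypeError("QubitResult.bit must be 0 or 1")
    return AnalogSignal(voltage=5.0 if q.bit else 0.0)

print(quantum_to_analog(QubitResult(bit=1)))
```

Because every crossing of the subsystem boundary goes through such a typed conversion, a type‑checker can reject ill‑formed compositions before anything runs, which is the verification benefit the modular DSL approach is after.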
A substantial portion of the paper is devoted to the notion of efficiency. Two distinct perspectives are distinguished: (1) theoretical efficiency, measured by asymptotic computational complexity, Turing‑completeness, or resource‑bounded models; and (2) practical efficiency, which accounts for concrete resources such as energy consumption, hardware area, execution time, and the cost of approximating real‑valued inputs. The authors illustrate that a heterotic system may be theoretically more powerful (e.g., a quantum‑controlled analog processor) yet be less efficient in practice due to decoherence, calibration overhead, or the difficulty of precise analog measurement. This dual view underscores the importance of evaluating heterotic designs on both axes.
The paper concludes by acknowledging the limitations of the alternating‑activation model. Real heterotic systems often involve concurrent interactions, asynchronous communication, probabilistic transitions, and networks of more than two components. Capturing these richer behaviours will require extending the categorical framework to monoidal categories, double categories, or higher‑order morphisms, and developing more sophisticated refinement/retrenchment techniques. The authors outline a research agenda that includes (a) formalising these extensions, (b) implementing the proposed DSLs, (c) building hardware prototypes that embody heterotic control loops, and (d) conducting empirical studies to validate the theoretical predictions about efficiency.
Overall, the paper makes a compelling case that heterotic computing deserves its own formal theory, bridging abstract categorical semantics with concrete programming tools and realistic performance analysis. It lays the groundwork for a systematic exploration of how combining disparate computational paradigms can yield capabilities unattainable by any single paradigm alone.