Evolving MultiAlgebras unify all usual sequential computation models
It is well-known that Abstract State Machines (ASMs) can simulate “step-by-step” any type of machines (Turing machines, RAMs, etc.). We aim to overcome two facts: 1) simulation is not identification, 2) the ASMs simulating machines of some type do not constitute a natural class among all ASMs. We modify Gurevich’s notion of ASM to that of EMA (“Evolving MultiAlgebra”) by replacing the program (which is a syntactic object) by a semantic object: a functional which has to be very simply definable over the static part of the ASM. We prove that very natural classes of EMAs correspond via “literal identifications” to slight extensions of the usual machine models and also to grammar models. Though we modify these models, we keep their computation approach: only some contingencies are modified. Thus, EMAs appear as the mathematical model unifying all kinds of sequential computation paradigms.
💡 Research Summary
The paper addresses a longstanding gap in the theory of abstract state machines (ASMs). While ASMs are known to be able to simulate any conventional computational model (Turing machines, RAMs, etc.) step‑by‑step, simulation does not give a true identification: each simulated machine is represented by a distinct ASM program, and the collection of ASMs that happen to simulate a given class does not form a natural subclass of the whole ASM universe. To remedy this, the authors propose a modest but conceptually significant modification of Gurevich’s ASM framework, which they call an Evolving MultiAlgebra (EMA). In an EMA the syntactic “program” is replaced by a single semantic object – a functional that updates the dynamic part of the algebra. This functional must be definable using only the static signature (the fixed domains and basic operations) and must be “very simply definable”, i.e., expressed as a finite composition of static operations.
An EMA therefore consists of three layers: (1) a static signature that declares the sorts (states, registers, tape cells, symbols, etc.) and the primitive operations available on them; (2) a static structure that interprets this signature; and (3) a dynamic structure that records the current configuration (current state, head position, memory contents, etc.). The evolution functional takes the current dynamic structure as input and, by applying only static operations, produces the next dynamic structure. Because the functional is part of the model’s definition rather than an external program, any two EMAs that share the same static signature and evolution functional are literally the same computational device.
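The three layers can be illustrated with a minimal sketch. This is a hypothetical toy example (a bounded counter), not the paper's formalism: the `Static` record plays the role of the static structure, `Dynamic` the current configuration, and `evolution` the functional, which touches the dynamic part only through static operations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Static:
    # Static structure: fixed interpretations of the signature's operations.
    succ: Callable[[int], int]
    is_limit: Callable[[int], bool]

@dataclass
class Dynamic:
    # Dynamic structure: the current configuration.
    counter: int
    halted: bool = False

def evolution(static: Static, dyn: Dynamic) -> Dynamic:
    # Evolution functional: built solely from static operations,
    # mapping one dynamic structure to the next.
    if static.is_limit(dyn.counter):
        return Dynamic(dyn.counter, halted=True)
    return Dynamic(static.succ(dyn.counter))

static = Static(succ=lambda n: n + 1, is_limit=lambda n: n >= 3)
dyn = Dynamic(counter=0)
while not dyn.halted:
    dyn = evolution(static, dyn)
print(dyn.counter)  # 3
```

Because `evolution` is part of the device's definition, two such devices with the same static structure and functional are literally the same object, which is the identification the paper is after.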
The authors then systematically map several classic sequential models onto EMAs. For a Turing machine, the static sorts include the set of states, the tape alphabet, and the movement directions; the dynamic part contains the current state, the head position, and the tape contents. The evolution functional encodes the transition relation directly: read the symbol under the head, look up the corresponding rule, write the new symbol, move the head, and change the state. No encoding of the transition table into a separate ASM program is needed. For a RAM, registers and the program counter are static sorts, while the current values of registers and the counter form the dynamic part. The functional applies the instruction pointed to by the counter using the static arithmetic operations, updates the registers, and increments the counter. For grammar‑based models (e.g., context‑free grammars) the non‑terminals and production rules belong to the static signature, the current sentential form is dynamic, and the functional selects a production and rewrites the string.
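The Turing-machine case can be sketched along these lines. The machine and transition table below are invented for illustration; the point is only that the evolution functional reads the symbol, looks up the rule, writes, moves, and changes state directly, with the table `delta` living in the static part and the configuration in the dynamic part.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Config:
    # Dynamic structure: current state, head position, tape contents.
    state: str
    head: int
    tape: Dict[int, str] = field(default_factory=dict)

# Static part: transition table (state, symbol) -> (new state, written symbol, move).
Delta = Dict[Tuple[str, str], Tuple[str, str, int]]

def tm_step(delta: Delta, blank: str, c: Config) -> Config:
    # Evolution functional for a Turing machine: read under the head,
    # look up the rule, write the new symbol, move, change state.
    sym = c.tape.get(c.head, blank)
    new_state, write, move = delta[(c.state, sym)]
    tape = dict(c.tape)
    tape[c.head] = write
    return Config(new_state, c.head + move, tape)

# Toy machine: write two 1s moving right, then halt.
delta: Delta = {
    ("q0", "_"): ("q1", "1", +1),
    ("q1", "_"): ("halt", "1", +1),
}
c = Config("q0", 0)
while c.state != "halt":
    c = tm_step(delta, "_", c)
print(c.state, c.tape)  # halt {0: '1', 1: '1'}
```

No encoding of the transition table into a separate program is needed: the table itself is a static object and the step function is the functional.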
In each case the authors allow only “minor extensions” of the original model (e.g., adding nondeterministic choice, probabilistic branching, or a bounded amount of extra bookkeeping). These extensions do not change the computational power but make the mapping to an EMA smoother. Consequently, every conventional sequential model can be identified, up to such harmless extensions, with a class of EMAs that are literally the same objects. This identification eliminates the need for external encodings and simulation overhead, turning the relationship from “simulates” to “is”.
A further contribution is the observation that EMAs form a natural subclass of the broader ASM universe. Because the evolution functional is constrained to be definable solely from the static signature, all EMAs that share a given signature belong to a single mathematically well‑defined family. This property enables direct comparison, composition, and transformation of different computational paradigms within a unified algebraic setting. For instance, by choosing a static signature that simultaneously includes tape cells and registers, one can view a Turing machine and a RAM as two viewpoints on the same EMA, facilitating cross‑model reasoning.
The paper concludes with a discussion of limitations and future work. While the current development covers deterministic, nondeterministic, and probabilistic sequential models, extending EMAs to truly parallel or distributed computations remains open. Moreover, the requirement that the evolution functional be “very simply definable” may restrict the expressiveness of certain advanced models, a point that calls for empirical investigation. Nonetheless, the authors argue convincingly that EMAs provide a clean, mathematically robust unification of sequential computation paradigms, offering a promising foundation for further theoretical exploration and for the design of languages or verification tools that need to reason uniformly about diverse machine models.