On the Expressiveness of Markovian Process Calculi with Durational and Durationless Actions

Several Markovian process calculi have been proposed in the literature, which differ from one another in various respects. With regard to the action representation, we distinguish between integrated-time Markovian process calculi, in which every action has an exponentially distributed duration associated with it, and orthogonal-time Markovian process calculi, in which action execution is separated from time passing. As in the setting of deterministically timed process calculi, we show that these two options are not irreconcilable by exhibiting three mappings from an integrated-time Markovian process calculus to an orthogonal-time Markovian process calculus that preserve the behavioral equivalence of process terms under different interpretations of action execution: eagerness, laziness, and maximal progress. The mappings are limited to classes of process terms of the integrated-time Markovian process calculus with restrictions on parallel composition, and do not involve the full capability of the orthogonal-time Markovian process calculus for expressing nondeterministic choices, thus elucidating the only two important differences between the two calculi: their synchronization disciplines and their ways of solving choices.


💡 Research Summary

The paper investigates the expressive relationship between two families of Markovian process calculi that differ in how they treat the duration of actions. In the integrated‑time Markovian process calculus (ITMPC) every action is annotated with an exponentially distributed delay, so the execution of an action and the passage of time are inseparable. In contrast, the orthogonal‑time Markovian process calculus (OTMPC) separates “time steps” (τ‑transitions with a rate) from “instantaneous actions” (a‑transitions). Both calculi are equipped with the usual operators (choice, parallel composition, restriction, recursion) and are interpreted over continuous‑time Markov chains; behavioral equivalence is defined via Markovian bisimulation.

The authors ask whether an ITMPC term can be systematically translated into an OTMPC term while preserving this equivalence, and they answer affirmatively for three well‑known execution policies that have been studied in deterministic timed calculi:

  1. Eagerness (early execution) – actions fire as soon as they become enabled.
  2. Laziness (delayed execution) – actions may be postponed until they are needed.
  3. Maximal progress – internal τ‑steps have priority over observable actions.

For each policy they define a translation function (f_E, f_L, f_MP). The core idea is to replace an ITMPC action a/λ by a pair of OTMPC transitions: a τ‑transition with the same rate followed by an instantaneous a‑transition. The three translations differ in how they arrange the τ‑step and the subsequent a‑step:

  • f_E inserts τ/λ before a, guaranteeing that the action cannot be delayed.
  • f_L wraps the τ/λ step in a guarded choice with an idling alternative, so the system may remain idle and thus postpone the observable action.
  • f_MP gives τ/λ priority: the a‑transition is only enabled when the τ‑step is not possible, reflecting the maximal‑progress discipline.
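The shape of the three translations can be sketched on a toy term representation. The encoding below — ITMPC terms as lists of (action, rate) prefixes, OTMPC steps tagged with the strings "tau", "act", "idle", and "act-if-no-tau" — is our own illustrative guess, not the paper's syntax; only the function names f_E, f_L, f_MP come from the source.

```python
# Hypothetical miniature syntax (not the paper's):
#   an ITMPC term is a list of (action, rate) prefixes;
#   an OTMPC term is a list of tagged steps.

def f_E(term):
    """Eager translation: a timed tau-step immediately followed by the action."""
    out = []
    for action, rate in term:
        out.append(("tau", rate))    # exponential delay with the original rate
        out.append(("act", action))  # instantaneous, cannot be postponed
    return out

def f_L(term):
    """Lazy translation: the timed step sits in a choice with an idling branch,
    so the observable action may be postponed."""
    out = []
    for action, rate in term:
        out.append(("choice", [[("tau", rate), ("act", action)],
                               [("idle", None)]]))
    return out

def f_MP(term):
    """Maximal progress: the action fires only once no tau-step is enabled."""
    out = []
    for action, rate in term:
        out.append(("tau", rate))
        out.append(("act-if-no-tau", action))  # guarded by absence of tau
    return out
```

Note that all three agree on where the rate goes (onto the τ‑step); they differ only in the control structure wrapped around the subsequent instantaneous action.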

The authors prove that each translation preserves Markovian bisimulation: the probability of reaching any observable state after a given amount of time is identical in the source ITMPC term and its OTMPC image. The proofs rely on the fact that the rate of the τ‑step is exactly the rate originally attached to the action, so the exponential waiting time distribution is unchanged.
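This rate-preservation argument can be sanity-checked with a small Monte-Carlo simulation (illustrative only; the rate λ = 2 and the sample size are arbitrary choices of ours): since the instantaneous a‑transition consumes no time, the completion-time distribution of the translated term is exactly the exponential delay of the τ‑step.

```python
import random

def itmpc_completion_time(rate, rng):
    # Action and duration are integrated: one exponential sample.
    return rng.expovariate(rate)

def otmpc_completion_time(rate, rng):
    delay = rng.expovariate(rate)  # tau-step carries the original rate
    return delay + 0.0             # the a-transition is instantaneous

rng = random.Random(42)
n, lam = 100_000, 2.0
mean_it = sum(itmpc_completion_time(lam, rng) for _ in range(n)) / n
mean_ot = sum(otmpc_completion_time(lam, rng) for _ in range(n)) / n
# Both sample means approximate 1/lambda = 0.5.
```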

However, the translations are not universal; they are restricted to a subset of ITMPC terms that avoid complex synchronization in parallel composition. Specifically, the source term must be a parallel composition whose components do not share action labels (i.e., no synchronization on shared actions). This restriction prevents races between τ‑steps of different components that could otherwise alter the combined rate. Consequently, the mappings do not exploit the full nondeterministic choice operator of OTMPC; the choice structure is essentially inherited from the ITMPC term, and the resolution of choices remains governed by the original rates rather than by OTMPC’s independent nondeterminism.
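The label-disjointness side condition is easy to state operationally. A minimal sketch, reusing the same hypothetical encoding of a component as a list of (action, rate) prefixes (the function names below are ours, not the paper's):

```python
def action_labels(component):
    """Collect the visible action labels of a component, encoded
    (hypothetically) as a list of (action, rate) prefixes."""
    return {action for action, _rate in component}

def translatable(components):
    """The restriction described above: parallel components must not share
    action labels, so no synchronization on shared actions (and no
    rate-altering races between their tau-steps) can arise."""
    seen = set()
    for comp in components:
        labels = action_labels(comp)
        if labels & seen:
            return False
        seen |= labels
    return True
```

For example, a composition of a component performing a with one performing b passes the check, while two components both offering a does not.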

The paper’s analysis leads to two main insights about the differences between the calculi:

  • Synchronization discipline – In ITMPC an action itself is the synchronization point; in OTMPC synchronization occurs on τ‑steps, while observable actions are purely instantaneous.
  • Choice resolution – ITMPC resolves a probabilistic choice by summing the rates of competing actions; OTMPC first makes a nondeterministic choice and then assigns probabilities based on the rates of the subsequent τ‑steps.
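The race semantics behind the ITMPC choice rule can be checked numerically: the minimum of two independent exponential delays is itself exponentially distributed with the summed rate, and each branch wins the race with probability proportional to its rate. A self-contained simulation (the rates 1 and 3 are arbitrary):

```python
import random

rng = random.Random(7)
l1, l2, n = 1.0, 3.0, 100_000
wins_1 = 0
total_wait = 0.0
for _ in range(n):
    t1 = rng.expovariate(l1)  # delay of the first competing action
    t2 = rng.expovariate(l2)  # delay of the second competing action
    total_wait += min(t1, t2)  # the race ends when the faster one fires
    if t1 < t2:
        wins_1 += 1

p1 = wins_1 / n             # approximates l1 / (l1 + l2) = 0.25
mean_wait = total_wait / n  # approximates 1 / (l1 + l2) = 0.25
```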

These observations explain why the two calculi, despite their syntactic differences, can be made behaviorally equivalent for a large class of processes once the appropriate translation is applied.

The authors discuss practical implications. Many probabilistic model‑checking tools (e.g., PRISM, MRMC) are built around orthogonal‑time semantics; the presented translations enable engineers to take an existing integrated‑time model (perhaps derived from a high‑level specification) and automatically obtain an equivalent orthogonal‑time model that can be fed to such tools without losing quantitative properties. The paper also points out the limitations: processes that involve intricate synchronization patterns or that heavily rely on the expressive power of OTMPC’s nondeterministic choice are not covered, suggesting directions for future work such as extending the translation to richer parallel constructs or developing automated tool support.

In summary, the work demonstrates that the apparent gap between integrated‑time and orthogonal‑time Markovian process calculi is not fundamental; with careful handling of execution policies and synchronization constraints, one can map integrated‑time specifications into orthogonal‑time ones while preserving the exact stochastic behavior. This contributes both to the theoretical understanding of timed stochastic process algebras and to their practical applicability in performance and reliability analysis.

