Qualitative Logics and Equivalences for Probabilistic Systems

Reading time: 6 minutes

📝 Original Info

  • Title: Qualitative Logics and Equivalences for Probabilistic Systems
  • ArXiv ID: 0903.2445
  • Date: 2009-03-13
  • Authors: Krishnendu Chatterjee, Luca de Alfaro, Marco Faella, Axel Legay

📝 Abstract

We investigate logics and equivalence relations that capture the qualitative behavior of Markov Decision Processes (MDPs). We present Qualitative Randomized CTL (QRCTL): formulas of this logic can express the fact that certain temporal properties hold over all paths, or with probability 0 or 1, but they do not distinguish among intermediate probability values. We present a symbolic, polynomial time model-checking algorithm for QRCTL on MDPs. The logic QRCTL induces an equivalence relation over states of an MDP that we call qualitative equivalence: informally, two states are qualitatively equivalent if the sets of formulas that hold with probability 0 or 1 at the two states are the same. We show that for finite alternating MDPs, where nondeterministic and probabilistic choices occur in different states, qualitative equivalence coincides with alternating bisimulation, and can thus be computed via efficient partition-refinement algorithms. On the other hand, in non-alternating MDPs the equivalence relations cannot be computed via partition-refinement algorithms, but rather, they require non-local computation. Finally, we consider QRCTL*, that extends QRCTL with nested temporal operators in the same manner in which CTL* extends CTL. We show that QRCTL and QRCTL* induce the same qualitative equivalence on alternating MDPs, while on non-alternating MDPs, the equivalence arising from QRCTL* can be strictly finer. We also provide a full characterization of the relation between qualitative equivalence, bisimulation, and alternating bisimulation, according to whether the MDPs are finite, and to whether their transition relations are finitely-branching.

📄 Full Content

Markov decision processes (MDPs) provide a model for systems exhibiting both probabilistic and nondeterministic behavior. MDPs were originally introduced to model and solve control problems for stochastic systems: there, nondeterminism represented the freedom in the choice of control action, while the probabilistic component of the behavior described the system's response to the control action [Ber95]. MDPs were later adopted as models for concurrent probabilistic systems, probabilistic systems operating in open environments [Seg95], and under-specified probabilistic systems [BdA95,dA97a].
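To make the model concrete, here is one minimal (hypothetical, not from the paper) way to represent a finite MDP: each state maps available actions to probability distributions over successor states. Nondeterminism is the choice of action; probability governs the outcome of the chosen action.

```python
# A minimal finite-MDP representation (illustrative; names are not from the paper).
# Each state maps its available actions to a distribution over successor states.
mdp = {
    "s0": {
        "a": {"s1": 0.5, "s2": 0.5},   # probabilistic outcome of action a
        "b": {"s0": 1.0},              # deterministic self-loop
    },
    "s1": {"a": {"s1": 1.0}},          # absorbing state
    "s2": {"a": {"s0": 1.0}},
}

def successors(state, action):
    """Successor states reachable with positive probability."""
    return [t for t, p in mdp[state][action].items() if p > 0]

# Qualitative (probability-0/1) questions depend only on which transitions
# have positive probability, not on their exact numerical values.
print(successors("s0", "a"))  # ['s1', 's2']
```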

Given an MDP and a property of interest, we can ask two kinds of verification questions: quantitative and qualitative. Quantitative questions concern the numerical value of the probability with which the property holds in the system; qualitative questions ask whether the property holds with probability 0 or 1. Examples of quantitative questions include the computation of the maximal and minimal probabilities with which the MDP satisfies a safety, reachability, or, in general, ω-regular property [BdA95]; the corresponding qualitative questions ask whether these properties hold with probability 0 or 1.

While much recent work on probabilistic verification has focused on answering quantitative questions, the interest in qualitative verification questions predates the interest in quantitative ones. Answering qualitative questions about MDPs is useful in a wide range of applications. In the analysis of randomized algorithms, it is natural to require that the correct behavior arises with probability 1, and not just with probability at least p for some p < 1. For instance, when analyzing a randomized embedded scheduler, we are interested in whether every thread progresses with probability 1 [dAFMR05]. Such a qualitative question is much easier to study, and to justify, than its quantitative version; indeed, if we asked for a lower bound p < 1 on the probability of progress, the choice of p would need to be justified by an analysis of how much failure probability is acceptable in the final system, an analysis that is generally not easy to accomplish. For the same reason, the correctness of randomized distributed algorithms is often established with respect to qualitative, rather than quantitative, criteria (see, e.g., [PSL00, KNP00, Sto02]).

Furthermore, since qualitative answers can generally be computed more efficiently than quantitative ones, they are often used as a useful pre-processing step. For instance, when computing the maximal probability of reaching a set of target states T, it is convenient to first pre-compute the set of states T1 ⊇ T that can reach T with probability 1, and then compute the maximal probability of reaching T1: this reduces the number of states where the quantitative question needs to be answered, and leads to more efficient algorithms [dAKN+00]. Lastly, we remark that qualitative answers, unlike quantitative ones, are more robust to perturbations in the numerical values of the transition probabilities of the MDP. Thus, whenever a system can be modeled only within some approximation, qualitative verification questions yield information about the system that is more robust with respect to modeling errors, and in many ways, more basic in nature.
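The pre-processing step described above is purely graph-theoretic: only the support of each distribution matters. The following is a sketch of the standard nested-fixpoint computation of the states from which a target set is reachable with probability 1 under some strategy (the function name, representation, and example MDP are illustrative assumptions, not taken from the paper):

```python
# Sketch of the standard qualitative pre-processing step: compute the states
# from which a target set is reached with probability 1 under some strategy.
# Purely graph-based: only positive-probability transitions matter.

def almost_sure_reach(mdp, targets):
    """States s with max-over-strategies Pr_s(reach targets) = 1."""
    U = set(mdp)                          # candidate set; shrinks each round
    while True:
        # R: states that can reach `targets` with positive probability using
        # only actions whose successors all remain inside U.
        R = set(targets) & U
        changed = True
        while changed:
            changed = False
            for s in U - R:
                for dist in mdp[s].values():
                    succ = {t for t, p in dist.items() if p > 0}
                    if succ <= U and succ & R:
                        R.add(s)
                        changed = True
                        break
        if R == U:
            return U                      # fixpoint: exactly the prob-1 states
        U = R

mdp = {
    "s0": {"a": {"s1": 0.5, "s0": 0.5}},
    "s1": {"a": {"s1": 1.0}},
    "bad": {"a": {"bad": 1.0}},
    "s2": {"a": {"s1": 0.5, "bad": 0.5},  # risky: may fall into `bad`
           "b": {"s2": 1.0}},             # safe but unproductive
}
safe = almost_sure_reach(mdp, {"s1"})
print(safe)
```

Here `s0` reaches `s1` with probability 1 (retrying its action indefinitely), while `s2` cannot: its only productive action loses probability mass to the absorbing `bad` state, so `s2` is excluded from the result.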

In this paper, we provide logics for the specification of qualitative properties of Markov decision processes, along with model-checking algorithms for such logics, and we study the equivalence relations arising from these logics. Our starting point for the logics is provided by the probabilistic logics pCtl and pCtl* [HJ94, ASB+95, BdA95]. These logics can express bounds on the probability of events: the logic pCtl is derived from Ctl by adding to its path quantifiers ∀ (“for all paths”) and ∃ (“for at least one path”) a probabilistic quantifier P. For a bound q ∈ [0, 1], an inequality ⊲⊳ ∈ {<, ≤, ≥, >}, and a path formula ϕ, the formula P⊲⊳q ϕ holds at a state if the probability that ϕ holds along a path from that state satisfies the bound ⊲⊳ q.

…

These results are surprising. One is tempted to consider alternating and non-alternating MDPs as equivalent, since a non-alternating MDP can be translated into an alternating one by splitting its states into multiple alternating ones. The difference between the alternating and non-alternating models was already noted in [ST05] for strong and weak “precise” simulation, and in [BS01] for axiomatizations. Our results indicate that the difference between the alternating and non-alternating models is even more marked for ≈>0, which is a local relation on alternating models, and a non-local relation on non-alternating ones.
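The folklore translation mentioned above, splitting each state of a non-alternating MDP so that nondeterministic and probabilistic choices occur at different states, can be sketched as follows (the representation and names are my own; the paper's point is that qualitative equivalence nonetheless behaves differently on the two models):

```python
# Sketch of the translation from a non-alternating MDP to an alternating one:
# each original state s becomes a nondeterministic state whose choices lead to
# fresh probabilistic states (s, a), one per action, carrying a's distribution.

def to_alternating(mdp):
    """Return (nondet, prob):
    nondet maps each state s to its list of choice-states (s, a);
    prob maps each choice-state (s, a) to a distribution over original states."""
    nondet = {s: [(s, a) for a in acts] for s, acts in mdp.items()}
    prob = {(s, a): dict(dist)
            for s, acts in mdp.items()
            for a, dist in acts.items()}
    return nondet, prob

mdp = {"s0": {"a": {"s0": 0.5, "s1": 0.5}, "b": {"s1": 1.0}},
       "s1": {"a": {"s1": 1.0}}}
nondet, prob = to_alternating(mdp)
print(nondet["s0"])       # [('s0', 'a'), ('s0', 'b')]
print(prob[("s0", "b")])  # {'s1': 1.0}
```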

More surprises follow when examining the roles of the ○ (“next”) and U (“until”) operators, and the distinction between Qrctl and Qrctl*. For Ctl, it is known that the ○ operator alone suffices to characterize bisimulation; the U operator does not add distinguishing power. The same is true for Qrctl on finite, alternating MDPs. On the other hand, we show that for non-alternating,

…(Full text truncated)…

Reference

This content is AI-processed based on ArXiv data.
