CHR(PRISM)-based Probabilistic Logic Learning
PRISM is an extension of Prolog with probabilistic predicates and built-in support for expectation-maximization learning. Constraint Handling Rules (CHR) is a high-level programming language based on multi-headed multiset rewrite rules. In this paper, we introduce a new probabilistic logic formalism, called CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of “chance rules”. The underlying PRISM system can then be used for several probabilistic inference tasks, including probability computation and parameter learning. We define the CHRiSM language in terms of syntax and operational semantics, and illustrate it with examples. We define the notion of ambiguous programs and define a distribution semantics for unambiguous programs. Next, we describe an implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between CHRiSM and other probabilistic logic programming languages, in particular PCHR. Finally we identify potential application domains.
💡 Research Summary
The paper introduces CHRiSM, a probabilistic logic programming language that tightly integrates Constraint Handling Rules (CHR) with the probabilistic inference and learning capabilities of PRISM. The motivation is to provide a high-level, declarative framework whose building blocks, called “chance rules”, enable rapid prototyping of sophisticated statistical models while reusing PRISM's well-established expectation-maximization (EM) learning engine.
The authors first define the syntax of CHRiSM. A chance rule consists of a conventional CHR rule (a head of one or more constraints, an optional guard, and a body) together with an optional probability annotation. Internally, the annotation is translated into PRISM's multi-valued switch (msw) construct, which assigns a probability distribution to the possible outcomes of the rule. This design allows the programmer to write deterministic constraints exactly as in CHR and to turn a rule into a stochastic one simply by attaching a probability annotation.
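To make the switch idea concrete, here is a minimal Python sketch of how a probabilistic annotation can gate a rewrite step. The switch registry, the `msw` helper, the rule, and the 0.7 parameter are all invented for illustration; they mimic the flavor of PRISM's msw mechanism, not CHRiSM's actual API.

```python
import random

# Invented switch registry: each named switch maps outcomes to probabilities.
SWITCHES = {
    "rule1_fires": {True: 0.7, False: 0.3},  # assumed parameter
}

def msw(switch):
    """Sample an outcome from a named switch's distribution (msw-style)."""
    outcomes, probs = zip(*SWITCHES[switch].items())
    return random.choices(outcomes, weights=probs)[0]

def apply_chance_rule(store):
    """Analogue of a chance rule '0.7 ?? a <=> b': if the constraint 'a'
    is in the store, the rule fires with probability 0.7 and rewrites
    'a' into 'b'; otherwise the store is left unchanged."""
    if "a" in store and msw("rule1_fires"):
        store.discard("a")
        store.add("b")
    return store
```

Sampling `apply_chance_rule({"a"})` repeatedly yields `{"b"}` roughly 70% of the time, mirroring how a sampled msw outcome decides whether the rewrite happens.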
The operational semantics is presented as a two-stage transition system. The first stage performs standard CHR matching and constraint rewriting, collecting the applicable chance rules. The second stage invokes PRISM's sampling mechanism to resolve the msw choices, thereby selecting a concrete probabilistic branch. The authors distinguish between ambiguous programs, in which different rule-application orders for the same initial store can induce different distributions over results, and unambiguous programs, for which a unique distribution over final states exists. For the latter, a rigorous distribution semantics is given: the probability of a derivation is the product of the probabilities of the msw choices along that path, and the overall probability of a result is the sum over all derivations that yield it.
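The product-and-sum definition can be checked by brute force on a toy program. The sketch below enumerates every probabilistic branch of two hypothetical chance rules; the rules and the probabilities 0.7 and 0.4 are assumptions for illustration, not taken from the paper.

```python
from collections import defaultdict

# Toy unambiguous program with two independent chance rules:
#   0.7 ?? a <=> b      and      0.4 ?? c <=> d   (invented)
RULES = [
    ("a", "b", 0.7),
    ("c", "d", 0.4),
]

def final_state_distribution(initial):
    """Enumerate all probabilistic derivations. The probability of a
    derivation is the product of its msw choices; the probability of a
    final state is the sum over all derivations reaching it."""
    dist = defaultdict(float)

    def explore(store, prob, rules):
        if not rules:
            dist[frozenset(store)] += prob
            return
        (head, body, p), rest = rules[0], rules[1:]
        if head in store:
            explore((store - {head}) | {body}, prob * p, rest)  # rule fires
            explore(store, prob * (1 - p), rest)                # rule skipped
        else:
            explore(store, prob, rest)

    explore(frozenset(initial), 1.0, RULES)
    return dict(dist)
```

Starting from `{a, c}`, the final state `{b, d}` has probability 0.7 × 0.4 = 0.28, and the four possible final states sum to 1, as the distribution semantics requires for an unambiguous program.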
Implementation details are described next. CHRiSM is built on top of an existing CHR(PRISM) interpreter. The interpreter is extended with a preprocessing phase that translates chance rule annotations into PRISM msw calls while leaving ordinary CHR rules untouched. This approach preserves backward compatibility: existing CHR programs can be turned into CHRiSM programs with minimal changes. Parameter learning leverages PRISM’s EM algorithm unchanged; observed data are supplied as a set of goal constraints, and the EM engine automatically computes expected sufficient statistics for the hidden msw variables and updates the probability parameters. Consequently, CHRiSM inherits PRISM’s efficient support for large‑scale evidence computation and incremental learning.
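As a rough illustration of why EM (rather than plain counting) is needed, consider a toy situation, invented here and not from the paper, where two chance rules can both rewrite `a` into `b`, so observing the final store `{b}` does not reveal which switch fired. The E-step weights each consistent derivation by its posterior probability; the M-step re-estimates each switch from the resulting expected counts.

```python
# Hidden structure: observing "b" is consistent with two derivations.
#   s1 fires                -> b
#   s1 skips, s2 fires      -> b
#   s1 skips, s2 skips      -> a
DERIVATIONS = {
    "b": [{"s1": True}, {"s1": False, "s2": True}],
    "a": [{"s1": False, "s2": False}],
}

def deriv_prob(deriv, params):
    """Probability of one derivation: product of its switch choices."""
    p = 1.0
    for sw, fired in deriv.items():
        p *= params[sw] if fired else 1.0 - params[sw]
    return p

def em(observations, params, iters=100):
    for _ in range(iters):
        fired_cnt = {sw: 0.0 for sw in params}
        total_cnt = {sw: 0.0 for sw in params}
        for obs in observations:
            derivs = DERIVATIONS[obs]
            probs = [deriv_prob(d, params) for d in derivs]
            z = sum(probs)
            for d, p in zip(derivs, probs):
                w = p / z                      # E-step: posterior weight
                for sw, fired in d.items():
                    total_cnt[sw] += w
                    if fired:
                        fired_cnt[sw] += w
        # M-step: relative expected frequencies
        params = {sw: fired_cnt[sw] / total_cnt[sw] for sw in params}
    return params
```

With 70 observations of `b` and 30 of `a`, EM drives the model probability of `a`, namely (1 − p1)(1 − p2), to the empirical 0.3, even though the individual switch values are not separately identifiable from this data.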
The paper then situates CHRiSM within the broader landscape of probabilistic logic programming. It compares CHRiSM to PCHR, another language that adds probabilistic transitions to CHR. While PCHR provides a probabilistic extension, it lacks a dedicated learning subsystem and its semantics become cumbersome when dealing with complex distributions. CHRiSM, by contrast, benefits from PRISM’s mature inference and learning infrastructure, making it suitable for a wide range of models such as Bayesian networks, hidden Markov models, and stochastic grammars, all expressed declaratively as CHR rules with probabilistic annotations.
Several illustrative examples are provided, covering domains such as natural‑language parsing (probabilistic context‑free grammars encoded as chance rules), robot motion planning (stochastic action selection under constraints), and biological network modeling (gene‑regulation interactions with uncertain activation). These examples demonstrate how CHRiSM can succinctly capture both the logical structure of a problem and the uncertainty inherent in real‑world data.
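For instance, a probabilistic context-free grammar, which the summary above mentions as encodable with chance rules (one probabilistic choice per rule alternative), behaves like the following small Python sampler; the grammar and its probabilities are invented for illustration.

```python
import random

# Invented toy PCFG: each nonterminal maps to weighted alternatives,
# mirroring a set of chance rules with one switch per nonterminal.
GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["det", "noun"], 0.6), (["noun"], 0.4)],
    "VP": [(["verb", "NP"], 0.7), (["verb"], 0.3)],
}

def sample(symbol):
    """Expand a symbol top-down, sampling one alternative per choice."""
    if symbol not in GRAMMAR:          # terminal symbol
        return [symbol]
    alternatives, weights = zip(*GRAMMAR[symbol])
    rhs = random.choices(alternatives, weights=weights)[0]
    return [tok for sym in rhs for tok in sample(sym)]
```

Every sampled sentence contains a noun and a verb, and the probability of a full parse is again the product of the choices made along the derivation, exactly the distribution-semantics pattern described earlier.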
Finally, the authors discuss future directions. They identify the need for a systematic treatment of ambiguous programs—potentially via a refined semantics or program analysis tools—and for scalability improvements in learning when dealing with massive datasets. They also envision dedicated development environments, visualization aids for rule execution traces, and integration with other probabilistic programming ecosystems.
In summary, CHRiSM represents a significant step toward unifying declarative constraint programming with probabilistic modeling and learning. By leveraging the strengths of CHR’s rule‑based rewriting and PRISM’s EM‑based parameter estimation, it offers a powerful, expressive, and practically useful platform for researchers and engineers building complex statistical models.