Verifying Probabilistic Correctness in Isabelle with pGCL

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

This paper presents a formalisation of pGCL in Isabelle/HOL. Using a shallow embedding, we demonstrate close integration with existing automation support. We demonstrate the facility with which the model can be extended to incorporate existing results, including those of the L4.verified project. We motivate the applicability of the formalism to the mechanical verification of probabilistic security properties, including the effectiveness of side-channel countermeasures in real systems.


💡 Research Summary

The paper presents a comprehensive formalisation of the probabilistic Guarded Command Language (pGCL) within the Isabelle/HOL proof assistant, using a shallow embedding approach. By representing pGCL constructs directly as higher‑order functions and leveraging Isabelle’s existing type system, the authors avoid the overhead of a deep meta‑language encoding and achieve tight integration with Isabelle’s automation facilities. The core of the work is the definition of an expectation operator that maps probabilistic programs to real‑valued expectations. This operator is proved to satisfy linearity, monotonicity, and compatibility with Markov kernels, and the associated proof tactics are built on Isabelle’s standard libraries for real analysis, measure theory, and summability.
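The expectation-transformer view described above can be sketched in ordinary functional terms. The following is an illustrative Python model, not the paper's Isabelle/HOL code: a program is a function from post-expectations (state → real) to pre-expectations, and probabilistic choice is a convex combination of the two branches' pre-expectations.

```python
# Illustrative sketch (hypothetical, not the paper's formalisation):
# pGCL programs modelled as expectation transformers, i.e. functions
# from post-expectations (state -> real) to pre-expectations.

def skip(post):
    # wp(skip, E) = E
    return post

def assign(update):
    # wp(x := f(s), E) = E evaluated in the updated state
    def transformer(post):
        return lambda s: post(update(s))
    return transformer

def seq(prog1, prog2):
    # wp(P1; P2, E) = wp(P1, wp(P2, E))
    return lambda post: prog1(prog2(post))

def pchoice(p, prog1, prog2):
    # wp(P1 [p] P2, E) = p * wp(P1, E) + (1 - p) * wp(P2, E)
    def transformer(post):
        f1, f2 = prog1(post), prog2(post)
        return lambda s: p * f1(s) + (1 - p) * f2(s)
    return transformer

# Example: flip a fair coin into variable "x".
flip = pchoice(0.5,
               assign(lambda s: {**s, "x": 0}),
               assign(lambda s: {**s, "x": 1}))

# Pre-expectation of x after the flip, from any initial state:
expected_x = flip(lambda s: s["x"])
print(expected_x({}))  # 0.5
```

In the shallow embedding, the analogous definitions are made directly over HOL functions, so properties such as linearity and monotonicity of each transformer become ordinary HOL lemmas.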

A major contribution is the demonstration that the shallow embedding allows seamless reuse of Isabelle/HOL’s extensive mathematical libraries. Existing lemmas about real numbers, limits, and integrals can be applied directly to reasoning about probabilistic programs, dramatically reducing the amount of auxiliary development required. The authors also show how to automate convergence arguments for programs that involve infinite sums, by employing Isabelle’s "summable" and "tendsto" tactics. This contrasts with earlier deep‑embedding approaches, where such convergence proofs often required bespoke meta‑level reasoning.
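A typical convergence obligation of this kind arises when a program's expected behaviour is an infinite series. As a numerical illustration only (the actual proofs are symbolic, via "summable"/"tendsto"): for a geometric distribution with success probability p, the expected number of trials is the convergent sum Σ n·(1−p)^(n−1)·p = 1/p.

```python
# Numerical illustration of a convergence fact that Isabelle's
# summability machinery would discharge symbolically: the expected
# number of trials of a geometric distribution is 1/p.

def geometric_expectation(p, terms=10_000):
    # Partial sum of n * (1-p)^(n-1) * p for n = 1..terms;
    # the tail vanishes geometrically, so this approximates 1/p.
    return sum(n * (1 - p) ** (n - 1) * p for n in range(1, terms + 1))

print(geometric_expectation(0.5))   # converges to 1/0.5 = 2.0
print(geometric_expectation(0.25))  # converges to 1/0.25 = 4.0
```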

The paper further illustrates the extensibility of the framework by incorporating results from the L4.verified project. The state‑transition model and security properties that were previously verified for the L4 microkernel are imported into the pGCL setting, enabling the verification of probabilistic extensions such as randomized scheduling or memory allocation policies. This integration showcases the potential of the approach for verifying security properties that have an inherent probabilistic component.

A concrete case study focuses on side‑channel countermeasures. The authors model side‑channel leakage as a random variable and quantify the effectiveness of mitigation techniques (e.g., random padding, timing jitter) by measuring the reduction in expected leakage. The verification goal is expressed as an inequality on expectations: the post‑mitigation expected leakage must fall below a predefined threshold. The proof script automatically discharges the necessary obligations, relying on the previously established expectation algebra and convergence tactics.
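The shape of that verification goal can be illustrated with a small, entirely hypothetical model (the timings, padding bound, and threshold below are made up, not taken from the paper): with uniform random padding over P cycles, any distinguisher's advantage between two secret-dependent timings t0 and t1 is bounded by the total variation distance |t1 − t0| / P, and the goal is the inequality that this bound falls below a threshold.

```python
# Hypothetical illustration of an inequality-on-expectations goal for a
# timing side channel. All concrete numbers are invented for the example.

def distinguishing_advantage(t0, t1, P):
    # Total variation distance between Uniform{t0..t0+P-1} and
    # Uniform{t1..t1+P-1}; this bounds any distinguisher's advantage.
    d = abs(t1 - t0)
    return min(d / P, 1.0)

t0, t1 = 103, 117   # secret-dependent timings (made-up numbers)
threshold = 0.01    # required leakage bound (made-up)

# The "verification goal" as a checkable inequality:
assert distinguishing_advantage(t0, t1, 2048) <= threshold
print(distinguishing_advantage(t0, t1, 2048))  # 14/2048 = 0.0068359375
```

In the paper's setting the analogous inequality is stated over pGCL expectations and discharged by the expectation algebra and convergence tactics rather than by numeric evaluation.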

Implementation challenges are discussed in depth. When composing probabilistic commands, infinite summations can arise, requiring careful handling of convergence. The authors formalise a version of the Bounded Convergence Theorem within Isabelle and embed it into the automation pipeline, ensuring that all generated expectations are well‑defined. Additionally, because the expectation operator is higher‑order, the authors had to prove a suite of composition lemmas to support reasoning about nested expectations; these lemmas are provided in the appendix.
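For reference, the Bounded Convergence Theorem mentioned here is the standard measure-theoretic statement, sketched below in conventional notation (not the paper's Isabelle formulation):

```latex
% Bounded Convergence Theorem (standard statement): a uniformly bounded,
% pointwise-convergent sequence lets the limit pass through the integral
% on a finite measure space.
|f_n(x)| \le M \ \ \forall n, x,
\qquad f_n \to f \text{ pointwise}
\;\Longrightarrow\;
\lim_{n \to \infty} \int f_n \, d\mu = \int f \, d\mu
\qquad (\mu \text{ finite}).
```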

The paper concludes by outlining future work. Extensions to handle probabilistic loops with potentially unbounded state spaces, integration of quantitative information‑flow metrics, and cross‑tool verification with other proof assistants (Coq, Lean) are identified as promising directions. The authors argue that their shallow‑embedding methodology paves the way for a new generation of verification tools that can reason about both functional correctness and probabilistic security guarantees within a single, highly automated framework.

Overall, the work demonstrates that a shallow embedding of pGCL in Isabelle/HOL not only preserves the expressive power needed for probabilistic reasoning but also unlocks the full potential of Isabelle’s automation, making it feasible to verify realistic security properties such as side‑channel resistance in complex, real‑world systems.

