Approximate inference on planar graphs using Loop Calculus and Belief Propagation
We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006b) allows the exact partition function Z of a graphical model to be expressed as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. Building on the approach presented in Chertkov et al. (2008), we develop an algorithm that yields an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze in detail both the loop series and the Pfaffian series for models with binary variables and pairwise interactions, and show that the first term of the Pfaffian series can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
💡 Research Summary
The paper tackles the long‑standing problem of computing the partition function Z for probabilistic graphical models that contain cycles. Exact evaluation of Z is intractable for most graphs, yet it is essential for tasks such as marginal inference, learning, and model comparison. The authors combine two complementary ideas: Belief Propagation (BP) and Loop Calculus. BP provides an efficient fixed‑point approximation that is exact on trees but only approximate on loopy graphs. Loop Calculus, introduced by Chertkov and Chernyak (2006), expresses the exact Z as a finite series of correction terms—one term for every generalized loop—evaluated using the BP fixed point. While the series is exact, enumerating all loops is computationally prohibitive for general graphs.
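Schematically, the loop-series decomposition has the following form (a sketch of the standard loop-calculus notation; the precise definition of the local weights is given in the original papers, not here):

```latex
% Loop-series decomposition (schematic): the exact partition function
% equals the BP estimate Z_BP times a finite sum over generalized
% loops C, i.e. subgraphs with no degree-one vertices.
Z \;=\; Z_{\mathrm{BP}}\,\Bigl(1 \;+\; \sum_{C} r_C\Bigr),
\qquad
r_C \;=\; \prod_{a \in C} \mu_a(C),
% where each local weight mu_a(C) is computed from the BP fixed-point
% beliefs at node a, restricted to the edges that C uses.
```

Since the number of generalized loops grows exponentially with graph size, evaluating the full sum is exactly the intractability the paper addresses.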
The novelty of this work lies in exploiting the planar structure of a subclass of graphical models. For planar graphs, the authors show that the loop series can be reorganized into a Pfaffian expansion. By assigning a Kasteleyn orientation to the planar embedding, they construct a skew‑symmetric adjacency matrix whose Pfaffian enumerates weighted perfect matchings. Remarkably, each Pfaffian term corresponds to a specific collection of loops, allowing the entire correction series to be written as a sum of Pfaffians of progressively larger sub‑matrices. The first Pfaffian term (the square root of the determinant of the Kasteleyn matrix) captures the contribution of all single‑cycle loops and can be computed in O(N³) time using standard linear‑algebra routines. Higher‑order Pfaffian terms involve more complex loop structures but can be truncated without severe loss of accuracy.
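As a concrete numerical illustration of the "square root of the determinant" step, here is a minimal NumPy sketch. The 4×4 matrix below is a toy skew-symmetric matrix standing in for a Kasteleyn matrix of a specific planar embedding, not one taken from the paper; the point is only the identity Pf(K)² = det(K):

```python
import numpy as np

# Toy skew-symmetric matrix standing in for a Kasteleyn matrix K.
# For any skew-symmetric matrix, Pf(K)^2 = det(K), so the first
# Pfaffian term can be read off from an ordinary determinant,
# computable in O(N^3) with standard linear-algebra routines.
K = np.array([
    [ 0.0,  1.0,  0.0,  2.0],
    [-1.0,  0.0,  3.0,  0.0],
    [ 0.0, -3.0,  0.0,  1.0],
    [-2.0,  0.0, -1.0,  0.0],
])
assert np.allclose(K, -K.T)  # sanity check: skew-symmetry

pf_squared = np.linalg.det(K)
pfaffian = np.sqrt(pf_squared)  # → 7.0 for this toy matrix
```

For this matrix the 4×4 Pfaffian formula Pf(K) = k12·k34 − k13·k24 + k14·k23 gives 1·1 − 0·0 + 2·3 = 7, consistent with det(K) = 49.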
The authors focus on binary variables with pairwise interactions (Ising‑type models) defined on planar graphs such as square lattices, random planar graphs, and image‑grid models. They compare several inference strategies: (i) plain BP, (ii) truncated loop series based on loop length, (iii) the full Pfaffian series, and (iv) state‑of‑the‑art variational or sampling methods. Empirical results demonstrate that even the single‑term Pfaffian approximation dramatically improves over BP, reducing relative error on Z to well below 2 % across a wide range of temperatures and coupling strengths. When a few additional Pfaffian terms are added, the approximation becomes virtually indistinguishable from the exact partition function (computed via exhaustive enumeration on small instances). Importantly, the Pfaffian‑based truncation requires far fewer terms than length‑based loop truncation, because it leverages the global combinatorial structure of planar graphs rather than local loop counts.
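The exhaustive-enumeration baseline used as ground truth on small instances is straightforward to write down. The sketch below is illustrative (the function name and conventions are ours, not the paper's) and assumes the standard Ising parameterization with ±1 spins:

```python
import itertools
import numpy as np

def ising_partition_function(J, h):
    """Exact Z for a small binary pairwise (Ising) model by enumeration.

    J: symmetric coupling matrix (N x N, zero diagonal), h: field vector (N,).
    Z = sum over s in {-1,+1}^N of exp(sum_{i<j} J_ij s_i s_j + sum_i h_i s_i).
    Cost is O(2^N), so this is feasible only for small N and serves as
    the ground truth against which approximations are compared.
    """
    N = len(h)
    Z = 0.0
    for config in itertools.product([-1, 1], repeat=N):
        s = np.array(config, dtype=float)
        # factor 0.5 because s @ J @ s counts every pair (i, j) twice
        energy = 0.5 * s @ J @ s + h @ s
        Z += np.exp(energy)
    return Z
```

On a two-spin chain with coupling J and no field this returns 2e^J + 2e^(−J), matching the closed form; the exponential cost is exactly why the Pfaffian series matters for anything larger.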
From a computational standpoint, the algorithm scales as O(N³) for the first Pfaffian term, which is acceptable for moderate‑size planar problems (up to several thousand nodes). The authors also discuss implementation tricks such as sparse matrix representations and GPU‑accelerated determinant calculations that can further reduce runtime. They note that the method is naturally limited to planar graphs; extending it to non‑planar topologies would require graph‑planarization techniques or approximate embeddings, which remain open research directions.
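One practical detail behind the O(N³) step can be sketched in a few lines (our illustration, assuming NumPy; the random skew-symmetric matrix stands in for a Kasteleyn matrix): for matrices with thousands of rows, det(K) itself quickly over- or underflows in floating point, so it is safer to work with log Z and a stable log-determinant:

```python
import numpy as np

# Random skew-symmetric stand-in for a Kasteleyn matrix K.
# For even N, det(K) = Pf(K)^2 >= 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
K = A - A.T

# slogdet returns (sign, log|det|) without ever forming det(K),
# avoiding overflow/underflow for large planar models.
sign, logabsdet = np.linalg.slogdet(K)
log_first_term = 0.5 * logabsdet  # log of the first Pfaffian term, log sqrt(det K)
```

Accumulating the Pfaffian correction in log space this way composes cleanly with the log-partition-function estimates produced by BP.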
In the discussion, the paper outlines three promising extensions: (1) adapting the Pfaffian framework to models with multi‑valued variables or higher‑order factors by constructing appropriate tensor‑network representations; (2) investigating approximate planar embeddings for near‑planar graphs, thereby enabling the technique to be applied to a broader class of real‑world networks; and (3) integrating the Pfaffian correction as a post‑processing step in modern deep learning‑based inference pipelines, such as graph neural networks, to improve their probabilistic calibration.
Overall, the contribution is twofold: a theoretically elegant reformulation of the loop series for planar graphs as a Pfaffian expansion, and a practical algorithm that delivers high‑quality approximations of the partition function with modest computational effort. The work bridges the gap between exact combinatorial methods (Kasteleyn’s dimer counting) and approximate message‑passing algorithms, offering a new tool for researchers dealing with planar probabilistic models in statistical physics, computer vision, and network analysis.