Fixing Convergence of Gaussian Belief Propagation
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges, it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari’s linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
💡 Research Summary
The paper addresses a fundamental limitation of Gaussian Belief Propagation (GaBP), an iterative message‑passing algorithm widely used for inference in Gaussian graphical models and for solving linear systems. While GaBP is known to converge to the correct maximum‑a‑posteriori (MAP) estimate under certain sufficient conditions—such as diagonal dominance, positive‑definiteness, or a spectral radius less than one—these conditions are often violated in real‑world problems, causing divergence or oscillation. To overcome this, the authors propose a double‑loop framework that forces convergence even when the standard algorithm would fail.
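To make the failure mode concrete, here is a small illustrative example (not from the paper): the matrix below is symmetric positive definite but far from diagonally dominant, and plain Jacobi iteration, which shares GaBP's fixed point for the means on a Gaussian model, diverges on it.

```python
import numpy as np

# Illustrative matrix: symmetric positive definite (eigenvalues 2.8, 0.1, 0.1),
# yet not diagonally dominant, so the simple sufficient conditions fail.
A = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# Plain Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), where A = D + R.
# Its iteration matrix here has spectral radius 1.8 > 1, so it diverges.
D = np.diag(np.diag(A))
R = A - D
x = np.zeros(3)
for _ in range(50):
    x = np.linalg.solve(D, b - R @ x)

print(np.linalg.norm(A @ x - b))  # residual grows without bound
```

The residual explodes even though the exact solution `np.linalg.solve(A, b)` is perfectly well defined, which is precisely the gap the double-loop construction closes.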
The outer loop introduces a small regularization parameter ε and modifies the original system matrix A to Aε = A + εI. This shift raises every eigenvalue of A by ε and strengthens the diagonal, pushing the problem into a regime where the classic GaBP convergence conditions hold; in effect, it pre-conditions the problem. The inner loop then runs standard GaBP on the regularized matrix Aε until a prescribed tolerance is met. After each inner-loop convergence, ε is reduced (e.g., geometrically) and the process repeats. The authors prove that as ε → 0, the sequence of inner-loop estimates converges to the true MAP solution of the original, unregularized system. The proof leverages fixed-point continuity: the fixed point of GaBP for Aε varies continuously with ε, guaranteeing that the limit point coincides with the MAP estimate of A.
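The construction can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the inner solver below is plain Jacobi iteration, used as a stand-in for GaBP because both share the same fixed point for the means, and instead of literally shrinking ε the sketch keeps a fixed loading and adds an outer correction term ε·x to the right-hand side, one concrete way to realize the same outer fixed-point argument. The names `jacobi_solve` and `double_loop` are illustrative.

```python
import numpy as np

def jacobi_solve(M, rhs, x0, iters=300):
    """Inner loop: Jacobi iteration as a stand-in for GaBP on the loaded model."""
    d = np.diag(M)
    x = x0.copy()
    for _ in range(iters):
        x = (rhs - (M @ x - d * x)) / d  # x_{k+1} = D^{-1}(rhs - R x_k)
    return x

def double_loop(A, b, eps=1.0, outer_iters=300):
    """Outer loop: solve (A + eps*I) x_{k+1} = b + eps * x_k, warm-starting
    each inner solve. The outer fixed point satisfies A x = b exactly,
    so the diagonal loading cancels in the limit."""
    n = len(b)
    A_eps = A + eps * np.eye(n)
    x = np.zeros(n)
    for _ in range(outer_iters):
        x = jacobi_solve(A_eps, b + eps * x, x)
    return x

# SPD but not diagonally dominant: plain Jacobi (and plain GaBP-style
# iteration) diverges on A, yet the double loop recovers the solution.
A = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = double_loop(A, b)
print(np.linalg.norm(A @ x - b))  # residual is tiny
```

With ε = 1 the loaded matrix A + εI is diagonally dominant, so the inner iteration converges, and the outer correction contracts toward the unregularized solution at a geometric rate governed by ε/(λmin + ε).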
Beyond merely guaranteeing convergence, the framework extends naturally to over‑determined linear systems. By applying the double‑loop scheme to the normal equations (AᵀA + εI) x = Aᵀb, the algorithm computes the exact least‑squares solution x* = (AᵀA)⁻¹Aᵀb (assuming A has full column rank) as ε → 0. This is significant because it allows a fully distributed, message‑passing implementation of least‑squares regression without explicitly forming or inverting the Gram matrix, which is often prohibitive for large‑scale problems.
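The ε → 0 limit is easy to verify numerically. The snippet below uses a dense direct solver in place of the distributed GaBP inner loop, purely to exhibit the limit; the random matrix and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))   # over-determined: 50 equations, 5 unknowns
b = rng.standard_normal(50)

G = A.T @ A                        # Gram matrix (full column rank here)
rhs = A.T @ b
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # reference least-squares solution

# Solutions of the regularized normal equations approach x_ls as eps -> 0.
gaps = []
for eps in [1.0, 1e-2, 1e-4, 1e-6]:
    x_eps = np.linalg.solve(G + eps * np.eye(5), rhs)
    gaps.append(np.linalg.norm(x_eps - x_ls))
print(gaps)  # distances to x_ls shrink as eps shrinks
```

The gap decays linearly in ε, since x_eps − x_ls = −ε (AᵀA + εI)⁻¹ x_ls, consistent with the fixed-point continuity argument above.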
The authors validate their method on several synthetic and real‑world benchmarks. Random sparse graphs, chain graphs, and power‑grid models illustrate that the double‑loop algorithm converges rapidly (often within a few dozen iterations) even when the standard GaBP diverges or requires thousands of iterations. In all cases, the final error falls below 10⁻⁶ relative to the exact MAP solution.
A concrete application is presented in the context of linear multi‑user detection for massive MIMO systems. Montanari’s linear detection algorithm, which can be interpreted as a GaBP instance, fails to converge when the number of users exceeds the number of antennas by a substantial margin. By embedding Montanari’s update rules within the proposed double‑loop scheme, the authors demonstrate stable convergence for user loads up to 1.5 times the traditional limit. Simulations with 64 receive antennas and up to 96 simultaneous users show a dramatic reduction in bit‑error rate (BER) compared to the original algorithm, confirming that the forced‑convergence technique translates into tangible performance gains in communication systems.
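The detection problem itself reduces to exactly the kind of loaded linear system the double-loop machinery targets. The sketch below is hypothetical (the dimensions echo the summary's 64-antenna, 96-user scenario, but this is not the paper's simulation setup): a linear MMSE detector solves (SᵀS + σ²I) x = Sᵀy and then slices the result to symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_users = 64, 96            # overloaded: more users than antennas
S = rng.standard_normal((n_ant, n_users)) / np.sqrt(n_ant)  # signature matrix
x_true = rng.choice([-1.0, 1.0], size=n_users)              # BPSK symbols
sigma = 0.1
y = S @ x_true + sigma * rng.standard_normal(n_ant)

# MMSE detection: solve (S^T S + sigma^2 I) x = S^T y. When overloaded,
# S^T S is rank-deficient, so the sigma^2 loading is what keeps the
# system well-posed -- the same role the eps-loading plays above.
M = S.T @ S + sigma**2 * np.eye(n_users)
x_mmse = np.linalg.solve(M, S.T @ y)
bits = np.sign(x_mmse)
ber = np.mean(bits != x_true)
print(ber)
```

A direct solve is shown for clarity; in the paper's setting this system is solved distributively by GaBP-style message passing, which is where forced convergence becomes essential.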
The paper’s contributions can be summarized as follows:
- A generic double‑loop algorithm that guarantees convergence of GaBP for any symmetric positive‑definite matrix, removing the need for restrictive spectral conditions.
- An extension of the method to compute exact least‑squares solutions of over‑constrained linear systems in a fully distributed fashion.
- A practical case study on linear detection, showing that the technique enables higher user densities in massive MIMO without sacrificing reliability.
Future work suggested by the authors includes developing adaptive schemes for selecting and updating ε, exploring extensions to non‑Gaussian or mixed‑type graphical models, and investigating connections with other variational inference methods such as Expectation Propagation or Variational Bayes. Overall, the paper provides a robust, theoretically grounded, and practically useful tool for expanding the applicability of GaBP across a wide range of engineering and scientific problems.