Correlation Decay in Random Decision Networks
We consider a decision network on an undirected graph in which each node corresponds to a decision variable, and each node and edge of the graph is associated with a reward function whose value depends only on the variables of the corresponding nodes. The goal is to construct a decision vector which maximizes the total reward. This decision problem encompasses a variety of models, including maximum-likelihood inference in graphical models (Markov Random Fields), combinatorial optimization on graphs, economic team theory, and statistical physics. The network is endowed with a probabilistic structure in which rewards are sampled from a distribution. Our aim is to identify sufficient conditions to guarantee average-case polynomiality of the underlying optimization problem. We construct a new decentralized algorithm called Cavity Expansion and establish its theoretical performance for a variety of models. Specifically, for certain classes of models we prove that our algorithm is able to find near-optimal solutions with high probability in a decentralized way. The success of the algorithm is based on the network exhibiting a correlation decay (long-range independence) property. Our results have the following surprising implications in the area of average-case complexity of algorithms. Finding the largest independent (stable) set of a graph is a well-known NP-hard optimization problem for which no polynomial-time approximation scheme exists even for graphs with maximum degree three, unless P=NP. We show that the closely related maximum weighted independent set problem for the same class of graphs admits a PTAS when the weights are i.i.d. with the exponential distribution. In other words, randomizing the reward function turns an NP-hard problem into a tractable one.
💡 Research Summary
The paper introduces a unified framework called a decision network, in which each vertex of an undirected graph represents a discrete decision variable and each vertex and edge carries a reward function that depends only on the decision variables at that vertex or at the edge's two endpoints. The global objective is to select a configuration of all variables that maximizes the sum of all vertex and edge rewards. This formulation subsumes a wide variety of problems: maximum‑likelihood inference in Markov random fields, combinatorial optimization tasks such as maximum independent set or max‑cut, economic team decision problems, and many models from statistical physics.
The authors endow the network with a probabilistic structure: every reward function is drawn independently from a prescribed distribution (e.g., exponential, Gaussian, or bounded distributions). The central question is whether this randomness can turn an otherwise worst‑case NP‑hard optimization problem into one that is tractable on average. To answer this, the paper focuses on the correlation decay (also called long‑range independence) property. Roughly speaking, correlation decay means that the marginal distribution of a variable becomes exponentially insensitive to the boundary conditions imposed at distance $d$. Formally, for a vertex $i$ and any two boundary assignments that differ only beyond distance $d$, the total variation distance between the induced marginals at $i$ is bounded by $C\alpha^{d}$ for some constants $C>0$ and $0<\alpha<1$.
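The decay property is easy to observe numerically in a toy model. The sketch below (our illustration, not the paper's experiment) exactly computes the optimal decision at one end of a weighted path under the two extreme boundary conditions at the other end, and estimates how often that decision depends on the boundary at distance $d$; all function names and parameters are illustrative.

```python
import random

def mwis_decision_at_root(weights, boundary_in):
    """Exact DP on a path v0-...-v_{n-1}: does v0 belong to the maximum
    weighted independent set when v_{n-1} is forced in (True) or out (False)?"""
    NEG = float("-inf")
    # f_in / f_out: best total weight on the suffix v_k..v_{n-1}
    # with v_k included / excluded.
    f_in = weights[-1] if boundary_in else NEG
    f_out = NEG if boundary_in else 0.0
    for k in range(len(weights) - 2, -1, -1):
        f_in, f_out = weights[k] + f_out, max(f_in, f_out)
    return f_in > f_out  # ties have probability zero under Exp weights

def disagreement_rate(d, trials=3000, seed=0):
    """Fraction of samples in which v0's optimal decision depends on the
    boundary condition imposed at distance d."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(trials):
        w = [rng.expovariate(1.0) for _ in range(d + 1)]
        if mwis_decision_at_root(w, True) != mwis_decision_at_root(w, False):
            disagree += 1
    return disagree / trials

for d in (2, 4, 8, 12):
    print(d, disagreement_rate(d))
```

As $d$ grows, the estimated disagreement rate should shrink rapidly, which is exactly the insensitivity to distant boundary conditions described above.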
Leveraging correlation decay, the authors design a decentralized algorithm named Cavity Expansion (CE). For each vertex $i$, CE extracts a local subgraph consisting of all vertices within a radius $r = O(\log n)$. Within this subgraph the algorithm treats the structure as a tree (or a tree‑like expansion) and computes "cavity marginals" by dynamic programming, ignoring the rest of the graph. The cavity marginal approximates the true marginal of $i$ under the global optimum. Each vertex then makes a local decision that maximizes its expected contribution given the cavity marginal. The process can be repeated a constant number of rounds to refine the solution, but the key point is that only local information and limited communication are required.
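For the maximum weighted independent set case, the local computation can be sketched with the standard "bonus" recursion, truncated at depth $r$ with boundary cavities set to zero. The greedy repair pass at the end is our addition (to guarantee a valid independent set even when local decisions conflict), and all function names are illustrative, not the paper's API.

```python
def cavity(graph, weights, v, parent, depth):
    """Bonus/cavity of v in the (tree-like) expansion below it, truncated
    at `depth`; boundary cavities are set to 0 (the free boundary)."""
    if depth == 0:
        return 0.0
    s = sum(cavity(graph, weights, u, v, depth - 1)
            for u in graph[v] if u != parent)
    return max(0.0, weights[v] - s)

def ce_decision(graph, weights, v, r):
    """Local rule: include v iff w_v exceeds the summed cavities of its
    neighbours, each explored to depth r."""
    return weights[v] > sum(cavity(graph, weights, u, v, r) for u in graph[v])

def ce_independent_set(graph, weights, r):
    """Apply the local rule at every vertex, then resolve any conflicting
    edges greedily (heavier endpoint first) to output a valid set."""
    cand = [v for v in graph if ce_decision(graph, weights, v, r)]
    kept = set()
    for v in sorted(cand, key=lambda u: -weights[u]):
        if not any(u in kept for u in graph[v]):
            kept.add(v)
    return kept

# toy example: path 0-1-2 with a heavy middle vertex
graph = {0: [1], 1: [0, 2], 2: [1]}
weights = {0: 1.0, 1: 5.0, 2: 1.0}
print(ce_independent_set(graph, weights, r=3))  # -> {1}
```

Each vertex's decision touches only its radius-$r$ neighborhood, which is what makes the rule decentralized: no global state is shared beyond local weights and adjacency.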
The main theoretical contributions are twofold:
1. **General Approximation Guarantee under Correlation Decay.** If the underlying random reward model satisfies correlation decay with parameters $(\alpha, C)$ and the rewards are uniformly bounded, CE finds a configuration whose total reward is at least $(1-\varepsilon)$ times the optimum with probability at least $1-\delta$, where $\delta$ is inverse‑polynomial in $n$. The runtime is polynomial in $n$ and $1/\varepsilon$. The proof combines a coupling argument for the decay of influence with a careful analysis of the error introduced by truncating the graph at radius $r$.
2. **Concrete PTAS for Maximum Weighted Independent Set with Exponential Weights.** Consider any graph of maximum degree three. Assign i.i.d. exponential weights $w_i \sim \text{Exp}(\lambda)$ to the vertices and ask for an independent set maximizing $\sum_{i\in I} w_i$. The exponential distribution induces natural correlation decay: the probability that a vertex participates in the optimal set becomes nearly independent of far‑away vertices because large weights dominate locally. The authors prove that this model satisfies the decay condition with $\alpha = 1/(1+\lambda) < 1$. Consequently, CE yields a PTAS: for any $\varepsilon > 0$ one can compute a $(1-\varepsilon)$-approximate maximum weighted independent set in time polynomial in the size of the graph. This result is striking because the unweighted maximum independent set problem admits no PTAS on degree‑three graphs unless P=NP, showing that randomizing the reward function can fundamentally change average‑case complexity.
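For intuition, the tree recursion behind CE in this model can be written with "bonus" (cavity) quantities; the notation below is our sketch rather than the paper's exact statement. For an edge $(u,v)$, let $B_{u\to v}$ denote the gain from leaving $u$ available in the part of the graph hanging off $u$ away from $v$. On a tree the recursion and the decision rule for membership in the optimal set $I^{*}$ are exact, and CE truncates the recursion at depth $r$:

$$
B_{u\to v} = \max\Big(0,\; w_u - \sum_{t\in N(u)\setminus\{v\}} B_{t\to u}\Big),
\qquad
u \in I^{*} \iff w_u > \sum_{t\in N(u)} B_{t\to u}.
$$

With exponential weights, ties occur with probability zero, so the strict inequality determines the decision almost surely.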
The paper also discusses limitations. In models with strong low‑temperature interactions (e.g., Ising models with large coupling constants), correlation decay fails ($\alpha$ approaches 1), and CE no longer guarantees polynomial‑time approximation. This aligns with known hardness results for such regimes, indicating that the decay condition is essentially tight for the algorithm's success.
Empirical simulations on random 3‑regular and Erdős–Rényi graphs corroborate the theory. With exponential weights, CE achieves average solution quality above 0.98 of the optimum after only two or three communication rounds, while with uniform weights the quality drops to around 0.73, reflecting the absence of decay. The communication overhead is negligible (under 5% of total runtime), highlighting the practicality of the decentralized approach.
In conclusion, the paper establishes a new bridge between average‑case complexity and probabilistic graphical models. By formalizing correlation decay and exploiting it through the Cavity Expansion algorithm, the authors demonstrate that many combinatorial optimization problems become efficiently approximable when the objective functions are drawn from suitable random distributions. The PTAS for weighted independent set on degree‑three graphs serves as a concrete illustration of how randomness can “soften” NP‑hardness. Future directions suggested include extending the decay analysis to broader families of distributions, adapting CE to dynamic or online settings, and generalizing the framework to hypergraphs or higher‑order interactions.