Annealed MAP
Maximum a Posteriori assignment (MAP) is the problem of finding the most probable instantiation of a set of variables given partial evidence on the remaining variables in a Bayesian network. MAP has been shown to be an NP-hard problem [22], even for constrained networks such as polytrees [18]. Hence, previous approaches often fail to yield any result for MAP problems in large, complex Bayesian networks. To address this problem, we propose the AnnealedMAP algorithm, a simulated-annealing-based MAP algorithm. AnnealedMAP simulates a non-homogeneous Markov chain whose invariant distribution is a probability density that concentrates itself on the modes of the target density. We tested this algorithm on several real Bayesian networks. The results show that, while maintaining good quality of the MAP solutions, the AnnealedMAP algorithm is also able to solve many problems that are beyond the reach of previous approaches.
💡 Research Summary
The paper tackles the Maximum a Posteriori (MAP) problem in Bayesian networks, where one seeks the most probable joint assignment to a set of query variables X given observed evidence e. MAP is known to be NP‑hard, and existing exact methods (e.g., branch‑and‑bound, integer programming) or approximate schemes (loopy belief propagation, variational inference) quickly become infeasible as the number of variables and network connectivity grow. To address this scalability barrier, the authors introduce AnnealedMAP, a simulated‑annealing‑based algorithm that constructs a non‑homogeneous Markov chain whose stationary distribution concentrates on the modes of the target posterior P(X|e).
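To make the problem concrete, here is a minimal brute-force sketch of MAP inference on a toy chain network A → B → C; the network, its CPTs, and all names are illustrative assumptions, not from the paper. It marginalizes the hidden variable B and maximizes over the query variable A given evidence on C:

```python
# Hypothetical toy Bayesian network A -> B -> C (all binary).
# CPT values are made up for illustration only.
pA = {0: 0.6, 1: 0.4}                                          # P(A=a)
pB_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}    # P(B=b | A=a), keyed (b, a)
pC_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}    # P(C=c | B=b), keyed (c, b)

def joint(a, b, c):
    # Chain factorization P(A, B, C) = P(A) P(B|A) P(C|B).
    return pA[a] * pB_A[(b, a)] * pC_B[(c, b)]

def brute_force_map(evidence_c):
    # MAP over query X = {A}: sum out the hidden variable B, then argmax.
    # This marginalization step is what makes MAP harder than MPE and
    # exponential in general -- the scalability barrier AnnealedMAP targets.
    scores = {a: sum(joint(a, b, evidence_c) for b in (0, 1)) for a in (0, 1)}
    return max(scores, key=scores.get)

print(brute_force_map(evidence_c=1))  # -> 1 (A=1 is the MAP assignment given C=1)
```

With these toy numbers, observing C = 1 flips the MAP assignment to A = 1 even though A = 0 has the higher prior, which is exactly the effect of conditioning on evidence e.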
Algorithmic core
The method defines a temperature‑dependent distribution π_T(X) = P(X|e)^{1/T}. At high temperature the distribution is flat, encouraging global exploration; as the temperature T is lowered, π_T sharpens around high‑probability regions, eventually collapsing onto the MAP mode when T → 0. Transition moves are generated by a hybrid of Gibbs sampling and Metropolis‑Hastings: a variable X_i is selected at random, a candidate value x_i’ is drawn from the conditional posterior P(X_i | X_{‑i}, e), and the move is accepted with probability
α = min{1, [P(x_i′ | X_{−i}, e) / P(x_i | X_{−i}, e)]^{1/T − 1}}.
Because the candidate is proposed from the untempered conditional P(X_i | X_{−i}, e) while the target is the tempered distribution π_T(X) = P(X|e)^{1/T}, the Metropolis–Hastings ratio reduces to this conditional-probability ratio raised to the power 1/T − 1: at T = 1 every proposal is accepted (plain Gibbs sampling), and as T → 0 the rule becomes increasingly greedy, rejecting moves to lower-probability values.
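The annealed move can be sketched as follows on a toy two-variable distribution; the target table, geometric cooling schedule, and all names are illustrative assumptions, not the paper's implementation or benchmarks. The acceptance test is done in log space to avoid overflow when the exponent 1/T − 1 grows large:

```python
import math
import random

# Toy unnormalized P(X | e) over two binary variables, keyed (x0, x1).
# Its mode is (1, 1); values are made up for illustration.
target = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def conditional(i, state):
    # P(X_i = v | X_{-i}, e), computed from the toy joint table.
    weights = {}
    for v in (0, 1):
        s = list(state)
        s[i] = v
        weights[v] = target[tuple(s)]
    z = sum(weights.values())
    return {v: w / z for v, w in weights.items()}

def annealed_map(sweeps=2000, t0=5.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1), rng.randint(0, 1)]
    temp = t0
    for _ in range(sweeps):
        i = rng.randrange(len(state))
        cond = conditional(i, state)
        # Propose from the untempered conditional posterior ...
        x_new = 0 if rng.random() < cond[0] else 1
        x_old = state[i]
        # ... and accept with prob min{1, (P(x')/P(x))^(1/T - 1)},
        # evaluated in log space so large exponents cannot overflow.
        log_ratio = (1.0 / temp - 1.0) * math.log(cond[x_new] / cond[x_old])
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            state[i] = x_new
        temp *= cooling  # geometric cooling toward T -> 0
    return tuple(state)

print(annealed_map())  # with this schedule the chain should settle on the mode (1, 1)
```

At high T the exponent 1/T − 1 is near −1, which counteracts the mode-seeking proposal and keeps exploration broad; as T falls below 1 the exponent turns positive and the chain freezes onto the highest-probability assignment, mirroring the annealing behavior described above.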