Using parallel computation to improve Independent Metropolis–Hastings based estimation

In this paper, we consider the implications of the fact that raw parallel computing power can be exploited by a generic Metropolis–Hastings algorithm if the proposed values are independent. In particular, we present improvements to the independent Metropolis–Hastings algorithm that significantly decrease the variance of any estimator derived from the MCMC output, at no additional computing cost since those improvements rely on a fixed number of target density evaluations. Furthermore, the techniques developed in this paper do not jeopardize the Markovian convergence properties of the algorithm, since they are based on the Rao–Blackwell principles of Gelfand and Smith (1990), already exploited in Casella and Robert (1996), Atchadé and Perron (2005), and Douc and Robert (2010). We illustrate those improvements both on a toy normal example and on a classical probit regression model, but stress that they apply whenever the independent Metropolis–Hastings algorithm does.
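As a reference point, the following is a minimal sketch of the plain independent Metropolis–Hastings algorithm that the paper builds on, with a single proposal per iteration. The function names (`log_pi`, `log_q`, `sample_q`) and the NumPy setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def imh(log_pi, log_q, sample_q, x0, n_iter, rng):
    """Plain independent Metropolis-Hastings: one proposal per iteration.

    A candidate y ~ q is accepted with probability min(1, w(y)/w(x)),
    where w(z) = pi(z)/q(z) is the importance weight of z; only the
    ratio matters, so unnormalised log densities suffice.
    """
    x = x0
    log_w_x = log_pi(x) - log_q(x)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        y = sample_q(rng)
        log_w_y = log_pi(y) - log_q(y)
        if np.log(rng.uniform()) <= log_w_y - log_w_x:
            x, log_w_x = y, log_w_y
        chain[t] = x
    return chain

# Example usage: standard normal target, N(0, 2^2) proposal
# chain = imh(lambda x: -0.5 * x**2,
#             lambda x: -0.5 * (x / 2.0)**2,
#             lambda rng: rng.normal(0.0, 2.0),
#             0.0, 10_000, np.random.default_rng(0))
```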


💡 Research Summary

The paper investigates how the raw parallel computing power that is now widely available can be harnessed to improve the Independent Metropolis–Hastings (IMH) algorithm, a special case of the Metropolis–Hastings family where the proposal distribution is independent of the current state. The authors observe that when proposals are independent, one can generate many candidate points in a single iteration, evaluate the target density for each, and then combine the information without increasing the total number of target‑density evaluations.
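A minimal sketch of that observation, assuming a toy setup in the spirit of the paper's normal example (standard normal target, wider normal proposal); the vectorized NumPy calls stand in for genuinely parallel evaluation, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1024                                  # candidates drawn in one iteration

# Toy setup: target pi = N(0, 1), proposal q = N(0, 2^2).
# Unnormalised log densities suffice: constants cancel in the weight ratios.
log_pi = lambda y: -0.5 * y**2
log_q  = lambda y: -0.5 * (y / 2.0) ** 2

y = rng.normal(0.0, 2.0, size=K)          # K independent candidates, drawable in parallel
log_w = log_pi(y) - log_q(y)              # K target evaluations, embarrassingly parallel
```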

The central methodological contribution is the application of Rao–Blackwellisation to the IMH output. In each iteration the algorithm draws $K$ independent proposals $y_1,\dots,y_K$ from the proposal density $q$. For each candidate the importance weight $w_i=\pi(y_i)/q(y_i)$ (where $\pi$ is the target density) is computed. Instead of using a single accepted draw, the estimator of any functional $f(\theta)$ is formed as a weighted average

$$\hat f = \frac{\sum_{i=1}^{K} w_i\, f(y_i)}{\sum_{i=1}^{K} w_i}.$$
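Continuing the sketch above, this weighted average can be computed stably on the log scale; the snippet illustrates the formula rather than reproducing the authors' implementation:

```python
f = lambda theta: theta**2                # any functional of interest

# Self-normalised weighted average; subtracting the maximal log-weight
# before exponentiating prevents overflow and leaves the ratio unchanged.
w = np.exp(log_w - log_w.max())
estimate = np.sum(w * f(y)) / np.sum(w)
print(estimate)                           # close to E_pi[theta^2] = 1 in this toy setup
```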

