Adaptive Multiple Importance Sampling


The Adaptive Multiple Importance Sampling (AMIS) algorithm aims at an optimal recycling of past simulations in an iterated importance sampling scheme. The difference with earlier adaptive importance sampling implementations such as Population Monte Carlo is that the importance weights of all simulated values, past as well as present, are recomputed at each iteration, following the deterministic multiple-mixture estimator of Owen and Zhou (2000). Although the convergence properties of the algorithm are not fully established, we demonstrate through a challenging banana-shaped target distribution and a population genetics example that the improvement brought by this technique is substantial.


💡 Research Summary

The paper introduces Adaptive Multiple Importance Sampling (AMIS), a novel scheme that dramatically improves the efficiency of importance‑sampling‑based Monte Carlo methods by fully recycling all previously generated draws. Traditional adaptive importance sampling approaches such as Population Monte Carlo (PMC) generate a new proposal distribution at each iteration, draw fresh samples, and keep the weights of earlier draws fixed. Consequently, if the initial proposal is poorly matched to the target, early samples can dominate the estimator, leading to high variance and slow convergence.
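The fixed-weight behaviour described above can be illustrated with a minimal PMC-style sketch. The one-dimensional standard-normal target, the Gaussian proposal family, and the deliberately poor starting point are all illustrative assumptions, not the paper's actual setup; note that each draw keeps the weight computed from the single proposal that generated it:

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):
    # Illustrative target: standard normal density (an assumption for this sketch).
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

mu, sigma = 5.0, 1.0           # deliberately mismatched initial proposal
draws, weights = [], []

for t in range(3):
    x = rng.normal(mu, sigma, size=500)
    # PMC-style weight: target over the ONE proposal that produced x.
    # Once computed, these weights are never revisited.
    w = target_pdf(x) / normal_pdf(x, mu, sigma)
    draws.append(x)
    weights.append(w)
    # Adapt the next proposal mean using the current weighted sample.
    mu = np.sum(w * x) / np.sum(w)

# Pooled self-normalized estimate: early, badly weighted draws stay in
# with their original (high-variance) weights.
all_x = np.concatenate(draws)
all_w = np.concatenate(weights)
est = np.sum(all_w * all_x) / np.sum(all_w)
```

Because the first proposal sits far from the target, the first batch of weights is extremely skewed, and those weights are frozen; this is exactly the inefficiency that motivates recomputing all weights in AMIS.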

AMIS overcomes this limitation by employing the deterministic multiple-mixture estimator originally proposed by Owen and Zhou (2000). At iteration \(t\) a proposal density \(q_t(x)\) produces \(N_t\) draws \(\{x_i^{(t)}\}\). After the final iteration \(T\), the weight of every draw, regardless of when it was generated, is recomputed against the mixture of all proposals used:

$$
\omega_i^{(t)} \;=\; \frac{\pi\bigl(x_i^{(t)}\bigr)}{\displaystyle\sum_{l=1}^{T} \frac{N_l}{N_1+\cdots+N_T}\, q_l\bigl(x_i^{(t)}\bigr)},
$$

where \(\pi\) denotes the target density.
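A minimal runnable sketch of this recycling scheme follows. It assumes a one-dimensional standard-normal target and a Gaussian proposal family with moment-matching adaptation; the paper itself works with harder targets and richer proposals, so treat all parameter choices here as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    # Illustrative target pi: standard normal density (assumption for this sketch).
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

T, N = 5, 1000                  # iterations and draws per iteration (N_t = N for all t)
mus, sigmas = [0.0], [4.0]      # diffuse initial Gaussian proposal (assumed)
samples = []

for t in range(T):
    x = rng.normal(mus[-1], sigmas[-1], size=N)
    samples.append(x)
    all_x = np.concatenate(samples)
    # Deterministic-mixture (Owen-Zhou) weights: EVERY draw, past and present,
    # is weighted by the target over the equal-count mixture of all proposals so far.
    mixture = np.mean([normal_pdf(all_x, m, s) for m, s in zip(mus, sigmas)], axis=0)
    w = target_pdf(all_x) / mixture
    w /= w.sum()
    # Adapt the next proposal to the weighted moments of the whole recycled sample.
    mu_next = np.sum(w * all_x)
    sigma_next = np.sqrt(np.sum(w * (all_x - mu_next) ** 2))
    mus.append(mu_next)
    sigmas.append(sigma_next)

posterior_mean = np.sum(w * all_x)   # self-normalized estimate of E_pi[X]
```

Since every \(N_t\) is equal here, the mixture denominator reduces to a plain average of the proposal densities; with unequal batch sizes one would weight each \(q_l\) by \(N_l/(N_1+\cdots+N_T)\) as in the formula above.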

