Adaptive approximate Bayesian computation for complex models
Maxime Lenormand,¹ Franck Jabot,¹ and Guillaume Deffuant¹
¹ IRSTEA, LISC, 24 avenue des Landais, 63172 Aubière, France

We propose a new approximate Bayesian computation (ABC) algorithm that aims at minimizing the number of model runs needed to reach a given quality of the posterior approximation. This algorithm automatically determines its sequence of tolerance levels and makes use of an easily interpretable stopping criterion. Moreover, it avoids the problem of particle duplication found when using an MCMC kernel. When applied to a toy example and to a complex social model, our algorithm is 2 to 8 times faster than the three main sequential ABC algorithms currently available.

Approximate Bayesian computation (ABC) techniques appear particularly relevant for calibrating stochastic models because they are easy to implement and applicable to any model. They generate a sample of model parameter values (θ_i)_{i=1,…,N} (often also called particles) from the prior distribution π(θ) and select the θ_i values leading to model outputs x ∼ f(x|θ_i) satisfying a proximity criterion with the target data y, namely ρ(x, y) ≤ ε, where ρ is a distance and ε a tolerance level. The selected sample of parameter values approximates the posterior distribution of parameters, leading to model outputs with the expected quality of approximation. In practice, however, running these techniques is computationally very demanding, because sampling the whole parameter space requires a number of simulations that grows exponentially with the number of parameters to identify. This tends to limit the application of these techniques to easily computable models [1].
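As an illustration of this basic rejection scheme, a minimal sketch in Python follows (the Gaussian model, prior, and function names here are ours, standing in for an arbitrary f(x|θ)):

```python
import random

def rejection_abc(prior_sample, simulate, distance, y, eps, n_particles):
    """Basic rejection ABC: keep draws from the prior whose simulated
    output falls within tolerance eps of the target data y."""
    accepted = []
    while len(accepted) < n_particles:
        theta = prior_sample()        # theta_i ~ pi(theta)
        x = simulate(theta)           # x ~ f(x | theta_i)
        if distance(x, y) <= eps:     # keep theta_i if rho(x, y) <= eps
            accepted.append(theta)
    return accepted

# Stand-in model: x ~ Normal(theta, 1), prior U[-10, 10], target y = 0.
random.seed(1)
post = rejection_abc(
    prior_sample=lambda: random.uniform(-10, 10),
    simulate=lambda th: random.gauss(th, 1.0),
    distance=lambda x, y: abs(x - y),
    y=0.0, eps=0.5, n_particles=200,
)
print(len(post), min(post), max(post))
```

The cost problem described above is visible here: the acceptance rate falls sharply as eps shrinks or as the parameter dimension grows, which is what sequential schemes try to mitigate.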
In this paper, our goal is to minimize the number of model runs needed to reach a given quality of posterior approximation, and thus to make the approach applicable to a larger set of models. ABC is the subject of intense scientific research and several improved versions of the original scheme are available, such as using local regressions to improve parameter inference [2, 3], automatically selecting informative summary statistics [4, 5], coupling to Markov chain Monte Carlo [6, 7] or improving the posterior distributions sequentially with sequential Monte Carlo methods [8–10]. This last class of methods approximates the posterior progressively, using sequential samples S^(t) = (θ_i^(t))_{i=1,…,N} derived from the sample S^(t−1), and a decreasing set of tolerance levels {ε_1, …, ε_T}. This strategy focuses the sampling effort on the parts of the parameter space of high likelihood, avoiding spending much computing time systematically sampling the whole parameter space. The first sequential method applied to ABC was proposed by [8] with ABC-PRC (Partial Rejection Control). This method adapts the theoretical work of [11] to ABC. However, in [10] the authors have shown that this method leads to a bias in the approximation of the posterior. In [9, 10] the authors proposed a new algorithm, called Population Monte Carlo ABC in [10] and hereafter called PMC. This algorithm corrects the bias by assigning to each particle a weight corresponding to the inverse of its importance in the sample. It is particularly interesting in our perspective because it provides a rigorous framework for the sequential sample idea, which seems a good way to minimize the number of runs. In this approach, the problem is then to define the sequence of tolerance levels {ε_1, …, ε_T}.
In [12] and [13] the authors partly solve this problem by deriving the tolerance level at a given step from the previously selected sample. However, a difficulty remains: when to stop? If the final tolerance level ε_T is too large, the final posterior will be of bad quality. Conversely, a too small ε_T leads to a posterior that could have been obtained with fewer model runs.

In this paper, we propose a modification of the population Monte Carlo ABC algorithm proposed in [10] that we call adaptive population Monte Carlo ABC (hereafter called APMC). This new algorithm determines by itself the sequence of tolerance levels as in [12] and [13], and it also provides a stopping criterion. Furthermore, our approach avoids the problem of duplication of particles due to the MCMC kernel used in [12] and [13]. We prove that the computation of the weights associated to the particles in this algorithm leads to the intended posterior distribution, and we also prove that the algorithm stops whatever the chosen value of the stopping parameter. We show that our algorithm, applied to a toy example and to an individual-based social model, requires significantly fewer simulations to reach a given quality level of the posterior distribution than the population Monte Carlo ABC algorithm of [10] (hereafter called PMC), the replenishment SMC ABC algorithm of [12] (hereafter called RSMC) and the adaptive SMC ABC algorithm of [13] (hereafter called SMC). Our new algorithm has been implemented in the R package 'EasyABC' [14].

Sequential Monte Carlo methods in approximate Bayesian computation

In this section we present the three main sequential ABC algorithms currently available and their limitations: the Population Monte Carlo ABC proposed in [10] (hereafter called PMC), the Replenishment Sequential Monte Carlo ABC proposed in [12] and the Sequential Monte Carlo ABC proposed in [13].
These algorithms are detailed in Appendix A.

The PMC algorithm

This method consists in generating a sample S^(t) = (θ_i^(t))_{i=1,…,N} at each iteration of the algorithm, 1 ≤ t ≤ T, each particle of the sample S^(t) satisfying the predefined tolerance level ε_t, where ε_1 ≥ … ≥ ε_T. We say that a parameter value θ_i^(t) satisfies the tolerance level ε_t if, when running the model, we get x ∼ f(x|θ_i^(t)) such that its distance ρ_i^(t) = ρ(x, y) to the target data y is below ε_t. At step t the sample S^(t) is derived from the sample S^(t−1) using a particle filter methodology. The first sample S^(1) is generated using a regular ABC step. At step t a new particle θ_i^(t) is generated using a Markov transition kernel K_t, θ_i^(t) ∼ K_t(θ|θ*), until θ_i^(t) satisfies ε_t, where θ* is randomly drawn from S^(t−1) with probabilities (w_i^(t−1))_{i=1,…,N}. The weight w_i^(t−1) is proportional to the inverse of the particle's importance in the sample S^(t−1) (Eq. 2). The kernel function K_t is a Gaussian kernel with a variance equal to twice the weighted empirical variance of the set S^(t−1) [10]. The algorithm stops when the sample S^(T) is generated, i.e. the target ε_T is reached. See Algorithm 2 for details.

Weights correcting the kernel sampling bias

As pointed out by [10], the newly generated particles θ_i^(t) in a sequential procedure are no longer drawn from the prior distribution but from a specific probability density d_i^(t) that depends on the particles selected at the previous step and on the chosen kernel. This introduces a bias in the procedure, which should be corrected by attributing a weight equal to π(θ_i^(t))/d_i^(t) to each newly generated particle θ_i^(t).
The probability density d_i^(t) of generating particle θ_i^(t) at step t is given by the sum of the probabilities of reaching θ_i^(t) from each of the N particles of the previous step, times their respective weights:

d_i^(t) ∝ Σ_{j=1}^{N} w_j^(t−1) σ_{t−1}^{−1} φ(σ_{t−1}^{−1} (θ_i^(t) − θ_j^(t−1)))   (1)

where φ(x) = (1/√(2π)) e^{−x²/2} is the kernel function. This yields the expression of the weight w_i^(t) to be attributed to the newly drawn particle θ_i^(t):

w_i^(t) ∝ π(θ_i^(t)) / Σ_{j=1}^{N} w_j^(t−1) σ_{t−1}^{−1} φ(σ_{t−1}^{−1} (θ_i^(t) − θ_j^(t−1)))   (2)

Limitations of the PMC algorithm

The major problem in the PMC algorithm is to define the decreasing sequence of tolerance levels {ε_1, …, ε_T} so as to get close to an optimal gain in computing time. If the decrease in tolerance values is too sharp or too shallow, the benefit of the importance sampling procedure has a good chance of being lower than what could be achieved. In the following, we indeed demonstrate that our algorithm leads to a sequence of tolerance levels that clearly outperforms an arbitrary choice of the sequence.

The RSMC and the SMC algorithms

In [12] and [13] the authors proposed two methods to determine "on-line" the sequence of tolerance levels. The main idea is to define the value ε_t from the previous sample S^(t−1). In the RSMC algorithm of [12], ε_t is defined as a quantile of the ρ(x, y) values of the previous sample S^(t−1) (see Algorithm 3 for details). In the SMC algorithm of [13], ε_t is computed so that the effective sample size of the particles is reduced by a constant factor at each time step (see Algorithm 4 for details). A second difference between the PMC and the RSMC/SMC algorithms concerns the proposal distribution: the RSMC and SMC algorithms use an MCMC kernel to move the particles.
At step t, a new particle θ_i^(t) is generated using an MCMC kernel, θ_i^(t) ∼ K_t(θ|θ*), where θ* is randomly drawn from S^(t−1) with probabilities (w_i^(t−1))_{i=1,…,N}. The weight (w_i^(t))_{i=1,…,N} is equal to 1 if the particle θ_i^(t) satisfies ε_t, and to 0 otherwise. The jump is accepted with a probability p_acc based on the Metropolis-Hastings ratio (Eq. 3):

1 ∧ [π(θ_i^(t)) K_t(θ* | θ_i^(t)) / (π(θ*) K_t(θ_i^(t) | θ*))] · 1_{ρ(x,y) ≤ ε_t}   (3)

where x ∧ y denotes the minimum of x and y.

Limitations of the RSMC and the SMC algorithms

The MCMC kernel used in [12] and [13] to sample new values θ_j^(t) has a significant drawback in our view: it can lead to particle duplications. Indeed, each time the MCMC jumps from a particle to a new one which is not accepted, the initial particle is kept in the new sample of particles. When this occurs several times with the same initial particle, this particle appears several times in the new sample. The number of such "duplicated" particles can grow and strongly deteriorate the quality of the posterior, as illustrated below. To solve this problem, [12] proposed to perform R MCMC jump trials instead of one. R evolves during the course of the algorithm (Eq. 4), since its value is chosen such that there is a probability 1 − c that the particle gets moved at least once, with c = 0.01 in [12]. To circumvent the problem of particle duplications, [13] proposed to resample the parameter values when too many are duplicated. In [13] the authors also proposed to run the model M times for each particle, in order to decrease the variance of the acceptance ratio of the MCMC jump. However, all these solutions increase the number of model runs, going against the initial benefit of using sequential samples.
R = log(c) / log(1 − p_acc)   (4)

Adaptive population Monte Carlo approximate Bayesian computation

Overview of the APMC algorithm

The APMC algorithm follows the main principles of sequential ABC, and defines the tolerance level at each step on-line, as in [15], [12] and [13]. For each tolerance level ε_t, it generates a sample S^(t) of particles and computes their associated weights. This weighted sample approximates the posterior distribution, with an increasing approximation quality as ε_t decreases. Suppose the APMC reached step t − 1, with a sample S^(t−1) of N_α = ⌊αN⌋ particles and their associated weights (θ_i^(t−1), w_i^(t−1))_{i=1,…,N_α}. The main features of the APMC are (see Algorithm 5 for details):

• the algorithm generates N − N_α particles (θ_j^(t−1))_{j=N_α+1,…,N}, where θ_j^(t−1) ∼ N(θ*_j, σ²(t−1)); the seed θ*_j is randomly drawn from the weighted set (θ_i^(t−1), w_i^(t−1))_{i=1,…,N_α}, and the variance σ²(t−1) of the Gaussian kernel N(θ*_j, σ²(t−1)) is twice the empirical variance of the weighted set (θ_i^(t−1), w_i^(t−1))_{i=1,…,N_α}, following [10];

• the weights w_j^(t−1) of the new particles (θ_j^(t−1))_{j=N_α+1,…,N} are computed so that these new particles can be combined with the sample S^(t−1) of the previous step without introducing a bias in the posterior distribution. These weights are given by Eq. 6 (see below);

• the algorithm concatenates the N_α previous particles (θ_i^(t−1))_{i=1,…,N_α} with the N − N_α new particles (θ_j^(t−1))_{j=N_α+1,…,N}, together with their associated weights and distances to the data. This constitutes a new set noted S^(t)_temp = (θ_i^(t), w_i^(t), ρ_i^(t))_{i=1,…,N};

• the next tolerance level ε_t is determined as the first α-quantile of the (ρ_i^(t))_{i=1,…,N};
• the new sample S^(t) = (θ_i^(t), w_i^(t))_{i=1,…,N_α} is then constituted from the N_α particles of S^(t)_temp satisfying the tolerance level ε_t;

• if the proportion p_acc of particles satisfying the tolerance level ε_{t−1} among the N − N_α newly generated particles is below a chosen value p_acc_min, the algorithm stops, and its result is (θ_i^(t))_{i=1,…,N_α} with their associated weights.

Note that in our algorithm, to get a number N_α of retained particles for the next step, the choice of ε_t is heavily constrained: it has to be at least equal to the first α-quantile of the (ρ_i^(t))_{i=1,…,N} and smaller than the immediately superior ρ_i^(t) value. We chose to fix it to the first α-quantile for simplicity. This choice also ensures that the tolerance level decreases from one iteration to the next: in the worst case where p_acc = 0 (no newly simulated particle accepted), ε_t = ε_{t−1}. Our algorithm does not use an MCMC kernel and avoids duplicating particles. It requires a reweighting step in O(N_α²) instead of O(N_α) in [12], but in our perspective this computational cost is supposed to be negligible compared with the cost of running the model.

Weights correcting the kernel sampling bias

For the APMC algorithm, the probability density d_i^(t) of generating particle θ_i^(t) at step t is:

d_i^(t) = Σ_{j=1}^{N_α} [w_j^(t−1) / Σ_{k=1}^{N_α} w_k^(t−1)] σ_{t−1}^{−1} φ(σ_{t−1}^{−1} (θ_i^(t) − θ_j^(t−1)))   (5)

where φ(x) = (1/√(2π)) e^{−x²/2} is the kernel function. This yields the expression of the weight w_i^(t) to be attributed to the newly drawn particle θ_i^(t):

w_i^(t) = π(θ_i^(t)) / Σ_{j=1}^{N_α} [w_j^(t−1) / Σ_{k=1}^{N_α} w_k^(t−1)] σ_{t−1}^{−1} φ(σ_{t−1}^{−1} (θ_i^(t) − θ_j^(t−1)))   (6)

This formula differs from the scheme of [10], where the weights need only to be proportional to Eq. 6 at each step.
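As an illustration, the reweighting of Eqs. 5–6 might be sketched as follows (a Python sketch for one-dimensional θ with a uniform prior; the function names and example values are ours):

```python
import math

def phi(x):
    # Standard normal density, the kernel function of Eq. 5
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def apmc_weight(theta_new, thetas_prev, weights_prev, sigma, prior_density):
    """Weight of a newly drawn particle (Eq. 6): prior density divided by
    the normalized Gaussian-kernel mixture density of Eq. 5."""
    w_sum = sum(weights_prev)
    d = sum((w / w_sum) * (1.0 / sigma) * phi((theta_new - t) / sigma)
            for t, w in zip(thetas_prev, weights_prev))
    return prior_density(theta_new) / d

# Previous weighted sample; in the algorithm sigma**2 would be twice the
# weighted empirical variance of this sample (here sigma is just given).
thetas_prev = [-0.5, 0.0, 0.7]
weights_prev = [1.0, 2.0, 1.0]
uniform_prior = lambda th: 1.0 / 20.0 if -10.0 <= th <= 10.0 else 0.0
w_new = apmc_weight(0.1, thetas_prev, weights_prev, sigma=0.8,
                    prior_density=uniform_prior)
print(round(w_new, 4))
```

Because the mixture in the denominator is normalized by the sum of the previous weights, the scale of the weights is comparable across iterations, which is what allows particles from different steps to be concatenated.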
Since we want to concatenate particles obtained at different steps of the algorithm (while [10] generate the sample at step t from scratch), we need the scaling of the weights to be consistent across the different steps of the algorithm. Using the weights of Eq. 6 guarantees the correction of the sampling bias throughout the APMC procedure and ensures that the N_α weighted particles θ_i^(t) produced at the t-th iteration follow the posterior distribution π(θ | ρ(x, y) < ε_t).

Figure 1: Number of distinct particles in a sample of N = 5000 particles during the course of the SMC and RSMC algorithms applied to the toy example. In all three panels we plot the results obtained for 50 executions of the algorithm. (a) SMC with α = 0.9 and M = 1; (b) SMC with α = 0.99 and M = 1; (c) RSMC with α = 0.5. In all three panels, the tolerance target is equal to 0.001.

The stopping criterion

We stop the algorithm when the proportion of "accepted" particles (Eq. 7) among the N − N_α new particles is below a predetermined threshold p_acc_min. This choice of stopping rule ensures that additional simulations would only marginally change the posterior distribution. Note that this stopping criterion will be reached even if p_acc_min = 0, which ensures that the algorithm converges. We present a formal proof of this assertion in Appendix B.

p_acc(t) = (1/(N − N_α)) Σ_{k=N_α+1}^{N} 1_{ρ_k^(t−1) < ε_{t−1}}   (7)

Experiments on a toy example

We consider four algorithms: APMC, PMC, SMC and RSMC. Their implementations in R [16] are available [21]. We compare them on the toy example studied in [8], where π(θ) = U[−10, 10] and f(x|θ) ∼ ½ φ_{θ,1/100} + ½ φ_{θ,1}, where φ_{μ,σ²} is the normal density of mean μ and variance σ². In this example, we consider that y = 0 is observed, so that the posterior density of interest is proportional to (φ_{0,1/100}(θ) + φ_{0,1}(θ)) π(θ).
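This toy example can be sketched directly (a Python sketch in our notation; random draws stand in for f(x|θ)):

```python
import math
import random

def simulate_toy(theta):
    """Draw x from the toy likelihood: an equal mixture of
    Normal(theta, 1/100) and Normal(theta, 1)."""
    sd = math.sqrt(1.0 / 100.0) if random.random() < 0.5 else 1.0
    return random.gauss(theta, sd)

def posterior_unnormalized(theta):
    """Unnormalized posterior density at theta for observed y = 0:
    (phi_{0,1/100}(theta) + phi_{0,1}(theta)) * pi(theta)."""
    if not -10.0 <= theta <= 10.0:   # uniform prior U[-10, 10]
        return 0.0
    def normal_pdf(x, var):
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return normal_pdf(theta, 1.0 / 100.0) + normal_pdf(theta, 1.0)

random.seed(0)
print(simulate_toy(0.0), posterior_unnormalized(0.0))
```

The sharp narrow component of the mixture is what makes this example a useful stress test: an algorithm must reach small tolerances to resolve the spike around θ = 0.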
We structure the comparisons on two indicators: the number of simulations performed during the application of the algorithms, and the L2 distance between the exact posterior density and the histogram of particle values obtained with the algorithms. This L2 distance is computed on the 300-tuple obtained by dividing the support [−10, 10] into 300 equally-sized bins. We chose the L2 distance to compare the sample to the true posterior because it is a well-known accuracy measure, easy to compute and a good indicator to compare different methods.

We chose N = 5000 particles and a target tolerance level equal to 0.01. For the PMC algorithm we use a decreasing sequence of tolerance levels from ε_1 = 2 down to ε_11 = 0.01. For the SMC algorithm, we use 3 different values for α: {0.9, 0.95, 0.99} and M = 1 as in [13]. For the RSMC algorithm we use α = 0.5 as in [12]. To explore our algorithm, we test 9 different values for α: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, and 4 different values for p_acc_min: {0.01, 0.05, 0.1, 0.2}. In each case, we run the algorithm 50 times, and compute the average and standard deviation of the two indicators: the total number of simulations and the L2 distance between the exact posterior density and the histogram of particle values. We used as transition kernel a normal distribution parameterized with twice the weighted variance of the previous sample, as in [10]. We report below the effects of varying α and p_acc_min on the performance of our algorithm, and compare it with the PMC, SMC and RSMC algorithms.

Particle duplication in SMC and RSMC

The number of distinct particles decreases during the course of the SMC algorithm whatever the value of α, as shown in Fig. 1a-b.
The oscillations of the number of distinct particles are caused by the resampling step in the SMC algorithm (see [13]), but they are not sufficient to counterbalance the overall decrease. This decrease deteriorates the posterior approximation, as shown in Fig. 2. For the RSMC algorithm, the initial oscillation of the number of particles is due to the initial value of R, set to 1, which quickly evolves towards a value ensuring a relatively constant number of distinct particles. This number of distinct particles is maintained at a reasonably high level (Fig. 1c), but this has a cost in terms of the number of required model runs (see Fig. 2). Note that the APMC and the PMC algorithms keep N distinct particles.

Figure 2: Posterior quality (L2) versus computing cost (number of simulations) averaged over 50 replicates. Vertical and horizontal bars represent the standard deviations among replicates. Algorithm parameters used for APMC: α in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} and p_acc_min in {0.01, 0.05, 0.1, 0.2}. Blue circles are used for p_acc_min = 0.01, orange triangles for p_acc_min = 0.05, green squares for p_acc_min = 0.1, and purple diamonds for p_acc_min = 0.2. PMC: red plain triangles for a sequence of tolerance levels from ε_1 = 2 down to ε_11 = 0.01. SMC: grey plain squares for α in {0.9, 0.95, 0.99} (from left to right), M = 1 and a target equal to 0.01. RSMC: brown plain diamond for α = 0.5 and a target equal to 0.01. Results obtained with a standard rejection-based ABC algorithm are depicted with black plain circles.
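The L2 indicator used in these comparisons might be sketched as follows (a Python sketch under our simplified reading of the procedure: 300 equal bins on [−10, 10], particle histogram versus the target density evaluated at bin centers):

```python
import math

def l2_distance(particles, density, support=(-10.0, 10.0), n_bins=300):
    """L2 distance between a normalized particle histogram and a target
    density, both discretized on n_bins equally-sized bins."""
    lo, hi = support
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for th in particles:
        k = min(int((th - lo) / width), n_bins - 1)
        counts[k] += 1
    hist = [c / (len(particles) * width) for c in counts]   # density scale
    target = [density(lo + (k + 0.5) * width) for k in range(n_bins)]
    return math.sqrt(sum((h - t) ** 2 for h, t in zip(hist, target)))

# Sanity check with made-up data: a sample spread evenly over the support
# should be close to a uniform density.
uniform = lambda th: 1.0 / 20.0
sample = [-10.0 + 20.0 * (i + 0.5) / 3000 for i in range(3000)]
print(round(l2_distance(sample, uniform), 6))
```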
Figure 3: (a) Boxplot of the criterion "squared L2 distance times the number of simulations" for the different ABC algorithms. APMC: for α in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} and p_acc_min = 0.01; SMC: for α in {0.9, 0.95, 0.99}, M = 1 and a target equal to 0.01; RSMC: for α = 0.5 and a target equal to 0.01; ABC: for a target equal to 0.01; PMC: for a sequence of tolerance levels from ε_1 = 2 to ε_11 = 0.01. (b) Criterion "squared L2 distance times the number of simulations" in the APMC algorithm for the different values of α and p_acc_min. Each cell depicts the average of the criterion over the 50 performed replicates of the APMC.

Influence of parameters on APMC

The values of α and p_acc_min have an impact on the studied indicators. We find that smaller α and p_acc_min improve the quality of the approximation (smaller L2 distance) and increase the total number of model runs, with p_acc_min having the largest effect (Fig. 2). With a large α, the tolerance levels decrease slowly and there are numerous steps before the algorithm stops. In this toy example, our simulations show that all explored sets of (α, p_acc_min) such that p_acc_min < 0.1 give good results for the criterion "number of simulations × L2²" (Fig. 3b). Large α provide slightly better results for small p_acc_min, while small α provide slightly better results for large p_acc_min (Fig. 3b). On this toy example it appears that intermediate values of α and p_acc_min (0.3 ≤ α ≤ 0.7 and 0.01 ≤ p_acc_min ≤ 0.05) present a good compromise between the number of model runs and the quality of the posterior approximation.
Table 1: SimVillages parameter descriptions

Parameter  Description                                             Range
θ_1        Average number of children per woman                    [0, 4]
θ_2        Probability to accept a new residence for a household   [0, 1]
θ_3        Probability to make a couple for two individuals        [0, 1]
θ_4        Probability for a couple to split up in a year          [0, 0.5]

Table 2: Summary statistic descriptions

Summary statistic  Description                            Measure of discrepancy
S_1                Number of inhabitants in 1999          L1 distance
S_2                Age distribution in 1999               χ² distance
S_3                Household type distribution in 1999    χ² distance
S_4                Net migration in 1999                  L1 distance
S_5                Number of inhabitants in 2006          L1 distance
S_6                Age distribution in 2006               χ² distance
S_7                Household type distribution in 2006    χ² distance
S_8                Net migration in 2006                  L1 distance

Comparing performances

Whatever the values of α and p_acc_min, the APMC algorithm always yields better results than the other three algorithms. It requires between 2 and 8 times fewer simulations to reach a given posterior quality L2 (Fig. 2). Furthermore, good approximate posterior distributions are obtained very quickly (Fig. 2). The compromise between simulation speed and convergence level can also be illustrated using the criterion "number of simulations × L2²" [17]. This criterion is smaller for the APMC algorithm (Fig. 3a).

Application to the model SimVillages

In this section, we check whether our algorithm still performs better than the PMC, the RSMC and the SMC when applied to an individual-based social model developed during the European project PRIMA [22]. The aim of the model is to simulate the effect of a scenario of job creation (or destruction) on the evolution of the population and activities in a network of municipalities.

Model and data

The model simulates the dynamics of virtual individuals living in 7 interconnected villages in a rural area of Auvergne (a region of central France).
A single run of the model SimVillages with seven rural municipalities takes about 1.4 seconds on a desktop machine (PC Intel 2.83 GHz). The dynamics include demographic change (aging, marriage, divorce, births and deaths), activity change (change of jobs, unemployment, inactivity, retirement), and moves from one municipality to another or outside of the set. The model also includes a dynamics of creation/destruction of jobs of proximity services, derived from the size of the local population. More details on the model can be found in [18]. The individuals (about 3000) are initially generated using the 1990 census data of the National Institute of Statistics and Economic Studies (INSEE); some of them are given a job type and a location for this job (in a municipality of the set or outside), and they are organised in households living in a municipality of the set. The model dynamics is mostly data driven, but four parameters cannot be directly derived from the available data. They are noted θ_p for 1 ≤ p ≤ 4 and described in Table 1.

We use our algorithm to identify the distribution of the four parameters for which the simulations, initialized with the 1990 census data, satisfy matching criteria with the data of the 1999 and 2006 censuses. The set of summary statistics {S_m}_{1≤m≤M} and the associated discrepancy measures ρ_m are described in Table 2. We note S_m the simulated summary statistics and S⁰_m the observed statistics. The eight summary statistics are normalized (variance equalization) and they are combined using the infinity norm (Eq. 8):

‖(ρ_m(S_m, S⁰_m))_{1≤m≤M}‖_∞ = sup_{1≤m≤M} ρ_m(S_m, S⁰_m)   (8)

We first generate a sample of length N from the prior U[a,b], where [a, b] is available for each parameter in Table 1, with a Latin hypercube [19], and we select the best N_α particles. To move the particles, we use as transition kernel a multivariate normal distribution parameterized with twice the weighted variance-covariance matrix of the previous sample [20].

Figure 4: Contour plots of the bivariate joint densities of θ_i and θ_j obtained with our algorithm, with α = 0.5 and p_acc_min = 0.01; (a) θ_1 and θ_2; (b) θ_1 and θ_3; (c) θ_1 and θ_4; (d) θ_2 and θ_3; (e) θ_2 and θ_4; (f) θ_3 and θ_4.

As in the previous section, we perform a parameter study and compare APMC with its three competitors. For APMC, α varies in {0.3, 0.5, 0.7} and p_acc_min in {0.01, 0.05, 0.1, 0.2}, and we set N_α = 5000 particles. For the PMC, SMC and RSMC we also set N = 5000 particles and a tolerance level target equal to 1.4. The tolerance value ε = 1.4 corresponds to the average final tolerance value we obtain with APMC for p_acc_min = 0.01. Note that otherwise this final tolerance is difficult to set properly, and a worse choice for this value would have led to worse performances of these algorithms. For the PMC algorithm, we use the decreasing sequence of tolerance levels {3, 2.5, 2, 1.7, 1.4}. For the SMC algorithm, we use 3 different values for the couple (α, M): {(0.9, 1), (0.99, 1), (0.9, 15)}. For the RSMC algorithm we use α = 0.5, as in [12]. For each algorithm and parameter setting, we perform 5 replicates. We approximated the posterior density (unknown in this case) with the original rejection-based ABC algorithm, starting with N = 10,000,000 and selecting the 7890 particles below the tolerance level ε = 1.4. To compute the L2 distance between posterior densities, we divided each parameter support into 4 equally-sized bins, leading to a grid of 4⁴ = 256 cells, and we computed on this grid the sum of the squared differences between histogram values.

Study of the APMC result

APMC yields a unimodal approximate posterior distribution for the model SimVillages (Fig. 4). Interestingly, parameters θ_1 and θ_4 are slightly correlated (Fig. 4c). This is logical since they have contradictory effects on the number of children in the population. What is less straightforward is that we are able to partly tease apart these two effects with the available census data, since we get a peak in the approximate posterior distribution instead of a ridge.

Influence of parameters on APMC

As for the toy example, we find that the intermediate values of (α, p_acc_min) that we used lead to similar results (Fig. 5c). In practice, we therefore recommend using α = 0.5 and p_acc_min between 0.01 and 0.05, depending on the desired level of convergence.

Comparing performances

APMC requires between 2 and 7 times fewer simulations than the other algorithms to reach a given posterior quality L2 (Fig. 5a). Again, the gain in simulation number is progressive during the course of the algorithm. The "number of simulations × L2²" criterion is again smaller for the APMC algorithm (Fig. 5b).
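The discrepancy used in this application, per-statistic distances normalized for variance equalization and combined with the infinity norm of Eq. 8, might be sketched as follows (a Python sketch; the example statistics and scales are made up):

```python
def combined_distance(sim_stats, obs_stats, discrepancies, scales):
    """Combine per-statistic discrepancies rho_m(S_m, S0_m), each divided
    by a normalizing scale, with the sup norm (Eq. 8): the overall
    distance is the worst normalized discrepancy."""
    return max(rho(s, o) / sc
               for rho, s, o, sc in zip(discrepancies, sim_stats,
                                        obs_stats, scales))

# The two discrepancy types of Table 2
l1 = lambda s, o: abs(s - o)
chi2 = lambda s, o: sum((si - oi) ** 2 / oi for si, oi in zip(s, o))

# Made-up statistics: number of inhabitants (L1) and an age
# distribution (chi-squared), with hypothetical normalizing scales.
sim = [2950, [0.30, 0.50, 0.20]]
obs = [3000, [0.25, 0.55, 0.20]]
d = combined_distance(sim, obs, [l1, chi2], scales=[100.0, 0.1])
print(round(d, 4))
```

Using the sup norm means a simulation is accepted at tolerance ε only if every normalized summary statistic is within ε of its observed value, which is a conservative way of combining heterogeneous criteria.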
Figure 5: (a) Posterior quality (L2) versus computing cost (number of simulations) averaged over 5 replicates. Vertical and horizontal bars represent the standard deviations among replicates. Algorithm parameters used for APMC: α in {0.3, 0.5, 0.7} and p_acc_min in {0.01, 0.05, 0.1, 0.2}. Blue circles are used for p_acc_min = 0.01, orange triangles for p_acc_min = 0.05, green squares for p_acc_min = 0.1, and purple diamonds for p_acc_min = 0.2. PMC: red plain triangles for a sequence of tolerance levels from ε_1 = 3 to ε_5 = 1.4. SMC: grey plain squares for (α, M) in {(0.9, 1), (0.99, 1)}, grey star for (α, M) = (0.9, 15) and a target equal to 1.4. RSMC: brown plain diamond for α = 0.5 and a target equal to 1.4. Results obtained with a standard rejection-based ABC algorithm are depicted with black plain circles. (b) Boxplot of the criterion "squared L2 distance times the number of simulations" for the different algorithms. APMC: for α in {0.3, 0.5, 0.7} and p_acc_min = 0.01; SMC: for (α, M) in {(0.9, 1), (0.99, 1), (0.9, 15)} and a target equal to 1.4; RSMC: for α = 0.5 and a target equal to 1.4; ABC: for a target equal to 1.4; PMC: for a sequence of tolerance levels from ε_1 = 3 to ε_5 = 1.4. (c) Criterion "squared L2 distance times the number of simulations" in the APMC algorithm for the different values of α and p_acc_min. Each cell depicts the average of the criterion over the 5 performed replicates of the APMC.

Discussion

The good performances of APMC should of course be confirmed on other examples.
Nevertheless, we argue that they are due to the main assets of our approach:

• We choose an appropriate reweighting process instead of an MCMC kernel, which corrects the sampling bias without duplicating particles;

• We define an easy-to-interpret stopping criterion that automatically determines the number of sequential steps.

Therefore, we can have some confidence in the good performances of APMC on other examples. In the future, it would be interesting to evaluate this algorithm on models involving a larger number of parameters and/or multi-modal posterior distributions. Moreover, APMC could benefit from other improvements, in particular by performing a semi-automatic selection of informative summary statistics after the first ABC step [4, 5] and by using local regressions for post-processing the final posterior distribution [2, 3]. We did not perform such combinations in the present contribution, so that our algorithm is directly comparable with the three other sequential algorithms we looked at. However, they would be straightforward, because the different improvements concern different steps of the ABC procedure.

Acknowledgements

This publication has been funded by the Prototypical policy impacts on multifunctional activities in rural municipalities collaborative project, European Union 7th Framework Programme (ENV 2007-1), contract no. 212345. The work of the first author has been funded by the Auvergne region.

[1] M. A. Beaumont. Approximate Bayesian computation in evolution and ecology, volume 41 of Annual Review of Ecology, Evolution, and Systematics. 2010.
[2] M. A. Beaumont, W. Zhang, and D. J. Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035, 2002.
[3] M. G. B. Blum and O. François. Non-linear regression models for approximate Bayesian computation. Statistics and Computing, 20(1):63–73, 2010.
[4] P. Joyce and P. Marjoram. Approximately sufficient statistics and Bayesian computation. Statistical Applications in Genetics and Molecular Biology, 7(1), 2008.
[5] P. Fearnhead and D. Prangle. Constructing summary statistics for approximate Bayesian computation: Semi-automatic ABC. Technical Report 1004.1112, arXiv.org, 2011.
[6] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences of the United States of America, 100(26):15324–15328, 2003.
[7] D. Wegmann, C. Leuenberger, and L. Excoffier. Efficient approximate Bayesian computation coupled with Markov chain Monte Carlo without likelihood. Genetics, 182(4):1207–1218, 2009.
[8] S. A. Sisson, Y. Fan, and M. M. Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences of the United States of America, 104(6):1760–1765, 2007.
[9] T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M. P. H. Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6:187, 2009.
[10] M. A. Beaumont, J.-M. Cornuet, J.-M. Marin, and C. P. Robert. Adaptive approximate Bayesian computation. Biometrika, 96(4):983–990, 2009.
[11] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 68(3):411–436, 2006.
[12] C. C. Drovandi and A. N. Pettitt. Estimation of parameters for macroparasite population evolution using approximate Bayesian computation. Biometrics, 67(1):225–233, 2011.
[13] P. Del Moral, A. Doucet, and A. Jasra. An adaptive sequential Monte Carlo method for approximate Bayesian computation. Statistics and Computing, 22(5):1009–1020, 2012.
[14] F. Jabot, T. Faure, and N. Dumoulin. EasyABC: performing efficient approximate Bayesian computation sampling schemes using R. Methods in Ecology and Evolution, 2013.
[15] D. Wegmann, C. Leuenberger, S. Neuenschwander, and L. Excoffier. ABCtoolbox: a versatile toolkit for approximate Bayesian computations. BMC Bioinformatics, 11(1):116, 2010.
[16] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2011. ISBN 3-900051-07-0.
[17] P. W. Glynn and W. Whitt. The asymptotic efficiency of simulation estimators. Operations Research, 40(3):505–520, 1992.
[18] S. Huet and G. Deffuant. Common framework for the microsimulation model in PRIMA project. Technical report, Cemagref LISC, 2011.
[19] R. Carnell. lhs: Latin hypercube samples. R package version 0.5, 2009.
[20] S. Filippi, C. Barnes, and M. P. H. Stumpf. On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo. (arXiv:1106.6280v4), 2012.
[21] http://motive.cemagref.fr/people/maxime.lenormand/script_r_toyex
[22] PRototypical policy Impacts on Multifunctional Activities in rural municipalities - EU 7th Framework Research Programme; 2008-2011; https://prima.cemagref.fr/the-project

Appendix A: Description of the algorithms

Algorithm 1: Likelihood-free rejection sampler (ABC)

Given N the number of particles
for i = 1 to N do
    repeat
        Generate θ* ∼ π(θ)
        Simulate x ∼ f(x | θ*)
    until ρ(S(x), S(y)) < ε
    Set θ_i = θ*
end for

Algorithm 2: Population Monte Carlo Approximate Bayesian Computation (PMC)

Given N the number of particles and a decreasing sequence of tolerance levels ε_1 ≥ ... ≥ ε_T,
For t = 1:
for i = 1 to N do
    repeat
        Simulate θ_i^(1) ∼ π(θ) and x ∼ f(x | θ_i^(1))
    until ρ(S(x), S(y)) < ε_1
    Set w_i^(1) = 1/N
end for
Take σ_2² as twice the weighted empirical variance of (θ_i^(1))_{1≤i≤N}
for t = 2 to T do
    for i = 1 to N do
        repeat
            Sample θ_i* from the θ_j^(t−1) with probabilities w_j^(t−1)
            Generate θ_i^(t) | θ_i* ∼ N(θ_i*, σ_t²) and x ∼ f(x | θ_i^(t))
        until ρ(S(x), S(y)) < ε_t
        Set w_i^(t) ∝ π(θ_i^(t)) / ∑_{j=1}^N w_j^(t−1) σ_t^(−1) φ(σ_t^(−1) (θ_i^(t) − θ_j^(t−1)))
    end for
    Take σ_{t+1}² as twice the weighted empirical variance of (θ_i^(t))_{1≤i≤N}
end for
Where φ(x) = (1/√(2π)) e^(−x²/2)

Algorithm 3: Sequential Monte Carlo Approximate Bayesian Computation with Replenishment (RSMC)

Given N, ε_1, ε_T, c, α ∈ [0, 1] and N_α = ⌊αN⌋,
for i = 1 to N do
    repeat
        Simulate θ_i ∼ π(θ) and x ∼ f(x | θ_i)
        ρ_i = ρ(S(x), S(y))
    until ρ_i ≤ ε_1
end for
Sort (θ_i, ρ_i) by ρ_i
Set ε_MAX = ρ_N
while ε_MAX > ε_T do
    Remove the N_α particles with largest ρ
    Set ε_NEXT = ρ_{N−N_α}
    Set i_acc = 0
    Compute the parameters of the MCMC proposal q(·, ·) with the N − N_α particles
    for j = 1 to N_α do
        Draw θ_{N−N_α+j} from (θ_i)_{1≤i≤N−N_α}
        for k = 1 to R do
            Generate θ* ∼ q(θ*, θ_{N−N_α+j}) and x* ∼ f(x* | θ*)
            Generate u ∼ U[0, 1]
            if u ≤ 1 ∧ [π(θ*) q(θ_{N−N_α+j}, θ*)] / [π(θ_{N−N_α+j}) q(θ*, θ_{N−N_α+j})] · 1{ρ(S(x*), S(y)) ≤ ε_NEXT} then
                Set θ_{N−N_α+j} = θ*
                Set ρ_{N−N_α+j} = ρ(S(x*), S(y))
                i_acc ← i_acc + 1
            end if
        end for
    end for
    Set p_acc = i_acc / (R N_α)
    Set R = log(c) / log(1 − p_acc)
    Sort (θ_i, ρ_i) by ρ_i and set ε_MAX = ρ_N
end while

Algorithm 4: Adaptive Sequential Monte Carlo Approximate Bayesian Computation (SMC)

Given N, M, α ∈ [0, 1], ε_0 = ∞, a target tolerance ε, and N_T,
For t = 0:
for i = 1 to N do
    Simulate θ_i^(0) ∼ π(θ)
    for k = 1 to M do
        Simulate X_{(i,k)}^(0) ∼ f(· | θ_i^(0))
    end for
    Set W_i^(0) = 1/N
end for
We have ESS((W_i^(0)), ε_0) = N, where ESS((W_i), ε) = (∑_{i=1}^N (W_i)²)^(−1)
Set t = 1
while ε_{t−1} > ε do
    Determine ε_t by solving ESS((W_i^(t)), ε_t) = α ESS((W_i^(t−1)), ε_{t−1}), where
        W_i^(t) ∝ W_i^(t−1) [∑_{k=1}^M 1{X_{(i,k)}^(t−1) ∈ A_{ε_t, y}}] / [∑_{k=1}^M 1{X_{(i,k)}^(t−1) ∈ A_{ε_{t−1}, y}}]
        and A_{ε, y} = {x | ρ(S(x), S(y)) < ε}
    if ε_t < ε then set ε_t = ε end if
    if ESS((W_i^(t)), ε_t) < N_T then
        for i = 1 to N do
            Resample (θ_{(i)}^(t−1), X_{(i, 1:M)}^(t−1)) from the (θ_{(j)}^(t−1), X_{(j, 1:M)}^(t−1)) with probabilities W_j^(t), 1 ≤ j ≤ N
            Set W_i^(t) = 1/N
        end for
    end if
    for i = 1 to N do
        if W_i^(t) > 0 then
            Generate θ* ∼ K_t(θ* | θ_{(i)}^(t−1))
            for k = 1 to M do
                Simulate X_{(*, k)} ∼ f(· | θ*)
            end for
            Generate u ∼ U[0, 1]
            if u ≤ 1 ∧ [∑_{k=1}^M 1{X_{(*, k)} ∈ A_{ε_t, y}} π(θ*) K_t(θ_{(i)}^(t−1) | θ*)] / [∑_{k=1}^M 1{X_{(i,k)}^(t−1) ∈ A_{ε_t, y}} π(θ_{(i)}^(t−1)) K_t(θ* | θ_{(i)}^(t−1))] then
                Set (θ_{(i)}^(t), X_{(i, 1:M)}^(t)) = (θ*, X_{(*, 1:M)})
            else
                Set (θ_{(i)}^(t), X_{(i, 1:M)}^(t)) = (θ_{(i)}^(t−1), X_{(i, 1:M)}^(t−1))
            end if
        end if
    end for
    t ← t + 1
end while

Algorithm 5: Adaptive Population Monte Carlo Approximate Bayesian Computation (APMC)

Given N the number of particles, N_α = ⌊αN⌋ the number of particles to keep at each iteration among the N particles (α ∈ [0, 1]) and p_acc_min the minimal acceptance rate.
For t = 1:
for i = 1 to N do
    Simulate θ_i^(0) ∼ π(θ) and x ∼ f(x | θ_i^(0))
    Set ρ_i^(0) = ρ(S(x), S(y))
    Set w_i^(0) = 1
end for
Let ε_1 = Q_{ρ^(0)}(α), the first α-quantile of ρ^(0), where ρ^(0) = {ρ_i^(0)}_{1≤i≤N}
Let {(θ_i^(1), w_i^(1), ρ_i^(1))} = {(θ_i^(0), w_i^(0), ρ_i^(0)) | ρ_i^(0) ≤ ε_1, 1 ≤ i ≤ N}
Take σ_1² as twice the weighted empirical variance of {(θ_i^(1), w_i^(1))}_{1≤i≤N_α}
Set p_acc = 1
t ← t + 1
while p_acc > p_acc_min do
    for i = N_α + 1 to N do
        Pick θ_i* from the θ_j^(t−1) with probability w_j^(t−1) / ∑_{k=1}^{N_α} w_k^(t−1), 1 ≤ j ≤ N_α
        Generate θ_i^(t−1) | θ_i* ∼ N(θ_i*, σ_{t−1}²) and x ∼ f(x | θ_i^(t−1))
        Set ρ_i^(t−1) = ρ(S(x), S(y))
        Set w_i^(t−1) = π(θ_i^(t−1)) / ∑_{j=1}^{N_α} (w_j^(t−1) / ∑_{k=1}^{N_α} w_k^(t−1)) σ_{t−1}^(−1) φ(σ_{t−1}^(−1) (θ_i^(t−1) − θ_j^(t−1)))
    end for
    Set p_acc = (1 / (N − N_α)) ∑_{k=N_α+1}^{N} 1{ρ_k^(t−1) < ε_{t−1}}
    Let ε_t = Q_{ρ^(t−1)}(α), where ρ^(t−1) = {ρ_i^(t−1)}_{1≤i≤N}
    Let {(θ_i^(t), w_i^(t), ρ_i^(t))} = {(θ_i^(t−1), w_i^(t−1), ρ_i^(t−1)) | ρ_i^(t−1) ≤ ε_t, 1 ≤ i ≤ N}
    Take σ_t² as twice the weighted empirical variance of {(θ_i^(t), w_i^(t))}_{1≤i≤N_α}
    t ← t + 1
end while
Where, for u ∈ [0, 1] and X = {x_1, ..., x_n}, Q_X(u) = inf{x ∈ X | F_X(x) ≥ u} with F_X(x) = (1/n) ∑_{k=1}^n 1{x_k ≤ x}, and φ(x) = (1/√(2π)) e^(−x²/2)

Appendix B: Proof that the algorithm stops

We know that there exists ε_∞ ≥ 0 such that ε_t → ε_∞ as t → +∞ because, by construction of the algorithm, (ε_t) is a positive decreasing sequence bounded below by 0. For each θ ∈ Θ, we consider the distance (ρ(x, y) | θ) as a random variable ρ(θ).
Let f_ρ(θ) be the probability density function of ρ(θ). The probability P[ρ(θ) ≥ ε_t] that the drawn distance associated with parameter θ is higher than the current tolerance ε_t satisfies:

    P[ρ(θ) ≥ ε_t] = 1 − P[ρ(θ) < ε_t] = 1 − ∫_{ε_∞}^{ε_t} f_ρ(θ)(x) dx

We define:

    P_max = sup_{θ ∈ Θ} sup_{x ∈ R⁺} f_ρ(θ)(x)

We have:

    P[ρ(θ) ≥ ε_t] ≥ 1 − P_max (ε_t − ε_∞)

The N − N_α particles are independent and identically distributed from π_{t+1}, the density defined by the algorithm; hence the probability P[p_acc(t+1) = 0] that no particle is accepted at step t + 1 is such that:

    P[p_acc(t+1) = 0] ≥ (1 − P_max (ε_t − ε_∞))^(N−N_α)

If P_max < +∞, because ε_t − ε_∞ → 0 as t → +∞, we have:

    P[p_acc(t+1) = 0] → 1 as t → +∞

We can conclude that p_acc(t) converges in probability towards 0 if P_max < +∞. This ensures that the algorithm stops, whatever the chosen value of p_acc_min.
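Algorithm 1 of Appendix A translates almost line for line into code. The sketch below is ours, not the authors' implementation: the function name, its argument interface, and the Gaussian toy model used to exercise it are all hypothetical stand-ins for the prior π(θ), the simulator f(x | θ), and the summary statistic S.

```python
import numpy as np

def abc_rejection(n_particles, prior_sample, simulate, summary, distance, y_obs, eps):
    """Likelihood-free rejection sampler (Algorithm 1, Appendix A).

    Repeatedly draws theta* from the prior, simulates x ~ f(x | theta*),
    and keeps theta* whenever rho(S(x), S(y)) < eps.
    """
    s_obs = summary(y_obs)
    accepted = []
    while len(accepted) < n_particles:
        theta = prior_sample()           # theta* ~ pi(theta)
        x = simulate(theta)              # x ~ f(x | theta*)
        if distance(summary(x), s_obs) < eps:
            accepted.append(theta)       # set theta_i = theta*
    return np.array(accepted)

# Hypothetical toy model: 20 observations y ~ N(theta, 1),
# uniform prior on [-5, 5], sample mean as summary statistic.
rng = np.random.default_rng(42)
posterior = abc_rejection(
    n_particles=200,
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    simulate=lambda t: rng.normal(t, 1.0, size=20),
    summary=lambda x: float(np.mean(x)),
    distance=lambda a, b: abs(a - b),
    y_obs=rng.normal(2.0, 1.0, size=20),
    eps=0.3,
)
```

The accepted particles approximate the posterior of θ given the observed sample mean; shrinking `eps` sharpens the approximation at the cost of the exploding simulation count that motivates the sequential algorithms above.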