A variant of the Recoil Growth algorithm to generate multi-polymer systems

Reading time: 5 minutes
...

📝 Original Info

  • Title: A variant of the Recoil Growth algorithm to generate multi-polymer systems
  • ArXiv ID: 0708.1116
  • Date: 2009-07-02
  • Authors: Not specified in the provided paper metadata.

📝 Abstract

The Recoil Growth algorithm, proposed in 1999 by Consta et al., is one of the most efficient algorithms available in the literature to sample from a multi-polymer system. Such problems are closely related to the generation of self-avoiding paths. In this paper, we study a variant of the original Recoil Growth algorithm, where we constrain the generation of a new polymer to take place on a specific class of graphs. This makes it possible to finely trade off computational cost against success rate. We moreover give a simple proof for a lower bound on the irreducibility of this new algorithm, which applies to the original algorithm as well.

📄 Full Content

Designing an algorithm that efficiently samples from a multi-polymer system according to a given probability distribution is the focus of much research activity in chemical physics [2,10]. Since the state of a multi-polymer system is a collection of self-avoiding paths that do not overlap each other, this problem is closely related to the classical computer-science problem of generating self-avoiding paths. The state space of such systems is huge, especially in high dimension or when the paths are long, which makes these problems hard. The main challenge is to define an algorithm that both keeps the computational cost low and converges rapidly to the sampling distribution. For multi-polymer systems, various approaches have been suggested to tackle this problem, among which is the Recoil Growth (RG) algorithm, considered one of the most efficient algorithms currently available in the literature. For a precise description of this algorithm, the reader is referred to [3]¹. In the present paper (which is an extended version of [12]), we define and analyze a variant of the RG algorithm, to which we refer as the RG* algorithm. The main ideas of both the RG and the RG* algorithms are to be found in two important classes of algorithms, which we briefly describe below: the Metropolis algorithm [9] and the auxiliary variable method [1,7].

¹ The present paper is self-contained and does not require prior knowledge of the RG algorithm.

The Metropolis algorithm. The Metropolis algorithm is a very generic algorithm that approximately samples according to a probability distribution π defined on some finite state space X. It takes as additional input a Markov chain P, and constructs a reversible Markov chain with π as stationary distribution. In the long run, we can therefore sample from a distribution that is arbitrarily close to π. The implicit idea is that it is hard to sample from π, whereas it is easy to simulate P, for instance by local modifications of a state. The new Markov chain is built through the following rejection procedure, in which we use the notation A(x, y) = π(x)P(x, y). Starting from x ∈ X, choose as candidate for the next state some y ∈ X with probability P(x, y). If A(y, x) ≥ A(x, y), then go to y. Otherwise, accept the move with probability A(y, x)/A(x, y): if accepted, go to y, and if rejected, stay at x. By doing so, the probability of going from x to y ≠ x is P(x, y) · min{1, A(y, x)/A(x, y)}, and it is easy to see that this Markov chain is reversible with respect to π. An important remark is that for the rejection procedure, only the ratio A(y, x)/A(x, y) matters. In particular, it is sufficient to know π up to its normalizing constant, which is very useful in many concrete applications. For instance, one may wish to sample complex combinatorial objects uniformly: in this case, one does not need to know how many such objects there are.
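As a concrete illustration, here is a minimal Python sketch of this rejection procedure on a toy example. The target π, the proposal chain, and all names are illustrative assumptions of ours, not taken from the paper:

```python
import random

def metropolis_step(x, pi, proposal_sample, proposal_prob):
    """One step of the rejection procedure described above.

    pi(x)               -- unnormalized target weight of state x
    proposal_sample(x)  -- draws a candidate y ~ P(x, .)
    proposal_prob(x, y) -- evaluates P(x, y)
    """
    y = proposal_sample(x)
    a_xy = pi(x) * proposal_prob(x, y)   # A(x, y) = pi(x) P(x, y)
    a_yx = pi(y) * proposal_prob(y, x)   # A(y, x) = pi(y) P(y, x)
    # Go to y with probability min(1, A(y, x)/A(x, y)); only the ratio matters,
    # so pi need only be known up to its normalizing constant.
    if a_yx >= a_xy or random.random() < a_yx / a_xy:
        return y
    return x

# Toy usage: symmetric +/-1 walk on the cycle {0, ..., 9}, target pi(x) ∝ x + 1.
pi = lambda x: x + 1
proposal_sample = lambda x: (x + random.choice([-1, 1])) % 10
proposal_prob = lambda x, y: 0.5 if (y - x) % 10 in (1, 9) else 0.0

x, counts = 0, [0] * 10
for _ in range(200_000):
    x = metropolis_step(x, pi, proposal_sample, proposal_prob)
    counts[x] += 1   # empirical frequencies converge to pi(x)/55
```

Note that `pi` above is never normalized: the acceptance test only ever uses the ratio A(y, x)/A(x, y), exactly as remarked in the text.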

From a more global perspective, the Markov chain P can be seen as an exogenous process that, at each step of the algorithm, proposes a candidate for the next state. In general, the stationary distribution of P is not π, and the rejection procedure compensates for this bias: by suitably rejecting or accepting the candidate, the new Markov chain can be shown to be reversible with π as stationary distribution. Reversibility is an important feature here, because it makes it very easy to show that π is indeed the stationary distribution. Otherwise, since the objects under consideration are usually fairly complex, proving stationarity could be very challenging.
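The argument behind this last remark is the standard one-line computation (writing P′ for the constructed chain; the notation is ours, not the paper's): reversibility means π(x)P′(x, y) = π(y)P′(y, x) for all pairs, and summing over x gives

```latex
\sum_{x \in X} \pi(x)\, P'(x, y)
  \;=\; \sum_{x \in X} \pi(y)\, P'(y, x)
  \;=\; \pi(y) \sum_{x \in X} P'(y, x)
  \;=\; \pi(y),
```

that is, π is stationary for P′.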

The auxiliary variables method. The auxiliary variable method is a conceptually easy extension of the Metropolis algorithm: instead of sampling from X according to π, it is sometimes easier to sample from a pair X × Y according to some distribution π(•, •) which has π as marginal. In such cases, the idea is to apply the Metropolis algorithm to π, and then to recover π by summation. The new variable y ∈ Y is referred to as the “auxiliary variable”. As we will see, the concept of underlying graph in the RG * algorithm is very close to an auxiliary variable; underlying graphs are central in the RG * algorithm, and are introduced not because they make the sampling easier or more natural, but because they reduce the computational cost.
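A minimal illustration of the auxiliary-variable idea, on a toy example of ours (this is not the construction used in the paper): the target on x ∈ {0, 1, 2} is π(x) ∝ 2^x, and we extend it with an auxiliary y so that the joint distribution becomes uniform and therefore trivial to evaluate; discarding y recovers π.

```python
import random

# Joint target: pi_tilde(x, y) is uniform over the pairs with 0 <= y < 2**x.
# Its marginal over y is proportional to the number of valid y's, i.e. 2**x = pi(x).
def pi_tilde(state):
    x, y = state
    return 1.0 if 0 <= y < 2 ** x else 0.0

def propose(state):
    # Symmetric proposal: re-draw (x, y) uniformly at random.
    return (random.randrange(3), random.randrange(4))

state, counts = (0, 0), [0, 0, 0]
for _ in range(100_000):
    cand = propose(state)
    # Metropolis acceptance for a symmetric proposal: ratio of joint weights.
    if pi_tilde(state) == 0 or random.random() < pi_tilde(cand) / pi_tilde(state):
        state = cand
    counts[state[0]] += 1   # keep only x: its frequencies converge to 1/7, 2/7, 4/7
```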

The RG* algorithm. The state space of the RG* algorithm is the set of all possible configurations of non-intersecting polymers in some dimension d. More precisely, there is an underlying d-dimensional grid, endowed with a cyclic structure on the edges (periodic boundaries), on which all the objects are defined. A polymer is then just an undirected self-avoiding path of a given length, a path being a sequence of neighboring vertices on the underlying grid. The fact that polymers cannot intersect each other, and that they cannot intersect themselves, comes from a very natural physical constraint, namely that a point in space cannot be occupied by two different particles.
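To fix ideas, here is a small Python sketch of this state representation only: a periodic d-dimensional grid, polymers as self-avoiding paths, and the non-overlap constraint between polymers. The naive growth routine below (blind restart on dead ends) is our own illustrative placeholder and is not the RG* algorithm, which handles dead ends far more efficiently via recoil steps.

```python
import random

def neighbors(v, L, d):
    """Neighbors of vertex v on the d-dimensional grid {0,...,L-1}^d with cyclic edges."""
    out = []
    for i in range(d):
        for step in (-1, 1):
            w = list(v)
            w[i] = (w[i] + step) % L   # periodic boundary in direction i
            out.append(tuple(w))
    return out

def grow_polymer(length, occupied, L, d, tries=1000):
    """Try to grow one self-avoiding path of `length` vertices avoiding `occupied` sites.
    Returns None on failure; RG/RG* replace this blind restart with recoil moves."""
    for _ in range(tries):
        v = tuple(random.randrange(L) for _ in range(d))
        if v in occupied:
            continue
        path, used = [v], {v}
        while len(path) < length:
            free = [w for w in neighbors(path[-1], L, d)
                    if w not in used and w not in occupied]
            if not free:
                break                      # dead end: give up and restart
            w = random.choice(free)
            path.append(w)
            used.add(w)
        if len(path) == length:
            return path
    return None

# Example: two 10-vertex polymers on a 3-dimensional 20^3 periodic grid.
occupied, system = set(), []
for _ in range(2):
    p = grow_polymer(10, occupied, L=20, d=3)
    if p is not None:
        system.append(p)
        occupied.update(p)   # enforce non-intersection between polymers
```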

Reference

This content is AI-processed based on open access ArXiv data.
