Vertex Models and Quantum Spin Systems:

This paper studies cluster algorithms for vertex models and quantum spin systems. Motivated by the strengths and limitations of the Swendsen-Wang algorithm, it introduces a new type of cluster algorithm, the loop algorithm, and presents results from simulating the 6-vertex model with it.

The Swendsen-Wang algorithm was the first cluster algorithm, originally devised for the Ising model. Rather than performing only local updates, it carries out nonlocal updates by flipping whole clusters of spins at once; however, its spin clusters do not carry over directly to models with constraints, such as vertex models.

The paper compares the Swendsen-Wang approach with the loop algorithm and presents results of applying the loop algorithm to the 6-vertex model (specifically the F model). Because the loop algorithm performs nonlocal updates by flipping closed loops of bond arrows, critical slowing down is sharply reduced and far fewer updates are needed to decorrelate the system.

The loop algorithm is thus a very useful tool for simulating vertex models, and the paper demonstrates its applicability to a range of physical systems, including the 6-vertex model and one- and two-dimensional quantum spin systems.

English summary begins:

The paper presents a new cluster algorithm for the simulation of vertex models and quantum spin systems. The authors introduce the loop algorithm as an alternative to Swendsen-Wang-type cluster algorithms and provide numerical results for its application to the 6-vertex model.

The Swendsen-Wang algorithm was the first cluster algorithm, devised for the Ising model. Its clusters are connected regions of spins, and it does not directly handle models with local constraints such as vertex models.

In this paper, the authors place both algorithms in the general Kandel-Domany cluster framework and present numerical results for the loop algorithm applied to the 6-vertex model (the F model). They observe that the loop algorithm is far more efficient than local Metropolis updates, because it performs nonlocal updates by flipping closed loops of bonds while satisfying the vertex constraints automatically.

The loop algorithm is a useful tool for the simulation of vertex models, and the authors provide evidence of its applicability to various physical systems, including the 6-vertex model and one- and two-dimensional quantum spin systems.

English summary ends

Vertex Models and Quantum Spin Systems:

arXiv:cond-mat/9305019v1  17 May 1993        FSU-SCRI-93C-65        May 1993

Vertex Models and Quantum Spin Systems: A nonlocal approach*

Hans Gerd Evertz (1) and Mihai Marcu (2)

(1) Supercomputer Computations Research Institute, Florida State University, Tallahassee, FL 32306, evertz@scri.fsu.edu
(2) School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 69978 Tel Aviv, Israel, marcu@taunivm.bitnet

Abstract

Within a general cluster framework, we discuss the loop algorithm, a new type of cluster algorithm that reduces critical slowing down in vertex models and in quantum spin systems. We cover the example of the 6-vertex model in detail. For the F model, we present numerical results that demonstrate the effectiveness of the loop algorithm. We discuss how to modify the original algorithm for some more complicated situations, especially for quantum spin systems in one and two dimensions.

* To appear in Computer Simulations in Condensed Matter Physics VI, ed. D.P. Landau, K.K. Mon, and H.B. Schüttler, Springer Verlag, Heidelberg, Berlin, 1993.

1 Introduction

For Monte Carlo simulations of many interesting physical situations, critical slowing down is a major problem. Standard simulation algorithms employ local update procedures like the Metropolis and the heat bath algorithm. With local updates, "information" travels slowly, like a random walk. If the relevant length scale is the correlation length ξ, the number of updates necessary to decorrelate large regions, i.e. the autocorrelation time τ, grows like

    τ ∝ ξ^z ,    (1)

where z ≈ 2 for local updates, as suggested by the random walk analogy.

The way out of this problem is to employ nonlocal updates. The challenge is to devise algorithms that are nonlocal and still satisfy detailed balance. Multigrid algorithms are one possible approach. In this paper we shall focus on cluster algorithms; for a nonexhaustive selection of references see [1, 2, 3, 4, 5, 6].

The first cluster algorithm was invented by Swendsen and Wang (SW) [1] for the case of the Ising spin model. The basic idea is to perform moves that significantly change the Peierls contours characterizing a configuration. As the size of Peierls contours is, typically, anything up to the order of the correlation length, critical slowing down may be eliminated completely or at least partially by this approach. The SW algorithm has been modified and generalized for other spin systems, mostly with two-spin interactions [2, 3, 5]. Notice that for these systems clusters are connected regions of spins, with the same dimensionality as the underlying lattice. A few generalizations along different lines were also done [4, 6].

Recently [7, 8, 9] we introduced cluster algorithms for vertex models and quantum spin systems, which are the first cluster algorithms for models with constraints. While [7] is an adaptation of the valleys-to-mountains-reflections (VMR) algorithm [5], originally devised for solid-on-solid models, the loop algorithm introduced in [8, 9] does not resemble any existing scheme. In this paper we discuss the loop algorithm in detail.

In vertex models [10] the dynamical variables are localized on bonds, and the interaction is between all bonds meeting at a vertex. Furthermore there are constraints on the possible bond variable values around a vertex. Our scheme is devised such as to take into account the constraints automatically, and to allow a simple way to construct the clusters. The clusters here are not connected regions of spins, but instead closed paths (loops) of bonds.

In what follows we first comment on the SW algorithm for spin systems. Then we discuss the general cluster formalism of Kandel and Domany [4], treating the SW algorithm as an example. Next we define the 6-vertex model, and, as a special case, the F model. We then introduce the loop algorithm, and show how to formulate it for the complete 6-vertex model. The optimization of the algorithm is discussed in a separate section. For the special case of the F model, we describe how the algorithm particularizes, and then we present our very successful numerical tests of the loop algorithm. We also show how to apply the loop algorithm to simulations of one and two dimensional quantum spin systems [11], like e.g. the xxz quantum spin chain, and the spin 1/2 Heisenberg antiferromagnet and ferromagnet in two dimensions. Finally we comment on further generalizations and applications.

2 Some comments on the Swendsen-Wang algorithm

Here we present a somewhat unusual viewpoint on the SW algorithm, in order to better understand what is new in the loop algorithm. Let us look at a spin system, like the Ising model, with variables s_x living on sites x, and a nearest neighbor Hamiltonian H(s_x, s_y). The partition function of the model to be simulated is Z = Σ_s exp(−Σ_<xy> H(s_x, s_y)), where <xy> is a pair of sites.

Consider update proposals (flips) s_x → s'_x, such that H(s_x, s_y) = H(s'_x, s'_y). Then it follows that updating at the same time all spins in some "cluster" of spins, we will only change the value of the Hamiltonian at the boundary of that cluster, not inside it.

In order to satisfy detailed balance, we have to choose clusters with an appropriate probability. The SW algorithm amounts to making a Metropolis decision for each bond <xy>, namely whether to accept the change in H from H(s_x, s_y) to H(s'_x, s_y). If accepted, a bond is called "deleted", otherwise it is called "frozen". Clusters are then sets of sites connected by frozen bonds. Note that if deleted, a bond may be at the boundary of a cluster, but need not.

Finally, an update is performed by finding all the clusters in a given configuration and then flipping each cluster with 50% probability [1]. In Wolff's single cluster variant [3, 2], which is dynamically more favourable, we construct one cluster starting from a randomly chosen initial site, and then flip it with 100% probability.

A technical remark: the Swendsen-Wang algorithm can be vectorized and parallelized [12]. The difficult task is to identify the clusters, which is the same task as e.g. image component labeling. For the single cluster variant, vectorization is the most efficient approach [13].
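To make the above procedure concrete, here is a minimal Python sketch (ours, not from the paper) of one Swendsen-Wang update for the 2D Ising model with coupling J: bonds between equal spins are frozen with probability 1 − exp(−2J), clusters of sites connected by frozen bonds are identified with a small union-find structure, and every cluster is flipped with probability 1/2.

```python
import numpy as np

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def sw_update(spins, J, rng):
    """One Swendsen-Wang update for the 2D Ising model (periodic boundaries).

    Bonds between equal spins are 'frozen' with probability 1 - exp(-2J);
    clusters of sites connected by frozen bonds are flipped with probability 1/2.
    """
    L = spins.shape[0]
    parent = np.arange(L * L)            # union-find forest over the L*L sites
    p_freeze = 1.0 - np.exp(-2.0 * J)

    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra

    for x in range(L):
        for y in range(L):
            for nx, ny in [((x + 1) % L, y), (x, (y + 1) % L)]:   # right and up neighbours
                if spins[x, y] == spins[nx, ny] and rng.random() < p_freeze:
                    union(x * L + y, nx * L + ny)                 # freeze this bond

    flip = {}                            # one 50% coin per cluster root
    for x in range(L):
        for y in range(L):
            root = find(parent, x * L + y)
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                spins[x, y] *= -1

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
sw_update(spins, J=0.44, rng=rng)        # J here plays the role of beta*J
```

The Wolff variant mentioned above would instead grow a single cluster from a random seed site and always flip it.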

3 The Kandel-Domany framework

Cluster algorithms are conveniently described in the general framework of Kandel and Domany [4]. Let us consider the partition function

    Z = Σ_u exp(−V(u)) ,    (2)

where u are the configurations to be summed over. We shall call the function V the "interaction". Let us also define a set of new interactions V^i (the index i numbers the modified interactions). Assume that during a Monte Carlo simulation we arrived at a given configuration u. We choose a new configuration in a two step procedure. The first step is to replace V with one of the V^i. For a given i, V^i is chosen with probability p_i(u). The p_i(u) satisfy:

    p_i(u) = exp(V(u) − V^i(u) + c_i) ,    Σ_i p_i(u) = 1 ,    (3)

where the c_i are constants independent of u. The second step is to update the configuration by employing a procedure that satisfies detailed balance for the chosen V^i. The combined procedure satisfies detailed balance for the original interaction V [4].

In many cases, the interaction V is a sum over local functions H_c, where the c typically are cells of the lattice (like bonds in the spin system case, sites for vertex models, plaquettes for gauge theories). More generally, H_c can contain part (or all) of the interactions in some neighborhood of the cell c. We can choose the modified interactions H_c^{i_c} for each H_c independently (i_c numbers the possible new interactions for the cell c). The probabilities for this choice obey eq. (3), with V replaced by H_c, and i by i_c. The total modified interaction V^i will then be the sum of all the H_c^{i_c} (i is now a multi-index). When clear from the context, we shall drop the index c.

Take the Ising model as an example. The configuration u is comprised of spins s_x = ±1, the cells where the interaction is localized are bonds <xy>, and V(u) = −J Σ_<xy> s_x s_y. We choose the bonds as our cells c, so we can perform the first step of the Kandel-Domany procedure separately for each bond. The original cell interaction is

    H(s) = −J s_x s_y .    (4)

Now, let us define two new bond interactions; the first (i = 1) is called "freeze", the second (i = 2) is called "delete":

    H_freeze(s) = 0 if s_x = s_y, ∞ otherwise ;    H_delete(s) = 0 .    (5)

Then from (3) we obtain the Swendsen-Wang probabilities if we choose the constants c_i that minimize freezing:

    p_delete(s) = exp(−J(s_x s_y + 1)) = min(1, exp(−2J s_x s_y)) ,
    p_freeze(s) = 1 − p_delete(s) ,    (6)

which are just the Swendsen-Wang probabilities.
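As a small consistency check (ours), the following snippet evaluates eq. (3) for a single Ising bond with the freeze and delete interactions of eq. (5), using the constant c_delete = −J that minimizes freezing, and verifies that it reproduces the probabilities (6) for aligned and anti-aligned spins.

```python
import math

J = 0.7     # arbitrary test coupling (our choice)

def p_delete(sx, sy):
    # Eq. (3) with V = -J*sx*sy, V_delete = 0 and the constant c_delete = -J,
    # which is the choice that minimizes freezing (it keeps p_delete <= 1):
    return math.exp(-J * sx * sy - J)

for sx, sy in [(+1, +1), (+1, -1)]:
    pd = p_delete(sx, sy)
    pf = 1.0 - pd                                    # freezing takes the remaining probability
    assert math.isclose(pd, min(1.0, math.exp(-2 * J * sx * sy)))   # eq. (6)
    print(f"s_x*s_y = {sx * sy:+d}:  p_delete = {pd:.4f},  p_freeze = {pf:.4f}")
```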

4 The 6-Vertex Model

The 6-vertex model [10] is defined on a square lattice. On each bond there is an Ising-like variable that is usually represented as an arrow. For example, arrow up or right means +1, arrow down or left means −1. At each vertex we have the constraint that there are two incoming and two outgoing arrows. In fig. 1 we show the six possible configurations at a vertex, numbered as in [10]. The statistical weight of a configuration is given by the product over all vertices of the vertex weights ρ(u). Thus a priori, for each vertex there are 6 possible weights ρ(u), u = 1, ..., 6. We take the weights to be symmetric under reversal of all arrows. Thus, in standard notation [10], we have:

    ρ(1) = ρ(2) = a ,    ρ(3) = ρ(4) = b ,    ρ(5) = ρ(6) = c .    (7)

The 6-vertex model has two types of phase transitions: of Kosterlitz-Thouless type and of KDP type [10]. A submodel exhibiting the former is the F model, defined by c = 1, a = b = exp(−K), K ≥ 0. For the latter transition an example is the KDP model itself, defined by a = 1, b = c = exp(−K), K ≥ 0.
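To make the constraint and the weights concrete, the following sketch (our encoding, not part of the paper) lists the six allowed arrow configurations of fig. 1, checks the two-in/two-out ice rule, and assigns the weights of eq. (7). The tuple convention (west, south, east, north) with +1 meaning right/up is an illustrative assumption.

```python
import math
from itertools import product

# Bond variables around a vertex, ordered (west, south, east, north);
# +1 means the arrow points right (horizontal bonds) or up (vertical bonds).
# The numbering and the a/b/c assignment follow fig. 1 and eq. (7);
# the tuple encoding itself is our own convention.
VERTICES = {
    1: (+1, +1, +1, +1),   # weight a
    2: (-1, -1, -1, -1),   # weight a
    3: (+1, -1, +1, -1),   # weight b
    4: (-1, +1, -1, +1),   # weight b
    5: (+1, -1, -1, +1),   # weight c (both horizontal arrows point into the vertex)
    6: (-1, +1, +1, -1),   # weight c
}

def satisfies_ice_rule(w, s, e, n):
    """Two arrows in, two arrows out at the vertex."""
    incoming = (w == +1) + (s == +1) + (e == -1) + (n == -1)
    return incoming == 2

def weight(vertex_type, a, b, c):
    return {1: a, 2: a, 3: b, 4: b, 5: c, 6: c}[vertex_type]

# The six listed configurations obey the constraint ...
assert all(satisfies_ice_rule(*arrows) for arrows in VERTICES.values())
# ... and they are the only ones among the 2^4 arrow assignments that do.
assert sum(satisfies_ice_rule(*cfg) for cfg in product((-1, +1), repeat=4)) == 6

K = 0.3                                  # F model example: a = b = exp(-K), c = 1
a = b = math.exp(-K)
print({v: weight(v, a, b, 1.0) for v in VERTICES})
```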

[Figure 1: The six vertex configurations, u = 1, ..., 6 (using the standard conventions of [10]).]

[Figure 2: The three break-ups of a vertex: ll–ur (lower-left–upper-right), ul–lr (upper-left–lower-right), and straight.]

5 The Loop Algorithm

If we regard the arrows on bonds as a vector field, the constraint at the vertices is a zero-divergence condition. Therefore every configuration change can be obtained as a sequence of loop-flips. By "loop" we denote an oriented, closed, non-branching (but possibly self-intersecting) path of bonds, such that all arrows along the path point in the direction of the path. A loop-flip reverses the direction of all arrows along the loop. Our cluster algorithm performs precisely such operations, with appropriate probabilities. It constructs closed paths consisting of one or several loops without common bonds. All loops in this path are flipped together.

We shall construct the path iteratively, following the direction of the arrows. Let the bond b be the latest addition to the path. The arrow on b points to a new vertex v. There are two outgoing arrows at v, and what we need is a unique prescription for continuing the path through v. This is provided by a break-up of the vertex v. In addition to the break-up, we have to allow for freezing of v. By choosing suitable probabilities for break-up and freezing we shall satisfy detailed balance.

The break-up operation is defined by splitting v into two pieces, as shown in fig. 2. The two pieces are either two corners or two straight lines. On each piece, one of the arrows points towards v, while the other one points away from v. Thus we will not allow, e.g., the ul–lr break-up for a vertex in the configuration 3. If we break up v, the possible new configurations are obtained by flipping (i.e., reversing both arrows of) the two pieces independently. On the other hand, if we freeze v, the only possible configuration change is to flip all four arrows.
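The three break-ups can be encoded simply as pairings of the four legs of a vertex. A hypothetical encoding (ours), with legs labelled west, south, east and north, and a helper that returns the leg through which the path leaves a broken vertex:

```python
# Legs of a vertex, and the two pieces each break-up splits them into
# (labels are ours; cf. fig. 2).
BREAKUPS = {
    "ll-ur":    (("west", "south"), ("east", "north")),  # lower-left / upper-right corners
    "ul-lr":    (("west", "north"), ("east", "south")),  # upper-left / lower-right corners
    "straight": (("west", "east"), ("south", "north")),  # two straight lines
}

def continue_path(breakup, entry_leg):
    """Given the leg through which the path enters the vertex, return the
    leg through which it leaves, for the chosen break-up."""
    for piece in BREAKUPS[breakup]:
        if entry_leg in piece:
            first, second = piece
            return second if entry_leg == first else first

print(continue_path("ul-lr", "south"))   # -> 'east'
```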

The break-up and freeze probabilities are conveniently described within the Kandel-Domany framework. It is sufficient to give them for one vertex, which is in the current configuration u. We define 6 new interactions (weight functions) ρ^i, i = 1, ..., 6, corresponding to specific break-up and freeze operations (the labelling of the new interactions is completely arbitrary, and the fact that we have six of them is just a coincidence). For each vertex in configuration u, we replace with probability p_i(u) the original interaction ρ by the new interaction ρ^i. Equation (3), i.e. detailed balance and the proper normalization of probabilities, requires that for every u

    p_i(u) = q_i ρ^i(u) / ρ(u) ,    Σ_i p_i(u) = 1 ,    (8)

where q_i = exp(−c_i) ≥ 0 are parameters.

    i   action        ρ^i(ũ)                       p_i(u)
    1   freeze 1,2    1 if ρ(ũ) = a, 0 else        q_1/a if ρ(u) = a, 0 else
    2   freeze 3,4    1 if ρ(ũ) = b, 0 else        q_2/b if ρ(u) = b, 0 else
    3   freeze 5,6    1 if ρ(ũ) = c, 0 else        q_3/c if ρ(u) = c, 0 else
    4   ll–ur         0 if ρ(ũ) = a, 1 else        0 if ρ(u) = a, q_4/ρ(u) else
    5   ul–lr         0 if ρ(ũ) = b, 1 else        0 if ρ(u) = b, q_5/ρ(u) else
    6   straight      0 if ρ(ũ) = c, 1 else        0 if ρ(u) = c, q_6/ρ(u) else

Table 1: The new interactions ρ^i(ũ) and the probabilities p_i(u) to choose them at a vertex in current configuration u. See eq. (8).

As discussed in [4] (see also Table 1), freezing is described by introducing one new interaction for each different value of ρ(u).

For example, to freeze the value a, we choose the interaction ρ^1 to be ρ^1(ũ) = 1 if ρ(ũ) = a, and ρ^1(ũ) = 0 otherwise. In other words, when ρ^1 is chosen, transitions between ũ = 1 and ũ = 2 cost nothing, whereas the vertex configurations 3, 4, 5, and 6 are then not allowed. Notice that we denote by u the current configuration, and by ũ the argument of the function ρ^i.

Each break-up is also described by one new interaction. As an example take the ul–lr break-up. It is given by the new interaction number five, with ρ^5(ũ) = 1 if ρ(ũ) = a or c, and ρ^5(ũ) = 0 if ρ(ũ) = b. In other words, with the new interaction ρ^5, transitions between 1, 2, 5 and 6 cost nothing, while the vertex configurations 3 and 4 are not allowed. This corresponds precisely to allowing independent corner flips in a ul–lr break-up (see figs. 1, 2).

Table 1 gives the full list of new weights ρ^i(ũ) and probabilities p_i(u) to choose them in current configuration u. From (8) we also obtain:

    q_1 + q_5 + q_6 = a ,    q_2 + q_4 + q_6 = b ,    q_3 + q_4 + q_5 = c .    (9)
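Assuming weights a, b, c that are pairwise distinct (so that ρ(u) identifies the weight class), the probabilities of Table 1 can be tabulated as in this sketch (ours); the assertion checks the normalization in eq. (8), which holds automatically once the q_i satisfy (9).

```python
def vertex_probabilities(a, b, c, q, rho_u):
    """Probabilities p_i(u) of Table 1 / eq. (8) at a vertex whose current
    configuration has weight rho_u (one of a, b, c, assumed pairwise distinct).
    q = (q1, ..., q6) must satisfy eq. (9)."""
    q1, q2, q3, q4, q5, q6 = q
    p = {
        "freeze 1,2": q1 / a if rho_u == a else 0.0,
        "freeze 3,4": q2 / b if rho_u == b else 0.0,
        "freeze 5,6": q3 / c if rho_u == c else 0.0,
        "ll-ur":      0.0 if rho_u == a else q4 / rho_u,
        "ul-lr":      0.0 if rho_u == b else q5 / rho_u,
        "straight":   0.0 if rho_u == c else q6 / rho_u,
    }
    assert abs(sum(p.values()) - 1.0) < 1e-12      # normalization, eq. (8) with (9)
    return p

# Massless-phase choice, eq. (11): no freezing at all.
a, b, c = 0.8, 0.9, 1.0
q = (0.0, 0.0, 0.0, (b + c - a) / 2, (c + a - b) / 2, (a + b - c) / 2)
print(vertex_probabilities(a, b, c, q, rho_u=c))    # weight-c vertex: only corner break-ups
```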

Assume now that we have broken or frozen all vertices. Starting from a bond b_0, we proceed to construct a closed path by moving in the arrow direction. As we move from vertex to vertex, we always have a unique way to continue the path. At broken vertices the path enters the vertex through one bond and leaves it through another. If the last bond b added to the cluster points to a frozen vertex v, the path bifurcates in the directions of the two outgoing arrows of v. One of these directions can be considered as belonging to the loop we came from, the other one as belonging to a new loop. Since we also have to flip the second incoming arrow of v, we are assured that this new loop also closes. The two loops have to be flipped together. In general, the zero-divergence condition guarantees that all loops will eventually close.

We have now finished describing the procedure for constructing clusters. In order to specify the algorithm completely, we must choose values for the constants q_i, and decide how the clusters are flipped. The former problem is of utmost importance, and it is the object of the next chapter. For the cluster flips, we may use both the Swendsen-Wang procedure and the Wolff single cluster flip [3]. We choose the latter, i.e. we "grow" a single path from a random starting bond b_0, and flip it. The break-or-freeze decision is only needed for the vertices along the path, so the computer time for one path is proportional to the length of that path.
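As an illustration of the single-path variant, here is a simplified sketch (ours, with deliberate restrictions: it works at the symmetric point a = b = c, where by eq. (11) no freezing is needed and each vertex simply chooses one of its two allowed break-ups with probability 1/2). It grows one loop from a random bond, memoizes the break-up chosen at each visited vertex so that self-intersections reuse it, flips the loop, and checks that the ice rule is preserved. It is meant to show the control flow only, not the authors' implementation.

```python
import random

L = 8                 # linear lattice size, periodic boundaries (toy choice)
# h[x][y]: arrow on the bond from vertex (x,y) to (x+1,y); +1 = pointing right.
# v[x][y]: arrow on the bond from vertex (x,y) to (x,y+1); +1 = pointing up.
h = [[+1] * L for _ in range(L)]
v = [[+1] * L for _ in range(L)]      # start with every vertex in configuration 1

# Break-ups as pairings of the four legs W, S, E, N of a vertex (cf. fig. 2).
PAIRING = {"ll-ur":    {"W": "S", "S": "W", "E": "N", "N": "E"},
           "ul-lr":    {"W": "N", "N": "W", "E": "S", "S": "E"},
           "straight": {"W": "E", "E": "W", "S": "N", "N": "S"}}

def legs(x, y):
    """Arrow values on the four legs of vertex (x, y)."""
    return {"W": h[(x - 1) % L][y], "S": v[x][(y - 1) % L],
            "E": h[x][y],           "N": v[x][y]}

def allowed_breakups(x, y):
    """The two break-ups compatible with the current vertex configuration;
    the third would pair two incoming arrows and is forbidden (cf. Table 1)."""
    a = legs(x, y)
    if a["W"] == a["E"]:                         # weight-a or weight-b vertex
        return ["ul-lr", "straight"] if a["W"] == a["S"] else ["ll-ur", "straight"]
    return ["ll-ur", "ul-lr"]                    # weight-c vertex

def step(x, y, entry, chosen):
    """Continue the path through vertex (x, y), entered via leg `entry`.
    Returns the next bond traversed, the next vertex and its entry leg."""
    if (x, y) not in chosen:
        # At a = b = c the two allowed break-ups each have probability 1/2 (eq. (11)).
        chosen[(x, y)] = random.choice(allowed_breakups(x, y))
    out = PAIRING[chosen[(x, y)]][entry]
    if out == "E":
        return ("h", x, y), ((x + 1) % L, y), "W"
    if out == "W":
        return ("h", (x - 1) % L, y), ((x - 1) % L, y), "E"
    if out == "N":
        return ("v", x, y), (x, (y + 1) % L), "S"
    return ("v", x, (y - 1) % L), (x, (y - 1) % L), "N"

def loop_update():
    """Grow one loop from a random starting bond and flip it (Wolff-style)."""
    x0, y0 = random.randrange(L), random.randrange(L)
    start = ("h", x0, y0)
    if h[x0][y0] == +1:
        (vx, vy), entry = ((x0 + 1) % L, y0), "W"    # follow the arrow eastwards
    else:
        (vx, vy), entry = (x0, y0), "E"
    path, chosen = [start], {}
    while True:
        bond, (vx, vy), entry = step(vx, vy, entry, chosen)
        if bond == start:                            # the loop has closed
            break
        path.append(bond)
    for kind, x, y in path:                          # flip every arrow on the loop
        if kind == "h":
            h[x][y] *= -1
        else:
            v[x][y] *= -1
    return len(path)

def ice_rule_ok():
    for x in range(L):
        for y in range(L):
            a = legs(x, y)
            if (a["W"] == 1) + (a["E"] == -1) + (a["S"] == 1) + (a["N"] == -1) != 2:
                return False
    return True

random.seed(1)
lengths = [loop_update() for _ in range(100)]
assert ice_rule_ok()                                 # the constraint is never violated
print("mean loop length:", sum(lengths) / len(lengths))
```

For general weights, the break-up (and freeze) choice at each vertex would instead be drawn from the probabilities of Table 1, and frozen vertices would force several loops to be grown and flipped together.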

There are some distinct differences between our loop-clusters and more conventional spin-clusters. For spin-clusters, the elementary objects that can be flipped are spins; freezing binds them together into clusters. Our closed loops on the other hand may be viewed as a part of the boundary of spin-clusters (notice that the boundary of spin-clusters may contain loops inside loops). It is reasonable to expect that in typical cases, building a loop-cluster will cost less work than for a spin-cluster. This is an intrinsic advantage of the loop algorithm.

6 Optimization of free parameters

We have seen that freezing forces loops to be flipped together. Previous experience with cluster algorithms suggests that it is advantageous to be able to flip loops independently, as far as possible. We therefore introduce the principle of minimal freezing as a guide for choosing the constants q_i: we shall minimize the freezing probabilities, given the constraints (9) and q_i ≥ 0. In the next chapter we will show that for the case of the F model, optimization by minimal freezing does indeed minimize critical slowing down. Here we discuss optimization for the 4 phases of the 6-vertex model, usually denoted by capital roman numerals [10].

Let us first look at phase IV, where c > a + b. To minimize the freezing of weight c, we have to minimize q_3. From (9), q_3 = c − a − b + q_1 + q_2 + 2q_6. With q_i ≥ 0 this implies q_{3,min} = c − a − b. The minimal value of q_3 can only be chosen if at the same time we set q_1 = q_2 = 0, i.e. minimize (in this case do not allow for) the freezing of the smaller weights a and b. The optimized parameters for phase IV are then:

    q_1 = 0 ,  q_2 = 0 ,  q_3 = c − a − b ,  q_4 = b ,  q_5 = a ,  q_6 = 0 .    (10)

In phase I the situation is technically similar. Here a > b + c, and we minimize freezing with q_1 = a − b − c and q_2 = q_3 = 0. The same holds for phase II, b > a + c, where we obtain minimal freezing for q_2 = b − a − c and q_1 = q_3 = 0.

Phase III (the massless phase) is characterized by a, b, c < (1/2)(a + b + c). Here we can set all freezing probabilities to zero. Thus,

    q_1 = 0 ,  2q_4 = b + c − a ,
    q_2 = 0 ,  2q_5 = c + a − b ,
    q_3 = 0 ,  2q_6 = a + b − c .    (11)
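The minimal-freezing choices for all four phases can be collected into one small routine (ours); given the weights it returns q_1, ..., q_6 and checks the constraints (9). For the F model in the massive phase it reproduces q_{3,min} = 1 − 2 exp(−K) of eq. (13) below.

```python
import math

def minimal_freezing_q(a, b, c):
    """Parameters q1..q6 chosen by the principle of minimal freezing
    (eq. (10), eq. (11), and the analogous choices in phases I and II)."""
    q1 = max(0.0, a - b - c)      # freeze weight a only in phase I
    q2 = max(0.0, b - a - c)      # freeze weight b only in phase II
    q3 = max(0.0, c - a - b)      # freeze weight c only in phase IV
    # The break-up parameters then follow from the constraints (9):
    q4 = (b + c - a + q1 - q2 - q3) / 2.0
    q5 = (c + a - b - q1 + q2 - q3) / 2.0
    q6 = (a + b - c - q1 - q2 + q3) / 2.0
    assert min(q4, q5, q6) >= -1e-12
    assert abs(q1 + q5 + q6 - a) < 1e-12      # eq. (9)
    assert abs(q2 + q4 + q6 - b) < 1e-12
    assert abs(q3 + q4 + q5 - c) < 1e-12
    return q1, q2, q3, q4, q5, q6

# F model in the massive phase (K > ln 2): reproduces q3,min = 1 - 2 exp(-K).
K = 1.0
print(minimal_freezing_q(math.exp(-K), math.exp(-K), 1.0))
```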

7 Case of the F model

The F model is obtained from (10) and (11) as the special case a = b = exp(−K) ≤ 1, c = 1. It has a Kosterlitz-Thouless transition at K_c = ln 2, with a massless phase for K ≤ K_c. How do we choose the parameters q_i here? The symmetry a = b implies q_1 = q_2 and q_4 = q_5. We can eliminate freezing of vertices 1, 2, 3, 4 by setting q_1 = q_2 = 0. In (9) this leaves one parameter, q_3:

    2q_4 = 1 − q_3 ,    2q_6 = 2 exp(−K) + q_3 − 1 .    (12)

In the massless phase, we can minimize freezing by also setting q_3 = 0. In the massive phase, q_6 ≥ 0 limits q_3. Thus

    q_{3,min} = 1 − 2 exp(−K)  for K ≥ K_c ,    q_{3,min} = 0  for K ≤ K_c .    (13)

Notice that since a = b in the F model, the straight break-up, the freezing of a, and that of b are operationally the same thing.

If we choose q_3 = q_{3,min}, then for K ≤ K_c vertices of type 5 and 6 are never frozen, which has as a consequence that every path consists of a single loop. This loop may intersect itself, like in the drawing of the figure "8". For K > K_c on the other hand, vertices of type 1, 2, 3 and 4 are never frozen, so we do not continue a path along a straight line through any vertex. As K → ∞ (temperature goes to zero), most vertices are of type 5 or 6, and they are almost always frozen. Thus the algorithm basically flips between the two degenerate ground states.

For the F model we also have a spin-cluster algorithm, the VMR algorithm [7]. At K = K_c and for q_3 = q_{3,min}, we have a situation where the loop-clusters form parts of the boundary of VMR spin-clusters. Since flipping a loop-cluster is not the same as flipping a VMR cluster, we expect the two algorithms to have different performances. We found (see [7] and the next section) that in units of clusters, the VMR algorithm is more efficient, but in work units, which are basically units of CPU time, the loop algorithm wins. At K_c/2, where the loop-clusters are not at all related to the boundary of VMR clusters, we found the loop algorithm to be more efficient both in units of clusters and in work units, with a larger advantage in the latter.

8 F model: Performance of the loop algorithm

We tested the loop algorithm on L×L square lattices with periodic boundary conditions at two values of K: at the transition point K_c, and at (1/2)K_c, which is deep inside the massless phase. We carefully analyzed autocorrelation functions and determined the exponential autocorrelation time τ.

At infinite correlation length, critical slowing down is quantified by the relation (1), τ ∝ L^z. Local algorithms are slow, with z ≈ 2. For comparison, we performed runs with a local algorithm that flips arrows around elementary plaquettes with Metropolis probability, and indeed found z = 2.2(2) at K = K_c.

In order to make sure that we do observe the slowest mode of the Markov matrix, we measured a range of quantities and checked that they exhibit the same τ. As in [7], it turned out that one can use quantities defined on a sublattice in order to couple strongly to the slowest mode. Specifically, we wrote the energy as a sum over two sublattice quantities. We shall present more details of this phenomenon elsewhere. Let us however note here that for the total energy, the true value of τ was not visible within our precision, except for a weak hint of a long tail in the autocorrelations on the largest lattices we considered. Note that as a consequence of this situation, the so-called "integrated autocorrelation time" [3] is much smaller than τ, and it would be completely misleading to evaluate the algorithm based only on its values.
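For readers unfamiliar with these quantities, here is a generic sketch (ours, not the authors' analysis code) of how the integrated and exponential autocorrelation times can be estimated from a Monte Carlo time series. It uses a synthetic AR(1) series whose exact τ_exp = −1/ln 0.9 ≈ 9.5 is known, so the two estimators can be sanity-checked.

```python
import numpy as np

def autocorrelation(x, t_max):
    """Normalized autocorrelation function C(t) for t = 0 .. t_max."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x * x)
    return np.array([np.mean(x[:len(x) - t] * x[t:]) / var for t in range(t_max + 1)])

def tau_int(C):
    """Integrated autocorrelation time with a simple self-consistent window."""
    tau = 0.5
    for t in range(1, len(C)):
        tau += C[t]
        if t >= 6 * tau:          # common windowing heuristic
            break
    return tau

def tau_exp(C, t1, t2):
    """Exponential autocorrelation time from a log-linear fit of C(t) on [t1, t2)."""
    t = np.arange(t1, t2)
    slope = np.polyfit(t, np.log(C[t1:t2]), 1)[0]
    return -1.0 / slope

# Synthetic AR(1) test series: x_{n+1} = 0.9 x_n + noise, so tau_exp = -1/ln(0.9).
rng = np.random.default_rng(2)
noise = rng.normal(size=400_000)
series = np.empty_like(noise)
series[0] = 0.0
for n in range(1, len(noise)):
    series[n] = 0.9 * series[n - 1] + noise[n]

C = autocorrelation(series, 100)
print("tau_int ~ %.2f   tau_exp ~ %.2f   exact: %.2f"
      % (tau_int(C), tau_exp(C, 5, 30), -1.0 / np.log(0.9)))
```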

We shall quote autocorrelation times τ in units of "sweeps" [3], defined such that on the average each bond is updated once during a sweep. Thus, if τ^cl is the autocorrelation time in units of clusters, then τ = τ^cl × ⟨cluster size⟩ / (2L²). Each of our runs consisted of between 50000 and 200000 sweeps. Let us also define z^cl by τ^cl ∝ L^{z^cl}, and a cluster size exponent c by ⟨cluster size⟩ ∝ L^c. We then have:

    z = z^cl − (2 − c) .    (14)
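Eq. (14) can be checked directly against the exponents quoted below and in Table 2 (a consistency check we added, not an independent measurement):

```python
# z = z_cl - (2 - c), eq. (14), evaluated with the central values quoted below:
for label, z_cl, c_size in [("K = Kc", 1.65, 1.060), ("K = Kc/2", 0.74, 1.446)]:
    print(f"{label}:  z = {z_cl - (2 - c_size):.3f}")
# gives z = 0.710 at Kc and z = 0.186 (i.e. 0.19 within errors) at Kc/2
```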

Table 2 shows the autocorrelation time τ for the optimal choice q_3 = q_{3,min}. At K = (1/2)K_c, deep inside the massless phase, critical slowing down is almost completely absent. A fit according to eq. (1) gives z = 0.19(2). The data are also consistent with z = 0 and only logarithmic growth. For the cluster size exponent c we obtained c = 1.446(2), which points to the clusters being quite fractal (notice that z^cl = 0.74(2)). At the phase transition K = K_c we obtained z = 0.71(5), which is still small. The clusters seem to be less fractal: c = 1.060(2), so that z^cl = 1.65(5).

    L      K = K_c      K = (1/2)K_c
    8      1.8(1)       4.9(4)
    16     3.0(2)       5.6(2)
    32     4.9(4)       6.2(3)
    64     7.2(7)       7.4(3)
    128    15.5(1.5)    8.3(2)
    256    20.5(2.0)
    z      0.71(5)      0.19(2)

Table 2: Exponential autocorrelation time τ at q_3 = q_{3,min}, and the resulting dynamical critical exponent z.

    K           q_3     z
    (1/2)K_c    0       0.19(2)
    (1/2)K_c    0.10    1.90(5)
    (1/2)K_c    0.20    ≥ 2.6(4)
    K_c         0       0.71(5)
    K_c         0.05    0.77(6)
    K_c         0.10    0.99(6)
    K_c         0.20    ≥ 2.2(1)

Table 3: Dependence of the dynamical critical exponent z on the parameter q_3. We use "≥" where for our lattice sizes τ increases faster than a power of L.

We noted above that at K = K_c and for the optimal choice of q_3, the loop-clusters are related to the VMR spin-clusters. In [7] we obtained for the VMR algorithm at K = K_c the result z^cl = 1.22(2), but we had c = 1.985(4), which left us with z = 1.20(2).

In this case, although in units of clusters the VMR algorithm is more efficient, the smaller dimensionality of the loop-clusters more than makes up for this, and in CPU time the loop algorithm is more efficient.

As mentioned, no critical slowing down is visible for the integrated autocorrelation time τ_int(E) of the total energy. At K = K_c, τ_int(E) is only 0.80(2) on the largest lattice, and we find the dynamical exponent z_int(E) ≈ 0.20(2). At K = (1/2)K_c, τ_int(E) is 1.1(1) on all lattice sizes, so z_int(E) is zero.

What happens for non-minimal values of q_3? Table 3 shows our results on the dependence of z on q_3. z rapidly increases as q_3 moves away from q_{3,min}. This effect seems to be stronger at (1/2)K_c than at K_c. We thus see that the optimal value of q_3 indeed produces the best results, as conjectured from our principle of least possible freezing.

In the massive phase close to K_c, we expect that z(K_c) will determine the behaviour of τ in a similar way as in ref. [7]. To confirm this, a finite size scaling analysis of τ is required. In order to complete the study of the loop algorithm's performance, we should also investigate it at the KDP transition.

In summary, the loop algorithm strongly reduces critical slowing down, from z = 2.2(2) for the Metropolis algorithm, down to z = 0.71(5) at K_c and z = 0.19(2) at (1/2)K_c deep inside the massless phase. For the integrated autocorrelation time of the total energy, no critical slowing down is visible in either case.

9 Quantum Spin Systems

Particularly promising is the possibility of accelerating Quantum Monte Carlo simulations, since quantum spin systems in one and two dimensions can be mapped into vertex models in 1+1 and 2+1 dimensions via the Trotter formula and suitable splittings of the Hamiltonian [11]. The simplest example is the spin 1/2 xxz quantum chain, which is mapped directly into the 6-vertex model. For higher spins, more complicated vertex models result (e.g. a 19-vertex model for spin one).
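To illustrate the mapping in its simplest form (our sketch; the full checkerboard bookkeeping is described in [11]), the elementary Boltzmann weight of the world-line representation is the two-site gate exp(−Δτ h) of the spin-1/2 xxz chain. Evaluated in the S^z basis it has exactly six non-zero matrix elements, one for each allowed vertex configuration of the 6-vertex model; the couplings and Trotter step below are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators.
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])

# Two-site xxz bond Hamiltonian h = Jx (SxSx + SySy) + Jz SzSz (our normalization),
# and the plaquette Boltzmann weight exp(-dtau*h) of the Trotter decomposition.
Jx, Jz, dtau = 1.0, 1.0, 0.1
h = Jx * (np.kron(sx, sx) + np.kron(sy, sy)) + Jz * np.kron(sz, sz)
gate = expm(-dtau * h)

print(np.round(gate.real, 4))                            # basis: uu, ud, du, dd
print("non-zero matrix elements:", int((np.abs(gate) > 1e-12).sum()))   # -> 6
# The parallel-spin diagonal elements, the antiparallel diagonal elements, and the
# spin-exchange elements supply the three distinct vertex weights (a, b, c up to
# labelling conventions).
```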

For (2+1) dimensions, different splittings of the Hamiltonian lead to quite different vertex models, in particular on quite different lattice types [11]. For example, in the case of spin 1/2 we can choose between 6-vertex models on a quite complicated 2+1 dimensional lattice, and models on a bcc lattice, with 8 bonds and a large number of configurations per vertex.

For the simulation of the 2-dimensional Heisenberg antiferromagnet and ferromagnet using the former splitting, all relevant formulas have been worked out in the present paper. Actually, the low temperature properties of the antiferromagnet have recently been investigated by Wiese and Ying [14] using our algorithm. Their calculation is, in our opinion, the first high quality verification of the magnon picture for the low lying excitations. In particular, this excludes to a much higher degree of confidence than before the speculation (some years ago widespread) that the model had a nonzero mass gap.

Notice that, similarly to other cluster algorithms [3], it is straightforward to define improved observables. The investigation [14] in fact uses them. Let us also remark that the loop algorithm can easily change global properties like the number of world lines or the winding number (see [11]). Thus it is well suited for simulations in the grand canonical ensemble. Last, but not least, the loop algorithm also opens up a new avenue for taming the notorious fermion sign problem [15].

10 Conclusions

We have presented a new type of cluster algorithm. It flips closed paths of bonds in vertex models. Constraints are automatically satisfied. We have successfully tested our algorithm for the F model and found remarkably small dynamical critical exponents.

There are many promising and straightforward applications of our approach, to other vertex models, and to 1+1 and 2+1 dimensional quantum spin systems. Investigations of such systems are in progress. In particular, we believe that our generalizations of the freeze-delete scheme can be adapted for other models like the 8-vertex model. Already, the loop algorithm has found important applications in the study of the 2-dimensional Heisenberg antiferromagnet.

Acknowledgements

This work was supported in part by the German-Israeli Foundation for Research and Development (GIF) and by the Basic Research Foundation of the Israel Academy of Sciences and Humanities. We would like to express our gratitude to the HLRZ at KFA Jülich and to the DOE for providing the necessary computer time for the F model study.

References

[1] R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. 58 (1987) 86.

[2] R. C. Brower and P. Tamayo, Phys. Rev. Lett. 62 (1989) 1087; U. Wolff, Phys. Rev. Lett. 62 (1989) 361, Nucl. Phys. B322 (1989) 759, and Phys. Lett. 228B (1989) 379.

[3] For reviews, see e.g.: U. Wolff, in Lattice '89, Capri 1989, N. Cabbibo et al., editors, Nucl. Phys. B (Proc. Suppl.) 17 (1990) 93; A. D. Sokal, in Lattice '90, Tallahassee 1990, U. M. Heller et al., editors, Nucl. Phys. B (Proc. Suppl.) 20 (1991) 55.

[4] D. Kandel and E. Domany, Phys. Rev. B43 (1991) 8539.

[5] H. G. Evertz, M. Hasenbusch, M. Marcu, K. Pinn and S. Solomon, Phys. Lett. 254B (1991) 185, and in Workshop on Fermion Algorithms, Jülich 1991, H. J. Herrmann and F. Karsch, editors, Int. J. Mod. Phys. C3 (1992) 235.

[6] R. Ben-Av, D. Kandel, E. Katznelson, P. Lauwers and S. Solomon, J. Stat. Phys. 58 (1990) 125. This cluster algorithm for the 3-dimensional Z2 gauge theory may at first glance bear some resemblance to our loop algorithm, but the similarities are only superficial.

[7] M. Hasenbusch, G. Lana, M. Marcu and K. Pinn, Cluster algorithm for a solid-on-solid model with constraints, Phys. Rev. B46 (1992) 10472.

[8] H. G. Evertz, G. Lana and M. Marcu, Phys. Rev. Lett. 70 (1993) 875.

[9] H. G. Evertz and M. Marcu, in Lattice 92, Amsterdam 1992, ed. J. Smit et al., Nucl. Phys. B (Proc. Suppl.) 30 (1993) 277.

[10] E. H. Lieb, Phys. Rev. Lett. 18 (1967) 1046; E. H. Lieb and F. Y. Wu, Two-dimensional Ferroelectric Models, in Phase Transitions and Critical Phenomena Vol. 1, C. Domb and M. S. Green, editors (Academic, 1972) p. 331; R. J. Baxter, Exactly Solved Models in Statistical Mechanics (Academic, 1989).

[11] For Quantum Monte Carlo simulations see: M. Suzuki, editor, Quantum Monte Carlo Methods in Equilibrium and Nonequilibrium Systems, Taniguchi symposium, Springer (1987).

[12] See the overview in [13].

[13] H. G. Evertz, J. Stat. Phys. 70 (1993) 1075.

[14] U. J. Wiese and H. P. Ying, Bern preprint; bulletin board cond-mat/9212006.

[15] H. G. Evertz, in preparation.


Source: arXiv:cond-mat/9305019
