SPECTRAL DENSITY STUDY OF THE SU(3) DECONFINING PHASE TRANSITION

This paper presents spectral density reweighting techniques adapted to the analysis of the deconfining phase transition of SU(3) theory. Using these methods, the authors compute observables related to the action and the Polyakov line from Monte Carlo data, and the results substantiate the first order nature of the transition.

The technique exploits the non-Gaussian character of the action density distributions; in this respect SU(3) is an easier case than SU(2), whose action density distributions are barely distinguishable from Gaussians. The authors perform a reweighting analysis to compute partition function zeros as well as the maxima of the specific heat and of the order parameter susceptibility; a finite size scaling analysis then yields estimates for the critical couplings βc, the critical exponent ν, and the latent heat △s.

In addition, the authors address the influence of autocorrelation times and present methods for controlling the errors that arise in the extraction of partition function zeros. The paper is intended as a model analysis for future large scale simulations of the QCD deconfining phase transition.

[End of summary]


arXiv:hep-lat/9107002v1  8 Apr 1992

SPECTRAL DENSITY STUDY OF THE SU(3) DECONFINING PHASE TRANSITION†

Nelson A. Alves¹, Bernd A. Berg¹,³ ** and Sergiu Sanielevici²,³

¹ Fakultät für Physik, Universität Bielefeld, D-4800 Bielefeld 1, Germany
² Department of Physics, The Florida State University, Tallahassee, FL 32306, USA
³ Supercomputer Computations Research Institute, The Florida State University, Tallahassee, FL 32306, USA

Abstract

We present spectral density reweighting techniques adapted to the analysis of a time series of data with a continuous range of allowed values. In a first application we analyze action and Polyakov line data from a Monte Carlo simulation on Lt·L³ (Lt = 2, 4) lattices for the SU(3) deconfining phase transition. We calculate partition function zeros, as well as maxima of the specific heat and of the order parameter susceptibility. Details and warnings are given concerning i) autocorrelations in computer time and ii) a reliable extraction of partition function zeros. The finite size scaling analysis of these data leads to precise results for the critical couplings βc, for the critical exponent ν and for the latent heat △s. In both cases (Lt = 2 and 4), the first order nature of the transition is substantiated.

† Submitted to Nuclear Physics B
** On leave of absence from Department of Physics, The Florida State University, Tallahassee.

1. INTRODUCTION

Monte Carlo simulations of finite statistical systems at a coupling β = β₀ generate a time series of correlated data such that for appropriate observables f the arithmetic average of measurements

    f̄(β₀) = (1/N) Σ_{n=1}^{N} f_n                                        (1.1)

becomes an estimator of the expectation value <f>(β₀):

    f̂(β₀) = <f>(β₀) = lim_{N→∞} (1/N) Σ_{n=1}^{N} f_n.                   (1.2)

Using the spectral density representation of the partition function

    Z(β) = ∫_{S_min}^{S_max} dS n(S) exp(βS),                            (1.3)

reweighting techniques allow one to calculate the estimator f̄(β) for all β in a sufficiently small neighborhood of β₀. Although reweighting techniques have a long history [1-9], the extent of practical improvements became fully realized only after the recent work by Ferrenberg and Swendsen [7,8]. In this paper we further elaborate on these techniques and cast them in a form suitable for practical applications in lattice gauge theories, where one has to deal with a time series of continuous variables.

In particular, procedures for combining ("patching") data from simulations at various β₀ values are discussed in some detail. Altogether, we have in mind to establish a model analysis which may provide useful guidance for the analysis of data from future large scale simulations of the QCD deconfining phase transition, like the U.S. teraflop project [11]. Here, we give a reweighting analysis for the SU(3) deconfining phase transition. In a subsequent paper we shall present our SU(2) results. (From the point of view of reweighting techniques, the SU(2) theory turns out to be the more difficult case, because the action density distributions are barely distinguishable from Gaussian distributions.)

Pioneering numerical work on the SU(3) deconfining transition was done some years ago [12]. Renewed interest was stimulated by the APE collaboration [13], who raised doubts about the first order nature of the transition. This prompted a number of finite size scaling (FSS) studies [14-17], with the result that the first order nature of the transition was rather unambiguously established. Here, we give the details promised in [17].

Beyond [17] we present an investigation of autocorrelation times and a theoretical analysis concerning the numerical calculation of partition function zeros. The latter point enforces some corrections of previously stated results. Aside from quantities which are functions of the action (energy) [17], we now also analyze the Polyakov line susceptibility.

The paper is organized as follows: Section 2 introduces the spectral density method as used in this paper, section 3 investigates autocorrelations in computer time (for a review see [18]) and makes a connection to error bar calculations by binning [19,20], section 4 gives our reweighting calculations for the specific heat and a FSS estimate for the latent heat, and section 5 is devoted to our calculation of partition function zeros and their FSS analysis. Beyond earlier approaches [5,21] a consistency argument is developed. In close analogy with work in [15] we give in section 6 a reweighting and FSS analysis for the Polyakov loop susceptibility. Summary and conclusions are contained in a final section 7.

2. SPECTRAL DENSITY MC CALCULATIONS

We consider the SU(3) Wilson action

    S = Σ_p S_p   with   S_p = (1/3) Tr(U_p),                            (2.1)

where the sum goes over all plaquettes of the four-dimensional lattice with volume V = Lt·L³. The MC simulation provides us with a time series of measurements for the action: S_n (n = 1, ..., N), where N is the total number of measurements.

We generate our data by means of the computer program published in ref. [22], taking measurements (meas.) after every sweep. A certain number of initial sweeps are omitted for thermalization (therm.). To have a convenient normalization, we monitor the corresponding action per plaquette (not to be confused with S_p)

    s_n = S_n / V_p,                                                     (2.2)

where V_p = 6V is the total number of plaquettes.

Let us first consider a single MC simulation at a fixed value β = β₀. What is the β range of validity for a spectral density as obtained from the MC simulation at β₀? Let us assume it is valid for β ∈ [β_min, β_max]. In this range we are able to calculate estimators for quantities of physical interest by

    f̄(β) = F/Z   with   F = Σ_{n=1}^{N} f_n exp(△β S_n),  Z = Σ_{n=1}^{N} exp(△β S_n)  and  △β = β − β₀.   (2.3)

For β ≠ β₀ we call this reweighting.

For technical reasons (floating point number overflows) one is in practice forced to use S′_i = S_i − S̄(β₀) in these formulas. This contributes an irrelevant multiplicative factor and makes the arithmetic computationally feasible.
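As a practical aside (ours, not part of the original paper), a minimal Python sketch of the reweighting estimator (2.3) with an overflow-safe shift of the kind just described; the Gaussian action data are synthetic and the lattice parameters merely illustrative:

```python
# Minimal single-histogram reweighting, eq. (2.3); subtracting the maximal
# exponent plays the role of the shift S' = S - S(beta0) described above.
import numpy as np

def reweight(f, S, beta0, beta):
    """Estimator of f at coupling beta from a time series measured at beta0."""
    w = (beta - beta0) * S              # Delta-beta * S_n
    w = np.exp(w - w.max())             # irrelevant factor removed; no overflow
    return np.sum(f * w) / np.sum(w)    # F/Z of eq. (2.3)

# Synthetic Gaussian "action" data for an assumed 4*8^3 lattice:
rng = np.random.default_rng(0)
Vp = 6 * 4 * 8**3                       # number of plaquettes, Vp = 6*V
S = rng.normal(loc=0.544 * Vp, scale=60.0, size=120_000)
print(reweight(S / Vp, S, beta0=5.670, beta=5.675))  # action density at beta
```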

With f_n = exp(iβ_y S_n), the calculation of partition function zeros can be considered as a special case (where the numerator may be omitted in the one histogram case). For biased quantities the empirical error bar △f̄ of f̄ is conveniently calculated by means of the jackknife method, and one may also correct for the remaining bias (see [23] for details).

The relevant interval △β = β_max − β_min (reweighting range) will shrink with increasing volume like*

    △β ≈ σ(s) (dβ/dŝ)|_{β=β₀} = const L^{−ρ/2} L^{−d/2} = const L^{−1/ν},   (2.4)

where ρ = α/ν and the last equality made use of the hyperscaling relation [24] ρ + d = 2/ν. We now have to determine the constant in (2.4). Essentially this boils down to the question: from which percentage of our total data at β₀ do we still expect meaningful results? Clearly, this depends somewhat on whether our statistics is large or small.

* Note that σ(s), the standard deviation of s, goes as L^{ρ/2} L^{−d/2}, while (dβ/dŝ)|_{β=β₀} ∼ L^{−ρ}.

q-tiles s_q of our empirical action density with q in the range 0.025 to 0.1 may still be expected to give meaningful answers. This has to be converted into a β-range. Let q₁ = q and q₂ = 1 − q; we define β_min and β_max by

    β_min = β_{q₁}   and   β_max = β_{q₂},

where β_q is given by the implicit equation

    ŝ(β_q) = s_q.

This equation is solved numerically for β_q. Figures 1−3 illustrate this procedure for the 4·20³ lattice at β₀ = 5.691.
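A hedged sketch of this conversion (ours): the empirical q-tile is computed from the time series and ŝ(β_q) = s_q is solved by bracketed root finding, assuming ŝ(β) is monotonic inside the bracket; scipy's brentq is our choice of solver, not the paper's.

```python
# Convert action q-tiles into a [beta_min, beta_max] reweighting range.
import numpy as np
from scipy.optimize import brentq

def s_hat(S, Vp, beta0, beta):
    """Reweighted action density, eq. (2.3) with f_n = s_n."""
    e = (beta - beta0) * S
    w = np.exp(e - e.max())
    return np.sum((S / Vp) * w) / np.sum(w)

def beta_of_quantile(S, Vp, beta0, q, bracket=0.1):
    sq = np.quantile(S / Vp, q)                       # empirical q-tile s_q
    g = lambda b: s_hat(S, Vp, beta0, b) - sq         # root is beta_q
    return brentq(g, beta0 - bracket, beta0 + bracket)

rng = np.random.default_rng(0)
Vp = 6 * 4 * 20**3
S = rng.normal(0.549 * Vp, 60.0, size=120_000)        # synthetic stand-in
print(beta_of_quantile(S, Vp, 5.691, 0.025),          # beta_min for q = 0.025
      beta_of_quantile(S, Vp, 5.691, 0.975))          # beta_max for q2 = 1 - q
```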

Table 1 gives an overview of our SU(3) data, including the β_min, β_max and s_{q₁}, s_{q₂} values for the choice q = 0.025 (up to errors from conservative rounding of β_min, β_max). For Lt = 4, Figure 4 depicts the △β(L) ranges versus L.

Table 1: Data and their reweighting ranges.

Lt · L³   therm.   meas. (indep.)    β₀      β_min    β_max    s_{q₁}   s_{q₂}
2 · 6³    10,000   120,000 (360)     5.094   5.026    5.1550   0.4060   0.4710
2 · 8³    10,000   120,000 (120)     5.090   5.042    5.1220   0.4080   0.4610
2 · 10³   10,000   120,000 (30)      5.090   5.050    5.1080   0.4090   0.4550
2 · 12³   10,000   120,000 (12)      5.092   5.064    5.1070   0.4110   0.4540
2 · 12³   10,000   120,000 (12)      5.095   5.078    5.1140   0.4130   0.4580
4 · 4³    10,000   120,000 (1300)    5.570   5.488    5.6710   0.4960   0.5560
4 · 4³    10,000   120,000 (2080)    5.610   5.533    5.7350   0.5120   0.5640
4 · 4³    10,000   120,000 (3000)    5.640   5.566    5.7650   0.5220   0.5690
4 · 6³    10,000   120,000 (4000)    5.500   5.449    5.5510   0.4830   0.5110
4 · 6³    10,000   120,000 (1200)    5.640   5.590    5.6880   0.5230   0.5520
4 · 6³    10,000   120,000 (1400)    5.645   5.598    5.6970   0.5240   0.5540
4 · 6³    10,000   120,000 (1400)    5.660   5.614    5.7170   0.5290   0.5570
4 · 6³    10,000   120,000 (1600)    5.690   5.641    5.7540   0.5380   0.5640
4 · 6³    10,000   120,000 (2800)    5.740   5.687    5.8090   0.5500   0.5720
4 · 8³    10,000   120,000 (2900)    5.600   5.567    5.6340   0.5160   0.5330
4 · 8³    10,000   120,000 (800)     5.670   5.638    5.7030   0.5350   0.5540
4 · 8³    10,000   120,000 (700)     5.693   5.661    5.7310   0.5420   0.5600
4 · 8³    10,000   120,000 (1300)    5.720   5.687    5.7630   0.5500   0.5650
4 · 10³   10,000   120,000 (3000)    5.600   5.575    5.6260   0.5180   0.5310
4 · 10³   10,000   120,000 (550)     5.680   5.656    5.7020   0.5400   0.5540
4 · 10³   10,000   120,000 (350)     5.693   5.671    5.7190   0.5440   0.5570
4 · 10³   10,000   120,000 (600)     5.710   5.687    5.7370   0.5490   0.5610
4 · 12³   10,000   120,000 (3000)    5.620   5.601    5.6380   0.5250   0.5340
4 · 12³   10,000   120,000 (330)     5.681   5.662    5.6960   0.5410   0.5510
4 · 12³   10,000   120,000 (150)     5.691   5.675    5.7080   0.5440   0.5550
4 · 12³   10,000   120,000 (600)     5.703   5.687    5.7230   0.5490   0.5580
4 · 14³   10,000   120,000 (240)     5.682   5.668    5.6930   0.5430   0.5510
4 · 14³   10,000   120,000 (110)     5.691   5.678    5.7030   0.5450   0.5540
4 · 14³   10,000   120,000 (440)     5.698   5.687    5.7130   0.5490   0.5560
4 · 16³   15,000   120,000 (180)     5.683   5.6711   5.6923   0.5428   0.5499
4 · 16³   15,000   120,000 (80)      5.691   5.6793   5.7006   0.5451   0.5536
4 · 16³   20,000   120,000 (80)      5.692   5.6814   5.7034   0.5455   0.5541
4 · 16³   15,000   120,000 (200)     5.697   5.6885   5.7096   0.5485   0.5557
4 · 20³   24,000   240,000 (80)      5.690   5.6820   5.6959   0.5454   0.5522
4 · 20³   22,000   240,000 (65)      5.691   5.6833   5.6975   0.5458   0.5528
4 · 20³   8,000    120,000 (35)      5.692   5.6842   5.6990   0.5460   0.5529

Our subsequent analysis of autocorrelation times shows that for our present data the choice q = 0.025 was too optimistic. However, this does not really matter, because the main purpose of calculating [β_min, β_max] intervals first is to prevent reweighting calculations at absurd β values.

In spin systems it is convenient to work with histograms. For lattice gauge theories the action typically varies over a continuous range, and a histogram method is not recommendable for two reasons:

i) The size of the histogram bin (i.e. of the action interval deemed to constitute a single histogram entry) is an extraneous parameter. It is tedious to have to cope with it.

ii) Whatever the size of the bin, part of the information contained in the original sample inevitably gets lost.

Instead of artificially introducing histograms, it is more convenient to rely directly on the empirical time series for the data. This requires keeping all measurements on disk or tape. In our present simulations we kept in double precision the spacelike and timelike plaquette expectation values and the real and imaginary Polyakov loop values. This amounts to up to 4 × 240,000 Real*8 data per (L, β₀ⁱ) simulation point, altogether filling up about 0.2 gigabyte of disk space (in unformatted storage). Consequently, the feasibility of this kind of analysis is tightly linked to the recent progress in storage technology and to the availability of large disks.

To cover a larger β range one has to patch MC results from runs at different β₀ⁱ values (β₀^{i+1} > β₀ⁱ, i = 1, ..., P), whose validity ranges overlap:

    s_{q₂}^{i+1} > s_{q₂}^{i} > s_{q₁}^{i+1} > s_{q₁}^{i}.               (2.5)

Various methods can be found in the literature [3,4,8,9]. The two recent discussions [8,9] both aim at minimizing the errors in the resultant estimates for n(S). A crucial difference is that [9] fixes the needed relative normalizations of the histograms from data in the overlap regions only, whereas [8] exploits a self-consistency condition which was previously stated in [4]. The approach [8] yields results even if there is no overlap at all, whereas [9] cannot be applied in such a situation. For our purposes, results from patches without overlap are assumed to be meaningless, and applying the self-consistency condition may be somewhat dangerous. For histograms with strong overlaps both methods will converge towards identical results.

More generally, it is useful to patch in such a way that the error of the actually calculated quantity is minimized.

This leads to the following very straightforward approach, which we present in a form suitable for the time series analysis of our data. The first observation is that any combination

    f̄(β) = (Σ_{i=1}^{P} a_i F_i) / (Σ_{i=1}^{P} a_i Z_i)   with weight factors   a_i = a_i(β) > 0   (2.6)

is a valid estimator for ⟨f⟩. In the limit of infinite statistics each single histogram would yield the correct results. To simplify our considerations we impose the normalization

    Σ_{i=1}^{P} w_i = 1   with   w_i = a_i Z_i.                          (2.7)

This converts equation (2.6) into

    f̄ = Σ_{i=1}^{P} w_i f̄_i   with   f̄_i = F_i / Z_i.                   (2.8)

This equation makes clear that the optimal choice for the normalized weight factors w_i is simply the inverse variance of f̄_i,

    w_i ∼ 1/σ²(f̄_i),                                                    (2.9a)

and the overall constant is fixed by the normalization condition (2.7). In practical applications the exact variances σ²(f̄_i) are unknown and we have to rely on the empirical error bars as estimators:

    w_i ∼ 1/(△f̄_i)².                                                    (2.9b)

Of course, this just means that the standard way to add estimators weighted by their error bars is also the most suitable one for combining estimators from MC simulations at various β₀ values.
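In code, the combination (2.8) with the empirical weights (2.9b) takes a few lines; the per-run numbers below are invented for illustration (our sketch, not the paper's program):

```python
# Patch per-run estimators with inverse-variance weights, eqs. (2.8)-(2.9b).
import numpy as np

def patch(fbar, err):
    """fbar, err: per-run estimates of f(beta) and their jackknife error bars."""
    w = np.asarray(err, dtype=float) ** -2
    w /= w.sum()                          # normalization condition (2.7)
    return float(np.sum(w * fbar)), w     # combined estimator (2.8), weights

fbar = np.array([0.5450, 0.5448, 0.5453])   # estimates from runs at nearby beta0
err  = np.array([0.0004, 0.0003, 0.0006])   # their error bars
est, w = patch(fbar, err)
print(est, w.round(2))
```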

However, several complications arise which deserve discussion.

i) Typically, our data exhibit large autocorrelation times (see next section). This limits a serious statistical analysis to using twenty jackknife bins. Imposing a 95% confidence limit (more precisely the [0.025, 0.975] confidence interval), the χ² distribution [25] implies that (△f̄_i)²/σ²(f̄_i) fluctuates in the range [0.58, 2.13]. Our experience is that the effect of these fluctuations on f̄(β) is harmless as long as only data sets from simulations at β₀ⁱ sufficiently close to β are included. However, error bar fluctuations may become a serious problem if equation (2.9b) is applied blindly. A highly recommendable cut-off is to set w_i = 0 for β ∉ [β_min, β_max]. It may be suitable to constrain the included data sets even further, for instance by excluding data sets which a) taken alone give unsatisfactory results f̄_i(β) and b) have β located at one of the edges of the validity range.

ii) Once the weight factors are determined, error bars of f̄(β) from the combined statistics are not calculated by the standard rules of error propagation. Instead, new bins are formed, each relying on the combined statistics (2.6). If, after binning, autocorrelations still existed in the single data sets, they will become further reduced now, as each new bin combines data from independent simulations. When appropriate, the new bins are used to construct jackknife bins in the standard way.

iii) For connected estimators f̄^c = (f²)‾ − f̄², like the specific heat or susceptibilities, it does not follow from equation (2.8) that the optimal weight factor is w_i ∼ 1/σ²(f̄^c_i). The reason is that one has to calculate f̄² according to f̄² = (Σ_i w_i f̄_i)² and not according to f̄² = Σ_i w_i (f̄_i)². But patching f̄^c would require that (f²)‾ and f̄² be calculated with identical weight factors. Fortunately, however, this problem seems not to be too serious either. Weight factors calculated from f̄_i, (f²)‾_i or f̄^c_i should differ little. We have already noted that there are substantial statistical weight factor fluctuations which, in the reasonable range, do not influence the final result significantly. Therefore, we decided in favour of the simplest solution, namely to use the weight factors w_i ∼ 1/(△f̄_i)² for the calculation of f̄^c(β).

3. AUTOCORRELATIONS IN COMPUTER TIME

It has been emphasized in recent literature [18] that one has to control the integrated autocorrelation time τ̂_int to be sure about the final error bars.

However, in a typical realistic situation a Monte Carlo simulation may be perfectly reasonable with respect to all calculated quantities, including the confidence intervals implied by their error bars. And yet, τ̂_int may remain the only quantity of interest that cannot be calculated reliably in such a simulation. It seems that previous investigations did not pay attention to this scenario and, therefore, we find it worthwhile to present some details.

To recall the concepts we first consider results obtained by a Metropolis generation of the Gaussian distribution. We generate 131,072 = 2¹⁷ numbers; due to a finite Metropolis stepsize, successive numbers are correlated. The integrated autocorrelation time is defined by

    2τ_int(n_b) = ρ(0) + 2 Σ_{i=1}^{n_b} ρ(i)   with   ρ(i) = ⟨(s₀ − ŝ)(s_i − ŝ)⟩ / ⟨(s₀ − ŝ)(s₀ − ŝ)⟩.   (3.1)

For n_b → ∞ one recognizes that 2τ_int is just the correct variance σ²(s̄) divided by the naive variance (obtained by using all events s_n as if they were independent). A convenient way to calculate the correct variance is provided by binning [19]. The connection between binning and the integrated autocorrelation time has been emphasized in [20]. Let us partition our data set into N_b = 2^{17−k} bins of length n_b = 2^k (N_b = 32 for k = 12) and denote the variance of s̄ after the kth binning by σ²_k(s̄). Note that

    σ²(s̄) = lim_{k→∞} σ²_k(s̄),                                          (3.2a)

whereas the naive variance is

    σ̄²(s̄) = σ²₀(s̄).                                                     (3.2b)

Now

    σ²_k(s̄)/σ²₀(s̄) = ρ(0) + 2 (n_b − 1)/n_b ρ(1) + 2 (n_b − 2)/n_b ρ(2) + ... + (2/n_b) ρ(n_b − 1),   (3.3)

and for n_b → ∞ the approach towards 2τ_int(n_b) follows from the rapid falloff of ρ(i) as i → ∞. We introduce the notation (△_k s̄)² for the estimators corresponding to σ²_k(s̄). Using the Gaussian Metropolis data, figure 5 compares the increase of the variance under multiple binning, (△_k s̄)²/(△₀ s̄)², with the direct calculation of the integrated autocorrelation time.
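The binning ratio on the left hand side of (3.3) is easily computed; in the sketch below (ours) an AR(1) process with known 2τ_int = (1 + 0.9)/(1 − 0.9) = 19 stands in for the Metropolis-Gaussian data:

```python
# Ratio of binned to naive variance of the mean; approaches 2*tau_int as the
# bin length n_b = 2^k grows, cf. eqs. (3.2)-(3.3).
import numpy as np

def variance_ratio(x, k):
    nb = 2 ** k                                  # bin length n_b
    Nb = len(x) // nb                            # number of bins N_b
    bins = x[: Nb * nb].reshape(Nb, nb).mean(axis=1)
    naive = np.var(x, ddof=1) / len(x)           # naive variance of the mean
    return (np.var(bins, ddof=1) / Nb) / naive   # estimator of 2*tau_int(n_b)

rng = np.random.default_rng(1)
x = np.empty(2 ** 17)                            # 131,072 numbers, as in the text
x[0] = rng.normal()
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()         # AR(1): rho(i) = 0.9**i
for k in (4, 8, 12):
    print(k, round(variance_ratio(x, k), 1))     # tends towards 19
```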

As expected, for sufficiently large n_b identical values (≈ 3.7) are approached. The convergence is somewhat better for the direct calculation of the integrated autocorrelation time, whereas the binning procedure is computationally far more efficient. As usual in this paper, the error bars of 2τ_int are calculated with the double jackknife method [23]. However, for the error bar of the empirical variance (△_k s̄)² we do not use an estimator, but instead assume the χ² distribution with N_b − 1 degrees of freedom for (N_b − 1)(△_k s̄)²/σ²_k(s̄). The error bars depicted in the figures are then chosen such that twice their range, seen from the center, corresponds to a normal 95.4% confidence interval. The depicted symbols are the centers of these error bars and not the actually calculated estimators (△_k s̄)²/(△₀ s̄)² (due to the asymmetry of the χ² distribution, these have somewhat lower values, which quickly approach the center as N_b → ∞). If n_b is large, the bins not only become statistically independent, but the central limit theorem implies at the same time that they become normally distributed (each bin is an average over a large number of original data). For the normal (Gaussian) distribution it is well known [25] that the variance is χ² distributed. Therefore, our assumption is justified whenever the statistical analysis is correct, i.e. sufficiently large n_b values can be reached such that the correlations between the bins can indeed be neglected.

Figure 6 is the analog of figure 5, calculated with our SU(3) action data from the 4·24³ lattice at β₀ = 5.691. Obviously, satisfactory convergence is not achieved. It is remarkable that direct calculation of the integrated autocorrelation time gives for large n_b a much noisier estimator than we obtain from multiple binning. The enlargement of the first part of figure 6 in its upper left corner demonstrates this most clearly. Presumably binned data are favourably (de-)correlated. Consequently, we now rely on multiple binning. For even larger n_b (small number of bins N_b), the error bars of σ²_k(s̄)/σ²₀(s̄) also increase rapidly.

It is a perfectly possible scenario that N_b = 20 bins are already sufficiently independent to allow a statistically reliable calculation of the action s̄ (the 2σ confidence level of the Student distribution with N_b = 20 is almost normal), but a self-consistent calculation of the integrated autocorrelation time is impossible. The binning procedure sheds light on the reason.

Let us assume we have N_b = 20 independent bins. As already mentioned in this paper, it follows from the χ² distribution with N_b − 1 degrees of freedom that the 95% confidence interval of (△s̄)²/σ²(s̄) is [0.58, 2.13]. In other words, the correct integrated autocorrelation time could be two times larger than the estimate or, equally well, only half the estimate. (Remember that the error bars given in the figures are half these confidence intervals.) To reduce this uncertainty to approximately ±15%, we need about N_b = 400 independent bins. When, as in gauge theories or especially in full QCD, the computational effort is large, computer resources may only allow one to create about N_b = 20 independent bins. This may be fully sufficient to calculate quantities of physical interest with reliable confidence intervals. However, to compute the integrated autocorrelation time reliably requires a factor of 20 more computer time. We conclude that the demand to calculate the integrated autocorrelation time in each investigation is exaggerated. Instead one may have to work under the assumption that the MC statistics is sufficiently large, for instance to give about N_b = 20 independent bins, and then support this assumption with a number of consistency checks. For example, one may perform Student difference tests for estimators from independent MC runs (see the subsequent sections). Such an assumption of a sufficiently large statistics would be rather similar in spirit to other assumptions (for instance about the approach towards the continuum limit) already made for a typical lattice gauge theory simulation.

As one expects, the situation is under better control for our smaller systems. Figure 7 depicts the results from our 4·6³ lattice at β₀ = 5.640. While a direct calculation of the integrated autocorrelation time remains inconclusive, its estimate from multiple binning is possible: 2τ̂_int = 100 ± 10 is consistent with all results from k = 9 (N_b = 234) on. Proceeding from smaller to larger lattices we obtain rough estimates of 2τ̂_int for all our data sets. These results are included in table 1 by defining the number of independent (indep.) measurements as the number of measurements divided by 2τ̂_int. One has to caution that in the case of L = 20 (240,000 original measurements) we can only rely on N_b = 14 bins for estimating 2τ̂_int. This means an error of about ±50% (around the center value) and no consistency checks towards higher k values. The L = 24 estimates may be unreliable altogether. However, for our subsequent purposes the achieved overall accuracy seems to be sufficient.

4. SPECIFIC HEAT

To characterize phase transitions, action (energy) fluctuations are particularly suitable. The variance of the action is related to the specific heat by c_V = β⁻² σ²(S)/V. The large L finite size behaviour is

    σ²(S) = ⟨(Σ_p (S_p − Ŝ_p)) (Σ_p (S_p − Ŝ_p))⟩ = const L^ρ Σ_p ⟨(S_p − Ŝ_p)²⟩ = const L^ρ V_p σ²(S_p)   with   ρ = α/ν,   (4.1)

where σ²(S_p) is a constant bounded by (S_p^{min} − S_p^{max})²/4. It is convenient to use the action density (2.2):

    σ²(s) = V_p⁻² σ²(S) = const L^ρ V_p⁻¹ σ²(S_p).                       (4.2)

The exponent ρ differentiates three interesting classes:

    1. ρ = 0 for noncritical behaviour,
    2. 0 < ρ < d for second order critical behaviour,
    3. ρ = d for a first order phase transition.                         (4.3)

Here the given values of ρ are defined as those obtained in the limit L → ∞. It is convenient to introduce the notion of a strong second order phase transition for transitions with ρ less than but close to d (for instance ρ = 2.99 and d = 3). For second and first order transitions, the subleading correction to equation (4.3) is assumed to be non-critical. This leads to the following FSS fits:

    σ²(s) = a₁ L^{ρ−d} + a₂ L^{−d}   for a second order phase transition   (4.4a)

Table 2: Results from the action.

Lt · L³   β₀      s̄              σ²₀(s)      σ²_max(s)          bias    β_max
2 · 6³    5.094   0.4390 (10)    3.42 10⁻⁴   3.44 (08) 10⁻⁴     11 %    5.0917 (15)
2 · 8³    5.090   0.4329 (16)    2.67 10⁻⁴   2.70 (07) 10⁻⁴     35 %    5.0908 (10)
2 · 10³   5.090   0.4249 (26)    2.05 10⁻⁴   2.69 (11) 10⁻⁴    102 %    5.0928 (11)
2 · 12³   5.092   0.4282 (31)    2.31 10⁻⁴   2.56 (07) 10⁻⁴    155 %    5.0928 (07)
2 · 12³   5.095   0.4429 (25)    1.61 10⁻⁴   2.62 (31) 10⁻⁴     62 %    5.0928 (10)
4 · 4³    5.570   0.52781 (48)   2.51 10⁻⁴   2.698 (66) 10⁻⁴     4 %    5.5446 (32)
4 · 4³    5.610   0.54101 (34)   1.84 10⁻⁴   none
4 · 4³    5.640   0.54822 (23)   2.29 10⁻⁴   none
4 · 6³    5.500   0.49678 (13)   5.32 10⁻⁵   none
4 · 6³    5.640   0.53732 (25)   5.91 10⁻⁵   5.91 (12) 10⁻⁵      6 %    5.638 (09)
4 · 6³    5.645   0.53905 (20)   5.99 10⁻⁵   6.17 (29) 10⁻⁵      3 %    5.614 (14)
4 · 6³    5.660   0.54322 (18)   5.62 10⁻⁵   none
4 · 6³    5.690   0.55186 (14)   4.66 10⁻⁵   none
4 · 6³    5.740   0.56188 (12)   3.47 10⁻⁵   none
4 · 8³    5.600   0.52471 (11)   2.25 10⁻⁵   none
4 · 8³    5.670   0.54429 (21)   2.58 10⁻⁵   2.59 (07) 10⁻⁵      6 %    5.6713 (36)
4 · 8³    5.693   0.55098 (23)   2.35 10⁻⁵   2.49 (07) 10⁻⁵     10 %    5.6767 (52)
4 · 8³    5.720   0.55787 (10)   1.65 10⁻⁵   none
4 · 10³   5.600   0.52458 (07)   1.11 10⁻⁵   none
4 · 10³   5.680   0.54664 (19)   1.36 10⁻⁵   1.406 (50) 10⁻⁵     9 %    5.6876 (25)
4 · 10³   5.693   0.55119 (17)   1.28 10⁻⁵   1.346 (50) 10⁻⁵     6 %    5.6842 (32)
4 · 10³   5.710   0.55558 (14)   1.02 10⁻⁵   none
4 · 12³   5.620   0.52998 (05)   6.31 10⁻⁶   none
4 · 12³   5.681   0.54603 (18)   7.85 10⁻⁶   9.23 (55) 10⁻⁶     23 %    5.6909 (30)
4 · 12³   5.691   0.54990 (27)   9.35 10⁻⁶   9.52 (33) 10⁻⁶     21 %    5.6884 (19)
4 · 12³   5.703   0.55427 (12)   6.42 10⁻⁶   none
4 · 14³   5.682   0.54637 (19)   5.40 10⁻⁶   6.50 (32) 10⁻⁶     28 %    5.6882 (14)
4 · 14³   5.691   0.54950 (30)   6.99 10⁻⁶   7.14 (36) 10⁻⁶     25 %    5.6924 (13)
4 · 14³   5.698   0.55281 (13)   4.51 10⁻⁶   none
4 · 16³   5.683   0.54598 (14)   3.21 10⁻⁶   4.95 (42) 10⁻⁶     44 %    5.6902 (15)
4 · 16³   5.691   0.54923 (28)   5.16 10⁻⁶   5.30 (26) 10⁻⁶     28 %    5.6921 (10)
4 · 16³   5.692   0.54987 (28)   5.40 10⁻⁶   5.47 (26) 10⁻⁶     27 %    5.6916 (10)
4 · 16³   5.697   0.55245 (13)   3.47 10⁻⁶   none
4 · 20³   5.690   0.54839 (22)   3.32 10⁻⁶   3.90 (16) 10⁻⁶     54 %    5.6918 (6)
4 · 20³   5.691   0.54912 (28)   3.88 10⁻⁶   4.01 (15) 10⁻⁶     42 %    5.6915 (6)
4 · 20³   5.692   0.54918 (32)   3.64 10⁻⁶   3.90 (24) 10⁻⁶     43 %    5.6929 (7)
4 · 24³   5.691   0.54781 (18)   1.56 10⁻⁶   4.4 (1.7) 10⁻⁶     46 %    5.6934 (9)
4 · 24³   5.693   0.55101 (18)   1.93 10⁻⁶   2.89 (35) 10⁻⁶     46 %    5.6910 (8)

and

    σ²(s) = a₁ + a₂ L^{−d}   for a first order phase transition.         (4.4b)

The first order fit allows one to determine the latent heat via the relation

    △s = 2√a₁.                                                           (4.5)

A weak first order transition is a first order transition with a small latent heat. Numerically it is difficult to distinguish a weak first order from a strong second order transition. The FSS relations (4.4) are realized for β = β_c, where β_c is the infinite volume critical coupling. In practice, the exact β_c value is normally unknown. But we can construct the L dependent series of β_max(L) values defined by

    σ²(s; L)(β) = maximum   for   β = β_max(L).                          (4.6)

The corresponding σ²(s, β_max; L) is denoted by σ²_max(s). Eqs. (4.4) are also satisfied for this series. In addition, this approach yields a precise estimate of β_c through the fit

    β_max(L) = β_c + const L^{−d}.                                       (4.7)

Exponentially small corrections to this fit are of relevance for small systems [7], but are better neglected [10] if the purpose is to estimate β_c. This is best done by fitting results from sufficiently large systems.
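Since (4.4b) and (4.7) are linear in L^{−d}, weighted least squares suffices for these fits. The sketch below is ours, with synthetic numbers of the magnitude of table 3 rather than the measured data:

```python
# First order FSS fit sigma^2_max(s) = a1 + a2 * L**(-d), eq. (4.4b),
# and latent heat Delta-s = 2*sqrt(a1), eq. (4.5).
import numpy as np

def fit_first_order(L, y, dy, d=3):
    X = np.column_stack([np.ones_like(L), L ** (-float(d))])
    W = np.diag(dy ** -2.0)
    cov = np.linalg.inv(X.T @ W @ X)
    a = cov @ (X.T @ W @ y)                    # (a1, a2)
    return a, np.sqrt(np.diag(cov))

L  = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 20.0, 24.0])
y  = np.array([2.54e-5, 1.38e-5, 9.4e-6, 6.8e-6, 5.3e-6, 4.0e-6, 3.0e-6])
dy = 0.04 * y                                  # illustrative error bars
(a1, a2), _ = fit_first_order(L, y, dy)
print("latent heat:", 2.0 * np.sqrt(a1))       # eq. (4.5)
```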

For notational simplicity we henceforth drop the distinction between estimators and exact theoretical definitions. Numerical results for each of our data sets are given in table 2. No stable values for σ²_max are obtained when β₀ is too far from β_max. Similarly, insufficient statistics may cause unstable behavior; the 4·24³ lattice at β₀ = 5.691 is presumably an example. The discrepancies with [17] are only due to the fact that we now use 20 jackknife bins, whereas the error bars in [17] were estimated with respect to 32 jackknife bins (on the large systems a too generous number). The results for σ²_max(s) are calculated by means of the double jackknife approach [23], and table 2 also lists the additional correction for the bias in % of the statistical error bar. Clearly, statistical error and bias of the entire statistics have a similar order of magnitude. For future reference we have also included the average action s̄ = s̄(β₀) and the variance σ²₀(s) at β₀ in table 2.
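For readers unfamiliar with the method, a single-level jackknife with 20 bins looks as follows (our simplified sketch; the double jackknife used for σ²_max(s) and the tabulated bias corrections is more involved, see [23]):

```python
# Jackknife error bar and bias correction for a biased estimator
# (here: the variance of a time series).
import numpy as np

def jackknife(x, estimator, n_bins=20):
    bins = np.array_split(x, n_bins)
    full = estimator(x)
    jk = np.array([estimator(np.concatenate(bins[:i] + bins[i + 1:]))
                   for i in range(n_bins)])               # leave-one-bin-out
    err = np.sqrt((n_bins - 1) * np.var(jk))              # jackknife error bar
    corrected = n_bins * full - (n_bins - 1) * jk.mean()  # bias correction
    return corrected, err

rng = np.random.default_rng(2)
x = rng.normal(size=120_000)
print(jackknife(x, np.var))
```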

Table 3: Standard error propagation for the action results.

Lt · L³   σ²_max(s), goodness        β_max, goodness      #
2 · 12³   2.56 (07) 10⁻⁴, 0.85      5.0928 (06), 0.94    2
4 · 4³    2.70 (07) 10⁻⁴, −         5.5446 (32), −       1
4 · 6³    5.95 (11) 10⁻⁵, 0.41      5.631 (08), 0.16     2
4 · 8³    2.54 (05) 10⁻⁵, 0.32      5.6730 (30), 0.40    2
4 · 10³   1.38 (04) 10⁻⁵, 0.40      5.6863 (20), 0.41    2
4 · 12³   9.44 (29) 10⁻⁶, 0.65      5.6891 (16), 0.49    2
4 · 14³   6.78 (24) 10⁻⁶, 0.20      5.6905 (10), 0.03    2
4 · 16³   5.32 (17) 10⁻⁶, 0.57      5.6916 (07), 0.57    3
4 · 20³   3.95 (10) 10⁻⁶, 0.86      5.6920 (04), 0.29    3
4 · 24³   2.95 (34) 10⁻⁶, 0.40      5.6921 (06), 0.05    2

Results are calculated from table 2 according to standard error propagation. The last column shows how many data sets were combined.

Using standard error propagation, we combine in table 3, for each lattice size, the σ²_max(s) and β_max estimates of table 2. Besides σ²_max(s) and β_max, the goodness of fit [26] is also listed. It is defined as the likelihood that the discrepancies between the estimates of table 2 (lattice size fixed) are due to chance. In the case of two data sets we use the standard Student test [25] with N = 20 data. For more than two data sets we rely on χ² and assume a Gaussian distribution. If the assumptions are correct (of course there are slight corrections), the goodness of fit is a uniformly distributed random number between zero and one. Except for the goodness of fit for β_max from the 4·14³ and 4·24³ systems, all other values are reasonable. We tend to attribute the bad fit for the 4·14³ to statistical fluctuations (after all we have a rather large number of systems), whereas a closer inspection (see below) of the 4·24³ data sets gives rise to the suspicion that the assembled statistics is insufficient in this case.

Table 4 combines the action results by means of the patching method outlined in section 2. Within their statistical errors, the corresponding estimates of tables 3 and 4 are fully compatible.

Only for the 4·24³ lattices does patching reduce the error significantly. Figure 8 depicts the σ²(s, β) reweighting calculation for the single 4·24³ lattices, whereas figure 9 shows the patched result. From these figures it is clear that the real improvement due to patching is in fact much more impressive than is noticeable from the σ²_max(s) error bar reduction in table 4 as compared with table 3 (4·24³ lattice).

Table 4: Patching for the action results.

Lt · L³   σ²_max(s)         β_max         # : weights
2 · 12³   2.55 (07) 10⁻⁴    5.0928 (07)   2: 0.67, 0.33
4 · 4³    2.71 (07) 10⁻⁴    5.5439 (34)   2: 0.76, 0.24
4 · 6³    6.03 (13) 10⁻⁵    5.624 (11)    2: 0.36, 0.64
4 · 6³    6.03 (16) 10⁻⁵    5.620 (11)    3: 0.30, 0.53, 0.17
4 · 8³    2.54 (06) 10⁻⁵    5.6719 (30)   2: 0.61, 0.39
4 · 10³   1.37 (03) 10⁻⁵    5.6852 (18)   2: 0.45, 0.55
4 · 12³   9.38 (29) 10⁻⁶    5.6896 (15)   2: 0.34, 0.66
4 · 14³   6.70 (25) 10⁻⁶    5.6901 (09)   2: 0.48, 0.52
4 · 14³   6.68 (20) 10⁻⁶    5.6895 (07)   3: 0.39, 0.44, 0.17
4 · 16³   5.24 (15) 10⁻⁶    5.6916 (07)   3: 0.17, 0.41, 0.43
4 · 16³   5.26 (14) 10⁻⁶    5.6913 (06)   4: 0.15, 0.35, 0.36, 0.14
4 · 20³   3.93 (09) 10⁻⁶    5.6920 (04)   3: 0.38, 0.35, 0.27
4 · 24³   3.35 (17) 10⁻⁶    5.6919 (13)   2: 0.42, 0.58

The last column gives the number of patched data sets and the relative weights, ordered by increasing β₀. When the numbers match, the combined data sets are identical with those processed in table 3. Otherwise, the validity ranges of table 1 make it clear which data sets have been added. All error bars are calculated with respect to twenty jackknife bins and corrected for the bias.

Table 5: Lt = 4 FSS fits of σ²_max(s).

          (Standard error propagation)       (Patching)
L range   10⁶a₁      10²a₂      Q            10⁶a₁      10²a₂     Q
4 - 24    2.22(09)   1.26(02)   10⁻²³        2.22(08)   1.27(2)   10⁻²⁵
6 - 24    2.39(09)   1.20(02)   0.40         2.42(08)   1.18(2)   0.25
8 - 24    2.47(10)   1.17(03)   0.76         2.49(09)   1.15(3)   0.81
10 - 24   2.48(12)   1.16(04)   0.64         2.52(10)   1.14(3)   0.84
12 - 24   2.39(14)   1.22(06)   0.77         2.46(12)   1.17(5)   0.91
14 - 24   2.41(18)   1.21(10)   0.58         2.51(14)   1.14(8)   0.97

These fits use eq. (4.4b): thus, a first order transition is assumed.

Table 6: Lt = 4 σ²(s) FSS fits of β_max(L).

          (Standard error propagation)          (Patching)
L range   β_c         a              Q          β_c         a              Q
4 - 24    5.6934(3)   -09.54(21.0)   0.20       5.6933(3)   -09.60(22.0)   0.15
6 - 24    5.6934(4)   -10.00(01.0)   0.15       5.6936(4)   -10.60(01.1)   0.13
8 - 24    5.6931(4)   -08.40(01.2)   0.60       5.6933(4)   -09.30(01.2)   0.58
10 - 24   5.6928(5)   -06.10(01.8)   0.98       5.6929(5)   -07.60(01.7)   0.80
12 - 24   5.6927(5)   -05.60(02.6)   0.96       5.6929(6)   -07.50(02.5)   0.65
14 - 24   5.6926(6)   -04.90(03.4)   0.89       5.6932(7)   -09.30(03.1)   0.66

The β_c are extracted using eqs. (4.6) and (4.7).

Table 7: Lt = 2 FSS fits of σ²_max(s) and β_max(L).

L range   10⁴a₁      10²a₂         Q       β_c          a             Q
6 - 12    2.39(07)   2.17(25.0)    0.14    5.0929(07)   -0.44(38.0)   0.36
8 - 12    2.53(10)   0.94(71.0)    0.48    5.0937(09)   -1.43(84.0)   0.61
10 - 12   2.38(23)   3.10(03.1)    −       5.0928(20)   0.00(03.0)    −

Fits to σ²_max use eq. (4.4b); the β_c are extracted using eqs. (4.6) and (4.7).

For our other data sets the reweighting analysis of single runs already yields reasonable σ²(s, β) pictures. Therefore, only one more illustration of patching: Figure 10 gives the combined σ²(s, β) curve from all four of our 4·16³ data sets.

Let us first analyse the Lt = 4 data. Table 5 collects FSS fits (4.4b) (which assume a first order transition) for the σ²_max(s) estimates of tables 3 and 4.

Obviously, results from lattices in the range from L = 8 on are fully compatible with our hypothesis that the transition is first order. If we now treat ρ as a free parameter and perform second order fits (4.4a), we get ρ = 3.08 ± 0.23 for the range L = 8−24 (patched). Although clearly consistent with ρ = 3, the accuracy is not very restrictive, and the error becomes even worse if we restrict the fit to larger lattice sizes. Our best estimate of the latent heat is depicted in figure 11 and yields

    △s = (3.16 ± 0.06)·10⁻³,   (Lt = 4).                                 (4.8)

Here we have used the L = 8−24 fit for the patched results, as given in table 5.

The patched results are preferred on the basis of the arguments given in section 2. Monitoring the goodness of fit leads to the choice L = 8−24. As soon as the goodness of fit is reasonable, there is no statistically significant inconsistency between the smaller and the larger lattices. Still, omitting some smaller lattices in such a situation may decrease the systematic errors. However, this is not relevant anymore within the statistical accuracy. The main effect of omitting the smaller lattices is only an increase of the statistical noise.

Next, we use the β_max(L) estimates of tables 3 and 4 as input for the FSS fit (4.7) and arrive at the results of table 6. Our best estimate of β_c, corresponding to (4.8), is

    β_c^{specific heat} = 5.6933 ± 0.0004,   (Lt = 4).                   (4.9)

The line of argument is similar as for our latent heat estimate.

The analysis of the Lt = 2 data is straightforward.

This should becompared with figures 1 and 2. In both cases we have the scale factor L/Lt = 5, but onlyfor Lt = 2 is the two-peak structure immediately clear.

For the latent heat as well as forβc the Lt = 2 FSS fits are now summarized in table 7. For L = 12 (the only size for whichwe have two data sets), the results of straightforward error propagation and patching areidentical (see tables 3 and 4).

Thus we only need one table to show these fits. Our bestestimates (from L = 6 −12) are△s = (3.09 ± 0.05) 10−2,(Lt = 2)(4.10)andβ specific heatc= 5.0929 ± 0.0007,(Lt = 2).

(4.11)19

5. PARTITION FUNCTION ZEROS

Ref. [27] discusses the physical relevance of partition function zeros. Their numerical calculation was pioneered in Refs. [5,28,29,21,9,17], of which [21,17] are concerned with SU(3) lattice gauge theory. In spin systems the action takes discrete values and the partition function becomes a polynomial in exp(β). Under such circumstances [28,9] the Newton-Raphson method is convenient for calculating the partition function zeros. When the action takes continuous values, a time series analysis is more recommendable, and we calculate the partition function zeros in two steps: first we scan graphically [29] for the separate zeros of the real and imaginary part. Figure 14 illustrates this for our 4·20³ lattice at β₀ = 5.692; Re(Z) = 0 is denoted by the crosses and Im(Z) = 0 by the circles. A partition function zero is obtained where the lines cross.
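A sketch of this first, graphical step in code (ours; the data are synthetic Gaussian, so any crossing the scan produces is a statistical fake of precisely the kind analyzed in subsection 5.1 below):

```python
# Reweight to complex beta = beta_x + i*beta_y and map the sign changes of
# Re(z) and Im(z) on a grid; curve crossings are candidate zeros.
import numpy as np

def z_complex(S, beta0, bx, by):
    e = (bx - beta0) * S
    e -= e.max()                              # overflow guard of section 2
    return np.sum(np.exp(e + 1j * by * S))    # numerator-free case of (2.3)

rng = np.random.default_rng(3)
S = rng.normal(0.549 * 6 * 4 * 20**3, 60.0, size=100_000)   # synthetic
bx_grid = np.linspace(5.688, 5.696, 41)
by_grid = np.linspace(0.001, 0.010, 41)
z = np.array([[z_complex(S, 5.692, bx, by) for by in by_grid]
              for bx in bx_grid])
re_zero = np.diff(np.sign(z.real), axis=0) != 0   # Re(Z) = 0 curve (crosses)
im_zero = np.diff(np.sign(z.imag), axis=0) != 0   # Im(Z) = 0 curve (circles)
```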

Second, to compute the precise value for the leading zero, we then iterate with AMOEBA [26] (with starting values in a sufficiently small neighborhood of the zero).

Before we can present our SU(3) results we have to clarify some subtle details. For smaller SU(3) lattices we noted in [30] that our empirical action distributions are well described by Gaussian fits.

However, a Gaussian distribution does not give rise to zeros in the complex β plane. Nevertheless we have reported zeros in [17]. To resolve this paradoxical situation, we first study Gaussian random numbers and then proceed with the analysis of our SU(3) action distributions.

5.1. The Gaussian Distribution

Assume a lattice gauge theory MC simulation (V_p = number of plaquettes) and let

    x = s − ŝ.                                                           (5.1)

For non-critical behaviour the measured probability density for x will be

    P(x) = √(A/π) exp(−Ax²)   with   A = 1/(2σ²) = a V_p.                (5.2)

By reweighting with β = β_x + iβ_y we obtain the partition function up to a normalization factor:

    z(β) = Z(β)/Z(β₀) = √(A/π) ∫_{−∞}^{+∞} exp(Bx + iCx) exp(−Ax²) dx    (5.3a)

Table 8: Partition function zeros.

Lt · L³   β₀       β₀^x          β₀^y           bias_x   bias_y   △β_max^x   β_max^y
2 · 6³    5.0940   5.0910 (18)   0.03960 (10)     3 %     -6 %    0.03150    0.05050
2 · 8³    5.0900   5.0905 (11)   0.01759 (36)     2 %    -10 %    0.01290    0.02180
2 · 10³   5.0900   5.0927 (11)   0.00865 (14)    -8 %     -6 %    0.00640    0.01040
2 · 12³   5.0920   5.0928 (08)   0.00502 (08)    -4 %     -9 %    0.00080    0.00500
2 · 12³   5.0950   5.0929 (11)   0.00495 (11)    23 %    -28 %    0.00350    0.00570
4 · 4³    5.5700   5.5500 (03)   0.09800 (04)    -2 %     -5 %    0.05000    0.10800
4 · 4³    5.6100   5.5600 (05)   0.14900 (07)   -17 %      2 %    0.0000*    0.1230*
4 · 4³    5.6400   5.6070 (06)   0.18500 (06)    -7 %      7 %    0.0000*    0.1170*
4 · 6³    5.5000   none
4 · 6³    5.6400   5.6540 (10)   0.05200 (25)   104 %   -140 %    0.04200    0.06500
4 · 6³    5.6450   5.6560 (05)   0.07570 (64)   -50 %    -22 %    0.0000*    0.0660*
4 · 6³    5.6600   5.6420 (07)   0.07840 (47)    10 %     35 %    0.0000*    0.0670*
4 · 6³    5.6900   5.6450 (08)   0.08010 (80)   -12 %    -11 %    0.0000*    0.0620*
4 · 6³    5.7400   none
4 · 8³    5.6000   none
4 · 8³    5.6700   5.6747 (23)   0.04660 (27)    -2 %    -12 %    0.0000*    0.0410*
4 · 8³    5.6930   5.6791 (33)   0.04980 (42)     5 %    -16 %    0.0000*    0.0420*
4 · 8³    5.7200   none
4 · 10³   5.6000   none
4 · 10³   5.6800   5.6889 (14)   0.03010 (18)     8 %    -11 %    0.0000*    0.0270*
4 · 10³   5.6930   5.6864 (55)   0.02800 (81)     7 %     -7 %    0.0030*    0.0270*
4 · 10³   5.7100   none
4 · 12³   5.6200   none
4 · 12³   5.6810   none
4 · 12³   5.6910   5.6896 (17)   0.02030 (07)     6 %     -3 %    0.0000*    0.0180*
4 · 12³   5.7030   none
4 · 14³   5.6820   5.6886 (18)   0.01430 (09)    -2 %    -13 %    0.0050*    0.0140*
4 · 14³   5.6910   5.6922 (13)   0.01380 (07)     0 %    -13 %    0.0000*    0.0120*
4 · 14³   5.6980   5.6859 (23)   0.02020 (28)   -69 %     87 %    0.0000*    0.0130*
4 · 16³   5.6830   5.6904 (16)   0.01010 (10)   -11 %    -15 %    0.00810    0.01060
4 · 16³   5.6910   5.6918 (10)   0.01010 (06)    -2 %    -13 %    0.0000*    0.0094*
4 · 16³   5.6920   5.6917 (10)   0.00960 (05)    -2 %    -14 %    0.0000*    0.0092*
4 · 16³   5.6970   none
4 · 20³   5.6900   5.6917 (06)   0.00554 (22)    -5 %    -16 %    0.00230    0.00570
4 · 20³   5.6910   5.6915 (06)   0.00527 (17)     1 %     -6 %    0.00120    0.00540
4 · 20³   5.6920   5.6929 (07)   0.00531 (28)    -1 %    -15 %    0.0000*    0.0051*
4 · 24³   5.6910   5.6931 (07)   0.00270 (02)   -29 %    -35 %    0.00390    0.00423
4 · 24³   5.6930   5.6913 (09)   0.00320 (04)    32 %    -49 %    0.00280    0.00390

with (defining b_x and b_y)

    B = V_p(β_x − β₀) =: V_p b_x   and   C = V_p β_y =: V_p b_y.         (5.3b)

Integration over x gives

    z(β) = exp[(B² − C²)/(4A)] exp[iBC/(2A)] = |z(β)| [cos(BC/(2A)) + i sin(BC/(2A))].   (5.4)

We have zeros of the imaginary part for

    B = 0   and   C = 2nπA/B,  (n = 1, 2, ...)                           (5.5)

and of the real part for

    C = (2n + 1)πA/B,  (n = 0, 1, 2, ...).                               (5.6)

Rewriting equations (5.5-5.6) in terms of b_x, b_y we obtain zeros of Im(Z) for

    b_x = 0 (b_y arbitrary)   and   b_y = 2nπA/(V_p² b_x) = 2nπa/(V_p b_x),  (n = 1, 2, ...)   (5.5′)

and of Re(Z) for

    b_y = (2n + 1)πA/(V_p² b_x) = (2n + 1)πa/(V_p b_x),  (n = 0, 1, 2, ...).   (5.6′)

The variance σ²(|z(β)|) is easily calculated to be

    σ²(|z(β)|) = exp(B²/A) − |z(β)|².                                    (5.7)

Assume that a numerical calculation has generated N independent data with the probability density (5.2). We trust with an approximately 84% (one sided!) confidence level that the calculation of z(β) will not produce artificial zeros in the (B, C)-range determined by

    σ̄(|z(β)|) = σ(|z(β)|)/√N ≤ |z(β)|.                                   (5.8)

Defining

    X = exp[B²/(4A)]   and   Y = exp[−C²/(4A)],

the equality in (5.8) becomes

    X = (N + 1)^{1/2} Y,   where   (N + 1)^{−1/2} ≤ Y ≤ 1.               (5.9)

The argument is only approximate, since numerical results within this confidence radius may have some degree of independence, which is difficult to assess.

Here they are just treated as one event. To give a numerical example, we take A = 1/(2σ²) = 80,000 and V_p = 6·4·14³ = 65,856. We use a Gaussian pseudo random number generator and generate MC data according to the probability density (5.2). Subsequently, a reweighting analysis is done to determine the zeros.
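A small helper (ours) evaluating the condition (5.9) for exactly these numbers:

```python
# Is a point (beta_x, beta_y) inside the confidence radius of eq. (5.9)?
import numpy as np

def inside_radius(bx, by, beta0, N, A=80_000.0, Vp=6 * 4 * 14**3):
    B, C = Vp * (bx - beta0), Vp * by           # eq. (5.3b)
    X = np.exp(B**2 / (4.0 * A))
    Y = np.exp(-(C**2) / (4.0 * A))
    return X <= np.sqrt(N + 1.0) * Y            # eq. (5.9); True = trustworthy

print(inside_radius(5.691, 0.005, beta0=5.691, N=1_200))   # True
print(inside_radius(5.691, 0.020, beta0=5.691, N=1_200))   # False: reject zeros
```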

For 1,200 and 76,800 independent data, respectively, figures 15 and 16 compare the exact values for the leading zeros of Im(Z) and Re(Z) with those from the simulation. Using equation (5.9), the expected range of validity for the numerical results is also indicated and found to be respected by the data. Namely, the apparent crossings (zeros of the partition function) are seen to fall outside the confidence radius. In the Gaussian case we know for sure that this means they are numerical fakes. For our SU(3) data, we shall therefore have to reject any zeros which fall outside the confidence radius.

Table 9: Patching of partition function zeros.

Lt · L³   β₀^x         β₀^y           [β_min^x, β_max^x]   β_max^y   # : weights
2 · 12³   5.0928       0.00501 (7)    [5.090, 5.095]       0.0060    2: 0.63, 0.37
4 · 4³    5.5520 (3)   0.12300 (6)    none                 0.117*    3: 0.74, 0.24, 0.02
4 · 6³    5.6500 (7)   0.07800 (5)    none                 0.073*    4: 0.29, 0.37, 0.32, 0.02
4 · 8³    5.6740 (2)   0.04500 (2)    none                 0.042*    2: 0.68, 0.32
4 · 10³   unstable     unstable       none                 none      2: 0.52, 0.48
4 · 14³   5.6890 (2)   0.01450 (6)    [5.687, 5.691]       0.0156    3: 0.26, 0.44, 0.30
4 · 16³   5.6915 (7)   0.00990 (4)    [5.688, 5.693]       0.0106    3: 0.09, 0.45, 0.46
4 · 20³   5.6921 (5)   0.00550 (2)    [5.690, 5.694]       0.0058    3: 0.27, 0.42, 0.31
4 · 24³   5.6928 (6)   0.00290 (2)    [5.689, 5.695]       0.0044    2: 0.45, 0.55

We patch those data sets for which results are also reported in table 8. The weights are in order of increasing β₀. Instead of △β_max^x we now report [β_min^x, β_max^x], as β₀ is no longer unique.

5.2. SU(3) results

For single runs our leading partition function zeros are collected in table 8. To estimate whether they are inside or outside the confidence radii defined by equation (5.9), we use the estimated number of independent measurements from table 1 (for instance 44 for the 4·24³ lattices) and σ²₀(s) from table 2 as the width of a fictitious Gaussian distribution. This leads to the △β_max^x and β_max^y values reported in table 8. An example of a zero and its confidence radius is given in figure 17 (also showing the definition of △β_max^x and β_max^y). An asterisk in table 8 indicates that the estimated zero actually lies outside the radius of confidence.

We see that for the 4·L³ lattices most results have problems. The issue is actually quite subtle, as repetitions with similar statistics lead to reproducible results. The reason is that for β_x = β₀ and a Gaussian distribution, Z(β_y) falls off exponentially with increasing β_y. As soon as the statistical noise becomes large enough, this leads with certainty to a fake crossover of real and imaginary zeros, as illustrated in figures 14 and 16. Upon closer inspection of table 8 one may argue that the 2·12³, β₀ = 5.092 data set also has a problem. However, for the Lt = 2 distributions we have a pronounced double peak structure and the use of σ₀(s) from table 2 is not justified. Each of the single Gaussians has a much smaller width, leading to confidence radii larger than those reported in table 8.

To rescue some of our estimates for the 4·L³ lattices, we appeal to our patching method. The confidence radii for the patched results are estimated by iterating the equation

    √( Σ_{i=1}^{P} w_i² σ̄_i²(|z(β)|) ) = Σ_{i=1}^{P} w_i |z_i(β)|.       (5.10)

Whereas for lattices of size 4 · 43 to 4 · 123 we stillhave no conclusive results, we can now determine the leading zero on lattices 4 · L3 withL ≥14. For these lattices the FSS fitu0y(L) ∼L−1/νwithu = e−β(5.11)givesν = 0.35 ± 0.02, (Lt = 4)(5.12)24

with a goodness of fit Q = 0.26. This is consistent with ν = 1/d, i.e.

with a first ordertransition. Fitting β0x(L) with the FFS formula (4.7) yields another estimate of the infinitevolume critical point:β zerosc= 5.6934 ± 0.0007, (Lt = 4).

(5.13)The fitted range is L = 14 −24 and the goodness of fit is Q = 0.73.Compared with the Lt = 4 situation, the Lt = 2 case is unproblematic. All our zerosfrom L = 6 to L = 12 are within the radii of convergence and allow the FSS fit (5.11)ν = 0.332 ± 0.004, (Lt = 2)(5.14)with the acceptable goodness of fit Q = 0.12.
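As a rough cross-check (ours), an unweighted log-log line through the patched zeros of table 9 already reproduces this: with u = e^{−β} one has u_y ≈ e^{−β_x} β_y for small β_y, and the slope gives −1/ν. The quoted value (5.12) comes from the proper error-weighted fit, not this sketch.

```python
# Unweighted straight-line estimate of nu from eq. (5.11), Lt = 4, table 9.
import numpy as np

L  = np.array([14.0, 16.0, 20.0, 24.0])
bx = np.array([5.6890, 5.6915, 5.6921, 5.6928])      # Re of leading zero
by = np.array([0.01450, 0.00990, 0.00550, 0.00290])  # Im of leading zero
uy = np.exp(-bx) * by                                # u_y in u = exp(-beta)
slope, _ = np.polyfit(np.log(L), np.log(uy), 1)      # slope = -1/nu
print("nu =", -1.0 / slope)                          # close to 1/3, cf. (5.12)
```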

Fitting only the range L = 8 −12 givesν = 0.323(6) with the goodness of fit Q = 0.30. The result (5.14) confirms (with lesseffort) the pioneering work of Karliner et al.

[21], who reported ν = 0.331(6) from theirmore complicated constrained MC calculation of partition function zeros. Our L = 6 −12estimate of the critical β isβ zerosc= 5.0930 ± 0.0007, (Lt = 2).

(5.15)with a goodness of fit Q = 0.42. We see that the β zeroscestimates are well consistent withthe β specific heatcresults of section 4.Table 11: Polyakov Susceptibility by standard error propoagation.Lt · L3L−3χmax(P), goodnessβmax, goodness#2 · 1231.95 (05) 10−1, 0.835.0926 (06), 0.6724 · 632.92 (05) 10−2, 0.495.6957 (26), 0.9744 · 832.37 (06) 10−2, 0.035.6930 (18), 0.8634 · 1032.01 (06) 10−2, 0.865.6891 (12), 0.1934 · 1231.96 (09) 10−2, 0.745.6913 (12), 0.3524 · 1431.75 (09) 10−2, 0.175.6906 (10), 0.0724 · 1631.77 (07) 10−2, 0.995.6913 (06), 0.7434 · 2031.72 (06) 10−2, 0.655.6917 (04), 0.2034 · 2431.67 (31) 10−2, 0.345.6920 (06), 0.052Results are calculated from table 10 according to standard error progagation.

The lastcolumn shows how many data sets were combined.25

Table 10: Results from the Polyakov susceptibility.

Lt · L³   β₀      P̄                L⁻³χ₀(P)    L⁻³χ_max(P)         bias    β_max
2 · 6³    5.094   0.77480 (0211)   1.44 10⁻¹   1.52 (003) 10⁻¹     17 %    5.0836 (25)
2 · 8³    5.090   0.62510 (0378)   1.58 10⁻¹   1.60 (004) 10⁻¹     38 %    5.0894 (12)
2 · 10³   5.090   0.38700 (0700)   1.49 10⁻¹   1.87 (006) 10⁻¹    122 %    5.0936 (16)
2 · 12³   5.092   0.84400 (0680)   1.13 10⁻¹   1.95 (005) 10⁻¹    177 %    5.0928 (07)
2 · 12³   5.095   0.45420 (0872)   1.79 10⁻¹   2.01 (028) 10⁻¹     53 %    5.0922 (12)
4 · 4³    5.570   0.27737 (0296)   2.67 10⁻²   none
4 · 4³    5.610   0.33716 (0404)   3.34 10⁻²   none
4 · 4³    5.640   0.35873 (0370)   3.53 10⁻²   none
4 · 6³    5.500   0.09542 (0050)   0.26 10⁻²   none
4 · 6³    5.640   0.21732 (0437)   1.81 10⁻²   none
4 · 6³    5.645   0.23119 (0455)   1.99 10⁻²   3.06 (014) 10⁻²      8 %    5.6950 (06)
4 · 6³    5.660   0.27107 (0430)   2.38 10⁻²   2.90 (007) 10⁻²      5 %    5.6970 (04)
4 · 6³    5.690   0.37507 (0670)   2.87 10⁻²   2.88 (007) 10⁻²      6 %    5.6960 (06)
4 · 6³    5.740   0.47735 (0653)   2.74 10⁻²   3.06 (014) 10⁻²     16 %    5.6940 (05)
4 · 8³    5.600   0.09220 (0114)   2.59 10⁻³   none
4 · 8³    5.670   0.22317 (0830)   1.80 10⁻²   2.33 (010) 10⁻²     11 %    5.6915 (35)
4 · 8³    5.693   0.32431 (1003)   2.50 10⁻²   2.50 (008) 10⁻²     12 %    5.6937 (22)
4 · 8³    5.720   0.45422 (0555)   1.61 10⁻²   2.12 (012) 10⁻²     23 %    5.6922 (51)
4 · 10³   5.600   0.06563 (0066)   1.20 10⁻³   none
4 · 10³   5.680   0.21619 (0834)   1.72 10⁻²   2.047 (093) 10⁻²    12 %    5.6901 (17)
4 · 10³   5.693   0.32961 (0900)   1.86 10⁻²   1.984 (069) 10⁻²    17 %    5.6867 (18)
4 · 10³   5.710   0.41699 (0815)   1.33 10⁻²   1.997 (160) 10⁻²    30 %    5.6922 (28)
4 · 12³   5.620   0.05761 (0052)   0.10 10⁻²   none
4 · 12³   5.681   0.15435 (1002)   1.16 10⁻²   2.04 (025) 10⁻²     28 %    5.6925 (17)
4 · 12³   5.691   0.27161 (1581)   1.92 10⁻²   1.95 (009) 10⁻²     30 %    5.6903 (16)
4 · 12³   5.703   0.40754 (0718)   0.87 10⁻²   none
4 · 14³   5.682   0.15595 (1196)   1.14 10⁻²   1.64 (011) 10⁻²     30 %    5.6884 (15)
4 · 14³   5.691   0.24181 (1895)   1.82 10⁻²   1.87 (012) 10⁻²     31 %    5.6920 (12)
4 · 14³   5.698   0.37534 (0768)   0.73 10⁻²   none
4 · 16³   5.683   0.10277 (1031)   0.68 10⁻²   1.78 (009) 10⁻²    102 %    5.6902 (15)
4 · 16³   5.691   0.22025 (1992)   1.72 10⁻²   1.76 (012) 10⁻²     33 %    5.6915 (09)
4 · 16³   5.692   0.25229 (1937)   1.73 10⁻²   1.78 (013) 10⁻²     30 %    5.6914 (09)
4 · 16³   5.697   0.36245 (0793)   0.68 10⁻²   none
4 · 20³   5.690   0.16688 (1729)   1.41 10⁻²   1.65 (009) 10⁻²     44 %    5.6914 (05)
4 · 20³   5.691   0.20443 (2119)   1.71 10⁻²   1.76 (008) 10⁻²     55 %    5.6913 (05)
4 · 20³   5.692   0.19617 (2420)   1.62 10⁻²   1.73 (012) 10⁻²     52 %    5.6926 (06)
4 · 24³   5.691   0.08825 (1466)   0.73 10⁻²   2.43 (085) 10⁻²     50 %    5.6933 (09)
4 · 24³   5.693   0.31780 (1379)   0.76 10⁻²   1.55 (033) 10⁻²     32 %    5.6909 (08)

All error bars are calculated with respect to twenty jackknife bins and corrected for the bias.

Table 12: Patching for the Polyakov susceptibility.

Lt · L³   L⁻³χ_max(P)       β_max         # Patches: weights
2 · 12³   1.95 (06) 10⁻¹    5.0927 (06)   2: 0.68, 0.32
4 · 6³    2.95 (04) 10⁻²    5.7007 (26)   4: 0.14, 0.36, 0.33, 0.17
4 · 8³    2.37 (06) 10⁻²    5.6922 (12)   3: 0.37, 0.42, 0.21
4 · 10³   2.00 (05) 10⁻²    5.6885 (12)   3: 0.41, 0.47, 0.12
4 · 12³   1.95 (08) 10⁻²    5.6910 (13)   2: 0.31, 0.69
4 · 14³   1.71 (06) 10⁻²    5.6896 (07)   3: 0.40, 0.41, 0.19
4 · 16³   1.73 (06) 10⁻²    5.6910 (05)   4: 0.15, 0.36, 0.36, 0.13
4 · 20³   1.70 (05) 10⁻²    5.6917 (04)   3: 0.39, 0.34, 0.27
4 · 24³   1.88 (09) 10⁻²    5.6919 (12)   2: 0.46, 0.54

The last column gives the number of patched data sets and the relative weights, ordered by increasing β₀. All error bars are calculated with respect to twenty jackknife bins and corrected for the bias.

Table 13: Lt = 4 FSS fits to χ_max(P).

          (Standard error propagation)     (Patching)
L range   10²a₁      a₂          Q         10²a₁     a₂           Q
6 - 14    1.78(05)   2.5(2.0)    0.17      1.73(4)   2.7(2.0)     0.04
6 - 16    1.76(04)   2.6(2.0)    0.21      1.71(4)   2.7(2.0)     0.05
6 - 24    1.74(04)   2.6(2.0)    0.33      1.71(3)   2.7(2.0)     0.04
8 - 16    1.68(06)   3.5(5.0)    0.73      1.63(5)   3.8(5.0)     0.46
8 - 20    1.68(04)   3.5(4.0)    0.87      1.63(4)   3.8(4.0)     0.63
8 - 24    1.68(05)   3.5(5.0)    0.94      1.66(4)   3.5(4.0)     0.16
10 - 24   1.68(05)   3.4(9.0)    0.87      1.68(5)   3.2(8.0)     0.11
12 - 24   1.65(08)   4.8(2.3)    0.84      1.69(7)   2.8(1.9)     0.06
14 - 24   1.70(10)   1.8(4.3)    0.91      1.79(8)   -2.5(3.0)    0.25

Eq. (6.2) is used to fit the χ_max(P) in tables 11 and 12, respectively.

6. POLYAKOV LINE SUSCEPTIBILITY

Refs. [15,31] have advocated the FSS analysis of the susceptibility of the projected Polyakov line as a good indicator of the order of the SU(3) phase transition. Since we have measured and recorded the real and imaginary parts of the Polyakov line along with the plaquette action for each of our runs, we can apply the procedures discussed in Section 4 to the spectral-density FSS analysis of this quantity.

Table 14: Lt = 4 χ(P) FSS fits of β_max(L).

          (Standard error propagation)    (Patching)
L range   β_c         a           Q       β_c         a           Q
6 - 24    5.6914(3)   0.4(5.0)    0.20    5.6907(3)   1.1(5.0)    0.005
8 - 24    5.6917(4)   -0.7(8.0)   0.31    5.6910(4)   -0.2(7.0)   0.060
10 - 24   5.6921(4)   -2.8(1.3)   0.94    5.6918(4)   -3.4(1.3)   0.600
12 - 24   5.6920(5)   -2.3(2.2)   0.87    5.6918(6)   -3.9(2.3)   0.440
14 - 24   5.6923(6)   -4.4(3.3)   0.98    5.6923(7)   -6.7(3.0)   0.740

Eq. (4.7) is used to fit the β_max in tables 11 and 12, respectively.

Table 15: Lt = 2 FSS fits of χ_max(P) and its β_max(L).

L range   10²a₁       a₂             Q         β_c          a             Q
6 - 12    18.8(5.0)   -08.3(01.3)    0.0007    5.0942(08)   -2.29(56.0)   0.53
8 - 12    21.2(8.0)   -26.0(05.0)    0.7100    5.0942(10)   -2.31(97.0)   0.26
10 - 12   20.6(1.7)   -19.0(20.0)    −         5.0915(26)   2.10(04.1)    −

The susceptibility maxima for Lt = 2 are fitted to eq. (6.2) and the corresponding β_max to eq. (4.7).

We have a time series of measurements of the lattice average of the Polyakov line in the Euclidean time direction, Ω = V⁻¹ Σ_x Ω_x. These are complex numbers: Ω = Re Ω + i Im Ω = ρ e^{iφ}. (In our definition, Ω_x is larger than in Ref. [15] by the colour factor 3.) The projected real part P is computed as

    1. P = Re Ω                       for φ ∈ [−π/3, π/3),
    2. P = Re[exp(−i2π/3) Ω]          for φ ∈ [π/3, π),
    3. P = Re[exp(+i2π/3) Ω]          for φ ∈ [−π, −π/3).                (6.1)

P provides a clear distinction between the Z₃-symmetric phase of the theory (where it is close to zero) and any of the three broken phases (where it is nonzero). Since φ is projected out, there is no cancellation due to tunneling between the three broken phases (which makes the time series average Ω̄ vanish on a finite lattice, even above deconfinement). The susceptibility (variance) of P can now be analyzed exactly like the variance of the action was analyzed in Section 4. (To compute the moments of P, we calculate the Boltzmann factors from the corresponding action measurements.) The results corresponding to table 2 are collected in table 10. (Remember that our susceptibilities differ from those of Ref. [15] by a factor 9, because of the normalization of Ω_x.)
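A sketch of the projection (6.1) applied to a time series of Ω values (our code; at the measure-zero boundaries φ = ±π/3, ±π it follows numpy's angle convention):

```python
# Z3 projection (6.1) of the complex Polyakov line average Omega.
import numpy as np

def projected_real_part(omega):
    phi = np.angle(omega)                     # phase in (-pi, pi]
    P = np.real(omega).copy()                 # sector 1: phi in [-pi/3, pi/3)
    s2 = phi >= np.pi / 3.0                   # sector 2: rotate by exp(-i*2pi/3)
    s3 = phi < -np.pi / 3.0                   # sector 3: rotate by exp(+i*2pi/3)
    P[s2] = np.real(np.exp(-2j * np.pi / 3.0) * omega[s2])
    P[s3] = np.real(np.exp(+2j * np.pi / 3.0) * omega[s3])
    return P

# The variance of P over the (reweighted) time series is the susceptibility.
```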

Table 11 shows the results of the standard error propagation analysis, as applied to the valid results in table 10. Since the 4·4³ lattices yield no valid results, there is no 4·4³ entry in table 11. As for the specific heat, the data sets for L = 14, 24 (Lt = 4) give poor estimates of β_max, and the error on L⁻³χ_max is large for L = 24. Again, patching with weights given by eq. (2.9b) (where we put f_i = P_i) improves the situation, as can be seen in table 12. However, we notice that the patched L⁻³χ_max for L = 24 (Lt = 4) violates the trend in the results for the other L.

Regarding the FSS behavior of the susceptibility maxima and of the corresponding values of β, the same considerations apply as discussed in Section 4. Assuming a first order transition, we fit the data in tables 11 and 12 with the analog of eq. (4.4b):

    L⁻³χ(P) = a₁ + a₂ L⁻³.                                               (6.2)

Tables 13 and 15 show the results of these fits for Lt = 4 and Lt = 2, respectively. For Lt = 4, we see that the patching data (which are of better quality) are more restrictive than those obtained by error propagation. Our suspicion about the L = 24 data (presumably insufficient statistics) is confirmed by the Q values for the corresponding fits. In addition, L = 6 is presumably too small a lattice. The best estimate of the order parameter jump △P = 2√a₁ should be the one obtained from the patched fit L = 8−24 (Q is still acceptable, and while it makes sense to ignore the L = 6 result, we cannot discard our largest lattice L = 24 even though the statistics is poor). We get

    △P = 0.26 ± 0.04,   (Lt = 4).                                        (6.3)

For Lt = 2 too, the L = 6 data spoil the first order fit; the best estimate is

    △P = 0.92 ± 0.18,   (Lt = 2)                                         (6.4)

from the range L = 8−12.

We can also use the β values corresponding to the maxima of the susceptibility in order to obtain β_c through the fit (4.7). The results of this exercise appear in tables 14 (for Lt = 4) and 15 (for Lt = 2). Our best estimates for β_c^{susc} are

    β_c^{susc} = 5.6918 ± 0.0004,   (Lt = 4)                             (6.5)

(using L = 10−24 by patching), and

    β_c^{susc} = 5.0942 ± 0.0008,   (Lt = 2)                             (6.6)

using L = 6−12. Note that the β_c fit selects optimal ranges which are different from the one we used for the fit to the susceptibility maxima. Obviously, this is allowed in principle, since the height of the peak and its location in β may be independent functions of L. While β_c^{susc} for Lt = 2 is seen to be consistent with those estimated from the analysis of the specific heat (cf. eq. (4.11)) and of the partition function zeros (cf. eq. (5.15)), β_c^{susc} for Lt = 4 is rather small and becomes consistent only on a two σ level (cf. eqs. (4.9), (5.13)). This may indicate that with our statistics (presumably due to long time correlations) the Polyakov susceptibility is not accurate enough to allow really precise fits.

7. SUMMARY AND CONCLUSIONS

Spectral density methods greatly facilitate accurate FSS calculations. One can calculate pseudocritical couplings precisely and extrapolate towards the infinite volume critical coupling. From the specific heat, the analysis of the partition function zeros and the Polyakov loop susceptibility we have obtained three estimates of the infinite volume critical β which differ somewhat due to remaining systematic errors. In the absence of other strong criteria, one may average these estimates, weighted by their error bars, and quote as the final error the best error bar of the single estimates (one cannot use error propagation, as all results are obtained from the same configurations). In this way we obtain from (4.9), (5.13) and (6.5)

    β_c = 5.6927 ± 0.0004,   (Lt = 4),                                   (7.1)

and from (4.11), (5.15) and (6.6):

    β_c = 5.0933 ± 0.0007,   (Lt = 2).                                   (7.2)
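Numerically, assuming inverse-variance weighting (plain 1/error weighting gives the same rounded value), the Lt = 4 average is reproduced by (our sketch):

```python
# Error-weighted average of the three beta_c estimates, cf. eq. (7.1).
import numpy as np

bc  = np.array([5.6933, 5.6934, 5.6918])    # eqs. (4.9), (5.13), (6.5)
err = np.array([0.0004, 0.0007, 0.0004])    # their error bars
w = err**-2.0 / np.sum(err**-2.0)
print(np.sum(w * bc), "+/-", err.min())     # ~5.6927 +/- 0.0004
```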

The achieved accuracy improves estimates from the pioneering literature [12] by one order of magnitude in the error bar (translating to two orders of magnitude in computer time). Other notable results are the latent heats (4.8) and (4.10), which are consistent with independent results by other groups [15,32]. Again the spectral density FSS approach works quite well.

The possibility to calculate a latent heat and, similarly, an order parameter jump self-consistently is in our opinion the strongest argument in favour of the first order nature of this phase transition. Whereas for Lt = 2 one additionally observes a clear double peak structure, the Lt = 4 transition is fairly weak and a double peak structure only begins (marginally) to develop from L = 16 on. It is remarkable that the FSS behavior is nevertheless already quite indicative of the first order nature. This seems to be driven by the increase of the width of the almost Gaussian looking action density distribution.

For Lt = 4 the analysis of the partition function zeros has turned out to be more subtle than previously anticipated. Nevertheless, from L = 14 on the results seem to be conclusive, and the obtained estimate (5.12) of the critical exponent ν is consistent with ν = 1/d (d = 3) and rounds out the picture of a weak first order transition. For Lt = 2 the same analysis is fairly unproblematic, and the results of [21] are nicely improved and confirmed. The Lt = 2 estimate (5.14) for ν is, of course, consistent with a first order transition.

ACKNOWLEDGEMENTS

The MC data were produced on Florida State University's ETA10's. In addition, this work made heavy use of the FSU Physics Department HEP VAX and the Supercomputer Computations Research Institute RISC clusters and their attached storage facilities. This research project was partially funded by the U.S. National Science Foundation under grant INT-8922411 and by the U.S. Department of Energy under contracts DE-FG05-87ER40319 and DE-FC05-85ER2500.

Nelson Alves is supported by the German Humboldt Foundation.

REFERENCES

1) Z.W. Salsburg, J.D. Jackson, W. Fickett and W.W. Wood, J. Chem. Phys. 30 (1959) 65.
2) I.R. McDonald and K. Singer, Discuss. Faraday Soc. 43 (1967) 40.
3) J.P. Valleau and D.N. Card, J. Chem. Phys. 57 (1972) 5457.
4) C.H. Bennett, J. Comp. Phys. 22 (1976) 245.
5) M. Falcioni, E. Marinari, M.L. Paciello, G. Parisi and B. Taglienti, Phys. Lett. 108B (1982) 331; E. Marinari, Nucl. Phys. B235 [FS11] (1984) 123.
6) G. Bhanot, S. Black, P. Carter and R. Salvador, Phys. Lett. B183 (1987) 331.
7) A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. 61 (1988) 2635 [erratum: 63 (1989) 1658].
8) A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. 63 (1989) 1196.
9) N.A. Alves, B.A. Berg and R. Villanova, Phys. Rev. B41 (1990) 383.
10) N.A. Alves, B.A. Berg and R. Villanova, Phys. Rev. B43 (1991) 5846.
11) S. Aoki et al. (The Teraflop Collaboration), The QCD Teraflop Project, to be published in Int. J. Mod. Phys. C.
12) J. Kogut et al., Phys. Rev. Lett. 51 (1983); T. Celik, J. Engels and H. Satz, Phys. Lett. 129B (1983) 323; A. Kennedy et al., Phys. Rev. Lett. 54 (1985) 87; N. Christ and A. Terrano, Phys. Rev. Lett. 56 (1986) 111.
13) P. Bacilieri et al., Phys. Rev. Lett. 61 (1988) 1545; Phys. Lett. 224B (1989) 333.
14) A. Ukawa, Proceedings of the LAT89 Capri conference, Nucl. Phys. B (Proc. Suppl.) 17 (1990) 118.
15) M. Fukugita, M. Okawa and A. Ukawa, Nucl. Phys. B337 (1990) 181.
16) B.A. Berg, R. Villanova and C. Vohwinkel, Phys. Rev. Lett. 62 (1989) 2433.
17) N.A. Alves, B.A. Berg and S. Sanielevici, Phys. Rev. Lett. 64 (1990) 3107.
18) A.D. Sokal, "Monte Carlo Methods in Statistical Mechanics: Foundations and New Algorithms", preprint, New York University (1989).
19) B.A. Berg and J. Stehr, Z. Phys. C9 (1981) 333.
20) H. Flyvbjerg and H.G. Peterson, J. Chem. Phys. 91 (1989) 461.
21) M. Karliner, S. Sharpe and Y. Chang, Nucl. Phys. B302 (1988) 204.
22) B.A. Berg, A. Devoto and C. Vohwinkel, Comp. Phys. Commun. 51 (1988) 331.
23) B.A. Berg, preprint SCRI 90-100, to be published in Comp. Phys. Commun.
24) M.E. Fisher, in Nobel Symposium 24, B. Lundquist and S. Lundquist (eds.), Academic Press, New York, 1974.
25) B.L. Van der Waerden, Mathematical Statistics (Springer, New York, 1969).
26) W. Press et al., Numerical Recipes, Cambridge University Press, London, 1986.
27) C. Itzykson, R.B. Pearson and J.B. Zuber, Nucl. Phys. B220 [FS8] (1983) 415.
28) G. Bhanot, S. Black, P. Carter and R. Salvador, Phys. Lett. B183 (1987) 331.
29) K. Bitar, Nucl. Phys. B300 [FS22] (1988) 61.
30) N.A. Alves, B.A. Berg and S. Sanielevici, Phys. Lett. 241B (1990) 557.
31) M. Fukugita, H. Mino, M. Okawa and A. Ukawa, Nucl. Phys. B (Proc. Suppl.) 20 (1991) 258.
32) F. Brown, N. Christ, Y. Deng, M. Gao and T. Woch, Phys. Rev. Lett. 61 (1988) 2058.

