arXiv:hep-ph/9212217v1 3 Dec 1992
UW/PT-92-22
DOE/ER/40614-37

RECENT PROGRESS IN LATTICE QCD

Stephen R. Sharpe*
Department of Physics, FM-15
University of Washington
Seattle, Washington 98195

Abstract

I give a brief overview of the status of lattice QCD, concentrating on topics relevant to phenomenology. I discuss the calculation of the light quark spectrum, the lattice prediction of αMS(MZ), and the calculation of fB.

Plenary talk given at the Meeting of the Division of Particles and Fields of the American Physical Society, Fermilab, November 10-14, 1992

PREPARED FOR THE U.S. DEPARTMENT OF ENERGY

This report was prepared as an account of work sponsored by the United States Government. Neither the United States nor the United States Department of Energy, nor any of their employees, nor any of their contractors, subcontractors, or their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the product or process disclosed, or represents that its use would not infringe privately-owned rights. By acceptance of this article, the publisher and/or recipient acknowledges the U.S. Government’s right to retain a nonexclusive, royalty-free license in and to any copyright covering this paper.

December 1992
* email: sharpe@galileo.phys.washington.edu
1 INTRODUCTION

Lattice gauge theory has been somewhat out of the mainstream of particle physics for the past decade. It seems to me, however, that the field is now coming of age. It has certainly grown rapidly: roughly 300 people attended the recent LATTICE ’92 conference, compared to about 130 at LATTICE ’86. That it has also matured is indicated by the breadth of the subjects being studied with lattice methods. These include the traditional (QCD spectrum and matrix elements); the more statistical mechanical (rigorous theorems on finite size scaling); the more abstract (random surfaces); and the exotic (finite temperature baryon number violation).

More concrete evidence for maturation is that lattice results are becoming useful to the rest of the particle physics community. For example, in his summary of B-physics, David Cassel noted that the bounds on the elements of the CKM-matrix which follow from B−B̄ mixing depend on the values of fB and BB, and that lattice results are now used as one of the estimates of these numbers. But perhaps the most important piece of evidence is that lattice studies are beginning to produce results with no unknown systematic errors. The best example is the calculation of the full QCD running coupling constant αMS by the Fermilab group [1]. As I discuss below, this number can be directly compared to experiment.

In summary, I would say that lattice studies are beginning to make themselves useful. The “beginning” in this claim is important: there is a very long way to go before we can calculate, say, the K → ππ amplitudes from first principles. Thus this talk does not consist of results for a long list of matrix elements. Instead, I discuss a few topics in some detail. Because of lack of time, I concentrate entirely on results from lattice QCD (LQCD) which are relevant to phenomenology.

When thinking about the progress that has been made, it is useful to keep in mind the questions that one ultimately wishes to answer using LQCD:

1. Does QCD give rise to the observed spectrum of hadrons? Do these particles have the observed properties (charge radii, decay widths, etc)?
2. Does the same theory also describe the perturbative, high-energy jet physics?
3. What are the values of the hadronic matrix elements (fB, BK, etc.) which are needed to determine the poorly known elements of the CKM matrix?
4. What happens to QCD at finite temperature, and at finite density?
5. What are the properties of “exotic” states in the spectrum, e.g. glueballs?
6. What is the physics behind confinement and chiral symmetry breaking?

The last question is the hardest, and, while it is important to keep thinking about it, there have not been any breakthroughs. The other questions are being addressed using numerical simulations, complemented by a variety of analytic calculations. Most recent progress has concerned the first four questions, and I will discuss only the first three. For additional details see the reviews of Mackenzie (heavy quarks), Petersson (finite temperature), Sachrajda (weak matrix elements) and Ukawa (spectrum) at LATTICE ’92 [2].
To set the stage I begin with a brief summary of LQCD, eschewing as many details as possible. The action of QCD is the sum of a gauge term,

    S_g = (6/g²) ∫_x (1/12) Σ_{µν} Tr(G_{µν} G_{µν}) ,    (1)

with G_{µν} the gluon field strength, and a quark part

    S_q = ∫_x Σ_q q̄ (γ_µ D_µ + m_q) q ,    D_µ = ∂_µ + i A_µ .

These expressions are valid in Euclidean space, where numerical lattice calculations are almost always done.
QCD is put on a hypercubic lattice by placing the quarks on the sites, and gauge fields on the links which join these sites, in such a way that gauge invariance is maintained. The derivative in D_µ can be discretized in many ways, leading to different types of lattice fermions. The most common choices are Wilson and staggered fermions. The coupling constant g has been absorbed in the gauge fields, and appears only as an overall factor in S_g. It is conventional (see Eq. 1) to use the combination β = 6/g² to specify g. In present simulations β ∼ 6, so that g² ∼ 1.
The lattice spacing a is determined implicitly by the choice of g², as discussed below.

The prototypical quantity of interest is the two-point correlator, e.g.

    C(t) = ⟨ Σ_{x⃗} [b̄γ_0γ_5 d(t, x⃗)] [d̄γ_0γ_5 b(0)] ⟩
         = sign(t) f_B² m_B e^{−m_B|t|} (1 + O(e^{−(m_{B′}−m_B)|t|})) .    (2)

At large Euclidean time, t, one picks out the lightest state (here the B meson). The contribution of excited states (beginning with the B′) is suppressed exponentially. Thus one can just read off the mass, m_B, while the amplitude of the exponential gives the decay constant (f_B) up to kinematical factors.

The expectation value in Eq. 2 indicates a functional integral over quarks, antiquarks and gluons weighted by exp(−S_g − S_q). Doing the quark and antiquark integrals, one obtains

    C(t) = −∫[dA] e^{−S_g} [Π_q det(D̸ + m_q)] Σ_{x⃗} Tr[γ_0γ_5 G_d(t, x⃗; 0) γ_0γ_5 G_b(0; t, x⃗)] .    (3)

Here [dA] is shorthand for the functional integral over lattice gauge fields, and the G are quark propagators G_q = (D̸ + m_q)^{−1}. The integral is normalized to give unity in the absence of the trace term.
I have shown Eq. 3 diagrammatically in Fig. 1, where the lines are quark propagators, and the primed expectation value includes the determinant.

Figure 1: Diagrammatic representation of Eq. 3

To make the calculations numerically tractable, they must be done in a finite volume of Ns³ × Nt points. The functional integral is then of large but finite dimension, and can be done by Monte-Carlo methods. Similarly, the propagators are obtained by inverting a finite matrix. Available computer speed and memory limits the number of sites in this four dimensional world. In fact, it is speed that limits present simulations: the bottleneck is the inclusion of det(D̸ + m_q) in the measure. To make progress, many calculations use the “quenched” approximation in which the determinant in Eq. 3 is set equal to unity. This is equivalent to dropping all internal quark loops, so that there are only valence quarks.
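The way a mass is read off from the large-time behavior of Eq. 2 can be sketched numerically. A minimal illustration with invented parameters: build a correlator with a ground state of mass m_B plus one excited-state term, and form the effective mass ln[C(t)/C(t+1)], which plateaus at m_B once the contamination dies away.

```python
import math

def effective_mass(C, t):
    """Effective mass from adjacent timeslices of a Euclidean correlator."""
    return math.log(C[t] / C[t + 1])

# Synthetic correlator mimicking Eq. 2: A e^{-m_B t} (1 + r e^{-dm t}),
# with invented parameters (lattice units).
m_B, dm, A, r = 0.5, 0.4, 1.0, 0.3
C = [A * math.exp(-m_B * t) * (1.0 + r * math.exp(-dm * t)) for t in range(25)]

m_early = effective_mass(C, 1)    # biased upward by the excited state
m_late = effective_mass(C, 20)    # plateau: the ground-state mass m_B
```

In real analyses one fits over a plateau window and propagates statistical errors, but the exponential suppression of the excited state works just as in this toy example.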
I discuss the consequences of this drastic approximation below.

It is important to realize that numerical LQCD calculations which do not use the quenched approximation become exact when the lattice spacing vanishes, the volume goes to infinity, and the statistical errors vanish. They are thus non-perturbative calculations from first principles, which have errors that can be systematically reduced. They are not “model” calculations. On the other hand, calculations in the quenched approximation are more similar to those in a model. Certain physics is being left out, and one does not know a priori how large an error is being made. As I will show, present calculations suggest that the error is small, at least for a range of quantities.

In the following three sections I discuss the spectrum, the calculation of αMS, and that of fB. I will focus mainly on quenched results since these are more extensive. I will only make some brief comments on results for “full QCD”, i.e. QCD including the determinant in the measure.

2 SPECTRUM

I begin by discussing the status of calculations of the spectrum of light hadrons.
I concentrate on mπ, mρ and mN (N refers to the nucleon), since these have the smallest errors. To give an idea of the rate of progress, I will compare with results from March 1990 [3]. The two and a half year interval since then is long enough to allow substantial changes. For example, computer power has increased as roughly CPU ∝ e^year, so that, for a four dimensional theory such as QCD, the linear dimensions could have increased by a factor of ∼ e^{2.5/4} = 1.8 in this period. In fact, the extra time has only partly been used in this way.

To display the data I follow the “APE” group and plot mN/mρ versus (mπ/mρ)² (see Figs. 2).
To understand the significance of these plots, recall the following. In a lattice calculation, we can dial the values of the quark masses. Ignoring for the moment the strange quark, and assuming degenerate up and down quarks, we then have a single light quark mass, ml, at our disposal. Each value of ml corresponds to a possible theory, each with different values for dimensionless mass ratios such as mπ²/mρ², mρ/mN, fπ/mρ, m∆/mN, etc. We would like to fix ml using one of these ratios, and then predict the others. In practice, it is technically very difficult to do a simulation with small enough ml, and so we must extrapolate. The APE plot is one way of displaying how well this extrapolation agrees with the experimental masses.

Figure 2: APE plots for quenched Wilson fermions: (a) March 1990 (b) November 1992

As ml varies from 0 → ∞, mπ²/mρ² varies monotonically from 0 → 1. Thus the theory maps out a curve in this plot, which we know must pass through the infinite mass point (mπ = mρ = 2mN/3; shown by a square in the plots). The issue is whether this curve passes through the experimental point, indicated by a “?” on the plots. The solid lines of Figs. 2 are the predictions of two phenomenological models. The curve for light quark masses uses chiral perturbation theory, with certain assumptions, and is constrained to go through the physical point.
The curve at heavier masses is based on a potential model. For more discussion see Ref. [3].

What do we expect for the lattice results? There will be corrections to physical quantities which vanish in the continuum limit as powers of a (up to logarithms, which I will ignore), e.g.

    (mN/mρ)_latt = (mN/mρ)_cont [1 + aΛ + O(a²)] .    (4)

Thus, for finite a, the lattice curve should not pass through the experimental point. Similarly, if the physical size of the lattice (L = Ns a) becomes too small the masses will be shifted from their infinite volume values. Thus, in addition to extrapolating in the quark mass, one must attempt to extrapolate both a → 0 and L → ∞.

What has happened in the last two years is that these extrapolations have become more reliable.
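The quark-mass extrapolation just described can be sketched with toy numbers. A minimal version, assuming invented data points at the heavier simulated masses (real analyses use fit forms motivated by chiral perturbation theory): fit mN/mρ linearly in (mπ/mρ)² and evaluate at the physical point.

```python
# Invented quenched data: x = (m_pi/m_rho)^2, y = m_N/m_rho, at the
# heavier-than-physical quark masses where simulations are feasible.
xs = [0.64, 0.49, 0.36, 0.25]
ys = [1.40, 1.37, 1.33, 1.30]

# Closed-form least-squares line y = intercept + slope*x.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

x_phys = (0.138 / 0.770) ** 2        # physical (m_pi/m_rho)^2
ratio_phys = intercept + slope * x_phys   # extrapolated m_N/m_rho
```

The extrapolated value is what gets compared with the experimental point on the APE plot; the statistical error of the fit must of course be propagated in a real calculation.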
I illustrate this with results using Wilson fermions in the quenched approximation (details of the data set are given in Table 1). Figure 2a shows the state-of-the-art in March 1990. The upper two sets of points are from a large lattice spacing (a ≈ 1/6 fm) with two different lattice sizes (L ≈ 2 and 4 fm). The results agree within errors, and I concluded that we knew the infinite volume curve for a = 1/6 fm with a precision of about 5%. This curve appeared not to pass through the physical point. The lower two sets of points are from a smaller lattice spacing (a ≈ 1/10 fm).
They suggested a downward shift in the data with decreasing lattice spacing, but it was difficult to draw definite conclusions given the large errors.

The present results are shown in Fig. 2b. To avoid clutter, I show only data with a ≤ 1/9 fm.† These are all consistent with a single curve lying below that at a = 1/6 fm and close to the phenomenological predictions.

Table 1: Parameters of lattices used to produce data shown in Fig. 2

Year  Ref.  β     a(fm)  π/a(GeV)  Ns      L(fm)     Lattices
1990  [4]   5.7   1/6    3.8       12,24   2,4       294,50
      [4]   6     1/10   6.3       18,24   1.8,2.4   104,33
1992  [5]   5.93  1/9    5.7       24      2.7       217
      [6]   6     1/10   6.3       24      2.4       78
      [5]   6.17  1/12   7.5       32      2.7       219
      [7]   6.3   1/16   10        24      1.5       128
The improvements in the results have come from using more lattices to approximate the functional integral, which reduces the statistical errors (see Table 1). Furthermore, the decrease in a has been compensated by an increase in Ns, so that (with the exception of the APE results at β = 6.3) the physical extent of the lattices exceeds L = 2 fm. The results at β = 5.7 imply that this is large enough to get within a few percent of the infinite volume results.

The GF11 group have used their results at a ≈ 1/6, 1/9, 1/12 fm to do an extrapolation both to physical quark masses and to a = 0 [5]. They find that, for a range of quantities, the results are consistent with their experimental values within ∼ 5% errors. For example, mN/mρ = 1.284 +0.071/−0.065 (cf 1.222 expt.) and m∆/mρ = 1.627 +0.051/−0.092 (cf 1.604 expt.). It should be recalled that a few years ago the quenched results for mN/mρ were thought to be larger than the experimental value, while m∆ − mN was smaller. What we have learned is that both the errors introduced by working in finite volume and at finite lattice spacing shift the curve in the APE plot upwards, so that one can easily be misled by results on small lattices.

We seem, then, to know the spectrum of light hadrons in the quenched approximation fairly well, and it looks a lot like the observed spectrum.
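The a → 0 extrapolation can be sketched the same way: with O(a) errors as in Eq. 4, one fits a mass ratio linearly in a and reads off the intercept. The numbers here are invented for illustration, not the GF11 data.

```python
# Linear continuum extrapolation: with O(a) corrections, ratio(a) is fit
# as a straight line in a; the intercept is the continuum prediction.
a_vals = [1/6, 1/9, 1/12]       # lattice spacings in fm
ratios = [1.40, 1.35, 1.33]     # invented (m_N/m_rho)_latt values

n = len(a_vals)
sx, sy = sum(a_vals), sum(ratios)
sxx = sum(a * a for a in a_vals)
sxy = sum(a * r for a, r in zip(a_vals, ratios))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
continuum = (sy - slope * sx) / n   # value at a = 0
```

Consistent with the discussion above, the intercept lies below all the finite-a values, since the O(a) errors shift the curve upwards.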
There is not, however, complete agreement on the numerical results. The QCDPAX collaboration, working at lattice spacings comparable to those in Fig. 2b, finds significantly larger values for mN/mρ at smaller mπ²/mρ² [8]. They suggest that the disagreement may be due to a contamination from excited states in the correlators (see Eq. 2). This disagreement will get cleared up in the next year or so. What is needed is operators which couple more strongly to the ground state, and less to excited states. There has been considerable improvement in such operators, for systems involving a heavy quark [9], but less so for light quark hadrons.

Let me assume that the results of Fig. 2b are correct, so that the quenched spectrum does agree with the experiment to within ∼ 5%. Does this imply that we can trust quenched calculations of other quantities to this accuracy?
I do not think so. A priori, I would not have expected the quenched approximation to work so well, because it leaves out so much important physics. For example, the quenched rho cannot decay into two pions, unlike the physical rho, which might lead to a 10% underestimate of its mass [10]. Also, the pion cloud around the light hadrons is much different in the quenched approximation than in full QCD, which should shift mρ and mN in different ways. While it is possible that these effects largely cancel for the spectrum, I see no reason for them to do so in other quantities. This argument can be made more concrete for the charge radii [11].

† The new results at a = 1/6 fm [5] confirm, with reduced errors, those in Fig. 2a.
Furthermore, as I mention below, there are reasons to think that the approach to the chiral limit in the quenched approximation is singular.

My final remark on the quenched spectrum concerns the possibility of improving the approach to the continuum limit [12]. The gauge action is accurate to O(a²), but the Wilson fermion action has O(a) corrections, as in Eq. 4. It is possible to systematically “improve” the fermion action so that the corrections are successively reduced by powers of g²/4π. The idea is that by using a more complicated action one can work at a larger lattice spacing. The first results of this program are encouraging.
Ref. [13] finds that an improved action shifts the results at a = 1/6 fm downwards so as to agree with those at a ≈ 1/10 fm, and that both agree with the “unimproved” results at a ≤ 1/9 fm shown in Fig. 2b. Ref. [14] finds that the improved action makes no difference at a smaller lattice spacing, a ≈ 1/14 fm. All this suggests that it may be possible to use lattice spacings as large as 1/6 fm with an improved Wilson fermion action. Further evidence for this comes from lattice studies of charmonium [15, 16]. With staggered fermions, on the other hand, the prospects are less rosy. Although the corrections are of O(a²), and thus parametrically smaller than with Wilson fermions, they are in fact large at a = 1/6 fm [17].

Ultimately, we must repeat the calculation using full QCD. Much of the increase in computer time in the last few years has gone into simulations which include quark loops. Nevertheless, it is too early to discuss physical predictions, since it is not yet possible to reliably extrapolate mq → 0 or a → 0.
The limit L → ∞ is, however, well understood and some interesting results have been obtained. The Kyoto-Tsukuba group finds good fits to the form m = m∞(1 + c/L³), and argue that this can be understood as due to the “squeezing” of the hadrons in the finite box [18].‡ The MILC collaboration finds that at L = 2.7 fm the finite volume effects are smaller than 2% [20]. Both these results are consistent with the finite volume effects observed in the quenched approximation [5].

3 αMS FROM THE LATTICE

It is straightforward, in principle, to extract αMS from lattice calculations:

1. Pick a value of β = 6/g².
2. Calculate a physically measurable quantity, e.g. mρ or fπ.
3. Compare lattice and physical values and extract a using, e.g.§

       (mρ)_latt = (mρ)_phys × a (1 + O(a)) .    (5)

   The O(a) terms are not included when extracting a.
4. Convert from the lattice bare coupling constant g²(a) to αMS(q = π/a) using perturbation theory. The result can then be evolved to other scales, e.g. MZ, using the renormalization group equation.
5. Repeat for a variety of values of β, and extrapolate to the continuum limit a = 0. In this way the O(a) corrections in Eq. 5 are removed.

‡ This is not the asymptotic form: for large enough L the power law becomes an exponential [19].
§ The values for a quoted above were obtained in this way, using a variety of physical quantities.

If this program were to be carried through, then the lattice result for αMS would allow absolute predictions of jet cross sections, R(e+e−), etc. (modulo the effects of hadronization). If these predictions were successful, it would demonstrate that QCD could simultaneously explain widely disparate phenomena occurring over a large range of mass scales.
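The evolution to MZ in step 4 can be sketched with one-loop running. This is a toy version: a real analysis needs higher loops and quark-mass thresholds, and the input values here are invented.

```python
import math

def run_alpha(alpha_q, q, mu, nf):
    """One-loop evolution of the strong coupling from scale q to mu (GeV)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    inv = 1.0 / alpha_q + beta0 / (2.0 * math.pi) * math.log(mu / q)
    return 1.0 / inv

# A hypothetical quenched (nf = 0) coupling at a lattice cutoff pi/a ~ 6 GeV,
# evolved up to MZ = 91.2 GeV; the coupling decreases toward higher scales.
alpha_MZ = run_alpha(0.15, 6.0, 91.2, nf=0)
```

Note that the quenched beta-function (nf = 0) runs faster than that of full QCD, which is the origin of the conversion problem discussed in Sec. 3.2.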
While such success has not yet been achieved, there has been considerable progress in the last year or so. I will explain the two major problems, and the extent to which they have been resolved. I then discuss the present results.

3.1 Reliability of Perturbation Theory

The fourth step in the program requires that perturbation theory be valid at the scale of the lattice cut-off, which is roughly π/a in momentum space. On present lattices this ranges from 5−12 GeV (the values are given in Table 1). It turns out that these values are not large enough for perturbation theory in the bare coupling constant to be accurate, because there are large higher order corrections. These are exemplified by the relation needed in step 4 above [21]

    1/αMS(π/a) = 1/α_latt(a) − 3.880 + O(α) ;    (α = g²/4π) .    (6)

Since α_latt ≈ 1/13, the first order correction is large, and higher order terms are likely to be important.

This problem has been understood by Lepage and Mackenzie [22].
The large corrections arise from fluctuations of the lattice gauge fields, and in particular from “tadpole” diagrams which are present on the lattice but absent in the continuum. The solution is to express perturbative results in terms of a “continuum-like” coupling constant, e.g. αMOM or αMS, with the scale q ≈ π/a. When this is done the higher order coefficients are considerably reduced. This is similar to what happens in the continuum when one shifts from the MS to the MS-bar scheme.

Having re-expressed all perturbative expressions in terms of, say, αMS, there remains the problem of finding the value of this coupling in terms of α_latt, since Eq. 6 is not reliable. One has to use a non-perturbative definition of the coupling constant which automatically sums up the large contributions, and which is related to αMS in a reliable way.
One choice is [22]

    αP = −3 ln⟨Tr UP⟩ / 4π ;    1/αMS(π/a) = 1/αP − 0.5 + O(α) ,    (7)

where UP is the product of gauge links around an elementary square. One first determines αP from the numerical value of Tr UP, and then converts this to αMS, which is then used in perturbative expressions.
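The two-step conversion in Eq. 7 is simple arithmetic once the plaquette has been measured. A sketch with an invented plaquette value (the O(α) term is dropped, and ⟨Tr UP⟩ is taken as the normalized trace):

```python
import math

def alpha_P(tr_UP):
    """Non-perturbative coupling from the measured plaquette, per Eq. 7."""
    return -3.0 * math.log(tr_UP) / (4.0 * math.pi)

def alpha_MS_from_P(aP):
    """Convert alpha_P to alpha_MS at q = pi/a, dropping the O(alpha) term."""
    return 1.0 / (1.0 / aP - 0.5)

aP = alpha_P(0.59)        # invented plaquette value, not a real measurement
aMS = alpha_MS_from_P(aP)
```

Because the plaquette is measured to high statistical precision in any simulation, the uncertainty in this determination is dominated by the truncated O(α) terms, not by statistics.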
Lepage and Mackenzie find that the resulting numerical predictions of lattice perturbation theory work well for quantities (such as small Wilson loops) which are dominated by short-distance perturbative contributions. This is true for lattice spacings as large as 1/9 fm, and perhaps 1/6 fm. Thus the determination of αMS using Eq. 7 is reliable, with errors probably no larger than a few percent.

3.2 Errors Introduced by the Quenched Approximation

The dominant source of uncertainty in present calculations of αMS is the use of the quenched approximation. The problem is the lack of a “physical” quenched theory to use in the comparison of step 3 above. This shows up in two ways. First, the value of a depends on the quantity chosen in the comparison: using mρ gives one value, fπ another.¶ Second, the coupling constant that one obtains is for a theory with zero flavors, αMS^(0), and must somehow be related to the physical coupling constant αMS.
However, such a relationship involves non-perturbative physics, and so can only be determined by a calculation using full QCD! The best that we can hope for at present is a good estimate of the relationship between the couplings.

Recently, the FNAL group have made such an estimate for the coupling determined using the 1P−1S splitting in charmonium [1]. The crucial simplifying feature is that charmonium is described reasonably well by a potential model, so one need only estimate the effects of quark loops on the potential itself. In outline, this is done as follows. Matching the lattice and continuum 1P−1S splittings makes the quenched and full QCD potentials similar at separations R ∼ 0.5 fm. The potentials will, however, differ at smaller separations. We understand this difference at small enough R, where the Coulomb term dominates, i.e. V ∝ −αMS(1/R)/R. In the quenched approximation α varies more rapidly with R because of the absence of fermion loops, which means that the quenched potential is steeper. Assuming that this is true all the way out to 0.5 fm, where the potentials match, implies that the quenched potential must lie below that for full QCD at short distances. It follows that αMS > αMS^(0) at short distances. A quantitative estimate is given in Ref. [1].

Unfortunately, there is no such simple way of estimating the effects of the quenched approximation on the values of αMS^(0) extracted from the properties of light quark hadrons.

3.3 Results

Various physical quantities have been used to calculate αMS^(0), and I collect the most accurate results in Table 2.
I also give the “experimental” number obtained from an average of various perturbative QCD fits to data [23]. I quote the coupling at the scale MZ, to allow comparison with Fig. 1 of the 1992 Review of Particle Properties (RPP) [23]. The second row gives the results of the FNAL group, including the correction for quenching. The remaining results use the “string tension”, σ. This is the coefficient of the linear term in the heavy quark-antiquark potential: V(R) → σR for R → ∞. All groups use the “improved” perturbation theory explained above. I have not shown results from light hadron masses as they are less accurate.

¶ Actually, since light hadron mass ratios are well reproduced by the quenched approximation, the variation of a is small for such quantities. This need not be true in general.
Table 2: Results for αMS in quenched and full QCD. Errors are statistical.

Quantity         Ref.   αMS^(0)(MZ)   αMS(MZ)
“Experiment”     [23]                 0.1134(35)
M1P − M1S        [1]    0.0790(9)     0.105(4)
String tension   [24]   0.0796(3)
String tension   [25]   0.0801(9)

The results for αMS^(0) obtained using σ have the smallest statistical errors.
Indeed, it is a triumph of LQCD that the linear term in the quenched potential is very well established, for it is this term that causes confinement. The difficulty is that σ is not a physical quantity. The “physical” value, √σ = 0.44 GeV, is extracted from the potential needed to fit the c̄c and b̄b spectra. These systems do not, however, probe the region of the potential where the linear term dominates. Furthermore, at large R the full QCD potential flattens out due to quark-pair creation, while the quenched potential continues its linear rise.
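The sense in which the quarkonium systems only indirectly constrain σ can be seen from a toy Cornell-type potential: the local slope of V(R) approaches σ only at separations larger than those the c̄c and b̄b states probe. All parameters here are invented.

```python
def V(R, V0=0.6, e=0.3, sigma=0.22):
    """Toy Cornell-type potential V0 - e/R + sigma*R (lattice units)."""
    return V0 - e / R + sigma * R

def local_slope(R, dR=1.0):
    """Finite-difference slope of the potential."""
    return (V(R + dR) - V(R)) / dR

s_small = local_slope(2.0)    # still contaminated by the Coulomb term
s_large = local_slope(20.0)   # essentially the string tension sigma
```

A fit restricted to small R must therefore disentangle the Coulomb and linear pieces, which is part of why the extracted σ carries a systematic uncertainty.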
Thus the extracted value of σ is somewhat uncertain, and it is difficult to relate αMS^(0) obtained using σ to the full QCD coupling. Nevertheless, it seems to me possible to make a rough estimate, and it would certainly be interesting to try.

The lattice prediction for αMS is slightly less than 2σ below the RPP average. In his summary talk, Keith Ellis quoted an updated average, αMS(MZ) = 0.119(4), which is almost 3σ higher than the lattice result. I think that this near agreement is a success of LQCD, and that it is too early to make anything out of the small discrepancy. For one thing, the dominant error in the lattice result is the uncertainty in the conversion from αMS^(0) to αMS, and this might be underestimated. It is important to realize, however, that this uncertainty will be gradually reduced as simulations of full QCD improve.

I end this section with a general comment on the calculations.
For the method to work, both non-perturbative and perturbative physics must be included in a single lattice simulation. At long distances the quarks are confined in hadrons, while at the scale of the lattice spacing their interactions must be described by perturbation theory. On a finite lattice, these two requirements pull in opposite directions. For example, if perturbation theory requires a < 1/9 fm, while finite volume effects require L > 2.7 fm (as may be true for light hadrons), the lattice must have Ns ≥ 30, and so present lattices are barely large enough. This is another reason for using charmonium to determine αMS. The c̄c states are smaller than light hadrons, and it turns out that Ns = 24 is large enough to reduce the finite volume errors below 1% [1].

Nevertheless, it would be nice to extend the range of scales so as to provide a detailed test of the dependence on a.
To accomplish this with present resources requires the use of multiple lattices having a range of sizes. Lüscher et al. have proposed such a program, and done the calculation for SU(2) pure gauge theory [26].
4 The B-meson decay constant fB

One of the most important numbers that LQCD can provide phenomenologists is fB. Analyses of the constraints due to K̄−K and B̄−B mixing find (for mt ≥ 140 GeV) two types of solutions for CP-violation in the CKM-matrix (see, e.g. Ref. [27]). These are distinguished by whether fB is “small” (100−150 MeV) or “large” (160−340 MeV). In the former case, CP-violation in B → KS J/ψ is small; in the latter it is much larger. We would like the lattice to resolve this ambiguity.

The calculation of fB is also interesting as a testing ground for the “heavy quark effective theory” (HQET) [28]. The B-meson consists of a heavy b quark, and a light quark (u or d). Imagine that we were free to vary the heavy quark mass, mQ.
The mass of the pseudoscalar meson (which I call mP to distinguish it from the physical B meson mass, mB), and its decay constant fP, would depend upon mQ. As mQ → ∞ one can show that [29]

    φP = fP √mP [α(mP)/α(mB)]^{2/β0} = φ∞ (1 + A/mP + B/mP² + ...) .    (8)

Here β0 is the first term in the QCD β-function, and A and B are constants except for a weak logarithmic dependence on mP.
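Once φ∞, A and B have been determined from a fit, fB and fD follow by evaluating the truncated expansion at the physical meson masses. A sketch with invented coefficients (the logarithmic factor in Eq. 8 is dropped):

```python
import math

# Hypothetical fit results, NOT from any of the cited calculations:
phi_inf, A, B = 0.55, -0.8, 0.4   # units GeV^{3/2}, GeV, GeV^2

def phi(mP):
    """Truncated heavy-mass expansion of Eq. 8 (log factor dropped)."""
    return phi_inf * (1.0 + A / mP + B / mP**2)

mB_phys, mD_phys = 5.28, 1.87          # meson masses in GeV
fB = phi(mB_phys) / math.sqrt(mB_phys)
fD = phi(mD_phys) / math.sqrt(mD_phys)
fB_static = phi_inf / math.sqrt(mB_phys)   # 1/m_P corrections ignored
```

With a negative A, as suggested by the data discussed below, fB lies below the static value, and the 1/mP corrections are substantially larger at the D than at the B.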
The issue for HQET is the size of the 1/mP corrections at the B and D masses, for these might be indicative of the size of the corrections in other applications of HQET, e.g. B → D transitions.

It is important to realize that, while the mQ → ∞ limit simplifies the kinematics of the heavy quark, it does not simplify the dynamics of the light quark. In particular, the quenched approximation is no better in the heavy quark limit, nor is the dynamics more perturbative. Thus one needs a LQCD calculation here just as much as for light quark hadrons.

What we would like to do is map out the curve of φP versus 1/mP, and read off the values of fB and fD. Examples of present results are shown in Figs. 3, and in Table 3 I give the present numerical values for fB and fD. I also quote fB(static) = φ∞/√mB, which is the value of fB ignoring the 1/mP corrections in Eq. 8. Unfortunately, the situation is less straightforward than the figures imply, and I will spend the remainder of this section explaining and evaluating these results.

There are three major causes of uncertainty in fB.
The first concerns the overall scale. To extract a result in physical units we need to know the lattice spacing. As discussed above, the value we obtain depends on the physical quantity that we use, particularly at finite lattice spacing. fB is more sensitive to the uncertainty in a than are light hadron masses, since one is calculating a^{3/2}φP rather than am. To illustrate this sensitivity, I include in Table 3 results from Refs. [9, 30] using two different determinations of a. For the other results, the uncertainty is about 15%.
Ultimately, to remove this uncertainty one must repeat the calculation with full QCD.

The second problem concerns the isolation of the lightest state. In principle, calculating φP is straightforward. One simply studies the long time behavior of the two point function of the local axial current, Eq. 2, and reads off fP √mP. In practice, to obtain a signal it is necessary to use extended operators, which couple more strongly to the lightest state than the local current. This is particularly important at large mQ.
With a less than optimal operator there are likely to be systematic errors introduced by contamination from excited states. Indeed, there are disagreements between results using different operators. This is illustrated in Table 3 by the variation in fB(static). It seems to me, however, that the operators of Ref. [31] are close to optimal, and that the discrepancies will go away as other groups optimize their operators.

Figure 3: Results for φP = fP√mP: (a) from Ref. [32], (b) from Ref. [33].

Table 3: Results for decay constants in the quenched approximation. The normalization is such that fπ = 132 MeV. Only statistical errors are shown; systematic errors are discussed in the text.

Ref.   β          Scale from    fB(static)(MeV)  fB(MeV)  fD(MeV)
[32]   6.0-6.4    fπ, mρ        310(25)          205(40)  210(15)
[34]   6.2        fπ                             183(10)  198(5)
[33]   6.3        fπ            230(15)          187(10)  208(9)
[9]    5.9        M(1P)−M(1S)   319(11)
                  σ             265(10)
[30]   5.74-6.26  σ             230(22)
                  mρ            256(78)

The third and most difficult problem concerns putting very heavy quarks on the lattice. As the quark’s mass increases, the ratio of its Compton wavelength to the lattice spacing, 1/(mQa), decreases. For mQa > 1, its propagation through the lattice will be severely affected by lattice artifacts.
There are, however, no such difficulties for an infinitely massive quark, for such a quark remains at rest both in the continuum and on the lattice [29]. Thus it seems that we are forced to interpolate between mQ ∼ 1/a and mQ = ∞.

To illustrate the problems that this introduces, consider the situation about two years ago. The smallest lattice spacing was a = 1/10 fm, so that mca ≈ 0.75 and mba ≈ 2.5. Typical results are those represented by squares in Fig. 3a. The quark mass has been restricted to satisfy mQa ≤ 0.7 to avoid large artifacts (an arbitrary but reasonable choice for an upper bound), and thus the points lie to the right of the D meson line. Clearly it is difficult to convincingly interpolate using Eq. 8: the variation in φP is so large that one cannot truncate the 1/mP expansion. Nevertheless, if one assumes that the curvature is not too large, the fact that we know φ∞ does allow a rough estimate of fB.

There has been considerable progress in the last two years.
The lattice spacing has been reduced to 1/17 fm, which allows one to use heavier quarks while keeping mQa < 1. This is illustrated by the remaining points in Fig. 3a, all of which have mQa ≤ 0.7. These points can now be fit in a more reliable way to the asymptotic form of Eq. 8. The curve shows such a fit, the results from which are given in Table 3.
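A fit of this kind can be sketched numerically. Assuming Eq. 8 is the usual truncated expansion of φP in powers of 1/mP (the equation itself is not reproduced in this section), and using invented data points purely for illustration:

```python
import numpy as np

# Invented data: (1/m_P in GeV^-1, phi_P = f_P*sqrt(m_P) in GeV^(3/2)).
# The point at 1/m_P = 0 plays the role of the static result phi_inf.
inv_m = np.array([0.00, 0.20, 0.35, 0.50])
phi   = np.array([0.60, 0.50, 0.44, 0.39])

# Fit the truncated expansion phi_P ~ a0 + a1*(1/m_P) + a2*(1/m_P)^2.
coeffs = np.polyfit(inv_m, phi, 2)   # returns [a2, a1, a0]
phi_inf = coeffs[-1]                 # extrapolated value at 1/m_P = 0

# Interpolate to the B meson (m_B ~ 5.28 GeV) and convert to f_B.
m_B = 5.28
phi_B = np.polyval(coeffs, 1.0 / m_B)
f_B = phi_B / np.sqrt(m_B)           # since phi_P = f_P * sqrt(m_P)
print(phi_inf, phi_B, f_B)
```

The danger described in the text is visible here: with points only at large 1/mP, the curvature term a2 is poorly constrained, and the interpolated f_B inherits that uncertainty.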
Other groups, however, find results in apparent disagreement. For example, the “uncorrected” points in Fig. 3b should agree with those in 3a, but instead are lower. This disagreement is, I suspect, mainly due to inadequate isolation of the lightest B meson by one or both groups.

Another development has been the use of an improved fermion action [34]. Since this has smaller O(a) corrections, one should be able to work at larger values of mQa.
I give the results in Table 3.

Despite these improvements, it would be much better if we could work at any value of mQa, and simply map out the entire curve. There is a dispute about whether this is possible, and I will attempt to give a summary of the arguments.

I begin by noting that the errors in φP do not keep growing as mQa increases. This is because, if mQ ≫ ΛQCD, the quark is non-relativistic. Its dynamics can then be expanded in powers of 1/mQ [35]

L = \psi^\dagger \left[ iD_0 + \frac{D_i^2}{2m_1} + c(g^2)\,\frac{\sigma_i B_i}{2m_2} + O(1/m_Q^2) \right] \psi ,    (9)

where ψ is a two component field, and c is a perturbative coefficient. In the continuum, m1 = m2 = mQ. The discrete rotational symmetries are sufficient to ensure that a heavy lattice quark will be described by the same Lagrangian, except that neither m1 nor m2 is equal to the bare lattice quark mass mQ. Nevertheless, although the lattice heavy quarks have the wrong dynamics, this should only introduce errors of O(ΛQCD/mQ). Thus, if a is small enough that mQa < 1 when the quark becomes non-relativistic, the errors will be small for all mQa.

Strictly speaking, to bring the lattice Lagrangian into the form of Eq. 9, one must perform a wavefunction renormalization on the heavy field, which changes the normalization of the ¯bu axial current.
This has been calculated in perturbation theory keeping only the “tadpole” diagrams, which are thought to be dominant [36]. The result is illustrated in Fig. 3b: upon renormalization the “uncorrected” points are shifted upwards into those labeled “corrected”. With this modification, the curve for finite mQa is guaranteed to pass through φ∞. Without it, the curve will bend over and eventually pass through the origin. In fact, the uncorrected points in Fig. 3b do appear to be flattening out at the smallest values of 1/mP, although those of Fig. 3a do not.

A second correction has also been applied in Fig. 3b. It is possible to partly account for the difference between m1 and mQ by an appropriate shift to smaller 1/mQ. The size of this shift has been calculated keeping “tadpole” diagrams [36]. This correction ensures that the data approach φ∞ with a slope which is linear in 1/mP. This slope will not, however, be correct, since the 1/m2 term in Eq. 9 has the wrong normalization. Nevertheless, in contrast to the “uncorrected” points, the shifted points fit well to the form of Eq. 8. The fit is shown by the curve, and the resulting values for decay constants are given in Table 3.

The controversial issue is how to further modify the calculation so as to obtain the correct 1/mP terms.
Kronfeld, Lepage and Mackenzie have suggested a program for removing the error [36]. The idea is to add new terms to the lattice action, and to choose the parameters so that one obtains the correct Lagrangian at O(1/mQ). This is essentially a complicated way of putting non-relativistic QCD on the lattice [35]. The difficult question in both cases is how to fix the parameters.

Kronfeld et al. propose that this can be done using perturbation theory. Maiani et al. argue, however, that non-perturbative contributions may be important [37]. Normally, such contributions are suppressed by powers of a, but here they are enhanced by factors of 1/a due to linear divergences. (A clear explanation of this effect is given in Ref. [38].) The result is that, with parameters fixed using perturbation theory, there will be errors in φP which are of O(ΛQCD/mQ), i.e. of the same size as the terms one is trying to calculate. The point of disagreement is whether these non-perturbative terms are large or small.
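Schematically, the enhancement works as follows (my own power-counting sketch of the argument, not a quotation of Refs. [37, 38]): a non-perturbative contribution to the coefficient of a 1/mQ operator would naively be suppressed by a factor of aΛQCD, but the linearly divergent mixing supplies a compensating factor of 1/(amQ), so

```latex
\frac{\delta \phi_P}{\phi_P} \;\sim\; (a\,\Lambda_{\rm QCD}) \times \frac{1}{a\,m_Q}
\;=\; \frac{\Lambda_{\rm QCD}}{m_Q}\,,
```

with the powers of a cancelling, so that the error does not vanish in the continuum limit.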
If they are large, the only solution would be to fix parameters using non-perturbative normalization conditions. In general this would reduce predictive power.

Clearly more work is needed to resolve this dispute. One way to do this is to push the tests of perturbation theory [22] to the level at which non-perturbative terms show up. It is fortunate, however, that this uncertainty has only a small effect on fB and fD. This is shown by the good agreement in Table 3 despite the different methods being used. The results favor the “large” solution for fB. The only way this could change is if the systematic error due to the quenched approximation turns out to be large.

We can also estimate the size of the O(1/M) corrections to the heavy quark limit. Taking the data of Ref. [33] as an example, they are ∼15% for fB, and ∼45% for fD. These numbers increase further if one uses the larger values of φ∞ found by Ref. [9].

5 OUTLOOK

There are many interesting developments that I have not had time or space to cover. One which impinges on much of the work discussed above concerns the accuracy of the quenched approximation [39].
In this approximation there are nine pseudo-Goldstone bosons, rather than the eight of QCD, the extra one being the η′. This means that quenched particles have an η′ cloud, not present in full QCD. It turns out that this gives rise to singularities in the chiral limit, much as pion loops give singularities in the charge radii of pions and nucleons [11]. For example, the quark condensate diverges. The implications of these divergences are not yet clear. The most optimistic view is that the effects of η′ loops are small as long as we work above a certain quark mass.
This is supported by the absence of numerical evidence to date for the divergences.

Unfortunately, the quenched approximation is likely to be with us for a number of years. At present, the fastest computers simulating LQCD are running at 1-10 GFlops. Even a TeraFlop machine, such as that proposed by the TeraFlop collaboration [40], will focus on quenched lattices. These will have Ns ≈ 100, and should give definitive quenched results for a reasonable number of interesting quantities. The major problem with simulations of full QCD is that the CPU time scales as mπ^{-10.5} with present algorithms. Clearly, it is crucial that effort go into improving these algorithms.

ACKNOWLEDGEMENTS

I thank Don Weingarten, Jim Labrenz and Akira Ukawa for providing me with data, and Estia Eichten, Aida El-Khadra, Rajan Gupta, Brian Hill, Andreas Kronfeld, Jim Labrenz and Paul Mackenzie for discussions.
This work was supported by the DOE under contract DE-FG09-91ER40614, and by an Alfred P. Sloan Fellowship.

References

[1] A. El-Khadra, G. Hockney, A. Kronfeld and P. Mackenzie, Phys. Rev. Lett. 69, 729 (1992); and FERMILAB-CONF-92-299-T
[2] Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 92”, Amsterdam, The Netherlands, 1992, to be published
[3] S. Sharpe, Proc. First Int. Symp. on “Particles, Strings and Cosmology”, Northeastern Univ., Boston, USA, March, 1990, Eds. P. Nath and S. Reucroft, p270
[4] P. Bacilieri et al., Nucl. Phys. B317 (1989) 509; S. Cabasino et al., Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 89”, Capri, Italy, 1989, edited by N. Cabibbo et al., Nucl. Phys. B (Proc. Suppl.) 17 (1990) 431
[5] F. Butler, H. Chen, A. Vaccarino, J. Sexton and D. Weingarten, these proceedings; preprint hep-lat/9211051, to appear in Ref. [2]
[6] S. Cabasino et al., Phys. Lett. 258B (1991) 195
[7] M. Guagnelli et al., Nucl. Phys. B378 (1992) 616
[8] Y. Iwasaki et al., preprint UTHEP-246, to appear in Ref. [2]
[9] A. Duncan, E. Eichten, A. El-Khadra, J. Flynn, B. Hill and H. Thacker, preprint UCLA/92/TEP/40, to appear in Ref. [2]
[10] P. Geiger and N. Isgur, Phys. Rev. D41, 1595 (1990)
[11] D. Leinweber and T. Cohen, U. of MD Preprint 92-190
[12] K. Symanzik, Nucl. Phys. B226 (1983) 187; ibid 205
[13] M.-P. Lombardo, G. Parisi and A. Vladikas, preprint ILL-(TH)-92-15
[14] C.R. Allton et al., Phys. Lett. 284B (1992) 377
[15] A. El-Khadra, preprint FERMILAB-CONF-92/330-T, to appear in Ref. [2]
[16] C.R. Allton et al., Phys. Lett. 292B (1992) 408
[17] S. Sharpe, Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 91”, Tsukuba, Japan, 1991, edited by M. Fukugita et al., Nucl. Phys. B (Proc. Suppl.) 26 (1992) 197
[18] M. Fukugita, H. Mino, M. Okawa and A. Ukawa, Phys. Rev. Lett. 68, 761 (1992); M. Fukugita, H. Mino, M. Okawa, G. Parisi and A. Ukawa, preprint KEK-TH-339
[19] M. Lüscher, Comm. Math. Phys. 104 (1986) 177
[20] C. Bernard et al., preprint IUHET-233, to appear in Ref. [2]
[21] A. Hasenfratz and P. Hasenfratz, Phys. Lett. 93B (1980) 165; R. Dashen and D. Gross, Phys. Rev. D23, 2340 (1981)
[22] G.P. Lepage and P. Mackenzie, Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 90”, Tallahassee, Florida, USA, 1990, edited by U. M. Heller et al., Nucl. Phys. B (Proc. Suppl.) 20 (1991) 173; preprint FERMILAB-PUB-19/355-T (9/92)
[23] K. Hikasa et al., Phys. Rev. D45, S1 (1992)
[24] G. Bali and K. Schilling, preprint WUB-92-29
[25] S. Booth et al., preprint LTH 285
[26] M. Lüscher, R. Sommer, U. Wolff and P. Weisz, preprint DESY-92-096
[27] M. Lusignoli, L. Maiani, G. Martinelli and L. Reina, Nucl. Phys. B369 (1992) 139
[28] N. Isgur and M. Wise, Phys. Lett. 232B (1989) 113; ibid 237B (1990) 447
[29] E. Eichten, Proceedings of the International Symposium on Lattice Field Theory, “Field theory on the Lattice”, Seillac, France, 1987, edited by A. Billoire et al., Nucl. Phys. B (Proc. Suppl.) 4 (1988) 170
[30] C. Alexandrou et al., preprint CERN-TH 6692/92
[31] A. Duncan, E. Eichten and H. Thacker, in Ref. [2]
[32] A. Abada et al., Nucl. Phys. B376 (1992) 172; C. Allton et al., Nucl. Phys. B349 (1991) 598
[33] C. Bernard, J. Labrenz and A. Soni, preprint UW/PT-92-21, in Ref. [2]
[34] D.G. Richards, Edinburgh Preprint 92/518, to appear in Ref. [2]
[35] G.P. Lepage and B. Thacker, Phys. Rev. D43, 196 (1991)
[36] G.P. Lepage, Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 91”, Tsukuba, Japan, 1991, edited by M. Fukugita et al., Nucl. Phys. B (Proc. Suppl.) 26 (1992) 45; A. Kronfeld, preprint FERMILAB-CONF-92/329-T, to appear in Ref. [2]
[37] L. Maiani, G. Martinelli and C. Sachrajda, Nucl. Phys. B368 (1992) 281
[38] G. Martinelli, Proceedings of the International Symposium on Lattice Field Theory, “LATTICE 91”, Tsukuba, Japan, 1991, edited by M. Fukugita et al., Nucl. Phys. B (Proc. Suppl.) 26 (1992) 31
[39] C. Bernard and M. Golterman, Phys. Rev. D46, 853 (1992) and in Ref. [2]; S. Sharpe, Phys. Rev. D46, 3146 (1992) and in Ref. [2]
[40] S. Aoki et al., Int. Jour. Mod. Phys. C2 (1991) 829