Algorithm xxx — ORTHPOL: A package of routines for generating orthogonal polynomials and Gauss-type quadrature rules

The ORTHPOL package is a collection of routines for generating orthogonal polynomials relative to arbitrary weight functions and for obtaining Gauss-type quadrature rules. The package comprises four kinds of methods: routines based on explicit formulae, on moment information, on bootstrap (Stieltjes-type) inner-product computations, and on orthogonal reduction. The package also computes the logarithm of the gamma function, needed for certain recursion coefficients, and provides routines for generating quadrature rules of Gauss-Radau and Gauss-Lobatto type.

The package provides single- and double-precision versions of each routine, implemented in conformance with the ANSI standard. It has been tested on a Cyber 205 and on a Sun 4/670 MP workstation.


The ORTHPOL package is a collection of subroutines for generating orthogonal polynomials relative to arbitrary weight functions and Gauss-type quadrature rules. The package contains four methods implemented in the routines recur, cheb, sti, and lancz. The routine recur uses explicit formulae to generate classical orthogonal polynomials such as Jacobi, Laguerre, and Hermite polynomials. The routine cheb is based on moment information and can be used for discrete orthogonal polynomials. The routines sti and lancz use the Stieltjes procedure and Lanczos' method, respectively, to generate recursion coefficients for continuous or mixed-type orthogonal polynomials.

The package contains both single- and double-precision versions of each routine, the latter distinguished by the prefix “d”; they are generally a straightforward translation of the former. All routines have been checked for ANSI conformance and tested on two computers: the Cyber 205 and a Sun 4/670 MP workstation.

The package is organized as follows: Section 0 contains (slightly amended) netlib routines, Section 1 contains driver routines to test the subroutines, and Sections 2–6 constitute the core of the package: the single- and double-precision subroutines described in the equally numbered sections of the paper.



arXiv:math/9307212v1 [math.CA] 9 Jul 1993

Algorithm xxx — ORTHPOL: A package of routines for generating orthogonal polynomials and Gauss-type quadrature rules*

WALTER GAUTSCHI

ABSTRACT. A collection of subroutines and examples of their uses, as well as the underlying numerical methods, are described for generating orthogonal polynomials relative to arbitrary weight functions.

The object of these routines is to produce the coefficients in the three-term recurrence relation satisfied by the orthogonal polynomials. Once these are known, additional data can be generated, such as zeros of orthogonal polynomials and Gauss-type quadrature rules, for which routines are also provided.

1991 Mathematics Subject Classification. Primary 33–04, 33C45, 65–04, 65D32.
CR Classification Scheme. G.1.2, G.1.4, G.4.

1. INTRODUCTION

Classical orthogonal polynomials, such as those of Legendre, Chebyshev, Laguerre and Hermite, have been used for purposes of approximation in widely different disciplines and over a long period of time.

Their popularity is due in part to the ease with which they can be employed, and in part to the wealth of analytic results known for them. Widespread use of nonclassical orthogonal polynomials, in contrast, has been impeded by a lack of effective and generally applicable constructive methods.

The present set of computer routines has been developed over the past ten years in the hope of remedying this impediment and of encouraging the use of nonstandard orthogonal polynomials. A number of applications indeed have already been made, for example to numerical quadrature (Cauchy principal value integrals with coth-kernel [38], Hilbert transform of Jacobi weight functions [37], integration over half-infinite intervals [28], rational Gauss-type quadrature [30,31]), to moment-preserving spline approximation [21,35,11], to the summation of slowly convergent series [26,27], and, perhaps most notably, to the proof of the Bieberbach conjecture [24].

In most applications, orthogonality is with respect to a positive weight function, w, on a given interval or union of intervals, or with respect to positive weights, wi, concentrated on a discrete set of points, {xi}, or a combination of both. For convenience of notation, we subsume all these cases under the notion of a positive measure, dλ, on the real line R. That is, the respective inner product is written as a Riemann-Stieltjes integral,

    (u, v) = ∫_R u(t) v(t) dλ(t),    (1.1)

where the function λ(t) is the indefinite integral of w for the continuous part, and a step function with jumps wi at xi for the discrete part. We assume that (1.1) is meaningful whenever u, v are polynomials.

*Work supported, in part, by National Science Foundation grants, most recently by grant DMS-9023403.

There is then defined a unique set of (monic) orthogonal polynomials,

    πk(t) = t^k + lower-degree terms,    k = 0, 1, 2, ...,    (πk, πℓ) = 0 if k ≠ ℓ.    (1.2)

We speak of “continuous” orthogonal polynomials if the support of dλ is an interval, or a union of intervals, of “discrete” orthogonal polynomials if the support of dλ consists of a discrete set of points, and of orthogonal polynomials of “mixed type” if the support of dλ has both a continuous and a discrete part. In the first and last case, there are infinitely many orthogonal polynomials — one for each degree —, whereas in the second case there are exactly N orthogonal polynomials, π0, π1, ..., πN−1, where N is the number of support points. In all cases, we denote the polynomials by πk(·) = πk(·; dλ), or πk(·; w), if we want to indicate their dependence on the measure dλ or weight function w, and use similar notations for other quantities depending on dλ or w.

It is a distinctive feature of orthogonal polynomials, compared to other orthogonal systems, that they satisfy a three-term recurrence relation,

    πk+1(t) = (t − αk) πk(t) − βk πk−1(t),    k = 0, 1, 2, ...,
    π0(t) = 1,    π−1(t) = 0,    (1.3)

with coefficients αk = αk(dλ) ∈ R, βk = βk(dλ) > 0 that are uniquely determined by the measure dλ. By convention, the coefficient β0, which multiplies π−1 = 0 in (1.3), is defined by

    β0 = β0(dλ) = ∫_R dλ(t).    (1.4)

The knowledge of these coefficients is absolutely indispensable for any sound computational use and application of orthogonal polynomials [19,25].
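As a concrete illustration of how (1.3) is used in practice (this sketch is mine, in Python; the package itself is Fortran), the recurrence evaluates πn(t) in O(n) operations. The test data below are the well-known recursion coefficients of the monic Chebyshev polynomials of the first kind, αk = 0, β1 = 1/2, βk = 1/4 for k ≥ 2:

```python
def eval_monic(alpha, beta, n, t):
    """Evaluate the monic orthogonal polynomial pi_n(t) via the
    three-term recurrence (1.3); beta[0] multiplies pi_{-1} = 0,
    so its value never enters the result."""
    p_prev, p = 0.0, 1.0          # pi_{-1}, pi_0
    for k in range(n):
        p_prev, p = p, (t - alpha[k]) * p - beta[k] * p_prev
    return p

# Monic Chebyshev (first kind): alpha_k = 0, beta_1 = 1/2, beta_k = 1/4
# for k >= 2 (beta_0 = pi is the total mass, unused in evaluation).
alpha = [0.0] * 4
beta = [3.141592653589793, 0.5, 0.25, 0.25]
# pi_3(t) = t^3 - (3/4) t, hence pi_3(0.5) = -0.25
print(eval_monic(alpha, beta, 3, 0.5))
```

The same loop, with the coefficients produced by the package's routines, evaluates any of the orthogonal polynomials discussed below.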

One of the principal objectives of the present package is precisely to provide routines for generating these coefficients. Routines for related quantities are also provided, such as Gauss-type quadrature weights and nodes, hence also zeros of orthogonal polynomials.

Occasionally (e.g., in [21,35,11,30,31]), one needs to deal with indefinite (i.e., sign-changing) measures dλ.

The positivity of the βk is then no longer guaranteed, indeed not even the existence of all orthogonal polynomials. Nevertheless, our methods can still be formally applied, albeit at the risk of possible breakdowns or instabilities.

There are basically four methods used here to generate recursion coefficients: (i) Methods based on explicit formulae. These relate to classical orthogonal polynomials, and are implemented in the routine recur of §2.

(ii) Methods based on moment information. These are dealt with in §3, and are represented by a single routine, cheb.

Its origin can be traced back to work of Chebyshev on discrete least squares approximation. (iii) Bootstrap methods based on inner product formulae for the coefficients, and orthogonal reduction methods.

We have attributed the idea for the former method to Stieltjes and referred to it in [19] as the Stieltjes procedure. The prototype is the routine sti in §4, applicable for discrete orthogonal polynomials.

An alternative routine is lancz, which accomplishes the same purpose, but uses the method of Lanczos. Either of these routines can be used in mcdis, which applies to continuous as well as mixed-type orthogonal polynomials.

In contrast to all previous routines, mcdis uses a discretization process, and thus furnishes only approximate answers whose accuracies can be controlled by the user. The routine, however, is by far the most sophisticated and flexible routine in this package, one that requires, or can greatly benefit from, ingenuity of the user.

The same kind of discretization is also applicable to moment-related methods, yielding the routine mccheb. (iv) Modification algorithms.

These are routines generating recursion coefficients for measures modified by a rational factor, utilizing the recursion coefficients of the original measure, which are assumed to be known. They can be thought of as

algorithmic implementations of the Christoffel, or generalized Christoffel, theorem and are incorporated in the routines chri and gchri of §5. An important application of all these routines is made in §6, where routines are provided that generate the weights and nodes of quadrature rules of Gauss, Gauss-Radau, and Gauss-Lobatto type.

Each routine has a single-precision and double-precision version with similar names, except for the prefix “d” in double-precision procedures.
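The §6 routines themselves appear later in the package; the mechanism they rest on, however, can be sketched already. The Python fragment below is my illustration, not the package's gauss routine: it finds the Gauss nodes as the zeros of πn by Newton's method from Chebyshev starting guesses (adequate for measures supported on [−1, 1]) and obtains each weight as the reciprocal of the sum of squared orthonormal polynomials at the node:

```python
import math

def pi_n(alpha, beta, n, t):
    """Monic pi_n(t) and its derivative via the recurrence (1.3)."""
    p_prev, p, dp_prev, dp = 0.0, 1.0, 0.0, 0.0
    for k in range(n):
        p_next = (t - alpha[k]) * p - beta[k] * p_prev
        dp_next = p + (t - alpha[k]) * dp - beta[k] * dp_prev
        p_prev, p, dp_prev, dp = p, p_next, dp, dp_next
    return p, dp

def gauss_rule(alpha, beta, n):
    """n-point Gauss rule from the first n recursion coefficients.
    Weights: w = 1 / sum_{k<n} ptilde_k(x)^2, ptilde_k orthonormal."""
    nodes, weights = [], []
    for i in range(n):
        t = math.cos(math.pi * (n - i - 0.5) / n)   # Chebyshev starting guess
        for _ in range(60):                          # Newton iteration on pi_n
            p, dp = pi_n(alpha, beta, n, t)
            t -= p / dp
        nodes.append(t)
        q_prev, q = 0.0, 1.0 / math.sqrt(beta[0])    # ptilde_{-1}, ptilde_0
        s = q * q
        for k in range(n - 1):
            q_prev, q = q, ((t - alpha[k]) * q
                            - math.sqrt(beta[k]) * q_prev) / math.sqrt(beta[k + 1])
            s += q * q
        weights.append(1.0 / s)
    return nodes, weights

# Legendre, n = 2: alpha = (0, 0), beta_0 = 2, beta_1 = 1/3.
# The nodes are +-1/sqrt(3) and each weight equals 1.
x, w = gauss_rule([0.0, 0.0], [2.0, 1.0 / 3.0], 2)
```

An equivalent and more robust formulation, eigenvalues of the tridiagonal Jacobi matrix, is touched on in §4.2 below.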

The latter are generally a straightforward translation of the former. An exception is the routine dlga used in drecur for computing the logarithm of the gamma function, which employs a different method than the single-precision companion routine alga.

All routines of the package have been checked for ANSI conformance, and have been tested on two computers: the Cyber 205 and a Sun 4/670 MP workstation. The former has machine precisions εs ≈ 7.11 × 10⁻¹⁵, εd ≈ 5.05 × 10⁻²⁹ in single and double precision, respectively, while the latter has εs ≈ 5.96 × 10⁻⁸, εd ≈ 1.11 × 10⁻¹⁶. The Cyber 205 has a large floating-point exponent range, extending from approximately −8617 to +8645 in single as well as double precision, whereas the Sun 4/670 has the rather limited exponent range [−38, 38] in single precision, but a larger range, [−308, 308], in double precision.

All output cited relates to work on the Cyber 205.

The package is organized as follows: Section 0 contains (slightly amended) netlib routines, namely r1mach and d1mach, providing basic machine constants for a variety of computers. Section 1 contains all the driver routines — named test1, test2, etc.

— which are used (and described in the body of this paper) to test the subroutines of the package. The complete output of each test is listed immediately after the driver.

Sections 2–6 constitute the core of the package: the single- and double-precision subroutines described in the equally-numbered sections of this paper. All single-precision routines are provided with comments and instructions for their use.

These, of course, apply to the double-precision routines as well.

2. CLASSICAL WEIGHT FUNCTIONS

Among the most frequently used orthogonal polynomials are the Jacobi polynomials, generalized Laguerre polynomials, and Hermite polynomials, supported respectively on a finite interval, half-infinite interval, and the

whole real line. The respective weight functions are

    w(t) = w^(α,β)(t) = (1 − t)^α (1 + t)^β on (−1, 1),  α > −1, β > −1:    Jacobi,    (2.1)
    w(t) = w^(α)(t) = t^α e^(−t) on (0, ∞),  α > −1:    generalized Laguerre,    (2.2)
    w(t) = e^(−t²) on (−∞, ∞):    Hermite.    (2.3)

Special cases of the Jacobi polynomials are the Legendre polynomials (α = β = 0), the Chebyshev polynomials of the first (α = β = −1/2), second (α = β = 1/2), third (α = −β = −1/2) and fourth (α = −β = 1/2) kind, and the Gegenbauer polynomials (α = β = λ − 1/2). The Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials.

For each of these polynomials, the corresponding recursion coefficients αk = αk(w), βk = βk(w) are explicitly known (see, e.g., [5, pp.

217–221]), and are generated in single precision by the routine recur. Its calling sequence is

    recur(n,ipoly,al,be,a,b,ierr).

On entry,

    n      is the number of recursion coefficients desired; type integer
    ipoly  is an integer identifying the polynomial as follows:
           1 = Legendre polynomial on (−1,1)
           2 = Legendre polynomial on (0,1)
           3 = Chebyshev polynomial of the first kind
           4 = Chebyshev polynomial of the second kind
           5 = Chebyshev polynomial of the third kind
           6 = Jacobi polynomial with parameters al,be
           7 = generalized Laguerre polynomial with parameter al
           8 = Hermite polynomial
    al,be  are the input parameters α, β for Jacobi and generalized Laguerre
           polynomials; type real; they are only used if ipoly = 6 or 7, and
           in the latter case only al is used.

On return,

    a,b   are real arrays of dimension n with a(k), b(k) containing the
          coefficients αk−1, βk−1, respectively, k = 1, 2, ..., n
    ierr  is an error flag, where
          ierr = 0 on normal return,
          ierr = 1 if either al or be are out of range when ipoly = 6 or ipoly = 7,
          ierr = 2 if there is potential overflow in the evaluation of β0 when
                   ipoly = 6 or ipoly = 7; in this case β0 is set equal to the
                   largest machine-representable number,
          ierr = 3 if n is out of range,
          ierr = 4 if ipoly is not one of the admissible integers.

No provision has been made for Chebyshev polynomials of the fourth kind, since their recursion coefficients are obtained from those for the third-kind Chebyshev polynomials simply by changing the sign of the α’s (and leaving the β’s unchanged).

The corresponding double-precision routine is drecur; it has the same calling sequence, except for real data types now being double-precision. In the cases of Jacobi polynomials (ipoly = 6) and generalized Laguerre polynomials (ipoly = 7), the recursion coefficient β0 (and only this one) involves the gamma function Γ.
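For the Jacobi weight (2.1), for instance, the classical formula is β0 = 2^(α+β+1) Γ(α+1)Γ(β+1)/Γ(α+β+2), a standard fact not spelled out in the text above. A small Python check (my sketch, not the package's Fortran) shows why one works with ln Γ rather than Γ itself: the individual gamma values can overflow for large parameters even when β0 is modest:

```python
import math

def beta0_jacobi(al, be):
    """beta_0 for the Jacobi weight (2.1), computed entirely in logarithms:
    beta_0 = 2^(al+be+1) * Gamma(al+1) Gamma(be+1) / Gamma(al+be+2)."""
    lg = (math.lgamma(al + 1.0) + math.lgamma(be + 1.0)
          - math.lgamma(al + be + 2.0))
    return math.exp((al + be + 1.0) * math.log(2.0) + lg)

print(beta0_jacobi(0.0, 0.0))  # Legendre case: beta_0 = 2
```

For α = β = 1/2 (second-kind Chebyshev) this gives β0 = π/2, which can be verified by direct integration.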

Accordingly, a function routine, alga, is provided that computes the logarithm ln Γ of the gamma function, and a separate routine, gamma, computing the gamma function by exponentiating its logarithm. Their calling sequences are

    function alga(x)
    function gamma(x,ierr),

where ierr is an output variable set equal to 2 or 0 depending on whether the gamma function does, or does not, overflow, respectively.

The corresponding double-precision routines have the names dlga and dgamma. All these routines require machine-dependent constants for reasons explained below.

The routine alga is based on a rational approximation valid on the interval [1/2, 3/2]. Outside this interval, the argument x is written as

    x = x̃ + m,

where

    x̃ = x − ⌊x⌋ + 1    if x − ⌊x⌋ ≤ 1/2,
    x̃ = x − ⌊x⌋        otherwise

is in the interval (1/2, 3/2] and m ≥ −1 is an integer.

If m = −1 (i.e., 0 < x ≤ 1/2), then ln Γ(x) = ln Γ(x̃) − ln x, while for m > 0, one computes ln Γ(x) = ln Γ(x̃) + ln p, where p = x̃(x̃ + 1) ··· (x̃ + m − 1). If m is so large, say m ≥ m0, that the product p would overflow, then ln p is computed (at a price!) as ln p = ln x̃ + ln(x̃ + 1) + ··· + ln(x̃ + m − 1).

It is here where a machine-dependent integer is required, namely m0 = the smallest integer m such that 1·3·5 ··· (2m + 1)/2^m is greater than or equal to the largest machine-representable number, R. By Stirling’s formula, the integer m0 is seen to be the smallest integer m satisfying ((m+1)/e) ln((m+1)/e) ≥ (ln R − (1/2) ln 8)/e, hence equal to ⌊e · t((ln R − (1/2) ln 8)/e)⌋, where t(y) is the inverse function of y = t ln t. For our purposes, the low-accuracy approximation of t(y), given in [16, pp.

51–52], and implemented in the routine t, is adequate.

The rational approximation chosen on [1/2, 3/2] is one due to W.J. Cody and K.E.

Hillstrom, namely the one labeled n = 7 in Table II of [9]. It is designed to yield about 16 correct decimal digits (cf.

[9, Table I]), but because of numerical cancellation furnishes only about 13–14 correct decimal digits.

Since rational approximations for ln Γ having sufficient accuracies for double-precision computation do not seem to be available in the literature, we used a different approach for the routine dlga, namely the asymptotic approximation (cf. [1, Eq.

6.1.42], where the constants B2m are Bernoulli numbers)

    ln Γ(y) = (y − 1/2) ln y − y + (1/2) ln(2π) + Σ_{m=1}^{n} [B2m/(2m(2m−1))] y^(−(2m−1)) + Rn(y)    (2.4)

for values of y > 0 large enough to have

    |Rn(y)| ≤ (1/2) 10^(−d),    (2.5)

where d is the number of decimal digits carried in double-precision arithmetic, another machine-dependent real number. If (2.5) holds for y ≥ y0, and if x ≥ y0, we compute ln Γ(x) from the asymptotic expression (2.4) (where y = x and the remainder term is neglected).

Otherwise, we let k0 be the smallest positive integer k such that x + k ≥ y0, and use

    ln Γ(x) = ln Γ(x + k0) − ln(x(x + 1) ··· (x + k0 − 1)),    (2.6)

where the first term on the right is computed from (2.4) (with y = x + k0). Since for y > 0,

    |Rn(y)| ≤ [|B2n+2|/((2n + 2)(2n + 1))] y^(−(2n+1))

(cf. [1, Eq.

6.1.42]), the inequality (2.5) is satisfied if

    y ≥ exp{ (1/(2n+1)) [ d ln 10 + ln( 2|B2n+2|/((2n + 2)(2n + 1)) ) ] }.    (2.7)

In our routine dlga, we have selected n = 9. For double-precision accuracy on the Cyber 205, we have d ≈ 28.3, for which (2.7) then gives y ≥ exp{.121188... d + .053905...} ≈ 32.6.

For single-precision calculation we selected the method of rational approximation, rather than the asymptotic formula (2.4) and (2.6), since we found that the former is generally more accurate, by a factor, on the average, of about 20 and as large as 300.

Neither method yields full machine accuracy. The former, as already mentioned, loses accuracy in the evaluation of the approximation.

The latter suffers loss of accuracy because of cancellation occurring in (2.6), which typically amounts to a loss of 2–5 significant decimal digits in the gamma function itself.

Since these errors affect only the coefficient β0 (and only if ipoly = 6 or 7), they are of no consequence unless the output of the routine recur serves as input to another routine, such as gauss (cf. §6), which makes essential use of β0. In this case, for maximum single-precision accuracy, it is recommended that β0 be first obtained in double precision by means of drecur with n = 1, and then converted to single precision.
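The scheme (2.4)–(2.6) is easily prototyped. The Python sketch below is my illustration, not a transcription of dlga: since Python works in IEEE double precision, the modest choices y0 = 10 and five series terms (through B10) — my choices, not the package's — already give near machine accuracy:

```python
import math

# Bernoulli numbers B_2, B_4, ..., B_10 for the series in (2.4)
B = [1.0 / 6, -1.0 / 30, 1.0 / 42, -1.0 / 30, 5.0 / 66]
Y0 = 10.0  # shift threshold y0; adequate for IEEE double precision

def lngamma(x):
    """ln Gamma(x) for x > 0, via the asymptotic series (2.4) applied at
    y = x + k0 and the downward recursion (2.6)."""
    k0 = max(0, math.ceil(Y0 - x))       # smallest k >= 0 with x + k >= Y0
    y = x + k0
    s = (y - 0.5) * math.log(y) - y + 0.5 * math.log(2.0 * math.pi)
    for m, b2m in enumerate(B, start=1):
        s += b2m / (2 * m * (2 * m - 1) * y ** (2 * m - 1))
    # (2.6): ln Gamma(x) = ln Gamma(x + k0) - ln(x (x+1) ... (x + k0 - 1))
    for j in range(k0):
        s -= math.log(x + j)
    return s
```

Comparing against a library log-gamma confirms agreement to roughly 13 digits; the residual loss for shifted arguments is the cancellation in (2.6) discussed above.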

3. MOMENT-RELATED METHODS

It is a well-known fact that the first n recursion coefficients αk(dλ), βk(dλ), k = 0, 1, ..., n − 1 (cf. (1.3)), are uniquely determined by the first 2n moments µk of the measure dλ,

    µk = µk(dλ) = ∫_R t^k dλ(t),    k = 0, 1, 2, ..., 2n − 1.    (3.1)

Formulae are known, for example, which express the α’s and β’s in terms of Hankel determinants in these moments.

The trouble is that these formulae become increasingly sensitive to small errors as n becomes large. There is an inherent reason for this: the underlying (nonlinear) map Kn: R^2n → R^2n has a condition number, cond Kn, which grows exponentially with n (cf.

[19, §3.2]). Any method that attempts to compute the desired coefficients from the moments in (3.1), therefore, is doomed to fail, unless n is quite small, or extended precision is being employed.

That goes in particular for an otherwise elegant method due to Chebyshev (who developed it for the case of discrete measures dλ) that generates the α’s and β’s directly from the moments (3.1), bypassing determinants altogether (cf. [4], [19, §2.3]). Variants of Chebyshev’s algorithm with more satisfactory stability properties have been developed by Sack and Donovan [46] and Wheeler [50] (independently of Chebyshev’s work).

The idea is to forego the moments (3.1) as input data, and instead depart from so-called modified moments. These are defined by replacing the power t^k in (3.1) by an appropriate polynomial pk(t) of degree k,

    νk = νk(dλ) = ∫_R pk(t) dλ(t),    k = 0, 1, 2, ..., 2n − 1.    (3.2)

For example, pk could be one of the classical orthogonal polynomials.

More generally, we shall assume that {pk} are monic polynomials satisfying a three-term recurrence relation similar to the one in (1.3),

    pk+1(t) = (t − ak) pk(t) − bk pk−1(t),    k = 0, 1, 2, ...,
    p0(t) = 1,    p−1(t) = 0,    (3.3)

with coefficients ak ∈ R, bk ≥ 0 that are known. (In the special case ak = 0, bk = 0, we are led back to powers and ordinary moments.) There now

exists an algorithm, called modified Chebyshev algorithm in [19, §2.4], which takes the 2n modified moments in (3.2) and the 2n−1 coefficients {ak}, {bk}, k = 0, ..., 2n−2, in (3.3), and from them generates the n desired coefficients αk(dλ), βk(dλ), k = 0, 1, ..., n − 1. It generalizes Chebyshev’s algorithm, which can be recovered (if need be) by putting ak = bk = 0. The modified Chebyshev algorithm is embodied in the subroutine cheb, which has the calling sequence

    cheb(n,a,b,fnu,alpha,beta,s,ierr,s0,s1,s2)
    dimension a(*),b(*),fnu(*),alpha(n),beta(n),s(n),s0(*),s1(*),s2(*)

On entry,

    n           is the number of recursion coefficients desired; type integer
    a,b         are arrays of dimension 2×n−1 holding the coefficients
                a(k) = ak−1, b(k) = bk−1, k = 1, 2, ..., 2×n−1
    fnu         is an array of dimension 2×n holding the modified moments
                fnu(k) = νk−1, k = 1, 2, ..., 2×n

On return,

    alpha,beta  are arrays of dimension n containing the desired recursion
                coefficients alpha(k) = αk−1, beta(k) = βk−1, k = 1, 2, ..., n
    s           is an array of dimension n containing the numbers
                s(k) = ∫_R π²k−1 dλ, k = 1, 2, ..., n
    ierr        is an error flag, equal to 0 on normal return, equal to 1 if
                |ν0| is less than the machine zero, equal to 2 if n is out of
                range, equal to −(k−1) if s(k), k = 1, 2, ..., n, is about to
                underflow, and equal to +(k−1) if it is about to overflow.

The arrays s0, s1, s2 of dimension 2×n are needed for working space.

There is again a map Kn: R^2n → R^2n underlying the modified Chebyshev algorithm, namely the map taking the 2n modified moments into the n pairs of recursion coefficients. The condition of the map Kn (actually of a somewhat different, but closely related map) has been studied in [19, §3.3] and [23] in the important case where the polynomials pk defining the modified moments are themselves orthogonal polynomials, pk(·) = pk(·; dµ), with respect to a measure dµ (for example, one of the classical ones) for which the recursion coefficients ak, bk are known.

The upshot of the analysis then is that the condition of Kn is characterized by a certain positive polynomial gn(·; dλ) of degree 4n−2, depending only on the target measure dλ, in the sense that

    cond Kn = ∫_R gn(t; dλ) dµ(t).    (3.4)

Thus, the numerical stability of the modified Chebyshev algorithm is determined by the magnitude of gn on the support of dµ.
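The algorithm itself is short. The Python sketch below is mine (array roles mirror cheb's arguments a, b, fnu, but it is not the package's Fortran); it generates the αk, βk through the usual auxiliary quantities σk,l, and setting a = b = 0 recovers Chebyshev's original algorithm operating on ordinary moments:

```python
def mod_chebyshev(n, a, b, nu):
    """Modified Chebyshev algorithm: first n recursion coefficients
    (alpha_k, beta_k) from 2n modified moments nu_l and the 2n-1
    coefficients a_l, b_l of the recurrence (3.3)."""
    alpha = [a[0] + nu[1] / nu[0]]
    beta = [nu[0]]
    sig_old = [0.0] * (2 * n)      # sigma_{k-2, l}
    sig = list(nu)                 # sigma_{k-1, l}; sigma_{0, l} = nu_l
    for k in range(1, n):
        sig_new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            sig_new[l] = (sig[l + 1] - (alpha[k - 1] - a[l]) * sig[l]
                          - beta[k - 1] * sig_old[l] + b[l] * sig[l - 1])
        alpha.append(a[k] + sig_new[k + 1] / sig_new[k] - sig[k] / sig[k - 1])
        beta.append(sig_new[k] / sig[k - 1])
        sig_old, sig = sig, sig_new
    return alpha, beta

# With a = b = 0, the ordinary moments of the Legendre measure on (-1, 1),
# mu = (2, 0, 2/3, 0, 2/5, 0), recover alpha = 0 and beta = (2, 1/3, 4/15).
al, be = mod_chebyshev(3, [0.0] * 5, [0.0] * 5,
                       [2.0, 0.0, 2.0 / 3.0, 0.0, 0.4, 0.0])
```

For small n this is harmless even with ordinary moments; the instability quantified by (3.4) shows up as n grows, as Table 3.2 below illustrates.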

On exit, the coefficient β0 (andonly this one!) then has to be divided by the same scaling factor.

(There may occur harmless underflow of auxiliary quantities in the routine cheb, which is difficult to avoid since some of these quantities actually are expected to be zero.)

Example 3.1. dλω(t) = [(1 − ω²t²)(1 − t²)]^(−1/2) dt on (−1,1), 0 ≤ ω < 1.

This example is of some historical interest, in that it has already been considered by Christoffel [8, Example 6]; see also [44].

Computationally, the example is of interest as there are empirical reasons to believe that for the choice dµ(t) = (1 − t²)^(−1/2) dt on (−1,1) — which appears rather natural — the modified Chebyshev algorithm is exceptionally stable, uniformly in n, in the sense that in (3.4) one has gn ≤ 1 on supp dµ for all n (cf. [22, Example 5.2]).

With the above choice of dµ, the polynomials pk are clearly the Chebyshev polynomials of the first kind, p0 = T0, pk = 2^(−(k−1)) Tk, k ≥ 1, and the modified moments are given by

    ν0 = ∫₋₁¹ dλω(t),    νk = (1/2^(k−1)) ∫₋₁¹ Tk(t) dλω(t),    k = 1, 2, 3, ....    (3.5)

They are expressible in terms of the Fourier coefficients Cr(ω²) in

    (1 − ω² sin² θ)^(−1/2) = C0(ω²) + 2 Σ_{r=1}^{∞} Cr(ω²) cos 2rθ    (3.6)

by means of (cf. [19, Example 3.3])

    ν0 = π C0(ω²),    ν2m = (−1)^m (π/2^(2m−1)) Cm(ω²),    ν2m−1 = 0,    m = 1, 2, 3, ....    (3.7)

The Fourier coefficients {Cr(ω²)} in turn can be accurately computed as minimal solution of a certain three-term recurrence relation (see [19, pp. 310–311]).

The ordinary moments

    µ0 = ν0,    µk = ∫₋₁¹ t^k dλω(t),    k = 1, 2, 3, ...,    (3.8)

likewise can be expressed in terms of the Fourier coefficients Cr(ω²) by writing t^(2m) as a linear combination of the Chebyshev polynomials T0, T2, ..., T2m (cf. Eq. (22) on p. 454 of [43]).

The result is

    µ2m = (−1)^m (π/2^(2m−1)) Σ_{r=0}^{m} (−1)^r γr^(m) Cm−r(ω²),    µ2m−1 = 0,    m = 1, 2, 3, ...,    (3.9)

where

    γ0^(m) = 1,    γr^(m) = ((2m + 1 − r)/r) γr−1^(m),    r = 1, 2, ..., m − 1,
    γm^(m) = ((m + 1)/(2m)) γm−1^(m).    (3.10)

The driver routine test1 (in §1 of the package) generates for ω² = .1(.2).9, .99, .999 the first n recurrence coefficients βk(dλω) (all αk = 0), both

in single and double precision, using modified moments if modmom = .true., and ordinary moments otherwise. In the former case, n = 80; in the latter, n = 20.

It prints the double-precision values of βk together with the relative errors of the single-precision values (computed as the difference of the double-precision and single-precision values divided by the double-precision value). In test1, as well as in all subsequent drivers, not all error flags are interrogated for possible malfunction.

The user is urged, however, to do so as a matter of principle.

    ω²      k    βk (double)                       err βk (single)
    .1      0    3.224882697440438796459832725     1.433(–14)
            1     .5065840806382684475158495727    1.187(–14)
            5     .2499999953890031901881028267    1.109(–14)
           11     .2499999999999999996365048540    1.454(–18)
           18     .2500000000000000000000000000    0.000
    .5      0    3.708149354602743836867700694     9.005(–15)
            1     .5430534189555363746250333773    2.431(–14)
            8     .2499999846431723296083779480    4.109(–15)
           20     .2499999999999999978894635584    8.442(–18)
           35     .2500000000000000000000000000    0.000
    .9      0    5.156184226696346376405141543     6.950(–15)
            1     .6349731661452458711622492613    7.920(–15)
           19     .2499999956925950094629502830    1.820(–14)
           43     .2499999999999998282104100896    6.872(–16)
           79     .2499999999999999999999999962    1.525(–26)
    .999    0    9.682265121100594060678208257     1.194(–13)
            1     .7937821421385176965531719571    6.311(–14)
           19     .2499063894398209200047452537    1.026(–14)
           43     .2499955822633680825859750068    8.282(–15)
           79     .2499998417688157876153069211    1.548(–15)

TABLE 3.1. Selected output from test1 in the case of modified moments

The routine

    fmm(n,eps,modmom,om2,fnu,ierr,f,f0,rr)

used by the driver computes the first 2×n modified [ordinary] moments for ω² = om2, to a relative accuracy eps if modmom = .true. [.false.].

The results are stored in the array fnu. The arrays f, f0, rr are internal working arrays of dimension n, and ierr is an error flag.

On normal return, ierr = 0; otherwise, ierr = 1, indicating lack of convergence (within a prescribed number of iterations) of the backward recurrence algorithm for computing the minimal solution {Cr(ω²)}. The latter is likely to occur if ω² is too close to 1.

The routine fmm as well as its double-precision version dmm are listed immediately after the routine test1.

In Table 3.1, we show selected results from the output of test1, when modmom = .true. (Complete results are given in the package immediately after test1.) The values for k = 0 are expressible in terms of the complete elliptic integral, β0 = 2K(ω²), and were checked, where possible, against the 16S-values in Table 17.1 of [1].

In all cases, there was agreement to all 16 digits. The largest relative error observed was 2.43 × 10⁻¹³ for ω² = .999 and k = 2.

When ω² ≤ .99, the error was always less than 2.64 × 10⁻¹⁴, which confirms the extreme stability of the modified Chebyshev algorithm in this example. It can be seen (as was to be expected) that for ω² not too close to 1, the coefficients converge rapidly to 1/4.

In contrast, Table 3.2 shows selected results (for complete results, see the package) in the case of ordinary moments (modmom = .false.) and demonstrates the severe instability of the Chebyshev algorithm.

We remark that the moments themselves are all accurate to essentially machine precision, as has been verified by additional computations.

    ω²     k    err βk          ω²      k    err βk
    .1     1    1.187(–14)      .9      1    3.270(–15)
           7    2.603(–10)              7    4.819(–10)
          13    9.663(–6)              13    1.841(–5)
          19    4.251(–1)              19    6.272(–1)
    .5     1    2.431(–14)      .999    1    6.311(–14)
           7    5.571(–10)              7    1.745(–9)
          13    9.307(–6)              13    8.589(–5)
          19    5.798(–1)              19    4.808(0)

TABLE 3.2. Selected output from test1 in the case of ordinary moments

The next example deals with another weight function for which the modified Chebyshev algorithm performs rather well.

Example 3.2. dλσ(t) = t^σ ln(1/t) dt on (0,1], σ > −1.

What is nice about this example is that both modified and ordinary moments of dλσ are known in closed form.

The latter are obviously given by

    µk(dλσ) = 1/(σ + 1 + k)²,    k = 0, 1, 2, ...,    (3.11)

whereas the former, relative to shifted monic Legendre polynomials (ipoly = 2 in recur), are (cf. [17])

    ((2k)!/k!²) νk(dλσ) =

        (−1)^(k−σ) σ!² (k − σ − 1)!/(k + σ + 1)!,    if 0 ≤ σ < k, σ an integer,

        (1/(σ+1)) [ 1/(σ+1) + Σ_{r=1}^{k} ( 1/(σ+1+r) − 1/(σ+1−r) ) ] Π_{r=1}^{k} (σ+1−r)/(σ+1+r),    otherwise.    (3.12)

The routines fmm and dmm appended to test2 in §1 of the package, similarly as the corresponding routines in Example 3.1, generate the first 2×n modified moments ν0, ν1, ..., ν2n−1, if modmom = .true., and the first 2×n ordinary moments, otherwise. The calling sequence of fmm is

    fmm(n,modmom,intexp,sigma,fnu).

The logical variable intexp is to be set .true. if σ is an integer, and .false. otherwise.

In either case, the input variable sigma is assumed of type real.

The routine test2 generates the first n recursion coefficients αk(dλσ), βk(dλσ) in single and double precision for σ = −1/2, 0, 1/2, where n = 100 for the modified Chebyshev algorithm (modmom = .true.), and n = 12 for the classical Chebyshev algorithm (modmom = .false.).
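As a numerical cross-check of (3.11) and (3.12) (my own Python sketch, not part of the package), both branches of (3.12) can be coded directly. For σ = 0, k = 2 the integer branch gives ν2 = 1/36, which agrees with integrating the monic shifted Legendre polynomial t² − t + 1/6 against the moments (3.11) term by term; for σ = −1/2, k = 1 the other branch gives ν1 = −14/9:

```python
import math

def nu_k(k, sigma):
    """Modified moments (3.12) of d(lambda_sigma) = t^sigma ln(1/t) dt,
    relative to monic shifted Legendre polynomials; returns nu_k itself."""
    c = math.factorial(2 * k) / math.factorial(k) ** 2   # (2k)!/k!^2
    if sigma == int(sigma) and 0 <= sigma < k:
        s = int(sigma)                                   # integer branch
        val = ((-1) ** (k - s) * math.factorial(s) ** 2
               * math.factorial(k - s - 1) / math.factorial(k + s + 1))
    else:                                                # "otherwise" branch
        sp1 = sigma + 1.0
        total = 1.0 / sp1 + sum(1.0 / (sp1 + r) - 1.0 / (sp1 - r)
                                for r in range(1, k + 1))
        prod = 1.0
        for r in range(1, k + 1):
            prod *= (sp1 - r) / (sp1 + r)
        val = total * prod / sp1
    return val / c

print(nu_k(2, 0))      # equals 1/36
print(nu_k(1, -0.5))   # equals -14/9
```

Both values can be confirmed by hand from (3.11), since ∫₀¹ t^a ln(1/t) dt = 1/(a+1)².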

Selected double-precision results to 25 significant digits, when modified moments are used, are shown in Table 3.3. (The complete results are given in the package after test2.)

    σ      k    αk                           βk
    –.5    0     .1111111111111111111111111   4.000000000000000000000000
          12     .4994971916094638566242202    .06231277082877488477563886
          24     .4998662912324218943801592    .06245372557342242600457226
          48     .4999652635485445800661969    .06248855717748684742433618
          99     .4999916184024356271670789    .06249733823051821636937156
    0      0     .2500000000000000000000000   1.000000000000000000000000
          12     .4992831802157361310272625    .06238356835953571123560330
          24     .4998062839486146398501532    .06247100084469111001639128
          48     .4999494083797023879356424    .06249281268110967462373889
          99     .4999877992015903283047919    .06249832670616925926204896
    .5     0     .3600000000000000000000000    .4444444444444444444444444
          12     .4993755732917555644203267    .06237082738280752611960887
          24     .4998324497706394488722725    .06246581011945496883543089
          48     .4999567275223771727791521    .06249115332711027176695932
          99     .4999896931841789781887674    .06249787251281682973825635

TABLE 3.3. Selected output from test2 in the case of modified moments

The largest relative errors observed, over all k = 0, 1, ..., 99, were respectively 6.211×10⁻¹¹, 2.237×10⁻¹², 1.370×10⁻¹² for the α’s, and 1.235×10⁻¹⁰, 4.446×10⁻¹², 2.724×10⁻¹² for the β’s, attained consistently at k = 99. The accuracy achieved is slightly less than in Example 3.1, for reasons explained in [22, Example 5.3]. The complete results for σ = −1/2 are also available in [27, Appendix, Table 1].

(They differ occasionally by 1 unit in the last decimal place from those produced here, probably because of a slightly different computation of the modified moments.) The results for σ = 0 can be checked up to k = 15 against the 30S-values given in [47, p. 92], and for 16 ≤ k ≤ 19 against 12S-values in [10, Table 3].

There is complete agreement to all 25 digits in the former case, and agreement to 12 digits in the latter, although there are occasional end figure discrepancies of 1 unit. These are believed to be due to rounding errors committed in [10], since similar discrepancies occur also in the range k ≤ 15.

We do not know of any tables for σ = 1/2, but a test will be given in §5, Example 5.1.

The use of ordinary moments (modmom = .false.) produces predictably worse results, the relative errors of which are shown in Table 3.4.

           σ = –.5                 σ = 0                   σ = .5
    k      err αk      err βk      err αk      err βk      err αk      err βk
     2     1.8(–13)    7.7(–14)    4.2(–13)    7.6(–13)    1.6(–12)    2.6(–13)
     5     2.2(–9)     1.2(–9)     4.2(–9)     1.2(–10)    1.3(–8)     6.6(–9)
     8     1.1(–5)     5.5(–6)     4.3(–6)     3.8(–6)     6.0(–5)     5.2(–6)
    11     2.5(–1)     1.7(–1)     1.3(0)      3.2(–1)     2.2(0)      4.7(–1)

TABLE 3.4. Selected output from test2 in the case of ordinary moments

4. STIELTJES, ORTHOGONAL REDUCTION, AND DISCRETIZATION PROCEDURES

4.1. The Stieltjes procedure. It is well known that the coefficients αk(dλ), βk(dλ) in the basic recurrence relation (1.3) can be expressed in terms of the orthogonal polynomials (1.2) and the inner product (1.1) as follows:

    αk(dλ) = (t πk, πk)/(πk, πk),    k ≥ 0;
    β0(dλ) = (π0, π0),    βk(dλ) = (πk, πk)/(πk−1, πk−1),    k ≥ 1.    (4.1)

Provided the inner product can be readily calculated, (4.1) suggests the following “bootstrap” procedure: Compute α0 and β0 by the first relations in (4.1) for k = 0. Then use the recurrence relation (1.3) for k = 0 to obtain π1. With π0 and π1 known, apply (4.1) for k = 1 to get α1, β1, then again (1.3) to obtain π2, and so on.

In this way, alternating between (4.1) and (1.3), we can bootstrap ourselves up to as many of the coefficients αk, βk as are desired. We attributed this procedure to Stieltjes, and called it Stieltjes’s procedure in [19]. In the case of discrete orthogonal polynomials, i.e., for inner products of the form

   (u, v) = ∑_{k=1}^{N} wk u(xk)v(xk),   wk > 0,     (4.2)
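As an illustration of the bootstrap just described, here is a minimal pure-Python sketch for a discrete inner product of the form (4.2); the function name and interface are chosen for exposition only and do not reproduce the Fortran routine sti.

```python
def stieltjes_discrete(n, x, w):
    """First n recursion coefficients alpha_k, beta_k for the discrete
    inner product (u, v) = sum_k w[k] u(x[k]) v(x[k]) of (4.2),
    obtained by alternating between (4.1) and the recurrence (1.3)."""
    N = len(x)
    assert 1 <= n <= N
    alpha, beta = [], []
    p_prev = [0.0] * N          # pi_{k-1}, tabulated at the support points
    p = [1.0] * N               # pi_k, starting with pi_0 = 1
    nrm = sum(w)                # (pi_0, pi_0)
    nrm_prev = None
    for k in range(n):
        # alpha_k = (t pi_k, pi_k) / (pi_k, pi_k)
        a = sum(wi * xi * pi * pi for wi, xi, pi in zip(w, x, p)) / nrm
        alpha.append(a)
        # beta_0 = (pi_0, pi_0); beta_k = (pi_k, pi_k)/(pi_{k-1}, pi_{k-1})
        beta.append(nrm if k == 0 else nrm / nrm_prev)
        # three-term recurrence (1.3): pi_{k+1} = (t - a) pi_k - b pi_{k-1}
        b = beta[k] if k > 0 else 0.0
        p_prev, p = p, [(xi - a) * pi - b * qi
                        for xi, pi, qi in zip(x, p, p_prev)]
        nrm_prev, nrm = nrm, sum(wi * pi * pi for wi, pi in zip(w, p))
    return alpha, beta
```

For the discrete Chebyshev inner product of Example 4.1 below this reproduces, for moderate n, the explicitly known coefficients (4.5); for n close to N the instability discussed later sets in.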

Stieltjes’s procedure is easily implemented; the resulting routine is called sti, and has the calling sequence

   sti(n,ncap,x,w,alpha,beta,ierr,p0,p1,p2).

On entry,

   n       is the number of recursion coefficients desired; type integer
   ncap    is the number of terms, N, in the discrete inner product; type integer
   x,w     are arrays of dimension ncap holding the abscissae x(k) = xk and weights w(k) = wk, k = 1, 2, . . . , ncap, of the discrete inner product.

On return,

   alpha,beta  are arrays of dimension n containing the desired recursion coefficients alpha(k) = αk−1, beta(k) = βk−1, k = 1, 2, . . . , n
   ierr    is an error flag having the value 0 on normal return, and the value 1 if n is not in the proper range 1 ≤ n ≤ N; if during the computation of a recursion coefficient with index k there is impending underflow or overflow, ierr will have the value −k in case of underflow, and the value +k in case of overflow. (No error flag is set in case of harmless underflow.)

The arrays p0, p1, p2 are working arrays of dimension ncap. The double-precision routine has the name dsti.

Occurrence of underflow [overflow] can be forestalled by multiplying all weights wk by a sufficiently large [small] scaling factor prior to entering the routine. Upon return, the coefficient β0 will then have to be readjusted by dividing it by the same scaling factor.

4.2. Orthogonal reduction method.

Another approach to producing the recursion coefficients αk, βk from the quantities xk, wk defining the inner product (4.2) is based on the observation (cf. [2], [29, §7]) that the symmetric tridiagonal matrix of order N + 1,

   J(dλN) =  [  1      √β0                           ]
             [ √β0     α0     √β1                    ]
             [         √β1    α1      ⋱              ]     (4.3)
             [                ⋱       ⋱      √βN−1   ]
             [                      √βN−1    αN−1    ]

(the “extended Jacobi matrix” for the discrete measure dλN implied in (4.2)), is orthogonally similar to the matrix

   [  1     √w^T ]
   [ √w      Dx  ] ,   √w = (√w1, . . . , √wN)^T,   Dx = diag(x1, . . . , xN).     (4.4)

Hence, the desired matrix J(dλN) can be obtained by applying Lanczos’s algorithm to the matrix (4.4). This is implemented in the routine

   lancz(n,ncap,x,w,alpha,beta,ierr,p0,p1),

which uses a judiciously constructed sequence of Givens transformations to accomplish the orthogonal similarity transformation (cf.

[45,3,41,2]; the routine lancz is adapted from the routine RKPW in [41, p. 328]). The input and output parameters of the routine lancz have the same meaning as in the routine sti, except that ierr can only have the value 0 or 1, while p0, p1 are again working arrays of dimension ncap.

The double-precision version of the routine is named dlancz.

The routine lancz is generally superior to the routine sti: the procedure used in sti may develop numerical instability from some point on, and therefore give inaccurate results for larger values of n. It furthermore is subject to underflow and overflow conditions. None of these shortcomings is shared by the routine lancz. On the other hand, there are cases where sti does better than lancz (cf. Example 4.5). We illustrate the phenomenon of instability (which is explained in [32]) in the case of the “discrete Chebyshev” polynomials.
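The orthogonal-similarity idea can be sketched as follows. For clarity this sketch applies plain Lanczos with full reorthogonalization to the bordered matrix (4.4), rather than the Givens-based RKPW scheme used by lancz, so it is a model of the method, not of the routine.

```python
import math

def lanczos_bordered(n, x, w):
    """Recursion coefficients of the discrete measure (4.2), obtained by
    Lanczos tridiagonalization of the bordered matrix (4.4) started at
    e_1; the resulting tridiagonal matrix is J(dlambda_N) of (4.3)."""
    N = len(x)
    assert 1 <= n < N
    s = [math.sqrt(wj) for wj in w]

    def matvec(v):                      # apply the matrix (4.4), matrix-free
        head = v[0] + sum(sj * vj for sj, vj in zip(s, v[1:]))
        tail = [sj * v[0] + xj * vj for sj, xj, vj in zip(s, x, v[1:])]
        return [head] + tail

    Q = [[1.0] + [0.0] * N]             # q_0 = e_1
    diag, offdiag = [], []
    for _ in range(n + 1):
        r = matvec(Q[-1])
        diag.append(sum(ri * qi for ri, qi in zip(r, Q[-1])))
        for u in Q:                     # full reorthogonalization (small N)
            c = sum(ri * ui for ri, ui in zip(r, u))
            r = [ri - c * ui for ri, ui in zip(r, u)]
        b = math.sqrt(sum(ri * ri for ri in r))
        offdiag.append(b)
        Q.append([ri / b for ri in r])
    # J(dlambda_N) has 1 in position (1,1), then alpha_0, alpha_1, ... on
    # the diagonal and sqrt(beta_0), sqrt(beta_1), ... on the off-diagonal
    alpha = diag[1:n + 1]
    beta = [offdiag[0] ** 2] + [b ** 2 for b in offdiag[1:n]]
    return alpha, beta
```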

Example 4.1. The inner product (4.2) with xk = −1 + 2(k−1)/(N−1), wk = 2/N, k = 1, 2, . . . , N.

This generates discrete analogues of the Legendre polynomials, which they indeed approach as N → ∞. The recursion coefficients are explicitly known:

   αk = 0,   k = 0, 1, . . . , N−1;   β0 = 2,
   βk = (1 + 1/(N−1))² (1 − (k/N)²) / (4 − k⁻²),   k = 1, 2, . . . , N−1.     (4.5)

To find out how well the routines sti and lancz generate them (in single precision), when N = 40, 80, 160, 320, we wrote the driver test3, which computes the respective absolute errors for the α’s and relative errors for the β’s.

Selected results for Stieltjes’s algorithm are shown in Table 4.1. The gradual deterioration, after some point (depending on N), is clearly visible. Lanczos’s method, in contrast, preserves essentially full accuracy; the largest error in the α’s is 1.42(–13), 2.27(–13), 4.83(–13), 8.74(–13) for N = 40, 80, 160, 320, respectively, and 3.38(–13), 6.63(–13), 2.17(–12), 5.76(–12) for the β’s.

    N     n       err α         err β         N     n       err α         err β
   40   ≤35    ≤1.91(–13)    ≤7.78(–13)     160   ≤76    ≤2.98(–13)    ≤7.61(–13)
         36     3.01(–12)     1.48(–11)            85     1.61(–9)      1.57(–8)
         37     6.93(–11)     3.55(–10)            94     1.25(–4)      1.17(–3)
         38     2.57(–9)      1.30(–8)            103     2.64(–3)      1.51(–1)
         39     1.93(–7)      9.58(–7)            112     2.35(–3)      1.16(0)
   80   ≤53    ≤2.04(–13)    ≤6.92(–13)     320  ≤106    ≤8.65(–13)    ≤7.39(–13)
         57     2.04(–10)     5.13(–10)           117     3.96(–10)     7.73(–10)
         61     3.84(–7)      9.35(–7)            128     2.46(–6)      4.67(–6)
         65     1.94(–3)      4.61(–3)            139     2.94(–2)      6.27(–2)
         69     1.87(–1)      6.14(0)             150     1.15(–3)      2.18(–2)

TABLE 4.1. Errors in the recursion coefficients αk, βk of (4.5) computed by Stieltjes’s procedure

4.3. Multiple-component discretization procedure. We assume now a measure dλ of the form

   dλ(t) = w(t)dt + ∑_{j=1}^{p} yj δ(t − xj)dt,   p ≥ 0,     (4.6)

consisting of a continuous part, w(t)dt, and (if p > 0) a discrete part written in terms of the Dirac δ-function. The support of the continuous part is assumed to be an interval or a finite union of disjoint intervals, some of which may extend to infinity. In the discrete part, the abscissae xj are assumed pairwise distinct, and the weights positive, yj > 0. The inner product (1.1), therefore, has the form

   (u, v) = ∫_R u(t)v(t)w(t)dt + ∑_{j=1}^{p} yj u(xj)v(xj).     (4.7)

The basic idea of the discretization procedure is rather simple: One approximates the continuous part of the inner product, i.e., the integral in (4.7), by a sum, using a suitable quadrature scheme. If the latter involves N terms, this replaces the inner product (4.7) by a discrete inner product (· , ·)N+p consisting of N + p terms, the N “quadrature terms” and the p original terms. In effect, the measure dλ in (4.6) is approximated by a discrete (N + p)-point measure dλN+p. We then compute the desired recursion coefficients from the formula (4.1), in which the inner product (· , ·) is replaced, throughout, by (· , ·)N+p. Thus, in effect, we approximate

   αk(dλ) ≈ αk(dλN+p),   βk(dλ) ≈ βk(dλN+p).     (4.8)

The quantities on the right can be computed by the methods of §4.1 or §4.2, i.e., employing the routines sti or lancz.

The difficult part of this approach is to find a discretization that results in rapid convergence, as N → ∞, of the approximations on the right of (4.8) to the exact values on the left, even in cases where the weight function w in (4.6) exhibits singular behavior. (The speed of convergence, of course, is unaffected by the discrete part of the inner product (4.7).) To be successful in this endeavor often requires considerable inventiveness on the part of the user. Our routines, mcdis and dmcdis, that implement this idea in single resp. double precision, however, are designed to be flexible enough to promote the use of effective discretization procedures.

Indeed, if the support of the weight function w in (4.7) is contained in the (finite or infinite) interval (a, b), it will often be useful to first decompose that interval into a finite number of subintervals,

   supp w ⊂ [a, b] = ∪_{i=1}^{m} [ai, bi],   m ≥ 1,     (4.9)

and to approximate the inner product separately on each subinterval [ai, bi], using an appropriate weighted quadrature rule. Thus, we write the integral in (4.7) as

   ∫_R u(t)v(t)w(t)dt = ∑_{i=1}^{m} ∫_{ai}^{bi} u(t)v(t)wi(t)dt,     (4.10)

where wi is an appropriate weight function on [ai, bi]. The intervals [ai, bi] are not necessarily disjoint. For example, the weight function w may be the sum w = w1 + w2 of two weight functions on [a, b], which we may want to treat individually (cf. Example 4.2 below). In that case, one would take [a1, b1] = [a2, b2] = [a, b] and w1 on the first interval, w2 on the other. Alternatively, we may simply want to use a composite quadrature rule to approximate the integral, in which case (4.9) is a partition of [a, b] and wi(t) = w(t) for each i. Still another example is a weight function w which is already supported on a union of disjoint intervals; in this case, (4.9) would be the same union, or possibly a refined union where some of the subintervals are further partitioned.

In whichever way (4.9) and (4.10) are constructed, each integral on the right of (4.10) is now approximated by an appropriate quadrature rule,

   ∫_{ai}^{bi} u(t)v(t)wi(t)dt ≈ Qi(uv),     (4.11)

where

   Qi f = ∑_{r=1}^{Ni} wr,i f(xr,i).     (4.12)

This gives rise to the approximate inner product

   (u, v)N+p = ∑_{i=1}^{m} ∑_{r=1}^{Ni} wr,i u(xr,i)v(xr,i) + ∑_{j=1}^{p} yj u(xj)v(xj),   N = ∑_{i=1}^{m} Ni.     (4.13)

In our routine mcdis, we have chosen, for simplicity, all Ni to be the same integer N0,

   Ni = N0,   i = 1, 2, . . . , m,     (4.14)

so that N = mN0. Furthermore, if n is the number of αk and the number of βk desired, we have used the following iterative procedure to determine the coefficients αk, βk to a prescribed (relative) accuracy ε: We let N0 increase through a sequence {N0^[s]}, s = 0, 1, 2, . . . , of integers, for each s use Stieltjes’s (or Lanczos’s) algorithm to compute αk^[s] = αk(dλ_{mN0^[s]+p}), βk^[s] = βk(dλ_{mN0^[s]+p}), k = 0, 1, . . . , n−1, and stop the iteration for the first s ≥ 1 for which all inequalities

   |βk^[s] − βk^[s−1]| ≤ ε βk^[s],   k = 0, 1, . . . , n−1,     (4.15)

are satisfied. An error flag is provided if within a preset range N0^[s] ≤ N0^max the stopping criterion (4.15) cannot be satisfied. Note that the latter is based solely on the β-coefficients. This is because, unlike the α’s, they are known to be always positive, so that it makes sense to insist on relative accuracy. (In our routine we actually replaced βk^[s] on the right of (4.15) by its absolute value to insure proper termination in cases of sign-changing measures dλ.)

In view of the formulae (4.1), it is reasonable to expect, and indeed has been observed in practice, that satisfaction of (4.15) entails sufficient absolute accuracy for the α’s if they are zero or small, and relative accuracy otherwise.

Through a bit of experimentation, we have settled on the following sequence of integers N0^[s]:

   N0^[0] = 2n,   N0^[s] = N0^[s−1] + ∆s,   s = 1, 2, . . . ,
   ∆1 = 1,   ∆s = 2^⌊s/5⌋ · n,   s = 2, 3, . . . .     (4.16)

It should be noted that if the quadrature formula (4.11) is exact for each i, whenever u · v is a polynomial of degree ≤ 2n − 1 (which is the maximum degree occurring in the inner products of (4.1), when k ≤ n − 1), then our procedure converges after the very first iteration step!
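The iterative procedure just described can be sketched in a few lines of Python. For concreteness the measure here is w(t) = 1 on [−1,1], the user-supplied quadrature is a composite midpoint rule, and N0 is simply doubled at each step (a simplification of the increment sequence (4.16)); all names are illustrative and not part of the package.

```python
def stieltjes_discrete(n, x, w):
    # inner Stieltjes pass on the discretized measure (Section 4.1)
    N = len(x)
    alpha, beta = [], []
    p_prev, p = [0.0] * N, [1.0] * N
    nrm, nrm_prev = sum(w), None
    for k in range(n):
        a = sum(wi * xi * pi * pi for wi, xi, pi in zip(w, x, p)) / nrm
        alpha.append(a)
        beta.append(nrm if k == 0 else nrm / nrm_prev)
        b = beta[k] if k > 0 else 0.0
        p_prev, p = p, [(xi - a) * pi - b * qi
                        for xi, pi, qi in zip(x, p, p_prev)]
        nrm_prev, nrm = nrm, sum(wi * pi * pi for wi, pi in zip(w, p))
    return alpha, beta

def mcdis_sketch(n, eps=1e-6, nmax=1 << 16):
    """Discretized Stieltjes procedure for w(t) = 1 on [-1, 1]: refine
    the composite midpoint rule until the beta-test (4.15) is passed."""
    N0, beta_old = 2 * n, None
    while N0 <= nmax:
        h = 2.0 / N0
        x = [-1.0 + (r + 0.5) * h for r in range(N0)]
        w = [h] * N0
        alpha, beta = stieltjes_discrete(n, x, w)
        if beta_old is not None and all(
                abs(b - b0) <= eps * abs(b) for b, b0 in zip(beta, beta_old)):
            return alpha, beta, N0
        beta_old, N0 = beta, 2 * N0
    raise RuntimeError("no convergence up to nmax quadrature points")
```

With n = 3 this converges to the Legendre coefficients α0 = α1 = α2 = 0, β0 = 2, β1 = 1/3, β2 = 4/15; a quadrature rule of Gauss type, as advocated in the text, would instead converge after the very first iteration.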

Therefore, if each quadrature rule Qi has (algebraic) degree of exactness ≥ d(N0), and d(N0)/N0 = δ + O(N0⁻¹) as N0 → ∞, then we let N0^[0] = 1 + ⌊(2n − 1)/δ⌋ in an attempt to get exact answers after one iteration. Normally, δ = 1 (for interpolatory rules) or δ = 2 (for Gauss-type rules).

The calling sequence of the multiple-component discretization routine is

   mcdis(n,ncapm,mc,mp,xp,yp,quad,eps,iq,idelta,irout,finl,finr,
         endl,endr,xfer,wfer,alpha,beta,ncap,kount,ierr,ie,
         be,x,w,xm,wm,p0,p1,p2)
   dimension xp(*),yp(*),endl(mc),endr(mc),xfer(ncapm),wfer(ncapm),
         alpha(n),beta(n),be(n),x(ncapm),w(ncapm),xm(*),wm(*),
         p0(*),p1(*),p2(*)
   logical finl,finr

On entry,

   n       is the number of recursion coefficients desired; type integer
   ncapm   is the integer N0^max above, i.e., the maximum integer N0 allowed (ncapm = 500 will usually be satisfactory)
   mc      is the number of component intervals in the continuous part of the spectrum; type integer
   mp      is the number of points in the discrete part of the spectrum; type integer; if the measure has no discrete part, set mp = 0
   xp,yp   are arrays of dimension mp containing the abscissae and the jumps of the point spectrum
   quad    is a subroutine determining the discretization of the inner product on each component interval, or a dummy routine if iq ≠ 1 (see below); specifically, quad(n,x,w,i,ierr) produces the abscissae x(r) = xr,i and weights w(r) = wr,i, r = 1, 2, . . . , n, of the n-point discretization of the inner product on the interval [ai, bi] (cf. (4.13)); an error flag ierr is provided to signal the occurrence of an error condition in the quadrature process
   eps     is the desired relative accuracy of the nonzero recursion coefficients; type real
   iq      is an integer selecting a user-supplied quadrature routine quad if iq = 1 or the ORTHPOL routine qgp (see below) otherwise
   idelta  is a nonzero integer, typically 1 or 2, inducing fast convergence in the case of special quadrature routines; the default value is idelta = 1
   irout   is an integer selecting the routine for generating the recursion coefficients from the discrete inner product; specifically, irout = 1 selects the routine sti, and irout ≠ 1 the routine lancz.

The logical variables finl,finr and the arrays endl,endr,xfer,wfer are input variables to the subroutine qgp and are used (and hence need to be properly dimensioned) only if iq ≠ 1.

On return,

   alpha,beta  are arrays of dimension n holding the desired recursion coefficients alpha(k) = αk−1, beta(k) = βk−1, k = 1, 2, . . . , n
   ncap    is the integer N0 yielding convergence
   kount   is the number of iterations required to achieve convergence

   ierr    is an error flag, equal to 0 on normal return, equal to −1 if n is not in the proper range, equal to i if there is an error condition in the discretization on the ith interval, and equal to ncapm if the discretized Stieltjes procedure does not converge within the discretization resolution specified by ncapm
   ie      is an error flag inherited from the routine sti or lancz (whichever is used).

The arrays be,x,w,xm,wm,p0,p1,p2 are used for working space, the last five having dimension mc×ncapm+mp.

A general-purpose quadrature routine, qgp, is provided for cases in which it may be difficult to develop special discretizations that take advantage of the structural properties of the weight function w at hand. The routine assumes the same setup (4.9)–(4.14) used in mcdis, with disjoint intervals [ai, bi], and provides for Qi in (4.12) the Fejér quadrature rule, suitably transformed to the interval [ai, bi], with the same number Ni = N0 of points for each i.

Recall that the N-point Fejér rule on the standard interval [−1,1] is the interpolatory quadrature formula

   Q_N^F f = ∑_{r=1}^{N} wr^F f(xr^F),     (4.17)

where xr^F = cos((2r−1)π/2N) are the Chebyshev points. The weights are all positive and can be computed explicitly in terms of trigonometric functions (cf., e.g., [15]).
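One classical closed form for those weights (Fejér’s first rule, at the Chebyshev points) is sketched below; this is illustrative code, not the package routine fejer, though it computes the same kind of quantities.

```python
import math

def fejer(N):
    """Nodes and weights of the N-point Fejer (first) rule on [-1, 1]:
    x_r = cos((2r-1) pi / (2N)), with the classical closed-form weights
    w_r = (2/N) (1 - 2 sum_{j=1}^{N//2} cos(2 j th_r) / (4 j^2 - 1))."""
    nodes, weights = [], []
    for r in range(1, N + 1):
        th = (2 * r - 1) * math.pi / (2 * N)
        s = sum(math.cos(2 * j * th) / (4 * j * j - 1)
                for j in range(1, N // 2 + 1))
        nodes.append(math.cos(th))
        weights.append(2.0 / N * (1.0 - 2.0 * s))
    return nodes, weights
```

The weights are positive and sum to 2; being interpolatory, the rule integrates polynomials of degree < N exactly, so δ = 1 in the notation above.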

The rule (4.17) is now applied to the integral in (4.11) by transforming the interval [−1,1] to [ai, bi] via some monotone function φi (a linear function if [ai, bi] is finite) and letting f = uvwi:

   ∫_{ai}^{bi} u(t)v(t)wi(t)dt = ∫_{−1}^{1} u(φi(τ))v(φi(τ))wi(φi(τ))φi′(τ)dτ
                              ≈ ∑_{r=1}^{N} wr^F wi(φi(xr^F))φi′(xr^F) · u(φi(xr^F))v(φi(xr^F)).

Thus, in effect, we take in (4.13)

   xr,i = φi(xr^F),   wr,i = wr^F wi(φi(xr^F))φi′(xr^F),   i = 1, 2, . . . , m.     (4.18)

If the interval [ai, bi] is half-infinite, say of the form [0, ∞], we use φi(t) = (1 + t)/(1 − t), and similarly for intervals of the form [−∞, b] and [a, ∞]. If [ai, bi] = [−∞, ∞], we use φi(t) = t/(1 − t²).

The routine qgp has the following calling sequence:

   subroutine qgp(n,x,w,i,ierr,mc,finl,finr,endl,endr,xfer,wfer)
   dimension x(n),w(n),endl(mc),endr(mc),xfer(*),wfer(*)
   logical finl,finr

On entry,

   n       is the number of terms in the Fejér quadrature rule
   i       indexes the interval [ai, bi] for which the quadrature rule is desired; an interval that extends to −∞ has to be indexed by 1, one that extends to +∞ by mc
   mc      is the number of component intervals; type integer
   finl    is a logical variable to be set .true. if the extreme left interval is finite, and .false. otherwise
   finr    is a logical variable to be set .true. if the extreme right interval is finite, and .false. otherwise
   endl    is an array of dimension mc containing the left endpoints of the component intervals; if the first of these extends to −∞, endl(1) is not being used by the routine
   endr    is an array of dimension mc containing the right endpoints of the component intervals; if the last of these extends to +∞, endr(mc) is not being used by the routine
   xfer,wfer  are working arrays holding the standard Fejér nodes and weights, respectively; they are dimensioned in the routine mcdis.

On return,

   x,w     are arrays of dimension n holding the abscissae and weights (4.18) of the discretized inner product for the ith component interval
   ierr    has the integer value 0.

The routine calls on the subroutines fejer, symtr and tr, which are appended to the routine qgp in §4 of the package. The first generates the Fejér quadrature rule, the others perform variable transformations.
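The half-infinite map φ(t) = (1 + t)/(1 − t) can be exercised on a toy integral; the Fejér helper is repeated inline so the sketch is self-contained, and the test tolerance is deliberately loose since the convergence rate for this mapped integrand is only assumed, not taken from the paper.

```python
import math

def fejer(N):
    # N-point Fejer (first) rule on [-1, 1]
    x, w = [], []
    for r in range(1, N + 1):
        th = (2 * r - 1) * math.pi / (2 * N)
        s = sum(math.cos(2 * j * th) / (4 * j * j - 1)
                for j in range(1, N // 2 + 1))
        x.append(math.cos(th))
        w.append(2.0 / N * (1.0 - 2.0 * s))
    return x, w

def integrate_halfline(f, N):
    """Approximate int_0^inf f(t) dt with the Fejer rule mapped by
    phi(tau) = (1 + tau)/(1 - tau), phi'(tau) = 2/(1 - tau)^2, i.e.
    (4.18) with w_i = 1."""
    x, w = fejer(N)
    return sum(wr * f((1 + xr) / (1 - xr)) * 2.0 / (1 - xr) ** 2
               for xr, wr in zip(x, w))

# int_0^inf e^{-t} dt = 1
approx = integrate_halfline(lambda t: math.exp(-t), 80)
```

Because the Fejér rule is open (it never samples τ = ±1), φ and φ′ stay finite at every node, which is one reason an open rule is the natural choice for such transformed intervals.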

The user has to provide his own function routine wf(x,i) to calculate the weight function wi(x) on the ith component interval.

Example 4.2. Chebyshev weight plus a constant: wc(t) = (1 − t²)^(−1/2) + c, c > 0, −1 < t < 1.

It would be difficult, here, to find a single quadrature rule for discretizing the inner product and obtain fast convergence. However, using in (4.9) m = 2, [a1, b1] = [a2, b2] = [−1, 1], and w1(t) = (1 − t²)^(−1/2), w2(t) = c in (4.11), and taking for Q1 the Gauss-Chebyshev, and for Q2 the Gauss-Legendre n-point rule (the latter multiplied by c), yields convergence to αk(wc), βk(wc), k = 0, 1, . . . , n − 1, in one iteration (provided δ is set equal to 2)! Actually, we need N0 = n + 1, in order to test for convergence; cf. (4.15).

The driver test4 implements this technique and calculates the first n = 80 beta-coefficients to a relative accuracy of 5000×εs for c = 1, 10, 100. (All αk are zero.) Attached to the driver is the quadrature routine qchle used in this example. It, in turn, calls for the Gauss quadrature routine gauss, to be described later in §6. Anticipating convergence after one iteration, we put ncapm = 81.

The weight function of Example 4.2 provides a continuous link between the Chebyshev polynomials (c = 0) and the Legendre polynomials (c = ∞); the recursion coefficients βk(wc) indeed converge (except for k = 0) to those of the Legendre polynomials, as c → ∞.

Selected results of test4 (where irout in mcdis can be arbitrary) are shown in Table 4.2. The output variable kount is 1 in each case, confirming convergence after one iteration. The coefficients β0(wc) are easily seen to be π + 2c.

    k      βk(w1)          βk(w10)         βk(w100)
    0    5.141592654     23.14159265     203.1415927
    1     .4351692451      .3559592080      .3359108398
    5     .2510395775      .2535184776      .2528129500
   12     .2500610870      .2504824840      .2505324193
   25     .2500060034      .2500682357      .2501336338
   51     .2500006590      .2500082010      .2500326887
   79     .2500001724      .2500021136      .2500127264

TABLE 4.2. Selected recursion coefficients βk(wc) for c = 1, 10, 100

Example 4.3.

Jacobi weight with one mass point at the left endpoint: w^(α,β)(x; y) = [μ0^(α,β)]⁻¹(1 − x)^α(1 + x)^β + y δ(x + 1) on (−1,1), μ0^(α,β) = 2^(α+β+1)Γ(α + 1)Γ(β + 1)/Γ(α + β + 2), α > −1, β > −1, y > 0.

The recursion coefficients αk, βk are known explicitly (see [6, Eqs. (6.23), (3.5)]¹) and can be expressed, with some effort, in terms of the recursion coefficients αkJ, βkJ for the Jacobi weight w^(α,β)(·) = w^(α,β)(· ; 0). The formulae are:

   α0 = α0J − y/(1 + y),   β0 = β0J + y,
   αk = αkJ + [2k(α+k) / ((α+β+2k)(α+β+2k+1))] (ck − 1)
            + [2(β+k+1)(α+β+k+1) / ((α+β+2k+1)(α+β+2k+2))] (1/ck − 1),
   βk = (ck/ck−1) βkJ,   k = 1, 2, 3, . . . ,     (4.19)

where

   c0 = 1 + y,   ck = [1 + ((β+k+1)(α+β+k+1)/(k(α+k))) y dk] / (1 + y dk),   k = 1, 2, . . . ,     (4.20)

and

   d1 = 1,   dk = [(β + k)(α + β + k) / ((α + k − 1)(k − 1))] dk−1,   k = 2, 3, . . . .     (4.21)

¹ In [6], the interval is taken to be [0,2], rather than [−1,1]. There is a typographical error in the first formula of (6.23), which should have the numerator 2β + 2 instead of 2β + 1.

Again, it is straightforward with mcdis to get exact results (modulo rounding) after one iteration, by using the Gauss-Jacobi quadrature rule (see gauss in §6) to discretize the continuous part of the measure. The driver test5 generates in this manner the first n = 40 recursion coefficients αk, βk, k = 0, 1, . . . , n−1, to a relative accuracy of 5000×εs, for y = 1/2, 1, 2, 4, 8.
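A direct transcription of (4.19)–(4.21) is easy to sanity-check in the Legendre case α = β = 0, where the plain (normalized) Jacobi coefficients are αkJ = 0, β0J = 1 and βkJ = k²/(4k² − 1); the function below is illustrative and not part of the package.

```python
def jacobi_mass_at_minus_one(n, ab_jacobi, y, a=0.0, b=0.0):
    """Recursion coefficients for the normalized Jacobi weight with a
    mass y at -1, via the explicit formulae (4.19)-(4.21); ab_jacobi is
    a list of n pairs (alpha_k^J, beta_k^J) for the plain Jacobi weight."""
    # d_k from (4.21) and c_k from (4.20)
    d = [0.0, 1.0]                      # d[0] unused, d[1] = 1
    for k in range(2, n + 1):
        d.append((b + k) * (a + b + k) / ((a + k - 1) * (k - 1)) * d[k - 1])
    c = [1.0 + y]                       # c_0 = 1 + y
    for k in range(1, n + 1):
        num = 1.0 + (b + k + 1) * (a + b + k + 1) / (k * (a + k)) * y * d[k]
        c.append(num / (1.0 + y * d[k]))
    aj0, bj0 = ab_jacobi[0]
    alpha = [aj0 - y / (1.0 + y)]       # (4.19), k = 0
    beta = [bj0 + y]
    for k in range(1, n):               # (4.19), k >= 1
        ajk, bjk = ab_jacobi[k]
        t1 = 2 * k * (a + k) / ((a + b + 2 * k) * (a + b + 2 * k + 1))
        t2 = (2 * (b + k + 1) * (a + b + k + 1)
              / ((a + b + 2 * k + 1) * (a + b + 2 * k + 2)))
        alpha.append(ajk + t1 * (c[k] - 1.0) + t2 * (1.0 / c[k] - 1.0))
        beta.append(c[k] / c[k - 1] * bjk)
    return alpha, beta
```

Setting y = 0 gives ck ≡ 1 and must reproduce the unmodified Jacobi coefficients — the kind of double-precision reference values that test5 compares the mcdis output against.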

For each α = −.8(.2)1. and β = −.8(.2)1., it computes the maximum relative errors (absolute error, if αk ≈ 0) of the αk, βk by comparing them with the exact coefficients. These have been computed in double precision by a straightforward implementation of the formulae (4.19)–(4.21).

As expected, the output of test5 reveals convergence after one iteration, the variable kount having consistently the value 1. The maximum relative error in the αk is found to generally lie between 2 × 10⁻⁸ and 3 × 10⁻⁸, the one in the βk between 7.5 × 10⁻¹² and 8 × 10⁻¹²; they are attained for k at or near 39. The discrepancy between the errors in the αk and those in the βk is due to the αk being considerably smaller than the βk, by 3–4 orders of magnitude. Replacing the routine sti in mcdis by lancz yields very much the same error picture.

It is interesting to note that the addition of a second mass point at the other endpoint makes an analytic determination of the recursion coefficients intractable (cf. [6, p. 713]). Numerically, however, it makes no difference whether there are two or more mass points, and whether they are located inside, or outside, or on the boundary of the support interval. It was observed, however, that if at least one mass point is located outside the interval [−1,1], the procedure sti used in mcdis becomes severely unstable² and must be replaced by lancz.

² This has also been observed in the similar Example 4.8 of [19], but was incorrectly attributed to a phenomenon of ill-conditioning. Indeed, the statement made at the end of Example 4.8 can now be retracted: stable methods do exist, namely the method embodied by the routine mcdis in combination with lancz.

Example 4.4. Logistic density function: w(x) = e⁻ˣ/(1 + e⁻ˣ)² on (−∞, ∞).

In this example we illustrate a slight variation of the discretization procedure (4.9)–(4.13), which ends up with a discrete inner product of the same type as in (4.13) (and thus implementable by the routine mcdis) but derived in a somewhat different manner. The idea is to integrate functions with respect to the density w by splitting the integral into two parts, one from −∞ to 0, the other from 0 to ∞, changing variables in the first part, and thus obtaining

   ∫_{−∞}^{∞} f(t)w(t)dt = ∫_0^∞ f(−t) e⁻ᵗ/(1 + e⁻ᵗ)² dt + ∫_0^∞ f(t) e⁻ᵗ/(1 + e⁻ᵗ)² dt.     (4.22)

Since (1 + e⁻ᵗ)⁻² quickly tends to 1 as t → ∞, a natural discretization of both integrals is provided by the Gauss-Laguerre quadrature rule applied to the product f(±t)·(1 + e⁻ᵗ)⁻². This amounts to taking, in (4.13), m = 2 and

   xr,1 = −xrᴸ,   xr,2 = xrᴸ;   wr,1 = wr,2 = wrᴸ/(1 + e^(−xrᴸ))²,   r = 1, 2, . . . , N,

where xrᴸ, wrᴸ are the Gauss-Laguerre N-point quadrature nodes and weights.

The driver test6 incorporates this discretization into the routines mcdis and dmcdis, runs them for n = 40 with error tolerances 5000×εs and 1000×εd, respectively, and prints the absolute errors in the α’s (αk = 0, in theory) and the relative errors in the β’s. (We used the default value δ = 1.) Also printed are the number of iterations #it (= kount) in (4.15) and the corresponding final value N0f (= ncap). In single precision, we found #it = 1, N0f = 81, and in double precision, #it = 5, N0f = 281. Both routines returned with the error flags equal to 0, indicating a normal course of events. A few selected double-precision values of the coefficients βk along with absolute errors in the α’s and relative errors in the β’s are shown in Table 4.3.

    k           βk                          err αk       err βk
    0       1.000000000000000000000000     4.572(–13)   1.918(–13)
    1       3.289868133696452872944830     1.682(–13)   5.641(–13)
    6      89.44760352315950188817832      2.187(–12)   2.190(–12)
   15     555.7827839879296775066697       1.732(–13)   2.915(–12)
   26    1668.580222268668421827788        3.772(–12)   4.112(–12)
   39    3753.534025194898387722354        2.482(–11)   4.533(–12)

TABLE 4.3. Selected output from test6

The results are essentially the same no matter whether sti or lancz is used in mcdis. The maximum errors observed are 2.482 × 10⁻¹¹ for the α’s, and 4.939 × 10⁻¹² for the β’s, which are well within the single-precision tolerance ε = 5000 × εs. On computers with limited exponent range, convergence difficulties may arise, both with sti and lancz, owing to underflow in many of the Laguerre quadrature weights.

This seems to perturb the problem significantly enough to prevent the discretization procedure from converging.

Example 4.5. Half-range Hermite measure: w(x) = e^(−x²) on (0, ∞).

This is an example of a measure for which there do not seem to exist natural discretizations other than those based on composite quadrature rules. Therefore, we applied our general-purpose routine qgp (and its double-precision companion dqgp), using, after some experimentation, the partition [0, ∞] = [0, 3] ∪ [3, 6] ∪ [6, 9] ∪ [9, ∞]. The driver test7 implements this, with n = 40 and an error tolerance 50×εs in single precision, and 1000×εd in double precision.

The single-precision routine mcdis (using the default value δ = 1) converged after one iteration, returning ncap = 81, whereas the double-precision routine dmcdis took four iterations to converge, and returned ncapd = 201. Selected results (where err αk and err βk both denote relative errors) are shown in Table 4.4.

    k           αk                            βk                          err αk       err βk
    0    .5641895835477562869480795    .8862269254527580136490837     1.096(–13)   3.180(–13)
    1    .9884253928468002854870634    .1816901138162093284622325     1.514(–13)   7.741(–14)
    6   2.080620336400833224817622    1.002347851011010842224538      1.328(–13)   5.801(–14)
   15   3.214270636071128227448914    2.500927917133702669954321      2.402(–14)   8.186(–14)
   26   4.203048578872001952660277    4.333867901229950443604430      1.415(–13)   7.878(–14)
   39   5.131532886894296519319692    6.500356237707132938035155      6.712(–13)   1.820(–14)

TABLE 4.4. Selected output from test7

The maximum error err αk occurred at k = 10, and had the value 1.038 × 10⁻¹², whereas maxk err βk = 3.180 × 10⁻¹³ is attained at k = 0. The latter is within the error tolerance ε, the former only slightly larger. Comparison of the double-precision results with Table I on the microfiche supplement to [12] revealed agreement to all 20 decimal digits given there, for all k in the range 0 ≤ k ≤ 19. Interestingly, the routine sti in mcdis did consistently better than lancz on the β’s, by a factor as large as 235 (for k = 33), and is comparable with lancz (sometimes better, sometimes worse) on the α’s.

Without composition, i.e., using mc = 1 in mcdis, it takes 8 iterations (N0f = 521) in single precision, and 10 iterations (N0f = 761) in double precision, to satisfy the much weaker error tolerances ε = (1/2)·10⁻⁶ and εd = (1/2)·10⁻¹², respectively. All single-precision results, however, turn out to be accurate to about 12 decimal places.

(This is because of the relatively large final increment ∆8 = 2n = 80 in N0 (cf. (4.16)) that forces convergence.)

4.4. Discretized modified Chebyshev algorithm. The whole apparatus of discretization (cf. (4.9)–(4.14)) can also be employed in connection with the modified Chebyshev algorithm (cf. §3), if one discretizes modified moments rather than inner products. Thus, one approximates (cf. (4.14), (4.16))

   νk(dλ) ≈ νk(dλ_{mN0^[s]+p})     (4.23)

and iterates the modified Chebyshev algorithm with s = 0, 1, 2, . . . until the convergence criterion (4.15) is satisfied. (It would be unwise to test convergence on the modified moments, for reasons explained in [19, §2.5].) This is implemented in the routine mccheb, whose calling sequence is

   mccheb(n,ncapm,mc,mp,xp,yp,quad,eps,iq,idelta,finl,finr,
          endl,endr,xfer,wfer,a,b,fnu,alpha,beta,ncap,kount,ierr,
          be,x,w,xm,wm,s,s0,s1,s2)

Its input and output parameters have the same meaning as in the routine mcdis. In addition, the arrays a,b of dimension 2×n−1 are to be supplied with the recursion coefficients a(k) = ak−1, b(k) = bk−1, k = 1, 2, . . . , 2×n−1, defining the modified moments. The arrays be,x,w,xm,wm,s,s0,s1,s2

are used for working space. The double-precision version of the routine has the name dmcheb.

The discretized modified Chebyshev algorithm must be expected to behave similarly to its close relative, the modified Chebyshev algorithm. In particular, if the latter suffers from ill-conditioning, so does the former.

Example 4.6. Example 3.1, revisited.

We recompute the n = 40 first recursion coefficients αk, βk of Example 3.1 to an accuracy of 100×εs in single precision, using the routine mccheb instead of the routine cheb. For the discretization of the modified moments we employed the Gauss-Chebyshev quadrature rule:

   ∫_{−1}^{1} f(t)(1 − ω²t²)^(−1/2)(1 − t²)^(−1/2) dt ≈ (π/N) ∑_{r=1}^{N} f(xr)(1 − ω²xr²)^(−1/2),     (4.24)

where xr = cos((2r − 1)π/2N) are the Chebyshev points. This is accomplished by the driver test8.
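The rule (4.24) is a one-liner to reproduce; since the extra factor is analytic on [−1,1] for ω² < 1, the approximation converges exponentially in N, consistent with the small ncap values reported for moderate ω². For ω² = .5 and f ≡ 1 the limit is the complete elliptic integral 2K(1/√2) ≈ 3.7081494, a known closed-form check not taken from the package.

```python
import math

def gauss_chebyshev_moment(f, omega2, N):
    """The discretization (4.24): (pi/N) * sum f(x_r)(1 - omega2 x_r^2)^(-1/2)
    over the Chebyshev points x_r = cos((2r-1) pi / (2N))."""
    total = 0.0
    for r in range(1, N + 1):
        xr = math.cos((2 * r - 1) * math.pi / (2 * N))
        total += f(xr) / math.sqrt(1.0 - omega2 * xr * xr)
    return math.pi / N * total

# zeroth modified moment for omega^2 = 0.5 and f identically 1
v50 = gauss_chebyshev_moment(lambda t: 1.0, 0.5, 50)
v200 = gauss_chebyshev_moment(lambda t: 1.0, 0.5, 200)
```

Stability of the value under doubling of N (here v50 versus v200) is exactly the kind of test that the convergence criterion (4.15) automates inside mccheb.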

The results of this test (shown in the package) agree to all 10 decimal places with those of test1. The routine mccheb converged in one iteration, with ncap = 81, for ω² = .1, .3, .5, .7, .9, in 4 iterations, with ncap = 201, for ω² = .99, and in 8 iterations, with ncap = 521, for ω² = .999. A double-precision version of test8 was also run with ε = (1/2) × 10⁻²⁰ (not shown in the package) and produced correct results to 20 decimals in one iteration (ncap = 81) for ω² = .1, .3, .5, .7, in 3 iterations (ncap = 161) for ω² = .9, in 6 iterations (ncap = 361) for ω² = .99, and in 11 iterations (ncap = 921) for ω² = .999.

5. MODIFICATION ALGORITHMS

Given a positive measure dλ(t) supported on the real line, and two polynomials u(t) = ±∏_{ρ=1}^{r}(t − uρ), v(t) = ∏_{σ=1}^{s}(t − vσ) whose ratio is finite on the support of dλ, we may ask for the recursion coefficients α̂k = αk(dλ̂), β̂k = βk(dλ̂) of the modified measure

   dλ̂(t) = [u(t)/v(t)] dλ(t),   t ∈ supp(dλ),     (5.1)

assuming known the recursion coefficients αk = αk(dλ), βk = βk(dλ) of the given measure.

Methods that accomplish the passage from the α’s and β’s to the α̂’s and β̂’s will be called modification algorithms. The simplest case s = 0 (i.e., v(t) ≡ 1) and u positive on supp(dλ) has already been considered by Christoffel [7], who represented the polynomial u(·)π̂k(·) = u(·)πk(· ; dλ̂) in determinantal form in terms of the polynomials πj(·) = πj(· ; dλ), j = k, k + 1, . . . , k + r. This is now known as Christoffel’s theorem.

Christoffel, however, did not address the problem of how to generate the new coefficients α̂k, β̂k in terms of the old ones. For the more general modification (5.1), Christoffel’s theorem has been generalized by Uvarov [48,49]. The coefficient problem stated above, in this general case, has been treated in [20], and previously by Galant [13] in the special case v(t) ≡ 1.

The passage from dλ to dλ̂ can be carried out in a sequence of elementary steps involving real linear factors t − x, or real quadratic factors (t − x)² + y², either in u(t) or in v(t). The corresponding elementary steps in the passage from the α’s and β’s to the α̂’s and β̂’s can all be performed by means of certain nonlinear recurrences. Some of these, however, when divisions of the measure dλ are involved, are liable to instabilities. An alternative method can then be used, which appeals to the modified Chebyshev algorithm supplied with appropriate modified moments. These latter are of independent interest and find application, e.g., in evaluating the kernel in the contour integral representation of the Gauss quadrature remainder term.

5.1 Nonlinear recurrence algorithms. The routine that carries out the elementary modification steps is called chri and has the calling sequence

   chri(n,iopt,a,b,x,y,hr,hi,alpha,beta,ierr).

On entry,

   n       is the number of recursion coefficients desired; type integer

   iopt    is an integer identifying the type of modification as follows:
             1:  dλ̂(t) = (t − x)dλ(t)
             2:  dλ̂(t) = ((t − x)² + y²)dλ(t), y > 0
             3:  dλ̂(t) = (t² + y²)dλ(t) with dλ(t) and supp(dλ) assumed symmetric with respect to the origin and y > 0
             4:  dλ̂(t) = dλ(t)/(t − x)
             5:  dλ̂(t) = dλ(t)/((t − x)² + y²), y > 0
             6:  dλ̂(t) = dλ(t)/(t² + y²) with dλ(t) and supp(dλ) assumed symmetric with respect to the origin and y > 0
             7:  dλ̂(t) = (t − x)²dλ(t)
   a,b     are arrays of dimension n+1 holding the recursion coefficients a(k) = αk−1(dλ), b(k) = βk−1(dλ), k = 1, 2, . . . , n+1
   x,y     are real parameters defining the linear and quadratic factors (or divisors) of dλ
   hr,hi   are the real and imaginary part, respectively, of ∫_R dλ(t)/(z − t), where z = x + iy; the parameter hr is used only if iopt = 4 or 5, the parameter hi only if iopt = 5 or 6.

On return,

   alpha,beta  are arrays of dimension n containing the desired recursion coefficients alpha(k) = αk−1(dλ̂), beta(k) = βk−1(dλ̂), k = 1, 2, . . . , n
   ierr    is an error flag, equal to 0 on normal return, equal to 1 if n ≤ 1 (the routine assumes that n is larger than or equal to 2), and equal to 2 if the integer iopt is inadmissible.

It should be noted that in the cases iopt = 1 and iopt = 4, the modified measure dλ̂ is positive [negative] definite if x is to the left [right] of the support of dλ, but indefinite otherwise. Nevertheless, it is permissible to

have x inside the support of dλ (or inside its convex hull), provided the resulting measure dλ̂ is still quasi-definite (cf. [20]).

For iopt = 1, 2, ..., 6, the methods used in chri are straightforward implementations of the nonlinear recurrence algorithms respectively in Eqs. (3.7), (4.7), (4.8), (5.1), (5.8) and (5.9) of [20].
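To make the flavor of these nonlinear recurrences concrete, here is a hedged Python sketch of the simplest case, iopt = 1, i.e. dλ̂(t) = (t − x)dλ(t). It is a reconstruction from the identity (t − x)π̂_k(t) = π_{k+1}(t) − r_k π_k(t), with r_k = π_{k+1}(x)/π_k(x), not a transcription of the package's chri code; x is assumed to lie outside the support of dλ so that no r_k vanishes.

```python
# Hedged sketch of the iopt = 1 (linear Christoffel) modification.
# Not the package's chri routine; derived from (t-x)*pihat_k = pi_{k+1} - r_k*pi_k.

def modify_linear(alpha, beta, x):
    """Given recursion coefficients alpha[0..n], beta[0..n] of dlambda
    (beta[0] = total mass), return n coefficients of (t - x) dlambda(t)."""
    n = len(alpha) - 1
    # ratios r_k = pi_{k+1}(x)/pi_k(x) via the three-term recurrence
    r = [x - alpha[0]]
    for k in range(1, n + 1):
        r.append(x - alpha[k] - beta[k] / r[k - 1])
    ahat = [alpha[k + 1] + r[k + 1] - r[k] for k in range(n)]
    bhat = [-beta[0] * r[0]] + [beta[k] * r[k] / r[k - 1] for k in range(1, n)]
    return ahat, bhat

# Legendre measure on (-1,1): alpha_k = 0, beta_0 = 2, beta_k = k^2/(4k^2 - 1)
n = 6
alpha = [0.0] * (n + 1)
beta = [2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, n + 1)]
ahat, bhat = modify_linear(alpha, beta, x=-1.1)
```

For the Legendre measure and x = −1.1 this reproduces, for instance, β̂0 = ∫(t − x)dλ(t) = (α0 − x)β0 = 2.2, and the remaining β̂'s come out positive, as they must for x to the left of the support.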

The only minor modification required concerns β̂0 = β0(dλ̂). In [20], this constant was taken to be 0, whereas here it is defined to be β̂0 = ∫_R dλ̂(t).

Thus, for example, if iopt = 2,

    β̂0 = ∫_R ((t − x)² + y²) dλ(t) = ∫_R ((t − α0 + α0 − x)² + y²) dλ(t)
       = ∫_R ((t − α0)² + (α0 − x)² + y²) dλ(t),

since ∫_R (t − α0) dλ(t) = ∫_R π1(t) dλ(t) = 0. Furthermore (cf. (4.1)),

    ∫_R (t − α0)² dλ(t) = β0 β1,

so that the formula to be used for β̂0 is

    β̂0 = β0(β1 + (α0 − x)² + y²)    (iopt = 2).

Similar calculations need to be made in the other cases.

The case iopt = 7 incorporates a QR step with shift x, following Kautsky and Golub [42], and uses an adaptation of the algorithm in [51, Eq. (67.11), p. 567] to carry out the QR step. The most significant modification made is the replacement of the test c ≠ 0 by |c| > ε, where ε = 5 × εs is a quantity close to, but slightly larger than, the machine precision. (Without this modification, the algorithm could fail.)

The methods used in chri are believed to be quite stable when the measure dλ is modified multiplicatively (iopt = 1, 2, 3 and 7). When divisions are involved (iopt = 4, 5 and 6), however, the algorithms rapidly become unstable as the point z = x + iy ∈ C moves away from the support interval of dλ. (The reason for this instability is not well understood at present; see, however, Galant [14].) For such cases there is an alternative routine, gchri (see §5.2), that can be used.

Example 5.1. Checking the results (for σ = 1/2) of test2.

We apply chri (and the corresponding double-precision routine dchri) with iopt = 1, x = 0, to dλσ(t) = t^σ ln(1/t) on (0,1) with σ = −1/2, to recompute the results of test2 for σ = 1/2.

This can be done by a minor modification, named test9, of test2. Selected results from it, showing the relative discrepancies between the single-precision values αk, βk, resp. double-precision values αdk, βdk, computed by the modified Chebyshev algorithm and the modification algorithm, are shown in Table 5.1 (cf. Table 3.3).

    TABLE 5.1. Comparison between modified Chebyshev algorithm and
    modification algorithm in Example 5.1 (cf. Example 3.2); σ = .5

      k    err αk       err βk       err αdk      err βdk
      0    7.895(–14)   4.796(–14)   2.805(–28)   7.952(–28)
     12    3.280(–12)   6.195(–12)   8.958(–26)   1.731(–25)
     24    7.648(–12)   1.478(–11)   2.065(–25)   3.985(–25)
     48    2.076(–11)   4.088(–11)   5.683(–25)   1.121(–24)
     98    6.042(–11)   1.201(–10)   1.504(–24)   2.987(–24)

The maximum errors occur consistently for the last value of k (= 98).

Example 5.2. Induced orthogonal polynomials.

Given an orthogonal polynomial πm(· ; dλ) of fixed degree m ≥ 1, the sequence of orthogonal polynomials π̂k,m(·) = πk(· ; πm² dλ), k = 0, 1, 2, ..., has been termed induced orthogonal polynomials in [33]. Since their measure dλ̂m modifies the measure dλ by a product of quadratic factors,

    dλ̂m(t) = ∏_{μ=1}^{m} (t − xμ)² · dλ(t),    (5.2)

where xμ are the zeros of πm, we can apply the routine chri (with iopt = 7) m times to compute the n recursion coefficients α̂k,m = αk(dλ̂m), β̂k,m = βk(dλ̂m), k = 0, 1, ..., n − 1, from the n + m coefficients αk = αk(dλ), βk = βk(dλ), k = 0, 1, ..., n − 1 + m.

The subroutines indp and dindp in the driver test10 implement this procedure in single resp. double precision. The driver itself uses them to compute the first n = 20 recursion coefficients of the induced Legendre polynomials with m = 0, 1, ..., 11. It also computes the maximum absolute errors in the α̂'s (α̂k,m = 0 for all m) and the maximum relative errors in the β̂'s by comparing single-precision with double-precision results.

An excerpt of the output of test10 is shown in Table 5.2. It already suggests a high degree of stability of the procedure employed by indp.

This is reinforced by an additional test (not shown in the package) generating n = 320 recursion coefficients α̂k,m, β̂k,m, 0 ≤ k ≤ 319, for m = 40, 80, 160, 320 and dλ being the Legendre, the 1st-kind Chebyshev, the Laguerre, and the Hermite measure. Table 5.3 shows the maximum absolute error in the α̂k,m, 0 ≤ k ≤ 319 (relative error in the Laguerre case), and the maximum relative error in the β̂k,m, 0 ≤ k ≤ 319.

    TABLE 5.2. Induced Legendre polynomials

      k      m = 0, β̂k,m   m = 2, β̂k,m   m = 6, β̂k,m   m = 11, β̂k,m
      0      2.000000000   .1777777778   .0007380787   .0000007329
      1      .3333333333   .5238095238   .5030303030   .5009523810
      6      .2517482517   .1650550769   .2947959861   .2509913424
     12      .2504347826   .2467060415   .2521022519   .1111727541
     19      .2501732502   .2214990335   .2274818789   .2509466619
    err α̂    0.000(0)      1.350(–13)    9.450(–13)    1.357(–12)
    err β̂    1.737(–14)    2.032(–13)    2.055(–12)    3.748(–12)

    TABLE 5.3. Accuracy of the recursion coefficients for some
    classical induced polynomials

            Legendre             Chebyshev            Laguerre             Hermite
      m     err α̂     err β̂     err α̂     err β̂     err α̂     err β̂     err α̂     err β̂
     40     3.4(–11)  1.5(–10)  1.9(–9)   7.9(–10)  3.0(–10)  6.0(–10)  1.8(–9)   2.7(–10)
     80     1.4(–10)  5.4(–10)  2.1(–9)   2.2(–9)   5.8(–10)  9.2(–10)  7.9(–9)   9.2(–10)
    160     1.5(–9)   5.1(–9)   9.5(–9)   1.1(–8)   7.8(–10)  1.4(–9)   1.1(–8)   6.8(–10)
    320     3.3(–9)   2.1(–8)   9.6(–9)   2.1(–8)   3.9(–9)   7.2(–9)   2.5(–8)   1.1(–9)
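Entries of Table 5.2 can be reproduced independently. The following is a hedged Python sketch (assuming numpy is available) that generates the m = 2 column by a discretized Stieltjes procedure applied to the measure πm(t)² dt; it is an illustration of the definition (5.2), not the package's indp routine.

```python
# Hedged sketch: reproduce the m = 2 column of Table 5.2 via a discretized
# Stieltjes procedure (not the package's QR-based indp routine). Assumes numpy.
import numpy as np
from numpy.polynomial.legendre import leggauss

def stieltjes(t, w, n):
    # first n recursion coefficients of the discrete measure sum_i w_i * delta_{t_i}
    a, b = [], []
    pkm1, pk = np.zeros_like(t), np.ones_like(t)
    s_prev = None
    for k in range(n):
        s = np.sum(w * pk * pk)
        a.append(np.sum(w * t * pk * pk) / s)
        b.append(s if k == 0 else s / s_prev)   # b[0] = total mass of the measure
        s_prev = s
        pkm1, pk = pk, (t - a[k]) * pk - b[k] * pkm1
    return a, b

m, n = 2, 3
t, w = leggauss(40)                 # exact for all polynomial integrands below
p_prev, p = np.zeros_like(t), np.ones_like(t)
for k in range(m):                  # monic Legendre pi_m by its recurrence
    bk = k * k / (4.0 * k * k - 1.0) if k > 0 else 0.0
    p_prev, p = p, t * p - bk * p_prev
ahat, bhat = stieltjes(t, w * p * p, n)   # measure pi_m(t)^2 dt on (-1,1)
```

For m = 2 one has π2(t) = t² − 1/3, so β̂0,2 = ∫(t² − 1/3)² dt = 8/45 = .1777777778 and β̂1,2 = 11/21 = .5238095238, in agreement with Table 5.2; by symmetry all α̂k,2 vanish.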

5.2. Methods based on the modified Chebyshev algorithm.

As was noted earlier, the procedure chri becomes unstable for modified measures involving division of dλ(t) by t − x or (t − x)² + y² as z = x + iy ∈ C moves away from the "support interval" of dλ, i.e., from the smallest interval containing the support of dλ. We now develop a procedure that works better the further away z is from that interval.

The idea is to use modified moments of dλ̂ relative to the polynomials πk(· ; dλ) to generate the desired recursion coefficients α̂k, β̂k via the modified Chebyshev algorithm (cf. §3). The modified moments in question are

    νk = νk(x; dλ) = ∫_R πk(t; dλ)/(t − x) dλ(t),    k = 0, 1, 2, ...,    (5.3)

for linear divisors, and

    νk = νk(x, y; dλ) = ∫_R πk(t; dλ)/((t − x)² + y²) dλ(t),    k = 0, 1, 2, ...,    (5.4)

for quadratic divisors.

Both can be expressed in terms of the integrals

    ρk = ρk(z; dλ) = ∫_R πk(t; dλ)/(z − t) dλ(t),    z ∈ C\supp(dλ),  k = 0, 1, 2, ...,    (5.5)

the first by means of

    νk(x; dλ) = −ρk(z; dλ),    z = x,    (5.6)

the others by means of

    νk(x, y; dλ) = −Im ρk(z; dλ)/Im z,    z = x + iy.    (5.7)

The point to observe is that {ρk(z; dλ)} is a minimal solution of the basic recurrence relation (1.3) for the orthogonal polynomials {πk(· ; dλ)} (cf. [18]). The quantities ρk(z; dλ), k = 0, 1, ..., n, therefore can be computed accurately by a backward recurrence algorithm ([18, §5]) which, for ν > n, produces approximations ρk^[ν](z; dλ) converging to ρk(z; dλ) when ν → ∞, for any fixed k,

    ρk^[ν](z; dλ) → ρk(z; dλ),    ν → ∞.    (5.8)

The procedure is implemented in the routine

    knum(n,nu0,numax,z,eps,a,b,rho,nu,ierr,rold),

which computes ρk(z; dλ) for k = 0, 1, . .

. , n to a relative precision eps.

Theresults are stored as rho(k) = ρk−1(z; dλ), k = 1, 2, . .

. , n+1, in the complexarray rho of dimension n + 1.

The user has to provide a starting index nu0= ν0 > n for the backward recursion, which the routine then incrementsby units of 5 until convergence to within eps is achieved. If the requestedaccuracy eps cannot be realized for some ν ≤numax, the routine exits withierr = numax.

Likewise, if ν0 > numax, the routine exits immediately, withthe error flag ierr set equal to nu0. Otherwise, the value of ν for whichconvergence is obtained is returned as output variable nu.

The arrays a, b of dimension numax are to hold the recursion coefficients a(k) = αk−1(dλ), b(k) = βk−1(dλ), k = 1, 2, ..., numax, for the given measure dλ. The complex array rold of dimension n + 1 is used for working space. In the interest of rapid convergence, the routine should be provided with a realistic estimate of ν0. For classical measures, such estimates are known (cf. [18, §5]) and are implemented here by the function routines

    nu0jac(n,z,eps), nu0lag(n,z,al,eps), nu0her(n,z,eps).

The first is for Jacobi measures, the second for generalized Laguerre measures with parameter al = α > −1, and the last for the Hermite measure. Note that ν0 for Jacobi measures does not depend on the weight parameters α, β, in contrast to ν0 for the generalized Laguerre measure.

The name knum comes from the fact that ρn(z; dλ) in (5.5) is the numerator in the kernel

    Kn(z; dλ) = ρn(z; dλ)/πn(z; dλ)    (5.9)

of the remainder term of the n-point Gaussian quadrature rule for analytic functions (cf., e.g., [36]). For the sequence of kernels K0, K1, ..., Kn we have the following routine:

    subroutine kern(n,nu0,numax,z,eps,a,b,ker,nu,ierr,rold)
    complex z,ker,rold,p0,p,pm1
    dimension a(numax),b(numax),ker(*),rold(*)
    call knum(n,nu0,numax,z,eps,a,b,ker,nu,ierr,rold)
    if(ierr.ne.0) return
    p0=(0.,0.)
    p=(1.,0.)
    do 10 k=1,n

      pm1=p0
      p0=p
      p=(z-a(k))*p0-b(k)*pm1
      ker(k+1)=ker(k+1)/p
 10 continue
    return
    end

The meaning of the input and output parameters is the same as in knum. The double-precision version of the routine is named dkern.

All the ingredients are now in place to describe the workings of gchri, the alternative routine to chri when the latter is unstable. First, the routine knum is used to produce the first 2n modified moments νk(x; dλ) resp. νk(x, y; dλ), k = 0, 1, ..., 2n − 1. These are then supplied to the routine cheb along with the recursion coefficients αk(dλ), βk(dλ) (needed anyhow for the computation of the νk), which produces the desired coefficients αk(dλ̂), βk(dλ̂), k = 0, 1, ..., n − 1. The routine has the calling sequence

    gchri(n,iopt,nu0,numax,eps,a,b,x,y,alpha,beta,nu,ierr,ierrc,
          fnu,rho,rold,s,s0,s1,s2).

On entry,

    n        is the number of recursion coefficients desired; type integer
    iopt     is an integer identifying the type of modification as follows:
             1: dλ̂(t) = dλ(t)/(t − x), where x is assumed outside the
                smallest interval containing supp(dλ)
             2: dλ̂(t) = dλ(t)/((t − x)² + y²), y > 0
    nu0      is an integer ν0 ≥ 2n estimating the starting index for the
             backward recursion to compute the modified moments; if no
             other choices are available, take nu0 = 3×n

    numax    is an integer used to terminate backward recursion in case of
             nonconvergence; a conservative choice is numax = 500
    eps      is a relative error tolerance; type real
    a,b      are arrays of dimension numax to be supplied with the recursion
             coefficients a(k) = αk−1(dλ), b(k) = βk−1(dλ), k = 1, 2, ...,
             numax
    x,y      are real parameters defining the linear and quadratic divisors
             of dλ.

On return,

    alpha,beta  are arrays of dimension n containing the desired recursion
             coefficients alpha(k) = α̂k−1, beta(k) = β̂k−1, k = 1, 2, ..., n
    nu       is the index ν for which the error tolerance eps is satisfied
             for the first time; if it is never satisfied, nu will have the
             value numax
    ierr     is an error flag, where
             ierr = 0 on normal return
             ierr = –1 if n is not in range
             ierr = 1 if iopt is inadmissible
             ierr = nu0 if nu0 > numax
             ierr = numax if the backward recurrence algorithm does not
                    converge
    ierrc    is an error flag inherited from the routine cheb.

The real arrays fnu,s,s0,s1,s2 are working space, all of dimension 2×n except s, which has dimension n. The complex arrays rho, rold are also working space, both of dimension 2n. The routine calls on the subroutines knum and cheb.

The double-precision version of gchri has the name dgchri.

Since the routine gchri is based on the modified Chebyshev algorithm, it shares with the latter its proneness to ill-conditioning, particularly in cases of measures supported on an infinite interval. On finitely supported measures, however, it can be quite effective, as will be seen in the next example.

Example 5.3. The performance of chri and gchri.

To illustrate the severe limitations of the routine chri in situations where divisions of the measure dλ are involved, and at the same time to document the effectiveness of gchri, we ran both routines with n = 40 for Jacobi measures dλ(α,β) with parameters α, β = −.8(.4).8, β ≥ α. This is done in test11.

The routine test11 first tests division by t − x, where x = –1.001, –1.01, –1.04, –1.07, –1.1. Both routines chri and gchri are run in single and double precision, the latter with ε = 10 × εs and ε = 100 × εd, respectively. The double-precision results are used to determine the absolute errors in the α̂'s and the relative errors in the β̂'s for each routine. The required coefficients αk, βk, 0 ≤ k ≤ νmax − 1 (νmax = 500 for single precision, and 800 for double precision) are supplied by recur and drecur with ipoly = 6. The routine nu0jac is used to provide the starting recurrence index ν0 resp. νd0. In Tables 5.4 and 5.5, relating respectively to linear and quadratic divisors, we give only the results for the Legendre measure (α = β = 0). The first line in each 3-line block of Table 5.4 shows x, ν0, νd0 and the maximum (over k, 0 ≤ k ≤ 39) errors in the α̂k and β̂k for gchri, followed by the analogous information (excepting the ν0's) for chri. The recurrence index ν yielding convergence was found (not shown in test11) to be ν = ν0 + 5 and νd = νd0 + 5, without exception.

    TABLE 5.4. Performance of gchri and chri for elementary divisors t − x
    of the Legendre measure dλ(t). The second and third line of each block
    are reconstruction errors.

                               gchri                    chri
      x      ν0   νd0   err α̂       err β̂       err α̂       err β̂
    –1.001  418   757   8.000(–14)  1.559(–13)  1.013(–13)  1.647(–13)
                        8.527(–14)  1.705(–13)  1.010(–27)  2.423(–27)
                        1.421(–14)  5.329(–14)  2.019(–28)  1.211(–27)
    –1.01   187   294   4.016(–14)  6.907(–14)  1.396(–10)  2.424(–10)
                        3.553(–14)  9.946(–14)  6.058(–28)  1.211(–27)
                        7.105(–15)  4.262(–14)  1.515(–28)  9.080(–28)
    –1.04   133   187   3.590(–14)  4.759(–14)  5.944(–6)   8.970(–6)
                        2.842(–14)  7.103(–14)  5.554(–28)  1.312(–27)
                        7.105(–15)  4.263(–14)  1.010(–28)  9.080(–28)
    –1.07   120   161   2.194(–14)  4.850(–14)  5.334(–3)   7.460(–3)
                        2.842(–14)  7.104(–14)  6.058(–28)  1.211(–27)
                        7.105(–15)  4.263(–14)  1.010(–28)  7.062(–28)
    –1.1    114   148   2.238(–14)  4.359(–14)  4.163(0)    4.959(+1)
                        2.132(–14)  5.683(–14)  3.534(–28)  1.009(–27)
                        1.549(–12)  1.833(–12)  1.010(–28)  6.057(–28)

It can be seen from the leading lines in Table 5.4 that chri rapidly loses accuracy as x moves away from the interval [−1,1], all single-precision accuracy being gone by the time x reaches –1.1.

Similar, if not more rapid, erosion of accuracy is observed for the other parameter values of α, β. The next two lines in each 3-line block show "reconstruction errors", i.e., the maximum errors in the α's and β's if the α̂'s and β̂'s produced by gchri, chri and dgchri, dchri are fed back to the routines chri and dchri with iopt = 1 to recover the original recursion coefficients in single and double precision. The first of these two lines shows the errors in reconstructing these coefficients from the output of gchri (resp. dgchri), the second from the output of chri (resp. dchri). Rather remarkably, the coefficients are recovered to essentially full accuracy, even when the input coefficients (produced by chri and dchri) are very inaccurate! This is certainly a phenomenon that deserves further study. It can also be seen from Table 5.4 (and the more complete results in §1 of the package) that gchri consistently produces accurate results, some slight deterioration occurring only very close to x = −1, where the routine has to work harder.

The second half of test11 tests division by (t − x)² + y², where z = x + iy is taken along the upper half of the ellipse

    Eρ = {z ∈ C : z = (1/2)(ρ e^{iϑ} + ρ^{−1} e^{−iϑ}), 0 ≤ ϑ ≤ 2π},  ρ > 1,    (5.10)

which has foci ±1 and sum of the semiaxes equal to ρ. (These ellipses are contours of constant ν0 for Jacobi measures.

) We generated information analogous to the one in Table 5.4, for ρ = 1.05, 1.1625, 1.275, 1.3875, 1.5, except that all quantities are averaged over 19 equally spaced points on Eρ corresponding to ϑ = jπ/20, j = 1, 2, ..., 19. Selected results (bars indicate averaging), again for the Legendre case, are shown in Table 5.5. They reveal a behavior very similar to the one in Table 5.4 for linear divisors.

    TABLE 5.5. Performance of gchri and chri for elementary divisors
    (t − x)² + y² of the Legendre measure dλ(t) with z = x + iy on Eρ.
    The second and third line of each block are reconstruction errors.

                               gchri                    chri
      ρ      ν̄0   ν̄d0   err α̂       err β̂       err α̂       err β̂
    1.05    390   700   7.879(–13)  1.440(–12)  7.685(–14)  1.556(–13)
                        7.814(–13)  1.433(–12)  1.768(–26)  3.042(–26)
                        2.024(–14)  8.442(–14)  3.016(–28)  1.742(–27)
    1.275   142   204   6.252(–14)  1.287(–13)  4.562(–7)   6.162(–7)
                        6.554(–14)  1.279(–13)  1.541(–27)  3.061(–27)
                        2.295(–14)  8.970(–14)  3.579(–28)  1.646(–27)
    1.5     117   154   3.991(–14)  7.966(–14)  4.906(–1)   2.339(0)
                        4.207(–14)  9.064(–14)  6.932(–28)  1.676(–27)
                        3.805(–14)  8.971(–14)  4.351(–28)  1.744(–27)

6. GAUSS-TYPE QUADRATURE RULES

One of the important uses of orthogonal polynomials is in the approximation of integrals involving a positive measure dλ by quadrature rules of maximum, or nearly maximum, algebraic degree of exactness.

In this context, it is indispensable to know the recursion coefficients for the respective orthogonal polynomials {πk(· ; dλ)}, since they allow us to generate the desired quadrature rules accurately and effectively via eigenvalue techniques. The software developed in the previous sections thus finds here a vast area of application.

6.1. Gaussian quadrature. Given the (positive) measure dλ (having an infinite number of support points), there exists, for each n ∈ N, a quadrature rule

    ∫_R f(t) dλ(t) = Σ_{k=1}^{n} wk f(xk) + Rn(f)    (6.1)

having algebraic degree of exactness 2n − 1, i.e., zero error, Rn(f) = 0, whenever f is a polynomial of degree ≤ 2n − 1. The nodes xk indeed are the zeros of the nth-degree orthogonal polynomial πn(· ; dλ), and the weights wk, which are all positive, are also expressible in terms of the same orthogonal polynomials. Alternatively, and more importantly for computational purposes, the nodes xk are the eigenvalues of the nth-order Jacobi matrix

              [ α0     √β1                      ]
              [ √β1    α1     √β2               ]
    Jn(dλ) =  [        √β2    .      .          ]    (6.2)
              [               .      .    √βn−1 ]
              [                    √βn−1  αn−1  ]

where αk = αk(dλ), βk = βk(dλ) are the recurrence coefficients for the (monic) orthogonal polynomials {πk(· ; dλ)}, and the weights wk are expressible in terms of the associated eigenvectors. Specifically, if

    Jn(dλ) vk = xk vk,    vkᵀ vk = 1,    k = 1, 2, ..., n,    (6.3)

i.e., vk is the normalized eigenvector of Jn(dλ) corresponding to the eigenvalue xk, then

    wk = β0 v²k,1,    k = 1, 2, ..., n,    (6.4)

where β0 = β0(dλ) is defined in (1.4) and vk,1 is the first component of vk (cf. [40]). There are well-known and efficient algorithms, such as the QR algorithm, to compute eigenvalues and (part of the) eigenvectors of symmetric tridiagonal matrices. These are used in the routine gauss³, whose calling sequence is

    gauss(n,alpha,beta,eps,zero,weight,ierr,e).

On entry,

    n        is the number of terms in the Gauss formula; type integer
    alpha,beta  are arrays of dimension n assumed to hold the recursion
             coefficients alpha(k) = αk−1, beta(k) = βk−1, k = 1, 2, ..., n
    eps      is a relative error tolerance, for example, the machine
             precision.

On return,

    zero,weight  are arrays of dimension n containing the nodes (in
             increasing order) and the corresponding weights of the Gauss
             formula, zero(k) = xk, weight(k) = wk, k = 1, 2, ..., n
    ierr     is an error flag equal to 0 on normal return, equal to i if
             the QR algorithm does not converge within 30 iterations on
             evaluating the ith eigenvalue, equal to –1 if n is not in
             range, and equal to –2 if one of the β's is negative.

The array e of dimension n is used for working space. The double-precision routine has the name dgauss.

We refrain here from giving numerical examples, since the use of the routine gauss, and the routines yet to be described, is straightforward.

³This routine was kindly supplied to the author by Professor G.H. Golub.

Some use of gauss and dgauss has already been made in Examples 4.2, 4.3, 4.4 and 5.2.

6.2. Gauss-Radau quadrature. We now assume that dλ is a measure whose support is either bounded from below, or bounded from above, or both. Let, then, x0 be either the infimum or the supremum of supp dλ, so long as it is finite.

It thus has the formZRf(t)dλ(t) = w0f(x0) +nXk=1wkf(xk) + Rn(f),(6.5)and, as is well known, can be made to have degree of exactness 2n, i.e.,Rn(f) = 0 for all polynomials of degree ≤2n. Interestingly, all nodes x0,x1, .

. .

, xn and weights w0, w1, . .

. , wn can again be interpreted in terms ofeigenvalues and eigenvectors, exactly as in the case of Gaussian quadraturerules, but now relative to the matrix (cf.

[39])J∗n+1(dλ) =α0√β10√β1α1.........pβn−1pβn−1αn−1√βn0√βnα∗n∈R(n+1)×(n+1),(6.6)where αk = αk(dλ) (0 ≤k ≤n −1), βk = βk(dλ) (1 ≤k ≤n) as before, butα∗n = α∗n(dλ) = x0 −βnπn−1(x0; dλ)πn(x0; dλ). (6.7)Hence, we can use the routine gauss to generate the Gauss-Radau formula.This is done in the following subroutine.subroutine radau(n,alpha,beta,end,zero,weight,ierr,e,a,b)dimension alpha(*),beta(*),zero(*),weight(*),e(*),a(*),b(*)cc The arraysalpha,beta,zero,weight,e,a,bare assumed to have49

c dimension n+1.cepsma=r1mach(3)cc epsma is the machine single precision.cnp1=n+1do 10 k=1,np1a(k)=alpha(k)b(k)=beta(k)10 continuep0=0.p1=1.do 20 k=1,npm1=p0p0=p1p1=(end-a(k))*p0-b(k)*pm120 continuea(np1)=end-b(np1)*p0/p1call gauss(np1,a,b,epsma,zero,weight,ierr,e)returnendThe input variables are n, alpha, beta, end representing, respectively, n, twoarrays of dimension n + 1 containing the αk(dλ), βk(dλ), k = 0, 1, 2, . .

. , n,and the “endpoint” x0.

The nodes (in increasing order) of the Gauss-Radauformula are returned in the array zero, the corresponding weights in thearray weight. The arrays e, a, b are working space, and ierr an errorflag inherited from the routine gauss.

The double-precision routine has thename dradau.We remark that x0 could also be outside the support of dλ, in whichcase the routine would generate a “Christoffel-type” quadrature rule.6.3.Gauss-Lobatto quadrature.Assuming now the support of dλbounded on either side, we let x0 = inf supp (dλ) and xn+1 = sup supp (dλ)and consider a quadrature rule of the typeZRf(t)dλ(t) = w0f(x0) +nXk=1wkf(xk) + wn+1f(xn+1) + Rn(f)(6.8)50

having maximum degree of exactness 2n+1. This is called the Gauss-Lobattoquadrature rule.
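A Gauss–Lobatto rule can be checked numerically from the recursion coefficients alone, using the bordered-Jacobi-matrix construction of Golub [39]. The following is a hedged Python sketch (assuming numpy; not the package's lob routine), shown here for the Legendre measure with x0 = −1, xn+1 = +1.

```python
# Hedged sketch of a Gauss-Lobatto rule via a bordered Jacobi matrix [39].
import numpy as np

def lobatto_rule(alpha, beta, xl, xr):
    # alpha, beta: recursion coefficients alpha_0..alpha_{n+1}, beta_0..beta_{n+1}
    # of dlambda; the last entries are replaced by the starred values.
    n = len(alpha) - 2
    pm1l = pm1r = 0.0
    pl = pr = 1.0
    for k in range(n + 1):                 # pi_n and pi_{n+1} at both endpoints
        pm1l, pl = pl, (xl - alpha[k]) * pl - beta[k] * pm1l
        pm1r, pr = pr, (xr - alpha[k]) * pr - beta[k] * pm1r
    det = pl * pm1r - pr * pm1l            # 2x2 linear system, solved by Cramer
    astar = (xl * pl * pm1r - xr * pr * pm1l) / det
    bstar = (xr - xl) * pl * pr / det
    a = np.append(alpha[:n + 1], astar)
    b = np.append(beta[:n + 1], bstar)
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, b[0] * V[0, :] ** 2

# Legendre measure, n = 3: a 5-point Gauss-Lobatto rule on [-1, 1]
alpha = np.zeros(5)
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, 5)])
nodes, weights = lobatto_rule(alpha, beta, -1.0, 1.0)
```

Both endpoints come out as nodes (to roundoff), and with degree of exactness 2n + 1 = 7 the rule integrates t⁶ over [−1, 1] exactly (value 2/7).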

Its nodes x0, x1, ..., xn+1 and weights w0, w1, ..., wn+1 again admit the same spectral representation as in the case of Gauss and Gauss-Radau formulae, only this time the matrix in question has order n + 2 and is given by (cf. [39])

                 [ α0    √β1                          ]
                 [ √β1   α1     √β2                   ]
    J*n+2(dλ) =  [       √β2    .      .              ]    (6.9)
                 [              .      .      √βn     ]
                 [              √βn    αn    √β*n+1   ]
                 [                    √β*n+1  α*n+1   ]

Here, as before, αk = αk(dλ) (0 ≤ k ≤ n), βk = βk(dλ) (1 ≤ k ≤ n), and α*n+1, β*n+1 are the unique solution of the linear 2 × 2 system

    [ πn+1(x0; dλ)     πn(x0; dλ)   ] [ α*n+1 ]   [ x0 πn+1(x0; dλ)     ]
    [ πn+1(xn+1; dλ)   πn(xn+1; dλ) ] [ β*n+1 ] = [ xn+1 πn+1(xn+1; dλ) ].    (6.10)

Hence, we have the following routine for generating Gauss-Lobatto formulae:

    subroutine lob(n,alpha,beta,aleft,right,zero,weight,ierr,e,a,b)
    dimension alpha(*),beta(*),zero(*),weight(*),e(*),a(*),b(*)
c
c The arrays alpha,beta,zero,weight,e,a,b are assumed to have
c dimension n+2.
c
    epsma=r1mach(3)
c
c epsma is the machine single precision.
c
    np1=n+1
    np2=n+2
    do 10 k=1,np2
      a(k)=alpha(k)
      b(k)=beta(k)
 10 continue
    p0l=0.
    p0r=0.
    p1l=1.
    p1r=1.
    do 20 k=1,np1
      pm1l=p0l
      p0l=p1l
      pm1r=p0r
      p0r=p1r
      p1l=(aleft-a(k))*p0l-b(k)*pm1l
      p1r=(right-a(k))*p0r-b(k)*pm1r
 20 continue
    det=p1l*p0r-p1r*p0l
    a(np2)=(aleft*p1l*p0r-right*p1r*p0l)/det
    b(np2)=(right-aleft)*p1l*p1r/det
    call gauss(np2,a,b,epsma,zero,weight,ierr,e)
    return
    end

The meaning of the input and output variables is as in the routine radau, the variable aleft standing for x0 and right for xn+1. The double-precision routine is named dlob.

A remark analogous to the one after the routine radau applies to the routine lob.

Acknowledgment.

The author gratefully acknowledges a number of suggestions from two anonymous referees and from the associate editor, Dr. Ronald F. Boisvert, for improving the code of the package.

Walter Gautschi
Department of Computer Sciences
Purdue University
West Lafayette, IN 47907
E-mail address: wxg@cs.purdue.edu

REFERENCES

1. ABRAMOWITZ, M. and STEGUN, I.A. (eds.), Handbook of Mathematical Functions, NBS Appl. Math. Ser. 55, U.S. Government Printing Office, Washington, D.C., 1964.
2. BOLEY, D. and GOLUB, G.H., A survey of matrix inverse eigenvalue problems, Inverse Problems 3 (1987), 595–622.
3. de BOOR, C. and GOLUB, G.H., The numerically stable reconstruction of a Jacobi matrix from spectral data, Linear Algebra Appl. 21 (1978), 245–260.
4. CHEBYSHEV, P.L., Sur l'interpolation par la méthode des moindres carrés, Mém. Acad. Impér. Sci. St. Pétersbourg (7) 1, no. 15 (1859), 1–24. [Œuvres I, pp. 473–498.]
5. CHIHARA, T.S., An Introduction to Orthogonal Polynomials, Gordon and Breach, New York, 1978.
6. CHIHARA, T.S., Orthogonal polynomials and measures with end point masses, Rocky Mountain J. Math. 15 (1985), 705–719.
7. CHRISTOFFEL, E.B., Über die Gaußische Quadratur und eine Verallgemeinerung derselben, J. Reine Angew. Math. 55 (1858), 61–82. [Ges. Math. Abhandlungen I, pp. 65–87.]
8. CHRISTOFFEL, E.B., Sur une classe particulière de fonctions entières et de fractions continues, Ann. Mat. Pura Appl. (2) 8 (1877), 1–10. [Ges. Math. Abhandlungen II, pp. 42–50.]
9. CODY, W.J. and HILLSTROM, K.E., Chebyshev approximations for the natural logarithm of the gamma function, Math. Comp. 21 (1967), 198–203.
10. DANLOY, B., Numerical construction of Gaussian quadrature formulas for ∫₀¹ (−Log x)·x^α·f(x)·dx and ∫₀^∞ Em(x)·f(x)·dx, Math. Comp. 27 (1973), 861–869.
11. FRONTINI, M., GAUTSCHI, W. and MILOVANOVIĆ, G.V., Moment-preserving spline approximation on finite intervals, Numer. Math. 50 (1987), 503–518.
12. GALANT, D., Gauss quadrature rules for the evaluation of 2π^{−1/2} ∫₀^∞ exp(−x²)f(x)dx, Review 42, Math. Comp. 23 (1969), 676–677. Loose microfiche suppl. E.
13. GALANT, D., An implementation of Christoffel's theorem in the theory of orthogonal polynomials, Math. Comp. 25 (1971), 111–113.
14. GALANT, D., Algebraic methods for modified orthogonal polynomials, Math. Comp. 59 (1992), 541–546.
15. GAUTSCHI, W., Numerical quadrature in the presence of a singularity, SIAM J. Numer. Anal. 4 (1967), 357–362.
16. GAUTSCHI, W., Computational aspects of three-term recurrence relations, SIAM Rev. 9 (1967), 24–82.
17. GAUTSCHI, W., On the preceding paper "A Legendre polynomial integral" by James L. Blue, Math. Comp. 33 (1979), 742–743.
18. GAUTSCHI, W., Minimal solutions of three-term recurrence relations and orthogonal polynomials, Math. Comp. 36 (1981), 547–554.
19. GAUTSCHI, W., On generating orthogonal polynomials, SIAM J. Sci. Stat. Comput. 3 (1982), 289–317.
20. GAUTSCHI, W., An algorithmic implementation of the generalized Christoffel theorem, in Numerical Integration (G. Hämmerlin, ed.), pp. 89–106. Intern. Ser. Numer. Math., v. 57, Birkhäuser, Basel, 1982.
21. GAUTSCHI, W., Discrete approximations to spherically symmetric distributions, Numer. Math. 44 (1984), 473–483.
22. GAUTSCHI, W., Questions of numerical condition related to polynomials, in Studies in Mathematics, v. 24, Studies in Numerical Analysis (G.H. Golub, ed.), pp. 140–177. The Mathematical Association of America, 1984.
23. GAUTSCHI, W., On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48 (1986), 369–382.
24. GAUTSCHI, W., Reminiscences of my involvement in de Branges's proof of the Bieberbach conjecture, in The Bieberbach Conjecture (A. Baernstein II et al., eds.), Mathematical Surveys and Monographs, No. 21, pp. 205–211. American Math. Soc., Providence, R.I., 1986.
25. GAUTSCHI, W., Computational aspects of orthogonal polynomials, in Orthogonal Polynomials – Theory and Practice (P. Nevai, ed.), pp. 181–216. NATO ASI Series, Series C: Mathematical and Physical Sciences, v. 294, Kluwer, Dordrecht, 1990.
26. GAUTSCHI, W., A class of slowly convergent series and their summation by Gaussian quadrature, Math. Comp. 57 (1991), 309–324.
27. GAUTSCHI, W., On certain slowly convergent series occurring in plate contact problems, Math. Comp. 57 (1991), 325–338.
28. GAUTSCHI, W., Quadrature formulae on half-infinite intervals, BIT 31 (1991), 438–446.
29. GAUTSCHI, W., Computational problems and applications of orthogonal polynomials, in Orthogonal Polynomials and Their Applications (C. Brezinski et al., eds.), pp. 61–71. IMACS Annals on Computing and Applied Mathematics, v. 9, Baltzer, Basel, 1991.
30. GAUTSCHI, W., Gauss-type quadrature rules for rational functions, in Numerical Integration IV (H. Brass and G. Hämmerlin, eds.), Internat. Ser. Numer. Math., to appear.
31. GAUTSCHI, W., On the computation of generalized Fermi-Dirac and Bose-Einstein integrals, Comput. Phys. Comm., to appear.
32. GAUTSCHI, W., Is the recurrence relation for orthogonal polynomials always stable?, BIT, to appear.
33. GAUTSCHI, W. and LI, S., A set of orthogonal polynomials induced by a given orthogonal polynomial, Aequationes Math., to appear.
34. GAUTSCHI, W. and MILOVANOVIĆ, G.V., Gaussian quadrature involving Einstein and Fermi functions with an application to summation of series, Math. Comp. 44 (1985), 177–190.
35. GAUTSCHI, W. and MILOVANOVIĆ, G.V., Spline approximations to spherically symmetric distributions, Numer. Math. 49 (1986), 111–121.
36. GAUTSCHI, W. and VARGA, R.S., Error bounds for Gaussian quadrature of analytic functions, SIAM J. Numer. Anal. 20 (1983), 1170–1186.
37. GAUTSCHI, W. and WIMP, J., Computing the Hilbert transform of a Jacobi weight function, BIT 27 (1987), 203–215.
38. GAUTSCHI, W., KOVAČEVIĆ, M.A. and MILOVANOVIĆ, G.V., The numerical evaluation of singular integrals with coth-kernel, BIT 27 (1987), 389–402.
39. GOLUB, G.H., Some modified matrix eigenvalue problems, SIAM Rev. 15 (1973), 318–334.
40. GOLUB, G.H. and WELSCH, J.H., Calculation of Gauss quadrature rules, Math. Comp. 23 (1969), 221–230.
41. GRAGG, W.B. and HARROD, W.J., The numerically stable reconstruction of Jacobi matrices from spectral data, Numer. Math. 44 (1984), 317–335.
42. KAUTSKY, J. and GOLUB, G.H., On the calculation of Jacobi matrices, Linear Algebra Appl. 52/53 (1983), 439–455.
43. LUKE, Y.L., Mathematical Functions and Their Approximations, Academic Press, New York, 1975.
44. REES, C.J., Elliptic orthogonal polynomials, Duke Math. J. 12 (1945), 173–187.
45. RUTISHAUSER, H., On Jacobi rotation patterns, in Experimental Arithmetic, High Speed Computing and Mathematics (N.C. Metropolis et al., eds.), pp. 219–239. Proc. Symp. Appl. Math. 15, American Math. Soc., Providence, R.I., 1963.
46. SACK, R.A. and DONOVAN, A.F., An algorithm for Gaussian quadrature given modified moments, Numer. Math. 18 (1971/72), 465–478.
47. STROUD, A.H. and SECREST, D., Gaussian Quadrature Formulas, Prentice-Hall, Englewood Cliffs, N.J., 1966.
48. UVAROV, V.B., Relation between polynomials orthogonal with different weights (Russian), Dokl. Akad. Nauk SSSR 126 (1959), 33–36.
49. UVAROV, V.B., The connection between systems of polynomials that are orthogonal with respect to different distribution functions (Russian), Ž. Vyčisl. Mat. i Mat. Fiz. 9 (1969), 1253–1262.
50. WHEELER, J.C., Modified moments and Gaussian quadrature, Rocky Mountain J. Math. 4 (1974), 287–296.
51. WILKINSON, J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.

