We incorporate explicit Nyström methods into the RKQ algorithm for stepwise global error control in the numerical solution of initial-value problems. The initial-value problem is transformed into an explicitly second-order problem, so as to be suitable for Nyström integration. The Nyström methods used are of fourth, fifth and tenth order. Two examples demonstrate the effectiveness of the algorithm.
In two previous papers [3, 4] we considered the RKrvQz algorithm for stepwise control of the global error in the numerical solution of an initial-value problem (IVP) using Runge-Kutta methods. In the current paper, the third in the series, we focus our attention on the use of Nyström methods in this error-control algorithm for $n$-dimensional problems of the form
$$
\mathbf{y}''(x) = \mathbf{f}(x, \mathbf{y}), \qquad \mathbf{y}(x_0) = \mathbf{y}_0, \quad \mathbf{y}'(x_0) = \mathbf{y}'_0. \tag{1}
$$
Note that $\mathbf{f}$ is not explicitly dependent on $\mathbf{y}'$ (we note that Nyström methods can be used to solve the more general problem $\mathbf{y}''(x) = \mathbf{f}(x, \mathbf{y}, \mathbf{y}')$, but that is not our focus here). We designate this Nyström-based algorithm RKNrvQz, and we will show in a later section how any first-order IVP can be written in the form (1), so that RKNrvQz is, in fact, generally applicable. The motivation for considering this modification to RKrvQz is twofold: most physical systems are described by second-order differential equations, and Nyström methods applied to (1) tend to be more efficient than their Runge-Kutta counterparts.
Here we describe concepts, terminology and notation relevant to our work. Note that boldface quantities are $n \times 1$ vectors, except for $\alpha^r_i$, $I_n$, $F^r_y$, $F^r_{y'}$ and $g_y$, which are $n \times n$ matrices.
The most general definition of a Nyström method (sometimes known as Runge-Kutta-Nyström (RKN)) for solving (1) is
$$
\begin{aligned}
\mathbf{k}_p &= \mathbf{f}\Big(x_i + c_p h_i,\; \mathbf{w}_i + c_p h_i \mathbf{w}'_i + h_i^2 \sum_{q=1}^{m} a_{pq} \mathbf{k}_q\Big), \qquad p = 1, 2, \ldots, m,\\
\mathbf{w}_{i+1} &= \mathbf{w}_i + h_i \mathbf{w}'_i + h_i^2 \sum_{p=1}^{m} \bar{b}_p \mathbf{k}_p \equiv \mathbf{w}_i + h_i \mathbf{F}(x_i, \mathbf{w}_i),\\
\mathbf{w}'_{i+1} &= \mathbf{w}'_i + h_i \sum_{p=1}^{m} b_p \mathbf{k}_p.
\end{aligned}
\tag{2}
$$
The coefficients $c_p$, $a_{pq}$, $b_p$ and $\bar{b}_p$ are unique to the given method. If $a_{pq} = 0$ for all $q \geqslant p$, then the method is said to be explicit; otherwise, it is known as an implicit RKN method. We will focus our attention on explicit methods. In the second line of (2), we have implicitly defined the function $\mathbf{F}$. We treat $\mathbf{w}'_i$ as an 'internal parameter'; for our purposes here, we do not identify $\mathbf{w}'$ with $\mathbf{y}'$, because $\mathbf{f}$ does not depend on $\mathbf{y}'$. The symbol $\mathbf{w}$ is used here and throughout to indicate the approximate numerical solution, whereas the symbol $\mathbf{y}$ will be used to denote the exact solution. We will denote an RKN method of order $r$ as RKNr and, for such a method, we write
$$
\mathbf{w}^r_{i+1} = \mathbf{w}^r_i + h_i \mathbf{F}^r(x_i, \mathbf{w}^r_i).
$$
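To make the structure of an explicit RKN step concrete, here is a minimal Python sketch. The function signature and the one-stage tableau used in the demonstration are illustrative assumptions, not one of the methods used in this paper; the one-stage tableau is exact only when $\mathbf{f}$ is constant.

```python
def rkn_step(f, x, w, wp, h, c, a, b_bar, b):
    """One explicit m-stage Runge-Kutta-Nystrom step for y'' = f(x, y).
    The tableau (c, a, b_bar, b) is supplied by the caller, with the
    matrix a strictly lower triangular (explicit method)."""
    m = len(c)
    k = [None] * m
    for p in range(m):
        # stage value uses only previously computed k_q (q < p)
        y_stage = w + c[p] * h * wp + h**2 * sum(a[p][q] * k[q] for q in range(p))
        k[p] = f(x + c[p] * h, y_stage)
    w_new = w + h * wp + h**2 * sum(b_bar[p] * k[p] for p in range(m))
    wp_new = wp + h * sum(b[p] * k[p] for p in range(m))
    return w_new, wp_new

# Illustrative one-stage tableau (not one of the paper's methods):
# exact whenever f is constant.
c, a, b_bar, b = [0.0], [[]], [0.5], [1.0]

# Integrate y'' = 2, y(0) = 0, y'(0) = 0 (exact solution y = x^2) on [0, 1].
x, w, wp = 0.0, 0.0, 0.0
for _ in range(10):
    w, wp = rkn_step(lambda x, y: 2.0, x, w, wp, 0.1, c, a, b_bar, b)
    x += 0.1
# w -> 1.0 and wp -> 2.0, matching y(1) = 1, y'(1) = 2
```

Note that each step costs $m$ evaluations of $\mathbf{f}$, which is the stage count referred to below when comparing RKN with RK efficiency.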
The stepsize
$$
h_i = x_{i+1} - x_i
$$
carries the subscript because it may vary from step to step. It is known that RKNr has a local error of order $r+1$ and a global error of order $r$, just like its Runge-Kutta counterpart RKr.
Consider the first-order IVP
$$
\mathbf{y}'(x) = \mathbf{g}(x, \mathbf{y}), \qquad \mathbf{y}(x_0) = \mathbf{y}_0, \tag{4}
$$
and differentiate with respect to $x$. This gives
$$
y''_i = \frac{\partial g_i}{\partial x} + \sum_{j=1}^{n} \frac{\partial g_i}{\partial y_j}\, y'_j,
$$
where $y_i$ is the $i$th component of $\mathbf{y}$, and $g_i$ is the $i$th component of $\mathbf{g}$. Clearly, we have $y'_j = g_j$ for all $j = 1, 2, \ldots, n$, and so we can write
$$
\mathbf{y}''(x) = \mathbf{g}_x + \mathbf{g}_y \mathbf{g} \equiv \mathbf{f}(x, \mathbf{y}).
$$
The initial values for this second-order problem are then given by
$$
\mathbf{y}(x_0) = \mathbf{y}_0, \qquad \mathbf{y}'(x_0) = \mathbf{g}(x_0, \mathbf{y}_0).
$$
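As a concrete check of this transformation, consider the assumed scalar example $g(x, y) = -y$, for which $f = g_x + g_y g = (-1)(-y) = y$; the transformed problem is $y'' = y$ with $y(0) = 1$, $y'(0) = g(0, 1) = -1$, whose exact solution is $e^{-x}$. The sketch below integrates the transformed problem with a plain velocity-Verlet step, chosen only because it is a simple second-order integrator for $y'' = f(x, y)$; it is not one of the paper's methods.

```python
# Transformed second-order problem: y'' = f(x, y) = y,
# y(0) = 1, y'(0) = -1, exact solution e^{-x}.
def f(x, y):
    return y

x, w, wp = 0.0, 1.0, -1.0
h = 1.0e-3
for _ in range(1000):
    # one velocity-Verlet step (illustrative integrator only)
    w_new = w + h * wp + 0.5 * h * h * f(x, w)
    wp += 0.5 * h * (f(x, w) + f(x + h, w_new))
    w, x = w_new, x + h
# w now approximates y(1) = e^{-1} = 0.36788...
```

The numerical solution of the transformed problem reproduces the solution of the original first-order IVP, as the transformation requires.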
Hence, any first-order IVP can be transformed into an IVP of the form (1). This is ideally suited to Nyström methods, which are specifically designed for this type of IVP. Nyström methods are also more efficient than their Runge-Kutta counterparts; for example, the methods to be used later, RKN4 and RKN5, require three and four stage evaluations, respectively, whereas RK4 and RK5 require at least four and six stage evaluations, respectively.
It can be shown [5] that, for RKr,
$$
\boldsymbol{\Delta}^r_{i+1} = \alpha^r_i \boldsymbol{\Delta}^r_i + \boldsymbol{\varepsilon}^r_{i+1}, \qquad \alpha^r_i = I_n + h_i F^r_y(x_i, \xi_i), \tag{5}
$$
where $\boldsymbol{\varepsilon}^r_{i+1} = O(h_i^{r+1})$ is the local error, $\boldsymbol{\Delta}^r_{i+1}$ is the global error and $F^r_y$ is the Jacobian (with respect to $\mathbf{y}$) of the function $\mathbf{F}^r(x_i, \mathbf{w}^r_i)$ associated with RKr. The term $h_i F^r_y(x_i, \xi_i)$ in the matrix $\alpha^r_i$ arises from a first-order Taylor expansion of $\mathbf{F}^r(x_i, \mathbf{w}_i) = \mathbf{F}^r(x_i, \mathbf{y}_i + \boldsymbol{\Delta}^r_i)$ with respect to $\mathbf{y}_i$. For a Nyström method RKNr, we have $\mathbf{F}^r = \mathbf{F}^r(x_i, \mathbf{w}^r_i)$ and so, as above,
$$
\alpha^r_i = I_n + h_i F^r_y(x_i, \zeta_i),
$$
where $\zeta_i$ is an appropriate constant. Hence, the global error in RKNr is also given by (5).
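The propagation relation $\Delta_{i+1} = \alpha_i \Delta_i + \varepsilon_{i+1}$ with $\alpha_i = 1 + h F_y$ can be illustrated numerically in the scalar case. For the assumed linear update $w_{i+1} = w_i + h\lambda w_i$ the Taylor expansion is exact, so a perturbation $\delta$ of the initial value propagates exactly as $\prod_i \alpha_i = (1 + h\lambda)^n$:

```python
# Scalar illustration of Delta_{i+1} = alpha_i * Delta_i with
# alpha_i = 1 + h * F_y, for the linear update w_{i+1} = w_i + h*lam*w_i.
lam, h, n, delta = -2.0, 0.01, 100, 1e-6

w, w_pert = 1.0, 1.0 + delta   # unperturbed and perturbed initial values
for _ in range(n):
    w += h * lam * w
    w_pert += h * lam * w_pert

predicted = (1.0 + h * lam) ** n * delta   # product of the alpha_i
actual = w_pert - w
# predicted and actual agree to roundoff
```

For nonlinear $F$ the agreement holds only to first order in $\Delta_i$, which is precisely the content of the Taylor-expansion argument above.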
The algorithm RKNrvQz is nothing more than RKrvQz with RKr, RKv and RKz replaced with RKNr, RKNv and RKNz. Of course, RKNrvQz is applied to problems of the form (1), whereas RKrvQz is applied to problems of the form (4).
We also report a refinement to the algorithm. In RKrvQz, if the global error at $x_i$ is too large, we replace $\mathbf{w}^r_i$ with $\mathbf{w}^z_i$ and then recompute $\mathbf{w}^r_{i+1}$ and $\mathbf{w}^v_{i+1}$, using $\mathbf{w}^z_i$ as input for both RKr and RKv. This is the essence of the quenching procedure. In retrospect, however, it is quite acceptable simply to replace $\mathbf{w}^r_{i+1}$ and $\mathbf{w}^v_{i+1}$ with $\mathbf{w}^z_{i+1}$: after all, it is the global error in $\mathbf{w}^r_{i+1}$ and $\mathbf{w}^v_{i+1}$, not $\mathbf{w}^r_i$ and $\mathbf{w}^v_i$, that is too large, and this replacement avoids recomputing $\mathbf{w}^r_{i+1}$ and $\mathbf{w}^v_{i+1}$. Both approaches are effective, but the latter is more efficient, and it is this approach that we employ in RKNrvQz.
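The refined quench can be sketched structurally as follows. All names here are hypothetical placeholders (the actual step functions and error estimator are those of the RKNrvQz algorithm, not shown); the point is only the control flow: the high-order value overwrites the lower-order values, with no re-stepping from $x_i$.

```python
def quench_step(step_r, step_v, step_z, w_r, w_v, w_z, h, err_est, tol):
    """step_r, step_v, step_z: hypothetical one-step maps of increasing
    order; err_est: hypothetical global-error estimate for the pair."""
    # advance all three solutions from x_i to x_{i+1}
    w_r1, w_v1, w_z1 = step_r(w_r, h), step_v(w_v, h), step_z(w_z, h)
    if err_est(w_r1, w_v1) > tol:
        # quench at x_{i+1}: adopt the high-order value directly,
        # rather than resetting at x_i and recomputing the step
        w_r1 = w_v1 = w_z1
    return w_r1, w_v1, w_z1
```

The earlier variant would instead reset $w^r_i$ and $w^v_i$ to $w^z_i$ and call the two lower-order step functions again, doubling their cost on quenched steps.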
It is not our intention to compare methods or algorithms but, for the sake of consistency, we apply RKNrvQz to the same examples that we considered previously [3, 4]. Since the solution oscillates between $-1000$ and $1000$, there are regions where the solution has magnitude less than unity, where we implement absolute error control, and regions where the solution has magnitude greater than unity, where we implement relative error control. With an imposed tolerance of $10^{-8}$ on the local and global errors (relative and absolute), we found a maximum global error of $\sim 4 \times 10^{-8}$ in each component when using RKN45.
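The mixed error-control test described above can be sketched as a single criterion. The function name and its formulation via a scale factor are illustrative assumptions; the unit threshold and the $10^{-8}$ tolerance are from the text.

```python
def error_within_tol(err, w, tol=1e-8):
    """Mixed control (illustrative): absolute where |w| < 1,
    relative where |w| >= 1."""
    scale = max(1.0, abs(w))   # scale = 1 -> absolute; scale = |w| -> relative
    return abs(err) <= tol * scale
```

With this criterion a component of magnitude $1000$ is permitted an error up to $10^{-5}$, while a component of magnitude below unity is held to $10^{-8}$ absolutely, which is consistent with the observed maximum global error of $\sim 4 \times 10^{-8}$ per component.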