Numerical Method in Classical Dynamics

Reading time: 5 minutes
...

📝 Original Info

  • Title: Numerical Method in Classical Dynamics
  • ArXiv ID: 1007.1258
  • Date: 2017-03-22
  • Authors: Not listed (the paper body does not include author information)

📝 Abstract

A set of algorithms is presented for efficient numerical calculation of the time evolution of classical dynamical systems. Starting with a first approximation for solving the differential equations that has a "reversible" character, we show how to bootstrap easily to higher order accuracy. The method, first shown for a single particle in one dimension, is then neatly extended to many dimensions and many particles.

💡 Deep Analysis

📄 Full Content

We start by considering Newton's Law of motion for one particle moving in one dimension; in the Appendix we show how to extend this method neatly to many dimensions and many particles. We write a pair of first order time evolution equations:
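The displayed pair, Eq. (1.1), can plausibly be read as follows (this is inferred from the surrounding description rather than quoted, with the particle mass absorbed into the force function f):

$$ \frac{d}{dt}\,x(t) = v(t), \qquad \frac{d}{dt}\,v(t) = f\big(x(t)\big), \tag{1.1} $$

so that d/dt (x, v) = M (x, v), with the nonlinear operator M acting as (x, v) ↦ (v, f(x)),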

where both x and v are time-dependent functions to be determined at some later time t, given their values at some initial time t = 0. The force is given by some specified function f(x); and the quantity M is defined as the (nonlinear) matrix/operator specified above.

For simplicity I will write
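What follows is presumably (an inference from the later use of ψ(t + δ) ≈ R(δ) ψ(t), not a quotation) the compact form in which the pair (x, v) is collected into a single object ψ:

$$ \frac{d}{dt}\,\psi(t) = M\,\psi(t). \tag{1.2} $$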

We assume that there exists an operator E(tM) that is the exact propagator:
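That is, presumably (the displayed equation is inferred from the wording, not quoted),

$$ \psi(t) = E(tM)\,\psi(0). \tag{1.3} $$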

The addition property E(t₁M) E(t₂M) = E((t₁ + t₂)M) follows. Alternatively, we may write d/dt E(tM) = M E(tM). (1.4) This formalism is familiar in the case where M is a general linear operator and E is simply the ordinary exponential function; however, it is also appropriate for nonlinear operators, as derived in reference [1].

Our objective is to show simple and accurate approximations to the operator E(δM), for small time-steps δ, for use in automated computations.

Following the general method given in [1], we start by constructing an approximate propagation operator R(δ) with the following properties:

and

Rather than expanding the approximate result ψ(t + δ) ≈ R(δ) ψ(t) in a power series in δ, we represent R as the exact propagator for some other problem, which is expanded about the true one, based upon M. The restriction (2.1) means that only odd powers of δ occur in the expansion (2.2). The quantities X₃, X₅, etc., are unknown. Our method will show how to eliminate those higher order errors, step by step.
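A plausible reading of the two properties of R(δ), inferred from this paragraph rather than quoted from the paper, is the reversibility restriction

$$ R(-\delta)\,R(\delta) = 1, \tag{2.1} $$

together with the representation of R as the exact propagator of a nearby problem,

$$ R(\delta) = E\!\left(\delta M + \delta^{3} X_{3} + \delta^{5} X_{5} + \cdots\right). \tag{2.2} $$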

One more general property of the abstract propagator function E is the following.
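A plausible form of this property, with the leading coefficient assumed by analogy with the linear Baker-Campbell-Hausdorff series, is

$$ E(A)\,E(B) = E\!\left(A + B + \tfrac{1}{2}[A/B] + \cdots\right). \tag{2.4} $$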

This is the nonlinear extension of the Baker-Campbell-Hausdorff theorem for the product of exponentials of non-commuting linear operators. The only difference is that, instead of the commutator [A, B] = AB − BA for linear operators, we have the “slash commutator” [A/B] = A/B − B/A for nonlinear operators, as defined in reference [1]. Now we proceed. The initial operator R(δ) is correct to order δ² and so we call it R₂(δ). Now we construct the following sandwich:
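A plausible form of the sandwich, inferred from the step-by-step reading given below and from the later remark that R₄ costs three R₂ evaluations per step, is

$$ R_4(\delta) = R_2(\beta\delta)\,R_2(\gamma\delta)\,R_2(\beta\delta) \tag{2.5} $$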

and try to choose the constants β, γ so that R₄(δ) = E(δM + δ⁵ Y₅ + · · ·). (2.6)

By working with Equation (2.4) we find the simple requirements,
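These requirements are presumably (inferred from the symmetric form of the sandwich and from the step lengths quoted below) that the three sub-steps add up to one step of length δ and that the leading δ³ error terms cancel:

$$ 2\beta + \gamma = 1, \qquad 2\beta^{3} + \gamma^{3} = 0, $$

whose solution is

$$ \beta = \frac{1}{2 - 2^{1/3}} = 1.351207\ldots, \qquad \gamma = 1 - 2\beta = -\frac{2^{1/3}}{2 - 2^{1/3}} = -1.702414\ldots $$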

This new formula (2.5) may be read as follows: take a step forward of length 1.351207…δ, then take a step backward of length 1.702414…δ, then another step forward of length 1.351207…δ. The result will be one step forward of length δ, with errors of order δ⁵.

The real challenge now is to construct R(δ), seemingly accurate only to first order in δ but restricted by the requirement (2.1).

Here is one suggestion, built in the “sandwich” manner, for the particular problem (1.1) that we started with.
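One standard reversible sandwich of this kind (offered here as a sketch consistent with the description, not necessarily the author's exact construction) advances x by a half step, updates v with the force evaluated at that midpoint, and advances x by another half step, each operation using the most recently updated values:

$$ x \to x + \tfrac{\delta}{2}\,v, \qquad v \to v + \delta\, f(x), \qquad x \to x + \tfrac{\delta}{2}\,v. $$

One can check directly that such a step satisfies R(−δ) R(δ) = 1 and reproduces E(δM) through order δ².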

It should be apparent that this formulation is very easy to program for automated computation. On the other hand, it is rather cumbersome if one writes out explicit formulas for the overall result of this sequence of operations.
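To illustrate how little programming is involved, here is a minimal Python sketch of such a reversible second-order step together with the fourth-order sandwich of Eq. (2.5); the function names and the drift-kick-drift form of the low-order step are choices made here for illustration, not the paper's own code.

```python
def r2_step(x, v, dt, force):
    """Reversible 'sandwich' step: half step in x, full step in v, half step in x.
    A sketch of a second-order reversible step; not necessarily the paper's
    exact construction of R(delta)."""
    x = x + 0.5 * dt * v        # advance the position a half step
    v = v + dt * force(x)       # advance the velocity using f at the midpoint
    x = x + 0.5 * dt * v        # advance the position another half step
    return x, v


# Coefficients solving 2*beta + gamma = 1 and 2*beta**3 + gamma**3 = 0.
BETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))   # about 1.351207
GAMMA = 1.0 - 2.0 * BETA                  # about -1.702414


def r4_step(x, v, dt, force):
    """Fourth-order step built as the sandwich R2(beta*dt) R2(gamma*dt) R2(beta*dt)."""
    x, v = r2_step(x, v, BETA * dt, force)
    x, v = r2_step(x, v, GAMMA * dt, force)
    x, v = r2_step(x, v, BETA * dt, force)
    return x, v
```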

I have applied this method to a simple problem, the Kepler orbit in a plane. With the initial conditions x(0) = 1, y(0) = 0, v_x(0) = 0, v_y(0) = 1, I broke a complete orbit into N steps and looked at the resulting error in y(N), which ought to return to zero. The results are shown in the tables below, for various values of N and various values of the source strength g (g = 1 gives a circular orbit).

Using R₂:

           g=0.625    g=1.0      g=2.5
N=100      2×10⁻¹     8×10⁻³     2×10⁻²
N=1,000    2×10⁻³     8×10⁻⁵     3×10⁻⁴
N=10,000   2×10⁻⁵     8×10⁻⁷     3×10⁻⁶

Using R₄:

           g=0.625    g=1.0      g=2.5
N=100      3×10⁻²     8×10⁻⁵     2×10⁻³
N=1,000    3×10⁻⁶     8×10⁻⁹     2×10⁻⁷
N=10,000   3×10⁻¹⁰    8×10⁻¹³    2×10⁻¹¹

Each increase in the number of steps by a factor of 10 improves the accuracy by a factor of 10² if we use R₂ and by a factor of 10⁴ if we use R₄. Of course, R₄ requires three times as many operations per step compared to R₂; but that seems a worthwhile investment, since we can use many fewer steps for a given overall accuracy.
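For concreteness, a minimal driver for this kind of test might look like the sketch below, reusing the r2_step/r4_step sketch above. It assumes an attractive inverse-square force of strength g (the paper's exact normalization of the source term is not shown) and the initial conditions quoted earlier; the orbital period is taken from Kepler's third law.

```python
import numpy as np

def kepler_force(r, g=1.0):
    """Attractive inverse-square force, f(r) = -g r / |r|^3 (assumed normalization)."""
    return -g * r / np.linalg.norm(r) ** 3

def y_error_after_one_orbit(n_steps, g=1.0, stepper=r4_step):
    """Integrate one full orbit in n_steps and return |y(N)|, which would be
    zero for an exact integration."""
    x = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])
    # Semi-major axis from the conserved energy, then the period T = 2*pi*sqrt(a^3/g).
    energy = 0.5 * np.dot(v, v) - g / np.linalg.norm(x)
    a = -g / (2.0 * energy)
    period = 2.0 * np.pi * np.sqrt(a ** 3 / g)
    dt = period / n_steps
    for _ in range(n_steps):
        x, v = stepper(x, v, dt, lambda r: kepler_force(r, g))
    return abs(x[1])

# Example: error in y after one circular orbit (g = 1) with 1,000 steps of R4.
print(y_error_after_one_orbit(1000, g=1.0))
```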

For comparison, I ran this same calculation using the popular Runge-Kutta method, at second order, and compared the results with those shown above for R₂. Overall, one sees the same rate of improvement in accuracy as N is increased; and this is to be expected. For this particular problem I found that my method gave somewhat better accuracy at each level; but I would not offer that as a general rule without much further study; and I encourage others to try both methods on their own favorite problems. I will say, however, that the programming for my method was considerably simpler than that for the R-K method; and I expect that this aspect of the comparison is even more marked as one goes to the fourth-order methods.

What about the Richardson technique? As a general rule, if you calculate something with a small parameter δ and know how it converges …
