arXiv:1109.5100v1 [math.NA] 23 Sep 2011
BLOCK KRYLOV SUBSPACE EXACT TIME INTEGRATION
OF LINEAR ODE SYSTEMS
PART 1: ALGORITHM DESCRIPTION∗
MIKE A. BOTCHEV†
Abstract. We propose a time-exact Krylov-subspace-based method for solving linear ODE (ordinary differential equation) systems of the form $y' = -Ay + g(t)$, where $y(t)$ is the unknown function. The method consists of two stages. The first stage is an accurate polynomial approximation of the source term $g(t)$, constructed with the help of the truncated SVD (singular value decomposition). The second stage is a special residual-based block Krylov subspace method. The accuracy of the method is only restricted by the accuracy of the polynomial approximation and by the error of the block Krylov process. Since both errors can, in principle, be made arbitrarily small, this yields, at some costs, a time-exact method.
Key words. Krylov subspace; matrix function; block Arnoldi process; block Lanczos process; exponential time integrators; matrix exponential residual; truncated SVD; proper orthogonal decomposition

AMS subject classifications. 65F60, 65F10, 65F30, 65N22, 65L05
1. Problem formulation. Consider the initial-value problem (IVP)

$$
y' = -Ay + g(t), \qquad y(0) = v, \qquad t \in [0, T], \tag{1.1}
$$

where $y(t)$ is the unknown vector function, $y : \mathbb{R} \to \mathbb{R}^n$, and the matrix $A \in \mathbb{R}^{n \times n}$, vector function $g : \mathbb{R} \to \mathbb{R}^n$, and vector $v \in \mathbb{R}^n$ are given.
Let $\tilde y(t) \equiv y(t) - v$ (meaning that $\tilde y(t) = y(t) - v$ for all $t$). Note that the function $\tilde y(t)$ satisfies the IVP

$$
\tilde y' = -A\tilde y + \tilde g(t), \qquad \tilde y(0) = 0, \qquad t \in [0, T], \tag{1.2}
$$

where $\tilde g(t) \equiv g(t) - Av$. We will assume that the IVP (1.1) is brought to the equivalent form (1.2) and, for simplicity, we omit the tilde sign $\tilde{\cdot}$ in (1.2).
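As a sanity check of this shift, the following sketch (with arbitrary test data $A$, $g$, $v$ that are not from the paper) integrates both (1.1) and (1.2) numerically and verifies that $\tilde y(T) + v$ reproduces $y(T)$:

```python
import numpy as np

# Illustrative check (assumed test data): solving the shifted IVP (1.2)
# with zero initial value and adding back v reproduces the solution of (1.1).
rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
v = rng.standard_normal(n)
g = lambda t: np.cos(t) * b            # source term of (1.1)
g_tilde = lambda t: g(t) - A @ v       # shifted source term of (1.2)

def rk4(f, y0, T, steps=2000):
    """Classical fourth-order Runge-Kutta for y' = f(t, y), y(0) = y0."""
    h, y, t = T / steps, y0.copy(), 0.0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

T = 2.0
yT = rk4(lambda t, y: -A @ y + g(t), v, T)                        # IVP (1.1)
yT_tilde = rk4(lambda t, y: -A @ y + g_tilde(t), np.zeros(n), T)  # IVP (1.2)
err = np.linalg.norm(yT - (yT_tilde + v))
print(err)
```

Because the shift is by a constant vector, the Runge-Kutta steps for the two systems coincide exactly, so the difference is only rounding error.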
2. Polynomial approximation. We now describe the first stage of the method, the best-fit polynomial approximation of the source term $g(t)$.

Choose $s$ points $0 = t_1 < t_2 < \cdots < t_{s-1} < t_s = T$ on the time interval $[0, T]$. The polynomial approximation is based on the truncated SVD (singular value decomposition) of the matrix

$$
\tilde G = \begin{bmatrix} g(t_1) & g(t_2) & \dots & g(t_s) \end{bmatrix} \in \mathbb{R}^{n \times s},
$$

whose columns are samples $g(t_i)$, $i = 1, \dots, s$, of the vector function $g(t)$. More precisely, let

$$
\tilde G = \tilde U \tilde\Sigma \tilde V^T, \qquad \tilde\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_s) \in \mathbb{R}^{s \times s}, \qquad \sigma_1 \geqslant \dots \geqslant \sigma_s \geqslant 0, \tag{2.1}
$$
∗This manuscript was created on September 23, 2011.
†Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede, the Netherlands, mbotchev@na-net.ornl.gov.
be the thin SVD [3, Section 2.5.4], where the matrices $\tilde U \in \mathbb{R}^{n \times s}$ and $\tilde V \in \mathbb{R}^{s \times s}$ have orthonormal columns $u_1, \dots, u_s$ and $v_1, \dots, v_s$, respectively. An approximation to $\tilde G$ can be obtained by truncating the SVD as

$$
\tilde G = \tilde U \tilde\Sigma \tilde V^T = \sum_{i=1}^{s} \sigma_i u_i v_i^T \approx \sum_{i=1}^{m} \sigma_i u_i v_i^T = U \Sigma V^T, \qquad m < s, \tag{2.2}
$$

where $\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_m) \in \mathbb{R}^{m \times m}$ and the matrices $U \in \mathbb{R}^{n \times m}$ and $V \in \mathbb{R}^{s \times m}$ are formed by the first $m$ columns of $\tilde U$ and $\tilde V$, respectively. Denote the obtained approximate matrix by $G = U \Sigma V^T$. It follows from (2.2) that the SVD of $\tilde G - G$ is readily available as $\sum_{i=m+1}^{s} \sigma_i u_i v_i^T$. Hence, for the 2-norm and Frobenius norm of the error $\tilde G - G$ it holds [3, Section 2.5.3] that

$$
\|\tilde G - G\|_2 = \sigma_{m+1}, \qquad \|\tilde G - G\|_F^2 = \sigma_{m+1}^2 + \cdots + \sigma_s^2.
$$
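These error identities are easy to confirm numerically. The sketch below (with an arbitrary random sample matrix, not data from the paper) computes the thin SVD, truncates it as in (2.2), and checks both norm formulas:

```python
import numpy as np

# Numerical check (illustrative data): for the truncated SVD G of G~,
# the 2-norm error equals sigma_{m+1} and the squared Frobenius error
# equals sigma_{m+1}^2 + ... + sigma_s^2.
rng = np.random.default_rng(2)
n, s, m = 100, 8, 3
G_tilde = rng.standard_normal((n, s))

# thin SVD: U_t is n x s, sig holds sigma_1 >= ... >= sigma_s, Vt_t is s x s
U_t, sig, Vt_t = np.linalg.svd(G_tilde, full_matrices=False)

# keep the m leading singular triplets, as in (2.2)
G = U_t[:, :m] @ np.diag(sig[:m]) @ Vt_t[:m, :]

err2 = np.linalg.norm(G_tilde - G, 2)
errF = np.linalg.norm(G_tilde - G, 'fro')
print(err2, errF)
```

Here `sig[m]` is $\sigma_{m+1}$ in the paper's 1-based notation, so `err2` should match it to rounding error.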
Looking at the SVD identity (2.1) columnwise, we see that every sample value $g(t_i)$ of the function $g(t)$ can be approximated by a linear combination of the vectors $u_1, \dots, u_m$:

$$
g(t_i) = (\sigma_1 v_{i1}) u_1 + (\sigma_2 v_{i2}) u_2 + \cdots + (\sigma_s v_{is}) u_s \approx (\sigma_1 v_{i1}) u_1 + (\sigma_2 v_{i2}) u_2 + \cdots + (\sigma_m v_{im}) u_m,
$$

where $v_{ij}$ are the entries of the orthogonal matrix $\tilde V$. Following the approach of [2], we consider the coefficients of these linear combinations, namely $\sigma_j v_{ij}$, $j = 1, \dots, m$, as values of some unknown functions $f_j(t)$ at $t_i$. These functions can be easily approximated, at a low cost (typically $m \ll n$) and with very high accuracy, by a polynomial fit [2]. This yields

$$
g(t_i) \approx f_1(t_i) u_1 + f_2(t_i) u_2 + \cdots + f_m(t_i) u_m \approx p_1(t_i) u_1 + p_2(t_i) u_2 + \cdots + p_m(t_i) u_m. \tag{2.3}
$$
For simplicity, we assume that all the best-fit polynomials have the same order $r$. Packing the polynomials $p_j(t)$, $j = 1, \dots, m$, into one polynomial vector function $p(t) = (p_1(t), \dots, p_m(t))^T$, we obtain a polynomial approximation

$$
g(t) \approx U p(t). \tag{2.4}
$$
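The whole first stage can be sketched as follows. The example below uses assumed smooth test data (a random rank-$m$ sample matrix built from $\sin t$ and $t^2$, not data from the paper): it truncates the SVD, fits each coefficient function $f_j(t_i) = \sigma_j v_{ij}$ by a low-degree polynomial, and checks that $U p(t_i)$ reproduces the samples $g(t_i)$:

```python
import numpy as np

# Sketch of stage 1 (assumed test data): truncated SVD of the samples,
# then a least-squares polynomial fit of the coefficient functions,
# giving g(t) ~= U p(t) as in (2.4).
rng = np.random.default_rng(3)
n, s, m, r = 300, 20, 2, 4                    # r = order of the polynomial fit
t = np.linspace(0.0, 1.0, s)                  # sample points t_1 = 0, ..., t_s = T
B = rng.standard_normal((n, m))
G_tilde = B @ np.vstack([np.sin(t), t**2])    # smooth samples, rank m

U_t, sig, Vt_t = np.linalg.svd(G_tilde, full_matrices=False)
U = U_t[:, :m]                                # first m left singular vectors
F = np.diag(sig[:m]) @ Vt_t[:m, :]            # F[j, i] = f_j(t_i) = sigma_j * v_ij

# fit each coefficient function f_j by a degree-r polynomial p_j(t)
polys = [np.polynomial.Polynomial.fit(t, F[j], deg=r) for j in range(m)]
p = lambda tau: np.array([pj(tau) for pj in polys])

# at the sample points, U p(t_i) should closely match g(t_i)
err = max(np.linalg.norm(G_tilde[:, i] - U @ p(t[i])) for i in range(s))
print(err)
```

With $m$ equal to the exact rank of the samples, the remaining error is purely that of the polynomial fit, which is tiny here because $\sin t$ and $t^2$ are well approximated by degree-4 polynomials on $[0, 1]$.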
Three sources contribute to the approximation error here. First, the quality of the approximation is influenced by the choice of the sample points $t_1, \dots, t_s$; second, by the number of terms $m$ in the SVD truncation (2.2); and, finally, by the polynomial best fit in (2.3). All these errors can be easily controlled when the approximation is constructed [2], thus making an adaptive approximation procedure possible. With (2.4), the original initial-value problem (1.2) takes the form

$$
y' = -Ay + U p(t), \qquad y(0) = 0, \qquad t \in [0, T]. \tag{2.5}
$$
We now introduce a block Krylov subspace method to solve this problem.
3. Residual-based block Krylov subspace method. To construct a Krylov subspace block iterative method for solving (2.5), we use the exponential residual concept described in [1]. Choosing the initial guess $y_0(t)$ to be a zero v
…(Full text truncated)…