
VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF SCIENCE - VNU

NGUYEN THI HONG THAM

COMPARISON OF SOME RUNGE-KUTTA
METHODS FOR SOLVING
DIFFERENTIAL-ALGEBRAIC EQUATIONS

MASTER OF SCIENCE THESIS

Hanoi - 2017


VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF SCIENCE - VNU
- - - - - - - - - o0o - - - - - - - - -

Nguyen Thi Hong Tham

COMPARISON OF SOME RUNGE-KUTTA
METHODS FOR SOLVING
DIFFERENTIAL-ALGEBRAIC EQUATIONS

Major: Applied Mathematics
Code: 60460112

MASTER OF SCIENCE THESIS

THESIS SUPERVISOR:
Assoc. Prof. Dr. VU HOANG LINH


Hanoi - 2017


ACKNOWLEDGEMENT
I would like to thank all the people who have helped me make this thesis
possible. It is not possible to list everyone here, so I will name just a few.

First of all, I am very grateful to my supervisor, Assoc. Prof. Dr. Vu
Hoang Linh, who has spent a lot of time guiding and encouraging me. I would
like to express my deepest gratitude to him for his enormous help, critical
comments, advice, and for providing inspiration that cannot be expressed in
words.

I wish to thank all the other lecturers and professors at the Faculty of
Mathematics, Mechanics and Informatics of the University of Science for their
teaching, continuous support, and the tremendous research and study environment
they have created. I also thank my classmates for their friendship and support.
I will never forget their care and kindness.

Finally, I express my deep appreciation to my family for their wonderful,
never-ending, unlimited support and encouragement. I thank my parents, who have
sacrificed so much for my education and have encouraged me toward a master's
degree. Without their emotional support, I am sure I would not have been able
to finish my studies and complete this thesis.

Hanoi, April 28th 2017.
Student

Nguyen Thi Hong Tham
Contents

1 Introduction
  1.1 Differential-algebraic equations
      1.1.1 Definition of DAEs
      1.1.2 Index of a DAE
      1.1.3 Classification of DAEs
      1.1.4 Special DAE Forms
  1.2 Runge-Kutta methods
      1.2.1 Formulation of Runge-Kutta methods
      1.2.2 Classes of Runge-Kutta methods
      1.2.3 Simplifying assumptions

2 Implicit RK methods and half-explicit RK methods for semi-explicit DAEs of index 2
  2.1 Introduction
  2.2 Implicit Runge-Kutta methods
      2.2.1 Formula of implicit Runge-Kutta methods
      2.2.2 Convergence of implicit Runge-Kutta methods
      2.2.3 Order conditions
      2.2.4 Numerical experiment
  2.3 Half-explicit Runge-Kutta methods
      2.3.1 Formula of half-explicit Runge-Kutta methods
      2.3.2 Discussion of the convergence
      2.3.3 Order conditions
      2.3.4 Numerical experiment
  2.4 Discussion

3 Partitioned HERK methods for semi-explicit DAEs of index 2
  3.1 Introduction
  3.2 Partitioned half-explicit Runge-Kutta methods
      3.2.1 Definition of partitioned half-explicit RK method
      3.2.2 Existence and influence of perturbations
      3.2.3 Convergence of partitioned half-explicit Runge-Kutta methods
  3.3 Construction of partitioned half-explicit Runge-Kutta methods
      3.3.1 Methods of order up to 4
      3.3.2 Methods of order 5 and 6
  3.4 Numerical experiment
  3.5 Discussion

Bibliography


Abstract
In recent years, the use of differential equations in connection with algebraic
constraints on the variables, arising for example from conservation laws or
position constraints, has become a widely accepted tool for modeling the
dynamical behaviour of physical processes. Such combinations of differential
and algebraic equations are called differential-algebraic equations (DAEs).
DAEs arise in a variety of applications such as constrained multibody systems,
electrical networks, aerospace engineering, chemical processes, computational
fluid dynamics, and gas transport networks. Their analysis and numerical
treatment therefore play an important role in modern applied mathematics, and
fast and efficient numerical solvers for DAEs are highly desirable. Many
numerical methods have been developed for DAEs; most of them are based on
standard methods from the theory of ordinary differential equations (ODEs). It
is well known that the robust and numerically stable application of these ODE
methods to higher-index DAEs has to exploit the structure of the DAE. Numerical
methods for DAEs of index 1 were already discussed in my undergraduate thesis,
so this thesis concentrates on numerical methods for semi-explicit DAEs of
index 2.

Here, we are concerned with one-step methods for index-2 DAEs in Hessenberg
form. These methods combine efficient integrators from ODE theory with a
mechanism to handle the algebraic part. We present three classes of Runge-Kutta
methods and compare them. We first introduce implicit Runge-Kutta (IRK)
methods. Then we introduce half-explicit Runge-Kutta (HERK) methods, which
allow certain problems of semi-explicit index-2 form arising in the simulation
of multibody systems in (index-2) descriptor form to be solved more
efficiently. Although HERK methods are efficient, robust, and easy to
implement, they suffer from order reduction. To re-establish superconvergence,
we pay particular attention to partitioned half-explicit Runge-Kutta (PHERK)
methods. A detailed analysis of these methods is presented in this thesis: we
examine the existence and uniqueness of the numerical solutions, the influence
of perturbations, the local error and global convergence, and the order
conditions of the methods. Furthermore, numerical experiments in MATLAB with
the Radau IIA, HERK, and PHERK methods for DAEs of index 2 are presented.

The thesis is organized as follows. Chapter 1 provides background material on
differential-algebraic equations and Runge-Kutta methods. Implicit Runge-Kutta
and half-explicit Runge-Kutta methods applied to semi-explicit DAEs of index 2,
together with the characteristic properties of each method, are presented in
Chapter 2. Chapter 3 is the main part of the thesis, in which we pay particular
attention to PHERK methods for approximating the numerical solution of
non-stiff semi-explicit DAEs of index 2 and to their numerical experiments.
Finally, we discuss the pros and cons of each family of methods.


Chapter 1

Introduction
Differential-algebraic equations (DAEs) arise in a variety of applications
such as chemical processes, physical processes, electrical networks, and the
modeling of constrained multibody systems. Therefore, their analysis and
numerical treatment play an important role in modern applied mathematics. This
chapter gives an introduction to the theory of DAEs; some background material
on DAEs and on Runge-Kutta methods will be provided.

1.1 Differential-algebraic equations

1.1.1 Definition of DAEs

A differential-algebraic equation (DAE) is an equation involving an unknown
function and its derivatives.
A first-order DAE is a system of equations of the form

F(t, x, ẋ) = 0,      (1.1)

where t ∈ ℝ is the independent variable (generally referred to as the "time"
variable), x(t) ∈ ℝ^n is the unknown function, and ẋ(t) = dx/dt(t). The
function

F : ℝ × ℝ^n × ℝ^n → ℝ^n

is assumed to be differentiable.
The system (1.1) is a very general form of a DAE. In this thesis we consider
only initial value problems, i.e., systems of the form (1.1) subject to the
additional initial condition x(t0) = x0 for some initial time t0 ∈ ℝ and value
x0 ∈ ℝ^n.
Remark 1.1.1.
• In general, if the Jacobian matrix ∂F/∂ẋ is non-singular (invertible), then
the system F(t, x, ẋ) = 0 can be transformed into an ordinary differential
equation (ODE) of the form ẋ = f(t, x). Numerical methods for ODE models are
already well understood. Therefore, the most interesting case is when ∂F/∂ẋ
is singular.
• The method for solving a DAE will depend on its structure. A special but
important class of DAEs of the form (1.1) is the semi-explicit DAE, or
ordinary differential equation (ODE) with constraints,

ẏ = f(t, y, z),
0 = g(t, y, z),

which appears frequently in applications.
Example 1.1.1. The system

x1 − ẋ1 + 1 = 0,
ẋ1 x2 + 2 = 0

is a DAE. To see this, determine the Jacobian ∂F/∂ẋ of

F(t, x, ẋ) = ( x1 − ẋ1 + 1 ,  ẋ1 x2 + 2 )^T

with ẋ = (ẋ1, ẋ2)^T, so that

∂F/∂ẋ = [ ∂F1/∂ẋ1  ∂F1/∂ẋ2 ]   [ −1  0 ]
         [ ∂F2/∂ẋ1  ∂F2/∂ẋ2 ] = [ x2  0 ],    det(∂F/∂ẋ) = 0.

Hence, the Jacobian is a singular matrix irrespective of the value of x2.
Furthermore, we observe that in this example the derivative ẋ2 does not
appear.
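The singularity of ∂F/∂ẋ can also be verified symbolically. The following
short sketch (not part of the thesis; it uses Python with SymPy) recomputes the
Jacobian of Example 1.1.1:

```python
import sympy as sp

x1, x2, xd1, xd2 = sp.symbols('x1 x2 xdot1 xdot2')

# F(t, x, xdot) for the system of Example 1.1.1
F = sp.Matrix([x1 - xd1 + 1,
               xd1 * x2 + 2])

# Jacobian of F with respect to xdot = (xdot1, xdot2)
J = F.jacobian(sp.Matrix([xd1, xd2]))

print(J)        # Matrix([[-1, 0], [x2, 0]])
print(J.det())  # 0  -> singular for every x2, so the system is a genuine DAE
```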

1.1.2 Index of a DAE

Generally, the idea of all these index concepts is to classify DAEs with
respect to their difficulty in the analytical as well as the numerical solution.
There are different index definitions: Kronecker index (for linear constant
coefficient DAEs), differentiation index (Brenan et al. 1996), perturbation
index (Hairer et al. 1996), tractability index (Griepentrog et al. 1986), geometric index (Rabier et al. 2002), and strangeness index (Kunkel et al. 2006).
In this thesis, the focus is set on the differentiation index.
DAEs are usually very complex and hard to solve analytically. Therefore, they
are commonly solved by numerical methods.
Question: Is it possible to use numerical methods for ODEs to solve DAEs?
Idea: Attempt to transform the DAE into an ODE. This can be achieved by
repeatedly differentiating the system with respect to time t.


Definition 1.1.1. The minimum number of differentiation steps required to
transform a DAE into an ODE is known as the (differentiation) index of the
DAE.
Remark 1.1.2.
• The index measures the distance from a DAE to its related ODE. It reveals
the mathematical structure and the potential complications in the analysis
and the numerical solution of the DAE.
• The higher the index of a DAE, the more difficulties appear in its numerical
solution.
Example 1.1.2. Let q(t) be a given, sufficiently smooth function.
The scalar equation

y = q(t)

is a (trivial) index-1 DAE (one differentiation yields an ODE for y).
Now consider

y1 = q(t),
y2 = ẏ1.

Differentiating the first equation gives ẏ1 = q̇(t), i.e. y2 = q̇(t), and
differentiating once more gives ẏ2 = ÿ1 = q̈(t). This is an index-2 DAE (the
constraint has to be differentiated twice to obtain an ODE for y2).

1.1.3 Classification of DAEs

Frequently, DAEs possess mathematical structure that is specific to a given
application area. As a result we distinguish nonlinear DAEs, linear DAEs, etc.


Nonlinear DAEs:
If, in the DAE F(t, x, ẋ) = 0, the function F is nonlinear with respect to any
one of t, x, or ẋ, then it is said to be a nonlinear DAE.

Linear DAEs:
A DAE of the form

A(t) ẋ(t) + B(t) x(t) = c(t),

where A(t) and B(t) are n × n matrices, is linear. If A(t) ≡ A and B(t) ≡ B,
then we have a time-invariant linear DAE.

Semi-explicit DAEs:
A DAE system given in the form

ẋ = f(t, x, z),      (1.2)
0 = g(t, x, z)       (1.3)

is called semi-explicit.
• Note that the derivative of the variable z does not appear in the DAE.
• Such a variable z is called an algebraic variable, while x is called a
differential variable.
• The equation 0 = g(t, x, z) is called the algebraic equation or the
constraint.
Example 1.1.3 (Simple pendulum). Consider a simple pendulum of length l and
mass m, and let F denote the tension in the rod and g the gravitational
acceleration. Newton's law leads to the equations

m ẍ = −(F/l) x,
m ÿ = m g − (F/l) y,

together with the constraint on the length of the pendulum, x^2 + y^2 = l^2.
Setting x1 = x, x2 = y, x3 = ẋ, x4 = ẏ, we obtain a semi-explicit DAE system:

ẋ1 = x3,
ẋ2 = x4,
ẋ3 = −(F/(ml)) x1,
ẋ4 = g − (F/(ml)) x2,
0 = x1^2 + x2^2 − l^2.
Fully-implicit DAEs:
A DAE system of the form

F(t, x, ẋ) = 0

is called fully implicit.

Example 1.1.4. The system

x1 − ẋ1 + 1 = 0,      (1.4)
ẋ1 x2 + 2 = 0         (1.5)

is a fully-implicit DAE.

Remark 1.1.3.
• Any fully-implicit DAE can always be transformed into a semi-explicit DAE
whose index is increased by one, as follows: for F(t, y, ẏ) = 0, let ẏ = z.
Then we have

ẏ = z,
0 = F(t, y, z).

• The semi-explicit form is convenient because the differential and algebraic
variables are decoupled.


1.1.4 Special DAE Forms

The general DAE F(t, y, ẏ) = 0 can include problems which are not well-defined
in a mathematical sense, as well as problems which will result in failure for
any direct discretization method (i.e., without reformulation). Fortunately,
many of the higher-index problems encountered in practice can be expressed as
a combination of more restrictive structures of ODEs coupled with constraints.
Differential and algebraic variables can then be identified and treated
appropriately, and the algebraic variables can be eliminated using the same
number of differentiations. These are called Hessenberg forms.

Hessenberg index-1 DAEs have the form

ẋ = f(t, x, z),
0 = g(t, x, z),      with ∂g/∂z non-singular for all t.

This form is also called a semi-explicit index-1 DAE. It is very closely
related to implicit ODEs, because we can (in principle) solve for z in terms
of x and t by the implicit function theorem.

Hessenberg index-2 DAEs have the form

ẋ = f(t, x, z),
0 = g(t, x),         with (∂g/∂x)(∂f/∂z) non-singular for all t.

Note that g is independent of z. This form is also called a pure index-2 DAE
(all algebraic variables are index 2, not a mixture of index 1 and 2).
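As an illustration, the index-2 condition can be checked symbolically. The
sketch below (not part of the thesis; Python with SymPy) does this for the
system of Example 1.1.2 rewritten in semi-explicit form, ẋ = z, 0 = x − q(t),
where x plays the role of y1 and z the role of y2:

```python
import sympy as sp

t, x, z = sp.symbols('t x z')
q = sp.Function('q')(t)

# Hessenberg index-2 form: x' = f(t, x, z), 0 = g(t, x)
f = sp.Matrix([z])        # f(t, x, z) = z
g = sp.Matrix([x - q])    # g(t, x) = x - q(t), independent of z

gx = g.jacobian([x])      # dg/dx
fz = f.jacobian([z])      # df/dz

M = gx * fz
print(M, M.det())         # Matrix([[1]]) 1 -> non-singular, hence index 2
```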
It is also possible to take a semi-explicit index-2 DAE

ẋ = f(t, x, z),
0 = g(t, x)

to a fully implicit, index-1 form: let ẇ = z. Then

ẋ = f(t, x, ẇ),
0 = g(t, x, ẇ).

These problem classes are equivalent!

1.2 Runge-Kutta methods

Runge-Kutta (RK) methods are the most popular single-step (or one-step)
methods in practical applications. They were discovered by Carl Runge and
Martin Wilhelm Kutta, two German mathematicians, about a century ago. RK
methods for solving ODEs are well known and have enjoyed much success, and
they have also been extended to DAEs. In this section we describe some RK
terminology that will be useful later.

1.2.1 Formulation of Runge-Kutta methods

In carrying out a step we evaluate s stage values

Y1, Y2, . . . , Ys

and s stage derivatives

k1, k2, . . . , ks,

using the formula ki = f(Yi). Each Yi is defined as a linear combination of
the kj added on to y0,

Yi = y0 + h ∑_{j=1}^{s} aij kj,   i = 1, 2, . . . , s,

and the approximation at x1 = x0 + h is found from

y1 = y0 + h ∑_{i=1}^{s} bi ki.

Definition 1.2.1. Let b1, . . . , bs, aij (i, j = 1, . . . , s) be real numbers
and let ci = ∑_{j=1}^{s} aij. The method

ki = f(x0 + ci h, y0 + h ∑_{j=1}^{s} aij kj),   i = 1, . . . , s,

y1 = y0 + h ∑_{i=1}^{s} bi ki

is called an s-stage Runge-Kutta method.
There are many different Runge-Kutta methods; individual ones are usually
described by presenting their Butcher array.
Butcher tableau:

c | A        c1 | a11  · · ·  a1s
  | b^T   =   ⋮ |  ⋮           ⋮
             cs | as1  · · ·  ass
                | b1   · · ·  bs

where the ci are the row sums of the matrix A.

Example 1.2.1.
Runge-Kutta methods can be divided into two main types according to the
structure of the matrix A. If A is strictly lower triangular, then the method
is called "explicit"; otherwise, the method is called "implicit".



1.2.2 Classes of Runge-Kutta methods

1.2.2.1 Explicit Runge-Kutta methods

For explicit methods, the Butcher array is strictly lower triangular. Thus it
is common to omit the upper-diagonal terms when writing the Butcher array:

0   |
c2  | a21
c3  | a31   a32
 ⋮  |  ⋮     ⋮    ⋱
cs  | as1   as2   · · ·  as,s−1
    | b1    b2    · · ·  bs−1    bs

where c2 = a21, c3 = a31 + a32, . . . , cs = as1 + · · · + as,s−1.
Examples:

1-stage explicit Euler method:

0 | 0
  | 1

2-stage Runge-Kutta method:

0 | 0    0
1 | 1    0
  | 1/2  1/2

3-stage Runge-Kutta method:

0   | 0    0    0
1   | 1    0    0
1/2 | 1/4  1/4  0
    | 1/6  1/6  2/3

4-stage (classical) Runge-Kutta method:

0   | 0    0    0    0
1/2 | 1/2  0    0    0
1/2 | 0    1/2  0    0
1   | 0    0    1    0
    | 1/6  2/6  2/6  1/6
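A short sketch of how such a tableau is used in practice may be helpful (this
code is not part of the thesis, whose experiments use MATLAB). The function
below performs one step of an arbitrary explicit RK method described by its
Butcher tableau (A, b, c) and is tested with the classical 4-stage tableau
above on the scalar ODE ẏ = −y:

```python
import numpy as np

def explicit_rk_step(f, t0, y0, h, A, b, c):
    """One step of an explicit s-stage RK method given by its Butcher tableau (A, b, c)."""
    s = len(b)
    k = np.zeros((s, np.size(y0)))
    for i in range(s):
        Yi = y0 + h * (A[i, :i] @ k[:i])      # A is strictly lower triangular
        k[i] = f(t0 + c[i] * h, Yi)
    return y0 + h * (b @ k)

# classical 4-stage Runge-Kutta method (last tableau above)
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 2/6, 2/6, 1/6])
c = np.array([0, 1/2, 1/2, 1])

f = lambda t, y: -y                       # test ODE y' = -y, y(0) = 1
y, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    y, t = explicit_rk_step(f, t, y, h, A, b, c), t + h
print(y, np.exp(-1.0))                    # numerical vs. exact value at t = 1
```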



1.2.2.2 Implicit Runge-Kutta methods

Implicit Runge-Kutta (IRK) methods are more complicated: the diagonal and
supra-diagonal elements of the Butcher array may be nonzero.

c1 | a11  · · ·  a1s
 ⋮ |  ⋮           ⋮
cs | as1  · · ·  ass
   | b1   · · ·  bs

Examples:

1-stage implicit midpoint rule:

1/2 | 1/2
    | 1

2-stage implicit trapezoidal rule:

0 | 0    0
1 | 1/2  1/2
  | 1/2  1/2

2-stage Radau IIA method:

1/3 | 5/12  -1/12
1   | 3/4    1/4
    | 3/4    1/4

2-stage Lobatto IIIC method:

0 | 1/2  -1/2
1 | 1/2   1/2
  | 1/2   1/2

Remarks:
Implicit Runge-Kutta methods are suitable for stiff problems because their
regions of absolute stability are large. In contrast, explicit Runge-Kutta
methods are not suitable for stiff problems because they have a bounded region
of absolute stability.


1.2.3 Simplifying assumptions

The following are properties that an RK method usually has. For historical
reasons these properties are known as the simplifying assumptions. For the
description of the methods we introduce:

B(p): ∑_{i=1}^{s} bi ci^{k−1} = 1/k,                    1 ≤ k ≤ p,

C(q): ∑_{j=1}^{s} aij cj^{k−1} = ci^k / k,              1 ≤ i ≤ s, 1 ≤ k ≤ q,

D(r): ∑_{i=1}^{s} bi ci^{k−1} aij = (bj / k)(1 − cj^k), 1 ≤ j ≤ s, 1 ≤ k ≤ r.

Condition B(p) means that the quadrature formula with weights b1, . . . , bs
and nodes c1, . . . , cs integrates polynomials up to degree p − 1 exactly on
the interval [0, 1], and condition C(q) says that polynomials up to degree
q − 1 are integrated exactly on the interval [0, ci], for each i, by the
quadrature formula with weights ai1, . . . , ais.
An RK method is said to have stage order q if C(q) is satisfied.
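The simplifying assumptions are easy to check numerically for a given tableau.
The sketch below (not part of the thesis) tests B(p) and C(q) for the 2-stage
Radau IIA method, which satisfies B(3) and C(2), i.e., it has stage order 2:

```python
import numpy as np

def check_B(b, c, p, tol=1e-12):
    """B(p): sum_i b_i c_i^(k-1) = 1/k for k = 1, ..., p."""
    return all(abs(np.sum(b * c**(k - 1)) - 1.0 / k) < tol for k in range(1, p + 1))

def check_C(A, c, q, tol=1e-12):
    """C(q): sum_j a_ij c_j^(k-1) = c_i^k / k for all i and k = 1, ..., q."""
    return all(np.allclose(A @ c**(k - 1), c**k / k, atol=tol) for k in range(1, q + 1))

# 2-stage Radau IIA tableau
A = np.array([[5/12, -1/12], [3/4, 1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])

print(check_B(b, c, p=3), check_B(b, c, p=4))   # True False
print(check_C(A, c, q=2), check_C(A, c, q=3))   # True False
```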



Chapter 2


Implicit RK methods and
half-explicit RK methods
for semi-explicit DAEs of
index 2
Most numerical methods for differential-algebraic equations (DAEs) are based
on standard methods from the theory of ordinary differential equations (ODEs).
It is well known that the robust and numerically stable application of these
ODE methods to higher-index DAEs has to exploit the structure of the DAE
(e.g., index, semi-explicit form, Hessenberg form, etc.). In the present
chapter we focus on implicit Runge-Kutta (IRK) methods and half-explicit
Runge-Kutta (HERK) methods for index-2 DAEs in Hessenberg form. We consider
the numerical solution of systems of semi-explicit index-2 differential-
algebraic equations by methods based on Runge-Kutta coefficients. These
methods combine an efficient s-stage Runge-Kutta method from ODE theory for
the differential part with a method to handle the algebraic part [2].

2.1 Introduction

We consider the following class of semi-explicit differential-algebraic
equations (DAEs) of index 2, given in autonomous Hessenberg form,

ẏ = f(y, z),
0 = g(y),                                        (2.1)

where y ∈ ℝ^n are the differential variables and z ∈ ℝ^m are the algebraic
variables. The functions f : ℝ^n × ℝ^m → ℝ^n and g : ℝ^n → ℝ^m are assumed to
be sufficiently differentiable, and

gy(y) fz(y, z) is invertible.                    (2.2)

The initial values y0, z0 at t0 are supposed to be given and to be consistent,
i.e., to satisfy

g(y0) = 0,   gy(y0) f(y0, z0) = 0.               (2.3)

The latter condition is called the hidden constraint; it is obtained by
differentiating the algebraic equation.
The system of DAEs (2.1) is thus of index 2 [3, 7].
The system (2.1) can be integrated by considering the system obtained from
(2.1) by differentiating the constraint once, i.e., the index-1 DAE

ẏ = f(y, z),   0 = gy(y) f(y, z).                (2.4)

Here, we are concerned with one-step methods that integrate the problem (2.1)
directly, without making use of the algebraic constraint in (2.3) (the
so-called hidden constraint). The first one-step methods studied in the
literature for integrating the system (2.1) directly are implicit Runge-Kutta
methods [3, 7] (see also [5]). Half-explicit Runge-Kutta methods, proposed in
[3] and developed in [4], allow certain problems of the form (2.1) arising in
the simulation of multibody systems in (index-2) descriptor form to be solved
more efficiently.


2.2 Implicit Runge-Kutta methods

2.2.1 Formula of implicit Runge-Kutta methods

The standard definition of an s-stage Runge-Kutta method applied to
semi-explicit index-2 DAEs is as follows [3, 7]. We consider one step with
stepsize h starting from yn at tn. The numerical solution yn+1, zn+1
approximating the exact solution y(t), z(t) at tn+1 = tn + h is given by

yn+1 = yn + h ∑_{i=1}^{s} bi f(Yni, Zni),                  (2.5)

zn+1 = zn + ∑_{i=1}^{s} ∑_{j=1}^{s} bi wij (Znj − zn),     (2.6)

where the s internal stages Yni and Zni (i = 1, . . . , s) are the solution of
the system of nonlinear equations

Yni = yn + h ∑_{j=1}^{s} aij f(Ynj, Znj),                  (2.7a)
0 = g(Yni),   i = 1, . . . , s.                            (2.7b)

Here W := (wij)_{i,j=1}^{s} denotes the inverse of the RK matrix A, i.e.,
W := A^{−1}, assuming that A is invertible.
Remark 2.2.1.
1. From the above standard application of implicit RK methods it is important
to notice that, by (2.7b), all internal stages Yni (i = 1, . . . , s) satisfy
the constraint g(y) = 0, whereas the numerical solution yn+1 generally does
not. However, for stiffly accurate RK methods, i.e., for methods satisfying
asi = bi for i = 1, . . . , s, we have yn+1 = Yns. Therefore g(yn+1) = 0 is
automatically satisfied for such methods, since (2.7b) with i = s gives
g(Yns) = 0. Superconvergence of stiffly accurate methods has been demonstrated
[3, 5, 7].
2. The stages Y1, . . . , Ys and Z1, . . . , Zs are obtained by solving
(m + n) × s nonlinear equations simultaneously. Therefore, the method can be
very computationally expensive.
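To make the computational structure of (2.5)-(2.7) concrete, the following
sketch (not part of the thesis, whose experiments are carried out in MATLAB)
implements one step of the 2-stage Radau IIA method for a small illustrative
index-2 test problem with known exact solution; the test problem, the function
names, and the use of SciPy's generic nonlinear solver are choices made here
for illustration only:

```python
import numpy as np
from scipy.optimize import fsolve

# 2-stage Radau IIA coefficients (stiffly accurate)
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
W = np.linalg.inv(A)               # W = A^{-1}, needed for the z-update (2.6)
s, n, m = 2, 2, 1

# illustrative index-2 test problem: y1' = y2, y2' = z, 0 = y1^2 + y2^2 - 1
# exact solution: y1 = sin t, y2 = cos t, z = -sin t (for 0 <= t < pi/2)
def f(y, z):
    return np.array([y[1], z[0]])

def g(y):
    return np.array([y[0]**2 + y[1]**2 - 1.0])

def radau_iia_step(yn, zn, h):
    """One step of (2.5)-(2.7) applied to the semi-explicit index-2 DAE."""
    def residual(u):
        Y = u[:s*n].reshape(s, n)
        Z = u[s*n:].reshape(s, m)
        r = np.empty(s*n + s*m)
        for i in range(s):
            r[i*n:(i+1)*n] = Y[i] - yn - h * sum(A[i, j] * f(Y[j], Z[j])
                                                 for j in range(s))    # (2.7a)
            r[s*n + i*m:s*n + (i+1)*m] = g(Y[i])                       # (2.7b)
        return r

    u = fsolve(residual, np.concatenate([np.tile(yn, s), np.tile(zn, s)]), xtol=1e-12)
    Y, Z = u[:s*n].reshape(s, n), u[s*n:].reshape(s, m)
    y_next = yn + h * sum(b[i] * f(Y[i], Z[i]) for i in range(s))      # (2.5)
    z_next = zn + sum(b[i] * W[i, j] * (Z[j] - zn)
                      for i in range(s) for j in range(s))             # (2.6)
    return y_next, z_next

y, z, h = np.array([0.0, 1.0]), np.array([0.0]), 0.1   # consistent initial values
for _ in range(10):
    y, z = radau_iia_step(y, z, h)
print(y, [np.sin(1.0), np.cos(1.0)])    # numerical vs. exact y(1)
print(z, -np.sin(1.0))                  # numerical vs. exact z(1)
```

Since Radau IIA is stiffly accurate (asi = bi), the update (2.6) reduces to
zn+1 = Zns, in agreement with Remark 2.2.1.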

2.2.2 Convergence of implicit Runge-Kutta methods

Existence and uniqueness of the solution of the system of nonlinear equations
arising in implicit Runge-Kutta methods are shown in the following theorem.

Theorem 2.2.1. Let η, ξ satisfy

g(η) = O(h^2),   gy(η) f(η, ξ) = O(h),            (2.8)

and suppose that (2.2) holds in an h-independent neighbourhood of (η, ξ). If
the Runge-Kutta matrix A = (aij) is invertible, then the nonlinear system

Yi = η + h ∑_{j=1}^{s} aij f(Yj, Zj),             (2.9a)
0 = g(Yi)                                         (2.9b)

possesses a solution for h ≤ h0. The solution is locally unique and satisfies

Yi − η = O(h),   Zi − ξ = O(h).                   (2.10)


Definition 2.2.1. The difference between the numerical solution (y1, z1) and
the exact solution at x + h,

δyh(x) = y1 − y(x + h),   δzh(x) = z1 − z(x + h),

is called the local error.
Lemma 2.2.1. Suppose that the Runge-Kutta method satisfies, with p ≥ q + 1 and
q ≥ 1, the conditions

B(p): ∑_{i=1}^{s} bi ci^{k−1} = 1/k       for k = 1, . . . , p,

C(q): ∑_{j=1}^{s} aij cj^{k−1} = ci^k / k for k = 1, . . . , q and all i.

Then the local error is of magnitude

δyh(x) = O(h^{q+1}),   P(x) δyh(x) = O(h^{q+2}),   δzh(x) = O(h^q).    (2.11)

Here P(x) is a projection given by

P(x) = I − Q(x),   Q(x) = (fz (gy fz)^{−1} gy)(y(x), z(x)).            (2.12)

The limit of the stability function at infinity plays a decisive role for
differential-algebraic equations. For invertible A it is given by

R(∞) = 1 − b^T A^{−1} 𝟙,   where 𝟙 = (1, 1, . . . , 1)^T.              (2.13)
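For later reference, R(∞) is easy to evaluate numerically for a given tableau.
A minimal sketch (not from the thesis):

```python
import numpy as np

def R_inf(A, b):
    """R(infinity) = 1 - b^T A^{-1} 1 for an RK method with invertible A, cf. (2.13)."""
    return 1.0 - b @ np.linalg.solve(A, np.ones(len(b)))

# 2-stage Radau IIA (stiffly accurate): R(inf) = 0, so |R(inf)| < 1
print(R_inf(np.array([[5/12, -1/12], [3/4, 1/4]]), np.array([3/4, 1/4])))  # 0.0 up to rounding

# 1-stage implicit midpoint rule: R(inf) = -1, the borderline case |R(inf)| = 1
print(R_inf(np.array([[0.5]]), np.array([1.0])))                           # -1.0
```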

Convergence for the y-component

Theorem 2.2.2. Suppose that the estimate (2.2) holds in a neighbourhood of the
solution (y(x), z(x)) of (2.1) and that the initial values are consistent. If
the Runge-Kutta matrix A is invertible, |R(∞)| < 1, and the local error
satisfies

δyh(x) = O(h^r),   P(x) δyh(x) = O(h^{r+1})                            (2.14)

with P(x) given in (2.12), then the method (2.5) is convergent of order r,
i.e.,

yn − y(xn) = O(h^r)   for xn = nh ≤ Const.

If, in addition, δyh(x) = O(h^{r+1}), then we have g(yn) = O(h^{r+1}).
The proof of this theorem is given in [4, Theorem VII.4.5] and [10,
Theorem 4.4]; it is therefore omitted.
Convergence for the z-component
The following theorem shows that the global error of the z-component is
essentially equal to its local error if |R(∞)| < 1.

Theorem 2.2.3. Suppose that the estimate (2.2) holds in a neighbourhood of the
solution (y(x), z(x)) of (2.1) and that the initial values are consistent.
Assume that the Runge-Kutta matrix A is invertible and |R(∞)| < 1. If the
global error of the y-component is O(h^k), g(yn) = O(h^{k+1}), and the local
error of the z-component is O(h^k), then the global error satisfies

zn − z(xn) = O(h^k)   for xn = nh ≤ Const.
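In the numerical experiments reported later, the observed convergence orders
of the y- and z-components can be estimated from the global errors at two step
sizes. A small helper (not from the thesis; the error values below are purely
hypothetical), assuming err(h) ≈ C·h^p:

```python
import numpy as np

def observed_order(err_h, err_h_half):
    """Estimate p from err(h) ~ C h^p via p ~ log2(err(h) / err(h/2))."""
    return np.log2(err_h / err_h_half)

# hypothetical global errors of the y-component at step sizes h and h/2
print(observed_order(2.4e-6, 3.0e-7))   # = 3.0, consistent with an order-3 method
```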

2.2.3 Order conditions

The goal now is to obtain a local error estimate for the differential variable
y1 compared with the exact solution y(t) at t0 + h passing through the
consistent initial values (y0, z0) at t0. In this thesis we do not restate the
whole tree theory for semi-explicit index-2 DAEs, which can be found, for
example, in [3, 7]. Definitions of trees t ∈ Ty and the related quantities
p(t), γ(t), etc., are as in [3, Section 5] and [7, Section VII.5]. We only
mention the main ideas, the algorithm, and the results.
We will derive the conditions on the method coefficients which ensure that the
local error is of a given order. They are obtained by comparing the Taylor
expansions of the exact and the numerical solution.


