APPLIED NUMERICAL METHODS USING MATLAB (Part 5)

[Figure: (a) the curves y = x and y = g_a(x) = (x² + 1)/3, whose intersections x^{o1}, x^{o2} are the fixed points approached by the iteration x_{k+1} = g_a(x_k); (b) the curves y = x and y = g_b(x) = 3 − 1/x, with the iteration x_{k+1} = g_b(x_k) = 3 − 1/x_k.]

Figure P4.1 Iterative method based on the fixed-point theorem.
Noting that the first derivative of this iterative function g_a(x) is

g_a'(x) = (2/3)x  (P4.1.4)

determine which solution attracts this iteration and certify it in Fig. P4.1a. In addition, run the MATLAB routine "fixpt()" to perform the iteration (P4.1.3) with the initial points x_0 = 0, x_0 = 2, and x_0 = 3. What does the routine yield for each initial point?
(b) Now consider the following iterative formula:

x_{k+1} = g_b(x_k) = 3 − 1/x_k  (P4.1.5)

Noting that the first derivative of this iterative function g_b(x) is

g_b'(x) = −1/x²  (P4.1.6)

determine which solution attracts this iteration and certify it in Fig. P4.1b. In addition, run the MATLAB routine "fixpt()" to carry out the iteration (P4.1.5) with the initial points x_0 = 0.2, x_0 = 1, and x_0 = 3. What does the routine yield for each initial point?

(cf) This illustrates that the outcome of an algorithm may depend on the starting point.
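(The book's "fixpt()" listing is not reproduced in this excerpt. A minimal sketch of a fixed-point iterator with the calling pattern used above, fixpt(g,x0,TolX,MaxIter), might look as follows; the stopping rule and outputs are assumptions, not the book's exact code.)

function [x,err,xx] = fixpt(g,x0,TolX,MaxIter)
%FIXPT minimal fixed-point iteration sketch: x(k+1) = g(x(k))
% g: iterative function, x0: initial guess
% TolX: tolerance on |x(k+1) - x(k)|, MaxIter: maximum number of iterations
xx(1) = x0;
for k = 1:MaxIter
   xx(k + 1) = feval(g,xx(k));    %one fixed-point update
   err = abs(xx(k + 1) - xx(k));  %change between successive iterates
   if err < TolX, break; end      %successive iterates close enough: stop
end
x = xx(end);

For instance, >>ga = inline('(x.^2 + 1)/3','x'); x = fixpt(ga,2,1e-4,50) iterates (P4.1.3) from x_0 = 2.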
4.2 Bisection Method and Fixed-Point Iteration

Consider the nonlinear equation treated in Example 4.2.

f(x) = tan(π − x) − x = 0  (P4.2.1)

Two graphical solutions of this equation are depicted in Fig. P4.2, which can be obtained by typing the following statements into the MATLAB command window:

>>ezplot('tan(pi-x)',-pi/2,3*pi/2)
>>hold on, ezplot('x+0',-pi/2,3*pi/2)

(a) In order to use the bisection method for finding the solution between 1.5 and 3, Charley typed the statements shown below. Could he get the right solution? If not, explain to him why he failed and suggest how to fix it.

>>fp42 = inline('tan(pi-x)-x','x');
>>TolX = 1e-4; MaxIter = 50;
>>x = bisct(fp42,1.5,3,TolX,MaxIter)

(b) In order to find some interval to which the bisection method is applicable, Jessica used the MATLAB command "find()" as shown below.

>>x = [0: 0.5: pi]; y = tan(pi-x) - x;
>>k = find(y(1:end-1).*y(2:end) < 0);
>>[x(k) x(k + 1); y(k) y(k + 1)]
   ans = 1.5000    2.0000   2.0000    2.5000
       -15.6014    0.1850   0.1850   -1.7530

This shows that the sign of f(x) changes between x = 1.5 and 2.0 and also between x = 2.0 and 2.5. Noting this, Jessica thought that she might use the bisection method to find a solution between 1.5 and 2.0 by typing the following command.

>>x = bisct(fp42,1.5,2,TolX,MaxIter)

Check the validity of the solution, that is, check whether f(x) = 0 or not, by typing

>>fp42(x)

If her solution is not good, explain the reason. If you are not sure about it, try plotting the graph in Fig. P4.2 by typing the following statements into the MATLAB command window.

>>x = [-pi/2+0.05:0.05:3*pi/2 - 0.05];
>>plot(x,tan(pi - x),x,x)
[Figure: the curves y = tan(π − x) and y = x plotted for −π/2 < x < 3π/2, intersecting near x = 2.]

Figure P4.2 The graphical solutions of tan(π − x) − x = 0 or tan(π − x) = x.
(cf) This helps us understand why fzero(fp42,1.8) leads to the wrong solution without any warning message, as mentioned in Example 4.2.

(c) In order to find the solution around x = 2.0 by using the fixed-point iteration with the initial point x_0 = 2.0, Vania defined the iterative function as

>>gp421 = inline('tan(pi - x)','x'); % x = g1(x) = tan(pi - x)

and typed the following statement into the MATLAB command window.

>>x = fixpt(gp421,2,TolX,MaxIter)

Could she reach the solution near 2? Will it be better if you start the routine with a different initial point? What is wrong?

(d) Itha, seeing what Vania did, decided to try another iterative formula

x + tan^{-1}(x) = π,  x = g2(x) = π − tan^{-1}(x)  (P4.2.2)

So she defined the iterative function as

>>gp422 = inline('pi-atan(x)', 'x'); % x = g2(x) = pi - atan(x)

and typed the following statement into the MATLAB command window:

>>x = fixpt(gp422,2,TolX,MaxIter)

What could she get? Is it the right solution? Does this command work with different initial values, like 0 or 6, which are far from the solution we want to find? Describe the difference between Vania's approach and Itha's.
4.3 Recursive (Self-Calling) Routine for Bisection Method

As stated in Section 1.3, MATLAB allows us to make recursive routines, which call themselves. Modify the MATLAB routine "bisct()" (in Section 4.2) into a recursive routine "bisct_r()" and run it to solve Eq. (P4.2.1). A sketch of what such a routine might look like follows.
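(A minimal recursive-bisection sketch; the signature mirrors the bisct() calls used in Problem 4.2, but the bookkeeping details are assumptions rather than the book's listing.)

function x = bisct_r(f,a,b,TolX,MaxIter)
%BISCT_R recursive (self-calling) bisection sketch
fa = feval(f,a); fb = feval(f,b);
if fa*fb > 0, error('f(a) and f(b) must differ in sign!'); end
xm = (a + b)/2; fm = feval(f,xm);   %midpoint and its function value
if (b - a)/2 < TolX | MaxIter <= 1  %interval small enough (or out of tries)
   x = xm;
elseif fa*fm <= 0                   %sign change in the left half
   x = bisct_r(f,a,xm,TolX,MaxIter - 1);
else                                %sign change in the right half
   x = bisct_r(f,xm,b,TolX,MaxIter - 1);
end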
4.4 Newton Method and Secant Method

As can be seen in Fig. 4.5, the secant method introduced in Section 4.5 was devised to remove the need for the derivative/gradient and to improve the convergence. But it sometimes turns out to be worse than the Newton method. Apply the routines "newton()" and "secant()" to solve

f_{p44}(x) = x³ − x² − x + 1 = 0  (P4.4)

starting with the initial point x_0 = −0.2 for one run and x_0 = −0.3 for another.
4.5 Acceleration of Aitken–Steffensen Method

A sequence converging to a limit x^o can be described as

x^o − x_{k+1} = e_{k+1} ≈ A·e_k = A(x^o − x_k)
with lim_{k→∞} (x^o − x_{k+1})/(x^o − x_k) = A (|A| < 1)  (P4.5.1)

In order to think about how to improve the convergence speed of this sequence, we define a new sequence p_k as

(x^o − x_{k+1})/(x^o − x_k) ≈ A ≈ (x^o − x_k)/(x^o − x_{k−1});
(x^o − x_{k+1})(x^o − x_{k−1}) ≈ (x^o − x_k)²
(x^o)² − x_{k+1}x^o − x_{k−1}x^o + x_{k+1}x_{k−1} ≈ (x^o)² − 2x^o·x_k + x_k²

x^o ≈ (x_{k+1}x_{k−1} − x_k²)/(x_{k+1} − 2x_k + x_{k−1}) = p_k  (P4.5.2)

(a) Check that the error of this sequence p_k is as follows:

x^o − p_k = x^o − (x_{k+1}x_{k−1} − x_k²)/(x_{k+1} − 2x_k + x_{k−1})
= x^o − (x_{k−1}(x_{k+1} − 2x_k + x_{k−1}) − x_{k−1}² + 2x_{k−1}x_k − x_k²)/(x_{k+1} − 2x_k + x_{k−1})
= x^o − x_{k−1} + (x_k − x_{k−1})²/(x_{k+1} − 2x_k + x_{k−1})
= x^o − x_{k−1} + (−(x^o − x_k) + (x^o − x_{k−1}))²/(−(x^o − x_{k+1}) + 2(x^o − x_k) − (x^o − x_{k−1}))
= x^o − x_{k−1} + ((−A + 1)²(x^o − x_{k−1})²)/((−A² + 2A − 1)(x^o − x_{k−1}))
= 0  (P4.5.3)
Table P4.5 Comparison of Various Methods Applied for Solving Nonlinear Equations

                      Newton   Secant   Steffensen   Schroder   fzero()   fsolve()
f_42       x                   2.0288
x0 = 1.6   f(x)                1.19e-8                                    1.72e-9
           Flops      158      112      273          167        986       1454
f_p44      x          1.0000
x0 = 0     f(x)
           Flops      53       30       63           31         391       364
f_p45      x          5.0000                                              NaN
x0 = 0     f(x)                                                           NaN
           Flops      536      434      42           19         3683      1978

(cf) Since the flops() command is no longer available in MATLAB 6.x, the numbers of floating-point operations were obtained with MATLAB 5.x so that readers can compare the various algorithms in terms of their computational loads. The remaining cells are to be filled in.
(b) Modify the routine "newton()" into a routine "stfns()" that generates the sequence (P4.5.2) and run it to solve

f_42(x) = tan(π − x) − x = 0 (with x_0 = 1.6)  (P4.5.4)
f_{p44}(x) = x³ − x² − x + 1 = 0 (with x_0 = 0)  (P4.5.5)
f_{p45}(x) = (x − 5)⁴ = 0 (with x_0 = 0)  (P4.5.6)

Fill in Table P4.5 with the results and those obtained by using the routines "newton()", "secant()" (with the error tolerance TolX = 10⁻⁵), "fzero()", and "fsolve()". A sketch of such a routine is given below.
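(One plausible reading of the modification, shown as a sketch: take two Newton steps to obtain x_{k}, x_{k+1} from x_{k−1} and combine them via (P4.5.2). The explicit derivative argument df and the stopping test are assumptions; the book's newton() instead approximates f' numerically when no derivative is supplied.)

function [x,k] = stfns(f,df,x0,TolX,MaxIter)
%STFNS Newton iteration accelerated by the Aitken scheme (P4.5.2) - a sketch
x = x0;
for k = 1:MaxIter
   x1 = x - feval(f,x)/feval(df,x);    %one Newton step (x_k)
   x2 = x1 - feval(f,x1)/feval(df,x1); %a second Newton step (x_{k+1})
   dx = x2 - 2*x1 + x;                 %denominator of (P4.5.2)
   if dx ~= 0
      p = (x2*x - x1^2)/dx;            %accelerated estimate p_k
   else
      p = x2;                          %fall back on the plain iterate
   end
   if abs(p - x) < TolX, x = p; break; end
   x = p;
end

For example, >>f = inline('(x-5).^4','x'); df = inline('4*(x-5).^3','x'); x = stfns(f,df,0,1e-5,100) attacks (P4.5.6).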
4.6 Acceleration of Newton Method for Multiple Roots: Schroder Method

In order to improve the convergence speed, the Schroder method modifies the Newton iterative algorithm (4.4.2) into

x_{k+1} = x_k − M·f(x_k)/f'(x_k)  (P4.6.1)

with M: the order of multiplicity of the root we want to find.

Based on this idea, modify the routine "newton()" into a routine "schroder()" (sketched below) and run it to solve Eqs. (P4.5.4)-(P4.5.6). Fill in the corresponding blanks of Table P4.5 with the results.
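(A minimal sketch of the modified routine; the argument list, including an explicit derivative df and the multiplicity M, is an assumption rather than the book's listing.)

function [x,k] = schroder(f,df,x0,M,TolX,MaxIter)
%SCHRODER Newton iteration scaled by the root multiplicity M - a sketch
x = x0;
for k = 1:MaxIter
   dx = -M*feval(f,x)/feval(df,x); %Schroder update (P4.6.1)
   x = x + dx;
   if abs(dx) < TolX, break; end   %step small enough: stop
end

For the quadruple root of (P4.5.6), M = 4:
>>x = schroder(inline('(x-5).^4','x'),inline('4*(x-5).^3','x'),0,4,1e-5,50)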
4.7 Newton Method for Systems of Nonlinear Equations

Apply the routine "newtons()" (Section 4.6) and the MATLAB built-in routine "fsolve()" (with [x0 y0] = [1 0.5]) to solve the following systems of equations. Fill in Table P4.7 with the results. (A sketch of how to set up case (a) follows the list.)

(a) x² + y² = 1
    x² − y = 0  (P4.7.1)

(b) 5 cos θ1 + 6 cos(θ1 + θ2) = 10
    5 sin θ1 + 6 sin(θ1 + θ2) = 4  (P4.7.2)

(c) 3x² + 4y² = 3
    x² + y² = √3/2  (P4.7.3)

(d) x1³ + 10x1 − x2 = 5
    x1 + x2³ − 10x2 = −1  (P4.7.4)

(e) x² − √3·xy + 2y² = 10
    4x² + 3√3·xy + y = 22  (P4.7.5)

(f) x³y − y − 2x³ = −16
    x − y² = −1  (P4.7.6)

(g) x² + 4y² = 16
    xy² = 4  (P4.7.7)

(h) xe^y − x⁵ + y = 3
    x + y + tan x − sin y = 0  (P4.7.8)

(i) 2 log y − x = 0
    xy − y = 1  (P4.7.9)

(j) 12xy − 6x = −1
    60x² − 180x²y − 30xy = 1  (P4.7.10)
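(As an illustration of the setup, not the book's solution: case (a) coded as a residual vector handed to fsolve(); the function-handle syntax assumes a reasonably recent MATLAB, and the TolX/MaxIter arguments of the book's newtons() are assumed from its Section 4.6 examples.)

function y = fp471(x)
%FP471 residual vector for the system (P4.7.1)
y(1) = x(1)^2 + x(2)^2 - 1; %x^2 + y^2 - 1 = 0
y(2) = x(1)^2 - x(2);       %x^2 - y = 0

%in the command window:
>>x0 = [1 0.5];                    %initial guess
>>x = fsolve(@fp471,x0)            %MATLAB built-in solver
>>x = newtons('fp471',x0,1e-5,100) %book routine (TolX, MaxIter assumed)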
4.8 Newton Method for Systems of Nonlinear Equations

Apply the routine "newtons()" (Section 4.6) and the MATLAB built-in routine "fsolve()" (with [x0 y0 z0] = [1 1 1]) to solve the following systems of equations. Fill in Table P4.8 with the results.

(a) xyz = −1
    x² + 2y² + 4z² = 7
    2x² + y³ + 6z = 7  (P4.8.1)

(b) xyz = 1
    x² + 2y³ + z² = 4
    x + 2y² − z³ = 2  (P4.8.2)

(c) x² + 4y² + 9z² = 34
    x² + 9y² − 5z = 40
    x²z − y = 7  (P4.8.3)

(d) x² + 2 sin(yπ/2) + z² = 0
    −2xy + z = 3
    e^{x+y} − z² = 0  (P4.8.4)
Table P4.7 Applying newtons()/fsolve() for Systems of Nonlinear Equations (x0 = [1 0.5] in all cases)

                        newtons()           fsolve()
(P4.7.1)    x
            ||f(x)||
            Flops       1043                1393
(P4.7.2)    x           [0.1560 0.4111]
            ||f(x)||    3.97e-15            (3.66e-15)
            Flops       2489                3028
(P4.7.3)    x
            ||f(x)||
            Flops       1476                3821
(P4.7.4)    x           [0.5024 0.1506]
            ||f(x)||    8.88e-16            (1.18e-6)
            Flops       1127                1932
(P4.7.5)    x
            ||f(x)||
            Flops       2884                3153
(P4.7.6)    x           [1.6922 -1.6408]
            ||f(x)||    1.83e-15
            Flops       9234                12896
(P4.7.7)    x
            ||f(x)||
            Flops       2125                2378
(P4.7.8)    x           [0.2321 1.5067]
            ||f(x)||    1.07                (1.07)
            Flops       6516                6492
(P4.7.9)    x
            ||f(x)||
            Flops       1521                1680
(P4.7.10)   x           [0.2236 0.1273]
            ||f(x)||    0                   (1.11e-16)
            Flops       1278                2566

(cf) The numbers of floating-point operations, and the residual (mismatching) errors shown in parentheses, were obtained with MATLAB 5.x.
Table P4.8 Applying newtons()/fsolve() for Systems of Nonlinear Equations (x0 = [1 1 1] in all cases)

                       newtons()                   fsolve()
(P4.8.1)   x           [1.0000 -1.0000 1.0000]
           ||f(x)||    1.1102e-16                  (1.1102e-16)
           Flops       8158                        12964
(P4.8.2)   x           [1 1 1]
           ||f(x)||    0
           Flops       990                         854
(P4.8.3)   x
           ||f(x)||
           Flops       6611                        4735
(P4.8.4)   x           [1.0000 -1.0000 1.0000]
           ||f(x)||    4.5506e-15                  (4.6576e-15)
           Flops       18,273                      21,935
(P4.8.5)   x
           ||f(x)||
           Flops       6811                        5525
(P4.8.6)   x           [2.0000 1.0000 3.0000]
           ||f(x)||    3.4659e-8                   (2.6130e-8)
           Flops       6191                        4884
(P4.8.7)   x           [1.0000 3.0000 2.0000]
           ||f(x)||    1.0022e-13                  (1.0437e-13)
           Flops       8055                        6102

(e) x² + y² + z² = 14
    x² + 2y² − z = 6
    x − 3y² + z² = −2  (P4.8.5)

(f) x³ − 12y + z² = 5
    3x² + y³ − 2z = 7
    x + 24y² − 2 sin(πz/18) = 25  (P4.8.6)

(g) x² + y² − 2z = 6
    x² − 2y + z³ = 3
    2xz − 3y² − z² = −27  (P4.8.7)
4.9 Newton Method for a System of Nonlinear Equations with Varying Parameter(s)

In order to find the average modulation order x_i for each user of an OFDM (orthogonal frequency division multiplexing) system that has N (= 128) subchannels to assign to the four users in an environment with noise power N_0 and bit error rate (probability of bit error) P_e, a communication system expert, Mi-hyun, formulated the problem as the following system of five nonlinear equations:

f_i(x) = (2^{x_i}(x_i ln 2 − 1) + 1)(N_0/3)·2(erfc^{-1}(P_e/2))² − λ = 0 for i = 1, 2, 3, 4  (P4.9.1)

f_5(x) = Σ_{i=1}^{4} a_i/x_i − N = 0  (P4.9.2)

where N = 128, a_i is the data rate of each user, and erfc^{-1}(x) is the inverse of the complementary error function

erfc(x) = (2/√π) ∫_x^∞ e^{−t²} dt = 1 − (2/√π) ∫_0^x e^{−t²} dt = 1 − erf(x)  (P4.9.3)

which is available as the MATLAB built-in function 'erfcinv()'. She defined the mismatching error (vector) function as below and saved it in the M-file named "fp_bits.m".

function y = fp_bits(x,a,Pe)
%x(i), i = 1:4 correspond to the modulation order of each user
%x(5) corresponds to the Lagrange multiplier (lambda)
if nargin < 3, Pe = 1e-4;
   if nargin < 2, a = [64 64 64 64]; end
end
N = 128; N0 = 1;
x14 = x(1:4);
y = (2.^x14.*(log(2)*x14 - 1) + 1)*N0/3*2*erfcinv(Pe/2).^2 - x(5);
y(5) = sum(a./x14) - N;
Compose a program that solves the above system of nonlinear equations (with N_0 = 1 and P_e = 10⁻⁴) to get the modulation order x_i of each user for five different sets of data rates

a = [32 32 32 32], [64 32 32 32], [128 32 32 32], [256 32 32 32], and [512 32 32 32]

and plots a_1/x_1 (the number of subchannels assigned to user 1) versus a_1 (the data rate of user 1). (A minimal driver sketch follows.)
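(A minimal driver sketch, not the book's solution: the initial guess, the anonymous-function syntax for passing the parameter a to fsolve(), and the default solver options are all assumptions.)

%nm_p49 sketch: solve fp_bits for several data-rate sets
a1s = [32 64 128 256 512];           %candidate data rates of user 1
x0 = ones(1,5);                      %initial guess: orders x(1:4) and lambda
for m = 1:length(a1s)
   a = [a1s(m) 32 32 32];            %data rates of the four users
   x = fsolve(@(x)fp_bits(x,a),x0);  %solve the five nonlinear equations
   nsc(m) = a(1)/x(1);               %subchannels assigned to user 1
end
plot(a1s,nsc,'o-'), xlabel('a_1'), ylabel('a_1/x_1')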
4.10 Temperature Rising from Heat Flux in a Semi-infinite Slab

Consider a semi-infinite slab whose temperature rises as a function of position x > 0 and time t > 0 as

T(x,t) = (Qx/k)(e^{−s²}/(√π·s) − erfc(s)) with s² = x²/(4at)  (P4.10.1)

where the function erfc() is defined by Eq. (P4.9.3) and

Q (heat flux) = 200 J/m²s, k (conductivity) = 0.015 J/m/s/°C, a (diffusivity) = 2.5 × 10⁻⁵ m²/s

In order to find the heat transfer speed, a heating system expert, Kyungwon, wants to solve the above equation to get the positions x(t) with a temperature rise of T = 30 °C at t = 10:10:200 s. Compose the program that does this job and plots x(t) versus t. (A sketch of the root-finding step follows.)
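(One way to attack this, as a sketch: treat T(x,t) − 30 as a function of x for each fixed t and locate its zero. The bracketing interval [1e-6, 1] m and the use of fzero() with anonymous functions are assumptions, not the book's program.)

%nm_p410 sketch: find x(t) such that T(x,t) = 30
Q = 200; k = 0.015; a = 2.5e-5; Td = 30; %constants and desired rise
Tslab = @(x,t) Q*x/k.*(exp(-x.^2/(4*a*t))./(sqrt(pi)*x/sqrt(4*a*t)) ...
               - erfc(x/sqrt(4*a*t)));   %Eq. (P4.10.1) with s = x/sqrt(4at)
ts = 10:10:200;
for m = 1:length(ts)
   t = ts(m);
   xt(m) = fzero(@(x)Tslab(x,t) - Td,[1e-6 1]); %assumed bracket: 0 < x < 1 m
end
plot(ts,xt), xlabel('t [s]'), ylabel('x(t) [m]')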
4.11 Damped Newton Method for a Set of Nonlinear Equations

Consider the routine "newtons()", which was made for solving a system of equations and introduced in Section 4.6.

(a) Run the routine with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5) and certify that it does not yield the right solution, as depicted in Fig. 4.6c.

(b) In order to keep the step size adjusted in the case where the norm of the vector function f(x_{k+1}) at iteration k + 1 is larger than that of f(x_k) at iteration k, insert (activate) the statements numbered from 1 to 6 of the routine "newtons()" (Section 4.6) by deleting the comment mark (%) at the beginning of each line, to make a modified routine "newtonds()" which implements the damped Newton method (the damping idea is sketched after part (c)). Run it with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5) and certify that it yields the right solution, as depicted in Fig. 4.6d.

(c) Run the MATLAB built-in routine "fsolve()" with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5). Does it give you the right solution?
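(The exact statements to be activated are in the book's listing of newtons(); as a rough picture of the standard damping scheme they implement, assumed rather than quoted, the Newton step is halved until the residual norm stops growing:)

function [x,fx] = dampedstep(f,x,fx,jacob)
%DAMPEDSTEP one damped Newton update - a sketch of the idea in newtonds()
dx = -(jacob\fx(:))';                  %full Newton step from the Jacobian
lambda = 1;                            %damping factor: try the full step first
for half = 1:8                         %halve the step at most 8 times
   xn = x + lambda*dx; fxn = feval(f,xn);
   if norm(fxn) < norm(fx), break; end %accept once the residual decreases
   lambda = lambda/2;                  %otherwise damp the step further
end
x = xn; fx = fxn;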

5
NUMERICAL DIFFERENTIATION/INTEGRATION

5.1 DIFFERENCE APPROXIMATION FOR FIRST DERIVATIVE

For a function f(x) of a variable x, its first derivative is defined as

f'(x) = lim_{h→0} (f(x + h) − f(x))/h  (5.1.1)

However, this gives our computers a headache, since they do not know how to take a limit. Any input number given to a computer must be a definite number that is neither too small nor too large for the computer to represent. The 'theoretically' infinitesimal number h involved in this equation is a problem.

A simple approximation that computers might be happy with is the forward difference approximation

D_{f1}(x, h) = (f(x + h) − f(x))/h (h is the step size)  (5.1.2)

How far away is this approximation from the true value of (5.1.1)? In order to do the error analysis, we take the Taylor series expansion of f(x + h) about x as

f(x + h) = f(x) + hf'(x) + (h²/2)f^(2)(x) + (h³/3!)f^(3)(x) + ···  (5.1.3)
Subtracting f(x) from both sides and dividing both sides by the step size h yields

D_{f1}(x, h) = (f(x + h) − f(x))/h = f'(x) + (h/2)f^(2)(x) + (h²/3!)f^(3)(x) + ···
             = f'(x) + O(h)  (5.1.4)

where O(g(h)), called 'big Oh of g(h)', denotes a truncation error term proportional to g(h) for |h| ≪ 1. This means that the error of the forward difference approximation (5.1.2) of the first derivative is proportional to the step size h, or, equivalently, of order h.
Now, in order to derive another approximation formula for the first derivative having a smaller error, let's remove the first-order term with respect to h from Eq. (5.1.4) by substituting 2h for h in the equation

D_{f1}(x, 2h) = (f(x + 2h) − f(x))/(2h) = f'(x) + (2h/2)f^(2)(x) + (4h²/3!)f^(3)(x) + ···

and subtracting this result from two times the equation. Then we get

2D_{f1}(x, h) − D_{f1}(x, 2h) = 2(f(x + h) − f(x))/h − (f(x + 2h) − f(x))/(2h)
                              = f'(x) − (2h²/3!)f^(3)(x) + ···

D_{f2}(x, h) = (2D_{f1}(x, h) − D_{f1}(x, 2h))/(2 − 1) = (−f(x + 2h) + 4f(x + h) − 3f(x))/(2h)
             = f'(x) + O(h²)  (5.1.5)

which can be regarded as an improvement over Eq. (5.1.4), since it has a truncation error of O(h²) for |h| ≪ 1.
How about the backward difference approximation?

D_{b1}(x, h) = (f(x) − f(x − h))/h ≡ D_{f1}(x, −h) (h is the step size)  (5.1.6)

This also has an error of O(h) and can be processed to yield an improved version having a truncation error of O(h²):

D_{b2}(x, h) = (2D_{b1}(x, h) − D_{b1}(x, 2h))/(2 − 1) = (3f(x) − 4f(x − h) + f(x − 2h))/(2h)
             = f'(x) + O(h²)  (5.1.7)
In order to derive another approximation formula for the first derivative, we take the Taylor series expansions of f(x + h) and f(x − h) up to the fifth order,

f(x + h) = f(x) + hf'(x) + (h²/2)f^(2)(x) + (h³/3!)f^(3)(x) + (h⁴/4!)f^(4)(x) + (h⁵/5!)f^(5)(x) + ···
f(x − h) = f(x) − hf'(x) + (h²/2)f^(2)(x) − (h³/3!)f^(3)(x) + (h⁴/4!)f^(4)(x) − (h⁵/5!)f^(5)(x) + ···

and divide the difference between these two equations by 2h to get the central difference approximation for the first derivative:

D_{c2}(x, h) = (f(x + h) − f(x − h))/(2h) = f'(x) + (h²/3!)f^(3)(x) + (h⁴/5!)f^(5)(x) + ···
             = f'(x) + O(h²)  (5.1.8)

which has an error of O(h²), similarly to Eqs. (5.1.5) and (5.1.7). This can also be processed to yield an improved version having a truncation error of O(h⁴):

2²D_{c2}(x, h) − D_{c2}(x, 2h) = 4(f(x + h) − f(x − h))/(2h) − (f(x + 2h) − f(x − 2h))/(2·2h)
                               = 3f'(x) − (12h⁴/5!)f^(5)(x) − ···

D_{c4}(x, h) = (2²D_{c2}(x, h) − D_{c2}(x, 2h))/(2² − 1)
             = (8f(x + h) − 8f(x − h) − f(x + 2h) + f(x − 2h))/(12h)
             = f'(x) + O(h⁴)  (5.1.9)
Furthermore, this procedure can be formularized into a general formula, called 'Richardson's extrapolation', for improving the difference approximations of the derivatives as follows:

<Richardson's extrapolation>

D_{f,n+1}(x, h) = (2^n D_{f,n}(x, h) − D_{f,n}(x, 2h))/(2^n − 1) (n: the order of error)  (5.1.10a)
D_{b,n+1}(x, h) = (2^n D_{b,n}(x, h) − D_{b,n}(x, 2h))/(2^n − 1)  (5.1.10b)
D_{c,2(n+1)}(x, h) = (2^{2n} D_{c,2n}(x, h) − D_{c,2n}(x, 2h))/(2^{2n} − 1)  (5.1.10c)
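(As a quick numerical check of these formulas, a sketch with an arbitrary test function and step size, not taken from the book:)

%compare the difference approximations of Section 5.1 at x = pi/4
x = pi/4; h = 0.1;                  %test point and an arbitrary step size
Df1 = (sin(x + h) - sin(x))/h;                          %Eq. (5.1.4), O(h)
Df2 = (-sin(x + 2*h) + 4*sin(x + h) - 3*sin(x))/(2*h);  %Eq. (5.1.5), O(h^2)
Dc2 = (sin(x + h) - sin(x - h))/(2*h);                  %Eq. (5.1.8), O(h^2)
Dc4 = (8*sin(x + h) - 8*sin(x - h) ...
       - sin(x + 2*h) + sin(x - 2*h))/(12*h);           %Eq. (5.1.9), O(h^4)
errs = [Df1 Df2 Dc2 Dc4] - cos(x) %errors against the true derivative cos(pi/4)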
5.2 APPROXIMATION ERROR OF FIRST DERIVATIVE

In the previous section, we derived some difference approximation formulas for the first derivative. Since their errors are proportional to some power of the step size h, it seems that the errors continue to decrease as h gets smaller. However, this is only half of the story, since we considered only the truncation error caused by truncating the high-order terms in the Taylor series expansion and did not take account of the round-off error caused by quantization.
In this section, we will discuss the round-off error as well as the truncation error so as to gain a better understanding of how the computer really works. For this purpose, suppose that the function values

f(x + 2h), f(x + h), f(x), f(x − h), f(x − 2h)

are quantized (rounded off) to

y_2 = f(x + 2h) + e_2,  y_1 = f(x + h) + e_1
y_0 = f(x) + e_0  (5.2.1)
y_{-1} = f(x − h) + e_{-1},  y_{-2} = f(x − 2h) + e_{-2}

where the magnitudes of the round-off (quantization) errors e_2, e_1, e_0, e_{-1}, and e_{-2} are all smaller than some positive number ε, that is, |e_i| ≤ ε. Then the total error of the forward difference approximation (5.1.4) can be derived as

D_{f1}(x, h) = (y_1 − y_0)/h = (f(x + h) + e_1 − f(x) − e_0)/h = f'(x) + (e_1 − e_0)/h + (K_1/2)h

|D_{f1}(x, h) − f'(x)| ≤ |e_1 − e_0|/h + (|K_1|/2)h ≤ 2ε/h + (|K_1|/2)h with K_1 = f^(2)(x)

Look at the right-hand side of this inequality, that is, the upper bound of the error. It consists of two parts; the first one is due to the round-off error and is in inverse proportion to the step size h, while the second one is due to the truncation error and is in direct proportion to h. Therefore, the upper bound of the total error can be minimized with respect to the step size h to give the optimum step size h_o:

d/dh (2ε/h + (|K_1|/2)h) = −2ε/h² + |K_1|/2 = 0,  h_o = 2√(ε/|K_1|)  (5.2.2)
The total error of the central difference approximation (5.1.8) can also be derived as follows:

D_{c2}(x, h) = (y_1 − y_{-1})/(2h) = (f(x + h) + e_1 − f(x − h) − e_{-1})/(2h)
             = f'(x) + (e_1 − e_{-1})/(2h) + (K_2/6)h²

|D_{c2}(x, h) − f'(x)| ≤ |e_1 − e_{-1}|/(2h) + (|K_2|/6)h² ≤ 2ε/(2h) + (|K_2|/6)h² with K_2 = f^(3)(x)

The right-hand side of this inequality is minimized to yield the optimum step size h_o:

d/dh (ε/h + (|K_2|/6)h²) = −ε/h² + (|K_2|/3)h = 0,  h_o = (3ε/|K_2|)^{1/3}  (5.2.3)
Similarly, we can derive the total error of the central difference approximation (5.1.9) as

|D_{c4}(x, h) − f'(x)| ≤ |8e_1 − 8e_{-1} − e_2 + e_{-2}|/(12h) + (|K_4|/30)h⁴ ≤ 18ε/(12h) + (|K_4|/30)h⁴
with K_4 = f^(5)(x)

and find the optimum step size h_o as

d/dh (3ε/(2h) + (|K_4|/30)h⁴) = −3ε/(2h²) + (2|K_4|/15)h³ = 0,  h_o = (45ε/(4|K_4|))^{1/5}  (5.2.4)

From what we have seen so far, we can tell that, as we make the step size h smaller, the round-off error may increase while the truncation error decreases. This is called the 'step-size dilemma'. Therefore, there must be some optimal step size h_o for each difference approximation formula, as derived analytically in Eqs. (5.2.2), (5.2.3), and (5.2.4). However, these equations are only of theoretical value and cannot be used practically to determine h_o, because we usually don't have any information about the high-order derivatives and, consequently, cannot estimate K_1, K_2, .... Besides, noting that h_o minimizes not the real error but its upper bound, we can never expect the true optimal step size to be uniform for all x, even with the same approximation formula.

Now, we can verify the step-size dilemma and the existence of some optimal step size h_o by computing the numerical derivative of a function, say f(x) = sin x, whose analytical derivatives are well known. To see how the errors of the difference approximation formulas (5.1.4) and (5.1.8) depend on the step size h, we computed their values at x = π/4 together with their errors, as summarized in Tables 5.1 and 5.2. From these results, it appears that the errors of (5.1.4) and (5.1.8) are minimized with h ≈ 10⁻⁸ and h ≈ 10⁻⁵, respectively. This may be justified by the following facts:

• Noting that the number of significant bits is 52, which is the number of mantissa bits (Section 1.2.1), or, equivalently, the number of significant digits is about 52 × 3/10 ≈ 16 (since 2¹⁰ ≈ 10³), and the value of f(x) = sin x is less than or equal to one, the round-off error is roughly

ε ≈ 10⁻¹⁶/2
Table 5.1 The Forward Difference Approximation (5.1.4) for the First Derivative of f(x) = sin x and Its Error from the True Value (cos π/4 = 0.7071067812) Depending on the Step Size h

h_k = 10^{-k}          D_{1k}|x=π/4    D_{1k} − D_{1(k-1)}   D_{1k}|x=π/4 − cos(π/4)
h1  = 0.1000000000     0.6706029729                          −0.03650380828
h2  = 0.0100000000     0.7035594917    0.0329565188          −0.00354728950
h3  = 0.0010000000     0.7067531100    0.0031936183          −0.00035367121
h4  = 0.0001000000     0.7070714247    0.0003183147          −0.00003535652
h5  = 0.0000100000     0.7071032456    0.0000318210          −0.00000353554
h6  = 0.0000010000     0.7071064277    0.0000031821          −0.00000035344
h7  = 0.0000001000     0.7071067454    0.0000003176          −0.00000003581
h8  = 0.0000000100     0.7071067842    0.0000000389           0.00000000305
h9  = 0.0000000010     0.7071068175    0.0000000333           0.00000003636
h10 = 0.0000000001     0.7071077057    0.0000008882           0.00000092454

h_o = 0.0000000168 (the optimal value of h obtained from Eq. (5.2.2))
Table 5.2 The Central Difference Approximation (5.1.8) for the First Derivative of f(x) = sin x and Its Error from the True Value (cos π/4 = 0.7071067812) Depending on the Step Size h

h_k = 10^{-k}          D_{2k}|x=π/4    D_{2k} − D_{2(k-1)}   D_{2k}|x=π/4 − cos(π/4)
h1  = 0.1000000000     0.7059288590                          −0.00117792219
h2  = 0.0100000000     0.7070949961    0.0011661371          −0.00001178505
h3  = 0.0010000000     0.7071066633    0.0000116672          −0.00000011785
h4  = 0.0001000000     0.7071067800    0.0000001167          −0.00000000118
h5  = 0.0000100000     0.7071067812    0.0000000012          −0.00000000001
h6  = 0.0000010000     0.7071067812    0.0000000001           0.00000000005
h7  = 0.0000001000     0.7071067804    −0.0000000009         −0.00000000084
h8  = 0.0000000100     0.7071067842    0.0000000039           0.00000000305
h9  = 0.0000000010     0.7071067620    −0.0000000222         −0.00000001915
h10 = 0.0000000001     0.7071071506    0.0000003886           0.00000036942

h_o = 0.0000059640 (the optimal value of h obtained from Eq. (5.2.3))
• Accordingly, Eqs. (5.2.2) and (5.2.3) give the theoretical optimal values of the step size h as

h_o = 2√(ε/|K_1|) = 2√(ε/|f^(2)(π/4)|) = 2√((10⁻¹⁶/2)/|−sin(π/4)|) = 1.68 × 10⁻⁸
h_o = (3ε/|K_2|)^{1/3} = (3ε/|f^(3)(π/4)|)^{1/3} = ((3 × 10⁻¹⁶/2)/|−cos(π/4)|)^{1/3} = 0.5964 × 10⁻⁵
[Figure: log-log plots of the error bounds versus step size h. (a) Error bound 2ε/h + (|K_1|/2)h of Eq. (5.1.4) vs. h, minimized at the optimal value h_o; (b) error bound 2ε/(2h) + (|K_2|/6)h² of Eq. (5.1.8) vs. h, minimized at its own optimal value h_o.]

Figure 5.1 Forward/central difference approximation error of first derivative versus step size h.
Figures 5.1a and 5.1b show how the error bounds of the difference approximations (5.1.4) and (5.1.8) for the first derivative vary with the step size h, implying that there is some optimal value of the step size h with which the error bound of the numerical derivative is minimized. It seems that we might be able to get the optimal step size h_o by using this kind of graph or directly using Eq. (5.2.2), (5.2.3), or (5.2.4). But, as mentioned before, this is not possible as long as the high-order derivatives are unknown (as is usually the case). Very fortunately, Tables 5.1 and 5.2 suggest that we might be able to guess a good value of h by watching how small |D_{ik} − D_{i(k-1)}| is for a given problem. On the other hand, Figs. 5.2a and 5.2b show the tangential lines based on the forward/central difference approximations (5.1.4)/(5.1.8) of the first derivative at x = π/4 with three values of the step size h. They imply that there is some optimal step size h_o, and the numerical approximation error becomes larger if we make the step size h larger or smaller than that value.
[Figure: the curve f(x) = sin x for 0 ≤ x ≤ 2 with tangential lines at x = π/4. (a) Forward difference approximation by Eq. (5.1.4) with h = 0.5, 10⁻⁸, and 10⁻¹⁶; (b) central difference approximation by Eq. (5.1.8) with h = 1, 10⁻⁵, and 10⁻¹⁶.]

Figure 5.2 Forward/central difference approximation of first derivative of f(x) = sin x.
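(The step-size dilemma is easy to reproduce; the following sketch, not the book's program, scans h = 10⁻¹, ..., 10⁻¹⁰ and shows both errors shrinking and then growing again, as in Tables 5.1 and 5.2.)

%error of (5.1.4) and (5.1.8) vs. step size h for f(x) = sin x at x = pi/4
x = pi/4; df_true = cos(x);
for k = 1:10
   h = 10^(-k);
   D1(k) = (sin(x + h) - sin(x))/h;         %forward difference (5.1.4)
   D2(k) = (sin(x + h) - sin(x - h))/(2*h); %central difference (5.1.8)
end
err = [D1' - df_true, D2' - df_true] %columns: forward error, central error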
5.3 DIFFERENCE APPROXIMATION FOR SECOND AND HIGHER DERIVATIVE

In order to obtain an approximation formula for the second derivative, we take the Taylor series expansions of f(x + h) and f(x − h) up to the fifth order,

f(x + h) = f(x) + hf'(x) + (h²/2)f^(2)(x) + (h³/3!)f^(3)(x) + (h⁴/4!)f^(4)(x) + (h⁵/5!)f^(5)(x) + ···
f(x − h) = f(x) − hf'(x) + (h²/2)f^(2)(x) − (h³/3!)f^(3)(x) + (h⁴/4!)f^(4)(x) − (h⁵/5!)f^(5)(x) + ···

Adding these two equations (to remove the f'(x) terms) and then subtracting 2f(x) from both sides and dividing both sides by h² yields the central difference approximation for the second derivative:

D^(2)_{c2}(x, h) = (f(x + h) − 2f(x) + f(x − h))/h²
                 = f^(2)(x) + (h²/12)f^(4)(x) + (2h⁴/6!)f^(6)(x) + ···  (5.3.1)

which has a truncation error of O(h²).
Richardson's extrapolation can be used to manipulate this equation and remove the h² term, which yields an improved version:

(2²D^(2)_{c2}(x, h) − D^(2)_{c2}(x, 2h))/(2² − 1)
  = (−f(x + 2h) + 16f(x + h) − 30f(x) + 16f(x − h) − f(x − 2h))/(12h²)
  = f^(2)(x) − (h⁴/90)f^(6)(x) + ···

D^(2)_{c4}(x, h) = (−f(x + 2h) + 16f(x + h) − 30f(x) + 16f(x − h) − f(x − 2h))/(12h²)
                 = f^(2)(x) + O(h⁴)  (5.3.2)

which has a truncation error of O(h⁴).
The difference approximation formulas for the first and second derivatives derived so far are summarized in Table 5.3, where the following notations are used:

D^(N)_{fi}/D^(N)_{bi}/D^(N)_{ci} is the forward/backward/central difference approximation for the Nth derivative having an error of O(h^i) (h is the step size)
f_k = f(x + kh)
Now we turn our attention to the higher-order derivatives. But, instead of deriving specific formulas, let's make an algorithm to generate whatever difference approximation formula we want. For instance, if we want to get the approximation formula of the second derivative based on the function values f_2, f_1, f_0, f_{-1}, and f_{-2}, we write

D^(2)_{c4}(x, h) = (c_2 f_2 + c_1 f_1 + c_0 f_0 + c_{-1} f_{-1} + c_{-2} f_{-2})/h²  (5.3.3)

and take the Taylor series expansions of f_2, f_1, f_{-1}, and f_{-2} (excluding f_0) on the right-hand side of this equation to rewrite it as

D^(2)_{c4}(x, h)
= (1/h²){ c_2(f_0 + 2hf'_0 + ((2h)²/2)f^(2)_0 + ((2h)³/3!)f^(3)_0 + ((2h)⁴/4!)f^(4)_0 + ···)
        + c_1(f_0 + hf'_0 + (h²/2)f^(2)_0 + (h³/3!)f^(3)_0 + (h⁴/4!)f^(4)_0 + ···) + c_0 f_0
        + c_{-1}(f_0 − hf'_0 + (h²/2)f^(2)_0 − (h³/3!)f^(3)_0 + (h⁴/4!)f^(4)_0 − ···)
        + c_{-2}(f_0 − 2hf'_0 + ((2h)²/2)f^(2)_0 − ((2h)³/3!)f^(3)_0 + ((2h)⁴/4!)f^(4)_0 − ···) }
= (1/h²){ (c_2 + c_1 + c_0 + c_{-1} + c_{-2})f_0 + h(2c_2 + c_1 − c_{-1} − 2c_{-2})f'_0
        + h²((2²/2!)c_2 + (1/2!)c_1 + (1/2!)c_{-1} + (2²/2!)c_{-2})f^(2)_0
        + h³((2³/3!)c_2 + (1/3!)c_1 − (1/3!)c_{-1} − (2³/3!)c_{-2})f^(3)_0
        + h⁴((2⁴/4!)c_2 + (1/4!)c_1 + (1/4!)c_{-1} + (2⁴/4!)c_{-2})f^(4)_0 + ··· }  (5.3.4)
We should solve the following set of equations to determine the coefficients c_2, c_1, c_0, c_{-1}, and c_{-2} so as to make the expression conform to the second derivative f^(2)_0 at x + 0·h = x:

[ 1      1     1   1      1      ] [ c_2  ]   [ 0 ]
[ 2      1     0   −1     −2     ] [ c_1  ]   [ 0 ]
[ 2²/2!  1/2!  0   1/2!   2²/2!  ] [ c_0  ] = [ 1 ]   (5.3.5)
[ 2³/3!  1/3!  0   −1/3!  −2³/3! ] [ c_-1 ]   [ 0 ]
[ 2⁴/4!  1/4!  0   1/4!   2⁴/4!  ] [ c_-2 ]   [ 0 ]

Table 5.3 The Difference Approximation Formulas for the First and Second Derivatives

O(h) forward difference approximation for the first derivative:
  D_{f1}(x, h) = (f_1 − f_0)/h  (5.1.4)

O(h²) forward difference approximation for the first derivative:
  D_{f2}(x, h) = (2D_{f1}(x, h) − D_{f1}(x, 2h))/(2 − 1) = (−f_2 + 4f_1 − 3f_0)/(2h)  (5.1.5)

O(h) backward difference approximation for the first derivative:
  D_{b1}(x, h) = (f_0 − f_{-1})/h  (5.1.6)

O(h²) backward difference approximation for the first derivative:
  D_{b2}(x, h) = (2D_{b1}(x, h) − D_{b1}(x, 2h))/(2 − 1) = (3f_0 − 4f_{-1} + f_{-2})/(2h)  (5.1.7)

O(h²) central difference approximation for the first derivative:
  D_{c2}(x, h) = (f_1 − f_{-1})/(2h)  (5.1.8)

O(h⁴) central difference approximation for the first derivative:
  D_{c4}(x, h) = (2²D_{c2}(x, h) − D_{c2}(x, 2h))/(2² − 1) = (−f_2 + 8f_1 − 8f_{-1} + f_{-2})/(12h)  (5.1.9)

O(h²) central difference approximation for the second derivative:
  D^(2)_{c2}(x, h) = (f_1 − 2f_0 + f_{-1})/h²  (5.3.1)

O(h⁴) central difference approximation for the second derivative:
  D^(2)_{c4}(x, h) = (2²D^(2)_{c2}(x, h) − D^(2)_{c2}(x, 2h))/(2² − 1)
                   = (−f_2 + 16f_1 − 30f_0 + 16f_{-1} − f_{-2})/(12h²)  (5.3.2)

O(h²) central difference approximation for the fourth derivative:
  D^(4)_{c2}(x, h) = (f_{-2} − 4f_{-1} + 6f_0 − 4f_1 + f_2)/h⁴ (from difapx(4,[-2 2]))  (5.3.6)
function [c,err,eoh,A,b] = difapx(N,points)
%difapx.m to get the difference approximation for the Nth derivative
l = max(points);
L = abs(points(1) - points(2)) + 1;
if L < N + 1, error('More points are needed!'); end
for n = 1:L
   A(1,n) = 1;
   for m = 2:L + 2, A(m,n) = A(m - 1,n)*l/(m - 1); end %Eq.(5.3.5)
   l = l - 1;
end
b = zeros(L,1); b(N + 1) = 1;
c = (A(1:L,:)\b)'; %coefficients of difference approximation formula
err = A(L + 1,:)*c'; eoh = L - N; %coefficient & order of error term
if abs(err) < eps, err = A(L + 2,:)*c'; eoh = L - N + 1; end
if points(1) < points(2), c = fliplr(c); end
The procedure of setting up this equation and solving it is cast into the MATLAB routine "difapx()", which can be used to generate the coefficients of, say, the approximation formulas (5.1.7), (5.1.9), and (5.3.2), just for practice/verification/fun, whatever your purpose is.

>>format rat %to make all numbers represented in rational form
>>difapx(1,[0 -2]) %1st derivative based on {f0, f-1, f-2}
   ans = 3/2 -2 1/2 %Eq.(5.1.7)
>>difapx(1,[-2 2]) %1st derivative based on {f-2, f-1, f0, f1, f2}
   ans = 1/12 -2/3 0 2/3 -1/12 %Eq.(5.1.9)
>>difapx(2,[2 -2]) %2nd derivative based on {f2, f1, f0, f-1, f-2}
   ans = -1/12 4/3 -5/2 4/3 -1/12 %Eq.(5.3.2)
Example 5.1. Numerical/Symbolic Differentiation for Taylor Series Expansion.

Consider how to use MATLAB to get the Taylor series expansion of a function, say, e^{-x} about x = 0, which we already know is

e^{-x} = 1 − x + (1/2)x² − (1/3!)x³ + (1/4!)x⁴ − (1/5!)x⁵ + ···  (E5.1.1)

As a numerical method, we can use the MATLAB routine "difapx()". On the other hand, we can also use the MATLAB command "taylor()", which is a symbolic approach. Readers may put 'help taylor' into the MATLAB command window to see its usage, which is restated below.

• taylor(f) gives the fifth-order Maclaurin series expansion of f.
• taylor(f,n + 1) with an integer n > 0 gives the nth-order Maclaurin series expansion of f.
• taylor(f,a) with a real number a gives the fifth-order Taylor series expansion of f about a.
• taylor(f,n + 1,a) gives the nth-order Taylor series expansion of f about the default variable = a.
• taylor(f,n + 1,a,y) gives the nth-order Taylor series expansion of f(y) about y = a.

(cf) The target function f must be a legitimate expression given directly as the first input argument.
(cf) Before using the command "taylor()", one should declare the arguments of the function as symbols by putting a statement like "syms x t".
(cf) In the case where the function has several arguments, it is good practice to put the independent variable as the last input argument of "taylor()", though taylor() takes the one closest (alphabetically) to 'x' as the independent variable by default, as long as it has been declared as a symbolic variable and is contained as an input argument of the function f.
(cf) One should use the MATLAB command "sym2poly()" if one wants to extract the coefficients from the Taylor series expansion obtained as a symbolic expression.
The following MATLAB program "nm5e01" finds the coefficients of the fifth-order Taylor series expansion of e^{-x} about x = 0 by using the two methods.

%nm5e01: Nth-order Taylor series expansion for e^-x about xo in Ex 5.1
f = inline('exp(-x)','x');
N = 5; xo = 0;
%Numerical computation method
T(1) = feval(f,xo);
h = 0.005; %0.01 or 0.001 makes it worse
tmp = 1;
for i = 1:N
   tmp = tmp*i*h; %i!(factorial i)*h^i
   c = difapx(i,[-i i]); %coefficients of numerical derivative
   dix = c*feval(f,xo + [-i:i]*h)'; %numerical derivative (times h^i)
   T(i + 1) = dix/tmp; %Taylor series coefficient
end
format rat, Tn = fliplr(T) %descending order
%Symbolic computation method
syms x; Ts = sym2poly(taylor(exp(-x),N + 1,xo))
%discrepancy
format short, discrepancy = norm(Tn - Ts)
5.4 INTERPOLATING POLYNOMIAL AND NUMERICAL DIFFERENTIAL

The difference approximation formulas derived in the previous sections are applicable only when the target function f(x) to differentiate is somehow given. In this section, we think about how to get the numerical derivatives when we are given only a data file containing several data points. A possible measure is to construct an interpolating function by one of the methods explained in Chapter 3 and take the derivative of the interpolating function.

For simplicity, let's reconsider the problem of finding the derivative of f(x) = sin x at x = π/4, where the function is given as one of the following data point sets:

{(kπ/8, sin kπ/8): k = 1, 2, 3}
{(kπ/8, sin kπ/8): k = 0, 1, 2, 3, 4}
{(kπ/16, sin kπ/16): k = 2, 3, 4, 5, 6}
We make the MATLAB program "nm540", which uses the routine "lagranp()" to find the interpolating polynomial, uses the routine "polyder()" to differentiate the polynomial, and computes the error of the resulting derivative from the true value. Let's run it with x defined appropriately according to the given set of data points and see the results.

>>nm540
 dfx( 0.78540) = 0.689072 (error: -0.018035) %with x = [1:3]*pi/8
 dfx( 0.78540) = 0.706556 (error: -0.000550) %with x = [0:4]*pi/8
 dfx( 0.78540) = 0.707072 (error: -0.000035) %with x = [2:6]*pi/16

This illustrates that if we have more points that are distributed closer to the target point, we may get a better result.

%nm540
% to interpolate by Lagrange polynomial and get the derivative
clear, clf
x0 = pi/4;
df0 = cos(x0); % True value of derivative of sin(x) at x0 = pi/4
for m = 1:3
   if m == 1, x = [1:3]*pi/8;
   elseif m == 2, x = [0:4]*pi/8;
   else x = [2:6]*pi/16;
   end
   y = sin(x);
   px = lagranp(x,y); % Lagrange polynomial interpolating (x,y)
   dpx = polyder(px); % derivative of polynomial px
   dfx = polyval(dpx,x0);
   fprintf(' dfx(%6.4f) = %10.6f (error: %10.6f)\n', x0,dfx,dfx - df0);
end
One more thing to mention before closing this section is that we have the MATLAB built-in routine "diff()", which finds the difference vector for a given vector. When the data points {(x_k, f(x_k)), k = 1, 2, ...} are given as an ASCII data file named "xy.dat", we can use the routine "diff()" to get the divided difference, which is similar to the derivative of a continuous function.

>>load xy.dat %input the contents of 'xy.dat' as a matrix named xy
>>dydx = diff(xy(:,2))./diff(xy(:,1)); dydx' %divided difference
   dydx = 2.0000 0.50000 2.0000

k   x_k       f(x_k)    x_{k+1} − x_k   f(x_{k+1}) − f(x_k)   D_k = (f(x_{k+1}) − f(x_k))/(x_{k+1} − x_k)
    xy(:,1)   xy(:,2)   diff(xy(:,1))   diff(xy(:,2))
1   −1        2         1               2                     2
2   0         4         2               1                     1/2
3   2         5         −1              −2                    2
4   1         3
5.5 NUMERICAL INTEGRATION AND QUADRATURE

The general form of numerical integration of a function f(x) over some interval [a, b] is a weighted sum of the function values at a finite number (N + 1) of sample points (nodes), referred to as 'quadrature':

∫_a^b f(x) dx ≅ Σ_{k=0}^{N} w_k f(x_k) with a = x_0 < x_1 < ··· < x_N = b  (5.5.1)

Here, the sample points are equally spaced for the midpoint rule, the trapezoidal rule, and Simpson's rule, while they are chosen to be zeros of certain polynomials for Gaussian quadrature.

Figure 5.3 shows the integrations over two segments by the midpoint rule, the trapezoidal rule, and Simpson's rule, which are referred to as Newton–Cotes formulas for being based on approximating polynomials, and are implemented by the following formulas:

midpoint rule:    ∫_{x_k}^{x_{k+1}} f(x) dx ≅ h·f_{mk}  (5.5.2)
                  with h = x_{k+1} − x_k, f_{mk} = f(x_{mk}), x_{mk} = (x_k + x_{k+1})/2

trapezoidal rule: ∫_{x_k}^{x_{k+1}} f(x) dx ≅ (h/2)(f_k + f_{k+1})  (5.5.3)
                  with h = x_{k+1} − x_k, f_k = f(x_k)

Simpson's rule:   ∫_{x_{k-1}}^{x_{k+1}} f(x) dx ≅ (h/3)(f_{k-1} + 4f_k + f_{k+1})  (5.5.4)
                  with h = (x_{k+1} − x_{k-1})/2
[Figure: integration over two segments [x_{k-1}, x_k] and [x_k, x_{k+1}], each of width h, by (a) the midpoint rule, (b) the trapezoidal rule, and (c) Simpson's rule.]

Figure 5.3 Various methods of numerical integration.
These three integration rules are based on approximating the target function (integrand) by a zeroth-, first-, and second-degree polynomial, respectively. Since the first two integrations are obvious, we are going to derive just Simpson's rule (5.5.4). For simplicity, we shift the graph of f(x) by −x_k along the x axis, or, equivalently, make the variable substitution t = x − x_k, so that the abscissas of the three points on the curve of f(x) change from x = {x_k − h, x_k, x_k + h} to t = {−h, 0, +h}. Then, in order to find the coefficients of the second-degree polynomial

p_2(t) = c_1 t² + c_2 t + c_3  (5.5.5)

matching the points (−h, f_{k-1}), (0, f_k), (+h, f_{k+1}), we should solve the following set of equations:

p_2(−h) = c_1(−h)² + c_2(−h) + c_3 = f_{k-1}
p_2(0) = c_1·0² + c_2·0 + c_3 = f_k
p_2(+h) = c_1(+h)² + c_2(+h) + c_3 = f_{k+1}

to determine the coefficients c_1, c_2, and c_3 as

c_3 = f_k,  c_2 = (f_{k+1} − f_{k-1})/(2h),  c_1 = (1/h²)((f_{k+1} + f_{k-1})/2 − f_k)

Integrating the second-degree polynomial (5.5.5) with these coefficients from t = −h to t = h yields

∫_{-h}^{h} p_2(t) dt = (2/3)c_1 h³ + 2c_3 h = (h/3)(f_{k-1} + 4f_k + f_{k+1})

which is Simpson's rule (5.5.4).
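(To make the three rules concrete, here is an illustrative sketch, not one of the book's routines; it assumes an even number N of equal segments so that Simpson's rule applies.)

%composite midpoint/trapezoidal/Simpson rules for sin(x) over [0, pi]
a = 0; b = pi; N = 100;          %N equal segments (N even for Simpson)
h = (b - a)/N; x = a + [0:N]*h;  %N + 1 equally spaced nodes
fx = sin(x);                     %integrand sampled at the nodes
Im = h*sum(sin(x(1:N) + h/2));             %midpoint rule (5.5.2)
It = h/2*sum(fx(1:N) + fx(2:N + 1));       %trapezoidal rule (5.5.3)
Is = h/3*(fx(1) + fx(N + 1) ...
     + 4*sum(fx(2:2:N)) + 2*sum(fx(3:2:N - 1))); %Simpson's rule (5.5.4)
[Im It Is] %all three should be close to the true value of the integral, 2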