A Textbook of Computer Based Numerical and Statistical Techniques, part 29

266
COMPUTER BASED NUMERICAL AND STATISTICAL TECHNIQUES
Hermite's interpolation formula is

    H(x) = Σ_{i=0}^{1} [1 - 2(x - x_i) L_i'(x_i)] [L_i(x)]^2 y_i + Σ_{i=0}^{1} (x - x_i) [L_i(x)]^2 y_i'

         = [1 - 2(x - x_0) L_0'(x_0)] [L_0(x)]^2 y_0 + [1 - 2(x - x_1) L_1'(x_1)] [L_1(x)]^2 y_1
           + (x - x_0) [L_0(x)]^2 y_0' + (x - x_1) [L_1(x)]^2 y_1'                    ...(1)
Now, with the nodes x_0 = a and x_1 = b,

    L_0(x) = (x - x_1)/(x_0 - x_1) = (x - b)/(a - b),    L_1(x) = (x - x_0)/(x_1 - x_0) = (x - a)/(b - a)

so that

    L_0'(x) = 1/(a - b)    and    L_1'(x) = 1/(b - a).

Hence,

    L_0'(x_0) = 1/(a - b)    and    L_1'(x_1) = 1/(b - a).
Therefore, from equation (1),

    H(x) = [1 - 2(x - a)/(a - b)] [(x - b)/(a - b)]^2 f(a) + [1 - 2(x - b)/(b - a)] [(x - a)/(b - a)]^2 f(b)
           + (x - a) [(x - b)/(a - b)]^2 f'(a) + (x - b) [(x - a)/(b - a)]^2 f'(b).

At the midpoint x = (a + b)/2 we have x - a = (b - a)/2 and x - b = (a - b)/2, so each squared basis factor [L_i(x)]^2 equals 1/4 and each bracket [1 - 2(x - x_i) L_i'(x_i)] equals 2. Hence

    H((a + b)/2) = 2 (1/4) f(a) + 2 (1/4) f(b) + [(b - a)/2] (1/4) f'(a) + [(a - b)/2] (1/4) f'(b)

                 = [f(a) + f(b)]/2 + (b - a)[f'(a) - f'(b)]/8.    Hence proved.
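The midpoint identity just proved can be checked numerically. The sketch below (plain Python; the choice f = e^x on [0, 1] is only an illustration) builds the two-point Hermite interpolant from the same basis polynomials and compares its midpoint value with the right-hand side of the identity.

```python
import math

def hermite_midpoint(f, df, a, b):
    """Two-point Hermite (osculating) interpolant evaluated at x = (a + b)/2.

    Uses H(x) = sum [1 - 2(x - x_i) L_i'(x_i)] [L_i(x)]^2 y_i
              + sum (x - x_i) [L_i(x)]^2 y_i'   with nodes x_0 = a, x_1 = b.
    """
    x = (a + b) / 2.0
    L0, L1 = (x - b) / (a - b), (x - a) / (b - a)   # Lagrange basis at x
    dL0, dL1 = 1.0 / (a - b), 1.0 / (b - a)         # their (constant) derivatives
    return ((1 - 2 * (x - a) * dL0) * L0**2 * f(a)
            + (1 - 2 * (x - b) * dL1) * L1**2 * f(b)
            + (x - a) * L0**2 * df(a)
            + (x - b) * L1**2 * df(b))

a, b = 0.0, 1.0
f, df = math.exp, math.exp
lhs = hermite_midpoint(f, df, a, b)
rhs = (f(a) + f(b)) / 2 + (b - a) * (df(a) - df(b)) / 8
print(lhs, rhs)  # both equal 0.625 + 0.375e for f = e^x on [0, 1]
```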
PROBLEM SET 5.4
1. Apply Hermite's formula to find a polynomial which meets the following specifications:

       x_k :  0   1   2
       y_k :  0   1   0
       y_k':  0   0   0
                                                        [Ans. x^4 - 4x^3 + 4x^2]
INTERPOLATION WITH UNEQUAL INTERVAL

2. Apply Hermite's interpolation to find f(1.05), given:

       x      f         f'
       1.0    1.0       0.5
       1.1    1.04881   0.47673
                                                        [Ans. 1.02470]
3. Apply Hermite's interpolation to find log 2.05, given that:

       x      log x     1/x
       2.0    0.69315   0.5
       2.1    0.74194   0.47619
                                                        [Ans. 0.71784]
4. Determine the Hermite polynomial of degree 5 which fits the following data and hence find an approximate value of log_e 2.7:

       x      y = log_e x    y' = 1/x
       2.0    0.69315        0.5
       2.5    0.91629        0.4
       3.0    1.09861        0.33333
                                                        [Ans. 0.993252]

5. Find y = f(x) by Hermite's interpolation from the table:

       x_i    y_i    y_i'
       -1     1      -5
        0     1       1
        1     3       7

   Compute y(2) and y'(2).
                                [Ans. 1 + x - x^2 + 2x^4; y(2) = 31, y'(2) = 61]
6. Compute √e by Hermite's formula for the function f(x) = e^x at the points 0 and 1. Compare the value with the value obtained by using Lagrange's interpolation.
                                [Ans. (1 + 3x)(1 - x)^2 + (2 - x) x^2 e; 1.644, 1.859]
7. Apply Hermite's formula to find a polynomial which meets the following specifications:

       x_i    y_i    y_i'
       -1     -1     0
        0      0     0
        1      1     0
                                [Ans. (1/2)(5x^3 - 3x^5)]
8. Apply the osculating interpolation formula to find a polynomial which meets the following requirements:

       x_i    y_i    y_i'
       0      1      0
       1      0      0
       2      9      0
                                [Ans. 1 + 10x^2 - 30x^3 + 25x^4 - 6x^5]
9. Apply Hermite's interpolation formula to find f(x) at x = 0.5 which meets the following requirements:

       x_i    f(x_i)    f'(x_i)
       -1     1         -5
        0     1          1
        1     3          7

   Also find f(-0.5).
                                [Ans. 2x^4 - x^2 + x + 1; 11/8, 3/8]
10. Construct the Hermite interpolation polynomial that fits the data:

       x    f(x)      f'(x)
       1    7.389     14.778
       2    54.598    109.196

    Estimate the value of f(1.5).
                                [Ans. 29.556x^3 - 85.793x^2 + 97.696x - 34.07; 19.19125]
11. (i) Construct the Hermite interpolation polynomial that fits the data:

       x      f(x)      f'(x)
       0      0         1
       0.5    0.4794    0.8776
       1.0    0.8415    0.5403

    Estimate the value of f(0.75).
(ii) Construct the Hermite interpolation polynomial that fits the data:

       x    y(x)    y'(x)
       0    -4      -5
       1    -6      -14
       2    22       17

    Interpolate y(x) at x = 0.5 and 1.5.
12. Obtain the unique polynomial p(x) of degree 3 or less corresponding to a function f(x), where f(0) = 1, f'(0) = 2, f(1) = 5, f'(1) = 4.
13. (i) Construct the Hermite interpolation polynomial that fits the data:

       x    f(x)    f'(x)
       2    29      50
       3    105     105

    Interpolate f(x) at x = 2.5.
(ii) Fit the cubic polynomial P(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 to the data given in problem 13 (i). Are these polynomials the same?
5.7. SOME RELATED TERMS
5.7.1 Some Remarkable Points about the Choice among Different Interpolation Formulae
We have derived several central difference interpolation formulae. An obvious question arises: which of these formulae gives the most accurate result?

(1) If interpolation is required near the beginning or end of the given data, the only alternative is to apply Newton's forward or backward difference formula respectively.

(2) For interpolation near the centre of the data, Stirling's formula gives the most accurate result for -1/4 < u < 1/4, while Bessel's formula is most efficient near u = 1/2, say for 1/4 ≤ u ≤ 3/4.
(3) But where a series of calculations has to be made, it would be inconvenient to switch between these (Stirling's and Bessel's) formulae; the choice then depends on the order of the highest difference that can be neglected, so that contributions from it and further differences are less than half a unit in the last decimal place. If this highest difference is of odd order, Stirling's formula is recommended; if it is of even order, Bessel's formula might be preferred.

(4) It is known from algebra that the nth degree polynomial which passes through (n + 1) points is unique. Hence the various interpolation formulae derived here are actually only different forms of the same polynomial. Therefore all the interpolation formulae should give the same functional value.
(5) Here we discussed several interpolation formulae for equispaced argument values. The most important point about these formulae is that the coefficients in the central difference formulae are smaller and converge faster than those in Newton's formulae. After a few terms, the coefficients in Stirling's formula decrease more rapidly than those of Bessel's formula, and the coefficients of Bessel's formula decrease more rapidly than those of Newton's formula. Therefore, whenever possible, central difference formulae should be used in preference to Newton's formulae. However, the right choice depends on the position of the interpolated value within the given pairs of values.
The zig-zag paths for the various formulae

FIG. 5.1 Zig-zag paths traced through the difference table (y, Δ, Δ^2, Δ^3, Δ^4, Δ^5, Δ^6) by Newton's backward, Newton's forward, Gauss backward, Gauss forward, Stirling's, Bessel's, and Laplace-Everett's formulae.
5.7.2 Approximation of Function
To evaluate most mathematical functions, we must first produce computable approximations to them. Functions are defined in a variety of ways in applications, with integrals and infinite series being the most common types of formulas used for the definition. Such a definition is useful in establishing the properties of the function, but it is generally not an efficient way to evaluate the function. In this part we examine the use of polynomials as approximations to a given function.

For evaluating a function f(x) on a computer, it is generally more efficient in space and time to have an analytic approximation to f(x) rather than to store a table and use interpolation; that is, function evaluation through interpolation over a stored table of values has been found to be considerably more costly than the use of efficient function approximations. It is also
Let f_1, f_2, ..., f_n be the values of the given function and φ_1, φ_2, ..., φ_n be the corresponding values of the approximating function. Then the error vector is e, whose components are given by e_i = f_i - φ_i. The approximation may be chosen in two ways. One is to find the approximation such that the quantity

    e_1^2 + e_2^2 + ... + e_n^2

is minimum; this leads us to the least squares approximation. The second is to choose the approximation such that the maximum component of e is minimized; this leads to the Chebyshev polynomials, which have found important applications in the approximation of functions.
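The two error measures can be illustrated directly. In the sketch below the function values and approximation values are made-up illustrative numbers, not data from the text; the point is only the two quantities computed from the error vector.

```python
# Two ways to measure the error vector e_i = f_i - phi_i between a function
# and its approximation (illustrative values, not from the text):
f_vals   = [1.00, 1.22, 1.49, 1.82, 2.23]          # "true" values f_i
phi_vals = [1.02, 1.20, 1.50, 1.80, 2.25]          # approximation phi_i

e = [fi - pi for fi, pi in zip(f_vals, phi_vals)]

sum_sq  = sum(ei**2 for ei in e)      # minimized by least-squares fitting
max_abs = max(abs(ei) for ei in e)    # minimized by Chebyshev (minimax) fitting

print(sum_sq, max_abs)
```

A least-squares fit would choose the φ_i to shrink `sum_sq`; a minimax (Chebyshev) fit would instead shrink `max_abs`.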
(i) Approximation of a function by Taylor's series method: Taylor's series approximation is one of the most useful series expressions of a function. If a function f(x) has derivatives up to order (n + 1) in an interval [a, b] containing x = x_0, then it can be expressed as

    f(x) = f(x_0) + f'(x_0)(x - x_0) + f''(x_0) (x - x_0)^2/2! + ... + f^(n)(x_0) (x - x_0)^n/n!
           + f^(n+1)(s) (x - x_0)^(n+1)/(n + 1)!    (1)
In the above expansion f'(x_0), f''(x_0), etc., are the first, second, ... derivatives of f(x) evaluated at x_0. The term

    f^(n+1)(s) (x - x_0)^(n+1)/(n + 1)!

is called the remainder term. The quantity s is a number which is a function of x and lies between x and x_0. The remainder term gives the truncation error when only the first n terms in the Taylor series are used to represent the function. The truncation error is thus:

    Truncation error = f^(n+1)(s) (x - x_0)^(n+1)/(n + 1)!    (2)

or

    T ≤ M |x - x_0|^(n+1)/(n + 1)!    (3)

where M = max |f^(n+1)(s)| for x in [a, b].
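The bound (3) can be checked numerically. The sketch below (an illustrative choice, f = e^x expanded about x_0 = 0 and evaluated at x = 0.5 with n = 4) compares the actual truncation error against the bound with M = e^0.5, the maximum of f^(n+1) on [0, 0.5].

```python
import math

def taylor_exp(x, n, x0=0.0):
    """Sum of the first n+1 terms of the Taylor series of e^x about x0."""
    return sum(math.exp(x0) * (x - x0)**k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 4
approx = taylor_exp(x, n)
actual = math.exp(x)
# Bound (3): |error| <= M |x - x0|^(n+1) / (n+1)!  with M = max |f^(n+1)(s)|.
# For f = e^x on [0, 0.5], the (n+1)th derivative is e^s, so M = e^0.5.
bound = math.exp(0.5) * 0.5**(n + 1) / math.factorial(n + 1)
print(actual - approx, bound)  # the actual error lies below the bound
```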
Obviously, the Taylor series is a polynomial in the base functions 1, (x - x_0), (x - x_0)^2, ..., (x - x_0)^n. The coefficients are constants given by f(x_0), f'(x_0), f''(x_0)/2!, etc. Thus the series can be written in nested form.
(ii) Approximation of a function by Chebyshev polynomials: Polynomials are linear combinations of the monomials 1, x, x^2, ..., x^n. An examination of the monomials on the interval (-1, +1) shows that each achieves its maximum magnitude 1 at x = ±1 and minimum magnitude 0 at x = 0. Consequently, in

    y(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n

dropping the higher-order terms, or modifying the coefficients a_1, a_2, ..., a_n, will produce little error for small x near zero, but possibly substantial error near the ends of the interval
(x near ±1). In particular, it seems reasonable to look for other sets of simple related functions that have their extreme values well distributed over the interval (-1, 1). We want to find approximations which are fairly easy to generate and which reduce the maximum error to a minimum value. The cosine functions cos θ, cos 2θ, ..., cos nθ appear to be good candidates. The set of polynomials T_n(x) = cos nθ, n = 0, 1, 2, ..., generated from the sequence of cosine functions using the transformation θ = cos^(-1) x, is known as the Chebyshev polynomials. These polynomials are used in the theory of approximation of functions.
Chebyshev polynomials: The Chebyshev polynomial T_n(x) of the first kind of degree n over the interval [-1, 1] is defined by the relation

    T_n(x) = cos [n cos^(-1)(x)]    (1)

Let cos^(-1) x = θ, so that x = cos θ and T_n(x) = cos nθ. Then

    for n = 0,  T_0(x) = 1
    for n = 1,  T_1(x) = x
The Chebyshev polynomials satisfy the recurrence relation

    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)    (2)

which can be obtained easily using the following trigonometric identity:

    cos (n + 1)θ + cos (n - 1)θ = 2 cos θ cos nθ

The above recurrence relation can be used to generate successively all the T_n(x), as well as to express the powers of x in terms of the Chebyshev polynomials. Some of the Chebyshev polynomials and the expansions for powers of x in terms of T_n(x) are given as follows:
    T_0(x) = 1,                                      1   = T_0(x),
    T_1(x) = x,                                      x   = T_1(x),
    T_2(x) = 2x^2 - 1,                               x^2 = (1/2)(T_0(x) + T_2(x)),
    T_3(x) = 4x^3 - 3x,                              x^3 = (1/4)(3T_1(x) + T_3(x)),
    T_4(x) = 8x^4 - 8x^2 + 1,                        x^4 = (1/8)(3T_0(x) + 4T_2(x) + T_4(x)),
    T_5(x) = 16x^5 - 20x^3 + 5x,                     x^5 = (1/16)(10T_1(x) + 5T_3(x) + T_5(x)),
    T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1,              x^6 = (1/32)(10T_0(x) + 15T_2(x) + 6T_4(x) + T_6(x)),
    T_7(x) = 64x^7 - 112x^5 + 56x^3 - 7x,
    T_8(x) = 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1,
    T_9(x) = 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x,
    T_10(x) = 512x^10 - 1280x^8 + 1120x^6 - 400x^4 + 50x^2 - 1    (3)
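The entries in the left column above can be generated mechanically from the recurrence (2). A minimal sketch, representing each polynomial as a list of coefficients in ascending powers of x:

```python
# Generating T_n(x) via the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
# Polynomials are coefficient lists [c0, c1, ..., cn] (ascending powers of x).

def chebyshev_T(n):
    T0, T1 = [1], [0, 1]                      # T_0 = 1, T_1 = x
    if n == 0:
        return T0
    for _ in range(n - 1):                    # after k steps, T1 holds T_{k+1}
        twoxT = [0] + [2 * c for c in T1]     # multiply T1 by 2x (shift + scale)
        nxt = [a - b for a, b in
               zip(twoxT, T0 + [0] * (len(twoxT) - len(T0)))]
        T0, T1 = T1, nxt
    return T1

print(chebyshev_T(5))   # [0, 5, 0, -20, 0, 16]  i.e. 16x^5 - 20x^3 + 5x
```

The printed coefficients match the tabulated T_5(x) = 16x^5 - 20x^3 + 5x.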
Note that the coefficient of x^n in T_n(x) is always 2^(n-1), and the expressions for the powers x^n, i.e., for 1, x, x^2, ..., x^n, will be useful in the economization of power series.

Further, these polynomials satisfy the differential equation

    (1 - x^2) d^2y/dx^2 - x dy/dx + n^2 y = 0,    where y = T_n(x).

We also have

    |T_n(x)| ≤ 1,  x ∈ [-1, 1]    (4)
Also, the Chebyshev polynomials satisfy the orthogonality relation

    ∫_{-1}^{1} T_n(x) T_m(x) / √(1 - x^2) dx = 0,    if m ≠ n;
                                             = π,    if m = n = 0;
                                             = π/2,  if m = n ≠ 0    (5)
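The orthogonality relation (5) can be verified numerically. Under the substitution x = cos θ the integral becomes ∫_0^π cos(mθ) cos(nθ) dθ, which the sketch below approximates with a midpoint rule (the choice of N is arbitrary):

```python
import math

def cheb_inner(m, n, N=20000):
    """Approximate the integral of T_m(x) T_n(x) / sqrt(1 - x^2) over [-1, 1].

    With x = cos(theta) this equals the integral of cos(m t) cos(n t) over
    [0, pi], evaluated here by the midpoint rule with N subintervals.
    """
    h = math.pi / N
    return sum(math.cos(m * (k + 0.5) * h) * math.cos(n * (k + 0.5) * h)
               for k in range(N)) * h

print(cheb_inner(2, 3))   # ~0      (m != n)
print(cheb_inner(3, 3))   # ~pi/2   (m = n != 0)
print(cheb_inner(0, 0))   # ~pi     (m = n = 0)
```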
Another important property of these polynomials is that, of all polynomials of degree n whose coefficient of x^n is unity, the polynomial 2^(1-n) T_n(x) has the smallest least upper bound to its magnitude in the interval [-1, 1], i.e.,

    max_{-1 ≤ x ≤ 1} |2^(1-n) T_n(x)| ≤ max_{-1 ≤ x ≤ 1} |P_n(x)|    (6)
This is called the minimax property. Here, P_n(x) is any polynomial of degree n with leading coefficient unity, and T_n(x) is defined by

    T_n(x) = cos(n cos^(-1) x) = 2^(n-1) x^n - ...    (7)

Because the maximum magnitude of T_n(x) is one, the upper bound referred to is 1/2^(n-1), i.e., 2^(1-n). This is important because we will be able to write power-series representations of functions whose maximum errors are given in terms of this upper bound.

Thus in Chebyshev approximation, the maximum error is kept down to a minimum. This is called the minimax principle, and the polynomial P_n(x) = 2^(1-n) T_n(x), (n ≥ 1), is called the minimax polynomial. By this process we can obtain the best lower-order approximation, called the minimax approximation.
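The minimax property (6) can be illustrated for one concrete case. The sketch below compares, on a grid over [-1, 1], the monic Chebyshev cubic 2^(1-3) T_3(x) = x^3 - (3/4)x against another monic cubic (x^3 itself, an arbitrary comparison choice):

```python
# Minimax property check for n = 3: among monic cubics on [-1, 1], the monic
# Chebyshev polynomial 2^(1-3) T_3(x) = x^3 - (3/4)x has the smallest possible
# maximum magnitude, namely 2^(1-3) = 1/4.
xs = [i / 1000 - 1 for i in range(2001)]            # grid on [-1, 1]

monic_cheb = max(abs(x**3 - 0.75 * x) for x in xs)  # = 1/4
plain_cube = max(abs(x**3) for x in xs)             # = 1, much larger
print(monic_cheb, plain_cube)
```

The equal-ripple extrema of |x^3 - (3/4)x| (value 1/4 at x = ±1/2 and ±1) are exactly what the minimax property predicts.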
Example 1. Find the best lower-order approximation to the cubic 2x^3 + 3x^2.

Sol. Using the relations given in equation (3), we have

    2x^3 + 3x^2 = 2 (1/4){3T_1(x) + T_3(x)} + 3x^2
                = 3x^2 + (3/2) T_1(x) + (1/2) T_3(x)
                = 3x^2 + (3/2) x + (1/2) T_3(x),    since T_1(x) = x.

The polynomial 3x^2 + (3/2)x is the required lower-order approximation to the given cubic, with a maximum error ±1/2 in the range [-1, 1].
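The claim in Example 1 is easy to verify pointwise: the error of the approximation is exactly (1/2) T_3(x), so its magnitude never exceeds 1/2 on [-1, 1]. A quick grid check:

```python
# Checking Example 1: the error of approximating 2x^3 + 3x^2 by 3x^2 + (3/2)x
# is (1/2) T_3(x), whose magnitude never exceeds 1/2 on [-1, 1].
def T3(x):
    return 4 * x**3 - 3 * x

xs = [i / 1000 - 1 for i in range(2001)]
errs = [(2 * x**3 + 3 * x**2) - (3 * x**2 + 1.5 * x) for x in xs]

# The error equals (1/2) T_3(x) pointwise ...
assert all(abs(e - 0.5 * T3(x)) < 1e-12 for e, x in zip(errs, xs))
# ... and its maximum magnitude on the grid is 1/2, attained at x = +-1, +-1/2.
print(max(abs(e) for e in errs))  # 0.5
```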
Example 2. Obtain the best lower-degree approximation to the cubic x^3 + 2x^2 on the interval [-1, 1].

Sol. We write

    x^3 + 2x^2 = (1/4)[3T_1(x) + T_3(x)] + 2x^2
               = 2x^2 + (3/4) T_1(x) + (1/4) T_3(x)
               = 2x^2 + (3/4) x + (1/4) T_3(x).

Hence, the polynomial 2x^2 + (3/4)x is the required lower-order approximation to the given cubic. The error of this approximation on the interval [-1, 1] is

    max_{-1 ≤ x ≤ 1} (1/4)|T_3(x)| = 1/4.
Example 3. Use Chebyshev polynomials to find the best uniform approximation of degree 4 or less to x^5 on [-1, 1].

Sol. x^5 in terms of Chebyshev polynomials can be written as

    x^5 = (5/8) T_1 + (5/16) T_3 + (1/16) T_5.

Now, T_5 being a polynomial of degree five, we omit the term (1/16) T_5 and approximate f(x) = x^5 by (5/8) T_1 + (5/16) T_3. Thus the uniform polynomial approximation of degree four or less to x^5 is given by

    x^5 ≈ (5/8) T_1 + (5/16) T_3 = (5/8) x + (5/16)[4x^3 - 3x] = (5/4) x^3 - (5/16) x,

and the error of this approximation on [-1, 1] is

    max_{-1 ≤ x ≤ 1} (1/16)|T_5| = 1/16.
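Example 3's error bound can likewise be checked on a grid: the residual x^5 - [(5/4)x^3 - (5/16)x] is exactly (1/16) T_5(x), so its maximum magnitude on [-1, 1] is 1/16:

```python
# Checking Example 3: x^5 - (5x^3/4 - 5x/16) = (1/16) T_5(x), so the maximum
# error of the best degree-4-or-less approximation to x^5 on [-1, 1] is 1/16.
xs = [i / 1000 - 1 for i in range(2001)]
max_err = max(abs(x**5 - (1.25 * x**3 - 0.3125 * x)) for x in xs)
print(max_err, 1 / 16)   # both 0.0625
```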
Example 4. Find the best lower-order approximation to the polynomial

    y(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120,    x ∈ [-1/2, 1/2].

Sol. On substituting x = ξ/2, we get

    y(ξ) = 1 + ξ/2 + ξ^2/8 + ξ^3/48 + ξ^4/384 + ξ^5/3840,    -1 < ξ < 1.
The above equation can be written in Chebyshev polynomials as

    y(ξ) = T_0(ξ) + (1/2) T_1(ξ) + (1/16)[T_0(ξ) + T_2(ξ)] + (1/192)[3T_1(ξ) + T_3(ξ)]
           + (1/3072)[3T_0(ξ) + 4T_2(ξ) + T_4(ξ)] + (1/61440)[10T_1(ξ) + 5T_3(ξ) + T_5(ξ)]

         = 1.063477 T_0(ξ) + 0.515788 T_1(ξ) + 0.063802 T_2(ξ) + 0.005290 T_3(ξ)
           + 0.000326 T_4(ξ) + 0.0000163 T_5(ξ).

Dropping the term containing T_5(ξ), we get

    y(ξ) = 1 + 0.4999186 ξ + 0.125 ξ^2 + 0.0211589 ξ^3 + 0.0026041 ξ^4.

Hence, with ξ = 2x,

    y(x) = 1 + 0.999837 x + 0.5 x^2 + 0.169271 x^3 + 0.041667 x^4.
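Since the quintic in Example 4 is the truncated Taylor series of e^x, the economized quartic can be compared with it against e^x itself on [-1/2, 1/2]. In the sketch below the quartic's coefficients are those of the example carried through the substitution ξ = 2x (so the x^3 coefficient is 8 × 0.0211589 ≈ 0.169271):

```python
import math

# Checking Example 4: compare the original Taylor quintic of e^x with the
# economized (Chebyshev-telescoped) quartic on [-1/2, 1/2].
def taylor5(x):    # the original quintic 1 + x + ... + x^5/120
    return sum(x**k / math.factorial(k) for k in range(6))

def economized(x): # quartic obtained by dropping the T_5 term
    return 1 + 0.999837 * x + 0.5 * x**2 + 0.169271 * x**3 + 0.041667 * x**4

xs = [i / 1000 - 0.5 for i in range(1001)]
err_taylor = max(abs(math.exp(x) - taylor5(x)) for x in xs)
err_econ   = max(abs(math.exp(x) - economized(x)) for x in xs)
print(err_taylor, err_econ)  # both on the order of 1e-5
```

Dropping the T_5 term reduces the degree by one while keeping the maximum error of the same small order of magnitude, which is the point of economization.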
Properties of the Chebyshev polynomial T_n(x)

1. T_n(x) is a polynomial of degree n.

2. T_n(-x) = (-1)^n T_n(x), which shows that T_n(x) is an odd function of x if n is odd and an even function of x if n is even.

3. |T_n(x)| ≤ 1, x ∈ [-1, 1].

4. T_n(x) assumes its extreme values at the (n + 1) points

       x_m = cos (mπ/n),    m = 0, 1, 2, ..., n,

   and the extreme value at x_m is (-1)^m.

5.  ∫_{-1}^{1} T_m(x) T_n(x) / √(1 - x^2) dx = 0,    if m ≠ n;
                                             = π/2,  if m = n ≠ 0;
                                             = π,    if m = n = 0,

   which can be proved easily by putting x = cos θ. Thus the T_n(x) are orthogonal on the interval [-1, 1] with respect to the weight function w(x) = 1/√(1 - x^2).
6. If P_n(x) is a monic polynomial of degree n, then

       max_{-1 ≤ x ≤ 1} |2^(1-n) T_n(x)| ≤ max_{-1 ≤ x ≤ 1} |P_n(x)|.

   This is known as the minimax property, since |T_n(x)| ≤ 1.
Chebyshev polynomial approximation: Let f(x) be a continuous function defined on the interval [-1, 1], and let B_0 + B_1 x + B_2 x^2 + ... + B_n x^n be the required minimax polynomial approximation for f(x). Suppose

    f(x) = (a_0/2) + Σ_{i=1}^{∞} a_i T_i(x)

is the Chebyshev series expansion for f(x). Then the truncated series of the partial sum