Advanced Mathematical Methods for Scientists and Engineers Episode 4 Part 4 docx

In terms of real functions, this is
\[
= \frac{1}{\pi} \sum_{n=-\infty}^{\infty} \frac{1 - (-1)^n}{\imath n} \bigl( \cos(nx) + \imath \sin(nx) \bigr)
= \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n} \sin(nx)
\]
\[
\operatorname{sign}(x) \sim \frac{4}{\pi} \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{1}{n} \sin(nx).
\]
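As a quick numeric sanity check (a sketch added here, not part of the text): truncating the odd-\(n\) sine series reproduces sign(x) away from the jump at x = 0, and gives exactly 0 at the jump itself.

```python
import math

def sign_series(x, terms=5000):
    """Partial sum of sign(x) ~ (4/pi) * sum over odd n of sin(n x)/n."""
    return (4 / math.pi) * sum(math.sin(n * x) / n
                               for n in range(1, 2 * terms, 2))

for x in (0.5, 1.0, -1.5):
    print(f"x = {x:+.1f}: series = {sign_series(x):+.5f}")
```

The partial sums converge slowly (the error decays like 1/N away from the jump), which is typical of a Fourier series for a discontinuous function.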
25.9 Least Squares Fit to a Function and Completeness
Let {φ_1, φ_2, φ_3, . . .} be a set of real, square integrable functions that are orthonormal with respect to the weighting function σ(x) on the interval [a, b]. That is,
\[
\langle \phi_n | \sigma | \phi_m \rangle = \delta_{nm}.
\]
Let f(x) be some square integrable function defined on the same interval. We would like to approximate the function f(x) with a finite orthonormal series.
\[
f(x) \approx \sum_{n=1}^{N} \alpha_n \phi_n(x)
\]
f(x) may or may not have a uniformly convergent expansion in the orthonormal functions.
We would like to choose the α_n so that we get the best possible approximation to f(x). The most common measure of how well a series approximates a function is the least squares measure. The error is defined as the integral of the weighting function times the square of the deviation.
\[
E = \int_a^b \sigma(x) \left( f(x) - \sum_{n=1}^{N} \alpha_n \phi_n(x) \right)^2 dx
\]
The “best” fit is found by choosing the α_n that minimize E. Let c_n be the Fourier coefficients of f(x),
\[
c_n = \langle \phi_n | \sigma | f \rangle.
\]
We expand the integral for E.
\begin{align*}
E(\alpha) &= \int_a^b \sigma(x) \left( f(x) - \sum_{n=1}^{N} \alpha_n \phi_n(x) \right)^2 dx \\
&= \left\langle f - \sum_{n=1}^{N} \alpha_n \phi_n \,\middle|\, \sigma \,\middle|\, f - \sum_{n=1}^{N} \alpha_n \phi_n \right\rangle \\
&= \langle f | \sigma | f \rangle - 2 \left\langle \sum_{n=1}^{N} \alpha_n \phi_n \,\middle|\, \sigma \,\middle|\, f \right\rangle
   + \left\langle \sum_{n=1}^{N} \alpha_n \phi_n \,\middle|\, \sigma \,\middle|\, \sum_{n=1}^{N} \alpha_n \phi_n \right\rangle \\
&= \langle f | \sigma | f \rangle - 2 \sum_{n=1}^{N} \alpha_n \langle \phi_n | \sigma | f \rangle
   + \sum_{n=1}^{N} \sum_{m=1}^{N} \alpha_n \alpha_m \langle \phi_n | \sigma | \phi_m \rangle \\
&= \langle f | \sigma | f \rangle - 2 \sum_{n=1}^{N} \alpha_n c_n + \sum_{n=1}^{N} \alpha_n^2 \\
&= \langle f | \sigma | f \rangle + \sum_{n=1}^{N} (\alpha_n - c_n)^2 - \sum_{n=1}^{N} c_n^2
\end{align*}
Each term involving α_n is non-negative and is minimized for α_n = c_n. The Fourier coefficients give the least squares approximation to a function. The least squares fit to f(x) is thus
\[
f(x) \approx \sum_{n=1}^{N} \langle \phi_n | \sigma | f \rangle \, \phi_n(x).
\]
Result 25.9.1 If {φ_1, φ_2, φ_3, . . .} is a set of real, square integrable functions that are orthogonal with respect to σ(x) then the least squares fit of the first N orthogonal functions to the square integrable function f(x) is
\[
f(x) \approx \sum_{n=1}^{N} \frac{\langle \phi_n | \sigma | f \rangle}{\langle \phi_n | \sigma | \phi_n \rangle} \, \phi_n(x).
\]
If the set is orthonormal, this formula reduces to
\[
f(x) \approx \sum_{n=1}^{N} \langle \phi_n | \sigma | f \rangle \, \phi_n(x).
\]
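A small numeric illustration of the minimization (an assumption-laden sketch, not from the text): take σ(x) = 1, the orthonormal functions φ_n(x) = sin(nx)/√π on [−π, π], and f(x) = x. The error E(α) should be smallest when each α_n equals the Fourier coefficient c_n.

```python
import math

def quad(g, a, b, m=2000):
    """Midpoint-rule quadrature, accurate enough for this demo."""
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

def phi(n, x):
    # Orthonormal on [-pi, pi] with sigma(x) = 1.
    return math.sin(n * x) / math.sqrt(math.pi)

f = lambda x: x
N = 5
# Fourier coefficients c_n = <phi_n | sigma | f>.
c = [quad(lambda x, n=n: phi(n, x) * f(x), -math.pi, math.pi)
     for n in range(1, N + 1)]

def E(alpha):
    err = lambda x: (f(x) - sum(a * phi(n + 1, x)
                                for n, a in enumerate(alpha))) ** 2
    return quad(err, -math.pi, math.pi)

print("E at the Fourier coefficients:", E(c))
print("E at a perturbed choice:     ", E([a + 0.1 for a in c]))
```

By the derivation above, perturbing each α_n by 0.1 raises E by exactly Σ(0.1)², so the perturbed error must come out larger.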
Since the error in the approximation, E, is a nonnegative number, we can obtain an inequality on the sum of the squared coefficients.
\[
E = \langle f | \sigma | f \rangle - \sum_{n=1}^{N} c_n^2 \geq 0
\]
\[
\sum_{n=1}^{N} c_n^2 \leq \langle f | \sigma | f \rangle
\]
This equation is known as Bessel's Inequality. Since \(\langle f | \sigma | f \rangle\) is just a nonnegative number, independent of N, the sum \(\sum_{n=1}^{\infty} c_n^2\) is convergent and c_n → 0 as n → ∞.
Convergence in the Mean. If the error E goes to zero as N tends to infinity,
\[
\lim_{N \to \infty} \int_a^b \sigma(x) \left( f(x) - \sum_{n=1}^{N} c_n \phi_n(x) \right)^2 dx = 0,
\]
then the sum converges in the mean to f(x) relative to the weighting function σ(x). This implies that
\[
\lim_{N \to \infty} \left( \langle f | \sigma | f \rangle - \sum_{n=1}^{N} c_n^2 \right) = 0
\]
\[
\sum_{n=1}^{\infty} c_n^2 = \langle f | \sigma | f \rangle.
\]
This is known as Parseval's identity.
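A hedged numeric check of Bessel's inequality and Parseval's identity for a concrete case (the function, basis, and coefficient formula below are assumptions made for the demo): f(x) = x on [−π, π] with φ_n(x) = sin(nx)/√π and σ = 1, where the coefficients work out to c_n = 2√π(−1)^{n+1}/n and ⟨f|σ|f⟩ = ∫x² dx = 2π³/3.

```python
import math

f_norm = 2 * math.pi ** 3 / 3        # <f|sigma|f> for f(x) = x on [-pi, pi]

partial = 0.0
for n in range(1, 100_001):
    c_n = 2 * math.sqrt(math.pi) * (-1) ** (n + 1) / n
    partial += c_n ** 2
    assert partial <= f_norm + 1e-9  # Bessel's inequality holds at every N

print("sum of c_n^2 up to N = 100000:", partial)
print("<f|sigma|f>:                  ", f_norm)
```

The running sum stays below ⟨f|σ|f⟩ at every truncation (Bessel) and approaches it as N grows (Parseval); in fact Σc_n² = 4π Σ1/n² = 4π·π²/6 = 2π³/3 exactly.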
Completeness. Consider a set of functions {φ_1, φ_2, φ_3, . . .} that is orthogonal with respect to the weighting function σ(x). If every function f(x) that is square integrable with respect to σ(x) has an orthogonal series expansion
\[
f(x) \sim \sum_{n=1}^{\infty} c_n \phi_n(x)
\]
that converges in the mean to f(x), then the set is complete.
25.10 Closure Relation
Let {φ_1, φ_2, . . .} be an orthonormal, complete set on the domain [a, b]. For any square integrable function f(x) we can write
\[
f(x) \sim \sum_{n=1}^{\infty} c_n \phi_n(x).
\]
Here the c_n are the generalized Fourier coefficients and the sum converges in the mean to f(x). Substituting the expression for the Fourier coefficients into the sum yields
\[
f(x) \sim \sum_{n=1}^{\infty} \langle \phi_n | f \rangle \, \phi_n(x)
= \sum_{n=1}^{\infty} \left( \int_a^b \phi_n(\xi) f(\xi) \, d\xi \right) \phi_n(x).
\]
Since the sum is not necessarily uniformly convergent, we are not justified in exchanging the order of summation and
integration. . . but what the heck, let’s do it anyway.
\[
= \int_a^b \sum_{n=1}^{\infty} \phi_n(\xi) f(\xi) \phi_n(x) \, d\xi
= \int_a^b \left( \sum_{n=1}^{\infty} \phi_n(\xi) \phi_n(x) \right) f(\xi) \, d\xi
\]
The sum behaves like a Dirac delta function. Recall that δ(x − ξ) satisfies the equation
\[
f(x) = \int_a^b \delta(x - \xi) f(\xi) \, d\xi \quad \text{for } x \in (a, b).
\]
Thus we could say that the sum is a representation of δ(x − ξ). Note that a series representation of the delta function could not be convergent, hence the necessity of throwing caution to the wind when we interchanged the summation and integration in deriving the series. The closure relation for an orthonormal, complete set states
\[
\sum_{n=1}^{\infty} \phi_n(x) \phi_n(\xi) \sim \delta(x - \xi).
\]
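Although the closure sum itself does not converge, its truncations do act like δ(x − ξ) under an integral. A hedged numeric sketch (basis and test function are choices made for the demo): the real Fourier orthonormal set {1/√(2π), cos(nx)/√π, sin(nx)/√π} on [−π, π], applied to a smooth periodic function.

```python
import math

def kernel(N, x, xi):
    """Partial closure sum K_N(x, xi) = sum of phi_n(x) phi_n(xi)."""
    total = 1 / (2 * math.pi)
    for n in range(1, N + 1):
        total += (math.cos(n * x) * math.cos(n * xi) +
                  math.sin(n * x) * math.sin(n * xi)) / math.pi
    return total

def quad(g, a, b, m=2000):
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

f = lambda x: math.exp(math.cos(x))   # smooth, 2*pi-periodic test function
x0 = 0.7
smeared = quad(lambda xi: kernel(40, x0, xi) * f(xi), -math.pi, math.pi)
print("integral of K_40(x0, xi) f(xi):", smeared)
print("f(x0):                         ", f(x0))
```

Because the Fourier coefficients of exp(cos x) decay extremely fast, even N = 40 modes reproduce f(x0) essentially exactly, just as the delta-function property predicts.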
Alternatively, you can derive the closure relation by computing the generalized Fourier coefficients of the delta function.
\begin{align*}
\delta(x - \xi) &\sim \sum_{n=1}^{\infty} c_n \phi_n(x) \\
c_n &= \langle \phi_n | \delta(x - \xi) \rangle \\
&= \int_a^b \phi_n(x) \delta(x - \xi) \, dx \\
&= \phi_n(\xi)
\end{align*}
\[
\delta(x - \xi) \sim \sum_{n=1}^{\infty} \phi_n(x) \phi_n(\xi)
\]
Result 25.10.1 If {φ_1, φ_2, . . .} is an orthogonal, complete set on the domain [a, b], then
\[
\sum_{n=1}^{\infty} \frac{\phi_n(x) \phi_n(\xi)}{\| \phi_n \|^2} \sim \delta(x - \xi).
\]
If the set is orthonormal, then
\[
\sum_{n=1}^{\infty} \phi_n(x) \phi_n(\xi) \sim \delta(x - \xi).
\]
Example 25.10.1 The integral of the Dirac delta function is the Heaviside function. On the interval x ∈ (−π, π),
\[
\int_{-\pi}^{x} \delta(t) \, dt = H(x) =
\begin{cases}
1 & \text{for } 0 < x < \pi \\
0 & \text{for } -\pi < x < 0.
\end{cases}
\]
Consider the orthonormal, complete set \(\{\ldots, \frac{1}{\sqrt{2\pi}} e^{-\imath x}, \frac{1}{\sqrt{2\pi}}, \frac{1}{\sqrt{2\pi}} e^{\imath x}, \ldots\}\) on the domain [−π, π]. The delta function has the series
\[
\delta(t) \sim \sum_{n=-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{\imath n t} \, \frac{1}{\sqrt{2\pi}} e^{-\imath n 0}
= \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} e^{\imath n t}.
\]
We will find the series expansion of the Heaviside function first by expanding directly and then by integrating the
expansion for the delta function.
Finding the series expansion of H(x) directly. The generalized Fourier coefficients of H(x) are
\begin{align*}
c_0 &= \int_{-\pi}^{\pi} \frac{1}{\sqrt{2\pi}} H(x) \, dx
= \frac{1}{\sqrt{2\pi}} \int_0^{\pi} dx
= \sqrt{\frac{\pi}{2}} \\
c_n &= \int_{-\pi}^{\pi} \frac{1}{\sqrt{2\pi}} e^{-\imath n x} H(x) \, dx
= \frac{1}{\sqrt{2\pi}} \int_0^{\pi} e^{-\imath n x} \, dx
= \frac{1 - (-1)^n}{\imath n \sqrt{2\pi}}.
\end{align*}
Thus the Heaviside function has the expansion
\begin{align*}
H(x) &\sim \sqrt{\frac{\pi}{2}} \frac{1}{\sqrt{2\pi}}
+ \sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \frac{1 - (-1)^n}{\imath n \sqrt{2\pi}} \frac{1}{\sqrt{2\pi}} e^{\imath n x} \\
&= \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n} \sin(nx)
\end{align*}
\[
H(x) \sim \frac{1}{2} + \frac{2}{\pi} \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{1}{n} \sin(nx).
\]
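As a quick numeric check (a sketch added alongside the text), the truncated series reproduces H(x) on either side of the jump and takes the value 1/2 exactly at x = 0, the mean of the left and right limits.

```python
import math

def H_series(x, terms=5000):
    """Partial sum of H(x) ~ 1/2 + (2/pi) * sum over odd n of sin(n x)/n."""
    return 0.5 + (2 / math.pi) * sum(math.sin(n * x) / n
                                     for n in range(1, 2 * terms, 2))

for x in (1.0, -1.0, 0.0):
    print(f"x = {x:+.1f}: series = {H_series(x):+.5f}")
```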
Integrating the series for δ(t).
\begin{align*}
\int_{-\pi}^{x} \delta(t) \, dt &\sim \frac{1}{2\pi} \int_{-\pi}^{x} \sum_{n=-\infty}^{\infty} e^{\imath n t} \, dt \\
&= \frac{1}{2\pi} \left( (x + \pi) + \sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \left[ \frac{1}{\imath n} e^{\imath n t} \right]_{-\pi}^{x} \right) \\
&= \frac{1}{2\pi} \left( (x + \pi) + \sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \frac{1}{\imath n} \left( e^{\imath n x} - (-1)^n \right) \right) \\
&= \frac{x}{2\pi} + \frac{1}{2} + \frac{1}{2\pi} \sum_{n=1}^{\infty} \frac{1}{\imath n} \left( e^{\imath n x} - e^{-\imath n x} - (-1)^n + (-1)^n \right) \\
&= \frac{x}{2\pi} + \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \sin(nx)
\end{align*}
Expanding \(\frac{x}{2\pi}\) in the orthonormal set,
\[
\frac{x}{2\pi} \sim \sum_{n=-\infty}^{\infty} c_n \frac{1}{\sqrt{2\pi}} e^{\imath n x}.
\]
\begin{align*}
c_0 &= \int_{-\pi}^{\pi} \frac{1}{\sqrt{2\pi}} \frac{x}{2\pi} \, dx = 0 \\
c_n &= \int_{-\pi}^{\pi} \frac{1}{\sqrt{2\pi}} e^{-\imath n x} \frac{x}{2\pi} \, dx
= \frac{\imath (-1)^n}{n \sqrt{2\pi}}
\end{align*}
\[
\frac{x}{2\pi} \sim \sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \frac{\imath (-1)^n}{n \sqrt{2\pi}} \frac{1}{\sqrt{2\pi}} e^{\imath n x}
= -\frac{1}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{n} \sin(nx)
\]
Substituting the series for \(\frac{x}{2\pi}\) into the expression for the integral of the delta function,
\[
\int_{-\pi}^{x} \delta(t) \, dt \sim \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n} \sin(nx)
\]
\[
\int_{-\pi}^{x} \delta(t) \, dt \sim \frac{1}{2} + \frac{2}{\pi} \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{1}{n} \sin(nx).
\]
Thus we see that the series expansions of the Heaviside function and the integral of the delta function are the same.
25.11 Linear Operators
25.12 Exercises
Exercise 25.1
1. Suppose \(\{\phi_k(x)\}_{k=0}^{\infty}\) is an orthogonal system on [a, b]. Show that any finite set of the φ_j(x) is a linearly independent set on [a, b]. That is, if \(\{\phi_{j_1}(x), \phi_{j_2}(x), \ldots, \phi_{j_n}(x)\}\) is the set and all the j_ν are distinct, then
\[
a_1 \phi_{j_1}(x) + a_2 \phi_{j_2}(x) + \cdots + a_n \phi_{j_n}(x) = 0 \quad \text{on } a \leq x \leq b
\]
is true iff a_1 = a_2 = · · · = a_n = 0.
2. Show that the complex functions \(\phi_k(x) \equiv e^{\imath k \pi x / L}\), k = 0, 1, 2, . . . are orthogonal in the sense that \(\int_{-L}^{L} \phi_k(x) \phi_n^*(x) \, dx = 0\) for n ≠ k. Here \(\phi_n^*(x)\) is the complex conjugate of \(\phi_n(x)\).
Hint, Solution
25.13 Hints
Hint 25.1
25.14 Solutions
Solution 25.1
1.
\begin{align*}
a_1 \phi_{j_1}(x) + a_2 \phi_{j_2}(x) + \cdots + a_n \phi_{j_n}(x) &= 0 \\
\sum_{k=1}^{n} a_k \phi_{j_k}(x) &= 0
\end{align*}
We take the inner product with \(\phi_{j_\nu}\) for any ν = 1, . . . , n. (Here \(\langle \phi, \psi \rangle \equiv \int_a^b \phi(x) \psi^*(x) \, dx\).)
\[
\left\langle \sum_{k=1}^{n} a_k \phi_{j_k}, \phi_{j_\nu} \right\rangle = 0
\]
We interchange the order of summation and integration.
\[
\sum_{k=1}^{n} a_k \langle \phi_{j_k}, \phi_{j_\nu} \rangle = 0
\]
Since \(\langle \phi_{j_k}, \phi_{j_\nu} \rangle = 0\) for \(j_k \neq j_\nu\), only one term survives.
\[
a_\nu \langle \phi_{j_\nu}, \phi_{j_\nu} \rangle = 0
\]
Since \(\langle \phi_{j_\nu}, \phi_{j_\nu} \rangle \neq 0\),
\[
a_\nu = 0.
\]
Thus we see that a_1 = a_2 = · · · = a_n = 0.
2. For k ≠ n, \(\langle \phi_k, \phi_n \rangle = 0\):
\begin{align*}
\langle \phi_k, \phi_n \rangle &\equiv \int_{-L}^{L} \phi_k(x) \phi_n^*(x) \, dx \\
&= \int_{-L}^{L} e^{\imath k \pi x / L} e^{-\imath n \pi x / L} \, dx \\
&= \int_{-L}^{L} e^{\imath (k - n) \pi x / L} \, dx \\
&= \left[ \frac{e^{\imath (k - n) \pi x / L}}{\imath (k - n) \pi / L} \right]_{-L}^{L} \\
&= \frac{e^{\imath (k - n) \pi} - e^{-\imath (k - n) \pi}}{\imath (k - n) \pi / L} \\
&= \frac{2 L \sin((k - n) \pi)}{(k - n) \pi} \\
&= 0
\end{align*}
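A hedged numeric companion to part 2 (the value L = 2 and the grid size are arbitrary choices for the demo): a midpoint-rule approximation of \(\langle \phi_k, \phi_n \rangle\) vanishes for k ≠ n and equals 2L for k = n.

```python
import cmath
import math

L = 2.0

def inner(k, n, m=1000):
    """Midpoint approximation of <phi_k, phi_n> on [-L, L]."""
    h = 2 * L / m
    total = 0j
    for j in range(m):
        x = -L + (j + 0.5) * h
        total += cmath.exp(1j * k * math.pi * x / L) * \
                 cmath.exp(-1j * n * math.pi * x / L)
    return total * h

print("|<phi_3, phi_1>| =", abs(inner(3, 1)))   # vanishes for k != n
print("<phi_2, phi_2>   =", inner(2, 2).real)   # equals 2L for k == n
```

The midpoint rule over an integer number of periods of a complex exponential is exact up to roundoff, so the k ≠ n result comes out at machine precision.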
Chapter 26
Self Adjoint Linear Operators
26.1 Adjoint Operators
The adjoint of an operator, \(L^*\), satisfies
\[
\langle v | L u \rangle - \langle L^* v | u \rangle = 0
\]
for all elements u and v. This is known as Green's Identity.
The adjoint of a matrix. For vectors, one can represent linear operators L with matrix multiplication.
Lx ≡ Ax
Let \(B = A^*\) be the adjoint of the matrix A. We determine the adjoint of A from Green's Identity.
\begin{align*}
\langle x | A y \rangle - \langle B x | y \rangle &= 0 \\
\overline{x} \cdot A y &= \overline{B x} \cdot y \\
\overline{x}^T A y &= \overline{B x}^T y \\
\overline{x}^T A y &= \overline{x}^T \overline{B}^T y \\
B &= \overline{A}^T
\end{align*}
Thus we see that the adjoint of a matrix is the conjugate transpose of the matrix, \(A^* = \overline{A}^T\). The conjugate transpose is also called the Hermitian transpose and is denoted \(A^H\).
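A hedged numeric check in plain Python complex arithmetic (the matrix and vectors are arbitrary choices for the demo): the conjugate transpose satisfies Green's identity ⟨x|Ay⟩ = ⟨A*x|y⟩, with ⟨u|v⟩ = Σ conj(u_i) v_i.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    # <u|v> = sum of conj(u_i) * v_i
    return sum(a.conjugate() * b for a, b in zip(u, v))

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]
x = [1 - 1j, 2 + 3j]
y = [0 + 2j, -1 + 1j]

lhs = inner(x, matvec(A, y))
rhs = inner(matvec(conj_transpose(A), x), y)
print(lhs, rhs)   # the two inner products agree
```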
The adjoint of a differential operator. Consider a second order linear differential operator acting on \(C^2\) functions defined on (a . . . b) which satisfy certain boundary conditions.
\[
L u \equiv p_2(x) u'' + p_1(x) u' + p_0(x) u
\]
26.2 Self-Adjoint Operators
Matrices. A matrix is self-adjoint if it is equal to its conjugate transpose, \(A = A^H \equiv \overline{A}^T\). Such matrices are called Hermitian. For a Hermitian matrix H, Green's identity is
\begin{align*}
\langle y | H x \rangle &= \langle H y | x \rangle \\
\overline{y} \cdot H x &= \overline{H y} \cdot x
\end{align*}
The eigenvalues of a Hermitian matrix are real. Let x be an eigenvector with eigenvalue λ.
\begin{align*}
\langle x | H x \rangle &= \langle H x | x \rangle \\
\langle x | \lambda x \rangle - \langle \lambda x | x \rangle &= 0 \\
(\lambda - \overline{\lambda}) \langle x | x \rangle &= 0 \\
\lambda &= \overline{\lambda}
\end{align*}
The eigenvectors corresponding to distinct eigenvalues are orthogonal. Let x and y be eigenvectors with distinct eigenvalues λ and µ.
\begin{align*}
\langle y | H x \rangle &= \langle H y | x \rangle \\
\langle y | \lambda x \rangle - \langle \mu y | x \rangle &= 0 \\
(\lambda - \mu) \langle y | x \rangle &= 0 \\
\langle y | x \rangle &= 0
\end{align*}
Furthermore, all Hermitian matrices are similar to a diagonal matrix and have a complete set of orthogonal eigenvectors.
Trigonometric Series. Consider the problem
\[
-y'' = \lambda y, \quad y(0) = y(2\pi), \quad y'(0) = y'(2\pi).
\]
We verify that the differential operator \(L = -\frac{d^2}{dx^2}\) with periodic boundary conditions is self-adjoint.
\begin{align*}
\langle v | L u \rangle &= \langle v | {-u''} \rangle \\
&= \left[ -\overline{v} u' \right]_0^{2\pi} - \langle v' | {-u'} \rangle \\
&= \langle v' | u' \rangle \\
&= \left[ \overline{v}' u \right]_0^{2\pi} - \langle v'' | u \rangle \\
&= -\langle v'' | u \rangle \\
&= \langle L v | u \rangle
\end{align*}
The eigenvalues and eigenfunctions of this problem are
\[
\lambda_0 = 0, \quad \phi_0 = 1
\]
\[
\lambda_n = n^2, \quad \phi_n^{(1)} = \cos(nx), \quad \phi_n^{(2)} = \sin(nx), \quad n \in \mathbb{Z}^+
\]
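A hedged numeric check of the self-adjointness just verified (the test functions below are arbitrary periodic choices for the demo, with −u″ and −v″ written out analytically): ⟨v|Lu⟩ and ⟨Lv|u⟩ agree on [0, 2π].

```python
import math

u  = lambda x: math.sin(x) + math.sin(2 * x)
Lu = lambda x: math.sin(x) + 4 * math.sin(2 * x)    # -u''
v  = lambda x: math.sin(x)
Lv = lambda x: math.sin(x)                          # -v''

def quad(g, a, b, m=4000):
    """Midpoint-rule quadrature."""
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

lhs = quad(lambda x: v(x) * Lu(x), 0, 2 * math.pi)
rhs = quad(lambda x: Lv(x) * u(x), 0, 2 * math.pi)
print("<v|Lu> =", lhs, "  <Lv|u> =", rhs)
```

Both integrals evaluate to π here, since only the sin(x)·sin(x) cross terms survive; the boundary terms in the verification vanish because the functions are 2π-periodic.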
26.3 Exercises
26.4 Hints
26.5 Solutions
Chapter 27

Self-Adjoint Boundary Value Problems
Seize the day and throttle it.
-Calvin
27.1 Summary of Adjoint Operators
The adjoint of the operator
\[
L[y] = p_n \frac{d^n y}{dx^n} + p_{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_0 y,
\]
is defined
\[
L^*[y] = (-1)^n \frac{d^n}{dx^n} (p_n y) + (-1)^{n-1} \frac{d^{n-1}}{dx^{n-1}} (p_{n-1} y) + \cdots + p_0 y
\]
If each of the \(p_k\) is k times continuously differentiable and u and v are n times continuously differentiable on some interval, then on that interval Lagrange's identity states
\[
v L[u] - u L^*[v] = \frac{d}{dx} B[u, v]
\]
where B[u, v] is the bilinear form
\[
B[u, v] = \sum_{m=1}^{n} \sum_{\substack{j + k = m - 1 \\ j \geq 0, \, k \geq 0}} (-1)^j u^{(k)} (p_m v)^{(j)}.
\]
If L is a second order operator then
\[
v L[u] - u L^*[v] = u'' p_2 v + u' p_1 v
+ u \left( -p_2 v'' + (-2 p_2' + p_1) v' + (-p_2'' + p_1') v \right).
\]
Integrating Lagrange's identity on its interval of validity gives us Green's formula.
\[
\int_a^b \left( v L[u] - u L^*[v] \right) dx
= \langle v | L[u] \rangle - \langle L^*[v] | u \rangle
= B[u, v] \Big|_{x=b} - B[u, v] \Big|_{x=a}
\]
27.2 Formally Self-Adjoint Operators
Example 27.2.1 The linear operator
\[
L[y] = x^2 y'' + 2 x y' + 3 y
\]
has the adjoint operator
\begin{align*}
L^*[y] &= \frac{d^2}{dx^2} (x^2 y) - \frac{d}{dx} (2 x y) + 3 y \\
&= x^2 y'' + 4 x y' + 2 y - 2 x y' - 2 y + 3 y \\
&= x^2 y'' + 2 x y' + 3 y.
\end{align*}
In Example 27.2.1, the adjoint operator is the same as the operator. If \(L = L^*\), the operator is said to be formally self-adjoint.
Most of the differential equations that we study in this book are second order, formally self-adjoint, with real-valued coefficient functions. Thus we wish to find the general form of this operator. Consider the operator
\[
L[y] = p_2 y'' + p_1 y' + p_0 y,
\]
where the \(p_j\)'s are real-valued functions. The adjoint operator then is
\begin{align*}
L^*[y] &= \frac{d^2}{dx^2} (p_2 y) - \frac{d}{dx} (p_1 y) + p_0 y \\
&= p_2 y'' + 2 p_2' y' + p_2'' y - p_1 y' - p_1' y + p_0 y \\
&= p_2 y'' + (2 p_2' - p_1) y' + (p_2'' - p_1' + p_0) y.
\end{align*}
Equating L and \(L^*\) yields the two equations,
\[
2 p_2' - p_1 = p_1, \quad p_2'' - p_1' + p_0 = p_0
\]
\[
p_2' = p_1, \quad p_2'' = p_1'.
\]
Thus second order, formally self-adjoint operators with real-valued coefficient functions have the form
\[
L[y] = p_2 y'' + p_2' y' + p_0 y,
\]
which is equivalent to the form
\[
L[y] = \frac{d}{dx} (p y') + q y.
\]
Any linear differential equation of the form
\[
L[y] = y'' + p_1 y' + p_0 y = f(x),
\]
where each \(p_j\) is j times continuously differentiable and real-valued, can be written as a formally self-adjoint equation. We just multiply by the factor,
\[
e^{P(x)} = \exp\left( \int^x p_1(\xi) \, d\xi \right)
\]
to obtain
\begin{align*}
\exp\left[ P(x) \right] \left( y'' + p_1 y' + p_0 y \right) &= \exp\left[ P(x) \right] f(x) \\
\frac{d}{dx} \left( \exp\left[ P(x) \right] y' \right) + \exp\left[ P(x) \right] p_0 y &= \exp\left[ P(x) \right] f(x).
\end{align*}
Example 27.2.2 Consider the equation
\[
y'' + \frac{1}{x} y' + y = 0.
\]
Multiplying by the factor
\[
\exp\left( \int^x \frac{1}{\xi} \, d\xi \right) = e^{\log x} = x
\]
will make the equation formally self-adjoint.
\begin{align*}
x y'' + y' + x y &= 0 \\
\frac{d}{dx} (x y') + x y &= 0
\end{align*}
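A hedged pointwise check of the example (y = sin is just a smooth test function chosen for the demo): the first two terms of the multiplied equation, x y″ + y′, should equal d/dx (x y′), approximated here by a central finite difference.

```python
import math

y, dy, d2y = math.sin, math.cos, lambda x: -math.sin(x)

def ddx(g, x, h=1e-6):
    """Central finite-difference approximation to g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

flux = lambda x: x * dy(x)          # x y'

for x in (0.5, 1.3, 2.9):
    direct = x * d2y(x) + dy(x)     # x y'' + y'
    print(f"x = {x}: x*y'' + y' = {direct:.8f}, (x*y')' = {ddx(flux, x):.8f}")
```

The agreement at each sample point illustrates why the self-adjoint form d/dx(xy′) + xy packages the same equation.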
Result 27.2.1 If \(L = L^*\) then the linear operator L is formally self-adjoint. Second order formally self-adjoint operators have the form
\[
L[y] = \frac{d}{dx} (p y') + q y.
\]
Any differential equation of the form
\[
L[y] = y'' + p_1 y' + p_0 y = f(x),
\]
where each \(p_j\) is j times continuously differentiable and real-valued, can be written as a formally self-adjoint equation by multiplying the equation by the factor \(\exp\left( \int^x p_1(\xi) \, d\xi \right)\).