
12.3. SECOND-ORDER NONLINEAR DIFFERENTIAL EQUATIONS 507
2°. We now consider an equation of the general form
ε y''_xx = F(x, y, y'_x)    (12.3.5.42)
subject to boundary conditions (12.3.5.33).
For the leading term of the outer expansion y = y_0(x) + ···, we have the equation
F(x, y_0, y'_0) = 0.
In the general case, when using the method of matched asymptotic expansions, the
position of the boundary layer and the form of the inner (extended) variable have to be
determined in the course of the solution of the problem.
First we assume that the boundary layer is located near the left boundary. In (12.3.5.42), we make a change of variable z = x/δ(ε) and rewrite the equation as
y''_zz = (δ^2/ε) F(δz, y, (1/δ) y'_z).    (12.3.5.43)
The function δ = δ(ε) is selected so that the right-hand side of equation (12.3.5.43) has a nonzero limit value as ε → 0, provided that z, y, and y'_z are of the order of 1.
Example 5. For F(x, y, y'_x) = –k x^λ y'_x + y, where 0 ≤ λ < 1, the substitution z = x/δ(ε) brings equation (12.3.5.42) to
y''_zz = –(δ^{1+λ}/ε) k z^λ y'_z + (δ^2/ε) y.
In order that the right-hand side of this equation have a nonzero limit value as ε → 0, one has to set δ^{1+λ}/ε = 1 or δ^{1+λ}/ε = const, where const is any positive number. It follows that δ = ε^{1/(1+λ)}.
The leading asymptotic term of the inner expansion in the boundary layer, y = y_0(z) + ···, is determined by the equation y''_0 + k z^λ y'_0 = 0, where the prime denotes differentiation with respect to z.
If the position of the boundary layer is selected incorrectly, the outer and inner expansions cannot be matched. In this situation, one should consider the case where the boundary layer is located on the right (this case is reduced to the previous one by the change of variable x = 1 – z). In Example 5 above, the boundary layer is on the left if k > 0 and on the right if k < 0.
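For the particular case k = 1, λ = 0 of Example 5, equation (12.3.5.42) becomes ε y''_xx + y'_x – y = 0, which has constant coefficients, so the leading-order composite approximation (outer solution plus left boundary-layer correction) can be compared with the exact solution. The following sketch is illustrative and not from the original text; the boundary values y(0) = 2, y(1) = 1 are chosen arbitrarily.

```python
import numpy as np

eps = 1e-3
# Exact solution of eps*y'' + y' - y = 0, y(0) = 2, y(1) = 1,
# via the characteristic roots of eps*m^2 + m - 1 = 0.
s = np.sqrt(1 + 4 * eps)
m_plus, m_minus = (-1 + s) / (2 * eps), (-1 - s) / (2 * eps)
A = np.array([[1.0, 1.0], [np.exp(m_plus), np.exp(m_minus)]])  # exp(m_minus) underflows to 0
c1, c2 = np.linalg.solve(A, np.array([2.0, 1.0]))

x = np.linspace(0.0, 1.0, 2001)
exact = c1 * np.exp(m_plus * x) + c2 * np.exp(m_minus * x)

# Leading-order matched expansion:
#   outer: y0 = e^{x-1}       (solves -y' + y = 0 with y(1) = 1)
#   inner: Y = e^{-1} + (2 - e^{-1}) e^{-z},  z = x/eps  (layer at x = 0)
# composite = outer + inner - common part (= e^{-1})
composite = np.exp(x - 1) + (2 - np.exp(-1.0)) * np.exp(-x / eps)

err = np.max(np.abs(exact - composite))
print(err)  # O(eps)
```

The maximum discrepancy is of the order ε, as expected for a leading-order composite expansion.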
There is a procedure for matching subsequent asymptotic terms of the expansion (see the seventh row and last column in Table 12.3). In its general form, this procedure can be represented as
inner expansion of the outer expansion (y-expansion for x → 0)
    = outer expansion of the inner expansion (y-expansion for z → ∞).
Remark 1. The method of matched asymptotic expansions can also be applied to construct periodic
solutions of singularly perturbed equations (e.g., in the problem of relaxation oscillations of the Van der Pol
oscillator).
Remark 2. Two boundary layers can arise in some problems (e.g., in cases where the right-hand side of equation (12.3.5.42) does not explicitly depend on y'_x).
Remark 3. The method of matched asymptotic expansions is also used for solving equations (in semi-
infinite domains) that do not degenerate at ε = 0. In such cases, there are no boundary layers; the original
variable is used in the inner domain, and an extended coordinate is introduced in the outer domain.
Remark 4. The method of matched asymptotic expansions is successfully applied for the solution of
various problems in mathematical physics that are described by partial differential equations; in particular, it
plays an important role in the theory of heat and mass transfer and in hydrodynamics.
508 ORDINARY DIFFERENTIAL EQUATIONS

12.3.6. Galerkin Method and Its Modifications (Projection Methods)
12.3.6-1. General form of an approximate solution.
Consider a boundary value problem for the equation
F[y] – f(x) = 0    (12.3.6.1)
with linear homogeneous boundary conditions* at the points x = x_1 and x = x_2 (x_1 ≤ x ≤ x_2). Here, F is a linear or nonlinear differential operator of the second order (or a higher-order operator); y = y(x) is the unknown function and f = f(x) is a given function. It is assumed that F[0] = 0.
Let us choose a sequence of linearly independent functions (called basis functions)
ϕ_n = ϕ_n(x)   (n = 1, 2, …, N)    (12.3.6.2)
satisfying the same boundary conditions as y = y(x). In all the methods considered below, an approximate solution of equation (12.3.6.1) is sought as a linear combination
y_N = Σ_{n=1}^{N} A_n ϕ_n(x),    (12.3.6.3)
with the unknown coefficients A_n to be found in the process of solving the problem.
The finite sum (12.3.6.3) is called an approximation function. The remainder R_N is obtained by substituting the finite sum into the left-hand side of equation (12.3.6.1):
R_N = F[y_N] – f(x).    (12.3.6.4)
If the remainder R_N is identically equal to zero, then the function y_N is the exact solution of equation (12.3.6.1). In general, R_N ≢ 0.
12.3.6-2. Galerkin method.
In order to find the coefficients A_n in (12.3.6.3), consider another sequence of linearly independent functions
ψ_k = ψ_k(x)   (k = 1, 2, …, N).    (12.3.6.5)
Let us multiply both sides of (12.3.6.4) by ψ_k and integrate the resulting relation over the region V = {x_1 ≤ x ≤ x_2}, in which we seek the solution of equation (12.3.6.1). Next, we equate the corresponding integrals to zero (for the exact solution, these integrals vanish). Thus, we obtain the following system of algebraic equations for the unknown coefficients A_n (the system is linear whenever the operator F is linear):
∫_{x_1}^{x_2} ψ_k R_N dx = 0   (k = 1, 2, …, N).    (12.3.6.6)
Relations (12.3.6.6) mean that the approximation function (12.3.6.3) satisfies equation (12.3.6.1) "on the average" (i.e., in the integral sense) with weights ψ_k. Introducing the scalar product ⟨g, h⟩ = ∫_{x_1}^{x_2} g h dx of arbitrary functions g and h, we can regard equations (12.3.6.6) as the condition of orthogonality of the remainder R_N to all weight functions ψ_k.

* Nonhomogeneous boundary conditions can be reduced to homogeneous ones by the change of variable z = A_2 x^2 + A_1 x + A_0 + y (the constants A_2, A_1, and A_0 are selected using the method of undetermined coefficients).
The Galerkin method can be applied not only to boundary value problems, but also to eigenvalue problems (in the latter case, one takes f = λy and seeks the eigenfunctions y_n together with the eigenvalues λ_n).
Mathematical justification of the Galerkin method for specific boundary value problems
can be found in the literature listed at the end of Chapter 12. Below we describe some other
methods that are in fact special cases of the Galerkin method.
Remark. Most often, one takes suitable sequences of polynomials or trigonometric functions as the ϕ_n(x) in the approximation function (12.3.6.3).
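As an illustration of the scheme (12.3.6.3), (12.3.6.6) (not part of the original text), the Galerkin method with the trigonometric basis ϕ_k = ψ_k = sin(kπx) can be applied to the linear test problem y''_xx + y – 1 = 0, y(0) = y(1) = 0, i.e., F[y] = y''_xx + y and f(x) = 1, whose exact solution is y = 1 – cos x + [(cos 1 – 1)/sin 1] sin x. The test problem, basis, and grid are chosen here only for illustration.

```python
import numpy as np

# Galerkin method for y'' + y - 1 = 0, y(0) = y(1) = 0,
# with basis phi_n(x) = sin(n*pi*x) (satisfies the boundary conditions).
N = 10
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = dx / 2                      # trapezoidal quadrature weights

n = np.arange(1, N + 1)
phi = np.sin(np.outer(n, np.pi * x))            # row k-1 holds sin(k*pi*x)
Fphi = -(n[:, None] * np.pi) ** 2 * phi + phi   # F[phi_n] = phi_n'' + phi_n

# Galerkin system (12.3.6.6): <psi_k, F[y_N] - f> = 0 with psi_k = phi_k
M = (phi * w) @ Fphi.T                     # M[k, n] = <phi_k, F[phi_n]>
b = (phi * w).sum(axis=1)                  # b[k]    = <phi_k, 1>
A = np.linalg.solve(M, b)
yN = A @ phi

exact = 1 - np.cos(x) + (np.cos(1) - 1) / np.sin(1) * np.sin(x)
print(np.max(np.abs(yN - exact)))
```

Since the operator is linear, the conditions (12.3.6.6) reduce to an N×N linear system; with N = 10 sine modes the approximation agrees with the exact solution to a few units of 10^{-4}.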
12.3.6-3. Bubnov–Galerkin method, the moment method, the least squares method.
1°. The sequences of functions (12.3.6.2) and (12.3.6.5) in the Galerkin method can be chosen arbitrarily. In the case of equal functions,
ϕ_k(x) = ψ_k(x)   (k = 1, 2, …, N),    (12.3.6.7)
the method is often called the Bubnov–Galerkin method.
2°. The moment method is the Galerkin method with the weight functions (12.3.6.5) being powers of x:
ψ_k = x^k.    (12.3.6.8)
3°. Sometimes the functions ψ_k are expressed in terms of the ϕ_k by the relations
ψ_k = F[ϕ_k]   (k = 1, 2, …),
where F is the differential operator of equation (12.3.6.1). This version of the Galerkin method is called the least squares method.
12.3.6-4. Collocation method.
In the collocation method, one chooses a sequence of points x_k, k = 1, …, N, and imposes the condition that the remainder (12.3.6.4) vanish at these points:
R_N = 0 at x = x_k   (k = 1, …, N).    (12.3.6.9)
When solving a specific problem, the points x_k at which the remainder R_N is set equal to zero are regarded as most significant. The number of collocation points N is taken equal to the number of terms in the sum (12.3.6.3). This allows one to obtain a complete system of algebraic equations for the unknown coefficients A_n (for linear boundary value problems, this algebraic system is linear).
Note that the collocation method is a special case of the Galerkin method with the sequence (12.3.6.5) consisting of the Dirac delta functions
ψ_k = δ(x – x_k).
In the collocation method, there is no need to calculate integrals, which essentially simplifies the solution of nonlinear problems (although this method usually yields less accurate results than other modifications of the Galerkin method).
Example. Consider the boundary value problem for the linear variable-coefficient second-order ordinary differential equation
y''_xx + g(x)y – f(x) = 0    (12.3.6.10)
subject to the boundary conditions of the first kind
y(–1) = y(1) = 0.    (12.3.6.11)
Assume that the coefficients of equation (12.3.6.10) are smooth even functions, so that f(x) = f(–x) and g(x) = g(–x). We use the collocation method for the approximate solution of problem (12.3.6.10)–(12.3.6.11).
1°. Take the polynomials
y_n(x) = x^{2n–2}(1 – x^2),   n = 1, 2, …, N,
as the basis functions; they satisfy the boundary conditions (12.3.6.11), y_n(±1) = 0.
Let us consider three collocation points
x_1 = –σ,   x_2 = 0,   x_3 = σ   (0 < σ < 1)    (12.3.6.12)
and confine ourselves to two basis functions (N = 2), so that the approximation function is taken in the form
y(x) = A_1(1 – x^2) + A_2 x^2(1 – x^2).    (12.3.6.13)
Substituting (12.3.6.13) into the left-hand side of equation (12.3.6.10) yields the remainder
R(x) = A_1[–2 + (1 – x^2)g(x)] + A_2[2 – 12x^2 + x^2(1 – x^2)g(x)] – f(x).
It must vanish at the collocation points (12.3.6.12). Taking into account the properties f(σ) = f(–σ) and g(σ) = g(–σ), we obtain two linear algebraic equations for the coefficients A_1 and A_2:
A_1[–2 + g(0)] + 2A_2 – f(0) = 0   (at x = 0),
A_1[–2 + (1 – σ^2)g(σ)] + A_2[2 – 12σ^2 + σ^2(1 – σ^2)g(σ)] – f(σ) = 0   (at x = σ).    (12.3.6.14)
2°. To be specific, let us take the following functions in equation (12.3.6.10):
f(x) = –1,   g(x) = 1 + x^2.    (12.3.6.15)
On solving the corresponding system of algebraic equations (12.3.6.14), we find the coefficients
A_1 = (σ^4 + 11)/(σ^4 + 2σ^2 + 11),   A_2 = –σ^2/(σ^4 + 2σ^2 + 11).    (12.3.6.16)
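As a quick numerical check (not in the original text), system (12.3.6.14) with the data (12.3.6.15) can be assembled and solved directly, reproducing the closed-form coefficients (12.3.6.16):

```python
import numpy as np

def collocation_coeffs(sigma):
    """Solve system (12.3.6.14) for f(x) = -1, g(x) = 1 + x^2."""
    f = lambda x: -1.0
    g = lambda x: 1.0 + x**2
    s2 = sigma**2
    M = np.array([
        [-2.0 + g(0.0), 2.0],                          # equation at x = 0
        [-2.0 + (1 - s2) * g(sigma),
         2.0 - 12 * s2 + s2 * (1 - s2) * g(sigma)],    # equation at x = sigma
    ])
    rhs = np.array([f(0.0), f(sigma)])
    return np.linalg.solve(M, rhs)

for sigma in (0.5, np.sqrt(2) / 2):
    A1, A2 = collocation_coeffs(sigma)
    d = sigma**4 + 2 * sigma**2 + 11
    print(A1, (sigma**4 + 11) / d)   # numeric vs. formula (12.3.6.16)
    print(A2, -sigma**2 / d)
```

Both choices of σ used in Fig. 12.3 reproduce the coefficients (12.3.6.16) to machine precision.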
In Fig. 12.3, the solid line depicts the numerical solution to problem (12.3.6.10)–(12.3.6.11), with the functions (12.3.6.15), obtained by the shooting method (see Paragraph 12.3.7-3). The dashed lines 1 and 2 show the approximate solutions obtained by the collocation method using formulas (12.3.6.13), (12.3.6.16) with σ = 1/2 (equidistant points) and σ = √2/2 (Chebyshev points, see Subsection 12.4.4), respectively. It is evident that both choices provide good agreement between the approximate and numerical solutions; the use of Chebyshev points gives a more accurate result.
Figure 12.3. Comparison of the numerical solution of problem (12.3.6.10), (12.3.6.11), (12.3.6.15) with the
approximate analytical solution (12.3.6.13), (12.3.6.16) obtained with the collocation method.
Remark. A convergence theorem for the collocation method applied to linear boundary value problems is given in Subsection 12.4.4, where nth-order differential equations are considered.
12.3.6-5. Method of partitioning the domain.
The domain V = {x_1 ≤ x ≤ x_2} is split into N subdomains V_k = {x_{k1} ≤ x ≤ x_{k2}}, k = 1, …, N. In this method, the weight functions are chosen as follows:
ψ_k(x) = 1 for x ∈ V_k,   ψ_k(x) = 0 for x ∉ V_k.
The subdomains V_k are chosen according to the specific properties of the problem under consideration and can generally be arbitrary (the union of all subdomains V_k may differ from the domain V, and some V_k and V_m may overlap).
12.3.6-6. Least squared error method.
Sometimes, in order to find the coefficients A_n of the approximation function (12.3.6.3), one uses the least squared error method, based on minimization of the functional
Φ = ∫_{x_1}^{x_2} R_N^2 dx → min.    (12.3.6.17)
For given functions ϕ_n in (12.3.6.3), the integral Φ is a function of the coefficients A_n. The corresponding necessary conditions of minimum in (12.3.6.17) have the form
∂Φ/∂A_n = 0   (n = 1, …, N).
This is a system of algebraic (transcendental) equations for the coefficients A_n.
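For a linear operator F, the discretized functional (12.3.6.17) is quadratic in the coefficients A_n, so its minimization reduces to a linear least-squares problem. The sketch below (illustrative code, not from the original) applies this to the example problem (12.3.6.10) with the data (12.3.6.15) and the two-term approximation (12.3.6.13); the constant grid-spacing factor in Φ is dropped since it does not affect the minimizer.

```python
import numpy as np

# Least squared error method (12.3.6.17) for y'' + (1 + x^2) y + 1 = 0,
# y(-1) = y(1) = 0, with y = A1 (1 - x^2) + A2 x^2 (1 - x^2)  (12.3.6.13).
x = np.linspace(-1.0, 1.0, 2001)
g = 1 + x**2

# The remainder R = A1*r1(x) + A2*r2(x) - f(x) is linear in (A1, A2):
r1 = -2 + (1 - x**2) * g                     # F[phi_1], phi_1 = 1 - x^2
r2 = 2 - 12 * x**2 + x**2 * (1 - x**2) * g   # F[phi_2], phi_2 = x^2 (1 - x^2)
f = -np.ones_like(x)                         # f(x) = -1

# Minimizing sum(R^2) over the grid is an ordinary least-squares problem.
B = np.column_stack([r1, r2])
(A1, A2), *_ = np.linalg.lstsq(B, f, rcond=None)
print(A1, A2)
```

By construction, the least-squares coefficients make the discretized Φ no larger than any other choice, in particular no larger than Φ evaluated at the collocation coefficients (12.3.6.16).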

12.3.7. Iteration and Numerical Methods
12.3.7-1. Method of successive approximations (Cauchy problem).
The method of successive approximations is implemented in two steps. First, the Cauchy problem
y''_xx = f(x, y, y'_x)   (equation),    (12.3.7.1)
y(x_0) = y_0,   y'_x(x_0) = y'_0   (initial conditions)    (12.3.7.2)
is reduced to an equivalent system of integral equations by the introduction of the new variable u(x) = y'_x. These integral equations have the form
u(x) = y'_0 + ∫_{x_0}^{x} f(t, y(t), u(t)) dt,   y(x) = y_0 + ∫_{x_0}^{x} u(t) dt.    (12.3.7.3)
Then the solution of system (12.3.7.3) is sought by means of successive approximations defined by the following recurrence formulas:
u_{n+1}(x) = y'_0 + ∫_{x_0}^{x} f(t, y_n(t), u_n(t)) dt,   y_{n+1}(x) = y_0 + ∫_{x_0}^{x} u_n(t) dt;   n = 0, 1, 2, …
As the initial approximation, one can take y_0(x) = y_0 and u_0(x) = y'_0.
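The recurrence can be implemented with any quadrature rule; the sketch below is illustrative and uses cumulative trapezoidal integration on a uniform grid, with the test equation y''_xx = –y, y(0) = 0, y'_x(0) = 1 (exact solution y = sin x) chosen only for checking.

```python
import numpy as np

def cumtrapz0(v, dx):
    """Cumulative trapezoidal integral of v, equal to 0 at the left endpoint."""
    out = np.zeros_like(v)
    out[1:] = np.cumsum((v[1:] + v[:-1]) / 2.0) * dx
    return out

def successive_approx(f, y0, yp0, x_end, n_iter=30, n_grid=2001):
    """Iterate the recurrence for system (12.3.7.3) on [0, x_end]."""
    x = np.linspace(0.0, x_end, n_grid)
    dx = x[1] - x[0]
    y = np.full_like(x, y0)          # initial approximation y_0(x) = y0
    u = np.full_like(x, yp0)         # initial approximation u_0(x) = y'_0
    for _ in range(n_iter):
        u_next = yp0 + cumtrapz0(f(x, y, u), dx)
        y = y0 + cumtrapz0(u, dx)    # y_{n+1} is built from u_n, as in the text
        u = u_next
    return x, y

# Test problem: y'' = -y, y(0) = 0, y'(0) = 1, exact solution y = sin x
x, y = successive_approx(lambda x, y, u: -y, 0.0, 1.0, 1.0)
print(abs(y[-1] - np.sin(1.0)))
```

On a unit interval the iteration converges rapidly; the remaining discrepancy is set by the quadrature error of the trapezoidal rule.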
12.3.7-2. Runge–Kutta method (Cauchy problem).
For the numerical integration of the Cauchy problem (12.3.7.1)–(12.3.7.2), one often uses
the Runge–Kutta method.
Let Δx be sufficiently small. We introduce the following notation:
x_k = x_0 + kΔx,   y_k = y(x_k),   y'_k = y'_x(x_k),   f_k = f(x_k, y_k, y'_k);   k = 0, 1, 2, …
The desired values y_k and y'_k are successively found by the formulas
y_{k+1} = y_k + y'_k Δx + (1/6)(f_1 + f_2 + f_3)(Δx)^2,
y'_{k+1} = y'_k + (1/6)(f_1 + 2f_2 + 2f_3 + f_4)Δx,
where
f_1 = f(x_k, y_k, y'_k),
f_2 = f(x_k + (1/2)Δx, y_k + (1/2)y'_k Δx, y'_k + (1/2)f_1 Δx),
f_3 = f(x_k + (1/2)Δx, y_k + (1/2)y'_k Δx + (1/4)f_1 (Δx)^2, y'_k + (1/2)f_2 Δx),
f_4 = f(x_k + Δx, y_k + y'_k Δx + (1/2)f_2 (Δx)^2, y'_k + f_3 Δx).
In practice, the step Δx is determined in the same way as for first-order equations (see
Remark 2 in Paragraph 12.1.10-3).
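The formulas above transcribe directly into code. The sketch below is illustrative (not from the original); the test equation y'' = –y with y(0) = 0, y'(0) = 1, whose exact solution is y = sin x, and the step Δx = 0.001 are chosen only to check the scheme.

```python
import numpy as np

def rk_step(f, x, y, yp, h):
    """One step of the Runge-Kutta scheme of Paragraph 12.3.7-2."""
    f1 = f(x, y, yp)
    f2 = f(x + h / 2, y + h / 2 * yp, yp + h / 2 * f1)
    f3 = f(x + h / 2, y + h / 2 * yp + h**2 / 4 * f1, yp + h / 2 * f2)
    f4 = f(x + h, y + h * yp + h**2 / 2 * f2, yp + h * f3)
    y_next = y + yp * h + (f1 + f2 + f3) * h**2 / 6
    yp_next = yp + (f1 + 2 * f2 + 2 * f3 + f4) * h / 6
    return y_next, yp_next

# Test problem: y'' = -y, y(0) = 0, y'(0) = 1, so y = sin x and y' = cos x
f = lambda x, y, yp: -y
x, y, yp, h = 0.0, 0.0, 1.0, 0.001
for _ in range(1000):                # integrate up to x = 1
    y, yp = rk_step(f, x, y, yp, h)
    x += h
print(abs(y - np.sin(1.0)), abs(yp - np.cos(1.0)))
```

Both y(1) and y'(1) agree with sin 1 and cos 1 to high accuracy for this step size.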

12.3.7-3. Shooting method (boundary value problems).
In order to solve the boundary value problem for equation (12.3.7.1) with the boundary conditions
y(x_1) = y_1,   y(x_2) = y_2,    (12.3.7.4)
one considers an auxiliary Cauchy problem for equation (12.3.7.1) with the initial conditions
y(x_1) = y_1,   y'_x(x_1) = a.    (12.3.7.5)
(The solution of this Cauchy problem can be obtained by the Runge–Kutta method or some other numerical method.) The parameter a is chosen so that the value of the solution y = y(x, a) at the point x = x_2 coincides with the value required by the second boundary condition in (12.3.7.4):
y(x_2, a) = y_2.
In a similar way one constructs the solution of the boundary value problem with mixed boundary conditions
y(x_1) = y_1,   y'_x(x_2) + ky(x_2) = y_2.    (12.3.7.6)
In this case, one also considers the auxiliary Cauchy problem (12.3.7.1), (12.3.7.5). The parameter a is chosen so that the solution y = y(x, a) satisfies the second boundary condition in (12.3.7.6) at the point x = x_2.
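Any Cauchy-problem integrator combined with a one-dimensional root search in the parameter a realizes this procedure. The sketch below is illustrative (not from the original): it uses a classical RK4 integrator and the secant method on the test problem y'' = –y, y(0) = 0, y(1) = sin 1, for which the exact slope is a = 1.

```python
import numpy as np

def integrate(f, x0, y0, yp0, x_end, n=1000):
    """Classical RK4 for the system y' = u, u' = f(x, y, u); returns y(x_end)."""
    h = (x_end - x0) / n
    x, y, u = x0, y0, yp0
    for _ in range(n):
        k1y, k1u = u, f(x, y, u)
        k2y, k2u = u + h / 2 * k1u, f(x + h / 2, y + h / 2 * k1y, u + h / 2 * k1u)
        k3y, k3u = u + h / 2 * k2u, f(x + h / 2, y + h / 2 * k2y, u + h / 2 * k2u)
        k4y, k4u = u + h * k3u, f(x + h, y + h * k3y, u + h * k3u)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        x += h
    return y

def shoot(f, x1, y1, x2, y2, a0=0.0, a1=1.5, tol=1e-9):
    """Find a such that y(x2, a) = y2, by the secant method on a."""
    F = lambda a: integrate(f, x1, y1, a, x2) - y2
    F0, F1 = F(a0), F(a1)
    while abs(a1 - a0) > tol:
        a0, a1, F0 = a1, a1 - F1 * (a1 - a0) / (F1 - F0), F1
        F1 = F(a1)
    return a1

a = shoot(lambda x, y, u: -y, 0.0, 0.0, 1.0, np.sin(1.0))
print(a)   # close to the exact slope a = 1
```

For a linear equation the map a → y(x_2, a) is affine, so the secant iteration lands on the answer after a single correction; for nonlinear problems several iterations are needed.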
12.3.7-4. Method of accelerated convergence in eigenvalue problems.
Consider the Sturm–Liouville problem for the second-order linear equation
[f(x)y'_x]'_x + [λg(x) – h(x)]y = 0    (12.3.7.7)
with linear homogeneous boundary conditions of the first kind
y(0) = y(1) = 0.    (12.3.7.8)
It is assumed that the functions f, f'_x, g, h are continuous and f > 0, g > 0.
First, using the Rayleigh–Ritz principle, one finds an upper estimate λ_1^0 for the first eigenvalue λ_1 [this value is determined by the right-hand side of relation (12.2.5.6)]. Then, one solves numerically the Cauchy problem for the auxiliary equation
[f(x)y'_x]'_x + [λ_1^0 g(x) – h(x)]y = 0    (12.3.7.9)
with the initial conditions
y(0) = 0,   y'_x(0) = 1.    (12.3.7.10)
The function y(x, λ_1^0) satisfies the condition y(x_0, λ_1^0) = 0, where x_0 < 1. The criterion of closeness of the exact and approximate values, λ_1 and λ_1^0, has the form of the inequality |1 – x_0| ≤ δ, where δ is a sufficiently small given constant. If this inequality does not hold, one refines the approximate eigenvalue on the basis of the formula
λ_1^1 = λ_1^0 – ε_0 f(1)[y'_x(1)]^2/‖y‖^2,   ε_0 = 1 – x_0,    (12.3.7.11)
where ‖y‖^2 = ∫_0^1 g(x)y^2(x) dx. Then the value λ_1^1 is substituted for λ_1^0 in the Cauchy problem (12.3.7.9)–(12.3.7.10). As a result, a new solution y and a new point x_1 are found, and one has to check whether the criterion |1 – x_1| ≤ δ holds. If this inequality is violated, one refines the approximate eigenvalue by means of the formula
λ_1^2 = λ_1^1 – ε_1 f(1)[y'_x(1)]^2/‖y‖^2,   ε_1 = 1 – x_1,    (12.3.7.12)
and repeats the above procedure.
Remark 1. Formulas of the type (12.3.7.11) are obtained by a perturbation method based on a transformation of the independent variable x (see Paragraph 12.3.5-2). If x_n > 1, the functions f, g, and h are smoothly extended to the interval (1, ξ], where ξ ≥ x_n.
Remark 2. The algorithm described above has the property of accelerated convergence, ε_{n+1} ∼ ε_n^2, which ensures that the relative error of the approximate solution becomes 10^{–4} to 10^{–8} after two or three iterations for ε_0 ∼ 0.1. This method is quite effective for high-precision calculations, is fail-safe, and guards against accumulation of roundoff errors.
Remark 3. In a similar way, one can compute subsequent eigenvalues λ_m, m = 2, 3, … (to that end, a suitable initial approximation λ_m^0 should be chosen).
Remark 4. A similar computation scheme can also be used in the case of boundary conditions of the second and third kinds, periodic boundary conditions, etc. (see the reference below).
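A sketch of the accelerated-convergence iteration for the simplest data f(x) = g(x) = 1, h(x) = 0, for which (12.3.7.7) becomes y'' + λy = 0 and the first eigenvalue of problem (12.3.7.7)–(12.3.7.8) is λ_1 = π^2. The initial estimate λ_1^0 = 10.5, the grid sizes, and the helper names below are illustrative, not from the original text.

```python
import numpy as np

def solve_cauchy(lam, x_end=1.5, n=3000):
    """RK4 for y'' + lam*y = 0 with y(0) = 0, y'(0) = 1 (here f = g = 1, h = 0)."""
    x = np.linspace(0.0, x_end, n + 1)
    h = x[1] - x[0]
    y = np.zeros(n + 1)
    u = np.zeros(n + 1)
    u[0] = 1.0
    for k in range(n):
        k1y, k1u = u[k], -lam * y[k]
        k2y, k2u = u[k] + h / 2 * k1u, -lam * (y[k] + h / 2 * k1y)
        k3y, k3u = u[k] + h / 2 * k2u, -lam * (y[k] + h / 2 * k2y)
        k4y, k4u = u[k] + h * k3u, -lam * (y[k] + h * k3y)
        y[k + 1] = y[k] + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        u[k + 1] = u[k] + h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    return x, y, u

def first_zero(x, y):
    """First sign change of y for x > 0.1, refined by linear interpolation."""
    idx = np.where((y[:-1] * y[1:] <= 0.0) & (x[:-1] > 0.1))[0][0]
    return x[idx] - y[idx] * (x[idx + 1] - x[idx]) / (y[idx + 1] - y[idx])

lam = 10.5                           # upper estimate of lambda_1 = pi^2
for _ in range(5):
    x, y, u = solve_cauchy(lam)
    xn = first_zero(x, y)            # zero x_n of y(x, lam)
    if abs(1.0 - xn) <= 1e-9:        # closeness criterion |1 - x_n| <= delta
        break
    h = x[1] - x[0]
    i1 = int(round(1.0 / h))         # grid index of x = 1
    y2 = y[: i1 + 1] ** 2
    norm2 = (y2.sum() - (y2[0] + y2[-1]) / 2) * h   # ||y||^2 = int_0^1 y^2 dx
    lam -= (1.0 - xn) * u[i1] ** 2 / norm2          # refinement (12.3.7.11), f(1) = 1
print(lam)   # approaches pi^2 = 9.8696...
```

Starting from λ_1^0 = 10.5 (for which x_0 ≈ 0.97), one refinement already gives λ ≈ 9.84 and the second lands on π^2 to the accuracy of the integrator, consistent with the ε_{n+1} ∼ ε_n^2 behavior noted in Remark 2.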