
Springer Monographs in Mathematics


Kung-Ching Chang

Methods in
Nonlinear Analysis



Kung-Ching Chang
School of Mathematical Sciences
Peking University
100871 Beijing
People’s Republic of China
E-mail:

Library of Congress Control Number: 2005931137
Mathematics Subject Classification (2000): 47H00, 47J05, 47J07, 47J25, 47J30, 58-01,
58C15, 58E05, 49-01, 49J15, 49J35, 49J45, 49J53, 35-01
ISSN 1439-7382
ISBN-10 3-540-24133-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-24133-1 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from Springer. Violations are
liable for prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media


springeronline.com
© Springer-Verlag Berlin Heidelberg 2005
Printed in The Netherlands
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: by the authors and TechBooks using a Springer LaTeX macro package
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper

SPIN: 11369295

41/TechBooks

543210


Preface

Nonlinear analysis is a young field that was born of, and has matured through, the abundant research devoted to nonlinear problems. In the past thirty years, nonlinear analysis has undergone rapid growth and has become one of the mainstream research fields of contemporary mathematical analysis.
Many nonlinear analysis problems have their roots in geometry, astronomy,
fluid and elastic mechanics, physics, chemistry, biology, control theory, image
processing and economics. The theories and methods in nonlinear analysis
stem from many areas of mathematics: Ordinary differential equations, partial
differential equations, the calculus of variations, dynamical systems, differential geometry, Lie groups, algebraic topology, linear and nonlinear functional
analysis, measure theory, harmonic analysis, convex analysis, game theory,
optimization theory, etc. Amidst solving these problems, many branches are
intertwined, thereby advancing each other.

The author has offered a course on nonlinear analysis to graduate students at Peking University and other universities every two or three years over the past two decades. Facing an enormous amount of material, a vast number of references, the diversity of the disciplines involved, and the widely different backgrounds of the students in the audience, the author has always been concerned with how much an individual can truly learn, internalize, and benefit from a single-semester course on this subject.
The author's approach is to emphasize and to demonstrate the most fundamental principles and methods through important and interesting examples drawn from various problems in different branches of mathematics. However, there are technical difficulties: not only do most interesting problems require background knowledge from other branches of mathematics, but also, in order to solve these problems, many details in argument and in computation must be included. In such cases we have to set the real problem aside and deal with a simpler one, so that the application of the method remains understandable. The author does not always pursue each theory in its broadest generality; instead, he stresses the motivation, the successes in applications, and the limitations.



The book is the result of many years of revision of the author’s lecture
notes. Some of the more involved sections were originally used in seminars as
introductory parts of some new subjects. However, due to their importance,
the materials have been reorganized and supplemented, so that they may be
more valuable to the readers.
In addition, there are notes, remarks, and comments at the end of this
book, where important references, recent progress and further reading are
presented.
The author is indebted to Prof. Wang Zhiqiang at Utah State University,
Prof. Zhang Kewei at Sussex University and Prof. Zhou Shulin at Peking
University for their careful reading and valuable comments on Chaps. 3, 4
and 5.

Peking University
September, 2003

Kung Ching Chang


Contents

1 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Differential Calculus in Banach Spaces . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Fréchet Derivatives and Gateaux Derivatives . . . . . . . . . . 2
1.1.2 Nemytscki Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.3 High-Order Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Implicit Function Theorem and Continuity Method . . . . . . . . . . 12
1.2.1 Inverse Function Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.3 Continuity Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.3 Lyapunov–Schmidt Reduction and Bifurcation . . . . . . . . . . . . . . 30
1.3.1 Bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3.2 Lyapunov–Schmidt Reduction . . . . . . . . . . . . . . . . . . . . . . . 33
1.3.3 A Perturbation Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.3.4 Gluing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.3.5 Transversality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.4 Hard Implicit Function Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 54
1.4.1 The Small Divisor Problem . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.4.2 Nash–Moser Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

2 Fixed-Point Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.1 Order Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.2 Convex Function and Its Subdifferentials . . . . . . . . . . . . . . . . . . . 80
2.2.1 Convex Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.2.2 Subdifferentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.3 Convexity and Compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.4 Nonexpansive Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.5 Monotone Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
2.6 Maximal Monotone Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120




3 Degree Theory and Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.1 The Notion of Topological Degree . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.2 Fundamental Properties and Calculations of Brouwer Degrees . 137
3.3 Applications of Brouwer Degree . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
3.3.1 Brouwer Fixed-Point Theorem . . . . . . . . . . . . . . . . . . . . . . 148
3.3.2 The Borsuk-Ulam Theorem and Its Consequences . . . . . 148
3.3.3 Degrees for S¹ Equivariant Mappings . . . . . . . . . . . . . . . . 151
3.3.4 Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.4 Leray–Schauder Degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
3.5 The Global Bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
3.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
3.6.1 Degree Theory on Closed Convex Sets . . . . . . . . . . . . . . . 175
3.6.2 Positive Solutions and the Scaling Method . . . . . . . . . . . . 180
3.6.3 Krein–Rutman Theory for Positive Linear Operators . . . 185
3.6.4 Multiple Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
3.6.5 A Free Boundary Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 192
3.6.6 Bridging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
3.7 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
3.7.1 Set-Valued Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
3.7.2 Strict Set Contraction Mappings
and Condensing Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . 198
3.7.3 Fredholm Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200


4 Minimization Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
4.1 Variational Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
4.1.1 Constraint Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
4.1.2 Euler–Lagrange Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
4.1.3 Dual Variational Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 212
4.2 Direct Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
4.2.1 Fundamental Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
4.2.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
4.2.3 The Prescribing Gaussian Curvature Problem
and the Schwarz Symmetric Rearrangement . . . . . . . . . . 223
4.3 Quasi-Convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
4.3.1 Weak Continuity and Quasi-Convexity . . . . . . . . . . . . . . . 232
4.3.2 Morrey Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
4.3.3 Nonlinear Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
4.4 Relaxation and Young Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
4.4.1 Relaxations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
4.4.2 Young Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
4.5 Other Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
4.5.1 BV Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
4.5.2 Hardy Space and BMO Space . . . . . . . . . . . . . . . . . . . . . . . 266
4.5.3 Compensation Compactness . . . . . . . . . . . . . . . . . . . . . . . . 271
4.5.4 Applications to the Calculus of Variations . . . . . . . . . . . . 274




4.6 Free Discontinuous Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
4.6.1 Γ-convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
4.6.2 A Phase Transition Problem . . . . . . . . . . . . . . . . . . . . . . . . 280
4.6.3 Segmentation and Mumford–Shah Problem . . . . . . . . . . . 284
4.7 Concentration Compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
4.7.1 Concentration Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
4.7.2 The Critical Sobolev Exponent and the Best Constants 295
4.8 Minimax Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
4.8.1 Ekeland Variational Principle . . . . . . . . . . . . . . . . . . . . . . . 301
4.8.2 Minimax Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
4.8.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
5 Topological and Variational Methods . . . . . . . . . . . . . . . . . . . . . . 315
5.1 Morse Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
5.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
5.1.2 Deformation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5.1.3 Critical Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
5.1.4 Global Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
5.1.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
5.2 Minimax Principles (Revisited) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
5.2.1 A Minimax Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
5.2.2 Category and Ljusternik–Schnirelmann
Multiplicity Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
5.2.3 Cap Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
5.2.4 Index Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
5.2.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
5.3 Periodic Orbits for Hamiltonian System
and Weinstein Conjecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
5.3.1 Hamiltonian Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

5.3.2 Periodic Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
5.3.3 Weinstein Conjecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
5.4 Prescribing Gaussian Curvature Problem on S² . . . . . . . . . . . 380
5.4.1 The Conformal Group and the Best Constant . . . . . . . . . 380
5.4.2 The Palais–Smale Sequence . . . . . . . . . . . . . . . . . . . . . . . . . 387
5.4.3 Morse Theory for the Prescribing Gaussian Curvature
Equation on S² . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
5.5 Conley Index Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
5.5.1 Isolated Invariant Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
5.5.2 Index Pair and Conley Index . . . . . . . . . . . . . . . . . . . . . . . . 397
5.5.3 Morse Decomposition on Compact Invariant Sets
and Its Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408

Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425


1
Linearization

The first and the easiest step in studying a nonlinear problem is to linearize it, that is, to approximate the initial nonlinear problem by a linear one. Nonlinear differential equations and nonlinear integral equations can be seen as nonlinear equations on certain function spaces; in dealing with their linearizations, we turn to the differential calculus in infinite-dimensional spaces. The implicit function theorem for finite-dimensional spaces has proved very useful in all differential theories: ordinary differential equations, differential geometry, differential topology, Lie groups, etc. In this chapter we shall see that its infinite-dimensional version is equally useful in partial differential equations and other fields; in particular, in local existence, in stability, in bifurcation, in perturbation problems, and in the gluing technique. This is the content of Sects. 1.2 and 1.3. Based on Newton iterations and smoothing operators, the Nash–Moser iteration, which is motivated by the isometric embedding of Riemannian manifolds into Euclidean spaces and by KAM theory, is now a very important tool in analysis. Owing to limitations of space and time, we restrict ourselves to introducing only the spirit of the method in Sect. 1.4.

1.1 Differential Calculus in Banach Spaces
There are two kinds of derivatives in the differential calculus of several variables, the gradients and the directional derivatives. We shall extend these two to infinite-dimensional spaces.
Let X, Y and Z be Banach spaces, with norms ‖·‖_X, ‖·‖_Y, ‖·‖_Z, respectively. If there is no ambiguity, we omit the subscripts. Let U ⊂ X be an open set, and let f : U → Y be a map.



1.1.1 Fréchet Derivatives and Gateaux Derivatives
Definition 1.1.1 (Fréchet derivative) Let x0 ∈ U; we say that f is Fréchet differentiable (or F-differentiable) at x0 if ∃A ∈ L(X, Y) such that
$$\|f(x) - f(x_0) - A(x - x_0)\|_Y = o(\|x - x_0\|_X) .$$
We set f′(x0) = A and call it the Fréchet (or F-) derivative of f at x0.
If f is F-differentiable at every point in U, and if x ↦ f′(x), as a mapping from U to L(X, Y), is continuous at x0, then we say that f is continuously differentiable at x0. If f is continuously differentiable at each point in U, then we say that f is continuously differentiable on U, and denote it by f ∈ C¹(U, Y).
Parallel to the differential calculus of several variables, by definition, we
may prove the following:
1. If f is F-differentiable at x0, then f′(x0) is uniquely determined.
2. If f is F-differentiable at x0, then f must be continuous at x0.
3. (Chain rule) Assume that U ⊂ X, V ⊂ Y are open sets, that f is F-differentiable at x0, and that g is F-differentiable at f(x0), where
$$U \xrightarrow{\ f\ } V \xrightarrow{\ g\ } Z .$$
Then
$$(g \circ f)'(x_0) = g'(f(x_0)) \cdot f'(x_0) .$$
Definition 1.1.2 (Gateaux derivative) Let x0 ∈ U; we say that f is Gateaux differentiable (or G-differentiable) at x0 if ∀h ∈ X, ∃ df(x0, h) ∈ Y such that
$$\|f(x_0 + th) - f(x_0) - t\, df(x_0, h)\|_Y = o(t) \quad \text{as } t \to 0$$
for all x0 + th ∈ U. We call df(x0, h) the Gateaux derivative (or G-derivative) of f at x0.

If f is G-differentiable at x0, we have
$$\frac{d}{dt} f(x_0 + th)\Big|_{t=0} = df(x_0, h) .$$
By definition, we have the following properties:
1. If f is G-differentiable at x0, then df(x0, h) is uniquely determined.
2. df(x0, th) = t df(x0, h), ∀t ∈ R¹.
3. If f is G-differentiable at x0, then ∀h ∈ X, ∀y* ∈ Y*, the function ϕ(t) = ⟨y*, f(x0 + th)⟩ is differentiable at t = 0, and ϕ′(0) = ⟨y*, df(x0, h)⟩.
4. Assume that f : U → Y is G-differentiable at each point in U, and that the segment {x0 + th | t ∈ [0, 1]} ⊂ U; then
$$\|f(x_0 + h) - f(x_0)\|_Y \le \sup_{0 \le t \le 1} \|df(x_0 + th, h)\|_Y .$$


Proof. Let
$$\varphi_{y^*}(t) = \langle y^*, f(x_0 + th)\rangle , \quad t \in [0, 1], \ \forall y^* \in Y^* .$$
Then
$$|\langle y^*, f(x_0 + h) - f(x_0)\rangle| = |\varphi_{y^*}(1) - \varphi_{y^*}(0)| = |\varphi_{y^*}'(t^*)| = |\langle y^*, df(x_0 + t^* h, h)\rangle|$$
for some t* ∈ (0, 1) depending on y*. The conclusion follows from the Hahn–Banach theorem.
5. If f is F-differentiable at x0, then f is G-differentiable at x0, with df(x0, h) = f′(x0)h, ∀h ∈ X.
The converse is not true, but we have:
Theorem 1.1.3 Suppose that f : U → Y is G-differentiable, and that ∀x ∈ U, ∃A(x) ∈ L(X, Y) satisfying
$$df(x, h) = A(x)h \quad \forall h \in X .$$
If the mapping x ↦ A(x) is continuous at x0, then f is F-differentiable at x0, with f′(x0) = A(x0).
Proof. With no loss of generality, we assume that the segment {x0 + th | t ∈ [0, 1]} is in U. According to the Hahn–Banach theorem, ∃y* ∈ Y* with ‖y*‖ = 1 such that
$$\|f(x_0 + h) - f(x_0) - A(x_0)h\|_Y = \langle y^*, f(x_0 + h) - f(x_0) - A(x_0)h\rangle .$$
Let
$$\varphi(t) = \langle y^*, f(x_0 + th)\rangle .$$
From the mean value theorem, ∃ξ ∈ (0, 1) such that
$$|\varphi(1) - \varphi(0) - \langle y^*, A(x_0)h\rangle| = |\varphi'(\xi) - \langle y^*, A(x_0)h\rangle| = |\langle y^*, df(x_0 + \xi h, h) - A(x_0)h\rangle| = |\langle y^*, [A(x_0 + \xi h) - A(x_0)]h\rangle| = o(\|h\|) ,$$
i.e., f′(x0) = A(x0).
The importance of Theorem 1.1.3 lies in the fact that it is not easy to
write down the F-derivative for a given map directly, but the computation
of G-derivative is reduced to the differential calculus of single variables. The
same situation occurs in the differential calculus of several variables: Gradients are reduced to partial derivatives, and partial derivatives are reduced to
derivatives of single variables.
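In contrast, G-differentiability at a point alone does not imply F-differentiability. A standard two-variable counterexample (included here for illustration; it is not taken from the text) is
$$f(x, y) = \begin{cases} \dfrac{x^3 y}{x^6 + y^2}, & (x, y) \ne (0, 0), \\[4pt] 0, & (x, y) = (0, 0). \end{cases}$$
For every direction h = (h1, h2) one checks that t^{-1}[f(th) − f(0)] → 0 as t → 0, so f is G-differentiable at the origin with df(0, h) = 0; yet f(x, x³) = 1/2 for x ≠ 0, so f is not even continuous at the origin, and hence not F-differentiable there.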



Example 1. Let A ∈ L(X, Y), f(x) = Ax. Then f′(x) = A, ∀x.
Example 2. Let X = Rⁿ, Y = Rᵐ, and let ϕ1, ϕ2, . . . , ϕm ∈ C¹(Rⁿ, R¹). Set
$$f(x) = \begin{pmatrix} \varphi_1(x) \\ \vdots \\ \varphi_m(x) \end{pmatrix}, \quad \text{i.e., } f : X \to Y .$$
Then
$$f'(x_0) = \left( \frac{\partial \varphi_i(x_0)}{\partial x_j} \right)_{m \times n} .$$

Example 3. Let Ω ⊂ Rⁿ be an open bounded domain. Denote by C(Ω) the continuous function space on Ω. Let
$$\varphi : \Omega \times \mathbb{R}^1 \to \mathbb{R}^1$$
be a C¹ function. Define a mapping f : C(Ω) → C(Ω) by
$$u(x) \mapsto \varphi(x, u(x)) .$$
Then f is F-differentiable, and ∀u0 ∈ C(Ω),
$$(f'(u_0) \cdot v)(x) = \varphi_u(x, u_0(x)) \cdot v(x) \quad \forall v \in C(\Omega) .$$
Proof. ∀h ∈ C(Ω),
$$t^{-1}[f(u_0 + th) - f(u_0)](x) = \varphi_u(x, u_0(x) + t\theta(x)h(x))\, h(x) ,$$
where θ(x) ∈ (0, 1). ∀ε > 0, ∀M > 0, ∃δ = δ(M, ε) > 0 such that
$$|\varphi_u(x, \xi) - \varphi_u(x, \xi')| < \varepsilon \quad \forall x \in \Omega ,$$
as |ξ|, |ξ′| ≤ M and |ξ − ξ′| ≤ δ. We choose M = ‖u0‖ + ‖h‖; then for |t| < δ < 1,
$$|\varphi_u(x, u_0(x) + t\theta(x)h(x)) - \varphi_u(x, u_0(x))| < \varepsilon .$$
It follows that df(u0, h)(x) = ϕ_u(x, u0(x)) h(x).
Noticing that the multiplication operator h ↦ A(u)h = ϕ_u(x, u(x)) · h(x) is linear and continuous, and that the mapping u ↦ A(u) from C(Ω) into L(C(Ω), C(Ω)) is continuous, we conclude from Theorem 1.1.3 that f is F-differentiable, and
$$(f'(u_0) \cdot v)(x) = \varphi_u(x, u_0(x)) \cdot v(x) \quad \forall v \in C(\Omega) .$$
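For instance (a simple illustration of the formula above, added here), take ϕ(x, u) = u², i.e., f(u) = u². Then
$$(f'(u_0) \cdot v)(x) = 2u_0(x)\, v(x) \quad \forall v \in C(\Omega) ,$$
which is precisely the linearization of the squaring map at u0.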



We investigate nonlinear differential operators on more general spaces. Let Ω ⊂ Rⁿ be a bounded open set, let m be a nonnegative integer, and let γ ∈ (0, 1). C^m(Ω) (and the Hölder space C^{m,γ}(Ω)) is defined to be the function space consisting of C^m functions (with γ-Hölder continuous m-th order partial derivatives).
The norms are defined as follows:
$$\|u\|_{C^m} = \sum_{|\alpha| \le m} \max_{x \in \Omega} |\partial^\alpha u(x)| ,$$
and
$$\|u\|_{C^{m,\gamma}} = \|u\|_{C^m} + \sum_{|\alpha| = m} \max_{x, y \in \Omega} \frac{|\partial^\alpha u(x) - \partial^\alpha u(y)|}{|x - y|^\gamma} ,$$
where α = (α1, α2, . . . , αn) is a multi-index, |α| = α1 + α2 + · · · + αn, and ∂^α = ∂_{x_1}^{α_1} ∂_{x_2}^{α_2} · · · ∂_{x_n}^{α_n}.
We always denote by m* the number of elements of the index set {α = (α1, α2, . . . , αn) | |α| ≤ m}, and by D^m u the set {∂^α u | |α| ≤ m}.

Suppose that r is a nonnegative integer, and that ϕ ∈ C^∞(Ω × R^{r*}). Define a differential operator of order r:
$$f(u)(x) = \varphi(x, D^r u(x)) .$$
Suppose m ≥ r; then f : C^m(Ω) → C^{m−r}(Ω) (and also C^{m,γ}(Ω) → C^{m−r,γ}(Ω)) is F-differentiable. Furthermore,
$$(f'(u_0)h)(x) = \sum_{|\alpha| \le r} \varphi_\alpha(x, D^r u_0(x)) \cdot \partial^\alpha h(x) \quad \forall h \in C^m(\Omega) ,$$
where ϕ_α is the partial derivative of ϕ with respect to the variable indexed by α. The proof is similar to Example 3.


Example 4. Suppose ϕ ∈ C^∞(Ω × R^{r*}). Define
$$f(u) = \int_\Omega \varphi(x, D^r u(x))\, dx \quad \forall u \in C^r(\Omega) .$$
Then f : C^r(Ω) → R¹ is F-differentiable. Furthermore,
$$\langle f'(u_0), h\rangle = \int_\Omega \sum_{|\alpha| \le r} \varphi_\alpha(x, D^r u_0(x))\, \partial^\alpha h(x)\, dx \quad \forall h \in C^r(\Omega) .$$
Proof. Use the chain rule for the composition
$$C^r(\Omega) \xrightarrow{\ \varphi(\cdot,\, D^r u(\cdot))\ } C(\Omega) \xrightarrow{\ \int_\Omega\ } \mathbb{R}^1 ,$$
and combine the results of Examples 1 and 3.



In particular, the following functional occurs frequently in the calculus of variations (r = 1, r* = n + 1). Assume that ϕ(x, u, p) is a function of the form
$$\varphi(x, u, p) = \frac{1}{2}|p|^2 + \sum_{i=1}^n a_i(x)\, p_i + a_0(x)\, u ,$$
where p = (p1, p2, . . . , pn), and a_i(x), i = 0, 1, . . . , n, are in C(Ω). Set
$$f(u) = \int_\Omega \left[ \frac{1}{2}|\nabla u(x)|^2 + \sum_{i=1}^n a_i(x)\, \partial_{x_i} u + a_0(x)\, u(x) \right] dx ;$$
we have
$$\langle f'(u), h\rangle = \int_\Omega \left[ \nabla u(x) \cdot \nabla h(x) + \sum_{i=1}^n a_i(x)\, \partial_{x_i} h(x) + a_0(x)\, h(x) \right] dx \quad \forall h \in C^1(\Omega) .$$
Example 5. Let X be a Hilbert space, with inner product ( , ). Find the F-derivative of the norm f(x) = ‖x‖ for x ≠ θ.
Let F(x) = ‖x‖². Since
$$t^{-1}(\|x + th\|^2 - \|x\|^2) = 2(x, h) + t\|h\|^2 ,$$
we have dF(x, h) = 2(x, h). It is continuous for all x; therefore F is F-differentiable, and
$$F'(x)h = 2(x, h) .$$
Since f = F^{1/2}, by the chain rule
$$F'(x) = 2\|x\| \cdot f'(x) .$$
As x ≠ θ,
$$f'(x)h = \left( \frac{x}{\|x\|}, h \right) .$$
In the applications to PDEs as well as to the calculus of variations, Sobolev spaces are frequently used. We should extend the above studies to nonlinear operators defined on Sobolev spaces.
∀p ≥ 1 and ∀ nonnegative integer m, let
$$W^{m,p}(\Omega) = \{ u \in L^p(\Omega) \mid \partial^\alpha u \in L^p(\Omega), \ |\alpha| \le m \} ,$$
where ∂^α u stands for the α-th order generalized derivative of u, i.e., the derivative in the distribution sense. Define the norm
$$\|u\|_{W^{m,p}} = \left( \sum_{|\alpha| \le m} \|\partial^\alpha u\|^p_{L^p(\Omega)} \right)^{1/p} .$$
The resulting Banach space is called the Sobolev space of index {m, p}.
W^{m,2}(Ω) is denoted by H^m(Ω), and the closure of C_0^∞(Ω) under this norm is denoted by H_0^m(Ω).
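As a small concrete instance (a numerical illustration added here), take Ω = (0, 1) and u(x) = x(1 − x) ∈ H_0^1(Ω). Then
$$\|u\|_{W^{1,2}}^2 = \int_0^1 u^2\, dx + \int_0^1 (u')^2\, dx = \frac{1}{30} + \frac{1}{3} = \frac{11}{30} ,$$
so ‖u‖_{W^{1,2}} = (11/30)^{1/2}.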



1.1.2 Nemytscki Operator
On Sobolev spaces, we extend the composition operator u ↦ ϕ(x, u(x)) to the case where ϕ may not be continuous in x. This class of operators is sometimes called Nemytski operators.
Definition 1.1.4 Let (Ω, B, µ) be a measure space. We say that ϕ : Ω × R^N → R¹ is a Caratheodory function if
1. for a.e. x ∈ Ω, ξ ↦ ϕ(x, ξ) is continuous;
2. ∀ξ ∈ R^N, x ↦ ϕ(x, ξ) is µ-measurable.
The motivation for introducing Caratheodory functions is to make the composition function measurable whenever u(x) is merely measurable. Indeed, there exists a sequence of simple functions {u_n(x)} such that u_n(x) → u(x) a.e.; ϕ(x, u_n(x)) is measurable according to (2), and from (1), ϕ(x, u_n(x)) → ϕ(x, u(x)) a.e.; therefore ϕ(x, u(x)) is measurable.
Theorem 1.1.5 Assume p1, p2 ≥ 1, a > 0 and b ∈ L^{p_2}_{dµ}(Ω). Suppose that ϕ is a Caratheodory function satisfying
$$|\varphi(x, \xi)| \le b(x) + a|\xi|^{p_1/p_2} .$$
Then f : u(x) ↦ ϕ(x, u(x)) is a bounded and continuous mapping from L^{p_1}_{dµ}(Ω, R^N) to L^{p_2}_{dµ}(Ω, R^N).
Proof. The boundedness follows from the Minkowski inequality:
$$\|f(u)\|_{p_2} \le \|b\|_{p_2} + a\|u\|_{p_1}^{p_1/p_2} ,$$
where ‖·‖_p is the L^p_{dµ}(Ω, R^N) norm. We turn to proving the continuity. It is sufficient to prove that for every sequence {u_n}_1^∞ with u_n → u in L^{p_1}, there is a subsequence {u_{n_i}} such that f(u_{n_i}) → f(u) in L^{p_2}. Indeed, one can find a subsequence {u_{n_i}} of {u_n} which converges a.e. to u, along which ‖u_{n_i} − u_{n_{i−1}}‖_{p_1} < 1/2^i, i = 2, 3, . . .; therefore
$$|u_{n_i}(x)| \le \Phi(x) := |u_{n_1}(x)| + \sum_{i=2}^\infty |u_{n_i}(x) - u_{n_{i-1}}(x)| .$$
Since Φ is measurable, and
$$\left( \int |\Phi(x)|^{p_1}\, d\mu \right)^{1/p_1} \le \|u_{n_1}\|_{p_1} + \sum_{i=2}^\infty \|u_{n_i} - u_{n_{i-1}}\|_{p_1} < +\infty ,$$
we conclude that Φ ∈ L^{p_1}_{dµ}(Ω). Noticing that
$$f(u_{n_i}) = \varphi(x, u_{n_i}(x)) \to \varphi(x, u(x)) \quad \text{a.e.} ,$$
and
$$|f(u_{n_i})(x)| \le b(x) + a(\Phi(x))^{p_1/p_2} \in L^{p_2}_{d\mu}(\Omega) ,$$
we have ‖f(u_{n_i}) − f(u)‖_{p_2} → 0, according to the Lebesgue dominated convergence theorem. This proves the continuity of f.
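For example (an illustration appended here, not part of the original text), take ϕ(x, ξ) = ξ², so that |ϕ(x, ξ)| ≤ |ξ|^{2/1} with b ≡ 0, a = 1, p1 = 2, p2 = 1. The theorem then shows that
$$u \mapsto u^2 \quad \text{is a bounded and continuous map from } L^2_{d\mu}(\Omega) \text{ to } L^1_{d\mu}(\Omega) .$$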
Corollary 1.1.6 Let Ω ⊂ Rⁿ be a smooth bounded domain, and let 1 ≤ p1, p2 ≤ ∞. Suppose that ϕ : Ω × R^{m*} → R is a Caratheodory function satisfying
$$|\varphi(x, \xi_0, \ldots, \xi_m)| \le b(x) + a \sum_{j=0}^m |\xi_j|^{\alpha_j/p_2} ,$$
where ξ_j is a #{α = (α1, . . . , αn) | |α| = j}-vector, α_j ≤ (1/p1 − (m − j)/n)^{-1}, a > 0, and b ∈ L^{p_2}(Ω). Then f(u)(x) = ϕ(x, D^m u(x)) defines a bounded and continuous map from W^{m,p_1}(Ω) into L^{p_2}(Ω).

Corollary 1.1.7 Suppose that Ω ⊂ Rⁿ and that ϕ : Ω × R¹ → R¹ and ϕ_ξ(x, ξ) are Caratheodory functions. If |ϕ_ξ(x, ξ)| ≤ b(x) + a|ξ|^r, where b ∈ L^{2n/(n+2)}(Ω), a > 0, and r = (n + 2)/(n − 2) (if n ≤ 2, then the restriction is not necessary), then the functional
$$f(u) = \int_\Omega \varphi(x, u(x))\, dx$$
is F-differentiable on H¹(Ω), with F-derivative
$$\langle f'(u), v\rangle = \int_\Omega \varphi_\xi(x, u(x)) \cdot v(x)\, dx ,$$
where ⟨ , ⟩ is the inner product on H¹(Ω).
Proof. The Sobolev embedding theorem says that the injection i : H¹(Ω) → L^{2n/(n−2)}(Ω) is continuous, and so is the dual map i* : L^{2n/(n+2)}(Ω) → (H¹(Ω))*. According to Theorem 1.1.5, ϕ_ξ(·, ·) : L^{2n/(n−2)} → L^{2n/(n+2)} is continuous. Therefore the Gateaux derivative
$$df(u, v) = \int_\Omega \varphi_\xi(x, u(x)) \cdot v(x)\, dx \quad \forall v \in H^1(\Omega)$$
is continuous from H¹(Ω) to (H¹(Ω))*. Applying Theorem 1.1.3, we conclude that f is F-differentiable on H¹(Ω). The proof is complete.
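As a concrete case (added here for illustration), let Ω ⊂ R³ be bounded and ϕ(x, ξ) = ξ⁶/6, so that ϕ_ξ(x, ξ) = ξ⁵ satisfies the growth condition with b ≡ 0 and r = 5 = (n + 2)/(n − 2). The corollary then gives that
$$f(u) = \frac{1}{6}\int_\Omega u^6\, dx \quad \text{is F-differentiable on } H^1(\Omega), \qquad \langle f'(u), v\rangle = \int_\Omega u^5 v\, dx .$$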
Corollary 1.1.8 In Corollary 1.1.6, the differential operator
$$f(u)(x) = \varphi(x, D^m u(x))$$
from C^{l,γ}(Ω) to C^{l−m,γ}(Ω), l ≥ m, 0 ≤ γ < 1, is F-differentiable, with
$$(f'(u_0)h)(x) = \sum_{|\alpha| \le m} \varphi_\alpha(x, D^m u_0(x))\, \partial^\alpha h(x) \quad \forall h \in C^{l,\gamma}(\Omega) .$$



1.1.3 High-Order Derivatives
The second-order derivative of f at x0 is defined to be the derivative of f′(x) at x0. Since f′ : U → L(X, Y), f″(x0) should be in L(X, L(X, Y)). However, if we identify the space of bounded bilinear mappings with L(X, L(X, Y)), and verify that f″(x0) as a bilinear mapping is symmetric (see Theorem 1.1.9 below), then we can equivalently define the second derivative f″(x0) as follows: For f : U → Y, x0 ∈ U ⊂ X, if there exists a bilinear mapping f″(x0)(·, ·) : X × X → Y satisfying
$$\left\| f(x_0 + h) - f(x_0) - f'(x_0)h - \frac{1}{2} f''(x_0)(h, h) \right\| = o(\|h\|^2) \quad \forall h \in X, \ \text{as } \|h\| \to 0 ,$$
then f″(x0) is called the second-order derivative of f at x0.
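For instance (a quick illustration in the setting of Example 5 above), for F(x) = ‖x‖² = (x, x) on a Hilbert space one has the exact expansion
$$F(x_0 + h) = F(x_0) + 2(x_0, h) + \|h\|^2 ,$$
so F′(x0)h = 2(x0, h), F″(x0)(h, k) = 2(h, k), and the remainder in the definition above vanishes identically.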

By the same manner, one defines the m-th order derivatives at x0 successively: f^{(m)}(x0) : X × · · · × X → Y is an m-linear mapping satisfying
$$\left\| f(x_0 + h) - \sum_{j=0}^m \frac{f^{(j)}(x_0)(h, \ldots, h)}{j!} \right\| = o(\|h\|^m) ,$$
as ‖h‖ → 0. Then f is called m-times differentiable at x0.
Similar to the finite-dimensional vector functions, we have:

Theorem 1.1.9 Assume that f : U → Y is m-times differentiable at x0 ∈ U. Then for any permutation π of (1, . . . , m), we have
$$f^{(m)}(x_0)(h_1, \ldots, h_m) = f^{(m)}(x_0)(h_{\pi(1)}, \ldots, h_{\pi(m)}) .$$
Proof. We only prove this in the case m = 2, i.e.,
$$f''(x_0)(\xi, \eta) = f''(x_0)(\eta, \xi) \quad \forall \xi, \eta \in X .$$
Indeed, ∀y* ∈ Y*, we consider the function
$$\varphi(t, s) = \langle y^*, f(x_0 + t\xi + s\eta)\rangle .$$
It is twice differentiable at t = s = 0, so that
$$\frac{\partial^2}{\partial t \partial s}\varphi(0, 0) = \frac{\partial^2}{\partial s \partial t}\varphi(0, 0) .$$
Since f′(x0 + tξ + sη) is continuous for |t|, |s| small, one has
$$\frac{\partial}{\partial s}\varphi(t, \cdot)\Big|_{s=0} = \langle y^*, f'(x_0 + t\xi)\eta\rangle ;$$
and then
$$\frac{\partial^2}{\partial t \partial s}\varphi(t, s)\Big|_{t=s=0} = \langle y^*, f''(x_0)(\xi, \eta)\rangle .$$
Similarly,
$$\frac{\partial^2}{\partial s \partial t}\varphi(t, s)\Big|_{t=s=0} = \langle y^*, f''(x_0)(\eta, \xi)\rangle .$$
This proves the conclusion.



Theorem 1.1.10 (Taylor formula) Suppose that f : U → Y is (m + 1)-times continuously differentiable. Assume that the segment {x0 + th | t ∈ [0, 1]} ⊂ U. Then
$$f(x_0 + h) = \sum_{j=0}^m \frac{1}{j!} f^{(j)}(x_0)(h, \ldots, h) + \frac{1}{m!} \int_0^1 (1 - t)^m f^{(m+1)}(x_0 + th)(h, \ldots, h)\, dt .$$
Proof. ∀y* ∈ Y*, we consider the function
$$\varphi(t) = \langle y^*, f(x_0 + th)\rangle .$$
From the Hahn–Banach theorem and the Taylor formula for single-variable functions,
$$\varphi(1) = \sum_{j=0}^m \frac{1}{j!} \varphi^{(j)}(0) + \frac{1}{m!} \int_0^1 (1 - t)^m \varphi^{(m+1)}(t)\, dt ,$$
we obtain the desired Taylor formula for mappings between B-spaces.
Example 1. X = Rⁿ, Y = R¹. If f : X → Y is twice continuously differentiable, then
$$f''(x) = H_f(x) = \left( \frac{\partial^2 f(x)}{\partial x_i \partial x_j} \right)_{i,j=1,\ldots,n} .$$
Example 2. X = C¹(Ω, R^N), Y = R¹. Suppose that g ∈ C²(Ω × R^N, R¹). Define
$$f(u) = \int_\Omega \left[ \frac{1}{2}|\nabla u|^2 + g(x, u(x)) \right] dx$$
for u ∈ X. By definition, we have
$$f'(u) \cdot \varphi = \int_\Omega [\nabla u(x) \nabla \varphi(x) + g_u(x, u(x)) \varphi(x)]\, dx ,$$
and
$$f''(u)(\varphi, \psi) = \int_\Omega [\nabla \psi(x) \nabla \varphi(x) + g_{uu}(x, u(x)) \varphi(x) \psi(x)]\, dx .$$
Under an additional growth condition on g_{uu}:
$$|g_{uu}(x, u)| \le a(1 + |u|^{4/(n-2)}), \quad a > 0, \ \forall u \in \mathbb{R}^N ,$$
f is twice differentiable in H_0^1(Ω, R^N). As an operator from H_0^1(Ω, R^N) into itself,
$$f''(u) = id + (-\Delta)^{-1} g_{uu}(\cdot, u(\cdot))$$
is self-adjoint, or equivalently, the operator −Δ + g_{uu}(x, u(x))· defined on L² is self-adjoint with domain H² ∩ H_0^1(Ω, R^N).

Example 3. Let X = H_0^1(Ω, R³), where Ω is a plane domain. Consider the volume functional
$$Q(u) = \int_\Omega u \cdot (u_x \wedge u_y) .$$
One has
$$Q'(u) \cdot \varphi = \int_\Omega \varphi \cdot (u_x \wedge u_y) + u \cdot [(\varphi_x \wedge u_y) + (u_x \wedge \varphi_y)] ,$$
and
$$Q''(u)(\varphi, \psi) = \int_\Omega \varphi \cdot [(\psi_x \wedge u_y) + (u_x \wedge \psi_y)] + \psi \cdot [(\varphi_x \wedge u_y) + (u_x \wedge \varphi_y)] + u \cdot [(\varphi_x \wedge \psi_y) + (\psi_x \wedge \varphi_y)] ,$$
∀ϕ, ψ ∈ H_0^1(Ω, R³).
If further we assume u ∈ C²(Ω, R³), then from integration by parts and the antisymmetry of the exterior product, we have
$$\int_\Omega u \cdot (\varphi_x \wedge u_y) = -\int_\Omega (\varphi \wedge u_y) \cdot u_x + (\varphi \wedge u_{xy}) \cdot u = \int_\Omega (u_x \wedge u_y) \cdot \varphi - \int_\Omega (\varphi \wedge u_{xy}) \cdot u ,$$
and
$$\int_\Omega u \cdot (u_x \wedge \varphi_y) = -\int_\Omega u_y \cdot (u_x \wedge \varphi) + u \cdot (u_{xy} \wedge \varphi) = \int_\Omega \varphi \cdot (u_x \wedge u_y) + \int_\Omega (\varphi \wedge u_{xy}) \cdot u .$$
Therefore,
$$Q'(u) \cdot \varphi = 3\int_\Omega \varphi \cdot (u_x \wedge u_y) .$$
By the same manner, we obtain
$$Q''(u)(\varphi, \psi) = 3\int_\Omega u \cdot [(\varphi_x \wedge \psi_y) + (\varphi_y \wedge \psi_x)] .$$
Geometrically, let u : Ω → R³ be a parametrized surface in R³; Q(u) is the volume of the body enclosed by the surface.
As exercises, one computes the first- and second-order differentials of the
following functionals:



1. X = W_0^{1,p}(Ω, R¹), Y = R¹, 2 < p < ∞,
$$f(u) = \int_\Omega |\nabla u|^p\, dx$$
(a sketch of the first-order differential follows this list).
2. X = C_0^1(Ω, Rⁿ), where Ω ⊂ Rⁿ is a domain,
$$f(u) = \int_\Omega \det(\nabla u(x))\, dx .$$
3. X = C_0^1(Ω, R¹), where Ω ⊂ Rⁿ is a domain,
$$f(u) = \int_\Omega \sqrt{1 + |\nabla u|^2}\, dx .$$
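As a partial check for the first of these functionals (a sketch added here; it is not worked out in the text), applying the computation of Example 4 in Sect. 1.1.1 to the integrand |ξ|^p, one expects
$$\langle f'(u), h\rangle = p \int_\Omega |\nabla u|^{p-2}\, \nabla u \cdot \nabla h\, dx \quad \forall h \in W_0^{1,p}(\Omega, \mathbb{R}^1) .$$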



1.2 Implicit Function Theorem and Continuity Method
1.2.1 Inverse Function Theorem
It is known that the implicit function theorem for functions of several variables plays important roles in many branches of mathematics (differential manifolds, differential geometry, differential topology, etc.). Its extension to infinite-dimensional spaces is also extremely important in nonlinear analysis, as well as in the study of infinite-dimensional manifolds.
Theorem 1.2.1 (Implicit function theorem) Let X, Y, Z be Banach spaces, and let U ⊂ X × Y be an open set. Suppose that f ∈ C(U, Z) has an F-derivative w.r.t. y, and that f_y ∈ C(U, L(Y, Z)). For a point (x0, y0) ∈ U, if
$$f(x_0, y_0) = \theta , \qquad f_y^{-1}(x_0, y_0) \in L(Z, Y) ,$$
then ∃r, r1 > 0 and a unique u ∈ C(B_r(x0), B_{r1}(y0)) such that
$$B_r(x_0) \times B_{r_1}(y_0) \subset U , \qquad u(x_0) = y_0 , \qquad f(x, u(x)) = \theta \quad \forall x \in B_r(x_0) .$$
Furthermore, if f ∈ C¹(U, Z), then u ∈ C¹(B_r(x0), Y), and
$$u'(x) = -f_y^{-1}(x, u(x)) \circ f_x(x, u(x)) \quad \forall x \in B_r(x_0) . \tag{1.1}$$
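A one-dimensional illustration of the theorem and of formula (1.1) (added here; it is not part of the original text): take X = Y = Z = R¹, f(x, y) = x² + y² − 1 and (x0, y0) = (0, 1). Then f_y(x, y) = 2y is invertible near (0, 1), the implicit function is u(x) = (1 − x²)^{1/2}, and (1.1) gives
$$u'(x) = -f_y^{-1}(x, u(x))\, f_x(x, u(x)) = -\frac{2x}{2\sqrt{1 - x^2}} = -\frac{x}{\sqrt{1 - x^2}} ,$$
as direct differentiation confirms.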




Proof. (1) After replacing f by
$$g(x, y) = f_y^{-1}(x_0, y_0) \circ f(x + x_0, y + y_0) ,$$
one may assume x0 = y0 = θ, Z = Y and f_y(θ, θ) = id_Y.
(2) We shall find the solution y = u(x) ∈ B_{r1}(θ) of the equation
$$f(x, y) = \theta \quad \forall x \in B_r(\theta) .$$
Setting
$$R(x, y) = y - f(x, y) ,$$
the problem is reduced to finding the fixed point of R(x, ·), ∀x ∈ B_r(θ). We shall apply the contraction mapping theorem to the mapping R(x, ·). Firstly, we have a contraction mapping:
$$\|R(x, y_1) - R(x, y_2)\| = \|y_1 - y_2 - [f(x, y_1) - f(x, y_2)]\| = \left\| y_1 - y_2 - \int_0^1 f_y(x, ty_1 + (1 - t)y_2)\, dt \cdot (y_1 - y_2) \right\| \le \int_0^1 \|id_Y - f_y(x, ty_1 + (1 - t)y_2)\|\, dt \cdot \|y_1 - y_2\| .$$
Since f_y : U → L(Y, Y) is continuous, ∃r, r1 > 0 such that
$$\|R(x, y_1) - R(x, y_2)\| < \frac{1}{2}\|y_1 - y_2\| \tag{1.2}$$
∀(x, y_i) ∈ B_r(θ) × B_{r1}(θ), i = 1, 2.
Secondly, we are going to verify that R(x, ·) maps B̄_{r1}(θ) into B̄_{r1}(θ). Indeed,
$$\|R(x, y)\| \le \|R(x, \theta)\| + \|R(x, y) - R(x, \theta)\| \le \|f(x, \theta)\| + \frac{1}{2}\|y\| .$$
For sufficiently small r > 0 we have
$$\|f(x, \theta)\| < \frac{1}{2} r_1 \quad \forall x \in \bar{B}_r(\theta) , \tag{1.3}$$
and it follows that ‖R(x, y)‖ < r1, ∀(x, y) ∈ B_r(θ) × B_{r1}(θ). Then, ∀x ∈ B̄_r(θ), there exists a unique y ∈ B̄_{r1}(θ) satisfying f(x, y) = θ. Denote by u(x) the solution y.
(3) We claim that u ∈ C(B_r, Y). Since
$$\|u(x) - u(x')\| = \|R(x, u(x)) - R(x', u(x'))\| \le \frac{1}{2}\|u(x) - u(x')\| + \|R(x, u(x)) - R(x', u(x))\| ,$$
we obtain
$$\|u(x) - u(x')\| \le 2\|R(x, u(x)) - R(x', u(x))\| . \tag{1.4}$$
Noticing that R ∈ C(U, Y), we have u(x′) → u(x) as x′ → x.
(4) If f ∈ C¹(U, Y), we want to prove u ∈ C¹. First, by (1.2) and (1.4),
$$\|u(x) - u(x')\| \le 2\|f(x, u(x)) - f(x', u(x))\| \le 2\int_0^1 \|f_x(tx + (1 - t)x', u(x))\|\, dt \cdot \|x - x'\| .$$
Therefore
$$\|u(x + h) - u(x)\| = O(\|h\|) \quad \text{as } \|h\| \to 0 .$$
From
$$f(x + h, u(x + h)) = f(x, u(x)) = \theta ,$$
it follows that
$$[f(x + h, u(x + h)) - f(x, u(x + h))] + [f(x, u(x + h)) - f(x, u(x))] = \theta ;$$
hence
$$f_x(x, u(x + h))h + o(\|h\|) + f_y(x, u(x))(u(x + h) - u(x)) + o(\|h\|) = \theta .$$
Therefore
$$u(x + h) - u(x) + f_y^{-1}(x, u(x)) \circ f_x(x, u(x + h))h = o(\|h\|) ,$$
i.e., u ∈ C¹, and
$$u'(x) = -f_y^{-1}(x, u(x)) \circ f_x(x, u(x)) .$$


Remark 1.2.2 In the first part of Theorem 1.2.1, the space X may be assumed to be a topological space. In fact, neither linear operations nor the
properties of the norm were used.
Theorem 1.2.3 (Inverse function theorem) Let V ⊂ Y be an open set, and let g ∈ C¹(V, X). Assume y0 ∈ V and g′(y0)^{-1} ∈ L(X, Y). Then there exists δ > 0 such that B_δ(y0) ⊂ V and
$$g : B_\delta(y_0) \to g(B_\delta(y_0))$$
is a diffeomorphism. Furthermore,
$$(g^{-1})'(x_0) = g'(y_0)^{-1} , \quad \text{with } x_0 = g(y_0) . \tag{1.5}$$



Proof. Set
$$f(x, y) = x - g(y) , \qquad f \in C^1(X \times V, X) .$$
Applying the implicit function theorem (IFT) to f, there exist r > 0 and a unique u ∈ C¹(B_r(x0), B_r(y0)) satisfying
$$x = g \circ u(x) .$$
Since g is continuous, ∃δ ∈ (0, r) such that g(B_δ(y0)) ⊂ B_r(x0); therefore g : B_δ(y0) → g(B_δ(y0)) is a diffeomorphism, and (1.5) follows from (1.1).
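A quick one-dimensional check of (1.5) (added here for illustration): for g(y) = e^y on R¹ one has g′(y0) = e^{y0} ≠ 0 and g^{-1}(x) = ln x, so
$$(g^{-1})'(x_0) = \frac{1}{x_0} = e^{-y_0} = g'(y_0)^{-1} , \quad \text{with } x_0 = g(y_0) = e^{y_0} .$$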
In the spirit of the IFT, we have a nonlinear version of the Banach open
mapping theorem.
Theorem 1.2.4 (Open mapping) Let X, Y be Banach spaces, and let δ > 0 and y0 ∈ Y. Suppose that g ∈ C¹(B_δ(y0), X) and that g′(y0) : Y → X is an open map; then g is an open map in a neighborhood of y0.
Proof. We want to prove that ∃δ1 ∈ (0, δ) and r > 0 such that
$$B_r(g(y_0)) \subset g(B_{\delta_1}(y_0)) .$$
With no loss of generality, we may assume y0 = θ and g(y0) = θ. Let A = g′(θ). Since A is surjective, ∃C > 0 such that
$$\inf_{z \in \ker A} \|y - z\|_Y \le C\|Ay\|_X \quad \forall y \in Y , \tag{1.6}$$
provided by the Banach inverse theorem. One chooses δ1 ∈ (0, δ) and r > 0 satisfying
$$\|g'(y) - A\| \le \frac{1}{2(C + 1)} \quad \forall y \in B_{\delta_1}(\theta) ,$$
and
$$r < \frac{\delta_1}{2(C + 1)} .$$
Now, ∀x ∈ B_r(θ), we are going to find y ∈ B_{δ1}(θ) satisfying g(y) = x. Write
$$R(y) = g(y) - Ay .$$
The problem is equivalent to solving the following equation:
$$Ay = x - R(y) . \tag{1.7}$$
We solve it by iteration. Initially, we take h0 = θ. Suppose that h_n ∈ B_{δ1}(θ) has been chosen; from (1.6), we can find h_{n+1} satisfying
$$Ah_{n+1} = x - R(h_n) ,$$
and



$$\|h_{n+1} - h_n\| \le (C + 1)\|A(h_{n+1} - h_n)\| .$$

Thus
$$\|h_{n+1} - h_n\| \le (C + 1)\|R(h_n) - R(h_{n-1})\| \le (C + 1)\left\| \int_0^1 [\,g'(th_n + (1 - t)h_{n-1}) - A\,]\, dt \right\| \cdot \|h_n - h_{n-1}\| \le \frac{1}{2}\|h_n - h_{n-1}\| \quad \forall n \ge 1 .$$

Since
$$\|h_1\| \le (1 + C)\|x\| \le \frac{1}{2}\delta_1 ,$$
and
$$\|h_{n+1}\| \le \|h_1\| + \sum_{j=1}^n \|h_{j+1} - h_j\| \le \left( \frac{1}{2} + \cdots + \frac{1}{2^n} + \frac{1}{2^{n+1}} \right)\delta_1 < \delta_1 ,$$

it follows that hn+1 ∈ Bδ1 (θ). Then we can proceed inductively.
The sequence hn has a limit y. Obviously y is the solution of (1.7).
Essentially, the implicit function theorem is a consequence of the contraction mapping theorem. The continuity assumption on f_y in Theorem 1.2.1 seems too strong for some applications. We have a weakened version.
Theorem 1.2.5 Let X, Y, Z be Banach spaces, and let B̄_r(θ) ⊂ Y be a closed ball centered at θ with positive radius r. Suppose that T ∈ L(Y, Z) has a bounded inverse, and that η : X × B̄_r → Z satisfies the following Lipschitz condition:
$$\|\eta(x, y_1) - \eta(x, y_2)\| \le K\|y_1 - y_2\| \quad \forall y_1, y_2 \in \bar{B}_r(\theta), \ \forall x \in X ,$$
where K < ‖T^{-1}‖^{-1}. If η(θ, θ) = θ, and
$$\|\eta(x, \theta)\| \le (\|T^{-1}\|^{-1} - K)\, r ,$$
then there exists a unique u : X → B̄_r(θ) satisfying
$$Tu(x) + \eta(x, u(x)) = \theta \quad \forall x \in X .$$
Furthermore, if η is continuous, then so is u.
Proof. ∀x ∈ X we find the fixed point of the map y ↦ −T^{-1}η(x, y). It is easily verified that −T^{-1}η(x, ·) : B̄_r(θ) → B̄_r(θ) is a contraction mapping.
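For instance (a simple illustration added here), take X = Y = Z = R¹, T = id, and η(x, y) = ε sin(x + y) with 0 < ε < 1. Then K = ε < 1 = ‖T^{-1}‖^{-1}, η(0, 0) = 0, and ‖η(x, θ)‖ ≤ ε ≤ (1 − ε)r once r ≥ ε/(1 − ε); the theorem yields, for every x, a unique u(x) in the closed ball B̄_r(θ) solving the Kepler-type equation
$$u(x) + \varepsilon \sin(x + u(x)) = 0 ,$$
and u is continuous since η is.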



1.2.2 Applications
As we mentioned at the beginning of this section, the IFT plays an important role in solving nonlinear equations. However, the IFT is only a local statement; for local problems it is extremely powerful. For global solvability problems, we first solve them locally and then extend the solutions by continuation; in this case, the IFT is applied in the first step. In this subsection we present several examples to show how the method works for local problems, and in the next subsection we turn to global problems.
Example 1. (Structural stability for hyperbolic systems)
A matrix L ∈ GL(n, R) is called hyperbolic if its set of eigenvalues satisfies σ(L) ∩ iR¹ = ∅. The associated differential system reads:
$$\dot{x} = Lx, \quad x \in C^1(\mathbb{R}^1, \mathbb{R}^n) .$$
The flow φ_t = e^{Lt} ∈ GL(n, R) is also linear; its flow lines can be seen on the left of Fig. 1.1. Let ξ ∈ C^{0,1}(Rⁿ, Rⁿ) be a Lipschitzian (Lip.) map; we investigate the hyperbolic system under the nonlinear perturbation
$$\dot{x} = Lx + \xi(x), \quad x \in C^1(\mathbb{R}^1, \mathbb{R}^n) ,$$
and let ψ_t be the associated flow, whose flow lines are on the right of Fig. 1.1.
What is the relationship between φ_t and ψ_t if ξ is small? One says that the hyperbolic system is structurally stable, which means that the flow lines of φ_t and ψ_t are topologically equivalent. More precisely, there is a homeomorphism h : Rⁿ → Rⁿ such that h ∘ ψ_t = φ_t ∘ h.
We shall show that the hyperbolic system is structurally stable. Let A = e^L; then the set of eigenvalues σ(A) of A satisfies σ(A) ∩ S¹ = ∅. We decompose ψ_1 = A + f, where f ∈ C^{0,1}(Rⁿ, Rⁿ); it is known that the Lipschitz constant of f is small if that of ξ is.
For the matrix A ∈ GL(n, R), since σ(A) ∩ S¹ = ∅, the Jordan form provides the decomposition Rⁿ = E_u ⊕ E_s, where E_u, E_s are invariant subspaces on which the eigenvalues of A_u := A|_{E_u} lie outside the unit circle, and those of A_s := A|_{E_s} lie inside the unit circle. Due to these facts, one has
$$\|A_s\| < 1, \quad \|A_u^{-1}\| < 1 .$$

The following notation is used for any Banach spaces X, Y. C⁰(X, Y) stands for the space of all bounded and continuous mappings h : X → Y with norm
$$\|h\| = \sup_{x \in X} \|h(x)\|_Y .$$
C^{0,1}(X, Y) stands for the space of all bounded Lipschitzian maps from X to Y with norm: