Differential Geometry, Analysis and Physics
Jeffrey M. Lee
© 2000 Jeffrey Marc Lee
Contents
0.1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
1 Preliminaries and Local Theory 1
1.1 Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Chain Rule, Product rule and Taylor’s Theorem . . . . . . . . . 11
1.3 Local theory of maps . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Differentiable Manifolds 15
2.1 Rough Ideas I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Topological Manifolds . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Differentiable Manifolds and Differentiable Maps . . . . . . . . . 17
2.4 Pseudo-Groups and Model Spaces . . . . . . . . . . . . . . . . . 22
2.5 Smooth Maps and Diffeomorphisms . . . . . . . . . . . . . . . . 27
2.6 Coverings and Discrete groups . . . . . . . . . . . . . . . . . . . 30
2.6.1 Covering spaces and the fundamental group . . . . . . . . 30
2.6.2 Discrete Group Actions . . . . . . . . . . . . . . . . . . . 36
2.7 Grassmannian manifolds . . . . . . . . . . . . . . . . . . . . . . . 39
2.8 Partitions of Unity . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.9 Manifolds with boundary. . . . . . . . . . . . . . . . . . . . . . . 43
3 The Tangent Structure 47
3.1 Rough Ideas II . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2 Tangent Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Interpretations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4 The Tangent Map . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5 The Tangent and Cotangent Bundles . . . . . . . . . . . . . . . . 55
3.5.1 Tangent Bundle . . . . . . . . . . . . . . . . . . . . . . . . 55


3.5.2 The Cotangent Bundle . . . . . . . . . . . . . . . . . . . . 57
3.6 Important Special Situations. . . . . . . . . . . . . . . . . . . . . 59
4 Submanifold, Immersion and Submersion. 63
4.1 Submanifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 Submanifolds of R^n . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3 Regular and Critical Points and Values . . . . . . . . . . . . . . . 66
4.4 Immersions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.5 Immersed Submanifolds and Initial Submanifolds . . . . . . . . . 71
4.6 Submersions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.7 Morse Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.8 Problem set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5 Lie Groups I 81
5.1 Definitions and Examples . . . . . . . . . . . . . . . . . . . . . . 81
5.2 Lie Group Homomorphisms . . . . . . . . . . . . . . . . . . . . . 84
6 Fiber Bundles and Vector Bundles I 87
6.1 Transition Maps and Structure . . . . . . . . . . . . . . . . . . . 94
6.2 Useful ways to think about vector bundles . . . . . . . . . . . . . 94
6.3 Sections of a Vector Bundle . . . . . . . . . . . . . . . . . . . . . 97
6.4 Sheaves, Germs and Jets . . . . . . . . . . . . . . . . . . . . . . 98
6.5 Jets and Jet bundles . . . . . . . . . . . . . . . . . . . . . . . . . 102
7 Vector Fields and 1-Forms 105
7.1 Definition of vector fields and 1-forms . . . . . . . . . . . . . . . 105
7.2 Pull back and push forward of functions and 1-forms . . . . . . . 106
7.3 Frame Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

7.4 Lie Bracket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
7.5 Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.6 Action by pullback and push-forward . . . . . . . . . . . . . . . . 112
7.7 Flows and Vector Fields . . . . . . . . . . . . . . . . . . . . . . . 114
7.8 Lie Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.9 Time Dependent Fields . . . . . . . . . . . . . . . . . . . . . . . 123
8 Lie Groups II 125
8.1 Spinors and rotation . . . . . . . . . . . . . . . . . . . . . . . . . 133
9 Multilinear Bundles and Tensor Fields 137
9.1 Multilinear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 137
9.1.1 Contraction of tensors . . . . . . . . . . . . . . . . . . . . 141
9.1.2 Alternating Multilinear Algebra . . . . . . . . . . . . . . . 142
9.1.3 Orientation on vector spaces . . . . . . . . . . . . . . . . 146
9.2 Multilinear Bundles . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.3 Tensor Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.4 Tensor Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . 149
10 Differential forms 153
10.1 Pullback of a differential form. . . . . . . . . . . . . . . . . . . . 155
10.2 Exterior Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . 156
10.3 Maxwell’s equations. . . . . . . . . . . . . . . . . . . . . . . . . 159
10.4 Lie derivative, interior product and exterior derivative. . . . . . . 161
10.5 Time Dependent Fields (Part II) . . . . . . . . . . . . . . . . . . 163
10.6 Vector valued and algebra valued forms. . . . . . . . . . . . . . . 163
10.7 Global Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.8 Orientation of manifolds with boundary . . . . . . . . . . . . . . 167
10.9 Integration of Differential Forms. . . . . . . . . . . . . . . . . . . 168
10.10 Stokes' Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.11 Vector Bundle Valued Forms. . . . . . . . . . . . . . . . . . . . . 172

11 Distributions and Frobenius’ Theorem 175
11.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
11.2 Integrability of Regular Distributions . . . . . . . . . . . . . . . 175
11.3 The local version of Frobenius' theorem . . . . . . . . . . . . . . 177
11.4 Foliations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
11.5 The Global Frobenius Theorem . . . . . . . . . . . . . . . . . . . 183
11.6 Singular Distributions . . . . . . . . . . . . . . . . . . . . . . . . 185
12 Connections on Vector Bundles 189
12.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.2 Local Frame Fields and Connection Forms . . . . . . . . . . . . . 191
12.3 Parallel Transport . . . . . . . . . . . . . . . . . . . . . . . . . . 193
12.4 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
13 Riemannian and semi-Riemannian Manifolds 201
13.1 The Linear Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 201
13.1.1 Scalar Products . . . . . . . . . . . . . . . . . . . . . . . 201
13.1.2 Natural Extensions and the Star Operator . . . . . . . . . 203
13.2 Surface Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
13.3 Riemannian and semi-Riemannian Metrics . . . . . . . . . . . . . 214
13.4 The Riemannian case (positive definite metric) . . . . . . . . . . 220
13.5 Levi-Civita Connection . . . . . . . . . . . . . . . . . . . . . . . . 221
13.6 Covariant differentiation of vector fields along maps. . . . . . . . 228
13.7 Covariant differentiation of tensor fields . . . . . . . . . . . . . . 229
13.8 Comparing the Differential Operators . . . . . . . . . . . . . . . 230
14 Formalisms for Calculation 233
14.1 Tensor Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
14.2 Covariant Exterior Calculus, Bundle-Valued Forms . . . . . . . . 234
15 Topology 235
15.1 Attaching Spaces and Quotient Topology . . . . . . . . . . . . . 235
15.2 Topological Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
15.3 Homotopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

15.4 Cell Complexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
16 Algebraic Topology 245
16.1 Axioms for a Homology Theory . . . . . . . . . . . . . . . . . . . 245
16.2 Simplicial Homology . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.3 Singular Homology . . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.4 Cellular Homology . . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.5 Universal Coefficient theorem . . . . . . . . . . . . . . . . . . . . 246
16.6 Axioms for a Cohomology Theory . . . . . . . . . . . . . . . . . 246
16.7 De Rham Cohomology . . . . . . . . . . . . . . . . . . . . . . . . 246
16.8 Topology of Vector Bundles . . . . . . . . . . . . . . . . . . . . . 246
16.9 de Rham Cohomology . . . . . . . . . . . . . . . . . . . . . . . . 248
16.10 The Mayer-Vietoris Sequence . . . . . . . . . . . . . . . . . . . . 252
16.11 Sheaf Cohomology . . . . . . . . . . . . . . . . . . . . . . . . . . 253
16.12 Characteristic Classes . . . . . . . . . . . . . . . . . . . . . . . . 253
17 Lie Groups and Lie Algebras 255
17.1 Lie Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
17.2 Classical complex Lie algebras . . . . . . . . . . . . . . . . . . . . 257
17.2.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . 258
17.3 The Adjoint Representation . . . . . . . . . . . . . . . . . . . . . 259
17.4 The Universal Enveloping Algebra . . . . . . . . . . . . . . . . . 261
17.5 The Adjoint Representation of a Lie group . . . . . . . . . . . . . 265
18 Group Actions and Homogeneous Spaces 271
18.1 Our Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
18.1.1 Left actions . . . . . . . . . . . . . . . . . . . . . . . . . . 272
18.1.2 Right actions . . . . . . . . . . . . . . . . . . . . . . . . . 273
18.1.3 Equivariance . . . . . . . . . . . . . . . . . . . . . . . . . 273
18.1.4 The action of Diff(M) and map-related vector fields. . . . 274

18.1.5 Lie derivative for equivariant bundles. . . . . . . . . . . . 274
18.2 Homogeneous Spaces. . . . . . . . . . . . . . . . . . . . . . . . . 275
19 Fiber Bundles and Connections 279
19.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
19.2 Principal and Associated Bundles . . . . . . . . . . . . . . . . . . 282
20 Analysis on Manifolds 285
20.1 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
20.1.1 Star Operator II . . . . . . . . . . . . . . . . . . . . . . . 285
20.1.2 Divergence, Gradient, Curl . . . . . . . . . . . . . . . . . 286
20.2 The Laplace Operator . . . . . . . . . . . . . . . . . . . . . . . . 286
20.3 Spectral Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.4 Hodge Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.5 Dirac Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.5.1 Clifford Algebras . . . . . . . . . . . . . . . . . . . . . . . 291
20.5.2 The Clifford group and Spinor group . . . . . . . . . . . . 296
20.6 The Structure of Clifford Algebras . . . . . . . . . . . . . . . . . 296
20.6.1 Gamma Matrices . . . . . . . . . . . . . . . . . . . . . . . 297
20.7 Clifford Algebra Structure and Representation . . . . . . . . . . 298
20.7.1 Bilinear Forms . . . . . . . . . . . . . . . . . . . . . . . . 298
20.7.2 Hyperbolic Spaces And Witt Decomposition . . . . . . . . 299
20.7.3 Witt’s Decomposition and Clifford Algebras . . . . . . . . 300
20.7.4 The Chirality operator . . . . . . . . . . . . . . . . . . . 301
20.7.5 Spin Bundles and Spin-c Bundles . . . . . . . . . . . . . . 302
20.7.6 Harmonic Spinors . . . . . . . . . . . . . . . . . . . . . . 302
21 Complex Manifolds 303
21.1 Some complex linear algebra . . . . . . . . . . . . . . . . . . . . 303
21.2 Complex structure . . . . . . . . . . . . . . . . . . . . . . . . . . 306
21.3 Complex Tangent Structures . . . . . . . . . . . . . . . . . . . . 309

21.4 The holomorphic tangent map. . . . . . . . . . . . . . . . . . . . 310
21.5 Dual spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
21.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
21.7 The holomorphic inverse and implicit functions theorems. . . . . 312
22 Classical Mechanics 315
22.1 Particle motion and Lagrangian Systems . . . . . . . . . . . . . . 315
22.1.1 Basic Variational Formalism for a Lagrangian . . . . . . . 316
22.1.2 Two examples of a Lagrangian . . . . . . . . . . . . . . . 319
22.2 Symmetry, Conservation and Noether’s Theorem . . . . . . . . . 319
22.2.1 Lagrangians with symmetries. . . . . . . . . . . . . . . . . 321
22.2.2 Lie Groups and Left Invariant Lagrangians . . . . . . . . . 322
22.3 The Hamiltonian Formalism . . . . . . . . . . . . . . . . . . . . . 322
23 Symplectic Geometry 325
23.1 Symplectic Linear Algebra . . . . . . . . . . . . . . . . . . . . . . 325
23.2 Canonical Form (Linear case) . . . . . . . . . . . . . . . . . . . . 327
23.3 Symplectic manifolds . . . . . . . . . . . . . . . . . . . . . . . . . 327
23.4 Complex Structure and Kähler Manifolds . . . . . . . . . . . . . 329
23.5 Symplectic musical isomorphisms . . . . . . . . . . . . . . . . . 332
23.6 Darboux’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 332
23.7 Poisson Brackets and Hamiltonian vector fields . . . . . . . . . . 334
23.8 Configuration space and Phase space . . . . . . . . . . . . . . . 337
23.9 Transfer of symplectic structure to the Tangent bundle . . . . . . 338
23.10 Coadjoint Orbits . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
23.11 The Rigid Body . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
23.11.1 The configuration in R^{3N} . . . . . . . . . . . . . . . . . 342
23.11.2 Modelling the rigid body on SO(3) . . . . . . . . . . . . . 342
23.11.3 The trivial bundle picture . . . . . . . . . . . . . . . . . . 343
23.12 The momentum map and Hamiltonian actions . . . . . . . . . . . 343

24 Poisson Geometry 347
24.1 Poisson Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . 347
25 Quantization 351
25.1 Operators on a Hilbert Space . . . . . . . . . . . . . . . . . . . . 351
25.2 C*-Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
25.2.1 Matrix Algebras . . . . . . . . . . . . . . . . . . . . . . . 354
25.3 Jordan-Lie Algebras . . . . . . . . . . . . . . . . . . . . . . . . . 354
26 Appendices 357
26.1 A. Primer for Manifold Theory . . . . . . . . . . . . . . . . . . . 357
26.1.1 Fixing a problem . . . . . . . . . . . . . . . . . . . . . . . 360
26.2 B. Topological Spaces . . . . . . . . . . . . . . . . . . . . . . . . 361
26.2.1 Separation Axioms . . . . . . . . . . . . . . . . . . . . . . 363
26.2.2 Metric Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 364
26.3 C. Topological Vector Spaces . . . . . . . . . . . . . . . . . . . . 365
26.3.1 Hilbert Spaces . . . . . . . . . . . . . . . . . . . . . . . . 367
26.3.2 Orthonormal sets . . . . . . . . . . . . . . . . . . . . . . . 368
26.4 D. Overview of Classical Physics . . . . . . . . . . . . . . . . . . 368
26.4.1 Units of measurement . . . . . . . . . . . . . . . . . . . . 368
26.4.2 Newton’s equations . . . . . . . . . . . . . . . . . . . . . . 369
26.4.3 Classical particle motion in a conservative field . . . . . . 370
26.4.4 Some simple mechanical systems . . . . . . . . . . . . . . 375
26.4.5 The Basic Ideas of Relativity . . . . . . . . . . . . . . . . 380
26.4.6 Variational Analysis of Classical Field Theory . . . . . . . 385
26.4.7 Symmetry and Noether’s theorem for field theory . . . . . 386
26.4.8 Electricity and Magnetism . . . . . . . . . . . . . . . . . . 388
26.4.9 Quantum Mechanics . . . . . . . . . . . . . . . . . . . . . 390
26.5 E. Calculus on Banach Spaces . . . . . . . . . . . . . . . . . . . . 390
26.6 Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

26.7 Differentiability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
26.8 Chain Rule, Product rule and Taylor’s Theorem . . . . . . . . . 400
26.9 Local theory of maps . . . . . . . . . . . . . . . . . . . . . . . . . 405
26.9.1 Linear case. . . . . . . . . . . . . . . . . . . . . . . . . . 411
26.9.2 Local (nonlinear) case. . . . . . . . . . . . . . . . . . 412
26.10 The Tangent Bundle of an Open Subset of a Banach Space . . . 413
26.11 Problem Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
26.11.1 Existence and uniqueness for differential equations . . . . 417
26.11.2 Differential equations depending on a parameter. . . . . . 418
26.12 Multilinear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . 418
26.12.1 Smooth Banach Vector Bundles . . . . . . . . . . . . . . . 435
26.12.2 Formulary . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
26.13 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
26.14 Group action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
26.15 Notation and font usage guide . . . . . . . . . . . . . . . . . . . 445
27 Bibliography 453
0.1 Preface
In this book I present differential geometry and related mathematical topics with
the help of examples from physics. It is well known that there is something
strikingly mathematical about the physical universe as it is conceived of in
the physical sciences. The convergence of physics with mathematics, especially
differential geometry, topology and global analysis is even more pronounced in
the newer quantum theories such as gauge field theory and string theory. The
amount of mathematical sophistication required for a good understanding of
modern physics is astounding. On the other hand, the philosophy of this book
is that mathematics itself is illuminated by physics and physical thinking.

The ideal of a truth that transcends all interpretation is perhaps unattain-
able. Even the two most impressively objective realities, the physical and the
mathematical, are still only approachable through, and are ultimately insepa-
rable from, our normative and linguistic background. And yet it is exactly the
tendency of these two sciences to point beyond themselves to something tran-
scendentally real that so inspires us. Whenever we interpret something real,
whether physical or mathematical, there will be those aspects which arise as
mere artifacts of our current descriptive scheme and those aspects that seem to
be objective realities which are revealed equally well through any of a multitude
of equivalent descriptive schemes, "cognitive inertial frames" as it were. This
theme is played out even within geometry itself where a viewpoint or interpre-
tive scheme translates to the notion of a coordinate system on a differentiable
manifold.
A physicist has no trouble believing that a vector field is something beyond
its representation in any particular coordinate system since the vector field it-
self is something physical. It is the way that the various coordinate descriptions
relate to each other (covariance) that manifests to the understanding the pres-
ence of an invariant physical reality. This seems to be very much how human
perception works and it is interesting that the language of tensors has shown up
in the cognitive science literature. On the other hand, there is a similar idea as
to what should count as a geometric reality. According to Felix Klein the task
of geometry is
“given a manifold and a group of transformations of the manifold,
to study the manifold configurations with respect to those features
which are not altered by the transformations of the group”
-Felix Klein 1893
The geometric is then that which is invariant under the action of the group.
As a simple example we may consider the set of points on a plane. We may
impose one of an infinite number of rectangular coordinate systems on the plane.
If, in one such coordinate system $(x, y)$, two points $P$ and $Q$ have coordinates $(x(P), y(P))$ and $(x(Q), y(Q))$ respectively, then while the differences $\Delta x = x(P) - x(Q)$ and $\Delta y = y(P) - y(Q)$ are very much dependent on the choice of these rectangular coordinates, the quantity $(\Delta x)^2 + (\Delta y)^2$ is not so dependent.
If $(X, Y)$ are any other set of rectangular coordinates then we have $(\Delta x)^2 + (\Delta y)^2 = (\Delta X)^2 + (\Delta Y)^2$. Thus we have the intuition that there is something more real about that latter quantity. Similarly, there exist distinguished systems for assigning three spatial coordinates $(x, y, z)$ and a single temporal coordinate $t$ to any simple event in the physical world as conceived of in relativity theory.
These are called inertial coordinate systems. Now according to special relativity the invariant relational quantity that exists between any two events is $(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2 - (\Delta t)^2$. We see that there is a similarity between the physical notion of the objective event and the abstract notion of geometric point. And yet the minus sign presents some conceptual challenges.
While the invariance under a group action approach to geometry is powerful
it is becoming clear to many researchers that the looser notions of groupoid and
pseudogroup have a significant role to play.
Since physical thinking and geometric thinking are so similar, and even at
times identical, it should not seem strange that we not only understand the
physical through mathematical thinking but conversely we gain better mathe-
matical understanding by a kind of physical thinking. Seeing differential geom-
etry applied to physics actually helps one understand geometric mathematics
better. Physics even inspires purely mathematical questions for research. An
example of this is the various mathematical topics that center around the no-
tion of quantization. There are interesting mathematical questions that arise
when one starts thinking about the connections between a quantum system and
its classical analogue. In some sense, the study of the Laplace operator on a

differentiable manifold and its spectrum is a “quantized version” of the study of
the geodesic flow and the whole Riemannian apparatus; curvature, volume, and
so forth. This is not the definitive interpretation of what a quantized geometry
should be and there are many areas of mathematical research that seem to be
related to the physical notions of quantum versus classical. It comes as a sur-
prise to some that the uncertainty principle is a completely mathematical notion
within the purview of harmonic analysis. Given a specific context in harmonic
analysis or spectral theory, one may actually prove the uncertainty principle.
Physical intuition may help even if one is studying a “toy physical system” that
doesn’t exist in nature or only exists as an approximation (e.g. a nonrelativistic
quantum mechanical system). At the very least, physical thinking inspires good
mathematics.
I have purposely allowed some redundancy to occur in the presentation be-
cause I believe that important ideas should be repeated.
Finally we mention that for those readers who have not seen any physics
for a while we put a short and extremely incomplete overview of physics in
an appendix. The only purpose of this appendix is to provide a sort of warm
up which might serve to jog the reader's memory of a few forgotten bits of
undergraduate level physics.
Chapter 1
Preliminaries and Local
Theory
I never could make out what those damn dots meant.
Lord Randolph Churchill
Differential geometry is one of the subjects where notation is a continual
problem. Notation that is highly precise from the vantage point of set theory and
logic tends to be fairly opaque with respect to the underlying geometric intent.
On the other hand, notation that is uncluttered and handy for calculations tends
to suffer from ambiguities when looked at under the microscope as it were. It
is perhaps worth pointing out that the kind of ambiguities we are talking about

are accepted by every calculus student without much thought. For instance,
we find (x, y, z) being used to refer variously to “indeterminates”, “a triple of
numbers”, or functions of some variable as when we write
$x(t) = (x(t), y(t), z(t))$. Also, we often write $y = f(x)$ and then even write $y = y(x)$ and $y'(x)$ or $dy/dx$,
which have apparent ambiguities. This does not mean that this notation is bad.
In fact, it can be quite useful to use slightly ambiguous notation. In fact, human

beings are generally very good at handling ambiguity and it is only when the self-conscious desire to avoid logical inconsistency is given priority over everything else that we begin to have problems. The reader should be warned that while
we will develop fairly pedantic notation we shall also not hesitate to resort to
abbreviation and notational shortcuts as the need arises (and with increasing
frequency in later chapters).
The following is a short list of notational conventions:
Category                                 Sets               Elements                   Maps
Vector Spaces                            V, W               v, w, x, y                 A, B, λ, L
Topological vector spaces (TVS)          E, V, W, R^n       v, w, x, y                 A, B, λ, L
Open sets in TVS                         U, V, O, U_α       p, x, y, v, w              f, g, ϕ, ψ
Lie Groups                               G, H, K            g, h, x, y                 h, f, g
The real (resp. complex) numbers         R (resp. C)        t, s, x, y, z, f, g, h
One of R or C                            F                  (same as above)

A more complete chart may be found at the end of the book.
The reader is reminded that for two sets A and B the Cartesian product $A \times B$ is the set of pairs $A \times B := \{(a, b) : a \in A,\ b \in B\}$. More generally, $\prod_i A_i := \{(a_i) : a_i \in A_i\}$.

Notation 1.1 Here and throughout the book the symbol combination ":=" means "equal by definition".
In particular, $\mathbb{R}^n := \mathbb{R} \times \cdots \times \mathbb{R}$ is the product of $n$ copies of the real numbers $\mathbb{R}$. Whenever we represent a linear transformation by a matrix, then the matrix acts on column vectors from the left. This means that in this context elements of $\mathbb{R}^n$ are thought of as column vectors. It is sometimes convenient to represent elements of the dual space $(\mathbb{R}^n)^*$ as row vectors, so that if $\alpha \in (\mathbb{R}^n)^*$ is represented by $(a_1, \dots, a_n)$ and $v \in \mathbb{R}^n$ is represented by $(v^1, \dots, v^n)^t$ then
$$\alpha(v) = (a_1\ \cdots\ a_n)\begin{pmatrix} v^1 \\ \vdots \\ v^n \end{pmatrix}.$$
Since we do not want to have to write the transpose symbol in every instance and for various other reasons we will sometimes use upper indices (superscripts) to label the component entries of elements of $\mathbb{R}^n$ and lower indices (subscripts) to label the component entries of elements of $(\mathbb{R}^n)^*$. Thus $(v^1, \dots, v^n)$ invites one to think of a column vector (even when the transpose symbol is not present) while $(a_1, \dots, a_n)$ is a row vector. On the other hand, a list of elements of a vector space such as a basis will be labelled using subscripts while superscripts label lists of elements of the dual space of the initially introduced space.

1.1 Calculus
Let V be a finite dimensional vector space over a field $\mathbb{F}$ where $\mathbb{F}$ is the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. Occasionally the quaternion number algebra $\mathbb{H}$ (a skew field) will be considered. Each of these spaces has a conjugation map which we take to be the identity map for $\mathbb{R}$, while for $\mathbb{C}$ and $\mathbb{H}$ we have
$$x + yi \mapsto x - yi \quad\text{and}\quad x + yi + zj + wk \mapsto x - yi - zj - wk
$$respectively. The vector spaces that we are going to be dealing with will serve as local models for the global theory. For the most part the vector spaces that serve as models will be isomorphic to $\mathbb{F}^n$, which is the set of $n$-tuples $(x^1, \dots, x^n)$ of elements of $\mathbb{F}$. However, we shall take the slightly unorthodox step of introducing notation that exhibits the variety of guises in which the space $\mathbb{F}^n$ may appear. The point is that although these spaces are isomorphic to $\mathbb{F}^n$ some might have different interpretations as, say, matrices, bilinear forms, tensors, and so on. We have the following spaces:
1. $\mathbb{F}^n$, which is the set of $n$-tuples of elements of $\mathbb{F}$ which we choose to think of as column vectors. These are written as $(x^1, \dots, x^n)^t$ or more commonly simply as $(x^1, \dots, x^n)$, where the fact that we have placed the indices as superscripts is enough to remind us that in matrix multiplication these are supposed to be columns. The standard basis for this vector space is $\{e_i\}_{1 \le i \le n}$ where $e_i$ has all zero entries except for a single 1 in the $i$-th position. The indices have been lowered on purpose to facilitate expressions like $v = \sum_i v^i e_i$. The appearance of a repeated index, one of which is a subscript while the other is a superscript, signals a summation. According to the Einstein summation convention we may omit the $\sum_i$ and write $v = v^i e_i$, the summation being implied.
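A brief computational aside (not in the original text): NumPy's einsum implements exactly this convention, summing over the repeated index $i$ in $v = v^i e_i$. The basis and components below are arbitrary illustrative choices.

import numpy as np

e = np.array([[1.0, 0.0, 0.0],     # e_1
              [1.0, 1.0, 0.0],     # e_2
              [0.0, 0.0, 2.0]])    # e_3 (rows are the basis vectors)
v_comp = np.array([2.0, -1.0, 5.0])            # the components v^i

v = np.einsum('i,ij->j', v_comp, e)            # v = v^i e_i, summed over i
print(v)                                       # [ 1. -1. 10.]
print(np.allclose(v, sum(c * ei for c, ei in zip(v_comp, e))))   # True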
2. $\mathbb{F}_n$ is the set of $n$-tuples of elements of $\mathbb{F}$ thought of as row vectors. The elements are written $(\xi_1, \dots, \xi_n)$. This space is often identified with the dual of $\mathbb{F}^n$, where the pairing becomes matrix multiplication $\langle \xi, v \rangle = \xi v$. Of course, we may also take $\mathbb{F}^n$ to be its own dual but then we must write $\langle v, w \rangle = v^t w$.
3. $\mathbb{F}^n_m$ is just the set of $m \times n$ matrices, also written $M_{m \times n}$. The elements are written as $(x^i_j)$. The standard basis is $\{e^j_i\}$ so that $A = (A^i_j) = \sum_{i,j} A^i_j e^j_i$.
4. $\mathbb{F}^{m,n}$ is also the set of $m \times n$ matrices but the elements are written as $(x_{ij})$, and these are thought of as giving maps from $\mathbb{F}^n$ to $\mathbb{F}^n$ as in $(v^i) \mapsto (w_i)$ where $w_i = \sum_j x_{ij} v^j$. A particularly interesting example is when $\mathbb{F} = \mathbb{R}$. Then if $(g_{ij})$ is a symmetric positive definite matrix we get an isomorphism $g : \mathbb{F}^n \cong \mathbb{F}_n$ given by $v_i = \sum_j g_{ij} v^j$. This provides us with an inner product on $\mathbb{F}^n$ given by $\sum_{ij} g_{ij} w^i v^j$, and the usual choice for $g_{ij}$ is $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise. Using $\delta_{ij}$ makes the standard basis on $\mathbb{F}^n$ an orthonormal basis.
5. $\mathbb{F}^{\mathbb{I}}_{\mathbb{J}}$ is the set of all elements of $\mathbb{F}$ indexed as $x^I_J$ where $I \in \mathbb{I}$ and $J \in \mathbb{J}$ for some indexing sets $\mathbb{I}$ and $\mathbb{J}$. The dimension of $\mathbb{F}^{\mathbb{I}}_{\mathbb{J}}$ is the cardinality of $\mathbb{I} \times \mathbb{J}$. To look ahead a bit, this last notation comes in handy since it allows us to reduce a monster like
$$c^{i_1 i_2 \dots i_r} = \sum_{k_1, k_2, \dots, k_m} a^{i_1 i_2 \dots i_r}_{k_1 k_2 \dots k_m}\, b^{k_1 k_2 \dots k_m}$$
to something like
$$c^I = \sum_K a^I_K b^K$$
which is more of a "cookie monster".
In every case of interest V has a natural topology that is determined by a norm. For example the space $\mathbb{F}^{\mathbb{I}}$ has an inner product $\langle v_1, v_2 \rangle = \sum_{I \in \mathbb{I}} a_I \bar{b}_I$ where $v_1 = \sum_{I \in \mathbb{I}} a_I e_I$ and $v_2 = \sum_{I \in \mathbb{I}} b_I e_I$. The inner product gives the norm in the usual way, $|v| := \langle v, v \rangle^{1/2}$, which determines a topology on V. Under this topology all the vector space operations are continuous. Furthermore, all norms give the same topology on a finite dimensional vector space.
§§
Interlude
§§
Infinite dimensions.
What about infinite dimensional spaces? Are there any “standard” spaces
in the infinite dimensional case? Well, there are a few problems that must
be addressed if one wants to include infinite dimensional spaces. We will not
systematically treat infinite dimensional manifold theory but calculus on infinite
dimensional spaces can be fairly nice if one restricts to complete normed spaces
(Banach spaces). As a sort of warm up let us step through a progressively
ambitious attempt to generalize the above spaces to infinite dimensions.
1. We could just base our generalization on the observation that an element $(x^i)$ of $\mathbb{F}^n$ is really just a function $x : \{1, 2, \dots, n\} \to \mathbb{F}$, so maybe we should consider instead the index set $\mathbb{N} = \{1, 2, 3, \dots\}$. This is fine except that we must interpret the sums like $v = v^i e_i$. The reader will no doubt realize that one possible solution is to restrict to $\infty$-tuples (sequences) that are in $\ell^2$. This is the Hilbert space of square summable sequences. This one works out very nicely although there are some things to be concerned about. We could then also consider spaces of matrices with the rows and columns infinite but square summable; these provide operators $\ell^2 \to \ell^2$. But should we restrict to trace class operators? Eventually we get to tensors, which would have to be indexed "tuples" like $(\Upsilon^{rs}_{ijk})$ which are square summable in the sense that $\sum (\Upsilon^{rs}_{ijk})^2 < \infty$.
2. Maybe we could just replace the indexing sets by subsets of the plane or even some nice measure space $\Omega$. Then our elements would just be functions, and immediately we see that we will need measurable functions. We must also find a topology that will be suitable for defining the limits we will need when we define the derivative below. The first possibility is to restrict to the square integrable functions. In other words, we could try to do everything with the Hilbert space $L^2(\Omega)$. Now what should the standard basis be? OK, now we are starting to get in trouble it seems. But do we really need a standard basis?
3. It turns out that all one needs to do calculus on the space is for it to be a
sufficiently nice topological vector space. The common choice of a Banach
space is so that the proof of the inverse function theorem goes through
(but see [KM]).
The reader is encouraged to look at the appendices for a more
formal treatment of infinite dimensional vector spaces.
§§
Next we record those parts of calculus that will be most important to our
study of differentiable manifolds. In this development of calculus the vector
spaces are of one of the finite dimensional examples given above and we shall
refer to them generically as Euclidean spaces. For convenience we will restrict
our attention to vector spaces over the real numbers $\mathbb{R}$. Each of these "Euclidean" vector spaces has a norm, the norm of $x$ being denoted by $\|x\|$. On the
other hand, with only minor changes in the proofs, everything works for Banach
spaces. In fact, we have put the proofs in an appendix where the spaces are
indeed taken to be general Banach spaces.
Definition 1.1 Let V and W be Euclidean vector spaces as above (for example $\mathbb{R}^n$ and $\mathbb{R}^m$). Let U be an open subset of V. A map $f : U \to W$ is said to be differentiable at $p \in U$ if and only if there is a (necessarily unique) linear map $Df|_p : V \to W$ such that
$$\lim_{\|v\| \to 0} \frac{\|f(p + v) - f(p) - Df|_p \cdot v\|}{\|v\|} = 0.$$
x

Notation 1.2 We will denote the set of all linear maps from V to W by $L(V, W)$. The set of all linear isomorphisms from V onto W will be denoted by $GL(V, W)$. In case $V = W$ the corresponding spaces will be denoted by $\mathfrak{gl}(V)$ and $GL(V)$.

For linear maps
T
: V
→ W
we sometimes write
T ·v instead of
T
(v)
depending
on the notational needs of the moment. In fact, a particularly useful notational
device is the following: Suppose we have map
A
: X

L
(V;
W)
.
Then we would
write
A(
x

v
or A|
x
v.
Here GL(
V) is a group under composition and is called the general linear
group . In particular, GL(V, W) is a subset of

L(V, W
) but not a linear subspace.
Definition 1.2 Let $V_i$, $i = 1, \dots, k$ and W be finite dimensional $\mathbb{F}$-vector spaces. A map $\mu : V_1 \times \cdots \times V_k \to W$ is called multilinear ($k$-multilinear) if for each $i$, $1 \le i \le k$ and each fixed $(w_1, \dots, \widehat{w_i}, \dots, w_k) \in V_1 \times \cdots \times \widehat{V_i} \times \cdots \times V_k$ we have that the map
$$v \mapsto \mu(w_1, \dots, v, \dots, w_k) \qquad (v \text{ in the } i\text{-th slot}),$$
obtained by fixing all but the $i$-th variable, is a linear map. In other words, we require that $\mu$ be $\mathbb{F}$-linear in each slot separately.
The set of all multilinear maps $V_1 \times \cdots \times V_k \to W$ will be denoted by $L(V_1, \dots, V_k; W)$. If $V_1 = \cdots = V_k = V$ then we write $L^k(V; W)$ instead of $L(V, \dots, V; W)$.
Since each vector space has a (usually obvious) inner product, we have the group of linear isometries $O(V)$ from V onto itself. That is, $O(V)$ consists of the bijective linear maps $\Phi : V \to V$ such that $\langle \Phi v, \Phi w \rangle = \langle v, w \rangle$ for all $v, w \in V$. The group $O(V)$ is called the orthogonal group.
Definition 1.3 A (bounded) multilinear map $\mu : V \times \cdots \times V \to W$ is called symmetric (resp. skew-symmetric or alternating) iff for any $v_1, v_2, \dots, v_k \in V$ we have that
$$\mu(v_1, v_2, \dots, v_k) = \mu(v_{\sigma 1}, v_{\sigma 2}, \dots, v_{\sigma k})$$
resp.
$$\mu(v_1, v_2, \dots, v_k) = \operatorname{sgn}(\sigma)\, \mu(v_{\sigma 1}, v_{\sigma 2}, \dots, v_{\sigma k})$$
for all permutations $\sigma$ on the letters $\{1, 2, \dots, k\}$. The set of all bounded symmetric (resp. skew-symmetric) multilinear maps $V \times \cdots \times V \to W$ is denoted $L^k_{\mathrm{sym}}(V; W)$ (resp. $L^k_{\mathrm{skew}}(V; W)$ or $L^k_{\mathrm{alt}}(V; W)$).
Now the space $L(V, W)$ is a normed space with the norm
$$\|l\| = \sup_{v \in V} \frac{\|l(v)\|_W}{\|v\|_V} = \sup\{\|l(v)\|_W : \|v\|_V = 1\}.$$
The spaces $L(V_1, \dots, V_k; W)$ also have norms given by
$$\|\mu\| := \sup\{\|\mu(v_1, v_2, \dots, v_k)\|_W : \|v_i\|_{V_i} = 1 \text{ for } i = 1, \dots, k\}.$$
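A short numerical aside (not from the text): for a linear map represented by a matrix $A$, the operator norm $\sup\{\|Av\| : \|v\| = 1\}$ defined above coincides with the largest singular value of $A$, which is what NumPy's matrix 2-norm returns. The random matrix and the crude sampling estimate below are only for illustration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

# crude estimate of the supremum over the unit sphere by random sampling
vs = rng.normal(size=(100000, 3))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
estimate = np.max(np.linalg.norm(vs @ A.T, axis=1))

print(estimate, np.linalg.norm(A, 2))    # the two numbers nearly agree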
Notation 1.3 In the context of $\mathbb{R}^n$, we often use the so called "multiindex notation". Let $\alpha = (\alpha_1, \dots, \alpha_n)$ where the $\alpha_i$ are non-negative integers. Such an $n$-tuple is called a multiindex. Let $|\alpha| := \alpha_1 + \cdots + \alpha_n$ and
$$\frac{\partial^\alpha f}{\partial x^\alpha} := \frac{\partial^{|\alpha|} f}{\partial (x^1)^{\alpha_1}\, \partial (x^2)^{\alpha_2} \cdots \partial (x^n)^{\alpha_n}}.$$
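As an illustrative sketch (the function and multiindex below are arbitrary choices, not from the text), SymPy can compute $\partial^\alpha f / \partial x^\alpha$ by iterated differentiation:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**3 * sp.sin(x2)
alpha = (2, 1)                               # the multiindex (2, 1), |alpha| = 3

Df = f
for var, a in zip((x1, x2), alpha):
    Df = sp.diff(Df, var, a)                 # differentiate a times w.r.t. var
print(Df)                                    # 6*x1*cos(x2)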
Proposition 1.1 There is a natural linear isomorphism $L(V, L(V, W)) \cong L^2(V; W)$ given by
$$l(v_1)(v_2) \longleftrightarrow l(v_1, v_2)$$
and we identify the two spaces. In fact, $L(V, L(V, L(V, W))) \cong L^3(V; W)$ and in general $L(V, L(V, L(V, \dots, L(V, W) \dots))) \cong L^k(V; W)$, etc.
Proof. It is easily checked that if we just define $(\iota T)(v_1)(v_2) = T(v_1, v_2)$ then $\iota T \leftrightarrow T$ does the job for the $k = 2$ case. The $k > 2$ case can be done by an inductive construction and is left as an exercise. It is also not hard to show that the isomorphism is norm preserving.
Definition 1.4 If it happens that a function $f$ is differentiable for all $p$ throughout some open set U then we say that $f$ is differentiable on U. We then have a map $Df : U \subset V \to L(V, W)$ given by $p \mapsto Df(p)$. If this map is differentiable at some $p \in V$ then its derivative at $p$ is denoted $DDf(p) = D^2 f(p)$ or $D^2 f\big|_p$ and is an element of $L(V, L(V, W)) \cong L^2(V; W)$. Similarly, we may inductively define $D^k f \in L^k(V; W)$ whenever $f$ is sufficiently nice that the process can continue.
Definition 1.5 We say that a map $f : U \subset V \to W$ is $C^r$-differentiable on U if $D^r f|_p \in L^r(V, W)$ exists for all $p \in U$ and if $D^r f$ is continuous as a map $U \to L^r(V, W)$. If $f$ is $C^r$-differentiable on U for all $r > 0$ then we say that $f$ is $C^\infty$ or smooth (on U).
Definition 1.6 A bijection $f$ between open sets $U_\alpha \subset V$ and $U_\beta \subset W$ is called a $C^r$ diffeomorphism iff $f$ and $f^{-1}$ are both $C^r$-differentiable (on $U_\alpha$ and $U_\beta$ respectively). If $r = \infty$ then we simply call $f$ a diffeomorphism. Often, we will have $W = V$ in this situation.
Let U be open in V. A map $f : U \to W$ is called a local $C^r$ diffeomorphism iff for every $p \in U$ there is an open set $U_p \subset U$ with $p \in U_p$ such that $f|_{U_p} : U_p \to f(U_p)$ is a $C^r$ diffeomorphism.
In the context of undergraduate calculus courses we are used to thinking of the derivative of a function at some $a \in \mathbb{R}$ as a number $f'(a)$ which is the slope of the tangent line on the graph at $(a, f(a))$. From the current point of view $Df(a) = Df|_a$ just gives the linear transformation $h \mapsto f'(a) \cdot h$ and the equation of the tangent line is given by $y = f(a) + f'(a)(x - a)$. This generalizes to an arbitrary differentiable map as $y = f(a) + Df(a) \cdot (x - a)$, giving a map which is the linear approximation of $f$ at $a$.
We will sometimes think of the derivative of a curve¹ $c : I \subset \mathbb{R} \to E$ at $t_0 \in I$, written $\dot{c}(t_0)$, as a velocity vector and so we are identifying $\dot{c}(t_0) \in L(\mathbb{R}, E)$ with $Dc|_{t_0} \cdot 1 \in E$. Here the number 1 is playing the role of the unit vector in $\mathbb{R}$.
Let $f : U \subset E \to F$ be a map and suppose that we have a splitting $E = E_1 \times E_2 \times \cdots \times E_n$, for example. We will write $f(x_1, \dots, x_n)$ for $(x_1, \dots, x_n) \in E_1 \times E_2 \times \cdots \times E_n$. Now for every $a = (a_1, \dots, a_n) \in E_1 \times \cdots \times E_n$ we have the partial map $f_{a,i} : y \mapsto f(a_1, \dots, y, \dots, a_n)$ where the variable $y$ is in the $i$-th slot. This is defined in some neighborhood of $a_i$ in $E_i$. We define the partial derivatives, when they exist, by $D_i f(a) = Df_{a,i}(a_i)$. These are, of course, linear maps
$$D_i f(a) : E_i \to F.$$
The partial derivative can exist even in cases where $f$ might not be differentiable in the sense we have defined. The point is that $f$ might be differentiable only in certain directions.
¹We will often use the letter $I$ to denote a generic (usually open) interval in the real line.
If $f$ has continuous partial derivatives $D_i f(x) : E_i \to F$ near $x \in E = E_1 \times E_2 \times \cdots \times E_n$ then $Df(x)$ exists and is continuous near $x$. In this case,
$$Df(x) \cdot v = \sum_{i=1}^{n} D_i f(x) \cdot v_i$$
where $v = (v_1, \dots, v_n)$.
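A quick numerical check of this formula (the splitting and the map $f$ below are hypothetical examples, not from the text): assembling $Df(x) \cdot v$ from the partial derivatives agrees with a direct finite-difference approximation.

import numpy as np

def f(x, y):                         # f : R x R -> R^2, splitting E = E_1 x E_2
    return np.array([x * np.sin(y), x ** 2 + y])

def D1f(x, y):                       # partial derivative in the first slot
    return np.array([np.sin(y), 2 * x])

def D2f(x, y):                       # partial derivative in the second slot
    return np.array([x * np.cos(y), 1.0])

x, y = 1.3, 0.4
v = np.array([0.2, -0.5])
lhs = D1f(x, y) * v[0] + D2f(x, y) * v[1]          # sum_i D_i f(x) . v_i
eps = 1e-6
rhs = (f(x + eps * v[0], y + eps * v[1]) - f(x, y)) / eps
print(lhs, rhs)                                    # nearly equal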
§§
Interlude
§§
Thinking about derivatives in infinite dimensions
The theory of differentiable manifolds is really just an extension of calculus
in a setting where, for topological reasons, we must use several coordinate systems. At any rate, once the coordinate systems are in place many endeavors
tems. At any rate, once the coordinate systems are in place many endeavors
reduce to advanced calculus type calculations. This is one reason that we re-
view calculus here. However, there is another reason. Namely, we would like to
introduce calculus on Banach spaces. This will allow us to give a good formula-
tion of the variational calculus that shows up in the study of finite dimensional
manifolds (the usual case). The idea is that the set of all maps of a certain type
between finite dimensional manifolds often turns out to be an infinite dimen-
sional manifold. We use the calculus on Banach spaces idea to define infinite
dimensional differentiable manifolds which look locally like Banach spaces. All
this will be explained in detail later.
As a sort of conceptual warm up, let us try to acquire a certain flexibility in the way we think about vectors. A vector as it is understood in some contexts is just an $n$-tuple of numbers which we picture either as a point in $\mathbb{R}^n$ or an arrow emanating from some such point, but an $n$-tuple $(x_1, \dots, x_n)$ is also a function $x : i \mapsto x(i) = x_i$ whose domain is the finite set $\{1, 2, \dots, n\}$. But then why not allow the index set to be infinite, even uncountable? In doing so we replace the $n$-tuple $(x_i)$ by the "continuous" tuple $f(x)$. We are used to the idea that something like $\sum_{i=1}^n x_i y_i$ should be replaced by an integral $\int f(x) g(x)\, dx$ when moving to these continuous tuples (functions). Another example is the replacement of matrix multiplication $\sum_j a^i_{\,j} v^j$ by the continuous analogue $\int a(x, y) v(y)\, dy$. But what would be the analogue of a vector valued function of a vector variable? Mathematicians would just consider these to be functions or maps again but it is also traditional, especially in the physics literature, to call such things functionals. An example might be an "action functional", say $S$, defined on a set of curves in $\mathbb{R}^3$ with a fixed interval $[t_0, t_1]$ as domain:
$$S[c] = \int_{t_0}^{t_1} L(c(t), \dot{c}(t), t)\, dt$$
Here, $L$ is defined on $\mathbb{R}^3 \times \mathbb{R}^3 \times [t_0, t_1]$ but $S$ takes a curve as an argument and this is not just composition of functions. Thus we will not write the all too
common expression "$S[c(t)]$". Also, $S[c]$ denotes the value of the functional at the curve $c$ and not the functional itself. Physicists might be annoyed with this but it really does help to avoid conceptual errors when learning the subject of calculus on function spaces (or general Banach spaces).
When one defines a directional derivative in the Euclidean space $\mathbb{R}^n := \{x = (x_1, \dots, x_n) : x_i \in \mathbb{R}\}$ it is through the use of a difference quotient:
$$D_h f(x) := \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon h) - f(x)}{\varepsilon}$$
which is the same thing as $D_h f(x) := \frac{d}{d\varepsilon}\big|_{\varepsilon = 0} f(x + \varepsilon h)$. Limits like this one make sense in any topological vector space². For example, if $C([0, 1])$ denotes the space of continuous functions on the interval $[0, 1]$ then one may speak of functions whose arguments are elements of $C([0, 1])$. Here is a simple example of such a "functional":
$$F[f] := \int_{[0,1]} f^2(x)\, dx$$
The use of square brackets to contain the argument is a physics tradition that serves to warn the reader that the argument is from a space of functions. Notice that this example is not a linear functional. Now given any such functional, say $F$, we may define the directional derivative of $F$ at $f \in C([0, 1])$ in the direction of the function $h \in C([0, 1])$ to be
$$D_h F(f) := \lim_{\varepsilon \to 0} \frac{F(f + \varepsilon h) - F(f)}{\varepsilon}$$
whenever the limit exists.
Example 1.1 Let $F[f] := \int_{[0,1]} f^2(x)\, dx$ as above and let $f(x) = x^3$ and $h(x) = \sin(\pi x^4)$. Then
$$\begin{aligned}
D_h F(f) &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\bigl(F(f + \varepsilon h) - F(f)\bigr) = \frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} F(f + \varepsilon h) \\
&= \frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} \int_{[0,1]} (f(x) + \varepsilon h(x))^2\, dx = 2 \int_{[0,1]} f(x) h(x)\, dx \\
&= 2 \int_0^1 x^3 \sin(\pi x^4)\, dx = \frac{1}{\pi}.
\end{aligned}$$
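The computation in Example 1.1 can be checked numerically; the following sketch (not part of the text) approximates the difference quotient defining $D_h F(f)$ and compares it with the exact value $1/\pi$.

import numpy as np

x = np.linspace(0.0, 1.0, 200001)
f = x ** 3
h = np.sin(np.pi * x ** 4)

def F(g):
    return np.trapz(g ** 2, x)          # F[g] = integral of g^2 over [0, 1]

eps = 1e-6
numerical = (F(f + eps * h) - F(f)) / eps
print(numerical, 1.0 / np.pi)           # both are about 0.3183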
Note well that $h$ and $f$ are functions but here they are, more importantly, "points" in a function space! What we are differentiating is $F$. Again, $F[f]$ is not a composition of functions; $f$ is the dependent variable here.
²"Topological vector space" will be defined shortly.
Exercise 1.1 See if you can make sense out of the expressions and analogies in the following chart:
$$\begin{aligned}
x &\rightsquigarrow f \\
f(x) &\rightsquigarrow F[f] \\
df &\rightsquigarrow \delta F \\
\frac{\partial f}{\partial x^i} &\rightsquigarrow \frac{\delta F}{\delta f(x)} \\
\sum \frac{\partial f}{\partial x^i}\, v^i &\rightsquigarrow \int \frac{\delta F}{\delta f(x)}\, v(x)\, dx \\
\int \cdots \int f(x)\, dx^1 \cdots dx^n &\rightsquigarrow \int F[f] \prod_x df(x)
\end{aligned}$$
Exercise 1.2 Some of these may seem mysterious, especially the last one, which still lacks a general rigorous definition that covers all the cases needed in quantum theory. Don't worry if you are not familiar with this one. The third one in the list is only mysterious because we use $\delta$. Once we are comfortable with calculus in the Banach space setting we will see that $\delta F$ just means the same thing as $dF$ whenever $F$ is defined on a function space. In this context $dF$ is a linear functional on a Banach space.
So it seems that we can do calculus on infinite dimensional spaces. There are
several subtle points that arise. For instance, there must be a topology on the
space with respect to which addition and scalar multiplication are continuous.
This is the meaning of topological vector space. Also, in order for the derivative
to be unique the topology must be Hausdorff. But there are more things to
worry about.
We are also interested in having a version of the inverse mapping theorem. It
turns out that most familiar facts from calculus on $\mathbb{R}^n$ go through if we replace $\mathbb{R}^n$ by a complete normed space (see 26.21). There are at least two issues that remain even if we restrict ourselves to Banach spaces. First, the existence of smooth bump functions and smooth partitions of unity (to be defined below) is not guaranteed. The existence of smooth bump functions and smooth partitions of unity for infinite dimensional manifolds is a case by case issue, while in the finite dimensional case their existence is guaranteed. Second, there is the
fact that a subspace of a Banach space is not a Banach space unless it is a
closed subspace. This fact forces us to introduce the notion of a split subspace
and the statements of the Banach spaces versions of several familiar theorems,
including the implicit function theorem, become complicated by extra conditions
concerning the need to use split (complemented) subspaces.
§§
1.2 Chain Rule, Product rule and Taylor's Theorem

Theorem 1.1 (Chain Rule) Let $U_1$ and $U_2$ be open subsets of Euclidean spaces $E_1$ and $E_2$ respectively. Suppose we have continuous maps composing as
$$U_1 \xrightarrow{\ f\ } U_2 \xrightarrow{\ g\ } E_3$$
where $E_3$ is a third Euclidean space. If $f$ is differentiable at $p$ and $g$ is differentiable at $f(p)$ then the composition is differentiable at $p$ and $D(g \circ f) = Dg(f(p)) \circ Df(p)$. In other words, if $v \in E_1$ then
$$D(g \circ f)|_p \cdot v = Dg|_{f(p)} \cdot (Df|_p \cdot v).$$
Furthermore, if $f \in C^r(U_1)$ and $g \in C^r(U_2)$ then $g \circ f \in C^r(U_1)$.
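A small numerical sanity check of the chain rule (the maps $f$ and $g$ below are arbitrary illustrative choices, not from the text):

import numpy as np

f  = lambda p: np.array([p[0] * p[1], np.sin(p[0])])
Df = lambda p: np.array([[p[1], p[0]], [np.cos(p[0]), 0.0]])

g  = lambda q: np.array([q[0] + q[1] ** 2, np.exp(q[0])])
Dg = lambda q: np.array([[1.0, 2.0 * q[1]], [np.exp(q[0]), 0.0]])

p = np.array([0.7, -1.2])
chain = Dg(f(p)) @ Df(p)                 # composition of the two derivatives

eps = 1e-6                               # compare with a finite-difference Jacobian
fd = np.column_stack([(g(f(p + eps * e)) - g(f(p))) / eps for e in np.eye(2)])
print(np.allclose(chain, fd, atol=1e-4)) # True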
We will often use the following lemma without explicit mention when calcu-
lating:

Lemma 1.1 Let $f : U \subset V \to W$ be twice differentiable at $x_0 \in U \subset V$; then the map $D_v f : x \mapsto Df(x) \cdot v$ is differentiable at $x_0$ and its derivative at $x_0$ is given by
$$D(D_v f)|_{x_0} \cdot h = D^2 f(x_0)(h, v).$$
Theorem 1.2 If $f : U \subset V \to W$ is twice differentiable on U such that $D^2 f$ is continuous, i.e. if $f \in C^2(U)$, then $D^2 f$ is symmetric:
$$D^2 f(p)(w, v) = D^2 f(p)(v, w).$$
More generally, if $D^k f$ exists and is continuous then $D^k f(p) \in L^k_{\mathrm{sym}}(V; W)$.
Theorem 1.3 Let $\star \in L(F_1, F_2; W)$ be a bilinear map and let $f_1 : U \subset E \to F_1$ and $f_2 : U \subset E \to F_2$ be differentiable (resp. $C^r$, $r \ge 1$) maps. Then the composition $\star(f_1, f_2)$ is differentiable (resp. $C^r$, $r \ge 1$) on U, where $\star(f_1, f_2) : x \mapsto \star(f_1(x), f_2(x))$. Furthermore,
$$D\star|_x (f_1, f_2) \cdot v = \star(Df_1|_x \cdot v,\ f_2(x)) + \star(f_1(x),\ Df_2|_x \cdot v).$$
In particular, if $F$ is an algebra with product $\star$ and $f_1 : U \subset E \to F$ and $f_2 : U \subset E \to F$ then $f_1 \star f_2$ is defined as a function and
$$D(f_1 \star f_2) \cdot v = (Df_1 \cdot v) \star f_2 + f_1 \star (Df_2 \cdot v).$$
1.3 Local theory of maps
Inverse Mapping Theorem
Definition 1.7 Let E and F be Euclidean vector spaces. A map will be called a $C^r$ diffeomorphism near $p$ if there is some open set $U \subset \operatorname{dom}(f)$ containing $p$ such that $f|_U : U \to f(U)$ is a $C^r$ diffeomorphism onto an open set $f(U)$. The set of all maps which are diffeomorphisms near $p$ will be denoted $\operatorname{Diff}^r_p(E, F)$. If $f$ is a $C^r$ diffeomorphism near $p$ for all $p \in U = \operatorname{dom}(f)$ then we say that $f$ is a local $C^r$ diffeomorphism.
Theorem 1.4 (Implicit Function Theorem I) Let $E_1$, $E_2$ and $F$ be Euclidean vector spaces and let $U \times V \subset E_1 \times E_2$ be open. Let $f : U \times V \to F$ be a $C^r$ mapping such that $f(x_0, y_0) = 0$. If $D_2 f(x_0, y_0) : E_2 \to F$ is a continuous linear isomorphism then there exist a (possibly smaller) open set $U_0 \subset U$ with $x_0 \in U_0$ and a unique mapping $g : U_0 \to V$ with $g(x_0) = y_0$ such that
$$f(x, g(x)) = 0$$
for all $x \in U_0$.
Proof. Follows from the following theorem.
Theorem 1.5 (Implicit Function Theorem II) Let $E_1$, $E_2$ and $F$ be as above and $U \times V \subset E_1 \times E_2$ open. Let $f : U \times V \to F$ be a $C^r$ mapping such that $f(x_0, y_0) = w_0$. If $D_2 f(x_0, y_0) : E_2 \to F$ is a continuous linear isomorphism then there exist (possibly smaller) open sets $U_0 \subset U$ and $W_0 \subset F$ with $x_0 \in U_0$ and $w_0 \in W_0$, together with a unique mapping $g : U_0 \times W_0 \to V$ such that
$$f(x, g(x, w)) = w$$
for all $x \in U_0$. Here unique means that any other such function $h$ defined on a neighborhood $U_0' \times W_0'$ will equal $g$ on some neighborhood of $(x_0, w_0)$.
Proof. Sketch: Let $\Psi : U \times V \to E_1 \times F$ be defined by $\Psi(x, y) = (x, f(x, y))$. Then $D\Psi(x_0, y_0)$ has the operator matrix
$$\begin{pmatrix} \operatorname{id}_{E_1} & 0 \\ D_1 f(x_0, y_0) & D_2 f(x_0, y_0) \end{pmatrix}$$
which shows that $D\Psi(x_0, y_0)$ is an isomorphism. Thus $\Psi$ has a unique local inverse $\Psi^{-1}$ which we may take to be defined on a product set $U_0 \times W_0$. Now $\Psi^{-1}$ must have the form $(x, y) \mapsto (x, g(x, y))$, which means that $(x, f(x, g(x, w))) = \Psi(x, g(x, w)) = (x, w)$. Thus $f(x, g(x, w)) = w$. The fact that $g$ is unique follows from the local uniqueness of the inverse $\Psi^{-1}$ and is left as an exercise.
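A concrete instance of Theorem 1.4 (the function below is a standard illustrative choice, not taken from the text): for $f(x, y) = x^2 + y^2 - 1$ we have $f(0, 1) = 0$ and $D_2 f(0, 1) = 2 \ne 0$, and $g(x) = \sqrt{1 - x^2}$ is the local solution of $f(x, g(x)) = 0$ near $(0, 1)$.

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1

assert f.subs({x: 0, y: 1}) == 0                # f(x_0, y_0) = 0
assert sp.diff(f, y).subs({x: 0, y: 1}) != 0    # D_2 f(x_0, y_0) is invertible

g = sp.sqrt(1 - x**2)                           # the implicit function near (0, 1)
print(sp.simplify(f.subs(y, g)))                # 0, i.e. f(x, g(x)) = 0 identically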
Let U be an open subset of V and let $I \subset \mathbb{R}$ be an open interval containing 0. A (local) time dependent vector field on U is a $C^r$-map $F : I \times U \to V$ (where $r \ge 0$). An integral curve of $F$ with initial value $x_0$ is a map $c$ defined on an open subinterval $J \subset I$ also containing 0 such that
$$c'(t) = F(t, c(t)), \qquad c(0) = x_0.$$
A local flow for $F$ is a map $\alpha : I_0 \times U_0 \to V$ such that $U_0 \subset U$ and such that the curve $\alpha_x(t) = \alpha(t, x)$ is an integral curve of $F$ with $\alpha_x(0) = x$.
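As a sketch (the vector field below is a hypothetical example, not from the text), an integral curve of a time dependent field can be computed numerically; here $F(t, x) = t - x$ with $x_0 = 1$, whose exact integral curve is $c(t) = t - 1 + 2e^{-t}$.

import numpy as np
from scipy.integrate import solve_ivp

F = lambda t, x: t - x                       # F : I x U -> V
sol = solve_ivp(F, (0.0, 2.0), [1.0], dense_output=True, rtol=1e-8, atol=1e-10)

t = np.linspace(0.0, 2.0, 5)
exact = t - 1.0 + 2.0 * np.exp(-t)           # exact integral curve, for comparison
print(sol.sol(t)[0])
print(exact)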
If $f : U \to W$ is a map between open subsets of V and W we have the notion of rank at $p \in U$, which is just the rank of the linear map $D_p f : V \to W$.
Definition 1.8 Let $X, Y$ be topological spaces. When we write $f :: X \to Y$ we imply only that $f$ is defined on some open set in $X$. If we wish to indicate that $f$ is defined near $p \in X$ and that $f(p) = q$ we will use the pointed category notation together with the symbol "::":
$$f :: (X, p) \to (Y, q)$$
We will refer to such maps as local maps at $p$. Local maps may be composed with the understanding that the domain of the composite map may become smaller: if $f :: (X, p) \to (Y, q)$ and $g :: (Y, q) \to (G, z)$ then $g \circ f :: (X, p) \to (G, z)$ and the domain of $g \circ f$ will be a non-empty open set.
Theorem 1.6 (The Rank Theorem) Let $f : (V, p) \to (W, q)$ be a local map such that $Df$ has constant rank $r$ in an open set containing $p$. Suppose that $\dim(V) = n$ and $\dim(W) = m$. Then there are local diffeomorphisms $g_1 :: (V, p) \to (\mathbb{R}^n, 0)$ and $g_2 :: (W, q) \to (\mathbb{R}^m, 0)$ such that $g_2 \circ f \circ g_1^{-1}$ is a local map near 0 with the form
$$(x^1, \dots, x^n) \mapsto (x^1, \dots, x^r, 0, \dots, 0).$$
Proof. Without loss of generality we may assume that $f : (\mathbb{R}^n, 0) \to (\mathbb{R}^m, 0)$ and that (reindexing) the $r \times r$ matrix
$$\left(\frac{\partial f^i}{\partial x^j}\right)_{1 \le i, j \le r}$$
is nonsingular in an open ball centered at the origin of $\mathbb{R}^n$. Now form the map $g_1(x^1, \dots, x^n) = (f^1(x), \dots, f^r(x), x^{r+1}, \dots, x^n)$. The Jacobian matrix of $g_1$ has the block matrix form
$$\begin{pmatrix} \left(\frac{\partial f^i}{\partial x^j}\right) & \ast \\ 0 & I_{n-r} \end{pmatrix}$$
which clearly has nonzero determinant at 0 and so by the inverse mapping theorem $g_1$ must be a local diffeomorphism near 0. Restrict the domain of $g_1$ to this possibly smaller open set. It is not hard to see that the map $f \circ g_1^{-1}$ is of the form $(z^1, \dots, z^n) \mapsto (z^1, \dots, z^r, \gamma^{r+1}(z), \dots, \gamma^m(z))$ and so has Jacobian matrix of the form
$$\begin{pmatrix} I_r & 0 \\ \ast & \left(\frac{\partial \gamma^i}{\partial x^j}\right) \end{pmatrix}.$$
Now the rank of $\left(\frac{\partial \gamma^i}{\partial x^j}\right)_{r+1 \le i \le m,\ r+1 \le j \le n}$ must be zero near 0 since $\operatorname{rank}(f) = \operatorname{rank}(f \circ g_1^{-1}) = r$ near 0. On the said (possibly smaller) neighborhood we now define the map $g_2 : (\mathbb{R}^m, q) \to (\mathbb{R}^m, 0)$ by
$$(y^1, \dots, y^m) \mapsto (y^1, \dots, y^r,\ y^{r+1} - \gamma^{r+1}(y^*, 0), \dots, y^m - \gamma^m(y^*, 0))$$
where $(y^*, 0) = (y^1, \dots, y^r, 0, \dots, 0)$. The Jacobian matrix of $g_2$ has the form
$$\begin{pmatrix} I_r & 0 \\ \ast & I \end{pmatrix}$$
and so is invertible, and the composition $g_2 \circ f \circ g_1^{-1}$ has the form
$$z \xrightarrow{\ f \circ g_1^{-1}\ } (z^*, \gamma^{r+1}(z), \dots, \gamma^m(z)) \xrightarrow{\ g_2\ } (z^*,\ \gamma^{r+1}(z) - \gamma^{r+1}(z^*, 0), \dots, \gamma^m(z) - \gamma^m(z^*, 0))$$
where $(z^*, 0) = (z^1, \dots, z^r, 0, \dots, 0)$. It is not difficult to check that $g_2 \circ f \circ g_1^{-1}$ has the required form near 0.
Starting with a fixed V, say the usual example $\mathbb{F}^n$, there are several standard methods of associating related vector spaces using multilinear algebra. The simplest example is the dual space $(\mathbb{F}^n)^*$. Now besides $\mathbb{F}^n$ there is also $\mathbb{F}_n$, which is also a space of $n$-tuples but this time thought of as row vectors. We shall often identify the dual space $(\mathbb{F}^n)^*$ with $\mathbb{F}_n$ so that for $v \in \mathbb{F}^n$ and $\xi \in (\mathbb{F}^n)^*$ the duality is just matrix multiplication $\xi(v) = \xi v$. The group of nonsingular matrices, the general linear group $\operatorname{Gl}(n, \mathbb{F})$, acts on each of these in a natural way:
matrices, the general linear group Gl(n, F) acts on each of these a natural way:
1.
The primary action on
F
n
is a left action and corresponds to the standard
representation and is simply multiplication from the left: (g,
v)

g
v.
2.
The primary action on (F
n
)

=
F
n
is also a left action and is (g,
v
) →
v
g
−1
(again matrix multiplication). In a setting where one insists on using

only column vectors (even for the dual space) then this action appears as
(
g, v
t
)
→ (
g
−1
)
t
v
t
. The reader may recognize this as giving the contragra-
dient representation.
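A short numerical aside (not from the text): the point of the two actions is that acting by $g$ on column vectors and by $g^{-1}$ on the right on row vectors preserves the pairing $\xi(v) = \xi v$.

import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=(3, 3)) + 3 * np.eye(3)     # a (generically) invertible matrix
v = rng.normal(size=(3, 1))                     # column vector in F^n
xi = rng.normal(size=(1, 3))                    # row vector in (F^n)* = F_n

before = xi @ v
after = (xi @ np.linalg.inv(g)) @ (g @ v)       # act on both, then pair
print(before, after)                            # equal up to rounding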
Differential geometry strives for invariance and so we should try to get away from the special spaces V such as $\mathbb{F}^n$ and $\mathbb{F}_n$ which often have a standard preferred basis. So let V be an abstract $\mathbb{F}$-vector space and $V^*$ its dual. For every choice of basis $e = (e_1, \dots, e_n)$ for V there is the natural map $u_e : \mathbb{F}^n \to V$ given by $e : v \mapsto e(v) = v^i e_i$. Identifying $e$ with the row of basis vectors $(e_1, \dots, e_n)$ we see that $u_e(v)$ is just formal matrix multiplication
$$u_e : v \mapsto ev = (e_1, \dots, e_n) \begin{pmatrix} v^1 \\ \vdots \\ v^n \end{pmatrix}.$$
Corresponding to $e = (e_1, \dots, e_n)$ there is the dual basis for $V^*$ which we write as $e^* = (e^1, \dots, e^n)$. In this case too we have a natural map $u_{e^*} : \mathbb{F}^{n*} := \mathbb{F}_n \to V^*$ given by
$$u_{e^*} : v^* \mapsto v^* e^* = (v_1, \dots, v_n) \begin{pmatrix} e^1 \\ \vdots \\ e^n \end{pmatrix}.$$
In each case the definition of basis tells us that $e : \mathbb{F}^n \to V$ and $e^* : \mathbb{F}_n \to V^*$ are both linear isomorphisms. Thus for a fixed frame $e$ each $v \in V$ may be written uniquely $v = ev$, while for each $v^* \in V^*$ we have the expansion $v^* = v^* e^*$.
Chapter 2
Differentiable Manifolds
An undefined problem has an infinite number of solutions.
-Robert A. Humphrey
2.1 Rough Ideas I
The space of n-tuples $\mathbb{R}^n$ is often called Euclidean space by mathematicians but it might be a bit more appropriate to refer to this as Cartesian space, which is what physics people often call it. The point is that Euclidean space (denoted here as $E^n$) has both more structure and less structure than Cartesian space. More since it has a notion of distance and angle, less because Euclidean space as it is conceived of in pure form has no origin or special choice of coordinates.
Of course we almost always give $\mathbb{R}^n$ its usual structure as an inner product space, from which we get the angle and distance, and we are on our way to having a set theoretic model of Euclidean space.
Let us imagine we have a pure Euclidean space. The reader should think of physical space as it is normally given to intuition. René Descartes showed that if this intuition is axiomatized in a certain way then the resulting abstract space may be put into one to one correspondence with the set of $n$-tuples, the Cartesian space $\mathbb{R}^n$. There is more than one way to do this but if we want the angle and distance to match that given by the inner product structure on $\mathbb{R}^n$ then we get the familiar rectilinear coordinates.
After imposing rectilinear coordinates on a Euclidean space $E^n$ (such as the plane $E^2$) we identify Euclidean space with $\mathbb{R}^n$, the vector space of $n$-tuples of numbers. In fact, since a Euclidean space in this sense is an object of intuition (at least in 2d and 3d) some may insist that, to be sure such a space of points really exists, we should in fact start with $\mathbb{R}^n$ and "forget" the origin and all the vector space structure while retaining the notion of point and distance. The coordinatization of Euclidean space is then just a "remembering" of this forgotten structure. Thus our coordinates arise from a map $x : E^n \to \mathbb{R}^n$ which is just the identity map.