
Linear Algebra and Geometry
KAM-TIM LEUNG

HONG KONG UNIVERSITY PRESS



The Author

Dr K.T. Leung took his doctorate in Mathematics in 1957 at the University of Zurich. From 1958 to 1960 he taught at Miami University and the University of Cincinnati in the U.S.A. Since 1960 he has been with the University of Hong Kong, where he is now Senior Lecturer in Mathematics and Dean of the Faculty of Science.

He is the author (with Dr Doris L.C. Chen) of Elementary Set Theory, Parts I and II, also published by the Hong Kong University Press.



© Copyright 1974 Hong Kong University Press
ISBN 0-85656-111-8
Library of Congress Catalog Card Number 73-89852

Printed in Hong Kong by
EVERBEST PRINTING CO., LTD
12-14 Elm Street, Kowloon, Hong Kong



PREFACE
Linear algebra is now included in the undergraduate curriculum of most universities. It is generally recognized that this branch of algebra, being less abstract and directly motivated by geometry, is easier to understand than some other branches, and that because of its wide applications it should be taught as early as possible. The present book is an extension of the lecture notes for a course in algebra and geometry given each year since 1961 to the first-year undergraduates of mathematics and physical sciences in the University of Hong Kong. Except for some rudimentary knowledge of the language of set theory, the prerequisites for using the main part of this book do not go beyond Form VI level. Since it is intended for use by beginners, much care is taken to explain new theories by building up from intuitive ideas and by many illustrative examples, though the general level of presentation is thoroughly axiomatic.

The book begins with a chapter on linear spaces over the real and the complex field at a leisurely pace. The more general theory of linear spaces over an arbitrary field is not touched upon, since no substantial gain can be achieved by its inclusion at this level of instruction. In §3 a more extensive knowledge of set theory is needed for formulating and proving results on infinite-dimensional linear spaces. Readers who are not accustomed to these set-theoretical ideas may omit the entire section.

Trying to keep the treatment coordinate-free, the book does not follow the custom of replacing any space by a set of coordinates and then forgetting about the space as soon as possible. In this spirit linear transformations come (Chapter II) before matrices (Chapter V). While using coordinates, students are reminded of the fact that a particular isomorphism is given preference. Another feature of the book is the introduction of the language and ideas of category theory (§8), through which a deeper understanding of linear algebra can be achieved. This section is written with the more capable students in mind and can be left out by students who are hard pressed for time or averse to a further level of abstraction. Except for a few incidental remarks, the material of this section is not used explicitly in the later chapters.
Geometry is a less popular subject than it once was, and its omission from the undergraduate curriculum is lamented by many mathematicians. Unlike most books on linear algebra, the present book contains two substantial geometrical chapters (Chapters III and IV) in which affine and projective geometry are developed algebraically, and in a coordinate-free manner, in terms of the previously developed algebra. I hope this approach to geometry will bring out clearly the interplay of algebraic and geometric ideas.

The next two chapters cover more or less the standard material on matrices and determinants. Chapter VII handles eigenvalues up to the Jordan forms. The last chapter concerns itself with the metric properties of euclidean spaces and unitary spaces together with their linear transformations.

The author acknowledges with great pleasure his gratitude to Dr D.L.C. Chen, who used the earlier lecture notes in her classes and made several useful suggestions. I am especially grateful to Dr C.B. Spencer, who read the entire manuscript and made valuable suggestions for its improvement, both mathematically and stylistically. Finally I thank Miss Kit-Yee So and Mr K.W. Ho for typing the manuscript.

K. T. Leung
University of Hong Kong
January 1972



CONTENTS

PREFACE ....................................... v

Chapter I  LINEAR SPACE .......................... 1
§1 General Properties of Linear Space ............... 4
   A. Abelian groups  B. Linear spaces  C. Examples  D. Exercises
§2 Finite-Dimensional Linear Space ................. 17
   A. Linear combinations  B. Base  C. Linear independence  D. Dimension  E. Coordinates  F. Exercises
§3 Infinite-Dimensional Linear Space ............... 32
   A. Existence of base  B. Dimension  C. Exercises
§4 Subspace ..................................... 35
   A. General properties  B. Operations on subspaces  C. Direct sum  D. Quotient space  E. Exercises

Chapter II  LINEAR TRANSFORMATIONS ............... 45
§5 General Properties of Linear Transformation ...... 45
   A. Linear transformation and examples  B. Composition  C. Isomorphism  D. Kernel and image  E. Factorization  F. Exercises
§6 The Linear Space Hom (X, Y) .................... 62
   A. The algebraic structure of Hom (X, Y)  B. The associative algebra End (X)  C. Direct sum and direct product  D. Exercises
§7 Dual Space ................................... 73
   A. General properties of dual space  B. Dual transformations  C. Natural transformations  D. A duality between …(X) and …  E. Exercises
§8 The Category of Linear Spaces .................. 84
   A. Category  B. Functor  C. Natural transformation  D. Exercises

Chapter III  AFFINE GEOMETRY ..................... 96
§9 Affine Space .................................. 96
   A. Points and vectors  B. Barycentre  C. Linear varieties  D. Lines  E. Base  F. Exercises
§10 Affine Transformations ........................ 113
   A. General properties  B. The category of affine spaces

Chapter IV  PROJECTIVE GEOMETRY ................. 118
§11 Projective Space ............................. 118
   A. Points at infinity  B. Definition of projective space  C. Homogeneous coordinates  D. Linear variety  E. The theorems of Pappus and Desargues  F. Cross ratio  G. Linear construction  H. The principle of duality  I. Exercises
§12 Mappings of Projective Spaces ................. 141
   A. Projective isomorphism  B. Projectivities  C. Semilinear transformations  D. The projective group  E. Exercises

Chapter V  MATRICES ............................. 155
§13 General Properties of Matrices ................. 155
   A. Notations  B. Addition and scalar multiplication of matrices  C. Product of matrices  D. Exercises
§14 Matrices and Linear Transformations ............ 166
   A. Matrix of a linear transformation  B. Square matrices  C. Change of bases  D. Exercises
§15 Systems of Linear Equations ................... 175
   A. The rank of a matrix  B. The solutions of a system of linear equations  C. Elementary transformations on matrices  D. Parametric representation of solutions  E. Two interpretations of elementary transformations on matrices  F. Exercises

Chapter VI  MULTILINEAR FORMS ................... 196
§16 General Properties of Multilinear Mappings ...... 197
   A. Bilinear mappings  B. Quadratic forms  C. Multilinear forms  D. Exercises
§17 Determinants ................................ 206
   A. Determinants of order 3  B. Permutations  C. Determinant functions  D. Determinants  E. Some useful rules  F. Cofactors and minors  G. Exercises

Chapter VII  EIGENVALUES ........................ 230
§18 Polynomials ................................. 230
   A. Definitions  B. Euclidean algorithm  C. Greatest common divisor  D. Substitutions  E. Exercises
§19 Eigenvalues ................................. 239
   A. Invariant subspaces  B. Eigenvectors and eigenvalues  C. Characteristic polynomials  D. Diagonalizable endomorphisms  E. Exercises
§20 Jordan Form ................................. 250
   A. Triangular form  B. Hamilton-Cayley theorem  C. Canonical decomposition  D. Nilpotent endomorphisms  E. Jordan theorem  F. Exercises

Chapter VIII  INNER PRODUCT SPACES .............. 267
§21 Euclidean Spaces ............................. 268
   A. Inner product and norm  B. Orthogonality  C. Schwarz's inequality  D. Normed linear space  E. Exercises
§22 Linear Transformations of Euclidean Spaces ...... 280
   A. The conjugate isomorphism  B. The adjoint transformation  C. Self-adjoint linear transformations  D. Eigenvalues of self-adjoint transformations  E. Bilinear forms on a euclidean space  F. Isometry  G. Exercises
§23 Unitary Spaces ............................... 297
   A. Orthogonality  B. The conjugate isomorphism  C. The adjoint  D. Self-adjoint transformations  E. Isometry  F. Normal transformation  G. Exercises

Index .......................................... 306



Leitfaden - summary of the interdependence of the chapters.

[Chart not reproduced in this extraction; it links the chapters Linear space, Linear transformations, Affine geometry, Projective geometry, Matrices, Eigenvalues, Multilinear forms and Inner product spaces.]



CHAPTER I LINEAR SPACE

In the euclidean plane E, we choose a fixed point O as the origin, and consider the set X of arrows or vectors in E with the common initial point O. A vector a in E with initial point O and endpoint A is by definition the ordered pair (O, A) of points. The vector a = (O, A) can be regarded as a graphical representation of a force acting at the origin O, in the direction of the ray (half-line) from O through A, with the magnitude given by the length of the segment OA.

Let a = (O, A) and b = (O, B) be vectors in E. Then a point C in E is uniquely determined by the requirement that the midpoint of the segment OC is identical with the midpoint of the segment AB. The different cases which can occur are illustrated in fig. 1.

The sum a + b of the vectors a and b is defined as the vector c = (O, C). In the case where the points O, A and B are not collinear, OC is the diagonal of the parallelogram OACB. Clearly our construction of the sum of two vectors follows the parallelogram method of constructing the resultant of two forces in elementary mechanics.

Addition, i.e. the way of forming sums of vectors, satisfies the familiar associative and commutative laws. This means that, for any three vectors a, b and c in the plane E, the following identities hold:

(A1) (a + b) + c = a + (b + c);
(A2) a + b = b + a.

Moreover, the null vector 0 = (O, O), which represents the force whose magnitude is zero, has the property

(A3) 0 + a = a for all vectors a in E.

Furthermore, to every vector a = (O, A) there is associated a unique vector a' = (O, A'), where A' is the point such that O is the midpoint of the segment AA'.

The vectors a and a' represent forces with equal magnitude but in opposite directions. The resultant of such a pair of forces is zero; for the vectors a and a' we get

(A4) a' + a = 0.

In an equally natural way, scalar multiplication of a vector by a real number is defined. Let λ be a real number and a = (O, A) a vector in the plane E. On the line that passes through the points O and A there is a unique point B such that (i) the length of the segment OB is |λ| times the length of the segment OA and (ii) B lies on the same (opposite) side of O as A if λ is positive (negative).

The product λa of λ and a is the vector b = (O, B). If a represents a force F, then λa represents the force in the same or opposite direction of F according to whether λ is positive or negative, with the magnitude equal to |λ| times the magnitude of F.

The following three rules of calculation can easily be seen to hold good. For any real numbers λ and μ and any vectors a and b,

(M1) λ(μa) = (λμ)a,
(M2) λ(a + b) = λa + λb and (λ + μ)a = λa + μa,
(M3) 1a = a.



To summarize, (1) there is defined in the set of all vectors in the plane E an addition that satisfies (A1) to (A4), and (2) for each vector a and each real number λ, a vector λa, called the product, is defined such that (M1) to (M3) are satisfied. At the same time, similar operations are defined for the forces acting at the origin O such that the same sets of requirements are satisfied. Moreover, results of a general nature on vectors (with respect to these operations) can be related, in a most natural way, to those on forces. Similar operations are also defined for objects from quite different branches of mathematics. Consider, for example, a pair of simultaneous linear equations:

(*)  4X1 + 5X2 - 6X3 = 0
     X1 - X2 + 3X3 = 0.

Solutions of this pair of equations are ordered triples (a1, a2, a3) of real numbers such that when ai is substituted for Xi (i = 1, 2, 3) in the equations we get

4a1 + 5a2 - 6a3 = 0 and a1 - a2 + 3a3 = 0.

Thus (1, -2, -1), (-1, 2, 1) are two of the many solutions of the pair of equations (*). Let us denote by S the set of all solutions of the equations (*) and define, for any two solutions a = (a1, a2, a3) and b = (b1, b2, b3) and for any real number λ, the sum as

a + b = (a1 + b1, a2 + b2, a3 + b3)

and the product as

λa = (λa1, λa2, λa3).

Then we can easily verify that both a + b and λa are again solutions of the equations (*); therefore addition and scalar multiplication are defined in the set S. It is straightforward to verify further that (A1) to (A4) and (M1) to (M3) are satisfied.
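As a quick numerical check, not part of the original text, one can verify in Python that sums and scalar multiples of solutions of (*) are again solutions:

```python
def is_solution(v):
    """Check whether the triple v = (v1, v2, v3) satisfies both equations (*)."""
    v1, v2, v3 = v
    return 4*v1 + 5*v2 - 6*v3 == 0 and v1 - v2 + 3*v3 == 0

def add(a, b):
    """Componentwise sum of two triples."""
    return tuple(x + y for x, y in zip(a, b))

def scale(lam, a):
    """Scalar multiple of a triple."""
    return tuple(lam * x for x in a)

a = (1, -2, -1)
b = (-1, 2, 1)
assert is_solution(a) and is_solution(b)
assert is_solution(add(a, b))        # a + b is again a solution
assert is_solution(scale(7, a))      # 7a is again a solution
```

Of course this checks only particular solutions; the general closure of S follows from the linearity of the equations.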
This suggests that it is desirable to have a unified mathematical theory, called linear algebra by mathematicians, which can be applied suitably to the study of vectors in the plane, forces acting at a point, solutions of simultaneous linear equations and other quite dissimilar objects. The mathematician's approach to this problem is to lay down a definition of linear space that is adaptable to the situations discussed above as well as many others. Therefore a linear space should be a set of objects, referred to as vectors, together with an addition and a scalar multiplication that satisfy a number of axioms similar to (A1), (A2), (A3), (A4) and (M1), (M2), (M3). Once such a definition is laid down, the main interest will lie in what there is that we can do with vectors and linear spaces. It is to be emphasized here that the physical character of the vectors is of no importance to the theory of linear algebra, in the same way that results of arithmetical calculations are independent of any physical meaning the numbers may have for the calculator.

§ 1. General Properties of Linear Space
A. Abelian groups
From the preliminary discussion, we see that it is necessary for us to know more about the nature of the operations of addition and scalar multiplication. These are essentially definite rules that assign sums and products respectively to certain pairs of objects and satisfy a number of requirements called axioms. To formulate these more precisely, we employ the language of set theory.

Let A be a set. Then an internal composition law in A is a mapping τ: A × A → A. For each element (a, b) of A × A, the image τ(a, b) under τ is called the composite of the elements a and b of A and is usually denoted by a τ b.

Thus addition of vectors, addition of forces and addition of solutions discussed earlier are all examples of internal composition laws. Analogously, an external composition law in A between elements of A and elements of another set B is a mapping σ: B × A → A. Similarly, scalar multiplications of vectors, forces and solutions discussed earlier are all examples of external composition laws. Now we say that an algebraic structure is defined on a set A if in A we have one or more (internal or external) composition laws that satisfy some specific axioms. These axioms are not just chosen arbitrarily; on the contrary, they are well-known properties shared by composition laws that we encounter in the applications, such as commutativity and associativity. Abstract algebra is then the mathematical theory of these algebraic structures.

With this in mind, we introduce the algebraic structure of the
abelian group and study the properties thereof.



DEFINITION 1.1. Let A be a set. An internal composition law τ: A × A → A is said to define an algebraic structure of abelian group on A if and only if the following axioms are satisfied.

[G1] For any elements a, b and c of A, (a τ b) τ c = a τ (b τ c).
[G2] For any elements a and b of A, a τ b = b τ a.
[G3] There is an element 0 in A such that 0 τ a = a for every element a of A.
[G4] Let 0 be a fixed element of A satisfying [G3]. Then for every element a of A there is an element -a of A such that (-a) τ a = 0.

In this case, the ordered pair (A, τ) is called an abelian group.
It follows from axiom [G3] that if (A, τ) is an abelian group, then A is a non-empty set. We note that the non-empty set A is just a part of an abelian group (A, τ) and it is feasible that the same set A may be part of different abelian groups; more precisely, there might be two different internal composition laws τ1 and τ2 in the non-empty set A, such that (A, τ1) and (A, τ2) are abelian groups. Therefore we should never use a statement such as "an abelian group is a non-empty set on which an internal composition law exists satisfying the axioms [G1] to [G4]" as a definition of an abelian group. In fact, it can be proved that on every non-empty set such an internal composition law always exists, and furthermore, if the set in question is not a singleton, then more than one such internal composition law is possible. For this reason, care should be taken to distinguish the underlying set A from the abelian group (A, τ). However, when there is no danger of confusion, we shall denote, for convenience, the abelian group (A, τ) simply by A and say that A constitutes an abelian group (with respect to the internal composition law τ). In this case, the set A is the abelian group A stripped of its algebraic structure, and by a subset (an element) of the abelian group A we mean a subset (an element) of the set A.

The most elementary example of an abelian group is the set Z of all integers together with the usual addition of integers. In this case, axioms [G1] to [G4] are well-known properties of ordinary arithmetic. In fact, many other well-known properties of ordinary arithmetic of integers have their counterparts in the abstract theory of abelian groups. In the remainder of this section, §1A, we shall use the abelian group Z as a prototype of an abelian group to study the general properties of abelian groups.
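The claim that one set can carry more than one abelian-group structure can be illustrated by a small computation (a Python sketch, not from the original text): on the set {0, 1, 2, 3}, both addition modulo 4 and the bitwise exclusive-or give abelian groups, and the two composition laws are different.

```python
from itertools import product

def is_abelian_group(elements, op):
    """Check axioms [G1]-[G4] by brute force on a finite set."""
    # [G1] associativity
    for a, b, c in product(elements, repeat=3):
        if op(op(a, b), c) != op(a, op(b, c)):
            return False
    # [G2] commutativity
    for a, b in product(elements, repeat=2):
        if op(a, b) != op(b, a):
            return False
    # [G3] a neutral element 0 with op(0, a) == a for all a
    neutrals = [e for e in elements if all(op(e, a) == a for a in elements)]
    if not neutrals:
        return False
    zero = neutrals[0]
    # [G4] every a has an inverse -a with op(-a, a) == 0
    return all(any(op(b, a) == zero for b in elements) for a in elements)

A = [0, 1, 2, 3]
mod4 = lambda a, b: (a + b) % 4
xor = lambda a, b: a ^ b
assert is_abelian_group(A, mod4)
assert is_abelian_group(A, xor)
assert mod4(1, 1) != xor(1, 1)   # the two composition laws differ
```

The brute-force check is feasible only for small finite sets, but it makes the distinction between the set A and the pair (A, τ) concrete.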




For convenience of formulation, we shall use the following notations and abbreviations.

(i) The internal composition law τ of (A, τ) is referred to as the addition of the abelian group A.

(ii) The composite a τ b is called the sum of the elements a and b of the abelian group A and denoted by a + b; a and b are called summands of the sum a + b. The particular notations that we use are not essential to our theory, but if they are well chosen, they will not only simplify the otherwise clumsy formulations but will also help us to handle the calculations efficiently.

(iii) A neutral element of the abelian group A is an element 0 of A satisfying [G3] above, i.e. 0 + a = a for every a ∈ A.

(iv) For any element a of the abelian group A, an additive inverse of a is an element -a of A satisfying [G4] above, i.e. a + (-a) = 0.

(v) As a consequence of the notations chosen above, the abelian group A is called an additive abelian group or simply an additive group.

Now we shall turn our attention to the general properties of additive groups. To emphasize their importance in our theory, we formulate some of them as theorems. In deriving these properties, we shall only use the axioms of the definition and properties already established. Therefore, all properties shared by additive groups are possessed by virtue of the definition only.

THEOREM 1.2. For any two elements a and b of an additive group A, there is one and only one element x of A such that a + x = b.

PROOF. It is required to show that there exists (i) at most one and (ii) at least one such element x of A; in other words, we have to show the uniqueness and the existence of x. For the former, let us assume that we have elements x and x' of A such that a + x = b and a + x' = b. From the first equation, we get -a + b = -a + (a + x) = (-a + a) + x = 0 + x = x. Similarly, -a + b = x'. Therefore x = x', and this proves the uniqueness. For the existence, we need only verify that the element -a + b of A satisfies the condition that x has to satisfy. Indeed, we have a + (-a + b) = (a + (-a)) + b = 0 + b = b. Our proof of the theorem is now complete.
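In the additive group Z the solution x = -a + b produced in the proof can be checked for a range of values (a Python sketch, not from the text):

```python
# Verify existence and uniqueness of the solution of a + x = b in Z,
# for small a and b, by direct search.
for a in range(-3, 4):
    for b in range(-3, 4):
        x = -a + b                      # the solution from the proof
        assert a + x == b               # existence
        # uniqueness within a window that certainly contains any solution
        others = [y for y in range(-10, 11) if a + y == b]
        assert others == [x]
```

The search window [-10, 10] suffices here because |x| = |b - a| is at most 6 for the values tested.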




Another version of the above theorem is this: given any elements a and b of an additive group A, the equation a + x = b admits a unique solution x in A. This solution will be denoted by b - a, and is called the difference of b and a. Using this notation, we get a - a = 0 and 0 - a = -a for all elements a of A. Consequently, we have

COROLLARY 1.3. In an additive group, there is exactly one neutral element and for each element a there is exactly one additive inverse -a of a.

Here is another interesting consequence of 1.2 (or 1.3). In an additive group A, for each element x of A, x = 0 if and only if a + x = a for some a of A. In particular, we get -0 = 0.
Let us now study more closely the axioms [G1] and [G2]; these are called the associative law and the commutative law of addition respectively. The equation (a + b) + c = a + (b + c) means that the element of A obtained by repeated addition is independent of the position of the brackets. Therefore it is unambiguous to write this sum of three elements as a + b + c. Analogously, we can write a + b + c + d = (a + b + c) + d for the sum of any four elements of an additive group A. In general, if a1, ..., aN are elements of an additive group A then, for any positive integer n such that 0 < n < N, we have the recursive definition:

a1 + a2 + ... + an + an+1 = (a1 + a2 + ... + an) + an+1.

The associative law [G1] can be generalized into

[G1'] (a1 + a2 + ... + am) + (am+1 + am+2 + ... + am+n) = a1 + a2 + ... + am+n.

The proof of [G1'] is carried out by induction on the number n. For n = 1, [G1'] follows from the recursive definition. Under the induction assumption that [G1'] holds for an n ≥ 1, we get

(a1 + ... + am) + (am+1 + ... + am+n+1)
  = (a1 + ... + am) + [(am+1 + ... + am+n) + am+n+1]
  = [(a1 + ... + am) + (am+1 + ... + am+n)] + am+n+1
  = (a1 + ... + am+n) + am+n+1
  = a1 + ... + am+n+1.



This establishes the generalized associative law [G1'] of addition.


A simple consequence of the generalized associative law is that
we can now write a multiple sum a + ... + a (n times) of an element
a of an additive group A as na.
The commutative law [G2] of addition means that the sum a + b is independent of the order in which the summands a and b appear. In other words, we can permute the summands in a sum without changing it. Generalizing this, we get

[G2'] For any permutation (φ(1), φ(2), ..., φ(n)) of the n-tuple (1, 2, ..., n),

aφ(1) + aφ(2) + ... + aφ(n) = a1 + a2 + ... + an,

where a permutation is a bijective mapping of the set {1, 2, ..., n} onto itself. The statement [G2'] is trivially true for n = 1. We assume that it is true for n-1 ≥ 1, and let k be the number such that φ(k) = n. Then (φ(1), ..., φ(k)^, ..., φ(n)), where the number φ(k) under the symbol ^ is deleted, is a permutation of the (n-1)-tuple (1, ..., n-1). From the induction assumption, we get

aφ(1) + ... + aφ(k)^ + ... + aφ(n) = a1 + ... + an-1,

where the summand aφ(k) under ^ is deleted. Now

aφ(1) + ... + aφ(n) = (aφ(1) + ... + aφ(k)^ + ... + aφ(n)) + aφ(k)
  = (a1 + ... + an-1) + an = a1 + ... + an.

The generalized commutative law [G2'] of addition therefore holds. Taking [G1'] and [G2'] together, we can now conclude that the sum of a finite number of elements of an additive group is independent of (i) the way in which the brackets are inserted and (ii) the order in which the group elements appear.
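For the additive group Z, [G2'] can be checked numerically (a Python sketch, not from the text): summing the same family of integers in every possible order gives a single value.

```python
from itertools import permutations

a = [3, -7, 12, 5]
# Collect the sum of every ordering of the summands.
totals = {sum(p) for p in permutations(a)}
assert totals == {13}   # all 4! orderings give the same sum
```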
It is convenient to use the familiar summation sign Σ to write the sum a1 + a2 + ... + an as Σi=1..n ai or Σ{ai : i = 1, ..., n}. In particular, whenever the range of summation is clear from the context, we also write a1 + a2 + ... + an as Σi ai or simply Σai. The elements ai (i = 1, ..., n) are called the summands of the sum Σai.

Using this notation, we can handle double summations with more ease. Let aij be an element of an additive group A for each i = 1, ..., m and j = 1, ..., n. We can then arrange these mn group elements in a rectangular array:

a11  a12  ...  a1j  ...  a1n
a21  a22  ...  a2j  ...  a2n
 .    .         .         .
ai1  ai2  ...  aij  ...  ain
 .    .         .         .
am1  am2  ...  amj  ...  amn

A natural way of summing is to get the partial sums of the rows first and then the final sum of these partial sums. Making use of the summation sign Σ, we write:

(a11 + a12 + ... + a1n) + (a21 + a22 + ... + a2n) + ... + (ai1 + ai2 + ... + ain) + ... + (am1 + am2 + ... + amn) = Σi=1..m (Σj=1..n aij).

On the other hand, we can also get the partial sums of the columns first and then the final sum of these partial sums. Thus we write

(a11 + a21 + ... + am1) + (a12 + a22 + ... + am2) + ... + (a1j + a2j + ... + amj) + ... + (a1n + a2n + ... + amn) = Σj=1..n (Σi=1..m aij).

Applying [G1'] and [G2'], we get

Σi=1..m (Σj=1..n aij) = Σj=1..n (Σi=1..m aij).

Therefore this sum can be written unambiguously as Σ{aij : i = 1, ..., m; j = 1, ..., n}, or simply Σi,j aij when no danger of confusion about the ranges of i and j is possible. Triple and multiple summations can be handled similarly. Finally we remark that there are many other possible ways of getting Σi,j aij besides the two ways given above.
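The row-first and column-first summations can be compared directly on a small array (a Python sketch, not part of the book):

```python
# a small m x n array of integers, a[i][j]
a = [[1, 2, 3],
     [4, 5, 6]]

row_first = sum(sum(row) for row in a)        # Σi (Σj aij)
col_first = sum(sum(col) for col in zip(*a))  # Σj (Σi aij): zip(*a) yields columns
assert row_first == col_first == 21
```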
Another important consequence of the laws [G1'] and [G2'] is that we can define summations over certain families of elements of an additive group. More precisely, we say that a family (ai)i∈I of elements of an additive group A is of finite support if all but a finite number of terms ai of the family are equal to the neutral element 0 of A. The sum Σi∈I ai of the family (ai)i∈I is defined as the sum of the neutral element 0 and all the terms ai of the family that are distinct from 0. It follows from [G1'] and [G2'] that the sum Σi∈I ai is well-defined for any family (ai)i∈I of finite support of elements of an additive group. In particular, when I = ∅, i.e. for the empty family, we get Σi∈∅ ai = 0. Moreover, if I = {1, ..., n}, then

Σi∈I ai = a1 + a2 + ... + an = Σi=1..n ai.

B. Linear spaces

With the experience gained in dealing with additive groups, we
now have no difficulty in laying down a definition of linear space.

DEFINITION 1.4. Let X be an additive group and R the set of all real numbers. An external composition law σ: R × X → X in X is said to define an algebraic structure of real linear space on X if and only if the following axioms are satisfied:

[M1] for any elements λ, μ of R and any element x of X, λ σ (μ σ x) = (λμ) σ x;
[M2] for any elements λ, μ of R and any elements x, y of X, (λ + μ) σ x = λ σ x + μ σ x and λ σ (x + y) = λ σ x + λ σ y;
[M3] for all elements x of X, 1 σ x = x.

In this case, the ordered pair (X, σ) is called a real linear space, a linear space over R, a real vector space or a vector space over R.

Here again, the additive group X and hence also the set X are only parts of a real linear space (X, σ). However, when the addition and the external composition law σ are clear from the context, we denote by X the real linear space (X, σ). In this way, the letter X represents the set X, the additive group X, or the real linear space X, as the case may be. The external composition law σ is called the scalar multiplication of the real linear space X, and the addition of the additive group X is called the addition of the linear space X. The axioms [M1] and [M2] are called the associative law of scalar multiplication and the distributive laws of scalar multiplication over addition respectively. The composite λ σ x is called the product of λ and x, or a multiple of x, and is denoted by λx. Elements of R are usually referred to as scalars and elements of X as vectors; in particular the neutral element 0 of X is called the null vector or the zero vector of the linear space X.
The algebraic structure of complex linear space, and complex linear space itself, are similarly defined. In fact, we need only replace in 1.4 the set R of all real numbers by the set C of all complex numbers. In the general theory of linear spaces, we only make use of the ordinary properties of the arithmetic of the real numbers or those of the complex numbers; therefore our results hold good in both the real and the complex cases. For simplicity, we shall use the terms "X is a linear space over Λ" and "X is a Λ-linear space" to mean that X is a real linear space or a complex linear space according to whether Λ = R or Λ = C.

Now that we have laid down the definition of linear space, the main interest will lie in what we can do with vectors and linear spaces. We emphasize again that the physical character of vectors is of no importance to the theory of linear algebra, and that the results of the theory are all consequences of the definition.

Here are some immediate consequences of the axioms of linear
spaces.

THEOREM 1.5. Let X be a linear space over Λ. Then, for any λ ∈ Λ and x ∈ X, λx = 0 if and only if λ = 0 or x = 0.

PROOF. If λ = 0 or x = 0, then λx = (λ + λ)x = λx + λx, or λx = λ(x + x) = λx + λx, respectively. Therefore in both cases we get λx = 0. Conversely, let us assume that λx = 0 and λ ≠ 0. Then we get x = 1x = (λ⁻¹λ)x = λ⁻¹(λx) = λ⁻¹0 = 0.

From the distributive laws, we get (-λ)x = λ(-x) = -(λx) for each vector x and each scalar λ. In particular, we have (-1)x = -x. We can use arguments, similar to those we had in §1A, to prove the generalized distributive laws:

[M2'] (λ1 + ... + λn)x = λ1x + ... + λnx and λ(a1 + ... + an) = λa1 + ... + λan.

C. Examples

Before we study linear space in detail, let us consider some
examples of linear spaces.



EXAMPLE 1.6. A most trivial example of a linear space over Λ is the zero linear space 0, consisting of a single vector which is necessarily the zero vector and is hence denoted by 0. Addition and scalar multiplication are given as follows: 0 + 0 = 0 and λ0 = 0 for all scalars λ of Λ.

EXAMPLE 1.7. The set V2 of all vectors in the euclidean plane E with common initial point at the origin O constitutes a real linear space with respect to the addition and the scalar multiplication defined at the beginning of this chapter. The linear space V3 of all vectors in the euclidean space, or (ordinary) 3-dimensional space, is defined in an analogous way. Similarly, the set of forces acting at the origin O of E also constitutes a real linear space with respect to the addition and the scalar multiplication defined earlier.
EXAMPLE 1.8. For the set Rn of all ordered n-tuples of real
numbers, we define the sum of two arbitrary elements
x = (ξ1, …, ξn) and y = (η1, …, ηn) of Rn by

x + y = (ξ1 + η1, …, ξn + ηn)

and the scalar product of a real number λ and x by

λx = (λξ1, …, λξn).

It is easily verified that the axioms of linear space are satisfied. In
particular 0 = (0, …, 0) is the zero vector of Rn and
−x = (−ξ1, …, −ξn) is the additive inverse of x. With respect to the
addition and the scalar multiplication above Rn is called the
n-dimensional arithmetical real linear space.
The n-dimensional arithmetical complex linear space Cn is similarly defined.
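As a concrete illustration of these componentwise operations (added here; the example is not in the original text), take n = 3 in R3:

```latex
(1, 2, 3) + (4, 5, 6) = (1 + 4,\ 2 + 5,\ 3 + 6) = (5, 7, 9),
\qquad
(-2)(1, 2, 3) = (-2, -4, -6),
```

and x + (−x) = (1 − 1, 2 − 2, 3 − 3) = (0, 0, 0) = 0, as the axioms require.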

EXAMPLE 1.9. The set of all ordered n-tuples of complex numbers
also constitutes a real linear space with respect to the addition and
the scalar multiplication defined by

(ξ1, …, ξn) + (η1, …, ηn) = (ξ1 + η1, …, ξn + ηn)
λ(ξ1, …, ξn) = (λξ1, …, λξn)

where λ is a real number and ξi, ηi are complex numbers for
i = 1, …, n. This real linear space shall be denoted by RC2n.

Note that the set RC2n and the set Cn are equal, but the real linear
space RC2n and the complex linear space Cn are distinct linear spaces.

At this juncture, the reader may ask why the superscript 2n is used
here instead of n. Until we have a precise definition of dimension
(see §2C) we have to ask the reader for indulgence to accept that the
linear space RC2n is a 2n-dimensional real linear space while the
linear space Cn, with the same underlying set, the same addition
but a different scalar multiplication, is an n-dimensional complex
linear space.

EXAMPLE 1.10. Let R[T] be the set of all polynomials with real
coefficients in the indeterminate T. R[T] is then a real linear space
with respect to the usual addition of polynomials and the usual
multiplication of a polynomial by a real number.

EXAMPLE 1.11. The set F of all real valued functions f defined on
the closed interval [a, b] = {t ∈ R: a ≤ t ≤ b} of the real axis is a real
linear space with respect to the following addition and scalar multiplication:
(f + g)(t) = f(t) + g(t) and (λf)(t) = λ(f(t)) for all t ∈ [a, b].
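Each axiom for F is inherited pointwise from the corresponding property of R; for instance, commutativity of addition is checked as follows:

```latex
(f + g)(t) = f(t) + g(t) = g(t) + f(t) = (g + f)(t)
\quad\text{for all } t \in [a, b],
```

hence f + g = g + f. The remaining axioms are verified in the same pointwise manner.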
EXAMPLE 1.12. Consider the set of all differentiable functions f defined
on the closed interval [a, b], which satisfy a Schrödinger
differential equation

d²f/dt² = λf

for a fixed real number λ. This set constitutes a real linear space with
respect to addition and scalar multiplication of functions as defined
in 1.11.
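That this set is closed under the two operations can be sketched as follows, using the linearity of differentiation (a verification left implicit in the text):

```latex
\frac{d^2}{dt^2}(f + g) = \frac{d^2 f}{dt^2} + \frac{d^2 g}{dt^2}
  = \lambda f + \lambda g = \lambda(f + g),
\qquad
\frac{d^2}{dt^2}(\mu f) = \mu\,\frac{d^2 f}{dt^2}
  = \mu(\lambda f) = \lambda(\mu f).
```

Hence f + g and μf again satisfy the equation for the same fixed λ.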

EXAMPLES 1.13. (a) Let S = {s1, …, sn} be a finite non-empty
set. Consider the set FS of all functions f: S → R. With respect to
addition and scalar multiplication of functions as defined in 1.11, FS
constitutes a real linear space called the free real linear space
generated by S or the free linear space generated by S over R. If for
every element si ∈ S we denote by fi: S → R the function defined by

fi(sj) = 1 if i = j,  fi(sj) = 0 if i ≠ j,

then every vector f of FS can be written uniquely as
f = f(s1)f1 + … + f(sn)fn. It is convenient to identify each si ∈ S with
the corresponding fi ∈ FS and consequently for every f ∈ FS we get

f = f(s1)s1 + … + f(sn)sn.
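For instance, with S = {s1, s2, s3} and f ∈ FS given by f(s1) = 2, f(s2) = 0, f(s3) = −1 (a hypothetical choice of values, not taken from the text), the representation reads:

```latex
f = f(s_1)s_1 + f(s_2)s_2 + f(s_3)s_3
  = 2s_1 + 0\,s_2 + (-1)s_3
  = 2s_1 - s_3 .
```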




(b) If S is an infinite set, some slight modification of the method
given above is necessary for the construction of a free linear space.

At the end of §1A we saw that the natural generalization of a
finite sum of vectors is a sum of a family of vectors of finite
support. Therefore in order to arrive at a representation similar to
f = f(s1)f1 + … + f(sn)fn above, we only consider functions
f: S → R for which the subset {t ∈ S: f(t) ≠ 0} of S is finite. A function
f: S → R with this property is called a function of finite support.
Let FS be the set of all functions f: S → R of finite support. Then
with respect to addition and scalar multiplication as defined in 1.11,
FS constitutes a real linear space called the free linear space generated
by S. For every t ∈ S we denote again by ft: S → R the function
defined by

ft(x) = 1 if t = x,  ft(x) = 0 if t ≠ x.

Then ft ∈ FS. If f ∈ FS, then the family (f(t))t∈S of scalars is of
finite support since f is of finite support. Therefore the family
(f(t)ft)t∈S of vectors of FS is also of finite support. Hence Σt∈S f(t)ft is
a vector of FS and f = Σt∈S f(t)ft. Again, for convenience, we identify
each t ∈ S with the corresponding ft ∈ FS and consequently for every
vector f ∈ FS we get f = Σt∈S f(t)t.

The free complex linear space generated by S is similarly defined.

EXAMPLE 1.14. The restriction to functions of finite support imposed
on FS in 1.13(b) is necessary for the representation of vectors of
FS by a sum: f = Σt∈S f(t)t. To provide another type of linear space we
drop this restriction. Let S be a (finite or infinite) non-empty set. We
consider the set RS = Map(S, R) of all mappings S → R and define
sum and product by

(f + g)(t) = f(t) + g(t)
(λf)(t) = λ(f(t))

Then RS is a real linear space with respect to the addition and scalar
multiplication above. Every vector f ∈ RS is uniquely represented by
the family (f(t))t∈S of scalars. Moreover the representation is compatible
with the algebraic structure of RS in the following sense: if f
and g are represented by (f(t))t∈S and (g(t))t∈S respectively, then
f + g and λf are represented by the families (f(t) + g(t))t∈S and
(λ(f(t)))t∈S respectively.
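A remark, immediate from the definitions though not stated in the text: when S is infinite, the set FS of 1.13(b) is a proper subset of RS. For example, with S = N:

```latex
f(t) = 1 \ \text{for all } t \in \mathbb{N}
\;\Longrightarrow\; f \in R^{\mathbb{N}},
\quad\text{but}\quad
\{t \in \mathbb{N} : f(t) \neq 0\} = \mathbb{N}
\ \text{is infinite, so } f \notin F_{\mathbb{N}} .
```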

