
TEXTBOOKS in MATHEMATICS

Linear Algebra

Geometry and
Transformation

Bruce Solomon



TEXTBOOKS in MATHEMATICS

Series Editors: Al Boggess and Ken Rosen

PUBLISHED TITLES

ABSTRACT ALGEBRA: AN INQUIRY-BASED APPROACH
Jonathan K. Hodge, Steven Schlicker, and Ted Sundstrom

ABSTRACT ALGEBRA: AN INTERACTIVE APPROACH
William Paulsen

ADVANCED CALCULUS: THEORY AND PRACTICE
John Srdjan Petrovic

ADVANCED LINEAR ALGEBRA

ANALYSIS WITH ULTRASMALL NUMBERS
Karel Hrbacek, Olivier Lessmann, and Richard O’Donovan

APPLYING ANALYTICS: A PRACTICAL APPROACH
Evan S. Levine

COMPUTATIONS OF IMPROPER RIEMANN INTEGRALS
Ioannis Roussos

CONVEX ANALYSIS
Steven G. Krantz

COUNTEREXAMPLES: FROM ELEMENTARY CALCULUS TO THE BEGINNINGS OF ANALYSIS
Andrei Bourchtein and Ludmila Bourchtein

DIFFERENTIAL EQUATIONS: THEORY, TECHNIQUE, AND PRACTICE, SECOND EDITION
Steven G. Krantz

DIFFERENTIAL EQUATIONS WITH MATLAB®: EXPLORATION, APPLICATIONS, AND THEORY
Mark A. McKibben and Micah D. Webster

ELEMENTARY NUMBER THEORY
James Kraft and Larry Washington

ELEMENTS OF ADVANCED MATHEMATICS, THIRD EDITION
Steven G. Krantz

EXPLORING LINEAR ALGEBRA: LABS AND PROJECTS WITH MATHEMATICA®
Crista Arangala


AN INTRODUCTION TO NUMBER THEORY WITH CRYPTOGRAPHY
James Kraft and Larry Washington

AN INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS WITH MATLAB®, SECOND EDITION
Matthew P. Coleman

INTRODUCTION TO THE CALCULUS OF VARIATIONS AND CONTROL WITH MODERN APPLICATIONS
John A. Burns

LINEAR ALGEBRA, GEOMETRY AND TRANSFORMATION
Bruce Solomon

THE MATHEMATICS OF GAMES: AN INTRODUCTION TO PROBABILITY
David G. Taylor

QUADRATIC IRRATIONALS: AN INTRODUCTION TO CLASSICAL NUMBER THEORY
Franz Halter-Koch

REAL ANALYSIS AND FOUNDATIONS, THIRD EDITION
Steven G. Krantz

RISK ANALYSIS IN ENGINEERING AND ECONOMICS, SECOND EDITION
Bilal M. Ayyub

RISK MANAGEMENT AND SIMULATION
Aparna Gupta



TEXTBOOKS in MATHEMATICS

Linear Algebra

Geometry and
Transformation

Bruce Solomon

Indiana University, Bloomington
USA

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Version Date: 20141103

International Standard Book Number-13: 978-1-4822-9930-4 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at
http://www.crcpress.com
To my teachers. . .
. . . and to my students


Contents

Preface

Chapter 1. Vectors, Mappings, and Linearity
1. Numeric Vectors
2. Functions
3. Mappings and Transformations
4. Linearity
5. The Matrix of a Linear Transformation

Chapter 2. Solving Linear Systems
1. The Linear System
2. The Augmented Matrix and RRE Form
3. Homogeneous Systems in RRE Form
4. Inhomogeneous Systems in RRE Form
5. The Gauss–Jordan Algorithm
6. Two Mapping Answers

Chapter 3. Linear Geometry
1. Geometric Vectors
2. Geometric/Numeric Duality
3. Dot-Product Geometry
4. Lines, Planes, and Hyperplanes
5. System Geometry and Row/Column Duality

Chapter 4. The Algebra of Matrices
1. Matrix Operations
2. Special Matrices
3. Matrix Inversion
4. A Logical Digression
5. The Logic of the Inversion Algorithm
6. Determinants

Chapter 5. Subspaces
1. Basic Examples and Definitions
2. Spans and Perps
3. Nullspace
4. Column-Space
5. Perp/Span Conversion
6. Independence
7. Basis
8. Dimension and Rank

Chapter 6. Orthogonality
1. Orthocomplements
2. Four Subspaces, 16 Questions
3. Orthonormal Bases
4. The Gram–Schmidt Algorithm

Chapter 7. Linear Transformation
1. Kernel and Image
2. The Linear Rank Theorem
3. Eigenspaces
4. Eigenvalues and Eigenspaces: Calculation
5. Eigenvalues and Eigenspaces: Similarity
6. Diagonalizability and the Spectral Theorem
7. Singular Value Decomposition

Appendix A. Determinants
1. The Permutation Formula
2. Basic Properties of the Determinant
3. The Product Formula

Appendix B. Proof of the Spectral Theorem

Appendix C. Lexicon

Index

Preface

“The eyes of the mind, by which it sees and observes
things, are none other than proofs.”


—Baruch Spinoza

The organizing concept of this book is this: every topic should bring
students closer to a solid geometric grasp of linear transformations.
Even more specifically, we aim to build a strong foundation for two
enormously important results that no undergraduate math student
should miss:

• The Spectral Theorem for symmetric transformations, and

• The Inverse/Implicit Function Theorem for differentiable mappings, or even better, the strong form of that result, sometimes called the Rank Theorem.

Every student who continues in math or its applications will encounter
both these results in many contexts. The Spectral Theorem belongs
to Linear Algebra proper; a course in the subject is simply remiss if it
fails to get there. The Rank Theorem actually belongs to multivariable
calculus, so we don’t state or prove it here. Roughly, it says that a
differentiable map of constant rank can be locally approximated by—
and indeed, behaves geometrically just like—a linear map of the same
rank. A student cannot understand this without a solid grasp of the
linear case, which we do formulate and prove here as the Linear Rank
Theorem in Chapter 7, making it, and the Spectral Theorem, key goals
of our text.
The primacy we give those results motivates an unconventional start to our book, one that moves quickly to a first encounter with multivariable mappings and to the basic questions they raise about images, pre-images, injectivity, surjectivity, and distortion. While these are fundamental concerns throughout mathematics, they can be frustratingly difficult to analyze in general. The beauty and power of Linear Algebra stem in large part from the utter transparency of these problems in the linear setting. A student who follows our discussion will apprehend them with a satisfying depth, and find them easy to apply in other areas of mathematical pursuit.

Of course, we cover all the standard topics of a first course in Linear Algebra—linear systems, vector geometry, matrix algebra, subspaces, independence, dimension, orthogonality, eigenvectors, and diagonalization. In our view, however, these topics mean more when they are directed toward the motivating results listed above.

We therefore introduce linear mappings and the basic questions they
raise in our very first chapter, and aim the rest of our book toward
answering those questions.

Key secondary themes emerge along the way. One is the centrality of the homogeneous system and the version of Gauss–Jordan we teach for solving it—and for expressing its solution as the span of independent "homogeneous generators." The number of such generators, for instance, gives the nullity of the system's coefficient matrix A, which in turn answers basic questions about the structure of solutions to inhomogeneous systems having A as coefficient matrix, and about the linear transformation represented by A.


Throughout, we celebrate the beautiful dualities that illuminate the
subject:

• An n × m matrix A is both a list of rows, acting as linear functions on Rm, and a list of columns, representing vectors in Rn. Accordingly, we can interpret matrix/vector multiplication in dual ways: as a transformation of the input vector, or as a linear combination of the matrix columns. We stress the latter viewpoint more than many other authors, for it often delivers surprisingly clear insights.

• Similarly, an n × m system Ax = b asks for the intersection of certain hyperplanes in Rm, while simultaneously asking for ways to represent b ∈ Rn as a linear combination of the columns of A.

• The solution set of a homogeneous system can be alternatively
expressed as the image (column-space) of one linear map, or
as the pre-image (kernel) of another.

• The ubiquitous operations of addition and scalar multiplication manifest as pure algebra in the numeric vectorspaces Rn, while simultaneously representing pure geometry in 2- and 3-dimensional Euclidean space.

• Every subspace of Rn can be described in essentially just two dual ways: as a span—the span of a generating set, or as an intersection of hyperplanes—what we call a perp.

We emphasize the computational and conceptual skills that let students navigate easily back and forth along any of these dualities, since problems posed from one perspective can often be solved with less effort from the dual viewpoint.

Finally, we strive to make all this material a ramp, lifting students from
the computational mathematics that dominates their experience before
this course, to the conceptual reasoning that often dominates after it.
We move very consciously from simple "identity verification" proofs early on (where students check, using the definitions, for instance, that vector addition commutes, or that dot products distribute over it) to constructive and contrapositive arguments—e.g., the proof that the usual algorithm for inverting a matrix fulfills its mission. One can base many such arguments on reasoning about the outcome of the Gauss–Jordan algorithm—i.e., row-reduction and reduced row-echelon form—which students easily master. Linear algebra thus forms an ideal context for fostering and growing students' mathematical sophistication.

Our treatment omits abstract vector spaces, preferring to spend the limited time available in one academic term focusing on Rn and its subspaces, orthogonality, and diagonalization. We feel that when students develop familiarity and the ability to reason well with Rn and—especially—its subspaces, the transition to abstract vector spaces, if and when they encounter it, will pose no difficulty.

Most of my students have been sophomores or juniors, typically majoring in math, informatics, one of the sciences, or business. The lack of an engineering school here has given my approach more of a liberal arts flavor, and allowed me to focus on the mathematics and omit applications. I know that for these very reasons, my book will not satisfy everyone. Still, I hope that all who read it will find themselves sharing the pleasure I always feel in learning, teaching, and writing about linear algebra.

Acknowledgments. This book springs from decades of teaching linear algebra, usually using other texts. I learned from each of those books, and from every group of students. About 10 years ago, Gilbert Strang's lively and unique introductory text inspired many ideas and syntheses of my own, and I began to transition away from his book toward my own notes. These eventually took the course over, evolving into the present text. I thank all the authors, teachers, and students with whom I have learned to think about this beautiful subject, starting with the late Prof. Richard F. Arens, my undergraduate linear algebra teacher at UCLA.

Sincere thanks also go to CRC Press for publishing this work, and
especially editor Bob Ross, who believed in the project and advocated
for me within CRC.

I could not have reached this point without the unflagging support of my wife, family, and friends. I owe them more than I can express.

Indiana University and its math department have allowed me a life of continuous mathematical exploration and communication. A greater privilege is hard to imagine, and I am deeply grateful.

On a more technical note, I was lucky to have excellent software tools: TeXShop and LaTeX for writing and typesetting, along with Wolfram Mathematica®,¹ which I used to create all figures except Figure 28 in Chapter 3. The latter image of M.C. Escher's striking 1938 woodcut Day and Night (which also graces the cover) comes from the Official M.C. Escher website (www.mcescher.com).

Bruce Solomon
Indiana University

Bloomington, Indiana

¹Wolfram Mathematica® is a registered trademark of Wolfram Research, Inc.

CHAPTER 1

Vectors, Mappings, and Linearity

1. Numeric Vectors

The overarching goal of this book is to impart a sure grasp of the numeric vector functions known as linear transformations. Students will have encountered functions before. We review and expand that familiarity in Section 2 below, and we define linearity in Section 4. Before we can properly discuss these matters though, we must introduce numeric vectors and their basic arithmetic.

Definition 1.1 (Vectors and scalars). A numeric vector (or just vector for short) is an ordered n-tuple of the form (x1, x2, . . . , xn). Here, each xi—the ith entry (or ith coordinate) of the vector—is a real number.

The (x, y) pairs often used to label points in the plane are familiar examples of vectors with n = 2, but we allow more than two entries as well. For instance, the triple (3, −1/2, 2) and the 7-tuple (1, 0, 2, 0, −2, 0, −1) are also numeric vectors.

In the linear algebraic setting, we usually call single numbers scalars. This helps highlight the difference between numeric vectors and individual numbers.

Vectors can have many entries, so to clarify and save space, we often label them with single bold letters instead of writing out all their entries. For example, we might define

x := (x1, x2, . . . , xm)
a := (a1, a2, a3, a4)
b := (−5, 0, 1)

and then use x, a, or b to indicate the associated vector. We use
boldface to distinguish vectors from scalars. For instance, the same
letters, without boldface, would typically represent scalars, as in x = 5,
a = −4.2, or b = π.
Often, we write numeric vectors vertically instead of horizontally, in which case x, a, and b above would look like this:

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}, \qquad \mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} -5 \\ 0 \\ 1 \end{pmatrix}$$

In our approach to the subject (unlike some others) we draw absolutely no distinction between

$$(x_1, x_2, \ldots, x_n) \qquad\text{and}\qquad \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

These are merely different notations for the same vector—the very same mathematical object.

Definition 1.2. We denote the set of all scalars—also known as the real number line—by R1 or simply R.

Similarly, Rn denotes the collection of all numeric vectors with n entries; that is, all (x1, x2, . . . , xn). The "all zero" vector (0, 0, . . . , 0) ∈ Rn is called the origin, and denoted by 0.


As examples, the vectors x, a, and b above belong to Rm, R4, and R3, respectively. We express this symbolically with the "element of" symbol "∈":

x ∈ Rm, a ∈ R4, and b ∈ R3

If a does not lie in R5, we can write a ∉ R5.
Rm is more than just a set, though, because it supports two important
algebraic operations: vector addition and scalar multiplication.

1.3. Vector addition. To add (or subtract) vectors in Rm, we simply add (or subtract) coordinates, entry-by-entry. This is best depicted vertically. Here are two examples, one numeric and one symbolic:

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + \begin{pmatrix} 4 \\ -5 \\ 6 \end{pmatrix} = \begin{pmatrix} 1+4 \\ 2-5 \\ 3+6 \end{pmatrix} = \begin{pmatrix} 5 \\ -3 \\ 9 \end{pmatrix}$$

$$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} - \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix} = \begin{pmatrix} a_1+b_1-c_1 \\ a_2+b_2-c_2 \\ a_3+b_3-c_3 \\ a_4+b_4-c_4 \end{pmatrix}$$



Adding the origin 0 ∈ Rm to any vector obviously leaves it unchanged:
0 + x = x for any x ∈ Rm. For this reason, 0 is called the additive
identity in Rm.

Recall that addition of scalars is commutative and associative. That
is, for any scalars x, y, and z we have

x+y = y+x (Commutativity)
(x + y) + z = x + (y + z) (Associativity)

It follows easily that vector addition has these properties too:

Proposition 1.4. Given any three vectors x, y, z ∈ Rm, we have

x+y = y+x (Commutativity)
(x + y) + z = x + (y + z) (Associativity)

Proof. We prove associativity, and leave commutativity as an exercise.

The associativity statement is an identity: it asserts that two things are equal. Our approach is a basic and useful one for proving such assertions: expand both sides of the identity to show individual entries, then simplify using the familiar algebra of scalars. If the simplified expressions can be made equal using legal algebraic moves, we have a proof.


Here, we start with the left-hand side, labeling the coordinates of x, y, and z using xi, yi, and zi, and then using the definition of vector addition twice:

$$(\mathbf{x}+\mathbf{y})+\mathbf{z} = \left( \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} \right) + \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{pmatrix} = \begin{pmatrix} x_1+y_1 \\ x_2+y_2 \\ \vdots \\ x_m+y_m \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{pmatrix} = \begin{pmatrix} (x_1+y_1)+z_1 \\ (x_2+y_2)+z_2 \\ \vdots \\ (x_m+y_m)+z_m \end{pmatrix}$$

Similarly, for the right-hand side of the identity, we get

$$\mathbf{x}+(\mathbf{y}+\mathbf{z}) = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} + \left( \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{pmatrix} \right) = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} + \begin{pmatrix} y_1+z_1 \\ y_2+z_2 \\ \vdots \\ y_m+z_m \end{pmatrix} = \begin{pmatrix} x_1+(y_1+z_1) \\ x_2+(y_2+z_2) \\ \vdots \\ x_m+(y_m+z_m) \end{pmatrix}$$

The simplified expressions for the two sides are now very similar. The
parentheses don’t line up the same way on both sides, but we can fix
that by using the associative law for scalars. The two sides then agree,
exactly, and we have a proof.

In short, the associative law for vectors boils down, after simplification,
to the associative law for scalars, which we already know.
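
For a concrete instance of the identity, take x = (1, 2), y = (3, 4), and z = (5, 6) in R2; both groupings produce the same vector:

$$(\mathbf{x}+\mathbf{y})+\mathbf{z} = \begin{pmatrix} 4 \\ 6 \end{pmatrix} + \begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} 9 \\ 12 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \begin{pmatrix} 8 \\ 10 \end{pmatrix} = \mathbf{x}+(\mathbf{y}+\mathbf{z})$$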

1.5. Scalar multiplication. The second fundamental operation in Rm is even simpler than vector addition. Scalar multiplication lets us multiply any vector x ∈ Rm by an arbitrary scalar t to get a new vector t x. As with vector addition, we execute it entry-by-entry:

$$t\,\mathbf{x} = t \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} := \begin{pmatrix} t\,x_1 \\ t\,x_2 \\ \vdots \\ t\,x_m \end{pmatrix}$$

For instance, 2 (1, 3, 5) = (2, 6, 10) and −3 (1, 1, 0, 1) = (−3, −3, 0, −3),
while 0 x = (0, 0, . . . , 0) no matter what x is.


Recall that for scalars, multiplication distributes over addition. This
means that for any scalars t, x, and y, we have

t(x + y) = tx + ty

Since scalar multiplication and vector addition both operate entry-by-entry, scalar multiplication distributes over vector addition too. This simple relationship between the two operations is truly fundamental in linear algebra.
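
For instance, with t = 2, x = (1, 2), and y = (3, 4), both sides of the distributive law give the same vector:

$$2\left( \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \begin{pmatrix} 3 \\ 4 \end{pmatrix} \right) = 2 \begin{pmatrix} 4 \\ 6 \end{pmatrix} = \begin{pmatrix} 8 \\ 12 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix} + \begin{pmatrix} 6 \\ 8 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 2 \begin{pmatrix} 3 \\ 4 \end{pmatrix}$$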

