Elementary Linear
Algebra
Fourth Edition
Stephen Andrilli
Department of Mathematics
and Computer Science
La Salle University
Philadelphia, PA
David Hecker
Department of Mathematics
Saint Joseph’s University
Philadelphia, PA
AMSTERDAM • BOSTON • HEIDELBERG • LONDON
NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Academic Press is an imprint of Elsevier
Academic Press is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobald’s Road, London WC1X 8RR, UK
Copyright © 2010 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopy, recording, or any information storage and retrieval system,
without permission in writing from the publisher.
Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in
Oxford, UK: phone: (+44) 1865 843830; fax: (+44) 1865 853333. You may also complete your request online via the Elsevier homepage, by selecting “Support & Contact,” then “Copyright and Permission,” and then “Obtaining Permissions.”
Library of Congress Cataloging-in-Publication Data
Application submitted.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 978-0-12-374751-8
For information on all Academic Press publications
visit our Web site at www.elsevierdirect.com
Printed in Canada
09 10 11 9 8 7 6 5 4 3 2 1
To our wives, Ene and Lyn, for all their help and encouragement
Contents

Preface for the Instructor ........................................... ix
Preface for the Student .............................................. xix
Symbol Table ......................................................... xxiii
Computational and Numerical Methods, Applications .................... xxvii

CHAPTER 1  Vectors and Matrices ...................................... 1
  1.1  Fundamental Operations with Vectors ........................... 2
  1.2  The Dot Product ............................................... 18
  1.3  An Introduction to Proof Techniques ........................... 31
  1.4  Fundamental Operations with Matrices .......................... 48
  1.5  Matrix Multiplication ......................................... 59

CHAPTER 2  Systems of Linear Equations ............................... 79
  2.1  Solving Linear Systems Using Gaussian Elimination ............. 79
  2.2  Gauss-Jordan Row Reduction and Reduced Row Echelon Form ....... 98
  2.3  Equivalent Systems, Rank, and Row Space ....................... 110
  2.4  Inverses of Matrices .......................................... 125

CHAPTER 3  Determinants and Eigenvalues .............................. 143
  3.1  Introduction to Determinants .................................. 143
  3.2  Determinants and Row Reduction ................................ 155
  3.3  Further Properties of the Determinant ......................... 165
  3.4  Eigenvalues and Diagonalization ............................... 178

CHAPTER 4  Finite Dimensional Vector Spaces .......................... 203
  4.1  Introduction to Vector Spaces ................................. 204
  4.2  Subspaces ..................................................... 215
  4.3  Span .......................................................... 227
  4.4  Linear Independence ........................................... 239
  4.5  Basis and Dimension ........................................... 255
  4.6  Constructing Special Bases .................................... 269
  4.7  Coordinatization .............................................. 281

CHAPTER 5  Linear Transformations .................................... 305
  5.1  Introduction to Linear Transformations ........................ 306
  5.2  The Matrix of a Linear Transformation ......................... 321
  5.3  The Dimension Theorem ......................................... 338
  5.4  One-to-One and Onto Linear Transformations .................... 350
  5.5  Isomorphism ................................................... 356
  5.6  Diagonalization of Linear Operators ........................... 371

CHAPTER 6  Orthogonality ............................................. 397
  6.1  Orthogonal Bases and the Gram-Schmidt Process ................. 397
  6.2  Orthogonal Complements ........................................ 412
  6.3  Orthogonal Diagonalization .................................... 428

CHAPTER 7  Complex Vector Spaces and General Inner Products .......... 445
  7.1  Complex n-Vectors and Matrices ................................ 446
  7.2  Complex Eigenvalues and Complex Eigenvectors .................. 454
  7.3  Complex Vector Spaces ......................................... 460
  7.4  Orthogonality in Cn ........................................... 464
  7.5  Inner Product Spaces .......................................... 472

CHAPTER 8  Additional Applications ................................... 491
  8.1  Graph Theory .................................................. 491
  8.2  Ohm’s Law ..................................................... 501
  8.3  Least-Squares Polynomials ..................................... 504
  8.4  Markov Chains ................................................. 512
  8.5  Hill Substitution: An Introduction to Coding Theory ........... 525
  8.6  Elementary Matrices ........................................... 530
  8.7  Rotation of Axes for Conic Sections ........................... 537
  8.8  Computer Graphics ............................................. 544
  8.9  Differential Equations ........................................ 561
  8.10 Least-Squares Solutions for Inconsistent Systems .............. 570
  8.11 Quadratic Forms ............................................... 578

CHAPTER 9  Numerical Methods ......................................... 587
  9.1  Numerical Methods for Solving Systems ......................... 588
  9.2  LDU Decomposition ............................................. 600
  9.3  The Power Method for Finding Eigenvalues ...................... 608
  9.4  QR Factorization .............................................. 615
  9.5  Singular Value Decomposition .................................. 623

Appendix A  Miscellaneous Proofs ..................................... 645
  Proof of Theorem 1.14, Part (1) .................................... 645
  Proof of Theorem 2.4 ............................................... 646
  Proof of Theorem 2.9 ............................................... 647
  Proof of Theorem 3.3, Part (3), Case 2 ............................. 648
  Proof of Theorem 5.29 .............................................. 649
  Proof of Theorem 6.18 .............................................. 650

Appendix B  Functions ................................................ 653
  Functions: Domain, Codomain, and Range ............................. 653
  One-to-One and Onto Functions ...................................... 654
  Composition and Inverses of Functions .............................. 655

Appendix C  Complex Numbers .......................................... 661

Appendix D  Answers to Selected Exercises ............................ 665

Index ................................................................ 725
Preface for the Instructor
This textbook is intended for a sophomore- or junior-level introductory course in linear
algebra. We assume the students have had at least one course in calculus.
PHILOSOPHY AND FEATURES OF THE TEXT
Clarity of Presentation: We have striven for clarity and used straightforward language throughout the book, occasionally sacrificing brevity for clear and convincing explanation. We hope you will encourage students to read the text deeply and thoroughly.
Helpful Transition from Computation to Theory: In writing this text, our main intention was to address the fact that students invariably ran into trouble as the largely computational first half of most linear algebra courses gave way to a more theoretical second half. In particular, many students encountered difficulties when abstract vector space topics were introduced. Accordingly, we have taken great care to help students master these important concepts. We consider the material in Sections 4.1 through 5.6 (vector spaces and subspaces, span, linear independence, basis and dimension, coordinatization, linear transformations, kernel and range, one-to-one and onto linear transformations, isomorphism, diagonalization of linear operators) to be the “heart” of this linear algebra text.
Emphasis on the Reading and Writing of Proofs: One reason that students have trouble
with the more abstract material in linear algebra is that most textbooks contain few,
if any, guidelines about reading and writing simple mathematical proofs. This book is
intended to remedy that situation. Consequently, we have students working on proofs
as quickly as possible. After a discussion of the basic properties of vectors, there is a special section (Section 1.3) on general proof techniques, with concrete examples using the material on vectors from Sections 1.1 and 1.2. The early placement of Section 1.3 helps to build the students’ confidence and gives them a strong foundation in the reading and writing of proofs.
We have written the proofs of theorems in the text in a careful manner to give
students models for writing their own proofs. We avoided “clever” or “sneaky” proofs,
in which the last line suddenly produces “a rabbit out of a hat,” because such proofs
invariably frustrate students. They are given no insight into the strategy of the proof
or how the deductive process was used. In fact, such proofs tend to reinforce the
students’ mistaken belief that they will never become competent in the art of writing
proofs. In this text, proofs longer than one paragraph are often written in a “top-down” manner, a concept borrowed from structured programming. A complex theorem is broken down into a series of secondary results, which together are sufficient to prove the original theorem. In this way, the student has a clear outline of the logical argument
and can more easily reproduce the proof if called on to do so.
We have left the proofs of some elementary theorems to the student. However, for
every nontrivial theorem in Chapters 1 through 6, we have either included a proof or given detailed hints which should be sufficient to enable students to provide a proof on their own. Most of the proofs of theorems that are left as exercises can be found in the Student Solutions Manual. The exercises corresponding to these proofs are marked with a special symbol in the text.
Computational and Numerical Methods, Applications: A summary of the most important
computational and numerical methods covered in this text is found in the chart located in the front pages. This chart also contains the most important applications of linear
algebra that are found in this text. Linear algebra is a branch of mathematics having a
multitude of practical applications, and we have included many standard ones so that
instructors can choose their favorites. Chapter 8 is devoted entirely to applications
of linear algebra, but there are also several shorter applications in Chapters 1 to 6.
Instructors may choose to have their students explore these applications in computer
labs, or to assign some of these applications as extra credit reading assignments outside
of class.
Revisiting Topics: We frequently introduce difficult concepts with concrete examples and then revisit them in increasingly abstract forms as students progress through the text. Here are several examples:
■ Students are first introduced to the concept of linear combinations beginning in Section 1.1, long before linear combinations are defined for real vector spaces in Chapter 4.
■ The row space of a matrix is first encountered in Section 2.3, thereby preparing students for the more general concepts of subspace and span in Sections 4.2 and 4.3.
■ Students traditionally find eigenvalues and eigenvectors to be a difficult topic, so these are introduced early in the text (Section 3.4) in the context of matrices. Further properties of eigenvectors are included throughout Chapters 4 and 5 as underlying vector space concepts are covered. Then a more thorough, detailed treatment of eigenvalues is given in Section 5.6 in the context of linear transformations. The more advanced topics of orthogonal and unitary diagonalization are covered in Chapters 6 and 7.
■ The techniques behind the first two methods in Section 4.6 for computing bases are introduced earlier in Sections 4.3 and 4.4 in the Simplified Span Method and the Independence Test Method, respectively. In this way, students will become comfortable with these methods in the context of span and linear independence before employing them to find appropriate bases for vector spaces.
■ Students are first introduced to least-squares polynomials in Section 8.3 in a concrete fashion, and then (assuming a knowledge of orthogonal complements) the theory behind least-squares solutions for inconsistent systems is explored later on in Section 8.10.
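To give a concrete sense of the least-squares computation that Section 8.3 develops, here is a minimal sketch in Python with NumPy (a tool of our choosing for illustration; the text itself does not assume any software package), fitting a degree-2 polynomial to sample data via the normal equations:

```python
import numpy as np

# Hypothetical sample data points (x, y), chosen for illustration only.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 5.0, 10.0])

# Build the matrix whose columns are 1, x, x^2, so the model is
# a0 + a1*x + a2*x^2.
A = np.vander(x, 3, increasing=True)

# Solve the normal equations (A^T A) c = A^T y for the coefficients c.
coeffs = np.linalg.solve(A.T @ A, A.T @ y)
print(coeffs)
```

Since these sample points happen to lie exactly on y = x² + 1, the computed coefficients come out (up to rounding) as [1, 0, 1].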
Numerous Examples and Exercises: There are 321 numbered examples in the text, and
many other unnumbered examples as well, at least one for each new concept or
application, to ensure that students fully understand new material before proceeding
onward. Almost every theorem has a corresponding example to illustrate its meaning
and/or usefulness.
The text also contains an unusually large number of exercises. There are more than
980 numbered exercises, and many of these have multiple parts, for a total of more
than 2660 questions. Some are purely computational. Many others ask the students
to write short proofs. The exercises within each section are generally ordered by
increasing difficulty, beginning with basic computational problems and moving on
to more theoretical problems and proofs. Answers are provided at the end of the
book for approximately half the computational exercises; these problems are marked
with a star (★). Full solutions to the ★ exercises appear in the Student Solutions
Manual.
True/False Exercises: Included among the exercises are 500 True/False questions,
which appear at the end of each section in Chapters 1 through 9, as well as in the
Review Exercises at the end of Chapters 1 through 7, and in Appendices B and C.
These True/False questions help students test their understanding of the fundamental
concepts presented in each section. In particular, these exercises highlight the importance of crucial words in definitions or theorems. Pondering True/False questions
also helps the students learn the logical differences between “true,” “occasionally
true,” and “never true.” Understanding such distinctions is a crucial step toward the
type of reasoning they are expected to possess as mathematicians.
Summary Tables: There are helpful summaries of important material at various points in the text:
■ Table 2.1 (in Section 2.3): The three types of row operations and their inverses
■ Table 3.1 (in Section 3.2): Equivalent conditions for a matrix to be singular (and similarly for nonsingular)
■ Chart following Chapter 3: Techniques for solving a system of linear equations, and for finding the inverse, determinant, eigenvalues, and eigenvectors of a matrix
■ Table 4.1 (in Section 4.4): Equivalent conditions for a subset to be linearly independent (and similarly for linearly dependent)
■ Table 4.2 (in Section 4.6): Contrasts between the Simplified Span Method and the Independence Test Method
■ Table 5.1 (in Section 5.2): Matrices for several geometric linear operators in R3
■ Table 5.2 (in Section 5.5): Equivalent conditions for a linear transformation to be an isomorphism (and similarly for one-to-one, onto)
Symbol Table: Following the Prefaces, for convenience, there is a comprehensive Symbol Table listing all of the major symbols related to linear algebra that are employed in
this text together with their meanings.
Instructor’s Manual: An Instructor’s Manual is available for this text that contains the
answers to all computational exercises, and complete solutions to the theoretical and
proof exercises. In addition, this manual includes three versions of a sample test for
each of Chapters 1 through 7. Answer keys for the sample tests are also included.
Student Solutions Manual: A Student Solutions Manual is available that contains full
solutions for each exercise in the text bearing a ★ (those whose answers appear in
the back of the textbook). The Student Solutions Manual also contains the proofs of most of the theorems whose proofs were left to the exercises. These exercises are marked in the text with a special symbol. Because we have compiled this manual ourselves, it utilizes the same styles of proof-writing and solution techniques that appear in the actual text.
Web Site: Our web site contains appropriate updates on the textbook, as well as a way to communicate with the authors.
MAJOR CHANGES FOR THE FOURTH EDITION
Chapter Review Exercises: We have added review exercises following each of Chapters 1 through 7, including many new True/False exercises.
Section-by-Section Vocabulary and Highlights Summary: After each section in the textbook, for the students’ convenience, there is now a summary of important vocabulary
and a summary of the main results of that section.
QR Factorization and Singular Value Decomposition: New sections have been added on
QR Factorization (Section 9.4) and Singular Value Decomposition (Section 9.5). The
latter includes a new application on digital imaging.
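For readers who want a quick preview of what these two factorizations produce, the following sketch uses NumPy (our choice for illustration; the text itself does not assume any software package). It factors a small matrix both ways and forms the rank-1 truncation of the SVD, the idea underlying low-rank approximation of images:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# QR factorization: A = QR, with Q having orthonormal columns
# and R upper triangular.
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)

# Singular value decomposition: A = U diag(s) V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Keeping only the largest singular value gives the best rank-1
# approximation to A (in the least-squares sense).
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print(np.round(A1, 2))
```

The matrix here is hypothetical; in a digital-imaging application, A would hold pixel intensities and one would keep the first few singular values instead of just one.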
Major Revisions: Many sections of the text have been augmented and/or rewritten for
further clarity. The sections that received the most substantial changes are as follows:
■ Section 1.5 (Matrix Multiplication): A new subsection (“Linear Combinations from Matrix Multiplication”) with some related exercises has been added to show how a linear combination of the rows or columns of a matrix can be accomplished easily using matrix multiplication.
■ Section 3.2 (Determinants and Row Reduction): For greater convenience, the approach to finding the determinant of a matrix by row reduction has been rewritten so that the row reduction now proceeds in a forward manner.
■ Section 3.4 (Eigenvalues and Diagonalization): The concept of similarity is introduced in a more formal manner. Also, the vectors obtained from the row reduction process are labeled as “fundamental eigenvectors” from this point onward in the text, and examples in the section have been reordered for greater clarity.
■ Section 4.4 (Linear Independence): The definition of linear independence is now taken from Theorem 4.7 in the Third Edition: that is, {v1, v2, . . . , vn} is linearly independent if and only if a1v1 + a2v2 + · · · + anvn = 0 implies a1 = a2 = · · · = an = 0.
■ Section 4.5 (Basis and Dimension): The main theorem of this section (now Theorem 4.12), that any two bases for the same finite dimensional vector space have the same size, was preceded in the previous edition by two lemmas. These lemmas have now been consolidated into one “technical lemma” (Lemma 4.11) and proven using linear systems rather than the exchange method.
■ Section 4.7 (Coordinatization): The examples in this section have been rewritten to streamline the overall presentation and introduce the row reduction method for coordinatization sooner.
■ Section 5.3 (The Dimension Theorem): The Dimension Theorem is now proven (in a more straightforward manner) for the special case of a linear transformation from Rn to Rm, and the proof for more general linear transformations is now given in Section 5.5, once the appropriate properties of isomorphisms have been introduced. (An alternate proof of the Dimension Theorem in the general case is outlined in Exercise 18 of Section 5.3.)
■ Section 5.4 (One-to-One and Onto Linear Transformations) and Section 5.5 (Isomorphism): Much of the material of these two sections was previously in a single section, but has now been extensively revised. This new approach gives the students more familiarity with one-to-one and onto transformations before proceeding to isomorphisms. Also, there is a more thorough explanation of how isomorphisms preserve important properties of vector spaces. This, in turn, validates more carefully the methods used in Chapter 4 for finding particular bases for general vector spaces other than Rn. [The material formerly in Section 5.5 in the Third Edition has been moved to Section 5.6 (Diagonalization of Linear Operators) in the Fourth Edition.]
■ Chapter 8 (Additional Applications): Several of the sections in this chapter have been rewritten for improved clarity, including Section 8.2 (Ohm’s Law) in order to stress the use of both of Kirchhoff’s Laws, Section 8.3 (Least-Squares Polynomials) in order to present concrete examples first before stating the general result (Theorem 8.2), Section 8.7 (Rotation of Axes) in which the emphasis is now on a clockwise rotation of axes for simplicity, and Section 8.8 (Computer Graphics) in which there are many minor improvements in the presentation, including a more careful approach to the display of pixel coordinates and to the concept of geometric similarity.
■ Appendix A (Miscellaneous Proofs): A proof of Theorem 2.4 (uniqueness of reduced row echelon form for a matrix) has been added.
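The linear independence criterion stated above for Section 4.4 lends itself to a mechanical test: vectors are linearly independent exactly when the matrix having them as columns has full column rank. Here is a minimal sketch in Python with NumPy (an illustration of ours, not part of the text):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given equal-length vectors are linearly
    independent, i.e., the matrix with them as columns has full
    column rank (the only solution of a1*v1 + ... + an*vn = 0 is
    a1 = ... = an = 0)."""
    M = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(M) == M.shape[1])

# Independent: the standard basis vectors of R^3.
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True

# Dependent: the third vector is the sum of the first two.
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # False
```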
Also, Chapter 10 in the Third Edition has been eliminated, and two of its three sections (Elementary Matrices, Quadratic Forms) have been incorporated into Chapter 8 in the Fourth Edition (as Sections 8.6 and 8.11, respectively). The sections from the Third Edition entitled “Change of Variables and the Jacobian,” “Max-Min Problems in Rn and the Hessian Matrix,” and “Function Spaces” have been eliminated, but are available for downloading and use from the text’s web site. Also, the appendix “Computers and Calculators” from previous editions has been removed because the most common computer packages (e.g., Maple, MATLAB, Mathematica) that are used in conjunction with linear algebra courses now contain introductory tutorials that are much more thorough than what can be provided here.
PREREQUISITE CHART FOR SECTIONS IN CHAPTERS 7, 8, 9
Prerequisites for the material in Chapters 7 through 9 are listed in the following chart.
The sections of Chapters 8 and 9 are generally independent of each other, and any of
these sections can be covered after its prerequisite has been met.
Section 7.1 (Complex n-Vectors and Matrices) requires Section 1.5 (Matrix Multiplication)
Section 7.2 (Complex Eigenvalues and Complex Eigenvectors)* requires Section 3.4 (Eigenvalues and Diagonalization)
Section 7.3 (Complex Vector Spaces)* requires Section 5.2 (The Matrix of a Linear Transformation)
Section 7.4 (Orthogonality in Cn)* requires Section 6.3 (Orthogonal Diagonalization)
Section 7.5 (Inner Product Spaces)* requires Section 6.3 (Orthogonal Diagonalization)
Section 8.1 (Graph Theory) requires Section 1.5 (Matrix Multiplication)
Section 8.2 (Ohm’s Law) requires Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.3 (Least-Squares Polynomials) requires Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.4 (Markov Chains) requires Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.5 (Hill Substitution: An Introduction to Coding Theory) requires Section 2.4 (Inverses of Matrices)
Section 8.6 (Elementary Matrices) requires Section 2.4 (Inverses of Matrices)
Section 8.7 (Rotation of Axes for Conic Sections) requires Section 4.7 (Coordinatization)
Section 8.8 (Computer Graphics) requires Section 5.2 (The Matrix of a Linear Transformation)
Section 8.9 (Differential Equations)** requires Section 5.6 (Diagonalization of Linear Operators)
Section 8.10 (Least-Squares Solutions for Inconsistent Systems) requires Section 6.2 (Orthogonal Complements)
Section 8.11 (Quadratic Forms) requires Section 6.3 (Orthogonal Diagonalization)
Section 9.1 (Numerical Methods for Solving Systems) requires Section 2.3 (Equivalent Systems, Rank, and Row Space)
Section 9.2 (LDU Decomposition) requires Section 2.4 (Inverses of Matrices)
Section 9.3 (The Power Method for Finding Eigenvalues) requires Section 3.4 (Eigenvalues and Diagonalization)
Section 9.4 (QR Factorization) requires Section 6.1 (Orthogonal Bases and the Gram-Schmidt Process)
Section 9.5 (Singular Value Decomposition) requires Section 6.3 (Orthogonal Diagonalization)

*In addition to the prerequisites listed, each section in Chapter 7 requires the sections of Chapter 7 that precede it, although most of Section 7.5 can be covered without having covered Sections 7.1 through 7.4 by concentrating only on real inner products.
**The techniques presented for solving differential equations in Section 8.9 require only Section 3.4 as a prerequisite. However, terminology from Chapters 4 and 5 is used throughout Section 8.9.
PLANS FOR COVERAGE
Chapters 1 through 6 have been written in a sequential fashion. Each section is generally needed as a prerequisite for what follows. Therefore, we recommend that these
sections be covered in order. However, there are three exceptions:
■ Section 1.3 (An Introduction to Proofs) can be covered, in whole or in part, at any time after Section 1.2.
■ Section 3.3 (Further Properties of the Determinant) contains some material that can be omitted without affecting most of the remaining development. The topics of general cofactor expansion, (classical) adjoint matrix, and Cramer’s Rule are used very sparingly in the rest of the text.
■ Section 6.1 (Orthogonal Bases and the Gram-Schmidt Process) can be covered any time after Chapter 4, as can much of the material in Section 6.2 (Orthogonal Complements).
Any section in Chapters 7 through 9 can be covered at any time as long as the
prerequisites for that section have previously been covered. (Consult the Prerequisite
Chart for Sections in Chapters 7, 8, 9.)
The textbook contains much more material than can be covered in a typical
3- or 4-credit course. We expect that the students will read much on their own, while
the instructor emphasizes the highlights. Two suggested timetables for covering the
material in this text are presented below: one for a 3-credit course, and the other for a 4-credit course. A 3-credit course could skip portions of Sections 1.3, 2.3, 3.3, 4.1 (more abstract vector spaces), 5.5, 5.6, 6.2, and 6.3, and all of Chapter 7. A 4-credit course could cover most of the material of Chapters 1 through 6 (perhaps de-emphasizing portions of Sections 1.3, 2.3, and 3.3), and could cover some of Chapter 7. In either course, some of the material in Chapter 1 could be skimmed if students are already familiar with vector and matrix operations.
                                   3-Credit Course    4-Credit Course
Chapter 1                          5 classes          5 classes
Chapter 2                          5 classes          6 classes
Chapter 3                          5 classes          5 classes
Chapter 4                          11 classes         13 classes
Chapter 5                          8 classes          13 classes
Chapter 6                          2 classes          5 classes
Chapter 7                          (not covered)      2 classes
Chapters 8 and 9 (selections)      3 classes          4 classes
Tests                              3 classes          3 classes
Total                              42 classes         56 classes
ACKNOWLEDGMENTS
We gratefully thank all those who have helped in the publication of this book. At
Elsevier/Academic Press, we especially thank Lauren Yuhasz, our Senior Acquisitions Editor, Patricia Osborn, our Acquisitions Editor, Gavin Becker, our Assistant Editor, Philip Bugeau, our Project Manager, and Deborah Prato, our Copyeditor.
We also want to thank those who have supported our textbook at various stages.
In particular, we thank Agnes Rash, former Chair of the Mathematics and Computer
Science Department at Saint Joseph’s University, for her support of our project. We
also thank Paul Klingsberg and Richard Cavaliere of Saint Joseph’s University, both
of whom gave us many suggestions for improvements to this edition and earlier
editions.
We especially thank those students who have classroom-tested versions of the earlier editions of the manuscript. Their comments and suggestions have been extremely
helpful, and have guided us in shaping the text in many ways.
We acknowledge those reviewers who have supplied many worthwhile suggestions. For reviewing the first edition, we thank the following:
C. S. Ballantine, Oregon State University
Yuh-ching Chen, Fordham University
Susan Jane Colley, Oberlin College
Roland di Franco, University of the Pacific
Colin Graham, Northwestern University
K. G. Jinadasa, Illinois State University
Ralph Kelsey, Denison University
Masood Otarod, University of Scranton
J. Bryan Sperry, Pittsburg State University
Robert Tyler, Susquehanna University
For reviewing the second edition, we thank the following:
Ruth Favro, Lawrence Technological University
Howard Hamilton, California State University
Ray Heitmann, University of Texas, Austin
Richard Hodel, Duke University
James Hurley, University of Connecticut
Jack Lawlor, University of Vermont
Peter Nylen, Auburn University
Ed Shea, California State University, Sacramento
For reviewing the third edition, we thank the following:
Sergei Bezrukov, University of Wisconsin Superior
Susan Jane Colley, Oberlin College
John Lawlor, University of Vermont
Vania Mascioni, Ball State University
Ali Miri, University of Ottawa
Ian Morrison, Fordham University
Don Passman, University of Wisconsin
Joel Robbin, University of Wisconsin
Last, but most important of all, we want to thank our wives, Ene and Lyn, for bearing extra hardships so that we could work on this text. Their love and support have been an inspiration.
Stephen Andrilli
David Hecker
May, 2009
Preface for the Student
OVERVIEW OF THE MATERIAL
Chapters 1 to 3: Appetizer: Linear algebra is a branch of mathematics that is largely
concerned with solving systems of linear equations. The main tools for working with
systems of linear equations are vectors and matrices. Therefore, this text begins with an
introduction to vectors and matrices and their fundamental properties in Chapter 1.
This is followed by techniques for solving linear systems in Chapter 2. Chapter 3
introduces determinants and eigenvalues, which help us to better understand the
behavior of linear systems.
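For instance (a simple illustration of our own, not an example drawn from the text), a system of linear equations can be written compactly as a single matrix equation involving a matrix and vectors:

```latex
% The system  x + 2y = 5,  3x - y = 1  in matrix-vector form Ax = b:
\[
\underbrace{\begin{bmatrix} 1 & 2 \\ 3 & -1 \end{bmatrix}}_{A}
\underbrace{\begin{bmatrix} x \\ y \end{bmatrix}}_{\mathbf{x}}
=
\underbrace{\begin{bmatrix} 5 \\ 1 \end{bmatrix}}_{\mathbf{b}},
\qquad \text{with solution } x = 1,\; y = 2.
\]
```

Viewing a system this way, as $A\mathbf{x} = \mathbf{b}$, is the starting point for the solution techniques of Chapter 2.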
Chapters 4 to 7: Main Course: The material of Chapters 1, 2, and 3 is treated in a more
abstract form in Chapters 4 through 7. In Chapter 4, the concept of a vector space
(a collection of general vectors) is introduced, and in Chapter 5, mappings between
vector spaces are considered. Chapter 6 explores orthogonality in the most common
vector space, and Chapter 7 considers more general types of vector spaces, such as
complex vector spaces and inner product spaces.
Chapters 8 and 9: Dessert: The powerful techniques of linear algebra lend themselves
to many important and diverse applications in science, social science, and business,
as well as in other branches of mathematics. While some of these applications are
covered in the text as new material is introduced, others of a more lengthy nature are
placed in Chapter 8, which is entirely devoted to applications of linear algebra. There
are also many useful numerical algorithms and methods associated with linear algebra,
some of which are covered in Chapters 1 through 7. Additional numerical algorithms
are explored in Chapter 9.
HELPFUL ADVICE
Strategies for Learning: Many students find the transition to abstractness that begins
in Chapter 4 to be challenging. This textbook was written specifically to help you in
this regard. We have tried to present the material in the clearest possible manner with
many helpful examples. We urge you to take advantage of this and read each section
of the textbook thoroughly and carefully many times over. Each re-reading will allow
you to see connections among the concepts on a deeper level. Try as many problems
in each section as possible. There are True/False questions to test your knowledge at
the end of each section, as well as at the end of each of the sets of Review Exercises
for Chapters 1 to 7. After pondering these first on your own, consult the explanations
for the answers in the Student Solutions Manual.
Facility with Proofs: Linear algebra is considered by many instructors to be a transitional course from the computationally oriented freshman calculus sequence to the
junior- and senior-level courses, which put much more emphasis on the reading and writing
of mathematical proofs. At first it may seem daunting to you to write your own proofs.
However, most of the proofs that you are asked to write for this text are relatively
short. Many useful strategies for proof-writing are discussed in Section 1.3. The proofs
that are presented in this text are meant to serve as good examples. Study them carefully. Remember that each step of a proof must be validated with a proper reason:
a theorem that was proven earlier, a definition, or a principle of logic. Understanding each definition and theorem in the text carefully is very valuable, for only by fully
comprehending each mathematical definition and theorem can you fully appreciate
how to use it in a proof. Learning how to read and write proofs effectively is an important skill that will serve you well in your upper-division mathematics courses and
beyond.
Student Solutions Manual: A Student Solutions Manual is available that contains full
solutions for each exercise in the text bearing a ★ (those whose answers appear in
the back of the textbook). It therefore contains additional useful examples and models
of how to solve various types of problems. The Student Solutions Manual also contains
the proofs of most of the theorems whose proofs were left to the exercises. These
exercises are marked in the text with a special symbol. The Student Solutions Manual is intended
to serve as a strong support to assist you in mastering the textbook material.
LINEAR ALGEBRA TERM-BY-TERM
As students vector through the space of this text from its initial point to its terminal
point, we hope that on a one-to-one basis, they will undergo a real transformation
from the norm. Their induction into the domain of linear algebra should be sufficient
to produce a pivotal change in their abilities.
One characteristic that we expect students to manifest is a greater linear independence in problem-solving. After much reflection on the kernel of ideas presented in this
book, the range of new methods available to them should be graphically augmented
in a multiplicity of ways. An associative feature of this transition is that all of the new
techniques they learn should become a consistent and normalized part of their identity in the future. In addition, students will gain a singular new appreciation of their
mathematical skills. Consequently, the resultant change in their self-image should be
one of no minor magnitude.
One obvious implication is that the level of the students’ success is an isomorphic
reflection of the amount of homogeneous energy they expend on this complex material. That is, we can often trace the rank of their achievement to the depth of their
resolve to be a scalar of new distances. Similarly, we make this symmetric claim: the
students’positive,definite growth is clearly a function of their overall coordinatization
of effort. Naturally, the matrix of thought behind this parallel assertion is that students
should avoid the negative consequences of sparse learning. Instead, it is the inverse
approach of systematic and iterative study that will ultimately lead them to less error,
and not rotate them into useless dead-ends and diagonal tangents of zero worth.
Of course some nontrivial length of time is necessary to transpose a student with
an empty set of knowledge on this subject into higher echelons of understanding. But,
our projection is that the unique dimensions of this text will be a determinant factor
in enriching the span of students’ lives, and translate them onto new orthogonal paths
of wisdom.
Stephen Andrilli
David Hecker
May, 2009
Symbol Table
⊕                  addition on a vector space (unusual)
A                  adjoint (classical) of a matrix A
I                  ampere (unit of current)
≈                  approximately equal to
[A|B]              augmented matrix formed from matrices A and B
p_L(x)             characteristic polynomial of a linear operator L
p_A(x)             characteristic polynomial of a matrix A
A_ij               cofactor, (i, j), of a matrix A
z̄                  complex conjugate of a complex number z
z̄                  complex conjugate of z ∈ C^n
Z̄                  complex conjugate of Z ∈ M^C_mn
C                  complex numbers, set of
C^n                complex n-vectors, set of (ordered n-tuples of complex numbers)
g ∘ f              composition of functions f and g
L_2 ∘ L_1          composition of linear transformations L_1 and L_2
Z^*                conjugate transpose of Z ∈ M^C_mn
C^0(R)             continuous real-valued functions with domain R, set of
C^1(R)             continuously differentiable functions with domain R, set of
[w]_B              coordinatization of a vector w with respect to a basis B
x × y              cross product of vectors x and y
f^(n)              derivative, nth, of a function f
|A|                determinant of a matrix A
δ                  determinant of a 2 × 2 matrix, ad − bc
D_n                diagonal n × n matrices, set of
dim(V)             dimension of a vector space V
x · y              dot product or complex dot product of vectors x and y
λ                  eigenvalue of a matrix
E_λ                eigenspace corresponding to eigenvalue λ
{ }, ∅             empty set
a_ij               entry, (i, j), of a matrix A
f: X → Y           function f from a set X (domain) to a set Y (codomain)
I, I_n             identity matrix; n × n identity matrix
⇔, iff             if and only if
f(S)               image of a set S under a function f
f(x)               image of an element x under a function f
i                  imaginary number whose square = −1
⇒                  implies; if...then
⟨x, y⟩             inner product of x and y
Z                  integers, set of
f^−1               inverse of a function f
L^−1               inverse of a linear transformation L
A^−1               inverse of a matrix A
≅                  isomorphic
ker(L)             kernel of a linear transformation L
δ_ij               Kronecker delta
||a||              length, or norm, of a vector a
Mf                 limit matrix of a Markov chain
pf                 limit vector of a Markov chain
L_n                lower triangular n × n matrices, set of
|z|                magnitude (absolute value) of a complex number z
M_mn               matrices of size m × n, set of
M^C_mn             matrices of size m × n with complex entries, set of
A_BC               matrix for a linear transformation with respect to ordered bases B and C
|A_ij|             minor, (i, j), of a matrix A
N                  natural numbers, set of
not A              negation of statement A
|S|                number of elements in a set S
Ω                  ohm (unit of resistance)
(v_1, v_2, ..., v_n)   ordered basis containing vectors v_1, v_2, ..., v_n
W^⊥                orthogonal complement of a subspace W
⊥                  perpendicular to
P_n                polynomials of degree ≤ n, set of
P^C_n              polynomials of degree ≤ n with complex coefficients, set of
P                  polynomials, set of all
R^+                positive real numbers, set of
A^k                power, kth, of a matrix A
f^−1(S)            pre-image of a set S under a function f
f^−1(x)            pre-image of an element x under a function f
proj_a b           projection of b onto a
proj_W v           projection of v onto a subspace W
A^+                pseudoinverse of a matrix A
range(L)           range of a linear transformation L
rank(A)            rank of a matrix A
R                  real numbers, set of
R^n                real n-vectors, set of (ordered n-tuples of real numbers)
⟨i⟩ ← c⟨i⟩         row operation of type (I)
⟨i⟩ ← c⟨j⟩ + ⟨i⟩   row operation of type (II)
⟨i⟩ ↔ ⟨j⟩          row operation of type (III)
R(A)               row operation R applied to matrix A
⊙                  scalar multiplication on a vector space (unusual)
σ_k                singular value, kth, of a matrix
m × n              size of a matrix with m rows and n columns
span(S)            span of a set S