
Universitext



Universitext
Series Editors:
Sheldon Axler

San Francisco State University
Vincenzo Capasso

Università degli Studi di Milano
Carles Casacuberta

Universitat de Barcelona
Angus J. MacIntyre

Queen Mary, University of London
Kenneth Ribet

University of California, Berkeley
Claude Sabbah

CNRS, École Polytechnique
Endre Süli

University of Oxford


Wojbor A. Woyczynski

Case Western Reserve University

Universitext is a series of textbooks that presents material from a wide variety
of mathematical disciplines at master’s level and beyond. The books, often well
class-tested by their author, may have an informal, personal, even experimental
approach to their subject matter. Some of the most successful and established books
in the series have evolved through several editions, always following the evolution
of teaching curricula, to very polished texts.
Thus as research topics trickle down into graduate-level teaching, first textbooks
written for new, cutting-edge courses may make their way into Universitext.



Fuzhen Zhang

Matrix Theory
Basic Results and Techniques
Second Edition

Linear Park, Davie, Florida, USA



Fuzhen Zhang

Division of Math, Science, and Technology
Nova Southeastern University
Fort Lauderdale, FL 33314
USA


ISBN 978-1-4614-1098-0
e-ISBN 978-1-4614-1099-7
DOI 10.1007/978-1-4614-1099-7
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011935372
Mathematics Subject Classification (2010): 15-xx, 47-xx

© Springer Science+Business Media, LLC 2011
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they
are not identified as such, is not to be taken as an expression of opinion as to whether or not they are
subject to proprietary rights.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)



To Cheng, Sunny, Andrew, and Alan




Preface to the Second Edition

The first edition of this book appeared a decade ago. This is a revised
expanded version. My goal has remained the same: to provide a text for
a second course in matrix theory and linear algebra accessible to advanced
undergraduate and beginning graduate students. Through the course, students learn, practice, and master basic matrix results and techniques (or
matrix kung fu) that are useful for applications in various fields such as
mathematics, statistics, physics, computer science, and engineering.
Major changes for the new edition are: eliminated errors, typos, and
mistakes found in the first edition; expanded with topics such as matrix
functions, nonnegative matrices, and (unitarily invariant) matrix norms;
included more than 1000 exercise problems; rearranged some material from
the previous version to form a new chapter, Chapter 4, which now contains
numerical ranges and radii, matrix norms, and special operations such as
the Kronecker and Hadamard products and compound matrices; and added
a new chapter, Chapter 10, “Majorization and Matrix Inequalities”, which
presents a variety of inequalities on the eigenvalues and singular values of
matrices and unitarily invariant norms.
I am thankful to many mathematicians who have sent me their comments on the first edition of the book or reviewed the manuscript of this
edition: Liangjun Bai, Jane Day, Farid O. Farid, Takayuki Furuta, Geoffrey
Goodson, Roger Horn, Zejun Huang, Minghua Lin, Dennis Merino, George
P. H. Styan, Götz Trenkler, Qingwen Wang, Yimin Wei, Changqing Xu, Hu
Yang, Xingzhi Zhan, Xiaodong Zhang, and Xiuping Zhang. I also thank
Farquhar College of Arts and Sciences at Nova Southeastern University for
providing released time for me to work on this project.
Readers are welcome to communicate with me via e-mail.
Fuzhen Zhang
Fort Lauderdale
May 23, 2011

www.nova.edu/~zhang


Preface

It has been my goal to write a concise book that contains fundamental ideas, results, and techniques in linear algebra and (mainly) in matrix
theory which are accessible to general readers with an elementary linear
algebra background. I hope this book serves the purpose.
Having been studied for more than a century, linear algebra is of central
importance to all fields of mathematics. Matrix theory is widely used in
a variety of areas including applied math, computer science, economics,
engineering, operations research, statistics, and others.
Modern work in matrix theory is not confined to either linear or algebraic techniques. The subject has a great deal of interaction with combinatorics, group theory, graph theory, operator theory, and other mathematical
disciplines. Matrix theory is still one of the richest branches of mathematics;
some intriguing problems in the field were long-standing, such as the van
der Waerden conjecture (1926–1980), and some, such as the permanental
dominance conjecture (since 1966), are still open.
This book contains eight chapters covering various topics from similarity and special types of matrices to Schur complements and matrix
normality. Each chapter focuses on the results, techniques, and methods
that are beautiful, interesting, and representative, followed by carefully selected problems. Many theorems are given different proofs. The material
is treated primarily by matrix approaches and reflects the author’s tastes.
The book can be used as a text or a supplement for a linear algebra
or matrix theory class or seminar. A one-semester course may consist of
the first four chapters plus any other chapter(s) or section(s). The only
prerequisites are a decent background in elementary linear algebra and
calculus (continuity, derivative, and compactness in a few places). The
book can also serve as a reference for researchers and instructors.
The author has benefited from numerous books and journals, including
The American Mathematical Monthly, Linear Algebra and Its Applications,
Linear and Multilinear Algebra, and the International Linear Algebra Society (ILAS) Bulletin Image. This book would not exist without the earlier
works of a great number of authors (see the References).
I am grateful to the following professors for many valuable suggestions
and input and for carefully reading the manuscript so that many errors
have been eliminated from the earlier version of the book:
Professor R. B. Bapat (Indian Statistical Institute),
Professor L. Elsner (University of Bielefeld),
Professor R. A. Horn (University of Utah),

Professor T.-G. Lei (National Natural Science Foundation of China),
Professor J.-S. Li (University of Science and Technology of China),
Professor R.-C. Li (University of Kentucky),
Professor Z.-S. Li (Georgia State University),
Professor D. Simon (Nova Southeastern University),
Professor G. P. H. Styan (McGill University),
Professor B.-Y. Wang (Beijing Normal University), and
Professor X.-P. Zhang (Beijing Normal University).
F. Zhang
Ft. Lauderdale
March 5, 1999

www.nova.edu/~zhang



Contents

Preface to the Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .ix
Frequently Used Notation and Terminology . . . . . . . . . . . . . . . . . . . .xv
Frequently Used Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

1  Elementary Linear Algebra Review . . . . . . . . . . . . . . . . . . . . . . . . . 1
   1.1  Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
   1.2  Matrices and Determinants . . . . . . . . . . . . . . . . . . . . . . . . . 8
   1.3  Linear Transformations and Eigenvalues . . . . . . . . . . . . . . . . . 17
   1.4  Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2  Partitioned Matrices, Rank, and Eigenvalues . . . . . . . . . . . . . . . . . 35
   2.1  Elementary Operations of Partitioned Matrices . . . . . . . . . . . . . 35
   2.2  The Determinant and Inverse of Partitioned Matrices . . . . . . . . . 42
   2.3  The Rank of Product and Sum . . . . . . . . . . . . . . . . . . . . . . . 51
   2.4  The Eigenvalues of AB and BA . . . . . . . . . . . . . . . . . . . . . . 57
   2.5  The Continuity Argument and Matrix Functions . . . . . . . . . . . . . 62
   2.6  Localization of Eigenvalues: The Geršgorin Theorem . . . . . . . . . . 67

3  Matrix Polynomials and Canonical Forms . . . . . . . . . . . . . . . . . . . . 73
   3.1  Commuting Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
   3.2  Matrix Decompositions . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
   3.3  Annihilating Polynomials of Matrices . . . . . . . . . . . . . . . . . . . 87
   3.4  Jordan Canonical Forms . . . . . . . . . . . . . . . . . . . . . . . . . . 93
   3.5  The Matrices AT, A̅, A∗, ATA, A∗A, and AA̅ . . . . . . . . . . . . . . 102

4  Numerical Ranges, Matrix Norms, and Special Operations . . . . . . . . . . 107
   4.1  Numerical Range and Radius . . . . . . . . . . . . . . . . . . . . . . . 107
   4.2  Matrix Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
   4.3  The Kronecker and Hadamard Products . . . . . . . . . . . . . . . . . . 117
   4.4  Compound Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5  Special Types of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
   5.1  Idempotence, Nilpotence, Involution, and Projections . . . . . . . . . 125
   5.2  Tridiagonal Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
   5.3  Circulant Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
   5.4  Vandermonde Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
   5.5  Hadamard Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
   5.6  Permutation and Doubly Stochastic Matrices . . . . . . . . . . . . . . . 155
   5.7  Nonnegative Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

6  Unitary Matrices and Contractions . . . . . . . . . . . . . . . . . . . . . . . 171
   6.1  Properties of Unitary Matrices . . . . . . . . . . . . . . . . . . . . . . 171
   6.2  Real Orthogonal Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 177
   6.3  Metric Space and Contractions . . . . . . . . . . . . . . . . . . . . . . 182
   6.4  Contractions and Unitary Matrices . . . . . . . . . . . . . . . . . . . . 188
   6.5  The Unitary Similarity of Real Matrices . . . . . . . . . . . . . . . . . 192
   6.6  A Trace Inequality of Unitary Matrices . . . . . . . . . . . . . . . . . 195

7  Positive Semidefinite Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 199
   7.1  Positive Semidefinite Matrices . . . . . . . . . . . . . . . . . . . . . . 199
   7.2  A Pair of Positive Semidefinite Matrices . . . . . . . . . . . . . . . . . 207
   7.3  Partitioned Positive Semidefinite Matrices . . . . . . . . . . . . . . . . 217
   7.4  Schur Complements and Determinant Inequalities . . . . . . . . . . . . . 227
   7.5  The Kronecker and Hadamard Products
        of Positive Semidefinite Matrices . . . . . . . . . . . . . . . . . . . . 234
   7.6  Schur Complements and the Hadamard Product . . . . . . . . . . . . . . . 240
   7.7  The Wielandt and Kantorovich Inequalities . . . . . . . . . . . . . . . . 245

8  Hermitian Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
   8.1  Hermitian Matrices and Their Inertias . . . . . . . . . . . . . . . . . . 253
   8.2  The Product of Hermitian Matrices . . . . . . . . . . . . . . . . . . . . 260
   8.3  The Min-Max Theorem and Interlacing Theorem . . . . . . . . . . . . . . . 266
   8.4  Eigenvalue and Singular Value Inequalities . . . . . . . . . . . . . . . 274
   8.5  Eigenvalues of Hermitian matrices A, B, and A + B . . . . . . . . . . . 281
   8.6  A Triangle Inequality for the Matrix (A∗A)^{1/2} . . . . . . . . . . . . 287

9  Normal Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
   9.1  Equivalent Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 293
   9.2  Normal Matrices with Zero and One Entries . . . . . . . . . . . . . . . . 306
   9.3  Normality and Cauchy–Schwarz-Type Inequality . . . . . . . . . . . . . . 312
   9.4  Normal Matrix Perturbation . . . . . . . . . . . . . . . . . . . . . . . 319

10 Majorization and Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . 325
   10.1  Basic Properties of Majorization . . . . . . . . . . . . . . . . . . . . 325
   10.2  Majorization and Stochastic Matrices . . . . . . . . . . . . . . . . . . 334
   10.3  Majorization and Convex Functions . . . . . . . . . . . . . . . . . . . 340
   10.4  Majorization of Diagonal Entries, Eigenvalues,
         and Singular Values . . . . . . . . . . . . . . . . . . . . . . . . . . 349
   10.5  Majorization for Matrix Sum . . . . . . . . . . . . . . . . . . . . . . 356
   10.6  Majorization for Matrix Product . . . . . . . . . . . . . . . . . . . . 363
   10.7  Majorization and Unitarily Invariant Norms . . . . . . . . . . . . . . . 372

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395



Frequently Used Notation and Terminology

dim V, 3           dimension of vector space V
Mn, 8              n × n (i.e., n-square) matrices with complex entries
A = (aij), 8       matrix A with (i, j)-entry aij
I, 9               identity matrix
AT, 9              transpose of matrix A
A̅, 9               conjugate of matrix A
A∗, 9              conjugate transpose of matrix A, i.e., A∗ = (A̅)T
A−1, 13            inverse of matrix A
rank (A), 11       rank of matrix A
tr A, 21           trace of matrix A
det A, 12          determinant of matrix A
|A|, 12, 83, 164   determinant for a block matrix A or (A∗A)^{1/2} or (|aij|)
(u, v), 27         inner product of vectors u and v
∥ · ∥, 28, 113     norm of a vector or a matrix
Ker(A), 17         kernel or null space of A, i.e., Ker(A) = {x : Ax = 0}
Im(A), 17          image space of A, i.e., Im(A) = {Ax}
ρ(A), 109          spectral radius of matrix A
σmax (A), 109      largest singular value (spectral norm) of matrix A
λmax (A), 124      largest eigenvalue of matrix A
A ≥ 0, 81          A is positive semidefinite (or all aij ≥ 0 in Section 5.7)
A ≥ B, 81          A − B is positive semidefinite (or aij ≥ bij in Section 5.7)
A ◦ B, 117         Hadamard (entrywise) product of matrices A and B
A ⊗ B, 117         Kronecker (tensor) product of matrices A and B
x ≺w y, 326        weak majorization, i.e., ∑_{i=1}^k xi↓ ≤ ∑_{i=1}^k yi↓ holds for all k
x ≺wlog y, 344     weak log-majorization, i.e., ∏_{i=1}^k xi↓ ≤ ∏_{i=1}^k yi↓ holds for all k

An n × n matrix A is said to be

upper-triangular          if all entries below the main diagonal are zero
diagonalizable            if P−1AP is diagonal for some invertible matrix P
similar to B              if P−1AP = B for some invertible matrix P
unitarily similar to B    if U∗AU = B for some unitary matrix U
unitary                   if AA∗ = A∗A = I, i.e., A−1 = A∗
positive semidefinite     if x∗Ax ≥ 0 for all vectors x ∈ Cn
Hermitian                 if A = A∗
normal                    if A∗A = AA∗

λ ∈ C is an eigenvalue of A ∈ Mn if Ax = λx for some nonzero x ∈ Cn .


Frequently Used Theorems


• Cauchy–Schwarz inequality: Let V be an inner product space over
a number field (R or C). Then for all vectors x and y in V
|(x, y)|² ≤ (x, x)(y, y).
Equality holds if and only if x and y are linearly dependent.
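The inequality is easy to check numerically. The NumPy sketch below uses arbitrary vectors in R⁴ (a hypothetical example, not drawn from the text) and verifies the equality case for linearly dependent vectors:

```python
import numpy as np

# Arbitrary vectors in R^4 (hypothetical data, not from the text).
rng = np.random.default_rng(1)
x = rng.standard_normal(4)
y = rng.standard_normal(4)

# |(x, y)|^2 <= (x, x)(y, y) for the standard inner product.
assert np.dot(x, y) ** 2 <= np.dot(x, x) * np.dot(y, y)

# Equality holds when x and y are linearly dependent.
z = -2.5 * x
assert np.isclose(np.dot(x, z) ** 2, np.dot(x, x) * np.dot(z, z))
```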
• Theorem on the eigenvalues of AB and BA: Let A and B be m × n
and n × m complex matrices, respectively. Then AB and BA have the
same nonzero eigenvalues, counting multiplicity. As a consequence,
tr(AB) = tr(BA).
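The theorem can be illustrated numerically; in the NumPy sketch below, A and B are arbitrary 2 × 3 and 3 × 2 matrices chosen for illustration (not taken from the text):

```python
import numpy as np

# Hypothetical 2x3 and 3x2 matrices (not from the text).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])

AB, BA = A @ B, B @ A          # 2x2 and 3x3
ev_AB = np.linalg.eigvals(AB)
ev_BA = np.linalg.eigvals(BA)  # same nonzero eigenvalues, plus one 0

# BA has exactly one extra eigenvalue, and it must be 0.
assert np.allclose(np.sort(np.abs(ev_AB)), np.sort(np.abs(ev_BA))[1:])
assert np.isclose(np.trace(AB), np.trace(BA))
```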
• Schur triangularization theorem: For any n-square matrix A, there
exists an n-square unitary matrix U such that U ∗ AU is upper-triangular.
• Jordan decomposition theorem: For any n-square matrix A, there
exists an n-square invertible complex matrix P such that
A = P −1 (J1 ⊕ J2 ⊕ · · · ⊕ Jk )P,
where each Ji , i = 1, 2, . . . , k, is a Jordan block.
• Spectral decomposition theorem: Let A be an n-square normal
matrix with eigenvalues λ1 , λ2 , . . . , λn . Then there exists an n-square
unitary matrix U such that
A = U ∗ diag(λ1 , λ2 , . . . , λn )U.
In particular, if A is positive semidefinite, then all λi ≥ 0; if A is Hermitian, then all λi are real; and if A is unitary, then all |λi | = 1.
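In the Hermitian case this decomposition is what `numpy.linalg.eigh` computes; note that NumPy returns A = U diag(λ) U∗, which is the same statement with U relabeled. The matrix below is an assumed example:

```python
import numpy as np

# A small Hermitian (hence normal) matrix; an assumed example.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# eigh returns real eigenvalues and a unitary U with A = U diag(lam) U*.
lam, U = np.linalg.eigh(A)

assert np.allclose(U.conj().T @ U, np.eye(2))         # U is unitary
assert np.allclose(U @ np.diag(lam) @ U.conj().T, A)  # spectral decomposition
assert np.all(np.isreal(lam))                         # Hermitian => real spectrum
```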
• Singular value decomposition theorem: Let A be an m×n complex
matrix with rank r. Then there exist an m-square unitary matrix U and
an n-square unitary matrix V such that
A = U DV,
where D is the m × n matrix with (i, i)-entries being the singular values
of A, i = 1, 2, . . . , r, and other entries 0. If m = n, then D is diagonal.
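`numpy.linalg.svd` returns exactly this factorization; the sketch below assembles D explicitly for an arbitrary 3 × 5 real matrix (an assumed example):

```python
import numpy as np

# An arbitrary 3x5 real matrix (assumed example).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

U, s, Vh = np.linalg.svd(A)    # U: 3x3 unitary, Vh: 5x5 unitary
D = np.zeros((3, 5))
D[:3, :3] = np.diag(s)         # singular values on the (i, i) entries

assert np.allclose(U @ D @ Vh, A)
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)  # nonnegative, decreasing
```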


CHAPTER 1
Elementary Linear Algebra Review

Introduction: We briefly review, mostly without proof, the basic

concepts and results taught in an elementary linear algebra course.
The subjects are vector spaces, basis and dimension, linear transformations and their eigenvalues, and inner product spaces.

1.1

Vector Spaces

Let V be a set of objects (elements) and F be a field, mostly the real
number field R or the complex number field C throughout this book.
The set V is called a vector space over F if the operations addition
u + v,

u, v ∈ V,

and scalar multiplication
cv,

c ∈ F, v ∈ V,

are defined so that the addition is associative, is commutative, has

an additive identity 0 and additive inverse −v in V for each v ∈ V ,
and so that the scalar multiplication is distributive, is associative,
and has an identity 1 ∈ F for which 1v = v for every v ∈ V .




Put these in symbols:
1. u + v ∈ V for all u, v ∈ V .
2. cv ∈ V for all c ∈ F and v ∈ V .
3. u + v = v + u for all u, v ∈ V .
4. (u + v) + w = u + (v + w) for all u, v, w ∈ V .
5. There is an element 0 ∈ V such that v + 0 = v for all v ∈ V .
6. For each v ∈ V there is an element −v ∈ V so that v+(−v) = 0.
7. c(u + v) = cu + cv for all c ∈ F and u, v ∈ V .
8. (a + b)v = av + bv for all a, b ∈ F and v ∈ V .
9. (ab)v = a(bv) for all a, b ∈ F and v ∈ V .
10. 1v = v for all v ∈ V .
Figure 1.1: Vector addition and scalar multiplication
We call the elements of a vector space vectors and the elements
of the field scalars. For instance, Rn , the set of real column vectors


(x1 , x2 , . . . , xn )T

(T for transpose) is a vector space over R with respect to the addition
(x1 , x2 , . . . , xn )T + (y1 , y2 , . . . , yn )T = (x1 + y1 , x2 + y2 , . . . , xn + yn )T
and the scalar multiplication
c (x1 , x2 , . . . , xn )T = (cx1 , cx2 , . . . , cxn )T ,

c ∈ R.




Note that the real row vectors also form a vector space over R;
and they are essentially the same as the column vectors as far as
vector spaces are concerned. For convenience, we may also consider
Rn as a row vector space if no confusion is caused. However, in the
matrix-vector product Ax, obviously x needs to be a column vector.
Let S be a nonempty subset of a vector space V over a field F.
Denote by Span S the collection of all finite linear combinations of
the vectors in S; that is, Span S consists of all vectors of the form
c1 v1 + c2 v2 + · · · + ct vt ,  t = 1, 2, . . . ;  ci ∈ F, vi ∈ S.
The set Span S is also a vector space over F. If Span S = V , then
every vector in V can be expressed as a linear combination of vectors
in S. In such cases we say that the set S spans the vector space V .
A set S = {v1 , v2 , . . . , vk } is said to be linearly independent if
c1 v1 + c2 v2 + · · · + ck vk = 0
holds only when c1 = c2 = · · · = ck = 0. If the equation also has a
nontrivial solution, i.e., one in which not all the ci are zero, then S is
linearly dependent.
For example, both {(1, 0), (0, 1), (1, 1)} and {(1, 0), (0, 1)} span
R2 . The first set is linearly dependent; the second one is linearly
independent. The vectors (1, 0) and (1, 1) also span R2 .
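In coordinates, linear independence can be tested by comparing the rank of the matrix whose rows are the given vectors with the number of vectors. A NumPy sketch of the two examples above:

```python
import numpy as np

# Rows are the vectors; the set is linearly independent exactly when
# the matrix rank equals the number of vectors.
S1 = np.array([[1, 0], [0, 1], [1, 1]])  # spans R^2 but is dependent
S2 = np.array([[1, 0], [0, 1]])          # spans R^2 and is independent

assert np.linalg.matrix_rank(S1) == 2 and S1.shape[0] == 3  # dependent
assert np.linalg.matrix_rank(S2) == S2.shape[0]             # independent
```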
A basis of a vector space V is a linearly independent set that spans
V . If V possesses a basis consisting of n vectors, say S = {v1 , v2 , . . . , vn }, we
say that V is of dimension n, written as dim V = n. Conventionally,
if V = {0}, we write dim V = 0. If no finite set spans V,
then V is infinite-dimensional and we write dim V = ∞. Unless
otherwise stated, we assume throughout the book that the vector
spaces are finite-dimensional, as we mostly deal with finite matrices,
even though some results hold for infinite-dimensional spaces.
For instance, C is a vector space of dimension 2 over R with basis
{1, i}, where i = √−1, and of dimension 1 over C with basis {1}.
Cn , the set of row (or column) vectors of n complex components,
is a vector space over C having standard basis
e1 = (1, 0, . . . , 0, 0), e2 = (0, 1, . . . , 0, 0), . . . , en = (0, 0, . . . , 0, 1).



If {u1 , u2 , . . . , un } is a basis for a vector space V of dimension n,
then every x in V can be uniquely expressed as a linear combination
of the basis vectors:
x = x1 u1 + x2 u2 + · · · + xn un ,
where the xi are scalars. The n-tuple (x1 , x2 , . . . , xn ) is called the
coordinate of vector x with respect to the basis.
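Computing the coordinates of x amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. A NumPy sketch with an assumed basis of R² (not from the text):

```python
import numpy as np

# An assumed basis {u1, u2} of R^2 placed as the columns of P.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])
x = np.array([3.0, 1.0])

# x = x1*u1 + x2*u2 is the linear system P c = x for c = (x1, x2).
c = np.linalg.solve(P, x)
assert np.allclose(c, [2.0, 1.0])  # the coordinate of x in this basis
assert np.allclose(P @ c, x)
```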
Let V be a vector space of dimension n, and let {v1 , v2 , . . . , vk } be
a linearly independent subset of V . Then k ≤ n, and it is not difficult
to see that if k < n, then there exists a vector vk+1 ∈ V such that
the set {v1 , v2 , . . . , vk , vk+1 } is linearly independent (Problem 16). It

follows that the set {v1 , v2 , . . . , vk } can be extended to a basis of V .
Let W be a subset of a vector space V . If W is also a vector space
under the addition and scalar multiplication for V , then W is called
a subspace of V . One may check (Problem 9) that W is a subspace if
and only if W is closed under the addition and scalar multiplication.
For subspaces V1 and V2 , the sum of V1 and V2 is defined to be
V1 + V2 = {v1 + v2 : v1 ∈ V1 , v2 ∈ V2 }.
It follows that the sum V1 + V2 is also a subspace. In addition,
the intersection V1 ∩ V2 is a subspace, and
V1 ∩ V2 ⊆ Vi ⊆ V1 + V2 , i = 1, 2.
The sum V1 + V2 is called a direct sum, symbolized by V1 ⊕ V2 , if

v1 + v2 = 0, v1 ∈ V1 , v2 ∈ V2  =⇒  v1 = v2 = 0.

One checks that in the case of a direct sum, every vector in V1 ⊕V2
is uniquely written as a sum of a vector in V1 and a vector in V2 .
Figure 1.2: Direct sum




Theorem 1.1 (Dimension Identity) Let V be a finite-dimensional
vector space, and let V1 and V2 be subspaces of V . Then
dim V1 + dim V2 = dim(V1 + V2 ) + dim(V1 ∩ V2 ).
The proof is done by first choosing a basis {u1 , . . . , uk } for V1 ∩V2 ,
extending it to a basis {u1 , . . . , uk , vk+1 , . . . , vs } for V1 and a basis
{u1 , . . . , uk , wk+1 , . . . , wt } for V2 , and then showing that
{u1 , . . . , uk , vk+1 , . . . , vs , wk+1 , . . . , wt }
is a basis for V1 + V2 .
It follows that subspaces V1 and V2 contain nonzero common
vectors if the sum of their dimensions exceeds dim V .
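The identity can be checked numerically by modeling subspaces as column spaces; in the NumPy sketch below the subspaces are assumed examples, dim(V1 + V2) is the rank of the stacked matrix [A B], and dim(V1 ∩ V2) is known by construction:

```python
import numpy as np

# Model V1, V2 in R^3 as column spaces (assumed example):
# V1 = span{e1, e2}, V2 = span{e2, e3}, so V1 ∩ V2 = span{e2}.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

rank = np.linalg.matrix_rank
dim_V1, dim_V2 = rank(A), rank(B)  # both 2
dim_sum = rank(np.hstack((A, B)))  # dim(V1 + V2) = 3
dim_cap = 1                        # dim(V1 ∩ V2), known by construction

assert dim_V1 + dim_V2 == dim_sum + dim_cap  # the dimension identity
```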
Problems
1. Show explicitly that R2 is a vector space over R. Consider R2 over
C with the usual addition. Define c(x, y) = (cx, cy), c ∈ C. Is R2 a
vector space over C? What if the “scalar multiplication” is defined as
c(x, y) = (ax + by, ax − by), where c = a + bi, a, b ∈ R?
2. Can a vector space have two different additive identities? Why?
3. Show that Fn [x], the collection of polynomials over a field F with
degree at most n, is a vector space over F with respect to the ordinary
addition and scalar multiplication of polynomials. Is F[x], the set of
polynomials with any finite degree, a vector space over F? What is
the dimension of Fn [x] or F[x]?
4. Determine whether the vectors v1 = 1 + x − 2x², v2 = 2 + 5x − x²,
and v3 = x + x² in F2 [x] are linearly independent.

5. Show that {(1, i), (i, −1)} is a linearly independent subset of C2 over
the real field R but not over the complex field C.
6. Determine whether R2 , with the operations
(x1 , y1 ) + (x2 , y2 ) = (x1 x2 , y1 y2 )
and
c(x1 , y1 ) = (cx1 , cy1 ),
is a vector space over R.


www.pdfgrip.com

6

Elementary Linear Algebra Review

Chap. 1

7. Let V be the set of all real numbers in the form

a + b√2 + c√5,

where a, b, and c are rational numbers. Show that V is a vector space
over the rational number field Q. Find dim V and a basis of V .
8. Let V be a vector space. If u, v, w ∈ V are such that au + bv + cw = 0
for some scalars a, b, c with ac ≠ 0, show that Span{u, v} = Span{v, w}.
9. Let V be a vector space over F and let W be a subset of V . Show
that W is a subspace of V if and only if for all u, v ∈ W and c ∈ F
u + v ∈ W and cu ∈ W.
10. Is the set {(x, y) ∈ R2 : 2x − 3y = 0} a subspace of R2 ? How about
{(x, y) ∈ R2 : 2x − 3y = 1}? Give a geometric explanation.

11. Show that the set {(x, y − x, y) : x, y ∈ R} is a subspace of R3 . Find
the dimension and a basis of the subspace.
12. Find a basis for Span{u, v, w}, where u = (1, 1, 0), v = (1, 3, −1),
and w = (1, −1, 1). Find the coordinate of (1, 2, 3) under the basis.
13. Let W = {(x1 , x2 , x3 , x4 ) ∈ R4 : x3 = x1 + x2 and x4 = x1 − x2 }.
(a) Prove that W is a subspace of R4 .
(b) Find a basis for W . What is the dimension of W ?
(c) Prove that {c(1, 0, 1, 1) : c ∈ R} is a subspace of W .
(d) Is {c(1, 0, 0, 0) : c ∈ R} a subspace of W ?
14. Show that each of the following is a vector space over R.
(a) C[a, b], the set of all (real-valued) continuous functions on [a, b].
(b) C ′ (R), the set of all functions with continuous derivatives on R.
(c) The set of all even functions.
(d) The set of all odd functions.
(e) The set of all functions f that satisfy f (0) = 0.
[Note: Unless otherwise stated, functions are added and multiplied by
scalars in a usual way, i.e., (f +g)(x) = f (x)+g(x), (kf )(x) = kf (x).]
15. Show that if W is a subspace of vector space V of dimension n, then
dim W ≤ n. Is it possible that dim W = n for a proper subspace W ?


Tài liệu bạn tìm kiếm đã sẵn sàng tải về

Tải bản đầy đủ ngay
×