
LINEAR ALGEBRA
AND LINEAR OPERATORS
IN ENGINEERING
with Applications
in Mathematica



This is Volume 3 of
PROCESS SYSTEMS ENGINEERING
A Series edited by George Stephanopoulos and John Perkins



LINEAR ALGEBRA
AND LINEAR
OPERATORS
IN ENGINEERING

with Applications
in Mathematica
H. Ted Davis
Department of Chemical Engineering and Materials Science
University of Minnesota
Minneapolis, Minnesota


Kendall T. Thomson
School of Chemical Engineering
Purdue University
West Lafayette, Indiana

ACADEMIC PRESS
An Imprint of Elsevier

San Diego

San Francisco

New York

Boston

London

Sydney

Tokyo



This book is printed on acid-free paper.


Copyright © 2000 by ACADEMIC PRESS

All Rights Reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.
Permissions may be sought directly from Elsevier's Science and Technology Rights Department in Oxford, UK. Phone: (44) 1865 843830, Fax: (44) 1865 853333, e-mail:
You may also complete your request on-line via the Elsevier homepage: http://www.elsevier.com by selecting "Customer Support" and then "Obtaining Permissions".

Academic Press
An Imprint of Elsevier
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
http://www.academicpress.com

Academic Press
Harcourt Place, 32 Jamestown Road, London NW1 7BY, UK
http://www.hbuk.co.uk/ap/
Library of Congress Catalog Card Number: 00-100019
ISBN-13: 978-0-12-206349-7
ISBN-10: 0-12-206349-X
PRINTED IN THE UNITED STATES OF AMERICA
05 QW 9 8 7 6 5 4 3 2



CONTENTS


PREFACE xi

1 Determinants
1.1. Synopsis 1
1.2. Matrices 2
1.3. Definition of a Determinant 3
1.4. Elementary Properties of Determinants 6
1.5. Cofactor Expansions 9
1.6. Cramer's Rule for Linear Equations 14
1.7. Minors and Rank of Matrices 16
Problems 18
Further Reading 22

2 Vectors and Matrices
2.1. Synopsis 25
2.2. Addition and Multiplication 26
2.3. The Inverse Matrix 28
2.4. Transpose and Adjoint 33



2.5. Partitioning Matrices 35
2.6. Linear Vector Spaces 38
Problems 43
Further Reading 46

3 Solution of Linear and Nonlinear Systems
3.1. Synopsis 47
3.2. Simple Gauss Elimination 48
3.3. Gauss Elimination with Pivoting 55
3.4. Computing the Inverse of a Matrix 58
3.5. LU-Decomposition 61
3.6. Band Matrices 66
3.7. Iterative Methods for Solving Ax = b 78
3.8. Nonlinear Equations 85
Problems 108
Further Reading 121

4 General Theory of Solvability of Linear
Algebraic Equations
4.1. Synopsis 123
4.2. Sylvester's Theorem and the Determinants of Matrix Products 124
4.3. Gauss-Jordan Transformation of a Matrix 129
4.4. General Solvability Theorem for Ax = b 133
4.5. Linear Dependence of a Vector Set and the Rank of Its Matrix 150
4.6. The Fredholm Alternative Theorem 155

Problems 159
Further Reading 161

5 The Eigenproblem
5.1. Synopsis 163
5.2. Linear Operators in a Normed Linear Vector Space 165
5.3. Basis Sets in a Normed Linear Vector Space 170
5.4. Eigenvalue Analysis 179
5.5. Some Special Properties of Eigenvalues 184
5.6. Calculation of Eigenvalues 189
Problems 196
Further Reading 203




6 Perfect Matrices
6.1. Synopsis 205
6.2. Implications of the Spectral Resolution Theorem 206
6.3. Diagonalization by a Similarity Transformation 213
6.4. Matrices with Distinct Eigenvalues 219
6.5. Unitary and Orthogonal Matrices 220
6.6. Semidiagonalization Theorem 225
6.7. Self-Adjoint Matrices 227

6.8. Normal Matrices 245
6.9. Miscellanea 249
6.10. The Initial Value Problem 254
6.11. Perturbation Theory 259
Problems 261
Further Reading 278

7 Imperfect or Defective Matrices
7.1. Synopsis 279
7.2. Rank of the Characteristic Matrix 280
7.3. Jordan Block Diagonal Matrices 282
7.4. The Jordan Canonical Form 288
7.5. Determination of Generalized Eigenvectors 294
7.6. Dyadic Form of an Imperfect Matrix 303
7.7. Schmidt's Normal Form of an Arbitrary Square Matrix 304
7.8. The Initial Value Problem 308
Problems 310
Further Reading 314

8 Infinite-Dimensional Linear Vector Spaces
8.1. Synopsis 315
8.2. Infinite-Dimensional Spaces 316
8.3. Riemann and Lebesgue Integration 319
8.4. Inner Product Spaces 322
8.5. Hilbert Spaces 324
8.6. Basis Vectors 326
8.7. Linear Operators 330
8.8. Solutions to Problems Involving k-term Dyadics 336
8.9. Perfect Operators 343
Problems 351

Further Reading 353




9 Linear Integral Operators in a Hilbert Space
9.1. Synopsis 355
9.2. Solvability Theorems 356
9.3. Completely Continuous and Hilbert-Schmidt Operators 366
9.4. Volterra Equations 375
9.5. Spectral Theory of Integral Operators 387
Problems 406
Further Reading 411


10 Linear Differential Operators in a Hilbert Space
10.1. Synopsis 413
10.2. The Differential Operator 416
10.3. The Adjoint of a Differential Operator 420
10.4. Solution to the General Inhomogeneous Problem 426
10.5. Green's Function: Inverse of a Differential Operator 439

10.6. Spectral Theory of Differential Operators 452
10.7. Spectral Theory of Regular Sturm-Liouville Operators 459
10.8. Spectral Theory of Singular Sturm-Liouville Operators 477
10.9. Partial Differential Equations 493
Problems 502
Further Reading 509
APPENDIX

A.1. Section 3.2: Gauss Elimination and the Solution to the Linear System
Ax = b 511
A.2. Example 3.6.1: Mass Separation with a Staged Absorber 514
A.3. Section 3.7: Iterative Methods for Solving the Linear System Ax = b 515
A.4. Exercise 3.7.2: Iterative Solution to Ax = b—Conjugate Gradient
Method 518
A.5. Example 3.8.1: Convergence of the Picard and Newton-Raphson
Methods 519
A.6. Example 3.8.2: Steady-State Solutions for a Continuously Stirred Tank
Reactor 521
A.7. Example 3.8.3: The Density Profile in a Liquid-Vapor Interface (Iterative
Solution of an Integral Equation) 523
A.8. Example 3.8.4: Phase Diagram of a Polymer Solution 526
A.9. Section 4.3: Gauss-Jordan Elimination and the Solution to the Linear
System Ax = b 529
A.10. Section 5.4: Characteristic Polynomials and the Traces of a Square Matrix 531
A.11. Section 5.6: Iterative Method for Calculating the Eigenvalues of Tridiagonal Matrices 533
A.12. Example 5.6.1: Power Method for Iterative Calculation of Eigenvalues 534




A.13. Example 6.2.1: Implementation of the Spectral Resolution Theorem—Matrix Functions 535
A.14. Example 9.4.2: Numerical Solution of a Volterra Equation (Saturation in Porous Media) 537
A.15. Example 10.5.3: Numerical Green's Function Solution to a Second-Order
Inhomogeneous Equation 540
A.16. Example 10.8.2: Series Solution to the Spherical Diffusion Equation
(Carbon in a Cannonball) 542
INDEX 543





PREFACE

This textbook is aimed at first-year graduate students in engineering or the physical
sciences. It is based on a course that one of us (H.T.D.) has given over the past several years to chemical engineering and materials science students.
The emphasis of the text is on the use of algebraic and operator techniques
to solve engineering and scientific problems. Where the proof of a theorem can
be given without too much tedious detail, it is included. Otherwise, the theorem is
quoted along with an indication of a source for the proof. Numerical techniques for
solving both nonlinear and linear systems of equations are emphasized. Eigenvector
and eigenvalue theory, that is, the eigenproblem and its relationship to the operator
theory of matrices, is developed in considerable detail.
Homework problems, drawn from chemical, mechanical, and electrical engineering as well as from physics and chemistry, are collected at the end of each
chapter—the book contains over 250 homework problems. Exercises are sprinkled
throughout the text. Some 15 examples are solved using Mathematica, with the
Mathematica codes presented in an appendix. Partially solved examples are given
in the text as illustrations to be completed by the student.
The book is largely self-contained. The first two chapters cover elementary
principles. Chapter 3 is devoted to techniques for solving linear and nonlinear
algebraic systems of equations. The theory of the solvability of linear systems is
presented in Chapter 4. Matrices as linear operators in linear vector spaces are
studied in Chapters 5 through 7. The last three chapters of the text use analogies
between finite and infinite dimensional vector spaces to introduce the functional
theory of linear differential and integral equations. These three chapters could serve
as an introduction to a more advanced course on functional analysis.
H. Ted Davis
Kendall T. Thomson







1

DETERMINANTS

1.1. SYNOPSIS
For any square array of numbers, i.e., a square matrix, we can define a
determinant—a scalar number, real or complex. In this chapter we will give the
fundamental definition of a determinant and use it to prove several elementary
properties. These properties include: determinant addition, scalar multiplication,
row and column addition or subtraction, and row and column interchange. As we
will see, the elementary properties often enable easy evaluation of a determinant,
which otherwise could require an exceedingly large number of multiplication and
addition operations.
Every determinant has cofactors, which are also determinants but of lower
order (if the determinant corresponds to an $n \times n$ array, its cofactors correspond to $(n-1) \times (n-1)$ arrays). We will show how determinants can be evaluated
as linear expansions of cofactors. We will then use these cofactor expansions to
prove that a system of linear equations has a unique solution if the determinant of
the coefficients in the linear equations is not 0. This result is known as Cramer's
rule, which gives the analytic solution to the linear equations in terms of ratios of
determinants. The properties of determinants established in this chapter will play, in the chapters to follow, a major role in the theory of linear and nonlinear systems and in the theory of matrices as linear operators in vector spaces.



1.2. MATRICES

A matrix $\mathbf{A}$ is an array of numbers, complex or real. We say $\mathbf{A}$ is an $m \times n$-dimensional matrix if it has $m$ rows and $n$ columns, i.e.,

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}. \tag{1.2.1}$$

The numbers $a_{ij}$ ($i = 1, \ldots, m$, $j = 1, \ldots, n$) are called the elements of $\mathbf{A}$, with the element $a_{ij}$ belonging to the $i$th row and $j$th column of $\mathbf{A}$. An abbreviated notation for $\mathbf{A}$ is

$$\mathbf{A} = [a_{ij}]. \tag{1.2.2}$$


By interchanging the rows and columns of $\mathbf{A}$, the transpose matrix $\mathbf{A}^{\mathrm{T}}$ is generated. Namely,

$$\mathbf{A}^{\mathrm{T}} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}. \tag{1.2.3}$$

The rows of $\mathbf{A}^{\mathrm{T}}$ are the columns of $\mathbf{A}$, and the $ij$th element of $\mathbf{A}^{\mathrm{T}}$ is $a_{ji}$, i.e., $(\mathbf{A}^{\mathrm{T}})_{ij} = a_{ji}$. If $\mathbf{A}$ is an $m \times n$ matrix, then $\mathbf{A}^{\mathrm{T}}$ is an $n \times m$ matrix.
When $m = n$, we say $\mathbf{A}$ is a square matrix. Square matrices figure importantly in applications of linear algebra, but non-square matrices are also encountered in common physical problems, e.g., in least squares data analysis. The $m \times 1$ matrix

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \tag{1.2.4}$$

and the $1 \times n$ matrix

$$\mathbf{y}^{\mathrm{T}} = [y_1, \ldots, y_n] \tag{1.2.5}$$

are also important cases. They are called vectors. We say that $\mathbf{x}$ is an $m$-dimensional column vector containing $m$ elements, and $\mathbf{y}^{\mathrm{T}}$ is an $n$-dimensional row vector containing $n$ elements. Note that $\mathbf{y}^{\mathrm{T}}$ is the transpose of the $n \times 1$ matrix $\mathbf{y}$, the $n$-dimensional column vector $\mathbf{y}$.



If $\mathbf{A}$ and $\mathbf{B}$ have the same dimensions, then they can be added. The rule of matrix addition is that corresponding elements are added, i.e.,

$$\mathbf{A} + \mathbf{B} = [a_{ij}] + [b_{ij}] = [a_{ij} + b_{ij}] = \begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & \cdots & a_{2n} + b_{2n} \\ \vdots & & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{bmatrix}. \tag{1.2.6}$$

Consistent with the definition of $m \times n$ matrix addition, the multiplication of the matrix $\mathbf{A}$ by a complex number $\alpha$ (scalar multiplication) is defined by

$$\alpha \mathbf{A} = [\alpha a_{ij}]; \tag{1.2.7}$$

i.e., $\alpha\mathbf{A}$ is formed by replacing every element $a_{ij}$ of $\mathbf{A}$ by $\alpha a_{ij}$.
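Since this book's solved examples use Mathematica (see the Appendix), the conventions above are easy to exercise interactively. A minimal sketch, with arbitrary illustrative entries:

a = {{1, 2, 3}, {4, 5, 6}};   (* a 2 x 3 matrix: a list of row lists *)
Transpose[a]                  (* its 3 x 2 transpose; (A^T)ij = aji, Eq. (1.2.3) *)
b = {{0, 1, 0}, {1, 0, 1}};
a + b                         (* elementwise addition, Eq. (1.2.6) *)
2 a                           (* scalar multiplication, Eq. (1.2.7) *)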

1.3. DEFINITION OF A DETERMINANT
A determinant is defined specifically for a square matrix. The various notations for
the determinant of A are


$$D = D_{\mathbf{A}} = \operatorname{Det} \mathbf{A} = |\mathbf{A}| = |a_{ij}|. \tag{1.3.1}$$

We define the determinant of A as follows:

$$D = {\sum}' (-1)^P a_{1l_1} a_{2l_2} \cdots a_{nl_n}, \tag{1.3.2}$$

where the summation is taken over all possible products of $a_{ij}$ in which each product contains $n$ elements and one and only one element from each row and each column. The indices $l_1, \ldots, l_n$ are permutations of the integers $1, \ldots, n$. We will use the symbol ${\sum}'$ to denote summation over all permutations. For a given set $\{l_1, \ldots, l_n\}$, the quantity $P$ denotes the number of transpositions required to transform the sequence $l_1, l_2, \ldots, l_n$ into the ordered sequence $1, 2, \ldots, n$. A transposition is defined as an interchange of two numbers $l_i$ and $l_j$. Note that there are $n!$ terms in the sum defining $D$ since there are exactly $n!$ ways to reorder the set of numbers $\{1, 2, \ldots, n\}$ into distinct sets $\{l_1, l_2, \ldots, l_n\}$.

As an example of a determinant, consider

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32} - a_{13}a_{22}a_{31} + a_{13}a_{21}a_{32} + a_{12}a_{23}a_{31}. \tag{1.3.3}$$



The sign of the second term is negative because the indices $\{2, 1, 3\}$ are transposed to $\{1, 2, 3\}$ with the one transposition

$$2, 1, 3 \rightarrow 1, 2, 3,$$

and so $P = 1$ and $(-1)^P = -1$. However, the transposition also could have been accomplished with the three transpositions

$$2, 1, 3 \rightarrow 2, 3, 1 \rightarrow 1, 3, 2 \rightarrow 1, 2, 3,$$

in which case $P = 3$ and $(-1)^P = -1$. We see that the number of transpositions $P$ needed to reorder a given sequence $l_1, \ldots, l_n$ is not unique. However, the evenness or oddness of $P$ is unique and thus $(-1)^P$ is unique for a given sequence.
EXERCISE 1.3.1. Verify the signs in Eq. (1.3.3). Also, verify that the number of transpositions required for $a_{11}a_{25}a_{33}a_{42}a_{54}$ is even.
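In Mathematica, the built-in Signature function returns $(-1)^P$ for a permutation list, so parity checks like those in this exercise can be verified directly; for instance:

Signature[{2, 1, 3}]        (* -1: an odd permutation, as in the text *)
Signature[{1, 5, 3, 2, 4}]  (* +1: the column indices of a11 a25 a33 a42 a54 form an even permutation *)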
A definition equivalent to that in Eq. (1.3.2) is

$$D = {\sum}' (-1)^P a_{l_1 1} a_{l_2 2} \cdots a_{l_n n}. \tag{1.3.4}$$

If the product $a_{l_1 1} a_{l_2 2} \cdots a_{l_n n}$ is reordered so that the first indices of the $a_{l_i i}$ are ordered in the sequence $1, \ldots, n$, the second indices will be in a sequence requiring $P$ transpositions to reorder as $1, \ldots, n$. Thus, the $n!$ $n$-tuples in Eqs. (1.3.2) and (1.3.4) are the same and have the same signs.
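Definition (1.3.2) can be transcribed directly into Mathematica by summing one signed product per permutation; detFromDef is our illustrative name, and the $n!$ cost of this brute-force sum is discussed at the end of the section:

detFromDef[a_] := With[{n = Length[a]},
  Total[(Signature[#] Product[a[[k, #[[k]]]], {k, n}]) & /@
    Permutations[Range[n]]]]

detFromDef[{{1, 8}, {3, 7}}]   (* -17, agreeing with the built-in Det *)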
The determinant in Eq. (1.3.3) can be expanded according to the defining
equation (1.3.4) as

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} - a_{21}a_{12}a_{33} - a_{11}a_{32}a_{23} - a_{31}a_{22}a_{13} + a_{21}a_{32}a_{13} + a_{31}a_{12}a_{23}. \tag{1.3.5}$$
It is obvious by inspection that the right-hand sides of Eqs. (1.3.3) and (1.3.5) are
identical since the various terms differ only by the order in which the multiplication
of each 3-tuple is carried out.
In the case of second- and third-order determinants, there is an easy way to
generate the distinct n-tuples. For the second-order case,

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix},$$

the product of the main diagonal, $a_{11}a_{22}$, is one of the 2-tuples and the product of the reverse main diagonal, $a_{12}a_{21}$, is the other. The sign of $a_{12}a_{21}$ is negative since $\{2, 1\}$ requires one transposition to reorder to $\{1, 2\}$. Thus,

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}, \tag{1.3.6}$$



since there are no other 2-tuples containing exactly one element from each row
and column.
In the case of the third-order determinant, the six 3-tuples can be generated
by multiplying the elements shown below by solid and dashed curves
[Figure: the elements of the $3 \times 3$ array joined by solid curves (tracing the products $a_{11}a_{22}a_{33}$, $a_{12}a_{23}a_{31}$, $a_{13}a_{21}a_{32}$) and by dashed curves (tracing $a_{13}a_{22}a_{31}$, $a_{11}a_{23}a_{32}$, $a_{12}a_{21}a_{33}$).]

The products associated with solid curves require an even number of transpositions P and those associated with the dashed curves require an odd P. Thus, the
determinant is given by
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}, \tag{1.3.7}$$

in agreement with Eq. (1.3.3), the defining expression. For example, the following
determinant is 0:
$$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 2 & 1 \end{vmatrix} = 1(5)1 + 2(6)3 + 3(4)2 - 3(5)3 - 1(6)2 - 2(4)1 = 0. \tag{1.3.8}$$
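A quick numerical check of Eq. (1.3.8), using the matrix as printed above:

Det[{{1, 2, 3}, {4, 5, 6}, {3, 2, 1}}]   (* 0 *)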

The evaluation of a determinant by calculation of the $n!$ $n$-tuples requires $(n-1)(n!)$ multiplications. For a fourth-order determinant, this requires 72 multiplications, not many in the age of computers. However, if $n = 100$, the number of required multiplications would be

$$(n-1)(n!) \approx (n-1)\left(\frac{n}{e}\right)^n \approx 3.7 \times 10^{158}, \tag{1.3.9}$$

where Stirling's approximation, $n! \approx (n/e)^n$, has been used. If the time for one multiplication is $10^{-9}$ sec, then the required time to do the multiplications would be

$$3.7 \times 10^{149} \text{ sec}, \quad \text{or} \quad 1.2 \times 10^{142} \text{ years!} \tag{1.3.10}$$

Obviously, large determinants cannot be evaluated by direct calculation of the defining $n$-tuples. Fortunately, the method of Gauss elimination, which we will describe in Chapter 3, reduces the number of multiplications to $n^3$. For $n = 100$, this is $10^6$ multiplications, as compared to $3.7 \times 10^{158}$ by direct $n$-tuple evaluation. The Gauss elimination method depends on the application of some of the elementary properties of determinants given in the next section.




1.4. ELEMENTARY PROPERTIES OF DETERMINANTS
If the determinant of $\mathbf{A}$ is given by Eq. (1.3.2), then, because the elements of the transpose $\mathbf{A}^{\mathrm{T}}$ are $a_{ji}$, it follows that

$$D_{\mathbf{A}^{\mathrm{T}}} = {\sum}' (-1)^P a_{l_1 1} a_{l_2 2} \cdots a_{l_n n}. \tag{1.4.1}$$

However, according to Eq. (1.3.4), the right-hand side of Eq. (1.4.1) is also equal to the determinant $D_{\mathbf{A}}$ of $\mathbf{A}$. This establishes the property that

1. A determinant is invariant to the interchange of rows and columns; i.e., the determinant of $\mathbf{A}$ is equal to the determinant of $\mathbf{A}^{\mathrm{T}}$.
For example,

$$\begin{vmatrix} 1 & 8 \\ 3 & 7 \end{vmatrix} = 7 - 24 = -17 \qquad \text{and} \qquad \begin{vmatrix} 1 & 3 \\ 8 & 7 \end{vmatrix} = 7 - 24 = -17.$$

Another elementary property of a determinant is that
2. If two rows (columns) of a determinant are interchanged, then the
determinant changes sign.

For example.
D =

=

flii^^??

— floiflf
21"12»

m

D' =

^11

a^xQ
i\^\i

^w^n = - ' ^ ã

From the definition of $D$ in Eq. (1.3.2),

$$D = {\sum}' (-1)^P a_{1l_1} a_{2l_2} \cdots a_{il_i} \cdots a_{jl_j} \cdots a_{nl_n},$$

it follows that the determinant $D'$ formed by the interchange of rows $i$ and $j$ in $D$ is

$$D' = {\sum}' (-1)^{P'} a_{1l_1} a_{2l_2} \cdots a_{jl_i} \cdots a_{il_j} \cdots a_{nl_n}. \tag{1.4.2}$$

Each term in $D'$ corresponds to one in $D$ if one transposition is carried out. Thus, $P$ and $P'$ differ by 1, and so $(-1)^{P'} = (-1)^{P \pm 1} = -(-1)^P$. From this it follows that $D' = -D$. A similar proof that the interchange of two columns changes the sign of the determinant can be given using the definition of $D$ in Eq. (1.3.4). Alternatively, from the fact that $D_{\mathbf{A}} = D_{\mathbf{A}^{\mathrm{T}}}$, it follows that if the interchange of two rows changes the sign of the determinant, then the interchange of two columns does the same thing because the columns of $\mathbf{A}^{\mathrm{T}}$ are the rows of $\mathbf{A}$.



The preceding property implies:
3. If any two rows (columns) of a matrix are the same, its determinant is 0.
If two rows (columns) are interchanged, $D' = -D$. However, if the rows (columns) interchanged are identical, then $D = D'$. The two equalities, $D' = -D$ and $D = D'$, are possible only if $D = D' = 0$.
Next, we note that
4. Multiplication of the determinant D by a constant k is the same as
multiplying any row (column) by k.
This property follows from the commutative law of scalar multiplication, i.e., $kab = (ka)b = a(kb)$, or

$$kD = {\sum}' (-1)^P a_{1l_1} a_{2l_2} \cdots (k a_{il_i}) \cdots a_{nl_n} = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ ka_{i1} & ka_{i2} & \cdots & ka_{in} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}. \tag{1.4.3}$$
Multiplication of the determinant

$$D = \begin{vmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 2 & 8 \end{vmatrix}$$

by $\frac{1}{2}$ gives

$$\frac{1}{2}D = \begin{vmatrix} 1 & 1 & 3 \\ 2 & 2 & 6 \\ 1 & 1 & 8 \end{vmatrix},$$

from which we can conclude that $D/2 = 0$ and $D = 0$, since $D/2$ has two identical columns. Stated differently, the multiplication rule says that if a row (column) of $D$ has a common factor $k$, then $D = kD'$, where $D'$ is formed from $D$ by replacing the row (column) with the common factor by the row (column) divided by the common factor.



Thus, in the previous example,

$$D = 4 \begin{vmatrix} 1 & 1 & 3 \\ 1 & 1 & 3 \\ 1 & 1 & 8 \end{vmatrix}.$$

The fact that a determinant is 0 if two rows (columns) are the same yields the
property:
5. The addition of a row (column) multiplied by a constant to any other row
(column) does not change the value of D.
To prove this, note that

$$D' = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} + ka_{j1} & \cdots & a_{in} + ka_{jn} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} = D + k \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}. \tag{1.4.4}$$


The second determinant on the right-hand side of Eq. (1.4.4) is 0 since the elements of the $i$th and $j$th rows are the same. Thus, $D' = D$. The equality $D_{\mathbf{A}} = D_{\mathbf{A}^{\mathrm{T}}}$ establishes the property for column addition. As an example,
$$\begin{vmatrix} 1 & 2 \\ 1 & 3 \end{vmatrix} = 3 - 2 = 1 = \begin{vmatrix} 1 + 2k & 2 \\ 1 + 3k & 3 \end{vmatrix} = 3 + 6k - 2 - 6k = 1.$$

Elementary properties can be used to simplify a determinant. For example,

$$\begin{vmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 2 & 8 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 3-2 \\ 2 & 4 & 6-4 \\ 1 & 2 & 8-2 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 6 \end{vmatrix} = 4 \begin{vmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 6 \end{vmatrix} = 4 \begin{vmatrix} 1 & 1 & 1 \\ 1-1 & 1-1 & 1-1 \\ 1-1 & 1-1 & 6-1 \end{vmatrix} = 4 \begin{vmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 5 \end{vmatrix} = 0. \tag{1.4.5}$$

The sequence of application of the elementary properties in Eq. (1.4.5) is, of course,
not unique.
Another useful property of determinants is:
6. If two determinants differ only by one row (column), their sum differs only in that the differing rows (columns) are summed.



That is,

$$\begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ b_{i1} & \cdots & b_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} + b_{i1} & \cdots & a_{in} + b_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}. \tag{1.4.6}$$

This property follows from the definition of determinants and the distributive law ($ca + cb = c(a + b)$) of scalar multiplication.
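Properties 1, 2, 4, and 5 are easy to spot-check numerically in Mathematica; a sketch on a random integer matrix (the seed and matrix are arbitrary, and each line returns True):

SeedRandom[7];
a = RandomInteger[{-9, 9}, {4, 4}];
Det[a] == Det[Transpose[a]]                  (* property 1 *)
Det[a[[{2, 1, 3, 4}]]] == -Det[a]            (* property 2: rows 1 and 2 interchanged *)
Det[MapAt[3 # &, a, 1]] == 3 Det[a]          (* property 4: row 1 multiplied by 3 *)
Det[MapAt[# + 5 a[[1]] &, a, 2]] == Det[a]   (* property 5: 5*(row 1) added to row 2 *)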
As the last elementary property of determinants to be given in this section, consider differentiation of $D$ by the variable $t$:

$$\frac{dD}{dt} = {\sum}' (-1)^P \frac{da_{1l_1}}{dt} a_{2l_2} \cdots a_{nl_n} + {\sum}' (-1)^P a_{1l_1} \frac{da_{2l_2}}{dt} \cdots a_{nl_n} + \cdots + {\sum}' (-1)^P a_{1l_1} a_{2l_2} \cdots \frac{da_{nl_n}}{dt}, \tag{1.4.7}$$

or

$$\frac{dD}{dt} = \sum_{i=1}^{n} D_i' = \sum_{i=1}^{n} D_i''. \tag{1.4.8}$$

The determinant $D_i'$ is evaluated by replacing in $D$ the elements of the $i$th row by the derivatives of the elements of the $i$th row. Similarly, for $D_i''$, replace in $D$ the elements of the $i$th column by the derivatives of the elements of the $i$th column. For example,

$$\frac{d}{dt} \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} \dfrac{da_{11}}{dt} & \dfrac{da_{12}}{dt} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{12} \\ \dfrac{da_{21}}{dt} & \dfrac{da_{22}}{dt} \end{vmatrix} = \begin{vmatrix} \dfrac{da_{11}}{dt} & a_{12} \\ \dfrac{da_{21}}{dt} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & \dfrac{da_{12}}{dt} \\ a_{21} & \dfrac{da_{22}}{dt} \end{vmatrix}. \tag{1.4.9}$$
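Equations (1.4.8) and (1.4.9) can be verified symbolically for a small matrix; a sketch with arbitrary entries in the variable t:

m = {{t^2, Sin[t]}, {Exp[t], 1 + t}};
d1 = Det[{D[m[[1]], t], m[[2]]}];   (* D1': row 1 replaced by its derivative *)
d2 = Det[{m[[1]], D[m[[2]], t]}];   (* D2': row 2 replaced by its derivative *)
Simplify[D[Det[m], t] == d1 + d2]   (* True *)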

1.5. COFACTOR EXPANSIONS
We define the cofactor $A_{ij}$ as the quantity $(-1)^{i+j}$ multiplied by the determinant of the matrix generated when the $i$th row and $j$th column of $\mathbf{A}$ are removed. For example, some of the cofactors of the matrix

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{1.5.1}$$


include

$$A_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}, \qquad A_{21} = -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}, \qquad A_{31} = \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}. \tag{1.5.2}$$

In general,

$$A_{ij} = (-1)^{i+j} \begin{vmatrix} a_{11} & \cdots & a_{1,j-1} & a_{1,j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{i-1,1} & \cdots & a_{i-1,j-1} & a_{i-1,j+1} & \cdots & a_{i-1,n} \\ a_{i+1,1} & \cdots & a_{i+1,j-1} & a_{i+1,j+1} & \cdots & a_{i+1,n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,j-1} & a_{n,j+1} & \cdots & a_{nn} \end{vmatrix}. \tag{1.5.3}$$

Note that an $n \times n$ matrix has $n^2$ cofactors.
Cofactors are important because they enable us to evaluate an $n$th-order determinant as a linear combination of $n$ $(n-1)$th-order determinants. The evaluation makes use of the following theorem:

COFACTOR EXPANSION THEOREM. The determinant $D$ of $\mathbf{A}$ can be computed from

$$D = \sum_{j=1}^{n} a_{ij} A_{ij}, \quad \text{where } i \text{ is an arbitrary row}, \tag{1.5.4}$$

or

$$D = \sum_{i=1}^{n} a_{ij} A_{ij}, \quad \text{where } j \text{ is an arbitrary column}. \tag{1.5.5}$$

Equation (1.5.4) is called a cofactor expansion by the $i$th row and Eq. (1.5.5) is called a cofactor expansion by the $j$th column.
Before presenting the proof of the cofactor expansion, we will give an example. Let

$$\mathbf{A} = \begin{bmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix}. \tag{1.5.6}$$

By the expression given in Eq. (1.3.7), it follows that

$$D = 64 + 0 + 0 - 0 - 4 - 4 = 56.$$

The cofactor expansion by row 1 yields

$$D = 4 \begin{vmatrix} 4 & -1 \\ -1 & 4 \end{vmatrix} - (-1) \begin{vmatrix} -1 & -1 \\ 0 & 4 \end{vmatrix} + 0 \begin{vmatrix} -1 & 4 \\ 0 & -1 \end{vmatrix} = 60 - 4 + 0 = 56,$$


and the cofactor expansion by column 2 yields

$$D = -(-1) \begin{vmatrix} -1 & -1 \\ 0 & 4 \end{vmatrix} + 4 \begin{vmatrix} 4 & 0 \\ 0 & 4 \end{vmatrix} - (-1) \begin{vmatrix} 4 & 0 \\ -1 & -1 \end{vmatrix} = -4 + 64 - 4 = 56.$$
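This example is easy to reproduce in Mathematica. The helper cof below is our illustrative implementation of the cofactor $A_{ij}$, deleting row $i$ and column $j$ with Drop:

a = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
cof[m_, i_, j_] := (-1)^(i + j) Det[Drop[m, {i}, {j}]]
Sum[a[[1, j]] cof[a, 1, j], {j, 3}]   (* 56: expansion by row 1, Eq. (1.5.4) *)
Sum[a[[i, 2]] cof[a, i, 2], {i, 3}]   (* 56: expansion by column 2, Eq. (1.5.5) *)
Det[a]                                (* 56 *)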

To prove the cofactor expansion theorem, we start with the definition of the determinant given in Eq. (1.3.4). Choosing an arbitrary column $j$, we can rewrite this equation as

$$D = \sum_{i=1}^{n} a_{ij} \left( {\sum_{l_1, \ldots, l_n}}' (-1)^P a_{l_1 1} a_{l_2 2} \cdots a_{l_{j-1}, j-1} a_{l_{j+1}, j+1} \cdots a_{l_n n} \right), \tag{1.5.7}$$

where the primed sum now refers to the sum over all permutations in which $l_j = i$. For a given value of $i$ in the first sum, we would like now to isolate the $ij$th cofactor of $\mathbf{A}$. To accomplish this, we must examine the factor $(-1)^P$ closely. First, we note that the permutations defined by $P$ can be redefined in terms of permutations in which all elements except element $i$ are in proper order plus the permutations required to put $i$ in its place in the sequence $1, 2, \ldots, n$. For this new definition, the proper sequence, in general, would be

$$1, 2, 3, \ldots, i-1, i+1, i+2, \ldots, j-1, j+1, j+2, \ldots, n-1, n. \tag{1.5.8}$$

We now define $P_{ij}$ as the number of permutations required to bring a sequence back to the proper sequence defined in Eq. (1.5.8). We now note that $|j - i|$ permutations are required to transform this new proper sequence back to the original proper sequence $1, 2, \ldots, n$. Thus, we can write $(-1)^P = (-1)^{P_{ij}} (-1)^{i+j}$ and Eq. (1.5.7) becomes

$$D = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} \left( {\sum_{l_1, \ldots, l_n}}' (-1)^{P_{ij}} a_{l_1 1} a_{l_2 2} \cdots a_{l_{j-1}, j-1} a_{l_{j+1}, j+1} \cdots a_{l_n n} \right), \tag{1.5.9}$$

which we recognize, using the definition of a cofactor, as

$$D = \sum_{i=1}^{n} a_{ij} A_{ij}. \tag{1.5.10}$$

A similar proof exists for Eq. (1.5.4).
With the aid of the cofactor expansion theorem, we see that the determinant
of an upper triangular matrix, i.e.,

$$\mathbf{U} = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}, \tag{1.5.11}$$

where $u_{ij} = 0$ when $i > j$, is the product of the main diagonal elements of $\mathbf{U}$, i.e.,

$$|\mathbf{U}| = \prod_{i=1}^{n} u_{ii}. \tag{1.5.12}$$



To derive Eq. (1.5.12), we use the cofactor expansion theorem with the first column of $\mathbf{U}$ to obtain

$$|\mathbf{U}| = u_{11} U_{11} + \sum_{i=2}^{n} 0 \cdot U_{i1} = u_{11} \begin{vmatrix} u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & u_{33} & \cdots & u_{3n} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{vmatrix},$$

where $U_{i1}$ is the $i1$ cofactor of $\mathbf{U}$. Repeat the process on the $(n-1)$th-order upper triangular determinant, then the $(n-2)$th one, etc., until Eq. (1.5.12) results. Similarly, the row cofactor expansion theorem can be used to prove that the determinant of the lower triangular matrix

$$\mathbf{L} = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \tag{1.5.13}$$

is

$$|\mathbf{L}| = \prod_{i=1}^{n} l_{ii}; \tag{1.5.14}$$

i.e., it is again the product of the main diagonal elements. In $\mathbf{L}$, $l_{ij} = 0$ when $j > i$.
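A numerical illustration of Eqs. (1.5.12) and (1.5.14), with arbitrarily chosen triangular matrices:

u = {{2, 7, 1}, {0, 3, 5}, {0, 0, 4}};
Det[u] == 2*3*4                            (* True: Eq. (1.5.12) *)
lmat = {{2, 0, 0}, {8, 3, 0}, {1, 6, 4}};
Det[lmat] == 2*3*4                         (* True: Eq. (1.5.14) *)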
The property of the row cofactor expansion is that the sum

$$\sum_{j=1}^{n} a_{ij} A_{ij}$$

replaces the $i$th row of $D_{\mathbf{A}}$ with the elements $a_{ij}$ of the $i$th row; i.e., the sum puts in the $i$th row of $D_{\mathbf{A}}$ the elements $a_{i1}, a_{i2}, \ldots, a_{in}$. Thus, the quantity

$$\sum_{j=1}^{n} a_j A_{ij}$$

puts the elements $a_1, a_2, \ldots, a_n$ in the $i$th row of $D_{\mathbf{A}}$, i.e.,

$$\sum_{j=1}^{n} a_j A_{ij} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i-1,1} & \cdots & a_{i-1,n} \\ a_1 & \cdots & a_n \\ a_{i+1,1} & \cdots & a_{i+1,n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}. \tag{1.5.15}$$