
Fundamentals of Circuits and Filters

The Circuits and Filters Handbook, Third Edition:
Fundamentals of Circuits and Filters
Feedback, Nonlinear, and Distributed Circuits
Analog and VLSI Circuits
Computer Aided Design and Design Automation
Passive, Active, and Digital Filters

Edited by
Wai-Kai Chen
University of Illinois
Chicago, U.S.A.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2009 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-13: 978-1-4200-5887-1 (Hardcover)


This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Fundamentals of circuits and filters / edited by Wai-Kai Chen.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-1-4200-5887-1
ISBN-10: 1-4200-5887-8
1. Electronic circuits. 2. Electric filters. I. Chen, Wai-Kai, 1936- II. Title.
TK7867.F835 2009
621.3815 dc22 2008048126
Contents

Preface vii
Editor-in-Chief ix
Contributors xi
SECTION I Mathematics
1 Linear Operators and Matrices 1-1
Cheryl B. Schrader and Michael K. Sain
2 Bilinear Operators and Matrices 2-1
Michael K. Sain and Cheryl B. Schrader
3 Laplace Transformation 3-1
John R. Deller, Jr.
4 Fourier Methods for Signal Analysis and Processing 4-1
W. Kenneth Jenkins
5 z-Transform 5-1
Jelena Kovačević
6 Wavelet Transforms 6-1
P. P. Vaidyanathan and Igor Djokovic
7 Graph Theory 7-1
Krishnaiyan Thulasiraman
8 Signal Flow Graphs 8-1
Krishnaiyan Thulasiraman
9 Theory of Two-Dimensional Hurwitz Polynomials 9-1
Hari C. Reddy
10 Application of Symmetry: Two-Dimensional Polynomials, Fourier Transforms, and Filter Design 10-1
Hari C. Reddy, I-Hung Khoo, and P. K. Rajan
SECTION II Circuit Elements, Devices, and Their Models

11 Passive Circuit Elements 11-1
Stanisław Nowak, Tomasz W. Postupolski, Gordon E. Carlson,
and Bogdan M. Wilamowski
12 RF Passive IC Components 12-1
Thomas H. Lee, Maria del Mar Hershenson, Sunderarajan S. Mohan,
Hirad Samavati, and C. Patrick Yue
13 Circuit Elements, Modeling, and Equation Formulation 13-1
Josef A. Nossek
14 Controlled Circuit Elements 14-1
Edwin W. Greeneich and James F. Delansky
15 Bipolar Junction Transistor Amplifiers 15-1
David J. Comer and Donald T. Comer
16 Operational Amplifiers 16-1
David G. Nairn and Sergio B. Franco
17 High-Frequency Amplifiers 17-1
Chris Toumazou and Alison Payne
SECTION III Linear Circuit Analysis
18 Fundamental Circuit Concepts 18-1
John Choma, Jr.
19 Network Laws and Theorems 19-1
Ray R. Chen, Artice M. Davis, and Marwan A. Simaan
20 Terminal and Port Representations 20-1
James A. Svoboda
21 Signal Flow Graphs in Filter Analysis and Synthesis 21-1
Pen-Min Lin
22 Analysis in the Frequency Domain 22-1
Jiri Vlach and John Choma, Jr.
23 Tableau and Modified Nodal Formulations 23-1
Jiri Vlach
24 Frequency-Domain Methods 24-1
Peter B. Aronhime
25 Symbolic Analysis 25-1
Benedykt S. Rodanski and Marwan M. Hassoun
26 Analysis in the Time Domain 26-1
Robert W. Newcomb
27 State-Variable Techniques 27-1
Kwong S. Chao
Index IN-1
Preface
The purpose of this book is to provide in a single volume a comprehensive reference work covering the broad spectrum of mathematics for circuits and filters; circuit elements, devices, and their models; and linear circuit analysis. This book is written and developed for practicing electrical engineers in industry, government, and academia. The goal is to provide the most up-to-date information in the field.
Over the years, the fundamentals of the field have evolved to include a wide range of topics and a broad
range of practice. To encompass such a wide range of knowledge, this book focuses on the key concepts,
models, and equations that enable the design engineer to analyze, design, and predict the behavior of
large-scale circuits. While design formulas and tables are listed, emphasis is placed on the key concepts
and theories underlying the processes.
This book stresses fundamental theories behind professional applications and uses several examples to
reinforce this point. Extensive development of theory and details of proofs have been omitted. The reader
is assumed to have a certain degree of sophistication and experience. However, brief reviews of theories,
principles, and mathematics of some subject areas are given. These reviews have been done concisely with
perception.
The compilation of this book would not have been possible without the dedication and efforts of
Professors Yih-Fang Huang and John Choma, Jr., and most of all the contributing authors. I wish to
thank them all.
Wai-Kai Chen


Editor-in-Chief
Wai-Kai Chen is a professor and head emeritus of the Department of Electrical Engineering and Computer Science at the University of Illinois at Chicago. He received his BS and MS in electrical engineering at Ohio University, where he was later recognized as a distinguished professor. He earned his PhD in electrical engineering at the University of Illinois at Urbana–Champaign.
Professor Chen has extensive experience in education and industry and is very active professionally in the fields of circuits and systems. He has served as a visiting professor at Purdue University, the University of Hawaii at Manoa, and Chuo University in Tokyo, Japan. He was the editor-in-chief of the IEEE Transactions on Circuits and Systems, Series I and II, the president of the IEEE Circuits and Systems Society, and is the founding editor and the editor-in-chief of the Journal of Circuits, Systems and Computers.
He received the Lester R. Ford Award from the Mathematical
Association of America; the Alexander von Humboldt Award from Germany; the JSPS Fellowship
Award from the Japan Society for the Promotion of Science; the National Taipei University of Science
and Technology Distinguished Alumnus Award; the Ohio University Alumni Medal of Merit for
Distinguished Achievement in Engineering Education; the Senior University Scholar Award and the
2000 Faculty Research Award from the University of Illinois at Chicago; and the Distinguished Alumnus
Award from the University of Illinois at Urbana–Champaign. He is the recipient of the Golden Jubilee
Medal, the Education Award, and the Meritorious Service Award from the IEEE Circuits and Systems
Society, and the Third Millennium Medal from the IEEE. He has also received more than a dozen
honorary professorship awards from major institutions in Taiwan and China.
A fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science (AAAS), Professor Chen is widely known in the profession for the following works: Applied Graph Theory (North-Holland), Theory and Design of Broadband Matching Networks (Pergamon Press), Active Network and Feedback Amplifier Theory (McGraw-Hill), Linear Networks and Systems (Brooks/Cole), Passive and Active Filters: Theory and Implementations (John Wiley), Theory of Nets: Flows in Networks (Wiley-Interscience), The Electrical Engineering Handbook (Academic Press), and The VLSI Handbook (CRC Press).

Contributors
Peter B. Aronhime
Electrical and Computer
Engineering Department
University of Louisville
Louisville, Kentucky
Gordon E. Carlson
Department of Electrical and
Computer Engineering
University of Missouri–Rolla
Rolla, Missouri
Kwong S. Chao
Department of Electrical and
Computer Engineering
Texas Tech University
Lubbock, Texas
Ray R. Chen
Department of Electrical
Engineering
San Jose State University
San Jose, California
Wai-Kai Chen
Department of Electrical and
Computer Engineering
University of Illinois at Chicago
Chicago, Illinois

John Choma, Jr.
Ming Hsieh Department of
Electrical Engineering
University of Southern
California
Los Angeles, California
David J. Comer
Department of Electrical and
Computer Engineering
Brigham Young University
Provo, Utah
Donald T. Comer
Department of Electrical and
Computer Engineering
Brigham Young University
Provo, Utah
Artice M. Davis
Department of Electrical
Engineering
San Jose State University
San Jose, California
James F. Delansky
Department of Electrical
Engineering
Pennsylvania State University
University Park, Pennsylvania
John R. Deller, Jr.
Department of Electrical and
Computer Engineering
Michigan State University

East Lansing, Michigan
Igor Djokovic
PairGain Technologies
Tustin, California
Sergio B. Franco
Division of Engineering
San Francisco State University
San Francisco, California
Edwin W. Greeneich
Department of Electrical
Engineering
Arizona State University
Tempe, Arizona
Marwan M. Hassoun
Department of Electrical and
Computer Engineering
Iowa State University
Ames, Iowa
Maria del Mar Hershenson
Center for Integrated Systems
Stanford University
Stanford, California
Yih-Fang Huang
Department of Electrical
Engineering
University of Notre Dame
Notre Dame, Indiana
W. Kenneth Jenkins
Department of Electrical
Engineering

Pennsylvania State University
University Park, Pennsylvania
I-Hung Khoo
Department of Electrical
Engineering
California State University
Long Beach, California
Jelena Kovačević
AT&T Bell Laboratories
Murray Hill, New Jersey
Thomas H. Lee
Center for Integrated Systems
Stanford University
Stanford, California
Pen-Min Lin
School of Electrical Engineering
Purdue University
West Lafayette, Indiana
Sunderarajan S. Mohan
Center for Integrated Systems
Stanford University
Stanford, California
David G. Nairn
Department of Electrical
Engineering
Queen’s University
Kingston, Canada

Robert W. Newcomb
Electrical Engineering
Department
University of Maryland
College Park, Maryland
Josef A. Nossek
Institute for Circuit Theory and
Signal Processing
Technical University of Munich
Munich, Germany
Stanisław Nowak
Institute of Electronics
University of Mining and
Metallurgy
Krakow, Poland
Alison Payne
Institute of Biomedical
Engineering
Imperial College of Science,
Technology and Medicine
London, England
Tomasz W. Postupolski
Institute of Electronic Materials
Technology
Warsaw, Poland
P. K. Rajan
Department of Electrical and
Computer Engineering
Tennessee Tech University
Cookeville, Tennessee

Hari C. Reddy
Department of Electrical
Engineering
California State University
Long Beach, California
and
Department of Computer Science/Electrical and Control Engineering
National Chiao-Tung University,
Taiwan
Benedykt S. Rodanski
Faculty of Engineering
University of Technology,
Sydney
Sydney, New South Wales,
Australia
Michael K. Sain
Department of Electrical
Engineering
University of Notre Dame
Notre Dame, Indiana
Hirad Samavati
Center for Integrated Systems
Stanford University
Stanford, California
Cheryl B. Schrader
College of Engineering
Boise State University
Boise, Idaho

Marwan A. Simaan
Department of Electrical and
Computer Engineering
University of Pittsburgh
Pittsburgh, Pennsylvania
James A. Svoboda
Department of Electrical
Engineering
Clarkson University
Potsdam, New York
Krishnaiyan Thulasiraman
School of Computer Science
University of Oklahoma
Norman, Oklahoma
Chris Toumazou
Institute of Biomedical
Engineering
Imperial College of Science,
Technology and Medicine
London, England
P. P. Vaidyanathan
Department of Electrical
Engineering
California Institute of
Technology
Pasadena, California
Jiri Vlach
Department of Electrical and
Computer Engineering
University of Waterloo

Waterloo, Ontario, Canada
Bogdan M. Wilamowski
Alabama Nano/Micro Science and Technology Center
Department of Electrical and Computer Engineering
Auburn University
Auburn, Alabama
C. Patrick Yue
Center for Integrated Systems
Stanford University
Stanford, California
1 Linear Operators and Matrices

Cheryl B. Schrader, Boise State University
Michael K. Sain, University of Notre Dame
1.1 Introduction 1-1
1.2 Vector Spaces over Fields 1-2
1.3 Linear Operators and Matrix Representations 1-4
1.4 Matrix Operations 1-6
1.5 Determinant, Inverse, and Rank 1-8
1.6 Basis Transformations 1-12
1.7 Characteristics: Eigenvalues, Eigenvectors, and Singular Values 1-15
1.8 On Linear Systems 1-18
References 1-20
1.1 Introduction
It is only after the engineer masters linear concepts—linear models and circuit and filter theory—that the possibility of tackling nonlinear ideas becomes achievable. Students frequently encounter linear methodologies, and bits and pieces of mathematics that aid in problem solution are stored away. Unfortunately, in memorizing the process of finding the inverse of a matrix or of solving a system of equations, the essence of the problem or associated knowledge may be lost. For example, most engineers are fairly comfortable with the concept of a vector space, but have difficulty in generalizing these ideas to the module level. Therefore, the intention of this section is to provide a unified view of key concepts in the theory of linear circuits and filters, to emphasize interrelated concepts, to provide a mathematical reference to the handbook itself, and to illustrate methodologies through the use of many and varied examples.
This chapter begins with a basic examination of vector spaces over fields. In relating vector spaces, the
key ideas of linear operators and matrix representations come to the fore. Standard matrix operations
are examined as are the pivotal notions of determinant, inverse, and rank. Next, transformations are
shown to determine similar representations, and matrix characteristics such as singular values and
eigenvalues are defined. Finally, solutions to algebraic equations are presented in the context of matrices
and are related to this introductory chapter on mathematics as a whole.
Standard algebraic notation is introduced first. To denote an element s in a set S, use s ∈ S. Consider two sets S and T. The set of all ordered pairs (s, t), where s ∈ S and t ∈ T, is defined as the Cartesian product set S × T. A function f from S into T, denoted by f: S → T, is a subset U of ordered pairs (s, t) ∈ S × T such that for every s ∈ S, one and only one t ∈ T exists such that (s, t) ∈ U. The function evaluated at the element s gives t as a solution (f(s) = t), and each s ∈ S appears exactly once as the first element of a pair in U.
A binary operation is a function acting on a Cartesian product set S × T. When T = S, one speaks of a binary operation on S.
1.2 Vector Spaces over Fields
A field F is a nonempty set F and two binary operations, sum (+) and product, such that the following properties are satisfied for all a, b, c ∈ F:

1. Associativity: (a + b) + c = a + (b + c); (ab)c = a(bc)
2. Commutativity: a + b = b + a; ab = ba
3. Distributivity: a(b + c) = (ab) + (ac)
4. Identities: (Additive) 0 ∈ F exists such that a + 0 = a; (Multiplicative) 1 ∈ F exists such that a1 = a
5. Inverses: (Additive) For every a ∈ F, b ∈ F exists such that a + b = 0; (Multiplicative) For every nonzero a ∈ F, b ∈ F exists such that ab = 1
Examples

• Field of real numbers R
• Field of complex numbers C
• Field of rational functions with real coefficients R(s)
• Field of binary numbers
The set of integers Z with the standard notions of addition and multiplication does not form a field because a multiplicative inverse in Z exists only for ±1. The integers form a commutative ring. Likewise, polynomials in the indeterminate s with coefficients from F form a commutative ring F[s]. If field property 2 also is not available, then one speaks simply of a ring. An additive group is a nonempty set G and one binary operation + satisfying field properties 1, 4, and 5 for addition, that is, associativity and the existence of additive identity and inverse. Moreover, if the binary operation + is commutative (field property 2), then the additive group is said to be abelian. Common notation regarding inverses is that the additive inverse for a ∈ F is b = −a ∈ F. In the multiplicative case, b = a^{−1} ∈ F.
An F-vector space V is a nonempty set V and a field F together with binary operations +: V × V → V and *: F × V → V subject to the following axioms for all elements v, w ∈ V and a, b ∈ F:

1. V and + form an additive abelian group
2. a*(v + w) = (a*v) + (a*w)
3. (a + b)*v = (a*v) + (b*v)
4. (ab)*v = a*(b*v)
5. 1*v = v

Examples

• Set of all n-tuples (v_1, v_2, …, v_n) for n > 0 and v_i ∈ F
• Set of polynomials of degree less than n with real coefficients (F = R)
Elements of V are referred to as vectors, whereas elements of F are scalars. Note that the terminology vector space V over the field F is used often. A module differs from a vector space in only one aspect: the underlying field in a vector space is replaced by a ring. Thus, a module is a direct generalization of a vector space.
When considering vector spaces of n-tuples, + is vector addition defined element by element using the scalar addition associated with F. Multiplication (*), which is termed scalar multiplication, also is defined element by element using multiplication in F. The additive identity in this case is the zero vector (n-tuple of zeros) or null vector, and F^n denotes the set of n-tuples with elements in F, a vector space over F.
A nonempty subset Ṽ ⊆ V is called a subspace of V if for each v, w ∈ Ṽ and every a ∈ F, v + w ∈ Ṽ and a*v ∈ Ṽ. When the context makes things clear, it is customary to suppress the * and write av in place of a*v.
A set of vectors {v_1, v_2, …, v_m} belonging to an F-vector space V is said to span the vector space if any element v ∈ V can be represented by a linear combination of the vectors v_i. That is, scalars a_1, a_2, …, a_m ∈ F are such that

v = a_1 v_1 + a_2 v_2 + ··· + a_m v_m    (1.1)
A set of vectors {v_1, v_2, …, v_p} belonging to an F-vector space V is said to be linearly dependent over F if scalars a_1, a_2, …, a_p ∈ F, not all zero, exist such that

a_1 v_1 + a_2 v_2 + ··· + a_p v_p = 0    (1.2)

If the only solution for Equation 1.2 is that all a_i = 0 ∈ F, then the set of vectors is said to be linearly independent.
Examples

• (1, 0) and (0, 1) are linearly independent.
• (1, 0, 0), (0, 1, 0), and (1, 1, 0) are linearly dependent over R. To see this, simply choose a_1 = a_2 = 1 and a_3 = −1.
• s^2 + 2s and 2s + 4 are linearly independent over R, but are linearly dependent over R(s) by choosing a_1 = −2 and a_2 = s.
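For tuples over R, such a check can be mechanized: stack the vectors as rows of a matrix and compare its rank with the number of vectors. The following is a minimal sketch using NumPy (the helper name and test vectors are illustrative, not from the text):

```python
import numpy as np

def linearly_independent(vectors):
    # Over R, vectors are linearly independent exactly when the matrix
    # having them as rows has rank equal to their count.
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

print(linearly_independent([(1, 0), (0, 1)]))                   # True
print(linearly_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
```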
A set of vectors {v_1, v_2, …, v_n} belonging to an F-vector space V is said to form a basis for V if it both spans V and is linearly independent over F. The number of vectors in a basis is called the dimension of the vector space, and is denoted as dim(V). If this number is not finite, then the vector space is said to be infinite dimensional.
Examples

• In an n-dimensional vector space, any n linearly independent vectors form a basis.
• The natural (standard) basis

e_1 = [1 0 0 ··· 0 0]^T, e_2 = [0 1 0 ··· 0 0]^T, e_3 = [0 0 1 ··· 0 0]^T, …, e_{n−1} = [0 0 0 ··· 1 0]^T, e_n = [0 0 0 ··· 0 1]^T

both spans F^n and is linearly independent over F.
Consider any basis {v_1, v_2, …, v_n} in an n-dimensional vector space. Every v ∈ V can be represented uniquely by scalars a_1, a_2, …, a_n ∈ F as

v = a_1 v_1 + a_2 v_2 + ··· + a_n v_n    (1.3)

  = [v_1 v_2 ··· v_n] [a_1 a_2 ··· a_n]^T    (1.4)

  = [v_1 v_2 ··· v_n] a    (1.5)

Here, a ∈ F^n is a coordinate representation of v ∈ V with respect to the chosen basis. The reader will be able to discern that each choice of basis will result in another representation of the vector under consideration. Of course, in the applications, some representations are more popular and useful than others.
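In concrete terms, when F = R the coordinate representation a of Equation 1.5 is obtained by solving the linear system whose coefficient matrix has the basis vectors as columns. A sketch, with an assumed example basis of R^2:

```python
import numpy as np

# Columns of B are the basis vectors (an assumed example basis of R^2).
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])

# Solve B a = v for the coordinates a of v in this basis (Equation 1.5).
a = np.linalg.solve(B, v)
print(a)  # [1. 2.], i.e., v = 1*(1, 0) + 2*(1, 2)
```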
1.3 Linear Operators and Matrix Representations
First, recall the definition of a function f: S → T. Alternate terminology for a function is mapping, operator, or transformation. The set S is called the domain of f, denoted by D(f). The range of f, R(f), is the set of all t ∈ T such that (s, t) ∈ U (f(s) = t) for some s ∈ D(f).
Examples
Use S = {1, 2, 3, 4} and T = {5, 6, 7, 8}.
• Ũ = {(1, 5), (2, 5), (3, 7), (4, 8)} is a function. The domain is {1, 2, 3, 4} and the range is {5, 7, 8}.
• Û = {(1, 5), (1, 6), (2, 5), (3, 7), (4, 8)} is not a function.
• U = {(1, 5), (2, 6), (3, 7), (4, 8)} is a function. The domain is {1, 2, 3, 4} and the range is {5, 6, 7, 8}.

If R(f) = T, then f is said to be surjective (onto). Loosely speaking, all elements in T are used up. If f: S → T has the property that f(s_1) = f(s_2) implies s_1 = s_2, then f is said to be injective (one-to-one). This means that any element in R(f) comes from a unique element in D(f) under the action of f. If a function is both injective and surjective, then it is said to be bijective (one-to-one and onto).
Examples
• Ũ is not onto because 6 ∈ T is not in R(f). Also, Ũ is not one-to-one because f(1) = 5 = f(2), but 1 ≠ 2.
• U is bijective.
Now consider an operator L: V → W, where V and W are vector spaces over the same field F. L is said to be a linear operator if the following two properties are satisfied for all v, w ∈ V and for all a ∈ F:

L(av) = aL(v)    (1.6)

L(v + w) = L(v) + L(w)    (1.7)

Equation 1.6 is the property of homogeneity and Equation 1.7 is the property of additivity. Together they imply the principle of superposition, which may be written as

L(a_1 v_1 + a_2 v_2) = a_1 L(v_1) + a_2 L(v_2)    (1.8)

for all v_1, v_2 ∈ V and a_1, a_2 ∈ F. If Equation 1.8 is not satisfied, then L is called a nonlinear operator.
Examples

• Consider V = C and F = C. Let L: V → V be the operator that takes the complex conjugate: L(v) = \overline{v} for v ∈ V. Certainly

L(v_1 + v_2) = \overline{v_1 + v_2} = \overline{v}_1 + \overline{v}_2 = L(v_1) + L(v_2)

However,

L(a_1 v_1) = \overline{a_1 v_1} = \overline{a}_1 \overline{v}_1 = \overline{a}_1 L(v_1) \neq a_1 L(v_1)

Then L is a nonlinear operator because homogeneity fails.
• For F-vector spaces V and W, let V be F^n and W be F^{n−1}. Examine L: V → W, the operator that truncates the last element of the n-tuple in V, that is,

L((v_1, v_2, …, v_{n−1}, v_n)) = (v_1, v_2, …, v_{n−1})

Such an operator is linear.
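Both examples can be checked numerically. The sketch below (test vectors and the scalar are arbitrary choices, not from the text) confirms that conjugation is additive but fails homogeneity, while truncation satisfies full superposition:

```python
import numpy as np

rng = np.random.default_rng(0)
v1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
a = 2.0 + 3.0j

conj = np.conj                 # L(v) = complex conjugate of v
trunc = lambda v: v[:-1]       # L drops the last element of the n-tuple

# Conjugation: additive (Equation 1.7) but not homogeneous (Equation 1.6).
print(np.allclose(conj(v1 + v2), conj(v1) + conj(v2)))  # True
print(np.allclose(conj(a * v1), a * conj(v1)))          # False

# Truncation: satisfies superposition (Equation 1.8).
print(np.allclose(trunc(a * v1 + 5j * v2),
                  a * trunc(v1) + 5j * trunc(v2)))      # True
```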
The null space (kernel) of a linear operator L: V → W is the set

ker L = {v ∈ V such that L(v) = 0}    (1.9)

Equation 1.9 defines a vector space. In fact, ker L is a subspace of V. The mapping L is injective if and only if ker L = 0; that is, the only solution in the right member of Equation 1.9 is the trivial solution. In this case, L is also called monic.
The image of a linear operator L: V → W is the set

im L = {w ∈ W such that L(v) = w for some v ∈ V}    (1.10)

Clearly, im L is a subspace of W, and L is surjective if and only if im L is all of W. In this case, L is also called epic.
A method of relating specific properties of linear mappings is the exact sequence. Consider a sequence of linear mappings

\cdots \to V \xrightarrow{L} W \xrightarrow{\tilde{L}} U \to \cdots    (1.11)

This sequence is said to be exact at W if im L = ker L̃. A sequence is called exact if it is exact at each vector space in the sequence. Examine the following special cases:

0 \to V \xrightarrow{L} W    (1.12)

W \xrightarrow{\tilde{L}} U \to 0    (1.13)

Sequence (Equation 1.12) is exact if and only if L is monic, whereas Equation 1.13 is exact if and only if L̃ is epic.
Further, let L: V → W be a linear mapping between finite-dimensional vector spaces. The rank of L, r(L), is the dimension of the image of L. In such a case

r(L) + dim(ker L) = dim V    (1.14)
Linear operators commonly are represented by matrices. It is quite natural to interchange these two ideas, because a matrix with respect to the standard bases is indistinguishable from the linear operator it represents. However, insight may be gained by examining these ideas separately. For V and W, n- and m-dimensional vector spaces over F, respectively, consider a linear operator L: V → W. Moreover, let {v_1, v_2, …, v_n} and {w_1, w_2, …, w_m} be respective bases for V and W. Then L: V → W can be represented uniquely by the matrix M ∈ F^{m×n} where

M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{m1} & m_{m2} & \cdots & m_{mn} \end{bmatrix}    (1.15)

The ith column of M is the representation of L(v_i) with respect to {w_1, w_2, …, w_m}. Element m_{ij} ∈ F of Equation 1.15 occurs in row i and column j.
Matrices have a number of properties. A matrix is said to be square if m = n. The main diagonal of a square matrix consists of the elements m_{ii}. If m_{ij} = 0 for all i > j (i < j), a square matrix is said to be upper (lower) triangular. A square matrix with m_{ij} = 0 for all i ≠ j is diagonal. Additionally, if all m_{ii} = 1, a diagonal M is an identity matrix. A row vector (column vector) is a special case in which m = 1 (n = 1). Also, m = n = 1 results essentially in a scalar.
Matrices arise naturally as a means to represent sets of simultaneous linear equations. For example, in the case of Kirchhoff equations, Chapter 7 shows how incidence, circuit, and cut matrices arise. Or consider a π network having node voltages v_i, i = 1, 2 and current sources i_i, i = 1, 2 connected across the resistors R_i, i = 1, 2 in the two legs of the π. The bridge resistor is R_3. Thus, the unknown node voltages can be expressed in terms of the known source currents in the manner

\frac{R_1 + R_3}{R_1 R_3} v_1 - \frac{1}{R_3} v_2 = i_1    (1.16)

\frac{R_2 + R_3}{R_2 R_3} v_2 - \frac{1}{R_3} v_1 = i_2    (1.17)

If the voltages v_i and the currents i_i are placed into a voltage vector v ∈ R^2 and current vector i ∈ R^2, respectively, then Equations 1.16 and 1.17 may be rewritten in matrix form as

\begin{bmatrix} i_1 \\ i_2 \end{bmatrix} = \begin{bmatrix} \frac{R_1 + R_3}{R_1 R_3} & -\frac{1}{R_3} \\ -\frac{1}{R_3} & \frac{R_2 + R_3}{R_2 R_3} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}    (1.18)

A conductance matrix G may then be defined so that i = Gv, a concise representation of the original pair of circuit equations.
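As a sketch of how Equation 1.18 is used in practice, the following builds G for assumed example resistor values and recovers the node voltages from given source currents (the component values are illustrative only):

```python
import numpy as np

# Assumed example values for the pi network (ohms, amperes).
R1, R2, R3 = 100.0, 200.0, 50.0
i = np.array([0.1, 0.05])  # known source currents

# Conductance matrix G from Equation 1.18, so that i = G v.
G = np.array([[(R1 + R3) / (R1 * R3), -1.0 / R3],
              [-1.0 / R3, (R2 + R3) / (R2 * R3)]])

# Node voltages: v = G^{-1} i (solved directly rather than by inverting G).
v = np.linalg.solve(G, i)
print(v)
```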
1.4 Matrix Operations
Vector addition in F^n was defined previously as an element-wise scalar addition. Similarly, two matrices M and N, both in F^{m×n}, can be added (subtracted) to form the resultant matrix P ∈ F^{m×n} by

m_{ij} ± n_{ij} = p_{ij},  i = 1, 2, …, m,  j = 1, 2, …, n    (1.19)

Matrix addition, thus, is defined using addition in the field over which the matrix lies. Accordingly, the matrix, each of whose entries is 0 ∈ F, is an additive identity for the family. One can set up additive inverses along similar lines, which, of course, turn out to be the matrices each of whose elements is the negative of that of the original matrix.
Recall how scalar multiplication was defined in the example of the vector space of n-tuples. Scalar multiplication can also be defined between a field element a ∈ F and a matrix M ∈ F^{m×n} in such a way that the product aM is calculated element-wise:

aM = P ⇔ a m_{ij} = p_{ij},  i = 1, 2, …, m,  j = 1, 2, …, n    (1.20)
Examples

(F = R):  M = \begin{bmatrix} 4 & 3 \\ 2 & 1 \end{bmatrix},  N = \begin{bmatrix} 2 & -3 \\ 1 & 6 \end{bmatrix},  a = −0.5

• M + N = P = \begin{bmatrix} 4+2 & 3-3 \\ 2+1 & 1+6 \end{bmatrix} = \begin{bmatrix} 6 & 0 \\ 3 & 7 \end{bmatrix}

• M − N = P̃ = \begin{bmatrix} 4-2 & 3+3 \\ 2-1 & 1-6 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ 1 & -5 \end{bmatrix}

• aM = P̂ = \begin{bmatrix} (-0.5)4 & (-0.5)3 \\ (-0.5)2 & (-0.5)1 \end{bmatrix} = \begin{bmatrix} -2 & -1.5 \\ -1 & -0.5 \end{bmatrix}


To multiply two matrices M and N to form the product MN requires that the number of columns of M equal the number of rows of N. In this case the matrices are said to be conformable. Although vector multiplication cannot be defined here because of this constraint, Chapter 2 examines this operation in detail using the tensor product. The focus here is on matrix multiplication. The resulting matrix will have its number of rows equal to the number of rows in M and its number of columns equal to the number of columns of N. Thus, for M ∈ F^{m×n} and N ∈ F^{n×p}, MN = P ∈ F^{m×p}. Elements in the resulting matrix P may be determined by

p_{ij} = \sum_{k=1}^{n} m_{ik} n_{kj}    (1.21)

Matrix multiplication involves one row and one column at a time. To compute the p_{ij} term in P, choose the ith row of M and the jth column of N. Multiply each element in the row vector by the corresponding element in the column vector and sum the result. Notice that in general, matrix multiplication is not commutative, and the matrices in the reverse order may not even be conformable. Matrix multiplication is, however, associative and distributive with respect to matrix addition. Under certain conditions, the field F of scalars, the set of matrices over F, and these three operations combine to form an algebra. Chapter 2 examines algebras in greater detail.
Examples
(F = R):  M = \begin{bmatrix} 4 & 3 \\ 2 & 1 \end{bmatrix},  N = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}

• MN = P = \begin{bmatrix} 10 & 24 & 38 \\ 4 & 10 & 16 \end{bmatrix}

To find p_{11}, take the first row of M, [4 3], and the first column of N, [1 2]^T, and evaluate Equation 1.21: 4(1) + 3(2) = 10. Continue for all i and j.
• NM does not exist because that product is not conformable.
• Any matrix M ∈ F^{m×n} multiplied by an identity matrix I ∈ F^{n×n} such that MI ∈ F^{m×n} results in the original matrix M. Similarly, IM = M for I an m × m identity matrix over F. It is common to interpret I as an identity matrix of appropriate size, without explicitly denoting the number of its rows and columns.
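The same arithmetic, reproduced as a NumPy sketch of the examples above:

```python
import numpy as np

M = np.array([[4, 3], [2, 1]])
N = np.array([[1, 3, 5], [2, 4, 6]])

print(M @ N)   # [[10 24 38], [4 10 16]], each entry per Equation 1.21
# N @ M would raise ValueError: the reverse product is not conformable
# (N has 3 columns, M has 2 rows).
print(M @ np.eye(2, dtype=int))   # the identity leaves M unchanged
```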
The transpose M^T ∈ F^{n×m} of a matrix M ∈ F^{m×n} is found by interchanging the rows and columns. The first column of M becomes the first row of M^T, the second column of M becomes the second row of M^T, and so on. The notations M^t and M′ are also used. If M = M^T, the matrix is called symmetric. Note that two matrices M, N ∈ F^{m×n} are equal if and only if all respective elements are equal: m_{ij} = n_{ij} for all i, j.
The Hermitian transpose M* ∈ C^{n×m} of M ∈ C^{m×n} is also termed the complex conjugate transpose. To compute M*, form M^T and take the complex conjugate of every element in M^T. The following properties also hold for matrix transposition for all M, N ∈ F^{m×n}, P ∈ F^{n×p}, and a ∈ F: (M^T)^T = M, (M + N)^T = M^T + N^T, (aM)^T = aM^T, and (MP)^T = P^T M^T.
Examples

(F = C):  M = \begin{bmatrix} j & 1-j \\ 4 & 2+j3 \end{bmatrix}

• M^T = \begin{bmatrix} j & 4 \\ 1-j & 2+j3 \end{bmatrix}

• M* = \begin{bmatrix} -j & 4 \\ 1+j & 2-j3 \end{bmatrix}

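In NumPy terms (a sketch of the same example; note that .T alone transposes without conjugating, so the Hermitian transpose needs an explicit conjugation):

```python
import numpy as np

M = np.array([[1j, 1 - 1j],
              [4, 2 + 3j]])

print(M.T)          # transpose M^T
print(M.conj().T)   # Hermitian transpose M*
```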
1.5 Determinant, Inverse, and Rank
Consider square matrices of the form [m_{11}] ∈ F^{1×1}. For these matrices, define the determinant as m_{11} and establish the notation det([m_{11}]) for this construction. This definition can be used to establish the meaning of det(M), often denoted by |M|, for M ∈ F^{2×2}. Consider

M = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}    (1.22)
The minor of m_{ij} is defined to be the determinant of the submatrix which results from the removal of row i and column j. Thus, the minors of m_{11}, m_{12}, m_{21}, and m_{22} are m_{22}, m_{21}, m_{12}, and m_{11}, respectively. To calculate the determinant of this M, (1) choose any row i (or column j), (2) multiply each element m_{ik} (or m_{kj}) in that row (or column) by its minor and by (−1)^{i+k} (or (−1)^{k+j}), and (3) add these results. Note that the product of the minor with the sign (−1)^{i+k} (or (−1)^{k+j}) is called the cofactor of the element in question. If row 1 is chosen, the determinant of M is found to be m_{11}(+m_{22}) + m_{12}(−m_{21}), a well-known result. The determinant of 2 × 2 matrices is relatively easy to remember: multiply the two elements along the main diagonal and subtract the product of the other two elements. Note that it makes no difference which row or column is chosen in step 1.
A similar procedure is followed for larger matrices. Consider

\det(M) = \begin{vmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{vmatrix}    (1.23)

Expanding about column 1 produces

\det(M) = m_{11} \begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix} - m_{21} \begin{vmatrix} m_{12} & m_{13} \\ m_{32} & m_{33} \end{vmatrix} + m_{31} \begin{vmatrix} m_{12} & m_{13} \\ m_{22} & m_{23} \end{vmatrix}    (1.24)

= m_{11}(m_{22} m_{33} − m_{23} m_{32}) − m_{21}(m_{12} m_{33} − m_{13} m_{32}) + m_{31}(m_{12} m_{23} − m_{13} m_{22})    (1.25)

= m_{11} m_{22} m_{33} + m_{12} m_{23} m_{31} + m_{13} m_{21} m_{32} − m_{13} m_{22} m_{31} − m_{11} m_{23} m_{32} − m_{12} m_{21} m_{33}    (1.26)
An identical result may be achieved by repeating the first two columns next to the original matrix:

\begin{matrix} m_{11} & m_{12} & m_{13} & m_{11} & m_{12} \\ m_{21} & m_{22} & m_{23} & m_{21} & m_{22} \\ m_{31} & m_{32} & m_{33} & m_{31} & m_{32} \end{matrix}    (1.27)

Then, form the first three products of Equation 1.26 by starting at the upper left corner of Equation 1.27 with m_{11}, forming a diagonal to the right, and then repeating with m_{12} and m_{13}. The last three products are subtracted in Equation 1.26 and are formed by starting in the upper right corner of Equation 1.27 with m_{12} and taking a diagonal to the left, repeating for m_{11} and m_{13}. Note the similarity to the 2 × 2 case. Unfortunately, such simple schemes fail above the 3 × 3 case.
Determinants of n × n matrices for n > 3 are computed in a similar vein. As in the earlier cases, the determinant of an n × n matrix may be expressed in terms of the determinants of (n−1) × (n−1) submatrices; this is termed as Laplace's expansion. To expand along row i or column j in M ∈ F^{n×n}, write

\det(M) = \sum_{k=1}^{n} m_{ik} \tilde{m}_{ik} = \sum_{k=1}^{n} m_{kj} \tilde{m}_{kj}    (1.28)

where the m_{ik} (m_{kj}) are elements of M. The m̃_{ik} (m̃_{kj}) are cofactors formed by deleting the ith (kth) row and the kth (jth) column of M, forming the determinant of the (n−1) × (n−1) resulting submatrix, and multiplying by (−1)^{i+k} ((−1)^{k+j}). Notice that minors and their corresponding cofactors are related by ±1.
Examples

(F = R):  M = \begin{bmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 2 & 3 & 6 \end{bmatrix}

• Expanding about row 1 produces

\det(M) = 0 \begin{vmatrix} 4 & 5 \\ 3 & 6 \end{vmatrix} - 1 \begin{vmatrix} 3 & 5 \\ 2 & 6 \end{vmatrix} + 2 \begin{vmatrix} 3 & 4 \\ 2 & 3 \end{vmatrix} = -(18 - 10) + 2(9 - 8) = -6

• Expanding about column 2 yields

\det(M) = -1 \begin{vmatrix} 3 & 5 \\ 2 & 6 \end{vmatrix} + 4 \begin{vmatrix} 0 & 2 \\ 2 & 6 \end{vmatrix} - 3 \begin{vmatrix} 0 & 2 \\ 3 & 5 \end{vmatrix} = -(18 - 10) + 4(0 - 4) - 3(0 - 6) = -6

• Repeating the first two columns to form Equation 1.27 gives

\begin{matrix} 0 & 1 & 2 & 0 & 1 \\ 3 & 4 & 5 & 3 & 4 \\ 2 & 3 & 6 & 2 & 3 \end{matrix}

Taking the appropriate products,

0·4·6 + 1·5·2 + 2·3·3 − 1·3·6 − 0·5·3 − 2·4·2

results in −6 as the determinant of M.
• Any square matrix with a zero row and/or zero column will have zero determinant. Likewise, any square matrix with two or more identical rows and/or two or more identical columns will have determinant equal to zero.
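A code sketch of Laplace's expansion (Equation 1.28), expanding along the first row and checked against NumPy's determinant on the example matrix above:

```python
import numpy as np

def det_laplace(M):
    # Cofactor expansion along row 0 (Equation 1.28 with i fixed).
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0
    for k in range(n):
        # Minor: delete row 0 and column k; (-1)**k supplies the sign.
        sub = np.delete(np.delete(M, 0, axis=0), k, axis=1)
        total += (-1) ** k * M[0, k] * det_laplace(sub)
    return total

M = np.array([[0, 1, 2], [3, 4, 5], [2, 3, 6]])
print(det_laplace(M), np.linalg.det(M))  # -6 and -6.0 (up to rounding)
```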
Determinants satisfy many interesting relationships. For any n × n matrix, the determinant may be expressed in terms of determinants of (n−1) × (n−1) matrices or first-order minors. In turn, determinants of (n−1) × (n−1) matrices may be expressed in terms of determinants of (n−2) × (n−2) matrices or second-order minors, etc. Also, the determinant of the product of two square matrices is equal to the product of the determinants:

det(MN) = det(M) det(N)    (1.29)

For any M ∈ F^{n×n} such that |M| ≠ 0, a unique inverse M^{−1} ∈ F^{n×n} satisfies

M M^{−1} = M^{−1} M = I    (1.30)
For Equation 1.29 one may observe the special case in which N = M^{−1}; then (det(M))^{−1} = det(M^{−1}). The inverse M^{−1} may be expressed using determinants and cofactors in the following manner. Form the matrix of cofactors

\tilde{M} = \begin{bmatrix} \tilde{m}_{11} & \tilde{m}_{12} & \cdots & \tilde{m}_{1n} \\ \tilde{m}_{21} & \tilde{m}_{22} & \cdots & \tilde{m}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{m}_{n1} & \tilde{m}_{n2} & \cdots & \tilde{m}_{nn} \end{bmatrix}    (1.31)

The transpose of Equation 1.31 is referred to as the adjoint matrix or adj(M). Then,

M^{−1} = \frac{\tilde{M}^T}{|M|} = \frac{\operatorname{adj}(M)}{|M|}    (1.32)
Examples

• Choose M of the previous set of examples. The cofactor matrix is

\begin{bmatrix} 9 & -8 & 1 \\ 0 & -4 & 2 \\ -3 & 6 & -3 \end{bmatrix}

Because |M| = −6, M^{−1} is

M^{-1} = \begin{bmatrix} -3/2 & 0 & 1/2 \\ 4/3 & 2/3 & -1 \\ -1/6 & -1/3 & 1/2 \end{bmatrix}

Note that Equation 1.32 is satisfied.
• M = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix},  adj(M) = \begin{bmatrix} 3 & -1 \\ -4 & 2 \end{bmatrix},  M^{−1} = \begin{bmatrix} 3/2 & -1/2 \\ -2 & 1 \end{bmatrix}

In the 2 × 2 case, this method reduces to interchanging the elements on the main diagonal, changing the sign on the remaining elements, and dividing by the determinant.
• Consider the matrix equation in Equation 1.18. Because det(G) ≠ 0 whenever the resistances are nonzero, with R_1 and R_2 having the same sign, the node voltages may be determined in terms of the current sources by multiplying on the left of both members of the equation using G^{−1}. Then G^{−1} i = v.
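A sketch of Equation 1.32 in code, building the cofactor matrix entry by entry and checking the result against NumPy's built-in inverse:

```python
import numpy as np

def adjugate_inverse(M):
    # Inverse via adj(M)/|M| (Equation 1.32); assumes det(M) != 0.
    n = M.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return cof.T / np.linalg.det(M)   # transpose of cofactors = adj(M)

M = np.array([[0., 1., 2.], [3., 4., 5.], [2., 3., 6.]])
print(np.allclose(adjugate_inverse(M), np.linalg.inv(M)))  # True
```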
The rank of a matrix M, r(M), is the number of linearly independent columns of M over F, or using other terminology, the dimension of the image of M. For M ∈ F^{m×n} the number of linearly independent rows and columns is the same, and is less than or equal to the minimum of m and n. If r(M) = n, M is of full-column rank; similarly, if r(M) = m, M is of full-row rank. A square matrix with all rows (and all columns) linearly independent is said to be nonsingular. In this case, det(M) ≠ 0. The rank of M also may be found from the size of the largest square submatrix with a nonzero determinant. A full-rank matrix has a full-size minor with a nonzero determinant.
The null space (kernel) of a matrix M ∈ F^{m×n} is the set

ker M = {v ∈ F^n such that Mv = 0}    (1.33)

Over F, ker M is a vector space with dimension defined as the nullity of M, ν(M). The fundamental theorem of linear equations relates the rank and nullity of a matrix M ∈ F^{m×n} by

r(M) + ν(M) = n    (1.34)

If r(M) < n, then M has a nontrivial null space.
Examples

• M = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

The rank of M is 1 because only one linearly independent column of M is found. To examine the null space of M, solve Mv = 0. Any element in ker M is of the form [f_1 0]^T for f_1 ∈ F. Therefore, ν(M) = 1.
• M = \begin{bmatrix} 1 & 4 & 5 & 2 \\ 2 & 5 & 7 & 1 \\ 3 & 6 & 9 & 0 \end{bmatrix}

The rank of M is 2 and the nullity is 2.
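A numerical check of Equation 1.34 on the second example; scipy.linalg.null_space returns an orthonormal basis for ker M, so its column count is the nullity:

```python
import numpy as np
from scipy.linalg import null_space

M = np.array([[1, 4, 5, 2],
              [2, 5, 7, 1],
              [3, 6, 9, 0]], dtype=float)

r = np.linalg.matrix_rank(M)
nullity = null_space(M).shape[1]
print(r, nullity, r + nullity == M.shape[1])  # 2 2 True (Equation 1.34)
```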
1.6 Basis Transformations
This section describes a change of basis as a linear operator. Because the choice of basis affects the
matrix of a linear operator, it would be most useful if such a basis change could be understood within
the context of matrix operations. Thus, the new matrix could be determined from the old matrix by
matrix operations. This is indeed possible. This question is examined in two phases. In the first phase the
linear operator maps from a vector space to itself. Then a basis change will be called a similarity
transformation. In the second phase, the linear operator maps from one vector space to another,
which is not necessarily the same as the first. Then a basis change will be called an equivalence
transformation. Of course, the first situation is a special case of the second, but it is customary to
make the distinction and to recognize the different terminologies. Philosophically, a fascinating special
situation exists in which the second vector space, which receives the result of the operation, is an identical
copy of the first vector space, from which the operation proceeds. However, in order to avoid confusion,
this section does not delve into such issues.
For the first phase of the discussion, consider a linear operator that maps a vector space into itself, such as L: V → V, where V is n-dimensional. Once a basis is chosen in V, L will have a unique matrix representation. Choose {v_1, v_2, …, v_n} and {v̄_1, v̄_2, …, v̄_n} as two such bases. A matrix M ∈ F^{n×n} may be determined using the first basis, whereas another matrix M̄ ∈ F^{n×n} will result from the latter choice. According to the discussion following Equation 1.15, the ith column of M is the representation of L(v_i) with respect to {v_1, v_2, …, v_n}, and the ith column of M̄ is the representation of L(v̄_i) with respect to {v̄_1, v̄_2, …, v̄_n}. As in Equation 1.4, any basis element v_i has a unique representation in terms of the basis {v̄_1, v̄_2, …, v̄_n}. Define a matrix P ∈ F^{n×n} using the ith column as this representation. Likewise, Q ∈ F^{n×n} may have as its ith column the unique representation of v̄_i with respect to {v_1, v_2, …, v_n}. Either represents a basis change, which is a linear operator. By construction, both P and Q are nonsingular. Such matrices and linear operators are sometimes called basis transformations. Notice that P = Q^{−1}.
If two matrices M and M̄ represent the same linear operator L, they must somehow carry essentially the same information. Indeed, a relationship between M and M̄ may be established. Consider a_v, a_w, ā_v, ā_w ∈ F^n such that M a_v = a_w and M̄ ā_v = ā_w. Here, a_v denotes the representation of v with respect to the basis v_i, ā_v denotes the representation of v with respect to the basis v̄_i, and so forth. In order to involve P and Q in these equations, it is possible to make use of a sketch:

              M
      a_v ─────────→ a_w
    P ↓ ↑ Q        P ↓ ↑ Q
      ā_v ─────────→ ā_w
              M̄

In this sketch, a vector at a given corner can be multiplied by a matrix on an arrow leaving the corner and set equal to the vector that appears at the corner at which that arrow arrives. Thus, for example, a_w = M a_v may be deduced from the top edge of the sketch. It is interesting to perform "chases" around such sketches. By way of illustration, consider the lower right corner. Progress around the sketch
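Such a chase can be verified numerically. With P converting old-basis coordinates to new ones (ā_v = P a_v) and Q = P^{−1}, the square above gives M̄ = P M Q; the sketch below uses arbitrary example matrices (not from the text) to confirm that both routes from a_v to ā_w agree:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))   # operator matrix in the old basis
P = rng.standard_normal((3, 3))   # basis change; nonsingular in practice
Q = np.linalg.inv(P)              # Q = P^{-1}

M_bar = P @ M @ Q                 # operator matrix in the new basis

# Chase the square: top edge then right edge vs. left edge then bottom edge.
a_v = rng.standard_normal(3)
print(np.allclose(P @ (M @ a_v), M_bar @ (P @ a_v)))  # True
```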