Preface & Acknowledgments
Robust control theory has been the object of much of the research activity developed in the last fifteen years within the context of linear systems control. At this stage, the results of these efforts constitute a fairly well established part of the scientific community's background, so that the relevant techniques can reasonably be exploited for practical purposes. Indeed, despite their complex derivation, these results are simple to implement and capable of accounting for a number of interesting real-life applications. Therefore the demand for including these topics in control engineering courses is both timely and suitable, and motivated the birth of this book, which covers the basic facts of robust control theory, as well as more recent achievements, such as robust stability and robust performance in the presence of parameter uncertainties. The book has been primarily conceived for graduate students as well as for people first entering this research field. However, the particular care which has been dedicated to didactic aspects renders the book suited also to undergraduate students who are already acquainted with basic system and control theory. Indeed, the required mathematical background is supplied where necessary.
Part of the material collected here has been structured according to the textbook Controllo in RH2-RH∞ (in Italian) by the same authors, who are deeply indebted to the publisher Pitagora for having kindly permitted its use. The first five chapters introduce the basic results of RH2 and RH∞ theory, whereas the last two chapters are devoted to presenting more recent results on robust control theory in a general and self-contained setting. The authors gratefully acknowledge the financial support of the Centro di Teoria dei Sistemi of the Italian National Research Council - CNR, the Brazilian National Research Council - CNPq (under grant 301373/80) and the Research Council of the State of Sao Paulo, Brazil - FAPESP (under grant 90/3607-0).
This book is the result of a joint, fruitful and equal scientific cooperation. For this reason, the authors' names appear on the front page in alphabetical order.
Patrizio Colaneri
Jose C. Geromel
Arturo Locatelli
Milan, Italy
Campinas, Brazil
Milan, Italy
Table of Contents

Preface & Acknowledgments

1 Introduction 1
2 Preliminaries 3
3 Feedback Systems Stability 69
4 RH2 Control 87
5 RH∞ Control 121
6 Nonclassical Problems in RH2 and RH∞ 195
7 Uncertain Systems Control Design 263
A Some Facts on Polynomials 301
B Singular Values of Matrices 303
C Riccati Equation 315
D Structural Properties 325
E The Standard 2-Block Scheme 327
F Loop Shifting 337
G Worst Case Analysis 343
H Convex Functions and Sets 349
I Convex Programming Numerical Tools 359
Bibliography 371
Index 375
Chapter 1
Introduction
Frequency domain techniques have long proved to be particularly fruitful and simple in the design of (linear time-invariant) SISO¹ control systems. For many years, the attempts at generalizing such nice techniques to the MIMO² context appeared less appealing. This partially motivated the great deal of interest which has been devoted to time domain design methodologies starting in the early 60's. Indeed, this stream of research originated a huge number of results of both remarkable conceptual relevance and practical impact, the most celebrated of which is probably the LQG³ design. Widely acknowledged are the merits of such an approach: among them, the relatively small computational burden involved in the actual definition of the controller and the possibility of affecting the dynamical behavior of the control system through a guided sequence of experiments aimed at the proper choice of the parameters of both the performance index (weighting matrices) and the uncertainty description (noise intensities). Equally well known are the limits of the LQG design methodology, the most significant of which is the possible performance decay caused by operating conditions even slightly different from the (nominal) ones referred to in the design stage. Specifically, the lack of robustness of the classical LQG design originates from the fact that it does not account for uncertain knowledge or unexpected perturbations of the plant, actuator and sensor parameters.

The need of simultaneously complying with design requirements naturally specified in the frequency domain and guaranteeing robustness of the control system in the face of uncertainties and/or parameter deviations focused much of the research activity on the attempt of overcoming the traditional and myopic dichotomy between time and frequency domain approaches. At this stage, after about two decades of intense efforts along these lines, the control system designer can rely on a set of well established results which give proper answers to the significant issues of performance and stability robustness. The value of the results achieved so far partially stems from the construction of a unique formal theoretical picture which naturally includes both the classical LQG design (RH2 design), revisited in the light of a transfer function-like approach, and the new challenging developments of the so-called robust design (RH∞ design), which encompasses most of the above mentioned robustness issues.

The design methodologies which are presented in the book are based on the minimization of a performance index, simply consisting of the norm of a suitable transfer

¹Single-input single-output
²Multi-input multi-output
³Linear quadratic gaussian
function. A distinctive feature of these techniques is the fact that they do not come up with a unique solution to the design problem; rather, they provide a whole set of (admissible) solutions which satisfy a constraint on the maximum deterioration of the performance index. The attitude of focusing on the class of admissible controllers instead of determining just one of them can be traced back to a fundamental result which concerns the parametrization of the class of controllers stabilizing a given plant. Chapter 3 is actually dedicated to such a result and deals also with other questions on feedback systems stability. In the subsequent Chapters 4 and 5 the main results of RH2 and RH∞ design are presented, respectively. In addition, a few distinguishing aspects of the underlying theory are emphasized as well, together with particular, yet significant, cases of the general problem. Chapter 5 also contains a preliminary discussion on the robustness requirements which motivate the formulation of the so-called standard RH∞ control problem. Chapters 6 and 7 go beyond the previous ones in the sense that the design problems to be dealt with are set in a more general framework. One of the most interesting examples of this situation is the so-called mixed RH2/RH∞ problem, which is expressed in terms of both the RH2 and RH∞ norms of two transfer functions competing with each other to get the best tradeoff between performance and robustness. Other problems that fall into this framework are those related to regional pole placement, time-domain specifications and structural constraints. All of them share basically the same numerical difficulty. Indeed, they cannot be solved by the methodology given in the previous chapters, but rather by means of mathematical programming methods. More specifically, all of them can (after a proper change of variables) be converted into convex problems. This feature is important from both the practical and theoretical points of view, since numerical efficiency allows the treatment of real-world problems of generally large dimension, while global optimality is always assured. Chapter 7 is devoted to controller design for systems subject to structured convex bounded uncertainties, which model in an adequate and precise way many classes of parametric uncertainties with practical appeal. The associated optimal control problems are formulated and solved jointly with respect to the controller transfer function and the feasible uncertainty, in order to guarantee minimum loss in the performance index. One such situation of great importance in its own right is the design problem involving actuator failures. Robust stability and performance are addressed for two classes of nonlinear perturbations, leading to what are called the Persidskii and Lur'e designs. In general terms, the same technique involving the reduction of the related optimal control design problems to convex programming problems is again used. The main point to be remarked is that the two classes of nonlinear perturbations considered impose additional linear, and hence convex, constraints on the matrix variables to be determined.

Treating these arguments requires a fairly deep understanding of some facts from mathematics not so frequently included in the curricula of students in Engineering. Covering the relevant mathematical background is the scope of Chapter 2, where the functional (Hardy) spaces which permeate the whole book are characterized. Some miscellaneous facts on matrix algebra, system and control theory and convex optimization are collected in Appendices A through I.
Chapter 2

Preliminaries

2.1 Introduction
The scope of this chapter is twofold: on one hand it is aimed at presenting the extension of the concepts of poles and zeros, well known for single-input single-output (SISO) systems, to the multivariable case; on the other, it is devoted to the introduction of the basic notions relative to some functional spaces whose elements are matrices of rational functions (the spaces RL2, RL∞, RH2, RH∞). The reason for this choice stems from the need of presenting a number of results concerning significant control problems for linear, continuous-time, finite dimensional and time-invariant systems.

The derivation of the related results takes substantial advantage of the nature of the analysis and design methodology adopted; such a methodology was actually developed so as to take into account state-space and frequency based techniques at the same time.

For this reason, the need of carefully extending to multi-input multi-output (MIMO) systems the notions of zeros and poles, which proved so fruitful in the context of SISO systems, should not be surprising. In Section 2.5, where this extension is carried out, a few fascinating and in some sense unexpected relations between poles, zeros, eigenvalues, time responses and ranks of polynomial matrices will be put into sharp relief.

Analogously, the opportunity of going in depth into the characterization of transfer matrices (transfer functions for MIMO systems) in their natural embedding, namely the complex plane, should be taken for granted. The systems considered hereafter obviously have rational transfer functions. This leads to the need of providing, in Section 2.8, the basic ideas on suitable functional spaces and linear operators, so as to throw some light on the connections between facts which naturally lie in the time-domain and others more suited to the frequency-domain setting.

Although the presentation of these two issues is intentionally limited to a few basic aspects, it nevertheless requires some knowledge of matrices of polynomials, matrices of rational functions, singular values and linear operators. Sections 2.3-2.7 are dedicated to the acquisition of such notions.
2.2 Notation and terminology
The continuous-time linear time-invariant dynamic systems, object of the present text, are described, depending on circumstances, by a state space representation
$$\dot{x} = Ax + Bu$$
$$y = Cx + Du$$
or by their transfer function
$$G(s) = C(sI - A)^{-1}B + D$$
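As a side illustration of this notation, the transfer function associated with a state space representation can be computed symbolically. The following Python (SymPy) sketch uses an arbitrary two-state realization chosen for the example, not one taken from the text.

```python
import sympy as sp

s = sp.symbols('s')

# An arbitrary example realization (A, B, C, D) with two states.
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

# G(s) = C (sI - A)^{-1} B + D
G = sp.simplify(C * (s * sp.eye(2) - A).inv() * B + D)
print(G)  # Matrix([[1/(s**2 + 3*s + 2)]])
```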
The signals which refer to a system are understood to be either in the time-domain or in the frequency-domain whenever the context does not lead to possible misunderstandings. Sometimes, it is necessary to explicitly stress that the derivation is in the frequency-domain. In this case, the subscript "L" indicates the Laplace transform of the considered signal, whereas the subscript "L0" denotes the Laplace transform when the system state at the initial time is zero (typically, this situation occurs when one thinks in terms of transfer functions). For instance, with reference to the above system, one may write
$$y_{L0} = G(s)u_L$$
$$y_L = y_{L0} + C(sI - A)^{-1}x(0)$$
Occasionally, the transfer function G(s) of a system Σ is explicitly related to one of its realizations by writing
$$G(s) = \Sigma(A, B, C, D)$$
or
$$G(s) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]$$
The former notation basically has a compactness value, whereas the latter is mainly useful when one wants to display possible partitions in the input and/or output matrices. For example, the system
$$\dot{x} = Ax + B_1w + B_2u$$
$$z = C_1x + D_{12}u$$
$$y = C_2x + D_{21}w$$
is related to its transfer function G(s) by writing
$$G(s) = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{array}\right]$$
When a purely algebraic (i.e. nondynamic) system is considered, these notations become
$$G(s) = \Sigma(0, 0, 0, D) = D$$
Referring to the class of systems considered here, the transfer functions are in fact rational matrices of a complex variable, namely, matrices whose generic element is a rational function, i.e., a ratio of polynomials with real coefficients. The transfer function is said to be proper when each element is a proper rational function, i.e., a ratio of polynomials with the degree of the numerator not greater than the degree of the denominator. When this inequality holds in a strict sense for each element of the matrix, the transfer function is said to be strictly proper. Briefly, G(s) is proper if
$$\lim_{s \to \infty} G(s) = K < \infty$$
where the notation K < ∞ means that each element of matrix K is finite. Analogously, G(s) is strictly proper if
$$\lim_{s \to \infty} G(s) = 0$$
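This limit characterization is easy to check symbolically. In the sketch below (the matrix is an illustrative choice, not from the text), every entry of K is finite but K ≠ 0, so G(s) is proper without being strictly proper.

```python
import sympy as sp

s = sp.symbols('s')

G = sp.Matrix([[(2*s + 1)/(s + 3), 1/(s**2 + 1)],
               [2, (s - 1)/(s**2 + s + 1)]])

# K = lim_{s -> oo} G(s), evaluated entry by entry.
K = G.applyfunc(lambda g: sp.limit(g, s, sp.oo))
print(K)  # Matrix([[2, 0], [2, 0]]): finite, hence G proper (not strictly)
```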
A rational matrix G(s) is said to be analytic in Re(s) ≥ 0 (resp. ≤ 0) if all the elements of the matrix are bounded functions in the closed right (resp. left) half plane.
In connection with a system characterized by the transfer function
$$G(s) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] \qquad (2.1)$$
the so-called adjoint system has transfer function
$$G^\sim(s) := G'(-s) = \left[\begin{array}{c|c} -A' & -C' \\ \hline B' & D' \end{array}\right]$$
whereas the transfer function of the so-called transpose system is
$$G'(s) = \left[\begin{array}{c|c} A' & C' \\ \hline B' & D' \end{array}\right]$$
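The stated realization of the adjoint system can be verified symbolically: for a realization (here an arbitrary example), the transfer function of (−A′, −C′, B′, D′) coincides with G′(−s).

```python
import sympy as sp

s = sp.symbols('s')

A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

def tf(A, B, C, D):
    """Transfer function C (sI - A)^{-1} B + D of a realization."""
    n = A.shape[0]
    return sp.simplify(C * (s * sp.eye(n) - A).inv() * B + D)

G = tf(A, B, C, D)
G_adj = tf(-A.T, -C.T, B.T, D.T)   # realization of the adjoint system

# G~(s) = G'(-s)
assert sp.simplify(G_adj - G.T.subs(s, -s)) == sp.zeros(1, 1)
```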
System (2.1) is said to be input-output stable if its transfer function G(s) is analytic in Re(s) ≥ 0 (G(s) is stable, for short). It is said to be internally stable if matrix A is stable, i.e., if all its eigenvalues have negative real parts.

Now observe that a system is input-output stable if and only if all elements of G(s), whenever expressed as ratios of polynomials without common roots, have their poles in the open left half plane only. If the realization of system (2.1) is minimal, the system is input-output stable if and only if it is internally stable.

Finally, the conjugate transpose of a generic (complex) matrix A is denoted by A^∼ and, if A is square, λ_i(A) is its i-th eigenvalue, while
$$r_s(A) := \max_i |\lambda_i(A)|$$
denotes its spectral radius.
2.3 Polynomial matrices

A polynomial matrix is a matrix whose elements are polynomials in a unique unknown. Throughout the book, such an unknown is denoted by the letter s. All the polynomial coefficients are real. Hence, the element n_{ij}(s) in position (i, j) of the polynomial matrix N(s) takes the form
$$n_{ij}(s) = a_\nu s^\nu + a_{\nu-1}s^{\nu-1} + \cdots + a_1 s + a_0\ , \quad a_k \in R\ ,\ \forall k$$
The degree of a polynomial p(s) is denoted by deg[p(s)]. If the leading coefficient a_ν is equal to one, the polynomial is said to be monic.
The rank of a polynomial matrix N(s), denoted by rank[N(s)], is defined by analogy with the definition of the rank of a numeric matrix, i.e., it is the dimension of the largest square matrix which can be extracted from N(s) with determinant not identically zero.

A square polynomial matrix is said to be unimodular if it has full rank (it is invertible) and its determinant is a nonzero constant.
Example 2.1 The polynomial matrices
$$N_1(s) = \begin{bmatrix} 1 & 0 \\ s+1 & 3 \end{bmatrix} \qquad N_2(s) = \begin{bmatrix} s+1 & s+2 \\ s-2 & s-1 \end{bmatrix}$$
are unimodular since det[N_1(s)] = det[N_2(s)] = 3.
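Unimodularity is checked by a determinant computation, as in the following SymPy sketch reproducing the two matrices of Example 2.1.

```python
import sympy as sp

s = sp.symbols('s')

N1 = sp.Matrix([[1, 0], [s + 1, 3]])
N2 = sp.Matrix([[s + 1, s + 2], [s - 2, s - 1]])

# Both determinants equal the nonzero constant 3: N1 and N2 are unimodular.
print(sp.expand(N1.det()), sp.expand(N2.det()))  # 3 3

# The inverse of a unimodular matrix is again a polynomial matrix.
print(sp.simplify(N2.inv()))
```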
A very peculiar property of a unimodular matrix is that its inverse is still a polynomial (and obviously unimodular) matrix. Similarly to what is usually done for polynomials, the concepts of divisor and greatest common divisor can be given for polynomial matrices as well.

Definition 2.1 (Right divisor) Let N(s) be a polynomial matrix. A square polynomial matrix R(s) is said to be a right divisor of N(s) if it is such that
$$N(s) = \bar{N}(s)R(s)$$
with N̄(s) a suitable polynomial matrix. □
An analogous definition can be formulated for the left divisor.

Definition 2.2 (Greatest common right divisor) Let N(s) and D(s) be polynomial matrices with the same number of columns. A square polynomial matrix R(s) is said to be a Greatest Common Right Divisor (GCRD) of (N(s), D(s)) if it is such that:

i) R(s) is a right divisor of D(s) and N(s), i.e.
$$N(s) = \bar{N}(s)R(s) \quad , \quad D(s) = \bar{D}(s)R(s)$$
with N̄(s) and D̄(s) suitable polynomial matrices;

ii) for each polynomial matrix R̂(s) such that
$$N(s) = \tilde{N}(s)\hat{R}(s) \quad , \quad D(s) = \tilde{D}(s)\hat{R}(s)$$
with Ñ(s) and D̃(s) polynomial matrices, it turns out that R(s) = W(s)R̂(s), where W(s) is again a suitable polynomial matrix. □
A similar definition can be formulated for the Greatest Common Left Divisor (GCLD). It is easy to see, by exploiting the properties of unimodular matrices, that, given two polynomial matrices N(s) and D(s), there exist infinitely many GCRD's (and obviously GCLD's). A way to compute a GCRD (resp. GCLD) of two assigned polynomial matrices N(s) and D(s) relies on their manipulation through a unimodular matrix which represents a sequence of suitable elementary operations on their rows (resp. columns). The elementary operations on the rows (resp. columns) of a polynomial matrix N(s) are:

1) Interchange of the i-th row (resp. i-th column) with the j-th row (resp. j-th column)

2) Multiplication of the i-th row (resp. i-th column) by a nonzero scalar

3) Addition of a polynomial multiple of the i-th row (resp. i-th column) to the j-th row (resp. j-th column).

It is readily seen that each elementary operation can be performed by premultiplying (resp. postmultiplying) N(s) by a suitable polynomial and unimodular matrix T(s). Moreover, the matrix T(s)N(s) (resp. N(s)T(s)) turns out to have the same rank as N(s).
Remark 2.1 Notice that, given two polynomials r_0(s) and r_1(s) with deg[r_0(s)] ≥ deg[r_1(s)], it is always possible to define two sequences of polynomials {r_i(s), i = 2, 3, ..., p+2} and {q_i(s), i = 1, 2, ..., p+1}, with 0 ≤ p, such that
$$r_i(s) = q_{i+1}(s)r_{i+1}(s) + r_{i+2}(s)\ , \quad i = 0, 1, \cdots, p$$
$$\deg[r_{i+2}(s)] < \deg[r_{i+1}(s)]\ , \quad i = 0, 1, \cdots, p$$
$$r_{p+2}(s) = 0$$
Letting
$$T_i(s) := \begin{bmatrix} 1 & -q_i(s) \\ 0 & 1 \end{bmatrix}\ ,\quad \eta_i(s) := \begin{bmatrix} r_{i-1}(s) \\ r_i(s) \end{bmatrix}\ , \quad i = 1, 3, 5, \cdots$$
$$T_i(s) := \begin{bmatrix} 1 & 0 \\ -q_i(s) & 1 \end{bmatrix}\ ,\quad \eta_i(s) := \begin{bmatrix} r_i(s) \\ r_{i-1}(s) \end{bmatrix}\ , \quad i = 2, 4, 6, \cdots$$
$$T(s) := \prod_{i=1}^{p+1} T_{p+2-i}(s)$$
and noticing that T(s) is unimodular (product of unimodular matrices), it turns out that
$$T(s)\eta_1(s) = \begin{bmatrix} r_{p+1}(s) \\ 0 \end{bmatrix}\ ,\quad p = 1, 3, 5, \cdots \qquad T(s)\eta_1(s) = \begin{bmatrix} 0 \\ r_{p+1}(s) \end{bmatrix}\ ,\quad p = 2, 4, 6, \cdots$$
For instance, take r_0(s) = s^4 + 2s^2 - s + 2, r_1(s) = s^3 + s - 2. It follows that q_1(s) = s, q_2(s) = s - 1, r_2(s) = s^2 + s + 2 and r_3(s) = 0. □
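The construction of Remark 2.1 is the classical Euclidean remainder sequence; the short SymPy sketch below reproduces the data of the closing example via polynomial division.

```python
import sympy as sp

s = sp.symbols('s')

# r0, r1 as in the example of Remark 2.1.
r = [sp.Poly(s**4 + 2*s**2 - s + 2, s), sp.Poly(s**3 + s - 2, s)]
q = []
while not r[-1].is_zero:
    quo, rem = sp.div(r[-2], r[-1])
    q.append(quo)
    r.append(rem)

print([qi.as_expr() for qi in q])      # [s, s - 1]
print([ri.as_expr() for ri in r[2:]])  # [s**2 + s + 2, 0]
```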
By repeatedly exploiting the facts shown in Remark 2.1, it is easy to verify that, given a polynomial matrix N(s) with the number of rows not smaller than the number of columns, there exists a suitable polynomial and unimodular matrix T(s) such that
$$T(s)N(s) = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
where R(s) is a square polynomial matrix.
Algorithm 2.1 (GCRD of two polynomial matrices) Let N(s) and D(s) be two polynomial matrices with the same number, say m, of columns and with n_n and n_d rows, respectively.

1) Assume that m ≤ n_d + n_n, otherwise go to point 4). Let P(s) := [D'(s) N'(s)]' and determine a polynomial and unimodular matrix T(s) such that
$$T(s)P(s) = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
Notice that T(s) can be partitioned as follows
$$T(s) := \begin{bmatrix} T_{d1}(s) & T_{n1}(s) \\ T_{d2}(s) & T_{n2}(s) \end{bmatrix}$$
where T_{d1}(s) and T_{d2}(s) have n_d columns.

2) Letting S(s) := T^{-1}(s) and writing
$$S(s) := \begin{bmatrix} S_{d1}(s) & S_{d2}(s) \\ S_{n1}(s) & S_{n2}(s) \end{bmatrix}$$
partitioned commensurately, with S_{d1}(s) of dimension n_d × m, it turns out that
$$D(s) = S_{d1}(s)R(s) \quad , \quad N(s) = S_{n1}(s)R(s)$$
so that R(s) is a right divisor of both D(s) and N(s).

3) It also holds that
$$R(s) = T_{d1}(s)D(s) + T_{n1}(s)N(s) \qquad (2.2)$$
Hence, suppose that R̂(s) is any other right divisor of both D(s) and N(s). Therefore, for some polynomial matrices D̃(s) and Ñ(s) it follows that D(s) = D̃(s)R̂(s) and N(s) = Ñ(s)R̂(s). The substitution of these two expressions in eq. (2.2) leads to R(s) = [T_{d1}(s)D̃(s) + T_{n1}(s)Ñ(s)]R̂(s), so that R(s) is a GCRD of (N(s), D(s)).

4) If m > n_d + n_n, take the two matrices D̃(s) and Ñ(s), both with m columns and with n_d and n_n rows, respectively,
$$\tilde{D}(s) := [\,I \ \ 0 \ \ 0\,] \quad , \quad \tilde{N}(s) := [\,0 \ \ I \ \ 0\,]$$
and let
$$R(s) := \begin{bmatrix} D(s) \\ N(s) \\ 0 \end{bmatrix} \qquad (2.3)$$
where the zero block has m - n_d - n_n rows. Thus, D(s) = D̃(s)R(s) and N(s) = Ñ(s)R(s). Hence, R(s) is a right divisor of both D(s) and N(s). Assume now that R̂(s) is any other right divisor, i.e. there exist two polynomial matrices D̄(s) and N̄(s) such that D(s) = D̄(s)R̂(s) and N(s) = N̄(s)R̂(s). By substituting these two last expressions in eq. (2.3) one obtains
$$R(s) = \begin{bmatrix} \bar{D}(s) \\ \bar{N}(s) \\ 0 \end{bmatrix}\hat{R}(s)$$
so leading to the conclusion that R(s) is a GCRD of (N(s), D(s)). □
Example 2.2 Consider the matrices
$$D(s) = \begin{bmatrix} s^2 - s & s^2 \\ 2s^2 + 9s + 5 & 2s^2 + 5s + 5 \end{bmatrix} \quad , \quad N(s) = [\,s^2 + 1 \ \ \ s^2 + 2s + 1\,]$$
A sequence of elementary operations on the rows of P(s) := [D'(s) N'(s)]', performed as indicated in Remark 2.1, corresponds to the polynomial and unimodular matrix
$$T(s) = \begin{bmatrix} (3s-2)/2 & s/6 & (6-11s)/6 \\ -(24s+93)/103 & (3s-14)/103 & (18s+70)/103 \\ (112s^2+252s+196)/103 & (-14s^2+28s+14)/103 & -(84s^2+70s+70)/103 \end{bmatrix}$$
so that
$$T(s)P(s) = \begin{bmatrix} 1 & (-17s^2+6s+6)/6 \\ 0 & s \\ 0 & 0 \end{bmatrix} \quad , \quad R(s) = \begin{bmatrix} 1 & (-17s^2+6s+6)/6 \\ 0 & s \end{bmatrix}$$
Finally, notice that
$$T^{-1}(s) := S(s) = \begin{bmatrix} S_{d1}(s) & S_{d2}(s) \\ S_{n1}(s) & S_{n2}(s) \end{bmatrix}$$
with
$$S_{d1}(s) = \begin{bmatrix} s(s-1) & (17s^3 - 23s^2 + 6s + 6)/6 \\ 2s^2 + 9s + 5 & (34s^3 + 141s^2 + 31s - 54)/6 \end{bmatrix}$$
$$S_{n1}(s) = [\,s^2 + 1 \ \ \ (17s^3 - 6s^2 + 17s + 6)/6\,]$$
It is then easy to verify that D(s) = S_{d1}(s)R(s) and that N(s) = S_{n1}(s)R(s). □
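The computations of Example 2.2 can be verified symbolically (the matrices below are those reconstructed above): T(s)P(s) indeed has the form [R'(s) 0]', and D(s), N(s) are recovered from the first block column of S(s) = T^{-1}(s).

```python
import sympy as sp

s = sp.symbols('s')

D = sp.Matrix([[s**2 - s, s**2],
               [2*s**2 + 9*s + 5, 2*s**2 + 5*s + 5]])
N = sp.Matrix([[s**2 + 1, s**2 + 2*s + 1]])
P = sp.Matrix.vstack(D, N)

T = sp.Matrix([
    [(3*s - 2)/2, s/6, (6 - 11*s)/6],
    [-(24*s + 93)/103, (3*s - 14)/103, (18*s + 70)/103],
    [(112*s**2 + 252*s + 196)/103, (-14*s**2 + 28*s + 14)/103,
     -(84*s**2 + 70*s + 70)/103]])

TP = sp.expand(T * P)
print(TP)                  # rows: [1, (-17 s^2 + 6 s + 6)/6], [0, s], [0, 0]

R = TP[:2, :]
S = T.inv()
assert sp.simplify(S[:2, :2] * R - D) == sp.zeros(2, 2)   # D = S_d1 R
assert sp.simplify(S[2:, :2] * R - N) == sp.zeros(1, 2)   # N = S_n1 R
```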
The familiar concept of coprimeness, easily introduced for polynomials, can be properly extended to polynomial matrices as follows.

Definition 2.3 (Right coprimeness) Two polynomial matrices N(s) and D(s), having the same number of columns, are said to be right coprime if the two equations
$$N(s) = \bar{N}(s)T(s) \quad , \quad D(s) = \bar{D}(s)T(s)$$
where N̄(s) and D̄(s) are suitable polynomial matrices, are verified by a unimodular polynomial matrix T(s) only. □
Example 2.3 The matrices
$$N(s) = \begin{bmatrix} s^3 + 3s^2 - s - 3 \\ s^2 - 1 \end{bmatrix} \quad , \quad D(s) = \begin{bmatrix} 2s^2 + 6s + 4 \\ s^3 + 3s^2 + 2s \end{bmatrix}$$
are not right coprime. Actually, it turns out that
$$N(s) = \bar{N}(s)R(s)\ ,\quad \bar{N}(s) = \begin{bmatrix} s^2 + 2s - 3 \\ s - 1 \end{bmatrix}$$
$$D(s) = \bar{D}(s)R(s)\ ,\quad \bar{D}(s) = \begin{bmatrix} 2s + 4 \\ s^2 + 2s \end{bmatrix}$$
with R(s) = s + 1, and R(s) is not unimodular (det[R(s)] = s + 1). □
Of course, an analogous definition can be stated for left coprimeness. Definitions 2.1-2.3 also yield that two matrices are right (resp. left) coprime if all their common right (resp. left) divisors are actually unimodular. In particular, each GCRD (resp. GCLD) of two right (resp. left) coprime matrices must be unimodular. In view of Algorithm 2.1, this entails that a possible way to verify whether or not two matrices are right (resp. left) coprime is computing and evaluating the determinant of a greatest common divisor. As a matter of fact, if a GCRD (resp. GCLD) is unimodular, then all other greatest common divisors are unimodular as well. More precisely, if R_1(s) and R_2(s) are two GCRD's and R_1(s) is unimodular, it results R_1(s) = W(s)R_2(s), with W(s) polynomial. Since det[R_1(s)] ≠ 0, it follows that det[R_2(s)] ≠ 0 as well.

Again from Algorithm 2.1 (step 1) it can be concluded that two polynomial matrices D(s) and N(s) with the same number of columns, say m, are right coprime if the rank of P(s) := [D'(s) N'(s)]' is m for any s. As a matter of fact, coprimeness is equivalent to R(s) being unimodular, so that rank[T(s)P(s)] must be constant and equal to m. Since T(s) is unimodular, rank[P(s)] must be constant and equal to m as well.
Lemma 2.1 Let N(s) and D(s) be two polynomial matrices with the same number of columns and let R(s) be a GCRD of (N(s), D(s)). Then,

i) T(s)R(s) is a GCRD of (N(s), D(s)) for any polynomial and unimodular matrix T(s);

ii) if R̂(s) is an arbitrary GCRD of (N(s), D(s)), then there exists a polynomial and unimodular matrix T(s) such that
$$\hat{R}(s) = T(s)R(s)$$
Proof Point i) Being R(s) a GCRD of (N(s), D(s)), it follows that
$$N(s) = \bar{N}(s)R(s) \quad , \quad D(s) = \bar{D}(s)R(s)$$
and, for any right divisor R̂(s) such that N(s) = Ñ(s)R̂(s) and D(s) = D̃(s)R̂(s), it results R(s) = W(s)R̂(s), where N̄(s), D̄(s), Ñ(s), D̃(s) and W(s) are suitable polynomial matrices. Taken an arbitrary polynomial and unimodular matrix T(s), let R̃(s) := T(s)R(s). It follows that N(s) = [N̄(s)T^{-1}(s)]R̃(s) and D(s) = [D̄(s)T^{-1}(s)]R̃(s), so that R̃(s) is a right divisor of both matrices. Furthermore, it is R̃(s) = [T(s)W(s)]R̂(s) for any right divisor R̂(s). Hence, R̃(s) is a GCRD of (N(s), D(s)) as well.
Point ii) If R(s) and R̂(s) are two GCRD's of (N(s), D(s)), then, for two suitable polynomial matrices W(s) and Ŵ(s), it results R(s) = W(s)R̂(s) and R̂(s) = Ŵ(s)R(s). From these relations it follows that rank[R(s)] ≤ rank[R̂(s)] ≤ rank[R(s)]. Therefore, rank[R(s)] = rank[R̂(s)]. Let now U(s) and Û(s) be two polynomial and unimodular matrices such that
$$U(s)R(s) = \begin{bmatrix} H(s) \\ 0 \end{bmatrix} \quad , \quad \hat{U}(s)\hat{R}(s) = \begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix}$$
where the two submatrices H(s) and Ĥ(s) have the same number of rows, equal to rank[R(s)]. Consequently,
$$\begin{bmatrix} H(s) \\ 0 \end{bmatrix} = U(s)R(s) = U(s)W(s)\hat{R}(s) = U(s)W(s)\hat{U}^{-1}(s)\begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix} =: \Gamma(s)\begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix} = \begin{bmatrix} \Gamma_{11}(s) & \Gamma_{12}(s) \\ \Gamma_{21}(s) & \Gamma_{22}(s) \end{bmatrix}\begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix} \qquad (2.4)$$
and
$$\begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix} = \hat{U}(s)\hat{R}(s) = \hat{U}(s)\hat{W}(s)R(s) = \hat{U}(s)\hat{W}(s)U^{-1}(s)\begin{bmatrix} H(s) \\ 0 \end{bmatrix} =: \hat{\Gamma}(s)\begin{bmatrix} H(s) \\ 0 \end{bmatrix} = \begin{bmatrix} \hat{\Gamma}_{11}(s) & \hat{\Gamma}_{12}(s) \\ \hat{\Gamma}_{21}(s) & \hat{\Gamma}_{22}(s) \end{bmatrix}\begin{bmatrix} H(s) \\ 0 \end{bmatrix} \qquad (2.5)$$
From eqs. (2.4), (2.5) it follows that
$$0 = \Gamma_{21}(s)\hat{H}(s) \qquad (2.6)$$
$$0 = \hat{\Gamma}_{21}(s)H(s) \qquad (2.7)$$
Being R(s) and R̂(s) square, the matrices H(s) and Ĥ(s) have rank equal to the number of their rows, which is obviously not greater than the number of their columns. Therefore, from eqs. (2.6) and (2.7) it follows that Γ_{21}(s) = 0 and Γ̂_{21}(s) = 0. Equations (2.4), (2.5) outline that the matrices Γ_{12}(s), Γ_{22}(s), Γ̂_{12}(s) and Γ̂_{22}(s) are in fact arbitrary. Hence, one can set Γ_{12}(s) = Γ̂_{12}(s) = 0 and Γ_{22}(s) = Γ̂_{22}(s) = I. Based on these considerations, one can henceforth assume that Γ(s) and Γ̂(s) have the form
$$\Gamma(s) = \begin{bmatrix} \Gamma_{11}(s) & 0 \\ 0 & I \end{bmatrix} \quad , \quad \hat{\Gamma}(s) = \begin{bmatrix} \hat{\Gamma}_{11}(s) & 0 \\ 0 & I \end{bmatrix}$$
so that, from eqs. (2.4) and (2.5), it follows
$$\begin{bmatrix} H(s) \\ 0 \end{bmatrix} = \begin{bmatrix} \Gamma_{11}(s) & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} \hat{\Gamma}_{11}(s) & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} H(s) \\ 0 \end{bmatrix}$$
In particular, it is H(s) = Γ_{11}(s)Γ̂_{11}(s)H(s), so that, recalling the properties of H(s), it results I = Γ_{11}(s)Γ̂_{11}(s). Hence, both Γ_{11}(s) and Γ̂_{11}(s) are unimodular, since their inverses are still polynomial matrices. The same holds for Γ(s) and Γ̂(s) as well. Finally,
$$\hat{R}(s) = \hat{U}^{-1}(s)\begin{bmatrix} \hat{H}(s) \\ 0 \end{bmatrix} = \hat{U}^{-1}(s)\hat{\Gamma}(s)\begin{bmatrix} H(s) \\ 0 \end{bmatrix} = \hat{U}^{-1}(s)\hat{\Gamma}(s)U(s)R(s) := T(s)R(s)$$
and T(s) is actually unimodular since it is the product of unimodular matrices. □
Remark 2.2 In view of the results just proved and the given definitions, it is apparent that when the matrices are in fact scalars it results: (i) a right divisor is also a left divisor and vice-versa; (ii) two GCRD's differ only by a multiplicative scalar, since all unimodular polynomials p(s) take the form p(s) := a, a ∈ R, a ≠ 0; (iii) two polynomials are coprime if and only if they do not have common roots. □
Right coprime polynomial matrices enjoy the property stated in the following lemma, which provides the generalization of a well known result relative to integers and polynomials (see also Theorem A.1).
Lemma 2.2 Let N(s) and D(s) be two polynomial matrices with the same number of columns. Then, they are right coprime if and only if there exist two polynomial matrices X(s) and Y(s) such that
$$X(s)N(s) + Y(s)D(s) = I \qquad (2.8)$$
Proof Based on the results illustrated in Algorithm 2.1, it is always possible to write a generic GCRD R(s) of (N(s), D(s)) as R(s) = X̄(s)N(s) + Ȳ(s)D(s). Moreover, if N(s) and D(s) are coprime, R(s) must be unimodular, so that
$$I = R^{-1}(s)R(s) = R^{-1}(s)[\bar{X}(s)N(s) + \bar{Y}(s)D(s)] = X(s)N(s) + Y(s)D(s)$$
where X(s) := R^{-1}(s)X̄(s), Y(s) := R^{-1}(s)Ȳ(s).

Conversely, suppose that there exist two matrices X(s) and Y(s) satisfying eq. (2.8) and let R(s) be a GCRD of (N(s), D(s)), i.e.
$$N(s) = \bar{N}(s)R(s) \quad , \quad D(s) = \bar{D}(s)R(s)$$
It is then possible to write I = [X(s)N̄(s) + Y(s)D̄(s)]R(s), yielding
$$R^{-1}(s) = X(s)\bar{N}(s) + Y(s)\bar{D}(s) \qquad (2.9)$$
The right side of equation (2.9) is a polynomial matrix. This entails that R(s) is a unimodular matrix, so that N(s) and D(s) are right coprime. □
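For scalar polynomials, the identity (2.8) reduces to the classical Bezout identity (see Theorem A.1) and can be computed directly, e.g. with SymPy's extended Euclidean algorithm; the polynomials below are an arbitrary illustration.

```python
import sympy as sp

s = sp.symbols('s')

# n(s), d(s) coprime: they have no common roots.
n = s**2 + 1
d = s + 2
x, y, g = sp.gcdex(n, d, s)   # x n + y d = g = gcd(n, d)
print(x, y, g)                # g = 1, so x, y solve the Bezout identity
assert sp.expand(x*n + y*d - g) == 0
```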
Example 2.4 Consider the two polynomial matrices
$$N(s) = \begin{bmatrix} 2s^2+1 & s \\ 2s & 1 \end{bmatrix} \quad , \quad D(s) = [\,2s^2+s \ \ \ 2s^3\,]$$
They are right coprime. As a matter of fact, det[N(s)] = 1, so that N(s) is itself unimodular and the polynomial matrices
$$X(s) := N^{-1}(s) = \begin{bmatrix} 1 & -s \\ -2s & 2s^2+1 \end{bmatrix} \quad , \quad Y(s) := 0$$
are such that X(s)N(s) + Y(s)D(s) = I. Equivalently, Algorithm 2.1 provides a polynomial and unimodular matrix T(s) such that
$$T(s)\begin{bmatrix} D(s) \\ N(s) \end{bmatrix} = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
with R(s) unimodular as well.
Of course, a version of the result provided in Lemma 2.2 can be stated for left coprimeness as well. An important and significant canonical form can be associated with a polynomial matrix, namely the so-called Smith form. This canonical form is formally defined in the following theorem, whose proof also provides a systematic procedure for its computation.
Theorem 2.1 (Smith form) Let N(s) be an n × m polynomial matrix and consider that rank[N(s)] := r ≤ min[n, m]. Then two polynomial and unimodular matrices L(s) and R(s) exist such that N(s) = L(s)S(s)R(s) with
$$S(s) = \begin{bmatrix} \alpha_1(s) & 0 & \cdots & 0 & \cdots & 0 \\ 0 & \alpha_2(s) & \cdots & 0 & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & \alpha_r(s) & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & 0 \end{bmatrix}$$
having n - r zero rows and m - r zero columns, where each polynomial α_i(s) is monic and divides the next one, i.e. α_i(s) | α_{i+1}(s), i = 1, 2, ..., r - 1. Matrix S(s) is said to be the Smith form of N(s).
Proof The proof of the theorem is constructive, since the procedure to be described leads to the determination of the matrices S(s), L(s) and R(s). In the various steps which characterize such a procedure, the matrix N(s) is subject to a number of manipulations resulting from suitable elementary operations on its rows and columns, i.e. pre-multiplications or post-multiplications by unimodular matrices. These operations determine the matrices L(s) and R(s). For simplicity, let n_{ij}(s) be the (i, j) element of the matrix which is presently considered.

1) Through two elementary operations on the rows and the columns of N(s), bring a nonzero and minimum degree polynomial of N(s) into position (1,1).

2) Write the element (2,1) of N(s) as n_{21}(s) = n_{11}(s)γ(s) + β(s), with β(s) such that deg[β(s)] < deg[n_{11}(s)], and subtract the first row multiplied by γ(s) from the second row. In this way the (2,1) element becomes β(s). Now, if β(s) = 0 go to step 3), otherwise interchange the first row with the second one and repeat again this step. This causes a continuous reduction of the degree of the element (2,1) so that, in a finite number of iterations, it results n_{21}(s) = 0.

3) Exactly as done in step 2), bring all the elements of the first column but element (1,1) to zero.

4) Through elementary operations on the columns, bring all the elements of the first row but n_{11}(s) to zero.

5) If step 4) brought some elements of the first column under n_{11}(s) to be nonzero, then go back to step 2). Notice that a finite number of operations through steps 2)-4) leads to a situation in which n_{11}(s) ≠ 0, n_{i1}(s) = 0, i = 2, 3, ..., n, n_{1j}(s) = 0, j = 2, 3, ..., m. If, in one of the columns aside the first one, an element is not divisible by n_{11}(s), add this column to the first one and go back to step 2). At each iteration of the cycle 2)-5) the degree of n_{11}(s) decreases. Hence, in a finite number of cycles one arrives at the situation reported above where n_{11}(s) divides each element of the submatrix N_1(s) constituted by the last m - 1 columns and n - 1 rows. Assume that n_{11}(s) is monic (otherwise perform an obvious elementary operation) and let α_1(s) = n_{11}(s). Now apply to the submatrix N_1(s) (obviously assumed to be nonzero) the entire procedure performed for matrix N(s). The (1,1) element of N_1(s) will now be α_2(s) and, in view of the adopted procedure, α_1(s) will be a divisor of α_2(s). Finally, α_i(s) ≠ 0, i = 1, ..., r, since an elementary operation does not affect the matrix rank. □
Example 2.5 Consider the polynomial matrix
$$N(s) = \begin{bmatrix} s^2 + 1 & s + 1 \\ s^2 + s & s + 1 \end{bmatrix}$$
and the two polynomial and unimodular matrices
$$L(s) = \begin{bmatrix} s^2 + 1 & s + 1 \\ s^2 + s & s + 2 \end{bmatrix} \quad , \quad R(s) = \begin{bmatrix} 1 & (s+1)/2 \\ 0 & -1/2 \end{bmatrix}$$
whose determinants are the nonzero constants 2 and -1/2, respectively. It follows that N(s) = L(s)S(s)R(s), where
$$S(s) := \begin{bmatrix} 1 & 0 \\ 0 & s^2 - 1 \end{bmatrix}$$
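The decomposition of Example 2.5 (as reconstructed above) is readily verified in SymPy: L(s) and R(s) have constant nonzero determinants, and L(s)S(s)R(s) returns N(s).

```python
import sympy as sp

s = sp.symbols('s')

N = sp.Matrix([[s**2 + 1, s + 1],
               [s**2 + s, s + 1]])
L = sp.Matrix([[s**2 + 1, s + 1],
               [s**2 + s, s + 2]])
S = sp.diag(1, s**2 - 1)
R = sp.Matrix([[1, (s + 1)/2],
               [0, sp.Rational(-1, 2)]])

print(sp.expand(L.det()), R.det())           # 2 and -1/2: both unimodular
assert sp.expand(L * S * R - N) == sp.zeros(2, 2)
```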
2.4 Proper rational matrices
This section deals with matrices F(s) whose elements are ratios of polynomials in the same unknown. Therefore, the generic element f_{ij}(s) of F(s) has the form
$$f_{ij}(s) = \frac{a(s)}{b(s)} := \frac{\alpha_\nu s^\nu + \alpha_{\nu-1}s^{\nu-1} + \cdots + \alpha_1 s + \alpha_0}{\beta_\mu s^\mu + \beta_{\mu-1}s^{\mu-1} + \cdots + \beta_1 s + \beta_0}$$
$$\alpha_i \in R,\ i = 0, 1, \cdots, \nu \quad , \quad \beta_i \in R,\ i = 0, 1, \cdots, \mu$$
The relative degree reldeg[f_{ij}(s)] of f_{ij}(s) is defined as the difference between the degrees of the two polynomials which constitute the denominator and numerator of f_{ij}(s), respectively. Specifically, with reference to the above function, and assuming α_ν ≠ 0 and β_μ ≠ 0, it is
$$\mathrm{reldeg}[f_{ij}(s)] := \deg[b(s)] - \deg[a(s)] = \mu - \nu$$
A rational matrix F(s) is said to be proper (resp. strictly proper) if reldeg[f_{ij}(s)] ≥ 0 (resp. reldeg[f_{ij}(s)] > 0) for all i, j. Throughout the section it is implicitly assumed that the rational matrices considered herein are always either proper or strictly proper.

The rank of a rational matrix F(s) is, in analogy with the definition given for a polynomial matrix, the dimension of the largest square submatrix of F(s) with determinant not identically equal to zero.
A rational square matrix is said to be unimodular if it has maximum rank and
its determinant is a rational function with zero relative degree. Hence, a unimodular
rational matrix admits a unimodular rational inverse and vice-versa.
Example 2.6 The matrix
$$F(s) = \begin{bmatrix} (s^2 + 2s + 3)/(s^2 - 1) & 2 \\ s/(s+1) & 1 \end{bmatrix}$$
is unimodular. Actually, det[F(s)] = (-s^2 + 4s + 3)/(s^2 - 1). Moreover,
$$F^{-1}(s) = \begin{bmatrix} (s^2 - 1)/(-s^2 + 4s + 3) & (-2s^2 + 2)/(-s^2 + 4s + 3) \\ (-s^2 + s)/(-s^2 + 4s + 3) & (s^2 + 2s + 3)/(-s^2 + 4s + 3) \end{bmatrix}$$
The concepts of divisor and greatest common divisor, already given for polynomial matrices, are now extended to rational matrices in the definitions below.

Definition 2.4 (Right divisor) Let F(s) be a rational matrix. A rational square matrix R(s) is said to be a right divisor of F(s) if
$$F(s) = \bar{F}(s)R(s)$$
with F̄(s) rational. □

A similar definition could be given for a left divisor as well.

Definition 2.5 (Greatest common right divisor) Consider two rational matrices F(s) and G(s) with the same number of columns. A Greatest Common Right Divisor (GCRD) of (F(s), G(s)) is a square rational matrix R(s) such that:

i) R(s) is a right divisor of (F(s), G(s)), i.e.
$$F(s) = \bar{F}(s)R(s) \quad , \quad G(s) = \bar{G}(s)R(s)$$
with F̄(s) and Ḡ(s) rational;

ii) if R̂(s) is any other right divisor of (F(s), G(s)), then R(s) = W(s)R̂(s) with W(s) rational. □
A similar definition holds for a Greatest Common Left Divisor (GCLD). By exploiting the properties of rational unimodular matrices, it is easy to see that, given two rational matrices F(s) and G(s), there exists more than one GCRD (and GCLD). A way to compute a GCRD (resp. GCLD) of an assigned pair of rational matrices F(s) and G(s) calls for their manipulation via a rational unimodular matrix resulting from a sequence of elementary operations on their rows (resp. columns). The elementary operations on the rows (resp. columns) of a rational matrix F(s) are:

1) Interchange of the i-th row (resp. i-th column) with the j-th row (resp. j-th column)

2) Multiplication of the i-th row (resp. i-th column) by a nonzero rational function with zero relative degree

3) Addition to the j-th row (resp. j-th column) of the i-th row (resp. i-th column) multiplied by a rational function with zero relative degree

Obviously, each of these elementary operations reduces to premultiplying (resp. postmultiplying) the matrix F(s) by a suitable rational unimodular matrix T(s). Moreover, the matrix T(s)F(s) (resp. F(s)T(s)) has the same rank as F(s).
Remark 2.3 Given two scalar rational functions f(s) and g(s) with relative degrees such that reldeg[f(s)] ≤ reldeg[g(s)], it follows that reldeg[g(s)/f(s)] ≥ 0. Hence, considering the rational unimodular matrix
$$T(s) = \begin{bmatrix} 1 & 0 \\ -\dfrac{g(s)}{f(s)} & 1 \end{bmatrix}$$
it follows that
$$T(s)\begin{bmatrix} f(s) \\ g(s) \end{bmatrix} = \begin{bmatrix} f(s) \\ 0 \end{bmatrix}$$
By recursively exploiting this fact, it is easy to convince oneself that, if a rational matrix F(s) does not have more columns than rows, it is always possible to build up a rational unimodular matrix T(s) such that
$$T(s)F(s) = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
with R(s) square and rational. Moreover, the null matrix vanishes when F(s) is square. □
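The elimination step of Remark 2.3 is easily traced in SymPy; the two rational functions below are an arbitrary pair satisfying the relative degree assumption.

```python
import sympy as sp

s = sp.symbols('s')

# reldeg[f] = 1 <= reldeg[g] = 2, so g/f is proper and T(s) below is
# rational unimodular (its determinant is 1).
f = (s + 1)/(s**2 + 2*s)
g = 1/(s**2 + 1)

T = sp.Matrix([[1, 0], [-g/f, 1]])
v = sp.Matrix([f, g])

print(sp.simplify(T * v))    # Matrix([[f], [0]])
print(sp.simplify(T.det()))  # 1
```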
A GCRD of two rational matrices can be computed in the way described in the following algorithm, which relies on the same arguments as in Algorithm 2.1.

Algorithm 2.2 Let F(s) and G(s) be two rational matrices with the same number, say m, of columns and possibly different numbers of rows, say n_f and n_g, respectively.

1) Assume first that m ≤ n_f + n_g, otherwise go to point 2). Let H(s) := [F'(s) G'(s)]' and determine a rational unimodular matrix T(s) such that
$$T(s)H(s) = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
Then R(s) is a GCRD of (F(s), G(s)).

2) If m > n_f + n_g, then
$$R(s) := \begin{bmatrix} F(s) \\ G(s) \\ 0 \end{bmatrix}$$
is a GCRD of (F(s), G(s)). □
Example 2.7 Consider the two rational matrices
$$F(s) = [\,(s+1)/s \ \ \ 1/(s+2)\,] \quad , \quad G(s) = \begin{bmatrix} (s+1)/(s-1) & 1 \\ s/(s^2+1) & s/(s-1) \end{bmatrix}$$
A sequence of elementary operations on the rows of H(s) := [F'(s) G'(s)]', performed as in Remark 2.3, yields
$$T(s)\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = \begin{bmatrix} (s+1)/s & 1/(s+2) \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$$
so that
$$R(s) = \begin{bmatrix} (s+1)/s & 1/(s+2) \\ 0 & 1 \end{bmatrix}$$
is a GCRD of (F(s), G(s)) and F(s) = F̄(s)R(s), G(s) = Ḡ(s)R(s) with
$$\bar{F}(s) = [\,1 \ \ \ 0\,] \quad , \quad \bar{G}(s) = \begin{bmatrix} \dfrac{s}{s-1} & \dfrac{s^2-2}{(s-1)(s+2)} \\ \dfrac{s^2}{(s^2+1)(s+1)} & \dfrac{s(s^4+3s^3+2s^2+4s+2)}{(s-1)(s+1)(s+2)(s^2+1)} \end{bmatrix}$$
Also for rational matrices it is possible to consistently introduce the concept of coprimeness.

Definition 2.6 (Right coprimeness) Two rational matrices F(s) and G(s) with the same number of columns are said to be right coprime if the relations
$$F(s) = \bar{F}(s)T(s) \quad , \quad G(s) = \bar{G}(s)T(s)$$
with F̄(s) and Ḡ(s) rational matrices, are verified only if T(s) is a rational unimodular matrix. □
Example 2.8 The two matrices
$$F(s) = [\,(s-1)/(s+1)^2 \ \ \ 1/(s+1)\,] \quad , \quad G(s) = \begin{bmatrix} 1/(s+1) & (s^2+1)/(s+1)(s^2+3s) \\ s/(s^2-1) & (2s+1)/(s+1)(s+3) \end{bmatrix}$$
are not right coprime since it results
$$F(s) = [\,(s-1)/(s+1) \ \ \ 1\,]R(s) \quad , \quad G(s) = \begin{bmatrix} 1 & (s^2+1)/s(s+3) \\ s/(s-1) & (2s+1)/(s+3) \end{bmatrix}R(s)$$
with R(s) = (s+1)^{-1}I, which is not unimodular. □
An analogous definition holds for left coprimeness. From Definition 2.6 it follows that two rational matrices are right (resp. left) coprime if all their common right (resp. left) divisors are unimodular. In particular, each GCRD (resp. GCLD) of two rational right (resp. left) coprime matrices must be unimodular. Therefore, a necessary condition for matrices F(s) and G(s) to be right (resp. left) coprime is that the number of their columns be not greater than the sum of the numbers of their rows (resp. the number of their rows be not greater than the sum of the numbers of their columns), since, from Algorithm 2.2 point 2), in the opposite case one of their GCRD's would not be unimodular. Moreover, a way to verify whether or not two rational matrices are right (resp. left) coprime consists in the computation, through Algorithm 2.2, of a greatest common divisor and the evaluation of its determinant. As a matter of fact, as stated in the next lemma, if a greatest common divisor is unimodular then all the greatest common divisors are unimodular as well.
Lemma 2.3 Let F(s) and G(s) be two rational matrices with the same number of columns and let R(s) be a GCRD of (F(s), G(s)). Then,

i) T(s)R(s) is a GCRD of (F(s), G(s)) for any rational unimodular matrix T(s);

ii) if R̂(s) is a GCRD of (F(s), G(s)), then there exists a rational unimodular matrix T(s) such that R̂(s) = T(s)R(s).

Proof The proof follows from that of Lemma 2.1 by substituting the term "rational" for the term "polynomial" and the symbols F(s) and G(s) for N(s) and D(s), respectively. □
A further significant property of a GCRD of a pair of rational matrices F(s) and G(s) is stated in the following lemma, whose proof hinges on Algorithm 2.2.

Lemma 2.4 Consider two rational matrices F(s) and G(s) with the same number of columns and let R(s) be a GCRD of (F(s), G(s)). Then, there exist two rational matrices X(s) and Y(s) such that
$$X(s)F(s) + Y(s)G(s) = R(s)$$

Proof Let n_f and n_g be the numbers of rows of F(s) and G(s), respectively, and m the number of their columns. Preliminarily, assume that n_f + n_g ≥ m and let T(s) be a unimodular matrix such that
$$T(s)\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = \begin{bmatrix} T_{11}(s) & T_{12}(s) \\ T_{21}(s) & T_{22}(s) \end{bmatrix}\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = \begin{bmatrix} \bar{R}(s) \\ 0 \end{bmatrix} \qquad (2.10)$$
Based on Algorithm 2.2, the matrix R̄(s) turns out to be a GCRD of (F(s), G(s)). Hence, thanks to Lemma 2.3, there exists a unimodular matrix U(s) such that R(s) = U(s)R̄(s), that is, in view of eq. (2.10),
$$R(s) = U(s)T_{11}(s)F(s) + U(s)T_{12}(s)G(s) =: X(s)F(s) + Y(s)G(s)$$
On the contrary, if m > n_f + n_g, Algorithm 2.2 entails that
$$\bar{R}(s) := \begin{bmatrix} F(s) \\ G(s) \\ 0 \end{bmatrix}$$
is a GCRD of (F(s), G(s)). In view of Lemma 2.3, it is possible to write
$$R(s) = U(s)\bar{R}(s) = U(s)\begin{bmatrix} I \\ 0 \\ 0 \end{bmatrix}F(s) + U(s)\begin{bmatrix} 0 \\ I \\ 0 \end{bmatrix}G(s) =: X(s)F(s) + Y(s)G(s)$$
where U(s) is a suitable rational and unimodular matrix. □
The following result, which parallels the analogous one presented in Lemma 2.2, can now be stated.
Lemma 2.5 Let F(s) and G(s) be two rational matrices with the same number of columns. Then, F(s) and G(s) are right coprime if and only if there exist two rational matrices X(s) and Y(s) such that
$$X(s)F(s) + Y(s)G(s) = I$$

Proof Recall that two matrices are right coprime if each one of their GCRD's is unimodular. Hence, if R(s) is a GCRD of (F(s), G(s)), thanks to Lemma 2.4, it results R(s) = X̄(s)F(s) + Ȳ(s)G(s), with X̄(s) and Ȳ(s) suitable rational matrices. From this last equation it follows
$$I = R^{-1}(s)\bar{X}(s)F(s) + R^{-1}(s)\bar{Y}(s)G(s) := X(s)F(s) + Y(s)G(s)$$
Conversely, let R(s) be a GCRD of (F(s), G(s)) derived according to Algorithm 2.2, point 1) (the number of their columns must indeed be not greater than the sum of the numbers of their rows), so that
$$T(s)\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
where T(s) is a suitable rational and unimodular matrix. Hence
$$\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = T^{-1}(s)\begin{bmatrix} R(s) \\ 0 \end{bmatrix} = \begin{bmatrix} S_{11}(s) & S_{12}(s) \\ S_{21}(s) & S_{22}(s) \end{bmatrix}\begin{bmatrix} R(s) \\ 0 \end{bmatrix}$$
so that
$$I = X(s)F(s) + Y(s)G(s) = [X(s)S_{11}(s) + Y(s)S_{21}(s)]R(s)$$
shows that R(s) is unimodular (its inverse is rational). Therefore, (F(s), G(s)) are right coprime. □
Example 2.9 Consider the two rational matrices
$$F(s) = [\,1/(s^2+s) \ \ \ (2s^2+2s-2)/(s^2+2s)\,] \quad , \quad G(s) = \begin{bmatrix} (s^2-s-1)/(s^2+s) & -s/(s+1) \\ (s^2+2s+2)/(s^2+2s) & s/(s+1) \end{bmatrix}$$
These matrices are right coprime. Actually, there exists a rational unimodular matrix T(s) such that
$$T(s)\begin{bmatrix} F(s) \\ G(s) \end{bmatrix} = \begin{bmatrix} R(s) \\ 0 \end{bmatrix} \quad , \quad R(s) = \begin{bmatrix} s/(s+1) & -1 \\ (s+1)/(s+2) & 1 \end{bmatrix}$$
and R(s) is a rational unimodular matrix, since its determinant
$$\det[R(s)] = \frac{2s^2 + 4s + 1}{(s+1)(s+2)}$$
is a nonzero rational function with zero relative degree. Rational matrices X(s) and Y(s) such that X(s)F(s) + Y(s)G(s) = I are then obtained as indicated in the proof of Lemma 2.5, namely X(s) := R^{-1}(s)X̄(s) and Y(s) := R^{-1}(s)Ȳ(s), where [X̄(s) Ȳ(s)] is the first block row of T(s). □
Also for rational matrices there exists a particularly useful canonical form, the so-called Smith-McMillan form. This form is precisely defined in the following theorem, whose proof also provides a procedure for its computation.
Theorem 2.2 (Smith-McMillan form) Let G(s) be a proper rational matrix with n rows, m columns and rank[G(s)] = r ≤ min[n, m]. Then there exist two polynomial and unimodular matrices L(s) and R(s) such that G(s) = L(s)M(s)R(s), where
$$M(s) = \begin{bmatrix} \dfrac{\varepsilon_1(s)}{\psi_1(s)} & 0 & \cdots & 0 & \cdots & 0 \\ 0 & \dfrac{\varepsilon_2(s)}{\psi_2(s)} & \cdots & 0 & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & \dfrac{\varepsilon_r(s)}{\psi_r(s)} & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & 0 \end{bmatrix}$$
having n - r zero rows and m - r zero columns, and where

• ε_i(s) and ψ_i(s) are monic, i = 1, 2, ..., r

• ε_i(s) and ψ_i(s) are coprime, i = 1, 2, ..., r

• ε_i(s) divides ε_{i+1}(s), i = 1, 2, ..., r - 1

• ψ_{i+1}(s) divides ψ_i(s), i = 1, 2, ..., r - 1

Matrix M(s) is the Smith-McMillan form of G(s).
Proof Let ψ(s) be the least common multiple of all the polynomials at the denominators of the elements of G(s). Therefore, the matrix N(s) := ψ(s)G(s) is polynomial. If S(s) is the Smith form of N(s) (recall Theorem 2.1), it follows that
$$G(s) = \frac{1}{\psi(s)}N(s) = \frac{1}{\psi(s)}L(s)S(s)R(s)$$
Hence,
$$M(s) = \frac{S(s)}{\psi(s)}$$
once all the possible simplifications between the elements of S(s) and the polynomial ψ(s) have been performed. This matrix obviously has the properties claimed in the statement. □
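The proof's procedure can be traced on a small example (chosen here for illustration, not taken from the text): extract ψ(s), bring N(s) = ψ(s)G(s) to its Smith form by elementary operations, and cancel against ψ(s).

```python
import sympy as sp

s = sp.symbols('s')

G = sp.Matrix([[1/s, 1/(s*(s + 1))],
               [0, 1/(s + 1)]])

psi = s*(s + 1)                       # l.c.m. of all denominators
N = (psi * G).applyfunc(sp.cancel)    # [[s + 1, 1], [0, s]]: polynomial

# Elementary operations bringing N to its Smith form S(s) = diag(1, s^2 + s):
E = sp.Matrix([[1, 0], [-s, 1]])             # row 2 := row 2 - s * row 1
F = (sp.Matrix([[0, 1], [1, 0]])             # interchange the two columns
     * sp.Matrix([[1, -(s + 1)], [0, 1]])    # col 2 := col 2 - (s+1) * col 1
     * sp.diag(1, -1))                       # make the (2,2) entry monic
S = sp.expand(E * N * F)
print(S)                              # Matrix([[1, 0], [0, s**2 + s]])

# Smith-McMillan form of G after the cancellations against psi:
M = (S / psi).applyfunc(sp.cancel)
print(M)                              # Matrix([[1/(s**2 + s), 0], [0, 1]])
```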
Remark 2.4 The result stated in Theorem 2.2 allows one to represent a generic rational p × m matrix G(s) with rank[G(s)] := r ≤ min[m, p] in the two forms
$$G(s) = N(s)D^{-1}(s) = \tilde{D}^{-1}(s)\tilde{N}(s)$$
where the polynomial matrices N(s) and D(s) are right coprime, while the polynomial matrices D̃(s) and Ñ(s) are left coprime. Actually, observe that, letting
$$\Psi(s) := \begin{bmatrix} \psi_1(s) & 0 & \cdots & 0 \\ 0 & \psi_2(s) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \psi_r(s) \end{bmatrix} \quad , \quad E(s) := \begin{bmatrix} \varepsilon_1(s) & 0 & \cdots & 0 \\ 0 & \varepsilon_2(s) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \varepsilon_r(s) \end{bmatrix}$$
it follows
$$M(s) = \begin{bmatrix} E(s) & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \Psi^{-1}(s) & 0 \\ 0 & I \end{bmatrix} = \begin{bmatrix} \Psi^{-1}(s) & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} E(s) & 0 \\ 0 & 0 \end{bmatrix}$$
Now, defining
$$N(s) := L(s)\begin{bmatrix} E(s) & 0 \\ 0 & 0 \end{bmatrix} \quad , \quad \tilde{N}(s) := \begin{bmatrix} E(s) & 0 \\ 0 & 0 \end{bmatrix}R(s)$$
$$D(s) := R^{-1}(s)\begin{bmatrix} \Psi(s) & 0 \\ 0 & I \end{bmatrix} \quad , \quad \tilde{D}(s) := \begin{bmatrix} \Psi(s) & 0 \\ 0 & I \end{bmatrix}L^{-1}(s)$$
one can easily check that G(s) = N(s)D^{-1}(s) = D̃^{-1}(s)Ñ(s). In order to verify that N(s) and D(s) are right coprime, one can resort to Lemma 2.2. Actually, considering the two matrices X(s) and Y(s) defined by
$$X(s) := \begin{bmatrix} x_1(s) & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & x_r(s) & 0 \\ 0 & \cdots & 0 & 0 \end{bmatrix}L^{-1}(s) \quad , \quad Y(s) := \begin{bmatrix} y_1(s) & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & y_r(s) & 0 \\ 0 & \cdots & 0 & I \end{bmatrix}R(s)$$
it turns out that X(s)N(s) + Y(s)D(s) = I for a suitable choice of the polynomials x_i(s) and y_i(s), namely the choice satisfying x_i(s)ε_i(s) + y_i(s)ψ_i(s) = 1 (recall that the polynomials ψ_i(s) and ε_i(s) are coprime and keep in mind Theorem A.1). From Lemma 2.2 it follows that the two matrices N(s) and D(s) are right coprime. Analogously, one can verify that Ñ(s) and D̃(s) are left coprime. □
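Continuing the previous sketch, the factorizations of Remark 2.4 can be formed explicitly; in that example r = m = 2, so E(s) = I and the right factorization reads N(s) = L(s), D(s) = R^{-1}(s)Ψ(s).

```python
import sympy as sp

s = sp.symbols('s')

G = sp.Matrix([[1/s, 1/(s*(s + 1))],
               [0, 1/(s + 1)]])

# L(s), R(s) from the Smith-form reduction of psi(s) G(s) (previous sketch).
E = sp.Matrix([[1, 0], [-s, 1]])
F = (sp.Matrix([[0, 1], [1, 0]])
     * sp.Matrix([[1, -(s + 1)], [0, 1]])
     * sp.diag(1, -1))
L, R = E.inv(), F.inv()               # both polynomial and unimodular
Psi = sp.diag(s**2 + s, 1)            # psi_1 = s^2 + s, psi_2 = 1

# Right coprime factorization G = N D^{-1} per Remark 2.4 (here E(s) = I).
N = L
D = R.inv() * Psi
print(sp.expand(D))                   # a polynomial matrix
assert sp.simplify(N * D.inv() - G) == sp.zeros(2, 2)
```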