it can be rearranged as


\dot{x}_3 + 3x_3 + 4x_2 + 2x_1 = u
This will be equivalent to

\dddot{x}_1 + 3\ddot{x}_1 + 4\dot{x}_1 + 2x_1 = u
If we take the Laplace transform, we have

(s^3 + 3s^2 + 4s + 2)\,X_1(s) = U(s)
That gives us a system with the correct set of poles. In matrix form, the state
equations are:





\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} =
\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -2 & -4 & -3\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} +
\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} u
That settles the denominator. How do we arrange the zeros, though? Our output now needs to contain derivatives of x1,

y = \ddot{x}_1 + \dot{x}_1 + 4x_1
But we can use our first two state equations to replace this by

y = 4x_1 + x_2 + x_3
i.e.,

y = \begin{bmatrix}4 & 1 & 1\end{bmatrix} x
We can only get away with this form y = Cx if there are more poles than zeros.
If they are equal in number, we must first perform one stage of “long division” of
the numerator polynomial by the denominator to split off a Du term proportional
to the input. The remainder of the numerator will then be of a lower order than
the denominator and so will fit into the pattern. If there are more zeros than poles,
give up.
Now whether it is a simulation or a filter, the system can be generated in
terms of a few lines of software. If we were meticulous, we could find a lot of
unanswered questions about the stability of the simulation, about the quality of
the approximation and about the choice of step length. For now let us turn our
attention to the computational techniques of convolution.
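Before we do, here is a minimal sketch (an illustration of mine, not the author's listing) of just how few lines such a simulation needs: Euler integration of the example system above, where the step length dt embodies the very question just left unanswered.

#include <stdio.h>

/* Euler simulation of the companion-form example:
   dx1/dt = x2, dx2/dt = x3, dx3/dt = -2x1 - 4x2 - 3x3 + u,
   with output y = 4x1 + x2 + x3. */
int main(void)
{
    double x1 = 0, x2 = 0, x3 = 0;
    double dt = 0.01;             /* step length - accuracy depends on it */
    double u = 1.0;               /* try a unit step input */
    for (int i = 0; i < 1000; i++) {
        double d1 = x2;           /* derivatives from the state equations */
        double d2 = x3;
        double d3 = -2 * x1 - 4 * x2 - 3 * x3 + u;
        x1 += d1 * dt;            /* Euler update */
        x2 += d2 * dt;
        x3 += d3 * dt;
        if (i % 100 == 99)
            printf("t = %4.2f  y = %.4f\n", (i + 1) * dt, 4 * x1 + x2 + x3);
    }
    return 0;
}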

Q 14.7.1
We wish to synthesize the filter s^2/(s^2 + 2s + 1) in software. Set up the state equations
and write a brief segment of program.
Chapter 15
Time, Frequency, and Convolution
Although the coming sections might seem something of a mathematician’s
playground, they are extremely useful for getting an understanding of underlying
principles of functions of time and the way that dynamic systems affect them. In
fact, many of the issues of convolution can be much more easily explored in
terms of discrete time and sampled systems, but first we will take the more tradi-
tional approach of infinite impulses and vanishingly small increments of time.
15.1 Delays and the Unit Impulse
We have already looked into the function of time that has a Laplace transform
which is just 1. This is the "delta function" δ(t), an impulse occurring at t = 0. The unit step has Laplace
transform 1/s, and so we can think of the delta function as its derivative. Before we
go on, we must derive an important property of the Laplace transform, the “shift
theorem.”
If we have a function of time, x(t), and if we pass this signal through a time
delay τ, then the output is the same signal that was input τ seconds earlier, x(t – τ).
The bilateral Laplace transform of this output will be

\int_{-\infty}^{\infty} x(t-\tau)\,e^{-st}\,dt

If we write T for t – τ, then dt will equal dT, and the integral becomes

\int_{-\infty}^{\infty} x(T)\,e^{-s(T+\tau)}\,dT = e^{-s\tau}\int_{-\infty}^{\infty} x(T)\,e^{-sT}\,dT = e^{-s\tau}X(s)
where X(s) is the Laplace transform of x(t). If we delay a signal by time τ, its Laplace
transform is simply multiplied by e^{-sτ}.
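For example (a check added here): delaying the unit step, whose transform is 1/s, by time τ gives

\mathcal{L}\bigl(1(t-\tau)\bigr) = \frac{e^{-s\tau}}{s}

the transform of a step that begins at t = τ.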
Since we are considering the bilateral Laplace transform, integrated over all
time both positive and negative, we could consider time advances as well. Clearly
all signals have to be very small for large negative t, otherwise their contribution to
the integral would be enormous when multiplied by the exponential.
We can immediately start to put the shift theorem to use. It tells us that the
transform of δ(t − τ), the unit impulse shifted to occur at t = τ, is e^{-sτ}. We could of
course have worked this out from first principles.
We can regard the delta function as a “sampler.” When we multiply it by any
function of time, x(t), and integrate over all time, we will just get the contribution
from the product at the time the delta function is non-zero.

\int_{-\infty}^{\infty} x(t)\,\delta(t-\tau)\,dt = x(\tau) \qquad (15.1)
So when we write

\mathcal{L}\bigl(\delta(t-\tau)\bigr) = \int_{-\infty}^{\infty} \delta(t-\tau)\,e^{-st}\,dt
we can think of the answer as sampling e^{-st} at the value t = τ.
Let us briefly indulge in a little philosophy about the “meaning” of functions.
We could think of x(t) as a simple number, the result of substituting some value of
t into a formula for computing x.
We can instead expand our vision of the function to consider the whole graph of
x(t) plotted against time, as in a step response. In control theory we have to take this
broader view, regarding inputs and outputs as time “histories,” not just as simple
values. is is illustrated in Figure 15.1.
Now we can view Equation 15.1 as a sampling process, allowing us to pick one
single value of the function out of the time history. But just let us exchange the
symbols t and τ in the equation and suddenly the perspective changes. The substi-
tution has no absolute mathematical effect, but it expresses our time history x(t) as
the sum of an infinite number of impulses of size x(τ)dτ,

x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau \qquad (15.2)
This result may not look important, but it opens up a whole new way of looking
at the response of a system to an applied input.
15.2 The Convolution Integral
Let us first define the situation. We have a system described by a transfer function
G(s), with input function u(t) and output y(t), as in Figure 15.2.
If we apply a unit impulse to the system at t = 0, the output will be g(t), where
the Laplace transform of g(t) is G(s). This is portrayed in Figure 15.3.
How do we go about deducing the output function for any general u(t)?
Perhaps the most fundamental property of a linear system is the “principle of
superposition.” If we know the output response to a given input function and also
to another function, then if we add the two input functions together and apply
them, the output will be the sum of the two corresponding output responses.
In mathematical terms, if u1(t) produces the response y1(t) and u2(t) produces response y2(t), then an input of u1(t) + u2(t) will give an output y1(t) + y2(t).
Now an input of the impulse δ(t) to G(s) provokes an output g(t). An impulse
applied at time t = τ, u(τ)δ(t – τ) gives the delayed response u(τ)g(t – τ). If we
apply several impulses in succession, the output will be the sum of the individual
responses, as shown in Figure 15.4.
Figure 15.1 Input and output as time plots.
Notice that as the time parameter in the u-bracket increases, the time in the
g-bracket reduces. At some later time t, the effect of the earliest impulse will have
had longest to decay. The latest impulse has an effect that is still fresh.
Now we see the significance of Equation 15.2. It allows us to express the input
signal u(t) as an infinite train of impulses u(τ)δτ δ(t – τ). So to calculate the output,
Figure 15.2 Time-functions and the system.
Figure 15.3 For a unit impulse input, G(s) gives an output g(t).
Figure 15.4 Superposition of impulse responses.
we add all the responses to these impulses. As we let δτ tend to zero, this becomes
the integral,

y(t) = \int_{-\infty}^{\infty} u(\tau)\,g(t-\tau)\,d\tau \qquad (15.3)
This is the convolution integral.
We do not really need to integrate over all infinite time. If the input does not
start until t = 0 the lower limit can be zero. If the system is “causal,” meaning that
it cannot start to respond to an input before the input happens, then the upper
limit can be t.
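As a quick worked example (added here, not in the original): the one-second lag has impulse response g(t) = e^{-t}, so for a unit step applied at t = 0 the limits become 0 and t, and

y(t) = \int_0^t 1 \cdot e^{-(t-\tau)}\,d\tau = 1 - e^{-t}

which is the familiar step response.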
15.3 Finite Impulse Response (FIR) Filters
We see that instead of simulating a system to generate a filter’s response, we could set
up an impulse response time function and produce the same result by convolution.
With infinite integrals lurking around the corner, this might not seem such a wise
way to proceed!
In looking at digital simulation, we have already cut corners by taking a finite
step-length and accepting the resulting approximation. A digital filter must similarly
accept limitations in its performance in exchange for simplification. Instead of an
infinite train of impulses, u(t) is now viewed as a train of samples at finite intervals.
The infinitesimal u(τ)dτ has become u(nT)T. Instead of impulses, we have numbers to input into a computational process.
The impulse response function g(t) is similarly broken down into a train of
sample values, using the same sampling interval. Now the infinitesimal operations
of integration are coarsened into the summation

y(nT) = T\sum_{r=-\infty}^{\infty} u(rT)\,g\bigl((n-r)T\bigr) \qquad (15.4)
The infinite limits still do not look very attractive. For a causal system, however,
we need go no higher than r = n, while if the first signal was applied at r = 0 then
this can be the lower limit.
Summing from r = 0 to n is a definite improvement, but it means that we have
to sum an increasing number of terms as time advances. Can we do any better?
Most filters will have a response which eventually decays after the initial impulse
is applied. e one-second lag 1/(s + 1) has an initial response of unity, gives an output
of around 0.37 after one second, but after 10 seconds the output has decayed to less
than 0.00005. ere is a point where g(t) can safely be ignored, where indeed it is
less than the resolution of the computation process. Instead of regarding the impulse
response as a function of infinite duration, we can cut it short to become a Finite Impulse Response. Why the capital letters? Because this is the basis of the FIR filter.
We can rearrange Equation 15.4 by writing n – r instead of r and vice versa.
We get


y(nT) = T\sum_{r=-\infty}^{\infty} u\bigl((n-r)T\bigr)\,g(rT)
Now if we can say that g(rT ) is zero for all r < 0, and also for all r > N, the sum-
mation limits become

y(nT) = T\sum_{r=0}^{N} u\bigl((n-r)T\bigr)\,g(rT)
The output now depends on the input u at the time in question, and on its past N values. These values are now multiplied by appropriate fixed coefficients and summed to form the output, and are moved along one place to admit the next input sample value. The method lends itself ideally to a hardware application with a "bucket-brigade" delay line, as shown in Figure 15.5.
The following software suggestion can be made much more efficient in time and storage; it concentrates on showing the method. Assume that the impulse response has already been set up in the array g[i], where i ranges from 0 to N. We provide another array u[i] of the same length to hold past values.

//Move up the input samples to make room for a new one
for(i=N;i>0;i--){
  u[i]=u[i-1];
}
//Take in a new sample
u[0]=GetNewInput();
Figure 15.5 A FIR filter can be constructed from a "bucket-brigade" delay line.
//Now compute the output
y=0;
for(i=0;i<=N;i++){
  y=y+u[i]*g[i];
}
//y now holds the output value
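The step-length factor T of Equation 15.4 does not appear in the loop; it is assumed to have been absorbed into the stored values of g[i].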
This still seems more trouble than the simulation method; what are the advantages? Firstly, there is no question of the process becoming unstable. Extremely sharp filters can be made for frequency selection or rejection which would have poles very close to the stability limit. Since the impulse response is defined exactly, stability is assured.
Next, the rules of causality can be bent a little. Of course the output cannot precede the input, but by considering the output signal to be delayed, the impulse response can have a "leading tail." Take the non-causal smoothing filter discussed earlier, for example. This has a bell-shaped impulse response, symmetrical about t = 0, as shown in Figure 15.6. By delaying this function, all the important terms can be contained in a positive range of t. There are many applications, such as offline sound and picture filtering, where the added delay is no embarrassment.
15.4 Correlation
This is a good place to give a mention to that close relative of convolution, correlation.
You will have noticed that convolution combines two functions of time by running
the time parameter forward in the one and backward in the other. In correlation
the parameters run in the same direction.
Figure 15.6 By delaying a non-causal response, it can be made causal. (The non-causal response is impossible in real time; a time shift makes a causal approximation.)
The use of correlation is to compare two time functions and find how one is influenced by the other. The classic example of correlation is found in the satellite
global positioning system (GPS). e satellite transmits a pseudo random binary
sequence (PRBS) which is picked up by the receiver. Here it is correlated against
the various signals that are known to have been transmitted, so that the receiver is
able to determine both whether the signal is present and by how much it has been
delayed on its journey.
So how does it do it?
The correlation integral is


\Phi_{xy}(\tau) = \int x(t)\,y(t+\tau)\,dt \qquad (15.5)
giving a function of the time-shift between the two functions. The coarse acquisition signal used in GPS for public, rather than military, purposes is a PRBS sequence of length 1023. It can be regarded as a waveform of levels of value +1 or −1. If we multiply the sequence by itself and integrate over one cycle, the answer is obviously 1023. What makes the sequence "pseudo random" is that if we multiply its values by the sequence shifted by any number of pulses, the integral gives just −1.
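To make this concrete, here is a sketch of mine (a single maximal-length generator, not the true GPS "Gold codes," which combine two such registers): a 10-stage shift register with feedback taps from a primitive trinomial produces a 1023-pulse sequence, and the two-valued autocorrelation can be checked directly.

#include <stdio.h>

#define N 1023                     /* sequence length, 2^10 - 1 */

int main(void)
{
    int reg[10], seq[N];
    int i, n, shift;
    for (i = 0; i < 10; i++) reg[i] = 1;        /* any non-zero seed */
    for (n = 0; n < N; n++) {
        seq[n] = reg[9] ? 1 : -1;               /* output mapped to +/-1 */
        int fb = reg[9] ^ reg[2];               /* taps from x^10 + x^3 + 1 */
        for (i = 9; i > 0; i--) reg[i] = reg[i - 1];
        reg[0] = fb;
    }
    /* cyclic autocorrelation: expect 1023 at zero shift, -1 at any other */
    for (shift = 0; shift < 4; shift++) {
        int sum = 0;
        for (n = 0; n < N; n++) sum += seq[n] * seq[(n + shift) % N];
        printf("shift %d: sum = %d\n", shift, sum);
    }
    return 0;
}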
Figure 15.7 shows all 1023 pulses of such a sequence as lines of either black or
white. e autocorrelation function, the correlation of the sequence with itself, is
as shown in Figure 15.8, but here the horizontal scale has been expanded to show
Figure 15.7 Illustration of a pseudo-random sequence.
Figure 15.8 Autocorrelation function of a PRBS. The value is 1023 at t = 0 and −1 elsewhere.
just a few pulse-widths of shift. In fact this autocorrelation function repeats every
1023 shifts; it is cyclic.
When we correlate the transmitted signal against the signal at the receiver, we
will have a similar result to Figure 15.8, but shifted by the time it takes for the
transmitted signal to reach the receiver. In that way the distance from the satellite
can be estimated. (e receiver can reconstruct the transmitted signal because it
“knows” the time and the formula that generates the sequence.)
There are in fact 32 different "songs" of this length that the satellites can transmit, and the cross-correlation of any pair will be zero. Thus, the receiver can identify
distances to any of the satellites that are in view. From four or more such distances,
using an almanac and ephemeris to calculate the exact positions of the satellites, the
receiver can solve for x, y, z, and t, refining its own clock’s value to a nanosecond.
So what is a treatise on GPS doing in a control textbook?
There are some valuable principles to be seen. In this case, the "transfer function" of the path from satellite to receiver is a time-delay. The cross-correlation
enables this to be measured accurately. What happens when we calculate the cross-
correlation between the input and the output of any control system? We have

\Phi_{uy}(\tau) = \int_t u(t)\,y(t+\tau)\,dt
Equation 15.3 tells us that

y(t) = \int_{\tau=-\infty}^{\infty} u(\tau)\,g(t-\tau)\,d\tau
i.e.,

y(t) = \int_{T=-\infty}^{\infty} u(T)\,g(t-T)\,dT

y(t+\tau) = \int_{T=-\infty}^{\infty} u(T)\,g(t+\tau-T)\,dT
So

\Phi_{uy}(\tau) = \int_t u(t)\int_T u(T)\,g(t+\tau-T)\,dT\,dt

\Phi_{uy}(\tau) = \int_t u(t)\int_T u(t+\tau-T)\,g(T)\,dT\,dt
Now if we reverse the order of integration, we have

\Phi_{uy}(\tau) = \int_T \left(\int_t u(t)\,u(t+\tau-T)\,dt\right) g(T)\,dT

\Phi_{uy}(\tau) = \int_T \Phi_{uu}(\tau-T)\,g(T)\,dT \qquad (15.6)
The cross-correlation function is the function of time that would be output from the system if the input's autocorrelation function were applied instead of the input. This is illustrated in Figure 15.9.
Provided we have an input function that has enough bandwidth, so that its
autocorrelation function is “sharp enough,” we can deduce the transfer function by
cross-correlation. is can enable adaptive controllers to adapt.
More to the point, we can add a PRBS to any linear input and so “tickle”
the system rather than hitting it with a single impulse. In the satellite system, the
Figure 15.9 Cross-correlation and the transfer function.
PRBS is clocked at one megahertz and repeats after a millisecond. But it is easy to construct longer sequences. One such sequence only repeats after a month! So in a multi-input system, orthogonal sequences can be applied to the various inputs to identify the individual transfer functions as impulse responses.
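As a sketch of the idea (my own illustration, with an ordinary random ±1 signal standing in for the PRBS and a simulated one-second lag standing in for the unknown system), cross-correlating input against output recovers the impulse response:

#include <stdio.h>
#include <stdlib.h>

#define LEN 100000                 /* length of the input record */
#define M   20                     /* impulse-response points to estimate */

/* The "unknown" system: a one-second lag stepped at T = 0.1,
   so x(n) = 0.9 x(n-1) + 0.1 u(n), giving h(k) = 0.1 * 0.9^k */
static double plant(double u)
{
    static double x = 0;
    x += 0.1 * (u - x);
    return x;
}

int main(void)
{
    static double u[LEN], y[LEN];
    int n, k;
    double h = 0.1;                /* true h(k), for comparison */
    for (n = 0; n < LEN; n++) {
        u[n] = (rand() > RAND_MAX / 2) ? 1.0 : -1.0;   /* the "tickle" */
        y[n] = plant(u[n]);
    }
    /* normalized Phi_uy(k); for a white +/-1 input this estimates h(k) */
    for (k = 0; k < M; k++) {
        double sum = 0;
        for (n = 0; n < LEN - k; n++) sum += u[n] * y[n + k];
        printf("h(%2d): estimated %8.5f, true %8.5f\n", k, sum / (LEN - k), h);
        h *= 0.9;
    }
    return 0;
}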
15.5 Conclusion
We have seen that the state description of a system is bound tightly to its representation as an array of transfer functions. We have requirements which appear to conflict. On the one hand, we seek a formalism which will allow as much of the work as possible to be undertaken by computer. On the other, we wish to retain
work as possible to be undertaken by computer. On the other, we wish to retain
an insight into the nature of the system and its problems, so that we can use intel-
ligence in devising a solution.
Do we learn more from the time domain, thinking in terms of matrix equations
and step and impulse responses, or does the transfer function tell us more, with its
possibilities of frequency response and root locus?
In the next chapter, we will start to tear the state equations apart to see what the
system is made of. Maybe we can get the best of both worlds.
Chapter 16
More about Time and State Equations
16.1 Introduction
It is time to have a thorough look at the nature of linear systems, the ways in
which their state equations can be transformed and the formal analysis of dynamic
compensators.
We have now seen the same system described by arrays of transfer functions,
differential equations and by first-order matrix state equations. We have seen that
however grandiose the system may be, the secret of its behavior is unlocked by find-
ing the roots of a single polynomial characteristic equation. The complicated time
solution then usually crumbles into an assortment of exponential functions of time.
In this chapter we are going to hit the matrix state equations with the power
of algebra, to open up the can of worms and simplify the structure inside it. We
will see that a transformation of variables will let us unravel the system into an
assortment of simple subsystems, whose only interaction occurs at the input or the
output.

16.2 Juggling the Matrices
We start with the now familiar matrix state equations


\dot{x} = Ax + Bu
y = Cx \qquad (16.1)
When we consider a transformation to a new set of state variables, w, where w
and x are related by the transformation

w = Tx
with inverse transformation

x = T^{-1}w
then we find that


\dot{w} = TAT^{-1}w + TBu
y = CT^{-1}w \qquad (16.2)
The new equations still represent the same system, just expressed in a different
set of coordinates, so the “essential” properties of A must be unchanged by any
valid transformation. What are they?
How can we make the transformation tell us more about the nature of A? In
Section 5.5 we had a foretaste of transformations and saw that in that particular
example the new matrix could be made to be diagonal. Is this generally the case?
16.3 Eigenvectors and Eigenvalues Revisited
In Chapter 8 we previewed the nature of eigenvalues and eigenvectors. Perhaps we
should consider some examples to clarify them. Let us take as an example the matrix:

A = \begin{bmatrix}2 & 2\\ 3 & 1\end{bmatrix}
If we post-multiply A by a column vector, we get another column vector, for
example,

\begin{bmatrix}2 & 2\\ 3 & 1\end{bmatrix}\begin{bmatrix}1\\ 0\end{bmatrix} = \begin{bmatrix}2\\ 3\end{bmatrix}
The direction of the new vector is different and its size is changed. Multiplying
by A takes (0, 1)′ to (2, 1)′, takes (1, −1)′ to (0, 2)′, and takes (1, 1)′ to (4, 4)′.
Wait a minute, though. Is there not something special about this last
example? e vector (4, 4)′ is in exactly the same direction as the vector (1, 1)′, and
is multiplied in size by 4. No matter what set of coordinates we choose, the prop-
erty that there is a vector on which A just has the effect of multiplying it by 4 will
be unchanged. Such a vector is called an eigenvector and the number by which it is
multiplied is an eigenvalue.
Is this the only vector on which A has this sort of effect? We can easily find out.

We are looking for a vector x such that Ax = λx, where λ is a mere constant. Now

Ax = \lambda x
i.e.,

Ax = \lambda I x,
where I is the unit matrix, so

(A - \lambda I)x = 0. \qquad (16.3)
This 0 is a vector of zeros. We have a respectable vector x being multiplied by
a matrix to give a null vector. Multiplying a matrix by a vector can be regarded as
mixing the column vectors that make up the matrix, to arrive at another column
vector. Here the components of x mix the columns of A to get a vector of zeros.
Now one way of simplifying the calculation of a determinant is to add columns
together in a mixture that will make more of the coefficients zero. But here we arrive
at a column that is all zero, so the determinant must be zero.

\det(A - \lambda I) = 0. \qquad (16.4)
Write s in place of λ and this should bring back memories. How that character-
istic polynomial gets around!
In the example above, the determinant of A − λI is

\begin{vmatrix}2-\lambda & 2\\ 3 & 1-\lambda\end{vmatrix}
giving

\lambda^2 - 3\lambda - 4 = 0
This factorizes into

(\lambda + 1)(\lambda - 4) = 0
The root λ = 4 comes as no surprise, and we know that it corresponds to the eigenvector (1, 1)′.
Let us find the vector that corresponds to the other value, −1. Substitute this
value for λ in Equation 16.3 and we have

\begin{bmatrix}3 & 2\\ 3 & 2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix}0\\ 0\end{bmatrix}
The two simultaneous equations have degenerated into one (if they had not, we could not solve for a single value of x1/x2). The vector (2, −3)′ is obviously a solution, as is any multiple of it. Multiply it by A, and we are reassured to see that the result is (−2, 3)′.
In general, if A is n by n we will have n roots of the characteristic equation,
and we should be able to find n eigenvectors to go with them. If the roots, the
eigenvalues, are all different, it can be shown that the eigenvectors are all linearly
independent, meaning that however we combine them we cannot get a zero vector.
We can pack these columns together to make a respectable transformation matrix,
T^{-1}. When this is pre-multiplied by A, the result is a pack of columns that are the original eigenvectors, each now multiplied by its eigenvalue. If we pre-multiply this by the inverse transformation, T, we arrive at the elegant result of a diagonal matrix, each element of which is one of the eigenvalues.
To sum up:
First solve det(A − λI) = 0 to find the eigenvalues.
Substitute the n eigenvalues in turn into (A − λI)x = 0 to find the n eigenvectors in the form of column vectors.
Pack these column vectors together, in order of decreasing eigenvalue for neatness, to make a matrix T^{-1}. Find its inverse, T.
We now see that
TAT^{-1} = \begin{bmatrix}\lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{bmatrix} \qquad (16.5)
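As a check (added here) on the example above, packing the eigenvectors (1, 1)′ and (2, −3)′ together gives

T^{-1} = \begin{bmatrix}1 & 2\\ 1 & -3\end{bmatrix}, \qquad T = \frac{1}{5}\begin{bmatrix}3 & 2\\ 1 & -1\end{bmatrix}, \qquad TAT^{-1} = \begin{bmatrix}4 & 0\\ 0 & -1\end{bmatrix}

with the eigenvalues 4 and −1 on the diagonal, as promised.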
Q 16.3.1
Perform the above operations on the matrix

\begin{bmatrix}0 & 1\\ -6 & -5\end{bmatrix}
then look back at Section 5.5.
16.4 Splitting a System into Independent Subsystems
As soon as we have a diagonal system matrix, the system falls apart into a set of
unconnected subsystems. Each equation for the derivative of one of the w’s contains
only that same w on the right-hand side,
\dot{w}_n = \lambda_n w_n + (TBu)_n \qquad (16.6)
In the “companion form” described in Section 14.7, we saw that a train of
off-axis 1's expressed each state variable as the derivative of the one before, linking them all together. There is no such linking in the diagonal case. Each variable stands on its own. The only coupling between variables is their possible sharing of
the inputs, and their mixture to form the outputs, as shown in Figure 16.1.
Each component of w represents one exponential function of time in the
analytic solution of the differential equations. The equations are equivalent to taking
each transfer function expression, factorizing its denominator and splitting it into
partial fractions.
Let us now suppose that we have made the transformation and we have a diago-
nal A, with new state equations expressed in variables that we have named x.
When the system has a single input and a single output we can be even more
specific about the shape of the system. Such a system might for example be a filter,
represented by one single transfer function. Now each of the subsystems is driven
from the single input, and the outputs are mixed together to form the single output.
The matrix B is an n-element column vector, while C is an n-element row vector.
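In symbols (a note added here, assuming distinct eigenvalues): writing b′ = TB and c′ = CT^{-1} for the transformed input and output vectors, the transfer function falls apart into one partial fraction per subsystem,

G(s) = C(sI - A)^{-1}B = \sum_{i=1}^{n} \frac{c'_i\,b'_i}{s - \lambda_i}

and only the products c′ib′i matter, which is the freedom exploited below.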
Figure 16.1 The TB matrix provides a mixture of inputs to the independent state integrators.
If we double the values of all the B elements and halve all those of C, the overall
effect will be exactly the same. In fact the gain associated with each term will be the
product of a B coefficient and the corresponding C coefficient. Only this product
matters, so we can choose each of the B values to be unity and let C sort out the result,
as shown in Figure 16.2. Equally we could let the C values be unity, and rely on B to
set the product as shown in Figure 16.3.
In matrix terms, the transformation matrix T^{-1} is obtained by stacking together the column vectors of the eigenvectors. Each of these vectors could be multiplied by a constant, and it would still be an eigenvector.
Figure 16.2 A SISO system with unity B matrix coefficients.
Figure 16.3 A SISO system with unity C matrix coefficients.
The transformation matrix is therefore far from unique, and can be fiddled to make B or C (in the single-input–single-output case) have elements that are all unity.
Let us bring the discussion back down to earth by looking at an example or two.
Consider the second-order lag, or low-pass filter, defined by the transfer function:

Y(s) = \frac{1}{(s+a)(s+b)}\,U(s)
The obvious way to split this into first-order subsystems is to regard it as two
cascaded lags, as in Figure 16.4.

Y(s) = \frac{1}{(s+a)}\cdot\frac{1}{(s+b)}\,U(s)
Applying the two lags one after the other suggests a state-variable
representation
\dot{x}_1 = -bx_1 + u
\dot{x}_2 = -ax_2 + x_1
y = x_2
i.e.,
\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} =
\begin{bmatrix}-b & 0\\ 1 & -a\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\end{bmatrix} +
\begin{bmatrix}1\\ 0\end{bmatrix} u, \qquad
y = \begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}
Transforming the A-matrix to diagonal form is equivalent to breaking the
transfer function into partial fractions:
Y(s) = \frac{1}{(b-a)}\left(\frac{1}{s+a} - \frac{1}{s+b}\right)U(s)
Figure 16.4 Two cascaded lags.
so
Y(s) = \frac{1}{(b-a)}\bigl(X_1(s) - X_2(s)\bigr)
where
X_1(s) = \frac{1}{s+a}\,U(s), \qquad X_2(s) = \frac{1}{s+b}\,U(s)
So we have represented this second-order lag as the difference between two first-order lags, as shown in Figure 16.5. The second-order lag response to a step input function has zero initial derivative. The two first-order lags are mixed in such a way that their initial slopes are canceled.
Figure 16.5 A second-order system as the difference of first-order subsystems.
Q 16.4.1
Write out the state equations of the above argument in full, for the case where
a = 2, b = 3.
Q 16.4.2
Write out the equations again for the case a = b = 2. What goes wrong?
16.5 Repeated Roots
There is always a snag. If all the roots are distinct, then the A-matrix can be made
diagonal using a matrix found from the eigenvectors and all is well. A repeated root
throws a spanner in the works in the simplest of examples.
In Section 16.4 we saw that a second-order step response could usually be
derived from the difference between two first-order responses. But if the time constants coincide, the difference becomes zero, and hence useless.
The analytic solution contains not just a term e^{-at} but another term te^{-at}. The exponential is multiplied by time.
We simply have to recognize that the two cascaded lags can no longer be sepa-
rated, but must be simulated in that same form. If there are three equal roots, then
there may have to be three equal lags in cascade. Instead of achieving a diagonal
form, we may only be able to reduce the A-matrix to a form such as
A = \begin{bmatrix}
a & 1 & 0 & 0 & 0 & 0\\
0 & a & 1 & 0 & 0 & 0\\
0 & 0 & a & 0 & 0 & 0\\
0 & 0 & 0 & b & 1 & 0\\
0 & 0 & 0 & 0 & b & 0\\
0 & 0 & 0 & 0 & 0 & c
\end{bmatrix} \qquad (16.7)
This is the Jordan Canonical Form, illustrated in Figure 16.6.
Repeated roots do not always mean that a diagonal form is impossible. Two
completely separate single lags, each with their own input and output, can be combined into a single set of system equations. The A-matrix is then of course diagonal, since there is no reaction between the two subsystems. If the system is single-
input–single-output, however, repeated roots always mean trouble.
In another case, the diagonal form is possible but not the most desirable.
Suppose that some of the roots of the characteristic equation are complex. It is not
easy to apply a feedback gain of 2 + 3j around an integrator!
The second-order equation with roots −k ± jn,

\ddot{y} + 2k\dot{y} + (k^2 + n^2)\,y = u
is more neatly represented for simulation by
\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} =
\begin{bmatrix}-k & n\\ -n & -k\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\end{bmatrix} +
\begin{bmatrix}0\\ 1\end{bmatrix} u \qquad (16.8)
than by a set of matrices with complex coefficients. If we accept quadratic terms as
well as linear factors, any polynomial with real coefficients can be factorized with-
out having to resort to complex numbers.
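It is easy to verify (a check added here) that this real matrix has the required eigenvalues:

\det\begin{bmatrix}-k-\lambda & n\\ -n & -k-\lambda\end{bmatrix} = (\lambda + k)^2 + n^2 = 0, \qquad \text{so}\quad \lambda = -k \pm jn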
Q 16.5.1
Derive state equations for the system \ddot{y} + y = u. Find the Jordan Canonical Form,
and also find a form in which a real simulation is possible, similar to the example of
expression 16.8. Sketch the simulation.

Figure 16.6 Simulation structure of a system with repeated roots.