Part II
Measuring instrumentation
Vittorio Ferrari
Copyright © 2003 Taylor & Francis Group LLC
13 Basic concepts of measurement and measuring instruments
13.1 Introduction
The importance of making good measurements is readily understood when
considering that the effectiveness of any analysis is strongly determined by
the quality of the input data, which are typically obtained by measurement.
Since analysis and processing methods cannot add information to the
measurement data but can only help in extracting it, no final result can be
any better than such data originally are.
With the intention of highlighting correct measurement practice, this
chapter presents the fundamental concepts involved with measurement and
measuring instruments. The first two sections on the measurement process
and uncertainty form a general introduction. Then three sections follow which
describe the functional model of measuring instruments and their static and
dynamic behaviour. Afterwards, a comprehensive treatment of the loading
effect caused by the measuring instrument on the measured system is
presented, which makes use of the two-port models and of the
electromechanical analogy. Worked-out examples are included. Finally, a
survey of the terminology used for specifying the characteristics of measuring
instruments is given.
This chapter is intended to be propaedeutic to, though not essential for, the next
two chapters; the reader more interested in the technical aspects can skip to
Chapters 14 and 15 regarding transducers and the electronic instrumentation.
13.2 The measurement process and the measuring instrument
Measurement is the experimental procedure by which we can obtain
quantitative knowledge on a component, system or process in order to
describe, analyse and/or exert control over it. This requires that one or more
quantities or properties which are descriptive of the measurement object,
called the measurands, are identified. The measurement process then
basically consists of assigning numerical values to such quantities or, more
formally stated, of yielding measures of the measurands. This should be
accomplished in both an empirical and objective way, i.e. based on
experimental procedures and following rules which are independent of the
observer. As a relevant consequence of the numerical nature of the measure
of a quantity, measures can be used to express facts and relationships involving
quantities through the formal language of mathematics.
The practical execution of measurements requires the availability and
proper use of measuring instruments. A measuring instrument has the ultimate
and essential role of extending the capability of the human senses by
performing a comparison of the measurand against a reference and providing
the result expressed in a suitable measuring unit. The output of a measuring
instrument represents the measurement signal, which in today’s instruments
is most frequently presented in electrical form.
The process of comparison against a reference may be direct or, more
often, indirect. In the former case, the instrument provides the capability of
comparing the unknown measurand against reference samples of variable
magnitude and detecting the occurrence of the equality condition (e.g. the
arm-scale with sample masses, or the graduated length ruler). In the latter
case, the instrument’s functioning is based on one or more physical laws and
phenomena embodied in its construction, which produce an observable effect
that is related to the measurand in a quantitatively known fashion (e.g. the
spring dynamometer).
The indirect comparison method is often the more convenient and
practicable one; think, for instance, of the measurement of an intensive
quantity such as temperature. Motion and vibration measuring instruments
most frequently rely on an indirect measuring method.
Regardless of whether the measuring method is direct or indirect, it is
fundamental for achieving objective and universally valid measures that the
adopted references are in an accurately known relationship with some
conventionally agreed standard. Given a measuring instrument and a
standard, the process of determination and maintenance of this relationship
is called calibration. A calibrated and properly used instrument ensures that
the measures are traceable to the adopted standard, and they are therefore
assumed to be comparable to the measures obtained by different instruments
and operators, provided that calibration and proper use is in turn guaranteed.
If we refer back to the definition of measurement, it can be recognized
that measurement is intrinsically connected with the concept of information.
In fact, measuring instruments can be thought of as information-acquiring
machines which are required to provide and maintain a prescribed functional
relationship between the measurand and their output [1]. However,
measurement should not be considered merely as the collection of information
from the real world, but rather as the extraction of information which requires
understanding, skill and attention from the experimenter. In particular, it
should be noted that even the most powerful signal postprocessing techniques
and data treatment methods can only help in retrieving the information
embedded in the raw measurement data, but have no capability of increasing
the information content. As such, they should not be misleadingly regarded
as substitutes for good measurements, nor as a fix for poor measurement data.
Therefore, carrying out good measurements is of primary importance and
should be considered as an unavoidable need and prerequisite to any further
analysis. A fundamental limit to the achievable knowledge on the
measurement object is posed at this stage, and there is no way to overcome
such a limit in subsequent steps other than by performing better
measurements.
13.3 Measurement errors and uncertainty
After realizing the importance of making good measurements as a necessary
first step, we may want to be able to determine when measurements are
good or, at least, satisfactory for our needs. In other words, we become concerned
with the problem of qualifying measurement results on the basis of some
quantifiable parameter which characterizes them and allows us to assess
their reliability. We are essentially interested in knowing how well the result
of the measurement represents the value of the quantity being measured.
Traditionally, this issue has been addressed by making reference to the concept
of measuring error, and error analysis has long been considered an essential
part of measurement science.
The concept of error is based on the reasonable assumption that a
measurement result only approximates the value of the measurand but is
unavoidably different from it, i.e. it is in error, due to imperfections inherent
to the operation in nonideal conditions. Blunders coming from gross defects
or malfunctioning in the instrumentation, or improper actions by the operator
are not considered as measuring errors and of course should be carefully
avoided.
In general, errors are viewed to have two components, namely, a random
and a systematic component. Random errors are considered to arise from
unpredictable variations of influence effects and factors which affect the
measurement process, producing fluctuations in the results of repeated
observation of the measurand. These fluctuations cancel the ideal one-to-one
relationship between the measurand and its measured value. Random
errors cannot be compensated for but only treated statistically. By increasing
the number of repetitions, the average effect of random errors approaches
zero or, more formally stated, their expectation or expected value is zero.
Systematic errors are considered to arise from effects which influence the
measurement results in a systematic way, i.e. always in the same direction
and amount. They can originate from known imperfections in the
instrumentation or in the procedure, as well as from unknown or overlooked
effects. The latter sources in principle always exist due to the incompleteness
of our knowledge and can, at best, be reduced to a negligible level.
Conversely, the former sources, as they are known, can be compensated for
by applying a proper correction factor to the measurement results. After the
correction, the expected value of systematic errors is zero.
Although followed for a long time, the approach based on the concept of
measurement error has an intrinsic inconsistency due to the impossibility of
determining the value of a quantity with absolute certainty. In fact, the true
value of a quantity is unknown and ultimately unknowable, since it could
only be determined by measurement which, in turn, is recognizably imperfect
and can only provide approximate results. As a consequence, the measurement
error is unknowable as well, since it represents the deviation of the
measurement result from the unknowable true value. As such, the concept
of error cannot provide a quantitative and consistent means to qualify
measurement results on a theoretically sound basis.
As a solution to the problem, a different approach has been developed in
the last few decades and is currently adopted and recommended by the
international metrological and standardization institutions [2]. It is based
on recognizing that when performing a measurement we obtain only an
estimate of the value of the measurand and we are uncertain on its correctness
to some extent. This degree of uncertainty is, however, quantifiable, though
we do not know precisely how much we are in error since we do not know
the true value. The term measurement uncertainty can therefore be introduced
and defined as the parameter that characterizes the dispersion of the values
that could be attributed to the measurand. In other words, the uncertainty
is an estimate of the range of values within which the true value of a
measurand lies according to our presently available knowledge. Therefore
uncertainty is a measure of the ‘possible error’ in the estimated value of a
measurand as obtained by measurement.
It is worth noting that the result of a measurement can unknowably be
very close to the value of the measurand, and hence have a small error,
while nonetheless having a large uncertainty. On the other hand, even when
the uncertainty is small there is no absolute guarantee that the error is small,
since some systematic effect may have been overlooked because it is unknown
or not recognized and, as such, not corrected for in the measurement result.
From this standpoint, a different meaning can be attributed to the term
true value in which the adjective ‘true’ loses its connotation of uniqueness
and becomes formally unnecessary. The true value, or simply the value, of a
measurand can be conventionally considered as the value obtained when the
measurement with lowest possible uncertainty according to the presently
available knowledge is performed, i.e. when an exemplar measuring method
which minimizes and corrects for every recognized influencing effect is used.
In practical cases, the idea of an exemplar method should be commensurate
with the accuracy needed for the particular application; for instance, when
we measure the length of a table with a ruler we consciously disregard the
influence of temperature on both the table and the ruler, since we consider
this effect to be negligible for our present measuring needs. We simply
acknowledge that our result has an uncertainty which is higher than the best
obtainable, but is suitable for our purposes.
However, we may be in the situation of negligible uncertainty of the
instrument (the ruler in this case) compared to that caused by temperature on
the measurement object (the table), for which we are therefore able to detect
and measure the thermal expansion. The converse situation is that of negligible
uncertainty of the measurement object compared to that of the measuring
instrument and procedure. This is the case encountered when testing an
instrument by using a reference or standard of low enough uncertainty to be
ignored. Thus the value of the reference or standard can be conventionally
assumed as the true value, and the test thought of as a means to determine the
errors of the measuring instrument and procedure. Quantifying such errors
and correcting those due to systematic effects is actually no different from
performing a calibration of the measuring instrument under test.
Summarizing, the introduction of the concept of uncertainty removes the
inconsistency of the theory of errors, and directly provides an operational
means for characterizing the validity of measurement results. In practice, there
are many possible sources of uncertainty that, in general, are not independent,
for example: incomplete definition of the measurand; effects of interfering
environmental conditions and noise; inexact calibration and finite
discrimination capability of measuring instruments, together with variations
in their readings in repeated observations under apparently identical conditions;
and unconscious personal bias in the operation by the experimenter.
In principle, the influence of each conceivable source of uncertainty could
be evaluated by the statistics of repeated observations. In practical cases
this is essentially impossible and, therefore, many sources of uncertainty can
be more conveniently quantified a priori by analysing with scientific judgment
the pool of available information, such as tabulated data, previous
measurement results, instrument specifications. The results of the two
evaluation methods are called respectively type A and type B uncertainties,
which are classified as different according to their derivation but do not
differ in nature and, therefore, are directly comparable. A detailed treatment
of the methods used to evaluate uncertainty can be found in [2] and [3].
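The type A evaluation mentioned above can be sketched in a few lines of code. The sketch below, with illustrative reading values, takes the sample mean as the estimate of the measurand and the experimental standard deviation of the mean as its standard uncertainty; this is a minimal illustration of the statistical procedure, not a substitute for the full treatment in [2] and [3].

```python
import statistics

def type_a_uncertainty(readings):
    """Type A evaluation: estimate of the measurand as the sample mean,
    standard uncertainty as the experimental standard deviation of the mean."""
    n = len(readings)
    mean = statistics.fmean(readings)
    s = statistics.stdev(readings)      # sample standard deviation
    u = s / n ** 0.5                    # standard uncertainty of the mean
    return mean, u

# Ten repeated observations of the same measurand (illustrative values)
readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01, 10.00, 9.99]
estimate, u = type_a_uncertainty(readings)
print(f"estimate = {estimate:.3f}, standard uncertainty = {u:.4f}")
```

A type B contribution, by contrast, would be assigned directly from prior knowledge (e.g. instrument specifications) rather than computed from repeated readings; once expressed as standard uncertainties, the two kinds combine on an equal footing.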
13.4 Measuring instrument functional model
Irrespective of the measured variable and the operating principle involved, a
measuring instrument can be represented by the block diagram of Fig. 13.1.
This is a simplified and general model which focuses on the very fundamental
features that, with various degrees of sophistication in the implementation,
are typical of every measuring instrument.
The measuring instrument can be seen as composed of three cascaded
blocks, which provide an information transfer path from the measurand
quantity to the observer. The first block, named the sensing element, is the
Fig. 13.1 Functional model of (a) a measuring instrument and (b) an electronic
measuring system.
stage being in contact with the measurand and interacting with it in order to
sense its value. This interaction should perturb it as little as possible, so
that negligible load is produced by the instrument on the measured object,
as discussed in Section 13.7. The output of the sensing element is in the form
of some physical variable which is in a known relationship with the
measurand. If we take, for example, a mercury-in-glass thermometer, then the
sensing element is constituted by the mercury, and its output is the thermal
expansion of the fluid volume in the bulb. As we shall see, in electronic
instruments and systems the sensing element function is performed by sensors
and transducers.
The second block, named the variable-conversion stage, accepts the output
of the sensing element and converts it into another variable and/or
manipulates it, with the general aim of obtaining a representation of the
signal more suitable for presentation, yet preserving the original information
content. In our example of the glass thermometer the variable-conversion
stage is the capillary tube that transduces the volume expansion into the
elongation of the fluid column.
The third block, named the presentation stage, undertakes the final
translation of the measurement signal into a form which is perceived and
understood by humans, once again preserving the original information
content. The role of this stage is straightforward, but its importance should
not be overlooked. In fact, the degree of discrimination between closely spaced
values of the measurand that an instrument allows, i.e. the resolution, is
strongly related, among other factors, to the design and construction of its
presentation stage. This can be readily recognized if we think at our glass
thermometer for which the presentation stage is the gridmark pattern on the
capillary tube. Although the mercury expansion is a continuous function of
temperature, the discrete spacing of the gridmarks enables discrimination
no better than 0.1 °C to the naked eye, which is, nevertheless, all that is
needed in many applications.
It should be observed that the distinction between functional blocks does
not necessarily reflect a physical separation of such blocks in the real
instruments. On the contrary, there are many cases in which several functions
are somewhat distributed among different pieces of hardware, so that it is
difficult, and essentially useless, to distinguish and parse them.
Nowadays, most of the measurement tasks in any field are performed by
instruments and systems which measure physical quantities by electronic means.
Basically, the use of electronics in measuring instrumentation offers higher
performance, improved functionality and reduced cost compared to purely
mechanical systems. A very general block representation of an electronic
measuring instrument or system is given in Fig. 13.1(b), which is fairly similar
to that of Fig. 13.1(a) with some important differences. In this case the sensing
function is performed by sensors, or transducers, which respond to the physical
stimulus caused by the measurand with a corresponding electrical signal. Such
a signal is then amplified, possibly converted into a digital format and processed
in order to extract the information of interest contained in the sensor signal,
and filter out the unwanted spurious components. All such processing
operations are carried out in the electrical domain irrespective of the nature of
the measurand, and therefore they may take advantage of the high capabilities
of modern electronic processing circuitry.
The obtained results can then be presented to the observer through a
display stage, and/or possibly stored into some form of memory device, most
typically electronic or magnetic. The memory storage capability offered by
many electronic measuring instruments is of fundamental importance, as it
enables analysis, processing and comparisons on measurement data to be
performed offline, that is, arbitrarily later than the time when the data are
captured. Some instruments are optimized for extremely fast cycles of data
storage-retrieval-processing so that they can perform specialized functions,
such as filtering, correlation or frequency transforms, in real time, i.e. with
a delay inessential for the particular application.
Transducers and electronic signal amplification and processing will be
treated in Chapters 14 and 15 respectively.
A fundamental fact resulting from both block diagrams of Fig. 13.1 is that
the measuring instrument occupies the position at the interface between the
observer and the measurand. Moreover, all of them are under the global
influence of the surrounding environment. This influence is generally a cause
of interference on the information transfer path from the measurand to the
observer, producing a perturbing action which ultimately worsens the
measurement uncertainty. This fact may be represented by considering the
output y of a measuring instrument being a function not only of the measurand
x, as we ideally would like to happen, but also of a number of further quantities
qi related to the effects of the boundary conditions. Such quantities are named
the influencing or interfering quantities. Typical influencing quantities may
be of an environmental nature, such as temperature, barometric pressure and
humidity, or related to the instrument operation, such as posture, loading
conditions and power supply.
Besides observing that y, x and the quantities qi are actually functions of
time y(t), x(t) and qi(t), we may even consider time itself as an influencing
quantity, since in the most general case the output of a real measuring
instrument depends to some extent on the time t at which the measurement
is performed. This means that the same input combination of measurand
and influencing quantities applied at different time instants of the instrument’s
operating life may, in general, produce different output values due to
instrument ageing and drift. Considered as an influencing quantity, time has
a peculiar nature due to the fact that, unlike what theoretically can be done
for the qis, the observer cannot exert any kind of control over it.
Developing a formal description of measuring instruments which globally
takes into account all the involved variables as functions of time with the
aim of deriving the time evolution of the output is a difficult task. Usually,
a more practicable approach is followed which, moreover, provides a better
understanding of the instrument's performance and a deeper insight into its
operation. It consists of distinguishing between static and dynamic behaviour,
each of which can be analysed separately. Operation under static conditions
can be analysed by neglecting the time dependence of the measurand and
the influencing quantities, therefore avoiding the solution of complicated
partial-derivative differential equations. The consequent reduction in
complexity enables a detailed description of the output-to-measurand
relationship and the evaluation of the impact due to influencing quantities.
On the other hand, the analysis of dynamic operation is essentially
performed by taking into account the time evolution of the measurand only
and the resultant time dependence of the instrument output, thereby requiring
only ordinary differential equations. The effect of the influencing quantities
on dynamic behaviour is generally evaluated by a semiquantitative extension
of the results obtained for the static analysis. Though this approach is not
strictly rigorous, it offers a viable solution to an otherwise unmanageable
problem and, as such, it is of great practical utility.
13.5 Static behaviour of measuring instruments
Let us assume that the measurand x and the influencing quantities qis are
constant and independent of time. It should be noted that this assumption is
not in contradiction with regarding x and the qis as variables. In fact, we
consider that the x and the qis are subject to variations over a range of
values, but we do not take into account the time needed by such variations
to take place. In other words, we consider only the static combinations of
constant inputs once the transients have died out. Under such an assumption,
the relationship between the instrument output y and the measurand x, the
qis and the time t at which the measurement is performed is given by the
following expression:
y = f_g(x, q_1, q_2, …, q_n, t)    (13.1)

where f_g is a function which defines the global conversion characteristic of
the measuring instrument.
The differential of y is given by
dy = (∂f_g/∂x) dx + Σ_i (∂f_g/∂q_i) dq_i + (∂f_g/∂t) dt    (13.2)
The quantities ∂f_g/∂x, ∂f_g/∂q_i and ∂f_g/∂t represent the sensitivities of the
measuring instrument in response to the measurand x, the ith influence quantity
q_i and the time t. The term ∂f_g/∂t is responsible for the time stability of the
conversion characteristic or, better, for its instability. Higher values of ∂f_g/∂t
imply a more pronounced ageing effect on the instrument and require a more
frequent calibration. An instrument for which ∂f_g/∂t = 0 is called time-invariant.
The instrument is the more selective for x the lower the values of the terms
∂f_g/∂q_i are compared to ∂f_g/∂x, so that their effect on the output is negligible
with respect to the measurand. If all the terms ∂f_g/∂q_i were ideally zero, the
instrument would respond to the measurand only and would be called specific
for x. In real cases, given the desired level of accuracy and the estimated
ranges of variability of x and the qis, the comparison between ∂f_g/∂x and
the ∂f_g/∂q_i allows us to determine the influence quantities which actually
play a role and need to be taken into account in the case at hand.
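The comparison between the sensitivity to the measurand and the sensitivities to the influence quantities can be sketched numerically. The conversion characteristic below, with a temperature-dependent scale factor, is a hypothetical form chosen purely for illustration; the partial derivatives of eq (13.2) are approximated by central differences.

```python
def fg(x, T):
    """Hypothetical conversion characteristic: scale factor drifts with
    temperature T (assumed form, for illustration only)."""
    k0, alpha, y0 = 2.0, 0.001, 0.05
    return k0 * (1 + alpha * (T - 20.0)) * x + y0

def sensitivity(f, args, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f
    with respect to its i-th argument."""
    a_plus = list(args); a_plus[i] += h
    a_minus = list(args); a_minus[i] -= h
    return (f(*a_plus) - f(*a_minus)) / (2 * h)

x0, T0 = 5.0, 25.0
S_x = sensitivity(fg, (x0, T0), 0)   # sensitivity to the measurand
S_T = sensitivity(fg, (x0, T0), 1)   # sensitivity to temperature
print(S_x, S_T)   # S_x ≈ 2.01, S_T ≈ 0.01
```

Comparing S_T multiplied by the expected temperature excursion against S_x multiplied by the smallest measurand variation of interest tells whether temperature must be treated as a significant influence quantity in the case at hand.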
In principle, the contribution of the significant influence quantities could be
experimentally evaluated by varying each of them in turn over a given interval,
while keeping the measurand and the other qis constant and monitoring the
instrument output. In practice, this is hardly possible and usually the contribution
is estimated partly from experimental data and partly from theoretical predictions.
Of course, it is expected that the instrument is mostly responsive to the
measurand x, and, therefore, the above procedure is primarily applied to the
experimental determination of the measurand-to-output relationship. The
curve obtained in this way is the static calibration or conversion characteristic
of the instrument under given conditions of the influencing quantities. Under
varying conditions, a family of calibration characteristics is obtained, which
contains information on the impact of the considered qis.
Assuming a reference condition for which the influencing quantities are
kept constant at their nominal or average values qoi, and ageing effects are
neglected, it follows that the output y depends on the measurand only and
eq (13.1) reduces to
y = f(x)    (13.3)
The function f represents the instrument’s static conversion characteristic in
the reference condition. For the instrument to be of practical utility, f(x)
should be monotonic so that its inverse function, which relates the instrument
reading with the measurand, is single-valued.
The term S = df/dx is called the sensitivity of the instrument with respect
to the measurand x. In general, the sensitivity is not constant throughout the
measurand range but is itself a function of x, i.e. S = S(x). In most cases, however,
the instrument is built to ensure a relationship of proportionality between y
and x of the type

y = kx + y_o

In these cases the instrument is said to be linear if y_o = 0 and incrementally
linear if y_o ≠ 0, and the sensitivity S becomes a constant given by the
coefficient k, which is typically called the instrument scale factor, calibration
factor or conversion coefficient. The term y_o is called the instrument offset
and represents the output at zero applied measurand.
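For an incrementally linear instrument, recovering the measurand from a reading amounts to inverting the linear characteristic. A minimal sketch, with hypothetical calibration values for the scale factor and offset:

```python
def reading_to_measurand(y, k, y0):
    """Invert the incrementally linear characteristic y = k*x + y0."""
    return (y - y0) / k

# Hypothetical calibration: scale factor 0.2 V per unit, offset 0.01 V
k, y0 = 0.2, 0.01
y = 1.01                  # instrument output in volts
x = reading_to_measurand(y, k, y0)
print(x)                  # ≈ 5.0
```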
Figure 13.2 shows the conversion characteristics for both an incrementally
linear and a nonlinear instrument. For incrementally linear instruments, the
variations in the coefficients k and yo about their reference values induced
by the influencing quantities are generally adopted to specify their effect.
Taking temperature as an example, we may therefore find widespread usage
of the terms temperature coefficient of the scale factor and of the offset,
meaning the temperature-induced variations in k and yo respectively.
It is very important to point out that nonlinear instruments may be
linearized by considering a small interval of the input x about an average
value x_o, and approximating dy with S(x_o)dx in such an interval. For suitably
small variations around x_o the sensitivity can therefore be assumed constant
and equal to S(x_o), and the instrument considered as locally linear. This
procedure is the so-called small-signal linearization.
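The small-signal linearization above can be illustrated numerically. The square-root characteristic below is an arbitrary nonlinear example; the local sensitivity S(x_o) is estimated by a central difference and used to build the tangent-line approximation.

```python
import math

def f(x):
    """Hypothetical nonlinear static characteristic (illustrative)."""
    return math.sqrt(x)

def linearized(f, x0, h=1e-6):
    """Return the local linear approximation of f about x0:
    x -> f(x0) + S(x0)*(x - x0)."""
    S = (f(x0 + h) - f(x0 - h)) / (2 * h)    # local sensitivity S(x0)
    return lambda x: f(x0) + S * (x - x0)

approx = linearized(f, x0=4.0)
for x in (3.9, 4.0, 4.1):
    print(x, f(x), approx(x))   # error stays small for small deviations
```

For deviations of a few percent about x_o the approximation error is orders of magnitude smaller than the output itself, which is what justifies treating the instrument as locally linear.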
The property of linearity is extremely important for measuring instruments,
as it is for every system, since it implies the validity of the superposition
principle. Essentially, this means that a linear system responds to the sum of
two inputs with an output which is the sum of the two single responses
caused by each input when applied alone. As a consequence, linear systems,
and linear instruments in particular, produce an output which is a scaled
replica of the input, i.e. the readings of the instrument provide an undistorted
image of the measurand variations.
Fig. 13.2 Examples of (a) incrementally linear and (b) nonlinear conversion
characteristics.
It is worth noting that for an instrument to have a linear conversion
characteristic it is not necessary that each of the blocks of Fig. 13.1 be linear.
In fact, block-wise linearity is only a sufficient condition for overall linearity, and we may as well
have several blocks with nonlinear behaviours which mutually cancel, giving
rise to a globally linear instrument. This property is very often exploited when
input or intermediate stages are intrinsically nonlinear, and such a nonlinearity
is compensated for within an additional conversion stage or even within the
presentation stage. As an example, you may think of an instrument that, to
correct a nonlinearity of some intermediate stages, uses a needle indicator
whose reading scale has unequally spaced marks, as happens with logarithmic
paper. Of course, an unfortunate drawback of this expedient is the possible
reduction of the indicator readability in some parts of its range.
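The idea of mutually cancelling nonlinear stages can be sketched numerically. The exponential intermediate stage below is a hypothetical form, undone exactly by a logarithmic presentation scale so that the overall characteristic is linear:

```python
import math

def intermediate_stage(x):
    """Hypothetical intermediate stage with exponential nonlinearity."""
    return math.exp(0.5 * x)

def presentation_stage(z):
    """Logarithmic reading scale that exactly undoes the exponential stage."""
    return 2.0 * math.log(z)

for x in (0.5, 1.0, 2.0):
    y = presentation_stage(intermediate_stage(x))
    print(x, y)   # overall characteristic is linear: y equals x
```

This mirrors the needle-indicator example: the unequally spaced marks of a logarithmic scale play the role of `presentation_stage`, compensating the nonlinearity introduced upstream.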
13.6 Dynamic behaviour of measuring instruments
Let us consider that the measurand x is actually a function of time x(t) and
assume that the effect of the influencing quantities is negligible. Besides, suppose
that time ageing and drift phenomena are absent or, as generally happens,
very slow compared to the time evolution of the measurand signal, so that
they can be overlooked and the instrument considered as time-invariant.
Then we may rewrite eq (13.3), which now takes the form

y(t) = F[x(t)]    (13.4)
F is conceptually different from f in eq (13.3), since F is an operator, in the
sense that it represents a correspondence between entities which are
themselves functions of time and not scalar values as for f. In eq (13.4) F
defines the dynamic conversion characteristic of the measuring instrument
and generally contains time derivatives and integrals of both x(t) and y(t),
giving rise to integrodifferential nonlinear equations.
We restrict the field of the many mathematical forms that F can take, by
assuming that it has the property of linearity and, therefore, we limit ourselves
to considering linear instruments.
Briefly, a linear dynamic system, and an instrument in particular, is one
for which the superposition principle is valid when input and output,
respectively considered as cause and effect, are regarded as functions of time.
It is worth pointing out that the linearity of F, which could be indicated
as dynamic linearity, is not equivalent to the linearity of f, that is the static
linearity described in the preceding section. In fact, they refer to two different
ideas of the concept of linearity, namely operational in the former case and
functional in the latter. Indeed, the dynamic linearity is a more restrictive
condition than static linearity. That is, we may have a system for which the
superposition principle holds for constant values of the input, and, on the
contrary, does not apply when the input is considered as a function of time.
For example, a system for which the input-output relationship is given by
y = bx + a(dx/dt)² is not linear in the dynamical sense, though it
is statically linear, since for x independent of time the output becomes y = bx.
Conversely, dynamic linearity implies static linearity.
For a time-invariant dynamically linear instrument for which input and
output are real functions of time, eq (13.4) takes the form of a linear ordinary
differential equation with constant coefficients, which can be generally
written as
a_n dⁿy/dtⁿ + … + a_1 dy/dt + a_0 y = b_m dᵐx/dtᵐ + … + b_1 dx/dt + b_0 x    (13.5)
The coefficients ai and bi are a combination of instrument parameters assumed
to be independent of time, and are therefore real and constant numbers.
Equations of the form of eq (13.5) are encountered in a wide number of
fields of engineering and science, and standard methods have been developed
for their solution. We will not go into details about this aspect, on which the
interested reader can find many exhaustive references, such as [4]. We would
rather like to point out the main lines of reasoning that can be followed to
approach the problem, and illustrate the modelling of measuring instruments
as dynamic systems [5, 6].
The first approach is that of directly solving eq (13.5) in the time domain.
It is well known that the general form of the solution y(t) is
y(t) = y_f(t) + y_i(t)    (13.6)
where yf(t) is the forced response, and yi(t) is the free response determined
by the initial conditions. In turn, yf(t) is the sum of a steady-state term yfS(t)
and a transient term yfT(t).
The time-domain approach becomes rather complex unless low-order
systems with simple input functions are considered, and is therefore of limited
practical utility. Instead, it is very fruitful to take advantage of the property
of linearity and the consequent validity of the superposition principle. The generic
input x(t) can be decomposed as a finite or infinite sum of elementary
functions for which eq (13.5) simplifies to a set of readily solvable algebraical
equations. The solutions of such equations are then summed to produce the
overall response y(t) to the original stimulus x(t). Depending on the type of
the elementary functions used as a decomposition basis, either complex
exponentials e^{iωt} or damped complex exponentials e^{st}, with s = α + iω and α real, the
above procedure leads to the methods of Fourier and Laplace transform
respectively.
In the Fourier transform method, solving eq (13.5) in the time domain
becomes equivalent to solving the following complex algebraical equation
in the frequency domain
Y(ω) = T(ω) X(ω)    (13.7)
where X(ω) and Y(ω) are complex functions of the angular frequency ω called
the Fourier (or ℱ-) transform of x(t) and y(t), given by
X(ω) = ∫_{−∞}^{+∞} x(t) e^{−iωt} dt    (13.8a)
Y(ω) = ∫_{−∞}^{+∞} y(t) e^{−iωt} dt    (13.8b)
and T(ω) is called the frequency, or sinusoidal, response function of the system.
For a given angular frequency ω, T(ω) is a complex number whose magnitude
and argument respectively represent the gain and phase shift between the
sinusoidal input of angular frequency ω and the corresponding sinusoidal
output.
The Fourier transform method can be applied to the class of functions
of time for which the transform exists, i.e. the integral given in eq (13.8)
converges. In the most general case, such functions are suitably regular
nonperiodic functions with their transform being nonzero over a
continuous spectrum of frequencies. A subset of such functions is
represented by the periodic functions, for which the integral of eq (13.8)
becomes a summation over a discrete spectrum of frequencies and the
transform becomes the series of Fourier coefficients. The method of
analysis based on the expression of periodic functions of time as Fourier
series is called the harmonic analysis.
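As a concrete illustration of harmonic analysis, the sketch below (our own example, not taken from the text) builds the Fourier-series partial sum of a unit square wave and checks that superposing the harmonics reconstructs the signal value on the first half-period:

```python
import math

# Decompose a unit square wave of period T into its Fourier series and
# reconstruct it by superposition of the first N odd harmonics.
# The square-wave choice is an illustrative assumption.

def square_wave_partial_sum(t, T, n_harmonics):
    """Fourier series of a unit square wave: (4/pi)*sum sin(2*pi*k*t/T)/k, k odd."""
    s = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # odd harmonics only
        s += math.sin(2 * math.pi * k * t / T) / k
    return 4.0 / math.pi * s

T = 1.0
# With many harmonics the partial sum approaches +1 on the first half-period
approx = square_wave_partial_sum(0.25, T, 500)
print(round(approx, 2))
```

Each harmonic taken alone satisfies eq (13.5) trivially; the superposition principle then lets the per-harmonic responses be summed, which is the essence of the method.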
In the Laplace transform method, damped complex exponentials are used
as the elementary functions constituting the decomposition basis, thereby
extending the transform method to functions which are not ℱ-transformable
but, nevertheless, have great practical importance, such as the linear ramp
and exponential functions. Again, solving eq (13.5) in the time domain
becomes equivalent to solving the following complex algebraical equation
in the domain of the complex angular frequency s = α + iω
Y(s) = T(s) X(s)    (13.9)
where X(s) and Y(s) are complex functions being the Laplace (or ℒ-)
transforms of x(t) and y(t), given by
X(s) = ∫_{0}^{+∞} x(t) e^{−st} dt    (13.10a)
Y(s) = ∫_{0}^{+∞} y(t) e^{−st} dt    (13.10b)
and the complex function T(s) is called the transfer function of the system.
As the ℒ-transform of the impulse function δ(t) is unity, T(s) is the ℒ-transform
of the system response when subject to an impulsive stimulus, sometimes
called a ballistic excitation.
As can be seen by comparing eqs (13.8) with eqs (13.10), the ℒ-transform is a
generalization of the ℱ-transform based on substituting iω with s = α + iω, where
α is such that the integral converges. The Laplace transform method offers the
desirable advantage that it takes into account the initial conditions in a consistent
way, thereby being a powerful tool for dealing with transient problems.
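A quick numerical check of an ℒ-transform that the ordinary ℱ-transform cannot handle: the sketch below (an illustrative example of ours, with arbitrary integration step and sample value of s) approximates the one-sided Laplace integral of the unit ramp x(t) = t and compares it with the known result 1/s²:

```python
import math

# Numerical approximation of the one-sided Laplace transform of the
# unit ramp x(t) = t, whose transform is 1/s**2. The integration step,
# truncation point and sample value of s are illustrative choices.

def laplace_ramp(s, t_max=50.0, dt=1e-3):
    """Left-Riemann approximation of the integral of t*exp(-s*t) from 0 to t_max."""
    total = 0.0
    t = 0.0
    while t < t_max:
        total += t * math.exp(-s * t) * dt
        t += dt
    return total

s = 2.0
print(abs(laplace_ramp(s) - 1.0 / s**2) < 1e-3)
```

The damping factor e^{−αt} inside e^{−st} is what makes the integral converge even though the ramp grows without bound.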
The use of the transform method enables us to describe the dynamic
behaviour of a linear instrument by simply analysing its frequency response
function or its transfer function. In turn, they may be derived by defining
elementary blocks which compose the instrument and properly combining the
respective T(ω) or T(s) of such blocks. As a rule, the frequency response or
transfer function of cascaded blocks is the product of the individual T(ω) or
T(s). As a result a block representation of the instrument can be obtained,
which is shown in Fig. 13.3 for both the ℱ- and ℒ-transforms.
It is important to point out that several relevant features of the system
under consideration can be analysed directly in the frequency domain by using
T(ω) and T(s), without the need to formulate the problem in the time domain,
thereby avoiding the related difficulties. T(ω) and T(s) can be experimentally
measured by monitoring the outputs generated by swept-sine and impulse
inputs respectively. In practice, it is sometimes preferable to use a step
excitation in place of the impulse, which may be more difficult to generate.
Since the unitary step function 1(t) is the integral of the impulse δ(t), if the
Fig. 13.3 Block-diagram representation of a measuring instrument in (a) the Fourier
and (b) the Laplace domains.
system is linear the step response can be differentiated to obtain the impulse
response and, thereafter, the transfer function T(s).
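The step-to-impulse procedure just described can be sketched numerically; in this illustrative example (the first-order system and its time constant are our assumptions) the measured step response is differentiated to recover the impulse response h(t):

```python
import math

# Differentiate the step response of a first-order system to recover its
# impulse response, as described in the text. The system and the value
# of tau are illustrative assumptions.

tau = 0.5
dt = 1e-4

def step_response(t):
    return 1.0 - math.exp(-t / tau)     # response to the unit step 1(t)

def impulse_response_exact(t):
    return math.exp(-t / tau) / tau     # analytic h(t), for comparison

t = 1.0
# central-difference derivative of the (possibly measured) step response
h_est = (step_response(t + dt) - step_response(t - dt)) / (2 * dt)
print(abs(h_est - impulse_response_exact(t)) < 1e-6)
```

In practice the measured step response is noisy, so the numerical differentiation would normally be preceded by smoothing; this sketch uses the ideal noiseless response.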
Three main models of measuring instruments can be distinguished, which
differ in their dynamic behaviour according to the degree n of the denominator
of their respective transfer functions, alternatively seen as the order of the
differential equation in the time domain. They are the zeroth-, first- and
second-order instrument models.
For a zeroth-order instrument the input-output relationship in the time
domain is given by
y(t) = (b0/a0) x(t) = Kx(t)    (13.11)
which in the s-domain becomes
Y(s) = K X(s),   i.e.   T(s) = K    (13.12)
We can observe that, both in time- and frequency-domain representations,
the output is proportional to the input. This means that y(t) instantly follows
x(t) whatever its time evolution is, differing from it only by the scale factor
K = b0/a0, which represents the instrument sensitivity. In particular, the step
response of a zeroth-order instrument is a step function itself as shown in
Fig. 13.4, and the sinusoidal frequency response T(ω) is flat throughout the
frequency axis (Fig. 13.5). An example of a zeroth-order instrument is a
resistive potentiometer displacement transducer.
For a first-order instrument the input-output relationship in the time
domain is given by
a1 (dy/dt) + a0 y = b0 x    (13.13)
which in the s-domain becomes
T(s) = K/(1 + sτ),   with K = b0/a0 and τ = a1/a0    (13.14)
Fig. 13.4 Step response of a zeroth-order instrument.
Fig. 13.5 Frequency response of a zeroth-order instrument: (a) magnitude;
(b) phase.
The static sensitivity is given by K = b0/a0. Considering the expression of the transfer
function T(s) = K/(1 + sτ), which has a pole at s = −1/τ, and applying the Laplace method, the
step response can be determined. As shown in Fig. 13.6, the output y(t) is a
rising exponential function with a time constant given by τ = a1/a0 and a steady-state
value given by Kx. Therefore, the output lags the input when abrupt
variations take place, while it reaches the steady-state value after a characteristic
response time (after 5τ the output is at 99% of its final value).
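A minimal sketch of the first-order step response, with illustrative values of K and τ, confirming the 99% figure at t = 5τ:

```python
import math

# Step response of a first-order instrument, y(t) = K*(1 - exp(-t/tau)):
# after t = 5*tau the output has reached about 99% of its final value K.
# The values of K and tau are arbitrary illustrations.

K, tau = 2.0, 0.1

def y(t):
    return K * (1.0 - math.exp(-t / tau))

ratio = y(5 * tau) / K
print(round(ratio * 100, 1))  # percentage of the final value at t = 5*tau
```

Since 1 − e⁻⁵ ≈ 0.9933, the commonly quoted "99% after five time constants" follows directly.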
The frequency response function T(ω) is plotted in Fig. 13.7, showing the
existence of a cutoff, or corner, frequency ωc = 1/τ, at which the magnitude is
attenuated by a factor of √2 (i.e. by 3 dB) compared to the low-frequency value K,
and after which it starts to decrease with an asymptotic log-log slope of
–20 dB/decade.
Fig. 13.6 Step response of a first-order instrument ([6, p. 177], reproduced with
permission).
Fig. 13.7 Frequency response of a first-order instrument: (a) magnitude; (b) phase
([6, p. 122], reproduced with permission).
For ω ≪ ωc the phase shift is approximately zero; it is –π/4 at ω = ωc,
and it asymptotically reaches –π/2 for ω ≫ ωc.
The system
then behaves as a first-order low-pass filter with a –3 dB bandwidth extending
from DC, i.e. zero frequency, to ωc. An example of a first-order instrument
is a thermometer with a finite thermal resistance and capacitance.
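The –3 dB attenuation and –π/4 phase shift at the corner frequency can be checked directly from T(ω) = K/(1 + iωτ); the values of K and τ below are arbitrary illustrations:

```python
import cmath
import math

# Frequency response T(w) = K / (1 + 1j*w*tau) of a first-order instrument:
# at the corner frequency wc = 1/tau the magnitude is K/sqrt(2) (-3 dB)
# and the phase shift is -pi/4. K and tau are illustrative values.

K, tau = 1.0, 0.01
wc = 1.0 / tau

def T(w):
    return K / (1.0 + 1j * w * tau)

mag_db = 20 * math.log10(abs(T(wc)) / K)   # attenuation relative to K, in dB
phase = cmath.phase(T(wc))                 # phase shift at the corner frequency
print(round(mag_db, 2), round(phase, 3))
```

The same two lines evaluated over a grid of ω values would reproduce the Bode plots of Fig. 13.7.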
For a second-order instrument the input-output relationship in the time
domain is given by
a2 (d²y/dt²) + a1 (dy/dt) + a0 y = b0 x    (13.15)
which in the s-domain becomes
T(s) = Kω0² / (s² + 2ζω0 s + ω0²),   with K = b0/a0, ω0 = √(a0/a2) and ζ = a1/(2√(a0 a2))    (13.16)
Fig. 13.8 Step response of a second-order instrument ([6, p. 129], reproduced with
permission).
Fig. 13.9 Frequency response of a second-order instrument: (a) magnitude;
(b) phase ([6, p. 135], reproduced with permission).
Second-order dynamic systems were extensively treated in Chapter 4, and
therefore we simply limit ourselves to a few considerations here.
The static sensitivity is given by K = b0/a0. Figure 13.8 shows the step response
which, depending on the damping ratio ζ, follows a monotonic
or oscillatory trend toward the steady-state value Kx. The ringing
approaches a constant-amplitude oscillation at ω0 when the
damping ratio ζ tends to zero. ω0 is called the undamped natural frequency.
The frequency response function T(ω) is plotted in Fig. 13.9. For ω around
ω0 the magnitude of T(ω) starts decreasing from its low-frequency value K
with a log-log slope of –40 dB/decade. The damping ratio ζ determines
the amount of peaking and the steepness of the phase curve in the region
around ω0. For ζ = 1/√2 the magnitude attenuation at ω = ω0 is equal to 3 dB.
For ω ≪ ω0 the phase shift is approximately zero; it is –π/2 at ω = ω0, and
it asymptotically reaches –π for ω ≫ ω0. The system then behaves as a second-order
low-pass filter with a bandwidth extending from DC to ω0, the
extension of the flat region depending on the amount of damping.
A typical example of a second-order instrument is a seismic accelerometer.
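As a sketch of the second-order response, the example below (with illustrative values of K, ω0 and ζ) evaluates T(ω) of eq (13.16) at ω = ω0, where the magnitude equals K/(2ζ) and the phase is –π/2:

```python
import cmath
import math

# Frequency response of a second-order instrument,
# T(w) = K*w0**2 / (s**2 + 2*zeta*w0*s + w0**2) with s = 1j*w.
# At w = w0 the magnitude is K/(2*zeta) and the phase is -pi/2.
# K, w0 and zeta are illustrative values.

K, w0, zeta = 1.0, 100.0, 0.25

def T(w):
    s = 1j * w
    return K * w0**2 / (s**2 + 2 * zeta * w0 * s + w0**2)

print(round(abs(T(w0)), 3), round(cmath.phase(T(w0)), 3))
```

With ζ = 0.25 the magnitude at ω0 is 2K, i.e. a 6 dB resonance peak, illustrating how the damping ratio controls the peaking described above.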
13.7 Loading effect
13.7.1 General considerations
Measurement is a transfer of information which implies an exchange of
energy. In fact, the measuring instrument interacts with the measurement
object and unavoidably perturbs it by altering its energetic equilibrium. As
a consequence of this perturbation, we are fundamentally unable to determine
the value of a quantity without simultaneously modifying it due to the action
of measuring. This effect, which is very important to understand and take
into account, is named the loading effect, as the measuring instrument loads
the measurement object. As will be illustrated shortly, if all relevant quantities
were exactly known the loading effect could be evaluated and therefore
compensated for as a systematic effect. In practice this is not generally feasible,
and it is a better practice to reduce loading as much as possible, in order to
minimize its quantitative impact.
In general, the perturbing actions caused by the instrument on the
measurement object can be divided into two categories. The first category
comprises those interactions which alter the measurand value and globally
modify the conditions of the system under measurement, which thereby assumes
a different configuration. For example, if a flowmeter is inserted into a tube to
measure fluid velocity, the presence of the meter not only alters the local value
of the flow but also distorts the overall flow field producing a global effect.
The case is different for the second category of interactions which can be
exemplified, for instance, by a mass-spring-damper vibrating system to which
an accelerometer is attached. It can be readily realized that the accelerometer
adds its mass to the vibrating mass, causing a loading action which affects
both the amplitude and the frequency of vibration.
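The mass-loading effect just described can be quantified with a one-line calculation; the stiffness and mass values below are illustrative assumptions:

```python
import math

# Loading of a vibrating mass-spring system by an attached accelerometer:
# the added mass m_a lowers the natural frequency from sqrt(k/m) to
# sqrt(k/(m + m_a)). All numerical values are illustrative.

k = 1.0e5       # spring stiffness, N/m
m = 0.1         # vibrating mass, kg
m_a = 0.01      # accelerometer mass, kg

f_unloaded = math.sqrt(k / m) / (2 * math.pi)
f_loaded = math.sqrt(k / (m + m_a)) / (2 * math.pi)

shift_percent = 100 * (f_unloaded - f_loaded) / f_unloaded
print(round(shift_percent, 1))  # relative frequency drop caused by the loading
```

With a 10% mass ratio the measured natural frequency is nearly 5% low, which shows why accelerometers much lighter than the structure under test are preferred.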
The difference between the two categories of interactions lies in the fact
that the former comprises situations typical of distributed-parameter systems,
whose analysis requires the solution of partial-derivative differential equations
and is therefore rather complicated, whereas the latter example represents
interactions that can be analysed by making reference to simplified lumped-parameter systems and, as such, more easily treated and understood. However,
the second category is actually a particular subset of the first one. In fact, if
we suppose that the mass-spring-damper system represents the lumped-parameter model of a continuous structure, then the real effect caused by the
accelerometer on such a structure might need to be considered in more detail.
Possibly, it may happen that the dimensions and/or the location of the
accelerometer on the structure are such that the mode shapes are appreciably
modified and the analysis based on a lumped-parameter model is no longer
appropriate. Eventually, an approach based on a distributed-parameter
representation of the structure-accelerometer system could be required.
13.7.2 The two-port representation
As far as the action of the instrument on the measurement object can be
represented as the interaction between two blocks modelled as lumped-parameter systems, the corresponding loading effect can be analysed by
making use of the concept of two-port devices. A two-port device is a system
seen as a black box which exchanges energy with the external world through
two connections, named ports, positioned at its input and output. Irrespective
of the domain to which the input and output of a two-port device belong,
i.e. mechanical, electrical, thermal or other, the energy transfer across each
port is realized by means of a couple of variables, namely a flow or through
variable f, and an effort or across variable e. Flow and effort variables are
functions of time and are characterized by the feature that their product
P(t)=f(t)e(t) is the instantaneous power transferred across the port [1, 6].
Examples of flow variables are given by velocity, current and heat flux, and
the respective effort variables are force, voltage and temperature drop. A
block diagram of a two-port device is shown in Fig. 13.10.
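Since P(t) = f(t)e(t) is the instantaneous power, for sinusoidal effort and flow of amplitudes E and F with relative phase φ the mean power transferred across the port is (1/2)EF cos φ; the sketch below (amplitudes, phase and frequency are illustrative) verifies this numerically:

```python
import math

# Instantaneous power across a port is P(t) = e(t)*f(t). For sinusoidal
# effort and flow of amplitudes E and F with relative phase phi, the mean
# power over one period is (1/2)*E*F*cos(phi). Values are illustrative.

E, F, phi, w = 2.0, 3.0, math.pi / 3, 100.0

def power(t):
    e = E * math.cos(w * t)
    f = F * math.cos(w * t - phi)
    return e * f

# numerical average over one period
T = 2 * math.pi / w
n = 100000
p_avg = sum(power(i * T / n) for i in range(n)) / n
print(abs(p_avg - 0.5 * E * F * math.cos(phi)) < 1e-6)
```

The cos φ factor is the familiar power factor of electrical circuits, here stated for generic effort/flow pairs such as force-velocity or temperature drop-heat flux.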
The representation of a system as a two-port device is strongly related to
the physical behaviour of the system and to the configuration of its input and
Fig. 13.10 Two-port representation of a measuring instrument.
output connections in their effect on the flow of energy. As such, it is
conceptually different and more descriptive than the functional representation
of Fig. 13.3, which is actually limited to the description of the flow of signals,
without direct reference to their physical nature.
Of the four variables involved in a two-port device, namely ei, fi, eo and fo,
only two can be independent. The remaining two variables are necessarily
dependent. The choice of which couple among the four variables should be
considered as independent is arbitrary from a formal point of view, but,
depending on the case at hand, it may be more convenient to choose one
representation or another among the six combinations available. One possible
choice which is convenient for analysing the loading effect is that of considering
ei and fo as independent variables, and eo and fi as dependent variables.
We will hereafter assume that the two-port is linear and time-invariant, and
indicate with Ei(s), Fi(s), Eo(s) and Fo(s) the ℒ-transforms of ei, fi, eo and fo. In
place of the ℒ-transforms, the ℱ-transforms could be used as well, but since the
ℒ-transform method is actually a generalization of the ℱ-transform method,
we will use ℒ-transforms in the following. Therefore, it can be written that
Eo(s) = A(s) Ei(s) + B(s) Fo(s)
Fi(s) = C(s) Ei(s) + D(s) Fo(s)    (13.17)
If the two-port is not linear we can linearize it locally, and the above equations
remain valid with each variable substituted by its incremental value and with
A(s), B(s), C(s) and D(s) becoming dependent on the point around which the
linearization is performed. Without loss of generality we may therefore make
reference to the eqs (13.17) for both linear and locally linear two-port devices.
The terms A(s), B(s), C(s) and D(s) are complex functions of s which
have the following meaning:
A(s) = Ke^F(s) = Eo(s)/Ei(s) for Fo(s) = 0
is the forward effort-transfer function under no load, i.e. with no power
drawn from the output port since Fo(s)=0.
B(s) = −Zo(s) = −1/Yo(s), with Zo(s) = −Eo(s)/Fo(s) for Ei(s) = 0
where Zo(s) is the generalized output impedance and Yo(s) is the generalized output
admittance. The minus sign comes from the choice of taking fo positive in the
outward direction, so that power is considered as positive when transferred
by the device to its output. Note that Zo and Yo are defined under the condition
of zero input effort, i.e. Ei(s)=0.
C(s) = Yi(s) = Fi(s)/Ei(s) for Fo(s) = 0
is the generalized input admittance, and Zi(s) = 1/Yi(s) is the generalized input
impedance. Note that Yi and Zi are defined under the condition of zero
output flow, i.e. Fo(s)=0.
D(s) = Kf^R(s) = Fi(s)/Fo(s) for Ei(s) = 0
is the reverse flow-transfer function under no load, i.e. with no power drawn
from the input port, since Ei(s)=0.
It should be carefully noted that in the definitions of Ke^F(s) and Kf^R(s) the
no-load condition is identified in the first case with Fo(s)=0, and in the second
case with Ei(s)=0. This apparent contradiction is indeed consistent with the
variable taken as the output in the two cases, Eo(s) and Fi(s) respectively,
and with the requirement of zero power flow across the port which defines
the no-load situation. To differentiate between these two occurrences of the
no-load condition it may be helpful to borrow the terminology of electrical
circuits and call Ke^F(s) the open-circuit forward effort-transfer function, and
Kf^R(s) the short-circuit reverse flow-transfer function.
In the following we further assume that the reverse transfer is equal to
zero, i.e. in this case Kf^R(s) = 0, therefore considering the two-port to be
unilateral. This assumption is quite reasonable for measuring instruments,
since the application of the measurand at the input produces an output, but
the opposite is obviously not true. It should be noticed that here we are not
referring to the behaviour of the instrument components, which may often
be bilateral when taken singularly (e.g. the piezoelectric or electrodynamic
elements), but to that of the whole instrument, which is generally designed
to be unilateral.
To simplify the notation, we will then drop the superscript F from Ke^F(s),
by implicitly assuming that we hereafter refer to forward transfer functions
only. Therefore eqs (13.17) become
Eo(s) = Ke(s) Ei(s) − Zo(s) Fo(s)
Fi(s) = Yi(s) Ei(s)    (13.18)
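At a single frequency, the loading of a one-port source by the instrument input reduces to complex-number arithmetic; the sketch below (impedance values are illustrative assumptions, and real numbers are used for simplicity) combines the source equation with the instrument input admittance to obtain the effort actually seen at the input:

```python
# Loading of a one-port source (internal effort E_int, output impedance
# Z_out) by the input of a unilateral two-port instrument with input
# impedance Z_in, evaluated at a single frequency so that impedances are
# plain numbers. All values are illustrative.

E_int = 1.0      # internal (unloaded) effort of the source
Z_out = 50.0     # source output impedance
Z_in = 1.0e6     # instrument input impedance (input admittance 1/Z_in)

# Combining e_i = E_int - Z_out*f_i with f_i = e_i/Z_in gives the effort
# actually present at the instrument input:
e_i = E_int * Z_in / (Z_in + Z_out)

error_percent = 100 * (E_int - e_i) / E_int
print(round(error_percent, 4))  # loading error, percent
```

The result shows the familiar design rule: the larger the instrument input impedance relative to the source output impedance, the smaller the loading error.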
The same line of reasoning can be applied to the cases in which other couples
of variables, different from Ei and Fo, are taken as independent. Since for
analysing the loading effect it is convenient that the couple of independent
variables is formed by an input and an output variable, the following four
systems of equations summarize all the possibilities for linear unilateral two-port devices:
Eo(s) = Ke(s) Ei(s) − Zo(s) Fo(s),   Fi(s) = Yi(s) Ei(s)    (13.19a)
Fo(s) = Kf(s) Fi(s) − Yo(s) Eo(s),   Ei(s) = Zi(s) Fi(s)    (13.19b)
Fo(s) = Kef(s) Ei(s) − Yo(s) Eo(s),   Fi(s) = Yi(s) Ei(s)    (13.19c)
Eo(s) = Kfe(s) Fi(s) − Zo(s) Fo(s),   Ei(s) = Zi(s) Fi(s)    (13.19d)
The terms Ke(s), Kf(s), Kef(s) and Kfe(s) are the open-circuit effort-, short-circuit flow-, short-circuit effort-to-flow- and open-circuit flow-to-effort-transfer functions respectively. For a given device they are not mutually
independent, consistent with the fact that eqs (13.19a) to (13.19d) are just
different representations of the same unique system, and the following
relationships hold:
Kf(s) = Ke(s) Zi(s) Yo(s),   Kef(s) = Ke(s) Yo(s),   Kfe(s) = Ke(s) Zi(s)    (13.20)
Incidentally, it can be noticed that, while Ke and Kf are dimensionless numbers,
Kef and Kfe do have dimensions, and they are therefore sometimes called
hybrid transfer functions.
The representations of a two-port device can be particularized to the case
of a device having only two connecting terminals, called a one-port device. As
shown in Fig. 13.11, a one-port can be thought of as a two-port with the output
terminals only, and therefore two variables, i.e. Eo and Fo, are sufficient to
describe it. They are linked by the one-port constitutive equations which, in
Fig. 13.11 Equivalent representations of one-ports: (a) Thevenin; (b) Norton.
the hypothesis of linearity, can take either one of the two following forms:
Eo(s) = Eint(s) − Zo(s) Fo(s)    (13.21a)
Fo(s) = Fint(s) − Yo(s) Eo(s)    (13.21b)
Equation (13.21a), called the Thevenin representation, expresses the output
effort Eo as a function of the flow Fo. Equation (13.21b), called the Norton
representation, expresses the output flow Fo as a function of the effort Eo.
The terms Eint and Fint represent an internal effort and flow source respectively,
which may be absent if the one-port is purely passive.
Since both eqs (13.21a) and (13.21b) are nothing but two different
representations of the same device they must be equivalent. Therefore, Eint
and Fint must be related by
Eint(s) = Zo(s) Fint(s)    (13.22)
where Zo is the generalized output, or internal, impedance.
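The Thevenin-Norton equivalence expressed by eq (13.22) can be verified numerically: for any load, both representations deliver the same output flow. All values below are illustrative:

```python
# Thevenin/Norton equivalence of a one-port (eqs (13.21) and (13.22)):
# the same device delivers the same output flow to an arbitrary load
# impedance in both representations when E_int = Z_o * F_int.
# All numerical values are illustrative.

E_int = 10.0          # Thevenin internal effort
Z_o = 2.0             # internal (output) impedance
F_int = E_int / Z_o   # equivalent Norton internal flow, eq (13.22)

Z_load = 3.0          # an arbitrary load impedance, with E_o = Z_load*F_o

# Thevenin: E_o = E_int - Z_o*F_o  ->  F_o = E_int/(Z_o + Z_load)
f_thevenin = E_int / (Z_o + Z_load)

# Norton: F_o = F_int - E_o/Z_o  ->  F_o = F_int/(1 + Z_load/Z_o)
f_norton = F_int / (1.0 + Z_load / Z_o)

print(round(f_thevenin, 6), round(f_norton, 6))
```

Either form may be chosen for convenience; low-output-impedance sources are naturally described in Thevenin form, high-output-impedance ones in Norton form.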
In contrast to the two-ports, which behave as energy converters, one-ports are energy sources or sinks, depending on the sign of Eint (or Fint), or
energy dissipating and/or storage elements if Eint = 0 (or Fint = 0) with Zo(s) ≠ 0.
Two-port devices can be cascaded, as schematized in Fig. 13.12, provided
that each input quantity is homogeneous with the output quantity of the
preceding device and, as a special case, the first device of the chain may as
well be a one-port. As far as the interconnection with the following block is
concerned, this circumstance is indistinguishable from the more general
situation in which both blocks are two-ports and, as such, a single analysis
approach can be followed.
We can firstly assume that devices 1 and 2 are both represented by the form of
eqs (13.19a), or of eqs (13.21a) in the case of a one-port, i.e. they are thought