Handbook of Reliability, Availability, Maintainability and Safety in Engineering Design - Part 74

The problem is that the fuzzy sets F_i and H_i are both defined by their membership functions μ, with domain R, the set of real numbers, the input vectors of the training set thus having infinitely many elements.
Obviously, it is impossible to have infinitely large neural networks, so the membership functions are made discrete by taking samples at equal intervals. Furthermore, the domain of the membership functions is compressed to the interval [0, 1]: if the domain is [−∞,+∞], the transform T is then D[−∞,+∞] → D[0,1].
This is termed a loss-less transformation. To present this transformation graphically, as illustrated in Fig. 5.67, draw a semicircle in the region defined by 0 < x < 1, 0 < y < 0.5, with centre (0.5, 0.5), and draw lines to all points on the x-axis. T(x_0) is the x coordinate of the intersection of the line crossing the x-axis at x_0 with the semicircle.
With k samples of the membership function taken at x_i = i/k, i = 0, ..., k, the training set of the fuzzy neural network is:
{ (μ_Fi(x_0), μ_Fi(x_1), ..., μ_Fi(x_k)), (μ_Hi(x_0), μ_Hi(x_1), ..., μ_Hi(x_k)) | i = 0, ..., n }
The training set consists of pairs of sampled membership functions. The pairs correspond to the rules of the fuzzy rule-based neural network considered. As indicated previously, the advantage of fuzzy rule-based neural networks is that the designer does not have to program the system; the fuzzy neural network constructs the membership functions itself. In the example above, the membership functions were already known. In actual use of fuzzy ANN models, the membership functions would be extracted from the training pairs by the ANN.
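As a minimal sketch of how such a training pair might be assembled, the snippet below samples two hypothetical triangular membership functions (illustrative only, not the handbook's) at the points x_i = i/k:

```python
import numpy as np

def triangular_mf(a: float, b: float, c: float):
    """Return a triangular membership function with feet a, c and peak b."""
    def mu(x):
        x = np.asarray(x, dtype=float)
        left = np.clip((x - a) / (b - a), 0.0, 1.0)
        right = np.clip((c - x) / (c - b), 0.0, 1.0)
        return np.minimum(left, right)
    return mu

k = 10                               # number of sampling intervals
x = np.arange(k + 1) / k             # x_i = i/k, i = 0..k, on the domain [0, 1]

# Hypothetical rule "if F_i then H_i": antecedent and consequent fuzzy sets.
mu_F = triangular_mf(0.1, 0.3, 0.5)
mu_H = triangular_mf(0.4, 0.7, 0.9)

# One training pair: (sampled antecedent, sampled consequent).
training_pair = (mu_F(x), mu_H(x))
print(training_pair[0].round(2))
print(training_pair[1].round(2))
```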
Fuzzy artificial perceptrons (FAP) Fuzzy T-norm functions have the following properties:

T : [0,1] × [0,1] → [0,1], T(x,y) = T(y,x), T(0,x) = 0,
T(1,x) = x, T(T(x,y),z) = T(x,T(y,z)),
x ≤ a and y ≤ b → T(x,y) ≤ T(a,b)

From the definition of intersection of fuzzy sets, the notation μ_F∩G(x,y) = min(μ_F(x), μ_G(y)) is a T-norm, where x = y.
Fig. 5.67 Graph of membership function transformation of a fuzzy ANN
Fig. 5.68 A fuzzy artificial perceptron (AP): input layer, output layer, and weighted connections
Fuzzy T-conorm functions have the following properties:

T : [0,1] × [0,1] → [0,1], T(x,y) = T(y,x), T(0,x) = x,
T(1,x) = 1, T(T(x,y),z) = T(x,T(y,z)),
x ≤ a and y ≤ b → T(x,y) ≤ T(a,b)

From the definition of union of fuzzy sets, the notation μ_F∪G(x,y) = max(μ_F(x), μ_G(y)) is a T-conorm, where x = y.
A fuzzy artificial perceptron (AP) can now be defined; these are really ANNs with two input neurodes (x and y), no hidden layer, and an output neurode o (Fig. 5.68). The weights are w_xo and w_yo.

Fuzzy AND AP: x, y, o, w_xo, w_yo ∈ [0,1]. Where t is a T-norm function and s is a T-conorm function: o = t(s(x, w_xo), s(y, w_yo)).
Fuzzy OR AP: x, y, o, w_xo, w_yo ∈ [0,1]. Where t is a T-norm function and s is a T-conorm function: o = s(t(x, w_xo), t(y, w_yo)).
f) Artificial Neural Networks in Engineering Design
As indicated previously, an ANN is a computer model consisting of many simple
processing elements (PEs) in layered structures. The PEs interact through weighted
connections that, when manipulated, enable an ANN to recognise patterns from
sample data of system (or assembly/component) performance based on specific in-
put variables. Neural networks can also be used to predict input variables for condi-
tions that have not been determined experimentally.
Figure 5.69 is an example of an ANN-generated, three-dimensional plot of predicted wear rate for a mechanical device, as a function of piston sliding distance and sliding speed. The figure depicts wear rate values obtained for different distances and speeds (Fusaro 1998).

Critical parameters such as load, speed, sliding distance, friction coefficient, wear, and material properties are used to produce models for each set of sample data.
The study shows that artificial neural networks are able to model such simple systems, illustrating the feasibility of using ANN models to perform accelerated life testing on more complicated prototype mechanical systems. The following graph (Fig. 5.70) compares actual wear data to those generated from an ANN model. As the graph illustrates, the correlation is very good (Fusaro 1998).

Fig. 5.69 Three-dimensional plots generated from a neural network model illustrating the relationship between speed, load, and wear rate (Fusaro 1998)

Fig. 5.70 Comparison of actual data to those of an ANN model approximation (Fusaro 1998)
ANNs are normally classified by learning procedure, the most common being unsupervised and supervised learning. In unsupervised learning, the network is trained by internal characterisation of data patterns, with no other information or teaching requirement. This type of ANN is appropriate to preliminary engineering design applications, as it can analyse the possible occurrence of a process failure condition but not necessarily the type of failure characteristics or extent of the fault.
In supervised learning, individual values of the weighted connections between neurodes are adjusted during training iterations to produce a desired output for a given input. Knowledge is thus represented by the structure of the network and the values of the weights. This type of ANN is appropriate to detail design applications supported by sample data. This procedure offers several advantages in the field of pattern recognition and analysis of sample failure data, including an ability to learn from examples, and the ability to generalise. The generalisation property enables a network trained only on representative input sample data to provide relatively accurate results without being trained on all possible input data. Thus, the primary advantage of ANN models over operational modelling and expert system approaches is that representative sample data can be used to train the ANN with no prior knowledge of system operation (Farell et al. 1994).
ANN models typically exhibit the characteristics of rule-based (knowledge-based) expert systems without the need for prior representation of the rules. However, it is the ability to generalise and form accurate evaluations from design specification data not present in the sample data training set that is the key requirement of the ANN.

Example of ANN in engineering design—preparation of training data The majority of designs based on process engineering analysis rely on operational models or simulated processes. While providing guidelines for design implementation, they do not highlight inherent problems regarding information quality and availability. For this reason, engineering design data depend on practical process information, such as sensitivity of parameters to fault conditions and, of course, expert process design knowledge. As an example of the application of ANN models in engineering design, a feed-forward ANN topology, using the back-propagation learning algorithm for training, is investigated for pump fault prediction (Lippmann 1987).
This ANN topology incorporates a supervised training technique and, thus, it is
necessary to define training data prior to the ANN analysis. Process measurements
relating to potential fault conditions and normal operation, including information on
types of failure, are necessary for ANN learning. This information can, however, be
difficult to obtain in practical situations. Knowledge for ANN training is established
from models or experience.
Engineering processes and systems are often complex, and it is difficult to incorporate precise descriptions of normal and faulty operating conditions into models. Data founded on experience can be based on quantitative measurements or even qualitative information derived from previous measurements. The quantitative approach, involving data corresponding to historically experienced failures in similar systems and equipment, produces a more accurate evaluation of the design specifications but is dependent on data quality. In real-world situations, the quality of historical condition data and records relating to failure conditions of complex systems is more often questionable. Furthermore, it is unlikely that every potential failure would be experienced historically; consequently, qualitative data are often incorporated to expand quantitative data in the design knowledge base, or even used on their own if no quantitative data are available. However, in situations such as critical pump failure analysis, where problems can be manifested in various forms depending on the design type and size, qualitative data are not considered precise.
A database of historical pump problems and typical failure data of similar pumps enabled an initial approach to pump failure prediction based on quantitative data. The cumulative sum charting method is applied to assign specific parameter measurements to pump operating conditions for ANN training purposes. The cusum chart is constructed from an initial choice of target values. The difference between each measurement and the target is added to a cumulative sum. This value is plotted to provide a simple yet effective way to determine minor deviations in parameter levels. A knowledge base is established from parameters commonly available for typical fault conditions of similar pumps, as the ANN requires consistent parameter input to distinguish between different operating conditions. The parameters used in the example are motor current and delivery pressure (Ilott et al. 1995).

Fig. 5.71 Example failure data using cusum analysis (Ilott et al. 1997)
From motor current data prior to failure, a target value is chosen for calculation of the cumulative sum, such as 150 A. Initial observation of the sample data highlights the difficulty in identifying fault data. For example, the motor current data relating to a specific fault may be consistently higher during the initial stages of operation, due to a primary bearing problem. On further examination of the sample data, there is evidence of a marked deviation in motor current values that coincides with a decrease in delivery pressure. The cusum chart clearly indicates a deviation in motor current operating level from positive to negative during the sample data period, indicating the motor current to be consistently below target value.

This procedure is repeated for all historical pump failures to establish a usable knowledge base of pump failure data. Figure 5.71 shows the motor current data prior to failure, including both sample data and cumulative sum values.
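A minimal sketch of the cusum calculation described above; the motor-current readings and the 150 A target are illustrative placeholders, not the data of Fig. 5.71:

```python
import numpy as np

def cusum(measurements, target):
    """Cumulative sum of deviations from a target value.

    A sustained drift of the parameter away from the target shows up as a
    steadily rising or falling cusum, even when individual readings look
    unremarkable.
    """
    return np.cumsum(np.asarray(measurements, dtype=float) - target)

# Illustrative motor-current readings (A) drifting below the 150 A target:
motor_current = [151, 150, 149, 148, 147, 146, 144, 143, 141, 140]
print(cusum(motor_current, target=150.0))
# -> [ 1. 1. 0. -2. -5. -9. -15. -22. -31. -41.]  (steadily negative: current below target)
```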
ANN model experimental procedure A feed-forward ANN is trained using the back-propagation learning algorithm to predict pump operating conditions from features provided by the knowledge base of motor current and delivery pressure values. The knowledge base established from the cusum analysis is split into training data and test datasets for ANN implementation. These datasets typically include a series of data patterns, each incorporating one motor current and one delivery pressure parameter value, relating to specific fault conditions as well as normal pump operation. The data patterns are input to the ANN every training iteration. Once trained to a preset number of iterations or error level, the ANN is tested with data not presented in the training dataset to verify generalisation capability. The quantity and quality of data available for ANN training purposes is an important issue and dictates the confidence in results from the ANN model. Sufficient data would provide good representation of the decision space relating to specific fault conditions and normal pump operation. The exact quantity of data required cannot be specified, but insufficient data cause poor generalisation ability.
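A minimal sketch of this splitting step, assuming synthetic placeholder patterns with the shape used in the example (one motor-current and one delivery-pressure value per pattern, five operating-condition classes):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic knowledge base: 60 patterns, each (motor current, delivery pressure)
# scaled to [0, 1], with one of five operating-condition labels (0..4).
patterns = rng.random((60, 2))
labels = rng.integers(0, 5, size=60)

# Shuffle, then hold out 25% of the patterns for testing generalisation.
order = rng.permutation(len(patterns))
split = int(0.75 * len(patterns))
train_idx, test_idx = order[:split], order[split:]

x_train, y_train = patterns[train_idx], labels[train_idx]
x_test, y_test = patterns[test_idx], labels[test_idx]
print(x_train.shape, x_test.shape)   # (45, 2) (15, 2)
```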
In designing non-complex pumping systems where adequate models can be developed, the knowledge base can simply be manufactured. The ANN model is trained using the back-propagation learning algorithm, where the sum squared error (SSE) between the desired and actual ANN output is used to amend weighted connections between the PEs to reduce the SSE during further training. For complex system designs, many amendments are required due to re-investigation and system alterations. Representation capability of an ANN is determined by the size of the input space.
The example ANN structure consists of three layers, and its topology consists
of two sets of input neurodes (values of delivery pressure and motor current scaled
between 0 and 1), several hidden neurodes, and five output neurodes (for fault con-
ditions and normal operation). The ANN topology is illustrated in Fig. 5.72 (Ilott
et al. 1997).
The example involves training the ANN model to a predefined error level, to
investigate the effect on generalisation ability. The learning rule performs weight
adjustment in order to minimise the SSE. Furthermore, a learning coefficient is used
to improve ANN learning. The learning coefficient governs the size of the weight
change with every iteration and subsequently the rate of decrease of the SSE value,
and is adjusted dynamically so as to speed up network training. Convergence speed refers to the number of iterations necessary for suitable training of the ANN.
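A minimal sketch of such a training loop in plain NumPy, assuming a hidden layer of six neurodes, synthetic placeholder patterns, and a fixed learning coefficient (the dynamic adjustment described above is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Topology from the example: 2 inputs, a small hidden layer, 5 outputs.
n_in, n_hidden, n_out = 2, 6, 5
w1 = rng.normal(0.0, 0.5, size=(n_in, n_hidden))
w2 = rng.normal(0.0, 0.5, size=(n_hidden, n_out))

# Placeholder training patterns (scaled motor current and delivery pressure)
# and one-hot targets for five operating conditions.
x = rng.random((40, n_in))
targets = np.eye(n_out)[rng.integers(0, n_out, size=40)]

learning_coefficient = 0.5           # governs the size of each weight change
for iteration in range(2000):
    # Forward pass through the weighted connections.
    hidden = sigmoid(x @ w1)
    output = sigmoid(hidden @ w2)

    # Sum squared error between desired and actual output.
    error = targets - output
    sse = np.sum(error ** 2)
    if sse < 1.0:                    # train to a preset error level ...
        break                        # ... or to the iteration limit above

    # Back-propagate the error and amend the weights to reduce the SSE.
    delta_out = error * output * (1.0 - output)
    delta_hidden = (delta_out @ w2.T) * hidden * (1.0 - hidden)
    w2 += learning_coefficient * hidden.T @ delta_out
    w1 += learning_coefficient * x.T @ delta_hidden

print(f"stopped after {iteration + 1} iterations, SSE = {sse:.3f}")
```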

Fig. 5.72 Topology of the example ANN (Ilott et al. 1997)
Fuzzy ANN modelling Fuzzy ANN modelling is based on fuzzy pre-processing of input data. The purpose of such fuzzy pre-processing is to observe the effect of data representation on ANN performance with respect to the sensitivity of the pump parameters to identification of pump failure conditions. This methodology considers the definition of qualitative membership functions for each input parameter, and is considered an alternative method to increase ANN representation capability through compression of training data. Using the pump example, a motor current of 140 A would have membership of 0.5 to membership function 2 (MF2), a lower degree of membership to MF3 (0.06) and no membership to MF1. This procedure is repeated for delivery pressure, and a value of each parameter MF is input to the ANN. An example of the fuzzy membership functions for motor current and delivery pressure parameters is given in Fig. 5.73a,b.
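A minimal sketch of this fuzzification step; the triangular membership function parameters below are assumptions chosen so that a 140 A reading yields roughly the degrees quoted above (0 to MF1, 0.5 to MF2, about 0.06 to MF3), and they will not match the actual shapes in Fig. 5.73a:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership of x in a triangular fuzzy set with peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical motor-current membership functions (amps).
current_mfs = {
    "MF1 (low)":    (80.0, 110.0, 140.0),
    "MF2 (normal)": (120.0, 160.0, 200.0),
    "MF3 (high)":   (135.0, 215.0, 295.0),
}

reading = 140.0   # amps
fuzzified = {name: round(triangular(reading, *abc), 2)
             for name, abc in current_mfs.items()}
print(fuzzified)   # -> {'MF1 (low)': 0.0, 'MF2 (normal)': 0.5, 'MF3 (high)': 0.06}
# The three degrees (plus those for delivery pressure) form the ANN input
# vector in place of the two raw parameter values.
```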
Fig. 5.73 a) Example fuzzy membership functions for pump motor current (Ilott et al. 1995), b) example fuzzy membership functions for pump pressure (Ilott et al. 1995)

Fig. 5.74 Convergence rate of ANN iterations

Example results The example results focus on the importance of data quality and, consequently, pre-processing with respect to ANN convergence speed and generalisation ability. The ANN topology is trained to investigate the effect of data quality on ANN performance. The SSE value is used to gauge the accuracy of training.
The ANN converges faster with each iteration of refined test training data, as indicated in Fig. 5.74. After ANN training, generalisation ability is investigated using the original test set patterns. The quality of training data has a considerable effect on generalisation ability, which varies with the type of failure, and is lower for failure classes defined by fewer measurements in the training dataset. The example focused on maximising a design knowledge base despite the inherent limitations of real sample data.
The cusum charting procedure is a valuable tool in the development of the ANN knowledge base, through identification of parameter deviations in the sample data. The quality of training data as well as pre-processing both influence ANN convergence rate and ANN generalisation ability. Generalisation is one of the primary goals in training neural networks, to ensure that the ANN performs well on data that it has not been trained on. The standard method of ensuring good generalisation is to divide the training data into multiple datasets. The most common datasets are the training, cross validation, and testing datasets.
Refinement of the original training data improves ANN generalisation ability. The fuzzy pre-processing methodology results in a better improvement to ANN generalisation ability but is slow to converge during learning. The refined-data technique converges much faster during the learning phase, and produces generalisation ability comparable to that of the fuzzy approach.
Conclusion Accurate ANN analysis of pump failure conditions, based on a lim-
ited supply of historical data, is feasible for engineering design application during
the detail design phase. However, the use of ANN models for engineering design,
particularly in designing for safety, is dependent upon the availability of histori-
cal data and the sensitivity of parameter values in distinguishing between failure
conditions. ANN analysis capability is also very much dependent upon methods of
knowledge base generation, and the availability of design knowledge expertise (Ilott
et al. 1995).
g) ANN Computational Architectures
Neural networks can be very powerful learning systems. However, it is very impor-
tant to match the neural architecture to the problem. Several learning architectures
are available with neural network software packages. These architectures are cat-
egorised into two groups: supervised and unsupervised. Supervised architectures
are used to classify patterns or make predictions. Unsupervised neural networks are
used to classify training patterns into a specified number of categories.
Supervised learning paradigms (back-propagation, probabilistic, and general regression) are composed of at least three layers: input, hidden and output. In each graphical representation, the input layer is on the left and the output layer on the right. Hidden layers are represented between the input and output layer. The input layer contains variables used by the network to make predictions and classifications. Analysis of data patterns or learning takes place in the hidden layer. The output layer contains the values the neural network is predicting or classifying. Information in the input layer is weighted as it is passed to the hidden layer.
The hidden layer receives the weighted values from the input layer and produces outputs. Historical information is continuously analysed by the system through back-propagation of error, where error is passed backwards until it is reduced to acceptable levels. Learning takes place when the neural network compares itself to correct answers and makes adjustments to the weights in the direction of the correct answers. Variations of supervised learning paradigms include differences in the number of hidden neurodes and/or weight connections.
The unsupervised network is composed of only two layers: input and output. The input layer is represented on the left and the output layer on the right. Information fed into the input layer is weighted and passed to the output layer. Learning takes place when adjustments are made to the weights in the direction of a succeeding neurode. In the illustrations below, each artificial neural network architecture is represented by a graphic containing rectangles and lines. Rectangles depict layers and lines depict weights.

Several types of supervised neural networks and one unsupervised neural network are illustrated collectively in Figs. 5.75 through to 5.81 (Schocken 1994).
ANN model architecture: supervised neural networks (I = input layer, H = hidden layer, O = output layer)
Standard back propagation—each layer is connected to the immediately previous layer (with either one, two or three hidden layers). Standard back-propagation neural networks are known to generalise well on a wide variety of problems (Fig. 5.75).
Jump connection back propagation—each layer is connected to every previous layer (with either one, two or three hidden layers). Jump connection back-propagation networks are known to work with very complex patterns, such as patterns not easily noticeable (Fig. 5.76).
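A minimal sketch of the structural difference between the two topologies (single hidden layer and sigmoid activations assumed): in the jump-connection variant the output layer also receives the raw inputs directly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=2)
n_in, n_hidden, n_out = 3, 4, 2
x = rng.random((1, n_in))

# Standard back-propagation topology: each layer feeds only the next layer.
w_ih = rng.normal(size=(n_in, n_hidden))
w_ho = rng.normal(size=(n_hidden, n_out))
hidden = sigmoid(x @ w_ih)
out_standard = sigmoid(hidden @ w_ho)

# Jump-connection topology: the output layer is also connected directly to
# every previous layer, here the input layer as well as the hidden layer.
w_io = rng.normal(size=(n_in, n_out))           # the extra "jump" weights
out_jump = sigmoid(hidden @ w_ho + x @ w_io)

print(out_standard, out_jump)
```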
Recurrent back-propagation networks with dampened feedback—each architecture
contains two input layers, one hidden layer, and one output layer (Fig. 5.77).
Fig. 5.75 Standard back-propagation ANN architecture (Schocken 1994)
Fig. 5.76 Jump connection back-propagation ANN architecture (Schocken 1994)
Fig. 5.77 Recurrent back-propagation with dampened feedback ANN architecture (Schocken 1994)
The extra input layer retains previous training experiences, much like memory. Weight connections are modified from the input, hidden or output layers, back into the network for inclusion with the next pattern. Recurrent back-propagation networks with dampened feedback are known to learn sequences and time series data.
Ward back propagation—each architecture contains an input layer, two or three hidden layers, and an output layer. Different activation functions (method of output) can be applied. Ward networks are known to detect different features in the low, middle and high dataset ranges (Fig. 5.78).
Probabilistic (PNN)—each layer is connected together. The hidden layer contains one neurode per data array. The output layer contains one neurode for each possible category. PNNs separate data into a specified number of output categories and train quickly on sparse data (Fig. 5.79).
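A minimal sketch of one common PNN formulation, assuming Gaussian kernel neurodes and an arbitrary smoothing parameter sigma; the training data are synthetic placeholders:

```python
import numpy as np

def pnn_classify(x, train_patterns, train_labels, n_classes, sigma=0.1):
    """Minimal probabilistic neural network (PNN) classifier.

    One hidden neurode per training pattern (a Gaussian kernel centred on
    that pattern); one output neurode per category, summing the kernels of
    its own class.  The predicted category is the output with the largest
    summed activation.
    """
    d2 = np.sum((train_patterns - x) ** 2, axis=1)       # squared distances
    kernels = np.exp(-d2 / (2.0 * sigma ** 2))           # hidden-layer outputs
    scores = np.bincount(train_labels, weights=kernels, minlength=n_classes)
    return int(np.argmax(scores))

# Illustrative training data: two clusters in two categories.
rng = np.random.default_rng(seed=3)
patterns = np.vstack([rng.normal(0.2, 0.05, (10, 2)),
                      rng.normal(0.8, 0.05, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)

print(pnn_classify(np.array([0.25, 0.2]), patterns, labels, n_classes=2))  # -> 0
print(pnn_classify(np.array([0.75, 0.8]), patterns, labels, n_classes=2))  # -> 1
```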
General regression (GRNN)—each layer is connected together. Hidden and output layers are the same as PNN. Rather than categorising data like PNN, however,
