Handbook of Reliability, Availability, Maintainability and Safety in Engineering Design - Part 24

3.3 Analytic Development of Reliability and Performance in Engineering Design 213
Fig. 3.44 Revised Weibull chart
214 3 Reliability and Performance in Engineering Design
ment of the characteristics of failure. A major problem arises, though, when the measures and/or estimates of the Weibull parameters cannot be based on obtained data, and engineering design analysis cannot be quantitative. Credible and statistically acceptable qualitative methodologies to determine the integrity of engineering design in the case where data are not available or not meaningful are included, amongst others, in the concept of information integration technology (IIT).
IIT is a combination of techniques, methods and tools for collecting, organising, analysing and utilising diverse information to guide optimal decision-making. The method known as performance and reliability evaluation with diverse information combination and tracking (PREDICT) is a highly successful example (Booker et al. 2000) of IIT that has been applied in automotive system design and development, and in nuclear weapons storage. Specifically, IIT is a formal, multidisciplinary approach to evaluating the performance and reliability of engineering processes when data are sparse or non-existent. This is particularly useful when complex integrations of systems and their interactions make it difficult and even impossible to gather meaningful statistical data that could allow for a quantitative estimation of the performance parameters of probability distributions, such as the Weibull distribution.
The objective is to evaluate equipment reliability early in the detail design phase, by making effective use of all available information: expert knowledge, historical information, experience with similar processes, and computer models. Much of this information, especially expert knowledge, is not formally included in performance or reliability calculations of engineering designs, because it is often implicit, undocumented or not quantitative. The intention is to provide accurate reliability estimates for equipment while it is still in the engineering design stage. As equipment undergoes changes during the development or construction stage, as conditions change, or as new information becomes available, these reliability estimates must be updated accordingly, providing a lifetime record of the equipment's performance.
a) Expert Judgment as Data


Expert judgment is the expression of informed opinion, based on knowledge and
experience, made by experts in responding to technical problems (Ortiz et al. 1991).
Experts are individuals who have specialist background in the subject area and
are recognised by their peers as being qualified to address specific technical prob-
lems. Expert judgment is used in fields such as medicine, economics, engineering,
safety/risk assessment, knowledge acquisition, the decision sciences, and in envi-
ronmental studies (Booker et al. 2000).
Because expert judgment is often used implicitly, it is not always acknowledged as expert judgment; it is thus preferably obtained explicitly through the use of formal elicitation. Formal use of expert judgment is at the heart of the engineering design process, and appears in all its phases. For years, methods have been researched on how to structure elicitations so that analysis of this information can be performed statistically (Meyer and Booker 1991). Expertise gathered in an ad hoc manner is not recommended (Booker et al. 2000).
Examples of expert judgment include:
• the probability of an occurrence of an event,
• a prediction of the performance of some product or process,
• a decision about what statistical methods to use,
• a decision about what variables enter into statistical analysis,
• a decision about which datasets are relevant for use,
• the assumptions used in selecting a model,
• a decision concerning which probability distributions are appropriate,
• a description of information sources for any of the above responses.
Expert judgment can be expressed quantitatively in the form of probabilities, ratings, estimates, weighting factors, distribution parameters or physical quantities (e.g. costs, length, weight). Alternatively, expert judgment can be expressed qualitatively in the form of textual descriptions, linguistic variables and natural language statements of extent or quantities (e.g. minimum life or characteristic life, burn-in, useful life or wear-out failure patterns).

Quantitative expert judgment can be considered to be data. Qualitative expert judgment, however, must be quantified in order for it also to be considered as data. Nevertheless, even if expert judgment is qualitative, it can be given the same considerations as data made available from tests or observations, particularly with respect to the following (Booker et al. 2000):
• Expert judgment is affected by how it is gathered. Elicitation methods take advantage of the body of knowledge on human cognition and motivation, and include procedures for countering effects arising from the phrasing of questions, response modes, and extraneous influences from both the elicitor and the expert (Meyer and Booker 1991).
• The methodology of experimental design (i.e. randomised treatment) is similarly
applied in expert judgment, particularly with respect to incompleteness of infor-
mation.
• Expert judgment has uncertainty, which can be characterised and subsequently analysed. Many experts are accustomed to giving uncertainty estimates in the form of simple ranges of values. In eliciting uncertainties, however, the natural tendency is to underestimate them.
• Expert judgment can be subject to several conditioning factors. These factors include the information to be considered, the phrasing of questions (Payne 1951), the methods of solving the problem (Booker and Meyer 1988), as well as the experts' assumptions (Ascher 1978). A formal, structured approach to elicitation allows better control over conditioning factors.
• Expert judgment can be combined with other quantitative data through Bayesian
updating, whereby an expert’s estimate can be used as a prior distribution for
initial reliability calculation. The expert’s reliability estimates are updated when
test data become available, using Bayesian methods (Kerscher et al. 1998).
• Expert judgment can be accumulated in knowledge systems with respect to technical applications (e.g. problem solving). For example, the knowledge system can address questions such as 'what is x under circumstance y?', 'what is the failure probability?', 'what is the expected effect of the failure?', 'what is the expected consequence?', 'what is the estimated risk?' or 'what is the criticality of the consequence?'.
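Such Bayesian updating of an expert estimate can be sketched with a conjugate Beta-Binomial model. The prior parameters and test counts below are hypothetical, chosen only to illustrate how an expert's reliability estimate is revised as test data become available (cf. Kerscher et al. 1998):

```python
# Sketch of Bayesian updating of an expert reliability estimate.
# A Beta(a, b) prior encodes the expert's judgment; binomial test
# results (successes and failures) update it in closed form.
# All numbers are illustrative, not taken from the source text.

def update_reliability(a_prior, b_prior, successes, failures):
    """Conjugate Beta-Binomial update: posterior is Beta(a+s, b+f)."""
    return a_prior + successes, b_prior + failures

def mean_reliability(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Expert judgment: "about 90% reliable", moderate confidence -> Beta(9, 1)
a, b = 9.0, 1.0
print(f"prior mean reliability: {mean_reliability(a, b):.3f}")      # 0.900

# Test data become available: 15 successes, 5 failures in 20 trials
a, b = update_reliability(a, b, successes=15, failures=5)
print(f"posterior mean reliability: {mean_reliability(a, b):.3f}")  # 0.800
```

The conjugate form keeps the update transparent: the expert's prior acts like pseudo-counts that are simply pooled with the observed test counts, so the estimate can be revised repeatedly over the equipment's lifetime.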
b) Uncertainty, Probability Theory and Fuzzy Logic Reviewed
A major portion of engineering design analysis focuses on propagating uncertainty
through the use of distribution functions of one type or another, particularly the
Weibull distribution in the case of reliability evaluation. Uncertainties enter into the analysis in a number of different ways. For instance, all data and information have uncertainties. Even when no data are available and estimates are elicited from experts, uncertainty values, usually in the form of ranges, are also elicited. In addition, mathematical and/or simulation models have uncertainties regarding their input–output relationships, as well as uncertainties in the choice of models and in defining model parameters.
Different measures and units are often involved in specifying the performances of the various systems being designed. To map these performances into common units, conversion factors are often required. These conversions can also have uncertainties and require representation in distribution functions (Booker et al. 2000).
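Such propagation of parameter uncertainty is commonly carried out by Monte Carlo sampling. The sketch below assumes hypothetical, uniformly distributed expert ranges for the Weibull shape parameter β and characteristic life η, and shows how they translate into a spread of reliability values at a given operating time:

```python
# Monte Carlo propagation of uncertainty in Weibull parameters.
# Experts supply ranges for the shape (beta) and characteristic
# life (eta); sampling them yields a distribution of R(t) values.
# The ranges and mission time below are hypothetical.
import math
import random

random.seed(1)

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

# Elicited ranges (assumed uniform for this sketch)
beta_range = (1.2, 2.5)       # wear-out region, beta > 1
eta_range = (800.0, 1200.0)   # characteristic life, hours

t_mission = 500.0
samples = [
    weibull_reliability(
        t_mission,
        random.uniform(*beta_range),
        random.uniform(*eta_range),
    )
    for _ in range(10_000)
]
samples.sort()
print(f"median R(t): {samples[5000]:.3f}")
print(f"90% interval: [{samples[500]:.3f}, {samples[9500]:.3f}]")
```

The resulting interval makes the elicited parameter uncertainty visible as an uncertainty band on the reliability estimate itself, rather than a single point value.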
Probability theory provides a coherent means for determining uncertainties. There are other interpretations of probability besides conventional distributions, such as the relative frequency theory and the subjective theory, as well as the Bayes theorem. Because of the flexibility of interpretation of the subjective theory (Bement et al. 2000a), it is perhaps the best approach to a qualitative evaluation of system performance and reliability, through the combination of diverse information.
For example, it is usually the case that some aspect of information relating to a specific design's system performance and/or its design reliability is known, and is utilised in engineering design analysis before observations can be made. Subjective interpretation of such information also allows for the consideration of one-of-a-kind failure events, and for interpreting these quantities as a minimal failure rate.
Because reliability is a common performance metric and is defined as a probability that the system performs to specifications, probability theory is necessary in reliability evaluation. However, in using expert judgment because data are unavailable, not all experts may think in terms of probability. The best approach is then to use alternatives such as possibility theory, fuzzy logic and fuzzy sets (Zadeh 1965), where experts think in terms of rules, such as if–then rules, for characterising a certain type of ambiguity uncertainty.
For example, experts usually have knowledge about the system, expressed in statements such as 'if the temperature is too hot, the component's expected life will rapidly diminish'. While this statement contains no numbers for analysis or for probability distributions, it does contain valuable information, and the use of membership functions is a convenient way to capture and quantify that information (Laviolette 1995; Smith et al. 1998).
Fig. 3.45 Theories for representing uncertainty distributions (Booker et al. 2000). [Figure: probability (crisp set) theory yields probability density functions f(t) and cumulative distribution functions F(t); fuzzy set and possibility theory yield membership functions and possibility distributions, which are bridged to PDFs/CDFs via likelihoods.]
However, reverting this information back into a probabilistic framework requires a bridging mechanism for the membership functions. Such a bridging can be accomplished using the Bayes theorem, whereby the membership functions may be interpreted as likelihoods (Bement et al. 2000b). This bridging is illustrated in Fig. 3.45, which depicts the various methods used for formulating uncertainty (Booker et al. 2000).
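The bridging of Fig. 3.45 can be illustrated with a minimal numerical sketch: elicited membership values over a set of candidate Weibull shape values are read as likelihoods and combined with a prior through Bayes' theorem. The candidate values, membership grades and prior below are all hypothetical:

```python
# Sketch of the bridging mechanism: treat elicited membership values
# as likelihoods and combine them with a prior over candidate Weibull
# shape values via Bayes' theorem. All numbers are hypothetical.

candidates = [0.5, 1.0, 1.5, 2.0, 3.0]   # candidate beta values
prior = [0.2, 0.2, 0.2, 0.2, 0.2]        # uniform prior over candidates
# Expert membership grades in the fuzzy set 'wear-out behaviour'
membership = [0.0, 0.1, 0.5, 0.9, 1.0]

# Bayes: posterior is proportional to prior x likelihood,
# with the membership function read as the likelihood
unnorm = [p * m for p, m in zip(prior, membership)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

for b, post in zip(candidates, posterior):
    print(f"beta = {b:.1f}: posterior = {post:.3f}")
```

The qualitative expert statement thus re-enters the probabilistic framework as a proper (normalised) distribution over the parameter, which can then be used in a quantitative reliability evaluation.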
c) Application of Fuzzy Logic and Fuzzy Sets in Reliability Evaluation

Fuzzy logic or, alternatively, fuzzy set theory provides a basis for mathematical modelling and a language in which to express quite sophisticated algorithms in a precise manner. For instance, fuzzy set theory is used to develop expert system models, which are fairly complex computer systems that model decision-making processes by a system of logical statements. Consequently, fuzzy set theory needs to be reviewed with respect to expert judgment in terms of possibilities, rather than probabilities, with the following definition (Bezdek 1993).
Fuzzy sets and membership functions reviewed Let X be a space of objects (e.g. estimated parameter values), and x be a generic element of X. A classical set A, A ⊆ X, is defined as a collection of elements or objects x ∈ X, such that each element x can either belong or not belong to the set A. By defining a characteristic or membership function for each element x in X, a classical set A can be represented by a set of ordered pairs (x, 0) or (x, 1), which indicate x ∉ A or x ∈ A respectively. Unlike conventional sets, a fuzzy set expresses the degree to which an element belongs to a set. Hence, the membership function of a fuzzy set is allowed to have values between 0 and 1, which denote the degree of membership of an element in the given set.
If X is a collection of objects denoted generically by x, then a fuzzy set A in X is defined as a set of ordered pairs

A = {(x, μ_A(x)) | x ∈ X} (3.172)
in which μ_A(x) is called the membership function (or MF, for short) for the fuzzy set A.
The MF maps each element of X to a membership grade (or membership value) between 0 and 1 (inclusive). Obviously, the definition of a fuzzy set is a simple extension of the definition of a classical (crisp) set in which the characteristic function is permitted to have any value between 0 and 1. If the value of the membership function is restricted to either 0 or 1, then A is reduced to a classical set. For clarity, references to classical sets consider ordinary sets, crisp sets, non-fuzzy sets, or just sets. Usually, X is referred to as the universe of discourse or, simply, the universe, and it may consist of discrete (ordered or non-ordered) objects or it can be a continuous space. However, a crucial aspect of fuzzy set theory, especially with respect to IIT, is understanding how membership functions are obtained.
The usefulness of fuzzy logic and mathematics based on fuzzy sets in reliability evaluation depends critically on the capability to construct appropriate membership functions for various concepts in various given contexts (Klir and Yuan 1995). Membership functions are therefore the fundamental connection between, on the one hand, empirical data and, on the other hand, fuzzy set models, thereby allowing for a bridging mechanism for reverting expert judgment on these membership functions back into a probabilistic framework, such as in the case of the definition of reliability.
Formally, the membership function μ_x is a function over some domain, or property space X, mapping to the unit interval [0, 1]. The crucial aspect of fuzzy set theory is taken up in the following question: what does the membership function actually measure? It is an index of the membership of a defined set, which measures the degree to which an object A with property x is a member of that set.
The usual definition of a classical set uses properties of objects to determine strict membership or non-membership. The main difference between classical set theory and fuzzy set theory is that the latter accommodates partial set membership. This makes fuzzy set theory very useful for modelling situations of vagueness, that is, non-probabilistic uncertainty. For instance, there is a fundamental ambiguity about the term 'failure characteristic' representing the parameter β of the Weibull probability distribution. It is difficult to put many items unambiguously into or out of the set of equipment currently in the burn-in or infant mortality phase, or in the service life phase, or in the wear-out phase of their characteristic life. Such cases are difficult to classify and, of course, depend heavily on the definition of 'failure'; in turn, this depends on the item's functional application. It is not so much a matter of whether the item could possibly be in a well-defined set but rather that the set itself does not have firm boundaries.
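The fuzzy boundaries described above can be made concrete with piecewise-linear (trapezoidal) membership functions over the Weibull shape parameter β. The breakpoints below are hypothetical; in practice they would be elicited from experts:

```python
# Sketch of membership functions for the fuzzy 'failure characteristic'
# sets over the Weibull shape parameter beta. The breakpoints are
# hypothetical; a real study would elicit them from experts.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def infant_mortality(beta):
    return trapezoid(beta, -1.0, 0.0, 0.6, 1.0)   # beta well below 1

def useful_life(beta):
    return trapezoid(beta, 0.6, 0.9, 1.1, 1.4)    # beta near 1

def wear_out(beta):
    return trapezoid(beta, 1.1, 1.6, 10.0, 11.0)  # beta well above 1

# A beta of 1.2 belongs partially to two sets at once
beta = 1.2
print(f"useful life: {useful_life(beta):.2f}")   # 0.67
print(f"wear-out:    {wear_out(beta):.2f}")      # 0.20
```

An item with β = 1.2 is thus partly in the 'useful life' set and partly in the 'wear-out' set, which is exactly the absence of firm set boundaries described above.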
Unfortunately, there has been substantial confusion in the literature about the measurement level of a membership function. The general consensus is that a membership function is a ratio scale with two endpoints. However, in a continuous order-dense domain—that is, one in which there is always a value possible between any two given values, with no 'gaps' in the domain—the membership function may be considered as being not much different from a mathematical interval (Norwich and Turksen 1983). The membership function, unlike a probability measure, does not fulfil the concatenation requirement that underlies any ratio scale (Roberts 1979). The simplest way to understand this is to consider the following concept: it is meaningful to add the probabilities of two mutually exclusive events, A and B, to obtain the probability of their union, because a probability measure is a ratio scale

P(A ∪ B) = P(A) + P(B) . (3.173)
It is not, however, meaningful to add the membership values of two objects or values in a fuzzy set. For instance, the sum μ_A + μ_B may be arithmetically possible, but it is certainly not interpretable in terms of fuzzy sets. There does not seem to be any other concatenation operator in general that would be meaningful (Norwich and Turksen 1983).
For example, if one were to add together two failure probability values in a series configuration, it makes sense to say that the probability of failure of the combined system is (approximately, for small probabilities) the sum of the two probabilities. However, if one were to take two failure probability parameters that are elements of fuzzy sets (such as the failure characteristic parameter β of the Weibull probability distribution), and attempt to sensibly add these together, there is no natural way to combine the two—unlike the failure probability.
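The probability side of this contrast can be checked numerically: for small, independent component failure probabilities in series, the sum is a close approximation to the exact combined failure probability. The component values below are illustrative:

```python
# Failure probabilities of independent components in series combine
# naturally; the sum is the standard rare-event approximation to the
# exact result 1 - (1-p1)(1-p2). The values below are illustrative.

p1, p2 = 0.01, 0.02

exact = 1.0 - (1.0 - p1) * (1.0 - p2)   # exact series failure probability
approx = p1 + p2                         # sum, valid for small p

print(f"exact:  {exact:.6f}")   # 0.029800
print(f"approx: {approx:.6f}")  # 0.030000
```

No analogous operation exists for membership grades: summing μ values has no set-theoretic interpretation, which is the point made above.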
By far the most common method for assigning membership is based on direct, subjective judgments by one or more experts. This is the method recommended for IIT. In this method, an expert rates values (such as the Weibull parameters) on a membership scale, assigning membership values directly and with no intervening transformations. For conceptually simple sets such as 'expected life', this method achieves the objective quite well, and should not be neglected as a means of obtaining membership values. However, the method has many shortcomings. Experts are often better with simpler estimates—e.g. paired comparisons or generating ratings on several more concrete indicators—than they are at providing values for one membership function of a relatively complex set.
Membership functions and probability measures One of the most controversial issues in uncertainty modelling and the information sciences is the relationship between probability theory and fuzzy sets. The main points are as follows (Dubois and Prade 1993a):
• Fuzzy set theory is a consistent body of mathematical tools.
• Although fuzzy sets and probability measures are distinct, there are several bridges relating them, including random sets, belief functions and likelihood functions.
• Possibility theory stands at the crossroads between fuzzy sets and probability theory.
• Mathematical algorithms that behave like fuzzy sets exist in probability theory, in that they may produce random partial sets. This does not mean that fuzziness is reducible to randomness.
• There are ways of approaching fuzzy sets and possibility theory that are not conducive to probability theory.
Some interpretations of fuzzy sets are in agreement with probability calculus, others are not. However, despite misunderstandings between fuzzy sets and probabilities, it is just as essential to consider probabilistic interpretations of membership functions (which may help in membership function assessment) as it is to consider non-probabilistic interpretations of fuzzy sets. Some risk of confusion may be present, though, in the way various definitions are understood. From the original definition (Zadeh 1965), a fuzzy set F on a universe U is defined by a membership function μ_F : U → [0, 1], where μ_F(u) is the grade of membership of element u in F (for simplicity, let U be restricted to a finite universe).
In contrast, a probability measure P is a mapping 2^U → [0, 1] that assigns a number P(A) to each subset of U, and satisfies the axioms

P(U) = 1 ; P(∅) = 0 (3.174)
P(A ∪ B) = P(A) + P(B) if A ∩ B = ∅ . (3.175)
P(A) is the probability that an ill-known single-valued variable x ranging on U coincides with the fixed, well-known set A. A typical misunderstanding is to confuse the probability P(A) with a membership grade. When μ_F(u) is considered, the element u is fixed and known, and the set is ill defined whereas, with the probability P(A), the set A is well defined while the value of the underlying variable x, to which P is attached, is unknown. Such a set-theoretic calculus for probability distributions has been developed under the name of Lebesgue logic (Bennett et al. 1992).
Possibility theory and fuzzy sets reviewed Related to fuzzy sets is the development of the theory of possibility (Zadeh 1978), and its expansion (Dubois and Prade 1988). Possibility theory appears as a more direct contender to probability theory than do fuzzy sets, because it also proposes a set-function that quantifies the uncertainty of events (Dubois and Prade 1993a).
Consider a possibility measure Π on a finite set U as a mapping from 2^U to [0, 1] such that

Π(∅) = 0 (3.176)
Π(A ∪ B) = max(Π(A), Π(B)) . (3.177)
The condition Π(U) = 1 is to be added for normal possibility measures. These are completely characterised by the following possibility distribution π : U → [0, 1] (such that π(u) = 1 for some u ∈ U, in the normal case), since

Π(A) = max{π(u), u ∈ A} .
In the infinite case, the equivalence between π and Π requires that Eq. (3.177) be extended to an infinite family of subsets. Zadeh (1978) views the possibility distribution π as being determined by the membership function μ_F of a fuzzy set F. This does not mean, however, that the two concepts of a fuzzy set and of a possibility distribution are equivalent (Dubois and Prade 1993a).
Zadeh's equation, given as π_x(u) = μ_F(u), is similar to equating the likelihood function to a conditional probability, where π_x(u) represents the relationship π(x = u | F), since it estimates the possibility that the variable x is equal to the element u, given the incomplete state of knowledge 'x is F'. Furthermore, μ_F(u) estimates the degree of compatibility of the precise information x = u with the statement 'x is F'.
Possibility theory and probability theory may be viewed as complementary theories of uncertainty that model different kinds of states of knowledge. However, possibility theory further has the ability to model ignorance in a non-biased way, while probability theory, in its Bayesian approach, cannot account for ignorance. This can be explained with the definition of Bayes' theorem, which incorporates the concept of conditional probability.
In this case, conditional probability cannot be used directly in cases where ignorance prevails, for example:
'of the i components belonging to system F, j definitely have a high failure rate'.
Almost all the values for these variables are unknown. However, what might be known, if only informally, is how many components might fail out of a set F if a value for the characteristic life parameter μ of the system were available. As indicated previously, this parameter is by definition the mean operating period in which the likelihood of component failure is 63% or, conversely, it is the operating period during which at least 63% of the system's components are expected to fail. Thus:

P(component failure f | μ) ≈ 63% .
In this case, the Weibull characteristic life parameter μ must not be confused with the possibility distribution μ, and it would be safer to consider the probability in the following format:

P(component failure f | characteristic life c) ≈ 63% .
Bayes' theorem of probability states that if the likelihood of component failure and the number of components in the system are known, then the conditional probability of the characteristic life of the system (i.e. MTBF) may be evaluated, given an estimated number of component failures. Thus

P(c | f) = P(c) P(f | c) / P(f) (3.178)

or:

|c ∩ f| / |f| = (|c| / F) · (|f ∩ c| / |c|) · (F / |f|) , (3.179)

where |c ∩ f| = |f ∩ c|.
The point of Bayes' theorem is that the probabilities on the right side of the equation are easily available, by comparison to the conditional probability on the left side. However, if the estimated number of component failures is not known (ignorance of the probability of failure), then the conditional probability of the characteristic life of the system (MTBF) cannot be evaluated. Thus, probability theory in its Bayesian approach cannot account for ignorance.
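Equations (3.178) and (3.179) can be verified numerically with hypothetical counts over a finite population of components, where the conditional probabilities reduce to ratios of set cardinalities:

```python
# Numerical check of Bayes' theorem in its cardinality form
# (Eqs. 3.178-3.179): with counts over a finite population F of
# components, P(c|f) = P(c) P(f|c) / P(f). Counts are hypothetical.

F = 100      # total number of components
n_c = 40     # components within characteristic life c
n_f = 25     # components that failed
n_cf = 20    # components in c that failed, i.e. |c ∩ f|

P_c = n_c / F
P_f = n_f / F
P_f_given_c = n_cf / n_c

P_c_given_f = P_c * P_f_given_c / P_f   # Bayes' theorem (Eq. 3.178)
print(f"P(c|f) = {P_c_given_f:.2f}")    # 0.80

# The cardinality form (Eq. 3.179) gives the same value directly
assert abs(P_c_given_f - n_cf / n_f) < 1e-12
```

With the failure count known, both routes agree; remove the count n_f (ignorance) and the left side simply cannot be computed, which is the limitation discussed above.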
On the contrary, possibility measures are decomposable (however, with respect to union only), and

N(A) = 1 − Π(Ā) , (3.180)

where Ā is the complement of A, and N(A) is a degree of certainty: the certainty of A is 1 minus the impossibility of A. This is compositional with respect to intersection only, for example

N(A ∩ B) = min(N(A), N(B)) . (3.181)
When one is totally ignorant about event A, we have

Π(A) = Π(Ā) = 1 and N(A) = N(Ā) = 0 , (3.182)

while

Π(A ∩ Ā) = 0 and N(A ∪ Ā) = 1 . (3.183)
This ability to model ignorance in a non-biased way is a typical asset of possibility
theory.
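This behaviour can be demonstrated with a minimal implementation of Π and N over a finite universe; the universe and possibility distribution below are hypothetical:

```python
# Sketch of possibility (Pi) and necessity (N) measures built from a
# possibility distribution pi over a finite universe, illustrating
# Eqs. 3.176-3.183. The universe and distribution are hypothetical.

U = {"u1", "u2", "u3", "u4"}

def Pi(A, pi):
    """Possibility of A: max of pi over A (0 for the empty set, Eq. 3.176)."""
    return max((pi[u] for u in A), default=0.0)

def N(A, pi):
    """Necessity of A: 1 - possibility of the complement (Eq. 3.180)."""
    return 1.0 - Pi(U - A, pi)

# Total ignorance: every element is fully possible
pi_ignorant = {u: 1.0 for u in U}

A = {"u1", "u2"}
comp = U - A

print(Pi(A, pi_ignorant), Pi(comp, pi_ignorant))  # 1.0 1.0  (Eq. 3.182)
print(N(A, pi_ignorant), N(comp, pi_ignorant))    # 0.0 0.0
print(Pi(A & comp, pi_ignorant))                  # 0.0      (Eq. 3.183)
print(N(A | comp, pi_ignorant))                   # 1.0
```

Under total ignorance both A and its complement are fully possible yet neither is certain, which a single Bayesian prior cannot express without committing to some probability split between them.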
The likelihood function Engineering design analysis is rarely involved with directly observable quantities. The concepts used for design analysis are, by and large, set at a fairly high level of abstraction and related to abstract design concepts. The observable world impinges on these concepts only indirectly. Requiring design engineers to rate conceptual objects on membership in a highly abstract set may be very difficult; time and resources would thus be better spent using expert judgment to rate conceptual objects on more concrete scales, subsequently combined into a single index by an aggregation procedure (Klir and Yuan 1995).
Furthermore, judgment bias or inconsistency can creep in when ratings need to be estimated for conceptually complicated sets—which abound in engineering design analysis. It is much more difficult to defend a membership rating that comes solely from expert judgment when there is little to support the procedure other than the expert's status as an expert. It is therefore better to have a formal, transparent procedure in place, such as IIT. In addition, it is essential that expert judgment relates to empirical evidence (Booker et al. 2000).
It is necessary to establish a relatively strong metric basis for membership functions for a number of reasons, the most important being the need to revert information that contains no numbers for analysis or for probability distributions, and that was captured and quantified by the use of membership functions, back into a probabilistic framework for further analysis. As indicated before, such a bridging can be accomplished using the Bayes theorem, whereby the membership functions may be interpreted as likelihoods (Bement et al. 2000b).
