
774 5 Safety and Risk in Engineering Design
Fig. 5.126 Fuzzy logic for managing uncertain data
a theoretical overview of reliability, availability, maintainability and safety in engineering design—the methodology presented in this handbook.
Plant analysis in the AIB blackboard is the working memory of the knowledge-
based expert systems, consisting of a global database of facts relating to the integrity
of engineering design, which are used for establishing automated continual design
reviews. The basic aims of automated continual design reviews are to automatically
assess system requirements and allocations to ensure that the design specifications
are complete; to automatically compare the design output against design specifications; to automatically present the risks associated with a collaborative and continuous design effort; and to continually allow for decision-making in selecting the most
suitable design amongst the current design solutions.
Figures 5.128 and 5.129 illustrate the typical AIB blackboard format of an automated continual design review. Figure 5.128 shows the blackboard systems hierarchy navigation and selection format whereby critical components can be viewed with regard to their systems relationships.
Figure 5.129 shows a typical criticality assessment of a component, based on
condition and performance obtained from an FMECA analysis.
The artificial intelligence blackboard model—overview Artificial intelligence-
based strategies for decision-making and, in particular, for decisions concerning the
5.4 Application Modelling of Safety and Risk in Engineering Design 775
Fig. 5.127 AIB blackboard model with plant analysis overview option
integrity of engineering design are centred around three approaches termed deter-
ministic knowledge, probabilistic knowledge and possibilistic knowledge.
Deterministic knowledge, in engineering design integrity formulation, is based
on a well-defined systems structure and definition of the operational and physical
functions of equipment, the usefulness of which depends on the ability to relate
the information specifically to failure conditions (or failure modes) in identifying
problems of equipment failure consequences.
Probabilistic knowledge is gained mainly from a statistical analysis of the probable occurrences of events, such as component failures, in order to predict the expected occurrence of these events in the future to be able to design-out problems or
to implement some form of preventive action.
Possibilistic knowledge focuses primarily on imprecision or uncertainty that is
intrinsic to equipment degradation. Imprecision here is meant to express a sense of
vagueness, rather than the lack of any knowledge at all about predicted equipment
condition, particularly its physical condition. In other words, possibilistic knowl-
edge concerns the concept of ‘fuzziness’, and not ‘randomness’.
The application of fuzzy logic expert systems focuses on the use of expert systems
technology and fuzzy logic to achieve intelligent computer automated methodology
to determine the integrity of engineering design. The most important impact areas
of expert systems on the integrity of engineering design are:
Fig. 5.128 Automated continual design review: component SBS
• automatic checking of design constraints that affect the design's integrity, allowing for alternatives to be considered in a collaborative design environment;
• automation of complex tasks and activities for determining design integrity
where expertise is specialised and technical;
• strategies for searching in the space of alternative designs, and monitoring of
progress towards the targets of achieving the required design integrity;
• integration of diverse knowledge sources in an AIB blackboard system, with ex-
pertise applied concurrently to the problem of ensuring design integrity;
• provision of intelligent computer automated methodology for determining the
integrity of engineering design through automated continual design reviews.
5.4.2 Evaluation of Modelling Results
As previously indicated, blackboard systems consist mainly of a set of knowledge
sources and a blackboard data structure. A blackboard knowledge source is a highly
specialised, highly independent process that takes inputs from the blackboard data
structure, performs a computation, and places the results of the computation back in

Fig. 5.129 Automated continual design review: component criticality
the blackboard data structure. This blackboard data structure is a centralised global
data structure partitioned in a hierarchical manner and used to represent the problem
domain (in this case, the engineering design problem), and acts as a shared memory
visible to all of the knowledge sources to allow intercommunication between the
knowledge sources. The blackboard data structure contains shared blackboard data
objects and can be accessed by all of the knowledge sources. This design allows
for an opportunistic control strategy that enables a knowledge source to contribute
towards the solution of the current problem without knowing which of the other
knowledge sources will use the information.
Blackboard systems are a natural progression of expert systems into a more powerful problem-solving technique. They generally provide a way for several highly
specialised knowledge sources to cooperate to solve larger and more complex prob-
lems. Due to the hierarchical structure of the blackboard, each data object on
the blackboard will usually have only one knowledge source that can update it.
Although these knowledge sources are often referred to as ‘experts’, knowledge
sources are not restricted to expert systems such as the ExSys© Expert System (ExSys 2000) or other AI systems, and include the ability to add conventionally
coded software such as the artificial intelligence-based (AIB) model, to cooperate
in solving problems.
Many knowledge sources are numeric or algorithmic in nature (i.e. the AIB
blackboard knowledge source for artificial neural network (ANN) computation that
is specifically applied for processing time-varying information, such as non-linear
dynamic modelling, time series prediction, adaptive control, etc. of various engineering design problems). The use of multiple, independent knowledge sources allows each knowledge source to use the data representation scheme and problem-solving strategy that best suit the specific purpose of that knowledge source. These specialised knowledge sources are thus easier to develop and can be hosted on distributed hardware.
The use of opportunistic problem-solving and highly specialised knowledge
sources allows a set of distributed knowledge sources to cooperate concurrently to
solve large, complex design problems. However, blackboard systems are not easily
developed, especially where a high degree of concurrent knowledge source execution must be achieved while maintaining knowledge consistency on the blackboard.
In general, blackboard systems have not attained their apparent potential, because
there are no established tools or methods to analyse their performance.
The lack of a coherent set of performance analysis tools has in many cases resulted in the revision of a poorly designed system being ignored once the system had been implemented. This lack of the appropriate performance analysis tools for
evaluating blackboard system design is one of the reasons why incorporating con-
currency into the blackboard problem-solving model has not generally been suc-
cessful. Consequently, a method for the validation of blackboard system design has
been developed (McManus 1991). This method has been applied to the AIB black-
board system for determining the integrity of process engineering design.
Knowledge source connectivity analysis is a method for evaluating blackboard system performance using a formalised model for blackboard systems design. A description of the blackboard data structure, the function computed by each knowledge source, and the knowledge source's input and output variables are sufficient to create a formalised model of a blackboard system (McManus 1992). Connectivity analysis determines the data transfers between the knowledge sources and data migration
across the blackboard.
The attributes of specialisation, serialisation and interdependence are evaluated
for each knowledge source. This technique allows for the evaluation of a blackboard
design specification before the blackboard system is developed. This also allows the designer to address knowledge source connectivity problems, feedback loops and interdependence problems as a part of the initial design process. Knowledge source connectivity analysis measures the output set overlap, functional connectivity, and
output to input connectivity between pairs of knowledge sources. Output set overlap
is a measure of the specialisation of pairs of knowledge sources, whereas functional
connectivity between pairs of knowledge sources is a measure of their serialisation,
and output to input connectivity is a measure of their interdependence.
a) The Formalised Model for Blackboard Systems Design
Knowledge source connectivity analysis requires a specification of the system de-
veloped using a formalised model for blackboard systems (McManus 1992). Black-
board systems can be modelled as a blackboard data structure containing shared
blackboard data objects, and a set of cooperating knowledge sources that can access
all of the blackboard data objects. These knowledge sources are processes that take
inputs from the blackboard, perform some computation, then place the results back
on the blackboard for other design teams in a collaborative design environment.
Blackboard data structure A blackboard data structure is a global data structure consisting of a set of blackboard data objects, {d_1, ..., d_j}, used to represent the problem domain.
Blackboard data object Each blackboard data object is a predefined data object type with a point value or a range of values. A blackboard data object, d_j, is thus an object that has a single value or multiple values.
Knowledge source A knowledge source, ks_j, of a set of knowledge sources, β = {ks_1, ..., ks_j}, consists of the following:
• a set of input variables, IV = {iv_1, ..., iv_n},
• a set of input conditions, IC = {ic_1, ..., ic_n},
• a set of output variables, OV = {ov_1, ..., ov_m},
• a description of the computation delivered by the knowledge source,
• a set of preconditions, PR = {pr_1, ..., pr_k},
• a set of post-conditions, PT = {pt_1, ..., pt_k} and
• an input queue, IQ.
A knowledge source's input conditions are a set of Boolean variables used to notify a knowledge source when one of its input variables has been updated. The preconditions are a set of Boolean functions that all must be TRUE for a knowledge source to be activated, and the post-conditions are a set of Boolean functions that all must be TRUE for a knowledge source to post the result of its computation to the blackboard. If all of a knowledge source's activation conditions are met while it is executing, the input queue stores the knowledge source's input variables.
There are two classes of input variables pertaining to knowledge sources: ex-
plicit input variables and generic input variables. An explicit input variable spec-
ifies a single, unique blackboard data object that is used as the input variable to
a knowledge source. A knowledge source can use only the blackboard data object
specified by the explicit input variable as a valid input. A generic input variable
specifies a class or type of blackboard data object that can be used as the input
variable to the knowledge source. The knowledge source can accept an instance
of a blackboard data object of the specified class as an input variable. The use of
generic input variables allows development of knowledge sources that function on
a class of blackboard data objects.
Knowledge sources can be classified by their input variables:
• Explicit knowledge sources have only explicit input variables;
• Mixed knowledge sources have both explicit and generic input variables;
• Generic knowledge sources have only generic input variables.
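The knowledge-source interface and this classification can be sketched directly in code; the class layout and names below are illustrative assumptions, not the handbook's implementation:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeSource:
    # Sketch of a knowledge source's interface; field names are illustrative.
    name: str
    explicit_inputs: frozenset = frozenset()  # single, unique blackboard data objects
    generic_inputs: frozenset = frozenset()   # classes/types of blackboard data objects
    outputs: frozenset = frozenset()

def classify(ks):
    # Classification by input variables, as in the text above.
    if ks.explicit_inputs and ks.generic_inputs:
        return "mixed"
    return "explicit" if ks.explicit_inputs else "generic"

ks = KnowledgeSource("ks2", explicit_inputs=frozenset({"d_i1"}),
                     generic_inputs=frozenset({"performance_spec"}))
print(classify(ks))  # mixed
```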
Blackboard system A blackboard system is used to allow intercommunication of knowledge sources, and acts as a shared memory that is visible to all of the knowledge sources. A blackboard system, B, is a tuple ⟨X, P, β, Id, θ⟩, where:
• X is a set of blackboard data objects, X = {d_1, ..., d_i};
• P is the set of blackboard data object states, P = V_1 × V_2 × ... × V_i, where V_i is the set of all valid values for blackboard data object d_i;
• β is the set of knowledge sources, β = {ks_1, ..., ks_j};
• each knowledge source's domain is a subset of P, and its range is a subset of P;
• Id is an i-vector describing the i initial values of the blackboard data objects, Id ∈ P;
• θ is a relation on β, where θ ⊆ β × β and (ks_j, ks_k) ∈ θ if and only if ∃ d_j ∈ X where d_j ∈ OV(ks_j) ∧ d_j ∈ IV(ks_k);
• if (ks_j, ks_k) ∈ θ, then ks_k is a successor of ks_j, and ks_j is a predecessor of ks_k.
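The relation θ follows mechanically from the knowledge sources' input and output sets. A minimal sketch, assuming each knowledge source is represented simply by its (IV, OV) pair of sets:

```python
def build_theta(beta):
    # θ ⊆ β × β: (ks_j, ks_k) ∈ θ iff some data object is in OV(ks_j) and in
    # IV(ks_k), making ks_k a successor of ks_j.
    return {(j, k)
            for j, (_, ov_j) in beta.items()
            for k, (iv_k, _) in beta.items()
            if j != k and ov_j & iv_k}

# Each knowledge source maps to its (IV, OV) pair; names are illustrative.
beta = {
    "ks1": ({"d1"}, {"d2"}),  # reads d1, writes d2
    "ks2": ({"d2"}, {"d3"}),  # reads d2 -> successor of ks1
    "ks3": ({"d5"}, {"d6"}),  # shares no data objects -> unrelated
}
print(build_theta(beta))  # {('ks1', 'ks2')}
```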
b) Performance Analysis of the Blackboard Systems Design
The performance of a blackboard system design can be analysed in the following manner (McManus 1991): for each knowledge source ks_j in β there is an input set, Ψ_j, containing all of the input variables of ks_j, and an output set, Φ_j, containing all of the output variables of ks_j:

Ψ_j = {iv_1, iv_2, ..., iv_n}   (5.118)
Φ_j = {ov_1, ov_2, ..., ov_m}
Once Ψ_j and Φ_j have been established for all ks_j in β, the sets Γ_j,k and θ_j,k can be computed for all knowledge source pairs {ks_j, ks_k} in β (j ≠ k):

Γ_j,k = Φ_j ∩ Φ_k   (5.119)
θ_j,k = Φ_j ∩ Ψ_k
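Since Γ_j,k and θ_j,k are plain set intersections, they can be computed directly; the variable contents below are illustrative:

```python
def overlap_sets(phi_j, phi_k, psi_k):
    # Γ_j,k = Φ_j ∩ Φ_k (shared outputs); θ_j,k = Φ_j ∩ Ψ_k (outputs of ks_j
    # consumed as inputs by ks_k) — Eq. 5.119.
    return phi_j & phi_k, phi_j & psi_k

gamma_12, theta_12 = overlap_sets({"ov1", "ov2"}, {"ov2", "ov3"}, {"ov1", "iv9"})
print(gamma_12, theta_12)  # {'ov2'} {'ov1'}
```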
As indicated, output set overlap is a measure of the specialisation of pairs of
knowledge sources, whereas functional connectivity between the pairs of knowl-
edge sources is a measure of their serialisation, and output to input connectivity is
a measure of their interdependence.
Specialisation value The output set overlap is a measure of the specialisation of pairs of knowledge sources, whereby the set Γ_j,k is computed to assess functional specialisation. The cardinality of the set Γ_j,k for each pair {ks_j, ks_k} in β is a measure of the output overlap for the pair {ks_j, ks_k} (i.e. a measure of the specialisation of pairs of knowledge sources). Knowledge source pairs {ks_j, ks_k} with a large output overlap imply that ks_j and ks_k share a large number of output variables and, thus, have similar functions. Knowledge source pairs {ks_j, ks_k} with a low overlap imply that ks_j and ks_k have different functions. A proposed heuristic to measure knowledge source specialisation is to compute a specialisation value, Ω_j,k, for each pair {ks_j, ks_k} in β. Specialisation values measure the output set overlap of a pair of knowledge sources, {ks_j, ks_k}. The specialisation value is computed using the following (McManus 1992):

Ω_j,k = card(Γ_j,k) / min(card(Φ_j), card(Φ_k))   (5.120)
The cardinality of the set Γ_j,k divided by the minimum of the cardinalities of the sets Φ_j and Φ_k computes a percentage of overlap between the set Γ_j,k and the smaller of the sets Φ_j and Φ_k. As Ω_j,k approaches 1.0, the output overlap between ks_j and ks_k increases. As Ω_j,k approaches 0.0, the output overlap between ks_j and ks_k decreases. For the limiting cases, where Φ_j ⊆ Φ_k or Φ_k ⊆ Φ_j, we know that Ω_j,k = 1.0, and ks_j and ks_k compute the same outputs—thus, the knowledge sources are not specialised. However, if Γ_j,k = ∅ (where ∅ is the null set), then Ω_j,k = 0.0, and the two knowledge sources have no common outputs and are highly specialised in relation to each other.
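Eq. 5.120 and its limiting cases can be checked with a short sketch (set contents are illustrative):

```python
def specialisation(phi_j, phi_k):
    # Ω_j,k = card(Γ_j,k) / min(card(Φ_j), card(Φ_k)) — Eq. 5.120.
    gamma = phi_j & phi_k
    return len(gamma) / min(len(phi_j), len(phi_k))

# Limiting cases from the text:
print(specialisation({"a", "b"}, {"a", "b", "c"}))  # 1.0 — Φ_j ⊆ Φ_k: not specialised
print(specialisation({"a"}, {"b"}))                 # 0.0 — Γ = ∅: highly specialised
```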
Serialisation value The functional connectivity between pairs of knowledge sources is a measure of their serialisation, whereby the set θ_j,k is computed to assess serialisation. The cardinality of the set θ_j,k for each pair {ks_j, ks_k} in β, compared to the cardinality of the set Ψ_k, is a measure of the input overlap for the pair {ks_j, ks_k} (i.e. a measure of the serialisation of pairs of knowledge sources). Knowledge source pairs {ks_j, ks_k} with a large input overlap imply that ks_j and ks_k share a large number of output to input variables and, thus, form serialised execution. Knowledge source pairs {ks_j, ks_k} with a low input overlap imply that ks_j and ks_k can execute separately. A serialisation value measures the functional connectivity between a pair of knowledge sources, where the functional connectivity is the relative output to input ratio. A proposed heuristic, therefore, to measure knowledge source serialisation is to compute a serialisation value, Σ_j,k, for each pair {ks_j, ks_k} in β. Serialisation values measure the functional connectivity of a pair of knowledge sources, {ks_j, ks_k}.
The serialisation value is computed using (McManus 1992):

Σ_j,k = card(θ_j,k) / card(Ψ_k)   (5.121)
This heuristic computes the percentage of the input data objects for knowledge source ks_k that are provided by knowledge source ks_j. The cardinality of the set θ_j,k divided by the cardinality of the set Ψ_k computes a percentage of input overlap between θ_j,k and Ψ_k.

As Σ_j,k approaches 1.0, the percentage of overlap between θ_j,k and Ψ_k is greater, and the serialisation between ks_j and ks_k strengthens. As Σ_j,k approaches 0.0, the serialisation between ks_j and ks_k weakens. For the limiting cases, if Ψ_k ⊆ Φ_j, then Σ_j,k = 1.0, and ks_j and ks_k have direct serialisation. If θ_j,k = ∅ (where ∅ is the null set), then Σ_j,k = 0.0, and the two knowledge sources are independent and can execute concurrently.
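Eq. 5.121 and its limiting cases admit the same kind of sketch (set contents are illustrative):

```python
def serialisation(phi_j, psi_k):
    # Σ_j,k = card(θ_j,k) / card(Ψ_k) — Eq. 5.121: the fraction of ks_k's
    # inputs supplied by ks_j's outputs.
    return len(phi_j & psi_k) / len(psi_k)

print(serialisation({"a", "b", "c"}, {"a", "b"}))  # 1.0 — Ψ_k ⊆ Φ_j: direct serialisation
print(serialisation({"a"}, {"x", "y"}))            # 0.0 — independent: can run concurrently
```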
Strongly connected knowledge sources have high serialisation values. These knowledge sources form serialised execution pipelines, with each knowledge source blocking completion of any computation for the same input data objects by other knowledge sources. Unless multiple copies of the serialised knowledge sources are developed, the serial pipelines reduce the blackboard's capability for concurrent execution. Weakly connected knowledge sources reduce knowledge source serialisation and increase the opportunity for concurrent knowledge source execution. Knowledge source pairs that have high serialisation values are best suited for knowledge source integration, whereby the first knowledge source provides all of the inputs to the second knowledge source. Such a serially connected pair of knowledge sources can be reduced to a single knowledge source that combines the functionality of the two.
Interdependence value The output to input connectivity between pairs of knowledge sources is a measure of their interdependence, whereby the set θ_j,k is computed to assess interdependence. The cardinality of the set θ_j,k for each pair {ks_j, ks_k} in β is a measure of the output to input connectivity for the pair {ks_j, ks_k}. Knowledge source pairs {ks_j, ks_k} with a high output to input connectivity imply that ks_k is highly dependent on ks_j for its input variables. Knowledge source pairs {ks_j, ks_k} with a low output to input connectivity imply that ks_k's inputs are independent of ks_j's outputs.
A proposed heuristic to measure knowledge source interdependence is to compute an interdependence value, Π_j,k, for each pair {ks_j, ks_k} in β. Interdependence values measure the output to input connectivity between knowledge sources, {ks_j, ks_k}. The interdependence value is computed using the following (McManus 1992):

Π_j,k = card(θ_j,k) / min(card(Φ_j), card(Ψ_k))   (5.122)
This heuristic computes the percentage of overlap between the sets Φ_j and Ψ_k, or the percentage of output data objects of ks_j that are used as input data objects by ks_k. The cardinality of the set θ_j,k divided by the minimum of the cardinalities of the sets Φ_j and Ψ_k computes a percentage of overlap between the set θ_j,k and the smaller of the sets Φ_j and Ψ_k. As Π_j,k approaches 1.0, the output to input connectivity between ks_j and ks_k strengthens and the knowledge sources become more interdependent. As Π_j,k approaches 0.0, the output to input connectivity between ks_j and ks_k weakens and the knowledge sources become independent. For the limiting cases, if Φ_j ⊆ Ψ_k, then Π_j,k = 1.0, and ks_j and ks_k have direct output to input connectivity and are interdependent. If the set θ_j,k = ∅ (where ∅ is the null set), then Π_j,k = 0.0, and the two knowledge sources have no output to input connectivity and are independent.
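Eq. 5.122 can likewise be sketched and checked against its limiting cases (set contents are illustrative):

```python
def interdependence(phi_j, psi_k):
    # Π_j,k = card(θ_j,k) / min(card(Φ_j), card(Ψ_k)) — Eq. 5.122.
    return len(phi_j & psi_k) / min(len(phi_j), len(psi_k))

print(interdependence({"a"}, {"a", "b"}))  # 1.0 — Φ_j ⊆ Ψ_k: interdependent
print(interdependence({"a"}, {"b"}))       # 0.0 — no output to input connectivity
```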
c) Evaluation of the AIB Blackboard Model for Determining the Integrity
of Engineering Design
The AIB blackboard model for determining the integrity of engineering design includes subsets of the knowledge sources and blackboard data objects that are used by the knowledge-based expert system section. This knowledge-based expert system section allows for the development of various expert systems, and is structured into facts, functions, conditions, constraints, rules and goals related to the subsets of the knowledge sources and blackboard data objects of the process analysis, plant analysis and operations analysis sections. The primary subsets of the knowledge sources for the process analysis and plant analysis sections are described below in accordance with Fig. 5.82, illustrating the AIB blackboard model for engineering design integrity.
Process analysis section
• Let Ks_1 be the process definition module. This knowledge source makes use of six global data object inputs—d_i1, d_i2, d_i3, d_i4, d_i5 and d_i6, which can be represented by the set of input variables IV_6 = {iv_1, ..., iv_n}—as well as a process description input, d_i7, and computes five data object outputs that can be represented by the set of output variables OV_5 = {ov_1, ..., ov_m}, for the data object outputs d_o1 to d_o5.
The data object inputs d_i1 to d_i7 and data object outputs d_o1 to d_o5 are:
d_i1 = Plant/facility        d_i7 = Process description
d_i2 = Operation/area        d_o1 = Process sequence
d_i3 = Section/building      d_o2 = Mass balance
d_i4 = System/process        d_o3 = Heat balance
d_i5 = Assembly/unit         d_o4 = Energy balance
d_i6 = Component/item        d_o5 = Utilities balance.
• Let Ks_2 be the performance assessment module. This knowledge source makes use of the six global data object inputs d_i1, d_i2, d_i3, d_i4, d_i5 and d_i6, as well as a performance specification set, d_i8, and computes a performance output variable set, d_o6.
The performance specification set d_i8 can be represented by the set of input variables IV_8 = {iv_1, ..., iv_n}, where d_i8 = performance specification data object with IV_8 = {efficiency, flow, precipitation, throughput, output, pressure, viscosity, absorption, temperature, losses, etc.}.
The performance output variable set d_o6 can be represented by the set of output variables OV_6 = {ov_1, ..., ov_m}, where d_o6 is the performance output data object with OV_6 = {efficiency rating, flow rating, throughput rating, output rating, yield, pressure rating, consistency, temperature rating, productivity, etc.}.
• Let Ks_3 be the RAM assessment module. This knowledge source makes use of the six global data object inputs d_i1, d_i2, d_i3, d_i4, d_i5 and d_i6, as well as a conditions description set, d_i9, and computes a conditions failure output variable set, d_o7.

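To connect Ks_1 back to the connectivity analysis above, its input and output sets can be encoded as plain data; the dictionary layout is an illustrative assumption:

```python
# Encoding of Ks_1 (process definition module); the d_i / d_o identifiers
# follow the text, the dict structure itself is an assumption.
ks1_inputs = {
    "d_i1": "Plant/facility",    "d_i2": "Operation/area",
    "d_i3": "Section/building",  "d_i4": "System/process",
    "d_i5": "Assembly/unit",     "d_i6": "Component/item",
    "d_i7": "Process description",
}
ks1_outputs = {
    "d_o1": "Process sequence",  "d_o2": "Mass balance",
    "d_o3": "Heat balance",      "d_o4": "Energy balance",
    "d_o5": "Utilities balance",
}
# Ψ_1 and Φ_1 for connectivity analysis are simply the identifier sets:
psi_1, phi_1 = set(ks1_inputs), set(ks1_outputs)
print(len(psi_1), len(phi_1))  # 7 5
```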