
Chinnam, Ratna Babu "Intelligent Quality Controllers for On-Line Parameter Design"
Computational Intelligence in Manufacturing Handbook
Edited by Jun Wang et al.
Boca Raton: CRC Press LLC, 2001

©2001 CRC Press LLC

17
Intelligent Quality Controllers for On-Line Parameter Design

17.1 Introduction
17.2 An Overview of Certain Emerging Technologies Relevant to On-Line Parameter Design
17.3 Design of Quality Controllers for On-Line Parameter Design
17.4 Case Study: Plasma Etching Process Modeling and On-Line Parameter Design
17.5 Conclusion

17.1 Introduction

Besides aggressively innovating and incorporating new materials and technologies into practical, effective, and timely commercial products, in recent years many industries have begun to examine new directions that they must cultivate to improve their competitive position in the long term. One thing that has become clearly evident is the need to push the quality issue farther and farther upstream so that it becomes an integral part of every aspect of the product/process life cycle. In particular, many have begun to recognize that it is through engineering design that we have the greatest opportunity to influence the ultimate delivery of products and processes that far exceed customer needs and expectations.
For the last two decades, classical experimental design techniques have been widely used for setting critical product/process parameters or targets during design. But recently their potential has been questioned, for they tend to focus primarily on the mean response characteristics. One particular design approach that has gained a lot of attention in the last decade is the robust parameter design approach that borrows heavily from the principles promoted by Genichi Taguchi [1986, 1987]. Taguchi views the design process as evolving in three distinct phases:
1. System Design Phase — Involves application of specialized field knowledge to develop basic design alternatives.
2. Parameter Design Phase — Involves selection of "best" nominal values for the important design parameters. Here "best" values are defined as those that "minimize the transmitted variability resulting from the noise factors."
3. Tolerance Design Phase — Involves setting of tolerances on the nominal values of critical design parameters. Tolerance design is considered to be an economic issue, and the loss function model promoted by Taguchi can be used as a basis.

Ratna Babu Chinnam

Wayne State University


Besides the basic parameter design method, Taguchi strongly emphasized the need to perform robust parameter design. Here "robustness" refers to the insensitive behavior of the product/process performance to changes in environmental conditions and noise factors. Achieving this insensitivity at the design stage through the use of designed experiments is a cornerstone of the Taguchi methodology.
Over the years, many distinct approaches have been developed to implement Taguchi's parameter design concept; these can be broadly classified into the following three categories:
1. Purely analytical approaches
2. Simulation approaches
3. Physical experimentation approaches.
Due to the lack of precise mechanistic models (models derived from fundamental physics principles)
that explain product/process performance characteristics (in terms of the different controllable and
uncontrollable variables), the most predominant approach to implementing parameter design involves
physical experimentation. Two distinct approaches to physical experimentation for parameter design
include (i) orthogonal array approaches, and (ii) traditional factorial and fractional factorial design approaches.
The orthogonal array approaches are promoted extensively by Taguchi and his followers, and the
traditional factorial and fractional factorial design approaches are normally favored by the statistical
community. Over the years, numerous papers have been authored comparing the advantages and disad-
vantages of these approaches. Some of the criticisms for the orthogonal array approach include the
following [Box, 1985]: (i) the method does not exploit a sequential nature of investigation, (ii) the designs
advocated are rather limited and fail to deal adequately with interactions, and (iii) more efficient and
simpler methods of analysis are available.
In addition to the different approaches to “generating” data on product/process performance, there
exist two distinct approaches to “measuring” performance:
1. Signal-to-Noise (S/N) Ratios — Tend to combine the location and dispersion characteristics of
performance into a one-dimensional metric; the higher the S/N ratio, the better the performance.
2. Separate Treatment — The location and dispersion characteristics of performance are evaluated
separately.
Once again, numerous papers have been authored questioning the universal use of the S/N ratios
suggested by Taguchi and many others. The argument is that the Taguchi parameter design philosophy
should be blended with an analysis strategy in which the mean and variance of the product/process
response characteristics are modeled to a considerably greater degree than practiced by Taguchi. Numer-
ous papers authored in recent years have established that one can achieve the primary goal of the Taguchi
philosophy, i.e., to obtain a target condition on the mean while minimizing the variance, within a response
surface methodology framework. Essentially, the framework views both the mean and the variances as
responses of interest. In such a perspective, the dual response approach developed by Myers and Carter
[1973] provides an alternate method for achieving a target for the mean while also achieving a target for
the variance. For an in-depth discussion on response surface methodology and its variants, see Myers
and Montgomery [1995]. For a panel discussion on the topic of parameter design, see Nair [1992].
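The dual response idea can be made concrete with a small numerical sketch: treat the mean and the variance of the response as two separate fitted models over the controllable setting, then choose the setting that minimizes the predicted variance while holding the predicted mean near its target. The quadratic models, target, and tolerance below are invented for illustration (they are not from the chapter's case study), and a crude grid search stands in for the formal dual response optimization of Myers and Carter.

```python
import numpy as np

# Hypothetical fitted response-surface models over one controllable variable x
# (coefficients are made up for illustration, not taken from the chapter).
def mean_model(x):
    return 10.0 + 2.0 * x - 0.5 * x ** 2      # predicted mean response

def var_model(x):
    return 0.2 + 0.1 * (x - 1.0) ** 2         # predicted response variance

TARGET, TOL = 11.0, 0.25                      # target for the mean, tolerance

# Dual response approach: among settings that hold the mean near target,
# choose the one with the smallest predicted variance.
grid = np.linspace(0.0, 4.0, 401)
feasible = grid[np.abs(mean_model(grid) - TARGET) <= TOL]
best = feasible[np.argmin(var_model(feasible))]
print(round(float(best), 2))                  # → 0.77
```

With these made-up models, two disjoint bands of settings keep the mean on target, and the search picks the band (and point) with lower transmitted variance.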

17.1.1 Classification of Parameters

A block diagram representation of a product/process is shown in Figure 17.1. A number of parameters can influence the product/process response characteristics, and can be broadly classified as controllable parameters and uncontrollable parameters (note that the word parameter is equivalent to the word factor or variable normally used in the parameter design literature).
1. Controllable Parameters: These are parameters that can be specified freely by the product/process designer and/or the user/operator of the product/process to express the intended value for the response. These parameters can be classified into two further groups: fixed controllable parameters and non-fixed controllable parameters.
   a. Fixed Controllable Parameters: These are parameters that are normally optimized by the product/process designer at the design stage. The parameters may take multiple values, called levels, and it is the responsibility of the designer to determine the best levels for these parameters. Changes in the levels of certain fixed controllable parameters may not have any bearing on manufacturing or operation costs; however, when the levels of certain others are changed, the manufacturing and/or operation costs might change (these factors that influence the manufacturing cost are also referred to as tolerance factors in the parameter design literature). Once optimized, these parameters remain fixed for the life of the product/process. For example, parameters that influence the geometry of a machine tool used for a machining process, and/or its material/technology makeup, fall under this category.
   b. Non-Fixed Controllable Parameters: These are controllable parameters that can be freely changed before or during the operation of the product/process (these factors are also referred to as signal factors in the parameter design literature). For example, cutting parameters such as speed, feed, and depth of cut on a machining process can be labeled non-fixed controllable parameters.
2. Uncontrollable Parameters: These are parameters that cannot be freely controlled by the product/process designer. Parameters whose settings are difficult to control or whose levels are expensive to control can also be categorized as uncontrollable parameters. These parameters are also referred to as noise factors in the parameter design literature. They can be classified into two further groups: constant uncontrollable parameters and non-constant uncontrollable parameters.
   a. Constant Uncontrollable Parameters: These are parameters that tend to remain constant during the life of the product or process but are not easily controllable by the product/process designer. Certainly, the parameters representing variation in components that make up the product/process fall under this category. This variation is inevitable in almost all manufacturing processes that produce any type of a component and is attributed to common causes (representing natural variation or true process capability) and assignable/special causes (representing problems with the process rendering it out of control). For example, the nominal resistance of a resistor to be used in a voltage regulator may be specified at 100 KΩ. However, the resistance of the individual resistors will deviate from the nominal value, affecting the performance of the individual regulators. Please note that the parameter (i.e., resistance) is to some degree uncontrollable; however, the level/amplitude of the uncontrollable parameter for any given individual regulator remains more or less constant for the life of that voltage regulator.
   b. Non-Constant Uncontrollable Parameters: These parameters normally represent the environment in which the product/process operates, the loads to which they are subjected, and their deterioration. For example, in machining processes, some examples of non-constant uncontrollable variables include room temperature, humidity, power supply voltage and current, and amplitude of vibration of the shop floor.

FIGURE 17.1 Block diagram of a product/process. (The figure shows fixed and non-fixed controllable parameters and constant and non-constant uncontrollable parameters feeding a product or process that produces a response.)

17.1.2 Limitations of Existing Off-Line Parameter Design Techniques

Whatever the method of design, in general, parameter design methods do not take into account the common occurrence that some of the uncontrollable variables are observable during production [Pledger, 1996] and part usage. This extra information regarding the levels of non-constant uncontrollable factors enhances our choice of values for the non-fixed controllable factors, and, in some cases, determines the viability of the production process and/or the product. This process is hypothetically illustrated for a time-invariant product/process in Figure 17.2. Here T0 and T1 denote two different time/usage instants during the life of the product/process. Given the level of the uncontrollable variable at any instant, the thick line represents the response as a function of the level of the controllable variable. Given the response model, the task here is to optimize the controllable variable as a function of the level of the uncontrollable variable. In depicting the optimal levels for the controllable variable in Figure 17.2, the assumption made is that it is best to maximize the product/process response (i.e., the larger the response, the better). The same argument can be extended to cases where the product/process has multiple controllable/uncontrollable variables and multiple outputs. In the same manner, it is possible to extend the argument to smaller-is-better and nominal-is-best response cases, and combinations thereof.
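The situation depicted in Figure 17.2 can be sketched numerically. The response model below is hypothetical (one controllable variable c, one observable uncontrollable variable u, larger-the-better response): each time a new reading of u arrives, the optimal level of c is recomputed.

```python
import numpy as np

# Hypothetical larger-the-better response surface: the optimal controllable
# level shifts as the uncontrollable variable u drifts (cf. Figure 17.2).
def response(c, u):
    return 5.0 - (c - (2.0 + 0.5 * u)) ** 2   # peak at c = 2 + 0.5*u

def best_setting(u, c_grid=np.linspace(0.0, 5.0, 501)):
    # On-line parameter design: given the observed level of u,
    # choose the controllable level that maximizes the response.
    return float(c_grid[np.argmax(response(c_grid, u))])

# Two observation instants, T0 and T1, with different noise levels.
print(round(best_setting(0.0), 2), round(best_setting(2.0), 2))  # → 2.0 3.0
```

As the uncontrollable variable moves from 0 to 2 between the two instants, the recommended controllable setting moves with it, which is exactly the behavior the figure illustrates.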
Given the rapid decline in instrumentation costs over the last decade, the development of methods that utilize this additional information will facilitate optimal utilization of the capability of products/processes. Pledger [1996] described an approach that explicitly introduces uncontrollable factors into a designed experiment. The method involves splitting uncontrollable factors into two sets, observable and unobservable. In the first set there may be factors like temperature and humidity, while in the second there may be factors such as chemical purity and material homogeneity that may be unmeasurable due to time, physical, and economic constraints. The aim is to find a relationship between the controllable

FIGURE 17.2 On-line parameter design of a time-invariant product/process. (The figure plots the response against the controllable variable at two time/usage instants, T0 and T1, showing how the optimal level of the controllable variable shifts as the level of the uncontrollable variable changes.)


factors and the observable uncontrollable factors while simultaneously minimizing the variance of the response and keeping the mean response on target. Given the levels of the observable uncontrollable variables, appropriate values for the controllable factors are generated on-line that meet the stated objectives.
As Pledger [1996] also points out, if an observable factor swings wildly in value, it would not be sensible to make continuous invasive adjustments to the product or process (unless there is minimal cost associated with such adjustments). Rather, it would make sense to implement formal control over such factors. Pledger derived a closed-form expression, using Lagrangian minimization, that facilitates minimization of product or process variance while keeping the mean on target, when the model that relates the quality response variable to the controllable and uncontrollable variables is linear in its parameters and involves no higher-order terms. However, as Pledger pointed out, if the model involves quadratic terms or other higher-order interactions, there can be no closed-form solution.
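Pledger's actual derivation is not reproduced in this chapter, but the flavor of the closed-form result can be sketched on an invented model that is linear in its parameters, with two controllable variables c1, c2 and one noise variable u with mean zero and variance sigma2. The coefficients and target below are hypothetical.

```python
# Hypothetical model, linear in parameters with controllable-by-noise
# interactions (all coefficients invented for illustration):
#   y = b0 + b1*c1 + b2*c2 + (b3 + b4*c1 + b5*c2)*u,   u ~ (0, sigma2)
b0, b1, b2, b3, b4, b5 = 1.0, 2.0, 1.0, 0.8, -0.4, 0.2
sigma2, target = 1.0, 6.0

# Mean constraint: b0 + b1*c1 + b2*c2 = target  =>  c2 = (target - b0 - b1*c1)/b2.
# Transmitted variance: (b3 + b4*c1 + b5*c2)^2 * sigma2. Substituting the
# constraint makes the variance a quadratic in c1 alone; its minimizer solves
# b3 + b4*c1 + b5*c2(c1) = 0 whenever that slope term can be driven to zero.
def c2_of(c1):
    return (target - b0 - b1 * c1) / b2

a = b4 - b5 * b1 / b2                 # coefficient of c1 in the slope term
b = b3 + b5 * (target - b0) / b2      # constant part of the slope term
c1_star = -b / a                      # closed-form optimal setting
c2_star = c2_of(c1_star)

mean = b0 + b1 * c1_star + b2 * c2_star
var = (b3 + b4 * c1_star + b5 * c2_star) ** 2 * sigma2
print(round(mean, 6), round(var, 6))  # → 6.0 0.0
```

For this particular set of coefficients the noise slope can be cancelled entirely, so the mean sits exactly on target with zero transmitted variance; in general the same substitution yields the variance-minimizing settings along the on-target line.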

17.1.3 Overview of Proposed Framework for On-Line Parameter Design

Here, we develop some general ideas that facilitate on-line parameter design. The specific objective is to
not impose any constraint on the nature of the relationship between the different controllable and
uncontrollable variables and the quality response characteristics, and allow multiple quality response
characteristics. In particular, we recommend feedforward neural networks (FFNs) for modeling the
quality response characteristics. Some of the reasons for making this recommendation are as follows:
1. Universal Approximation. FFNs can approximate any continuous function f ∈ C(ℝ^N, ℝ^M) over a compact subset of ℝ^N to arbitrary precision [Hornik et al., 1989]. Previous research has also shown that neural networks offer advantages in both accuracy and robustness over statistical methods for modeling processes (for example, Nadi et al. [1991]; Himmel and May [1993]; Kim and May [1994]). However, there is some controversy surrounding this issue.
2. Adaptivity. Most training algorithms for FFNs are incremental learning algorithms and exhibit a built-in capability to adapt the network to changes in the operating environment [Haykin, 1999]. Given that most products and processes tend to be time-variant (nonstationary) in the sense that the response characteristics change with time, this property will play an important role in achieving on-line parameter design of time-variant systems.
Besides proposing nonparametric neural network models for "modeling" quality response characteristics of manufacturing processes, we recommend a gradient descent search technique and a stochastic search technique for "optimizing" the levels of the controllable variables on-line. In particular, we consider a neural network iterative inversion scheme and a stochastic search method that utilizes genetic algorithms for optimization of controllable variables. The overall framework that facilitates these two on-line tasks, i.e., modeling and optimization, constitutes a quality controller. Here, we focus on development of quality controllers for manufacturing processes whose quality response characteristics are static and time-invariant. Future research can concentrate on extending the proposed controllers to deal with dynamic and time-variant systems. In addition, future research can also concentrate on modeling the signatures of the uncontrollable variables to facilitate feedforward parameter design.

17.1.4 Chapter Organization

The chapter is organized as follows: Section 17.2 provides an overview of feedforward neural networks
and genetic algorithms utilized for process modeling and optimization; Section 17.3 describes an
approach to designing intelligent quality controllers and discusses the relevant issues; Section 17.4
presents some results from the application of the proposed methods to a plasma etching semiconductor
manufacturing process; and Section 17.5 provides a summary and gives directions for future work.


17.2 An Overview of Certain Emerging Technologies Relevant to On-Line Parameter Design

17.2.1 Feedforward Neural Networks


In general, feedforward artificial neural networks (ANNs) are composed of many nonlinear computational elements, called nodes, operating in parallel and arranged in patterns reminiscent of biological neural nets [Lippman, 1987]. These processing elements are connected by weight values, responsible for modifying signals propagating along connections and used for the training process. The number of nodes plus the connectivity define the topology of the network, which ranges from totally connected to a topology where each node is connected only to its neighbors. The following subsections discuss the characteristics of a class of feedforward neural networks.

17.2.1.1 Multilayer Perceptron Networks

A typical multilayer perceptron (MLP) neural network with an input layer, an output layer, and two hidden layers is shown in Figure 17.3 (referred to as a three-layer network; normally, the input layer is not counted). For convenience, the same network is denoted in block diagram form as shown in Figure 17.4, with three weight matrices W^(1), W^(2), and W^(3) and a diagonal nonlinear operator Γ with identical sigmoidal elements γ following each of the weight matrices. The most popular nonlinear nodal function for multilayer perceptron networks is the sigmoid [unipolar: γ(x) = 1/(1 + e^(–x)), where 0 ≤ γ(x) ≤ 1 for –∞ < x < ∞; and bipolar: γ(x) = (1 – e^(–x))/(1 + e^(–x)), where –1 ≤ γ(x) ≤ 1 for –∞ < x < ∞]. It is necessary to either scale the output data to fall within the range of the sigmoid function or use a linear nodal function in the outermost layer of the network. It is also common practice to include an externally


FIGURE 17.3 A three-layer neural network. (The figure shows an N-node input layer, two hidden layers, and an M-node output layer, with weight matrices {w_ij^(1)}, {w_ij^(2)}, and {w_ij^(3)} between successive layers; each hidden and output node sums its weighted inputs and applies the sigmoid γ.)

FIGURE 17.4 A block diagram representation of a three-layer network. (The input x passes through W^(1), Γ, W^(2), Γ, W^(3), and Γ in sequence, producing the intermediate outputs y^(1) and y^(2) and the network output y.)

applied threshold or bias that has the effect of lowering or increasing the net input to the nodal function. Each layer of the network can then be represented by the operator

N_l[x] = Γ[W^(l) x]    Equation (17.1)

and the input–output mapping of the MLP network can be represented by

y = N[x; W^(1), W^(2), W^(3)] = Γ[W^(3) Γ[W^(2) Γ[W^(1) x]]] = N_3 N_2 N_1 [x]    Equation (17.2)

The weights of the network W^(1), W^(2), and W^(3) are adjusted (as described in Section 17.2.1.2) to minimize a suitable function of the error e between the predicted output y of the network and a desired output y_d (error-correction learning), resulting in a mapping function N[x]. From a systems-theoretic point of view, multilayer perceptron networks can be considered as versatile nonlinear maps with the elements of the weight matrices as parameters.
It has been shown in Hornik et al. [1989], using the Stone–Weierstrass theorem, that even an MLP network with just one hidden layer and an arbitrarily large number of nodes can approximate any continuous function f ∈ C(ℝ^N, ℝ^M) over a compact subset of ℝ^N to arbitrary precision (universal approximation). This provides the motivation to use MLP networks in modeling/identification of any manufacturing process' response characteristics.
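The layered mapping of Equation 17.2 can be sketched as a forward pass through a three-layer MLP; the layer sizes and random weights below are arbitrary, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(x):
    # Unipolar sigmoid nodal function, 0 < gamma(x) < 1.
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary layer sizes: N inputs, P and Q hidden nodes, M outputs.
N, P, Q, M = 3, 5, 4, 2
W1 = rng.normal(size=(P, N))   # first subscript: neuron in the next layer
W2 = rng.normal(size=(Q, P))
W3 = rng.normal(size=(M, Q))

def mlp(x):
    # Equation 17.2: y = Gamma[W3 Gamma[W2 Gamma[W1 x]]]
    y1 = gamma(W1 @ x)         # first hidden layer output
    y2 = gamma(W2 @ y1)        # second hidden layer output
    return gamma(W3 @ y2)      # network output

y = mlp(rng.normal(size=N))
print(y.shape, bool(np.all((y > 0) & (y < 1))))  # → (2,) True
```

Because the outermost layer here is also sigmoidal, the outputs land in (0, 1), which is why the text notes that output data must be scaled into the sigmoid's range (or a linear output layer used instead).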

17.2.1.2 Training MLP Networks Using Backpropagation Algorithm

If MLP networks are used to solve the identification problems treated here, the objective is to determine an adaptive algorithm or rule that adjusts the weights of the network based on a given set of input–output pairs. An error-correction learning algorithm will be discussed here; readers can see Zurada [1992] and Haykin [1999] for information regarding other training algorithms. If the weights of the network are considered as elements of a parameter vector θ, the error-correction learning process involves the determination of the vector θ*, which optimizes a performance function J based on the output error. In error-correction learning, the gradient of the performance function with respect to θ is computed and θ is adjusted along the negative gradient as follows:

θ(s + 1) = θ(s) – η ∂J(s)/∂θ(s)    Equation (17.3)

where η is a positive constant that determines the rate of learning (step size) and s denotes the iteration step. In the three-layered network shown in Figure 17.3, x = (x_1, …, x_N)^T denotes the input pattern vector while y = (y_1, …, y_M)^T is the output vector. The vectors y^(1) = (y_1^(1), …, y_P^(1))^T and y^(2) = (y_1^(2), …, y_Q^(2))^T are the outputs at the first and the second hidden layers, respectively. The matrices W^(1) = {w_ij^(1)} ∈ ℝ^(P×N), W^(2) = {w_ij^(2)} ∈ ℝ^(Q×P), and W^(3) = {w_ij^(3)} ∈ ℝ^(M×Q) are the weight matrices associated with the three layers as shown in Figure 17.3. Note that the first subscript in the weight matrices denotes the neuron in the next layer and the second subscript denotes the neuron in the current layer. The vectors ȳ^(1) ∈ ℝ^P, ȳ^(2) ∈ ℝ^Q, and ȳ ∈ ℝ^M are as shown in Figure 17.3, with y_i^(1) = γ(ȳ_i^(1)), y_i^(2) = γ(ȳ_i^(2)), and y_i = γ(ȳ_i), where ȳ_i^(1), ȳ_i^(2), and ȳ_i are elements of ȳ^(1), ȳ^(2), and ȳ, respectively. If y_d = (y_d1, …, y_dM)^T is the desired output vector, the output error of a given input pattern x is defined as e = y – y_d.
Typically, the performance function J is defined as

J = (1/2) Σ_{s∈S} e(s)^T e(s)    Equation (17.4)

where the summation is carried out over all patterns in a given training data set S. The factor 1/2 is used in Equation 17.4 to simplify subsequent derivations resulting from minimization of J with respect to the free parameters of the network.
While, strictly speaking, the adjustment of the parameters (i.e., weights) should be carried out by determining the gradient of J in parameter space, the procedure commonly followed is to adjust them at every instant based on the error at that instant. A single presentation of every pattern in the data set to the network is referred to as an epoch. In the literature, a well-known method for determining this gradient for MLP networks is the backpropagation method. The analytical method of deriving the gradient is well known in the literature and will not be repeated here. It can be shown that the backpropagation method leads to the following gradients for any MLP network with L layers:

∂J(s)/∂w_ij^(l)(s) = δ_i^(l)(s) y_j^(l–1)(s)    Equation (17.5)

δ_i^(L)(s) = e_i(s) γ′(ȳ_i^(L)(s))    for neuron i in output layer L    Equation (17.5a)

δ_i^(l)(s) = γ′(ȳ_i^(l)(s)) Σ_k δ_k^(l+1)(s) w_ki^(l+1)(s)    for neuron i in hidden layer l    Equation (17.5b)

Here, δ_i^(l)(s) denotes the local gradient defined for neuron i in layer l, and the prime in γ′(·) signifies differentiation with respect to the argument. It can be shown that, expressed in terms of the nodal output y = γ(x), the derivative of the unipolar sigmoid is γ′ = y(1 – y) and that of the bipolar sigmoid is γ′ = (1 – y^2)/2. One starts with local gradient calculations for the outermost layer and proceeds backwards until one reaches the first hidden layer (hence the name backpropagation). For more information on MLP networks, see Haykin [1999].
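The error-correction rule of Equations 17.3 and 17.5 can be sketched for a one-hidden-layer network; the XOR data set, layer sizes, learning rate, and epoch count below are arbitrary choices for illustration, and a constant bias input is appended to each layer as the text suggests.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma(x):
    # Unipolar sigmoid; its derivative in terms of the output y is y*(1 - y).
    return 1.0 / (1.0 + np.exp(-x))

def addbias(v):
    # Append a constant input of 1 so each node has a trainable bias weight.
    return np.append(v, 1.0)

# Toy training set S: the XOR patterns (data and sizes are arbitrary).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Yd = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(size=(4, 3))       # hidden-layer weights (incl. bias column)
W2 = rng.normal(size=(1, 5))       # output-layer weights (incl. bias column)
eta = 0.5                          # learning rate (step size)

def sse():
    # Performance function J of Equation 17.4 over the whole data set.
    return 0.5 * sum(((gamma(W2 @ addbias(gamma(W1 @ addbias(x)))) - yd) ** 2).sum()
                     for x, yd in zip(X, Yd))

sse_before = sse()
for epoch in range(10000):         # one presentation of every pattern = epoch
    for x, yd in zip(X, Yd):
        y1 = gamma(W1 @ addbias(x))              # hidden layer output
        y = gamma(W2 @ addbias(y1))              # network output
        e = y - yd                               # output error
        d2 = e * y * (1 - y)                     # local gradient, output layer
        d1 = (W2[:, :4].T @ d2) * y1 * (1 - y1)  # local gradient, hidden layer
        W2 -= eta * np.outer(d2, addbias(y1))    # negative-gradient update
        W1 -= eta * np.outer(d1, addbias(x))
sse_after = sse()
print(sse_after < sse_before)      # → True
```

The per-pattern update shown here is the "adjust at every instant" variant discussed in the text, as opposed to accumulating the full gradient of J over the data set before each step.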
17.2.1.3 Iterative Inversion of Neural Networks
In error backpropagation training of neural networks, the output error is “propagated backward” through
the network. Linden and Kindermann [1989] have shown that the same mechanism of weight learning
can be used to iteratively invert a neural network model. This approach is used here for on-line parameter
design and hence the discussion. In this approach, errors in the network output are ascribed to errors
in the network input signal, rather than to errors in the weights. Thus, iterative inversion of neural
networks proceeds by a gradient descent search of the network input space, while error backpropagation
training proceeds through a search in the synaptic weight space.

Through iterative inversion of the network, one can generate the input vector, x, that gives an output
as close as possible to the desired output, y
d
. By taking advantage of the duality between the synaptic
weights and the input activation values in minimizing the performance criterion, the iterative gradient
descent algorithm can again be applied to obtain the desired input vector:
x(s + 1) = x(s) – η ∂J(s)/∂x(s)    Equation (17.6)
where η is a positive constant that determines the rate of iterative inversion and s denotes the iteration step. For further information, see Linden and Kindermann [1989].
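Iterative inversion can be sketched by holding the weights fixed and descending in the input space. The tiny fixed-weight network, desired output, and step count below are all invented for illustration; a trained model would normally stand where the random weights are.

```python
import numpy as np

def gamma(x):
    return 1.0 / (1.0 + np.exp(-x))

# A fixed, already-trained network is assumed; random weights stand in here.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))

def net(x):
    return gamma(W2 @ gamma(W1 @ x))

yd = np.array([0.7])               # hypothetical desired output
x = np.zeros(2)                    # initial guess for the input vector
eta = 0.5                          # rate of iterative inversion

err_before = abs(float(net(x)[0] - yd[0]))
for s in range(2000):
    y1 = gamma(W1 @ x)
    y = gamma(W2 @ y1)
    e = y - yd
    # Ascribe the output error to the input, not the weights (Eq. 17.6):
    d2 = e * y * (1 - y)
    d1 = (W2.T @ d2) * y1 * (1 - y1)
    x -= eta * (W1.T @ d1)         # gradient descent in the input space
err_after = abs(float(net(x)[0] - yd[0]))
print(err_after < err_before)      # → True
```

Note the duality with training: the same backpropagated local gradients are used, but the final update moves the input activations rather than the synaptic weights.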
17.2.2 Genetic Algorithms
Genetic algorithms (GAs) are a class of stochastic optimization procedures that are based on natural
selection and genetics. Originally developed by John H. Holland [1975], the genetic algorithm works
on a population of solutions, also called individuals, represented by fixed bit strings. Although there
are many possible variants of the basic GA, the fundamental underlying mechanism operates on a
population of individuals, is relatively standard, and consists of three operations [Liepins and Hilliard,
1989]: (i) evaluation of individual fitness, (ii) formation of a gene pool, and (iii) recombination and
mutation, as illustrated in Figure 17.5(a). The individuals resulting from these three operations form
the next generation’s population. The process is iterated until the system ceases to improve. Individuals
contribute to the gene pool in proportion to their relative fitness (evaluation on the function being
optimized); that is, well performing individuals contribute multiple copies, and poorly performing
individuals contribute few copies, as illustrated in Figure 17.5(b). The recombination operation is the
crossover operator: the simplest variant selects two parents at random from the gene pool as well as a
crossover position. The parents exchange “tails” to generate the two offspring, as illustrated in Figure
17.5(c). The subsequent population consists of the offspring so generated. The mutation operator
illustrated in Figure 17.5(d) helps assure population diversity, and is not the primary genetic search
operator. A thorough introduction to GAs is provided in Goldberg [1989].
Due to their global convergence behavior, GAs are especially suited for the field of continuous parameter optimization [Solomon, 1995]. Traditional optimization methods such as steepest descent, quadratic approximation, Newton's method, etc., fail if the objective function contains locally optimal solutions. Many papers suggest (see, for example, Goldberg [1989] and Mühlenbein and Schlierkamp-Voosen [1994]) that the presence of locally optimal solutions does not cause any problems for a GA, because a GA is a multipoint search strategy, as opposed to the point-to-point search performed in classical methods.
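The three GA operations (fitness evaluation, fitness-proportionate gene pool formation, and crossover with mutation) can be sketched on a toy continuous maximization problem. The 16-bit encoding, population size, mutation rate, and objective function below are illustrative choices, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
BITS, POP, GENS = 16, 40, 60

def decode(bits):
    # Map a fixed-length bit string (an "individual") to a real in [0, 1].
    return int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

def fitness(bits):
    # (i) Evaluation of individual fitness: toy objective, maximum at x = 0.5.
    return np.sin(np.pi * decode(bits))

pop = rng.integers(0, 2, size=(POP, BITS))
for _ in range(GENS):
    f = np.array([fitness(ind) for ind in pop])
    # (ii) Gene pool: individuals contribute in proportion to relative fitness.
    pool = pop[rng.choice(POP, size=POP, p=f / f.sum())]
    # (iii) Recombination: single-point crossover, parents exchange "tails".
    nxt = []
    for i in range(0, POP, 2):
        a, b = pool[i].copy(), pool[i + 1].copy()
        cut = rng.integers(1, BITS)
        a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
        nxt.extend([a, b])
    pop = np.array(nxt)
    # Mutation keeps the population diverse; it is not the main operator.
    flip = rng.random(pop.shape) < 0.01
    pop[flip] ^= 1

best = max(pop, key=fitness)
print(round(decode(best), 2))
```

Because the whole population is carried from generation to generation, the search samples many points of the objective at once, which is the multipoint behavior contrasted above with point-to-point classical methods.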
17.3 Design of Quality Controllers for On-Line Parameter Design
The proposed framework for performing on-line parameter design is illustrated in Figure 17.6. In contrast to the classical control theory approaches, this structure includes two distinct control loops. The process control loop "maintains" the controllable variables at the optimal levels, and will involve schemes such as feedback control, feedforward control, and adaptive control. It is the quality controller in the quality control loop that "determines" these optimal levels, i.e., performs parameter design. The quality controller includes both a model of the product/process quality response characteristics and an optimization routine to find the optimal levels of the controllable variables. As was stated earlier, the focus here is on time-invariant products and processes, and hence the model building process can be carried out off-line. In time-variant systems, the quality response characteristics have to be identified and constantly tracked on-line, calling for an experiment planner that facilitates constant and optimal investigation of the product/process behavior.
In solving this on-line parameter design problem, the following assumptions are made:
1. Quality response characteristics of interest can be expressed as static nonlinear maps in the input space (the vector space defined by controllable and uncontrollable variables). This assumption implies that there exists no significant memory or inertia within the system, and that the process response state is strictly a function of the "current" state of the controllable and uncontrollable variables. In other