16.8.4 STereoLithography (STL)
The STereoLithography interface format (STL) was created by 3D Systems, the developer of the Stereolithography Apparatus (SLA), to provide an unambiguous description of a solid part that could be interpreted by the SLA's software. The STL file is a "tessellated surface file" in which geometry is described by triangles laid onto the geometry's surface. Associated with each triangle is a surface normal that points away from the body of the part. The format could be described as similar to a finite element analysis model. When creating an STL file, care must be taken to generate the file with sufficient facet density so that the facets do not affect the quality of the part built by the SLA. The STL file holds geometry information only and is used only to describe the shape of the part.
STL files represent the surfaces of a solid model as groups of small polygons. The system writes these polygons to an ASCII text or binary file. Fig. 16-4 shows the file format for one triangle in an ASCII STL file.
solid Part1
facet normal 0.000000e+000 0.000000e+000 1.000000e+000
outer loop
vertex 1.875540e-001 2.619040e-001 4.146040e-001
vertex 1.875540e-001 2.319040e-001 4.146040e-001
vertex 2.175540e-001 2.619040e-001 4.146040e-001
endloop
endfacet
endsolid
Figure 16-4 File format for one triangle in an STL file
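Because the ASCII form of the format is so simple, a short script can read it directly. The following Python sketch (the file name is hypothetical, and this is a minimal reader rather than a full STL validator) parses facets like the one shown in Fig. 16-4:

    def read_ascii_stl(path):
        """Return a list of facets; each facet is (normal, [vertex1, vertex2, vertex3])."""
        facets = []
        normal, vertices = None, []
        with open(path) as f:
            for line in f:
                words = line.split()
                if not words:
                    continue
                if words[0] == "facet" and words[1] == "normal":
                    normal = tuple(float(w) for w in words[2:5])
                    vertices = []
                elif words[0] == "vertex":
                    vertices.append(tuple(float(w) for w in words[1:4]))
                elif words[0] == "endfacet":
                    if len(vertices) != 3:
                        raise ValueError("facet does not have exactly 3 vertices")
                    facets.append((normal, vertices))
        return facets

    # Example usage (hypothetical file name):
    # facets = read_ascii_stl("part1.stl")
    # print(len(facets), "triangles")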
16.9 General Information Formats
The formats in this section are not specifically designed to support CAD information. These formats are best suited for document templates, product database interrogations, and general distribution of text and pictures.
16.9.1 Hypertext Markup Language (HTML)
HyperText Markup Language (HTML) is a markup format designed for the World Wide Web. An HTML file is a basic text file with formatting codes embedded in the text. These formatting codes are read by client software (a web browser) and acted upon to format the text. Most everyone has had some experience with HTML and its capabilities. What makes HTML so useful is that it is not machine specific. Many documents and pictures can be linked on different machines, in different offices, even in different countries, and still appear as if they are all in one place. This virtual Master Model follows the general rules of the Master Model Theory, yet allows the data to be stored in multiple locations.
Current releases of several CAD programs support the product development process as follows:
• Showing the product design on the web as it matures
• Allowing the simple capture of design information
• Having other support groups "look in" without interrupting the design flow
16.9.2 Portable Document Format (PDF)
Portable Document Format (PDF) is an electronic distribution format for documents. The PDF format is valuable because it keeps the document you are distributing in a form that looks almost exactly like the original. For distributing corporate standards, this format is convenient because it can be configured to allow or disallow modifications and printing, along with other security features. PDF files are compact and cross-platform, and they can be viewed by anyone with the free Adobe Acrobat Reader. The format and its accompanying viewer support zooming in on text as well as page-specific indexing and printing.
16.10 Graphics Formats
These formats are used to support color graphics needed for silkscreen artwork, labels, and other graphic-
intensive design activities. The formats may also be used to capture photographic information.
16.10.1 Encapsulated PostScript (EPS)
EPS stands for Encapsulated PostScript. PostScript was originally designed only for sending output to a printer, but PostScript's ability to scale and translate makes it possible to embed pieces of PostScript and place them where you want on the page. These embedded pieces are usually EPS files. The file format is ASCII-text based and can be edited by anyone familiar with the format.
Encapsulated PostScript files are supported by many graphics programs and also supported across
different computing platforms. This format keeps the font references associated with the graphics. When
transferring this file format to other programs, it is important to make sure they support the necessary
fonts. The format also keeps the references to text and line objects. This allows editing of the objects by
other supporting graphics programs.
This is a common file format when transferring graphic artwork for decals and labels to a vendor.
16.10.2 Joint Photographic Experts Group (JPEG)
The Joint Photographic Experts Group (JPEG) format is a standardized image compression mechanism for digital photographs. It is named for the Joint Photographic Experts Group, the committee that originally wrote the standard.
JPEG is designed for compressing either full-color or gray-scale images of natural, real-world scenes.
It works well on photographs, naturalistic artwork, and similar material, but not so well on lettering, simple
cartoons, or line drawings. When saving the JPEG file, the compression parameters can be adjusted to
achieve the desired finished quality.
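As a sketch of how those compression parameters are typically exposed, the following Python fragment (assuming the Pillow imaging library and a hypothetical source file) saves the same photograph at two different quality settings; lower quality values give smaller files at the cost of visible artifacts.

    from PIL import Image

    photo = Image.open("component_photo.tif")          # hypothetical source image
    photo = photo.convert("RGB")                       # JPEG has no alpha channel
    photo.save("photo_high.jpg", "JPEG", quality=95)   # near-original quality, larger file
    photo.save("photo_small.jpg", "JPEG", quality=50)  # smaller file for e-mail, more artifacts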
This is a common binary format for World Wide Web distribution and most web browsers support the
viewing of the file. I use this format very often when I e-mail digital photographs of components to show
my overseas vendors.
16.10.3 Tagged Image File Format (TIFF)
TIFF is a tag-based binary image file format that is designed to promote the interchange of digital image
data. It is a standard for desktop images and is supported by all major imaging hardware and software
developers. This nonproprietary industry standard for data communication has been implemented by
most desktop publishing applications.
The format does not save any object information such as fonts or lines; it is strictly graphics data. This allows transfer to other software with minimal risk of graphic data compatibility problems. This is a very common format for sending graphic data to vendors for the generation of labels and decals.
16.11 Conclusion
Some of the many techniques for electronic automation, information management, and manufacturing
guidelines are presented in this chapter. This small sample has given you more tools to use in successful
product development. The chapter also provides two main points to keep in mind in future projects:
Engineering and manufacturing data are critical components in the development process and need to
be strategically planned. Computers and electronic data can offer huge possibilities for rapid develop-
ment, but process success relies on understanding not only what can be done but also why it is done.
The age of the paper document is not gone yet, but successful corporations in the coming years will
rely completely on capturing and sharing design information to manufacture products with minimal
paper movement.
16.12 Appendix A IGES Entities
IGES Color Codes

IGES Code    Color
8            White
5            Yellow
2, 6         Red
4, 7         Blue

IGES Entity

Type    Name                      Form
100     Circular Arc
106     Copious Data              11-Polylines
                                  31-Section
                                  40-Witness Line
                                  63-Simple Closed Planar Curve
108     Clipping Planes
110     Line
116     Point
124     Transformation Matrix
202     Angular Dimension
206     Diameter Dimension
210     General Label
212     General Note
214     Leader (Arrow)
216     Linear Dimension
218     Ordinate Dimension
222     Radius Dimension
228     General Symbol
230     Sectioned Area
304     Line Font Definition
314     Color Definition
404     Drawing
406     Property Entity           15-Name
                                  16-Drawing Size
                                  17-Drawing Units
410     View Entities
P • A • R • T • 4
MANUFACTURING
Collecting and Developing Manufacturing
Process Capability Models
Michael D. King
Raytheon Systems Company
Plano, Texas
Mr. King has more than 23 years of experience in engineering and manufacturing processes. He is a
certified Six Sigma Black Belt and currently holds a European patent for quality improvement tools and
techniques. He has one US patent pending, numerous copyrights for his work as a quality champion,
and has been a speaker at several national quality seminars and symposiums. Mr. King conceptualized,
invented, and developed new statistical tools and techniques, which led the way for significant break-
through improvements at Texas Instruments and Raytheon Systems Company. He was awarded the
“DSEG Technical Award For Excellence” from Texas Instruments in 1994, which is given to less than
half of 1% of the technical population for innovative technical results. He completed his master's degree from Southern Methodist University in 1986.
17.1 Why Collect and Develop Process Capability Models?
In the recent past, good design engineers have focused on form, fit, and function of new designs as the
criteria for success. As international and industrial competition increases, design criteria will need to
include real considerations for manufacturing cost, quality, and cycle time to be most successful. To
include these considerations, the designer must first understand the relationships between design fea-
tures and manufacturing processes. This understanding can be quantified through prediction models that
are based on process capability models. This chapter covers the concepts of how cost, quality, and cycle
time criteria can be designed into new products with significant results!
In answer to the need for improved product quality, the concepts of Six Sigma and quality improve-
ment programs emerged. The programs’ initial efforts focused on improving manufacturing processes and
using SPC (Statistical Process Control) techniques to improve the overall quality in our factories. We
quickly realized that we would not achieve Six Sigma quality levels by only improving our manufacturing
processes. Not only did we need to improve our manufacturing process, but we also needed to improve
the quality of our new designs. The next generation of Six Sigma deployment involved using process
capability data collected on the factory floor to influence new product designs prior to releasing them for
production.
Next, quality prediction tools based on process capability data were introduced. These prediction
tools allowed engineers and support organizations to compare new designs against historical process
capability data to predict where problems might occur. By understanding where problems might occur,
designs can easily be altered and tolerances reallocated to meet high-quality standards and avoid problem
areas before they occur. It is critical that the analysis is completed and acted upon during the initial design stage, because at that point a design is still flexible and can be changed with the least cost impact. The concept and application of using historical quality process capability data to
influence a design has made a significant impact on the resulting quality of new parts, assemblies, and
systems.
While the concepts and application of Six Sigma techniques have made giant strides in quality, there
are still areas of cost and cycle time that Six Sigma techniques do not take into account. In fact, if all
designs were designed around only the highest quality processes, many products would be too expen-
sive and too late for companies to be competitive in the international and industrial market place. This
leads us to the following question: If we can be very successful at improving the quality of our designs by
using historical process capability data, then can we use some of the same concepts using three-dimen-
sional models to predict cost, quality, and cycle time? Yes. By understanding the effect of all three during
the initial design cycle, our design engineers and engineering support groups can effectively design
products having the best of all three worlds.
17.2 Developing Process Capability Models
By using the same type of techniques for collecting data and developing quality prediction models, we
can successfully include manufacturing cost, quality, and cycle time prediction models. This is a signifi-
cant step-function improvement over focusing only on quality! An interactive software tool set should
include predictive models based on process capability history, cost history, cycle time history, expert
opinion, and various algorithms. Example technology areas that could be modeled in the interactive
prediction software tool include:
• Metal fabrication
• Circuit card assembly
• Circuit card fabrication
• Interconnect technology
• Microwave circuit card assembly
• Antenna / nonmetallic fabrication
• Optical assembly, optics fabrication
• RF/MW module technology
• Systems assembly
We now have a significant opportunity to design parts, assemblies, and systems while understand-
ing the impact of design features on manufacturing cost, quality, and cycle time before the design is
completed and sent to the factory floor. Clearly, process capability information is at the heart of the
prediction tools and models that allow engineers to design products with accurate information and con-
siderations for manufacturing cost, quality, and cycle time! In the following paragraphs, I will focus only
on the quality prediction models and then later integrate the variations for cost and cycle time predictions.
17.3 Quality Prediction Models - Variable versus Attribute Information
Process capability data is generally collected or developed for prediction models using either variable or
attribute type information. The process itself and the type of information that can be collected will deter-
mine if the information will be in the form of variable, attribute, or some combination of the two. In general,
if the process is described using a standard deviation, this is considered variable data. Information that is
collected from a percent good versus percent bad is considered attribute information. Some processes can
be described through algorithms that include both a standard deviation and a percent good versus
percent bad description.
17.3.1 Collecting and Modeling Variable Process Capability Models
The examples and techniques of developing variable models in this chapter are based on the premise of
determining an average short-term standard deviation for processes to predict long-term results. Average
short-term standard deviation is used because it better represents what the process is really capable of,
without external influences placed upon it.
One example of a process where process capability data was collected from variable information is
that of side milling on a numerically controlled machining center. Data was collected on a single dimension
over several parts that were produced using the process of side milling on a numerically controlled
machine. The variation from the nominal dimension was collected and the standard deviation was calcu-
lated. This is one of several methods that can be used to determine the capability of a variable process.
The capability of the process is described mathematically with the standard deviation. Therefore, I
recommend using SPC data to derive the standard deviation and develop process capability models.
Standard formulas based on Six Sigma techniques are used to compare the standard deviation to the
tolerance requirements of the design. Various equations are used to calculate the defects per unit (dpu),
standard normal transformation (Z), defects per opportunity (dpo), defects per million opportunities
(dpmo), and first time yield (fty). The standard formulas are as follows (Reference 3):
dpu = dpo * number of opportunities for defects per unit
dpu = total opportunities * dpmo / 1000000
fty = e^(-dpu)
Z = ((upper tolerance + lower tolerance)/2) / standard deviation of process
sigma = t - (2.515517 + 0.802853*t + 0.010328*t^2) / (1 + 1.432788*t + 0.189269*t^2 + 0.001308*t^3) + 1.5, with t = SQRT(LN(1/dpo^2))
dpo = [(1 + 0.049867347*(Z-1.5) + 0.0211410061*(Z-1.5)^2 + 0.0032776263*(Z-1.5)^3 + 0.0000380036*(Z-1.5)^4 + 0.0000488906*(Z-1.5)^5 + 0.000005383*(Z-1.5)^6)^(-16)] / 2
dpmo = dpo * 1000000
where
dpmo = defects per million opportunities
dpo = defects per opportunity
dpu = defects per unit
fty = first time yield percent (this only includes perfect units and does not include any scrap or
rework conditions)
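The following Python sketch implements the conversions above. The function names are my own; the polynomial constants are the ones shown in the sigma and dpo formulas (standard normal approximations with the 1.5-sigma shift).

    import math

    def dpo_from_sigma(z):
        # Defects per opportunity for a given sigma level (includes the 1.5-sigma shift).
        t = z - 1.5
        poly = (1 + 0.049867347*t + 0.0211410061*t**2 + 0.0032776263*t**3
                + 0.0000380036*t**4 + 0.0000488906*t**5 + 0.000005383*t**6)
        return poly**-16 / 2

    def sigma_from_dpo(dpo):
        # Inverse conversion: the sigma level corresponding to a defect rate.
        t = math.sqrt(math.log(1 / dpo**2))
        return (t
                - (2.515517 + 0.802853*t + 0.010328*t**2)
                / (1 + 1.432788*t + 0.189269*t**2 + 0.001308*t**3)
                + 1.5)

    def first_time_yield(dpu):
        # fty = e^(-dpu)
        return math.exp(-dpu)

    def sigma_from_tolerance(plus_minus_tolerance, std_dev):
        # Z = (half of the total tolerance band) / process standard deviation
        return plus_minus_tolerance / std_dev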
Let’s look at an example. You have a tolerance requirement of ±.005 in 50 places for a given unit and
you would like to predict the part or assembly’s sigma level (Z value) and expected first time yield. (See
Chapters 10 and 11 for more discussion on Z values.) You would first need to know the short-term
standard deviation of the process that was used to manufacture the ±.005 feature tolerance. For this
example, we will use .001305 as the standard deviation of the process. The following steps would be used
for the calculation:
1. Divide the ±tolerance of .005 by the standard deviation of the process of .001305. This results in a
predicted sigma of 3.83.
2. Convert the sigma of 3.83 to defects per opportunity (dpo) using the dpo formula. This formula
predicts a dpo of .00995.
3. Multiply the dpo of .00995 times the opportunity count of 50, which was the number of places that the
unit repeated the ±.005 tolerance. This results in a defect per unit (dpu) of .4975.
4. Use the e^(-dpu) first time yield formula to calculate the predicted yield based on the dpu. The result is 60.8% predicted first time yield.
5. The answer to the initial question is that the process is a 3.83 sigma process, and the part or assembly
has a predicted first time yield of 60.8% based on a 3.83 sigma process being repeated 50 times on a
given unit.
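Using the helper functions sketched after the formulas above, the five steps can be reproduced as follows (the numbers are approximate because the text rounds intermediate values):

    sigma = sigma_from_tolerance(0.005, 0.001305)   # step 1: about 3.83
    dpo = dpo_from_sigma(sigma)                     # step 2: about .0099
    dpu = dpo * 50                                  # step 3: about .50 defects per unit
    fty = first_time_yield(dpu)                     # step 4: about .61, roughly 61% first time yield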
Typically a manufactured part or assembly will include several different processes. Each process will
have a different process capability and different number of times that the processes will be applied. To
calculate the overall predicted sigma and yield of a manufactured part or assembly, the following steps are
required:
1. Calculate the overall dpu and opportunity count of each separate process as shown in the previous
example.
2. Add all of the total dpu numbers of each process together to give you a cumulative dpu number.
3. Add the opportunity counts of each process together to give you a cumulative opportunity count
number.
4. To calculate the cumulative first time yield of the part or assembly, use the e^(-dpu) first time yield formula with the cumulative dpu number.
5. To calculate the sigma rollup of the part or assembly divide the cumulative dpu by the cumulative
opportunity count to give you an overall (dpo) defect per opportunity. Now use the sigma formula to
convert the overall dpo to the sigma rollup value.
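A minimal sketch of those rollup steps in Python, reusing the sigma_from_dpo and first_time_yield helpers from the earlier sketch and hypothetical per-process numbers:

    processes = [
        {"dpu": 0.4975, "opportunities": 50},   # e.g., the ±.005 milled features above
        {"dpu": 0.0500, "opportunities": 10},   # a second, more capable process (hypothetical)
    ]
    total_dpu = sum(p["dpu"] for p in processes)
    total_opportunities = sum(p["opportunities"] for p in processes)
    rollup_fty = first_time_yield(total_dpu)                         # cumulative first time yield
    rollup_sigma = sigma_from_dpo(total_dpu / total_opportunities)   # overall sigma rollup value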
When using an SPC data collection system to develop process capability models, you must have a
very clear understanding of the process and how to set up the system for optimum results. For best
results, I recommend the following:
• Select features and design tolerances to measure that are close to what the process experts consider to
be just within the capability of the process.
• Calculate the standard deviations from the actual target value instead of the nominal dimension if they
are different from each other.
• If possible, use data collected over a long period of time, but extract the short-term data in groups and
average it to determine the standard deviation of a process.
• Use several different features on various types of processes to develop a composite view of a short-
term standard deviation of a specific process.
Selecting features and design tolerances that are very close to the actual tolerance capability of the
process is very important. If the design tolerances are very easily attained, the process will generally be
allowed to vary far beyond its natural variation and the data will not give a true picture of the processes
capability. For example, you may wish to determine the ability of a car to stay within a certain road width.
See Fig. 17-1. To do this, you would measure how far a car varies from a target and record points along
the road. Over a distance of 100 miles, you would collect all the points and calculate the standard
deviation from the center of the road. The standard deviation would then be used with the previous formulas to predict how well the car might stay within a certain width tolerance of a given road. If the
driver was instructed to do his or her best to keep the car in the center of a very narrow road, the variance
would probably be kept at a minimum and the standard deviation would be kept to a minimum. However,
if the road were three lanes wide, and the driver was allowed to drive in any of the three lanes during the
100-mile trip, the variation and standard deviation would be significantly larger than the same car and
driver with the previous instructions.
Figure 17-1 Narrow road versus three-lane road
This same type of activity happens with other processes when the specifications are very wide
compared to the process capability. One way to overcome this problem is to collect data from processes
that have close requirements compared to the processes’ actual capability.
Standard deviations should be calculated from the actual target value instead of the nominal dimen-
sion if they are different from each other. This is very important because it improves the quality of your
answer. Some processes are targeted at something other than the nominal for very good reasons. The
actual process capability is the variation from a targeted position and that is the true process capability.
For example, on a numerically controlled machining center side milling process that machines a nominal
dimension of .500 with a tolerance of +.005/-.000, the target dimension would be .5025 and the nominal
dimension would be .500. If the process were centered on the .500 dimension, the process would result in
defective features. In addition to one-sided tolerance dimensions, individual preferences play an impor-
tant role in determining where a target point is determined. See Fig. 17-2 for a graphical example of how
data collected from a manufacturing process may have a shifting target.
Figure 17-2 Data collected from a process with a shifted target
It is best to collect data from variable information over a long period of time using several different
feature types and conditions. Once collected, organize the information into short-term data subgroups
within a target value. Now calculate the standard deviation of the different subgroups. Then average the
short-term subgroup information after discarding any information that swings abnormally too high or too
low compared to the other information collected. See Fig. 17-3 for an example of how you may wish to
group the short-term data and calculate the standard deviation from the new targets.
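A short Python sketch of that grouping-and-averaging step is shown below. The subgroup data are hypothetical, and each reading is recorded as its deviation from the subgroup's own target.

    import statistics

    # Hypothetical short-term subgroups, each value a deviation from that subgroup's target
    subgroups = [
        [0.0012, -0.0008, 0.0004, -0.0011, 0.0007],
        [-0.0009, 0.0013, 0.0002, -0.0006, 0.0010],
        [0.0005, -0.0014, 0.0009, 0.0001, -0.0007],
    ]
    subgroup_sigmas = [statistics.stdev(group) for group in subgroups]
    # Discard any subgroup that swings abnormally high or low before averaging (not shown),
    # then use the average as the short-term standard deviation of the process.
    short_term_sigma = statistics.mean(subgroup_sigmas)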
A second method for developing process capability models and determining the standard deviation
of a process might include controlled experiments. Controlled experiments are very similar to the SPC data
collection process described above. The difference is in the selection of parts to sample and in the
collection of data. You may wish to design a specific test part with various features and process require-
ments. The test parts could be run over various times or machines using the same processes under
controlled conditions. Data collected would determine the standard deviation of the processes. Other
controlled experiments might include collecting data on a few features of targeted parts over a certain
period of time to result in a composite perspective of the given process or processes. Several different
types of controlled experiments may be used to determine the process capability of a specific process.
A third method of determining the standard deviation of a given process is based on a process
expert’s knowledge. This process might be called the “five sigma rule of thumb” estimation technique for
determining the process capability. To determine a five sigma tolerance of a specific process, talk to
someone who is very knowledgeable about a given process or a process expert to estimate a tolerance that
can be achieved 98%-99% of the time on a generally close tolerance dimension using a specific process.
That feature should be a normal-type feature under normal conditions for manufacturing and would not
include either the best case or worst case scenario for manufacturing. Once determined, divide that
number by 5 and consider it the standard deviation. This estimation process gets you very close to the
actual standard deviation of the process because a five sigma process when used multiple times on a
given part or unit will result in a first time yield of approximately 98% - 99%.
Process experts on the factory floor generally have a very good understanding of process capability
from the perspective of yield percentages. The process in question typically has a good yield with some loss, but is performing well enough that there is no need to change processes. The tolerance sought is generally one that requires close attention to the process, but is not so easily obtained that outside influences skew the natural variations
and distort the data. Even though this method uses expert opinion to determine the short-term standard
deviation and not actual statistical data, it is a quick method for obtaining valuable information when none
is available. Historically, this method has been a very accurate and successful tool in estimating informa-
tion (from process experts) for predicting process capability. In addition to using process experts, toler-
ances may be obtained from reference books and brochures. These tolerances should result in good
quality (98%-100% yield expectations).
Figure 17-3 Averaging and grouping short-term data
Models that are variable-based usually provide the most accurate predictors of quality. There are
several different methods of determining the standard deviation of a process. However, the best method
is to use all three of these techniques with a regressive method to adjust the models until they accurately
predict the process capability. The five sigma rule of thumb will help you closely estimate the correct
answer. Use it when other data is not available or as a check-and-balance against SPC data.
17.3.2 Collecting and Modeling Attribute Process Capability Models
Models that are not variable models are attribute models. Defect information for attribute models is usually
collected as percent good versus bad or yield. An example of an attribute process capability model would
be the painting process. An attribute model can be developed for the painting process in several different
ways based on the type of information that you have.
• At the simplest level, you could just assign an average defect rate for the process of painting.
• At higher levels of complexity, you could assign different defect rates for the various features of the
painting process that affect quality.
• At an even higher level of complexity, you could add interrelationships among different features that
affect the painting process.
17.3.3 Feature Factoring Method
The factoring method assigns a given dpmo to a process as a basis. In the model, all other major quality
drivers are listed. Each quality driver is assigned a defect factor, which may be multiplied times the dpmo
basis to predict a new dpmo if that feature is used on a given design. Factors may have either a positive
or negative effect on the dpmo basis of an attribute model. Each quality driver may be either independent
or dependent upon other quality drivers. If several features with defect factors are concurrently chosen,
they will have a cumulative effect on the dpmo basis for the process. The factoring method gives significant flexibility and allows predictions at both extremes of the quality spectrum. See Fig. 17-4 for an example of the feature factoring method's flexibility with regard to predictions and the dpmo basis.
Figure 17-4 Feature factoring methodology flexibility
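A minimal sketch of the factoring calculation, with hypothetical factors for a painting process, might look like this:

    dpmo_basis = 500.0               # hypothetical base defect rate for the process
    # Defect factors for the features chosen on a given design (hypothetical values);
    # factors above 1.0 raise the predicted dpmo, factors below 1.0 lower it.
    selected_factors = {
        "textured finish": 1.8,
        "masked areas": 1.3,
        "standard color": 0.9,
    }
    predicted_dpmo = dpmo_basis
    for factor in selected_factors.values():
        predicted_dpmo *= factor     # concurrently chosen features have a cumulative effect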
17.3.4 Defect-Weighting Methodology
This defect-weighting method assigns a best case dpmo and a worst case dpmo for the process similar to
a guard-banding technique. Defect driver features are listed and different weights assigned to each. As
different features are selected from the model, the defect weighting of each feature or selection adjusts the
process dpmo accordingly. Generally, when all the best features are selected, the process dpmo remains at
its guard-banded best dpmo rating. And when most or all of the worst features with regards to quality are
selected, the dpmo rating changes to the worst dpmo rating allowed under the guard-banding scenario.
The following steps describe the defect-weighting model.
1. Using either data collected or expert knowledge, determine the dpmo range of the process you are
modeling.
2. Determine the various feature selections that affect the process quality.
3. Assign a number to each of the features that will represent its defect weight with regard to all of the
other feature selections. The weights of all selectable features must total 1.0, and the higher the weight number, the greater its effect on the defect rating. The features may be categorized so that you can choose one feature from each category, with the weights within each category totaling 1.0.
4. Calculate the new dpmo prediction number by subtracting the highest dpmo number from the lowest
dpmo number and multiplying that number times the total weight number. Then add that number to the
lowest dpmo number to get the new dpmo number.
The formula is: new process defect per million opportunity (dpmo) rating
= ((highest dpmo number – lowest dpmo number) × the cumulative weight number) + lowest dpmo number
For example, you may assign the highest dpmo potential to be 2,000 with the lowest dpmo at 100. If the
cumulative weights of the features with defect ratings equal .5, then the new process dpmo rating would
be a dpmo of 1,050 (2000 – 100 = 1,900; 1900 × .5 = 950; 950 + 100 = 1,050).
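The same calculation in a short Python sketch (the guard-band limits and weights are hypothetical, and the selected weights must come from features that total 1.0 across all selections):

    best_dpmo, worst_dpmo = 100.0, 2000.0   # guard-banded limits of the process
    selected_weights = [0.3, 0.2]           # weights of the chosen features; cumulative weight = .5
    cumulative_weight = sum(selected_weights)
    predicted_dpmo = (worst_dpmo - best_dpmo) * cumulative_weight + best_dpmo
    # predicted_dpmo == 1050.0, matching the example above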
See Fig. 17-5 for a graphic of the defect-weighting methodology with regard to guard-banding and
dpmo predictions. This defect-weighting method allows you to set the upper and lower limits of a given
process dpmo rating. The method also includes design features that drive the number of defects. The
design dpmo rating will vary between the dpmo minimum number and the dpmo maximum number. If the
designer chooses features with the higher “weights,” the design dpmo approaches the dpmo maximum. If
the designer chooses features with lower “weights,” the design dpmo approaches the dpmo minimum.
Figure 17-5 Dpmo-weighting and guard-banding technique
17.4 Cost and Cycle Time Prediction Modeling Variations
You might wish to use a combination of both or either of the two previously discussed modeling tech-
niques for your cost and cycle time prediction models. Cost and cycle time may have several different
definitions depending upon your needs and familiar terminology. For the purpose of this example, cost is
defined as the cost of manufacturing labor and overhead. Cycle time is defined as the total hours required to produce a product, from order placement to final delivery. Cost and cycle time will generally have a very
close relationship.
One method for predicting cost of a given product might be to associate a given time to each process
feature of a given design. Multiply the associated process time by the hourly process rate and overhead.
Depending upon the material type and part size, you may wish to also assign a factor to different material
types and part envelope sizes from some common material type and material size as a basis. Variations
from that basis will either factor the manufacturing time and cost up or down. Additional factors may be
applied such as learning curve factors and formulas for lot size considerations. Cost and cycle time
models should also include factors related to the quality predictions to account for scrap and rework
costs. The cycle time prediction portion of the model would be based upon the manufacturing hours
required plus normal queue and wait time between processes. An almost unlimited number of factors can
be applied to cost and cycle time prediction models. Most important is to develop a methodology that
gives you a basis from which to start. Use various factors that will be applied to that basis to model cost
and cycle time predictions.
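One way to sketch such a basis-and-factors model in Python (all rates and factors below are hypothetical placeholders, not values from any particular factory):

    process_hours = 2.5           # time associated with the design's process features
    hourly_rate = 85.0            # labor and overhead rate, dollars per hour (hypothetical)
    material_factor = 1.2         # relative to the common baseline material
    size_factor = 0.9             # relative to the baseline part envelope
    learning_curve_factor = 0.95  # lot size / learning curve adjustment
    predicted_rework_dpu = 0.05   # from the quality model, to account for scrap and rework

    predicted_cost = (process_hours * hourly_rate * material_factor
                      * size_factor * learning_curve_factor) * (1 + predicted_rework_dpu)

    queue_and_wait_hours = 40.0   # normal queue and wait time between processes
    predicted_cycle_time = process_hours * material_factor * size_factor + queue_and_wait_hours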
Cost and cycle time predictions can be very valuable tools when making important design decisions.
Using an interactive predictive model including relative cost predictions would easily allow real-time
what-if scenarios. For example, a design engineer may decide to machine and produce a given part design
from material A. Other options could have been material B, C or D, which have similar properties to material
A. There may not be any difference in material A, B, C or D as far as fit, form or function of the design is
concerned. However, material A could take 50% more process time to complete and thus be 50% more
costly to produce.
Here is an example of how cycle time models might be influential. Take two different chemical corro-
sion resistance processes that yield the same results with similar costs. The difference might only be in the
cycle time prediction model that highlights significant cycle time requirements of different processes due
to where the corrosion resistance process is performed. Process A might be performed in-house or locally
with a short cycle time. Process B might be performed in a different state or country only, which typically
requires a significant cycle time. Overall, cost and cycle time prediction models are very powerful comple-
ments to quality prediction models. They can be very similar in concept or very different from either the
attribute or variable models used in quality predictions.
17.5 Validating and Checking the Results of Your Predictive Models
Making sure your predictive models are accurate is a very important part of the model development
process. The validation and checking process of process capability models is a very iterative process and
may be done using various techniques. Model predictions should be compared to actual results with
modifications made to the predictive model, data collection system, or interpretation of the data as needed.
Models should be compared at the individual model level and at the part or assembly rollup level, which
may include several processes. Validating the prediction model at the model level involves comparing
actual process history to the answer predicted by the interactive model.
With variable models, the model level validation involves comparing both the standard deviation
number and the actual part yields through the process versus the first time yield (fty) prediction of the
process. The second step of the validation process for variable models requires talking with process
experts or individuals that have a very good understanding of the process and its real-world process
capabilities. One method of comparing variable prediction models, standard deviations, and expert opin-
ion involves using the five sigma rule of thumb technique.
A 5.0 sigma rating at a specific tolerance will mathematically relate to a first time yield of 98%-99%
when several opportunities are applied against it. The process experts selected should be individuals on
the factory floor that have hands-on experience with the process rather than statisticians. A process
expert can determine a specific standard deviation number. Ask them to estimate the tolerance that the
process can produce consistently 98%-99% of the time on a close tolerance dimension. The answer given
can be considered the estimated 5.0 sigma process. Using the five sigma rule of thumb technique, divide
the tolerance given by the process experts by 5 to determine the standard deviation for the process. You
would probably want to take a sampling of process experts to determine the number that you will be
dividing by 5. Note that the way you phrase the question to the process experts is very critical. It is very
important to ask the process experts the question with regard to the following criteria:
1. The process needs to be under normal process conditions.
2. The estimate is not based on either best or worst case tolerance capabilities.
3. The tolerance that will yield 98%-99% of the product on a consistent basis is based on a generally close
tolerance and if the tolerance were any smaller, they would expect inconsistent yields from the process.
After receiving the answer from the process experts, repeat back to them the answer that they gave
you and ask them if that is what they understood their answer to be. If they gave you an answer of ±.005,
you might ask the following back to them: Under normal conditions, and a close tolerance dimension for
that process, you would expect ±.005 to yield approximately 98%-99% product that would not require
rework or scrap of the product? Would you expect the same process with ±.004 (four sigma) to yield
approximately 75%-80% yields under normal conditions? If they answer "yes" to both of these questions,
they probably have a good understanding of your previous questions and have given you a good answer
to your question. If you question several process experts and generally receive the same answer, you can
consider it a good estimation of a five sigma process under that tolerance.
Compare the estimated standard deviation from that of your SPC data collection system. If there is
more than a 20% difference between the two, something is significantly wrong and you must revisit both sources of information to determine which is right. The two standard deviation numbers should be within
5%-10% of each other for prediction models to be reasonable.
Overall, the best approach to validating variable models is to use a combination of all three tech-
niques to determine the best standard deviation number to use for the process. To do this, compare:
1. The standard deviation derived from the average short-term SPC data.
2. The standard deviation derived from expert opinion and the five sigma rule of thumb method.
3. Using the standard deviations derived from the two methods listed above, enter them one at a time
into the interactive prediction tool or equations. Then compare the actual process yield results to the yield predictions based on the two standard deviations and design requirements.
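A small Python sketch of the comparison between the SPC-derived and expert-derived standard deviations, using hypothetical numbers:

    spc_sigma = 0.00102                    # average short-term standard deviation from SPC data
    expert_tolerance = 0.005               # tolerance the experts say yields 98%-99% consistently
    expert_sigma = expert_tolerance / 5    # five sigma rule of thumb

    difference = abs(spc_sigma - expert_sigma) / expert_sigma
    if difference > 0.20:
        print("More than 20% apart - revisit both sources of information")
    else:
        print("Standard deviations agree within", round(difference * 100, 1), "percent")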
Attribute models are also validated at the model level by comparing actual results to predictive
results of the individual model. Similarly, expert opinions are very valuable in validating the models when
actual data at the model level cannot be extracted. The validation of attribute models can be achieved by
reviewing a series of predictions under different combinations of selections with factory process experts.
The process experts should be asked to agree or disagree with different model selection combinations and
results. The models should be modified several times until the process experts agree with the model’s
resulting predictions. Actual historical data should be shared with the process experts during this process
to better understand the process and information collected.
In addition to model validation at the individual model level, many processes and combinations of
processes need to be validated at the part or assembly rollup level. Validation at the rollup level requires
that all processes be rolled up together at either the part or subassembly level and actual results compared
to predictions. For a cost rollup validation on a specific part, the cost predictions associated with all
processes should be added together and compared to the total cost of the part for validation. For a quality
rollup validation on a specific part, all dpu predictions should be added up and converted to yield for
comparison to the actual yield of manufacturing that specific part.
17.6 Summary
Both international and industrial competition motivate us to stay on the cutting edge of technology with
our designs and manufacturing processes. New technologies and innovative processes like those de-
scribed in this chapter give design engineers significant competitive advantage and opportunity to de-
sign for success. Today’s design engineers can work analytical considerations for manufacturing cost,
quality, and cycle time into new designs before they are completed and sent to the factory floor.
The new techniques and technology described in this chapter have been recently implemented at a
few technically aggressive companies in the United States with significant cost-saving results. The
impact of this technology includes more than $50 million of documented cost savings during the first year
of deployment at just one of the companies using the technology! With this kind of success, we need to
continue to focus on adopting and using new technologies such as those described in this chapter.
17.7 References
1. Bralla, James G. 1986. Handbook of Product Design for Manufacturing. New York, New York: McGraw-Hill
Book Co.
2. Dodge, Nathon. 1996. Michael King: Interview and Discussion about PCAT. Texas Instruments Technical
Journal. 31(5):109-111.
3. Harry, Mikel J. and J. Ronald Lawson. 1992. Six Sigma Producibility Analysis and Process Characterization.
Reading, Massachusetts: Addison-Wesley Publishing Company.
4. King, Michael. 1997. Designing for Success. Paper presented at Applied Statistical Tools and Techniques
Conference, 15 October, 1997, at Raytheon TI Systems, Dallas, TX.
5. King, Michael. 1996. Improving Mechanical / Metal Fabrication Designs. Process Capability Analysis Toolset
Newsletter. Dallas, Texas: Raytheon TI Systems
6. King, Michael. 1994. Integration and Results of Six Sigma on the DNTSS Program. Paper presented at Texas
Instruments 1st Annual Process Capability Conference. 27 October, 1994, Dallas, TX.
7. King, Michael. 1994. Integrating Six Sigma Tools with the Mechanical Design Process. Paper presented at Six
Sigma Black Belt Symposium. Chicago, Illinois.
8. King, Michael. 1992. Six Sigma Design Review Software. TQ News Newsletter. Dallas, Texas: Texas Instruments,
Inc.
9. King, Michael. 1993. Six Sigma Software Tools. Paper presented at Six Sigma Black Belt Symposium. Rochester,
New York.
10. King, Michael. 1994. Using Process Capability Data to Improve Casting Designs. Paper presented at Interna-
tional Casting Institute Conference. Washington, DC.
P • A • R • T • 5
GAGING
Paper Gage Techniques
Martin P. Wright
Behr Climate Systems, Inc.
Fort Worth, Texas
Martin P. Wright is supervisor of Configuration Management for Behr Climate Systems, Inc. in Fort
Worth, Texas, where he directs activities related to dimensional management consulting and company
training programs. He has more than 20 years of experience utilizing the American National Standard
on Dimensioning and Tolerancing and serves as a full-time, on-site consultant assisting employees with
geometric tolerancing applications and related issues. Mr. Wright has developed several multilevel
geometric tolerancing training programs for several major companies, authoring workbooks, study
guides, and related class materials. He has instructed more than 4,500 individuals in geometric
tolerancing since 1988.
Mr. Wright is currently an active member and Working Group leader for ASME Y14.5, which devel-
ops the content for the American National Standard on dimensioning and tolerancing. He also serves
as a member of the US Technical Advisory Group (TAG) to ISO TC213 devoted to dimensioning, toler-
ancing, and mathematization practices for international standards (ISO). In addition to these stan-
dards development activities, Mr. Wright serves as a member and/or officer on six other technical
standard subcommittees sponsored by the American Society of Mechanical Engineers (ASME).
18.1 What Is Paper Gaging?
Geometric Dimensioning and Tolerancing (GD&T) as defined by ASME Y14.5M-1994 provides many
unique and beneficial concepts in defining part tolerances. The GD&T System allows the designer to
specify round, three-dimensional (3-D) tolerance zones for locating round, 3-D features (such as with a
pattern of holes). The system also offers expanded concepts, such as the maximum material condition (MMC) principle, which allows additional location tolerance based on the produced size of the feature.
(See Chapter 5.) These concepts work well in assuring that part features will function as required by the
needs of the design, while maximizing all available production tolerances for the individual workpiece.
Although these tolerancing concepts are beneficial for both design and manufacturing, their use can pose
some unique problems for the inspector who must verify the requirements.
It is widely recognized that, in terms of inspection, the optimum means for verifying part conformance
to geometric tolerancing requirements is through the use of a fixed-limit gage. (See Chapter 19.) This gage
is essentially the physical embodiment of a 3-D, worst case condition of the mating part. If the part fits into
the functional gage, the inspector may also be assured that it will assemble and interchange with its
mating part. Since the gaging elements are fixed in size, the additional location tolerance allowed for a
larger produced hole (or the dynamic “shift” of a datum feature subject to size variation) is readily
captured by the functional gage. Additionally, functional gages are easily used by personnel with minimal
inspection skills and they can significantly reduce overall inspection time. However, there are drawbacks
to using functional gages. They are expensive to design, build, and maintain, and they require that a
portion of the product tolerance be sacrificed (usually about 10%) to provide tolerance for producing the
gage itself. For these reasons, use of functional gages is generally limited to cases where a large quantity
of parts are to be verified and the reduced inspection time will offset the cost of producing the gage.
Verification of geometric tolerances for the vast majority of produced parts is accomplished through
the use of data collected either manually in a layout inspection, or electronically using a Coordinate
Measuring Machine (CMM). Either method requires the inspector to lock the workpiece into a frame of
reference as prescribed by the engineering drawing and take actual measurements of the produced fea-
tures. The inspector must then determine “X” and “Y” coordinate deviations for the produced features by
comparing the actual measured values to the basic values as indicated on the drawing. Typically, these
coordinate deviations are used in determining positional tolerance error for the produced feature through
one of two methods: mathematical conversion of the coordinate deviations or by use of a paper gage.
Paper gaging is one of several common inspection verification techniques that may be used to ensure
produced feature conformance to an engineering drawing requirement. This technique, also referred to as
Soft Gaging, Layout Gaging, or Graphical Inspection Analysis, provides geometric verification through a
graphical representation and manipulation of the collected inspection data. Cartesian coordinate devia-
tions derived from the measurement process are plotted on to a coordinate grid, providing a graphical
“picture” of the produced feature locations in relation to their theoretically “true” location.
Modern tolerancing methods as defined throughout ASME Y14.5M-1994 prescribe that round fea-
tures, such as holes, be located within round tolerance zones. However, most dimensional inspection
techniques measure parts in relation to a square, Cartesian coordinate system. Paper gaging provides a
convenient and accurate method for converting these measured values into the round, polar coordinate
values required in a positional tolerance verification. This is accomplished graphically by superimposing
a series of rings over the coordinate grid that represents the positional tolerance zones.
18.2 Advantages and Disadvantages to Paper Gaging
Since the optimum means for verifying a geometric tolerancing requirement is through the use of a fixed-limit gage,
the primary advantage provided by paper gaging lies in its ability to verify tolerance limits similar to those
of a hard gage. Paper gaging techniques graphically represent the functional acceptance boundaries for
the feature, without the high costs of design, manufacture, maintenance, and storage required for a fixed-
limit gage. Additionally, paper gaging does not require that any portion of the product tolerance be
sacrificed for gage tolerance or wear allowance.
Paper gaging is also extremely useful in capturing dynamic tolerances found in datum features sub-
ject to size variation or feature-to-feature relationships within a pattern of holes. Neither of these can be
effectively captured in a typical layout inspection. The ability to manipulate the polar coordinate overlay
used in the paper gage technique gives the inspector a way to duplicate these unique tolerance effects.
Since it provides a visual record of the actual produced features, paper gaging can be an extremely
effective tool for evaluating process trends and identifying problems. Unlike a hard gage, which simply
verifies GO/NO-GO attributes of the workpiece, the paper gage can provide the operator with a clear
illustration of production problems and the precise adjustment necessary to bring the process back into
control. Factors such as tooling wear and misalignment can readily be detected during production through
periodic paper gaging of verified parts. Additionally, paper gages can be easily stored using minimal, low-
cost space.
The primary drawback to the paper gage method of verification is that it is much more labor-intensive
than use of a fixed-limit gage. Paper gaging requires a skilled inspector to extract actual measurements
from the workpiece, then translate this data to the paper gage. For this reason, paper gaging is usually
considered only when the quantity of parts to be verified is small, or when parts are to be verified only as
a random sampling.
18.3 Discrimination Provided By a Paper Gage
With paper gaging, the coordinate grid and polar overlay are developed proportionately relative to one
another and do not necessarily represent a specific measured value. Because they are generic in nature,
the technique may be used with virtually any measurement discrimination. The spacing between the lines
of the coordinate grid may represent .1 inch for verification of one part, and .0001 inch for another.
A typical inspection shop may only need to develop and maintain three or four paper gage masters.
Each master set would represent a maximum tolerance range capability for that particular paper gage. The
difference between them would be the number of grid lines per inch used for the coordinate grid. More grid
lines per inch on the coordinate grid allow a wider range of tolerance to be effectively verified by the paper
gage. However, an increase in the range of the paper gage lowers the overall accuracy of the plotted data.
The inspector should always select an appropriate grid spacing that best represents the range of toler-
ance being verified.
18.4 Paper Gage Accuracy
A certain amount of error is inherent in any measurement method, and paper gages are no exception. The
overall accuracy of a paper gage may be affected by factors such as error in the layout of the lines that
make up the graphs, coefficient of expansion of the material used for the graphs or overlays, and the
reliability of the inspection data. Most papers tend to expand with an increase in the humidity levels and,
therefore, make a poor selection for grid layouts where fine precision is required. Where improved accu-
racy is required, Mylar is usually the material of choice since it remains relatively stable under normal
changes in temperature and humidity.
By amplifying (enlarging) the grid scale, we can reduce the effects of layout error in the paper gage.
Most grid layout methods will provide approximately a .010 inch error in the positioning of grid lines. From
this, the apparent error provided by the grid as a result of the line positioning error of the layout may be
calculated as follows:
Apparent Layout Error = Line Position Error / Scale Factor
For example, if a 10 × 10 to-the-inch grid is selected, with each line of the grid representing .001, a scale
factor of 100-to-1 is provided, resulting in an apparent layout error for the grid of .0001 inch. However, if a
5 × 5 to-the-inch grid is selected, with each line of the grid representing .001, a scale factor of 200-to-1 is
provided, resulting in an apparent layout error for the grid of only .00005 inch.
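The same arithmetic, written as a small Python helper (the function and argument names are mine):

    def apparent_layout_error(line_position_error, grid_lines_per_inch, inches_per_line):
        # Scale factor = distance on the paper gage / part distance it represents
        scale_factor = (1.0 / grid_lines_per_inch) / inches_per_line
        return line_position_error / scale_factor

    apparent_layout_error(0.010, 10, 0.001)   # 10 x 10 grid, .001 per line -> .0001
    apparent_layout_error(0.010, 5, 0.001)    # 5 x 5 grid, .001 per line   -> .00005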
18.5 Plotting Paper Gage Data Points
It is extremely important for all users to plot data points on the coordinate grid of a paper gage in the same
manner. This is a mandatory requirement in order to maintain consistency and to provide an accurate
representation of the produced part. Inadvertently switching the X and Y values, or plotting the points in
the wrong direction (plus or minus) will result in an inaccurate picture of the produced part features. This
renders the paper gage useless as an effective process analysis tool.
On the engineering drawing, each hole or feature has a basic or “true” location specified. If the hole
or feature were located perfectly, the measured value and the basic value would be the same. It could
therefore be stated that the theoretical address of the hole or feature at true position is X=0, Y=0. Since
geometric location tolerances are only concerned with the deviation from true position, the center of the
coordinate grid may be used to represent the theoretical address for each feature being verified.
The data points represent deviations from true position and should always be plotted on the coordi-
nate grid based on the relationship to its theoretical address and in a manner consistent with the view in
which the holes are specified. For example, when plotting the X deviation for a hole, the data point is
considered to have a plus X value where the feature falls to the right of its theoretical address, and a minus
X value where it falls to the left of its theoretical address. When plotting the Y deviation, the data point is
considered to have a plus Y value where the feature falls above its theoretical address, and a minus Y
value where it falls below the theoretical address. See Fig. 18-1. Consistently following this methodology
for plotting the data points will assure the reliability of the paper gage for both tolerance evaluation and
process analysis.
Figure 18-1 Directional indicators for data point plotting
18.6 Paper Gage Applications
The following examples illustrate some of the common applications for paper gages in evaluating part
tolerances and analyzing process capabilities. Although these examples illustrate just a few of the many
uses for a paper gage, they provide the reader with an excellent overview as to the effectiveness and
versatility of this valuable manufacturing and inspection tool.
18.6.1 Locational Verification
Development of a functional gage to verify feature locations may not be practical or cost effective for
many parts. For example, parts that will be produced in relatively small quantities, or parts that will fall
under some type of process control where part verification will only be done on a random, periodic basis
may not require production of a functional gage. For these parts, it may be more cost effective to verify the
tolerances manually using data collected from a layout inspection. This data may then be transferred to a
paper gage to verify the locational attributes of the features (similar to a fixed-limit gage) for only a fraction
of the cost.
18.6.1.1 Simple Hole Pattern Verification
The following example illustrates how the paper gage may be used to verify the locational requirement of
the hole pattern for the part shown in Fig. 18-2. The drawing states that the axis of each hole must lie within
a Ø.010 tolerance zone when produced at their maximum material condition size limit of Ø.309. Since an
MMC modifier has been specified, additional locational tolerance is allowed for the holes as they depart
their MMC size limit (get larger) by an amount equal to the departure.
Figure 18-2 Example four-hole part
A layout inspection requires that the inspector collect actual measurements from the produced part
and compare these with the tolerances indicated by the engineering drawing. The actual measurement
data may be obtained electronically using a CMM or manually using a surface table and angle plate setup.
The data collected from a layout inspection provides actual “X” and “Y” values for the location of
features in relation to the measurement origin. That is, the measurement provided is always in relation to
a Cartesian Coordinate frame of reference.
In evaluating the locational requirements for the hole pattern, the inspector must first verify that all
holes fall within their acceptable limits of size. The inspector must also know the produced size of each
hole in order to determine the amount of positional tolerance allowed for each hole. To determine the
produced hole size, the inspector inserts the largest gage pin possible into each of the holes. This
effectively defines the actual mating size of the hole, allowing the inspector to calculate the amount of
additional positional tolerance (bonus tolerance) allowed for location. The difference between the actual
mating size and the specified MMC size is the allowed bonus tolerance. This tolerance may be added to
the tolerance value specified in the feature control frame.
Once it has been determined that the hole sizes are within acceptable limits, the inspector must set up the
part to measure the hole locations. He accomplishes this by relating the datum features specified by the
feature control frame to the measurement planes of the inspector’s equipment (i.e., surface table, angle plate).
The inspector MUST use the datum features in the same sequence as indicated by the feature control frame.
The final setup for the sample part shown above may resemble the part illustrated in Fig. 18-3.
The pins placed in the holes aid the inspector when measuring the hole location. Actual “X” and “Y”
measurements are made to the surface of the pin and as near to the part face as practicable. With the size
of each pin known, adding 1/2 of the pin’s diameter to the measured value will provide the total actual
measurement to the center of each hole.
Once the part is locked into the datum reference frame, measurements are made in an “X” and a “Y”
direction and the data is recorded on the Inspection Report for final evaluation. This evaluation involves
taking the coordinate data from the actual measurements and converting it into a round positional toler-
ance. Table 18-1 illustrates a sample Inspection Report that provides the data for paper gage evaluation of
the hole pattern.
Table 18-1 Layout Inspection Report of four-hole part
LAYOUT INSPECTION REPORT

NO.   FEATURE SIZE   MMC    ACTUAL   DEV.   ALLOW TOL.   X BASIC   X ACTUAL   X DEV    Y BASIC   Y ACTUAL   Y DEV    ACCEPT   REJECT
 1    .312±.003      .309    .311    .002     Ø.012       1.500     1.503     +.003     2.500     2.501     +.001      X
 2    .312±.003      .309    .313    .004     Ø.014       1.500     1.505     +.005     1.000      .998     -.002      X
 3    .312±.003      .309    .312    .003     Ø.013       4.500     4.496     -.004     2.500     2.497     -.003      X
 4    .312±.003      .309    .310    .001     Ø.011       4.500     4.494     -.006     1.000     1.002     +.002      X
Figure 18-3 Layout inspection of four-hole part
Using the data from the Inspection Report, the information is then transferred to the paper gage by
plotting each of the holes on a coordinate grid as shown in Fig. 18-4. The center of the grid represents the
basic or true position (theoretical address 0,0) for each of the holes. Their actual location in relation to their
theoretical address is plotted on the grid using the X and Y deviations from the Inspection Report.
Figure 18-4 Plotting the holes on the coordinate grid
Once the holes have been plotted onto the coordinate grid, a polar coordinate system (representing the round positional tolerance zones) is laid over the coordinate grid. See Fig. 18-5. The rings of the polar coordinate system represent the range of positional tolerance zones as allowed by the drawing specification: Ø.010 positional tolerance allowed for a Ø.309 hole, up to Ø.016 allowed for a Ø.315 hole. With the center of the polar coordinate system aligned with the center of the coordinate grid, the inspector then visually verifies that each plotted hole falls inside its allowable position tolerance. If all the holes fall inside their zones, the part is good and the inspector is done.

Figure 18-5 Overlaying the polar coordinate system
For the example, all of the holes fall inside their respective tolerance zones, with the exception of hole
#4 which is required to be inside a Ø.011 tolerance zone. However, the paper gage shows that the hole does
fall inside a Ø.013 ring. With the MMC concept, the hole may be enlarged by Ø.002 to a size of Ø.312, which
in turn increases the allowable positional tolerance to Ø.013. This brings the hole into compliance with the
drawing specification.
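The check that the paper gage performs graphically can also be sketched numerically. The following Python fragment converts each hole's coordinate deviations from Table 18-1 into a diametral position deviation and compares it with the Ø.010 tolerance plus the bonus from size departure beyond MMC; the data values are the ones listed in the table.

    import math

    mmc_size, position_tol_at_mmc = 0.309, 0.010
    # (actual size, X deviation, Y deviation) for holes 1-4 from Table 18-1
    holes = [(0.311, 0.003, 0.001),
             (0.313, 0.005, -0.002),
             (0.312, -0.004, -0.003),
             (0.310, -0.006, 0.002)]

    for number, (size, dx, dy) in enumerate(holes, start=1):
        allowed = position_tol_at_mmc + (size - mmc_size)   # bonus tolerance at the produced size
        actual = 2 * math.sqrt(dx**2 + dy**2)               # diametral deviation from true position
        print(f"hole {number}: needs {actual:.4f}, allowed {allowed:.4f},",
              "accept" if actual <= allowed else "reject as produced")

Hole #4 shows up as a reject at its produced size of Ø.310, which is exactly the condition the paper gage reveals; enlarging the hole to Ø.312 raises the allowance to Ø.013 and brings it into compliance.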
18.6.1.2 Three-Dimensional Hole Pattern Verification
In the previous example, the holes were verified using a two-dimensional (2-D) analysis of the hole
pattern using only measurements taken along the X and Y axes. This is a common practice used in
reducing overall inspection time. By using only a 2-D analysis of the hole pattern, the inspector takes a
calculated risk that the holes will remain relatively perpendicular based on known capabilities of the
processes. Longer holes (usually 1/2-inch in length or longer) should be verified through a 3-D analysis
of the hole pattern.
Fig. 18-6 illustrates the part used in the previous example except that the part thickness is greatly
increased, making the length of the holes approximately 1-1/2 inches long. The part must be verified three-
dimensionally to ensure that the entire length of the hole resides within the specified positional tolerance.
Figure 18-6 Example four-hole part with long holes
Setup and measurement of the workpiece is done in a manner similar to that used for the 2-D analysis
except that the inspector must now collect two sets of measurements— one set for each end of the hole.
Collecting data from each end of the hole allows the inspector to plot both ends of the hole axis on the
coordinate grid of the paper gage, providing a 3-D rendering of the hole axis. Table 18-2 illustrates a sample
Inspection Report used for a 3-D analysis of the hole pattern.