
Springer Series in Materials Science 123

Editors: R. Hull, R.M. Osgood, Jr., J. Parisi, H. Warlimont
The Springer Series in Materials Science covers the complete spectrum of materials physics,
including fundamental principles, physical properties, materials theory and design. Recognizing
the increasing importance of materials science in future device technologies, the book titles in this
series reflect the state-of-the-art in understanding and controlling the structure and properties
of all important classes of materials.
Please view available titles in the Springer Series in Materials Science on the series homepage.

Roman Louban
Image Processing
of Edge and Surface Defects
Theoretical Basis of Adaptive Algorithms
with Numerous Practical Applications
With 118 Figures
Dr. Roman Louban
Thermosensorik GmbH
Am Weichselgarten 7, 91058 Erlangen, Germany
E-mail:
Series Editors:
ProfessorRobertHull
University of Virginia
Dept. of Materials Science and Engineering
Thornton Hall
Charlottesville, VA 22903-2442, USA
Professor R.M. Osgood, Jr.
Microelectronics Science Laboratory


Department of Electrical Engineering
Columbia University
Seeley W. Mudd Building
New York, NY 10027, USA
Professor Jürgen Parisi
Universität Oldenburg, Fachbereich Physik
Abt. Energie- und Halbleiterforschung
Carl-von-Ossietzky-Straße 9–11
26129 Oldenburg, Germany
Professor Hans Warlimont
DSL Dresden Material-Innovation GmbH
Pirnaer Landstr. 176
01257 Dresden, Germany
Springer Series in Materials Science ISSN 0933-033X
ISBN 978-3-642-00682-1 e-ISBN 978-3-642-00683-8
DOI 10.1007/978-3-642-00683-8
Springer Dordrecht Heidelberg London New York
Library of Congress Control Number: 2009929025
© Springer-Verlag Berlin Heidelberg 2009
This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broad-
casting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this
publication or parts thereof is permitted only under the provisions of the German Copyright Law of
September 9, 1965, in its current version, and permission for use must always be obtained from Springer.
Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
To my wife Olga
Preface
The human ability to recognize objects on various backgrounds is amazing.
Industrial image processing has often tried to imitate this ability with
techniques of its own.
This book discusses the recognition of defects on free-form edges and in-
homogeneous surfaces. My many years of experience have shown that such a
task can be solved efficiently only under particular conditions. Inevitably, the
following questions must be answered: How did the defect come about? How
and why is a person able to recognize a specific defect? In short, one needs an
analysis of the process of defect creation as well as an analysis of its detection.
As soon as the principle of these processes is understood, the processes can
be described mathematically on the basis of an appropriate physical model
and can then be captured in an algorithm for defect detection. This approach
can be described as “image processing from a physicist’s perspective”. I have
successfully used this approach in the development of several industrial image
processing systems and improved upon them in the course of time. I would like
to present the achieved results in a hands-on book on the basis of edge-based
algorithms for defect detection on edges and surfaces.
I would like to thank all who have supported me in writing this book.
My special thanks to Charlotte Helzle, Managing Director, hema electronic
GmbH, Aalen, Germany. During my 12 years of cooperation with that com-
pany, I have had the opportunity to transform many projects in industrial im-
age processing from proof of concept to the development stage and bring them
into service. I would also like to thank Professor Joachim P. Spatz, Managing
Director, Department of New Materials and Biosystems at the Max-Planck
Institute for Metals Research, Stuttgart, Germany, who gave me permission

to use the corresponding applications of adaptive algorithms as illustrative
examples in my book. I thank the foundation All M.C. Escher Works, Cordon
Art, Baarn, Holland, and the magazine Qualität und Zuverlässigkeit from Carl
Hanser for permitting me to use their images as illustrations in this book.
My personal thanks go to Michael Rohrbacher, my former supervisor and
a good friend, for having incessantly supported and encouraged me. I thank
Jürgen Kraus for the creative support in the development of the Christo
function, which plays a fundamental role in defect detection.
I especially thank my children, Anna Louban, who is a student at the
University of Konstanz, Germany, and Ilia Louban, who is a doctoral candi-
date at the Institute for Physical Chemistry, Biochemistry Group, University
of Heidelberg, Germany, as well as another doctoral candidate of the Institute
for Physical Chemistry, Patrick Hiel, for thoroughly proofreading the entire
book and for their numerous suggestions for improvement. I also would like to
express my sincere thanks to Konstantin Sigal and Alexandra Lyon, without
whose help the English version of this book would not have been possible.
I sincerely thank the employees of Springer, particularly Dr. habil. Claus
E. Ascheron, Executive Editor Physics, for taking personal interest in this
book and for the support in every phase of its creation.
I thank all readers in advance for their suggestions for improvement and for
their compliments.
Crailsheim, Germany Roman Louban
June 2009
Contents
1 Introduction 1
1.1 What Does an Image Processing Task Look Like? 1
1.2 Conventional Methods of Defect Recognition 3
1.2.1 Structural Analysis 3
1.2.2 Edge-Based Segmentation with Pre-defined Thresholds 5
1.3 Adaptive Edge-Based Object Detection 6
2 Edge Detection 9
2.1 Detection of an Edge 9
2.1.1 Single Edge 10
2.1.2 Double Edge 21
2.1.3 Multiple Edges 24
2.2 Non-Linear Approximation as Edge Compensation 27
3 Defect Detection on an Edge 31
3.1 Defect Recognition on a Regular Contour 32
3.2 Defect Detection on a Dented Wheel Contour 33
3.3 Recognition of a Defect on a Free-Form Contour 34
3.3.1 Fundamentals on Morphological Enveloping Filtering 37
3.3.2 Defect Recognition on a Linear Edge Using an Envelope Filter 43
3.3.3 Defect Recognition on a Free-Form Edge Using an Envelope Filter 44
4 Defect Detection on an Inhomogeneous High-Contrast Surface 47
4.1 Defect Edge 47
4.2 Defect Recognition 50
4.2.1 Detection of Potential Defect Positions 51
4.2.2 100% Defect Positions 56
4.2.3 How Many 100% Defect Positions Must a Real Defect Have? 57
4.2.4 Evaluation of Detected Defects 60
4.3 Setup of Adaptivity Parameters of the SDD Algorithm 60
4.4 Industrial Applications 64
4.4.1 Surface Inspection of a Massive Metallic Part 64
4.4.2 Surface Inspection of a Deep-Drawn Metallic Part 65
4.4.3 Inspection of Non-Metallic Surfaces 65
4.4.4 Position Determination of a Welded Joint 66
4.4.5 Robot-Assisted Surface Inspection 68
5 Defect Detection on an Inhomogeneous Structured Surface 71
5.1 How to Search for a Blob? 71
5.2 Adaptive Blob Detection 73
5.2.1 Adaptivity Level 1 74
5.2.2 Further Adaptivity Levels 79
5.3 Setup of Adaptivity Parameters of the ABD Algorithm 81
5.4 Industrial Applications 83
5.4.1 Cell Inspection Using Microscopy 84
5.4.2 Inspection of a Cold-Rolled Strip Surface 85
5.4.3 Inspection of a Wooden Surface 86
6 Defect Detection in Turbo Mode 93
6.1 What is the Quickest Way to Inspect a Surface? 93
6.2 How to Optimize the Turbo Technique? 95
7 Adaptive Edge and Defect Detection as a Basis for Automated Lumber Classification and Optimisation 99
7.1 How to Grade a Wood Cutting? 99
7.1.1 Boundary Conditions 100
7.1.2 Most Important Lumber Terms 100
7.2 Traditional Grading Methods 101
7.2.1 Defect-Related Grading 101
7.2.2 Grading by Sound Wood Cuttings 102
7.3 Flexible Lumber Grading 103
7.3.1 Adaptive Edge and Defect Detection 104
7.3.2 Defect-Free Areas: From “Spaghetti” to “Cutting” 104
7.3.3 Simple Lumber Classification Using Only Four Parameters 106
7.3.4 The 3-Metres Principle 116
7.3.5 Grading of Lumber with Red Heart 119
7.4 The System for Automatic Classification and Sorting of Hardwood Lumber 123
7.4.1 Structure of the Vision System 123
7.4.2 User Interface 124
8 Object Detection on Images Captured Using Special Equipment 129
8.1 Evaluation of HDR Images 129
8.2 Evaluation of X-ray Images 131
9 Before an Image Processing System is Used 135
9.1 Calibration 135
9.1.1 Evaluation Parameters 136
9.1.2 Industrial Applications 141
9.2 Geometrical Calibration 142
9.2.1 h-Calibration 144
9.2.2 l-Calibration 149
9.3 Smallest Detectable Objects 158
9.3.1 Technical Pre-Condition for Minimal Object Size 158
9.3.2 Minimum Detectable Objects in Human Perception 159
References 161
Index 165
1
Introduction
This is obvious, Watson!
Sherlock Holmes
Industrial image processing is gaining more and more importance as a testing
methodology. One of the most challenging and complex problems of industrial

image processing is surface inspection, which is the process aimed at detecting
a defect on a surface. Often, the surface to be inspected is inhomogeneous and
of high contrast. Brightness fluctuations on the surface are common. Still,
all defects need to be detected in spite of these difficulties, and without
regular objects being identified as defects.
There are a number of image processing systems that are able to carry
out surface inspection more or less successfully. However, the requirements
of industry are growing so rapidly and on such a large scale that existing
systems can no longer satisfy the demand. The reason for this is not the
computing capacity of an image processing system but the methods used for
the recognition of defects.
This book will present an approach to this problem that allows the devel-
opment of an algorithm suitable for the recognition of a surface defect. This
algorithm has been implemented as C-library functions for Seelector by hema
electronic GmbH (a digital signal processing image processing system) [1] and
as plug-ins for NeuroCheck (a PC image processing system) [2] and has been
successfully tested in several applications. This algorithm will be presented in
this book and demonstrated with numerous examples.
1.1 What Does an Image Processing Task Look Like?
As with any task, preparation is of paramount importance; a correctly defined
problem is already half solved. Unfortunately, in the field of surface inspection,
a detailed and, above all, correct definition of the defects
to be detected is rarely available. Typically, all defects are captured by
photography, and they are logged into a defect catalogue. A further descrip-
tion of these defects is often performed in a formal way, where size, form,
orientation, and, at best, brightness of a defect are taken into consideration.
But, when a more tangible defect definition is asked for, there is a “detailed”
explanation: “Well, can’t you see it?!” [2]. This is true: what you see is usually
enough for a human. Human beings learn to detect defects according to their

characteristic features of which they are not explicitly aware, and are able to
recognize them even if those defects were not explicitly defined earlier. All
this is done in the background of this process according to a “program” that
has been developed and refined in the course of human evolution. But how
could an image processing system, which is a machine, achieve such a perfor-
mance? In this context, one has often heard the well-intended advice:
“Don’t you bother, the computer will do it!” But the problem is that
a computer must first be programmed by a human.
Well, how does a human see? What are the features of an object that he
really perceives?
Let us take a look at the famous picture “Waterfall” by M.C. Escher
(Fig. 1.1). At first sight, the water is flowing upward, which is impossible
according to the rules of gravity. The artist and our minds play tricks on us.
But if we take a closer look, we are able to understand how this illusion is
created. Which features of the picture are true to reality and how do we
recognize them? We know that water never flows upward and thanks to our
knowledge of physics we do not believe the illusion. This helps us to get
behind the painter’s tricks and to perceive the features of the picture that are
unobtrusive but “valid”.
The same applies for defect recognition: Because of a formal defect descrip-
tion, many image processing methods rely on formal features of the defect
to be detected.
But, the creation of a defect is a physical process. The properties of the
damaged material and the processing deformations induced by surface damage
determine the appearance of a defect. The characteristic features thus created
enable the explicit recognition of such a defect. This is why the analysis of the
physical nature of a defect is a basic part of the approach to defect recognition
presented in this book.
In order to stress the difference between this and conventional methods
of surface defect recognition, we shall first give a review of these methods.

More detailed information on conventional methods is given in several books
on digital image processing, e.g., [3].
Fig. 1.1. M.C. Escher’s “Waterfall”. © 2009 The M.C. Escher Company, Holland.
All rights reserved
1.2 Conventional Methods of Defect Recognition
1.2.1 Structural Analysis
One of the most common image processing methods used to recognize a spe-
cific object type on a surface is structural analysis [3]. It provides hundreds
of statistical features in order to describe an object and thus to recognize it [4].
These textural features are calculated directly on the basis of the image to
be analysed or on the basis of a histogram or a gradient image [5,6] captured
from the source image. This technique can be refined by increasing the number
of iterations or features.
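A few such histogram-derived features can be sketched in a handful of lines. This is an illustrative toy, not the feature set of the cited works; the function name and the particular features (mean, variance, entropy) are assumptions chosen for brevity:

```python
import numpy as np

def histogram_texture_features(gray):
    """Compute a few classic histogram-based texture features
    (mean, variance, entropy) of an 8-bit grey-scale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                    # normalized histogram
    levels = np.arange(256)
    mean = (levels * p).sum()
    variance = ((levels - mean) ** 2 * p).sum()
    nz = p[p > 0]                            # skip empty bins for log
    entropy = -(nz * np.log2(nz)).sum()
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

Real structural-analysis systems compute hundreds of such values (also from gradient and co-occurrence representations), which is precisely the scale problem discussed below.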
The methods of structural analysis can be successfully applied where detec-
tion and classification of artificial defects are concerned, which is the case, for
example, in defect detection on printed products (in paper or textile industry).
However, all these methods consider only the formal aspects of the ob-
jects (defects) without taking into account their natural properties. This is
the reason why they fail to recognize non-artificial objects which are never
identical to the reference objects and are in an inhomogeneous environment.
The number of pseudo-defects increases rapidly.
Furthermore, the number of features necessary for the recognition of ob-
jects increases so vastly that a control of such recognition systems is almost
impossible. More than 1500 textural features are currently used for defect
recognition [7].
Support from neural networks is of little help. Consequently, the so-called
feature clouds in a multi-dimensional feature space become more and
more blurred as the number of learned objects increases, so that the defect
recognition capability of an image processing system decreases, whereas the
recognition of pseudo-defects increases. The following example illustrates this
process. A fork with four tines and a knife (Fig. 1.2a) are two completely
different objects that an image processing system can learn to recognize and
perfectly separate using structural analysis software. Let us expand the
terms “fork” and “knife” as follows. First, we add a more slender three-tined
fork to the four-tined fork, then another, even more slender and longer
two-tined fork, and a meat fork. The knives are added to include more and
more slender, shorter, and unusual knives: e.g., a cheese knife with cut-outs
in the middle of the blade and two horns at the tip (Fig. 1.2b).

Fig. 1.2. Structural analysis of objects. (a) Reference objects, (b) expanded object range

We make the
image processing system learn all the new objects. Despite the fact that this
expansion has led to a major change in appearance of the objects in question,
a person still considers the ensemble as two different groups of objects: forks
and knives. An image processing system, however, even supported by neuronal
networks, may assign the meat fork and the cheese knife to the same object
class, as each one of these is a boundary object of its group. The reason for this
is both the high similarity of these objects and the immense deviation of the
remaining learned objects from one another within every reference group. Two
different groups are classed as one, and once the following test is complete,
the four-tined fork and a knife will be incorrectly classed as related objects.
Another method used to detect a defect on a sample image is edge-based
segmentation [3]. Here the detection of edges of an object plays a major role.
The most sophisticated general edge localization is done by transforming
the entire image to an edge image using different cut-off filters.
Besides the high computing effort, this method is at a disadvantage in
that the image is processed, i.e., changed by filtering, which adversely affects
the edge detection itself and the respective results. Some edges cannot even

be detected due to insufficient contrast, whereas random areas with sufficient
illumination gradient are wrongly detected as edges.
1.2.2 Edge-Based Segmentation with Pre-defined Thresholds
Another technique [8] requires an initial binarization of the image. After bi-
narization, an edge is first detected and then the object is scanned. In order
to binarize, a threshold must be determined. It can be either pre-defined or
calculated on the basis of the content of the image.
The pre-defined threshold, however, does not consider variances, e.g., illu-
mination fluctuations that can occur either on a series of consecutive captures
or in different areas of the current image. In this case, the inspection image
cannot be properly binarized, which means that the edges are then incorrectly
detected.
It is possible to adapt the object detection process to the inspection image
by calculating the threshold directly from contents of the image. A histogram
is used [9, 10] to display the frequency of individual grey scale values occur-
ring in the image. A binarization threshold can then be determined with the
absolute or local maximum or minimum of the histogram. This technique can
be refined by increasing the number of iterations [11].
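A rough sketch of such a histogram-based threshold determination follows, assuming an 8-bit grey-scale image with two dominant brightness populations. The smoothing width and the minimum peak separation are arbitrary illustrative choices, not values from the cited techniques:

```python
import numpy as np

def histogram_threshold(gray):
    """Pick a binarization threshold at the valley between the two
    dominant peaks of the grey-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # suppress noise
    p1 = int(np.argmax(smooth))                 # strongest peak
    masked = smooth.copy()                      # hide neighbourhood of p1
    masked[max(0, p1 - 32):min(256, p1 + 33)] = 0
    p2 = int(np.argmax(masked))                 # second peak
    a, b = sorted((p1, p2))
    # threshold: position of the minimum between the two peaks
    return a + int(np.argmin(smooth[a:b + 1]))
```

Exactly the weaknesses named in the text apply: the result depends on the section of the image the histogram is taken from and on there being enough pixels to form reliable peaks.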
If the histogram is captured on an image section that is too large, individ-
ual details of this section will be adversely affected, which also applies to the
edges located there. Consequently, those edges tend to get blurred or shifted.
Conversely, if the image sections are chosen too small, no exact recognition
of correct minimum or maximum is possible, as the number of test pixels is
too low. Therefore, the split area cannot be binarized correctly. The process
of splitting the image to be binarized into appropriate sections [9] can be opti-
mized only by experimental means. The binarization result of the image then
depends on the pre-determined splitting of the image. The technique loses its
flexibility.
In order to determine the appropriate binarization threshold, a series of

binary images captured with falling or rising thresholds can be evaluated [10].
This is, however, very laborious and time consuming and, above all, this is
possible only in a very limited number of cases.
Furthermore, binarization of the image affects the recognition of objects
and thus distorts it, as does any other image filtering process. In addition,
since it is based only on the variation of the grey scale value, this technique
causes highly increased pseudo-defect recognition. Therefore, edge recognition
or object recognition should be carried out only on the basis of the original
grey-scale image.
One of the best known methods for the detection of an object in an image
is the segmentation based on contour tracing (so-called blob analysis) [12].
This may be a dark object on a bright surface or a bright object on a dark
surface. In order to simplify the discussion, we will generally focus on a dark
object on a bright surface; in the opposite case, the image can be inverted. The
blob analysis is carried out in a test area, where the first pixel that is part of
an object is determined along a scanning line. Normally, the scanning line is
placed over all rows of the test area. The first detected object point, called the
starting point, has to show a brightness that lies below the surface brightness
and above the object brightness, whereas the previous pixel should show a
brightness above the surface brightness. From the detected starting point, the
object contour can be further detected by means of conventional contour
tracing algorithms. Contour tracing can be carried out using the minimum
value of the surface brightness, where all pixels that are part of the object
will have a brightness which is lower than this value.
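The starting-point scan just described might be sketched as follows; the function and parameter names are illustrative assumptions, not the interface of the cited blob-analysis implementations:

```python
import numpy as np

def find_starting_point(image, i_surf_min, i_obj_max):
    """Scan the test area row by row for the first pixel of a dark object
    on a bright surface.  A valid starting point is darker than the minimum
    surface brightness i_surf_min but brighter than the maximum object
    brightness i_obj_max, while the preceding pixel still lies on the
    bright surface."""
    rows, cols = image.shape
    for y in range(rows):
        for x in range(1, cols):
            if (i_obj_max < image[y, x] < i_surf_min
                    and image[y, x - 1] >= i_surf_min):
                return (y, x)
    return None  # no object found in the test area
```

Note that both thresholds are fixed here, which is exactly the limitation the text points out: on a structured inhomogeneous surface, fixed values cannot ensure reliable detection.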
Blob analysis, however, uses fixed thresholds, which cannot ensure reliable
defect recognition on a structured inhomogeneous surface. A simplified ver-
sion of this technique [2], where the minimum surface brightness threshold
is identical with the maximum defect brightness threshold, which means a
binarization of the image, is even less appropriate.
To summarize, it can be stated that neither formal characteristics nor

pre-defined brightness variances of a defect can be assumed as its explicit
recognition features. This is why the methods described above cannot ensure
a flexible and at the same time explicit defect recognition on an inhomogeneous
surface that shows global and local brightness variances.
1.3 Adaptive Edge-Based Object Detection
The task therefore is to create such a technique of defect recognition. To
achieve this, we need a wholly different approach to this problem. Instead
of trying various formal image-processing methods for defect recognition, the
background of defect recognition must be analysed taking into account the
physical aspects of defect formation and human visual perception. In doing so,
new characteristic features can be detected, which the technique of defect
recognition must correspond to. An explicit “genetic fingerprint” of a defect
must be acquired. These characteristic features must not be dependent on
defect and surface size, shape and orientation, or brightness. An explicit defect
recognition is ensured on the basis of these characteristic features.
What is it then that decisively differentiates between a defective and a
faultless surface? In case of a defect, there is always a boundary between the
defect and the defect-free surface – a material edge. For example, this edge
can be identified on an angular grinding of a metallic surface that has a crack
(Fig. 1.3) [13]. The roughness profile of the test surface shows the same result
(Fig. 1.4a). An intact surface cannot show such edges (Fig. 1.4b).
Fig. 1.3. Angular grinding of a metallic surface with a crack
Fig. 1.4. Roughness profile of a (a) defective and (b) an intact metallic surface
So, the creation of a material edge is determined by the physical proper-
ties of the material and by the development of the damage process. Therefore,
defect recognition can be done on the basis of the defect edge detection, in-
dependently of the brightness variations on the defect edge. Global and local
brightness conditions have to be taken into account. Later, all detected ob-

jects have to be analysed according to their further features, and possibly
according to their sizes, and sorted accordingly. This is the reason why the methods below
for detection and recognition of surface defects are called methods of adaptive
edge-based object detection.
Edge detection, which plays a major role in defect recognition, must of
course be the first to be thoroughly investigated and described. However, we
will discuss it in a very general way to ensure that the findings can be used
for recognition of different edges under different environmental conditions.
2
Edge Detection
The hardest thing of all is to find a black cat in a dark room, especially
if there is no cat.
Confucius
The recognition of an edge on a light-and-shadow image captured by a cam-
era is a necessary precondition for all techniques that involve the detection,
measurement, or processing of an object. Edge detection is therefore
of major economic importance.
In industrial image processing, a whole set of such
techniques is used. Here the central point is to detect whether there is an
edge in the test area at all and to localize the edge when it is known to
exist. Most edge recognition methods [2, 3], however, presume that an edge
does already exist in the test area, and the task is to detect it as precisely as
possible. In reality and primarily in defect detection, the potential location
of an edge must be determined first. Only then can an edge be successfully
scanned for and located.
Besides that, real boundary conditions can aggravate the detection of an
edge, such as brightness fluctuations of the scanned edge (e.g., different local
brightness values at the edge, as with wood), sharpness of the edge represen-
tation (e.g., a cant), and the complexity of the edge (e.g., double edge as in a
wood board with bark).

2.1 Detection of an Edge
One of the most frequently used methods of direct edge detection from a grey
scale image is based on a pre-determined edge model [3] and concerns the
situation where the edge location must be known in advance. Nevertheless, it
will be presented here in order to stress the difference to the technique that
will be described later.
Usually, a scan for edges within a certain edge model occurs along scanning
lines in a certain direction. The criteria for the detection of an edge result
from the grey scale profile along a scanning line. Here two edge directions are
differentiated: rising edges and falling edges. One speaks of a falling edge when
the grey scale profile runs from bright to dark; otherwise, it is a rising edge.
A typical technique uses the following parameters:
• Edge height: In order to detect a valid edge, there must be a minimum
difference of grey scale values along a scanning line. This is called the edge
height.
• Edge length: The edge length specifies the distance along the scanning line
over which the minimum grey-scale difference defined by the edge height must occur.
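A minimal sketch of an edge search with such fixed parameters is given below; the function name and the return convention (start index of the run, or None) are assumptions for illustration:

```python
def find_edge(profile, edge_height, edge_length, falling=True):
    """Search a grey-scale profile along a scanning line for the first
    position where the grey value drops (or rises) by at least
    `edge_height` within a run of `edge_length` pixels."""
    sign = -1 if falling else 1
    for i in range(len(profile) - edge_length):
        if sign * (profile[i + edge_length] - profile[i]) >= edge_height:
            return i
    return None  # no edge satisfying the fixed model
```

Because `edge_height` and `edge_length` are constants, the same values are applied to every image, which is the lack of adaptation criticized in the following paragraph.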
As these parameters remain unchanged for every image to be inspected,
there can be no dynamic adaptation of the inspection features for the current
image. Thus, a general search for edges on an inhomogeneous surface
will result in real defects being missed and in massive recognition of pseudo-
defects.
Other conventional techniques of image processing are also known, e.g.,
using a histogram or a grey scale profile or combining the two for edge detec-
tion. Here, the significant parameters must also generally be pre-determined.
Therefore these techniques are still not capable of providing a flexible and at
the same time explicit edge detection on an inhomogeneous image.
To achieve this, a histogram of the test area and the grey scale profile
captured along the scanning line within the test area must be investigated

and analysed on a substantially more precise physical basis. This physical
background can be explained in the detection of a single-level edge, which
will be referred to as a single edge below.
2.1.1 Single Edge
It is known that the intensity distribution in a light beam shows a Gaussian
profile [14]. As a surface can be regarded as a light source because of its
reflection, a brightness distribution that runs from the surface over the edge
to the background can be described by a Gaussian distribution [3]. This model
has been shown to be the best for the exact calculation of the edge position with
sub-pixel accuracy [15]. This is why a Gaussian profile can be assumed for
the description of the grey scale profile and its differentiation (the brightness
gradient).
A histogram is a frequency distribution of the brightness on a background.
In natural images, the content usually has a falling amplitude spectrum,
whereas the noise has an approximately constant spectrum in the frequency
range. The histogram therefore, like the grey-scale profile, shows a Gaussian
profile [16] or a profile resulting from several Gaussian distributions [3].
The brightness value that occurs most frequently on the surface of an
object can be defined as the brightness of that object. This technique, as
opposed to, for example, the mean value technique, explicitly and reliably
determines the real brightness I_surf (surface) of a test surface, as it is perceived
by a human observer. However, this applies only if a fault with a specific
brightness value does not feature a higher or comparable frequency. In this
case, it is not possible to differentiate the fault from the main surface (e.g.,
checkerboard). If the test histogram is represented by a very noisy curve, this
histogram can be analysed so that the search position of the surface brightness
I_surf can be determined according to the centre of mass.

The same applies for a background with brightness I_bgrd (background).

Generally, you can assume that the test edge separates the dark back-
ground from a bright surface (Fig. 2.1). If not, the roles of the surface and of
the background have to be interchanged.
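The determination of I_surf described above can be sketched as follows, assuming 8-bit grey values; the `noisy` flag stands in for whatever noise criterion the system applies to the histogram before choosing between the two rules:

```python
import numpy as np

def surface_brightness(gray, noisy=False):
    """Estimate the surface brightness I_surf of a test area.
    Normally the most frequent grey value (the histogram mode) is used;
    for a very noisy histogram the centre of mass is taken instead."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    if not noisy:
        return int(np.argmax(hist))                  # most frequent value
    levels = np.arange(256)
    return int(round((levels * hist).sum() / hist.sum()))  # centre of mass
```

The same function, applied to the background part of the separated histogram, would yield I_bgrd.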
The position of the edge is scanned on the grey scale profile (Fig. 2.2).
This is created along a scanning line which begins on a dark background area
and runs to the bright surface area, all within a test area, e.g., a rectangle
(Fig. 2.1). The test results of the histogram (Fig. 2.3) from the test area are
used here simultaneously.

Using the captured histogram (Fig. 2.3) from the test area (Fig. 2.1), the
surface brightness I_surf as well as the background brightness I_bgrd can be
determined. Here, it is important to determine a typical brightness separation
value I_sprt (separation) to be able to separate the corresponding parts of the
histogram (background and surface) from one another (Fig. 2.3). The methods
for the determination of this separation value, of the surface brightness I_surf,
and of the background brightness I_bgrd will be outlined later on.

The edge location is done within a test distance L_0 along a scanning
line, while the local maximum brightness I_max and the local brightness
increase ΔI are determined at the test distance L_0 and compared to the edge-
specific minimum brightness I_0 and to the edge-specific minimum brightness
increase ΔI_0 (minimum difference of the grey scale values). The length of the
test distance L_0, the edge-specific minimum brightness value I_0, and the edge-
specific brightness increase ΔI_0 are calculated using the brightness values of
the test area.
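The comparison over the test distance L_0 might be sketched like this; since the derivation of I_0, ΔI_0, and L_0 from the test area is only outlined later, they appear here as plain parameters, and the function name is an assumption:

```python
def is_edge_candidate(profile, start, l0, i0, delta_i0):
    """Test one window of length l0 on the grey-scale profile: an edge is
    assumed when the local maximum brightness reaches the edge-specific
    minimum i0 and the brightness increase over the window reaches the
    edge-specific minimum increase delta_i0."""
    window = profile[start:start + l0 + 1]
    i_max = max(window)                  # local maximum brightness I_max
    delta_i = window[-1] - window[0]     # brightness increase ΔI over L_0
    return i_max >= i0 and delta_i >= delta_i0
```

Unlike the fixed-parameter model above, here all three quantities are recomputed from the current test area, which is what makes the detection adaptive.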
Fig. 2.1. Edge detection method (scheme)
12 2 Edge Detection
Fig. 2.2. Grey-scale profile across an edge
Fig. 2.3. Histogram of the test area
The examination of the histogram is followed by a curve-sketching analysis of
the captured grey scale profile. Since, as assumed, the grey scale profile is
Gaussian, this curve represents a normal distribution according to Gauss, and
thus an exponential function, showing certain correlations between its
characteristic points.
The grey scale Gaussian profile can be described as follows [17]:

    I(x) = I_surf exp(−x²/(2σ²)),   (2.1)

where I(x) is the current brightness of the test point at the distance x from
the profile maximum; I_surf is the surface brightness at the profile maximum;
x is the distance of the test point from the profile maximum point; and σ is
the Gaussian profile's standard deviation.
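Equation (2.1) can be evaluated directly. The sketch below samples the model at x = σ and x = 2σ, the two distances that become important in the following analysis; the parameter values are arbitrary:

```python
import math

def gauss_profile(x, I_surf, sigma):
    """Grey scale Gaussian profile of (2.1): I(x) = I_surf * exp(-x^2 / (2*sigma^2))."""
    return I_surf * math.exp(-x**2 / (2 * sigma**2))

I_surf, sigma = 200.0, 3.0
I_turn = gauss_profile(sigma, I_surf, sigma)       # brightness at x = sigma
I_bgrd = gauss_profile(2 * sigma, I_surf, sigma)   # brightness at x = 2*sigma
```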
2.1 Detection of an Edge 13
The most important points on the grey scale profile according to this
technique are the points placed at a distance corresponding to the single or
double standard deviation σ from the maximum of the profile (Fig. 2.2).
Starting at the maximum of the profile, the single standard deviation σ marks
the turning point of the grey scale profile I_turn (turn point), indicating
that theoretically there may be an edge:

    I_turn = I_surf ξ_1,   (2.2)
where I_surf is the surface brightness and ξ_1 is the turning point
coefficient that can be determined as the edge factor.
Considering the condition x = σ, it follows from (2.1) and (2.2) that

    ξ_1 = e^(−1/2) ≈ 0.606531.   (2.3)
Starting again at the profile maximum, the double standard deviation 2σ marks
the point of the grey scale profile with the intensity of an edge. This is
where the background is located:

    I_bgrd = I_surf ξ_2,   (2.4)
where ξ_2 is the edge point coefficient that can be determined as the
background factor.
Considering the condition x = 2σ, it follows from (2.1) and (2.4) that

    ξ_2 = e^(−2) ≈ 0.135335.   (2.5)
The distance between the points with the values I_turn and I_bgrd also
corresponds to the standard deviation σ and is therefore strictly dependent on
the respective grey scale profile (Fig. 2.2). The ratio of these values is,
however, constant for all possible grey scale profiles crossing an edge and
represents a minimum brightness value η_0. It follows from (2.1) to (2.4) that

    η_0 = I_bgrd / I_turn = ξ_2 / ξ_1 = e^(−3/2) ≈ 0.223130.   (2.6)
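The three constants of (2.3), (2.5) and (2.6) can be checked numerically in a couple of lines:

```python
import math

xi1 = math.exp(-0.5)   # turning point coefficient, (2.3)
xi2 = math.exp(-2.0)   # background factor, (2.5)
eta0 = xi2 / xi1       # minimum brightness factor, (2.6)
```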
So the brightness factor η_0 defines a minimum ratio of the brightness values
of the background and the (so far) theoretical edge position. The brightness
at the first possible edge location can be defined as the edge-specific
minimum brightness I_0. Thus the point where the grey scale profile shows the
edge-specific minimum brightness I_0 is considered the third important point
of the Gaussian profile analysis.
Remarkably, the brightness factor η_0 represents a constant that is important
beyond image processing. Generally speaking, this constant indicates the
presence of a transition or a crossover if the corresponding process is a
Markov process or, in other words, if it shows the Gaussian distribution
[17, 18]. This phenomenon occurs in a number of real-world situations.
The most widely known example is the 80–20 rule, also known as the Pareto
principle [19]. This states that the first 20% of an effort is responsible for
80% of the result and the other 20% of the result requires the remaining 80%
of the overall effort. According to another example from economics, 75% of
all world trade is turned over among 25% of the global population. These
cases describe the beginning of a qualitative change in a quantitative
process, with the limit lying between 20 and 25%. Thus, the constant
η_0 ≈ 0.223 can be understood as a universal constant which marks the limit
of this change.
With regard to the grey value profile that is oriented at 90° to an edge, this
constant precisely and reliably determines the place at which the test edge
can be located.
In order to determine the edge, a minimum test distance L_0 is defined inside
which the test edge can be located, so that the high-interference areas
neighbouring the background or the surface lie outside this distance. Since an
edge means an ascent of the grey scale curve, the grey scale profile must show
an edge-specific minimal brightness increase ΔI_0 (difference of grey values),
found at the edge within the test distance L_0.
The length of the test distance L_0 must not be less than the distance between
the turn point I_turn and the edge point (background brightness I_bgrd),
ensuring that the position of the edge is definitely within the test distance
L_0. This distance corresponds to the standard deviation σ of the grey scale
profile. At the same time, the test distance L_0 must not exceed double the
standard deviation σ. Otherwise, the test distance L_0 becomes larger than the
entire transition area between the background and the surface (Fig. 2.2). This
is the reason why the following condition for the test distance L_0 must be
met:

    σ < L_0 < 2σ.   (2.7)
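Condition (2.7) only brackets L_0; a concrete choice within the bracket is left to the implementation. A minimal sketch, assuming a midpoint factor of 1.5, which is an illustrative choice not prescribed by the text:

```python
def choose_test_distance(sigma, factor=1.5):
    """Pick a test distance L0 strictly between sigma and 2*sigma, per (2.7).

    The default factor 1.5 is an assumed, illustrative choice.
    """
    if not 1.0 < factor < 2.0:
        raise ValueError("factor must lie in (1, 2) to satisfy (2.7)")
    return factor * sigma

L0 = choose_test_distance(4.0)   # sigma = 4 pixels -> L0 = 6 pixels
```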
An edge is present within the test distance L_0 as long as the following
conditions are met:

    I_min ≥ I_0,   (2.8)

    I_max − I_min ≥ ΔI_0,   (2.9)

    I_min / I_max ≥ η_0,   (2.10)

with I_max the local maximum brightness within the test distance L_0 and I_min
the local minimum brightness within the test distance L_0.
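Conditions (2.8)–(2.10) translate directly into a predicate over the grey values sampled within one test distance; the function and variable names below are assumptions made for this sketch:

```python
def edge_present(profile, I0, dI0, eta0=0.22313):
    """Check conditions (2.8)-(2.10) on grey values within one test distance L0.

    profile : grey values sampled inside the test distance
    I0      : edge-specific minimum brightness
    dI0     : edge-specific minimum brightness increase
    eta0    : brightness factor from (2.6)
    """
    I_max, I_min = max(profile), min(profile)
    return (I_min >= I0 and            # (2.8)
            I_max - I_min >= dI0 and   # (2.9)
            I_min / I_max >= eta0)     # (2.10)

# Within-L0 samples rising from 60 to 200: an edge candidate.
has_edge = edge_present([60, 90, 150, 200], I0=50, dI0=100)
```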
In order to determine the parameters I_0 and ΔI_0, the following limiting
cases can be considered. If the final point of the test distance L_0 has
already reached the surface,

    I_max = I_surf,   (2.11)

the edge is still within this test distance. Then, it follows from (2.10) that

    I_min = I_max η_0.   (2.12)
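Numerically, the limiting case (2.11)–(2.12) fixes the smallest admissible local minimum relative to the surface brightness; the surface value 200 below is an arbitrary example:

```python
eta0 = 0.223130          # brightness factor from (2.6)
I_surf = 200.0           # assumed surface brightness from the histogram
I_max = I_surf           # limiting case (2.11): test distance ends on the surface
I_min = I_max * eta0     # (2.12): smallest admissible local minimum
```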