
Development of vision-based systems for
structural health monitoring

Ho Hoai Nam

The Graduate School
Sejong University
Department of Civil and Environmental Engineering


Development of vision-based systems for
structural health monitoring

Ho Hoai Nam

A Dissertation Submitted to the Department of Civil and Environmental
Engineering, and the Graduate School of Sejong University
in partial fulfillment of the requirements
for the degree of Doctor of Philosophy

December 2013

Approved by
Jong-Jae Lee
Major Advisor



TABLE OF CONTENTS
TABLE OF CONTENTS ............................................................................................................... i
LIST OF TABLES ....................................................................................................................... vi


LIST OF FIGURES..................................................................................................................... vii
ABSTRACT ................................................................................................................................ xii
CHAPTER 1

INTRODUCTION ............................................................................................... 1

1.1 Introduction ...................................................................................................................... 1
1.2 Scope and Objectives ....................................................................................................... 4
1.3 Organization ..................................................................................................................... 4
CHAPTER 2

FUNDAMENTALS OF DIGITAL IMAGE PROCESSING AND PATTERN
RECOGNITION .................................................................................................. 6

2.1 Digital image formation and representation ..................................................................... 6
2.1.1 Image formation ..................................................................................................... 6
2.1.2 Image representation ............................................................................. 7
2.1.3 Coordinate convention ........................................................................................... 9
2.2 Color fundamentals ........................................................................................................ 10
2.2.1 Basic concepts ...................................................................................................... 10
2.2.2 Color models ........................................................................................................ 14
2.3 Basic image processing operations................................................................................. 25
2.3.1 Elementary image processing operators............................................................... 25



2.3.2 Binary image processing operations .................................................................... 26
2.3.3 Geometric transform ............................................................................................ 27
2.4 Edge detection algorithms .............................................................................................. 28

2.4.1 Roberts Cross edge detector ................................................................................. 29
2.4.2 Sobel edge detector .............................................................................................. 31
2.4.3 Prewitt gradient edge detector.............................................................................. 34
2.4.4 Laplacian edge detector........................................................................................ 35
2.4.5 Canny edge detection ........................................................................................... 36
2.5 Image segmentation algorithms ...................................................................................... 39
2.5.1 Threshold techniques ........................................................................................... 39
2.5.2 Edge-based methods ............................................................................................ 40
2.5.3 Region-based techniques...................................................................................... 40
2.5.4 Connectivity-preserving relaxation methods ....................................................... 40
2.6 Pattern recognition ......................................................................................................... 41
2.6.1 Principal component analysis (PCA) ................................................................... 41
2.6.2 Support Vector Machines (SVM) ........................................................................ 45
2.6.3 Artificial Neural Networks (ANN) ....................................................... 49
CHAPTER 3

ADVANCED VISION-BASED SYSTEMS FOR DISPLACEMENT AND
ROTATION ANGLE MEASUREMENT ......................................................... 53

3.1 Introduction .................................................................................................................... 53
3.2 Advanced vision-based displacement measurement system .......................................... 53



3.2.1 Development of the advanced vision-based system............................................. 54
3.2.2 Displacement measurement of high-rise buildings .............................................. 72
3.3 High resolution rotation angle measurement system...................................................... 85
3.3.1 A vision-based rotation angle measurement system ............................................ 86
3.3.2 Verification tests .................................................................................................. 88

3.3.3 Testing on a single-span bridge............................................................................ 93
CHAPTER 4

AN EFFICIENT VISION-BASED 3-D MOTION MONITORING SYSTEM
FOR CIVIL STRUCTURES ............................................................................. 97

4.1 Introduction ................................................................................................................... 97
4.2 Development of 3-D vision-based motion monitoring system....................................... 99
4.3 Experimental verifications ........................................................................................... 107
CHAPTER 5

AN IMAGE-BASED SURFACE DAMAGE DETECTION SYSTEM FOR
CABLE-STAYED BRIDGES ......................................................................... 111

5.1 Introduction .................................................................................................................. 111
5.2 Cable inspection system .............................................................................................. 112
5.3 Damage detection algorithm......................................................................................... 114
5.4 Experimental verifications ........................................................................................... 119



CHAPTER 6

SONAR IMAGE ENHANCEMENT FOR UNDERWATER
CIVIL STRUCTURES .................................................................................... 124

6.1 Fundamentals of Sonar ................................................................................................. 124
6.1.1 Beam pattern ...................................................................................................... 126
6.1.2 Resolution .......................................................................................................... 128

6.1.3 Backscatter ......................................................................................................... 132
6.1.4 Distortions .......................................................................................................... 132
6.2 Reading sonar image file .............................................................................................. 135
6.3 Sonar image enhancement techniques .......................................................................... 136
6.3.1 Median filter....................................................................................................... 136
6.3.2 Morphological filter ........................................................................................... 139
6.3.3 Wiener filter ........................................................................................ 144
6.3.4 Frost filter........................................................................................................... 146
6.3.5 Contrast enhancement ........................................................................................ 149
6.4 Efficient image enhancement algorithm....................................................................... 153
CHAPTER 7

CONCLUSIONS AND FUTURE STUDY ..................................................... 156

7.1 Conclusions .................................................................................................................. 156
7.1.1 Synchronized real-time vision-based systems for displacement and
rotation measurement ......................................................................... 156
7.1.2 A 3-D motion measurement system for civil infrastructure ..................... 157



7.1.3 An efficient image-based damage detection system for cable surfaces in
cable-stayed bridges ............................................................................ 157
7.1.4 Sonar image enhancement for underwater civil structures ....................... 158
7.2 Recommendations for future study .............................................................................. 158
REFERENCES .......................................................................................................................... 160
APPENDICES........................................................................................................................... 175




LIST OF TABLES

Table 3.1: List of hardware components .................................................................................... 75
Table 3.2: Static test results........................................................................................................ 80
Table 3.3: Commercial tilt sensors .............................................................................................. 86
Table 3.4: Static test results......................................................................................................... 91
Table 4.1: Static displacement test results................................................................................. 109
Table 4.2: Static rotation test results ......................................................................................... 109
Table 5.1: Typical camera specifications [95] .......................................................................... 113
Table 5.2: Laboratory test results .............................................................................................. 120
Table 6.1: Rules for Dilation and Erosion................................................................................. 141



LIST OF FIGURES

Figure 2.1: Model of a digital image formation system. ................................................................ 7
Figure 2.2: Combination and variant of RGB color. ..................................................................... 8
Figure 2.3: Coordinate convention of digital images. ................................................................. 10
Figure 2.4: Energy spectrum [37]. .............................................................................................. 11
Figure 2.5: Mixture of light and mixture of pigments [36]............................................................ 12
Figure 2.6: The wavelengths of the visible spectrum. .................................................................. 14
Figure 2.7: RGB color cube. ....................................................................................................... 16
Figure 2.8: Example of color, gray and binary image. ................................................................ 17
Figure 2.9: CMY color cube. ...................................................................................................... 19
Figure 2.10: HSI color cube. ........................................................................................................ 22
Figure 2.11: Image rotation. ........................................................................................................ 27

Figure 2.12: Shear along horizontal axis. .................................................................................... 28
Figure 2.13: Roberts Cross convolution kernels. ........................................................................ 30
Figure 2.14: Results of applying Roberts Cross convolution kernels. ........................................ 31
Figure 2.15: Sobel convolution kernels....................................................................................... 32
Figure 2.16: Outputs of Sobel and Roberts edge detectors. ........................................................ 34
Figure 2.17: Masks for the Prewitt gradient edge detector. ........................................................ 34
Figure 2.18: Three small common kernels of Laplacian edge detector. ..................................... 35
Figure 2.19: Comparison of the edge detection methods. ........................................................... 38


Figure 2.20: Main components (PC1 and PC2). ......................................................................... 44
Figure 2.21: PCA example [63]. ................................................................................................. 44
Figure 2.22: SVM concept [67]................................................................................................... 45
Figure 2.23: Linear classifier using SVM. .................................................................................. 46
Figure 2.24: Fundamental block of ANN.................................................................................... 51
Figure 2.25: Multilayer neuron network. .................................................................................... 51
Figure 3.1: Schematic of the system. .......................................................................................... 55
Figure 3.2: Flowchart of the software. ........................................................................................ 58
Figure 3.3: Software interface. .................................................................................................... 59
Figure 3.4: Binary image conversion. ......................................................................................... 60
Figure 3.5: Target recognition. .................................................................................................... 61
Figure 3.6: Time synchronization process. ................................................................................. 63
Figure 3.7: Processing time per frame. ....................................................................................... 64
Figure 3.8: Shaking table and target size. ................................................................................... 65
Figure 3.9: Experimental location setup...................................................................................... 66
Figure 3.10: Shaking table test results. ....................................................................................... 68
Figure 3.11: Time synchronization test. ...................................................................................... 69
Figure 3.12: Time lag between the subsystems........................................................................... 70
Figure 3.13: Displacement with excitation frequencies of 1 Hz and 4 Hz. ................................. 71

Figure 3.14: Partitioning method proposed by Park, et al. [74]. ................................................. 74
Figure 3.15: Camera and target location. .................................................................................... 76



Figure 3.16: Experimental setup. ................................................................................................ 78
Figure 3.17: Dynamic test results................................................................................................ 79
Figure 3.18: Experimental setup of the field test. ....................................................................... 81
Figure 3.19: Measured and estimated data under sinusoidal excitation. ..................................... 83
Figure 3.20: Measured and estimated data under random excitation. ......................................... 84
Figure 3.21: Vision-based dynamic rotation angle measurement. .............................................. 87
Figure 3.22: Application examples. ............................................................................................ 88
Figure 3.23: Experimental setup. ................................................................................................ 90
Figure 3.24: Dynamic test results................................................................................................ 92
Figure 3.25: Testing plan. ........................................................................................................... 93
Figure 3.26: Static load test results. ............................................................................................ 95
Figure 3.27: Dynamic load test results. ....................................................................................... 96
Figure 4.1: Binary image conversion. ....................................................................................... 100
Figure 4.2: Results of binary conversion. ................................................................................. 101
Figure 4.3: Target and camcorder installation for 3-D measurement. ...................................... 103
Figure 4.4: Processing time per frame. ..................................................................................... 104
Figure 4.5: Software interface. .................................................................................................. 106
Figure 4.6: Testing setup. .......................................................................................................... 107
Figure 4.7: Dynamic test results................................................................................................ 110
Figure 5.1: An overview of the cable inspection system. ......................................................... 112
Figure 5.2: PCA-based damage detection algorithm................................................................. 116




Figure 5.3: PCA training process. ............................................................................................. 118
Figure 5.4: Three cable specimens for laboratory tests and example images for training. ....... 120
Figure 5.5: Online damage detection. ....................................................................................... 121
Figure 5.6: Post-processing damage detection. ......................................................................... 121
Figure 5.7: A closed box with embedded cameras and an LED lighting system ......................... 123
Figure 6.1: Sidescan sonar survey scenario [111]. .................................................................... 125
Figure 6.2: Geometry of sidescan sonar and definitions of some basic parameters.................. 126
Figure 6.3: Typical beam pattern of a sidescan sonar [114]...................................................... 127
Figure 6.4: Thickness of a sonar pulse [115]. ........................................................................... 128
Figure 6.5: The across-track resolution of a sonar increases with range [115]. ........................ 129
Figure 6.6: The along-track resolution [115]. ........................................................................... 130
Figure 6.7: The horizontal resolution of a sidescan sonar. ........................................................ 131
Figure 6.8: Towfish instabilities, which degrade the quality of the sonar data [112]. .............. 133
Figure 6.9: Slant-range distortion.............................................................................................. 134
Figure 6.10: Structure of the XTF data file. .............................................................................. 135
Figure 6.11: Median filter. ........................................................................................................ 137
Figure 6.12: Applying median filter. ......................................................................................... 138
Figure 6.13: Operation of morphological filters. ...................................................................... 140
Figure 6.14: Morphological operation. ..................................................................................... 143
Figure 6.15: Wiener filter. ........................................................................................................... 145
Figure 6.16: Frost filter algorithm. ............................................................................................ 147



Figure 6.17: Frost filter result. .................................................................................................. 148
Figure 6.18: Histogram equalization process. ........................................................................... 150
Figure 6.19: Histogram equalization result. .............................................................................. 151
Figure 6.20: Unsharp masking result. ....................................................................................... 152

Figure 6.21: SONAR image enhancement algorithm. .............................................................. 154
Figure 6.22: Results of the proposed algorithm. ....................................................................... 155



ABSTRACT

Development of vision-based systems for
structural health monitoring

Ho Hoai Nam
Department of Civil and Environmental Engineering
The Graduate School of Sejong University

Vision-based systems are regaining their popularity in various engineering disciplines owing to
the recent remarkable evolution of image capture devices. Image sequences recorded by these
devices contain both spatial and temporal information about the target object and hence can be
used to extract the object’s dynamic characteristics, such as natural frequencies, damping ratios,
and mode shapes. In this dissertation, several effective vision-based systems for structural
health monitoring have been successfully developed.
Advanced vision-based systems are developed to remotely measure the displacement and
rotation of large civil structures. Each system consists of software, a master PC, and slave PCs.
The data from each slave system, including its system time, are wirelessly transferred to the
master PC and vice versa. In addition, a synchronization process is carried out to guarantee
time synchronization between the master PC and the slave PCs. Several laboratory and field
tests are conducted to verify the effectiveness of the proposed system.
A 3-D motion measurement system for civil infrastructure consists of hardware (three
camcorders, one commercial PC, and one frame grabber) and software. Efficient software, a
measurement scheme, and an automatic optimal threshold selection algorithm are successfully
developed to obtain dynamic motion with six degrees of freedom (DOF) in real time. The
system is verified through laboratory tests, including static and dynamic tests.
An efficient image-based damage detection system for cable-stayed bridges, which can
automatically identify damage to the cable surface through image processing and pattern
recognition techniques, is successfully developed. Initially, image data from the cameras are
wirelessly transmitted to a server computer located on a stationary cable support. To improve
the overall quality of the images, the system utilizes an image enhancement method together
with a noise removal technique. The input images are then projected into the feature space, and
the Mahalanobis squared distance is used to evaluate the distances between the input images
and the training pattern data; the training pattern with the smallest distance is taken as the
match for an input image. The proposed damage detection system is verified through laboratory
tests on three types of cables. The test results show that the proposed system can be used to
detect damage to bridge cables.
Finally, an efficient sonar image enhancement algorithm is proposed. The algorithm consists of
two steps: noise removal and image sharpening. The Wiener and median filters are utilized to
suppress noise, and the filtered image is then deblurred and enhanced using unsharp masking
and histogram equalization. The proposed algorithm has been successfully applied to many
sonar images.



CHAPTER 1
INTRODUCTION

1.1 Introduction
Civil engineering structures, such as bridges, buildings, and dams, endure various external
loads during their life cycle. Structural health monitoring therefore plays an important role in
ensuring that structures maintain their structural performance. Based on knowledge of a
structure's operational condition, suitable maintenance procedures can be performed to prolong
its service life, and the risk of catastrophic failure can be significantly reduced.
Two general approaches to structural health monitoring are local detection techniques and
global vibration-based techniques. Local methods (which usually utilize sensing techniques
such as X-rays, radiography, acoustics, strain gauges, optical fibers, etc.) focus on detecting
local damage in structures; they require prior knowledge of the area where damage may occur,
and the structural location to be examined should be easily accessible. Hence, local techniques
may not be efficient for damage diagnosis, especially for large-scale structures subject to cost
limitations, access difficulties, and structural complexity. Global vibration-based methods
investigate the health condition or potential damage of a structure through changes in its
dynamic characteristics, such as natural frequencies, damping ratios, and mode shapes.
Integrated with appropriate identification techniques, these methods can detect damage at an
early stage, estimate damage locations, quantify damage severity, and predict the remaining
service life of the structure. Other advantages of the global methods are their readiness for
automatic processing and their limited reliance on engineers' judgment or on an analytical
model of the structure. Studies in the field of structural health monitoring have been
extensively surveyed in Refs. [1, 2].
With the advent of inexpensive, high-performance image capture devices and associated image
processing techniques, vision-based systems have become popular tools of great interest for a
wide range of applications in medicine, archaeology, biomechanics, astronomy, public safety,
etc. In civil engineering, vision-based systems have been successfully applied to a wide range
of tasks, such as the detection of unsafe actions by construction workers [3], extraction of
cracks from concrete beams [4], finding cracks on a bridge deck [5, 6], road pavement
assessment [7], filling holes in the road [8], and detection of building cracks [9, 10].

Structural members of civil structures are exposed to severe environmental conditions during
their life cycle, such as earthquakes, tornadoes, and typhoons, so structural health monitoring
plays a key role in sustaining the long service life of structures. Structural health monitoring
starts with the inspection of structural members to determine their present condition. For a long
time, such inspection has been carried out by field inspectors and/or using contact-type sensors
[11, 12]. In reality, the inspection of large structures suffers from high traffic volumes and/or
difficult access conditions, which make the inspection time-consuming, difficult, and expensive
[13]. In many cases, the inspection process exposes inspectors to dangerous situations, such as
when an inspector needs to access desired locations using a lifting vehicle or lifting trolley [13,
14]. These issues are the main motivations for the development of image-based detection
systems that can inspect structural conditions remotely and without contact, reducing
inspection time and cost while increasing the effectiveness and safety of structural health
monitoring.
Displacement is an important physical quantity to be monitored in the framework of structural
health monitoring, and vision-based displacement measurement is one of the most common
non-contact measurement techniques in civil engineering applications [15-20]. In addition,
rotation angle measurement systems are widely used to monitor deformations of large-scale
civil structures, such as long-span bridges, tunnels, dams, and high-rise buildings, using
commercial tilt-meter sensors or high-resolution rotation measurement systems [21, 22].
In a cable-stayed bridge, the cables are key structural members and are exposed to severe
environmental conditions, including natural hazards such as typhoons and earthquakes, so they
should be regularly and carefully inspected for any possible defects. Several vision-based
systems for cable damage assessment [23-26] have been introduced to reduce manual
inspection time and increase the reliability of damage inspection.
The SONAR (SOund Navigation And Ranging) system was initially created for underwater
sensing [27]. Sonar systems are important and powerful tools for surveying underwater civil
structures in places that optical capture devices cannot reach. However, sonar images are often
corrupted by a signal-dependent noise called speckle [28], and several de-noising methods
[29-32] have therefore been introduced to enhance the images for visualization and/or
classification.



1.2 Scope and Objectives
This study concerns vision-based systems for the non-contact assessment of civil structures
with low natural frequencies. The main objective of this study is to develop vision-based
systems for deformation measurement and damage detection of structures. The specific
objectives of this study are:


- Develop an efficient synchronized multi-point vision-based displacement measurement
  system for civil structures.
- Develop a high-resolution rotation angle measurement system for civil structures.
- Extend the multi-point vision-based displacement system to real-time 3-D measurement
  of structures.
- Develop an effective image-based system for damage detection of bridge cables, and
  propose an efficient sonar image enhancement algorithm for underwater civil structures.


1.3 Organization
This dissertation consists of seven chapters; a brief chapter-by-chapter summary is presented
as follows:
Chapter 2 briefly reviews some basic concepts and techniques related to digital image
processing applying for civil applications including digital image formation, color models, basic
image processing, edge detection, image segmentation, and popular pattern recognition
techniques.


Chapter 3 develops the vision-based displacement and rotation measurement systems for large
civil structures. The vision-based measurement system consists of software and hardware
(low-cost PCs, commercial frame grabbers, camcorders, and a wireless LAN access point). The
image data of targets are captured by the camcorders and streamed into the PCs via the frame
grabbers. The final results are then calculated using image processing techniques with
pre-measured calibration parameters.
Chapter 4 presents an efficient real-time 3D motion measurement system. The proposed system
is composed of hardware (industrial camcorders, a commercial PC, and a frame grabber) and
software. An efficient software and measurement scheme and an automatic optimal threshold
selection algorithm are also introduced to obtain dynamic motion with six degrees of freedom
(DOF) in real time.
Chapter 5 develops a surface damage detection system for cable-stayed bridges using image
processing and pattern recognition techniques. Image data captured by three cameras moving
along the target cable are wirelessly transmitted to a server computer located at the cable
support. Surface damage of cables is detected by applying principal component analysis.
Chapter 6 briefly reviews fundamental knowledge and important image enhancement techniques
for sonar images, and then proposes an efficient sonar image enhancement algorithm. The
proposed algorithm consists of two steps: noise removal and image sharpening. The noise is
first removed using Wiener and median filters, and the filtered image is then sharpened by
applying unsharp masking and histogram equalization.
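A rough sketch of such a two-step pipeline is given below, using SciPy's Wiener and median filters and a Gaussian-blur-based unsharp mask; the filter sizes, sharpening amount, and synthetic input are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter, gaussian_filter

def enhance_sonar(img, unsharp_amount=1.0):
    """Two-step enhancement sketch: noise removal, then sharpening."""
    # Step 1: noise removal with Wiener and median filters.
    denoised = wiener(img.astype(float), mysize=3)
    denoised = median_filter(denoised, size=3)

    # Step 2a: unsharp masking -- add back a scaled high-pass component.
    blurred = gaussian_filter(denoised, sigma=1.0)
    sharpened = denoised + unsharp_amount * (denoised - blurred)

    # Step 2b: histogram equalization over the 8-bit intensity range.
    sharpened = np.clip(sharpened, 0, 255).astype(np.uint8)
    hist = np.bincount(sharpened.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = 255 * cdf / cdf[-1]                # normalized cumulative histogram
    return cdf[sharpened].astype(np.uint8)   # map intensities through the CDF

# Synthetic noisy "sonar" image, purely for illustration.
rng = np.random.default_rng(0)
noisy = np.clip(100 + 30 * rng.standard_normal((32, 32)), 0, 255)
out = enhance_sonar(noisy)
```

The filter order (Wiener before median) and the Gaussian sigma are design choices for this sketch; Chapter 6 details the parameters actually used.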
Chapter 7, the final chapter, concludes this dissertation and recommends future study.




CHAPTER 2
FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
AND PATTERN RECOGNITION

2.1 Digital image formation and representation
2.1.1 Image formation
Digital image formation is the initial step in any digital image processing application.
Basically, a digital image formation system consists of an optical system, a sensor, and a
digitizer. The optical signal is usually transformed into an electrical signal by a sensing
device (e.g., a CCD sensor), and the analog signal is then converted into a digital one by a
video digitizer.
An image is the optical representation of an object illuminated by a radiating source.
Thus, the image formation process comprises the following elements: the object, the radiating
source, and the image formation system.



Figure 2.1: Model of a digital image formation system.

2.1.2 Image representation
Any method chosen for image representation should fulfill two requirements:

- It should facilitate convenient and efficient processing by means of a computer.

- It should encapsulate all information that defines the relevant characteristics of the
image.

The digital image can be conveniently represented by an M×N matrix i of the form:

$$
i(x, y) =
\begin{bmatrix}
i(0,0) & i(0,1) & \cdots & i(0,N-1) \\
i(1,0) & i(1,1) & \cdots & i(1,N-1) \\
\vdots & \vdots & \ddots & \vdots \\
i(M-1,0) & i(M-1,1) & \cdots & i(M-1,N-1)
\end{bmatrix}
\tag{2.1}
$$
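This matrix representation maps directly onto the array types of numerical software. A minimal sketch using NumPy (the image size and synthetic pixel values are illustrative assumptions):

```python
import numpy as np

# An M x N gray-scale image is simply an M x N matrix of intensities.
M, N = 4, 5
i = np.arange(M * N, dtype=np.uint8).reshape(M, N)  # synthetic image data

# Element i(x, y) is the intensity at row x, column y.
print(i.shape)           # (4, 5) -> M rows, N columns
print(i[0, 0])           # intensity at the origin (0, 0)
print(i[M - 1, N - 1])   # intensity at the last pixel
```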

A typical color digital image is represented by a triplet of values, one for each color
channel, as in the frequently used RGB color scheme; the letters R, G, and B stand for red,
green, and blue. The individual color values are usually 8-bit values, resulting in a total of
24 bits (or 3 bytes) per pixel. This yields a three-fold increase in storage requirements for
color versus monochrome images. In the planar (non-interleaved) format, the raw data are
separated into three 2D matrices, one for each color channel:
$$
i(x, y) = \{\, r(x, y),\; g(x, y),\; b(x, y) \,\}
$$
$$
= \left\{
\begin{bmatrix}
r(0,0) & \cdots & r(0,N-1) \\
\vdots & \ddots & \vdots \\
r(M-1,0) & \cdots & r(M-1,N-1)
\end{bmatrix},
\begin{bmatrix}
g(0,0) & \cdots & g(0,N-1) \\
\vdots & \ddots & \vdots \\
g(M-1,0) & \cdots & g(M-1,N-1)
\end{bmatrix},
\begin{bmatrix}
b(0,0) & \cdots & b(0,N-1) \\
\vdots & \ddots & \vdots \\
b(M-1,0) & \cdots & b(M-1,N-1)
\end{bmatrix}
\right\}
\tag{2.2}
$$
The three primary colors may be used to synthesize any one of 2^24 ≈ 16 million colors. In
each color square, the red and blue color values are varied in increments of 1/3, with the
amount of green increasing in the same manner from left to right. A normalized value range
0 ≤ r, g, b ≤ 1 has been used for specifying color values.


Figure 2.2: Combination and variant of RGB color.
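The channel triplet of Eq. (2.2) corresponds to simple array slicing in NumPy; the sketch below uses an illustrative 2×3 image whose values are assumptions, not data from the dissertation:

```python
import numpy as np

# A color image as an M x N x 3 array of 8-bit values (24 bits per pixel).
M, N = 2, 3
rgb = np.zeros((M, N, 3), dtype=np.uint8)
rgb[..., 0] = 255          # pure red everywhere, for illustration

# Split into the three 2D matrices r(x, y), g(x, y), b(x, y).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# 8 bits per channel gives 2**24 (about 16.7 million) representable colors.
num_colors = 2 ** 24
print(r.shape, num_colors)  # (2, 3) 16777216
```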

Binary images are images whose pixels have only two possible intensity values; they are
normally displayed as black and white [33]. In practice, binary images are used in many
applications since they are the simplest to process, but they are such an impoverished
representation of the image information that their use is not always possible [33]. However,
they are useful where all the information we need can be provided by the silhouette of the
object and that silhouette can be obtained easily.
Sometimes the output of other image processing techniques is represented in the form of a
binary image, for example, the output of edge detection can be a binary image (edge points and
non-edge points). Binary image processing techniques can be useful for subsequent processing
of these output images.
Binary images are often produced by thresholding a grey-scale or color image in order to
separate objects in the image from the background. The color of the object (usually white) is
considered the foreground color, and the remaining pixels (usually black) the background
color.
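Thresholding as described above can be sketched as follows; the fixed threshold value and the synthetic image are illustrative assumptions (a threshold can also be selected automatically, e.g. by Otsu's method):

```python
import numpy as np

# Synthetic gray-scale image: a bright "object" on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[2:4, 2:4] = 200                # object pixels
threshold = 128                    # assumed fixed threshold

# Pixels above the threshold become foreground; the rest, background.
binary = img > threshold           # boolean binary image
foreground_count = int(binary.sum())
print(foreground_count)  # 4 object pixels
```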
2.1.3 Coordinate convention
The result of sampling and quantization is a matrix of real numbers [34]. Assume that an
image i(x, y) is sampled so that the resulting image has M rows and N columns. The image is
then said to be of size M×N, and the values of the coordinates (x, y) are discrete quantities.
In many image processing books, the image origin is defined to be at (x, y) = (0, 0).



