
Signal Processing and Performance
Analysis for Imaging Systems
For a listing of recent titles in the Artech House Optoelectronics Series,
turn to the back of this book.
Signal Processing and Performance
Analysis for Imaging Systems
S. Susan Young
Ronald G. Driggers
Eddie L. Jacobs
artechhouse.com
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
ISBN-13: 978-1-59693-287-6
Cover design by Igor Valdman
© 2008 ARTECH HOUSE, INC.
685 Canton Street
Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, includ-
ing photocopying, recording, or by any information storage and retrieval system, without
permission in writing from the publisher.


All terms mentioned in this book that are known to be trademarks or service marks have
been appropriately capitalized. Artech House cannot attest to the accuracy of this informa-
tion. Use of a term in this book should not be regarded as affecting the validity of any trade-
mark or service mark.
10 9 8 7 6 5 4 3 2 1
To our families

Contents
Preface xiii
PART I
Basic Principles of Imaging Systems and Performance 1
CHAPTER 1
Introduction 3
1.1 “Combined” Imaging System Performance 3
1.2 Imaging Performance 3
1.3 Signal Processing: Basic Principles and Advanced Applications 4
1.4 Image Resampling 4
1.5 Super-Resolution Image Reconstruction 5
1.6 Image Restoration—Deblurring 6
1.7 Image Contrast Enhancement 7
1.8 Nonuniformity Correction (NUC) 7
1.9 Tone Scale 8
1.10 Image Fusion 8
References 10
CHAPTER 2
Imaging Systems 11
2.1 Basic Imaging Systems 11
2.2 Resolution and Sensitivity 15
2.3 Linear Shift-Invariant (LSI) Imaging Systems 16
2.4 Imaging System Point Spread Function and Modulation Transfer Function 20
2.4.1 Optical Filtering 21
2.4.2 Detector Spatial Filters 22
2.4.3 Electronics Filtering 24
2.4.4 Display Filtering 25
2.4.5 Human Eye 26
2.4.6 Overall Image Transfer 27
2.5 Sampled Imaging Systems 28
2.6 Signal-to-Noise Ratio 34
2.7 Electro-Optical and Infrared Imaging Systems 38
2.8 Summary 39
References 39
CHAPTER 3
Target Acquisition and Image Quality 41
3.1 Introduction 41
3.2 A Brief History of Target Acquisition Theory 41
3.3 Threshold Vision 43
3.3.1 Threshold Vision of the Unaided Eye 43
3.3.2 Threshold Vision of the Aided Eye 47
3.4 Image Quality Metric 50
3.5 Example 53
3.6 Summary 61
References 61
PART II
Basic Principles of Signal Processing 63
CHAPTER 4
Basic Principles of Signal and Image Processing 65
4.1 Introduction 65
4.2 The Fourier Transform 65

4.2.1 One-Dimensional Fourier Transform 65
4.2.2 Two-Dimensional Fourier Transform 78
4.3 Finite Impulse Response Filters 83
4.3.1 Definition of Nonrecursive and Recursive Filters 83
4.3.2 Implementation of FIR Filters 84
4.3.3 Shortcomings of FIR Filters 85
4.4 Fourier-Based Filters 86
4.4.1 Radially Symmetric Filter with a Gaussian Window 87
4.4.2 Radially Symmetric Filter with a Hamming Window at
a Transition Point 87
4.4.3 Radially Symmetric Filter with a Butterworth Window at
a Transition Point 88
4.4.4 Radially Symmetric Filter with a Power Window 89
4.4.5 Performance Comparison of Fourier-Based Filters 90
4.5 The Wavelet Transform 90
4.5.1 Time-Frequency Wavelet Analysis 91
4.5.2 Dyadic and Discrete Wavelet Transform 96
4.5.3 Condition of Constructing a Wavelet Transform 97
4.5.4 Forward and Inverse Wavelet Transform 97
4.5.5 Two-Dimensional Wavelet Transform 98
4.5.6 Multiscale Edge Detection 98
4.6 Summary 102
References 102
PART III
Advanced Applications 105
CHAPTER 5
Image Resampling 107
5.1 Introduction 107
5.2 Image Display, Reconstruction, and Resampling 107

5.3 Sampling Theory and Sampling Artifacts 109
5.3.1 Sampling Theory 109
5.3.2 Sampling Artifacts 110
5.4 Image Resampling Using Spatial Domain Methods 111
5.4.1 Image Resampling Model 111
5.4.2 Image Rescale Implementation 112
5.4.3 Resampling Filters 112
5.5 Antialias Image Resampling Using Fourier-Based Methods 114
5.5.1 Image Resampling Model 114
5.5.2 Image Rescale Implementation 115
5.5.3 Resampling System Design 117
5.5.4 Resampling Filters 118
5.5.5 Resampling Filters Performance Analysis 119
5.6 Image Resampling Performance Measurements 125
5.7 Summary 127
References 127
CHAPTER 6
Super-Resolution 129
6.1 Introduction 129
6.1.1 The Meaning of Super-Resolution 129
6.1.2 Super-Resolution for Diffraction and Sampling 129
6.1.3 Proposed Nomenclature by IEEE 130
6.2 Super-Resolution Image Restoration 130
6.3 Super-Resolution Image Reconstruction 131
6.3.1 Background 131
6.3.2 Overview of the Super-Resolution Reconstruction Algorithm 132
6.3.3 Image Acquisition—Microdither Scanner Versus Natural Jitter 132
6.3.4 Subpixel Shift Estimation 133
6.3.5 Motion Estimation 135
6.3.6 High-Resolution Output Image Reconstruction 143

6.4 Super-Resolution Imager Performance Measurements 158
6.4.1 Background 158
6.4.2 Experimental Approach 159
6.4.3 Measurement Results 166
6.5 Sensors That Benefit from Super-Resolution Reconstruction 167
6.5.1 Example and Performance Estimates 168
6.6 Performance Modeling and Prediction of Super-Resolution
Reconstruction 172
6.7 Summary 173
References 174
CHAPTER 7
Image Deblurring 179
7.1 Introduction 179
7.2 Regularization Methods 181
7.3 Wiener Filter 181
7.4 Van Cittert Filter 182
7.5 CLEAN Algorithm 183
7.6 P-Deblurring Filter 184
7.6.1 Definition of the P-Deblurring Filter 185
7.6.2 Properties of the P-Deblurring Filter 186
7.6.3 P-Deblurring Filter Design 188
7.7 Image Deblurring Performance Measurements 199
7.7.1 Experimental Approach 200
7.7.2 Perception Experiment Result Analysis 203
7.8 Summary 204
References 204
CHAPTER 8
Image Contrast Enhancement 207
8.1 Introduction 207

8.2 Single-Scale Process 208
8.2.1 Contrast Stretching 208
8.2.2 Histogram Modification 209
8.2.3 Region-Growing Method 209
8.3 Multiscale Process 209
8.3.1 Multiresolution Analysis 210
8.3.2 Contrast Enhancement Based on Unsharp Masking 210
8.3.3 Contrast Enhancement Based on Wavelet Edges 211
8.4 Contrast Enhancement Image Performance Measurements 217
8.4.1 Background 217
8.4.2 Time Limited Search Model 218
8.4.3 Experimental Approach 219
8.4.4 Results 222
8.4.5 Analysis 223
8.4.6 Discussion 226
8.5 Summary 227
References 228
CHAPTER 9
Nonuniformity Correction 231
9.1 Detector Nonuniformity 231
9.2 Linear Correction and the Effects of Nonlinearity 232
9.2.1 Linear Correction Model 233
9.2.2 Effects of Nonlinearity 233
9.3 Adaptive NUC 238
9.3.1 Temporal Processing 238
9.3.2 Spatio-Temporal Processing 240
9.4 Imaging System Performance with Fixed-Pattern Noise 243
9.5 Summary 244
References 245

CHAPTER 10
Tone Scale 247
10.1 Introduction 247
10.2 Piece-Wise Linear Tone Scale 248
10.3 Nonlinear Tone Scale 250
10.3.1 Gamma Correction 250
10.3.2 Look-Up Tables 252
10.4 Perceptual Linearization Tone Scale 252
10.5 Application of Tone Scale to Enhanced Visualization in Radiation
Treatment 255
10.5.1 Portal Image in Radiation Treatment 255
10.5.2 Locating and Labeling the Radiation and Collimation Fields 257
10.5.3 Design of the Tone Scale Curves 257
10.5.4 Contrast Enhancement 262
10.5.5 Producing the Output Image 264
10.6 Tone Scale Performance Example 264
10.7 Summary 266
References 267
CHAPTER 11
Image Fusion 269
11.1 Introduction 269
11.2 Objectives for Image Fusion 270
11.3 Image Fusion Algorithms 271
11.3.1 Superposition 272
11.3.2 Laplacian Pyramid 272
11.3.3 Ratio of a Lowpass Pyramid 275
11.3.4 Perceptual-Based Multiscale Decomposition 276
11.3.5 Discrete Wavelet Transform 278
11.4 Benefits of Multiple Image Modes 280
11.5 Image Fusion Quality Metrics 281

11.5.1 Mean Squared Error 282
11.5.2 Peak Signal-to-Noise Ratio 283
11.5.3 Mutual Information 283
11.5.4 Image Quality Index by Wang and Bovik 283
11.5.5 Image Fusion Quality Index by Piella and Heijmans 284
11.5.6 Xydeas and Petrovic Metric 285
11.6 Imaging System Performance with Image Fusion 286
11.7 Summary 290
References 290
About the Authors 293
Index 295

Preface
In today’s consumer electronics market where a 5-megapixel camera is no longer
considered state-of-the-art, signal and image processing algorithms are real-time
and widely used. They stabilize images, provide super-resolution, adjust for detec-
tor nonuniformities, reduce noise and blur, and generally improve camera perfor-
mance for those of us who are not professional photographers. Most of these signal
and image processing techniques are company proprietary and the details of these
techniques are never revealed to outside scientists and engineers. In addition, it is
not necessary for the performance of these systems (including the algorithms) to be
determined since the metric of success is whether the consumer likes the product
and buys the device.
In other imaging communities such as military imaging systems (which, at a
minimum, include visible, image intensifiers, and infrared) and medical imaging
devices, it is extremely important to determine the performance of the imaging sys-
tem, including the signal and image processing techniques. In military imaging sys-
tems that involve target acquisition and surveillance/reconnaissance, the
performance of an imaging system determines how effectively the warfighter can accomplish his or her mission. In medical systems, the imaging system performance
determines how accurately a diagnosis can be provided. Signal and image process-
ing plays a key role in the performance of these imaging systems and, in the past 5 to
10 years, has become a key contributor to increased imaging system performance.
There is a great deal of government funding in signal and image processing for
imaging system performance and the literature is full of university and government
laboratory developed algorithms. There are still a great number of industry algo-
rithms that, overall, are considered company proprietary. We focus on those in the
literature and those algorithms that can be generalized in a nonproprietary manner.
There are numerous books in the literature on signal and image processing tech-
niques, algorithms, and methods. The majority of these books emphasize the mathematics of image processing and how it is applied to image information. Very
few of the books address the overall imaging system performance when signal and
image processing is considered a component of the imaging system. Likewise, there
are many books in the area of imaging system performance that consider the optics,
the detector, and the displays in the system and how the system performance
behaves with changes or modifications of these components. There is very little
book content where signal and image processing is included as a component of the
overall imaging system performance. This is the gap that we have attempted to fill
with this book. While algorithm development has exploded in the past 5 to 10 years,
the system performance aspects are relatively new and not quite fully understood.
While the focus of this book is to help the scientist and engineer begin to understand
that these algorithms are really an imaging system component and help in the system
performance prediction of imaging systems with these algorithms, the performance
material is new and will undergo dramatic improvements in the next 5 years.
We have chosen to address signal and image processing techniques that are not new, but their real-time implementation in military and medical systems is relatively new, and the performance prediction of systems with these algorithms is definitely new. Some algorithms, such as electronic stabilization and turbulence correction, are not addressed. There are current programs in algorithm development that will provide great advances in algorithm performance in the next few years, so we decided not to spend time on these particular areas.
It is worth mentioning that there is a community called “computational imag-
ing” where, instead of using signal/image processing to improve the performance of
an existing imaging system approach, signal processing is an inherent part of the
electro-optical design process for image formation. The field includes unconven-
tional imaging systems and unconventional processing, where the performance of
the collective system design is beyond any conventional system approach. In many
cases, the resulting image is not important. The goal of the field is to maximize sys-
tem task performance for a given electro-optical application using nonconventional
design rules (with signal processing and electro-optical components) through the
exploitation of various degrees of freedom (space, time, spectrum, polarization,
dynamic range, and so forth). Leaders in this field include Dennis Healey at DARPA,
Ravi Athale at MITRE, Joe Mait at the Army Research Laboratory, Mark
Mirotznick at Catholic University, and Dave Brady at Duke University. These
researchers and others are forging a new path for the rest of us and have provided
some very stimulating experiments and demonstrations in the past 2 or 3 years. We
do not address computational imaging in this book, as the design and approach
methods are still a matter of research and, as always, it will be some time before sys-
tem performance is addressed in a quantitative manner.
We would like to thank a number of people for their thoughtful assistance in this
work. Dr. Patti Gillespie at the Army Research Laboratory provided inspiration and
encouragement for the project. Rich Vollmerhausen has contributed more to mili-
tary imaging system performance modeling over the past 10 years than any other
researcher, and his help was critical to the success of the project. Keith Krapels and
Jonathan Fanning both assisted with the super-resolution work. Khoa Dang, Mike
Prarie, Richard Moore, Chris Howell, Stephen Burks, and Carl Halford contributed
material for the fusion chapter. There are many others who worked on signal processing issues and with whom we collaborated through research papers, including:

Nicole Devitt, Tana Maurer, Richard Espinola, Patrick O’Shea, Brian Teaney, Louis
Larsen, Jim Waterman, Leslie Smith, Jerry Holst, Gene Tener, Jennifer Parks, Dean
Scribner, Jonathan Schuler, Penny Warren, Alan Silver, Jim Howe, Jim Hilger, and
Phil Perconti. We are grateful for the contributions that all of these people have pro-
vided over the years.
We (S. Susan Young and Eddie Jacobs) would like to thank our coauthor, Dr.
Ronald G. Driggers, for his suggestion of writing this book and encouragement in this venture. Our understanding and appreciation of system performance significance started from collaborating with him. S. Susan Young would like to thank Dr. Hsien-Che Lee for his guidance and help early in her career in signal and image processing. On a personal side, we authors are very thankful to our families for their support and understanding.

PART I
Basic Principles of Imaging Systems
and Performance

CHAPTER 1
Introduction
1.1 “Combined” Imaging System Performance
The “combined” imaging system performance of both hardware (sensor) and soft-
ware (signal processing) is extremely important. Imaging system hardware is
designed primarily to form a high-quality image from source emissions under a
large variety of environmental conditions. Signal processing is used to help highlight
or extract information from the images that are generated from an imaging system.
This processing can be automated for decision-making purposes or it can be utilized

to enhance the visual acuity of a human looking through the imaging system.
Performance measures of an imaging system have long been excellent tools for better design and understanding of the imaging system. However, the imaging performance of a system aided by signal processing has not been widely considered when weighing improvements in image quality from imaging system hardware against those from signal processing algorithms. Imaging systems can generate images with low contrast, high noise, blurring, or corrupted/lost high-frequency details, among other degradations. How does the image performance of a low-cost imaging system with the aid of signal processing compare with that of an expensive imaging system? Is it worth investing in higher image quality by improving the imaging system hardware or by developing the signal processing software? The topic of this book is relating the ability to extract information from an imaging system, with the aid of signal processing, to the evaluation of the overall performance of imaging systems.
1.2 Imaging Performance
Understanding the image formation and recording process helps in understanding
the factors that affect image performance and therefore helps the design of imaging
systems and signal processing algorithms. The image formation process and the
sources of image degradation, such as loss of useful high-frequency details, noise, or
low-contrast target environment, are discussed in Chapter 2.
Methods of determining image performance are important tools for assessing the merits of imaging systems and signal processing algorithms. Image performance determination can be performed via subjective human perception studies or image performance modeling. Image performance prediction and the role of image performance modeling are also discussed in Chapter 3.
1.3 Signal Processing: Basic Principles and Advanced Applications
The basic signal processing principles, including Fourier transform, wavelet trans-
form, finite impulse response (FIR) filters, and Fourier-based filters, are discussed in
Chapter 4.
In an image formation and recording process, many factors affect sensor performance and image quality, and these can result in loss of high-frequency information or low contrast in an image. Several common causes of low image quality are the following:

•  Many low-cost visible and thermal sensors spatially or electronically undersample an image. Undersampling results in aliased imagery in which subtle/detailed information (high-frequency components) is lost.
•  An imaging system's blurring function (sometimes called the point spread function, or PSF) is another common factor in the reduction of high-frequency components in the acquired imagery and results in blurred images.
•  Low-cost sensors and environmental factors, such as lighting sources or background complexities, result in low-contrast images.
•  Focal plane array (FPA) sensors have detector-to-detector variability from the FPA fabrication process, which causes fixed-pattern noise in the acquired imagery.
There are many signal processing applications for the enhancement of imaging system performance. Most of them attempt to enhance the image quality or remove the degradation phenomena. Specifically, these applications try to recover the useful high-frequency components that are lost or corrupted in the image and attempt to suppress the undesired high-frequency components, that is, noise. In Chapters 5 to 11, the following classes of signal processing applications are considered:
1. Image resampling;
2. Super-resolution image reconstruction;
3. Image restoration—deblurring;
4. Image contrast enhancement;
5. Nonuniformity correction (NUC);
6. Tone scale;

7. Image fusion.
1.4 Image Resampling
The concept of image resampling originates with the sampled imager. The discussion in this chapter relates image resampling to image display and reconstruction from the sampled points of a single image. These topics provide the reader with a fun-
damental understanding that the way an image is processed and displayed is just as
important as the blur and sampling characteristics of the sensor. It also provides a
background on undersampled imaging for the discussion of super-resolution image reconstruction in the following chapter. In signal processing, image resampling is also called image decimation or image interpolation, depending on whether the goal is to reduce or enlarge the size (or resolution) of a captured image. Resampling can provide image values that are not recorded by the imaging system but are calculated from the neighboring pixels. It does not increase the inherent information content in the image, but a poor image display reconstruction function can reduce the overall imaging system performance.
The image resampling algorithms include spatial and spatial-frequency domain,
or Fourier-based windowing, methods. The important considerations in image
resampling include the image resampling model, image rescale implementation, and
resampling filters, especially the anti-aliasing image resampling filter. These algo-
rithms, examples, and image resampling performance measurements are discussed
in Chapter 5.
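As a small illustration of the spatial domain approach, the sketch below resamples an image by bilinear interpolation. It is a minimal sketch, not the form developed in Chapter 5, and it assumes a single-channel image stored as a NumPy array; the function name and parameters are illustrative.

import numpy as np

def resample_bilinear(image, scale):
    # Minimal bilinear resampling sketch: each output pixel is mapped back
    # to fractional input coordinates and interpolated from four neighbors.
    h, w = image.shape
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)   # output rows in input coordinates
    xs = np.linspace(0, w - 1, out_w)   # output columns in input coordinates
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]  # fractional offsets
    top = (1 - fx) * image[np.ix_(y0, x0)] + fx * image[np.ix_(y0, x1)]
    bot = (1 - fx) * image[np.ix_(y1, x0)] + fx * image[np.ix_(y1, x1)]
    return (1 - fy) * top + fy * bot

Note that for scale factors below 1 this simple interpolator applies no antialiasing prefilter; that is precisely the issue the Fourier-based windowing methods of Chapter 5 address.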
1.5 Super-Resolution Image Reconstruction
The loss of high-frequency information in an image could be due to many factors.
Many low-cost visible and thermal sensors spatially or electronically undersample
an image. Undersampling results in aliased imagery in which the high-frequency
components are folded into the low-frequency components in the image. Conse-
quently, subtle/detailed information (high-frequency components) is lost in these
images. Super-resolution image reconstruction can produce high-resolution images using existing low-cost imaging devices and a sequence, or a few snapshots, of low-resolution images.
Since undersampled images have subpixel shifts between successive frames,
they represent different information from the same scene. Therefore, the informa-
tion that is contained in an undersampled image sequence can be combined to
obtain an alias-free (high-resolution) image. Super-resolution image reconstruction
from multiple snapshots provides far more detailed information than any interpolated image from a single snapshot.
Figure 1.1 shows an example of a high-resolution (alias-free) infrared image
that is obtained from a sequence of low-resolution (aliased) input images having
subpixel shifts among them.
Figure 1.1 Example of super-resolution image reconstruction: (a) input sequence of aliased infrared images having subpixel shifts among them; and (b) output alias-free (high-resolution) image in which the details of tree branches are revealed.
The first step in a super-resolution image reconstruction algorithm is to estimate the subpixel shifts of each frame with respect to a reference frame. The second step is to increase the effective spatial sampling by operating on the sequence of low-resolution subpixel-shifted images. There are both spatial and spatial-frequency domain methods for the subpixel shift estimation and the generation of the high-resolution output images. These algorithms, examples, and the image performance are discussed in Chapter 6.
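To make the second step concrete, the following sketch combines frames by a simple shift-and-add onto a finer grid, assuming the subpixel shifts have already been estimated; the names and the nearest-bin placement are illustrative, and Chapter 6 develops the actual estimation and reconstruction methods.

import numpy as np

def shift_and_add(frames, shifts, factor):
    # Place each low-resolution sample onto a grid 'factor' times finer,
    # according to its estimated (dy, dx) subpixel shift, then average
    # all samples that land in the same high-resolution bin.
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((yy + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.round((xx + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (hy, hx), frame)   # accumulate shifted samples
        np.add.at(cnt, (hy, hx), 1)
    return acc / np.maximum(cnt, 1)       # unfilled bins remain zero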
1.6 Image Restoration—Deblurring
An imaging system’s blurring function, also called the point spread function (PSF), is
another common factor in the reduction of high-frequency components in the
image. Image restoration tries to invert this blurring degradation phenomenon, but
within the bandlimit of the imager (i.e., it enhances the spatial frequencies within the
imager band). This includes deblurring images that are degraded by the limitations

of a sensor or environment. The estimate or knowledge of the blurring function is
essential to the application of these algorithms. One of the most important considerations in designing a deblurring filter is controlling noise, since noise is likely to be amplified at high frequencies. The amplification of noise results in undesired artifacts in the output image. Figure 1.2 shows examples of image deblurring. One input image [Figure 1.2(a)] contains blur, while its deblurred version [Figure 1.2(b)] removes most of the blur. Another input image [Figure 1.2(c)] contains blur and noise; the noise effect is evident in its deblurred version [Figure 1.2(d)].
Image restoration tries to recover the high-frequency information below the diffrac-
tion limit while limiting the noise artifacts. The designs of deblurring filters, the noise control mechanisms, examples, and image performance are discussed in Chapter 7.
Figure 1.2 Examples of image deblurring: (a) blurred bar image; (b) deblurred version of (a); (c) blurred bar image with noise added; and (d) deblurred version of (c).
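Chapter 7 covers the Wiener filter among other designs; as one hedged illustration of the noise-control idea, the sketch below implements a basic frequency-domain Wiener deblurring step. It assumes the PSF is known, centered, and the same size as the image, with the constant k standing in for a noise-to-signal power estimate.

import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    # Basic Wiener deblurring sketch: invert the blur transfer function
    # while the regularizing constant k limits noise amplification at the
    # high frequencies where |H| is small.
    H = np.fft.fft2(np.fft.ifftshift(psf))   # blur transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))

Setting k to zero reduces this to a pure inverse filter, which illustrates the artifact problem: any noise at frequencies where H is nearly zero is amplified without bound.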
1.7 Image Contrast Enhancement
Image details can also be enhanced by image contrast enhancement techniques in
which certain image edges are emphasized as desired. In a medical application, for example, radiologists diagnosing breast cancer from mammograms follow the ductal networks to look for abnormalities. However, the number of ducts and the shape of ductal branches vary among individuals, which makes the visual process of locating the ducts difficult. Image contrast enhancement provides the ability to enhance the appearance of the ductal elements relative to the fatty-tissue surroundings, which helps radiologists visualize abnormalities in mammograms.
Image contrast enhancement methods can be divided into single-scale and multiscale approaches. In the single-scale approach, the image is processed in the original image domain, for example, with a simple look-up table. In the multiscale approach, the image is decomposed into multiple resolution scales, and processing is performed in the multiscale domain. Because the information at each scale is adjusted before the image is reconstructed back to the original image intensity domain, the output image contains the desired detail information. The multiscale approach can also be coupled with dynamic range reduction. Therefore, the detail information
in different scales can be displayed in one output image. Localized contrast
enhancement (LCE) is the process in which these techniques are applied on a local
scale for the management of dynamic range in the image. For example, the
sky-to-ground interface in infrared imaging can include a huge apparent tempera-
ture difference that occupies most of the image dynamic range. Small targets with
smaller signals can be lost, while LCE can reduce the large sky-to-ground interface
signal and enhance small target signals (see Figure 8.10 later in this book). Details of
the algorithms, examples, and image performance are discussed in Chapter 8.
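As one hedged, single-scale example of the ideas above, the unsharp-masking sketch below (a technique discussed in Chapter 8) separates an image into a smooth background and a detail residual and boosts the residual; sigma and gain are illustrative parameters, not values from the text.

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, gain=1.5):
    # A Gaussian lowpass estimates the local background; amplifying the
    # detail residual emphasizes edges relative to their surroundings.
    background = gaussian_filter(image.astype(float), sigma)
    detail = image - background
    return background + gain * detail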
1.8 Nonuniformity Correction (NUC)
Focal plane array (FPA) sensors have been used in many commercial and military
applications, including both visible and infrared imaging systems, since they have
wide spectral responses, compact structures, and cost-effective production. How-
ever, each individual photodetector in the FPA has a different photoresponse, due to
detector-to-detector variability in the FPA fabrication process [1]. Images that are
acquired by an FPA sensor suffer from a common problem known as fixed-pattern
noise, or spatial nonuniformity. The technique to compensate for this distortion is
called nonuniformity correction (NUC). Figure 1.3 shows an example of a
nonuniformity corrected image from an original input image with the fixed-pattern
noise.
There are two main categories of NUC algorithms, namely, calibration-based and scene-adaptive algorithms. A conventional, calibration-based NUC is the standard two-point calibration, which is also called linear NUC. This algorithm estimates the gain and offset parameters by exposing the FPA to two distinct and uniform irradiance levels. The scene-adaptive NUC uses the data acquired in the video sequence and a motion estimation algorithm to register each point in the scene across all of the image frames. This way, continuous compensation can be applied adaptively for individual detector responses and background changes. These algorithms, examples, and imaging system performance are discussed in Chapter 9.
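A minimal sketch of the standard two-point calibration described above follows. It assumes two raw frames captured while viewing uniform sources at known low and high irradiance levels; the names and the epsilon guard are illustrative.

import numpy as np

def two_point_nuc(frame_lo, frame_hi, level_lo, level_hi):
    # Per-detector gain and offset solved from responses to two uniform
    # irradiance levels; the epsilon guards dead (non-responding) pixels.
    gain = (level_hi - level_lo) / np.maximum(frame_hi - frame_lo, 1e-6)
    offset = level_lo - gain * frame_lo
    return gain, offset

def apply_nuc(raw, gain, offset):
    # The per-pixel linear correction applied to every subsequent raw frame.
    return gain * raw + offset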
1.9 Tone Scale
Tone scale is a technique that improves the image presentation on an output display
medium (softcopy display or hardcopy print). Tone scale is also a mathematical
mapping of the image pixel values from the sensor to a region of interest on an out-
put medium. Note that tone scale transforms improve only the appearance of the
image, but not the image quality itself. The gray value resolution is still the same.
However, a proper tone scale allows the characteristic curve of a display system to
match the sensitivity of the human eye to enhance the image interpretation task per-
formance. There are various tone scale techniques, including piece-wise linear tone
scale, nonlinear tone scale, and perceptual linearization tone scale. These techniques
and a tone scale performance example are discussed in Chapter 10.
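As a small illustration of the nonlinear tone scale techniques of Chapter 10, the sketch below builds a gamma-correction look-up table for 8-bit data; the gamma value is illustrative.

import numpy as np

def gamma_lut(gamma=2.2, levels=256):
    # Precompute a look-up table mapping each input code value through a
    # power-law (gamma) curve; applying it is a single indexed lookup.
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

# corrected = gamma_lut()[image]   # image: an 8-bit (uint8) array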
1.10 Image Fusion
Because different sensors provide different signature cues of the scene, image fusion has been receiving increasing attention in signal processing. Many applications have been shown to benefit from fusing the images of multiple sensors.
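The simplest of the fusion algorithms treated in Chapter 11 is superposition; a hedged sketch, assuming two registered single-channel images and an illustrative weight, follows.

import numpy as np

def superposition_fusion(image_a, image_b, alpha=0.5):
    # Weighted superposition of two registered source images; the more
    # capable methods of Chapter 11 fuse within multiscale decompositions.
    return alpha * image_a.astype(float) + (1 - alpha) * image_b.astype(float)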
Imaging sensor characteristics are determined by the wavebands that they
respond to in the electromagnetic spectrum. Figure 1.4 is a diagram of the electro-
magnetic spectrum with wavelength indicated in metric length units [2]. The most
familiar classifications of wavebands are the radiowave, microwave, infrared, visi-
ble, ultraviolet, X-ray, and gamma-ray wavebands. Figure 1.5 shows further subdivided wavebands for broadband sensors [3]. For example, the infrared waveband is commonly subdivided into the near-infrared, shortwave, midwave, and longwave infrared bands.
Figure 1.3 Example of nonuniformity correction: (a) input image with the fixed-pattern noise shown in the image; and (b) nonuniformity corrected image in which the helicopter in the center is clearly illustrated.
