A. Murat Tekalp. “Image and Video Restoration.”
2000 CRC Press LLC.
Image and Video Restoration
A. Murat Tekalp
University of Rochester
53.1 Introduction
53.2 Modeling
     Intra-Frame Observation Model • Multispectral Observation Model • Multiframe Observation Model • Regularization Models
53.3 Model Parameter Estimation
     Blur Identification • Estimation of Regularization Parameters • Estimation of the Noise Variance
53.4 Intra-Frame Restoration
     Basic Regularized Restoration Methods • Restoration of Images Recorded by Nonlinear Sensors • Restoration of Images Degraded by Random Blurs • Adaptive Restoration for Ringing Reduction • Blind Restoration (Deconvolution) • Restoration of Multispectral Images • Restoration of Space-Varying Blurred Images
53.5 Multiframe Restoration and Superresolution
     Multiframe Restoration • Superresolution • Superresolution with Space-Varying Restoration
53.6 Conclusion
References
53.1 Introduction
Digital images and video, acquired by still cameras, consumer camcorders, or even broadcast-quality video cameras, are usually degraded by some amount of blur and noise. In addition, most electronic cameras have limited spatial resolution determined by the characteristics of the sensor array. Common causes of blur are out-of-focus, relative motion, and atmospheric turbulence. Noise sources include film grain, thermal, electronic, and quantization noise. Further, many image sensors and media have known nonlinear input-output characteristics which can be represented as point nonlinearities. The goal of image and video (image sequence) restoration is to estimate each image (frame or field) as it would appear without any degradations, by first modeling the degradation process, and then applying an inverse procedure. This is distinct from image enhancement techniques which are designed to manipulate an image in order to produce more pleasing results to an observer without making use of particular degradation models. On the other hand, superresolution refers to estimating an image at a resolution higher than that of the imaging sensor. Image sequence filtering (restoration and superresolution) becomes especially important when still images from video are desired. This is because the blur and noise can become rather objectionable when observing a "freeze-frame", although they may not be visible to the human eye at the usual frame rates. Since many video signals encountered in practice are interlaced, we address the cases of both progressive and interlaced video.
The problem of image restoration has sparked widespread interest in the signal processing community over the past 20 or 30 years. Because image restoration is essentially an ill-posed inverse problem which is also frequently encountered in various other disciplines such as geophysics, astronomy, medical imaging, and computer vision, the literature that is related to image restoration is abundant. A concise discussion of early results can be found in the books by Andrews and Hunt [1] and Gonzalez and Woods [2]. More recent developments are summarized in the book by Katsaggelos [3], and review papers by Meinel [4], Demoment [5], Sezan and Tekalp [6], and Kaufman and Tekalp [7]. Most recently, printing high-quality still images from video sources has become an important application for multi-frame restoration and superresolution methods. An in-depth coverage of video filtering methods can be found in the book Digital Video Processing by Tekalp [8]. This chapter summarizes key results in digital image and video restoration.
53.2 Modeling
Every image restoration/superresolution algorithm is based on an observation model, which relates
the observed degraded image(s) to the desired “ideal” image, and possibly a regularization model,
which conveys the available a priori information about the ideal image. The success of image restora-
tion and/or superresolution depends on how well the assumed mathematical models fit the actual
application.
53.2.1 Intra-Frame Observation Model
Let the observed and ideal images be sampled on the same 2-D lattice. Then, the observed blurred
and noisy image can be modeled as
g = s(Df) + v    (53.1)
where g, f, and v denote vectors representing lexicographical ordering of the samples of the observed
image, ideal image, and a particular realization of the additive (random) noise process, respectively.
The operator D is called the blur operator. The response of the image sensor to light intensity is
represented by the memoryless mapping s(·), which is, in general, nonlinear. (This nonlinearity has
often been ignored in the literature for algorithm development.)
The blur may be space-invariant or space-variant. For space-invariant blurs, D becomes a convolution operator, which has block-Toeplitz structure; and Eq. (53.1) can be expressed, in scalar form,
as
g(n1, n2) = s( Σ_{(m1,m2) ∈ Sd} d(m1, m2) f(n1 − m1, n2 − m2) ) + v(n1, n2)    (53.2)
where d(m1, m2) and Sd denote the kernel and support of the operator D, respectively. The kernel d(m1, m2) is the impulse response of the blurring system, often called the point spread function (PSF). In case of space-variant blurs, the operator D does not have a particular structure; and the observation equation can be expressed as a superposition summation
g(n1, n2) = s( Σ_{(m1,m2) ∈ Sd(n1,n2)} d(n1, n2; m1, m2) f(m1, m2) ) + v(n1, n2)    (53.3)
where Sd(n1, n2) denotes the support of the PSF at the pixel location (n1, n2).
The noise is usually approximated by a zero-mean, white Gaussian random field which is additive
and independent of the image signal. In fact, it has been generally accepted that more sophisticated
noise models do not, in general, lead to significantly improved restorations.
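To make the intra-frame model concrete, the following sketch (Python with NumPy/SciPy) simulates Eq. (53.2) for a space-invariant blur. The Gaussian PSF, its width, the test image, and the noise variance are illustrative choices, and the sensor nonlinearity s(·) is ignored, as is common in the literature.

    import numpy as np
    from scipy.ndimage import convolve

    def degrade(f, psf, noise_var, rng=None):
        """Simulate Eq. (53.2) for a space-invariant blur: g = D f + v.
        f: ideal image, psf: blur kernel d(m1, m2), noise_var: variance of v."""
        rng = np.random.default_rng() if rng is None else rng
        g = convolve(f, psf, mode="reflect")                      # D f (LSI blur)
        return g + rng.normal(0.0, np.sqrt(noise_var), f.shape)   # + v

    # Illustrative 7x7 Gaussian PSF applied to a random "ideal" image
    x = np.arange(-3, 4)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 1.5 ** 2))
    psf /= psf.sum()
    f = np.random.default_rng(0).random((64, 64))
    g = degrade(f, psf, noise_var=1e-3)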
53.2.2 Multispectral Observation Model
Multispectral images refer to image data with multiple spectral bands that exhibit inter-band cor-
relations. An important class of multispectral images is color images with three spectral bands.
Suppose we have K spectral bands, each blurred by possibly a different PSF. Then, the vector-matrix
model (53.1) can be extended to multispectral modeling as
g = Df + v    (53.4)
where

g ≐ [g1^T ··· gK^T]^T,   f ≐ [f1^T ··· fK^T]^T,   v ≐ [v1^T ··· vK^T]^T

denote N²K × 1 vectors representing the multispectral observed, ideal, and noise data, respectively, stacked as composite vectors, and
D ≐
    [ D11 ··· D1K ]
    [  ⋮    ⋱   ⋮  ]
    [ DK1 ··· DKK ]

is an N²K × N²K matrix representing the multispectral blur operator. In most applications, D is block diagonal, indicating no inter-band blurring.
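For the common block-diagonal case, model (53.4) reduces to blurring each band independently with its own PSF; inter-band blurring would require the off-diagonal blocks as well. A minimal sketch (the band images and PSFs are placeholders):

    import numpy as np
    from scipy.ndimage import convolve

    def multispectral_blur(bands, psfs):
        """Apply a block-diagonal multispectral blur: band k sees only D_kk."""
        return np.stack([convolve(f_k, d_k, mode="reflect")
                         for f_k, d_k in zip(bands, psfs)])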
53.2.3 Multiframe Observation Model
Suppose a sequence of blurred and noisy images gk(n1, n2), k = 1, ..., L, corresponding to multiple shots (from different angles) of a static scene sampled on a 2-D lattice or frames (fields) of video sampled (at different times) on a 3-D progressive (interlaced) lattice, is available. Then, we may be able to estimate a higher-resolution "ideal" still image f(m1, m2) (corresponding to one of the observed frames) sampled on a lattice which has a higher sampling density than that of the input lattice. The main distinction between the multispectral and multiframe observation models is that here the observed images are subject to sub-pixel shifts (motion), possibly space-varying, which makes high-resolution reconstruction possible. In the case of video, we may also model blurring due to motion within the aperture time to further sharpen images.
To this effect, each observed image (frame or field) can be related to the desired high-resolution
ideal still-image through the superposition summation [8]
gk(n1, n2) = s( Σ_{(m1,m2) ∈ Sd(n1,n2;k)} dk(n1, n2; m1, m2) f(m1, m2) ) + vk(n1, n2)    (53.5)
where the support of the summation over the high-resolution grid (m1, m2) at a particular observed pixel (n1, n2; k) depends on the motion trajectory connecting the pixel (n1, n2; k) to the ideal image, the size of the support of the low-resolution sensor PSF ha(x1, x2) with respect to the high-resolution grid, and whether there is additional optical (out-of-focus, motion, etc.) blur. Because the relative positions of low- and high-resolution pixels in general vary by spatial coordinates, the discrete sensor PSF is space-varying. The support of the space-varying PSF is indicated by the shaded area in Fig. 53.1,
where the rectangle depicted by solid lines shows the support of a low-resolution pixel over the high-
resolution sensor array. The shaded region corresponds to the area swept by the low-resolution pixel
due to motion during the aperture time [8].
FIGURE 53.1: Illustration of the discrete system PSF.
Note that the model (53.5) is invalid in case of occlusion. That is, each observed pixel (n1, n2; k) can be expressed as a linear combination of several desired high-resolution pixels (m1, m2), provided that (n1, n2; k) is connected to (m1, m2) by a motion trajectory. We assume that occlusion regions
can be detected a priori using a proper motion estimation/segmentation algorithm.
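As a simplified illustration of model (53.5), the sketch below generates one low-resolution frame from a high-resolution image assuming purely translational sub-pixel motion, a box sensor PSF, integer decimation, and no occlusion or additional optical blur; all of these choices (and the parameter values) are simplifying assumptions, not part of the general model.

    import numpy as np
    from scipy.ndimage import shift, uniform_filter

    def observe_lowres(f_hi, dx, dy, factor=2, noise_std=0.01, rng=None):
        """Generate one frame g_k per Eq. (53.5): global translational motion
        (dx, dy) in high-resolution pixels, a box sensor PSF of size factor x
        factor, decimation by `factor`, and additive white Gaussian noise."""
        rng = np.random.default_rng() if rng is None else rng
        warped = shift(f_hi, (dy, dx), order=1, mode="nearest")  # motion to frame k
        blurred = uniform_filter(warped, size=factor)            # sensor PSF h_a
        g = blurred[::factor, ::factor]                          # low-resolution lattice
        return g + rng.normal(0.0, noise_std, g.shape)           # + v_k

    # Frames with distinct sub-pixel shifts carry complementary samples of f:
    f_hi = np.random.default_rng(1).random((128, 128))
    frames = [observe_lowres(f_hi, dx, dy)
              for dx, dy in [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]]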
53.2.4 Regularization Models
Restoration is an ill-posed problem which can be regularized by modeling certain aspects of the desired
“ideal” image. Images can be modeled as either 2-D deterministic sequences or random fields. A
priori information about the ideal image can then be used to define hard or soft constraints on the
solution. In the deterministic case, images are usually assumed to be members of an appropriate
Hilbert space, such as a Euclidean space with the usual inner product and norm. For example, in the
context of set theoretic restoration, the solution can be restricted to be a member of a set consisting
of all images satisfying a certain smoothness criterion [9]. On the other hand, constrained least
squares (CLS) and Tikhonov-Miller regularization use quadratic functionals to impose smoothness
constraints in an optimization framework.
In the random case, models have been developed for the pdf of the ideal image in the context of
maximum a posteriori (MAP) image restoration. For example, Trussell and Hunt [10] have proposed
a Gaussian distribution with space-varying mean and stationary covariance as a model for the pdf
of the image. Geman and Geman [11] proposed a Gibbs distribution to model the pdf of the image.
Alternatively, if the image is assumed to be a realization of a homogeneous Gauss-Markov random
process, then it can be statistically modeled through an autoregressive (AR) difference equation [12]
f(n1, n2) = Σ_{(m1,m2) ∈ Sc} c(m1, m2) f(n1 − m1, n2 − m2) + w(n1, n2)    (53.6)
where {c(m1, m2) : (m1, m2) ∈ Sc} denote the model coefficients, Sc is the model support (which may be causal, semi-causal, or non-causal), and w(n1, n2) represents the modeling error, which is Gaussian distributed. The model coefficients can be determined such that the modeling error has minimum variance [12]. Extensions of (53.6) to inhomogeneous Gauss-Markov fields were proposed by Jeng and Woods [13].
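The minimum-variance fit of the coefficients in (53.6) amounts to an ordinary least-squares regression of each pixel on its neighbors in the model support. A minimal sketch, assuming a causal three-coefficient support (the support choice is illustrative):

    import numpy as np

    def fit_ar_coefficients(f, support=((0, 1), (1, 0), (1, 1))):
        """Least-squares estimate of the AR coefficients c(m1, m2) of Eq. (53.6).
        For each interior pixel the target is f(n1, n2) and the predictors are
        the shifted samples f(n1 - m1, n2 - m2), (m1, m2) in the model support."""
        f = f.astype(float) - f.mean()
        b = max(max(abs(m1), abs(m2)) for m1, m2 in support)   # border width
        target = f[b:-b, b:-b].ravel()
        preds = np.column_stack(
            [f[b - m1: f.shape[0] - b - m1, b - m2: f.shape[1] - b - m2].ravel()
             for (m1, m2) in support])
        c, *_ = np.linalg.lstsq(preds, target, rcond=None)
        w_var = np.mean((target - preds @ c) ** 2)             # modeling-error variance
        return dict(zip(support, c)), w_var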
53.3 Model Parameter Estimation
In this section, we discuss methods for estimating the parameters that are involved in the observation
and regularization models for subsequent use in the restoration algorithms.
53.3.1 Blur Identification
Blur identification refers to estimation of both the support and parameters of the PSF {d(n1, n2) : (n1, n2) ∈ Sd}. It is a crucial element of image restoration because the quality of restored images is
highly sensitive to errors in the PSF [14]. An early approach to blur identification has been based on
the assumption that the original scene contains an ideal point source, and that its spread (hence the
PSF) can be determined from the observed image. Rosenfeld and Kak [15] show that the PSF can
also be determined from an ideal line source. These approaches are of limited use in practice because
a scene, in general, does not contain an ideal point or line source and the observation noise may not
allow the measurement of a useful spread.
Models for certain types of PSF can be derived using principles of optics, if the source of the
blur is known [7]. For example, out-of-focus and motion blur PSF can be parameterized with a few
parameters. Further, they are completely characterized by their zeros in the frequency domain. Power spectrum and cepstrum (Fourier transform of the logarithm of the power spectrum) analysis methods
have been successfully applied in many cases to identify the location of these zero-crossings [16, 17].
Alternatively, Chang et al. [18] proposed a bispectrum analysis method, which is motivated by the
fact that bispectrum is not affected, in principle, by the observation noise. However, the bispectral
method requires much more data than the method based on the power spectrum. Note that PSFs
which do not have zero crossings in the frequency domain (e.g., Gaussian PSF modeling atmospheric
turbulence), cannot be identified by these techniques.
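As an illustration of the power-spectrum/cepstrum idea, the following simplified sketch estimates the extent of a horizontal uniform motion blur from the averaged row cepstrum of the observed image; the strongest negative cepstral peak occurs near the blur extent. Windowing, trend removal, and full 2-D analysis are omitted, and the search range is an ad hoc choice.

    import numpy as np

    def estimate_motion_blur_extent(g, max_extent=40):
        """Crude cepstral estimate of a horizontal uniform motion-blur extent."""
        rows = g - g.mean()
        power = (np.abs(np.fft.fft(rows, axis=1)) ** 2).mean(axis=0)  # averaged row power spectrum
        cepstrum = np.real(np.fft.ifft(np.log(power + 1e-12)))
        return int(np.argmin(cepstrum[2:max_extent]) + 2)             # strongest negative peak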
Yet another approach for blur identification is the maximum likelihood (ML) estimation approach.
The ML approach aims to find those parameter values (including, in principle, the observation noise
variance) that have most likely resulted in the observed image(s). Different implementations of the
ML image and blur identification are discussed under a unifying framework [19]. Pavlović and
Tekalp [20] propose a practical method to find the ML estimates of the parameters of a PSF based on
a continuous domain image formation model.
In multi-frame image restoration, blur identification using more than one frame at a time becomes
possible. For example, the PSF of a possibly space-varying motion blur can be computed at each
pixel from an estimate of the frame-to-frame motion vector at that pixel, provided that the shutter
speed of the camera is known [21].
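A sketch of that last observation: given an estimated motion vector at a pixel (in pixels per frame) and the fraction of the frame interval during which the shutter is open, a motion-blur PSF can be rasterized along the motion direction. The nearest-neighbor rasterization and centering convention here are illustrative simplifications.

    import numpy as np

    def motion_blur_psf(vx, vy, shutter_fraction):
        """Motion-blur PSF from a motion vector and the relative aperture time."""
        length = max(1, int(round(np.hypot(vx, vy) * shutter_fraction)))  # blur extent in pixels
        size = 2 * length + 1
        psf = np.zeros((size, size))
        angle = np.arctan2(vy, vx)
        for t in np.linspace(0.0, length, 4 * length + 1):  # samples along the blur path
            r = int(round(length + t * np.sin(angle)))
            c = int(round(length + t * np.cos(angle)))
            psf[r, c] += 1.0
        return psf / psf.sum()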
53.3.2 Estimation of Regularization Parameters
Regularization model parameters aim to strike a balance between the fidelity of the restored image to
the observed data and its smoothness. Various methods exist to identify regularization parameters,
such as parametric pdf models, parametric smoothness constraints, and AR image models. Some
restoration methods require the knowledge of the power spectrum of the ideal image, which can be
estimated, for example, from an AR model of the image. The AR parameters can, in turn, be estimated
from the observed image by a least squares [22] or an ML technique [63]. On the other hand,
non-parametric spectral estimation is also possible through the application of periodogram-based
methods to a prototype image [69, 23]. In the context of maximum a posteriori (MAP) methods, the a
priori pdf is often modeled by a parametric pdf, such as a Gaussian [10] or a Gibbsian [11]. Standard
methods for estimating these parameters do not exist. Methods for estimating the regularization
parameter in the CLS, Tikhonov-Miller, and related formulations are discussed in [24].
53.3.3 Estimation of the Noise Variance
Almost all restoration algorithms assume that the observation noise is a zero-mean, white random
process that is uncorrelated with the image. Then, the noise field is completely characterized by its
variance, which is commonly estimated by the sample variance computed over a low-contrast local
region of the observed image. As we will see in the following section, the noise variance plays an
important role in defining constraints used in some of the restoration algorithms.
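One common recipe consistent with this description: tile the image, compute each tile's sample variance, and take a low quantile of those variances as the noise variance, on the grounds that the flattest (low-contrast) tiles contain essentially only noise. The tile size and quantile below are arbitrary choices.

    import numpy as np

    def estimate_noise_variance(g, block=16, quantile=0.05):
        """Estimate the noise variance from low-contrast regions of g."""
        h = (g.shape[0] // block) * block
        w = (g.shape[1] // block) * block
        tiles = g[:h, :w].reshape(h // block, block, w // block, block)
        variances = tiles.var(axis=(1, 3)).ravel()   # sample variance of each tile
        return float(np.quantile(variances, quantile))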
53.4 Intra-Frame Restoration
We start by looking at some basic regularized restoration strategies in the case of an LSI blur model with no pointwise nonlinearity. The effect of the nonlinear mapping s(·) is discussed in Section 53.4.2. Methods that allow PSFs with random components are summarized in Section 53.4.3. Adaptive
restoration for ringing suppression and blind restoration are covered in Sections 53.4.4 and 53.4.5,
respectively. Restoration of multispectral images and space-varying blurred images are addressed in
Sections 53.4.6 and 53.4.7, respectively.
53.4.1 Basic Regularized Restoration Methods
When the mapping s(.) is ignored, it is evident from Eq. (53.1) that image restoration reduces to
solving a set of simultaneous linear equations. If the matrix D is nonsingular (i.e., D⁻¹ exists) and
the vector g lies in the column space of D (i.e., there is no observation noise), then there exists a
unique solution which can be found by direct inversion (also known as inverse filtering). In practice,
however, we almost always have an underdetermined (due to the boundary truncation problem [14]) and
inconsistent (due to observation noise) set of equations. In this case, we resort to a minimum-norm
least-squares solution. A least squares (LS) solution (not unique when the columns of D are linearly
dependent) minimizes the norm-square of the residual
J_LS(f) ≐ ||g − Df||²    (53.7)
LS solution(s) with the minimum norm (energy) is (are) generally known as pseudo-inverse solu-
tion(s) (PIS).
Restoration by pseudo-inversion is often ill-posed owing to the presence of observation noise [14].
This follows because the pseudo-inverse operator usually has some very large eigenvalues. For ex-
ample, a typical blur transfer function has zeros; and thus, its pseudo-inverse attains very large
magnitudes near these singularities as well as at high frequencies. This results in excessive amplification of the sensor noise at these frequencies. Regularized inversion techniques attempt to roll off
the transfer function of the pseudo-inverse filter at these frequencies to limit noise amplification.
It follows that the regularized inverse deviates from the pseudo-inverse at these frequencies which
leads to other types of artifacts, generally known as regularization artifacts [14]. Various strategies
for regularized inversion (and how to achieve the right amount of regularization) are discussed in
the following.
Singular-Value Decomposition Method
The pseudo-inverse D⁺ can be computed using the singular value decomposition (SVD) [1]

D⁺ = Σ_{i=1}^{R} λi^(−1/2) zi ui^T    (53.8)
where λi denote the singular values, zi and ui are the eigenvectors of D^T D and DD^T, respectively,
and R is the rank of D. Clearly, reciprocation of zero singular-values is avoided since the summation
runs to R, the rank of D. Under the assumption that D is block-circulant (corresponding to a
circular convolution), the PIS computed through Eq. (53.8) is equivalent to the frequency domain
pseudo-inverse filtering

D⁺(u, v) = 1/D(u, v) if D(u, v) ≠ 0, and D⁺(u, v) = 0 otherwise    (53.9)
where D(u, v) denotes the frequency response of the blur. This is because a block-circulant matrix
can be diagonalized by a 2-D discrete Fourier transformation (DFT) [2].
Regularization of the PIS can then be achieved by truncating the singular value expansion (53.8)
to eliminate all terms corresponding to small λi (which are responsible for the noise amplification)
at the expense of reduced resolution. Truncation strategies are generally ad-hoc in the absence of
additional information.
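For a block-circulant D, truncating small singular values corresponds to not inverting frequencies where the blur response is small. Below is a minimal sketch of such a truncated (thresholded) pseudo-inverse filter, in the spirit of Eqs. (53.8) and (53.9); the threshold value is an ad hoc choice, reflecting the remark above that truncation strategies are generally ad hoc.

    import numpy as np

    def pad_psf(psf, shape):
        """Embed the PSF, centered, in an array of the image size."""
        out = np.zeros(shape)
        r0 = (shape[0] - psf.shape[0]) // 2
        c0 = (shape[1] - psf.shape[1]) // 2
        out[r0:r0 + psf.shape[0], c0:c0 + psf.shape[1]] = psf
        return out

    def truncated_inverse_filter(g, psf, threshold=0.1):
        """Pseudo-inverse restoration in the DFT domain; frequencies where
        |D(u, v)| <= threshold are left out (the circulant analogue of
        truncating small singular values)."""
        D = np.fft.fft2(np.fft.ifftshift(pad_psf(psf, g.shape)))
        safe_D = np.where(D == 0, 1.0, D)                        # avoid division warnings
        D_plus = np.where(np.abs(D) > threshold, 1.0 / safe_D, 0.0)
        return np.real(np.fft.ifft2(D_plus * np.fft.fft2(g)))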
Iterative Methods (Landweber Iterations)
Several image restoration algorithms are based on variations of the so-called Landweber iterations [25, 26, 27, 28, 31, 32]

f_{k+1} = f_k + R D^T (g − D f_k)    (53.10)
where R is a matrix that controls the rate of convergence of the iterations. There is no general way to select the best R matrix. If the system (53.1) is nonsingular and consistent (hardly ever the case),
the iterations (53.10) will converge to the solution. If, on the other hand, (53.1) is underdetermined
and/or inconsistent, then (53.10) converges to a minimum-norm least squares solution (PIS). The
theory of this and other closely related algorithms are discussed by Sanz and Huang [26] and Tom
et al. [27]. Kawata and Ichioka [28] are among the first to apply the Landweber-type iterations to
image restoration, which they refer to as “reblurring” method.
Landweber-type iterative restoration methods can be regularized by appropriately terminating
the iterations before convergence, since the closer we are to the pseudo-inverse, the more noise
amplification we have. A termination rule can be defined on the basis of the norm of the residual
image signal [29]. Alternatively, soft and/or hard constraints can be incorporated into iterations to
achieve regularization. The constrained iterations can be written as [30, 31]
f_{k+1} = C[ f_k + R D^T (g − D f_k) ]    (53.11)
where C is a nonexpansive constraint operator, i.e., ||C(f1) − C(f2)|| ≤ ||f1 − f2||, to guarantee
the convergence of the iterations. Application of Eq. (53.11) to image restoration has been extensively
studied (see [31, 32] and the references therein).
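A minimal sketch of iterations (53.10)–(53.11) for a space-invariant blur, taking R as a scalar step size times the identity, C as projection onto nonnegative images, and terminating when the residual norm stops decreasing appreciably. The step size, constraint, and stopping rule are illustrative choices, not prescriptions from the text.

    import numpy as np
    from scipy.ndimage import convolve, correlate

    def landweber_restore(g, psf, beta=1.0, max_iter=200, tol=1e-4):
        """Constrained Landweber iterations, Eqs. (53.10)-(53.11), with R = beta*I
        and C = projection onto nonnegative images. For convergence, beta must be
        smaller than 2 divided by the largest eigenvalue of D^T D."""
        f = g.copy()
        prev_res = np.inf
        for _ in range(max_iter):
            residual = g - convolve(f, psf, mode="reflect")          # g - D f_k
            f = f + beta * correlate(residual, psf, mode="reflect")  # + beta D^T (g - D f_k)
            f = np.clip(f, 0.0, None)                                # constraint operator C
            res_norm = np.linalg.norm(residual)
            if prev_res - res_norm < tol * prev_res:                 # stop before noise builds up
                break
            prev_res = res_norm
        return f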
Constrained Least Squares Method
Regularized image restoration can be formulated as a constrained optimization problem, where a functional ||Q(f)||² of the image is minimized subject to the constraint ||g − Df||² = σ². Here σ² is a constant, which is usually set equal to the variance of the observation noise. The constrained
least squares (CLS) estimate minimizes the Lagrangian [34]
J_CLS(f) = ||Q(f)||² + α ( ||g − Df||² − σ² )    (53.12)
where α is the Lagrange multiplier. The operator Q is chosen such that the minimization of Eq. (53.12)
enforces some desired property of the ideal image. For instance, if Q is selected as the Laplacian
operator, smoothness of the restored image is enforced. The CLS estimate can be expressed, by taking
the derivative of Eq. (53.12) and setting it equal to zero, as [1]
f̂ = ( D^H D + γ Q^H Q )⁻¹ D^H g    (53.13)