Podilchuk, C. “Signal Recovery from Partial Information”
Digital Signal Processing Handbook
Ed. Vijay K. Madisetti and Douglas B. Williams
Boca Raton: CRC Press LLC, 1999
© 1999 by CRC Press LLC
25
Signal Recovery from Partial
Information
Christine Podilchuk
Bell Laboratories
Lucent Technologies
25.1 Introduction
25.2 Formulation of the Signal Recovery Problem
Prolate Spheroidal Wavefunctions
25.3 Least Squares Solutions
Wiener Filtering • The Pseudoinverse Solution • Regularization Techniques
25.4 Signal Recovery using Projection onto Convex Sets (POCS)
The POCS Framework
25.5 Row-Based Methods
25.6 Block-Based Methods
25.7 Image Restoration Using POCS
References
25.1 Introduction
Signal recovery has been an active area of research for applications in many different scientific disciplines. A central reason for exploring the feasibility of signal recovery is the limitation imposed by a physical device on the amount of data one can record. For example, for diffraction-limited systems, the finite aperture size of the lens constrains the amount of frequency information that can be captured. The image degradation is due to attenuation of high frequency components, resulting in a loss of detail and other high frequency information. In other words, the finite aperture size of the lens acts like a lowpass filter on the input data. In some cases, the quality of the recorded image data can be improved by building a more costly recording device, but many times the required conditions for acceptable data quality are physically unrealizable or too costly. At other times, signal recovery may be necessary for recording a unique event that cannot be reproduced under more ideal recording conditions.
Some of the earliest work on signal recovery includes the work by Sondhi [1] and Slepian [2] on
recovering images from motion blur and Helstrom [3] on least squares restoration. A sampling of
some of the signal recovery algorithms applied to different types of problems can be found in [4]–
[21]. Further reading includes the other sections in this book, Chapter 53, and the extended list of
references provided by all the authors.
The simple signal degradation model described in the next section turns out to be a useful representation for many different problems encountered in practice. Some examples that can be formulated using the general signal recovery paradigm include image restoration, image reconstruction, spectral estimation, and filter design. We distinguish between image restoration, which pertains to image recovery based on a measured distorted version of the original image, and image reconstruction, which
refers most commonly to medical imaging where the image is reconstructed from a set of indirect
measurements, usually projections. For many of the signal recovery applications, it is desirable to
extrapolate a signal outside of a known interval. Extrapolating a signal in the spatial or temporal
domain could result in improved spectral resolution and applies to such problems as power spectrum estimation, radio astronomy, radar target detection, and geophysical exploration. The dual
problem, extrapolating the signal in the frequency domain, also known as superresolution, results in
improved spatial or temporal resolution and is desirable in many image restoration problems. As will be shown later, the standard inverse filtering techniques are not able to resolve the signal estimate beyond the diffraction limit imposed by the physical measuring device.
The observed signal is degraded from the original signal by both the measuring device and external conditions. Besides the measured, distorted output signal, we may have additional information about the measuring system and external conditions, such as noise, as well as some a priori knowledge about the desired signal to be restored or reconstructed. In order
to produce a good estimate of the original signal, we should take advantage of all the available
information.
Although the data recovery algorithms described here apply in general to any data type, we derive
most of the techniques based on two-dimensional input data for image processing applications. For
most cases, it is straightforward to adapt the algorithms to other data types. Examples of data recovery techniques for different inputs are illustrated in the other sections in this book as well as Chapter 53
for image restoration. The material in this section requires some basic knowledge of linear algebra
as found in [22].
Section 25.2 presents the signal degradation model and formulates the signal recovery problem.
The early attempts at signal recovery based on inverse filtering are presented in Section 25.3. The
concept of Projection Onto Convex Sets (POCS) described in Section 25.4 allows us to introduce a
priori knowledge about the original signal in the form of linear as well as nonlinear constraints into
the recovery algorithm. Convex set theoretic formulations allow us to design recovery algorithms
that are extremely flexible and powerful. Sections 25.5 and 25.6 present some basic POCS-based
algorithms and Section 25.7 presents a POCS-based algorithm for image restoration as well as some
results. The sample algorithms presented here are not meant to be exhaustive and the reader is
encouraged to read the other sections in this chapter as well as the references for more details.
25.2 Formulation of the Signal Recovery Problem
Signal recovery can be viewed as an estimation process in which operations are performed on an
observed signal in order to estimate the ideal signal that would be observed if no degradation was
present. In order to design a signal recovery system effectively, it is necessary to characterize the
degradation effects of the physical measuring system. The basic idea is to model the signal degradation effects as accurately as possible and perform operations to undo the degradations and obtain a restored
signal. When the degradation cannot be modeled sufficiently, even the best recovery algorithms will

not yield satisfactory results. For many applications, the degradation system is assumed to be linear
and can be modeled as a Fredholm integral equation of the first kind expressed as
g(x) = \int_{-\infty}^{+\infty} h(x; a) f(a)\, da + n(x).    (25.1)
This is the general case for a one-dimensional signal where f and g are the original and measured
signals, respectively, n represents noise, and h(x; a) is the impulse response or the response of the
measuring system to an impulse at coordinate a.¹ A block diagram illustrating the general one-dimensional signal degradation system is shown in Fig. 25.1. For image processing applications, we modify this equation to the two-dimensional case, that is,
g(x, y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x, y; a, b) f(a, b)\, da\, db + n(x, y)    (25.2)
The degradation operator h is commonly referred to as a point spread function (PSF) in imaging applications because in optics, h is the measured response of an imaging system to a point of light.
FIGURE 25.1: Block diagram of the signal recovery problem.

The Fourier transform of the point spread function h(x, y), denoted H(w_x, w_y), is known as the optical transfer function (OTF) and can be expressed as

H(w_x, w_y) = \frac{\int\!\!\int_{-\infty}^{\infty} h(x, y) \exp\left[-i\left(w_x x + w_y y\right)\right] dx\, dy}{\int\!\!\int_{-\infty}^{\infty} h(x, y)\, dx\, dy}.    (25.3)
The absolute value of the OTF is known as the modulation transfer function (MTF). A commonly
used optical image formation system is a circular thin lens. The recovery problem is considered
ill-posed when a small change in the observed image, g, results in a large change in the solution, f .
Most signal recovery problems in practice are ill-posed.
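As a concrete illustration of Eq. (25.3), the sketch below computes a sampled OTF and MTF for a hypothetical Gaussian PSF (the grid size and blur width are assumptions, not values from the text): normalizing the Fourier transform of the PSF by its integral makes H(0, 0) = 1, and the MTF is the magnitude of the result.

```python
import numpy as np

# Sketch: sampled OTF and MTF of an assumed Gaussian PSF, per Eq. (25.3).
# The OTF is the Fourier transform of the PSF normalized by the PSF's
# integral, so the DC response is one; the MTF is its magnitude.
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))   # assumed Gaussian blur kernel

otf = np.fft.fft2(np.fft.ifftshift(psf))        # numerator of Eq. (25.3), sampled
otf /= psf.sum()                                # denominator: normalize by the DC value
mtf = np.abs(otf)                               # modulation transfer function

print(np.isclose(mtf[0, 0], 1.0))               # DC response of the normalized OTF is 1
```

Since the PSF is nonnegative, the MTF attains its maximum at DC and decays toward the higher spatial frequencies, which is the lowpass behavior described above.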
The continuous version of the degradation system for two-dimensional signals formulated in
Eq. (25.2) can be expressed in discrete form by replacing the continuous arguments with arrays of
samples in two dimensions, that is,
g(i, j) = \sum_m \sum_n h(i, j; m, n) f(m, n) + n(i, j).    (25.4)
It is convenient for image recovery purposes to represent the discrete formulation given in Eq. (25.4)
as a system of linear equations expressed as
g = Hf + n,    (25.5)

where g, f, and n are the lexicographically row-stacked versions of the discretized g, f, and n in Eq. (25.4), and H is the degradation matrix composed of the PSF.
This section presents an overview of some of the techniques proposed to estimate f when the
recovery problem can be modeled by Eq. (25.5). If there is no external noise or measurement error and the set of equations is consistent, Eq. (25.5) reduces to

g = Hf.    (25.6)

¹ This corresponds to the case of a shift-varying impulse response.
It is usually not the case that a practical system can be described by Eq. (25.6). In this section, we
will focus on recovery algorithms where an estimate of the distortion operation represented by the
matrix H is known. For recovery problems where both the desired signal, f, and the degradation
operator, H, are unknown, refer to other articles in this book.
For most systems, the degradation matrix H is highly structured and quite sparse. The additive
noise term due to measurement errors and external and internal noise sources is represented by the
vector n. At first glance, the solution to the signal recovery problem seems to be straightforward —
find the inverse of the matrix H to solve for the unknown vector f. It turns out that the solution is not
so simple because in practice the degradation operator is usually ill-conditioned or rank-deficient
and the problem of inconsistencies or noise must be addressed. Other problems that may arise
include computational complexity due to extremely large problem dimensions especially for image
processing applications. The algorithms described here try to address these issues for the general
signal recovery problem described by Eq. (25.5).
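The points above, a banded and sparse H and an ill-conditioned inversion, can be made concrete with a small numerical sketch; the signal length, blur kernel, and noise level are assumptions chosen for illustration.

```python
import numpy as np

# Sketch (assumed sizes): discretize Eq. (25.5) for a 1-D signal of length N
# blurred by a shift-invariant 3-tap lowpass kernel. H is banded (sparse and
# highly structured), and its small singular values reveal the ill-conditioning
# that makes a direct inverse unreliable.
N = 64
kernel = np.array([0.25, 0.5, 0.25])            # assumed lowpass PSF

# Degradation matrix: each row applies the PSF centered at one output sample.
H = np.zeros((N, N))
for i in range(N):
    for k, c in zip((-1, 0, 1), kernel):
        if 0 <= i + k < N:
            H[i, i + k] = c

rng = np.random.default_rng(0)
f = rng.standard_normal(N)                      # unknown signal (simulated)
n = 1e-3 * rng.standard_normal(N)               # additive measurement noise
g = H @ f + n                                   # observed signal, Eq. (25.5)

s = np.linalg.svd(H, compute_uv=False)
print(s[0] / s[-1])                             # large condition number
```

Even this mild blur yields a condition number in the thousands; noise components aligned with the small singular values are amplified by the same factor under direct inversion.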
25.2.1 Prolate Spheroidal Wavefunctions
We introduce the problem of signal recovery by examining a one-dimensional, linear, time-invariant
system that can be expressed as
g(x) = \int_{-T}^{+T} f(\alpha)\, h(x - \alpha)\, d\alpha,    (25.7)
where g(x) is the observed signal, f(α) is the desired signal of finite support on the interval (−T, +T),
and h(x) denotes the degradation operator. Assuming that the degradation operator in this case is
an ideal lowpass filter, h can be described mathematically as
h(x) = \frac{\sin(x)}{x}.    (25.8)
For this particular case, it is possible to solve for the exact signal f(x) with prolate spheroidal
wavefunctions [23]. The key to successfully solving for f lies in the fact that prolate spheroidal
wavefunctions are the eigenfunctions of the integral equation expressed by Eq. (25.7) with Eq. (25.8)
as the degradation operator. This relationship is expressed as:

\int_{-T}^{+T} \psi_n(\alpha)\, \frac{\sin(x - \alpha)}{x - \alpha}\, d\alpha = \lambda_n \psi_n(x), \quad n = 0, 1, 2, \ldots,    (25.9)
where ψ_n(x) are the prolate spheroidal wavefunctions and λ_n are the corresponding eigenvalues. A critical feature of prolate spheroidal wavefunctions is that they are complete orthogonal bases in the interval (−∞, +∞) as well as the interval (−T, +T); that is,

\int_{-\infty}^{+\infty} \psi_n(x)\, \psi_m(x)\, dx = \begin{cases} 1, & n = m, \\ 0, & n \neq m, \end{cases}    (25.10)
and

\int_{-T}^{+T} \psi_n(x)\, \psi_m(x)\, dx = \begin{cases} \lambda_n, & n = m, \\ 0, & n \neq m. \end{cases}    (25.11)
This allows the functions g(x) and f(x) to be expressed as the series expansions
g(x) = \sum_{n=0}^{\infty} c_n\, \psi_n(x),    (25.12)

f(x) = \sum_{n=0}^{\infty} d_n\, \psi_{Ln}(x),    (25.13)
where ψ_{Ln}(x) are the prolate spheroidal functions truncated to the interval (−T, T). The coefficients c_n and d_n are given by
c_n = \int_{-\infty}^{\infty} g(x)\, \psi_n(x)\, dx    (25.14)
and
d_n = \frac{1}{\lambda_n} \int_{-T}^{T} f(x)\, \psi_n(x)\, dx.    (25.15)
If we substitute the series expansions given by Eqs. (25.12) and (25.13) into Eq. (25.7), we get
g(x) = \sum_{n=0}^{\infty} c_n\, \psi_n(x) = \int_{-T}^{+T} \left[ \sum_{n=0}^{\infty} d_n\, \psi_{Ln}(\alpha) \right] h(x - \alpha)\, d\alpha    (25.16)

\phantom{g(x)} = \sum_{n=0}^{\infty} d_n \left[ \int_{-T}^{+T} \psi_n(\alpha)\, h(x - \alpha)\, d\alpha \right].    (25.17)
Combining this result with Eq. (25.9),


\sum_{n=0}^{\infty} c_n\, \psi_n(x) = \sum_{n=0}^{\infty} \lambda_n d_n\, \psi_n(x),    (25.18)
where
c_n = \lambda_n d_n,    (25.19)

and

d_n = \frac{c_n}{\lambda_n}.    (25.20)
We get an exact solution for the unknown signal f(x) by substituting Eq. (25.20) into Eq. (25.13),
that is,
f(x) = \sum_{n=0}^{\infty} \frac{c_n}{\lambda_n}\, \psi_{Ln}(x).    (25.21)
Therefore, in theory, it is possible to obtain the exact image f(x) from the diffraction-limited image, g(x), using prolate spheroidal wavefunctions. The difficulties of signal recovery become more apparent when we examine the simple diffraction-limited case in relation to prolate spheroidal wavefunctions as described in Eq. (25.21). The finite aperture size of a diffraction-limited system translates to eigenvalues λ_n which exhibit a unit-step response; that is, the several largest eigenvalues are approximately one, followed by a succession of eigenvalues that rapidly fall off to zero. The solution given by Eq. (25.21) will be extremely sensitive to noise for small eigenvalues λ_n. Therefore, for the general
problem represented in vector–space by Eq. (25.5), the degradation operator H is ill-conditioned or
rank-deficient due to the small or zero-valued eigenvalues, and a simple inverse operation will not
yield satisfactory results. Many algorithms have been proposed to find a compromise between exact
deblurring and noise amplification. These techniques include Wiener filtering and pseudo-inverse
filtering. We begin our overview of signal recovery techniques by examining some of the methods
that fall under the category of optimization-based approaches.
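The eigenvalue behavior described above has a well-known discrete counterpart: the concentration ratios of discrete prolate spheroidal (Slepian) sequences, available in SciPy. The sketch below, with an arbitrarily chosen length and time-bandwidth product, shows roughly 2NW ratios near one followed by a rapid fall-off toward zero, mirroring the unit-step behavior of λ_n.

```python
import numpy as np
from scipy.signal.windows import dpss

# Sketch: discrete analogue of the prolate spheroidal eigenvalue step.
# The concentration ratios of the Slepian sequences play the role of the
# eigenvalues lambda_n in Eq. (25.9): roughly 2*NW of them are close to one,
# after which they plunge toward zero.
M, NW = 128, 4                                  # assumed length and time-bandwidth product
_, ratios = dpss(M, NW, Kmax=16, return_ratios=True)

print(np.round(ratios[:4], 6))                  # leading ratios: approximately 1
print(ratios[-1])                               # tail ratios: essentially 0
```

The near-zero tail ratios are the discrete image of the small λ_n in Eq. (25.21): any recovery scheme dividing by them amplifies noise catastrophically, which motivates the regularized approaches of Section 25.3.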
25.3 Least Squares Solutions
The earliest attempts toward signal recovery are based on the concept of inverting the degradation
operator to restore the desired signal. Because in practical applications the system will often be ill-
conditioned, several problems can arise. Specifically, high detail signal information may be masked
by observation noise, or a small amount of observation noise may lead to an estimate that contains
very large false high frequency components. Another potential problem with such an approach
is that for a rank-deficient degradation operator, the zero-valued eigenvalues cannot be inverted.
Therefore, the general inverse filtering approach will not be able to resolve the desired signal beyond
the diffraction limit imposed by the measuring device. In other words, referring to the vector–space
description, the data that has been nulled out by the zero-valued eigenvalues cannot be recovered.
25.3.1 Wiener Filtering
Wiener filtering combines inverse filtering with a priori statistical knowledge about the noise and
unknown signal [24] in order to deal with the problems associated with an ill-conditioned system.
The impulse response of the restoration filter is chosen to minimize the mean square error as defined by

E_f = E\left\{ \left\| f - \hat{f} \right\|^2 \right\}    (25.22)
where \hat{f} denotes the estimate of the ideal signal f and E\{\cdot\} denotes the expected value. The Wiener filter estimate is expressed as
H_W^{-1} = R_{ff} H^T \left( H R_{ff} H^T + R_{nn} \right)^{-1}    (25.23)

where R_{ff} and R_{nn} are the covariance matrices of f and n, respectively, and f and n are assumed to be uncorrelated; that is,

R_{ff} = E\left\{ f f^T \right\},    (25.24)

R_{nn} = E\left\{ n n^T \right\},    (25.25)

and

R_{fn} = 0.    (25.26)
The superscript T in the above equations denotes transpose. The Wiener filter can also be expressed in the Fourier domain as
H_W^{-1} = \frac{H^{*} S_{ff}}{|H|^2 S_{ff} + S_{nn}}    (25.27)
where S denotes the power spectral density, the superscript * denotes the complex conjugate, and H denotes the Fourier transform of the point spread function h. Note that when the noise power is zero, the Wiener filter reduces to the inverse filter; that is,
H_W^{-1} = H^{-1}.    (25.28)
The Wiener filter approach for signal recovery assumes that the power spectra are known for the
input signal and the noise. Also, this approach assumes that finding a least squares solution that
optimizes Eq. (25.22) is meaningful. For the case of image processing, it has been shown, specifically
in the context of image compression, that the mean square error (mse) does not predict subjective
image quality [25]. Many signal processing algorithms are based on the least squares paradigm
because the solutions are tractable and, in practice, such approaches have produced some useful
results. However, in order to define a more meaningful optimization metric in the design of image
processing algorithms, we need to incorporate a human visual model into the algorithm design. In
the area of image coding, several coding schemes based on perceptual criteria have been shown to
produce improved results over schemes based on maximizing signal-to-noise ratio or minimizing
mse [25]. Likewise, the Wiener filtering approach will not necessarily produce an estimate that
maximizes perceived image or signal quality. Another limitation of the Wiener filter approach is that
the solution will not necessarily be consistent with any a priori knowledge about the desired signal
characteristics. In addition, the Wiener filter approach does not resolve the desired signal beyond
the diffraction limit imposed by the measuring system. For more details on Wiener filtering and the
various applications, see other chapters in this book.
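A minimal sketch of the Fourier-domain Wiener filter of Eq. (25.27) follows, under the assumption that the power spectra are known; taking the signal and noise to be white makes S_ff and S_nn constants. A naive inverse filter is applied for comparison to show the noise amplification the Wiener taper avoids. The sizes and the blur kernel are assumptions for illustration.

```python
import numpy as np

# Sketch: Wiener deconvolution per Eq. (25.27) with assumed known, flat power
# spectra, compared against a direct inverse filter on the same noisy data.
rng = np.random.default_rng(1)
N = 256
f = rng.standard_normal(N)                       # desired signal (white, S_ff = 1)
psf = np.zeros(N)
psf[:5] = 1.0 / 5.0                              # assumed moving-average blur
H = np.fft.fft(psf)                              # frequency response of the blur

g = np.fft.ifft(H * np.fft.fft(f)).real + 0.05 * rng.standard_normal(N)

S_ff, S_nn = 1.0, 0.05**2
W = np.conj(H) * S_ff / (np.abs(H)**2 * S_ff + S_nn)   # Eq. (25.27)
f_hat = np.fft.ifft(W * np.fft.fft(g)).real      # Wiener estimate
f_inv = np.fft.ifft(np.fft.fft(g) / H).real      # naive inverse filter

err_wiener = np.mean((f - f_hat)**2)
err_blurred = np.mean((f - g)**2)
err_inverse = np.mean((f - f_inv)**2)
print(err_wiener < err_blurred and err_wiener < err_inverse)
```

Near the spectral minima of H the Wiener filter rolls off instead of dividing by a tiny number, which is why its error stays below both the unprocessed observation and the inverse-filtered estimate.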
25.3.2 The Pseudoinverse Solution
The Wiener filter attempts to minimize the noise amplification obtained in a direct inverse by providing a taper determined by the statistics of the signal and noise processes under consideration. In practice, the power spectra of the noise and desired signal might not be known. Here we present what is commonly referred to as the generalized inverse solution. This will be the framework for some of the signal recovery algorithms described later.

The pseudoinverse solution is an optimization approach that seeks to minimize the least squares error given by
E_n = n^T n = (g - Hf)^T (g - Hf).    (25.29)
The least squares solution is not unique when the rank of the M × N matrix H is r < N ≤ M. In other words, there are many solutions that satisfy Eq. (25.29). However, the Moore-Penrose generalized
inverse or pseudoinverse [26] does provide a unique least squares solution based on determining
the least squares solution with minimum norm. For a consistent set of equations as described in
Eq. (25.6), a solution is sought that minimizes the least squares estimation error; that is,
E_f = \left( f - \hat{f} \right)^T \left( f - \hat{f} \right) = \mathrm{tr}\left\{ \left( f - \hat{f} \right) \left( f - \hat{f} \right)^T \right\}    (25.30)
where f is the desired signal vector, \hat{f} is the estimate, and tr denotes the trace [22]. The generalized inverse provides an optimum solution that minimizes the estimation error for a consistent set of equations. Thus, the generalized inverse provides an optimum solution for both the consistent and inconsistent sets of equations, as defined by the performance functions E_f and E_n, respectively. The generalized inverse solution satisfies the normal equations
generalized inverse solution satisfies the normal equations
H^T g = H^T H f.    (25.31)
The generalized inverse solution, also known as the Moore-Penrose generalized inverse, pseudoinverse, or least squares solution with minimum norm, is defined as

f^{+} = \left( H^T H \right)^{-1} H^T g = H^{+} g,    (25.32)
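A sketch of Eq. (25.32) in code, with assumed matrix sizes: for a deliberately rank-deficient H, the product H^T H is singular, but np.linalg.pinv inverts only the nonzero singular values, and the resulting estimate satisfies the normal equations of Eq. (25.31) while having minimum norm among all least squares solutions.

```python
import numpy as np

# Sketch: minimum-norm least squares solution via the pseudoinverse, Eq. (25.32).
# H is made rank-deficient on purpose, so (H^T H) is not invertible and the
# SVD-based pseudoinverse is required.
rng = np.random.default_rng(2)
H = rng.standard_normal((6, 4))
H[:, 3] = H[:, 0] + H[:, 1]                      # force rank deficiency (rank 3)
g = rng.standard_normal(6)

f_pinv = np.linalg.pinv(H) @ g                   # f+ = H+ g

# f_pinv satisfies the normal equations H^T g = H^T H f of Eq. (25.31) ...
print(np.allclose(H.T @ g, H.T @ H @ f_pinv))
# ... and has minimum norm: adding any null-space component of H leaves the
# residual unchanged but can only increase ||f||.
null_vec = np.array([1.0, 1.0, 0.0, -1.0])       # H @ null_vec = 0 by construction
print(np.linalg.norm(f_pinv) < np.linalg.norm(f_pinv + null_vec))
```

The minimum-norm property is what makes the pseudoinverse solution unique among the infinitely many vectors that achieve the same least squares error for a rank-deficient H.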