


Numerical Methods for Inverse Problems


To my wife Elisabeth,
to my children David and Jonathan


Series Editor
Nikolaos Limnios

Numerical Methods for
Inverse Problems

Michel Kern


First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK


John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2016
The rights of Michel Kern to be identified as the author of this work have been asserted by him in
accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2016933850
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-818-5


Contents

Preface ix

Part 1. Introduction and Examples 1

Chapter 1. Overview of Inverse Problems 3
1.1. Direct and inverse problems 3
1.2. Well-posed and ill-posed problems 4

Chapter 2. Examples of Inverse Problems 9
2.1. Inverse problems in heat transfer 10
2.2. Inverse problems in hydrogeology 13
2.3. Inverse problems in seismic exploration 16
2.4. Medical imaging 21
2.5. Other examples 25

Part 2. Linear Inverse Problems 29

Chapter 3. Integral Operators and Integral Equations 31
3.1. Definition and first properties 31
3.2. Discretization of integral equations 36
3.2.1. Discretization by quadrature–collocation 36
3.2.2. Discretization by the Galerkin method 39
3.3. Exercises 42

Chapter 4. Linear Least Squares Problems – Singular Value Decomposition 45
4.1. Mathematical properties of least squares problems 45
4.1.1. Finite dimensional case 50
4.2. Singular value decomposition for matrices 52
4.3. Singular value expansion for compact operators 57
4.4. Applications of the SVD to least squares problems 60
4.4.1. The matrix case 60
4.4.2. The operator case 63
4.5. Exercises 65

Chapter 5. Regularization of Linear Inverse Problems 71
5.1. Tikhonov’s method 72
5.1.1. Presentation 72
5.1.2. Convergence 73
5.1.3. The L-curve 81
5.2. Applications of the SVE 83
5.2.1. SVE and Tikhonov’s method 84
5.2.2. Regularization by truncated SVE 85
5.3. Choice of the regularization parameter 88
5.3.1. Morozov’s discrepancy principle 88
5.3.2. The L-curve 91
5.3.3. Numerical methods 92
5.4. Iterative methods 94
5.5. Exercises 98

Part 3. Nonlinear Inverse Problems 103

Chapter 6. Nonlinear Inverse Problems – Generalities 105
6.1. The three fundamental spaces 106
6.2. Least squares formulation 111
6.2.1. Difficulties of inverse problems 114
6.2.2. Optimization, parametrization, discretization 114
6.3. Methods for computing the gradient – the adjoint state method 116
6.3.1. The finite difference method 116
6.3.2. Sensitivity functions 118
6.3.3. The adjoint state method 119
6.3.4. Computation of the adjoint state by the Lagrangian 120
6.3.5. The inner product test 123
6.4. Parametrization and general organization 123
6.5. Exercises 125

Chapter 7. Some Parameter Estimation Examples 127
7.1. Elliptic equation in one dimension 127
7.1.1. Computation of the gradient 128
7.2. Stationary diffusion: elliptic equation in two dimensions 129
7.2.1. Computation of the gradient: application of the general method 132
7.2.2. Computation of the gradient by the Lagrangian 134
7.2.3. The inner product test 135
7.2.4. Multiscale parametrization 135
7.2.5. Example 136
7.3. Ordinary differential equations 137
7.3.1. An application example 144
7.4. Transient diffusion: heat equation 147
7.5. Exercises 152

Chapter 8. Further Information 155
8.1. Regularization in other norms 155
8.1.1. Sobolev semi-norms 155
8.1.2. Bounded variation regularization norm 157
8.2. Statistical approach: Bayesian inversion 157
8.2.1. Least squares and statistics 158
8.2.2. Bayesian inversion 160
8.3. Other topics 163
8.3.1. Theoretical aspects: identifiability 163
8.3.2. Algorithmic differentiation 163
8.3.3. Iterative methods and large-scale problems 164
8.3.4. Software 164

Appendices 167
Appendix 1 169
Appendix 2 183
Appendix 3 193
Bibliography 205
Index 213



Preface

This book studies methods to concretely address (on a computer) inverse problems.
But what is an inverse problem? An inverse problem appears whenever the causes that
produced a given effect must be determined, or when we seek to indirectly estimate
the parameters of a physical system.
The most common example in the everyday life of many of us comes from the
medical field: the ultrasound scan that tells us whether an unborn baby is in good health
involves the solution of an inverse problem. A probe, placed on the belly of the patient,
emits and receives ultrasound waves. These are deflected and reflected by the tissues of
the fetus. The sensor receives and interprets these echoes to return an image of the
contours of these tissues. The image is indeed obtained in an indirect manner. We
will see further examples throughout this book.
Intuitively, the observation of an effect may not be sufficient to determine its
cause. If I walk into a room and notice that the temperature is (nearly) uniform, it is
difficult for me to know what the distribution of temperature was two hours earlier. It is said
that the inverse problem of determining the temperature in the past is “ill-posed”. This
contrasts with the question of determining the future evolution of the
temperature, which is, in a sense that we will specify, “well-posed”. As Molière’s
character Monsieur Jourdain does when he speaks prose, it is so common to solve
well-posed problems that we (almost) do it without thinking.
Solving inverse problems thus requires the mastery of specific techniques and
methods. This book presents some of them, chosen for their very general domain
of application. It focuses on a small number of methods that will be used in most
applications:
– the reformulation of an inverse problem in the form of minimization of a square
error functional. The reason for this choice is mainly practical: it makes it possible to
carry out calculations at a reasonable cost;



– the regularization of ill-posed problems and in particular Tikhonov’s method;
– the use of the singular value decomposition to analyze an ill-posed problem;
– the adjoint state method to calculate the gradient of the functionals to minimize
when these are not quadratic.
These tools will help to address many (but not all!) inverse problems that arise
in practice. Two limitations should, however, be kept in mind. On the one hand, many
inverse problems will make use of different techniques (we will mention a few of
them). On the other hand, even when the presented tools can be employed, they are
rarely sufficient on their own to completely analyze a complex physical application.
Most often, it will be necessary to supplement these tools with a fine analysis of the
particular situation to make the most of it (redundancy or not of the data, fast or slow
variation of the parameters sought, etc.).
It is common, in this type of preface, to justify the existence of the book being presented!
It is true that the question is legitimate (many books already exist on the subject, as can
be seen in the bibliography), and I do not claim any originality for the content.

Nonetheless, readers might still be interested in finding a book that discusses both linear
and nonlinear problems. In addition, this book can be used as an introduction to the
more advanced literature.
This book is aimed at readers with a rather substantial background in mathematics and
scientific computing, equivalent to a master’s degree in applied mathematics. Nevertheless,
it is a book with a practical perspective. The methods described herein are applicable,
have been applied, and are often illustrated by numerical examples.
The prerequisites to approach this book are unfortunately more numerous than I
would have wished. This is a consequence of the fact that the study of inverse
problems calls upon many other areas of mathematics. A working knowledge of
(both theoretical and numerical) linear algebra is assumed, as is a familiarity with the
language of integration theory. Functional analysis, which is what linear algebra
becomes when it abandons the finite dimensional setting, is ubiquitous, and the
Appendices herein serve as reminders of concepts directly useful in this book. An
important part of the examples comes from models of partial differential equations.
Here again, the reader will benefit from a prior knowledge of analysis methods (weak
formulations, Sobolev spaces) and of numerical analysis (finite element method,
discretization schemes for differential equations).



Book layout
We start the book with some general remarks on inverse problems. We will
introduce the fundamental concept of an ill-posed problem, which is characteristic of
inverse problems.
In Chapter 2, we will give several examples of inverse problems, originating from
several areas of physics.

An important source of linear inverse problems will be introduced in Chapter 3:
the integral equations of the first kind. After outlining the main properties of integral
operators, we will show that they lead to ill-posed problems. Finally, we will introduce
discretization methods, leading to least squares problems.
The study of these problems is the subject of the subsequent two chapters. In
Chapter 4, we will study their mathematical properties in a Hilbertian context: the
geometric aspect, and the relationship with normal equations, as well as the questions
of existence and uniqueness of the solutions. We will also introduce the fundamental
tool, both for theoretical analysis and for numerical approximation, namely the
singular value decomposition, first for matrices, then for operators between Hilbert
spaces. Reminders regarding the numerical aspects of inverse problems can be found
in Appendix 1. Techniques for solving ill-posed problems are the subject of
Chapter 5, especially Tikhonov’s regularization method and spectral truncation.
Tikhonov’s method will first be addressed from a variational perspective, before
being clarified by means of the singular value decomposition. We will discuss the
question of the choice of the regularization parameter and will finish with a short
introduction to iterative methods.
In the second part, we will discuss nonlinear problems, which are essentially
problems of parameter estimation in differential or partial differential equations. In
Chapter 6, we will see how to formulate identification problems in terms of
minimization and explore the main difficulties that we can expect therefrom.
Appendix 2 contains reminders about the basic numerical methods in optimization.
Chapter 7 will address the important technique of the adjoint state to compute the
functional gradient involved in least squares problems. We will see in several
examples how to conduct this computation in an efficient way.
We conclude this second part by briefly introducing issues that could not be
discussed in this book, giving some bibliographic hints.
We have compiled reminders regarding the numerical methods of linear algebra for
least squares problems, reminders on optimization, as well as some functional analysis
results and supplements on linear operators in the appendices.




Acknowledgments
My thanks go first to Professor Limnios, who suggested I write this book, from
a first version of course notes that I had published on the Internet. I am grateful to
him for giving me the opportunity to publish this course by providing more visibility
thereto.
The contents of this book owe a lot, and this is an understatement, to Guy Chavent. This
book grew out of lecture notes that I had written for a course that had originally been
taught by G. Chavent, and for which he trusted me enough to let me replace him. Guy
was also my thesis supervisor and was the leader of the Inria team where I have spent my
whole career. He has been, and remains, a source of inspiration with regard to how to
address a scientific problem.
I had the chance to work in the particularly stimulating environment of Inria and
to meet colleagues who combine great scientific qualities with endearing personalities. I
am thinking especially of Jérôme Jaffré and Jean Roberts. A special mention goes to my
colleagues in the Serena team: Hend Benameur, Nathalie Bonte, François Clément,
Caroline Japhet, Vincent Martin, Martin Vohralík and Pierre Weis. Thank you for
your friendship, and thank you for making our work environment a pleasant and
intellectually stimulating one.
I would like to thank all the colleagues who have told me of errors they found in
previous versions of the book, the students of the Pôle Universitaire Léonard de Vinci,
of Mines–ParisTech and of the École Nationale d’Ingénieurs of Tunis for listening to
me and for their questions, as well as the staff of ISTE publishing for their help in
seeing the book through to completion.


Michel K ERN
February 2016


PART 1

Introduction and Examples



1
Overview of Inverse Problems

1.1. Direct and inverse problems
According to Keller [KEL 76], two problems are said to be the inverse of one
another if the formulation of one of them involves the other. This definition includes
a degree of arbitrariness and confers a symmetric role to both problems under
consideration. A more operational definition is that an inverse problem consists of
determining the causes knowing the effects. Thus, this problem is the inverse of what
is called a “direct problem”, consisting of the deduction of the effects, the causes
being known.
This second definition shows that it is more usual to study direct problems. As a
matter of fact, since Newton, the notion of causality has been rooted in our scientific
subconscious, and at a more prosaic level, we have learned to pose, and then solve,
problems for which the causes are given, where the objective is to find the effects.
This definition also shows that inverse problems may give rise to particular
difficulties. We will see later that it is possible to give a mathematical content
to the sentence “the same causes produce the same effects”; in other words, it is
reasonable to require that the direct problem be well-posed. On the other hand, it is
easy to imagine, and we will see numerous examples, that the same effects may
originate from different causes. This idea contains, in essence, the main difficulty of
the study of inverse problems: they can have several solutions, and it is important to
have additional information in order to discriminate between them.
The prediction of the future state of a physical system, knowing its current state,
is the typical example of a direct problem. We can consider various inverse problems:
for example, reconstructing the past state of the system knowing its current state (if
this system is irreversible), or determining the parameters of the system, knowing
(part of) its evolution. This latter problem is that of the identification of parameters,
which will be our main concern in the following.




A practical challenge of the study of inverse problems is that it often requires a
good knowledge of the direct problem, which is reflected in the use of a large variety
of both physical and mathematical concepts. The success in solving an inverse
problem is based, in general, on elements specific to this problem. However, some
techniques have a wide domain of application, and this book is an introduction
to the principal techniques: the regularization of ill-posed problems and the least
squares method.
The most important technique is the reformulation of an inverse problem in the
form of the minimization of an error functional between the actual measurements
and the synthetic measurements (that is, those computed by solving the direct
problem). It will be convenient to distinguish between linear and nonlinear problems.
It should be noted here that the nonlinearity in question refers to the inverse problem,
and that the direct problem itself may or may not be linear.
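To fix ideas, the reformulation can be sketched as follows (the notation here is only illustrative; the book sets up its own framework in later chapters): if F denotes the forward map sending a set of parameters p to the corresponding synthetic measurements, and d denotes the actual measurements, the inverse problem is recast as the minimization of the misfit functional

$$ J(p) = \frac{1}{2}\, \| F(p) - d \|^2, $$

the problem being called linear or nonlinear according to whether the map F is linear or not.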
In the case of linear problems, resorting to linear algebra and to functional
analysis allows accurate results as well as efficient algorithms to be obtained. The
fundamental tool here is the singular value decomposition of the operator, or of the
matrix, being considered. We will study in detail the regularization method, which
consists of slightly “modifying” the problem under study, replacing it by another that has “better”
properties. This will be specified in Chapters 4 and 5.
Nonlinear problems are more difficult, and fewer general results exist. We will
study the application of optimization algorithms to problems obtained by the
reformulation referred to above. A crucial technical ingredient (from the numerical
point of view) is the calculation of the gradient of the functional to be minimized. We
will study the adjoint state method in Chapter 7. It allows this calculation to be carried
out at a cost that is a (small) multiple of that of solving the direct problem.
As can be seen, the content of this book primarily aims to present numerical
methods to address inverse problems. This does not mean that theoretical questions
do not exist, or are devoid of interest. The deliberate choice of not addressing them is
dictated by the practical orientation of the course, by the author’s taste and knowledge,
but also by the high mathematical level that these issues require.
1.2. Well-posed and ill-posed problems
In his famous book, Hadamard [HAD 23] introduced, as early as 1923, the notion
of a well-posed problem. This is a problem for which:
– a solution exists;
– the solution is unique;
– the solution depends continuously on the data.




Of course, these concepts must be made precise by the choice of the spaces (and of
the topologies) to which the data and the solution belong.
In the same book, Hadamard suggested (and it was a widespread opinion
until recently) that only a well-posed problem could properly model a physical
phenomenon. After all, these three conditions seem very natural. In fact, we shall
see that inverse problems often fail to satisfy one or other of these conditions, or even
all three at once. Upon reflection, this is not so surprising:
– once a physical model has been established, the available experimental data are generally
noisy and there is no guarantee that such data originate from this model, even for
another set of parameters;
– if a solution exists, it is perfectly conceivable (and we will see examples of this)
that different parameters may result in the same observations.
The absence of one or other of Hadamard’s three conditions does not have
the same importance with respect to being able to solve (in a sense that remains to be
defined) the associated problem:
– the fact that the solution of an inverse problem may not exist is not a serious
difficulty. It is usually possible to restore the existence by relaxing the concept of
solution (a classic procedure in mathematics);
– the non-uniqueness is a more serious problem. If a problem has several solutions,
there should be a means of distinguishing between them. This requires additional
information (we speak of a priori information);
– the lack of continuity is probably the most problematic, in particular in view
of an approximate or a numerical solution. The lack of continuity means that it is not
possible (regardless of the numerical method) to obtain a satisfactory approximate solution
of the inverse problem, since the available data will be noisy, thus close to the actual
data, but different from the actual data.
A problem that is not well-posed in the sense of the definition above is said
to be ill-posed. We now give an example that, although very simple, illustrates the
difficulties that may be found in more general situations.
EXAMPLE 1.1.– Differentiation and integration are two problems that are the inverse
of each other. It would seem more natural to consider differentiation as the direct
problem and integration as the inverse problem. In fact, integration has good
mathematical properties that lead us to consider it as the direct problem. In addition,
differentiation is the prototypical ill-posed problem, as we shall see in the following.



Consider the Hilbert space L²(0, 1), and the integral operator A defined by

$$ Af(x) = \int_0^x f(t)\, dt. \qquad [1.1] $$

It is easy to see directly that A ∈ L(L²(0, 1)), or theorem 3.1 can be applied (see
example 3.1). This operator is injective; however, its image is the vector subspace

$$ \operatorname{Im} A = \{ u \in H^1(0, 1),\ u(0) = 0 \}, $$

where H¹(0, 1) is the Sobolev space. In effect, the equation Af = g is equivalent to

$$ f(x) = g'(x) \quad \text{and} \quad g(0) = 0. $$

The image of A is not closed in L²(0, 1) (of course, it is closed in H¹(0, 1)). As
a result, the inverse of A is not continuous on L²(0, 1), as shown in the following
example.
Consider a function f ∈ C¹([0, 1]), and let n ∈ N. Let

$$ f_n(x) = f(x) + \frac{1}{n} \sin(n^2 x), $$

then

$$ f_n'(x) = f'(x) + n \cos(n^2 x). $$

Simple calculations show that

$$ \| f - f_n \|_{L^2} = \frac{1}{n} \left( \frac{1}{2} - \frac{\sin 2n^2}{4n^2} \right)^{1/2} \le \frac{1}{n} \xrightarrow[n \to \infty]{} 0, $$

whereas

$$ \| f' - f_n' \|_{L^2} = n \left( \frac{1}{2} + \frac{\sin 2n^2}{4n^2} \right)^{1/2} = O(n). $$



Thus, the difference between f' and fn' may be arbitrarily large, even though the
difference between f and fn is arbitrarily small. The differentiation operator (the inverse
of A) is thus not continuous, at least with this choice of norms.

The instability of the inverse is typical of ill-posed problems. A small perturbation
of the data (here f) can have an arbitrarily large influence on the result (here f').
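As a quick numerical check of this statement, here is a sketch of our own (not taken from the book); it assumes NumPy is available and uses centered finite differences in place of exact differentiation:

    import numpy as np

    # Grid on [0, 1] and a smooth reference function f
    x = np.linspace(0.0, 1.0, 10001)
    h = x[1] - x[0]
    f = np.sin(2 * np.pi * x)

    # Perturbation as in the example: fn = f + (1/n) sin(n^2 x)
    n = 30
    fn = f + np.sin(n**2 * x) / n

    def l2_norm(v):
        # Discrete L2 norm on [0, 1] (rectangle rule)
        return np.sqrt(np.sum(v**2) * h)

    # Derivatives approximated by centered finite differences
    df, dfn = np.gradient(f, h), np.gradient(fn, h)

    print("||fn - f||   =", l2_norm(fn - f))    # small, of order 1/n
    print("||fn' - f'|| =", l2_norm(dfn - df))  # large, of order n

Increasing n makes the perturbation of the data smaller and the perturbation of the derivative larger, which is exactly the lack of continuity discussed above.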
A second class of inverse problems is the estimation of parameters in differential
equations. We are going to discuss a very simple example of this situation.
EXAMPLE 1.2.– Consider the elliptic problem in one dimension:

$$ -\bigl(a(x)\,u'(x)\bigr)' = f(x) \quad \text{for } x \in \,]-1, 1[, \qquad u(-1) = u(1) = 0. \qquad [1.2] $$

This equation, or similar though more complex ones, arises in several examples
in the following chapter. In this example, we choose a(x) = x² + 1 and the solution
u(x) = (1 − x²)/2, which gives f(x) = 3x² + 1.
The direct problem consists of calculating u, given a and f. For the inverse
problem, we shall consider that f is known, and we will try to recover the coefficient
a from a measurement of u. For this deliberately simplified example, we shall assume
that u is measured over the whole interval ]−1, 1[, which is obviously unrealistic.
We shall see that even in this optimistic situation, we are likely to face difficulties.
By integrating equation [1.2] and dividing by u', we obtain the following
expression for a (assuming that u' does not vanish, which is not true in our example):
$$ a(x) = \frac{C}{u'(x)} - \frac{1}{u'(x)} \int_0^x f(\xi)\, d\xi, \qquad [1.3] $$

which yields in our particular case:
$$ a(x) = \frac{C}{x} + x^2 + 1 \quad \text{for } x \neq 0, \qquad [1.4] $$

where C is an integration constant.
We can see that even in this particular case, a is not determined by the data, that is
u. Of course in this case, it is clear that the “correct” solution corresponds to C = 0,
since this is the only value for which a is bounded. In order to be able to discriminate



among the various possible solutions, we resort to additional information (usually
referred to as a priori information).
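A small numerical experiment makes the role of the constant C, and of the a priori information, visible. The sketch below is our own (not from the book); it assumes NumPy and simply evaluates formula [1.4] for several values of C, away from x = 0:

    import numpy as np

    # Grid on ]-1, 1[, avoiding x = 0 where u'(x) = -x vanishes
    x = np.linspace(-1.0, 1.0, 401)
    x = x[np.abs(x) > 1e-3]

    def a_candidate(C):
        # Formula [1.4]: a(x) = C/x + x^2 + 1; every value of C reproduces the same data u
        return C / x + x**2 + 1.0

    for C in (0.0, 0.1, 1.0):
        a = a_candidate(C)
        print(f"C = {C:3.1f}: max |a| on the grid = {np.max(np.abs(a)):.2e}")

Only C = 0 yields a bounded coefficient; all other candidates blow up near x = 0, which is the a priori information used here to single out the physical solution.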

In this problem, there are two sources of instability: first, equation [1.3] involves
u', and we have just seen that the transition from u to u' causes instability. This is a
phenomenon common to linear and nonlinear problems. On the other hand, the
division by u' reveals an instability specific to nonlinear problems. If u' vanishes at
some point, the division is impossible. If u' is simply small, the division will be a
cause of instability.
This book is dedicated to the study of methods allowing for the recovery of a
certain degree of stability in ill-posed problems. It is, however, necessary to keep in
mind this observation from [ENG 96]: “no mathematical trick can make an inherently
unstable problem stable”. The methods we are going to introduce in the following will
make the problem under consideration stable, but at the price of a modification of the
problem being solved (and therefore of its solution).


2
Examples of Inverse Problems

In this chapter, we present a few “concrete” examples of inverse problems, as
they occur in the sciences or in engineering. This list is far from exhaustive (see the
references at the end of this chapter for other applications).
Among the areas in which inverse problems play an important role, we can
mention the following:
– medical imaging (ultrasound, scanners, X-rays, etc.);
– petroleum engineering (seismic prospection, magnetic methods, identification of
the permeabilities in a reservoir, etc.);
– hydrogeology (identification of the hydraulic permeabilities);
– chemistry (determination of reaction constants);
– radars (determination of the shape of an obstacle);
– underwater acoustics (same objective);
– quantum mechanics (determination of the potential);

– image processing (restoration of blurred images).
From a mathematical point of view, these problems are divided into two major
groups:
– linear problems (echography, image processing, etc.), which amount to solving
an integral equation of the first kind;
– nonlinear problems, which are mostly questions of parameter estimation in
differential or partial differential equations.




2.1. Inverse problems in heat transfer
In order to determine the temperature distribution in an inhomogeneous material
occupying a domain (open connected subset) Ω of R3 , the conservation of energy is
first written as
$$ \rho c \,\frac{\partial T}{\partial t} + \operatorname{div}(q) = f(x, y, z) \quad \text{in } \Omega \qquad [2.1] $$


where T is the temperature, ρ is the density of the fluid, c is the specific heat, q
represents a heat flux and f is a volume source.
Fourier’s law then connects the heat flux density to the temperature gradient:
$$ q = -K \operatorname{grad} T, \qquad [2.2] $$

where K is the thermal conductivity (which may be a tensor, and depends on the
position).
By eliminating q, we obtain the equation for the temperature, known as the heat
equation, in a heterogeneous medium:
$$ \rho c \,\frac{\partial T}{\partial t} - \operatorname{div}(K \operatorname{grad} T) = f \quad \text{in } \Omega. \qquad [2.3] $$

This equation must be complemented by boundary conditions on the boundary of
the domain Ω and an initial condition.
The direct problem is to determine T knowing the physical coefficients ρ, c and K
as well as the heat source f. This problem is well understood, both from the theoretical
point of view (existence and uniqueness of the solution) and the numerical point of
view. Several inverse problems can be formulated:
– given a measurement of the temperature at an instant tf > 0, determine the
initial temperature. We will discuss it in example 2.1;
– given a (partial) temperature measurement, determine some of the coefficients

of the equation.
Note that the first of these problems is linear, while the second is nonlinear: indeed,
the mapping (ρ, c, K) → T is nonlinear.
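As an illustration of the direct problem, the following minimal sketch is ours rather than the book's; it assumes NumPy, constant coefficients K, ρ and c, one space dimension, zero boundary conditions and the simplest explicit finite-difference scheme:

    import numpy as np

    # Explicit scheme for rho*c*dT/dt = K*d2T/dx2 on (0, 1) with T = 0 at both ends
    nx, K, rho_c = 101, 1.0, 1.0
    x = np.linspace(0.0, 1.0, nx)
    h = x[1] - x[0]
    dt = 0.4 * rho_c * h**2 / K          # respects the stability bound dt <= rho*c*h^2/(2K)
    T = np.exp(-100 * (x - 0.5) ** 2)    # initial temperature, (almost) zero at the boundaries
    T[0] = T[-1] = 0.0

    for _ in range(200):
        # Interior update: T_i += dt*K/(rho*c) * (T_{i+1} - 2*T_i + T_{i-1}) / h^2
        T[1:-1] += dt * K / rho_c * (T[2:] - 2 * T[1:-1] + T[:-2]) / h**2

    print("max T after 200 steps:", T.max())   # the initial peak has spread out and decreased

Solving an inverse problem built on such a model (for instance, recovering K from observations of T) requires running a direct solver of this kind many times, which is why the cost of the direct problem matters so much in what follows.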



EXAMPLE 2.1 (Backward heat equation).– We consider the ideal case of a
homogeneous and infinite material (in one spatial dimension to simplify). The
temperature is a solution of the heat equation:
$$ \frac{\partial T}{\partial t} - \frac{\partial^2 T}{\partial x^2} = 0 \qquad [2.4] $$

(there is no source). It is assumed that the temperature is known at some time tf, say
Tf(x) = T(x, tf), and that the objective is to find the initial temperature T0(x) = T(x, 0).
The problem of determining Tf knowing T0 is the Cauchy problem for the heat
equation. It has a unique solution, which depends continuously on the initial data. As
we shall see, this is not true for the inverse problem that we consider here. Physically,
this is due to the irreversible character of thermal diffusion. It is well known that
the temperature tends to become homogenized over time, and this implies that it is not
possible to go back, that is, to recover a previous state that may be more heterogeneous
than the current state.
Because of the very simplified situation that we have chosen, we can calculate
the solution of the heat equation [2.4] by hand. Taking the spatial Fourier transform of
equation [2.4] (we denote by T̂(k, t) the Fourier transform of T(x, t), keeping t fixed),
we obtain an ordinary differential equation (where this time it is k that plays the role
of a parameter) whose solution is

$$ \hat{T}_f(k) = e^{-|k|^2 t_f}\, \hat{T}_0(k). \qquad [2.5] $$

Using the inverse Fourier transform, we can see that the solution at the instant tf
is related to the initial condition by a convolution with the elementary solution of the
heat equation:
$$ T_f(x) = \frac{1}{2\sqrt{\pi t_f}} \int_{-\infty}^{+\infty} e^{-(x-y)^2 / 4 t_f}\, T_0(y)\, dy. \qquad [2.6] $$


It is well known [CAN 84] that, for any “reasonable” function T0 (continuous,
bounded), the function Tf is infinitely differentiable, which mathematically expresses
the irreversibility mentioned earlier.
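The following experiment is a sketch of our own, not taken from the book; it assumes NumPy and uses the discrete Fourier transform on a periodic grid as a crude substitute for the transform on the real line. It shows how relation [2.5], inverted naively, amplifies even a tiny amount of noise in Tf:

    import numpy as np

    # Periodic grid as a stand-in for the real line
    n = 512
    L = 20.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # discrete wavenumbers
    t_f = 0.01

    # Smooth initial temperature, and the corresponding Tf via relation [2.5]
    T0 = np.exp(-x**2)
    Tf = np.fft.ifft(np.exp(-k**2 * t_f) * np.fft.fft(T0)).real

    # Add a tiny measurement noise to Tf, then invert [2.5] naively
    noise = 1e-6 * np.random.default_rng(0).standard_normal(n)
    T0_naive = np.fft.ifft(np.exp(k**2 * t_f) * np.fft.fft(Tf + noise)).real

    print("max |T0|           =", np.max(np.abs(T0)))
    print("max |recovered T0| =", np.max(np.abs(T0_naive)))   # huge: exp(|k|^2 t_f) amplifies the noise

The added noise (and even mere floating-point round-off) is multiplied by exp(|k|² t_f) at high wavenumbers, so the naive reconstruction comes out many orders of magnitude too large; this is the instability that the regularization methods of the following chapters are designed to control.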
While remaining in the Fourier domain, we can invert equation [2.5] pointwise,
but the function

$$ k \longmapsto e^{|k|^2 t_f}\, \hat{T}_f(k) $$