Some basic maths for seismic data processing and inverse problems
(Refreshment only!)
Complex Numbers
Vectors
Linear vector spaces
Linear systems
Matrices
Determinants
Eigenvalue problems
Singular values
Matrix inversion
Mathematical foundations
Series
Taylor
Fourier
Delta Function
Fourier integrals
The idea is to illustrate
these mathematical tools
with examples from
seismology
Complex numbers
z = a + ib = r e^{iφ} = r (cos φ + i sin φ)
Complex numbers
conjugate, etc.

z* = a − ib = r (cos φ − i sin φ)
   = r cos(−φ) + i r sin(−φ) = r e^{−iφ}

|z|² = z z* = (a + ib)(a − ib) = a² + b² = r²

cos φ = (e^{iφ} + e^{−iφ}) / 2
sin φ = (e^{iφ} − e^{−iφ}) / 2i
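These identities are easy to verify numerically. A minimal sketch in Python's standard `cmath` module (the value of z is an arbitrary illustration):

```python
import cmath

# A complex number and its polar form: z = a + ib = r e^{i phi}
z = 3 + 4j
r, phi = cmath.polar(z)  # r = |z|, phi = arg(z)

# Euler's formula: r (cos phi + i sin phi) reproduces z
assert abs(r * (cmath.cos(phi) + 1j * cmath.sin(phi)) - z) < 1e-12

# |z|^2 = z z* = r^2
assert abs(z * z.conjugate() - r**2) < 1e-12
```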
Complex numbers
seismological applications
Discretizing signals, description with e^{iωt}
Poles and zeros for filter descriptions
Elastic plane waves
Analysis of numerical approximations
u_i(x_j, t) = A_i exp[i k (a_j x_j − c t)]

u(x, t) = A exp[i (kx − ωt)]
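The plane-wave expression can be evaluated directly on a grid. A small NumPy sketch (amplitude, wavenumber, and frequency are arbitrary illustrative values):

```python
import numpy as np

# Harmonic plane wave u(x, t) = A exp[i (k x - w t)]
A, k, w = 1.0, 2 * np.pi / 10.0, 2 * np.pi * 0.5  # illustrative values
x = np.linspace(0, 100, 1001)
t = 1.5

u = A * np.exp(1j * (k * x - w * t))

# The physical displacement is the real part
assert np.allclose(u.real, A * np.cos(k * x - w * t))
# |u| = A everywhere: the complex exponential only rotates the phase
assert np.allclose(np.abs(u), A)
```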
Vectors and Matrices
For discrete linear inverse problems we will need the concept of
linear vector spaces. The generalization of the concept of the size of a vector
to matrices and functions will be extremely useful for inverse problems.
Definition: Linear Vector Space.
A linear vector space over a field F of scalars is a set of elements V together with a function called addition from V×V into V and a function called scalar multiplication from F×V into V, satisfying the following conditions for all x, y, z ∈ V and all a, b ∈ F:

1. (x+y)+z = x+(y+z)
2. x+y = y+x
3. There is an element 0 in V such that x+0 = x for all x ∈ V
4. For each x ∈ V there is an element -x ∈ V such that x+(-x) = 0
5. a(x+y) = ax + ay
6. (a+b)x = ax + bx
7. a(bx) = (ab)x
8. 1x = x
Matrix Algebra – Linear Systems
Linear system of algebraic equations
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn
... where the x1, x2, ... , xn are the unknowns ...
in matrix form
Ax = b
Matrix Algebra – Linear Systems
Ax = b

where

A = [aij] =
    | a11  a12  ...  a1n |
    | a21  a22  ...  a2n |
    | ...                |
    | an1  an2  ...  ann |

x = {xi} = (x1, x2, ..., xn)^T ,  b = {bi} = (b1, b2, ..., bn)^T

A is an n×n (square) matrix, and x and b are column vectors of dimension n.
Matrix Algebra – Vectors
Row vectors

v = [v1  v2  v3]

Column vectors

    | w1 |
w = | w2 |
    | w3 |
Matrix addition and subtraction
C = A + B  with  cij = aij + bij
D = A − B  with  dij = aij − bij

Matrix multiplication

C = AB  with  cij = Σ_{k=1}^{m} aik bkj
where A (size lxm) and B (size mxn) and i=1,2,...,l and
j=1,2,...,n.
Note that in general AB≠BA but (AB)C=A(BC)
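Both properties are quick to check numerically. A sketch with random matrices (seeded for reproducibility; for random matrices AB = BA essentially never holds):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Matrix multiplication is generally not commutative ...
assert not np.allclose(A @ B, B @ A)
# ... but it is associative
assert np.allclose((A @ B) @ C, A @ (B @ C))
```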
Matrix Algebra – Special
Transpose of a matrix

A = [aij] ,  A^T = [aji] ,  (AB)^T = B^T A^T

Symmetric matrix

A = A^T ,  i.e.  aij = aji

Identity matrix

I = | 1 0 0 |
    | 0 1 0 |
    | 0 0 1 |

with AI = A, Ix = x
Matrix Algebra – Orthogonal
Orthogonal matrices
A matrix Q (n×n) is said to be orthogonal if

Q^T Q = I_n

... and each column is an orthonormal vector:

qi^T qi = 1

Example:

Q = (1/√2) | 1  −1 |
           | 1   1 |

It is easy to show that

Q^T Q = Q Q^T = I_n

If orthogonal matrices operate on vectors, their size (the result of the inner product x·x) does not change → rotation:

(Qx)^T (Qx) = x^T x
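The length-preserving property of a rotation matrix can be verified directly. A sketch with a 2-D rotation (the angle and vector are arbitrary choices):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle, for illustration
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q is orthogonal: Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(2))

# Rotation preserves the length of any vector: (Qx)^T (Qx) = x^T x
x = np.array([3.0, 4.0])
assert np.isclose((Q @ x) @ (Q @ x), x @ x)
```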
Matrix and Vector Norms
How can we compare the size of vectors, matrices (and
functions!)?
For scalars it is easy (absolute value). The generalization of this
concept to vectors, matrices and functions is called a norm.
Formally, a norm is a function from the space of vectors into the space of scalars, denoted by ||·||, with the following properties:

Definition: Norms.

1. ||v|| > 0 for any v ≠ 0, and ||v|| = 0 implies v = 0
2. ||av|| = |a| ||v||
3. ||u+v|| ≤ ||u|| + ||v|| (triangle inequality)

We will only deal with the so-called lp norm.
The lp-Norm
The lp norm of a vector x is defined as (p ≥ 1):

||x||_p = ( Σ_{i=1}^{n} |xi|^p )^{1/p}

Examples:

- for p = 2 we have the ordinary Euclidean norm:  ||x||_2 = √(x^T x)
- for p = ∞ the definition becomes  ||x||_∞ = max_{1≤i≤n} |xi|
- a norm for matrices is induced via  ||A|| = max_{x≠0} ||Ax|| / ||x||
- for l2 this means:  ||A||_2 = √(maximum eigenvalue of A^T A), i.e. the largest singular value of A
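These norms are all available in NumPy, so the definitions can be checked directly. A sketch (the vector and matrix are arbitrary examples):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

# l2 (Euclidean) norm: sqrt(x^T x)
assert np.isclose(np.linalg.norm(x, 2), np.sqrt(x @ x))
# l-infinity norm: max |x_i|
assert np.isclose(np.linalg.norm(x, np.inf), 4.0)

# Induced matrix 2-norm = largest singular value
#                       = sqrt(max eigenvalue of A^T A)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
eigmax = np.max(np.linalg.eigvalsh(A.T @ A))
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(eigmax))
```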
Matrix Algebra – Determinants
The determinant of a square matrix A is a scalar
number denoted det (A) or |A|, for example
det | a  b | = ad − bc
    | c  d |

or

det | a11  a12  a13 |
    | a21  a22  a23 |
    | a31  a32  a33 |

= a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31
Matrix Algebra – Inversion
A square matrix is singular if det A=0. This usually
indicates problems with the system (non-uniqueness, linear
dependence, degeneracy ..)
Matrix Inversion

For a square and nonsingular matrix A its inverse is defined such that

A A^{−1} = A^{−1} A = I

The cofactor matrix C of matrix A is given by

Cij = (−1)^{i+j} Mij

where Mij is the determinant of the matrix obtained by eliminating the i-th row and the j-th column of A. The inverse of A is then given by

A^{−1} = (1 / det A) C^T

Note also that (AB)^{−1} = B^{−1} A^{−1}.
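The cofactor formula translates almost line by line into code. A sketch (for illustration only: the cofactor route scales factorially and is never used for large systems):

```python
import numpy as np

def cofactor_inverse(A):
    """Invert a square matrix via the cofactor formula
    A^{-1} = C^T / det(A). Illustrative only -- impractical for large n."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor M_ij: delete row i and column j
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T / np.linalg.det(A)

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # det A = 1, nonsingular
Ainv = cofactor_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv, np.linalg.inv(A))
```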
Matrix Algebra – Solution techniques
... the solution to a linear system of equations is then given by

x = A^{−1} b

The main task in solving a linear system of equations is finding the inverse of the coefficient matrix A. Solution techniques are, e.g.,

- Gauss elimination methods
- Iterative methods

A square matrix is said to be positive definite if for any nonzero vector x

x^T A x > 0

... positive definite matrices are non-singular
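In practice one factorizes A (Gaussian elimination / LU) rather than forming A^{−1} explicitly. A sketch with NumPy's direct solver (the matrix and right-hand side are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)                # LU-based direct solver
assert np.allclose(A @ x, b)

# Positive definiteness: x^T A x > 0 for any nonzero x (spot check)
v = np.array([1.0, -2.0])
assert v @ A @ v > 0
```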
Eigenvalue problems
… one of the most important tools in stress, deformation and wave
problems!
It is a simple geometrical question: find the directions in which a
square matrix does not change the orientation of a vector ... and find
the scaling ...
Ax = λx
.. the rest on the board …
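For a symmetric matrix (e.g. a 2-D stress tensor) the eigenvectors give the principal directions and the eigenvalues the principal values. A sketch (the tensor entries are arbitrary illustrative values):

```python
import numpy as np

S = np.array([[3.0, 1.0],
              [1.0, 2.0]])             # symmetric, e.g. a 2-D stress tensor

lam, V = np.linalg.eigh(S)             # eigenvalues ascending, columns = eigenvectors

# Each eigenvector is only scaled, never rotated: S v = lambda v
for k in range(2):
    assert np.allclose(S @ V[:, k], lam[k] * V[:, k])
```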
Some operations on vector fields
Gradient of a vector field
∇u = (∂i uj) = | ∂x ux  ∂y ux  ∂z ux |
               | ∂x uy  ∂y uy  ∂z uy |
               | ∂x uz  ∂y uz  ∂z uz |

What is the meaning of the gradient?
Some operations on vector fields
Divergence of a vector field
∇ · u = ∂x ux + ∂y uy + ∂z uz

When u is the displacement, what is its divergence?
Some operations on vector fields
Curl of a vector field
        | ∂y uz − ∂z uy |
∇ × u = | ∂z ux − ∂x uz |
        | ∂x uy − ∂y ux |

Can we observe it?
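These operators are easy to apply numerically with finite differences. A sketch computing the divergence of the field u = (x, y, z) on a grid, for which analytically ∇ · u = 3 everywhere (grid size and extent are arbitrary choices):

```python
import numpy as np

n = 21
x = y = z = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
ux, uy, uz = X, Y, Z                      # displacement field u = (x, y, z)

dx = x[1] - x[0]
# divergence: d(ux)/dx + d(uy)/dy + d(uz)/dz via central differences
div = (np.gradient(ux, dx, axis=0)
       + np.gradient(uy, dx, axis=1)
       + np.gradient(uz, dx, axis=2))
assert np.allclose(div, 3.0)
```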
Vector product
A = |a × b| = |a| |b| sin θ   (the area of the parallelogram spanned by a and b)
Matrices – Systems of equations
Seismological applications
Stress and strain tensors
Calculating interpolation or differential
operators for finite-difference methods
Eigenvectors and eigenvalues for
deformation and stress problems (e.g.
boreholes)
Norm: how to compare data with theory
Matrix inversion: solving for tomographic
images
Measuring strain and rotations
The power of series
Many (mildly or wildly nonlinear) physical systems
are transformed to linear systems by using Taylor
series
f(x + dx) = f(x) + f'(x) dx + (1/2) f''(x) dx² + (1/6) f'''(x) dx³ + ...

          = Σ_{i=0}^{∞} f^{(i)}(x)/i! dx^i
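A quick numerical check of the expansion, here for f = sin with a small step dx (the point and step size are arbitrary illustrative values):

```python
import math

# Taylor expansion of sin about x, truncated after the dx^3 term
x, dx = 0.3, 0.1
derivs = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]  # f, f', f'', f'''

approx = sum(derivs[i] * dx**i / math.factorial(i) for i in range(4))
# the truncation error is of order dx^4 / 4!
assert abs(approx - math.sin(x + dx)) < 1e-5
```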
… and Fourier
Then there is the power of Fourier series, assuming a periodic function ... (here: symmetric, zero at both ends)

f(x) = a0 + Σ_{n=1}^{∞} an sin(nπx/L)

a0 = (1/L) ∫₀ᴸ f(x) dx

an = (2/L) ∫₀ᴸ f(x) sin(nπx/L) dx ,  n = 1, ..., ∞
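A sketch of such a sine series for the example function f(x) = x(L − x), which is symmetric about L/2 and zero at both ends (the choice of f, grid, and truncation are arbitrary illustrations; the coefficient integrals are evaluated numerically):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
f = x * (L - x)                       # symmetric, zero at both ends

# partial sum of the sine series with numerically computed coefficients
approx = np.zeros_like(x)
for n in range(1, 20):
    basis = np.sin(n * np.pi * x / L)
    an = (2.0 / L) * np.sum(f * basis) * dx   # a_n = (2/L) int f sin(n pi x/L) dx
    approx += an * basis

# 19 terms already reproduce f to better than 1e-3
assert np.max(np.abs(approx - f)) < 1e-3
```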
Series –Taylor and Fourier
Seismological applications
Well: any Fourier transformation, filtering
Approximating source input functions (e.g.,
step functions)
Numerical operators (“Taylor operators”)
Solutions to wave equations
Linearization of strain - deformation
The Delta function
… so weird but so useful …
∫_{−∞}^{∞} δ(t) f(t) dt = f(0)

∫_{−∞}^{∞} δ(t) dt = 1 ,  δ(t) = 0 for t ≠ 0

∫_{−∞}^{∞} f(t) δ(t − a) dt = f(a)

δ(at) = (1/|a|) δ(t)

δ(t) = (1/2π) ∫_{−∞}^{∞} e^{iωt} dω
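The sifting property can be illustrated with a narrowing Gaussian standing in for δ(t): as its width σ shrinks, the integral against f converges to f(a). A sketch (the test function cos, the point a, and the widths are arbitrary choices):

```python
import numpy as np

def gaussian(t, sigma):
    """Unit-area Gaussian; approximates delta(t) as sigma -> 0."""
    return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

t = np.linspace(-10, 10, 200001)
dt = t[1] - t[0]
f = np.cos(t)
a = 0.5

# Integral of g_sigma(t - a) f(t) dt for progressively narrower Gaussians
vals = [np.sum(gaussian(t - a, s) * f) * dt for s in (0.5, 0.1, 0.02)]
errs = [abs(v - np.cos(a)) for v in vals]

# the error shrinks monotonically toward the sifted value f(a) = cos(a)
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 1e-3
```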