(1)

Some basic maths for seismic data processing and inverse problems

(Refreshment only!)

• Complex Numbers

• Vectors

– Linear vector spaces

• Matrices

– Determinants

– Eigenvalue problems

– Singular values

– Matrix inversion

The idea is to illustrate these mathematical tools with examples from seismology.

(2)

Complex numbers

$z = a + ib = r e^{i\varphi} = r(\cos\varphi + i\sin\varphi)$
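A minimal Python sketch (not part of the original slides; values are illustrative) showing the equivalence of the Cartesian and polar forms using the standard-library cmath module:

import cmath

# a complex number in Cartesian form z = a + ib
z = 3.0 + 4.0j

# polar form: r = |z|, phi = arg(z)
r, phi = abs(z), cmath.phase(z)

# reconstruct z from the polar form r * e^{i phi}
z_polar = r * cmath.exp(1j * phi)

print(z, z_polar)                                   # both print (3+4j)
print(r * (cmath.cos(phi) + 1j * cmath.sin(phi)))   # same via cos/sin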

(3)

Complex numbers

conjugate, etc.

$z = a + ib = r e^{i\varphi}$

$z^{*} = a - ib = r e^{-i\varphi} = r(\cos\varphi - i\sin\varphi)$

$|z|^{2} = z z^{*} = (a + ib)(a - ib) = a^{2} + b^{2}$

$\cos\varphi = (e^{i\varphi} + e^{-i\varphi})/2$

$\sin\varphi = (e^{i\varphi} - e^{-i\varphi})/(2i)$
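A short numerical spot-check of these identities (illustrative values, not from the slides), again with cmath:

import cmath

z = 2.0 - 1.5j
zc = z.conjugate()                       # z* = a - ib
print(abs(z)**2, (z * zc).real)          # |z|^2 = z z*

phi = 0.7
print(cmath.cos(phi), (cmath.exp(1j*phi) + cmath.exp(-1j*phi)) / 2)
print(cmath.sin(phi), (cmath.exp(1j*phi) - cmath.exp(-1j*phi)) / (2j))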

(4)

Complex numbers

seismological application

Plane waves as superposition of harmonic signals using complex notation

Use this „Ansatz“ in the acoustic wave equation and interpret the consequences for wave propagation!

$u_j(x_i, t) = A_j \exp[\,ik(a_i x_i - ct)\,]$

$u(x, t) = A \exp[\,i(kx - \omega t)\,]$
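As a hedged illustration of the suggested exercise (wave speed, wavenumber and finite-difference steps below are arbitrary choices, not from the slides): the harmonic ansatz satisfies the 1-D acoustic wave equation d^2u/dt^2 = c^2 d^2u/dx^2 exactly when omega = c k, which can be spot-checked numerically:

import numpy as np

c, k = 3000.0, 0.01            # wave speed (m/s) and wavenumber (1/m); illustrative values
omega = c * k                   # the ansatz solves the wave equation only if omega = c k

def u(x, t):
    # harmonic plane wave u(x, t) = A exp[i(kx - omega t)] with A = 1
    return np.exp(1j * (k * x - omega * t))

# check d^2u/dt^2 = c^2 d^2u/dx^2 at one point with centred finite differences
x0, t0, hx, ht = 100.0, 2.0, 1.0, 1e-3
d2u_dt2 = (u(x0, t0 + ht) - 2 * u(x0, t0) + u(x0, t0 - ht)) / ht**2
d2u_dx2 = (u(x0 + hx, t0) - 2 * u(x0, t0) + u(x0 - hx, t0)) / hx**2
print(np.allclose(d2u_dt2, c**2 * d2u_dx2, rtol=1e-4))   # True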

(5)

Vectors and Matrices

For discrete linear inverse problems we will need the concept of linear vector spaces. The generalization of the concept of the size of a vector to matrices and functions will be extremely useful for inverse problems.

Definition: Linear Vector Space.

A linear vector space over a field F of scalars is a set of elements V together with a function called addition from V×V into V and a function called scalar multiplication from F×V into V, satisfying the following conditions for all x, y, z ∈ V and all α, β ∈ F:

1. (x+y)+z = x+(y+z)

2. x+y = y+x

3. There is an element 0 in V such that x+0 = x for all x ∈ V

4. For each x ∈ V there is an element -x ∈ V such that x+(-x) = 0

5. α(x+y) = αx + αy

6. (α+β)x = αx + βx

7. α(βx) = (αβ)x

8. 1x = x
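For instance, R^3 with ordinary addition and scalar multiplication is a linear vector space; a quick numerical spot-check of a few axioms (illustrative, not part of the slides):

import numpy as np

x, y, z = np.array([1., 2., 3.]), np.array([-1., 0., 4.]), np.array([2., 2., 2.])
a, b = 2.5, -0.5

print(np.allclose((x + y) + z, x + (y + z)))    # axiom 1: associativity
print(np.allclose(x + y, y + x))                # axiom 2: commutativity
print(np.allclose(a * (x + y), a * x + a * y))  # axiom 5: distributivity over vectors
print(np.allclose((a + b) * x, a * x + b * x))  # axiom 6: distributivity over scalars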

(6)

Matrix Algebra – Linear Systems

Linear system of algebraic equations

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$

$\vdots$

$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$

... where the $x_1, x_2, \dots, x_n$ are the unknowns ...

in matrix form

$\mathbf{A}\mathbf{x} = \mathbf{b}$
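A minimal NumPy sketch of setting up and solving such a system (the 3×3 system below is illustrative, not from the slides):

import numpy as np

# illustrative 3x3 system; its solution is x = (1, -2, -2)
A = np.array([[ 3.,  2., -1.],
              [ 2., -2.,  4.],
              [-1.,  0.5, -1.]])
b = np.array([1., -2., 0.])

x = np.linalg.solve(A, b)       # solve A x = b for the unknowns x_1, ..., x_n
print(x)                        # [ 1. -2. -2.]
print(np.allclose(A @ x, b))    # True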

(7)

Matrix Algebra – Linear Systems

where

$\mathbf{A} = [a_{ij}] = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$

A is an n×n (square) matrix, and x and b are column vectors of dimension n.

(8)

Matrix Algebra – Vectors

Row vectors and column vectors

$\mathbf{w} = (w_1 \;\; w_2 \;\; w_3), \qquad \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}$

Matrix addition and subtraction

$\mathbf{C} = \mathbf{A} + \mathbf{B}$ with $c_{ij} = a_{ij} + b_{ij}$

$\mathbf{D} = \mathbf{A} - \mathbf{B}$ with $d_{ij} = a_{ij} - b_{ij}$

Matrix multiplication

$\mathbf{C} = \mathbf{A}\mathbf{B}$ with $c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}$

where A (size l×m) and B (size m×n) and i = 1, 2, ..., l and j = 1, 2, ..., n.

Note that in general AB≠BA but (AB)C=A(BC)
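A small NumPy illustration of these rules (matrices chosen arbitrarily), including the non-commutativity of the product:

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
C = np.array([[2., 0.], [1., 1.]])

print(A + B)                                   # addition: c_ij = a_ij + b_ij
print(A - B)                                   # subtraction: d_ij = a_ij - b_ij
print(A @ B)                                   # product: c_ij = sum_k a_ik b_kj
print(np.array_equal(A @ B, B @ A))            # False: in general AB != BA
print(np.allclose((A @ B) @ C, A @ (B @ C)))   # True: (AB)C = A(BC)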

(9)

Matrix Algebra – Special

Transpose of a matrix

$(\mathbf{A}^T)_{ij} = a_{ji}, \qquad (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T$

Symmetric matrix

$\mathbf{A}^T = \mathbf{A}, \qquad a_{ij} = a_{ji}$

Identity matrix

$\mathbf{I} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$

with AI=A, Ix=x
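A quick numerical check of the transpose and identity rules (illustrative matrices, not from the slides):

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 5.], [6., 7.]])
I = np.eye(2)
x = np.array([1., 2.])

print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T
S = A + A.T                                 # one simple way to build a symmetric matrix
print(np.array_equal(S, S.T))               # a_ij = a_ji
print(np.array_equal(A @ I, A))             # AI = A
print(np.array_equal(I @ x, x))             # Ix = x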

(10)

Matrix Algebra – Orthogonal

Orthogonal matrices

A matrix Q (n×n) is said to be orthogonal if

$\mathbf{Q}^T \mathbf{Q} = \mathbf{I}_n$

... and each column is an orthonormal vector $\mathbf{q}_i$ with $|\mathbf{q}_i| = 1$.

... example:

$\mathbf{Q} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}$

It is easy to show that:

$\mathbf{Q}^T \mathbf{Q} = \mathbf{Q}\mathbf{Q}^T = \mathbf{I}_n$

If orthogonal matrices operate on vectors, their size (the result of the inner product x·x) does not change:

$(\mathbf{Q}\mathbf{x})^T (\mathbf{Q}\mathbf{x}) = \mathbf{x}^T \mathbf{x}$
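For example, a 2-D rotation matrix is orthogonal; a short NumPy check (angle chosen arbitrarily) that Q^T Q = I and that the length of x is preserved:

import numpy as np

theta = np.deg2rad(30.0)                     # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))       # Q^T Q = I_n
x = np.array([3., 4.])
print(x @ x, (Q @ x) @ (Q @ x))              # both 25.0: (Qx)^T (Qx) = x^T x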

(11)

Matrix and Vector Norms

How can we compare the size of vectors, matrices (and functions!)?

For scalars it is easy (absolute value). The generalization of this concept to vectors, matrices and functions is called a norm. Formally, the norm is a function from the space of vectors into the space of scalars, denoted by ||·||, with the following properties:

Definition: Norms.

1. ||v|| > 0 for any v ≠ 0, and ||v|| = 0 implies v = 0

2. ||αv|| = |α| ||v||

3. ||u+v||≤||v||+||u|| (Triangle inequality)

We will only deal with the so-called $\ell_p$ norm.

(12)

The $\ell_p$-Norm

The $\ell_p$-norm for a vector x is defined as (p ≥ 1):

$\|\mathbf{x}\|_{\ell_p} = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$

Examples:

- for p = 2 we have the ordinary Euclidean norm; for $\ell_2$ this means: $\|\mathbf{x}\|_{\ell_2}^2 = \mathbf{x}^T \mathbf{x}$

- for p = ∞ the definition is: $\|\mathbf{x}\|_{\ell_\infty} = \max_{1 \le i \le n} |x_i|$

- a norm for matrices is induced via: $\|\mathbf{A}\| = \max_{\mathbf{x} \ne 0} \dfrac{\|\mathbf{A}\mathbf{x}\|}{\|\mathbf{x}\|}$
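NumPy's norm routine implements these definitions directly; a brief illustration (vector and matrix values are arbitrary):

import numpy as np

x = np.array([3., -4., 12.])

print(np.linalg.norm(x, 1))        # l1 norm:  |3| + |-4| + |12|        -> 19.0
print(np.linalg.norm(x, 2))        # l2 norm:  sqrt(x^T x)              -> 13.0
print(np.linalg.norm(x, np.inf))   # l-infinity norm: max_i |x_i|       -> 12.0

A = np.array([[2., 0.], [0., 1.]])
print(np.linalg.norm(A, 2))        # induced matrix 2-norm: max ||Ax||/||x||  -> 2.0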

(13)

Matrix Algebra – Determinants

The determinant of a square matrix A is a scalar number denoted det (A) or |A|, for example

$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$

or

$\det \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$

(14)

Matrix Algebra – Inversion

A square matrix is singular if det A = 0. This usually indicates problems with the system (non-uniqueness, linear dependence, degeneracy, ...).
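A quick numerical illustration (values arbitrary, not from the slides) of the 2×2 determinant formula and of the singularity criterion det A = 0:

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
print(np.linalg.det(A))                    # ad - bc = 1*4 - 2*3 = -2

S = np.array([[1., 2.], [2., 4.]])         # second row is 2 x the first (linear dependence)
print(np.isclose(np.linalg.det(S), 0.0))   # True: S is singular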

Matrix Inversion

For a square and non-singular matrix A its inverse is defined such that

$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$

The cofactor matrix C of matrix A is given by

$C_{ij} = (-1)^{i+j} M_{ij}$

where $M_{ij}$ is the determinant of the matrix obtained by eliminating the i-th row and the j-th column of A.

The inverse of A is then given by

$\mathbf{A}^{-1} = \frac{1}{\det \mathbf{A}}\, \mathbf{C}^T$

Note also that $(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$.
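A hedged sketch of the cofactor construction in NumPy (the helper name cofactor_inverse is mine, not from the slides), compared against the built-in inverse:

import numpy as np

def cofactor_inverse(A):
    # A^{-1} = C^T / det(A), where C_ij = (-1)^(i+j) * M_ij (minor determinants)
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)

A = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 4.]])
print(np.allclose(cofactor_inverse(A), np.linalg.inv(A)))   # True
print(np.allclose(A @ cofactor_inverse(A), np.eye(3)))      # A A^{-1} = I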

(15)

Matrix Algebra – Solution techniques

... the solution to a linear system of equations is then given by

$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$

The main task in solving a linear system of equations is finding the inverse of the coefficient matrix A.

Solution techniques are e.g.

• Gauss elimination methods

• Iterative methods (a minimal sketch of both follows below)
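A sketch under stated assumptions (illustrative system, not from the slides; the matrix is diagonally dominant so the Jacobi iteration converges): a direct NumPy solve, which internally uses an LU/Gauss-elimination-type factorization, compared with a basic Jacobi iteration:

import numpy as np

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
b = np.array([1., 0., 2.])

# direct solve (LAPACK performs Gaussian elimination / LU factorization with pivoting)
x_direct = np.linalg.solve(A, b)

# simple Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)
D = np.diag(np.diag(A))
R = A - D
x = np.zeros_like(b)
for _ in range(50):
    x = (b - R @ x) / np.diag(A)

print(np.allclose(x, x_direct))               # True: both methods agree
# this matrix is also symmetric positive definite (all eigenvalues > 0), cf. the definition below
print(np.all(np.linalg.eigvalsh(A) > 0))      # True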

A square matrix is said to be positive definite if for any non-zero vector x

$\mathbf{x}^T \mathbf{A}\, \mathbf{x} > 0$

... positive definite matrices are non-singular

(16)

Matrices – Systems of equations

Seismological applications

• Stress and strain tensors

• Tomographic forward and inverse problems

• Calculating interpolation or differential operators for finite-difference methods
