Numerical Analysis and Complexity of Stochastic Processes

(1)

18th Jyväskylä Summer School

Klaus Ritter

TU Darmstadt, Germany

August 2008

(2)

Outline

I. Introductory Example: Reconstruction of a Brownian Motion

Stochastic Processes

Brownian Motion

Reconstruction of a Brownian Motion

(3)

Outline

I. Introductory Example: Reconstruction of a Brownian Motion

Stochastic Processes

Brownian Motion

Reconstruction of a Brownian Motion

(4)

Stochastic Processes

Stochastic process (real-valued): a family X = (X(t))_{t∈D} of random variables

X(t) = X(t, ·) : Ω → R on a probability space (Ω, A, P).

Paths (trajectories) of X: for fixed ω ∈ Ω,

D → R, t ↦ X(t, ω).

(5)

In these lectures

D = [0, 1]^d

with d ∈ N, and X has (at least) continuous paths, i.e.,

∀ ω ∈ Ω : X(·, ω) ∈ C(D).

Then X defines a continuous random function X : Ω → C(D), ω ↦ X(·, ω).

Terminology for d > 1: X is called a random field.

(6)

X second-order process if

∀ t ∈ D : E(X²(t)) < ∞.

In this case the mean and the covariance kernel, m : D → R, K : D × D → R, of X are given by

m(t) = E(X(t)),   K(s, t) = E((X(s) − m(s)) · (X(t) − m(t))).

Clearly, (X(t_1), …, X(t_n)) has mean vector (m(t_i))_{1≤i≤n} and covariance matrix (K(t_i, t_j))_{1≤i,j≤n}.

(7)

X Gaussian process if

(X(t_1), …, X(t_n)) is normally distributed

for every n ∈ N and all t_1, …, t_n ∈ D.

(8)

Measures on function spaces: Consider the sup-norm on C(D) and the corresponding Borel σ-algebra B. Then

P_X(B) = P({X ∈ B}),   B ∈ B,

is well-defined. Clearly P_X is a probability measure on (C(D), B), called the distribution of X.

Conversely, for any probability measure µ on (C(D), B),

X(t, f) = f(t)

defines a stochastic process on (C(D), B, µ), called the canonical process. Clearly its distribution is P_X = µ.

(9)

Clearly

P_X = P_Y ⇒ m_X = m_Y ∧ K_X = K_Y.

For Gaussian processes X and Y we have '⇔'.

(10)

Outline

I. Introductory Example: Reconstruction of a Brownian Motion

Stochastic Processes

Brownian Motion

Reconstruction of a Brownian Motion

(11)

Brownian Motion

In the sequel

D = [0, 1].

X one-dimensional Brownian motion if

(i) X(0) = 0,

(ii) X(t_1) − X(t_0), …, X(t_n) − X(t_{n−1}) independent for every n ∈ N and all 0 = t_0 < ⋯ < t_n ≤ 1,

(iii) X(t) − X(s) ∼ N(0, t − s) for 0 ≤ s < t,

(iv) X has continuous paths.

(12)

Existence: Let Z_1, …, Z_n be i.i.d. with E(Z_1) = 0 and Var(Z_1) = 1, set

X_n(i/n) = Σ_{j=1}^{i} (1/√n) · Z_j,   i = 0, …, n,

and use piecewise linear interpolation. Then

P_{X_n} → w weakly as n → ∞.

The canonical process on (C(D), B, w) is a Brownian motion, and w is called the Wiener measure. Compare: central limit theorem.

Simulation: Use random numbers to simulate realizations of Z_1, …, Z_n.
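The random-walk construction above is straightforward to implement. A minimal Python sketch (the function name brownian_path is illustrative; gauss=False gives the ±1 steps used in the simulations that follow):

```python
import math
import random

def brownian_path(n, rng=None, gauss=True):
    """Random-walk approximation of Brownian motion on [0, 1]:
    X_n(i/n) = sum_{j<=i} Z_j / sqrt(n) with Z_j i.i.d., E(Z_1) = 0, Var(Z_1) = 1.
    gauss=True uses Z_1 ~ N(0, 1); gauss=False uses P(Z_1 = +-1) = 1/2."""
    rng = rng or random.Random(0)
    x = [0.0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0) if gauss else rng.choice([-1.0, 1.0])
        x.append(x[-1] + z / math.sqrt(n))
    return x  # values at the knots i/n; interpolate linearly in between

path = brownian_path(500, gauss=False)
```

The returned values are the knot values X_n(i/n); plotting them with linear interpolation reproduces paths like those on the next slides.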

(13)

Simulations with P(Z_1 = ±1) = 1/2 and n = 5, 50, 500.

[Three plots of X(t) against t ∈ [0, 1].]

(14)

Simulations with Z_1 ∼ N(0, 1) and n = 800.

[Six plots of X(t) against t ∈ [0, 1].]

(15)

Uniqueness: P_X = w for every Brownian motion X.

Applications: physics, finance, analysis, …

Lemma. Every Brownian motion is Gaussian with zero mean and covariance kernel

K(s, t) = min(s, t).

Proof. By (i)–(iii), X is Gaussian. Moreover,

m(t) = E(X(t)) = E(X(t) − X(0)) = 0

and

K(t, t) = E(X²(t)) = E((X(t) − X(0))²) = t.

If s ≤ t then, by independence of X(s) − X(0) and X(t) − X(s),

K(s, t) = E(X(s) · X(t)) = E(X(s) · (X(t) − X(s))) + E(X²(s)) = 0 + s = min(s, t). □

(16)

k-dimensional Brownian motion:

X(t) = (X^(1)(t), …, X^(k)(t))

with independent one-dimensional Brownian motions X^(1), …, X^(k).

(17)

Simulation with P(Z_1 = ±1) = 1/2 and n = 50 for both components.

[Plot of the planar path (X1(t), X2(t)).]

(18)

Simulation with P(Z_1 = ±1) = 1/2 and n = 2000 for both components.

[Plot of the planar path (X1(t), X2(t)).]

(19)

Simulations with Z_1 ∼ N(0, 1) and n = 800 for both components.

[Three plots of planar paths (X1(t), X2(t)).]

(20)

Literature

◮ Stochastic processes, measures on function spaces

V. I. Bogachev (1998), Gaussian Measures, Amer. Math. Soc., Providence.

I. Karatzas, S. E. Shreve (1999), Brownian Motion and Stochastic Calculus, Springer, New York.

M. A. Lifshits (1995), Gaussian Random Functions, Kluwer, Dordrecht.

L. Partsch (1984), Vorlesungen zum eindimensionalen Wiener Prozeß, Teubner, Leipzig.

N. N. Vakhania, V. I. Tarieladze, S. A. Chobanyan (1987), Probability Distributions on Banach Spaces, Reidel, Dordrecht.

(21)

◮ Random numbers

J. E. Gentle (1998), Random Number Generation and Monte Carlo Methods, Springer-Verlag, New York.

P. L’Ecuyer (1998), Random number generation, in: Handbook on Simulation, J. Banks (ed.), Wiley, New York.

D. E. Knuth (1998), The Art of Computer Programming, Vol. 2, Seminumerical Algorithms, Addison-Wesley, New York.

G. Marsaglia (1995), The Marsaglia random number CDROM, including the DIEHARD battery of tests of randomness, Dep. of Statistics, Florida State Univ., Tallahassee.

H. Niederreiter (1992), Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia.

(22)

Outline

I. Introductory Example: Reconstruction of a Brownian Motion

Stochastic Processes

Brownian Motion

Reconstruction of a Brownian Motion

(23)

Reconstruction of a Brownian Motion X

Given n ∈ N , choose

knots 0 < t_1 < ⋯ < t_n ≤ 1,
functions a_1, …, a_n ∈ C([0, 1]),

such that X − X_n is 'small', where

X_n(t) = Σ_{i=1}^{n} X(t_i) · a_i(t).

(24)

Here: mean-square L_2-distance. For every ω ∈ Ω,

‖X(·, ω) − X_n(·, ω)‖_2 = ( ∫_0^1 |X(t, ω) − X_n(t, ω)|² dt )^{1/2}.

The (average) error of X_n is

e(X_n) = ( E(‖X − X_n‖_2²) )^{1/2}.

n-th minimal error (minimization w.r.t. the a_i's and t_i's):

e(n) = inf_{X_n} e(X_n).

(25)

Minimization of e(X_n) for fixed knots t_i: Recall

e²(X_n) = ∫_D E(X(t) − X_n(t))² dt.

Put

Σ = (K(t_i, t_j))_{1≤i,j≤n},   b(t) = (K(t, t_i))_{1≤i≤n},   a(t) = (a_i(t))_{1≤i≤n}

to obtain

E(X(t) − X_n(t))² = E(X²(t)) − 2 E(X(t) · X_n(t)) + E(X_n²(t))
  = K(t, t) − 2 Σ_{i=1}^{n} a_i(t) K(t, t_i) + Σ_{i,j=1}^{n} a_i(t) a_j(t) K(t_i, t_j)
  = K(t, t) − 2 a(t)ᵀ b(t) + a(t)ᵀ Σ a(t).

(26)

Conclusion: With a(t) = Σ⁻¹ b(t),

X_n(t) = Σ_{i=1}^{n} X(t_i) · a_i(t)

has minimal error among all X_n that use the knots t_i. Furthermore,

e²(X_n) = ∫_D ( K(t, t) − b(t)ᵀ Σ⁻¹ b(t) ) dt.

Valid for every second-order process X (if det Σ ≠ 0)!
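The optimal coefficients a(t) = Σ⁻¹ b(t) can be computed directly from the kernel. A small self-contained Python sketch (the names solve and optimal_coefficients are illustrative; a library solver would normally replace the hand-written elimination):

```python
def solve(A, y):
    """Solve A x = y by Gaussian elimination with partial pivoting."""
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def optimal_coefficients(t, knots, K=min):
    """a(t) = Sigma^{-1} b(t) for covariance kernel K (default K = min,
    i.e. Brownian motion)."""
    Sigma = [[K(ti, tj) for tj in knots] for ti in knots]
    b = [K(t, ti) for ti in knots]
    return solve(Sigma, b)

a = optimal_coefficients(0.375, [0.25, 0.5, 0.75, 1.0])
```

For K = min the coefficients at a point between two knots are the hat-function weights of piecewise linear interpolation, consistent with the explicit solution on the following slides.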

(27)

Explicit formulas in the Brownian motion case: Here

Σ =
( t_1  t_1  …  t_1 )
( t_1  t_2  …  t_2 )
(  ⋮    ⋮   ⋱   ⋮  )
( t_1  t_2  …  t_n )
= B C Bᵀ,

where, with t_0 = 0,

B =
( 1  0  …  0 )
( 1  1  ⋱  ⋮ )
( ⋮     ⋱  0 )
( 1  1  …  1 )

and C = diag(t_i − t_{i−1})_{1≤i≤n}.
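This factorization of Σ, with B the lower-triangular matrix of ones and C the diagonal of increments, can be checked in exact arithmetic. A short sketch (helper names are illustrative):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def factorization(knots):
    """Return (Sigma, B C B^T) for Sigma_ij = min(t_i, t_j), with B the
    lower-triangular matrix of ones and C = diag(t_i - t_{i-1}), t_0 = 0."""
    n = len(knots)
    Sigma = [[min(s, t) for t in knots] for s in knots]
    B = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
    inc = [knots[0]] + [knots[i] - knots[i - 1] for i in range(1, n)]
    C = [[inc[i] if i == j else 0 for j in range(n)] for i in range(n)]
    Bt = [list(row) for row in zip(*B)]
    return Sigma, matmul(matmul(B, C), Bt)

knots = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1, 1)]
Sigma, BCBt = factorization(knots)
```

Entry (i, j) of B C Bᵀ sums exactly the increments common to both prefixes, which is t_min(i,j).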

(28)

Moreover,

e²(X_n) = (1/6) Σ_{i=1}^{n} (t_i − t_{i−1})² + (1/2) (1 − t_n)².

Note: X_n is 'local', which reflects the Markov property of X.

Minimization of e(X_n) with respect to the knots t_i: Easy to see: e(X_n) is minimal iff

t_i = 3i / (3n + 1).
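The error formula and the optimal knots can be checked in exact rational arithmetic; a small sketch (the helper name squared_error is illustrative):

```python
from fractions import Fraction

def squared_error(knots):
    """e^2(X_n) = (1/6) * sum (t_i - t_{i-1})^2 + (1/2) * (1 - t_n)^2."""
    n = len(knots)
    t = [Fraction(0)] + list(knots)
    return (Fraction(1, 6) * sum((t[i] - t[i - 1]) ** 2 for i in range(1, n + 1))
            + Fraction(1, 2) * (1 - t[n]) ** 2)

n = 10
optimal = [Fraction(3 * i, 3 * n + 1) for i in range(1, n + 1)]
err2 = squared_error(optimal)
```

With the optimal knots the value simplifies to 1/(2(3n + 1)), in agreement with Suldin's theorem below; equidistant knots i/n give 1/(6n) instead.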

(29)

Theorem (Suldin 1960). For piecewise linear interpolation X_n with knots t_i,

e(X_n) = e(n) = 1 / √(2(3n + 1)).

Used so far: m = 0 and K = min, but not: X Gaussian.

Notation: strong equivalence

a(n) ≈ b(n)   if   lim_{n→∞} a(n)/b(n) = 1,

weak equivalence

a(n) ≍ b(n)   if   a(n)/b(n) is bounded away from 0 and ∞.

(30)

Equidistant knots t_i = i/n yield

e(X_n) = 1/√6 · n^{−1/2} ≈ e(n).

Simulation: Clearly

Y_i = X(t_i) − X(t_{i−1}) ∼ N(0, t_i − t_{i−1}), independent.

Use random numbers to simulate realizations of Y_1, …, Y_n. Compare p. 3.
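The rate e(X_n) = 1/√6 · n^{−1/2} for equidistant knots can also be checked by Monte Carlo simulation. A rough Python sketch (names and discretization parameters are illustrative; a fine grid approximates the L_2-norm):

```python
import math
import random

def mc_error(n, fine=32, samples=1000, seed=1):
    """Monte Carlo estimate of e(X_n) for piecewise linear interpolation of
    Brownian motion at equidistant knots i/n; the L2-norm is approximated
    on a grid with `fine` points per knot interval."""
    rng = random.Random(seed)
    m = n * fine
    total = 0.0
    for _ in range(samples):
        # simulate Brownian increments on the fine grid
        x = [0.0]
        for _ in range(m):
            x.append(x[-1] + rng.gauss(0.0, math.sqrt(1.0 / m)))
        # squared L2-distance to the piecewise linear interpolation at the knots
        sq = 0.0
        for j in range(m):
            t = j / m
            k = int(t * n)  # knot interval [k/n, (k+1)/n]
            left, right = x[k * fine], x[(k + 1) * fine]
            lin = left + (t * n - k) * (right - left)
            sq += (x[j] - lin) ** 2 / m
        total += sq
    return math.sqrt(total / samples)

n = 8
estimate = mc_error(n)
exact = 1.0 / math.sqrt(6 * n)
```

For n = 8 the estimate should lie close to 1/√48 ≈ 0.144, up to Monte Carlo and discretization error.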

(31)

Reconstruction in L_p-norm: For 1 ≤ p ≤ ∞ and 1 ≤ q < ∞, the error of X_n is

e_{p,q}(X_n) = ( E(‖X − X_n‖_p^q) )^{1/q}

and the n-th minimal error is

e_{p,q}(n) = inf_{X_n} e_{p,q}(X_n).

So far p = q = 2. We have

e_{p,q}(n) ≍ n^{−1/2}                  if p < ∞,
e_{p,q}(n) ≍ n^{−1/2} · (ln n)^{1/2}   if p = ∞.

(32)

Smoothness of a Brownian motion

◮ in mean-square sense:

( E(X(s) − X(t))² )^{1/2} = |s − t|^{1/2},

◮ pathwise: almost surely (Lévy's modulus of continuity)

lim sup_{h→0+}  sup_{|s−t|<h}  |X(s) − X(t)| / (2 h ln(1/h))^{1/2} = 1.

(33)

Decomposition of a Brownian motion

X − X_n is zero mean Gaussian with covariance kernel

K(s, t) = (min(s, t) − t_{i−1}) (t_i − max(s, t)) / (t_i − t_{i−1})   if s, t ∈ [t_{i−1}, t_i],
K(s, t) = min(s, t) − t_n                                            if s, t ∈ [t_n, 1],
K(s, t) = 0                                                          otherwise.

Moreover, X_n and X − X_n are independent.

A Gaussian process on [u, v] with covariance kernel

K(s, t) = (min(s, t) − u) (v − max(s, t)) / (v − u)

is called a Brownian bridge.
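This decomposition underlies midpoint (Lévy-style) refinement: the values at the knots and the independent bridge pieces in between can be sampled separately. A minimal Python sketch (the function name refine and the equidistant setup are illustrative, not from the slides):

```python
import math
import random

def refine(path, rng):
    """One refinement step: given Brownian-motion values at equidistant
    times on [0, 1], insert midpoints using the conditional (Brownian
    bridge) distribution X(mid) ~ N((X(u) + X(v)) / 2, (v - u) / 4)."""
    h = 1.0 / (len(path) - 1)  # current spacing
    out = [path[0]]
    for left, right in zip(path, path[1:]):
        mid = 0.5 * (left + right) + rng.gauss(0.0, math.sqrt(h / 4.0))
        out.extend([mid, right])
    return out

rng = random.Random(0)
path = [0.0, rng.gauss(0.0, 1.0)]  # X(0) = 0, X(1) ~ N(0, 1)
for _ in range(6):
    path = refine(path, rng)  # 3, 5, 9, 17, 33, 65 values
```

Each step leaves the already-sampled values unchanged and fills in the bridge fluctuation, mirroring the independence of X_n and X − X_n stated above.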

(34)

Simulation of corresponding trajectories of X, X_n, X − X_n.

[Three plots: W(t), X(t), and W(t) − X(t) against t ∈ [0, 1].]

Here n = 4 and t_1 = 1/4, t_2 = 1/2, t_3 = 3/4, t_4 = 1.

(35)

Simulation of corresponding trajectories of X, X_n, X − X_n.

[Three plots: W(t), X(t), and W(t) − X(t) against t ∈ [0, 1].]

Here n = 3.

(36)

Simulation of corresponding trajectories of X, X_n, X − X_n.

[Three plots: W(t), X(t), and W(t) − X(t) against t ∈ [0, 1].]

Here n = 4 and t_1 = 0.3, t_2 = 0.4, t_3 = 0.8, t_4 = 1.

(37)

Literature

D. Lee (1986), Approximation of linear operators on a Wiener space, Rocky Mt. J. Math. 16, 641–659.

K. Ritter (2000), Average-Case Analysis of Numerical Problems, Lect. Notes in Math. 1733, Springer, Berlin.

A. V. Suldin (1959, 1960), Wiener measure and its applications to approximation methods I, II (in Russian), Izv. Vyssh. Uchebn. Zaved. Mat. 13, 145–158; 18, 165–179.
