
A concise survey of important theorems from stochastic calculus

1.1.1 Stochastic integration and quadratic (co-)variation

The following stochastic processes are defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), P)$ with a right-continuous ($\mathcal{F}_t = \bigcap_{u > t} \mathcal{F}_u$ for all $t \ge 0$) and complete ($\mathcal{F}_0$ contains all $P$-null sets) filtration $(\mathcal{F}_t)_{0 \le t \le \infty}$. A process $(H_t)_{0 \le t \le \infty}$ is called simple predictable with respect to $(\mathcal{F}_t)_{0 \le t \le \infty}$ if it has a representation

\[
H_t = \xi_0 \mathbb{1}_{\{0\}}(t) + \sum_{i=1}^{n} \xi_i \mathbb{1}_{(T_i, T_{i+1}]}(t)\,,
\]

where $0 = T_1 \le \ldots \le T_{n+1} < \infty$ is a finite sequence of stopping times and the $\xi_i$ are $\mathcal{F}_{T_i}$-measurable, almost surely finite random variables. The set of simple predictable processes, topologized by uniform convergence in $(t, \omega)$, is denoted $\mathbf{S}$. The space of finite-valued random variables, topologized by convergence in probability, is denoted $L^0(P)$. For a stochastic process $X$ and fixed $t$, the stochastic integral of simple predictable processes with respect to $X$ is defined by the linear mapping

\[
I_X : \mathbf{S} \longrightarrow L^0(P)\,, \qquad H_t = \xi_0 \mathbb{1}_{\{0\}}(t) + \sum_{i=1}^{n} \xi_i \mathbb{1}_{(T_i, T_{i+1}]}(t) \;\longmapsto\; \xi_0 X_0 + \sum_{i=1}^{n} \xi_i \left( X_{T_{i+1}} - X_{T_i} \right)\,.
\]

$I_X$ does not depend on the choice of the representation of $H$ in $\mathbf{S}$.

Definition 1.1.1. An adapted càdlàg process $X$ is called a total semimartingale if the mapping $I_X : \mathbf{S} \longrightarrow L^0(P)$ is continuous. $X$ is called a semimartingale if, for all $t \in [0, \infty)$, the stopped process $(X_{\tau \wedge t})_{\tau \ge 0}$ is a total semimartingale.

Note that, unlike the notation commonly used in the literature, we use angle brackets instead of square brackets in the following definition.

Definition 1.1.2. The quadratic variation process of a semimartingale $X$ is defined by
\[
\langle X \rangle = X^2 - 2 \int X_- \, \mathrm{d}X\,, \qquad \text{where } X_{s-} := \lim_{u \to s,\, u < s} X_u\,, \quad (X_-)_0 = 0\,.
\]


The quadratic covariation process of two semimartingales $X$ and $Y$ is defined by
\[
\langle X, Y \rangle = X \cdot Y - \int X_- \, \mathrm{d}Y - \int Y_- \, \mathrm{d}X\,.
\]

The preceding basic definitions are taken from the book by Protter [2004]. We use a different notation for the quadratic (co-)variation because we will not consider the conditional quadratic (co-)variation process, which is the compensator of the quadratic (co-)variation process and which exists when the quadratic (co-)variation is locally bounded. For the continuous semimartingales on which we focus throughout this work, both processes coincide.

Proposition 1.1.3. Let $X, Y, Z, \tilde{Z}$ be semimartingales. It holds true that:

• $\langle X \rangle$ is an adapted non-decreasing process with càdlàg paths of finite variation.

• The mapping $(X, Y) \longmapsto \langle X, Y \rangle$ is symmetric and bilinear.

• $\mathrm{d}(X_t \cdot Y_t) = (X_-)_t \, \mathrm{d}Y_t + (Y_-)_t \, \mathrm{d}X_t + \mathrm{d}\langle X, Y \rangle_t$ (integration by parts).

• $\left\langle \int Z \, \mathrm{d}X, \int \tilde{Z} \, \mathrm{d}Y \right\rangle_t = \int_0^t Z_\tau \tilde{Z}_\tau \, \mathrm{d}\langle X, Y \rangle_\tau$ for $Z, \tilde{Z}$ càdlàg.

• For partitions $\Pi = \{0 = t_0, \ldots, t_n = t\}$ of $[0, t]$:
\[
\sum_{k=1}^{n} \left( X_{t_k} - X_{t_{k-1}} \right)^2 \stackrel{p}{\longrightarrow} \langle X \rangle_t \quad \text{as } \|\Pi\| := \max_{1 \le l \le n} |t_l - t_{l-1}| \to 0\,.
\]

The last statement is proved as Theorem 22 in Section II.6 of Protter [2004], the integration by parts formula in Corollary 2 of Section II.6. The fourth statement is included in Theorem 29. The other points are well known, and we refer to the literature on stochastic calculus for more information (e.g. Revuz and Yor [1991], Shreve [2008], Karatzas and Shreve [1991]).
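The convergence of realized variance in the last statement can be illustrated by a short simulation. The following is a hedged sketch (not part of the original text): a driftless Itô integral $X_t = \int_0^t \sigma_s \, \mathrm{d}B_s$ is discretised by an Euler scheme with the arbitrary illustrative choice $\sigma_s = 1 + 0.5 \sin(s)$, and the sum of squared increments is compared to $\langle X \rangle_t = \int_0^t \sigma_s^2 \, \mathrm{d}s$.

```python
import numpy as np

# Hedged illustration of the last statement of Proposition 1.1.3: the sum of
# squared increments of a simulated Ito integral X_t = int_0^t sigma_s dB_s
# approximates <X>_t = int_0^t sigma_s^2 ds as the mesh tends to zero.
# sigma_s = 1 + 0.5*sin(s) is an arbitrary illustrative choice.
rng = np.random.default_rng(0)

def realized_variance(n, t=1.0):
    """Sum_k (X_{t_k} - X_{t_{k-1}})^2 on an equidistant grid with n steps."""
    dt = t / n
    s = np.arange(n) * dt                          # left grid points
    sigma = 1.0 + 0.5 * np.sin(s)
    dX = sigma * rng.normal(0.0, np.sqrt(dt), n)   # Euler increments sigma dB
    return np.sum(dX ** 2)

def integrated_volatility(n, t=1.0):
    """Midpoint-rule approximation of <X>_t = int_0^t sigma_s^2 ds."""
    dt = t / n
    s = (np.arange(n) + 0.5) * dt
    return np.sum((1.0 + 0.5 * np.sin(s)) ** 2) * dt

iv = integrated_volatility(100000)   # deterministic target <X>_1
rv = realized_variance(100000)       # random, close to iv for a fine grid
```

For $n = 10^5$ steps the sampling error of the realized variance is of order $\|\Pi\|^{1/2}$, so the two quantities agree to a few decimals.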

An adapted càdlàg process $M$ is a local martingale if there exists a sequence of stopping times $T_n \uparrow \infty$ (a.s.) such that $M_{t \wedge T_n} \mathbb{1}_{\{T_n > 0\}}$ is a uniformly integrable martingale for each $n$.

Proposition 1.1.4 (Burkholder-Davis-Gundy inequalities). Let $M$ be a continuous local martingale with $M_0 = 0$. Then for every $p > 0$ there exist constants $c_p$ and $C_p$ such that
\[
c_p \, \mathbb{E}\!\left[ \langle M, M \rangle_T^{p/2} \right] \le \mathbb{E}\!\left[ (M_T^*)^p \right] \le C_p \, \mathbb{E}\!\left[ \langle M, M \rangle_T^{p/2} \right]
\]
holds for every stopping time $T$, where $M_t^* = \sup_{s \le t} |M_s|$.

The constants appearing in the Burkholder-Davis-Gundy inequalities (BDG) are universal in the sense that they depend only on $p$, but not on $M$ or the underlying probability space. In particular, if
\[
\mathbb{E}\!\left[ \sqrt{\langle M, M \rangle_t} \right] < \infty \quad \forall\, 0 < t < \infty
\]
holds, $M$ is a martingale. A proof of the BDG inequalities is given in Revuz and Yor [1991], Section IV.4.
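For $p = 2$, $M = B$ a standard Brownian motion and the deterministic stopping time $T = 1$, explicit admissible constants are available: $\mathbb{E}[\langle M, M \rangle_1] = \mathbb{E}[B_1^2] \le \mathbb{E}[(M_1^*)^2]$, and Doob's $L^2$ inequality gives $\mathbb{E}[(M_1^*)^2] \le 4\, \mathbb{E}[B_1^2]$, i.e. $c_2 = 1$ and $C_2 = 4$ work. A hedged Monte Carlo sketch (added here for illustration):

```python
import numpy as np

# Hedged Monte Carlo sketch of the BDG inequalities for p = 2, M = B a
# standard Brownian motion and deterministic T = 1.  Since E[<M,M>_1] = 1,
# the (Doob) constants c_2 = 1 and C_2 = 4 give 1 <= E[(M*_1)^2] <= 4.
rng = np.random.default_rng(1)

n_paths, n_steps = 20000, 1000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)                    # path values at t_1, ..., t_n
running_max = np.max(np.abs(B), axis=1)      # discretised M*_1 = sup_{s<=1}|B_s|
second_moment = np.mean(running_max ** 2)    # estimate of E[(M*_1)^2]
```

The estimate lies strictly between the two bounds, as the inequalities predict.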


Throughout this work we are mainly concerned with Itô processes of the type $X_t = \int_0^t \mu_s \, \mathrm{d}s + \int_0^t \sigma_s \, \mathrm{d}B_s$ with a Brownian motion $B_s$, a continuous process $\sigma_s$ and a locally bounded process $\mu_s$. From Proposition 1.1.3 we know that the quadratic variation of such an Itô process, $\langle \int \sigma_s \, \mathrm{d}B_s, \int \sigma_s \, \mathrm{d}B_s \rangle_t = \int_0^t \sigma_s^2 \, \mathrm{d}s$, is the so-called integrated volatility (or integrated variance), since the drift part is of finite variation and all processes of finite variation have a quadratic variation equal to zero. For the quadratic covariation of two Itô processes with correlated Brownian motions such that $\mathrm{d}\langle B^X, B^Y \rangle_s = \rho_s \, \mathrm{d}s$ with a continuous process $\rho_s$, one obtains $\langle X, Y \rangle_t = \int_0^t \rho_s \sigma_s^X \sigma_s^Y \, \mathrm{d}s$.
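The covariation formula can be checked numerically; the following is a hedged sketch (not from the text) with the simplifying assumption of constant $\rho$, $\sigma^X$, $\sigma^Y$, so that $\langle X, Y \rangle_t = \rho\, \sigma^X \sigma^Y\, t$:

```python
import numpy as np

# Hedged sketch: realized covariance of two driftless Ito processes driven by
# correlated Brownian motions.  Constant parameters are an illustrative
# assumption; then <X,Y>_t = rho * sigma_x * sigma_y * t.
rng = np.random.default_rng(2)

rho, sigma_x, sigma_y, t, n = 0.6, 1.2, 0.8, 1.0, 200000
dt = t / n
dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rng.normal(0.0, np.sqrt(dt), n)
dBX = dW1                                       # Brownian motion B^X
dBY = rho * dW1 + np.sqrt(1 - rho**2) * dW2     # B^Y with d<B^X,B^Y>_s = rho ds
dX = sigma_x * dBX
dY = sigma_y * dBY
realized_cov = np.sum(dX * dY)                  # approximates <X,Y>_t
true_cov = rho * sigma_x * sigma_y * t
```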

Theorem 1.1 (Itô’s lemma). Let $X$ be a continuous semimartingale and $f \in C^2(\mathbb{R})$.

Then $f(X)$ is again a semimartingale and the following Itô formula holds true:
\[
f(X_t) = f(X_0) + \int_0^t f'(X_s) \, \mathrm{d}X_s + \frac{1}{2} \int_0^t f''(X_s) \, \mathrm{d}\langle X \rangle_s\,.
\]
A proof of Itô’s lemma can be found in Protter [2004] (Theorem II.32).
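The formula can be verified on a simulated path; a minimal sketch (illustrative choices, not from the text) with $X = B$ a standard Brownian motion and $f(x) = x^2$, where $\langle B \rangle_s = s$:

```python
import numpy as np

# Hedged numerical check of Ito's formula for f(x) = x^2 and X = B:
# B_t^2 = 2 int_0^t B_s dB_s + t.  The discretised right-hand side
# sum f'(B_{t_{k-1}}) dB_k + 0.5 * sum f''(B_{t_{k-1}}) dt should agree with
# f(B_t) - f(B_0) up to a discretisation error of order sqrt(dt).
rng = np.random.default_rng(3)

f = lambda x: x ** 2
fp = lambda x: 2.0 * x                    # f'
fpp = lambda x: 2.0 * np.ones_like(x)     # f''

t, n = 1.0, 100000
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
lhs = f(B[-1]) - f(B[0])
rhs = np.sum(fp(B[:-1]) * dB) + 0.5 * np.sum(fpp(B[:-1]) * dt)
```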

For processes of finite variation, we state the following Corollary:

Corollary 1.1.5 (Change of variables). Let $A$ be a process with continuous paths of finite variation and $f \in C^1(\mathbb{R})$. Then $f(A)$ is a process of finite variation and it holds true that:

\[
f(A_t) = f(A_0) + \int_0^t f'(A_s) \, \mathrm{d}A_s\,.
\]

For a process $A$ having right-continuous paths of finite variation, the following generalization holds true:

\[
f(A_t) = f(A_0) + \int_0^t f'(A_{s-}) \, \mathrm{d}A_s + \sum_{0 < s \le t} \left( f(A_s) - f(A_{s-}) - f'(A_{s-}) \Delta A_s \right)\,.
\]

We refer to Theorem 31 and Theorem 34 of Chapter 2 in Protter [2004] for the proofs.

The corollary is based on the fact that, for a finite variation process $A$ with continuous paths and a càdlàg process $H$, the stochastic integral $\int H_s \, \mathrm{d}A_s$ is indistinguishable from the Lebesgue-Stieltjes integral computed path-by-path (Theorem II.17 in Protter [2004]).
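Since the integral can be computed path-by-path, the corollary reduces to ordinary calculus along each path. A small deterministic sketch (the choices $A_t = t^2$ and $f = \sin$ are illustrative, not from the text):

```python
import math

# Hedged deterministic illustration of Corollary 1.1.5: for the continuous
# finite variation path A_t = t^2 and f = sin, the change of variables formula
# f(A_1) - f(A_0) = int_0^1 f'(A_s) dA_s holds, the Stieltjes integral being
# int_0^1 cos(s^2) * 2s ds, since dA_s = 2s ds.
def stieltjes_integral(fprime, A, dA_density, t=1.0, n=100000):
    """Midpoint-rule approximation of int_0^t f'(A_s) dA_s."""
    dt = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * dt
        total += fprime(A(s)) * dA_density(s) * dt
    return total

A = lambda s: s ** 2
dA_density = lambda s: 2.0 * s            # density of dA_s w.r.t. ds
lhs = math.sin(A(1.0)) - math.sin(A(0.0)) # f(A_1) - f(A_0) = sin(1)
rhs = stieltjes_integral(math.cos, A, dA_density)
```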

1.1.3 Girsanov’s theorem

Let $B = \{B_t = (B_t^{(1)}, \ldots, B_t^{(d)}), \mathcal{F}_t\}$ be a $d$-dimensional Brownian motion and $X = \{X_t, \mathcal{F}_t\}$ a vector of measurable adapted processes with
\[
\int_0^T \left( X_t^{(i)} \right)^2 \mathrm{d}t < \infty \quad \text{a.s.} \quad \forall\, i \in \{1, \ldots, d\}\,, \; 0 \le T < \infty\,.
\]


The process defined by
\[
Z_t(X) := \exp\left( \sum_{i=1}^{d} \int_0^t X_s^{(i)} \, \mathrm{d}B_s^{(i)} - \frac{1}{2} \int_0^t \|X_s\|^2 \, \mathrm{d}s \right)
\]
is a continuous local martingale. If $Z$ is a martingale, then for all $0 \le T < \infty$, $\tilde{P}_T(A) = \mathbb{E}[\mathbb{1}_A Z_T(X)]$ for all $A \in \mathcal{F}_T$ defines a probability measure $\tilde{P}_T$ on $\mathcal{F}_T$.

Theorem 1.2 (Girsanov). If $Z(X)$ is a martingale, the process $\{\tilde{B}_t, \mathcal{F}_t\}$, $0 \le t < T$, defined by
\[
\tilde{B}_t^{(i)} = B_t^{(i)} - \int_0^t X_s^{(i)} \, \mathrm{d}s\,, \quad i \in \{1, \ldots, d\}\,, \; 0 \le t < T\,,
\]
is a $d$-dimensional Brownian motion on $(\Omega, \mathcal{F}_T, \tilde{P}_T)$ for every fixed $T \in [0, \infty)$.

This version of Girsanov’s theorem is proved in Karatzas and Shreve [1991], page 191 ff. A sufficient criterion to verify that $Z(X)$ is a martingale is Novikov’s criterion
\[
\mathbb{E}\left[ \exp\left( \frac{1}{2} \int_0^T \|X_s\|^2 \, \mathrm{d}s \right) \right] < \infty\,, \quad 0 \le T < \infty\,.
\]

Consider an Itô process $X$ as above with $\sigma_t > 0$ strictly positive. There is an adapted process $\gamma_s$ with $\int_0^T \gamma_s^2 \, \mathrm{d}s < \infty$ a.s., $0 \le T < \infty$, and $\sigma_s \gamma_s + \mu_s = 0$. Assume $\mathbb{E}[\exp((1/2) \int_0^T \gamma_s^2 \, \mathrm{d}s)] < \infty$. By Girsanov’s theorem there is a measure $\tilde{P}$, equivalent to $P$, with
\[
\mathrm{d}P / \mathrm{d}\tilde{P} = \exp\left( - \int_0^t \gamma_s \, \mathrm{d}B_s + \frac{1}{2} \int_0^t \gamma_s^2 \, \mathrm{d}s \right)\,, \tag{1.1}
\]
where $\mathrm{d}P / \mathrm{d}\tilde{P}$ is a $\tilde{P}$-martingale and $\tilde{B}_t := B_t - \int_0^t \gamma_s \, \mathrm{d}s$ is a $\tilde{P}$-Brownian motion. After a change of measure to $\tilde{P}$, the process $X$ has a martingale representation. $\tilde{P}$ is called the equivalent martingale measure to $P$. For $X$ we can now write
\[
\mathrm{d}X_t = \mu_t \, \mathrm{d}t + \sigma_t \, \mathrm{d}B_t = \sigma_t \, \mathrm{d}\tilde{B}_t\,.
\]

For non-random bounded $\mu_t$ and $\sigma_t$, the conditions that $\sigma_t > 0$ is non-degenerate and that $|\mu_t| / \sigma_t^2$ is locally integrable are sufficient. This is proved in Karatzas and Shreve [1991], page 339 ff. For further specific cases and the assumptions under which the Girsanov transformation is possible we refer to Liptser and Shiryaev [2001].
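The change of measure can be illustrated by Monte Carlo reweighting. The following is a hedged sketch (not from the text) with the simplifying assumption of a constant $\gamma$ and $T = 1$: the density $Z_1(\gamma) = \exp(\gamma B_1 - \gamma^2/2)$ has expectation one under $P$, and reweighting by $Z_1$ makes $\tilde{B}_1 = B_1 - \gamma$ centered, as it is a $\tilde{P}$-Brownian motion evaluated at time $1$.

```python
import numpy as np

# Hedged Monte Carlo sketch of Girsanov's theorem with constant gamma, T = 1:
# dPtilde/dP = Z_1 = exp(gamma * B_1 - gamma^2 / 2).  Then E_P[Z_1] = 1 and
# E_P[Z_1 * (B_1 - gamma)] = E_Ptilde[Btilde_1] = 0.
rng = np.random.default_rng(4)

gamma, n = 0.8, 400000
B1 = rng.normal(0.0, 1.0, n)                 # B_1 under P
Z = np.exp(gamma * B1 - 0.5 * gamma ** 2)    # density dPtilde/dP on F_1
mean_Z = np.mean(Z)                          # should be close to 1
tilted_mean = np.mean(Z * (B1 - gamma))      # E_Ptilde[Btilde_1], close to 0
```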

We will apply Girsanov’s theorem to prove results about weak convergence and convergence in probability assuming, without loss of generality, that the drift is equal to zero ($\mu_t \equiv 0$): if, for fixed $T$, the results hold under the equivalent martingale measure $\tilde{P}$, they also hold true under $P$. In particular, if we find a consistent estimator $\hat{\theta}_n$ for a parameter $\theta$ and can prove an asymptotic normality result $n^\alpha (\hat{\theta}_n - \theta) \rightsquigarrow m + \sqrt{\mathrm{AVAR}} \cdot Z$ with $Z \sim N(0,1)$ under $\tilde{P}$, consistency and the rate of convergence will also hold under the equivalent measure $P$. However, the asymptotic law under $P$ may differ from the asymptotic distribution under $\tilde{P}$; we only know that the normal distribution is preserved under the change of measure. In the next section, we will introduce the concept of stable convergence in law, which will furthermore guarantee that the asymptotic distributions of the limiting random variable are equal under $P$ and $\tilde{P}$ (see Mykland and Zhang [2009]).

We also refer to Revuz and Yor [1991], page 327 ff., and Section 8.6 in Øksendal [2005] for further information concerning the change of measure.

1.1.3 The time-change theorem

In this subsection the theorem of Dambis and Dubins-Schwarz is presented, which gives a fundamental connection between continuous local martingales and Brownian motion.

In particular, every continuous local martingale has a representation as a time-changed Brownian motion, which emphasizes the key role of Brownian motion in continuous martingale theory.

Let $A_t$ be a non-decreasing right-continuous process, adapted to a right-continuous filtration $(\mathcal{F}_t)$. Then
\[
T_\tau = \inf\{t : A_t > \tau\}\,, \quad \inf \emptyset := +\infty\,,
\]
defines a non-decreasing right-continuous family of stopping times. $T_\tau$, regarded as a function of $\tau$, is called the right-continuous inverse of $A$. For all $t$ the random variable $A_t$ is an $(\mathcal{F}_{T_\tau})$-stopping time, since $A_t = \inf\{\tau : T_\tau > t\}$.

Definition 1.1.6. A family of stopping times $T_\tau$ is called a time-change if the mapping $\tau \mapsto T_\tau$ is almost surely non-decreasing and right-continuous.

Theorem 1.3 (Dambis, Dubins-Schwarz). Let $M$ be an $(\mathcal{F}_\tau, P)$-continuous local martingale with $M_0 = 0$ and $\langle M, M \rangle_\infty = \infty$, and let $T_\tau$ be the time-change
\[
T_\tau = \inf\{s : \langle M, M \rangle_s > \tau\}\,.
\]
Then $B_\tau = M_{T_\tau}$ is an $(\mathcal{F}_{T_\tau})$-Brownian motion and $M_\tau = B_{\langle M, M \rangle_\tau}$. This Brownian motion $B$ is called the Dambis, Dubins-Schwarz (DDS) Brownian motion of $M$.

For the proof, one shows that $B$ is a continuous local $(\mathcal{F}_{T_\tau})$-martingale with $\langle B, B \rangle_\tau = \langle M, M \rangle_{T_\tau} = \tau$. The proof that it is a Brownian motion is based on Lévy’s theorem that standard Brownian motion is the only continuous local martingale with $\langle B \rangle_t = t$ (cf. Theorem II.39 in Protter [2004]). Since $B_{\langle M, M \rangle_\tau} = M_{T_{\langle M, M \rangle_\tau}}$ and $M_{T_{\langle M, M \rangle_\tau}} = M_\tau$, one concludes that $B_{\langle M, M \rangle} = M$. We refer to Theorem V.1.6 in Revuz and Yor [1991] for a thorough proof of the theorem.
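The time-change construction can be illustrated numerically. The following is a hedged Monte Carlo sketch (not from the text); the integrand $\sigma_s = \sqrt{1 + B_s^2}$ is an arbitrary choice that makes the time change genuinely random while guaranteeing $\langle M, M \rangle_t \ge t$:

```python
import numpy as np

# Hedged Monte Carlo sketch of the DDS time change: for the continuous local
# martingale M_t = int_0^t sqrt(1 + B_s^2) dB_s one has
# <M,M>_t = int_0^t (1 + B_s^2) ds >= t, so T_1 = inf{s : <M,M>_s > 1} <= 1
# almost surely, and by the theorem B_1 = M_{T_1} should be N(0,1)-distributed.
rng = np.random.default_rng(5)

n_paths, n_steps, horizon = 2000, 1500, 1.5
dt = horizon / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints
sig2 = 1.0 + B_prev ** 2                                  # sigma_s^2 along paths
M = np.cumsum(np.sqrt(sig2) * dB, axis=1)                 # Euler scheme for M
qv = np.cumsum(sig2 * dt, axis=1)                         # <M,M> along the grid
first = np.argmax(qv > 1.0, axis=1)                       # discretised T_1
samples = M[np.arange(n_paths), first]                    # M_{T_1}, approx N(0,1)
```

Across paths, the sampled values $M_{T_1}$ have mean close to zero and variance close to one, up to discretisation and Monte Carlo error.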

The restriction to $\langle M, M \rangle_\infty = \infty$ ensures that $T_\tau$ is almost surely finite, so that the DDS Brownian motion can be defined on $(\Omega, \tilde{\mathcal{F}}_\tau, P)$ with $\tilde{\mathcal{F}}_\tau = \mathcal{F}_{T_\tau}$. If $\langle M, M \rangle_\infty < \infty$, $M$ has a representation as a time-changed Brownian motion on a suitable enlargement of the underlying probability space.


Definition 1.1.7. An enlargement of the filtered probability space $(\Omega, \mathcal{F}_\tau, P)$ is another probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}_\tau, \tilde{P})$ together with a mapping $\sigma : \tilde{\Omega} \longrightarrow \Omega$ with $\sigma^{-1}(\mathcal{F}_\tau) \subset \tilde{\mathcal{F}}_\tau$ for all $\tau$ and $\sigma(\tilde{P}) = P$. The latter means that $\tilde{P}(\sigma^{-1}(A)) = P(A)$ for all $A \in \mathcal{F}$.

A process $X$ defined on $\Omega$ may be viewed as defined on $\tilde{\Omega}$ by setting $X(\tilde{\omega}) = X(\omega)$ if $\sigma(\tilde{\omega}) = \omega$.

Proposition 1.1.8. Assume all assumptions of Theorem 1.3, but $\langle M, M \rangle_\infty < \infty$. There is an enlargement $(\tilde{\Omega}, \tilde{\mathcal{F}}_\tau, \tilde{P})$ of $(\Omega, \mathcal{F}_\tau, P)$ and a Brownian motion $B'$ defined on $\tilde{\Omega}$, independent of $M$, such that
\[
B_\tau =
\begin{cases}
M_{T_\tau} & \text{if } \tau < \langle M, M \rangle_\infty \\
M_\infty + B'_{\tau - \langle M, M \rangle_\infty} & \text{if } \tau \ge \langle M, M \rangle_\infty
\end{cases}
\]
is a standard Brownian motion.

The theorem and the proposition can also be extended to the $d$-dimensional setting (Knight’s theorem). We refer to Section V.1 of Revuz and Yor [1991] for the proofs and further results on time-changed martingales. Section VIII.2 of the same book contains results on asymptotics and weak convergence of time-changed Brownian motions. We restrict ourselves to the following Theorem 1.4, which we call the asymptotic Knight theorem:

Theorem 1.4 (asymptotic Knight theorem). Let $M^n$ be a sequence of continuous local martingales with $(M^n)_0 = 0$ and $\langle M^n, M^n \rangle_\infty = \infty$. Let $T_\tau^n$ be the time-change associated with $M^n$ and $B^n$ the DDS Brownian motion of $M^n$. Then the sequence $B^n$ converges weakly to a limiting Brownian motion $B$. This holds true in the $d$-dimensional case if, furthermore, $\langle M^{n,i}, M^{n,j} \rangle_{T_\tau^{n,i}}$ and $\langle M^{n,i}, M^{n,j} \rangle_{T_\tau^{n,j}}$ converge to zero in probability as $n \to \infty$, for all $i \ne j$, $i, j \in \{1, \ldots, d\}$.

1.1.4 The Cramér-Wold device

For the proofs of weak convergence of weighted sums of $d$ random variables, the following relation to the weak convergence of the $d$-dimensional random vectors will be useful:

Theorem 1.5 (Cramér-Wold). Let $(X_n)$ be a sequence of random vectors in $\mathbb{R}^d$. A necessary and sufficient condition for the weak convergence $X_n \rightsquigarrow Y$ with an $\mathbb{R}^d$-valued limiting random vector $Y$ is the weak convergence
\[
\sum_{k=1}^{d} \lambda_k X_n^{(k)} \rightsquigarrow \sum_{k=1}^{d} \lambda_k Y^{(k)}
\]
for all $(\lambda_1, \ldots, \lambda_d) \in \mathbb{R}^d$, where $X_n^{(k)}$ and $Y^{(k)}$ denote the $k$-th components of the random vectors $X_n$ and $Y$, respectively.

The theorem is proved in Billingsley [1991] as Theorem 7.7.
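As an illustration (a hedged sketch added here, not part of the source), a bivariate central limit theorem can be checked through one-dimensional projections: for $X_n = n^{-1/2} \sum_i (U_i, U_i + V_i)$ with centered i.i.d. $U_i, V_i$, every projection $\lambda \cdot X_n$ is approximately $N(0, \lambda^\top \Sigma \lambda)$, where $\Sigma$ is the covariance matrix of $(U_1, U_1 + V_1)$.

```python
import numpy as np

# Hedged sketch of the Cramer-Wold device: weak convergence of the vectors
# X_n = n^{-1/2} sum_i (U_i, U_i + V_i) to N(0, Sigma) is probed through the
# one-dimensional projections lambda . X_n, whose empirical variance should
# approach lambda^T Sigma lambda.  U_i, V_i ~ Uniform(-1, 1) (variance 1/3)
# are illustrative choices; here Sigma = [[1/3, 1/3], [1/3, 2/3]].
rng = np.random.default_rng(6)

n, reps = 500, 4000
U = rng.uniform(-1.0, 1.0, (reps, n))
V = rng.uniform(-1.0, 1.0, (reps, n))
X = np.stack([U.sum(axis=1), (U + V).sum(axis=1)], axis=1) / np.sqrt(n)

Sigma = np.array([[1/3, 1/3], [1/3, 2/3]])
lam = np.array([2.0, -1.0])                 # one test direction lambda
proj = X @ lam                              # lambda . X_n, one per replication
target_var = lam @ Sigma @ lam              # = 2/3
sample_var = proj.var()
```

In a full check one would repeat this for several directions $\lambda$; the theorem states that convergence of all such projections is equivalent to convergence of the vectors themselves.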