
In this section the state space is R^{p×q}, unless otherwise stated, and it is denoted by E. We are going to present results regarding the space of uniformly integrable G−martingales,¹⁴ as well as results for its subspace containing all the square-integrable G−martingales. Recall that by Notation I.11 the former is denoted by M(G;E), while the latter by H²(G;E).

We start with the lemma that verifies that indeed H²(G;E) ⊂ M(G;E).

Lemma I.56. If X ∈ H²(G;E), then X ∈ M(G;E).

Proof. We apply Theorem I.27 for the function Φ : R+ ∋ x ↦ x² ∈ R+. Obviously, Φ ∈ Y. Moreover,

sup_{t∈R+} E[Φ(‖X_t‖₁)] = sup_{t∈R+} E[‖X_t‖₁²] ≤ pq sup_{t∈R+} E[‖X_t‖₂²] < ∞.

Theorem I.57. Let X be an E−valued stochastic process and X_∞(ω) := lim_{t↑∞} X_t(ω), for every ω ∈ Ω.

(i) If X ∈ M(G;E), then the random variable X_∞ is P−almost surely well-defined and lies in L¹(G;E). Moreover, X_t → X_∞ in L¹(G;E) and X_t = E[X_∞|G_t] P−a.s. for every t ∈ R+, i.e. the G−martingale X is (right-)closed by its terminal value X_∞.

(ii) If X ∈ M(G;E), then X ∈ H²(G;E) if and only if ‖X_∞‖_{L²(G;E)} < ∞. In this case,

sup_{t∈R+} ‖X_t‖²_{L²(G;E)} ≤ ‖X_∞‖²_{L²(G;E)} ≤ pq sup_{t∈R+} ‖X_t‖²_{L²(G;E)}.

¹³ As usual we identify the P−a.s. equal random variables.

¹⁴ Recall that the G−stochastic basis is already given in Section I.1.

I.3. UNIFORMLY INTEGRABLE MARTINGALES 17

Proof. (i) For every pair (i,j), with 1 ≤ i ≤ p, 1 ≤ j ≤ q, we apply Dellacherie and Meyer [27, Theorem VI.6]. More precisely, for every pair (i,j) there exists Ω_{ij} ⊂ Ω with P(Ω_{ij}) = 1 such that X^{ij}_∞(ω) is well-defined for every ω ∈ Ω_{ij},

X^{ij}_t → X^{ij}_∞ in L¹(G;R) and X^{ij}_t = E[X^{ij}_∞|G_t] P−a.s. for every t ∈ R+.

By the above convergence we obtain

‖X_t − X_∞‖_{L¹(G;E)} = Σ_{i=1}^p Σ_{j=1}^q E[ |X^{ij}_t − X^{ij}_∞| ] → 0 as t ↑ ∞.

Define now Ω_E := ∩{Ω_{ij}, 1 ≤ i ≤ p and 1 ≤ j ≤ q}. Then P(Ω_E) = 1 and X^{ij}_∞(ω) is well-defined for every 1 ≤ i ≤ p, 1 ≤ j ≤ q and for every ω ∈ Ω_E. Therefore, also the property X_t = E[X_∞|G_t] holds P−almost surely.

(ii) Let X ∈ H²(G;E), i.e. sup_{t∈R+} E[‖X_t‖₂²] < ∞. We prove initially that X^{ij}_∞ ∈ L²(G;R) for 1 ≤ i ≤ p, 1 ≤ j ≤ q. The obvious domination

sup_{t∈R+} E[|X^{ij}_t|²] ≤ sup_{t∈R+} E[‖X_t‖₂²] < ∞

enables us to apply He et al. [35, Theorem 6.8.1] element-wise and, consequently, to obtain that

E[|X^{ij}_∞|²] = sup_{t∈R+} E[|X^{ij}_t|²] for 1 ≤ i ≤ p, 1 ≤ j ≤ q. (I.15)

But this is equivalent to X_∞ ∈ L²(G;E).

Conversely, let X_∞ ∈ L²(G;E), i.e. E[|X^{ij}_∞|²] < ∞ for 1 ≤ i ≤ p, 1 ≤ j ≤ q. By He et al. [35, Theorem 6.8.1] applied element-wise we obtain that Equality (I.15) holds. Therefore,

sup_{t∈R+} ‖X_t‖²_{L²(G;E)} = sup_{t∈R+} Σ_{i=1}^p Σ_{j=1}^q E[|X^{ij}_t|²] ≤ Σ_{i=1}^p Σ_{j=1}^q E[|X^{ij}_∞|²] = ‖X_∞‖²_{L²(G;E)}, (I.16)

where the inequality follows from (I.15). Moreover,

E[‖X_∞‖₂²] = Σ_{i=1}^p Σ_{j=1}^q E[|X^{ij}_∞|²] = Σ_{i=1}^p Σ_{j=1}^q sup_{t∈R+} E[|X^{ij}_t|²]   (by (I.15))
= sup{ Σ_{i=1}^p Σ_{j=1}^q E[|X^{ij}_{t_{ij}}|²], t_{ij} ∈ R+ for 1 ≤ i ≤ p, 1 ≤ j ≤ q }
≤ Σ_{i=1}^p Σ_{j=1}^q sup_{t∈R+} E[|X^{ij}_t|²] ≤ Σ_{i=1}^p Σ_{j=1}^q sup_{t∈R+} E[‖X_t‖₂²] = pq sup_{t∈R+} E[‖X_t‖₂²].

The above inequality and (I.16) allow us to conclude.

Remark I.58. In view of the previous theorem, for an X ∈ M(G;E) we can write (X_t)_{t∈R+} or (X_t)_{t∈R̄+} interchangeably. Observe that, due to the “left-continuity” of X at the symbol ∞, the random variable sup_{t∈R̄+} |X_t| is P−a.s. well-defined and P−a.s. equal to sup_{t∈R+} |X_t|.

Definition I.59. An E−valued and G⊗B(R+)−measurable process M is of class (D) for the filtration G if the family {‖M_τ‖₁ 1_{[τ<∞]}, τ a G−stopping time} is uniformly integrable.

Theorem I.60 (Doob–Meyer Decomposition). If X is a real-valued, right-continuous G−submartingale of class (D), then there exists a unique, up to indistinguishability, A ∈ V⁺_pred(G;R) with A₀ = 0 such that X − A ∈ M(G;R).

Corollary I.61. Let X ∈ H²(G;R). Then there exists a unique, up to indistinguishability, ⟨X⟩ ∈ V⁺_pred(G;R) with ⟨X⟩₀ = 0 such that X² − ⟨X⟩ ∈ M(G;R).

Proof. It is immediate by Jensen's Inequality that the process X² is a G−submartingale. Then we conclude by Theorem I.60.
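The discrete-time analogue of this corollary can be checked by hand. The sketch below is an illustration under simplifying assumptions, not the construction used in the text, and all names are ours: for a symmetric ±1 random walk X the predictable compensator of the submartingale X² is ⟨X⟩_n = n, and we verify E[X_n² − n] = 0 exactly by enumerating all equally likely sign paths.

```python
from itertools import product

# Discrete Doob-Meyer sketch: for a symmetric +/-1 random walk X the
# compensator of X^2 is <X>_n = n, so X_n^2 - n is centred. We check this
# exactly over all 2^n equiprobable sign paths.

def doob_meyer_check(n):
    paths = list(product([-1, 1], repeat=n))
    # E[X_n^2] computed by exact enumeration
    mean_sq = sum(sum(p) ** 2 for p in paths) / len(paths)
    return mean_sq - n  # should be exactly 0

print(doob_meyer_check(6))  # -> 0.0
```

The same enumeration, restricted to paths sharing a fixed initial segment, confirms the conditional martingale property, which is the discrete content of the Doob–Meyer decomposition.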

Theorem I.62. Let X, Y ∈ H²(G;R). Then, there exists a unique, up to indistinguishability, process ⟨X,Y⟩^G ∈ A_pred(G;R) such that XY − ⟨X,Y⟩^G ∈ M(G;R). Moreover,

⟨X,Y⟩^G = (1/4) ( ⟨X+Y, X+Y⟩^G − ⟨X−Y, X−Y⟩^G ).

Furthermore, ⟨X,X⟩^G is non-decreasing and it admits a continuous version if and only if X is G−quasi-left-continuous.

Proof. See Jacod and Shiryaev [41, Theorem I.4.2].

The process ⟨X,Y⟩^G is called the predictable quadratic variation or the angle bracket of the pair (X,Y). When no confusion can arise we will omit the symbol of the filtration from the notation. Moreover, ⟨X,X⟩ will also be denoted by ⟨X⟩.

Notation I.63. When X ∈ H²(G;R^p), Y ∈ H²(G;R^q) we will denote by ⟨X,Y⟩ the element of A_pred(G;E) for which ⟨X,Y⟩^{ij} = ⟨X^i, Y^j⟩.

Theorem I.64. Let A ∈ A(G;R). There exists a process, called the compensator of A and denoted by A^{(p,G)}, which is unique up to an evanescent set, and which is characterised by being an element of A_pred such that A − A^{(p,G)} ∈ M(G;R).

Proof. See Jacod and Shiryaev [41, Theorem I.3.18].

Corollary I.65. Let X ∈ M(G;R) ∩ V_pred(G;R). Then X is indistinguishable from the zero process.

Proof. See Jacod and Shiryaev [41, Corollary I.3.16].

I.4. Stochastic integration

This section is devoted to introducing the notation and the main results regarding integration with respect to a finite variation process, a square-integrable martingale and an integer-valued random measure.

The reader may recall that we work with the G−stochastic basis, where G is an arbitrary filtration, until the end of the chapter.

I.4.1. Stochastic integration with respect to an increasing process. This section is devoted to Lebesgue–Stieltjes integration either with respect to a finite variation function or with respect to a process of finite variation. In the latter case notation has been provided on page v. Nevertheless, we will introduce additional notation which will be used interchangeably. Let ϱ be a non-negative measure defined on (R+, B(R+)) and let G be any finite-dimensional space. We will denote the Lebesgue–Stieltjes integral of any measurable map f : (R+, B(R+)) → (G, B(G)) by

∫_{(u,t]} f(s) ϱ(ds) and ∫_{(u,∞)} f(s) ϱ(ds), for any u, t ∈ R+.

The above integrals, as well as every Lebesgue–Stieltjes integral, are to be understood in a component-wise sense. In case ϱ is a finite measure with associated distribution function F^ϱ(·) := ϱ([0,·]), we will indifferently denote the above integrals by

∫_{(u,t]} f(s) dF^ϱ_s and ∫_{(u,∞)} f(s) dF^ϱ_s, for any u, t ∈ R+.

When there is no confusion as to which measure the distribution function F^ϱ is associated to, we will omit the upper index and write F. Conversely, if the distribution function is given, say F, then we will denote the associated Borel measure by μ_F.

When ϱ̃ is a signed measure with Jordan–Hahn decomposition ϱ̃ = ϱ̃⁺ − ϱ̃⁻, then we define

∫_{(u,t]} f(s) ϱ̃(ds) := ∫_{(u,t]} f(s) ϱ̃⁺(ds) − ∫_{(u,t]} f(s) ϱ̃⁻(ds), for any u, t ∈ R+,

and analogously

∫_{(u,∞)} f(s) ϱ̃(ds) := ∫_{(u,∞)} f(s) ϱ̃⁺(ds) − ∫_{(u,∞)} f(s) ϱ̃⁻(ds), for any u ∈ R+.


If V^ϱ̃ is the finite variation function associated to the signed measure ϱ̃, we will denote the above integrals interchangeably by

∫_{(u,t]} f(s) dV^ϱ̃_s and ∫_{(u,∞)} f(s) dV^ϱ̃_s, for any u, t ∈ R+.

More generally, for any measure ϱ̄ on (R+×E, B(R+)⊗B(E)) and for any measurable map g : (R+×E, B(R+)⊗B(E)) → (G, B(G)) we will denote the Lebesgue–Stieltjes integral by

∫_{(u,t]×A} g(s,x) ϱ̄(ds,dx) and ∫_{(u,∞)×A} g(s,x) ϱ̄(ds,dx), for any t, u ∈ R+, A ∈ B(E).

Conversely, if A ∈ V⁺(G) is given, then we will denote the Borel measure associated to A(ω) by μ_{A(ω)}, for every ω ∈ Ω. Then, μ_A is P−almost surely well-defined. Finally, the integration with respect to a finite variation process will be understood pathwise. More precisely, if A ∈ V(G;R) and H is an E−valued random function such that H(ω,·) is Borel measurable, then we define the random integral, denoted by ∫_{(0,·]} H_s dA_s, as follows:

∫_{(0,·]} H_s dA_s(ω) := ∫_{(0,·]} H_s(ω) dA_s(ω), if ∫_{(0,·]} ‖H_s(ω)‖₁ dVar(A)_s(ω) < ∞, and ∞ otherwise.

Clearly, if H is assumed G−optional, then ∫_{(0,·]} H_s dA_s is a process.
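For a pure-jump path of finite variation the pathwise integral above reduces to a finite sum over jump times. A minimal sketch, assuming A is piecewise constant and represented by its jump times and sizes; all function names are illustrative:

```python
# Pathwise Lebesgue-Stieltjes integral for a piecewise-constant path A:
# int_(0,t] H_s dA_s picks up exactly the jumps with 0 < s <= t.

def stieltjes_integral(H, jump_times, jump_sizes, t):
    """Integrate H over (0, t] against dA, with A piecewise constant."""
    return sum(H(s) * dA for s, dA in zip(jump_times, jump_sizes) if 0 < s <= t)

# Var(A)_t is obtained the same way with |dA|:
def variation(jump_times, jump_sizes, t):
    return sum(abs(dA) for s, dA in zip(jump_times, jump_sizes) if 0 < s <= t)

# path with jumps 2, -1, 5 at times 1, 2, 3; integrate H(s) = s over (0, 2.5]
print(stieltjes_integral(lambda s: s, [1.0, 2.0, 3.0], [2.0, -1.0, 5.0], 2.5))
# -> 1*2 + 2*(-1) = 0.0
```

The finiteness condition on ∫ ‖H_s‖ dVar(A)_s in the definition above is automatic here, since the sums are finite.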

I.4.2. The Itô stochastic integral. In this section we present the main results concerning Itô stochastic integration with respect to an ℓ−dimensional square-integrable martingale. We follow Jacod and Shiryaev [41, Section III.6.a]. We assume that the real-valued case is familiar to the reader; if not, they may consult Jacod and Shiryaev [41, Section I.4d]. For the following, the arbitrary process X ∈ H²(G;R^ℓ) is fixed. In Section I.3 we have seen that the R^{ℓ×ℓ}−valued process ⟨X⟩ is well-defined and such that ⟨X⟩ ∈ A(G;R^{ℓ×ℓ}). Consider, now, a factorization of ⟨X⟩, that is to say, write

⟨X⟩_· = ∫_{(0,·]} (d⟨X⟩_s / dF_s) dF_s, (I.17)

where F is an increasing, G−predictable and càdlàg process and (d⟨X⟩_· / dF_·)^{ij} := d⟨X^i, X^j⟩_· / dF_· for 1 ≤ i, j ≤ ℓ. It can be proven by standard arguments that (d⟨X⟩_t / dF_t)_{t∈R+} is a G−predictable process with d⟨X⟩_t / dF_t being a symmetric, non-negative definite element of R^{ℓ×ℓ}, for every t ∈ R+.

Notation I.66. We will denote by

H²(G,X;R^{p×ℓ}) := { Z : (Ω×R+, P_G) → (R^{p×ℓ}, B(R^{p×ℓ})), ‖Z‖_{H²(G,X;R^{p×ℓ})} < ∞ },

where

‖Z‖²_{H²(G,X;R^{p×ℓ})} := E[ ∫_{(0,∞)} Tr[ Z_s (d⟨X⟩_s/dF_s) Z_sᵀ ] dF_s ].
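On a finite time grid the factorization (I.17) and the norm above can be made concrete. The sketch below assumes the increments of ⟨X⟩ are given per step by covariance matrices C_k and chooses F to be the trace of ⟨X⟩, so that the density d⟨X⟩/dF at step k is C_k/Tr(C_k); the names `h2_norm_sq`, `Z_steps`, `C_steps` are ours, not notation from the text.

```python
import numpy as np

def h2_norm_sq(Z_steps, C_steps):
    """Discrete version of sum_k Tr[Z_k (d<X>/dF)_k Z_k^T] dF_k with F = tr<X>."""
    total = 0.0
    for Z, C in zip(Z_steps, C_steps):
        dF = np.trace(C)          # increment of F := tr<X>
        D = C / dF                # density d<X>/dF: symmetric, non-negative definite
        total += np.trace(Z @ D @ Z.T) * dF
    return total

Z_steps = [np.array([[1.0, 0.0]]), np.array([[0.0, 2.0]])]       # p = 1, l = 2
C_steps = [np.eye(2), np.array([[2.0, 0.0], [0.0, 1.0]])]        # increments of <X>
print(round(h2_norm_sq(Z_steps, C_steps), 10))  # -> 5.0
```

Note that the choice of F drops out: the weighted sum collapses to Σ_k Tr[Z_k C_k Z_kᵀ], which mirrors the statement that the norm does not depend on the chosen factorization.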

The following theorem verifies that the above space provides a suitable class of integrands for the Itô integral with respect to an ℓ−dimensional X.

Theorem I.67. For every Z ∈ H²(G,X;R^{p×ℓ}) the Itô stochastic integral of Z with respect to X under the filtration G, denoted indifferently by Z·X or by ∫_0^· Z_s dX_s,¹⁵ is a well-defined R^{p×1}−valued process.

Moreover,

(i) If Z ∈ H²(G,X;R^{p×ℓ}) and bounded, then (Z·X)^i = Σ_{j=1}^ℓ Z^{ij}·X^j for every i = 1,...,p.

(ii) For every Z ∈ H²(G,X;R^{p×ℓ}), there exists a sequence (Z^k)_{k∈N} ⊂ H²(G,X;R^{p×ℓ}), where Z^k is bounded for every k ∈ N, such that ‖Z^k·X − Z·X‖_{L²(G;R^p)} → 0 as k → ∞.

(iii) (Z·X)ᵀ ∈ H²(G;R^p), for every Z ∈ H²(G,X;R^{p×ℓ}).

¹⁵ We have abstained from using the transpose of X in the notation.

(iv) If Z¹, Z² ∈ H²(G,X;R^ℓ), then

⟨Z¹·X, Z²·X⟩_· = ∫_{(0,·]} Z¹_s (d⟨X⟩_s/dF_s) (Z²_s)ᵀ dF_s.

(v) Let Y ∈ H²(G;R) and define the R^ℓ−valued G−predictable process r^Y := (r^{Y,1},...,r^{Y,ℓ}) by

⟨Y, X^i⟩_· = ∫_{(0,·]} r^{Y,i}_s dF_s, for every i = 1,...,ℓ.

Then, for Z ∈ H²(G,X;R^ℓ), Z·X is the unique, up to indistinguishability, square-integrable martingale with (Z·X)₀ = 0 such that

⟨Z·X, Y⟩_· = ∫_{(0,·]} Z_s (r^Y_s)ᵀ dF_s, for every Y ∈ H²(G;R).

(vi) For Z ∈ H²(G,X;R^{p×ℓ}), the jump process of (Z·X)^i is indistinguishable from Σ_{n=1}^ℓ Z^{in} ∆X^n. In other words, the i−th element of the purely discontinuous part of Z·X can be described by Σ_{n=1}^ℓ Z^{in}·X^{d,n}. This allows us to denote (Z·X)^d by Z·X^d.

(vii) If X ∈ H^{2,c}(G;R^ℓ), then for every Z ∈ H²(G,X;R^ℓ) it holds that Z·X ∈ H^{2,c}(G;R).

Proof. We use Jacod and Shiryaev [41, Theorem III.6.4] to verify that (Z·X)^i is well-defined, for i = 1,...,p. For (ii) we use again, element-wise, [41, Theorem III.6.4 a)]. For (vi) see Jacod and Shiryaev [41, Proposition III.6.9]. The other parts are the same as in [41, Theorem III.6.4 a)], but in a different order.

Notation I.68. To the class H²(G,X;R^{p×ℓ}) we associate the space L²(G,X;R^p), where

L²(G,X;R^p) := { W·X, W ∈ H²(G,X;R^{p×ℓ}) }. (I.18)

In other words, L²(G,X;R^p) is the space of the R^p−valued Itô stochastic integrals with respect to the pair (G,X).

When F is a filtration such that X ∈ H²(F;R^ℓ), in other words when X is both G− and F−adapted, for a Z ∈ H²(F,X;R^{p×ℓ}) we will denote the Itô stochastic integral of Z with respect to X under the filtration G, resp. under the filtration F, by Z·X^G, resp. Z·X^F.

We close this subsection with the following well-known result.

Theorem I.69 (Galtchouk–Kunita–Watanabe Decomposition). Let X ∈ H²(G;R^ℓ) and Y ∈ H²(G;R^p). Then, there exists Z ∈ H²(G,X;R^{p×ℓ}) such that

Y^j = (Z·X)^j + N^j for every j = 1,...,p,

where ⟨N^j, X^i⟩ = 0 for i = 1,...,ℓ and j = 1,...,p. The stochastic integral Z·X does not depend upon the chosen version of Z.

Proof. See Jacod [40, Chapitre IV, Section 2].
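In discrete time the Galtchouk–Kunita–Watanabe decomposition is the least-squares projection of the increments of Y on those of X. The following sketch is a linear-regression analogue under simplifying assumptions (scalar X and Y, constant Z), not the general construction, and all names are ours; it recovers Z from simulated increments and checks that the remainder N is orthogonal to X.

```python
import numpy as np

rng = np.random.default_rng(0)
dX = rng.standard_normal(1000)           # increments of the martingale X
dN_true = rng.standard_normal(1000)      # increments orthogonal to X
dY = 0.7 * dX + dN_true                  # increments of Y

# discrete analogue of Z = d<Y,X>/d<X>: the least-squares coefficient
Z = np.dot(dY, dX) / np.dot(dX, dX)
dN = dY - Z * dX                         # increments of the remainder N

# the normal equation forces the empirical analogue of <N, X> = 0
print(abs(np.dot(dN, dX)) < 1e-9)  # -> True
```

The estimate Z lands near the true coefficient 0.7, and the orthogonality of dN to dX is exact up to floating-point error, mirroring ⟨N, X⟩ = 0 in the theorem.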

I.4.3. Stochastic integral with respect to a random measure. In this section we describe the construction of the stochastic integral with respect to a compensated integer-valued random measure and we present the results we are going to use in the next chapters. We will follow closely Jacod and Shiryaev [41, Section II.1]. However, Cohen and Elliott [20, Chapter 13] and He et al. [35, Chapter 11] will also be useful. The state space will be assumed to be E, unless otherwise explicitly stated.

Definition I.70. A random measure μ on R+×R^ℓ is a family μ = (μ(ω; dt,dx))_{ω∈Ω} of measures on (R+×R^ℓ, B(R+)⊗B(R^ℓ)) satisfying μ(ω; {0}×R^ℓ) = 0 identically.

We proceed to introduce some notation and definitions.

Notation I.71. Write Ω̃ := Ω×R+×R^ℓ.
• (Ω̃, P̃_G) := (Ω̃, P_G ⊗ B(R^ℓ)). A measurable function U : (Ω̃, P̃_G) → (E, B(E)) will be said to be a G−predictable function.
• (Ω̃, Õ) := (Ω̃, O ⊗ B(R^ℓ)). A measurable function U : (Ω̃, Õ) → (E, B(E)) will be said to be a G−optional function.


Definition I.72. Given a random measure μ and a G−optional function W, we define the stochastic integral of W with respect to the random measure μ to be the E−valued process

W∗μ_·(ω) := ∫_{(0,·]×R^ℓ} W(ω,t,x) μ(ω; dt,dx), if ∫_{(0,·]×R^ℓ} ‖W(ω,t,x)‖₁ μ(ω; dt,dx) < ∞, and ∞ otherwise.

Observe that we have defined the integral ω−wise. This is indeed eligible, because W is assumed to be a G−optional function. Therefore, W(ω,·,·) is B(R+)⊗B(R^ℓ)−measurable for each ω. Observe, moreover, that for a real-valued, positive G−optional function W we have W∗μ ∈ V⁺(G;R).

Remark I.73. This point is also a good chance to adopt a convenient abuse of notation. Let a function h : (R^ℓ, B(R^ℓ)) → (E, B(E)) be given. We will treat h also as the G−predictable function Ω̃ ∋ (ω,t,x) ↦ h(x) ∈ E. Analogously, if A is an E−valued G−predictable, resp. G−optional, process, we will treat it also as Ω̃ ∋ (ω,t,x) ↦ A_t(ω) ∈ E, which is a G−predictable, resp. G−optional, function.

Definition I.74. A random measure μ is called G−optional, resp. G−predictable, if the process W∗μ is a G−optional, resp. G−predictable, process for every G−optional, resp. G−predictable, function W.

We introduce now the set from which we will choose the random measures we are going to work with.

Definition I.75. A G−optional random measure will be said to be G−predictably σ−integrable, or interchangeably P̃_G−σ−finite, if there exists a positive G−predictable function V such that V∗μ ∈ A⁺(G;R). The set of G−optional, G−predictably σ−integrable random measures will be denoted by Ã_σ(G).

Observe that an equivalent condition for a random measure to lie in Ã_σ(G) is the existence of a P̃_G−measurable partition (A_k)_{k∈N} of Ω̃ such that 1_{A_k}∗μ ∈ A⁺(G;R) for every k ∈ N.

Definition I.76. A random measure μ will be called an integer-valued G−random measure if it satisfies:
(i) μ(ω; {t}×R^ℓ) ≤ 1 identically.
(ii) For each A ∈ B(R+)⊗B(R^ℓ), the random variable μ(·;A) takes values in N∪{0}.
(iii) μ ∈ Ã_σ(G).

In the remaining chapters we will be interested in random measures associated to the jumps of càdlàg processes. The next proposition verifies that to every G−adapted, càdlàg and R^ℓ−valued process is associated an integer-valued G−random measure.

Proposition I.77. Let X be a G−adapted, càdlàg, R^ℓ−valued process. Then

μ^X(ω; dt,dx) := Σ_{s>0} 1_{[∆X≠0]}(ω,s) δ_{(s,∆X_s(ω))}(dt,dx)¹⁶

defines an integer-valued G−random measure on R+×R^ℓ.

Proof. See Jacod and Shiryaev [41, Proposition II.1.16].

Actually, every integer-valued measure μ has an analogous representation. More precisely, [41, Proposition II.1.14] verifies that for every integer-valued random measure μ there exist a thin G−optional set D and an E−valued optional process β such that

μ(ω; dt,dx) = Σ_{s≥0} 1_D(ω,s) δ_{(s,β_s(ω))}(dt,dx).

Notation I.78. We will maintain the notation of the last proposition and we will denote the integer-valued G−random measure associated to a G−adapted and càdlàg process X by μ^X.

Now that we have defined the necessary objects, we can start presenting the desired machinery. We start with the core of this section. The following theorem allows us to transform the process W∗μ, where W is a G−optional function and μ an integer-valued G−random measure, into a G−martingale.

¹⁶ Recall that δ_z denotes the Dirac measure sitting at z.

Theorem I.79. Let μ ∈ Ã_σ(G). There exists a random measure, called the compensator of μ under G and denoted by ν^G, which is unique up to a P−null set, and which is characterised as being a G−predictable random measure satisfying either of the two following equivalent properties:

(i) For every non-negative G−predictable function W it holds that E[W∗μ_∞] = E[W∗ν^G_∞].

(ii) If W is a G−predictable function such that |W|∗μ ∈ A⁺(G;R), then |W|∗ν^G ∈ A⁺(G;R) and the process W∗μ − W∗ν^G is a uniformly integrable G−martingale. In other words, W∗ν^G is the compensator under the filtration G of the process W∗μ.

Moreover, there is a version of ν^G such that ν^G({s}×R^ℓ) ≤ 1 identically.

Proof. See Jacod and Shiryaev [41, Theorem II.1.8] and [41, Proposition II.1.17].
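For a single càdlàg path given by its jump times and sizes, the measure μ^X of Proposition I.77 and the integral W∗μ of Definition I.72 are finite sums. A minimal sketch with illustrative names; for a compound Poisson process with rate λ and jump law F the compensator of Theorem I.79 would be ν(dt,dx) = λ dt F(dx), which we only record as a comment here.

```python
# Path-wise evaluation of (W * mu^X)_t for a cadlag path represented by its
# jump times and jump sizes. For a compound Poisson X with rate lam and jump
# law F, the compensator would be nu(dt, dx) = lam * dt * F(dx), so that
# W * mu^X - W * nu is a martingale; here we only evaluate the mu^X side.

def W_star_mu(W, jump_times, jump_sizes, t):
    """(W * mu^X)_t = sum over jumps with 0 < s <= t and dX_s != 0 of W(s, dX_s)."""
    return sum(W(s, x) for s, x in zip(jump_times, jump_sizes)
               if 0 < s <= t and x != 0)

# a path with jumps of sizes 2 and -1 at times 1 and 2.5, W(s, x) = x^2
print(W_star_mu(lambda s, x: x ** 2, [1.0, 2.5], [2.0, -1.0], 3.0))  # -> 5.0
```

Since W(s,x) = x² here, the sum is simply the discrete quadratic variation of the jump part of the path.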

Remark I.80. (i) For every random measure μ ∈ Ã_σ(G) we are hereinafter going to use the “nice” version of ν^G for which the property ν^G({s}×R^ℓ) ≤ 1 holds for every s ∈ R+.

(ii) Observe that for every càdlàg process X it holds that 1_{R+×{0}}∗μ^X = 0 P−almost surely. This, in view of Theorem I.79, in particular implies that ∫_{(0,∞)×{0}} ν^{(X,G)}(ds,dx) = 0 P−almost surely.

When it is clear to which filtration the measurability of an integer-valued random measure refers, we will not use the symbol of the filtration in the description. However, this is not the case for the compensator of an integer-valued measure, where we will always prefer to denote the filtration. Since in this dissertation we will be interested only in integer-valued measures associated to some adapted process, we will present the rest of the results for this type of random measure. Therefore, we fix for the rest of the section an arbitrary R^ℓ−valued, G−adapted and càdlàg process X with associated integer-valued measure μ^X.

Notation I.81. For the integer-valued random measure μ^X associated to X we adopt the following notation.
• The compensator of μ^X under the filtration G will be denoted by ν^{(X,G)}.
• If X ∈ H²(G;R^ℓ), then the integer-valued measure associated to the jumps of the martingale X will be denoted by μ^{X^d}. In other words, μ^{X^d} is the integer-valued measure associated to the purely discontinuous part of the G−martingale X. In this case, μ^{X^d} ∈ Ã_σ(G), i.e. its compensator is well-defined. The compensator of μ^{X^d} under the filtration G will be denoted by ν^{(X^d,G)}. The reason for this change in notation will be discussed at the end of the section.
• The G−compensated integer-valued measure μ̃^{(X,G)} is defined by

μ̃^{(X,G)}(ω; dt,dx) := μ^X(ω; dt,dx) − ν^{(X,G)}(ω; dt,dx).

• For W an E−valued G−predictable function and a G−predictable time ρ we denote

∫_{R^ℓ} W(ρ,x) μ^X({ρ}×dx)(ω) := W(ω,ρ(ω),∆X_{ρ(ω)}(ω)) 1_{[∆X_ρ≠0, ‖W(ω,ρ(ω),∆X_{ρ(ω)}(ω))‖₁<∞]} + ∞ 1_{[∆X_ρ≠0, ‖W(ω,ρ(ω),∆X_{ρ(ω)}(ω))‖₁=∞]}

and

∫_{R^ℓ} W(ρ,x) ν^{(X,G)}({ρ}×dx)(ω) := ∫_{R+×R^ℓ} W(ω,ρ(ω),x) 1_{⟦ρ⟧}(ω,t) ν^{(X,G)}(ω; dt,dx), if ‖ ∫_{R+×R^ℓ} W(ω,ρ(ω),x) 1_{⟦ρ⟧}(ω,t) ν^{(X,G)}(ω; dt,dx) ‖₁ < ∞; otherwise we define it to be equal to ∞.

Moreover,

∫_{R^ℓ} W(ρ,x) μ̃^{(X,G)}({ρ}×dx)(ω) := ∫_{R^ℓ} W(ρ,x) μ^X({ρ}×dx)(ω) − ∫_{R+×R^ℓ} W(ω,ρ(ω),x) 1_{⟦ρ⟧}(ω,t) ν^{(X,G)}(ω; dt,dx).

The following remark justifies the forthcoming Definition I.83.

Remark I.82. By Jacod and Shiryaev [41, Lemma II.1.25] we have, for every G−predictable time ρ, that

∫_{R^ℓ} W(ρ,x) ν^{(X,G)}({ρ}×dx) = E[ ∫_{R^ℓ} W(ρ,x) μ^X({ρ}×dx) | G_{ρ−} ] on [ρ < ∞].


In other words, for every G−predictable time ρ,

E[ ∫_{R^ℓ} W(ρ,x) μ̃^{(X,G)}({ρ}×dx) | G_{ρ−} ] = 0 on [ρ < ∞].

If, moreover, for every G−predictable time ρ,

E[ ‖ ∫_{R^ℓ} W(ρ,x) μ̃^{(X,G)}({ρ}×dx) ‖₁ 1_{[ρ<∞]} ] < ∞,

then the two above properties imply by Theorem I.21 that

Π^G_p ( ∫_{R^ℓ} W(·,x) μ̃^{(X,G)}({·}×dx) ) = 0,

i.e. its G−predictable projection is indistinguishable from the zero process. Now, by [41, Theorem I.4.56 a)], to every G−optional process H with the following properties:
(i) H₀ = 0,
(ii) Π^G_p(H) = 0 and
(iii) Σ_{s≤·} ‖H_s‖₂² ∈ A⁺(G;R),
we can associate a Y ∈ H²(G;E) such that ∆Y = H. Recalling Corollary I.9, we realise that we can actually associate a unique, up to indistinguishability, element of H^{2,d}(G;E), namely Y^d.

Definition I.83. Let μ^X ∈ Ã_σ(G). An E−valued G−predictable function W is said to be stochastically integrable with respect to μ̃^{(X,G)} if W ∈ H²(G,μ^X;E), where

H²(G,μ^X;E) := { U : (Ω̃, P̃_G) → (E, B(E)), Σ_{s≤·} ‖ ∫_{R^ℓ} U(s,x) μ̃^{(X,G)}({s}×dx) ‖₂² ∈ A⁺(G;R) }.

In this case, the stochastic integral of W with respect to μ̃^{(X,G)} is defined to be the element of H^{2,d}(G;E), denoted by W⋆μ̃^{(X,G)}, such that

∆(W⋆μ̃^{(X,G)})_· = ∫_{R^ℓ} W(·,x) μ̃^{(X,G)}({·}×dx), up to indistinguishability.

Notation I.84. To the class H²(G,μ^X;E) we associate the space K²(G,μ^X;E), where

K²(G,μ^X;E) := { W⋆μ̃^{(X,G)}, W ∈ H²(G,μ^X;E) }. (I.19)

In other words, K²(G,μ^X;E) is the space of the E−valued stochastic integrals with respect to the pair (G,μ^X).

Proposition I.85. Let W be a G−predictable function such that |W|∗μ^X ∈ A⁺(G;R) or, equivalently, |W|∗ν^{(X,G)} ∈ A⁺(G;R). Then the stochastic integral W⋆μ̃^{(X,G)} is well-defined. Moreover, the following are true:
(i) W⋆μ̃^{(X,G)} is a G−martingale of finite variation.
(ii) W⋆μ̃^{(X,G)} = W∗μ^X − W∗ν^{(X,G)}.

Proof. See Jacod and Shiryaev [41, Proposition II.1.28].

Proposition I.86. Let H be a bounded G−predictable process and W ∈ H²(G,μ^X;E). Then HW ∈ H²(G,μ^X;E) and H·(W⋆μ̃^{(X,G)}) = (HW)⋆μ̃^{(X,G)}.

Proof. See [41, Proposition II.1.30].

Proposition I.87. X is G−quasi-left-continuous if and only if there exists a version of ν^{(X,G)} that satisfies

∫_{R^ℓ} ν^{(X,G)}(ω; {t}×dx) = 0 for every (ω,t) ∈ Ω×R+.

Proof. See Jacod and Shiryaev [41, Corollary II.1.19].

Notation I.88. For every Γ : (Ω̃, G⊗B(R+)⊗B(R^ℓ)) → (E, B(E)) we define

M_{μ^X}[Γ] := E[Γ∗μ^X_∞].

We will refer to M_{μ^X} as the Doléans-Dade measure associated to the random measure μ^X. We have abused the terminology for M_{μ^X}; however, when we restrict the domain of M_{μ^X} to the indicator functions of P̃_G−measurable sets, then it is easy to check that M_{μ^X} is a σ−finite measure on (Ω̃, P̃_G). We will denote by M_{μ^X}|_{P̃_G} the restriction of M_{μ^X} to the set of G−predictable functions.

Definition I.89. The integer-valued measure μ^X is said to be a G−martingale (integer-valued random) measure if M_{μ^X}|_{P̃_G} = 0.

Theorem I.90. Let μ be an integer-valued random measure. The following are equivalent:
(i) μ is a G−martingale integer-valued random measure.
(ii) ν^G = 0.
(iii) For every E−valued G−predictable function W for which there exists an increasing sequence of G−stopping times, say (τ_k)_{k∈N}, with lim_{k→∞} τ_k = ∞ P−almost surely and (W∗μ)^{τ_k} ∈ A(G;E) for every k ∈ N, it holds that (W∗μ)^{τ_k} ∈ M(G;E) for every k ∈ N.

Proof. See Cohen and Elliott [20, Theorem 13.2.27].

Corollary I.91. The integer-valued random measure μ̃^{(X,G)} is a G−martingale measure.

Proof. Using Theorem I.79 we can see that the compensator of μ̃^{(X,G)} is 0. Now, we can conclude by Theorem I.90.

Definition I.92. Let W : (Ω̃, G⊗B(R+)⊗B(R^ℓ)) → (R^p, B(R^p)) be such that ‖W‖₁μ^X ∈ Ã_σ(G). Then we define the P̃_G−conditional projection of W on μ^X, denoted by M_{μ^X}[W|P̃_G], as the R^p−valued G−predictable function with

(M_{μ^X}[W|P̃_G])^i := dM_{W^i μ^X}|_{P̃_G} / dM_{μ^X}|_{P̃_G}, for every i = 1,...,p.

Theorem I.93. Let W ∈ H²(G,μ^X;R) and let N be a uniformly integrable, real-valued G−martingale such that |∆N|μ^X ∈ Ã_σ(G). Then, if [W⋆μ̃^{(X,G)}, N] ∈ A(G;R), we have

⟨W⋆μ̃^{(X,G)}, N⟩_· = ( W M_{μ^X}[∆N|P̃_G] ) ∗ ν^{(X,G)}_·.

Theorem I.94. Let U ∈ H²(G,μ^{X^d};R^p). Then

⟨U⋆μ̃^{(X^d,G)}⟩_· = ∫_{(0,·]×R^ℓ} U(s,x)ᵀ U(s,x) ν^{(X^d,G)}(ds,dx) − Σ_{s≤·} ( ∫_{R^ℓ} U(s,x) ν^{(X^d,G)}({s}×dx) )ᵀ ( ∫_{R^ℓ} U(s,x) ν^{(X^d,G)}({s}×dx) ).

Proof. See He et al. [35, Theorem 11.21].

The purpose of the following discussion is to clarify a subtle point in the construction of the stochastic integral with respect to μ^Y, where Y is an R^ℓ−valued, G−adapted process which is not G−quasi-left-continuous. Assume additionally that Tr[q]∗μ^Y ∈ A⁺(G;R).¹⁷ Then μ^Y ∈ Ã_σ(G) and its compensator ν^{(Y,G)} is well-defined. Since we have assumed that Y is not G−quasi-left-continuous, there exists a finite G−stopping time ρ such that ν^{(Y,G)}({ρ}×R^ℓ) > 0 P−almost surely, for every version of ν^{(Y,G)}. This assumption can be justified by Proposition I.87.

Let us proceed now with the observation that Id_ℓ ∈ H²(G,μ^Y;R^ℓ). Therefore, we can associate to the process Y the G−martingale Id_ℓ⋆μ̃^{(Y,G)} ∈ H^{2,d}(G;R^ℓ). Abusing notation, we will denote Id_ℓ⋆μ̃^{(Y,G)} by Y^d, i.e.

∆Y^d_t := ∫_{R^ℓ} x μ̃^{(Y,G)}({t}×dx) = ∆Y_t − ∫_{R^ℓ} x ν^{(Y,G)}({t}×dx). (I.20)

¹⁷ The function q has been defined on p. 2.


At the end of the current discussion we will see that, if Y ∈ H²(G;R^ℓ), then its purely discontinuous part and Id_ℓ⋆μ̃^{(Y,G)} are indistinguishable, so that this does not lead to a notational conflict. Now that we have constructed the square-integrable G−martingale Y^d, we also have the option to work with μ^{Y^d} and its associated compensator ν^{(Y^d,G)}. Then, we can write (I.20) as

∫_{R^ℓ} x μ̃^{(Y,G)}({t}×dx) = ∆Y^d_t = ∫_{R^ℓ} x μ^{Y^d}({t}×dx). (I.21)

A seemingly naive question that one may pose is whether μ̃^{(Y,G)} and μ̃^{(Y^d,G)} generate the same spaces of stochastic integrals. Of course, we expect that the compensators ν^{(Y,G)} and ν^{(Y^d,G)} do not coincide, since they correspond to the different integer-valued measures μ^Y and μ^{Y^d}. Therefore, in general K²(G,μ^Y;E) ≠ K²(G,μ^{Y^d};E) for an arbitrary state space E. From a different perspective, we know by Corollary I.91 that μ̃^{(Y,G)} is a G−martingale measure, while for μ^{Y^d} this is not true. For example, if V is a positive, real-valued G−predictable function such that V∗μ^{Y^d} ∈ A⁺(G;R), then we obtain a G−submartingale.

So, we proceed by examining a G−predictable function W whose value W(ω,s,x) is not linear in x. Since an analogous function will be used in Chapter III in order to construct suitable martingale sequences, it is a good opportunity to see in detail why we will finally prefer the G−compensated integer-valued measure μ̃^{(Y^d,G)}. This will also justify the second bullet in Notation I.81, where we substitute the notation of the integer-valued measure associated to the process X with μ^{X^d} when X ∈ H²(G;R^ℓ).

Say W(ω,s,x) = ‖x‖₂² ∧ 1.¹⁸ Then, the jump of W⋆μ̃^{(Y,G)} at ρ, which was fixed at the beginning of the discussion, is given by

∫_{R^ℓ} (‖x‖₂²∧1) μ^Y({ρ}×dx) − ∫_{R^ℓ} (‖x‖₂²∧1) ν^{(Y,G)}({ρ}×dx) = ‖∆Y_ρ‖₂² ∧ 1 − E[ ‖∆Y_ρ‖₂² ∧ 1 | G_{ρ−} ],

while the respective jump of W⋆μ̃^{(Y^d,G)} is given by

∫_{R^ℓ} (‖x‖₂²∧1) μ^{Y^d}({ρ}×dx) − ∫_{R^ℓ} (‖x‖₂²∧1) ν^{(Y^d,G)}({ρ}×dx)
= ‖∆Y^d_ρ‖₂² ∧ 1 − E[ ‖∆Y^d_ρ‖₂² ∧ 1 | G_{ρ−} ]
= ‖ ∫_{R^ℓ} x μ̃^{(Y,G)}({ρ}×dx) ‖₂² ∧ 1 − E[ ‖ ∫_{R^ℓ} x μ̃^{(Y,G)}({ρ}×dx) ‖₂² ∧ 1 | G_{ρ−} ]   (by (I.21))
= ‖ ∆Y_ρ − ∫_{R^ℓ} x ν^{(Y,G)}({ρ}×dx) ‖₂² ∧ 1 − E[ ‖ ∆Y_ρ − ∫_{R^ℓ} x ν^{(Y,G)}({ρ}×dx) ‖₂² ∧ 1 | G_{ρ−} ].

It is clear now that the spaces K²(G,μ^Y;E) and K²(G,μ^{Y^d};E) are not equal. However, when Y ∈ H²(G;R^ℓ), then we have by Jacod and Shiryaev [41, Lemma I.2.27] that Π^G_p(Y) = Y_−. In other words,

∫_{R^ℓ} x ν^{(Y,G)}(ω; {t}×dx) = 0 for every (ω,t) ∈ Ω×R+.

Therefore, the purely discontinuous part of the G−martingale Y has jumps ∆Y, which in view of the above property implies that

∆Y_t(ω) = ∫_{R^ℓ} x μ^Y(ω; {t}×dx).

This verifies our comment at the beginning of the discussion that the abuse of notation (I.20) does not conflict with the case Y ∈ H²(G;R^ℓ). Intuitively, one can say that, having already a G−martingale, we lose the information of the jump-compensating procedure which transforms a G−adapted process into a G−martingale.

The above discussion was made on the basis that Y is a G−adapted process which is not G−quasi-left-continuous. In view of Definition I.15 and Proposition I.87, this subtle difference vanishes when the process Y is G−quasi-left-continuous, since then the compensator ν^{(Y,G)} has no jumps.

Henceforth, assuming that X is a G−martingale, when we refer to the jump process ∆X^d we will mean the R^ℓ−valued process

∆X^d_· := ( ∫_{R^ℓ} π₁(x) μ̃^{(X,G)}({·}×dx), ..., ∫_{R^ℓ} π_ℓ(x) μ̃^{(X,G)}({·}×dx) ), (I.22)

¹⁸ The fact that W ∈ H²(G,μ^{Y^d};R) is an outcome of the definition of Y^d and of the fact that W ∈ H²(G,μ^Y;R).

while, assuming that X is simply a G−adapted process, when we refer to the jump process ∆X we will mean the R^ℓ−valued process

∆X_· = ( ∫_{R^ℓ} π₁(x) μ^X({·}×dx), ..., ∫_{R^ℓ} π_ℓ(x) μ^X({·}×dx) ). (I.23)

Theorem I.95. Let Y ∈ H²(G;R^p) and assume that X ∈ H²(G;R^ℓ). Then, there exist unique, up to indistinguishability, U ∈ H²(G,μ^{X^d};R^p) and N ∈ H²(G;R^p) with M_{μ^{X^d}}[∆N|P̃_G] = 0 such that

Y = U⋆μ̃^{(X^d,G)} + N.

The G−predictable function U is uniquely determined up to indistinguishability by the triplet (G, μ^{X^d}, Y). If, moreover, X is G−quasi-left-continuous, then U is indistinguishable from M_{μ^{X^d}}[∆Y|P̃_G].

Proof. See Jacod and Shiryaev [41, Theorem III.4.20].

I.5. Selected results from stochastic calculus

In this section we collect some well-known results from stochastic calculus which we are going to use heavily. However, the Burkholder–Davis–Gundy Inequality and Doob's L^Υ−Inequality will be stated with the help of Young functions, which are not the forms usually met in the literature. We will rely on the generality of these results in order to prove Theorem III.3 by utilising Proposition I.51.

For the current section we work in the G−stochastic basis, where G is an arbitrary filtration.

Definition I.96. Let X ∈ M(G;R^p) and Y ∈ M(G;R^q). We define the (optional) quadratic variation of the pair (X,Y) to be the R^{p×q}−valued G−optional process [X,Y] which is defined for every t ∈ R+ by

[X,Y]^{ij}_t := ⟨X^{c,i}, Y^{c,j}⟩_t + Σ_{0<s≤t} ∆X^i_s ∆Y^j_s, for i = 1,...,p and j = 1,...,q.

Remark I.97. For X, Y ∈ M(G;R^p), the quadratic variation satisfies the so-called polarisation identity

[X,Y] = (1/4) ( [X+Y] − [X−Y] ).
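For purely discontinuous scalar paths the bracket reduces to the sum of products of jumps, so the polarisation identity can be checked increment by increment. A small sketch with illustrative names:

```python
# [X, Y]_t for scalar paths with no continuous part: the sum of products of
# jumps. The polarisation identity [X,Y] = ([X+Y] - [X-Y]) / 4 is then an
# algebraic identity on the jump increments.

def bracket(dX, dY):
    """[X, Y]_t from the lists of jump increments (no continuous part)."""
    return sum(a * b for a, b in zip(dX, dY))

dX = [1.0, -2.0, 0.5]
dY = [0.5, 1.0, -1.0]
lhs = bracket(dX, dY)
dS = [a + b for a, b in zip(dX, dY)]   # jumps of X + Y
dD = [a - b for a, b in zip(dX, dY)]   # jumps of X - Y
rhs = (bracket(dS, dS) - bracket(dD, dD)) / 4
print(lhs, rhs)  # -> -2.0 -2.0
```

The identity is the bilinear-form polarisation familiar from inner-product spaces, applied here to the bracket.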

Proposition I.98. Let S be a G−adapted process which can be written in the form S = M + A, where M ∈ M(G;R) and A ∈ V(G;R). Then, the following are equivalent:
(i) S ∈ S_sp(G;R).
(ii) sup_{u≤·} |S_u − S₀| ∈ A⁺(G;R).

Proof. See [41, Proposition I.4.23].

Proposition I.99. Let X, Y ∈ H²(G;R). Then [X,Y] ∈ A(G;R) and [X,Y] − ⟨X,Y⟩ ∈ M(G;R).

Proof. See [41, Proposition I.4.50].

Theorem I.100 (Kunita–Watanabe Inequality). Let X, Y ∈ M(G;R) and let H, K be B⊗G−measurable processes. Then for every t ∈ R̄+¹⁹ it holds

∫_{(0,t]} |H_s||K_s| dVar([X,Y])_s ≤ ( ∫_{(0,t]} |H_s|² d[X]_s )^{1/2} ( ∫_{(0,t]} |K_s|² d[Y]_s )^{1/2} P−a.s. (I.24)

and, if ⟨X⟩, ⟨Y⟩ and ⟨X,Y⟩ all exist,

∫_{(0,t]} |H_s||K_s| dVar(⟨X,Y⟩)_s ≤ ( ∫_{(0,t]} |H_s|² d⟨X⟩_s )^{1/2} ( ∫_{(0,t]} |K_s|² d⟨Y⟩_s )^{1/2} P−a.s. (I.25)

Moreover,

E[ ∫_{(0,t]} |H_s||K_s| dVar([X,Y])_s ] ≤ E[ ∫_{(0,t]} |H_s|² d[X]_s ]^{1/2} E[ ∫_{(0,t]} |K_s|² d[Y]_s ]^{1/2} (I.26)

and, if ⟨X⟩, ⟨Y⟩ and ⟨X,Y⟩ all exist,

E[ ∫_{(0,t]} |H_s||K_s| dVar(⟨X,Y⟩)_s ] ≤ E[ ∫_{(0,t]} |H_s|² d⟨X⟩_s ]^{1/2} E[ ∫_{(0,t]} |K_s|² d⟨Y⟩_s ]^{1/2}. (I.27)

¹⁹ In the case t = ∞ the domain of the Lebesgue–Stieltjes integral is written (0,∞).


Proof. See Cohen and Elliott [20, Theorem 11.4.1].

Theorem I.101 (Burkholder–Davis–Gundy Φ−Inequality). If X is a real-valued G−martingale and Φ ∈ YF_mod, then

(1/(4 s_{c_Φ})) ‖ sup_{t∈R+} |X_t| ‖_Φ ≤ ‖ [X]^{1/2}_∞ ‖_Φ ≤ 6 s_{c_Φ} ‖ sup_{t∈R+} |X_t| ‖_Φ.

If at least one of the quantities ‖sup_{t∈R+}|X_t|‖_Φ, ‖[X]^{1/2}_∞‖_Φ, E[Φ(sup_{t∈R+}|X_t|)] and E[Φ([X]^{1/2}_∞)] is finite, then there exist positive constants c_Φ, C_Φ (depending only on Φ) such that

c_Φ E[Φ( sup_{t∈R+} |X_t| )] ≤ E[Φ( [X]^{1/2}_∞ )] ≤ C_Φ E[Φ( sup_{t∈R+} |X_t| )].

Proof. For the first inequality see Dellacherie and Meyer [27, Theorem VII.92, p. 287]. Observe that we can obtain the Burkholder–Davis–Gundy Inequality in a slightly more general framework, i.e. when Φ is a convex and moderate function such that Φ(0) = 0.

For the second inequality, since Φ is moderate we have from Proposition I.53.(v) that E[Φ([X]^{1/2}_∞)] < ∞ if and only if ‖[X]^{1/2}_∞‖_Φ < ∞ and, analogously, E[Φ(sup_{t∈R+}|X_t|)] < ∞ if and only if ‖sup_{t∈R+}|X_t|‖_Φ < ∞. Then, for the constants c_Φ, C_Φ of the second inequality see He et al. [35, Theorem 10.36].

We also state the Burkholder–Davis–Gundy Inequality in its best-known form, because it will allow us to identify the space

H¹(G;R) := { M ∈ M(G;R), ‖ sup_{t∈R+} |M_t| ‖_{L¹(G;R)} < ∞ }

with the space { M ∈ M(G;R), ‖ [M]^{1/2}_∞ ‖_{L¹(G;R)} < ∞ }. Obviously, H¹(G;R) ⊂ M(G;R). Moreover, the inclusion is strict; see Revuz and Yor [60, Exercise 3.15 of Chapter II].

Lemma I.102. Let Φ ∈ YF and X, Y ∈ M(G;R) such that E Φ

[X]12 , E Φ

[Y]12 < ∞. Then [X, Y]∈ A(G;R)and, consequently, its compensator[X, Y]p,G is well-defined.

Proof. The integrability conditions $\mathbb{E}\big[\Phi\big([X]^{\frac12}\big)\big],\ \mathbb{E}\big[\Phi^*\big([Y]^{\frac12}\big)\big]<\infty$ verify that $[X]^{\frac12}$, $[Y]^{\frac12}$ are finite $P-$almost surely. By the Kunita–Watanabe Inequality (I.24) and Young's Inequality, see Lemma I.44.(i), we have
\[
\operatorname{Var}\big([X,Y]\big)\le[X]^{\frac12}\,[Y]^{\frac12}\le\Phi\big([X]^{\frac12}\big)+\Phi^*\big([Y]^{\frac12}\big).
\]
Then, we can conclude the existence of $[X,Y]^{p,\mathbb{G}}$ by Theorem I.64.
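Young's Inequality in its classical form reads $ab\le\Phi(a)+\Phi^*(b)$ for a Young function $\Phi$ and its conjugate $\Phi^*$. As a quick sanity check, a sketch for the power pair $\Phi(x)=x^p/p$, $\Phi^*(x)=x^{q}/q$ with $1/p+1/q=1$; the power pair and the evaluation grid are illustrative choices, not the thesis's setting:

```python
p, q = 3.0, 1.5          # conjugate exponents: 1/p + 1/q = 1
phi = lambda x: x**p / p
phi_star = lambda x: x**q / q

# Young's inequality a*b <= phi(a) + phi_star(b), checked on a grid
for i in range(1, 60):
    for j in range(1, 60):
        a, b = 0.1 * i, 0.1 * j
        assert a * b <= phi(a) + phi_star(b) + 1e-12
```

Equality holds exactly when $b=\Phi'(a)=a^{p-1}$, which is why no smaller right-hand side can work.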

Theorem I.103 (Burkholder–Davis–Gundy $L^\vartheta-$Inequality). If $X$ is a real-valued G−martingale with $X_0=0$ and $\vartheta\ge1$, then there exist constants $c_\vartheta$, $C_\vartheta$ such that
\[
c_\vartheta\,\mathbb{E}\Big[\Big(\sup_{t\in\mathbb{R}_+}|X_t|\Big)^{\vartheta}\Big]^{\frac{1}{\vartheta}}
\le\mathbb{E}\Big[[X]^{\frac{\vartheta}{2}}\Big]^{\frac{1}{\vartheta}}
\le C_\vartheta\,\mathbb{E}\Big[\Big(\sup_{t\in\mathbb{R}_+}|X_t|\Big)^{\vartheta}\Big]^{\frac{1}{\vartheta}}.
\]

Proof. See Cohen and Elliott [20, Theorem 11.5.5].
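For $\vartheta=2$ the comparison can be traced with explicit constants: for a martingale null at zero, $\mathbb{E}[X_n^2]=\mathbb{E}[[X]_n]$, while Doob's $L^2$−inequality gives $\mathbb{E}[(\sup_t|X_t|)^2]\le 4\,\mathbb{E}[X_n^2]$, so $\tfrac12\,\mathbb{E}[(\sup_t|X_t|)^2]^{1/2}\le\mathbb{E}[[X]_n]^{1/2}\le\mathbb{E}[(\sup_t|X_t|)^2]^{1/2}$. A Monte-Carlo sanity check on a symmetric $\pm1$ random walk; the walk, horizon and sample size are illustrative assumptions, not part of the text:

```python
import random
random.seed(1)

n_steps, n_paths = 64, 4000
sup_sq = bracket = 0.0
for _ in range(n_paths):
    x, m = 0.0, 0.0
    for _ in range(n_steps):
        x += random.choice([-1.0, 1.0])   # martingale increment
        m = max(m, abs(x))                # running supremum
    sup_sq += m * m
    # for a +/-1 walk each squared increment is 1, so [X]_n = n_steps exactly
    bracket += n_steps

sup_norm = (sup_sq / n_paths) ** 0.5      # E[(sup|X|)^2]^{1/2}, empirical
qv_norm = (bracket / n_paths) ** 0.5      # E[[X]_n]^{1/2} = sqrt(n_steps)
assert 0.5 * sup_norm <= qv_norm <= sup_norm
```

The quadratic variation is deterministic here, which isolates the fluctuation of the running supremum as the only sampled quantity.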

We conclude this section with the celebrated Doob's Inequality and a useful lemma whose main argument relies on Doob's $L^\Upsilon-$Inequality. The latter will be useful in Chapter IV.

Theorem I.104 (Doob's $L^\Upsilon-$Inequality). Let $\Upsilon\in\mathcal{YF}$ be such that $\Upsilon^*\in\mathcal{YF}^{\mathrm{mod}}$ and let $X$ be

(i) either a real-valued G−martingale

(ii) or a real-valued positive càdlàg G−submartingale.

Then
\[
\Big\|\sup_{t\in\mathbb{R}_+}|X_t|\Big\|_\Upsilon\le 2\,sc_{\Upsilon^*}\,\|X_\infty\|_\Upsilon.
\]

Proof. If $X$ is a positive càdlàg G−submartingale, see Dellacherie and Meyer [27, Paragraph VI.103, p. 169] and [27, Paragraph VI.97, p. 161], [27, Remark VI.21, p. 82] for the notation. If $X$ is a G−martingale,20 then by Jensen's Inequality $|X|$ is a positive càdlàg G−submartingale, which reduces to the former case.

20 Recall that we use its càdlàg modification.

The classical form of Doob's Inequality is presented in the next corollary. Observe, however, that the constant of Corollary I.105 is improved compared to the one we get from Theorem I.104. The interested reader may find the answer in Rao and Ren [58, Section 2.4].

Corollary I.105 (Doob's $L^\vartheta-$Inequality). Let $\vartheta,\vartheta^*\in(1,\infty)$ be such that $\frac{1}{\vartheta}+\frac{1}{\vartheta^*}=1$ holds and let $X$ be

(i) either a real-valued G−martingale

(ii) or a real-valued positive càdlàg G−submartingale.

Then
\[
\mathbb{E}\Big[\sup_{t\in\mathbb{R}_+}|X_t|^{\vartheta}\Big]^{\frac{1}{\vartheta}}
\le\vartheta^*\,\mathbb{E}\big[|X_\infty|^{\vartheta}\big]^{\frac{1}{\vartheta}}.
\]

Proof. See Dellacherie and Meyer [27, Paragraph VI.101, p. 166].
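A numerical sanity check of the corollary for $\vartheta=4$ (so $\vartheta^*=4/3$) on a symmetric $\pm1$ random walk closed by its terminal value; the walk and the Monte-Carlo parameters are illustrative assumptions:

```python
import random
random.seed(2)

theta, theta_star = 4.0, 4.0 / 3.0    # conjugate pair: 1/theta + 1/theta* = 1
n_steps, n_paths = 64, 4000
sup_pow = end_pow = 0.0
for _ in range(n_paths):
    x, m = 0.0, 0.0
    for _ in range(n_steps):
        x += random.choice([-1.0, 1.0])   # martingale increment
        m = max(m, abs(x))                # running supremum
    sup_pow += m ** theta
    end_pow += abs(x) ** theta            # |X_infinity|^theta

lhs = (sup_pow / n_paths) ** (1.0 / theta)               # E[sup|X_t|^theta]^{1/theta}
rhs = theta_star * (end_pow / n_paths) ** (1.0 / theta)  # theta* E[|X_inf|^theta]^{1/theta}
assert lhs <= rhs
```

Since $\sup_t|X_t|\ge|X_\infty|$ pathwise, the left-hand side also dominates $\mathbb{E}[|X_\infty|^{\vartheta}]^{1/\vartheta}$, so the two sides bracket each other within the factor $\vartheta^*$.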

Lemma I.106. Let $(\mathbb{G}^k)_{k\in\mathbb{N}}$ be a sequence of filtrations, where $\mathbb{G}^k:=(\mathcal{G}^k_t)_{t\in\mathbb{R}_+}$, and let $(\xi^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^p-$valued random variables such that $\big(\|\xi^k\|^2_2\big)_{k\in\mathbb{N}}$ is uniformly integrable. Then the sequence
\[
\Big(\sup_{t\in\mathbb{R}_+}\big\|\mathbb{E}\big[\xi^k\,\big|\,\mathcal{G}^k_t\big]\big\|^2_2\Big)_{k\in\mathbb{N}}
\]
is uniformly integrable.

Proof. Let $\Phi\in\mathcal{YF}^{\mathrm{mod}}$ be a Young function for which the sequence $\big(\|\xi^k\|^2_2\big)_{k\in\mathbb{N}}$ satisfies the de La Vallée Poussin–Meyer Criterion, i.e.
\[
\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\Phi\big(\|\xi^k\|^2_2\big)\Big]<\infty.\tag{I.28}
\]

Then $\Psi:=\Phi\circ\mathrm{quad}\in\mathcal{YF}^{\mathrm{mod}}$ (see Proposition I.51.(i)). Using the fact that $\Phi$ is increasing, we can write (I.28) as
\[
M:=\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\Psi\big(\|\xi^k\|_2\big)\Big]<\infty.\tag{I.29}
\]

The latter form will be more convenient for later use. Before we proceed to prove the claim of the lemma, we provide some helpful results. In order to ease notation, let us denote the Orlicz norm $\big\|\|\xi^k\|_2\big\|_\Psi$ by $\Xi^k$, for $k\in\mathbb{N}$. Observe that $\Xi^k<\infty$ for every $k\in\mathbb{N}$ because of (I.29). We are going to prove that $\sup_{k\in\mathbb{N}}\Xi^k<\infty$. To this end, observe that

If $M\le1$, then $\sup_{k\in\mathbb{N}}\Xi^k\le1$ by the definition of the Orlicz norm; see Notation I.52.

If $M>1$, then using the convexity of $\Psi$ and the fact that $\Psi(0)=0$ we obtain
\[
\mathbb{E}\bigg[\Psi\Big(\frac{\|\xi^k\|_2}{M}\Big)\bigg]\le\frac{1}{M}\,\mathbb{E}\Big[\Psi\big(\|\xi^k\|_2\big)\Big]\overset{\text{(I.29)}}{\le}1,
\]
which implies $\Xi^k\le M$,

and consequently
\[
\sup_{k\in\mathbb{N}}\Xi^k\le 1\vee M\overset{\text{(I.29)}}{<}\infty.\tag{I.30}
\]
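The case distinction above amounts to the elementary bound $\|\zeta\|_\Psi\le 1\vee\mathbb{E}[\Psi(\zeta)]$ for the Luxemburg norm $\|\zeta\|_\Psi=\inf\{\lambda>0:\mathbb{E}[\Psi(\zeta/\lambda)]\le1\}$ of Notation I.52. A minimal numerical sketch, assuming a finite discrete random variable and the quadratic Young function $\Psi(x)=x^2$ (both illustrative choices):

```python
def luxemburg_norm(values, probs, psi, tol=1e-10):
    """Luxemburg norm inf{lam > 0 : E[psi(|zeta|/lam)] <= 1}
    of a finite discrete random variable, computed by bisection."""
    def modular(lam):
        return sum(p * psi(abs(v) / lam) for v, p in zip(values, probs))
    lo, hi = tol, 1.0
    while modular(hi) > 1.0:      # grow until the modular drops below 1
        hi *= 2.0
    while hi - lo > tol:          # modular is decreasing in lam
        mid = 0.5 * (lo + hi)
        if modular(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

psi = lambda x: x * x             # a moderate Young function
values, probs = [0.5, 2.0, 3.0], [0.5, 0.3, 0.2]
m = sum(p * psi(v) for v, p in zip(values, probs))   # M = E[psi(zeta)] = 3.125
norm = luxemburg_norm(values, probs, psi)            # here = sqrt(E[zeta^2])
assert norm <= max(1.0, m) + 1e-8                    # the two-case bound
```

For $\Psi(x)=x^2$ the Luxemburg norm coincides with the $L^2$ norm, so the bisection can be checked against $\sqrt{\mathbb{E}[\zeta^2]}$ directly.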

We proceed now to prove the uniform integrability of $\big(\sup_{t\in\mathbb{R}_+}\|\mathbb{E}[\xi^k|\mathcal{G}^k_t]\|^2_2\big)_{k\in\mathbb{N}}$. By the de La Vallée Poussin Theorem, see Theorem I.27, it suffices to prove that $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\Phi\big(\sup_{t\in\mathbb{R}_+}\|\mathbb{E}[\xi^k|\mathcal{G}^k_t]\|^2_2\big)\big]<\infty$, or equivalently that $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\Psi\big(\sqrt{2}\,S^k\big)\big]<\infty$, where we have defined $S^k:=\sup_{t\in\mathbb{R}_+}\|\mathbb{E}[\xi^k|\mathcal{G}^k_t]\|_2$. Analogously to (I.14) we can obtain
\[
\mathbb{E}\Big[\Psi\big(\sqrt{2}\,S^k\big)\Big]\le 1\vee sc_\Psi\,\big\|\sqrt{2}\,S^k\big\|_\Psi.\tag{I.31}
\]

Recall that $\Psi\in\mathcal{YF}^{\mathrm{mod}}$ due to its definition $\Psi=\Phi\circ\mathrm{quad}$. By Proposition I.51 we have that also $\Psi^*\in\mathcal{YF}^{\mathrm{mod}}$. Therefore, $sc_\Psi,sc_{\Psi^*}\in(1,\infty)$. For the validity of the last argument recall Notation I.41, Definition I.43 and Corollary I.45. Consequently, we can conclude the required property if we prove that $\sup_{k\in\mathbb{N}}\|S^k\|_\Psi<\infty$, where the $\sqrt{2}$ can be taken out, since $\|\cdot\|_\Psi$ is a norm. To this end we will utilise Doob's $L^\Upsilon-$inequality. We can now obtain, by standard properties of norms and the fact that $\|\cdot\|_2\le\|\cdot\|_1$ (and consequently $S^k\le\sup_{t\in\mathbb{R}_+}\|\mathbb{E}[\xi^k|\mathcal{G}^k_t]\|_1$), the following inequalities
\[
\big\|S^k\big\|_\Psi
\le\Big\|\sup_{t\in\mathbb{R}_+}\big\|\mathbb{E}[\xi^k|\mathcal{G}^k_t]\big\|_1\Big\|_\Psi
\le\sum_{j=1}^{p}\Big\|\sup_{t\in\mathbb{R}_+}\big|\mathbb{E}[\xi^{k,j}|\mathcal{G}^k_t]\big|\Big\|_\Psi
\overset{\text{Doob's In.}}{\le}2\,sc_{\Psi^*}\sum_{j=1}^{p}\big\|\xi^{k,j}\big\|_\Psi
\le 2p\,sc_{\Psi^*}\,\big\|\|\xi^k\|_2\big\|_\Psi=2p\,sc_{\Psi^*}\,\Xi^k.
\]