

The combination of (Β) and (Δ) will be useful to prove (Ε). This, in turn, will enable us to prove that $\overline{N}$ is orthogonal to the space generated by the natural pair of the $\mathbf{F}$-martingale $X^\infty$. This result will be provided in Subsection III.6.1. Observe that we did not claim that $\overline{N}\in\mathcal{H}^{2,\perp}(\mathbf{F},X^{\infty,c},\mu^{X^{\infty,d}})$, because $\overline{N}$ is not a priori square-integrable.

Exploiting these properties, we will be able to identify $\overline{N}$, i.e. the arbitrary weak limit of $(N^k)_{k\in\mathbb{N}}$, with the zero process by proving (finally) that $[Y^\infty,\overline{N}]\in\mathcal{M}(\mathbf{F};\mathbb{R})$. For this conclusion, we will make use of the following:

The fact that the representation of $Y^\infty$ is not affected by the expansion of the filtration.

Condition (M3). The latter can be stated alternatively as $\mathcal{H}^{2,\perp}(\mathbf{G},X^{\infty,c},\mu^{X^{\infty,d}};\mathbb{R})=\{0\}$.

The fact that $\overline{N}$ is orthogonal to sufficiently many stochastic integrals with respect to the natural pair of $X^\infty$. Intuitively, this is what we need: the orthogonal martingales should converge to an element which is orthogonal to the space generated by the natural pair of $X^\infty$, and the “information that is collected from $Y^\infty$ through stochastic integrals with respect to the natural pair of $X^\infty$ does not increase”. This step will be the final one of the proof and is going to be presented in Section III.7.

Let us now provide some more information on the route we will follow in order to obtain (Α)-(ΣΤ). Proving the properties (Α)-(ΣΤ) under the framework (M1)-(M5) requires a series of preparatory results. We start by providing Proposition III.7 in Section III.3. This proposition translates the problem of determining the predictable objects $\langle X^{\infty,c,i},L^c\rangle^{\mathbf{F}}$, for $i=1,\dots,\ell$, and $M_{\mu^{X^{\infty,d}}}[\Delta L\,|\,\widetilde{\mathcal{P}}^{\mathbf{F}}]$ into a problem of determining a rich enough family of $\mathbf{F}$-martingales. In the previously used notation, $L$ is a uniformly integrable $\mathbf{F}$-martingale such that these objects are well-defined. With the aforementioned proposition we will prove in Subsection III.6.2 that (ΣΤ) is true, while with a simplification stated as Corollary III.8 we will prove in Subsection III.6.1 that (Ε) is valid.

Having a closer look at the problem-translation mentioned above, it amounts to ensuring the existence of a (suitable) deterministic function³ $h:(\mathbb{R}^\ell,\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$ and a (suitable) family $\mathcal{J}$ of subsets of $\mathbb{R}^\ell$ such that for every $I\in\mathcal{J}$ the following hold:

for (Ε):

$\big[(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})},\overline{N}\big],\ \big[X^{\infty,i},\overline{N}\big]\in\mathcal{M}(\mathbf{F};\mathbb{R})$, for every $i=1,\dots,\ell$, (III.9)

and for (ΣΤ):

$\big[(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})},Y^\infty\big]-\big\langle(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})},Y^\infty\big\rangle^{\mathbf{G}}\in\mathcal{M}(\mathbf{F};\mathbb{R})$ and $\big[X^{\infty,i},Y^\infty\big]-\big\langle X^{\infty,i},Y^\infty\big\rangle^{\mathbf{G}}\in\mathcal{M}(\mathbf{F};\mathbb{R})$, for every $i=1,\dots,\ell$. (III.10)

In order to prove (III.9) and (III.10) we will make use of Proposition I.147. Therefore, we need to weakly approximate $X^\infty$, $Y^\infty$, $\overline{N}$ and $(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}$ for every $I\in\mathcal{J}$. While we can weakly approximate $X^\infty$, $Y^\infty$ and $\overline{N}$ by known sequences and some subsequence, this is not the case for the martingales $(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}$. This is the purpose of Section III.4. At the beginning of Section III.4 we will discuss how to choose the function $h$ and the family $\mathcal{J}$, and then we will present the results which provide the required family of convergent martingale-sequences.

Let us now briefly comment on (Α), i.e. how to construct the filtration $\mathbf{F}$. As we have already mentioned, we are going to apply Proposition I.147. In the next two lines we follow the notation of this proposition. Having (M4) in mind, a simple observation reveals that we can enrich the natural filtration of the limit process $H$ provided we can improve the convergence $\Theta^k\xrightarrow{\ \mathcal{L}\ }\Theta$ to

$(\Theta^k,\mathbb{E}[\mathbf{1}_G\,|\,\mathbf{G}^k])\xrightarrow{\ \mathcal{L}\ }(\Theta,\mathbb{E}[\mathbf{1}_G\,|\,\mathbf{G}])$, for every $G\in\mathcal{G}_\infty$.

In other words, we will obtain that $(\Theta,\mathbb{E}[\mathbf{1}_G\,|\,\mathbf{G}])$ is an $\mathbf{F}^{\mathbb{E}[\mathbf{1}_G|\mathbf{G}]}\vee\mathbf{F}^H$-martingale, for every $G\in\mathcal{G}_\infty$. Then, it will be almost straightforward to obtain that $\Theta$ is a martingale with respect to the filtration $\mathbf{F}:=\mathbf{G}\vee\mathbf{F}^H$. This issue will also be discussed in Section III.6. Finally, once we have all the above results, we will present in Section III.7 the proof of Theorem III.3.

III.3. To be (orthogonal), or not to be, here is the answer

This section constitutes the first part of the preparatory results we are going to use in Section III.7.

In order to avoid as many technicalities as possible amid the flow of the preparatory results, we provide in this section Proposition III.7 and Corollary III.8. It is also a good chance to provide some comments around these results. Generally speaking, the former result provides the martingale conditions by which we can conclude whether the projections (under two different filtrations) of a martingale $Y$ on the spaces generated (under the two different filtrations) by the natural pair of a martingale $X$ remain unaffected.

³Recall Remark I.73 for the usage of deterministic functions as predictable functions.

The latter provides the martingale conditions by which we can conclude whether a martingale $N$ is orthogonal to the space generated by the natural pair of a martingale $X$. The probability space is $(\Omega,\mathcal{G},\mathbb{P})$, as has already been mentioned, but for this section we make the following assumption:

$\mathbf{H}^1,\mathbf{H}^2$ are fixed filtrations such that $\mathbf{H}^1\subset\mathbf{H}^2$.

Let us now provide some more helpful comments. Assume initially that $X,Y\in\mathcal{M}(\mathbf{H}^1;\mathbb{R})\cap\mathcal{M}(\mathbf{H}^2;\mathbb{R})$, that is to say, that they are both uniformly integrable martingales with respect to both given filtrations, and that $[X,Y]\in\mathcal{A}(\mathbf{H}^1;\mathbb{R})\cap\mathcal{A}(\mathbf{H}^2;\mathbb{R})$. By Theorem I.64 we know that there exist an $\mathbf{H}^1$-predictable process $[X,Y]^{(p,\mathbf{H}^1)}$ of finite variation as well as an $\mathbf{H}^2$-predictable process $[X,Y]^{(p,\mathbf{H}^2)}$ of finite variation such that

$[X,Y]-[X,Y]^{(p,\mathbf{H}^1)}\in\mathcal{M}(\mathbf{H}^1;\mathbb{R})$ and $[X,Y]-[X,Y]^{(p,\mathbf{H}^2)}\in\mathcal{M}(\mathbf{H}^2;\mathbb{R})$.

If we assume further that $[X,Y]-[X,Y]^{(p,\mathbf{H}^1)}\in\mathcal{M}(\mathbf{H}^2;\mathbb{R})$, then it is a direct consequence of Corollary I.65 that $[X,Y]^{(p,\mathbf{H}^1)}=[X,Y]^{(p,\mathbf{H}^2)}$. Proposition III.7 provides a criterion by which we can refine the above. In other words, it provides the conditions under which

$\langle X^c,Y^c\rangle^{\mathbf{H}^1}=\langle X^c,Y^c\rangle^{\mathbf{H}^2}$ and $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]=M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]$. (III.11)

For this, however, we need to improve the integrability of the martingales and assume that $X,Y$ lie in $\mathcal{H}^2(\mathbf{H}^1;\mathbb{R})$ as well as in $\mathcal{H}^2(\mathbf{H}^2;\mathbb{R})$. The integrability of $X,Y$ and the Kunita–Watanabe inequality, see Theorem I.100, imply that $\mathrm{Var}([X,Y])\in\mathcal{A}^+(\mathbf{H}^1;\mathbb{R})\cap\mathcal{A}^+(\mathbf{H}^2;\mathbb{R})$, which verifies that $[X,Y]\in\mathcal{A}(\mathbf{H}^1;\mathbb{R})\cap\mathcal{A}(\mathbf{H}^2;\mathbb{R})$ as well. Roughly speaking, one could now translate (III.11) as “the coefficients⁴ of the orthogonal decomposition of $Y$ with respect to $(\mathbf{H}^1,X^c,\mu^{X^d})$ and the coefficients of the orthogonal decomposition of $Y$ with respect to $(\mathbf{H}^2,X^c,\mu^{X^d})$ are the same”. This, however, does not imply that the two orthogonal decompositions are indistinguishable. It may be the case that either $\langle X^c\rangle^{\mathbf{H}^1}\neq\langle X^c\rangle^{\mathbf{H}^2}$ or $\nu^{(X^d,\mathbf{H}^2)}\neq\nu^{(X^d,\mathbf{H}^1)}$, which would lead in general to different stochastic integrals. Now, Condition (Γ) will verify that we also have (III.8).

As we have already commented, we are going to apply the next proposition to the pair $X^\infty,Y^\infty$, while the role of the two filtrations $\mathbf{H}^1,\mathbf{H}^2$ will be played by $\mathbf{G},\mathbf{F}$. Moreover, we anticipate once more in order to inform the reader that Lemma III.45 ensures that $\langle X^{\infty,c}\rangle^{\mathbf{G}}=\langle X^{\infty,c}\rangle^{\mathbf{F}}$ and $\nu^{(X^{\infty,d},\mathbf{F})}=\nu^{(X^{\infty,d},\mathbf{G})}$. So, we will also assume that the latter holds in the next proposition.

Proposition III.7. Assume that the following hold:

(i) $\mathbf{H}^1\subset\mathbf{H}^2$, i.e. $\mathcal{H}^1_t\subset\mathcal{H}^2_t$ for every $t\in\mathbb{R}_+$.

(ii) $X\in\mathcal{M}(\mathbf{H}^1;\mathbb{R}^\ell)\cap\mathcal{M}(\mathbf{H}^2;\mathbb{R}^\ell)$, $Y\in\mathcal{M}(\mathbf{H}^1;\mathbb{R})\cap\mathcal{M}(\mathbf{H}^2;\mathbb{R})$ and $X$ is $\mathbf{H}^1$-quasi-left-continuous.

(iii) $h:(\mathbb{R}^\ell,\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$ is such that $h(\Delta X^d_t)>0$ whenever $\Delta X^d_t\neq0$.

(iv) $\nu^{(X^d,\mathbf{H}^2)}=\nu^{(X^d,\mathbf{H}^1)}$.

(v) $[X,Y]-\langle X,Y\rangle^{\mathbf{H}^1}\in\mathcal{M}(\mathbf{H}^2;\mathbb{R}^\ell)$.

(vi) $\mathcal{I}$ is a family of subsets of $\mathbb{R}^\ell$ such that $\sigma(\mathcal{I})=\mathcal{B}(\mathbb{R}^\ell)$ and the martingale $(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}$⁵ is well-defined and such that $(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\in\mathcal{H}^2(\mathbf{H}^1;\mathbb{R})$ for every $A\in\mathcal{I}$. Moreover,

$\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]-\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}\in\mathcal{M}(\mathbf{H}^2;\mathbb{R})$, for every $A\in\mathcal{I}$.

(vii) $|\Delta Y|\mu^{X^d}\in\widetilde{\mathcal{A}}_\sigma(\mathbf{H}^1)$.⁶

Then, we have that

$\langle X^{c,i},Y^c\rangle^{\mathbf{H}^1}=\langle X^{c,i},Y^c\rangle^{\mathbf{H}^2}$ for every $i=1,\dots,\ell$ (III.12)

and

$M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]=M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]\quad M_{\mu^{X^d}}$-a.e. (III.13)

⁴Although this terminology sounds naive, it is not far from reality. One should observe the way the orthogonal decompositions are obtained and would realise that the above quantities determine the integrands of the stochastic integrals taken with respect to the natural pair of a square-integrable martingale.

⁵Recall Remark I.73 for the usage of deterministic functions as predictable functions.

⁶For the notation, recall Definition I.75. Moreover, observe that (i) and (vii) imply that also $|\Delta Y|\mu^{X^d}\in\widetilde{\mathcal{A}}_\sigma(\mathbf{H}^2)$.


Proof. We will prove (III.13) first. To this end, we are going to translate condition (vi) into

$\mathbb{E}\big[(Wh\,\Delta Y)\ast\mu^{X^d}\big]=\mathbb{E}\big[\big(Wh\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]\big)\ast\mu^{X^d}\big]$ for every $\mathbf{H}^2$-predictable function $W$. (III.14)

Before we proceed, recall that we have assumed $X$ to be $\mathbf{H}^1$-quasi-left-continuous. Thus, by Proposition I.87 we obtain that

$\Delta\big((h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\big)_s=h(\Delta X^d_s)\,\mathbf{1}_A(\Delta X^d_s)$. (III.15)

By the $\mathbf{H}^2$-martingale property of $\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]-\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}$, we obtain for every $0\le t<u<\infty$ and every $C\in\mathcal{H}^2_t$ that

$\mathbb{E}\Big[\mathbf{1}_C\,\mathbb{E}\big[\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_u-\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_u\,\big|\,\mathcal{H}^2_t\big]\Big]=\mathbb{E}\Big[\mathbf{1}_C\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_t-\mathbf{1}_C\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_t\Big]$,

equivalently

$\mathbb{E}\Big[\mathbf{1}_C\,\mathbb{E}\big[\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_u\,\big|\,\mathcal{H}^2_t\big]-\mathbf{1}_C\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_t\Big]=\mathbb{E}\Big[\mathbf{1}_C\,\mathbb{E}\big[\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_u\,\big|\,\mathcal{H}^2_t\big]-\mathbf{1}_C\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_t\Big]$,

equivalently

$\mathbb{E}\Big[\mathbf{1}_C\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_u-\mathbf{1}_C\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_t\Big]=\mathbb{E}\Big[\mathbf{1}_C\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_u-\mathbf{1}_C\big\langle(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}_t\Big]$. (III.16)

By Definition I.96, since $(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\in\mathcal{H}^{2,d}(\mathbf{H}^1;\mathbb{R})$, we have that

$\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_u-\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]_t=\sum_{t<s\le u}\Delta\big((h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\big)_s\,\Delta Y_s$.

Moreover, in view of Theorem I.93 and (vii), which ensures the existence of $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]$, we can write (III.16) equivalently as

$\mathbb{E}\Big[\mathbf{1}_C\sum_{t<s\le u}\Delta\big((h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\big)_s\,\Delta Y_s\Big]=\mathbb{E}\Big[\mathbf{1}_C\int_{(t,u]\times\mathbb{R}^\ell}h(x)\mathbf{1}_A(x)\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}](s,x)\,\nu^{(X^d,\mathbf{H}^1)}(\mathrm{d}s,\mathrm{d}x)\Big]$,

equivalently, by (III.15),

$\mathbb{E}\Big[\int_{(t,u]\times\mathbb{R}^\ell}\mathbf{1}_C\,h(x)\mathbf{1}_A(x)\,\Delta Y_s\,\mu^{X^d}(\mathrm{d}s,\mathrm{d}x)\Big]=\mathbb{E}\Big[\int_{(t,u]\times\mathbb{R}^\ell}\mathbf{1}_C\,h(x)\mathbf{1}_A(x)\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}](s,x)\,\nu^{(X^d,\mathbf{H}^1)}(\mathrm{d}s,\mathrm{d}x)\Big]$,

equivalently

$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathbf{1}_C\mathbf{1}_{(t,u]}(s)\mathbf{1}_A(x)\,h(x)\,\Delta Y_s\,\mu^{X^d}(\mathrm{d}s,\mathrm{d}x)\Big]=\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathbf{1}_C\mathbf{1}_{(t,u]}(s)\mathbf{1}_A(x)\,h(x)\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}](s,x)\,\nu^{(X^d,\mathbf{H}^1)}(\mathrm{d}s,\mathrm{d}x)\Big]$. (III.17)

Now, by the monotone class theorem we can conclude that condition (III.14) holds, because

$\widetilde{\mathcal{P}}^{\mathbf{H}^2}=\mathcal{P}^{\mathbf{H}^2}\otimes\mathcal{B}(\mathbb{R}^\ell)=\sigma\big(\{P\times A,\text{ where }P\in\mathcal{P}^{\mathbf{H}^2}\text{ and }A\in\mathcal{B}(\mathbb{R}^\ell)\}\big)\overset{\text{Th. I.14.(ii)}}{=}\sigma\big(\{C\times(t,u]\times A,\text{ where }0\le t<u,\ C\in\mathcal{H}^2_t\text{ and }A\in\mathcal{I}\}\big)$,

where we have used from (vi) that $\sigma(\mathcal{I})=\mathcal{B}(\mathbb{R}^\ell)$.

The next observation is that the function $\widetilde{\Omega}\ni(\omega,s,x)\longmapsto h(x)\in\mathbb{R}$ is $\mathbf{H}^2$-predictable, since it is deterministic. Moreover, the function $h$ is positive $M_{\mu^{X^d}}$-a.e.; recall that $\mu^{X^d}$ has been defined using the random set $[\Delta X^d\neq0]$. Therefore, by substituting in (III.14) the $\mathbf{H}^2$-predictable function $W(\omega,s,x)$ with the $\mathbf{H}^2$-predictable function $\frac{U(\omega,s,x)}{h(x)}$, where $U$ is an arbitrary $\mathbf{H}^2$-predictable function, we obtain

$\mathbb{E}\big[(U\,\Delta Y)\ast\mu^{X^d}\big]=\mathbb{E}\big[\big(U\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]\big)\ast\mu^{X^d}\big]$, for every $\mathbf{H}^2$-predictable function $U$. (III.18)

On the other hand, condition (vii) ensures the well-posedness of $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]$. Indeed, there exists a positive $\mathbf{H}^1$-predictable function $V$ such that $\mathbb{E}\big[(V|\Delta Y|)\ast\mu^{X^d}\big]<\infty$. But $V$ is, by (i), also $\widetilde{\mathcal{P}}^{\mathbf{H}^2}$-measurable, which verifies our claim. Now, by the definition of $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]$, it holds that

$\mathbb{E}\big[(U\,\Delta Y)\ast\mu^{X^d}\big]=\mathbb{E}\big[\big(U\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]\big)\ast\mu^{X^d}\big]$, for every $\mathbf{H}^2$-predictable function $U$. (III.19)

Now, recalling that $\mathbf{H}^1\subset\mathbf{H}^2$, which implies that $\widetilde{\mathcal{P}}^{\mathbf{H}^1}\subset\widetilde{\mathcal{P}}^{\mathbf{H}^2}$, it only remains to observe that $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]$ and $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]$ are both $\mathbf{H}^2$-predictable functions with

$\mathbb{E}\big[\big(U\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]\big)\ast\mu^{X^d}\big]\overset{(\text{III.18})}{=}\mathbb{E}\big[(U\,\Delta Y)\ast\mu^{X^d}\big]\overset{(\text{III.19})}{=}\mathbb{E}\big[\big(U\,M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]\big)\ast\mu^{X^d}\big]$ for every $\mathbf{H}^2$-predictable function $U$.

This implies that

$M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]=M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]\quad M_{\mu^{X^d}}$-almost everywhere. (III.20)

We proceed now to show the validity of (III.12). By condition (ii) we have that $[X,Y]^i\in\mathcal{A}(\mathbf{H}^2;\mathbb{R})$ for every $i=1,\dots,\ell$; hence the process $\langle X,Y\rangle^{\mathbf{H}^2}$ is well-defined and such that $[X,Y]-\langle X,Y\rangle^{\mathbf{H}^2}\in\mathcal{M}(\mathbf{H}^2;\mathbb{R}^\ell)$.

But then

$\langle X,Y\rangle^{\mathbf{H}^2}-\langle X,Y\rangle^{\mathbf{H}^1}=\big([X,Y]-\langle X,Y\rangle^{\mathbf{H}^1}\big)-\big([X,Y]-\langle X,Y\rangle^{\mathbf{H}^2}\big)\in\mathcal{M}(\mathbf{H}^2;\mathbb{R}^\ell)\cap\mathcal{V}_{\mathrm{pred}}(\mathbf{H}^2;\mathbb{R}^\ell)$.

By Corollary I.65 it has to be indistinguishable from the zero process, i.e.

$\langle X,Y\rangle^{\mathbf{H}^2}=\langle X,Y\rangle^{\mathbf{H}^1}$. (III.21)

By Theorem I.93, we obtain for every $i=1,\dots,\ell$

$\langle X^{d,i},Y\rangle^{\mathbf{H}^1}=\big\langle\pi^i\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big\rangle^{\mathbf{H}^1}=\big(\pi^i M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]\big)\ast\nu^{(X^d,\mathbf{H}^1)}\overset{(\text{III.20}),(\text{iv})}{=}\big(\pi^i M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^2}]\big)\ast\nu^{(X^d,\mathbf{H}^2)}=\big\langle\pi^i\star\widetilde{\mu}^{(X^d,\mathbf{H}^2)},Y\big\rangle^{\mathbf{H}^2}=\langle X^{d,i},Y\rangle^{\mathbf{H}^2}$. (III.22)

The combination of (III.21) and (III.22) yields

$\langle X^{c,i},Y^c\rangle^{\mathbf{H}^1}=\langle X^i,Y\rangle^{\mathbf{H}^1}-\langle X^{d,i},Y^d\rangle^{\mathbf{H}^1}\overset{(\text{III.21}),(\text{III.22})}{=}\langle X^i,Y\rangle^{\mathbf{H}^2}-\langle X^{d,i},Y^d\rangle^{\mathbf{H}^2}=\langle X^{c,i},Y^c\rangle^{\mathbf{H}^2}$,

for every $i=1,\dots,\ell$.

Now we can provide the following corollary, which can be proven by completely analogous (but simpler) arguments to those of the previous proposition.

Corollary III.8. Assume the framework of Proposition III.7, where (ii), (v) and (vi) are substituted by the following:

(ii′) $X\in\mathcal{H}^2(\mathbf{H}^1;\mathbb{R}^\ell)\cap\mathcal{H}^2(\mathbf{H}^2;\mathbb{R}^\ell)$ and $Y\in\mathcal{M}(\mathbf{H}^2;\mathbb{R})$.

(v′) $[X,Y]^i\in\mathcal{A}(\mathbf{H}^1;\mathbb{R})$ for every $i=1,\dots,\ell$ and $[X,Y]\in\mathcal{M}(\mathbf{H}^2;\mathbb{R}^\ell)$.

(vi′) $\mathcal{I}$ is a family of subsets of $\mathbb{R}^\ell$ such that $\sigma(\mathcal{I})=\mathcal{B}(\mathbb{R}^\ell)$, the martingale $(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}$ is well-defined and such that $(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)}\in\mathcal{H}^2(\mathbf{H}^1;\mathbb{R})$ for every $A\in\mathcal{I}$. Moreover,

$\big[(h\mathbf{1}_A)\star\widetilde{\mu}^{(X^d,\mathbf{H}^1)},Y\big]\in\mathcal{M}(\mathbf{H}^2;\mathbb{R})$, for every $A\in\mathcal{I}$.

Then, we have that

$\langle X^{c,i},Y^c\rangle=0$ for every $i=1,\dots,\ell$ and $M_{\mu^{X^d}}[\Delta Y\,|\,\widetilde{\mathcal{P}}^{\mathbf{H}^1}]=0\quad M_{\mu^{X^d}}$-a.e.

Proof. The result is immediate by arguments analogous to those of Proposition III.7.


III.4. Constructing a sufficiently rich family of convergent martingale-sequences

This section is devoted to constructing, for a given function $h:(\mathbb{R}^\ell,\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$, a family of sequences of purely discontinuous martingales, namely $\big(h^k_I\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_{k\in\mathbb{N}}$ for $I\in\mathcal{J}$ with $\sigma(\mathcal{J})=\mathcal{B}(\mathbb{R}^\ell)$. Additionally, we will prove that for $I\in\mathcal{J}$

$\big(X^k,\ h^k_I\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)\xrightarrow{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{L}^2)}\big(X^\infty,\ (h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big)$ (III.23)

and

$\big(X^k,\ \big[Y^k,h^k_I\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]-\big\langle Y^k,h^k_I\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big\rangle^{\mathbf{G}^k}\big)\xrightarrow{(J_1(\mathbb{R}^\ell\times\mathbb{R}),\,\mathbb{L}^2\times\mathbb{L}^1)}\big(X^\infty,\ \big[Y^\infty,(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big]-\big\langle Y^\infty,(h\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big\rangle^{\mathbf{G}}\big)$. (III.24)

Recall the discussion in Section III.2 for the reason we need such a family. In particular, (III.24) is connected with Proposition III.7.(vi). We divide this section into two subsections. In Subsection III.4.1 we provide Proposition III.17, which establishes the validity of (III.23), together with its preparatory results. The respective results for Convergence (III.24) are presented in Subsection III.4.2, where the main result of the subsection is Proposition III.25.

Let us now comment on the choice of the function $h$ (we need only one such function in order to apply Proposition III.7 and Corollary III.8 later), of the set $\mathcal{J}$ and of the integrands $h^k_I$, for $k\in\mathbb{N}$, $I\in\mathcal{J}$. Let us start with the definition of $M_{\mu^{X^{\infty,d}}}[\Delta L\,|\,\widetilde{\mathcal{P}}^{\mathbf{F}}]$, for $\mathbf{F}$ a filtration such that $\mathbf{G}\subset\mathbf{F}$, and $L\in\mathcal{M}(\mathbf{F};\mathbb{R})$ such that $|\Delta L|\mu^{X^{\infty,d}}\in\widetilde{\mathcal{A}}_\sigma(\mathbf{F})$. The $\mathbf{F}$-predictable function $M_{\mu^{X^{\infty,d}}}[\Delta L\,|\,\widetilde{\mathcal{P}}^{\mathbf{F}}]$ is defined as the $\mathbf{F}$-predictable function such that

$\mathbb{E}\big[(W\,\Delta L)\ast\mu^{X^{\infty,d}}\big]=\mathbb{E}\big[\big(W\,M_{\mu^{X^{\infty,d}}}[\Delta L\,|\,\widetilde{\mathcal{P}}^{\mathbf{F}}]\big)\ast\mu^{X^{\infty,d}}\big]$ for every $\mathbf{F}$-predictable function $W$.

In the results of the previous section, a positive function $h$ intervened in the above products of integrands, with the property that $h(\Delta X^{\infty,d}_t)>0$ whenever $\Delta X^{\infty,d}_t\neq0$; we have adapted the notation to the current section. The reason is quite obvious: we have to retain the information whenever a jump occurs at a given $(\omega,t)\in[\Delta X^{\infty,d}\neq0]$. However, we have the flexibility to alter the jump values $\Delta X^{\infty,d}_t(\omega)$ in a $\mathbf{G}$-predictable way. For example, we could choose $h(x)=\|x\|_1$, which is positive whenever a jump occurs for $X^{\infty,d}$ and, moreover, does not allow for any cancellations between the jumps $\Delta X^{\infty,d,i}$, for $i=1,\dots,\ell$. But this is not a satisfactory choice for our purpose, since we will need better integrability properties. The next choice would be to distinguish between “big” and “small” jumps, so we are going to examine the function

$\mathbb{R}^\ell\ni x\overset{\mathrm{R}_p}{\longmapsto}\sum_{i=1}^\ell(|x_i|\wedge1)^p\in\mathbb{R},\ \text{for }p\in[1,\infty)$, (III.25)

where $a\wedge b=\min\{a,b\}$ for $a,b\in\mathbb{R}$. Now, we have that

$\sum_{0<s\le\cdot}\mathrm{R}_p(\Delta X^{\infty,d}_s)\le\sum_{0<s\le\cdot}\|\Delta X^{\infty,d}_s\|_2^2\in\mathbb{L}^1(\mathbf{G};\mathbb{R})$, for every $p\in[2,\infty)$.

This property will allow us to construct sequences of martingales with (more than) sufficient integrability. We will constrain ourselves to the case $p=2$.
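As a quick numerical illustration of the truncation function in (III.25) (a minimal sketch; the helper names `R_p` and `sq_norm` are ours, not from the text), the snippet below evaluates $\mathrm{R}_p(x)=\sum_i(|x_i|\wedge1)^p$ and checks, for $p=2$, the domination $\mathrm{R}_2(x)\le\|x\|_2^2$ that gives the summability over the jumps of a square-integrable martingale:

```python
# Minimal numerical sketch of the truncation function R_p from (III.25):
# R_p(x) = sum_i (|x_i| ∧ 1)^p.  All names here are illustrative.

def R_p(x, p=2.0):
    """Evaluate R_p(x) = sum_i (min(|x_i|, 1))**p for a jump vector x."""
    return sum(min(abs(xi), 1.0) ** p for xi in x)

def sq_norm(x):
    """Squared Euclidean norm, the dominating quantity in the text."""
    return sum(xi * xi for xi in x)

# For p = 2 each summand satisfies (|t| ∧ 1)^2 <= t^2, hence R_2(x) <= ||x||_2^2.
jumps = [(0.3, -0.1), (2.5, 0.0), (-0.7, 1.2), (0.0, 0.0)]
for x in jumps:
    assert R_p(x, 2.0) <= sq_norm(x) + 1e-12

# Summing over the jumps of a path preserves the bound, as used in the text.
total_R2 = sum(R_p(x, 2.0) for x in jumps)
total_sq = sum(sq_norm(x) for x in jumps)
assert total_R2 <= total_sq
```

The bound holds summand by summand since $(|t|\wedge1)^2\le t^2$, which is precisely why the case $p=2$ yields the integrability exploited later.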

Now that we have chosen the function which will play the role of $h$, we need to approximate the $\mathbf{G}$-submartingale $(\mathrm{R}_2\mathbf{1}_I)\ast\mu^{X^{\infty,d}}$. We comment briefly on how we obtain the approximation, and then on why we proceed in this way. In view of Lemma III.5.(i) we can obtain the aforementioned approximation by means of Proposition I.134, which leads us also to the proper choice of $\mathcal{J}$ as well as of $h^k_I$, for $k\in\mathbb{N}$, $I\in\mathcal{J}$. If we translate Proposition I.134 to the current framework, we are allowed to choose the sets $I$ from the set $\mathcal{J}(X^{\infty,d})$, see its definition at the beginning of the next subsection, and the integrands $h^k_I$ have to be equal to $\mathrm{R}_2\mathbf{1}_I$. The reader may have realised that we have obtained convergent sequences of submartingales instead of martingales, as we had promised. This is true, but we have done so because these are easier to construct. Indeed, using Proposition I.134 we create for every $k\in\mathbb{N}$ the process $(\mathrm{R}_2\mathbf{1}_I)\ast\mu^{X^{k,d}}$, which is $\mathbf{G}^k$-adapted, with piecewise-constant paths and with positive jumps, i.e. its paths are non-decreasing $\mathbb{P}$-a.s. Now, it is the results of Mémin, more specifically Theorem I.159, which will allow us to obtain the initially required convergence

$(\mathrm{R}_2\mathbf{1}_I)\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\xrightarrow{(J_1,\,\mathbb{P})}(\mathrm{R}_2\mathbf{1}_I)\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}$. (III.26)

The $\mathbb{L}^2$-convergence will then be easily obtained as a result of the choice of $\mathrm{R}_2$.

III.4.1. The convergence (III.23) is true. Throughout this chapter, we will consider $X^k$ to be a $\mathbf{G}^k$-martingale, for every $k\in\mathbb{N}$. Before we proceed, let us introduce some notation that will be used throughout the rest of Chapter III.

Notation III.9. For a fixed $i\in\{1,\dots,\ell\}$, following the notation used in Subsection I.6.1, we introduce the sets

$W(X^{\infty,i}(\omega)):=\{u\in\mathbb{R}\setminus\{0\},\ \exists t>0\text{ with }\Delta X^{\infty,d,i}_t(\omega)=u\}$,

$V(X^{\infty,i}):=\{u\in\mathbb{R}\setminus\{0\},\ \mathbb{P}\big([\Delta X^{\infty,d,i}_t=u,\text{ for some }t>0]\big)>0\}$,

$\mathcal{I}(X^{\infty,i}):=\{(v,w)\subset\mathbb{R}\setminus\{0\},\ vw>0\text{ and }v,w\notin V(X^{\infty,i})\}$,

$\mathcal{J}(X^\infty):=\big\{\prod_{i=1}^\ell I_i,\text{ where }I_i\in\mathcal{I}(X^{\infty,i})\cup\{\mathbb{R}\}\text{ for every }i=1,\dots,\ell\big\}\setminus\{\mathbb{R}^\ell\}$.

For every $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$, we set

$J_I:=\{i\in\{1,\dots,\ell\},\ I_i\neq\mathbb{R}\}\neq\emptyset$. (III.27)

Let $k\in\mathbb{N}$, $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$ and $g:\Omega\times\mathbb{R}^\ell\longrightarrow\mathbb{R}$. Then we define the $\mathbb{R}^{\ell+1}$-valued process

$\widehat{X}^k[g,I]:=\big(X^k,X^{k,g,I}\big)$, with $X^{k,g,I}:=(g\mathbf{1}_I)\ast\mu^{X^{k,d}}$.

By Lemma I.138, the set $V(X^{\infty,i})$ is at most countable, for every $i=1,\dots,\ell$. Moreover, observe that an alternative representation of $X^{k,g,I}$ is given by

$X^{k,g,I}_\cdot(\omega)=\sum_{0<t\le\cdot}g(\omega,\Delta X^{k,d}_t(\omega))\,\mathbf{1}_I(\Delta X^{k,d}_t(\omega))=\int_{(0,\cdot]\times\mathbb{R}^\ell}g(\omega,x)\mathbf{1}_I(x)\,\mu^{X^{k,d}}(\omega;\mathrm{d}s,\mathrm{d}x)$.

Recall, moreover, that, due to (M2) and Theorem I.57.(ii), the random variable $X^k_\infty$ exists $\mathbb{P}$-a.s. Consequently, the process $X^{k,g,I}$, as well as the process $\widehat{X}^k[g,I]$, are $\mathbb{P}$-a.s. well-defined.
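The alternative representation above is simply a pathwise sum over jumps. The following sketch (jump data and helper names are illustrative, not from the text) accumulates $g(\Delta X_t)\mathbf{1}_I(\Delta X_t)$ along a discrete jump list; the rectangle $I$ is taken with open sides, mirroring the open intervals in $\mathcal{I}(X^{\infty,i})$:

```python
# Pathwise sketch of X^{g,I} = (g 1_I) * mu^{X,d}: accumulate g at each jump
# whose value falls in the rectangle I.  All names are illustrative.

def indicator_rect(x, I):
    """1_I(x) for a rectangle I given as a list of open (lo, hi) intervals."""
    return all(lo < xi < hi for xi, (lo, hi) in zip(x, I))

def integral_process(jumps, g, I):
    """Running value of sum_{t <= .} g(dx_t) 1_I(dx_t) over time-ordered jumps.

    `jumps` is a list of (t, dx) pairs with dx an R^l-valued jump."""
    out, acc = [], 0.0
    for t, dx in sorted(jumps):
        if indicator_rect(dx, I):
            acc += g(dx)
        out.append((t, acc))
    return out

# Example: l = 2, I = (0.5, 2) x R, g(x) = sum_i (|x_i| ∧ 1)^2 as in the text.
g = lambda x: sum(min(abs(xi), 1.0) ** 2 for xi in x)
I = [(0.5, 2.0), (float("-inf"), float("inf"))]
jumps = [(1.0, (0.8, -0.3)), (2.0, (0.1, 0.4)), (3.0, (1.5, 0.2))]
path = integral_process(jumps, g, I)
```

The second jump is excluded because its first coordinate lies outside $(0.5,2)$, so the running sum stays flat there.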

Proposition III.10. Let condition (M2) hold and fix an $I\in\mathcal{J}(X^\infty)$ and a function $g:(\Omega\times\mathbb{R}^\ell,\mathcal{G}\otimes\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R},\mathcal{B}(\mathbb{R}))$. For the function $g$ we assume that there exists $\Omega_C\subset\Omega$ with $\mathbb{P}(\Omega_C)=1$ such that

$\mathbb{R}^\ell\ni x\longmapsto g(\omega,x)\in\mathbb{R}$ is continuous on $C(\omega)$ for every $\omega\in\Omega_C$,

where

$C(\omega):=\prod_{i=1}^\ell A^i(\omega)$, with $A^i(\omega):=\begin{cases}W(X^{\infty,i}(\omega)),&\text{if }i\in J_I,\\ W(X^{\infty,i}(\omega))\cup\{0\},&\text{if }i\in\{1,\dots,\ell\}\setminus J_I.\end{cases}$

Then, it holds

$\widehat{X}^k[g,I]\xrightarrow[k\to\infty]{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{P})}\widehat{X}[g,I]$.

Proof. Let us fix an $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$ and recall that the space $(\mathbb{D}(\mathbb{R}^{\ell+1}),d_{J_1(\mathbb{R}^{\ell+1})})$ is Polish. Therefore, it is sufficient, by Theorem I.141, to prove that for every subsequence $(\widehat{X}^{k_l}[g,I])_{l\in\mathbb{N}}$ there exists a further subsequence $(\widehat{X}^{k_{l_m}}[g,I])_{m\in\mathbb{N}}$ for which

$\widehat{X}^{k_{l_m}}[g,I]\xrightarrow[m\to\infty]{J_1(\mathbb{R}^{\ell+1})}\widehat{X}[g,I],\ \mathbb{P}\text{-a.s.}$ (III.28)

Let $(\widehat{X}^{k_l}[g,I])_{l\in\mathbb{N}}$ be fixed hereinafter. For every $i\in J_I$, since $I_i\in\mathcal{I}(X^{\infty,i})$, there exists $\Omega^{I_i}\subset\Omega$ such that

(i) $\mathbb{P}(\Omega^{I_i})=1$,

(ii) $\Delta X^{\infty,d,i}_t(\omega)\notin\partial I_i$, for every $t\in\mathbb{R}_+$ and $\omega\in\Omega^{I_i}$,

where $\partial A$ denotes the $|\cdot|$-boundary of the set $A$.

Condition (M2) implies the convergence $X^k\xrightarrow{(J_1(\mathbb{R}^\ell),\,\mathbb{P})}X^\infty$. Hence, for the subsequence $(X^{k_l})_{l\in\mathbb{N}}$, there is a further subsequence $(X^{k_{l_m}})_{m\in\mathbb{N}}$ for which

$X^{k_{l_m}}\xrightarrow{J_1(\mathbb{R}^\ell)}X^\infty,\ \mathbb{P}\text{-a.s.}$ (III.29)

Let $\Omega_{\mathrm{sub}}\subset\Omega$ be such that $\mathbb{P}(\Omega_{\mathrm{sub}})=1$ and such that the convergence (III.29) holds for every $\omega\in\Omega_{\mathrm{sub}}$. Define $\Omega^{J_I}_{\mathrm{sub}}:=\Omega_{\mathrm{sub}}\cap\big(\cap_{i\in J_I}\Omega^{I_i}\big)$. Then

(i′) $\mathbb{P}(\Omega^{J_I}_{\mathrm{sub}})=1$,

(ii′) $\Delta X^{\infty,d,i}_t(\omega)\notin\partial I_i$, for every $t\in\mathbb{R}_+$, every $i\in J_I$ and every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$.

By the last property we can conclude that $\partial I_i\cap W(X^{\infty,i}(\omega))=\emptyset$, for every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$, $i\in J_I$. Therefore, the function $\mathbb{R}^\ell\ni x\overset{g_I(\omega)}{\longmapsto}g(\omega,x)\mathbf{1}_I(x)$ is continuous on the set $C(\omega)$ for $\mathbb{P}$-almost every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$. We can now conclude once we apply, for each $\omega\in\Omega^{J_I}_{\mathrm{sub}}$, Proposition I.134 to the sequence $(X^{k_{l_m}}(\omega))_{m\in\mathbb{N}}$ and to the function $\mathbb{R}^\ell\ni x\longmapsto g(\omega,x)\in\mathbb{R}$, which verifies (III.28).

For the following results, recall the continuous function $\mathrm{R}_p$ introduced in (III.25).

Corollary III.11. Let condition (M2) hold. Then, for every $I\in\mathcal{J}(X^\infty)$ it holds

$\widehat{X}^k[\mathrm{R}_p,I]\xrightarrow{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{P})}\widehat{X}[\mathrm{R}_p,I]$, for every $p\in\mathbb{R}_+$.

Proof. Let $p\in\mathbb{R}_+$. It suffices to apply the above proposition to the function $\mathrm{R}_p$, which is continuous.

Let us now provide more details on the strategy of the proof for this step. Proposition III.17 below is the most important result of this subsection, since it provides us with a rich enough family of convergent martingale sequences. It is the family on which we have already commented, and it satisfies convergence (III.26). To this end, we are going to apply Theorem I.159 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$, where $I\in\mathcal{J}(X^\infty)$. However, all of this requires making sure that this sequence indeed verifies the requirements of the aforementioned theorem, for every $I\in\mathcal{J}(X^\infty)$. This is the subject of the remainder of this subsection. Before we proceed, we provide a remark, helpful for the calculations, which is immediate from Assumption (III.2). Then we will set some convenient notation.

Remark III.12. Let us initially fix a $k\in\mathbb{N}$. In view of Assumption (III.2) we obtain that

$\mathbb{E}\big[\mathrm{Tr}[q]\ast\mu^{X^k}_\infty\big]=\mathbb{E}\big[\mathrm{Tr}[q]\ast\nu^{(X^k,\mathbf{G}^k)}_\infty\big]$. (III.30)

This implies that we can associate to the process $X^k$ a square-integrable $\mathbf{G}^k$-martingale, which in (M2) we have denoted by the same symbol $X^k$.⁷ For the calculations, this means that $\sum_{s>0}\|\Delta X^{k,d}_s\|_2^2\in\mathcal{A}^+(\mathbf{G}^k;\mathbb{R})$. Now, letting $k$ run through $\mathbb{N}$, Assumption (III.2) and Equality (III.30) yield that

$\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[q]\ast\mu^{X^k}_\infty\big]=\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[q]\ast\nu^{(X^k,\mathbf{G}^k)}_\infty\big]<\infty$,

which in turn, in view of Proposition I.85, implies

$\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[X^{k,d}]_\infty\big]=\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\sum_{s>0}\|\Delta X^{k,d}_s\|_2^2\Big]=\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[q]\ast\mu^{X^{k,d}}_\infty\big]\le\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[q]\ast\mu^{X^k}_\infty\big]+\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[q]\ast\nu^{(X^k,\mathbf{G}^k)}_\infty\big]<\infty$. (III.31)

We end this remark with the following comment. The reader may recall that from Lemma III.5.(iv) we have better integrability properties for the sequence $\big(\mathrm{Tr}[X^{k,d}]_\infty\big)_{k\in\mathbb{N}}$; however, the above bound is immediate from Assumption (III.2), and we prefer to refer to the above bounds whenever they are sufficient for our needs.

Let us now fix $I:=\prod_{i=1}^\ell I_i\in\mathcal{J}(X^\infty)$; hence the set $J_I$ is a fixed non-empty subset of $\{1,\dots,\ell\}$. Moreover, we define $\mathbb{R}^\ell\ni x\overset{\mathrm{R}_{2,A}}{\longmapsto}\mathrm{R}_2(x)\mathbf{1}_A(x)$, for every $A\subset\mathbb{R}^\ell$.

⁷We underline again at this point that, according to our notation, the integer-valued measure $\mu^{X^k}$ is associated to the jumps of the process $X^k$, while the integer-valued measure $\mu^{X^{k,d}}$ is associated to the jumps of the purely discontinuous part $X^{k,d}$ of the $\mathbf{G}^k$-martingale $X^k$.

Lemma III.13. The process $X^{k,\mathrm{R}_2,I}$ is a $\mathbf{G}^k$-special semimartingale, for every $k\in\mathbb{N}$. In particular, its $\mathbf{G}^k$-canonical decomposition is given by

$X^{k,\mathrm{R}_2,I}_\cdot=\int_0^\cdot\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)+\int_{(0,\cdot]\times I}\mathrm{R}_2(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)$,

or equivalently

$\mathrm{R}_{2,I}\ast\mu^{X^{k,d}}=\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}+\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$.

Proof. Let $k\in\mathbb{N}$. Observe that by construction the process $X^{k,\mathrm{R}_2,I}$ is $\mathbf{G}^k$-adapted and càdlàg. The function $\mathrm{R}_{2,I}$ is positive, hence the process $X^{k,\mathrm{R}_2,I}$ is a $\mathbf{G}^k$-submartingale of finite variation, as its paths are $\mathbb{P}$-a.s. non-decreasing. Before we proceed, we need to show the integrability of $\mathrm{R}_{2,I}\ast\mu^{X^{k,d}}$ in order to make use of Proposition I.85. But we have

$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\Big]\le\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\|x\|_2^2\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\Big]=\mathbb{E}\big[\mathrm{Tr}[X^{k,d}]_\infty\big]\overset{(\text{III.31})}{<}\infty$, (III.32)

where the last equality holds because of Theorem I.79. Moreover, (III.32) yields also that $X^{k,\mathrm{R}_2,I}\in\mathcal{S}_{\mathrm{sp}}(\mathbf{G}^k;\mathbb{R})$, by Proposition I.98.(ii); we concluded the last property because the process is increasing. Moreover, by Theorem I.79 and Condition (III.32) (which implicitly makes use of (III.31)) we obtain

$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)\Big]<\infty$, for every $k\in\mathbb{N}$. (III.33)

Therefore, we have

$X^{k,\mathrm{R}_2,I}_\cdot=(\mathrm{R}_2\mathbf{1}_I)\ast\mu^{X^{k,d}}=\big[(\mathrm{R}_2\mathbf{1}_I)\ast\mu^{X^{k,d}}-(\mathrm{R}_2\mathbf{1}_I)\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big]+(\mathrm{R}_2\mathbf{1}_I)\ast\nu^{(X^{k,d},\mathbf{G}^k)}=(\mathrm{R}_2\mathbf{1}_I)\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}+(\mathrm{R}_2\mathbf{1}_I)\ast\nu^{(X^{k,d},\mathbf{G}^k)}=\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}+\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$,

where in the third equality we have used Proposition I.85. The finite-variation part $\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$ is predictable, since $\mathrm{R}_{2,I}$ is deterministic and the random measure $\nu^{(X^{k,d},\mathbf{G}^k)}$ is predictable (see Definition I.10 and Theorem I.79). Hence, we can conclude also via this route that $X^{k,\mathrm{R}_2,I}$ is a special semimartingale, since it admits a representation as the sum of a martingale and a predictable part of finite variation.
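In discrete time the decomposition above reduces to the Doob decomposition: subtracting the one-step conditional means of the increments from an increasing process leaves a martingale. A minimal sketch under the simplifying assumption of i.i.d. increments with known mean (all names are ours, for illustration only):

```python
# Doob-decomposition sketch: for an increasing process A_n built from positive
# i.i.d. increments xi_i with known mean mu, the compensator is n * mu and
# A_n - n * mu is a mean-zero-increment martingale.  Illustrative only.
import random

random.seed(2)
mu = 0.5                                  # known mean of each U(0,1) increment
xs = [random.uniform(0, 1) for _ in range(10_000)]

A = 0.0
martingale_part = []
for n, xi in enumerate(xs, start=1):
    A += xi
    martingale_part.append(A - n * mu)    # A_n minus its compensator n * mu

# The martingale part should stay small relative to A (CLT-scale fluctuations).
assert abs(martingale_part[-1]) < 200     # crude sanity bound, ~7 std devs
```

The continuous-time statement replaces the sum of conditional means by the predictable compensator $\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$.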

Lemma III.14. (i) The sequence $\big(X^{k,\mathrm{R}_2,I}_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.

(ii) The sequence $\big([X^{k,\mathrm{R}_2,I}]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.

Proof. (i) Using the definitions of $\mathrm{R}_2$ and $X^{k,\mathrm{R}_2,I}$, we get

$X^{k,\mathrm{R}_2,I}_\infty=\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\le\int_{(0,\infty)\times\mathbb{R}^\ell}\|x\|_2^2\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)=\mathrm{Tr}\big[[X^{k,d}]_\infty\big]\le\mathrm{Tr}\big[[X^k]_\infty\big]$.

Hence, from Lemma III.5.(iv) and Corollary I.29.(ii) we can conclude.

(ii) By Lemma III.13, the process $X^{k,\mathrm{R}_2,I}$ is a $\mathbf{G}^k$-special semimartingale for every $k\in\mathbb{N}$, whose martingale part is purely discontinuous. Therefore, we have by Definition I.96 that

$[X^{k,\mathrm{R}_2,I}]_\infty=\sum_{s>0}\mathrm{R}_2(\Delta X^{k,d}_s)^2\,\mathbf{1}_I(\Delta X^{k,d}_s)\le\sum_{s>0}\mathrm{R}_2(\Delta X^{k,d}_s)^2=\sum_{s>0}\Big[\sum_{i=1}^\ell\big(|\Delta X^{k,d,i}_s|\wedge1\big)^2\Big]^2$

$=\sum_{s>0}\Big[\sum_{i=1}^\ell|\Delta X^{k,d,i}_s|^2\,\mathbf{1}_{(0,1)}\big(|\Delta X^{k,d,i}_s|\big)+\mathbf{1}_{[1,\infty)}\big(|\Delta X^{k,d,i}_s|\big)\Big]^2$

$\le2\ell\sum_{s>0}\Big[\sum_{i=1}^\ell|\Delta X^{k,d,i}_s|^4\,\mathbf{1}_{(0,1)}\big(|\Delta X^{k,d,i}_s|\big)+\mathbf{1}_{[1,\infty)}\big(|\Delta X^{k,d,i}_s|\big)\Big]\le2\ell\sum_{s>0}\sum_{i=1}^\ell|\Delta X^{k,d,i}_s|^2=2\ell\,\mathrm{Tr}\big[[X^{k,d}]_\infty\big]\le2\ell\,\mathrm{Tr}\big[[X^k]_\infty\big]$.

Thus, using Lemma III.5.(iv) and Corollary I.29.(ii) again, we have the required result.
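The elementary estimate driving the chain above, $\mathrm{R}_2(x)^2\le2\ell\sum_i|x_i|^2$, can be spot-checked numerically (a throwaway sketch; all names are illustrative):

```python
# Spot-check of the bound R_2(x)^2 <= 2*l*sum_i |x_i|^2 from Lemma III.14.(ii).
# Each summand of R_2 is at most 1 and at most x_i^2, which makes the
# Cauchy-Schwarz-style estimate (sum a_i)^2 <= l * sum a_i^2 <= l * sum a_i work.
import random

def R_2(x):
    return sum(min(abs(xi), 1.0) ** 2 for xi in x)

random.seed(0)
l = 4
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in range(l)]
    assert R_2(x) ** 2 <= 2 * l * sum(xi * xi for xi in x) + 1e-9
```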


Lemma III.15. (i) The sequence $\big(\mathrm{Var}\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_\infty\big)_{k\in\mathbb{N}}$ is tight in $(\mathbb{R},|\cdot|)$.

(ii) The sequence $\big(\sum_{s>0}\Delta\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_s^2\big)_{k\in\mathbb{N}}$ is uniformly integrable.

Proof. (i) We have already observed that $X^{k,\mathrm{R}_2,I}$ is a $\mathbf{G}^k$-submartingale for every $k\in\mathbb{N}$, and consequently $\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$ is non-decreasing for every $k\in\mathbb{N}$; a property which is also immediate since $\mathrm{R}_{2,I}$ is a positive function and $\nu^{(X^{k,d},\mathbf{G}^k)}$ is a (positive) measure. Therefore, it holds that

$\mathrm{Var}\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)=\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}$, for every $k\in\mathbb{N}$.

In view of the above and due to Markov's inequality, it suffices to prove that $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big]<\infty$. Indeed, for every $\varepsilon>0$ it holds for $K:=\frac1\varepsilon\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big]>0$ that

$\sup_{k\in\mathbb{N}}\mathbb{P}\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty>K\big)\le\frac1K\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big]\le\varepsilon$,

which yields the required tightness. Now, observe that we have

$\mathbb{E}\big[\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big]\overset{\text{Prop. I.85}}{=}\mathbb{E}\big[\mathrm{R}_{2,I}\ast\mu^{X^{k,d}}_\infty\big]\overset{(\text{III.32})}{<}\infty$. (III.34)

We have concluded using inequality (III.32), which in turn makes use of Assumption (III.2). Therefore (III.34) yields that $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big]<\infty$.

(ii) Before we proceed, the reader may recall Remark I.80, i.e. that we use a version of $\nu^{(X^{k,d},\mathbf{G}^k)}$ for which $\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathbb{R}^\ell)\le1$ identically, for every $k\in\mathbb{N}$. In view of this property, we can apply the Cauchy–Schwarz inequality in order to obtain the following upper bound:

$\Big(\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)\Big)^2\le\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)\int_{\mathbb{R}^\ell}\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)\le\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)$. (III.35)

Now we have, for every $k\in\mathbb{N}$, that the following holds:

$\sum_{s>0}\Delta\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_s^2=\sum_{s>0}\Big(\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)\Big)^2\overset{(\text{III.35})}{\le}\sum_{s>0}\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\{s\}\times\mathrm{d}x)\le\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)$

$=\int_{(0,\infty)\times I}\Big[\sum_{i=1}^\ell(|x_i|\wedge1)^2\Big]^2\nu^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)\le2\ell\int_{(0,\infty)\times I}\sum_{i=1}^\ell(|x_i|^2\wedge1)\,\nu^{(X^{k,d},\mathbf{G}^k)}(\mathrm{d}s,\mathrm{d}x)=2\ell\,\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty$. (III.36)

Using Lemma III.14.(i) and the de La Vallée Poussin–Meyer criterion, see Corollary I.48, there exists a moderate Young function $\Phi$ such that

$\sup_{k\in\mathbb{N}}\mathbb{E}\big[\Phi\big(X^{k,\mathrm{R}_2,I}_\infty\big)\big]<\infty$.

Then, using that $X^{k,\mathrm{R}_2,I}$ is an increasing process, hence equal to its supremum process, the decomposition of Lemma III.13 and Theorem I.55, we can conclude that

$\sup_{k\in\mathbb{N}}\mathbb{E}\big[\Phi\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big)\big]<\infty$.

By the de La Vallée Poussin–Meyer criterion, the latter condition is equivalent to the uniform integrability of the sequence $\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}_\infty\big)_{k\in\mathbb{N}}$. Then, by (III.36) and Corollary I.29.(ii) we can conclude the uniform integrability of the required sequence.
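The Cauchy–Schwarz step for an atom of total mass at most one, as in (III.35), can be spot-checked on discrete measures (an illustrative sketch with names of our choosing):

```python
# Spot-check of (III.35): for a discrete measure nu with total mass <= 1,
# ( integral of f d(nu) )^2 <= integral of f^2 d(nu).  Names are illustrative.
import random

random.seed(1)
for _ in range(500):
    n = random.randint(1, 6)
    weights = [random.uniform(0, 1) for _ in range(n)]
    total = sum(weights)
    if total > 1.0:                      # normalise so that nu({s} x R^l) <= 1
        weights = [w / total for w in weights]
    f = [random.uniform(0, 5) for _ in range(n)]
    lhs = sum(wi * fi for wi, fi in zip(weights, f)) ** 2
    rhs = sum(wi * fi * fi for wi, fi in zip(weights, f))
    assert lhs <= rhs + 1e-9
```

The inequality is exactly $(\int f\,\mathrm{d}\nu)^2\le(\int f^2\,\mathrm{d}\nu)(\int\mathrm{d}\nu)\le\int f^2\,\mathrm{d}\nu$ when the total mass is at most one.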

Corollary III.16. (i) The sequence $\big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.

(ii) Consequently, it holds that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\in\mathcal{H}^{2,d}(\mathbf{G}^k;\mathbb{R})$, for every $k\in\mathbb{N}$.

(iii) The sequence $\big(\big\langle\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big\rangle_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.

Proof. (i) Using Definition I.96 and the fact that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}$ is a martingale of finite variation, we have

$\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty=\sum_{s>0}\Delta\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_s^2=\sum_{s>0}\big(\Delta(X^{k,\mathrm{R}_2,I})_s-\Delta\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_s\big)^2$

$\le2\sum_{s>0}\Delta(X^{k,\mathrm{R}_2,I})_s^2+2\sum_{s>0}\Delta\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_s^2=2\,[X^{k,\mathrm{R}_2,I}]_\infty+2\sum_{s>0}\Delta\big(\mathrm{R}_{2,I}\ast\nu^{(X^{k,d},\mathbf{G}^k)}\big)_s^2$,

where in the last equality we have used that $X^{k,\mathrm{R}_2,I}$ is a semimartingale whose paths have finite variation, and Definition I.96. In view now of the above inequality, Lemma III.14.(ii), Lemma III.15.(ii) and (ii),(iv) of Corollary I.29, we can conclude the required property. This shows (i).

(ii) In addition, (i) implies the integrability of $\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty$, hence from [41, Proposition I.4.50.c)] we get that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\in\mathcal{H}^{2,d}(\mathbf{G}^k;\mathbb{R})$.

(iii) It is immediate by (i) and Theorem I.55.

Proposition III.17. Let conditions (M1), (M2) and (M4) hold. Then the following convergence holds:

$\big(X^k,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)\xrightarrow{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{L}^2)}\big(X^\infty,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big)$. (III.37)

Proof. As we have already pointed out, we are going to apply Theorem I.159 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$. By Lemma III.13, this is a sequence of $\mathbf{G}^k$-special semimartingales.

In view of (M1), which states that $X^\infty$ is $\mathbf{G}$-quasi-left-continuous, and using Proposition I.87, we get that the compensator $\nu^{(X^{\infty,d},\mathbf{G})}$ associated to $\mu^{X^{\infty,d}}$ is an atomless random measure. Therefore, the finite-variation part of the $\mathbf{G}$-canonical decomposition of $X^{\infty,\mathrm{R}_2,I}$ is a continuous process. Moreover, by Theorem I.16 and (M1), which states that the filtration $\mathbf{G}$ is quasi-left-continuous, it suffices to show that the martingale part of $X^{\infty,\mathrm{R}_2,I}$ is uniformly integrable. The latter holds by Corollary III.16.(ii).

Lemma III.14.(ii) yields that condition (i) of Theorem I.159 holds. Lemma III.15.(i) yields that condition (ii) of the aforementioned theorem also holds. Moreover, from Corollary III.11 for $p=2$, we obtain the convergence

$\big(X^k,X^{k,\mathrm{R}_2,I}\big)\xrightarrow{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{P})}\big(X^\infty,X^{\infty,\mathrm{R}_2,I}\big)$. (III.38)

The last convergence, in conjunction with conditions (M2) and (M4), Remark III.1 and Corollary I.125, is equivalent to the convergence $(X^{k,\mathrm{R}_2,I},\mathbf{G}^k)\xrightarrow{\mathrm{ext}}(X^{\infty,\mathrm{R}_2,I},\mathbf{G})$. Therefore, condition (iii) of Theorem I.159 is also satisfied.

Applying now Theorem I.159 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$, and keeping in mind the decomposition from Lemma III.13, we obtain the convergence

$\big(X^{k,\mathrm{R}_2,I},\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)\xrightarrow{(J_1(\mathbb{R}^2),\,\mathbb{P})}\big(X^{\infty,\mathrm{R}_2,I},\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big)$. (III.39)

Using Corollary I.125, we can combine the convergences (III.38) and (III.39) to obtain

$\big(X^k,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)\xrightarrow{(J_1(\mathbb{R}^\ell\times\mathbb{R}),\,\mathbb{P})}\big(X^\infty,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big)$.

Observe that, in view of the last three convergences and Corollary I.125, we can actually conclude the convergence

$\big(\big(X^k,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big),\mathbf{G}^k\big)\xrightarrow{\mathrm{ext}}\big(\big(X^\infty,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big),\mathbf{G}\big)$, (III.40)

which will be useful for later reference.

The last result can be further strengthened to an $\mathbb{L}^2$-convergence in view of the following arguments. Let $\alpha^k:=\big(X^k,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)$, $k\in\mathbb{N}$, and let $\alpha:=\big(X^\infty,\ \mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbf{G})}\big)$. By Vitali's convergence theorem, see Theorem I.32, the claimed $\mathbb{L}^2$-convergence is equivalent to showing that $\big(d^2_{J_1}(\alpha^k,\alpha)\big)_{k\in\mathbb{N}}$ is uniformly integrable. Moreover, by the inequality

$d^2_{J_1}(\alpha^k,\alpha)\le\big(d_{J_1}(\alpha^k,0)+d_{J_1}(0,\alpha)\big)^2\le2d^2_{J_1}(\alpha^k,0)+2d^2_{J_1}(0,\alpha)\le2\sup_{s>0}\|\alpha^k_s\|^2+2\sup_{s>0}\|\alpha_s\|^2$,

and Corollary I.29.(ii),(iv), it suffices to show that $\big(\sup_{s>0}\|\alpha^k_s\|^2\big)_{k\in\mathbb{N}}$ is uniformly integrable.

By Corollary III.16.(i) we know that the sequence $\big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable. Therefore, using the de La Vallée Poussin–Meyer criterion, there exists a moderate Young function $\phi$ such that

$\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\phi\big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty\big)\Big]<\infty$. (III.41)

Proposition I.51 yields that the map $\mathbb{R}_+\ni x\overset{\psi}{\longmapsto}\psi(x):=\phi(\tfrac12x^2)$ is again a moderate Young function. We can now apply the Burkholder–Davis–Gundy (BDG) inequality, Theorem I.101 (in conjunction with Proposition I.53.(v)), to the sequence of martingales $\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_{k\in\mathbb{N}}$ using the function $\psi$, and we obtain that

$\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\phi\Big(\tfrac12\sup_{s>0}\big|\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_s\big|^2\Big)\Big]=\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\psi\Big(\sup_{s>0}\big|\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_s\big|\Big)\Big]\overset{\mathrm{BDG}}{\le}C_\psi\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\psi\Big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty^{\frac12}\Big)\Big]=C_\psi\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\phi\Big(\tfrac12\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big]_\infty\Big)\Big]\overset{(\text{III.41})}{<}\infty$. (III.42)

Hence the sequence $\big(\sup_{s>0}\big|\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_s\big|^2\big)_{k\in\mathbb{N}}$ is uniformly integrable, again by the de La Vallée Poussin–Meyer criterion. Moreover, $\big(\mathrm{Tr}[[X^k]_\infty]\big)_{k\in\mathbb{N}}$ is a uniformly integrable sequence; this was proven in Lemma III.5.(iv). Using analogous arguments and recalling the above inequality, we can conclude that the sequence $\big(\sup_{s>0}\|X^k_s\|_2^2\big)_{k\in\mathbb{N}}$ is also uniformly integrable. Hence, the family

$\Big(\sup_{s>0}\|X^k_s\|_2^2+\sup_{s>0}\big|\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbf{G}^k)}\big)_s\big|^2\Big)_{k\in\mathbb{N}}$

is uniformly integrable, which allows us to conclude.

Lemma III.18. The sequence $\big(\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ possesses the P-UT property; consequently,
$$\Big(\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}, \big[\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]\Big) \xrightarrow{(\mathrm{J}_1(\mathbb{R}^{2}),\,\mathbb{P})} \Big(\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}, \big[\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}\big]\Big). \tag{III.43}$$

Proof. We will apply Proposition I.153; to this end, let us verify that its requirements are fulfilled. Firstly, recall that $\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}$ is a $\mathbb{G}^k$-martingale, for every $k\in\mathbb{N}$. Then, (III.42) yields that the sequence $\big(\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ is $L^2$-bounded. Finally, (III.39) verifies the convergence
$$\mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)} \xrightarrow{(\mathrm{J}_1(\mathbb{R}),\,\mathbb{P})} \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}. \qquad\square$$

III.4.2. The convergence (III.24) is true. Now that we have made our choice for $h$, the convergence (III.24) reads
$$\Big(X^k, \big[Y^k, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big], \big\langle Y^k, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big\rangle\Big) \xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell}\times\mathbb{R}),\,\mathbb{L}^2\times\mathbb{L}^1)} \Big(X^\infty, \big[Y, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}\big], \big\langle Y, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}\big\rangle\Big). \tag{III.44}$$
We comment once more that (III.44) will be used later in order to apply Proposition III.7. As one would expect, the strategy is very similar to that of the previous subsection. In other words, we will apply Theorem I.159 to the sequences $\big(Y^k + \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ and $\big(Y^k - \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$, and then we will use the polarisation identity to conclude.

Notation III.19. The following notation will be valid only for this subsection.

$\mathrm{R}^{k,2,I} := \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}$, for every $k\in\mathbb{N}$, and $\mathrm{R}^{\infty,2,I} := \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}$.

$S^{k,+} := \big[Y^k + \mathrm{R}^{k,2,I}\big] \in \mathcal{A}^+(\mathbb{G}^k;\mathbb{R})$, for every $k\in\mathbb{N}$, and $S^{\infty,+} := \big[Y + \mathrm{R}^{\infty,2,I}\big]$.

$S^{k,-} := \big[Y^k - \mathrm{R}^{k,2,I}\big] \in \mathcal{A}^+(\mathbb{G}^k;\mathbb{R})$, for every $k\in\mathbb{N}$, and $S^{\infty,-} := \big[Y - \mathrm{R}^{\infty,2,I}\big]$.
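For the reader's convenience, we record the polarisation identities through which the covariations appearing in (III.44) will be recovered from the processes just introduced; they follow from the bilinearity of $[\cdot,\cdot]$ and $\langle\cdot,\cdot\rangle$:

```latex
\big[Y^k, \mathrm{R}^{k,2,I}\big]
  = \tfrac{1}{4}\Big(\big[Y^k + \mathrm{R}^{k,2,I}\big] - \big[Y^k - \mathrm{R}^{k,2,I}\big]\Big)
  = \tfrac{1}{4}\big(S^{k,+} - S^{k,-}\big),
\qquad
\big\langle Y^k, \mathrm{R}^{k,2,I}\big\rangle
  = \tfrac{1}{4}\Big(\big\langle Y^k + \mathrm{R}^{k,2,I}\big\rangle
    - \big\langle Y^k - \mathrm{R}^{k,2,I}\big\rangle\Big).
```

Indeed, $[Y^k \pm \mathrm{R}^{k,2,I}] = [Y^k] \pm 2[Y^k,\mathrm{R}^{k,2,I}] + [\mathrm{R}^{k,2,I}]$, so the difference of the two expansions equals $4[Y^k,\mathrm{R}^{k,2,I}]$.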

The following sequence of lemmata amounts to verifying that the requirements of Theorem I.159 are fulfilled for the sequences $(S^{k,+})_{k\in\mathbb{N}}$ and $(S^{k,-})_{k\in\mathbb{N}}$. Obviously, we will provide the proofs only for the former sequence, since the arguments are exactly the same for the latter.

Lemma III.20. For every $k\in\mathbb{N}$, it holds that $S^{k,+}, S^{k,-} \in \mathcal{S}_{sp}(\mathbb{G}^k;\mathbb{R})$ with canonical decompositions
$$S^{k,+} = M^{k,+} + \langle Y^k + \mathrm{R}^{k,2,I}\rangle \quad\text{and}\quad S^{k,-} = M^{k,-} + \langle Y^k - \mathrm{R}^{k,2,I}\rangle, \tag{III.45}$$
where $M^{k,+}, M^{k,-} \in \mathcal{M}(\mathbb{G}^k;\mathbb{R})$ and $\langle Y^k + \mathrm{R}^{k,2,I}\rangle, \langle Y^k - \mathrm{R}^{k,2,I}\rangle \in \mathcal{A}^+_{\mathrm{pred}}(\mathbb{G}^k;\mathbb{R})$, for every $k\in\mathbb{N}$.

Proof. By Lemma III.4.(v) and Lemma III.14.(i) we have that $Y^k, \mathrm{R}^{k,2,I} \in \mathcal{H}^2(\mathbb{G}^k;\mathbb{R})$ for every $k\in\mathbb{N}$. Therefore, $Y^k + \mathrm{R}^{k,2,I} \in \mathcal{H}^2(\mathbb{G}^k;\mathbb{R})$ as well, for every $k\in\mathbb{N}$. By Proposition I.99 we have that $S^{k,+} \in \mathcal{A}^+(\mathbb{G}^k;\mathbb{R})$, which in virtue of Proposition I.98 implies that $S^{k,+} \in \mathcal{S}_{sp}(\mathbb{G}^k;\mathbb{R})$, for every $k\in\mathbb{N}$. Now, again by Proposition I.99, we obtain the $\mathbb{G}^k$-canonical decomposition
$$S^{k,+} = \big[Y^k + \mathrm{R}^{k,2,I}\big] = M^{k,+} + \langle Y^k + \mathrm{R}^{k,2,I}\rangle,$$
for some uniformly integrable $\mathbb{G}^k$-martingale $M^{k,+}$, where the required properties hold for every $k\in\mathbb{N}$. $\square$

Lemma III.21. The processes $S^{\infty,+}, S^{\infty,-}$ are $\mathbb{G}$-quasi-left-continuous.

Proof. Since $Y, \mathrm{R}^{\infty,2,I} \in \mathcal{H}^2(\mathbb{G};\mathbb{R}) \subset \mathcal{M}(\mathbb{G};\mathbb{R})$, we obtain by Theorem I.16 that the $\mathbb{R}^2$-valued martingale $(Y, \mathrm{R}^{\infty,2,I})$ is $\mathbb{G}$-quasi-left-continuous. Therefore, by Proposition I.19 there exists a sequence $(\tau_m)_{m\in\mathbb{N}}$ of $\mathbb{G}$-totally inaccessible stopping times which exhausts the jumps of $(Y, \mathrm{R}^{\infty,2,I})$. On the other hand, for every $\mathbb{G}$-stopping time $\tau$ it holds
$$\Delta S^{\infty,+}_\tau = \Delta\big[Y + \mathrm{R}^{\infty,2,I}\big]_\tau = \big(\Delta(Y + \mathrm{R}^{\infty,2,I})_\tau\big)^2 = \big(\Delta Y_\tau + \Delta\mathrm{R}^{\infty,2,I}_\tau\big)^2.$$
Therefore, the sequence $(\tau_m)_{m\in\mathbb{N}}$ also exhausts the jumps of $S^{\infty,+}$ (and, by the analogous computation, those of $S^{\infty,-}$), and we can conclude by using Proposition I.19 again. $\square$

Lemma III.22. The sequences $\big([S^{k,+}]^{\frac{1}{2}}_\infty\big)_{k\in\mathbb{N}}$, $\big([S^{k,-}]^{\frac{1}{2}}_\infty\big)_{k\in\mathbb{N}}$ are uniformly integrable.

Proof. We will dominate the sequence of positive random variables $\big([S^{k,+}]^{\frac{1}{2}}_\infty\big)_{k\in\mathbb{N}}$ by a uniformly integrable sequence; recall Corollary I.29.(ii). Observe that, since $S^{k,+} \in \mathcal{A}^+(\mathbb{G}^k;\mathbb{R})$, by Definition I.96 it holds $[S^{k,+}]_\infty = \sum_{s>0}\big(\Delta S^{k,+}_s\big)^2$. Therefore,
$$[S^{k,+}]^{\frac{1}{2}}_\infty = \Big(\sum_{s>0}\big(\Delta S^{k,+}_s\big)^2\Big)^{\frac{1}{2}} = \Big(\sum_{s>0}\big(\Delta(Y^k+\mathrm{R}^{k,2,I})_s\big)^{4}\Big)^{\frac{1}{2}} = \Big(\sum_{s>0}\big(\Delta Y^k_s + \Delta\mathrm{R}^{k,2,I}_s\big)^{4}\Big)^{\frac{1}{2}}$$
$$\overset{8}{\le} \sum_{s>0}\big(\Delta Y^k_s + \Delta\mathrm{R}^{k,2,I}_s\big)^{2} \le 2\sum_{s>0}\big(\Delta Y^k_s\big)^2 + 2\sum_{s>0}\big(\Delta\mathrm{R}^{k,2,I}_s\big)^2 \le 2[Y^k]_\infty + 2\big[\mathrm{R}^{k,2,I}\big]_\infty.$$
By Lemma III.4.(v) we have that $([Y^k]_\infty)_{k\in\mathbb{N}}$ is uniformly integrable, and by Lemma III.14.(i) we recall the uniform integrability of $\big([\mathrm{R}^{k,2,I}]_\infty\big)_{k\in\mathbb{N}}$. Now, we use (i), (iv) and (ii) of Corollary I.29 to conclude the desired uniform integrability. $\square$
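The step marked by footnote 8 above is the $\ell^\vartheta$-norm comparison applied $\omega$-wise with $\vartheta_1 = 2$ and $\vartheta_2 = 4$; spelled out with $a_s := \big|\Delta Y^k_s + \Delta\mathrm{R}^{k,2,I}_s\big|$, it reads:

```latex
\Big(\sum_{s>0} a_s^{4}\Big)^{\frac{1}{2}}
  = \|a\|_{\ell^{4}}^{2}
  \le \|a\|_{\ell^{2}}^{2}
  = \sum_{s>0} a_s^{2}.
```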

Lemma III.23. The sequences $(S^{k,+}_\infty)_{k\in\mathbb{N}}$, $(S^{k,-}_\infty)_{k\in\mathbb{N}}$ are uniformly integrable. As a particular consequence, the sequences $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$, $\big(\langle Y^k-\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$ are uniformly integrable and tight in $\mathbb{R}$.

Proof. We start by proving that the sequence $(S^{k,+}_\infty)_{k\in\mathbb{N}}$ is uniformly integrable. To this end we use that $S^{k,+}$ is increasing in order to dominate the terminal value $S^{k,+}_\infty$ by elements constituting a uniformly integrable family. We have
$$0 \le S^{k,+}_\infty = \big[Y^k+\mathrm{R}^{k,2,I}\big]_\infty = [Y^k]_\infty + 2\big[Y^k,\mathrm{R}^{k,2,I}\big]_\infty + \big[\mathrm{R}^{k,2,I}\big]_\infty \le [Y^k]_\infty + 2\operatorname{Var}\big(\big[Y^k,\mathrm{R}^{k,2,I}\big]\big)_\infty + \big[\mathrm{R}^{k,2,I}\big]_\infty$$
$$\le [Y^k]_\infty + 2[Y^k]_\infty^{\frac{1}{2}}\big[\mathrm{R}^{k,2,I}\big]_\infty^{\frac{1}{2}} + \big[\mathrm{R}^{k,2,I}\big]_\infty \le [Y^k]_\infty + [Y^k]_\infty + \big[\mathrm{R}^{k,2,I}\big]_\infty + \big[\mathrm{R}^{k,2,I}\big]_\infty = 2[Y^k]_\infty + 2\big[\mathrm{R}^{k,2,I}\big]_\infty.$$
In the second inequality we used the Kunita–Watanabe Inequality, see Theorem I.100. In the third inequality we used Young's Inequality, see Lemma I.44.(i), applied to the pair of Young functions $(\mathrm{quad},\mathrm{quad}^\star) = (\mathrm{quad},\mathrm{quad})$, see Example I.46. Now, by Lemma III.4.(v), Lemma III.14.(ii) and Corollary I.29 we conclude.

$^{8}$ We use, $\omega$-wise, the property $\|\cdot\|_{\ell^{\vartheta_2}} \le \|\cdot\|_{\ell^{\vartheta_1}}$ for $\vartheta_1 \le \vartheta_2$, where the norms are those associated to the classical spaces $\ell^{\vartheta}(\mathbb{N}) := \big\{x = (x_k)_{k\in\mathbb{N}} \subset \mathbb{R} : \|x\|_{\vartheta} := \big(\sum_{k\in\mathbb{N}} |x_k|^{\vartheta}\big)^{\frac{1}{\vartheta}} < \infty\big\}$.
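For concreteness, the instance of Young's Inequality invoked in the third inequality above reduces to the following elementary bound, assuming (as the name and the self-conjugacy stated in Example I.46 suggest) that $\mathrm{quad}$ denotes the quadratic Young function $x \mapsto \tfrac{1}{2}x^2$, which coincides with its conjugate:

```latex
ab \le \mathrm{quad}(a) + \mathrm{quad}^\star(b) = \tfrac{1}{2}a^2 + \tfrac{1}{2}b^2,
\quad\text{hence}\quad
2[Y^k]_\infty^{\frac{1}{2}}\big[\mathrm{R}^{k,2,I}\big]_\infty^{\frac{1}{2}}
  \le [Y^k]_\infty + \big[\mathrm{R}^{k,2,I}\big]_\infty,
```

taking $a = [Y^k]_\infty^{\frac{1}{2}}$ and $b = \big[\mathrm{R}^{k,2,I}\big]_\infty^{\frac{1}{2}}$.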

Before we proceed to prove the uniform integrability of the sequence $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$, we denote by $\Phi$ the moderate Young function associated to the uniformly integrable family $(S^{k,+}_\infty)_{k\in\mathbb{N}}$ by the de La Vallée Poussin–Meyer Criterion, see Corollary I.48. Then, it holds $\sup_{k\in\mathbb{N}} \mathbb{E}\big[\Phi(S^{k,+}_\infty)\big] < \infty$. Now, by the canonical decompositions (III.45) and Theorem I.55 we have that
$$\sup_{k\in\mathbb{N}} \mathbb{E}\Big[\Phi\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)\Big] \le (2c_\Phi)^{c_\Phi} \sup_{k\in\mathbb{N}} \mathbb{E}\big[\Phi(S^{k,+}_\infty)\big] < \infty.$$
In other words, $\Phi$ and $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$ satisfy the de La Vallée Poussin–Meyer Criterion; therefore, $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$ is also uniformly integrable. Finally, the sequence $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$ is tight in $\mathbb{R}$ as a direct consequence of Markov's Inequality and the $L^1$-boundedness of $\big(\langle Y^k+\mathrm{R}^{k,2,I}\rangle_\infty\big)_{k\in\mathbb{N}}$; see Theorem I.25. $\square$

Lemma III.24. The following convergences are valid:
$$\big(X^k, S^{k,+}, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, S^{\infty,+}, \mathbb{G}\big) \quad\text{and}\quad \big(X^k, S^{k,-}, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, S^{\infty,-}, \mathbb{G}\big).$$

Proof. We need to prove initially the joint convergence of $(Y^k, \mathrm{R}^{k,2,I})_{k\in\mathbb{N}}$. In view of the already proven convergences (we provide their associated numbering)
$$\big(X^k, Y^k, [Y^k, X^k], \langle Y^k, X^k\rangle\big) \xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell}\times\mathbb{R}\times\mathbb{R}^{\ell}\times\mathbb{R}^{\ell}),\,\mathbb{P})} \big(X^\infty, Y, [Y, X^\infty], \langle Y, X^\infty\rangle\big) \tag{III.6}$$
and
$$\big(X^k, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}, \mathbb{G}\big), \tag{III.40}$$
we can conclude the convergence
$$\big(X^k, Y^k, \mathrm{R}^{k,2,I}, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, Y, \mathrm{R}^{\infty,2,I}, \mathbb{G}\big).$$
The conclusion was drawn by Corollary I.125, because the two convergences share the sequence $(X^k)_{k\in\mathbb{N}}$. The sequences $(Y^k)_{k\in\mathbb{N}}$, $(\mathrm{R}^{k,2,I})_{k\in\mathbb{N}}$ are P-UT by Lemma III.4.(vi) and Lemma III.18, respectively. Then, by Remark I.150, $(Y^k + \mathrm{R}^{k,2,I})_{k\in\mathbb{N}}$ is also P-UT. On the other hand, by Corollary I.122, Theorem I.152 and the last remark we can conclude that
$$\big(X^k, Y^k, \mathrm{R}^{k,2,I}, Y^k+\mathrm{R}^{k,2,I}, S^{k,+}, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, Y, \mathrm{R}^{\infty,2,I}, Y+\mathrm{R}^{\infty,2,I}, S^{\infty,+}, \mathbb{G}\big), \tag{III.46}$$
which implies the required convergence. $\square$

Proposition III.25. The following convergence is valid:
$$\Big(X^k, \big[Y^k, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big], \big\langle Y^k, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big\rangle\Big) \xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell}\times\mathbb{R}),\,\mathbb{L}^2\times\mathbb{L}^1)} \Big(X^\infty, \big[Y, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}\big], \big\langle Y, \mathrm{R}^{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G})}\big\rangle\Big). \tag{III.44}$$

Proof. In view of Lemma III.20 – Lemma III.24, we can apply Theorem I.159 to the sequences $(S^{k,+})_{k\in\mathbb{N}}$ and $(S^{k,-})_{k\in\mathbb{N}}$. Therefore,
$$\big(X^k, [Y^k+\mathrm{R}^{k,2,I}], \langle Y^k+\mathrm{R}^{k,2,I}\rangle, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, [Y+\mathrm{R}^{\infty,2,I}], \langle Y+\mathrm{R}^{\infty,2,I}\rangle, \mathbb{G}\big)$$
and
$$\big(X^k, [Y^k-\mathrm{R}^{k,2,I}], \langle Y^k-\mathrm{R}^{k,2,I}\rangle, \mathbb{G}^k\big) \xrightarrow{\text{ext}} \big(X^\infty, [Y-\mathrm{R}^{\infty,2,I}], \langle Y-\mathrm{R}^{\infty,2,I}\rangle, \mathbb{G}\big);$$
we are allowed to include $(X^k)_{k\in\mathbb{N}}$ in the joint convergences because of Lemma III.24. Now, because of the common convergent sequence $(X^k)_{k\in\mathbb{N}}$, we can obtain the joint convergence of the two last convergences, i.e.
$$\big(X^k, [Y^k+\mathrm{R}^{k,2,I}], \langle Y^k+\mathrm{R}^{k,2,I}\rangle, [Y^k-\mathrm{R}^{k,2,I}], \langle Y^k-\mathrm{R}^{k,2,I}\rangle, \mathbb{G}^k\big)$$
$$\xrightarrow{\text{ext}} \big(X^\infty, [Y+\mathrm{R}^{\infty,2,I}], \langle Y+\mathrm{R}^{\infty,2,I}\rangle, [Y-\mathrm{R}^{\infty,2,I}], \langle Y-\mathrm{R}^{\infty,2,I}\rangle, \mathbb{G}\big).$$
Finally, we conclude the convergence (III.44) by Corollary I.122, the polarisation identity and the integrability properties provided by Lemma III.23. $\square$