Center for Mathematical Economics
Working Papers 606
December 2018

Optimal Stopping under G-expectation

Hanwu Li

Center for Mathematical Economics (IMW), Bielefeld University
Universitätsstraße 25, D-33615 Bielefeld, Germany
e-mail: imw@uni-bielefeld.de


Abstract

We develop a theory of optimal stopping problems under the G-expectation framework. We first define a new kind of random times, called G-stopping times, which is suitable for this problem. For the discrete time case with finite horizon, the value function is defined backwardly, and we show that it is the smallest G-supermartingale dominating the payoff process and that an optimal stopping time exists. We then extend this result both to the infinite horizon and to the continuous time case. We also establish the relation between the value function and the solution of the reflected BSDE driven by G-Brownian motion.

Key words: optimal stopping, G-expectation, G-stopping time, Knightian uncertainty
MSC-classification: 60H10, 60H30

1 Introduction

Consider a filtered probability space $(\Omega,\mathcal{F},P,\mathbb{F}=\{\mathcal{F}_t\}_{t\in[0,T]})$. The objective of the optimal stopping problem is to find a stopping time $\tau$ that maximizes the expectation of $X_\tau$ over all stopping times. Here $X$ is a given progressively measurable and integrable process, called the payoff process.

In financial markets, $X$ can be regarded as the gain process of an option. An agent has the right to exercise the option at any time $t$ and then receive the reward $X_t$, or to wait in the hope of obtaining a bigger reward by stopping in the future. This problem has wide applications in finance and economics, such as the pricing of American contingent claims and the decision of a firm to enter a new market.

Note that the above examples carry an implicit hypothesis: the agent knows the probability distribution of the payoff process. This assumption is restrictive, as it excludes the case where the agent faces Knightian uncertainty. In this paper, we investigate the optimal stopping problem under Knightian uncertainty, especially volatility uncertainty.

Optimal stopping under Knightian uncertainty has attracted a great deal of attention due to its importance both in theory and in applications. We refer to the papers [1], [2], [3], [4], [12], [16].

Roughly speaking, Cheng and Riedel [3] and Riedel [16] considered the optimal stopping problem in a multiple-priors framework, which turns the linear expectation into a nonlinear one. Bayraktar and Yao [1, 2] studied this problem under what they called filtration-consistent nonlinear expectations. In these papers, assumptions are placed either on the multiple priors $\mathcal{P}$ or on the nonlinear expectation $\mathcal{E}$ to make sure that the associated conditional expectation is time consistent and the optional sampling theorem still holds. Similar to the classical case, the value function is an $\mathcal{E}$-supermartingale dominating the payoff process $X$; besides, the first time $\tau$ at which the value function hits $X$ is optimal, and the value function is an $\mathcal{E}$-martingale up to time $\tau$. However, in the above papers all probability measures in $\mathcal{P}$ are equivalent to a reference measure $P$, so these models can only represent drift uncertainty. Volatility uncertainty requires $\mathcal{P}$ to be a family of non-dominated probability measures, which makes the situation much more complicated. Ekren, Touzi and Zhang [4] and Nutz and Zhang [12] investigated the optimal stopping problem under a non-dominated family of probability measures. In fact, [4] studied the problem $\sup_\tau\sup_P E_P[X_\tau]$, which can be seen as a control problem, while [12] considered the problem $\inf_\tau\sup_P E_P[X_\tau]$, which can be regarded as a game problem. It is worth pointing out that in their papers the value function is defined pathwise. They also obtained the optimality of the first hitting time $\tau$ and the nonlinear martingale property of the value function stopped at $\tau$.

According to the papers listed above, it is clear that some nonlinear expectation is needed to study the optimal stopping problem under Knightian uncertainty. Recently, Peng systematically established a time-consistent nonlinear expectation theory, called G-expectation theory (see [13, 14]). As counterparts of the classical notions, G-Brownian motion, G-martingales and G-Itô integrals were also introduced. A basic mathematical tool for the analysis is the backward stochastic differential equation driven by G-Brownian motion (G-BSDE) studied by Hu, Ji, Peng and Song. In [6, 7], they proved the existence and uniqueness of solutions to G-BSDEs and established the comparison theorem, Girsanov transformation and Feynman-Kac formula. G-expectation theory has proved a useful tool for developing a theory of financial markets under volatility uncertainty. Therefore, the objective of this paper is to study the optimal stopping problem under the G-expectation framework.

In the classical case, the value function is usually defined as an essential supremum over a set of random variables. However, in the G-framework the essential supremum has to be defined in the quasi-sure sense and thus may fail to exist. Besides, the random variables in the G-framework require some continuity and monotonicity properties: for a random time $\tau$ and a process $X$, the stopped variable $X_\tau$ may not belong to a space on which the conditional G-expectation is well defined. Due to these difficulties, optimal stopping under G-expectation is far from being understood.

In this paper, we first deal with optimal stopping problems under G-expectation in the discrete time case, both with finite and with infinite time horizon. We restrict ourselves to a new kind of random time, called a G-stopping time. The advantage is that the definition of the conditional G-expectation can be extended to a process $X$ stopped at a G-stopping time $\tau$; besides, on this larger space many important properties, such as time consistency, still hold. For the finite time case, we define the value function $V$ backwardly. It is not difficult to check that the value function is the minimal G-supermartingale dominating the payoff process $X$. Although we can only deal with a special kind of stopping times, we do not lose too much information, since the first hitting time is an optimal G-stopping time. We then extend the theory to the infinite horizon case, where backward induction cannot be applied directly; the value function is defined as the limit of the finite-horizon ones. We show that it is still the minimal G-supermartingale dominating the payoff process and that it satisfies recursive equations similar to those of the finite time case. Recall that Li et al. [11] studied reflected BSDEs driven by G-Brownian motion, whose solutions are required to stay above an obstacle process. In fact, the solution of a reflected G-BSDE is the minimal nonlinear supermartingale dominating the obstacle process. We show that it coincides with the value function of the optimal stopping problem in the continuous time case when the payoff process $X$ equals the obstacle process.

From the mathematical point of view, the optimal stopping problem introduced in [4] is the closest to ours. Compared with their results, the advantages of considering this problem under G-expectation lie in the following aspects. First, we do not need to assume boundedness of the payoff process, and we can study the problem over an infinite time horizon. For the continuous time case, it can be shown that the value function is the limit of the discrete-time ones, which is useful for obtaining numerical approximations. Besides, similar to the result in [3], we can establish the relation between the value function defined by the Snell envelope and the solution of a reflected BSDE driven by G-Brownian motion. At last, the case of a Markovian payoff process can also be treated, and results similar to the classical case still hold.

This paper is organized as follows. In Section 2, we introduce G-stopping times and the essential supremum in the quasi-sure sense. Section 3 is devoted to the study of optimal stopping problems under G-expectation, both with finite and with infinite horizon. We then extend the results to the continuous time case and show that the value function of the optimal stopping problem corresponds to the solution of a reflected G-BSDE in Section 4. In Section 5, we present some results on optimal stopping when the payoff process is Markovian.

2 G-stopping times and the essential supremum in the quasi-sure sense

In this section, we introduce the essential supremum in the quasi-sure sense and a new kind of random time, called a G-stopping time, appropriate for the study of optimal stopping under G-expectation. We then investigate some properties of the extended (conditional) G-expectation. Basic notions and results of G-expectation theory can be found in the Appendix.

Let $(\Omega, L^1_G(\Omega), \hat{\mathbb{E}})$ be the G-expectation space and let $\mathcal{P}$ be the weakly compact set of probability measures that represents $\hat{\mathbb{E}}$. The following notation will be used frequently in this paper:
$$\begin{aligned}
\mathbb{L}^0(\Omega) &:= \{X : \Omega\to[-\infty,\infty] \mid X \text{ is } \mathcal{B}(\Omega)\text{-measurable}\},\\
\mathcal{L}(\Omega) &:= \{X\in\mathbb{L}^0(\Omega) : E_P[X] \text{ exists for each } P\in\mathcal{P}\},\\
\mathbb{L}^p(\Omega) &:= \{X\in\mathbb{L}^0(\Omega) : \hat{\mathbb{E}}[|X|^p]<\infty\},\quad p\ge 1,\\
L^{1,\downarrow}_G(\Omega) &:= \{X\in\mathbb{L}^1(\Omega) : \exists\{X_n\}\subset L^1_G(\Omega) \text{ such that } X_n\downarrow X \text{ q.s.}\},\\
L^{*1}_G(\Omega) &:= \{X-Y : X,Y\in L^{1,\downarrow}_G(\Omega)\},\\
L^{1,\uparrow}_G(\Omega) &:= \{X\in\mathbb{L}^1(\Omega) : \exists\{X_n\}\subset L^{1,\downarrow}_G(\Omega) \text{ such that } X_n\uparrow X \text{ q.s.}\},\\
\bar{L}^1_G(\Omega) &:= \{X\in\mathbb{L}^1(\Omega) : \exists\{X_n\}\subset L^{1,\uparrow}_G(\Omega) \text{ such that } \hat{\mathbb{E}}[|X_n-X|]\to 0\}.
\end{aligned}$$
(The arrows indicate whether elements arise as decreasing or increasing quasi-sure limits.)

Remark 2.1 It is easy to check that $L^{1,\downarrow}_G(\Omega)\subset L^{*1}_G(\Omega)\subset L^{1,\uparrow}_G(\Omega)\subset\bar{L}^1_G(\Omega)$. Furthermore, it is important to note that $L^{1,\downarrow}_G(\Omega)$, $L^{1,\uparrow}_G(\Omega)$ and $\bar{L}^1_G(\Omega)$ are not linear spaces, whereas $L^{*1}_G(\Omega)$ is a linear space.

Set $\Omega_t := \{\omega_{\cdot\wedge t} : \omega\in\Omega\}$ for $t>0$. Similarly, we can define $\mathbb{L}^0(\Omega_t)$, $\mathcal{L}(\Omega_t)$, $\mathbb{L}^p(\Omega_t)$, $L^{1,\downarrow}_G(\Omega_t)$, $L^{1,\uparrow}_G(\Omega_t)$ and $\bar{L}^1_G(\Omega_t)$, respectively. We now give the definition of stopping times under the G-expectation framework.

Definition 2.2 A random time $\tau : \Omega\to[0,\infty)$ is called a G-stopping time if $I_{\{\tau\le t\}}\in L^{1,\downarrow}_G(\Omega_t)$ for each $t\ge 0$.
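The membership requirement is less restrictive than it may look. For instance (a standard construction, stated here for an arbitrary quasi-continuous $X\in L^1_G(\Omega_t)$; the cutoff functions $\varphi_k$ are our own illustrative choice), the indicator of a level set arises as a decreasing limit of bounded Lipschitz functions of $X$:
$$\varphi_k(x) := (1-kx)^+,\qquad \varphi_k(|X|)\in L^1_G(\Omega_t),\qquad \varphi_k(|X|)\downarrow I_{\{X=0\}}\ \text{q.s. as } k\to\infty,$$
so that $I_{\{X=0\}}\in L^{1,\downarrow}_G(\Omega_t)$. This is the mechanism by which first hitting times of the form $\inf\{l : V_l = X_l\}$ become G-stopping times in Section 3.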

Let $\mathcal{H}\subset\mathbb{L}^1(\Omega)$ be a set of random variables. We give the definition of the essential supremum of $\mathcal{H}$ in the quasi-sure sense. Roughly speaking, we only need to replace "almost surely" in the classical definition by "quasi-surely".

Definition 2.3 The essential supremum of $\mathcal{H}$, denoted by $\operatorname*{ess\,sup}_{\xi\in\mathcal{H}}\xi$, is a random variable in $\mathbb{L}^0(\Omega)$ such that:

(i) for any $\xi\in\mathcal{H}$, $\operatorname*{ess\,sup}_{\xi\in\mathcal{H}}\xi\ge\xi$ q.s.;

(ii) if there exists another random variable $\eta_0\in\mathcal{L}(\Omega)$ such that $\eta_0\ge\xi$ q.s. for any $\xi\in\mathcal{H}$, then $\operatorname*{ess\,sup}_{\xi\in\mathcal{H}}\xi\le\eta_0$ q.s.

Remark 2.4 It remains open whether the essential supremum in the quasi-sure sense exists in the general case. However, if it does exist, it must be unique.


Remark 2.5 Consider a probability space $(\Omega,\mathcal{F},P)$ and a set of random variables $\Phi$. Set
$$C := \sup\Big\{E_P\big[\sup_{\phi\in\tilde\Phi}\phi\big] : \tilde\Phi \text{ is a countable subset of } \Phi\Big\}.$$
There exists a countable set $\Phi^* := \{\phi_n, n\in\mathbb{N}\}$ contained in $\Phi$ such that $C = E_P[\sup_{n\in\mathbb{N}}\phi_n]$. Then $\eta := \sup_{n\in\mathbb{N}}\phi_n$ is the essential supremum of $\Phi$ under $P$. However, this construction breaks down in the quasi-sure sense if we simply replace the expectation $E_P$ by the G-expectation $\hat{\mathbb{E}}$. Consider the following example.

Let $1 = \underline{\sigma}^2 < \bar{\sigma}^2 = 2$ and consider $\mathcal{H} = \{f_\alpha(\langle B\rangle_1), \alpha\in[1,2]\}$, where $f_\alpha(x) = (x-1)^\alpha+1$. We can calculate that, for any $\alpha$,
$$\hat{\mathbb{E}}[f_\alpha(\langle B\rangle_1)] = \sup_{x\in[1,2]}f_\alpha(x) = 2.$$
Now choose the countable subset $\tilde{\mathcal{H}} := \{f_\alpha(\langle B\rangle_1), \alpha\in\mathbb{Q}\cap[\frac{3}{2},2]\}$. It is easy to check that
$$\sup_{\xi\in\tilde{\mathcal{H}}}\hat{\mathbb{E}}[\xi] = \sup\Big\{\sup_{\xi\in\mathcal{H}'}\hat{\mathbb{E}}[\xi] : \mathcal{H}' \text{ is a countable subset of } \mathcal{H}\Big\}.$$
However, $\sup_{\xi\in\tilde{\mathcal{H}}}\xi = f_{3/2}(\langle B\rangle_1)\le f_1(\langle B\rangle_1)$, with strict inequality on $\{\langle B\rangle_1\in(1,2)\}$, so it is not the essential supremum of $\mathcal{H}$.
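The example can be checked numerically. The sketch below is illustrative only: it uses the fact that, for continuous $f$, $\hat{\mathbb{E}}[f(\langle B\rangle_1)]$ reduces to a maximum of $f$ over the constant-volatility values $x\in[\underline\sigma^2,\bar\sigma^2]$; the function names and the grid search are our own assumptions for this toy computation.

```python
# Numerical check of Remark 2.5 (illustrative sketch).
# For continuous f, E_hat[f(<B>_1)] = max of f over [sigma_lo^2, sigma_hi^2],
# since constant-volatility priors make <B>_1 equal to any such value.
SIG2_LO, SIG2_HI = 1.0, 2.0

def f(alpha, x):
    return (x - 1.0) ** alpha + 1.0

def g_expectation(alpha, grid=10**5):
    xs = (SIG2_LO + (SIG2_HI - SIG2_LO) * i / grid for i in range(grid + 1))
    return max(f(alpha, x) for x in xs)

# every member of the family has G-expectation 2 ...
print([round(g_expectation(a), 6) for a in (1.0, 1.5, 1.7, 2.0)])

# ... but the pointwise sup of the countable subfamily {alpha >= 3/2} is
# f_{3/2}, which stays strictly below f_1 inside (1, 2):
x = 1.5
print(max(f(a, x) for a in (1.5, 1.75, 2.0)), "<", f(1.0, x))
```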

Remark 2.6 In the classical case, the essential supremum can be constructed from countably many random variables; this is no longer true in the quasi-sure sense, as the following example shows. Let $1 = \underline{\sigma}^2 < \bar{\sigma}^2 = 2$ and consider $\mathcal{H} = \{I_{\{\langle B\rangle_1 = x\}}, x\in[1,2]\}$. Suppose there exists $\tilde{\mathcal{H}} = \{I_{\{\langle B\rangle_1 = x_n\}}, x_n\in[1,2], n\in\mathbb{N}\}$ such that
$$\operatorname*{ess\,sup}_{\xi\in\mathcal{H}}\xi = \sup_{n\in\mathbb{N}}I_{\{\langle B\rangle_1 = x_n\}}.$$
Then there exists a constant $x_0\in[1,2]$ such that $x_0\ne x_n$ for any $n\in\mathbb{N}$. We have
$$c\Big(\sup_{n\in\mathbb{N}}I_{\{\langle B\rangle_1 = x_n\}} < I_{\{\langle B\rangle_1 = x_0\}}\Big) = c(\langle B\rangle_1 = x_0) = 1,$$
which is a contradiction.

We now list some typical situations in which the essential supremum exists.

Proposition 2.7 If $\mathcal{H}$ contains only countably many random variables, then the essential supremum exists.

Proof. Without loss of generality, we may assume $\mathcal{H} = \{\xi_n, n\in\mathbb{N}\}$. Define
$$\eta(\omega) := \sup_{n\in\mathbb{N}}\xi_n(\omega).$$
We claim that $\eta$ is the essential supremum of $\mathcal{H}$. Property (i) is easy to check; we now show that (ii) holds. If $\eta_0$ is a random variable in $\mathcal{L}(\Omega)$ such that $\eta_0\ge\xi_n$ q.s. for any $n\in\mathbb{N}$, then $c(\eta_0<\xi_n) = 0$ for any $n\in\mathbb{N}$. Note that
$$c(\eta_0<\eta) = c(\{\omega : \exists n \text{ such that } \eta_0(\omega)<\xi_n(\omega)\})\le\sum_{n=1}^\infty c(\eta_0<\xi_n) = 0,$$
which completes the proof.


Definition 2.8 A set $\tilde{\mathcal{H}}$ is said to be dense in $\mathcal{H}$ if, for any $\xi\in\mathcal{H}$, there exists a sequence $\{\xi_n, n\in\mathbb{N}\}\subset\tilde{\mathcal{H}}$ such that $\hat{\mathbb{E}}[|\xi_n-\xi|]\to 0$ as $n\to\infty$.

Proposition 2.9 If $\mathcal{H}$ has a countable dense subset, then the essential supremum of $\mathcal{H}$ exists.

Proof. Let $\tilde{\mathcal{H}} := \{\xi_m, m\in\mathbb{N}\}$ be a countable dense subset of $\mathcal{H}$ and denote
$$\eta(\omega) := \sup_{m\in\mathbb{N}}\xi_m(\omega).$$
We claim that $\eta$ is the essential supremum of $\mathcal{H}$. It is sufficient to prove that $\eta\ge\xi$ q.s. for any $\xi\in\mathcal{H}$. For any $\xi\in\mathcal{H}$, there exists a sequence $\{\hat\xi_n, n\in\mathbb{N}\}\subset\tilde{\mathcal{H}}$ such that $\hat{\mathbb{E}}[|\hat\xi_n-\xi|]\to 0$ as $n\to\infty$. By Proposition 1.17 of Chapter VI in [15], there exists a subsequence $\{\hat\xi_{n_k}\}_{k=1}^\infty$ such that
$$\xi = \lim_{k\to\infty}\hat\xi_{n_k},\quad\text{q.s.}$$
Since $\hat\xi_{n_k}\le\eta$ for any $k$, we have $\xi\le\eta$ q.s.

Remark 2.10 Consider the example in Remark 2.5. Set $\tilde{\mathcal{H}} := \{f_\alpha(\langle B\rangle_1), \alpha\in[1,2]\cap\mathbb{Q}\}$. It is easy to check that $\tilde{\mathcal{H}}$ is a countable dense subset of $\mathcal{H}$. Then $\eta := \sup_{\xi\in\tilde{\mathcal{H}}}\xi = f_1(\langle B\rangle_1)$ is the essential supremum of $\mathcal{H}$.

Proposition 2.11 Assume that $\mathcal{H}\subset L^1_G(\Omega)$ is upwards directed and
$$\sup_{\xi\in\mathcal{H}}\hat{\mathbb{E}}[\xi] = \sup_{\xi\in\mathcal{H}}\big(-\hat{\mathbb{E}}[-\xi]\big).$$
Then the essential supremum of $\mathcal{H}$ exists.

Proof. Since the family $\mathcal{H}$ is upwards directed, there exist two increasing sequences $\{\xi^i_n, n\in\mathbb{N}\}\subset\mathcal{H}$, $i=1,2$, such that
$$\lim_{n\to\infty}\hat{\mathbb{E}}[\xi^1_n] = \sup_{\xi\in\mathcal{H}}\hat{\mathbb{E}}[\xi],\qquad \lim_{n\to\infty}\big(-\hat{\mathbb{E}}[-\xi^2_n]\big) = \sup_{\xi\in\mathcal{H}}\big(-\hat{\mathbb{E}}[-\xi]\big). \tag{2.1}$$
We claim that $\eta := \sup_n\eta_n$ is the essential supremum of $\mathcal{H}$, where $\eta_n = \xi^1_n\vee\xi^2_n$. Obviously, the second requirement in Definition 2.3 holds; we now prove the first one. It is easy to check that
$$\sup_{\xi\in\mathcal{H}}\hat{\mathbb{E}}[\xi]\ge\hat{\mathbb{E}}[\eta_n]\ge\hat{\mathbb{E}}[\xi^1_n],\qquad \sup_{\xi\in\mathcal{H}}\big(-\hat{\mathbb{E}}[-\xi]\big)\ge-\hat{\mathbb{E}}[-\eta_n]\ge-\hat{\mathbb{E}}[-\xi^2_n].$$
Letting $n\to\infty$, it follows that
$$\sup_{\xi\in\mathcal{H}}\hat{\mathbb{E}}[\xi] = \lim_{n\to\infty}\hat{\mathbb{E}}[\eta_n] = \lim_{n\to\infty}\big(-\hat{\mathbb{E}}[-\eta_n]\big) = \sup_{\xi\in\mathcal{H}}\big(-\hat{\mathbb{E}}[-\xi]\big).$$
Applying Proposition 28 (7) in [8], we have
$$\hat{\mathbb{E}}[\eta] = \lim_{n\to\infty}\hat{\mathbb{E}}[\eta_n],\qquad -\hat{\mathbb{E}}[-\eta] = \lim_{n\to\infty}\big(-\hat{\mathbb{E}}[-\eta_n]\big),$$
which implies that $\eta$ has no mean uncertainty. For any $\xi\in\mathcal{H}$, using the monotone convergence theorem, we get
$$\hat{\mathbb{E}}[\eta\vee\xi] = \lim_{n\to\infty}\hat{\mathbb{E}}[\eta_n\vee\xi]\le\sup_{\xi\in\mathcal{H}}\hat{\mathbb{E}}[\xi].$$
Then we conclude that
$$0\le\hat{\mathbb{E}}[\xi\vee\eta-\eta] = \hat{\mathbb{E}}[\xi\vee\eta]-\hat{\mathbb{E}}[\eta]\le 0,$$
which indicates that $\xi\vee\eta-\eta = 0$ q.s. The proof is complete.

In the following, we list some properties of the extended (conditional) G-expectation. It is natural to extend the definition of the G-expectation $\hat{\mathbb{E}}$ to the space $\mathcal{L}(\Omega)$, still denoted by $\hat{\mathbb{E}}$. For each $X\in\mathcal{L}(\Omega)$, the extended G-expectation has the representation
$$\hat{\mathbb{E}}[X] = \sup_{P\in\mathcal{P}}E_P[X].$$

Lemma 2.12 Let $\{X_n, n\in\mathbb{N}\}\subset\mathcal{L}(\Omega)$. Suppose that there exists a random variable $Y\in\mathcal{L}(\Omega)$ with $-\hat{\mathbb{E}}[-Y]>-\infty$ such that $X_n\ge Y$ q.s. for any $n\ge 1$. Then $\liminf_{n\to\infty}X_n\in\mathcal{L}(\Omega)$ and
$$\hat{\mathbb{E}}\big[\liminf_{n\to\infty}X_n\big]\le\liminf_{n\to\infty}\hat{\mathbb{E}}[X_n].$$

Proof. By the classical monotone convergence theorem and Fatou's lemma, for each $P\in\mathcal{P}$, $E_P[\liminf_{n\to\infty}X_n]$ exists and
$$E_P\big[\liminf_{n\to\infty}X_n\big]\le\liminf_{n\to\infty}E_P[X_n]\le\liminf_{n\to\infty}\hat{\mathbb{E}}[X_n].$$
Taking the supremum over all $P\in\mathcal{P}$, we get the desired result.

Remark 2.13 It is worth pointing out that the Fatou lemma of the "lim sup" type does not hold under G-expectation. For example, let $0<\underline\sigma^2<\bar\sigma^2 = 1$ and consider the sequence $\{X_n, n\in\mathbb{N}\}$, where $X_n = I_{\{\langle B\rangle_1\in(1-\frac1n,1)\}}$. It is easy to check that $X_n\to 0$ q.s. and $\hat{\mathbb{E}}[X_n] = 1$ for any $n\in\mathbb{N}$. Therefore, we have
$$0 = \hat{\mathbb{E}}\big[\limsup_{n\to\infty}X_n\big]<\limsup_{n\to\infty}\hat{\mathbb{E}}[X_n] = 1.$$

In fact, for any given $X\in\bar L^1_G(\Omega)$, the supremum of the expectations over all $P\in\mathcal{P}$ is attained.

Proposition 2.14 For any $X\in\bar L^1_G(\Omega)$, there exists some $P^*\in\mathcal{P}$ such that $\hat{\mathbb{E}}[X] = E_{P^*}[X]$.

Proof. We first claim that, for any $X\in L^{1,\uparrow}_G(\Omega)$, if $P_n\to P$ weakly, then we have
$$\limsup_{n\to\infty}E_{P_n}[X]\le E_P[X]. \tag{2.2}$$
We first prove Equation (2.2) for $X\in L^{1,\downarrow}_G(\Omega)$. Note that there exists a sequence of random variables $\{X_m\}\subset L^1_G(\Omega)$ such that $X_m\downarrow X$ q.s. Then for any $m\in\mathbb{N}$ we have $E_{P_n}[X]\le E_{P_n}[X_m]$. Applying Lemma 1.29 of Chapter VI in [15], it follows that
$$\limsup_{n\to\infty}E_{P_n}[X]\le\limsup_{n\to\infty}E_{P_n}[X_m] = E_P[X_m].$$
Letting $m\to\infty$, by the monotone convergence theorem, Equation (2.2) holds. Now for any $X\in L^{1,\uparrow}_G(\Omega)$, there exists some $\{X_m\}\subset L^{1,\downarrow}_G(\Omega)$ such that $X_m\uparrow X$ q.s. By the monotone convergence theorem, it is easy to check that
$$\limsup_{n\to\infty}E_{P_n}[X] = \limsup_{n\to\infty}\liminf_{m\to\infty}E_{P_n}[X_m]\le\liminf_{m\to\infty}\limsup_{n\to\infty}E_{P_n}[X_m]\le\liminf_{m\to\infty}E_P[X_m] = E_P[X].$$
Now for any $X\in\bar L^1_G(\Omega)$, there exists a sequence of random variables $\{X_m\}\subset L^{1,\uparrow}_G(\Omega)$ such that
$$\lim_{m\to\infty}\sup_{P\in\mathcal{P}}E_P[|X_m-X|] = \lim_{m\to\infty}\hat{\mathbb{E}}[|X_m-X|] = 0.$$
Then for any $\varepsilon>0$, there exists some $M$, independent of $P$, such that for any $m\ge M$, $E_P[X]\le E_P[X_m]+\varepsilon$. By the definition of the extended G-expectation, we can choose a sequence of probability measures $\{P_n\}$ such that
$$\hat{\mathbb{E}}[X] = \lim_{n\to\infty}E_{P_n}[X].$$
Noting that $\mathcal{P}$ is weakly compact, without loss of generality we may assume that $P_n$ converges weakly to some $P^*\in\mathcal{P}$. We can calculate that, for $m\ge M$,
$$\hat{\mathbb{E}}[X] = \lim_{n\to\infty}E_{P_n}[X]\le\limsup_{n\to\infty}E_{P_n}[X_m]+\varepsilon\le E_{P^*}[X_m]+\varepsilon\to E_{P^*}[X]+\varepsilon,\quad\text{as } m\to\infty.$$
Since $\varepsilon$ can be arbitrarily small, we get the desired result.

Now we extend the definition of the conditional G-expectation. For this purpose, we need the following lemma, which generalizes Lemma 2.4 in [6].

Lemma 2.15 For each $\xi,\eta\in L^{*1}_G(\Omega)$ and $A\in\mathcal{B}(\Omega_t)$, if $\xi I_A\ge\eta I_A$ q.s., then
$$\hat{\mathbb{E}}_t[\xi]I_A\ge\hat{\mathbb{E}}_t[\eta]I_A\quad\text{q.s.}$$

Proof. Otherwise, we may choose a compact set $K\subset A$ with $c(K)>0$ such that $\hat{\mathbb{E}}_t[\eta]-\hat{\mathbb{E}}_t[\xi]>0$ on $K$. Since $K$ is compact, there exists a sequence of nonnegative functions $\{\zeta_l\}_{l=1}^\infty\subset C_b(\Omega_t)$ such that $\zeta_l\downarrow I_K$, which implies that $I_K\in L^{1,\downarrow}_G(\Omega_t)$. Since $\xi,\eta\in L^{*1}_G(\Omega)$, there exist $\xi^i,\eta^i\in L^{1,\downarrow}_G(\Omega)$ and $\{\xi^i_n\}_{n=1}^\infty,\{\eta^i_n\}_{n=1}^\infty\subset L^1_G(\Omega)$, $i=1,2$, such that $\xi^i_n\downarrow\xi^i$, $\eta^i_n\downarrow\eta^i$ and
$$\xi = \xi^1-\xi^2,\qquad\eta = \eta^1-\eta^2.$$
Set $X_n = \eta^1_n+\xi^2_n$ and $Y_n = \xi^1_n+\eta^2_n$. Then $\{X_n\}_{n=1}^\infty,\{Y_n\}_{n=1}^\infty\subset L^1_G(\Omega)$ and both sequences are decreasing in $n$. Denoting by $X,Y$ their respective limits, it is easy to check that $X,Y\in L^{1,\downarrow}_G(\Omega)$ and $\eta-\xi = X-Y$. For each fixed $l,m,n\in\mathbb{N}$, we have
$$\hat{\mathbb{E}}[\zeta_l(X_n-Y_m)]\downarrow\hat{\mathbb{E}}[I_K(X_n-Y_m)],\quad\text{as } l\to\infty,$$
and
$$\hat{\mathbb{E}}\big[\zeta_l\hat{\mathbb{E}}_t[X_n-Y_m]\big]\downarrow\hat{\mathbb{E}}\big[I_K\hat{\mathbb{E}}_t[X_n-Y_m]\big],\quad\text{as } l\to\infty.$$
Noting that
$$\hat{\mathbb{E}}[\zeta_l(X_n-Y_m)] = \hat{\mathbb{E}}\big[\zeta_l\hat{\mathbb{E}}_t[X_n-Y_m]\big],$$
it follows that
$$\hat{\mathbb{E}}[I_K(X_n-Y_m)] = \hat{\mathbb{E}}\big[I_K\hat{\mathbb{E}}_t[X_n-Y_m]\big].$$
For each fixed $m,n\in\mathbb{N}$, we have $I_K(X_n-Y_m)\in L^{1,\downarrow}_G(\Omega)$. First letting $n\to\infty$, we obtain $I_K(X_n-Y_m)\downarrow I_K(X-Y_m)$ and $I_K(X-Y_m)\in L^{1,\downarrow}_G(\Omega)$. Then letting $m\to\infty$, we obtain $I_K(X-Y_m)\uparrow I_K(X-Y)$ and $I_K(X-Y)\in L^{1,\uparrow}_G(\Omega)$. Therefore, we can calculate that
$$\lim_{m\to\infty}\lim_{n\to\infty}\hat{\mathbb{E}}[I_K(X_n-Y_m)] = \hat{\mathbb{E}}[I_K(X-Y)] = \hat{\mathbb{E}}[I_K(\eta-\xi)]\le 0,$$
where the last inequality holds since $\xi I_A\ge\eta I_A$ q.s. and $K\subset A$. By a similar analysis, we have
$$\lim_{m\to\infty}\lim_{n\to\infty}\hat{\mathbb{E}}\big[I_K\hat{\mathbb{E}}_t[X_n-Y_m]\big] = \hat{\mathbb{E}}\big[I_K\hat{\mathbb{E}}_t[X-Y]\big] = \hat{\mathbb{E}}\big[I_K\hat{\mathbb{E}}_t[\eta-\xi]\big],$$
so that $\hat{\mathbb{E}}[I_K\hat{\mathbb{E}}_t[\eta-\xi]]\le 0$. On the other hand, by Proposition 34 (4) and (8) in [8], we can check that
$$\hat{\mathbb{E}}_t[\eta]-\hat{\mathbb{E}}_t[\xi]\le\hat{\mathbb{E}}_t[\eta-\xi],$$
which yields $\hat{\mathbb{E}}_t[\eta-\xi]>0$ on $K$. Recall that $c(K)>0$, which implies $\hat{\mathbb{E}}[I_K\hat{\mathbb{E}}_t[\eta-\xi]]>0$. This is a contradiction and the proof is complete.

Lemma 2.15 allows us to extend the definition of the conditional G-expectation. For each $t\ge 0$, set
$$L^{*1,0,t}_G(\Omega) := \Big\{\xi = \sum_{i=1}^n\eta_iI_{A_i} : \{A_i\}_{i=1}^n\subset\mathcal{B}(\Omega_t) \text{ is a partition of } \Omega,\ \eta_i\in L^{*1}_G(\Omega),\ n\in\mathbb{N}\Big\}.$$

Definition 2.16 For each $\xi\in L^{*1,0,t}_G(\Omega)$ with representation $\xi = \sum_{i=1}^n\eta_iI_{A_i}$, we define the conditional expectation, still denoted by $\hat{\mathbb{E}}_s$, by setting
$$\hat{\mathbb{E}}_s[\xi] := \sum_{i=1}^n\hat{\mathbb{E}}_s[\eta_i]I_{A_i},\quad\text{for } s\ge t. \tag{2.3}$$
By Lemma 2.15, this definition does not depend on the choice of the representation of $\xi$.

Remark 2.17 If, furthermore, $\xi\in\bar L^1_G(\Omega)\cap L^{*1,0,t}_G(\Omega)$, then the extended conditional G-expectation (2.3) coincides with the one in Proposition A.1. In fact, for any $\xi\in L^{*1,0,t}_G(\Omega)$ with representation $\xi = \sum_{i=1}^n\eta_iI_{A_i}$ and $s\ge t$, we can calculate that
$$\begin{aligned}
\sum_{i=1}^n\hat{\mathbb{E}}_s[\eta_i]I_{A_i} &= \sum_{i=1}^nI_{A_i}\operatorname*{ess\,sup}^P_{Q\in\mathcal{P}(s,P)}E_Q[\eta_i|\mathcal{F}_s] = \operatorname*{ess\,sup}^P_{Q\in\mathcal{P}(s,P)}\Big(\sum_{i=1}^nE_Q[\eta_i|\mathcal{F}_s]I_{A_i}\Big)\\
&= \operatorname*{ess\,sup}^P_{Q\in\mathcal{P}(s,P)}E_Q\Big[\sum_{i=1}^n\eta_iI_{A_i}\Big|\mathcal{F}_s\Big] = \operatorname*{ess\,sup}^P_{Q\in\mathcal{P}(s,P)}E_Q[\xi|\mathcal{F}_s],\quad P\text{-a.s., for any } P\in\mathcal{P},
\end{aligned}$$
where, as in Proposition A.1, $\mathcal{P}(s,P) := \{Q\in\mathcal{P} : Q = P \text{ on } \mathcal{F}_s\}$ and $\operatorname*{ess\,sup}^P$ denotes the essential supremum under $P$.

3 Optimal stopping in the discrete time case

In this section, we study the optimal stopping problem under G-expectation in the discrete time case, i.e., the G-stopping time $\tau$ takes values in some discrete set. We first investigate the finite horizon case by applying the method of backward induction and then extend the results to the infinite horizon case.

3.1 Finite time horizon case

In this subsection, we assume that the payoff process $X$ satisfies the following condition.

Assumption 3.1 $\{X_n, n=0,1,\dots,N\}$ is a sequence of random variables such that, for any $n$, $X_n\in L^1_G(\Omega_n)$.

Theorem 3.2 Let Assumption 3.1 hold. Define the sequence $\{V_n, n=0,1,\dots,N\}$ by backward induction: let $V_N = X_N$ and
$$V_n = \max\{X_n,\hat{\mathbb{E}}_n[V_{n+1}]\},\quad n\le N-1.$$
Then we have the following conclusions.

(1) $\{V_n, n=0,1,\dots,N\}$ is the smallest G-supermartingale dominating $\{X_n, n=0,1,\dots,N\}$;

(2) Denote by $\mathcal{T}_{j,N}$ the set of all G-stopping times taking values in $\{j,\dots,N\}$ and set $\tau_j = \inf\{l\ge j : V_l = X_l\}$. Then $\tau_j$ is a G-stopping time and $V_{n\wedge\tau_j}\in L^{*1}_G(\Omega_n)$ for any $j\le N$ and $n\le N$. Furthermore, $\{V_{n\wedge\tau_j}, n=0,1,\dots,N\}$ is a G-martingale and, for any $j\le N$,
$$V_j = \hat{\mathbb{E}}_j[X_{\tau_j}] = \operatorname*{ess\,sup}_{\tau\in\mathcal{T}_{j,N}}\hat{\mathbb{E}}_j[X_\tau].$$

Proof. (1) It is easy to check that, for any $n=0,1,\dots,N$,
$$V_n\ge X_n\quad\text{and}\quad V_n\ge\hat{\mathbb{E}}_n[V_{n+1}],$$
which implies that $\{V_n, n=0,1,\dots,N\}$ is a G-supermartingale dominating $\{X_n, n=0,1,\dots,N\}$. If $\{U_n, n=0,1,\dots,N\}$ is another G-supermartingale dominating $\{X_n, n=0,1,\dots,N\}$, we have $U_N\ge X_N = V_N$ and
$$U_{N-1}\ge\hat{\mathbb{E}}_{N-1}[U_N]\ge\hat{\mathbb{E}}_{N-1}[V_N],\qquad U_{N-1}\ge X_{N-1}.$$
It follows that $U_{N-1}\ge V_{N-1}$. By induction, we can prove that $V_n\le U_n$ for all $n=0,1,\dots,N$.

(2) For any $n=j,\dots,N$, we can check that
$$\{\tau_j\le n\} = \bigcup_{k=j}^n\{\tau_j = k\} = \bigcup_{k=j}^n\{V_k = X_k\}$$
and
$$I_{\{\tau_j\le n\}} = \max_{j\le k\le n}I_{\{V_k-X_k = 0\}}.$$
By Remark A.4, $I_{\{V_k-X_k = 0\}}\in L^{1,\downarrow}_G(\Omega_k)$ for any $j\le k\le n$. It follows that $I_{\{\tau_j\le n\}}\in L^{1,\downarrow}_G(\Omega_n)$, so $\tau_j$ is a G-stopping time. It is easy to check that, for any $j<n\le N$,
$$V_{n\wedge\tau_j} = \sum_{k=j}^{n-1}(V_k-V_{k+1})I_{\{\tau_j\le k\}}+V_n = \sum_{k=j}^{n-1}(V_k-V_{k+1})^+I_{\{\tau_j\le k\}}-\sum_{k=j}^{n-1}(V_k-V_{k+1})^-I_{\{\tau_j\le k\}}+V_n. \tag{3.1}$$
We conclude that $V_{n\wedge\tau_j}\in L^{*1}_G(\Omega_n)$. Note that
$$V_{(n+1)\wedge\tau_j}-V_{n\wedge\tau_j} = I_{\{\tau_j\ge n+1\}}(V_{n+1}-V_n) = I_{\{\tau_j\le n\}^c}\big(V_{n+1}-\hat{\mathbb{E}}_n[V_{n+1}]\big). \tag{3.2}$$
Since $\{\tau_j\le n\}\in\mathcal{B}(\Omega_n)$, applying Lemma 2.15 and Equation (2.3), we have
$$\hat{\mathbb{E}}_n\big[I_{\{\tau_j\le n\}^c}\big(V_{n+1}-\hat{\mathbb{E}}_n[V_{n+1}]\big)\big] = I_{\{\tau_j\le n\}^c}\hat{\mathbb{E}}_n\big[V_{n+1}-\hat{\mathbb{E}}_n[V_{n+1}]\big] = 0. \tag{3.3}$$
Combining (3.2) and (3.3), we get
$$0 = \hat{\mathbb{E}}_n[V_{(n+1)\wedge\tau_j}-V_{n\wedge\tau_j}] = \hat{\mathbb{E}}_n[V_{(n+1)\wedge\tau_j}]-V_{n\wedge\tau_j},$$
which shows that $\{V_{n\wedge\tau_j}, n=j,j+1,\dots,N\}$ is a G-martingale. Consequently, we have
$$V_j = \hat{\mathbb{E}}_j[V_{\tau_j}] = \hat{\mathbb{E}}_j[X_{\tau_j}].$$
We claim that, for any $\tau\in\mathcal{T}_{j,N}$,
$$V_j\ge\hat{\mathbb{E}}_j[X_\tau]. \tag{3.4}$$
First, similarly to (3.1), we obtain $X_\tau\in L^{*1}_G(\Omega_N)$. We then calculate that
$$\begin{aligned}
\hat{\mathbb{E}}_{N-1}[X_\tau]\le\hat{\mathbb{E}}_{N-1}[V_\tau] &= \hat{\mathbb{E}}_{N-1}\Big[\sum_{k=j}^{N-1}(V_k-V_{k+1})I_{\{\tau\le k\}}+V_N\Big]\\
&= \sum_{k=j}^{N-2}(V_k-V_{k+1})I_{\{\tau\le k\}}+V_{N-1}I_{\{\tau\le N-1\}}+\hat{\mathbb{E}}_{N-1}\big[V_NI_{\{\tau = N\}}\big]\\
&= \sum_{k=j}^{N-2}(V_k-V_{k+1})I_{\{\tau\le k\}}+V_{N-1}I_{\{\tau\le N-1\}}+\hat{\mathbb{E}}_{N-1}[V_N]I_{\{\tau = N\}}\\
&\le\sum_{k=j}^{N-2}(V_k-V_{k+1})I_{\{\tau\le k\}}+V_{N-1} = V_{(N-1)\wedge\tau},
\end{aligned}$$
where we used Equation (2.3) again in the third equality. Repeating this procedure, we get that (3.4) holds. The proof is complete.
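The backward recursion of Theorem 3.2 is directly implementable once a one-step conditional G-expectation is available. The following sketch is a toy discrete analogue; all ingredients (the two volatility regimes, the random-walk state, the put-style payoff and the parameter values) are illustrative assumptions of ours, not the paper's setting. The conditional G-expectation at a node is modelled as the maximum of the one-step expectations over a finite set of transition laws, and $\tau_0$ is read off as the first time $V_n = X_n$.

```python
# Toy discrete analogue of Theorem 3.2: backward induction
#   V_N = X_N,  V_n = max(X_n, sup over regimes of one-step expectation of V_{n+1}),
# where the sublinear one-step expectation is a max over two increment laws
# (a "low" and a "high" volatility regime). Illustrative assumptions throughout.
N = 5
S0, K = 0, 1
REGIMES = [(-1, 1), (-2, 2)]          # one-step increments, each taken w.p. 1/2

def payoff(s):                        # X_n as a function of the state
    return max(K - s, 0.0)

def states(n):                        # states reachable by time n
    return range(S0 - 2 * n, S0 + 2 * n + 1)

V = {N: {s: payoff(s) for s in states(N)}}
for n in range(N - 1, -1, -1):
    V[n] = {}
    for s in states(n):
        cond = max(0.5 * (V[n + 1][s + a] + V[n + 1][s + b]) for a, b in REGIMES)
        V[n][s] = max(payoff(s), cond)    # V_n = max(X_n, E_hat_n[V_{n+1}])

print("V_0 =", V[0][S0])

# tau_0 = inf{n : V_n = X_n}, evaluated along one (arbitrary) sample path
path = [S0, S0 - 1, S0 - 2, S0 - 1, S0, S0 + 1]
tau0 = next(n for n, s in enumerate(path) if V[n][s] == payoff(s))
print("tau_0 along this path:", tau0)
```

Because the put-style payoff is convex, the high-volatility regime attains the inner maximum at every node, consistent with $\hat{\mathbb{E}}$ being a supremum over priors.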

Remark 3.3 Let $\{V_n\}_{n=0}^N$ be defined by
$$V_N = X_N,\qquad V_n = \min\{X_n,\hat{\mathbb{E}}_n[V_{n+1}]\},\quad n\le N-1.$$
By a similar analysis, we have the following conclusions.

(1) $\{V_n, n=0,1,\dots,N\}$ is the largest G-submartingale dominated by $\{X_n, n=0,1,\dots,N\}$;

(2) Set $\tau_j = \inf\{l\ge j : V_l = X_l\}$. Then $\tau_j$ is a G-stopping time and $V_{n\wedge\tau_j}\in L^{*1}_G(\Omega_n)$ for any $j\le N$ and $n\le N$. Furthermore, $\{V_{n\wedge\tau_j}, n=0,1,\dots,N\}$ is a G-martingale and, for any $j\le N$,
$$V_j = \hat{\mathbb{E}}_j[X_{\tau_j}] = \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_{j,N}}\hat{\mathbb{E}}_j[X_\tau].$$
By Proposition 2.14, there exists some $P^*\in\mathcal{P}$ such that
$$V_0 = \inf_{\tau\in\mathcal{T}_{0,N}}\sup_{P\in\mathcal{P}}E_P[X_\tau] = E_{P^*}[X_{\tau_0}].$$
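In the toy sketch above, the recursion of Remark 3.3 amounts to a one-line change (again purely illustrative):

```python
V[n][s] = min(payoff(s), cond)   # Remark 3.3: V_n = min(X_n, E_hat_n[V_{n+1}])
```

Note that only the outer maximum is replaced: the inner `max` over regimes implements $\hat{\mathbb{E}}_n$ itself and stays, which is why the resulting $V_0$ has the inf-sup (game) form $\inf_\tau\sup_PE_P[X_\tau]$.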

3.2 Infinite time horizon case

Now we study the infinite horizon case. The conditions on the payoff process are more restrictive than in the finite horizon case, mainly because the exponent on the right-hand side of Doob's inequality under G-expectation is strictly larger than the one on the left-hand side.

Assumption 3.4 $\{X_n, n\in\mathbb{N}\}$ is a sequence of random variables bounded from below such that, for any $n\in\mathbb{N}$, $X_n\in L^\beta_G(\Omega_n)$, where $\beta>1$. Furthermore,
$$\hat{\mathbb{E}}\big[\sup_{n\in\mathbb{N}}|X_n|^\beta\big]<\infty.$$

For each fixed $N\in\mathbb{N}$, we define the sequence $\{\tilde V^N_n, n=0,1,\dots,N\}$ by backward induction: let $\tilde V^N_N = X_N$ and
$$\tilde V^N_n = \max\{X_n,\hat{\mathbb{E}}_n[\tilde V^N_{n+1}]\},\quad n\le N-1. \tag{3.5}$$
It is easy to check that $\tilde V^N_n\le\tilde V^M_n$ for any $n\le N\le M$. We may therefore define
$$\tilde V_n = \lim_{N\ge n,\,N\to\infty}\tilde V^N_n. \tag{3.6}$$


Proposition 3.5 The sequence $\{\tilde V_n, n\in\mathbb{N}\}$ defined by (3.6) is the smallest G-supermartingale dominating the process $\{X_n, n\in\mathbb{N}\}$.

Proof. By the monotone convergence theorem, letting $N\to\infty$ in (3.5), we have
$$\tilde V_n = \max\{X_n,\hat{\mathbb{E}}_n[\tilde V_{n+1}]\},$$
which implies that $\{\tilde V_n, n\in\mathbb{N}\}$ is a G-supermartingale dominating the process $\{X_n, n\in\mathbb{N}\}$. Let $\{U_n, n\in\mathbb{N}\}$ be any G-supermartingale dominating $\{X_n, n\in\mathbb{N}\}$. By Theorem 3.2, $\{\tilde V^N_n, n=0,1,\dots,N\}$ is the smallest G-supermartingale dominating $\{X_n, n=0,1,\dots,N\}$, so for each $n\in\mathbb{N}$ and $N\ge n$ we have $\tilde V^N_n\le U_n$. It follows that
$$U_n\ge\lim_{N\to\infty}\tilde V^N_n = \tilde V_n,$$
which yields that $\{\tilde V_n, n\in\mathbb{N}\}$ is the smallest such G-supermartingale.

For each $j\in\mathbb{N}$, denote by $\mathcal{T}_j$ the collection of all G-stopping times taking values in $\{j,j+1,\dots\}$ such that
$$\lim_{N\to\infty}c(\tau>N) = 0. \tag{3.7}$$
Set
$$\bar V_0 = \sup_{\tau\in\mathcal{T}_0}\hat{\mathbb{E}}[X_\tau].$$

Remark 3.6 If a G-stopping time $\tau$ satisfies condition (3.7), then, noting that $\{\tau=\infty\}\subset\{\tau>N\}$ for any $N\in\mathbb{N}$, we obtain
$$0\le c(\tau=\infty)\le\lim_{N\to\infty}c(\tau>N) = 0,$$
which implies that $\tau$ is finite quasi-surely. However, the converse does not hold true. Consider the following example. Let $0<\underline\sigma^2<\bar\sigma^2 = 1$ and set
$$\tau = \begin{cases}1, & \text{if } \langle B\rangle_1 = 1,\\[2pt] N, & \text{if } \langle B\rangle_1\in\big(1-\frac{1}{N-1},\,1-\frac{1}{N}\big],\ N\ge 2.\end{cases}$$
It is easy to check that $\tau$ is a G-stopping time and $c(\tau=\infty) = 0$. However, for any fixed $N\in\mathbb{N}$, we have
$$c(\tau>N) = c\big(\langle B\rangle_1\in(1-\tfrac1N,1)\big) = 1.$$

Proposition 3.7 Under the above assumptions, we have $\bar V_0 = \tilde V_0$.

Proof. By Theorem 3.2, it is obvious that $\bar V_0\ge\tilde V^N_0$ for any $N\in\mathbb{N}$, which implies that $\bar V_0\ge\tilde V_0$. We now prove the converse inequality. For any $\tau\in\mathcal{T}_0$ and $\varepsilon>0$, there exists some $N$ such that $c(\tau>N)\le\varepsilon$. By Assumption 3.4 and the Hölder inequality, we can calculate that
$$\hat{\mathbb{E}}[|X_\tau-X_{\tau\wedge N}|]\le\hat{\mathbb{E}}\big[2\sup_{n\in\mathbb{N}}|X_n|I_{\{\tau>N\}}\big]\le C\big(\hat{\mathbb{E}}[\sup_{n\in\mathbb{N}}|X_n|^\beta]\big)^{\frac1\beta}\big(\hat{\mathbb{E}}[I_{\{\tau>N\}}]\big)^{\frac{\beta-1}{\beta}}\le C\varepsilon^{\frac{\beta-1}{\beta}}. \tag{3.8}$$
It follows that
$$\hat{\mathbb{E}}[X_\tau]\le\hat{\mathbb{E}}[X_{\tau\wedge N}]+C\varepsilon^{\frac{\beta-1}{\beta}}\le\tilde V^N_0+C\varepsilon^{\frac{\beta-1}{\beta}}\le\tilde V_0+C\varepsilon^{\frac{\beta-1}{\beta}}.$$
Letting $\varepsilon\to 0$, since $\tau$ was arbitrarily chosen, we finally get the desired result.
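Proposition 3.7 also suggests a numerical scheme for the infinite-horizon value: compute the finite-horizon values $\tilde V^N_0$ of (3.5) for increasing $N$ and use their monotone limit (3.6). Below is a minimal sketch in the toy model introduced after Theorem 3.2, with the payoff capped so that the integrability required by Assumption 3.4 is trivially satisfied; all parameters are illustrative assumptions.

```python
# Finite-horizon approximation of the infinite-horizon value (3.6):
# tilde V_0^N is non-decreasing in N and converges to tilde V_0.
# Toy model as before; the payoff is capped so that sup_n |X_n| is bounded.
def value0(N, S0=0, K=1, cap=3.0, regimes=((-1, 1), (-2, 2))):
    payoff = lambda s: min(max(K - s, 0.0), cap)
    states = lambda n: range(S0 - 2 * n, S0 + 2 * n + 1)
    V = {s: payoff(s) for s in states(N)}
    for n in range(N - 1, -1, -1):
        V = {s: max(payoff(s),
                    max(0.5 * (V[s + a] + V[s + b]) for a, b in regimes))
             for s in states(n)}
    return V[S0]

print([round(value0(N), 4) for N in range(1, 9)])   # non-decreasing in N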


Proposition 3.8 Assume that
$$\tau_j = \inf\{l\ge j : \tilde V_l = X_l\}$$
satisfies condition (3.7). Then we have:

(i) $\tau_j\in\mathcal{T}_j$ and $\tilde V_{n\wedge\tau_j}\in L^{1,\uparrow}_G(\Omega_n)$ for each $n\in\mathbb{N}$;

(ii) $\{\tilde V_{n\wedge\tau_j}, n=j,j+1,\dots\}$ is a G-martingale;

(iii) for any $j\in\mathbb{N}$,
$$\tilde V_j = \hat{\mathbb{E}}_j[X_{\tau_j}] = \operatorname*{ess\,sup}_{\tau\in\mathcal{T}_j}\hat{\mathbb{E}}_j[X_\tau].$$

Proof. (i) Noting that, for any $k\in\mathbb{N}$ and $N\ge k$, $\tilde V^N_k\ge X_k$ and $\tilde V^N_k\uparrow\tilde V_k$ as $N\to\infty$, we have $\{\tilde V_k = X_k\} = \bigcap_{N\ge k}\{\tilde V^N_k = X_k\}$, which implies
$$I_{\{\tilde V_k = X_k\}} = \inf_{N\ge k}I_{\{\tilde V^N_k = X_k\}}.$$
By the proof of Theorem 3.2, we have $I_{\{\tilde V^N_k = X_k\}}\in L^{1,\downarrow}_G(\Omega_k)$. Applying Proposition A.2, we can check that $I_{\{\tilde V_k = X_k\}}\in L^{1,\downarrow}_G(\Omega_k)$. Since
$$I_{\{\tau_j\le n\}} = \max_{j\le k\le n}I_{\{\tilde V_k = X_k\}},$$
it follows that $I_{\{\tau_j\le n\}}\in L^{1,\downarrow}_G(\Omega_n)$. Without loss of generality, we may assume $X_n\ge 0$ for any $n\in\mathbb{N}$. It is easy to check that
$$\tilde V_{n\wedge\tau_j} = \sum_{k=j}^{n-1}(\tilde V_k-\tilde V_{k+1})I_{\{\tau_j\le k\}}+\tilde V_n.$$
Since $I_{\{\tau_j\le k\}}\in L^{1,\downarrow}_G(\Omega_k)$, there exists a bounded sequence $\{\xi^n_{j,k}\}_{n=1}^\infty\subset L^1_G(\Omega_k)$ such that $\xi^n_{j,k}\downarrow I_{\{\tau_j\le k\}}$. Note that
$$-\tilde V^N_{k+1}\xi^n_{j,k}\downarrow-\tilde V_{k+1}\xi^n_{j,k},\ \text{as } N\to\infty,\qquad -\tilde V_{k+1}\xi^n_{j,k}\uparrow-\tilde V_{k+1}I_{\{\tau_j\le k\}},\ \text{as } n\to\infty,$$
$$\tilde V^N_k\xi^n_{j,k}\downarrow\tilde V^N_kI_{\{\tau_j\le k\}},\ \text{as } n\to\infty,\qquad \tilde V^N_kI_{\{\tau_j\le k\}}\uparrow\tilde V_kI_{\{\tau_j\le k\}},\ \text{as } N\to\infty.$$
It follows that $-\tilde V_{k+1}I_{\{\tau_j\le k\}}\in L^{1,\uparrow}_G(\Omega_{k+1})$ and $\tilde V_kI_{\{\tau_j\le k\}}\in L^{1,\uparrow}_G(\Omega_k)$. Hence, $\tilde V_{n\wedge\tau_j}\in L^{1,\uparrow}_G(\Omega_n)$.

(ii) Note that
$$\tilde V_{(n+1)\wedge\tau_j}-\tilde V_{n\wedge\tau_j} = I_{\{\tau_j\ge n+1\}}(\tilde V_{n+1}-\tilde V_n) = I_{\{\tau_j\le n\}^c}\big(\tilde V_{n+1}-\hat{\mathbb{E}}_n[\tilde V_{n+1}]\big). \tag{3.9}$$
Since $\{\tau_j\le n\}\in\mathcal{B}(\Omega_n)$ and $\tilde V_{n+1}, -\hat{\mathbb{E}}_n[\tilde V_{n+1}]\in L^{*1}_G(\Omega_{n+1})$, applying Lemma 2.15 and (2.3), we have
$$\hat{\mathbb{E}}_n\big[I_{\{\tau_j\le n\}^c}\big(\tilde V_{n+1}-\hat{\mathbb{E}}_n[\tilde V_{n+1}]\big)\big] = I_{\{\tau_j\le n\}^c}\hat{\mathbb{E}}_n\big[\tilde V_{n+1}-\hat{\mathbb{E}}_n[\tilde V_{n+1}]\big] = 0. \tag{3.10}$$
By a similar analysis as in step (i), we get $-\tilde V_{n\wedge\tau_j}\in L^{1,\uparrow}_G(\Omega_n)$. The above two equalities imply that
$$0 = \hat{\mathbb{E}}_n[\tilde V_{(n+1)\wedge\tau_j}-\tilde V_{n\wedge\tau_j}] = \hat{\mathbb{E}}_n[\tilde V_{(n+1)\wedge\tau_j}]-\tilde V_{n\wedge\tau_j},$$
which shows that $\{\tilde V_{n\wedge\tau_j}, n=j,j+1,\dots\}$ is a G-martingale.


(iii) First, we claim that there exists some $1<p<\beta$ such that
$$\hat{\mathbb{E}}\big[\sup_{n\in\mathbb{N}}|\tilde V_n|^p\big]<\infty.$$
By Theorem 3.2, we have $\tilde V^N_j = \hat{\mathbb{E}}_j[X_{\tau^N_j}]$, where $\tau^N_j = \inf\{l\ge j : \tilde V^N_l = X_l\}$. It is easy to check that $|\tilde V^N_j|\le\hat{\mathbb{E}}_j[\sup_{n\in\mathbb{N}}|X_n|]$ and
$$\hat{\mathbb{E}}\Big[\sup_{1\le j\le N}|\tilde V^N_j|^p\Big]\le\hat{\mathbb{E}}\Big[\sup_{1\le j\le N}\big(\hat{\mathbb{E}}_j[\sup_{n\in\mathbb{N}}|X_n|]\big)^p\Big].$$
Since $\hat{\mathbb{E}}[\sup_{n\in\mathbb{N}}|X_n|^\beta]<\infty$, by Theorem 3.4 in [17] there exists a constant $C$ independent of $N$ such that $\hat{\mathbb{E}}[\sup_{1\le j\le N}|\tilde V^N_j|^p]\le C$. By the monotone convergence theorem, we have
$$\hat{\mathbb{E}}\big[\sup_{n\in\mathbb{N}}|\tilde V_n|^p\big] = \lim_{N\to\infty}\hat{\mathbb{E}}\Big[\sup_{1\le j\le N}|\tilde V^N_j|^p\Big]\le C.$$
We then show that
$$\tilde V_j = \hat{\mathbb{E}}_j[\tilde V_{\tau_j}] = \hat{\mathbb{E}}_j[X_{\tau_j}].$$
Indeed, by step (ii), we have, for any $n\ge j$,
$$\tilde V_j = \hat{\mathbb{E}}_j[\tilde V_{n\wedge\tau_j}]. \tag{3.11}$$
For any $\varepsilon>0$, there exists some $N>0$ such that $c(\tau_j>n)\le\varepsilon$ for any $n\ge N$. It is easy to check that
$$\hat{\mathbb{E}}\big[\big|\hat{\mathbb{E}}_j[\tilde V_{n\wedge\tau_j}]-\hat{\mathbb{E}}_j[\tilde V_{\tau_j}]\big|\big]\le\hat{\mathbb{E}}\big[|\tilde V_{n\wedge\tau_j}-\tilde V_{\tau_j}|\big]\le\hat{\mathbb{E}}\big[2\sup_{n\in\mathbb{N}}|\tilde V_n|I_{\{\tau_j>n\}}\big]\le C\big(\hat{\mathbb{E}}[\sup_{n\in\mathbb{N}}|\tilde V_n|^p]\big)^{1/p}\varepsilon^{1/q},$$
where $\frac1p+\frac1q = 1$. Letting $n\to\infty$, since $\varepsilon$ is arbitrarily small, (3.11) yields $\tilde V_j = \hat{\mathbb{E}}_j[\tilde V_{\tau_j}] = \hat{\mathbb{E}}_j[X_{\tau_j}]$.

In the following, we show that $\tilde V_j\ge\hat{\mathbb{E}}_j[X_\tau]$ for any $\tau\in\mathcal{T}_j$. For any $\tau\in\mathcal{T}_j$ and $\varepsilon>0$, there exists some $N$ such that $c(\tau>N)\le\varepsilon$. We can calculate that
$$\hat{\mathbb{E}}[|X_\tau-X_{\tau\wedge N}|]\le\hat{\mathbb{E}}\big[2\sup_{n\in\mathbb{N}}|X_n|I_{\{\tau>N\}}\big]\le C\varepsilon^{\frac{\beta-1}{\beta}}.$$
It follows that
$$\hat{\mathbb{E}}_j[X_\tau] = \lim_{N\to\infty}\hat{\mathbb{E}}_j[X_{\tau\wedge N}].$$
By a similar analysis as in the proof of Theorem 3.2, we have $\tilde V_j\ge\hat{\mathbb{E}}_j[X_{\tau\wedge N}]$ for each $N\ge j$. Letting $N\to\infty$, we deduce that $\tilde V_j\ge\hat{\mathbb{E}}_j[X_\tau]$. This completes the proof.

Remark 3.9 For each fixed $N\in\mathbb{N}$, we may define the sequence $\{\underline V^N_n, n=0,1,\dots,N\}$ recursively: let $\underline V^N_N = X_N$ and
$$\underline V^N_n = \min\{X_n,\hat{\mathbb{E}}_n[\underline V^N_{n+1}]\},\quad n\le N-1.$$
We can check that $\underline V^M_n\le\underline V^N_n$ for any $n\le N\le M$. It is then natural to define
$$\underline V_n = \lim_{N\ge n,\,N\to\infty}\underline V^N_n.$$
Similar results still hold for the sequence $\{\underline V_n, n\in\mathbb{N}\}$. More precisely, set
$$\tau_j = \inf\{l\ge j : \underline V_l = X_l\}$$
and assume that $\tau_j$ satisfies condition (3.7) (i.e., $c(\tau_j>N)\to 0$ as $N\to\infty$). Then we have:

(i) $\tau_j\in\mathcal{T}_j$ and $\underline V_{n\wedge\tau_j}\in L^{*1}_G(\Omega_n)$ for each $n\in\mathbb{N}$;

(ii) $\{\underline V_{n\wedge\tau_j}, n=j,j+1,\dots\}$ is a G-martingale;

(iii) for any $j\in\mathbb{N}$,
$$\underline V_j = \hat{\mathbb{E}}_j[X_{\tau_j}] = \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_j}\hat{\mathbb{E}}_j[X_\tau];$$

(iv) the sequence $\{\underline V_n, n\in\mathbb{N}\}$ is the largest G-submartingale dominated by the process $\{X_n, n\in\mathbb{N}\}$.

4 Optimal stopping in continuous time

4.1 Finite time horizon case

In this subsection, we establish the relation between the value function of the optimal stopping problem and the solution of a reflected G-BSDE. For simplicity, assume that the time horizon is $[0,1]$. We consider a payoff process $\{X_t\}_{t\in[0,1]}$ satisfying the following assumption.

Assumption 4.1 The payoff process $\{X_t\}_{t\in[0,1]}$ belongs to $S^\beta_G(0,1)$ (for the definition, we refer to Appendix B), where $\beta>1$.

Denote by $\mathcal{T}_{s,t}$ the collection of all G-stopping times $\tau$ such that $s\le\tau\le t$, and by $\mathcal{T}^n_{s,t}$ the collection of all G-stopping times taking values in $I_n$ such that $s\le\tau\le t$, where $0\le s<t$ and $I_n = \{k/2^n, k=0,1,\dots,2^n\}$. Set
$$V_0 = \sup_{\tau\in\mathcal{T}_{0,1}}\hat{\mathbb{E}}[X_\tau]. \tag{4.1}$$

For each $n\in\mathbb{N}$, we define the sequence $\{V^n_{t^n_k}, k=0,1,\dots,2^n\}$ backwardly: let $V^n_1 = X_1$ and
$$V^n_{t^n_k} = \max\big(X_{t^n_k},\hat{\mathbb{E}}_{t^n_k}[V^n_{t^n_{k+1}}]\big),\quad k=0,1,\dots,2^n-1,$$
where $t^n_k = k/2^n$. By Theorem 3.2, for any $n\in\mathbb{N}$ and $k=0,1,\dots,2^n$, we have
$$V^n_{t^n_k} = \operatorname*{ess\,sup}_{\tau\in\mathcal{T}^n_{t^n_k,1}}\hat{\mathbb{E}}_{t^n_k}[X_\tau].$$
It is easy to check that $V^n_{t^n_k}\le V^{n+1}_{t^n_k}$ for any $n\in\mathbb{N}$ and $k=0,1,\dots,2^n$. Then we may define
$$V_{t^n_k} = \lim_{m\ge n,\,m\to\infty}V^m_{t^n_k}. \tag{4.2}$$

Proposition 4.2 Let $\mathcal{I} = \bigcup_nI_n$. For each $t\in\mathcal{I}$, we have $V_t\in L^{1,\uparrow}_G(\Omega_t)$. Moreover, the family $\{V_t, t\in\mathcal{I}\}$ is the smallest G-supermartingale dominating the process $\{X_t, t\in\mathcal{I}\}$.

Proof. Let $t^n_k, t^m_l\in\mathcal{I}$ with $t^n_k<t^m_l$, where $m,n\in\mathbb{N}$. It is easy to check that
$$\hat{\mathbb{E}}_{t^n_k}[V_{t^m_l}] = \hat{\mathbb{E}}_{t^n_k}\Big[\lim_{M\ge m,\,M\to\infty}V^M_{t^m_l}\Big] = \hat{\mathbb{E}}_{t^n_k}\Big[\lim_{M\ge m\vee n,\,M\to\infty}V^M_{t^m_l}\Big] = \lim_{M\ge m\vee n,\,M\to\infty}\hat{\mathbb{E}}_{t^n_k}[V^M_{t^m_l}]\le\lim_{M\ge m\vee n,\,M\to\infty}V^M_{t^n_k} = V_{t^n_k}.$$
Now let $\{U_t, t\in\mathcal{I}\}$ be any G-supermartingale dominating $\{X_t, t\in\mathcal{I}\}$. By Theorem 3.2, we know that $\{V^n_t, t\in I_n\}$ is the smallest G-supermartingale dominating $\{X_t, t\in I_n\}$. Therefore, for any $m\ge n$, we have $U_{t^n_k}\ge V^m_{t^n_k}$. Letting $m\to\infty$, we get $U_{t^n_k}\ge V_{t^n_k}$, which completes the proof.
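As remarked in the introduction, the monotone scheme (4.2) lends itself to numerical approximation. The following sketch replaces G-Brownian motion by a two-regime binomial scaling on the dyadic grid $I_n$, which is our own heuristic assumption rather than the paper's construction: at level $n$ the time step is $2^{-n}$, the one-step increments are $\pm\sigma 2^{-n/2}$ with $\sigma\in\{\sigma_{lo},\sigma_{hi}\}$, and the maximum over the two regimes plays the role of $\hat{\mathbb{E}}_{t^n_k}$. Choosing $\sigma_{hi} = 2\sigma_{lo}$ keeps the lattice recombining.

```python
# Dyadic approximation in the spirit of (4.2), with G-Brownian motion replaced
# by a two-regime binomial scaling (illustrative assumption, not the paper's
# construction). sigma_hi = 2 * sigma_lo keeps all states on one lattice.
SIG_LO, K = 0.5, 0.5

def V0(n):
    m = 2 ** n                         # number of time steps on [0, 1]
    u = SIG_LO * 2 ** (-n / 2)         # lattice unit = sigma_lo * sqrt(dt)
    payoff = lambda j: max(K - j * u, 0.0)
    V = {j: payoff(j) for j in range(-2 * m, 2 * m + 1)}
    for step in range(m - 1, -1, -1):
        V = {j: max(payoff(j),
                    max(0.5 * (V[j - 1] + V[j + 1]),    # sigma = sigma_lo
                        0.5 * (V[j - 2] + V[j + 2])))   # sigma = sigma_hi
             for j in range(-2 * step, 2 * step + 1)}
    return V[0]

print([round(V0(n), 4) for n in range(1, 7)])   # successive dyadic refinements
```

The values obtained this way approximate $V_0$ of (4.1); note that the monotonicity in (4.2) concerns refinement of the stopping grid for the same underlying process, which a fully discrete scheme like this one only mimics.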
