
4.3 First try at a more general approach to effective importance sampling

4.3.4 Superhedging

We further test the efficient importance sampling algorithm in an example concerning borrowing constraints which was already examined by Bender and Denk in an earlier version of their paper [2]. Here, an investor faces a constraint on the maximal amount he is allowed to borrow from the money market account: it is limited to a given fraction of his total wealth. Hence, even in a complete market he is not able to replicate contingent claims with a strategy relying on the underlyings and the money market account; instead he has to set up a super-replication strategy with minimal initial wealth. In the literature this problem is known as the superhedging problem.

Bender and Kohlmann [3] show that the solution of superhedging problems with a rather general class of constraints is given by the limit of a sequence of nonlinear BSDEs. The example we consider is a European call option with the borrowing constraint described above. Its optimal time-zero superhedging price can be obtained as the limit of Y₀^ε (as ε → 0), with

$$
dS_t = b S_t\,dt + \sigma S_t\,dW_t, \qquad S_0 = s_0,
$$
$$
dY_t^{\varepsilon} = \Bigl( r Y_t^{\varepsilon} + \tfrac{b-r}{\sigma}\, Z_t^{\varepsilon} - \tfrac{1}{\varepsilon}\bigl( \tfrac{Z_t^{\varepsilon}}{\sigma} - \rho\, Y_t^{\varepsilon} \bigr)^{+} \Bigr)\,dt + Z_t^{\varepsilon}\,dW_t, \qquad Y_T^{\varepsilon} = (S_T - K)^{+}.
$$

Here, ρ − 1 is the fraction of the investor's total wealth which he is allowed to borrow. As parameters we choose, following Bender and Denk:

b      σ      r      T      s0     K      ρ
0.05   0.2    0.05   0.5    100    100    10
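To make the penalization concrete, the following minimal Python sketch encodes the driver of the BSDE above in the convention −dY_t = f(t, Y_t, Z_t) dt − Z_t dW_t, together with the parameter choices from the table. The function and variable names are ours, for illustration only:

```python
import numpy as np

# Parameters as in the table above
b, sigma, r, T, s0, K, rho = 0.05, 0.2, 0.05, 0.5, 100.0, 100.0, 10.0
theta = (b - r) / sigma  # market price of risk (here 0, since b = r)

def f(y, z, eps):
    """Driver of the penalized BSDE in the convention -dY = f dt - Z dW.
    The (.)^+ term charges a penalty of weight 1/eps whenever the
    borrowing constraint Z/sigma <= rho * Y is violated."""
    return -r * y - theta * z + (1.0 / eps) * np.maximum(z / sigma - rho * y, 0.0)
```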

Furthermore, we use a basis of monomials up to power 4 and 40 equidistant time points, and we examine the numerical solution for different penalization weights ε. In this simple setting it is possible to determine the superhedging price analytically with methods described in Broadie et al. [8]. The reference value in our example is 8.06.
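For orientation, here is a compact sketch of a Picard-iterated least-squares Monte Carlo scheme of the kind described, in the spirit of the forward scheme of Bender and Denk [2], reusing the parameters and driver f from the sketch above. The array layout, regression details, and default values are our simplifications, not the thesis implementation:

```python
def lsmc_picard(eps, n_paths=60_000, n_steps=40, tol=1e-3, max_picard=100, seed=0):
    """Crude (no importance sampling) Picard-iterated least-squares Monte
    Carlo estimate of Y_0^eps; a sketch, not the thesis implementation."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

    # Euler scheme for the forward SDE dS = b S dt + sigma S dW
    S = np.empty((n_paths, n_steps + 1))
    S[:, 0] = s0
    for i in range(n_steps):
        S[:, i + 1] = S[:, i] * (1.0 + b * dt + sigma * dW[:, i])

    def project(target, s):
        """Approximate E[target | S_{t_i} = s] by least-squares regression
        on the monomials 1, x, ..., x^4 of the scaled stock price."""
        A = np.vander(s / s0, 5)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return A @ coef

    xi = np.maximum(S[:, -1] - K, 0.0)                 # terminal condition (S_T - K)^+
    Y = np.repeat(xi[:, None], n_steps + 1, axis=1)    # Picard start Y^0 = payoff
    Z = np.zeros_like(Y)
    y0_old = np.inf
    for n in range(max_picard):
        fdt = f(Y, Z, eps) * dt
        # tail[:, i] = sum of f(Y_j, Z_j) dt over i <= j < n_steps
        # (the last column is the empty sum)
        tail = np.zeros_like(Y)
        tail[:, :-1] = np.cumsum(fdt[:, -2::-1], axis=1)[:, ::-1]
        Ynew, Znew = np.empty_like(Y), np.zeros_like(Z)
        Ynew[:, -1] = xi
        for i in range(n_steps):
            Ynew[:, i] = project(xi + tail[:, i], S[:, i])
            Znew[:, i] = project((xi + tail[:, i + 1]) * dW[:, i] / dt, S[:, i])
        Y, Z = Ynew, Znew
        y0 = Y[0, 0]                 # at t_0 all paths share the regression value
        if abs(y0 - y0_old) < tol:   # stop condition |Y0^n - Y0^{n-1}| < tol
            break
        y0_old = y0
    return y0
```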

Bender and Denk explain in their earlier version of [2] why we should expect a positive bias in the BSDE approach for small ε, and this is also reflected in the results of our simulations within their framework without variance reduction. The extra term of the backward equation, compared to the linear Black-Scholes world, can be interpreted as a penalization for not meeting the borrowing constraint. Hence, if the penalization weight 1/ε is large, there will be more simulations $(\widehat{Y}^{\varepsilon,n}_{\lambda}, \widehat{Z}^{\varepsilon,n}_{\lambda})$ falling near the border of the forbidden region but within the allowed one than simulations that are close to the border but lying in the forbidden region. In other words, it is more likely that simulated paths are wrongly penalized than wrongly left unpenalized.

First tries with this crude least-squares Monte Carlo approach lead to several absurdly high estimators (even $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ of order $10^6$ is possible). However, theory tells us that $P(Y_0^\varepsilon > s_0) = 0$, so we can simply set $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ equal to s₀ for any choice of h if the algorithm yields unreasonably high results. But even after the use of this additional piece of information, the crude least-squares Monte Carlo algorithm produces an enormous empirical standard deviation; see Figures 4.16 to 4.27. Hence, it is self-evident that in this example variance reduction methods are desirable or even indispensable. Since for small ε the Lipschitz constant of the driver of the BSDE is large, this disadvantageous variability is more or less expected.
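In code, this a priori bound amounts to a one-line cap on the raw output of the scheme above (a sketch; y0_raw is our name for that output):

```python
def cap_estimator(y0_raw):
    """Theory gives P(Y0^eps > s0) = 0, so any estimate above the initial
    stock price s0 is spurious and is replaced by s0 itself."""
    return y0_raw if y0_raw <= s0 else s0
```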

Again, we use implementations analogous to those in the former examples for the selection of an 'optimal' h (a sketch of this selection step follows after the table) and obtain a variance reduction effect through EIS which is more prominent in the case of smaller ε, at least for the direct simplex method, as the following table shows:

ε      optimization method            selected h      average variance     bad estimators using
                                                      reduction factor     crude algorithm
2      direct simplex method           0.78312786         3.1290           -
       sequential simplex method       1.76235507         3.6990
       sequential gradient method      0.10777530         1.3043
1      direct simplex method           0.78312783         5.7761           -
       sequential simplex method      -1.68477820         1·10⁻⁵
       sequential gradient method     -0.38470397         0.0044
2/3    direct simplex method           0.78312784        30.4705           -
       sequential simplex method      -0.66547303         2.3·10⁻⁴
       sequential gradient method     -1.16865662         5·10⁻⁵
1/2    direct simplex method           0.78312784        76.7538           0.13 %
       sequential simplex method      -0.72830747         2.8·10⁻⁴
       sequential gradient method      1.42538775        19.6533
2/5    direct simplex method           0.78312780      8870.6202           1.5 %
       sequential simplex method      -0.15001258         0.2792
       sequential gradient method      1.05173291      1904.0892
1/3    direct simplex method           0.78312784     21689.9088           4.6 %
       sequential simplex method      -0.03027747         0.8553
       sequential gradient method      2.81884671         0.0394
1/4    direct simplex method           0.78312780     22674.9079          15.8 %
       sequential simplex method       0.06220084         1.2544
       sequential gradient method      0.06268260         1.2629
1/5    direct simplex method           0.78312781         1.0247          44.0 %
       sequential simplex method       0.04891347         1.0161
       sequential gradient method      0.04881397         1.0162

Here we use 60,000 up to 200,000 paths for the simulations, excluding the more irregular results that occur with 50,000 paths or fewer. As bad estimators using the crude algorithm we count estimators with $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0} = s_0$. Again, we repeat the procedure 100 times and depict the empirical mean plus/minus two empirical standard deviations of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$.
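The 'direct simplex method' of the table can be realized, for instance, with a Nelder-Mead search over a constant Girsanov drift h, minimizing the empirical variance of the estimator. The sketch below shows only this outer selection loop; lsmc_picard_is is a hypothetical EIS variant of the scheme above that simulates the paths under the h-shifted measure and compensates for the change of measure; it is not the thesis implementation:

```python
from scipy.optimize import minimize

def empirical_variance(h, eps, n_runs=10):
    """Objective for the direct simplex search: the empirical variance of
    the EIS estimator over independent runs with Girsanov drift h.
    lsmc_picard_is is hypothetical (see the lead-in above)."""
    h = float(np.atleast_1d(h)[0])
    estimates = [lsmc_picard_is(eps, h, seed=k) for k in range(n_runs)]
    return np.var(estimates)

# Nelder-Mead is a direct (derivative-free) simplex method.
result = minimize(empirical_variance, x0=[0.5], args=(0.5,), method="Nelder-Mead")
h_selected = result.x[0]
```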

For the sequential optimization methods we can report that in most cases we do not obtain convergence before 50 iterations are executed. This problem is less prominent if ε gets smaller, which is quite astonishing. Moreover, we observe that in the sequential simplex method the sequence of iterates $(\widehat{h}_b)_{b\in\mathbb{N}}$ oscillates for higher ε between several limit points.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2, direct simplex method. (b) ε = 2, sequential simplex method.

Figure 4.16: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

Considering the above table and Figures 4.16 to 4.27, we see that here only the direct simplex method is successful in producing a reliable variance reduction effect through the selected process h. The other approaches more often worsen the numerical outcomes compared to the implementation without a measure transformation.

We should not be too enthusiastic about the large average variance reduction factors for ε = 1/3 and ε = 1/4 in the direct simplex optimization method: the crude least-squares Monte Carlo algorithm is no longer reliable here, since the fraction of unrealistic estimators gets larger. However, in both cases the variance-reduced version with direct simplex optimization leads to excellent results, making the numerical BSDE approach to superhedging problems very attractive.

The results for ε = 1/5 already show the limits of this kind of variance reduction method: even in the implementation with the direct simplex optimization approach we now obtain so many bad estimators that their mean gives no indication of the solution of the BSDE. It seems that between ε = 1/4 and ε = 1/5 there is some 'break point' where the numerics suddenly fails. However, we have no exact explanation of what goes wrong at this point. Using ε < 1/5 leads to basically the same results.

We want to remark that this behavior of the algorithm concerning stability with respect to ε simply reflects its sensitivity to the Lipschitz constant K of the driver f. It is typical that only small K leads to good numerical results, while here K = 32 in the crude algorithm and K = 5 in the variance-reduced version already destroy the ability of this numerical approach.

Another special feature of this example is the number of Picard iterations needed to satisfy the stop condition $|\widehat{Y}^{\varepsilon,n,L}_{t_0} - \widehat{Y}^{\varepsilon,n-1,L}_{t_0}| < 0.001$. While for ε = 2 this number is low (5 to 9) for both variants (i.e., the crude and the variance-reduced algorithm with the direct simplex optimization method), and hence comparable to the earlier examples, we observe a remarkable increase for lower ε. For the crude algorithm and ε = 1/4 we have 33 out of 1500 launches where the stop condition does not take hold even after 100 Picard iterations, and for the variance-reduced algorithm with the same ε there are particular launches still requiring 54 iterations.

However, on average the number of iterations executed decreases from about 42 for the implementation without measure transformation to 35 if we use the variance-reduced algorithm and only take into account the estimators for $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ which are not equal to s₀.

Finally, it is very interesting to see that the optimal h in the direct simplex approach is independent of ε. Considering the procedure by which this number is obtained, this is quite astonishing or even unbelievable.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2, sequential gradient method. (b) ε = 1, direct simplex method.

Figure 4.17: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1, sequential simplex method. (b) ε = 1, sequential gradient method.

Figure 4.18: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2/3, direct simplex method. (b) ε = 2/3, sequential simplex method.

Figure 4.19: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2/3, sequential gradient method. (b) ε = 1/2, direct simplex method.

Figure 4.20: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1/2, sequential simplex method. (b) ε = 1/2, sequential gradient method.

Figure 4.21: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2/5, direct simplex method. (b) ε = 2/5, sequential simplex method.

Figure 4.22: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 2/5, sequential gradient method. (b) ε = 1/3, direct simplex method.

Figure 4.23: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1/3, sequential simplex method. (b) ε = 1/3, sequential gradient method.

Figure 4.24: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1/4, direct simplex method. (b) ε = 1/4, sequential simplex method.

Figure 4.25: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1/4, sequential gradient method. (b) ε = 1/5, direct simplex method.

Figure 4.26: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.


[Plots: mean and standard deviation of Y₀ as a function of the number of paths (0.6·10⁵ to 2·10⁵); importance sampling vs. crude least-squares Monte Carlo.]

(a) ε = 1/5, sequential simplex method. (b) ε = 1/5, sequential gradient method.

Figure 4.27: Convergence of $\widehat{Y}^{\varepsilon,n_{\mathrm{stop}},L}_{t_0}$ in the case of superhedging for a European call option with EIS and different optimization methods.

Trying ε = 1/6, 1/7, 1/8, 1/9 also leads to this choice, even though the subsequent algorithm yields bad results.