With the explicit introduction of uncertainties and risks, the overall performance of a decision $x$ becomes a tradeoff between different socioeconomic and environmental indicators (costs, benefits, incomes, damages) and indicators of risks. The classical example is the mean-variance efficient strategies providing a tradeoff between expected returns and the variance. Unfortunately, the concept of mean-variance efficient strategies may be misleading and even wrong for nonnormal probability distributions (especially for catastrophic risks), which require more sophisticated risk indicators and corresponding concepts of robust strategies. More precisely, in practice a given decision $x$ results in different outcomes $f(x,\omega) = (f_1(x,\omega),\ldots,f_m(x,\omega))$ affected by some uncertain (random) variables $\omega$. Formally, the overall performance of $x$ can often be summarized in the form of an expectation function
$$U(x) = E\, u(f_1(x,\omega),\ldots,f_m(x,\omega)),$$
where $u(\cdot)$ is a "utility" function defined on $f \in R^m$. The mean-variance efficient solutions maximizing $E f(x,\omega) - N\, E[f(x,\omega) - E f(x,\omega)]^2$, $N > 0$, can also be obtained from the maximization of the following type of function:
$$\max_{x,y} E\bigl[ f(x,\omega) - N (f(x,\omega) - y)^2 \bigr].$$
This representation convexifies the problem for $f(x,\omega) = -|f(x,\omega)|$, where $|f(\cdot,\omega)|$ is a convex (cost) function.
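The following minimal Python sketch (a numerical illustration only; the scalar indicator $f$, the normal distribution of $\omega$ and the constant $N$ are assumptions chosen for the example) shows that for a fixed decision the inner maximization over $y$ in the above representation is attained at $y = E f(x,\omega)$, so the objective reduces to the mean-variance criterion $E f(x,\omega) - N\,\mathrm{Var}\, f(x,\omega)$.

```python
# Monte Carlo check of the representation max_{x,y} E[f - N*(f - y)^2]:
# for fixed x the optimal y is E f(x,w), and the optimal value equals
# E f(x,w) - N * Var f(x,w).
import numpy as np

rng = np.random.default_rng(0)

def f(x, w):
    # hypothetical scalar performance indicator f(x, w)
    return x * w - 0.5 * x**2

def mv_objective(x, y, w_samples, N=1.0):
    vals = f(x, w_samples)
    return np.mean(vals - N * (vals - y) ** 2)

w = rng.normal(loc=1.0, scale=0.5, size=100_000)  # samples of the random variable w
x = 0.8
y_grid = np.linspace(-1.0, 2.0, 401)
obj = [mv_objective(x, y, w) for y in y_grid]
y_star = y_grid[int(np.argmax(obj))]

print("optimal y        :", y_star)
print("sample mean of f :", f(x, w).mean())              # close to y_star
print("mean - N*variance:", f(x, w).mean() - f(x, w).var())
print("max objective    :", max(obj))                    # close to the line above
```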

Traditionally the utility function is assumed to be continuous and differentiable. It is easy to see that all risk functions discussed in this section can be represented in the same form but with nonsmooth and even discontinuous utility functions. For example, if $u(\cdot)$ is the indicator function of the event $\{f \in R^m \mid f \ge c\}$, then
$$U(x) = P\{f(x,\omega) \ge c\}. \qquad (17)$$
If
$$u(f_1,f_2) = f_1\, I\{f_2 \ge 0\} = \begin{cases} 0, & f_2 < 0, \\ f_1, & f_2 \ge 0, \end{cases}$$
then we obtain function (2),
$$U(x) = \int_{f_2(x,\omega) \ge 0} f_1(x,\omega)\, P(d\omega). \qquad (18)$$
In the particular case $f_1(x,\omega) \equiv f_2(x,\omega) = f(x,\omega)$,
$$U(x) = E \max\{0, f(x,\omega)\}.$$
Functions $U(x)$ with nonsmooth and discontinuous integrand $u(\cdot)$ can be used as a unified concept for analyzing quite different risk management problems. In short, we can call such $U(x)$ risk functions and $u(f)$ the sample risk function or (extended) utility function. We can also call $U(x)$ the extended expected utility function. Note that although indicators (13), (14), (15) are defined through the stopping time $\tau(x)$, they can also be expressed in the form $E\, u(R_0, R_1(x), \ldots, R_T(x))$ with some discontinuous function $u(\cdot)$.
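As a small numerical illustration of (17) and (18) (the loss model $f(x,\omega) = \omega - x$, the lognormal distribution of $\omega$ and the threshold $c$ are assumptions for the example), both risk indicators are ordinary sample means of a discontinuous "utility" applied to simulated outcomes:

```python
# Risk indicators (17) and (18) as expectations E u(f(x,w)) of
# discontinuous utility functions u, estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)

def f(x, w):
    # hypothetical loss-type indicator; positive values mean shortfall
    return w - x

def u_indicator(z, c):
    # u for (17): indicator of the event {f >= c}
    return (z >= c).astype(float)

def u_shortfall(z):
    # u for (18) with f1 = f2 = f: max{0, f}
    return np.maximum(0.0, z)

x, c = 1.0, 0.5
w = rng.lognormal(mean=0.0, sigma=0.75, size=200_000)
losses = f(x, w)

U_prob = u_indicator(losses, c).mean()   # estimates P{f(x,w) >= c}
U_short = u_shortfall(losses).mean()     # estimates E max{0, f(x,w)}
print("P{f >= c}  ~", U_prob)
print("E max{0,f} ~", U_short)
```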

3 Risk functions

Consider the following risk function given in the form of extended expected utility:
$$U(x) = E\, u(f(x,\omega)), \qquad (19)$$
where $f: R^n \times \Omega \to R^m$ is a vector function continuous in $x$ and measurable in $\omega$, $u: R^m \to R^1$ is a Borel (extended utility) function, and $E$ (or $E_\omega$) denotes mathematical expectation with respect to the measure $P$ (or $P_\omega$) on $\Omega$. In general, as we discussed in Section 2, the function $u(\cdot)$ may be discontinuous on a set $D \subset R^m$.

Proposition 3.1 (Continuity of the risk function). Assume that
(i) $f(x,\omega)$ is a.s. continuous at the point $x$;
(ii) $P\{f(x,\omega) \in D\} = 0$;
(iii) $u(f(y,\omega)) \le M(\omega)$ for all $y$ from a vicinity of $x$, with an integrable function $M(\omega)$.
Then the function $U(x)$ is continuous at $x$.

The proposition follows from Lebesgue's dominated convergence theorem.

Denote
$$D_\epsilon = \{y \in R^m \mid \mathrm{dist}(y, D) \le \epsilon\}, \qquad \mathrm{dist}(y, D) = \inf_{z \in D} \|y - z\|.$$

Proposition 3.2 (Lipschitz continuity). Assume that
(i) $u(\cdot)$ is uniformly Lipschitzian in any ball outside the discontinuity set $D$;
(ii) $f(x,\omega)$ is a.s. Lipschitzian in $x \in X$ uniformly in $\omega$;
(iii) $P\{f(x,\omega) + y \in D_\epsilon\} \le C\epsilon$ for all $x \in X$, $y \in R^m$, $\epsilon > 0$ and some constant $C$;
(iv) $u(f(x,\omega) + y) \le M$ for all $x \in X$, $y \in R^m$ and some constant $M$.
Then the function $U(x,y) = E\, u(f(x,\omega) + y)$ is Lipschitz continuous in $(x,y) \in X \times R^m$, and hence the risk function $U(x) = U(x,0)$ is Lipschitzian in $x \in X$.

Proof. Let $L_u$ and $L_f$ be Lipschitz constants for $u$ and $f$, respectively. For given $x_1$, $x_2$, $y_1$, $y_2$ define $x_\lambda = x_1 + \lambda(x_2 - x_1)$, $y_\lambda = y_1 + \lambda(y_2 - y_1)$ with $\lambda \in [0,1]$, $\Delta = L_f \|x_2 - x_1\| + \|y_2 - y_1\|$, and $\Omega_2 = \{\omega \in \Omega \mid (f(x_1,\omega) + y_1) \in D_{2\Delta}\}$. Obviously,
$$\|f(x_\lambda,\omega) + y_\lambda - f(x_1,\omega) - y_1\| \le L_f \|x_2 - x_1\| + \|y_2 - y_1\| = \Delta.$$
Note that if $(f(x_1,\omega) + y_1) \in D_{2\Delta}$, then $(f(x_2,\omega) + y_2) \in D_{3\Delta}$, and if $(f(x_1,\omega) + y_1) \notin D_{2\Delta}$, then $(f(x_\lambda,\omega) + y_\lambda) \notin D_\Delta$ for any $\lambda \in [0,1]$. We have
$$U(x_2,y_2) - U(x_1,y_1) = \left(\int_{\Omega_2} + \int_{\Omega \setminus \Omega_2}\right) \bigl[u(f(x_2,\omega) + y_2) - u(f(x_1,\omega) + y_1)\bigr]\, P(d\omega)$$
$$\le M\, P\{(f(x_2,\omega) + y_2) \in D_{3\Delta}\} + M\, P\{(f(x_1,\omega) + y_1) \in D_{2\Delta}\} + \int_{\Omega \setminus \Omega_2} L_u \|f(x_2,\omega) + y_2 - f(x_1,\omega) - y_1\|\, P(d\omega)$$
$$\le (5MC + L_u)\bigl(L_f \|x_2 - x_1\| + \|y_2 - y_1\|\bigr). \qquad \Box$$

If the function $u(\cdot)$ is discontinuous, then it can be approximated in different ways by continuous functions $u_\epsilon(\cdot)$, for some parameter $\epsilon$, in such a way that $u_\epsilon(y) \to u(y)$ as $\epsilon \to 0$ for all $y \notin D$. Then the function $U(x)$ is approximated by the functions
$$U_\epsilon(x) = E\, u_\epsilon(f(x,\omega)). \qquad (20)$$

Proposition 3.3 (Convergence of approximations). Assume that
(i) $\lim_{\epsilon \to 0} P\{f(x,\omega) \in D_\epsilon\} = 0$ pointwise (uniformly) in $x \in X$;
(ii) $\lim_{\epsilon \to 0} u_\epsilon(z) = u(z)$ uniformly in $z \notin D_\delta$ for any $\delta > 0$;
(iii) $u(f(x,\omega))$ and $u_\epsilon(f(x,\omega))$ are bounded by a square-integrable function $M(\omega)$ uniformly in $x \in X$ and $\epsilon > 0$.
Then $\lim_{\epsilon \to 0} U_\epsilon(x) = U(x)$ pointwise (uniformly) in $x \in X$.

Proof. Define $\Omega_1 = \{\omega \in \Omega \mid f(x,\omega) \in D_\delta\}$ and $\Omega_2 = \Omega \setminus \Omega_1$. Then
$$|U_\epsilon(x) - U(x)| \le \int_{\Omega_1} |u_\epsilon(f(x,\omega)) - u(f(x,\omega))|\, P(d\omega) + \int_{\Omega_2} |u_\epsilon(f(x,\omega)) - u(f(x,\omega))|\, P(d\omega). \qquad (21)$$
The first term on the right-hand side of (21) can be made arbitrarily small by choosing $\delta$ small enough due to (i), (iii). For a given $\delta$, the second term on the right-hand side of (21) can be made arbitrarily small by choosing $\epsilon$ small enough due to (ii). $\Box$

One way to construct approximations $U_\epsilon(x)$ is to consider stochastically disturbed performance indicators
$$f(x,\omega,\theta) = f(x,\omega) + \epsilon\theta,$$
where $\epsilon$ is a small positive parameter and $\theta \in R^m$ is a random vector, independent of $\omega$, with density $K(\cdot)$. The corresponding disturbed risk function takes the form
$$U_\epsilon(x) = E_\theta E_\omega u(f(x,\omega,\theta)) = E_\omega E_\theta u(f(x,\omega) + \epsilon\theta) = E_\omega u_\epsilon(f(x,\omega)),$$
where $u_\epsilon(f)$ is the so-called smoothed (or mollified) utility function
$$u_\epsilon(y) = E_\theta\, u(y + \epsilon\theta) = \frac{1}{\epsilon^m} \int u(z)\, K\!\left(\frac{z - y}{\epsilon}\right) dz,$$
used in kernel density estimation (see, for example, [7]), in probability function optimization (see [22], [27]) and in nonsmooth optimization (see [25], [17] and references therein).
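A minimal sketch of this construction (assuming, for illustration, the discontinuous utility $u(y) = I\{y \ge 0\}$ and a uniform kernel $K$ on $[-1/2, 1/2]$; neither choice comes from the text) shows how the mollified utility $u_\epsilon$ approaches $u$ at continuity points as $\epsilon \to 0$:

```python
# Monte Carlo evaluation of the mollified utility u_eps(y) = E u(y + eps*theta)
# for the discontinuous u(y) = I{y >= 0} and a uniform kernel on [-1/2, 1/2].
import numpy as np

rng = np.random.default_rng(2)

def u(y):
    return (y >= 0.0).astype(float)        # discontinuous at y = 0

def u_eps(y, eps, n_samples=100_000):
    theta = rng.uniform(-0.5, 0.5, size=n_samples)   # samples of the kernel K
    # Monte Carlo estimate of E u(y + eps*theta) for each entry of y
    return u(y + eps * theta[:, None]).mean(axis=0)

ys = np.array([-0.2, -0.05, 0.0, 0.05, 0.2])
for eps in (0.5, 0.1, 0.02):
    print(f"eps={eps:5.2f}:", np.round(u_eps(ys, eps), 3))
print("u(y)      :", u(ys))   # u_eps -> u at every continuity point y != 0
```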

Proposition 3.4 (Convergence of mollified utilities at continuity points). Let $u(x)$ be a real-valued Borel measurable function on $R^m$, let $K(x)$ be a bounded, integrable, real-valued density function on $R^m$, and let one of the following conditions hold:
(i) $u(\cdot)$ is bounded on $R^m$;
(ii) $K(\cdot)$ has a compact support;
(iii) $\|y\|\, K(y) \to 0$ as $\|y\| \to \infty$, where $\|\cdot\|$ denotes the Euclidean norm on $R^m$.
Then $u_\epsilon(y) \to u(y)$ as $\epsilon \to 0$ at any continuity point of $u(\cdot)$.

The statement of the proposition under assumption (i) can be found in [3], and under (ii), (iii) it is available in [6].

Proposition 3.5 (Uniform convergence outside discontinuity points). Assume that
(i) $u(\cdot)$ is a Borel function with a closed set $D$ of discontinuity points;
(ii) the density $K(\cdot)$ has a compact support,
where $S(K) = \{z \mid K(z) > 0\}$ denotes the support of the density $K(\cdot)$. Since $D$ is closed and

Example 3.1 (Partial smoothing). If in (18) we disturb only the function $f_2$, then
$$U_\epsilon(x) = E_\eta E_\omega f_1(x,\omega)\, I\{f_2(x,\omega) + \epsilon\eta \ge 0\} = E_\omega f_1(x,\omega)\, E_\eta I\{f_2(x,\omega) + \epsilon\eta \ge 0\} = E_\omega f_1(x,\omega)\bigl(1 - F(-f_2(x,\omega)/\epsilon)\bigr),$$
where $F$ is the cumulative distribution function of the random variable $\eta$.
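A hedged numerical sketch of Example 3.1 (the indicators $f_1$, $f_2$, the exponential distribution of $\omega$ and the logistic choice of $F$ are assumptions for illustration) compares the partially smoothed function $U_\epsilon(x)$ with the unsmoothed $U(x)$:

```python
# Partially smoothed risk function of Example 3.1:
#   U_eps(x) = E_w [ f1(x,w) * (1 - F(-f2(x,w)/eps)) ],
# compared with U(x) = E_w [ f1(x,w) * I{f2(x,w) >= 0} ].
import numpy as np

rng = np.random.default_rng(3)

def f1(x, w):
    return 1.0 + 0.1 * w            # hypothetical profit-type indicator

def f2(x, w):
    return x - w                    # hypothetical solvency-type indicator

def F_logistic(y):
    return 1.0 / (1.0 + np.exp(-y))  # CDF of the smoothing variable eta

x = 1.2
w = rng.exponential(scale=1.0, size=200_000)

U_exact = np.mean(f1(x, w) * (f2(x, w) >= 0.0))
for eps in (0.5, 0.1, 0.01):
    U_eps = np.mean(f1(x, w) * (1.0 - F_logistic(-f2(x, w) / eps)))
    print(f"eps={eps:5.2f}  U_eps={U_eps:.4f}  (U={U_exact:.4f})")
```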

Proposition 3.6 (Uniform convergence under partial smoothing). Assume that the conditions of Proposition 3.5 are fulfilled and
(i) the function $E|f_1(x,\omega)|$ is bounded on $X$;
(ii) $P\{|f_2(x,\omega)| \le \delta\} \to 0$ as $\delta \to 0$ uniformly in $x \in X$.
Then $U_\epsilon(x)$ converges to $U(x)$ uniformly in $x \in X$.

Proof. For arbitrary numbers $C$, $\delta$ estimate the difference
$$|U_\epsilon(x) - U(x)| \le E\,|f_1(x,\omega)|\,\bigl|1 - F(-f_2(x,\omega)/\epsilon) - I\{f_2(x,\omega) \ge 0\}\bigr| \qquad (22)$$
and split the expectation in (22) into three terms corresponding to the events $\{|f_1(x,\omega)| > C\}$, $\{|f_1(x,\omega)| \le C,\ |f_2(x,\omega)| \le \delta\}$ and $\{|f_1(x,\omega)| \le C,\ |f_2(x,\omega)| > \delta\}$. The first term on the right-hand side of (22) is made arbitrarily small by taking $C$ sufficiently large by (i). The second term, for given $C$, is made small by taking $\delta$ sufficiently small by (ii). Given $C$ and $\delta$, the third term can be made small by taking $\epsilon$ small by Proposition 3.5. $\Box$

Example 3.2 (Smoothing the probability function). Consider the probability function
$$U(x) = P_\omega\{f_1(x,\omega) \le 0, \ldots, f_m(x,\omega) \le 0\}.$$
We can also approximate it by using $\theta_i = \eta$, $i = 1,\ldots,m$, where the random variable $\eta$ has the cumulative distribution function $F$. Then
$$U_\epsilon(x) = P_\omega P_\eta\{\eta \le -f_1(x,\omega)/\epsilon, \ldots, \eta \le -f_m(x,\omega)/\epsilon\} = P_\omega P_\eta\Bigl\{\eta \le -\frac{1}{\epsilon}\max_{1 \le i \le m} f_i(x,\omega)\Bigr\} = E_\omega F\Bigl(-\frac{1}{\epsilon}\max_{1 \le i \le m} f_i(x,\omega)\Bigr).$$
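The following sketch (with hypothetical constraint functions $f_1$, $f_2$ and a logistic $F$ for $\eta$; these choices are not from the text) estimates the smoothed probability function of Example 3.2 by Monte Carlo and compares it with the exact probability:

```python
# Smoothed probability function of Example 3.2:
#   U_eps(x) = E_w F(-max_i f_i(x,w)/eps)  ->  P{f_i(x,w) <= 0 for all i}.
import numpy as np

rng = np.random.default_rng(4)

def F_logistic(y):
    return 1.0 / (1.0 + np.exp(-y))   # CDF of the smoothing variable eta

def constraints(x, w):
    # two hypothetical constraint indicators; feasible if both are <= 0
    f1 = w[:, 0] - x
    f2 = w[:, 1] - 2.0 * x
    return np.column_stack([f1, f2])

x = 1.0
w = rng.normal(size=(200_000, 2))
fmax = constraints(x, w).max(axis=1)

U_exact = np.mean(fmax <= 0.0)
for eps in (1.0, 0.2, 0.02):
    U_eps = np.mean(F_logistic(-fmax / eps))
    print(f"eps={eps:5.2f}  U_eps={U_eps:.4f}  (U={U_exact:.4f})")
```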

If the functions $u_\epsilon$ and $f(x,\omega)$ in (20) are continuously (or generalized) differentiable, then the compound function $u_\epsilon(f(x,\omega))$ is also continuously (generalized) differentiable with (sub)differential $\partial_x u_\epsilon(f(x,\omega))$, which can be calculated by a chain rule (see [16], [26] for the nondifferentiable case). If the (sub)differential $\partial_x u_\epsilon(f(x,\omega))$ is majorized by an integrable (Lipschitz) constant and $\xi_\epsilon(x,\omega)$ is a measurable selection of $\partial_x u_\epsilon(f(x,\omega))$, then the function $U_\epsilon(x)$ is also (generalized) differentiable with (sub)differential
$$\partial U_\epsilon(x) = E\, \partial_x u_\epsilon(f(x,\omega)) \ni E\, \xi_\epsilon(x,\omega). \qquad (23)$$
For the optimization of $U_\epsilon(x)$ one can apply specific stochastic gradient methods (see Section 6) based on samples of $\xi_\epsilon(x,\omega)$ with $\epsilon \to 0$. For a given $\epsilon$ it is also possible to use sample mean optimization methods.

4 Stochastic smoothing of risk processes

To optimize risk functions we can apply mollifiers [17] over the decision variables $x$. Similarly, we can mollify the risk process over some of its parameters, for example, the initial state. In addition to the smoothing effects, which are usually weaker than in the first case, a significant advantage of the parametric smoothing is the possibility to obtain fast statistical estimators of the risk functions and their derivatives [12].

Besides the standard risk process (11), consider a process with random initial capital $R_0 + \epsilon\eta$ [12]:
$$Q_t(x,\epsilon) = R_0 + \epsilon\eta + \pi_t(x) - C_t(x) = R_t(x) + \epsilon\eta, \qquad 0 \le t \le T, \qquad (24)$$
where $\eta$ is a one-dimensional random variable, independent of all claims $C_t(x)$, with a continuously differentiable distribution function $F(y) = P\{\eta < y\}$, and $\epsilon$ is a small (smoothing) parameter ($\epsilon \to 0$).

We can think of (24) as the risk process (11) with disturbed initial values $R_0$ or $R_1(x)$. Through the dynamic equation (24) the disturbance $\epsilon\eta$ is transferred to further values $R_t(x)$, $t \ge 1$, of the process. Similarly, we can independently disturb all $R_t(x)$, $0 \le t \le T$, and interpret these disturbances as the presence of insignificant lines of business of the insurance company.

In subsection 2.3 we introduced important performance functions of the process (11): the probability of insolvency $\Psi_T(x)$, the partial expected profit $F_T(x)$, and the expected shortfall $H(x)$. Under assumption P(i) they are continuous, and under P(ii) they are Lipschitz continuous. Here we consider the same performance functions for the disturbed process (24). Under assumption P(ii), by the results of Section 3 (Propositions 3.3, 3.5), these approximations converge uniformly in $x$ to the original undisturbed performance functions as the disturbance goes to zero. The smoothing effects enable us to derive their subdifferentials.

4.1 The probability of ruin

Define the measure $P$ as the product of $P_\omega$ and $P_\eta$, $P = P_\eta \times P_\omega$. Then the probability of ruin before moment $T$ of the disturbed risk process $\{Q_t(x,\epsilon) = R_t(x) + \epsilon\eta,\ t = 0,1,\ldots,T\}$ is
$$\Psi_T(x,\epsilon) = 1 - P\{Q_t(x,\epsilon) \ge 0,\ 0 \le t \le T\} = 1 - P\{\eta \ge -R_t(x)/\epsilon,\ 0 \le t \le T\}$$
$$= 1 - P\Bigl\{\max_{0 \le t \le T}\bigl(-R_t(x)/\epsilon\bigr) \le \eta\Bigr\} = P\Bigl\{\eta < -\min_{0 \le t \le T} R_t(x)/\epsilon\Bigr\} = E_\omega F\Bigl(-\min_{0 \le t \le T} R_t(x)/\epsilon\Bigr),$$
with a subdifferential (see Clarke [4], Theorems 2.3.9, 2.3.12, 2.7.2)
$$\partial \Psi_T(x,\epsilon) = -E_\omega \Bigl[ F'\bigl(-R_t(x)/\epsilon\bigr)\, \nabla R_t(x)/\epsilon \Bigr]\Big|_{t \in t^*(x)}, \qquad (25)$$
where $t^*(x) = \mathrm{argmin}_{0 \le t \le T} R_t(x)$, and the functions $R_t(x)$ are assumed continuously differentiable in $x$.
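A hedged numerical sketch of this construction (the claim model, the role of $x$ as a premium rate and the logistic smoothing distribution are assumptions for illustration) estimates the smoothed ruin probability $\Psi_T(x,\epsilon) = E_\omega F(-\min_{0 \le t \le T} R_t(x)/\epsilon)$ together with the sample gradient suggested by (25):

```python
# Smoothed ruin probability of a simulated risk process R_t(x) = R0 + x*t - S_t
# and the sample (sub)gradient -(1/eps)*F'(-R_{t*}(x)/eps)*dR_{t*}(x)/dx.
import numpy as np

rng = np.random.default_rng(6)
R0, T, n_paths = 2.0, 20, 100_000

def F(y):                        # logistic CDF of the smoothing variable eta
    return 1.0 / (1.0 + np.exp(-y))

def F_prime(y):                  # its density
    s = F(y)
    return s * (1.0 - s)

def reserve_paths(x):
    # R_t(x) = R0 + x*t - S_t with i.i.d. exponential claims per period;
    # dR_t/dx = t since x acts as the premium rate
    claims = rng.exponential(scale=1.0, size=(n_paths, T))
    S = np.cumsum(claims, axis=1)
    t = np.arange(1, T + 1)
    R = np.concatenate([np.full((n_paths, 1), R0), R0 + x * t - S], axis=1)
    dRdx = np.concatenate([np.zeros((n_paths, 1)), np.tile(t, (n_paths, 1))], axis=1)
    return R, dRdx

x, eps = 1.2, 0.1
R, dRdx = reserve_paths(x)
t_star = np.argmin(R, axis=1)                 # minimizing time t*(x) per path
R_min = R[np.arange(n_paths), t_star]
dR_min = dRdx[np.arange(n_paths), t_star]     # dR_{t*}/dx = t*

ruin_exact = np.mean(R_min < 0.0)             # undisturbed ruin probability
ruin_eps = np.mean(F(-R_min / eps))           # smoothed ruin probability
grad_eps = np.mean(-F_prime(-R_min / eps) * dR_min / eps)   # estimator of (25)

print("P{ruin}           ~", round(float(ruin_exact), 4))
print("Psi_T(x, eps)     ~", round(float(ruin_eps), 4))
print("d/dx Psi_T(x,eps) ~", round(float(grad_eps), 4))
```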