
NUMERICAL SOLUTION OF PROBABILISTIC CONSTRAINED PROGRAMMING PROBLEMS



A. Prekopa

5.1 Introduction

In this paper we present solution techniques for problems of the following kind:

    minimize h(x)

    subject to

    h_0(x) = P(g_1(x,ξ) ≥ 0, ..., g_r(x,ξ) ≥ 0) ≥ p,    (5.1)
    h_1(x) ≥ p_1, ..., h_m(x) ≥ p_m,

where for the sake of simplicity we assume that the functions h, h_1, ..., h_m are defined on the whole n-dimensional space. Similarly, the functions g_1(x,y), ..., g_r(x,y) are supposed to be defined on the whole (n+q)-dimensional space, x ∈ R^n, y ∈ R^q. For the probability p the notation p_0 will also be used.

Various engineering and economic problems can be cast into this form.

We do not intend here to survey the application models belonging to this category. We only refer to a few papers [1]-[n], where the interested reader may find model formulations and references to applications.

The most important special case of Problem (5.1) is obtained by specializing the functions g_i(x,y), i = 1, ..., r, so that

    g_i(x,y) = T_i x - y_i,    i = 1, ..., r,

where T_1, ..., T_r are rows of an r × n matrix T.

In this case the probabilistic constraint in Problem (5.1) takes the form

    P(Tx ≥ ξ) ≥ p.    (5.2)

Introducing the notation F(z) for the joint probability distribution function of the components of the random vector ξ, i.e.,

    F(z) = P(ξ ≤ z),

the constraint (5.2) can be written in the following manner:

    F(Tx) ≥ p.    (5.3)
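To make the constraint (5.3) concrete, the sketch below evaluates F(Tx) ≥ p in the illustrative special case where the components of ξ are independent standard normal variables, so that F factorizes into a product of univariate normal distribution functions. The matrix T, the point x, and the level p are made-up illustrative data, not taken from the text.

```python
import math

def std_normal_cdf(z):
    """Univariate standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F(z):
    """Joint distribution function F(z) = P(xi <= z) under the simplifying
    assumption of independent standard normal components of xi."""
    prob = 1.0
    for z_i in z:
        prob *= std_normal_cdf(z_i)
    return prob

# Check the constraint F(Tx) >= p for illustrative data.
T = [[1.0, 0.0],
     [0.0, 1.0]]
x = [2.0, 2.0]
Tx = [sum(t_ij * x_j for t_ij, x_j in zip(row, x)) for row in T]
p = 0.95
feasible = F(Tx) >= p   # F([2, 2]) = Phi(2)^2, roughly 0.955
```

For a dependent ξ no such factorization exists and F(Tx) must be obtained by multivariate integration or simulation, which is exactly the difficulty discussed later in the paper.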

Before proceeding to describe the numerical solution techniques for Problem (5.1), we mention the following theorem, which serves as a basis of the convergence theory in many special cases. For the proof of the theorem we refer to the summarizing paper [12] and the references there.

124 Stochastic Optimization Problems

Theorem 5.1. If g_1(x,y), ..., g_r(x,y) are concave functions in R^{n+q} and ξ has a continuous probability distribution with logarithmically concave probability density function f, i.e., for every x_1, x_2 ∈ R^q and 0 < λ < 1 we have

    f(λx_1 + (1-λ)x_2) ≥ [f(x_1)]^λ [f(x_2)]^{1-λ},

then the function h_0 is also logarithmically concave in R^n.

This theorem implies that if ξ has the required property, then the function standing on the left-hand side of (5.2) is logarithmically concave.
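As an illustration of the density condition in Theorem 5.1, the following sketch numerically spot-checks the defining inequality for the standard normal density, which is a classical logconcave density (its logarithm, -x²/2 plus a constant, is concave). The grid of test points is arbitrary and chosen only for illustration.

```python
import math

def f(x):
    """Standard normal density, a classical logconcave density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

points = [-2.0, -0.5, 0.0, 1.0, 3.0]
lambdas = [0.1, 0.5, 0.9]

# Spot-check f(l*x1 + (1-l)*x2) >= f(x1)^l * f(x2)^(1-l) on a small grid.
logconcave = all(
    f(l * x1 + (1 - l) * x2) >= f(x1) ** l * f(x2) ** (1 - l) - 1e-12
    for x1 in points for x2 in points for l in lambdas
)
```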

Maximization of Probability and the Method of Two Phases

Together with problem (5.1) we also formulate the problem

    maximize h_0(x) = P(g_1(x,ξ) ≥ 0, ..., g_r(x,ξ) ≥ 0)
    subject to h_1(x) ≥ p_1, ..., h_m(x) ≥ p_m.    (5.4)

This problem has practical importance too. Many reliability problems belong to this category. For one practical application we refer to the paper [13], where a sequential decision process consists of a sequence of problems of the type (5.4).

Another importance of problem (5.4) is that, when solving problem (5.1), a two-phase method can be applied, where in the first phase we seek a feasible solution and in the second phase we solve the original problem. Assuming that we possess a method to find a feasible solution to the system of inequalities h_1(x) ≥ p_1, ..., h_m(x) ≥ p_m, a feasible solution to problem (5.1) can be found in such a way that we start to solve problem (5.4) and stop the procedure when we reach an x satisfying h_0(x) ≥ p. This x is a feasible solution to problem (5.1).
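The first-phase idea can be sketched in one dimension. Assume, purely for illustration, that h_0(x) = Φ(x), the standard normal distribution function, and that the deterministic constraints are already satisfied at the starting point; any ascent procedure for h_0 may then be stopped as soon as h_0(x) ≥ p.

```python
import math

def h0(x):
    """Illustrative one-dimensional h0: the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p = 0.9
x = 0.0        # starting point, assumed feasible for the deterministic constraints
while h0(x) < p:
    x += 0.01  # any step that increases h0 serves the first phase
# the loop stops at the first x with h0(x) >= p: a feasible point of (5.1)
```

In higher dimensions the crude fixed step would be replaced by whatever ascent method is used to solve (5.4); the stopping rule h_0(x) ≥ p is the only part specific to the two-phase scheme.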

For the solution of problem (5.1) we propose the application of suitable nonlinear programming methods supplied by Monte Carlo simulation procedures to find function values and gradients of the function h_0. There exist other proposals to solve stochastic programming problems, among which the stochastic quasi-gradient method of Yu. Ermoliev and his collaborators should be mentioned. There is, however, little experience regarding how this method works in the case of problems (5.1) and (5.4). On the other hand, the application of the already well-developed theory and techniques of nonlinear programming seems to be advantageous. In this case, among others, we are able to present an optimality criterion which helps us to check the termination of the applied optimization procedure.

A nonlinear programming method which has proved to be effective for deterministic nonlinear programming problems is not necessarily effective for the solution of problems (5.1) and (5.4). The reason is that in problems (5.1) and (5.4) each value of the function h_0 is the probability of a set in R^q, and these values, furthermore the values of the gradient of h_0, are calculated by Monte Carlo simulation. The latter gives us a satisfactorily accurate value provided the sample size is chosen large enough. However, we are able to do so only in the case where the effect of the Monte Carlo simulation can be well controlled, i.e., the effect of this kind of randomness can clearly be seen throughout the procedure and the numerically unstable steps can be avoided or at least controlled.
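As a minimal illustration of such a controlled simulation, the sketch below estimates h_0(x) = P(Tx ≥ ξ) by crude Monte Carlo, drawing one fixed sample of ξ and reusing it for every x (common random numbers). This is one simple way to keep the simulation randomness visible and controlled across iterations; the data and sample size are illustrative assumptions, not taken from the text.

```python
import random

random.seed(0)  # fix the randomness so repeated runs are comparable

def h0_estimate(x, T, xi_sample):
    """Monte Carlo estimate of h0(x) = P(Tx >= xi) from a fixed sample of xi."""
    Tx = [sum(t_ij * x_j for t_ij, x_j in zip(row, x)) for row in T]
    hits = sum(
        1 for xi in xi_sample
        if all(txi >= xii for txi, xii in zip(Tx, xi))
    )
    return hits / len(xi_sample)

# One fixed sample reused for every x (common random numbers): the estimated
# h0 then varies smoothly with x, which keeps the optimization stable.
xi_sample = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(20000)]
T = [[1.0, 0.0],
     [0.0, 1.0]]
est = h0_estimate([2.0, 2.0], T, xi_sample)  # exact value is Phi(2)^2, ~0.955
```

The standard error of such an estimate is of order sqrt(p(1-p)/N), so the sample size N must grow roughly quadratically with the required accuracy.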

5.2 The SUMT Method with Logarithmic Penalty Function

We introduce the following assumptions:

• 0 < p < 1, p_1 > 0, ..., p_m > 0, h is convex in R^n,
• h_1, ..., h_m are continuous logconcave functions in R^n,
• g_1, ..., g_r are concave functions in R^{n+q},
• the set of feasible solutions is compact,
• there exists an x satisfying h_i(x) > p_i, i = 0, ..., m,
• ξ has a continuous probability distribution with logarithmically concave density.

The Sequential Unconstrained Minimization Technique [2] applied to our problem works in the following manner [10].

We define the penalty function

    T(x,s) = h(x) - s Σ_{i=0}^{m} ln[(h_i(x) - p_i)/M_i]    (5.5)

for every x satisfying h_i(x) > p_i, i = 0, ..., m, and for every fixed s > 0, where M_i is the maximum of h_i(x) - p_i on the set of feasible solutions. Take a positive sequence s_1 > s_2 > ... with the property that lim_{k→∞} s_k = 0 and minimize the function T(x, s_k) for every fixed s_k. As the set of feasible solutions is compact, the minimum of T(x, s_k) exists. Let x^k be an optimal solution to this problem. Then we have the relation

    lim_{k→∞} T(x^k, s_k) = lim_{k→∞} h(x^k) = min_{x∈D} h(x),    (5.6)

where D denotes the set of feasible solutions. It is remarkable that under the mentioned assumptions the function T(x,s) is a convex function for every fixed s > 0, thus various unconstrained optimization techniques work effectively. Computing the values and the gradients of h_0 remains a difficult problem, to which we return later. In practice the sequence s_1, s_2, ... is chosen as a geometric sequence and the procedure frequently stops after a few steps.
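A minimal one-dimensional sketch of the procedure, assuming (for illustration only) h(x) = x and a single probabilistic constraint h_0(x) = Φ(x) ≥ p with p = 0.95. The normalization of the log argument by M_0 in (5.5) only shifts T(x,s) by a constant and does not move the minimizer, so it is dropped here; the inner minimizations use ternary search, which is valid because T(·,s) is convex for each fixed s.

```python
import math

def Phi(x):
    """Standard normal distribution function, playing the role of h0."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p = 0.95  # prescribed probability level

def T_pen(x, s):
    """Penalty function (5.5) for h(x) = x and one constraint Phi(x) > p.
    The constant term from the M_i normalization is omitted."""
    return x - s * math.log(Phi(x) - p)

def argmin_convex(f, lo, hi, iters=200):
    """Ternary search: valid because T(., s) is convex for fixed s > 0."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

s = 1.0
x_k = None
while s > 1e-8:                      # geometric sequence s_1 > s_2 > ... -> 0
    x_k = argmin_convex(lambda x: T_pen(x, s), 1.65, 6.0)
    s *= 0.1
# x_k tends to the constrained minimum Phi^{-1}(0.95), about 1.6449, approached
# from the interior Phi(x) > p (the search interval starts slightly inside)
```

In practice the minimizer of T(·, s_k) would be used as the starting point for the minimization with s_{k+1}, which is what makes the geometric decrease of s cheap.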

Below we prove two theorems which help to check properties generally required when solving optimization problems by the SUMT method.

Theorem 5.2.1. If a function h is logconcave on the convex set given by the relation

    H = {x | h(x) ≥ p},

where p is a fixed probability satisfying the inequality 0 < p < 1, then the function h(x) - p is also logconcave on the set H.

Proof. Let x, y ∈ H, x ≠ y, and 0 < λ < 1. Then, since h is logconcave on H, we have the inequality

    h(λx + (1-λ)y) - p ≥ [h(x)]^λ [h(y)]^{1-λ} - p.

Setting h(x) = a, h(y) = b, it will be enough to prove the inequality

    a^λ b^{1-λ} - p ≥ (a-p)^λ (b-p)^{1-λ}.

Dividing by a^λ b^{1-λ} on both sides we obtain

    1 - (p/a)^λ (p/b)^{1-λ} ≥ ((a-p)/a)^λ ((b-p)/b)^{1-λ}.

Now, using the arithmetic mean-geometric mean inequality, we derive

    (p/a)^λ (p/b)^{1-λ} + ((a-p)/a)^λ ((b-p)/b)^{1-λ}
      ≤ λ(p/a) + (1-λ)(p/b) + λ((a-p)/a) + (1-λ)((b-p)/b) = 1.

This proves the theorem.
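The key inequality in the proof above can be spot-checked numerically; the grid of values for a = h(x), b = h(y) (both at least p, as on the set H) and for λ is arbitrary illustrative data.

```python
# Spot-check of the key inequality in the proof of Theorem 5.2.1:
# a^l * b^(1-l) - p >= (a-p)^l * (b-p)^(1-l) for a, b >= p and 0 < l < 1.
p = 0.3
values = [0.35, 0.5, 0.7, 0.9, 0.99]   # candidate values a = h(x), b = h(y) >= p
lambdas = [0.1, 0.25, 0.5, 0.75, 0.9]

holds = all(
    a ** l * b ** (1 - l) - p >= (a - p) ** l * (b - p) ** (1 - l) - 1e-12
    for a in values for b in values for l in lambdas
)
```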

Theorem 5.2.1 shows that under the conditions introduced in the beginning of this section the function T(x,s) is convex for every fixed s > 0 on the set of x vectors satisfying the inequalities h_i(x) > p_i, i = 0, 1, ..., m.

Theorem 5.2.2. Suppose that in problem (5.1) the assumptions introduced in Section 5.2 hold and let z be a nonboundary point of the set of feasible solutions. Then we have

    h_i(z) > p_i,    i = 0, 1, ..., m.

Proof. By the assumptions introduced in the beginning of this section there exists an x satisfying the inequalities

    h_i(x) > p_i,    i = 0, 1, ..., m.

We may assume that z ≠ x. For some μ > 1 the point

    y = x + μ(z - x)

is a boundary point of the set of feasible solutions. Using the notation λ = 1/μ we obtain

    z = λy + (1-λ)x.

By the logconcavity of the constraining functions and taking into account the inequalities p_i > 0, i = 0, 1, ..., m, we obtain

    h_i(z) = h_i(λy + (1-λ)x) ≥ [h_i(y)]^λ [h_i(x)]^{1-λ}
           ≥ p_i^λ [h_i(x)]^{1-λ} > p_i^λ p_i^{1-λ} = p_i,    i = 0, 1, ..., m.

This proves the theorem.

Theorem 5.2.2 states that at every nonboundary feasible solution of problem (5.1) the penalty function (5.5) is defined, and this makes possible the proof of the limit relation (5.6) also in the case when the optimal solution is on the boundary of the set of feasible solutions.

Finally we remark that the application of the SUMT method is particularly advantageous in cases when the calculation of the gradients of h_0 (and eventually also of h_i, i = 1, ..., m) would be sophisticated, not so much because of the probabilistic nature of h_0 but because of the special structure of the functions g_1, ..., g_r. In such cases gradient-free techniques may be applied to minimize T(x,s).

5.3 Solution by the Method of Feasible Directions

The following assumptions are introduced:

• the probabilistic constraint has the form (5.3),
• h is convex and has a continuous gradient in R^n,
• h_1, ..., h_m are quasi-concave and have continuous gradients in R^n,
• the constraints in which the constraining functions are linear determine a bounded set,
• there exists an x satisfying h_i(x) > p_i, i = 0, ..., m,
• ξ has a continuous probability distribution with logarithmically concave density.

The method uses subsequent linearization of the constraints and the objective function. We start from an arbitrary feasible vector x^1, and if x^1, ..., x^k are already fixed, then first we solve the following direction finding problem:

    minimize y
    subject to ∇h(x^k)(x - x^k) ≤ y,
               h_i(x^k) + ∇h_i(x^k)(x - x^k) ≥ p_i,    (5.7)
               ∇h_i(x^k)(x - x^k) + θ_i y ≥ 0, if h_i(x^k) = p_i and h_i is a
                 nonlinear function, i = 0, 1, ..., m,

where the θ_i are fixed positive numbers not depending on the individual problems (5.7). If z^k is an optimal solution of problem (5.7), then we solve the following step length finding problem:

    min_λ h(x^k + λ(z^k - x^k)),    (5.8)

where the minimization is extended over those λ values for which x^k + λ(z^k - x^k) is feasible. If λ_k is an optimal solution of problem (5.8), then we define

    x^{k+1} = x^k + λ_k(z^k - x^k).

Under the assumptions introduced in the beginning of this section the following limit relation holds:

    lim_{k→∞} h(x^k) = min_{x∈D} h(x).    (5.9)

The above procedure was published by Zoutendijk [16]. The convergence proof under the mentioned conditions is presented in [10].
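In one dimension the whole iteration can be sketched. Assume, for illustration only, the problem minimize h(x) = x subject to Φ(x) ≥ p, with the linear constraints inactive, a per-iteration step bound of 2 standing in for the bounded linearized feasible set, and exact univariate operations replacing the LP (5.7) and the line search (5.8); none of this comes from the text.

```python
import math

def Phi(x):
    """h0 in the illustrative problem: the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p = 0.95   # probabilistic level: the constraint is Phi(x) >= p
x = 3.0    # feasible starting point x^1 (Phi(3) > 0.95)

for _ in range(100):
    if Phi(x) - p < 1e-6:
        # constraint active: the direction problem (5.7) -- minimize y
        # subject to d <= y and Phi'(x)*d + theta*y >= 0 -- yields d = y = 0
        break
    d = -1.0   # the objective gradient is 1 > 0, so the LP steps downward
    # step length problem (5.8): h decreases along d, so take the largest
    # feasible step within the bound |lam| <= 2; locate the feasibility
    # boundary Phi(x + lam*d) = p by bisection, staying on the feasible side
    lam_lo, lam_hi = 0.0, 2.0
    if Phi(x + lam_hi * d) >= p:
        lam_lo = lam_hi          # the full step is still feasible
    else:
        for _ in range(80):
            mid = 0.5 * (lam_lo + lam_hi)
            if Phi(x + mid * d) >= p:
                lam_lo = mid
            else:
                lam_hi = mid
    x += lam_lo * d
# x approaches the constrained optimum Phi^{-1}(0.95), about 1.6449
```

In more than one dimension the direction problem (5.7) is a genuine linear program and the line search must use the Monte Carlo estimates of h_0, which is where the accuracy requirements discussed below enter.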

Of particular interest is the case where all constraining functions but h_0 are linear. Writing h_i(x) = a_i'x, i = 1, ..., m, and h(x) = c'x, the problem is to

    minimize c'x
    subject to P(Tx ≥ ξ) ≥ p,    (5.10)
               a_i'x ≥ p_i,    i = 1, ..., m.

The first phase problem, to find a feasible solution to (5.10), is the following:

    maximize P(Tx ≥ ξ)
    subject to a_i'x ≥ p_i,    i = 1, ..., m.    (5.11)

When maximizing the objective function in problem (5.11), we can stop the procedure whenever we reach an x satisfying

    P(Tx ≥ ξ) ≥ p.    (5.12)

On the other hand, if we continue the procedure until the inequality (5.12) holds strictly, we have numerical evidence that the regularity condition (the second to last assumption) holds true.

If the probability P(Tx ≥ ξ) is positive on the set of feasible solutions, then we take its negative logarithm and minimize this rather than maximize the original probability. Thus the new problem, equivalent to problem (5.11), is the following:

    minimize -log P(Tx ≥ ξ)
    subject to a_i'x ≥ p_i,    i = 1, ..., m.    (5.13)

The gradient of the objective function in problem (5.13) can be computed on the basis of the equality

    ∇ log P(Tx ≥ ξ) = [1 / P(Tx ≥ ξ)] ∇P(Tx ≥ ξ).
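The chain-rule equality above can be verified numerically in the simplest one-dimensional case, taking T = (1) and ξ standard normal, so that P(Tx ≥ ξ) = Φ(x) and its derivative is the normal density; these choices are illustrative assumptions.

```python
import math

def P(x):
    """Illustrative one-dimensional P(Tx >= xi) with T = (1), xi ~ N(0,1)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dP(x):
    """Gradient of P: the standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

x = 1.0
grad_logP = dP(x) / P(x)   # the equality: grad log P = (1/P) * grad P

# central finite-difference check of the same quantity
eps = 1e-6
fd = (math.log(P(x + eps)) - math.log(P(x - eps))) / (2.0 * eps)
```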

The method of feasible directions is considered today a slow method for solving nonlinear programming problems. Taking into account the special aspects of probabilistic constrained programming problems, however, we cannot be as dissatisfied with its performance. Problems (5.7) and (5.8) clearly show how accurately we have to compute the function values and the gradient values in order to obtain good approximations.

5.4 Solution by the Supporting Hyperplane Method

We introduce the following assumptions:

• there exists a bounded convex polyhedron K_1 such that the set of feasible solutions is contained in K_1,
• the functions -h, h_1, ..., h_m are quasi-concave and have continuous gradients on K_1,
• there exists an x such that h_i(x) > p_i, i = 0, ..., m,
• ξ has a continuous probability distribution with logconcave density in R^q; furthermore, h_0 has a continuous gradient in R^n.

We assume that we have an initial feasible vector x^1. Then we perform subsequent iterations, where the kth iteration in this method consists of two subsequent steps.