
$$F(x) = \sum_{\ell=1}^{L} p_\ell\, f(x, \xi^\ell).$$

2.1.2 Fundamentals of approximation techniques

When constructing approximations to stochastic programming problems we have to analyze the following mutually related questions.

First, we have to find a proper way of replacing the original random vector $\xi$ with a discrete one.

Secondly, we have to study the relations between the original problem and the approximate problem and estimate the accuracy of approximation.

Thirdly, we need a method of improving the accuracy, if it is not sufficient, by constructing a better approximation to $\xi$.

Before investigating these problems in detail, let us introduce some basic ideas and mathematical properties of this approach.

Let $S \subset \mathbb{R}^m$ be the support of the random vector $\xi$ (i.e., the smallest closed set in $\mathbb{R}^m$ such that $P\{\xi \in S\} = 1$) and let $S^L$ be a finite collection of subsets $S_\ell$, $\ell = 1,2,\ldots,L$, of $S$ satisfying the following conditions:

$$\bigcup_{\ell=1}^{L} S_\ell = S, \qquad (2.3)$$

$$S_i \cap S_j = \emptyset \quad \text{for } i \neq j; \quad i,j = 1,2,\ldots,L. \qquad (2.4)$$

We shall call $S^L$ a partition of $S$.

For any partition we can rewrite the integral (2.1) as follows:

$$F(x) = \sum_{\ell=1}^{L} \int_{S_\ell} f(x,\xi)\, P(d\xi), \qquad (2.5)$$

where we perform the integration over the support $S \subset \mathbb{R}^m$ and use the description of the distribution of $\xi$ in the space of its values. In the particular case (2.1b), which is of special interest for us, (2.5) reads

$$F(x) = \sum_{\ell=1}^{L} \int \cdots \int_{S_\ell} f(x,\xi)\, \rho(\xi)\, d\xi_1\, d\xi_2 \cdots d\xi_m. \qquad (2.5a)$$

Proceeding as in the simplest method for calculating integrals, we can now approximate each integral over $S_\ell$ as follows:

$$\int_{S_\ell} f(x,\xi)\, P(d\xi) \approx f(x,\xi^\ell) \int_{S_\ell} P(d\xi) = f(x,\xi^\ell)\, P\{\xi \in S_\ell\}, \qquad (2.6)$$

where $\xi^\ell$ is a selected representative of the subset $S_\ell$. In other words, we approximate the function $f(x,\xi)$ by a step function in $\xi$, which is constant on each set $S_\ell$, $\ell = 1,2,\ldots,L$. In this way we arrive at the following approximation of $F(x)$:

$$F_L(x) = \sum_{\ell=1}^{L} p_\ell\, f(x,\xi^\ell), \qquad p_\ell = P\{\xi \in S_\ell\}. \qquad (2.7)$$

Since by (2.3) and (2.4) we also have $\sum_{\ell=1}^{L} p_\ell = 1$, our approximation can be equivalently interpreted as an approximation of $\xi$ by a discrete random vector $\bar\xi$ attaining the values $\xi^\ell$ with probabilities $p_\ell$, $\ell = 1,2,\ldots,L$, and our approximating formula (2.7) is exactly of the form (2.1a).
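To make the construction concrete, here is a minimal numerical sketch, not part of the original text: it builds $F_L$ from (2.7) in one dimension, assuming for illustration a standard normal $\xi$ truncated to $[-3,3]$, the test function $f(x,\xi) = (x-\xi)^2$, and cell midpoints as the representatives $\xi^\ell$.

```python
import numpy as np
from scipy import stats

def discrete_approximation(f, x, a, b, L, dist):
    """Approximate F(x) = E f(x, xi) by F_L(x) = sum_l p_l f(x, xi^l), cf. (2.7),
    using a uniform partition of the support [a, b] into L cells."""
    edges = np.linspace(a, b, L + 1)         # cell boundaries of S_1, ..., S_L
    cdf = dist.cdf(edges)
    p = np.diff(cdf) / (cdf[-1] - cdf[0])    # p_l = P{xi in S_l}, renormalized for truncation
    xi = 0.5 * (edges[:-1] + edges[1:])      # representatives xi^l: cell midpoints
    return float(np.sum(p * f(x, xi)))

f = lambda x, xi: (x - xi) ** 2              # illustrative integrand
print(discrete_approximation(f, x=1.0, a=-3.0, b=3.0, L=20, dist=stats.norm()))
# roughly 2: E (1 - xi)^2 = 1 + Var(xi) for a centered xi
```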

Generally, if the support $S$ is bounded and if $\max_{1 \le \ell \le L} P\{\xi \in S_\ell\} \to 0$ as $L \to \infty$, then for each $x$, under reasonable assumptions on $f(x,\xi)$, we get pointwise convergence of the function values: $F_L(x) \to F(x)$ as $L \to \infty$.

This fundamental and highly desirable property, however, is not sufficient for us, because we are rather interested in the convergence of the sequence of solutions $\hat x_L$ of the approximate problems, or at least of its convergent subsequences, to a solution of the original optimization problem. Some additional conditions, e.g. compactness of the feasible set for $x$ together with the uniform convergence of $F_L$ to $F$ and continuity of $F$, are needed to ensure such a kind of convergence.
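The pointwise convergence can be observed empirically by refining the partition; this short check reuses the hypothetical `discrete_approximation` helper and `f` from the sketch above.

```python
# Empirical check of F_L(x) -> F(x) as the partition is refined.
for L in (2, 4, 8, 16, 32, 64):
    print(L, discrete_approximation(f, x=1.0, a=-3.0, b=3.0, L=L, dist=stats.norm()))
# The values stabilize as L grows; this says nothing yet about
# convergence of the minimizers, which needs the extra conditions above.
```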

We shall not go further into the analysis of these theoretical problems; a thorough discussion of them and various generalizations can be found in [1], [15], [30], [34].

Still, in many practical problems such conditions are satisfied. It is also often the case in practice that a point $\bar x$ is satisfactory whose objective value lies within a certain tolerance of the minimum value, and this is possible to achieve for a far broader class of problems.

Nevertheless, it is still very difficult to determine in advance how fine the partition should be to ensure the accuracy of the approximation. Division of $S$ into many small pieces $S_\ell$, $\ell = 1,2,\ldots,L$, without any strategy may dramatically increase the computational complexity of the approximate problem. To illustrate the difficulties that may arise, let us suppose that there are 10 independent scalar random variables in our original problem, so that $\xi = (\xi_1, \xi_2, \ldots, \xi_{10})$. If the support of each $\xi_i$, $i = 1,2,\ldots,10$, is divided into 10 subintervals, we get $10^{10}$ subsets $S_\ell$ of the support $S$ of $\xi$, a number which is clearly beyond any computational capabilities.
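A two-line check of this combinatorial explosion (purely illustrative):

```python
# A product partition with 10 subintervals per coordinate yields 10^m cells.
for m in (1, 2, 5, 10):
    print(m, 10 ** m)   # ends at the 10**10 = 10,000,000,000 cells from the text
```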

To avoid such excessive numbers of subsets $S_\ell$ we have to use nonuniform partitions which are suited to the properties of $f(x,\xi)$ as a function of $\xi$. The problem of constructing such partitions is closely related to the way of choosing the points $\xi^\ell \in S_\ell$. Considering only convergence, these can be arbitrary points; however, if we choose them more carefully, namely as conditional expectations

$$\xi^\ell = E\{\xi(\omega) \mid \xi(\omega) \in S_\ell\} \qquad (2.8)$$

with probabilities

$$p_\ell = P\{\xi(\omega) \in S_\ell\}, \qquad (2.9)$$

then we shall not only improve the accuracy of the approximation in many cases, but also gain information that will help us to properly refine the partition if the accuracy is not sufficient.
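As a concrete illustration, a minimal sketch (again with an assumed one-dimensional density and uniform cells) computes the conditional means (2.8) and cell probabilities (2.9) by numerical integration:

```python
import numpy as np
from scipy import stats, integrate

def conditional_representatives(pdf, a, b, L):
    """xi^l = E{xi | xi in S_l} (2.8) and p_l = P{xi in S_l} (2.9)
    for a uniform partition of [a, b] into L cells."""
    edges = np.linspace(a, b, L + 1)
    p, xi = np.empty(L), np.empty(L)
    for l in range(L):
        lo, hi = edges[l], edges[l + 1]
        p[l] = integrate.quad(pdf, lo, hi)[0]                 # cell probability
        m1 = integrate.quad(lambda t: t * pdf(t), lo, hi)[0]  # first moment on the cell
        xi[l] = m1 / p[l]                                     # conditional mean
    return xi, p / p.sum()   # renormalize for the truncation to [a, b]

xi, p = conditional_representatives(stats.norm().pdf, a=-3.0, b=3.0, L=10)
```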

Indeed, if the function $f(x,\xi)$ is linear with respect to $\xi$ in the set $S_\ell$, then with $\xi^\ell$ defined by (2.8) we obtain strict equality in (2.6):

$$\int_{S_\ell} f(x,\xi)\, P(d\xi) = f(x,\xi^\ell)\, P\{\xi \in S_\ell\}. \qquad (2.10)$$

This implies that further division of the subset $S_\ell$ is useless for improving the accuracy of the approximation at a given $x$.
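A quick numerical sanity check of (2.10), in the same illustrative setup: for a function linear in $\xi$, the conditional-mean representatives make every cell term exact, so the discrete sum reproduces the expectation regardless of how coarse the partition is.

```python
# For g linear in xi, (2.8) makes (2.6) an equality in every cell (2.10).
g = lambda x, t: 2.0 * t + x     # linear in the random argument
xi, p = conditional_representatives(stats.norm().pdf, -3.0, 3.0, L=3)
print(np.sum(p * g(1.0, xi)))    # ~1.0: equals 2 E xi + 1 = 1 exactly, even with L = 3
```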

On the other hand, if $f(x,\cdot)$ is highly nonlinear in $S_\ell$, the approximation in $S_\ell$ can be rather rough and a finer partition of $S_\ell$ is desirable. Hence, the density of the partition in various subregions of the support $S$ should be related to the nonlinearity of $f(x,\cdot)$.

Generally, we do not know such detailed properties of the function $f(x,\xi)$ in advance; some information can be gained only in the course of solving a definite approximate problem. Furthermore, the properties of the function $f(x,\cdot)$ change when $x$ changes, and we are interested in having a good partition for $x$ close to the solution of our problem.

Thus we arrive at the idea of a sequential approximation method in which constructing a partition of $S$ and approximating a solution to the original problem are mutually related (a code sketch of this loop follows the list):

(1) Choose an initial partition $S_\ell$, $\ell = 1,2,\ldots,L$, which satisfies (2.3) and (2.4).

(2) Choose points $\xi^\ell \in S_\ell$ and probabilities $p_\ell$, $\ell = 1,2,\ldots,L$, according to (2.8) and (2.9).

(3) Solve the approximate problem.

(4) At the solution $\hat x_L$, analyze the accuracy of the approximation by investigating the properties of the function $f(\hat x_L, \xi)$ in each of the subsets $S_\ell$, $\ell = 1,2,\ldots,L$; if the accuracy is not sufficient, choose those subsets that should be further divided and repeat from step (2).
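The following self-contained sketch shows one way such a loop could look in the one-dimensional setting used above; the bisect-the-worst-cell refinement rule and all names are illustrative assumptions, since the text prescribes only the four steps.

```python
import numpy as np
from scipy import stats, integrate, optimize

def representatives(pdf, edges):
    """(2.8)-(2.9) on an arbitrary partition given by sorted cell edges."""
    cells = list(zip(edges[:-1], edges[1:]))
    p = np.array([integrate.quad(pdf, lo, hi)[0] for lo, hi in cells])
    m1 = np.array([integrate.quad(lambda t: t * pdf(t), lo, hi)[0] for lo, hi in cells])
    return m1 / p, p / p.sum()

def cell_error(f, x, pdf, lo, hi, xi_l):
    """Error of replacing f(x, .) by f(x, xi_l) on one cell, cf. (2.6)."""
    exact = integrate.quad(lambda t: f(x, t) * pdf(t), lo, hi)[0]
    return abs(exact - f(x, xi_l) * integrate.quad(pdf, lo, hi)[0])

def solve_sequential(f, pdf, a, b, tol=1e-4, max_iter=30):
    edges = np.array([a, 0.5 * (a + b), b])                # step 1: initial partition
    for _ in range(max_iter):
        xi, p = representatives(pdf, edges)                # step 2
        x_L = optimize.minimize_scalar(lambda x: np.sum(p * f(x, xi))).x   # step 3
        err = [cell_error(f, x_L, pdf, edges[l], edges[l + 1], xi[l])      # step 4
               for l in range(len(edges) - 1)]
        if max(err) < tol:
            break
        worst = int(np.argmax(err))                        # bisect only the worst cell
        edges = np.insert(edges, worst + 1, 0.5 * (edges[worst] + edges[worst + 1]))
    return x_L, edges

x_star, edges = solve_sequential(lambda x, t: (x - t) ** 2 + np.exp(0.5 * t),
                                 stats.norm().pdf, -3.0, 3.0)
```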

Detailed realization of this procedure depends upon the properties of the class of problems to which it is applied. In the next section we shall describe in more detail its application to a certain important class of stochastic programming problems.

2.2 Approximation Schemes for Linear Two-stage Problems of