
NOT FOR QUOTATION WITHOUT PERMISSION OF THE AUTHORS

STOCHASTIC OPTIMIZATION PROBLEMS WITH INCOMPLETE INFORMATION ON DISTRIBUTION FUNCTIONS

Yu. Ermoliev, A. Gaivoronski and C. Nedeva

November 1983

WP-83-113

Working Papers are interim reports on work of the International Institute for Applied Systems Analysis and have received only limited review. Views or opinions expressed herein do not necessarily represent those of the Institute or of its National Member Organizations.

INTERNATIONAL INSTITUTE FOR APPLIED SYSTEMS ANALYSIS, 2361 Laxenburg, Austria


PREFACE

The development of optimization techniques for solving complex decision problems under uncertainty is currently a major topic of research in the System and Decision Sciences Area at IIASA, and in the Adaptation and Optimization group in particular. This paper deals with a new approach to optimization under uncertainty which tackles the problems caused by incomplete information about the probabilistic behavior of random variables. In contrast to the usual approach, which is to assume that probability distributions are known, the authors consider the more common case which arises when the information available is sufficient only to single out a set of possible distributions. The resulting optimization algorithms can be used in reliability analysis, mathematical statistics and many other fields.

ANDRZEJ WIERZBICKI
Chairman
System and Decision Sciences


ABSTRACT

The main purpose of this paper is to discuss numerical optimization procedures, based on duality theory, for problems in which the distribution function is only partially known. The dual problem is formulated as a minimax-type problem in which the "inner" problem of maximization is not concave. Numerical procedures that avoid the difficulties associated with solving the "inner" problem are proposed.


STOCHASTIC OPTIMIZATION PROBLEMS WITH INCOMPLETE INFORMATION ON DISTRIBUTION FUNCTIONS

Yu. Ermoliev, A. Gaivoronski and C. Nedeva

1. INTRODUCTION

A conventional stochastic programming problem may be formulated with some generality as minimization of the function

F(x) = E_y v(x,y) = ∫_Y v(x,y) dH(y)    (1)

subject to

x ∈ X ⊆ R^n ,    (2)

where y ∈ Y ⊂ R^m is a vector of random parameters, H(y) is a given distribution function and v(x,y) is a random function possessing all the properties necessary for expression (1) to be meaningful [1].

In practice, we often do not have full information on H(y); we sometimes only have some of its characteristics, in particular bounds for the mean value or other moments. Such information can often be written in terms of constraints

∫_Y q_k(y) dH(y) ≤ 0 ,  k = 1,…,l ,    (3)

∫_Y dH(y) = 1 ,    (4)

where the q_k(y) , k = 1,…,l , are known functions. We could, for example, have the following constraints on joint moments:

∫_Y y_1^{i_1} y_2^{i_2} … y_m^{i_m} dH(y) ≤ C_{i_1 i_2 … i_m} ,    (5)

where the C_{i_1 i_2 … i_m} are given constants.

Consider the following problem: find a vector x which minimizes

F(x) = max_{H ∈ K} ∫_Y v(x,y) dH(y)    (6)

subject to constraints (2), where K is the set of functions H satisfying constraints (3) and (4).

Special cases of this problem have been studied in [2],[3]. Under certain assumptions concerning the family K and the function v(·), the solution of the "inner" problem has a simple analytical form and hence (6) is reduced to a conventional nonlinear programming problem. The main purpose of this paper is to discuss numerical methods for the solution of problem (6) in the more general case. Sections 2, 3, and 4 deal with the reduction of this problem to minimax-type problems without randomized strategies and describe numerical methods based on some of the same ideas as generalized linear programming. A quite general method for solving the resulting minimax-type problems, in which the inner problem of maximization is not concave, is considered in Section 5.


2. OPTIMIZATION WITH RESPECT TO DISTRIBUTION FUNCTIONS

The possible methods of minimizing F(x) depend on solution procedures for the following "inner" maximization problem: find a distribution function H that maximizes

∫_Y q_0(y) dH(y)    (7)

subject to

∫_Y q_k(y) dH(y) ≤ 0 ,  k = 1,…,l ,    (8)

∫_Y dH(y) = 1 ,    (9)

where the q_ν , ν = 0,1,…,l , are given functions from R^m into R^1.

This is a generalization of the known moments problem (see, for instance, [4-6]). It can also be regarded as a generalization of the nonlinear programming problem

max { q_0(y) : q_k(y) ≤ 0 , y ∈ Y , k = 1,…,l }

to an optimization problem involving randomized strategies [7-9].
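To see concretely what randomization adds, consider a small illustrative instance (ours, not from the original text): Y = [0,1], q_0(y) = y², and a single constraint function q_1(y) = y − 1/2. A minimal sketch in LaTeX:

```latex
% Deterministic strategies must satisfy the constraint pointwise:
\max\{\, y^2 : y - \tfrac{1}{2} \le 0,\ y \in [0,1] \,\} = \tfrac{1}{4}.
% Randomized strategies need only satisfy it in the mean:
\max_H \Big\{ \int_0^1 y^2\,dH(y) \;:\; \int_0^1 \big(y-\tfrac{1}{2}\big)\,dH(y) \le 0 \Big\}
  = \tfrac{1}{2},
% attained by H placing mass 1/2 at y = 0 and mass 1/2 at y = 1.
```

The randomized optimum concentrates on extreme points of Y and doubles the deterministic value; the convex-hull construction that follows formalizes exactly this effect.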

It appears possible to solve problem (7)-(9) by means of a modification of the revised simplex method [8,10]. This modification is based on Krein's "geometrical approach" to the theory of moments [1,5,6]. Consider the set

Z = { z : z = (q_0(y), q_1(y), …, q_l(y)) , y ∈ Y }

and suppose that Z is compact. This will be true, for instance, if Y is compact and the functions q_ν , ν = 0,1,…,l , are continuous. Consider also the convex hull of Z:

co Z = { z : z = Σ_{j=1}^{N} p_j z^j , z^j ∈ Z , p_j ≥ 0 , Σ_{j=1}^{N} p_j = 1 } ,

where N is an arbitrary finite number. Then general results from convex analysis lead to

co Z = { z = (z_0, z_1, …, z_l) : z_ν = ∫_Y q_ν(y) dH(y) , ν = 0,1,…,l , for some distribution H on Y } .    (10)


Therefore problem (7)-(9) is equivalent to maximizing z_0 subject to

z = (z_0, z_1, …, z_l) ∈ co Z ,  z_k ≤ 0 ,  k = 1,…,l .

According to the Caratheodory theorem, each point on the boundary of co Z can be represented as a convex combination of at most l + 1 points from Z. Thus problem (7)-(9) is equivalent to the following generalized linear programming problem [11]: find points y^j ∈ Y , j = 1,…,t , t ≤ l + 1 , and real numbers p_j , j = 1,…,t , such that

Σ_{j=1}^{t} p_j q_0(y^j)  is maximized    (11)

subject to

Σ_{j=1}^{t} p_j q_k(y^j) ≤ 0 ,  k = 1,…,l ,    (12)

Σ_{j=1}^{t} p_j = 1 ,  p_j ≥ 0 ,  j = 1,…,t .    (13)

Consider arbitrary points ȳ^j , j = 1,…,l+1 (setting t = l + 1), and for the fixed set {ȳ^1, ȳ^2, …, ȳ^{l+1}} find a solution p̄ = (p̄_1, p̄_2, …, p̄_{l+1}) of problem (11)-(13) with respect to p. Assume that p̄ exists and that (ū_1, ū_2, …, ū_{l+1}) are the corresponding dual variables, i.e., solve the problem

min u_{l+1}    (14)

subject to

Σ_{k=1}^{l} u_k q_k(ȳ^j) + u_{l+1} ≥ q_0(ȳ^j) ,  j = 1,…,l+1 ,    (15)

u_k ≥ 0 ,  k = 1,…,l .    (16)

Now let y be an arbitrary point of Y. Consider the following augmented problem of maximization with respect to (p_1, p_2, …, p_{l+1}, p): maximize

Σ_{j=1}^{l+1} p_j q_0(ȳ^j) + p q_0(y)    (17)

subject to

Σ_{j=1}^{l+1} p_j q_k(ȳ^j) + p q_k(y) ≤ 0 ,  k = 1,…,l ,    (18)

Σ_{j=1}^{l+1} p_j + p = 1 ,  p_j ≥ 0 ,  p ≥ 0 .    (19)

It is clear that if there exists a point y* such that

q_0(y*) − Σ_{k=1}^{l} ū_k q_k(y*) − ū_{l+1} > 0 ,

then the solution p̄ could be improved by dropping one of the columns (q_0(ȳ^j), q_1(ȳ^j), …, q_l(ȳ^j), 1) , j = 1,…,l+1 , from the basis and replacing it by the column (q_0(y*), q_1(y*), …, q_l(y*), 1), following the revised simplex method. The point y* could be defined as a solution of the subproblem

max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} ū_k q_k(y) ] .    (20)

Then a new solution p* of (11)-(13) with the fixed additional point y* can be determined in the same way as p̄, together with the dual variables u*. This method gives us a conceptual framework for solving not only (6) but also some more general classes of problems.
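As a concrete illustration of this column-generation framework, the following sketch (ours, not the authors' code; the instance is the hypothetical one above, and a fine grid stands in for Y in subproblem (20)) uses scipy.optimize.linprog in place of the revised simplex method for (11)-(13):

```python
# Sketch of the generalized linear programming method of Section 2
# (our illustration).  Instance: Y = [0,1], q_0(y) = y^2 and one
# moment constraint q_1(y) = y - 1/2, so l = 1.
import numpy as np
from scipy.optimize import linprog

q = [lambda y: y ** 2,              # q_0: objective integrand
     lambda y: y - 0.5]             # q_1: moment-constraint integrand
l = len(q) - 1
grid = np.linspace(0.0, 1.0, 1001)  # fine grid standing in for Y in (20)

def solve_master(points):
    """Solve (11)-(13) on the current columns; return weights p and duals u."""
    c = -np.array([q[0](y) for y in points])          # linprog minimizes
    A_ub = np.array([[q[k](y) for y in points] for k in range(1, l + 1)])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(l),
                  A_eq=np.ones((1, len(points))), b_eq=[1.0],
                  bounds=[(0.0, None)] * len(points), method="highs")
    # sign convention: u_k = -marginal of (12), u_{l+1} = -marginal of (13)
    u = np.concatenate([-res.ineqlin.marginals, -res.eqlin.marginals])
    return res.x, u

points = list(np.linspace(0.0, 1.0, l + 1))           # initial y^1..y^{l+1}
for _ in range(50):
    p, u = solve_master(points)
    # subproblem (20): look for the most violated column over the grid
    vals = q[0](grid) - sum(u[k - 1] * q[k](grid) for k in range(1, l + 1))
    j = int(np.argmax(vals))
    if vals[j] - u[l] <= 1e-9:       # stopping rule, cf. condition (23) below
        break
    points.append(float(grid[j]))    # enter the new column ...
    p, u = solve_master(points)
    points = [y for y, w in zip(points, p) if w > 1e-12]  # ... drop nonbasic

p, u = solve_master(points)
print("value:", p @ np.array([q[0](y) for y in points]), "support:", points)
```

On this instance the optimal distribution puts mass 1/2 on each of y = 0 and y = 1, giving the value 1/2; the duals (u_1, u_2) = (1, 1/2) certify optimality through the stopping rule.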

If y(x) = (y^1(x), y^2(x), …, y^{l+1}(x)) , p(x) = (p_1(x), p_2(x), …, p_{l+1}(x)) is a solution of the inner optimization problem for fixed x, then the function (6) may be nondifferentiable, with subgradient

F_x(x) = Σ_{j=1}^{l+1} p_j(x) v_x(x, y^j(x)) ,

where v_x is a subgradient of the function v(·, y). Nondifferentiable optimization techniques could therefore be used to minimize F(x).

The main difficulty of such an approach would be to obtain a solution of (20) and exact values of y(x^s) , p(x^s) at each current point x^s for iterations s = 0,1,… This last difficulty can sometimes be avoided by dealing with approximate solutions rather than precise values y(x) , p(x) , and using ε-subgradient methods (see [12],[13]). Generalized linear programming methods which do not require exact solutions of subproblem (20) are studied in Section 4.

3. DUALITY RELATIONS

The duality relations for problem (7)-(9) enable us to find a more general approach to the solution of problem (6). Consider the following problem:

min_{u ∈ U^+} max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u_k q_k(y) ] ,    (21)

where

U^+ = { u = (u_1, u_2, …, u_l) : u_k ≥ 0 , k = 1,…,l } .

This problem can be regarded as dual to (7)-(9) or (11)-(13), but to explain this we must introduce some more definitions.

In what follows we shall use the same letter, say H, for both the distribution function and the underlying probabilistic measure, where this will not cause confusion. We shall denote by Y^+(H) the collection of all subsets of Y which have positive measure H, and by dom H the support set of the distribution H, i.e.,

dom H = ∩_{A ∈ Y^+(H)} A .

Set

ψ(u) = max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u_k q_k(y) ] ,

U^* = { u^* ∈ U^+ : ψ(u^*) = min_{u ∈ U^+} ψ(u) } ,

Y(u) = { y ∈ Y : ψ(u) = q_0(y) − Σ_{k=1}^{l} u_k q_k(y) } .

Then the following generalization of the results given in [14] holds.

Theorem 1. Assume that:

1. Y is compact and the q_ν(y) , ν = 0,1,…,l , are continuous;

2. int co Z ≠ ∅.

Then

1. Solutions to both problem (7)-(9) and problem (21) exist, and the optimal values of the objective functions of both problems are equal;

2. For any solution H^* of problem (7)-(9) there exists a u^* ∈ U^* such that dom H^* ⊆ Y(u^*).

In other words, the duality gap vanishes in nonlinear programs with randomized strategies. A proof of this theorem can be derived from general duality results [12,15] and the theory of moments [5]. The proof given below is close to the ideas expressed in [12] and illustrates certain connections with results from conventional nonlinear programming.

Proof. From (10), problem (7)-(9) is equivalent to

max { z_0 : z = (z_0, z_1, …, z_l) ∈ co Z , z_k ≤ 0 , k = 1,…,l } ,    (22)

where Z = { z : z = (q_0(y), q_1(y), …, q_l(y)) , y ∈ Y }. From assumption 1 of Theorem 1, co Z is a convex compact set and therefore a solution z^* = (z_0^*, z_1^*, …, z_l^*) to problem (22) exists. Let L(u,z) be a Lagrange function for (22):

L(u,z) = z_0 − Σ_{k=1}^{l} u_k z_k .


From assumption 2,

z_0^* = max_{z ∈ co Z} min_{u ∈ U^+} L(u,z) = min_{u ∈ U^+} max_{z ∈ co Z} L(u,z) .

According to ( l o ) , th e r e exists for any

z

E co Z a distribution

H

such t h a t

We therefore have

a n d

m a x L ( U , Z )

=

rnax

~ L ( u , H )

H 2

o

,

J ~ H ( ~ )

=

11

,

Z E C O Z Y

Obviously

which proves t h e first p a r t of t h e t h e o r e m .

Under the assumptions of the theorem we know that for any solution (z_0^*, z_1^*, …, z_l^*) there exists a u^* ∈ U^* such that

z_0^* = max_{z ∈ co Z} L(u^*, z) .

Thus, for any optimal distribution H^* we have

∫_Y [ q_0(y) − Σ_{k=1}^{l} u_k^* q_k(y) ] dH^*(y) = ψ(u^*) ;

since the integrand never exceeds ψ(u^*), the maximum must be attained for H^*-almost all y, i.e., dom H^* ⊆ Y(u^*), which proves the second part of the theorem.
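Theorem 1 can be verified by hand on the illustrative instance introduced in Section 2 (q_0(y) = y², q_1(y) = y − 1/2, Y = [0,1]; again our example, not the paper's):

```latex
% Dual function: y^2 - u y is convex in y, so its maximum over [0,1]
% is attained at an endpoint:
\psi(u) = \max_{0 \le y \le 1}\Big[\, y^2 - u\big(y - \tfrac{1}{2}\big) \Big]
        = \max(0,\, 1-u) + \tfrac{u}{2}, \qquad u \ge 0 .
% Minimum at u^* = 1:
\min_{u \ge 0} \psi(u) = \psi(1) = \tfrac{1}{2},
\qquad Y(u^*) = \{0, 1\} \supseteq \operatorname{dom} H^* .
```

The dual value 1/2 equals the primal value attained by the two-point distribution, and the optimal distribution is supported exactly on the maximizing set Y(u^*), as conclusion 2 asserts.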


Remark 1. From the duality theorem above we have

max_{H ∈ K} ∫_Y v(x,y) dH(y) = min_{u ∈ U^+} max_{y ∈ Y} [ v(x,y) − Σ_{k=1}^{l} u_k q_k(y) ]

for each fixed x ∈ X, where v(x,·) is a continuous function. Problem (6) can then be reduced to a minimax-type problem as follows: minimize the function

φ(x,u) = max_{y ∈ Y} [ v(x,y) − Σ_{k=1}^{l} u_k q_k(y) ]

with respect to x ∈ X , u ≥ 0.

Remark 2. Theorem 1 can be used to characterize optimal distributions for a variety of nonlinear optimization problems with distribution functions. The approach is, first, to state necessary optimality conditions through linearization and then to apply Theorem 1. This is illustrated in the following example.

Consider the optimization problem

max g_0(H)    (7a)

subject to

g_i(H) ≤ 0 ,  i = 1,…,m ,    (8a)

dom H ⊆ Y ,    (9a)

where the g_i(H) , i = 0,1,…,m , are nonlinear functionals depending on distribution functions H with support set contained in Y.

Theorem 1a. Assume that the following statements are true:

1. The set Y is a compact subset of Euclidean space R^n.

2. For any distributions H_1 , H_2 such that dom H_1 ⊆ Y , dom H_2 ⊆ Y we have

g_i(H_1 + α(H_2 − H_1)) = g_i(H_1) + α ∫_Y q_i(y, H_1) d(H_2 − H_1)(y) + o_i(α, H_1, H_2) ,

where i = 0,1,…,m , α ∈ [0,1] and o_i(α, H_1, H_2)/α → 0 as α → 0.

3. The functions q_i(y,H) are continuous in y for every H such that dom H ⊆ Y; for any H_1 , H_2 such that dom H_1 ⊆ Y , dom H_2 ⊆ Y we have

| q_i(y, H_1) − q_i(y, H_2) | ≤ | ∫_Y λ_i(y', H_1, H_2) d(H_1 − H_2)(y') | ,

where | λ_i(y, H_1, H_2) | ≤ K < ∞ for some constant K which does not depend on H_1 , H_2.

4. The functionals g_i(H) , i = 0,1,…,m , are convex, i.e.,

g_i(αH_1 + (1−α)H_2) ≤ α g_i(H_1) + (1−α) g_i(H_2) ,  α ∈ [0,1] .

5. There exists an H such that dom H ⊆ Y and g_i(H) < 0 for i = 1,…,m.

Then:

1. A solution of problem (7a)-(9a) exists;

2. For any such solution H^* we have

∫_Y q_0(y, H^*) dH^*(y) = min_{u ∈ U^+} ψ_0(u, H^*) ,

where

ψ_0(u, H) = max_{y ∈ Y} [ q_0(y, H) − Σ_{i=1}^{m} u_i q_i(y, H) ] ;

3. If H^* is a solution of (7a)-(9a) then for some u^* we have dom H^* ⊆ Y^*(u^*, H^*), where

Y^*(u, H) = { y ∈ Y : q_0(y, H) − Σ_{i=1}^{m} u_i q_i(y, H) = ψ_0(u, H) } .

Thus, the main assumptions of this theorem are the existence, continuity (in some sense) and boundedness of the directional derivatives of the functionals g_i(H).


The following theorem is analogous to known results in linear programming and provides a useful stopping rule for methods of the type described in Section 2 (see also Section 4).

Theorem 2 (Optimality condition). Let the assumptions of Theorem 1 hold and let p̄ be a solution of problem (11)-(13) for fixed ȳ = (ȳ^1, ȳ^2, …, ȳ^t) , ȳ^j ∈ R^m. Then the pair ȳ , p̄ is an optimal solution of problem (11)-(13) if and only if for given ȳ there exists a solution (û_1, û_2, …, û_{l+1}) of problem (14)-(16) such that

q_0(y) − Σ_{k=1}^{l} û_k q_k(y) − û_{l+1} ≤ 0  for all y ∈ Y .    (23)

Proof.

1. Suppose that the pair ȳ , p̄ is an optimal solution of problem (11)-(13), and take û = (û_1, û_2, …, û_l) ∈ U^* with û_{l+1} = ψ(û), so that (23) holds by the definition of ψ. We shall show that (û_1, û_2, …, û_{l+1}) is a solution of problem (14)-(16). Consider the two functions

ψ(u) = max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u_k q_k(y) ] ,

ψ_1(u) = max_{1 ≤ j ≤ t} [ q_0(ȳ^j) − Σ_{k=1}^{l} u_k q_k(ȳ^j) ] .

According to Theorem 1, min_{u ∈ U^+} ψ(u) equals the optimal value of problem (7)-(9), which by the optimality of the pair ȳ , p̄ coincides with Σ_{j=1}^{t} p̄_j q_0(ȳ^j). Since problem (11)-(13) is dual to problem (14)-(16) for given ȳ, then

Σ_{j=1}^{t} p̄_j q_0(ȳ^j) = min_{u ∈ U^+} ψ_1(u) .

Therefore

ψ(û) = min_{u ∈ U^+} ψ(u) = min_{u ∈ U^+} ψ_1(u) .

Since ȳ^j ∈ Y , j = 1,…,t , then ψ_1(u) ≤ ψ(u) for u ∈ U^+. In particular, ψ_1(û) ≤ ψ(û). But this gives

ψ_1(û) ≤ ψ(û) = min_{u ∈ U^+} ψ_1(u) ≤ ψ_1(û) ,

and hence ψ_1(û) = ψ(û) = û_{l+1}. Hence (û_1, û_2, …, û_{l+1}) is a solution of problem (14)-(16) satisfying (23).

2. Suppose now that for given ȳ there exists a solution (û_1, û_2, …, û_{l+1}) of problem (14)-(16) such that (23) holds. From the duality between problems (11)-(13) and (14)-(16) we have

Σ_{j=1}^{t} p̄_j q_0(ȳ^j) = û_{l+1} ,

where p̄ = (p̄_1, p̄_2, …, p̄_t) is a solution of problem (11)-(13) for given ȳ. On the other hand, the duality between problems (21) and (11)-(13) leads to the inequality

Σ_{j=1}^{t} p_j q_0(y^j) ≤ min_{u ∈ U^+} ψ(u)

for any {y^1, y^2, …, y^t} , p satisfying (12)-(13). In other words, condition (23) means that ψ(û) ≤ û_{l+1}, so that

Σ_{j=1}^{t} p̄_j q_0(ȳ^j) = û_{l+1} ≥ ψ(û) ≥ min_{u ∈ U^+} ψ(u) ≥ Σ_{j=1}^{t} p_j q_0(y^j) ,

and this completes the proof.

The next theorem provides a means of deriving a solution to the initial problem (7)-(9) from a solution of problem (21), and is complementary to Theorem 1.

Theorem 3. Assume that the assumptions of Theorem 2 are satisfied and that ψ(û) = min { ψ(u) : u ∈ U^+ }. Let ȳ = (ȳ^1, ȳ^2, …, ȳ^t) , where ȳ^j ∈ Y(û) , i.e.,

q_0(ȳ^j) − Σ_{k=1}^{l} û_k q_k(ȳ^j) = ψ(û) ,  j = 1,…,t ,    (24)

and let p̄ be a solution of problem (11)-(13) for given ȳ. Suppose also that there is a solution p' to the inequalities (12)-(13) for y^j = ȳ^j such that

Σ_{j=1}^{t} q_k(ȳ^j) p'_j = 0 ,  k ∈ I^+ ,    (25)

where I^+ = { k : û_k > 0 } , I^0 = { k : û_k = 0 }.

Then the pair ȳ , p̄ is an optimal solution of problem (11)-(13).

Proof. The vectors

−( q_1(ȳ^j), q_2(ȳ^j), …, q_l(ȳ^j) ) ,  j = 1,…,t ,

are subgradients of the convex function

ψ_1(u) = max_{1 ≤ j ≤ t} [ q_0(ȳ^j) − Σ_{k=1}^{l} u_k q_k(ȳ^j) ]

at the point û (by (24) every index j is active there). Therefore condition (25) is necessary and sufficient for the point û to be an optimal solution, i.e., for

ψ_1(û) = min { ψ_1(u) : u ∈ U^+ } .

Then, from (24),

min { ψ_1(u) : u ∈ U^+ } = ψ(û) = min { ψ(u) : u ∈ U^+ } .    (26)

The minimization of ψ_1(u) , u ∈ U^+ , is equivalent to problem (14)-(16). Hence û = (û_1, û_2, …, û_l) together with û_{l+1} = ψ_1(û) gives a solution of problem (14)-(16). Since problem (14)-(16) is dual to problem (11)-(13), problem (11)-(13) has a solution, say p̄ = (p̄_1, p̄_2, …, p̄_t) , and

Σ_{j=1}^{t} p̄_j q_0(ȳ^j) = û_{l+1} = ψ_1(û) .

This together with (26) yields

Σ_{j=1}^{t} p̄_j q_0(ȳ^j) = min { ψ(u) : u ∈ U^+ } ,

and this completes the proof.

4. ALGORITHMS

Theorems 2 and 3 justify a dual approach to problem (7)-(9) which may involve simultaneous approximation of both primal and dual variables subject to (24)-(25). In this section we consider several versions of the generalized-linear-programming-based method discussed briefly in Section 2. In all cases the current estimate of the optimal solution satisfies (24)-(25) at each iteration.

The convergence of such algorithms has been investigated in a number of papers [16],[11], under the assumption that the columns generated by subproblem (20) at all previous iterations, together with the exact solutions at each iteration, are stored in memory. There are various ways of avoiding this expansion of the memory, mainly through selective deletion of these columns [17-19]. The aim of this section is to discuss a way of avoiding not only the expansion of the memory, but also the need to have a precise solution of (20). The last point is important in connection with the initial problem (6), as mentioned in Section 2.


Description of Algorithm 1

Fix points y^{0,1}, y^{0,2}, …, y^{0,l+1} and solve problem (11)-(13) with respect to p for y^j = y^{0,j} , j = 1,…,l+1. Suppose that a solution p^0 = (p^0_1, p^0_2, …, p^0_{l+1}) to this problem exists. Let u^0 = (u^0_1, u^0_2, …, u^0_{l+1}) be a solution of the dual problem (14)-(16) with respect to u. The vector u^0 satisfies the following constraints for y ∈ { y^{0,1}, y^{0,2}, …, y^{0,l+1} }:

q_0(y) − Σ_{k=1}^{l} u^0_k q_k(y) − u^0_{l+1} ≤ 0 .    (27)

If u^0 satisfies condition (27) for all y ∈ Y, then the pair { y^{0,1}, y^{0,2}, …, y^{0,l+1} } , p^0 is a solution of the original problem (11)-(13). If this is not the case, consider a new point y^0 such that

q_0(y^0) − Σ_{k=1}^{l} u^0_k q_k(y^0) ≥ max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u^0_k q_k(y) ] − ε_0

for some ε_0 > 0.

Denote by p^1 = (p^1_1, p^1_2, …, p^1_{l+2}) a solution of the augmented problem (17)-(19) with respect to p for fixed ȳ^j = y^{0,j} , y = y^0. We shall use y^{1,1}, y^{1,2}, …, y^{1,l+1} to denote those points among y^{0,1}, …, y^{0,l+1}, y^0 that correspond to the basic variables of the solution p^1.

Thus the first step of the algorithm is terminated and we pass to the next step: determination of u^1 , y^1 , etc. In general, after the s-th iteration we have points y^{s,1}, y^{s,2}, …, y^{s,l+1}, a solution p^s = (p^s_1, p^s_2, …, p^s_{l+1}), and the corresponding solution u^s = (u^s_1, u^s_2, …, u^s_{l+1}) of the dual problem (14)-(16). For an ε_s > 0, find y^s such that

q_0(y^s) − Σ_{k=1}^{l} u^s_k q_k(y^s) ≥ max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u^s_k q_k(y) ] − ε_s

and

Δ(y^s, u^s) = q_0(y^s) − Σ_{k=1}^{l} u^s_k q_k(y^s) − u^s_{l+1} > 0 .

If we do not obtain Δ(y^s, u^s) > 0 for decreasing values of ε_s, we arrive at an optimal solution; otherwise we have to solve the augmented problem (17)-(19) for ȳ^j = y^{s,j} , y = y^s.

Denote by y^{s+1,1}, y^{s+1,2}, …, y^{s+1,l+1} those points from { y^{s,1}, y^{s,2}, …, y^{s,l+1} } ∪ y^s that correspond to the basic variables of the solution p^{s+1}. The pair { y^{s+1,1}, y^{s+1,2}, …, y^{s+1,l+1} } , p^{s+1} is the new approximate solution of the original problem; if Δ(y^s, u^s) ≤ 0, then (according to Theorem 2) the pair { y^{s,1}, y^{s,2}, …, y^{s,l+1} } , p^s is the desired solution. Define

I^+_s = { k : u^s_k > 0 } ,  I^0_s = { k : u^s_k = 0 }

and

A_s = { e : e = (e_1, e_2, …, e_l) , ||e|| = 1 , e_k ≥ 0 for k ∈ I^0_s and arbitrary e_k for k ∈ I^+_s } ;

A_s is actually a set of feasible directions for the set U^+ at the point u^s. Let

γ_s = max_{e ∈ A_s} min_{j : p^s_j > 0} Σ_{k=1}^{l} e_k q_k(y^{s,j}) .

Note that γ_s is never positive, because co { −(q_1(y^{s,j}), q_2(y^{s,j}), …, q_l(y^{s,j})) , ∀ j : p^s_j > 0 } is a set of subgradients of the function

max_{1 ≤ j ≤ l+1} [ q_0(y^{s,j}) − Σ_{k=1}^{l} u_k q_k(y^{s,j}) ]

at the point u^s, and this function has a minimum at u^s.

In order to prove that this method is convergent we require, broadly speaking, that γ_s < 0 and that γ_s tends to zero only as we approach the optimal solution.

Consider the functions

ψ(u) = max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u_k q_k(y) ] ,

ψ^s(u) = max_{1 ≤ j ≤ l+1} [ q_0(y^{s,j}) − Σ_{k=1}^{l} u_k q_k(y^{s,j}) ] .

Theorem 4. Let the conditions of Theorem 1 be satisfied, and let the following additional conditions hold:

1. There exists a nondecreasing function γ̄(t) , t ∈ [0,∞) , γ̄(0) = 0 , γ̄(t) > 0 for t > 0 , such that

γ_s ≤ −γ̄( ψ(u^s) − ψ^s(u^s) ) ;

2. ε_s > 0 , ε_s → 0 as s → ∞.

Then any convergent subsequence of the sequence { y^{s,1}, y^{s,2}, …, y^{s,l+1} } , p^s converges to a solution of problem (7)-(9).

Proof.

1. First let us prove that the sequence {u^s} is bounded. Suppose, arguing by contradiction, that there exists a subsequence {u^{s_r}} such that ||u^{s_r}|| → ∞ as r → ∞. Assumption 2 of Theorem 1 implies that ψ(u^{s_r}) → ∞ and therefore that ψ(u^{s_r}) − ψ^{s_r}(u^{s_r}) → ∞, since ψ^{s_r}(u^{s_r}) ≤ min_{u ∈ U^+} ψ(u). Hence, there exist r̄ and δ > 0 such that for r > r̄

γ_{s_r} ≤ −γ̄( ψ(u^{s_r}) − ψ^{s_r}(u^{s_r}) ) ≤ −δ .

Now let us fix an arbitrary point ū ∈ U^+ and estimate ψ(ū). We obtain

ψ(ū) ≥ ψ^{s_r}(ū) ≥ ψ^{s_r}(u^{s_r}) + sup_{g ∈ G^{s_r}} ( g , ū − u^{s_r} ) ,    (28)

where G^{s_r} is a set of subgradients of the function ψ^{s_r} at the point u^{s_r}. The definition of ψ^s implies that G^s contains the vectors −( q_1(y^{s,j}), q_2(y^{s,j}), …, q_l(y^{s,j}) ) for all j with p^s_j > 0, and therefore (28) leads to

ψ(ū) ≥ ψ^{s_r}(u^{s_r}) + ||ū − u^{s_r}|| min_{e ∈ A_{s_r}} max_{j : p^{s_r}_j > 0} ( −Σ_{k=1}^{l} e_k q_k(y^{s_r,j}) )

= ψ^{s_r}(u^{s_r}) − ||ū − u^{s_r}|| max_{e ∈ A_{s_r}} min_{j : p^{s_r}_j > 0} Σ_{k=1}^{l} e_k q_k(y^{s_r,j})

= ψ^{s_r}(u^{s_r}) − ||ū − u^{s_r}|| γ_{s_r} ≥ ψ^{s_r}(u^{s_r}) + δ ||ū − u^{s_r}|| .

This last inequality yields ψ(ū) = ∞ if ||u^{s_r}|| → ∞, which is impossible; therefore the sequence {u^s} is bounded.

2. We shall now estimate the evolution of the quantity w_s = ψ^s(u^s) , where u^s = arg min_{u ∈ U^+} ψ^s(u). Using the same argument as in part 1 of the proof we obtain

ψ^{s+1}(u^{s+1}) ≥ ψ^s(u^s) + ||u^{s+1} − u^s|| min_{e ∈ A_s} max_{j : p^s_j > 0} ( −Σ_{k=1}^{l} e_k q_k(y^{s,j}) )

= ψ^s(u^s) − ||u^{s+1} − u^s|| max_{e ∈ A_s} min_{j : p^s_j > 0} Σ_{k=1}^{l} e_k q_k(y^{s,j})

≥ ψ^s(u^s) + ||u^{s+1} − u^s|| γ̄( ψ(u^s) − ψ^s(u^s) ) .

The sequence {u^s}_{s=0}^∞ is bounded and so ψ(u^s) = max_{y ∈ Y} ( q_0(y) − Σ_{k=1}^{l} u^s_k q_k(y) ) must also be bounded; thus ψ^s(u^s) is bounded since ψ^s(u^s) ≤ ψ(u^s). This together with the previous inequality immediately gives

Σ_{s=0}^{∞} ||u^{s+1} − u^s|| γ̄( ψ(u^s) − ψ^s(u^s) ) < ∞ .    (29)

Now consider any convergent subsequence {u^{s_r}} of the sequence {u^s}. From (29) we can assume that either ||u^{s_r} − u^{s_r+1}|| → 0 or ψ(u^{s_r}) − ψ^{s_r}(u^{s_r}) → 0. In the latter case we get ψ^{s_r}(u^{s_r}) → min_{u ∈ U^+} ψ(u) = ψ^* because ψ(u^{s_r}) ≥ ψ^* and ψ^{s_r}(u^{s_r}) ≤ ψ^*. In the case ||u^{s_r} − u^{s_r+1}|| → 0 we get the following:

γ̄( ψ(u^{s_r}) − ψ^{s_r}(u^{s_r}) ) → 0 ,

so that once again ψ(u^{s_r}) − ψ^{s_r}(u^{s_r}) → 0 and we obtain ψ^{s_r}(u^{s_r}) → min_{u ∈ U^+} ψ(u). However, according to Theorem 1, min_{u ∈ U^+} ψ(u) is the optimal value of the initial problem; min_{u ∈ U^+} ψ^s(u) is the optimal value of problem (11)-(13). Therefore the solution of (11)-(13) tends to the solution of the initial problem, and any convergent subsequence of the sequence { y^{s,1}, y^{s,2}, …, y^{s,l+1} } , p^s , s = 0,1,… , converges to the optimal solution of the initial problem.

This method can be viewed as the dual of a cutting-plane method applied to problem (7)-(9) [12,16,20]. It drops all points y^{s,i} which do not correspond to basic variables. Theorem 4 shows that in some (rare) cases this method does not converge; however, this is not surprising because in certain cases the simplex method does not converge either. It may be possible to modify the algorithm in different ways to ensure convergence.

If we keep all previous points y^{0,1}, y^{0,2}, …, y^{0,l+1}, y^0, y^1, … and solve problem (14)-(16) with an increasing number of corresponding columns, then the method becomes a form of Kelley's method for minimizing the function ψ(u), which converges under the assumptions of Theorem 1. However, it is impossible to allow the set of points to increase ad infinitum in practical computations.

In the following modification of the algorithm presented above some nonbasic columns are dropped when an additional inequality is satisfied.

Description of Algorithm 2

1. We first choose a sequence of positive real numbers {μ_r} , 0 < μ_r < 1 , take r_0 = 0 and select initial points { y^{0,1}, y^{0,2}, …, y^{0,l+1} } such that problem (11)-(13) has a solution with respect to p for y^j = y^{0,j} , j = 1,…,l+1. Let p^0 be a solution of this problem and u^0 the corresponding dual variables. We then have to find y^0 such that

q_0(y^0) − Σ_{k=1}^{l} u^0_k q_k(y^0) ≥ max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u^0_k q_k(y) ] − ε_0 ,

where ε_0 is a positive number. If for any ε_0 and the corresponding y^0 we have

Δ(y^0, u^0) = q_0(y^0) − Σ_{k=1}^{l} u^0_k q_k(y^0) − u^0_{l+1} ≤ 0 ,

then the pair { y^{0,1}, y^{0,2}, …, y^{0,l+1} } , p^0 is an optimal solution of problem (7)-(9). Otherwise it is necessary to select ε_0 , y^0 such that Δ(y^0, u^0) > 0 and take Δ_0 = Δ(y^0, u^0).

Suppose that after the s-th iteration we have points y^{s,j} , j = 1,…,l_s , a solution p^s = (p^s_1, p^s_2, …, p^s_t) of problem (11)-(13) for y^j = y^{s,j} , j = 1,…,l_s , t = l_s , a corresponding solution u^s = (u^s_1, u^s_2, …, u^s_{l+1}) of the dual problem (14)-(16), a positive integer number r_s and a positive number Δ_s.

2. Find an approximate solution y^s such that

q_0(y^s) − Σ_{k=1}^{l} u^s_k q_k(y^s) ≥ max_{y ∈ Y} [ q_0(y) − Σ_{k=1}^{l} u^s_k q_k(y) ] − ε_s

and

Δ(y^s, u^s) = q_0(y^s) − Σ_{k=1}^{l} u^s_k q_k(y^s) − u^s_{l+1} > 0 .

If this is not possible then we have arrived at a solution. Otherwise consider the following two cases:

(a) Δ(y^s, u^s) ≥ (1 − μ_{r_s}) Δ_s . In this case take Δ_{s+1} = Δ(y^s, u^s) , l_{s+1} = l + 1 , r_{s+1} = r_s + 1 , and denote by y^{s+1,1}, y^{s+1,2}, …, y^{s+1,l+1} those points from { y^{s,1}, y^{s,2}, …, y^{s,l_s} } ∪ y^s that correspond to the basic variables of the solution p^{s+1}.

(b) Δ(y^s, u^s) < (1 − μ_{r_s}) Δ_s . In this case take

y^{s+1,j} = y^{s,j} ,  j = 1,…,l_s ,  y^{s+1,l_s+1} = y^s ,

Δ_{s+1} = Δ_s ,  l_{s+1} = l_s + 1 ,  r_{s+1} = r_s .

Find a solution p^{s+1} of problem (11)-(13) for t = l_{s+1} , y^j = y^{s+1,j} , j = 1,…,l_{s+1} , and the corresponding dual variables u^{s+1} , and proceed to the next iteration.

Theorem 5. Suppose that the conditions of Theorem 1 are satisfied and the following additional condition holds:

1. ε_s > 0 , ε_s → 0 , and ε_s / μ_{r_s} → 0 as s → ∞.

Then

Σ_{i=1}^{l_s} p^s_i q_0(y^{s,i})

tends to the optimal value of problem (7)-(9).

Proof.

1. Suppose that the inequality Δ(y^s, u^s) ≤ (1 − μ_{r_s}) Δ_s is satisfied only a finite number of times. This implies the existence of a number s_0 such that for s > s_0 the method turns into Kelley's cutting-plane algorithm for minimization of the convex function ψ(u), where the values of ψ(u) are calculated with an error that tends to zero. From assumption 2 of Theorem 1, ψ(u) has a bounded set of minimum points and the initial approximating function ψ^{s_0}(u) has a minimum. Thus such a method would converge to the minimum of the function ψ(u), and Δ(y^s, u^s) ≤ ψ(u^s) − ψ^s(u^s) → 0 , which contradicts the assumption that for s > s_0 the inequality Δ(y^s, u^s) ≤ (1 − μ_{r_s}) Δ_s is not satisfied.

2. There exists therefore a subsequence {s_k} such that

Δ(y^{s_k}, u^{s_k}) ≤ (1 − μ_{r_{s_k}}) Δ_{s_k} .

From the definition of the algorithm we have

Δ(y^{s_k}, u^{s_k}) ≥ ψ(u^{s_k}) − ψ^{s_k}(u^{s_k}) − ε_{s_k}

and therefore

ψ(u^{s_k}) − ψ^{s_k}(u^{s_k}) ≤ (1 − μ_{r_{s_k}}) Δ_{s_k} + ε_{s_k} .

Making the substitution ψ(u^s) − ψ^s(u^s) = w^s we obtain

w^{s_k} ≤ (1 − μ_{r_{s_k}}) Δ_{s_k} + ε_{s_k} .

This inequality together with the assumption ε_{s_k} / μ_{r_{s_k}} → 0 gives w^{s_k} → 0, and therefore ψ^{s_k}(u^{s_k}) → min_{u ∈ U^+} ψ(u) because ψ^{s_k}(u^{s_k}) ≤ ψ(u) for all u ∈ U^+. But for any s we have ψ^{s+1}(u^{s+1}) ≥ ψ^s(u^s), which together with ψ^{s_k}(u^{s_k}) → min_{u ∈ U^+} ψ(u) leads to ψ^s(u^s) → min_{u ∈ U^+} ψ(u). However, ψ^s(u^s) = Σ_i p^s_i q_0(y^{s,i}) and min_{u ∈ U^+} ψ(u) is the optimal value of problem (7)-(9) due to Theorem 1. This completes the proof.

Various ways of dropping the cuts in cutting-plane methods have been suggested in [11,20]. The following method, which keeps only l + 1 points at each iteration, was put forward in [20].

Instead of problem (14)-(16), solve the following problem at each iteration:

min [ u_{l+1} + ε ||u^s − u||² ]    (30)

subject to

q_0(y^j) − Σ_{k=1}^{l} q_k(y^j) u_k − u_{l+1} ≤ 0 ,  j = 1,…,l+1 ,    (31)

u_k ≥ 0 ,  k = 1,…,l ,

where u^s = arg min_{u ∈ U^+} ψ^s(u). That this modified version converges can be proved in a similar way to Theorem 5.
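A minimal sketch of this regularized step (ours; the solver choice, the starting point and the instance interface are our assumptions, not the paper's):

```python
# Regularized dual step (30)-(31): keep only the current l+1 points and
# pull the new dual vector toward the prox center u^s.  Our sketch.
import numpy as np
from scipy.optimize import minimize

def regularized_dual_step(q, ys, u_prox, eps=0.1):
    """q = [q_0, ..., q_l] callables; ys = current points y^1, ..., y^{l+1}."""
    u_prox = np.asarray(u_prox, dtype=float)
    l = len(q) - 1
    def objective(v):                        # v = (u_1, ..., u_l, u_{l+1})
        return v[l] + eps * np.sum((u_prox - v[:l]) ** 2)
    # (31):  u_{l+1} + sum_k u_k q_k(y^j) - q_0(y^j) >= 0  for every y^j
    cons = [{'type': 'ineq',
             'fun': lambda v, y=y: v[l] + sum(v[k - 1] * q[k](y)
                                              for k in range(1, l + 1))
                                   - q[0](y)}
            for y in ys]
    u0 = np.append(u_prox,                   # feasible start for u_{l+1}
                   max(q[0](y) - sum(u_prox[k - 1] * q[k](y)
                                     for k in range(1, l + 1)) for y in ys))
    res = minimize(objective, u0, method='SLSQP', constraints=cons,
                   bounds=[(0.0, None)] * l + [(None, None)])
    return res.x[:l], res.x[l]               # new u and new u_{l+1}
```

The proximal term ε ||u^s − u||², together with dropping all nonbasic points, is what keeps the memory bounded at l + 1 columns per iteration.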

5. STOCHASTIC PROCEDURE

By a corollary of Theorem 1, problem (6) is reduced to a minimax problem with a nonconcave inner problem of maximization and a convex final problem of minimization. A vast amount of work has been done on minimax problems but virtually all of the existing numerical methods fail if the inner problem is nonconcave. To overcome this difficulty we adopt an approach based on stochastic optimization techniques.

Consider the fairly general minimax problem

min_{x ∈ X} max_{y ∈ Y} f(x,y) ,    (32)

where f(x,y) is a continuous function of (x,y) and a convex function of x for each y ∈ Y , X ⊂ R^n , Y ⊂ R^m. Although

F(x) = max_{y ∈ Y} f(x,y)

is a convex function, to compute a subgradient

F_x(x) = f_x(x, y(x)) ,  y(x) = arg max_{y ∈ Y} f(x,y) ,

requires a solution y(x) of the nonconcave problem (32). In order to avoid the difficulties involved in computing y(x) one could try to approximate Y by an ε-set Y_ε and consider

y_ε(x) = arg max_{y ∈ Y_ε} f(x,y)

instead of y(x). But, in general, this would require a set Y_ε containing a very large number of elements. An alternative is to use the following ideas. Consider a sequence of sets Y_s , s = 0,1,… , and the sequence of functions

F^s(x) = max_{y ∈ Y_s} f(x,y) .

It can be proved (see, for instance, [21]) that, under certain assumptions concerning the behavior of the sequence Y_s , the sequence of points generated by the rule

x^{s+1} = π_X( x^s − ρ_s f_x(x^s, y^{s+1}) ) ,  y^{s+1} = arg max_{y ∈ Y_s} f(x^s, y)    (33)

(where the step size ρ_s satisfies assumptions such as ρ_s ≥ 0 , ρ_s → 0 , Σ_{s=0}^{∞} ρ_s = ∞) tends, in some sense, to follow the time-path of optimal solutions: for s → ∞

lim [ F^s(x^s) − min_{x ∈ X} F^s(x) ] = 0 .

We will show below how Y_s (which depends on x^s) can be chosen so that we obtain the convergence

min_{x ∈ X} F^s(x) → min_{x ∈ X} F(x) ,

where Y_s contains only a finite number N_s of random elements.

The principal peculiarity of procedure (33) is its nonmonotonicity. Even for differentiable functions F^s(x), there is no guarantee that x^{s+1} will belong to the domain

{ x : F^{s+1}(x) < F^{s+1}(x^s) }

of smaller values of the functions F^{s+1}, F^{s+2}, … (see Figure 1).

Figure 1

Various devices can be used to prevent the sequence {x^s}_{s=0}^∞ from leaving the feasible set X.

The procedure adopted here is the following (see [22]).

We start by choosing initial points x^0 , y^0 , a probabilistic measure P on the set Y and an integer N_0 ≥ 1. Suppose that after the s-th iteration we have arrived at points x^s , y^s. The next approximations x^{s+1} , y^{s+1} are then constructed in the following way. Choose N_s ≥ 1 points

y^{s,1} , y^{s,2} , … , y^{s,N_s}

according to the measure P, and determine the set

Y_s = { y^{s,1} , y^{s,2} , … , y^{s,N_s} } ∪ y^{s,0} ,

where y^{s,0} = y^s. Take

y^{s+1} = arg max_{y ∈ Y_s} f(x^s, y)

and compute

x^{s+1} = π_X( x^s − ρ_s f_x(x^s, y^{s+1}) ) ,

where ρ_s is the step size and π_X denotes projection onto X.
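The following sketch implements this procedure on a hypothetical one-dimensional instance (our illustration; f, X, Y, the measure P and the step-size rule are all our choices, not the paper's):

```python
# Stochastic procedure of Section 5: sample a finite Y_s, keep the
# previous record point y^s, take the argmax over that finite set, then
# make a projected subgradient step in x.  Hypothetical instance:
# f(x,y) = (x - y)^2 on X = Y = [0,1], so F(x) = max(x^2, (1-x)^2)
# and the minimax solution is x* = 1/2.
import numpy as np

rng = np.random.default_rng(0)
f    = lambda x, y: (x - y) ** 2          # convex in x for every fixed y
f_x  = lambda x, y: 2.0 * (x - y)         # subgradient of f with respect to x
proj = lambda x: min(max(x, 0.0), 1.0)    # projection pi onto X = [0,1]

x, y, N = 0.9, 0.5, 5                     # x^0, y^0 and sample size N_s = N
for s in range(1, 3001):
    rho = 1.0 / s                         # rho_s >= 0, rho_s -> 0, sum = inf
    Ys = np.append(rng.uniform(0.0, 1.0, N), y)   # Y_s = N samples + y^{s,0}
    y = float(Ys[np.argmax(f(x, Ys))])    # y^{s+1} = argmax over Y_s only
    x = proj(x - rho * f_x(x, y))         # x^{s+1} = pi_X(x^s - rho_s f_x)

print("x ~", round(x, 3), "  F(x) ~", round(max(x ** 2, (1 - x) ** 2), 3))
```

No global maximization over Y is ever attempted: the inner maximum is taken over a finite random set augmented by the previous record point, which is precisely how the nonconcavity of the inner problem is sidestepped.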

Before studying the convergence of this algorithm, we should first clarify the notation used:

P(A) is the probabilistic measure of a set A ⊆ Y ;

X^* = arg min_{x ∈ X} F(x) ;

γ(ε) = inf_{x ∈ X} P { y : f(x,y) > F(x) − ε } ;

τ(k,ε) = max { t : Σ_{i=k−t}^{k−1} ρ_i ≤ ε } ,

i.e., τ(k,ε) is the largest number of steps preceding step k for which the sum of step sizes does not exceed ε.


Theorem 6. Assume that:

1. X is a convex compact set in R^n and Y is a compact set in R^m;

2. f(x,y) is a continuous function of (x,y) and a convex function of x for any y ∈ Y;

3. The measure P is such that γ(ε) > 0 for ε > 0.

Then for s → ∞

E min_{z ∈ X^*} ||x^s − z|| → 0 .

If, in addition, there exists an ε_0 > 0 such that

Σ_{s=0}^{∞} q^{τ(s,ε)} < ∞    (34)

for all ε ≤ ε_0 and each 0 < q < 1, then, as s → ∞,

min_{z ∈ X^*} ||x^s − z|| → 0

with probability 1.

Proof.

1. First of all let us prove that

F(x^s) − f(x^s, y^s) → 0

in the mean. To simplify the notation we shall assume that N_s = N ≥ 1. According to the algorithm

f(x^s, y^{s+1}) ≥ f(x^s, y^{s,ν}) ,  ν = 0,1,…,N ,

and therefore

f(x^{s+1}, y^{s+1}) − f(x^{s+1}, y^{s,ν}) ≥ [ f(x^{s+1}, y^{s+1}) − f(x^s, y^{s+1}) ] + [ f(x^s, y^{s,ν}) − f(x^{s+1}, y^{s,ν}) ] .


Since there is a constant K such that

| f(x^{s+1}, y) − f(x^s, y) | ≤ K ||x^{s+1} − x^s|| ≤ K² ρ_s ,

then

f(x^{s+1}, y^{s+1}) ≥ f(x^{s+1}, y^{s,ν}) − 2K² ρ_s .

We also have

f(x^{s+1}, y^{s+2}) ≥ f(x^{s+1}, y^{s+1,ν}) ,  ν = 0,1,…,N ,

or, in particular, for ν = 0

f(x^{s+1}, y^{s+2}) ≥ f(x^{s+1}, y^{s+1}) .

Therefore

f(x^{s+1}, y^{s+2}) ≥ f(x^{s+1}, y^{k,ν}) − 2K² ρ_s ,  k = s, s+1 ,  ν = 0,1,…,N ,

and in the same way

f(x^{s+2}, y^{s+2}) ≥ f(x^{s+2}, y^{k,ν}) − 2K² (ρ_s + ρ_{s+1}) ,  k = s, s+1 ,  ν = 0,1,…,N ,

etc.

Continuing this chain of inequalities, we arrive at the following conclusion:

f(x^s, y^s) ≥ f(x^s, y^{k,ν}) − 2K² Σ_{i=k}^{s−1} ρ_i ,  k < s ,  ν = 0,1,…,N .

Thus, if

Y_{s,τ} = ∪_{k=s−τ(s,ε)}^{s} Y_k ,

then

f(x^s, y^s) ≥ max_{y ∈ Y_{s,τ}} f(x^s, y) − 2K² ε .


It is easy to see from this that

P[ F(x^s) − f(x^s, y^s) > (1 + 2K²) ε ] ≤ P[ F(x^s) − max_{y ∈ Y_{s,τ}} f(x^s, y) > ε ] ≤ [ 1 − γ(ε) ]^{N τ(s,ε)} .

Since ρ_s → 0, then τ(s,ε) → ∞ as s → ∞. Hence [1 − γ(ε)]^{N τ(s,ε)} → 0 as s → ∞, and this proves the mean convergence of F(x^s) − f(x^s, y^s) to 0.

2. We shall now show that, under assumption (34), F(x^s) − f(x^s, y^s) → 0 with probability 1. It is sufficient to verify that

P{ sup_{k ≥ s} [ F(x^k) − f(x^k, y^k) ] > (1 + 2K²) ε } → 0 .

We have

P{ sup_{k ≥ s} [ F(x^k) − f(x^k, y^k) ] > (1 + 2K²) ε } ≤ P{ sup_{k ≥ s} [ F(x^k) − max_{y ∈ Y_{k,τ}} f(x^k, y) ] > ε }

≤ Σ_{k=s}^{∞} P[ F(x^k) − max_{y ∈ Y_{k,τ}} f(x^k, y) > ε ] ≤ Σ_{k=s}^{∞} [ 1 − γ(ε) ]^{N τ(k,ε)} → 0 ,

since from assumption (34) the series Σ_k [1 − γ(ε)]^{N τ(k,ε)} converges and its tail therefore vanishes as s → ∞.

3. Let us now prove that E w(x^s) → 0 as s → ∞, where

w(x) = min_{z ∈ X^*} ||x − z||² .

We have

w(x^{s+1}) ≤ w(x^s) − 2 ρ_s [ F(x^s) − min_{z ∈ X} F(z) ] + 2 ρ_s [ F(x^s) − f(x^s, y^s) ] + K² ρ_s² .

Taking the mathematical expectation of both sides of this inequality leads to

E w(x^{s+1}) ≤ E w(x^s) − 2 ρ_s E[ F(x^s) − min_{z ∈ X} F(z) ] + 2 ρ_s β_s + K² ρ_s² ,    (35)

where β_s → 0 as s → ∞ since it has already been proved that E[ F(x^s) − f(x^s, y^s) ] → 0 for s → ∞.

Now let us suppose, contrary to our original assumption, that

E w(x^s) > a > 0 ,  s ≥ s_0 .

It is easy to see that in this case we also have

E[ F(x^s) − min_{z ∈ X} F(z) ] > δ > 0 ,

where δ = δ(a) is a constant which depends on a. Then for sufficiently large s ≥ s_1

E w(x^{s+1}) ≤ E w(x^s) − 2 ρ_s [ δ − 2β_s − K²ρ_s ] ≤ E w(x^s) − δ ρ_s    (36)

since ρ_s → 0 , β_s → 0 and therefore we can suppose that

2 β_s + K² ρ_s ≤ δ/2  for  s ≥ s_1 .

Summing the inequality (36) from s_1 to k , k → ∞, we obtain from the assumption Σ_s ρ_s = ∞ a contradiction to the non-negativeness of E w(x^s). Hence, a subsequence {x^{s_k}} exists such that

E w(x^{s_k}) → 0

as k → ∞. Therefore for a given a > 0 a number k(a) exists such that E w(x^{s_k}) ≤ a for s_k ≥ s_{k(a)}. Let T be such that s_k ≤ T ≤ s_{k+1} and E w(x^T) > a , where s_k > s_{k(a)}. Take λ such that

λ = min { i : s_k < i ≤ T ,  E w(x^j) > a  for  i ≤ j ≤ T } .

Since ρ_s → 0 and β_s → 0, we may assume that 2 β_s + K² ρ_s < δ(a) for s > s_{k(a)}. This and (36) together imply that E w(x^T) ≤ E w(x^λ). Now from (35) and the definition of λ we get

E w(x^T) ≤ E w(x^{λ−1}) + 2 ρ_{λ−1} β_{λ−1} + K² ρ²_{λ−1} ≤ a + δ ρ_{λ−1} .

Thus E w(x^s) → 0, because a was chosen arbitrarily and ρ_{λ−1} → 0.

4. It can be proved that w(x^s) converges to 0 with probability 1 in the same way that we have already proved mean convergence. We have the inequality

w(x^{s+1}) ≤ w(x^s) − 2 ρ_s [ F(x^s) − min_{z ∈ X} F(z) ] + 2 ρ_s η_s + K² ρ_s² ,

where η_s → 0 with probability 1 because it has already been shown that under assumption (34)

F(x^s) − f(x^s, y^s) → 0  as  s → ∞

with probability 1. If we now assume that w(x^s) > a , s > s_0 , for some element of the probabilistic space, we will also have

F(x^s) − min_{z ∈ X} F(z) > δ > 0 ,

etc.

We shall now give a special case in which condition (34) is satisfied.

Example. Assume that ρ_s = a / s^b , a > 0 , 0 < b ≤ 1. Then Raabe's test for series convergence shows that condition (34) is satisfied.


REFERENCES

1. Yu. Ermoliev, Methods of Stochastic Programming (in Russian), Nauka, Moscow (1976).

2. J. Žáčková, "On minimax solutions of stochastic linear programming problems", Čas. Pěst. Mat. 91 (1966), pp. 423-430.

3. J. Dupačová, "Minimax stochastic programs with nonseparable penalties", Proceedings of the 9th IFIP Conference, Warsaw 1979, Part 1, Lecture Notes in Control and Information Sciences 22, Springer-Verlag, Berlin (1980), pp. 157-163.

4. J.H.B. Kemperman, "On a class of moment problems", Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability 2 (1972), pp. 101-126.

5. J.H.B. Kemperman, "The general moment problem: a geometric approach", Ann. Math. Statist. 39 (1968), pp. 93-122.

6. M.G. Krein, "The ideas of P.L. Tchebysheff and A.A. Markov in the theory of limiting values of integrals and their further developments", Amer. Math. Soc. Translations, Series 2, 12 (1951), pp. 1-122.

7. S. Fromovitz, "Nonlinear programming with randomization", Management Science 9 (1965).

8. Yu. Ermoliev, "Method for stochastic programming in randomized strategies", Kibernetika 1 (1970).

9. A.I. Kaplinski and A.I. Propoi, "Stochastic approach to nonlinear programming", Avtomatika i Telemekhanika 3 (1970) (in Russian).

10. A.N. Golodnikov, Finding of Optimal Distribution Functions in Stochastic Programming Problems (Dissertation abstract), Institute of Cybernetics Press, Kiev (1979).

11. G.B. Dantzig, Linear Programming and Extensions, Princeton University Press (1963).

12. R.T. Rockafellar, Convex Analysis, Princeton University Press (1970).

13. E. Nurminski and A. Zelikhovski, "ε-Quasigradient method for solving nonsmooth extremal problems", Kibernetika 5 (1977).

14. Yu. Ermoliev and C. Nedeva, "Stochastic optimization problems with partially known distribution functions", CP-82-60, International Institute for Applied Systems Analysis, Laxenburg, Austria (1982).

15. Ky Fan, Convex Sets and Their Applications, Argonne National Laboratory (1959).

16. R. Wets, Grundlagen Konvexer Optimierung, Lecture Notes in Economics and Mathematical Systems 137, Springer-Verlag, Berlin, Heidelberg and New York (1976).

17. L. Nazareth and R. Wets, "Algorithms for stochastic programs: the case of nonstochastic tenders", WP-83-5, International Institute for Applied Systems Analysis, Laxenburg, Austria (1983).

18. D. Topkis, "Cutting plane methods without nested constraint sets", Operations Research 18 (1970).

19. B.C. Eaves and W. Zangwill, "Generalized cutting plane algorithms", SIAM J. Control 9 (1971).

20. V.F. Demyanov, Nondifferentiable Optimization, Nauka, Moscow (1982).

21. Yu. Ermoliev and A. Gaivoronski, "Simultaneous nonstationary optimization, estimation and approximation procedures", CP-82-16, International Institute for Applied Systems Analysis, Laxenburg, Austria (1982).

22. Yu. Ermoliev and A. Gaivoronski, "A stochastic algorithm for minimax problems", CP-82-88, International Institute for Applied Systems Analysis, Laxenburg, Austria (1982).
