Working Paper

Bounds for Stochastic Programs in Particular for Recourse Problems

Jitka Dupačová

WP-95-085
August 1995

IIASA
International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria
Telephone: +43 2236 807   Fax: +43 2236 71313   E-Mail: info@iiasa.ac.at

Working Papers are interim reports on work of the International Institute for Applied Systems Analysis and have received only limited review. Views or opinions expressed herein do not necessarily represent those of the Institute, its National Member Organizations, or other organizations supporting the work.

Abstract

In this paper, we shall discuss bounds for the optimal value of recourse problems from the point of view of assumptions and of possible generalizations. We shall concentrate on bounds based on first order moment conditions and on those based on sample information. We shall indicate when it is possible to remove the convexity assumptions, when there is hope for extensions to multistage problems, and we shall point out connections between bounds and stability results.

1. BOUNDS FOR STOCHASTIC PROGRAMS

The interest in bounding the optimal value of stochastic programs has been apparent from the very origin of stochastic programming, cf. the Edmundson-Madansky inequality [41] in the fifties, minimax bounds [52] in the sixties, bounds based on the moment problem [11]-[13], [24] or bounds on the error due to the approximation [34], [51] in the seventies.

The reasons come from incomplete information about the distribution and from numerical techniques: we construct and solve approximate problems using various algorithms. We need stopping rules and tests of optimality, an error analysis, strategies for refinement, conclusions concerning the results valid for the true problem, statements about stability and robustness of the output, etc. See e.g. [5], [36] for further discussions.

Bounds often become a part of a numerical procedure and we are naturally interested in numerically tractable bounding techniques. Generally speaking, it is easier to bound the objective function and its optimal value than to get bounds on optimal solutions, and it is not easy to extend the results valid for two-stage stochastic programs to the multistage case. Different approaches require different assumptions; for instance, there are techniques applicable only under appropriate convexity or smoothness assumptions, for independent random variables, for problems of a special structure, etc. In case of an incomplete knowledge of the probability distribution, the design of bounds reflects the existing level of information; the bounds that correspond to sample information are different from those based on knowledge of moments of the underlying probability distribution.

To be more specific, let us consider a class of stochastic programs of the form

(1)   minimize E_P f(x, ω) on the set X,

where X is a given nonempty convex polyhedral set in a finite dimensional space, P is a probability distribution of ω on Ω, and f: X × Ω → R^1 is a given function. We shall assume that the expectation in (1) is finite for all x ∈ X and that the optimal solution of (1) exists.

The above formulation covers the expected utility models and the two-stage stochastic programs with relatively complete recourse. In the latter case, for each x ∈ X and ω ∈ Ω, the value of the random objective is f(x, ω) = c^T x + φ_2(x, ω) with

(2)   φ_2(x, ω) = min { q(ω)^T y : W y = h(ω) - T(ω) x, y ≥ 0 }

the optimal value of the second-stage program.

There are various natural ideas how to get bounds on the optimal value of (1): Any approximation of the objective function E_P f(x, ω) that is valid uniformly for all x ∈ X provides an equally precise approximation of the optimal value. This idea was applied in the first papers of Kaňková, e.g. in [39], and appears for instance also in [53]. One can relax the constraints in the definition of X to get a lower bound or add new constraints to get an upper bound, cf. [50]. It is possible to approximate the random objective function f(x, ω) by another simpler or more convenient function; see the piecewise linear bounds [4], [7], [49].

Further techniques are based on different ideas that come from results on stability and sensitivity with respect to the probability distribution P (e.g. [47]) and are related to asymptotic properties of statistical estimators such as consistency, rate of convergence, asymptotic distribution, probabilistic bounds on large deviations [38]; see also [17] and [48, Chapter 6] and references therein. These results can be used to construct various asymptotic confidence intervals for the true optimal value and optimal solutions. Moreover, for special types of perturbations, such as contamination, one can obtain global non-asymptotic bounds useful in postoptimality analysis; cf. [16], [17]. Error bounds for the optimal value can often be used also for construction of bounds for the optimal solutions provided that some additional assumptions (growth conditions, unique true optimal solutions, etc.) hold true; cf. [39], [47], [48].

We shall deal with bounds for the true optimal value of (1) that exploit in a simple way sample based information (Section 2) and with bounds based on knowledge of moments (Section 3). We shall discuss them from the point of view of assumptions and of possible generalizations. Finally, in Section 4, we shall concentrate on multistage stochastic linear programs with recourse and with random right-hand sides to indicate when it is possible to extend the well-known upper bounding technique based on first order moment conditions to multistage problems.

2. BOUNDS BASED ON SAMPLE INFORMATION

Assume now that there is at our disposal sample information about the true probability distribution P that allows us to construct an empirical distribution function based on the observed data, with the aim to draw conclusions about the optimal value φ(P) of the true program (1) using the optimal value of its sample based counterpart.

Let S be the available sample of size n, say ω^1, ..., ω^n, from the distribution P, and let us denote the value of the objective function based on this sample S of size n at a point x ∈ X as

E_S f(x, ω) := (1/n) Σ_{i=1}^n f(x, ω^i).

The commonly accepted procedure is to approximate the optimal solution of (1) and its optimal value φ(P) by an optimal solution x^S and the optimal value φ^S of the sample based program

(3)   minimize E_S f(x, ω) := (1/n) Σ_{i=1}^n f(x, ω^i) on the set X.

Indeed, the optimal solutions and the optimal value of (3) are consistent estimates of the true optimal solution x(P) and of the true optimal value φ(P) of (1) under relatively modest assumptions; see e.g. [20]. Asymptotic normality of these estimates, however, holds true only under rather stringent assumptions. Therefore we shall base the bounds on the optimal value of (1) on direct exploitation of the standard central limit theorem; see also [45, Chapter 15] and references ibid.

Under the assumptions of existence of a finite true expectation E_P f(x, ω) and variance var_P f(x, ω), the central limit theorem allows us to construct approximate confidence intervals for the values of the true objective function E_P f(x, ω) at individual points x ∈ X. The approximate 1 - α confidence interval is

(4)   E_S f(x, ω) ± t_α (var_S f(x, ω))^{1/2} / √n,

where var_S f(x, ω) denotes the sample variance of the values f(x, ω^i) and t_α denotes the (1 - α/2) quantile of N(0, 1). (For moderate sample sizes n, t_α may be replaced by the 1 - α/2 quantile of Student's distribution with n - 1 degrees of freedom.)
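As a small numerical illustration, the interval (4) can be computed directly from the sampled objective values f(x, ω^i) at a fixed feasible point x. The sketch below is not part of the paper; the sampled values and all names are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm, t as student_t

def confidence_interval(values, alpha=0.05, use_student=False):
    """Approximate 1-alpha confidence interval (4) for E_P f(x, omega),
    computed from the sampled objective values f(x, omega^i)."""
    n = len(values)
    mean, sd = np.mean(values), np.std(values, ddof=1)
    quantile = (student_t.ppf(1 - alpha / 2, df=n - 1) if use_student
                else norm.ppf(1 - alpha / 2))
    half = quantile * sd / np.sqrt(n)
    return mean - half, mean + half

# hypothetical sampled values f(x, omega^i) at one fixed feasible x
values = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=50)
print(confidence_interval(values))            # normal quantile
print(confidence_interval(values, use_student=True))  # Student's t for moderate n
```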

For each ω ∈ Ω separately, we can also get the value

(5)   φ(ω) = min_{x ∈ X} f(x, ω),

and quite similar arguments allow us to derive an approximate 1 - α confidence interval based on the sample S for the true expectation E_P φ(ω), i.e., for the expected value of the population wait-and-see problem, provided that the true expectation and variance var_P φ(ω) are finite:

(6)   E_S φ(ω) ± t_α (var_S φ(ω))^{1/2} / √n,

where E_S φ(ω) := (1/n) Σ_{i=1}^n φ(ω^i) and var_S φ(ω) is the corresponding sample variance.

If the normal approximation used in construction of the confidence interval (4) is precise enough, the confidence interval (4) with x = x^S covers approximately with probability 1 - α the value of the true objective function E_P f(x, ω) at the point x^S. Together with the obvious inequality φ(P) ≤ E_P f(x^S, ω) it implies that

(7)   E_S f(x^S, ω) + t_α (var_S f(x^S, ω))^{1/2} / √n

is an approximate probabilistic upper bound for the true optimal value φ(P). Such an upper bound can obviously be based on any feasible solution x ∈ X. Due to the mentioned results on consistency of the sample based optimal solutions, there is a good reason to use x^S.

To get a sample based lower bound for φ(P), we use (6):

(8)   E_S φ(ω) - t_α (var_S φ(ω))^{1/2} / √n.

The whole procedure of constructing bounds for the true optimal value φ(P) consists of two steps that allow for exploitation of parallel techniques:

(i) Solution of the sample based program (3) to get an optimal solution x^S and the optimal value φ^S, and evaluation of the random objectives f(x^S, ω^i) at the optimal solution x^S for all considered sample values ω^i. The average and variance of the obtained values f(x^S, ω^i) are used in the upper bound (7).

(ii) Solution of the n individual scenario problems is needed to get the optimal values φ(ω^i) for all considered sample points ω^i; the average and variance of these "sample" optimal values provide the necessary entries for construction of the lower bound (8).

A small numerical sketch of this two-step procedure is given below.
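The following sketch illustrates the two steps on a hypothetical one-dimensional recourse objective f(x, ω) = cx + q max(ω - x, 0) + h max(x - ω, 0) with a lognormal sample; the data, names and solver choices are assumptions made only for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative recourse objective f(x, omega): first-stage cost plus a
# piecewise linear second-stage (recourse) cost; chosen only as an example.
c, q, h = 1.0, 3.0, 0.5
def f(x, omega):
    return c * x + q * np.maximum(omega - x, 0.0) + h * np.maximum(x - omega, 0.0)

# Sample S of size n from an assumed "true" distribution P.
n, alpha = 200, 0.05
sample = rng.lognormal(mean=2.0, sigma=0.4, size=n)
t_alpha = norm.ppf(1.0 - alpha / 2.0)

# Step (i): solve the sample based program (3) and evaluate f(x_S, omega^i).
def obj_S(x):
    return np.mean(f(x, sample))
x_S = minimize_scalar(obj_S, bounds=(0.0, sample.max()), method="bounded").x
vals = f(x_S, sample)
upper = vals.mean() + t_alpha * vals.std(ddof=1) / np.sqrt(n)   # bound (7)

# Step (ii): solve the n scenario (wait-and-see) problems phi(omega^i).
phis = np.array([minimize_scalar(lambda x: f(x, w),
                                 bounds=(0.0, sample.max()),
                                 method="bounded").fun for w in sample])
lower = phis.mean() - t_alpha * phis.std(ddof=1) / np.sqrt(n)   # bound (8)

print(f"sample optimal value phi_S = {obj_S(x_S):.3f}")
print(f"approximate probabilistic bounds: [{lower:.3f}, {upper:.3f}]")
```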

An alternative procedure can be based on minimization of the upper bound of the confidence interval (4) on the set X. It resembles the form of the robust optimization objective function (cf. [43]) and this upper bound is tighter than (7). It means that the problem

minimize over x ∈ X:   E_S f(x, ω) + t_α (var_S f(x, ω))^{1/2} / √n

has to be solved instead of (3) in the first step (i) of the above bounding procedure, and the obtained optimal value provides the upper bound. The second step (ii) applies without any change.
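A sketch of this alternative first step, under the same kind of hypothetical recourse cost and sample as in the previous sketch; again, all names and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
c, q, h = 1.0, 3.0, 0.5
def f(x, w):
    return c * x + q * np.maximum(w - x, 0.0) + h * np.maximum(x - w, 0.0)
sample = rng.lognormal(mean=2.0, sigma=0.4, size=200)
t_alpha = norm.ppf(0.975)

# Minimize the upper endpoint of the confidence interval (4) over X; its
# optimal value is itself an approximate probabilistic upper bound,
# tighter than (7).
def ci_upper(x):
    vals = f(x, sample)
    return vals.mean() + t_alpha * vals.std(ddof=1) / np.sqrt(len(sample))

res = minimize_scalar(ci_upper, bounds=(0.0, sample.max()), method="bounded")
print(f"robust-type upper bound {res.fun:.3f} attained at x = {res.x:.3f}")
```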

Similar results can be obtained for the case of sampling from a large finite population, say, Ω = {ω^1, ..., ω^N}, and for a distribution P that assigns equal probability 1/N to all elements of Ω. Except for the finite population factor 1 - n/N, there is no difference between the bounds based on sampling from a finite population and the former ones. To get a tighter lower bound, one can always try to use various variance reduction sampling techniques.

Conclusions. The approximate confidence intervals are distribution free, i.e., they do not depend on the assumed form of the true probability distribution P. No assumptions about convexity or smoothness of the objective function are needed, and these are the main advantages of the introduced approximate probabilistic bounds. On the other hand, the precision of the bounds depends on the precision of the approximation by the central limit theorem, on the sample size, etc., and this may be one of the stumbling blocks. Possible applications of these bounds for construction of stopping rules depend on the algorithm concerned; for instance, upper bounds of the type (7) appear in [10], [31], [32], [42] and [44].

Even when some stochastic dependence can be incorporated (cf. [42]), to extend bounds (7) to multistage stochastic programs with interstage dependent random coefficients does not seem to be straightforward.

In case of sampling from a continuous distribution we can in addition construct rough confidence intervals for the optimal value using the following results of [8], [21]:

If ξ is a continuous random variable with an unknown unimodal density, then for any fixed a and t > 1, the interval (9) with endpoints determined by the expert guess a and the single observation ξ covers the unknown mode θ of the distribution with probability at least 1 - 2/(t + 1). With t = 19, one thus gets an at least 0.9 confidence interval.

The expert "guess" a has to be fixed prior to the random experiment that provides the realization ξ, and it essentially influences the length of the confidence interval (9).

The assumption of a continuous distribution cannot be relaxed; on the other hand, some improvements can be obtained under more stringent assumptions about the distribution; for instance, under the additional assumption of symmetry the confidence level for interval (9) increases further, and it is higher still for the normal distribution of ξ.

A similar result can be derived also for confidence intervals based on several independent observations [8], in which case no prior expert guess is needed and the confidence interval takes on the common form based on the sample mean and the sample standard deviation of the observations; compare (4). For two independent observations ξ_1, ξ_2 the interval (10) is of this form, centered at their sample mean.

These results were used in [18] for stochastic linear programs with individual probabilistic constraints and random right-hand sides. For their application to stochastic programming problems with recourse, we consider a fixed number, say n, of i.i.d. scenarios sampled from the given continuous distribution. One sample of size n can be taken as the random experiment that leads to the observed value φ of the optimal value function. The confidence intervals (9), (10) will cover the mode of the distribution of optimal values computed from n independent scenarios at least with the corresponding probabilities, provided that the distribution is continuous and unimodal. The value a needed in (9) can for instance be chosen as the optimal value of the expected value problem or the value of an approximate solution.

3. BOUNDS BASED ON MOMENT CONDITIONS

Whenever the knowledge of the probability distribution P in (1) reduces to information about its support and about values of some moments, we can use results known from the moment problem (e.g., [6], [12], [13], [16], [35]) to construct bounds for the optimal value φ(P) := min_{x ∈ X} E_P f(x, ω). It is also possible to exploit qualitative information about P such as its unimodality ([12], [15]) or, in case of a discrete probability distribution, the existence of an incomplete ordering of probabilities [9]. Sometimes the moment conditions stem from the intrinsic features of the solved problem [19], e.g., from a low level of information. The moment bounds can also be constructed in the course of an algorithmic solution [37] or considered just for the needs of stability considerations, for worst case analysis and EVPI evaluation. There is a host of papers devoted to these bounds in the context of stochastic programming, to their refinement, to extensions to noncompact supports, etc. We refer to [35] and [45] and references ibid.

The common idea of bounding techniques based on the moment problem is to replace the complete knowledge of P in (1) by knowledge of a set 𝒫 of probability distributions that is supposed to contain P and is defined, inter alia, by moment conditions. We assume that 𝒫 does not depend on the first-stage decision x and we assume the existence of the optimal value φ(P) of (1) for all P ∈ 𝒫.

Given the set 𝒫 we want to construct bounds

(11)   L(x) = inf_{P ∈ 𝒫} E_P f(x, ω),
(12)   U(x) = sup_{P ∈ 𝒫} E_P f(x, ω)

for the objective function, or bounds

(13)   L = min_{x ∈ X} inf_{P ∈ 𝒫} E_P f(x, ω),
(14)   U = min_{x ∈ X} sup_{P ∈ 𝒫} E_P f(x, ω)

for the optimal value φ(P), by means of the moment problem.

The lower bounds (11), (13) reduce to Jensen's inequality [33]:

(15)   L(x) = min_{P ∈ 𝒫} E_P f(x, ω) = f(x, Eω),

provided that the probability distributions P ∈ 𝒫 are characterized, inter alia, by a fixed mean value Eω and that the function f(x, ω) is convex in ω. This bound is attained for the degenerate distribution concentrated at the mean value Eω independently of x ∈ X; hence, the lower bound for φ(P) is the optimal value of the convex deterministic expected value program

(16)   L = min_{x ∈ X} f(x, Eω).

Similarly, for convex functions f(x, ω), the upper bound for the expectation E_P f(x, ω), with P belonging to the set of distributions carried by a fixed convex polyhedron Ω and with a prescribed mean value (a fixed interior point of Ω), is attained and reduces to the Edmundson-Madansky bound [41]. It is easily computable (i.e., it reduces to one-dimensional moment problems and/or the extremal distribution is independent of x) only under special circumstances, for instance, when Ω is a rectangle and f(x, ω) is separable in the components of ω or the random variables are independent, or when Ω is a simplex. (See [5], [35] for a detailed discussion.) Otherwise, for Ω = conv{ω^1, ..., ω^H}, U(x) is the optimal value of the linear program

(17)   max { Σ_{h=1}^H p_h f(x, ω^h) : Σ_{h=1}^H p_h ω^h = Eω, Σ_{h=1}^H p_h = 1, p_h ≥ 0 }

(see e.g. [11], [12], [24], [29], [45, Chapter 5]). If f(x, ω) is convex separable with respect to the individual components of ω, (17) splits into moment problems with respect to one-dimensional random variables carried by closed intervals. The corresponding (marginal) distributions are uniquely determined by the first order moment conditions. This is the case when the extremal distribution does not depend on x and can be given explicitly: it is carried by the vertices of the Cartesian product of the one-dimensional intervals, and the probabilities of these upper bounding scenarios are products of the corresponding probabilities that come from the marginal extremal distributions. This is the most welcome situation when the upper bound for φ(P) follows by solution of the stochastic program for the obtained discrete extremal distribution, without any reference to the inner optimization problem (17).
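For a rectangular support with a random objective that is convex and separable in the components of ω, both bounds can be written down explicitly from the interval endpoints and the means. The sketch below evaluates the Jensen lower bound (15) and the product-form Edmundson-Madansky upper bound at a fixed x; the recourse cost and the data are hypothetical and serve only to illustrate the product structure of the upper bounding scenarios.

```python
import itertools
import numpy as np

# Rectangle support [r_j, R_j] and fixed means mu_j of the components of omega.
r  = np.array([0.0, 1.0])
R  = np.array([4.0, 5.0])
mu = np.array([1.5, 3.0])
q  = np.array([3.0, 2.0])

# Hypothetical recourse cost, convex and separable in the components of omega.
def f(x, omega):
    return x + q @ np.maximum(np.asarray(omega) - x, 0.0)

x = 2.0  # a fixed feasible first-stage decision

# Jensen lower bound (15): evaluate f at the mean value E omega.
jensen = f(x, mu)

# Edmundson-Madansky upper bound: the extremal distribution sits on the vertices
# of the rectangle; the marginal weight on the lower endpoint is (R - mu)/(R - r).
lam = (R - mu) / (R - r)
em = 0.0
for choice in itertools.product([0, 1], repeat=len(r)):            # vertex index
    vertex = np.where(np.array(choice) == 0, r, R)                 # r_j or R_j
    prob = np.prod(np.where(np.array(choice) == 0, lam, 1 - lam))  # product weights
    em += prob * f(x, vertex)

print(f"Jensen lower bound  L(x) = {jensen:.3f}")
print(f"Edmundson-Madansky  U(x) = {em:.3f}")
```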

In general, however, to get the upper bound (14) for the optimal value φ(P) means to use a procedure suitable for solving the minimax problem

min_{x ∈ X} max_{P ∈ 𝒫} E_P f(x, ω) = min_{x ∈ X} U(x).

The assumption of convexity of the random objective f(x, ω) with respect to ω means, except for very special cases, the restriction to two-stage stochastic programs with fixed recourse, fixed coefficients q in the second-stage objective function and with h, T linear in ω. Inclusion of random coefficients q requires developing parallel results for saddle functions that are convex with respect to a group of random parameters (typically, the right-hand sides) and concave with respect to the remaining random parameters (typically, the random parameters of the second-stage objective function); this was done, e.g., in [23], [26].

To get a valid lower bound (15), the convexity assumption can evidently be relaxed if there exists a lower supporting linear function for f(x, ω) at the point Eω. Similarly, [39] points out that the Edmundson-Madansky upper bound holds true also for some nonconvex functions, for instance, for f(x, ω) defined on a multidimensional compact interval Ω and convex separately in each of the components of ω, or multi-chord-dominated on Ω. Whereas Jensen's lower bound (15) also extends to the related classes of convex multistage stochastic programs, both with stage independent right-hand sides [30] and for their dependence [22], lack of convexity seems to be the main stumbling block for designing a computable upper bound of the Edmundson-Madansky type. See Section 4 for details.

Theoretically, the moment problem provides bounds for the expectation E_P f(x, ω) also for nonconvex functions f(x, ω) and under higher moment conditions. For a convex compact set 𝒫 of probability distributions, the expectation (a linear functional in P) attains both its maximal and minimal value at extremal points of 𝒫. The corresponding distributions are discrete ones concentrated at a modest number of points; however, extremal distributions independent of the form of f (and thus independent of the first-stage decisions x) appear only exceptionally. For a fixed x, they can be generated and the bounds can be obtained as the minimal or maximal value of a generalized linear program [5], [24], [45]:

With fixed x and with the set 𝒫 defined by a given compact support Ω and by moment conditions

E_P g_k(ω) ≤ α_k,   k = 1, ..., K,

it is sufficient to select K + 1 elements ω^k of Ω and to assign them probabilities p_k ≥ 0, Σ_k p_k = 1, so that the moment conditions are fulfilled and the expected value Σ_k p_k f(x, ω^k) is maximal (minimal). Duality arguments provide the decision rules needed for replacement of individual points ω^k by other elements of Ω within the generalized revised simplex method; cf. [25].
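A minimal sketch of this inner optimization for a fixed x, with the compact support Ω replaced by a finite grid of candidate points so that the generalized linear program becomes an ordinary LP over the probabilities p_k; the grid, the moment functions g_k and the objective are hypothetical. A genuine generalized linear program would generate the support points by the column generation rules mentioned above instead of fixing a grid.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate support points (a fixed grid standing in for the compact support Omega).
omega = np.linspace(0.0, 4.0, 81)

# Moment conditions: a fixed mean (equality) and a second-moment cap (inequality).
A_eq = np.vstack([np.ones_like(omega), omega])   # sum p = 1,  sum p*omega = mean
b_eq = np.array([1.0, 1.7])
A_ub = np.vstack([omega ** 2])                   # sum p*omega^2 <= alpha
b_ub = np.array([4.0])

# Hypothetical random objective evaluated at a fixed first-stage decision x.
x = 1.0
fvals = np.abs(omega - x) ** 1.5

# Upper bound U(x): maximize the expectation, i.e. minimize its negative.
res_U = linprog(-fvals, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                bounds=(0.0, None), method="highs")
# Lower bound L(x): minimize the expectation.
res_L = linprog(fvals, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                bounds=(0.0, None), method="highs")

print(f"L(x) = {res_L.fun:.4f},  U(x) = {-res_U.fun:.4f}")
print("support of the extremal distribution for U(x):", omega[res_U.x > 1e-9])
```

As the moment problem predicts, the printed extremal distributions are concentrated at only a few of the candidate points.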

Sometimes it is possible to indicate a priori a finite set of elements from Ω, i.e., the scenarios that are of concern from the point of view of the worst case analysis; this is the case of convex f(x, ω), bounded convex polyhedral support and first order moment information on ω, see [12] and its generalization to piecewise convex functions f(x, ω) in [11]. Again, the inner optimization problems that give the bounds L(x), U(x) have to be incorporated into the optimization problem with respect to x. This was applied for the first and second order moment information, see e.g. [13], [35].

A completely different approach for bounding expectations can be based on Korovkin type inequalities, see [2, Chapter 7]. These inequalities provide, for instance, estimates of the difference between the expected value of a function and its value at the expectation of the random variable. They do not necessarily assume convexity and some of them are independent of the explicit form of the function. As an example we shall introduce the following simple result (see Corollary 7.4.1 of [2]):

Theorem. Let w, α be given positive numbers, Ω a nonempty fixed compact convex set in R^k, ω̄ ∈ Ω an arbitrary fixed element and P a probability distribution on Ω whose first order moments are related to ω̄ through w and α. Let h ∈ C^1(Ω) with moduli of continuity ω_1(∂h/∂ω_i, ·) of its partial derivatives. Then the difference E_P h(ω) - h(ω̄) can be estimated in terms of w, α and these moduli of continuity; this estimate is the inequality (19).

There exist more complicated results for smoother h, for higher order moment conditions and also upper bounds on the difference in (19) that use the assumed fixed value E_P ω = ω̄. Results of this type can be helpful for estimating the EVPI when the random objective f(x, ω) is not convex. Instead of convexity, smoothness of gradients is required; for a given compact set Ω and a positive constant ε, the modulus of continuity of a continuous function g is defined as

ω_1(g, ε) := sup { |g(u) - g(v)| : u, v ∈ Ω, ||u - v|| ≤ ε }

with ||·|| the l_1 norm.

Differentiability properties of the random objective function for the two-stage stochastic linear program cannot be expected (recall the form of the second-stage program (2)), but it is not the only type of stochastic programming model. There are examples of smooth penalties for discrepancies whose choice comes from a detailed analysis of the real-life problem without any reference to the second-stage program (2), and the piecewise linear-quadratic stochastic programs, see e.g. [46], enjoy both smoothness and convexity properties.

To conclude this Section, let us mention another problem related to bounds based on the moment problem for classes of probability distributions defined by prescribed values of some moments. This input information is not always completely known; it is based on a sample or past information, on expert's opinion, etc. Accordingly, we face uncertainty again, on a new level. There are scattered results concerning stability with respect to the prescribed values of moments based on parametric programming [15], complemented by statistical analysis [14] and discussed also in the context of a real life application [1].

4. EXTENSIONS TO MULTISTAGE SLP

For the purposes of this Section, it will be expedient to change slightly the notation: in the subscripts of expectations we shall replace the probability distribution P by the relevant components of ω. We shall deal with the following three stage stochastic linear program with recourse and with random right hand sides:

(20)   minimize c_1^T x_1 + E_{ω_1} φ_1(x_1, ω_1)
(21)   subject to A_1 x_1 = b_1, x_1 ≥ 0,

where, for a given first-stage decision x_1 and a realization ω_1,

(22)   φ_1(x_1, ω_1) = min c_2^T x_2 + E_{ω_2|ω_1} φ_2(x_2, ω_2)
(23)   subject to B_2 x_1 + A_2 x_2 = b_2(ω_1), x_2 ≥ 0,

and, for a given x_2 and a realization ω_2,

(24)   φ_2(x_2, ω_2) = min { c_3^T x_3 : B_3 x_2 + A_3 x_3 = b_3(ω_2), x_3 ≥ 0 }.

The subvectors ω_1, ω_2 of the random vector ω generate the right hand sides. We assume that the right-hand sides b_2(ω_1), b_3(ω_2) are linear in ω_1 and in ω_2, that there is an optimal solution for an arbitrary realization of the right hand sides and that the expectations are finite.

We want to construct bounds for the optimal value φ(P) of (20) subject to the subsequent constraints and recursive definitions, using just the first order moment information about ω. However, even under these rather simplifying assumptions, convexity of the recourse costs φ_1(x_1, ω_1) with respect to ω_1 follows only under particular circumstances, such as independence of ω_1, ω_2 or a special form of the conditional distribution function P_{ω_2|ω_1} needed for evaluation of the conditional expectation E_{ω_2|ω_1} φ_2(x_2, ω_2) in (22), for instance,

(25)   P_{ω_2|ω_1}(z) = Q(z - H ω_1),

where Q is a probability distribution function and H is a fixed matrix of proper dimension.

Case 1. To simplify the presentation, assume first that ω_1, ω_2 are independent random variables, for instance the short term interest rates. Their marginal distributions P_1, P_2 are independent of the decision variables x_1, x_2 and are supposed to fulfil the following conditions:

(26)   P_t{ω_t ∈ [r_t, R_t]} = 1 and Eω_t = μ_t, t = 1, 2.

The sets of marginal distributions that fulfil (26) will be denoted 𝒫_1, 𝒫_2 and assumed independent of x_1, x_2.

For this form of the program, joint convexity of the functions φ_t, t = 1, 2, with respect to x_t, ω_t holds true and the lower bound follows from Jensen's inequality [33]. It means that the lower bound can be computed as the optimal value of the expected value program obtained from (20)-(24) by replacing ω_1, ω_2 by their fixed mean values μ_1, μ_2, i.e., by solving the deterministic program for the single scenario [μ_1, μ_2].

The upper bound for E_{ω_2} φ_2(x_2, ω_2) follows from the Edmundson-Madansky inequality [41]: For all distributions P_2 ∈ 𝒫_2, the upper bound is attained for the distribution P_2* ∈ 𝒫_2 concentrated at the points r_2, R_2 with probabilities λ_2 = (R_2 - μ_2)/(R_2 - r_2) and 1 - λ_2:

E_{ω_2} φ_2(x_2, ω_2) ≤ λ_2 φ_2(x_2, r_2) + (1 - λ_2) φ_2(x_2, R_2) =: E*_2 φ_2(x_2, ω_2).

For a fixed x_2 this bound can be obtained by solving the corresponding program of the third stage (24) for the two scenarios ω_2 = r_2 and ω_2 = R_2; moreover, E*_2 φ_2(x_2, ω_2) is evidently convex in x_2. This gives an upper bound for φ_1(x_1, ω_1):

φ_1*(x_1, ω_1) := min c_2^T x_2 + E*_2 φ_2(x_2, ω_2) subject to the constraints (23) on x_2.

The resulting upper bound φ_1*(x_1, ω_1) is jointly convex in x_1, ω_1 and the Edmundson-Madansky bound can be applied once more to get an upper bound for its expectation over the set of distributions 𝒫_1. We thus get an upper bound for the expected recourse costs E_{ω_1} φ_1(x_1, ω_1) in (20): For all marginal distributions P_1 ∈ 𝒫_1, P_2 ∈ 𝒫_2,

(31)   E_{ω_1} φ_1(x_1, ω_1) ≤ λ_1 φ_1*(x_1, r_1) + (1 - λ_1) φ_1*(x_1, R_1),

where λ_1 = (R_1 - μ_1)/(R_1 - r_1).

The upper bound (31) for the optimal value of (20), (21) thus equals the optimal value of the stochastic program based on the scenarios [r_1, r_2], [r_1, R_2], [R_1, r_2], [R_1, R_2] with probabilities λ_1 λ_2, λ_1 (1 - λ_2), (1 - λ_1) λ_2, (1 - λ_1)(1 - λ_2), and it can be obtained as the optimal value of the corresponding linear program.
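A small sketch of how the upper bounding scenarios and their probabilities of Case 1 are assembled from the supports [r_t, R_t] and the means μ_t; the numerical data are hypothetical, and the resulting weighted scenarios would then be passed to the deterministic equivalent linear program of (20)-(24).

```python
import itertools

# Interval supports [r_t, R_t] and fixed means mu_t of the interstage
# independent right-hand side variables omega_1, omega_2 (hypothetical data).
stages = [  # (r_t, R_t, mu_t)
    (0.0, 10.0, 4.0),   # omega_1
    (1.0,  7.0, 5.0),   # omega_2
]

# Edmundson-Madansky weight on the lower endpoint: lambda_t = (R_t - mu_t)/(R_t - r_t).
lams = [(R - mu) / (R - r) for (r, R, mu) in stages]

scenarios = []
for choice in itertools.product([0, 1], repeat=len(stages)):   # 2^(T-1) scenarios
    omega = [stages[t][c] for t, c in enumerate(choice)]        # r_t (c = 0) or R_t (c = 1)
    prob = 1.0
    for t, c in enumerate(choice):
        prob *= lams[t] if c == 0 else 1.0 - lams[t]
    scenarios.append((omega, prob))

for omega, prob in scenarios:
    print(f"scenario {omega}  probability {prob:.3f}")
print("total probability:", sum(p for _, p in scenarios))       # equals 1
```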

There is an obvious generalization to T-stage stochastic linear programs with random right hand sides that are linear functions of stage independent random variables whose distributions belong to sets 𝒫_t described by fixed compact convex supports and by fixed mean values. There are 2^{T-1} upper bounding scenarios identified by sequences of endpoints of the intervals [r_t, R_t] for t = 1, ..., T - 1; compare with [22]. It is also possible to generalize the results to right hand sides that are linear transforms of interstage independent random vectors ω_t whose supports are given simplices and the mean values are fixed interior points of these simplices. Further generalizations concern nonlinear convex stochastic programs with stage independent random right-hand sides, and it is again possible to include another group of stage independent random parameters, say, q_t into the objective functions. The basic requirement is the saddle property of the optimal value functions φ_t with respect to the decision variables and ω on one side and to q on the other side (cf. [23], [26]). If we continue to restrict our studies to random right-hand sides only, the crucial problem is to extend the upper bounding technique to interstage dependence.

Case 2. To illustrate the limitations we continue to discuss the three stage program (20)-(24) under the assumption that the set of the considered distributions 𝒫_2(ω_1) of ω_2 conditional on ω_1 is determined by the support [r_2(ω_1), R_2(ω_1)] and by the conditional mean value μ_2(ω_1). Given ω_1, the upper bound on E_{ω_2} φ_2(x_2, ω_2) is

(32)   λ_2(ω_1) φ_2(x_2, r_2(ω_1)) + (1 - λ_2(ω_1)) φ_2(x_2, R_2(ω_1)), with λ_2(ω_1) = (R_2(ω_1) - μ_2(ω_1))/(R_2(ω_1) - r_2(ω_1)).

To proceed further, this upper bound has to be treated as a function of x_2 and ω_1. Let this function be U_2(x_2, ω_1). The next step involves minimization of

(33)   c_2^T x_2 + U_2(x_2, ω_1)

with respect to the constraints (23) on x_2. Denote again the resulting optimal value by φ_1*(x_1, ω_1).

To get it convex in ω_1, for the sake of the subsequent use of the Edmundson-Madansky upper bound on its expectation, one needs U_2(x_2, ω_1) jointly convex in x_2, ω_1. To this purpose, it is not enough to assume r_2, R_2 linear in ω_1 (recall that λ_2(ω_1) = (R_2(ω_1) - μ_2(ω_1))/(R_2(ω_1) - r_2(ω_1))). One possible set of additional assumptions concerning the definition of 𝒫_2(ω_1) reads:

A1. r_2, R_2 are linear in ω_1 and λ_2 is a fixed number.

Assumption A1 implies μ_2 linear in ω_1 and, under assumption A1, the upper bound U_2(x_2, ω_1) is jointly convex in x_2, ω_1.

For U_2 jointly convex in x_2, ω_1, minimization of (33) provides an upper bound, say, φ_1*(x_1, ω_1) for φ_1(x_1, ω_1) that is convex in x_1 and ω_1, so that the upper bound for the expectation E_{ω_1} φ_1(x_1, ω_1) follows from the Edmundson-Madansky inequality applied to the expectation of φ_1*(x_1, ω_1). Accordingly, under assumption A1, for all distributions P_1 ∈ 𝒫_1 of ω_1 and conditional distributions P_2 ∈ 𝒫_2(ω_1) of ω_2, the upper bound for the objective function in (20) is

c_1^T x_1 + λ_1 φ_1*(x_1, r_1) + (1 - λ_1) φ_1*(x_1, R_1),

and the upper bound for the optimal value of (20), (21) can again be obtained via four scenarios, namely, [r_1, r_2(r_1)], [r_1, R_2(r_1)], [R_1, r_2(R_1)], [R_1, R_2(R_1)] with probabilities λ_1 λ_2, λ_1 (1 - λ_2), (1 - λ_1) λ_2, (1 - λ_1)(1 - λ_2).

Generalization to the T-stage problem means assuming a fixed position of the conditional mean values μ_t(ω_1, ..., ω_{t-1}) (described by fixed values λ_t ∈ (0, 1)) within the intervals [r_t(ω_1, ..., ω_{t-1}), R_t(ω_1, ..., ω_{t-1})] whose endpoints are linear in ω_1, ..., ω_{t-1}. This type of assumptions can be used to model the increasing uncertainty by a growing range of the variables around some trend described by the conditional mean values. The upper bounding scenarios are the sequences obtained from (μ_1, μ_2(μ_1), ..., μ_{T-1}(μ_1, ..., μ_{T-2})) with r_1 or R_1 substituted for μ_1 and r_t(μ_1, ..., μ_{t-1}) or R_t(μ_1, ..., μ_{t-1}) substituted for μ_t, t = 2, ..., T - 1; compare with [22].

An extension to random vectors ω_t whose distributions are carried by simplices is possible again. The assumption of fixed values of λ_t independent of past observations translates to fixed barycentric coordinates of the conditional mean values μ_t(ω_1, ..., ω_{t-1}). The general bounding technique based on barycentric scenarios, see [27], follows, inter alia, from the assumed convexity or saddle property of the objective functions for all stages, for instance, convexity of the function φ_1(x_1, ω_1) defined by (22). The same assumption is needed also for the multistage extension of the upper bounding technique in [22]. Our discussions imply that this type of assumptions corresponds, besides the interstage independence of random right-hand sides, to a rather special form of interstage dependent right-hand sides, so that the conditional distributions fulfil A1 or possess a Markovian property, e.g.,

(34)   ω_2 = H ω_1 + ω'

with ω' independent of ω_1 and H a fixed transition matrix. For T-stage models, the transition matrices H can be stage dependent, which gives

(35)   ω_t = H_t ω_{t-1} + ω'_t

with ω'_t independent of ω_1, ..., ω_{t-1}. It means that the random parameters ω_t in stage t can be represented as a sum of interstage independent random summands related only to stages 1, ..., t. Notice that (34), (35) correspond to the mentioned special form of conditional distributions, see (25).

Conclusions. The upper bounding techniques based on the first order moment information carry over to multistage stochastic linear programs with complete recourse and with random right-hand sides that are linear in the random parameters ω only in special cases, e.g., when one of the following conditions holds true:

• the right-hand sides are interstage independent;
• for all stages, the right-hand sides can be expressed in the form of a sum of interstage independent random vectors related to the preceding stages and to the given stage, see (35);
• for all stages, the conditional distributions of the random parameters ω_t are carried by simplices whose extremal points are linear in the past values of ω_1, ..., ω_{t-1}, whereas the barycentric coordinates of the conditional mean values do not depend on this history; see A1.

Parallel conclusions can be derived for multistage convex stochastic programs with random right-hand sides and also for the convex-concave case with random right-hand sides and recourse costs.

Acknowledgement. The paper was presented at the 7th International Conference on Stochastic Programming in Nahariya, Israel, June 26-30, 1995. The material is based upon work sponsored by the Grant Agency of the Czech Republic under grant No. 402/93/0631.

Some of the ideas detailed here were discussed for the first time during the Workshop "Approximation of Stochastic Optimization Problems" held at IIASA, Laxenburg in July 1993 and during the Minisymposium on Stochastic Programming held at Humboldt University, Berlin in January 1994.

References.

(1) G. Anandalingam: A stochastic programming process model for investment planning. Comput. Opns. Res. 14 (1987) 521-536.

(2) G. A. Anastassiou: Moments in Probability and Approximation Theory. Pitman Research Notes in Mathematics Series 287 (Longman Scientific & Technical, 1993).

(3) J. R. Birge: Aggregation bounds in stochastic linear programming. Math. Progr. 31 (1985) 25-41.

(4) J. R. Birge and S. W. Wallace: A separable piecewise linear upper bound for stochastic linear programs. SIAM J. Control and Optimization 26 (1988) 725-739.

(5) J. R. Birge and R. J.-B. Wets: Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse. Math. Progr. Study 27 (1986) 54-102.

(6) J. R. Birge and R. J.-B. Wets: Computing bounds for stochastic programming problems by means of a generalized moment problem. Math. Oper. Res. 12 (1987) 149-162.

(7) J. R. Birge and R. J.-B. Wets: Sublinear upper bounds for stochastic programs with recourse. Math. Progr. 43 (1989) 131-149.

(8) N. M. Blachman and R. E. Machol: Confidence intervals based on one or more observations. IEEE Trans. Inform. Theory IT-33 (1987) 373-382.

(9) W. Bühler: Capital budgeting under qualitative data information. In: R. L. Crum and F. G. J. Derkinderen (eds.), Capital Budgeting under Conditions of Uncertainty (M. Nijhoff Publ., Boston, 1981) 81-117.

(10) G. B. Dantzig and P. W. Glynn: Parallel processors for planning under uncertainty. Annals of Oper. Res. 22 (1990) 1-21.

(11) J. Dupačová: Minimax stochastic programs with nonconvex nonseparable penalty functions. In: A. Prékopa (ed.), Progress in Operations Research (J. Bolyai Math. Soc. and North Holland, 1976) 303-316.

(12) J. Dupačová: Minimax approach to stochastic linear programming and the moment problem. (In Czech.) EMO 13 (1977) 279-307. Extended abstract ZAMM 58 (1978) T466-T467.

(13) J. Dupačová: Minimax stochastic programs with nonseparable penalties. In: K. Iracki, K. Malanowski and S. Walukiewicz (eds.), Optimization Techniques (Proc. 9th IFIP Conference, Warszawa 1979), Part I. Lecture Notes in Control and Information Sciences 22 (Springer, Berlin, 1980) 157-163.

(14) J. Dupačová: Stability in stochastic programming with recourse - Estimated parameters. Math. Progr. 28 (1984) 72-83.

(15) J. Dupačová: The minimax approach to stochastic programming and an illustrative application. Stochastics 20 (1987) 73-88.

(16) J. Dupačová: Stochastic programming with incomplete information: A survey of results on postoptimization and sensitivity analysis. Optimization 18 (1987) 507-532.

(17) J. Dupačová: Stability and sensitivity analysis for stochastic programming. Annals of Oper. Res. 27 (1990) 115-142.

(18) J. Dupačová: On interval estimates for optimal value of stochastic programs. In: P. Kall (ed.), System Modelling and Optimization, Proc. of the 15th IFIP Conference. LN in Control and Information Sciences 180 (Springer, Berlin, 1992) 556-563.

(19) J. Dupačová: Applications of stochastic programming under incomplete information. J. of Computational and Applied Mathematics 56 (1994) 113-125.

(20) J. Dupačová and R. J.-B. Wets: Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems. Annals of Stat. 16 (1988) 1517-1549.

(21) D. Edelman: A confidence interval for the center of an unknown unimodal distribution based on a sample of size 1. Amer. Statist. 44 (1990) 285-287.

(22) N. C. P. Edirisinghe and W. T. Ziemba: Tight bounds for stochastic convex programs. Oper. Res. 40 (1992) 660-677.

(23) N. C. P. Edirisinghe and W. T. Ziemba: Bounds for two-stage stochastic programs with fixed recourse. Math. Oper. Res. 19 (1994) 292-313.

(24) Yu. Ermoliev: Methods of Stochastic Programming (Nauka, Moscow, 1976) [in Russian].

(25) Yu. Ermoliev, A. Gaivoronski and C. Nedeva: Stochastic optimization problems with incomplete information on distribution functions. SIAM J. Control Optim. 23 (1985) 697-716.

(26) K. Frauendorfer: Stochastic Two-Stage Programming. Vol. 392 of LN in Econ. Math. Syst. (Springer, Berlin, 1992).

(27) K. Frauendorfer: Multistage stochastic programming: Error analysis for the convex case. ZOR 39 (1994) 93-122.

(28) A. A. Gaivoronski: A numerical method for solving stochastic programming problems with moment constraints on a distribution function. Annals of Oper. Res. 31 (1991) 347-369.

(29) H. Gassmann and W. T. Ziemba: Tight upper bound for the expectation of a convex function of a multivariate random variable. Math. Progr. Study 27 (1986) 39-53.

(30) R. Hartley: Inequalities for a class of sequential stochastic decision processes. In: M. A. H. Dempster (ed.), Stochastic Programming (Academic Press, London, 1980) 109-123.

(31) J. L. Higle and S. Sen: Stochastic decomposition: An algorithm for two-stage linear programs with recourse. Math. Oper. Res. 16 (1991) 650-669.

(32) G. Infanger: Planning Under Uncertainty. Solving Large Scale Stochastic Linear Programs (Boyd and Fraser, Danvers, 1994).

(33) J. L. W. V. Jensen: Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 30 (1906) 175-193.

(34) P. Kall: Approximations to stochastic programs with complete fixed recourse. Numerische Mathematik 22 (1974) 333-339.

(35) P. Kall: Stochastic programming with recourse: Upper bounds and moment problems - a review. In: J. Guddat et al. (eds.), Advances in Mathematical Optimization (Akademie-Verlag, Berlin, 1988) 86-103.

(36) P. Kall, A. Ruszczynski and K. Frauendorfer: Approximation techniques in stochastic programming. In: Yu. Ermoliev and R. J.-B. Wets (eds.), Numerical Techniques for Stochastic Optimization (Springer, Berlin, 1988) 33-64.

(37) P. Kall and S. W. Wallace: Stochastic Programming (Wiley, New York, 1994).

(38) Y. M. Kaniovski, A. J. King and R. J.-B. Wets: Probabilistic bounds (via large deviations) for the solutions of stochastic programming problems. RC 18752, IBM Research Division, 1993. To appear in Annals of Oper. Res.

(39) V. Kaňková: An approximate solution of a stochastic optimization problem. In: Trans. 8th Prague Conf. 1978 (Academia, Prague, 1978) 349-353.

(40) W. K. Klein Haneveld: Multilinear approximation on rectangles and the related moment problem. RM 277, Inst. of Economic Research, Univ. Groningen, 1988.

(41) A. Madansky: Inequalities for stochastic linear programming problems. Manag. Sci. 6 (1960) 197-204.

(42) D. P. Morton: Stopping rules for a class of sampling-based stochastic programming algorithms. TR NPSOR-94-003, Dept. of Operations Research, Naval Postgraduate School, Monterey, 1994.

(43) J. M. Mulvey, R. J. Vanderbei and S. A. Zenios: Robust optimization of large scale systems. RR 91-06-04, Dept. of Decision Sciences, The Wharton School, Univ. of Pennsylvania, 1991. To appear in Oper. Res.

(44) M. V. F. Pereira and L. M. V. G. Pinto: Multi-stage stochastic optimization applied to energy planning. Math. Progr. 52 (1991) 359-375.

(45) A. Prékopa: Stochastic Programming (Kluwer Publ., 1995).

(46) R. T. Rockafellar and R. J.-B. Wets: A Lagrangian finite generation technique for solving linear-quadratic problems in stochastic programming. Math. Progr. Study 28 (1986) 63-93.

(47) W. Römisch and R. Schultz: Lipschitz stability for stochastic programs with complete recourse. Report 408, Schwerpunktprogramm der Deutschen Forschungsgemeinschaft "Anwendungsbezogene Optimierung und Steuerung", 1992. To appear in SIAM Journal on Optimization.

(48) R. Y. Rubinstein and A. Shapiro: Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method (Wiley, New York, 1993).

(49) S. W. Wallace: A piecewise linear upper bound on the network recourse function. Math. Progr. 38 (1987) 133-146.

(50) S. W. Wallace and T. Yan: Bounding multi-stage stochastic programs from above. Math. Progr. 61 (1993) 111-120.

(51) R. J.-B. Wets: Solving stochastic programs with simple recourse, II. In: Proc. of the J. Hopkins Conference on System Science and Information (J. Hopkins Press, Baltimore, 1975).

(52) J. Žáčková (Dupačová): On minimax solutions of stochastic linear programming problems. Čas. pěst. matem. 91 (1966) 423-430.

(53) S. A. Zenios and M. S. Shtilman: Constructing optimal samples from a binomial lattice. Journal of Information & Optimization Sciences 14 (1993) 125-147.
