
NUMERICAL TECHNIQUES FOR FINDING ESTIMATES WHICH MINIMIZE THE UPPER BOUND OF THE ABSOLUTE DEVIATION

A. Gaivoronski

Working Papers are interim reports on work of the International Institute for Applied Systems Analysis and have received only limited review. Views or opinions expressed herein do not necessarily represent those of the Institute or of its National Member Organizations.

INTERNATIONAL INSTITUTE FOR APPLIED SYSTEMS ANALYSIS
A-2361 Laxenburg, Austria


FOREWORD

The paper deals with numerical techniques for finding a special type of parameter estimates based on the minimization of the L_1 norm of the error. More specifically, these estimates are derived by minimization of the upper bound of the error, which is evaluated similarly to the upper bounds on the solution of the stochastic optimization problem in WP-86-72. The research reported in this paper was performed in the Adaptation and Optimization Project of the System and Decision Sciences Program.

Alexander B. Kurzhanski
Chairman, System and Decision Sciences Program


CONTENTS

1 Introduction

2 Bounds for Integral Functionals

3 The Case of One-Dimensional Parameter
4 The Case of Vector Parameter

References


NUMERICAL TECHNIQUES FOR FINDING ESTIMATES WHICH MINIMIZE THE UPPER BOUND OF THE ABSOLUTE DEVIATION

A. Gaivoronski

1. INTRODUCTION

We are concerned here with the problem of determining the value of the parameter a ∈ R^n when the information about it comes in the form of observations (1.1), where c(z) is a known vector function and the z_j are fixed points from a finite subset Z of R^m, Z = {z_1, ..., z_l}. The observation errors are identically distributed for the same j; the random variables η_i^{j_1} could depend on η_i^{j_2} for j_1 ≠ j_2, but should be independent for the same j_1. Some additional information is available in the form of the inclusion a ∈ A, where A is a compact subset of R^n. In the simplest case, when n = 1 and the values of the parameter a are observed directly, (1.1) becomes the scheme (1.2) with observations ω_1, ..., ω_s.

In this latter case the L_1-norm estimate a_s is obtained by minimization with respect to x of the sum

F(x, s) = (1/s) Σ_{i=1}^{s} |x − ω_i| ,   (1.3)

where s is the total number of observations. The properties of this estimate (the sample median) are defined by the properties of the function

F(x) = ∫ |x − ω| dH(ω) ,   (1.4)

namely, if F(a) = min_{x ∈ A} F(x), then under fairly mild conditions a_s → a with probability 1 [3, 16, 23].


If we knew the distribution H of the observations ω, then the actual value of the parameter a could be found by minimization of F(x). This is not the case, however, and we can think of the sampling procedure as a means of obtaining information on H. The observations are used to estimate the distribution function H, and this estimate is substituted into (1.4) to determine the estimate of a. If we take the empirical distribution function of the observations and substitute it for H in (1.4), we obtain the function F(x, s) from (1.3) and the estimates a_s. Different L_1-estimates would be obtained if different estimates of the distribution H are used.

In this paper numerical techniques for finding the worst-case L_1-estimates are proposed. These estimates minimize the largest possible L_1-error, which corresponds to the worst distribution consistent in some sense with the observations. The results rely on the techniques developed in [12], which are summarized in Section 2. Section 3 is devoted to the algorithm for finding estimates in the simplest case (1.2). The more complicated sampling scheme (1.1) is considered in Section 4. It should be noted that numerical techniques for finding L_1-estimates were considered in [1-3]. Stochastic optimization techniques and estimation techniques with uncertain noise structure relevant to this problem were discussed in [5-8, 9-15, 17, 21, 24].

2. BOUNDS FOR INTEGRAL FUNCTIONALS

The results from [12] which are relevant to the estimation problem (1.4) are reviewed in this section.

Suppose that we have observations ω_1, ..., ω_s with unknown distribution function H. Our aim is to construct the set G_s of distributions which are in some sense consistent with the set of observations.

Let us assume that the ω_i belong to some set Ω ⊂ R^m with Borel field B; the probability measure H is defined on this field, thus we have a probability space (Ω, B, H). For each fixed s let us consider the sample probability space (Ω^s, B^s, P^s), which is the Cartesian product of s spaces (Ω, B, H). The space (Ω̄, B̄, P̄) is the smallest space which contains all (Ω^s, B^s, P^s). In what follows, "convergence with probability 1" will mean convergence with probability 1 in the space (Ω̄, B̄, P̄). With the set of observations {ω_1, ..., ω_s} the set of distributions G_s will be associated in the following way.

Let us fix the confidence level α: 0 < α < 1. We shall consider events with probability P^s less than α to be "improbable" events and discard them. Let us consider an arbitrary set A ∈ B. Among the s observations {ω_1, ..., ω_s} there are i_A observations which belong to the set A, 0 ≤ i_A ≤ s. The random variable i_A is distributed binomially and its values can be used to estimate H(A) (Mainland [19]). To do this let us consider the following functions:

φ(s, k, z) = Σ_{i=k}^{s} [s! / (i! (s − i)!)] z^i (1 − z)^{s−i} ,
ψ(s, k, z) = Σ_{i=0}^{k} [s! / (i! (s − i)!)] z^i (1 − z)^{s−i} ;   (2.1)

observe that

ψ(s, k, z) = 1 − φ(s, k + 1, z) .   (2.2)

The function φ(s, k, z) is a monotonically increasing function of z on the interval [0, 1], φ(s, k, 0) = 0, φ(s, k, 1) = 1, k ≠ 0. Therefore the solution of the equation φ(s, k, z) = c exists for any 0 ≤ c ≤ 1. Let us take

d(s, k) : φ(s, k, d(s, k)) = α , k ≠ 0 ;
b(s, k) : ψ(s, k, b(s, k)) = α , k ≠ s .   (2.3)

The values d(s, k) and b(s, k) are the lower and upper bounds for the probability H(A) in the following sense.

LEMMA 1. For any fixed set A ∈ B the bound d(s, k) defined in (2.3) possesses the following properties:

1. P^s[ d(s, i_A) > H(A) ] ≤ α for any measure H.

2. If for some function c(i), i = 0 : s, c(i + 1) > c(i), we have P^s[ c(i_A) > H(A) ] ≤ α for any H, then c(i) ≤ d(s, i).

This lemma shows that d(s, i_A) is in a certain sense the best lower bound for the probability H(A). A similar result holds for the upper bound b(s, i_A):

LEMMA 1'. For any fixed set A ∈ B, b(s, k) defined in (2.3) possesses the following properties:

1. P^s[ b(s, i_A) < H(A) ] ≤ α.

2. If for some function c(i), i = 0 : s, c(i + 1) > c(i), we have P^s[ c(i_A) < H(A) ] ≤ α for any H, then c(i) ≥ b(s, i).

These lemmas are proved in [12].
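The bounds d(s, k) and b(s, k) from (2.3) can be computed by bisection, since φ is increasing and ψ is decreasing in z. A minimal sketch in plain Python (the iteration count and the example values s = 10, k = 3, α = 0.05 are arbitrary choices):

```python
from math import comb

def phi(s, k, z):
    # phi(s, k, z) = sum_{i=k}^{s} C(s,i) z^i (1-z)^(s-i), increasing in z.
    return sum(comb(s, i) * z**i * (1 - z)**(s - i) for i in range(k, s + 1))

def psi(s, k, z):
    # psi(s, k, z) = sum_{i=0}^{k} C(s,i) z^i (1-z)^(s-i), decreasing in z.
    return sum(comb(s, i) * z**i * (1 - z)**(s - i) for i in range(k + 1))

def solve_monotone(f, target, increasing, iters=80):
    # Bisection on [0, 1] for a monotone function f.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d_bound(s, k, alpha):
    # Lower bound of (2.3): phi(s, k, d) = alpha, k != 0.
    return solve_monotone(lambda z: phi(s, k, z), alpha, increasing=True)

def b_bound(s, k, alpha):
    # Upper bound of (2.3): psi(s, k, b) = alpha, k != s.
    return solve_monotone(lambda z: psi(s, k, z), alpha, increasing=False)

d = d_bound(10, 3, 0.05)   # ~0.087
b = b_bound(10, 3, 0.05)   # ~0.607
```

For s = 10, k = 3, α = 0.05 this gives the familiar one-sided Clopper-Pearson-type bounds d ≈ 0.087 and b ≈ 0.607 on H(A).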

DEFINITION. The set G_s of the distributions consistent with the set of observations {ω_1, ..., ω_s} for fixed confidence level α is defined as follows:

G_s = { H : d(s, i_A) ≤ H(A) ≤ b(s, i_A) for any measurable A } ,   (2.4)

where d(s, i_A) and b(s, i_A) are defined in (2.3).

Now let us consider the problem of finding upper and lower bounds of the functional ∫ g(ω) dH(ω) on the set G_s. This problem will be used in later sections for defining the special class of L_1-estimates. In this section we are interested in solving the following problem:

minimize (or maximize) with respect to H

∫ g(ω) dH(ω)   (2.5)

subject to the constraint

H ∈ G_s .   (2.6)

Let us assume that g(ω^0) = min_{ω ∈ Ω} g(ω) and g(ω^{s+1}) = max_{ω ∈ Ω} g(ω) exist, and arrange the set of observations {ω_1, ..., ω_s} in order of increasing values of the function g(ω). Here and elsewhere the original order of observations is indicated by a subscript, and arrangement in increasing order of the values of g is indicated by a superscript. The first element of the new arrangement will always be the point with the minimal value of the objective function on the set Ω, and the last element (with number s + 1) will be the point with the maximal value. This arrangement depends on the number s of the time interval, but this dependence will not be explicitly indicated, for simplicity of notation.

The solution of the problem (2.5)-(2.6) is given by the following theorem:

THEOREM 1. Suppose that there exist points ω^0 and ω^{s+1} such that g(ω^0) = min_{ω ∈ Ω} g(ω), g(ω^{s+1}) = max_{ω ∈ Ω} g(ω). Then:

1. The solution of the problem (2.5)-(2.6) exists, and among the extremal measures there always exists a discrete one which is concentrated in s + 1 points, where Δ = max_{ω ∈ Ω} g(ω) − min_{ω ∈ Ω} g(ω).

3. The optimal value of (2.5)-(2.6) converges to ∫ g(ω) dH(ω) with probability 1 as s → ∞.

The proof is contained in [12].
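Property 1 of Lemma 1 can be checked by simulation. The sketch below (plain Python; the choices H(A) = 0.3, s = 50, α = 0.1 and the trial count are arbitrary) draws binomial counts i_A and measures how often the lower bound d(s, i_A) overshoots the true probability:

```python
import random
from math import comb

def phi(s, k, z):
    # Upper binomial tail P{i_A >= k}; increasing in z for k != 0.
    return sum(comb(s, i) * z**i * (1 - z)**(s - i) for i in range(k, s + 1))

def d_bound(s, k, alpha):
    # Solve phi(s, k, d) = alpha by bisection; d(s, 0) is taken to be 0.
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phi(s, k, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

s, alpha, pA = 50, 0.1, 0.3
d = [d_bound(s, k, alpha) for k in range(s + 1)]   # one bound per count

rng = random.Random(0)
trials = 2000
violations = sum(
    d[sum(rng.random() < pA for _ in range(s))] > pA for _ in range(trials)
)
rate = violations / trials   # should stay below alpha (up to MC noise)
```

The observed violation rate stays below α, in line with the lemma; by property 2, no larger monotone lower bound can achieve this for every H.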


3. THE CASE OF ONE-DIMENSIONAL PARAMETER

Using the results of the previous section we shall obtain estimates of the parameter a ∈ R^1 from observations (1.2). It is assumed that a priori bounds a̲ and ā are known. For the purpose of convergence analysis it is irrelevant how far the bounds a̲ and ā are from the actual value of a; it is only necessary that a̲ ≤ a ≤ ā. For computational purposes it is of course preferable to have a̲ and ā as close to a as possible. We shall assume for simplicity that a̲ ≤ ω_i ≤ ā for all i. The different case can be treated in the same manner, but requires more complicated notation.

Let us take some confidence level α and define the admissible set of distributions G_s from (2.4). It is possible to utilize this information in two different ways. One approach is associated with the case when not only the value of the parameter itself is of interest, but it is also important to guarantee the smallest possible value of the error functional ∫ |x − ω| dH(ω). In this case the estimate is constructed which minimizes the worst, over the set G_s, value of the error functional. The second approach is to define the region to which the actual parameter belongs, provided the distribution H can take arbitrary values from the admissible set G_s. We shall consider both approaches for the one-dimensional case, starting with the worst-case estimate.

The worst-case approximation F̄(x, s) to the function F(x), based on the set of distributions G_s consistent with the observations, is defined as follows:

F̄(x, s) = max_{H ∈ G_s} ∫ |x − ω| dH(ω) .   (3.1)

The values of this function can be computed using Theorem 1.

DEFINITION. The worst-case L_1-estimate ā_s of the parameter a is defined by minimization of the function F̄(x, s) over [a̲, ā]:

F̄(ā_s, s) = min_{a̲ ≤ x ≤ ā} F̄(x, s) .   (3.2)

This estimate depends on the confidence level α. It follows from part 3 of Theorem 1 that F̄(x, s) → ∫ |x − ω| dH(ω) with probability 1 for fixed x. The definition of the function F̄(x, s) and the boundedness of the regions to which ω and x belong imply that the function F̄(x, s) is convex and uniformly continuous with respect to x.

Therefore all limit points of the sequence ā_s belong to the set

X* = { x* : ∫ |x* − ω| dH(ω) = min_{a̲ ≤ x ≤ ā} ∫ |x − ω| dH(ω) } .

Therefore ā_s → a if X* = {a}.

The function F̄(x, s) is a convex function, and for any fixed x it is possible to compute the values of this function and its subgradients. Convex programming techniques [18, 22] can be used to minimize this function and obtain the estimate ā_s. However, it is more convenient to develop a special algorithm which utilizes the properties of the function F̄(x, s).

Let us start with defining a sufficient condition for a point x to minimize the function F̄(x, s). Take an arbitrary x: a̲ ≤ x ≤ ā. Let us arrange the observations {ω_1, ..., ω_s} in two orderings. Members of the first ordering will be denoted by ω̲^i(x) and of the second ordering by ω̄^i(x), i = 1 : s. For each i there exist j, k such that ω̲^i(x) = ω_j, ω̄^i(x) = ω_k, and

|x − ω̲^i(x)| ≤ |x − ω̲^{i+1}(x)| , |x − ω̄^i(x)| ≤ |x − ω̄^{i+1}(x)| , i = 1 : s − 1 .

In other words, both orderings arrange the observations in nondecreasing order of the values |x − ω|. They differ only for observations equidistant from x: the ordering with members ω̲^i(x) places first the observations which are to the left of x, while the ordering with members ω̄^i(x) places first the observations which are to the right of x.

Let us define the sums (3.3) and (3.4) built from these orderings, and denote for all a̲ ≤ x ≤ ā

ω^{s+1}(x) = ā if ā − x > x − a̲ , and ω^{s+1}(x) = a̲ otherwise.

THEOREM 2. Suppose that ā_s is the solution of the problem (3.2) and a̲ < ā_s < ā. Then the sum (3.3) is nonpositive and the sum (3.4) is nonnegative at ā_s, where p_{s+1} = 1 − d(s, s) (see (2.9)). Conversely, if for some x = ā_s conditions (3.3) and (3.4) are satisfied, then ā_s is the solution of the problem (3.2).

The proof of the theorem follows directly from the results of the previous section and from the necessary condition for the minimum of a convex function, namely 0 ∈ ∂F̄(x, s), where ∂ denotes the subdifferential of the convex function.

It is clear from the theorem that one of the solutions of the problem (3.2) will be among the points where the sums (3.3) and (3.4) change sign. This can occur either in points ω_i, or where x − ω_i = ω_j − x for some i, j, or at x = (a̲ + ā)/2. This observation leads to the following algorithm:

ALGORITHM 1.

1. Start by selecting an arbitrary point x^0 such that a̲ ≤ x^0 ≤ ā and there exist i, j with ω_i ≤ x^0, ω_j ≥ x^0. Arrange the initial orderings ω̲^i(x^0) and ω̄^i(x^0).

2. Suppose that we obtained the point x^k. Then the method proceeds as follows.

2a. Compute the sums (3.3) and (3.4) at x^k. Now there could be three possibilities:

if (3.3) ≤ 0 and (3.4) ≥ 0, then go to step 2e;
if (3.3) < 0 and (3.4) < 0, then go to step 2b;
if (3.3) > 0 and (3.4) > 0, then go to step 2c.

2b. Find the nearest point to the right of x^k at which the sums (3.3), (3.4) can change sign: an observation ω_i, a point where x − ω_i = ω_j − x for some i, j, or the point (a̲ + ā)/2. If no such point exists, take ā. Obtain x^{k+1} as this point and go to step 2d.

2c. Find the nearest point to the left of x^k at which the sums (3.3), (3.4) can change sign, in the same way as in step 2b. If no such point exists, take a̲. Obtain x^{k+1} as this point and go to step 2d.

2d. Obtain the new orderings ω̲^i(x^{k+1}), ω̄^i(x^{k+1}) for the new point x^{k+1}, and the new ω^{s+1}(x^{k+1}). Go to step 2a.

2e. The estimate ā_s is found: ā_s = x^k. Terminate the execution.

This method finds the estimate ā_s in a finite number of steps.
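The breakpoint scan performed by Algorithm 1 can be illustrated with a simplified sketch (plain Python). Fixed nonnegative weights stand in here for the worst-case weights from (2.9), which in the algorithm depend on x through the orderings, so this shows the candidate-point logic of Theorem 2 rather than the full worst-case estimate:

```python
def weighted_l1(x, obs, wts):
    # Objective sum_i wts_i * |x - obs_i| with fixed nonnegative weights.
    return sum(wi * abs(x - oi) for wi, oi in zip(wts, obs))

def minimize_over_breakpoints(obs, wts, a_lo, a_hi):
    # Theorem 2 locates a minimizer among the points where the sign
    # conditions can flip: observations, points with x - w_i = w_j - x
    # (pairwise midpoints), and the midpoint of [a_lo, a_hi]. For a
    # piecewise-linear convex objective, scanning this finite set suffices.
    cands = {a_lo, a_hi, 0.5 * (a_lo + a_hi)}
    cands.update(obs)
    cands.update(0.5 * (oi + oj) for oi in obs for oj in obs)
    feasible = [c for c in cands if a_lo <= c <= a_hi]
    return min(feasible, key=lambda x: weighted_l1(x, obs, wts))

obs = [1.0, 2.0, 6.0, 8.0]
wts = [1.0, 1.0, 1.0, 1.0]
x_star = minimize_over_breakpoints(obs, wts, 0.0, 10.0)
```

With equal weights this reduces to the sample-median computation of (1.3); for an even number of observations any point between the two middle observations attains the minimal value.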

Now let us consider the problem of constructing the confidence region which contains all the solutions of the problem

min_{a̲ ≤ x ≤ ā} ∫ |x − ω| dH(ω)

for H ∈ G_s. Let us consider the ordering with elements ω^i, i = 0 : s + 1, and define two distributions H_+ and H_−, each concentrated in s + 1 points,

H_± = { (ω^1, p_1^±), ..., (ω^{s+1}, p_{s+1}^±) } ,

where the p_i^+ and p_i^− are defined in (2.9). It appears that the minima of the functions

F_±(x, s) = ∫ |x − ω| dH_±(ω)   (3.5)

define the right and left end points of the confidence interval [a_*, a^*], where

a_* = inf { y : ∫ |y − ω| dH(ω) = min_{a̲ ≤ x ≤ ā} ∫ |x − ω| dH(ω) , H ∈ G_s } ,

a^* = sup { y : ∫ |y − ω| dH(ω) = min_{a̲ ≤ x ≤ ā} ∫ |x − ω| dH(ω) , H ∈ G_s } ;

more specifically, the following result holds:


THEOREM 3. Suppose that the conditions (3.6), (3.7) are satisfied. Then for any x* such that a_* ≤ x* ≤ a^* there exists H̄ ∈ G_s such that

min_{a̲ ≤ x ≤ ā} ∫ |x − ω| dH̄(ω) = ∫ |x* − ω| dH̄(ω) ,   (3.8)

and if for some H̄ ∈ G_s and some x* condition (3.8) is satisfied, then a_* ≤ x* ≤ a^*.

PROOF. According to the necessary and sufficient conditions, the point x*: a̲ < x* < ā is a minimum of the function ∫ |x − ω| dH(ω) if and only if

∫_{a̲ ≤ ω ≤ x*} dH(ω) ≥ ∫_{x* < ω ≤ ā} dH(ω)

and

∫_{x* ≤ ω ≤ ā} dH(ω) ≥ ∫_{a̲ ≤ ω < x*} dH(ω) .

It was assumed for simplicity of notation that ∫_{a̲ ≤ ω ≤ ā} dH(ω) = 1; thus (3.8) implies

x* ≥ inf { y : ∫_{a̲ ≤ ω ≤ y} dH(ω) ≥ 0.5 } ,

which gives

x* ≥ inf { y : sup_{H ∈ G_s} ∫_{a̲ ≤ ω ≤ y} dH(ω) ≥ 0.5 } ,

and therefore a_* ≥ ω^k, where k is defined in (3.6). On the other hand, ω^k is the minimum of the corresponding function in (3.5); therefore a_* = ω^k. Similarly we obtain that a^* = ω^{s−k}.


From the convexity of the set G_s and of the function ∫ |x − ω| dH(ω) it now follows that for any point x*: a_* ≤ x* ≤ a^* there exists H̄ ∈ G_s such that (3.8) is satisfied. The proof is completed.

The same results can be obtained in a totally similar fashion for the case when the set Ω is bounded but does not coincide with A = { x : a̲ ≤ x ≤ ā }.
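A sketch of how the interval endpoints can be extracted from the order statistics, following the median characterization used in the proof: a_* is the smallest y at which the largest possible value of H([a̲, y]) over G_s, which equals b(s, i_A) with i_A the number of observations not exceeding y, reaches 1/2; a^* is obtained symmetrically from d. The reading of the endpoints as order statistics is an assumption of this sketch (plain Python):

```python
from math import comb

def tail_low(s, k, z):
    # psi(s, k, z) = P{Bin(s, z) <= k}; decreasing in z.
    return sum(comb(s, i) * z**i * (1 - z)**(s - i) for i in range(k + 1))

def tail_high(s, k, z):
    # phi(s, k, z) = P{Bin(s, z) >= k}; increasing in z.
    return sum(comb(s, i) * z**i * (1 - z)**(s - i) for i in range(k, s + 1))

def bisect(f, target, increasing):
    # Monotone bisection on [0, 1]; for k = s (resp. k = 0) the equation
    # has no solution and the loop converges to the endpoint 1 (resp. 0).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return lo

def confidence_interval(obs, alpha):
    # a_low: smallest order statistic at which the upper bound b(s, i_A)
    # on H([a_lo, y]) reaches 1/2; a_high: symmetric, from the lower bound d.
    w = sorted(obs)
    s = len(w)
    b = [bisect(lambda z, k=k: tail_low(s, k, z), alpha, False)
         for k in range(s + 1)]
    d = [bisect(lambda z, k=k: tail_high(s, k, z), alpha, True)
         for k in range(s + 1)]
    a_low = next(w[k - 1] for k in range(1, s + 1) if b[k] >= 0.5)
    a_high = next(w[k - 1] for k in reversed(range(1, s + 1)) if d[k - 1] <= 0.5)
    return a_low, a_high

obs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
a_low, a_high = confidence_interval(obs, alpha=0.1)
```

The interval widens as α decreases or the sample shrinks, and it always contains the sample median, in line with the role of [a_*, a^*] as the set of possible minimizers over G_s.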

4. THE CASE OF VECTOR PARAMETER

Let us consider the more complicated case when the estimated parameter a belongs to R^n. It will be assumed that additional input parameters z are present, z ∈ R^m, and some finite set Z = {z_1, ..., z_l} is selected. The information comes with observations (4.1), where c(z) is a known vector function and the η_i^j are independent, identically distributed (for the same j) observation errors. Using the same type of argument as in the introduction, observe that various types of L_1-estimates can be obtained by minimizing the function (4.2), where the p_j are the weights assigned to the points z_j. Suppose that s_j is the number of observations performed at the point z_j. Substitution of the empirical distributions into (4.2) gives the functional (4.3), whose minimization is the most common way of obtaining L_1-estimates ā_s in this case. The worst-case L_1-estimates will be obtained similarly to the simpler case of Section 3, namely by minimizing the upper bound (4.4) of the L_1-error in (4.2).

Here the admissible sets of distributions G^j_{s_j} are defined similarly to (2.4) after fixing the confidence level α.

DEFINITION. The worst-case L_1-estimate x̄_s of the parameters a from (4.1) is defined by minimization of the function F̄(x, s) from (4.4) over the set A.

In fact this estimate depends on all the s_j, not only on s, but this will be skipped in the notation. The values of the function F̄(x, s) can be computed using the results of Theorem 1, and the problem of its minimization can be formulated as a linear programming problem in case the set A is defined by linear constraints. This problem, however, can be of very large scale. Therefore a method based on generalized linear programming [4] will be described here. This method requires the solution of a linear programming problem of comparatively small dimension at each iteration. In what follows it will be assumed that the set A is defined by linear constraints and is bounded. The observations ω belong to the set Ω, which may or may not coincide with A.

ALGORITHM 2.

1. At the beginning select an initial point x^1 ∈ A. For each j make the ordering i(k, j) of the observations at the point z_j, as in Section 2, and compute F̄(x^1, s) and a subgradient F̄_x(x^1, s), where p^0_{kj} is defined according to (2.9).

2. Suppose that the method arrived at the point x^r. We have a collection of points {x^1, ..., x^r} and values F̄(x^1, s), ..., F̄(x^r, s), F̄_x(x^1, s), ..., F̄_x(x^r, s). At this point the algorithm proceeds as follows:


2a. Solve the linear programming problem

min_{u, x} u subject to u ≥ F̄(x^i, s) + F̄_x(x^i, s)(x − x^i) , i = 1 : r , x ∈ A ,

and obtain the point x^{r+1} as a solution of this problem and u^r as its optimal value.

2b. For each j make the ordering i(k, j) at the point x^{r+1} and compute F̄(x^{r+1}, s) and F̄_x(x^{r+1}, s) as in step 1.

2c. If F̄(x^{r+1}, s) = u^r, then assign x̄_s = x^{r+1} and stop; the estimate has been found. Otherwise go to step 2a.

This technique produces the estimate x̄_s in a finite number of steps because the function F̄(x, s) is piecewise linear.
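The generalized linear programming scheme of Algorithm 2 can be sketched in one dimension, where the step-2a linear program reduces to minimizing the pointwise maximum of the accumulated cuts. The empirical L_1 error below is a stand-in for F̄(x, s) (the worst-case weights from (2.9) are not reproduced), so this illustrates the cutting-plane mechanics rather than the full estimator:

```python
def f_and_subgrad(x, obs):
    # Empirical L1 error and one of its subgradients at x
    # (a stand-in for F-bar(x, s) and F-bar_x(x, s)).
    val = sum(abs(x - o) for o in obs)
    g = sum(1.0 if x > o else (-1.0 if x < o else 0.0) for o in obs)
    return val, g

def minimize_model(cuts, lo, hi):
    # Solve min_{lo<=x<=hi} max_i (v_i + g_i (x - x_i)): the step-2a LP in
    # one dimension. The model is piecewise linear, so its minimum lies at
    # an interval endpoint or at an intersection of two cuts.
    cands = {lo, hi}
    for (x1, v1, g1) in cuts:
        for (x2, v2, g2) in cuts:
            if g1 != g2:
                x = ((v2 - g2 * x2) - (v1 - g1 * x1)) / (g1 - g2)
                if lo <= x <= hi:
                    cands.add(x)
    def model(x):
        return max(v + g * (x - xi) for (xi, v, g) in cuts)
    x_best = min(cands, key=model)
    return x_best, model(x_best)

def cutting_plane(obs, lo, hi, x0, tol=1e-9):
    cuts = []
    x = x0
    while True:
        v, g = f_and_subgrad(x, obs)
        cuts.append((x, v, g))
        x_next, u = minimize_model(cuts, lo, hi)
        v_next, _ = f_and_subgrad(x_next, obs)
        # Step 2c: stop when the model value matches the true value.
        if v_next - u <= tol:
            return x_next
        x = x_next

obs = [1.0, 2.0, 6.0, 8.0, 9.0]
x_s = cutting_plane(obs, 0.0, 10.0, x0=0.0)
```

Each cut is exact on the linear piece containing its base point, so on a piecewise-linear objective the loop terminates after finitely many iterations, mirroring the finite-termination argument above.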

REFERENCES

1. Arthanari, T.S. and Y. Dodge (1982): Mathematical Programming in Statistics. Wiley, New York.

2. Barrodale, I. and F.D.K. Roberts (1977): Algorithms for restricted least absolute value estimation. Commun. Statist. Simula. Computa. B6(4), 353-363.

3. Bloomfield, P. and W.L. Steiger (1983): Least Absolute Deviations: Theory, Applications and Algorithms. Birkhäuser, Boston.

4. Dantzig, G.B.: Linear Programming and Extensions. Princeton University Press (1980).

5. Dupačová, J.: Minimax Stochastic Programs with Nonconvex Nonseparable Penalty Functions, in: A. Prékopa (ed.): Colloquia Mathematica Societatis János Bolyai 12. Progress in Operations Research, Eger 1974. North-Holland (1976), pp. 303-316.

6. Dupačová, J.: Minimax Stochastic Programs with Nonseparable Penalties, in: K. Iracki, K. Malanowski, S. Walukiewicz (eds.): Optimization Techniques, Part 1. Lecture Notes in Control and Information Sciences, vol. 22, Springer-Verlag, Berlin (1980), pp. 157-163.

7. Ermoliev, Yu.: Method for stochastic programming in randomized strategies. Kibernetika 1 (1970).

8. Ermoliev, Yu.: Methods of Stochastic Programming (in Russian). Nauka, Moscow (1976).

9. Ermoliev, Yu., A. Gaivoronski and C. Nedeva (1985): Stochastic optimization problems with incomplete information on distribution functions. SIAM J. Control and Optimization 23, 696-716.

10. Gaivoronski, A.: Linearization methods for optimization of functionals which depend on probability measures. Mathematical Programming Study 28 (1986), 157-181.

11. Gaivoronski, A.: Stochastic optimization techniques for finding optimal submeasures, in: V.I. Arkin, A. Shiraev, R. Wets (eds.): Stochastic Optimization. Proceedings of the International Conference, Kiev, 1984. Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, Heidelberg, 1986.

12. Gaivoronski, A.: Computing bounds for the solution of the stochastic optimization problem with incomplete information on distribution of random parameters. International Institute for Applied Systems Analysis, Working Paper WP-86-72.

13. Kall, P.: Stochastic Linear Programming. Springer-Verlag, Berlin, 1976.

14. Kall, P.: Stochastic programs with recourse: an upper bound and the related moment problem. Institut für Operations Research der Universität Zürich, Working Paper, 1986.

15. Kaňková, V.: An approximative solution of a stochastic optimization problem. Transactions of the Eighth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, vol. A. D. Reidel Publishing Company, Boston, 1978.

16. Krishnaiah, P.R. and P.K. Sen (eds.): Handbook of Statistics, vol. 4: Nonparametric Methods. North-Holland, 1984.

17. Kurzhanskii, A.B.: Control and Observation under Conditions of Uncertainty. Nauka, Moscow, 1977.

18. Lemarechal, C.: Bundle methods in nonsmooth optimization, in: C. Lemarechal and R. Mifflin (eds.): Nonsmooth Optimization. IIASA Proceedings Series, vol. 3, pp. 79-102. Laxenburg, Austria (1977).

19. Mainland, D. (1948): Statistical methods in medical research. Canadian Journal of Research 26 (section E), 1-166.

20. Okamoto, M. (1958): Some inequalities relating to the partial sum of binomial probabilities. Annals of the Institute of Statistical Mathematics, Tokyo, 10, 29-35.

21. Prékopa, A. (1973): Contributions to the theory of stochastic programming. Mathematical Programming 4, 202-221.

22. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 14 (1976), 877-898.

23. Serfling, R.: Approximation Theorems of Mathematical Statistics. Wiley, 1980.

24. Wets, R.: Stochastic programming: solution techniques and approximation schemes, in: A. Bachem, M. Grötschel and B. Korte (eds.): Mathematical Programming: The State of the Art. Springer-Verlag, Berlin, 1983, pp. 566-603.
