
THE DYNAMIC SIMPLEX-METHOD*

A.I. Propoi**

V.E. Krivonozhko***

June 1977

Research Memoranda are interim reports on research being conducted by the International Institute for Applied Systems Analysis, and as such receive only limited scientific review. Views or opinions contained herein do not necessarily represent those of the Institute or of the National Member Organizations supporting the Institute.

* This research has been carried out at the Institute for Systems Studies of the State Committee for Science and Technology and the USSR Academy of Sciences in support of research on Dynamic Linear Programming carried out at IIASA.

** Institute for Systems Studies, Moscow, USSR and IIASA.

*** Institute for Systems Studies, Moscow, USSR.


PREFACE

Finding optimal solutions to models is a central tool of the Systems and Decision Sciences Area, and many optimization problems in economic, management, and technological systems can be reduced to dynamic linear programming problems. There are many different approaches and methods for tackling dynamic linear programming problems, using decomposition, penalty functions, augmented Lagrangians, nested decomposition, generalized gradients, and other techniques. The simplex method is by all means the basic method for the solution of linear programming problems. However, the extension of the simplex method to the dynamic case has yet to be made.

This paper presents a finite-step algorithm which seems to be the natural and straightforward extension of the simplex method to the dynamic case. The paper contains only a theoretical description and evaluation of the algorithm. Theoretical reasoning shows that this algorithm may serve as a base for developing effective computer codes for the solution of dynamic LP problems (just as the simplex method did for the solution of static LP problems). However, the final judgment of the algorithm's efficiency can be made only after a definite period of its use in practice.


ABSTRACT

In this paper a finite-step method for solving dynamic linear programming (DLP) problems is described.

Many optimization problems in economics, management, technology, etc. are formulated as DLP problems, because it has become difficult to make a decision without taking into account the possible consequences of that decision over a certain time period.

As DLP problems are large-scale by nature, the standard "static" LP methods become ineffective in the dynamic case, and the development of methods specially oriented to DLP problems is needed.

The method suggested is a natural and straightforward extension of one of the most effective static LP methods--the simplex method--to DLP. A new concept--a set of local bases (one for each time step)--is introduced, thus enabling a considerable reduction of the requirements on computer core memory and CPU time.

In the proposed method a system of T local (m x m) bases is introduced, and the basic simplex procedures (selection of vectors to be removed from and introduced into the basis, the pricing procedure, and the transformation of bases) as applied to this system of T local bases are described. An evaluation of the possibilities of the method and its connection with the compact inverse LP method are discussed.


CONTENTS

Introduction
1. Statement of the Problem
2. Auxiliary Problem
3. Equivalent Problem
4. Control variation
5. Objective function variation
6. Transformation of the basis
7. Connection with the method of compact inverse
8. General procedure of the dynamic simplex-method
9. Degeneracy
10. Evaluation of the algorithm
Conclusion
Figure
Tables
References


The Dynamic Simplex-Method

INTRODUCTION

Methods of linear programming (LP) are now well studied and have an extensive field of applications [1,2,3]. Dynamic linear programming (DLP) is a new development of LP methods for planning and control of complex systems.

Many optimization problems in economic, management, and technological systems can be reduced to DLP problems (see, for example, [1-6]). However, the development of DLP methods and their applications is restrained by the lack of universal DLP computer codes. Therefore many DLP problems are now being solved by reducing them to static ones and using the standard LP codes for their solution (see, for example, [4,6]).

As DLP problems are by nature large-scale, this "static" approach is limited in its possibilities, and the development of algorithms specially oriented to dynamic LP problems is needed.

In recent years, methods for DLP have been developed which make it possible to take into account the specific features of dynamic problems [7,9].* But the extension of the most effective LP finite-step method--the simplex method--to the dynamic case has yet to be made.

The dynamic simplex method was suggested in [10,11]. This approach makes essential use of the dynamic specifics of DLP problems. The main concept of the static simplex method--the basis--is replaced by a set of local bases, introduced over the whole planning period. This allows a significant saving in the amount of computation and computer core memory, and permits the development of a set of finite-step DLP methods (primal, dual and primal-dual dynamic simplex methods) which are direct extensions of the corresponding static finite-step methods.

* See also references in [3].


In this paper, a detailed description of the dynamic simplex method is given, and its connection with the method of compact inverse is discussed.

1. STATEMENT OF THE PROBLEM

Consider the DLP problem in the following canonical form.

Problem 1.1. Find a control u = {u(0),...,u(T-1)} and a trajectory x = {x(0),...,x(T)}, satisfying the state equations

x(t+1) = A(t)x(t) + B(t)u(t) ,   t = 0,1,...,T-1 ,        (1.1)

with initial condition

x(0) = x^0 ,                                              (1.2)

and constraints

G(t)x(t) + D(t)u(t) = f(t) ,   t = 0,1,...,T-1 ,          (1.3)

u(t) ≥ 0 ,   t = 0,1,...,T-1 ,                            (1.4)

which maximize the performance index

J1(u) = a(T)x(T) .                                        (1.5)

Here the vector x(t) = {x1(t),...,xn(t)} defines the state of the system at stage t in the state space X, which is assumed to be the n-dimensional euclidean space; the vector u(t) = {u1(t),...,ur(t)} ∈ E^r (the r-dimensional euclidean space) specifies the controlling action at stage t; the vectors a(T), x^0, f(t) and the matrices A(t), B(t), G(t), D(t) are respectively of dimensions (n x 1), (n x 1), (m x 1), and (n x n), (n x r), (m x n), (m x r), and are assumed to be given.


In vector products the right vector is a column and the left vector is a row; thus ab is the inner product of the vectors a and b, aA is the product of the row-vector a and the matrix A, and Aa is the product of the matrix A and the column-vector a.
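As an illustrative aside (not part of the original memorandum), the data of Problem 1.1 and the row/column conventions above can be mirrored directly in NumPy; the names DLPData and rollout below are hypothetical, and the sketch only stores the given matrices and integrates the state equations (1.1)-(1.2) for a chosen control.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DLPData:
    """Data of Problem 1.1:  x(t+1) = A(t)x(t) + B(t)u(t),  x(0) = x0,
    G(t)x(t) + D(t)u(t) = f(t),  u(t) >= 0,  maximize a(T)x(T)."""
    A: list          # T matrices of shape (n, n)
    B: list          # T matrices of shape (n, r)
    G: list          # T matrices of shape (m, n)
    D: list          # T matrices of shape (m, r)
    f: list          # T vectors of shape (m,)
    aT: np.ndarray   # terminal objective vector a(T), shape (n,)
    x0: np.ndarray   # initial state x0, shape (n,)

def rollout(data: DLPData, u: list) -> list:
    """Integrate the state equations (1.1) with initial condition (1.2)
    for a control sequence u = [u(0), ..., u(T-1)]."""
    x = [data.x0]
    for t, ut in enumerate(u):
        x.append(data.A[t] @ x[t] + data.B[t] @ ut)
    return x

# With 1-D NumPy arrays, a @ b is the inner product ab, a @ A the row-vector
# product aA, and A @ a the column-vector product Aa, matching the conventions.
```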

The choice of a canonical form for the problem is to some extent arbitrary, various modifications and particular cases of Problem 1.1 being possible (e.g., the integers n, m and r may depend on the stage number t; constraints on the state and control variables can be separate; the state equations may include time lags of the state and/or control variables; the performance index may depend on the whole sequences {u(t)} and/or {x(t)}, etc. [3,12]). However, these variants of Problem 1.1 can either be reduced to this problem [12,13] or the results stated below may be used directly for their solution.

Along with the primary Problem 1.1, a statement of the dual problem will be necessary.

Introducing the Lagrange multipliers λ(t) ∈ E^m (t = T-1,...,1,0) and p(t) ∈ E^n (t = T,...,0) for the constraints (1.3) and (1.1), (1.2) respectively, one can obtain from (Table 1) the following dual DLP problem [12].

Problem 1.2. Find a dual control λ = {λ(T-1),...,λ(0)} and a dual (conjugate) trajectory p = {p(T),...,p(0)}, satisfying the costate (conjugate) equations

p(t) = p(t+1)A(t) - λ(t)G(t) ,   t = T-1,...,1,0 ,

with boundary condition

p(T) = a(T) ,

and constraints

p(t+1)B(t) - λ(t)D(t) ≤ 0 ,   t = 0,1,...,T-1 ,

which minimize the performance index

J2(λ) = p(0)x^0 + Σ_{t=0}^{T-1} λ(t)f(t) .

Definition 1.1. A feasible control of the DLP Problem 1.1 is a vector sequence u = {u(0),...,u(T-1)} which, with some trajectory x = {x(0),...,x(T)}, satisfies conditions (1.1) to (1.4).

An optimal control of Problem 1.1 is a feasible control u* which maximizes (1.5).

Feasible dual controls λ and an optimal dual control λ* of the dual Problem 1.2 are defined in a similar way.

The sets of all feasible controls u and λ of Problems 1.1 and 1.2 will be denoted by Ω and Λ.

Theorem 1.1 (Duality Theorem [12]). If one of the dual Problems 1.1 and 1.2 has an optimal control, then the other has an optimal control as well, and the values of the performance indexes of the primary and dual Problems 1.1 and 1.2 are equal:

J1(u*) = J2(λ*) .

If the performance index of either Problem 1.1 or 1.2 is unbounded (for Problem 1.1 from above and for Problem 1.2 from below), then the other problem has no feasible control.


2. AUXILIARY PROBLEM

Let U = E^{rT} be the control space of Problem 1.1, with u = {u(0),...,u(T-1)} ∈ U. In the control space U, Problem 1.1 can be rewritten as follows.

One can obtain from the state equations (1.1) that [13]:

x(t) = Y(t,0)x^0 + Σ_{τ=0}^{t-1} Y(t,τ+1)B(τ)u(τ) ,   t = 1,...,T ,        (2.1)

where

Y(t,τ) = A(t-1)A(t-2)...A(τ)   (t > τ) ,   Y(t,t) = I ,

and I is the identity matrix.
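As a quick numerical check (illustrative only, not from the paper; transition and state_via_2_1 are hypothetical names), the matrices Y(t,τ) can be accumulated as products of the A(s), and formula (2.1) then reproduces the step-by-step rollout of (1.1):

```python
import numpy as np

def transition(A, t, tau):
    """Y(t, tau) = A(t-1) A(t-2) ... A(tau), with Y(t, t) = I."""
    out = np.eye(A[0].shape[0])
    for s in range(tau, t):
        out = A[s] @ out
    return out

def state_via_2_1(A, B, x0, u, t):
    """Evaluate x(t) by formula (2.1) instead of the recursion (1.1)."""
    x = transition(A, t, 0) @ x0
    for tau in range(t):
        x = x + transition(A, t, tau + 1) @ (B[tau] @ u[tau])
    return x

# The two expressions agree on random data:
rng = np.random.default_rng(0)
n, r, T = 3, 2, 4
A = [rng.standard_normal((n, n)) for _ in range(T)]
B = [rng.standard_normal((n, r)) for _ in range(T)]
x0, u = rng.standard_normal(n), [rng.standard_normal(r) for _ in range(T)]
x = x0
for t in range(T):
    x = A[t] @ x + B[t] @ u[t]
assert np.allclose(x, state_via_2_1(A, B, x0, u, T))
```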

By substituting (2.1) into (1.3) and taking into account (1.2), we obtain the constraints on the controls u in explicit form (Table 2):

Σ_{τ=0}^{t} W(t,τ)u(τ) = h(t) ,   t = 0,1,...,T-1 .        (2.2)

Here

W(t,t) = D(t) ,   W(t,τ) = G(t)Y(t,τ+1)B(τ)   (t > τ) ,

h(t) = f(t) - G(t)Y(t,0)x^0 .

The matrices W(t,τ) are of dimension (m x r) and the vectors h(t) are of dimension (m x 1).
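To make the block-triangular structure explicit, the following illustrative sketch (build_W_h is a hypothetical helper, not part of the paper) assembles the full (mT x rT) constraint matrix W of (2.2) and the stacked right-hand side h from the problem data:

```python
import numpy as np

def Y(A, t, tau):
    """Transition matrix Y(t, tau) = A(t-1) ... A(tau), with Y(t, t) = I."""
    out = np.eye(A[0].shape[0])
    for s in range(tau, t):
        out = A[s] @ out
    return out

def build_W_h(A, B, G, D, f, x0):
    """Assemble W and h of (2.2): block (t, tau) of W is D(t) on the diagonal,
    G(t) Y(t, tau+1) B(tau) below it, and zero above; h(t) = f(t) - G(t) Y(t, 0) x0."""
    T = len(f)
    m, r = D[0].shape
    W = np.zeros((m * T, r * T))
    h = np.zeros(m * T)
    for t in range(T):
        W[t*m:(t+1)*m, t*r:(t+1)*r] = D[t]
        for tau in range(t):
            W[t*m:(t+1)*m, tau*r:(tau+1)*r] = G[t] @ Y(A, t, tau + 1) @ B[tau]
        h[t*m:(t+1)*m] = f[t] - G[t] @ Y(A, t, 0) @ x0
    return W, h
```

Only the lower block triangle is nonzero; avoiding explicit work with this large matrix is exactly what motivates the dynamic simplex method.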


The performance index (1.5) will be rewritten, respectively, in the form

J̃1(u) = Σ_{t=0}^{T-1} c(t)u(t) ,

where

c(t) = q(t+1)B(t) .

Here the vectors q(t) satisfy the costate equation

q(t) = q(t+1)A(t) ,   t = T-1,...,0 ,

with

q(T) = a(T) .
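The backward recursion for q(t) and the coefficients c(t) is cheap to compute; the sketch below (illustrative only, with the hypothetical name reduced_objective) also verifies on random data the identity a(T)x(T) = Σ_t c(t)u(t) + q(0)x(0), which is the source of the constant q(0)x^0 mentioned after Problem 2.1.

```python
import numpy as np

def reduced_objective(A, B, aT):
    """Backward recursion q(T) = a(T), q(t) = q(t+1) A(t); returns q(0..T)
    and the coefficients c(t) = q(t+1) B(t)."""
    T = len(A)
    q = [None] * (T + 1)
    q[T] = aT
    for t in range(T - 1, -1, -1):
        q[t] = q[t + 1] @ A[t]          # row-vector times matrix, as in Section 1
    c = [q[t + 1] @ B[t] for t in range(T)]
    return q, c

# Check a(T)x(T) = sum_t c(t)u(t) + q(0)x(0) on random data.
rng = np.random.default_rng(1)
n, r, T = 3, 2, 4
A = [rng.standard_normal((n, n)) for _ in range(T)]
B = [rng.standard_normal((n, r)) for _ in range(T)]
aT, x0 = rng.standard_normal(n), rng.standard_normal(n)
u = [rng.standard_normal(r) for _ in range(T)]

x = x0
for t in range(T):
    x = A[t] @ x + B[t] @ u[t]

q, c = reduced_objective(A, B, aT)
assert np.isclose(aT @ x, sum(c[t] @ u[t] for t in range(T)) + q[0] @ x0)
```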

Denoting the constraint matrix of (2.2) by W (of dimension mT x rT), we can reformulate Problem 1.1 in the following equivalent form (see also Table 2):

Problem 2.1. Find a control u = {u(0),...,u(T-1)} satisfying the constraints

Wu = h ,   u ≥ 0 ,        (2.6)

which maximizes the performance index

J̃1(u) = cu .        (2.7)

Here h = [h(0),...,h(T-1)]^T; q = [q(0),...,q(T-1)]^T; c = [c(0),...,c(T-1)]^T; the superscript T denotes transposition; J̃1 differs from J1 by the constant q(0)x^0.
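For small instances, the equivalent static Problem 2.1 can be handed directly to a general-purpose LP solver. The sketch below is only an illustrative baseline of that "static" approach (which the Introduction argues scales poorly), not the dynamic simplex method of this paper; solve_problem_2_1 is a hypothetical helper, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

def solve_problem_2_1(W, h, c):
    """Solve  max c u  subject to  W u = h,  u >= 0  (Problem 2.1).
    linprog minimizes, so the objective is negated."""
    c = np.asarray(c, dtype=float)
    res = linprog(c=-c, A_eq=W, b_eq=h, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x, -res.fun   # stacked optimal control u* and J~1(u*)

# Example wiring with the earlier sketches:
#   W, h = build_W_h(A, B, G, D, f, x0)
#   q, c = reduced_objective(A, B, aT)
#   u_opt, J = solve_problem_2_1(W, h, np.concatenate(c))
#   # J1 of Problem 1.1 is then J + q[0] @ x0.
```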


It is evident that the sets of optimal controls for Problems 1.1 and 2.1 are the same.

Problem 2.1 has a block-triangular constraint matrix (2.2) and has been studied in many works (see [1]). However, statement of the problem constraints in the form (2.2) does not allow use of the dynamic nature of the problem in full measure. More natural, and therefore more effective, would be direct use of the specific character of Problem 1.1 as an optimal control problem.

Before considering this approach, the general scheme of the simplex method as applied to Problem 2.1 will be described.

If the structure of the matrix W is not taken into account, Problem 2.1 is an ordinary LP problem in canonical form.

Let u be a feasible control; for each t we define the index sets of positive and zero components of u(t):

I(u,t) = {i : ui(t) > 0} ,   I0(u,t) = {i : ui(t) = 0} .

The unions of these sets over all t = 0,1,...,T-1 will be denoted by

I(u) = {(i,t) : ui(t) > 0} ,   I0(u) = {(i,t) : ui(t) = 0} .

Denote also the columns of the matrix W by wi(t) (i = 1,...,r; t = 0,1,...,T-1): wi(t) ∈ E^{mT}. In that case the constraints (2.6) can be rewritten as

Σ_{t=0}^{T-1} Σ_{i=1}^{r} wi(t)ui(t) = h ,   ui(t) ≥ 0 .


Definition 2.1. A basic feasible control of Problem 1.1 is a feasible control u for which the vectors wi(t), (i,t) ∈ I(u), are linearly independent.

A nondegenerate basic feasible control is a basic feasible control u for which the vectors wi(t), (i,t) ∈ I(u), constitute a basis in E^{mT}.

Evidently a basic control is an extreme point of the polyhedral set Ω.
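Definition 2.1 can be tested numerically by checking the rank of the columns of W that correspond to the positive components of u. The sketch below is illustrative only (support and is_basic_feasible are hypothetical names) and uses a dense rank computation rather than any bookkeeping from the paper.

```python
import numpy as np

def support(u, tol=1e-12):
    """Index set I(u) = {(i, t) : u_i(t) > 0} for a control stored as an array of shape (T, r)."""
    T, r = u.shape
    return [(i, t) for t in range(T) for i in range(r) if u[t, i] > tol]

def is_basic_feasible(W, u):
    """True if the columns w_i(t), (i, t) in I(u), are linearly independent (Definition 2.1).
    Column (i, t) of W is stored at position t*r + i, with i counted from zero."""
    T, r = u.shape
    idx = [t * r + i for (i, t) in support(u)]
    if not idx:
        return True                     # an empty set of columns is independent
    return np.linalg.matrix_rank(W[:, idx]) == len(idx)
```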

Definition 2.2. The basis of a basic control u is a system of mT linearly independent vectors wi(t) which contains all the vectors wi(t), (i,t) ∈ I(u).

Denote by IB(u) the set of indices corresponding to the basic vectors wi(t); IN(u) is the set of indices corresponding to the remaining vectors wi(t) of matrix W.

In general, IB(u) ⊇ I(u). If u is a nondegenerate basic control, then IB(u) = I(u).

Let m(t) denote the number of basic components of a basic control u at step t, that is, the number of indices (i,t) ∈ IB(u) with this t. Evidently,

Σ_{t=0}^{T-1} m(t) = mT .

We shall now consider the simplex-procedure of finding the optimal control u* in terms of Problem 2.1.

As usual, without any loss in generality we assume that Problem 2.1 (1.1) is feasible and that any basic feasible control is nondegenerate.

In accordance with Definitions 2.1 and 2.2, any basic feasible control may be represented as

u = {uB, uN} ,   with uB ≥ 0 ,   uN = 0 .

Let

u^0 = {uB^0, 0} ,   uB^0 = {ui^0(t)} ,   (i,t) ∈ IB(u^0) ,

be a given basic feasible control with the associated set of linearly independent vectors wi(t), (i,t) ∈ IB(u^0). Then

Σ_{(i,τ) ∈ IB(u^0)} wi(τ)ui^0(τ) = h ,        (2.9)

where all ui^0(τ) > 0.

Denote by WB the matrix with columns wi(t), (i,t) ∈ IB(u^0) (the basic matrix). Then (2.9) can be written in the form

WB uB^0 = h .

By Definition 2.2, WB is a nonsingular matrix; therefore

uB^0 = WB^{-1} h .
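Purely as an illustration of the formulas above (basic_control is a hypothetical helper, and this is not the paper's local-bases scheme), a basic control can be recovered from a given index set IB by extracting the corresponding columns of W and solving one mT x mT system:

```python
import numpy as np

def basic_control(W, h, IB, r, T):
    """Form the basic matrix WB from the columns w_i(t), (i, t) in IB, solve
    WB u_B = h, and return the stacked control with the nonbasic part set to zero."""
    cols = [t * r + i for (i, t) in IB]      # column (i, t) sits at position t*r + i
    WB = W[:, cols]                          # basic matrix, nonsingular by Definition 2.2
    uB = np.linalg.solve(WB, h)              # u_B = WB^{-1} h
    u = np.zeros(r * T)
    u[cols] = uB
    return u
```

The point of the dynamic simplex method is precisely to avoid factoring this full mT x mT matrix, replacing it by the system of T local bases described in the Abstract.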

