
6.3 Optimization

6.3.2 Class SqpFortran

The class SqpFortran contains the C++ wrapper of the FORTRAN77 SQP solver NLPQLP of K. Schittkowski, see also [47, 202]. It is included in the library libsqpf and the code is linked as an external source into the C++ class.

The abstract optimization problem for this method is given by

    Minimize F(x), x ∈ Rⁿ

    s.t. G_i(x) = 0, i ∈ E,
         H_i(x) ≥ 0, i ∈ I,

and the method uses the augmented Lagrangian merit function

    L_ξ(x, µ) := F(x) − ∑_{i ∈ J} ( µ_i c_i(x) − (ξ_i/2) c_i(x)² ) − (1/2) ∑_{i ∈ K} µ_i² / ξ_i

where c_i denotes G_i for i ∈ E and H_i for i ∈ I, the set J := E ∪ { i ∈ I | H_i(x) ≤ µ_i/ξ_i } collects the equality and active inequality constraints, K := I \ J, and the multipliers µ_i and penalty parameters ξ_i weight the degree of constraint violation. These parameters are chosen to guarantee a decrease in the merit function.

The optimization routine itself implements a modification of an active–set line search method, see Section 5.2.5, allowing for l parallel function calls. The stability and convergence issues of this algorithm have been treated in [47]. Moreover, it contains a non–monotone line search, i.e. an increase within the merit function is allowed. For further details, see Section 5.2.4.2. The update of the Jacobian of the constraints is computed using a reduced Hessian approximation, see Section 5.2.4.4.

Implementations of active–set line search methods are known to show good performance even for large–scale optimization problems with many optimization variables, cf. [203].

Within our receding horizon controller setting, this method shows outstanding performance for small optimization problems. For a detailed computing time analysis we refer to [88].

6.3.2.1 Constructors

There exist two constructors for a class SqpFortran object, that is

btmb::MinProg::MinProg * object_minimizer = new SqpFortran;
btmb::MinProg::MinProg * object_minimizer = new SqpFortran(
    dimension_x, f, g, dimension_G, dimension_H, df, dg );

Listing 6.29: Constructors of a class SqpFortran object

The first one is the standard constructor which we use within the main program of PCC2. The second one, however, is optional and can be used by third party codes to utilize this class. This constructor automatically calls the initialization routine, which needs to be invoked manually in the first case.

The header of the second constructor already indicates that within this class the standard nomenclature of nonlinear optimization is used, see also the definition of the abstract optimization problem in Section 6.3.1 which is inherited by the class SqpFortran.

In particular, the function pointer f denotes the cost function and g the equality and inequality constraints. Note that for the usage of NLPQLP, g requires an internal ordering of the constraints, i.e. the first dimension_G elements are the equality constraints, which are followed by the dimension_H inequality constraints. Moreover, the function pointer df represents the gradient of the cost function and dg the Jacobian of the (active) constraints.
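Since the exact PCC2 callback typedefs are not restated here, the following sketch uses assumed plain C-style function pointers to illustrate the constraint ordering NLPQLP expects in g: the dimension_G equality constraints come first, followed by the dimension_H inequality constraints.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical callback signatures -- the actual PCC2/SqpFortran typedefs
// may differ. The point is the ordering of the constraint values in g.

const int dimension_x = 2; // number of optimization variables
const int dimension_G = 1; // number of equality constraints
const int dimension_H = 1; // number of inequality constraints

// Cost function F(x) = x0^2 + x1^2
void f(const double *x, double *fx) {
    *fx = x[0] * x[0] + x[1] * x[1];
}

// Constraint values: equality block first, then inequality block.
// G_0(x) = x0 + x1 - 1 = 0,  H_0(x) = x0 >= 0
void g(const double *x, double *gx) {
    gx[0] = x[0] + x[1] - 1.0; // equality constraints: indices 0..dimension_G-1
    gx[1] = x[0];              // inequality constraints: the next dimension_H entries
}

// Gradient of the cost function
void df(const double *x, double *dfx) {
    dfx[0] = 2.0 * x[0];
    dfx[1] = 2.0 * x[1];
}

// Jacobian of the constraints, one row per constraint in the same ordering
void dg(const double *x, double *dgx) {
    (void)x;
    dgx[0] = 1.0; dgx[1] = 1.0; // row of G_0
    dgx[2] = 1.0; dgx[3] = 0.0; // row of H_0
}
```

Function pointers of this shape would then be handed to the second constructor, new SqpFortran( dimension_x, f, g, dimension_G, dimension_H, df, dg ).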

6.3.2.2 Initialization

The SQP routine NLPQLP is configured using default values for all user-settable options.

Similar to the second constructor of this class, the initialization method is executed by calling

object_minimizer->init( dimension_x, f, g, dimension_G, dimension_H, df, dg );

Listing 6.30: Initializing call of a class SqpFortran object

where the standard notation of nonlinear optimization applies as explained at the end of the previous section.

The problem specifications are handled by the initialization routine of class MinProg, see Section 6.3.1.2. Hence, we do not have to worry about handling the discretization and the function pointers anymore within this class.

Remark 6.25

The initialization of class MinProg is written for any kind of optimization problem. Hence, if a standard optimization problem shall be solved outside our standard receding horizon control setting using an object of class MinProg, the user can simply supply the size of the problem as well as the cost function, the restrictions and their respective derivatives.

Additionally, the routine allocates the memory used by the SQP algorithm NLPQLP. To this end several constants need to be set, that is

Variable      Default                                   Description

dimension_x                                             Number of optimization variables
dimension_G                                             Number of equality constraints
dimension_H                                             Number of inequality constraints
nmax          dimension_x + 1                           Row dimension of the Hessian of the cost functional
mmax          nrest + 1                                 Row dimension of the Jacobian of the constraints
mnn2          nrest + 2 dimension_x + 2                 Number of Lagrangian multipliers
lwa           9 nrest + ⌈(3/2) nmax²⌉ + 33 nmax + 200   Size of working double array
lkwa          nmax + 10                                 Size of working integer array
lactive       2 nrest + 10                              Size of logical array of active constraints

Table 6.10: Constants within the class SqpFortran

Here, nrest = dimension_G + dimension_H denotes the total number of constraints.

Note that in our implementation the readily settable constant l, which characterizes the number of parallel line searches, is taken to be one.
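The workspace constants of Table 6.10 can be computed directly from the problem dimensions. The following sketch assumes nrest = dimension_G + dimension_H, i.e. the total number of constraints; the structure and function names are illustrative, not part of the PCC2 interface.

```cpp
#include <cassert>
#include <cmath>

// Workspace sizes from Table 6.10; the member names follow the table.
struct SqpWorkspaceSizes {
    int nmax, mmax, mnn2, lwa, lkwa, lactive;
};

SqpWorkspaceSizes computeSizes(int dimension_x, int dimension_G, int dimension_H) {
    SqpWorkspaceSizes s;
    const int nrest = dimension_G + dimension_H; // total number of constraints
    s.nmax    = dimension_x + 1;                 // row dim. of the Hessian
    s.mmax    = nrest + 1;                       // row dim. of the Jacobian
    s.mnn2    = nrest + 2 * dimension_x + 2;     // number of multipliers
    s.lwa     = 9 * nrest                        // working double array
                + static_cast<int>(std::ceil(1.5 * s.nmax * s.nmax))
                + 33 * s.nmax + 200;
    s.lkwa    = s.nmax + 10;                     // working integer array
    s.lactive = 2 * nrest + 10;                  // logical array of active constraints
    return s;
}
```

For instance, a problem with 3 variables, 1 equality and 2 inequality constraints yields nmax = 4 and lwa = 27 + 24 + 132 + 200 = 383.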

Variable  Description

x         Initially, the first column of x has to contain starting values for the optimal solution. On return, x is replaced by the current iterate. In the driving program the row dimension of x has to be equal to nmax.

f         On return, f contains the cost function values at the final iterate x.

g         On return, g contains the constraint function values at the final iterate x. In the driving program the row dimension of g has to be equal to mmax.

gvalue1   gvalue1 is used to store the undisturbed values of the constraints to compute the Jacobian.

gvalue2   Similarly, gvalue2 is used to store the disturbed values of the constraints.

df        df contains intermediate gradients of the objective function.

dg        dg is used to store gradients of the active constraints at a current iterate x. The remaining rows are filled with previously computed gradients. In the driving program the row dimension of dg has to be equal to mmax.

u         u contains the multipliers with respect to the actual iterate stored in the first column of x. The first nrest locations contain the multipliers of the nonlinear constraints, the subsequent dimension_x locations the multipliers of the lower bounds, and the final dimension_x locations the multipliers of the upper bounds. At an optimal solution, all multipliers with respect to inequality constraints should be nonnegative.

d         The elements of the diagonal matrix of the LDL decomposition of the quasi-Newton matrix are stored in the one-dimensional array d.

c         On return, c contains the last computed approximation of the Hessian matrix of the Lagrangian function stored in form of an LDL decomposition. c contains the lower triangular factor of an LDL factorization of the final quasi-Newton matrix (without diagonal elements, which are always one). In the driving program, the row dimension of c has to be equal to nmax.

wa        Working double array
kwa       Working integer array
active    Logical array of active constraints

Table 6.11: Memory allocation within class SqpFortran

Last, some variables for the optimization routine QPSOLVE need to be set. Note that these values massively influence the configuration and hence the result of the optimization.

Variable  Default   Description

l         1         Number of parallel systems, i.e. function calls during line search at predetermined iterates.

accuracy  10⁻⁶      Final accuracy of the optimizer; its value should not be smaller than the accuracy by which gradients are computed.

accql     10⁻⁸      This constant is used by the QP solver, e.g. for testing optimality conditions or whether a number is considered as zero. If accql is less or equal to zero, then the machine precision is computed and subsequently multiplied by 10⁴.

stepmin   accuracy  Steplength reduction factor; recommended is any value in the order of the accuracy by which functions are computed.

maxfun    20        This variable defines an upper bound for the number of function calls during the line search (e.g. 20). maxfun must not be greater than 50.

maxit     100       Maximum number of outer iterations, where one iteration corresponds to one formulation and solution of the quadratic programming subproblem, or, alternatively, one evaluation of gradients.

max_nm    0         Stack size for storing merit function values at previous iterations for non-monotone line search (e.g. 10). In case of max_nm = 0, monotone line search is performed. max_nm should not be greater than 50.

tol_nm    10⁻¹      Relative bound for the increase of the merit function value if the line search is not successful during the very first step. Must be nonnegative.

lql       true      If lql = true, the quadratic programming subproblem is solved with a full positive definite quasi-Newton matrix. Otherwise, a Cholesky decomposition is performed and updated, so that the subproblem matrix contains only an upper triangular factor.

iprint    0         Specification of the desired output level:
                    iprint = 0: No output of the program.
                    iprint = 1: Only a final convergence analysis is given.
                    iprint = 2: One line of intermediate results is printed in each iteration.
                    iprint = 3: More detailed information is printed in each iteration step, e.g. variable, constraint and multiplier values.
                    iprint = 4: In addition to iprint = 3, merit function and steplength values are displayed during the line search.

iout      6         Integer indicating the desired output unit number.

mode      0         The parameter specifies the desired version of NLPQLP:
                    mode = 0: Normal execution (reverse communication).
                    mode = 1: The user wants to provide an initial guess for the multipliers in u and for the Hessian of the Lagrangian function in c and d in form of an LDL decomposition.

Table 6.12: Input parameters of the class SqpFortran
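As an illustration, the defaults of Table 6.12 can be collected in a plain options struct. The struct and its member names are hypothetical and merely mirror the table; the actual SqpFortran interface for setting these values is not shown here and may differ.

```cpp
#include <cassert>

// Hypothetical options struct mirroring the defaults of Table 6.12.
struct SqpOptions {
    int    l        = 1;       // parallel function calls during line search
    double accuracy = 1.0e-6;  // final accuracy of the optimizer
    double accql    = 1.0e-8;  // zero/optimality tolerance of the QP solver
    double stepmin  = 1.0e-6;  // steplength reduction factor (= accuracy)
    int    maxfun   = 20;      // function calls per line search (must be <= 50)
    int    maxit    = 100;     // maximum number of outer SQP iterations
    int    max_nm   = 0;       // non-monotone stack size (0 = monotone search)
    double tol_nm   = 1.0e-1;  // relative merit function increase bound
    bool   lql      = true;    // full quasi-Newton matrix in the QP subproblem
    int    iprint   = 0;       // output level (0 = silent)
    int    iout     = 6;       // Fortran output unit number
    int    mode     = 0;       // 0 = reverse communication
};
```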

6.3.2.3 Calculation

After creating and initializing a class SqpFortran object, the abstract optimization problem defined by the class Discretization within the class MinProg can now be solved by calling

object_minimizer->calcMin( x, fx, lb, ub );

Listing 6.31: Calculation call to a class SqpFortran object

Within the receding horizon control procedure this call is triggered by the function calc of a class Discretization object.

This command causes the routine NLPQLP to be executed, solving the abstract optimization problem iteratively. In turn, this routine repeatedly demands calculations of the values of all functions, that is cost function and restrictions, and all derivatives, i.e. gradient of the cost function and Jacobian of the restrictions. These values are supplied by a class IOdeManager object, see Section 6.2.5 for details.

Upon termination, the flag IFAIL is set automatically. Here, one of the following outcomes may occur:

Value        Description

IFAIL = −2   Compute gradient values with respect to the variables stored in x, and store them in df and dg. Only derivatives for active constraints, active[j] = true, need to be computed. Then call NLPQLP again.

IFAIL = −1   Compute objective function and all constraint values subject to the variable x and store them in f and g. Then call NLPQLP again.

IFAIL = 0    The optimality conditions are satisfied.

IFAIL = 1    The algorithm has been stopped after maxit iterations.

IFAIL = 2    The algorithm computed an uphill search direction.

IFAIL = 3    Underflow occurred when determining a new approximation matrix for the Hessian of the Lagrangian.

IFAIL = 4    The line search could not be terminated successfully.

IFAIL = 5    Length of a working array is too short. More detailed error information is obtained with iprint > 0.

IFAIL = 6    There are false dimensions.

IFAIL = 7    The search direction is close to zero, but the current iterate is still infeasible.

IFAIL = 8    The starting point violates a lower or upper bound.

IFAIL = 9    Wrong input parameter.

IFAIL = 10   Internal inconsistency of the quadratic subproblem, division by zero.

IFAIL > 100  The solution of the quadratic programming subproblem has been terminated with an error message and IFAIL is set to IFQL + 100, where IFQL denotes the index of an inconsistent constraint.

Table 6.13: Error flags of NLPQLP

Remark 6.26

As mentioned at the end of Section 5.2.6, the sequence of evaluations is important for the speed of the minimizer. The implementation of NLPQLP demands a recomputation of parts of the variables df and dg if IFAIL = −2. This allows us to use information in the differential equation managers of class IOdeManager to simultaneously compute df and dg.