
The scheme in Figure 3.1 represents a general regulatory control structure applied to an arbitrary process plant. Based thereon, the exact local criterion of Halvorsen et al. (2003) is introduced in this section. The vectors $u \in \mathbb{R}^{n_u}$, $d \in \mathbb{R}^{n_d}$, $y \in \mathbb{R}^{n_y}$ and $c \in \mathbb{R}^{n_u}$, respectively, correspond to the manipulated variables (MVs), disturbance variables (DVs), measured process variables (PVs) and controlled variables (CVs).

Figure 3.1: General representation of regulatory control structures in chemical plants (after Alstad et al., 2009)

A constant setpoint policy is applied. That is, the MVs are adjusted by the controller(s) until (feasibility provided) the CVs equal the setpoint vector $c_s$, whose values remain fixed. To account for measurement errors, the PVs and CVs are affected by the implementation errors $n^y \in \mathbb{R}^{n_y}$ and $n^c \in \mathbb{R}^{n_u}$.

Morari et al. (1980), the inventors of self-optimizing control, state that it is desirable "(...) to find a function of PVs which, when held constant, leads automatically to the optimal adjustments of the MVs, and with it, the optimal operating conditions". In other words, self-optimizing control may be achieved by an appropriate mapping of PVs to CVs, denoted by $c = H(y) \in \mathbb{R}^{n_u}$, where $H$ represents the combination block in Figure 3.1. For deriving the exact local method, Halvorsen et al. (2003) considered the linear control structure $H = \partial H / \partial y^{\mathrm{T}}$.
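To make the combination block tangible, the following minimal sketch (not from the source; all numbers invented) contrasts a PV selection structure with a full PV combination structure for $n_y = 3$ and $n_u = 1$:

```python
import numpy as np

y = np.array([0.8, -0.2, 1.5])       # hypothetical PV deviations

# PV selection: the single CV is simply the first PV
H_sel = np.array([[1.0, 0.0, 0.0]])  # H in R^{n_u x n_y} with n_u = 1

# PV combination: the CV blends all three PVs
H_comb = np.array([[0.5, -1.2, 0.3]])

print(H_sel @ y)    # c = H y for selection
print(H_comb @ y)   # c = H y for combination
```

Selection is thus the special case of combination in which each row of $H$ contains exactly one nonzero entry.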

The operational cost (or negative profit) of a plant, denoted by $J$, is usually affected by both types of input variables, MVs and DVs. In order to operate a plant optimally at minimum cost (or maximum profit) by compensation of DVs via direct manipulation of MVs, the problem

$$u_{\mathrm{R/O}}(d) = \arg\min_{u} J(u,d) \quad \text{s.t.} \quad g(y,u,d) = 0 \tag{3.1}$$

must be solved. The solution to problem (3.1) is referred to as feed-forward re-optimization (R/O) throughout this work. Here, $g \in \mathbb{R}^{n_y}$ denotes the steady-state model equations of the plant.
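As an illustration of (3.1), the following sketch re-optimizes a deliberately simple hypothetical plant; the model $g$, the cost $J$ and all numbers are invented for this example, and the equality constraint is eliminated by solving $g$ for $y$ before calling scipy:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical plant: g(y, u, d) = y - (u + d) = 0, J(u, d) = (u - 1)^2 + y^2
def cost(u, d):
    y = u[0] + d          # solve g(y, u, d) = 0 for y
    return (u[0] - 1.0) ** 2 + y ** 2

def u_reopt(d):
    """Feed-forward re-optimization u_R/O(d), cf. (3.1)."""
    return minimize(cost, x0=np.zeros(1), args=(d,)).x

for d in (0.0, 0.5):
    print(d, u_reopt(d))  # the optimal input shifts with the disturbance
```

For the further derivation, some simplifying assumptions need to be made.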

1. The plant is in its optimal state at nominal conditions. The input/output (I/O) variables are transformed such that $u = 0$, $d = 0$ and $y = 0$ hold at nominal conditions.

2. The number of MVs may be reduced as some may have no effect on the cost, others may need to be spent in a priori controller loops in order to either stabilize the plant or fulfill control targets such as product quality or optimally active constraints. It is assumed that $u$ represents only the remaining MVs available for self-optimizing CSD. The a priori controller loops are considered part of the model equations $g$. Further, it is supposed that the optimally active constraints remain locally unaltered.

3. Nonlinearities of the plant are locally negligible around the optimal operating point. Consequently, the steady-state I/O model of the plant at optimal conditions can be represented by the linear equation

$$y = G^y u + G^y_d d + n^y, \tag{3.2}$$

where the augmented plant is given by

$$\begin{bmatrix} G^y & G^y_d \end{bmatrix} = -\left.\left(\frac{\partial g}{\partial y^{\mathrm{T}}}\right)^{-1}\right|_0 \left.\frac{\partial g}{\partial \begin{bmatrix} u^{\mathrm{T}} & d^{\mathrm{T}} \end{bmatrix}}\right|_0$$

and the elements of $u$, $d$ and $y$ represent deviations from the nominal operating point (see the numerical sketch after this list).

4. The cost function $J$ can be locally approximated by a second-order Taylor series, i.e.,

$$J = J_0 + \begin{bmatrix} J_u \\ J_d \end{bmatrix}^{\mathrm{T}} \begin{bmatrix} u \\ d \end{bmatrix} + \frac{1}{2} \begin{bmatrix} u \\ d \end{bmatrix}^{\mathrm{T}} \begin{bmatrix} J_{uu} & J_{ud} \\ J_{ud}^{\mathrm{T}} & J_{dd} \end{bmatrix} \begin{bmatrix} u \\ d \end{bmatrix}, \tag{3.3}$$

where the indices $u$ and $d$ indicate the partial derivatives evaluated at the nominal operating point. In particular, $J_u = 0$ and $J_{uu} \succ 0$ hold as a consequence of the first and second assumptions.
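The following sketch (all Jacobians and Hessian blocks invented for illustration) renders assumptions 3 and 4 numerically: it forms the augmented plant of (3.2) from the model Jacobians and evaluates the quadratic cost model (3.3), checking $J_{uu} \succ 0$:

```python
import numpy as np

# --- Assumption 3: linearized plant (3.2), hypothetical Jacobians of g ---
dg_dy  = np.array([[1.0, 0.2],     # dg/dy^T (n_y x n_y), must be invertible
                   [0.0, 1.0]])
dg_dud = np.array([[0.5, 0.1],     # dg/d[u^T d^T] (n_y x (n_u + n_d))
                   [1.0, 0.3]])
n_u = 1

# [G^y  G^y_d] = -(dg/dy^T)^{-1} dg/d[u^T d^T]
G_aug = -np.linalg.solve(dg_dy, dg_dud)
Gy, Gyd = G_aug[:, :n_u], G_aug[:, n_u:]

# --- Assumption 4: quadratic cost model (3.3), hypothetical data ---
J0  = 2.0
Jd  = np.array([0.4])
Juu = np.array([[2.0]])            # positive definite by assumption
Jud = np.array([[0.6]])
Jdd = np.array([[1.0]])

def J(u, d):
    x = np.concatenate([u, d])
    grad = np.concatenate([np.zeros_like(u), Jd])   # J_u = 0 at the optimum
    hess = np.block([[Juu, Jud], [Jud.T, Jdd]])
    return J0 + grad @ x + 0.5 * x @ hess @ x

print(Gy, Gyd, J(np.array([0.1]), np.array([0.0])))
print(np.all(np.linalg.eigvalsh(Juu) > 0))          # check Juu is pos. def.
```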

Figure 3.2 shows, by way of example, the operational cost of a process plant versus one DV. The cost of feed-forward re-optimization is indicated by the solid line and gives the lower bound for feedback control with constant setpoints, represented by the dashed curves. It is thus convenient to define the measure

$$L(d) = J\left(u_{\mathrm{S/O}}(d), d\right) - J\left(u_{\mathrm{R/O}}(d), d\right), \tag{3.4}$$

known as the loss function. Here, $u_{\mathrm{S/O}}(d)$ and $u_{\mathrm{R/O}}(d)$ represent the influence from DVs to MVs for feedback control with constant setpoints and for re-optimization, respectively. A good self-optimizing (S/O) control structure refers to a situation in which the functions $u_{\mathrm{S/O}}(d)$ and $u_{\mathrm{R/O}}(d)$ are similar to each other and the loss $L(d)$ is small for the expected disturbances.

Figure 3.2: Cost functions of regulatory control structures (dashed) and re-optimization (after Skogestad, 2000)

The dependency of (3.4) on the feedback control structure is not yet apparent and will be derived below. For shorter notation, the indication of the dependency on $d$ is dropped in the sequel. Insertion of (3.3) into (3.4) yields

$$L = \frac{1}{2} u_{\mathrm{S/O}}^{\mathrm{T}} J_{uu} u_{\mathrm{S/O}} + u_{\mathrm{S/O}}^{\mathrm{T}} J_{ud} d - u_{\mathrm{R/O}}^{\mathrm{T}} J_{ud} d - \frac{1}{2} u_{\mathrm{R/O}}^{\mathrm{T}} J_{uu} u_{\mathrm{R/O}}, \tag{3.5}$$

for which $J_u = 0$ was taken into account. Using the first-order Taylor series

$$\frac{\partial J}{\partial u} = \underbrace{J_u}_{=0} + J_{uu} u + J_{ud}\, d$$

and the fact that re-optimization implies $\partial J / \partial u = 0$, it follows that

$$u_{\mathrm{R/O}} = -J_{uu}^{-1} J_{ud}\, d. \tag{3.6}$$

Using this result, (3.5) can be transformed into

$$L = \frac{1}{2} z^{\mathrm{T}} z \tag{3.7}$$

with the loss variables

$$z = J_{uu}^{1/2} \left( u_{\mathrm{S/O}} - u_{\mathrm{R/O}} \right). \tag{3.8}$$

In (3.8), $J_{uu}^{1/2}$ is the Cholesky factor corresponding to $J_{uu} = \left(J_{uu}^{1/2}\right)^{\mathrm{T}} J_{uu}^{1/2}$.
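A small numerical sketch (invented data) fixing the conventions of (3.6)-(3.8); note that numpy's Cholesky routine returns the lower-triangular factor $L$ with $J_{uu} = L L^{\mathrm{T}}$, so $J_{uu}^{1/2} = L^{\mathrm{T}}$ in the convention used here:

```python
import numpy as np

Juu = np.array([[2.0, 0.3],        # hypothetical Hessian blocks
                [0.3, 1.0]])
Jud = np.array([[0.6],
                [0.2]])
d = np.array([1.0])

# (3.6): re-optimized inputs
u_ro = -np.linalg.solve(Juu, Jud @ d)

# Cholesky factor in the sense Juu = (Juu^{1/2})^T Juu^{1/2}:
# numpy returns lower-triangular L with Juu = L L^T, so Juu^{1/2} = L^T
Juu_half = np.linalg.cholesky(Juu).T

u_so = np.array([-0.2, 0.1])       # hypothetical feedback response u_S/O(d)
z = Juu_half @ (u_so - u_ro)       # loss variables (3.8)
loss = 0.5 * z @ z                 # loss (3.7)
print(u_ro, loss)
```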

From $H y = c \overset{!}{=} c_s = 0$ and (3.2) it can be obtained that

$$u_{\mathrm{S/O}} = -(H G^y)^{-1} H \left( G^y_d d + n^y \right). \tag{3.9}$$

Insertion of (3.9) and (3.6) into (3.8) yields

$$z = -J_{uu}^{1/2} (H G^y)^{-1} H \left[ \left( G^y_d - G^y J_{uu}^{-1} J_{ud} \right) d + n^y \right].$$

For simplification of notation, the matrices

$$G^y_z = G^y J_{uu}^{-1/2}, \tag{3.10a}$$

$$\tilde{F} = \begin{bmatrix} \left( G^y_d - G^y J_{uu}^{-1} J_{ud} \right) W_d & W_{n^y} \end{bmatrix}, \tag{3.10b}$$

$$M = (H G^y_z)^{-1} H \tilde{F} \tag{3.10c}$$

are introduced, which yields the short expression

$$z = M f. \tag{3.11}$$

Here,

$$f \in \mathcal{F} = \left\{ x \in \mathbb{R}^{n_d + n_y} : \| x \|_2 \le 1 \right\}$$

combines the scaled disturbances and implementation errors, and $W_d$ and $W_{n^y}$ are the respective diagonal scaling matrices.
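The matrices (3.10a)-(3.10c) can be assembled directly; this sketch uses invented data for $n_u = 1$, $n_d = 1$, $n_y = 3$ and a hypothetical combination matrix $H$:

```python
import numpy as np

# Hypothetical data (n_u = 1, n_d = 1, n_y = 3); all numbers invented
Gy  = np.array([[0.5], [1.0], [0.8]])
Gyd = np.array([[0.2], [0.4], [0.1]])
Juu = np.array([[2.0]]); Jud = np.array([[0.6]])
Wd  = np.diag([1.0])               # disturbance scaling
Wny = np.diag([0.1, 0.1, 0.1])     # implementation-error scaling

Juu_half = np.linalg.cholesky(Juu).T
Gyz = Gy @ np.linalg.inv(Juu_half)                       # (3.10a)
F   = Gyd - Gy @ np.linalg.solve(Juu, Jud)               # optimal sensitivity
Ft  = np.hstack([F @ Wd, Wny])                           # (3.10b)

H = np.array([[1.0, -0.5, 0.2]])   # hypothetical combination matrix
M = np.linalg.solve(H @ Gyz, H @ Ft)                     # (3.10c)
print(M)
```

Note that the sign difference between the expression for $z$ above and (3.10c) is immaterial, since only norms of $M$ enter the loss measures below.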

From (3.7) and (3.11), Halvorsen et al. (2003) concluded that the worst-case loss for feedback control is given by

$$L_{\mathrm{wc}} = \max_{f \in \mathcal{F}} L = \frac{1}{2} \bar{\sigma}^2(M), \tag{3.12}$$

where $\bar{\sigma}$ indicates the largest singular value. Consequently, a self-optimizing control structure with least worst-case loss may be obtained by solving the problem

$$H = \arg\min_H \bar{\sigma}(M). \tag{3.13}$$
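Evaluating the worst-case loss (3.12) only requires the largest singular value of $M$; a minimal sketch with a hypothetical loss matrix:

```python
import numpy as np

M = np.array([[0.3, -0.1, 0.05, 0.2]])  # hypothetical M from (3.10c)

sigma_bar = np.linalg.svd(M, compute_uv=False)[0]  # largest singular value
L_wc = 0.5 * sigma_bar ** 2                        # worst-case loss (3.12)
print(L_wc)
```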

An upper bound for $\bar{\sigma}(M)$ can be derived, i.e.,

$$\bar{\sigma}(H \tilde{F}) \, / \, \underline{\sigma}(H G^y_z) \ge \bar{\sigma}(M),$$

where $\underline{\sigma}$ indicates the smallest singular value. Accordingly, approximate solutions can be found by either minimizing $\bar{\sigma}(H \tilde{F})$ or maximizing $\underline{\sigma}(H G^y_z)$. The first strategy is made use of by the nullspace method (Alstad and Skogestad, 2007), while the second strategy gives rise to the minimum singular value (MSV) rule, which reads

$$H = \arg\max_H \underline{\sigma}(H G^y_z) \tag{3.14}$$

when scaling is disregarded. Both strategies have the advantage that they reduce complexity and are thus favored by several authors.

Kariwala et al. (2008) proposed the average loss

$$L_{\mathrm{av}} = \int_{f \in \mathcal{F}} L \, \mathrm{d}f = \frac{\| M \|_F^2}{6 \left( n_{Y_u} + n_d \right)} \tag{3.15}$$

based on the assumption that $\|f\|_2$ is uniformly distributed over the range $0 \le \|f\|_2 \le 1$. Here, $\|\cdot\|_F$ indicates the Frobenius norm and $n_{Y_u} = |Y_u|$, where $Y_u = Y_1 \cup \cdots \cup Y_{n_u}$ and $Y_i$ is the PV subset corresponding to the $i$th CV. That is, $Y_u$ indicates the overall subset of all PVs that have at least one nonzero linear coefficient in the control structure (e.g., $n_{Y_u} = n_u$ for PV selection). Kariwala et al. (2008) state that the average loss (3.15) is usually a better estimate of the loss, as the worst-case loss (3.12) tends to overestimate. Based on (3.15), Kariwala et al. (2008) suggested solving

$$H = \arg\min_H \| M \|_F \tag{3.16}$$

instead of (3.13). They proved that the solution to (3.16) is super-optimal in the sense that it minimizes both the average and the worst-case loss.
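Since the factor $1/(6(n_{Y_u} + n_d))$ in (3.15) is constant for a fixed structure class, ranking candidates by $\|M\|_F$ as in (3.16) preserves the ordering of average losses. A short sketch with a hypothetical $M$ ($n_{Y_u} = 3$, $n_d = 1$):

```python
import numpy as np

M = np.array([[0.3, -0.1, 0.05, 0.2]])   # hypothetical loss matrix

n_Yu, n_d = 3, 1
L_av = np.linalg.norm(M, "fro") ** 2 / (6 * (n_Yu + n_d))   # average loss (3.15)
L_wc = 0.5 * np.linalg.svd(M, compute_uv=False)[0] ** 2     # worst case (3.12)
print(L_av, L_wc)    # the average loss lies well below the worst case
```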

Remark 3.1. According to (3.15), the average loss depends on the number of PVs used as combination variables, $n_{Y_u}$. This dependency was omitted in (3.16) in order to avoid favoring large $n_{Y_u}$.

After this brief survey it can be concluded that the identification of self-optimizing control structures can be achieved by solving (3.13), (3.14) or (3.16) subject to certain structural constraints. An optimal control structure subject to PV selection can be found by exhaustive screening of all possible PV combinations. As this becomes impractical for large-scale models, several authors (Kariwala and Skogestad, 2006/07/09-13; Cao and Kariwala, 2008; Kariwala and Cao, 2009, 2010a) suggested the use of efficient branch and bound (BAB) algorithms. Others (Alstad and Skogestad, 2007; Kariwala, 2007; Kariwala et al., 2008; Alstad et al., 2009)¹ published methods for the identification of structures in which each CV is a linear combination of an a priori imposed PV subset (recalled in Appendix 3.A).

In order to get the best structure subject to a certain PV set size, one possibility is exhaustive search, i.e., screening all possible PV subsets and applying one of these methods to each subset. As for the PV selection problem, this becomes computationally expensive for large $n_y$, and also for large PV subset sizes below $n_y/2$. Therefore, Kariwala and Cao (2009, 2010a) proposed efficient BAB solution methods for finding the best PV subset subject to a certain set size.

¹ Alstad et al. (2009) proposed two different methods: the extended nullspace method and one which was not named but for which a linearly constrained quadratic problem needs to be solved (see Appendix 3.B). If not explicitly stated otherwise, the method by Alstad et al. (2009) refers to the latter method throughout this work.

This thesis is dedicated to the solution of (3.13), (3.14) or (3.16) subject to more advanced structural constraints than PV selection or linear combinations of a common PV subset with a certain set size. This is the concern of Section 3.4. The methods presented there are based on a further method for common PV subset combination, named the GSVD method. It was previously presented by Heldt (2009, 2010a) and is recalled in Section 3.3.