
Let $\delta > 0$. Then there exists an $\varepsilon > 0$ such that

$$N_\varepsilon \subset B_\delta(\hat{z}^\star)$$

due to the strict convexity of $G$ and the compactness of the sets $D_i$, $i \in \mathbb{N}_I$. From Corollary 5.2.7, we obtain the convergence $\hat{z}^\ell \to \hat{z}^\star$ for $\ell \to \infty$. Let $\delta > 0$ be arbitrary. Then there exists an $\ell \in \mathbb{N}$ and an $\varepsilon > 0$ such that $\hat{z}^\ell \in B_\delta(\hat{z}^\star)$, or equivalently $\hat{z}^\ell - \hat{z}^\star \in B_\delta(0)$, and $\hat{z}^\ell \in N_\varepsilon$. For the local objective function, we obtain the estimate

$$g_i(z_i^{\star\ell}; p_i^\ell) \le g_i(z_i^\ell; p_i^\ell) = G(\hat{z}^\ell) \le G(\hat{z}^\star) + \varepsilon,$$

which implies

$$\hat{z}^\ell - \tfrac{1}{I} z_i^\ell + \tfrac{1}{I} z_i^{\star\ell} \in N_\varepsilon \subset B_\delta(\hat{z}^\star).$$

Therefore, we can conclude that for all sufficiently large $\ell$

$$\left( \hat{z}^\star - \hat{z}^\ell + \tfrac{1}{I} z_i^\ell - \tfrac{1}{I} z_i^{\star\ell} \right) - \left( \hat{z}^\star - \hat{z}^\ell \right) = \tfrac{1}{I} \left( z_i^\ell - z_i^{\star\ell} \right) \in B_{2\delta}(0),$$

i.e., $\|z_i^\ell - z_i^{\star\ell}\| \to 0$ for $\ell \to \infty$. Finally, we obtain $\|z_i^{\ell+1} - z_i^\ell\| \to 0$ for $\ell \to \infty$ using the definition of $z_i^{\ell+1}$:

$$z_i^{\ell+1} - z_i^\ell = \left( \theta_\ell z_i^{\star\ell} + (1-\theta_\ell) z_i^\ell \right) - z_i^\ell = \theta_\ell \left( z_i^{\star\ell} - z_i^\ell \right) \quad \text{for } \theta_\ell \in [0,1].$$

5.3 Application to residential energy systems

In Section 4.3, the advantages and disadvantages of a centralized control approach in the context of smart grids were discussed. In the previous section, we presented a hierarchical distributed optimization algorithm which splits the optimization problem into local tasks performed by the subsystems and a global task performed by the CE. In the limit, Algorithm 5 recovers the optimal solution of the centralized control approach. In this section, we investigate the advantages of the distributed control algorithm over the centralized approach in the context of the electricity grid introduced in Chapter 3.


5.3.1 The communication structure of the distributed optimization algorithm

Algorithm 5 consists of three main steps, which we investigate in the following: the computation of an optimal solution of the local problems (5.7) by the subsystems or RESs, the computation of a solution of the optimization problem of the CE to obtain the stepsize $\theta_\ell$, and the communication between the RESs and the CE.

General properties using the objective function $G$

The optimization problems of the RESs only depend on the parameters $p_i$ and the local system dynamics (2.1) of the individual RESs, i.e., the system dynamics of RES $i$, which define the set $D_i$, are private and do not need to be known by the other RESs or the CE. For this reason, similar to the decentralized setting, the local system dynamics can be changed without changing any component other than the local controller of the corresponding RES.

The global optimization problem in Phase 1 is an optimization problem in a single variable $\theta \in [0,1]$ and hence can be solved efficiently, sometimes even explicitly, independent of the size of the overall network. Moreover, the CE only requires the variables $z_i$ to compute the next iterate. The variables $u_i$ and $x_i$ remain private for all $i \in \mathbb{N}_I$.
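As an illustration, the following is a minimal sketch of the CE step as a bounded scalar minimization in $\theta$; the quadratic objective `G_hat`, the target `zeta_hat`, and the two iterates are hypothetical placeholders, not quantities from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical strictly convex objective and iterates (illustration only).
zeta_hat = 1.0
G_hat = lambda z_hat: np.sum((z_hat - zeta_hat) ** 2)

z_hat_l = np.array([0.2, 0.5, -0.1])      # current average iterate \hat z^l
z_hat_star_l = np.array([0.9, 1.1, 0.4])  # averaged local minimizers \hat z^{*l}

# Phase 1 of the CE: a one-dimensional problem over [0, 1], whose size does
# not depend on the number of RESs in the network.
res = minimize_scalar(
    lambda th: G_hat(th * z_hat_star_l + (1 - th) * z_hat_l),
    bounds=(0.0, 1.0), method="bounded")
theta_l = res.x
```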

The number of variables that have to be transmitted grows linearly with the number of RESs or, equivalently, with the dimension of $z$. Moreover, the communication variables do not remain private between the RESs, since every system needs to know $z^\ell$ to define $p_i^\ell$. This might prevent customers from joining a network using Algorithm 5. By using the objective function $\hat{G}$, these problems can be circumvented.

Properties using the objective function $\hat{G}$

For objective functions of the form $G(z) = \hat{G}(\hat{z})$, the number of transmitted variables can be made independent of the number of RESs in the network. More precisely, it is sufficient that every RES sends $pN$ values to the CE and that the CE publishes $Np + 1$ values. Moreover, the CE does not have to send the $Np + 1$ values to the RESs individually, but only has to make sure that the $Np + 1$ values are publicly available and can be accessed by every RES.
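For a rough sense of scale, the following back-of-the-envelope sketch (with made-up sizes $p$, $N$, $I$) compares the number of values each RES must receive under Algorithm 5 with the number of values the CE publishes under Algorithm 6.

```python
# Hypothetical problem sizes, chosen only for illustration.
p, N, I = 2, 24, 1000   # variables per time step, horizon, number of RESs

# Algorithm 5: every RES needs the full stacked variable z^l, whose
# dimension grows linearly with the number of RESs.
values_per_res_alg5 = I * p * N   # 48000 values per RES per iteration

# Algorithm 6: the CE publishes the average \hat z^{l+1} and the stepsize
# \theta_l once, for all RESs together.
public_values_alg6 = N * p + 1    # 49 values per iteration

print(values_per_res_alg5, public_values_alg6)
```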

The reduction to $Np + 1$ values works as follows: in the case of the function $\hat{G}$, instead of the parameters $p_i \in \mathbb{R}^{(I-1)p \times N}$, $i = 1, \ldots, I$, we can define the average parameters

$$\hat{p}_i = \frac{1}{I} \sum_{l=1}^{I} z_l - \frac{1}{I} z_i = \hat{z} - \frac{1}{I} z_i \tag{5.20}$$

for $i = 1, \ldots, I$, and write the local objective functions with parameters $\hat{p}_i$, i.e., $g_i(\cdot\,; \hat{p}_i)$.
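A minimal numerical sketch of (5.20), assuming each $z_i$ is stored as a $p \times N$ array; the sizes and random data are hypothetical.

```python
import numpy as np

I, p, N = 5, 2, 24
rng = np.random.default_rng(0)
z = rng.standard_normal((I, p, N))   # local variables z_1, ..., z_I

z_hat = z.mean(axis=0)               # \hat z = (1/I) * sum_l z_l
p_hat = z_hat - z / I                # \hat p_i = \hat z - (1/I) z_i, cf. (5.20)

# Each \hat p_i has shape (p, N), independent of I, and \hat p_i + z_i / I
# recovers the average \hat z.
assert p_hat[0].shape == (p, N)
assert np.allclose(p_hat[0] + z[0] / I, z_hat)
```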

As a consequence, the dimension of the parameters $\hat{p}_i \in \mathbb{R}^{p \times N}$ is independent of $I$. To avoid the communication with every individual RES in iteration $\ell$ of Algorithm 5, observe that $\hat{p}_i^{\ell+1} = \hat{z}^{\ell+1} - \frac{1}{I} z_i^{\ell+1}$, i.e., the individual parameter $\hat{p}_i^{\ell+1}$ is obtained from the public information $\hat{z}^{\ell+1}$ plus a piece of local information. Even though only $z_i^{\star\ell}$ is known to RES $i$, it holds that $z_i^{\ell+1} := \theta_\ell z_i^{\star\ell} + (1-\theta_\ell) z_i^\ell$ and hence, $z_i^{\ell+1}$ can be computed easily if the stepsize $\theta_\ell$ is known in every iteration $\ell \in \mathbb{N}$. By publishing $\hat{z}^{\ell+1}$ and $\theta_\ell$ in every iteration $\ell$, every RES is able to compute $\hat{p}_i^{\ell+1}$ independent of the size of the network. Algorithm 5 is rewritten for the special case of $G(z) = \hat{G}(\hat{z})$ in Algorithm 6.

Algorithm 6: Hierarchical cooperative distributed optimization for a network of RESs

Input:

• RES $i$, $i \in \mathbb{N}_I$: Define the admissible set $D_i$ based on the initial state $x_i(k) \in \mathbb{X}_i$ and the time-dependent quantity $s_i(k; N)$.

• CE: A continuously differentiable and strictly convex function $\hat{G}$, number of RESs $I$, prediction horizon $N$, maximal iteration number $\ell_{\max} \in \mathbb{N} \cup \{\infty\}$, desired precision $\varepsilon \in \mathbb{R}_{\geq 0}$.

Initialization:

• RES $i$, $i \in \mathbb{N}_I$: Define and transmit $z_i^{\star 1}, z_i^1 \in D_i$.

• CE: Set the iteration counter $\ell = 1$ and $G^1 = \infty$, receive $z_i^1$, $i \in \mathbb{N}_I$, and compute $\hat{z}^1 = \frac{1}{I} \sum_{i=1}^{I} z_i^1$.

Main loop:

Phase 1 (CE): Receive $z_i^{\star\ell}$ for $i \in \mathbb{N}_I$.

• Compute $\hat{z}^{\star\ell} = \frac{1}{I} \sum_{i=1}^{I} z_i^{\star\ell}$.

• Compute the stepsize $\theta_\ell = \operatorname{argmin}_{\theta \in [0,1]} \hat{G}(\theta \hat{z}^{\star\ell} + (1-\theta) \hat{z}^\ell)$.

• Compute $\hat{z}^{\ell+1} := \theta_\ell \hat{z}^{\star\ell} + (1-\theta_\ell) \hat{z}^\ell$ and evaluate the performance index

$$G^{\ell+1} := \hat{G}(\hat{z}^{\ell+1}). \tag{5.21}$$

• If $|G^{\ell+1} - G^\ell| < \varepsilon$ or $\ell \geq \ell_{\max}$ holds, terminate the algorithm. Otherwise, transmit $\hat{z}^{\ell+1}$ and $\theta_\ell$ to the RESs.

Phase 2 (RES $i$, $i \in \mathbb{N}_I$): Receive $\hat{z}^{\ell+1}$ and $\theta_\ell$.

• Compute $z_i^{\ell+1} = \theta_\ell z_i^{\star\ell} + (1-\theta_\ell) z_i^\ell$.

• Solve the local minimization problem $z_i^{\star\ell+1} = \operatorname{argmin}_{z_i \in D_i} g_i\!\left(z_i; \hat{z}^{\ell+1} - \tfrac{1}{I} z_i^{\ell+1}\right)$.

• Transmit $z_i^{\star\ell+1}$.

Increment the iteration counter $\ell = \ell + 1$ and repeat the loop.


In this case, the variables $z_i^\ell$ are only known to RES $i$ and the CE, but not to the other RESs. Privacy is maintained since a single RES only has access to the average demand $\hat{z}^{\ell+1}$, from which no individual information about other RESs can be recovered. The communication structure of Algorithm 6 is visualized in Figure 5.1.

[Figure 5.1: Communication structure of Algorithm 6. Phase 1 (CE): compute $\hat{z}^{\star\ell}$, $\theta_\ell$, and $\hat{z}^{\ell+1}$; publish $\hat{z}^{\ell+1}$ and $\theta_\ell$ as public data. Phase 2 (RES $1, \ldots, I$): update $z_i^{\ell+1}$, compute $z_i^{\star\ell+1}$, and transmit it to the CE.]
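To make the interplay of the two phases concrete, the following is a compact simulation sketch of Algorithm 6 under simplifying assumptions: the admissible sets $D_i$ are taken as boxes (so the local problems reduce to coordinatewise projections), $\hat{G}$ is the quadratic cost (5.23), and the termination check on $|G^{\ell+1} - G^\ell|$ is replaced by a fixed iteration budget. All names and sizes are illustrative, not part of the original algorithm statement.

```python
import numpy as np

# Illustrative sizes: number of RESs and dimension of each z_i.
I, n = 4, 6
rng = np.random.default_rng(1)
lb, ub = rng.uniform(-2, 0, (I, n)), rng.uniform(0, 2, (I, n))  # box sets D_i
zeta_hat = 0.3

def local_solver(i, p_hat_i):
    # argmin_{z_i in D_i} ||p_hat_i + z_i / I - zeta_hat||^2; for a box D_i
    # this is the projection of the unconstrained minimizer I*(zeta_hat - p_hat_i).
    return np.clip(I * (zeta_hat - p_hat_i), lb[i], ub[i])

# Initialization: feasible z_i^1 and first local minimizers z_i^{*1}.
z = rng.uniform(lb, ub)
z_star = np.array([local_solver(i, z.mean(0) - z[i] / I) for i in range(I)])

for _ in range(50):  # fixed iteration budget instead of the epsilon test
    # Phase 1 (CE): averages, explicit stepsize (cf. Example 5.3.2), update.
    z_hat, z_hat_star = z.mean(0), z_star.mean(0)
    d = z_hat_star - z_hat
    theta = 0.0 if d @ d == 0 else float(np.clip(-(z_hat - zeta_hat) @ d / (d @ d), 0, 1))
    z_hat_next = theta * z_hat_star + (1 - theta) * z_hat  # published with theta
    # Phase 2 (RES i): local update and next local minimization.
    z = theta * z_star + (1 - theta) * z
    z_star = np.array([local_solver(i, z_hat_next - z[i] / I) for i in range(I)])
```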

Remark 5.3.1. In Algorithm 6, the optimal states $x_i^\star$ and the optimal inputs $u_i^\star$ for $i \in \mathbb{N}_I$ can either be recovered from the system dynamics and the optimal power demand $z_i^\star$, or by computing $x_i^{\ell+1} = \theta_\ell x_i^{\star\ell} + (1-\theta_\ell) x_i^\ell$ and $u_i^{\ell+1} = \theta_\ell u_i^{\star\ell} + (1-\theta_\ell) u_i^\ell$ in every iteration $\ell$, similar to the update of $z^\ell$.

5.3.2 Numerical complexity of the distributed optimization algorithm

The numerical complexity for the central entity

In Algorithm 6, the CE has to solve the optimization problem

$$\operatorname*{argmin}_{\theta \in [0,1]} \hat{G}(\theta \hat{z}^{\star\ell} + (1-\theta) \hat{z}^\ell) \tag{5.22}$$

in the unknown $\theta$ subject to box constraints, independent of the number of systems $I$. This implies that the numerical complexity is independent of the number of RESs if we neglect the effort to compute the average $\hat{z}^{\ell+1}$. For many convex objective functions, the minimization problem with respect to $\theta$ can be solved explicitly, e.g., for the function

$$\hat{G}(\hat{z}) = \left\| \hat{z} - \hat{\zeta} \mathbb{1} \right\|^2 \tag{5.23}$$

with $\hat{\zeta} \in \mathbb{R}$, an explicit solution can be computed.

Example 5.3.2. The optimal stepsize $\theta$ in iteration $\ell$ of Algorithm 6 for the cost function (5.23) can be computed by projecting the expression

$$\tilde{\theta} = -\frac{(\hat{z}^\ell - \hat{\zeta} \mathbb{1})^T (\hat{z}^{\star\ell} - \hat{z}^\ell)}{(\hat{z}^{\star\ell} - \hat{z}^\ell)^T (\hat{z}^{\star\ell} - \hat{z}^\ell)}$$

onto the interval $[0,1]$, i.e., $\theta = \max\{0, \min\{\tilde{\theta}, 1\}\}$. In order to show this, define the function

$$\varphi(\theta) := \hat{G}(\theta \hat{z}^{\star\ell} + (1-\theta) \hat{z}^\ell) = \left\| \hat{z}^\ell + \theta (\hat{z}^{\star\ell} - \hat{z}^\ell) - \hat{\zeta} \mathbb{1} \right\|^2.$$

Since $\varphi$ is strictly convex, the assertion follows by setting $\varphi'(\theta) = 0$ and projecting the resulting $\theta$ onto the interval $[0,1]$. The derivative is given by

$$\varphi'(\theta) = 2 (\hat{z}^{\star\ell} - \hat{z}^\ell)^T (\hat{z}^\ell - \hat{\zeta} \mathbb{1}) + 2\theta \left\| \hat{z}^{\star\ell} - \hat{z}^\ell \right\|^2,$$

from which the assertion follows. In the case where the explicit expression for $\tilde{\theta}$ is not defined, i.e., $\hat{z}^{\star\ell} = \hat{z}^\ell$, the algorithm has already found the minimum.
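A quick numerical sanity check of the formula, with hypothetical data: the projected $\tilde{\theta}$ agrees with a bounded one-dimensional minimization of $\varphi$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
z_hat, z_hat_star, zeta_hat = rng.normal(size=5), rng.normal(size=5), 0.7

d = z_hat_star - z_hat
theta_tilde = -(z_hat - zeta_hat) @ d / (d @ d)   # stationary point of phi
theta = max(0.0, min(theta_tilde, 1.0))           # projection onto [0, 1]

# Reference solution via bounded scalar minimization of phi over [0, 1].
phi = lambda th: np.sum((th * z_hat_star + (1 - th) * z_hat - zeta_hat) ** 2)
ref = minimize_scalar(phi, bounds=(0.0, 1.0), method="bounded")
assert abs(theta - ref.x) < 1e-4
```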

The goal of Algorithms 5 and 6 is not to reduce the computational complexity of the CE to a minimum, but to render the complexity independent of the number of RESs. Instead of one unknown $\theta$, one can also consider the case that every system $i \in \mathbb{N}_I$ belongs to a cluster $m \in \mathbb{N}_M$ for a fixed $M \in \mathbb{N}$, which will be indicated by writing $z_{m,i}$. Every cluster has its own variable $\theta_m \in [0,1]$ and $\theta^T = (\theta_1, \ldots, \theta_M)$. Without loss of generality, we assume that the systems are ordered and we define

$$Z_m^T = \begin{pmatrix} z_{m,m_1}^T & \cdots & z_{m,m_I}^T \end{pmatrix}.$$
