
The problems of static modeling can be used to formulate the requirements for a better modeling strategy. The main problems are the separation of the embedding and regression steps and the dependency of the selection process on the MSE1 cost function. The former prevents a proper coordination between the processing of time information in the data and the structure of the model.

The latter leads to models that are not optimal for free-running applications and, if the data is noisy, to a biased selection and estimation of the model parameters.

From these problems it follows that an approach is needed that can

a) merge the embedding step and the regression step into one optimization procedure.

b) employ the MSE directly for the selection of the model structure and parameter estimation.


Figure 3.10: In the static modeling approach basis functions $g_i(\cdot)$ are selected from a pool of potential candidates into the model. Since the basis functions are mutually independent, the ones lowering cost function MSE1 the most can be chosen for the model in the selection process. For a free-running application a mutual dependency between the selected functions is introduced by an external feedback loop.

In the static modeling approach a model is created by choosing the best candidates from a pool of basis functions (see Fig. 3.10). Since the time information is hidden in the embedding structure of the regressor patterns, these basis functions are memoryless transformation mappings. As such, the benefit of each function for the model performance can be estimated independently of the others. However, this performance can only be assessed for cost function MSE1, which is based on one-step predictions. The performance for cost function MSE cannot be evaluated at this stage, because free-running applications use an external feedback loop, and this feedback loop introduces mutual dependencies between the selected basis functions in the model. Because of the external feedback loop the cost functions MSE1 and MSE have different values.
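To make the difference between the two cost functions concrete, the following minimal Python sketch contrasts them for a toy first-order model; the toy process, the model, and all names are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy, driven nonlinear process standing in for measured data.
T = 500
u = np.sin(0.1 * np.arange(T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.5 * u[t] + 0.05 * rng.standard_normal()

# A static model; here simply the noise-free part of the toy process.
def model(y_prev, u_t):
    return 0.8 * np.tanh(y_prev) + 0.5 * u_t

# MSE1: one-step predictions, each fed with the *measured* previous output.
mse1 = np.mean((y[1:] - model(y[:-1], u[1:])) ** 2)

# MSE: free-running prediction, where an external feedback loop feeds the
# model's *own* previous output back as its input.
y_hat = np.zeros(T)
y_hat[0] = y[0]
for t in range(1, T):
    y_hat[t] = model(y_hat[t - 1], u[t])
mse_free = np.mean((y[1:] - y_hat[1:]) ** 2)

print(f"MSE1 (one-step)     = {mse1:.4f}")
print(f"MSE  (free-running) = {mse_free:.4f}")
```

On noisy data the free-running error is typically the larger of the two, since one-step predictions are re-anchored to the measured output at every step, whereas free-running errors accumulate through the feedback loop.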

The dilemma is that one has to know all basis functions in the model before the external feedback loop can be reasonably applied and the performance of free-running predictions evaluated. Therefore an iterative selection process based on MSE is not feasible. There are three possible solutions to this problem. The first solution is to keep the external feedback loop and to approximate MSE by MSE1. This path is taken by the static modeling approach. It often produces good models but suffers from the problems described above.

The second solution is to avoid the external feedback loop during the free-running application. This has the advantage that the cost functions MSE1 and MSE are equivalent. However, in many cases the prediction performance suffers considerably if the feedback is removed, making this strategy a last resort.

The third solution is to incorporate the positive effects of feedback loops into the pool from the beginning, so that an additional external feedback is not needed in later applications of the model.

Figure 3.11: In the dynamical modeling approach a state space model is regarded as a pool of different response dynamics to a driving input signal. In a virtual selection process the state dynamics $x_i$ that lower the cost function MSE the most are chosen to represent the model output. Because of the internal memory there is no need for an external feedback loop in free-running applications.

Similar to the second solution, the cost functions MSE1 and MSE are equal. However, the prediction performance does not degrade, because feedback loops are still present. This path is taken by the dynamical modeling approach and shall be discussed here in more detail (see also Fig. 3.11).

The dynamical modeling approach includes internal feedback loops between the basis functions in the pool from the start. Since no additional external loop is applied later, the basis functions already unfold the dynamics that they also show during the application of the model. In effect, this leads to the equivalence of one-step and free-running predictions, so that consequently MSE = MSE1. For this reason the benefit of each basis function for free-running predictions can be assessed independently of the others. To implement the internal feedback structure, an internal memory in the form of a state vector has to be introduced. The output of all basis functions in the pool is stored in this internal state vector for one time step. Since the basis functions influence each other, the state vector is used in each iteration to produce the output for the next time step.

Formally, the pool of basis functions in the dynamical modeling approach is a dynamical system that is driven by an external input signal $\{u_t\}_{t\in I}$,

$$x_t = g(x_{t-1}, u_t), \qquad (3.42)$$

with $x_t \in \mathbb{R}^M$ the state vector, storing the output of the $M$ basis functions in the pool at time step $t$, and $g(\cdot) \equiv (g_1(\cdot), \ldots, g_M(\cdot))$ the vector function that comprises all basis functions $g_i : \mathbb{R}^M \times \mathbb{R} \to \mathbb{R}$. The output of the model is set, in analogy to the static models, as a superposition of the basis functions,

$$\hat{y}_t = \sum_{i=1}^{M} w_i\, g_i(x_{t-1}, u_t). \qquad (3.43)$$

Only the basis functions that contribute the most to a good prediction performance are needed for the model output. Since the cost functions MSE1 and MSE are equivalent, the best ones can be chosen in a selection process similar to that of the static models (see Fig. 3.11). One should note, however, that this selection process is only virtual, because the whole pool is needed to create the desired dynamics. The weights $w_i$ of the basis functions that are not selected are simply set to zero in Eq. (3.43). This is different from the static approach, where the selection process actually reduced the size of the model. This is the downside of the dynamical modeling approach: the size of dynamical models usually exceeds that of static models by far.
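As an illustration of Eqs. (3.42) and (3.43), here is a minimal Python sketch of such a pool, assuming randomly drawn internal couplings W, input weights w_in, and tanh basis functions; all of these choices are illustrative, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 50                                  # number of basis functions in the pool
W = 0.1 * rng.uniform(-1, 1, (M, M))    # internal feedback loops between pool members
w_in = rng.uniform(-1, 1, M)            # couplings to the external input

def g(x_prev, u_t):
    """Pool update, Eq. (3.42): every g_i sees the full state and the input."""
    return np.tanh(W @ x_prev + w_in * u_t)

# Virtual selection: weights of unselected basis functions are set to zero,
# but the full pool keeps running to create the desired dynamics.
w = rng.standard_normal(M)
w[rng.random(M) < 0.5] = 0.0

# Drive the pool; the state vector x acts as the internal memory, so no
# external feedback loop is needed for free-running prediction.
T = 200
u = np.sin(0.2 * np.arange(T))
x = np.zeros(M)
y_hat = np.zeros(T)
for t in range(T):
    x = g(x, u[t])          # new state from previous state and current input
    y_hat[t] = w @ x        # Eq. (3.43): weighted superposition of pool outputs
```

Note that the deselected basis functions still participate in the state update g; only their output weights are zero, which is exactly why the selection is merely virtual.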

Nevertheless, both requirements from the beginning of the chapter are met by the dynamical modeling approach. Since an external feedback loop is not employed, the bias problem of the static models does not occur for dynamical models. Moreover, an additional embedding step is not needed, because the internal state $x_t$ implicitly stores the history of the input $\{u_t\}_{t\in I}$. By adjusting the parameters of the basis functions and the internal feedback loops between them, the processing of time information in the input data and the model structure are optimized at the same time.

The creation of recurrent elements in the function pool through internal feedbacks is by no means trivial. In order to produce reliable response signals to an input signal, the dynamics of the pool has to meet some stability criteria. This problem is discussed in more detail from the point of view of synchronization in the next chapter.
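One simple criterion of this kind, offered only as an illustrative aside: for a pool of the form $x_t = \tanh(W x_{t-1} + w_{in} u_t)$, as in the sketch above, a standard sufficient condition for a stable input-driven response is that $W$ is contracting, e.g. that its largest singular value is below one. The check below is a sketch under that assumption and is not necessarily the criterion the text refers to.

```python
import numpy as np

# Hypothetical internal coupling matrix of a tanh pool (as in the sketch
# above). Since tanh is 1-Lipschitz, a largest singular value of W below 1
# makes the state update a contraction, so the pool's response to an input
# signal cannot blow up. This heuristic is an assumption, not the criterion
# developed in the next chapter.
rng = np.random.default_rng(1)
M = 50
W = 0.1 * rng.uniform(-1, 1, (M, M))

sigma_max = np.linalg.norm(W, 2)        # largest singular value
rho = max(abs(np.linalg.eigvals(W)))    # spectral radius (a looser check)
print(f"largest singular value: {sigma_max:.3f} (< 1 => contracting pool)")
print(f"spectral radius       : {rho:.3f}")
```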

Synchronization and modeling

When two systems are coupled to each other, they can sometimes assimilate their behavior. This phenomenon is called synchronization. Its discovery is credited to Huygens (1629–1695), who observed that two pendulum clocks adjusted their oscillations when they were attached to the same beam [61].

Since then many other examples of synchronization have been found, ranging from organ pipes and biological clocks to glowworms. Especially with the advent of radio communication at the beginning of the 20th century, synchronization became an essential concept, receiving more and more attention from engineering, mathematics, and physics.

In the last decades of the 20th century synchronization was linked with the popular chaos theory. It was found numerically and experimentally that chaotic systems could be synchronized (e.g. [27], [60], [58]). This new aspect of synchronization was spectacular because it went against all intuition concerning chaos. In chaotic systems even slight differences in initial conditions lead to completely different trajectories, and it was astonishing that such systems could be brought to assimilate their dynamics and to stay locked in this state in a stable manner. Many scientists of the nonlinear dynamics community seized the opportunity to explore the new field of chaotic synchronization, and the combination of chaos theory and synchronization proved fruitful for research. Nowadays synchronization is among the most investigated nonlinear phenomena, with many applications in fields like control, communication, or neuroscience.

In this chapter we want to give a short introduction to concepts regarding synchronization. Since there are so many of them, we will restrict the discussion to the ones most important for this work: identical synchronization in Section 4.2 and generalized synchronization in Section 4.3. For broader reviews see e.g. [13], [61], [56], or [53].

In the last section (Section 4.4) the dynamical modeling approach, which was introduced in Section 3.3, is examined from the perspective of synchronization. The concept of reliability is developed, which is based primarily on generalized synchronization.

Figure 4.1: Left: Unidirectional coupling scheme between two dynamical systems. How the state of the response system evolves is influenced by the state of the drive system. Right: Bidirectional coupling scheme. Because of the mutual influence there is no distinction between drive and response system.

generalized synchronization. In a practical example we show how a synchro-nized system can be used for dynamical modeling. In this case the model system consists of interconnected Lorenz systems.

4.1 Preliminaries

Before going into detail about synchronization, a short note on coupling schemes is required. For the general case of two coupled dynamical systems we have

$$\dot{x} = F_x(x) + h_x(x, y),$$
$$\dot{y} = F_y(y) + h_y(x, y), \qquad (4.1)$$

with $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ as state vectors and $F_x : \mathbb{R}^m \to \mathbb{R}^m$ and $F_y : \mathbb{R}^n \to \mathbb{R}^n$ as vector fields of the two systems. The functions $h_x : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m$ and $h_y : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^n$ define the mutual coupling between both systems.

This coupling scheme is called bidirectional. In the special case that one of the coupling functions is zero, the coupling is called unidirectional (see Fig. 4.1).

In contrast to bidirectional coupling, where both systems mutually influence each other, the information flows only in one way for the unidirectional case.

The system providing the information is called drive or master system. The system receiving the information is called response or slave system.
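As a concrete example of the unidirectional scheme in Eq. (4.1), the following Python sketch couples a Lorenz drive system (used again in Section 4.4) to an identical response system; the diffusive coupling $h_y(x, y) = c\,(x - y)$ with $h_x = 0$, the coupling strength, and the simple Euler integration are illustrative assumptions.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Vector field of the Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

dt, steps, c = 0.005, 20000, 5.0
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([-5.0, 7.0, 20.0])      # deliberately different initial state

for _ in range(steps):
    # h_x = 0: the drive evolves autonomously (unidirectional coupling).
    drive = drive + dt * lorenz(drive)
    # The response feels the drive through the diffusive term c * (x - y).
    resp = resp + dt * (lorenz(resp) + c * (drive - resp))

print("final |x - y| =", np.linalg.norm(drive - resp))
```

For sufficiently strong coupling the response state converges to the drive state despite the different initial conditions, which is the identical synchronization discussed in Section 4.2.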

Although the unidirectional and the bidirectional coupling schemes have similarities, they also differ in many ways, and there is no general method to transfer results from one to the other. Compared to bidirectional coupling, unidirectional coupling is a docile scenario. Since the drive system is not influenced by the response system, it sets the course of the dynamics in the mutual state space; the response system is more or less restricted to the choice of following or not following. In the bidirectional case the dynamics is not dictated by one system. Rather, both systems compete for dominance, and the dynamics in the mutual state space is the result of this struggle. The bidirectional coupling produces a whole new system with its own dynamics and thus proves more difficult to analyze. In our work bidirectional coupling plays only a minor role, and if not explicitly stated otherwise, we will restrict our attention to the unidirectional case.

All definitions in this chapter refer to the case of two coupled continuous systems. However, with minor modifications they can also be transferred to multiple coupled systems. In the case of discrete systems the ordinary differential equations in Eq. (4.1) have to be replaced with difference equations

$$x_{t+1} = f_x(x_t) + h_x(x_t, y_t),$$
$$y_{t+1} = f_y(y_t) + h_y(x_t, y_t). \qquad (4.2)$$

Apart from that, the definitions and concepts for the continuous and the discrete case are generally the same. Exceptions will be explicitly pointed out to the reader.