
Existence and Uniqueness Theorem. The properties of the governing equations describing the dynamics of a system restrict the possible trajectories in phase space. In this thesis we will often consider a system of ODEs as a reference system, to which we compare the dynamics of other systems. In a system described by smooth ODEs, the existence and uniqueness theorem holds (taken from ’Nonlinear Dynamics and Chaos’ by Steven Strogatz, p. 150 [82]):

Consider the initial value problem $\dot{x} = f(x)$, $x(0) = x_0$. Suppose that $f$ is continuous and that all its partial derivatives $\partial f_i/\partial x_j$, $i, j = 1, \ldots, n$, are continuous for $x$ in some open connected set $D \subset \mathbb{R}^n$. Then for $x_0 \in D$, the initial value problem has a solution $x(t)$ on some time interval $(-\tau, \tau)$ about $t = 0$, and the solution is unique.

From this theorem it follows that, as long as the function $f$ is locally smooth enough, trajectories cannot intersect locally; in particular, trajectories cannot merge into one or split in two. If the function $f$ is smooth enough globally, this holds everywhere in phase space.
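To see what can go wrong when $f$ is not smooth enough, consider the classic example of the initial value problem
\[
\dot{x} = x^{1/3}, \qquad x(0) = 0 .
\]
Here $\partial f/\partial x = \tfrac{1}{3} x^{-2/3}$ diverges at $x = 0$, so the theorem does not apply, and indeed uniqueness fails: both $x(t) \equiv 0$ and $x(t) = (2t/3)^{3/2}$ solve the problem for $t \geq 0$, i.e. two distinct trajectories emanate from the same initial condition.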

Poincaré-Bendixson Theorem. An important consequence of the existence and uniqueness theorem is the Poincaré-Bendixson theorem. It states that any bounded trajectory of a smooth two-dimensional system eventually approaches a fixed point or a limit cycle. A consequence of this theorem is that chaos is only possible in smooth ODE systems of at least three dimensions.
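The behavior guaranteed by the theorem can be illustrated numerically with any smooth planar system; the sketch below uses the van der Pol oscillator as a stand-in example (it is not one of the systems studied in this thesis). Its trajectories are bounded, and after a transient the orbit settles onto a limit cycle.

```python
# Illustration of the Poincare-Bendixson theorem with a smooth planar system:
# the van der Pol oscillator (mu = 1).  Bounded trajectories cannot wander
# chaotically in two dimensions; here they settle onto a limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    x, y = state
    return [y, mu * (1.0 - x**2) * y - x]

sol = solve_ivp(van_der_pol, (0.0, 100.0), [0.1, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Sample the orbit after transients have decayed; it traces the same closed
# curve (the limit cycle) over and over.
late = sol.sol(np.linspace(80.0, 100.0, 2000))
print("x on the attractor stays within [%.2f, %.2f]" % (late[0].min(), late[0].max()))
```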

Nested Limit Cycles in a Minimal Adaptive System

3.1 Introduction

Network models in theoretical neuroscience often consist of adaptive units. What kind of dynamics can be expected of a minimal adaptive system? In this chapter we study the dynamics of a minimal adaptive system in greater detail. The phase space portrait is characterized by nested limit cycles. We provide a proof for the existence of limit cycles in a simplified system and finally consider the system from different perspectives.

Adaptation. What is adaptation? Adaptation is not a well-defined term. However, there are systems that are generally considered to be adaptive. The evolution of species can be seen as a game of adaptation, with different species competing over resources and fighting for survival. While species well adapted to their environment are likely to survive and flourish, species that are less adapted are more likely to perish. A famous example is the evolution of the peppered moth, which has been documented for over two hundred years. This species of moth lives on light-colored trees and lichens.

While the moths were originally of light color, the increased pollution during the industrial revolution darkened the trees, and consequently the typical moth became darker, because a darker color provided better camouflage against predators and thereby increased the likelihood of survival. Eventually the air became less polluted, the trees turned lighter again, and the coloring of the typical moth returned to the original lighter color [93].

Another example of adaptation, this time from neuroscience, is the adaptation of neuronal firing rates [20, 23, 56]. While a new stimulus elicits an increase in the firing of neurons, the firing rate returns to baseline upon prolonged presentation of that stimulus. Adaptation is a general principle in the nervous system and occurs across processing levels and at a broad range of different time scales. An optical illusion stemming from adaptation is the motion after-effect. If continuous motion in one direction is presented for a long time and the motion stimulus is then removed, observers report perceiving illusory motion in the opposite direction. While the motion stimulus was presented, the neurons encoding this specific motion direction adapted to the stimulus by reducing their firing rate. Therefore, when the stimulus is removed, the neurons encoding the opposite direction of motion show higher firing rates than the adapted neurons. This imbalance in firing rates produces the percept of motion in the opposite direction [23, 56].

In both cases the adaptation can be understood as an optimization process. In evolution the phenotype of a species changes to optimize survival in a changing environment. In neuroscience the reduction of firing rates in response to a change in baseline inputs minimizes the production of energetically expensive action potentials.

Therefore, we choose the following working definition for adaptation:

Adaptation is an ongoing minimization of an energy functional, which itself may change on a slower time scale. Energy in this context does not refer to a physical energy but rather to a Lyapunov function, which is minimized along trajectories (except at unstable fixed points).
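For a gradient system this definition takes a particularly simple form: if the dynamics are $\dot{x} = -\partial V/\partial x$ for some potential $V$, then $V$ itself is such a Lyapunov function, since along trajectories
\[
\frac{dV}{dt} = \frac{\partial V}{\partial x}\,\dot{x} = -\left(\frac{\partial V}{\partial x}\right)^{2} \leq 0 ,
\]
with equality only at fixed points of the dynamics.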

In theoretical neuroscience several networks and mechanisms have been proposed to produce such dynamics, which Treves [87] termed latching dynamics. These networks are Hopfield-like networks [34], which are characterized by strongly connected cell assemblies competing with each other, resulting in several fixed point attractors.

Hopfield-like networks can perform computational tasks such as pattern completion and categorization, and they reproduce some perceptual effects such as priming or multistable perception [16, 66]. If equipped with an additional mechanism that destabilizes attractors on a slower time scale, latching dynamics can be observed: The system starts within the basin of an attractor and moves towards it. The additional mechanism destabilizes the current attractor and the system moves towards the next attractor. Overall, the system produces a sequence of latchings from one attractor to another. Mechanisms discussed for destabilizing attractors include synaptic or intraneural mechanisms, noise and inhibition [3, 39, 47, 48, 58, 66, 77, 87].
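The destabilization principle can be condensed into a toy rate model: two mutually inhibiting populations play the role of two attractor states, and a slow adaptation variable attached to each population erodes the currently dominant state until activity latches to the other one. The sketch below is only an illustration of this principle, not a reimplementation of any of the cited models, and all parameter values are chosen ad hoc.

```python
# Toy rate model of latching between two competing assemblies: mutual
# inhibition creates two attractor states, and slow adaptation destabilizes
# whichever state is currently occupied, so activity switches back and forth.
# Parameter values are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

I_ext, beta, g = 0.5, 1.0, 1.0     # external drive, mutual inhibition, adaptation strength
tau_a, k = 50.0, 0.05              # slow adaptation time scale, gain steepness

def F(x):
    # steep sigmoid: a population is either nearly silent or nearly saturated
    return 1.0 / (1.0 + np.exp(-x / k))

def rhs(t, state):
    r1, r2, a1, a2 = state
    dr1 = -r1 + F(I_ext - beta * r2 - g * a1)
    dr2 = -r2 + F(I_ext - beta * r1 - g * a2)
    da1 = (-a1 + r1) / tau_a       # adaptation builds up in the active assembly
    da2 = (-a2 + r2) / tau_a
    return [dr1, dr2, da1, da2]

sol = solve_ivp(rhs, (0.0, 600.0), [1.0, 0.0, 0.0, 0.0], max_step=0.5)

# dominant assembly over time: it alternates, i.e. the activity latches
dominant = np.where(sol.y[0] > sol.y[1], 1, 2)
print("number of switches between the two states:", int(np.sum(np.diff(dominant) != 0)))
```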

The adaptation process can be conceptualized as a particle (the adapting entity) moving inside a potential (the environment). With the above working definition of adaptation, the simplest adaptive dynamical system consists of a one-dimensional potential fully determining the movement of a particle, and a dynamical rule for how the potential changes over time. We chose a sinusoidal potential which slowly changes according to the particle’s position. The resulting system can be fully described by two ODEs, and its phase space portrait is characterized by discrete limit cycles nested within each other.
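To make the construction concrete, one possible form of such a two-dimensional system is sketched below: the particle follows the negative gradient of a sinusoidal potential whose phase $c$ drifts on a slower time scale, driven by the particle’s position. The functional forms used here are illustrative assumptions (only the parameter values are taken from Fig. 3.1); the equations of the system actually studied in this chapter are introduced in the next section.

```python
# Sketch of a particle in a slowly adapting sinusoidal potential, written as
# two coupled ODEs.  The potential V(x, c) = A*sin(x + c) and the drive on c
# are assumed forms for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

A, phi, eps = 1.0, -np.pi / 2 + 0.1, 0.02     # parameter values as in Fig. 3.1
x_ini, c_ini = 11.5, 0.0

def rhs(t, state):
    x, c = state
    dVdx = A * np.cos(x + c)        # gradient of the assumed potential
    dx = -dVdx                      # fast: overdamped particle follows -dV/dx
    dc = eps * np.sin(x + phi)      # slow: phase of the potential drifts with the particle's position (assumed coupling)
    return [dx, dc]

sol = solve_ivp(rhs, (0.0, 2000.0), [x_ini, c_ini], max_step=0.5)
print("final (x, c):", sol.y[0, -1], sol.y[1, -1])
```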

Structure of this chapter. This chapter is structured in the following way: In the second section we introduce the system studied here. In the third section we demonstrate the system’s behavior through numerical simulations, exploring different parameters and aiming for an intuitive understanding of the system’s behavior. The original system is nonlinear and cannot be studied analytically; we therefore introduce a simplified system that still qualitatively shows the same behavior but allows for some analytic treatment of the global dynamics of a nonlinear dynamical system. In the fourth section we introduce this simplified system, show numerically that its qualitative behavior is the same, and provide an analytic proof for the existence of the system’s limit cycle behavior (at least in a certain parameter regime). In the fifth section we discuss the results by presenting alternative viewpoints on the system. In the sixth and last section we provide a short summary.

Figure 3.1: Particle interacting with its potential. The potential determines how the particle moves. At the same time, the position of the particle leads to a change of the potential on a slower time scale. Lines correspond to V(x(t)), the dot ◦ corresponds to (x(t), V(x(t))) as time t evolves. The lighter the color, the further in the past. Parameters: A = 1, ϕ = −π/2 + 0.1, ε = 0.02, x_ini = 11.5, c_ini = 0.