
Figure 2.8.: Wave propagation in a network of phase oscillators. The images depict snapshots of the phases of an array of 200x200 weakly coupled phase oscillators at different points in time. The units are arranged as a two-dimensional grid and each unit is locally coupled to its four nearest neighbors.

The phase response curve of the oscillators is the one derived from the mean-field model (Figure 2.5, right). The frequency of unit (0,100), at the middle left of the images, is fixed, i.e., an external forcing is applied.

(t = 1) Random initial state. (t = 25) Increasing homogeneity of phases. (t = 61) Appearance of several local wave generators (red blobs). (t = 111) Reduced number of local wave generators; the periodically forced unit (0,100) begins to dominate the wave pattern. (t = 200) Wave fronts regularly originate from unit (0,100); only minor irregularities in the propagation pattern remain. (t = 300) A regular wave propagation pattern originating from the externally driven phase oscillator (0,100) is established.
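For readers who want to reproduce the qualitative behavior, a minimal sketch of such a simulation is given below. It assumes a plain sinusoidal coupling function instead of the PRC derived from the mean-field model, periodic boundary conditions, and arbitrary values for coupling strength, natural frequency, and time step; none of these choices are taken from the original simulation.

```python
import numpy as np

# Grid of 200 x 200 weakly coupled phase oscillators; unit (0, 100) is forced.
N = 200          # grid size (as in Figure 2.8)
eps = 0.05       # weak coupling strength (assumed value)
omega = 1.0      # natural frequency (assumed, arbitrary units)
dt = 0.1         # integration time step (assumed)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))   # random initial phases (t = 1)

def step(theta):
    # Coupling to the four nearest neighbors; a plain sinusoidal coupling is
    # used here instead of the PRC derived from the mean-field model.
    coupling = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        coupling += np.sin(np.roll(theta, shift, axis=axis) - theta)
    theta_new = theta + dt * (omega + eps * coupling)
    # Unit (0, 100): frequency is fixed by an external forcing, i.e. its phase
    # advances at a constant rate regardless of its neighbors.
    theta_new[0, 100] = theta[0, 100] + dt * omega
    return np.mod(theta_new, 2.0 * np.pi)

for t in range(300):
    theta = step(theta)   # 'theta' corresponds to the snapshots shown in the figure
```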

2.4. Discussion

We obtained a testable prediction for the PRC of the neocortex during deep sleep and for slices of cortical tissue exhibiting up and down states. In the weak resetting regime we found type II PRCs with similar features for two different models that reproduce many aspects of up and down states in slices. The obtained PRCs show that responsiveness is maximal close to the transition to the up state. This is in agreement with evoked potential studies (Massimini, 2002).

In the strong resetting regime both models also conform to the experimental results by Shu et al. (2003). The results strictly apply only to ferret brain slices, as both investigated models build on observations from those preparations. However, considering the universality of sleep and of related phenomena like spindles and hippocampal ripples across mammals, the results should, at least qualitatively, translate to other species as well.

Treating a neural population as a phase oscillator is a strong simplification that is not justified in general. However, slow oscillations during deep sleep and deep anesthesia present a rare case where this approximation can be valid, because they exhibit a high degree of regularity.

Theoretical investigations predict a power-law distribution of the residence times in up and down states (Mejias et al., 2010), but it has also been shown that a purely fluctuation-driven transition between up and down states is not sufficient to account for the statistics of residence times (Deco et al., 2009). Rather, the probability density function obtained from experimental data is unimodal and centered on a preferred frequency not close to zero (Deco et al., 2009), in contrast to the theoretically derived power-law distribution (Mejias et al., 2010).

The population activity of the network and mean-field models is reminiscent of relaxation oscillations, and phase model theory can be used to predict the synchronization behavior (Izhikevich, 2000; Varkonyi and Holmes, 2008). In particular, phase equations of a relaxation oscillator are appropriate to describe its synchronization behavior if coupling is weak and the oscillator is not close to the limiting case of discontinuous jumps.

Phase response theory allows for accurate prediction of phase locking between oscillators and can be useful for analyzing interactions between brain regions (Levnajić and Pikovsky, 2010; Ko and Ermentrout, 2009; Kori et al., 2009; Perez Velazquez et al., 2007). During mammalian deep sleep, hippocampal sharp wave ripple complexes and thalamic spindles tend to be phase-locked to the neocortical slow oscillation (Mölle et al., 2002; Clemens et al., 2007; Mayer et al., 2007), and parahippocampal activity seems to be phase-locked to the troughs of parietal and parahippocampal spindles. A characterization of these rhythms in terms of PRCs might shed light on the nature of this observation.

Furthermore, knowing the PRC of a system enables one to estimate cortical inputs based on the drift velocity of spiral waves (Biktasheva et al., 2010).

3. Excursus: Fitting models to time series of stochastic processes

This section describes a batch method for the estimation of parameters in nonlinear stochastic dynamical models. It is based on the global optimization of a cost function that is evaluated on the complete data set. State-of-the-art methods usually fall into the category of recursive Bayesian filters and smoothers (Särkkä, 2013; Daunizeau et al., 2009), e.g., unscented Kalman filters (Wan and Van Der Merwe, 2000), which can perform simultaneous parameter and state inference. However, they are somewhat hard to implement and can be computationally very demanding, especially for long time series.

3.1. Cost function

We utilize the result from Kazakov and Lavrov (1994) that the two-dimensional density $w(x_1, t_1; x_2, t_2)$ of a stochastic process $s$, notably even a non-Gaussian and non-Markov process, is completely defined by two univariate functions, namely the amplitude distribution, $f(x_1) = \int w(x_1, x_2)\,\mathrm{d}x_2$, and the correlation function, $R(\tau)$. Specifically, they show that $w$ can be represented by an expansion in terms of these two functions.

With this in mind we use the probability density function (pdf) $f$ and the correlation function $R$ to define a distance function between two stochastic processes $s_1$ and $s_2$ as

$$D(s_1, s_2) := D_1(f_{s_1}, f_{s_2}) + D_2(R_{s_1}, R_{s_2}) + D_1(f_{S_1}, f_{S_2}). \qquad (3.3)$$

Regarding $D_1$, a common choice for the "distance" between probability density functions would be the Kullback-Leibler (KL) divergence. However, we chose to use the Mallows distance, because it is easy to compute, statistically efficient, and assumes non-trivial values even when the distributions have disjoint support. Consider two distinct probability density functions $f_1$ and $f_2$, with $f_i(x) > 0 \;\forall x \in [a, b]$ and $\int_a^b f_i(x)\,\mathrm{d}x = 1$. Further, let $F_1$ and $F_2$ be the corresponding distribution functions and $F_1^{-1}$ and $F_2^{-1}$ the inverse distribution functions. The Mallows distance for the one-dimensional case is

$$D_M(f_1, f_2) = \left( \int_0^1 \left| F_1^{-1}(u) - F_2^{-1}(u) \right|^2 \mathrm{d}u \right)^{1/2}.$$

If sample vectors $x$ and $y$ are given, each drawn from the densities $f_1$ and $f_2$ and having equal length, then the Mallows distance can be calculated directly from these vectors via

$$D_M(f_1, f_2) = \left( \frac{1}{n} \sum_{i=1}^{n} \left| x_{(i)} - y_{(i)} \right|^2 \right)^{1/2},$$

where $n$ is the common length of the vectors and $x_{(i)}$, $y_{(i)}$ denote the samples sorted in ascending order.
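For illustration, the sample-based formula translates directly into a few lines of code. The snippet below is a sketch assuming the quadratic (order-2) form of the Mallows distance; the function name and the example data are chosen here for convenience and are not from the original text.

```python
import numpy as np

def mallows_distance(x, y):
    """Sample-based Mallows (order-2 Wasserstein) distance between two equally
    long sample vectors, computed from their order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("sample vectors must have equal length")
    return np.sqrt(np.mean((x - y) ** 2))

# Example: two Gaussian samples whose means differ by 0.5; the Mallows distance
# approaches that mean shift for large samples.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=10_000)
b = rng.normal(0.5, 1.0, size=10_000)
print(mallows_distance(a, b))
```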

Regarding $D_2$, the Wiener-Khinchin theorem states that for a stationary stochastic process the Fourier transform of its autocorrelation function equals the power spectral density of the process (provided the Fourier transform exists). Hence, the problem of calculating the distance between two correlation functions can be transformed into calculating the distance between power spectral densities. For this problem several Riemannian metrics and geodesic distances have been proposed that take into account the differential-geometric structure of spectral densities (Li and Wong, 2013; Georgiou, 2007). Here, we use three of them and compare their performance in the parameter estimation problem.

The first distance is based on the difference between the logarithms of the two spectral densities and is sometimes called the log spectral distance (Georgiou et al., 2009).

Li and Wong (2013) suggested a second distance, which is equivalent to the Hellinger distance between the spectral densities.
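As an illustration of how such spectral distances can be evaluated in practice, the sketch below estimates power spectral densities with Welch's method and computes a log spectral distance and a Hellinger-type distance between them. The discretization, the normalization of the PSDs, and the parameter choices are assumptions made for this example and are not necessarily the exact forms used by Georgiou et al. (2009) or Li and Wong (2013).

```python
import numpy as np
from scipy.signal import welch

def log_spectral_distance(p1, p2, df):
    """L2 distance between the logarithms of two PSDs (one common convention)."""
    d = np.log(p1) - np.log(p2)
    return np.sqrt(np.sum(d ** 2) * df)

def hellinger_spectral_distance(p1, p2, df):
    """Hellinger-type distance; PSDs are normalized to unit area first (assumption)."""
    q1 = p1 / (np.sum(p1) * df)
    q2 = p2 / (np.sum(p2) * df)
    return np.sqrt(0.5 * np.sum((np.sqrt(q1) - np.sqrt(q2)) ** 2) * df)

def ar1(phi, n=20_000, seed=2):
    """AR(1) surrogate process, used only to exercise the distance functions."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Welch PSD estimates; the DC bin is dropped so the logarithm stays well behaved.
freqs, p1 = welch(ar1(0.9), fs=1.0, nperseg=256)
_, p2 = welch(ar1(0.5, seed=3), fs=1.0, nperseg=256)
df = freqs[1] - freqs[0]
print(log_spectral_distance(p1[1:], p2[1:], df))
print(hellinger_spectral_distance(p1[1:], p2[1:], df))
```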

Finally, Georgiou (2007) proposed the "geodesic prediction distance" $D_G(f_1, f_2)$.

The term $D_1(f_{S_1}, f_{S_2})$ uses higher-order statistics of the time series that are necessary to capture temporal asymmetry. A simple way of doing this is to look at the pdf $f_S$ of the set $S = \{s_t - s_{t-1}\}$ of first differences.
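A purely illustrative sketch of this term is given below, reusing the sample-based Mallows distance from above. The example compares a sawtooth-like series with its time reversal, which has the same marginal pdf and autocorrelation but opposite temporal asymmetry, so only the first-difference term distinguishes the two.

```python
import numpy as np

def first_difference_distance(s1, s2):
    """Mallows distance between the distributions of first differences s_t - s_{t-1};
    this term is sensitive to temporal asymmetry (e.g. slow rises vs. fast drops)."""
    d1 = np.sort(np.diff(np.asarray(s1, dtype=float)))
    d2 = np.sort(np.diff(np.asarray(s2, dtype=float)))
    n = min(d1.size, d2.size)   # equal lengths are assumed, as in the text above
    return np.sqrt(np.mean((d1[:n] - d2[:n]) ** 2))

# Example: a sawtooth and its time reversal share the marginal pdf and the
# autocorrelation function, but the first differences expose the asymmetry.
t = np.arange(5_000)
saw = (t % 50) / 50.0
print(first_difference_distance(saw, saw[::-1]))   # clearly greater than zero
```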

The cost function $D(s_1, s_2)$ can be minimized with standard methods, e.g., a stochastic global optimizer. Here, the MEIGO package was used (Egea et al., 2014), which is based on scatter search.
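To illustrate the overall batch procedure, the sketch below simulates a toy model for each candidate parameter vector, evaluates a simplified version of the cost in Eq. (3.3), and minimizes it with SciPy's differential_evolution as a stand-in for MEIGO. The Ornstein-Uhlenbeck model, the parameter bounds, and the equal weighting of the three terms are illustrative assumptions, not choices from the original text.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import welch

def simulate_ou(theta, n=2_000, dt=0.01, seed=0):
    """Toy Ornstein-Uhlenbeck model, theta = (relaxation rate, noise amplitude)."""
    k, sigma = theta
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - k * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def mallows(a, b):
    """Sample-based Mallows distance (order 2), cf. the formula above."""
    a, b = np.sort(a), np.sort(b)
    n = min(a.size, b.size)
    return np.sqrt(np.mean((a[:n] - b[:n]) ** 2))

def log_psd_dist(a, b):
    """Simple log spectral distance between Welch PSD estimates (DC bin dropped)."""
    _, pa = welch(a, nperseg=256)
    _, pb = welch(b, nperseg=256)
    return np.sqrt(np.mean((np.log(pa[1:]) - np.log(pb[1:])) ** 2))

def cost(theta, data):
    """D1 on amplitude pdfs + D2 on spectra + D1 on first differences, cf. Eq. (3.3)."""
    sim = simulate_ou(theta)
    return (mallows(sim, data)
            + log_psd_dist(sim, data)
            + mallows(np.diff(sim), np.diff(data)))

data = simulate_ou((2.0, 1.0), seed=42)     # surrogate "measured" time series
bounds = [(0.1, 10.0), (0.1, 5.0)]          # assumed parameter bounds
result = differential_evolution(cost, bounds, args=(data,), seed=1,
                                maxiter=20, popsize=10)
print(result.x)                              # estimated (relaxation rate, noise amplitude)
```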