
4. Dictating spike times with Gaussian stimuli 85


\[
\frac{dV}{dt} = \alpha + \sqrt{2D}\,\xi(t), \tag{4.4}
\]

where ξ(t) is white Gaussian noise with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), and that is equipped with the fire-and-reset rule: whenever V > 1, a spike time is registered and a reset, V → 0, is applied. The firing rate and the CV of the renewal spike train that is generated by this model uniquely correspond to the constant input α and the noise intensity D (Vilela and Lindner, 2009a). This relation can be inverted, which reveals how the model parameters have to be chosen in order to evoke the prescribed rate and CV:

\[
\alpha = r, \qquad D = \frac{r\,\mathrm{CV}^2}{2}. \tag{4.5}
\]

Here, the particular choice of the values for the prescribed rate and CV will be made relative to the reference values r0 and CV0 of the neuron under investigation. For the numerical simulations we varied the prescribed statistics around these reference values, changing the firing rate between 0.5 r0 and 1.5 r0 in 20 steps and the CV between 0.2 CV0 and 2.5 CV0 in 46 steps. In the experiments, we only used the reference values themselves as prescribed values, r = r0 and CV = CV0.
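As a sketch of how eq. (4.4) and the inversion eq. (4.5) can be used in practice, the following snippet simulates the white-noise-driven perfect integrate-and-fire model with an Euler-Maruyama scheme and the fire-and-reset rule. Function name, default duration, and time step are illustrative choices, not taken from the text:

```python
import numpy as np

def simulate_pif(rate, cv, t_max=1000.0, dt=1e-3, seed=0):
    """Simulate dV/dt = alpha + sqrt(2 D) xi(t) with threshold 1 and
    reset to 0; alpha and D follow the inversion of eq. (4.5)."""
    alpha = rate                    # alpha = r
    D = rate * cv ** 2 / 2.0        # D = r CV^2 / 2
    n_steps = int(t_max / dt)
    # pre-draw the Gaussian increments for speed
    noise = (np.sqrt(2.0 * D * dt)
             * np.random.default_rng(seed).standard_normal(n_steps)).tolist()
    drift = alpha * dt
    v, spikes = 0.0, []
    for i, xi in enumerate(noise):
        v += drift + xi             # Euler-Maruyama step
        if v > 1.0:                 # fire-and-reset rule
            spikes.append(i * dt)
            v = 0.0
    return np.asarray(spikes)
```

The interspike intervals of the returned train should have a mean close to 1/r and a coefficient of variation close to the prescribed CV.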

4.4. Calculating a stimulus to evoke the prescribed spike train

Our procedure to calculate stimuli that are supposed to evoke prescribed spike trains involves approximations and transformations that have to be justified a posteriori by testing the performance of our method. Hence, in the following the method is first presented step by step, without addressing the question of its justification:

For a system that acts purely linearly, the input, s(t), and the output, x(t), are related via a convolution

\[
x(t) = (K \ast s)(t). \tag{4.6}
\]

Here, K(t) is the linear response function and ∗ denotes convolution. In the frequency domain the convolution, eq. (4.6), simplifies to a multiplication:

\[
\tilde{x}(f) = \chi(f)\,\tilde{s}(f). \tag{4.7}
\]

In this equation the Fourier transform of the linear response function, K(t), can be identified as the susceptibility, χ(f). The assumption that the dependence of input and output can be described by a linear relation constitutes a strong approximation (although eq. (4.7) captures an important part of the full dynamics).

In the following, regardless of the statistics of the prescribed spike train or the stimulus, we use the susceptibility, χ0 (eq. (4.2)), that is measured under white-noise stimulation at the respective reference point (µ0, σ0). With the knowledge of the susceptibility, we can use eq. (4.7) to calculate the stimulus s̃(f) that evokes the output x̃(f):

\[
\tilde{s}(f) = \frac{\tilde{x}(f)}{\chi_0(f)}. \tag{4.8}
\]
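The division in eq. (4.8) can be sketched numerically as follows. We assume the target output x(t) is given as a sampled time series and the susceptibility χ0 as a complex array on the matching rfft frequency grid; the function name and the eps guard against zeros of χ0 are illustrative additions, not part of the text:

```python
import numpy as np

def stimulus_from_output(x, chi0, eps=1e-12):
    """Frequency-domain deconvolution, eq. (4.8): divide the Fourier
    transform of the target output by the susceptibility.  `chi0` must
    be sampled on the rfft frequency grid that matches `x`."""
    X = np.fft.rfft(x)
    chi_safe = np.where(np.abs(chi0) > eps, chi0, eps)  # avoid division by ~0
    return np.fft.irfft(X / chi_safe, n=len(x))
```

For a susceptibility obtained as the Fourier transform of a known kernel, this inverts the (circular) convolution of eq. (4.6) exactly up to floating-point error.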

The use of χ0(f) independent of the actual values, µ, σ, of the input current and regardless of the CV and firing rate of the prescribed spike train is again an approximation.

Also, the stimulus in eq. (4.8) is colored noise, which can affect the susceptibility [see e.g. (Fourcaud-Trocmé et al., 2003)].

Eq. (4.8) provides us with the stimulus in the Fourier domain; the prescribed stimulus, s(t), is obtained as a time series by applying the inverse Fourier transformation. However, because x(t) is a train of δ-pulses, the stimulus of eq. (4.8) will not simultaneously fulfill all our constraints of Gaussianity, given mean and variance, and finite cutoff frequency. To impose these restrictions, we further process s(t).

In a first step we apply a static nonlinear transformation, F(s, µ, σ), to the stimulus in the time domain,

\[
s^{\dagger,G}(t) = F\!\left(s(t), \mu, \sigma\right), \tag{4.9}
\]

which provides us with a stimulus s†,G(t) whose amplitude histogram is that of a Gaussian with mean µ and variance σ2.

Using cumulative probability distributions, Py(x) = Prob(sy < x), the nonlinearity F can be expressed by the cumulative probability distribution of the actual stimulus, P, and the inverse cumulative probability distribution of a Gaussian signal, PG−1 (the inverse of the prescribed probability distribution):

\[
F(x, \mu, \sigma) = P_G^{-1}\!\left(P(x), \mu, \sigma\right). \tag{4.10}
\]

Figure 4.2.: Illustration of the nonlinear transformation of a non-Gaussian variable into a Gaussian variable. A) The histograms of the probability distributions of the actual (pink) and the transformed (transparent black) stimuli s(t) and s†,G(t). The green curve that envelops the histogram of s†,G(t) denotes the desired profile, i.e. the normal distribution N(s, µ, σ). B) The cumulative probability distributions P and P†,G of the actual (pink) and the desired Gaussian profile (black), respectively. The arrows indicate how a particular value s is transformed into the corresponding value s†,G of the Gaussian signal.

In Fig. 4.2, the nonlinear transformation of a non-Gaussian distribution to the desired Gaussian distribution is illustrated. The histogram distributions of a non-Gaussian and of the transformed stimulus are plotted in Fig. 4.2A. Due to the definition of the nonlinear transformation, the histogram of the transformed stimulus, s†,G, perfectly agrees with the desired normal distribution, N(s, µ, σ) (green line). In Fig. 4.2B, the cumulative probability distributions are shown and the transformation, eq. (4.9), is illustrated by arrows that indicate how a particular value of the non-Gaussian stimulus, s, is transformed into the corresponding value of the Gaussian stimulus, s†,G.
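The transformation of eqs. (4.9) and (4.10) can be sketched with a rank-based estimate of the cumulative distribution P and the standard-library inverse Gaussian CDF. This is a minimal sketch; the rank-based empirical CDF and the function name are implementation assumptions, not taken from the text:

```python
from statistics import NormalDist

def gaussianize(s, mu, sigma):
    """Static nonlinearity F of eq. (4.10): send each sample through the
    empirical CDF P and then through the inverse Gaussian CDF P_G^-1."""
    n = len(s)
    # rank of each sample gives the empirical CDF estimate (rank + 0.5)/n,
    # which stays strictly inside (0, 1)
    order = sorted(range(n), key=lambda i: s[i])
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r
    g = NormalDist(mu, sigma)
    return [g.inv_cdf((r + 0.5) / n) for r in ranks]
```

Because the map is monotone, the ordering of the samples is preserved while the amplitude histogram becomes Gaussian with the requested mean and standard deviation.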

The stimulus s†,G(t) that results from this transformation has the desired Gaussian statistics. However, we also have to ensure the constraint of a cutoff frequency at 100 Hz.

To this end, in a second step a lowpass filter is applied that sets all Fourier components belonging to frequencies larger than fc to zero:

\[
\tilde{s}^{\dagger,f_c}(f) = \tilde{s}^{\dagger,G}(f)\,\Theta(f_c - f). \tag{4.11}
\]

Here, Θ(f) is the Heaviside function. This operation again changes the amplitude statistics of the stimulus: in general it alters the histogram such that the Gaussian signal becomes a non-Gaussian signal again.

In order to obtain a stimulus, s†,G,fc(t), that fulfills all constraints at the same time, we iteratively apply eq. (4.9) and eq. (4.11). That is, we first give the stimulus the desired Gaussian profile, with the right mean and variance, then apply the lowpass filter to ensure the cutoff frequency, and repeat this procedure iteratively. To ensure that all Fourier components for frequencies larger than fc are truly equal to zero at the end of an iteration, we start each iteration with the transformation to Gaussian statistics and apply the frequency cutoff afterwards.
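A compact sketch of this iteration, combining a rank-based Gaussianization step with a hard FFT cutoff. A fixed iteration count stands in for a convergence test, and all names and defaults are illustrative:

```python
import numpy as np
from statistics import NormalDist

def shape_stimulus(s, mu, sigma, fc, dt, n_iter=20):
    """Alternate the Gaussianization step, eq. (4.9), and the hard
    frequency cutoff, eq. (4.11).  Starting each pass with the Gaussian
    step and filtering last leaves no power above fc in the result."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    g = NormalDist(mu, sigma)
    keep = np.fft.rfftfreq(n, d=dt) <= fc        # Theta(fc - f)
    for _ in range(n_iter):
        # rank-based empirical CDF -> inverse Gaussian CDF (eq. 4.9)
        ranks = np.argsort(np.argsort(s))
        s = np.array([g.inv_cdf((r + 0.5) / n) for r in ranks])
        # zero all Fourier components above the cutoff (eq. 4.11)
        S = np.fft.rfft(s)
        S[~keep] = 0.0
        s = np.fft.irfft(S, n=n)
    return s
```

After the loop the returned signal is exactly band-limited, while its histogram is only approximately Gaussian, which is why the convergence of the iteration has to be monitored.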

To quantify the convergence of the iterative process, we define a measure, ∆, to test if the statistics of the transformed stimulus are close enough to the desired Gaussian statistics:

\[
\Delta = \frac{\int_{-\infty}^{\infty} \left| P^{\dagger,f_c}(s) - P_G(s) \right| ds}{\int_{-\infty}^{\infty} \left| P_{G'}(s) - P_G(s) \right| ds}. \tag{4.12}
\]

Here, as a reference for two distinct cumulative probability distributions, PG(s) and PG′(s) are calculated from two Gaussian variables that have the same mean value, µ, but slightly differ in their standard deviations, σ and σ′ = 1.01σ. Therefore, the denominator of eq. (4.12) normalizes the difference between the actual cumulative probability distribution, P†,fc(s), and the prescribed one, PG(s) (numerator), by this reference value.

As termination criterion for the iteration, we use ∆ < 0.1. This means that the cumulative distribution of s†,G,fc(t) should differ from the desired Gaussian by less than 10% of what would be the deviation between two Gaussians that differ in their standard deviations by one percent. Typically, this criterion is fulfilled after 10-20 iterations, such that the procedure returns a stimulus, s†,G,fc(t), that approximately satisfies all constraints.
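The measure ∆ of eq. (4.12) can be evaluated by comparing the empirical cumulative distribution of the stimulus samples with the target Gaussian CDF on a finite grid. The grid range of ±6σ and the sample-based CDF estimate are implementation assumptions:

```python
import numpy as np
from statistics import NormalDist

def delta_measure(samples, mu, sigma):
    """Convergence measure of eq. (4.12): the L1 distance between the
    empirical CDF of `samples` and the target Gaussian CDF, normalized
    by the distance between Gaussians with sigma and sigma' = 1.01 sigma."""
    grid = np.linspace(mu - 6.0 * sigma, mu + 6.0 * sigma, 2001)
    ds = grid[1] - grid[0]
    x = np.sort(np.asarray(samples, dtype=float))
    ecdf = np.searchsorted(x, grid, side="right") / len(x)  # P^{dagger,fc}
    g, g_ref = NormalDist(mu, sigma), NormalDist(mu, 1.01 * sigma)
    pg = np.array([g.cdf(v) for v in grid])                 # P_G
    pg_ref = np.array([g_ref.cdf(v) for v in grid])         # P_G'
    num = np.sum(np.abs(ecdf - pg)) * ds
    den = np.sum(np.abs(pg_ref - pg)) * ds
    return num / den
```

Samples that closely follow the target Gaussian give ∆ well below the 0.1 threshold, while clearly non-Gaussian samples give ∆ far above it.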

To complete the procedure, we still need to decide how µ and σ2 in eq. (4.9) are chosen. The stimulus variance σ2 = σ02 was fixed at the reference-value variance, an experimentally motivated constraint that is supposed to ensure the integrity of the stimulated neuron by restricting the total power that is injected into the cell. Because the mean value of the input strongly affects the firing rate, we let it depend on the prescribed firing rate, r, via the inverse firing-rate-vs-input-current relation for the white-noise-driven neuron,