
In the previous Chapter (Sec. 2.1.5, p. 18), we have introduced the brain's principle of organizing individual memories into a web of knowledge, a so-called schema (Fig. 2.4, p. 21), which structures knowledge and allows for complex behavior. For modeling the dynamics of a schema, in Sec. 3.1, we introduce the used recurrent neuronal network and neuron model (Fig. 3.1 A). Using this recurrent neuronal network model, in Sec. 3.2, we describe the dynamics of $n_{pop} \in \mathbb{N}$ individual and interconnected neuronal populations (Fig. 3.1 B). Here, we make three major assumptions for the population model: (i) all neuronal populations contain the same number of neurons, (ii) all neurons of a neuronal population receive input from the same subset of neurons of the input layer, (iii) each population-specific subset of the input layer contains as many neurons as the populations of the neuronal network. Two such neuronal populations ($n_{pop} = 2$) construct the smallest motif within the structure of a schema, being its primary building block. Hence, using this generic neuronal population model, we restrict our numerical simulations and analysis to two anatomically interconnected memories ($n_{pop} = 2$). Hereby, our main objective is to classify all different forms of so-called functional organizations (FOs) of two such connected memories. We define these different forms of FOs based on the effect of a recalled memory to either excite or inhibit the interconnected memory, in accordance with the definitions proposed by Byrne and Huyck (2010):

Figure 3.1: Procedure to analytically investigate the input-dependent functional organization of interconnected memories. (A) The used recurrent neuronal network (blue section) consists of recurrently connected neurons (blue circles). Here, the individual neurons of the network are interconnected via plastic excitatory synapses (straight blue lines) and constant inhibitory synapses (not shown). Each neuron of the network receives neuron-specific external inputs from the input layer (red section) via constant excitatory synapses (red lines; example for the input stimulation of one neuron). (B) Description of $n_{pop} = 2$ individual neuronal populations nested in the recurrent neuronal network. All remaining neurons that are not part of a single neuronal population ($p_1$: black; $p_2$: yellow) of the network are combined into a background population ($p_B$: blue). These background neurons serve as control neurons. All neurons belonging to the same neuronal population receive a population-specific input stimulation.

(C) Analytical description of the neuronal populations at equilibrium by means of fixed point analysis. Here, the input stimulation onto each neuronal population originating from the background neurons ($p_B$) is combined with the input layer.

Definition 1 For two interconnected memories, we define their FO as an

• association (asc), if both memories excite each other,

• sequence (seq), if one memory excites and the other memory inhibits its interconnected memory, and

• discrimination (disc), if both memories inhibit each other due to an appropriate recall stimulus.
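The three cases of Definition 1 amount to a simple decision rule over the signs of the mutual effects. As a minimal illustration (our own sketch; the function and its arguments are hypothetical and not part of the thesis model), the classification can be written as:

```python
from enum import Enum

class FO(Enum):
    """Functional organizations of two interconnected memories."""
    ASSOCIATION = "asc"
    SEQUENCE = "seq"
    DISCRIMINATION = "disc"

def classify_fo(effect_1_on_2: float, effect_2_on_1: float) -> FO:
    """Classify the FO from the signed effect each recalled memory
    exerts on its partner (> 0: excitation, < 0: inhibition).
    Exactly zero effects are not covered by Definition 1; they fall
    into the sequence branch here."""
    if effect_1_on_2 > 0 and effect_2_on_1 > 0:
        return FO.ASSOCIATION
    if effect_1_on_2 < 0 and effect_2_on_1 < 0:
        return FO.DISCRIMINATION
    return FO.SEQUENCE
```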

For an evaluation of the dynamic FOs of memories under activity-dependent synaptic plasticity, we continue by explaining how single neuronal populations decode specific environmental input stimuli. The corresponding numerical analysis of the FOs of memories is computationally expensive and, furthermore, a corresponding synaptic-weight-dependent analysis is not feasible. Therefore, by means of fixed point analysis, in Sec. 3.3 we provide an abstract low-dimensional population model at equilibrium state.

With this reduced population model, the learning outcome (i.e., encoded by the synaptic weights) for a given input stimulation can be calculated analytically. Thus, this approach enables an analytic investigation of the FO of two interconnected memories on the respective synaptic weight space ($n_{pop} = 2$, Fig. 3.1 C). Therefore, in the Sections from Sec. 3.1 to Sec. 3.3, we guide the reader step-by-step from the initial high-dimensional neuronal network description, considering individual neuronal activity and synaptic weight dynamics, to the abstract low-dimensional description of interacting memories in a population model at equilibrium. Readers who are rather interested in the results on the underlying synaptic plasticity mechanisms can quickly skim the Sections from Sec. 3.1 to Sec. 3.3 and proceed in more detail with Sec. 3.4. For an evaluation of the interaction of two anatomically interconnected neuronal populations, in Sec. 3.4, we provide definitions of the memory representation (MR) and the functional organization (FO) dependent on the coupling strength (i.e., synaptic weights) of the individual populations. Finally, we validate our derived method for the analysis of interconnected neuronal populations at equilibrium. For this, we compare the analytic results with those originating from the full network simulations for an exemplary synaptic plasticity rule.

The presented method and the corresponding conclusions in this Chapter have been published in the following article:

J. Herpich and C. Tetzlaff (2018). “Principles Underlying the Input-Dependent Formation and Organization of Memories”. In: bioRxiv. A similar manuscript is currently under revision in Network Neuroscience.

3.1 Recurrent Neuronal Network Model

We consider a recurrent neuronal network model consisting of a set $\mathcal{N}$ of $n$ rate-coded point neurons ($\mathcal{N} := \mathbb{N}_n = \{1, \ldots, n\}$, Fig. 3.1 A, blue dots). All neurons of the network layer $\mathcal{N}$ are excitatorily and inhibitorily connected with global (i.e., network-wide) connection probabilities $p^+_{con}$ and $p^-_{con}$, respectively. Note that in our analysis we assume $p^+_{con} = p^-_{con} =: p_{con}$. By this, the state of the network is defined by:

• the activities $F_i \in \mathbf{F} := M(n \times 1\,|\,\mathbb{R}^+)$ of all neurons $i$ of the network ($i \in \mathcal{N}$),

• the excitatory synaptic weights $\omega_{i,j} \in W := M(n \times n\,|\,\mathbb{R}^+)$ connecting the presynaptic neuron $j$ with the postsynaptic neuron $i$, as well as,

• all inhibitory synaptic weights $\omega^-_{i,j} \in W^- := M(n \times n\,|\,\mathbb{R}^+)$.

Note that the excitatory connections are plastic while the inhibitory connections are constant and globally set to $\omega^-_{i,j} := \theta \in \mathbb{R}^+$ throughout this thesis. Only in Sec. 4.4.2 do we introduce inhibitory synaptic plasticity.

3.1.1 Environmental Input Stimulation

In addition to the network input, each neuron $i \in \mathcal{N}$ receives input from $n^{ex}_i \in \mathbb{N}$ different neurons (Fig. 3.1 A, red dots) of a neuron-$i$-specific subset $\mathcal{E}_i$ of the external input layer $\mathcal{E}$ (Fig. 3.1 A, red section; red lines indicate the neuron-$i$-specific input for one particular neuron $i$). The activities $F^{ex}_k$ of all input neurons $k \in \mathcal{E}_i$ that are interconnected to neuron $i$ of the recurrent network are summarized by $\mathbf{F}^{ex}_i := M(n^{ex}_i \times 1\,|\,\mathbb{R}^+)$. Hence, the whole input stimulation of the neuronal network layer with $n$ neurons is given by $\mathbf{F}^{ex} := (\mathbf{F}^{ex}_1, \ldots, \mathbf{F}^{ex}_n) = M(n^{ex}_i \times n\,|\,\mathbb{R}^+)$. The specific input activities $F^{ex}_k \in \mathbf{F}^{ex}$ of these input neurons $k$ are transmitted via excitatory synapses to the postsynaptic neuron $i$ with global constant strength $\omega^{ex} := \omega_{i,k}$, $\omega_{i,k} \in W^{ex}_i := M(1 \times n^{ex}_i\,|\,\mathbb{R}^+)$.

3.1.2 Neuron Model

Each rate-coded point neuron $i \in \mathcal{N}$ of the network integrates all incoming neuronal activities $F_j \in \mathbf{F}$ via the interconnected excitatory ($\omega_{i,j} \in W$; Fig. 3.1 A, blue connections) and inhibitory ($\omega^-_{i,j} \in W^-$; not shown) synapses, as well as all incoming neuronal activities $F^{ex}_k \in \mathbf{F}^{ex}_i$ via the interconnected neuron-$i$-specific external synapses ($\omega_{i,k} \in W^{ex}_i$; Fig. 3.1 A, red lines) to its neuron-specific input current ($\phi_i$):

$$\phi_i(\mathbf{F}^{ex}_i, \mathbf{F}, W, W^-, W^{ex}_i) = \sum_{j \in N^+_i} \omega_{i,j} F_j - \sum_{j \in N^-_i} \omega^-_{i,j} F_j + \sum_{k \in \mathcal{E}_i} \omega_{i,k} F^{ex}_k. \quad (3.1)$$

Here, $N^+_i, N^-_i \subseteq \mathcal{N}$ are the sets of indices for the excitatory and inhibitory presynaptic neurons $j$ connected to the postsynaptic neuron $i$, and $\mathcal{E}_i$ is the neuron-$i$-specific input layer defined above. Although Dale's principle states that any neuron can have either excitatory or inhibitory connections, but not both at the same time, we merge both distinct neuronal sets for the excitatory and inhibitory neurons into one neuronal population generalized by $\mathcal{N}$. This approach does not violate Dale's principle; it is just a technical simplification of the used model. Hereby, all excitatory neurons are interconnected to the inhibitory population, which, in turn, inhibits the excitatory population. Due to the constant inhibitory synaptic weights ($\theta$) and constant external excitatory synaptic weights ($\omega^{ex}$), the neuron-specific input current simplifies to:

$$\phi_i(\mathbf{F}^{ex}_i, \mathbf{F}, W) = \sum_{j \in N^+_i} \omega_{i,j} F_j - \theta \sum_{j \in N^-_i} F_j + \omega^{ex} \sum_{k \in \mathcal{E}_i} F^{ex}_k. \quad (3.2)$$

This neuron-$i$-specific current drives the leaky membrane potential ($u_i$) of the respective neuron and is described by the following ordinary differential equation (ODE):

$$\tau \dot{u}_i = -u_i + R\,\phi_i(\mathbf{F}^{ex}_i, \mathbf{F}, W), \quad \tau, R \in \mathbb{R}^+, \; [u] = 1\,\mathrm{mV}, \quad (3.3)$$

with $\tau$ ($[\tau] = 1\,\mathrm{s}$) being the time constant of the membrane potential, set to $\tau = 1\,\mathrm{s}$, and $R = 0.1\,\mathrm{n\Omega}$ ($[R] = 1\,\mathrm{n\Omega}$) being the membrane resistance. The neuron-specific membrane potential $u_i$ is non-linearly mapped to a neural firing rate ($F_i$) by a sigmoidal transfer function:

$$F_i = \frac{F_{max}}{1 + \exp(\beta\,[\epsilon - u_i(\mathbf{F}^{ex}_i, \mathbf{F}, W)])}, \quad F_{max}, \beta, \epsilon \in \mathbb{R}^+, \; [F] = 1\,\mathrm{Hz}, \quad (3.4)$$

with global parameters of the neuronal network being the maximal firing rate $F_{max} = 100\,\mathrm{Hz}$, the steepness $\beta$ ($[\beta] = 1\,\mathrm{mV}^{-1}$) and the inflexion point $\epsilon$ ($[\epsilon] = 1\,\mathrm{mV}$) of the sigmoid. Thus, the firing rate of each neuron $i$ varies between $0\,\mathrm{Hz}$ and $F_{max}$ ($\mathbf{F} = M(n \times 1\,|\,[0, F_{max}])$).

This sigmoidal transfer function approximates the characteristics of neuronal activity: the neuronal activity function only yields positive values (Gerstner et al. 2014) and converges to a maximal firing rate $F_{max}$, evoked by the refractory period for signal transmission in the form of action potentials (APs). By combining Eq. (3.3) and Eq. (3.4), we simplify the description of the neuronal dynamics to one ODE for the neuronal firing rate:

$$\tau \dot{F}_i = f_F(\mathbf{F}^{ex}_i, \mathbf{F}, W) := \frac{\beta}{F_{max}}\,(F_{max} - F_i)\,F_i\,\big[R\,\phi_i(\mathbf{F}^{ex}_i, \mathbf{F}, W) - u_i(F_i)\big], \quad (3.5)$$

with $u_i(F_i) = \epsilon - \frac{1}{\beta}\ln\!\big(\frac{F_{max}}{F_i} - 1\big)$ being the inverse of the transfer function in Eq. (3.4). As the dynamics of the neuronal firing rate ($F_i$) of all neurons $i$ of the network underlie the same global rule (Eq. 3.5), we summarize the whole activity dynamics of the neuronal network to:

$$\tau \dot{\mathbf{F}} = f_{\mathbf{F}^n}(\mathbf{F}^{ex}, \mathbf{F}, W) := \big(f_F(\mathbf{F}^{ex}_1, \mathbf{F}, W), \ldots, f_F(\mathbf{F}^{ex}_n, \mathbf{F}, W)\big)^T. \quad (3.6)$$
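To make Eqs. (3.2)-(3.5) concrete, the following NumPy sketch (our own minimal implementation, not the thesis code; the sigmoid parameters and the 0/1 adjacency matrices are illustrative assumptions) computes the input currents of all neurons at once and advances the membrane potentials by one Euler step before reading out the rates:

```python
import numpy as np

def input_currents(F, W, C_inh, theta, F_ex, C_ex, w_ex):
    """Input currents phi_i of Eq. (3.2): plastic recurrent excitation
    (W), constant inhibition of strength theta over the 0/1 inhibitory
    adjacency C_inh, and constant external drive of strength w_ex over
    the 0/1 input adjacency C_ex."""
    return W @ F - theta * (C_inh @ F) + w_ex * (C_ex @ F_ex)

def sigmoid_rate(u, F_max=100.0, beta=1.0, eps=0.0):
    """Sigmoidal transfer function of Eq. (3.4); beta and eps here are
    placeholders (see Tab. 3.1 for the values actually used)."""
    return F_max / (1.0 + np.exp(beta * (eps - u)))

def euler_step_activity(u, F, W, C_inh, theta, F_ex, C_ex, w_ex,
                        tau=1.0, R=0.1, dt=0.01):
    """One Euler step of the membrane ODE (Eq. 3.3) followed by the
    rate read-out (Eq. 3.4) - equivalent to integrating Eq. (3.5)."""
    phi = input_currents(F, W, C_inh, theta, F_ex, C_ex, w_ex)
    u = u + dt / tau * (-u + R * phi)
    return u, sigmoid_rate(u)
```

Integrating the membrane potential and reading out the rate is numerically equivalent to integrating the combined rate ODE (3.5) directly; we choose the former here because it avoids inverting the sigmoid.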

3.1.3 Excitatory Synaptic Plasticity

In contrast to all inhibitory synaptic weights ($\omega^-_{i,j} \in W^-$), all excitatory synaptic weights ($\omega_{i,j} \in W$) in the recurrent neuronal network, connecting the presynaptic neuron $j$ with the postsynaptic neuron $i$, undergo synaptic plasticity. There are multiple plasticity mechanisms and modeling approaches describing learning, introduced in Sec. 2.2.2. We will focus on their application in the next Chapter (Chapter 4) of this thesis. So far, we only specify that we consider activity-dependent learning rules for modeling synaptic plasticity that depend only on local state variables, i.e., the pre- and postsynaptic activities ($F_j$, $F_i$) and the synaptic weight ($\omega_{i,j}$) itself. Thus, we express the plastic excitatory synaptic weight by the following initial value problem:

$$f_\omega(F_j, F_i, \omega_{i,j}) := \tau_\omega \dot{\omega}_{i,j} \quad \text{with} \quad \omega_{i,j}(t=0) \in \mathbb{R}^+, \; [\omega] = 1\,\mathrm{pC}, \quad (3.7)$$

with $\tau_\omega$ being the time constant of the synaptic weight dynamics. Again, as the dynamics of the excitatory synaptic weights ($\omega_{i,j}$) of all synapses of the network underlie the same global rule (Eq. 3.7), we summarize the whole excitatory synaptic weight dynamics of the neuronal network to:

$$f_{W^n}(\mathbf{F}, W) := \begin{pmatrix} f_\omega(F_1, F_1, \omega_{1,1}) & \cdots & f_\omega(F_n, F_1, \omega_{1,n}) \\ \vdots & \ddots & \vdots \\ f_\omega(F_1, F_n, \omega_{n,1}) & \cdots & f_\omega(F_n, F_n, \omega_{n,n}) \end{pmatrix}. \quad (3.8)$$
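Eq. (3.8) applies the same local rule $f_\omega$ to every synapse, which vectorizes naturally. As a stand-in for the concrete learning rules introduced in Chapter 4, the sketch below uses a plain Hebbian rule $f_\omega(F_j, F_i, \omega_{i,j}) = F_i F_j$ (our placeholder, not one of the thesis's rules):

```python
import numpy as np

def hebb_rule(F_pre, F_post, W):
    """Placeholder local rule: entry (i, j) of the returned matrix is
    f_omega(F_j, F_i, omega_ij) = F_i * F_j, mirroring the layout of
    Eq. (3.8). The thesis's actual rules follow in Chapter 4."""
    return np.outer(F_post, F_pre)

def euler_step_weights(W, F, f_omega=hebb_rule, tau_w=60.0, dt=0.01):
    """One Euler step of Eq. (3.7) for all n^2 excitatory synapses."""
    return W + dt / tau_w * f_omega(F, F, W)
```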

3.2 Interaction of Memories in a Neuronal Population Model

In the following Section, we will derive a model of interacting neuronal populations (Fig. 3.1 B; see Fig. 3.2 A for a more detailed description) nested in the recurrent neuronal network introduced in the previous Section (Sec. 3.1). For this, we transfer the high-dimensional recurrent network model, defined by its single neuronal activity and synaptic weight dynamics, to a low-dimensional neuronal population model described by the dynamics of the average neuronal population activity, as well as the intra- and inter-population synaptic weights.

To describe the dynamics of $n_{pop} \in \mathbb{N}$ interacting neuronal populations within the recurrent network $\mathcal{N}$, we will consider a set $P$ of $n_{pop}$ distinct subsets of neurons $P := \{\mathcal{P}_{p_1}, \ldots, \mathcal{P}_{p_{n_{pop}}}\}$. Note that we refer to the population $r$ as a whole by $p_r$, whilst we refer to all neurons of the population $r$ by $\mathcal{P}_{p_r} \subset \mathcal{N}$.

The specific subset of neurons $\mathcal{P}_{p_r} \subset \mathcal{N}$ belonging to the respective neuronal population $r$ is selected by the population-$r$-specific subset of the external input layer $\mathcal{E}_r \subset \mathcal{E}$. Here, all neurons $i, j$ of the specific neuronal population $r$ receive input from the same input neurons $k$ of the input layer $\mathcal{E}$. Thus, it holds that $\mathcal{E}_i = \mathcal{E}_j =: \mathcal{E}_r$ for the population-$r$-specific input set. Consequently, each neuron $k \in \mathcal{E}_r$ of the population-$r$-specific input layer has an excitatory connection towards all neurons $i \in \mathcal{P}_{p_r}$ of the neuronal population $r$ with constant synaptic weight strength $\omega^{ex}$.

Figure 3.2: The interaction of two neuronal populations in a recurrent neuronal network. (A) In a recurrent network $\mathcal{N}$ (blue section), two neuronal populations ($\mathcal{P}_{p_1}$: black; $\mathcal{P}_{p_2}$: yellow) receive population-specific external inputs of average amplitudes $\langle F^{ex}_{p_1}\rangle_t$ and $\langle F^{ex}_{p_2}\rangle_t$, respectively. All remaining neurons that are not part of a specific neuronal population of the network are combined into a background population ($\mathcal{P}_{p_B}$: blue). These background neurons serve as control neurons receiving noisy external inputs $\langle F^{ex}_{p_B}\rangle_t$. Each population $r \in \{1, 2, B\}$ is described by its mean activity $F_{p_r}$ and its mean intra-population synaptic weight $\omega_{p_r p_r}$, which changes over time. Furthermore, the mean inter-population synaptic weights connecting two populations $r, s$ are given by $\omega_{p_r p_s}$ ($s \in \{1, 2, B\} \setminus r$). (B) Numerical simulation of the complete $n + n^2$-dimensional network dynamics (gray lines, Sec. 3.1) for three interconnected neuronal populations $1$, $2$ and $B$ in a normalized network model (see Appendix) for full connectivity ($p_{con} = 1$) and a population-specific input of $\tilde{\mathbf{F}}^{ex} = (0.9, 0.75, 0.25)^T$ (see Sec. 3.2.1 for details on the input stimulation). Colored lines indicate the mean activity and synaptic weight dynamics of the different neuronal populations $1$ (black), $2$ (yellow), $B$ (blue) (Sec. 3.2.2). The long-term representation (LTR) of the applied population-specific input stimulation is evaluated for the second half of stimulation time $t \in [55, 100]\,\tilde{\tau}_\omega$ (indicated by green bars, Sec. 3.3.1). For the applied learning rule see Sec. 4.2.2 (SPaSS learning rule) and for the input paradigm see Sec. 3.2.1. (C) Same input paradigm as in (B) for different network connectivities $p_{con} \in [0, 1]$. Solid line: LTR of the neuronal populations' activities; colored area: variance of the LTR of single neuronal activities of the respective neuronal population. Used parameters for (B) and (C): see Tab. 3.1 (adapted from Herpich and Tetzlaff 2018).

In our model of $n_{pop}$ different interacting neuronal populations $r \in \{1, \ldots, n_{pop}\}$, we make two assumptions to simplify the network structure and connectivity.

Assumption 1 All neuronal populations consist of the same number of neurons $n_P \in \mathbb{N}$, and the neuronal sets of any two neuronal populations $r$ and $s$ do not overlap ($\mathcal{P}_{p_r} \cap \mathcal{P}_{p_s} = \emptyset$).

Assumption 2 We set the number $n^{ex}_i$ of afferent input neurons projecting to a neuronal population $r$ equal to the size of the neuronal population, $n^{ex}_i = n_P =: n^{ex}_P$, to consider the same order of magnitude for the input population as for the neuronal populations within the network.

The remaining neurons of the network, which are not part of any neuronal population $p_r \in P$, are unified into the neuronal set $\mathcal{P}_{p_B} := \mathcal{N} \setminus P$ and defined as background neurons.

We refer to this neuronal population by $p_B$ with size $n_B := |\mathcal{P}_{p_B}| = n - n_{pop}\,n_P$. With this approach, we reduce the high-dimensional neuronal network model to the interaction of $n_{pop} + 1$ different neuronal populations nested in a recurrent neuronal network (see Fig. 3.2 A for $n_{pop} = 2$ interconnected neuronal populations).
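The bookkeeping of Assumptions 1 and 2 — disjoint populations of equal size plus a background set — can be sketched as follows (an illustration with hypothetical names, using the sizes of Tab. 3.1):

```python
import numpy as np

def build_populations(n=100, n_pop=2, n_P=10, rng=None):
    """Partition the n network neurons into n_pop disjoint populations
    of n_P neurons each (Assumption 1); all remaining neurons form the
    background population P_pB of size n_B = n - n_pop * n_P."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(n)
    pops = [perm[r * n_P:(r + 1) * n_P] for r in range(n_pop)]
    pops.append(perm[n_pop * n_P:])  # background neurons P_pB
    return pops  # [P_p1, ..., P_pnpop, P_pB]
```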

3.2.1 Environmental Stimulation of the Neuronal Populations

Each neuronal simulation starts with a tuning phase (Fig. 3.2 Bi), where all neurons of the network receive noisy input $F^{ex}_k \sim \mathcal{N}(0.25, 0.02)$ from $n^{ex}_P$ different neurons $k$ of the input layer $\mathcal{E}$. Along with the onset of the environmental input stimulation ($t = 10\,\tau_\omega$), all neurons $i \in \mathcal{P}_{p_r}$ belonging to a particular neuronal population $r \in \{1, \ldots, n_{pop}\}$ receive their neuronal population-specific input stimulation:

Assumption 3 As the input stimulation of a neuronal population $r$ represents a specific environmental piece of information, we assume that the firing rates of all population-$r$-specific input neurons $k \in \mathcal{E}_r$ fluctuate around a mean value over time, described by $\langle F^{ex}_{p_r}\rangle_t$.

With this mean population-$r$-specific input paradigm $\langle F^{ex}_{p_r}\rangle_t$ over time, the activities of all neurons $k$ of the population-$r$-specific external input layer $\mathcal{E}_r$ are each modeled by an Ornstein-Uhlenbeck process:

$$\dot{F}^{ex}_k = \underbrace{\delta\,\big(\langle F^{ex}_{p_r}\rangle_t - F^{ex}_k\big)}_{\text{drift}} + \underbrace{\sigma\,\zeta}_{\text{diffusion}}, \quad \zeta \sim \mathcal{N}(0, 1) \quad \text{with} \quad F^{ex}_k(t=0) = \langle F^{ex}_{p_r}\rangle_t, \quad (3.9)$$

with the drift term scaled by $\delta = 0.025$ and a normally distributed diffusion term with constant $\sigma = 0.0125$. All neurons $k$ of the external input layer that are connected to the background neurons $i \in \mathcal{P}_{p_B}$ continue to fire at the noise level $F^{ex}_k \sim \mathcal{N}(0.25, 0.02)$, similar to the tuning phase.
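Eq. (3.9) is a standard Ornstein-Uhlenbeck process. A minimal integration might look as follows (our sketch; note that reading the noise term of Eq. (3.9) as a proper stochastic differential introduces the $\sqrt{\Delta t}$ scaling below, which is the Euler-Maruyama convention and an assumption on our part):

```python
import numpy as np

def simulate_ou_input(F_mean, n_steps, dt=0.01, delta=0.025,
                      sigma=0.0125, rng=None):
    """Euler-Maruyama integration of Eq. (3.9): the input rate F_ex_k
    drifts back towards the population-specific mean <F_ex_pr>_t and
    diffuses with amplitude sigma (normalized units)."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.empty(n_steps)
    F[0] = F_mean  # initial condition of Eq. (3.9)
    for t in range(1, n_steps):
        drift = delta * (F_mean - F[t - 1])
        diffusion = sigma * rng.standard_normal()
        F[t] = F[t - 1] + dt * drift + np.sqrt(dt) * diffusion
    return F
```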

Table 3.1: Used parameters for the numerical simulations in Fig. 3.2 - Fig. 3.4 and Fig. 3.6.

| Network model |  | Neuron model |  | Synaptic plasticity & Other |  |
|---|---|---|---|---|---|
| parameter | value | parameter | value | parameter | value |
| $n$ | 100 neurons | $\tau$ | 1 s | $\tau_\omega$ | 60 s |
| $n_P$ | 10 neurons | $R$ | 0.1 nΩ | $F_T$ | 5 Hz |
| $n_B$ | $n - 2 n_P$ | $F_{max}$ | 100 Hz | $\omega_{max}$ | 97.33 pC |
| $n^{ex}_P$ | $n_P$ | $n_\epsilon$ | 20 neurons | $\delta$ | 2.5 Hz |
| $\tilde{\omega}^{ex}$ | 1 [$\omega_{max}$] | $\beta$ | 0.00035 mV$^{-1}$ | $\sigma$ | 1.25 Hz |
| $\tilde{\theta}$ | 0.5 [$\omega_{max}$] |  |  | $\Delta t$ | 0.01 s |

Thus, given $n_{pop} + 1$ different neuronal populations $P = \{\mathcal{P}_{p_1}, \ldots, \mathcal{P}_{p_{n_{pop}}}, \mathcal{P}_{p_B}\}$, the parameter space of the input is defined by the different mean neuronal population-$r$-specific inputs over time:

$$\mathbf{F}^{ex} := \big(\langle F^{ex}_{p_1}\rangle_t, \ldots, \langle F^{ex}_{p_{n_{pop}}}\rangle_t, \langle F^{ex}_{p_B}\rangle_t\big)^T. \quad (3.10)$$

One exemplary simulation for a fully connected network ($p^+_{con} = p^-_{con} = p_{con} = 1$) with $2 + 1$ interacting neuronal populations $P = \{\mathcal{P}_{p_1}, \mathcal{P}_{p_2}, \mathcal{P}_{p_B}\}$ is shown in Fig. 3.2 B (gray curves). For the simulation, we used the normalized neuronal network model (see Appendix). Here, we drive the neuronal network with an input stimulation (Fig. 3.2 Bi) specified by the population-specific input paradigm

$$\tilde{\mathbf{F}}^{ex} = \big(\langle \tilde{F}^{ex}_{p_1}\rangle_t, \langle \tilde{F}^{ex}_{p_2}\rangle_t, \langle \tilde{F}^{ex}_{p_B}\rangle_t\big)^T = (0.9, 0.75, 0.25)^T.$$

For the simulation, we numerically solve the ordinary differential equations (ODEs) of the normalized model for all individual neuronal activity (Fig. 3.2 Bii) and synaptic weight dynamics (Fig. 3.2 Biii,iv) with the Euler method ($\Delta t = 0.01\,\mathrm{s}$). Note that, for reasons of clarity in the upcoming Sections, we only plotted the intra- (Fig. 3.2 Biii) and inter-population synaptic weights (Fig. 3.2 Biv) of both neuronal populations 1 and 2 and excluded the intra- and inter-population synaptic weights related to the background population (gray lines indicate the single neuronal and synaptic dynamics). All used parameters for the numerical simulation are collected in Tab. 3.1.
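Schematically, this simulation protocol reduces to one Euler loop over the coupled activity and weight dynamics. The sketch below reuses the helper functions from the earlier snippets (hence no new imports) and is meant to convey the structure, not to reproduce the thesis's normalized model; `input_fn` is a hypothetical callback supplying the external rates per step:

```python
def simulate_network(u, F, W, C_inh, C_ex, theta, w_ex, input_fn,
                     n_steps, dt=0.01):
    """Jointly integrate the activity (Eq. 3.5) and weight (Eq. 3.7)
    dynamics with the Euler method; input_fn(step) must return the
    external input rates F_ex for the given time step (tuning noise
    first, then the OU inputs of Eq. 3.9)."""
    for step in range(n_steps):
        F_ex = input_fn(step)
        u, F = euler_step_activity(u, F, W, C_inh, theta,
                                   F_ex, C_ex, w_ex, dt=dt)
        W = euler_step_weights(W, F, dt=dt)
    return F, W
```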

3.2.2 Dynamics of the Neuronal Populations

To describe the time evolution of the different neuronal populations $r$ and $s$ within the recurrent neuronal network for a specific input stimulation $\mathbf{F}^{ex}$, we calculate the time-dependent mean state variables of this system of interconnected neuronal populations.

Note, the mean state variables are taken over all neuronal activity and synaptic weight variables of one population, not over time:

• the mean neuronal population activities $\mathbf{F}_P(t)$, given by:

$$F_{p_r}(t) := \frac{1}{|\mathcal{P}_{p_r}|} \sum_{i \in \mathcal{P}_{p_r}} F_i(t),$$

• the mean intra-population synaptic weights $W_{intra}(t)$, given by:

$$\omega_{p_r p_r}(t) := \frac{1}{|\mathcal{P}_{p_r}|^2} \sum_{i, j \in \mathcal{P}_{p_r}} \omega_{i,j}(t),$$

• the mean inter-population synaptic weights $W_{inter}(t)$, given by:

$$\omega_{p_r p_s}(t) := \frac{1}{|\mathcal{P}_{p_r}|\,|\mathcal{P}_{p_s}|} \sum_{i \in \mathcal{P}_{p_r}} \sum_{j \in \mathcal{P}_{p_s}} \omega_{i,j}(t).$$

These mean state variables correspond to the colored curves ($\mathcal{P}_{p_1}$: black, $\mathcal{P}_{p_2}$: yellow, $\mathcal{P}_{p_B}$: blue) in the example shown in Fig. 3.2 B.
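Given the index sets of the populations, these averages are direct reductions of the full network state; a small sketch (ours, matching the definitions above):

```python
import numpy as np

def population_means(F, W, pops):
    """Mean population activities and mean intra-/inter-population
    weights from the full network state: F has shape (n,), W has
    shape (n, n), and pops holds the index arrays [P_p1, ..., P_pB].
    Entry W_P[r, s] is the mean weight from population s onto r."""
    F_P = np.array([F[P].mean() for P in pops])
    W_P = np.array([[W[np.ix_(Pr, Ps)].mean() for Ps in pops]
                    for Pr in pops])
    return F_P, W_P
```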

3.2.3 Dimensionality of the Neuronal Population Model

With this description of the mean state variables of $n_{pop} + 1$ interacting neuronal populations, we reduce the state space of the neuronal network model from $(n + n^2)$ dimensions, with $n$ dimensions for the neuronal activity space $\mathbf{F}$ and $n^2$ dimensions for the neuronal excitatory synaptic weight space $W$, to $\big((n_{pop} + 1) + (n_{pop} + 1)^2\big)$ dimensions for the state space of the mean neuronal population model. Here, the mean neuronal population activity space $\mathbf{F}_P$, as well as the mean intra-population excitatory synaptic weight space $W_{intra}$, each consists of $n_{pop} + 1$ dimensions. The mean inter-population excitatory synaptic weight space $W_{inter}$ consists of $(n^2_{pop} + n_{pop})$ dimensions. For the exemplary network of Tab. 3.1 ($n = 100$, $n_{pop} = 2$), this reduces the state space from $100 + 100^2 = 10100$ to $3 + 3^2 = 12$ dimensions. These mean state variables for the neuronal populations' activities and synaptic weights are depicted in the specific example (Fig. 3.2 B) for the interaction of $2 + 1$ different neuronal populations by the colored curves (black: neuronal population $1$, yellow: neuronal population $2$, blue: background neurons $B$).

3.2.4 Long-term Representation of Stimulation

As the neuronal populations are designed to represent specific environmental stimuli, we consider the long-term representation (LTR) of the stimulus in the population-specific state variables to analyze how a specific input paradigm is encoded within the neuronal network. The LTR is determined within a time window $t_w$ of the input stimulation in which the system's state variables have reached stable representations. Here, all neuronal and synaptic state variables fluctuate around mean values. These mean values determine the LTR of the specific input stimulation. In our neuronal network simulations (exemplary simulation shown in Fig. 3.2 B), we extract the LTR as the system's mean state variables over the specific time window $t_w = [55, 100]\,\tilde{\tau}_\omega$, i.e., the second half of the stimulation time (indicated by green bars in Fig. 3.2 B). We denote the LTR of the system's state variables in the population model formalism by $\langle F_{p_r}\rangle_{LTR}$ for the neuronal population's mean activity and $\langle \omega_{p_r, p_s}\rangle_{LTR}$ for the mean excitatory synaptic weights connecting neuronal population $s$ towards neuronal population $r$.
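Extracting the LTR from a recorded simulation is then just a time average over the late, stable part of the trace; a minimal sketch (ours, assuming the trace is sampled on a time grid `t`):

```python
import numpy as np

def long_term_representation(trace, t, t_lo, t_hi):
    """LTR of a recorded state variable: its mean over the time window
    [t_lo, t_hi] (e.g. [55, 100] * tau_w) in which the dynamics
    fluctuate around a stable value. The window is applied along the
    last axis, so activity vectors and weight matrices recorded over
    time can be passed alike."""
    t = np.asarray(t)
    mask = (t >= t_lo) & (t <= t_hi)
    return np.asarray(trace)[..., mask].mean(axis=-1)
```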

3.3 Interaction of Memories in a Population Model at Equilibrium

In Fig. 3.2 B, we have shown by example how a specific input stimulation is mentally encoded by the synaptic weights in the population model formalism. These synaptic weights, at the stable state, fluctuate around mean values, which are defined as the long-term representation (LTR) of the respective input stimulation. In this Section, by means of fixed point analysis, we derive a method to analytically determine the state variables of the LTR for a given input stimulation (Fig. 3.1 C) without simulating the whole neuronal activity and synaptic weight dynamics.

The population model at equilibrium derived here holds for specific neuronal systems with the following inherent condition: when the environmental input stimulation $\mathbf{F}^{ex}$ is constantly applied onto the network over time, specific learning rules $f_\omega$ lead the system into an equilibrium state $\mathcal{E}(\mathbf{F}^{ex}, f_\omega)$, indicating the LTR of the respective input stimulation.

Condition 1 Given a specific external input stimulation $\mathbf{F}^{ex}$ and a specific excitatory synaptic learning rule $f_\omega$ driving the dynamics of a neuronal population model, the system evolves into an equilibrium state $\mathcal{E}(\mathbf{F}^{ex}, f_\omega)$ if and only if $f_{\mathbf{F}^n}(\mathbf{F}^{ex}, \mathbf{F}^*, W^*) = 0$ and $f_{W^n}(\mathbf{F}^*, W^*) = 0$, the latter being the zero matrix. Hereby, $\mathbf{F}^* := M(n \times 1\,|\,[0, F_{max}])$ pools the fixed activities $F^*_i$ of all neurons $i \in \mathcal{N}$ of the network and $W^* := M(n \times n\,|\,\mathbb{R}^+)$ summarizes all fixed excitatory synaptic weights $\omega^*_{i,j}$ of the neuronal network. Throughout this thesis, we refer to the stable state by the following formalism:

$$\mathcal{E}(\mathbf{F}^{ex}, f_\omega) := \{\mathbf{F}^*(\mathbf{F}^{ex}, f_\omega),\; W^*(\mathbf{F}^{ex}, f_\omega)\}. \quad (3.11)$$

In this thesis, equilibrium and fixed point are synonyms.

Note, the state variables representing the numerically obtained long-term representation (LTR) for a given input stimulation are described by the notation $\langle \circ \rangle_{LTR}$, whereas the state variables representing the analytically determined equilibrium state of the system are described by the notation $\circ^*$. To clearly distinguish numerically and analytically calculated LTRs, in the following we consistently use $\langle F_i\rangle_{LTR}$, $\langle \omega_{i,j}\rangle_{LTR}$ for the simulation and $F^*_i$, $\omega^*_{i,j}$ for the analysis. However, both notations describe the stable state of the system for a given input paradigm.
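Condition 1 can also be checked numerically: stacking both residuals into one root-finding problem yields the fixed point directly. The following sketch uses scipy.optimize.fsolve as a generic solver (our illustration only; the thesis instead derives the equilibrium analytically in the following subsections, and the external input is assumed to be baked into the passed dynamics functions):

```python
import numpy as np
from scipy.optimize import fsolve

def find_equilibrium(f_Fn, f_Wn, F_init, W_init):
    """Numerically solve f_Fn(F*, W*) = 0 and f_Wn(F*, W*) = 0
    (Condition 1) for a fixed external input already contained in
    f_Fn. Returns the fixed activities F* and fixed weights W*."""
    n = F_init.size
    def residual(x):
        F, W = x[:n], x[n:].reshape(n, n)
        return np.concatenate([f_Fn(F, W), f_Wn(F, W).ravel()])
    x0 = np.concatenate([F_init, W_init.ravel()])
    x_star = fsolve(residual, x0)
    return x_star[:n], x_star[n:].reshape(n, n)
```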

3.3.1 Populations at Equilibrium

For all neuronal populations $r, s \in \{1, \ldots, n_{pop}\}$ of the network, we describe the average
