6.3.3 Discussion of the model and its results, predictions, and future work

The angular error between neurons decreases over time, as depicted in Figure 6.10. Therefore, the network minimizes alignment errors and produces neurons with a shared orientation.

Figure 6.11 (previous page) Examples of the weight evolution and final auto-correlation maps. Each block corresponds to a single simulation and displays the dendritic weight maps with respect to spatial location for each of the three neurons at several time steps. Furthermore, the auto-correlograms of the weight maps after the final time step are given. The auto-correlograms include a black line from the center to the closest peak, which was used for the computation of the orientation (for details see Appendix B). The weight maps at the first time step (t = 0 min) are the maps after the first location was presented to the cells; thus they express altered weights at the center of the map because the animal always started the exploration in the center of the arena. The first block contains a simulation with a final average gridness score of 0.949 and a relative orientation error of 0.000, the second block has a final score of 0.589 and an error of 0.000, and the bottom block has a score of 0.223 and an error of 3.339. The number in the white inlay in each weight map is the gridness score computed for that map.

Thus, the competitive dynamics are able to generate modules of grid cells with shared configurations. Furthermore, the cells express fields with phase offsets such that the entire input domain is covered. As can be observed in Figure 6.9, the fields form very early and the gridness score continues to increase over the course of the simulations. Furthermore, the weight fields remain stable and move only in conjunction with an increase of the gridness score. Thus, the dynamics presented in Subsection 6.3.1 describe an optimization process which converges to a hexagonal arrangement of the weight fields.
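The gridness scores shown in Figures 6.9 and 6.11 are derived from the rotational symmetry of the auto-correlograms (Appendix B). The following Python sketch illustrates the commonly used rotational gridness measure; the annulus selection, interpolation order, and parameter names are assumptions and may differ from the exact procedure in Appendix B.

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr, inner_r, outer_r):
    """Rotational gridness score of a 2-D auto-correlogram (sketch).

    Correlates an annulus around the central peak with rotated copies of the
    map; hexagonal symmetry yields high correlations at 60/120 degrees and
    low ones at 30/90/150 degrees.
    """
    h, w = autocorr.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h / 2.0, x - w / 2.0)
    mask = (r >= inner_r) & (r <= outer_r)   # exclude the central peak and far corners
    ring = autocorr[mask]

    def corr_at(angle_deg):
        rotated = rotate(autocorr, angle_deg, reshape=False, order=1)
        return np.corrcoef(ring, rotated[mask])[0, 1]

    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))
```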

The model postulates that the velocity of an animal contributes to the formation of grid cells in the form of a self-tuning learning rate, based on the following argument. The certainty of being at a specific location should be inversely proportional to the running speed. For instance, an animal at rest is very certain about its current whereabouts. However, this certainty should decline with increasing speed. As linear speed cells have been found to exist in the mEC [195], speed information is indeed available.

The model, however, makes a strong prediction about the interaction between speed cells and grid cells, as the learning rate of grid cells depends on speed. This modulatory effect is expected to be mediated either by inhibitory interneurons, such that speed cells suppress currently active grid cells, or in a way whereby future grid cells are supported more strongly by speed cell activity than grid cells associated with temporally and spatially nearby locations. As the mEC shows almost exclusively inhibitory recurrent activity [70], it is more likely that the modulation is mediated by inhibitory feedback. Changing the impact of speed from the non-linear contribution defined by Equation (6.24) to a constant value decreased the stability of grid fields in the model. This effect is therefore expected to appear in real rodents as well.
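Equation (6.24) is not reproduced here, but the argument can be illustrated with a minimal sketch in which the learning rate decays with running speed. The inverse form, the parameter names (`eta_max`, `v_half`), and the Hebbian-style weight update are illustrative assumptions only and do not reproduce the model's actual equations.

```python
import numpy as np

def speed_modulated_learning_rate(speed, eta_max=0.1, v_half=0.15):
    """Self-tuning learning rate: maximal at rest, declining with speed (m/s).

    The functional form and the constants are illustrative assumptions, not
    the non-linear contribution defined by Equation (6.24).
    """
    return eta_max / (1.0 + speed / v_half)

def update_weights(w, x, speed):
    """Hebbian-style pull of the dendritic weights towards the current input,
    gated by the animal's running speed (sketch)."""
    eta = speed_modulated_learning_rate(speed)
    return w + eta * (x - w)

# Slow movement (high positional certainty) produces a larger update than a fast run.
w, x = np.zeros(4), np.ones(4)
print(update_weights(w, x, speed=0.02))
print(update_weights(w, x, speed=0.60))
```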

The model is independent of heading direction. Rather, grid cell responses depend on the input space of spatially modulated pre-synaptic neurons. It is likely that this space is spanned by head direction cells in combination with boundary information.

Thus, a more elaborate model with detailed designs of the pre-synaptic neurons, including head direction cells, will likely yield grid cells whose firing is modulated more strongly by the head direction of the animal than by its movement direction.

It is therefore expected that the results observed by Raudies et al. [293] can be explained by future studies and models that employ the necessary level of detail with respect to pre-synaptic neurons.

The results presented above allow abstractions of the computations and algorithms performed by grid cells. The hexagonal arrangement of grid cell firing fields can be modelled as the densest packing of circular or particle-like sampling regions of an input space. These particles need to interact in such a way that they do not overlap but are still packed as tightly as possible with respect to their on-center and off-surround areas. In fact, it is postulated that the optimal dense packing of particles with soft boundaries is the reason for the shearing effects as well as the wall offsets observed in biological data [337]. A computational model of how multiple grid cells can coordinate their sampling regions such that their responses are aligned was already suggested by Kerdels et al. [178]. In that study, grid cells are represented by a GNG and sample their input space in a manner similar to the method presented here.

Although the study focuses on spatial sampling and not on transitions, it shows that the realignment issue of grid modules can be understood in algorithmic terms.
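As an illustration of this packing principle (and not of the GNG-based model of Kerdels et al. [178] itself), the toy simulation below lets particles with soft, Gaussian-shaped interaction ranges repel each other inside a square arena; at a suitable density they settle into an approximately hexagonal arrangement. All parameters and the repulsion kernel are assumptions.

```python
import numpy as np

def relax_particles(n=40, steps=2000, box=1.0, sigma=0.15, step=0.001, seed=0):
    """Toy soft-disc packing: particles repel within roughly `sigma` of each
    other and are confined to a box, relaxing towards a dense, roughly
    hexagonal lattice."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.0, box, size=(n, 2))
    for _ in range(steps):
        d = p[:, None, :] - p[None, :, :]                  # pairwise displacements
        dist = np.linalg.norm(d, axis=-1) + 1e-9
        # Gaussian repulsion along the unit vector away from each neighbour;
        # the self-term vanishes because d[i, i] is the zero vector.
        force = (np.exp(-(dist / sigma) ** 2) / dist)[..., None] * d
        p += step * force.sum(axis=1)
        p = np.clip(p, 0.0, box)                           # hard arena walls
    return p

points = relax_particles()   # inspect e.g. nearest-neighbour distances or plot
```

In such a toy setting, deformations of the lattice near the clipped walls give an intuition for the shearing and wall-offset effects mentioned above.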


An additional benefit of abstracting grid fields as elementary samplers is the possibility of studying the requirements for the pre-synaptic representation.

In short, the pre-synaptic input space is required to provide sufficient information to disambiguate between places. One likely candidate for such an input space is examined in the preliminary results presented in Appendix C. In conclusion, it is likely that grid cells inherit their metric information and accuracy from pre-synaptic neurons and their corresponding sensory and representational resolution.

Pre-synaptic activity is required to be spatially modulated. It is therefore proposed that boundary information is one of the primary inputs for self-organizing grid cells.

Boundary vectors have been successfully used in a model which describes place cell firing fields [13]. Consequently, they are likely candidates for the spatial discrimination which was assumed to be available as pre-synaptic input in the model presented here. Preliminary results indicate that the boundary vector space allows centralized, approximately hexagonal sampling locations to form (see Appendix C).
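For concreteness, a boundary-vector-style input cell can be sketched as a Gaussian tuning curve over the distance and allocentric direction of visible boundary segments, which is the general form used in boundary vector cell models of place fields; the exact parameterization of [13] and of the space examined in Appendix C may differ, and all names and constants below are assumptions.

```python
import numpy as np

def bvc_response(boundary_dist, boundary_dir, pref_dist, pref_dir,
                 sigma_dist=0.08, sigma_dir=np.radians(12.0)):
    """Response of one boundary-vector-like input cell (sketch).

    `boundary_dist` and `boundary_dir` are arrays giving the distance (m) and
    allocentric direction (rad) of boundary segments seen from the animal's
    current position; the cell prefers `pref_dist` and `pref_dir`.
    """
    ang = np.angle(np.exp(1j * (boundary_dir - pref_dir)))  # wrapped angle difference
    tuning = (np.exp(-((boundary_dist - pref_dist) ** 2) / (2 * sigma_dist ** 2))
              * np.exp(-(ang ** 2) / (2 * sigma_dir ** 2)))
    return float(tuning.sum())   # pool over all visible boundary segments

# Example: an animal 10 cm from a wall lying in the cell's preferred direction.
print(bvc_response(np.array([0.10]), np.array([0.0]), pref_dist=0.10, pref_dir=0.0))
```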

Some abstractions which were used in the model limit its biological accuracy. For instance, the currently employed winner-take-all mechanism in combination with the co-activation depression shows only limited success in forming hexagonal grid fields when more than three cells are simulated. The cells express localized response fields, but due to the non-graded, absolute winner selection, only the winning neuron correlates with the input. Furthermore, the co-activation suppression strongly decorrelates the activity of neurons which are active at the same time. Though both mechanisms are biologically inspired, they are not plausible in their current form. A hard winner-take-all mechanism would require exceptionally fast recurrent inhibitory activity. Indeed, it was observed that the HF is governed by inhibitory collaterals which operate in the range of milliseconds [86], and it is likely that this observation will also be made for the mEC. Nevertheless, the small time window between feed-forward excitation and recurrent inhibition may be sufficient to let grid cells organize with overlapping fields.
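The two mechanisms criticized here can be written down compactly; the sketch below uses a hard `argmax` winner and a blunt suppression of co-active cells. The threshold, rates, and the exact form of the suppression term are assumptions and do not reproduce the equations of Subsection 6.3.1.

```python
import numpy as np

def wta_update(weights, x, eta=0.05, suppression=0.1, coactive_frac=0.5):
    """Hard winner-take-all learning with co-activation depression (sketch).

    `weights` has shape (n_cells, n_inputs). Only the single best-matching
    cell is pulled towards the input; cells whose feed-forward activation is
    close to the winner's are pushed away from it, decorrelating co-active
    neurons. All constants are illustrative.
    """
    act = weights @ x                             # feed-forward activation
    winner = int(np.argmax(act))                  # non-graded, absolute winner selection
    weights[winner] += eta * (x - weights[winner])
    coactive = act > coactive_frac * act[winner]  # "active at the same time"
    coactive[winner] = False
    weights[coactive] -= suppression * eta * (x - weights[coactive])
    return winner
```

The absolute `argmax` selection and the explicit suppression term are exactly the parts that lack a plausible biological counterpart.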

Thus, a model based on neurons with non-linear temporal dynamics, for instance a leaky integrate-and-fire (LIF) model in combination with STDP, is likely to provide overlapping responses.
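A minimal leaky integrate-and-fire update is sketched below to make the suggested direction concrete; the time constants and voltages are generic textbook values, not parameters of the model, and the STDP component is omitted.

```python
def lif_step(v, i_syn, dt=1e-3, tau_m=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """One Euler step of a leaky integrate-and-fire neuron (sketch).

    `v` is the membrane potential (V), `i_syn` the synaptic input current (A).
    Returns the updated potential and whether the neuron spiked.
    """
    v = v + dt / tau_m * (-(v - v_rest) + r_m * i_syn)
    if v >= v_thresh:
        return v_reset, True
    return v, False

# Example: a constant input current drives the neuron from rest towards threshold.
v, spikes = -0.065, 0
for _ in range(100):
    v, spiked = lif_step(v, i_syn=2e-9)
    spikes += spiked
```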

Future models and electrophysiological recordings have to investigate the receptive fields of grid cells more rigorously. The receptive fields of dendrites were assumed to be perfectly circular in the results presented here. Furthermore, neural activity was computed simply as a sum of pre-synaptic activity weighted by the corresponding dendritic weights. However, it is expected that the receptive fields of the dendrites of grid cells express complex interactions with the tuning curves of pre-synaptic neurons due to the individual tuning of each dendrite. Furthermore, pre-synaptic spike characteristics, for instance whether the pre-synaptic neurons are bursting or not, may have an influence.
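For reference, the simple activity computation referred to above amounts to a plain dot product between dendritic weights and pre-synaptic activity; the binary circular mask is one possible way to formalize the "perfectly circular" receptive field assumption and is purely illustrative.

```python
import numpy as np

def circular_receptive_field(pre_positions, center, radius):
    """Idealized, perfectly circular dendritic receptive field: a binary mask
    over pre-synaptic neurons within `radius` of the dendrite's preferred
    location (an illustrative simplification)."""
    return (np.linalg.norm(pre_positions - center, axis=1) <= radius).astype(float)

def cell_activity(dendritic_weights, pre_activity):
    """Neural activity as used in the results above: a weighted sum of the
    pre-synaptic activity, one weight per dendrite."""
    return float(np.dot(dendritic_weights, pre_activity))
```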

The results presented here are used in Chapter 7 to develop an abstract model of the interactions between grid and place cells. The model is used to examine the computational consequences of the transition encoding which, in turn, leads to discrete scales and the proposition of a scale-space model of grid cells.

Chapter 7

Algorithmic exploration of the