The Self-Organization of Grid Cells in 3D
Federico Stella ∗ and Alessandro Treves † SISSA, Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy
What sort of grid cells do we expect to see in bats exploring a three-dimensional environment? How long will it take for them to emerge? We address these questions within our self-organization model based on firing-rate adaptation. The model indicates that the answer to the first question may be simple, and to the second one rather complex. The mathematical analysis of the simplified version of the model points at asymptotic states resembling FCC and HCP crystal structures, which are calculated to be very close to each other in terms of cost function. The simulation of the full model, however, shows that the approach to such asymptotic states involves several sub-processes over distinct time scales. The smoothing of the initially irregular multiple fields of individual units and their arrangement into hexagonal grids over certain best planes are observed to occur relatively fast, even in large 3D volumes. The correct mutual orientation of the planes, however, and the coordinated arrangement of different units, take a longer time, with the network showing no sign of convergence towards either a pure FCC or HCP ordering.
Where does our internal representation of space come from? New findings about space-related activity in the bat have recently raised this question again [1–3]. The similarity in the place cell and, most remarkably, in the grid cell representation between rodents and bats suggests a common neural substrate for spatial navigation, shared across these mammals [4–10] and possibly other animals [11–14]. At the same time, the obvious difference in the behavior of these species requires a system that flexibly adapts to perform computations as different as mapping two- or three-dimensional space [15–17]. Here we describe a model of grid cell formation that accounts for both these aspects of spatial cognition, in a unitary perspective on the mEC network.

Grid cells seemingly require some clever engineering design, which generates the common periodicity among neighboring units while keeping them distinct in terms of spatial phase [18]. While place cells and head direction cells have been shown to generalize directly to three dimensions [3, 19], the form that grid cells will exhibit in flying bats is still not clear [20]. Further, the question is still open of how the mechanism producing such a complex periodic pattern on a surface can, in the case of bats, extend to a volume [21], even when accepting the information-theoretic optimality of a regular lattice [22]. In the self-organization perspective that we propose, the spatial responses first emerge spontaneously, at the single-unit level, with no engineering required. In the simplest version of the model, which we have explored in a series of studies [23–25], the periodicity of the grid pattern is a result of firing-rate adaptation during exploration sessions that span a considerable developmental time. It is fixed gradually by means of synaptic plasticity in the feedforward connections, which convey broad spatial inputs, for example but not necessarily from 'place units'.
∗ Electronic address: [email protected] † Electronic address: [email protected]
Contrary to other models limited to the explanation of grid cell expression, this model delves into the issue of their induction and, most importantly, can account for the effects of the geometry of the explored environment on the outcome of the self-organization process. We have shown how, for example, the model produces pentagonal grids on a spherical surface and heptagonal ones on a hyperbolic one [26]. The nature of our model allows us now to investigate the process of grid cell self-organization in three dimensions without the need to modify any of its features.
I. MODEL

A. Numerical Simulations
In our simulations a virtual bat explores a volume of side L with a constant speed v. The position of the animal is sampled at time steps of constant ∆t. We temporarily leave these quantities unspecified. We will discuss their actual values at the end of the paper, as they are critical for the interpretation of the results. For the moment they should be understood as expressed in arbitrary units. For simplicity, the change in running direction between two consecutive steps of the virtual bat is sampled from a Gaussian distribution with zero mean and standard deviation σ_h = 0.15 radians.
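A minimal sketch of such a trajectory can be written as follows. The azimuth/elevation parameterization of the 3D heading and the reflecting boundary conditions are our own illustrative assumptions (the paper leaves the boundary behavior unspecified):

```python
import numpy as np

def simulate_trajectory(n_steps=10000, L=1.0, v=0.01, sigma_h=0.15, seed=0):
    """Random walk of the virtual bat inside a cube of side L.

    The running direction is parameterized by azimuth and elevation angles,
    each perturbed at every step by a Gaussian increment with s.d. sigma_h,
    one simple way to realize Gaussian heading changes in 3D.
    Positions are reflected at the walls to stay inside the volume.
    """
    rng = np.random.default_rng(seed)
    pos = np.empty((n_steps, 3))
    pos[0] = L / 2                      # start at the center of the box
    azim, elev = 0.0, 0.0
    for t in range(1, n_steps):
        azim += rng.normal(0.0, sigma_h)
        elev += rng.normal(0.0, sigma_h)
        step = v * np.array([np.cos(elev) * np.cos(azim),
                             np.cos(elev) * np.sin(azim),
                             np.sin(elev)])
        p = pos[t - 1] + step
        # reflect at the boundaries of [0, L]^3
        p = np.where(p < 0, -p, p)
        p = np.where(p > L, 2 * L - p, p)
        pos[t] = p
    return pos
```

The step length v and the box side L are the arbitrary units referred to above.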
The network.
Our model is comprised of two layers. The input network represents e.g. the CA1 region of hippocampus and contains N_inp = 12 units. The output network represents a population of N_mEC = 125 would-be grid units in mEC, all with the same adaptation parameters when not stated otherwise. Each mEC unit receives afferent spatial inputs which, as already discussed in [23], we take for simplicity to arise from regularly arranged place cells, although they could also arise from spatially modulated units in the adjacent cortices. The input to unit i at time t is then given by h_i^t:

h_i^t = Σ_j W_ij^{t−1} r_j^t .  (1)

The weight W_ij^t connects input unit j to mEC unit i. We will assume that at the time the mEC units develop their firing maps, spatially modulated or place cell-like activity is already present, either in parahippocampal cortex or in the hippocampus. The network model works in the same way with any kind of spatially modulated input, but the place-cell assumption reduces the averaging necessary for learning. Moreover, as recent studies have shown [27, 28], it is entirely plausible that place cells develop an adult-like spatial code earlier than grid cells do. We thus model the place field as a Gaussian bump centered in the place cell preferred position ~x_j:

r_j^t = exp( −‖~x^t − ~x_j‖² / (2σ_p²) ) ,  (2)

where ~x^t is the position at time t of the simulated bat, σ_p (a fraction of L) is the width of the firing field and ‖a − b‖ is the Euclidean distance between two points a and b in three dimensions. Place field centers are homogeneously distributed in the volume. Single unit dynamics.
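The input stage of Eqs. 1–2 can be sketched as below. The 4×4×4 grid of place-field centers, the value σ_p = 0.2, and the random weight initialization are illustrative choices of ours, not the paper's parameters; the factor 2 in the Gaussian denominator is a conventional assumption:

```python
import numpy as np

def place_cell_rates(x, centers, sigma_p):
    """Gaussian place fields (Eq. 2): r_j = exp(-||x - x_j||^2 / (2 sigma_p^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma_p ** 2))

def feedforward_input(W, r):
    """Eq. 1: h_i = sum_j W_ij r_j."""
    return W @ r

# illustrative setup: place-field centers on a regular 4 x 4 x 4 grid in [0, 1]^3
g = np.linspace(0.125, 0.875, 4)
centers = np.array([(x, y, z) for x in g for y in g for z in g])
rng = np.random.default_rng(0)
W = rng.random((125, len(centers)))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows, as in Eq. 8
h = feedforward_input(W, place_cell_rates(np.array([0.5, 0.5, 0.5]), centers, 0.2))
```

Each unit's input is thus a weighted sum of smooth spatial bumps, maximal where its strongest afferents have their field centers.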
The firing rate Ψ_i^t of mEC unit i is determined by a non-linear transfer function

Ψ_i^t = (2/π) arctan[ g^t (α_i^t − µ^t) ] Θ(α_i^t − µ^t) ,  (3)

which is normalized to have maximal firing rate equal to 1 (in arbitrary units), and Θ(·) is the Heaviside function. The variable µ^t is a threshold while α_i^t represents the adaptation-mediated input to unit i. It is related to h_i^t as follows:

α_i^t = α_i^{t−1} + b₁ (h_i^t − β_i^{t−1} − α_i^{t−1})
β_i^t = β_i^{t−1} + b₂ (h_i^t − β_i^{t−1}) ,  (4)

where β_i has slower dynamics than α_i, with b₂ a fixed fraction of b₁ (the b coefficients are rates, in units of (∆t)⁻¹). This adaptive dynamics makes it more difficult for a neuron to fire for prolonged periods of time, and corresponds to the kernel K considered in the analytical treatment [23]. The gain g^t and threshold µ^t are iteratively adjusted by Eqs. 5 at every time step to fix the mean activity a = Σ_i Ψ_i^t / N_mEC and the sparsity s = (Σ_i Ψ_i^t)² / (N_mEC Σ_i (Ψ_i^t)²) within a 10% relative error bound from pre-specified values a₀ and s₀; k indexes the iteration process:

µ^{t,k+1} = µ^{t,k} + b₃ (a^k − a₀)
g^{t,k+1} = g^{t,k} + b₄ g^{t,k} (s^k − s₀) ,  (5)

with b₃ = 0.01; a^k and s^k are the values of mean activity and sparsity determined by µ^{t,k} and g^{t,k} in the intermediate iteration steps. The iteration stops once the gain and threshold have been brought within the 10% error range, and the activity of mEC units is determined by the final values of the gain and threshold in Eq. 3. Synaptic plasticity model.
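A minimal sketch of this single-unit update (Eqs. 3–5), assuming the (2/π)·arctan form of the transfer function; the values of b₁, b₂, b₄ and of the targets a₀, s₀ below are illustrative placeholders:

```python
import numpy as np

def adapt_step(h, alpha, beta, b1, b2):
    """One step of the adaptation dynamics (Eq. 4):
    alpha tracks the input minus the slower fatigue variable beta."""
    alpha_new = alpha + b1 * (h - beta - alpha)
    beta_new = beta + b2 * (h - beta)
    return alpha_new, beta_new

def transfer(alpha, g, mu):
    """Saturating transfer function (Eq. 3), maximal rate 1:
    psi = (2/pi) arctan(g (alpha - mu)) above threshold, 0 below."""
    x = alpha - mu
    return np.where(x > 0, (2.0 / np.pi) * np.arctan(g * x), 0.0)

def fix_gain_threshold(alpha, g, mu, a0, s0, b3=0.01, b4=0.1, tol=0.1, max_iter=1000):
    """Iteratively adjust threshold mu and gain g (Eq. 5) until mean activity a
    and sparsity s are within a relative tolerance of the targets a0, s0."""
    for _ in range(max_iter):
        psi = transfer(alpha, g, mu)
        a = psi.mean()
        s = psi.sum() ** 2 / (psi.size * np.sum(psi ** 2) + 1e-12)
        if abs(a - a0) < tol * a0 and abs(s - s0) < tol * s0:
            break
        mu = mu + b3 * (a - a0)
        g = g + b4 * g * (s - s0)
    return transfer(alpha, g, mu), g, mu
```

The rectified, saturating non-linearity keeps every rate in [0, 1], while the iteration enforces the competitive constraint used by the learning rule below.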
The learning process modifies the strength of the feed-forward connections according to a Hebbian rule

W̃_ij^t = W_ij^{t−1} + ε (Ψ_i^t r_j^t − Ψ̄_i^{t−1} r̄_j^{t−1}) ,  (6)

with a learning rate ε. Ψ̄_i^t and r̄_j^t are estimated mean firing rates of mEC unit i and place unit j that are adjusted at each time step of the simulation:

Ψ̄_i^t = Ψ̄_i^{t−1} + η (Ψ_i^t − Ψ̄_i^{t−1}) ,
r̄_j^t = r̄_j^{t−1} + η (r_j^t − r̄_j^{t−1}) ,  (7)

with η = 0.05 a time-averaging factor. After each learning step, the provisional W̃_ij^t weights are normalized to unitary norm:

Σ_j (W_ij^t)² = 1 .  (8)

Units that win during competitive learning (enforced by Eq. 5) manage to establish strong connections with units that provide strong inputs. As learning proceeds, the units establish fields where they both receive strong inputs and, at the same time, are recovering from adaptation. The emergence of the grid map is the product of averaging over millions of time steps. It remains to be assessed whether the mechanism we propose might also account for the formation of new grid representations in a novel environment the animal adapts to, for a sufficient time, or if, instead, it can only be applied to the developmental period. Grid alignment: HD input.
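The learning step of Eqs. 6–8 can be sketched as follows; the learning rate ε = 0.001 is a placeholder of ours, and the L2 row normalization is one reading of the "unitary norm" constraint:

```python
import numpy as np

def hebbian_update(W, psi, r, psi_bar, r_bar, eps=0.001, eta=0.05):
    """One Hebbian learning step (Eqs. 6-8).

    Co-active pre/post pairs are potentiated relative to their running mean
    rates, then each row of W is normalized to unit norm.
    """
    # running estimates of the mean rates (Eq. 7)
    psi_bar = psi_bar + eta * (psi - psi_bar)
    r_bar = r_bar + eta * (r - r_bar)
    # Hebbian increment relative to the mean rates (Eq. 6)
    W = W + eps * (np.outer(psi, r) - np.outer(psi_bar, r_bar))
    # normalize each row to unit norm (Eq. 8)
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return W, psi_bar, r_bar
```

Subtracting the product of mean rates keeps the rule covariance-like, so that chance co-activation does not systematically potentiate the weights.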
Two substantial extensions are represented by the introduction in the model of head direction information, through the assignment of preferred directions to mEC units, and by the presence of recurrent connections in the mEC layer beside the feed-forward set between the two layers [24]. Both these additions to the earlier version of the model are important for the grid alignment issue, which we are going to study in the 3D case. With these two additional elements, the overall input to unit i is now:

h_i^t = f_{θ_i}(ω^t) ( Σ_j W_ij^{t−1} r_j^t + ρ Σ_k W_ik Ψ_k^{t−τ} )  (9)

with ρ a factor balancing feed-forward (W_ij^t) and collateral (W_ik) weights, and τ = 25 steps a delay in signal transmission, as discussed in [24]. The multiplicative factor f_{θ_i}(ω^t) in Eq. 9 is a tuning function which is maximal when the current direction of the animal movement ω^t is along the preferred direction θ_i assigned to unit i [29]:

f_θ(ω) = c + (1 − c) exp[ ν (cos(θ − ω) − 1) ] ,  (10)

with c setting the baseline, ν the tuning width, and preferred directions distributed uniformly over the 4π solid angle. Collateral Weights.
The basic function of collateral weights in the model is to favor the development of fields in the post-synaptic unit with a certain phase shift relative to the pre-synaptic unit, and consequently to induce the alignment of grids, producing a common orientation in the population [30]. In this study we will not deal with collateral weight self-organization, only with feed-forward weight learning. For simplicity, the collateral weights are set at convenient values at the beginning of the simulations and left unchanged afterwards. We will deal with the issue of recurrent connection plasticity in future work (but see [30]).

Collateral weights are set in the following way [23]: each mEC unit is temporarily assigned a preferred position, an auxiliary field at coordinates (x_i, y_i, z_i). The coordinates are randomly chosen among the place field centers of the input layer. These auxiliary fields are introduced only for the sake of weight definition and do not play any role in other parts of the simulations. The collateral weight between unit i and unit k is then calculated as

W_ik = [ f_{θ_i}(ω_ik) f_{θ_k}(ω_ik) exp( −d_ki² / σ_f² ) − κ ]_+  (11)

where [∗]_+ denotes rectification (the positive part), κ is a threshold, f_{θ_i}(ω_ik) is the tuning function defined above (in Eq. 10), ω_ik is the direction of the line connecting the auxiliary fields of units i and k, σ_f (a fraction of L) defines how broad the connectivity is, and d_ki is defined as

d_ki² = ‖ (x_i, y_i, z_i) − [ (x_k, y_k, z_k) + l ω̂_ik ] ‖²  (12)

i.e., it is the squared distance between the auxiliary field of unit i and the auxiliary field of unit k shifted by an offset l = v·τ along the direction ω̂_ik. The normalization on this set of connections is performed as in Eq. 8. The definition of the weights is such that it generates strong positive interactions between cells with similar preferred HD and activation fields appropriately shifted along the same head direction.
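Eqs. 10–12 can be sketched as below. We rewrite the tuning function with unit vectors, so that cos(θ − ω) becomes a dot product (our own convenience for 3D); the values of c, ν and κ are placeholders:

```python
import numpy as np

def tuning(pref, direction, c=0.2, nu=0.8):
    """HD tuning of Eq. 10 in unit-vector form: cos(theta - omega) -> pref . direction.
    c and nu are placeholder values."""
    return c + (1.0 - c) * np.exp(nu * (pref @ direction - 1.0))

def collateral_weights(aux, pref, l, sigma_f, kappa=0.05):
    """Collateral matrix of Eqs. 11-12: unit k excites unit i when both are tuned
    to the direction joining their auxiliary fields and the field of i sits
    roughly l = v * tau downstream of the field of k. kappa is a placeholder."""
    N = len(aux)
    W = np.zeros((N, N))
    for i in range(N):
        for k in range(N):
            if i == k:
                continue
            d = aux[i] - aux[k]
            omega = d / (np.linalg.norm(d) + 1e-12)   # direction between fields
            d2 = np.sum((aux[i] - (aux[k] + l * omega)) ** 2)
            w = (tuning(pref[i], omega) * tuning(pref[k], omega)
                 * np.exp(-d2 / sigma_f ** 2) - kappa)
            W[i, k] = max(w, 0.0)
    # normalize rows as in Eq. 8 (rows that are all zero are left untouched)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.where(norms > 0, norms, 1.0)
```

With this construction the strongest links go from a unit to those units whose auxiliary fields lie at distance ≈ l along a shared preferred direction.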
FIG. 1: Regular optimal solutions to the sphere packing problem in 3D. Top: Functions used in the cost function calculation for FCC and HCP. Color markers indicate the layering in the field arrangement: FCC results from an A(red)B(blue)C(green) sequence, HCP from an A(red)B(blue)A(red) sequence. Bottom: Autocorrelograms of the above functions. Color markers indicate the magnitude of the corresponding peak in the autocorrelogram. Purple: full peaks. Orange: half peaks.
B. Analytical Model
The self-organization process we consider at the single-unit level can be described in analytical terms as an unsupervised optimization process, if one neglects the collateral interactions that are presumed to align the grids [24]. The simplified version of the model which can be analyzed mathematically is very abstract, and does not specify most of the parameters necessary for the simulations. Nevertheless, it indicates which asymptotic states should be approached by the system after having evolved for a long time.

The asymptotic states are defined in terms of a variational principle, amounting to the minimization of a cost function of the form:

H = H_K + H_A = ∫ dχ [∇Ψ(χ)]² + γ ∫ dχ ∫ dt′ Ψ(χ(t)) K(t − t′) Ψ(χ(t′))  (13)

where χ is the spatial coordinate and Ψ represents the firing rate of the neuron across the environment. The functional is defined based on the hypothesis that the activity of the units reflects only two simple constraints:

1. The minimization of the variability of the maps across space, i.e. a preference for smooth maps. Such smoothness is expected to stem from the smoothness of the spatial inputs and of the neuronal transfer function. This constraint is expressed in the first term of the functional, the kinetic one.

2. The penalization of maps that require a neuron to fire for prolonged periods of time, which is opposed by neuronal fatigue. The second term of the functional, the adaptation term, represents this constraint.

The parameter γ parametrizes the relative importance of the two constraints. The dependence of the functional on time can be eliminated by taking into account the averaging effect of a long run over many trajectories and different speeds experienced during training.
We therefore substitute the time-dependent kernel K(t − t′) in the second term of Eq. 13 with an effective space-dependent one, K(χ − χ′), using the average speed of the animal to fix the relationship between the two:

H = H_K + H_A = (1/V) ∫_Γ d^dχ [∇Ψ(χ)]² + (γ/V) ∫_Γ dχ Ψ(χ) ∫_Γ dχ′ Ψ(χ′) K(|χ − χ′|)  (14)

where we have also made explicit the normalization by the volume V of the d-dimensional environment Γ. We directly apply this expression to ask which is the favorite arrangement of the fields in a 3D volume V. Optimal Packing.
Unlike on the plane, where the hexagonal tiling is always the optimal one, three-dimensional space admits a multitude of equally optimal orderings of spheres. The problem of sphere packing in a volume of 3D space is a well known mathematical problem, and it has been known since the time of Gauss that any of these infinitely many optimal solutions can be described in terms of two fundamental arrangements, called Face Centered Cubic (FCC) and Hexagonal Close Packed (HCP), which represent the only two regular solutions to the problem. Both solutions are based on a series of layers of spheres arranged in a hexagonal pattern. These layers are stacked one upon the other with given phase differences between them (Figure 1). The difference between FCC and HCP lies in the sequence with which these phases appear. Given one of these layers, and taking it as a reference with positioning A, there are two possible arrangements (B and C) of the next layer, obtained with a translation of A, that put all the spheres at the same distance from their neighbors. Any sequence of A, B and C without immediate repetitions has the same, maximum, packing score, but among all sequences there are the two regular prototypes:

• Face Centered Cubic (FCC) = ABCABCABCA...
• Hexagonal Close Packed (HCP) = ABABABABAB...

In both combinations each sphere has 12 first neighbors, and if d is the diameter of a sphere (or the distance between the centers of two neighboring spheres), then the inter-layer separation is √(2/3) d. If we consider the unit cell of 13 spheres (a central sphere + 12 neighbors) in Fig. 1, then FCC and HCP differ only in the position of 3 spheres (compare the position of the fields marked in green and red in the top-left and top-right panels in Figure 1). In fact, while in FCC neighboring spheres are arranged in 6 pairs with symmetrical positions with respect to the center, in HCP there are only 3 of these pairs, those on the central plane (fields marked in blue in Figure 1, top right).

Considering the three-dimensional distribution of activity ψ(x) that minimizes the cost function in Eq. 14, we then compare the relative optimality of Face Centered Cubic (FCC) and Hexagonal Close Packed (HCP) configurations. To do so we define two analytical expressions for the two arrangements of fields in terms of a combination of plane waves, as they are well suited to be treated in this formulation of the problem. FCC symmetry.
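The layer-stacking construction of FCC and HCP can be generated and checked numerically. The sketch below follows the convention that the B and C layers sit over the two types of triangular hollows of layer A:

```python
import numpy as np

def hex_layer(n, d):
    """Points of a planar triangular (hexagonal-packing) lattice with spacing d."""
    pts = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            pts.append((d * (i + 0.5 * j), d * (np.sqrt(3) / 2) * j, 0.0))
    return np.array(pts)

def stack(n, d, sequence):
    """Stack hexagonal layers with the A/B/C phase offsets of close packing.

    Layers are separated by sqrt(2/3) * d; sequence "ABC" gives a slab of FCC,
    "ABA" a slab of HCP.
    """
    offsets = {"A": (0.0, 0.0),
               "B": (d / 2, d / (2 * np.sqrt(3))),
               "C": (d, d / np.sqrt(3))}
    dz = d * np.sqrt(2.0 / 3.0)          # inter-layer separation
    layers = []
    for k, s in enumerate(sequence):
        layer = hex_layer(n, d)
        ox, oy = offsets[s]
        layer[:, 0] += ox
        layer[:, 1] += oy
        layer[:, 2] = k * dz
        layers.append(layer)
    return np.vstack(layers)
```

A sphere in the middle layer of either slab has exactly 12 first neighbors at distance d: 6 in its own plane and 3 in each adjacent layer, as stated above.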
To represent the Face Centered Cubic arrangement of fields we use the following expression:

ψ_FCC(r) = 1 + Σ_{i=1}^{4} cos(k_i · r)  (15)

a combination of four plane waves (Figure 3, top left), with the four wave vectors k_i given by the matrix:

k_i = (2π/a) ·
  (  1/√2    1/√2    1/√2 )
  (  1/√2   −1/√2   −1/√2 )
  ( −1/√2    1/√2   −1/√2 )
  ( −1/√2   −1/√2    1/√2 )  (16)

The directions of the wave vectors are equivalent to those of the center-to-vertex axes in a tetrahedron. This choice gives

Spacing = a  (17)
Normalization ⟨ψ_FCC⟩ = 1  (18)
|k_i|² = (3/2) (2π/a)²  (19)

As any power of the previous expression maintains the same symmetry properties, we will actually compute the cost function for the first few powers:

ψ_FCC^n(r) = p_n^FCC ( 1 + Σ_{i=1}^{4} cos(k_i · r) )^n  (20)

Here p_n^FCC is used to maintain the normalization. The first term of the cost function, the spatial average ⟨|∇ψ_n|²⟩, can then be evaluated analytically by expanding ψ_n over the Fourier modes and taking into account the orthonormality relations of plane waves. This calculation is quite simple for low powers, but the number of terms increases rapidly with n. The resulting formulas for n = 2 are reported in the Appendix.

The second term can be similarly calculated by using the change of variable q = χ′ − χ together with the trigonometric identity:

cos(k·q + k·x + φ) = cos(k·q) cos(k·x + φ) − sin(k·q) sin(k·x + φ)  (21)

Since sin(k·q) K(q) is an odd function and the integration domain is symmetrical around q = 0, the second term in Eq. 21 does not survive the first integral in H_A (Eq. 14). The calculations can then be performed as in the previous case after introducing the 3D Fourier transform of the adaptation kernel K:

K̃(k_i) = ∫_V dq K(q) cos(k_i · q) ,  (22)

The adaptation kernel may take various forms, but for reasons that will become clear in the next section, here we consider a kernel in a form that makes it factorable over the spatial variables.
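A quick numerical check of Eqs. 15–19: the four wave vectors point along tetrahedral axes, each with squared magnitude (3/2)(2π/a)², and ψ_FCC attains its maximal value 5 at the field centers:

```python
import numpy as np

def fcc_wavevectors(a):
    """Four wave vectors along the center-to-vertex axes of a tetrahedron (Eq. 16)."""
    dirs = np.array([[1, 1, 1],
                     [1, -1, -1],
                     [-1, 1, -1],
                     [-1, -1, 1]]) / np.sqrt(2.0)
    return (2 * np.pi / a) * dirs

def psi_fcc(r, a):
    """psi_FCC(r) = 1 + sum_i cos(k_i . r)  (Eq. 15)."""
    K = fcc_wavevectors(a)
    return 1.0 + np.cos(K @ np.asarray(r, dtype=float)).sum()
```

Since the spatial mean of each cosine vanishes, ⟨ψ_FCC⟩ = 1, matching Eq. 18.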
We use a difference of radially symmetric Gaussians:

K(q) = K_L(q) − ρ K_S(q) = [2π(vτ_L)²]^{−3/2} exp[ −q² / (2(vτ_L)²) ] − ρ [2π(vτ_S)²]^{−3/2} exp[ −q² / (2(vτ_S)²) ] .  (23)

The Fourier transform of this kernel is:

K̃(k_i) = K̃_L(k_i) − ρ K̃_S(k_i) = exp[ −(1/2)(k_i vτ_L)² ] − ρ exp[ −(1/2)(k_i vτ_S)² ] .  (24)

Again the computation of the integral, although conceptually straightforward, becomes increasingly demanding with higher values of n due to the explosion in the number of terms. HCP symmetry.
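The Gaussian kernel of Eq. 23 and its transform in Eq. 24 can be cross-checked numerically via the radial 3D Fourier integral; the value of τ_S below is an assumed placeholder:

```python
import numpy as np

V_SPEED, TAU_L, RHO = 1.0, 1.0, 0.03
TAU_S = TAU_L / 3.0      # placeholder: the published value of tau_S is assumed here

def kernel(q):
    """Difference of normalized radial Gaussians (Eq. 23)."""
    def gauss(s):
        return np.exp(-q ** 2 / (2 * s ** 2)) / (2 * np.pi * s ** 2) ** 1.5
    return gauss(V_SPEED * TAU_L) - RHO * gauss(V_SPEED * TAU_S)

def kernel_ft_analytic(k):
    """Eq. 24: exp(-(k v tau_L)^2 / 2) - rho exp(-(k v tau_S)^2 / 2)."""
    return (np.exp(-0.5 * (k * V_SPEED * TAU_L) ** 2)
            - RHO * np.exp(-0.5 * (k * V_SPEED * TAU_S) ** 2))

def kernel_ft_numeric(k, qmax=12.0, n=200001):
    """Radial 3D Fourier transform: 4 pi * int_0^inf q^2 K(q) sin(kq)/(kq) dq."""
    q = np.linspace(1e-9, qmax, n)
    f = 4 * np.pi * q ** 2 * kernel(q) * np.sin(k * q) / (k * q)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))   # trapezoid rule
```

The numerical and analytic transforms agree to high precision, confirming that each Gaussian component transforms into a Gaussian in k-space.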
Since Hexagonal Close Packing does not have central symmetry, the choice of a function reproducing the arrangement of fields is less evident. We opt for:

ψ_HCP^n(r) = p_n^HCP { (1/2)(1 + cos(k_z · r)) Σ_i cos(k_xy^i · r) + (1/2)(1 + cos(k_z · (r + ∆z))) Σ_i cos(k_xy^i · (r + ∆x)) }^n  (25)

where two separate wave vectors are present: k_xy fixing the spacing on the planar hexagonal layers, and k_z used instead to regulate the distance between layers (Figure 3, top right). As in the previous case we consider different powers of the same formula, as they all present peaks in the same configuration.

The components of k_xy are three planar wave vectors at 120° to one another,

k_xy^i = (2π/a) { (0, 2/√3), (1, −1/√3), (−1, −1/√3) }  (26)

with |k_xy|² = (4/3)(2π/a)², while the z component for a perfect HCP arrangement is |k_z| = (√3/(2√2))(2π/a), i.e. |k_z|² = (3/8)(2π/a)². To obtain the correct HCP arrangement of fields, ∆x and ∆z should be set to:

∆x = a (1/√3, 0, 0)  (27)
∆z = a √(2/3) ẑ  (28)

Spacing and normalization are the same as for FCC. The convenience of choosing a factorizing Gaussian kernel (Eq. 23) is now evident: it allows the integrals in Eq. 14 to be split into their xy and z components. Apart from this expedient, the calculations for the HCP function follow the same lines as those previously described for the FCC case. A significant difference is the dependence of the HCP solution on two parameters, k_xy and k_z. Therefore, while the choice of the adaptation parameters τ_L and τ_S fixes one of the two (as shown in Figure 2, left), one can still optimize over the ratio between the two. The value of k_z/k_xy that should be observed in the presence of a perfect HCP pattern can be calculated as 3/(4√2) ≈ 0.53. In Fig. 2 (right), we plot the values obtained for different powers of our expression for the HCP arrangement. While for higher values of n the value of k_z/k_xy extracted from the optimization progressively approaches the theoretical one, interestingly the n = 1 case exhibits a very different behavior, with an optimal k_z/k_xy = 0, independently of the value of γ. As k_z represents the reciprocal of the inter-layer spacing, this value indicates that the minimum of the cost function is obtained with infinite inter-layer distance, or equivalently with a column-like distribution of activity, with a single layer of fields extending indefinitely in the vertical direction [21]. In Figure 2 we plot the values of the wave vectors obtained with the set of parameters: vτ_L = 1, vτ_S a fixed fraction of vτ_L, ρ = 0.03. Changing these parameters alters the absolute values of the curves but does not affect the qualitative behavior of the solutions to the minimization of the cost function.
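The ideal ratio k_z/k_xy ≈ 0.53 follows from the close-packing geometry alone, as the short derivation below shows:

```python
import numpy as np

def hcp_wave_ratio(a=1.0):
    """Geometric derivation of the ideal k_z / k_xy for HCP with field spacing a.

    A hexagonal layer of spacing a has wave number k_xy = 4 pi / (sqrt(3) a).
    Layers sit sqrt(2/3) a apart and the ABAB pattern repeats every two layers,
    so the vertical period is 2 a sqrt(2/3) and k_z = 2 pi / (2 a sqrt(2/3)).
    """
    k_xy = 4 * np.pi / (np.sqrt(3) * a)
    k_z = 2 * np.pi / (2 * a * np.sqrt(2.0 / 3.0))
    return k_z / k_xy
```

The ratio simplifies to 3/(4√2) ≈ 0.53, independently of the spacing a, which is why a single theoretical line can be drawn in Fig. 2 (right).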
FIG. 2:
Analytical estimation of the optimal grid parameters.
Results for the grid parameters resulting from the cost function minimization. Left: optimal grid spacing for FCC and HCP and various powers of the respective expressions. Right: optimal ratio between horizontal spacing and vertical spacing for different powers of the HCP expression. The orange continuous line indicates the theoretical value required to obtain a perfect HCP arrangement. All the plots are obtained with parameters: vτ_L = 1, vτ_S a fixed fraction of vτ_L, ρ = 0.03.

II. RESULTS

A. Asymptotic states for individual grids
Self-organized grids appear to express a mixture of symmetries.
In our simulations the activity of mEC units is progressively shaped over an extended period of time. Starting from an initial random arrangement of connections and a correspondingly heterogeneous distribution of activity, the combined effect of adaptation and synaptic learning leads the units to approach, after a transient reorganization of their firing fields in space, a stable configuration that is the outcome of the self-organization process. Looking at the firing configurations developed by the units in our simulations, one can notice a similarity with the theoretical solutions maximizing the packing density of spheres described above. In Fig. 3 we show two typical examples from the units emerging in the model mEC layer in two distinct simulations, each taken after about 15 million time steps of learning time. On the top row the firing rates of the units present a blobby appearance, with equally sized, spherical fields homogeneously distributed in the volume. Although the rate maps bear a resemblance to what we expect from a regular tiling of the three-dimensional environment, they are not very informative about the overall organization of the fields nor about any symmetry in their spatial distribution. Computing the corresponding autocorrelograms we indeed find that the two units are rather different in this respect, as they express two distinct field configurations. One presents an approximately FCC arrangement (Fig. 3, left) while the other is close to an HCP one (Fig. 3, right). Overlaying the axes of symmetry of the two ideal arrangements (green and orange lines) illustrates the differences between the two. Thus, self-organization based on synaptic adaptation can have multiple outcomes, leading to equivalently stable solutions.
Identical systems, with the same network properties and subject to the same evolution dynamics, might approach different asymptotic states, merely as an effect of the random initial conditions. This fact does not necessarily imply that different solutions can coexist in the same population. The presence of interactions between units might indeed induce a global response to the initial conditions, driving all the cells to develop the same symmetry properties. By making use of the measures of order described in Methods (IV B) and calculating the similarity of each activity pattern either to an FCC or to an HCP arrangement, we can assess the presence within a population of mEC units of the different asymptotic arrangements. In Fig. 4 we show the distribution of the two scores, again taken after a long learning time (15 million time steps), for units all belonging to the same mEC population. The values of the two scores indicate the presence of both arrangements in the system. If we look at the scores for each unit at a given time (Fig. 4, left) we indeed find that these are not clustered in two groups, each expressing a homogeneous HCP or FCC arrangement; instead they cover an entire continuum of scores between the two extremes, expressing all intermediate arrangements.
Which is the most favorable analytical solution?
This result points to a high degree of independence of each unit and at the same time to a rather weak preference of the system for either of the regular configurations of fields. We can contrast our observations on the asymptotic states approached by the mEC units simulated numerically with the analytic evaluation of the cost function associated with the regular asymptotic states (Methods section I B). The calculations are based on the functional in Eq. 14, comprising two terms, one representing the degree of smoothness of the map, the other the effects of adaptation. Its minimization (adjusting regular solutions to fit with the firing-rate adaptation time scale) shows a rather complex interplay between the different possible states (Fig. 4, right). First of all, considering FCC and HCP separately, we see how, for both of them, solutions of higher power become successively favored as the value of the γ parameter increases, that is, as the weight of the adaptation component becomes increasingly dominant in the evaluation of the cost function. Therefore, the same arrangement of field positions, but with the fields becoming more and more concentrated and peaked, is selected moving rightward on the graph. In parallel, the two configurations, FCC and HCP, compete with each other, and the two alternate as the optimal solution in different portions of the parameter space. The picture that emerges from this analysis is one of close equivalence of FCC and HCP in terms of optimality. There is no evidence of a regime in which one of the two strongly dominates the other. Instead, the features that are common to the two, like the hexagonal arrangement and the 12 first neighbors, appear to be the relevant ones for the evaluation of the cost function, with the differences appearing only when considering higher-order features, which contribute only marginally to its value and therefore generate minuscule quantitative differences between the two.

Analytical calculations deal with an abstract and simplified formulation of the properties of our network. The conclusion they suggest, however, is supported by our observations from the numerical simulation of the full model. The system does not appear to converge asymptotically to either of the two regular configurations of fields. Although neither FCC- nor HCP-symmetric solutions provide a unique description of the final arrangement of the units, the simulations produce examples of units similar to either of the two optimal packing solutions.

FIG. 3: Two illustrative examples of unit activity obtained from simulations. Left: FCC-like unit. Right: HCP-like unit. Top: rate maps. Bottom: autocorrelograms. We plot the central portion of the autocorrelogram comprising the peaks surrounding the origin. Red and blue contours correspond to correlation values of 0.2 and 0.25 respectively.
B. Time scales for the emergence of local order
Constructing either an FCC or an HCP arrangement of fields is a rather articulated endeavor that requires assembling a hierarchy of elements of increasing complexity. The three-dimensional structure described by the two arrangements implies the establishment of long-range relationships between the level of activity at distant points of the environment and involves determining the position of a large number of fields at the same time. Both FCC and HCP are described by unitary cells of 13 fields, and the difference between the two lies in the different positioning of just three of them with respect to the others. It is evident by looking at Figure 1, however, that this long-range order is constructed from a set of building blocks that express symmetries and regularities at a local level, involving fewer fields and a smaller set of constraints. Understanding the outcome of the self-organization of grid cells in three dimensions can thus be approached bottom-up, starting from basic features of the representation and then following the learning process up towards their combination into overarching structures.

As in the two-dimensional case, a description of the grid can start from computing the mean distance between first-neighbor fields and the mean angle formed by triplets of adjacent fields. These two measures involve respectively two and three fields at a time and are not informative about correlations extending beyond these boundaries. At this level order emerges almost immediately (Fig. 5). The mean angle among neighboring fields (calculated over all the triplets of all the cells of a simulated population) is close to π/3, and the mean spacing settles at a fixed fraction of L. We ran simulations in different conditions to test the sensitivity of these quantities to specific components of the model. We consider the case of having no internal connectivity in mEC, removing any interaction between different mEC units (Fig. 5, right: red line), which therefore develop grids independently, and the case in which, rather than having a single value of the adaptation time course common to all the units, the population expresses a range of possible values of b₁, drawn from a uniform distribution around the reference value.

FIG. 4: 3D grid units do not converge towards a common arrangement of fields. Left: Distribution of FCC and HCP scores in a population of simulated mEC units. The scores span a continuum between a pure FCC arrangement (bottom right corner) and a pure HCP one (top left corner of the figure) and are widely distributed between the two extremes. Right: Analytical cost function for various powers of FCC and HCP as a function of the γ parameter. Continuous lines indicate the minimum value within each set of solutions, which alternate as the global minimum in different ranges of values of γ. For small values of the parameter γ, instead, the trivial solution ψ = 0 is favored. The plot is obtained with: vτ_L = 1, vτ_S a fixed fraction of vτ_L, ρ = 0.03.

These results indicate that the three-dimensional grid develops from the same ingredients as its lower-dimensional equivalent. Mean spacing and mean angle are quickly fixed over the whole network, almost simultaneously, and are the first recognizable signs of the emergence of an ordered structure from the initial random distribution of activity. The equivalence between this process and the one observed in a model of two-dimensional grid development is due to the same principles driving self-organization. The presence of an additional dimension does not affect the way in which fields are initially brought by adaptation to homogeneously and regularly cover the entire space.
FIG. 5:
Emergence of local regularities in the arrangement of fields. Left: Mean angle formed by triplets of fields as a function of learning time (see section IV A for details on the measure). Right: Mean spacing between fields, extracted from the autocorrelograms, for various conditions as a function of learning time.
C. Time scales for the emergence of long-range order
Using the measure described in Methods (IV A), we can evaluate the difference between the distribution of activity of a unit and a random arrangement of fields. Plotting the average across units of this index, which reflects the decrease of the variance in the angles between triplets already observed in Fig. 5, we see again (Fig. 6, top left) that after roughly 4 million time steps the system is arranged in a stable, ordered manner, with equilateral triangles among neighboring fields dominating the activity pattern. This ordering can be generated by the system independently of higher-order symmetries, and it provides a first step towards arranging fields into more articulated structures.
A second step taken by the system is the coordination of multiple field triplets into a hexagonal pattern. This process corresponds to the formation of a grid on the plane, but in three dimensions it involves the creation of not just one hexagon, but of multiple superimposed planes, each of which contains hexagonally arranged fields. To investigate the structure of the activity on the single layers, we take multiple slices of the autocorrelogram matrix: sections passing through the center, with different angles of azimuth and elevation. We then compute the autocorrelogram values on each of these planes, and compare them with a hexagonal template of equidistant peaks separated by angles of π/3.

FIG. 6:
Three distinct time courses for the emergence of long-range ordering.
The panels present the time evolution of the measures of different symmetries in the arrangement of fields. Top row: fast convergence of neighboring triplets of fields towards equilateral triangles (left), and of groups of fields into planes with a hexagonal arrangement (right). Middle row: slow convergence of the different units in the population towards a common orientation of the layers of fields. Left, angle between the principal planes expressed by the various units. Right, angle between fields arranged over mutually aligned planes. Bottom row: no convergence of global, inter-planar order. The measures of similarity with FCC and HCP ordering do not evolve with the extension of the learning time, and remain close to intermediate values, indicating an even distribution of the values over the population of grid cells. See Methods for details on the measures.

The coordination among units can be followed through the angle between the principal planes of different cells (β, Fig. 6, middle left) and the angle between the hexagonal grid axes of cells sharing a similar best plane (ω, Fig. 6, middle right). We indeed see that collateral interaction tends, in time, to align the principal layer of different cells, defining a preferred orientation for the global structure of the grid. The emergence of the common alignment of the layers is slower than the formation of the layers themselves: it takes about 12 million time steps of exploration time to significantly reduce the dispersion of the angles (note that β is slightly faster, perhaps indicating a tendency to first define the layers and then arrange shifts within them). Having hexagonal layers tiled upon each other along the same direction still leaves each cell the degree of freedom of setting the relative phase between pairs of these layers. It is clear at this point that a proper choice of these phases would result in the reproduction of either an FCC arrangement or an HCP one.
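As an illustration of this kind of alignment measure (a simplification of ours, assuming each unit's best plane has already been reduced to a unit normal vector):

```python
import numpy as np

def plane_alignment(normals):
    """Mean pairwise angle (degrees) between the best-plane normals of
    different units; antiparallel normals count as aligned."""
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    angles = []
    for i in range(len(n)):
        for j in range(i + 1, len(n)):
            c = abs(n[i] @ n[j])          # fold to the [0, 90] deg range
            angles.append(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
    return float(np.mean(angles))
```

A population converging to a common layer orientation drives this index towards 0 degrees.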
This higher level in the hierarchy of order for three-dimensional grids, which when attained would provide a completely regular tiling of the volume, has no correspondence in the two-dimensional case. It is thus a completely new level of complexity, one that can only be expressed when producing grids in three dimensions. In the bottom row of Fig. 6 we show the time course of the scores for FCC and HCP similarity (see Methods IV B). Their values are almost unchanged over the duration of the simulations: after 30 million time steps, a time largely exceeding the one necessary for the other quantities to converge, both of them are still close to 0.5, which reflects a broad distribution of values across the population.
Our simulations are able to distinguish a hierarchy in the time course leading to the formation of three-dimensional grids. Different levels of complexity appear in the arrangement of fields at different speeds. Fast-converging quantities, like the formation of equilateral triangles of fields and successively of layers of fields with hexagonal symmetry, appear first, framing the activity of single units in the network. These initial structures are then modified on a slower time scale to obtain a global coordination among cells. Planes of fields are rotated to align them across the population, generating a common tiling of the fields of different cells, each of which conserves its unique spatial phase. This global ordering is only partial, though, as the phases of the units only partially overlap with those necessary to reproduce a perfectly regular tiling of the volume (either FCC or HCP). If we exclude the very initial phase of network dynamics, self-organization does not appear to affect these aspects of network activity, which therefore remain loosely determined even after a very extended period of time.

D. Volume dependence of the time scales
The critical question is how these time scales would scale up with the size of the environment. At least for the most rapid self-organizing processes, their very nature, dependent on plasticity in the feedforward connections, would appear at first sight to require the pairing of each activity field of each unit with the specific configuration of sensory inputs that impinge at the same time on the feedforward connections, and hence to imply times that scale up with the number of fields in the volume. A volume of linear size L includes roughly √2 (L/a)³ fields of spacing a in either an FCC or HCP arrangement (and 6√2 (L/a)³ trajectories connecting neighboring fields). A square of side L on a plane would include roughly (2/√3)(L/a)² fields. The emergence of equilateral triangles in 3D appears to require roughly a factor 2 more time than in 2D [24], in approximate agreement with the factor √(3/2)(L/a) coming from the above argument. Note that, if that were correct, equilateral triangles in an environment roughly 4 times as large, as can be argued to be the one used in the bat experiments in the Ulanovsky lab [20], would emerge only after roughly 40 or more hours of exploration.
To quantify the relaxation, we consider the γ factor that, in the cost function, determines the weight of the adaptation kernel with respect to the kinetic term. We find that the population-averaged data points for both terms can be well fit by a sum of two exponentials plus a constant (Fig. 7, inset), with the same time parameter for the first exponential in each term:

H_K(t) ≃ A_v exp(−t/τ_S^v) + B_v exp(−t/τ_L^v) + K    (29)

H_A(t) ≃ C_v exp(−t/τ_S^v) − D_v exp(−t/τ_M) + E_v    (30)

with A, B, C, D, E volume-dependent positive fit parameters, and τ_S, τ_M, τ_L short, medium and long relaxation time scales (K turns out not to depend on the volume; nor, it seems, does τ_M). The short-term relaxation is therefore a joint decrease of both terms, while later the adaptation term rises, whereas the kinetic term continues to decrease.
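To make the fitting concrete, here is a minimal Python sketch of the double-exponential model of Eqs. (29)-(30), together with the empirical ansatz for γ used below (the largest γ that keeps H_K + γH_A monotonically decreasing). All data and parameter values are synthetic placeholders, not the fitted values of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Eqs. (29)-(30): two exponentials plus a constant/asymptote,
# sharing the short time scale tauS.
def h_k(t, A, B, tauS, tauL, K):
    return A * np.exp(-t / tauS) + B * np.exp(-t / tauL) + K

def h_a(t, C, D, tauS, tauM, E):
    return C * np.exp(-t / tauS) - D * np.exp(-t / tauM) + E

t = np.linspace(0.1, 10.0, 60)              # time, millions of steps
hk = h_k(t, 2.0, 1.0, 0.5, 4.0, 0.3)        # synthetic "data"
ha = h_a(t, 1.5, 1.0, 0.5, 2.0, 0.8)

popt, _ = curve_fit(h_k, t, hk, p0=(1, 1, 1, 5, 0))

def largest_monotone_gamma(t, hk, ha, grid=np.linspace(0.0, 5.0, 501)):
    """Largest gamma in `grid` such that hk + gamma*ha is still
    monotonically decreasing over the sampled times."""
    best = 0.0
    for g in grid:
        if np.all(np.diff(hk + g * ha) <= 0):
            best = g
    return best
```

On these noiseless curves the fit reproduces the data essentially exactly, and the scan returns the largest γ for which the summed cost is still monotone.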
An empirical ansatz can be defined for γ as the largest value that still keeps the sum H_K(t) + γH_A(t) monotonically decreasing. With this ansatz, we plot the estimate of the cost function (without the E_v term, for clarity) for varying volume sizes, where we have multiplied either 1 or 2 of the 3 linear dimensions by either 1.2 or 1.4, obtaining volumes larger than the standard one by factors 1.2, 1.4, 1.44, 1.68 and 1.96. We can see from Fig. 7 that the relaxation of this estimated cost function is mainly determined by the most rapid exponential terms, and is virtually complete by 3-4 million time steps, with a limited volume dependence. Consequently, apart from the slight prolongation of the initial transient, the time evolution of our measure for volumes of different size appears to be quite similar.
These results suggest that the time required for the complex dynamics of grid development depends only weakly on the number of fields that have to be orderly arranged in the volume. The initial relaxation, which accomplishes most of the rearrangement and probably centers on adjusting the angles between triplets of fields (cp. Figs. 6, top left, and 7), occurs in a time that increases sublinearly with system size. This is followed by some bouncing back of the adaptation term, which may have to do with the finer adjustment of most field distances, leading to planar hexagonal grids (see Figs. 5 and 6, top right), with a time constant which can be taken to be independent of the volume. It is protracted later by continued but minor smoothing of the fields, now in place on the best planes, concurrent, if collateral interactions are included, with the adjustment of the planes with respect to each other (Fig. 6, middle), which extends over longer times. We do note that we have observed considerable variability in the degree of smoothness of the individual fields obtained at the end of the simulation, with a tendency for the larger volumes to require longer times and to end up with rougher fields.
Given the variability from simulation to simulation, however, it remains to be determined whether this trend is robust and whether it points at a significant bifurcation in trajectories of grid development.

III. DISCUSSION
How relevant are these results to predicting the grid configuration expressed by a flying bat? Our model points towards a hierarchy of time scales, associated with the emergence of periodic spatial activity of increasing complexity. To establish a relation between our results and a real bat, it is necessary to specify the actual values of the temporal and spatial parameters of our model, so as to obtain a time scale for the development of the grids that we can then compare with experimental findings.
If we take time steps of size ∆t = 10 ms and an average bat velocity of v = 1 m/s, the small environment used in most of our simulations corresponds to a cubic room of size L = 2.5 m. With this choice of parameters, grids form with a field spacing of a fixed fraction of 2.5 m, and an interlayer distance smaller than the spacing by a factor √(2/3) ≈ 0.82. The time scale of grid formation can be calculated considering that 1 million simulation time steps correspond to 10,000 seconds, or nearly 3 hours. Our model then predicts, in an environment the size of ours, the presence of (i) triplets of fields forming roughly equilateral triangles within ≈ 12 hours of continuous flight, (ii) hexagons within ≈ 18 hours, and finally (iii) different units achieving a common orientation only after ≈ 12 million time steps, i.e. over 30 hours. This compares with ≈ 20,000 seconds, or 5-6 hours, for the development of gridness in 2D. At the same time, this moderate increase in grid formation time might make it comparable to the flight time available for spatial learning during bat development. In these conditions, even the weak, sub-linear dependence of time scales on volume that we do observe may be sufficient to determine a switch between forming regular structures and leaving them beyond reach.
A regular tiling of the environment (either in the form of an FCC or of an HCP lattice) is a different story, even though it would be the optimal arrangement from an information-theoretic perspective [22]. Our total simulation time of 30 million steps corresponds to ≈ 80 hours of flight in a (2.5 m)³ volume. This time is just a lower bound for the time necessary to form a regular tiling of the environment, and likely a loose one, as our simulations do not seem to be converging towards either arrangement.
These considerations suggest that bats may form a partially regular 3D tiling of the environment at most once, and then possibly only if constrained to fly for prolonged periods in a rather small cage, while a completely regular, crystalline tiling of space seems to be hardly within the range of time available to real bats.
In conclusion, the presence of an additional dimension does not seem to preclude the appearance of some orderly arrangement of the fields in mEC units of bats. Nevertheless, this order might express only a partial set of the full spectrum of potential three-dimensional symmetry properties. It might still be sufficient to distinguish the activity of these cells from a random multi-peaked pattern, but it would also place it at a substantial distance from a perfectly regular pattern.

IV. METHODS

A. Local Gridness Measure
FIG. 7: Effect of environment volume on grid developmental time. Temporal evolution of the cost function calculated for environments of different size. In the inset, breakdown of the contribution to the total cost of the kinetic part and of the adaptation part, for the cubic environment of size 2.5 x 2.5 x 2.5 m. Dots correspond to data points, lines to a fit. The constant of the kernel term, which varies with the volume, is not included for clarity.

The triangular tile is the minimal structure associated with regular volume tessellations. The two properties defining any regular triangle are the length of its side and its internal angle. Therefore, to characterize the local structure of the grid pattern in an individual unit, we extract these two properties from the spikes it produces. Firstly, from our three-dimensional rate maps we generate a representative number of spike pairs through a Poisson process, to construct the distribution of distances. Typically, this distribution is highly multi-peaked: the first peak corresponds to distances between intra-field spikes, the second to spikes belonging to neighboring fields, and subsequent peaks to spikes in non-adjacent fields. Since the length of the side of the tiling triangle in a regular pattern would correspond to the location of the second peak, we define a range of distances around this peak as a filter condition to declare spikes as belonging to neighbouring fields. The limits of this range were set at the surrounding troughs, if present, or otherwise fixed at set fractions of d, where d is the distance corresponding to the second peak, declared as the grid distance of the unit. For pseudo-spikes generated by randomly reshuffling spikes between different units, distances between spikes are unimodally distributed; these are used hereafter as a control condition. Secondly, triplets of spikes were putatively classified as belonging to neighboring fields based on distance filtering in the previous range, and the three internal angles determined. These three angles were pooled together and accumulated in an overall angular distribution. The distributions of angles so obtained for the spiking activity and for the control condition differ, and their ratio was used to characterize the angle subtended in the triangular pattern.
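The distance-filtering step just described can be sketched in Python; this is our own simplified illustration on synthetic spike positions, and both the peak-prominence criterion and the fallback window fractions (0.5d, 1.5d) are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.spatial.distance import pdist

def grid_distance(spikes, bins=100):
    """Estimate the grid distance d of a unit as the second peak of the
    pairwise spike-distance distribution: the first peak collects
    intra-field pairs, the second neighboring-field pairs."""
    dists = pdist(np.asarray(spikes, dtype=float))
    hist, edges = np.histogram(dists, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # prominence filter to ignore bin-noise bumps (our choice)
    peaks, _ = find_peaks(hist, prominence=0.1 * hist.max())
    if len(peaks) < 2:
        return None, None
    d = centers[peaks[1]]
    # fallback window declaring spike pairs as neighboring-field pairs
    return d, (0.5 * d, 1.5 * d)
```

On two tight clusters of pseudo-spikes placed one unit apart, the estimated grid distance comes out close to 1.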
Typically (in the asymptotic state), this ratio was unimodal and distributed asymmetrically around a peak. We defined the characteristic angle as the median of the above-chance distribution (ratio values above unity indicate an above-chance condition, i.e. angles obtained more frequently than at random), and the significance of the angle as the maximum of the ratio distribution.

B. Measure of Long-Range Order
The differences between FCC and HCP in the configuration of fields generate distinct symmetry properties for the two arrangements, which are reflected in the autocorrelograms that can be extracted from them. In the same way as the autocorrelogram of a hexagonal grid is itself a hexagonal grid, calculating the autocorrelogram of FCC (using Eq. 20) just reproduces the same configuration of fields (Fig. 1, bottom left), with 6 symmetric pairs of equivalent peaks surrounding the central one. Indeed, the symmetries of the structure are such that one can find 4 planes passing through the origin which contain hexagonally arranged peaks; these planes form angles of 72◦ and are all equivalent. The case is different for HCP, where the central symmetry is missing: the autocorrelogram extracted from Eq. 25 does not reproduce the original form of the function. In Fig. 1, bottom right, one can see that the autocorrelogram presents 9 pairs of peaks around the central one, but in this case the peaks are not all the same. The HCP structure is again periodic for translations along a plane, generating 6 peaks of height 1, like those of FCC (Fig. 1, purple peaks). As the structure is translated out of this preferred plane, the ABAB arrangement of the HCP layers is such that no translation reproduces the exact same configuration of fields in the autocorrelogram. The 6 peaks above the central one (Fig. 1, orange peaks) are indeed half-peaks, corresponding to an overlap of only half (6 out of 12) of the peaks of the basic unit. Therefore, although one can identify 7 planes with hexagonally arranged peaks in this autocorrelogram, they are not all equivalent to those in FCC: only one of them contains all the peaks of height 1, and it forms an angle of 72◦ with the other 6, which include half-peaks and form angles of 56◦ between them.
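The distinction between the two stackings can be made concrete with a short construction of our own (layer extent and spacing are illustrative): FCC corresponds to an ABC repetition of hexagonal layers, HCP to an ABAB one.

```python
import numpy as np

def hex_layer(n, a=1.0):
    """Points of a triangular (hexagonally packed) layer, spacing a."""
    return np.array([(a * (i + 0.5 * j), a * (np.sqrt(3) / 2) * j, 0.0)
                     for i in range(-n, n + 1) for j in range(-n, n + 1)])

def stack(layers, a=1.0, pattern="ABC"):
    """Stack hexagonal layers with FCC ("ABC") or HCP ("AB") phasing."""
    shifts = {"A": (0.0, 0.0),                      # in-plane offsets of
              "B": (a / 2, a / (2 * np.sqrt(3))),   # the two interstitial
              "C": (a, a / np.sqrt(3))}             # positions
    dz = a * np.sqrt(2.0 / 3.0)                     # interlayer distance
    out = []
    for k in range(layers):
        sx, sy = shifts[pattern[k % len(pattern)]]
        layer = hex_layer(2, a) + np.array([sx, sy, 0.0])
        layer[:, 2] = k * dz
        out.append(layer)
    return np.vstack(out)

fcc = stack(6, pattern="ABC")   # field positions repeat every 3 layers
hcp = stack(6, pattern="AB")    # field positions repeat every 2 layers
```

In both lattices every interior field keeps 12 nearest neighbors at distance a; only the relative phase of alternate layers differs, which is exactly what the autocorrelogram half-peaks detect.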
We can then use different measures to quantify the degree of similarity of a unit's activity to the FCC and HCP prototypical field arrangements. One measure is based on the autocorrelogram. From it, we first identify the best plane, the one which yields the highest grid score, measured here as the value of the planar autocorrelogram at the origin, i.e., the planar autocorrelation over all the slices passing through the origin. Once the best plane has been identified, we use the fact that FCC has 3 more planes with hexagonal symmetry, at ∼72◦ from the best plane and from one another; HCP instead has 6 of them, again at ∼72◦ from the best plane, but at ∼56◦ between them. We then take the slice scores, i.e. the planar autocorrelation values on any one slice. We take all the slices at an angle of ∼72◦ from the best plane and sum the scores of the best triplet of slices with ∼72◦ of separation (ζ_{1−3}). We then exclude them and take a second triplet of slices, again with a ∼72◦ distance from one another and at ∼56◦ from the first triplet (ζ_{4−6}). These two numbers tell us about the number of different planes with hexagonal symmetry that can be built from our autocorrelograms. Both scores run from -3 to 3, as they are the sum of three correlations. We expect ζ_{1−3} to be high for both FCC and HCP arrangements, and its value should be considered an indicator of the general quality of the grid; ζ_{4−6} instead should be high only for those grids presenting an HCP type of arrangement, but again its value might be affected by the quality of the grid. We thus define a score for the degree of FCC similarity as:

χ_FCC = (ζ_{1−3} − ζ_{4−6}) / ζ_{1−3}    (31)

which should be close to 1 in the presence of FCC, and to 0 in the HCP case.
On the other hand, HCP is characterized by the repetition of the same field positions every two layers, while FCC has a periodicity of three layers. Another way to characterize the grids is then to look for similarities between layers.
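Given the two triplet scores, the FCC-similarity of Eq. (31) is immediate to compute; the numeric values below are invented purely for illustration:

```python
def chi_fcc(zeta_first, zeta_second):
    """Eq. (31): FCC similarity from the summed planar correlations of
    the best triplet of slices and of the second, excluded triplet."""
    return (zeta_first - zeta_second) / zeta_first

fcc_like = chi_fcc(2.7, 0.3)   # strong first triplet only -> near 1
hcp_like = chi_fcc(2.7, 2.5)   # both triplets strong      -> near 0
```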
Since the best plane we calculated indicates the direction of stacking of the layers in HCP (along its normal vector), we can go back to the firing rate map, take slices along this direction (that is, slices with the same best-plane orientation) and calculate the correlation between planes separated by a two-layer distance (2λ_z):

χ_HCP = ρ_auto(2 λ_z)    (32)

Specularly to the previous score, this one should be close to 1 when HCP is expressed, and to 0 when FCC is.

Appendix A: Cost Function Expression for n=2

H(ψ_FCC) = (71/162) k + γ (1/…) [256 K̃(k) + K̃(2k) + 6 (K̃(2√(2/3) k) + K̃(2k/√3))]    (A1)

H(ψ_HCP) = 3045 k_xy + 1881 k_z + γ (1/3468) [54 K̃(2k_z) + 1160 K̃(k_xy) + 1920 K̃(k_xy + k_z) + 36 K̃(k_xy + 2k_z) + 2 K̃(2k_xy) + 48 K̃(2k_xy + k_z) + 9 K̃(2k_xy + 2k_z) + 200 K̃(√3 k_xy) + 36 K̃(√3 k_xy + 2k_z)]    (A2)

where k is the only spacing parameter for FCC, and k_xy, k_z are the two spacings of HCP, along the horizontal plane and the vertical axis respectively.

Acknowledgments
Work partially supported by the EU FET grant GRIDMAP. Discussions with the GRIDMAP collaboration, with its reviewers, with Andreas Herz and particularly with Nachum Ulanovsky and his lab are gratefully acknowledged.

[1] N. Ulanovsky and C. Moss, Hippocampus, 150 (2011). http://dx.doi.org/10.1002/hipo.20731
[2] M. Yartsev, M. Witter, and N. Ulanovsky, Nature, 103 (2011). http://dx.doi.org/10.1038/nature10583
[3] M. Yartsev and N. Ulanovsky, Science, 367 (2013). http://dx.doi.org/10.1126/science.1235338
[4] K. Thurley, J. Henke, J. Hermann, B. Ludwig, C. Tatarau, A. Watzig, A. V. Herz, B. Grothe, and C. Leibold, Behavioural Brain Research, 161 (2014).
[5] A. Sereno and S. Lehky, Frontiers in Computational Neuroscience (2010).
[6] R. Andersen and C. Buneo, Annual Review of Neuroscience, 189 (2002).
[7] N. Killian, M. Jutras, and E. Buffalo, Nature, 761 (2012). http://dx.doi.org/10.1038/nature11587
[8] J. Jacobs, M. Kahana, and A. Ekstrom, Proceedings of the National Academy of Sciences of the United States of America, 6487 (2010).
[9] J. Jacobs, C. Weidemann, J. Miller, A. Solway, J. Burke, X. Wei, N. Suthana, M. Sperling, A. Sharan, I. Fried, et al., Nature Neuroscience, 1188 (2013). http://dx.doi.org/10.1038/nn.3466
[10] I. Indovina, V. Maffei, K. Pauwels, E. Macaluso, G. A. Orban, and F. Lacquaniti, NeuroImage, 114 (2013).
[11] T. B. de Perera and R. I. Holbrook, Cognitive Processing, S107 (2012).
[12] S. Healy, S. de Kort, and N. Clayton, Trends in Ecology & Evolution, 17 (2005). http://dx.doi.org/10.1016/j.tree.2004.10.006
[13] M. Dacke and M. Srinivasan, The Journal of Experimental Biology, 845 (2007).
[14] L. Wu and J. Dickman, Science, 1054 (2012). http://dx.doi.org/10.1126/science.1216567
[15] J. Knierim, B. McNaughton, and G. Poe, Nature Neuroscience, 209 (2000). http://dx.doi.org/10.1038/72910
[16] R. Hayman, M. Verriotis, A. Jovalekic, A. Fenton, and K. Jeffery, Nature Neuroscience, 1182 (2011). http://dx.doi.org/10.1038/nn.2892
[17] J. Taube and M. Shinder, Hippocampus, 14 (2013). http://dx.doi.org/10.1002/hipo.22074
[18] E. Zilli, Frontiers in Neural Circuits (2012). http://dx.doi.org/10.3389/fncir.2012.00016
[19] A. Finkelstein, D. Derdikman, R. Alon, J. Foerster, L. Las, and N. Ulanovsky, Society for Neuroscience (2014).
[20] G. Ginosar, A. Finkelstein, L. Las, and N. Ulanovsky, Society for Neuroscience (2014).
[21] K. Jeffery, A. Jovalekic, M. Verriotis, and R. Hayman, The Behavioral and Brain Sciences, 523 (2013). http://dx.doi.org/10.1017/S0140525X12002476
[22] A. Mathis, M. Stemmler, and A. Herz, arXiv preprint arXiv:1411.2136 (2014).
[23] E. Kropff and A. Treves, Hippocampus, 1256 (2008). http://dx.doi.org/10.1002/hipo.20520
[24] B. Si, E. Kropff, and A. Treves, Biological Cybernetics, 483 (2012). http://dx.doi.org/10.1007/s00422-012-0513-7
[25] F. Stella, B. Si, E. Kropff, and A. Treves, Journal of Statistical Mechanics: Theory and Experiment (2013). http://dx.doi.org/10.1088/1742-5468/2013/03/P03013
[26] E. Urdapilleta, F. Troiani, F. Stella, and A. Treves, Bernstein Conference, T 126 (2014).
[27] R. Langston, J. Ainge, J. Couey, C. Canto, T. Bjerknes, M. Witter, E. Moser, and M. Moser, Science, 1576 (2010). http://dx.doi.org/10.1126/science.1188210
[28] T. Wills, F. Cacucci, N. Burgess, and J. O'Keefe, Science, 1573 (2010). http://dx.doi.org/10.1126/science.1188224
[29] K. Zhang, The Journal of Neuroscience, 2112 (1996).
[30] B. Si and A. Treves, Hippocampus, 1410 (2013). http://dx.doi.org/10.1002/hipo.22194