Heteroclinic cycles in Hopfield networks
Pascal Chossat and Maciej Krupa Project Team NeuroMathComp, INRIA-CNRS-UNS, 2004 route des lucioles, 06902 Valbonne, FRANCE Project Team MYCENAE, INRIA-Rocquencourt, Domaine de Voluceau, BP 105, 78153 Le Chesnay, France
October 10, 2018
Abstract
Learning and memory formation are associated with the strengthening of the synaptic connections between neurons according to a pattern reflected by the input. According to this theory, a retained memory sequence is associated with a dynamic pattern of the associated neural circuit. In this work we consider a class of network neuron models, known as Hopfield networks, with a learning rule which consists of transforming an information string to a coupling pattern. Within this class of models we study dynamic patterns, known as robust heteroclinic cycles, and establish a tight connection between their existence and the structure of the coupling.

Keywords: heteroclinic cycles, Hopfield networks, learning rule, network architecture.
AMS classification : 34C37, 37N25, 68T05, 92B20.
1 Introduction

The simplest example of a heteroclinic cycle is a sequence of saddle type equilibrium points joined in a circle by connecting orbits. Generically heteroclinic cycles are not robust under perturbations of the system (changes of parameters), but for special classes of systems they may occur robustly, typically due to the presence of invariant hyperplanes. Examples of special structures leading to robust heteroclinic cycles are symmetry, the existence of invariant planes corresponding to extinction of some species in Lotka-Volterra systems, or the existence of synchrony subspaces in coupled cell systems. Heteroclinic networks are a generalization of heteroclinic cycles to sets of equilibria with a more complicated connection structure. More generally, heteroclinic sets may consist of invariant sets connecting periodic or more complicated saddle type dynamics. The study of heteroclinic cycles was motivated by examples in fluid mechanics (systems with symmetry) [3] and in population biology [10]. See [12] for an introduction to the subject.

More recently, Rabinovich and co-workers have proposed applications of robust heteroclinic cycles in neuroscience, see [14] for an early review. Among the contexts proposed in [14] where heteroclinic dynamics could be relevant were central pattern generators (CPGs) and memory formation. These two applications were validated by some more detailed biological studies [15], [2]. The CPGs are circuits controlling the motor function, and are known to support a variety of complex oscillations corresponding to different movements of the body. As shown in this paper, by slightly changing the coupling structure in the model one can obtain a variety of heteroclinic cycles and thereby complex periodic solutions.
The idea of the memory application is similar: modifications of the coupling, arising from the action of the input, lead to the occurrence of periodic orbits, existing near heteroclinic cycles whose properties reflect the structure of the input. The focus of this work is to study Hopfield networks, which are the simplest models of memory circuits, with the goal of investigating the presence of heteroclinic cycles.

Hopfield introduced this model thirty years ago [11] for learning sequences and associative memory in neural networks, in their simplest possible form. In the continuous time version, for each neuron i in a network of N neurons, the activity is modeled by the following equation (activity model):

\dot u_i = -u_i - \sum_{j=1}^{N} J_{ij}\, g(u_j) + I_i,   i = 1, \dots, N,   (1)

where u_j is the activity variable (membrane potential) of neuron j and I_i is a constant external input on neuron i. The function g : R → (0, 1) is strictly increasing and invertible, for example a sigmoid. The quantity v_j = g(u_j) is the firing rate of neuron j, that is, the time rate of the spikes emitted by the neuron. Classically the function g(u) = (1 + e^{-u})^{-1} is used. The coupling coefficients J_{ij} define an N × N matrix J called the connectivity matrix. A positive (resp. negative) coefficient corresponds to an inhibitory (resp. excitatory) input from j to i. When J = 0 all neurons have the same state of rest u_i = I_i. When coupling is switched on, other equilibria may exist depending on the coefficients J_{ij}. It is often assumed that J is a symmetric matrix, implying that the dynamics of the network always converges to an equilibrium. Each equilibrium is defined by a sequence of values (u_1, \dots, u_N) called a pattern.
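For concreteness, here is a minimal forward-Euler integration of the activity model (1); this is our own illustrative sketch (function names and parameter values are not from the paper). With J = 0 each neuron simply relaxes to its rest state u_i = I_i, as noted above.

```python
import math

def simulate_hopfield(J, I, u0, dt=0.01, steps=5000):
    """Forward-Euler integration of the activity model (1):
        du_i/dt = -u_i - sum_j J_ij g(u_j) + I_i,
    with the classical sigmoid g(u) = 1/(1 + exp(-u))."""
    g = lambda s: 1.0 / (1.0 + math.exp(-s))
    u, n = list(u0), len(u0)
    for _ in range(steps):
        gu = [g(w) for w in u]
        du = [-u[i] - sum(J[i][j] * gu[j] for j in range(n)) + I[i]
              for i in range(n)]
        u = [u[i] + dt * du[i] for i in range(n)]
    return u

# With J = 0 the coupling vanishes and each neuron relaxes to u_i = I_i.
I = [0.5, -1.0, 2.0]
J0 = [[0.0] * 3 for _ in range(3)]
u_final = simulate_hopfield(J0, I, [0.0, 0.0, 0.0])
```

With nonzero coupling the same routine can be used to explore the additional equilibria mentioned in the text.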
Depending upon the inputs I_i, one or another equilibrium will be reached, a process that is interpreted as retrieving a pattern which has been earlier memorized through a tuning of the coupling coefficients in the network (Hebbian rule). The assumption that J is symmetric is unnecessary for storing information, and besides, experiments have shown that dynamical patterns are often present in neural circuits and seem to play an important role in various aspects, in particular in generating periodic oscillations in CPGs. In a recent work Chuan et al. [4] developed a method of converting strings of information, in the form of sequences of vectors with entries ±1, into coupling matrices, so that the Hopfield network with the resulting coupling architecture would have storing cycles, i.e. periodic orbits carrying the information of the underlying sequence of vectors. The defining feature of a storing cycle associated to such a sequence is that it visits the vicinity of each of its vectors, preserving their order in the sequence. In another study Chuan et al. [5] used Hopf bifurcation analysis to find storing cycles.

A natural observation is that heteroclinic cycles between equilibria given by the elements of the sequence provide a natural approximation to storing cycles. However the simulations of [4] and [5] gave no evidence of the existence of heteroclinic cycles. In this work we show that after a small modification the systems studied in [4] support heteroclinic cycles. Our approach draws on the work of Fukai and Tanaka [8], who observed that by replacing a non-differentiable term in the firing rate equations by a constant one obtains a Lotka-Volterra system, which supports robust heteroclinic cycles [10]. This approach was subsequently used by [14] and [1] in their study of robust heteroclinic cycles in firing rate models. In this work we continue the approach of [8], introducing some refinements to their approximation of the firing rate equations.
We point out that the original firing rate equations cannot support heteroclinic cycles due to the presence of non-smooth terms, and we introduce two methods of regularizing the equations. When the systems studied in [4] are modified using either of our approaches, heteroclinic cycles do exist and there is a direct correspondence between the input string/vector sequence/coupling structure and the resulting heteroclinic cycle. In this work we carry out a detailed study of this correspondence.

System (1) is often transformed to the firing rate formulation, by letting the firing rates x_i = g(u_i) be the dependent variables. In this section we make the same choice of g as the authors of [4], namely

g(u) = \tanh(\lambda u),  where \lambda is a parameter controlling the steepness of g.   (2)

In Section 2.2, where we review some of the work of [1], [2] and [8], we make a brief switch to a different but equivalent choice of g used by these authors. System (1) transformed to the firing rate variables with g given by (2) has the form

\dot x_j = (1 - x_j^2)\big( \lambda (J x)_j - f(x_j) \big),   x = (x_1, \dots, x_n) \in [-1, 1]^n,   (3)

where

f(x) = g^{-1}(x) = \operatorname{arctanh}(x) = \frac{1}{2} \ln\Big( \frac{1+x}{1-x} \Big)   (4)

and J is the coupling matrix. We further decompose J as follows:

J = c_0 I + c_1 J_1,   (5)

where I is the identity matrix, c_0 and c_1 are non-negative coefficients and c_1 = 1 - c_0. Provided that \lambda c_0 > 1, the equation \lambda c_0 \beta = \operatorname{arctanh}(\beta) has a pair of non-zero solutions ±\beta_\lambda with 0 < \beta_\lambda < 1. Therefore when c_1 = 0, any vector of the form \beta_\lambda (\xi_1, \dots, \xi_n) with \xi_j = ±1 is a stable steady-state of (3). If we think of vectors of the form (\xi_1, \dots, \xi_n) as information strings in a neural network, then the above steady-states represent stored memory states. However it is well-known that memory states need not be steady (see [9] and references therein).
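The value \beta_\lambda is easily computed by bisection, since \lambda c_0 \beta - \operatorname{arctanh}(\beta) is positive just right of 0 (its slope there is \lambda c_0 - 1 > 0) and tends to -∞ as \beta → 1. A small sketch with our own illustrative values of \lambda and c_0:

```python
import math

def beta_lambda(lam, c0, tol=1e-12):
    """Positive root of lam*c0*beta = arctanh(beta) on (0, 1), by bisection.
    Requires lam*c0 > 1; otherwise beta = 0 is the only solution."""
    assert lam * c0 > 1.0
    h = lambda b: lam * c0 * b - math.atanh(b)
    lo, hi = 1e-9, 1.0 - 1e-12   # h > 0 near 0, h < 0 near 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

b = beta_lambda(lam=2.0, c0=1.0)   # the pure self-coupling case c1 = 0
```

At the root, tanh(\lambda c_0 \beta_\lambda) = \beta_\lambda, i.e. \beta_\lambda(\xi_1, \dots, \xi_n) is indeed a fixed point of the uncoupled firing-rate dynamics.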
If c_1 > 0 the steady-states may become unstable or even disappear, but nevertheless information may still be dynamically stored. We now explain the idea of information storage by means of limit cycles of (3) (storing cycles), as explored in [4], and then we introduce our idea to use robust heteroclinic cycles instead.

The basic question addressed in [4] is the following: given an information string, can it be stored by a Hopfield network in the form of dynamic information, more specifically a limit cycle? Concretely, the information is given in the form of a string of binary n-vectors (with components equal to ±1). The learning rule, consistent with Hebbian learning, is an algorithm specifying how the information string structures the coupling matrix c_0 I + c_1 J_1 (we forget from now on the subscript 1 in J_1). This learning rule will be described in detail in Section 3.1. The main research question of [4] is whether the system with the coupling structure resulting from applying the learning rule supports stable limit cycles that code the original information string, in the sense that the periodic orbit passes through the quadrants of R^n corresponding to the elements of the information string, following its order.

In this article we focus on a different version of such encoding by the dynamics, choosing a robust heteroclinic cycle as the invariant object encoding the information string. The condition we impose is that the cycle should connect equilibrium points located at vertices of the cube [-1, 1]^n corresponding to the elements of the information string, following its order. This is a rather natural condition, yet the first obstacle we must overcome is that with f as given by (4) the RHS of (3) is not C^1 on the cube [-1, 1]^n, so that heteroclinic cycles cannot exist. We discuss this problem in more detail and propose a solution in the next section, which also relates to the work of [1], [2] and [8].
The articles [1], [2] and [8] consider the question of the existence of robust heteroclinic cycles in the firing rate version of (1) and show that such cycles exist for a Lotka-Volterra approximation of the system. In this section we use a different combination of g and f = g^{-1}, consistent with the choice made in these articles. Specifically we will use the functions

g(u) = \frac{1}{1 + e^{-u}}  and  f(x) = \ln\Big( \frac{x}{1-x} \Big).   (6)

The coefficients J_{ij} are assumed to be all positive, so that the synaptic couplings are all of inhibitory type. We define the firing rate by x_j = g(\lambda^{-1} u_j) and transform (1) to the firing rate formulation. After applying a time rescaling we obtain the following system:

\dot x_i = x_i (1 - x_i) \Big( -\lambda f(x_i) - \sum_{j=1}^{n} J_{ij} x_j + I \Big),   i = 1, \dots, n.   (7)

Note that system (7) is well defined and continuous on the cube [0, 1]^n, but it is not smooth on the faces, with the term

x_i (1 - x_i) f(x_i)   (8)

being the source of non-smoothness. As we are interested in heteroclinic cycles that lie on the edges, with connections in the faces, this becomes a problem for the existence and stability of the cycle. Since our purpose is merely to illustrate the problem of the lack of smoothness, we restrict our attention to the simplest case n = 3. Then (7) has the form

\dot x_1 = x_1 (1 - x_1)( -\lambda f(x_1) - J_{11} x_1 - J_{12} x_2 - J_{13} x_3 + I )
\dot x_2 = x_2 (1 - x_2)( -\lambda f(x_2) - J_{21} x_1 - J_{22} x_2 - J_{23} x_3 + I )   (9)
\dot x_3 = x_3 (1 - x_3)( -\lambda f(x_3) - J_{31} x_1 - J_{32} x_2 - J_{33} x_3 + I ).

For simplicity we assume J_{jj} = 1. The goal is to construct heteroclinic cycles connecting equilibria of the form (\rho, 0, 0), (0, \rho, 0) and (0, 0, \rho), where -\lambda f(\rho) - \rho + I = 0. The Jacobian matrix at such an equilibrium, say (\rho, 0, 0), is

\begin{pmatrix}
\rho(1-\rho)(-\lambda f'(\rho) - 1) & -J_{12}\,\rho(1-\rho) & -J_{13}\,\rho(1-\rho) \\
0 & -\lambda f(0) - J_{21}\rho + I - \lambda & 0 \\
0 & 0 & -\lambda f(0) - J_{31}\rho + I - \lambda
\end{pmatrix}.

Since f blows up at 0, the Jacobian is undefined. If the term (8) is neglected in the RHS of each equation of (9), then a heteroclinic cycle can easily be found, with

J = \begin{pmatrix} 1 & 1.25 & 0 \\ 0.875 & 1 & 1.25 \\ 3 & 0.625 & 1 \end{pmatrix}   (10)

giving an example [1].
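With the singular term dropped, (9) is a Lotka-Volterra system and its heteroclinic cycling can be watched in a few lines. The following Euler sketch is our own illustration: the coupling entries and the values of I and dt are our own choices, of the cyclically dominant type of (10) (the sub-diagonal couplings J_21, J_32, J_13 are below the self-coupling 1 and J_12, J_23, J_31 above it), so that the dominant activity is handed around 1 → 2 → 3 → 1.

```python
def lv_step(x, J, I, dt, floor=1e-12):
    """One Euler step of the Lotka-Volterra approximation of (9), i.e. (9)
    with the singular term x(1-x)f(x) dropped:
        dx_i/dt = x_i (1 - x_i) (I - (J x)_i).
    A tiny floor keeps coordinates from numerically reaching the invariant
    faces x_i = 0, where they would otherwise remain stuck forever."""
    n = len(x)
    dx = [x[i] * (1.0 - x[i]) * (I - sum(J[i][j] * x[j] for j in range(n)))
          for i in range(n)]
    return [max(x[i] + dt * dx[i], floor) for i in range(n)]

# Cyclically dominant coupling (illustrative entries of the type of (10)).
J = [[1.0, 1.25, 0.0],
     [0.875, 1.0, 1.25],
     [3.0, 0.625, 1.0]]
I, dt = 0.5, 0.05
x = [0.45, 0.02, 0.01]
winners = set()
for _ in range(60000):
    x = lv_step(x, J, I, dt)
    winners.add(max(range(3), key=lambda i: x[i]))
```

The set of "winning" indices reaches {1, 2, 3}: the trajectory visits a neighbourhood of each equilibrium (\rho, 0, 0), (0, \rho, 0), (0, 0, \rho) in cyclic order.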
We propose two approaches to regularize the function f. The first approach, which we use in this paper, is to replace f defined in (6) by its Taylor polynomial at x = 0. We denote the Taylor polynomial of degree q by f_q. Note that the sequence {f_q} diverges at 0 and 1 when q → ∞, but converges uniformly to f on any compact subinterval of (0, 1). Another approach is to replace f by

f_\varepsilon(x) = \ln\Big( \frac{x + \varepsilon}{1 + \varepsilon - x} \Big),   (11)

where \varepsilon is a small parameter. The function f_\varepsilon is well defined on the interval [0, 1], yet its properties are similar to those of f; in particular its derivative at 0 and 1 is of order 1/(\varepsilon(1 + \varepsilon)), thus very large. If we replace f by f_q or f_\varepsilon in (9) then, depending on the relative size of \lambda and q or \varepsilon, any of the three possibilities can arise:

1. the cycle does not exist,
2. the cycle exists and is unstable,
3. the cycle exists and is stable.

Fig. 1 shows simulations of (9) with f replaced by f_\varepsilon. The matrix J is as given in (10), with I and \lambda fixed. The value of \varepsilon is varied, showing an example of each of the possible cases.

We now return to the formulation (3) with f replaced by its q-th order polynomial expansion f_q at x = 0, as described in Section 2.3. The equation now reads

\dot x = \big( I - \operatorname{diag}(x)^2 \big)\big( \lambda (c_0 x + c_1 J x) - f_q(x) \big),   x \in [-1, 1]^n,   (12)

with f_q(x) = (f_q(x_1), \dots, f_q(x_n)) and

f_q(x) = x + \frac{x^3}{3} + \cdots + \frac{x^q}{q}.

The power series f_\infty has a radius of convergence equal to 1. It follows that on any interval (-1 + \epsilon, 1 - \epsilon), the approximation of f by f_q can be as good as we wish, provided that q is large enough.

We now give a formal description of the information string and introduce the learning rule. A binary pattern (or simply a pattern) is a vector \xi of binary states of n neurons: \xi = (x_1, \dots, x_n)^t with x_j = ±1. Let

\Sigma = \big( \xi^1, \xi^2, \dots, \xi^p \big)   (13)

be a sequence of p patterns.
This matrix is called a cycle if there exists a connectivity matrix J such that the corresponding network of n neurons visits sequentially and cyclically the patterns defined by \Sigma. In other words, each column \xi^j can be associated with a state of the system such that the signs of the cell variables are equal to the signs of the corresponding components of \xi^j. We shall always assume p ≥ n. Let P be the matrix of the cyclic permutation (x_1, x_2, \dots, x_{p-1}, x_p) → (x_2, x_3, \dots, x_p, x_1). The cycle \Sigma is called admissible if the equation

J \Sigma = \Sigma P   (14)

has a solution J [13]. This relation expresses a necessary condition for the network (3) to possess a solution that periodically takes the signs defined by the patterns \xi^1, \dots, \xi^p. Note that, if \Sigma is admissible, then a solution exists in the form

J = \Sigma P \Sigma^+.   (15)
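The admissibility condition (14) is a finite linear-algebra check. Here is a small pure-Python illustration with an example sequence of our own choosing (it reappears as Example 1 below): \eta and its cyclic shifts form \Sigma, and since shifting \eta three times gives -\eta, the candidate connectivity matrix sends the rows of \Sigma to the rows of \Sigma P.

```python
def shift(v):
    """Row-vector multiplication by the cyclic permutation matrix P:
    (x1, ..., xp) -> (xp, x1, ..., x_{p-1})."""
    return v[-1:] + v[:-1]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

eta = [1, 1, 1, -1, -1, -1]                   # p = 6; three shifts give -eta
Sigma = [eta, shift(eta), shift(shift(eta))]  # rows eta, eta P, eta P^2 (n = 3)
SigmaP = [shift(r) for r in Sigma]

# Candidate J: rows of Sigma P are eta P, eta P^2 and -eta,
# i.e. rows 2, 3 and minus row 1 of Sigma.
J = [[0, 1, 0],
     [0, 0, 1],
     [-1, 0, 0]]
```

Since all entries are integers, the identity J \Sigma = \Sigma P can be verified exactly.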
Figure 1: Simulations with J as given in (10) and fixed I and \lambda, illustrating the three possible cases for the regularized system, corresponding to three different values of \varepsilon. Panel A shows a heteroclinic cycle; panel B shows a periodic orbit close to an unstable cycle; panel C shows the dynamics attracted to an equilibrium, for the case when no cycle exists.

where \Sigma^+ is the Moore-Penrose pseudo-inverse of \Sigma, and if \Sigma has full rank it is unique.

A cycle \Sigma is called simple [4] if there exists a vector \eta ∈ R^p such that each row of \Sigma equals \eta P^{l_j} for some p > l_j ≥ 0. We define

W_\eta = span\{ \eta P^j : j = 0, \dots, p-1 \}.   (16)

By Theorem 2 in [4], a simple cycle is admissible if and only if dim W_\eta = Rank(\Sigma). If in addition we can write
\Sigma = \big( \eta^t, (\eta P)^t, \dots, (\eta P^{n-1})^t \big)^t   (17)

then the simple cycle is called consecutive. The following proposition is essentially contained in Sec. 5.1.1 of [4].

Proposition 1.
If a simple consecutive cycle is admissible, then there exists a J satisfying (14) of the form

J = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
a_0 & a_1 & a_2 & \cdots & a_{n-1}
\end{pmatrix},   (18)

where a_0, \dots, a_{n-1} are rational coefficients. If \Sigma has full rank then J is uniquely defined and a_0 ≠ 0 (in this case the cycle is minimal in the sense of [4]).

Proof. By construction we can write
\Sigma = \begin{pmatrix} \eta \\ \eta P \\ \vdots \\ \eta P^{n-1} \end{pmatrix}  and  \Sigma P = \begin{pmatrix} \eta P \\ \eta P^2 \\ \vdots \\ \eta P^n \end{pmatrix}.

By admissibility \Sigma P = J \Sigma, and moreover \eta P^n must be a linear combination of the \eta P^j's. Hence (18) follows. The a_j's are rational because the vectors \eta P^j have integer coordinates. If Rank(\Sigma) = n then J is non-singular, hence a_0 ≠ 0.

Example 1.
We consider \Sigma as follows, with p = 6:

\Sigma = \begin{pmatrix} 1 & 1 & 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 & 1 & -1 \end{pmatrix},   (19)

i.e. \Sigma is generated by \eta = (1, 1, 1, -1, -1, -1). Note that the rows of \Sigma are \eta, \eta P and \eta P^2. Note also that \eta P^3 = -\eta. It follows that the rows of \Sigma P are \eta P, \eta P^2 and -\eta, i.e. the second, the third and the negative of the first row of \Sigma. Hence

\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{pmatrix} \Sigma = \Sigma P.   (20)

Since the rows of \Sigma are independent, the matrix \Sigma\Sigma^T is invertible. Hence (14) has a unique solution which, by (20), must be given by

J = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{pmatrix}.   (21)

Note that \Sigma^+ = \Sigma^T (\Sigma\Sigma^T)^{-1} and that J satisfies (15). This matrix provides a simple example of a heteroclinic cycle, which we illustrate in Fig. 2: the reader can check on this numerical simulation that trajectories indeed follow the pattern defined by \Sigma. Observe that the trajectory closely follows the edges of the cube connecting the equilibria in the pattern. The analysis is easy, and it follows directly from Proposition ?? in Section 4.2 (see Example ??).

Figure 2: A trajectory of (3) with J given by (21) and \lambda = 8. Initial conditions close to one of the pattern equilibria.

3.2 Classification of simple consecutive cycles

Suppose that p is fixed and note that every consecutive cycle is uniquely determined by the choice of \eta and n. If n ≥ p then such a cycle is always admissible. If n < p then only very special choices of \eta and n make the cycle admissible. In this section we will address the question of finding the conditions on \eta so that there exists an n < p such that the cycle determined by \eta and n is admissible. In order to avoid confusion with prime numbers we will, throughout this section, use the letter m instead of p to denote the dimension of \eta. We will return to the original notation of [4] in the subsequent sections.

Consider a simple cycle as defined in Section 3.1, with \eta corresponding to the first row of \Sigma. If \Sigma is admissible then there exists J of the form given by (18) such that (14) is satisfied. Let (a_0, a_1, \dots, a_{n-1}) be the last row of J (see (18)) and let

\psi(x) = x^n - a_{n-1} x^{n-1} - \dots - a_1 x - a_0.
(22)

It follows from (14) that \eta ∈ ker(\psi(P)). In this section we use the following result:

Theorem 1. Let V be a non-trivial invariant subspace of the action of P on R^m. Then there exists a polynomial \varphi(x), which is a divisor of x^m - 1, such that V = ker(\varphi(P)). Moreover, for any \tilde\varphi, the inclusion V ⊂ ker(\tilde\varphi(P)) holds if and only if \varphi is a divisor of \tilde\varphi.

Theorem 1 follows from some classical results of algebra, which we review in the appendix, thereby providing the proof. We now state two corollaries of Theorem 1 which we will use to characterize the possible choices of \eta for which dim(W_\eta) < m.

Corollary 1. If n = dim(W_\eta) < m, then W_\eta = ker(\psi(P)) and \psi is a divisor of x^m - 1.

Proof. It is easy to see that W_\eta ⊂ ker(\psi(P)). We will prove that the opposite inclusion holds and that \psi is a divisor of x^m - 1. By Theorem 1 there exists \varphi, a divisor of x^m - 1, such that ker(\varphi(P)) = W_\eta and \varphi divides \psi. Suppose that \varphi is a proper divisor of \psi. Then n = dim(W_\eta) ≤ deg(\varphi) < n, which is a contradiction. It follows that \psi = \varphi. Hence the corollary holds.

For \varphi a minimal divisor of x^m - 1 (over Q), let \psi = (x^m - 1)/\varphi and let W_\varphi = ker(\psi(P)). The following result leads to a characterization of the \eta's such that n = dim(W_\eta) < m.

Corollary 2. If n = dim(W_\eta) < m, then \eta ∈ W_\varphi for some \varphi a minimal divisor of x^m - 1 (over Q).

Proof.
By Corollary 1 there exists \tilde\psi, a divisor of x^m - 1, such that W_\eta = ker(\tilde\psi(P)). By unique decomposition into prime factors over Q there exists a minimal divisor \varphi of x^m - 1 which divides \tilde\psi. Let \psi = (x^m - 1)/\varphi. Clearly \tilde\psi divides \psi. It follows from Theorem 1 that \eta ∈ ker(\psi(P)) = W_\varphi.

In the remainder of this section we will derive the conditions on \eta needed for \eta ∈ W_\varphi for some (the simplest) choices of the minimal divisor \varphi of x^m - 1. We begin by recalling the decomposition of x^m - 1 into irreducible polynomials over Q. For a positive integer k let

\varphi_k(x) = \sum_{i=0}^{k-1} x^i.

The polynomials \varphi_p, where p is a prime number, are irreducible over Q. For a prime number p and a non-negative integer j we define

\varphi_{p,j}(x) = \sum_{i=0}^{p-1} x^{i p^j}.

Note that \varphi_{p,0} = \varphi_p. The polynomials \varphi_{p,j}(x) are the irreducible factors over Q of the polynomial \varphi_{p^k}. Suppose m = m_1 m_2 \cdots m_l, with m_j = p_j^{k_j}, j = 1, \dots, l, and let

\Phi_m(x) = \varphi_m(x) \Big/ \prod_{j=1}^{l} \varphi_{m_j}(x).

The polynomial \Phi_m(x) is the cyclotomic polynomial of order m and is irreducible. It now follows that the decomposition of x^m - 1 into irreducible factors over Q is given by

x^m - 1 = (x - 1)\, \Phi_m(x) \prod_{j=1}^{l} \prod_{i=0}^{k_j - 1} \varphi_{p_j, i}(x).   (23)

All the possible factors of x^m - 1 over Q are products of the irreducible factors appearing in (23), hence all the possible choices of W_\varphi are obtained in that way. As announced above, we now describe some of the spaces W_\varphi by simple conditions on the components of \eta.

Proposition 2. If \varphi = \varphi_{p_j} for some prime number p_j, then W_\varphi consists of the vectors \eta = (b_0, \dots, b_{m-1}) satisfying

\sum_{i=0}^{m/p_j - 1} b_{i p_j} = \sum_{i=0}^{m/p_j - 1} b_{i p_j + 1} = \dots = \sum_{i=0}^{m/p_j - 1} b_{i p_j + p_j - 1}.   (24)

Proof. We use the identity \psi(x) = (x - 1)\, \varphi_{m/p_j}(x^{p_j}). Hence

\psi(P)\, \eta = (P - I) \left( \sum_{i=0}^{m/p_j - 1} b_{i p_j},\ \sum_{i=0}^{m/p_j - 1} b_{i p_j + 1},\ \dots,\ \sum_{i=0}^{m/p_j - 1} b_{i p_j + m - 1} \right).
(25)

(The indexing of the components of b in (25) must be understood modulo m.) It follows that the RHS of (25) is equal to the zero vector if (24) holds.

We now state the condition on \eta for \varphi = \Phi_m(x). We begin with the following elementary lemma (the proof is left to the reader).

Lemma 1. If k divides m then

ker(P^k - I) = \{ \eta : there exists v ∈ R^k such that \eta = (v, v, \dots, v) \}.   (26)

Proposition 3.
Suppose \varphi = \Phi_m(x). Then

\eta ∈ ker(P^{m_1} - I) + \dots + ker(P^{m_l} - I).   (27)

Proof. Note that

\psi(x) = (x^m - 1)\big/ \Phi_m(x) = (x - 1) \prod_{j=1}^{l} \varphi_{m_j}(x).

Further note that, for each j, (x - 1)\, \varphi_{m_j}(x) = x^{m_j} - 1. Hence, for each j, ker(P^{m_j} - I) ⊂ ker(\psi(P)). Moreover, for j ≠ j',

ker(P^{m_j} - I) ∩ ker(P^{m_{j'}} - I) = span\{ (1, \dots, 1) \}.

The result follows.
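These kernel memberships translate directly into admissible consecutive cycles and are easy to verify numerically. A small pure-Python sketch with our own example (m = 15; the examples below spell out cases of this kind): \eta = (v, v, v) lies in ker(P^5 - I) by Lemma 1, and the consecutive cycle it generates with n = 5 satisfies (14) with the connectivity matrix (18) whose last row is (1, 0, 0, 0, 0).

```python
def shift(v):
    """Row-vector multiplication by the cyclic permutation matrix P:
    (x1, ..., xm) -> (xm, x1, ..., x_{m-1})."""
    return v[-1:] + v[:-1]

def shift_pow(v, k):
    for _ in range(k):
        v = shift(v)
    return v

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# eta = (v, v, v) lies in ker(P^5 - I): shifting by 5 fixes it (Lemma 1).
v = [1, 1, 1, -1, -1]
eta = v * 3                                     # m = 15
eta_P5 = shift_pow(eta, 5)

# Consecutive cycle with n = 5: rows eta, eta P, ..., eta P^4.
Sigma = [shift_pow(eta, j) for j in range(5)]
SigmaP = [shift(r) for r in Sigma]

# Connectivity matrix (18) with last row (a0, ..., a4) = (1, 0, 0, 0, 0).
J = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [1, 0, 0, 0, 0]]
```

All arithmetic is over the integers, so J \Sigma = \Sigma P is checked exactly.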
Remark 1.
Since the coordinates of \eta are ±1, it follows that \eta must be contained in one of the spaces ker(P^{m_j} - I).

Remark 2.
The conditions for the other minimal factors of x^m - 1 are slightly more complicated and we will not state them here. They are, however, not hard to derive.

Example 2.
We consider a vector \eta = (b_0, b_1, \dots, b_{14}) with entries ±1 and m = 15 = 3 · 5, which satisfies (24) with p = 3. Hence, by Proposition 2, \eta ∈ W_\varphi = ker(\psi(P)) with

\varphi = 1 + x + x^2,   \psi = (x - 1)(1 + x^3 + x^6 + x^9 + x^{12}) = -1 + x - x^3 + x^4 - x^6 + x^7 - x^9 + x^{10} - x^{12} + x^{13}.

Hence

\eta P^{13} = \eta - \eta P + \eta P^3 - \eta P^4 + \eta P^6 - \eta P^7 + \eta P^9 - \eta P^{10} + \eta P^{12}.   (28)

Note that the last row of \Sigma P is equal to the LHS of (28). If we define J as in (18) with (a_0, \dots, a_{12}) = (1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1), then the RHS of (28) equals the last row of J \Sigma. Hence, by (28), identity (14) holds with J as specified.

Example 3. We consider \eta = (b_0, b_1, \dots, b_{14}) = (1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, 1, 1, -1, -1), with m = 15 = 3 · 5. Note that \eta ∈ ker(P^5 - I), or, in other words,

\eta P^5 = \eta.   (29)

Arguing as in Example 2, we conclude that \Sigma generated by \eta and n = 5 is admissible, with J whose last row equals (1, 0, 0, 0, 0).

Example 4.
An interesting class of admissible cycles exists for m even with n = m/2. Note that in this case x^m - 1 = (x^n + 1)(x^n - 1), i.e. x^n + 1 divides x^m - 1. Further note that

ker(P^n + I) = \{ \eta ∈ R^m : \eta = (\nu, -\nu),\ \nu ∈ R^n \}.   (30)

Let \Sigma be a cycle constructed with some \eta ∈ ker(P^n + I), n = m/2. Then \Sigma is admissible. Moreover, by a similar argument as in Example 2, we conclude that the last row of J equals (-1, 0, \dots, 0). In particular, Example 1 of Section 3.1 is a special case of this construction. This type of admissible cycle is called antisymmetric in [4].

Example 5.
Since (1 + x + \dots + x^{m-1})(x - 1) = x^m - 1, the space ker(I + P + \dots + P^{m-1}) corresponds to vectors \eta for which \Sigma with n = m - 1 is admissible. Moreover

ker(I + P + \dots + P^{m-1}) = \{ \eta : \sum_i \eta_i = 0 \}.

Note that for \eta's whose entries are ±1 this means that the number of coordinates equal to 1 is the same as the number of coordinates equal to -1. Hence m must be even. In this case the last row of J is (-1, -1, \dots, -1).

We now come to the study of heteroclinic cycles for admissible consecutive simple cycles governed by equation (3), hence with J as in (18). Then the equation reads as the system

\dot x_1 = (1 - x_1^2)( \lambda c_0 x_1 + \lambda c_1 x_2 - f_q(x_1) )
\dot x_2 = (1 - x_2^2)( \lambda c_0 x_2 + \lambda c_1 x_3 - f_q(x_2) )
  \vdots
\dot x_n = (1 - x_n^2)( \lambda c_0 x_n + \lambda c_1 (a_0 x_1 + \dots + a_{n-1} x_n) - f_q(x_n) ).   (31)

Following [4], we also assume that the two coefficients which control the relative contributions of the identity matrix and J to each neuron satisfy

(H)  0 ≤ c_0 < 1  and  c_0 + c_1 = 1.

We aim at studying the existence and stability of heteroclinic cycles connecting vertex equilibria, i.e. equilibria with entries ±1, for this system. By construction, the edges, faces and simplices of the hypercube [-1, 1]^n are invariant under the dynamics of (31). Let \xi = (x_1, \dots, x_n) be a vertex equilibrium: x_k = ±1 for all k. Linearizing (31) at \xi leads to a system of equations \dot u_k = \sigma_k u_k, where the eigenvalues can be expressed as follows:

\sigma_k = 2( f_q(1) - \lambda )  if x_k x_{k+1} = 1, k < n,
\sigma_k = 2( f_q(1) - \lambda (c_0 - c_1) )  if x_k x_{k+1} = -1, k < n,   (32)
\sigma_n = 2\big( f_q(1) - \lambda ( c_0 + c_1 x_n \sum_{j=1}^{n} a_{j-1} x_j ) \big).

Note that under the above conditions on c_0 and c_1, which we assume from now on, a necessary and sufficient condition for the existence of negative and positive eigenvalues with k < n is that

\lambda (c_0 - c_1) < f_q(1) < \lambda.   (33)

This is always possible to realize since |c_0 - c_1| < 1. Then \sigma_k < 0 if x_k x_{k+1} = 1 and \sigma_k > 0 otherwise.

Remark 3.
The equation (12) (hence (31)) is invariant under the symmetry S : x → -x. Therefore any time a cycle \Sigma admits a heteroclinic cycle, the cycle -\Sigma admits the opposite heteroclinic cycle obtained by applying S. In the following we shall always consider \Sigma's up to this symmetry.

Definition 1.
A heteroclinic cycle is called an "edge cycle" if it connects a cyclic sequence of vertex equilibria through heteroclinic orbits lying on the edges of the hypercube [-1, 1]^n. We also require that the unstable manifold at each equilibrium in the cycle has dimension 1 (and is therefore contained in an edge).

The condition on the unstable manifolds is necessary for asymptotic stability of the edge cycles. If \sigma^-_k and \sigma^+_k denote respectively the contracting and expanding eigenvalues along the heteroclinic trajectories, the edge cycle is asymptotically stable if (see [12])

\big| \prod_k \sigma^-_k \big| > \prod_k \sigma^+_k.   (34)

Example 1 provides a simple case of an asymptotically stable edge cycle, see Fig. 2. We show below that all asymptotically stable edge cycles have the same simple structure.

Theorem 2.
Let hypothesis (H) hold. An edge cycle exists for (31) if and only if condition (33) holds, as well as the following:

\lambda \big( c_0 + c_1 (a_0 + a_1 + \dots + a_{n-1}) \big) < f_q(1) < \lambda \big( c_0 + c_1 (-a_0 - a_1 - \dots - a_{n-2} + a_{n-1}) \big).

This edge cycle connects the sequence of 2n equilibria

(1, 1, \dots, 1) → (1, 1, \dots, 1, -1) → \dots → (-1, -1, \dots, -1) → (-1, \dots, -1, 1) → \dots → (-1, 1, \dots, 1) → (1, 1, \dots, 1).   (35)

Proof. Let \xi = (x_1, \dots, x_n) be an equilibrium in the cycle. Note first that according to (32), in order to have one unique positive eigenvalue \sigma_k with k < n, the following must be true: (33) holds and (i) all x_j with j ≤ k have the same sign, (ii) x_k x_{k+1} = -1, and (iii) x_j has the sign of x_{k+1} for k + 1 < j. Let \xi' be the next equilibrium in the cycle; then we must have x'_j = x_j for all j ≠ k and x'_k = -x_k. Observe that we then have \sigma'_k < 0. It is straightforward to check that under (33) there is no equilibrium point lying on the edge joining \xi to \xi', and therefore that a heteroclinic connection exists on this edge.

Now let us assume that the positive eigenvalue is \sigma_n. Then all x_j's, j = 1, \dots, n, must be equal, and the condition \sigma_n > 0 can be written

f_q(1) - \lambda \big( c_0 + c_1 (a_0 + \dots + a_{n-1}) \big) > 0.

Also we require x'_n = -x_n and \sigma'_n < 0, which can be written

f_q(1) - \lambda \big( c_0 + c_1 (-a_0 - \dots - a_{n-2} + a_{n-1}) \big) < 0.

As for the case k < n, we can check that if these inequalities are satisfied, a heteroclinic orbit joins \xi to \xi'. From the above construction we deduce that the edge cycle must connect the equilibria in the sequence (35).

Corollary 3.
Edge cycles are in one-to-one correspondence with connectivity matrices (18) with a_0 = -1 and a_j = 0 for j > 0. Moreover, under hypothesis (H), they are asymptotically stable iff \lambda c_0 > f_q(1).

Example 6. Consider 3 neurons (n = 3) and 4 patterns, with \eta = (1, 1, -1, -1). Defining P as before, we build \Sigma to form a consecutive cycle with n = 3 and p = 4, with rows \{\eta, \eta P, \eta P^2\}. Hence

\Sigma = \begin{pmatrix} 1 & 1 & -1 & -1 \\ -1 & 1 & 1 & -1 \\ -1 & -1 & 1 & 1 \end{pmatrix}.   (36)

Clearly the third row is the opposite of the first one, hence this matrix has rank 2. Nevertheless the cycle is admissible because rank(\Sigma) = rank(\eta), where rank(\eta) is the rank of the matrix [\eta^T, (\eta P)^T, (\eta P^2)^T, (\eta P^3)^T]^T (Theorem 2 of [4]). Since \eta P^3 = -\eta - \eta P - \eta P^2, it follows that

J = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & -1 \end{pmatrix}.   (37)

Note that this example illustrates the criterion derived in Section 3.2, Example 5. The equations read

\dot x_1 = (1 - x_1^2)( \lambda c_0 x_1 + \lambda c_1 x_2 - f_q(x_1) )
\dot x_2 = (1 - x_2^2)( \lambda c_0 x_2 + \lambda c_1 x_3 - f_q(x_2) )   (38)
\dot x_3 = (1 - x_3^2)( \lambda c_0 x_3 - \lambda c_1 (x_1 + x_2 + x_3) - f_q(x_3) ).

Numerical simulations exhibit a heteroclinic cycle for (3) with J given above, see Fig. 3. Observe that after a short transient time x_1 and x_3 are opposite and move (in opposite directions) while x_2 is fixed at ±1. This indicates that a heteroclinic orbit (if it exists) connects opposite vertices in the faces x_2 = ±1. Now if we set x_3 = -x_1 in (38) with x_2 = ±1, we see that the first and third equations are identical. Therefore the diagonal axis joining the vertex (1, ±1, -1) to (-1, ±1, 1) is flow-invariant. Moreover the eigenvalues at opposite vertices along each of these axes have opposite signs, as in the previous sections, showing that a saddle-sink connection exists on these diagonal axes. A more detailed calculation shows that on each of these faces the dynamics looks like Fig. 4. Let us explain why the dynamics aligns itself on the diagonal connection (in black in Fig. 4). The unstable eigenvalue at the point (-1, -1, -1) is given by \sigma_n (n = 3) in (32):

\sigma = 2 \lambda ( -c_0 + 3 c_1 ) + 2 f_q(1).
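The flow-invariance of these diagonals is easy to confirm numerically. Below is a short Euler run of (38) started exactly on the diagonal {x_2 = 1, x_3 = -x_1}; the values of \lambda, c_0, the degree q and the step size are our own illustrative choices.

```python
def fq(x, q=5):
    """Odd Taylor polynomial of arctanh: x + x^3/3 + ... + x^q/q."""
    return sum(x ** k / k for k in range(1, q + 1, 2))

def rhs(x, lam, c0):
    """Right-hand side of (38), the three-neuron system with connectivity (37)."""
    c1 = 1.0 - c0
    return [
        (1.0 - x[0] ** 2) * (lam * c0 * x[0] + lam * c1 * x[1] - fq(x[0])),
        (1.0 - x[1] ** 2) * (lam * c0 * x[1] + lam * c1 * x[2] - fq(x[1])),
        (1.0 - x[2] ** 2) * (lam * c0 * x[2]
                             - lam * c1 * (x[0] + x[1] + x[2]) - fq(x[2])),
    ]

# Start on the diagonal {x2 = 1, x3 = -x1} of the invariant face x2 = 1.
lam, c0, dt = 2.0, 0.2, 0.001
x = [0.5, 1.0, -0.5]
for _ in range(20000):
    k = rhs(x, lam, c0)
    x = [x[i] + dt * k[i] for i in range(3)]
```

Up to integration error the relation x_3 = -x_1 persists while x_2 stays pinned at 1, and x_1 drifts toward the vertex value 1, as the face picture of Fig. 4 suggests.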
We now come back to the general problem. Lemma 2. Let ξ = ( x , . . . , x k , . . . , x l , . . . , x n ) be a vertex equilibrium for (31) . If there are m switches of sign in the sequence x , . . . , x n , then ξ has either m or m + 1 unstable directions. The lattercase occurs if σ n > in (32) . The unstable manifold of ξ is contained in the hyperface generated by itsunstable eigenvectors. c = 0 . , λ = 2 . , random initial conditions close to origin. Colourcode: blue = x , red = x , green = x . Proof. One already knows that if j < n , then σ j > iff x j x j +1 < . However when j = n , thesign of the eigenvalue depends upon the coefficients a , . . . , a n . The last claim is straightforward from(31). Lemma 3. An equilibrium ξ possesses a 2-dimensional unstable manifold if and only if the two con-ditions are satisfied: (i) either the sequence of coordinates in ξ undergoes two switches of signs and σ n < , or one switch of signs and σ n > ; (ii) if x k and x l are the unstable directions, then x k x l < .Proof. Condition (i) is clear from Lemma 2. If condition (ii) is not satisfied, then an additional changeof sign must occur somewhere between x k and x l and therefore an additional positive eigenvalue mustexist.The next lemma characterizes when σ n > when ξ is a column vector of a consecutive cycle Σ . Lemma 4. Let ξ = ( x , . . . , x n ) be an equilibrium in a consecutive cycle. We write ξ (cid:48) = J ξ =( x (cid:48) , . . . , x (cid:48) n ) T where J is the connectivity matrix (18) . Then under the condition (33) , σ n > if andonly if x n x (cid:48) n < .Proof. Recall that σ n = − (cid:16) λ ( c + c (cid:80) nj =1 a j x j ) − f q (1) (cid:17) . By construction (cid:80) nj =1 a j x j = x (cid:48) n ,therefore by (32), σ n = − λ ( c + c x n x (cid:48) n ) − f q (1)) . The claim follows by (33). Lemma 5. Let ξ = ( x , . . . , x n ) be a vertex equilibrium in a simple consecutive cycle with connectivitymatrix J and ξ (cid:48) = J ξ = ( x (cid:48) , . . . 
, x′ₙ). Suppose that ξ possesses a 2-dimensional unstable manifold along the directions x_k and x_l, which by construction implies x′_k = −x_k and x′_l = −x_l. Then a heteroclinic saddle-sink connection ξ → ξ′ exists in the face of coordinates (x_k, x_l) if and only if l ≠ k ± 1. Moreover, in this case the diagonal segment joining ξ to ξ′ is flow-invariant.

Figure 4: dynamics in the face x₂ = −1 for (39). The cross indicates an equilibrium lying on the edge, off the vertices.

Proof. The face F_kl = {(x₁, …, x_{k−1}, u, x_{k+1}, …, x_{l−1}, v, x_{l+1}, …, xₙ), −1 ≤ u, v ≤ 1} is flow-invariant. Suppose first that k + 1 < l (the case l < k − 1 is of course similar). Then the equations in F_kl are

    u̇ = (1 − u²)(λc₁u + λc₂x′_k − f_q(u))
    v̇ = (1 − v²)(λc₁v + λc₂x′_l − f_q(v))                    (39)

Set v = −u. Then the two above equations are identical, because by Lemma 3 we have x_k x_l = −1, which also implies x′_k x′_l = −1. The saddle-sink connection along this segment follows from the same analysis as in the "edge" case.

Suppose now that l = k + 1. Then in ξ′ the coordinates of indices k and k + 1 have opposite signs, which implies that the eigenvalue σ′_k of ξ′ is positive. Therefore ξ′ is a saddle or a source in the face joining ξ to ξ′, which proves that no saddle-sink connection ξ → ξ′ can exist.

This lemma can be generalized in a straightforward way to more than two unstable eigenvalues.

Definition 2. Let Σ = (ξ₁, …, ξₚ) be a simple, admissible consecutive cycle. Σ has adjacent switches if in one column (at least) the sign of the entries changes two or more times consecutively.

The following theorem summarizes the previous results.

Theorem 3. Let hypothesis (H) hold. For the admissible simple consecutive cycle Σ = (ξ₁, …, ξₚ) with Hopfield equations (31), the equilibria ξ₁, …
, ξₚ are connected by a robust heteroclinic cycle if and only if: (i) condition (33) is satisfied; (ii) Σ has no adjacent switches. The heteroclinic connections ξᵢ → ξᵢ₊₁ lie either along the corresponding edge of the unit cube in phase space, or inside the corresponding face, the dimension of which is equal to the number q of switches of sign of coordinates from ξᵢ to ξᵢ₊₁. In the latter case these connections form a q-dimensional manifold, and in this manifold one of them is the diagonal segment joining ξᵢ to ξᵢ₊₁.

Example 6 above illustrates this theorem for a network of three neurons: in (36) there is one switch of sign from the first column to the second, while from the second column to the third there are two non-adjacent switches. The resulting dynamics close to the heteroclinic cycle is shown in Fig. 3.

The next example also concerns a network with three neurons; however, it is a counter-example to the existence of a heteroclinic cycle.

Example 7. Let us take Σ in the following form:

    Σ = (  1   1  −1
          −1   1   1
           1  −1   1 )                                       (40)

This matrix defines a simple minimal consecutive cycle: it is 3 × 3 and invertible. Since P³ = Id, it is easy to find that

    J = ( c₁  c₂   0
           0  c₁  c₂
          c₂   0  c₁ )                                       (41)

Observe that in Σ the third row has two adjacent switches of sign. The numerical simulation shows a dynamics converging to the equilibrium (1, 1, 1) (Fig. 5).

We cannot rule out the possibility that condition (ii) in Theorem 3 is not satisfied but the network still possesses a heteroclinic cycle. However, in this case the cycle will be different from the one defined by Σ. Two different Σ's can give the same connectivity matrix. Next is an example of this kind.

Example 8. This example shows that the assumption of Theorem 3 concerning the absence of adjacent switches is essential. Let us consider the minimal consecutive cycle Σ̃ with p = 6 and n = 5 which was introduced in [4] (Example 6).

Figure 5: time series for the network with connectivity matrix (41); c = 0.…, λ = 2, initial condition close to the first equilibrium.
Colour code: blue = x₁, red = x₂, green = x₃.

By Example 5 of Section 3.2, the connectivity matrix is

    J = (  c₁   c₂    0    0    0
            0   c₁   c₂    0    0
            0    0   c₁   c₂    0
            0    0    0   c₁   c₂
          −c₂  −c₂  −c₂  −c₂   c₁ − c₂ )                     (43)

Observe that Σ̃ possesses two adjacent sign switches in its first column. The simulation of the dynamics in this case shows a heteroclinic cycle, however not the one which would correspond to the cycle formed by the columns of Σ̃ (Fig. 6), which is consistent with Theorem 3. In the figure we see that one coordinate moves first from +1 to −1 while two of the others are equal to +1 and the remaining two to −1; then a second coordinate moves from +1 to −1 and simultaneously a third one moves from −1 to +1; then the last two coordinates do the same, and the process repeats itself. The corresponding cycle is given by the matrix (44) whose columns are the six vertices visited in this order (see also Section 4.3.2).

In the following example the rank of Σ is not maximal.

Figure 6: time series (after transients) for the network with connectivity matrix (43); c = 0.…, λ = 2.…, random initial conditions. Colour code: blue = x₁, red = x₂, green = x₃, black = x₄, yellow = x₅.

Example 9. Let us take for Σ a consecutive cycle with n = 4 and p = 6 (45). Observe that the last row is the opposite of the first one, which we call η, and that rank(Σ) = 3. The cycle is admissible and, since ηP⁴ = −ηP, we have that

    J = ( c₁   c₂   0   0
           0   c₁  c₂   0
           0    0  c₁  c₂
           0  −c₂   0  c₁ )                                  (46)

A numerical simulation is shown in Fig. 7.

The case p = n

In this case Σ is a square matrix. By construction, if η denotes the first row of Σ, then Σ = (ηᵀ, (ηP)ᵀ, …, (ηPⁿ⁻¹)ᵀ). It then follows from Theorem 2 in [4] that the cycle is admissible. Moreover ηPⁿ = η, which implies that in (31), a₁ = 1 and aⱼ = 0 for j > 1. Hence

    J = ( c₁  c₂   0  …   0
           0  c₁  c₂  …   0
           ⋮       ⋱  ⋱   ⋮
           0   0  …  c₁  c₂
          c₂   0  …   0  c₁ )                                (47)

Figure 7: time series (after transients) for the network with connectivity matrix (46); c = 0.…, λ = 2.…, random initial conditions. Colour code: blue = x₁, red = x₂, green = x₃, black = x₄.

Observe that for n = 3 adjacent switches always exist in this case (see Example 2).
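The adjacent-switch condition of Definition 2 is mechanical to check. The sketch below builds a square consecutive cycle Σ from a generating row η (rows η, ηP, …, ηPⁿ⁻¹, taking P to be a right cyclic shift, an assumption of this sketch) and scans each column for consecutive sign changes; the function names are illustrative and not from the paper.

```python
# Sketch: detect "adjacent switches" (Definition 2) in a square consecutive
# cycle generated by a row eta. ASSUMPTION: P acts as a right cyclic shift.

def sigma_from_eta(eta):
    """Rows are eta, eta*P, ..., eta*P^(n-1) for an n-entry generator eta."""
    n = len(eta)
    return [eta[-j:] + eta[:-j] for j in range(n)]


def has_adjacent_switches(sigma):
    """True iff in some column the sign changes at two consecutive positions."""
    ncols = len(sigma[0])
    for c in range(ncols):
        col = [row[c] for row in sigma]
        switches = [i for i in range(len(col) - 1) if col[i] * col[i + 1] < 0]
        if any(b - a == 1 for a, b in zip(switches, switches[1:])):
            return True
    return False


# n = 3: adjacent switches are unavoidable, as noted above.
print(has_adjacent_switches(sigma_from_eta([1, 1, -1])))      # True
# n = 4: eta = (1, 1, -1, -1) yields none, consistent with Example 10.
print(has_adjacent_switches(sigma_from_eta([1, 1, -1, -1])))  # False
```

Scanning all ±1 generators of a given length with this test is a quick way to enumerate the square cycles that can support a robust heteroclinic cycle by Theorem 3.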
However, for all square simple consecutive cycles of a given dimension n > 3 such that no adjacent switches occur, we can conclude that several non-edge heteroclinic cycles can coexist, and that their number increases with n.

Example 10. n = 4. The only square consecutive cycle with non-adjacent switches is generated by η = (1, 1, −1, −1). Time series of the heteroclinic cycle are shown in Fig. 8.

Example 11. n = 6. The following square consecutive cycles have no adjacent switches: η = (1, 1, 1, −1, −1, −1) and η = (1, 1, 1, 1, −1, −1). The first heteroclinic cycle has connections on three different faces, while the second cycle has connections on six different faces.

Figure 8: time series for the square consecutive cycle with p = n = 4; c = 0.…, λ = 3.…, random initial conditions. Colour code: blue = x₁, red = x₂, green = x₃, black = x₄.

The case p even and n = p − 1

An antisymmetric cycle is generated by a row vector η = (v, −v), so that p is even and v has p/2 entries. In this case the conditions of Example 5 of Section 3.2 hold and

    J = (  c₁   c₂    0  …    0
            0   c₁   c₂  …    0
            ⋮        ⋱   ⋱    ⋮
            0    0   …   c₁  c₂
          −c₂  −c₂   …  −c₂   c₁ − c₂ )                      (48)

Moreover, by construction Σ does not contain adjacent switches. Hence a heteroclinic cycle exists for this matrix. Two such examples have already been discussed: with p = 4 and n = 3 (Example 6), and with p = 6 and n = 5 (Example 8) (see Figs. 3 and 6, respectively).

The case n < p, p odd, given by Propositions 2 and 3

Example 2 shows the construction of J for η satisfying (24) with p = 15 and n = 13. We leave the obvious generalization of this construction to the reader. Figure 9 shows a heteroclinic cycle obtained for η = (1, 1, 1, 1, 1, 1, 1, −1, −1, −1, 1, 1, 1, 1, 1), p = 15 and n = 13.

For η ∈ ker(Pⁿ − I), as stipulated by Proposition 3 and Remark 1, the heteroclinic cycles which arise are the same as for η truncated to a single block. Multiple blocks simply correspond to repeated passages through the same heteroclinic cycle.
For example, if we use, as in Example 2 of Section 3.2, η = (1, 1, 1, −1, −1, 1, 1, 1, −1, −1, 1, 1, 1, −1, −1) with n = 5, we obtain a heteroclinic cycle in ℝ⁵ corresponding to η = (1, 1, 1, −1, −1). A triple passage through this cycle gives η as stated above.

Figure 9: time series (after transients) for the consecutive cycle with p = 15, n = 13; c = 0.…, λ = 3.…, random initial conditions. Top picture: time series of x₁, …, x₇; bottom picture: time series of x₈, …, x₁₃. Colour code: blue = x₁, x₈; red = x₂, x₉; green = x₃, x₁₀; black = x₄, x₁₁; yellow = x₅, x₁₂; cyan = x₆, x₁₃; magenta = x₇.

In this work we have studied robust heteroclinic cycles in Hopfield networks with coupling given by the learning rule of [13]. We gave an extensive classification of heteroclinic cycles for couplings of a simple consecutive type. In particular, we established a tight relation between the structure of the coupling and the heteroclinic cycles supported by the resulting network. This work is part of the general program of establishing connections between heteroclinic/homoclinic dynamics and neural processing (see [14] for an outline of this program).

An interesting direction for continuing this work is to determine whether the correspondence between cycles in the coupling and robust heteroclinic cycles carries over to more realistic settings. As a first attempt at such a generalization we intend to introduce delays, as delays arise naturally in neural coupling and can play a functional role [7]. Other generalizations include considering systems with noise, more realistic models of neurons, or generalizations of the learning rule.

Appendix on invariant spaces for linear actions

The material presented here is standard in algebra and is related to the rational or Frobenius normal form for matrices; see for example [6].
We have not found a reference that presents the necessary results in a concise fashion; thus we feel that there is a need for this appendix.

Fix a linear transformation T : ℝⁿ → ℝⁿ. We are interested in understanding the (T-)invariant subspaces of ℝⁿ, that is, those subspaces W with T(W) ⊆ W, and in particular in identifying the maximal proper invariant subspaces. Here we will argue that invariant subspaces are very closely linked to the action of the polynomial ring ℝ[x] on the linear transformations of ℝⁿ given by (f, T) ↦ f(T). Rather than studying specifically the permutation P, we consider the more general context of cyclic transformations. A transformation T is cyclic if there exists a vector v ∈ ℝⁿ such that ℝⁿ = span{Tʲv, j = 0, 1, …}. We will prove the following result.

Theorem 4. Let T : ℝⁿ → ℝⁿ be a cyclic transformation. There exists a polynomial f_T of degree n (the minimal polynomial) with the following property: the invariant spaces for the action of T on ℝⁿ are in one-to-one correspondence with the non-trivial factors of the polynomial f_T. If f is such a factor, then ker(f(T)) is the corresponding invariant space.

It will be clear from the arguments below that f_P = xⁿ − 1. Hence Theorem 1 is a direct consequence of Theorem 4.

Generalities on the correspondence between invariant subspaces and polynomials

For each invariant subspace W ⊆ ℝⁿ we get a map ℝ[x] → L(W, W), f ↦ f(T)|_W, whose kernel is an ideal of ℝ[x]. Since ℝ[x] is a Principal Ideal Domain (PID), this kernel is a principal ideal. We denote the monic generator of this ideal by f_W. Note that W = ker(f_W(T)). Note also that for W = ℝⁿ the polynomial f_{ℝⁿ} is the minimal polynomial of T, and we will denote it by f_T. A few observations in this setting will be useful.

Lemma 6. If f ∈ ℝ[x], then both ker(f(T)) and Im(f(T)) are invariant subspaces.

Proof.
This is a consequence of the fact that, for any h ∈ ℝ[x], the linear transformations h(T) and f(T) commute, so that w ∈ ker(f(T)) implies f(T)(T(w)) = T(f(T)(w)) = T(0) = 0, and w = f(T)(v) implies T(w) = T(f(T)(v)) = f(T)(T(v)) ∈ Im(f(T)).

Lemma 7. If f ∈ ℝ[x] and f′ = gcd(f, f_T), then ker(f(T)) = ker(f′(T)) and Im(f(T)) = Im(f′(T)).

Proof. To see this, first write f′ = gf + h f_T with g, h ∈ ℝ[x], by the Euclidean Algorithm. It follows that f′(T) = g(T) ∘ f(T) = f(T) ∘ g(T), since f_T(T) = 0 by definition of the minimal polynomial. Also, since f′ divides f, there is k ∈ ℝ[x] such that f = k f′ and thus f(T) = k(T) ∘ f′(T) = f′(T) ∘ k(T). For the statement about images we have

    f(T) = f′(T) ∘ k(T)  ⟹  Im(f(T)) ⊆ Im(f′(T)),
    f′(T) = f(T) ∘ g(T)  ⟹  Im(f′(T)) ⊆ Im(f(T)),

and thus f(T) and f′(T) have equal images. The proof for kernels is similar.

Corollary 4. If gcd(f, g) = 1, then ker(f(T)) ∩ ker(g(T)) = {0}.

Proof. Suppose v ∈ ker(f(T)) ∩ ker(g(T)). We write 1 = hf + kg, and thus v = h(T)(f(T)(v)) + k(T)(g(T)(v)) = h(T)(0) + k(T)(0) = 0.

Cyclic subspaces

For v ∈ ℝⁿ let [v]_T = span{v, Tv, T²v, …}. We refer to [v]_T as the cyclic subspace generated by v. Clearly, every minimal subspace must be of this form. We will prove in Lemma 10 that every invariant subspace has this form. Now let v ∈ ℝⁿ; again from the action of the polynomial ring we obtain a map ℝ[x] → ℝⁿ, f ↦ f(T)(v). Again this map is linear, and its kernel is a principal ideal of ℝ[x], the monic generator of which we will denote by f_v. The image of this map is [v]_T.

Lemma 8.
Let v ∈ ℝⁿ; then f_v = f_{[v]_T} and dim([v]_T) = deg(f_v).

Proof. The first statement follows since, if f(T) annihilates v, then it also annihilates g(T)(v) for any g ∈ ℝ[x]. The second statement follows since, by the first isomorphism theorem, we have [v]_T ≅ ℝ[x]/Idl(f_v), and the dimension of ℝ[x]/Idl(f_v) is equal to deg(f_v).

Lemma 9. Let v ∈ ℝⁿ and f, g ∈ ℝ[x] with fg = f_v. Then f_w = f for w = g(T)(v).

Proof. Since f(T)(w) = f_v(T)(v) = 0, it follows that f_w divides f. We prove that if h is a monic divisor of f satisfying h(T)(w) = 0, then f = h. Since h(T)(w) = 0 is equivalent to (hg)(T)(v) = 0, it follows that f_v = fg divides hg, and thus deg(f) ≤ deg(h). Since h is monic, it follows that f = h.

In the proof of the next lemma we need the fact that ℝ[x] is a Unique Factorization Domain (UFD), which means that each non-zero polynomial f ∈ ℝ[x] may be written as f = a f₁^{n₁} ⋯ f_k^{n_k}, where a is a real number and each fᵢ is an irreducible divisor of f which is also prime (that is, for all h, k ∈ ℝ[x], if fᵢ divides hk, then fᵢ divides h or fᵢ divides k).

Lemma 10. Let W be an invariant subspace. Then there is w ∈ W with f_w = f_W.

Proof. To see this, first note that f_W = lcm(f_w | w ∈ W) (lcm denotes the least common multiple). Thus, for any irreducible divisor f of f_W, and for k the largest power of f that divides f_W, there must be a w ∈ W such that fᵏ divides f_w. Now taking g = f_w/fᵏ, we see by Lemma 9 that f_{w′} = fᵏ, where w′ = g(T)(w). Doing this for each irreducible divisor of f_W, the sum of the resulting w′'s is the required element by Corollary 4, since f_{w′} = fᵏ implies w′ ∈ ker(fᵏ(T)).

Cyclic transformations

Recall that a linear transformation T : ℝⁿ → ℝⁿ is cyclic if there is a v ∈ ℝⁿ such that [v]_T = ℝⁿ. We call v a cyclic generator.

Lemma 11.
If T is cyclic and fg = f_T, then dim(Im(g(T))) = deg(f).

Proof. This follows from Lemmas 8 and 9: take v ∈ ℝⁿ with [v]_T = ℝⁿ. Then f_v = f_T is the minimal polynomial, and thus f = f_w for w = g(T)(v), and dim([w]_T) = deg(f). Finally, note that

    [w]_T = span( Tⁱ(g(T)(v)) = g(T)(Tⁱ(v)) | i ∈ ℕ ) = g(T)(ℝⁿ) = Im(g(T)).

Lemma 12. If T is cyclic and fg = f_T, then Im(g(T)) = ker(f(T)).

Proof. For any w with w = g(T)(w′) we have f(T)(w) = f(T)(g(T)(w′)) = 0, so Im(g(T)) ⊆ ker(f(T)), and we have

    n = deg(f) + deg(g) = dim(Im(g(T))) + dim(Im(f(T))) ≤ dim(ker(f(T))) + dim(Im(f(T))) = n.

Thus Im(g(T)) = ker(f(T)).

The following result now follows.

Theorem 5. If T is a cyclic linear operator on ℝⁿ and v ∈ ℝⁿ is a cyclic generator, then the invariant subspaces of ℝⁿ for T are in one-to-one correspondence with the pairs (f, g) such that fg = f_T. The space corresponding to such a pair (f, g) is [g(T)(v)]_T = Im(g(T)) = ker(f(T)). In particular, the minimal invariant subspaces correspond to the pairs (f, g) where f is an irreducible factor of f_T in ℝ[x], and the maximal invariant subspaces correspond to the pairs (f, g) where g is an irreducible factor of f_T in ℝ[x].

Note that Theorem 5 implies Theorem 4.

Acknowledgement

This work was partially supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements no. 269921 (BrainScaleS) and no. 318723 (Mathemacs), and by the ERC advanced grant NerVi no. 227747.

References

[1] P. Ashwin, O. Karabacak and T. Nowotny. Criteria for robustness of heteroclinic cycles in neural microcircuits. J. Math. Neurosci. 1:13 (2011).
[2] C. Bick, M. I. Rabinovich. Dynamical origin of the effective storage capacity in the brain's working memory. Phys. Rev. Lett. 103(21): 218101 (2009).
[3] P.
Chossat, R. Lauterbach. Methods in Equivariant Bifurcation and Dynamical Systems. Advanced Series in Nonlinear Dynamics, World Scientific, Singapore (2000).
[4] Chuan Zhang, G. Dangelmayr, I. Oprea. Storing cycles in Hopfield-type networks with pseudo-inverse learning rule: admissibility and network topology. Neural Networks, 283–298 (2013).
[5] Chuan Zhang, G. Dangelmayr, I. Oprea. Storing cycles in Hopfield-type networks with pseudo-inverse learning rule: retrievability and bifurcation analysis. Submitted (2013).
[6] D. S. Dummit, R. M. Foote. Abstract Algebra. Wiley.
[7] G. B. Ermentrout, D. H. Terman. Mathematical Foundations of Neuroscience. Interdisciplinary Applied Mathematics, Vol. 35, Springer (2010).
[8] T. Fukai, S. Tanaka. A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comput. 9: 77–97 (1997).
[9] T. Gencic, M. Lappe, G. Dangelmayr and W. Guettinger. Storing cycles in analog neural networks. In: Parallel Processing in Neural Systems and Computers, R. Eckmiller, G. Hartmann and G. Hause (Eds), 445–450, North-Holland (1990).
[10] J. Hofbauer, K. Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press (1998).
[11] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79(8): 2554–2558 (1982).
[12] M. Krupa. Robust heteroclinic cycles. J. Nonlinear Sci. 7: 129–176 (1997).
[13] L. Personnaz, I. Guyon, G. Dreyfus. Collective computational properties of neural networks: new learning mechanisms. Physical Review A 34(5): 4217–4228 (1986).
[14] M. I. Rabinovich, P. Varona, A. I. Selverston, H. D. I. Abarbanel. Dynamical principles in neuroscience. Reviews of Modern Physics 78(4): 1213–1265 (2006).
[15] A. Szucs, R. Huerta, M. I. Rabinovich, A. I. Selverston. Robust microcircuit synchronization by inhibitory connections. Neuron 61 (2009).