A boundary between universality and non-universality in spiking neural P systems
Turlough Neary

Boole Centre for Research in Informatics, University College Cork, Ireland.
Abstract
In this work we offer a significant improvement on the previous smallest spiking neural P systems and solve the problem of finding the smallest possible extended spiking neural P system. Păun and Păun [15] gave a universal spiking neural P system with 84 neurons and another that has extended rules with 49 neurons. Subsequently, Zhang et al. [18] reduced the number of neurons used to give universality to 67 for spiking neural P systems and to 41 for the extended model. Here we give a small universal spiking neural P system that has only 17 neurons and another that has extended rules with 5 neurons. All of the above mentioned spiking neural P systems suffer from an exponential slowdown when simulating Turing machines. Using a more relaxed encoding technique we get a universal spiking neural P system that has extended rules with only 4 neurons. This latter spiking neural P system simulates 2-counter machines in linear time and thus suffers from a double-exponential time overhead when simulating Turing machines. We show that extended spiking neural P systems with 3 neurons are simulated by log-space bounded Turing machines, and so there exists no such universal system with 3 neurons. It immediately follows that our 4-neuron system is the smallest possible extended spiking neural P system that is universal. Finally, we show that if we generalise the output technique we can give a universal spiking neural P system with extended rules that has only 3 neurons. This system is also the smallest of its kind, as a universal spiking neural P system with extended rules and generalised output is not possible with 2 neurons.
Key words: spiking neural P systems, small universal spiking neural P systems, computational complexity, strong universality, weak universality
Email address: [email protected] (Turlough Neary).
URL: ∼tneary/ (Turlough Neary). Turlough Neary is funded by Science Foundation Ireland Research Frontiers Programme grant number 07/RFP/CSMF641.

1. Introduction
Spiking neural P systems (SN P systems) [5] are quite a new computational model, inspired by a synergy of P systems and spiking neural networks. It has been shown that these systems are computationally universal [5]. Recently, Păun and Păun [15] gave two small universal SN P systems: an SN P system with 84 neurons and an extended SN P system with 49 neurons (that uses rules without delay). Păun and Păun conjectured that it is not possible to give a significant decrease in the number of neurons of their two universal systems. Zhang et al. [18] offered such a significant decrease in the number of neurons used to give such small universal systems. They give a universal SN P system with 67 neurons and another, which has extended rules (without delay), with 41 neurons. Here we give a small universal SN P system that has only 17 neurons and another, which has extended rules (without delay), with 5 neurons. Using a more relaxed encoding we get a universal SN P system that has extended rules (without delay), with 4 neurons. Table 1 gives the smallest universal SN P systems and their respective simulation time and space overheads. Note from Table 1 that, in addition to its small size, our 17-neuron system uses rules without delay. The other small universal SN P systems with standard rules [15,18] do not have this restriction.

In this work we also show that extended SN P systems with 3 neurons and generalised input are simulated by log-space bounded Turing machines. As a result, it is clear that there exists no such universal system with 3 neurons, and thus our 4-neuron system is the smallest possible universal extended SN P system. Following this, we show that if we generalise the output technique we can give a universal SN P system with extended rules that has only 3 neurons.
In addition, we show that a universal SN P system with extended rules and generalised output is not possible with 2 neurons, and thus our 3-neuron system is the smallest of its kind.

From a previous result [13] it is known that there exists no universal SN P system that simulates Turing machines in less than exponential time and space. It is a relatively straightforward matter to generalise this result to show that extended SN P systems suffer from the same inefficiencies. It immediately follows that the universal systems we present here and those found in [15,18] have exponential time and space requirements. However, it is possible to give a time efficient SN P system when we allow exhaustive use of rules. A universal extended SN P system with exhaustive use of rules has been given that simulates Turing machines in linear time [12]. Furthermore, this system has only 10 neurons. SN P systems with exhaustive use of rules were originally proved computationally universal by Ionescu et al. [4]. However, the technique used to prove universality suffered from an exponential time overhead.

Using different forms of SN P systems, a number of time efficient (polynomial or constant time) solutions to NP-hard problems have been given [2,8,9]. All of these solutions to NP-hard problems rely on families of SN P systems. Specifically, the size of the problem instance determines the number of neurons in the SN P system that solves that particular instance. This is similar to solving problems with circuit families, where each input size has a specific circuit that solves it. Ionescu and Sburlan [6] have shown that SN P systems simulate circuits in linear time.

In Section 2 we give a definition for SN P systems, explain their operation and give other relevant technical details. In Section 3 we give a definition for counter machines and we also discuss some notions of universality.
Following this, in Section 4 we give our small universal SN P systems and show how their size can be reduced if we use a more relaxed encoding. In Section 5 we give our proof showing that extended SN P systems with 3 neurons and generalised input are simulated by log-space bounded Turing machines. Section 5 also contains our universal 3-neuron system with generalised output. We end the paper with some discussion and conclusions.

number of   simulation               type of     exhaustive     author
neurons     time/space               rules       use of rules
84          exponential              standard    no             Păun and Păun [15]
67          exponential              standard    no             Zhang et al. [18]
49          exponential              extended†   no             Păun and Păun [15]
41          exponential              extended†   no             Zhang et al. [18]
12          double-exponential       extended†   no             Neary [14]
18          exponential              extended    no             Neary [11,14]*
125         exponential/             extended†   yes            Zhang et al. [17]
            double-exponential
18          polynomial/exponential   extended    yes            Neary [13]
10          linear/exponential       extended    yes            Neary [12]
17          exponential              standard†   no             Section 4
5           exponential              extended†   no             Section 4
4           double-exponential       extended†   no             Section 4
3           double-exponential       extended‡   no             Section 5

Table 1
Small universal SN P systems. The "simulation time" column gives the overheads used by each system when simulating a standard single tape Turing machine. † indicates that there is a restriction on the rules as delay is not used, and ‡ indicates that a more generalised output technique is used. *The 18-neuron system is not explicitly given in [14]; it is however mentioned at the end of the paper and is easily derived from the other system presented in [14]. Also, its operation and its graph were presented in [11].
2. SN P systems

Definition 1 (Spiking neural P system)
A spiking neural P system (SN P system) is a tuple

Π = (O, σ_1, σ_2, ..., σ_m, syn, in, out), where:

(i) O = {s} is the unary alphabet (s is known as a spike),
(ii) σ_1, σ_2, ..., σ_m are neurons, of the form σ_i = (n_i, R_i), 1 ≤ i ≤ m, where:
  (a) n_i ≥ 0 is the initial number of spikes contained in σ_i,
  (b) R_i is a finite set of rules of the following two forms:
    (i) E/s^b → s; d, where E is a regular expression over s, b ≥ 1 and d ≥ 0,
    (ii) s^e → λ, where λ is the empty word, e ≥ 1, and for all E/s^b → s; d from R_i, s^e ∉ L(E), where L(E) is the language defined by E,
(iii) syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} is the set of synapses between neurons, where i ≠ j for all (i, j) ∈ syn,
(iv) in, out ∈ {σ_1, σ_2, ..., σ_m} are the input and output neurons, respectively.

A firing rule r = E/s^b → s; d is applicable in a neuron σ_i if there are j ≥ b spikes in σ_i and s^j ∈ L(E), where L(E) is the set of words defined by the regular expression E. If, at time t, rule r is executed, then b spikes are removed from the neuron, and at time t + d the neuron fires. When a neuron σ_i fires, a spike is sent to each neuron σ_j for every synapse (i, j) in Π. Also, the neuron σ_i remains closed and does not receive spikes until time t + d, and no other rule may execute in σ_i until time t + d + 1. A forgetting rule r′ = s^e → λ is applicable in a neuron σ_i if there are exactly e spikes in σ_i. If r′ is executed, then e spikes are removed from the neuron. At each timestep t a rule must be applied in each neuron if there is one or more applicable rules at time t. Thus, while the application of rules in each individual neuron is sequential, the neurons operate in parallel with each other.

Note from 2b(i) of Definition 1 that there may be two rules of the form E/s^b → s; d that are applicable in a single neuron at a given time. If this is the case, then the next rule to execute is chosen non-deterministically.

An extended SN P system [15] has more general rules of the form
E/s^b → s^p; d, where b ≥ p ≥ 1. Also, if in a rule E = s^b, then we write the rule as s^b → s^p.

In the same manner as in [15], spikes are introduced into the system from the environment by reading in a binary sequence (or word) w ∈ {0, 1}* via the input neuron. The sequence w is read from left to right, one symbol at each timestep, and a spike enters the input neuron on a given timestep iff the read symbol is 1. The output of an SN P system Π is the time between the first and second firing rule applied in the output neuron and is given by the value Π(w) ∈ N.

A configuration c of an SN P system consists of a word w and a sequence of natural numbers (r_1, r_2, ..., r_m), where r_i is the number of spikes in σ_i and w represents the remaining input yet to be read into the system. A computation step c_j ⊢ c_{j+1} is as follows: each number r_i is updated depending on the number of spikes neuron σ_i uses up and receives during the synchronous application of all applicable rules in configuration c_j. In addition, if w ≠ λ, then the leftmost symbol of w is removed. An SN P system computation is a finite sequence of configurations c_0, c_1, ..., c_t that ends in a terminal configuration c_t, where for all j < t, c_j ⊢ c_{j+1}. A terminal configuration is a configuration where the input sequence has finished being read in via the input neuron (i.e. w = λ, the empty word) and either there is no applicable rule in any of the neurons or the output neuron has spiked exactly v times (where v is a constant independent of the input).

Let φ_x be the x-th n-ary partial recursive function in a Gödel enumeration of all n-ary partial recursive functions. The natural number value φ_x(y_1, y_2, ..., y_n) is the result given by φ_x on input (y_1, y_2, ..., y_n).

Definition 2 [Universal SN P system] An SN P system Π is universal if there are recursive functions g and f such that for all x, y_1, ..., y_n ∈ N we have φ_x(y_1, y_2, ..., y_n) = f(Π(g(x, y_1, y_2, ..., y_n))).
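The synchronous rule application described above can be sketched in a few lines of code. This is a minimal sketch, assuming standard rules with delay d = 0, modelling a forgetting rule as a rule that emits p = 0 spikes, and resolving non-determinism by always taking the first applicable rule; all function and variable names here are ours, not from the paper.

```python
import re

def applicable(rules, n):
    """Rules (E, b, p) applicable in a neuron holding n spikes: the
    whole spike word s^n must be in L(E) and at least b spikes present."""
    word = "s" * n
    return [(E, b, p) for (E, b, p) in rules
            if n >= b and re.fullmatch(E, word)]

def step(spikes, rules, syn):
    """One synchronous step: every neuron with an applicable rule
    consumes b spikes and sends p spikes along each outgoing synapse
    (p = 1 models a standard firing rule, p = 0 a forgetting rule)."""
    fired = {}
    for i in spikes:
        cand = applicable(rules[i], spikes[i])
        if cand:                      # deterministic choice of first rule
            E, b, p = cand[0]
            spikes[i] -= b            # b spikes are consumed
            fired[i] = p
    for i, p in fired.items():        # deliver spikes along synapses
        for src, dst in syn:
            if src == i:
                spikes[dst] += p
    return spikes

# toy system: neuron 0 forwards one spike per step to neuron 1
spikes = {0: 3, 1: 0}
rules = {0: [("s+", 1, 1)], 1: []}
syn = [(0, 1)]
print(step(spikes, rules, syn))  # {0: 2, 1: 1}
```

Delays, closed neurons, input reading and the terminal-configuration bookkeeping from the definition are deliberately omitted from this sketch.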
In the next section we give some further discussion on the subject of definitions of universality.
3. Counter machines

Definition 3 (Counter machine)
A counter machine is a tuple C = (z, R, c_m, Q, q_1, q_h), where z gives the number of counters, R is the set of input counters, c_m is the output counter, Q = {q_1, q_2, ..., q_h} is the set of instructions, and q_1, q_h ∈ Q are the initial and halt instructions, respectively. Each counter c_j stores a natural number value y ≥ 0.

Each instruction q_i is of one of the following two forms, q_i: INC(j), q_l or q_i: DEC(j), q_l, q_k, and is executed as follows:
– q_i: INC(j), q_l: increment the value y stored in counter c_j by 1 and move to instruction q_l.
– q_i: DEC(j), q_l, q_k: if the value y stored in counter c_j is greater than 0, then decrement this value by 1 and move to instruction q_l; otherwise, if y = 0, move to instruction q_k.

At the beginning of a computation the first instruction executed is q_1. The input to the counter machine is initially stored in the input counters. If the counter machine's control enters instruction q_h, then the computation halts at that timestep. The result of the computation is the value y stored in the output counter c_m when the computation halts.

We now consider some different notions of universality. Korec [7] gives universality definitions that describe some counter machines as weakly universal and other counter machines as strongly universal.

Definition 4 [Korec [7]] A register machine M will be called strongly universal if there is a recursive function g such that for all x, y ∈ N we have φ_x(y) = Φ_M(g(x), y).

Here Φ_M(g(x), y) is the value stored in the output counter at the end of a computation when M is started with the values g(x) and y in its input counters. Korec's definition insists that the value y should not be changed before passing it as input to M. However, if we consider computing an n-ary function with a Korec-strong universal counter machine, then it is clear that n arguments must be encoded as a single input y. Many Korec-strong universal counter machines would not satisfy a definition where the function φ_x in Definition 4 is replaced with an n-ary function with n > 1. To see this, replace the equation "φ_x(y) = Φ_M(g(x), y)" with the equation "φ_x^n(y_1, y_2, ..., y_n) = Φ_M^{n+1}(g(x), y_1, y_2, ..., y_n)" in Definition 4.
Note that for any counter machine M with r counters, if r ≤ n then M does not satisfy this new definition. It could be considered that Korec's notion of strong universality is somewhat arbitrary for the following reason: Korec's definition will admit machines that require n-ary input (y_1, y_2, ..., y_n) to be encoded as the single input y when simulating an n-ary function, but his definition will not admit a machine that applies an encoding function to y (e.g. y^2 is not permitted). Perhaps when one uses this notion of universality it would be more appropriate to refer to it as strongly universal for unary partial recursive functions instead of simply strongly universal.

Korec [7] also gives a number of other definitions of universality. If the equation φ_x(y) = Φ_M(g(x), y) in Definition 4 above is replaced with any one of the equations φ_x(y) = Φ_M(g(x, y)), φ_x(y) = f(Φ_M(g(x), y)) or φ_x(y) = f(Φ_M(g(x, y))), then the counter machine M is weakly universal. Korec gives another definition where the equation φ_x(y) = Φ_M(g(x), y) in Definition 4 is replaced with the equation φ_x(y) = f(Φ_M(g(x), h(y))). However, he does not include this definition in his list of weakly universal machines even though the equation φ_x(y) = f(Φ_M(g(x), h(y))) allows for a more relaxed encoding than the equation φ_x(y) = f(Φ_M(g(x), y)) and thus gives a weaker form of universality.

For each number m > 2 there are universal m-counter machines that allow φ_x^n and its input (y_1, y_2, ..., y_n) to be encoded separately (e.g. via g(x) and h_n(y_1, y_2, ..., y_n)). For universal 2-counter machines all of the current algorithms encode the function φ_x^n and its input (y_1, y_2, ..., y_n) together as a single input (e.g. via g_{n+1}(x, y_1, y_2, ..., y_n)). Using such encodings it is only possible to give universal 2-counter machines that Korec would class as weakly universal.
Some other limitations of 2-counter machines were shown independently by Schroeppel [16] and Barzdin [1]. In both cases the authors are examining unary functions that are uncomputable for 2-counter machines when the input value to the counter machine must equal the input to the function. For example, Schroeppel shows that given n as input a 2-counter machine cannot compute 2^n. It is interesting to note that one can give a Korec-strong universal counter machine that is as time/space inefficient as a Korec-weak universal 2-counter machine. Korec's definition of strong universality deals with input and output only and is not concerned with the (time/space) efficiency of the computation.

In earlier work [15], Korec's notion of strong universality was adopted for SN P systems as follows: a spiking neural P system Π is strongly universal if Π(10^{y-1}10^{g(x)-1}1) = φ_x(y) for all x and y (here if φ_x(y) is undefined, so too is Π(10^{y-1}10^{g(x)-1}1)).
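As a concrete reference point before moving to the constructions of Section 4, the instruction semantics of Definition 3 can be sketched as a small interpreter. The program encoding used below (dicts keyed by instruction label) is our own illustration, not taken from the paper or from Korec.

```python
def run(program, counters, start, halt):
    """Execute a counter machine: `program` maps an instruction label to
    ("INC", j, l) or ("DEC", j, l, k), following Definition 3."""
    q = start
    while q != halt:
        op = program[q]
        if op[0] == "INC":
            _, j, l = op
            counters[j] += 1          # increment c_j, move to q_l
            q = l
        else:
            _, j, l, k = op
            if counters[j] > 0:       # DEC: non-zero branch
                counters[j] -= 1
                q = l
            else:                     # zero branch
                q = k
    return counters

# drain counter 1 into counter 0 (i.e. compute their sum), halting at label 2
program = {
    0: ("DEC", 1, 1, 2),
    1: ("INC", 0, 0),
}
print(run(program, {0: 3, 1: 2}, start=0, halt=2))  # {0: 5, 1: 0}
```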
4. Small universal SN P systems
We begin this section by giving our two extended universal systems Π_C and Π_{C_2}, and following this we give our standard system Π′_C. We prove the universality of Π_C and Π′_C by showing that they each simulate a universal 3-counter machine. From Π_C we obtain the system Π_{C_2}, which simulates a universal 2-counter machine.

Theorem 1
Let C be a universal counter machine with 3 counters that completes its computation in time t to give the output value x_o when given the pair of input values (x_1, x_2). Then there is a universal extended SN P system Π_C that simulates the computation of C in time O(t + x_1 + x_2 + x_o) and has only 5 neurons.

PROOF.
Let C = (3, {c_1, c_2}, c_3, Q, q_1, q_h) where Q = {q_1, q_2, ..., q_h}. Our SN P system Π_C is given by Figure 1 and Table 4. The algorithm given for Π_C is deterministic.

4.0.1. Encoding of a configuration of C and reading input into Π_C

A configuration of C is stored as spikes in the neurons of Π_C. The next instruction q_i to be executed is stored in each of the three counter neurons as 4(h + i) spikes. Let x_1, x_2 and x_3 be the values stored in counters c_1, c_2 and c_3, respectively. Then the values x_1, x_2 and x_3 are stored as 8h(x_1 + 1), 8h(x_2 + 1) and 8h(x_3 + 1) spikes in the neurons simulating counters c_1, c_2 and c_3, respectively.

The input to Π_C is read into the system via the input neuron (see Figure 1). If C begins its computation with the values x_1 and x_2 in counters c_1 and c_2, respectively, then the binary sequence w = 10^{x_1 - 1}10^{x_2 - 1}1 is read in via the input neuron. Thus, the input neuron receives a single spike from the environment at times t_1, t_{x_1+1} and t_{x_1+x_2+1}. (Note that no formal definition of the notion of strong universality discussed in Section 3 was explicitly given in [15].)

Fig. 1. Universal extended SN P system Π_C: the neurons simulating counters c_1, c_2 and c_3, together with the input and output neurons. Each oval labeled σ_i is a neuron. An arrow going from neuron σ_i to neuron σ_j illustrates a synapse (i, j).

We explain how the system is initialised to encode an initial configuration of C by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t_1. Before the computation begins, neuron σ initially contains 8h spikes, σ contains 2 spikes, σ contains 8h + 1 spikes, and all other neurons contain no spikes. Thus, when the input neuron receives its first spike at time t_1 we have

t_1:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
      σ = 2,       s^2/s → s,
      σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h-1},

where on the left σ_k = z gives the number z of spikes in neuron σ_k at time t_1 and on the right is the rule that is to be applied at time t_1, if there is an applicable rule at that time.

Thus, from Figure 1, when we apply the rule s^{8h+1}/s^{8h} → s^{8h} in the first of these neurons, s^2/s → s in the second, and s^{8h+1}/s^{8h} → s^{8h-1} in the third at time t_1, we get

t_2:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
      σ = 8h,
      σ = 8h + 1,  s^{8h+1}/s^{8h} → s,
      σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h-1},
      σ = 1,  s → λ,

t_3:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
      σ = 16h,
      σ = 8h + 1,  s^{8h+1}/s^{8h} → s,
      σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h-1},
      σ = 1,  s → λ.

The input neuron fires on every timestep between times t_1 and t_{x_1+1} to send a total of 8hx_1 spikes onwards; thus we get

t_{x_1+1}:  σ = 8h + 2,  s^{8h+2}/s^{8h+1} → s^{8h+1},
            σ = 8hx_1,
            σ = 8h + 1,  s^{8h+1}/s^{8h} → s,
            σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h-1},
            σ = 1,  s → λ,

t_{x_1+2}:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
            σ = 8h(x_1 + 1) + 1,  (s^{8h})* s^{8h+1}/s^{8h} → s,
            σ = 8h + 2,
            σ = 8h + 2,  s^{8h+2}/s^{8h} → s^{8h-1},
            σ = 1,  s → λ,

t_{x_1+3}:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
            σ = 8h(x_1 + 1) + 1,  (s^{8h})* s^{8h+1}/s^{8h} → s,
            σ = 16h + 2,
            σ = 8h + 2,  s^{8h+2}/s^{8h} → s^{8h-1}.

Neuron σ fires on every timestep between times t_{x_1+1} and t_{x_1+x_2+1} to send a total of 8hx_2 spikes onwards. Thus, when the input neuron receives the last spike from its environment we have

t_{x_1+x_2+1}:  σ = 8h + 2,  s^{8h+2}/s^{8h+1} → s^{8h+1},
                σ = 8h(x_1 + 1) + 1,  (s^{8h})* s^{8h+1}/s^{8h} → s,
                σ = 8hx_2 + 2,
                σ = 8h + 2,  s^{8h+2}/s^{8h} → s^{8h-1},

t_{x_1+x_2+2}:  σ = 8h + 1,  s^{8h+1}/s^{8h} → s^{8h},
                σ = 8h(x_1 + 1) + 2,  (s^{8h})* s^{8h+2}/s^{8h+2} → s^{8h},
                σ = 8h(x_2 + 1) + 3,  (s^{8h})* s^{8h+3}/s^{8h+3} → s^{8h},
                σ = 8h + 3,  s^{8h+3} → s^{8h},

t_{x_1+x_2+3}:  σ = 6h + 1,  s^{6h+1} → s^{4(h+1)},
                σ = 8h(x_1 + 1),
                σ = 8h(x_2 + 1),
                σ = 8h,
                σ = 2h,  s^{2h} → λ,

t_{x_1+x_2+4}:  σ = 8h(x_1 + 1) + 4(h + 1),
                σ = 8h(x_2 + 1) + 4(h + 1),
                σ = 8h + 4(h + 1).

At time t_{x_1+x_2+4} the neuron simulating counter c_1 contains 8h(x_1 + 1) + 4(h + 1) spikes, the neuron simulating c_2 contains 8h(x_2 + 1) + 4(h + 1) spikes and the neuron simulating c_3 contains 8h + 4(h + 1) spikes. Thus at time t_{x_1+x_2+4} the SN P system encodes an initial configuration of C.

4.0.2. Π_C simulating q_i: INC(1), q_l

Let counters c_1, c_2, and c_3 have values x_1, x_2, and x_3, respectively.
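Before stepping through the increment simulation, the spike-count bookkeeping just described can be sanity-checked with a short sketch. The helper names and the example value h = 10 are ours; only the arithmetic (value x kept as 8h(x + 1) spikes, instruction q_i contributing a further 4(h + i) spikes, and increments/decrements moving the count by 8h) comes from the text. The instruction component is held fixed here for clarity, whereas in the real system it changes to 4(h + l) as the next instruction is established.

```python
H = 10  # number of instructions h of the simulated machine (example value)

def encode(x, i, h=H):
    """Spikes in a counter neuron: value x plus current instruction q_i."""
    return 8 * h * (x + 1) + 4 * (h + i)

def decode_value(spikes, i, h=H):
    """Recover the counter value from a neuron's spike count."""
    return (spikes - 4 * (h + i)) // (8 * h) - 1

def is_zero(spikes, i, h=H):
    """x = 0 iff the neuron holds exactly 8h + 4(h + i) spikes."""
    return spikes == 8 * h + 4 * (h + i)

s = encode(5, 3)
# an increment adds 8h spikes, a decrement removes 8h spikes
print(decode_value(s + 8 * H, 3), decode_value(s - 8 * H, 3))  # 6 4
print(is_zero(encode(0, 3), 3))  # True
```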
Then the simulation of q_i: INC(1), q_l begins at time t_j with 8h(x_1 + 1) + 4(h + i) spikes in the neuron simulating c_1, 8h(x_2 + 1) + 4(h + i) spikes in the neuron simulating c_2 and 8h(x_3 + 1) + 4(h + i) spikes in the neuron simulating c_3. Thus, at time t_j we have

t_j:  σ = 8h(x_1 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)},
      σ = 8h(x_2 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{8h},
      σ = 8h(x_3 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{8h}.

From Figure 1, when we apply the rule (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)} in the first neuron and the rule (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{8h} in the other two at time t_j, we get

t_{j+1}:  σ = 16h + 4i,  s^{16h+4i} → s^{4(h+l)},
          σ = 8h(x_1 + 1),
          σ = 8hx_2,
          σ = 8hx_3,
          σ = 6h,  s^{6h} → λ,

t_{j+2}:  σ = 8h(x_1 + 2) + 4(h + l),
          σ = 8h(x_2 + 1) + 4(h + l),
          σ = 8h(x_3 + 1) + 4(h + l).

At time t_{j+2} the simulation of q_i: INC(1), q_l is complete. Note that an increment on the value x_1 in counter c_1 was simulated by increasing the 8h(x_1 + 1) spikes in its neuron to 8h(x_1 + 2) spikes. Note also that the encoding 4(h + l) of the next instruction q_l has been established in the three counter neurons.

4.0.3. Π_C simulating q_i: DEC(1), q_l, q_k

There are two cases to consider here. Case 1: if counter c_1 has value x_1 > 0, then decrement counter c_1 and move to instruction q_l. Case 2: if counter c_1 has value x_1 = 0, then move to instruction q_k. As with the previous example, our simulation begins at time t_j. Thus Case 1 (x_1 > 0) gives

t_j:  σ = 8h(x_1 + 1) + 4(h + i),  (s^{8h})* s^{8h+4(h+i)}/s^{12h+4i} → s^{12h+4i},
      σ = 8h(x_2 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{8h},
      σ = 8h(x_3 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{8h},

t_{j+1}:  σ = 10h + 4i,  s^{10h+4i} → s^{4(h+l)},
          σ = 8hx_1,
          σ = 8h(x_2 + 1),
          σ = 8h(x_3 + 1),
          σ = 2h,  s^{2h} → λ,

t_{j+2}:  σ = 8hx_1 + 4(h + l),
          σ = 8h(x_2 + 1) + 4(h + l),
          σ = 8h(x_3 + 1) + 4(h + l).

At time t_{j+2} the simulation of q_i: DEC(1), q_l, q_k for Case 1 (x_1 > 0) is complete. Note that a decrement on the value x_1 in counter c_1 was simulated by decreasing the 8h(x_1 + 1) spikes in its neuron to 8hx_1 spikes. Note also that the encoding 4(h + l) of the next instruction q_l has been established in the three counter neurons. Alternatively, if we have Case 2 (x_1 = 0), then we get

t_j:  σ = 8h + 4(h + i),  s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)},
      σ = 8h(x_2 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{8h},
      σ = 8h(x_3 + 1) + 4(h + i),  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{8h},

t_{j+1}:  σ = 8h + 4i,  s^{8h+4i} → s^{4(h+k)},
          σ = 8h,
          σ = 8h(x_2 + 1),
          σ = 8h(x_3 + 1),
          σ = 2h,  s^{2h} → λ,

t_{j+2}:  σ = 8h + 4(h + k),
          σ = 8h(x_2 + 1) + 4(h + k),
          σ = 8h(x_3 + 1) + 4(h + k).

At time t_{j+2} the simulation of q_i: DEC(1), q_l, q_k for Case 2 (x_1 = 0) is complete. The encoding 4(h + k) of the next instruction q_k has been established in the three counter neurons.

4.0.4. Halting
The halt instruction q_h is encoded as 4h + 5 spikes. Thus, if C enters the halt instruction q_h, we get

t_j:  σ = 8h(x_1 + 1) + 4h + 5,
      σ = 8h(x_o + 1) + 4h + 5,  (s^{8h})* s^{4h+5}/s^{12h} → s,
      σ = 8h(x_2 + 1) + 4h + 5,

t_{j+1}:  σ = 2,  s^2 → λ,
          σ = 8h(x_1 + 1) + 4h + 5,
          σ = 8hx_o + 5,  (s^{8h})* s^{8h+5}/s^{8h} → s,
          σ = 8h(x_2 + 1) + 4h + 5,
          σ = 2,  s^2/s → s,

t_{j+2}:  σ = 1,  s → λ,
          σ = 8h(x_1 + 1) + 4h + 5,
          σ = 8h(x_o - 1) + 5,  (s^{8h})* s^{8h+5}/s^{8h} → s,
          σ = 8h(x_2 + 1) + 4h + 5,
          σ = 1,  s → λ.

The rule (s^{8h})* s^{8h+5}/s^{8h} → s is applied a further x_o - 2 times in this neuron until we get

t_{j+x_o}:  σ = 1,  s → λ,
            σ = 8h(x_1 + 1) + 4h + 5,
            σ = 8h + 5,  s^{8h+5} → s,
            σ = 8h(x_2 + 1) + 4h + 5,
            σ = 1,  s → λ,

t_{j+x_o+1}:  σ = 2,  s^2 → λ,
              σ = 8h(x_1 + 1) + 4h + 5,
              σ = 8h(x_2 + 1) + 4h + 5,
              σ = 2,  s^2/s → s.

As usual, the output is the time interval between the first and second spikes that are sent out of the output neuron. Note from above that the output neuron fires for the first time at timestep t_{j+1} and for the second time at timestep t_{j+x_o+1}. Thus, the output of Π_C is x_o, the value of the output counter when C enters the halt instruction q_h. Note that if x_o = 0, then the rule s^{12h+5} → s is executed at timestep t_j, and thus only one spike will be sent out of the output neuron.

We have now shown how to simulate arbitrary instructions of the form q_i: INC(1), q_l and q_i: DEC(1), q_l, q_k that operate on counter c_1. Instructions which operate on counters c_2 and c_3 are simulated in a similar manner. Immediately following the simulation of an instruction, Π_C is configured to simulate the next instruction. Each instruction of C is simulated in 2 timesteps. The pair of input values (x_1, x_2) is read into the system in x_1 + x_2 + 4 timesteps, and sending the output value x_o out of the system takes x_o + 1 timesteps. Thus, if C completes its computation in time t, then Π_C simulates the computation of C in linear time O(t + x_1 + x_2 + x_o). ✷

Theorem 2
Let C be a universal counter machine with 2 counters that completes its computation in time t to give the output value x_o when given the input value x_1. Then there is a universal extended SN P system Π_{C_2} that simulates the computation of C in time O(t + x_1 + x_o) and has only 4 neurons.

PROOF.
Let C = (2, {c_1}, c_2, Q, q_1, q_h) where Q = {q_1, q_2, ..., q_h}. The rules for the SN P system Π_{C_2} are given by Table 5, and a diagram of the system is obtained by removing one neuron from Figure 1. If C begins its computation with the value x_1 in counter c_1, then the binary sequence w = 10^{x_1 - 1}1 is read in via the input neuron. Before the computation begins, the four neurons respectively contain 8h, 8h + 1, 16h + 1 and 0 spikes. Like Π_C, Π_{C_2} encodes the value x of each counter as 8h(x + 1) spikes and encodes each instruction q_i as 4(h + i) spikes. The operation of Π_{C_2} is very similar to the operation of Π_C, and thus it would be tedious and repetitive to go through another simulation here. Π_{C_2} simulates a single instruction of C in 2 timesteps in a manner similar to that of Π_C. The inputting and outputting techniques used by Π_{C_2} also remain similar to those of Π_C, and thus the running time of Π_{C_2} is O(t + x_1 + x_o). ✷

The SN P system in Theorem 3 simulates a counter machine with the following restriction: if a counter is being decremented, no other counter has value 0 at that timestep. Note that this does not result in a loss of generality, as for each standard counter machine there is a counter machine with this restriction that simulates it in linear time without an increase in the number of counters. Let C be any counter machine with m counters. Then there is a counter machine C′ with m counters that simulates C in linear time, such that if C′ is decrementing a counter, no other counter has value 0 at that timestep. Each counter in C that has value y is simulated by a counter in C′ that has value y + 1. The instruction set of C′ is the same as the instruction set of C with the following exception: each q_i: DEC(j), q_l, q_k instruction in C is replaced with the instructions (q_i: DEC(j), q′_i, q′_i), (q′_i: DEC(j), q*_l, q*_k), (q*_l: INC(j), q_l), and (q*_k: INC(j), q_k).
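The C to C′ replacement just described can be sketched as a program transformation. The instruction encoding and the fresh-label scheme below are our own illustration; only the four-instruction replacement pattern comes from the text.

```python
def transform(program):
    """Replace each DEC with the four instructions described above, so
    that every value y is simulated as y + 1 and a zero-test is done by
    decrementing twice and then restoring one decrement with an INC."""
    out = {}
    for q, op in program.items():
        if op[0] == "INC":
            out[q] = op
        else:
            _, j, l, k = op
            p1, sl, sk = (q, "d2"), (q, "rl"), (q, "rk")  # fresh labels
            out[q]  = ("DEC", j, p1, p1)  # always succeeds: value >= 1
            out[p1] = ("DEC", j, sl, sk)  # the real zero-test
            out[sl] = ("INC", j, l)       # restore, take non-zero branch
            out[sk] = ("INC", j, k)       # restore, take zero branch
    return out

prog = {0: ("DEC", 0, 1, 2), 1: ("INC", 0, 0)}
print(len(transform(prog)))  # the single DEC becomes four instructions: 5
```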
The reason we need these extra instructions is that y is encoded as y + 1, and we must decrement twice if we wish to test for an encoded 0.

Fig. 2. Part 1 of the universal SN P system Π′_C: the input and output neurons, the neurons simulating counters c_1, c_2 and c_3, and control neurons. Each oval labeled σ_i is a neuron. An arrow going from neuron σ_i to neuron σ_j illustrates a synapse (i, j).

Theorem 3
Let C be a universal counter machine with 3 counters and h instructions that completes its computation in time t to give the output value x_o when given the input (x_1, x_2). Then there is a universal SN P system Π′_C that simulates the computation of C in time O(ht + x_1 + x_2 + x_o) and has only 17 neurons.

PROOF.
Let C = (3, {c_1, c_2}, c_3, Q, q_1, q_h) where Q = {q_1, q_2, ..., q_h}. Also, without loss of generality, we assume that during C's computation, if C is decrementing a counter, no other counter has value 0 at that timestep (see the paragraph before Theorem 3). The SN P system Π′_C is given by Figures 2 and 3 and Tables 6 and 7. As a complement to the figures, Table 3 may be used to identify all the synapses in Π′_C. The algorithm given for Π′_C is deterministic.

4.0.5. Encoding of a configuration of C and reading input into Π′_C

A configuration of C is stored as spikes in the neurons of Π′_C. The next instruction q_i to be executed is stored in each of six neurons as 21(h + i) + 1 spikes. Let x_1, x_2 and x_3 be the values stored in counters c_1, c_2 and c_3, respectively. Then the value x_1 is stored as 6(x_1 + 1) spikes in the neuron simulating c_1, x_2 is stored as 6(x_2 + 1) spikes in the neuron simulating c_2, and x_3 is stored as 6(x_3 + 1) spikes in the neuron simulating c_3.

The input to Π′_C is read into the system via the input neuron (see Figure 2). If C begins its computation with the values x_1 and x_2 in counters c_1 and c_2, respectively, then the binary sequence w = 10^{x_1 - 1}10^{x_2 - 1}1 is read in via the input neuron. Thus, the input neuron receives a spike from the environment at times t_1, t_{x_1+1} and t_{x_1+x_2+1}. We explain how the system is initialised to encode an initial configuration of C by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t_1. Before the computation begins, six neurons each contain 40 spikes, three neurons each contain 3 spikes, and three further neurons each contain 21h − 1 spikes. Thus, when the input neuron receives its first spike at time t_1 we have

t_1:  σ = 1,  s → s,
      σ, σ, σ, σ, σ, σ = 40,
      σ, σ, σ = 3,
      σ, σ, σ = 21h − 1,  (s)* s/s → s.
Thus, from Figures 2 and 3, when we apply the rule s → s in neuron σ and the rule ( s ) ∗ s /s → s in σ , σ and σ at time t we get t : σ , σ , σ , σ , σ , σ = 41 , s /s → s,σ , σ , σ = 4 ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 ,t : σ , σ , σ , σ , σ , σ = 41 , s /s → s,σ = 10 ,σ , σ = 10 , ( s ) ∗ s /s → s,σ = 6 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 .t : σ , σ , σ , σ , σ , σ = 43 , s /s → s,σ = 16 ,σ , σ = 10 , ( s ) ∗ s /s → s,σ = 7 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 . Neurons σ , σ , σ , σ , σ and σ fire on every timestep between times t and t x +2 to send a totalof 6 x spikes to σ , and thus we get 14 x +1 : σ = 1 , s → s,σ , σ , σ , σ , σ , σ = 43 , s /s → s,σ = 6( x −
1) + 4 ,σ , σ = 10 , ( s ) ∗ s /s → s,σ = 7 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 ,t x +2 : σ = 44 , s /s → s,σ , σ , σ , σ , σ = 44 , s /s → s,σ = 6 x + 5 , ( s ) ∗ s /s → s,σ = 11 ,σ = 11 , ( s ) ∗ s /s → s,σ = 7 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 ,t x +3 : σ = 22 , s /s → s,σ , σ , σ , σ , σ = 16 , s /s → s,σ = 6 x + 5 , ( s ) ∗ s /s → s,σ = 17 ,σ = 11 , ( s ) ∗ s /s → s,σ = 7 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 . Neurons σ , σ , σ , σ , σ and σ fire on every timestep between times t x +2 and t x + x +2 to senda total of 6 x spikes to σ . Thus, when σ receives the last spike from its environment we have15 x + x +1 : σ = 1 , s → s,σ = 22 , s /s → s,σ , σ , σ , σ , σ = 16 , s /s → s,σ = 6 x + 5 , ( s ) ∗ s /s → s,σ = 6 x + 5 ,σ = 11 , ( s ) ∗ s /s → s,σ = 7 , s → λ,σ , σ , σ = 21 h − ,σ , σ , σ = 3 ,t x + x +2 : σ = 23 , s /s → s,σ , σ , σ , σ , σ = 17 ,σ = 6( x + 1) ,σ = 6( x + 2) ,σ = 12 ,σ = 7 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 3 ,t x + x +3 : σ , σ , σ , σ , σ , σ = 18 ,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 2) + 1 , ( s ) ∗ s /s → s,σ = 13 , ( s ) ∗ s /s → s,σ = 1 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 6 ,t x + x +4 : σ , σ , σ , σ , σ , σ = 21 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 ,σ = 1 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 9 . h − t x + x +7 h +1 : σ , σ , σ , σ , σ , σ = 21 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 ,σ = 1 , s → s,σ , σ = 1 , s → λ,σ , σ , σ = 21 h,t x + x +7 h +2 : σ , σ , σ , σ , σ , σ = 21 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 ,σ , σ , σ = 21 h + 1 , ( s ) ∗ s /s → s,t x + x +7 h +3 : σ , σ , σ , σ , σ , σ = 21 + 3 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 ,σ , σ , σ = 3 ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s. Neurons σ , σ and σ continue to fire at each timestep. Thus, after a further 7 h − t x + x +14 h +2 : σ , σ , σ , σ , σ , σ = 21 h + 21 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 ,σ , σ , σ = 21 h,σ = 1 , s → s,σ , σ = 1 , s → λ. 
Fig. 3. Part 2 of the universal SN P system Π′C. Each oval labelled σ_i is a neuron. An arrow going from neuron σ_i to neuron σ_j illustrates a synapse (i, j).
t_{x1+x2+14h+3} : σ , σ , σ , σ , σ , σ = 21( h + 1) + 1 , σ = 6( x + 1) , σ = 6( x + 1) , σ = 6 , σ , σ , σ = 21 h + 1 .
At time t_{x1+x2+14h+3}, six of the neurons each contain 21(h + 1) + 1 spikes, one neuron contains 6(x1 + 1) spikes, another contains 6(x2 + 1) spikes, and a third contains 6 spikes. Thus, at time t_{x1+x2+14h+3} the SN P system encodes an initial configuration of C.
4.0.6. Algorithm overview
Here we give a high-level overview of the simulation algorithm used by Π′C. Three of the neurons simulate the counters c1, c2 and c3 of C, respectively, and six of the neurons are the control neurons. The control neurons determine which instruction is to be simulated next by sending signals to the neurons that simulate the counters of C, directing them to simulate an increment or decrement. There are four different signals that the control neurons send to the simulated counters, each taking the form of a unique number of spikes. If 1 spike is sent to each of the counter neurons, then the value of the simulated counter c1 is tested and the simulated counters c2 and c3 are decremented. If 2 spikes are sent, the value of the simulated counter c2 is tested and the simulated counters c1 and c3 are decremented. If 3 spikes are sent, the value of the simulated counter c3 is tested and the simulated counters c1 and c2 are decremented. Finally, if 6 spikes are sent, all three counters are incremented. Unfortunately, each of these signals has the effect of changing the value of more than one simulated counter at a time. We can, however, obtain the desired result by using more than one signal for each simulated timestep. If we wish to simulate INC we send 2 signals, and if we wish to simulate DEC we send either 8 or 2 signals. Table 2 gives the sequence of spikes (signals) to be sent in order to simulate each counter machine instruction. To explain how to use Table 2 we will take the example of simulating
INC(2). In the first timestep, all three simulated counters are incremented by sending 6 spikes, and then in the second timestep the simulated counters c1 and c3 are decremented by sending 2 spikes. This has the effect of simulating an increment in counter c2 and leaving the other two simulated counters unchanged.

Instruction   Sequence of spikes sent from the six control neurons
INC(1)   6, 1
INC(2)   6, 2
INC(3)   6, 3
DEC(1)   1, 0, 6                        if x1 = 0
DEC(1)   1, 0, 6, 6, 6, 3, 3, 2, 2     if x1 > 0
DEC(2)   2, 0, 6                        if x2 = 0
DEC(2)   2, 0, 6, 6, 6, 3, 3, 1, 1     if x2 > 0
DEC(3)   3, 0, 6                        if x3 = 0
DEC(3)   3, 0, 6, 6, 6, 2, 2, 1, 1     if x3 > 0
Table 2. The sequence of spikes (signals) sent by Π′C to simulate each counter machine instruction. Each number in the sequence represents the total number of spikes to be sent from the set of six control neurons at each timestep.

Each counter machine instruction q_i is encoded as 21(h + i) + 1 spikes in each of the control neurons. At the end of each simulated timestep the number of spikes in the control neurons must be updated to encode the next instruction q_k. An update rule is applied in each control neuron, leaving a total of 21k spikes in each control neuron. Following this, 21h + 1 spikes are sent from three further neurons to each of the control neurons. This gives a total of 21(h + k) + 1 spikes in each control neuron, thus encoding the next instruction q_k. (Note that this description is a simplification of the actual rules used.)
4.0.7. Π′C simulating q_i : INC(1), q_l
The simulation of
IN C (1) is given by the neurons in Figures 2 and 3. Let x , x and x be thevalues in counters c , c and c respectively. Then our simulation of q i : IN C (1) , q l begins with6( x + 1) spikes in σ , 6( x + 1) spikes in σ , 6( x + 1) spikes in σ , 21( h + i ) + 1 spikes in each ofthe neurons σ , σ , σ , σ , σ and σ , and 21 h + 1 spikes in each of the neurons σ , σ and σ .Beginning our simulation at time t j , we have t j : σ = 21( h + i ) + 1 , s h + i )+1 /s → s,σ , σ , σ , σ , σ = 21( h + i ) + 1 , s h + i )+1 /s h + i − l )+6 → s,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ , σ , σ = 21 h + 1 , ( s ) ∗ s /s → s. t j +1 : σ = 21( h + i ) − , s h + i ) − /s h + i − l )+1 → s,σ , σ , σ , σ , σ = 21 l − ,σ = 6( x + 2) ,σ = 6( x + 2) ,σ = 6( x + 2) ,σ = 6 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 3 ,t j +2 : σ , σ , σ , σ , σ , σ = 21 l − ,σ = 6( x + 2) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 2) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 2) + 1 , ( s ) ∗ s /s → s,σ = 1 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 6 ,t j +3 : σ , σ , σ , σ , σ , σ = 21 l,σ = 6( x + 2) ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 1 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 9 . The remainder of this simulation is similar to the computation carried out at the end of theinitialisation process (see the last paragraph of Section 4.0.6 and timesteps t x + x +4 to t x + x +14 h +3 of the Section 4.0.5). Thus, after a further 14 h − t j +14 h +2 : σ , σ , σ , σ , σ , σ = 21( h + l ) + 1 ,σ = 6( x + 2) ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ , σ , σ = 21 h + 1 , ( s ) ∗ s /s → s.
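The net effect of the signal sequences of Table 2 on the three simulated counter values, together with the control-neuron spike bookkeeping, can be checked with a short standalone sketch. This is our own Python model of the counting arguments above, not the SN P system itself, and all concrete values (counter triple, h, i, l) are chosen for illustration only:

```python
# Model of the four signals from Section 4.0.6: signal 6 increments all three
# simulated counters; signal k (k = 1, 2, 3) tests counter c_k and decrements
# the other two; signal 0 means no spikes are sent that timestep.
def apply_signal(c, sig):
    c1, c2, c3 = c
    if sig == 6:
        return (c1 + 1, c2 + 1, c3 + 1)
    if sig == 1:
        return (c1, c2 - 1, c3 - 1)
    if sig == 2:
        return (c1 - 1, c2, c3 - 1)
    if sig == 3:
        return (c1 - 1, c2 - 1, c3)
    return (c1, c2, c3)

def run(c, signals):
    for s in signals:
        c = apply_signal(c, s)
    return c

# Table 2 sequences: INC(1) increments only c1, and DEC(1) with x1 > 0
# decrements only c1 (the x1 = 0 case is handled by different rules).
assert run((5, 5, 5), [6, 1]) == (6, 5, 5)
assert run((5, 5, 5), [1, 0, 6, 6, 6, 3, 3, 2, 2]) == (4, 5, 5)

# Control-neuron bookkeeping: instruction q_i is encoded as 21(h+i)+1 spikes.
h, i, l = 10, 3, 7                     # assumed values for illustration
spikes = 21 * (h + i) + 1
spikes -= 21 * (h + i) + 1 - 21 * l    # the update rule leaves 21*l spikes
spikes += 21 * h + 1                   # 21h + 1 spikes arrive afterwards
assert spikes == 21 * (h + l) + 1      # encoding of the next instruction q_l
```

The assertions mirror the claims of Section 4.0.6: each sequence changes exactly one simulated counter, and the control neurons end each simulated timestep encoding the next instruction.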
At time t_{j+14h+2} the simulation of q_i : INC(1), q_l is complete. Note that an increment on the value x1 in counter c1 is simulated by increasing the number of spikes in the neuron simulating c1 from 6(x1 + 1) to 6(x1 + 2). Note also that the encoding of the next instruction q_l is given by the 21(h + l) + 1 spikes in each of the six control neurons.
4.0.8. Π′C simulating q_i : DEC(1), q_l, q_k
If we are simulating
DEC(1) then we get t j : σ = 21( h + i ) + 1 , s h + i )+1 /s → s,σ , σ , σ , σ , σ = 21( h + i ) + 1 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ , σ , σ = 21 h + 1 , ( s ) ∗ s /s → s.
To help simplify configurations we will not include three of the neurons until the end of the example. When simulating DEC(1) there are two cases to consider. Case 1: if counter c1 has value x1 > 0, then decrement counter c1 and move to instruction q_l. Case 2: if counter c1 has value x1 = 0, then move to instruction q_k. In configuration t_{j+1} our system determines whether the value x1 in counter c1 is > 0 by checking whether the number of spikes in the neuron simulating c1 is at least 13. Note that if we have Case 1 then the rule ( s ) ∗ s /s → s is applied in that neuron, sending an extra spike to each of the six control neurons and thus recording that x1 >
0. Case 1 proceeds as follows: t j +1 : σ = 21( h + i ) − ,σ , σ , σ , σ , σ = 21( h + i ) + 2 ,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 1 , s → λ,t j +2 : σ = 21( h + i ) − , s h + i ) − /s → s,σ , σ , σ , σ , σ = 21( h + i ) + 5 , s h + i )+5 /s → s,σ = 6( x + 1) ,σ = 6 x ,σ = 6 x σ = 1 , s → λ. The method we use to test the value of σ (simulated counter c ) has the side-effect of decrementing σ (simulated counter c ) and σ (simulated counter c ). Following this, in order to get the correctvalues our algorithm takes the following steps: Each of our simulated counters ( σ , σ and σ ) are21ncremented 3 times, and then the simulated counter σ is decremented 4 times, whilst the simulatedcounters σ and σ are each decremented twice. Thus, the overall result is that a decrement of c is simulated in σ and the other encoded counter values in σ and σ remain the same. Continuingwith our simulation we get t j +3 : σ , σ , σ , σ , σ , σ = 21( h + i ) − , s h + i ) − /s → s,σ = 6( x + 2) ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 , s → λ,t j +4 : σ , σ , σ = 21( h + i ) − , s h + i ) − /s → s,σ , σ , σ = 21( h + i ) − , s h + i ) − /s h + i − l )+10 → s,σ = 6( x + 3) ,σ = 6( x + 2) ,σ = 6( x + 2) ,σ = 6 , s → λ,t j +5 : σ , σ , σ = 21( h + i ) − , s h + i ) − /s → s,σ , σ , σ = 21 l − ,σ = 6( x + 4) ,σ = 6( x + 3) ,σ = 6( x + 3) ,σ = 6 , s → λ. In configurations t j +3 , t j +4 and t j +5 each of the simulated counters σ , σ and σ are incremented.In configurations t j +6 to t j +10 the simulated counter σ is decremented 4 times and the simulatedcounters σ and σ are each decremented twice. t j +6 : σ , σ = 21( h + i ) − , s h + i ) − /s → s,σ = 21( h + i ) − , s h + i ) − /s h + i − l )+5 → s,σ , σ , σ = 21 l − ,σ = 6( x + 4) + 3 , ( s ) ∗ s /s → s,σ = 6( x + 3) + 3 , ( s ) ∗ s /s → s,σ = 6( x + 3) + 3 , ( s ) ∗ s /s → s,σ = 3 , s → λ. 
j +7 : σ , σ = 21( h + i ) − , s h + i ) − /s → s,σ , σ , σ , σ = 21 l − ,σ = 6( x + 3) + 3 , ( s ) ∗ s /s → s,σ = 6( x + 2) + 3 , ( s ) ∗ s /s → s,σ = 6( x + 3) + 3 , ( s ) ∗ s /s → s,σ = 4 , s → λ,t j +8 : σ , σ = 21( h + i ) − , s h + i ) − /s h + i − l ) − → s,σ , σ , σ , σ = 21 l − ,σ = 6( x + 2) + 2 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 2 , ( s ) ∗ s /s → s,σ = 6( x + 3) + 2 , ( s ) ∗ s /s → s,σ = 3 , s → λ,t j +9 : σ , σ , σ , σ , σ , σ = 21 l − ,σ = 6( x + 1) + 2 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 2 , ( s ) ∗ s /s → s,σ = 6( x + 2) + 2 , ( s ) ∗ s /s → s,σ = 3 , s → λ,t j +10 : σ , σ , σ , σ , σ , σ = 21 l,σ = 6 x ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 1 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 30 . Note that at time t j +8 that rule ( s ) ∗ s /s → s will always be applicable as here x > t x + x +4 to t x + x +14 h +3 of the Section 4.0.5). Thus, after a further 14 h − t j +14 h +2 : σ , σ , σ , σ , σ , σ = 21( h + l ) + 1 ,σ = 6 x ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ , σ , σ = 21 h + 1 , ( s ) ∗ s /s → s. At timestep t j +14 h +2 the simulation of q i : DEC (1) , q l , q k for Case 1 ( x >
0) is complete. Notethat a decrement on the value x in counter c is simulated by decreasing the value in σ from6( x + 1) to 6 x . Note also that the encoding 21( h + l ) + 1 of the next instruction q l has beenestablished in neurons σ , σ , σ , σ , σ and σ . Alternatively, if we have Case 2 ( x = 0) then weget t j +1 : σ = 21( h + i ) − ,σ , σ , σ , σ , σ = 21( h + i ) + 2 ,σ = 7 , s → λ,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 1 , ( s ) ∗ s /s → s,σ = 1 , s → λ,t j +2 : σ = 21( h + i ) − , s h + i ) − /s h + i − k ) − → s,σ , σ , σ , σ , σ = 21( h + i ) + 4 , s h + i )+4 /s h + i − k )+5 → s,σ = 6 x ,σ = 6 x ,σ = 1 , s → λ,t j +3 : σ , σ , σ , σ , σ , σ = 21 k,σ = 6 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6 , s → λ,σ , σ , σ = 21 h − , ( s ) ∗ s /s → s,σ , σ , σ = 9 . The remainder of this simulation is similar to the computation carried out at the end of theinitialisation process (see the last paragraph of Section 4.0.6 and timesteps t x + x +4 to t x + x +14 h +3
24f the Section 4.0.5). Thus, after a further 14 h − t j +14 h +2 : σ , σ , σ , σ , σ , σ = 21( h + k ) + 1 ,σ = 6 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ , σ , σ = 21 h + 1 . At time t j +14 h +2 the simulation of q i : DEC (1) , q l , q k for Case 2 ( x = 0), is complete. Note thatthe encoding 21( h + k ) + 1 of the next instruction q k has been established in neurons σ , σ , σ , σ , σ and σ .4.0.9. Halting If C enters the halt instruction q h at time t j then we get the following t j : σ , σ , σ , σ = 42 h + 1 , s h +1 /s → s,σ , σ = 42 h + 1 ,σ = 6( x + 1) ,σ = 6( x + 1) ,σ = 6( x o + 1) ,t j +1 : σ , σ , σ , σ = 42 h + 1 , s h +1 /s → s,σ , σ = 42 h + 2 ,σ = 6( x + 1) + 4 ,σ = 6( x + 1) + 4 , ( s ) ∗ s /s → s,σ = 6( x o + 1) + 4 , ( s ) ∗ s /s → s,σ = 4 , s → λ,t j +2 : σ , σ = 42 h + 3 , s ∗ s h +3 /s → s,σ , σ = 42 h + 3 ,σ , σ = 42 h + 5 ,σ = 6( x + 2) + 2 , ( s ) ∗ s /s → s,σ = 6( x + 1) + 2 , ( s ) ∗ s /s → s,σ = 6( x o + 1) + 2 , ( s ) ∗ s /s → s,σ = 5 . Note that after time t j +2 we can ignore neurons σ , σ , σ and σ as there are no rules applicable inthese neurons when the number of spikes is > h + 3. The number of spikes in σ and σ does not25ecrease following timestep t j +2 , and thus the rule s ∗ s h +3 /s → s is applicable at each subsequenttimestep regardless of the operation of neurons σ and σ . Thus, neurons σ and σ may also beignored as their operation has no effect on the remainder of the simulation. Note that in subsequentconfigurations we write σ , σ > h + 3 as there are more than 42 h + 3 spikes in each of theseneurons. Thus we have t j +3 : σ , σ > h + 3 , s ∗ s h +3 /s → s,σ = 6 x o + 2 , ( s ) ∗ s /s → s,σ = 8 ,t j +4 : σ , σ > h + 3 , s ∗ s h +3 /s → s,σ = 6( x o −
1) + 2 , ( s ) ∗ s /s → s,σ = 11 , s /s → s,t j +5 : σ , σ > h + 3 , s ∗ s h +3 /s → s,σ = 6( x o −
2) + 2 , ( s ) ∗ s /s → s,σ = 12 . The rule ( s ) ∗ s /s → s is applied in σ a further x o − t j + x o +3 : σ , σ > h + 3 , s ∗ s h +3 /s → s,σ = 2 , s → λ,σ = 3( x o −
2) + 12 ,t j + x o +4 : σ , σ > h + 3 , s ∗ s h +3 /s → s,σ = 2 , s → λ,σ = 3( x o −
2) + 14 , ( s ) ∗ s /s → s. Recall from Section 2 that the output of an SN P system is the time interval between the firstand second spikes that are sent out of the output neuron. Note from above that the output neuron σ fires for the first time at timestep t j +4 and for the second time at timestep t j + x o +4 . Thus, theoutput of Π ′ C is x o the contents of the output counter c when C enters the halt instruction q h .If x o = 0 neuron σ will fire only once. To see this, note that if x o = 0 then s → λ will be appliedin neuron σ at time t j +3 , and thus σ will have 10 spikes (instead of 11) at time t j +4 and therule s → s will be applied in σ ending the computation.We have shown how to simulate arbitrary instructions of the form q i : IN C (1) , q l and q i : DEC (1) , q l , q k . Instructions that operate on counters c and c are simulated in a similar manner.Immediately following the simulation of an instruction Π ′ C is configured to begin simulation of thenext instruction. Each instruction of C is simulated in 14 h + 2 timesteps. The pair of input values26 rigin neurons target neurons σ σ , σ , σ , σ , σ , σ , σ , σ , σ , σ , σ , σ σ , σ , σ , σ , σ , σ , σ , σ , σ , σ σ , σ , σ , σ , σ , σ σ , σ , σ , σ σ , σ , σ , σ σ , σ σ , σ , σ , σ , σ , σ σ σ , σ , σ , σ , σ , σ , σ σ , σ , σ σ , σ , σ σ , σ , σ σ , σ , σ , σ , σ , σ , σ , σ , σ Table 3This table gives the set of synapses of the SN P system Π ′ C . Each origin neuron σ i and target neuron σ j that appearon the same row have a synapse going from σ i to σ j . g g g . . . g u − g u g u +1 . . . g v ss s s s G g g g . . . g u − g u g u +1 . . . g v + s − s + s − s + s − s + s − s + s − s G ′ Fig. 4. Finite state machine G decides if there is any rule applicable in a neuron given the number of spikes in theneuron at a given time in the computation. Each s represents a spike in the neuron. 
Machine G′ keeps track of the movement of spikes into and out of the neuron and decides whether or not a particular rule is applicable at each timestep in the computation. +s represents a single spike entering the neuron and −s represents a single spike exiting the neuron.
(x1, x2) is read into the system in x1 + x2 + 14h + 3 timesteps, and sending the output value x_o out of the system takes x_o + 4 timesteps. Thus, if C completes its computation in time t, then Π′C simulates the computation of C in linear time O(ht + x1 + x2 + x_o). ✷
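As a concrete check of the output convention recalled above, the result of a computation can be decoded from the output neuron's spike train as follows. This is a sketch; `decode_output` is our own helper, not part of the SN P formalism:

```python
# Decode the result of a computation from the output neuron's spike train
# (entry 1 at position t means the neuron fired at timestep t): the output
# is the time interval between the first and second spikes.
def decode_output(spike_train):
    times = [t for t, fired in enumerate(spike_train) if fired]
    if len(times) < 2:
        return 0  # the output neuron fires only once when x_o = 0
    return times[1] - times[0]

# The output neuron of the system above fires at t_{j+4} and again at
# t_{j+x_o+4}, so the decoded value is x_o.
assert decode_output([0, 0, 0, 0, 1, 0, 0, 1]) == 3   # x_o = 3
assert decode_output([0, 0, 0, 0, 1]) == 0            # x_o = 0
```

The second assertion reflects the observation that when x_o = 0 the output neuron fires only once, ending the computation.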
5. Lower bounds for small universal SN P systems
In this section we show that there exists no universal SN P system with only 3 neurons, even when we allow the input technique to be generalised. This is achieved in Theorem 4 by showing that these systems are simulated by log-space bounded Turing machines. Following this, we show that if we generalise the output technique we can give a universal SN P system with extended rules that has only 3 neurons. As a corollary of our proof of Theorem 4, we find that a universal SN P system with extended rules and generalised input and output is not possible with 2 neurons.
In this and other work [15,18] on small SN P systems the input neuron only receives a constant number of spikes from the environment and the output neuron fires no more than a constant number of times. Hence, we call the input standard if the input neuron receives no more than y spikes from the environment, where y is a constant independent of the input (i.e. the number of 1s in its input sequence is < y). Similarly, we call the output standard if the output neuron fires no more than x times, where x is a constant independent of the input. Here we say an SN P system has generalised input if the input neuron is permitted to receive n spikes from the environment, where n ∈ N is the length of its input sequence.
Theorem 4
Let Π be any extended SN P system with only 3 neurons, generalised input and standard output. Then there is a non-deterministic Turing machine T_Π that simulates the computation of Π in space O(log n), where n is the length of the input to Π. PROOF.
Let Π be any extended SN P system with generalised input, standard output, and three neurons, one of which is the output neuron. Also, let y be the maximum number of times the output neuron is permitted to fire, and let q and r be the maximum values for b and p, respectively, over all rules E/s^b → s^p; d in Π.
We begin by explaining how the activity of the output neuron may be simulated using only the states of T_Π (i.e. no workspace is required to simulate it). Recall that the applicability of each rule is determined by a regular expression over a unary alphabet. We can give a single regular expression R that is the union of all the regular expressions for the firing rules of the output neuron. This regular expression R determines whether or not there is any applicable rule in the output neuron at each timestep. Figure 4 gives the deterministic finite automaton G that accepts L(R), the language generated by R. During a computation we may use G to decide which rules are applicable in the output neuron by passing an s to G each time a spike enters it. However, G may not give the correct result if spikes leave the neuron, as it does not record spikes leaving it. Thus, using G we may construct a second machine G′ such that G′ records the movement of spikes going into and out of the neuron. G′ is constructed as follows: G′ has all the same states (including accept states) and transitions as G, along with an extra set of transitions that record spikes leaving the neuron. This extra set of transitions is given as follows: for each transition on s from a state g_i to a state g_j in G there is a new transition on −s going from state g_j to g_i in G′ that records the removal of a spike from the neuron. By recording the dynamic movement of spikes, G′ is able to decide which rules are applicable in the neuron at each timestep during the computation. G′ is also given in Figure 4. To simulate the operation of the output neuron we emulate the operation of G′ in the states of T_Π. Note that there is a single non-deterministic choice to be made in G′.
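The G′ construction can be sketched concretely as follows. This is a minimal model, assuming the unary DFA shape shown in Figure 4 (a tail g_0, …, g_{u−1} feeding a cycle g_u, …, g_v); the concrete values of u and v below are for illustration only:

```python
# Assumed DFA shape from Figure 4: states g_0..g_v, where reading a spike s
# advances along the tail g_0..g_{u-1} and then cycles through g_u..g_v.
u, v = 3, 7

def delta(g):
    # Transition of G on reading one spike s.
    return u if g == v else g + 1

def step(state, symbol):
    """One step of G': '+s' follows G; '-s' reverses a transition of G.
    Returns the set of possible successor states (G' is non-deterministic)."""
    if symbol == '+s':
        return {delta(state)}
    return {g for g in range(v + 1) if delta(g) == state}

# The single non-deterministic choice arises when a spike is removed at g_u
# (the removal may undo a tail step from g_{u-1} or a cycle step from g_v):
assert step(u, '-s') == {u - 1, v}
# Everywhere else, removing a spike has a unique predecessor state:
assert all(len(step(g, '-s')) == 1 for g in range(1, v + 1) if g != u)
```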
This choice is at state g_u if a spike is being removed (−s). It would seem that in order to make the correct choice in this situation we need to know the exact number of spikes in the output neuron. However, we need only store at most u + yq spikes. The reason for this is that if there are more than u + yq spikes in the output neuron, then G′ will not enter state g_{u−1} again. To see this, note that the output neuron spikes a maximum of y times, using at most q spikes each time, and so once there are more than u + yq spikes the number of spikes in the output neuron will remain greater than u for the rest of the computation. Thus, T_Π simulates the activity of the output neuron by simulating the operation of G′ and encoding at most u + yq spikes in its states.
In this paragraph we explain the operation of T_Π; following this, we give an analysis of the space complexity of T_Π. T_Π has 4 tapes, including an output tape, which is initially blank, and a read-only input tape. The tape heads on both the input and output tapes are permitted to move only right. The remaining tapes, tapes 1 and 2, simulate the activity of the other two neurons, respectively; these tapes record the numbers of spikes in those neurons. A timestep of Π is simulated as follows: T_Π scans tapes 1 and 2 to determine if there are any applicable rules in the two neurons at that timestep. The applicability of each neural rule in Π is determined by a regular expression, and so a decider for each rule is easily implemented in the states of T_Π. Recall from the previous paragraph that the applicability of the rules in the output neuron is already recorded in the states of T_Π. Also, T_Π is non-deterministic, and so if more than one rule is applicable in a neuron, T_Π simply chooses the rule to simulate in the same manner as Π. Once T_Π has determined which rules are applicable in each of the three neurons at that timestep, it changes the encodings on tapes 1 and 2 to simulate the change in the number of spikes in the two neurons during that timestep. As mentioned in the previous paragraph, any change in the number of spikes in the output neuron is recorded in the states of T_Π.
The input sequence of Π may be given as binary input to T_Π by placing it on its input tape. Also, if at a given timestep a 1 is read on the input tape, then T_Π simulates a spike entering the simulated input neuron. At each simulated timestep, if the output neuron spikes then a 1 is placed on the output tape, and if it does not spike a 0 is placed on the output tape. Thus the output of Π is encoded on the output tape when the simulation ends.
In a two-neuron system each neuron has at most one outgoing synapse, and so the number of spikes in the system does not increase over time. Thus, the total number of spikes in the two non-output neurons can only increase when the output neuron fires or a spike is sent into the system from the environment. The input is of length n, and so these two neurons receive a maximum of n spikes from the environment. The output neuron fires a total of y times, sending at most r spikes each time, and so the maximum number of spikes in the two neurons during the computation is n + 2ry. Using a binary encoding, tapes 1 and 2 of T_Π encode the numbers of spikes in these neurons using space log(n + 2ry). As mentioned earlier, no space is used to simulate the output neuron, and thus T_Π simulates Π using space O(log n). ✷
It is interesting to note that with a slight generalisation of the system in Theorem 4 we obtain universality. If we remove the restriction that allows the output neuron to fire only a constant number of times, then we may construct a universal SN P system with extended rules and only three neurons. Here we define the output of an extended SN P system with generalised output to be the time interval between the first and second timesteps where exactly x spikes are sent out of the output neuron.
Theorem 5
Let C be a universal counter machine with 2 counters that completes its computation in time t to give the output value x_o when given the input value x. Then there is a universal extended SN P system Π′′C with standard input and generalised output that simulates the computation of C in time O(t + x + x_o) and has only 3 neurons. PROOF.
A graph of Π′′C is constructed by removing the output neuron from the system Π_C given in the proof of Theorem 2 and making another neuron the new output neuron of Π′′C. The rules for Π′′C are given by the first 3 rows of Table 5, and a diagram of the system is obtained by removing the corresponding neurons from Figure 1 and adding a synapse to the environment from the new output neuron. The operation of Π′′C is identical to the operation of Π_C, with the exception of the new output technique. The output of Π′′C is the time interval between the first and second timesteps where exactly 2 spikes are sent out of the output neuron. ✷
From the third paragraph of the proof of Theorem 4 we get the following immediate corollary.
Corollary 1
Let Π be any extended SN P system with only 2 neurons and generalised input and output. Then there is a non-deterministic Turing machine T_Π that simulates the computation of Π in space O(log n), where n is the length of the input to Π.
6. Conclusion
The dramatic improvement on the size of earlier small universal SN P systems given by Theorems 1 and 3 is in part due to the method we use to encode the instructions of the counter machines our systems simulate. In the systems of Păun and Păun [15] each counter machine instruction was encoded by a unique set of neurons. Thus the size of the system is dependent on the number of instructions in the counter machine being simulated. Some improvement was made by Zhang et al. [18] by showing that certain types of instructions may be grouped together. However, the number of neurons used by the system remained dependent on the number of instructions in the counter machine being simulated. In our systems each unique counter machine instruction is encoded as a unique number of spikes, and thus the size of our SN P systems is independent of the number of instructions used by the counter machine they simulate. The technique of encoding the instructions as spikes was first used to construct small universal SN P systems in [14].
The results from Theorems 2 and 4 give tight upper and lower bounds on the size of the smallest universal SN P system with extended rules. Thus in Theorem 2 we have given the smallest possible universal SN P system with extended rules. The results from Theorem 5 and Corollary 1 give tight upper and lower bounds on the size of the smallest universal SN P system with extended rules and generalised output. Thus, Theorem 5 gives the smallest possible universal SN P system with extended rules and generalised output.
The lower bounds given in Theorem 4 are also applicable to standard SN P systems, and thus give a lower bound of 4 neurons for the smallest possible standard system that is universal.
However, when compared with extended systems, the rules used in standard SN P systems are quite limited, and so it seems likely that this lower bound of 4 neurons can be increased. Note that here and in [15,18] the size of a universal SN P system is measured by the number of neurons in the system. However, the size of an SN P system could also be measured by the number of neural rules in the system.
References
[1] A. M. Barzdin. On a class of Turing machines (Minsky machines). Algebra i Logika, 1(6):42–51, 1963. (In Russian).
[2] H. Chen, M. Ionescu, and T. Ishdorj. On the efficiency of spiking neural P systems. In M. A. Gutiérrez-Naranjo, G. Păun, A. Riscos-Núñez, and F. J. Romero-Campero, editors, Proceedings of Fourth Brainstorming Week on Membrane Computing, pages 195–206, Sevilla, Feb. 2006.
[3] P. C. Fischer, A. R. Meyer, and A. L. Rosenberg. Counter machines and counter languages. Mathematical Systems Theory, 2(3):265–283, 1968.
[4] M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems with exhaustive use of rules. International Journal of Unconventional Computing, 3(2):135–154, 2007.
[5] M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems. Fundamenta Informaticae, 71(2-3):279–308, 2006.
[6] M. Ionescu and D. Sburlan. Some applications of spiking neural P systems. In G. Eleftherakis, P. Kefalas, and G. Păun, editors, Proceedings of the Eighth Workshop on Membrane Computing, pages 383–394, Thessaloniki, June 2007.
[7] I. Korec. Small universal register machines. Theoretical Computer Science, 168(2):267–301, Nov. 1996.
[8] A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. On the computational power of spiking neural P systems. In M. A. Gutiérrez-Naranjo, G. Păun, A. Romero-Jiménez, and A. Riscos-Núñez, editors, Proceedings of the Fifth Brainstorming Week on Membrane Computing, pages 227–245, Sevilla, Jan. 2007.
[9] A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. Solving numerical NP-complete problems with spiking neural P systems. In G. Eleftherakis, P. Kefalas, and G. Păun, editors, Proceedings of the Eighth Workshop on Membrane Computing, pages 405–423, Thessaloniki, June 2007.
[10] M. Minsky. Computation, Finite and Infinite Machines. Prentice-Hall, 1967.
[13] In Unconventional Computation, 7th International Conference, UC 2008, volume 5204 of LNCS, pages 189–205, Vienna, Aug. 2008. Springer.
[14] T. Neary. A small universal spiking neural P system. In International Workshop on Computing with Biomolecules, pages 65–74, Vienna, Aug. 2008. Austrian Computer Society.
[15] A. Păun and G. Păun. Small universal spiking neural P systems.
BioSystems , 90(1):48–60, 2007.[16] R. Schroeppel. A two counter machine cannot calculate 2 n . Technical Report AIM-257, A.I. memo 257, ComputerScience and Artificial Intelligence Laboratory, MIT, Cambridge, MA, 1972.[17] X. Zhang, Y. Jiang, and L. Pan. Small universal spiking neural P systems with exhaustive use of rules. In , pages 117–128,Adelaide, Australia, Oct. 2008. IEEE.[18] X. Zhang, X. Zeng, and L. Pan. Smaller universal spiking neural P systems. Fundamenta Informaticae , 87(1):117–136, Nov. 2008. euron rules σ s h +1 /s h → s h , s h +2 /s h +1 → s h +1 , s h +1 → s h +4 , s → λ, s → λs h +4 i → s h +4 l , s h +4 i → s h + l ) , if l < hs h +4 i → s h +5 , s h +4 i → s h +5 , if l = hs h +4 i → s h + k ) , if k = hs h +4 i → s h +5 , if k = hσ ( s h ) ∗ s h +1 /s h → s, ( s h ) ∗ s h +2 /s h +2 → s h ( s h ) ∗ s h + i ) /s h + i ) → s h + i ) if q i : INC (1) ∈ { Q } ( s h ) ∗ s h + i ) /s h +4( h + i ) → s h if q i : INC ( x ) ∈ { Q } , x = 1( s h ) ∗ s h +4( h + i ) /s h +4 i → s h +4 i if q i : DEC (1) ∈ { Q } s h +4( h + i ) /s h + i ) → s h + i ) if q i : DEC (1) ∈ { Q } ( s h ) ∗ s h + i ) /s h + i ) → s h if q i : DEC ( x ) ∈ { Q } , x = 1 σ s /s → s, s h +1 /s h → s, ( s h ) ∗ s h +3 /s h +3 → s h , ( s h ) ∗ s h +5 /s h → s ( s h ) ∗ s h +5 /s h → s, s h +5 → s , s h +5 → s ( s h ) ∗ s h + i ) /s h + i ) → s h + i ) if q i : INC (2) ∈ { Q } ( s h ) ∗ s h + i ) /s h +4( h + i ) → s h if q i : INC ( x ) ∈ { Q } , x = 2( s h ) ∗ s h +4( h + i ) /s h +4 i → s h +4 i if q i : DEC (2) ∈ { Q } s h +4( h + i ) /s h + i ) → s h + i ) if q i : DEC (2) ∈ { Q } ( s h ) ∗ s h + i ) /s h + i ) → s h if q i : DEC ( x ) ∈ { Q } , x = 2 σ s h +1 /s h → s h − , s h +2 /s h → s h − , s h +3 → s h ( s h ) ∗ s h + i ) /s h + i ) → s h + i ) if q i : INC (3) ∈ { Q } ( s h ) ∗ s h + i ) /s h +4( h + i ) → s h if q i : INC ( x ) ∈ { Q } , x = 3( s h ) ∗ s h +4( h + i ) /s h +4 i → s h +4 i if q i : DEC (3) ∈ { Q } s h +4( h + i ) /s h + i ) → s h + i ) if q i : DEC 
(3) ∈ { Q } ( s h ) ∗ s h + i ) /s h + i ) → s h if q i : DEC ( x ) ∈ { Q } , x = 3 σ s → λ, s h → λ, s h → λ, s h + i ) → λ, s h +4 i → λ, s → s Table 4This table gives the rules for each of the neurons of Π C . euron rules σ s h +1 /s h → s h , s h +2 /s h − → s h +3 , s h +3 → λ, s → λ, s → λs h +4 i → s h +4 l , s h +4 i → s h + l ) , if l < hs h +4 i → s h +5 , s h +4 i → s h +5 , if l = hs h +4 i → s h + k ) , if k = hs h +4 i → s h +5 , if k = hσ ( s h ) ∗ s h + i ) /s h + i ) → s h + i ) if q i : INC (1) ∈ { Q } ( s h ) ∗ s h + i ) /s h +4( h + i ) → s h if q i : INC (2) ∈ { Q } ( s h ) ∗ s h +4( h + i ) /s h +4 i → s h +4 i if q i : DEC (1) ∈ { Q } s h +4( h + i ) /s h + i ) → s h + i ) if q i : DEC (1) ∈ { Q } ( s h ) ∗ s h + i ) /s h + i ) → s h if q i : DEC (2) ∈ { Q } σ s /s → s, s h +1 /s h → s h , ( s h ) ∗ s h +5 /s h → s ( s h ) ∗ s h +5 /s h → s, s h +5 → s , s h +5 → s ( s h ) ∗ s h + i ) /s h + i ) → s h + i ) if q i : INC (2) ∈ { Q } ( s h ) ∗ s h + i ) /s h +4( h + i ) → s h if q i : INC (1) ∈ { Q } ( s h ) ∗ s h +4( h + i ) /s h +4 i → s h +4 i if q i : DEC (2) ∈ { Q } s h +4( h + i ) /s h + i ) → s h + i ) if q i : DEC (2) ∈ { Q } ( s h ) ∗ s h + i ) /s h + i ) → s h if q i : DEC (1) ∈ { Q } σ s h → λ, s h → λ, s → λ, s h → λ, s h +4 i → λ, s h + i ) → λ, s → s Table 5This table gives the rules for each of the neurons of Π C . 
euron rules σ s → s , σ s /s → s , s /s → s , s /s → s , s /s → s , s /s → s , s h + i ) − /s → s , s h + i ) − /s → s , s h + i ) − /s → s, s h + i ) − /s → ss h + i ) − /s → s , s h +1 /s → s, s ∗ s h +3 /s → ss h + i )+1 /s → s if q i : INC ∈ { Q } s h + i ) − /s h + i − l )+1 → s if q i : INC (1) ∈ { Q } s h + i ) − /s h + i − l )+2 → s if q i : INC ( x ) ∈ { Q } , x = 1 s h + i ) − /s h + i − k ) − → s if q i : DEC ∈ { Q } s h + i )+1 /s → s if q i : DEC (1) ∈ { Q } s h + i )+1 /s → s if q i : DEC ( x ) ∈ { Q } , x = 1 s h + i ) − /s → s if q i : DEC (1) ∈ { Q } s h + i ) − /s → s if q i : DEC ( x ) ∈ { Q } , x = 1 s h + i ) − /s h + i − l ) − → s if q i : DEC (1) ∈ { Q } s h + i ) − /s h + i − l ) − → s if q i : DEC ( x ) ∈ { Q } , x = 1 σ s /s → s , s /s → s , s /s → s , s /s → s , s h + i ) − /s → ss h + i )+5 /s → s , s h + i ) − /s → s , s h + i ) − /s → s , s h + i ) − /s → ss h + i ) − /s → s s h + i ) − /s h + i − l ) − → s , s h + i )+4 /s h + i − k )+5 → ss h +1 /s → s , s ∗ s h +3 /s → ss h + i )+1 /s h + i − l )+6 → s if q i : INC (1) ∈ { Q } s h + i )+1 /s → s if q i : INC ( x ) ∈ { Q } , x = 1 s h + i ) − /s h + i − l )+2 → s if q i : INC ( x ) ∈ { Q } s h + i ) − /s h + i − k ) − → s if q i : DEC ∈ { Q } s h + i )+1 /s → s if q i : DEC ( x ) ∈ { Q } , x = 1 s h + i ) − /s → s if q i : DEC (1) ∈ { Q } s h + i ) − /s h + i − l )+5 → s if q i : DEC ( x ) ∈ { Q } , x = 1Table 6This table gives the rules for neurons σ to σ of Π ′ C . 
euron rules σ s /s → s , s /s → s , s /s → s , s /s → s , s h + i ) − /s → ss h + i )+5 /s → s , s h + i ) − /s → s , s h + i ) − /s → ss h + i ) − /s h + i − l )+5 → s, s h + i )+4 /s h + i − k )+5 → s , s h +1 /s → ss h + i )+1 /s h + i − l )+6 → s if q i : INC ( x ) ∈ { Q } , x = 3 s h + i )+1 /s → s if q i : INC (3) ∈ { Q } s h + i ) − /s h + i − l )+2 → s if q i : INC ( x ) ∈ { Q } s h + i ) − /s h + i − k ) − → s if q i : DEC ∈ { Q } s h + i )+1 /s → s if q i : DEC (3) ∈ { Q } s h + i ) − /s h + i − l )+10 → s if q i : DEC (3) ∈ { Q } s h + i ) − /s → s if q i : DEC ( x ) ∈ { Q } , x = 3 σ s /s → s , s /s → s , s /s → s , s /s → ss h + i )+5 /s → s , s h + i ) − /s → s , s h + i ) − /s h + i − l )+10 → ss h + i )+4 /s h + i − k )+5 → s , s h +1 /s → ss h + i )+1 /s h + i − l )+6 → s if q i : INC ∈ { Q } σ , σ s /s → s , s /s → s , s /s → s , s /s → s , s h + i )+5 /s → s , s h + i ) − /s → s , s h + i ) − /s h + i − l )+10 → ss h + i )+4 /s h + i − k )+5 → ss h + i )+1 /s h + i − l )+6 → s if q i : INC ∈ { Q } σ ( s ) ∗ s /s → s , ( s ) ∗ s /s → s , s → λ ,( s ) ∗ s /s → s , ( s ) ∗ s /s → s , σ ( s ) ∗ s /s → s , ( s ) ∗ s /s → s , ( s ) ∗ s /s → s , s → λ , ( s ) ∗ s /s → s , σ ( s ) ∗ s /s → s , ( s ) ∗ s /s → s , ( s ) ∗ s /s → s ,( s ) ∗ s /s → s , ( s ) ∗ s /s → s , s → λ , s → λσ s → λ , s → λ , s → λ , s /s → s , ( s ) ∗ s /s → s , s → λ , s → λ , s → λ , s → sσ , σ ( s ) ∗ s /s → s , s → s , σ , σ , σ , σ ( s ) ∗ s /s → s , s → λ ,Table 7This table gives the rules for neurons σ to σ of Π ′ C ..