A boundary between universality and non-universality in spiking neural P systems


Authors: Turlough Neary

Turlough Neary ¹
Boole Centre for Research in Informatics, University College Cork, Ireland.

Abstract

In this work we offer a significant improvement on the previous smallest spiking neural P systems and solve the problem of finding the smallest possible extended spiking neural P system. Păun and Păun [15] gave a universal spiking neural P system with 84 neurons and another that has extended rules with 49 neurons. Subsequently, Zhang et al. [18] reduced the number of neurons used to give universality to 67 for spiking neural P systems and to 41 for the extended model. Here we give a small universal spiking neural P system that has only 17 neurons and another that has extended rules with 5 neurons. All of the above-mentioned spiking neural P systems suffer from an exponential slowdown when simulating Turing machines. Using a more relaxed encoding technique we get a universal spiking neural P system that has extended rules with only 4 neurons. This latter spiking neural P system simulates 2-counter machines in linear time and thus suffers from a double-exponential time overhead when simulating Turing machines. We show that extended spiking neural P systems with 3 neurons are simulated by log-space bounded Turing machines, and so there exists no such universal system with 3 neurons. It immediately follows that our 4-neuron system is the smallest possible extended spiking neural P system that is universal. Finally, we show that if we generalise the output technique we can give a universal spiking neural P system with extended rules that has only 3 neurons. This system is also the smallest of its kind, as a universal spiking neural P system with extended rules and generalised output is not possible with 2 neurons.
Keywords: spiking neural P systems, small universal spiking neural P systems, computational complexity, strong universality, weak universality

Email address: tneary@cs.nuim.ie (Turlough Neary).
URL: http://www.cs.nuim.ie/~tneary/ (Turlough Neary).

¹ Turlough Neary is funded by Science Foundation Ireland Research Frontiers Programme grant number 07/RFP/CSMF641.

8 November 2018

1. Introduction

Spiking neural P systems (SN P systems) [5] are a relatively new computational model inspired by both P systems and spiking neural networks. It has been shown that these systems are computationally universal [5]. Recently, Păun and Păun [15] gave two small universal SN P systems: an SN P system with 84 neurons and an extended SN P system with 49 neurons (that uses rules without delay). Păun and Păun conjectured that it is not possible to give a significant decrease in the number of neurons of their two universal systems. Zhang et al. [18] offered such a significant decrease in the number of neurons used to give such small universal systems. They give a universal SN P system with 67 neurons and another, which has extended rules (without delay), with 41 neurons. Here we give a small universal SN P system that has only 17 neurons and another, which has extended rules (without delay), with 5 neurons. Using a more relaxed encoding we get a universal SN P system that has extended rules (without delay) with 4 neurons. Table 1 gives the smallest universal SN P systems and their respective simulation time and space overheads. Note from Table 1 that, in addition to its small size, our 17-neuron system uses rules without delay. The other small universal SN P systems with standard rules [15,18] do not have this restriction. In this work we also show that extended SN P systems with 3 neurons and generalised input are simulated by log-space bounded Turing machines.
As a result, there exists no such universal system with 3 neurons, and thus our 4-neuron system is the smallest possible universal extended SN P system. Following this, we show that if we generalise the output technique we can give a universal SN P system with extended rules that has only 3 neurons. In addition, we show that a universal SN P system with extended rules and generalised output is not possible with 2 neurons, and thus our 3-neuron system is the smallest of its kind.

From a previous result [13] it is known that there exists no universal SN P system that simulates Turing machines in less than exponential time and space. It is a relatively straightforward matter to generalise this result to show that extended SN P systems suffer from the same inefficiencies. It immediately follows that the universal systems we present here and those found in [15,18] have exponential time and space requirements. However, it is possible to give a time-efficient SN P system when we allow exhaustive use of rules. A universal extended SN P system with exhaustive use of rules has been given that simulates Turing machines in linear time [12]. Furthermore, this system has only 10 neurons. SN P systems with exhaustive use of rules were originally proved computationally universal by Ionescu et al. [4]. However, the technique used to prove universality suffered from an exponential time overhead. Using different forms of SN P systems, a number of time-efficient (polynomial or constant time) solutions to NP-hard problems have been given [2,8,9]. All of these solutions to NP-hard problems rely on families of SN P systems. Specifically, the size of the problem instance determines the number of neurons in the SN P system that solves that particular instance. This is similar to solving problems with circuit families, where each input size has a specific circuit that solves it.
Ionescu and Sburlan [6] have shown that SN P systems simulate circuits in linear time.

In Section 2 we give a definition for SN P systems, explain their operation and give other relevant technical details. In Section 3 we give a definition for counter machines and we also discuss some notions of universality. Following this, in Section 4 we give our small universal SN P systems and show how their size can be reduced if we use a more relaxed encoding. In Section 5 we give our proof showing that extended SN P systems with 3 neurons and generalised input are simulated by log-space bounded Turing machines. Section 5 also contains our universal 3-neuron system with generalised output. We end the paper with some discussion and conclusions.

number of   simulation          type of     exhaustive     author
neurons     time/space          rules       use of rules
84          exponential         standard    no             Păun and Păun [15]
67          exponential         standard    no             Zhang et al. [18]
49          exponential         extended†   no             Păun and Păun [15]
41          exponential         extended†   no             Zhang et al. [18]
12          double-exponential  extended†   no             Neary [14]
18          exponential         extended    no             Neary [11,14]*
125         exponential/        extended†   yes            Zhang et al. [17]
            double-exponential
18          polynomial/         extended    yes            Neary [13]
            exponential
10          linear/exponential  extended    yes            Neary [12]
17          exponential         standard†   no             Section 4
5           exponential         extended†   no             Section 4
4           double-exponential  extended†   no             Section 4
3           double-exponential  extended‡   no             Section 5

Table 1
Small universal SN P systems. The "simulation time" column gives the overheads used by each system when simulating a standard single-tape Turing machine. † indicates that there is a restriction on the rules as delay is not used, and ‡ indicates that a more generalised output technique is used.
*The 18-neuron system is not explicitly given in [14]; it is, however, mentioned at the end of the paper and is easily derived from the other system presented in [14]. Also, its operation and its graph were presented in [11].

2. SN P systems

Definition 1 (Spiking neural P system) A spiking neural P system (SN P system) is a tuple Π = (O, σ1, σ2, ..., σm, syn, in, out), where:

(i) O = {s} is the unary alphabet (s is known as a spike),
(ii) σ1, σ2, ..., σm are neurons, of the form σi = (ni, Ri), 1 ≤ i ≤ m, where:
  (a) ni ≥ 0 is the initial number of spikes contained in σi,
  (b) Ri is a finite set of rules of the following two forms:
    (i) E/s^b → s; d, where E is a regular expression over s, b ≥ 1 and d ≥ 0,
    (ii) s^e → λ, where λ is the empty word, e ≥ 1, and for all E/s^b → s; d from Ri, s^e ∉ L(E), where L(E) is the language defined by E,
(iii) syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} is the set of synapses between neurons, where i ≠ j for all (i, j) ∈ syn,
(iv) in, out ∈ {σ1, σ2, ..., σm} are the input and output neurons, respectively.

A firing rule r = E/s^b → s; d is applicable in a neuron σi if there are j ≥ b spikes in σi and s^j ∈ L(E), where L(E) is the set of words defined by the regular expression E. If, at time t, rule r is executed, then b spikes are removed from the neuron, and at time t + d the neuron fires. When a neuron σi fires, a spike is sent to each neuron σj for every synapse (i, j) in Π. Also, the neuron σi remains closed and does not receive spikes until time t + d, and no other rule may execute in σi until time t + d + 1. A forgetting rule r′ = s^e → λ is applicable in a neuron σi if there are exactly e spikes in σi. If r′ is executed, then e spikes are removed from the neuron.
At ea c h times tep t a rule m ust b e applied in eac h neuron if there is one or more applicable r ules at time t . Thus, while the applicatio n of rules in each individual neuron is sequential the neurons op erate in parallel with each other. Note from 2b(i) of Definition 1 that there ma y b e t wo rules of the form E /s b → s ; d , that are applicable in a single neuron at a given time. If this is the case then the next rule to execute is chosen non-deterministically . An ext en de d SN P sys tem [15] has mo re general rules of the form E /s b → s p ; d , where b > p > 1. Thu s, a synapse in an SN P sys tem with extended rules ma y transmit more than one spike in a single timestep. The SN P s ystems we present in this work use rules without delay , and thus in the sequel w e write rules as E /s b → s p . Also, if in a r ule E = s b then we write the rule as s b → s p . In the same manner as in [15 ], spikes are in tro duced into the system fro m the en vironment by reading in a binar y seq ue nce (or word) w ∈ { 0 , 1 } via the input neur on σ 1 . The seque nce w is read from left to right o ne sym b ol a t each timestep and a s pike enters the input neuron on a g iv en timestep iff the read sym b ol is 1. The output of an SN P system Π is the time betw een the first and second firing rule applied in the output ne ur on and is given by the v a lue Π( w ) ∈ N . A configuration c o f an SN P system co nsists o f a word w and a sequence of natura l num b ers ( r 1 , r 2 , . . . , r m ) wher e r i is the n umber of spikes in σ i and w represents the remaining input yet to be read into the system. A computation step c j ⊢ c j +1 is as follows: eac h num b er r i is upda ted depe nding on the n um b er o f spik es neur on σ i uses up and receives during the synchronous appli- cation of all applicable rules in co nfig uration c j . In additio n, if w 6 = λ then the leftmost symbol of w is r e mo ved. 
An SN P system computation is a finite sequence of configurations c1, c2, ..., ct that ends in a terminal configuration ct, where cj ⊢ cj+1 for all j < t. A terminal configuration is a configuration where the input sequence has finished being read in via the input neuron (i.e. w = λ, the empty word) and either there is no applicable rule in any of the neurons or the output neuron has spiked exactly v times (where v is a constant independent of the input).

Let φx be the xth n-ary partial recursive function in a Gödel enumeration of all n-ary partial recursive functions. The natural number value φx(y1, y2, ..., yn) is the result given by φx on input (y1, y2, ..., yn).

Definition 2 [Universal SN P system] An SN P system Π is universal if there are recursive functions g and f such that for all x, y1, y2, ..., yn ∈ ℕ we have φx(y1, y2, ..., yn) = f(Π(g(x, y1, y2, ..., yn))).

In the next section we give some further discussion on the subject of definitions of universality.

3. Counter machines

Definition 3 (Counter machine) A counter machine is a tuple C = (z, R, cm, Q, q1, qh), where z gives the number of counters, R is the set of input counters, cm is the output counter, Q = {q1, q2, ..., qh} is the set of instructions, and q1, qh ∈ Q are the initial and halt instructions, respectively.

Each counter cj stores a natural number value y ≥ 0. Each instruction qi is of one of the following two forms, qi: INC(j), ql or qi: DEC(j), ql, qk, and is executed as follows:

– qi: INC(j), ql — increment the value y stored in counter cj by 1 and move to instruction ql.
– qi: DEC(j), ql, qk — if the value y stored in counter cj is greater than 0 then decrement this value by 1 and move to instruction ql; otherwise, if y = 0, move to instruction qk.
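Definition 3 translates directly into a small interpreter, sketched below. The tuple encoding of instructions and the 'halt' label standing for qh are our own illustrative conventions, not notation from the paper.

```python
def run_counter_machine(instructions, counters, output_counter, max_steps=10**6):
    """Run a counter machine in the sense of Definition 3.

    instructions maps an instruction label to ('INC', j, l) or
    ('DEC', j, l, k); the label 'halt' plays the role of q_h.
    counters is a dict {j: value}; the value of the output counter
    when the machine halts is returned.
    """
    q = 1                               # q_1 is the initial instruction
    for _ in range(max_steps):
        if q == 'halt':
            return counters[output_counter]
        ins = instructions[q]
        if ins[0] == 'INC':
            _, j, l = ins
            counters[j] += 1            # increment c_j, move to q_l
            q = l
        else:
            _, j, l, k = ins
            if counters[j] > 0:
                counters[j] -= 1        # decrement c_j, move to q_l
                q = l
            else:
                q = k                   # c_j holds 0, move to q_k
    raise RuntimeError('step limit exceeded')

# Example: a 2-counter program that empties c_2 into c_1, computing addition.
program = {1: ('DEC', 2, 2, 'halt'), 2: ('INC', 1, 1)}
print(run_counter_machine(program, {1: 3, 2: 4}, 1))   # -> 7
```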
At the beginning of a computation the first instruction executed is q1. The input to the counter machine is initially stored in the input counters. If the counter machine's control enters instruction qh, then the computation halts at that timestep. The result of the computation is the value y stored in the output counter cm when the computation halts.

We now consider some different notions of universality. Korec [7] gives universality definitions that describe some counter machines as weakly universal and other counter machines as strongly universal.

Definition 4 [Korec [7]] A register machine M will be called strongly universal if there is a recursive function g such that for all x, y ∈ ℕ we have φx(y) = Φ²M(g(x), y).

Here Φ²M(g(x), y) is the value stored in the output counter at the end of a computation when M is started with the values g(x) and y in its input counters. Korec's definition insists that the value y should not be changed before passing it as input to M. However, if we consider computing an n-ary function with a Korec-strong universal counter machine then it is clear that n arguments must be encoded as a single input y. Many Korec-strong universal counter machines would not satisfy a definition where the function φx in Definition 4 is replaced with an n-ary function with n > 1. For example, let us give a new definition where we replace the equation "φx(y) = Φ²M(g(x), y)" with the equation "φⁿx(y1, y2, ..., yn) = Φⁿ⁺¹M(g(x), y1, y2, ..., yn)" in Definition 4. Note that for any counter machine M with r counters, if r ≤ n then M does not satisfy this new definition. It could be considered that Korec's notion of strong universality is somewhat arbitrary for the following reason: Korec's definition will admit machines that require n-ary input (y1, y2, ..., yn) to be encoded as the single input y when simulating an n-ary function, but his definition will not admit a machine that applies an encoding function to y (e.g. y² is not permitted). Perhaps when one uses this notion of universality it would be more appropriate to refer to it as strongly universal for unary partial recursive functions instead of simply strongly universal.

Korec [7] also gives a number of other definitions of universality. If the equation φx(y) = Φ²M(g(x), y) in Definition 4 above is replaced with any one of the equations φx(y) = Φ¹M(g²(x, y)), φx(y) = f(Φ²M(g(x), y)) or φx(y) = f(Φ¹M(g²(x, y))), then the counter machine M is weakly universal. Korec gives another definition where the equation φx(y) = Φ²M(g(x), y) in Definition 4 is replaced with the equation φx(y) = f(Φ²M(g(x), h(y))). However, he does not include this definition in his list of weakly universal machines, even though the equation φx(y) = f(Φ²M(g(x), h(y))) allows for a more relaxed encoding than the equation φx(y) = f(Φ²M(g(x), y)) and thus gives a weaker form of universality.

For each number m > 2 there exist universal m-counter machines that allow φⁿx and its input (y1, y2, ..., yn) to be encoded separately (e.g. via g(x) and hⁿ(y1, y2, ..., yn)). For universal 2-counter machines, all of the current algorithms encode the function φⁿx and its input (y1, y2, ..., yn) together as a single input (e.g. via gⁿ⁺¹(x, y1, y2, ..., yn)). Using such encodings it is only possible to give universal 2-counter machines that Korec would class as weakly universal. Some other limitations of 2-counter machines were shown independently by Schroeppel [16] and Barzdin [1].
In both cases the authors are examining unary functions that are uncomputable for 2-counter machines when the input value to the counter machine must equal the input to the function. For example, Schroeppel shows that, given n as input, a 2-counter machine cannot compute 2^n. It is interesting to note that one can give a Korec-strong universal counter machine that is as time/space inefficient as a Korec-weak universal 2-counter machine. Korec's definition of strong universality deals with input and output only and is not concerned with the (time/space) efficiency of the computation.

In earlier work [15], Korec's notion of strong universality was adopted for SN P systems ² as follows: a spiking neural P system Π is strongly universal if Π(10^(y−1)10^(x−1)1) = φx(y) for all x and y (here if φx(y) is undefined, so too is Π(10^(y−1)10^(x−1)1)). As with the SN P systems given in [15,18], the systems we give in Theorems 1 and 3 satisfy the notion of strong universality adopted from Korec in [15]. Analogously, our system in Theorem 2 could be compared to what Korec refers to as weak universality. However, as we noted in our analysis above, it could be considered that Korec's notion of strong universality is somewhat arbitrary, and we also pointed out some inconsistency in his notion of weak universality. Hence, in this work we rely on time/space complexity analysis to compare the encodings used by small SN P systems (see Table 1).

It is well known that counter machines require an exponential time overhead to simulate Turing machines [3]. Counter machines with only 2 counters are universal [10]; however, they simulate Turing machines with a double-exponential time overhead. In the sequel we give some universal SN P systems that simulate 3-counter machines and others that simulate 2-counter machines.
The reason for this is that when using our algorithm there is a trade-off between the size and the time efficiency of the system. This trade-off is dependent on whether we choose to simulate 3-counter machines or 2-counter machines. When simulating Turing machines, 3-counter machines suffer from an exponential time overhead and 2-counter machines suffer from a double-exponential time overhead, and thus the simulation of 3-counter machines is preferable when considering the time efficiency of the system. If we are considering the size of our system then 2-counter machines have an advantage over 3-counter machines, as our algorithms require a constant number of neurons to simulate each counter.

4. Small universal SN P systems

We begin this section by giving our two extended universal systems ΠC3 and ΠC2, and following this we give our standard system Π′C3. We prove the universality of ΠC3 and Π′C3 by showing that they each simulate a universal 3-counter machine. From ΠC3 we obtain the system ΠC2, which simulates a universal 2-counter machine.

Theorem 1 Let C3 be a universal counter machine with 3 counters that completes its computation in time t to give the output value xo when given the pair of input values (x1, x2). Then there is a universal extended SN P system ΠC3 that simulates the computation of C3 in time O(t + x1 + x2 + xo) and has only 5 neurons.

PROOF. Let C3 = (3, {c1, c2}, c2, Q, q1, qh), where Q = {q1, q2, ..., qh}. Our SN P system ΠC3 is given by Figure 1 and Table 4. The algorithm given for ΠC3 is deterministic.

4.0.1. Encoding of a configuration of C3 and reading input into ΠC3

A configuration of C3 is stored as spikes in the neurons of ΠC3. The next instruction qi to be executed is stored in each of the neurons σ2, σ3 and σ4 as 4(h + i) spikes.
Let x1, x2 and x3 be the values stored in counters c1, c2 and c3, respectively. Then the values x1, x2 and x3 are stored as 8h(x1 + 1), 8h(x2 + 1) and 8h(x3 + 1) spikes in neurons σ2, σ3 and σ4, respectively.

The input to ΠC3 is read into the system via the input neuron σ1 (see Figure 1). If C3 begins its computation with the values x1 and x2 in counters c1 and c2, respectively, then the binary sequence w = 10^(x1−1)10^(x2−1)1 is read in via the input neuron σ1. Thus, σ1 receives a single spike from the environment at times t1, t_{x1+1} and t_{x1+x2+1}.

² Note that no formal definition of this notion was explicitly given in [15].

[Figure 1: Universal extended SN P system ΠC3, with input neuron σ1, neurons σ2, σ3 and σ4 for counters c1, c2 and c3, and output neuron σ5. Each oval labeled σi is a neuron. An arrow going from neuron σi to neuron σj illustrates a synapse (i, j).]

We explain how the system is initialised to encode an initial configuration of C3 by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t. Before the computation begins, neuron σ1 initially contains 8h spikes, σ3 contains 2 spikes, σ4 contains 8h + 1 spikes and all other neurons contain no spikes. Thus, when σ1 receives its first spike at time t1 we have

t1:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
     σ3 = 2,       s^2/s → s,
     σ4 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h−1),

where on the left σk = z gives the number z of spikes in neuron σk at time t, and on the right is the rule that is to be applied at time t, if there is an applicable rule at that time.
Thus, from Figure 1, when we apply the rule s^(8h+1)/s^(8h) → s^(8h) in neuron σ1, s^2/s → s in σ3, and s^(8h+1)/s^(8h) → s^(8h−1) in σ4 at time t1 we get

t2:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
     σ2 = 8h,
     σ3 = 8h + 1,  s^(8h+1)/s^(8h) → s,
     σ4 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h−1),
     σ5 = 1,       s → λ,

t3:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
     σ2 = 16h,
     σ3 = 8h + 1,  s^(8h+1)/s^(8h) → s,
     σ4 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h−1),
     σ5 = 1,       s → λ.

Neuron σ1 fires on every timestep between times t1 and t_{x1+1} to send a total of 8hx1 spikes to σ2; thus we get

t_{x1+1}:  σ1 = 8h + 2,  s^(8h+2)/s^(8h+1) → s^(8h+1),
           σ2 = 8hx1,
           σ3 = 8h + 1,  s^(8h+1)/s^(8h) → s,
           σ4 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h−1),
           σ5 = 1,       s → λ,

t_{x1+2}:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
           σ2 = 8h(x1+1) + 1,  (s^(8h))* s^(8h+1) / s^(8h) → s,
           σ3 = 8h + 2,
           σ4 = 8h + 2,  s^(8h+2)/s^(8h) → s^(8h−1),
           σ5 = 1,       s → λ,

t_{x1+3}:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
           σ2 = 8h(x1+1) + 1,  (s^(8h))* s^(8h+1) / s^(8h) → s,
           σ3 = 16h + 2,
           σ4 = 8h + 2,  s^(8h+2)/s^(8h) → s^(8h−1).

Neuron σ1 fires on every timestep between times t_{x1+1} and t_{x1+x2+1} to send a total of 8hx2 spikes to σ3. Thus, when σ1 receives the last spike from its environment we have

t_{x1+x2+1}:  σ1 = 8h + 2,  s^(8h+2)/s^(8h+1) → s^(8h+1),
              σ2 = 8h(x1+1) + 1,  (s^(8h))* s^(8h+1) / s^(8h) → s,
              σ3 = 8hx2 + 2,
              σ4 = 8h + 2,  s^(8h+2)/s^(8h) → s^(8h−1),

t_{x1+x2+2}:  σ1 = 8h + 1,  s^(8h+1)/s^(8h) → s^(8h),
              σ2 = 8h(x1+1) + 2,  (s^(8h))* s^(8h+2) / s^(8h+2) → s^(2h),
              σ3 = 8h(x2+1) + 3,  (s^(8h))* s^(8h+3) / s^(8h+3) → s^(2h),
              σ4 = 8h + 3,  s^(8h+3) → s^(2h).
t_{x1+x2+3}:  σ1 = 6h + 1,  s^(6h+1) → s^(4h+4),
              σ2 = 8h(x1+1),
              σ3 = 8h(x2+1),
              σ4 = 8h,
              σ5 = 2h,  s^(2h) → λ,

t_{x1+x2+4}:  σ2 = 8h(x1+1) + 4(h+1),
              σ3 = 8h(x2+1) + 4(h+1),
              σ4 = 8h + 4(h+1).

At time t_{x1+x2+4} neuron σ2 contains 8h(x1+1) + 4(h+1) spikes, σ3 contains 8h(x2+1) + 4(h+1) spikes and σ4 contains 8h + 4(h+1) spikes. Thus at time t_{x1+x2+4} the SN P system encodes an initial configuration of C3 (the first instruction q1 is encoded as 4(h+1) spikes).

4.0.2. ΠC3 simulating qi: INC(1), ql

Let counters c1, c2 and c3 have values x1, x2 and x3, respectively. Then the simulation of qi: INC(1), ql begins at time t_j with 8h(x1+1) + 4(h+i) spikes in σ2, 8h(x2+1) + 4(h+i) spikes in σ3 and 8h(x3+1) + 4(h+i) spikes in σ4. Thus, at time t_j we have

t_j:  σ2 = 8h(x1+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(4(h+i)),
      σ3 = 8h(x2+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(8h+4(h+i)) → s^(6h),
      σ4 = 8h(x3+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(8h+4(h+i)) → s^(6h).

From Figure 1, when we apply the rule (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(4(h+i)) in neuron σ2 and the rule (s^(8h))* s^(4(h+i)) / s^(8h+4(h+i)) → s^(6h) in σ3 and σ4 at time t_j we get

t_{j+1}:  σ1 = 16h + 4i,  s^(16h+4i) → s^(12h+4l),
          σ2 = 8h(x1+1),
          σ3 = 8hx2,
          σ4 = 8hx3,
          σ5 = 6h,  s^(6h) → λ,

t_{j+2}:  σ2 = 8h(x1+2) + 4(h+l),
          σ3 = 8h(x2+1) + 4(h+l),
          σ4 = 8h(x3+1) + 4(h+l).

At time t_{j+2} the simulation of qi: INC(1), ql is complete. Note that an increment on the value x1 in counter c1 was simulated by increasing the 8h(x1+1) spikes in σ2 to 8h(x1+2) spikes. Note also that the encoding 4(h+l) of the next instruction ql has been established in neurons σ2, σ3 and σ4.
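The spike bookkeeping of this INC simulation can be checked mechanically. The sketch below only tracks the totals stated in the proof; which spikes travel along which synapse (in particular, that the emissions of σ2, σ3 and σ4 all reach σ1) is inferred from Figure 1, so treat that as an assumption.

```python
def check_inc(h, i, l, x1, x2, x3):
    """Verify the spike totals for simulating q_i: INC(1), q_l in Pi_C3."""
    # t_j: counter value x is encoded as 8h(x+1) spikes, instruction q_i as 4(h+i)
    s2 = 8*h*(x1 + 1) + 4*(h + i)
    s3 = 8*h*(x2 + 1) + 4*(h + i)
    s4 = 8*h*(x3 + 1) + 4*(h + i)
    # t_j: sigma_2 consumes and emits 4(h+i); sigma_3 and sigma_4 each consume
    # 8h + 4(h+i) and emit 6h (all three emissions assumed to reach sigma_1)
    s1 = 4*(h + i) + 6*h + 6*h
    assert s1 == 16*h + 4*i                    # matches sigma_1 at t_{j+1}
    s2 -= 4*(h + i)
    s3 -= 8*h + 4*(h + i)
    s4 -= 8*h + 4*(h + i)
    # t_{j+1}: sigma_1 applies s^(16h+4i) -> s^(12h+4l) into sigma_2,3,4
    s2 += 12*h + 4*l
    s3 += 12*h + 4*l
    s4 += 12*h + 4*l
    # t_{j+2}: the encodings of counter values (x1+1, x2, x3) and of q_l hold
    assert s2 == 8*h*(x1 + 2) + 4*(h + l)
    assert s3 == 8*h*(x2 + 1) + 4*(h + l)
    assert s4 == 8*h*(x3 + 1) + 4*(h + l)
    return True

print(check_inc(5, 2, 3, 0, 1, 2))   # -> True
```

The assertions are algebraic identities in h, i, l and the counter values, so the check passes for any choice of parameters.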
4.0.3. ΠC3 simulating qi: DEC(1), ql, qk

There are two cases to consider here. Case 1: if counter c1 has value x1 > 0, then decrement counter c1 and move to instruction ql. Case 2: if counter c1 has value x1 = 0, then move to instruction qk. As with the previous example, our simulation begins at time t_j. Thus Case 1 (x1 > 0) gives

t_j:  σ2 = 8h(x1+1) + 4(h+i),  (s^(8h))* s^(16h+4(h+i)) / s^(12h+4i) → s^(6h+4i),
      σ3 = 8h(x2+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(2h),
      σ4 = 8h(x3+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(2h),

t_{j+1}:  σ1 = 10h + 4i,  s^(10h+4i) → s^(4(h+l)),
          σ2 = 8hx1,
          σ3 = 8h(x2+1),
          σ4 = 8h(x3+1),
          σ5 = 2h,  s^(2h) → λ,

t_{j+2}:  σ2 = 8hx1 + 4(h+l),
          σ3 = 8h(x2+1) + 4(h+l),
          σ4 = 8h(x3+1) + 4(h+l).

At time t_{j+2} the simulation of qi: DEC(1), ql, qk for Case 1 (x1 > 0) is complete. Note that a decrement on the value x1 in counter c1 was simulated by decreasing the 8h(x1+1) spikes in σ2 to 8hx1 spikes. Note also that the encoding 4(h+l) of the next instruction ql has been established in neurons σ2, σ3 and σ4. Alternatively, if we have Case 2 (x1 = 0) then we get

t_j:  σ2 = 8h + 4(h+i),  s^(8h+4(h+i)) / s^(4(h+i)) → s^(4(h+i)),
      σ3 = 8h(x2+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(2h),
      σ4 = 8h(x3+1) + 4(h+i),  (s^(8h))* s^(4(h+i)) / s^(4(h+i)) → s^(2h),

t_{j+1}:  σ1 = 8h + 4i,  s^(8h+4i) → s^(4(h+k)),
          σ2 = 8h,
          σ3 = 8h(x2+1),
          σ4 = 8h(x3+1),
          σ5 = 2h,  s^(2h) → λ,

t_{j+2}:  σ2 = 8h + 4(h+k),
          σ3 = 8h(x2+1) + 4(h+k),
          σ4 = 8h(x3+1) + 4(h+k).

At time t_{j+2} the simulation of qi: DEC(1), ql, qk for Case 2 (x1 = 0) is complete.
The encoding 4(h+k) of the next instruction qk has been established in neurons σ2, σ3 and σ4.

4.0.4. Halting

The halt instruction qh is encoded as 4h + 5 spikes. Thus, if C3 enters the halt instruction qh we get

t_j:  σ2 = 8h(x1+1) + 4h + 5,
      σ3 = 8h(xo+1) + 4h + 5,  (s^(8h))* s^(20h+5) / s^(12h) → s^2,
      σ4 = 8h(x3+1) + 4h + 5,

t_{j+1}:  σ1 = 2,  s^2 → λ,
          σ2 = 8h(x1+1) + 4h + 5,
          σ3 = 8hxo + 5,  (s^(8h))* s^(16h+5) / s^(8h) → s,
          σ4 = 8h(x3+1) + 4h + 5,
          σ5 = 2,  s^2 → s,

t_{j+2}:  σ1 = 1,  s → λ,
          σ2 = 8h(x1+1) + 4h + 5,
          σ3 = 8h(xo−1) + 5,  (s^(8h))* s^(16h+5) / s^(8h) → s,
          σ4 = 8h(x3+1) + 4h + 5,
          σ5 = 1,  s → λ.

The rule (s^(8h))* s^(16h+5) / s^(8h) → s is applied a further xo − 2 times in σ3 until we get

t_{j+xo}:  σ1 = 1,  s → λ,
           σ2 = 8h(x1+1) + 4h + 5,
           σ3 = 8h + 5,  s^(8h+5) → s^2,
           σ4 = 8h(x3+1) + 4h + 5,
           σ5 = 1,  s → λ,

t_{j+xo+1}:  σ1 = 2,  s^2 → λ,
             σ2 = 8h(x1+1) + 4h + 5,
             σ4 = 8h(x3+1) + 4h + 5,
             σ5 = 2,  s^2 → s.

As usual, the output is the time interval between the first and second spikes that are sent out of the output neuron. Note from above that the output neuron σ5 fires for the first time at timestep t_{j+1} and for the second time at timestep t_{j+xo+1}. Thus, the output of ΠC3 is xo, the value of the output counter c2 when C3 enters the halt instruction qh. Note that if xo = 0 then the rule s^(12h+5) → s^2 is executed at timestep t_j, and thus only one spike will be sent out of the output neuron.

We have now shown how to simulate arbitrary instructions of the form qi: INC(1), ql and qi: DEC(1), ql, qk that operate on counter c1. Instructions which operate on counters c2 and c3 are simulated in a similar manner. Immediately following the simulation of an instruction, ΠC3 is configured to simulate the next instruction.
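The halting phase amounts to a countdown in σ3, with σ5 firing at t_{j+1} and again at t_{j+xo+1}. A sketch of that countdown is below; the spike totals are taken from the proof, while h and xo are arbitrary (with xo ≥ 1, since for xo = 0 only one spike leaves the output neuron).

```python
def output_interval(h, xo):
    """Simulate sigma_3 during the halting phase of Pi_C3 and return the gap
    between the two firings of the output neuron sigma_5 (expected: xo)."""
    assert xo >= 1
    n = 8*h*(xo + 1) + 4*h + 5      # sigma_3 at t_j: value xo plus code of q_h
    n -= 12*h                       # t_j: (s^8h)* s^(20h+5) / s^12h -> s^2
    first = 1                       # sigma_5 holds s^2 at t_{j+1} and fires
    t = 1
    while n > 8*h + 5:              # (s^8h)* s^(16h+5) / s^8h -> s, once per step
        n -= 8*h
        t += 1
    assert n == 8*h + 5             # final step: rule s^(8h+5) -> s^2
    t += 1                          # sigma_5 fires again at t_{j+xo+1}
    return t - first

print(output_interval(4, 6))   # -> 6
```

Running the countdown for any h and xo ≥ 1 returns exactly xo, matching the claim that the output of ΠC3 equals the value of the output counter.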
Each instruction of C3 is simulated in 2 timesteps. The pair of input values (x1, x2) is read into the system in x1 + x2 + 4 timesteps, and sending the output value xo out of the system takes xo + 1 timesteps. Thus, if C3 completes its computation in time t, then ΠC3 simulates the computation of C3 in linear time O(t + x1 + x2 + xo). ✷

Theorem 2 Let C2 be a universal counter machine with 2 counters that completes its computation in time t to give the output value xo when given the input value x1. Then there is a universal extended SN P system ΠC2 that simulates the computation of C2 in time O(t + x1 + xo) and has only 4 neurons.

PROOF. Let C2 = (2, {c1}, c2, Q, q1, qh), where Q = {q1, q2, ..., qh}. The rules for the SN P system ΠC2 are given by Table 5, and a diagram of the system is obtained by removing neuron σ4 from Figure 1. If C2 begins its computation with the value x1 in counter c1 then the binary sequence w = 10^(x1−1)1 is read in via the input neuron σ1. Before the computation begins, neurons σ1, σ2, σ3 and σ5 respectively contain 8h, 8h + 1, 16h + 1 and 0 spikes. Like ΠC3, ΠC2 encodes the value x of each counter as 8h(x+1) spikes and encodes each instruction qi as 4(h+i) spikes. The operation of ΠC2 is very similar to the operation of ΠC3, and thus it would be tedious and repetitive to go through another simulation here. ΠC2 simulates a single instruction of C2 in 2 timesteps in a manner similar to that of ΠC3. The inputting and outputting techniques used by ΠC2 also remain similar to those of ΠC3, and thus the running time of ΠC2 is O(t + x1 + xo). ✷

The SN P system in Theorem 3 simulates a counter machine with the following restriction: if a counter is being decremented, no other counter has value 0 at that timestep.
Note that this does not result in a loss of generality, as for each standard counter machine there is a counter machine with this restriction that simulates it in linear time without an increase in the number of counters. Let C be any counter machine with m counters. Then there is a counter machine C′ with m counters that simulates C in linear time, such that if C′ is decrementing a counter no other counter has value 0 at that timestep. Each counter in C that has value y is simulated by a counter in C′ that has value y + 1. The instruction set of C′ is the same as the instruction set of C with the following exception: each q_i: DEC(j), q_l, q_k instruction in C is replaced with the instructions (q_i: DEC(j), q′_i, q′_i), (q′_i: DEC(j), q⋆_l, q⋆_k), (q⋆_l: INC(j), q_l), and (q⋆_k: INC(j), q_k). The reason we need these extra instructions is that y is encoded as y + 1, and we must decrement twice if we wish to test for an encoded 0.

[Figure 2 shows neurons σ_1 (input) through σ_17, with σ_8, σ_9 and σ_10 labeled counter 1, counter 2 and counter 3.]
Fig. 2. Part 1 of the universal SN P system Π′_C3. Each oval labeled σ_i is a neuron. An arrow going from neuron σ_i to neuron σ_j illustrates a synapse (i, j).

Theorem 3 Let C_3 be a universal counter machine with 3 counters and h instructions that completes its computation in time t to give the output value x_o when given the input (x_1, x_2). Then there is a universal SN P system Π′_C3 that simulates the computation of C_3 in time O(ht + x_1 + x_2 + x_o) and has only 17 neurons.

PROOF. Let C_3 = (3, {c_1, c_2}, c_3, Q, q_1, q_h) where Q = {q_1, q_2, ..., q_h}. Also, without loss of generality we assume that during C_3's computation, if C_3 is decrementing a counter no other counter has value 0 at that timestep (see the paragraph before Theorem 3).
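The replacement scheme above can be sketched programmatically. In the sketch below, the tuple representation of instructions and the primed/starred label names are our own illustrative choices, not notation from the paper.

```python
# Hedged sketch of the counter-machine transformation described above:
# every DEC instruction is replaced by a four-instruction gadget that
# decrements twice and then re-increments, so that a counter holding y
# in C holds y + 1 in C'.

def transform(instructions):
    """instructions: dict mapping a label to ('INC', j, next_label) or
    ('DEC', j, l, k).  Returns the instruction set of C', where each
    DEC(j) is replaced by the gadget (q_i, q'_i, q*_l, q*_k)."""
    out = {}
    for q, ins in instructions.items():
        if ins[0] == 'INC':
            out[q] = ins
        else:  # ('DEC', j, l, k)
            _, j, l, k = ins
            out[q] = ('DEC', j, q + "'", q + "'")          # first decrement
            out[q + "'"] = ('DEC', j, l + '*', k + '*')    # second (test) decrement
            out[l + '*'] = ('INC', j, l)                   # restore after "non-zero"
            out[k + '*'] = ('INC', j, k)                   # restore after "zero"
    return out

prog = {'q1': ('DEC', 1, 'q2', 'qh'), 'q2': ('INC', 1, 'q1')}
cprime = transform(prog)
```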
The SN P system Π′_C3 is given by Figures 2 and 3 and Tables 6 and 7. As a complement to the figures, Table 3 may be used to identify all the synapses in Π′_C3. The algorithm given for Π′_C3 is deterministic.

4.0.5. Encoding of a configuration of C_3 and reading input into Π′_C3

A configuration of C_3 is stored as spikes in the neurons of Π′_C3. The next instruction q_i to be executed is stored in each of the neurons σ_2, σ_3, σ_4, σ_5, σ_6, and σ_7 as 21(h+i) + 1 spikes. Let x_1, x_2 and x_3 be the values stored in counters c_1, c_2 and c_3, respectively. Then the value x_1 is stored as 6(x_1+1) spikes in neuron σ_8, x_2 is stored as 6(x_2+1) spikes in σ_9, and x_3 is stored as 6(x_3+1) spikes in σ_10.

The input to Π′_C3 is read into the system via the input neuron σ_1 (see Figure 2). If C_3 begins its computation with the values x_1 and x_2 in counters c_1 and c_2, respectively, then the binary sequence w = 10^{x_1−1}10^{x_2−1}1 is read in via the input neuron σ_1. Thus, σ_1 receives a spike from the environment at times t_1, t_{x_1+1} and t_{x_1+x_2+1}. We explain how the system is initialised to encode an initial configuration of C_3 by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t. Before the computation begins, neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 each contain 40 spikes, neurons σ_8, σ_9 and σ_10 each contain 3 spikes, and neurons σ_12, σ_13 and σ_14 each contain 21h − 2 spikes. Thus, when σ_1 receives its first spike at time t_1 we have

t_1:   σ_1 = 1,   s → s,
       σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 40,
       σ_8, σ_9, σ_10 = 3,
       σ_12, σ_13, σ_14 = 21h − 2,   (s^3)^* s^4 / s^3 → s.
Thus, from Figures 2 and 3, when we apply the rule s → s in neuron σ_1 and the rule (s^3)^* s^4 / s^3 → s in σ_12, σ_13 and σ_14 at time t_1 we get

t_2:   σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 41,   s^{41} / s → s,
       σ_8, σ_9, σ_10 = 4,
       σ_12, σ_13, σ_14 = 21h − 4,
       σ_15, σ_16, σ_17 = 3,

t_3:   σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 41,   s^{41} / s → s,
       σ_8 = 10,
       σ_9, σ_10 = 10,   (s^6)^* s^{10} / s^6 → s,
       σ_11 = 6,   s^6 → λ,
       σ_12, σ_13, σ_14 = 21h − 4,
       σ_15, σ_16, σ_17 = 3,

t_4:   σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 43,   s^{43} / s^3 → s,
       σ_8 = 16,
       σ_9, σ_10 = 10,   (s^6)^* s^{10} / s^6 → s,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 4,
       σ_15, σ_16, σ_17 = 3.

Neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 fire on every timestep between times t_2 and t_{x_1+2} to send a total of 6x_1 spikes to σ_8, and thus we get

t_{x_1+1}:  σ_1 = 1,   s → s,
       σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 43,   s^{43} / s^3 → s,
       σ_8 = 6(x_1−1) + 4,
       σ_9, σ_10 = 10,   (s^6)^* s^{10} / s^6 → s,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 4,
       σ_15, σ_16, σ_17 = 3,

t_{x_1+2}:  σ_2 = 44,   s^{44} / s^{25} → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 44,   s^{44} / s^{31} → s,
       σ_8 = 6x_1 + 5,   (s^6)^* s^{11} / s^6 → s,
       σ_9 = 11,
       σ_10 = 11,   (s^6)^* s^{11} / s^6 → s,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 3,
       σ_15, σ_16, σ_17 = 3,

t_{x_1+3}:  σ_2 = 22,   s^{22} / s^3 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 16,   s^{16} / s^3 → s,
       σ_8 = 6x_1 + 5,   (s^6)^* s^{11} / s^6 → s,
       σ_9 = 17,
       σ_10 = 11,   (s^6)^* s^{11} / s^6 → s,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 3,
       σ_15, σ_16, σ_17 = 3.

Neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 fire on every timestep between times t_{x_1+2} and t_{x_1+x_2+2} to send a total of 6x_2 spikes to σ_9.
Thus, when σ_1 receives the last spike from its environment we have

t_{x_1+x_2+1}:  σ_1 = 1,   s → s,
       σ_2 = 22,   s^{22} / s^3 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 16,   s^{16} / s^3 → s,
       σ_8 = 6x_1 + 5,   (s^6)^* s^{11} / s^6 → s,
       σ_9 = 6x_2 + 5,
       σ_10 = 11,   (s^6)^* s^{11} / s^6 → s,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 3,
       σ_15, σ_16, σ_17 = 3,

t_{x_1+x_2+2}:  σ_2 = 23,   s^{23} / s^5 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 17,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+2),
       σ_10 = 12,
       σ_11 = 7,   s^7 → λ,
       σ_12, σ_13, σ_14 = 21h − 2,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 3,

t_{x_1+x_2+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 18,
       σ_8 = 6(x_1+1) + 1,   (s^6)^* s^{13} / s → s,
       σ_9 = 6(x_2+2) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_10 = 13,   (s^6)^* s^7 / s^7 → s,
       σ_11 = 1,   s → λ,
       σ_12, σ_13, σ_14 = 21h − 5,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 6,

t_{x_1+x_2+4}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_11 = 1,   s → λ,
       σ_12, σ_13, σ_14 = 21h − 8,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 9.

After a further 7h − 3 timesteps we get

t_{x_1+x_2+7h+1}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_12 = 1,   s → s,
       σ_13, σ_14 = 1,   s → λ,
       σ_15, σ_16, σ_17 = 21h,

t_{x_1+x_2+7h+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_15, σ_16, σ_17 = 21h + 1,   (s^3)^* s^4 / s^3 → s,

t_{x_1+x_2+7h+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21 + 3,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_12, σ_13, σ_14 = 3,
       σ_15, σ_16, σ_17 = 21h − 2,   (s^3)^* s^4 / s^3 → s.

Neurons σ_15, σ_16 and σ_17 continue to fire at each timestep. Thus, after a further 7h − 1 steps we get

t_{x_1+x_2+14h+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21h + 21,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_12, σ_13, σ_14 = 21h,
       σ_15 = 1,   s → s,
       σ_16, σ_17 = 1,   s → λ.
[Figure 3 shows neurons σ_2 through σ_7, σ_11 (output), and σ_12 through σ_17.]
Fig. 3. Part 2 of the universal SN P system Π′_C3. Each oval labeled σ_i is a neuron. An arrow going from neuron σ_i to neuron σ_j illustrates a synapse (i, j).

t_{x_1+x_2+14h+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+1) + 1,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6,
       σ_12, σ_13, σ_14 = 21h + 1.

At time t_{x_1+x_2+14h+3} neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 each contain 21(h+1) + 1 spikes, σ_8 contains 6(x_1+1) spikes, σ_9 contains 6(x_2+1) spikes and σ_10 contains 6 spikes. Thus, at time t_{x_1+x_2+14h+3} the SN P system encodes an initial configuration of C_3.

4.0.6. Algorithm overview

Here we give a high-level overview of the simulation algorithm used by Π′_C3. Neurons σ_8, σ_9 and σ_10 simulate the counters c_1, c_2 and c_3, respectively. Neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 are the control neurons. They determine which instruction is to be simulated next by sending signals to the neurons that simulate the counters of C_3, directing them to simulate an increment or decrement. There are four different signals that the control neurons send to the simulated counters. Each of these signals takes the form of a unique number of spikes. If 1 spike is sent to σ_8, σ_9 and σ_10, then the value in σ_8 (counter c_1) is tested and σ_9 (counter c_2) and σ_10 (counter c_3) are decremented. If 2 spikes are sent, the value of σ_9 is tested and σ_8 and σ_10 are decremented. If 3 spikes are sent, the value of σ_10 is tested and σ_8 and σ_9 are decremented. Finally, if 6 spikes are sent, all three counters are incremented. Unfortunately, all of the above signals have the effect of changing the value of more than one simulated counter at a time. We can, however, obtain the desired result by using more than one signal for each simulated timestep.
If we wish to simulate INC we send 2 signals, and if we wish to simulate DEC we send either 8 or 2 signals. Table 2 gives the sequence of spikes (signals) to be sent in order to simulate each counter machine instruction. To explain how to use Table 2 we will take the example of simulating INC(2). In the first timestep, all three simulated counters σ_8, σ_9 and σ_10 are incremented by sending 6 spikes, and then in the second timestep the simulated counters σ_8 and σ_10 are decremented by sending 2 spikes. This has the effect of simulating an increment in counter c_2 and leaving the other two simulated counters unchanged.

Instruction   Sequence of spikes sent from σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7
INC(1)        6, 1
INC(2)        6, 2
INC(3)        6, 3
DEC(1)        1, 0, 6                      if x_1 = 0
DEC(1)        1, 0, 6, 6, 6, 3, 3, 2, 2    if x_1 > 0
DEC(2)        2, 0, 6                      if x_2 = 0
DEC(2)        2, 0, 6, 6, 6, 3, 3, 1, 1    if x_2 > 0
DEC(3)        3, 0, 6                      if x_3 = 0
DEC(3)        3, 0, 6, 6, 6, 2, 2, 1, 1    if x_3 > 0

Table 2. This table gives a counter machine instruction in the left column followed, in the right column, by the sequence that is used by Π′_C3 to simulate that instruction. Each number in the sequence represents the total number of spikes to be sent from the set of neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7 at each timestep.

Each counter machine instruction q_i is encoded as 21(h+i) + 1 spikes in each of the control neurons. At the end of each simulated timestep, the number of spikes in the control neurons must be updated to encode the next instruction q_k. The update rule s^{21(h+i)−21k} → s is applied in each control neuron, leaving a total of 21k spikes in each control neuron. Following this, 21h + 1 spikes are sent from neurons σ_15, σ_16 and σ_17 to each of the control neurons. This gives a total of 21(h+k) + 1 spikes in each control neuron, thus encoding the next instruction q_k.
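The net effect of the sequences in Table 2 can be checked with a small abstraction. The sketch below is our own model of the signal semantics, not the spike-level rules: the x_j = 0 rows rely on a different rule firing in the tested neuron, so only the INC rows and the x_j > 0 DEC rows are modeled here.

```python
# Signal semantics behind Table 2: signal 6 increments all three
# simulated counters; signals 1, 2 and 3 test sigma_8, sigma_9 and
# sigma_10 respectively while decrementing the other two; signal 0
# sends nothing that timestep.

def apply_sequence(counters, sequence):
    """Apply a Table 2 signal sequence to counter values [c1, c2, c3]."""
    tested = {1: 0, 2: 1, 3: 2}
    c = list(counters)
    for sig in sequence:
        if sig == 6:
            c = [v + 1 for v in c]                     # increment all three
        elif sig in tested:
            t = tested[sig]                            # tested counter unchanged,
            c = [v - 1 if i != t else v                # the other two decremented
                 for i, v in enumerate(c)]
    return c

# INC(2): increment all, then decrement sigma_8 and sigma_10.
assert apply_sequence([2, 5, 1], [6, 2]) == [2, 6, 1]
# DEC(1) with x_1 > 0: net effect is c_1 - 1, with c_2 and c_3 unchanged.
assert apply_sequence([2, 5, 1], [1, 0, 6, 6, 6, 3, 3, 2, 2]) == [1, 5, 1]
```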
(Note that the rule s^{21(h+i)−21k} → s is a simplification of the actual rule used.)

4.0.7. Π′_C3 simulating q_i: INC(1), q_l

The simulation of INC(1) is given by the neurons in Figures 2 and 3. Let x_1, x_2 and x_3 be the values in counters c_1, c_2 and c_3, respectively. Then our simulation of q_i: INC(1), q_l begins with 6(x_1+1) spikes in σ_8, 6(x_2+1) spikes in σ_9, 6(x_3+1) spikes in σ_10, 21(h+i) + 1 spikes in each of the neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7, and 21h + 1 spikes in each of the neurons σ_12, σ_13 and σ_14. Beginning our simulation at time t_j, we have

t_j:   σ_2 = 21(h+i) + 1,   s^{21(h+i)+1} / s^4 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 1,   s^{21(h+i)+1} / s^{21(h+i−l)+6} → s,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_12, σ_13, σ_14 = 21h + 1,   (s^3)^* s^4 / s^3 → s.

Thus, from Figures 2 and 3 we get

t_{j+1}:  σ_2 = 21(h+i) − 2,   s^{21(h+i)−2} / s^{21(h+i−l)+1} → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21l − 4,
       σ_8 = 6(x_1+2),
       σ_9 = 6(x_2+2),
       σ_10 = 6(x_3+2),
       σ_11 = 6,   s^6 → λ,
       σ_12, σ_13, σ_14 = 21h − 2,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 3,

t_{j+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21l − 3,
       σ_8 = 6(x_1+2) + 1,   (s^6)^* s^{13} / s → s,
       σ_9 = 6(x_2+2) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_10 = 6(x_3+2) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_11 = 1,   s → λ,
       σ_12, σ_13, σ_14 = 21h − 5,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 6,

t_{j+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21l,
       σ_8 = 6(x_1+2),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_11 = 1,   s → λ,
       σ_12, σ_13, σ_14 = 21h − 8,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 9.

The remainder of this simulation is similar to the computation carried out at the end of the initialisation process (see the last paragraph of Section 4.0.6 and timesteps t_{x_1+x_2+4} to t_{x_1+x_2+14h+3} of Section 4.0.5).
Thus, after a further 14h − 1 timesteps we get

t_{j+14h+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+l) + 1,
       σ_8 = 6(x_1+2),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_12, σ_13, σ_14 = 21h + 1,   (s^3)^* s^4 / s^3 → s.

At time t_{j+14h+2} the simulation of q_i: INC(1), q_l is complete. Note that an increment on the value x_1 in counter c_1 is simulated by increasing the number of spikes in σ_8 from 6(x_1+1) to 6(x_1+2). Note also that the encoding of the next instruction q_l is given by the 21(h+l) + 1 spikes in neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7.

4.0.8. Π′_C3 simulating q_i: DEC(1), q_l, q_k

If we are simulating DEC(1) then we get

t_j:   σ_2 = 21(h+i) + 1,   s^{21(h+i)+1} / s^5 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 1,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_12, σ_13, σ_14 = 21h + 1,   (s^3)^* s^4 / s^3 → s.

To help simplify configurations we will not include neurons σ_12, σ_13, and σ_14 until the end of the example. When simulating DEC(1) there are two cases to consider. Case 1: if counter c_1 has value x_1 > 0, then decrement counter 1 and move to instruction q_l. Case 2: if counter c_1 has value x_1 = 0, then move to instruction q_k. In configuration t_{j+1} our system determines whether the value x_1 in counter 1 is > 0 by checking if the number of spikes in σ_8 is > 13. Note that if we have Case 1 then the rule (s^6)^* s^{13} / s → s is applied in σ_8, sending an extra spike to neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7, thus recording that x_1 > 0.
Case 1 proceeds as follows:

t_{j+1}:  σ_2 = 21(h+i) − 4,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 2,
       σ_8 = 6(x_1+1) + 1,   (s^6)^* s^{13} / s → s,
       σ_9 = 6(x_2+1) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_10 = 6(x_3+1) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_11 = 1,   s → λ,

t_{j+2}:  σ_2 = 21(h+i) − 1,   s^{21(h+i)−1} / s^5 → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 5,   s^{21(h+i)+5} / s^{11} → s,
       σ_8 = 6(x_1+1),
       σ_9 = 6x_2,
       σ_10 = 6x_3,
       σ_11 = 1,   s → λ.

The method we use to test the value of σ_8 (simulated counter c_1) has the side-effect of decrementing σ_9 (simulated counter c_2) and σ_10 (simulated counter c_3). Following this, in order to get the correct values our algorithm takes the following steps: each of the simulated counters (σ_8, σ_9 and σ_10) is incremented 3 times, and then the simulated counter σ_8 is decremented 4 times, whilst the simulated counters σ_9 and σ_10 are each decremented twice. Thus, the overall result is that a decrement of c_1 is simulated in σ_8 and the other encoded counter values in σ_9 and σ_10 remain the same. Continuing with our simulation we get

t_{j+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) − 5,   s^{21(h+i)−5} / s^3 → s,
       σ_8 = 6(x_1+2),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_11 = 6,   s^6 → λ,

t_{j+4}:  σ_2, σ_3, σ_4 = 21(h+i) − 7,   s^{21(h+i)−7} / s^2 → s,
       σ_5, σ_6, σ_7 = 21(h+i) − 7,   s^{21(h+i)−7} / s^{21(h+i−l)+10} → s,
       σ_8 = 6(x_1+3),
       σ_9 = 6(x_2+2),
       σ_10 = 6(x_3+2),
       σ_11 = 6,   s^6 → λ,

t_{j+5}:  σ_2, σ_3, σ_4 = 21(h+i) − 8,   s^{21(h+i)−8} / s^3 → s,
       σ_5, σ_6, σ_7 = 21l − 16,
       σ_8 = 6(x_1+4),
       σ_9 = 6(x_2+3),
       σ_10 = 6(x_3+3),
       σ_11 = 6,   s^6 → λ.

In configurations t_{j+3}, t_{j+4} and t_{j+5} each of the simulated counters σ_8, σ_9 and σ_10 is incremented.
In configurations t_{j+6} to t_{j+10} the simulated counter σ_8 is decremented 4 times and the simulated counters σ_9 and σ_10 are each decremented twice.

t_{j+6}:  σ_2, σ_3 = 21(h+i) − 10,   s^{21(h+i)−10} / s^5 → s,
       σ_4 = 21(h+i) − 10,   s^{21(h+i)−10} / s^{21(h+i−l)+5} → s,
       σ_5, σ_6, σ_7 = 21l − 15,
       σ_8 = 6(x_1+4) + 3,   (s^6)^* s^9 / s^9 → s,
       σ_9 = 6(x_2+3) + 3,   (s^6)^* s^9 / s^9 → s,
       σ_10 = 6(x_3+3) + 3,   (s^6)^* s^{15} / s^3 → s,
       σ_11 = 3,   s^3 → λ,

t_{j+7}:  σ_2, σ_3 = 21(h+i) − 11,   s^{21(h+i)−11} / s^6 → s,
       σ_4, σ_5, σ_6, σ_7 = 21l − 11,
       σ_8 = 6(x_1+3) + 3,   (s^6)^* s^9 / s^9 → s,
       σ_9 = 6(x_2+2) + 3,   (s^6)^* s^9 / s^9 → s,
       σ_10 = 6(x_3+3) + 3,   (s^6)^* s^{15} / s^3 → s,
       σ_11 = 4,   s^4 → λ,

t_{j+8}:  σ_2, σ_3 = 21(h+i) − 13,   s^{21(h+i)−13} / s^{21(h+i−l)−6} → s,
       σ_4, σ_5, σ_6, σ_7 = 21l − 7,
       σ_8 = 6(x_1+2) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_9 = 6(x_2+1) + 2,   (s^6)^* s^{14} / s^2 → s,
       σ_10 = 6(x_3+3) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 3,   s^3 → λ,

t_{j+9}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21l − 3,
       σ_8 = 6(x_1+1) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_9 = 6(x_2+1) + 2,   (s^6)^* s^{14} / s^2 → s,
       σ_10 = 6(x_3+2) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 3,   s^3 → λ,

t_{j+10}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21l,
       σ_8 = 6x_1,
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_11 = 1,   s → λ,
       σ_12, σ_13, σ_14 = 21h − 29,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 30.

Note that at time t_{j+8} the rule (s^6)^* s^{14} / s^2 → s will always be applicable, as here x_2 > 0 (see the second line at the start of the proof). The remainder of this simulation is similar to the computation carried out at the end of the initialisation process (see the last paragraph of Section 4.0.6 and timesteps t_{x_1+x_2+4} to t_{x_1+x_2+14h+3} of Section 4.0.5).
Thus, after a further 14h − 8 timesteps we get

t_{j+14h+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+l) + 1,
       σ_8 = 6x_1,
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_12, σ_13, σ_14 = 21h + 1,   (s^3)^* s^4 / s^3 → s.

At timestep t_{j+14h+2} the simulation of q_i: DEC(1), q_l, q_k for Case 1 (x_1 > 0) is complete. Note that a decrement on the value x_1 in counter c_1 is simulated by decreasing the value in σ_8 from 6(x_1+1) to 6x_1. Note also that the encoding 21(h+l) + 1 of the next instruction q_l has been established in neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7.

Alternatively, if we have Case 2 (x_1 = 0) then we get

t_{j+1}:  σ_2 = 21(h+i) − 4,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 2,
       σ_8 = 7,   s^7 → λ,
       σ_9 = 6(x_2+1) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_10 = 6(x_3+1) + 1,   (s^6)^* s^7 / s^7 → s,
       σ_11 = 1,   s → λ,

t_{j+2}:  σ_2 = 21(h+i) − 2,   s^{21(h+i)−2} / s^{21(h+i−k)−1} → s,
       σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+i) + 4,   s^{21(h+i)+4} / s^{21(h+i−k)+5} → s,
       σ_9 = 6x_2,
       σ_10 = 6x_3,
       σ_11 = 1,   s → λ,

t_{j+3}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21k,
       σ_8 = 6,
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_11 = 6,   s^6 → λ,
       σ_12, σ_13, σ_14 = 21h − 8,   (s^3)^* s^4 / s^3 → s,
       σ_15, σ_16, σ_17 = 9.

The remainder of this simulation is similar to the computation carried out at the end of the initialisation process (see the last paragraph of Section 4.0.6 and timesteps t_{x_1+x_2+4} to t_{x_1+x_2+14h+3} of Section 4.0.5). Thus, after a further 14h − 1 timesteps we get

t_{j+14h+2}:  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7 = 21(h+k) + 1,
       σ_8 = 6,
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_3+1),
       σ_12, σ_13, σ_14 = 21h + 1.

At time t_{j+14h+2} the simulation of q_i: DEC(1), q_l, q_k for Case 2 (x_1 = 0) is complete. Note that the encoding 21(h+k) + 1 of the next instruction q_k has been established in neurons σ_2, σ_3, σ_4, σ_5, σ_6 and σ_7.

4.0.9.
Halting

If C_3 enters the halt instruction q_h at time t_j then we get the following:

t_j:   σ_2, σ_3, σ_4, σ_5 = 42h + 1,   s^{42h+1} / s → s,
       σ_6, σ_7 = 42h + 1,
       σ_8 = 6(x_1+1),
       σ_9 = 6(x_2+1),
       σ_10 = 6(x_o+1),

t_{j+1}:  σ_2, σ_3, σ_4, σ_5 = 42h + 1,   s^{42h+1} / s → s,
       σ_6, σ_7 = 42h + 2,
       σ_8 = 6(x_1+1) + 4,
       σ_9 = 6(x_2+1) + 4,   (s^6)^* s^{10} / s^6 → s,
       σ_10 = 6(x_o+1) + 4,   (s^6)^* s^{10} / s^6 → s,
       σ_11 = 4,   s^4 → λ,

t_{j+2}:  σ_2, σ_3 = 42h + 3,   s^* s^{42h+3} / s → s,
       σ_4, σ_5 = 42h + 3,
       σ_6, σ_7 = 42h + 5,
       σ_8 = 6(x_1+2) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_9 = 6(x_2+1) + 2,   (s^6)^* s^{14} / s^2 → s,
       σ_10 = 6(x_o+1) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 5.

Note that after time t_{j+2} we can ignore neurons σ_4, σ_5, σ_6 and σ_7, as there are no rules applicable in these neurons when the number of spikes is > 43h + 3. The number of spikes in σ_2 and σ_3 does not decrease following timestep t_{j+2}, and thus the rule s^* s^{42h+3} / s → s is applicable at each subsequent timestep regardless of the operation of neurons σ_8 and σ_9. Thus, neurons σ_8 and σ_9 may also be ignored, as their operation has no effect on the remainder of the simulation. Note that in subsequent configurations we write σ_2, σ_3 > 42h + 3, as there are more than 42h + 3 spikes in each of these neurons. Thus we have

t_{j+3}:  σ_2, σ_3 > 42h + 3,   s^* s^{42h+3} / s → s,
       σ_10 = 6x_o + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 8,

t_{j+4}:  σ_2, σ_3 > 42h + 3,   s^* s^{42h+3} / s → s,
       σ_10 = 6(x_o−1) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 11,   s^{11} / s^2 → s,

t_{j+5}:  σ_2, σ_3 > 42h + 3,   s^* s^{42h+3} / s → s,
       σ_10 = 6(x_o−2) + 2,   (s^6)^* s^8 / s^8 → s,
       σ_11 = 12.
The rule (s^6)^* s^8 / s^8 → s is applied in σ_10 a further x_o − 2 times until we get

t_{j+x_o+3}:  σ_2, σ_3 > 42h + 3,   s^* s^{42h+3} / s → s,
       σ_10 = 2,   s^2 → λ,
       σ_11 = 3(x_o−2) + 12,

t_{j+x_o+4}:  σ_2, σ_3 > 42h + 3,   s^* s^{42h+3} / s → s,
       σ_10 = 2,   s^2 → λ,
       σ_11 = 3(x_o−2) + 14,   (s^3)^* s^{14} / s → s.

Recall from Section 2 that the output of an SN P system is the time interval between the first and second spikes that are sent out of the output neuron. Note from above that the output neuron σ_11 fires for the first time at timestep t_{j+4} and for the second time at timestep t_{j+x_o+4}. Thus, the output of Π′_C3 is x_o, the contents of the output counter c_3 when C_3 enters the halt instruction q_h. If x_o = 0, neuron σ_11 will fire only once. To see this, note that if x_o = 0 then s^2 → λ will be applied in neuron σ_10 at time t_{j+3}, and thus σ_11 will have 10 spikes (instead of 11) at time t_{j+4}, and the rule s^{10} → s will be applied in σ_11, ending the computation.

We have shown how to simulate arbitrary instructions of the form q_i: INC(1), q_l and q_i: DEC(1), q_l, q_k. Instructions that operate on counters c_2 and c_3 are simulated in a similar manner. Immediately following the simulation of an instruction, Π′_C3 is configured to begin simulation of the next instruction. Each instruction of C_3 is simulated in 14h + 2 timesteps. The pair of input values (x_1, x_2) is read into the system in x_1 + x_2 + 14h + 3 timesteps, and sending the output value x_o out of the system takes x_o + 4 timesteps. Thus, if C_3 completes its computation in time t, then Π′_C3 simulates the computation of C_3 in linear time O(ht + x_1 + x_2 + x_o). ✷

origin neurons            target neurons
σ_1                       σ_2, σ_3, σ_4, σ_5, σ_6, σ_7, σ_8, σ_9, σ_10, σ_12, σ_13, σ_14
σ_2                       σ_3, σ_4, σ_5, σ_6, σ_7, σ_8, σ_9, σ_10, σ_11
σ_3                       σ_2, σ_8, σ_9, σ_10, σ_11
σ_4, σ_5, σ_6, σ_7        σ_8, σ_9, σ_10, σ_11
σ_8, σ_9                  σ_2, σ_3, σ_4, σ_5, σ_6, σ_7
σ_10                      σ_2, σ_3, σ_4, σ_5, σ_6, σ_7, σ_11
σ_12, σ_13, σ_14          σ_15, σ_16, σ_17
σ_15, σ_16, σ_17          σ_2, σ_3, σ_4, σ_5, σ_6, σ_7, σ_12, σ_13, σ_14

Table 3. This table gives the set of synapses of the SN P system Π′_C3. Each origin neuron σ_i and target neuron σ_j that appear on the same row have a synapse going from σ_i to σ_j.

Fig. 4. Finite state machine G decides if there is any rule applicable in a neuron given the number of spikes in the neuron at a given time in the computation. Each s represents a spike in the neuron. Machine G′ keeps track of the movement of spikes into and out of the neuron and decides whether or not a particular rule is applicable at each timestep in the computation. +s represents a single spike entering the neuron and −s represents a single spike exiting the neuron.

5. Lower bounds for small universal SN P systems

In this section we show that there exists no universal SN P system with only 3 neurons, even when we allow the input technique to be generalised. This is achieved in Theorem 4 by showing that these systems are simulated by log-space bounded Turing machines. Following this, we show that if we generalise the output technique we can give a universal SN P system with extended rules that has only 3 neurons. As a corollary of our proof of Theorem 4, we find that a universal SN P system with extended rules and generalised input and output is not possible with 2 neurons.

In this and other work [15,18] on small SN P systems, the input neuron only receives a constant number of spikes from the environment and the output neuron fires no more than a constant number of times.
Hence, we call the input standard if the input neuron receives no more than y spikes from the environment, where y is a constant independent of the input (i.e. the number of 1s in its input sequence is < y). Similarly, we call the output standard if the output neuron fires no more than x times, where x is a constant independent of the input. Here we say an SN P system has generalised input if the input neuron is permitted to receive 6n spikes from the environment, where n ∈ N is the length of its input sequence.

Theorem 4 Let Π be any extended SN P system with only 3 neurons, generalised input and standard output. Then there is a non-deterministic Turing machine T_Π that simulates the computation of Π in space O(log n), where n is the length of the input to Π.

PROOF. Let Π be any extended SN P system with generalised input, standard output, and neurons σ_1, σ_2 and σ_3. Also, let x be the maximum number of times the output neuron σ_3 is permitted to fire, and let q and r be the maximum values for b and p, respectively, over all rules E / s^b → s^p; d in Π. We begin by explaining how the activity of σ_3 may be simulated using only the states of T_Π (i.e. no workspace is required to simulate σ_3). Recall that the applicability of each rule is determined by a regular expression over a unary alphabet. We can give a single regular expression R that is the union of all the regular expressions for the firing rules of σ_3. This regular expression R determines whether or not there is any applicable rule in σ_3 at each timestep. Figure 4 gives the deterministic finite automaton G that accepts L(R), the language generated by R. During a computation we may use G to decide which rules are applicable in σ_3 by passing an s to G each time a spike enters σ_3. However, G may not give the correct result if spikes leave the neuron, as it does not record spikes leaving σ_3.
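The reduction of rule applicability to regular-expression matching over a unary alphabet can be illustrated concretely. In the sketch below, the two example rules are rendered from rules appearing earlier in the paper, while the Python encoding (tuples, function names) is our own assumption about a convenient representation.

```python
import re

# A rule E / s^b -> s^p is applicable in a neuron holding n spikes when
# the unary word s^n is in L(E) and n >= b.  The union R of the
# expressions for a neuron's firing rules decides whether *any* rule
# applies, which is exactly the job of the automaton G in the proof.

rules = [
    (r"(?:sss)*ssss", 3, 1),   # (s^3)* s^4 / s^3 -> s
    (r"s", 1, 1),              # s -> s
]

def applicable(n, rules):
    """Indices of the rules applicable with n spikes in the neuron."""
    word = "s" * n
    return [i for i, (E, b, p) in enumerate(rules)
            if re.fullmatch(E, word) and n >= b]

R = "|".join(f"(?:{E})" for E, _, _ in rules)   # the union expression R
def any_applicable(n):
    return re.fullmatch(R, "s" * n) is not None
```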
Thus, using G we may construct a second machine G′ such that G′ records the movement of spikes going into and out of the neuron. G′ is constructed as follows: G′ has all the same states (including accept states) and transitions as G, along with an extra set of transitions that record spikes leaving the neuron. This extra set of transitions is given as follows: for each transition on s from a state g_i to a state g_j in G there is a new transition on −s going from state g_j to g_i in G′ that records the removal of a spike from σ_3. By recording the dynamic movement of spikes, G′ is able to decide which rules are applicable in σ_3 at each timestep during the computation. G′ is also given in Figure 4. To simulate the operation of σ_3 we emulate the operation of G′ in the states of T_Π. Note that there is a single non-deterministic choice to be made in G′. This choice is at state g_u if a spike is being removed (−s). It would seem that in order to make the correct choice in this situation we need to know the exact number of spikes in σ_3. However, we need only store at most u + xq spikes. The reason for this is that if there are > u + xq spikes in σ_3, then G′ will not enter state g_{u−1} again. To see this, note that σ_3 spikes a maximum of x times using at most q spikes each time, and so once there are > u + xq spikes the number of spikes in σ_3 will be > u − 1 for the remainder of the computation. Thus, T_Π simulates the activity of σ_3 by simulating the operation of G′ and encoding at most u + xq spikes in its states.

In this paragraph we explain the operation of T_Π. Following this, we give an analysis of the space complexity of T_Π. T_Π has 4 tapes, including an output tape, which is initially blank, and a read-only input tape. The tape head on both the input and output tapes is permitted to move only right.
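The G → G′ construction described above can be sketched in a few lines. The dictionary encoding and the state names in the example are illustrative choices of ours, not notation from the paper.

```python
# G' keeps every state and s-transition of G and, for each s-transition
# g_i -> g_j, adds a reverse transition on "-s" from g_j back to g_i
# recording a spike leaving the neuron.

def build_g_prime(delta):
    """delta: dict mapping (state, 's') -> state for the DFA G.
    Returns G' as a dict (state, symbol) -> set of states; G' may be
    non-deterministic, e.g. on -s at the looping state g_u."""
    prime = {}
    for (gi, _sym), gj in delta.items():
        prime.setdefault((gi, 's'), set()).add(gj)
        prime.setdefault((gj, '-s'), set()).add(gi)  # reverse transition
    return prime

# A chain g1 -> g2 -> g3 with a self-loop on g3, as at g_u in Fig. 4:
G = {('g1', 's'): 'g2', ('g2', 's'): 'g3', ('g3', 's'): 'g3'}
Gp = build_g_prime(G)
# Removing a spike at g3 is the single non-deterministic choice:
assert Gp[('g3', '-s')] == {'g2', 'g3'}
```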
Each of the remaining tapes, tapes 1 and 2, simulates the activity of the neurons σ_1 and σ_2, respectively. These tapes record the number of spikes in σ_1 and σ_2. A timestep of Π is simulated as follows: T_Π scans tapes 1 and 2 to determine if there are any applicable rules in σ_1 and σ_2 at that timestep. The applicability of each neural rule in Π is determined by a regular expression, and so a decider for each rule is easily implemented in the states of T_Π. Recall from the previous paragraph that the applicability of the rules in σ_3 is already recorded in the states of T_Π. Also, T_Π is non-deterministic, and so if more than one rule is applicable in a neuron, T_Π simply chooses the rule to simulate in the same manner as Π. Once T_Π has determined which rules are applicable in each of the three neurons at that timestep, it changes the encodings on tapes 1 and 2 to simulate the change in the number of spikes in neurons σ_1 and σ_2 during that timestep. As mentioned in the previous paragraph, any change in the number of spikes in σ_3 is recorded in the states of T_Π. The input sequence of Π may be given as binary input to T_Π by placing it on its input tape. Also, if at a given timestep a 1 is read on the input tape, then T_Π simulates a spike entering the simulated input neuron. At each simulated timestep, if the output neuron σ_3 spikes then a 1 is placed on the output tape, and if σ_3 does not spike a 0 is placed on the output tape. Thus the output of Π is encoded on the output tape when the simulation ends.

In a two-neuron system each neuron has at most one outgoing synapse, and so the number of spikes in the system does not increase over time. Thus, the total number of spikes in neurons σ_1 and σ_2 can only increase when σ_3 fires or a spike is sent into the system from the environment. The input is of length n, and so σ_1 and σ_2 receive a maximum of n spikes from the environment.
Neuron σ3 fires a total of y times, sending at most r spikes each time, and so the maximum number of spikes in σ1 and σ2 during the computation is n + 2ry. Using a binary encoding, tapes 1 and 2 of T_Π encode the number of spikes in σ1 and σ2 using space of log_2(n + 2ry). As mentioned earlier, no space is used to simulate σ3, and thus T_Π simulates Π using space of O(log n). ✷

It is interesting to note that with a slight generalisation of the system in Theorem 4 we obtain universality. If we remove the restriction that allows the output neuron to fire only a constant number of times, then we may construct a universal SN P system with extended rules and only three neurons. Here we define the output of an extended SN P system with generalised output to be the time interval between the first and second timesteps where exactly x spikes are sent out of the output neuron.

Theorem 5. Let C2 be a universal counter machine with 2 counters that completes its computation in time t to give the output value x_o when given the input value x_1. Then there is a universal extended SN P system Π″_C2 with standard input and generalised output that simulates the computation of C2 in time O(t + x_1 + x_o) and has only 3 neurons.

PROOF. A graph of Π″_C2 is constructed by removing the output neuron σ5 from the system Π_C2 given in the proof of Theorem 2 and making σ3 the new output neuron of Π″_C2. The rules for Π″_C2 are given by the first 3 rows of Table 5, and a diagram of the system is obtained by removing neurons σ4 and σ5 from Figure 1 and adding a synapse to the environment from the new output neuron σ3. The operation of Π″_C2 is identical to the operation of Π_C2 with the exception of the new output technique.
The output of Π″_C2 is the time interval between the first and second timesteps where exactly 2 spikes are sent out of the output neuron σ3. ✷

From the third paragraph of the proof of Theorem 4 we get the following immediate corollary.

Corollary 1. Let Π be any extended SN P system with only 2 neurons and generalised input and output. Then there is a non-deterministic Turing machine T_Π that simulates the computation of Π in space O(log n), where n is the length of the input to Π.

6. Conclusion

The dramatic improvement on the size of earlier small universal SN P systems given by Theorems 1 and 3 is in part due to the method we use to encode the instructions of the counter machines our systems simulate. In the systems of Păun and Păun [15], each counter machine instruction was encoded by a unique set of neurons; thus the size of the system is dependent on the number of instructions in the counter machine being simulated. Some improvement was made by Zhang et al. [18] by showing that certain types of instructions may be grouped together. However, the number of neurons used by the system remained dependent on the number of instructions in the counter machine being simulated. In our systems each unique counter machine instruction is encoded as a unique number of spikes, and thus the size of our SN P systems is independent of the number of instructions used by the counter machine they are simulating. The technique of encoding the instructions as spikes was first used to construct small universal SN P systems in [14].

The results from Theorems 2 and 4 give tight upper and lower bounds on the size of the smallest universal SN P system with extended rules. Thus, in Theorem 2 we have given the smallest possible universal SN P system with extended rules.
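As an aside, the generalised output convention defined for Theorem 5 — the result of a computation is the time interval between the first and second timesteps at which exactly x spikes leave the output neuron — admits a simple decoding step, sketched below. The function name and the list representation of the spike train are ours, purely for illustration; they are not part of the construction.

```python
# Illustrative decoder for generalised output: spike_train[t] holds the
# number of spikes the output neuron sends out at timestep t.

def generalised_output(spike_train, x):
    # timesteps at which exactly x spikes are emitted
    hits = [t for t, s in enumerate(spike_train) if s == x]
    if len(hits) < 2:
        raise ValueError("exactly-x emission occurred fewer than twice")
    # the computed value is the interval between the first two such steps
    return hits[1] - hits[0]
```

For the 3-neuron system Π″_C2 of Theorem 5 the relevant value is x = 2, matching the two-spike emissions of the output neuron σ3.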
The results from Theorem 5 and Corollary 1 give tight upper and lower bounds on the size of the smallest universal SN P system with extended rules and generalised output. Thus, Theorem 5 gives the smallest possible universal SN P system with extended rules and generalised output. The lower bounds given in Theorem 4 are also applicable to standard SN P systems and thus give a lower bound of 4 neurons for the smallest possible standard system that is universal. However, when compared with extended systems, the rules used in standard SN P systems are quite limited, and so it seems likely that this lower bound of 4 neurons can be increased. Note that here and in [15,18] the size of a universal SN P system is measured by the number of neurons in the system. However, the size of an SN P system could also be measured by the number of neural rules in the system.

References

[1] A. M. Barzdin. On a class of Turing machines (Minsky machines). Algebra i Logika, 1(6):42–51, 1963. (In Russian).
[2] H. Chen, M. Ionescu, and T. Ishdorj. On the efficiency of spiking neural P systems. In M. A. Gutiérrez-Naranjo, G. Păun, A. Riscos-Núñez, and F. J. Romero-Campero, editors, Proceedings of the Fourth Brainstorming Week on Membrane Computing, pages 195–206, Sevilla, Feb. 2006.
[3] P. C. Fischer, A. R. Meyer, and A. L. Rosenberg. Counter machines and counter languages. Mathematical Systems Theory, 2(3):265–283, 1968.
[4] M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems with exhaustive use of rules. International Journal of Unconventional Computing, 3(2):135–154, 2007.
[5] M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems. Fundamenta Informaticae, 71(2-3):279–308, 2006.
[6] M. Ionescu and D. Sburlan. Some applications of spiking neural P systems. In G. Eleftherakis, P. Kefalas, and G. Păun, editors, Proceedings of the Eighth Workshop on Membrane Computing, pages 383–394, Thessaloniki, June 2007.
[7] I. Korec. Small universal register machines. Theoretical Computer Science, 168(2):267–301, Nov. 1996.
[8] A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. On the computational power of spiking neural P systems. In M. A. Gutiérrez-Naranjo, G. Păun, A. Romero-Jiménez, and A. Riscos-Núñez, editors, Proceedings of the Fifth Brainstorming Week on Membrane Computing, pages 227–245, Sevilla, Jan. 2007.
[9] A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. Solving numerical NP-complete problems with spiking neural P systems. In G. Eleftherakis, P. Kefalas, and G. Păun, editors, Proceedings of the Eighth Workshop on Membrane Computing, pages 405–423, Thessaloniki, June 2007.
[10] M. Minsky. Computation, finite and infinite machines. Prentice-Hall, Englewood Cliffs, New Jersey, 1967.
[11] T. Neary. Presentation at The International Workshop on Computing with Biomolecules (CBM 2008). Available at http://www.emcc.at/UC2008/Presentations/CBM5.pdf.
[12] T. Neary. On the computational complexity of spiking neural P systems. Technical Report [cs.CC], Dec. 2007.
[13] T. Neary. On the computational complexity of spiking neural P systems. In Unconventional Computation, 7th International Conference, UC 2008, volume 5204 of LNCS, pages 189–205, Vienna, Aug. 2008. Springer.
[14] T. Neary. A small universal spiking neural P system. In International Workshop on Computing with Biomolecules, pages 65–74, Vienna, Aug. 2008. Austrian Computer Society.
[15] A. Păun and G. Păun. Small universal spiking neural P systems. BioSystems, 90(1):48–60, 2007.
[16] R. Schroeppel. A two counter machine cannot calculate 2^n. Technical Report AIM-257, A.I. memo 257, Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, 1972.
[17] X. Zhang, Y. Jiang, and L. Pan. Small universal spiking neural P systems with exhaustive use of rules. In 3rd International Conference on Bio-Inspired Computing: Theories and Applications (BICTA 2008), pages 117–128, Adelaide, Australia, Oct. 2008. IEEE.
[18] X. Zhang, X. Zeng, and L. Pan. Smaller universal spiking neural P systems. Fundamenta Informaticae, 87(1):117–136, Nov. 2008.

neuron   rules

σ1:  s^{8h+1}/s^{8h} → s^{8h},   s^{8h+2}/s^{8h+1} → s^{8h+1},   s^{6h+1} → s^{4h+4},   s^2 → λ,   s → λ
     s^{16h+4i} → s^{12h+4l},   s^{10h+4i} → s^{4(h+l)},   if l < h
     s^{16h+4i} → s^{12h+5},   s^{10h+4i} → s^{4h+5},   if l = h
     s^{8h+4i} → s^{4(h+k)},   if k ≠ h
     s^{8h+4i} → s^{4h+5},   if k = h

σ2:  (s^{8h})* s^{8h+1}/s^{8h} → s,   (s^{8h})* s^{8h+2}/s^{8h+2} → s^{2h}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: INC(1) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{6h}   if q_i: INC(x) ∈ {Q}, x ≠ 1
     (s^{8h})* s^{16h+4(h+i)}/s^{12h+4i} → s^{6h+4i}   if q_i: DEC(1) ∈ {Q}
     s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: DEC(1) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{2h}   if q_i: DEC(x) ∈ {Q}, x ≠ 1

σ3:  s^2/s → s,   s^{8h+1}/s^{8h} → s,   (s^{8h})* s^{8h+3}/s^{8h+3} → s^{2h},   (s^{8h})* s^{20h+5}/s^{12h} → s^2
     (s^{8h})* s^{16h+5}/s^{8h} → s,   s^{8h+5} → s^2,   s^{12h+5} → s^2
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: INC(2) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{6h}   if q_i: INC(x) ∈ {Q}, x ≠ 2
     (s^{8h})* s^{16h+4(h+i)}/s^{12h+4i} → s^{6h+4i}   if q_i: DEC(2) ∈ {Q}
     s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: DEC(2) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{2h}   if q_i: DEC(x) ∈ {Q}, x ≠ 2

σ4:  s^{8h+1}/s^{8h} → s^{8h−1},   s^{8h+2}/s^{8h} → s^{8h−1},   s^{8h+3} → s^{2h}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: INC(3) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{6h}   if q_i: INC(x) ∈ {Q}, x ≠ 3
     (s^{8h})* s^{16h+4(h+i)}/s^{12h+4i} → s^{6h+4i}   if q_i: DEC(3) ∈ {Q}
     s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: DEC(3) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{2h}   if q_i: DEC(x) ∈ {Q}, x ≠ 3

σ5:  s → λ,   s^{2h} → λ,   s^{6h} → λ,   s^{4(h+i)} → λ,   s^{6h+4i} → λ,   s^2 → s

Table 4: This table gives the rules for each of the neurons of Π_C3.

neuron   rules

σ1:  s^{8h+1}/s^{8h} → s^{8h},   s^{8h+2}/s^{8h−1} → s^{4h+3},   s^{8h+3} → λ,   s^2 → λ,   s → λ
     s^{16h+4i} → s^{12h+4l},   s^{10h+4i} → s^{4(h+l)},   if l < h
     s^{16h+4i} → s^{12h+5},   s^{10h+4i} → s^{4h+5},   if l = h
     s^{8h+4i} → s^{4(h+k)},   if k ≠ h
     s^{8h+4i} → s^{4h+5},   if k = h

σ2:  (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: INC(1) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{12h}   if q_i: INC(2) ∈ {Q}
     (s^{8h})* s^{16h+4(h+i)}/s^{12h+4i} → s^{6h+4i}   if q_i: DEC(1) ∈ {Q}
     s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: DEC(1) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4h}   if q_i: DEC(2) ∈ {Q}

σ3:  s^2/s → s,   s^{16h+1}/s^{8h} → s^{8h},   (s^{8h})* s^{20h+5}/s^{12h} → s^2
     (s^{8h})* s^{16h+5}/s^{8h} → s,   s^{8h+5} → s^2,   s^{12h+5} → s^2
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: INC(2) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{8h+4(h+i)} → s^{12h}   if q_i: INC(1) ∈ {Q}
     (s^{8h})* s^{16h+4(h+i)}/s^{12h+4i} → s^{6h+4i}   if q_i: DEC(2) ∈ {Q}
     s^{8h+4(h+i)}/s^{4(h+i)} → s^{4(h+i)}   if q_i: DEC(2) ∈ {Q}
     (s^{8h})* s^{4(h+i)}/s^{4(h+i)} → s^{4h}   if q_i: DEC(1) ∈ {Q}

σ5:  s^{8h} → λ,   s^{12h} → λ,   s → λ,   s^{4h} → λ,   s^{6h+4i} → λ,   s^{4(h+i)} → λ,   s^2 → s

Table 5: This table gives the rules for each of the neurons of Π_C2.
neuron   rules

σ1:  s → s

σ2:  s^{41}/s → s,   s^{43}/s^3 → s,   s^{44}/s^{25} → s,   s^{22}/s^3 → s,   s^{23}/s^5 → s
     s^{21(h+i)−1}/s^5 → s,   s^{21(h+i)−5}/s^3 → s,   s^{21(h+i)−7}/s^2 → s,   s^{21(h+i)−8}/s^3 → s
     s^{21(h+i)−10}/s^5 → s,   s^{42h+1}/s → s,   s* s^{42h+3}/s → s
     s^{21(h+i)+1}/s^4 → s   if q_i: INC ∈ {Q}
     s^{21(h+i)−2}/s^{21(h+i−l)+1} → s   if q_i: INC(1) ∈ {Q}
     s^{21(h+i)−2}/s^{21(h+i−l)+2} → s   if q_i: INC(x) ∈ {Q}, x ≠ 1
     s^{21(h+i)−2}/s^{21(h+i−k)−1} → s   if q_i: DEC ∈ {Q}
     s^{21(h+i)+1}/s^5 → s   if q_i: DEC(1) ∈ {Q}
     s^{21(h+i)+1}/s^6 → s   if q_i: DEC(x) ∈ {Q}, x ≠ 1
     s^{21(h+i)−11}/s^6 → s   if q_i: DEC(1) ∈ {Q}
     s^{21(h+i)−11}/s^5 → s   if q_i: DEC(x) ∈ {Q}, x ≠ 1
     s^{21(h+i)−13}/s^{21(h+i−l)−6} → s   if q_i: DEC(1) ∈ {Q}
     s^{21(h+i)−13}/s^{21(h+i−l)−7} → s   if q_i: DEC(x) ∈ {Q}, x ≠ 1

σ3:  s^{41}/s → s,   s^{43}/s^3 → s,   s^{44}/s^{31} → s,   s^{16}/s^3 → s,   s^{21(h+i)−1}/s^5 → s
     s^{21(h+i)+5}/s^{11} → s,   s^{21(h+i)−5}/s^3 → s,   s^{21(h+i)−7}/s^2 → s,   s^{21(h+i)−8}/s^3 → s
     s^{21(h+i)−11}/s^6 → s
     s^{21(h+i)−13}/s^{21(h+i−l)−6} → s,   s^{21(h+i)+4}/s^{21(h+i−k)+5} → s
     s^{42h+1}/s → s,   s* s^{42h+3}/s → s
     s^{21(h+i)+1}/s^{21(h+i−l)+6} → s   if q_i: INC(1) ∈ {Q}
     s^{21(h+i)+1}/s^4 → s   if q_i: INC(x) ∈ {Q}, x ≠ 1
     s^{21(h+i)−2}/s^{21(h+i−l)+2} → s   if q_i: INC(x) ∈ {Q}
     s^{21(h+i)−2}/s^{21(h+i−k)−1} → s   if q_i: DEC ∈ {Q}
     s^{21(h+i)+1}/s^6 → s   if q_i: DEC(x) ∈ {Q}, x ≠ 1
     s^{21(h+i)−10}/s^5 → s   if q_i: DEC(1) ∈ {Q}
     s^{21(h+i)−10}/s^{21(h+i−l)+5} → s   if q_i: DEC(x) ∈ {Q}, x ≠ 1

Table 6: This table gives the rules for neurons σ1 to σ3 of Π′_C3.
neuron   rules

σ4:  s^{41}/s → s,   s^{43}/s^3 → s,   s^{44}/s^{31} → s,   s^{16}/s^3 → s,   s^{21(h+i)−1}/s^5 → s
     s^{21(h+i)+5}/s^{11} → s,   s^{21(h+i)−5}/s^3 → s,   s^{21(h+i)−8}/s^3 → s
     s^{21(h+i)−10}/s^{21(h+i−l)+5} → s,   s^{21(h+i)+4}/s^{21(h+i−k)+5} → s,   s^{42h+1}/s → s
     s^{21(h+i)+1}/s^{21(h+i−l)+6} → s   if q_i: INC(x) ∈ {Q}, x ≠ 3
     s^{21(h+i)+1}/s^4 → s   if q_i: INC(3) ∈ {Q}
     s^{21(h+i)−2}/s^{21(h+i−l)+2} → s   if q_i: INC(x) ∈ {Q}
     s^{21(h+i)−2}/s^{21(h+i−k)−1} → s   if q_i: DEC ∈ {Q}
     s^{21(h+i)+1}/s^6 → s   if q_i: DEC(3) ∈ {Q}
     s^{21(h+i)−7}/s^{21(h+i−l)+10} → s   if q_i: DEC(3) ∈ {Q}
     s^{21(h+i)−7}/s^2 → s   if q_i: DEC(x) ∈ {Q}, x ≠ 3

σ5:  s^{41}/s → s,   s^{43}/s^3 → s,   s^{44}/s^{31} → s,   s^{16}/s^3 → s
     s^{21(h+i)+5}/s^{11} → s,   s^{21(h+i)−5}/s^3 → s,   s^{21(h+i)−7}/s^{21(h+i−l)+10} → s
     s^{21(h+i)+4}/s^{21(h+i−k)+5} → s,   s^{42h+1}/s → s
     s^{21(h+i)+1}/s^{21(h+i−l)+6} → s   if q_i: INC ∈ {Q}

σ6, σ7:  s^{41}/s → s,   s^{43}/s^3 → s,   s^{44}/s^{31} → s,   s^{16}/s^3 → s
     s^{21(h+i)+5}/s^{11} → s,   s^{21(h+i)−5}/s^3 → s,   s^{21(h+i)−7}/s^{21(h+i−l)+10} → s
     s^{21(h+i)+4}/s^{21(h+i−k)+5} → s
     s^{21(h+i)+1}/s^{21(h+i−l)+6} → s   if q_i: INC ∈ {Q}

σ8:  (s^6)* s^{11}/s^6 → s,   (s^6)* s^{13}/s → s,   s^7 → λ,   (s^6)* s^8/s^8 → s,   (s^6)* s^9/s^9 → s

σ9:  (s^6)* s^{10}/s^6 → s,   (s^6)* s^7/s^7 → s,   (s^6)* s^{14}/s^2 → s,   s^8 → λ,   (s^6)* s^9/s^9 → s

σ10:  (s^6)* s^{10}/s^6 → s,   (s^6)* s^{11}/s^6 → s,   (s^6)* s^7/s^7 → s,   (s^6)* s^8/s^8 → s
     (s^6)* s^{15}/s^3 → s,   s^9 → λ,   s^2 → λ

σ11:  s^7 → λ,   s^6 → λ,   s → λ,   s^{11}/s → s,   (s^3)* s^{14}/s → s,   s^4 → λ,   s^2 → λ,   s^3 → λ,   s^{10} → s

σ12, σ15:  (s^3)* s^4/s^3 → s,   s → s

σ13, σ14, σ16, σ17:  (s^3)* s^4/s^3 → s,   s → λ

Table 7: This table gives the rules for neurons σ4 to σ17 of Π′_C3.
