On the computational complexity of spiking neural P systems
It is shown that there is no standard spiking neural P system that simulates Turing machines with less than exponential time and space overheads. The spiking neural P systems considered here have a constant number of neurons that is independent of th…
Authors: Turlough Neary
On the computational complexity of spiking neural P systems

Turlough Neary⋆

Boole Centre for Research in Informatics, University College Cork, Ireland.
tneary@cs.may.ie

Abstract. It is shown that there is no standard spiking neural P system that simulates Turing machines with less than exponential time and space overheads. The spiking neural P systems considered here have a constant number of neurons that is independent of the input length. Following this we construct a universal spiking neural P system with exhaustive use of rules that simulates Turing machines in linear time and has only 10 neurons.

1 Introduction

Since their inception inside of the last decade P systems [16] have spawned a variety of hybrid systems. One such hybrid, that of spiking neural P systems [3], results from a fusion with spiking neural networks. It has been shown that these systems are computationally universal. Here the time/space computational complexity of spiking neural P systems is examined. We begin by showing that counter machines simulate standard spiking neural P systems with linear time and space overheads. Fischer et al. [2] have previously shown that counter machines require exponential time and space to simulate Turing machines. Thus it immediately follows that there is no spiking neural P system that simulates Turing machines with less than exponential time and space overheads. These results are for spiking neural P systems that have a constant number of neurons independent of the input length. Extended spiking neural P systems with exhaustive use of rules were proved computationally universal in [4]. Zhang et al. [18] gave a small universal spiking neural P system with exhaustive use of rules (without delay) that has 125 neurons.
The technique used to prove universality in [4] and [18] involved simulation of counter machines and thus suffers from an exponential time overhead when simulating Turing machines. In an earlier version [10] of the work we present here, we gave an extended spiking neural P system with exhaustive use of rules that simulates Turing machines in polynomial time and has 18 neurons. Here we improve on this result to give an extended spiking neural P system with exhaustive use of rules that simulates Turing machines in linear time and has only 10 neurons. The brief history of small universal spiking neural P systems is given in Table 1. Note that, to simulate an arbitrary Turing machine that computes in time t, all of the small universal spiking neural P systems prior to our results require time that is exponential in t. An arbitrary Turing machine that uses space of s is simulated by the universal systems given in [4,11,18] in space that is doubly exponential in s, and by the universal systems given in [3,10,15,19] in space that is exponential in s.

⋆ The author is funded by Science Foundation Ireland Research Frontiers Programme grant number 07/RFP/CSMFz1.

number of   simulation              type        exhaustive     author
neurons     time/space              of rules    use of rules
 84         exponential             standard    no             Păun and Păun [15]
 67         exponential             standard    no             Zhang et al. [19]
 49         exponential             extended†   no             Păun and Păun [15]
 41         exponential             extended†   no             Zhang et al. [19]
 12         double-exponential      extended†   no             Neary [11]
 18         exponential             extended    no             Neary [11,12]*
 17         exponential             standard†   no             [9]
 14         double-exponential      standard†   no             [9]
  5         exponential             extended†   no             [9]
  4         double-exponential      extended†   no             [9]
  3         double-exponential      extended‡   no             [9]
125         exponential/            extended†   yes            Zhang et al. [18]
            double-exponential
 18         polynomial/exponential  extended    yes            Neary [10]
 10         linear/exponential      extended    yes            Section 5

Table 1.
Small universal SN P systems. The "simulation time" column gives the overheads used by each system when simulating a standard single tape Turing machine. † indicates that there is a restriction of the rules as delay is not used and ‡ indicates that a more generalised output technique is used. * The 18 neuron system is not explicitly given in [11]; it is however mentioned at the end of the paper and is easily derived from the other system presented in [11]. Also, its operation and its graph were presented in [12].

Chen et al. [1] have shown that with exponential pre-computed resources SAT is solvable in constant time with spiking neural P systems. Leporati et al. [7] gave a semi-uniform family of extended spiking neural P systems that solve the Subset Sum problem in constant time. In later work, Leporati et al. [8] gave a uniform family of maximally parallel spiking neural P systems with more general rules that solve the Subset Sum problem in polynomial time. All the above solutions to NP-hard problems rely on families of spiking neural P systems. Specifically, the size of the problem instance determines the number of neurons in the spiking neural P system that solves that particular instance. This is similar to solving problems with uniform circuit families where each input size has a specific circuit that solves it. Ionescu and Dragoş [5] have shown that spiking neural P systems simulate circuits in linear time.

In the next two sections we give definitions for spiking neural P systems and counter machines and explain the operation of both. Following this, in Section 4, we prove that counter machines simulate spiking neural P systems in linear time, thus proving that there exists no universal spiking neural P system that simulates Turing machines in less than exponential time.
In Section 5 we present our universal spiking neural P system, with exhaustive use of rules, that simulates Turing machines in linear time and has only 10 neurons. Finally, we end the paper with some discussion and conclusions.

2 Spiking neural P systems

Definition 1 (Spiking neural P systems). A spiking neural P system is a tuple Π = (O, σ_1, σ_2, ..., σ_m, syn, in, out), where:

1. O = {s} is the unary alphabet (s is known as a spike),
2. σ_1, σ_2, ..., σ_m are neurons, of the form σ_i = (n_i, R_i), 1 ⩽ i ⩽ m, where:
   (a) n_i ⩾ 0 is the initial number of spikes contained in σ_i,
   (b) R_i is a finite set of rules of the following two forms:
       i. E/s^b → s; d, where E is a regular expression over s, b ⩾ 1 and d ⩾ 1,
       ii. s^e → λ; 0, where λ is the empty word, e ⩾ 1, and for all E/s^b → s; d from R_i, s^e ∉ L(E) where L(E) is the language defined by E,
3. syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} is the set of synapses between neurons, where i ≠ j for all (i, j) ∈ syn,
4. in, out ∈ {σ_1, σ_2, ..., σ_m} are the input and output neurons respectively.

In the same manner as in [15], spikes are introduced into the system from the environment by reading in a binary sequence (or word) w ∈ {0, 1}* via the input neuron σ_1. The sequence w is read from left to right one symbol at each timestep. If the read symbol is 1 then a spike enters the input neuron on that timestep.

A firing rule r = E/s^b → s; d is applicable in a neuron σ_i if there are j ⩾ b spikes in σ_i and s^j ∈ L(E) where L(E) is the set of words defined by the regular expression E. If, at time t, rule r is executed then b spikes are removed from the neuron, and at time t + d − 1 the neuron fires. When a neuron σ_i fires a spike is sent to each neuron σ_j for every synapse (i, j) in Π.
Also, the neuron σ_i remains closed and does not receive spikes until time t + d − 1, and no other rule may execute in σ_i until time t + d. We note here that in 2b(i) it is standard to have d ⩾ 0. However, we have d ⩾ 1 as it simplifies explanations throughout the paper. This does not affect the operation as the neuron fires at time t + d − 1 instead of t + d. A forgetting rule r′ = s^e → λ; 0 is applicable in a neuron σ_i if there are exactly e spikes in σ_i. If r′ is executed then e spikes are removed from the neuron. At each timestep t a rule must be applied in each neuron if there is one or more applicable rules at time t. Thus while the application of rules in each individual neuron is sequential, the neurons operate in parallel with each other. Note from 2b(i) of Definition 1 that there may be two rules of the form E/s^b → s; d that are applicable in a single neuron at a given time. If this is the case then the next rule to execute is chosen non-deterministically. The output is the time between the first and second spike in the output neuron σ_m.

An extended spiking neural P system [15] has more general rules of the form E/s^b → s^p; d, where b ⩾ p ⩾ 0. Note if p = 0 then E/s^b → s^p; d is a forgetting rule. An extended spiking neural P system with exhaustive use of rules [4] applies its rules as follows. If a neuron σ_i contains k spikes and the rule E/s^b → s^p; d is applicable, then the neuron σ_i sends out gp spikes after d timesteps leaving u spikes in σ_i, where k = bg + u, u < b and k, g, u ∈ ℕ. Thus, a synapse in a spiking neural P system with exhaustive use of rules may transmit an arbitrary number of spikes in a single timestep. In the sequel we allow the input neuron of a system with exhaustive use of rules to receive an arbitrary number of spikes in a single timestep. This is a generalisation on the input allowed by Ionescu et al. [4].
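Rule applicability and exhaustive application are both easy to check concretely. The sketch below, for a hypothetical rule E/s^b → s^p; d with E = s^2(s^3)* and b = 3, p = 2 (values of our choosing, not from the paper), tests s^j ∈ L(E) with an ordinary regular-expression engine over the unary alphabet {s}, and applies the rule exhaustively via k = bg + u:

```python
import re

# Hypothetical rule E/s^b -> s^p; d with E = s^2(s^3)*, b = 3, p = 2.
E = r"ss(sss)*"      # the regular expression E over the unary alphabet {s}
b, p = 3, 2

def applicable(k: int) -> bool:
    """True if a neuron holding k spikes may apply the rule: k >= b and s^k in L(E)."""
    return k >= b and re.fullmatch(E, "s" * k) is not None

def apply_exhaustively(k: int) -> tuple[int, int]:
    """Exhaustive use of rules: with k = b*g + u (u < b), the neuron
    emits g*p spikes and keeps u spikes."""
    g, u = divmod(k, b)
    return g * p, u

assert applicable(8)                          # s^8 = s^2 (s^3)^2 is in L(E)
assert not applicable(4)                      # s^4 is not in L(E)
assert apply_exhaustively(8) == (4, 2)        # 8 = 3*2 + 2: emit 4 spikes, keep 2
```

Note that under exhaustive use the number of emitted spikes grows with k, which is exactly why a synapse may carry an arbitrary number of spikes in one timestep.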
We discuss why we think this generalisation is natural for this model at the end of the paper.

In earlier work [15], Korec's notion of strong universality was adopted for small SN P systems. Analogously, some small SN P systems could be described as what Korec refers to as weak universality. However, as we noted in other work [9], it could be considered that Korec's notion of strong universality is somewhat arbitrary, and we also pointed out some inconsistency in his notion of weak universality. Hence, in this work we rely on time/space complexity analysis to compare the encodings used by the small SN P systems in Table 1.

In the sequel each spike in a spiking neural P system represents a single unit of space. The maximum number of spikes in a spiking neural P system at any given timestep during a computation is the space used by the system.

3 Counter machines

The definition we give for counter machine is similar to that of Fischer et al. [2].

Definition 2 (Counter machine). A counter machine is a tuple C = (z, c_m, Q, q_0, q_h, Σ, f), where z gives the number of counters, c_m is the output counter, Q = {q_0, q_1, ..., q_h} is the set of states, q_0, q_h ∈ Q are the initial and halt states respectively, Σ is the input alphabet and f is the transition function f : (Σ × Q × g(i)) → ({Y, N} × Q × {INC, DEC, NULL}), where g(i) is a binary valued function and 0 ⩽ i ⩽ z, Y and N control the movement of the input read head, and INC, DEC, and NULL indicate the operation to carry out on counter c_i.

Each counter c_i stores a natural number value x. If x > 0 then g(i) is true and if x = 0 then g(i) is false. The input to the counter machine is read in from an input tape with alphabet Σ. The movement of the scanning head on the input tape is one-way so each input symbol is read only once.
When a computation begins the scanning head is over the leftmost symbol α of the input word αw ∈ Σ* and the counter machine is in state q_0. We give three examples below to explain the operation of the transition function f.

– f(α, q_j, g(i)) = (Y, q_k, INC(i)): move the read head right on the input tape to read the next input symbol, change to state q_k and increment the value x stored in counter c_i by 1.
– f(α, q_j, g(i)) = (N, q_k, DEC(i)): do not move the read head, change to state q_k and decrement the value x stored in counter c_i by 1. Note that g(i) must evaluate to true for this rule to execute.
– f(α, q_j, g(i)) = (N, q_k, NULL): do not move the read head and change to state q_k.

A single application of f is a timestep. Thus in a single timestep only one counter may be incremented or decremented by 1. Our definition for counter machine, given above, is more restricted than the definition given by Fischer [2]. In Fischer's definition INC and DEC may be applied to every counter in the machine in a single timestep. Clearly the more general counter machines of Fischer simulate our machines with no extra space or time overheads. Fischer has shown that counter machines are exponentially slow in terms of computation time as the following theorem illustrates.

Theorem 1 (Fischer [2]). There is a language L, real-time recognizable by a one-tape TM, which is not recognizable by any k-CM in time less than T(n) = 2^(n/2k).

In Theorem 1 a one-tape TM is an offline Turing machine with a single read-only input tape and a single work tape, a k-CM is a counter machine with k counters, n is the input length and real-time recognizable means recognizable in n timesteps.
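A single application of f from Definition 2 can be sketched as follows. The machine, its states and its transition table here are illustrative examples of our own (not from the paper): it counts the 1s preceding an 'a' into counter c_0 and then decrements that counter once before halting.

```python
# A minimal sketch of one step of the counter machine of Definition 2.

def step(state, counters, tape, pos, delta):
    """One application of f: (read symbol, state, g(i)) ->
    (head move Y/N, next state, counter operation)."""
    symbol = tape[pos]
    test_counter, table = delta[state]
    g = counters[test_counter] > 0            # g(i): true iff counter i > 0
    move, next_state, op, target = table[(symbol, g)]
    if op == "INC":
        counters[target] += 1
    elif op == "DEC":
        assert counters[target] > 0           # DEC is only allowed when g(i) is true
        counters[target] -= 1
    if move == "Y":                           # Y: advance the one-way read head
        pos += 1
    return next_state, pos

# Illustrative transition table: count 1s into c_0, DEC once on reading 'a'.
delta = {"q0": (0, {("1", False): ("Y", "q0", "INC", 0),
                    ("1", True):  ("Y", "q0", "INC", 0),
                    ("a", True):  ("N", "qh", "DEC", 0),
                    ("a", False): ("N", "qh", "NULL", None)})}

counters = [0, 0]
state, pos = "q0", 0
while state != "qh":
    state, pos = step(state, counters, "11a", pos, delta)
assert counters[0] == 1                       # two INCs then one DEC
```

The unary storage that Theorem 1 exploits is visible here: representing a value n costs n increments, which is the source of the exponential slowdown relative to a Turing machine's binary tape.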
For his proof Fischer noted that the language L = {w a w^r | w ∈ {0, 1}*}, where w^r is w reversed, is recognisable in n timesteps on a one-tape offline Turing machine. He then noted that time of 2^(n/2k) is required to process input words of length n due to the unary data storage used by the counters of the k-CM. Note that Theorem 1 also holds for non-deterministic counter machines as they use the same unary storage method.

4 Non-deterministic counter machines simulate spiking neural P systems in linear time

Theorem 2. Let Π be a spiking neural P system with m neurons that completes its computation in time T and space S. Then there is a non-deterministic counter machine C_Π that simulates the operation of Π in time O(T(x_r)^2 m + Tm^2) and space O(S), where x_r is a constant dependent on the rules of Π.

Proof idea. Before we give the proof of Theorem 2 we give the main idea behind the proof. Each neuron σ_i from the spiking neural P system Π is simulated by a counter c_i from the counter machine C_Π. If a neuron σ_i contains y spikes, then the counter will have value y. A single synchronous update of all the neurons at a given timestep t is simulated as follows. If the number of spikes in a neuron σ_i is decreasing by b spikes in order to execute a rule, then the value y stored in the simulated neuron c_i is decremented b times using DEC(i) to give y − b. This process is repeated for each neuron that executes a rule at time t. If neuron σ_i fires at time t and has synapses to neurons {σ_i1, ..., σ_iv}, then for each open neuron σ_ij in {σ_i1, ..., σ_iv} at time t we increment the simulated neuron c_ij using INC(i_j). This process is repeated until all firing neurons have been simulated. This simulation of the synchronous update of Π at time t is completed by C_Π in constant time.
Thus we get the linear time bound given in Theorem 2.

Proof. Let Π = (O, σ_1, σ_2, ..., σ_m, syn, in, out) be a spiking neural P system where in = σ_1 and out = σ_2. We explain the operation of a non-deterministic counter machine C_Π that simulates the operation of Π in time O(T(x_r)^2 m + Tm^2) and space O(S). There are m + 1 counters c_1, c_2, c_3, ..., c_m, c_{m+1} in C_Π. Each counter c_i emulates the activity of a neuron σ_i. If σ_i contains y spikes then counter c_i will store the value y. The states of the counter machine are used to control which neural rules are simulated in each counter and also to synchronise the operations of the simulated neurons (counters).

Input encoding. It is sufficient for C_Π to have a binary input tape. The value of the binary word w ∈ {1, 0}* that is placed on the terminal to be read in to C_Π is identical to the binary sequence read in from the environment by the input neuron σ_1. A single symbol is read from the terminal at each simulated timestep. The counter c_1 (the simulated input neuron) is incremented only on timesteps when a 1 (a simulated spike) is read. As such, at each simulated timestep t, a simulated spike is received by c_1 if and only if a spike is received by the input neuron σ_1. At the start of the computation, before the input is read in, each counter simulating σ_i is incremented n_i times to simulate the n_i spikes in each neuron given by 2(a) of Definition 1. This takes a constant amount of time.

[Figure 1: the chain automaton G, with states g_1, g_2, ..., g_y linked by transitions on s and a transition on s from g_y back to g_x, together with the automaton G′ obtained from G by adding a reverse transition on −s for each s-transition of G.]

Fig. 1. Finite state machine G decides if a particular rule is applicable in a neuron given the number of spikes in the neuron at a given time in the computation. Each s represents a spike in the neuron.
Machine G′ keeps track of the movement of spikes into and out of the neuron and decides whether or not a particular rule is applicable at each timestep in the computation. +s represents a single spike entering the neuron and −s represents a single spike exiting the neuron.

Storing neural rules in the counter machine states. Recall from Definition 1 that the applicability of a rule in a neuron is dependent on a regular expression over a unary alphabet. Let r = E/s^b → s; d be a rule in neuron σ_i. Then there is a finite state machine G that accepts language L(E) and thus decides if the number of spikes in σ_i permits the application of r in σ_i at a given time in the computation. G is given in Figure 1. If g_j is an accept state in G then j ⩾ b. This ensures that there are enough spikes to execute r. We also place the restriction on G that x > b. During a computation we may use G to decide if r is applicable in σ_i by passing an s to G each time a spike enters σ_i. However, G may not give the correct result if spikes leave the neuron, as it does not record spikes leaving σ_i. Thus using G we may construct a second machine G′ such that G′ records the movement of spikes going into and out of the neuron. G′ is constructed as follows: G′ has all the same states (including accept states) and transitions as G, along with an extra set of transitions that record spikes leaving the neuron. This extra set of transitions is given as follows: for each transition on s from a state g_i to a state g_j in G there is a new transition on −s going from state g_j to g_i in G′ that records the removal of a spike from the neuron. By recording the dynamic movement of spikes, G′ is able to decide if the number of spikes in σ_i permits the application of r in σ_i at each timestep during the computation. G′ is also given in Figure 1.
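The automaton G′ of Figure 1 can be sketched directly. The example below is ours: it takes E = s^2(s^3)* (so L(E) contains s^2, s^5, s^8, ...), with x = 3, y = 5, and adds a hypothetical state g_0 for the empty neuron, a simplification not drawn in the figure. The non-deterministic −s move at g_x is resolved here with the true spike count, standing in for the zero-test the counter machine of Section 4 performs.

```python
# A sketch of G' from Figure 1: a chain g_1 ... g_y with an s-transition
# from g_y back to g_x, plus a reverse -s transition for every s-transition.

class GPrime:
    def __init__(self, x, y, accept):
        self.x, self.y = x, y
        self.accept = accept        # states g_j with s^j in L(E) and j >= b
        self.state = 0              # hypothetical g_0: empty neuron

    def plus_s(self):
        """A spike enters the neuron: follow an s-transition (g_y wraps to g_x)."""
        self.state = self.x if self.state == self.y else self.state + 1

    def minus_s(self, spikes_after):
        """A spike leaves the neuron: follow a -s (reversed) transition.
        At g_x the -s move is non-deterministic (to g_{x-1} or g_y); we
        resolve it with the true spike count after the removal."""
        if self.state == self.x:
            self.state = self.x - 1 if spikes_after == self.x - 1 else self.y
        else:
            self.state -= 1

    def rule_applicable(self):
        return self.state in self.accept

g = GPrime(x=3, y=5, accept={2, 5})   # E = s^2(s^3)*: accepts 2, 5, 8, ... spikes
for _ in range(5):
    g.plus_s()                        # 5 spikes: s^5 is in L(E)
assert g.rule_applicable()
g.minus_s(4)                          # 4 spikes: s^4 is not in L(E)
assert not g.rule_applicable()
```

Because the state space is bounded by y while the spike count is unbounded, one such automaton per rule can be folded into the finite control of the counter machine, which is exactly how the proof stores neural rules in counter machine states.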
Note that forgetting rules s^e → λ; 0 are dependent on simpler regular expressions, thus we will not give a machine G′ for forgetting rules here. Let neuron σ_i have the greatest number l of rules of any neuron in Π. Thus the applicability of rules r_1, r_2, ..., r_l in σ_i is decided by the automata G′_1, G′_2, ..., G′_l. We record if a rule may be simulated in a neuron at any given timestep during the computation by recording the current state of its G′ automaton (Figure 1) in the states of the counter machine. There are m neurons in Π. Thus each state in our counter machine remembers the current states of at most ml different G′ automata in order to determine which rules are applicable in each neuron at a given time. Recall that in each rule of the form r = E/s^b → s; d, the value d specifies the number of timesteps between the removal of b spikes from the neuron and the spiking of the neuron. The number of timesteps < d remaining until a neuron will spike is recorded in the states of C_Π. Each state in our counter machine remembers at most m different values < d.

Algorithm overview. Next we explain the operation of C_Π by explaining how it simulates the synchronous update of all neurons in Π at an arbitrary timestep t. The algorithm has 3 stages. A single iteration of Stage 1 identifies which applicable rule to simulate in a simulated open neuron. Then the correct number y of simulated spikes are removed by decrementing the counter y times (y = b or y = e in 2b of Definition 1). Stage 1 is iterated until all simulated open neurons have had the correct number of simulated spikes removed. A single iteration of Stage 2 identifies all the synapses leaving a firing neuron and increments every counter that simulates an open neuron at the end of one of these synapses.
Stage 2 is iterated until all firing neurons have been simulated by incrementing the appropriate counters. Stage 3 synchronises each neuron with the global clock and increments the output counter if necessary. If the entire word w has not been read from the input tape the next symbol is read.

Stage 1. Identify rules to be simulated and remove spikes from neurons. Recall that d = 0 indicates a neuron is open and the value of d in each neuron is recorded in the states of the counter machine. Thus our algorithm begins by determining which rule to simulate in counter c_i1, where i_1 = min{i | d = 0 for σ_i} and the current state of the counter machine encodes an accept state for one or more of the G′ automata for the rules in σ_i1 at time t. If there is more than one rule applicable the counter machine non-deterministically chooses which rule to simulate. Let r = E/s^b → s; d be the rule that is to be simulated. Using the DEC(i_1) instruction, counter c_i1 is decremented b times. With each decrement of c_i1 the new current state of each automaton G′_1, G′_2, ..., G′_l is recorded in the counter machine's current state. After b decrements of c_i1 the simulation of the removal of b spikes from neuron σ_i1 is complete. Note that the value of d from rule r is recorded in the counter machine state.

There is a case not covered by the above paragraph. To see this, note that in G′ in Figure 1 there is a single non-deterministic choice to be made. This choice is at state g_x if a spike is being removed (−s). Thus, if one of the automata is in such a state g_x, our counter machine resolves this by decrementing the counter x times using the DEC instruction. If c_i1 = 0 after the counter has been decremented x times then the counter machine simulates state g_{x−1}, otherwise state g_y is simulated.
Immediately after this the counter is incremented x − 1 times to restore it to the correct value. When the simulation of the removal of b spikes from neuron σ_i1 is complete, the above process is repeated with counter c_i2, where i_2 = min{i | i_2 > i_1, d = 0 for σ_i} and the current state of the counter machine encodes an accept state for one or more of the G′ automata for the rules in σ_i2 at time t. This process is iterated until every simulated open neuron with an applicable rule at time t has had the correct number of simulated spikes removed.

Stage 2. Simulate spikes. This stage of the algorithm begins by simulating spikes traveling along synapses of the form (i_1, j), where i_1 = min{i | d = 1 for σ_i} (if d = 1 the neuron is firing). Let {(i_1, j_1), (i_1, j_2), ..., (i_1, j_k)} be the set of synapses leaving σ_i1 where j_u < j_{u+1} and d ⩽ 1 in σ_ju at time t (if d ⩽ 1 the neuron is open and may receive spikes). Then the following sequence of instructions is executed: INC(j_1), INC(j_2), ..., INC(j_k), thus incrementing any counter (simulated neuron) that receives a simulated spike. The above process is repeated for synapses of the form (i_2, j), where i_2 = min{i | i_2 > i_1, d = 1 for σ_i}. This process is iterated until every simulated neuron c_i that is open has been incremented once for each spike σ_i receives at time t.

Stage 3. Reading input, decrementing d, updating output counter and halting. If the entire word w has not been read from the input tape then the next symbol is read. If this is the case and the symbol read is a 1 then counter c_1 is incremented, thus simulating a spike being read in by the input neuron. In this stage the state of the counter machine changes to record the fact that each k ⩽ d, which records the number of timesteps until a currently closed neuron will fire, is decremented to k − 1.
If the counter c_m, which simulates the output neuron, has spiked only once prior to the simulation of timestep t + 1 then this stage will also increment the output counter c_{m+1}. If during the simulation of timestep t counter c_m has simulated a spike for the second time in the computation, then the counter machine enters the halt state. When the halt state is entered the number stored in counter c_{m+1} is equal to the unary output that is given by the time between the first two spikes in σ_m.

Space analysis. The input word on the binary tape of C_Π has the same length as the binary sequence read in by the input neuron of Π. Counters c_1 to c_m use the same space as neurons σ_1 to σ_m. Counter c_{m+1} uses the same amount of space as the unary output of the computation of Π. Thus C_Π simulates Π in space O(S).

Time analysis. The simulation involves 3 stages. Recall that x > b. Let x_r be the maximum value for x of any G′ automaton; thus x_r is greater than the maximum number of spikes deleted in a neuron.

Stage 1. In order to simulate the deletion of a single spike, in the worst case the counter will have to be decremented x_r times and incremented x_r − 1 times, as in the special case. This is repeated a maximum of b < x_r times (where b is the number of spikes removed). Thus a single iteration of Stage 1 takes O((x_r)^2) time. Stage 1 is iterated a maximum of m times per simulated timestep, giving O((x_r)^2 m) time.

Stage 2. The maximum number of synapses leaving a neuron σ_i is m. A single spike traveling along a synapse is simulated in one step. Stage 2 is iterated a maximum of m times per simulated timestep, giving O(m^2) time.

Stage 3. Takes a small constant number of steps.

Thus a single timestep of Π is simulated by C_Π in O((x_r)^2 m + m^2) time, and T timesteps of Π are simulated in linear time O(T(x_r)^2 m + Tm^2) by C_Π.
⊓⊔

The following is an immediate corollary of Theorems 1 and 2.

Corollary 1. There exists no universal spiking neural P system that simulates Turing machines with less than exponential time and space overheads.

5 A universal spiking neural P system that is both small and time efficient

In this section we construct a universal spiking neural P system that applies exhaustive use of rules, has only 10 neurons, and simulates any Turing machine in linear time.

Theorem 3. Let M be a single tape Turing machine with |A| symbols and |Q| states that runs in time T. Then there is a universal spiking neural P system Π_M with exhaustive use of rules that simulates the computation of M in time O(|A||Q|T) and space O((2^⌈log_2(2|Q||A|+2|A|)⌉)^T), and has only 10 neurons.

If the reader would like to get a quick idea of how our spiking neural P system with 10 neurons operates they should skip to the algorithm overview in Subsection 5.3 of the proof.

Proof. We give a spiking neural P system Π_M that simulates an arbitrary Turing machine M in linear time and exponential space. Π_M is given by Figure 3 and Tables 2 and 3. The algorithm for Π_M is deterministic and is mainly concerned with the simulation of an arbitrary transition rule. Without loss of generality we insist that M always finishes its computation with the tape head at the leftmost end of the tape contents. Let M be any single tape Turing machine with symbols α_1, α_2, ..., α_|A| and states q_1, q_2, ..., q_|Q|, blank symbol α_1, and halt state q_|Q|.

5.1 Encoding a configuration of Turing machine M

Each configuration of M is encoded as three natural numbers using a well known technique.
A configuration of M is given by the following equation

C_k = q_r, ⋯ α_1 α_1 α_1 a_{−x} ⋯ a_{−3} a_{−2} a_{−1} a_0 a_1 a_2 a_3 ⋯ a_y α_1 α_1 α_1 ⋯   (1)

where q_r is the current state, each a_i is a tape cell of M and the tape head of M, given by an underline, is over a_0. Also, tape cells a_{−x} and a_y both contain α_1, and the cells between a_{−x} and a_y include all of the cells on M's tape that have either been visited by the tape head prior to configuration C_k or contain part of the input to M.

In the sequel the encoding of object p is given by ⟨p⟩. The tape symbols α_1, α_2, ..., α_|A| of M are encoded as ⟨α_1⟩ = 1, ⟨α_2⟩ = 3, ..., ⟨α_|A|⟩ = 2|A| − 1, respectively, and the states q_1, q_2, ..., q_|Q| are encoded as ⟨q_1⟩ = 2|A|, ⟨q_2⟩ = 4|A|, ..., ⟨q_|Q|⟩ = 2|Q||A|, respectively. The contents of each tape cell a_i in configuration C_k is encoded as ⟨a_i⟩ = ⟨α⟩, where α is a tape symbol of M. The tape contents in Equation (1) to the left and right of the tape head are respectively encoded as the numbers

X = Σ_{i=1}^{x} z^i ⟨a_{−i}⟩   and   Y = Σ_{j=1}^{y} z^j ⟨a_j⟩

where z = 2^v and v = ⌈log_2(2|Q||A| + 2|A|)⌉. Thus the entire configuration C_k is encoded as three natural numbers via the equation

⟨C_k⟩ = (X, Y, ⟨q_r⟩ + ⟨α_i⟩)   (2)

where ⟨C_k⟩ is the encoding of C_k from Equation (1) and α_i is the symbol being read by the tape head in cell a_0.

A transition rule q_r, α_i, α_j, D, q_u of M is executed on C_k as follows. If the current state is q_r and the tape head is reading the symbol α_i in cell a_0, the write symbol α_j is printed to cell a_0, the tape head moves one cell to the left to a_{−1} if D = L or one cell to the right to a_1 if D = R, and q_u becomes the new current state.
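The encoding of Equation (2) can be sketched numerically. The tiny machine below (2 states, 2 symbols) is an illustration of our own; it gives z = 16, and a configuration with left tape a_{−2} a_{−1}, head symbol α_2 in state q_1, and right tape a_1:

```python
from math import ceil, log2

# Illustrative machine sizes (not from the paper): |Q| = 2, |A| = 2.
Q, A = 2, 2
enc_sym = {f"a{i}": 2 * i - 1 for i in range(1, A + 1)}    # <alpha_i> = 2i - 1
enc_state = {f"q{r}": 2 * r * A for r in range(1, Q + 1)}  # <q_r> = 2r|A|
v = ceil(log2(2 * Q * A + 2 * A))                          # v = ceil(log2(12)) = 4
z = 2 ** v                                                 # z = 16

def encode(left, head_state, head_sym, right):
    """Equation (2): left = [a_-1, a_-2, ..., a_-x], right = [a_1, ..., a_y]."""
    X = sum(z ** i * enc_sym[c] for i, c in enumerate(left, start=1))
    Y = sum(z ** j * enc_sym[c] for j, c in enumerate(right, start=1))
    return X, Y, enc_state[head_state] + enc_sym[head_sym]

# left tape: a_-1 = alpha_2 (enc 3), a_-2 = alpha_1 (enc 1); right tape: a_1 = alpha_1
# X = 16*3 + 256*1 = 304, Y = 16*1 = 16, <q_1> + <alpha_2> = 4 + 3 = 7
assert encode(["a2", "a1"], "q1", "a2", ["a1"]) == (304, 16, 7)
```

Choosing z as a power of two at least 2|Q||A| + 2|A| guarantees that every cell encoding, and every ⟨q_r⟩ + ⟨α_i⟩ value, fits in one base-z digit of X and Y.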
A simulation of transition rule q_r, α_i, α_j, D, q_u on the encoded configuration ⟨C_k⟩ from Equation (2) is given by the equation

⟨C_{k+1}⟩ = ( X/z − (X/z mod z),  zY + z⟨α_j⟩,  ⟨q_u⟩ + (X/z mod z) )   if D = L
⟨C_{k+1}⟩ = ( zX + z⟨α_j⟩,  Y/z − (Y/z mod z),  ⟨q_u⟩ + (Y/z mod z) )   if D = R   (3)

where configuration C_{k+1} results from executing a single transition rule on configuration C_k, and (b mod c) = d where d < c, b = ec + d and b, c, d, e ∈ ℕ. In Equation (3) the top case simulates a left move transition rule and the bottom case simulates a right move transition rule. In the top case, following the left move, the sequence to the right of the tape head is longer by 1 tape cell, as cell a_0 is added to the right sequence. Cell a_0 is overwritten with the write symbol α_j, and thus we compute zY + z⟨α_j⟩ to simulate cell a_0 becoming part of the right sequence. Also, in the top case the sequence to the left of the tape head is getting shorter by 1 tape cell, thus we compute X/z − (X/z mod z). The rightmost cell of the left sequence, a_{−1}, is the new tape head location and the tape symbol it contains is encoded as (X/z mod z). Thus the value (X/z mod z) is added to the new encoded current state ⟨q_u⟩. For the bottom case, a right move, the sequence to the right gets shorter, which is simulated by Y/z − (Y/z mod z), and the sequence to the left gets longer, which is simulated by zX + z⟨α_j⟩. The leftmost cell of the right sequence, a_1, is the new tape head location and the tape symbol it contains is encoded as (Y/z mod z).

5.2 Input to Π_M

Here we give an explanation of how the input is read into Π_M. We also give a rough outline of how the input to Π_M is encoded in linear time. A configuration C_k given by Equation (2) is read into Π_M as follows. All the neurons of the system initially have no spikes, with the exception of σ_10 which has 31 spikes.
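Before turning to how the input is read in, the arithmetic of Equation (3) can be checked on a small instance. We reuse the illustrative values from before (our choices, not the paper's): z = 16, symbol encodings ⟨α_1⟩ = 1 and ⟨α_2⟩ = 3, left tape a_{−1} = α_2, a_{−2} = α_1 (so X = 16·3 + 256·1 = 304), right tape a_1 = α_1 (so Y = 16), writing α_1 and moving to state q_2 with ⟨q_2⟩ = 8:

```python
# The arithmetic of Equation (3) on an encoded triple, with illustrative z = 16.
z = 16

def simulate_rule(X, Y, q_u_enc, alpha_j_enc, D):
    """Apply transition rule q_r, alpha_i, alpha_j, D, q_u to <C_k> = (X, Y, .)."""
    if D == "L":
        head = (X // z) % z                  # <a_-1>, the new head symbol
        return X // z - head, z * Y + z * alpha_j_enc, q_u_enc + head
    else:                                    # D == "R"
        head = (Y // z) % z                  # <a_1>, the new head symbol
        return z * X + z * alpha_j_enc, Y // z - head, q_u_enc + head

# Left move: the left sequence shrinks to just the old a_-2 (new X = 16*1),
# the written alpha_1 and the old a_1 join the right sequence (new Y = 16 + 256),
# and the old <a_-1> = 3 is added to the new state encoding (8 + 3 = 11).
assert simulate_rule(304, 16, 8, 1, "L") == (16, 272, 11)
```

Note that all four quantities in Equation (3) are built from multiplication, division and remainder by z; since z is a power of two, the system of Section 5 realises the multiplications by repeated spike doubling.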
The input neuron σ_5 receives X + 2 spikes at the first timestep t_1, Y spikes at time t_2, and ⟨q_r⟩ + ⟨α_i⟩ spikes at time t_4. We explain how the system is initialised to encode an initial configuration of M by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t. Thus at time t_1 we have

    t_1:  σ_5 = X + 2,  s^2(s^z)^*/s → s; 1,    σ_10 = 31,  s^31/s^16 → λ; 0.

where on the left σ_j = k gives the number k of spikes in neuron σ_j at time t_i, and on the right is the next rule that is to be applied at time t_i if there is an applicable rule at that time. Thus from Figure 3, when we apply the rule s^2(s^z)^*/s → s; 1 in neuron σ_5 and the rule s^31/s^16 → λ; 0 in neuron σ_10 at time t_1 we get

    t_2:  σ_4 = X + 2,  s^2(s^z)^*/s^z → s^z; 2,    σ_5 = Y,  s^{2z}(s^z)^*/s → s; 1,
          σ_6, σ_7, σ_8, σ_9 = X + 2,  s^2(s^z)^*/s → λ; 0,    σ_10 = 15,  s^15/s^8 → λ; 0.

    t_3:  σ_4 = X + 2,  s^2(s^z)^*/s^z → s^z; 1,    σ_6 = Y,  (s^z)^*/s → s; 1,
          σ_7, σ_8, σ_9 = Y,  (s^z)^*/s → λ; 0,    σ_10 = 7,  s^7/s^4 → λ; 0.

    t_4:  σ_1 = X,    σ_2 = Y,    σ_4 = 2,  s^2/s^2 → λ; 0,
          σ_5 = ⟨q_r⟩ + ⟨α_i⟩,  (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1,    σ_10 = 3,  s^3/s^2 → λ; 0.

    t_5:  σ_1 = X,    σ_2 = Y,    σ_4, σ_6 = ⟨q_r⟩ + ⟨α_i⟩,
          σ_7, σ_8, σ_9 = ⟨q_r⟩ + ⟨α_i⟩,  s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0,
          σ_10 = 1,  s/s → s; log_2(z) + 3.

Forgetting rules are applied to get rid of superfluous spikes (for example, see neurons σ_7, σ_8, and σ_9 at time t_2). Note that σ_4 is closed at time t_2 as there is a delay of 2 on the rule (s^2(s^z)^*/s^z → s^z; 2) to be executed in σ_4. This prevents the Y spikes from entering neuron σ_4 when σ_5 fires at time t_2. At time t_5 the spiking neural P system has X spikes in σ_1, Y spikes in σ_2, and ⟨q_r⟩ + ⟨α_i⟩ spikes in σ_4 and σ_6.
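The 31 spikes in σ_10 act as a step counter for the loading phase: the forgetting rules s^31/s^16, s^15/s^8, s^7/s^4 and s^3/s^2 each remove half the spikes, rounded up, until one spike remains. A minimal sketch of this countdown (our own illustration, tracking spike counts only):

```python
def sigma10_countdown(n=31):
    """Spike counts in sigma_10 after each forgetting rule fires during loading."""
    trace = []
    while n > 1:
        n -= (n + 1) // 2   # rule s^(2^(k+1)-1) / s^(2^k) -> lambda; 0 forgets 2^k spikes
        trace.append(n)
    return trace
```

When a single spike remains, σ_10 applies s/s → s; log_2(z) + 3, which starts the clock for the first transition-rule simulation.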
Thus at time t_5 the spiking neural P system encodes an initial configuration of M.

In this paragraph we show that, given an initial configuration of M, it is encoded as input to our spiking neural P system in Figure 3 in linear time. In order to do this we must compute the three numbers that give ⟨C_k⟩ from Equation (2) in linear time. The number X is computed as follows: given a sequence a_{-x} a_{-x+1} ... a_{-2} a_{-1}, the sequence

    w = ⟨a_{-x}⟩ 0^{log_2(z)−1} ⟨a_{-x+1}⟩ 0^{log_2(z)−1} ... ⟨a_{-2}⟩ 0^{log_2(z)−1} ⟨a_{-1}⟩ 0^{log_2(z)−1} 2

is easily computed in time that is linear in x. The spiking neural P system Π_input in Figure 2 takes the sequence w and converts it into the X spikes that form part of the input to our system in Figure 3. We give a rough idea of how Π_input operates (if the reader wishes to pursue a more detailed view, the rules for Π_input are to be found in Table 4). The input neuron of Π_input receives the sequence w as a sequence of spikes and no-spikes. On each timestep where ⟨a⟩ is read, ⟨a⟩ spikes are passed to the input neuron σ_1, and on each timestep where 0 is read, no spikes are passed to the input neuron. Thus at timestep t_1 neuron σ_1 receives ⟨a_{-x}⟩ spikes, and at timestep t_2 neurons σ_2, σ_3, and σ_4 receive ⟨a_{-x}⟩ spikes from σ_1. Following timestep t_2, the number of spikes in neurons σ_2, σ_3, and σ_4 doubles with each timestep. So at timestep t_{log_2(z)+1} the number of spikes in each of the neurons σ_2, σ_3, and σ_4 is (z/2)⟨a_{-x}⟩. At timestep t_{log_2(z)+1} neurons σ_2, σ_3 and σ_4 also receive ⟨a_{-x+1}⟩ spikes from σ_1, giving a total of z⟨a_{-x}⟩ + ⟨a_{-x+1}⟩ spikes in each of these neurons at time t_{log_2(z)+2}. Proceeding to time t_{2log_2(z)+2}, neurons σ_2, σ_3 and σ_4 have z^2⟨a_{-x}⟩ + z⟨a_{-x+1}⟩ + ⟨a_{-x+2}⟩ spikes. This process continues until X = Σ_{i=1}^{x} z^i ⟨a_{-i}⟩ is computed.
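The net effect of this read-and-double loop is Horner's rule in base z: each new digit ⟨a⟩ is added, and the running total is then doubled log_2(z) times before the next digit arrives. A sketch of the arithmetic (our own notation; in the real system the doubling is performed by the σ_2, σ_3, σ_4 loop of Π_input):

```python
from math import log2

def read_left_sequence(digits, z):
    """digits: <a_-x>, ..., <a_-1>, i.e. the symbol encodings in the order
    they appear in the sequence w (most distant cell first)."""
    total = 0
    for d in digits:
        total += d                      # digit arrives from the input neuron
        for _ in range(int(log2(z))):   # contents double once per timestep
            total *= 2
    return total                        # equals X = sum_i z^i <a_-i>
```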
The end of the process is signalled when the rightmost number in the sequence is read. When this number (2) is read, it allows the result to be passed to σ_6 via σ_5. Following this, σ_6 sends X spikes out of the system. Note that prior to this 2 being read, only forgetting rules are executed in σ_6, thus preventing any spikes from being sent out of the system. Π_input computes X in time x log_2(z) + 3. Recall from Section 5.1 that the value of z is dependent on the number of states and symbols in M, thus X is computed in time that is linear in x. In a similar manner, the value Y is computed by Π_input in time linear in y. The number ⟨q_r⟩ + ⟨α_i⟩ is computed in constant time. Thus the input ⟨C_k⟩ for Π_M is computed in linear time.

5.3 Algorithm overview

To help simplify the explanation, some of the rules given here differ slightly from those in the more detailed simulation that follows this overview. The numbers from Equation (2), encoding a Turing machine configuration, are stored in the neurons of our system as X, Y and ⟨q_r⟩ + ⟨α_i⟩ spikes.

[Fig. 2. Spiking neural P system Π_input, with input neuron σ_1, neurons σ_2 to σ_5, and output neuron σ_6. Each circle is a neuron and each arrow represents the direction spikes move along a synapse between a pair of neurons. The rules for Π_input are to be found in Table 4.]

Equation (3) is implemented in Figure 3 to give a spiking neural P system Π_M that simulates the transition rules of M. The two values X and Y are stored in neurons σ_1 and σ_2, respectively. If X or Y is to be multiplied, the spikes that encode X or Y are sent down through the network of neurons from either σ_1 or σ_2, respectively, until they reach σ_10. Note in Figure 3 that each neuron from σ_7, σ_8 and σ_9 has incoming synapses coming from the other two neurons in σ_7, σ_8 and σ_9.
Thus if σ_7, σ_8 and σ_9 each contain N spikes at time t_k, and they each fire sending N spikes, then each of the neurons σ_7, σ_8 and σ_9 will contain 2N spikes at time t_{k+1}. Given Y, the value zY = 2^v Y is computed as follows: first we calculate 2Y by firing σ_7, σ_8 and σ_9, then 4Y by firing σ_7, σ_8, and σ_9 again. After v timesteps the value zY is computed. zX is computed using the same technique.

Now we give the general idea of how the neurons compute X/z − (X/z mod z) and (X/z mod z) from Equation (3) (a slightly different strategy is used in the simulation). We begin with X spikes in σ_1. The rule (s^z)^*/s^z → s; 1 is applied in σ_1, sending X/z spikes to σ_4. Following this, (s^z)^* s^{(X/z mod z)}/s^z → s^z; 1 is applied in σ_4, which sends X/z − (X/z mod z) spikes to σ_1, leaving (X/z mod z) spikes in σ_4. The values Y/z − (Y/z mod z) and (Y/z mod z) are computed in a similar manner.

Finally, using the encoded current state ⟨q_r⟩ and the encoded read symbol ⟨α_i⟩, the values z⟨α_j⟩ and ⟨q_u⟩ from Equation (3) are computed. Using the technique outlined in the first paragraph of the algorithm overview, the value z(⟨q_r⟩ + ⟨α_i⟩) is computed by sending ⟨q_r⟩ + ⟨α_i⟩ spikes from σ_5 to σ_10 in Figure 3. Then the rule s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z(⟨q_r⟩+⟨α_i⟩)−⟨q_u⟩} → s^{z⟨α_j⟩}; 1 is applied in σ_10, which sends z⟨α_j⟩ spikes out to neurons σ_4 and σ_6. This rule uses z(⟨q_r⟩ + ⟨α_i⟩) − ⟨q_u⟩ spikes, thus leaving ⟨q_u⟩ spikes remaining in σ_10. This completes our sketch of how Π_M in Figure 3 computes the values in Equation (3) to simulate a transition rule. A more detailed simulation of a transition rule follows.

[Fig. 3. Universal spiking neural P system Π_M, with input neuron σ_5 and output neuron σ_3. Each circle is a neuron and each arrow represents the direction spikes move along a synapse between a pair of neurons. The rules for Π_M are to be found in Tables 2 and 3.]

5.4 Simulation of q_r, α_i, α_j, L, q_u (top case of Equation (3))

The simulation of the transition rule begins at time t_k with X spikes in σ_1, Y spikes in σ_2, ⟨q_r⟩ + ⟨α_i⟩ spikes in σ_4 and σ_6, and 1 spike in σ_10. As before, we explain the simulation by giving the number of spikes in each neuron and the rule that is to be applied in each neuron at time t. So at time t_k we have

    t_k:  σ_1 = X,    σ_2 = Y,    σ_4, σ_6 = ⟨q_r⟩ + ⟨α_i⟩,  s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1,
          σ_10 = 1,  s/s → s; log_2(z) + 3.

Thus from Figure 3, when we apply the rule s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1 in neurons σ_4 and σ_6 at time t_k we get

    t_{k+1}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 6,
              σ_2 = Y + ⟨q_r⟩ + ⟨α_i⟩,  (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1,
              σ_10 = 1,  s/s → s; log_2(z) + 2.

    t_{k+2}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 5,
              σ_3 = Y + ⟨q_r⟩ + ⟨α_i⟩,  (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s^z; 1 if ⟨q_r⟩ = ⟨q_{|Q|}⟩,
                                        (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0 if ⟨q_r⟩ ≠ ⟨q_{|Q|}⟩,
              σ_5 = Y + ⟨q_r⟩ + ⟨α_i⟩,  (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1,
              σ_6 = Y + ⟨q_r⟩ + ⟨α_i⟩,  s^z(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0,
              σ_10 = 1,  s/s → s; log_2(z) + 1.

    t_{k+3}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 4,
              σ_4, σ_6 = Y + ⟨q_r⟩ + ⟨α_i⟩,  s^z(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0,
              σ_7, σ_8, σ_9 = Y + ⟨q_r⟩ + ⟨α_i⟩,  s^z(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1,
              σ_10 = 1,  s/s → s; log_2(z).

In timestep t_{k+2} above, the output neuron σ_3 fires if and only if the encoded current state encodes the halt state q_{|Q|}.
Recall that when M halts the entire tape contents are to the right of the tape head, thus only Y, the encoding of the right sequence, is sent out of the system. Thus the unary output is a number of spikes that encodes the tape contents of M. Note that at timestep t_{k+3} the neuron σ_7 receives Y + ⟨q_r⟩ + ⟨α_i⟩ spikes from each of the two neurons σ_8 and σ_9. Thus at time t_{k+4} neuron σ_7 contains 2(Y + ⟨q_r⟩ + ⟨α_i⟩) spikes. In a similar manner, σ_8 and σ_9 also receive 2(Y + ⟨q_r⟩ + ⟨α_i⟩) spikes at timestep t_{k+3}. The number of spikes in each of the neurons σ_7, σ_8 and σ_9 doubles at each timestep between t_{k+3} and t_{k+log_2(z)+2}.

    t_{k+4}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 3,
              σ_7, σ_8, σ_9 = 2(Y + ⟨q_r⟩ + ⟨α_i⟩),  s^z(s^z)^* s^{2(⟨q_r⟩+⟨α_i⟩)}/s → s; 1,
              σ_10 = 1,  s/s → s; log_2(z) − 1.

    t_{k+5}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 2,
              σ_7, σ_8, σ_9 = 4(Y + ⟨q_r⟩ + ⟨α_i⟩),  s^z(s^z)^* s^{4(⟨q_r⟩+⟨α_i⟩)}/s → s; 1,
              σ_10 = 1,  s/s → s; log_2(z) − 2.

    t_{k+6}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 1,
              σ_7, σ_8, σ_9 = 8(Y + ⟨q_r⟩ + ⟨α_i⟩),  s^z(s^z)^* s^{8(⟨q_r⟩+⟨α_i⟩)}/s → s; 1,
              σ_10 = 1,  s/s → s; log_2(z) − 3.

The number of spikes in neurons σ_7, σ_8, and σ_9 continues to double until timestep t_{k+log_2(z)+2}. When neurons σ_7 and σ_9 fire at timestep t_{k+log_2(z)+2} they send (z/2)(Y + ⟨q_r⟩ + ⟨α_i⟩) spikes each to neuron σ_10, which has opened at time t_{k+log_2(z)+2} (for the first time in the transition rule simulation). Thus at time t_{k+log_2(z)+3} neuron σ_10 contains z(Y + ⟨q_r⟩ + ⟨α_i⟩) spikes.
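The doubling phase just described is a multiplication by z = 2^{log_2(z)} performed in log_2(z) timesteps. A sketch of the three-neuron doubling loop (our own naming; only spike counts are modelled):

```python
from math import log2

def multiply_by_z(N, z):
    """sigma_7, sigma_8 and sigma_9 each start with N spikes. Every timestep each
    neuron fires its contents to the other two, so each holds the sum of the other
    two (i.e. twice as many spikes) at the next step. After log2(z) steps: z*N."""
    s7 = s8 = s9 = N
    for _ in range(int(log2(z))):
        s7, s8, s9 = s8 + s9, s7 + s9, s7 + s8
    return s7
```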
    t_{k+log_2(z)+2}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 5,
                       σ_7, σ_8, σ_9 = (z/2)(Y + ⟨q_r⟩ + ⟨α_i⟩),  s^z(s^z)^* s^{(z/2)(⟨q_r⟩+⟨α_i⟩)}/s → s; 1,
                       σ_10 = 1,  s/s → s; 1.

    t_{k+log_2(z)+3}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 4,
                       σ_4, σ_6 = 1,  s/s → λ; 0,
                       σ_7, σ_8, σ_9 = z(Y + ⟨q_r⟩ + ⟨α_i⟩),  (s^z)^*/s → λ; 0,
                       σ_10 = z(Y + ⟨q_r⟩ + ⟨α_i⟩),  (s^{z^2})^* s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z^2} → s^{z^2}; 1.

Note that (zY mod z^2) = 0 and also that z(⟨q_r⟩ + ⟨α_i⟩) < z^2. Thus in neuron σ_10 at time t_{k+log_2(z)+3} the rule (s^{z^2})^* s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z^2} → s^{z^2}; 1 separates the encoding of the right side of the tape, s^{zY}, and the encoding of the current state and read symbol, s^{z(⟨q_r⟩+⟨α_i⟩)}. To see this, note the number of spikes in neurons σ_6 and σ_10 at time t_{k+log_2(z)+4}. The rule s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z(⟨q_r⟩+⟨α_i⟩)−⟨q_u⟩−1} → s^{z⟨α_j⟩}; 1, applied in σ_10 at timestep t_{k+log_2(z)+4}, computes the new encoded current state ⟨q_u⟩ and the encoded write symbol z⟨α_j⟩. To see this, note the number of spikes in neurons σ_6 and σ_10 at time t_{k+log_2(z)+5}. Note that neuron σ_1 is preparing to execute the rule s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 1 at timestep t_{k+log_2(z)+6}, and so at timesteps t_{k+log_2(z)+4} and t_{k+log_2(z)+5} neuron σ_1 remains closed. Thus the spikes sent out from σ_4 at these times do not enter σ_1.

    t_{k+log_2(z)+4}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 3,
                       σ_4, σ_6 = zY,  (s^z)^*/s → s; 1,
                       σ_10 = z(⟨q_r⟩ + ⟨α_i⟩),  s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z(⟨q_r⟩+⟨α_i⟩)−⟨q_u⟩−1} → s^{z⟨α_j⟩}; 1.
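The separation step works because zY is a multiple of z^2 (every term of Y already carries a factor z), while z(⟨q_r⟩ + ⟨α_i⟩) < z^2. In arithmetic terms the rule takes a quotient and remainder modulo z^2 (a sketch in our own notation):

```python
def separate(spikes, z):
    """Rule (s^{z^2})^* s^{z(<q_r>+<alpha_i>)} / s^{z^2} -> s^{z^2}; 1 in sigma_10:
    with exhaustive use, every full block of z^2 spikes is consumed and forwarded,
    so zY is sent on while z(<q_r>+<alpha_i>) stays behind."""
    sent = spikes - spikes % (z * z)   # = zY, forwarded towards sigma_4 and sigma_6
    kept = spikes % (z * z)            # = z(<q_r>+<alpha_i>), remains in sigma_10
    return sent, kept
```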
    t_{k+log_2(z)+5}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 2,
                       σ_2 = zY,    σ_4, σ_6 = z⟨α_j⟩,  (s^z)^*/s → s; 1,
                       σ_10 = ⟨q_u⟩ + 1,  s^{⟨q_u⟩+1}/s^{⟨q_u⟩} → s^{⟨q_u⟩}; 4.

    t_{k+log_2(z)+6}:  σ_1 = X + ⟨q_r⟩ + ⟨α_i⟩,  s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; 1,
                       σ_2 = zY + z⟨α_j⟩,
                       σ_10 = ⟨q_u⟩ + 1,  s^{⟨q_u⟩+1}/s^{⟨q_u⟩} → s^{⟨q_u⟩}; 3.

At time t_{k+log_2(z)+7}, in neuron σ_4 the rule s^z(s^z)^* s^{(X/z mod z)}/s^z → s^z; 1 is applied, sending X/z − (X/z mod z) spikes to σ_1 and leaving (X/z mod z) spikes in σ_4. At the same time, in neuron σ_5 the rule s^z(s^z)^* s^{(X/z mod z)}/s^z → λ; 0 is applied, leaving only (X/z mod z) spikes in σ_5.

    t_{k+log_2(z)+7}:  σ_1 = ⟨q_r⟩ + ⟨α_i⟩,  s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0,
                       σ_2 = zY + z⟨α_j⟩,
                       σ_4 = X/z,  s^z(s^z)^* s^{(X/z mod z)}/s^z → s^z; 1,
                       σ_5 = X/z,  s^z(s^z)^* s^{(X/z mod z)}/s^z → λ; 0,
                       σ_10 = ⟨q_u⟩ + 1,  s^{⟨q_u⟩+1}/s^{⟨q_u⟩} → s^{⟨q_u⟩}; 2.

    t_{k+log_2(z)+8}:  σ_1 = X/z − (X/z mod z),    σ_2 = zY + z⟨α_j⟩,
                       σ_4 = (X/z mod z),  s^{(X/z mod z)}/s → λ; 0,
                       σ_5 = (X/z mod z),  s^{(X/z mod z)}/s → s; 1,
                       σ_10 = ⟨q_u⟩ + 1,  s^{⟨q_u⟩+1}/s^{⟨q_u⟩} → s^{⟨q_u⟩}; 1.

    t_{k+log_2(z)+9}:  σ_1 = X/z − (X/z mod z),    σ_2 = zY + z⟨α_j⟩,
                       σ_4 = ⟨q_u⟩ + (X/z mod z),  s^{⟨q_u⟩+(X/z mod z)}/s → s; 1,
                       σ_6 = ⟨q_u⟩ + (X/z mod z),  s^{⟨q_u⟩+(X/z mod z)}/s → s; 1,
                       σ_7, σ_8, σ_9 = (X/z mod z),  s^{(X/z mod z)}/s^{(X/z mod z)} → λ; 0,
                       σ_10 = 1,  s/s → s; log_2(z) + 3.

The simulation of the left moving transition rule is now complete. Note that the numbers of spikes in σ_1, σ_2, σ_4, and σ_6 at timestep t_{k+log_2(z)+9} are the values given by the top case of Equation (3) and encode the configuration after the left move transition rule.
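The final extraction in σ_4 is again a quotient/remainder computation, this time modulo z: the multiples of z return to σ_1 as the new left sequence, and the remainder encodes the symbol under the new head position. A sketch (our notation):

```python
def extract_new_left(Xz, z):
    """Rule s^z (s^z)^* s^{(X/z mod z)} / s^z -> s^z; 1 in sigma_4: the Xz = X/z
    spikes are split into X/z - (X/z mod z), sent to sigma_1, and the remainder
    (X/z mod z), which stays in sigma_4 and encodes the symbol in cell a_-1."""
    remainder = Xz % z
    return Xz - remainder, remainder
```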
The case where the tape head moves onto a part of the tape that is to the left of a_{-x+1} in Equation (1) is not covered by the simulation above. For example, when the tape head is over cell a_{-x+1}, then X = z (recall that a_{-x} contains α_1). If the tape head moves to the left, then from the top case of Equation (3) the new value for the left sequence is X = 0. Therefore we increase the length of X to simulate the infinite blank symbols (α_1 symbols) to the left, as follows. The rule s^{z+⟨q_r⟩+⟨α_i⟩}/s^z → s^z; 1 is applied in σ_1 at time t_{k+log_2(z)+6}. Then at time t_{k+log_2(z)+7} the rule (s^z)^*/s → s; 1 is applied in σ_4 and the rule s^z/s^{z−1} → λ; 0 is applied in σ_5. Thus at time t_{k+log_2(z)+8} there are z spikes in σ_1, which simulates another α_1 symbol to the left, and there is 1 spike in σ_5 to simulate the current read symbol α_1.

We have shown how to simulate an arbitrary left moving transition rule of M. Right moving transition rules are also simulated in log_2(z) + 9 timesteps, in a manner similar to that of left moving transition rules. Thus a single transition rule of M is simulated by Π_M in log_2(z) + 9 timesteps. Recall from Section 5.1 that z = 2^{⌈log_2(2|Q||A|+2|A|)⌉}, thus the entire computation of M is simulated in O(|A||Q|T) time. From Section 5.1, M is simulated in O((2^{⌈log_2(2|Q||A|+2|A|)⌉})^T) space. ⊓⊔

While the small universal spiking neural P system in Figure 3 simulates Turing machines with a linear time overhead, it requires an exponential space overhead. This requirement may be shown by proving that it is simulated by a counter machine using the same space. However, it is not unreasonable to expect efficiency from simple universal systems, as many of the simplest computationally universal models have polynomial time and space overheads [13,14,17].
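Putting the pieces together, one left or right move of Equation (3), including the blank-extension boundary case described above, can be checked as plain arithmetic. The following sketch uses our own naming, with symbol and state encodings as in Section 5.1 (the paper only spells out the left boundary case; we treat the right case symmetrically):

```python
def simulate_move(X, Y, direction, enc_write, enc_next_state, z):
    """One application of Equation (3); enc_write = <alpha_j>, enc_next_state = <q_u>."""
    if direction == "L":
        r = (X // z) % z                   # <a_-1>, symbol under the new head position
        X, Y = X // z - r, z * Y + z * enc_write
        if X == 0:                         # head ran off the left sequence:
            X = z                          # append a blank alpha_1 (<alpha_1> = 1), so X = z
    else:
        r = (Y // z) % z                   # <a_1>
        X, Y = z * X + z * enc_write, Y // z - r
        if Y == 0:                         # symmetric boundary case on the right
            Y = z
    return X, Y, enc_next_state + r
```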
It was mentioned in Section 2 that we generalised the previous definition of spiking neural P systems with exhaustive use of rules to allow the input neuron to receive an arbitrary number of spikes in a single timestep. If the synapses of the system can transmit an arbitrary number of spikes in a single timestep, then it does not seem unreasonable to allow an arbitrary number of spikes to enter the input neuron in a single timestep. If the input is restricted to a constant number of spikes, as is the case with earlier spiking neural P systems, then the system will remain exponentially slow due to the time required to read the unary input into the system.

References

1. H. Chen, M. Ionescu, and T. Ishdorj. On the efficiency of spiking neural P systems. In M.A. Gutiérrez-Naranjo et al., editor, Proceedings of the Fourth Brainstorming Week on Membrane Computing, pages 195–206, Sevilla, Feb. 2006.
2. P. C. Fischer, A. Meyer, and A. Rosenberg. Counter machines and counter languages. Mathematical Systems Theory, 2(3):265–283, 1968.
3. M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems. Fundamenta Informaticae, 71(2-3):279–308, 2006.
4. M. Ionescu, G. Păun, and T. Yokomori. Spiking neural P systems with exhaustive use of rules. International Journal of Unconventional Computing, 3(2):135–153, 2007.
5. M. Ionescu and D. Sburlan. Some applications of spiking neural P systems. In George Eleftherakis et al., editor, Proceedings of the Eighth Workshop on Membrane Computing, pages 383–394, Thessaloniki, June 2007.
6. I. Korec. Small universal register machines. Theoretical Computer Science, 168(2):267–301, Nov. 1996.
7. A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. On the computational power of spiking neural P systems. In M.A. Gutiérrez-Naranjo et al., editor, Proceedings of the Fifth Brainstorming Week on Membrane Computing, pages 227–245, Sevilla, Jan. 2007.
8. A. Leporati, C. Zandron, C. Ferretti, and G. Mauri. Solving numerical NP-complete problems with spiking neural P systems. In George Eleftherakis et al., editor, Proceedings of the Eighth Workshop on Membrane Computing, pages 405–423, Thessaloniki, June 2007.
9. T. Neary. A boundary between universality and non-universality in spiking neural P systems. arXiv:0912.0741v1 [cs.CC], December 2009.
10. T. Neary. On the computational complexity of spiking neural P systems. In Unconventional Computation, 7th International Conference, UC 2008, volume 5204 of LNCS, pages 189–205, Vienna, Aug. 2008. Springer.
11. T. Neary. A small universal spiking neural P system. In International Workshop on Computing with Biomolecules, pages 65–74, Vienna, Aug. 2008. Austrian Computer Society.
12. T. Neary. Presentation at the International Workshop on Computing with Biomolecules (CBM 2008). Available at http://www.emcc.at/UC2008/Presentations/CBM5.pdf.
13. T. Neary. Small universal Turing machines. PhD thesis, National University of Ireland, Maynooth, Oct. 2008.
14. T. Neary and D. Woods. P-completeness of cellular automaton Rule 110. In Michele Bugliesi et al., editor, International Colloquium on Automata, Languages and Programming 2006, (ICALP) Part I, volume 4051 of LNCS, pages 132–143, Venice, July 2006. Springer.
15. A. Păun and G. Păun. Small universal spiking neural P systems. BioSystems, 90(1):48–60, 2007.
16. G. Păun. Membrane Computing: An Introduction. Springer, 2002.
17. D. Woods and T. Neary. On the time complexity of 2-tag systems and small universal Turing machines. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 439–448, Berkeley, California, Oct. 2006. IEEE.
18. X. Zhang, Y. Jiang, and L. Pan. Small universal spiking neural P systems with exhaustive use of rules. In 3rd International Conference on Bio-Inspired Computing: Theories and Applications (BICTA 2008), pages 117–128, Adelaide, Australia, Oct. 2008. IEEE.
19. X. Zhang, X. Zeng, and L. Pan. Smaller universal spiking neural P systems. Fundamenta Informaticae, 87(1):117–136, Nov. 2008.

Table 2. The rules in each of the neurons σ_1 to σ_6 of Π_M. In the rules below, q_r is the current state, α_i is the read symbol, α_j is the write symbol, D is the move direction, and q_u is the next state of some transition rule q_r, α_i, α_j, D, q_u of M.

neuron  rules
σ_1:    (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1   if D = R
        s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 6   if D = L
        s^{z+⟨q_r⟩+⟨α_i⟩}/s^z → s^z; log_2(z) + 6   if D = L
        s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0   if D = L
σ_2:    (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1   if D = L or ⟨q_r⟩ = ⟨q_{|Q|}⟩
        s^{2z}(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s; log_2(z) + 6   if D = R
        s^{z+⟨q_r⟩+⟨α_i⟩}/s^z → s^z; log_2(z) + 6   if D = R
        s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0   if D = R
σ_3:    (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s^z → s^z; 1   if ⟨q_r⟩ = ⟨q_{|Q|}⟩
        (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0   if ⟨q_r⟩ ≠ ⟨q_{|Q|}⟩
        (s^z)^* s^{(X/z mod z)}/s → λ; 0
        s^z/s → λ; 0
σ_4:    s^2(s^z)^*/s^z → s^z; 2
        (s^z)^*/s → s; 1
        s^2/s^2 → λ; 0
        s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1
        s^z(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0
        s/s → λ; 0
        s^z(s^z)^* s^{(X/z mod z)}/s^z → s^z; 1
        s^{(X/z mod z)}/s → λ; 0
σ_5:    s^2(s^z)^*/s → s; 1
        s^{2z}(s^z)^*/s → s; 1
        (s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1
        s^z(s^z)^* s^{(X/z mod z)}/s^z → λ; 0
        s^z/s^{z−1} → λ; 0
        s^{(X/z mod z)}/s → s; 1
σ_6:    s^2(s^z)^*/s → λ; 0
        (s^z)^*/s → s; 1
        s^{⟨q_r⟩+⟨α_i⟩}/s → s; 1
        s^z(s^z)^* s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0
        s/s → λ; 0
        s^z(s^z)^* s^{(Y/z mod z)}/s^z → s^z; 1
        s^{(Y/z mod z)}/s → λ; 0

Note that (X/z mod z), (Y/z mod z) ∈ ⟨A⟩, the set of encodings for the symbols of M (see Section 5.1).

Table 3. The rules in each of the neurons σ_7 to σ_10 of Π_M. See Table 2 for some further explanation.

neuron           rules
σ_7, σ_8, σ_9:   s^2(s^z)^*/s → λ; 0
                 (s^z)^*/s → λ; 0
                 s^{⟨q_r⟩+⟨α_i⟩}/s → λ; 0
                 s^z(s^z)^* s^{m(⟨q_r⟩+⟨α_i⟩)}/s → s; 1   for all m = 2^k, 2 ≤ m ≤ z and k ∈ ℕ
                 s^{(X/z mod z)}/s^{(X/z mod z)} → λ; 0
σ_10:            s^31/s^16 → λ; 0
                 s^15/s^8 → λ; 0
                 s^7/s^4 → λ; 0
                 s^3/s^2 → λ; 0
                 s/s → s; log_2(z) + 3
                 (s^{z^2})^* s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z^2} → s^{z^2}; 1
                 s^{z(⟨q_r⟩+⟨α_i⟩)}/s^{z(⟨q_r⟩+⟨α_i⟩)−⟨q_u⟩−1} → s^{z⟨α_j⟩}; 1
                 s^{⟨q_u⟩+1}/s^{⟨q_u⟩} → s^{⟨q_u⟩}; 4

Table 4. The rules in each of the neurons of Π_input.

neuron           rules
σ_1:             s^*/s → s; 1
σ_2, σ_3, σ_4:   s^*/s → s; 1
σ_5:             (s^z)^* s^{⟨a⟩}/s → s; log_2(z)
                 (s^z)^* s^2/s → s; 1
σ_6:             (s^z)^* s^{⟨a⟩}/s → λ; 0
                 (s^z)^* s^2/s^z → s^z; 1