Computing a Maximal Independent Set Using Beeps


Authors: Alejandro Cornejo, Bernhard Haeupler, and Fabian Kuhn

Alejandro Cornejo∗, Bernhard Haeupler†, and Fabian Kuhn‡

Abstract

We consider the problem of finding a maximal independent set (MIS) in the discrete beeping model introduced in DISC 2010. At each time, a node in the network can either beep (i.e., emit a signal) or be silent. Silent nodes can only differentiate between no neighbor beeping, or at least one neighbor beeping. This basic communication model relies only on carrier-sensing. Furthermore, we assume nothing about the underlying communication graph and allow nodes to wake up (and crash) arbitrarily. We show that if a polynomial upper bound on the size of the network n is known, then with high probability every node becomes stable in O(log^3 n) time after it is woken up. To contrast this, we establish a polynomial lower bound when no a priori upper bound on the network size is known. This holds even in the much stronger model of local message broadcast with collision detection. Finally, if we assume nodes have access to synchronized clocks or we consider a somewhat restricted wake up, we can solve the MIS problem in O(log^2 n) time without requiring an upper bound on the size of the network, thereby achieving the same bit complexity as Luby's MIS algorithm.

∗ acornejo@mit.edu, Massachusetts Institute of Technology (MIT)
† haeupler@mit.edu, Massachusetts Institute of Technology (MIT)
‡ fabian.kuhn@usi.ch, University of Lugano (USI), Switzerland

1 Introduction

This paper studies the problem of computing a maximal independent set (MIS) in the discrete beeping wireless network model of [6]. A maximal independent set of a graph is a subset S of vertices, such that no two neighboring vertices belong to S, and any vertex outside of S has a neighbor inside S. Computing an MIS of a network in a distributed way is a classical problem that has been studied in various communication models.
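The two defining properties (independence and maximality) translate directly into code. A minimal checker, assuming an adjacency-set representation of the graph (our illustration, not part of the paper):

```python
def is_maximal_independent_set(adj, S):
    """Check whether S is a maximal independent set of the graph.

    adj: dict mapping each vertex to the set of its neighbors.
    S:   candidate set of vertices.
    """
    S = set(S)
    # Independence: no edge has both endpoints in S.
    for u in S:
        if adj[u] & S:
            return False
    # Maximality: every vertex outside S has a neighbor in S.
    for v in adj:
        if v not in S and not (adj[v] & S):
            return False
    return True

# Example: on the path 0-1-2-3, both {0, 2} and {1, 3} are MISs,
# while {0} is independent but not maximal.
```

Note that, as in the definition, maximality is a local condition: it suffices to check every vertex outside S against its own neighborhood.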
On the one hand, the problem is fundamental as it prototypically models symmetry breaking, a key task in many distributed computations. On the other hand, the problem is of practical interest, as especially in wireless networks, having an MIS provides a basic clustering that can be used as a building block for, e.g., efficient broadcast, routing, or scheduling. The network is modelled as a graph and time progresses in discrete and synchronized time slots. In each time slot, a node can either transmit a "jamming" signal (i.e., a beep) or detect whether at least one neighbor beeps. We believe that such a model is minimalistic enough to be implementable in many real world scenarios. At the same time, the model is simple enough to study and mathematically analyze distributed algorithms. Further, it has been shown that such a minimal communication model is strong enough to efficiently solve non-trivial tasks [6, 15, 19]. The beeping model can be implemented using only carrier sensing, where nodes need only to differentiate between silence and the presence of close-by activity on the wireless channel. Note that we do not assume that nodes can sense the carrier and send a beep simultaneously; a node that is beeping is assumed to receive no feedback. We believe that the model is also interesting from a practical point of view since carrier sensing can typically be used to communicate more energy efficiently and over larger distances than sending regular messages. Besides the basic communication properties described above, we make almost no additional assumptions. Nodes wake up asynchronously (controlled by an adversary), and sleeping nodes are not automatically woken up by incoming messages. Upon waking up, a node has no knowledge about the communication network. In particular, a node has no a priori information about its neighbors or their state.
No restrictions are placed on the structure of the underlying communication graph (e.g., it need not be a unit disk graph or a growth-bounded graph). Our contributions are two-fold. First, we show that if nodes are not endowed with any information about the underlying communication graph, any (randomized) distributed algorithm to find an MIS requires at least Ω(√(n/log n)) rounds. We remark that this lower bound holds much more generally. We prove the lower bound for the significantly more powerful radio network model with arbitrary message sizes and collision detection; it is therefore not an artifact of the amount of information which can be communicated in each round. Furthermore, this lower bound can be easily extended to other problems which require symmetry breaking (such as, e.g., a coloring or a small dominating set). Second, we study what upper bounds can be obtained by leveraging some knowledge of the network. Aided only by a polynomial upper bound on the size of the network, we present a simple, randomized distributed algorithm that finds an MIS in O(log^3 n) rounds with high probability. We then show that the knowledge of an upper bound on n can be replaced by synchronous clocks. In this case, we describe how to find an MIS with high probability in O(log^2 n) rounds. Finally, we show that the synchronous clocks assumption can be simulated if we allow the wake up pattern to be slightly restricted, also achieving a running time of O(log^2 n). We highlight that all the upper bounds presented in this paper compute an MIS eventually and almost surely, and thus only their running time is randomized. Moreover, in addition to being robust to nodes waking up, with no changes the algorithms also support nodes leaving the network with similar guarantees.

Related Work: The computation of an MIS has been recognized and studied as a fundamental distributed computing problem for a long time (e.g., [2, 3, 11, 16]).
Perhaps the single most influential MIS algorithm is the elegant randomized algorithm of [2, 11], generally known as Luby's algorithm, which has a running time of O(log n). This algorithm works in a standard message passing model, where nodes can concurrently and reliably send and receive messages over all point-to-point links to their neighbors. [12] show how to improve the bit complexity of Luby's algorithm to use only O(log n) bits per channel (O(1) bits per round). For the case where the size of the largest independent set in the neighborhood of each node is restricted to be a constant (known as bounded independence or growth-bounded graphs), [18] presented an algorithm that computes an MIS in O(log* n) rounds. This class of graphs includes unit disk graphs and other geometric graphs that have been studied in the context of wireless networks. The first effort to design a distributed MIS algorithm for a wireless communication model is by [13]. They provided an algorithm for the radio network model with an O(log^9 n / log log n) running time. This was later improved [14] to O(log^2 n). Both algorithms assume that the underlying graph is a unit disk graph (the algorithms also work for somewhat more general classes of geometric graphs). The two algorithms work in the standard radio network model of [4] in which nodes cannot distinguish between silence and the collision of two or more messages. The use of carrier sensing (a.k.a. collision detection) in wireless networks has, e.g., been studied in [5, 9, 19]. As shown in [19], collision detection can be powerful and can be used to improve the complexity of algorithms for various basic problems. [17] show how to approximate a minimum dominating set in a physical interference (SINR) model where in addition to sending messages, nodes can perform carrier sensing.
In [8], it is demonstrated how to use carrier sensing as an elegant and efficient way for coordination in practice. The present paper is not the first one that uses carrier sensing alone for distributed wireless network algorithms. A model similar to the beep model considered here was first studied in [7, 15]. As used here, the model has been introduced in [6], where it is shown how to efficiently obtain a variant of graph coloring that can be used to schedule non-overlapping message transmissions. Most related to this paper are results from [19] and [1]. In [19], it is shown that by solely using carrier sensing, an MIS can be computed in O(log n) time in growth-bounded graphs (a.k.a. bounded independence graphs). Here, we drop that restriction and study the MIS problem in the beeping model for general graphs. In [1], Afek et al. described an O(log^2 n) algorithm for a similar model motivated by a biological process in the development of the nervous system of flies. In [1], it is assumed that nodes can beep and listen to neighboring beeps at the same time and that all nodes are woken up synchronously.

2 System Model and Preliminary Definitions

In this paper we adopt the discrete beeping model introduced in [6]. To model the communication network we assume there is an underlying undirected graph G = (V, E), where V is a set of n = |V| vertices and E is the set of edges. We denote the set of neighbors of node u in G by N_G(u) = {v | {u, v} ∈ E}. For a node u ∈ V we use d_G(u) = |N_G(u)| to denote its degree (number of neighbors) and we use d_max = max_{u ∈ V} d_G(u) to denote the maximum degree of G. We consider a synchronous network model where an adversary can choose when a node wakes up and when it crashes. Specifically, each node in G is occupied by a process and the system progresses in synchronous rounds.
Initially all processes are sleeping, and a process starts participating at the round when it is woken up, which is chosen by an adversary. At any round the adversary can furthermore remove nodes by making them crash (or leave) permanently. We denote by G_t ⊆ G the subgraph induced by the processes which are participating at round t. Note that we described the model for an oblivious adversary that chooses a fixed G without knowing the randomness used by the algorithm. Instead of communicating by exchanging messages, we consider a more primitive communication model that relies entirely on carrier sensing. Specifically, in every round a participating process can choose to either beep or listen. In a round where a process decides to beep it receives no feedback. If a process at node v listens in round t, it can only distinguish between silence (i.e., no process u ∈ N_{G_t}(v) beeps in round t) or the presence of one or more beeps (i.e., there exists a process u ∈ N_{G_t}(v) who beeps in round t). Observe that a beep conveys less information than a conventional 1-bit message, since in the latter it is possible to distinguish between no message, a message with a one, and a message with a zero. Given an undirected graph H, a set of vertices I ⊆ V(H) is an independent set of H if every edge e ∈ E(H) has at most one endpoint in I. An independent set I ⊆ V(H) is a maximal independent set of H if for all v ∈ V(H) \ I the set I ∪ {v} is not independent. An event is said to occur with high probability if it occurs with probability at least 1 − n^{−c} for any constant c ≥ 1, where n = |V| is the number of nodes in the underlying communication graph. For a positive integer k ∈ N we use [k] as shorthand notation for {1, ..., k}. In a slight abuse of this notation we use [0] to denote the empty set ∅, and for a, b ∈ N with a < b we use [a, b] to denote the set {a, ..., b}.
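The beep/listen semantics just described can be made concrete in a few lines. A minimal round simulator for the beeping model (illustrative only; the function and variable names are ours):

```python
def beep_round(adj, beepers):
    """Simulate one synchronous round of the beeping model.

    adj:     dict mapping each participating node to the set of its
             neighbors in G_t.
    beepers: set of nodes that choose to beep this round.

    Returns each listener's observation: True if at least one neighbor
    beeped, False (silence) otherwise. Beeping nodes receive no
    feedback, so they are omitted from the result.
    """
    return {v: bool(adj[v] & beepers)
            for v in adj if v not in beepers}

# Triangle 0-1-2 with a pendant node 3 attached to 2; node 0 beeps.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
obs = beep_round(adj, {0})
```

In the example, nodes 1 and 2 hear a beep, node 3 hears silence, and the beeping node 0 learns nothing, which matches the one-bit-or-less nature of the channel.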
This paper describes several distributed algorithms that find a maximal independent set in the beeping model. In the algorithms described in this paper, nodes can be in one of three possible states: inactive, competing and MIS. We say a node is stable if it is in the MIS and all its neighbors are inactive, or if it has a stable neighbor in the MIS. Observe that by definition, if all nodes are stable then every node is either in the MIS or inactive, and the MIS nodes describe a maximal independent set. We will focus solely on algorithms in which eventually all nodes become stable (i.e., with probability one), and once nodes become stable they remain stable unless an MIS node crashes. In other words, we only consider Las Vegas type algorithms which always produce the correct output, but whose running time is a random variable. Moreover, we will show that with high probability nodes become stable quickly. We say a (randomized) distributed algorithm solves the MIS problem in T rounds if, in the case that no wake ups and crashes happen for T rounds, all nodes become stable with high probability. We furthermore say an MIS algorithm is fast-converging if it also guarantees that any individual node irrevocably decides to be inactive or in the MIS after being awake for at most T rounds. Note that this stronger termination guarantee only makes sense if there are no crashes, since a stable inactive node has to change its status and join the MIS if all its MIS neighbors crash. Moreover, this is precisely the guarantee that we provide in the algorithm presented in Section 4.

3 Lower Bound for Uniform Algorithms

In this section we show that without some a priori information about the network (e.g., an upper bound on its size or maximum degree), any fast-converging (randomized) distributed algorithm needs at least polynomial time to find an MIS with constant probability.
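The recursive definition of a stable node above unwinds into a simple fixed-point computation. A sketch, with the three states encoded as strings (our encoding, not the paper's):

```python
def stable_nodes(adj, state):
    """Compute the set of stable nodes.

    adj:   dict mapping each node to the set of its neighbors.
    state: dict mapping each node to 'inactive', 'competing', or 'MIS'.

    A node is stable if it is in the MIS and all its neighbors are
    inactive, or if it has a stable MIS neighbor.
    """
    # Base case: MIS nodes whose entire neighborhood is inactive.
    stable = {u for u in adj
              if state[u] == 'MIS'
              and all(state[v] == 'inactive' for v in adj[u])}
    # Propagate: a node with a stable MIS neighbor is stable too.
    changed = True
    while changed:
        changed = False
        for u in adj:
            if u not in stable and any(
                    v in stable and state[v] == 'MIS' for v in adj[u]):
                stable.add(u)
                changed = True
    return stable
```

When every node is stable, the MIS-state nodes form a maximal independent set, exactly as observed in the text.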
In some ways, this result is the analog of the polynomial lower bound [10] on the number of rounds required for a successful transmission in the radio network model without collision detection or knowledge of n. We stress that this lower bound is not an artifact of the beep model, but a limitation that stems from having message transmission with collisions and the fact that nodes are required to decide (but not necessarily terminate) without waiting until all nodes have woken up. Although we prove the lower bound for the problem of finding an MIS, this lower bound can be generalized to other problems (e.g., minimal dominating set, coloring, etc.). Specifically, we prove the lower bound for the stronger communication model of local message broadcast with collision detection. In this communication model a process can choose in every round either to listen or to broadcast a message (no restrictions are made on the size of the message). When listening, a process receives silence if no message is broadcast by its neighbors, it receives a collision if a message is broadcast by two or more neighbors, and it receives a message if it is broadcast by exactly one of its neighbors. The beep communication model can be easily simulated by this model (instead of beeping send a 1-bit message, and when listening translate a collision or the reception of a message to hearing a beep), and hence the lower bound applies to the beeping model. At its core, our lower bound argument relies on the observation that a node can learn essentially no information about the graph G if, upon waking up, it always hears collisions or silence. It thus has to decide whether it remains silent or beeps within a constant number of rounds. More formally:

Proposition 1. Let A be an algorithm run by all nodes, and let b ∈ {silent, collision}* be a fixed pattern.
If after waking up a node u hears b(r) whenever it listens in round r, then there are two constants ℓ ≥ 1 and p ∈ (0, 1] that only depend on A and b such that either a) u remains listening indefinitely, or b) u listens for ℓ − 1 rounds and broadcasts in round ℓ with probability p.

Proof. Fix a node u and let p(r) be the probability with which node u beeps in round r. Observe that p(r) can only depend on r, what node u heard up to round r (i.e., b) and its random choices. Therefore, given any algorithm, either p(r) = 0 for all r (and node u remains silent forever), or p(r) > 0 for some r, in which case we let p = p(r) and ℓ = r.

We now prove the main result of this section:

Theorem 2. If nodes have no a priori information about the graph G, then any fast-converging distributed algorithm in the local message broadcast model with collision detection that solves the MIS problem with constant probability requires at least Ω(√(n/log n)) rounds, even if no node crashes.

Proof. Fix any algorithm A. Using the previous proposition we split the analysis into three cases, and in each case we show that an algorithm in which nodes decide within o(√(n/log n)) rounds of waking up fails with probability 1 − o(1). We first ask what happens with nodes running algorithm A that hear only silence after waking up. Proposition 1 implies that either nodes remain silent forever, or there are constants ℓ and p such that nodes broadcast after ℓ rounds with probability p. In the first case, suppose nodes are in a clique, and observe that no node will ever broadcast anything. From this it follows that nodes cannot learn anything about the underlying graph (or even tell if they are alone). Thus, either no one joins the MIS, or all nodes join the MIS with constant probability, in which case their success probability is exponentially small in n.
Thus, for the rest of the argument we assume that nodes running A that hear only silence after waking up broadcast after ℓ rounds with probability p. Now we consider what happens with nodes running A that hear only collisions after waking up. Again, by Proposition 1 we know that either they remain silent forever, or there are constants m and p′ such that nodes broadcast after m rounds with probability p′. In the rest of the proof we describe a different execution for each of these cases.

CASE 1: (a node that hears only collisions remains silent forever) (see Figure 2 in Appendix A) For some k ≫ ℓ to be fixed later, we consider a set of k − 1 cliques C_1, ..., C_{k−1} and a set of k cliques U_1, ..., U_k, where each clique C_i has Θ(k log n / p) vertices, and each clique U_j has Θ(log n) vertices. We consider a partition of each clique C_i into k sub-cliques C_i(1), ..., C_i(k), each with Θ(log n / p) vertices. For simplicity, whenever we say two cliques are connected, they are connected by a complete bipartite graph. Consider the execution where in round i ∈ [k − 1] clique C_i wakes up, and in round ℓ the cliques U_1, ..., U_k wake up simultaneously. When clique U_j wakes up, it is connected to sub-clique C_i(j) for each i < ℓ. Similarly, when clique C_i wakes up, if i ≥ ℓ then for j ∈ [k] sub-clique C_i(j) is connected to clique U_j. During the first ℓ − 1 rounds only the nodes in C_1 are participating, and hence every node in C_1 broadcasts in round ℓ + 1 with probability p. Thus w.h.p. for all j ∈ [k] at least two nodes in sub-clique C_1(j) broadcast in round ℓ + 1. This guarantees that all nodes in cliques U_1, ..., U_k hear a collision during the first round they are awake, and hence they also listen for the second round. In turn, this implies that the nodes in C_2 hear silence during the first ℓ − 1 rounds they participate, and again for j ∈ [k] w.h.p.
there are at least two nodes in C_2(j) that broadcast in round ℓ + 2. By a straightforward inductive argument we can show (omitted) that in general, w.h.p., for each i ∈ [k − 1] and for every j ∈ [k] at least two nodes in sub-clique C_i(j) broadcast in round ℓ + i. Therefore, also w.h.p., all nodes in cliques U_1, ..., U_k hear collisions during the first k − 1 rounds after waking up. Observe that at most one node in each C_i can join the MIS (i.e., at most one of the sub-cliques of C_i has a node in the MIS), which implies there exists at least one clique U_j that is connected to only non-MIS sub-cliques. However, since the nodes in U_j are connected in a clique, exactly one node of U_j must decide to join the MIS, but all the nodes in U_j have the same state during the first k − 1 rounds. Therefore if nodes decide after participating for at most k − 1 rounds, w.h.p. either no one in U_j joins the MIS, or two or more nodes join the MIS. Finally, since we have n ∈ Θ(k^2 log n + k log n) nodes, we can let k ∈ Θ(√(n/log n)) and the theorem follows.

CASE 2: (a node that hears only collisions broadcasts after m rounds with probability p′) (see Figure 3 in Appendix A) For some k ≫ m to be fixed later, let q = ⌊k/4⌋ and consider a set of k cliques U_1, ..., U_k and a set of m − 1 cliques S_1, ..., S_{m−1}, where each clique U_i has Θ(log n / p′) vertices, and each clique S_i has Θ(log n / p) vertices. As before, we say two cliques are connected if they form a complete bipartite graph. Consider the execution where in round i ∈ [m − 1] clique S_i wakes up, and in round ℓ + j for j ∈ [k] clique U_j wakes up. When clique U_j wakes up, if j > 1 it is connected to every U_i for i ∈ {max(1, j − q), ..., j − 1}, and if j < m it is also connected to every clique S_h for h ∈ {m − j, ..., m − 1}.
During the first ℓ − 1 rounds only the nodes in S_1 are participating, and hence every node in S_1 broadcasts in round ℓ + 1 with probability p, and thus w.h.p. at least two nodes in S_1 broadcast in round ℓ + 1. This guarantees that the nodes in U_1 hear a collision upon waking up, and therefore they listen in round ℓ + 2. In turn this implies that the nodes in S_2 hear silence during the first ℓ − 1 rounds they participate, and hence w.h.p. at least two nodes in S_2 broadcast in round ℓ + 2. By a straightforward inductive argument we can show (omitted) that in general for i ∈ [m − 1] the nodes in S_i hear silence for the first ℓ − 1 rounds they participate, and w.h.p. at least two nodes in S_i broadcast in round ℓ + i. Moreover, for j ∈ [k] the nodes in U_j hear collisions for the first m − 1 rounds they participate, and hence w.h.p. there are at least two nodes in U_j who broadcast in round ℓ + m + j − 1. This implies that w.h.p. for j ∈ [k − q] the nodes in U_j hear collisions for the first q rounds they participate. We argue that if nodes choose whether or not to join the MIS q rounds after participating, then they fail w.h.p. In particular, consider the nodes in clique U_j for j ∈ {q, ..., k − 2q}. These nodes will hear collisions for the first q rounds they participate, and they are connected to other nodes which also hear collisions for the first q rounds they participate. Therefore, if nodes decide after participating for at most q rounds, w.h.p. either some node and all of its neighbors are not in the MIS, or two or more neighboring nodes join the MIS. Finally, since we have n ∈ Θ(m log n + k log n) nodes, we can let k ∈ Θ(n/log n) and hence q ∈ Θ(n/log n), and the theorem follows.

4 Maximal Independent Sets Using an Upper Bound on n

In this section we describe a simple and robust randomized distributed algorithm that computes an MIS with high probability in a polylogarithmic number of rounds.
Specifically, the algorithm only requires an upper bound N > n on the total number of nodes in the system, and guarantees that with high probability, O(log^2 N log n) rounds after joining, a node knows if it belongs to the MIS or if it is covered by an MIS node. Therefore, if the known upper bound is polynomial in n (i.e., N ∈ O(n^c) for a constant c), the algorithm terminates with high probability in time O(log^3 n).

Algorithm: If a node hears a beep while listening at any point during the execution, it restarts the algorithm. When a node wakes up (or it restarts), it stays in an inactive state where it listens for c log^2 N consecutive rounds. After this inactivity period, nodes start competing and group rounds into log N phases of c log N consecutive rounds. Due to the asynchronous wake up and the restarts, in general phases of different nodes will not be synchronized. In each round of phase i, a node beeps with probability 2^i / 8N, and otherwise it listens. Thus by phase log N a node beeps with probability 1/8 in every round. After successfully going through the log N phases of the competition (recall that when a beep is heard during any phase, the algorithm restarts), a node assumes it has joined the MIS and enters a loop where it beeps in every round with probability 1/8 forever (or until it hears a beep).

Algorithm 1 FastMIS algorithm
1: for c log^2 N rounds do listen  ▷ Inactive
2: for i ∈ {1, ..., log N} do  ▷ Competing
3:     for c log N rounds do
4:         with probability 2^i / 8N beep, otherwise listen
5: forever with probability 1/2 beep then listen, otherwise listen then beep  ▷ MIS

In contrast to the polynomial lower bound from Section 3, we show that the above algorithm not only solves the MIS problem in O(log^2 N log n) time but is also fast-converging.

Theorem 4.1. The FastMIS algorithm solves the MIS problem in O(log^2 N log n) time, where N is an upper bound on n that is a priori known to the nodes.
Under arbitrary wake ups and no crashes, the FastMIS algorithm is furthermore fast-converging.

This demonstrates that knowing a priori information about the network, even as simple as its size, can drastically change the complexity of a problem. The knowledge of n alone provably creates an exponential gap in the running time of fast-converging MIS algorithms.

Proof Outline. First, we leverage the fact that for two neighboring nodes to go simultaneously into the MIS, they have to choose the same actions (beep or listen) during at least c log N rounds. This does not happen w.h.p., and thus MIS nodes are independent w.h.p. On the other hand, since nodes which are in the MIS keep trying to break ties, an inactive node will never become active while it has a neighbor in the MIS, and even in the low probability event that two neighboring nodes do join the MIS, one of them will eventually and almost surely leave the MIS. The more elaborate part of the proof is showing that w.h.p. any node becomes stable after O(log^2 N log n) consecutive rounds without crashes. This requires three technical lemmas. First, we show that if the sum of the beep probabilities in a node's neighborhood is greater than a large enough constant, then it has been larger than a (smaller) constant for the c log N preceding rounds. This can be used to show that with constant probability, when a node u hears or produces a beep, no neighbor of the beeping node beeps at the same time and thus u becomes stable. Finally, since a node hears a beep or produces a beep every O(log^2 N) rounds, O(log^2 N log n) rounds suffice to stabilize w.h.p. (Detailed proofs in Appendix B.)

5 Synchronized Clocks

For this section we assume that nodes have synchronized clocks, i.e., know the current round number t. As before, we allow arbitrary node additions and deletions.

Algorithm: Nodes have three different internal states: inactive, competing, and MIS.
Each node has a parameter k that is monotonically increasing during the execution of the algorithm. All nodes start in the inactive state with k = 6. Nodes communicate in beep-triples, and synchronize by starting a triple only when t ≡ 0 (mod 3). The first bit of the triple is the Restart-Bit. A Beep is sent for the Restart-Bit if and only if t ≡ 0 (mod k). If a node hears a Beep on its Restart-Bit, it doubles its k, and if it is active it becomes inactive. The second bit sent in the triple is the MIS-Bit. A Beep is sent for the MIS-Bit if and only if a node is in the MIS state. If a node hears a Beep on the MIS-Bit, it becomes inactive. The last bit sent in a triple is the Competing-Bit. If inactive, a node listens to this bit; otherwise it sends a Beep with probability 1/2. If a node hears a Beep on the Competing-Bit, it becomes inactive. Furthermore, if a node is in the MIS state and hears a Beep on the Competing-Bit, it doubles its k. Lastly, a node transitions from inactive to competing between any time t and t + 1 for t ≡ 0 (mod k). Similarly, if a node is competing when t ≡ 0 (mod k), then it transitions to the MIS state. In the sequel, we refer to this algorithm as Algorithm 2. The state transitions are also depicted in Figure 1.

[Figure 1: State diagram for Algorithm 2. States: inactive (MIS-Bit 0, Competing-Bit 0), competing (MIS-Bit 0, Competing-Bit random), MIS (MIS-Bit 1, Competing-Bit random). Transitions: t ≡ 0 (mod k) advances a node to the next state; hearing a Beep returns it to inactive; hearing a Beep on the Restart-Bit doubles k.]

Idea: The idea of the algorithm is to employ Luby's permutation algorithm, in which a node picks a random O(log n)-size priority which it shares with its neighbors. A node then joins the MIS if it has the highest priority among its neighbors, and all neighbors of an MIS node become inactive. Despite the fact that this algorithm is described for the message exchange model, it is straightforward to adapt the priority comparisons to the Beep model.
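Adapting Luby-style priority comparison to beeps can be sketched as a centralized simulation: in each bit-round, the still-active nodes with a 1 in the current bit beep, and a silent active node that hears a beep drops out. This is our illustrative rendering of the idea (names and representation are ours):

```python
def luby_beep_comparison(adj, priority, bits):
    """Simulate the beep-based priority comparison for one phase.

    adj:      dict mapping each node to the set of its neighbors.
    priority: dict mapping each node to an integer priority, sent bit
              by bit from the highest-order bit (index bits-1) down.
    Returns the set of nodes that never hear a beep, i.e., the nodes
    that observe they have the highest priority in their neighborhood.
    """
    active = set(adj)               # nodes still sending their priority
    heard = {v: False for v in adj}
    for i in reversed(range(bits)):
        beepers = {v for v in active if (priority[v] >> i) & 1}
        dropped = set()
        for v in adj:
            if v not in beepers and adj[v] & beepers:
                heard[v] = True     # a neighbor beeped while v was silent
                if v in active:
                    dropped.add(v)  # v had a 0 here, so it stops competing
        active -= dropped
    return {v for v in adj if not heard[v]}
```

With distinct priorities, the local maxima are exactly the nodes that stay silent-loss-free, which is why ties (two neighbors with equal priorities) surface as collisions that signal k is still too small.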
For this, a node sends its priority bit by bit, starting with the highest-order bit and using a Beep for a 1. The only further modification is that a node stops sending its priority if it has already heard a Beep on a higher-order bit during which it remained silent because it had a zero in the corresponding bit. Using this simple procedure, a node can easily realize when a neighboring node has a higher priority. Furthermore, a node can observe that it has the highest priority in its neighborhood, which is exactly the case if it does not hear any Beep. Therefore, as long as nodes have a synchronous start and know n (or an upper bound), it is straightforward to get Luby's algorithm working in the beep model in O(log^2 n) rounds (and ignoring edge additions and deletions). We remark that this already implies a better round complexity than the result of [1] in a strictly weaker model, albeit without using a biologically inspired algorithm. In the rest of this section we show how to remove the need for an upper bound on n and a synchronous start. We rely solely on synchronized clocks to synchronize among nodes when a round to transmit a new priority starts. Our algorithm uses k to compute an estimate for the required priority size O(log n). Whenever a collision occurs and two nodes tie for the highest priority, the algorithm concludes that k is not large enough yet and doubles its guess. The algorithm furthermore uses the Restart-Bit to ensure that nodes locally work with the same k and run in a synchronized manner in which priority comparisons start at the same time (namely every t ≡ 0 (mod k)). It is not obvious that either a similar k or a synchronized priority comparison is necessary, but it turns out that algorithms without them can stall for a long time.
In the first case this is because nodes with a too small k repeatedly enter the MIS state simultaneously, while in the second case many asynchronously competing nodes (even with the same, large enough k) keep eliminating each other without one becoming dominant and transitioning into the MIS state.

Analysis: To prove the algorithm's correctness, we first prove two lemmas showing that with high probability k cannot become super-logarithmic.

Lemma 5.1. With high probability, k ∈ O(log n) for all nodes during the execution of the algorithm.

Proof. We start by showing that two neighboring nodes u, v in the MIS state must have the same k and must have transitioned to the MIS state at the same time. We prove both statements by contradiction. For the first part, assume that nodes u and v are in the MIS state but u transitioned to this state (the last time) before v. In this case v would have received the MIS-Bit from u and become inactive instead of joining the MIS, a contradiction. Similarly, for the sake of contradiction, we assume that k_u < k_v. In this case, during the active phase of u before it transitioned to the MIS at time t, it would have set its Restart-Bit to 0 at time t − k_u, received a 1 from v, and become inactive, contradicting the assumption that k_u < k_v. Given this, we now show that for a specific node u it is unlikely to become the first node with a too large k. For this we note that k_u gets doubled because of a Restart-Bit only if a Beep from a node with a larger k is received. This node can therefore not be responsible for u becoming the first node getting a too large k. The second way k can increase is if a node transitions out of the MIS state because it receives a Competing-Bit from a neighbor v. In this case, we know that u competed against at least one such neighbor for k rounds with none of them losing. The probability of this happening is 2^{−k}. Hence, if k ∈ Θ(log n), this does not happen w.h.p.
A union bound over all nodes and the polynomial number of rounds in which nodes are not yet stable finishes the proof.

Theorem 5.2. If during an execution the O(log n) neighborhood of a node u has not changed for Ω(log^2 n) rounds, then u is stable, i.e., u is either in the MIS state with all its neighbors being inactive, or it has at least one neighbor in the MIS state whose neighbors are all inactive.

Proof. First observe that if the whole graph has the same value of k and no two neighboring nodes transition to the MIS state at the same time, then our algorithm behaves exactly as Luby's original permutation algorithm, and therefore terminates after O(k log n) rounds with high probability. From a standard locality argument, it follows that a node u also becomes stable if the above assumptions only hold for an O(k log n) neighborhood around u. Moreover, since Luby's algorithm performs only O(log n) rounds in the message passing model, we can improve our locality argument to show that if an O(log n) neighborhood around u is well-behaved, then u behaves as in Luby's algorithm. Since the values for k are monotonically increasing and propagate between two neighboring nodes u, v with different k (i.e., k_u > k_v) in at most 2k_u steps, it follows that for a node u it takes at most O(k_u log n) rounds until either k_u increases or all nodes v in the O(log n) neighborhood of u have k_v = k_u for at least O(k log n) rounds. We can furthermore assume that these O(k log n) rounds are collision free (i.e., no two neighboring nodes go into the MIS), since any collision leads, with high probability, to an increased k value for one of the nodes within O(log n) rounds. For any value of k, within O(k log n) rounds a node thus either performs Luby's algorithm for O(log n) priority exchanges, or it increases its k.
Since k increases in powers of two and, according to Lemma 5.1, with high probability does not exceed O(log n), after at most ∑_{i=1}^{O(log log n)} 2^i · 3 · O(log n) = O(log^2 n) rounds the status labeling in an O(log n) neighborhood of u is a proper MIS. This means that u is stable at some point, and it is not hard to verify that the function of the MIS-bit guarantees that this property is preserved for the rest of the execution.

6 Simple Wake Up

In this section we show how to replace the assumption of synchronized clocks by instead restricting the way in which wake ups and crashes occur. The main theorem in this section is Theorem 6.3.

We work with the following simple wake up restriction: the adversary is allowed to start with any (possibly disconnected) graph; without loss of generality we call this time t = 0. Furthermore, the adversary can at any time wake up any set of new nodes, with the restriction that each new node is connected to at least one old node, i.e., a node that has been around for at least δ rounds, where we think of δ as a small non-constant quantity (e.g., log d_max). Similarly, the adversary can crash any node, as long as this node is connected only to old nodes. Given these quite flexible simple wake up dynamics, we will show that nodes can simulate synchronous clocks. This reduction allows us to execute Algorithm 2 without synchronized clocks. We start by presenting a very simple reduction that requires δ to depend on the current round. We then refine the reduction, show how to circumvent this problem, and give an MIS algorithm for the simple wake up assumption with a δ that is at least log d_max (note that this does not imply that nodes need to know log d_max).

6.1 Simple Wake Up and Synchronized Clocks

The core idea is for each node to keep a local time counter and to use a structured beep pattern to communicate this local time counter to new nodes.
The simple wake up dynamics prevent the adversary from blocking these beep patterns through staggered node additions like the ones described in the lower bound proof of Section 3. We split messages into blocks. A block starts with two zeros that unambiguously mark the beginning of a block. This is followed by the current block number t (i.e., the time counter) and lastly an equal number of bits carrying the data of the simulated algorithm. Both the individual bits describing the time and all data bits are interleaved with ones, which keeps the block beginning identifiable. As an example, the bit sequence abcdefghijkl would be sent as 00.0.a.00.1.b.00.1.0.c.d.00.1.1.e.f.00.1.0.0.g.h.i.00.1.0.1.j.k.l, where we replaced the separating ones by a period for better readability. Observe that each block contains a header, the current block number (i.e., the time), and some data. The complete algorithm operates as follows. Once a node is awake, it listens for four rounds. If no Beep was received during this time, the node can be sure that it is alone, which also implies that t = 0. If a node hears at least one Beep, it waits until it hears two rounds of silence in a row, which mark the beginning of a block. It then listens for the length of the whole block, which allows it to identify the current block number t. In either case a node learns the current block number (and thus the time) after listening for at most two blocks. Then it is able to perform the same computations as the synchronized algorithm of the previous section.

Theorem 6.1. Any algorithm that works in the discrete beep model with synchronized clocks in time O(T) can be simulated by an algorithm that works for the simple wake up dynamics with δ = O(log t) in time O(T + log t), where t is the total time the algorithm is run.

Proof.
If a new node is connected to a node that broadcasts the time for O(log t) rounds, it gets to know the number of blocks that have been sent (and thus the time itself). In the simple wake up dynamics with δ = O(log t) we thus get, inductively, that all old nodes are in sync and know the current time. With this knowledge they can easily infer how many data bits have been sent since the beginning of the algorithm and thus get a logical time on the data bits that is shared with all synchronized nodes. With this synchronization, old nodes can then run the simulated algorithm. Since during this computation, by construction, a constant fraction c of the bits are data bits, the number of rounds the simulated algorithm needs to run after the O(log t) time synchronization is at most cT. This leads to the claimed total running time of O(T + log t).

We can use the above reduction together with Algorithm 2 and obtain an MIS algorithm with running time O(log^2 n + log t). Unfortunately, if we run this algorithm for a long time, we do not get any running time guarantee in terms of the number of nodes. To avoid this we make the observation that Algorithm 2 does not use the full power of synchronized clocks but solely requires that nodes can evaluate whether t ≡ 0 (mod k), where it suffices to have a k = O(log d_max) that is logarithmic in the maximum degree of G. Thus, if nodes know an a priori upper bound Δ on d_max, it suffices to track time modulo log Δ, which requires only log log Δ time-bits. This way, at the cost of having to know Δ, the algorithm no longer depends on time. We thus get the following corollary:

Corollary 6.2. There is an algorithm that solves the MIS problem in a network with simple wake up dynamics with δ = O(log t) in O(log^2 n + log t) time, where t is the time over which wake ups and crashes occur.
If nodes are given an a priori upper bound Δ on the maximum degree, then there is also an O(log^2 n) time MIS algorithm that works in any network with simple wake up dynamics with δ = O(log log Δ).

6.2 The Simple Wake Up Algorithm

In the last subsection we gave two algorithms that work in the simple wake up model. Both algorithms have a drawback. The first one deteriorates over time, and thus the number of rounds it requires to solve the MIS problem increases as time progresses. The second algorithm, on the other hand, requires an a priori upper bound on d_max or n. As we showed in Sections 3 and 4, this can be a drastic advantage for an algorithm. In what follows we show that the synchronization required by Algorithm 2 can be achieved without a priori knowledge or dependence on t. The algorithm builds on the synchronization ideas developed before, where nodes try to keep a time counter up to some precision. We know that it suffices for the nodes to know the least significant log log d_max bits. The problem is that now they do not know d_max. The following approach makes sure that nodes send out all time bits but prioritize the earlier bits, such that any node that listens for O(2^l) rounds is able to infer the first l bits of the time. We will count the number of blocks that have been sent since the system started. This allows us to start with k = 1 and get all powers of two as values of k. It is important that nodes increase their k. Our approach is closely related to the binary carry sequence B, in which the n-th digit is the number of zeros at the end of n when written in base 2:

B = 0102010301020104010201030102010501020103010201040102010301020106010201...

Suppose now we associate the n-th number in this sequence with the n-th block that is sent.
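The binary carry sequence can be generated directly from the number of trailing zeros of n (a minimal Python sketch; the function name is ours, not the paper's):

```python
def binary_carry(n):
    """n-th entry of the binary carry sequence: the number of zeros
    at the end of n when written in base 2 (n >= 1)."""
    # n & -n isolates the lowest set bit; its position is the count
    # of trailing zeros.
    return (n & -n).bit_length() - 1


# The first 16 entries reproduce the prefix of B printed above.
prefix = "".join(str(binary_carry(n)) for n in range(1, 17))
```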
In this case, if the binary carry sequence number associated with a certain block is l, then for exactly the nodes with k = 2^i and i ≤ l we have t ≡ 0 (mod k), which implies a status change and a zero in the Restart-Bit for these nodes. We are going to define another sequence B′ that allows a node to identify the positions of all numbers smaller than l, as long as it observes any interval of B′ of length 2^{l+2}. The sequence B′ is a bit sequence in which the n-th bit is the parity of the number of occurrences of the n-th number of B in the first n numbers of B, i.e.,

B′ = 1101100111001001110110001100100111011001110010001101100011001001110110...

Suppose that we split the sequence into two parts, one containing all odd n (corresponding to the positions of zeros in B) and one containing all even n (the non-zeros in B). Then the odd sequence is strictly alternating, while the even sequence is 1010-free. The latter observation results from the fact that either the two ones or the two zeros would correspond to two consecutive occurrences of a 1 in B and should thus have different parities. After having received (at most) eleven consecutive bits of the sequence B′, a node will therefore see a 1010 pattern on the odd subsequence and no such pattern on the even subsequence. This allows it to identify the positions of all zeros in B. With this knowledge, a node can then turn its attention to the even subsequence to identify the positions of the ones in B. Again, this can be done in the same way, namely by splitting the even subsequence into two subsequences and waiting for a 1010 pattern. Iterating this procedure enables a node that receives 11 · 2^l consecutive bits of the B′ sequence to identify all positions of numbers of at most l in B. From then on, it can also send the right bits of this sequence to its neighbors as soon as it learns them. With this trick in mind we now describe the algorithm.
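Before doing so, note that B′ and its structural properties can be checked with a short sketch (assuming Python; `carry_parity_seq` is our name, not the paper's):

```python
from collections import Counter


def carry_parity_seq(n_max):
    """First n_max bits of B': the n-th bit is the parity of the number
    of occurrences of B_n among the first n entries of B."""
    counts = Counter()
    bits = []
    for n in range(1, n_max + 1):
        b = (n & -n).bit_length() - 1   # B_n = trailing zeros of n
        counts[b] += 1
        bits.append(counts[b] % 2)
    return bits


bits = carry_parity_seq(64)
prefix = "".join(map(str, bits[:16]))
# Odd positions n = 1, 3, 5, ... (indices 0, 2, 4, ...) track the zeros
# of B; even positions track the non-zeros.
odd_subseq = bits[0::2]
even_subseq = bits[1::2]
```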
It operates in bit-quadruples in a manner similar to Algorithm 2, but with an additional time-bit that is used to transmit the sequence B′ carrying the time parity information. Furthermore, we use the block structure with the leading zeros and the alternating ones to mark the beginning of a block. In total, a block with time-bit T, Restart-Bit R, MIS-bit M, and Competing-bit C is transmitted as 00.T.R.M.C., where again the periods represent the separating ones. The computation of the algorithm is now as in Algorithm 2, with two minor modifications. First, the time-bit is used to convey the parity bit of the current time as described above. A node constantly listens to bits in the sequence B′ that are not yet known to it, learning more positions over time; at the positions of the bits it knows, it sends these bits out. The second modification is that a node never increases its k value beyond the last power of two for which it can evaluate t ≡ 0 (mod k). This completes the algorithm description. Given the arguments above we can prove our main theorem for this section:

Theorem 6.3. There is an algorithm that solves the MIS problem in a network with simple wake up dynamics with δ = O(log d_max) in O(log^2 n) time.

Proof. We first observe by induction that if δ ∈ O(log d_max), any old node knows the log log d_max least significant bits of the time. This allows k to grow as large as O(log d_max), which is sufficient for Luby's algorithm to work efficiently. Thus, simulating Algorithm 2 will be successful within O(log^2 n) blocks (each of size O(1)). The only thing that remains to show is that two neighbors u, v with different values k_u > k_v still converge to the larger value k_u in time O(k_u) (if neither of them crashes), even though v might not be allowed to increase its k because it does not know the time well enough.
This is true because v will learn the k_u parity of the time in O(k_u) rounds and then increase its k value accordingly. Besides this, no further modifications were made to the algorithm, and its correctness and running time therefore follow from Theorems 6.1 and 5.2.

References

[1] Y. Afek, N. Alon, O. Barad, E. Hornstein, N. Barkai, and Z. Bar-Joseph. A biological solution to a fundamental distributed computing problem. Science, 331(6014):183, 2011.
[2] N. Alon, L. Babai, and A. Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of Algorithms, 7(4):567–583, 1986.
[3] B. Awerbuch, A. V. Goldberg, M. Luby, and S. A. Plotkin. Network decomposition and locality in distributed computation. In Proc. of the 30th Symposium on Foundations of Computer Science (FOCS), pages 364–369, 1989.
[4] R. Bar-Yehuda, O. Goldreich, and A. Itai. On the time-complexity of broadcast in multi-hop radio networks: An exponential gap between determinism and randomization. J. of Computer and System Sciences, 45(1):104–126, 1992.
[5] B. Chlebus, L. Gasieniec, A. Gibbons, A. Pelc, and W. Rytter. Deterministic broadcasting in unknown radio networks. In Proc. 11th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 861–870, 2000.
[6] A. Cornejo and F. Kuhn. Deploying wireless networks with beeps. In Proc. of 24th Symposium on Distributed Computing (DISC), pages 148–162, 2010.
[7] J. Degesys, I. Rose, A. Patel, and R. Nagpal. Desync: self-organizing desynchronization and TDMA on wireless sensor networks. In Proc. 6th Conf. on Information Processing in Sensor Networks (IPSN), page 20, 2007.
[8] R. Flury and R. Wattenhofer. Slotted programming for sensor networks. Proc. 9th Conference on Information Processing in Sensor Networks (IPSN), 2010.
[9] D. Ilcinkas, D. Kowalski, and A. Pelc. Fast radio broadcasting with advice. Theoretical Computer Science, 411(14-15), 2010.
[10] T. Jurdzinski and G. Stachowiak. Probabilistic algorithms for the wakeup problem in single-hop radio networks. Proc. 13th International Symposium on Algorithms and Computation (ISAAC), 2002.
[11] M. Luby. A simple parallel algorithm for the maximal independent set problem. SIAM Journal on Computing, 15:1036–1053, 1986.
[12] Y. Métivier, J. M. Robson, N. Saheb-Djahromi, and A. Zemmari. An optimal bit complexity randomized distributed MIS algorithm. Proc. 16th Colloquium on Structural Information and Communication Complexity (SIROCCO), 2009.
[13] T. Moscibroda and R. Wattenhofer. Efficient computation of maximal independent sets in structured multi-hop radio networks. Proc. of 1st International Conference on Mobile Ad Hoc Sensor Systems (MASS), 2004.
[14] T. Moscibroda and R. Wattenhofer. Maximal Independent Sets in Radio Networks. Proc. 24th Symposium on Principles of Distributed Computing (PODC), 2005.
[15] A. Motskin, T. Roughgarden, P. Skraba, and L. Guibas. Lightweight coloring and desynchronization for networks. In Proc. 28th IEEE Conf. on Computer Communications (INFOCOM), 2009.
[16] A. Panconesi and A. Srinivasan. On the complexity of distributed network decomposition. Journal of Algorithms, 20(2):581–592, 1995.
[17] C. Scheideler, A. Richa, and P. Santi. An O(log n) dominating set protocol for wireless ad-hoc networks under the physical interference model. Proc. 9th Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 2008.
[18] J. Schneider and R. Wattenhofer. A Log-Star Maximal Independent Set Algorithm for Growth-Bounded Graphs. Proc. 28th Symposium on Principles of Distributed Computing (PODC), 2008.
[19] J. Schneider and R. Wattenhofer. What is the use of collision detection (in wireless networks)? In Proc. of 24th Symposium on Distributed Computing (DISC), pages 133–147, 2010.

Appendices

A Figures

Figure 2: Execution for Case 1 of the Lower Bound. (Diagram omitted: chains C_1, ..., C_{k−1} with nodes C_i(1), ..., C_i(k) and nodes U_1, ..., U_k; each C_i wakes at t = i and beeps at t = ℓ + i, while each U_i wakes at t = ℓ and listens for k rounds.)

Figure 3: Execution for Case 2 of the Lower Bound. (Diagram omitted: nodes C_1, ..., C_{m−1} and U_1, ..., U_k with their wake-up and beep times.)

B Proofs for Section 4

First, we show that with high probability two neighboring nodes will never join the MIS. Moreover, even in the low-probability event that two neighboring nodes join the MIS, almost surely one of them eventually becomes inactive.

Claim 1. With high probability, two neighboring nodes do not join the MIS. If two neighboring nodes are in the MIS, almost surely one of them eventually becomes inactive.

Proof. For two neighboring nodes to join the MIS, they would first have to go through an interval of c log N consecutive rounds where at every round they both beep with probability 1/8 and listen otherwise. Moreover, during these c log N rounds it should not be the case that one of them listens while the other beeps, and hence they have to choose the same action (beep or listen) at each of these rounds. The probability of this happening is less than (1 − 1/8)^{c log N} ≤ e^{−c log N / 8}, and thus for sufficiently large c (i.e., c ≥ 8) we have that with high probability two neighboring nodes do not join the MIS simultaneously. Moreover, assume two neighboring nodes are in the MIS simultaneously. Then at every round, one of them will leave the MIS with constant probability. Hence, the probability that they both remain in the MIS after k rounds is exponentially small in k, and it follows that eventually, almost surely, one of them becomes inactive.

Moreover, it is also easy to see that once a node becomes stable it stays stable indefinitely (or until a neighbor crashes).
This follows by construction, since stable MIS nodes will beep at least every 3 rounds, and therefore inactive neighbors will never start competing to be in the MIS. Hence, to prove the correctness of the algorithm we only need to show that eventually all nodes are either in the MIS or have a neighbor in the MIS.

For a fixed node u and a round t, we use b_u(t) to denote the beep probability of node u at round t. The beep potential of a set of nodes S ⊆ V at round t is defined as the sum of the beep probabilities of the nodes in S at round t, denoted by E_S(t) = ∑_{u∈S} b_u(t). Of particular interest is the beep potential of the neighborhood of a node; we use E_v(t) as shorthand for E_{N(v)}(t). The next lemma shows that if the beep potential of a particular set of nodes is larger than a (sufficiently large) constant at round t, then it was also larger than a constant during the interval [t − c log N, t]. Informally, this is true because the beep probability of every node increases slowly.

Lemma 3. Fix a set S ⊆ V. If E_S(t) ≥ λ at round t, then E_S(t′) ≥ λ/2 − 1/8 at every round t′ ∈ [t − c log N, t].

Proof. Let P ⊆ S be the subset of nodes that are at phase 1 at round t, and let Q = S \ P be the remaining nodes. Using this partition of nodes we split the probability mass E_S(t) as

E_S(t) = ∑_{u∈P} b_u(t) + ∑_{u∈Q} b_u(t) = E_P(t) + E_Q(t).    (1)

For the rest of the proof, let t′ be any round in the range [t − c log N, t]. Since the nodes in P are in phase 1 at round t, at round t′ the nodes in P are either in the inactive state or at phase 1. This implies that b_u(t′) ≤ 1/(4N) for u ∈ P, and since there are at most |P| ≤ |S| ≤ N nodes, we have E_P(t′) ≤ N/(4N) = 1/4. Similarly, the nodes in Q are in a phase i > 1 at round t, and therefore at round t′ the nodes in Q are in phase i − 1 ≥ 1.
This implies that b_u(t′) ≥ b_u(t)/2 for u ∈ Q, and hence E_Q(t′) ≥ E_Q(t)/2 = (E_S(t) − E_P(t))/2 ≥ λ/2 − 1/8. Finally, since E_S(t′) ≥ E_Q(t′), we have E_S(t′) ≥ λ/2 − 1/8.

Using the previous lemma, we show that with high probability nodes which are competing have neighborhoods with a "low" beep potential. Intuitively this is true because if a node had a neighborhood with a "high" beep potential, by the previous result it also had a high beep potential during the previous c log N rounds, and there is a good chance it would have been kicked out of the competition in an earlier round.

Lemma 4. With high probability, if node v is competing at round t, then E_v(t) < 1/2.

Proof. Fix a node v and a time t; we will show that if E_v(t) ≥ 1/2 then with high probability node v is not competing at time t. Let L_v(τ) be the event that node v listens at round τ and there is a neighbor u ∈ N(v) who beeps at round τ. First we estimate the probability of the event L_v(τ):

Pr[L_v(τ)] = (1 − b_v(τ)) (1 − ∏_{u∈N(v)} (1 − b_u(τ)))
           ≥ (1 − b_v(τ)) (1 − exp(−∑_{u∈N(v)} b_u(τ)))
           = (1 − b_v(τ)) (1 − exp(−E_v(τ))).

From Lemma 3 we have that if E_v(t) ≥ 1/2 then E_v(τ) ≥ 1/8 for τ ∈ [t − c log N, t]; together with the fact that b_v(τ) ≤ 1/2, this implies that Pr[L_v(τ)] ≥ (1/2)(1 − e^{−1/8}) ≈ 0.058 for τ ∈ [t − c log N, t]. Let C_v(t) be the event that node v is competing at round t. Observe that if L_v(τ) occurs for some τ ∈ [t − c log N, t], then node v stops competing for at least c log N rounds, and hence C_v(t) cannot occur. Therefore, the probability that node v is not competing at round t is at least:

Pr[¬C_v(t)] ≥ Pr[∃ τ ∈ [t − c log N, t] s.t. L_v(τ)]
            ≥ 1 − ∏_{τ=t−c log N}^{t} (1 − Pr[L_v(τ)])
            ≥ 1 − exp(−∑_{τ=t−c log N}^{t} Pr[L_v(τ)]).

Finally, since Pr[L_v(τ)] ≥ 0.058 for τ ∈ [t − c log N, t], for a sufficiently large c (i.e., c ≥ 18) node v is with high probability not competing at round t.

Next, we show that if a node hears a beep or produces a beep at a round where its neighborhood (and its neighbors' neighborhoods) has a "low" beep potential, then with constant probability either it joins the MIS or one of its neighbors joins the MIS.

Lemma 5. Assume that E_u(t) ≤ 1/2 for every u ∈ N(v) ∪ {v}. If node v beeps or hears a beep at round t, then with probability at least 1/e either v beeped alone, or one of its neighbors beeped alone.

Proof. We consider three events:

A_u: node u beeps at round t.
B_u: node u beeps alone at round t.
S: ∪_{w ∈ N(v) ∪ {v}} B_w.

Our aim is to show that the event S happens with constant probability. As a first step we show that Pr[B_u | A_u] is constant:

Pr[B_u | A_u] = Pr[¬ ∪_{w∈N(u)} A_w] = Pr[∩_{w∈N(u)} ¬A_w] = ∏_{w∈N(u)} (1 − b_w(t)) ≥ exp(−2 ∑_{w∈N(u)} b_w(t)) = e^{−2 E_u(t)}.

Moreover, since by assumption E_u(t) ≤ 1/2, we get Pr[B_u | A_u] ≥ 1/e. For simplicity we rename the set N(v) ∪ {v} to {1, ..., k}, where k = |N(v)| + 1. We define the following finite partition of the probability space:

ξ_1 = A_1
ξ_2 = A_2 ∩ ¬A_1
ξ_3 = A_3 ∩ ¬A_2 ∩ ¬A_1
...
ξ_k = A_k ∩ ∩_{i=1}^{k−1} ¬A_i

Recall that by assumption our probability space is conditioned on the event that "node v beeps or hears a beep at round t", or in other words, ∃ i ∈ [k] such that A_i has occurred. Moreover, observe that ∪_{i=1}^{k} ξ_i = ∪_{i=1}^{k} A_i, and thus Pr[∪_{i=1}^{k} ξ_i] = 1. Since the events ξ_1, ..., ξ_k are pairwise disjoint, by the law of total probability we have Pr[S] = ∑_{i=1}^{k} Pr[S | ξ_i] Pr[ξ_i].
Finally, since Pr[S | ξ_i] = Pr[B_i | ξ_i] ≥ Pr[B_i | A_i] ≥ 1/e, we have Pr[S] ≥ (1/e) ∑_{i=1}^{k} Pr[ξ_i] = 1/e.

Now we have the key ingredients necessary to prove that our algorithm terminates.

Lemma B.1. With high probability, after O(log^2 N log n) consecutive rounds without a neighbor crashing, a node is either in the MIS or has a neighbor in the MIS.

Proof. We say a node has an event at round t if it beeps or hears a beep at round t. First we claim that with high probability a node has an event every O(log^2 N) rounds. Consider a node that does not hear a beep within O(log^2 N) rounds (if it does hear a beep, the claim clearly holds). Then after O(log^2 N) rounds it will reach line 7 and beep (with probability 1), and the claim follows. From Lemma 4 we know that when a node decides to beep, with high probability the beep potential of its neighborhood is less than 1/2. We can use a union bound to say that when a node hears a beep, with high probability the beep was produced by a node with a beep potential less than 1/2. Therefore, we can apply Lemma 5 to say that with constant probability, every time a node has an event, either the node joins the MIS (if it was not in the MIS already) or it becomes covered by an MIS node. Therefore, with high probability, after O(log n) events a node is either part of the MIS or covered by an MIS node. Since with high probability there is an event every O(log^2 N) rounds, this implies that with high probability a node is either in the MIS or has a neighbor in the MIS after O(log^2 N log n) rounds. This completes the proof of Theorem 4.1 and also implies fast convergence.
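The concrete constants appearing in these proofs can be sanity-checked numerically (a small illustrative sketch in Python, not part of the paper):

```python
import math

# Claim 1 bounds the probability that two neighbors choose the same
# action for many rounds by (1 - 1/8)^k <= e^{-k/8}, an instance of the
# standard inequality 1 - x <= e^{-x} with x = 1/8.
same_action_bound = (1 - 1 / 8) ** 20
exp_bound = math.exp(-20 / 8)

# Lemma 4: with E_v(tau) >= 1/8 and b_v(tau) <= 1/2, the event L_v(tau)
# has probability at least (1/2) * (1 - e^{-1/8}), roughly 0.058.
p = 0.5 * (1 - math.exp(-1 / 8))
```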
