Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes


Authors: Usman A. Khan, Soummya Kar, Jose M. F. Moura

Usman A. Khan†, Soummya Kar†, and José M. F. Moura†
†Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213
{ukhan, moura}@ece.cmu.edu, soummyak@andrew.cmu.edu
Ph: (412) 268-7103, Fax: (412) 268-3890

Abstract

The paper develops DILOC, a \emph{distributive}, \emph{iterative} algorithm that locates $M$ sensors in $\mathbb{R}^m$, $m \geq 1$, with respect to a minimal number of $m+1$ anchors with known locations. The sensors exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there centralized knowledge about the sensors' locations. DILOC uses the barycentric coordinates of a sensor with respect to its neighbors, which are computed using the Cayley-Menger determinants, i.e., the determinants of matrices of inter-sensor distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the anchors. We introduce a stochastic approximation version extending DILOC to random environments, where the knowledge about the intercommunications among sensors and the inter-sensor distances is noisy and the communication links among neighbors fail at random times. We show a.s. convergence of the modified DILOC and characterize the error between the final estimates and the true values of the sensors' locations. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.

Keywords: Distributed iterative sensor localization; sensor networks; Cayley-Menger determinant; barycentric coordinates; absorbing Markov chain; stochastic approximation.

† All authors contributed equally to the paper.
This work was partially supported by the DARPA DSO Advanced Computing and Mathematics Program Integrated Sensing and Processing (ISP) Initiative under ARO grant # DAAD 19-02-1-0180, by NSF under grants # ECS-0225449 and # CNS-0428404, by ONR under grant # MURI-N000140710747, and by an IBM Faculty Award.

November 26, 2024 DRAFT

I. INTRODUCTION

Localization is a fundamental problem in sensor networks. Information about the location of the sensors is key to processing the sensors' measurements accurately. In applications where sensors are deployed randomly, they have no knowledge of their exact locations, and equipping each of them with a localization device like a GPS is expensive, not robust to jamming in military applications, and usually of limited use in indoor environments. Our goal is to develop a distributed (decentralized) localization algorithm in which the sensors find their locations under a limited set of assumptions and conditions. To be more specific, we are motivated by applications where $N = M + m + 1$ sensors in $\mathbb{R}^m$ (for example, $m = 2$ corresponds to sensors lying on a plane, while for $m = 3$ the sensors are in three-dimensional Euclidean space) are deployed in a large geographical region. We assume that the deployment region lies in the convex hull of a small, in fact minimal, number of $m+1$ anchors, $(m+1) \ll M$. The anchors know their locations. In such situations, the large geographical distances to the anchors make it highly impractical for the $M$ non-anchor sensors to communicate directly with the anchors. Further, computing the locations of the non-anchor sensors at a central station is not feasible when $M$ is large, as it requires a large communication effort and expensive large-scale computation, and adds latency and bottlenecks to the network operation.
These networks call for efficient distributed algorithms in which each sensor communicates directly only with a few neighboring nodes (either sensors or anchors) and a low-order computation is performed locally at the sensor at each iteration of the algorithm; see, for example, [1]. In this paper, we present a Distributed Iterative LOCalization algorithm (DILOC, pronounced \emph{die-lock}) that overcomes the above challenges in large-scale randomly deployed networks. In DILOC, the sensors start with an initial estimate of their locations, for example, a random guess. This random guess is arbitrary and does not need to place the sensors in the convex hull of the anchors. The sensors then update their locations, which we call the state of the network, by exchanging their state information only with a carefully chosen subset of $m+1$ of their neighbors. This state updating is a convex combination of the states of the neighboring nodes. The coefficients of the convex combination are the barycentric coordinates [2], [3] and are determined from the mutual inter-sensor distances using the Cayley-Menger determinants. At each sensor $l$, the neighborhood set contains $m+1$ sensors, for example, its closest $m+1$ sensors, such that sensor $l$ lies in the convex hull of these $m+1$ neighbors. These neighbors may or may not include the anchors. DILOC is distributed and iterative: each sensor updates locally its own state and then sends its state information to its neighbors; nowhere does DILOC need a fusion center or global communication. We prove almost sure (a.s.) convergence of DILOC in both deterministic and random network environments by showing that DILOC behaves as an absorbing Markov chain, where the anchors are the absorbing states. We prove convergence under a broad characterization of noise. In particular, we consider three types of randomness, acting simultaneously.
These model many practical random sensing and communication distortions, for example, when: (i) the inter-sensor distances are known up to random errors, which is common in cluttered environments and also in ad-hoc environments where cheap, low-resolution sensors are deployed; (ii) the communication links between the sensors fail at random times, which is mainly motivated by wireless digital communication, where packets may get dropped randomly at each iteration, particularly if the sensors are power limited or there are bandwidth or rate communication constraints in the network; and (iii) the communication between two sensors, when their communication link is active, is corrupted by noise. Although a sensor can only communicate directly with its neighbors (e.g., sensors within a small radius), we assume that, when the links are deterministic and never fail, the network graph is connected, i.e., there is a (possibly multihop) communication path between any arbitrary pair of sensors. In a random environment, inter-sensor communication links may not stay active all the time and are subject to random failures. Consequently, there may be iterations at which the network is not connected; in fact, the network may never be connected at any single iteration. We will show, under broad conditions, almost sure convergence of an extended version of DILOC that we term Distributed Localization in Random Environments (DLRE). DLRE employs stochastic approximation techniques using a decreasing weight sequence in the iterations. In the following, we contrast our work with the existing literature on sensor localization.

Brief review of the literature: The literature on localization algorithms may be broadly characterized into centralized and distributed algorithms.
Illustrative centralized localization algorithms include: maximum likelihood estimators that are formulated when the data is known to be described by a statistical model [4], [5]; multidimensional scaling (MDS) algorithms that formulate the localization problem as a least squares problem at a centralized location [6], [7]; work that exploits the geometry of the Euclidean space, like when locating a single robot using trilateration in $m = 3$-dimensional space, see [8], where a geometric interpretation is given to the traditional algebraic distance constraint equations; localization algorithms with imprecise distance information, see [9], where the authors exploit the geometric relations among the distances in the optimization procedure; for additional work, see, e.g., [10], [11]. Centralized algorithms are fine in small or tethered network environments; but in large untethered networks, they incur high communication cost, may not be scalable, depend on the availability and robustness of a central processor, and have a single point of failure.

Distributed localization algorithms can be characterized into two classes: multilateration and successive refinements. In multilateration algorithms [12], [13], [14], [15], each sensor estimates its range from the anchors and then calculates its location via multilateration [16]. The multilateration scheme requires a high density of anchors, which is a practical limitation in large sensor networks. Further, the location estimates obtained from multilateration schemes are subject to large errors, because the estimated sensor-anchor distance in large networks, where the anchors are far apart, is noisy. To overcome this problem, a high density of anchors is required. We, on the other hand, do not estimate distances to far-away nodes. Only local distances to nearby nodes, which have better accuracy, are estimated. This allows us to employ a minimal number of anchors.
A distributed multidimensional scaling algorithm is presented in [17]. Successive refinement algorithms that perform an iterative minimization of a cost function are presented in [18], [19], [20]. Reference [18] discusses an iterative scheme that assumes 5% of the nodes are anchors. Reference [20] discusses a Self-Positioning Algorithm (SPA) that provides GPS-free positioning and builds a relative coordinate system.

Another formulation to solve localization problems in a distributed fashion is the probabilistic approach. Nonparametric belief propagation on graphical models is used in [21]. Sequential Monte Carlo methods for mobile localization are considered in [22]. Particle filtering methods have been addressed in [23], where each sensor stores representative particles for its location that are weighted according to their likelihood. Reference [24] tracks and locates mobile robots using such probabilistic methods.

Completion of partially specified distance matrices is considered in [25], [26]. The approach is relevant when the (entire) partially specified distance matrix is available at a central location. The algorithms complete the unspecified distances under the geometrical constraints of the underlying network. The key point to note in our work is that our algorithm is \emph{distributed}. In particular, it does not require a centralized location to perform the computations. In comparison with these references, our algorithm, DILOC, is equivalent to solving, in a distributed and iterative fashion, a large system of linear algebraic equations where the system matrix is highly sparse. Our method exploits the structure of this matrix, which results from the topology of the communication graph of the network. We prove the a.s. convergence of the algorithm under broad noise conditions and characterize the bias and mean square error properties of the estimates of the sensor locations obtained by DILOC.
We divide the rest of the paper into two parts. The first part is concerned with the deterministic formulation of the localization problem and consists of Sections II–IV. Section II presents preliminaries and then DILOC, the distributed iterative localization algorithm, which is based on barycentric coordinates, generalized volumes, and Cayley-Menger determinants. Section III proves DILOC's convergence. Section IV presents DILOC-REL, DILOC with relaxation, and proves that it asymptotically reduces to the deterministic case without relaxation. The second part of the paper consists of Sections V–VI and considers distributed localization in random noisy environments. Section V characterizes the random noisy environments and the iterative algorithm for these conditions. Section VI proves the convergence of the distributed localization algorithm in the noisy case, relying on a result on the convergence of Markov processes. Finally, we present detailed numerical simulations in Section VII and conclude the paper in Section VIII. Appendices I–III provide a necessary test, the Cayley-Menger determinant, and background material on absorbing Markov chains.

II. DISTRIBUTED SENSOR LOCALIZATION: DILOC

In this section, we formally state DILOC (distributed iterative localization algorithm) in $m$-dimensional Euclidean space, $\mathbb{R}^m$ ($m \geq 1$), and introduce the necessary notation. Of course, for sensor localization, $m = 1$ (sensors on a straight line), $m = 2$ (plane), or $m = 3$ (3-d space). The generic case of $m > 3$ is of interest, for example, when the graph nodes represent $m$-dimensional feature vectors in classification problems, and the goal is still to find their global coordinates (with respect to a reference frame) in a distributed fashion. Since our results are general, we deal with $m$-dimensional 'localization,' but, for easier reference, the reader may consider $m = 2$ or $m = 3$.
To provide a quantitative assessment of some of the assumptions underlying DILOC, we will, when needed, assume that the deployment of the sensors in a given region follows a Poisson distribution. This random deployment is often assumed and is realistic; we use it to derive probabilistic bounds on the deployment density of the sensors and on the communication radius at each sensor; these can be straightforwardly related to the values of network field parameters (like transmitting power or signal-to-noise ratio) in order to implement DILOC. We discuss the computation/communication complexity of the algorithm and provide a simplistic, yet insightful, example that illustrates DILOC.

A. Preliminaries and Notation

Recall that the sensors are in $\mathbb{R}^m$. Let $\Theta$ be the set of sensors or nodes in the network, decomposed as

$\Theta = \kappa \cup \Omega$, (1)

where $\kappa$ is the set of anchors, i.e., the sensors whose locations are known, and $\Omega$ is the set of sensors whose locations are to be determined. By $|\cdot|$ we mean the cardinality of the set, and we let $|\Theta| = N$, $|\kappa| = m+1$, and $|\Omega| = M$. For a set $\Psi$ of sensors, we denote its convex hull by $C(\Psi)$.¹ For example, if $\Psi$ is a set of three non-collinear sensors in a plane, then $C(\Psi)$ is a triangle. We now define a few additional sets. Let $d_{lk}$ be the Euclidean distance between two sensors $l, k \in \Theta$. We associate with each sensor $l \in \Omega$ a positive real number $r_l > 0$ and two sets, $K(l, r_l)$ and $\Theta_l(r_l)$:

$K(l, r_l) = \{k \in \Theta : d_{lk} < r_l\}$, (2)

$\Theta_l(r_l) \subseteq K(l, r_l)$, $l \notin \Theta_l(r_l)$, $l \in C(\Theta_l(r_l))$, $|\Theta_l(r_l)| = m+1$, $A_{\Theta_l(r_l)} \neq 0$, (3)

where $A_{\Theta_l(r_l)}$ is the generalized volume (area in $m = 2$-d, volume in $m = 3$-d, and their generalization in higher dimensions) of $C(\Theta_l(r_l))$.
The set $K(l, r_l)$ groups the neighboring sensors of $l$ within a radius $r_l$; by (3), $\Theta_l(r_l)$, which we often write simply as $\Theta_l$ when $r_l$ is understood from the context, is a subset of $m+1$ of these sensors such that sensor $l$ lies in its convex hull but is not one of its elements. In Appendix I, we provide a procedure to test the convex hull inclusion of a sensor, i.e., for any sensor $l$, to determine whether it lies in the convex hull of $m+1$ nodes arbitrarily chosen from the set $K(l, r_l)$ of its neighbors. Finding such a set $\Theta_l$ is an important step in DILOC; we refer to this step as \emph{triangulation} and to $\Theta_l$ as a \emph{triangulation set}.

Let $c_l$ be the $m$-dimensional coordinates of a node $l \in \Theta$ with respect to a global coordinate system, written as the $m$-dimensional row vector

$c_l = [c_{l,1}, c_{l,2}, \ldots, c_{l,m}]$. (4)

The true (possibly unknown) location of sensor $l$ is represented by $c_l^*$. Because the distributed localization algorithm DILOC is iterative, $c_l(t)$ will represent the location vector, or state, of sensor $l$ at iteration $t$.

Barycentric coordinates. DILOC is expressed in terms of the barycentric coordinates, $a_{lk}$, of a sensor $l \in \Omega$ with respect to the nodes $k \in \Theta_l$, see [2], [3]. The barycentric coordinates, $a_{lk}$, are unique and are given by

$a_{lk} = \dfrac{A_{\{l\} \cup \Theta_l \setminus \{k\}}}{A_{\Theta_l}}$, (5)

with $A_{\Theta_l} \neq 0$, where '$\setminus$' denotes the set difference and $A_{\{l\} \cup \Theta_l \setminus \{k\}}$ is the generalized volume of the set $\{l\} \cup \Theta_l \setminus \{k\}$, i.e., the set $\Theta_l$ with sensor $l$ added and node $k$ removed. The barycentric coordinates can be computed from the inter-sensor distances $d_{lk}$ using the Cayley-Menger determinants, as shown in Appendix II.

¹The convex hull, $C(\Psi)$, of a set of points $\Psi$ is the minimal convex set containing $\Psi$.
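The computation in (5) can be sketched numerically. The snippet below (an illustrative sketch of ours, not the authors' implementation; the function names are our own) computes a generalized volume from a matrix of pairwise distances via the Cayley-Menger determinant and then evaluates the barycentric coordinates of a sensor $l$ with respect to a triangulation set $\Theta_l$, using inter-sensor distances only:

```python
import numpy as np
from math import factorial

def cm_volume(D):
    """Generalized volume of the simplex spanned by n points, computed from
    their n x n matrix of pairwise distances D (Cayley-Menger determinant)."""
    n = D.shape[0]
    M = np.ones((n + 1, n + 1))
    M[0, 0] = 0.0
    M[1:, 1:] = D ** 2
    vol2 = (-1) ** n * np.linalg.det(M) / (2 ** (n - 1) * factorial(n - 1) ** 2)
    return np.sqrt(max(vol2, 0.0))        # clip tiny negative round-off

def barycentric(D):
    """Barycentric coordinates a_lk of sensor l (row/column 0 of D) with
    respect to the m+1 nodes of its triangulation set (rows/columns 1..m+1),
    computed from distances only, as in (5)."""
    n = D.shape[0]                        # n = m + 2
    A_theta = cm_volume(D[1:, 1:])        # A_{Theta_l}
    a = np.empty(n - 1)
    for k in range(1, n):
        keep = [0] + [j for j in range(1, n) if j != k]   # {l} u Theta_l \ {k}
        a[k - 1] = cm_volume(D[np.ix_(keep, keep)]) / A_theta
    return a
```

For example, with $\Theta_l$ the triangle $(0,0), (1,0), (0,1)$ and $l = (0.25, 0.25)$, feeding the pairwise distances to `barycentric` returns $a = [0.5, 0.25, 0.25]$, which is non-negative and sums to 1, as required by (7).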
From (5) and the facts that the generalized volumes are non-negative and

$\sum_{k \in \Theta_l} A_{\Theta_l \cup \{l\} \setminus \{k\}} = A_{\Theta_l}$, $l \in C(\Theta_l)$, (6)

it follows that, for each $l \in \Omega$, $k \in \Theta_l$,

$a_{lk} \in [0, 1]$, $\sum_{k \in \Theta_l} a_{lk} = 1$. (7)

B. Distributed iterative localization algorithm

Before presenting DILOC, we state our assumptions explicitly.

(B0) Nondegeneracy. The generalized volume for $\kappa$, $A_\kappa \neq 0$.²

(B1) Anchor nodes. The anchors' locations are known, i.e., their state remains constant,

$c_q(t) = c_q^*$, $q \in \kappa$, $t \geq 0$. (8)

(B2) Convexity. All the sensors lie inside the convex hull of the anchors,

$C(\Omega) \subset C(\kappa)$. (9)

From (B2), the next lemma follows easily.

Lemma 1 (Triangulation). For every $l \in \Omega$, there exist $r_l > 0$ and $\Theta_l(r_l)$ with $|\Theta_l(r_l)| = m+1$ satisfying the properties in (3).

Proof: Clearly, by (B2), $\kappa$ satisfies (3), and the diameter of the network, $\max_{l,k} d_{lk}$ ($l \in \Omega$, $k \in \kappa$), can be taken as $r_l$.

Lemma 1 provides an existence proof, but for localization in wireless sensor networks it is important to triangulate a sensor not with the network diameter but with a small $r_l$. In fact, Section II-E discusses the probability of finding one such $\Theta_l$ with $r_l \ll \max_{l,k} d_{lk}$ ($l \in \Omega$, $k \in \kappa$). As a note, it is easily verified that every pair of sensors in $C(\Theta_l(r_l))$ is within a distance $R_l = 2 r_l$. Think of $m = 2$: then $|\Theta_l(r_l)| = 3$, $C(\Theta_l(r_l))$ is a triangle, and if $r_l$ is the maximum distance of any interior point $l$ to the vertices of the triangle, the maximum distance between the 4 points (the triangle vertices and $l$) is $2 r_l$. We complete stating the assumptions underlying DILOC.

²Nondegeneracy simply states that the anchors do not lie on a hyperplane. If this were the case, the localization problem would reduce to a lower-dimensional problem, i.e., $\mathbb{R}^{m-1}$ instead of $\mathbb{R}^m$.
For instance, if all the anchors in a network lie on a plane in $\mathbb{R}^3$, the localization problem can be thought of as localization in $\mathbb{R}^2$.

(B3) Inter-sensor distances. For each $l \in \Omega$, there exist at least one $r_l > 0$ and $\Theta_l(r_l) \subset K(l, r_l)$ satisfying (3), such that $l$ has a communication link to every $k \in \Theta_l(r_l)$ and knows the mutual distances among all the nodes in $\{l\} \cup \Theta_l(r_l)$.

We can now present DILOC. There are two steps: a set-up phase and then DILOC proper. We discuss each separately.

DILOC set-up: Triangulation. In the set-up phase, each sensor $l$ triangulates itself, so that by the end of this phase we have paired every $l \in \Omega$ with its corresponding $m+1$ neighbors in $\Theta_l$. Since triangulation should be performed with a small $r_l$, the following is a practical protocol for the set-up phase. A sensor starts with a small communication radius, $r_l$, chooses arbitrarily $m+1$ sensors within $r_l$, and tests whether it lies in the convex hull of these sensors using the procedure described in Appendix I. The sensor attempts this with all collections of $m+1$ sensors within $r_l$. If all attempts fail, the sensor adaptively increases its communication radius, $r_l$, in small increments and repeats the process. By (B2) and (9), success is eventually achieved, and each sensor is triangulated by finding $\Theta_l$ with properties (3) and (B3). If the sensors have directionality, a much simpler algorithm based on Lemma 2 below (see also the discussion following the Lemma) triangulates the sensor with high probability of success in one shot. To assess the practical implications of DILOC's set-up phase, Subsection II-E considers the realistic scenario where the sensors are deployed using a random Poisson distribution and computes, in terms of deployment parameters, the probability of finding at least one such $\Theta_l$ within a given radius, $r_l$.

DILOC iterations: state updating.
Once the set-up phase is complete, at time $t+1$ each sensor $l \in \Omega$ iteratively updates its state, i.e., its current location estimate, by a convex combination of the states at time $t$ of the nodes in $\Theta_l$. The anchors do not update their state, since they know their locations. The updating is explicitly given by

$c_l(t+1) = \begin{cases} c_l(t), & l \in \kappa, \\ \sum_{k \in \Theta_l} a_{lk}\, c_k(t), & l \in \Omega, \end{cases}$ (10)

where $a_{lk}$ are the barycentric coordinates of $l$ with respect to $k \in \Theta_l$. DILOC in (10) is distributed since (i) the update is implemented at each sensor independently; (ii) at sensor $l \in \Omega$, the update of the state, $c_l(t+1)$, is obtained from the states of its $m+1$ neighboring nodes in $\Theta_l$; and (iii) there is no central location and only local information is available.

DILOC: Matrix format. For compactness of notation, we write DILOC (10) in matrix form. Without loss of generality, we index the anchors in $\kappa$ as $1, 2, \ldots, m+1$ and the sensors in $\Omega$ as $m+2, m+3, \ldots, m+1+M = N$. We stack the (row vector) states, $c_l$, given in (4), for all the $N$ nodes in the network in the $N \times m$-dimensional coordinate matrix

$C = \left[ c_1^T, \ldots, c_N^T \right]^T$. (11)

The DILOC equations in (10) now become, in compact matrix form,

$C(t+1) = \Upsilon C(t)$. (12)

Fig. 1. Deployment corresponding to the example in Section II-C.

The structure of the $N \times N$ iteration matrix $\Upsilon$ becomes more apparent if we partition it as

$\Upsilon = \begin{bmatrix} I_{m+1} & 0 \\ B & P \end{bmatrix}$. (13)

The first $m+1$ rows correspond to the update equations for the anchors in $\kappa$. Since the states of the anchors are constant, see (B1) and (8), the first $m+1$ rows of $\Upsilon$ are zero except for a 1 at their $(q, q)$, $q \in \kappa = \{1, \ldots, m+1\}$, location. In other words, these first $m+1$ rows are the $(m+1) \times N$ block matrix $[\, I_{m+1} \mid 0 \,]$, i.e., the $(m+1) \times (m+1)$ identity matrix $I_{m+1}$ concatenated with the $(m+1) \times M$ zero matrix, $0$.
Each of the $M$ remaining rows of $\Upsilon$, indexed by $l \in \Omega = \{m+2, m+3, \ldots, N\}$, has only $m+1$ non-zero elements, corresponding to the nodes in the triangulation set, $\Theta_l$, of $l$; these non-zero elements are the barycentric coordinates, $a_{lk}$, of sensor $l$ with respect to the nodes in $\Theta_l$. The $M \times (m+1)$ block $B = \{b_{lj}\}$ is a zero matrix, except in those rows corresponding to the sensors in $\Omega$ that have a direct link to anchors. The $M \times M$ block $P = \{p_{lj}\}$ is also a sparse matrix, whose non-zero entries in row $l$ correspond to the non-anchor nodes in $\Theta_l$. The matrices $\Upsilon$, $P$, and $B$ have important properties that will be used to prove the convergence of the distributed iterative algorithm DILOC in Sections III and IV.

Remark. Equation (12) writes DILOC in matrix format for compactness; it should not be confused with a centralized algorithm: it is still a distributed, iterative algorithm. It is iterative, because each iteration through (12) simply updates the (matrix of) state(s) from $C(t)$ to $C(t+1)$. It is distributed, because each row equation updates the state of sensor $l$ from the states of the $m+1$ nodes in $\Theta_l$. In all, the iteration matrix, $\Upsilon$, is highly sparse, having exactly $(m+1) + M(m+1)$ non-zeros out of a possible $(m+1+M)^2$ elements.

C. Example

We consider an $m = 2$-dimensional Euclidean plane with $m+1 = 3$ anchors and $M = 4$ sensors, see Fig. 1. The nodes are indexed such that the anchor set is $\kappa = \{1, 2, 3\}$, $|\kappa| = m+1 = 3$, and the sensor set is $\Omega = \{4, 5, 6, 7\}$, $|\Omega| = M = 4$. The set of all the nodes in the network is, thus, $\Theta = \kappa \cup \Omega = \{1, \ldots, 7\}$, $|\Theta| = N = 7$. The triangulation sets, $\Theta_l$, $l \in \Omega$, identified by using the convex hull inclusion test, are $\Theta_4 = \{1, 5, 7\}$, $\Theta_5 = \{4, 6, 7\}$, $\Theta_6 = \{2, 5, 7\}$, $\Theta_7 = \{3, 4, 6\}$. It is clear that these triangulation sets satisfy the properties in (3).
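To make the example concrete, the sketch below builds this 7-node network with hypothetical coordinates of our own choosing (not the paper's Fig. 1), picked so that each sensor lies in the convex hull of its triangulation set. For brevity, the barycentric coordinates are obtained directly from the coordinates, by solving the linear system that defines them, rather than from distances; the iteration matrix $\Upsilon$ is then assembled and the update (10) is run from a random initial guess:

```python
import numpy as np

np.random.seed(0)
# Hypothetical coordinates (ours): anchors 1-3, sensors 4-7, 0-indexed below.
loc = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0],               # anchors 1-3
                [1.5, 1.2], [2.0, 1.5], [2.5, 1.2], [2.0, 2.2]])  # sensors 4-7
# Triangulation sets of the example: Theta_4={1,5,7}, Theta_5={4,6,7},
# Theta_6={2,5,7}, Theta_7={3,4,6} (0-indexed here).
Theta = {3: [0, 4, 6], 4: [3, 5, 6], 5: [1, 4, 6], 6: [2, 3, 5]}

N = len(loc)
Upsilon = np.zeros((N, N))
Upsilon[:3, :3] = np.eye(3)                 # anchors keep their states, (8)
for l, nbrs in Theta.items():
    # Barycentric coordinates of l w.r.t. Theta_l: solve [p_k; 1] a = [p_l; 1].
    A = np.vstack([loc[nbrs].T, np.ones(3)])
    a = np.linalg.solve(A, np.append(loc[l], 1.0))
    assert np.all(a >= 0) and abs(a.sum() - 1.0) < 1e-12   # l in C(Theta_l)
    Upsilon[l, nbrs] = a

C = loc.copy()
C[3:] = 10 * np.random.rand(4, 2)           # arbitrary initial guesses
for _ in range(500):                        # DILOC iterations (12)
    C = Upsilon @ C
assert np.allclose(C, loc, atol=1e-6)       # sensors recover true locations
```

The final assertion passes regardless of the initial guess, previewing the convergence result of Section III: the sensor states are driven to the true locations by repeated convex combinations anchored at $\kappa$.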
It can be verified that sensor 5 does not have any anchor in its triangulation set, $\Theta_5$, and every other sensor has only one anchor in its respective triangulation set. Since no sensor is able to communicate with $m+1 = 3$ anchors directly, no sensor can localize itself in a single step. At each sensor $l \in \Omega$, the barycentric coordinates, $a_{lk}$, $k \in \Theta_l$, are computed using the inter-sensor distances (among the nodes in the set $\{l\} \cup \Theta_l$) in the Cayley-Menger determinant. It is noteworthy that the only inter-sensor distances that need to be known at sensor $l$ to compute $a_{lk}$ are the distances among the $m+2$ nodes in the set $\{l\} \cup \Theta_l$. For instance, the distances in the Cayley-Menger determinant needed by sensor 5 to compute $a_{54}, a_{56}, a_{57}$ are the inter-sensor distances among the nodes in the set $\{5\} \cup \Theta_5$, i.e., $d_{54}, d_{56}, d_{57}, d_{46}, d_{47}, d_{67}$. Due to (3) and the relation $\Theta_5 \subseteq K(5, r_5)$, the nodes in $\{5\} \cup \Theta_5$ lie in a circle of radius $r_5$ centered at sensor 5 (shown in Fig. 1); on the other hand, no two sensors in the set $\{5\} \cup \Theta_5$ can be more than $R_5 = 2 r_5$ apart. This justifies the choices of $R_l$ and $r_l = R_l/2$ in (2). Once the barycentric coordinates, $a_{lk}$, are computed, DILOC for the sensors in $\Omega$ is

$c_l(t+1) = \sum_{k \in \Theta_l} a_{lk}\, c_k(t)$, $l \in \Omega = \{4, 5, 6, 7\}$. (14)

In particular, for sensor 5, we have

$c_5(t+1) = a_{54} c_4(t) + a_{56} c_6(t) + a_{57} c_7(t)$.

DILOC for the anchors is given by $c_q(t) = c_q^*$, $q \in \kappa$. We write DILOC for this example in the matrix format (12).
$\begin{bmatrix} c_1(t+1) \\ c_2(t+1) \\ c_3(t+1) \\ c_4(t+1) \\ c_5(t+1) \\ c_6(t+1) \\ c_7(t+1) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ a_{41} & 0 & 0 & 0 & a_{45} & 0 & a_{47} \\ 0 & 0 & 0 & a_{54} & 0 & a_{56} & a_{57} \\ 0 & a_{62} & 0 & 0 & a_{65} & 0 & a_{67} \\ 0 & 0 & a_{73} & a_{74} & 0 & a_{76} & 0 \end{bmatrix} \begin{bmatrix} c_1(t) \\ c_2(t) \\ c_3(t) \\ c_4(t) \\ c_5(t) \\ c_6(t) \\ c_7(t) \end{bmatrix}$, (15)

where the initial condition is $C(0) = [c_1^*, c_2^*, c_3^*, c_4^0, c_5^0, c_6^0, c_7^0]^T$, with $c_l^0$, $l \in \Omega$, being randomly chosen row vectors of appropriate dimensions. Note again that (15) is just a matrix representation of (14). DILOC is implemented in a distributed fashion as in (14). The matrix representation in (15) is included for compactness of notation and for the convergence analysis of the algorithm.

D. Complexity of DILOC

Once the barycentric coordinates are computed, each sensor³ performs the update in (10), which requires $m+1$ multiplications and $m$ additions. Assuming the computation complexity of the multiplication and addition operations to be the same, the computation complexity of DILOC is $2m+1$ operations, i.e., $O(1)$ per sensor, per iteration. Since each sensor exchanges information with the $m+1$ nodes in its triangulation set, the communication complexity of DILOC is $m+1$ communications, i.e., $O(1)$ per sensor, per iteration. Hence, both the computation and the communication complexity are $O(M)$ per iteration for a network of $M$ sensors. Note that, since the triangulation set-up phase, which identifies $\Theta_l(r_l)$ and computes the barycentric coordinates, as explained in Subsection II-B, is carried out only once at the start of DILOC, it requires a constant computation/communication overhead, and we do not account explicitly for it.

E.
Random Poisson Deployment

A common model in wireless sensor networks is the Poisson deployment [27], [28]. We illustrate it on the plane, $m = 2$; the discussion can be extended to arbitrary dimensions. For a Poisson deployment with density $\gamma > 0$, the mean number of sensors in a sector $Q$ with area $A_Q$ is $\gamma A_Q$. The numbers of sensors in any two disjoint sectors, $Q_1$ and $Q_2$, are independent random variables, and the locations of the sensors in a sector $Q$ are uniformly distributed. We now characterize the probability of triangulating a sensor $l$ in a region of radius $r_l$ centered at $l$. To this end, consider Fig. 2(a), which shows the triangulation region, a circle of radius $r_l$ centered at sensor $l$.

Fig. 2. (a) Sensor $l$ identifies its triangulation set, $\Theta_l$, in the circle of radius $r_l$ centered at sensor $l$. The circle is divided into four disjoint sectors with equal areas, $Q_1, \ldots, Q_4$. A sufficient condition for triangulation is that there exists at least one sensor in each of these four sectors. (b) Illustration of Lemma 2.

Let $Q_1, Q_2, Q_3, Q_4$ be four disjoint, equal-area sectors partitioning this circle, i.e., $A_{Q_i} = \pi r_l^2 / 4$, $i = 1, \ldots, 4$.

Lemma 2. A sufficient condition to triangulate a sensor $l \in \mathbb{R}^2$ is to have at least one sensor in each of the four disjoint equal-area sectors, $Q_i$, $i = 1, \ldots, 4$, which partition the circle of radius $r_l$ centered at $l$.

³By sensor, we usually mean a non-anchor node.

Proof: In Fig. 2(b), consider the triangulation of sensor $l$ located at the center of the circle; choose arbitrarily four sensors, $p_1, p_2, p_3, p_4$, one in each of the four sectors $Q_1, Q_2, Q_3, Q_4$. Denote the polygon with vertices $p_1, p_2, p_3, p_4$ by Pol$(p_1 p_2 p_3 p_4)$.
Consider the diagonal 4 p 1 — p 3 that partitions this polygon into two triangles 4 p 1 p 2 p 3 and 4 p 1 p 3 p 4 . Since l ∈ Pol ( p 1 p 2 p 3 p 4 ) and 4 p 1 p 2 p 3 ∪ 4 p 1 p 3 p 4 = Pol ( p 1 p 2 p 3 p 4 ) with 4 p 1 p 2 p 3 ∩ 4 p 1 p 3 p 4 = ∅ , then either l ∈ 4 p 1 p 2 p 3 or l ∈ 4 p 1 p 3 p 4 . The triangle in which l lies becomes the triangulating set, Θ l , of l . This completes the proof. The generalization to higher dimensions is straightforward; for instance, in R 3 , we hav e eight sectors and an arbitrary sensor l is triangulated with at least one sensor in each of these eight sectors (with equal volume) of a sphere of radius, r l , centered around l . Let Q i be the set of sensors in the sector Q i . It follows from the Poisson deployment that the probability of finding at least one sensor in a sector , Q i , of area A Q i is P  | Q i | > 0  =  1 − exp − γ A Q i  . (16) Since the distribution of the number of sensors in disjoint sectors is independent, the probability of finding at least one sensor in each of the sets, Q 1 , . . . , Q 4 , is the product P    Q i   > 0 , ∀ i  =  1 − exp − γ πr 2 l / 4  4 . (17) Thus, we have P Θ l = P (Θ l exists satisfying (3) ) ≥ P  | Q i | > 0 , ∀ i  . (18) This shows that, for a giv en deployment density , γ , we can choose, r l , appropriately , to guarantee the triangulation with arbitrarily high probability . Indeed, it follows from (18) that, for an arbitrary , 0 <  < 1 , to get the probability of triangulation to be greater than  , i.e., P Θ l ≥  , the radius r l should be R l ≥ 2   − 4ln  1 −  1 4  γ π   1 2 , (19) In alternativ e, if we are limited by the communication radius, R l , to guarantee P Θ l ≥  , we will need the deployment density , γ , to be larger than γ ≥ − 4 π ( R l / 2) 2 ln  1 −  1 4  . 
(20)

For example, if the sensors are deployed (following a Poisson distribution) with a density of $\gamma=1$ sensor/m$^2$, then we can compute from (19)–(20) that 99% of the sensors will be able to triangulate (identify $\Theta_l$) if they can communicate to at least a radius of $R_l = 5.52$ m. The remaining 1% of the sensors may need to communicate over a larger radius.

Footnote 4: If $\mathrm{Pol}(p_1p_2p_3p_4)$ is concave, we choose the diagonal that lies inside $\mathrm{Pol}(p_1p_2p_3p_4)$, i.e., $p_1$–$p_3$ in Fig. 2(b). If $\mathrm{Pol}(p_1p_2p_3p_4)$ is convex, we can choose either of the two diagonals and the proof follows.

It also follows from the above discussion that if the sensors are equipped with a sense of directionality (for example, if they all have ultrasound transducers), then each sensor has to find one neighbor in each of its four sectors, $Q_{l,1},\ldots,Q_{l,4}$ (in $m=2$-dimensional space). Once a neighbor is found in each sector, triangulation chooses 3 of these 4 neighbors to identify $\Theta_l$; the computational complexity in $m=2$-dimensional Euclidean space is $\binom{4}{3}=4$. Without directionality, finding $\Theta_l$ has expected computational complexity $\binom{\gamma\pi r_l^2}{3}$.

III. CONVERGENCE OF DILOC

In this section, we prove the convergence of DILOC to the exact locations of the sensors, $c^*_l$, $l\in\Omega$. To formally state the convergence result, we briefly provide some background and additional results, based on assumptions (B0)–(B3). The entries of the rows of the iteration matrix $\Upsilon$ in (12) are either zero or the barycentric coordinates, $a_{lk}$, which are positive and, by (7), sum to 1. This matrix can thus be interpreted as the transition matrix of a Markov chain. We now describe the localization problem and DILOC in terms of a Markov chain.
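Returning to the deployment example above, the bounds (19)–(20) can be checked numerically; the minimal sketch below reproduces the $R_l = 5.52$ m figure for $\gamma = 1$ and a 99% triangulation probability (function names are ours, not the paper's).

```python
import math

def min_radius(gamma, eps):
    # Communication radius from (19): smallest R_l such that the probability
    # of at least one sensor per quarter-sector of the circle is >= eps.
    return 2.0 * math.sqrt(-4.0 * math.log(1.0 - eps ** 0.25) / (gamma * math.pi))

def min_density(R, eps):
    # Deployment density from (20) for a fixed communication radius R_l.
    return -4.0 / (math.pi * (R / 2.0) ** 2) * math.log(1.0 - eps ** 0.25)

R = min_radius(gamma=1.0, eps=0.99)   # about 5.52 m, as quoted in the text
g = min_density(R, eps=0.99)          # recovers gamma = 1, since (20) inverts (19)
```

As expected, substituting the radius from (19) back into (20) recovers the original density, since the two bounds are inverses of each other.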
Let the assumptions (B0)–(B3) in Section II-B hold, and let the $N$ nodes in the sensor network correspond to the states of a Markov chain, where the $(ij)$-th element of the iteration matrix, $\Upsilon=\{\upsilon_{ij}\}$, defines the probability that the $i$-th state transitions to the $j$-th state. Because of the structure of $\Upsilon$, this chain is a very special Markov chain.

Absorbing Markov chain: Let an $N\times N$ matrix, $\Upsilon=\{\upsilon_{ij}\}$, denote the transition probability matrix of a Markov chain with $N$ states, $s_i$, $i=1,\ldots,N$. A state $s_i$ is called absorbing if the probability of leaving that state is 0 (i.e., $\upsilon_{ij}=0$ for $i\neq j$; in other words, $\upsilon_{ii}=1$). A Markov chain is absorbing if it has at least one absorbing state and if from every state it is possible to reach an absorbing state with non-zero probability (not necessarily in one step). In an absorbing Markov chain, a state that is not absorbing is called transient. For additional background, see, for example, [29].

Lemma 3: The Markov chain with transition probability matrix given by the iteration matrix, $\Upsilon$, is absorbing.

Proof: We prove by contradiction. Since $\upsilon_{ii}=1$, $i\in\kappa$, the anchors are the absorbing states of the Markov chain. Since $\upsilon_{ii}=0$, $i\in\Omega$, the (non-anchor) sensors are the transient states. Partition the transient states into two clusters, C1 and C2, such that each transient state in C1 can reach at least one of the absorbing states with non-zero probability, while, with probability 1, the transient states in C2 cannot reach an absorbing state. It follows that, with probability 1, the transient states in C2 cannot reach the transient states in C1 (in one or multiple steps); otherwise, they would reach an absorbing state with non-zero probability. Now consider the sensors that lie on the boundary of the convex hull, $\mathcal{C}(\mathrm{C2})$, i.e., the vertices of $\mathcal{C}(\mathrm{C2})$.
Because they are on the boundary, they cannot lie in the interior of the convex hull of any subset of sensors in C2 and, thus, cannot triangulate themselves, which contradicts Lemma 1 and assumption (B3). In order to triangulate, the boundary sensors of C2 must be able to reach other transient states and/or the absorbing states; that is, they have to reach the sensors in C1 to be able to triangulate themselves. Hence, the Markov chain is absorbing.

Consider the partitioning of the iteration matrix, $\Upsilon$, in (13). With the Markov chain interpretation, the $M\times(m+1)$ block $B=\{b_{lj}\}$ is the one-step transition probability matrix from the transient states to the absorbing states, and the $M\times M$ block $P=\{p_{lj}\}$ is the transition probability matrix among the transient states. With (13), $\Upsilon^{t+1}$ can be written as

$$\Upsilon^{t+1} = \begin{bmatrix} I_{m+1} & 0 \\ \sum_{k=0}^{t} P^{k}B & P^{t+1} \end{bmatrix},\qquad(21)$$

and, as $t$ goes to infinity,

$$\lim_{t\to\infty}\Upsilon^{t+1} = \begin{bmatrix} I_{m+1} & 0 \\ (I_M-P)^{-1}B & 0 \end{bmatrix},\qquad(22)$$

by Lemmas 5 and 6 in Appendix III. Lemmas 5 and 6 use the fact that if $P$ is the matrix associated with the transient states of an absorbing Markov chain, then $\rho(P)<1$, where $\rho(\cdot)$ is the spectral radius of a matrix. With (22), DILOC (10) converges to

$$\lim_{t\to\infty} C(t+1) = \begin{bmatrix} I_{m+1} & 0 \\ (I_M-P)^{-1}B & 0 \end{bmatrix} C(0).\qquad(23)$$

In (23), the coordinates of the $M$ sensors in $\Omega$ (the last $M$ rows of $C(t+1)$) are written as a function of the $m+1$ anchors in $\kappa$, whose coordinates are exactly known: the limiting values of the states of the $M$ sensors in $\Omega$ are the coordinates of the $m+1$ anchors in $\kappa$ weighted by $(I_M-P)^{-1}B$. To show that the limiting values are indeed the exact solution, we give the following lemma.

Lemma 4: Let $c^*_l$ be the exact coordinates of a node $l$.
Let the $M\times(m+1)$ matrix $D=\{d_{lj}\}$, $l\in\Omega$, $j\in\kappa$, be the matrix of the barycentric coordinates of the $M$ sensors (in $\Omega$) with respect to the $m+1$ anchors in $\kappa$, relating the coordinates of the sensors to the coordinates of the anchors by

$$c^*_l = \sum_{j\in\kappa} d_{lj}\,c^*_j,\qquad l\in\Omega.\qquad(24)$$

Then, we have

$$D = (I_M-P)^{-1}B.\qquad(25)$$

Proof: Clearly, $(I_M-P)$ is invertible since, by (95) in Appendix III, $\rho(P)<1$; this follows from the fact that the eigenvalues of the matrix $I_M-P$ are $1-\lambda_j$, where $\lambda_j$ is the $j$-th eigenvalue of $P$ and $|\lambda_j|<1$, $\forall j=1,\ldots,M$. It suffices to show that

$$D = B + PD,\qquad(26)$$

since (25) follows from (26). In (26), $D$ and $B$ are both $M\times(m+1)$ matrices, whereas $P$ is an $M\times M$ matrix whose non-zero elements are the barycentric coordinates for the sensors in $\Omega$. Hence, for the $lj$-th element in (26), we need to show that

$$d_{lj} = b_{lj} + \sum_{k\in\Omega} p_{lk}\,d_{kj}.\qquad(27)$$

For an arbitrary sensor $l\in\Omega$, its triangulation set, $\Theta_l$, may contain nodes from both $\kappa$ and $\Omega$. We denote by $\kappa_{\Theta_l}$ the elements of $\Theta_l$ that are anchors, and by $\Omega_{\Theta_l}$ the elements of $\Theta_l$ that are non-anchor sensors. The exact coordinates, $c^*_l$, of the sensor $l$ can be expressed as a convex combination of the coordinates of the neighbors in its triangulation set, $k\in\Theta_l$, using the barycentric coordinates, $a_{lk}$:

$$c^*_l = \sum_{k\in\Theta_l} a_{lk}c^*_k = \sum_{j\in\kappa_{\Theta_l}} a_{lj}c^*_j + \sum_{k\in\Omega_{\Theta_l}} a_{lk}c^*_k = \sum_{j\in\kappa} b_{lj}c^*_j + \sum_{k\in\Omega} p_{lk}c^*_k,\qquad(28)$$

since the scalars $a_{lj}$ are given by

$$a_{lj} = \begin{cases} b_{lj}, & j\in\kappa_{\Theta_l},\\ p_{lj}, & j\in\Omega_{\Theta_l},\\ 0, & j\notin\Theta_l.\end{cases}\qquad(29)$$

Equation (28) becomes, after writing each $k\in\Omega$ in terms of the $m+1$ anchors in $\kappa$ using (24),

$$c^*_l = \sum_{j\in\kappa} b_{lj}c^*_j + \sum_{k\in\Omega} p_{lk}\sum_{j\in\kappa} d_{kj}c^*_j = \sum_{j\in\kappa}\Big(b_{lj} + \sum_{k\in\Omega} p_{lk}d_{kj}\Big)c^*_j.$$
(30)

This is a representation of the coordinates of sensor $l$ in terms of the coordinates of the anchors, $j\in\kappa$. Since, for each $j\in\kappa$, the value inside the parentheses is non-negative, with sum over $j\in\kappa$ equal to 1, and since the barycentric representation is unique, we must have

$$d_{lj} = b_{lj} + \sum_{k\in\Omega} p_{lk}\,d_{kj},\qquad(31)$$

which, compared with (27), completes the proof.

We now recapitulate these results in the following theorem.

Theorem 1 (DILOC convergence): DILOC (10) converges to the exact coordinates, $c^*_l$, of the $M$ sensors (with unknown locations) in $\Omega$, i.e.,

$$\lim_{t\to\infty} c_l(t+1) = c^*_l,\qquad\forall\,l\in\Omega.\qquad(32)$$

Proof: The proof is a consequence of Lemmas 3 and 4.

Convergence rate: The convergence rate of the localization algorithm depends on the spectral radius of the matrix $P$, which by (95) in Appendix III is strictly less than one; this is a consequence of the fact that $P$ is a sub-stochastic matrix. Convergence is slow if the spectral radius, $\rho(P)$, is close to 1. This can happen if the matrix $B$ is close to the zero matrix, $0$, which is the case if and only if the sensors cluster in a region of very small area inside the convex hull of the anchors and the anchors themselves are very far apart. Indeed, in this case the barycentric coordinates corresponding to the elements of $\kappa_{\Theta_l}$, for the sensors with $\kappa_{\Theta_l}\neq\emptyset$ (see Lemma 4 for this notation), are close to zero. Since in practical wireless sensor applications the sensors are deployed in a geometric or a Poisson fashion (see Section II-E), the probability of this happening is arbitrarily close to 0.

IV. DILOC WITH RELAXATION

In this section, we modify DILOC to speed up its convergence rate and to obtain a form that is better suited to the study of distributed localization in random environments.
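Before turning to relaxation, the limit (23) and Lemma 4 can be illustrated on a tiny instance. The sketch below uses a hypothetical 1-D network (two anchors at 0 and 1, two sensors at 0.3 and 0.6; these coordinates and barycentric rows are ours, not from the paper): iterating DILOC from arbitrary guesses recovers the exact positions, and the rows of $(I_M-P)^{-1}B$ are exactly the matrix $D$ of anchor barycentric weights in (24).

```python
# 1-D network (m = 1): anchors at 0.0 and 1.0, unknown sensors at 0.3 and 0.6.
# Sensor 1 is triangulated by {anchor A, sensor 2}; sensor 2 by {sensor 1, anchor B}.
# In 1-D, the barycentric weights of l w.r.t. neighbors a < l < b are
# (b - l)/(b - a) on a and (l - a)/(b - a) on b.
u = [0.0, 1.0]                      # anchor coordinates (known)
B = [[0.5, 0.0], [0.0, 3.0 / 7.0]]  # sensor-to-anchor barycentric weights
P = [[0.0, 0.5], [4.0 / 7.0, 0.0]]  # sensor-to-sensor barycentric weights

# DILOC iteration (10), started from arbitrary guesses.
x = [0.9, 0.1]
for _ in range(200):
    x = [sum(B[l][j] * u[j] for j in range(2)) +
         sum(P[l][k] * x[k] for k in range(2)) for l in range(2)]

# Closed-form limit (23): D = (I - P)^{-1} B, written out for the 2x2 case.
det = (1 - P[0][0]) * (1 - P[1][1]) - P[0][1] * P[1][0]
inv = [[(1 - P[1][1]) / det, P[0][1] / det],
       [P[1][0] / det, (1 - P[0][0]) / det]]
D = [[sum(inv[l][k] * B[k][j] for k in range(2)) for j in range(2)]
     for l in range(2)]
limit = [sum(D[l][j] * u[j] for j in range(2)) for l in range(2)]
```

Both the iterates `x` and the closed-form `limit` equal the true positions (0.3, 0.6), and each row of `D` sums to 1, matching the uniqueness argument in Lemma 4.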
We observe that in DILOC (10), at time $t+1$, the expression for $c_l(t+1)$, $l\in\Omega$, does not involve the sensor's own coordinates, $c_l(t)$, at time $t$. We introduce a relaxation parameter, $\alpha\in(0,1]$, such that $c_l(t+1)$ is a convex combination of $c_l(t)$ and (10). We refer to this version as DILOC with relaxation, DILOC-REL. It is given by

$$c_l(t+1) = \begin{cases} (1-\alpha)c_l(t) + \alpha c_l(t) = c_l(t), & l\in\kappa,\\ (1-\alpha)c_l(t) + \alpha\sum_{k\in\Theta_l} a_{lk}c_k(t), & l\in\Omega.\end{cases}\qquad(33)$$

DILOC is the special case of DILOC-REL with $\alpha=1$. The matrix representation of DILOC-REL is

$$C(t+1) = H\,C(t),\qquad(34)$$

where $H = (1-\alpha)I_N + \alpha\Upsilon$ and $I_N$ is the $N\times N$ identity matrix. It is straightforward to show that the iteration matrix $H$ corresponds to the transition probability matrix of an absorbing Markov chain, where the anchors are the absorbing states and the sensors are the transient states. Let $J = (1-\alpha)I_M + \alpha P$; partitioning $H$ as

$$H = \begin{bmatrix} I_{m+1} & 0\\ \alpha B & J\end{bmatrix},\qquad(35)$$

we note that

$$H^{t+1} = \begin{bmatrix} I_{m+1} & 0\\ \sum_{k=0}^{t} J^{k}\alpha B & J^{t+1}\end{bmatrix},\qquad(36)$$

and, as $t\to\infty$,

$$\lim_{t\to\infty} H^{t+1} = \begin{bmatrix} I_{m+1} & 0\\ (I_M-J)^{-1}\alpha B & 0\end{bmatrix},\qquad(37)$$

from Lemmas 5 and 6. Lemmas 5 and 6 apply to $H$, since $H$ is non-negative and $\rho(J)<1$. To show $\rho(J)<1$, recall that $\rho(P)<1$ and that the eigenvalues of $J$ are $(1-\alpha)+\alpha\lambda_j$, where $\lambda_j$ are the eigenvalues of $P$. Therefore,

$$\rho(J) = \max_j\big|(1-\alpha)+\alpha\lambda_j\big| < 1.\qquad(38)$$

The following theorem establishes convergence of DILOC-REL.

Theorem 2: DILOC-REL (33) converges to the exact coordinates, $c^*_l$, of the $M$ sensors (with unknown locations) in $\Omega$, i.e.,

$$\lim_{t\to\infty} c_l(t+1) = c^*_l,\qquad\forall\,l\in\Omega.\qquad(39)$$

Proof: It suffices to show that

$$(I_M-J)^{-1}\alpha B = (I_M-P)^{-1}B.\qquad(40)$$

To this end, we note that

$$(I_M-J)^{-1}\alpha B = \big(I_M - ((1-\alpha)I_M + \alpha P)\big)^{-1}\alpha B = \big(\alpha(I_M-P)\big)^{-1}\alpha B,\qquad(41)$$

which reduces to (40) since the factors of $\alpha$ cancel.
The convergence of DILOC-REL thus follows from Lemma 4. As mentioned, the advantage of DILOC-REL is twofold: since $\rho(J)$ is a function of $\alpha$, we may optimize the convergence rate over $\alpha$; and DILOC-REL forms the basis for the distributed localization algorithm in random environments (DLRE) that we discuss in Sections V and VI.

V. DISTRIBUTED LOCALIZATION IN RANDOM ENVIRONMENTS: ASSUMPTIONS AND ALGORITHM

This and the next section study distributed iterative localization in more realistic scenarios, where the inter-sensor distances are known only up to errors, the communication links between sensors may fail, and, when alive, the communication among sensors is corrupted by noise. We write the update equations for DILOC-REL, (34), in terms of the columns, $c^j(t)$, $1\leq j\leq m$, of the coordinate matrix, $C(t)$. Column $j$ is the vector of the $j$-th coordinate estimates of all $N$ sensor locations (see Footnote 5). The updates are

$$c^j(t+1) = \big[(1-\alpha)I + \alpha\Upsilon\big]\,c^j(t),\qquad 1\leq j\leq m.\qquad(42)$$

We partition $c^j(t)$ as

$$c^j(t) = \begin{bmatrix} u^j\\ x^j(t)\end{bmatrix},\qquad(43)$$

where $u^j\in\mathbb{R}^{(m+1)\times 1}$ corresponds to the $j$-th coordinates of the anchors, which are known (hence we omit the time index, as they are not updated), and $x^j(t)\in\mathbb{R}^{M\times 1}$ corresponds to the estimates of the $j$-th coordinates of the non-anchor sensors, which are not known. Since the update is performed only on $x^j(t)$, (42) is equivalent to the following recursion:

$$x^j(t+1) = \big[(1-\alpha)I + \alpha P\big]\,x^j(t) + \alpha B u^j.$$

Footnote 5: In the sequel, we omit the subscripts from the identity matrix, $I$; its dimensions will be clear from the context.
(44)

Thus, to implement the iterations in (44) perfectly, the $l$-th sensor at iteration $t$ needs the corresponding rows of the matrices $P$ and $B$ and, in addition, the current estimates, $c^j_n(t)$, $n\in\Theta_l$ (the $j$-th component of the $n$-th sensor's coordinates), of its neighbors' positions. In practice, there are several limitations: (i) the computation of the matrices $P$ and $B$ requires inter-sensor distance measurements, which are not perfect in a random environment; (ii) the communication channels, or links, between neighboring sensors may fail at random times; and (iii) because of imperfect communication, each sensor receives only noisy versions of its neighbors' current states. Hence, in a random environment, we need to modify the iterations in (44) to account for the partial, imperfect information received by a sensor at each iteration. We start by stating our modeling assumptions formally.

(C1) Randomness in system matrices: At each iteration, each sensor needs the corresponding rows of the system matrices $B$ and $P$, which in turn depend on the inter-sensor distance measurements, which may be random. Since a single measurement of the inter-sensor distances may carry large random noise, we assume the sensors re-estimate the required distances at each iteration of the algorithm (note that this leads to an implicit averaging of the unbiased noise effects, as will be demonstrated later). In other words, at each iteration, the $l$-th sensor can only get estimates, $\widehat B_l(t)$ and $\widehat P_l(t)$, of the corresponding rows of the $B$ and $P$ matrices, respectively. In the generic imperfect communication case, we have

$$\widehat B(t) = B + S_B + \widetilde S_B(t),\qquad(45)$$

where $\{\widetilde S_B(t)\}_{t\geq 0}$ is an independent sequence of random matrices with

$$\mathbb{E}\big[\widetilde S_B(t)\big] = 0,\ \forall t,\qquad \sup_{t\geq 0}\mathbb{E}\big[\|\widetilde S_B(t)\|^2\big] = k_B < \infty.\qquad(46)$$

Here, $S_B$ is the mean measurement error.
Similarly, for $P$, we have

$$\widehat P(t) = P + S_P + \widetilde S_P(t),\qquad(47)$$

where $\{\widetilde S_P(t)\}_{t\geq 0}$ is an independent sequence of random matrices with

$$\mathbb{E}\big[\widetilde S_P(t)\big] = 0,\ \forall t,\qquad \sup_{t\geq 0}\mathbb{E}\big[\|\widetilde S_P(t)\|^2\big] = k_P < \infty.\qquad(48)$$

Likewise, $S_P$ is the mean measurement error. Note that this way of writing $\widehat B(t)$, $\widehat P(t)$ does not require the noise model to be additive; it only says that any random object may be written as the sum of a deterministic mean part and the corresponding zero-mean random part. The moment assumptions in (46) and (48) are very weak and, in particular, are satisfied if the sequences $\{\widehat B(t)\}_{t\geq 0}$ and $\{\widehat P(t)\}_{t\geq 0}$ are i.i.d. with finite variance.

(C2) Random link failure: We assume that the inter-sensor communication links fail randomly. This happens, for example, in wireless sensor network applications, where data packets are occasionally dropped. To this end, if the sensors $l$ and $n$ share a communication link (i.e., $n\in\Theta_l$), we assume that the link fails with probability $1-q_{ln}$ at each iteration, where $0<q_{ln}\leq 1$. We associate with each such potential network link a binary random variable, $e_{ln}(t)$, where $e_{ln}(t)=1$ indicates that the corresponding link is active at time $t$, whereas $e_{ln}(t)=0$ indicates a link failure. Note that $\mathbb{E}[e_{ln}] = q_{ln}$.

(C3) Additive channel noise: Let $\{v^j_{ln}(t)\}_{l,n,j,t}$ be a family of independent zero-mean random variables such that

$$\sup_{l,n,j,t}\mathbb{E}\big[v^j_{ln}(t)\big]^2 = k_v < \infty.\qquad(49)$$

We assume that, at the $t$-th iteration, if the network link $(l,n)$ is active, sensor $l$ receives only a corrupted version, $y^j_{ln}(t)$, of sensor $n$'s state, $c^j_n(t)$, given by

$$y^j_{ln}(t) = c^j_n(t) + v^j_{ln}(t).\qquad(50)$$

This models the channel noise. The moment assumption in (49) is very weak and holds, in particular, if the channel noise is i.i.d.
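In the DLRE recursion below, each received term is divided by the link-activation probability $q_{ln}$; this makes the randomly failing link unbiased, since $\mathbb{E}[e_{ln}(t)/q_{ln}] = 1$. A minimal simulation of that importance weight (the value $q=0.9$ is illustrative only):

```python
import random

random.seed(0)
q = 0.9          # link-activation probability, as in (C2)
n = 200_000

# e(t)/q with e(t) ~ Bernoulli(q): the weight applied to every received term
# in DLRE (51). Its sample mean is close to 1, so the update is unbiased.
weights = [(1.0 if random.random() < q else 0.0) / q for _ in range(n)]
mean_weight = sum(weights) / n
```

The empirical mean of the weights settles near 1, which is why link failures contribute only zero-mean perturbations (the matrices $\widetilde B(t)$, $\widetilde P(t)$ in (52)) rather than bias.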
(C4) Independence: We assume that the sequences $\{\widetilde S_B(t), \widetilde S_P(t)\}_{t\geq 0}$, $\{e_{ln}(t)\}_{l,n,t}$, and $\{v^j_{ln}(t)\}_{l,n,j,t}$ are mutually independent.

These assumptions put no restrictions on the distributional form of the random errors, only that they obey some weak moment conditions. Clearly, under the random environment model detailed in assumptions (C1)–(C4), the algorithm in (44) is not appropriate to update the sensors' states. We now consider the following state update recursion for the random environment case.

Distributed Localization in Random Environments Algorithm (DLRE):

$$x^j_l(t+1) = (1-\alpha(t))\,x^j_l(t) + \alpha(t)\Bigg[\sum_{n\in\kappa\cap\Theta_l}\frac{e_{ln}(t)\,\widehat B_{ln}(t)}{q_{ln}}\big(u^j_n + v^j_{ln}(t)\big)\Bigg] + \alpha(t)\Bigg[\sum_{n\in\Omega\cap\Theta_l}\frac{e_{ln}(t)\,\widehat P_{ln}(t)}{q_{ln}}\big(x^j_n(t) + v^j_{ln}(t)\big)\Bigg],\quad l\in\Omega,\ 1\leq j\leq m.\qquad(51)$$

In contrast with DILOC-REL, the gain $\alpha(t)$ in (51) is now time-varying; it will become clear why when we study the convergence of this algorithm. To write DLRE in a compact form, we introduce some notation. Define the random matrices $\widetilde B(t)\in\mathbb{R}^{M\times(m+1)}$ and $\widetilde P(t)\in\mathbb{R}^{M\times M}$ with $ln$-th entries given by

$$\widetilde B_{ln}(t) = \widehat B_{ln}(t)\Big(\frac{e_{ln}(t)}{q_{ln}}-1\Big),\qquad \widetilde P_{ln}(t) = \widehat P_{ln}(t)\Big(\frac{e_{ln}(t)}{q_{ln}}-1\Big).\qquad(52)$$

Clearly, by (C2) and (C4), the matrices $\widetilde B(t)$ and $\widetilde P(t)$ are zero mean, since $\mathbb{E}[e_{ln}] = q_{ln}$. Also, by the bounded moment assumptions in (C1), we have

$$\sup_{t\geq 0}\mathbb{E}\big[\|\widetilde B(t)\|^2\big] = \widetilde k_B < \infty,\qquad \sup_{t\geq 0}\mathbb{E}\big[\|\widetilde P(t)\|^2\big] = \widetilde k_P < \infty.$$
(53)

Hence, the iterations in (51) can be written in vector form as

$$x^j(t+1) = (1-\alpha(t))\,x^j(t) + \alpha(t)\Big[\big(\widehat P(t)+\widetilde P(t)\big)x^j(t) + \big(\widehat B(t)+\widetilde B(t)\big)u^j + \eta^j(t)\Big],\qquad(54)$$

where the $l$-th element of the vector $\eta^j(t)$ is given by

$$\eta^j_l(t) = \sum_{n\neq l}\big(\widehat P_{ln}(t)+\widetilde P_{ln}(t)\big)v^j_{ln}(t) + \sum_{n\neq l}\big(\widehat B_{ln}(t)+\widetilde B_{ln}(t)\big)v^j_{ln}(t).\qquad(55)$$

By (C1)–(C4), the sequence $\{\eta^j(t)\}_{t\geq 0}$ is zero mean and independent, with

$$\sup_t\mathbb{E}\big[\|\eta^j(t)\|^2\big] = k_\eta < \infty.\qquad(56)$$

From (C1), the iteration sequence in (54) can be written as

$$x^j(t+1) = x^j(t) - \alpha(t)\Big[(I-P-S_P)\,x^j(t) - (B+S_B)\,u^j - \big(\widetilde S_P(t)+\widetilde P(t)\big)x^j(t) - \big(\widetilde S_B(t)+\widetilde B(t)\big)u^j - \eta^j(t)\Big].\qquad(57)$$

We now make two additional design assumptions.

(D1) Persistence condition: The weight sequence satisfies

$$\alpha(t) > 0,\qquad \sum_{t\geq 0}\alpha(t) = \infty,\qquad \sum_{t\geq 0}\alpha^2(t) < \infty.\qquad(58)$$

This condition, commonly assumed in the adaptive control and adaptive signal processing literature, requires that the weights decay to zero, but not too fast.

(D2) Low error bias: We assume that

$$\rho(P+S_P) < 1.\qquad(59)$$

Clearly, we have $\rho(P)<1$; thus, if the non-zero bias, $S_P$, in the system matrix (resulting from incorrect distance computation) is small, (59) is justified. We note that this condition ensures that the matrix $(I-P-S_P)$ is invertible.

In the following sections, we prove that the DLRE algorithm, under assumptions (C1)–(C4) and (D1)–(D2), leads to a.s. convergence of the state vector sequence, $\{x^j(t)\}_{t\geq 0}$, to a deterministic vector for each $j$, which may differ from the exact sensor locations because of the random errors in the iterations. We characterize this error and show that it depends on the non-zero biases, $S_B$ and $S_P$, in the system matrix computations, and vanishes as $\|S_B\|\to 0$ and $\|S_P\|\to 0$.

VI. DLRE: A.S.
CONVERGENCE

We show the almost sure convergence of DLRE under the random environment model presented in Section V.

Theorem 3 (DLRE a.s. convergence): Let $\{x^j(t)\}_{t\geq 0}$, $1\leq j\leq m$, be the state sequence generated by the iterations (57) under assumptions (C1)–(C4) and (D1)–(D2). Then,

$$\mathbb{P}\Big[\lim_{t\to\infty} x^j(t) = (I-P-S_P)^{-1}(B+S_B)\,u^j,\ \forall j\Big] = 1.\qquad(60)$$

The convergence analysis of the DLRE algorithm is based on the sample path properties of controlled Markov processes, which have also been used recently to prove convergence of distributed iterative stochastic algorithms in sensor networks, e.g., [30], [31]. The proof relies on the following result from [32], which we state here as a theorem.

Theorem 4: Consider the recursive procedure

$$\mathbf{x}(t+1) = \mathbf{x}(t) + \alpha(t)\big[R(\mathbf{x}(t)) + \Gamma(t+1,\mathbf{x}(t),\omega)\big],\qquad(61)$$

where $\mathbf{x}$, $R$, $\Gamma$ are vectors in $\mathbb{R}^{M\times 1}$. There is an underlying common probability space $(\Xi,\mathcal{F},\mathbb{P})$, and $\omega$ is the canonical element of the probability space $\Xi$. Assume that the following conditions are satisfied (see Footnote 6):

1) The vector function $R(\mathbf{x})$ is Borel measurable and $\Gamma(t,\mathbf{x},\omega)$ is $\mathcal{B}^M\otimes\mathcal{F}$ measurable for every $t$.

2) There exists a filtration $\{\mathcal{F}_t\}_{t\geq 0}$ of $\mathcal{F}$ such that the family of random vectors $\Gamma(t,\mathbf{x},\omega)$ is $\mathcal{F}_t$ measurable, zero-mean, and independent of $\mathcal{F}_{t-1}$.

3) There exists a function $V(\mathbf{x})\in\mathcal{C}^2$ with bounded second-order partial derivatives satisfying

$$V(\mathbf{x}_0)=0,\qquad V(\mathbf{x})>0,\ \mathbf{x}\neq\mathbf{x}_0,\qquad(62)$$

$$\sup_{\|\mathbf{x}-\mathbf{x}_0\|>\epsilon}\,\big\langle R(\mathbf{x}),\,V_{\mathbf{x}}(\mathbf{x})\big\rangle < 0,\qquad\forall\,\epsilon>0.\qquad(63)$$

4) There exist constants $k_1,k_2>0$ such that

$$\|R(\mathbf{x})\|^2 + \mathbb{E}\big[\|\Gamma(t,\mathbf{x},\omega)\|^2\big] \leq k_1\big(1+V(\mathbf{x})\big) - k_2\,\big\langle R(\mathbf{x}),\,V_{\mathbf{x}}(\mathbf{x})\big\rangle.\qquad(64)$$

5) The weight sequence $\{\alpha(t)\}_{t\geq 0}$ satisfies the persistence condition (D1) given by (58).

Then the Markov process $\{\mathbf{x}(t)\}_{t\geq 0}$ converges a.s. to $\mathbf{x}_0$.
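The limit in (60) can be checked numerically before working through the proof. The sketch below simulates DLRE (51) on a hypothetical 1-D network (two anchors at 0 and 1, two sensors at 0.3 and 0.6, with exact barycentric rows, i.e., $S_B = S_P = 0$), with Bernoulli link failures, Gaussian channel noise, and persistence weights $\alpha(t) = 1/(t+1)$; all numerical values here are illustrative, not from the paper.

```python
import random

random.seed(1)
u = [0.0, 1.0]                      # anchor coordinates
B = [[0.5, 0.0], [0.0, 3.0 / 7.0]]  # exact sensor-to-anchor rows (S_B = 0)
P = [[0.0, 0.5], [4.0 / 7.0, 0.0]]  # exact sensor-to-sensor rows (S_P = 0)
q, sigma = 0.9, 0.05                # link-activation prob. and channel-noise std

x = [0.9, 0.1]                      # arbitrary initial guesses
for t in range(100_000):
    a = 1.0 / (t + 1)               # weights satisfying (D1)
    nxt = []
    for l in range(2):
        s = (1.0 - a) * x[l]
        for n in range(2):
            if B[l][n] > 0:         # anchor neighbor: failing link + noise, per (51)
                e = 1.0 if random.random() < q else 0.0
                s += a * (e * B[l][n] / q) * (u[n] + random.gauss(0.0, sigma))
            if P[l][n] > 0:         # sensor neighbor
                e = 1.0 if random.random() < q else 0.0
                s += a * (e * P[l][n] / q) * (x[n] + random.gauss(0.0, sigma))
        nxt.append(s)
    x = nxt
# With zero bias, the a.s. limit (60) equals the exact positions (0.3, 0.6).
```

Despite 10% link failures and channel noise at every step, the iterates settle near the true positions, since with $S_B = S_P = 0$ the limit in (60) coincides with the exact locations.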
Proof: The proof follows from Theorem 4.4.4 in [32] and is omitted due to space constraints.

We now return to the proof of Theorem 3.

Footnote 6: In the sequel, $\mathcal{B}^M$ denotes the Borel sigma algebra in $\mathbb{R}^{M\times 1}$. The space of twice continuously differentiable functions is denoted by $\mathcal{C}^2$, while $V_{\mathbf{x}}(\mathbf{x})$ denotes the gradient $\partial V(\mathbf{x})/\partial\mathbf{x}$.

Proof of Theorem 3: We show that, under the assumptions, the algorithm in (57) falls under the purview of Theorem 4. To this end, consider the filtration $\{\mathcal{F}_t\}_{t\geq 0}$, where

$$\mathcal{F}_t = \sigma\Big(x^j(0),\ \widetilde S_P(s),\ \widetilde P(s),\ \widetilde S_B(s),\ \widetilde B(s),\ \eta^j(s)\,:\,0\leq s<t\Big).\qquad(65)$$

Define the vector $d^*$ as

$$d^* = (I-P-S_P)^{-1}(B+S_B)\,u^j.\qquad(66)$$

Equation (57) can be written as

$$x^j(t+1) = x^j(t) - \alpha(t)\Big[(I-P-S_P)\big(x^j(t)-d^*\big) - \big(\widetilde S_P(t)+\widetilde P(t)\big)x^j(t) - \big(\widetilde S_B(t)+\widetilde B(t)\big)u^j - \eta^j(t)\Big].\qquad(67)$$

In the notation of Theorem 4, (67) is given by

$$x^j(t+1) = x^j(t) + \alpha(t)\big[R(x^j(t)) + \Gamma(t+1,x^j(t),\omega)\big],\qquad(68)$$

where

$$R(\mathbf{x}) = -(I-P-S_P)\big(\mathbf{x}-d^*\big),\qquad(69)$$

and

$$\Gamma(t+1,\mathbf{x},\omega) = \big(\widetilde S_P(t)+\widetilde P(t)\big)\mathbf{x} + \big(\widetilde S_B(t)+\widetilde B(t)\big)u^j + \eta^j(t).\qquad(70)$$

This definition satisfies conditions 1) and 2) of Theorem 4. We now show the existence of a stochastic potential function $V(\cdot)$ satisfying the remaining conditions of Theorem 4. To this end, define

$$V(\mathbf{x}) = \|\mathbf{x}-d^*\|^2.\qquad(71)$$

Clearly, $V(\mathbf{x})\in\mathcal{C}^2$ with bounded second-order partial derivatives, and

$$V(d^*)=0,\qquad V(\mathbf{x})>0,\ \mathbf{x}\neq d^*.$$
(72)

Also, we note that, for $\epsilon>0$,

$$\begin{aligned}
\sup_{\|\mathbf{x}-d^*\|>\epsilon}\big\langle R(\mathbf{x}),V_{\mathbf{x}}(\mathbf{x})\big\rangle
&= \sup_{\|\mathbf{x}-d^*\|>\epsilon} -2\,(\mathbf{x}-d^*)^T(I-P-S_P)(\mathbf{x}-d^*)\\
&= \sup_{\|\mathbf{x}-d^*\|>\epsilon}\Big[2\,(\mathbf{x}-d^*)^T(P+S_P)(\mathbf{x}-d^*) - 2\|\mathbf{x}-d^*\|^2\Big]\\
&\leq \sup_{\|\mathbf{x}-d^*\|>\epsilon}\Big[2\,\big|(\mathbf{x}-d^*)^T(P+S_P)(\mathbf{x}-d^*)\big| - 2\|\mathbf{x}-d^*\|^2\Big]\\
&\leq \sup_{\|\mathbf{x}-d^*\|>\epsilon}\Big[2\,\|\mathbf{x}-d^*\|\,\rho(P+S_P)\,\|\mathbf{x}-d^*\| - 2\|\mathbf{x}-d^*\|^2\Big]\\
&= \sup_{\|\mathbf{x}-d^*\|>\epsilon} -2\,\big(1-\rho(P+S_P)\big)\|\mathbf{x}-d^*\|^2\\
&\leq -2\epsilon^2\big(1-\rho(P+S_P)\big) < 0,
\end{aligned}\qquad(73)$$

where the last step follows from (D2). Thus, condition 3) of Theorem 4 is satisfied. To verify condition 4), note that

$$\|R(\mathbf{x})\|^2 = (\mathbf{x}-d^*)^T(I-P-S_P)^T(I-P-S_P)(\mathbf{x}-d^*) \leq \big\|(I-P-S_P)^T(I-P-S_P)\big\|\,\|\mathbf{x}-d^*\|^2 = k_1\|\mathbf{x}-d^*\|^2 = k_1 V(\mathbf{x}),\qquad(74)$$

where $k_1>0$ is a constant. Finally, by assumptions (C1)–(C4), we have

$$\begin{aligned}
\mathbb{E}\big[\|\Gamma(t,\mathbf{x},\omega)\|^2\big]
&= \mathbf{x}^T\,\mathbb{E}\big[\widetilde S_P^T(t-1)\widetilde S_P(t-1) + \widetilde P^T(t-1)\widetilde P(t-1)\big]\,\mathbf{x}
 + {u^j}^T\,\mathbb{E}\big[\widetilde S_B^T(t-1)\widetilde S_B(t-1) + \widetilde B^T(t-1)\widetilde B(t-1)\big]\,u^j\\
&\quad + \mathbb{E}\big[\|\eta^j(t-1)\|^2\big] + 2\,\mathbf{x}^T\,\mathbb{E}\big[\widetilde S_P^T(t-1)\widetilde S_B(t-1)\big]\,u^j\\
&\leq \big(k_P+\widetilde k_P\big)\|\mathbf{x}\|^2 + \big(k_B+\widetilde k_B\big)\|u^j\|^2 + k_\eta
 + 2\,\mathbb{E}\big[\|\widetilde S_P(t-1)\|^2\big]^{1/2}\,\mathbb{E}\big[\|\widetilde S_B(t-1)\|^2\big]^{1/2}\,\|\mathbf{x}\|\,\|u^j\|\\
&\leq \big(k_P+\widetilde k_P\big)\|\mathbf{x}\|^2 + \big(k_B+\widetilde k_B\big)\|u^j\|^2 + k_\eta + 2\sqrt{k_P k_B}\,\|\mathbf{x}\|\,\|u^j\|.
\end{aligned}\qquad(75)$$

The cross terms dropped in the second step of (75) have zero mean by the independence assumption (C4). For example, consider the term $\mathbb{E}\big[\widetilde S_P^T(t-1)\widetilde P(t-1)\big]$. It follows from eqns.
(47) and (52) that the $ln$-th entry of the matrix $\widetilde S_P^T(t-1)\widetilde P(t-1)$ is given by

$$\begin{aligned}
\big[\widetilde S_P^T(t-1)\widetilde P(t-1)\big]_{ln}
&= \sum_r \big[\widetilde S_P^T(t-1)\big]_{lr}\big[\widetilde P(t-1)\big]_{rn}
 = \sum_r \big[\widetilde S_P(t-1)\big]_{rl}\big[\widehat P(t-1)\big]_{rn}\Big(\frac{e_{rn}(t)}{q_{rn}}-1\Big)\\
&= \sum_r \big[\widetilde S_P(t-1)\big]_{rl}[P]_{rn}\Big(\frac{e_{rn}(t)}{q_{rn}}-1\Big)
 + \sum_r \big[\widetilde S_P(t-1)\big]_{rl}[S_P]_{rn}\Big(\frac{e_{rn}(t)}{q_{rn}}-1\Big)\\
&\quad + \sum_r \big[\widetilde S_P(t-1)\big]_{rl}\big[\widetilde S_P(t-1)\big]_{rn}\Big(\frac{e_{rn}(t)}{q_{rn}}-1\Big).
\end{aligned}\qquad(76)$$

From the independence and zero-mean assumptions, we have, $\forall r$:

$$\mathbb{E}\Big[\big[\widetilde S_P(t-1)\big]_{rl}[P]_{rn}\Big(\tfrac{e_{rn}(t)}{q_{rn}}-1\Big)\Big] = [P]_{rn}\,\mathbb{E}\big[[\widetilde S_P(t-1)]_{rl}\big]\,\mathbb{E}\Big[\tfrac{e_{rn}(t)}{q_{rn}}-1\Big] = 0,$$

$$\mathbb{E}\Big[\big[\widetilde S_P(t-1)\big]_{rl}[S_P]_{rn}\Big(\tfrac{e_{rn}(t)}{q_{rn}}-1\Big)\Big] = [S_P]_{rn}\,\mathbb{E}\big[[\widetilde S_P(t-1)]_{rl}\big]\,\mathbb{E}\Big[\tfrac{e_{rn}(t)}{q_{rn}}-1\Big] = 0,$$

$$\mathbb{E}\Big[\big[\widetilde S_P(t-1)\big]_{rl}\big[\widetilde S_P(t-1)\big]_{rn}\Big(\tfrac{e_{rn}(t)}{q_{rn}}-1\Big)\Big] = \mathbb{E}\big[[\widetilde S_P(t-1)]_{rl}[\widetilde S_P(t-1)]_{rn}\big]\,\mathbb{E}\Big[\tfrac{e_{rn}(t)}{q_{rn}}-1\Big] = 0,$$

where we have repeatedly used the fact that

$$\mathbb{E}\Big[\frac{e_{rn}(t)}{q_{rn}}-1\Big] = 0.\qquad(77)$$

From (76)–(77) it is then clear that

$$\mathbb{E}\big[\widetilde S_P^T(t-1)\widetilde P(t-1)\big] = 0.\qquad(78)$$

In a similar way, it can be shown that the other dropped cross terms in (75) are zero mean. We note that there exist constants $k_3,k_4,k_5,k_6>0$ such that

$$\|\mathbf{x}\|^2 \leq k_3\|\mathbf{x}-d^*\|^2 + k_4,\qquad \|\mathbf{x}\| \leq k_5\|\mathbf{x}-d^*\|^2 + k_6.\qquad(79)$$

Hence, from (75) and (79), we have

$$\mathbb{E}\big[\|\Gamma(t,\mathbf{x},\omega)\|^2\big] \leq k_7\|\mathbf{x}-d^*\|^2 + k_8 \leq k_9\big(1+V(\mathbf{x})\big),\qquad(80)$$

where $k_7,k_8>0$ and $k_9 = \max(k_7,k_8)$. Combining (74) and (80), we note that condition 4) of Theorem 4 is satisfied, since

$$\big\langle R(\mathbf{x}),V_{\mathbf{x}}(\mathbf{x})\big\rangle \leq 0,\qquad\forall\,\mathbf{x}.\qquad(81)$$

Hence, all the conditions of Theorem 4 are satisfied, and we conclude that

$$\mathbb{P}\Big[\lim_{t\to\infty} x^j(t) = (I-P-S_P)^{-1}(B+S_B)\,u^j\Big] = 1.$$
(82)

Since (82) holds for each $j$, and $j$ takes only a finite number of values ($1\leq j\leq m$), we have

$$\mathbb{P}\Big[\lim_{t\to\infty} x^j(t) = (I-P-S_P)^{-1}(B+S_B)\,u^j,\ \forall j\Big] = 1.\qquad(83)$$

We now interpret Theorem 3. Referring to the partitioning of the $C(t)$ matrix in (43), we have

$$C(t) = \begin{bmatrix} U\\ X(t)\end{bmatrix},\qquad(84)$$

where each row of $X(t)$ corresponds to an estimated sensor location at time $t$. Theorem 3 then states that, starting from any initial guess, $X(0)\in\mathbb{R}^{M\times m}$, of the unknown sensor locations, the state sequence $\{X(t)\}_{t\geq 0}$ generated by the DLRE algorithm converges a.s., i.e.,

$$\mathbb{P}\Big[\lim_{t\to\infty} X(t) = (I-P-S_P)^{-1}(B+S_B)\,U\Big] = 1.\qquad(85)$$

However, it follows from Lemma 4 that the exact locations of the unknown sensors are given by

$$X^* = (I-P)^{-1}B\,U.\qquad(86)$$

Thus, the steady-state estimate given by the DLRE algorithm is not exact and, to characterize its performance, we introduce the following notion of localization error, $e_l$:

$$e_l = \big\|(I-P-S_P)^{-1}(B+S_B)\,U - (I-P)^{-1}B\,U\big\|.\qquad(87)$$

We note that $e_l$ is a function only of $S_P$ and $S_B$, the non-zero biases in the system matrix computations resulting from noisy inter-sensor distance measurements (see Section V). The DLRE algorithm is thus robust to random link failures and additive channel noise in inter-sensor communication; in fact, it is also robust to the zero-mean random errors in the system matrix computations and is affected only by the fixed non-zero biases. Note that $e_l = 0$ for $S_P = S_B = 0$. Clearly, if we assume sufficient accuracy in the inter-sensor distance computation, so that the biases $S_P$, $S_B$ are small, the steady-state error $e_l$ will also be negligible, even in a random sensing environment. These points are illustrated by the numerical studies in Section VII.

VII. NUMERICAL STUDIES

We divide the numerical study of DILOC into the following parts.
First, we present DILOC in the deterministic case, i.e., with no communication noise, no link failures, and the required inter-sensor distances known precisely. Second, we present DILOC with communication noise and link failures. Third, we consider noise on the distance measurements, which results in random system matrices, $\widehat P(t)$ and $\widehat B(t)$, as given in (45)–(47); we study both the biased ($S_P\neq 0$, $S_B\neq 0$) and unbiased ($S_P = S_B = 0$) cases. Finally, we present studies of DILOC in the presence of all of the random scenarios combined.

[Fig. 3. Deterministic environments: (a) and (b) DILOC implemented on the example in Section II-C. (c) An N = 500 node network and the respective triangulation sets. (d) DILOC implemented on the network in (c); the iterations are shown for two arbitrarily chosen sensors.]

[Fig. 4. Effect of communication noise and link failures: (a) An N = 50 node network and the respective triangulation sets. (b) DLRE (with a decreasing weight sequence, $\alpha(t)=4/(t+1)$) implemented on the network in (a); the iterations are shown for two arbitrarily chosen sensors.]

DILOC Algorithm in Deterministic Environments: We consider the example presented in Section II-C. We have $N=7$ nodes in $m=2$-dimensional space, of which $m+1=3$ are anchors and $M=4$ are sensors. DILOC, as given in (14), is implemented, and the results are shown in Figs. 3(a)–3(b). Fig.
3(a) shows the estimated coordinates of sensor 6, and Fig. 3(b) shows the trajectories of the estimated coordinates of all the sensors from a random initial condition. Figs. 3(c) and 3(d) show DILOC for a network of $N=500$ nodes.

DILOC Algorithm with Communication Noise and Link Failures: We consider the same example of Section II-C, but include communication noise and link failures. All communication links are active 90% of the time, i.e., $q_{ln}=0.9$ for all $n\in\Theta_l$, as discussed in (C2), and we include additive communication noise that is i.i.d. Gaussian with zero mean and variance $1/M$ (roughly speaking, this is equivalent to unit noise variance over the entire network). In this scenario, we employ DLRE with a decreasing weight sequence, $\alpha(t)=4/(t+1)$; the results are presented in Figs. 4(a) and 4(b).

DILOC with Noisy Distance Measurements: We now consider noise on the distance measurements. We assume that we have a reasonable estimate of the required distances, so that the noise translates into a small perturbation of the system matrices, $\widehat B(t)$ and $\widehat P(t)$. The matrices $\widetilde S_P$, $\widetilde S_B$ are zero-mean i.i.d. Gaussian perturbations with variance $0.1$ (a small-signal perturbation; note that the non-zero elements of both $B$ and $P$ lie between 0 and 1). For a network of $N=50$ nodes, shown in Fig. 5(a), we implement DLRE with a decreasing weight sequence, $\alpha(t)=1/t^{0.55}$, in Fig. 5(b). Finally, Fig. 6(a) shows a network of $N=50$ nodes on which DLRE, with a decreasing weight sequence $\alpha(t)=1/t^{0.55}$ and all of the above random scenarios combined, is implemented in Fig. 6(b).

[Fig. 5. Effect of noisy distance measurements: (a) An N = 50 node network and the respective triangulation sets. (b) DLRE (with a decreasing weight sequence, $\alpha(t)=1/t^{0.55}$) implemented on the network in (a); the iterations are shown for two arbitrarily chosen sensors.]

[Fig. 6. Random environments (noisy distances, communication noise, link failures): (a) An N = 50 node network and the respective triangulation sets. (b) DLRE (with a decreasing weight sequence, $\alpha(t)=1/t^{0.55}$) implemented on the network in (a); the iterations are shown for two arbitrarily chosen sensors.]

VIII. CONCLUSIONS

The paper studies a distributed iterative sensor localization algorithm in $m$-dimensional Euclidean space, $\mathbb{R}^m$ ($m\geq 1$), that finds the location coordinates of the sensors in a sensor network using only local communication. The algorithm uses the minimal number, $m+1$, of anchors (sensors with known locations) to localize an arbitrary number, $M$, of sensors that lie in the convex hull of these $m+1$ anchors. In the deterministic case, i.e., when no noise affects the inter-sensor communication, the inter-sensor distances are known without error, and the communication links do not fail, we show that our distributed algorithms, DILOC and DILOC-REL, converge to the exact sensor locations. For the random environment scenario, where inter-sensor communication links may fail randomly, transmitted data are distorted by noise, and inter-sensor distance information is imprecise, we show that our modified algorithm, DLRE, leads to almost sure convergence of the iterative location estimates, and in this case we explicitly characterize the resulting error between the exact sensor locations and the converged estimates.

Fig. 7.
Convex Hull Inclusion Test (m = 3): The sensor l is shown by a '◦', whereas the anchors in κ are shown by a '∇'. (a) $l \in C(\kappa) \Rightarrow A_\kappa = A_{\kappa\cup\{l\}}$; (b) $l \notin C(\kappa) \Rightarrow A_\kappa < A_{\kappa\cup\{l\}}$.

Numerical simulations illustrate the behavior of the algorithms under different field conditions.

APPENDIX I
CONVEX HULL INCLUSION TEST

We now give an algorithm that tests whether a given sensor, $l \in \mathbb{R}^m$, lies in the convex hull of the m + 1 nodes in a set, κ, using only the mutual distance information among these m + 2 nodes (κ ∪ {l}). Let κ denote the set of m + 1 nodes and let C(κ) denote the convex hull formed by the nodes in κ. Clearly, if l ∈ C(κ), then the convex hull formed by the nodes in κ is the same as the convex hull formed by the nodes in κ ∪ {l}, i.e.,

$$C(\kappa) = C(\kappa \cup \{l\}), \qquad \text{if } l \in C(\kappa). \tag{88}$$

From the above equation, if l ∈ C(κ), then the generalized volumes of the two convex sets, C(κ) and C(κ ∪ {l}), must be equal; moreover, C(κ ∪ {l}) can be partitioned into the m + 1 simplices obtained by replacing one node of κ at a time with l. Let $A_\kappa$ denote the generalized volume of C(κ) and let $A_{\kappa\cup\{l\}}$ denote the generalized volume of C(κ ∪ {l}); we then have

$$A_\kappa = A_{\kappa\cup\{l\}} = \sum_{k \in \kappa} A_{\kappa\cup\{l\}\setminus\{k\}}, \qquad \text{if } l \in C(\kappa). \tag{89}$$

Hence, the test becomes

$$l \in C(\kappa), \quad \text{if } \sum_{k \in \kappa} A_{\kappa\cup\{l\}\setminus\{k\}} = A_\kappa, \tag{90}$$

$$l \notin C(\kappa), \quad \text{if } \sum_{k \in \kappa} A_{\kappa\cup\{l\}\setminus\{k\}} > A_\kappa. \tag{91}$$

This is also shown in Fig. 7. The above inclusion test is based entirely on the generalized volumes, which can be calculated using only the inter-node distances via the Cayley-Menger determinants.

APPENDIX II
CAYLEY-MENGER DETERMINANT

Let κ be a set of m + 1 points (sensors) in $\mathbb{R}^m$, and let $d_{lj}$ be the inter-sensor distance between l and j. The generalized volume, $A_\kappa$, of the convex hull of the points in κ can be computed by the Cayley-Menger determinant; see, e.g., [33].
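The inclusion test above uses only generalized volumes, and the Cayley-Menger determinant (Appendix II) computes these from pairwise distances alone. A minimal Python sketch of both pieces follows; the function names, array layout, and the tolerance parameter are our own illustrative choices, not from the paper.

```python
import numpy as np
from math import factorial

def cm_volume(D):
    """Generalized volume of the simplex on k points in R^(k-1), from their
    k x k matrix of pairwise distances D (Cayley-Menger determinant, Eq. (92))."""
    k = D.shape[0]                 # number of points, k = m + 1
    m = k - 1                      # simplex dimension
    cm = np.ones((k + 1, k + 1))   # bordered matrix of Eq. (92)
    cm[0, 0] = 0.0
    cm[1:, 1:] = D ** 2            # squared inter-point distances, zero diagonal
    s = (-1) ** (m + 1) * 2 ** m * factorial(m) ** 2   # Eq. (93)
    a2 = np.linalg.det(cm) / s     # s * A^2 = det  =>  A^2 = det / s
    return np.sqrt(max(a2, 0.0))   # clip tiny negatives from round-off

def in_convex_hull(Dfull, tol=1e-9):
    """Inclusion test of Appendix I. Dfull is the (m+2) x (m+2) distance matrix
    of the m+1 anchors in kappa (first rows/cols) plus the candidate sensor l
    (last row/col). Returns True iff l lies in C(kappa), per Eqs. (90)-(91)."""
    k = Dfull.shape[0] - 1                     # |kappa| = m + 1
    A_kappa = cm_volume(Dfull[:k, :k])
    # Sum the volumes A_{kappa u {l} \ {j}}: swap l in for each anchor j in turn.
    total = 0.0
    for drop in range(k):
        keep = [i for i in range(k + 1) if i != drop]
        total += cm_volume(Dfull[np.ix_(keep, keep)])
    # Equality (up to round-off) means l is inside; a strict excess means outside.
    return abs(total - A_kappa) <= tol * max(A_kappa, 1.0)
```

For instance, for the triangle with vertices (0, 0), (1, 0), (0, 1), the point (0.25, 0.25) passes the test while (2, 2) fails it, and `cm_volume` applied to the triangle's distance matrix recovers its area of 1/2.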
The Cayley-Menger determinant is the determinant of an $(m+2) \times (m+2)$ (symmetric) matrix that relates to the generalized volume, $A_\kappa$, of the convex hull, C(κ), of the m + 1 points in $\mathbb{R}^m$ through an integer sequence, $s_{m+1}$. The Cayley-Menger determinant is given by

$$
s_{m+1} A_\kappa^2 =
\begin{vmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & 0 & d_{12}^2 & \cdots & d_{1,m+1}^2 \\
1 & d_{21}^2 & 0 & \cdots & d_{2,m+1}^2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & d_{m+1,1}^2 & d_{m+1,2}^2 & \cdots & 0
\end{vmatrix}, \tag{92}
$$

where

$$s_m = (-1)^{m+1}\, 2^m\, (m!)^2, \qquad m = 0, 1, 2, \ldots, \tag{93}$$

and its first few terms are $-1, 2, -16, 288, -9216, 460800, \ldots$

APPENDIX III
IMPORTANT RESULTS

Lemma 5: If the matrix, P, corresponds to the transition probability matrix associated to the transient states of an absorbing Markov chain, then

$$\lim_{t \to \infty} P^{t+1} = 0. \tag{94}$$

Proof: For such a matrix, P, we have

$$\rho(P) < 1, \tag{95}$$

from Lemma 8.3.20 and Theorem 8.3.21 in [34], where $\rho(\cdot)$ denotes the spectral radius of a matrix, and (94) follows from (95).

Lemma 6: If the matrix, P, corresponds to the transition probability matrix associated to the transient states of an absorbing Markov chain, then

$$\lim_{t \to \infty} \sum_{k=0}^{t+1} P^k = (I - P)^{-1}. \tag{96}$$

Proof: The proof follows from Lemma 5 and Lemma 6.2.1 in [34].

REFERENCES

[1] Usman A. Khan and José M. F. Moura, "Distributing the Kalman filters for large-scale systems," IEEE Transactions on Signal Processing, Aug. 2007, accepted for publication, DOI: 10.1109/TSP.2008.927480.
[2] G. Springer, Introduction to Riemann Surfaces, Addison-Wesley, Reading, MA, 1957.
[3] J. G. Hocking and G. S. Young, Topology, Addison-Wesley, Reading, MA, 1961.
[4] R. L. Moses, D. Krishnamurthy, and R.
Patterson, "A self-localization method for wireless sensor networks," EURASIP Journal on Applied Signal Processing, no. 4, pp. 348–358, Mar. 2003.
[5] N. Patwari, A. O. Hero III, M. Perkins, N. Correal, and R. J. O'Dea, "Relative location estimation in wireless sensor networks," IEEE Trans. on Signal Processing, vol. 51, no. 8, pp. 2137–2148, Aug. 2003.
[6] Y. Shang, W. Ruml, Y. Zhang, and M. Fromherz, "Localization from mere connectivity," in 4th ACM International Symposium on Mobile Ad-Hoc Networking and Computing, Annapolis, MD, Jun. 2003, pp. 201–212.
[7] Y. Shang and W. Ruml, "Improved MDS-based localization," in IEEE Infocom, Hong Kong, Mar. 2004, pp. 2640–2651.
[8] F. Thomas and L. Ros, "Revisiting trilateration for robot localization," IEEE Transactions on Robotics, vol. 21, no. 1, pp. 93–101, Feb. 2005.
[9] M. Cao, B. D. O. Anderson, and A. S. Morse, "Localization with imprecise distance information in sensor networks," Sevilla, Spain, Dec. 2005, pp. 2829–2834.
[10] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by local linear embedding," Science, vol. 290, pp. 2323–2326, Dec. 2000.
[11] N. Patwari and A. O. Hero III, "Manifold learning algorithms for localization in wireless sensor networks," in IEEE International Conference on Sig. Proc., Montreal, Canada, Mar. 2004, vol. 3, pp. 857–860.
[12] D. Niculescu and B. Nath, "Ad-hoc positioning system," in IEEE Globecom, Apr. 2001, pp. 2926–2931.
[13] A. Savvides, C. C. Han, and M. B. Srivastava, "Dynamic fine-grained localization in ad-hoc networks of sensors," in IEEE Mobicom, Rome, Italy, Jul. 2001, pp. 166–179.
[14] A. Savvides, H. Park, and M. B. Srivastava, "The bits and flops of the N-hop multilateration primitive for node localization problems," in Intl. Workshop on Sensor Networks and Applications, Atlanta, GA, Sep. 2002, pp. 112–121.
[15] R. Nagpal, H. Shrobe, and J. Bachrach, "Organizing a global coordinate system from local information on an ad-hoc sensor network," in 2nd Intl. Workshop on Information Processing in Sensor Networks, Palo Alto, CA, Apr. 2003, pp. 333–348.
[16] J. J. Caffery, Wireless Location in CDMA Cellular Radio Systems, Kluwer Academic Publishers, Norwell, MA, 1999.
[17] J. A. Costa, N. Patwari, and A. O. Hero III, "Distributed weighted-multidimensional scaling for node localization in sensor networks," ACM Transactions on Sensor Networks, vol. 2, no. 1, pp. 39–64, 2006.
[18] J. Albowicz, A. Chen, and L. Zhang, "Recursive position estimation in sensor networks," in IEEE Int. Conf. on Network Protocols, Riverside, CA, Nov. 2001, pp. 35–41.
[19] C. Savarese, J. M. Rabaey, and J. Beutel, "Locationing in distributed ad-hoc wireless sensor networks," in IEEE International Conference on Sig. Proc., Salt Lake City, UT, May 2001, pp. 2037–2040.
[20] S. Čapkun, M. Hamdi, and J. P. Hubaux, "GPS-free positioning in mobile ad-hoc networks," in 34th IEEE Hawaii Int. Conf. on System Sciences, Wailea Maui, HI, Jan. 2001.
[21] A. T. Ihler, J. W. Fisher III, R. L. Moses, and A. S. Willsky, "Nonparametric belief propagation for self-calibration in sensor networks," in IEEE International Conference on Sig. Proc., Montreal, Canada, May 2004.
[22] L. Hu and D. Evans, "Localization for mobile sensor networks," in IEEE Mobicom, Philadelphia, PA, Sep. 2004, pp. 45–57.
[23] M. Coates, "Distributed particle filters for sensor networks," in IEEE Information Processing in Sensor Networks, Berkeley, CA, Apr. 2004, pp. 99–107.
[24] S. Thrun, "Probabilistic robotics," Communications of the ACM, vol. 45, no. 3, pp. 52–57, Mar. 2002.
[25] J. C. Gower, "Euclidean distance geometry," Mathematical Scientist, vol. 7, pp. 1–14, 1982.
[26] J. C. Gower, "Properties of Euclidean and non-Euclidean distance matrices," Linear Algebra and its Applications, vol. 67, pp. 81–97, Jun. 1985.
[27] P. Hall, Introduction to the Theory of Coverage Processes, John Wiley and Sons Inc., Chichester, UK, 1988.
[28] Y. Sung, L. Tong, and A. Swami, "Asymptotic locally optimal detector for large-scale sensor networks under the Poisson regime," IEEE Transactions on Signal Processing, vol. 53, no. 6, pp. 2005–2017, Jun. 2005.
[29] C. M. Grinstead and J. L. Snell, Introduction to Probability, American Mathematical Society, 1997.
[30] S. Kar and J. M. F. Moura, "Distributed consensus algorithms in sensor networks: Link failures and channel noise," November 2007, manuscript submitted to the IEEE Transactions on Signal Processing, http://arxiv.org/abs/0711.3915.
[31] S. Kar and J. M. F. Moura, "Distributed consensus algorithms in sensor networks: Quantized data," December 2007, manuscript submitted to the IEEE Transactions on Signal Processing, http://aps.arxiv.org/abs/0712.1609.
[32] M. B. Nevel'son and R. Z. Has'minskii, Stochastic Approximation and Recursive Estimation, American Mathematical Society, Providence, Rhode Island, 1973.
[33] M. J. Sippl and H. A. Scheraga, "Cayley–Menger coordinates," Proceedings of the National Academy of Sciences of the U.S.A., vol. 83, no. 8, pp. 2283–2287, Apr. 1986.
[34] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, Inc., New York, NY, 1970.

November 26, 2024 DRAFT
