Information, Energy and Density for Ad Hoc Sensor Networks over Correlated Random Fields: Large Deviations Analysis

Using large deviations results that characterize the amount of information per node on a two-dimensional (2-D) lattice, the asymptotic behavior of a sensor network deployed over a correlated random field for statistical inference is investigated. Under a 2-D hidden Gauss-Markov random field model with symmetric first-order conditional autoregression, the behavior of the total information [nats] and the energy efficiency [nats/J], defined as the ratio of total gathered information to the required energy, is obtained as the coverage area, node density and energy vary.

Authors: Youngchul Sung, H. Vincent Poor, Heejung Yu

1. INTRODUCTION

In this paper, we investigate the fundamental behavior of a flat multi-hop ad hoc sensor network deployed over a correlated two-dimensional (2-D) random field for statistical inference. In particular, we examine the amount of information obtainable from a sensor network distributed over a 2-D Gauss-Markov random field (GMRF) and the related trade-offs in various asymptotic settings. We consider the Kullback-Leibler information (KLI) and mutual information (MI) [1] as our information measures. Our approach to calculating the total obtainable information is based on the large deviations principle: for large networks, the total information is approximately given by the product of the number of sensors and the asymptotic per-sensor information. However, a closed-form expression for the asymptotic per-sensor information (or asymptotic information rate in 2-D) is not available for general 2-D signals. To address this problem, we adopt the conditional autoregression (CAR) model and a corresponding correlation model for the signal, and derive a closed-form expression for the asymptotic information rate in 2-D.
We do so by exploiting the spectral structure of the CAR signal and the relationship between the eigenvalues of the block circulant approximation to the block Toeplitz matrix describing the 2-D correlation structure. Based on the derived expressions for the asymptotic information rate and their properties, we investigate the behavior of sensor networks deployed over correlated random fields for statistical inference.

† Y. Sung and H. Yu are with the Dept. of Electrical Engineering, KAIST, Daejeon 305-701, South Korea. Email: ysung@ee.kaist.ac.kr and hjyu@stein.kaist.ac.kr. H. V. Poor is with the Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544. Email: poor@princeton.edu. The work of Y. Sung was supported in part by the Brain Korea 21 Project, the School of Information Technology, KAIST. The work of H. V. Poor was supported in part by the U.S. National Science Foundation under Grants ANI-03-38807 and CNS-06-25637.

1.1. Related Work

Large deviations analysis of Gauss-Markov processes in Gaussian noise has been considered previously (see [2] and references therein). However, most work in this area considers only one-dimensional (1-D) signals or time series. A closed-form expression for the asymptotic KLI rate was obtained, and its properties were investigated, for 1-D hidden Gauss-Markov random processes in [2]. Large deviations analyses were used to examine the issues of optimal sensor density and optimal sampling in a 1-D signal model in [3] and [4]. For a 2-D setting, an error exponent was obtained for the detection of 2-D GMRFs in [5], where the sensors are located randomly and the Markov graph is based on the nearest-neighbor dependency, enabling a loop-free graph. In that work, however, measurement noise was not considered.
Our work here focuses on the analysis of the fundamental behavior of 2-D sensor networks deployed for statistical inference via new large deviations results for 2-D hidden GMRFs, which enable us to investigate the impact of field correlation and measurement signal-to-noise ratio (SNR) on the information.

2. BACKGROUND AND SIGNAL MODEL

To simplify the problem and gain insight into the behavior in 2-D, we assume that sensors are located on a 2-D lattice I_n = [0 : 1 : n-1]^2, as shown in Fig. 1. We assume that the signal samples at the sensors form a (discrete-index) 2-D GMRF and that each sensor has Gaussian measurement noise. The (noisy) measurement Y_{ij} of Sensor ij on the 2-D lattice I_n is given by

\[ Y_{ij} = X_{ij} + W_{ij}, \qquad ij \in I_n, \quad (1) \]

where \(\{W_{ij}\}\) represents independent and identically distributed (i.i.d.) \(\mathcal{N}(0, \sigma^2)\) noise with known variance \(\sigma^2\), and \(\{X_{ij}\}\) is a GMRF on the 2-D lattice, independent of the measurement noise \(\{W_{ij}\}\). Thus, the observation samples form a 2-D hidden GMRF. In the following, we briefly introduce the results on GMRFs relevant to further development.

Definition 1 (GMRF [6]) A random vector \(X = (X_1, X_2, \cdots, X_n) \in \mathbb{R}^n\) is a Gauss-Markov random field with respect to (w.r.t.) a labelled graph \(G = (\nu, E)\) with mean \(\mu\) and precision matrix \(Q > 0\) if its probability density function is given by

\[ p(x) = (2\pi)^{-n/2} |Q|^{1/2} \exp\!\left( -\tfrac{1}{2} (x - \mu)^T Q (x - \mu) \right), \quad (2) \]

and \(Q_{lm} \neq 0 \iff \{l, m\} \in E\) for all \(l \neq m\). Here, \(\nu\) is the set of all nodes \(\{1, 2, \cdots, n\}\) and \(E\) is the set of edges connecting pairs of nodes, which represent the conditional dependence structure.

Fig. 1. Sensors on a 2-D lattice I_n: hidden Markov structure. (At each node ij with spacing d_n, the hidden signal X_{ij} is observed as Y_{ij} = X_{ij} + W_{ij}.)

The 2-D indexing scheme ij in (1) can be appropriately converted to a 1-D scheme to apply Definition 1.
From here on, we use the 2-D indexing scheme for convenience.

Definition 2 (Stationarity) A GMRF \(\{X_{ij}\}\) on the 2-D doubly infinite lattice \(I_\infty\) is said to be stationary if the mean vector is constant and

\[ \mathrm{Cov}(X_{ij}, X_{i'j'}) \triangleq E\{ (X_{ij} - E\{X_{ij}\})(X_{i'j'} - E\{X_{i'j'}\}) \} = c(i - i', j - j') \]

for some function \(c(\cdot, \cdot)\).

For a 2-D stationary GMRF \(\{X_{ij}\}\), the covariance \(\{\gamma_{ij}\}\) is defined as \(\gamma_{ij} = E\{X_{i'j'} X_{i'+i, j'+j}\} = E\{X_{00} X_{ij}\}\), which does not depend on \(i'\) or \(j'\) due to the stationarity. The spectral density function of a zero-mean stationary GMRF on \(I_\infty\) with covariance \(\gamma_{ij}\) is defined as

\[ f(\omega_1, \omega_2) = \frac{1}{4\pi^2} \sum_{ij \in I_\infty} \gamma_{ij} \exp(-\iota (i\omega_1 + j\omega_2)), \quad (3) \]

where \(\iota = \sqrt{-1}\) and \((\omega_1, \omega_2) \in (-\pi, \pi]^2\). Note that this is a 2-D extension of the conventional 1-D discrete-time Fourier transform (DTFT).

Definition 3 (Conditional autoregression) A GMRF \(\{X_{ij}\}\) is said to be a conditional autoregression (CAR) if it is specified by a set of full conditional normal distributions with mean and precision

\[ E\{X_{ij} \,|\, X_{-ij}\} = -\frac{1}{\theta_{00}} \sum_{i'j' \in I_\infty \setminus 00} \theta_{i'j'} X_{i+i', j+j'}, \quad (4) \]
\[ \mathrm{Prec}\{X_{ij} \,|\, X_{-ij}\} = \theta_{00} > 0, \quad (5) \]

where \(X_{-ij}\) denotes the set of all variables except \(X_{ij}\).

It can be shown that the GMRF defined by the CAR model (4)-(5) is a zero-mean stationary Gaussian process on \(I_\infty\) with power spectral density [6]

\[ f(\omega_1, \omega_2) = \frac{1}{4\pi^2} \cdot \frac{1}{\sum_{ij \in I_\infty} \theta_{ij} \exp(-\iota (i\omega_1 + j\omega_2))} \quad (6) \]

if

\[ |\{\theta_{ij} \neq 0\}| < \infty, \qquad \theta_{ij} = \theta_{-i,-j}, \qquad \theta_{00} > 0, \quad (7) \]
\[ \{\theta_{ij}\} \text{ is such that } f(\omega_1, \omega_2) > 0 \ \ \forall (\omega_1, \omega_2) \in (-\pi, \pi]^2. \quad (8) \]

Henceforth, we assume that the 2-D stochastic signal \(\{X_{ij}\}\) in (1) is given by a stationary GMRF defined by the CAR model (4)-(5) under (7)-(8).

3. ASYMPTOTIC INFORMATION RATES AND THEIR PROPERTIES

In this section, we derive closed-form expressions for the asymptotic KLI rate and MI rate in the model (1), defined as

\[ K = \lim_{n\to\infty} \frac{1}{|I_n|} \log \frac{p_0}{p_1}(\{Y_{ij}, ij \in I_n\}) \quad \text{a.s. under } p_0, \]

and

\[ I = \lim_{n\to\infty} \frac{1}{|I_n|} I(\{X_{ij}, ij \in I_n\}; \{Y_{ij}, ij \in I_n\}), \]

respectively. For the MI, the signal model (1) is directly applicable, whereas for the KLI the probability density functions of the null (noise-only) and alternative (signal-plus-noise) distributions are given by

\[ p_0(Y_{ij}): \ Y_{ij} = W_{ij}, \qquad p_1(Y_{ij}): \ Y_{ij} = X_{ij} + W_{ij}, \qquad ij \in I_n. \quad (9) \]

The following closed-form expressions for the asymptotic information rates in the spectral domain were obtained in [7] by exploiting the spectral structure of the CAR signal and the relationship between the eigenvalues of block circulant and block Toeplitz matrices representing the 2-D correlation structure.

Theorem 1 For the model (9) with the signal given by (4)-(5), assuming that conditions (7)-(8) hold, the asymptotic KLI rate is given by

\[ K = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} \left( \frac{1}{2} \log \frac{\sigma^2 + 4\pi^2 f(\omega_1, \omega_2)}{\sigma^2} + \frac{1}{2} \cdot \frac{\sigma^2}{\sigma^2 + 4\pi^2 f(\omega_1, \omega_2)} - \frac{1}{2} \right) d\omega_1\, d\omega_2 \quad (10) \]
\[ = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} D\big( \mathcal{N}(0, S_{y_0}(\omega_1, \omega_2)) \,\|\, \mathcal{N}(0, S_{y_1}(\omega_1, \omega_2)) \big)\, d\omega_1\, d\omega_2, \]

where \(D(\cdot \| \cdot)\) denotes the Kullback-Leibler divergence.

Proof: See [8].

As a by-product of the proof of the above theorem, we obtain the asymptotic MI rate:

\[ I = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} \frac{1}{2} \log \frac{\sigma^2 + 4\pi^2 f(\omega_1, \omega_2)}{\sigma^2}\, d\omega_1\, d\omega_2. \quad (11) \]

Theorem 1 is a 2-D extension of the asymptotic KLI rate of the 1-D hidden Gauss-Markov model obtained in [2], and the asymptotic KLI rate (10) can be explained by a frequency binning argument: within each 2-D frequency bin \(d\omega_1\, d\omega_2\) the spectra are flat, i.e., the signals are independent, and Stein's lemma can be applied to the bin.
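The frequency-binning interpretation can be checked directly: per bin, the integrand of (10) is exactly the Kullback-Leibler divergence between the two zero-mean Gaussian spectral distributions \(\mathcal{N}(0, \sigma^2)\) and \(\mathcal{N}(0, \sigma^2 + 4\pi^2 f)\). A minimal pure-Python sanity check (the function names are ours, not from the paper):

```python
import math

def kl_gauss(s0, s1):
    """KL divergence D(N(0, s0) || N(0, s1)) between zero-mean Gaussians."""
    return 0.5 * (math.log(s1 / s0) + s0 / s1 - 1.0)

def kli_integrand(sigma2, f):
    """Integrand of (10): (1/2)log((s^2+4pi^2 f)/s^2) + (1/2) s^2/(s^2+4pi^2 f) - 1/2."""
    s1 = sigma2 + 4.0 * math.pi ** 2 * f
    return 0.5 * math.log(s1 / sigma2) + 0.5 * sigma2 / s1 - 0.5

# Arbitrary noise variance and spectral value: the two expressions agree term by term.
sigma2, f = 1.3, 0.07
assert abs(kli_integrand(sigma2, f) - kl_gauss(sigma2, sigma2 + 4.0 * math.pi ** 2 * f)) < 1e-12
```

The agreement is an algebraic identity, so the check holds for any positive `sigma2` and `f`.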
The overall KLI is then the sum of the contributions of the bins.

3.1. Symmetric First-Order Conditional Autoregression

To investigate the properties of the asymptotic KLI and MI rates as functions of field correlation and SNR, we further consider the symmetric first-order conditional autoregression (SFCAR), defined by the conditions

\[ E\{X_{ij} \,|\, X_{-ij}\} = \frac{\lambda}{\kappa} (X_{i+1,j} + X_{i-1,j} + X_{i,j+1} + X_{i,j-1}), \]
\[ \mathrm{Prec}\{X_{ij} \,|\, X_{-ij}\} = \kappa > 0, \]

where \(0 \le \lambda \le \kappa/4\). (This is a sufficient condition for (7)-(8) to hold.) Here, \(\theta_{00} = \kappa\) and \(\theta_{1,0} = \theta_{-1,0} = \theta_{0,1} = \theta_{0,-1} = -\lambda\). In the SFCAR model, the correlation is symmetric for each set of four neighboring sensor nodes. The SFCAR model is a simple but meaningful extension of the 1-D autoregressive (AR) model, which has conditional causal dependency only on the previous sample; here, in the 2-D case, we have conditional dependence on the four neighboring nodes in the four (planar) directions, capturing the 2-D correlation structure. The spectrum of the SFCAR signal is given by

\[ f(\omega_1, \omega_2) = \frac{1}{4\pi^2} \cdot \frac{1}{\kappa (1 - 2\zeta \cos\omega_1 - 2\zeta \cos\omega_2)}, \quad (12) \]

where the edge dependence factor \(\zeta\) is defined as

\[ \zeta \triangleq \frac{\lambda}{\kappa}, \qquad 0 \le \zeta \le 1/4. \quad (13) \]

Here, \(\zeta = 0\) corresponds to the i.i.d. case, whereas \(\zeta = 1/4\) corresponds to the perfectly correlated case. Therefore, the correlation strength is captured by this single quantity \(\zeta\) for SFCAR signals. The power of the SFCAR signal is obtained by the inverse Fourier transform via the relationship (3), and is given by

\[ P_s = \gamma_{00} = \frac{2 K(4\zeta)}{\pi \kappa}, \qquad 0 \le \zeta \le \frac{1}{4}, \]

where \(K(\cdot)\) is the complete elliptic integral of the first kind [9]. The SNR is thus given by

\[ \mathrm{SNR} = \frac{P_s}{\sigma^2} = \frac{2 K(4\zeta)}{\pi \kappa \sigma^2}. \]

Using (10) and the SNR, we obtain the asymptotic KLI and MI rates for the SFCAR signal, given in the following corollary to Theorem 1, also from [7].
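The power expression \(P_s = 2K(4\zeta)/(\pi\kappa)\) can be verified numerically against the defining integral \(\gamma_{00} = \iint f\, d\omega_1 d\omega_2\). The sketch below (pure Python, our own naming) computes \(K(\cdot)\) by the arithmetic-geometric mean; we assume the modulus convention \(K(k) = \int_0^{\pi/2} (1 - k^2 \sin^2\theta)^{-1/2} d\theta\), which is consistent with \(K(0) = \pi/2\) and \(K(1) = \infty\):

```python
import math

def ellipk(k, tol=1e-15):
    """Complete elliptic integral of the first kind K(k), modulus convention,
    via the arithmetic-geometric mean: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def sfcar_power(zeta, kappa, m=300):
    """gamma_00 by midpoint-rule integration of the SFCAR spectrum (12) over (-pi, pi]^2."""
    h = 2.0 * math.pi / m
    total = 0.0
    for i in range(m):
        w1 = -math.pi + (i + 0.5) * h
        for j in range(m):
            w2 = -math.pi + (j + 0.5) * h
            d = 1.0 - 2.0 * zeta * math.cos(w1) - 2.0 * zeta * math.cos(w2)
            total += 1.0 / (4.0 * math.pi ** 2 * kappa * d)
    return total * h * h

zeta, kappa = 0.15, 2.0
closed_form = 2.0 * ellipk(4.0 * zeta) / (math.pi * kappa)   # P_s = 2K(4 zeta)/(pi kappa)
assert abs(sfcar_power(zeta, kappa) - closed_form) / closed_form < 1e-6
assert abs(sfcar_power(0.0, kappa) - 1.0 / kappa) < 1e-9     # i.i.d. case: gamma_00 = 1/kappa
```

The midpoint rule converges very fast here because the integrand is smooth and periodic for \(\zeta < 1/4\).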
Corollary 1 The asymptotic KLI and MI rates for the 2-D SFCAR signal model are given by

\[ K_s = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} \left( \frac{1}{2} \log\!\left( 1 + \frac{\mathrm{SNR}}{(2/\pi) K(4\zeta) (1 - 2\zeta\cos\omega_1 - 2\zeta\cos\omega_2)} \right) + \frac{1}{2} \cdot \frac{1}{1 + \frac{\mathrm{SNR}}{(2/\pi) K(4\zeta) (1 - 2\zeta\cos\omega_1 - 2\zeta\cos\omega_2)}} - \frac{1}{2} \right) d\omega_1\, d\omega_2 \quad (14) \]

and

\[ I_s = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} \frac{1}{2} \log\!\left( 1 + \frac{\mathrm{SNR}}{(2/\pi) K(4\zeta) (1 - 2\zeta\cos\omega_1 - 2\zeta\cos\omega_2)} \right) d\omega_1\, d\omega_2, \quad (15) \]

respectively.

Note that the SNR and the correlation are separated in (14)-(15), which enables us to investigate the effects of each factor separately.

3.2. Properties of the Asymptotic KLI and MI Rates (K_s and I_s)

First, it is readily seen from Corollary 1 that \(K_s\) and \(I_s\) are continuously differentiable functions of the edge dependence factor \(\zeta\) (\(0 \le \zeta < 1/4\)) for a given SNR, since \(x \mapsto K(x)\) is a \(C^\infty\) function for \(0 \le x < 1\) [10]. The values of \(K_s\) at the extreme correlations are obtained by noting that \(K(0) = \pi/2\) and \(K(1) = \infty\). In the i.i.d. case (\(\zeta = 0\)), the corollary reduces to Stein's lemma, as expected, and \(K_s\) is given by

\[ K_s|_{\zeta=0} = \frac{1}{2}\log(1 + \mathrm{SNR}) + \frac{1}{2(1 + \mathrm{SNR})} - \frac{1}{2} = D(\mathcal{N}(0, 1) \,\|\, \mathcal{N}(0, 1 + \mathrm{SNR})). \]

In the i.i.d. case, the asymptotic MI rate is given by the well-known formula

\[ I_s|_{\zeta=0} = \frac{1}{2}\log(1 + \mathrm{SNR}). \]

For the perfectly correlated case (\(\zeta = 1/4\)), on the other hand, \(K_s = 0\) and \(I_s = 0\). (In this case, as in the i.i.d. case, the two-dimensionality is irrelevant.) The limiting behavior of the asymptotic information rates follows from Taylor's theorem. Due to the continuous differentiability, we have

\[ K_s(\zeta) = c_1 \cdot (1/4 - \zeta) + o(|1/4 - \zeta|), \quad (16) \]
\[ I_s(\zeta) = c'_1 \cdot (1/4 - \zeta) + o(|1/4 - \zeta|), \quad (17) \]

for some constants \(c_1\) and \(c'_1\), as \(\zeta \to 1/4\). Similarly, \(K_s\) and \(I_s\) exhibit linear limiting behavior in a neighborhood of \(\zeta = 0\), with non-zero limit values, as \(\zeta \to 0\).
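These closed-form reductions can be checked by evaluating (14)-(15) numerically. The sketch below (pure Python, midpoint rule on a frequency grid; naming is ours) recovers Stein's lemma at \(\zeta = 0\) and shows the loss of information under correlation at high SNR:

```python
import math

def ellipk(k, tol=1e-15):
    """K(k), complete elliptic integral of the first kind (modulus convention), via AGM."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def sfcar_rates(zeta, snr, m=300):
    """Midpoint-rule evaluation of the asymptotic KLI and MI rates (14)-(15)."""
    scale = (2.0 / math.pi) * ellipk(4.0 * zeta)
    h = 2.0 * math.pi / m
    ks = mi = 0.0
    for i in range(m):
        w1 = -math.pi + (i + 0.5) * h
        for j in range(m):
            w2 = -math.pi + (j + 0.5) * h
            r = snr / (scale * (1.0 - 2.0 * zeta * math.cos(w1) - 2.0 * zeta * math.cos(w2)))
            ks += 0.5 * math.log1p(r) + 0.5 / (1.0 + r) - 0.5
            mi += 0.5 * math.log1p(r)
    norm = h * h / (4.0 * math.pi ** 2)
    return ks * norm, mi * norm

snr = 10.0                                   # 10 dB
ks0, mi0 = sfcar_rates(0.0, snr)
stein = 0.5 * math.log1p(snr) + 0.5 / (1.0 + snr) - 0.5
assert abs(ks0 - stein) < 1e-9               # K_s|_{zeta=0} = D(N(0,1) || N(0,1+SNR))
assert abs(mi0 - 0.5 * math.log1p(snr)) < 1e-9
ks2, mi2 = sfcar_rates(0.2, snr)
assert ks2 < ks0 and mi2 < mi0               # at this SNR, correlation reduces both rates
```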
That is,

\[ K_s(\zeta) = K_s(0) + c_2 \zeta + o(\zeta), \quad (18) \]
\[ I_s(\zeta) = I_s(0) + c'_2 \zeta + o(\zeta), \quad (19) \]

for some \(c_2\) and \(c'_2\), as \(\zeta \to 0\). For intermediate values of correlation, it is seen that at high SNR \(K_s\) decreases monotonically as \(\zeta\) increases. At low SNR, on the other hand, correlation is beneficial to the performance.

With regard to \(K_s\) and \(I_s\) as functions of SNR, the behavior of \(K_s\) is given by the following theorem.

Theorem 2 The asymptotic KLI rate \(K_s\) for the hidden SFCAR model is continuous and monotonically increasing as the SNR increases, for a given edge dependence factor \(0 \le \zeta < 1/4\). Moreover, \(K_s\) increases linearly with respect to \(\frac{1}{2}\log \mathrm{SNR}\) as \(\mathrm{SNR} \to \infty\). As the SNR decreases to zero, on the other hand, \(K_s\) converges to zero with convergence rate

\[ K_s(\mathrm{SNR}) = c_3 \cdot \mathrm{SNR}^2 + o(\mathrm{SNR}^2) \]

for some constant \(c_3\). The asymptotic MI rate \(I_s\) has similar properties as a function of SNR, i.e., it is a continuous and monotonically increasing function of the SNR. At high SNR, it increases at rate \(\frac{1}{2}\log \mathrm{SNR}\), whereas it decreases to zero with convergence rate

\[ I_s(\mathrm{SNR}) = c'_3 \cdot \mathrm{SNR} + o(\mathrm{SNR}) \]

for some constant \(c'_3\) as \(\mathrm{SNR} \to 0\).

Proof: See [8].

Note that the limiting behavior as \(\mathrm{SNR} \to 0\) differs for \(K_s\) and \(I_s\): \(K_s\) decays to zero quadratically, while \(I_s\) diminishes linearly.

4. SCALING LAWS IN AD HOC SENSOR NETWORKS OVER CORRELATED RANDOM FIELDS

Based on the results in the previous sections, we are now ready to answer some fundamental questions in the design of sensor networks for statistical inference about the underlying stochastic field.

4.1. Physical Correlation Model

The actual physical correlation for the SFCAR model is obtained by solving the corresponding continuous-index 2-D stochastic differential equation (the stochastic Laplace equation)¹ [11]

\[ \left[ \left( \frac{\partial}{\partial x} \right)^2 + \left( \frac{\partial}{\partial y} \right)^2 - \alpha^2 \right] X(x, y) = u(x, y), \quad (20) \]

where \(u(x, y)\) is a 2-D white zero-mean Gaussian perturbation and \(\alpha > 0\) is the diffusion rate. By solving this SDE, the edge correlation factor \(\rho\) is obtained as a function of the sensor spacing \(d_n\) [11]:

\[ \rho \triangleq \frac{\gamma_{01}}{\gamma_{00}} = \frac{\gamma_{10}}{\gamma_{00}} = f(d_n) = \alpha d_n K_1(\alpha d_n), \quad (21) \]

where \(K_1(\cdot)\) is the modified Bessel function of the second kind, whose asymptotic behavior is given by

\[ K_1(x) \to \sqrt{\frac{\pi}{2x}}\, e^{-x} \ \text{ as } x \to \infty, \qquad K_1(x) \to \frac{1}{x} \ \text{ as } x \to 0. \quad (22) \]

¹ Note that the solution of (20) is circularly symmetric, i.e., it depends only on \(r = \sqrt{x^2 + y^2}\), and samples of the solution \(X(x, y)\) of (20) on the lattice \(I_n\) do not necessarily form a discrete-index SFCAR GMRF. However, (20) is still the continuous-index counterpart of the SFCAR model, and we use its correlation function for the SFCAR model.

The correlation function (21) can be regarded as the representative correlation in 2-D, similar to the exponential correlation function \(e^{-A d_n}\) in 1-D. Both functions decrease monotonically w.r.t. \(d_n\); however, the 2-D correlation function is flat at \(d_n = 0\) [11]. Further, we have a continuous and differentiable mapping \(g: \rho \mapsto \zeta\) from the edge correlation factor \(\rho\) to the edge dependence factor \(\zeta\), given by [8]

\[ \rho = \frac{(2/\pi) K(4\zeta) - 1}{4 (2/\pi) \zeta K(4\zeta)} =: g^{-1}(\zeta), \quad (23) \]

which maps zero and one to zero and 1/4, respectively. Thus, \(\zeta = g(f(d_n))\), and for given physical parameters (with a slight abuse of notation),

\[ K_s(\mathrm{SNR}, \zeta) = K_s(\mathrm{SNR}, g(f(d_n))) = K_s(\mathrm{SNR}, d_n). \]

(And similarly for \(I_s\).) We will use the arguments SNR and \(\zeta\) for \(K_s\) and \(I_s\) as appropriate.
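The correlation function (21) and its limits implied by (22) are easy to check numerically. The sketch below evaluates \(K_1\) through the standard integral representation \(K_1(x) = \int_0^\infty e^{-x \cosh t} \cosh t \, dt\) (a textbook identity, not from the paper; the function names are ours):

```python
import math

def bessel_k1(x, tmax=30.0, n=30000):
    """Modified Bessel function K_1(x) via K_1(x) = int_0^inf e^{-x cosh t} cosh t dt,
    trapezoidal rule on [0, tmax] (the integrand underflows to 0 well before tmax)."""
    h = tmax / n
    s = 0.5 * math.exp(-x)          # t = 0 endpoint: cosh(0) = 1
    for i in range(1, n + 1):
        c = math.cosh(i * h)
        s += math.exp(-x * c) * c * (0.5 if i == n else 1.0)
    return s * h

def edge_correlation(alpha, d):
    """rho(d) = alpha * d * K_1(alpha * d), eq. (21)."""
    x = alpha * d
    return x * bessel_k1(x)

# Small spacing: rho -> 1, since K_1(x) ~ 1/x as x -> 0.
assert abs(edge_correlation(1.0, 1e-3) - 1.0) < 0.01
# Large spacing: rho ~ sqrt(pi * x / 2) * e^{-x}, from the large-x form in (22).
x = 12.0
approx = math.sqrt(math.pi * x / 2.0) * math.exp(-x)
assert abs(edge_correlation(1.0, x) - approx) / approx < 0.05
# And rho decreases monotonically with spacing, as stated in the text.
assert edge_correlation(1.0, 0.5) > edge_correlation(1.0, 2.0)
```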
4.2. Asymptotic Behavior

In the following, we summarize the assumptions for the planar ad hoc sensor network that we consider.

(A.1) \(n^2\) sensors are located on the grid \(I_n = [0 : 1 : n-1]^2\) with spacing \(d_n\), as shown in Fig. 1, and a fusion center is located at the center \((\lfloor n/2 \rfloor, \lfloor n/2 \rfloor)\).

(A.2) The observations \(\{Y_{ij}\}\) at the sensor nodes form a 2-D hidden (discrete-index) SFCAR Gauss-Markov random field on the lattice for each \(d_n > 0\), and the edge dependence factor is given by (21) and (23).

(A.3) The fusion center gathers the measurements from all nodes using minimum-hop routing. Note that the links in Fig. 1 are not only the Markov dependence edges but also the routing links. Minimum-hop routing requires a hop count of \(|i - \lfloor n/2 \rfloor| + |j - \lfloor n/2 \rfloor|\) to deliver \(Y_{ij}\) to the fusion center.

(A.4) The communication energy per link is \(E_c(d_n) = E_0 d_n^\nu\), where \(\nu \ge 2\) is the propagation loss factor of the wireless channel.

(A.5) Sensing requires energy, and the sensing energy per node is denoted by \(E_s\). Moreover, we assume that the measurement SNR increases linearly w.r.t. \(E_s\), i.e., \(\mathrm{SNR} = \beta E_s\) for some constant \(\beta\).

Henceforth, we consider various asymptotic scenarios and investigate the fundamental behavior of the ad hoc sensor network deployed over a correlated random field for statistical inference under assumptions (A.1)-(A.5). (Proofs are omitted due to limited space.) The sensor density \(\mu_n\) on \(I_n\) is given by

\[ \mu_n = \frac{n^2}{((n-1) d_n)^2}. \]

Assuming that the network is sufficiently large, the total information about the underlying field obtainable from the network is given by

\[ \mathrm{KLI}_T = n^2 K_s \quad \text{and} \quad \mathrm{MI}_T = n^2 I_s, \quad (24) \]

and the total energy consumed in the network is given by

\[ E = n^2 E_s + E_c(d_n) \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \big( |i - \lfloor n/2 \rfloor| + |j - \lfloor n/2 \rfloor| \big) = n^2 E_s + \Theta(n^3) E_c(d_n). \quad (25) \]

Note that knowledge of the per-node information rates \(K_s\) and \(I_s\) in (24), and of their properties w.r.t. the SNR and the sensor spacing \(d_n\), is critical for the development that follows; it was provided in the previous sections. We begin with the increasing-area case.

Theorem 3 (Infinite area and fixed density) For an ad hoc sensor network with a fixed and finite node density, the total amount of information increases linearly as the area increases, but under both information measures the amount of harvested information per unit energy decays to zero with rate

\[ \eta = \Theta\big( \mathrm{area}^{-1/2} \big) \quad (26) \]

for any non-trivial diffusion rate \(\alpha\), i.e., \(0 < \alpha < \infty\), as we increase the area.

Next, we consider the case in which the node density diminishes, i.e., \(d_n \to \infty\). This case is of particular interest at high SNR, since at high SNR less correlated samples yield larger per-node information. However, the per-node information is upper bounded as \(d_n \to \infty\), and the asymptotic behavior is given by the following theorem.

Theorem 4 As \(d_n \to \infty\), the per-node information rates \(K_s\) and \(I_s\) converge to \(K_s(0) = D(\mathcal{N}(0,1) \,\|\, \mathcal{N}(0, 1 + \mathrm{SNR}))\) and \(I_s(0) = \frac{1}{2}\log(1 + \mathrm{SNR})\), respectively, and the convergence rate is given by

\[ K_s(d_n) = K_s(0) - c_4 \sqrt{d_n}\, e^{-\alpha d_n} + o\big( \sqrt{d_n}\, e^{-\alpha d_n} \big), \quad (27) \]
\[ I_s(d_n) = I_s(0) - c'_4 \sqrt{d_n}\, e^{-\alpha d_n} + o\big( \sqrt{d_n}\, e^{-\alpha d_n} \big), \quad (28) \]

for constants \(c_4, c'_4 > 0\) depending on the SNR.

Theorem 4 can be proved using (18)-(19) and (21)-(22), and explains how much gain is obtained from less correlated observations by increasing the sensor spacing in 2-D. Fig. 2 shows \(K_s\) and \(E_c\) as functions of \(d_n\) for \(\alpha = 1\), \(c_4 = 1\) and 10 dB SNR. The gain in information behaves as \(\sqrt{d_n}\, e^{-\alpha d_n}\) for large \(d_n\), whereas the required per-link communication energy increases without bound, i.e., \(E_c(d_n) = E_0 d_n^\nu\) (\(\nu \ge 2\)).
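The \(\Theta(n^3)\) routing term in the energy expression (25) follows from a direct count of the minimum-hop distances in (A.3); a quick numerical confirmation (pure Python, our own naming):

```python
def total_hops(n):
    """Sum of minimum hop counts |i - n//2| + |j - n//2| over the n x n grid, as in (25)."""
    c = n // 2
    col = sum(abs(i - c) for i in range(n))
    # The double sum is separable: sum_{i,j} (|i-c| + |j-c|) = 2 * n * sum_i |i-c|.
    return 2 * n * col

# The routing term grows as Theta(n^3); the normalized sum approaches 1/2.
for n in (50, 100, 200):
    assert abs(total_hops(n) / n ** 3 - 0.5) < 0.02
```

Since \(\sum_i |i - \lfloor n/2 \rfloor| \approx n^2/4\), the total is \(\approx n^3/2\), which is the constant behind the \(\Theta(n^3)\) factor.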
Since the exponential term dominates the gain as \(d_n\) increases, the information gain obtained by increasing \(d_n\) decreases almost exponentially, and there is no significant gain from increasing the sensor spacing beyond some value. Hence, it is not effective in terms of energy efficiency to deploy a very sparse network aiming at less correlated samples at high SNR.

Fig. 2. Per-node information \(K_s\) and per-link communication energy \(E_c\) w.r.t. sensor spacing \(d_n\) (SNR = 10 dB, \(\alpha = 1\), \(c_4 = 1\)).

The per-link communication energy can be made arbitrarily small by decreasing the sensor spacing. To investigate the effect of diminishing communication energy \(E_c\) as \(d_n \to 0\), we now consider the asymptotic case in which the node density goes to infinity for a fixed coverage area. In this case, the per-node information decays to zero as \(d_n \to 0\), since \(\zeta \to 1/4\) as \(d_n \to 0\), and \(K_s(\zeta)\) and \(I_s(\zeta)\) converge to zero as \(\zeta \to 1/4\), as shown in Section 3.2. The asymptotic behavior in this case is given by the following theorem.

Theorem 5 (Infinite density) For the infinite-density model with a fixed coverage area, the per-node information decays to zero with rate

\[ K_s = c_5 \mu_n^{-1} + o\big( \mu_n^{-1} \big) \quad (29) \]

for some constant \(c_5\) as the node density \(\mu_n \to \infty\). Hence, the amount of total information per unit area (nats/m²) converges to the constant \(c_5\) as \(\mu_n \to \infty\). Furthermore, in the case of no sensing energy, a non-zero energy efficiency \(\eta\) is achievable if the propagation loss factor \(\nu = 3\), and even an infinite energy efficiency is achievable if \(\nu > 3\), as \(\mu_n \to \infty\) for a fixed area.²
² Of course, this depends on the assumption that \(E_c(d_n) = E_0 d_n^\nu\) for any \(d_n > 0\); this assumption may not be valid for small \(d_n\).

The finite total information for the infinite-density, fixed-area model agrees with intuition: the maximum information provided by samples from the continuous-index random field does not exceed the information between \(X(x, y)\) and \(Y(x, y)\), except in the case of spatially white fields. The propagation loss factor commonly satisfies \(\nu > 3\) for near-field propagation (i.e., \(d_n \to 0\)). Hence, infinite energy efficiency is achievable as we increase the node density for a fixed area when only communication energy is considered. Note that the total amount of information converges to a constant as the node density increases, so the infinite energy efficiency is achieved by the diminishing communication energy as \(d_n \to 0\). When the sensing energy is taken into account, however, infinite energy efficiency is not feasible, since in this case

\[ E = n^2 E_s + \Theta(n^{3-\nu}) \quad \text{and} \quad \eta = \frac{c_5 + o(1)}{n^2 E_s + \Theta(n^{3-\nu})}, \qquad \nu \ge 2, \quad (30) \]

as \(n \to \infty\) for a fixed coverage area. In this case the sensing energy \(n^2 E_s\) is the dominant factor limiting the energy efficiency, and the energy efficiency decreases to zero with rate \(O(\mu_n^{-1})\). Thus, it is critical for a densely deployed sensor network to minimize the sensing or processing energy of each sensor.

In the infinite-density model, we have observed that energy is an important factor in efficiency. We now investigate the change of total information w.r.t. energy. We fix the node density and consider two scenarios for increasing the required energy: one is to also fix the coverage area and increase the sensing energy; the other is to fix the sensing energy and increase the coverage area. We assume that the network size is sufficiently large so that our asymptotic analysis is valid.
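The rates in the two scenarios can be anticipated by a rough count under (A.1)-(A.5) together with Theorem 2; this back-of-the-envelope sketch is ours, not the paper's proof:

```latex
% Scenario 1: fixed area and density => n is fixed and E \approx n^2 E_s.
% With \mathrm{SNR} = \beta E_s = \beta E / n^2 and K_s = \Theta\!\big(\tfrac{1}{2}\log \mathrm{SNR}\big)
% at high SNR (Theorem 2):
\text{Total information} \;=\; n^2 K_s \;\approx\; n^2 \cdot \tfrac{1}{2}\log\!\big(\beta E / n^2\big) \;=\; O(\log E).
% Scenario 2: fixed E_s and d_n => E = n^2 E_s + \Theta(n^3) E_c(d_n) = \Theta(n^3),
% hence n = \Theta(E^{1/3}), while the per-node rate K_s is a constant:
\text{Total information} \;=\; n^2 K_s \;=\; \Theta\big(E^{2/3}\big).
```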
The energy asymptotics for the two scenarios are summarized in the following theorem.

Theorem 6 As the total energy \(E\) consumed by a sensor network with fixed node density and fixed area increases, the total information increases with rate

\[ \text{Total information} = O(\log E) \quad (31) \]

as \(E \to \infty\). When the node density and sensing energy are fixed and the increasing energy is used to enlarge the coverage area, on the other hand, the total amount of information increases with rate

\[ \text{Total information} = \Theta\big( E^{2/3} \big) \quad (32) \]

for any \(\nu > 0\), as \(E \to \infty\).

Theorem 6 suggests a guideline for investing excess energy. It is not efficient to invest energy in improving the quality of sensed samples from a limited area; this increases the total information only logarithmically. Rather, the energy should be spent on increasing the number of samples by enlarging the coverage area, even if this yields less accurate samples.

5. CONCLUSIONS

We have analyzed the asymptotic behavior of ad hoc sensor networks deployed over correlated random fields for statistical inference. Using our large deviations results that characterize the asymptotic information rate in 2-D for GMRFs under the CAR model, we have obtained fundamental scaling laws for the total information and the energy efficiency as the node density, coverage area and consumed energy change. The results provide guidelines for the design of sensor networks for statistical inference about 2-D correlated random fields such as temperature, humidity, or the density of a gas over an area.

6. REFERENCES

[1] F. Liese and I. Vajda, "On divergences and informations in statistics and information theory," IEEE Trans. Inform. Theory, vol. 52, no. 10, pp. 4394-4412, Oct. 2006.
[2] Y. Sung, L. Tong and H. V. Poor, "Neyman-Pearson detection of Gauss-Markov signals in noise: Closed-form error exponent and properties," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1354-1365, Apr. 2006.
[3] Y. Sung, X. Zhang, L. Tong and H. V. Poor, "Sensor configuration and activation for field detection in large sensor arrays," IEEE Trans. Signal Processing, vol. 56, no. 2, pp. 447-463, Feb. 2008.
[4] J.-F. Chamberland and V. V. Veeravalli, "How dense should a sensor network be for detection with correlated observations?," IEEE Trans. Inform. Theory, vol. 52, no. 11, pp. 5099-5106, Nov. 2006.
[5] A. Anandkumar, L. Tong and A. Swami, "Detection of Gauss-Markov random field on nearest-neighbor graph," in Proc. 2007 ICASSP, Hawaii, USA, Apr. 2007.
[6] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, New York: Chapman & Hall/CRC, 2005.
[7] Y. Sung, H. V. Poor and H. Yu, "Large deviations analysis for the detection of 2D hidden Gauss-Markov random fields using sensor networks," in Proc. 2008 ICASSP, Las Vegas, NV, Mar. 2008.
[8] Y. Sung, H. V. Poor and H. Yu, "How much information can one get from a wireless ad hoc sensor network over a correlated random field?," submitted to IEEE Trans. Inform. Theory, Apr. 2008.
[9] J. Besag, "On a system of two-dimensional recurrence equations," Journal of the Royal Statistical Society, Series B, vol. 43, no. 3, pp. 302-309, 1981.
[10] A. Erdélyi, Higher Transcendental Functions, Vol. II, New York: McGraw-Hill, 1953.
[11] P. Whittle, "On stationary processes in the plane," Biometrika, vol. 41, no. 3, pp. 434-449, Dec. 1954.
