Continuum Limits of Markov Chains with Application to Network Modeling

Yang Zhang, Student Member, IEEE, Edwin K. P. Chong, Fellow, IEEE, Jan Hannig, and Donald Estep

Abstract—In this paper we investigate the continuum limits of a class of Markov chains. The investigation of such limits is motivated by the desire to model very large networks. We show that under some conditions, a sequence of Markov chains converges in some sense to the solution of a partial differential equation. Based on such convergence we approximate Markov chains modeling networks with a large number of components by partial differential equations. While traditional Monte Carlo simulation for very large networks is practically infeasible, partial differential equations can be solved with reasonable computational overhead using well-established mathematical tools.

Index Terms—Continuum modeling, Markov chain, partial differential equation, large network modeling, wireless sensor network.

I. INTRODUCTION

Network modeling is an important tool in the analysis and design of networks. Many network characteristics of interest can be modeled by Markov chains, where Monte Carlo simulation has been the traditional approach [1]. With the enormous growth in the size and complexity of today's networks, their simulation becomes more computationally expensive in both time and hardware. Some effort has been made to exploit the computing power of distributed computer networks, such as parallel simulation techniques, where the number of processors needed in the simulation increases with the number of nodes in the network [2], [3]. However, for networks involving a very large number of nodes, Monte Carlo simulation eventually becomes practically infeasible. In this paper we address this problem by focusing on the global characteristics of an entire network rather than those of its individual components.
The idea is to approximate the underlying Markov chain modeling a certain network characteristic by a partial differential equation (PDE). As a concrete familiar example, which we present in Section II, consider multiple i.i.d. (independent and identically distributed) random walks of M particles on a network consisting of N points. For any vector x, let x^T denote its transpose. Let the Markov chain modeling the network characteristic be X_N(k) = [X_N(k,1), ..., X_N(k,N)]^T ∈ R^N, where X_N(k,n) is the number of particles at point n at time k. If we treat N and M as indices that grow, this defines a family of Markov chains indexed by N and M. We show that as M → ∞ and N → ∞, X_N(k) converges in some sense to its continuum limit, a deterministic function with continuous time and space variables. Under certain conditions, it is possible to characterize such a function as the solution of a PDE [4]–[6]. This itself is not a new result, but helps to illustrate our aim. Indeed, our development here is motivated by the network modeling strategy in [7] and the need for a rigorous description of its underlying limiting process.

This research was supported in part by NSF grant ECCS-0700559. Yang Zhang and Edwin K. P. Chong are with the Department of Electrical and Computer Engineering, Colorado State University, Ft. Collins, CO 80523-1373 (yzhangcn@mail.engr.colostate.edu, edwin.chong@colostate.edu). Jan Hannig is with the Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3260 (jan.hannig@unc.edu). Donald Estep is with the Department of Mathematics and Department of Statistics, Colorado State University, Fort Collins, CO 80523-1373 (estep@math.colostate.edu). A preliminary version of parts of the work of this paper was presented at the 49th IEEE Conference on Decision and Control.
We illustrate in Section III the convergence of the sequence of Markov chains to the PDE in a two-step procedure. Suppose the evolution of X_N(k) is governed by a certain stochastic difference equation with a "normalizing" parameter M. Let x_N(k) be the normalized deterministic sequence governed by the corresponding "expected" and deterministic difference equation. First, we show in Section III-B that X_N(k)/M is close to x_N(k), in the sense that as M → ∞, both their continuous-time extensions converge to the solution of an ordinary differential equation (ODE). Second, we show in Section III-C that as N → ∞, x_N(k) converges to the solution of a PDE. Therefore, as M → ∞ and N → ∞, X_N(k)/M converges to the PDE solution.

Our procedure provides an approach to approximating Markov chains that model large networks by PDEs. PDEs are widely used to formulate time-space phenomena in physics, chemistry, ecology, and economics (e.g., [8]–[11]), and there are well-established mathematical tools for solving them, such as Matlab and Comsol, which use the finite element method [12] or the finite difference method [13]. In contrast to Monte Carlo simulation, our approach enables us to use these tools to greatly reduce computation time, which makes it possible to carry out the analysis, design, and optimization of very large networks. We present in Section IV an example of the application of our approach to the modeling of a large wireless sensor network. In this example, we derive an explicit nonlinear diffusion-convection PDE whose solution captures the dynamic behavior of the data message queues in the network. We show that although the PDE approximation takes only a tiny fraction of the computation time of the Monte Carlo simulation, there is a strong agreement between their simulation results.
Continuum modeling has been well established in fields such as physics, mechanics, transportation, and biology (e.g., [14]–[17]). Its applications in communication networks, however, are relatively new and rare. Among these, to our best knowledge, our approach is the first to address the time-space characteristics of communication networks with a large number of nodes. In contrast, for example, [18]–[20] deal with networks with heavy traffic instead of a large number of nodes; [21], [22] present scaling laws of the network traffic without characterizing the actual traffic over time and space; and [23], [24], which use mean field methods, only keep track of statistical features of the networks such as the fraction of nodes in each network state.

II. CONTINUUM LIMIT OF MULTIPLE RANDOM WALKS

In this section we present an illustrative example of approximating multiple i.i.d. random walks by a PDE. First consider a single random walk on a one-dimensional network consisting of N points uniformly placed over D = [0,1], as shown in Fig. 1. Hence the distance between two neighboring points is ds = 1/(N+1). At each time instant, the particle at point n, where n = 1, ..., N, randomly chooses to move to its right or left neighboring point with probability P_r(n) or P_l(n), respectively. Let the length between two time instants be dt = 1/M. We set dt = ds², which is a standard time-space scaling approach to ensuring the convergence of the difference equation to a PDE. We assume a "sink" boundary condition, i.e., the particle vanishes when it reaches the ends of D (though "walls" at the boundary are equally treatable).

Fig. 1. An illustration of a one-dimensional single random walk.
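The single random walk just described is easy to simulate. The following sketch is our own illustration, not part of the paper: the position-dependent move probabilities p_r and p_l are assumed for concreteness, and one particle is run on the grid of N interior points with sink boundaries until it is absorbed.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 99                       # interior grid points
ds = 1.0 / (N + 1)           # spacing on D = [0, 1]

def p_r(s):                  # probability of a step to the right at location s
    return 0.25 + 0.05 * s   # assumed: slight rightward bias growing in s

def p_l(s):                  # probability of a step to the left at location s
    return 0.25

def walk(n0, max_steps=10**6):
    """Run one walk from interior point n0 until it reaches a sink
    (point 0 or N+1); return the number of steps taken."""
    n = n0
    for k in range(max_steps):
        u = rng.random()
        s = n * ds
        if u < p_r(s):
            n += 1
        elif u < p_r(s) + p_l(s):
            n -= 1
        # otherwise the particle stays put this step
        if n == 0 or n == N + 1:   # sink boundary: the particle vanishes
            return k + 1
    return max_steps

print(walk(n0=50))               # absorption time of one realization
```

A "wall" boundary would instead reflect the particle back into 1, ..., N at the boundary check rather than absorbing it.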
Now consider M random walks on the same network, where the particle in each random walk behaves independently and identically as in the single random walk described above. Let B_i(k,n) be the Bernoulli random variable representing the presence of the i-th particle at point n at time instant k, where k = 0, 1, ... and n = 1, ..., N. Define B_i(k) = [B_i(k,1), ..., B_i(k,N)]^T ∈ R^N. According to the behavior of the particle in the single random walk, for i = 1, ..., M,

    B_i(k+1,n) − B_i(k,n) =
        B_i(k,n−1),   with probability P_r(n−1);
        B_i(k,n+1),   with probability P_l(n+1);
        −B_i(k,n),    with probability P_r(n) + P_l(n);
        0,            otherwise,

where B_i(k,n) with n ≤ 0 or n ≥ N+1 are defined to be zero. Let the function F_N(x, U(k)), where U(k) are i.i.d. and do not depend on x, be such that for i = 1, ..., M,

    B_i(k+1) = B_i(k) + F_N(B_i(k), U(k)).    (1)

Then for x = [x_1, ..., x_N]^T, the n-th component of F_N(x, U(k)), where n = 1, ..., N, is

    x_{n−1},   with probability P_r(n−1);
    x_{n+1},   with probability P_l(n+1);
    −x_n,      with probability P_r(n) + P_l(n);
    0,         otherwise,    (2)

where x_n with n ≤ 0 or n ≥ N+1 are defined to be zero. Let X_N(k,n) be the number of particles at point n at time k. Then

    X_N(k,n) = Σ_{i=1}^M B_i(k,n).    (3)

Define X_N(k) = [X_N(k,1), ..., X_N(k,N)]^T, which forms a discrete-time Markov chain with state space R^N. Since F_N is linear, it follows from (3) that

    X_N(k+1) = X_N(k) + F_N(X_N(k), U(k)).

Let f_N(x) = E F_N(x, U(k)), x ∈ R^N. It follows from (2) that for x = [x_1, ..., x_N]^T, the n-th component of f_N(x), where n = 1, ..., N, is

    P_r(n−1) x_{n−1} + P_l(n+1) x_{n+1} − (P_r(n) + P_l(n)) x_n,    (4)

where x_n with n ≤ 0 or n ≥ N+1 are defined to be zero.
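The mean formula (4) can be checked numerically against the definition f_N(x) = E F_N(x, U(k)): average many independent draws of F_N and compare with (4). The sketch below is our own; the constant probabilities P_r, P_l are assumed, and the components of each draw are sampled independently, which changes the joint law of F_N but not its mean, and the mean is all the comparison uses.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 20
P_r = np.full(N + 2, 0.3)   # assumed constant move probabilities,
P_l = np.full(N + 2, 0.2)   # indexed n = 0 .. N+1

def f_N(x):
    """The n-th component of f_N(x) from (4), with x_n = 0 outside 1..N."""
    xp = np.r_[0.0, x, 0.0]
    n = np.arange(1, N + 1)
    return (P_r[n - 1] * xp[n - 1] + P_l[n + 1] * xp[n + 1]
            - (P_r[n] + P_l[n]) * xp[n])

def F_N_draw(x):
    """One draw using the marginal move probabilities of (2)."""
    xp = np.r_[0.0, x, 0.0]
    out = np.zeros(N)
    for n in range(1, N + 1):
        u = rng.random()
        if u < P_r[n - 1]:
            out[n - 1] = xp[n - 1]
        elif u < P_r[n - 1] + P_l[n + 1]:
            out[n - 1] = xp[n + 1]
        elif u < P_r[n - 1] + P_l[n + 1] + P_r[n] + P_l[n]:
            out[n - 1] = -xp[n]
        # otherwise the component is 0
    return out

x = rng.random(N)                                   # an arbitrary state
est = np.mean([F_N_draw(x) for _ in range(50_000)], axis=0)
mc_gap = np.max(np.abs(est - f_N(x)))
print(mc_gap)                                       # small Monte Carlo error
```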
By (1) and the linearity of F_N, for i = 1, ..., M,

    E B_i(k+1) = E B_i(k) + f_N(E B_i(k)).    (5)

Notice that, since the random walks are i.i.d., E B_i does not depend on i. Define a deterministic sequence x_N(k) by

    x_N(k+1) = x_N(k) + f_N(x_N(k)),    (6)

where

    x_N(0) = X_N(0)/M  a.s. (almost surely).    (7)

We seek to approximate X_N(k) by a continuum model, where the time and space indices k and n are made continuous as N → ∞ and M → ∞, in the following two steps. First, define X_oN(t̃) = X_N(⌊M t̃⌋)/M, the continuous-time extension of X_N(k) by piecewise-constant time extension with interval length dt = 1/M, scaled by 1/M. Second, define X_pN(t,s) to be the continuous-space extension of X_oN(t̃) by piecewise-constant space extension on D with interval length ds. Notice that as N → ∞, ds → 0. Thus X_pN is the continuous-time-space extension of X_N(k). Similarly, define x_oN(t̃) = x_N(⌊M t̃⌋), the piecewise-constant continuous-time extension of x_N(k), and x_pN(t,s), the piecewise-constant continuous-space extension of x_oN(t̃). Thus x_pN is the continuous-time-space extension of x_N(k).

Now we show that for M sufficiently large, X_pN, the continuous-time-space extension of X_N(k), is close to x_pN, the continuous-time-space extension of x_N(k). By (3) and the strong law of large numbers (SLLN), for each k,

    lim_{M→∞} X_N(k)/M = E B_i(k)  a.s.

By this and (7), lim_{M→∞} x_N(0) = E B_i(0) a.s. By (5) and (6), x_N(k) and E B_i(k) satisfy the same difference equation. Then we have for each k, lim_{M→∞} x_N(k) = E B_i(k) a.s. Hence for each k,

    lim_{M→∞} X_N(k)/M = x_N(k)  a.s.

Therefore, X_oN and x_oN are close for large M in the sense that

    lim_{M→∞} ||X_oN(t̃) − x_oN(t̃)||_∞^(N) = 0  a.s.,    (8)

where ||·||_∞^(N) is the ∞-norm on R^N.
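The closeness in (8) can be observed numerically: simulate M walks with a sink boundary, iterate the deterministic recursion (6) from the initial condition (7), and track the sup-norm gap between X_N(k)/M and x_N(k). The sketch below is our own; constant move probabilities are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

N, K = 10, 50                        # grid size and number of steps compared
P_r = np.full(N + 2, 0.3)            # assumed move probabilities
P_l = np.full(N + 2, 0.2)
P_r[0] = P_l[0] = 0.0                # absorbed particles (at index 0) stay put

def drift(x):
    """f_N(x) from (4), with x_n = 0 outside 1..N."""
    xp = np.r_[0.0, x, 0.0]
    n = np.arange(1, N + 1)
    return (P_r[n - 1] * xp[n - 1] + P_l[n + 1] * xp[n + 1]
            - (P_r[n] + P_l[n]) * xp[n])

def sup_error(M):
    """max over k <= K of || X_N(k)/M - x_N(k) ||_inf."""
    pos = rng.integers(1, N + 1, size=M)                 # initial positions
    x = np.bincount(pos, minlength=N + 2)[1:N + 1] / M   # x_N(0), cf. (7)
    err = 0.0
    for _ in range(K):
        u = rng.random(M)
        go_r = u < P_r[pos]
        go_l = (~go_r) & (u < P_r[pos] + P_l[pos])
        pos = pos + go_r - go_l
        pos[pos == N + 1] = 0                            # both sinks collapse to 0
        x = x + drift(x)                                 # deterministic step (6)
        counts = np.bincount(pos, minlength=N + 2)[1:N + 1]
        err = max(err, np.max(np.abs(counts / M - x)))
    return err

e_small, e_large = sup_error(100), sup_error(100_000)
print(e_small, e_large)              # the gap shrinks as M grows
```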
Note that ||X_pN(·,t) − x_pN(·,t)||_∞^(D) = ||X_oN − x_oN||_∞^(N), where ||·||_∞^(D) is the ∞-norm on R^D, the space of functions from D to R. Then by (8), X_pN and x_pN are close to each other for large M in the sense that

    lim_{M→∞} ||X_pN(·,t) − x_pN(·,t)||_∞^(D) = 0  a.s.    (9)

Therefore, we can approximate X_pN by x_pN for M sufficiently large.

Next we show that as N → ∞, x_pN satisfies a certain PDE that is easily solvable. By (4) we have for n = 1, ..., N,

    x_N(k+1,n) − x_N(k,n) = P_r(n−1) x_N(k,n−1) + P_l(n+1) x_N(k,n+1) − (P_r(n) + P_l(n)) x_N(k,n),

where x_N(k,n) with n ≤ 0 or n ≥ N+1 are defined to be zero. Assume P_l(n) = p_l(n ds) and P_r(n) = p_r(n ds), where p_l(s) and p_r(s) are real-valued functions defined on D. Then by the definition of x_pN, it follows that for s ∈ D and t > 0,

    x_pN(t+dt,s) − x_pN(t,s) = p_r(s−ds) x_pN(t,s−ds) + p_l(s+ds) x_pN(t,s+ds) − (p_r(s) + p_l(s)) x_pN(t,s).    (10)

To ensure a finite non-degenerate limit, we assume

    p_l(s) = b(s) + c_l(s) ds  and  p_r(s) = b(s) + c_r(s) ds.

Define c = c_l − c_r. We call b the diffusion coefficient and c the convection coefficient, for a greater b means more rapid diffusion and a greater c means a larger directional bias. Assume that b ∈ C² and c ∈ C¹. Assume that x_pN is twice continuously differentiable in s. Put into (10) the Taylor expansions

    x_pN(t,s±ds) = x_pN(t,s) ± (∂x_pN/∂s)(t,s) ds + (∂²x_pN/∂s²)(t,s) ds²/2 + o(ds²),    (11)

    b(s±ds) = b(s) ± b_s(s) ds + b_ss(s) ds²/2 + o(ds²),    (12)

and

    c(s±ds) = c(s) ± c_s(s) ds + o(ds),    (13)

where a single subscript s denotes the first derivative and a double subscript ss denotes the second derivative.
Then we have

    x_pN(t+dt,s) − x_pN(t,s) = b(s) (∂²x_pN/∂s²)(t,s) ds² + (2b_s(s) + c(s)) (∂x_pN/∂s)(t,s) ds² + (b_ss(s) + c_s(s)) x_pN(t,s) ds² + o(ds²).    (14)

Divide both sides of (14) by dt = ds² to get

    (x_pN(t+dt,s) − x_pN(t,s))/dt = b(s) (∂²x_pN/∂s²)(t,s) + (2b_s(s) + c(s)) (∂x_pN/∂s)(t,s) + (b_ss(s) + c_s(s)) x_pN(t,s) + o(ds²)/ds².

As N → ∞, ds → 0, and hence dt = ds² → 0. Assume that x_pN is continuously differentiable in t. Then by taking the limit as N → ∞ and rearranging, we get a PDE that x_pN satisfies:

    ẋ_pN(t,s) = (∂/∂s)( b(s) (∂x_pN/∂s)(t,s) ) + (∂/∂s)( (b_s(s) + c(s)) x_pN(t,s) ),

for t > 0 and s ∈ D, with boundary condition x_pN(t,s) = 0 for s on the boundary of D.

As N → ∞, dt = ds² → 0, and hence M = 1/dt = 1/ds² → ∞. Then by (9), for N sufficiently large, X_pN, the continuous-time-space extension of X_N(k), is close to x_pN, the continuous-time-space extension of x_N(k). Therefore, we can approximate X_N(k) by the solution of the above PDE, called the one-dimensional diffusion-convection equation, which can be easily solved [25]. Note that our derivation here differs from that of the well-studied Fokker-Planck equation (also known as the Kolmogorov forward equation) [26], which originates from the study of the probability density of a Wiener process.

This motivational example raises some questions that must be answered by the convergence analysis of the underlying limiting process. First, general networks may exhibit more complex behaviors. For example, F_N might no longer be linear, and the SLLN might not apply in many scenarios since node behaviors are not necessarily i.i.d. Specifically, the analysis above does not apply to the network Markov chain in [7].
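The recursion (10) is itself an explicit finite-difference scheme for the limit PDE, so the one-dimensional diffusion-convection equation can be integrated directly with the time step dt = ds². The sketch below is our own, with assumed constant coefficients (b = 0.25, c_l = 0, c_r = 5, so c = c_l − c_r = −5; p_r > p_l gives a rightward drift):

```python
import numpy as np

N = 200
ds = 1.0 / (N + 1)
dt = ds ** 2                          # the scaling dt = ds^2
s = np.linspace(0.0, 1.0, N + 2)      # grid points; the endpoints are sinks

p_l = 0.25 + 0.0 * ds                 # b(s) + c_l(s) ds with b = 0.25, c_l = 0
p_r = 0.25 + 5.0 * ds                 # b(s) + c_r(s) ds with c_r = 5

x = np.exp(-200.0 * (s - 0.3) ** 2)   # initial bump centered at s = 0.3
x[0] = x[-1] = 0.0                    # sink boundary condition

def step(x):
    """One explicit update of scheme (10)."""
    y = x.copy()
    y[1:-1] = (x[1:-1]
               + p_r * x[:-2] + p_l * x[2:]
               - (p_r + p_l) * x[1:-1])
    y[0] = y[-1] = 0.0
    return y

com0 = np.sum(s * x) / np.sum(x)      # center of mass at t = 0
for _ in range(int(0.05 / dt)):       # integrate to t = 0.05
    x = step(x)
com1 = np.sum(s * x) / np.sum(x)
print(com0, com1)                     # the bump drifts to the right
```

Because p_r + p_l < 1, each update is a substochastic averaging step, so the scheme is stable and preserves nonnegativity.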
To find the conditions under which (8) holds in more general settings, in Section III-B we apply Kushner's weak convergence theorem in [4] to a more general class of systems modeled by Markov chains. Moreover, we need to show in what sense and under what conditions X_pN converges to the solution of the PDE. We analyze such convergence and provide sufficient conditions for it in Section III-C.

III. CONTINUUM LIMITS OF MARKOV CHAINS

In this section we analyze the convergence of a sequence of Markov chains to the solution of a PDE in a two-step procedure. We provide sufficient conditions for this convergence.

A. General Setting

Consider N points placed over a Euclidean domain D representing a spatial region. We assume that these points form a uniform grid, though our approach can later be generalized to nonuniform cases. We will refer to these N points in D as grid points and denote the distance between any two neighboring grid points by ds_N. Consider a discrete-time Markov chain

    X_N(k) = [X_N(k,1), ..., X_N(k,N)]^T    (15)

with state space R^N. Here X_N(k,n) is the real-valued state of point n at time k, where n = 1, ..., N is a spatial index and k = 0, 1, ... is a temporal index. Suppose that the evolution of X_N(k) is described by the stochastic difference equation

    X_N(k+1) = X_N(k) + F_N(X_N(k)/M, U(k)),    (16)

where U(k) are i.i.d. and do not depend on the state X_N(k), M is a "normalizing" parameter, and F_N is a given function. Let

    f_N(x) = E F_N(x, U(k)),  x ∈ R^N.    (17)

Define a deterministic sequence x_N(k) by

    x_N(k+1) = x_N(k) + (1/M) f_N(x_N(k)),    (18)

where x_N(0) = X_N(0)/M a.s. In the next subsection, we show that under certain conditions, X_N(k)/M and x_N(k) are close in some sense.

B. Convergence to ODE

Let X_oN(t̃) be the continuous-time extension of X_N(k) by piecewise-constant time extension with interval length 1/M, scaled by 1/M, i.e., for arbitrary t̃ ∈ R,

    X_oN(t̃) = X_N(⌊M t̃⌋)/M.    (19)

It follows that for each k, X_oN(k/M) = X_N(k)/M. Similarly we define x_oN(t̃), the continuous-time extension of x_N(k), by

    x_oN(t̃) = x_N(⌊M t̃⌋).    (20)

For fixed T̃_N > 0, let D_N[0, T̃_N] be the space of R^N-valued càdlàg functions on [0, T̃_N], i.e., functions that are right-continuous at each t ∈ [0, T̃_N) and have left-hand limits at each t ∈ (0, T̃_N]. As defined in (19) and (20) respectively, both X_oN(t̃) and x_oN(t̃) with t̃ ∈ [0, T̃_N] are in D_N[0, T̃_N]. Since both X_oN(t̃) and x_oN(t̃) depend on M, each of them forms a sequence of functions in D_N[0, T̃_N] indexed by M = 1, 2, .... Define the ∞-norm ||·||_∞^(o) on D_N[0, T̃_N] by

    ||x||_∞^(o) = max_{n=1,...,N} sup_{t ∈ [0, T̃_N]} |x_n(t)|,

where x_n is the n-th component of x. A sequence of functions x_M ∈ D_N[0, T̃_N] is said to converge uniformly to a function x ∈ D_N[0, T̃_N] if ||x_M − x||_∞^(o) → 0 as M → ∞. In this paper, we use the notation "⇒" for weak convergence and "→_P" for convergence in probability.

Let f_N be defined as in (17). Now we present a lemma stating that under some conditions, as M → ∞, X_oN converges uniformly (in probability) to a limiting function y, the solution of the ODE ẏ = f_N(y), on [0, T̃_N], and x_oN converges uniformly to the same solution on [0, T̃_N].

Lemma 1: Assume:
(1a) there exists an identically distributed sequence {λ(k)} of integrable random variables such that for each k and x, |F_N(x, U(k))| ≤ λ(k) a.s.;
(1b) the function F_N(x, U(k)) is continuous in x a.s.; and
(1c) the ODE ẏ = f_N(y) has a unique solution on [0, T̃_N] for any initial condition y(0).
Suppose that as M → ∞, X_oN(0) →_P y(0) and x_oN(0) → y(0). Then, as M → ∞, ||X_oN − y||_∞^(o) →_P 0 and ||x_oN − y||_∞^(o) → 0 on [0, T̃_N], where y is the unique solution of ẏ = f_N(y) with initial condition y(0).

To prove Lemma 1, we first present a lemma on weak convergence due to Kushner [4].

Lemma 2: Assume:
(2a) the set {|F_N(x, U(k))| : k ≥ 0} is uniformly integrable;
(2b) for each k and each bounded random variable X,

    lim_{δ→0} E sup_{|Y| ≤ δ} |F_N(X, U(k)) − F_N(X + Y, U(k))| = 0;

and
(2c) there is a function f̂_N(·) [continuous by (2b)] such that as n → ∞,

    (1/n) Σ_{k=0}^n F_N(x, U(k)) →_P f̂_N(x).

Suppose that ẏ = f̂_N(y) has a unique solution on [0, T̃_N] for each initial condition, and that X_oN(0) ⇒ y(0). Then as M → ∞, ||X_oN − y||_∞^(o) ⇒ 0 on [0, T̃_N].

We note that in Kushner's work, the convergence of X_oN to y is stated in terms of the Skorokhod norm [4], but it is equivalent to the ∞-norm in our case, where the functions are defined on finite time intervals [27].

We now prove Lemma 1 by showing that the assumptions (2a)–(2c) in Lemma 2 hold under the assumptions (1a)–(1c) in Lemma 1.

Proof of Lemma 1:
1) Since λ(k) is integrable, as a → ∞,

    E |λ(k)| 1_{|λ(k)| > a} → 0,

where 1_A is the indicator function of the set A. By Assumption (1a), for each k and x, |F_N(x, U(k))| ≤ λ(k) a.s. Therefore for each x and a > 0,

    E |F_N(x, U(k))| 1_{|F_N(x,U(k))| > a} ≤ E |λ(k)| 1_{|F_N(x,U(k))| > a} ≤ E |λ(k)| 1_{|λ(k)| > a}.

Hence as a → ∞,

    sup_{k ≥ 0} E |F_N(x, U(k))| 1_{|F_N(x,U(k))| > a} → 0,

i.e., the family {|F_N(x, U(k))| : k ≥ 0} is uniformly integrable and Assumption (2a) holds.
2) By Assumption (1b), F_N(x, U(k)) is continuous in x a.s. Then for each bounded X and each k,

    lim_{δ→0} sup_{|Y| ≤ δ} |F_N(X, U(k)) − F_N(X + Y, U(k))| = 0  a.s.

By Assumption (1a), for each x and each k, there exists an integrable random variable λ(k) such that |F_N(x, U(k))| ≤ λ(k) a.s. It follows that for each bounded X, each k, and each Y such that |Y| ≤ δ,

    |F_N(X, U(k)) − F_N(X + Y, U(k))| ≤ |F_N(X, U(k))| + |F_N(X + Y, U(k))| ≤ 2λ(k).

Hence for each δ,

    sup_{|Y| ≤ δ} |F_N(X, U(k)) − F_N(X + Y, U(k))| ≤ 2λ(k),

an integrable random variable. By the dominated convergence theorem,

    lim_{δ→0} E sup_{|Y| ≤ δ} |F_N(X, U(k)) − F_N(X + Y, U(k))| = E lim_{δ→0} sup_{|Y| ≤ δ} |F_N(X, U(k)) − F_N(X + Y, U(k))| = 0.

Hence Assumption (2b) holds.
3) Since U(k) are i.i.d., by the weak law of large numbers and the definition of f_N in (17), as n → ∞,

    (1/n) Σ_{k=0}^n F_N(x, U(k)) →_P f_N(x).

Hence Assumption (2c) holds.
Then, by Lemma 2, as M → ∞, ||X_oN − y||_∞^(o) ⇒ 0 on [0, T̃_N]. For a sequence of random processes {X_n} and a constant A, X_n ⇒ A if and only if X_n →_P A. Therefore, as M → ∞, ||X_oN − y||_∞^(o) →_P 0 on [0, T̃_N]. The same argument implies the deterministic convergence of x_oN: as M → ∞, ||x_oN − y||_∞^(o) → 0 on [0, T̃_N].

Based on Lemma 1, we get the following lemma, which states that X_oN and x_oN are close with high probability when M is large.

Lemma 3: Let the assumptions in Lemma 1 hold. Then for any sequence {ζ_N}, for each N, and for M sufficiently large, we have

    P{||X_oN − x_oN||_∞^(o) > ζ_N} ≤ 1/N²

on [0, T̃_N].

Proof: By Lemma 1, for each N, as M → ∞, ||X_oN − y||_∞^(o) →_P 0 and ||x_oN − y||_∞^(o) → 0 on [0, T̃_N].
By the triangle inequality

    ||X_oN − x_oN||_∞^(o) ≤ ||X_oN − y||_∞^(o) + ||x_oN − y||_∞^(o),

it follows that as M → ∞, ||X_oN − x_oN||_∞^(o) →_P 0 on [0, T̃_N]. This finishes the proof.

Since X_oN and x_oN are the piecewise-constant continuous-time extensions of X_N and x_N, respectively, we have the following corollary.

Corollary 1: Fix T̃_N and let K̃_N = ⌊T̃_N M⌋. Let the assumptions in Lemma 1 hold. Then for any sequence {ζ_N}, for each N, and for M sufficiently large, we have

    P{ max_{k=0,...,K̃_N; n=1,...,N} |X_N(k,n)/M − x_N(k,n)| > ζ_N } ≤ 1/N².

We use Lemma 3 and Corollary 1 in the next subsection.

C. Convergence to PDE

In the last subsection, we stated conditions under which the continuous-time extensions of X_N(k) and x_N(k) are close asymptotically (as M → ∞) with high probability. In this subsection, we further let N → ∞ and state conditions under which x_N(k) is close asymptotically to the solution of a PDE. This leads to the convergence of X_N(k)/M to the PDE solution as M → ∞ and N → ∞.

Assume that the domain D introduced in Section III-A is compact and convex, and let w: D → R be in C². Given a fixed N, let V_N be the set of the N grid points in D. Let y_N be the vector in R^N composed of the values of w at the grid points v_N(n) ∈ V_N, n = 1, ..., N, i.e., y_N = [w(v_N(1)), ..., w(v_N(N))]^T. Given s ∈ D, let {s_N} ⊂ D be a sequence of grid points such that s_N → s as N → ∞, where for each N, s_N is a grid point in V_N. Let f_N(y_N, s_N) be the component of the vector f_N(y_N) corresponding to the location s_N. For example, for N = 5, if s_5 = v_5(4) in V_5, then f_5(y_5, s_5) is the 4th component of the vector f_5(y_5).
Assume that there exist sequences {δ_N}, {β_N}, {γ_N}, and {ρ_N}, functions f and h, and 0 < c < ∞, such that as N → ∞, δ_N → 0, δ_N/β_N → 0, γ_N → 0, ρ_N → 0, and:

• for any s_N such that s_N → s, where s is in the interior of D, there exists a sequence of functions φ_N: D → R such that

    f_N(y_N, s_N)/δ_N = f(s_N, w(s_N), ∇w(s_N), ∇²w(s_N)) + φ_N(s_N),    (21)

and for N sufficiently large,

    |φ_N(s_N)| ≤ c γ_N;    (22)

• for any s_N such that s_N → s, where s is on the boundary of D, there exists a sequence of functions ϕ_N: D → R such that

    f_N(y_N, s_N)/β_N = h(s_N, w(s_N), ∇w(s_N), ∇²w(s_N)) + ϕ_N(s_N),    (23)

and for N sufficiently large, |ϕ_N(s_N)| ≤ c ρ_N.

Here, ∇^i w represents all the i-th order derivatives of w, where i = 1, 2.

These assumptions are technical conditions on the asymptotic behavior of the sequence of functions f_N. The basic idea is that f_N(y_N, s_N) is asymptotically close to some function of terms that look like the right-hand side of a time-dependent PDE. Typically, checking these conditions amounts to a simple algebraic exercise. A concrete example is given in the next section.

The basic idea underlying the analysis in the remainder of this subsection is this. Recall that x_N(k) is defined by (18). Suppose we associate the discrete time k with points on the real line spaced apart by a distance proportional to δ_N. Then the above technical assumptions imply that x_N(k) is, in some sense, close to the solution of a PDE of the form ż = f(s, z, ∇z, ∇²z) with boundary condition h(s, z, ∇z, ∇²z) = 0. Because the Markov chain X_N(k)/M is close to x_N(k), as established in the last subsection, it is also close to the solution of the PDE. The remainder of this subsection is devoted to developing this argument rigorously.

Fix T > 0.
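For the random-walk example of Section II, checking the interior condition (21) with δ_N = ds² is exactly the Taylor expansion carried out there, and a computer algebra system can do it mechanically. A sketch of the check using sympy (our own illustration; b, c_l, c_r, and w are symbolic placeholders for the coefficient functions and the test function):

```python
import sympy as sp

s, ds = sp.symbols('s ds')
b = sp.Function('b')(s)
c_l = sp.Function('c_l')(s)
c_r = sp.Function('c_r')(s)
w = sp.Function('w')(s)

p_l = b + c_l * ds          # p_l(s) = b(s) + c_l(s) ds
p_r = b + c_r * ds          # p_r(s) = b(s) + c_r(s) ds

# The interior component of f_N at location s, cf. (4) with P(n) = p(n ds):
expr = (p_r.subs(s, s - ds) * w.subs(s, s - ds)
        + p_l.subs(s, s + ds) * w.subs(s, s + ds)
        - (p_r + p_l) * w)

# Expand in ds: the O(1) and O(ds) terms cancel, so the ds^2 coefficient
# is the candidate right-hand side f in (21), with delta_N = ds^2.
lead = sp.series(expr, ds, 0, 3).removeO().expand().coeff(ds, 2)

c = c_l - c_r               # convection coefficient
target = (b * w.diff(s, 2) + (2 * b.diff(s) + c) * w.diff(s)
          + (b.diff(s, 2) + c.diff(s)) * w)

gap = sp.simplify((lead - target).doit())
print(gap)                  # the two expressions agree
```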
Assume that there exists a function z: [0,T] × D → R that solves the PDE

    ż(t,s) = f(s, z(t,s), ∇z(t,s), ∇²z(t,s)),    (24)

with boundary condition h(s, z(t,s), ∇z(t,s), ∇²z(t,s)) = 0 and initial condition z(0,s) = z_0(s). Here, ∇^i z(t,s) represents all the i-th order partial derivatives of z(t,s) with respect to s, where i = 1, 2. Define

    dt_N = δ_N/M.    (25)

Define K_N = ⌊T/dt_N⌋ and t_N(k) = k dt_N. Define z_N(k,n) = z(t_N(k), v_N(n)) and let z_N(k) = [z_N(k,1), ..., z_N(k,N)]^T ∈ R^N. Denote the ∞-norm on R^N by ||·||_∞^(N); that is, for x ∈ R^N with n-th element x(n),

    ||x||_∞^(N) = max_{1 ≤ n ≤ N} |x(n)|.

Denote the ∞-norm on R^{N×K_N} also by ||·||_∞^(N); that is, for x = [x(1), ..., x(K_N)] ∈ R^{N×K_N}, where x(k) = [x(k,1), ..., x(k,N)]^T ∈ R^N for k = 1, ..., K_N, we have

    ||x||_∞^(N) = max_{k=1,...,K_N; n=1,...,N} |x(k,n)|.

Now we present a lemma on the relationship between z_N(k) and f_N.

Lemma 4: Assume that z is continuously differentiable in t. Then for each N, there exists u_N(k) ∈ R^N such that for k = 0, ..., K_N − 1,

    z_N(k+1) − z_N(k) = (1/M) f_N(z_N(k)) + dt_N u_N(k),    (26)

and

    ||u_N||_∞^(N) = O(max{γ_N, dt_N}),    (27)

where u_N = [u_N(0), ..., u_N(K_N − 1)] ∈ R^{N×K_N}.

Proof: Since z is continuously differentiable in t, there exists 0 < c_1 < ∞ such that for each N, for k = 0, ..., K_N − 1 and n = 1, ..., N, there exists a function r_N: [0,T] × D → R such that

    (z_N(k+1,n) − z_N(k,n))/dt_N = (z(t_N(k+1), v_N(n)) − z(t_N(k), v_N(n)))/dt_N = ż(t_N(k), v_N(n)) + r_N(t_N(k), v_N(n)),    (28)

and for N sufficiently large, |r_N(t_N(k), v_N(n))| < c_1 dt_N.
By (21) and (24), there exists 0 < c_2 < ∞ such that for each N, for k = 0, ..., K_N − 1 and n = 1, ..., N, there exists a function φ_N: [0,T] × D → R such that

    ż(t_N(k), v_N(n)) = f(v_N(n), z_N(k,n), ∇z_N(k,n), ∇²z_N(k,n)) = f_N(z_N(k), v_N(n))/δ_N + φ_N(t_N(k), v_N(n)),    (29)

and for N sufficiently large, |φ_N(t_N(k), v_N(n))| < c_2 γ_N, where {γ_N} is as defined in (22).

For each N, for k = 0, ..., K_N − 1 and n = 1, ..., N, let u_N(k,n) = φ_N(t_N(k), v_N(n)) + r_N(t_N(k), v_N(n)), and u_N(k) = [u_N(k,1), ..., u_N(k,N)]^T ∈ R^N. Then there exists 0 < c < ∞ such that for each N, ||u_N||_∞^(N) < c max{γ_N, dt_N}. Hence (27) follows. By (28) and (29), for each N, for k = 0, ..., K_N − 1 and n = 1, ..., N,

    (z_N(k+1) − z_N(k))/dt_N = f_N(z_N(k))/δ_N + u_N(k).

By this and (25), we have (26).

In the following we show that under some conditions, x_N(k) and z_N(k) are asymptotically close for large N. For each N, for k = 0, ..., K_N and n = 1, ..., N, define

    ε_N(k,n) = z_N(k,n) − x_N(k,n),    (30)

and let ε_N(k) = [ε_N(k,1), ..., ε_N(k,N)]^T ∈ R^N. By (18), (26), and (30), we have that for each N, for k = 0, ..., K_N, there exists u_N(k) as defined in Lemma 4 such that

    ε_N(k+1) = ε_N(k) + (1/M)(f_N(z_N(k)) − f_N(x_N(k))) + dt_N u_N(k).    (31)

Suppose that for each N, f_N ∈ C¹. Let Df_N(x) be the derivative matrix of the function f_N at x. Then we have that for each N, for k = 1, ..., K_N and n = 1, ..., N, there exists a function f̃_N: R^N → R^N such that

    f_N(z_N(k)) − f_N(x_N(k)) = Df_N(z_N(k)) ε_N(k) + f̃_N(ε_N(k))

and f̃_N(0) = 0.    (32)

Then we have from (31)

    ε_N(k+1) = ε_N(k) + (1/M)(Df_N(z_N(k)) ε_N(k) + f̃_N(ε_N(k))) + dt_N u_N(k).    (33)

Further suppose that for each N,

    ||ε_N(0)||_∞^(N) = 0.    (34)

Define ε_N = [ε_N(1), ..., ε_N(K_N)] ∈ R^{N×K_N}. Then by (32), (33), and (34), for each N, there exists a function H_N: R^{N×K_N} → R^{N×K_N} such that

    ε_N = H_N(u_N).    (35)

It follows that H_N(0) = 0 and H_N ∈ C¹. For each N, define

    µ_N = lim_{α→0} sup_{||u||_∞^(N) ≤ α} ||H_N(u)||_∞^(N) / ||u||_∞^(N).

Lemma 5: Assume that:
• z is continuously differentiable in t;
• for each N, f_N ∈ C¹;
• for each N, (34) holds; and
• the sequence {µ_N} is bounded.
Then ||ε_N||_∞^(N) = O(||u_N||_∞^(N)).

Proof: By definition, for each N, there exists δ > 0 such that for α < δ,

    sup_{||u||_∞^(N) ≤ α} ||H_N(u)||_∞^(N) / ||u||_∞^(N) < µ_N + 1.

By (27), as N → ∞, ||u_N||_∞^(N) → 0. Then there exist N_0 and α_1 such that for N > N_0, ||u_N||_∞^(N) ≤ α_1 < δ. Hence, for N > N_0,

    ||H_N(u_N)||_∞^(N) / ||u_N||_∞^(N) ≤ sup_{||u||_∞^(N) ≤ α_1} ||H_N(u)||_∞^(N) / ||u||_∞^(N) < µ_N + 1.

Therefore, there exists 0 < c < ∞ such that for N > N_0,

    ||ε_N||_∞^(N) = ||H_N(u_N)||_∞^(N) < (µ_N + 1) ||u_N||_∞^(N) < (c + 1) ||u_N||_∞^(N).

This finishes the proof.

Lemma 5 states that as N → ∞, ||ε_N||_∞^(N) → 0 at least as fast as ||u_N||_∞^(N).

Let X_N = [X_N(1)/M, ..., X_N(K_N)/M], x_N = [x_N(1), ..., x_N(K_N)], and z_N = [z_N(1), ..., z_N(K_N)], all in R^{N×K_N}. Now we present the main convergence theorem of this paper, which states that the value of the normalized Markov chain at time k and node n is close to that of z at the corresponding point (t_N(k), v_N(n)) ∈ [0,T] × D for large M and N.

Theorem 1: Suppose that the assumptions in Lemma 1 and Lemma 5 hold.
Then $\|X^N - z^N\|_\infty^{(N)} = O(\max\{\gamma_N, dt_N\})$ a.s.

Proof: By (27) and Lemma 5, there exists $0 < c_0 < \infty$ such that for $N$ sufficiently large,

$$\|\varepsilon^N\|_\infty^{(N)} < c_0 \max\{\gamma_N, dt_N\}. \quad (36)$$

Let $\tilde T_N$ in Corollary 1 be $T/\delta_N$. Then $\tilde K_N := \lfloor \tilde T_N M \rfloor = \lfloor T/dt_N \rfloor =: K_N$. Hence, by Corollary 1, for any sequence $\{\zeta_N\}$, for each $N$ we can take $M$ sufficiently large such that

$$\sum_{N=1}^\infty P\{\|X^N - x^N\|_\infty^{(N)} > \zeta_N\} \le \sum_{N=1}^\infty 1/N^2 < \infty.$$

By the first Borel–Cantelli lemma [28],

$$P\Big(\limsup_{N \to \infty} \{\|X^N - x^N\|_\infty^{(N)} > \zeta_N\}\Big) = 0,$$

which implies that, a.s., for $N$ sufficiently large, $\|X^N - x^N\|_\infty^{(N)} < \zeta_N$. Take $\zeta_N$ such that for $N$ sufficiently large, $\zeta_N < \max\{\gamma_N, dt_N\}$. Then by the triangle inequality

$$\|X^N - z^N\|_\infty^{(N)} \le \|X^N - x^N\|_\infty^{(N)} + \|x^N - z^N\|_\infty^{(N)} = \|X^N - x^N\|_\infty^{(N)} + \|\varepsilon^N\|_\infty^{(N)},$$

a.s., there exists $0 < c < \infty$ such that for $N$ sufficiently large,

$$\|X^N - z^N\|_\infty^{(N)} \le c \max\{\gamma_N, dt_N\}.$$

This finishes the proof.

This theorem states that as $M \to \infty$ and $N \to \infty$, $X^N$ converges uniformly to $z^N$ a.s., at least as fast as $\max\{\gamma_N, dt_N\}$.

D. Convergence of Continuous-time-space Extension

In the following we study the convergence of the continuous-time-space extension of the Markov chain $X_N(k)$ to the PDE solution. Set $\tilde T_N = T/\delta_N$. For each $N$, we can construct $X_{oN}(\tilde t)$ and $x_{oN}(\tilde t)$ with time intervals of length $1/M$, where $\tilde t \in [0, \tilde T_N]$. Respectively, let $X_{pN}(t)$ and $x_{pN}(t)$, where $t \in [0, T]$, be the continuous-space extensions of $X_{oN}(\tilde t)$ and $x_{oN}(\tilde t)$ (with $\tilde t \in [0, \tilde T_N]$) by piecewise-constant space extension on $D$, with time scaled by $\delta_N$ so that the time-interval length is $\delta_N/M := dt_N$.
By the piecewise-constant space extension of $X_{oN}$, we mean that we construct a piecewise-constant function on $D$ such that the value of this function at each point in $D$ is the value of the component of the vector $X_{oN}$ corresponding to the grid point that is "closest to the left" (taken one component at a time). Then for each $t$, $X_{pN}(t)$ and $x_{pN}(t)$ are real-valued functions defined on $D$. Fig. 2 is an illustration of $x_N$ and $x_{pN}$ in a one-dimensional case.

Fig. 2. An illustration of $x_N$ and $x_{pN}$ in a one-dimensional case.

For fixed $T$, both $X_{pN}(t)$ and $x_{pN}(t)$ with $t \in [0, T]$ are in the space $D_D[0, T]$ of functions $[0, T] \times D \to \mathbb{R}$ that are càdlàg in the time component. Define the $\infty$-norm $\|\cdot\|_\infty^{(p)}$ on $D_D[0, T]$ by, for $x \in D_D[0, T]$,

$$\|x\|_\infty^{(p)} = \sup_{t \in [0, T],\, s \in D} |x(t, s)|.$$

First we show that $x_{pN}$ and $z$ are asymptotically close for large $N$.

Lemma 6: Suppose that the assumptions in Lemma 5 hold. Then $\|x_{pN} - z\|_\infty^{(p)} = O(\max\{\gamma_N, dt_N, ds_N\})$.

Proof: For each $N$, for $k = 0, \dots, K_N$ and $n = 1, \dots, N$, by the definition of $x_{pN}$ we have $x_{pN}(t_N(k), v_N(n)) = x_N(k, n)$. Let $\Omega_N(k, n)$ be the subset of $[0, T] \times D$ containing $(t_N(k), v_N(n))$ on which $x_{pN}$ is piecewise constant, i.e., $(t_N(k), v_N(n)) \in \Omega_N(k, n)$ and for all $(t, s) \in \Omega_N(k, n)$, $x_{pN}(t, s) = x_{pN}(t_N(k), v_N(n))$. (For example, for $D \subset \mathbb{R}$, $\Omega_N(k, n) = [t_N(k), t_N(k+1)] \times [v_N(n), v_N(n+1)]$.) Then for each $N$,

$$\|x_{pN} - z\|_\infty^{(p)} \le \|\varepsilon^N\|_\infty^{(N)} + \max_{\substack{k = 0, \dots, K_N \\ n = 1, \dots, N}}\; \sup_{(t, s) \in \Omega_N(k, n)} |z(t_N(k), v_N(n)) - z(t, s)|.$$

Since $z(t, s)$ is continuously differentiable in $t$ on a compact domain, it is Lipschitz continuous in $t$.
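The "closest to the left" construction above can be made concrete in a few lines. The following is a minimal sketch assuming a uniform grid and $D = [0, 1)$; the helper name and 0-based indexing are our own conventions, not the paper's:

```python
import numpy as np

def piecewise_constant_extension(x_grid, T, D=(0.0, 1.0)):
    """Extend grid values x_grid[k, n] (time index k, node index n) to a
    function on [0, T] x D that holds, at each point, the value of the
    grid point "closest to the left" in both time and space."""
    K, N = x_grid.shape
    dt = T / K
    ds = (D[1] - D[0]) / N
    def x_p(t, s):
        k = min(int(t / dt), K - 1)            # time grid point to the left
        n = min(int((s - D[0]) / ds), N - 1)   # space grid point to the left
        return x_grid[k, n]
    return x_p

x_p = piecewise_constant_extension(np.arange(6.0).reshape(2, 3), T=1.0)
# x_p is piecewise constant: it holds each grid value on a dt-by-ds cell.
```

The resulting function is càdlàg in each coordinate direction, matching the function space $D_D[0, T]$ used above.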
Similarly, it is Lipschitz continuous in $s$. Hence there exist $0 < c_1, c_2 < \infty$ such that for each $N$,

$$\max_{\substack{k = 0, \dots, K_N \\ n = 1, \dots, N}}\; \sup_{(t, s) \in \Omega_N(k, n)} |z(t_N(k), v_N(n)) - z(t, s)| \le c_1 \max_{\substack{k = 0, \dots, K_N \\ n = 1, \dots, N}}\; \sup_{(t, s) \in \Omega_N(k, n)} \|(t_N(k), v_N(n)) - (t, s)\| \le c_2 \max\{ds_N, dt_N\},$$

where $\|\cdot\|$ is some norm on $[0, T] \times D$. Hence, by this and (36), there exists $0 < c < \infty$ such that for $N$ sufficiently large,

$$\|x_{pN} - z\|_\infty^{(p)} \le c \max\{\gamma_N, dt_N, ds_N\}.$$

This finishes the proof.

Now we present a convergence theorem for the continuous functions.

Theorem 2: Suppose that the assumptions in Lemma 1 and Lemma 5 hold. Then $\|X_{pN} - z\|_\infty^{(p)} = O(\max\{\gamma_N, dt_N, ds_N\})$ a.s. on $[0, T] \times D$.

Proof: By Lemma 3, for any sequence $\{\zeta_N\}$, for each $N$ we can take $M$ sufficiently large such that

$$\sum_{N=1}^\infty P\{\|X_{oN} - x_{oN}\|_\infty^{(o)} > \zeta_N\} \le \sum_{N=1}^\infty 1/N^2 < \infty.$$

By the first Borel–Cantelli lemma [28],

$$P\Big(\limsup_{N \to \infty} \{\|X_{oN} - x_{oN}\|_\infty^{(o)} > \zeta_N\}\Big) = 0,$$

which implies that, a.s., for $N$ sufficiently large, $\|X_{oN} - x_{oN}\|_\infty^{(o)} < \zeta_N$ on $[0, \tilde T_N]$. Since $X_{pN}$ and $x_{pN}$ are the piecewise continuous-space extensions of $X_{oN}$ and $x_{oN}$ by constant interpolation, respectively, it follows that for any sequence $\{\zeta_N\}$ we can take $M$ sufficiently large such that, a.s., for $N$ sufficiently large, $\|X_{pN} - x_{pN}\|_\infty^{(p)} < \zeta_N$ on $[0, T] \times D$. Take $\zeta_N$ such that for $N$ sufficiently large, $\zeta_N < \max\{\gamma_N, dt_N, ds_N\}$. Then by the triangle inequality

$$\|X_{pN} - z\|_\infty^{(p)} \le \|X_{pN} - x_{pN}\|_\infty^{(p)} + \|x_{pN} - z\|_\infty^{(p)}$$

and Lemma 6, a.s., there exists $0 < c < \infty$ such that for $N$ sufficiently large,

$$\|X_{pN} - z\|_\infty^{(p)} \le c \max\{\gamma_N, dt_N, ds_N\}$$

on $[0, T] \times D$. This finishes the proof.
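The Lipschitz argument in Lemma 6 is easy to observe numerically: the gap between a smooth profile and its "closest to the left" extension shrinks like the grid spacing. A small empirical sketch, with $\sin(\pi s)$ as a hypothetical stand-in for $z(t_o, \cdot)$ on $[0, 1]$ (our choice for illustration only):

```python
import numpy as np

def sup_gap(N):
    """Sup-norm gap between sin(pi*s) and its piecewise-constant
    left-node extension on a grid of spacing ds = 1/(N+1)."""
    ds = 1.0 / (N + 1)                      # grid spacing on [0, 1]
    s_fine = np.linspace(0.0, 1.0, 5001)    # fine evaluation grid
    idx = np.minimum((s_fine / ds).astype(int), N + 1)
    z_ext = np.sin(np.pi * idx * ds)        # extension holds left-node value
    return np.max(np.abs(z_ext - np.sin(np.pi * s_fine)))
```

Since $\sin(\pi s)$ has Lipschitz constant $\pi$, the gap is bounded by $\pi\, ds_N$, consistent with the $O(\max\{ds_N, dt_N\})$ term in the lemma.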
This theorem states that as $M \to \infty$ and $N \to \infty$, the continuous-time-space extension $X_{pN}$ of the Markov chain $X_N(k)$ converges uniformly to $z$, the solution of the PDE, a.s., at least as fast as $\max\{\gamma_N, dt_N, ds_N\}$. The solution of the PDE can be found quickly by readily available mathematical tools and then used to approximate the Markov chain $X_N(k)$. We give an example of this in the next section.

IV. APPLICATION TO THE MODELING OF LARGE NETWORKS

In this section we present an example of the application of our approach to network modeling. We show how the Markov chain representing the queue lengths of the nodes in the network can be approximated by the solution of a PDE using the results of the preceding section.

$$X_N(k+1, n) - X_N(k, n) = \begin{cases} 1 + G(k, n), & \text{with probability } \big(1 - W(n, X_N(k, n)/M)\big) \\ & \quad \times \big[P_r(n-1)\, W(n-1, X_N(k, n-1)/M)\big(1 - W(n+1, X_N(k, n+1)/M)\big) \\ & \quad\quad + P_l(n+1)\, W(n+1, X_N(k, n+1)/M)\big(1 - W(n-1, X_N(k, n-1)/M)\big)\big]; \\ -1 + G(k, n), & \text{with probability } W(n, X_N(k, n)/M) \\ & \quad \times \big[P_r(n)\big(1 - W(n+1, X_N(k, n+1)/M)\big)\big(1 - W(n+2, X_N(k, n+2)/M)\big) \\ & \quad\quad + P_l(n)\big(1 - W(n-1, X_N(k, n-1)/M)\big)\big(1 - W(n-2, X_N(k, n-2)/M)\big)\big]; \\ G(k, n), & \text{otherwise}. \end{cases} \quad (37)$$

Fig. 3. An illustration of a wireless sensor network over a two-dimensional domain. Destination nodes are located at the far edge. We show the possible path of a message originating from a node located in the left-front region.

A. Network Model

We consider a network of wireless sensor nodes uniformly placed over a domain. In a random fashion, the sensor nodes generate data messages that need to be communicated to the destination nodes located on the boundary of the domain, which represent specialized devices that collect the sensor data.
The sensor nodes also serve as relays in the routing of the messages to the destination nodes. Each sensor node has the capacity to store messages and decides to transmit or receive messages to or from its immediate neighbors at each time instant, but not both. This simplified transmission rule allows for a relatively simple representation. We illustrate such a network over a two-dimensional domain in Fig. 3. The communication is interference-limited because all nodes share the same wireless channel. We assume a simple collision protocol: a transmission from a transmitter to a neighboring receiver is successful if and only if none of the other neighbors of the receiver is a transmitter, as illustrated in Fig. 4.

Fig. 4. An illustration of the collision protocol: reception at a node fails when more than one of its neighbors transmit (regardless of the intended receiver).

B. Continuum Model in One Dimension

We first consider the case of a one-dimensional network, where $N$ sensor nodes are uniformly placed over a domain $D \subset \mathbb{R}$ and labeled $n = 1, \dots, N$. The destination nodes are located on the boundary of $D$, labeled $n = 0$ and $n = N + 1$. Again let $ds_N$ be the distance between neighboring nodes. Let $X_N(k, n)$ in (15) be the queue length of node $n$ at time $k$, and let $M$ in (16) be the maximum queue length of each node. At each time instant $k = 0, 1, \dots$, node $n$ decides to be a transmitter with probability $W(n, X_N(k, n)/M)$. Assume that node $n$ randomly chooses to transmit to the right or the left immediate neighbor with probabilities $P_r(n)$ and $P_l(n)$, respectively. Define $G(k) = [G(k, 1), \dots, G(k, N)]^T$, where $G(k, n)$ is the number of messages generated at node $n$ at time $k$.

Fig. 5. An illustration of the time evolution of the queues in the one-dimensional network model.
We model the $G(k, n)$ by independent Poisson random variables with mean $g(n)$. The destination nodes at the boundaries of the domain do not have queues; they simply receive any message transmitted to them and never transmit anything. We illustrate the time evolution of the queues in the network in Fig. 5.

The sequence $X_N(k)$ defined above forms a Markov chain whose evolution is described by (16). According to the behavior of the nodes, the $n$th component of $F_N(X_N(k)/M, U(k))$, where $n = 1, \dots, N$, is defined by (37) at the top of the page, where $X_N(k, n)$ with $n \le 0$ or $n \ge N + 1$ are defined to be zero.

For simplicity, in the following we set $W(n, X_N(k, n)/M) = X_N(k, n)/M$, which corresponds to the transmission rule that a node transmits a message with probability proportional to its queue length. With this simplification, for $x = [x_1, \dots, x_N]^T$, the $n$th component of $F_N(x, U(k))$, where $n = 1, \dots, N$, is

$$\begin{cases} 1 + G(k, n), & \text{with probability } (1 - x_n)\big[P_r(n-1)\, x_{n-1}(1 - x_{n+1}) + P_l(n+1)\, x_{n+1}(1 - x_{n-1})\big]; \\ -1 + G(k, n), & \text{with probability } x_n\big[P_r(n)(1 - x_{n+1})(1 - x_{n+2}) + P_l(n)(1 - x_{n-1})(1 - x_{n-2})\big]; \\ G(k, n), & \text{otherwise}, \end{cases}$$

where $x_n$ with $n \le 0$ or $n \ge N + 1$ are defined to be zero. Define $f_N$ as in (17). It follows that for $x = [x_1, \dots, x_N]^T$, the $n$th component of $f_N(x)$, where $n = 1, \dots, N$, is

$$(1 - x_n)\big[P_r(n-1)\, x_{n-1}(1 - x_{n+1}) + P_l(n+1)\, x_{n+1}(1 - x_{n-1})\big] - x_n\big[P_r(n)(1 - x_{n+1})(1 - x_{n+2}) + P_l(n)(1 - x_{n-1})(1 - x_{n-2})\big] + g(n), \quad (38)$$

where $x_n$ with $n \le 0$ or $n \ge N + 1$ are defined to be zero. Define the deterministic sequence $x_N(k)$ as in (18). Set $\delta_N$, defined in Section III-C, to be $ds_N^2$. Let

$$dt_N = \delta_N/M = ds_N^2/M.$$
(39)

Assume

$$P_l(n) = p_l(v_N(n)) \quad \text{and} \quad P_r(n) = p_r(v_N(n)), \quad (40)$$

where $p_l(s)$ and $p_r(s)$ are real-valued functions defined on $D$. As in Section II, we again assume

$$p_l(s) = b(s) + c_l(s)\, ds_N \quad \text{and} \quad p_r(s) = b(s) + c_r(s)\, ds_N. \quad (41)$$

Let $c = c_l - c_r$. Again we call $b$ the diffusion and $c$ the convection. To guarantee that the number of messages entering the system from outside over finite time intervals remains finite throughout the limiting process, we set $g(n) = M g_p(v_N(n))\, dt_N$. Assume $b$, $c_l$, $c_r$, and $g_p$ are in $C^1$. Then $f_N \in C^1$. Let $f_N(y_N, s_N)$ be defined as in Section III-C. Then we have the $f$ in (21):

$$f = b(s) \frac{d}{ds}\big((1 - z(s))(1 + 3z(s))\, z_s(s)\big) + 2(1 - z(s))\, z_s(s)\, b_s(s) + z(s)(1 - z(s))^2\, b_{ss}(s) + \frac{d}{ds}\big(c(s)\, z(s)(1 - z(s))^2\big) + g_p(s). \quad (42)$$

Here, recall that a single subscript $s$ denotes a first derivative and a double subscript $ss$ a second derivative.

Based on the behavior of the nodes $n = 1$ and $n = N$ next to the destination nodes, we derive the boundary condition for the PDE. For example, node $n = 1$ receives messages only from the right and encounters no interference when transmitting to the left. Replacing $x_n$ with $n \le 0$ or $n \ge N + 1$ by 0 in (38), it follows that the 1st component of $f_N(x)$ is

$$(1 - x_n) P_l(n+1)\, x_{n+1} - x_n\big[P_l(n) + P_r(n)(1 - x_{n+1})(1 - x_{n+2})\big] + g(n). \quad (43)$$

Similarly, the $N$th component of $f_N(x)$ is

$$(1 - x_n) P_r(n-1)\, x_{n-1} - x_n\big[P_r(n) + P_l(n)(1 - x_{n-1})(1 - x_{n-2})\big] + g(n). \quad (44)$$

Set $\beta_N$, defined in Section III-C, to be 1. Then we have the $h$ in (23):

$$h = -b(s) z(s)^3 + b(s) z(s)^2 - b(s) z(s). \quad (45)$$

Solving $h = 0$ for real $z$, we have the boundary condition $z(t, s) = 0$.
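The nodal drift (38), with the zero-padding convention that also yields the boundary components (43) and (44), can be evaluated directly. A minimal sketch assuming $W(n, x) = x$; the function name and 0-based array indexing are ours, not the paper's:

```python
import numpy as np

def drift_fN(x, Pr, Pl, g):
    """Nodal drift (38): expected change of the normalized queue at each
    node, with W(n, x) = x and x_n = 0 outside nodes 1..N (the paper's
    zero-padding convention; arrays here are 0-based)."""
    N = len(x)
    xe = np.zeros(N + 4)                 # xe[i] holds x at node i - 1
    xe[2:N + 2] = x
    Pre = np.zeros(N + 2)                # Pre[i] holds P_r at node i
    Pre[1:N + 1] = Pr
    Ple = np.zeros(N + 2)                # Ple[i] holds P_l at node i
    Ple[1:N + 1] = Pl
    f = np.empty(N)
    for n in range(1, N + 1):            # node labels 1..N
        xn = xe[n + 1]
        arrival = (1 - xn) * (Pre[n - 1] * xe[n] * (1 - xe[n + 2])
                              + Ple[n + 1] * xe[n + 2] * (1 - xe[n]))
        departure = xn * (Pre[n] * (1 - xe[n + 2]) * (1 - xe[n + 3])
                          + Ple[n] * (1 - xe[n]) * (1 - xe[n - 1]))
        f[n - 1] = arrival - departure + g[n - 1]
    return f
```

For a single isolated node with $x_1 = 0.5$, $P_r = P_l = 0.5$, and $g = 0$, the arrival term vanishes (the boundary neighbors hold no messages) and the drift is the pure departure rate $-0.5$.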
This boundary condition might seem puzzling as the limit of (43) and (44) unless one notes that, unlike $f$, $h$ is the limit of a different function, $f_N(y_N, s_N)/\beta_N$.

For fixed $T$, let $z : [0, T] \times D \to \mathbb{R}$ be the solution of the PDE (24), with boundary condition $z(t, s) = 0$ and initial condition $z(0, s) = z_0(s)$, where the right-hand side of (24) is

$$b(s) \frac{\partial}{\partial s}\big((1 - z(t, s))(1 + 3z(t, s))\, z_s(t, s)\big) + 2(1 - z(t, s))\, z_s(t, s)\, b_s(s) + z(t, s)(1 - z(t, s))^2\, b_{ss}(s) + \frac{\partial}{\partial s}\big(c(s)\, z(t, s)(1 - z(t, s))^2\big) + g_p(s). \quad (46)$$

In the following we show the convergence of the Markov chain $X_N(k)$ to the PDE solution $z$ in the one-dimensional network case. Define $K_N$, $z^N$, $u^N$, and $\varepsilon^N$ as in Section III-C. Throughout this section we assume (34) holds. By (38) and (46), it follows that there exists $0 < c < \infty$ such that for $N$ sufficiently large,

$$\gamma_N < c\, ds_N. \quad (47)$$

Albeit arduous, the algebraic manipulation in obtaining (42), (45), and (47) amounts only to algebraic exercises, conceptually no more sophisticated than that in obtaining (14) in Section II. In practice, we accomplish such manipulation using symbolic tools provided by computer programs such as MATLAB.

By (35), for each $N$, for $k = 1, \dots, K_N$ and $n = 1, \dots, N$, we can write $\varepsilon_N(k, n) = H_N^{(k, n)}(u^N)$, where $H_N^{(k, n)}$ is a real-valued function defined on $\mathbb{R}^{N \times K_N}$. It follows that $H_N^{(k, n)}(0) = 0$ and $H_N^{(k, n)} \in C^1$. Define

$$D_{H_N} = \max_{\substack{k = 1, \dots, K_N \\ n = 1, \dots, N}} \sum_{\substack{i = 1, \dots, K_N \\ j = 1, \dots, N}} \left|\frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\right|,$$

where $0$ is in $\mathbb{R}^{N \times K_N}$.

Lemma 7: We have that for each $N$, $\mu_N \le D_{H_N}$.

Proof: For each $N$, we have

$$\max_{k, n} \sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j) \le \max_{k, n} \sum_{i, j} \left|\frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\right| |u(i, j)| \le D_{H_N} \max_{i, j} |u(i, j)| = D_{H_N} \|u\|_\infty^{(N)},$$

where $k, i$ range over $1, \dots, K_N$ and $n, j$ over $1, \dots, N$. Thus, for each $N$, for all $u \ne 0$,

$$D_{H_N} \ge \max_{k, n} \frac{\sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j)}{\|u\|_\infty^{(N)}}. \quad (48)$$

For each $N$, let $v = [v(1), \dots, v(K_N)]$, where $v(k) = [v(k, 1), \dots, v(k, N)]^T$ and, for $k = 1, \dots, K_N$ and $n = 1, \dots, N$,

$$v(k, n) = \operatorname{sgn}\left(\frac{\partial H_N^{(k_0, n_0)}}{\partial u(k, n)}(0)\right), \quad \text{where} \quad (k_0, n_0) \in \arg\max_{k, n} \sum_{i, j} \left|\frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\right|.$$

Then

$$D_{H_N} = \max_{k, n} \frac{\sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, v(i, j)}{\|v\|_\infty^{(N)}}.$$

By this and (48) we have

$$D_{H_N} = \sup_{u \ne 0} \max_{k, n} \frac{\sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j)}{\|u\|_\infty^{(N)}}. \quad (49)$$

By Taylor's theorem, for each $N$, for $k = 1, \dots, K_N$ and $n = 1, \dots, N$, there exists $\tilde H_N^{(k, n)}(u)$ such that

$$H_N^{(k, n)}(u) = \sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j) + \tilde H_N^{(k, n)}(u), \quad (50)$$

and

$$\lim_{u \to 0} \frac{|\tilde H_N^{(k, n)}(u)|}{\|u\|_\infty^{(N)}} = 0.$$

Hence for each $\varepsilon > 0$ there exists $\delta$ such that for $\|u\|_\infty^{(N)} < \delta$,

$$\frac{|\tilde H_N^{(k, n)}(u)|}{\|u\|_\infty^{(N)}} < \varepsilon.$$

Then for $\|u\|_\infty^{(N)} \le \alpha \le \delta$,

$$\sup_{\|u\|_\infty^{(N)} \le \alpha} \frac{|\tilde H_N^{(k, n)}(u)|}{\|u\|_\infty^{(N)}} < \varepsilon.$$

Therefore, for $k = 1, \dots, K_N$ and $n = 1, \dots, N$,

$$\lim_{\alpha \to 0} \sup_{\|u\|_\infty^{(N)} \le \alpha} \frac{|\tilde H_N^{(k, n)}(u)|}{\|u\|_\infty^{(N)}} = 0. \quad (51)$$

By (50), for each $N$,

$$\|H_N(u)\|_\infty^{(N)} \le \max_{k, n} \left|\tilde H_N^{(k, n)}(u)\right| + \max_{k, n} \left|\sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j)\right|.$$
Hence

$$\mu_N \le \lim_{\alpha \to 0} \sup_{\|u\|_\infty^{(N)} \le \alpha} \left( \frac{\max_{k, n} |\tilde H_N^{(k, n)}(u)|}{\|u\|_\infty^{(N)}} + \frac{\max_{k, n} \left|\sum_{i, j} \frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0)\, u(i, j)\right|}{\|u\|_\infty^{(N)}} \right).$$

Hence, by (49) and (51), we finish the proof.

Notice that $D_{H_N}$ is essentially the induced $\infty$-norm of the linearized version of the operator $H_N$. Now we present a lemma giving a condition for the sequence $\{\mu_N\}$ to be bounded in the one-dimensional network case.

Lemma 8: In the one-dimensional network case, assume that the function

$$\max\{|z|, |z_s|, |z_{ss}|, |b|, |b_s|, |b_{ss}|, |c|, |c_s|\} \quad (52)$$

of $(t, s)$ is bounded on $[0, T] \times D$. Then $\{\mu_N\}$ is bounded.

Proof: Define $A_N(k) = I_N + \frac{1}{M} Df_N(z_N(k))$, where $I_N$ is the identity matrix in $\mathbb{R}^{N \times N}$. It follows from (33) that for each $N$ and for $k = 0, \dots, K_N$,

$$\varepsilon_N(k+1) = A_N(k)\, \varepsilon_N(k) + \frac{\tilde f_N(\varepsilon_N(k))}{M} + dt_N\, u_N(k).$$

It follows that

$$\varepsilon_N(k) = dt_N\big(A_N(k-1) \cdots A_N(1)\, u_N(0) + A_N(k-1) \cdots A_N(2)\, u_N(1) + \cdots + A_N(k-1)\, u_N(k-2) + u_N(k-1)\big) + \frac{1}{M}\big(A_N(k-1) \cdots A_N(2)\, \tilde f_N(\varepsilon_N(1)) + A_N(k-1) \cdots A_N(3)\, \tilde f_N(\varepsilon_N(2)) + \cdots + A_N(k-1)\, \tilde f_N(\varepsilon_N(k-2)) + \tilde f_N(\varepsilon_N(k-1))\big).$$

Define

$$B_N^{(k, n)} = \begin{cases} A_N(k-1) \cdots A_N(n+1), & 0 \le n \le k - 2; \\ I_N, & n = k - 1; \\ 0, & n \ge k. \end{cases} \quad (53)$$

It follows that

$$\frac{\partial H_N^{(k, n)}}{\partial u(i, j)}(0) = B_N^{(k, i)}(n, j)\, dt_N.$$

Hence by Lemma 7,

$$\mu_N \le \max_{\substack{k = 1, \dots, K_N \\ n = 1, \dots, N}} \sum_{\substack{i = 1, \dots, K_N \\ j = 1, \dots, N}} \left|B_N^{(k, i)}(n, j)\right| dt_N. \quad (54)$$

By (38), for fixed $N$, for $x = [x_1, \dots, x_N]^T$, the $(n, m)$th component of $Df_N(x) := \frac{\partial f_N^{(n)}}{\partial x_m}(x)$, where $n, m = 1, \dots, N$, is

$$\begin{cases} P_l(n)\, x_n(1 - x_{n-1}), & m = n - 2; \\ (1 - x_n)\big[P_r(n-1)(1 - x_{n+1}) - P_l(n+1)\, x_{n+1}\big] + P_l(n)\, x_n(1 - x_{n-2}), & m = n - 1; \\ -\big[P_r(n-1)\, x_{n-1}(1 - x_{n+1}) + P_l(n+1)\, x_{n+1}(1 - x_{n-1})\big] \\ \quad - \big[P_r(n)(1 - x_{n+1})(1 - x_{n+2}) + P_l(n)(1 - x_{n-1})(1 - x_{n-2})\big], & m = n; \\ (1 - x_n)\big[P_l(n+1)(1 - x_{n-1}) - P_r(n-1)\, x_{n-1}\big] + P_r(n)\, x_n(1 - x_{n+2}), & m = n + 1; \\ P_r(n)\, x_n(1 - x_{n+1}), & m = n + 2; \\ 0, & \text{otherwise}, \end{cases}$$

where $x_n$ with $n \le 0$ or $n \ge N + 1$ are defined to be zero. Denote the induced $\infty$-norm on $\mathbb{R}^{N \times N}$ again by $\|\cdot\|_\infty^{(N)}$; that is, for $A \in \mathbb{R}^{N \times N}$ with $(i, j)$th element $A(i, j)$,

$$\|A\|_\infty^{(N)} = \max_{1 \le i \le N} \sum_{j=1}^N |A(i, j)|,$$

which is simply the maximum absolute row sum of the matrix. Then we have

$$\|A_N(k)\|_\infty^{(N)} = \max_{n = 1, \dots, N} \frac{1}{M}\Big( \big|P_l(n)\, z_N(k, n)(1 - z_N(k, n-1))\big| + \big|(1 - z_N(k, n))\big[P_r(n-1)(1 - z_N(k, n+1)) - P_l(n+1)\, z_N(k, n+1)\big] + P_l(n)\, z_N(k, n)(1 - z_N(k, n-2))\big| + \big|M - \big[P_r(n-1)\, z_N(k, n-1)(1 - z_N(k, n+1)) + P_l(n+1)\, z_N(k, n+1)(1 - z_N(k, n-1))\big] - \big[P_r(n)(1 - z_N(k, n+1))(1 - z_N(k, n+2)) + P_l(n)(1 - z_N(k, n-1))(1 - z_N(k, n-2))\big]\big| + \big|(1 - z_N(k, n))\big[P_l(n+1)(1 - z_N(k, n-1)) - P_r(n-1)\, z_N(k, n-1)\big] + P_r(n)\, z_N(k, n)(1 - z_N(k, n+2))\big| + \big|P_r(n)\, z_N(k, n)(1 - z_N(k, n+1))\big| \Big).$$

Substituting (40), (41), and the Taylor expansions (11), (12), and (13) of $z$, $b$, and $c$, respectively, into the above equation and rearranging (again we omit the detailed algebraic manipulation), we have that there exists $0 < c_1 < \infty$ such that for each $N$, for $k = 1, \dots, K_N$ and $n = 1, \dots, N$,

$$\|A_N(k)\|_\infty^{(N)} \le \max_{n = 1, \dots, N} |q(t_N(k), v_N(n))|\, \frac{ds_N^2}{M} + c_1 \frac{ds_N^3}{M} + 1,$$

where

$$q = -c_s - b_{ss} - 2 b z_{ss} + 4 b_{ss} z + 2 b_s z_s + 4 c_s z + 4 c z_s + 6 b z_s^2 - 3 b_{ss} z^2 - 3 c_s z^2 + 6 b z z_{ss} - 6 c z z_s,$$

with $b$, $c$, and their derivatives evaluated at $v_N(n)$, and $z$ and its derivatives at $(t_N(k), v_N(n))$. Since (52) is bounded, there exists $0 < c_2 < \infty$ such that $|q(t, s)| < c_2$ for all $(t, s) \in [0, T] \times D$. Hence for each $N$ and for $k = 0, \dots, K_N$,

$$\|A_N(k)\|_\infty^{(N)} \le 1 + c_2 \frac{ds_N^2}{M} + c_1 \frac{ds_N^3}{M}.$$

Hence there exists $0 < c_3 < \infty$ such that for $N$ sufficiently large and for $k = 0, \dots, K_N$,

$$\|A_N(k)\|_\infty^{(N)} \le 1 + c_3 \frac{ds_N^2}{M} = 1 + c_3\, dt_N.$$

Hence, by (53) and (54), for $N$ sufficiently large,

$$\mu_N \le K_N\, dt_N\, (1 + c_3\, dt_N)^{K_N}.$$

Since $T < \infty$, there exists $0 < c_4 < \infty$ such that for each $N$, $K_N\, dt_N < c_4$. And as $N \to \infty$, $K_N \to \infty$, and

$$(1 + c_3\, dt_N)^{K_N} = \left(1 + \frac{c_3 T}{K_N}\right)^{K_N} \to e^{c_3 T}.$$

Therefore there exists $0 < c_5 < \infty$ such that for each $N$, $\mu_N < c_5$. This finishes the proof.

Proposition 1: In the one-dimensional network case, suppose that the assumption in Lemma 8 holds. Then $\|X^N - z^N\|_\infty^{(N)} = O(ds_N)$ a.s.

Proof: By (39) and (47), there exists $0 < c < \infty$ such that for $N$ sufficiently large, $\max\{\gamma_N, ds_N, dt_N\} \le c\, ds_N$. One can now easily verify that the assumptions in Theorem 1 hold. Then by Theorem 1 the desired result holds.
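Two ingredients of the proof are easy to check numerically: the induced $\infty$-norm is the maximum absolute row sum, and $(1 + c_3 T/K_N)^{K_N} \to e^{c_3 T}$. A small sketch (the matrix and the values of $c_3$, $T$ are arbitrary illustrations):

```python
import numpy as np

def induced_inf_norm(A):
    """Induced infinity-norm of a matrix: the maximum absolute row sum."""
    return np.abs(A).sum(axis=1).max()

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
norm_A = induced_inf_norm(A)        # row sums of |A|: 3 and 7, so 7

# (1 + c3*T/K)**K approaches e^{c3*T} as K grows, the bound used for mu_N.
c3, T, K = 2.0, 1.0, 10**6
growth = (1.0 + c3 * T / K) ** K    # close to e^2
```

The helper agrees with NumPy's built-in `np.linalg.norm(A, np.inf)`, which computes the same induced norm.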
This proposition states that in the one-dimensional network case, as $M \to \infty$ and $N \to \infty$, $X^N$ converges uniformly to $z^N$ a.s., at least as fast as $ds_N$. Analogously, for the continuous-time-space extension $X_{pN}$ of $X_N(k)$, under the same assumption as in the above proposition, by Theorem 2 we have $\|X_{pN} - z\|_\infty^{(p)} = O(ds_N)$ a.s. on $[0, T] \times D$.

Fig. 6. The PDE solution $z(t, s)$ at $t = t_o$, approximating the normalized queue lengths of a one-dimensional network.

1) Interpretation of the approximating PDE: Now we make some remarks on how to use a given approximating PDE. First, for fixed $N$ and $M$, the normalized queue length of node $n$ at time $k$ is approximated by the value of the PDE solution $z$ at the corresponding point in $[0, T] \times D$, i.e.,

$$z(t_N(k), v_N(n)) \approx \frac{X_N(k, n)}{M}.$$

Second, we show how to interpret

$$C(t_o) := \int_D z(t_o, s)\, ds,$$

the area below the curve $z(t_o, s)$ for fixed $t_o \in [0, T]$. Let $k_o = \lfloor t_o/dt_N \rfloor$. Then we have

$$z(t_o, v_N(n))\, ds_N \approx \frac{X_N(k_o, n)}{M}\, ds_N,$$

the area of the $n$th rectangle in Fig. 6. Hence

$$C(t_o) \approx \sum_{n=1}^N z(t_o, v_N(n))\, ds_N \approx \sum_{n=1}^N \frac{X_N(k_o, n)}{M}\, ds_N,$$

the sum of the areas of all the rectangles. If we assume that all messages in the queue have roughly the same number of bits, and think of $ds_N$ as the "coverage" of each node, then the area under any segment of the curve measures a kind of "data-coverage product" of the nodes covered by the segment, in units of bit$\cdot$meter. As $N \to \infty$, the total normalized queue length $\sum_{n=1}^N X_N(k_o, n)/M$ of the network does go to infinity; however, the coverage $ds_N$ of each node goes to 0. Hence the sum of the "data-coverage products" can be approximated by the finite area $C(t_o)$.
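The Riemann-sum reading of $C(t_o)$ above is easy to make concrete. The sketch below uses a hypothetical profile $z(t_o, s) = 0.05\, e^{-s^2}$ on $D = [-1, 1]$ (our choice for illustration only):

```python
import math
import numpy as np

def coverage_product(z_vals, ds_N):
    """Sum of the rectangle areas z(t_o, v_N(n)) * ds_N in Fig. 6,
    approximating C(t_o) = integral of z(t_o, s) over D."""
    return float(np.sum(z_vals) * ds_N)

N = 1000
ds_N = 2.0 / (N + 1)                   # node spacing on D = [-1, 1]
s = -1.0 + ds_N * np.arange(1, N + 1)  # node locations v_N(n)
z_vals = 0.05 * np.exp(-s**2)          # hypothetical z(t_o, s)
C = coverage_product(z_vals, ds_N)
# C approximates 0.05 * integral_{-1}^{1} e^{-s^2} ds
#   = 0.05 * sqrt(pi) * erf(1)
```

As $N$ grows, each rectangle narrows while the count grows, and the sum converges to the finite area $C(t_o)$, exactly as the text describes.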
2) Comparison between PDE approximation and Monte Carlo simulation: one dimension: We compare the PDE approximation obtained from our approach with Monte Carlo simulations for a network over the domain $D = [-1, 1]$. We use the initial condition $z_0(s) = l_1 e^{-s^2}$, where $l_1 > 0$ is a constant, so that initially the nodes in the middle have messages to transmit, while those near the boundaries have very few. We set the message generation rate $g_p(s) = l_2 e^{-s^2}$, where $l_2 > 0$ is a parameter determining the total load of the system.

Fig. 7. The Monte Carlo simulations (with different $N$ and $M$) and the PDE solution of a one-dimensional network, with $b = 1/2$ and $c = 0$, at $t = 1$ s.

We use three sets of values $N = 20, 50, 80$ with $M = N^3$, and show the PDE solution and the Monte Carlo simulation results with different $N$ and $M$ at $t = 1$ s. The networks have diffusion coefficient $b = 1/2$ and convection coefficient $c = 0$ in Fig. 7 and $c = 1$ in Fig. 8, respectively, where the x-axis denotes the node location and the y-axis denotes the normalized queue length. For the three sets of values $N = 20, 50, 80$ and $M = N^3$ with $c = 0$, the maximum absolute errors of the PDE approximation are $5.6 \times 10^{-3}$, $1.3 \times 10^{-3}$, and $1.1 \times 10^{-3}$, respectively; with $c = 1$, the errors are $4.4 \times 10^{-3}$, $1.5 \times 10^{-3}$, and $1.1 \times 10^{-3}$, respectively. As we can see, as $N$ and $M$ increase, the resemblance between the Monte Carlo simulations and the PDE solution becomes stronger. In the case of very large $N$ and $M$, it is difficult to distinguish the results.
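For readers who want to reproduce a small version of this comparison, a compact Monte Carlo sketch of the chain (37) with $W(n, x) = x$ follows. The parameter defaults, the even split of $c$ between $c_l$ and $-c_r$, and the clipping of queues to $[0, M]$ are our illustrative choices, not prescribed by the paper:

```python
import numpy as np

def simulate_queues(N=20, M=8000, T=1.0, b=0.5, c=0.0, l1=0.05, l2=0.5, seed=0):
    """Monte Carlo sketch of the one-dimensional queue chain (37) with
    W(n, x) = x on D = [-1, 1]; returns the normalized queues X/M."""
    rng = np.random.default_rng(seed)
    ds = 2.0 / (N + 1)                        # node spacing ds_N
    dt = ds**2 / M                            # dt_N = ds_N^2 / M
    K = int(T / dt)
    s = -1.0 + ds * np.arange(1, N + 1)       # node locations v_N(n)
    Pl = b + (c / 2) * ds * np.ones(N)        # split c = c_l - c_r evenly
    Pr = b - (c / 2) * ds * np.ones(N)
    Plp = np.concatenate(([0.0], Pl, [0.0]))  # P_l at nodes 0..N+1
    Prp = np.concatenate(([0.0], Pr, [0.0]))
    gmean = M * l2 * np.exp(-s**2) * dt       # Poisson generation mean g(n)
    X = np.rint(M * l1 * np.exp(-s**2)).astype(np.int64)
    for _ in range(K):
        xp = np.concatenate(([0.0, 0.0], X / M, [0.0, 0.0]))  # node n at xp[n+1]
        xn, xm1, xm2 = xp[2:N+2], xp[1:N+1], xp[0:N]
        xp1, xp2 = xp[3:N+3], xp[4:N+4]
        p_up = (1 - xn) * (Prp[0:N] * xm1 * (1 - xp1)
                           + Plp[2:N+2] * xp1 * (1 - xm1))
        p_down = xn * (Pr * (1 - xp1) * (1 - xp2)
                       + Pl * (1 - xm1) * (1 - xm2))
        u = rng.random(N)
        X = X + rng.poisson(gmean) \
              + (u < p_up).astype(np.int64) \
              - ((u >= p_up) & (u < p_up + p_down)).astype(np.int64)
        X = np.clip(X, 0, M)                  # our safeguard for the cap M
    return X / M
```

With the paper's settings ($N = 20$, $M = 20^3$, $T = 1$), this takes close to a million steps, so pure-Python runs are slow, echoing the cost gap discussed below; a tiny run such as `simulate_queues(N=5, M=50, T=0.02)` finishes instantly.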
We stress that the PDEs took only fractions of a second to solve on a computer, while the Monte Carlo simulations took time on the order of tens of hours. We could not do Monte Carlo simulations of any larger networks because of prohibitively long computation times.

C. Continuum Model in Two Dimensions

Generalization of the continuum model to higher dimensions is straightforward, apart from more arduous algebraic manipulation. We now consider a two-dimensional network of $N_1 \times N_2$ sensor nodes. The nodes are uniformly placed over the domain $D \subset \mathbb{R}^2$ and labeled $(n, m)$, where $n = 1, \dots, N_1$ and $m = 1, \dots, N_2$. Again let the distance between neighboring nodes be $ds_N$.

Fig. 8. The Monte Carlo simulations (with different $N$ and $M$) and the PDE solution of a one-dimensional network, with $b = 1/2$ and $c = 1$, at $t = 1$ s.

Assume that the node at location $(n, m)$ randomly chooses to transmit to the east, west, north, or south immediate neighbor with probabilities $P_e(n, m) = b_1(s) + c_e(s)\, ds_N$, $P_w(n, m) = b_1(s) + c_w(s)\, ds_N$, $P_n(n, m) = b_2(s) + c_n(s)\, ds_N$, and $P_s(n, m) = b_2(s) + c_s(s)\, ds_N$, respectively. Define $c_1 = c_w - c_e$ and $c_2 = c_s - c_n$. The derivation of the approximating PDE is similar to that of the one-dimensional case, except that we now have to consider transmission to, and interference from, four directions instead of two.
We present the approximating PDE here without the detailed derivation:

$$\dot z = \sum_{j=1}^2 \left[ b_j \frac{\partial}{\partial s_j}\left((1 + 5z)(1 - z)^3 \frac{\partial z}{\partial s_j}\right) + 2(1 - z)^3 \frac{\partial z}{\partial s_j}\, \frac{d b_j}{d s_j} + z(1 - z)^4\, \frac{d^2 b_j}{d s_j^2} + \frac{\partial}{\partial s_j}\left(c_j\, z (1 - z)^4\right) \right],$$

with boundary condition $z(t, s) = 0$ and initial condition $z(0, s) = z_0(s)$, where $t \in [0, T]$ and $s = (s_1, s_2) \in D$.

1) Comparison between PDE approximation and Monte Carlo simulations: two dimensions: We compare the PDE approximation and the Monte Carlo simulations of a network over the domain $D = [-1, 1] \times [-1, 1]$. We use the initial condition $z_0(s) = l_1 e^{-(s_1^2 + s_2^2)}$, where $l_1 > 0$ is a constant, so that initially the nodes in the center have messages to transmit, while those near the boundary have very few. We set the message generation rate $g_p(s) = l_2 e^{-(s_1^2 + s_2^2)}$, where $l_2 > 0$ is a parameter determining the total load of the system. We use three different sets of values of $N_1 \times N_2$ and $M$, where $N_1 = N_2 = 20, 50, 80$ and $M = N_1^3$. We show the contours of the normalized queue length from the PDE solution and the Monte Carlo simulation results with the different sets of values of $N_1$, $N_2$, and $M$ at $t = 0.1$ s.

Fig. 9. The Monte Carlo simulations (from top to bottom, with $N_1 = N_2 = 20, 50, 80$, respectively, and $M = N_1^3$) and the PDE solution of a two-dimensional network, with $b_1 = b_2 = 1/4$ and $c_1 = c_2 = 0$, at $t = 0.1$ s.

The networks have diffusion coefficients $b_1 = b_2 = 1/4$ and convection coefficients $c_1 = c_2 = 0$ and $c_1 = -2$, $c_2 = -4$ in Fig. 9 and Fig. 10, respectively.
It took 3 days to do the Monte Carlo simulation of the network at $t = 0.1$ s with $80 \times 80$ nodes and maximum queue length $M = 80^3$, while solving the PDE on the same machines took less than a second. We could not do Monte Carlo simulations of any larger networks or greater values of $t$. For the three sets of values $N_1 = N_2 = 20, 50, 80$ and $M = N_1^3$ with $c_1 = c_2 = 0$, the maximum absolute errors are $3.2 \times 10^{-3}$, $1.1 \times 10^{-3}$, and $6.8 \times 10^{-4}$, respectively; with $c_1 = -2$, $c_2 = -4$, the errors are $4.1 \times 10^{-3}$, $1.0 \times 10^{-3}$, and $6.6 \times 10^{-4}$, respectively. Again, the accuracy of the continuum model increases with $N_1$, $N_2$, and $M$.

Fig. 10. The Monte Carlo simulations (from top to bottom, with $N_1 = N_2 = 20, 50, 80$, respectively, and $M = N_1^3$) and the PDE solution of a two-dimensional network, with $b_1 = b_2 = 1/4$ and $c_1 = -2$, $c_2 = -4$, at $t = 0.1$ s.

V. CONCLUSION AND FUTURE WORK

In this paper we analyze the convergence of a sequence of Markov chains to its continuum limit, the solution of a PDE, in a two-step procedure. We provide precise sufficient conditions for the convergence and the explicit rate of the convergence. Based on such convergence we approximate the Markov chain modeling a large wireless sensor network by a nonlinear diffusion-convection PDE.

With the sophisticated mathematical tools available for PDEs, this approach provides a framework to model and simulate networks with a very large number of components, which is practically infeasible for Monte Carlo simulation. Such a
tool enables us to tackle problems such as performance analysis and prototyping, resource provisioning, network design, network parametric optimization, network control, network tomography, and inverse problems, for very large networks. For example, we can now use the PDE model to optimize some performance metric of a large network by adjusting the placement of destination nodes or the routing parameters (coefficients in the convection terms), with negligible computational overhead compared with that of the same task done by Monte Carlo simulation.

The approximation approach can be extended in future work with more specific considerations regarding the network, which can significantly affect the derivation of the continuum model. For example, we can seek to establish continuum models for other domains such as the Internet, cellular networks, and traffic networks; we can consider boundary conditions other than sinks, including walls, semi-permeating walls, and their compositions; the nodes could be nonuniformly located, even mobile; transmission could happen between nodes that are not immediate neighbors; and the interference between nodes could behave differently in the presence of power control.

REFERENCES

[1] R. M. Fujimoto, K. S. Perumalla, and G. F. Riley, Network Simulation. Morgan & Claypool Publishers, 2007.
[2] R. Bagrodia, R. Meyer, M. Takai, Y. A. Chen, X. Zeng, J. Martin, and H. Y. Song, "Parsec: A parallel simulation environment for complex systems," Computer, vol. 31, no. 10, pp. 77–85, Oct. 1998.
[3] H. Plesser, J. Eppler, A. Morrison, M. Diesmann, and M. O. Gewaltig, "Efficient parallel simulation of large-scale neuronal networks on clusters of multiprocessor computers," in Euro-Par 2007 Parallel Processing.
[4] H. J. Kushner, Approximation and Weak Convergence Methods for Random Processes, with Applications to Stochastic Systems Theory. Cambridge, MA: MIT Press, 1984.
[5] R. Norberg, "Anomalous PDEs in Markov chains: Domains of validity and numerical solutions," Finance and Stochastics, vol. 9, no. 4, pp. 519–537, Oct. 2005. [Online]. Available: http://ideas.repec.org/a/spr/finsto/v9y2005i4p519-537.html
[6] R. W. R. Darling and J. R. Norris, "Differential equation approximations for Markov chains," Probability Surveys, vol. 5, p. 37, 2008. [Online]. Available: doi:10.1214/07-PS121
[7] E. K. P. Chong, D. Estep, and J. Hannig, "Continuum modeling of large networks," Int. J. Numer. Model., vol. 21, no. 3, pp. 169–186, 2008.
[8] S. L. Sobolev, Partial Differential Equations of Mathematical Physics. Courier Dover Publications, 1964.
[9] R. G. Mortimer, Mathematics for Physical Chemistry. Academic Press, 2005.
[10] M. Gillman, An Introduction to Mathematical Models in Ecology and Evolution: Time and Space. Wiley-Blackwell, 2009.
[11] T. Hens and M. O. Rieger, Financial Economics. Springer, 2010.
[12] G. R. Liu and S. S. Quek, The Finite Element Method: A Practical Course. Butterworth-Heinemann, 2003.
[13] A. R. Mitchell and D. F. Griffiths, The Finite Difference Method in Partial Differential Equations. Wiley, 1980.
[14] E. W. C. v. Groesen and J. Molenaar, Continuum Modeling in the Physical Sciences. Society for Industrial and Applied Mathematics, 2007.
[15] H. B. Mühlhaus, Continuum Models for Materials with Microstructure. Wiley, 1995.
[16] W. F. Phillips, A New Continuum Model for Traffic Flow. U.S. Dept. of Transportation, Research and Special Programs Administration, National Technical Information Service [distributor], 1981.
[17] D. Grünbaum, "Translating stochastic density-dependent individual behavior with sensory constraints to an Eulerian model of animal swarming," J. Math. Biol., vol. 33, pp. 139–161, 1994.
[18] J. M. Harrison, "Heavy traffic analysis of a system with parallel servers: Asymptotic optimality of discrete-review policies," The Annals of Applied Probability, vol. 8, no. 3, pp. 822–848, 1998. [Online]. Available: http://www.jstor.org/stable/2667208
[19] J. G. Dai and J. M. Harrison, "Reflecting Brownian motion in three dimensions: A new proof of sufficient conditions for positive recurrence," Mathematical Methods of Operations Research, 2009.
[20] M. Bramson, J. G. Dai, and J. M. Harrison, "Positive recurrence of reflecting Brownian motion in three dimensions," ArXiv e-prints, Sep. 2010.
[21] P. Gupta and P. R. Kumar, "The capacity of wireless networks," IEEE Transactions on Information Theory, vol. 46, no. 2, pp. 388–404, Mar. 2000.
[22] E. W. Grundke and A. N. Z. Heywood, "A uniform continuum model for scaling of ad hoc networks," in ADHOC-NOW, 2003, pp. 96–103.
[23] R. Bakhshi, L. Cloth, W. Fokkink, and B. R. Haverkort, "Mean-field analysis for the evaluation of gossip protocols," SIGMETRICS Perform. Eval. Rev., vol. 36, pp. 31–39, Nov. 2008. [Online]. Available: http://doi.acm.org/10.1145/1481506.1481513
[24] M. E. J. Newman, C. Moore, and D. J. Watts, "Mean-field solution of the small-world network model," Physical Review Letters, vol. 84, pp. 3201–3204, Apr. 2000.
[25] R. B. Guenther and J. W. Lee, Partial Differential Equations of Mathematical Physics and Integral Equations. Mineola, NY: Courier Dover Publications, 1996.
[26] T. C. Gard, Introduction to Stochastic Differential Equations (Pure and Applied Mathematics). Marcel Dekker Inc., 1987.
[27] H. J. Kushner and G. G. Yin, Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2003.
[28] P. Billingsley, Probability and Measure. New York, NY: Wiley, 1995.
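As a side check, not performed in the paper, the reported maximum absolute errors 3.2 × 10^-3, 1.1 × 10^-3, and 6.8 × 10^-4 for N1 = N2 = 20, 50, 80 (the c1 = c2 = 0 case) suggest an empirical convergence order near one in N1. A least-squares fit of log(error) against log(N) is a simple way to estimate that order; the fit itself is our addition, not part of the paper's analysis.

```python
# Estimate the empirical convergence order p, assuming error ~ C * N**(-p),
# from the maximum absolute errors reported in the text (c1 = c2 = 0 case).
import math

N = [20, 50, 80]                # grid sizes N1 = N2
err = [3.2e-3, 1.1e-3, 6.8e-4]  # reported maximum absolute errors

# Least-squares slope of log(err) versus log(N) gives -p.
x = [math.log(n) for n in N]
y = [math.log(e) for e in err]
xbar = sum(x) / len(x)
ybar = sum(y) / len(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
p = -slope
print(f"estimated convergence order p = {p:.2f}")  # close to first order in N
```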
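To illustrate why the PDE side of the comparison runs in under a second, the following is a minimal sketch of an explicit finite-difference scheme (in the spirit of [13]) for a linear diffusion-convection PDE on [-1, 1]^2 with absorbing ("sink") boundaries, reusing the parameter values from Fig. 10 (b1 = b2 = 1/4, c1 = -2, c2 = -4, t = 0.1 s). The exact PDE form, its sign conventions, and the Gaussian initial condition below are assumptions for illustration; the paper's actual model is nonlinear and is not reproduced here.

```python
# Hypothetical explicit finite-difference sketch of a linear
# diffusion-convection PDE  u_t = b1*u_xx + b2*u_yy + c1*u_x + c2*u_y
# on [-1,1]^2 with zero (sink) boundary values. Parameter values are
# taken from Fig. 10; the PDE form and initial condition are assumed.
import numpy as np

b1 = b2 = 0.25          # diffusion coefficients
c1, c2 = -2.0, -4.0     # convection (routing-bias) coefficients
N = 80                  # grid points per dimension, as in the largest run
h = 2.0 / (N - 1)       # spatial step on [-1, 1]
dt = 0.2 * h * h        # conservative explicit time step
T = 0.1                 # final time, as in Fig. 10

x = np.linspace(-1.0, 1.0, N)
u = np.exp(-8.0 * (x[:, None] ** 2 + x[None, :] ** 2))  # hypothetical bump IC
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0           # sinks at the boundary

for _ in range(int(T / dt)):
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
    ux = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * h)   # central differences
    uy = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * h)
    u[1:-1, 1:-1] += dt * (b1 * uxx + b2 * uyy + c1 * ux + c2 * uy)

print(u.max())  # peak decays as mass spreads and drifts into the sinks
```

On this 80 × 80 grid the loop runs a few hundred vectorized steps, which is consistent with the sub-second PDE solve times reported above.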