Limiting Behavior of Degree-Degree Metrics under Local Convergence in Probability


Limiting Behavior of Degree-Degree Metrics under Local Convergence in Probability

Andrei-Eugeniu Pătularu (École Polytechnique Fédérale de Lausanne)
Pim van der Hoorn (Department of Mathematics and Computer Science, Eindhoven University of Technology)

February 20, 2026

Abstract

This paper investigates the limiting behaviour of degree-degree correlation metrics for sequences of random graphs under a general assumption of local convergence in probability. We establish convergence results for Pearson's correlation coefficient $r$, Spearman's $\rho$, Kendall's $\tau$, the average nearest neighbour degree (ANND), and the average nearest neighbour rank (ANNR). Our results explicitly show how the limits of these degree-degree correlation metrics depend on the local structure of the graph. We then apply our general results to study degree-degree correlations in rank-1 inhomogeneous random graphs and random geometric graphs, deriving explicit expressions for the ANND in both models and for Pearson's correlation coefficient in the latter.

Keywords: random graphs, degree-degree metrics, neutral mixing

Contents

1 Introduction
2 Preliminaries and main results
  2.1 Local convergence
  2.2 Statement of main results
    2.2.1 Known result for Pearson's correlation coefficient
    2.2.2 Spearman's rho
    2.2.3 Kendall's tau
    2.2.4 Degree-degree distance
    2.2.5 Average nearest neighbor degree and rank
3 Applications
  3.1 Rank-1 Inhomogeneous Random Graphs
  3.2 Random Geometric Graphs
4 Proofs of main results
  4.1 General idea and approach
  4.2 Spearman's rho
  4.3 Kendall's tau
  4.4 Degree-degree distance
  4.5 Average nearest neighbor degree and rank
5 Proofs of application results
  5.1 Rank-1 inhomogeneous random graphs
  5.2 Random geometric graphs
6 Bibliography
A Proof of Lemma 4.2
B Proof of Lemma 5.1
C Proof of Lemma 5.2
D Proof of Lemma 5.3

1 Introduction

In today's interconnected world, networks are everywhere, from social media connections to the intricate patterns of the internet. These networks can range from simple designs to immensely complex structures involving millions of interconnected nodes. To understand the function and behavior of these complex networks, researchers study their structural features and develop random graph models that mimic them.

An important property of networks is the degree-degree correlation, sometimes called network assortativity, which refers to the statistical relationship between the degrees of neighbouring nodes in a network. It quantifies how the degree of one node is related to the degrees of its adjacent nodes. For instance, if a network has a positive degree-degree correlation, this indicates that nodes of high degree have a preference to connect to other high-degree nodes. In this case, the network is said to exhibit assortative mixing.
Similarly, there exist disassortative networks, with negative degree-degree correlation, where nodes of high degree are mostly neighbours of nodes with small degrees. If the network is neither assortative nor disassortative, it is said to have neutral mixing.

Degree-degree correlations are an important property of networks. For example, networks with assortative mixing might be more vulnerable to targeted attacks where the high-degree nodes are specifically removed [21]. In neuroscience, assortative brain networks have been shown to perform better in terms of signal processing [23]. Assortative networks are also more robust under edge or vertex removal. In financial networks, for example, assortativity may influence systemic risk, since highly interconnected banks are also highly connected to other similar entities [19]. Conversely, networks with disassortative mixing, where high-degree nodes are preferentially connected to low-degree nodes, can behave differently under stress or failure scenarios compared to assortative networks [20]. Disassortative networks do allow for easier immunization when considering epidemic spreading [7].

Given the impact of degree-degree correlations on the function of networks, it is important to properly measure and analyze these correlations. One natural way to do this is to study the asymptotic behaviour of degree-degree metrics in random graph models, as the number of nodes in the network grows large. There are many results available. Some concern specific models [22, 2, 25, 18, 16], while others prove limits for a given metric under certain assumptions on the random graph model [12, 14, 26]. While the latter have the potential to enable analysis of degree-degree correlations in general random graphs, the results are not always easy to apply directly, often resulting in a separate, and sometimes involved, analysis.
The recent development of local convergence has opened up a powerful and general framework to study sparse random graphs, enabling a uniform way to analyze the topology of a wide variety of random graph models. Specifically, if $(G_n)_{n \ge 1}$ is a sequence of finite graphs, local convergence means that the distribution of neighbourhoods around a uniformly sampled node converges to the distribution of neighbourhoods in an infinite rooted random graph. This notion is particularly useful since it implies convergence of local properties of random graphs, while a wide range of random graph models have been shown to have a local limit [13]. The notion of local convergence is by now the standard setting to analyze random graph models and has led to a wide variety of properties being studied. However, apart from some initial results, degree-degree metrics have not been extensively studied, even though most of them are local properties.

In this paper we address this gap by providing general limit results for a wide range of degree-degree metrics for random graphs with a local limit. This makes our results widely applicable to most of the random graph models currently studied, including the popular Geometric Inhomogeneous Random Graphs [5], the Weighted Random Connection Model [9] and the more general Spatial Inhomogeneous Random Graphs [13]. To showcase the usage of our results we analyze degree-degree correlations in rank-1 inhomogeneous random graphs and random geometric graphs. We provide an explicit expression for the Average Nearest Neighbor Degree in both models and for Pearson's correlation coefficient in the latter.

We organize the paper as follows. In Section 2 we provide basic preliminaries for local convergence and state our main convergence results for the different degree-degree metrics.
The applications of our general results to rank-1 inhomogeneous random graphs and random geometric graphs are covered in Section 3. Sections 4 and 5 contain the proofs of the general results and the applications, under the assumption of a few technical lemmas. The proofs of these lemmas are included in the Appendix.

2 Preliminaries and main results

2.1 Local convergence

We start by briefly introducing the concept of local convergence for a sequence of graphs, as introduced in [1, 3, 11]. In particular we focus on the notion of local convergence in probability, which concerns the behavior of local neighborhoods around a random vertex as the graph grows in size. The interested reader is referred to [11] for more details on the topic.

For a graph $G = (V(G), E(G))$ we denote by $d_v$ the degree of $v \in V(G)$. The graph $G$ is called locally finite if $d_v < \infty$ holds for all $v \in V(G)$.

Definition 2.1 (Rooted graphs). A rooted graph is a tuple $(G, o)$, where $G = (V(G), E(G))$ is a graph with a designated vertex $o \in V(G)$ called the root.

For any (undirected) graph $G = (V(G), E(G))$ we denote by $d_G(u, v)$ the graph distance between nodes $u, v$ in $G$, i.e. the length of the shortest path between $u$ and $v$. We adopt the convention $d_G(u, v) = \infty$ if $u$ and $v$ are not connected in $G$.

Definition 2.2 (Neighborhood of the root). Let $(G, o)$ be a locally finite rooted graph. Then, for every $r > 0$ we denote by $B_r^{(G)}(o)$ the subgraph of $G$ induced by $\{v \in V(G) : d_G(o, v) \le r\}$. Informally, $(B_r^{(G)}(o), o)$ is the rooted subgraph of $(G, o)$ having all vertices at graph distance at most $r$ from the root $o$.

Next, we introduce the notion of rooted isomorphism, which aligns with the usual definition of graph isomorphism.

Definition 2.3 (Rooted isomorphisms). Let $(G_1, o_1)$ and $(G_2, o_2)$ be two locally finite rooted graphs.
Then $(G_1, o_1)$ is rooted isomorphic to $(G_2, o_2)$ when there exists a bijective function $\phi: V(G_1) \to V(G_2)$ satisfying
\[
\{u, v\} \in E(G_1) \iff \{\phi(u), \phi(v)\} \in E(G_2)
\]
and $\phi(o_1) = o_2$. Moreover, we use the notation $(G_1, o_1) \simeq (G_2, o_2)$ when $(G_1, o_1)$ is rooted isomorphic to $(G_2, o_2)$.

Thus, using Definition 2.3, we let $\mathcal{G}_\star$ be the space of rooted graphs modulo isomorphism, i.e. $\mathcal{G}_\star$ consists of all equivalence classes of the form $[(G, o)]$, where $(G', o') \in [(G, o)]$ if and only if $(G', o') \simeq (G, o)$. However, we often omit the equivalence class notation and adopt the convention $(G, o) \in \mathcal{G}_\star$; in particular, if $(G', o') \in [(G, o)]$, i.e. $(G', o') \simeq (G, o)$, then $(G, o)$ and $(G', o')$ are considered to be the same. This space can be turned into a Polish space using the metric
\[
d\big((G_1, o_1), (G_2, o_2)\big) = \frac{1}{1 + \sup\{r > 0 : B_r^{(G_1)}(o_1) \simeq B_r^{(G_2)}(o_2)\}}
\]
(see [11]) and thus can be turned into a probability space.

We are now ready to state the definition of local convergence in probability. For a sequence $X, (X_n)_{n \ge 1}$ of random variables we write $X_n \overset{P}{\to} X$ to denote convergence in probability of $X_n$ to $X$.

Definition 2.4 (Local convergence in probability). Let $(G_n)_{n \ge 1}$ be a sequence of (possibly random) graphs that are almost surely finite. Let $(G_n, o_n)$ be the corresponding sequence of rooted graphs, where the root $o_n \in V(G_n)$ is chosen uniformly at random, and where we restrict $G_n$ to the connected component of $o_n$. Then we say $(G_n, o_n)$ converges locally in probability to the (possibly random) connected rooted graph $(G, o) \in \mathcal{G}_\star$, having law $\mu$, if for every rooted graph $H_\star \in \mathcal{G}_\star$ and all integers $r \ge 0$ it holds that
\[
\frac{1}{|V(G_n)|} \sum_{v \in V(G_n)} \mathbb{1}\{B_r^{(G_n)}(v) \simeq H_\star\} \overset{P}{\longrightarrow} \mu\big(B_r^{(G)}(o) \simeq H_\star\big) \quad \text{as } n \to \infty. \tag{1}
\]
If $(G_n, o_n)$ converges locally in probability to $(G, o)$ then, slightly abusing notation, we write $(G_n, o_n) \overset{P}{\longrightarrow} (G, o)$.

2.2 Statement of main results

In this section, we provide convergence results for several different metrics for degree-degree correlations. To analyze these correlations it will be useful to consider directed edges in an undirected graph. Thus, for any (deterministic) undirected graph $G = (V(G), E(G))$ we consider the directed graph $\vec{G} := (V(G), \vec{E}(G))$ such that $\{u, v\} \in E(G)$ if and only if $(u, v), (v, u) \in \vec{E}(G)$, i.e. in $\vec{G}$ there is exactly one directed edge from $u$ to $v$ and one directed edge from $v$ to $u$ for every edge $\{u, v\} \in E(G)$. We will write $\sum_{u \to v}$ for the summation over all directed edges $(u, v)$ in $\vec{G}$. The usage of directed edges allows us to talk about a left and right vertex without any ambiguity, while the inclusion of both pairs for any undirected edge ensures we are treating every vertex in such an edge equally.

To define metrics for degree-degree correlations, several empirical density functions for a finite graph $G = (V(G), E(G))$ are important. First, the empirical degree density function is given by
\[
f_G(k) := \frac{1}{|V(G)|} \sum_{v \in V(G)} \mathbb{1}\{d_v = k\}, \quad \text{for } k = 0, 1, \dots
\]
In addition, the size-biased empirical degree density function is given by
\[
f^*_G(k) := \frac{1}{|\vec{E}(G)|} \sum_{u \to v} \mathbb{1}\{d_u = k\} = \frac{1}{|\vec{E}(G)|} \sum_{u \in V(G)} k \, \mathbb{1}\{d_u = k\} = \frac{k \, |V(G)| \, f_G(k)}{|\vec{E}(G)|}, \quad \text{for } k = 0, 1, \dots
\]
The function $f^*_G(k)$ represents the fraction of edges for which the start vertex has degree $k$, and it describes the degree distribution with a size bias proportional to the degree. We also define the empirical joint degree density function as
\[
h_G(k, \ell) := \frac{1}{|\vec{E}(G)|} \sum_{u \to v} \mathbb{1}\{d_u = k, d_v = \ell\}, \quad \text{for } k, \ell = 0, 1, \dots
\]
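These three empirical densities are straightforward to compute from an edge list; the following is a minimal sketch (the helper names and the three-vertex example are ours, not the paper's).

```python
from collections import Counter

def empirical_densities(edges, n):
    """Return f_G, f*_G and h_G as dicts keyed by degree (degree pairs for h_G).

    `edges` is an undirected edge list on vertices 0..n-1; each undirected
    edge {u, v} contributes both directed edges (u, v) and (v, u)."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    de = [e for u, v in edges for e in ((u, v), (v, u))]
    m = len(de)  # |E->(G)|, twice the number of undirected edges
    f = {k: c / n for k, c in Counter(d).items()}
    f_star = {k: c / m for k, c in Counter(d[u] for u, _ in de).items()}
    h = {kl: c / m for kl, c in Counter((d[u], d[v]) for u, v in de).items()}
    return f, f_star, h

# Path 0-1-2: degrees (1, 2, 1), so f_G(1) = 2/3 and f*_G(1) = f*_G(2) = 1/2.
f, f_star, h = empirical_densities([(0, 1), (1, 2)], 3)
```

On this example one can also check the identity $f^*_G(k) = k\,|V(G)|\,f_G(k)/|\vec{E}(G)|$ from the display above, e.g. $f^*_G(1) = 1 \cdot 3 \cdot (2/3)/4 = 1/2$.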
The function $h_G(k, \ell)$ represents the fraction of edges connecting a start vertex of degree $k$ to an end vertex of degree $\ell$. It captures the full degree correlation structure between connected nodes in $G$. Finally, we write $F_G$, $F^*_G$ and $H_G$ for the cumulative distribution functions whose probability mass functions correspond to $f_G$, $f^*_G$ and $h_G$, respectively.

Remark 2.1. We note that the definitions we give for the different degree-degree metrics are valid for any graph $G$, and do not assume any stochasticity. It is only when we consider their limits on sequences of graphs with a local limit that stochasticity comes into play.

2.2.1 Known result for Pearson's correlation coefficient

The first example of how local convergence can be used to derive limit expressions for degree-degree metrics was given in [11], where the limit for Pearson's correlation coefficient was expressed in terms of the degree of the root and that of a randomly sampled neighbor $V$. We recall that for a finite graph $G$, Pearson's correlation coefficient is given by [12]
\[
r(G) = \frac{\sum_{u \to v} d_u d_v - \frac{1}{|\vec{E}(G)|}\big(\sum_{v \in V(G)} d_v^2\big)^2}{\sum_{v \in V(G)} d_v^3 - \frac{1}{|\vec{E}(G)|}\big(\sum_{v \in V(G)} d_v^2\big)^2}. \tag{2}
\]

Theorem 2.2 ([11], Theorem 2.26). Let $(G_n)_{n \ge 1}$ be a sequence of graphs, where $|V(G_n)|$ tends to infinity and $(d_{o_n}^3)_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$. Then
\[
r(G_n) \overset{P}{\to} \frac{E_\mu[d_o^2 d_V] - E_\mu[d_o^2]^2 / E_\mu[d_o]}{E_\mu[d_o^3] - E_\mu[d_o^2]^2 / E_\mu[d_o]}, \tag{3}
\]
where $V$ is a uniformly chosen neighbour of the root $o$ in the graph $(G, o)$.

We can now proceed to introduce the other degree-degree metrics and provide our main convergence results for them.
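Equation (2) translates directly into code; here is a small self-contained sketch (function name ours). On the path on three vertices the centre has degree 2 and both leaves degree 1, the most disassortative pattern possible, and indeed $r = -1$.

```python
def pearson_r(edges, n):
    """Pearson degree-degree correlation r(G) of Equation (2)."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    de = [e for u, v in edges for e in ((u, v), (v, u))]  # directed edges
    m = len(de)  # |E->(G)|
    s2 = sum(x ** 2 for x in d)
    s3 = sum(x ** 3 for x in d)
    num = sum(d[u] * d[v] for u, v in de) - s2 ** 2 / m
    den = s3 - s2 ** 2 / m
    return num / den

print(pearson_r([(0, 1), (1, 2)], 3))  # path 0-1-2 -> -1.0
```

Note that for a regular graph the denominator of (2) vanishes, so $r$ is undefined there; the rank-based measures introduced next do not have this problem.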
2.2.2 Spearman's rho

Spearman's rho [24] is an alternative to Pearson's $r$ for measuring correlations. It falls into the category of rank-correlation measures, as it is based on the rankings of the degrees rather than their actual values. Because of this, it is consistent under far less restrictive conditions than Pearson's $r$ [12, 14]. When applied to degrees in graphs some additional care is needed, as these values are discrete and thus ties can frequently occur. For instance, consider an edge $(u, v) \in \vec{E}$. Then there are at least $d_u$ other edges $(u, v')$ and thus we will encounter the value $d_u$ at least that many times. There are several different ways to break ties, each leading to a different expression. We will consider the version where ties are broken uniformly at random. This leads to an expression for Spearman's rho as given in [12, Equation (3.2)]. There is, however, a more compact version of Spearman's rho that is asymptotically equivalent and easier to deal with in the mathematical analysis; see [14, Proposition 5.5], [15, Section 2.3] or [12, Section 2.2]. Therefore, throughout this article, we will use this alternative expression instead of the classical one.

To define this measure, for every integer $k \ge 0$, let
\[
F^*_G(k) := \frac{1}{|\vec{E}(G)|} \sum_{v \in V} d_v \, \mathbb{1}\{d_v \le k\}. \tag{4}
\]
Similarly, let $\bar{F}^*_G(k) = F^*_G(k) + F^*_G(k - 1)$ for all $k \ge 0$. Then we define the Spearman's rho degree-degree correlation coefficient by
\[
\rho(G) := \frac{3}{|\vec{E}(G)|} \sum_{u \to v} \bar{F}^*_G(d_u) \bar{F}^*_G(d_v) - 3. \tag{5}
\]

Theorem 2.3. Let $(G_n)_{n \ge 1}$ be a sequence of graphs such that $|V(G_n)|$ tends to infinity and $(d_{o_n})_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$.
Then
\[
\rho(G_n) \overset{P}{\to} \frac{3}{E_\mu[d_o]} E_\mu\big[d_o \bar{F}^*_\mu(d_o) \bar{F}^*_\mu(d_V)\big] - 3,
\]
where $\bar{F}^*_\mu(k) = F^*_\mu(k) + F^*_\mu(k - 1)$, with
\[
F^*_\mu(k) = \frac{E_\mu[d_o \mathbb{1}\{d_o \le k\}]}{E_\mu[d_o]}, \tag{6}
\]
and $V$ is a neighbor of $o$ chosen uniformly at random.

Note that convergence of Spearman's $\rho$ indeed holds whenever $(d_{o_n})_{n \ge 1}$ is uniformly integrable, which is far less restrictive than requiring the uniform integrability of $(d_{o_n}^3)_{n \ge 1}$ needed for Pearson's $r$.

2.2.3 Kendall's tau

Another rank-correlation measure is Kendall's tau [17]. It computes the difference between concordant and discordant pairs of joint observations, normalized by the total number of pairs; see [14, Section 4.4]. Similar to Spearman's rho, we also have an alternative version of Kendall's tau, see [14, Section 5.3.1], which we will consider in this article. We first define, for any pair of integers $k, \ell \ge 0$,
\[
\bar{H}_G(k, \ell) = H_G(k, \ell) + H_G(k - 1, \ell) + H_G(k, \ell - 1) + H_G(k - 1, \ell - 1), \tag{7}
\]
where
\[
H_G(k, \ell) = \frac{1}{|\vec{E}(G)|} \sum_{u \to v} \mathbb{1}\{d_u \le k, d_v \le \ell\}
\]
is the joint degree distribution. Then Kendall's tau for degree-degree correlations is given by
\[
\tau(G) = \frac{1}{|\vec{E}(G)|} \sum_{u \to v} \bar{H}_G(d_u, d_v) - 1. \tag{8}
\]

Theorem 2.4. Let $(G_n)_{n \ge 1}$ be a sequence of rooted graphs and assume that $(d_{o_n})_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$. Then
\[
\tau(G_n) \overset{P}{\to} \frac{E_\mu[d_o \bar{H}_\mu(d_o, d_V)]}{E_\mu[d_o]} - 1,
\]
where $\bar{H}_\mu(k, \ell) = H_\mu(k, \ell) + H_\mu(k - 1, \ell) + H_\mu(k, \ell - 1) + H_\mu(k - 1, \ell - 1)$, with
\[
H_\mu(k, \ell) = \frac{E_\mu[d_o \mathbb{1}\{d_o \le k\} \mathbb{1}\{d_V \le \ell\}]}{E_\mu[d_o]},
\]
and $V$ is a neighbor of $o$ chosen uniformly at random.
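Both rank correlations can be computed directly from Equations (5) and (8); the following minimal sketch (helper names and example graphs ours) does so. On the three-vertex path every edge joins a degree-1 leaf to the degree-2 centre, and a short hand computation gives $\rho = -3/4$ and $\tau = -1/2$, while on any regular graph both vanish, consistent with neutral mixing.

```python
def _deg_and_directed_edges(edges, n):
    """Degrees and the directed edge list E->(G) of an undirected edge list."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d, [e for u, v in edges for e in ((u, v), (v, u))]

def spearman_rho(edges, n):
    """Spearman's rho, Equation (5), with F*_G from Equation (4)."""
    d, de = _deg_and_directed_edges(edges, n)
    m = len(de)
    F_star = lambda k: sum(dv for dv in d if dv <= k) / m
    F_bar = lambda k: F_star(k) + F_star(k - 1)
    return (3 / m) * sum(F_bar(d[u]) * F_bar(d[v]) for u, v in de) - 3

def kendall_tau(edges, n):
    """Kendall's tau, Equation (8), with the joint cdf of Equation (7)."""
    d, de = _deg_and_directed_edges(edges, n)
    m = len(de)
    H = lambda k, l: sum(1 for u, v in de if d[u] <= k and d[v] <= l) / m
    H_bar = lambda k, l: H(k, l) + H(k - 1, l) + H(k, l - 1) + H(k - 1, l - 1)
    return sum(H_bar(d[u], d[v]) for u, v in de) / m - 1

path = [(0, 1), (1, 2)]                    # degrees (1, 2, 1): disassortative
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 2-regular: neutral mixing
```

Note that, unlike Pearson's $r$, both quantities are well defined on the regular graph, where they equal zero.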
2.2.4 Degree-degree distance

In addition to classical (rank-based) correlation metrics, one can also measure degree-degree correlations by looking at the differences between the degrees on both sides of an edge. This was recently proposed independently in [8] and [27], using the notion of degree-degree distance (called degree difference in [8]).

Definition 2.5. For any graph $G$ and monotone function $g: \mathbb{N}_0 \to \mathbb{R}_+$, define the degree-degree distance as
\[
\delta(G) = \frac{1}{|\vec{E}(G)|} \sum_{u \to v} |g(d_u) - g(d_v)|. \tag{9}
\]

The version of this measure for $g(x) = x$ was considered in [8], while [27] used $g(x) = \log(x)$. Here we establish the limit of the degree-degree distance under local convergence in probability.

Theorem 2.5. Let $g: \mathbb{N}_0 \to \mathbb{R}_+$ be a monotone function and $(G_n)_{n \ge 1}$ a sequence of rooted graphs such that $(d_{o_n} g(d_{o_n}))_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$. Then
\[
\delta(G_n) \overset{P}{\to} \frac{E_\mu[d_o |g(d_o) - g(d_V)|]}{E_\mu[d_o]}.
\]

It should be noted that the conditions required on the degrees depend on the function $g$ used. For the case $g(x) = x$ this boils down to the basic uniform integrability of $(d_{o_n}^2)_{n \ge 1}$.

2.2.5 Average nearest neighbor degree and rank

We now move our attention to two measures for degree-degree correlations which capture the local structure of a network. Compared to Spearman's rho, Kendall's tau and the degree-degree distance, which provide a single global value summarizing the overall assortativity or disassortativity of the entire network, these two measures offer a more detailed view. In particular, they show for each integer $k$ how nodes of degree $k$ connect to nodes of various degrees, which helps us understand particular behaviors that might be hidden in the global measures considered above.
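Before turning to these local measures, here is a minimal sketch of the degree-degree distance of Definition 2.5 (function name ours). On the three-vertex path every directed edge has degree gap $|1 - 2| = 1$, so $\delta = 1$ for $g(x) = x$ and $\delta = \log 2$ for the choice $g(x) = \log(x)$ used in [27].

```python
import math

def degree_distance(edges, n, g=lambda x: x):
    """Degree-degree distance delta(G) of Definition 2.5 for a monotone g."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    de = [e for u, v in edges for e in ((u, v), (v, u))]  # directed edges
    return sum(abs(g(d[u]) - g(d[v])) for u, v in de) / len(de)

path = [(0, 1), (1, 2)]
print(degree_distance(path, 3))            # 1.0
print(degree_distance(path, 3, math.log))  # log 2 = 0.693...
```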
We start with the Average Nearest Neighbor Degree (ANND), which measures the average of the degrees of all nodes connected to a given node of degree $k$, properly normalized. It is defined as
\[
\Phi_G(k) = \mathbb{1}\{f_G(k) > 0\} \frac{\sum_{\ell > 0} \ell \, h_G(k, \ell)}{f^*_G(k)}, \quad \text{for } k = 1, 2, \dots \tag{10}
\]

Theorem 2.6. Let $(G_n)_{n \ge 1}$ be a sequence of rooted graphs and assume that $(d_{o_n}^2)_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$. Then for all $k \ge 0$ such that $P(d_o = k) > 0$,
\[
\Phi_{G_n}(k) \overset{P}{\to} E_\mu[d_V \mid d_o = k].
\]

Observe that the ANND compares degrees directly and thus, like Pearson's $r$, needs additional moment conditions for convergence to hold, in this case uniform integrability of $(d_{o_n}^2)_{n \ge 1}$. However, we can apply a similar approach as for Spearman's rho and use the ranks of the degrees instead, or equivalently apply $F^*_G$ to the degrees. This leads to what is known as the Average Nearest Neighbor Rank (ANNR), cf. [26],
\[
\Theta_G(k) := \mathbb{1}\{f_G(k) > 0\} \frac{\sum_{\ell > 0} F^*_G(\ell) \, h_G(k, \ell)}{f^*_G(k)}, \quad \text{for } k = 1, 2, \dots \tag{11}
\]
Working with ranks instead of the values of the degrees yields a convergence result that only requires uniform integrability of the degrees.

Theorem 2.7. Let $(G_n)_{n \ge 1}$ be a sequence of rooted graphs and assume that $(d_{o_n})_{n \ge 1}$ is uniformly integrable. Let $(G, o)$ be a random variable in $\mathcal{G}_\star$ with law $\mu$, such that $P(d_o \ge 1) > 0$. Assume that $G_n$ converges in probability in the local weak sense to $(G, o)$. Then for all $k \ge 0$ such that $P(d_o = k) > 0$,
\[
\Theta_{G_n}(k) \overset{P}{\to} E_\mu\big[F^*_\mu(d_V) \mid d_o = k\big],
\]
where $F^*_\mu$ is defined as in (6).

3 Applications

To showcase our results we apply them to two well-known random graph models.
In particular, for both models we provide explicit expressions for the limit of the Average Nearest Neighbor Degree. The first are rank-1 inhomogeneous random graphs, which will serve as an example of graphs with neutral mixing. After that, we study the behavior of degree-degree correlation metrics in Random Geometric Graphs, which are seen as examples of graphs with assortative mixing, although not many results are known. We provide formal results confirming that these graphs have assortative mixing.

3.1 Rank-1 Inhomogeneous Random Graphs

Let $W$ be a non-negative random variable and $n \in \mathbb{N}$. Then the rank-1 inhomogeneous random graph $\mathrm{IRG}_n(W)$ on $n$ nodes is constructed by considering a sequence $W_1, \dots, W_n$ of i.i.d. random variables with distribution equal to $W$ and connecting each pair of nodes $i$ and $j$ independently with probability
\[
p_{ij} := \min\Big(\frac{W_i W_j}{n}, 1\Big).
\]
This model is a specific instance of a more general class of inhomogeneous random graphs (see [4] or [11]), which in turn were generalizations of the Chung-Lu model [6]. In particular, it is known (see for example [11, Theorem 3.14]) that these graphs converge locally to a Galton-Watson tree, with the root having offspring distributed as $\mathrm{Po}(W)$, while each other individual has independent offspring in such a way that its degree (offspring plus one) follows the size-biased distribution $\mathrm{Po}(W)^*$, where
\[
P(\mathrm{Po}(W)^* = k) = \frac{k \, P(\mathrm{Po}(W) = k)}{E[W]}, \quad \text{for } k = 0, 1, 2, \dots
\]
From this we immediately deduce for the limit graph $(G, o)$ that $d_o \overset{d}{=} \mathrm{Po}(W)$ and that $d_V \overset{d}{=} \mathrm{Po}(W)^*$ has the size-biased distribution and is independent of $d_o$. The fact that $d_o$ and $d_V$ are independent immediately implies that the limits of $\rho(G_n)$ and $\tau(G_n)$ are zero for sequences of rank-1 IRGs $\mathrm{IRG}_n(W)$. Our first application result shows that the average nearest neighbor degree converges to a fixed constant for every $k$, reflecting the neutral mixing.
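This constant-in-$k$ behaviour (the constant being the $1 + E[W^2]/E[W]$ of Proposition 3.1 below) is already visible in a small simulation. The sketch below is our own: we pick $W$ taking the values $1/2$ and $3/2$ with equal probability, so that $E[W] = 1$ and $E[W^2] = 5/4$, fix a seed, and estimate the empirical ANND at two different degrees; both come out close to the predicted $1 + 5/4 = 2.25$, independently of $k$.

```python
import random

def sample_irg(n, w, rng):
    """Rank-1 IRG: connect i and j with probability min(W_i W_j / n, 1)."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < min(w[i] * w[j] / n, 1.0)]

def annd(edges, n, k):
    """Empirical average nearest-neighbour degree at degree k."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    nbr = [d[b] for u, v in edges for a, b in ((u, v), (v, u)) if d[a] == k]
    return sum(nbr) / len(nbr)

rng = random.Random(2026)
n = 1500
w = [0.5 if rng.random() < 0.5 else 1.5 for _ in range(n)]
g = sample_irg(n, w, rng)
# Both estimates hover around 1 + E[W^2]/E[W] = 2.25, regardless of k.
print(annd(g, n, 1), annd(g, n, 2))
```

With this choice $E[W] = 1$, so the expected degree of node $i$ is approximately $W_i$ and the degrees are approximately $\mathrm{Po}(W)$, matching the local limit described above.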
Proposition 3.1 (Average nearest neighbor degree in IRGs). Let $(\mathrm{IRG}_n(W))_{n \ge 1}$ be a sequence of rank-1 IRGs such that $(d_{o_n}^2)_{n \ge 1}$ is uniformly integrable. Then for any $k \ge 1$,
\[
\Phi_{G_n}(k) \overset{P}{\to} 1 + \frac{E[W^2]}{E[W]}.
\]

Note that a sufficient condition for uniform integrability of $(d_{o_n}^2)_{n \ge 1}$ is $E[W^{2+\delta}] < \infty$ for some $0 < \delta < 1$.

Remark 3.2. Note that $1 + E[W^2]/E[W] = E[d_o^2]/E[d_o]$. Hence, the result in Proposition 3.1 reflects the one obtained in [26] for the repeated and erased configuration model (see Theorem 5.5 and Theorem 5.8). This is expected, as both these models have the same Galton-Watson tree as their local limit [10], and this is the only relevant part for establishing the limit of degree-degree metrics. Therefore, the limit established in [26] follows from our main result and Proposition 3.1. Nevertheless, it should be noted that the results in [26] were established under different assumptions on the convergence of the empirical (joint) degree distributions (see Assumptions 4.1 and 4.2) and also include bounds on the speed of convergence. Since there is no direct implication link between these assumptions and local convergence, which we assume in this work, both results should be considered separately.

3.2 Random Geometric Graphs

While the application of our results to rank-1 inhomogeneous random graphs yields mostly known results, we now move to a class of models for which degree-degree correlations have not been extensively studied: random geometric graphs.

Let $\mathbb{T}_n^d$ denote the $d$-dimensional torus of volume $n$, i.e. the box $I_n^d := [-n^{1/d}/2, n^{1/d}/2]^d$ with the boundaries identified. Furthermore, let $p \in (0, 1]$ and $R > 0$. Then the random geometric graph $\mathrm{RGG}_n(p, R)$ with $n$ vertices is constructed by placing $n$ points $X_1, \dots, X_n$ uniformly at random in $I_n^d$ and connecting two points $X_i$ and $X_j$ independently with probability
\[
p_{ij} = p \, \mathbb{1}\{\|X_i - X_j\|_n \le R\},
\]
where $\|\cdot\|_n$ is the torus metric. The classical case of random geometric graphs is when $p = 1$.

As we let $n$ tend to infinity, the box $I_n^d$ will blow up to $\mathbb{R}^d$. Since nodes only care about a fixed neighborhood for establishing edges, the fact that $\mathbb{T}_n^d$ has no boundary will play a diminishing role as $n \to \infty$, while a unit-rate Poisson process will take the role of the "infinite version" of placing $n$ points uniformly at random. So we would expect the local limit of random geometric graphs to consist of a graph whose nodes correspond to a unit-rate Poisson process on $\mathbb{R}^d$, with connection probability $p_{ij} = p \, \mathbb{1}\{\|X_i - X_j\| \le R\}$, where we now use the standard Euclidean metric. The root will then simply be the origin of $\mathbb{R}^d$. The following result establishes this intuition. It is a specific instance of a more general convergence result from [13], applied to the case of random geometric graphs.

Theorem 3.3. Let $(\mathrm{RGG}_n(p, R))_{n \ge 1}$ be a sequence of random geometric graphs with connection radius $R > 0$ and connection probability $p$, and let $(G_\infty(R), o)$ be defined as above. Then $(G_n, o_n) \overset{P}{\longrightarrow} (G_\infty(R), o)$.

Now that we have the local limit, we can apply our results to compute degree-degree measures for random geometric graphs. In the remainder of this section we write $v_d := \pi^{d/2} / \Gamma(1 + d/2)$ for the volume of the unit ball in $\mathbb{R}^d$ and denote by $\omega_d(R) := v_d R^d$ the volume of the ball in $\mathbb{R}^d$ with radius $R$. In addition, we define
\[
p_{\mathrm{conn}} = \frac{1}{\omega_d(R)^2} \iint_{x, y \in B(0, R)} \mathbb{1}\{\|x - y\| \le R\} \, dx \, dy. \tag{12}
\]

We start with a result for Pearson's correlation coefficient.

Proposition 3.4 (Pearson's limit in RGGs). Let $(\mathrm{RGG}_n(p, R))_{n \ge 1}$ be a sequence of random geometric graphs with connection radius $R > 0$ and connection probability $p$.
Then
\[
r(\mathrm{RGG}_n(p, R)) \overset{P}{\to} p_{\mathrm{conn}}.
\]

Remark 3.5. From Proposition 3.4 we make two important observations. First, note that the limit of $r(\mathrm{RGG}_n(p, R))$ does not depend on $p$. This can be explained by viewing the graph $\mathrm{RGG}_n(p, R)$ as being constructed by first generating the graph $\mathrm{RGG}_n(1, R)$ and then keeping each edge independently with probability $p$. Since this procedure treats every edge independently and equally, it should not influence the dependency between the degrees of connected nodes. Second, $p_{\mathrm{conn}}$ is strictly positive. Hence, random geometric graphs are an example of assortative graphs, i.e. those with positive degree-degree correlations. This is not unexpected, as the geometric aspect of the connection rule creates joint neighbors, which in turn establishes a positive correlation between the degrees of two connected nodes. However, as far as we know, this is the first explicit expression for the limit of this measure.

We also obtain the limit of the average nearest neighbor degree.

Proposition 3.6 (ANND limit in RGGs). Let $(\mathrm{RGG}_n(p, R))_{n \ge 1}$ be a sequence of random geometric graphs with connection radius $R > 0$ and connection probability $p$. Then for any $k \ge 1$,
\[
\Phi_{\mathrm{RGG}_n(p, R)}(k) \overset{P}{\to} 1 + p \, \omega_d(R)(1 - p_{\mathrm{conn}}) + (k - 1) p_{\mathrm{conn}}.
\]

This result shows that the average degree of the nearest neighbors scales linearly with the degree $k$ of the root, making RGGs a clear example of an assortative random graph model.

Remark 3.7. Our results for random geometric graphs complement and extend those in the recent work [16]. There, the degree distribution of a node on a randomly sampled edge was computed and used to argue why these graphs have assortative mixing. In contrast, our results show the assortative mixing by proving explicit values for two degree-degree correlation measures.
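To make Propositions 3.4 and 3.6 concrete, one can evaluate them in dimension $d = 1$, where $B(0, R) = [-R, R]$, $\omega_1(R) = 2R$, and a direct computation of the integral in (12) gives $p_{\mathrm{conn}} = 3/4$, independently of $R$. The sketch below (helpers and seed ours) checks this value of $p_{\mathrm{conn}}$ by Monte Carlo and evaluates the ANND limit of Proposition 3.6, which is linear in $k$ with slope $p_{\mathrm{conn}}$.

```python
import random

def p_conn_mc(R, samples=100_000, seed=0):
    """Monte Carlo estimate of p_conn, Equation (12), in dimension d = 1:
    the probability that two independent uniform points of [-R, R]
    lie within distance R of each other (exact value: 3/4)."""
    rng = random.Random(seed)
    hits = sum(abs(rng.uniform(-R, R) - rng.uniform(-R, R)) <= R
               for _ in range(samples))
    return hits / samples

def annd_limit_rgg_1d(k, p, R, p_conn=0.75):
    """ANND limit of Proposition 3.6 for d = 1, where omega_1(R) = 2R."""
    return 1 + p * 2 * R * (1 - p_conn) + (k - 1) * p_conn

est = p_conn_mc(R=2.0)  # close to 3/4
# The limit is affine in k with slope p_conn, the assortativity signature.
slope = annd_limit_rgg_1d(2, p=1.0, R=1.0) - annd_limit_rgg_1d(1, p=1.0, R=1.0)
```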
Remark 3.8. We can also apply our results for Spearman's $\rho$, Kendall's $\tau$ and the average nearest neighbor rank to the limit of random geometric graphs. Unfortunately, this will not yield an insightful and compact expression. For example, for Spearman's $\rho$ the limit will involve terms of the form $E[F_d(X + Z - 1) F_d(Y + Z - 1)]$, where $F_d$ denotes the cdf of a Poisson random variable with mean $\omega_d(R)$ and $X$, $Y$, $Z$ are Poisson random variables whose parameters represent the volumes of the intersection of two balls and those of their two disjoint parts. While it is possible to write out this expression fully, it does not yield any particularly insightful closed formula. We thus omit these computations here.

4 Proofs of main results

4.1 General idea and approach

The first important thing to note is that the definition of local convergence in probability (Definition 2.4) is equivalent to
\[
E[h(G_n, o_n) \mid G_n] \overset{P}{\to} E_\mu[h(G, o)],
\]
for any continuous bounded function $h: \mathcal{G}_\star \to \mathbb{R}$ [11]. This implication will be used often in our proofs. Many proofs will boil down to recognizing the expression as the expected value of some function $\phi$ evaluated on $d_{o_n}$ and $d_{V_n}$, conditioned on the graph $G_n$. Once this is achieved, the following technical lemma can be used to obtain the convergence to the appropriate limit under the right uniform integrability conditions.

Lemma 4.1. Let $(G_n)_{n \ge 1}$ be a sequence of graphs such that $|V(G_n)|$ tends to infinity and $G_n$ converges locally in probability to $(G, o)$ with law $\mu$. Let $\phi: \mathbb{N}_0 \times \mathbb{N}_0 \to \mathbb{R}_+$ be a measurable function such that the sequence $(\phi(d_{o_n}, d_{V_n}))_{n \ge 1}$ is uniformly integrable, with $o_n$ a uniform random node in $G_n$ and $V_n$ a uniform neighbor. Then
\[
E[\phi(d_{o_n}, d_{V_n}) \mid G_n] \overset{P}{\to} E_\mu[\phi(d_o, d_V)],
\]
where $V$ is a uniform neighbor of $o$.

Proof.
For any $K\ge0$ we have
$$\big|\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mid G_n] - \mathbb{E}_\mu[\varphi(d_o,d_V)]\big| \le \big|\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mathbf{1}_{\varphi(d_{o_n},d_{V_n})\le K}\mid G_n] - \mathbb{E}_\mu[\varphi(d_o,d_V)\mathbf{1}_{\varphi(d_o,d_V)\le K}]\big| + \mathbb{E}[\varphi(d_{o_n},d_{V_n})\mathbf{1}_{\varphi(d_{o_n},d_{V_n})>K}\mid G_n] + \mathbb{E}_\mu[\varphi(d_o,d_V)\mathbf{1}_{\varphi(d_o,d_V)>K}].$$
Since $(G,o)\mapsto\varphi(d_o,d_V)\mathbf{1}_{\varphi(d_o,d_V)\le K}$ is bounded and continuous (in the topology on $\mathcal{G}_\star$), the first term converges to zero in probability as $n\to\infty$, due to the fact that $(G_n)_{n\ge1}$ converges locally in probability to $(G,o)$. In particular, this implies that
$$\limsup_{K\to\infty}\limsup_{n\to\infty}\mathbb{P}\Big(\big|\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mathbf{1}_{\varphi(d_{o_n},d_{V_n})\le K}\mid G_n] - \mathbb{E}_\mu[\varphi(d_o,d_V)\mathbf{1}_{\varphi(d_o,d_V)\le K}]\big| > \varepsilon\Big) = 0.$$
For the second term, we use the uniform integrability of $\varphi(d_{o_n},d_{V_n})$ and Markov's inequality to conclude that
$$\limsup_{K\to\infty}\limsup_{n\to\infty}\mathbb{P}\big(\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mathbf{1}_{\varphi(d_{o_n},d_{V_n})>K}\mid G_n] > \varepsilon\big) = 0.$$
Finally, for the third term (which does not depend on $n$) we have
$$\limsup_{K\to\infty}\mathbb{E}_\mu\big[\varphi(d_o,d_V)\mathbf{1}_{\varphi(d_o,d_V)>K}\big] = 0.$$
Putting this together, we conclude that
$$\limsup_{n\to\infty}\mathbb{P}\big(\big|\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mid G_n] - \mathbb{E}_\mu[\varphi(d_o,d_V)]\big| > \varepsilon\big) = \limsup_{K\to\infty}\limsup_{n\to\infty}\mathbb{P}\big(\big|\mathbb{E}[\varphi(d_{o_n},d_{V_n})\mid G_n] - \mathbb{E}_\mu[\varphi(d_o,d_V)]\big| > \varepsilon\big) = 0.$$

In addition, we will also make use of several basic facts concerning convergence in probability, summarized in the following lemma. For completeness, we include the proof in Appendix A.

Lemma 4.2. Let $(G_n)_{n\ge1}$ be a sequence of graphs such that $|V(G_n)|$ tends to infinity, and $G_n$ converges locally in probability to $(G,o)$ with law $\mu$. Then
1. $\sup_{k\ge0}|F^*_{G_n}(k) - F^*_\mu(k)| \xrightarrow{\mathbb{P}} 0$;
2. $\sup_{k,\ell\ge0}|H_{G_n}(k,\ell) - H_\mu(k,\ell)| \xrightarrow{\mathbb{P}} 0$;
3. $\mathbb{E}[F^*_{G_n}(d_{o_n})\mid G_n] \xrightarrow{\mathbb{P}} \mathbb{E}_\mu[F^*_\mu(d_o)]$;
4.
If, in addition, $(d_{o_n})_{n\ge1}$ is uniformly integrable and $\mathbb{P}(d_o\ge1)>0$, then $2|V(G_n)|/|\vec{E}(G_n)| \xrightarrow{\mathbb{P}} 1/\mathbb{E}_\mu[d_o]$.

With this setup, we are ready to provide the proofs of our main results.

4.2 Spearman's rho

Proof of Theorem 2.3. We have to prove that
$$\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}\mathcal{F}_{G_n}(d_u)\mathcal{F}_{G_n}(d_v) \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[d_o\,\mathcal{F}_\mu(d_o)\mathcal{F}_\mu(d_V)]}{\mathbb{E}_\mu[d_o]}. \quad (13)$$
To this end, we replace each instance of $\mathcal{F}_{G_n}$ with $\mathcal{F}_\mu$ and observe that
$$\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}\mathcal{F}_\mu(d_u)\mathcal{F}_\mu(d_v) = \frac{2}{|\vec{E}(G_n)|}\sum_{u\in V(G_n)}\mathcal{F}_\mu(d_u)\sum_{v\sim u}\mathcal{F}_\mu(d_v) = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\frac{1}{|V(G_n)|}\sum_{u\in V(G_n)}d_u\,\mathcal{F}_\mu(d_u)\Big(\frac{1}{d_u}\sum_{v\sim u}\mathcal{F}_\mu(d_v)\Big) = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\,\mathbb{E}[d_{o_n}\mathcal{F}_\mu(d_{o_n})\mathcal{F}_\mu(d_{V_n})\mid G_n],$$
where $V_n$ is a uniform neighbor of $o_n$ in $G_n$. By Lemma 4.2, $2|V(G_n)|/|\vec{E}(G_n)| \xrightarrow{\mathbb{P}} \mathbb{E}_\mu[d_o]^{-1}$. Next, since the sequence $(d_{o_n})_{n\ge1}$ is uniformly integrable and $d_{o_n}\mathcal{F}_\mu(d_{o_n})\mathcal{F}_\mu(d_{V_n}) \le 4d_{o_n}$, we conclude that $(d_{o_n}\mathcal{F}_\mu(d_{o_n})\mathcal{F}_\mu(d_{V_n}))_{n\ge1}$ is also uniformly integrable. Using Lemma 4.1, this then implies that
$$\frac{2|V(G_n)|}{|\vec{E}(G_n)|}\,\mathbb{E}[d_{o_n}\mathcal{F}_\mu(d_{o_n})\mathcal{F}_\mu(d_{V_n})\mid G_n] \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[d_o\,\mathcal{F}_\mu(d_o)\mathcal{F}_\mu(d_V)]}{\mathbb{E}_\mu[d_o]}, \quad (14)$$
where $V$ is a neighbor of $o$ sampled uniformly at random. To finish the proof, we recall that the difference between (13) and (14) is that $\mathcal{F}_{G_n}$ is replaced with $\mathcal{F}_\mu$. Therefore, the result follows if we can show that
$$\frac{1}{|\vec{E}(G_n)|}\Big|\sum_{u\to v}\mathcal{F}_{G_n}(d_u)\mathcal{F}_{G_n}(d_v) - \sum_{u\to v}\mathcal{F}_\mu(d_u)\mathcal{F}_\mu(d_v)\Big| \xrightarrow{\mathbb{P}} 0. \quad (15)$$
Lemma 4.2 implies that for all $k\ge0$,
$$\mathcal{F}_{G_n}(k) = F^*_{G_n}(k) + F^*_{G_n}(k-1) \xrightarrow{\mathbb{P}} F^*_\mu(k) + F^*_\mu(k-1) =: \mathcal{F}_\mu(k).$$
For $n\ge1$ and $k\ge0$, let $X_n(k) := \mathcal{F}_{G_n}(k) - \mathcal{F}_\mu(k)$.
Then,
$$\sum_{u\to v}\mathcal{F}_{G_n}(d_u)\mathcal{F}_{G_n}(d_v) = \sum_{u\to v}\big(\mathcal{F}_\mu(d_u)+X_n(d_u)\big)\big(\mathcal{F}_\mu(d_v)+X_n(d_v)\big) = \sum_{u\to v}\mathcal{F}_\mu(d_u)\mathcal{F}_\mu(d_v) + \sum_{u\to v}X_n(d_u)\mathcal{F}_\mu(d_v) + \sum_{u\to v}\mathcal{F}_\mu(d_u)X_n(d_v) + \sum_{u\to v}X_n(d_u)X_n(d_v).$$
Moreover, since $0\le\mathcal{F}_\mu(k)\le2$, for all $u,v\in V(G_n)$,
$$0 \le X_n(d_u)\mathcal{F}_\mu(d_v) \le 2\sup_{m\ge0}|\mathcal{F}_{G_n}(m)-\mathcal{F}_\mu(m)|, \qquad 0 \le X_n(d_u)X_n(d_v) \le \Big(\sup_{m\ge0}|\mathcal{F}_{G_n}(m)-\mathcal{F}_\mu(m)|\Big)^2.$$
We then obtain the following upper bound:
$$\frac{1}{|\vec{E}(G_n)|}\Big|\sum_{u\to v}\mathcal{F}_{G_n}(d_u)\mathcal{F}_{G_n}(d_v) - \sum_{u\to v}\mathcal{F}_\mu(d_u)\mathcal{F}_\mu(d_v)\Big| \le \frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}X_n(d_u)\mathcal{F}_\mu(d_v) + \frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}\mathcal{F}_\mu(d_u)X_n(d_v) + \frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}X_n(d_u)X_n(d_v) \le 4\sup_{m\ge0}|\mathcal{F}_{G_n}(m)-\mathcal{F}_\mu(m)| + \Big(\sup_{m\ge0}|\mathcal{F}_{G_n}(m)-\mathcal{F}_\mu(m)|\Big)^2. \quad (16)$$
By Lemma 4.2 we know that
$$\sup_{m\ge0}|F^*_{G_n}(m) - F^*_\mu(m)| \xrightarrow{\mathbb{P}} 0. \quad (17)$$
Since $\mathcal{F}_{G_n}$ and $\mathcal{F}_\mu$ are linear combinations of $F^*_{G_n}$ and $F^*_\mu$, respectively, the last two terms in (16) converge to zero in probability, which establishes (15) and finishes the proof.

4.3 Kendall's tau

Proof of Theorem 2.4. We follow the same strategy as for the proof of Theorem 2.3. Hence, we need to show that
$$\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}H_{G_n}(d_u,d_v) \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[d_o H_\mu(d_o,d_V)]}{\mathbb{E}_\mu[d_o]}.$$
Again, by replacing $H_{G_n}$ in the term on the left-hand side with $H_\mu$, we get that
$$\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}H_\mu(d_u,d_v) = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\frac{1}{|V(G_n)|}\sum_{u\in V(G_n)}d_u\sum_{v\sim u}\frac{1}{d_u}H_\mu(d_u,d_v) = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\,\mathbb{E}[d_{o_n}H_\mu(d_{o_n},d_{V_n})\mid G_n],$$
with $V_n$ a uniform random neighbor of $o_n$. Using Lemma 4.2 we have that $2|V(G_n)|/|\vec{E}(G_n)| \xrightarrow{\mathbb{P}} \mathbb{E}_\mu[d_o]^{-1}$. Moreover, since $H_\mu(k,\ell)\le4$, the sequence $(d_{o_n}H_\mu(d_{o_n},d_{V_n}))_{n\ge1}$ is uniformly integrable.
This then implies that
$$\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}H_\mu(d_u,d_v) \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[d_o H_\mu(d_o,d_V)]}{\mathbb{E}_\mu[d_o]},$$
and hence we are left to show that
$$\frac{1}{|\vec{E}(G_n)|}\Big|\sum_{u\to v}H_{G_n}(d_u,d_v) - \sum_{u\to v}H_\mu(d_u,d_v)\Big| \xrightarrow{\mathbb{P}} 0. \quad (18)$$
This follows readily since
$$\frac{1}{|\vec{E}(G_n)|}\Big|\sum_{u\to v}H_{G_n}(d_u,d_v) - \sum_{u\to v}H_\mu(d_u,d_v)\Big| \le \sup_{k,\ell\ge0}|H_{G_n}(k,\ell) - H_\mu(k,\ell)|$$
and the right-hand side converges to zero in probability by Lemma 4.2.

4.4 Degree-degree distance

Proof of Theorem 2.5. Following the approach from the previous proofs, we first write
$$\delta(G_n) = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\frac{1}{|V(G_n)|}\sum_{u\in V(G_n)}d_u\sum_{v\sim u}\frac{1}{d_u}|g(d_u)-g(d_v)| = \frac{2|V(G_n)|}{|\vec{E}(G_n)|}\,\mathbb{E}[d_{o_n}|g(d_{o_n})-g(d_{V_n})|\mid G_n],$$
where $V_n$ is a uniform random neighbor of $o_n$. Again, $2|V(G_n)|/|\vec{E}(G_n)| \xrightarrow{\mathbb{P}} \mathbb{E}_\mu[d_o]^{-1}$. Moreover, we observe that
$$\mathbb{E}[d_{o_n}|g(d_{o_n})-g(d_{V_n})|\mid G_n] = \frac{1}{2|V(G_n)|}\sum_{u\to v}|g(d_u)-g(d_v)| \le \frac{2}{|V(G_n)|}\sum_{u\in V(G_n)}d_u\,g(d_u) = 2\,\mathbb{E}[d_{o_n}g(d_{o_n})\mid G_n].$$
Therefore, since by assumption $(d_{o_n}g(d_{o_n}))_{n\ge1}$ is uniformly integrable, so is $(d_{o_n}|g(d_{o_n})-g(d_{V_n})|)_{n\ge1}$. Thus Lemma 4.1 implies that
$$\mathbb{E}[d_{o_n}|g(d_{o_n})-g(d_{V_n})|\mid G_n] \xrightarrow{\mathbb{P}} \mathbb{E}_\mu[d_o|g(d_o)-g(d_V)|],$$
which finishes the proof.

4.5 Average nearest neighbor degree and rank

Proof of Theorem 2.6.
We start by rewriting the expression of $\psi_{G_n}$ in (10) as follows:
$$\mathbf{1}_{f_{G_n}(k)>0}\frac{\sum_{\ell>0}\ell\,h_{G_n}(k,\ell)}{f^*_{G_n}(k)} = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{\ell>0}\sum_{u\to v}d_v\mathbf{1}_{d_u=k,\,d_v=\ell} = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{u\to v}d_v\mathbf{1}_{d_u=k} = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{u\in V(G_n)}d_u\mathbf{1}_{d_u=k}\sum_{v\sim u}\frac{1}{d_u}d_v = k\,\frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{u\in V(G_n)}\mathbf{1}_{d_u=k}\sum_{v\sim u}\frac{1}{d_u}d_v = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\frac{1}{|V(G_n)|}\sum_{u\in V(G_n)}\mathbf{1}_{d_u=k}\sum_{v\sim u}\frac{1}{d_u}d_v = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\,\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}d_{V_n}\mid G_n\big].$$
Here, for the second-to-last step we used that
$$f^*_{G_n}(k) = \frac{|V(G_n)|}{|\vec{E}(G_n)|}\,k\,f_{G_n}(k).$$
Since $\mathbf{1}_{d_{o_n}=k}d_{V_n} \le d^2_{o_n}$, the sequence $(\mathbf{1}_{d_{o_n}=k}d_{V_n})_{n\ge1}$ is uniformly integrable by our assumption. Hence, Lemma 4.1 implies that
$$\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}d_{V_n}\mid G_n\big] \xrightarrow{\mathbb{P}} \mathbb{E}_\mu\big[\mathbf{1}_{d_o=k}d_V\big].$$
Finally, we note that $\mathbf{1}_{f_{G_n}(k)>0}/f_{G_n}(k) \xrightarrow{\mathbb{P}} \mu(d_o=k)^{-1}$, which then implies that
$$\psi_{G_n}(k) \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[\mathbf{1}_{d_o=k}d_V]}{\mu(d_o=k)} = \mathbb{E}_\mu[d_V\mid d_o=k].$$

The proof for the average nearest neighbor rank follows the same lines, after replacing $F^*_{G_n}$ with $F^*_\mu$. Here we only need uniform integrability of $d_{o_n}$, since we deal with $\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n})$ instead of $\mathbf{1}_{d_{o_n}=k}d_{V_n}$. We include it here for completeness.

Proof of Theorem 2.7. Following similar computations as in the previous proof, we write
$$\theta_{G_n}(k) = k\,\frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{u\in V(G_n)}\mathbf{1}_{d_u=k}\sum_{v\sim u}\frac{1}{d_u}F^*_{G_n}(d_v) = \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\,\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_{G_n}(d_{V_n})\mid G_n\big]. \quad (19)$$
Let us now replace $F^*_{G_n}$ with $F^*_\mu$.
Then, since $\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n}) \le d_{o_n}$ and $(d_{o_n})_{n\ge1}$ is uniformly integrable, Lemma 4.1 implies that
$$\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n})\mid G_n\big] \xrightarrow{\mathbb{P}} \mathbb{E}_\mu\big[\mathbf{1}_{d_o=k}F^*_\mu(d_V)\big].$$
Together with $\mathbf{1}_{f_{G_n}(k)>0}/f_{G_n}(k) \xrightarrow{\mathbb{P}} \mu(d_o=k)^{-1}$, we have
$$\frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\,\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n})\mid G_n\big] \xrightarrow{\mathbb{P}} \frac{\mathbb{E}_\mu[\mathbf{1}_{d_o=k}F^*_\mu(d_V)]}{\mu(d_o=k)} = \mathbb{E}_\mu[F^*_\mu(d_V)\mid d_o=k].$$
Thus, we are left to show that
$$\frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\Big|\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_{G_n}(d_{V_n})\mid G_n\big] - \mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n})\mid G_n\big]\Big| \xrightarrow{\mathbb{P}} 0.$$
Going back to (19) we get that
$$\frac{\mathbf{1}_{f_{G_n}(k)>0}}{f_{G_n}(k)}\Big|\mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_{G_n}(d_{V_n})\mid G_n\big] - \mathbb{E}\big[\mathbf{1}_{d_{o_n}=k}F^*_\mu(d_{V_n})\mid G_n\big]\Big| \le \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\frac{1}{|\vec{E}(G_n)|}\sum_{u\in V(G_n)}\mathbf{1}_{d_u=k}\Big|\sum_{v\sim u}\big(F^*_{G_n}(d_v)-F^*_\mu(d_v)\big)\Big| \le \frac{\mathbf{1}_{f_{G_n}(k)>0}}{f^*_{G_n}(k)}\sup_{\ell\ge0}\big|F^*_{G_n}(\ell)-F^*_\mu(\ell)\big|.$$
The first factor converges to $\big(\mathbb{E}_\mu[\mathbf{1}_{d_o=k}d_o]/\mathbb{E}_\mu[d_o]\big)^{-1}$, while Lemma 4.2 implies that $\sup_{\ell\ge0}|F^*_{G_n}(\ell)-F^*_\mu(\ell)| \xrightarrow{\mathbb{P}} 0$, yielding the required result.

Figure 1: Joint neighborhood of the root $o$ and uniform neighbor $V$ at distance $r$ for a 2-dimensional random geometric graph with radius $R$. The green areas contain the first type of neighbors and the purple area contains the joint neighbors (second type).

5 Proofs of application results

5.1 Rank-1 inhomogeneous random graphs

Proof of Proposition 3.1. We first observe that
$$\mathbb{P}(d_o=k) = \mathbb{E}\Big[\frac{W^k}{k!}e^{-W}\Big],$$
which is positive for all $k\ge1$ since $W$ is non-negative. In particular, $\mathbb{P}(d_o\ge1)>0$ and hence Theorem 2.6 implies that
$$\varphi_n(k) \xrightarrow{\mathbb{P}} \mathbb{E}[d_V\mid d_o=k] = \mathbb{E}[d_V],$$
where the last equality holds because $d_o$ and $d_V$ are independent. Using that $d_V$ has the size-biased distribution, we get that $\mathbb{E}[d_V] = \mathbb{E}[d_o^2]/\mathbb{E}[W]$.
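As an aside (not part of the proof), this size-biasing identity is easy to check numerically. The following sketch assumes, purely for illustration, weights $W\sim\mathrm{Exp}(1)$, for which $\mathbb{E}[W]=1$ and $\mathbb{E}[W^2]=2$, so that $\mathbb{E}[d_o^2]/\mathbb{E}[W]$ should be close to $3$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Mixed-Poisson degree of the root: W ~ Exp(1), then d_o ~ Po(W) given W
w = rng.exponential(1.0, size=1_000_000)
d = rng.poisson(w)

# Size-biased mean neighbor degree: E[d_V] = E[d_o^2] / E[d_o], with E[d_o] = E[W]
annd_mc = np.mean(d.astype(float) ** 2) / np.mean(d)

# Claimed limit: (E[W] + E[W^2]) / E[W] = (1 + 2) / 1 = 3 for Exp(1) weights
print(annd_mc)
```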
Finally, we note that $\mathbb{E}[d_o^2] = \mathbb{E}[W] + \mathbb{E}[W^2]$, from which the result follows.

5.2 Random geometric graphs

To prove our results for random geometric graphs, we need some intermediate results for the local limit graph $G_\infty(R,p)$. The first result concerns the uniform integrability of the degrees of the root.

Lemma 5.1. Let $(\mathrm{RGG}_n(p,R))_{n\ge1}$ be a sequence of random geometric graphs with connection radius $R>0$ and connection probability $p$, and let $o_n$ be a uniformly sampled vertex in $\mathrm{RGG}_n(p,R)$. Then the sequence $(d^3_{o_n})_{n\ge1}$ is uniformly integrable.

Next, we study the degree distribution of $d_V$ conditioned on $d_o=k$ and $d(o,V)=r$. Denote by $B_o(\delta)$ the $d$-dimensional Euclidean ball of radius $\delta$ around $o$, and define $B_V(\delta)$ in a similar manner. Then there are two types of neighbors of $o$: those that are not neighbors of $V$, and those that are. The first type is made up of all nodes in $B_o(R)\setminus B_V(R)$, while the second type are the nodes in $B_o(R)\cap B_V(R)$. A similar argument holds for the neighbors of $V$. Of course, there will also be a number of joint neighbors of $o$ and $V$; this number depends on the distance $r=d(o,V)$, see also Figure 1. Write
$$\lambda_1(r) = \mathrm{vol}\big(B_o(R)\setminus B_V(R)\big) \quad\text{and}\quad \lambda_2(r) = \mathrm{vol}\big(B_o(R)\cap B_V(R)\big),$$
and note that by symmetry $\mathrm{vol}(B_V(R)\setminus B_o(R)) = \lambda_1(r) = \omega_d(R)-\lambda_2(r)$.

Now if $X$ and $Y$ are independent with $X \stackrel{d}{=} \mathrm{Po}(p\lambda_1(r))$ and $Y \stackrel{d}{=} \mathrm{Po}(p\lambda_2(r))$, then $d_o \stackrel{d}{=} X+Y \stackrel{d}{=} \mathrm{Po}(p\,\omega_d(R))$. Conditioned on $d_o=k$ and $d(o,V)=r$, the number of neighbors in $B_o(R)\cap B_V(R)$ has a binomial distribution $Z_k(r) := \mathrm{Bin}(k,\rho(r))$ with $k$ trials and success probability
$$\rho(r) := \frac{p\lambda_2(r)}{\mathbb{E}[d_o]} = \frac{\lambda_2(r)}{\omega_d(R)}.$$
Therefore, the neighbors of $V$ that are also neighbors of $o$ have the same distribution as $Z_{k-1}(r)$, since $V$ is itself one of the nodes in $B_o(R)\cap B_V(R)$.
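For concreteness (our own illustration; the closed form below is the standard equal-radii circle-circle intersection area, not something derived in the paper): in $d=2$, $\lambda_2(r)$ is the area of the lens cut out by two disks of radius $R$ whose centers are $r$ apart, $\lambda_2(r) = 2R^2\arccos\big(\tfrac{r}{2R}\big) - \tfrac{r}{2}\sqrt{4R^2-r^2}$, and $\lambda_1(r) = \pi R^2 - \lambda_2(r)$. A quick Monte Carlo check of this closed form:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
R, r = 1.0, 0.6                        # two disks of radius R, centers r apart

# Closed-form lens area for equal radii (standard circle-circle intersection)
lens = 2 * R**2 * math.acos(r / (2 * R)) - (r / 2) * math.sqrt(4 * R**2 - r**2)

# Monte Carlo: fraction of B_o(R) that also lies in B_V(R), with V at (r, 0)
m = 500_000
theta = rng.uniform(0, 2 * math.pi, m)
rad = R * np.sqrt(rng.uniform(0, 1, m))   # uniform points in the disk B_o(R)
x, y = rad * np.cos(theta), rad * np.sin(theta)
lam2_mc = math.pi * R**2 * np.mean((x - r) ** 2 + y**2 <= R**2)

print(lens, lam2_mc)                   # lambda_2(0.6) for R = 1, two estimates
```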
Next, we observe that the number of neighbors of $V$ that are not neighbors of $o$ is a Poisson random variable $X(r) \stackrel{d}{=} \mathrm{Po}(p\lambda_1(r))$. Hence, we conclude that
$$d_V \mid \{d_o=k,\ d(o,V)=r\} \stackrel{d}{=} 1 + X(r) + Z_{k-1}(r). \quad (20)$$
Now that we have the degree distribution of $V$ conditioned on its distance to $o$ and the degree $d_o$, we need to understand how this distance behaves.

Lemma 5.2 (Distance to a uniform neighbor). Let $V$ be a uniform neighbor of $o$ in $G_\infty(R,p)$ and denote by $r_V := d(o,V)$ the distance between $o$ and $V$. Then $r_V$ has probability density function
$$f_{r_V}(r) = \frac{d}{R^d}\,r^{d-1}\,\mathbf{1}_{0\le r\le R}.$$

With this lemma, we can now compute the expected degree of a uniform neighbor of the root, conditioned on the degree of the root.

Lemma 5.3 (Conditional expected degree of a uniform neighbor). Let $V$ be a uniform neighbor of $o$ in $G_\infty(R,p)$. Then
$$\mathbb{E}[d_V\mid d_o=k] = 1 + p\,\omega_d(R)(1-p_{\mathrm{conn}}) + (k-1)p_{\mathrm{conn}}.$$

We now have everything needed to prove the main results on random geometric graphs. We start with Pearson's correlation coefficient (Proposition 3.4) and then move to the average nearest neighbor degree (Proposition 3.6).

Proof of Proposition 3.4. From Lemma 5.3 it follows that
$$\mathbb{E}[d_o^2 d_V\mid d_o=k] = k^2\,\mathbb{E}[d_V\mid d_o=k] = k^2\big(1 + p\,\omega_d(R)(1-p_{\mathrm{conn}}) + (k-1)p_{\mathrm{conn}}\big).$$
Thus, since
$$\mathbb{E}[d_o^2 d_V] = \sum_{k=1}^\infty k^2\,\mathbb{E}[d_V\mid d_o=k]\,\mathbb{P}(d_o=k)$$
and $d_o \sim \mathrm{Po}(\mu)$ with $\mu := p\,\omega_d(R)$, this becomes
$$\mathbb{E}[d_o^2 d_V] = \sum_{k=1}^\infty e^{-\mu}\frac{\mu^k}{k!}\,k^2\big[1 + p\,\omega_d(R)(1-p_{\mathrm{conn}}) + (k-1)p_{\mathrm{conn}}\big].$$
We decompose the previous expression into three sums,
$$\mathbb{E}[d_o^2 d_V] = S_1 + p_{\mathrm{conn}}S_2 + p\,\omega_d(R)(1-p_{\mathrm{conn}})S_1,$$
where
$$S_1 = \sum_{k=1}^\infty e^{-\mu}\frac{\mu^k}{k!}\,k^2, \qquad S_2 = \sum_{k=1}^\infty e^{-\mu}\frac{\mu^k}{k!}\,k^2(k-1).$$
By using standard identities for Poisson moments, $\mathbb{E}[d_o(d_o-1)] = \mu^2$ and $\mathbb{E}[d_o(d_o-1)(d_o-2)] = \mu^3$, we get
$$S_1 = \mathbb{E}[d_o^2] = \mu+\mu^2, \qquad S_2 = \mathbb{E}[d_o^2(d_o-1)] = \mu^3+2\mu^2.$$
Substituting these back, we obtain
$$\mathbb{E}[d_o^2 d_V] = (\mu+\mu^2) + p_{\mathrm{conn}}(\mu^3+2\mu^2) + p\,\omega_d(R)(1-p_{\mathrm{conn}})(\mu+\mu^2) = \mu + 2\mu^2 + \mu^3 + p_{\mathrm{conn}}\mu^2.$$
Furthermore,
$$\frac{\mathbb{E}[d_o^2]^2}{\mathbb{E}[d_o]} = \frac{(\mu^2+\mu)^2}{\mu} = \mu^3+2\mu^2+\mu.$$
Thus, we obtain
$$\mathbb{E}[d_o^2 d_V] - \frac{\mathbb{E}[d_o^2]^2}{\mathbb{E}[d_o]} = p_{\mathrm{conn}}\mu^2, \qquad \mathbb{E}[d_o^3] - \frac{\mathbb{E}[d_o^2]^2}{\mathbb{E}[d_o]} = \mu^2.$$
Hence, by applying Theorem 2.2, we conclude that $r(G_n) \xrightarrow{\mathbb{P}} p_{\mathrm{conn}}$.

Proof of Proposition 3.6. Recall that $d_o \stackrel{d}{=} \mathrm{Po}(p\,\omega_d(R))$. In particular, $\mathbb{P}(d_o\ge1)>0$ and by Lemma 5.1 the sequence $(d^2_{o_n})_{n\ge1}$ is uniformly integrable. Thus Theorem 2.6 implies that
$$\varphi_n(k) \xrightarrow{\mathbb{P}} \mathbb{E}[d_V\mid d_o=k].$$
The result then follows immediately from Lemma 5.3.

Acknowledgments. The results in this paper were established during the bachelor project of the first author, under supervision of the second author. The first author is grateful for the invaluable guidance and insightful discussions throughout the project. We also want to thank Remco van der Hofstad for his constructive feedback and questions during the thesis presentation, especially on the conditions for ANNR, and Vlad Mihai Ciuperceanu for his meticulous proofreading of the first version of the manuscript.

6 Bibliography

[1] D. Aldous and J. M. Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on Discrete Structures, pages 1-72. Springer, 2004. doi:10.1007/978-3-662-09444-0_1.
[2] A. Antonioni and M. Tomassini. Degree correlations in random geometric graphs. Physical Review E, 86(3):037101, 2012. doi:10.1103/physreve.86.037101.
[3] I. Benjamini and O. Schramm. Recurrence of distributional limits of finite planar graphs.
In Selected Works of Oded Schramm, pages 533-545. Springer, 2011. doi:10.1007/978-1-4419-9675-6_15.
[4] B. Bollobás, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Structures & Algorithms, 31(1):3-122, 2007. doi:10.1002/rsa.20168.
[5] K. Bringmann, R. Keusch, and J. Lengler. Geometric inhomogeneous random graphs. Theoretical Computer Science, 760:35-54, 2019. doi:10.1016/j.tcs.2018.08.014.
[6] F. Chung and L. Lu. The average distances in random graphs with given expected degrees. Proceedings of the National Academy of Sciences, 99(25):15879-15882, 2002. doi:10.1073/pnas.252631999.
[7] G. D'Agostino, A. Scala, V. Zlatić, and G. Caldarelli. Robustness and assortativity for diffusion-like processes in scale-free networks. EPL (Europhysics Letters), 97(6):68006, 2012. doi:10.1209/0295-5075/97/68006.
[8] A. Farzam, A. Samal, and J. Jost. Degree difference: a simple measure to characterize structural heterogeneity in complex networks. Scientific Reports, 10(1):21348, 2020. doi:10.1038/s41598-020-78336-9.
[9] P. Gracar, L. Lüchtrath, and P. Mörters. Percolation phase transition in weight-dependent random connection models. Advances in Applied Probability, 53(4):1090-1114, 2021. doi:10.1017/apr.2021.13.
[10] R. van der Hofstad. Random Graphs and Complex Networks, volume 1. Cambridge University Press, 2016. doi:10.1017/9781316779422.
[11] R. van der Hofstad. Random Graphs and Complex Networks, volume 2. Cambridge University Press, 2024.
[12] R. van der Hofstad and N. Litvak. Degree-degree dependencies in random graphs with heavy-tailed degrees. Internet Mathematics, 10(3-4):287-334, 2014. doi:10.1080/15427951.2013.850455.
[13] R. van der Hofstad, P. van der Hoorn, and N. Maitra. Local limits of spatial inhomogeneous random graphs. Advances in Applied Probability, 55(3):793-840, 2023. doi:10.1017/apr.2022.61.
[14] P.
van der Hoorn. Asymptotic analysis of network structures: degree-degree correlations and directed paths, 2016. doi:10.3990/1.9789036541794.
[15] P. van der Hoorn, L. O. Prokhorenkova, and E. Samosvat. Generating maximally disassortative graphs with given degree distribution. Stochastic Systems, 8(1):1-28, 2018. doi:10.1287/stsy.2017.0006.
[16] M. Kaufmann, U. Schaller, T. Bläsius, and J. Lengler. Assortativity in geometric and scale-free networks. arXiv preprint arXiv:2508.04608, 2025.
[17] M. G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81-93, 1938. doi:10.1093/biomet/30.1-2.81.
[18] P. Mann, V. A. Smith, J. B. Mitchell, and S. Dobson. Degree correlations in graphs with clique clustering. Physical Review E, 105(4):044314, 2022. doi:10.1103/PhysRevE.105.044314.
[19] C. Minoiu, C. Kang, V. S. Subrahmanian, and A. Berea. Does financial connectedness predict crises? IMF Working Paper, 2013. doi:10.5089/9781475554250.001.
[20] M. E. Newman. Assortative mixing in networks. Physical Review Letters, 89(20):208701, 2002. doi:10.1103/PhysRevLett.89.208701.
[21] M. E. Newman. Mixing patterns in networks. Physical Review E, 67(2):026126, 2003. doi:10.1103/PhysRevE.67.026126.
[22] Z. Nikoloski, N. Deo, and L. Kucera. Degree-correlation of scale-free graphs. Discrete Mathematics & Theoretical Computer Science, (Proceedings), 2005. doi:10.46298/dmtcs.3406.
[23] C. Schmeltzer, A. H. Kihara, I. M. Sokolov, and S. Rüdiger. Degree correlations optimize neuronal network sensitivity to sub-threshold stimuli. PLoS ONE, 10(6):e0121794, 2015. doi:10.1371/journal.pone.0121794.
[24] C. Spearman. The proof and measurement of association between two things. The American Journal of Psychology, 15(1):72-101, 1904. doi:10.1037/11491-005.
[25] C. Stegehuis. Degree correlations in scale-free random graph models. Journal of Applied Probability, 56(3):672-700, 2019.
doi:10.1017/jpr.2019.45.
[26] D. Yao, P. van der Hoorn, and N. Litvak. Average nearest neighbor degrees in scale-free networks. Internet Mathematics, 2018. doi:10.24166/im.02.2018.
[27] B. Zhou, X. Meng, and H. E. Stanley. Power-law distribution of degree-degree distance: A better representation of the scale-free property of complex networks. Proceedings of the National Academy of Sciences, 117(26):14812-14818, 2020. doi:10.1073/pnas.1918901117.

A Proof of Lemma 4.2

Proof. 1. Fix $\epsilon>0$ and let $K := K(\epsilon)$ be such that $F^*_\mu(k) > 1-\frac{\epsilon}{2}$ for all $k\ge K(\epsilon)$, which exists since $\lim_{k\to\infty}F^*_\mu(k) = 1$. This then implies that for all $n\ge1$,
$$\sup_{k>K(\epsilon)}\big|F^*_{G_n}(k) - F^*_\mu(k)\big| \le \max\Big\{\frac{\epsilon}{2},\,\big|F^*_{G_n}(K(\epsilon))-1\big|\Big\}, \quad (21)$$
and hence
$$\mathbb{P}\Big(\sup_{k>K(\epsilon)}\big|F^*_{G_n}(k)-F^*_\mu(k)\big| > \epsilon\Big) \le \mathbb{P}\big(\big|F^*_{G_n}(K(\epsilon))-1\big| > \epsilon\big) \le \mathbb{P}\Big(\big|F^*_{G_n}(K(\epsilon))-F^*_\mu(K(\epsilon))\big| + \frac{\epsilon}{2} > \epsilon\Big) = \mathbb{P}\Big(\big|F^*_{G_n}(K(\epsilon))-F^*_\mu(K(\epsilon))\big| > \frac{\epsilon}{2}\Big).$$
Since $K(\epsilon)$ is fixed, the last term converges to zero as $n\to\infty$. For the other part of the supremum we have
$$\mathbb{P}\Big(\sup_{k\le K(\epsilon)}\big|F^*_{G_n}(k)-F^*_\mu(k)\big| > \epsilon\Big) \le \sum_{k=1}^{K(\epsilon)}\mathbb{P}\big(\big|F^*_{G_n}(k)-F^*_\mu(k)\big| > \epsilon\big),$$
which converges to zero since each of the finitely many terms in the sum does. Together we conclude that for any $\epsilon>0$,
$$\lim_{n\to\infty}\mathbb{P}\Big(\sup_{k\ge0}\big|F^*_{G_n}(k)-F^*_\mu(k)\big| > \epsilon\Big) = 0,$$
as required.

2. Following a similar approach as for the proof of the previous part, we fix $\epsilon>0$ and pick $M := M(\epsilon)$ such that $F_\mu(m) > 1-\frac{\epsilon}{2}$ for all $m\ge M(\epsilon)$. Next, since $H_\mu(k,\ell)\to F_\mu(k)$ as $\ell\to\infty$, we can pick for each $k$ an $L_k$ such that $H_\mu(k,\ell) > 1-\frac{\epsilon}{2}$ for all $\ell\ge L_k$. In a similar fashion we pick $K_\ell$ such that $H_\mu(k,\ell) > 1-\frac{\epsilon}{2}$ for all $k\ge K_\ell$. Now we set $L := \max_{k\le M}L_k$ and $K := \max_{\ell\le M}K_\ell$.
We then get that for all $\ell\ge0$,
$$\sup_{k>K}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big| \le \max\Big\{\frac{\epsilon}{2},\,\big|H_{G_n}(K,\ell)-1\big|\Big\},$$
and similarly, for any $k\ge0$,
$$\sup_{\ell>L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big| \le \max\Big\{\frac{\epsilon}{2},\,\big|H_{G_n}(k,L)-1\big|\Big\}.$$
Then, using similar considerations as in the previous proof, we get that
$$\mathbb{P}\Big(\sup_{k>K,\,\ell>L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big| > \epsilon\Big) \to 0.$$
We are now left with three remaining terms:
$$\sup_{k\le K,\,\ell\le L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big|, \quad (22)$$
$$\sup_{k\le K,\,\ell>L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big|, \quad (23)$$
$$\sup_{k>K,\,\ell\le L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big|. \quad (24)$$
The first term converges to zero in probability since $K$ and $L$ are fixed. The second term converges to zero since
$$\mathbb{P}\Big(\sup_{k\le K,\,\ell>L}\big|H_{G_n}(k,\ell)-H_\mu(k,\ell)\big| > \varepsilon\Big) \le \mathbb{P}\Big(\sup_{k\le K}\big|H_{G_n}(k,L)-H_\mu(k,L)\big| > \varepsilon/2\Big) \le \sum_{k\le K}\mathbb{P}\big(\big|H_{G_n}(k,L)-H_\mu(k,L)\big| > \varepsilon/2\big),$$
and $K$ is fixed. The third term converges to zero by a similar reasoning.

3. We first bound the difference between the two terms as follows:
$$\big|\mathbb{E}[F^*_{G_n}(d_{o_n})\mid G_n] - \mathbb{E}_\mu[F^*_\mu(d_o)]\big| \le \big|\mathbb{E}[F^*_{G_n}(d_{o_n})\mid G_n] - \mathbb{E}[F^*_\mu(d_{o_n})\mid G_n]\big| + \big|\mathbb{E}[F^*_\mu(d_{o_n})\mid G_n] - \mathbb{E}_\mu[F^*_\mu(d_o)]\big|.$$
The first term is bounded by $\sup_{k\ge0}|F^*_{G_n}(k)-F^*_\mu(k)|$ and converges to zero by part 1. For the second term we note that the map $(G,o)\mapsto F^*_\mu(d_o)$ is bounded and continuous. Hence, by Lemma 4.1 this term converges to zero in probability as well.

4. Note that
$$\frac{|\vec{E}(G_n)|}{|V(G_n)|} = \frac{2}{|V(G_n)|}\sum_{v\in V(G_n)}d_v = 2\,\mathbb{E}[d_{o_n}\mid G_n].$$
Since $(d_{o_n})_{n\ge1}$ is uniformly integrable, Lemma 4.1 implies that $|\vec{E}(G_n)|/|V(G_n)| \xrightarrow{\mathbb{P}} 2\,\mathbb{E}_\mu[d_o]$. Finally, $\mathbb{P}(d_o\ge1)>0$ then implies that $2|V(G_n)|/|\vec{E}(G_n)| \xrightarrow{\mathbb{P}} 1/\mathbb{E}_\mu[d_o]$.

B Proof of Lemma 5.1

Proof. Fix $\epsilon>0$.
We need to show that there exist $M_0, N\in\mathbb{N}$ such that for all $M>M_0$ and $n>N=N(M_0)$,
$$\mathbb{E}\big[d^3_{o_n}\mathbf{1}_{\{d_{o_n}>M\}}\big] < \epsilon.$$
Consider the random geometric graph $\mathrm{RGG}_n(p,R)$ defined on the $d$-dimensional torus $\mathbb{T}_n = [-n^{1/d}/2,\,n^{1/d}/2]^d$. Let $n$ be large enough so that the ball $B(0,R)$ is contained in $\mathbb{T}_n$. The graph $G_n$ has $n$ vertices $X_1,\ldots,X_n$ positioned independently and uniformly at random in $\mathbb{T}_n$. Fix a vertex $o_n$ chosen uniformly at random from $\{X_1,\ldots,X_n\}$, and condition on its location $X_{o_n}=x$. By translation invariance we can assume w.l.o.g. that $x=O$, so that $X_{o_n}$ is at the origin. The remaining $n-1$ vertices are then independent and uniformly distributed over $\mathbb{T}_n$. For each such vertex with position $X_j$, define the indicator $Y_j := \mathbf{1}_{\{\|X_j\|_n\le R\}}$, and note that, conditional on $X_{o_n}=O$, the degree of the root can be expressed as $d_{o_n} = \sum_j Y_j$. Since
$$\mathbb{P}(X_j\in B(x,R)\mid X_{o_n}=O) = \frac{\mathrm{vol}(B(x,R)\cap\mathbb{T}_n)}{\mathrm{vol}(\mathbb{T}_n)} = \frac{\omega_d(R)}{n},$$
and each edge exists independently with probability $p$, we obtain
$$\mathbb{P}(Y_j=1\mid X_{o_n}=x) \le p\cdot\frac{\omega_d(R)}{n}.$$
It follows that, conditionally on $X_{o_n}=O$, $d_{o_n}$ is stochastically dominated by the binomial distribution
$$Z_n \stackrel{d}{=} \mathrm{Bin}\Big(n-1,\,\frac{p\,\omega_d(R)}{n}\Big).$$
Because $R$ is fixed, the success probability of $Z_n$ is of order $1/n$. Thus, for $n$ large enough, the fourth moment of $Z_n$ is uniformly bounded, $\sup_{n\ge n_0}\mathbb{E}[Z_n^4] < \infty$ for some $n_0$, which implies by Markov's inequality that for any $M>0$,
$$\mathbb{E}\big[Z_n^3\mathbf{1}_{\{Z_n>M\}}\big] \le \frac{\mathbb{E}[Z_n^4]}{M} \le \frac{C}{M},$$
where $C := \sup_{n\ge n_0}\mathbb{E}[Z_n^4]$. Picking $M$ large enough, we thus conclude that
$$\mathbb{E}\big[d^3_{o_n}\mathbf{1}_{\{d_{o_n}>M\}}\big] \le \mathbb{E}\big[Z_n^3\mathbf{1}_{\{Z_n>M\}}\big] < \epsilon,$$
and hence $(d^3_{o_n})_{n\ge1}$ is uniformly integrable.

C Proof of Lemma 5.2

Proof. Let $r_V$ denote the distance from the root $o$ to a uniformly chosen neighbor.
For $s\in[0,R]$, define the cumulative distribution function (CDF) of $r_V$ conditioned on $d_o\ge1$ by
$$F_{r_V}(s) := \mathbb{P}(r_V\le s\mid d_o\ge1),$$
where $d_o$ is the degree of the root. Consider the infinite random geometric graph $G_\infty(R,p) := \Phi\cup\{o\}$, where $\Phi$ is a homogeneous unit-rate Poisson point process on $\mathbb{R}^d$. As mentioned previously, the number of neighbors of the root satisfies $d_o \stackrel{d}{=} \mathrm{Po}(p\,\omega_d(R))$, where $\omega_d(R) = v_d R^d$ is the volume of the $d$-dimensional ball of radius $R$ and $v_d$ is the volume of the unit ball. Let $N_s$ denote the number of neighbors of the root within the ball $B(o,s)$ and $M_s$ the number of neighbors in $B(o,R)\setminus B(o,s)$. Then
$$N_s \stackrel{d}{=} \mathrm{Po}(p v_d s^d), \qquad M_s \stackrel{d}{=} \mathrm{Po}\big(p v_d(R^d-s^d)\big),$$
with $N_s$ and $M_s$ independent, so that $d_o = N_R = N_s + M_s$. By conditioning on $\Phi$ and $\{d_o\ge1\}$, we have
$$\mathbb{P}(r_V\le s\mid \Phi) = \frac{N_s}{d_o}\,\mathbf{1}_{\{d_o\ge1\}},$$
since $V$ is equally likely to be any neighbor, and $N_s$ of them lie within $B(o,s)$. Using the law of total expectation, we then obtain
$$\mathbb{P}(r_V\le s\mid d_o\ge1) = \mathbb{E}\big[\mathbf{1}_{\{r_V\le s\}}\mid d_o\ge1\big] = \mathbb{E}\big[\mathbb{E}[\mathbf{1}_{\{r_V\le s\}}\mid\Phi]\mid d_o\ge1\big] = \mathbb{E}\Big[\frac{N_s}{d_o}\,\Big|\,d_o\ge1\Big] = \mathbb{E}\Big[\frac{N_s}{N_R}\,\Big|\,N_R\ge1\Big] = \frac{\mathbb{E}\big[\frac{N_s}{N_R}\mathbf{1}_{\{N_R\ge1\}}\big]}{\mathbb{P}(N_R\ge1)}.$$
Next, we compute the numerator. Since $N_R = N_s + M_s$ with $N_s$ and $M_s$ independent Poisson variables, we get
$$\mathbb{E}\Big[\frac{N_s}{N_R}\mathbf{1}_{\{N_R\ge1\}}\Big] = \sum_{k=0}^\infty\sum_{m=0}^\infty\frac{k}{k+m}\mathbf{1}_{\{k+m\ge1\}}\mathbb{P}(N_s=k)\mathbb{P}(M_s=m) = \sum_{n=1}^\infty\sum_{k=1}^n\frac{k}{n}\mathbb{P}(N_s=k)\mathbb{P}(M_s=n-k) = \sum_{n=1}^\infty\sum_{k=1}^n\frac{k}{n}\,e^{-pv_ds^d}\frac{(pv_ds^d)^k}{k!}\,e^{-pv_d(R^d-s^d)}\frac{[pv_d(R^d-s^d)]^{n-k}}{(n-k)!} = e^{-pv_dR^d}\sum_{n=1}^\infty\frac{(pv_dR^d)^n}{n!}\frac{s^d}{R^d}\sum_{j=0}^{n-1}\binom{n-1}{j}\Big(\frac{s^d}{R^d}\Big)^j\Big(1-\frac{s^d}{R^d}\Big)^{n-1-j},$$
where for the last line we used the substitution $j=k-1$.
Recognizing the binomial distribution with success probability $s^d/R^d$, whose probabilities sum to one, we conclude that
$$\mathbb{E}\Big[\frac{N_s}{N_R}\mathbf{1}_{\{N_R\ge1\}}\Big] = e^{-pv_dR^d}\,\frac{s^d}{R^d}\sum_{n=1}^\infty\frac{(pv_dR^d)^n}{n!} = \frac{s^d}{R^d}\big(1-e^{-pv_dR^d}\big).$$
Since $\mathbb{P}(N_R\ge1) = 1-e^{-pv_dR^d}$, we conclude
$$F_{r_V}(s) = \mathbb{E}\Big[\frac{N_s}{N_R}\,\Big|\,N_R\ge1\Big] = \frac{s^d}{R^d}, \qquad 0\le s\le R,$$
and we set $F_{r_V}(s)=0$ for $s<0$ and $F_{r_V}(s)=1$ for $s>R$. Differentiating, the probability density function (PDF) is
$$f_{r_V}(r) = \frac{d}{dr}F_{r_V}(r) = \frac{d}{R^d}\,r^{d-1}, \qquad 0\le r\le R.$$

D Proof of Lemma 5.3

Proof. Recall that we need to compute $\mathbb{E}[d_V\mid d_o=k]$. For this, we first condition on the distance $r_V := d(o,V)$. It then follows from (20) that
$$\mathbb{E}[d_V\mid d_o=k,\,d(o,V)=r] = 1 + p\lambda_1(r) + (k-1)\frac{\lambda_2(r)}{\omega_d(R)} = 1 + p\big(\omega_d(R)-\lambda_2(r)\big) + (k-1)\frac{\lambda_2(r)}{\omega_d(R)}.$$
We thus have to compute the expectation of $\lambda_2(r_V)$, taken with respect to the probability density function of $r_V = d(o,V)$ from Lemma 5.2. This yields
$$\mathbb{E}_{r_V}[\lambda_2(r_V)] = \int_0^R\lambda_2(r)\,\frac{d}{R^d}\,r^{d-1}\,dr = \frac{1}{\omega_d(R)}\int_0^R\lambda_2(r)\,d\,v_d\,r^{d-1}\,dr = \frac{1}{\omega_d(R)}\int_{x\in B_0(R)}\lambda_2(\|x\|)\,dx = \frac{1}{\omega_d(R)}\iint_{x,y\in B_0(R)}\mathbf{1}_{\|x-y\|\le R}\,dy\,dx = \omega_d(R)\,p_{\mathrm{conn}}.$$
We now conclude that
$$\mathbb{E}[d_V\mid d_o=k] = 1 + p\,\omega_d(R) - p\,\omega_d(R)p_{\mathrm{conn}} + (k-1)p_{\mathrm{conn}} = 1 + p\,\omega_d(R)(1-p_{\mathrm{conn}}) + (k-1)p_{\mathrm{conn}}, \quad (25)$$
which finishes the proof.
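The key identity $\mathbb{E}_{r_V}[\lambda_2(r_V)] = \omega_d(R)\,p_{\mathrm{conn}}$ can be verified numerically in $d=2$ (a sketch with assumed values: we use the classical lens-area formula for two equal disks, the density $f_{r_V}(r)=2r/R^2$ from Lemma 5.2, and the classical value $p_{\mathrm{conn}} = 1 - 3\sqrt{3}/(4\pi) \approx 0.5865$ for the probability that two uniform points in a disk lie within one radius of each other):

```python
import math

R = 1.0
omega = math.pi * R**2                          # omega_2(R)

def lam2(r):
    # area of B_o(R) ∩ B_V(R) for centers at distance r (circular lens)
    return 2 * R**2 * math.acos(r / (2 * R)) - (r / 2) * math.sqrt(4 * R**2 - r**2)

# E[lam2(r_V)] with density f(r) = 2 r / R^2 on [0, R], via the midpoint rule
m = 100_000
h = R / m
expected_lam2 = sum(lam2((i + 0.5) * h) * 2 * ((i + 0.5) * h) / R**2 * h
                    for i in range(m))

p_conn = 1 - 3 * math.sqrt(3) / (4 * math.pi)   # two uniform points in a disk
print(expected_lam2, omega * p_conn)            # the two values should agree
```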
