Resistance growth of branching random networks
Dayue Chen^a, Yueyun Hu^b and Shen Lin^c

November 5, 2018
Abstract
Consider a rooted infinite Galton–Watson tree with mean offspring number m > 1, and a collection of i.i.d. positive random variables ξ_e indexed by all the edges in the tree. We assign the resistance m^d ξ_e to each edge e at distance d from the root. In this random electric network, we study the asymptotic behavior of the effective resistance and conductance between the root and the vertices at depth n. Our results generalize an existing work of Addario-Berry, Broutin and Lugosi on the binary tree to random branching networks.

Keywords. Electric networks, Galton–Watson tree, random conductance.
AMS 2010 Classification Numbers.
^a Department of Probability and Statistics, School of Mathematical Sciences, Peking University, Beijing, China. E-mail: [email protected]
^b LAGA, Université Paris XIII, Villetaneuse, France. E-mail: [email protected]
^c Sorbonne Université, Laboratoire de Probabilités, Statistique et Modélisation, Paris, France. E-mail: [email protected]
Cooperation between D.C. and Y.H. was supported by NSFC 11528101. Research of S.L. was partially supported by the grant ANR-14-CE25-0014 (ANR GRAAL).

Introduction

An electric network is an undirected, locally finite, connected graph G = (V, E) with a countable set of vertices V and a set of edges E, endowed with nonnegative numbers {r(e), e ∈ E}, called resistances, that are associated to the edges of G. The reciprocal c(e) = 1/r(e) is called the conductance of the edge e. It is well known that the electrical properties of the network (G, {r(e)}) are closely related to the nearest-neighbor random walk on G, whose transition probabilities from a vertex are proportional to the conductances along the edges to be taken. See, for instance, the book of Lyons and Peres [11] for a detailed exposition of this connection.

To study random walks in certain random environments, it is natural to consider a random electric network by choosing the resistances independent and identically distributed. For example, the infinite cluster of bond percolation on Z^d can be seen as a random electric network in which each open edge has unit resistance and each closed edge has infinite resistance. Grimmett, Kesten and Zhang [7] proved that when d ≥ 3, the effective resistance of this network between a fixed point and infinity is a.s. finite; thus the simple random walk on this infinite percolation cluster is a.s. transient. In [3], Benjamini and Rossignol considered a different model on the cubic lattice Z^d, where the resistance of each edge is an independent copy of a Bernoulli random variable. They showed that the point-to-point effective resistance has sub-mean variance in Z², whereas the mean and the variance are of the same order when d ≥ 3. The case of a complete graph on n vertices has also been studied by Grimmett and Kesten [6]. For a particular class of resistance distributions on the edges (see Theorem 3 in [6]), as n → ∞, the limit distribution of the random effective resistance between two specified vertices was identified as the sum of two i.i.d. random variables, each with the distribution of the effective resistance between the root and infinity in a Galton–Watson tree with a supercritical Poisson offspring distribution.

In this paper, we investigate the effective resistance and conductance in a supercritical Galton–Watson tree T rooted at ∅. Let p = (p_k)_{k≥0} be the offspring distribution of T, with finite mean m >
1. We assume p₀ = 0 to avoid conditioning on survival. Formally, every vertex in T can be represented as a finite word written with positive integers. The depth |x| of a vertex x in T is the number of edges on the unique non-self-intersecting path from the root ∅ to x, which also equals the length of the word representing x. Let T_n := {x ∈ T : |x| = n} denote the n-th level of T. We write ←x for the parent vertex of x if x ≠ ∅. For each edge e = {←x, x} of T, we define its depth d(e) := |x|. Let ν be the number of children of the root, whose expected value is m. For 1 ≤ i ≤ ν, the edge {∅, i} between the root ∅ and its child i has depth 1. If x and y are vertices of T, we write x ⪯ y if x is on the non-self-intersecting path connecting ∅ and y. In this case, we say that y is a descendant of x. We define T_n[x] := {y ∈ T_n : x ⪯ y} as the set of vertices at depth n that are descendants of x.

If the resistance of an edge at depth d equals λ^d with a deterministic λ >
0, Lyons [8] showed that the effective resistance between the root and infinity in T is a.s. infinite if λ > m and a.s. finite if λ < m. The corresponding λ-biased random walk on T is thus recurrent if λ > m, and transient if λ < m. For the critical value λ = m, we know by a subsequent work of Lyons [9] that the network still has an infinite effective resistance between the root and infinity. More precisely, the critical λ-biased random walk is null recurrent provided Σ_k k(log k) p_k < ∞.

When the edges of T have random resistances, we are mainly interested in the similar case of critical exponential weighting: to each edge e at depth d(e), we assign the resistance

    r(e) := m^{d(e)} ξ(e),   (1.1)

where, conditionally on T, {ξ(e)} are i.i.d. copies of a nonnegative random variable ξ. We will call (T, {r(e)}) a branching random network of offspring distribution p and electric resistance ξ. For convenience, we assume that (T, {r(e)}) and ξ are independent and defined under the same probability measure P.

Let R_n (resp. C_n) be the effective resistance (resp. effective conductance) between the root ∅ and the vertices at depth n in (T, {r(e)}). When T is a deterministic binary tree, Addario-Berry, Broutin and Lugosi [1] showed that as n → ∞,

    E[R_n] = E[ξ] n − (Var[ξ]/E[ξ]) log n + O(1)

and

    E[C_n] = 1/(E[ξ] n) + (Var[ξ]/E[ξ]³) (log n)/n² + O(n^{−2}),

provided ξ is bounded away from both zero and infinity. Their arguments are based on the concentration phenomenon of C_n and R_n when the underlying tree is regular. The Efron–Stein inequality is the main tool to deduce the following upper bounds on the variances:

    Var[R_n] = O(1) and Var[C_n] = O(n^{−4}).

A sub-Gaussian tail bound is also established for R_n, which gives E[|R_n − E[R_n]|^k] = O(1) for all k ≥ 1. As observed in the concluding remarks in [1], if the tree T is random, C_n and R_n are no longer concentrated.
For any nonnegative random variable X, we set {X} := X/E[X] whenever 0 < E[X] < ∞.

Theorem 1.1.
Assuming that E[ξ + ξ^{−1} + ν²] < ∞, we have the almost sure convergence

    {C_n} → W as n → ∞,   (1.2)

where W := lim_{n→∞} m^{−n} #T_n.

We write W_n := m^{−n} #T_n. When E[ν²] < ∞, it is well known that (W_n)_{n≥0} is an L²-bounded martingale. The convergence W_n → W holds almost surely and in the L²-sense. The limit W is almost surely strictly positive, with E[W] = 1 and E[W²] = (Σ_k k² p_k − m)/(m(m − 1)). Similarly, for each vertex x ∈ T, the random variable W^{(x)} := lim_{n→∞} m^{|x|−n} #T_n[x] has the same distribution as W. Using the tree notation |x| = n to denote a vertex x at depth n, we have W = m^{−n} Σ_{|x|=n} W^{(x)}.

Theorem 1.1 answers some questions mentioned at the end of [1]. When the offspring number ν is not deterministic, it implies that the limit distribution of {C_n} is absolutely continuous with respect to the Lebesgue measure, which is a "scaled analogue" of Question 4.1 in Lyons, Pemantle and Peres [10]. For the absolute continuity of W, see for instance Theorem 10.4 in Chapter 1 of [2].

For our next result, let us define

    a₁ := m^{−2} E[ν(ν − 1)],   (1.3)
    b₁ := E[ξ],   c := a₁ b₁/(1 − m^{−1}).   (1.4)

Notice that, by Theorems 22 and 23 in Dubuc [5], E[W^{−1}] < ∞ if and only if p₁ m < 1.

Theorem 1.2. Assuming that E[ξ² + ξ^{−1} + ν³] < ∞, we have

    lim_{n→∞} n E[C_n] = 1/c.   (1.5)

If additionally p₁ m < 1, then

    lim_{n→∞} E[R_n]/n = c E[W^{−1}].

If p₁ m ≥
1, by Fatou's lemma, we deduce from (1.2) and (1.5) that

    liminf_{n→∞} E[R_n]/n = ∞.

See also the remark at the end of Section 3.

To state a more precise asymptotic expansion for E[C_n], we define

    a₂ := m^{−3} E[ν(ν − 1)(ν − 2)],   (1.6)
    b₂ := E[ξ²],   c₂ := (1 − m^{−2})^{−1} (3a₁²/(m − 1) + a₂),   (1.7)
    c₃ := 2a₁c/(m − 1) − 2b₁c₂/m,   (1.8)
    c₄ := b₁(1 − m^{−1})^{−1}(c₃/c + a₁) − b₂c₂/c.   (1.9)

If ν = m ≥ 2, then c = b₁ = E[ξ], c₂ = 1, c₃ = 0 and c₄ = b₁ − b₂/b₁ = −Var[ξ]/E[ξ].

Theorem 1.3.
Assume that E[ξ³ + ξ^{−1} + ν⁴] < ∞. Then there exists a constant c₅ ∈ R such that, as n → ∞,

    E[C_n] = 1/(cn) − (c₄/c²)(log n)/n² − (c₅/c²)(1/n²) + O((log n)²/n³).

The constant c₅ appearing in the expansion above will be defined at the end of Section 4, but its explicit value is unknown to us.

To further describe the rate of convergence in (1.2), we write ξ_x := ξ({←x, x}) for every vertex x ≠ ∅. Remark that, conditionally on the first ℓ levels of the tree T, the random variables W^{(x)}, |x| = ℓ, are i.i.d. and independent of ξ_x, |x| = ℓ. Notice that W^{(x)}(1 − ξ_x W^{(x)}/c) is of zero mean, because c = E[ξ] E[W²]. When E[ξ² + ν²] < ∞, one can easily verify that

    Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|x|=ℓ} W^{(x)} (1 − ξ_x W^{(x)}/c)

converges in L².

Theorem 1.4. Assuming that E[ξ³ + ξ^{−1} + ν⁴] < ∞, we have

    n ({C_n} − W) →(P) Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|x|=ℓ} W^{(x)} (1 − ξ_x W^{(x)}/c) as n → ∞,

and, with the same constant c₅ as in Theorem 1.3,

    R_n − (c/W) n − (c₄/W) log n − (1/W)( c₅ − (1/W) Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|x|=ℓ} W^{(x)} (c − ξ_x W^{(x)}) ) →(P) 0 as n → ∞,   (1.10)

where →(P) indicates convergence in probability.

The rest of the paper is organized as follows. In the next section, we recall Thomson's principle for the effective resistance, and we derive the recurrence relation for C_n. In Section 3, we collect some estimates on the moments of C_n. The convergence (1.5) and Theorem 1.3 will be shown in Section 4 by analyzing the recurrence equations on the moments of C_n. Similar arguments have already been used in the proof of Theorem 5 in [1]. By second moment calculations, we establish Theorems 1.1 and 1.4 in Section 5, and, by proving the uniform integrability of (n^{−1} R_n)_{n≥1}, we complete the proof of Theorem 1.2 in Section 6. Finally, in Section 7 we briefly discuss the case where we change the scaling by assigning to each edge e in T the resistance λ^{d(e)} ξ(e) with λ > m.
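The objects above are easy to experiment with numerically. The following is a minimal Monte Carlo sketch (not taken from the paper): it samples a Galton–Watson tree down to level n, puts resistance m^d ξ on each depth-d edge, computes C_n by the series–parallel reduction, and prints n times the empirical mean, which Theorem 1.2 suggests should stabilize near 1/c. The offspring law (one or three children with equal probability, so m = 2) and the law of ξ (uniform on [0.5, 1.5]) are illustrative choices, not assumptions of the paper.

```python
import random

random.seed(0)

M = 2.0  # mean offspring number m

def sample_offspring():
    # illustrative offspring law with p_0 = 0 and mean m = 2:
    # one child or three children, each with probability 1/2
    return random.choice((1, 3))

def sample_xi():
    # illustrative edge variable xi, uniform on [0.5, 1.5], E[xi] = 1
    return random.uniform(0.5, 1.5)

def conductance(n, depth=0):
    """Effective conductance between a vertex at `depth` and level n of its
    subtree; each edge at depth d carries resistance M**d * xi (series law),
    and sibling subtrees are merged in parallel."""
    if depth == n:
        return float("inf")  # already at the target level
    total = 0.0
    for _ in range(sample_offspring()):
        r_edge = M ** (depth + 1) * sample_xi()
        c_below = conductance(n, depth + 1)
        r_below = 0.0 if c_below == float("inf") else 1.0 / c_below
        total += 1.0 / (r_edge + r_below)
    return total

n, trials = 8, 300
est = sum(conductance(n) for _ in range(trials)) / trials
# for these laws: a1 = E[nu(nu-1)]/m^2 = 3/4, b1 = E[xi] = 1,
# so c = a1*b1/(1 - 1/m) = 3/2 and 1/c = 2/3
print("n * E[C_n] approx:", n * est)
```

The recursion mirrors (2.2) below: conductances of sibling subtrees add in parallel, after each is put in series with its top edge.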
Consider a general network G = (V, E) with resistances {r(e)}. For x, y ∈ V, we write x ∼ y to indicate that {x, y} belongs to E. To each edge e = {x, y}, one may associate two directed edges →xy and →yx. We shall denote by →E the set of all directed edges. A flow θ is a function on →E that is antisymmetric, meaning that θ(→xy) = −θ(→yx). The divergence of θ at a vertex x is defined by

    div θ(x) := Σ_{y : y∼x} θ(→xy).

Let A and Z be two disjoint non-empty subsets of V: A will represent the source of the network and Z the sink. The flow θ is from A to Z with strength ‖θ‖ if it satisfies Kirchhoff's node law, div θ(x) = 0 for all x ∉ A ∪ Z, and

    ‖θ‖ = Σ_{a∈A} Σ_{y∼a, y∉A} θ(→ay) = Σ_{z∈Z} Σ_{y∼z, y∉Z} θ(→yz).

The effective resistance between A and Z can be defined as

    R(A ↔ Z) := inf_{‖θ‖=1} Σ_{e∈E} r(e) θ(e)²,   (2.1)

where the infimum is taken over all flows θ from A to Z with unit strength. The infimum is always attained at what is called the unit current flow, which satisfies, in addition to the node law, Kirchhoff's cycle law. This flow-based formulation of the effective resistance is also called Thomson's principle. The effective conductance C(A ↔ Z) between A and Z is the reciprocal R(A ↔ Z)^{−1}.

Conditionally on the branching random network (T, {r(e)}), let X be the associated random walk on the tree T. Let ω(x, y), x ∼ y, denote the transition probabilities of X, and let π(x), x ∈ T, denote the reversible measure. Writing the conductances c(e) = 1/r(e), we have

    π(x) = Σ_{y : y∼x} c({x, y}) and ω(x, y) = c({x, y})/π(x).

We suppose that the random walk X starts from the vertex x at time 0 under the probability measure P_{x,ω}.
As a probabilistic interpretation, the effective conductance C_n := C({∅} ↔ T_n) between the root and the level set {x ∈ T : |x| = n} satisfies

    C_n = π(∅) P_{∅,ω}(τ_n < τ⁺_∅),

where τ_n := inf{k ≥ 0 : |X_k| = n} and τ⁺_∅ := inf{k ≥ 1 : X_k = ∅}. We see immediately that C_n ≥ C_{n+1}.

For 1 ≤ i ≤ ν, let C_{n+1,i} := C({i} ↔ T_{n+1}[i]) denote the effective conductance between the vertex i and T_{n+1}[i]. We also set η_i := ξ({∅, i})^{−1}, 1 ≤ i ≤ ν, which are i.i.d., independent of ν. Observe that, conditionally on ν, (C_{n+1,i})_{1≤i≤ν} are i.i.d., independent of the η_i, and distributed as C_n/m. Using the series and parallel laws of electric networks, we obtain the recurrence relation that for n ≥ 1,

    C_{n+1} = Σ_{i=1}^ν (m/η_i + 1/C_{n+1,i})^{−1} = (1/m) Σ_{i=1}^ν η_i C^{(i)}_n/(η_i + C^{(i)}_n),   (2.2)

where for 1 ≤ i ≤ ν, C^{(i)}_n := m C_{n+1,i} are i.i.d. copies of C_n, independent of (η_i)_{1≤i≤ν}. It is clear that C₁ = m^{−1} Σ_{i=1}^ν η_i. If we set ξ_i := ξ({∅, i}) = η_i^{−1} for 1 ≤ i ≤ ν, the recurrence equation (2.2) can also be written as

    C_{n+1} = (1/m) Σ_{i=1}^ν C^{(i)}_n/(1 + ξ_i C^{(i)}_n).   (2.3)

Let η denote the reciprocal ξ^{−1}.

Lemma 3.1. If E[η] = E[ξ^{−1}] < ∞, then E[C_n] ≤ E[η]/n for all n ≥ 1.

Proof. First of all, E[C₁] = E[η]. From (2.2) we obtain, for all n ≥ 1,

    E[C_{n+1}] = E[η C_n/(η + C_n)].

By concavity of the function (x, y) ↦ xy/(x + y), x, y > 0,

    E[η C_n/(η + C_n)] ≤ E[η] E[C_n]/(E[η] + E[C_n]).

It follows that (E[C_{n+1}])^{−1} ≥ (E[η])^{−1} + (E[C_n])^{−1} ≥ ··· ≥ (n + 1)(E[η])^{−1}.

Lemma 3.2. Assume that E[η] = E[ξ^{−1}] < ∞. For 2 ≤ k ≤ 4, if E[ν^k] < ∞, then E[(C_n)^k] = O(n^{−k}) as n → ∞.

Proof.
Starting from (2.2), we obtain

    E[(C_{n+1})²] = (1/m) E[(η C_n/(η + C_n))²] + (E[ν(ν−1)]/m²)(E[C_{n+1}])²,

by developing the square and using the independence after conditioning on ν. Together with Lemma 3.1, it follows that

    E[(C_{n+1})²] ≤ (1/m) E[C_n²] + (E[ν(ν−1)]/m²)(E[C_{n+1}])² ≤ (1/m) E[C_n²] + (E[ν(ν−1)]/m²)(E[η])²/(n + 1)².

Since m > 1, we get E[C_n²] = O(n^{−2}) by induction. Furthermore, if E[ν³] < ∞, by developing the third power and using the independence, with X := η C_n/(η + C_n),

    E[(C_{n+1})³] = (1/m²) E[X³] + (3E[ν(ν−1)]/m³) E[X²] E[X] + (E[ν(ν−1)(ν−2)]/m³)(E[X])³
    ≤ (1/m²) E[C_n³] + (3E[ν(ν−1)]/m³) E[C_n²] E[C_n] + (E[ν(ν−1)(ν−2)]/m³)(E[C_n])³.

Thus, E[C_n³] = O(n^{−3}) follows from E[C_n²] = O(n^{−2}) and E[C_n] = O(n^{−1}). The last bound E[C_n⁴] = O(n^{−4}) is similarly obtained by assuming that E[ν⁴] < ∞.

Lemma 3.3. If E[ξ] ∈ (0, ∞) and E[ν²] < ∞, then there exists a constant c₀ > 0 such that E[C_n] ≥ c₀/n for all n ≥ 1.

In the following proof, we will use the uniform flow on T to give an upper bound for R_n = C_n^{−1}. Similar arguments can be found in Lemma 2.2 of Pemantle and Peres [12].

Proof.
We define on T the uniform flow Θ_unif of unit strength (with source {∅}) by setting

    Θ_unif({←x, x}) = m^{−|x|} W^{(x)}/W for every x ∈ T \ {∅}.

According to Thomson's principle (2.1),

    R_n ≤ Σ_{k=1}^n Σ_{|x|=k} m^k ξ_x Θ_unif({←x, x})² = Σ_{k=1}^n Σ_{|x|=k} m^{−k} ξ_x (W^{(x)}/W)².   (3.1)

We write A := sup_{k≥0} m^{−k} #T_k, which is square integrable by Doob's L²-maximal inequality. It follows that

    R_n/n ≤ (1/n) Σ_{k=1}^n (A/W²) (1/#T_k) Σ_{|x|=k} ξ_x (W^{(x)})².

Moreover,

    (1/#T_k) Σ_{|x|=k} ξ_x (W^{(x)})² → E[ξ] E[W²] almost surely as k → ∞.

Hence, almost surely,

    limsup_{n→∞} R_n/n ≤ A E[ξ] E[W²]/W²,

which yields liminf_{n→∞} n C_n ≥ (A E[ξ])^{−1} W²/E[W²]. Taking expectation and using Fatou's lemma, we obtain

    liminf_{n→∞} n E[C_n] ≥ E[W² A^{−1}]/(E[ξ] E[W²]) > 0.

The proof is thus completed.
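As a numerical sanity check on Thomson's principle and the uniform-flow bound (3.1), one can compare, on a sampled finite tree, the exact effective resistance with the energy of the uniform flow, replacing W^{(x)} by its finite-level proxy m^{|x|−n} #T_n[x] (so that the flow through the edge above x is simply #T_n[x]/#T_n). The sketch below is illustrative only; the offspring law and the law of ξ are assumptions, not taken from the paper.

```python
import random

random.seed(1)
M = 2.0  # mean offspring number m

def sample_offspring():
    return random.choice((1, 3))  # illustrative law: p_0 = 0, mean 2

def sample_xi():
    return random.uniform(0.5, 1.5)  # illustrative edge variable

class Node:
    def __init__(self, depth):
        self.depth = depth
        self.children = []  # list of (xi, child Node)

def build(n, depth=0):
    v = Node(depth)
    if depth < n:
        for _ in range(sample_offspring()):
            v.children.append((sample_xi(), build(n, depth + 1)))
    return v

def resistance(v, n):
    """Exact effective resistance from v to level n of its subtree."""
    if v.depth == n:
        return 0.0
    c = 0.0
    for xi, w in v.children:
        c += 1.0 / (M ** w.depth * xi + resistance(w, n))  # series, parallel
    return 1.0 / c

def leaves(v, n):
    """#T_n[v]: the number of depth-n descendants of v."""
    if v.depth == n:
        return 1
    return sum(leaves(w, n) for _, w in v.children)

def flow_bound(root, n):
    """Energy of the uniform unit flow: the Thomson upper bound (3.1)."""
    total = leaves(root, n)
    bound, stack = 0.0, [root]
    while stack:
        v = stack.pop()
        for xi, w in v.children:
            theta = leaves(w, n) / total          # flow through edge {v, w}
            bound += M ** w.depth * xi * theta ** 2  # r(e) * theta(e)^2
            stack.append(w)
    return bound

n = 6
root = build(n)
r_exact = resistance(root, n)
r_bound = flow_bound(root, n)
print(r_exact, r_bound)
```

Since the uniform flow is a valid unit flow from the root to level n, the inequality r_exact ≤ r_bound holds deterministically for every sampled tree.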
Remark.
The Nash–Williams inequality (see Section 2.5 in [11]) gives the lower bound

    R_n ≥ Σ_{k=1}^n (Σ_{d(e)=k} r(e)^{−1})^{−1} = Σ_{k=1}^n (Σ_{|x|=k} m^{−k} (ξ_x)^{−1})^{−1}.

Suppose that E[ξ^{−1}] < ∞. Proposition 2.3 in [12] implies that

    (1/#T_k) Σ_{|x|=k} (ξ_x)^{−1} → E[ξ^{−1}] almost surely as k → ∞.

With the almost sure convergence m^{−k} #T_k → W, it follows that

    (1/n) Σ_{k=1}^n (Σ_{|x|=k} m^{−k} (ξ_x)^{−1})^{−1} → 1/(W E[ξ^{−1}]) almost surely as n → ∞.

By Fatou's lemma, we obtain

    liminf_{n→∞} E[R_n]/n ≥ E[liminf_{n→∞} R_n/n] ≥ E[W^{−1}]/E[ξ^{−1}].

The integrability of W^{−1} is therefore a necessary condition for having E[R_n] = O(n).

Within this section, let the assumption E[ξ² + ξ^{−1} + ν³] < ∞ be always in force. We first establish (1.5) in Theorem 1.2. Afterwards we will prove Theorem 1.3 under the stronger assumption that E[ξ³ + ξ^{−1} + ν⁴] < ∞.

For every integer n ≥
1, we write

    x_n := E[C_n], y_n := E[C_n²], z_n := E[C_n³].

By Lemma 3.2, we have x_n = O(n^{−1}), y_n = O(n^{−2}) and z_n = O(n^{−3}).

Observe from (2.3) that E[C_{n+1}] = E[C_n/(1 + ξC_n)], with ξ and C_n being independent. Then, developing the powers of C_{n+1}, we arrive at

    E[(C_{n+1})²] = (1/m) E[(C_n/(1 + ξC_n))²] + (E[ν(ν−1)]/m²)(E[C_{n+1}])² = (1/m) E[(C_n/(1 + ξC_n))²] + a₁ (E[C_{n+1}])²

and

    E[(C_{n+1})³] = (1/m²) E[(C_n/(1 + ξC_n))³] + (3a₁/m) E[(C_n/(1 + ξC_n))²] E[C_n/(1 + ξC_n)] + a₂ (E[C_n/(1 + ξC_n)])³,

with the constants a₁, a₂ defined as in (1.3) and (1.6).

Using the identity 1/(1 + x) = 1 − x + x²/(1 + x), we obtain

    E[C_n/(1 + ξC_n)] = E[C_n] − E[ξ] E[C_n²] + E[ξ² C_n³/(1 + ξC_n)] = E[C_n] − E[ξ] E[C_n²] + O(n^{−3}),

because E[C_n³] = O(n^{−3}) and E[ξ²] < ∞. Similarly,

    E[(C_n/(1 + ξC_n))²] = E[C_n²] + O(n^{−3}).

Hence, we have

    x_{n+1} = x_n − b₁ y_n + O(n^{−3}),   (4.1)
    y_{n+1} = y_n/m + a₁ x_{n+1}² + O(n^{−3}).   (4.2)

Remark that

    x_{n+1} = E[C_n/(1 + ξC_n)] ≥ E[C_n] − E[ξ] E[C_n²] = x_n − b₁ y_n.

Since x_n ≥ c₀/n by Lemma 3.3 and y_n = O(n^{−2}), we get x_n/x_{n+1} ≤ 1 + C/n for some positive constant C independent of n. It follows that for any i < n/2,

    1 ≤ x_{n−i}/x_n ≤ Π_{j=n−i}^{n−1} (1 + C/j) ≤ exp(Ci/(n − i)) ≤ 1 + C′ i/n,   (4.3)

with another constant C′ > 0. Dividing (4.1) by x_n x_{n+1} leads to

    1/x_{n+1} − 1/x_n = b₁ y_n/(x_n x_{n+1}) + O(n^{−1}).   (4.4)

By induction, (4.2) implies that

    y_n = a₁ Σ_{i=0}^{n−1} m^{−i} x_{n−i}² + O(n^{−3}).

Using (4.3), we deduce that

    y_n/(x_n x_{n+1}) = a₁ Σ_{i=0}^∞ m^{−i} + O(n^{−1}) = a₁/(1 − m^{−1}) + O(n^{−1}).

It follows from (4.4) that

    1/x_{n+1} − 1/x_n = a₁ b₁/(1 − m^{−1}) + O(n^{−1}) = c + O(n^{−1}),   (4.5)

with the constant c defined in (1.4).
Consequently,

    1/x_n = cn + O(log n),   (4.6)

and

    x_n = 1/(cn) + O(log n/n²),   (4.7)

which gives the convergence (1.5).

Assuming from now on that E[ξ³ + ξ^{−1} + ν⁴] < ∞, we proceed to find higher-order asymptotic expansions for x_n. Using the identity 1/(1 + x) = 1 − x + x² − x³/(1 + x), we obtain

    E[C_n/(1 + ξC_n)] = E[C_n] − E[ξ] E[C_n²] + E[ξ²] E[C_n³] − E[ξ³ C_n⁴/(1 + ξC_n)] = E[C_n] − E[ξ] E[C_n²] + E[ξ²] E[C_n³] + O(n^{−4}),

as E[ξ³] < ∞ and E[C_n⁴] = O(n^{−4}) by Lemma 3.2. We prove in the same manner that

    E[(C_n/(1 + ξC_n))²] = E[C_n²] − 2E[ξ] E[C_n³] + O(n^{−4}),
    E[(C_n/(1 + ξC_n))³] = E[C_n³] + O(n^{−4}).

Hence,

    x_{n+1} = x_n − b₁ y_n + b₂ z_n + O(n^{−4}),   (4.8)
    y_{n+1} = y_n/m + a₁ x_{n+1}² − (2b₁/m) z_n + O(n^{−4}) = y_n/m + a₁ x_n² − (2a₁ b₁ x_n y_n + (2b₁/m) z_n) + O(n^{−4}),   (4.9)
    z_{n+1} = z_n/m² + (3a₁/m) x_{n+1} y_n + a₂ x_{n+1}³ + O(n^{−4}) = z_n/m² + (3a₁/m) x_n y_n + a₂ x_n³ + O(n^{−4}).   (4.10)

Dividing all terms in (4.10) by x_{n+1}³ gives

    z_{n+1}/x_{n+1}³ = (x_n/x_{n+1})³ ((1/m²) z_n/x_n³ + (3a₁/m) y_n/x_n² + a₂ + O(n^{−1})).

Recall that x_n/x_{n+1} = 1 + O(n^{−1}) by (4.3). Hence,

    z_{n+1}/x_{n+1}³ = (1/m²) z_n/x_n³ + (3a₁/m) y_n/x_n² + a₂ + O(n^{−1}).

Since

    y_n/x_n² = a₁/(1 − m^{−1}) + O(n^{−1}),   (4.11)

we get by induction that

    z_{n+1}/x_{n+1}³ = Σ_{i=0}^n m^{−2i} ((3a₁/m) a₁/(1 − m^{−1}) + a₂) + O(n^{−1}).

Then we have

    z_{n+1}/x_{n+1}³ = c₂ + O(n^{−1}),   (4.12)

with the constant c₂ defined in (1.7). Dividing all terms in (4.9) by x_{n+1}² gives

    y_{n+1}/x_{n+1}² = (x_n/x_{n+1})² (y_n/(m x_n²) + a₁ − 2a₁ b₁ y_n/x_n − (2b₁/m) z_n/x_n²) + O(n^{−2}).   (4.13)

For every n ≥
1, define

    ε_n := 1/x_{n+1} − 1/x_n − c,
    δ_n := y_{n+1}/x_{n+1}² − y_n/(m x_n²) − a₁.

It has been shown that ε_n = O(n^{−1}). Putting

    (x_n/x_{n+1})² = (1 + (c + ε_n) x_n)² = 1 + 2c x_n + O(n^{−2})

into (4.13), we see that

    δ_n = 2c x_n (y_n/(m x_n²) + a₁) − 2a₁ b₁ y_n/x_n − (2b₁/m) z_n/x_n² + O(n^{−2}).

By (4.11) and (4.12), it follows that

    δ_n/x_n → 2a₁ c/(m − 1) − 2b₁ c₂/m = c₃ as n → ∞,

with the constant c₃ defined in (1.8). Moreover, in view of (4.7), we derive from the above expansion of δ_n that δ_n = c₃/(cn) + O(n^{−2} log n). If we set

    ∆_{n+1} := y_{n+1}/x_{n+1}² − a₁/(1 − m^{−1}),

then ∆_{n+1} = (1/m) ∆_n + δ_n by the definition of δ_n. It follows by induction that

    ∆_{n+1} = m^{−n} ∆₁ + Σ_{i=0}^{n−1} m^{−i} δ_{n−i} = (c₃/(c(1 − m^{−1}))) (1/n) + O(n^{−2} log n).   (4.14)

Going back to (4.8), we obtain by the definition of ε_n that

    c + ε_n = (x_n − x_{n+1})/(x_n x_{n+1}) = (x_n/x_{n+1})(b₁ y_n/x_n² − b₂ z_n/x_n²) + O(n^{−2})
    = (1 + (c + ε_n) x_n)(b₁ y_n/x_n² − b₂ z_n/x_n²) + O(n^{−2})
    = b₁ y_n/x_n² + c b₁ x_n (y_n/x_n²) − b₂ z_n/x_n² + O(n^{−2}).

As c = a₁ b₁/(1 − m^{−1}), we deduce that

    ε_n = b₁ ∆_n + x_n (c b₁ (y_n/x_n²) − b₂ (z_n/x_n³)) + O(n^{−2}).   (4.15)

Using (4.7), (4.11), (4.12) and (4.14), we get that

    ε_n = c₄/n + O(n^{−2} log n),

which implies the absolute convergence of Σ_{i=1}^∞ (ε_i − c₄/i). Hence,

    1/x_n = 1/x₁ + c(n −
1) + Σ_{i=1}^{n−1} ε_i = cn + c₄ log n + c₅ + o(1),

where

    c₅ := −c + 1/x₁ + Σ_{i=1}^∞ (ε_i − c₄/i) = −c + 1/E[ξ^{−1}] + Σ_{i=1}^∞ (ε_i − c₄/i).

Finally we have

    E[C_n] = x_n = 1/(cn) − (c₄/c²)(log n)/n² − (c₅/c²)(1/n²) + O((log n)²/n³).   (4.16)

To prove Theorems 1.1 and 1.4, let us write

    Y_n := {C_n} − W,
    Π_n := C_n (1/x_{n+1} − 1/x_n − (1/x_{n+1}) ξ C_n/(1 + ξ C_n)).

For every vertex x ∈ T and j ≥
1, we also define

    C^{(x)}_j := m^{|x|} C({x} ↔ T_{j+|x|}[x]),
    Y^{(x)}_j := {C^{(x)}_j} − W^{(x)},
    Π^{(x)}_j := C^{(x)}_j (c + ε_j − (1/x_{j+1}) ξ_x C^{(x)}_j/(1 + ξ_x C^{(x)}_j)).

Using (2.3), we have

    {C_n} = (1/x_n)(1/m) Σ_{i=1}^ν C^{(i)}_{n−1}/(1 + ξ_i C^{(i)}_{n−1}) = (1/m) Σ_{i=1}^ν {C^{(i)}_{n−1}} + (1/m) Σ_{i=1}^ν Π^{(i)}_{n−1}.

Using the simple equality W = m^{−1} Σ_{i=1}^ν W^{(i)}, we deduce that

    Y_n = (1/m) Σ_{i=1}^ν Y^{(i)}_{n−1} + (1/m) Σ_{i=1}^ν Π^{(i)}_{n−1}.

Since W = m^{−k} Σ_{|x|=k} W^{(x)}, by induction,

    Y_n = m^{−k} Σ_{|x|=k} Y^{(x)}_{n−k} + Σ_{ℓ=1}^k m^{−ℓ} Σ_{|y|=ℓ} Π^{(y)}_{n−ℓ} for any 1 ≤ k < n.

Proof of Theorem 1.1.
Assume that E[ξ + ξ^{−1} + ν²] < ∞. Notice that our proof preceding (4.3) to establish x_n/x_{n+1} = 1 + O(n^{−1}) is still valid. Besides, y_n = E[C_n²] = O(n^{−2}) by Lemma 3.2, and y_n/x_{n+1} = O(n^{−1}) by Lemma 3.3. Hence, we derive from the inequality

    E[|Π_n|] ≤ x_n (1/x_{n+1} − 1/x_n) + (1/x_{n+1}) E[ξ C_n²/(1 + ξC_n)] ≤ (x_n/x_{n+1} − 1) + (y_n/x_{n+1}) E[ξ]

that E[|Π_n|] ≤ C/n with some constant C > 0.

Conditionally on the first k levels of the tree T, (Y^{(x)}_{n−k}, |x| = k) are i.i.d. copies of Y_{n−k}. Using the fact that Y_n is of zero mean and uniformly bounded in L², we can find a constant C′ > 0 such that

    E[(m^{−k} Σ_{|x|=k} Y^{(x)}_{n−k})²] = m^{−k} E[(Y_{n−k})²] ≤ C′ m^{−k}.   (5.1)

Meanwhile,

    E[Σ_{ℓ=1}^k m^{−ℓ} Σ_{|y|=ℓ} |Π^{(y)}_{n−ℓ}|] ≤ Σ_{ℓ=1}^k C/(n − ℓ) ≤ Ck/(n − k).

It follows that E[|Y_n|] ≤ √(C′) m^{−k/2} + Ck/(n − k). By taking k = C″ log n for some constant C″ sufficiently large, we see that

    E[|Y_n|] = O(log n/n).

Choose the subsequence n_j = j². Borel–Cantelli's lemma gives that Y_{n_j} converges to 0 almost surely. The monotonicity of C_n shows that for any n_j ≤ n < n_{j+1},

    (x_{n_{j+1}}/x_{n_j}) {C_{n_{j+1}}} ≤ {C_n} ≤ (x_{n_j}/x_{n_{j+1}}) {C_{n_j}}.

By (4.3), the almost sure convergence of Y_n readily follows. □

Together with (4.6), Theorem 1.1 implies that

    n C_n → W/c almost surely as n → ∞,   (5.2)

provided E[ξ + ξ^{−1} + ν²] < ∞.

Proof of Theorem 1.4.
Assume now that E[ξ³ + ξ^{−1} + ν⁴] < ∞. First, observe that taking the subsequence k_n := ⌈4 log n/log m⌉ in (5.1) yields

    n (m^{−k_n} Σ_{|x|=k_n} Y^{(x)}_{n−k_n}) → 0 in L² as n → ∞.

By Borel–Cantelli's lemma, the preceding convergence also holds in the almost sure sense. We claim that

    Σ_{ℓ=1}^{k_n} m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ} →(P) Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|y|=ℓ} W^{(y)} (1 − ξ_y W^{(y)}/c) as n → ∞.   (5.3)

In fact, for each vertex y at fixed depth ℓ,

    n C^{(y)}_{n−ℓ} → W^{(y)}/c and n Π^{(y)}_{n−ℓ} → W^{(y)} (1 − ξ_y W^{(y)}/c) almost surely as n → ∞.

So for any integer K ≥ 1,

    Σ_{ℓ=1}^K m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ} → Σ_{ℓ=1}^K m^{−ℓ} Σ_{|y|=ℓ} W^{(y)} (1 − ξ_y W^{(y)}/c) almost surely as n → ∞.

Note that

    E[(m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ})²] ≤ m^{−ℓ} n² E[Π²_{n−ℓ}] + E[(#T_ℓ)²] m^{−2ℓ} n² (E[Π_{n−ℓ}])².

On the one hand,

    E[Π_n²] ≤ 2 (1/x_{n+1} − 1/x_n)² E[C_n²] + 2 (1/x_{n+1})² E[(ξ C_n²/(1 + ξC_n))²] ≤ 2 (1/x_{n+1} − 1/x_n)² E[C_n²] + 2 (1/x_{n+1})² E[ξ²] E[C_n⁴].

Using (4.5) and the facts that x_n is of order n^{−1}, E[C_n²] = O(n^{−2}) and E[C_n⁴] = O(n^{−4}), we deduce that E[Π_n²] = O(n^{−2}). On the other hand,

    E[Π_n] = x_n/x_{n+1} − 1 − (1/x_{n+1}) E[ξ C_n²] + (1/x_{n+1}) E[ξ² C_n³] − (1/x_{n+1}) E[ξ³ C_n⁴/(1 + ξC_n)] = x_n/x_{n+1} − 1 − (b₁ y_n − b₂ z_n)/x_{n+1} + O(n^{−3}).

It follows by (4.8) that E[Π_n] = O(n^{−2}). In particular, E[Π_{n−ℓ}] = O(n^{−2}) for any ℓ = o(n). Besides, m^{−2ℓ} E[(#T_ℓ)²] is uniformly bounded in ℓ. Hence, there exists some constant C̃ > 0 such that

    E[(m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ})²] ≤ C̃ m^{−ℓ} + C̃ n^{−2} for all ℓ ≤ k_n.

It follows that

    lim_{K→∞} limsup_{n→∞} ‖ Σ_{ℓ=K}^{k_n} m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ} ‖_{L¹} = 0,

which yields (5.3). Therefore,

    n Y_n = n (m^{−k_n} Σ_{|x|=k_n} Y^{(x)}_{n−k_n}) + Σ_{ℓ=1}^{k_n} m^{−ℓ} Σ_{|y|=ℓ} n Π^{(y)}_{n−ℓ} →(P) Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|y|=ℓ} W^{(y)} (1 − ξ_y W^{(y)}/c).

In view of (4.16), we have

    n (n C_n − W/c) + (c₄ W/c²) log n + (c₅ W/c²) − (1/c) Σ_{ℓ=1}^∞ m^{−ℓ} Σ_{|y|=ℓ} W^{(y)} (1 − ξ_y W^{(y)}/c) →(P) 0 as n → ∞,

and the convergence (1.10) follows immediately. □

The expected resistance
When E[ξ + ξ^{−1} + ν²] < ∞, it follows from (5.2) that

    R_n/n → c/W almost surely as n → ∞.
The following lemma yields the uniform integrability of (R_n/n, n ≥ 1).

Lemma 6.1.
Suppose that p₁ m < 1 and E[ξ^r + ν^r] < ∞ for some r > 2. Then there exists some s > 1 such that

    sup_{n≥1} E[(R_n/n)^s] < ∞.

Proof. As p₁ m < 1, by Theorems 22 and 23 in Dubuc [5], there is some α > 1 such that E[W^{−α}] < ∞. In fact, we may take any α ∈ (1, −log p₁/log m), with the convention that −log p₁/log m = +∞ if p₁ = 0. Moreover, E[ν^r] < ∞ implies that E[W^r] < ∞, according to Bingham and Doney [4]. Recall that the martingale W_k = m^{−k} #T_k converges in L² to W. Let F_k := σ{T_i, i ≤ k}, k ≥ 0, denote the natural filtration of (W_k)_{k≥0}. Since W_k = E[W | F_k], it follows from Jensen's inequality that (W_k)^{−α} ≤ E[W^{−α} | F_k]. Consequently,

    sup_{k≥0} E[(W_k)^{−α}] < ∞.   (6.1)

Fix an arbitrary s ∈ (1, (r/2) ∧ α). By convexity, we deduce from (3.1) that

    (R_n/n)^s ≤ (1/n) Σ_{k=1}^n (Σ_{|x|=k} m^{−k} ξ_x (W^{(x)}/W)²)^s ≤ (1/n) Σ_{k=1}^n (#T_k)^{s−1} Σ_{|x|=k} m^{−ks} (ξ_x)^s (W^{(x)}/W)^{2s}.

Since E[ξ^s] < ∞, the proof boils down to showing that

    sup_{k≥1} E[(#T_k)^{s−1} Σ_{|x|=k} m^{−ks} (W^{(x)}/W)^{2s}] < ∞.   (6.2)

Recall that W = Σ_{|x|=k} m^{−k} W^{(x)}, and that, conditionally on F_k, (W^{(x)})_{|x|=k} are i.i.d. copies of W. Let φ(u) := −log E[e^{−uW}] for any u ≥
0. Using the elementary identity

    a^{−2s} = (1/Γ(2s)) ∫₀^∞ t^{2s−1} e^{−at} dt for any a > 0,

we get that, for any vertex x at depth k,

    E[(W^{(x)}/W)^{2s} | F_k] = (1/Γ(2s)) ∫₀^∞ dt t^{2s−1} E[(W^{(x)})^{2s} e^{−t Σ_{|y|=k} m^{−k} W^{(y)}} | F_k]
    = (1/Γ(2s)) ∫₀^∞ dt t^{2s−1} e^{−(#T_k − 1) φ(t m^{−k})} E[W^{2s} e^{−t m^{−k} W}]
    = (1/Γ(2s)) m^{2ks} ∫₀^∞ du u^{2s−1} e^{−(#T_k − 1) φ(u)} E[W^{2s} e^{−uW}].

It follows that

    I_k := E[(#T_k)^{s−1} Σ_{|x|=k} m^{−ks} (W^{(x)}/W)^{2s}] = (1/Γ(2s)) m^{ks} ∫₀^∞ du u^{2s−1} E[(#T_k)^s e^{−(#T_k − 1) φ(u)}] E[W^{2s} e^{−uW}].   (6.3)

For any a >
0, we claim that there exists some positive constant C = C(a, s) > 0 such that

    sup_{k≥1} m^{ks} E[(#T_k)^s e^{−a(#T_k − 1)}] ≤ C.   (6.4)

Indeed, by discussing whether #T_k ≥ k² or not, we have

    m^{ks} E[(#T_k)^s e^{−a #T_k}] ≤ m^{ks} sup_{y≥k²} y^s e^{−ay} + m^{ks} k^{2s} P(#T_k < k²).

The first term on the right-hand side is uniformly bounded, while

    m^{ks} k^{2s} P(#T_k < k²) ≤ m^{ks} k^{2s+2α} E[(#T_k)^{−α}].

Note that E[(#T_k)^{−α}] = O(m^{−αk}) by (6.1). Since s < α, we obtain (6.4).

Recall that E[W^{2s}] < ∞ because 2s < r. Going back to the right-hand side of (6.3), we split the integral ∫₀^∞ into two parts, ∫₀¹ and ∫₁^∞. For the part ∫₁^∞ we apply (6.4) with a = φ(1), and for the part ∫₀¹ we dominate E[W^{2s} e^{−uW}] by E[W^{2s}], to arrive at

    I_k ≤ (C/Γ(2s)) ∫₁^∞ du u^{2s−1} E[W^{2s} e^{−uW}] + C₂ m^{ks} ∫₀¹ du u^{2s−1} E[(#T_k)^s e^{−#T_k φ(u)}],

with the finite constant C₂ := e^{φ(1)} E[W^{2s}]/Γ(2s). Notice that, by Fubini's theorem and the change of variables v = uW,

    ∫₁^∞ du u^{2s−1} E[W^{2s} e^{−uW}] ≤ ∫₀^∞ du u^{2s−1} E[W^{2s} e^{−uW}] = Γ(2s).

To treat the integral from 0 to 1, we remark that lim_{u→0} φ(u)/u = E[W] = 1. Then there exists some positive constant c₆ such that φ(u) ≥ c₆ u for all 0 ≤ u ≤
1. It follows that

    I_k ≤ C + C₂ m^{ks} ∫₀¹ du u^{2s−1} E[(#T_k)^s e^{−c₆ u #T_k}] = C + C₂ E[(W_k)^{−s} ∫₀^{#T_k} dv v^{2s−1} e^{−c₆ v}] ≤ C + C₂ c₆^{−2s} Γ(2s) E[(W_k)^{−s}].

Since s < α, (6.1) yields sup_{k≥1} E[(W_k)^{−s}] < ∞, hence sup_{k≥1} I_k < ∞, yielding (6.2) and completing the proof.
0, one can do the λ -exponential weighting ofresistance by assigning the resistance λ d ( e ) ξ ( e ) to each edge e at depth d ( e ). As before,conditionally on T , { ξ ( e ) } are i.i.d. positive random variables. In this random electricnetwork, let C n ( λ ) denote the effective conductance between the root and the vertices atdepth n . Instead of (2.3), the recurrence equation now reads as C n +1 ( λ ) = 1 λ ν X i =1 C ( i ) n ( λ )1 + ξ i C ( i ) n ( λ ) , where for 1 ≤ i ≤ ν , C ( i ) n ( λ ) are i.i.d. copies of C n ( λ ), independent of ( ξ i ) ≤ i ≤ ν . Theorem 7.1.
Fix λ > m. Assuming that E[ξ + ξ^{−1} + ν²] < ∞, we have

    {C_n(λ)} → W almost surely as n → ∞.

If E[ξ² + ξ^{−1} + ν³] < ∞, then, as n → ∞, the limit of

    (λ/m)^n E[C_n(λ)]   (7.1)

exists and is strictly positive.

It is easy to see that the limit of the rescaled expected conductance (7.1) is strictly smaller than E[ξ^{−1}]. However, we are unable to compute it explicitly. Basically, the proof of Theorem 7.1 goes along the same lines as those of Theorem 1.1 and of (1.5), except for a few minor modifications. We leave the details to the reader.

Acknowledgements
We are grateful to an anonymous referee for careful reading of the manuscript and helpfulcomments.
References

[1] L. Addario-Berry, N. Broutin and G. Lugosi. Effective resistance of random trees. Ann. Appl. Probab. (2009), 1092–1107.
[2] K. Athreya and P. Ney. Branching Processes. Die Grundlehren der mathematischen Wissenschaften, Band 196, Springer-Verlag, New York–Heidelberg, 1972. xi+287 pp.
[3] I. Benjamini and R. Rossignol. Submean variance bound for effective resistance of random electric networks. Commun. Math. Phys. (2008), 445–462.
[4] N. Bingham and R. Doney. Asymptotic properties of supercritical branching processes I: The Galton–Watson process. Adv. Appl. Probab. (1974), 711–731.
[5] S. Dubuc. Problèmes relatifs à l'itération de fonctions suggérés par les processus en cascade. Ann. Inst. Fourier (Grenoble) (1971), 171–251.
[6] G. Grimmett and H. Kesten. Random electrical networks on complete graphs. J. London Math. Soc. (2) (1984), 171–192.
[7] G. Grimmett, H. Kesten and Y. Zhang. Random walk on the infinite cluster of the percolation model. Probab. Theory Relat. Fields (1993), 33–44.
[8] R. Lyons. Random walks and percolation on trees. Ann. Probab. (1990), 931–958.
[9] R. Lyons. Random walks, capacity and percolation on trees. Ann. Probab. (1992), 2043–2088.
[10] R. Lyons, R. Pemantle and Y. Peres. Unsolved problems concerning random walks on trees. IMA Vol. Math. Appl. (1997), 223–237.
[11] R. Lyons and Y. Peres. Probability on Trees and Networks. Cambridge University Press, New York, 2016. xv+699 pp.
[12] R. Pemantle and Y. Peres. Galton–Watson trees with the same mean have the same polar sets. Ann. Probab. 23