Anisotropic oriented percolation in high dimensions
Pablo Almeida Gomes ∗ Alan Pereira † Remy Sanchis ‡ November 12, 2019
Abstract
In this paper we study anisotropic oriented percolation on Z^d in high dimensions.

Keywords: Anisotropic percolation, high dimensional systems, asymmetric random walk, mean-field.

MSC 2010 subject classifications: 82B43, 60K35.
1 Introduction

A common feature of lattice systems is that they approach, in some sense, a mean-field behavior as the dimension of the lattice grows.

In the particular case of oriented percolation, the seminal paper by Cox and Durrett [1] shows, among other important results, that the asymptotic behavior of the critical point is 1/d. A little earlier, Holley and Liggett [4] proved the occurrence of an analogous behavior for the high dimensional contact process critical rate and, recently, a similar result was proved for the contact process with random rates on a high dimensional percolation open cluster; see [8]. For the non-oriented case, Kesten [5] and Gordon [3] independently showed that the critical point is asymptotically 1/(2d) and, in the last three decades, a rather complete mean-field picture of high dimensional non-oriented percolation has emerged; see [6] and references therein. For d ≥ 3, little is understood about the phase diagram of anisotropic percolation. The results of [1, 5, 3] are statements about one point in the parameter domain, the isotropic one.

∗ ICEx, University of Minas Gerais, Minas Gerais, Brazil, [email protected]
† ICEx, University of Minas Gerais, Minas Gerais, Brazil, [email protected]
‡ ICEx, University of Minas Gerais, Minas Gerais, Brazil, [email protected]
1.1 The model and main result

We will consider the anisotropic oriented edge percolation model in Z^d. Let {e_1, . . . , e_d} be the set of canonical unit vectors of Z^d. Given 0 < p_1, . . . , p_d < 1, we declare each edge ⟨x, x + e_i⟩ to be open, independently of each other, with probability p_i, for i = 1, . . . , d. We will denote the corresponding probability measure simply by P. An oriented path of length n starting at the origin in Z^d is a path (x_0, x_1, . . . , x_n) such that x_0 = 0 and x_i − x_{i−1} ∈ {e_1, . . . , e_d} for i = 1, . . . , n. Let C be the open cluster of the origin, that is, the set of vertices x ∈ Z^d such that there is an open path from 0 to x; we let |C| denote the size of C. We are now ready to state our main theorem.

Theorem 1. Let ε > 0. For d ≥ 1, let p_1, . . . , p_d be non-negative numbers such that

1. p_1 + · · · + p_d ≥ 1 + ε,
2. max_{1≤i≤d} { p_i / (p_1 + · · · + p_d) } < ε/5,

then P(|C| = ∞) > 0.

Remark:
Note that if p_1 + · · · + p_d < 1, then P(|C| = ∞) = 0. Note also that, although the result is non-asymptotic, it only makes sense for d of order 1/ε. We will see in the course of the proof of Theorem 1 that the constant 1/5 in Condition 2 could be made as close to 1 as we wish, as long as the dimension is taken big enough. We mention that, for the isotropic case, Theorem 1 gives the bound p_c(Z^d) ≤ 1/d + 5/d².

The strategy of the proof is similar to the one in Cox-Durrett [1]: we build a martingale and prove its convergence to a positive limit by showing that it has bounded second moment. The main difficulty here is to estimate the second moment of the martingale. We do this by converting the martingale problem into a random walk problem and comparing the asymmetric case with the symmetric one.

1.2 Proof of main result

In this section we prove the main result modulo a lemma which is stated in the course of the proof.
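The conditions of Theorem 1 are easy to probe numerically. The sketch below is ours, not from the paper, and the parameter choices are only illustrative: it grows the level sets of the open cluster of the origin, sampling each oriented edge once, on the fly.

```python
import random

def grow_levels(p, n_levels, seed=0):
    """Level sets of the open cluster of the origin under anisotropic
    oriented percolation: each oriented edge <x, x + e_i> is open with
    probability p[i].  Every such edge is examined at most once, so
    sampling it on the fly gives the exact model."""
    rng = random.Random(seed)
    d = len(p)
    frontier = {(0,) * d}          # vertices of level n reachable from 0
    sizes = []
    for _ in range(n_levels):
        nxt = set()
        for x in frontier:
            for i in range(d):
                if rng.random() < p[i]:
                    y = list(x)
                    y[i] += 1
                    nxt.add(tuple(y))
        frontier = nxt
        sizes.append(len(frontier))
        if not frontier:           # cluster died out
            break
    return sizes

# Spread-out supercritical case: mu = sum(p) = 1.2 (Condition 1 with
# eps = 0.2) and p_i / mu = 1/8 for every i, in the spirit of Condition 2.
print(grow_levels((0.15,) * 8, 30)[-1])
```

When μ = p_1 + · · · + p_d exceeds 1 and no single direction carries a large share of μ, the reachable level sets typically keep growing; concentrating the same μ on very few coordinates makes extinction far more likely, which is the regime Condition 2 rules out.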
Proof. [Theorem 1] We want to prove that the open cluster of the origin is infinite, which happens if and only if there exists an infinite open path starting at the origin. Thus our problem can be naturally converted into counting the number of open paths starting at 0, as long as we have a good control of the second moment of this random variable.

For each n ∈ N, let V_n be the set of the vertices in the n-th level, i.e.,

V_n = { x ∈ Z^d : x_1 + · · · + x_d = n },   (1.1)

and C_n the set of all possible oriented paths from the origin to V_n, i.e.,

C_n = { γ = (0, v_1, v_2, . . . , v_n) : v_j ∈ V_j, j = 1, . . . , n }.   (1.2)

We define X_n to be the random variable which counts the open paths from the origin up to level n, i.e.,

X_n := Σ_{γ ∈ C_n} 1{γ is open},   (1.3)

and write μ := E[X_1] = p_1 + · · · + p_d. A simple calculation shows that E[X_n] = μ^n. Now, define

W_n := X_n / μ^n.   (1.4)

We observe that {W_n}_{n∈N} is a positive martingale. For this, consider x ∈ V_n and let C_x = { γ ∈ C_n : f(γ) = x }, where f(γ) denotes the final vertex of γ. Define also the random variables

Y_x = Σ_{i=1}^d 1{⟨x, x + e_i⟩ is open}   and   N_x = Σ_{γ ∈ C_x} 1{γ is open},

where Y_x counts the number of oriented open edges leaving x and N_x counts the number of oriented open paths from 0 to x, respectively. Observe that

X_n = Σ_{x ∈ V_n} N_x   and   X_{n+1} = Σ_{x ∈ V_n} N_x · Y_x.   (1.5)

Let F_n = σ(X_1, . . . , X_n); observe that Y_x is independent of F_n for each x ∈ V_n and that N_x is F_n-measurable. We also have E[Y_x] = μ for all x, so

E[X_{n+1} | F_n] = Σ_{x ∈ V_n} E[N_x · Y_x | F_n] = μ X_n.

Hence E[W_{n+1} | F_n] = W_n, as wanted. Since W_n is a positive martingale, it converges to a non-negative random variable W. Assume for the moment the following lemma.

Lemma 1.
Let {W_n}_n be as defined in (1.4). Then, under the conditions of Theorem 1, we have sup_n E[W_n²] < ∞.

From this lemma it follows that W_n converges to W in L² and, since E[W_n] = 1 for all n, we have P(W > 0) > 0. Noticing that X_n > 0 for every n on the event {W > 0}, the theorem follows.

The proof of Lemma 1 in the anisotropic case requires more than a direct adaptation of the Cox-Durrett results. The next sections are dedicated to this work. The structure of the remainder of the paper is the following: in Section 2 we convert the martingale problem of Lemma 1 into a random walk problem, in Section 3 we compare asymmetric random walks with the symmetric case to finish the proof of Lemma 1, and in Section 4 we make some final remarks.

2 From the martingale to random walks

In this section we state and prove four lemmas with the goal of converting the martingale problem of Lemma 1 into a random walk problem.
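The martingale itself can be simulated directly. The sketch below is ours, with illustrative parameters: it evolves the path counts N_x of (1.5) level by level and records W_n = X_n/μ^n.

```python
import random
from collections import defaultdict

def martingale_trajectory(p, n_levels, seed=1):
    """Evolve the path counts N_x of (1.5) level by level and return the
    trajectory of W_n = X_n / mu^n.  The edge <x, x + e_i> is sampled
    exactly once, when level |x| is processed, which matches the model."""
    rng = random.Random(seed)
    d = len(p)
    mu = sum(p)
    counts = {(0,) * d: 1}              # N_x on the current level
    traj = []
    for n in range(1, n_levels + 1):
        nxt = defaultdict(int)
        for x, n_x in counts.items():
            for i in range(d):          # the open edges leaving x (Y_x of them)
                if rng.random() < p[i]:
                    y = list(x)
                    y[i] += 1
                    nxt[tuple(y)] += n_x
        counts = dict(nxt)
        x_n = sum(counts.values())      # X_n = sum over V_n of N_x
        traj.append(x_n / mu ** n)
    return traj

print(martingale_trajectory((0.15,) * 8, 25)[-1])
```

On surviving runs the trajectory stabilizes around a positive value; this is the convergence W_n → W > 0 that Lemma 1 and the second-moment bound make rigorous.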
In the first lemma we give a criterion for bounding sup_m E[W_m²]. To do that, we introduce the following notation. Given a path γ = (0, v_1, . . . , v_n) ∈ C_n, we define, for each i ∈ {1, . . . , n},

i(γ) := v_i   and   f(γ) := v_n.   (2.1)

Lemma 2.
Let

a_n := (1/μ^{2n}) Σ_{(γ¹,γ²) ∈ C_n², f(γ¹)=f(γ²)} P(γ¹, γ² open).   (2.2)

Then sup_m E[W_m²] < ∞ iff Σ_{n=1}^∞ a_n < ∞.

Proof. By (1.5) we have

E[X_{n+1}² | F_n] = E[ Σ_{(x,y) ∈ V_n²} N_x Y_x N_y Y_y | F_n ] = μ² X_n² + (Var Y) Σ_{x ∈ V_n} N_x²,

where Var Y denotes the common variance of the variables Y_x. Therefore

E[W_{n+1}²] = μ² E[X_n²]/μ^{2n+2} + (Var Y) E[ Σ_{x ∈ V_n} N_x² ]/μ^{2n+2}.

Using the definition of N_x we can see that

E[N_x²] = Σ_{(γ¹,γ²) ∈ C_x²} P(γ¹, γ² open),

so

E[ Σ_{x ∈ V_n} N_x² ]/μ^{2n} = Σ_{x ∈ V_n} Σ_{(γ¹,γ²) ∈ C_x²} P(γ¹, γ² open)/μ^{2n} = (1/μ^{2n}) Σ_{(γ¹,γ²) ∈ C_n², f(γ¹)=f(γ²)} P(γ¹, γ² open),

which is exactly the definition of a_n. Hence

E[W_{n+1}²] = E[W_n²] + (Var Y / μ²) a_n.

Iterating the recursion above, we have

E[W_{n+1}²] = E[W_1²] + (Var Y / μ²) Σ_{j=1}^n a_j,

and the result follows.

Given m ∈ N, let

A_m := { (γ¹,γ²) ∈ C_m² : i(γ¹) ≠ i(γ²) for all i ≠ m, and f(γ¹) = f(γ²) }   (2.3)

be the set of pairs of paths of length m which meet only at their final vertices, and define

b_m := Σ_{(γ¹,γ²) ∈ A_m} P(γ¹, γ² open) / μ^{2m}.   (2.4)

Lemma 3.
Let a_n be as defined in (2.2) and b_m as defined in (2.4). Then

Σ_{n=1}^∞ a_n = Σ_{j=1}^∞ ( Σ_{m=1}^∞ b_m )^j.   (2.5)

Proof. Recall that

a_n := (1/μ^{2n}) Σ_{(γ¹,γ²) ∈ C_n², f(γ¹)=f(γ²)} P(γ¹, γ² open).   (2.6)

We will show that the set { (γ¹,γ²) ∈ C_n² : f(γ¹) = f(γ²) } can be partitioned according to the vertices in the intersection of γ¹ and γ². Let I(γ¹,γ²) = { i : i(γ¹) = i(γ²) } and

C(m_1, . . . , m_j) = { (γ¹,γ²) ∈ C_n² : I(γ¹,γ²) = { m_1, m_1 + m_2, . . . , m_1 + · · · + m_j } };

then

{ (γ¹,γ²) ∈ C_n² : f(γ¹) = f(γ²) } = ⊔_{j=1}^n ⊔_{(m_1,...,m_j) ∈ N^j, m_1+···+m_j = n} C(m_1, . . . , m_j).   (2.7)

Given two paths γ¹ = (0, v_1, . . . , v_n) and γ² = (0, w_1, . . . , w_m), we define the concatenation of γ¹ and γ² by γ¹ ∘ γ² := (0, v_1, . . . , v_n, v_n + w_1, . . . , v_n + w_m). Given a sequence of positive integers (m_1, . . . , m_j) ∈ N^j, let n_k = m_1 + · · · + m_k, k = 1, . . . , j; then, recalling (2.3), we have

C(m_1, . . . , m_j) = { (γ¹,γ²) ∈ C_{n_j}² : γ¹ = γ^{1,1} ∘ · · · ∘ γ^{1,j}, γ² = γ^{2,1} ∘ · · · ∘ γ^{2,j}, (γ^{1,k}, γ^{2,k}) ∈ A_{m_k} for all k = 1, . . . , j }.

Using (2.7) we can rewrite (2.6) as

Σ_{n=1}^∞ a_n = Σ_{j=1}^∞ Σ_{(m_1,...,m_j) ∈ N^j} Σ_{(γ¹,γ²) ∈ C(m_1,...,m_j)} P(γ¹, γ² open) / μ^{2n_j}.   (2.8)

From the definitions of C(m_1, . . . , m_j) and b_m mentioned earlier, it follows that

Σ_{(γ¹,γ²) ∈ C(m_1,...,m_j)} P(γ¹, γ² open) / μ^{2n_j} = Π_{k=1}^j Σ_{(γ¹,γ²) ∈ A_{m_k}} P(γ¹, γ² open) / μ^{2m_k} = Π_{k=1}^j b_{m_k}.

Hence

Σ_{n=1}^∞ a_n = Σ_{j=1}^∞ Σ_{(m_1,...,m_j) ∈ N^j} Π_{k=1}^j b_{m_k} = Σ_{j=1}^∞ ( Σ_{m=1}^∞ b_m )^j,

and the result follows.

Using the two previous lemmas, we have, so far, that sup_m E[W_m²] < ∞ iff Σ_{m=1}^∞ b_m < 1. The next two lemmas express this last condition in terms of random walks. In the remainder of the text, we will consider several independent random walks and we use Q to denote the probability measure on a space where they all live in harmony.

We say that q = (q_1, . . . , q_d) is a (d-dimensional) positive vector if 0 ≤ q_1, . . . , q_d and q_1 + · · · + q_d > 0, and we say that q is a (d-dimensional) probability vector if it is a positive vector with q_1 + · · · + q_d = 1. Given a positive vector q, we say that {S_n}_n is the oriented random walk associated to q if

S_n = ξ_1 + · · · + ξ_n,   (2.9)

where ξ_1, . . . , ξ_n are i.i.d. random variables with

Q(ξ_1 = e_i) = q_i / (q_1 + · · · + q_d), for all i = 1, . . . , d.

Given two independent random walks associated to q, {S¹_n}_n and {S²_n}_n, we define

τ = τ({S¹_n}_n, {S²_n}_n) := inf{ n ≥ 1 : S¹_n = S²_n }.   (2.10)

Lemma 4.
Let b_m be as defined in (2.4) and A_m as in (2.3), and let {S¹_n}_n, {S²_n}_n be two independent random walks associated to p = (p_1, . . . , p_d). Then

Σ_{m=1}^∞ b_m < 1   iff   Σ_{m=2}^∞ Q(τ = m) < 1 − 1/μ.

Proof.
Observe that

b_1 = Σ_{(γ¹,γ²) ∈ A_1} P(γ¹, γ² open) / μ² = Σ_{i=1}^d P(⟨0, e_i⟩ open) / μ² = Σ_{i=1}^d p_i / μ² = 1/μ.

For m ≥ 2, the two paths of a pair in A_m share no edge, so we have

b_m = Σ_{(γ¹,γ²) ∈ A_m} P(γ¹, γ² open) / μ^{2m} = Σ_{(γ¹,γ²) ∈ A_m} Q((0, S¹_1, . . . , S¹_m) = γ¹) · Q((0, S²_1, . . . , S²_m) = γ²) = Q(τ = m),

thus

Σ_{m=1}^∞ b_m = 1/μ + Σ_{m=2}^∞ Q(τ = m),

and the result follows.

Lemma 5.
Let {S¹_n}_n and {S²_n}_n be two independent random walks associated to a positive vector q and let τ be as defined in (2.10). Then

Σ_{m=1}^∞ Q(τ = m) < 1 − 1/μ   iff   Σ_{k=1}^∞ Q(S¹_k = S²_k) < μ − 1.

Proof.
Observe that

Q(S¹_m = S²_m) = Σ_{n≤m} Q(τ = n) Q(S¹_{m−n} = S²_{m−n}),   (2.11)

so

Σ_{m=1}^∞ Q(S¹_m = S²_m) = Σ_{m=1}^∞ Σ_{n≤m} Q(τ = n) Q(S¹_{m−n} = S²_{m−n}) = Σ_{n=1}^∞ [ Q(τ = n) Σ_{m≥n} Q(S¹_{m−n} = S²_{m−n}) ] = ( Σ_{n=1}^∞ Q(τ = n) ) ( 1 + Σ_{m=1}^∞ Q(S¹_m = S²_m) ),

and therefore

Σ_{n=1}^∞ Q(τ = n) = Σ_{m=1}^∞ Q(S¹_m = S²_m) / ( 1 + Σ_{m=1}^∞ Q(S¹_m = S²_m) ) = 1 − 1/( 1 + Σ_{m=1}^∞ Q(S¹_m = S²_m) ).   (2.12)

Finally, we have

1 − 1/( 1 + Σ_{m=1}^∞ Q(S¹_m = S²_m) ) < 1 − 1/μ   iff   Σ_{m=1}^∞ Q(S¹_m = S²_m) < μ − 1,

and this finishes the proof.

3 Analysis of the random walks
In this section, we will estimate the maximal probability of two i.i.d. random walks meeting at a fixed time as a function of their parameters. Given a probability vector q = (q_1, . . . , q_d), let {S¹_n}_n and {S²_n}_n be two independent random walks associated to q and define

λ(q) := Σ_{n=1}^∞ Q(S¹_n = S²_n).   (3.1)

Observe that, combining the results of the previous sections, we have:

If λ(q) < μ − 1, then sup_n E[W_n²] < ∞.   (3.2)

In this section, we investigate the behavior of λ(q).

Theorem 2.
Let d ≥ 4 and 4 ≤ m ≤ d be integers. Consider the d-dimensional probability vector q* = q*_d(m) = (1/m, 1/m, . . . , 1/m, 0, . . . , 0), and let λ(q) be as in (3.1). Then

(a) λ(q*) ≤ 5/m;

(b) for all d-dimensional probability vectors q such that max_i q_i ≤ 1/m, we have λ(q) ≤ λ(q*).

We will prove Theorem 2 in the next sections, but before that we use it to prove Lemma 1.
Proof. [Lemma 1] Let q_i = p_i / (p_1 + · · · + p_d). Under the hypotheses of Theorem 1, we have q_i < ε/5 for all 1 ≤ i ≤ d; taking m = ⌈5/ε⌉, we obtain q_i ≤ 1/m. Taking q = (q_1, . . . , q_d), it follows from Theorem 2 that λ(q) ≤ λ(q*) ≤ 5/m ≤ ε. Finally, by (3.2) we have sup_n E[W_n²] < ∞ and the lemma follows.

3.1 Proof of Item (a)

In this subsection we prove Item (a) in Theorem 2. Observe that, for all d ≥ m, the d − m null entries play no role, so

λ(1/m, . . . , 1/m, 0, . . . , 0) = λ(1/m, . . . , 1/m).

For m ≤ 3, we have that λ(q*) = ∞, so we let m ≥ 4. Then

λ(1/m, . . . , 1/m) = Σ_{n=1}^∞ Σ_{(l_1,...,l_m) ∈ N^m, l_1+···+l_m = n} (n choose l_1, . . . , l_m)² / m^{2n} ≤ Σ_{n=1}^∞ max_{(l_1,...,l_m): l_1+···+l_m = n} { (n choose l_1, . . . , l_m) } / m^n.

We will split this sum in two parts, the first for n ≤ m and the second for n > m, and bound each one separately. Observe that for n = 1, . . . , m the maximum inside the brackets is bounded by n!, and the terms n!/m^n are non-increasing in this range, so the sum for n ≤ m is bounded by

Σ_{n=1}^m n!/m^n ≤ 1/m + 2/m² + Σ_{n=3}^m 6/m³ ≤ 1/m + 8/m².   (3.3)

Now, for each j ≥ 1 and 0 ≤ ℓ ≤ m − 1, we use Stirling's bounds,

√(2πn) (n/e)^n ≤ n! ≤ √(2πn) (n/e)^n e^{1/(12n)},

to obtain, for n = jm + ℓ,

max_{(l_1,...,l_m): l_1+···+l_m = n} { (n choose l_1, . . . , l_m) } = (jm + ℓ)! / [ ((j+1)!)^ℓ (j!)^{m−ℓ} ] ≤ e^m · √m · m^{jm+ℓ} / [ (√(2π))^{m−1} (√j)^{m−1} ].   (3.4)

Using the bound above we obtain

λ(q*) ≤ 1/m + 8/m² + [ e^m · m · √m / (√(2π))^{m−1} ] Σ_{j=1}^∞ j^{−(m−1)/2} ≤ 1/m + 8/m² + [ e^m · m · √m / (√(2π))^{m−1} ] · (m−1)/(m−3).

For m ≥ 4, 8/m² ≤ 2/m and the last summand is at most 2/m, and the result follows.

3.2 Projecting the random walks

We now want to understand the behavior of a random walk in Z^d. To do that, we will split the random walk in Z^d into two "normalized" projections, in Z² and Z^{d−2}, and use the behavior of the two parts to determine the behavior of the original random walk.

Given two independent oriented d-dimensional random walks {S¹_n(q)}_n, {S²_n(q)}_n associated with the probability vector q = (q_1, . . . , q_d), for i = 1, 2, let {R^i_n(q)}_n be two independent bi-dimensional oriented random walks associated with (q_1, q_2) and let {U^i_n(q)}_n be two independent (d−2)-dimensional oriented random walks associated with (q_3, . . . , q_d). Writing, for i = 1, 2, S^i_n(q) = (S^i_n(q)_1, . . . , S^i_n(q)_d), we define two complementary bi-dimensional oriented random walks, {S̃¹_n(q)}_n and {S̃²_n(q)}_n, coupled with {S¹_n(q)}_n and {S²_n(q)}_n respectively, where

S̃^i_n(q) = ( S^i_n(q)_1 + S^i_n(q)_2 , S^i_n(q)_3 + · · · + S^i_n(q)_d ).

Clearly {S̃¹_n(q)}_n and {S̃²_n(q)}_n are independent and have the same distribution as a random walk associated with the probability vector q̃ := (q_1 + q_2, q_3 + · · · + q_d). We will omit the dependency on q until the proof of Theorem 2.

One can think of these newly defined random walks as pseudo-projections of the original ones, and the next lemma expresses the meeting probability of the former in terms of the latter.

Lemma 6.
Let {S¹_n}_n and {S²_n}_n be two random walks associated with the probability vector q = (q_1, . . . , q_d). Then

Q(S¹_n = S²_n) = Σ_{(j,k) ∈ N², j+k = n} Q(S̃¹_n = S̃²_n = (j,k)) Q(R¹_j = R²_j) Q(U¹_k = U²_k).   (3.5)

Proof.
In fact,

Q(S¹_n = S²_n) = Σ_{(j,k) ∈ N², j+k = n} Q(S¹_n = S²_n, S̃¹_n = S̃²_n = (j,k)).   (3.6)

Now, observe that, for fixed (j,k) ∈ N² such that j + k = n, we have

Q(S¹_n = S²_n, S̃¹_n = S̃²_n = (j,k))
= [ (j+k choose j) (q_1+q_2)^j (q_3+···+q_d)^k ]²
· [ Σ_{l=0}^j ( (j choose l) (q_1/(q_1+q_2))^l (q_2/(q_1+q_2))^{j−l} )² ]
· [ Σ_{(l_3,...,l_d) ∈ N^{d−2}, l_3+···+l_d = k} ( (k choose l_3, . . . , l_d) (q_3/(q_3+···+q_d))^{l_3} · · · (q_d/(q_3+···+q_d))^{l_d} )² ]
= Q(S̃¹_n = S̃²_n = (j,k)) · Q(R¹_j = R²_j) · Q(U¹_k = U²_k).

Lemma 7.
For each x ∈ [0, 1], let {Z_n(x)}_n be a random walk over the set of the integers,

Z_n(x) = ζ_1(x) + · · · + ζ_n(x),

where {ζ_i(x)}_{i∈N} are i.i.d. random variables with Q(ζ_i(x) = 0) = x and Q(ζ_i(x) = −1) = Q(ζ_i(x) = 1) = (1 − x)/2. Then, for each fixed n ∈ N, the function F_n : [1/2, 1] → [0, 1] given by

F_n(x) = Q(Z_n(x) = 0)   (3.7)

is increasing.

Proof.
We want to prove that, for 1/2 ≤ x ≤ y ≤ 1, F_n(y) − F_n(x) ≥ 0. To do that, we will write F_n as a sum and analyze the terms of the sum separately. Define

Y_n(x) := #{ i ∈ [n] : ζ_i(x) = 0 },   (3.8)

so we can write

F_n(x) = Σ_{j=0}^n Q(Z_n(x) = 0 | Y_n(x) = j) Q(Y_n(x) = j).

Let us now analyze each part of the sum. We first observe that

a_{n−j} := Q(Z_n(x) = 0 | Y_n(x) = j) = 0, if n − j ≡ 1 mod 2, and a_{n−j} = ((n−j) choose (n−j)/2) × 2^{−(n−j)}, if n − j ≡ 0 mod 2.

Observe also that a_{n−j} is well defined because it does not depend on n or j, but only on n − j. It is also easy to see that, for any 0 ≤ k ≤ n/2, a_{2k} > a_{2k+2}, and thus a_{2k} ≥ a_{2ℓ} for all 0 ≤ k ≤ ℓ ≤ n/2. Consider now the functions g_j given by

g_j(x) := Q(Y_n(x) = j) = (n choose j) x^j (1 − x)^{n−j}.   (3.9)

Taking the derivative, we have

g′_j(x) = (n choose j) x^j (1 − x)^{n−j} ( j/x − (n − j)/(1 − x) ),

so that, for |y − x| sufficiently small and y > x > 1/2, we have

g_j(y) − g_j(x) < 0, if j ≤ nx,   (3.10)
g_j(y) − g_j(x) > 0, if j > nx.   (3.11)

Let N = { 0 ≤ j ≤ n : j ≡ n mod 2 }; then

F_n(y) − F_n(x) = Σ_{j ∈ N} [ g_j(y) − g_j(x) ] a_{n−j},

and using the fact that {a_{2ℓ}}_{0 ≤ ℓ ≤ n/2} is decreasing, together with (3.10) and (3.11), we have

F_n(y) − F_n(x) = Σ_{j ≤ nx, j ∈ N} [ g_j(y) − g_j(x) ] a_{n−j} + Σ_{j > nx, j ∈ N} [ g_j(y) − g_j(x) ] a_{n−j}
≥ ( Σ_{j ≤ nx, j ∈ N} [ g_j(y) − g_j(x) ] + Σ_{j > nx, j ∈ N} [ g_j(y) − g_j(x) ] ) a_{n−⌊nx⌋}
= ( Σ_{j ∈ N} g_j(y) − Σ_{j ∈ N} g_j(x) ) a_{n−⌊nx⌋}.

Now note that Σ_{j ∈ N} g_j(y) can be obtained from the expansion of two binomials:

Σ_{j ∈ N} g_j(y) = ½ [ (y + (1 − y))^n + (y − (1 − y))^n ] = ( 1 + (2y − 1)^n ) / 2.

Hence Σ_{j ∈ N} g_j(x) is an increasing function on [1/2, 1], so F_n(y) − F_n(x) ≥ 0 and the lemma follows.

Proof. [Theorem 2] We want to show that the value of λ is maximal when the positive entries of the vector are packed into m coordinates. Let

A := { q = (q_1, . . . , q_d) : 0 ≤ q_i ≤ 1/m for all i = 1, . . . , d, and Σ_i q_i = 1 },

and, given q ∈ A, let

B(q) := #{ i ∈ [d] : q_i ∉ {0, 1/m} }.

We will define an algorithm A : A → A which increases the value of λ in each step. Let q ∈ A. If B(q) = 0, then define A(q) = q*. If B(q) > 0, then necessarily B(q) ≥ 2, and we suppose without loss of generality that neither q_1 nor q_2 belongs to {0, 1/m}; we then define A(q) = (q′_1, . . . , q′_d), where q′_i = q_i for all i ≥ 3, and

for q_1 + q_2 ≤ 1/m, we let q′_1 = q_1 + q_2 and q′_2 = 0;
for q_1 + q_2 > 1/m, we let q′_1 = q_1 + q_2 − 1/m and q′_2 = 1/m.

We claim that this algorithm has the following properties:

1. if B(q) = 0, then B(A(q)) = 0;
2. if B(q) > 0, then B(A(q)) < B(q);
3. λ(A(q)) ≥ λ(q).

Properties 1 and 2 follow from the definition, and we proceed with the proof that Property 3 holds. We now compare the probabilities of the random walks associated with q and A(q). By Lemma 6 we have

Q(S¹_n(q) = S²_n(q)) = Σ_{(j,k) ∈ N², j+k = n} Q(S̃¹_n(q) = S̃²_n(q) = (j,k)) Q(R¹_j(q) = R²_j(q)) Q(U¹_k(q) = U²_k(q)).

Observe that Q(U¹_k(q) = U²_k(q)) depends only on the last d − 2 coordinates of q, so that

Q(U¹_k(q) = U²_k(q)) = Q(U¹_k(A(q)) = U²_k(A(q))).

Analogously, Q(S̃¹_n(q) = S̃²_n(q) = (j,k)) depends only on q_1 + q_2 and, since this sum is invariant under A, we have

Q(S̃¹_n(q) = S̃²_n(q) = (j,k)) = Q(S̃¹_n(A(q)) = S̃²_n(A(q)) = (j,k)).

We will now prove that

Q(R¹_j(q) = R²_j(q)) ≤ Q(R¹_j(A(q)) = R²_j(A(q))),   (3.12)

and it will follow that

λ(q) = Σ_{n=1}^∞ Q(S¹_n(q) = S²_n(q)) ≤ Σ_{n=1}^∞ Q(S¹_n(A(q)) = S²_n(A(q))) = λ(A(q)).
To prove (3.12), we observe that, for each q ∈ A, the first coordinate of R¹_n(q) − R²_n(q) has the distribution of a lazy random walk {Z_n(x)}_n as defined in Lemma 7, with

x = (q_1² + q_2²) / (q_1 + q_2)² ≥ 1/2.

Analogously, the first coordinate of R¹_n(A(q)) − R²_n(A(q)) has the same distribution as {Z_n(x′)}_n, with

x′ = ((q′_1)² + (q′_2)²) / (q′_1 + q′_2)² ≥ x.

Now, (3.12) follows from Lemma 7. Finally, we observe that, for all q ∈ A, we have A^d(q) = q*, where A^d is the d-th iterate of A. Hence, by Property 3, we get λ(q*) = λ(A^d(q)) ≥ λ(q), and this finishes the proof.

4 Final remarks

We do not know whether Condition 2 of Theorem 1, the upper bound on the probabilities p_i, is only a technical limitation or if a big discrepancy in the anisotropy prevents the system from behaving as in mean-field conditions even in arbitrarily large dimensions. In any case, the isotropic case shows that some bound on the probabilities p_i must be required, as we now explain. One of the results in [1] states that the isotropic critical point satisfies p_c(d) ≥ 1/d + 1/(2d³) + o(1/d³). Let now each p_i = 1/d + 1/(3d³), and take d large enough so that p_i < p_c(d). Writing p_1 + · · · + p_d = 1 + ε, we have ε = 1/(3d²) and p_i = √(3ε)(1 + ε); thus Condition 1 holds and the cluster of the origin is nevertheless almost surely finite, so Condition 2 cannot be relaxed beyond bounds of order √ε.

A natural related question is whether anisotropic non-oriented percolation has the same limiting critical surface behavior, i.e., under which conditions can we guarantee that critical surfaces stay close to the isotropic critical points in high dimensions.

Acknowledgments
P.A.G. has been supported by CAPES, A.P. has been supported by a PNPD/CAPES grant, and R.S. has been partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and by FAPEMIG (Programa Pesquisador Mineiro), grant PPM 00600/16.
References

[1] Cox, J., Durrett, R. (1983). Oriented percolation in dimensions d ≥ 4: bounds and asymptotic formulas. Mathematical Proceedings of the Cambridge Philosophical Society, 93(1), 151-162.

[2] Couto, R., Lima, B.N.B., Sanchis, R. (2014). Anisotropic percolation on slabs. Markov Process. Related Fields, 20(1), 145-154.

[3] Gordon, D. M. (1991). Percolation in high dimensions. Journal of the London Mathematical Society, s2-44, 373-384.

[4] Holley, R., Liggett, T. (1981). Generalized potlatch and smoothing processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 55(2), 165-195.

[5] Kesten, H. (1990). Asymptotics in high dimensions for percolation. Disorder in physical systems, 219-240, Oxford Sci. Publ., Oxford Univ. Press, New York.

[6] Heydenreich, M., van der Hofstad, R. (2017). Progress in high-dimensional percolation and random graphs. CRM Short Courses, Springer-Verlag.

[7] Sanchis, R., Silva, R.W.C. (2017). Dimensional crossover in anisotropic percolation on Z^{d+s}.