The Free Uniform Spanning Forest is disconnected in some virtually free groups, depending on the generator set
Gábor Pete and Ádám Timár

June 12, 2020
Abstract
We prove the rather counterintuitive result that there exist finite transitive graphs H and integers k such that the Free Uniform Spanning Forest in the direct product of the k-regular tree and H has infinitely many trees almost surely. This shows that the number of trees in the FUSF is not a quasi-isometry invariant. Moreover, we give two different Cayley graphs of the same virtually free group such that the FUSF has infinitely many trees in one, but is connected in the other, answering a question of Lyons and Peres [LP16] in the negative. A version of our argument gives an example of a non-unimodular transitive graph where WUSF ≠ FUSF, but some of the FUSF trees are light with respect to Haar measure. This disproves a conjecture of Tang [Tan19].
The Free Uniform Spanning Forest FUSF is one of the most standard random spanning forests of infinite graphs, obtained as the weak limit of the uniform random spanning trees UST in any exhaustion of the infinite graph by finite subgraphs. In any transitive graph, its law is invariant under the automorphisms of the graph. It is a determinantal process, and is especially interesting due to its connections to measurable group theory: in any Cayley graph of a group Γ, its expected degree is 2 + 2β_1^{(2)}(Γ), where β_1^{(2)}(Γ) is the first ℓ^2-Betti number of the group, the von Neumann dimension of the space of harmonic functions of finite Dirichlet energy. In particular, we have the equality FUSF = WUSF with the Wired Uniform Spanning Forest iff β_1^{(2)}(Γ) = 0. See [BLPS01] and [LP16, Chapter 10] for thorough studies of the FUSF; some more recent papers are [HN17, Tim18, AHNR18, HN19].

We will mostly work in the direct product graph T_k × H, where T_k is the k-regular infinite tree with k ≥ 3, while H is a finite vertex-transitive graph. Typical examples are the product Cayley graphs of the virtually free groups F_r × Γ, where F_r is a free group on r ≥ 2 generators and Γ is a finite group. The FUSF on some tree-like graphs was recently studied, among other topics, in [Tan19]. In particular, Tang proved that, for any k, the FUSF in T_k × Z_2 (where Z_2 is the path on 2 vertices, i.e., a single edge) is connected almost surely, and made the innocent-looking conjecture that this holds more generally, for the direct product T_k × H with any k ≥ 3 and any finite transitive graph H. See Remark 5.9 in that paper. Here we are disproving this conjecture.

Theorem 1.1 (Disconnected
FUSF). For every d there is k_d such that if T_k is the k-regular infinite tree with k ≥ k_d, and H is a connected finite d-regular transitive graph on more than k^{9/2} vertices, then the FUSF of T_k × H is disconnected almost surely. In fact, it has infinitely many components.

One striking corollary of our result is that the number of trees in the FUSF of transitive graphs is not a quasi-isometry invariant, as opposed to several similar properties: the number of trees in the
WUSF [LP16, Corollary 10.25], the property WUSF ≠ FUSF [Soa93, BLPS01], or equivalently, the infinite-endedness of all the FUSF trees (the equivalence follows from [Mor03] and [HN17, Tim18]). (Note, nevertheless, that without transitivity the number of components is not a quasi-isometry invariant even when WUSF = FUSF, as the example in [Ben91] shows.) With some extra work, we get that the number of trees is not even the same for different Cayley graphs of a fixed group (even though the expected degree of the FUSF depends only on the group, because of the connection to β_1^{(2)}(Γ)). This answers a question of Lyons and Peres [LP16, Question 10.50] in the negative:

Theorem 1.2 (Dependence on the generating set). For k large enough, the group F_k × Z_k (the direct product of a free group and a cyclic group) has a Cayley graph in which the FUSF has infinitely many components, and another Cayley graph in which the FUSF is connected.
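A Cayley graph depends on the chosen finite generating set, and different generating sets for the same group can produce genuinely different graphs. As a toy finite illustration of this dependence (using the cyclic group Z_12 rather than the group of the theorem; the function names are ours, not from the paper):

```python
def cayley_graph(n, gens):
    """Cayley graph of the cyclic group Z_n with the symmetrized generating set."""
    sym = sorted({g % n for g in gens} | {(-g) % n for g in gens})
    return {x: [(x + g) % n for g in sym] for x in range(n)}

def is_connected(graph):
    """Breadth-first search connectivity check."""
    start = next(iter(graph))
    seen, queue = {start}, [start]
    while queue:
        v = queue.pop()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(graph)

# Two Cayley graphs of the same group Z_12:
# a 2-regular cycle, and a 4-regular circulant graph.
cycle = cayley_graph(12, [1])
circulant = cayley_graph(12, [2, 3])
```

Both are vertex-transitive graphs of the same underlying group; only the generating set differs. Theorem 1.2 exhibits this phenomenon at the level of a probabilistic invariant, the number of FUSF components.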
Another corollary of Theorem 1.1 is that although the FUSF might be connected in every quasi-transitive (or more generally, unimodular random) planar graph (see [AHNR18] for a large subclass), this for sure cannot be extended from planar graphs to an arbitrary minor-closed family. This also means that a positive answer to [Tim20, Question 8], extending treeability and soficity of unimodular random graphs from the planar case to graphs with arbitrary excluded minors, cannot be done via the strategy of [AHNR18], using the FUSF.

It should be mentioned that [LP16, Question 11.37] asks whether the Free Minimal Spanning Forest FMSF is connected in any graph that is roughly isometric to a tree. A key difference from our situation is that [LPS06, Theorem 1.3] says that the union of the FMSF with an independent Bernoulli(ε) bond percolation is always connected, for any ε > 0. This is something that we do not know for the FUSF in our graphs, which also brings us to our next remark.

A well-known question of Damien Gaboriau [Gab02] is whether the so-called measurable cost of any group Γ is equal to 1 + β_1^{(2)}(Γ). He pointed out (see [LP16, Question 10.12]) that a positive answer would follow if, in every Cayley graph and for any ε > 0, there existed an invariant connected spanning subgraph ω that contains the FUSF, but ω \ FUSF has density at most ε. Interesting examples are the infinite Kazhdan groups: here β_1^{(2)}(Γ) = 0, hence WUSF = FUSF, by [BV97]; thus non-amenability together with [BLPS01, Theorem 13.7] imply that adding an independent Bernoulli(ε) bond percolation does not work; on the other hand, adding some much trickier invariant percolation does work [HP20, Remark 2.2]. In the examples of our Theorem 1.1, we have WUSF ≠ FUSF (because transitive graphs with infinitely many ends have harmonic functions with finite Dirichlet energy), so it is tempting to speculate that they could provide a negative answer to Gaboriau's question. However, we have been unable to prove anything in this direction. In particular, it remains open if any two trees in our FUSF touch each other at finitely many places, similarly to Bernoulli percolation [Tim06] or WUSF clusters in Z^d with d ≥ 5.

For any infinite transitive graph H, it has been known for long [BLPS01] that the FUSF of T_k × H has infinitely many components. More generally, in the direct product of any non-amenable transitive graph with any infinite transitive graph, there is no invariant probability measure on the set of subtrees [PP00] (even without the requirement of being spanning trees). However, for any finite transitive graph H, a uniform random translate of T_k gives an invariant random subtree, hence a general non-treeability argument could not imply our Theorem 1.1. In fact, all disconnectedness results on the FUSF that we know of have been obtained so far either by proving that
WUSF = FUSF and knowing that the WUSF trees are small (e.g., recurrent, hence one- or two-ended [Mor03]); or by noticing that even when WUSF ≠ FUSF, the FUSF may be similar to the WUSF, as in the free product Z ∗ Z; or by a general non-treeability result, which applies not only to the FUSF but to any invariant spanning forest. In contrast, our proof is in a treeable group, specific to the FUSF, in a situation where WUSF ≠ FUSF. The reason for having no earlier FUSF-specific results is that this is quite a mysterious object: while the WUSF can be generated in infinite graphs directly by Wilson's algorithm rooted at infinity, using loop-erased random walks [Wil96], or by the Interlacement Aldous-Broder algorithm [Hut18], no such method is known for the FUSF. Indeed, we will use Wilson's algorithm in finite balls of the graph, then take the limit.

A version of our construction gives a counterexample to Conjecture 1.2 of [Tan19], in a strong way. A transitive graph G, with full automorphism group Γ, is called unimodular if, for every pair of neighbors x, y, we have |Γ_x y| = |Γ_y x|, where Γ_x = {γ ∈ Γ : γ(x) = x} is the stabilizer subgroup, and Γ_x y = {γ(y) : γ ∈ Γ_x} is the orbit of y. For instance, every Cayley graph is unimodular. See [LP16, Chapter 8] for background on unimodularity and its connections to invariant percolations. For non-unimodular transitive graphs, it is worth looking at an invariant Haar measure µ on the locally compact Γ, which gives finite but non-equal weights to the stabilizers: µ(Γ_x)/µ(Γ_y) = |Γ_x y|/|Γ_y x|, for any x, y ∈ V(G). A subset C ⊂ V(G) is called light if ∑_{x∈C} µ(Γ_x) < ∞. It was proved in [Tan19, Theorem 1.1] that the trees of the WUSF in any non-unimodular transitive graph (and more generally, whenever there is a closed non-unimodular subgroup of automorphisms that acts transitively on G) are light. His Conjecture 1.2 stated that the opposite holds for the FUSF, when WUSF ≠ FUSF. Since our examples in Theorem 1.1 do have transitive closed non-unimodular subgroups (the automorphisms fixing an end of the tree), they already give counterexamples to the more general conjecture. Nevertheless, with a bit more work, we can also give counterexamples where the full automorphism group is non-unimodular. Note here that there is a usual way of producing a non-unimodular transitive graph from a graph with a non-unimodular transitive subgroup of automorphisms by adding some edges in a transitive way, as in the grandmother graph; however, since we have already seen that the number of FUSF components is not a quasi-isometry invariant, it is unclear what the effect of such a "small" change would be.
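For intuition on lightness, consider T_k with a fixed end ξ and the subgroup Γ of automorphisms preserving ξ: if y is the ξ-parent of its neighbor x, then |Γ_x y| = 1 while |Γ_y x| = k − 1, so the Haar weights satisfy µ(Γ_x) = µ(Γ_y)/(k − 1). The following sketch is our own illustration of this standard example (normalizing µ(Γ_o) = 1 at a reference vertex o); it sums these weights along rays:

```python
def stabilizer_weight(depth, k):
    # Haar weight of the stabilizer of a vertex `depth` steps below o,
    # i.e., `depth` steps away from the fixed end xi; negative depth moves toward xi.
    return float(k - 1) ** (-depth)

def ray_weight(k, n, toward_end=False):
    # Total weight of the first n vertices of a ray from o, either descending
    # away from xi (a light set in the limit) or ascending toward xi (not light).
    sign = -1 if toward_end else 1
    return sum(stabilizer_weight(sign * d, k) for d in range(n))
```

For k = 4 the descending ray has total weight ∑_d 3^{-d} = 3/2 < ∞, while the weights toward ξ blow up geometrically; this geometric dichotomy is the mechanism behind light versus non-light subsets in such graphs.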
Theorem 1.3 (Non-unimodular lightness). There exists a non-unimodular transitive graph G in which WUSF ≠ FUSF, but the FUSF has some light clusters.

The second part of [Tan19, Conjecture 1.2] was that, for non-unimodular transitive graphs, WUSF ≠ FUSF implies that all the trees of the FUSF have branching number larger than 1. (True in the unimodular case, because the average degree being strictly larger than 2 implies invariant non-amenability [AL07, Section 8].) Our construction is not a counterexample to this conjecture.

The dis/connectedness results discussed above give rise to a nontrivial graph parameter: for any finite graph H we let

disco(H) := min{ k : FUSF(T_k × H) is disconnected } ∈ {3, 4, . . . , ∞}.

The earlier results on the connectedness of the
FUSF in T_k × P_2 say that disco(P_2) = ∞. Our Theorem 1.1 implies that if ℓ is large enough, then the cycle C_ℓ of length ℓ has disco(C_ℓ) < ∞. Several specific open questions on this graph parameter are discussed in Section 6.

To conclude this introduction, let us say a few words about our proof strategies and the organization of the paper.

The ball of radius n around a fixed root o ∈ T_k will be denoted by T_n, while the sphere of radius n will be denoted by S_n. We will generate the UST in T_n × H by Wilson's algorithm [Wil96, LP16], first taking the loop-erased random walk LERW from a = (o, h_a) to b = (o, h_b), where h_a ≠ h_b ∈ H are arbitrary. See Section 3 for the definitions. In the setting of Theorem 1.1, we will prove that the LERW from a to b, with a positive probability that does not depend on the radius n, will contain some boundary vertex (z, h_z) ∈ S_n × H. This will easily imply the theorem. Finding such a (z, h_z) will go as follows.

We will find that the simple random walk trajectory from a to b with uniformly positive probability hits a "bag" {z} × H with z ∈ S_n in such a way that the part of the trajectory before hitting {z} × H, denoted by π_there, and the second part π_back after leaving {z} × H do not intersect each other. For this, we will have to make sure that there are no intersections in either of the following ways: (1) outside the ray of bags between {o} × H and {z} × H; (2) in some bag of this ray.

To guarantee (1), we will ensure that the bags that π_there enters outside the ray are different from the bags that π_back enters. By suitable requirements on the tree-coordinate of the random walk, it is possible to find z such that this (and hence (1)) is satisfied, as presented in Proposition 2.1. It remains to rule out intersections as in (2).
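The generic procedure invoked above, random walks with loop-erasure assembled into Wilson's algorithm on a finite graph, can be sketched in code. This is an illustrative implementation for an arbitrary finite connected graph (function names are ours), not the actual construction analyzed in the proofs:

```python
import random

def loop_erase(path):
    """Erase cycles in the order of their creation: keep only the first
    occurrence of each vertex on the current partial path."""
    erased, index = [], {}
    for v in path:
        if v in index:
            # v closes a loop: erase everything after v's first occurrence
            for u in erased[index[v] + 1:]:
                del index[u]
            del erased[index[v] + 1:]
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

def wilson_ust(graph, root, rng):
    """Uniform spanning tree of a finite connected graph via Wilson's algorithm:
    loop-erased random walks are run until they hit the tree built so far.
    `graph` maps each vertex to the list of its neighbors."""
    in_tree = {root}
    parent = {}
    for start in graph:
        if start in in_tree:
            continue
        walk = [start]
        while walk[-1] not in in_tree:          # random walk until hitting the tree
            walk.append(rng.choice(graph[walk[-1]]))
        branch = loop_erase(walk)               # the LERW branch to be added
        for a, b in zip(branch, branch[1:]):
            parent[a] = b                       # edges point toward the existing tree
        in_tree.update(branch)
    return parent
```

Running wilson_ust on the balls G_n = T_n × H and tracking whether the first LERW branch from a to b contains a boundary vertex of S_n × H is exactly the experiment behind Theorem 1.1.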
Here the H-coordinates will play the main role. The intuition is that the visits in a typical bag {v} × H are not too long (since k is large compared to d), and the places where the walker enters {v} × H from the outside are likely to be far from each other, because these entrances tend to be separated by long time intervals (until the walk on the tree returns) and because H is large. To elaborate this argument will require some work, presented in Section 3.

For Theorem 1.2, the idea is to start with a small degree d but large H compared to k, so that Theorem 1.1 applies, then change the generating set so that we get the complete graph on H. This makes the random walk that generates the LERW spend a lot of time in each bag {v} × H before moving in the tree-coordinate, making it very likely that the loop-erasure erases every long excursion away from the root bag {o} × H. The details are worked out in Section 4.

In Section 5, we prove Theorem 1.3 on lightness in the non-unimodular setting. Here the task is to modify the tree-proof to a well-chosen non-unimodular transitive graph, then argue that there are infinitely many components in the FUSF, which makes at least some of them light.

We conclude the paper with several open problems in Section 6, including the ones on Gaboriau's question and on our new graph parameter disco(H) for finite graphs H.

A key observation about the tree-coordinate of the random walk will be the following proposition, somewhat interesting in its own right. Consider simple random walk (Y_t)_{t≥0} on T_n, started at the root o, until the first return time τ_o^+ := min{t > 0 : Y_t = o}.

Proposition 2.1 (Viable rays). For any k large enough, with a positive probability that may depend only on k, there is a z ∈ S_n in T_k such that, denoting the ray from o to z in T_n by γ = (o = γ_0, γ_1, . . . , γ_n = z), we have:

• all the edges on the ray γ are crossed exactly twice until τ_o^+ (once on the way from o to z, once on the way back);

• on the way from o to z, for every i = 1, . . . , n − 1, the number of excursions away from γ_i before taking the edge (γ_i, γ_{i+1}) is at most k/2;

• denoting by E_i and F_i the set of edges incident to a vertex γ_i but not on γ that are crossed on the way to z, and on the way back from z to o, respectively, we have that E_i ∩ F_i = ∅ for all i = 1, . . . , n − 1.

Such a ray typically has the property that all its vertices have positive but small local times (of order k) until τ_o^+. It is possible that, using the Dynkin isomorphism theorem [Dyn84], such a result could be proved via the Gaussian Free Field on T_k; see [DLP12, Lup16, Zha18] for such arguments. However, since we also need the more refined statement on the edges incident to the ray, we have not tried to make this connection precise. Let us emphasize that a typical ray to S_n, or the first ray along which we reach S_n, do not satisfy the proposition; we have to work to find such rays.

Proof of Proposition 2.1.
Pick a leaf z ∈ S_n, denote the ray from o to z by γ = γ(z), the stopping times τ_z := min{t : Y_t = z} and τ_o^+ as before, and define the events

A_z := { the edge (γ_{i−1}, γ_i) is crossed exactly twice by (Y_t)_{t=0}^{τ_o^+}, for all i = 1, 2, . . . , n },
L_z := { |{ t ∈ {0, . . . , τ_z} : Y_t = γ_i }| ≤ k/2 for all i = 1, 2, . . . , n }.   (2.1)

Furthermore, let E_i and F_i be the sets of edges as defined in Proposition 2.1, and define the event

B_z := A_z ∩ L_z ∩ { E_i ∩ F_i = ∅ for all i = 1, 2, . . . , n − 1 }.   (2.2)

Let us calculate P(B_z). The first step has to be towards z, with P(Y_1 = γ_1) = 1/k, and then, for each γ_i, i = 1, . . . , n − 1, the walk (Y_t) may take excursions away from γ_i, but it has to choose γ_{i+1} before γ_{i−1}, and the number of excursions has to be at most k/2. The probability of this event There_i, with the extra condition that there are precisely j ≥ 0 excursions, is

P( There_i, with j excursions ) = (1 − 2/k)^j · (1/k),   (2.3)

independently of what happens at other γ_i's. When we arrive at Y_{τ_z} = z, we have already sampled the edge sets E_i, i = 1, . . . , n − 1. Then, at each γ_i, for i = n − 1, n − 2, . . . , 1, we have to choose γ_{i−1} before γ_{i+1}, an event we will denote by Back_i; furthermore, the excursions away from γ_i have to produce an edge set F_i that is disjoint from E_i. The probability of everything together, independently of i, is

p_k := P( There_i, Back_i, and E_i ∩ F_i = ∅ ) ≥ ∑_{j=0}^{⌊k/2⌋} (1 − 2/k)^j · (1/k) · 1/(j + 2) ≍ (log k)/k,   (2.4)

because if we have j ≤ k/2 excursions on the way to z, then |E_i| ≤ j, thus the walk on the way back has to avoid at most j + 1 neighbors before choosing γ_{i−1} (the edges of E_i plus the edge to γ_{i+1}), which has success probability at least 1/(j + 2). The asymptotic formula at the end simply follows from the exponential factor being between 1 and 1/e for all 0 ≤ j ≤ k/2; the symbol ≍ means "up to positive universal constant factors", independently of k or n.

The events of (2.4) for different i's are independent from each other, hence we have

P(B_z) = (1/k) · p_k^{n−1}.   (2.5)

Let Z_n be the set of leaves z ∈ S_n that satisfy the event B_z. Then we have the first moment

E|Z_n| = k(k − 1)^{n−1} · (1/k) · p_k^{n−1},   (2.6)

which goes to infinity as n → ∞ if k is large enough, by (2.4).

To estimate the second moment E|Z_n|^2, let z, v ∈ S_n be leaves such that their last common ancestor is w ∈ S_m, with m ≥ 1. We claim that

P( B_z ∩ B_v ) ≍_k p_k^{2n−m},   (2.7)

where p_k is defined in (2.4), and ≍_k means "up to constant factors that may depend on k, but not on n or m".

Indeed, the first step in (Y_t) has to be towards w; then we have to reach w without ever stepping backwards along the ray from o to w; then we have to step towards z or v before stepping backwards towards o; then we have to reach the chosen leaf without backward moves; then we have to go back to w without backward moves, and with the F_i sets avoiding the E_i's; at w, we have to step towards the other leaf before stepping towards o; if we define F'_m to be the set of edges emanating from w that are crossed after reaching w after the first leaf, but before the step towards the second leaf, we must have E_m ∩ F'_m = ∅; from w we have to reach the other leaf without ever stepping backwards; then we have to go back to w, without backward moves, and with the F_i sets avoiding the E_i's also along this branch; at w, we have to move towards o before moving towards z or v again, and the edge set F''_m produced by the excursions before that has to be disjoint both from E_m and F'_m; then we have to reach o without ever stepping backwards, again with the F_i sets avoiding the E_i's. We have (m − 1) + 2(n − 1 − m) = 2n − m − 3 events of the types There_i and Back_i away from w, each with success probability p_k, independently from each other. At w, the conditions are obviously possible to satisfy if k ≥ 3 (at the first visit go straight towards the first leaf, at the second visit go straight towards the second leaf, at the third visit go straight towards o), happening with probability at least 1/k^3 and at most 1. So, the probability altogether is between p_k^{2n−m−3}/k^3 and p_k^{2n−m−3}, which can be written as (2.7).

By going through all possible last common ancestors w, from (2.7) we get

E|Z_n|^2 = ∑_{z,v ∈ S_n} P( B_z ∩ B_v ) ≍_k ∑_{m=1}^{n} k(k − 1)^{m−1} · ((k − 1)^{n−m})^2 · p_k^{2n−m} ≍_k ((k − 1)p_k)^{2n},   (2.8)

if k is large enough, since (k − 1)p_k → ∞ holds by (2.4), hence the m = 1 term will dominate. Comparing (2.6) and (2.8), the Cauchy-Schwarz second moment method gives us

P( |Z_n| > 0 ) ≥ (E|Z_n|)^2 / E(|Z_n|^2) ≍_k 1,

finishing the proof of Proposition 2.1.

In this section, we will first recall how to generate the
FUSF via an exhaustion by finite graphs and the loop-erased random walk LERW inside each finite graph. Then we will consider the random walk in T_k × H, together with its projection to T_k, and show that some of the viable rays found in Section 2 correspond to trajectories in the product graph that survive the loop-erasure, provided that H is large enough (compared to k). This way, we get distinct paths in the FUSF from two neighboring vertices to infinity.

As we briefly explained in the Introduction, the FUSF of an infinite graph G is defined as the weak limit of the sequence UST(G_n), where (G_n)_{n≥1} is any increasing sequence of connected finite subgraphs of G such that ⋃_{n≥1} G_n = G, and UST is the uniform measure on all spanning trees of the finite graph. The limit exists and is independent of the sequence (G_n) by some electric network monotonicity arguments [LP16, Chapter 10]. On a connected finite graph G, we can use the loop-erased random walk LERW to construct UST(G) with Wilson's algorithm [Wil96]. Choose two vertices x_0, x_1 of G, and produce a simple path from x_0 to x_1 by running a random walk from x_0 until hitting x_1, and erasing all cycles created by the trajectory, in the order of creation. Then pick some x_2, start a walk from x_2 until we hit the path between x_0 and x_1, take the loop-erasure of it, and so on, always walking from the next x_i until we hit the already existing tree, repeating until all the vertices become part of the tree.

Our infinite graph will be a direct product G = T_k × H, often denoted by T_k □ H, where the vertex set is just the set of pairs, and the neighbors of (t, h) are the vertices (t', h) with {t, t'} ∈ E(T_k) and the vertices (t, h') with {h, h'} ∈ E(H).

Proof of Theorem 1.1. We take the exhaustion G_n = T_n × H of G = T_k × H. Our first step in Wilson's algorithm is to take the loop-erased random walk LERW from a = (o, h_a) to b = (o, h_b), where h_a ≠ h_b ∈ H are arbitrary. We will prove that the LERW from a to b, with a probability greater than a positive number p that does not depend on n, will contain some boundary vertex (z, h_z) ∈ S_n × H. Then, for any fixed finite subgraph U of G, if n is large enough so that G_n contains U, but S_n × H is already disjoint from U, and the above event for the UST(G_n)-path between a and b occurs, then the intersection of this UST(G_n)-path with U will not connect a and b. Hence, in the weak limit as n → ∞, the FUSF-component of a will be different from the component of b with probability at least p.

One way to complete the proof from here is that the number of trees in the FUSF in any unimodular transitive graph was shown in [Tim18] and [HN17] to be either one a.s., or infinite a.s., hence we have to be now in the second case. We will also give a direct proof for our very special direct product graph, via Wilson's algorithm, not using unimodularity, at the end of this section. And, we will give yet another proof, using the Mass Transport Principle, which again works both for unimodular and non-unimodular transitive graphs that are tree-like in some sense, and in a larger generality than the FUSF, in Proposition 5.3. Some readers might prefer the more specific Wilson's algorithm proof, some readers might prefer the more general MTP proof, but in any case, not relying on unimodularity will be important for Theorem 1.3.

We now turn to the study of the LERW from a to b. The random walk on G_n from a to b for which we apply the loop-erasure will be denoted by (X_t)_{t≥0}. The first coordinate of (X_t)_{t≥0} is a lazy random walk on T_n, denoted by (Y_t)_{t≥0}. As before, we fix z ∈ S_n, and let τ_z and τ_o^+ denote the hitting times for the projection (Y_t). Condition on the event B_z of (2.2), but with the local times at γ_i in the definition of L_z being understood as the number of "essentially different visits", i.e., with the lazy steps removed from (Y_t). The last time before τ_z that (X_t) is in γ_i × H is denoted by α_i, and the first time after τ_z that (X_t) is in γ_i × H is denoted by β_i. (For i = n, we may mean α_n = β_n = τ_z, but this will not matter anyway.) Furthermore, the number of actual (non-lazy) steps until α_i in the T_k and H coordinates will be denoted by α_i^T, α_i^H, respectively, and similarly for β_i.

Lemma 3.1.
Let F(β_i) be the sigma-algebra generated by (X_t)_{t=0}^{β_i}. Then, for any 1 ≤ i ≤ n − 15,

P( β_{i−1}^H − β_i^H > k^9 | B_z, F(β_i) ) > b

for any large enough k, with a constant b > 0 that does not depend on i, n, or k.

Proof.
Since we are conditioning on an event concerning the entire random walk trajectory, B_z, we have to be careful what the exact effect of this is. Namely, for any such event B, the original random walk transition probabilities get reweighted by a Bayesian factor:

P( X_{t+1} | (X_s)_{0≤s≤t}, B ) = P( X_{t+1} | (X_s)_{0≤s≤t} ) · P( B | (X_s)_{0≤s≤t+1} ) / P( B | (X_s)_{0≤s≤t} ).   (3.1)

Now, for the lemma, it is enough to prove that

P( β_{i−1}^T − β_i^T > 2k^{10} | B_z, F(β_i) ) > b',   (3.2)

with some constant b' > 0, by the following reasoning. Whenever X_t ∈ γ_i × H at some time t ≥ β_i, the conditioning on B_z forbids the T_k-steps through (γ_i, γ_{i+1}) and E_i, while the other T_k-steps and all the H-steps are available. In other words, the Bayesian factor from (3.1), with B = B_z and also conditioned on F(β_i), is zero for X_{t+1} ∈ γ_{i+1} × H and for (X_t, X_{t+1}) ∈ E_i × H, while positive for other possible X_{t+1}'s. Namely, for the H-steps (i.e., for X_{t+1} ∈ γ_i × H) and for those T_k-steps that are away from γ and not excluded by E_i, the Bayesian factors are all 1, while for the T_k-step to γ_{i−1} it is at most k; this bound on the last factor comes from rearranging

P( B_z | X_0, . . . , X_t = (γ_i, h) ) ≥ (1/(d + k)) · P( B_z | X_0, . . . , X_t = (γ_i, h), X_{t+1} = (γ_{i−1}, h) ) + (d/(d + k)) · P( B_z | X_0, . . . , X_t = (γ_i, h), X_{t+1} ∈ γ_i × H ),

which holds for any t and any h ∈ H. Thus, the total weight of H-steps is d, while the total weight of T_k-steps is at most 2k − 1, hence, before each T_k-step, the number of H-steps stochastically dominates a Geom( (2k − 1)/(2k − 1 + d) ) − 1 variable. The times α_i and β_i are measurable with respect to the T_k-coordinate of (X_t), hence, conditioned on all the events of (3.2), the variable β_{i−1}^H − β_i^H stochastically dominates a sum of 2k^{10} iid variables with mean d/(2k − 1) and variance d(2k − 1 + d)/(2k − 1)^2. Since d ≥ 2, the expectation of the sum is larger than 2k^9, and if k is large enough, then the variance of the sum is much smaller than k^{18}, hence the sum itself is larger than k^9 with a uniformly positive probability by Chebyshev's inequality.

For a proof of (3.2), first notice that, given B_z and F(β_i), the Bayesian factors calculated in the previous paragraph show that with a uniformly positive probability the step (X_{β_i}, X_{β_i+1}) is in the T_k-coordinate, away from o, into a branch different from γ and E_i. (We are conditioning on the event L_z of (2.1) exactly in order for this uniformity to hold: the total Bayesian weight of these steps is at least k/2 − 1, while the total weight of all other steps is at most k + d.) From here, the distance of (Y_t) from γ_i is a biased random walk: whenever it changes (the step is in the T_k-coordinate), it decreases with probability 1/k and increases otherwise. So, it will reach level S_n with a uniformly positive probability. Now, after this, whenever the walk is at S_{n−1}, it reaches level S_i before S_n only with probability ≍ (k − 1)^{i−n+1}, by the usual exponential martingale argument [Dur10, Theorem 5.7.7]. For i ≤ n − 15, this is at most O(k^{−14}). That is, the number of steps in the T_k-coordinate until returning to S_i from S_{n−1} stochastically dominates a geometric random variable with success probability O(k^{−14}), and this is at least 2k^{10} with a uniformly positive probability. This gives (3.2).

Let A_i be the set of times until time α_i when X_t is in γ_i × H, and let B_i be the analogous set of times from time β_i until τ_o^+. We let H(A_i) and H(B_i) be the set of vertices in γ_i × H visited at these times. Conditioned on B_z, the time spent in γ_i × H, for 1 ≤ i ≤ n − 1, is stochastically dominated by a Geom( 1/(k + d − 1) ) variable on the way to z and by an independent copy on the way back to o, since the forward move along γ is always available, the backward move is never, and the forward move always has the largest Bayesian factor from (3.1). The time spent in γ_n × H is dominated by a Geom( 1/(d + 1) ). So, letting G_i denote the sigma-algebra generated by all the trajectory pieces outside the subgraph G_i spanned by γ_i × H and the subgraphs of G \ (γ × H) hanging from there, up to time-translations for each piece (so, without the information how many steps within G_i are taken), we have

P( |A_i|, |B_i| < k^2 | B_z, G_i ) ≥ P( Geom( 1/(k + d − 1) ) < k^2 )^2 = ( 1 − (1 − 1/(k + d − 1))^{k^2} )^2 > (1 − 1/e)^2 =: g,   (3.3)

if k is large enough, for i = 1, . . . , n.

Now, if we also condition on the history F(β_i) and the event { β_{i−1}^H − β_i^H > k^9 } of Lemma 3.1, then t − s > k^9 for all s ∈ A_{i−1} and t ∈ B_{i−1}, and hence the following lemma will be relevant.

Lemma 3.2.
In any d-regular finite graph H on more than k^{9/2} vertices, if t > k^9, and x, y ∈ V(H) are arbitrary, then the simple random walk heat kernel satisfies P( X_t = y | X_0 = x ) < C_d k^{−9/2}, with a constant C_d < ∞ that depends only on d.

Proof.
This is basically a special case of [Lyo05, Lemma 3.6 in the arXiv version] or [MP05], with a few minor additional remarks.

In both references, the Markov chain is supposed to have a uniform laziness. So, we apply these results to the chain given by two consecutive steps on H. Since H is d-regular, the probability of staying put in this chain is 1/d. The stationary distribution is uniform. So, the references imply the on-diagonal bound P( X_{2t} = x | X_0 = x ) < C_d k^{−9/2} for all even times 2t > k^9. We then get the same off-diagonal bound P( X_{2t} = y | X_0 = x ) < C_d k^{−9/2} by a standard Cauchy-Schwarz argument and the uniformity of the stationary distribution. Finally, to get the same bound for X_{2t+1} being at y, average the bound over the neighbors of y at time 2t, before making the last step.

We also remark that to apply [MP05] one has to take ε = k^{−9/2}|H| there, which is not small (as suggested by the notation ε), but that is actually not a requirement in that paper.

Now, the actual H-steps taken in the walk (X_t)_{t≥0} are independent of our conditionings on the T_k-steps and on the number of H-steps, so, if |V(H)| > k^{9/2}, then we can apply the previous lemma to obtain

P( H(A_{i−1}) ∩ H(B_{i−1}) ≠ ∅ | B_z, F(β_i), β_{i−1}^H − β_i^H > k^9, and |A_{i−1}|, |B_{i−1}| < k^2 ) < (2k^2)^2 · C_d k^{−9/2}.

This is less than 1/2 if k is large enough, hence, combining with Lemma 3.1 and (3.3), we get

P( H(A_{i−1}) ∩ H(B_{i−1}) = ∅ | B_z, F(β_i) ) > bg/2,   (3.4)

for i ∈ {2, . . . , n − 15}. For i ∈ {n − 14, . . . , n}, the lower bound b of Lemma 3.1 can of course be replaced by some b̃_k > 0, still independent of n.

If the random walk trajectory (X_t)_{t=0}^{τ_o^+} satisfies B_z and the intersection ⋂_{i=1}^{n−1} { H(A_i) ∩ H(B_i) = ∅ }, then its loop-erasure will still contain z, hence this would imply the event that we are interested in. Iterating (3.4) for all i = n, n − 1, . . . ,
2, we will have a good lower bound (exponentially small, but with a base that does not depend on $k$) on the probability of this event. But we will again need to use the second moment method, for which we need a little bit of preparation. Define the events
$$C_z := B_z \cap \big\{ H(A_i) \cap H(B_i) = \emptyset \text{ for all } i = 1, \ldots, n-1 \big\},$$
$$C_z(h) := C_z \cap \big\{ X_{\tau_o^+} = (o, h) \big\}, \qquad h \in H.$$
Then
$$P\big( C_z \,\big|\, B_z \big) > \tilde b_k\, (bg/2)^{n-2}. \qquad (3.5)$$
Furthermore, we claim that
$$\max_{h \in H} P\big( C_z(h) \,\big|\, B_z \big) \le C\, \min_{h \in H} P\big( C_z(h) \,\big|\, B_z \big), \qquad (3.6)$$
with a constant $C < \infty$ that may depend on $H$ and $k$, but not on $n$.

In the proof of this claim, we will use a small technical lemma:

Lemma 3.3.
Every finite connected transitive graph $H$ is 2-vertex-connected: for any vertex $g \in V(H)$, the graph we get from $H$ by deleting $g$ is still connected.

Proof.
Assume that there is a cut-vertex $g$, whose removal cuts $H$ into at least two components; denote the largest of these by $H_g$ (or one of the largest ones in case of a draw). Take some vertex $h$ not in $\{g\} \cup H_g$. By transitivity, $h$ is also a cut-vertex, whose removal results in at least two components, one containing both $g$ and $H_g$. But this component will have size strictly larger than $|H_g|$, contradicting transitivity.

To prove (3.6), first observe that, if we condition the random walk trajectory $(X_t)$ to satisfy $C_z$, and let $X_\alpha = (\gamma_3, h_{out})$ be the last vertex in $\gamma_3 \times H$ on the trajectory before $\tau_z$, and let $X_\beta = (\gamma_3, h_{in})$ be the first one after $\tau_z$, then, conditionally on $h_{out}$ and $h_{in}$, the part of the trajectory between $h_{out}$ and $h_{in}$ is independent of the rest. Therefore, if we prove that there exists some $p >$
0, depending only on $H$ and $k$, but not on $n$, such that, for any two vertices $h_{out} \ne h_{in}$ and any $h \in H$, the probability that $(X_t)_{t=0}^{\alpha}$ and $(X_t)_{t=\beta}^{\tau_o^+}$ satisfy the conditions of $C_z(h)$ relating to $\gamma_i \times H$ for $i = 0, 1, 2, 3$ is at least $p$, then $C = 1/p$ will clearly work in (3.6). For the argument that follows, see Figure 3.1.

Figure 3.1: Producing a good random walk trajectory in $T_k \times H$.

Pick any vertex $h' \in H \setminus \{h_{out}, h_a\}$. By the 2-connectedness of $H$, we can pick a path $\pi_{in}$ in $\gamma_3 \times H$ between $(\gamma_3, h_{in})$ and $(\gamma_3, h')$ that avoids $(\gamma_3, h_{out})$, a path $\pi_{out}$ in $\gamma_2 \times H$ between $(\gamma_2, h_{out})$ and $(\gamma_2, h_a)$ that avoids $(\gamma_2, h')$, and a path $\pi_h$ in $\gamma_1 \times H$ between $(\gamma_1, h)$ and $(\gamma_1, h')$ that avoids $(\gamma_1, h_a)$. Then $(X_t)_{t=0}^{\alpha}$ can go from $(\gamma_0, h_a)$ straight to $(\gamma_2, h_a)$, then to $(\gamma_2, h_{out})$ via $\pi_{out}$, then straight to $(\gamma_3, h_{out})$, and $(X_t)_{t=\beta}^{\tau_o^+}$ can go from $(\gamma_3, h_{in})$ via $\pi_{in}$ and $\pi_h$ to $(\gamma_0, h)$. All of this happens with probability at least $(d+k)^{-4|H|-4}$, which proves (3.6). (Note that we needed the extra vertex $h'$ and the four layers $\gamma_0, \ldots, \gamma_3$ for this construction because it might happen that $h = h_{out}$; otherwise, taking $h' := h$ and removing the $\gamma_1$ layer could have worked.)

We are now ready for the second moment method. Let $W_n$ be the set of leaves $z \in S_n$ that satisfy $C_z(h_b)$, with the desired endpoint $b = (o, h_b)$. We will run a second moment argument, as in Section 2, to show that $W_n$ is non-empty with a positive probability, uniformly in $n$.

First note that (3.5) and (3.6) imply
$$q(n) := P\big( C_z(h_b) \,\big|\, B_z \big) > c\, (bg/2)^n, \qquad (3.7)$$
where $c$ depends on $H$ and $k$, but not on $n$. Together with (2.5), we have
$$E|W_n| \asymp_{k,H} (k-1)^n\, p_k^n\, q(n).$$
(3.8)

Using (2.4) and (3.7), this tends to infinity as $n \to \infty$ for $k$ large enough.

To estimate the second moment $E\big(|W_n|^2\big)$, let $z, v \in S_n$ be leaves such that their last common ancestor is $w \in S_m$, with $m \ge$
1. We claim that
$$P\big( C_z(h_b) \cap C_v(h_b) \,\big|\, B_z \cap B_v \big) \le Q\, q(n)\, q(n-m), \qquad (3.9)$$
with some $Q < \infty$ that depends only on $H$ and $k$, but not on $n$.

By symmetry, we may assume $\tau_z < \tau_v$. We first show that
$$P\big( C_z(h_b) \,\big|\, B_z \cap B_v \cap \{\tau_z < \tau_v\} \big) \le C_{k,H}\, P\big( C_z(h_b) \,\big|\, B_z \big). \qquad (3.10)$$
We do this by coupling (with a uniformly positive probability) the trajectory $(X_t)$ conditioned on $B_z$ to be identical to the trajectory conditioned on $B_z \cap B_v \cap \{\tau_z < \tau_v\}$, denoted by $(\tilde X_t)$, within the ray $\gamma \times H$ (which leads from $o$ to $z$), except for a bounded neighborhood of $w = \gamma_m$. Given $B_z$, we know from (3.4) that $H(A_m) \cap H(B_m) = \emptyset$ occurs with a uniformly positive probability, say $c >$
0. Conditioning on $B_z \cap \{ H(A_m) \cap H(B_m) = \emptyset \}$ gives a certain distribution to the pairs of vertices $\big( X_{\alpha_{m-1}}, X_{\beta_{m-1}} \big)$ and $\big( X_{\alpha_m}, X_{\beta_m} \big)$, which are basically the places where the trajectory leaves $w \times H$. On the other hand, conditioning on $B_z \cap B_v \cap \{\tau_z < \tau_v\}$, we get some distribution on $(\tilde X_t)_{t=\alpha_{m-3}}^{\alpha_{m+3}}$ and $(\tilde X_t)_{t=\beta_{m+3}}^{\beta_{m-3}}$. Whenever these pieces of $(\tilde X_t)$-trajectories satisfy $H(A_i) \cap H(B_i) = \emptyset$ for $i = m-3, \ldots, m+3$ (so that $(\tilde X_t)$ still has a chance to satisfy $C_z(h_b)$), the argument of Figure 3.1 gives that, conditioned on these trajectory pieces, with a probability at least $c' > 0$ depending only on $k$ and $H$, we have that $(X_t)_{t=\alpha_{m-3}}^{\alpha_{m+3}}$ and $(X_t)_{t=\beta_{m+3}}^{\beta_{m-3}}$ satisfy $\big( X_{\alpha_{m+3}}, X_{\beta_{m+3}} \big) = \big( \tilde X_{\alpha_{m+3}}, \tilde X_{\beta_{m+3}} \big)$ and $\big( X_{\alpha_{m-3}}, X_{\beta_{m-3}} \big) = \big( \tilde X_{\alpha_{m-3}}, \tilde X_{\beta_{m-3}} \big)$. Conditioned on these equalities, we can couple the trajectories $(X_t)_{t=0}^{\alpha_{m-3}}$, $(X_t)_{t=\alpha_{m+3}}^{\beta_{m+3}}$, and $(X_t)_{t=\beta_{m-3}}^{\tau_o^+}$ to be equal to the tilde versions, hence if $(\tilde X_t)$ satisfies $C_z(h_b)$, so does $(X_t)$. Altogether, (3.10) follows with $C_{k,H} = 1/(c\, c')$.

Now let $H(A_m), H(B'_m), H(B''_m)$ be the sets of vertices in $w \times H$ visited before $\tau_z$, between $\tau_z$ and $\tau_v$, and after $\tau_v$, respectively; thus $H(B'_m) \cup H(B''_m) = H(B_m)$. Notice that $C_z(h_b) \cap C_v(h_b)$ implies that $H(A_m), H(B'_m), H(B''_m)$ are mutually disjoint, an event we will denote by $M_w$. Condition now, beyond $B_z \cap B_v \cap \{\tau_z < \tau_v\}$, also on the event $C_z(h_b) \cap M_w$. Let $h'$ be the vertex in $H(B'_m)$ last visited before $\tau_v$, and let $h''$ be the first vertex in $H(B''_m)$ after $\tau_v$. Since $H$ is transitive, there is an automorphism taking $h'$ to $h_a$, and $h''$ to some $h^*$.
Now, the events along the ray from $w$ to $v$ required by $C_v(h_b)$ are just the events for some length-$(n-m)$ ray, with the extra condition that the first step from $(w, h_a)$ and the last step to $(w, h^*)$ are both in the $T_k$-coordinate. Thus, using (3.6), we have
$$P\Big( C_v(h_b) \,\Big|\, C_z(h_b) \cap M_w \cap B_z \cap B_v \cap \{\tau_z < \tau_v\} \Big) < C'_{k,H}\, q(n-m). \qquad (3.11)$$
Since $M_w \supset C_z(h_b) \cap C_v(h_b)$, we can combine this with
$$P\big( C_z(h_b) \cap M_w \,\big|\, B_z \cap B_v \cap \{\tau_z < \tau_v\} \big) \le C_{k,H}\, q(n),$$
which we get from (3.10), and we arrive at (3.9).

From (3.9) and (2.7), similarly to (2.8), we have
$$E|W_n|^2 \le Q' \sum_{m=1}^{n} k (k-1)^{m-1} \big( (k-1)^{n-m} \big)^2\, p_k^{2n-m}\, q(n)\, q(n-m). \qquad (3.12)$$
For the Cauchy--Schwarz second moment method, we want that $E|W_n|^2 < Q'' \big( E|W_n| \big)^2$, for some $Q'' < \infty$ that does not depend on $n$. Substituting (3.8) and (3.12) into this inequality, then rearranging, we arrive at the following inequality to prove:
$$\sum_{m=1}^{n} \big( (k-1)\, p_k \big)^{-m}\, q(n-m) \overset{?}{<} Q'''\, q(n). \qquad (3.13)$$
The final ingredient is that, writing $y$ for the vertex $\gamma_m$ on the ray from $o$ to $z$, and writing $B^y_z$ and $C^y_z$ for the analogs of the events $B_z = B^o_z$ and $C_z = C^o_z$ when the root is $y$ instead of $o$,
$$\frac{q(n)}{q(n-m)} = \frac{P\big( C^o_z(h_b) \,\big|\, B^o_z \big)}{P\big( C^y_z(h_b) \,\big|\, B^y_z \big)} \asymp_{k,H} \frac{P\big( H(A_i) \cap H(B_i) = \emptyset \text{ for } i = 1, 2, \ldots, n-1 \,\big|\, B^o_z \big)}{P\big( H(A_i) \cap H(B_i) = \emptyset \text{ for } i = m+1, m+2, \ldots, n-1 \,\big|\, B^y_z \big)}$$
$$\asymp_{k,H} \frac{P\big( H(A_i) \cap H(B_i) = \emptyset \text{ for } i = 1, 2, \ldots, n-1 \,\big|\, B^o_z \big)}{P\big( H(A_i) \cap H(B_i) = \emptyset \text{ for } i = m+1, m+2, \ldots, n-1 \,\big|\, B^o_z \big)} = P\big( C^o_z \,\big|\, B^o_z,\ H(A_i) \cap H(B_i) = \emptyset \text{ for } i = m+1, m+2, \ldots
, n-1 \big) > (bg/2)^m$, where the first $\asymp$ is by (3.6); the second $\asymp$ is by a coupling argument similar to the one that gave (3.10), now doing the coupling in $\{\gamma_m, \ldots, \gamma_{m+3}\} \times H$; and the inequality in the last line follows from (3.4). Plugging this into (3.13), we arrive at
$$\sum_{m=1}^{n} \big( (k-1)\, p_k \big)^{-m}\, \big( 2/(bg) \big)^m \overset{?}{<} Q'''',$$
which is true if $k$ is large enough, since (2.4) tells us that $(k-1)\, p_k \to \infty$ as $k \to \infty$. This finishes the proof of the disconnectedness part of Theorem 1.1.

For the first direct proof of having infinitely many trees almost surely, pick an infinite ray $o_0, o_1, \ldots$ in $T_k$, pick any $h_0 \in H$, and let $a_i := (o_i, h_0)$. Our exhaustion $G_n = T_n \times H$ contains $a_0, \ldots, a_n$. Perform Wilson's algorithm in $G_n$ as follows.

First run a LERW from $a_1$ to $a_0$, denoted by $\ell_1$. By a small modification of our previous proof, with a positive probability that depends only on $H$ and $k$, this $\ell_1$ will first enter the subtree (times $H$) that starts at $o_1$ and does not contain $o_0$ or $o_2$, then will hit the boundary $S_n \times H$, then hits $o_1 \times H$ at a vertex $b_1 = (o_1, h_1)$ different from $a_1$, then goes straight to $(o_0, h_1)$, then hits $a_0 = (o_0, h_0)$ without leaving $o_0 \times H$. Without conditioning on this good event, denoted by $G_1$ hereafter, the $T_k$-coordinate of the random walk that gives $\ell_1$, viewed only at the times when it moves on the ray $o_0, \ldots, o_n$, performs a simple random walk on this segment until $\tau_{o_0}^+$. The maximum $j$ for which $o_j$ is touched by the projection is stochastically dominated by the maximum of a one-dimensional random walk excursion, which is almost surely finite, since the walk is recurrent. The maximum $j$ for which $o_j \times H$ is touched by $\ell_1$, denoted by $j_1$, is even smaller. Let $b_1 = (o_{j_1}, h_1)$ be the last vertex in $o_{j_1} \times H$ touched by $\ell_1$.
Note that this definition of $b_1$ extends our previous one that we made under $G_1$.

Next, run a LERW from $a_{j_1+1}$ to $b_1$, denoted by $\ell_2$, which, with a positive probability that depends only on $H$ and $k$, will enter the subtree (times $H$) that starts at $o_{j_1+1}$ and does not contain $o_{j_1}$ or $o_{j_1+2}$, then will hit the boundary of $T_n \times H$, then hits $o_{j_1+1} \times H$ at a vertex $b_2 = (o_{j_1+1}, h_2)$ different from $a_{j_1+1}$, then goes straight to $(o_{j_1}, h_2)$, then hits $b_1$ without leaving $o_{j_1} \times H$. Without conditioning on this good event, denoted by $G_2$, the maximum $j$ for which $o_j \times H$ is touched by $\ell_2$, denoted by $j_2$, has the property that $j_2 - j_1$ is stochastically dominated by the maximum of a one-dimensional simple random walk excursion. Let $b_2 = (o_{j_2}, h_2)$ be the last vertex in $o_{j_2} \times H$ touched by $\ell_2$, extending the definition that we made under $G_2$.

Iterate this procedure until we have reached $o_n \times H$, producing the LERW paths $\ell_1, \ldots, \ell_{I_n}$. Since the distribution of $j_{i+1} - j_i$ is always stochastically dominated by the maximum of a one-dimensional simple random walk excursion, the variable $I_n$ tends to infinity in probability, as $n \to \infty$. Each $\ell_i$, independently of the previous ones, satisfies $G_i$ with a positive probability that depends only on $H$ and $k$. Thus, the number of events $G_i$ satisfied also tends to infinity in probability. This shows that the number of trees in the weak limit is almost surely infinite.

Proof of Theorem 1.2.
The natural free generating set in each coordinate of the product, together with their inverses, gives a tree $T_{2k}$ in the $F_k$ coordinate and a cycle in the $H = \mathbb{Z}_{k^7}$ coordinate (every edge that appears does so in both orientations, so, as usual, we consider them to be unoriented single edges). If $k$ is large enough, then Theorem 1.1 tells us that the FUSF has infinitely many trees almost surely.

The second Cayley graph will also be a direct product graph: we again take free generators for $F_k$ with their inverses, while in the other coordinate we take all the elements of $H = \mathbb{Z}_{k^7}$ except for the identity. This gives the Cayley graph $T_{2k} \times K_{k^7}$, where $K_N$ is the complete graph on $N$ vertices, with a single unoriented edge between any pair of vertices.

We will show that, for the LERW from $a = (o, h_a)$ to $b = (o, h_b)$, with $h_a \ne h_b \in H$, the probability that the LERW is not contained in $T_r \times H$ is exponentially small in $r$, if $k$ is large enough. (As before, $T_r$ is the ball of radius $r$ in $T_{2k}$.) This of course implies the theorem.

For any $v \ne o \in F_k$, let $\tau_v$ be, as before, the first time when the random walk $(X_t)_{t \ge 0}$ from $a$ to $b$ hits the bag $\{v\} \times H$ (possibly infinite). Let $\mathrm{LERW}_t$ be the loop-erasure of $(X_s)_{s=0}^t$, and let $L_t(v)$ be the component of $\mathrm{LERW}_t \cap (\{v\} \times H)$ that contains $X_{\tau_v}$, whenever $\tau_v < \infty$. Furthermore, let $\beta_v$ be the last time that $(X_t)_{t \ge 0}$ enters $\{v\} \times H$ before hitting $b$ (i.e., the last $t$ such that $X_t \in \{v\} \times H$ but $X_{t-1} \notin \{v\} \times H$). Possibly $\beta_v = \tau_v$.

Our goal is to show that
$$P\Big( (X_t)_{t=\beta_v}^{\tau_b} \cap L_{\beta_v}(v) = \emptyset \,\Big|\, G_v \Big) < \frac{1}{2k} \qquad (4.1)$$
if $k$ is large enough, where $G_v$ is similar to the sigma-algebra used in (3.3), but is slightly larger now: it is generated by all the random walk steps up to time $\tau_v$, together with all the later moves outside $\{v\} \times H$, but these, as before, without timestamps, i.e., without the information how much time is spent in $\{v\} \times H$ between the trajectory pieces outside.
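The chronological loop-erasure that defines $\mathrm{LERW}_t$ above can be sketched in a few lines. This is our own illustration (the function name `loop_erase` is not from the paper): scan the walk in time order and, whenever a vertex is revisited, erase the loop created since its first visit.

```python
def loop_erase(path):
    """Chronological loop-erasure: scan the walk's path in order;
    whenever a vertex is revisited, erase the loop created since its
    first visit. Returns the resulting self-avoiding path."""
    erased = []    # current loop-erased path
    position = {}  # vertex -> its index in `erased`
    for v in path:
        if v in position:
            # v closes a loop: cut the path back to v's earlier visit
            for u in erased[position[v] + 1:]:
                del position[u]
            del erased[position[v] + 1:]
        else:
            position[v] = len(erased)
            erased.append(v)
    return erased

# Example: the walk 0,1,2,1,3,0,4 loop-erases to 0,4
print(loop_erase([0, 1, 2, 1, 3, 0, 4]))  # [0, 4]
```

Running the erasure at each time $t$ on the prefix $(X_s)_{s=0}^t$ is exactly how the processes $\mathrm{LERW}_t$ and $|L_t(v)|$ studied below evolve.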
(For instance, $X_{\beta_v}$ is $G_v$-measurable, while $\beta_v$ is neither measurable nor independent.) If the intersection in (4.1) is nonempty, then the LERW from $a$ to $b$ does not go beyond $v \times H$ (i.e., into any bag that is not in the same component of $(T_{2k} \setminus \{v\}) \times H$ as $\{o\} \times H$). This easily implies the exponential decay claimed above, as follows. In order for the LERW from $a$ to $b$ not to be contained in $T_r \times H$, there must exist a ray of bags $\{\gamma_0\} \times H, \ldots, \{\gamma_{r+1}\} \times H$, with $\gamma_0 = o$, so that the event of (4.1) occurs for each $v = \gamma_i$. We can bound the probability of this event from above by iteratively conditioning on everything up to $\tau_{\gamma_i}$, and using a new factor of $1/(2k)$ given by (4.1), getting altogether that the probability of this event is less than $(2k)^{-r-1}$. Since the number of possible rays of length $r+1$ is at most $(2k)(2k-1)^r$, a union bound gives the exponentially small upper bound $\big( 1 - 1/(2k) \big)^r$, as desired.

Turning to the proof of (4.1), we will need a small Markov chain mixing time lemma to have control on the process $|L_t(v)|_{t \ge \tau_v}$. For basic definitions, such as the total variation distance $d_{TV}$, see [LPW17].

Lemma 4.1.
Let $(X_t)_{t \ge 0}$ be simple random walk on the complete graph $K^\circ_n$ with loops; that is, each step of the walk is just a new independent vertex distributed as $\mathrm{Unif}\{1, \ldots, n\}$. Now let $(L_t)_{t \ge 0}$ be the Markov chain on $\{1, \ldots, n\}$ where $L_t$ is the size of the loop-erased version of the path $(X_s)_{s=0}^t$. Then the following are true.

(1) The transition probabilities for $(L_t)_{t \ge 0}$ are $p(i,j) = 1/n$ for all $j \in \{1, \ldots, i\}$, and $p(i, i+1) = (n-i)/n$. The unique stationary distribution of the chain satisfies $\pi(i) \le i/n$.

(2) The total variation mixing time of the chain is $O(\sqrt{n})$; in fact, $d_{TV}(\mu_t, \pi) < \exp\big( -t^2/(2n) \big)$ for every $t$, where $\mu_t$ is the distribution of $L_t$ started from any given state.

Proof. (1)
At time $t$, if the next step $X_{t+1}$ is to the $j$th vertex on the current loop-erased path, then $L_{t+1} = j$; if $X_{t+1}$ is to a vertex not currently on the path, then $L_{t+1} = L_t + 1$. The transition probabilities follow. This chain is clearly irreducible and aperiodic, hence it has a unique stationary distribution $\pi$, which satisfies the equation
$$\pi(i+1) = \frac{n-i}{n}\, \pi(i) + \frac{1}{n} \sum_{k=i+1}^{n} \pi(k) \le \pi(i) + \frac{1}{n}.$$
The inequality $\pi(i) \le i/n$ follows by induction on $i$.

(2) We will bound the mixing time by a standard coupling argument: if $(L_t, \tilde L_t)_{t \ge 0}$ is any coupling of two copies of the Markov chain, one with $L_0 = i$, the other with $\tilde L_0 = j$, and $\tau_{\mathrm{coupling}}$ is the first time when $L_t = \tilde L_t$, then [LPW17, Corollary 5.5] says that
$$d_{TV}(\mu_t, \pi) \le \max_{i,j \in K^\circ_n} P\big( \tau_{\mathrm{coupling}} > t \big). \qquad (4.2)$$
Our coupling will be a monotone one: we assume $i < j$, then will maintain $L_t \le \tilde L_t$ for all $t \ge$
0. Take i.i.d. random variables $U_t \sim \mathrm{Unif}\{1, \ldots, n\}$ for $t >$
0. Given already $(L_s, \tilde L_s)_{s=0}^{t}$, we generate $(L_{t+1}, \tilde L_{t+1})$ as follows. If $U_{t+1} \le L_t$, then let $L_{t+1} := U_{t+1}$; if $U_{t+1} > L_t$, then let $L_{t+1} := L_t + 1$. We make exactly the same definitions for $\tilde L_{t+1}$, using the same variable $U_{t+1}$ as for $L_{t+1}$. This is clearly a monotone coupling of two copies of the chain, and it has the property that $\tau_{\mathrm{coupling}} = \inf\{ t+1 : U_{t+1} \le L_t + 1 \}$. (If $U_{t+1} \le L_t < \tilde L_t$, then both chains are in the first case of the definition; if $L_t + 1 = U_{t+1} \le \tilde L_t$, then $L_{t+1}$ is in the second case, $\tilde L_{t+1}$ is in the first case, but nevertheless they have become equal.) Therefore,
$$P\big( \tau_{\mathrm{coupling}} > t \big) = \prod_{s=1}^{t} \Big( 1 - \frac{i+s}{n} \Big) < \prod_{s=1}^{t} \Big( 1 - \frac{s}{n} \Big) < \exp\Big( -\sum_{s=1}^{t} \frac{s}{n} \Big) < \exp\Big( -\frac{t^2}{2n} \Big).$$
This is true for any pair of starting states $1 \le i < j \le n$, hence the result follows by (4.2).

The value of $L_t$ in this chain depends only on the non-lazy steps. Therefore, if we take a version $L^\gamma_t$ of the chain where $(X_t)$ has laziness $P(X_{t+1} = i \mid X_t = i) = \gamma$, and the non-lazy steps are uniform, $P(X_{t+1} = j \mid X_t = i) = (1-\gamma)/(n-$
1) for all $j \ne i$, then it will have the same stationary distribution regardless of $\gamma$, and the mixing time is easy to bound. In particular, if $\beta$ is a random time, independent of the non-lazy steps (it may depend on their total number, though), such that the number of non-lazy steps in $(L^\gamma_s)_{s=0}^{\beta}$ stochastically dominates the number of non-lazy steps in $(L_s)_{s=0}^{t}$, then the bound
$$d_{TV}\big( \mu^\gamma_\beta, \pi \big) \le \sup_{s \ge t} d_{TV}(\mu_s, \pi) < \exp\Big( -\frac{t^2}{2n} \Big) \qquad (4.3)$$
holds, where $\mu^\gamma_t$ is the distribution of $L^\gamma_t$.

Now observe that the process $|L_t(v)|_{t \ge \tau_v}$ evolves exactly like the process $(L^\gamma_t)_{t \ge 0}$ for a certain laziness parameter $\gamma$, except that time should be paused when $X_t$ is not in $\{v\} \times H$. That is, if we let $\nu(t)$ denote the number of times that $(X_s)_{s=0}^{t-1}$ is in $\{v\} \times H$, and $\nu^{-1}(t) := \sup\{ s : \nu(s) = t \}$, then $\nu^{-1}(0) = \tau_v$, and
$$|L_{\nu^{-1}(t)}(v)|_{t \ge 0} \overset{d}{=} (L^\gamma_t)_{t \ge 0}, \qquad (4.4)$$
for some $\gamma > 0$ that depends only on $k$ and $|H|$, and which we will fix from now on. This is clear from the facts that there is always an independent $2k/(2k + |H| -$
1) probability of leaving $\{v\} \times H$, and whenever this happens, then the next time we are in $\{v\} \times H$ again, we will be with a certain probability at the vertex where we left (a value we could compute but do not need), and with equal probabilities at all the other vertices.

Now we want to argue that $L_{\beta_v}(v)$ is large enough (with high probability), so that $(X_t)_{t \ge \beta_v}$ will hit it (again with high probability) before leaving the bag $\{v\} \times H$. All of this will be done under the conditioning on $G_v$, hence we will also need some Bayesian factors as in (3.1).

The number of non-lazy steps of $|L_{\nu^{-1}(t)}(v)|_{0 \le t \le \nu(\beta_v)}$, which we will denote by $\tilde\nu(\beta_v)$, stochastically dominates $N_v$, the number of steps that $(X_t)$ makes after $\tau_v$ until leaving the bag $\{v\} \times H$ for the first time. The conditioning on $G_v$ tells us the vertex $(v, h_{out})$ where we are leaving, and the vertex $(w, h_{out})$ where we are leaving to. Let $E$ be the event that, for an unconditioned simple random walk started somewhere in the bag $\{v\} \times H$, the first vertex we hit outside the bag is $(w, h_{out})$. For any $h \ne h_{out} \in H$, we have, by the symmetries in $H$:
$$P_{(v,h)}\big( E \big) = \frac{1}{2k + |H| - 1}\, P_{(v, h_{out})}\big( E \big) + \frac{|H| - 2}{2k + |H| - 1}\, P_{(v,h)}\big( E \big), \quad \text{hence} \quad P_{(v, h_{out})}\big( E \big) = (2k+1)\, P_{(v,h)}\big( E \big).$$
With these Bayesian factors, the probability of moving to $(v, h_{out})$ is $(2k+1)/(2k + |H| -$
1) in each step; hence the time of reaching $(v, h_{out})$, which is stochastically dominated by $N_v$, has distribution $\mathrm{Geom}\big( (2k+1)/(2k + |H| - 1) \big)$. Thus, for any $a > 0$,
$$P\Big( \tilde\nu(\beta_v) < k^a \,\Big|\, G_v \Big) \le P\Big( \mathrm{Geom}\Big( \frac{2k+1}{2k + |H| - 1} \Big) < k^a \Big) < \frac{3 k^{a+1}}{|H|}. \qquad (4.5)$$
Furthermore, for any integer $m$, the number of non-lazy steps of $(L_t)_{0 \le t \le m}$ is of course at most $m$. Therefore, using (4.5), (4.4), (4.3), and Lemma 4.1, we have, for any $b > 0$,
$$P\Big( |L_{\beta_v}| \le k^b \,\Big|\, G_v \Big) \le P\big( L^\gamma_{\nu(\beta_v)} \le k^b \,\big|\, \tilde\nu(\beta_v) \ge k^a \big) + \frac{3 k^{a+1}}{|H|} \le \pi\big( \{1, \ldots, k^b\} \big) + \sup_{t \ge k^a} d_{TV}(\pi, \mu_t) + \frac{3 k^{a+1}}{|H|} \le \frac{k^b (k^b + 1)}{2|H|} + \exp\Big( -\frac{k^{2a}}{2|H|} \Big) + \frac{3 k^{a+1}}{|H|}. \qquad (4.6)$$
Finally, if $|L_{\beta_v}| > k^b$, then $(X_t)_{t=\beta_v}^{\tau_b}$ will hit it with large probability, whatever $X_{\beta_v}$ is: at each step from $\beta_v$ on inside the bag $\{v\} \times H$, there is one edge leading to the vertex where $G_v$ tells us to leave the bag, with a Bayesian factor $2k+1$ as before, and at least $k^b$ edges to $L_{\beta_v}$, each with a Bayesian factor 1. Hence the conditional probability that we leave $\{v\} \times H$ without intersecting $L_{\beta_v}$ is at most $(2k+1)/k^b$. That is,
$$P\Big( (X_t)_{t=\beta_v}^{\tau_b} \cap L_{\beta_v}(v) = \emptyset \,\Big|\, G_v \Big) < P\Big( |L_{\beta_v}| \le k^b \,\Big|\, G_v \Big) + \frac{2k+1}{k^b}. \qquad (4.7)$$
With $b = 3$, $a = 5$, $|H| = k^7$, all terms in (4.6) and (4.7) are $O(1/k)$, so, if $k$ is large enough, then the bound of (4.1) follows.

In place of $T_k$, we will consider a transitive non-unimodular graph which we call the $k$-ary pyramid graph $\mathrm{Py}_k$. One pyramid is just a cycle $C_4$ with an extra vertex, the apex, connected to every vertex of this cycle. Now we take the tree $T_{4k+1}$, and orient all of its edges towards a fixed end of the tree.
For each vertex, divide the $4k$ incoming edges into 4-tuples, and connect the tails of the edges in each 4-tuple with a $C_4$. The resulting graph is $\mathrm{Py}_k$, which can also be considered as glued together from pyramids. See Figure 5.1. Then our example will be $G = \mathrm{Py}_k \times H$ for a large enough finite transitive graph $H$. This is obviously a transitive non-unimodular graph: if $\Gamma$ is the full automorphism group of $G$, and $(x, y)$ is an edge of $\mathrm{Py}_k$ where $x$ is the apex of a pyramid and $y$ is in the base, then $|\Gamma_{(x,h)} (y,h)| = 4k$, while $|\Gamma_{(y,h)} (x,h)| = 1$, for any $h \in H$.

Proposition 5.1 (Disconnected nonunimodular
FUSF). For any $d \ge 2$, if $k$ is large enough, and $H$ is a connected finite $d$-regular transitive graph on at least $k^{5/2}$ vertices, then the FUSF on $\mathrm{Py}_k \times H$ a.s. has infinitely many components.

Proof.
Fix a geodesic ray $\gamma = \gamma(z) = \{ o = \gamma_0, \gamma_1, \ldots, \gamma_n = z \}$ from $o$ to $z \in S_n$ as before, let $\Delta_{i,0}$ be the pyramid containing both $\gamma_i$ and $\gamma_{i+1}$, and let $\Delta_{i,j}$, $j = 1, \ldots, k-$
1, be the other pyramids with their apices at $\gamma_i$. Letting $(Y_t)_{t \ge 0}$ be simple random walk on $\mathrm{Py}_k$, started at $o$, the version of the event $B_z$ from (2.2) will be as follows.
$$A_z := \Big\{ \text{the edge } (\gamma_{i-1}, \gamma_i) \text{ is crossed exactly twice by } (Y_t)_{t=0}^{\tau_o^+}, \text{ and no other edge of } \Delta_{i-1,0} \text{ is crossed, for all } i = 1, 2, \ldots, n \Big\},$$
$$L_z := \Big\{ \big| \big\{ t \in \{0, \ldots, \tau_z\} : Y_t = \gamma_i \big\} \big| \le k/2 \text{ for all } i = 1, 2, \ldots, n \Big\}. \qquad (5.1)$$

Figure 5.1: A random walk excursion in the pyramid graph $\mathrm{Py}_k$ that satisfies the good event $B_z$. The red solid parts are on the way to $z$, the blue dashed ones are on the way back.

Furthermore, let $E_i$ and $F_i$ be the set of pyramid bases $\Delta_{i,j} \setminus \{\gamma_i\}$ visited by $(Y_t)_{t=0}^{\tau_z}$ and by $(Y_t)_{t=\tau_z}^{\tau_o^+}$, respectively, among $j = 1, \ldots, k-$
1, and define the event
$$B_z := A_z \cap L_z \cap \big\{ E_i \cap F_i = \emptyset \text{ for all } i = 1, 2, \ldots, n \big\}. \qquad (5.2)$$
With these definitions, the proofs of Sections 2, 3 go through almost verbatim, with three minor differences. The first one is that all the edges of the pyramids $\Delta_{i,0}$ except for $(\gamma_i, \gamma_{i+1})$ are forbidden for the random walk, which changes some probabilities by a uniform constant factor, on each level $i$. The second difference is that the graph of pyramids now has a tree structure (and thus, e.g., the self-avoidance condition $E_i \cap F_i = \emptyset$ is defined via pyramids), but the walk still chooses edges, not pyramids. Again, this can change probabilities only by uniform constant factors. For instance, in place of (2.3) and (2.4), we have
$$P\big( \mathrm{There}_i \text{ after } j \text{ excursions} \big) = \Big( 1 - \frac{2}{k} \Big)^j \frac{2}{k+1} \qquad (5.3)$$
and
$$p_k := P\big( \mathrm{There}_i,\ \mathrm{Back}_i, \text{ and } E_i \cap F_i = \emptyset \big) \ge \sum_{j=0}^{\lfloor k/2 \rfloor} \Big( 1 - \frac{2}{k} \Big)^j \frac{2}{k+1} \cdot \frac{1}{4j+2} \asymp \frac{\log k}{k}, \qquad (5.4)$$
and everything works just as before.

The last minor difference is in the direct proof of having infinitely many trees almost surely, at the end of Section 3. In the non-unimodular $\mathrm{Py}_k$, not all rays $o_0, o_1, \ldots$ are the same; pick one tending to the distinguished end. Then, it is not obvious that the $\mathrm{Py}_k$-coordinate of the simple random walk, viewed only at the times when it moves on this ray, is a one-dimensional symmetric walk. But it is, since the effective conductance between the cutpoints $o_i$ and $o_{i+1}$ is obviously equal to the effective conductance between $o_i$ and $o_{i-1}$, and hence, by a standard correspondence between hitting probabilities and electric networks [LP16, Chapter 2], we have $P_{o_i}\big( \tau_{o_{i-1}} < \tau_{o_{i+1}} \big) = 1/2$.

We now study the FUSF in the context of tree-like graphs that may also be nonunimodular. The next claim does not include any randomness, and for unimodular transitive graphs it is a tautology.
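The pyramid graph used throughout this section is easy to build concretely. A minimal sketch (our own illustration, not from the paper): generate a finite ball of $\mathrm{Py}_k$ and check that every interior vertex has degree $4k+3$ (one edge to its apex, two $C_4$-neighbours in its own base, and $4k$ edges to its children), while the root of the truncation, lacking a parent and a base cycle, exhibits the $4k$-children-versus-one-parent asymmetry behind nonunimodularity.

```python
from collections import defaultdict

def pyramid_ball(k, depth):
    """Finite piece of Py_k: vertices are tuples encoding the path
    from the root; each vertex gets 4k children, grouped into k
    4-tuples, the 4 tails of each tuple joined in a 4-cycle C_4 and
    all connected to their apex (the parent vertex)."""
    adj = defaultdict(set)

    def add_edge(u, v):
        adj[u].add(v)
        adj[v].add(u)

    frontier = [()]
    for _ in range(depth):
        next_frontier = []
        for v in frontier:
            for tup in range(k):
                cycle = [v + ((tup, pos),) for pos in range(4)]
                for c in cycle:
                    add_edge(v, c)                    # apex edges
                for i in range(4):                    # base 4-cycle
                    add_edge(cycle[i], cycle[(i + 1) % 4])
                next_frontier.extend(cycle)
        frontier = next_frontier
    return adj

k = 2
adj = pyramid_ball(k, depth=3)
# interior = not the root and not on the last (leaf) level
interior = [v for v in adj if 0 < len(v) < 3]
assert all(len(adj[v]) == 4 * k + 3 for v in interior)
print("interior degree:", 4 * k + 3)
```

The root `()` has degree $4k$ only, since its own parent and base cycle lie outside the truncation; a transitive infinite $\mathrm{Py}_k$ is obtained by continuing the construction upward toward the fixed end.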
Proposition 5.2 (Infinite weight sums). Let $G$ be a transitive graph, $x \in V(G)$ fixed, and $S$ a finite set of vertices such that every component of $G \setminus S$ is infinite. Denote by $\Gamma$ the automorphism group of $G$, and by $\Gamma_y$ the stabilizer of a vertex $y$. Then $\sum_{y \in C} \frac{|\Gamma_y x|}{|\Gamma_x y|}$ is infinite for every component $C$ of $G \setminus S$.

Proof.
Let $M$ be the maximal value of $\frac{|\Gamma_x y|}{|\Gamma_y x|}$ attained over neighbors $y$ of $x$. For a vertex $v$, let $N(v)$ be the set of all neighbors $y$ of $v$ with $\frac{|\Gamma_v y|}{|\Gamma_y v|} = M$. Note that if $y \in N(v)$ then $\gamma(y) \in N(\gamma(v))$ for every $\gamma \in \Gamma$. Define recursively $N_i(x) = \bigcup_{y \in N_{i-1}(x)} N(y)$ for $i = 2, 3, \ldots$. We have $|N_1(x)| \ge M$, because $\Gamma_x y \subset N_1(x)$ and $|\Gamma_y x| \ge 1$. We will see next that $|N_i(x)| \ge M^i$.

Choose an arbitrary $y \in N_i(x)$. Pick an arbitrary $\gamma \in \Gamma_x$, and fix a sequence $x_j \in N(x_{j-1})$ for $j = 1, \ldots, i$ with $x_i = y$ and $x_0 := x$. We will prove by induction that $\gamma(x_j) \in N_j(x)$. For $j = 1$ we have seen this. Then, $x_j \in N(x_{j-1})$ implies $\gamma(x_j) \in N(\gamma(x_{j-1}))$, and we know $N(\gamma(x_{j-1})) \subset N(N_{j-1}(x)) = N_j(x)$ from the induction hypothesis. This completes the proof of $\Gamma_x y \subset N_i(x)$. Apply the cocycle identity $\frac{|\Gamma_u v|}{|\Gamma_v u|} \frac{|\Gamma_v w|}{|\Gamma_w v|} = \frac{|\Gamma_u w|}{|\Gamma_w u|}$ to obtain
$$|N_i(x)| \ge |\Gamma_x y| = \frac{|\Gamma_x x_1|}{|\Gamma_{x_1} x|} \cdots \frac{|\Gamma_{x_{i-1}} x_i|}{|\Gamma_{x_i} x_{i-1}|}\, |\Gamma_y x| \ge M^i,$$
as claimed.

For $v \in V(G)$, let $N_\infty(v) := \bigcup_{i=1}^{\infty} N_i(v)$, and let $m := \min\Big\{ \frac{|\Gamma_s x|}{|\Gamma_x s|} : s \in S \Big\}$. If there exists a $v \in C$ such that $\frac{|\Gamma_v x|}{|\Gamma_x v|} < m$, then $N_\infty(v) \cap S = \emptyset$ (using the simple observation that $\frac{|\Gamma_{v'} x|}{|\Gamma_x v'|} = \frac{|\Gamma_{v'} v|}{|\Gamma_v v'|} \frac{|\Gamma_v x|}{|\Gamma_x v|} = M^{-i} \frac{|\Gamma_v x|}{|\Gamma_x v|} < m$ for every $v' \in N_\infty(v)$, with some $i \ge 1$), hence $N_\infty(v) \subset C$. Then
$$\sum_{y \in C} \frac{|\Gamma_y x|}{|\Gamma_x y|} \ge \sum_{y \in N_\infty(v)} \frac{|\Gamma_y x|}{|\Gamma_x y|} = \frac{|\Gamma_v x|}{|\Gamma_x v|} \sum_{y \in N_\infty(v)} \frac{|\Gamma_y v|}{|\Gamma_v y|} = \frac{|\Gamma_v x|}{|\Gamma_x v|} \sum_{i=1}^{\infty} \sum_{y \in N_i(v)} M^{-i} = \frac{|\Gamma_v x|}{|\Gamma_x v|} \sum_{i=1}^{\infty} |N_i(v)|\, M^{-i} \ge \frac{|\Gamma_v x|}{|\Gamma_x v|} \sum_{i=1}^{\infty} 1.$$
If there were no such $v$, then one would have an infinite sum of numbers at least $m$ in $\sum_{y \in C} \frac{|\Gamma_y x|}{|\Gamma_x y|}$, leading again to the conclusion that the sum is infinite.

Say that $\mathcal{T}$ is a tree-like decomposition of a graph $G$ if $\mathcal{T}$ is a random partition of $V(G)$ into finite connected sets, called bags, together with a tree on the bags such that any edge of $G$ goes between points of adjacent bags or within the same bag.
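The definition of a tree-like decomposition is entirely combinatorial and can be checked mechanically. A minimal sketch (our own illustration; the function names are ours): verify that the bags partition the vertex set into connected sets, that the given bag-edges form a tree, and that every graph edge stays inside a bag or joins two adjacent bags.

```python
from collections import deque

def connected_in(adj, subset):
    """Is `subset` connected inside the graph `adj`?"""
    subset = set(subset)
    start = next(iter(subset))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in subset and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == subset

def is_tree_like(adj, bags, bag_tree_edges):
    """Check the definition of a tree-like decomposition of `adj`."""
    bag_of = {}
    for i, bag in enumerate(bags):          # bags partition V(G)
        for v in bag:
            if v in bag_of:
                return False                # bags overlap
            bag_of[v] = i
    if set(bag_of) != set(adj):
        return False                        # some vertex is missed
    if not all(connected_in(adj, bag) for bag in bags):
        return False                        # a bag is disconnected
    tree = {frozenset(e) for e in bag_tree_edges}
    if len(tree) != len(bags) - 1:          # a tree on the bags:
        return False                        # right edge count...
    tree_adj = {i: set() for i in range(len(bags))}
    for e in tree:
        a, b = tuple(e)
        tree_adj[a].add(b)
        tree_adj[b].add(a)
    if not connected_in(tree_adj, range(len(bags))):
        return False                        # ...and connected
    # every G-edge inside a bag or between adjacent bags
    return all(
        bag_of[u] == bag_of[v] or frozenset((bag_of[u], bag_of[v])) in tree
        for u in adj for v in adj[u]
    )

# The path 0-1-2-3-4-5, cut into three bags along a path of bags:
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(is_tree_like(path, [[0, 1], [2, 3], [4, 5]], [(0, 1), (1, 2)]))  # True
```

For the 6-cycle with the same bags and bag-tree, the check fails, since the edge between vertices 5 and 0 joins two bags that are not adjacent in the bag-tree.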
We call a tree-like decomposition invariant if its distribution is preserved by the automorphisms of $G$.

A random spanning forest $F$ of a graph $G$ was defined to be weak insertion tolerant in [Tim18] if it satisfies the following property. Fix $r > 0$ and two vertices $x$ and $y$ of $G$ connected by an edge $e$. Let $D$ be the event that $x$ and $y$ are in different components of $F$. Then one can map every configuration $\omega \in D$ to a new configuration $\omega \cup \{e\} \setminus \{f\}$, where $f$ is either the empty set or it is an edge of $F$ at distance at least $r$ from $x$, and it is determined by $\omega$ in a measurable way. The mapping just defined is measurable, and it takes events of positive probability (contained in $D$) to events of positive probability (contained in $D^c$). See [Tim18] for a more thorough definition and the proof that the FUSF and the
FMSF are weak insertion tolerant.

Proposition 5.3 (1 or $\infty$ law). Let $G$ be an infinite transitive graph that has an invariant random tree-like decomposition. Let $F$ be an invariant random forest of $G$ with only infinite components, and suppose that it is weak insertion tolerant. Then $F$ has either 1 or infinitely many components almost surely.

Proof.
Proving by contradiction, suppose that $F$ has $m$ clusters with $1 < m < \infty$. Let $\mathcal{T}$ be an invariant random tree-like decomposition of $G$. We claim that with positive probability there exists a bag $B$ and a cluster $C$ of $F$ such that $B \cap C = \emptyset$. To see this, pick a finite $B \subset G$ such that $B$ is a bag in $\mathcal{T}$ with positive probability. Condition on this event, and on that some fixed and adjacent $x$ and $y$ in $B$ are in different $F$-clusters. (We may assume that $B$ was chosen so that this event has positive probability.) Let $r > 0$ be such that, with positive conditional probability, all vertices of $B$ that are in the same $F$-cluster have distance less than $r$ in $F$. Applying weak insertion tolerance, insert the edge $\{x, y\}$ with the possible removal of an edge at distance at least $r$ from $x$. Repeating this as many times as necessary (at most $|B| - 1$ times), always with a pair of adjacent vertices of $B$, we arrive at an event of positive probability where all vertices of $B$ are in the same $F$-cluster. But then every other $F$-cluster has to be fully contained in one of the components of $G \setminus B$, and hence it cannot intersect the bags in the other components. Thus we have found some $B$ and $C$ as claimed.

For any $v \in V(G)$, if the bag $B_v$ of $v$ does not intersect some $F$-cluster $C$, then removing from $\mathcal{T}$ all the bags that intersect $C$, we get some components, exactly one of which contains $B_v$. There is a unique $\mathcal{T}$-edge from a unique bag $B^*_{v,C}$ of this component to a bag that intersects $C$. (Possibly $B^*_{v,C} = B_v$.) Now, define the following mass transport: for $v, w \in G$,
$$f(v, w, F, \mathcal{T}) := \sum_C f_C(v, w, F, \mathcal{T}), \qquad f_C(v, w, F, \mathcal{T}) := \begin{cases} 1/|B^*_{v,C}| & \text{if } w \in B^*_{v,C} \text{ for cluster } C \text{ of } F, \\ 0 & \text{otherwise.} \end{cases}$$
We will use the
Tilted Mass Transport Principle for invariant percolations on not necessarily unimodular transitive graphs [LP16, Corollary 8.8]: if $\Gamma$ is the automorphism group of $G$, then
$$\sum_{z \in V(G)} E f(v, z, F, \mathcal{T}) = \sum_{y \in V(G)} E f(y, w, F, \mathcal{T})\, \frac{|\Gamma_y w|}{|\Gamma_w y|}. \qquad (5.5)$$
The left hand side, which is the expected mass sent out, is clearly at most $m$.

To estimate the right hand side, condition on the event $A$ that a fixed set $B$ is a bag of $\mathcal{T}$ and it does not intersect some $C$ but is adjacent to a bag that intersects $C$. By the first paragraph of the proof, this has positive probability. Fix a vertex $x$ in $B$. Vertices $y$ in all but one component of $\mathcal{T} \setminus B$ have the property that $B^*_{y,C} = B$, hence they all send mass $1/|B|$ to $x$. Furthermore, every infinite component of $\mathcal{T} \setminus B$ contains an infinite component of $G \setminus B$, hence the right hand side of (5.5) can be bounded from below by $\frac{P(A)}{|B|}\, E \sum_y \frac{|\Gamma_y x|}{|\Gamma_x y|}$, where $y$ runs over the vertices in some infinite component of $G \setminus B$. (Which infinite component, that may depend on $\mathcal{T}$.) By Proposition 5.2, this sum is always infinite, leading to a contradiction with (5.5).

Once we know that the FUSF in $\mathrm{Py}_k \times H$ is disconnected with positive probability, Proposition 5.3 gives that it has infinitely many components a.s., by the ergodicity of the FUSF [LP16, Section 10.4].
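The Tilted Mass Transport Principle reappears in the next proof; here is a toy arithmetic check of how the tilt balances the books on $\mathrm{Py}_k$ (our own illustration, not from the paper): for the transport $f(v, w) = 1$ iff $w$ is the parent of $v$, each vertex sends out mass 1, while on the tilted side it receives mass 1 from its $4k$ children, each weighted by the Haar-weight ratio $\mu(\Gamma_{\text{child}})/\mu(\Gamma_{\text{parent}}) = 1/(4k)$.

```python
from fractions import Fraction

def tilted_mtp_check(k):
    """Toy check of the tilted Mass Transport Principle on Py_k for
    f(v, w) = 1 iff w is the parent of v: every vertex sends mass 1
    to its unique parent, and receives mass from its 4k children,
    each tilted by mu(Gamma_child)/mu(Gamma_parent) = 1/(4k)."""
    mass_out = Fraction(1)           # one parent per vertex
    tilt = Fraction(1, 4 * k)        # Haar-weight ratio child/parent
    mass_in_tilted = 4 * k * tilt    # 4k children, tilt factor each
    return mass_out, mass_in_tilted

out_mass, in_mass = tilted_mtp_check(k=3)
print(out_mass == in_mass == 1)  # True: the tilted sums balance
```

Without the tilt, the two sides would be $1$ versus $4k$, which is exactly the failure of the ordinary Mass Transport Principle on a nonunimodular graph.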
Proof of Theorem 1.3.
We already know that there are infinitely many trees in the FUSF, and want to show that some of them are light. The fixed end of Py_k yields a natural projection π : Py_k × H → Z, where all the preimages x ∈ π^{−1}(m) for a fixed m ∈ Z have the same Haar weight μ(Γ_x) = (4k)^{−m}.

If all the infinitely many clusters C_i in the FUSF were reaching infinitely high up in Figure 5.1, i.e., if inf π(C_i) = −∞ for each of them, then for any two C_i, C_j there would exist some γ_{m(i,j)} ∈ π^{−1}(m(i,j)) such that the infinite geodesic ray γ_{m(i,j)}, γ_{m(i,j)−1}, ... in Py_k, converging to the fixed end at −∞, has the property that both clusters intersect each {γ_{m(i,j)−t}} × H, for t = 0, 1, 2, .... However, if we take enough clusters {C_i}_{i∈I} so that |I| > |H|, and let m := min{m(i,j) : i, j ∈ I}, then γ_m is already defined for each pair, and it is actually the same vertex of Py_k, so we would need |I| disjoint clusters intersecting {γ_m} × H, a contradiction.

Thus, all but finitely many clusters C_i of the FUSF have a smallest label min π(C_i) > −∞. Let M(C_i) ⊂ G be the set of vertices achieving this minimal label; we set M(C_i) = ∅ if inf π(C_i) = −∞. Note that |M(C_i)| ≤ |H| almost surely. Now, define the following mass transport: for x, y ∈ G,
\[
f(x,y,\mathsf{FUSF}) := \begin{cases} 1 & \text{if } x, y \text{ are in the same component } C_i \text{ of } \mathsf{FUSF} \text{ and } y \in M(C_i), \\ 0 & \text{otherwise.} \end{cases}
\]
We again use the
Tilted Mass Transport Principle from [LP16, Corollary 8.8]:
\[
\sum_{y \in V(G)} \mathbf{E}\, f(x,y,\mathsf{FUSF}) \;=\; \sum_{y \in V(G)} \mathbf{E}\, f(y,x,\mathsf{FUSF})\,\frac{\mu(\Gamma_y)}{\mu(\Gamma_x)}. \tag{5.6}
\]
The left hand side is at most |H|. The right hand side, if x ∈ M(C_i) for some cluster C_i, is
\[
(4k)^{\min \pi(C_i)} \sum_{y \in C_i} \mu(\Gamma_y).
\]
By (5.6), this is finite, hence, whenever min π(C_i) > −∞, the cluster C_i is light.

The first natural question is how general the phenomena of Theorems 1.1 and 1.2 really are:
Problem 6.1. If Γ is a finitely generated treeable group with WUSF ≠ FUSF, does it always have two generating sets such that the FUSF is disconnected in the first Cayley graph, while it is connected in the second?
An affirmative answer in the connected case would of course imply β^{(2)}_1(Γ) = cost(Γ) − 1.

Problem 6.2.
Is it true that if the FUSF in some T_k × H is disconnected, then any two components touch each other only at finitely many places? Are there at least special choices for k and H for which this happens?

Problem 6.3.
For the FUSF in any T_k × H, is the union of the FUSF with an independent Bernoulli(ε) bond percolation connected, for any ε > 0? If not, is there any invariant way to make the FUSF connected by adding an edge percolation of arbitrarily small density?
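Questions like Problems 6.2 and 6.3 can at least be probed numerically: on any finite graph, Wilson's algorithm samples the uniform spanning tree exactly, so one can experiment with spanning trees of finite pieces of T_k × H. The sketch below is only our illustration, not code from the paper; the graph encoding and all function names are ours. It builds the product of a finite ball in T_k with a cycle C_ℓ and samples its UST by loop-erased random walks.

```python
import random
from collections import defaultdict

def ball_times_cycle(k, R, ell):
    """Adjacency lists of (ball of radius R in the k-regular tree T_k) x C_ell.
    Tree vertices are encoded as tuples of child indices (root = ()); ell >= 3."""
    tree_vertices = [()]
    frontier = [()]
    for _ in range(R):
        new_frontier = []
        for v in frontier:
            for c in range(k if v == () else k - 1):  # the root has k children
                w = v + (c,)
                tree_vertices.append(w)
                new_frontier.append(w)
        frontier = new_frontier
    adj = defaultdict(list)
    for v in tree_vertices:
        for j in range(ell):
            adj[(v, j)].append((v, (j + 1) % ell))  # cycle edges
            adj[(v, j)].append((v, (j - 1) % ell))
            if v != ():  # tree edge between v and its parent
                adj[(v, j)].append((v[:-1], j))
                adj[(v[:-1], j)].append((v, j))
    return dict(adj)

def wilson_ust(adj, root, rng=random):
    """Uniform spanning tree via Wilson's algorithm: run random walks from each
    vertex until they hit the current tree, loop-erased; returns a parent map."""
    parent = {root: None}
    for start in adj:
        if start in parent:
            continue
        last_exit = {}  # overwriting the last exit performs the loop erasure
        u = start
        while u not in parent:
            v = rng.choice(adj[u])
            last_exit[u] = v
            u = v
        u = start
        while u not in parent:  # retrace the loop-erased path into the tree
            parent[u] = last_exit[u]
            u = last_exit[u]
    return parent

if __name__ == "__main__":
    adj = ball_times_cycle(3, 3, 6)
    tree = wilson_ust(adj, ((), 0), rng=random.Random(1))
    print(len(adj), "vertices,", sum(p is not None for p in tree.values()), "tree edges")
```

Of course, a sample on a finite piece says nothing rigorous about the FUSF, which is defined through an exhaustion limit; such experiments can only suggest guesses for the problems above.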
As we already defined in the introduction, for any finite graph H we let disco(H) := min{k : FUSF(T_k × H) is disconnected} ∈ {3, 4, ..., ∞}. We know that disco(P) = ∞ from [Tan19], while Theorem 1.1 implies that if ℓ is large enough, then the cycle C_ℓ of length ℓ has disco(C_ℓ) < ∞.

Problem 6.4. What is the smallest ℓ for which disco(C_ℓ) < ∞? In particular, what is disco(C)?

Problem 6.5.
Are there infinitely many finite graphs H with disco(H) = ∞? The two choices for H in Theorem 1.2 inspire the following question:

Problem 6.6. If H and H′ are two finite connected graphs on the same vertex set and E(H) ⊂ E(H′), do we always have disco(H) ≤ disco(H′)?

Regarding the generality of Lemma 4.1, we have not found the following question addressed in the literature. One piece of motivation is [PR04].
Problem 6.7.
Is it true on any connected transitive graph on n vertices that the typical size of the stationary loop-erased version of a simple random walk trajectory is Ω(√n)?

Now, given how the proof of Theorem 1.1 used Lemma 3.2, and how the proof of Theorem 1.2 used Lemma 4.1, one may guess that if H has better mixing properties, then disconnection becomes easier:

Problem 6.8. If H and H′ are two connected transitive d-regular finite graphs on the same vertex set, with H′ having a larger spectral gap than H, does it follow that disco(H) ≥ disco(H′)?

The next natural player appearing on the floor is disco∗, a parameter that is dual, in some sense, to disco. Let us fix a natural sequence of finite graphs H = (H_n)_n; as the simplest case, think of the cycles H_n = C_n. Then let disco∗_H(k) := min{n : FUSF(T_k × H_n) is disconnected}.

Problem 6.9.
Consider the sequence of cycles C = (C_n)_n. Is it the case that disco∗_C(3) < ∞?

Problem 6.10.
How about monotonicity in k? That is, if FUSF(T_k × H) is disconnected, then is FUSF(T_{k+1} × H) also disconnected?

One can also define a continuous version of the graph parameter disco. Recall from [AL07] or [Pet20, Chapter 14] what unimodular random rooted graphs are. Let
\[
\widetilde{\mathsf{disco}}(H) := \inf\big\{ \kappa : \mathsf{FUSF}(T \times H) \text{ is disconnected with positive probability,}\ (T, o) \text{ is an infinite unimodular random rooted tree with } \mathbf{E} \deg_T(o) = \kappa \big\} \in [2, \infty].
\]

Problem 6.11.
Find \widetilde{\mathsf{disco}}(C_ℓ).

Problem 6.12. Is there any finite graph H with \widetilde{\mathsf{disco}}(H) < ∞ = disco(H)?

Problem 6.13. Is there any finite graph H with \widetilde{\mathsf{disco}}(H) = 2? (Note that if E deg_T(o) = 2, then T has at most two ends, hence T × H is recurrent, hence the FUSF is connected almost surely. However, this does not exclude the possibility of the infimum being 2.) Or perhaps \widetilde{\mathsf{disco}}(H) < ∞ implies \widetilde{\mathsf{disco}}(H) = 2?

Acknowledgements. We are indebted to Russ Lyons for several comments and corrections on the manuscript. We also thank Tom Hutchcroft and Péter Mester for useful remarks, and Damien Gaboriau and Asaf Nachmias for some references. Our work was supported by the ERC Consolidator Grant 772466 "NOISE".
References

[AL07] D. Aldous and R. Lyons. Processes on unimodular random networks. Electron. J. Probab. (2007), Paper 54, 1454–1508. http://mypage.iu.edu/~rdlyons/

[AHNR18] O. Angel, T. Hutchcroft, A. Nachmias, and G. Ray. Hyperbolic and parabolic unimodular random maps. Geom. Funct. Anal. (2018), 879–942. arXiv:1612.08693 [math.PR]

[BV97] B. Bekka and A. Valette. Group cohomology, harmonic functions and the first L²-Betti number. Potential Anal. (1997), 313–326. https://link.springer.com/article/10.1023/A:1017974406074

[Ben91] I. Benjamini. Instability of the Liouville property for quasi-isometric graphs and manifolds of polynomial volume growth. J. Theor. Probab. (1991), 632–636. https://link.springer.com/content/pdf/10.1007/BF01210328.pdf

[BKPS04] I. Benjamini, H. Kesten, Y. Peres and O. Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, .... Ann. of Math. (2) (2004), 465–491. arXiv:math.PR/0107140

[BLPS01] I. Benjamini, R. Lyons, Y. Peres and O. Schramm. Uniform spanning forests. Ann. Probab. (2001), 1–65.

[DLP12] J. Ding, J. Lee, and Y. Peres. Cover times, blanket times, and majorizing measures. Ann. Math. (2012), 1409–1471. arXiv:1004.4371 [math.PR]

[Dur10] R. Durrett. Probability: Theory and Examples. Fourth edition. Cambridge University Press, 2010. https://services.math.duke.edu/~rtd/PTE/pte.html

[Dyn84] E. B. Dynkin. Gaussian and non-Gaussian random fields associated with Markov processes. J. Funct. Anal. (1984), 344–376.

[Gab02] D. Gaboriau. Invariants ℓ² de relations d'équivalence et de groupes. Publ. Math. Inst. Hautes Études Sci. (2002), 93–150.

[Hut18] T. Hutchcroft. Interlacements and the Wired Uniform Spanning Forest. Ann. Probab. (2018), 1170–1200. arXiv:1512.08509 [math.PR]

[HN17] T. Hutchcroft and A. Nachmias. Indistinguishability of trees in uniform spanning forests. Probab. Theory Related Fields (2017), 113–152. arXiv:1506.00556 [math.PR]

[HN19] T. Hutchcroft and A. Nachmias. Uniform spanning forests of planar graphs. Forum of Mathematics, Sigma (2019), e29, 55 pp. arXiv:1603.07320 [math.PR]

[HP20] T. Hutchcroft and G. Pete. Kazhdan groups have cost 1. Invent. Math. (2020), to appear. arXiv:1810.11015 [math.GR]

[LPW17] D. A. Levin, Y. Peres and E. L. Wilmer. Markov Chains and Mixing Times. Second edition. With a chapter by J. G. Propp and D. B. Wilson. American Mathematical Society, Providence, RI, 2017. http://pages.uoregon.edu/dlevin/MARKOV/

[Lup16] T. Lupu. From loop clusters and random interlacement to the free field. Ann. Probab. (2016), 2117–2146. arXiv:1402.0298 [math.PR]

[Lyo05] R. Lyons. Asymptotic enumeration of spanning trees. Combin. Probab. Comput. (2005), 491–522. arXiv:math.CO/0212165

[LP16] R. Lyons and Y. Peres. Probability on Trees and Networks. Cambridge University Press, New York, 2016. Available at http://pages.iu.edu/~rdlyons/

[LPS06] R. Lyons, Y. Peres and O. Schramm. Minimal spanning forests. Ann. Probab. (2006), 1665–1692. arXiv:math.PR/0412263

[Mor03] B. Morris. The components of the wired spanning forest are recurrent. Probab. Theory Related Fields (2003), 259–265. https://link.springer.com/article/10.1007/s00440-002-0236-0

[MP05] B. Morris and Y. Peres. Evolving sets, mixing and heat kernel bounds. Probab. Theory Related Fields (2005), 245–266. arXiv:math.PR/0305349

[PP00] R. Pemantle and Y. Peres. Nonamenable products are not treeable. Israel J. Math. (2000), 147–155. arXiv:math.PR/0404096

[PR04] Y. Peres and D. Revelle. Scaling limits of the uniform spanning tree and loop-erased random walk on finite graphs. arXiv:math/0410430

[Pet20] G. Pete. Probability and Geometry on Groups. Book in preparation.

[Soa93] P. M. Soardi. Rough isometries and Dirichlet finite harmonic functions on graphs. Proc. Amer. Math. Soc. (1993), 1239–1248.

[Tan19] P. Tang. Weights of uniform spanning forests on nonunimodular transitive graphs. arXiv:1908.09889 [math.PR]

[Tim06] Á. Timár. Neighboring clusters in Bernoulli percolation. Ann. Probab. (2006), 2332–2343. arXiv:math.PR/0702873

[Tim18] Á. Timár. Indistinguishability of the components of random spanning forests. Ann. Probab. (2018), 2221–2242. arXiv:1506.01370 [math.PR]

[Tim20] Á. Timár. Unimodular random planar graphs are sofic. arXiv:1910.01307 [math.PR]

[Zha18] A. Zhai. Exponential concentration of cover times. Electron. J. Probab. (2018), no. 32, 1–22. arXiv:1407.7617 [math.PR]

[Wil96] D. B. Wilson. Generating random spanning trees more quickly than the cover time. Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, pp. 296–303, New York, 1996. https://dl.acm.org/doi/pdf/10.1145/237814.237880
Gábor Pete
Alfréd Rényi Institute of Mathematics, Budapest, and Institute of Mathematics, Budapest University of Technology and Economics

Ádám Timár
Alfréd Rényi Institute of Mathematics, Budapest
adam.timar [at] renyi.hu