Nearly Tight Bounds for Sandpile Transience on the Grid
David Durfee Georgia Institute of Technology [email protected]
Matthew Fahrbach ∗ Georgia Institute of Technology [email protected]
Yu Gao † Georgia Institute of Technology [email protected]
Tao Xiao † Shanghai Jiao Tong University xt [email protected]
Abstract
We use techniques from the theory of electrical networks to give nearly tight bounds for the transience class of the Abelian sandpile model on the two-dimensional grid up to polylogarithmic factors. The Abelian sandpile model is a discrete process on graphs that is intimately related to the phenomenon of self-organized criticality. In this process, vertices receive grains of sand, and once the number of grains exceeds their degree, they topple by sending grains to their neighbors. The transience class of a model is the maximum number of grains that can be added to the system before it necessarily reaches its steady-state behavior or, equivalently, a recurrent state. Through a more refined and global analysis of electrical potentials and random walks, we give an O(n^4 log^4 n) upper bound and an Ω(n^4) lower bound for the transience class of the n × n grid. Our methods naturally extend to n^d-sized d-dimensional grids to give O(n^{3d-2} log^{d+2} n) upper bounds and Ω(n^{3d-2}) lower bounds.

∗ Supported in part by a National Science Foundation Graduate Research Fellowship under grant DGE-1650044.
† A substantial portion of this work was completed while the author visited the Institute for Theoretical Computer Science at Shanghai University of Finance and Economics.

1 Introduction
The Abelian sandpile model is the canonical dynamical system used to study self-organized criticality. In their seminal paper, Bak, Tang, and Wiesenfeld [BTW87] proposed the idea of self-organized criticality to explain several ubiquitous patterns in nature typically viewed as complex phenomena, such as catastrophic events occurring without any triggering mechanism, the fractal behavior of mountain landscapes and coastal lines, and the presence of pink noise in electrical networks and stellar luminosity. Since their discovery, self-organized criticality has been observed in an abundance of disparate scientific fields [Bak96, WPC+16, LHG07], including statistical physics [Dha06, Man91], seismology [SS89], and sociology [KG09]. A stochastic process is a self-organized critical system if it naturally evolves to highly imbalanced critical states where slight local disturbances can completely alter the current state. For example, when pouring grains of sand onto a table, the pile initially grows in a predictable way, but as it becomes steeper and more unstable, dropping a single grain can spontaneously cause an avalanche that affects the entire pile. Self-organized criticality differs from the critical point of a phase transition in statistical physics, because a self-organizing system does not rely on tuning an external parameter. Instead, it is insensitive to all parameters of the model and simply requires time to reach criticality, which is known as the transient period. Natural events empirically operate at a critical point between order and chaos, thus justifying our study of self-organized criticality.

Dhar [Dha90] developed the
Abelian sandpile model on finite directed graphs with a sink vertex to further understand self-organized criticality. The Abelian sandpile model, also known as a chip-firing game [BLS91], on a graph with a sink is defined as follows. In each iteration a grain of sand is added to a non-sink vertex of the graph. While any non-sink vertex v contains at least deg(v) grains of sand, a grain is transferred from v to each of its neighbors. This is known as a toppling. When no vertex can be toppled, the state is stable and the iteration ends. The sink absorbs and destroys grains, and the presence of a sink guarantees that every toppling procedure eventually stabilizes. An important property of the Abelian sandpile model is that the order in which vertices topple does not affect the stable state. Therefore, as the process evolves it produces a sequence of stable states. From the theory of Markov chains, we say that a stable state is recurrent if it can be revisited; otherwise it is transient.

In the self-organized critical state of the Abelian sandpile model on a graph with a sink, transient states have zero probability and recurrent states occur with equal probability [Dha90]. As a result, recurrent configurations model the steady-state behavior of the system. Thus, the natural algorithmic question to ask about self-organized criticality for the Abelian sandpile model is:

Question 1.1.
How long in the worst case does it take for the process to reach its steady-state behavior or, equivalently, a recurrent state?
Starting with an empty configuration, if the vertex that receives the grain of sand is chosen uniformly at random in each step, Babai and Gorodezky [BG07] give a simple solution that is polynomial in the number of edges of the graph using a coupon collector argument. In the worst case, however, an adversary can choose where to place the grain of sand in each iteration. Babai and Gorodezky analyze the transience class of the model to understand its worst-case behavior, which is defined as the maximum number of grains that can be added to the empty configuration before the configuration necessarily becomes recurrent. An upper bound for the transience class of a model is an upper bound for the time needed to enter self-organized criticality.

1.1 Results
We give the first nearly tight bounds (up to polylogarithmic factors) for the transience class of the Abelian sandpile model on the n × n grid with all boundary vertices connected to the sink. This model was first studied in depth by Dhar, Ruelle, Sen, and Verma [DRSV95], and it has since been the most extensively studied Abelian sandpile model due to its role in algebraic graph theory, theoretical computer science, and statistical physics. Babai and Gorodezky [BG07] initially established that the transience class of the grid is polynomially bounded by O(n^{30}), which was unexpected because there are graphs akin to the grid with exponential transience classes. Choure and Vishwanathan [CV12] improved the upper bound for the transience class of the grid to O(n^7) and gave a lower bound of Ω(n^3) by viewing the graph as an electrical network and relating the Abelian sandpile model to random walks on the underlying graph. Moreover, they conjectured that the transience class of the grid is O(n^4), which we answer nearly affirmatively.

Theorem 1.2.
The transience class of the Abelian sandpile model on the n × n grid is O(n^4 log^4 n).

Theorem 1.3.
The transience class of the Abelian sandpile model on the n × n grid is Ω(n^4).

Our results establish how fast the system reaches its steady-state behavior in the adversarial case, and they corroborate empirical observations about natural processes exhibiting self-organized criticality. Our analysis directly generalizes to higher-dimensional cases, giving the following result.
Theorem 1.4.
For any integer d ≥ 1, the transience class of the Abelian sandpile model on the n^d-sized d-dimensional grid is O(n^{3d-2} log^{d+2} n) and Ω(n^{3d-2}).

In addition to addressing the main open problem in [BG07] and [CV12], we begin to shed light on Babai and Gorodezky's inquiry about sequences of graphs that exhibit polynomially bounded transience classes. Specifically, for hypergrids (a family of locally finite graphs with high symmetry) we quantify how the transience class grows as a function of the size and local degree of the graph. When viewed through the lens of graph connectivity, such transience class bounds are surprising because grids have low algebraic connectivity, yet we are able to make global structural arguments using only the fact that grids have low maximum effective resistance when viewed as electrical networks. By doing this, we avoid spectral analysis of the grid and evade the main obstacle in Choure and Vishwanathan's analysis. Our techniques suggest that low effective resistance captures a different but similar phenomenon to high conductance and high edge expansion for stochastic processes on graphs. This distinction between the role of a graph's effective resistance and conductance could be an important step forward for building a theory for discrete diffusion processes analogous to the mixing time of Markov chains. We also believe our results have close connections to randomized, distributed optimization algorithms for flow problems [BBD+
13, BMV12, Meh13, SV16a, SV16b, SV16c], where the dynamics of self-adjusting sandpiles (a Physarum slime mold in their model) are governed by electrical flows and resistances.
1.2 Techniques

Our approach is motivated by the method of Choure and Vishwanathan [CV12] for bounding the transience class of the Abelian sandpile model on graphs using electrical potential theory and the analysis of random walks. Viewing the graph as an electrical network with a voltage source at some vertex and a grounded sink, we give more accurate voltage estimates by carefully considering the geometry of the grid. We use several lines of symmetry to compare escape probabilities of random walks with different initial positions, resulting in a new technique for comparing vertex potentials. These geometric arguments can likely be generalized to other lattice-based graphs. As a result, we get empirically tight inequalities for the sum of all vertex potentials in the grid and the voltage drop between opposite corners of the network.

For many of our voltage bounds, we interpret a vertex potential as an escape probability and decouple the corresponding two-dimensional random walks on the grid into independent one-dimensional random walks on a path graph. Decoupling is the standout technique in this paper, because it allows us to apply classical results about simple symmetric random walks on Z (such as the reflection principle), which we extend as needed using conditional probability arguments. By reducing from two-dimensional random walks to one-dimensional walks, we utilize standard probabilistic tools including Stirling's approximation, Chernoff bounds, and the negative binomial distribution. Since we consider many different kinds of events in our analysis, Section 5 is an extensive collection of probability inequalities for symmetric t-step random walks on Z with various boundary conditions. We noticed that some of these inequalities are directly related to problems in enumerative combinatorics without closed-form solutions [ES77].

Lastly, we leverage well-known results about effective resistances of the n × n grid when viewed as an electrical network.
We follow Choure and Vishwanathan in using the potential reciprocity theorem to swap the voltage source with any other non-sink vertex, but we use this theorem repeatedly with the fact that the effective resistance between any non-sink vertex and the sink is bounded between a constant and O(log n). This approach enables us to analyze tractable one-dimensional random walk problems at the expense of polylogarithmic factors.

2 Preliminaries

Let G = (V, E) be an undirected multigraph. Throughout this paper all of the graphs we consider have a sink vertex denoted by v_sink. The Abelian sandpile model is a dynamical system on a graph G used to study the phenomenon of self-organized criticality. A configuration σ on G in the Abelian sandpile model is a vector of nonnegative integers indexed by the non-sink vertices such that σ(v) denotes the number of grains of sand on vertex v. We say that a configuration is stable if σ(v) < deg(v) for all non-sink vertices and unstable otherwise. An unstable configuration σ moves towards stabilization by selecting a vertex v such that σ(v) ≥ deg(v) and sending one grain of sand from v to each of its neighboring vertices. This event is called a toppling of v, and it creates a new configuration σ′ such that σ′(v) = σ(v) − deg(v), σ′(u) = σ(u) + 1 for all vertices u adjacent to v, and σ′(u) = σ(u) for all remaining vertices. This procedure eventually reaches a stable state because G has a sink. Moreover, the order in which vertices topple does not affect the final stable state. The initial configuration of the Abelian sandpile model is typically the zero vector, and in each iteration a grain of sand is placed at a vertex (chosen either deterministically or uniformly at random). The system evolves by stabilizing the configuration and then receiving another grain of sand.

A stable configuration σ is recurrent if the process can eventually return to σ. Any state that is not recurrent is transient.
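The toppling rule above is straightforward to simulate. Below is a minimal sketch of ours (not code from the paper) for the n × n grid studied later, where every boundary vertex is wired to an implicit sink, so every non-sink vertex has degree 4 and grains pushed off the board are destroyed.

```python
def stabilize(config, n):
    """Topple vertices with at least 4 grains until the configuration is stable."""
    unstable = [(i, j) for i in range(n) for j in range(n) if config[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if config[i][j] < 4:
            continue                      # already fixed by an earlier toppling
        config[i][j] -= 4                 # send one grain to each neighbor
        if config[i][j] >= 4:
            unstable.append((i, j))       # may still be unstable
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if 0 <= x < n and 0 <= y < n:
                config[x][y] += 1
                if config[x][y] >= 4:
                    unstable.append((x, y))
            # else: the grain falls into the sink and is destroyed
    return config

n = 4
config = [[0] * n for _ in range(n)]
for _ in range(50):                       # drop 50 grains at the top-left corner
    config[0][0] += 1
    stabilize(config, n)
assert all(c < 4 for row in config for c in row)
```

By the Abelian property noted above, the stable configuration reached is independent of the order in which unstable vertices are popped from the stack.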
Note that once the system enters a recurrent state, it can never visit a transient state. Babai and Gorodezky [BG07] introduced the following notion to upper bound the number of steps for the Abelian sandpile model to reach self-organized criticality.

Definition 2.1.
The transience class of the Abelian sandpile model of G is the maximum number of grains that can be added to the empty configuration before the configuration necessarily becomes recurrent. We denote this quantity by tcl(G).

Figure 1: Configurations of the Abelian sandpile model on the 500 × 500 grid during its transience period after placing successively larger numbers of grains of sand at (1, 1). [Panels (a)-(d) omitted.]

Figure 1 depicts the transient period of the Abelian sandpile model as it advances towards its critical state. We specifically show in this paper that by repeatedly placing grains of sand in the top-left corner of the grid, we maximize the length of the transience period up to a polylogarithmic factor.

In earlier related works, Björner, Lovász, and Shor [BLS91] studied a variant of this process without a sink and characterized the conditions needed for stabilization to terminate. They also related the spectrum of the underlying graph to the rate at which the system converges. In the model we study, an observation by Dhar [Dha90] and Kirchhoff's theorem show that the stable recurrent states of the system are in bijection with the spanning trees of G. Choure and Vishwanathan [CV12] show that if every vertex in a configuration has toppled then the configuration is necessarily recurrent, which we use to bound the transience class. The Abelian sandpile model also has broad applications to algorithms and statistical physics, including a direct relation to the q-state Potts model and Markov chain Monte Carlo algorithms for sampling random spanning trees [BCFR17, Dha90, JLP15, RS17, Wil10]. For a comprehensive survey on the Abelian sandpile model, see [HLM+08].

2.1 Random Walks

A walk w on G is a sequence of vertices w^{(0)}, w^{(1)}, ..., w^{(t_max)} such that every w^{(t+1)} is a neighbor of w^{(t)}. We let t_max = |w| denote the length of the walk. A random walk is a process that begins at vertex w^{(0)}, and at each time step t transitions from w^{(t)} to w^{(t+1)} such that w^{(t+1)} is chosen uniformly at random from the neighbors of w^{(t)}. Note that this definition naturally captures the effect of walking on a multigraph. We consider walks that continue until reaching a set of sink vertices. It will be convenient for our analysis to formally define the following families of walks.

Definition 2.2.
For any set of starting vertices S and terminating vertices T in the graph G, let

W(S → T) := { w : w^{(0)} ∈ S, w^{(i)} ∉ T ∪ {v_sink} for 0 ≤ i ≤ |w| − 1, and w^{(|w|)} ∈ T }

be the set of finite walks from S to T.

Observe that with this definition, walks w of length 0 are permissible if we have w^{(0)} ∈ S ∩ T. Throughout the paper it will be convenient to consider random walks from one vertex u to another vertex v or the pair {v, v_sink}. We denote these cases by the notation W(u → v) = W({u} → {v}). If walks on multiple graphs are being considered, we use W_G(u → v) to denote the underlying graph. Lastly, we consider the set of nonterminating walks in our analysis, so it will be useful to define

W(S) := { w ∈ V^∞ : w^{(0)} ∈ S and w^{(i)} ≠ v_sink for any i ≥ 0 },

which is the set of infinite walks from S. An analogous definition follows when S = {u}.

The focus of our study is the n × n grid graph, denoted by Square_n. Similar to previous works, we do not follow the usual graph-theoretic convention of using n to denote vertex count. We formally define the one-dimensional projection of Square_n to be Path_n, which has the vertex set {1, 2, ..., n} ∪ {v_sink} and edges between i and i + 1 for every 1 ≤ i ≤ n − 1, as well as two edges connecting v_sink to 1 and n. Thus, v_sink can be viewed as 0 and n + 1. If we remove the sink (which can be thought of as letting v_sink = ±∞) then the resulting graph is the one-dimensional line with vertices i ∈ Z and edges between every pair (i, i + 1). We denote this graph by Line and use the indices i, j, and k to represent its vertices. Analyzing random walks on Line is critical to our analysis, and it will be useful to record the minimum and maximum position of t-step walks.

Definition 2.3.
For an initial position i ∈ Z and walk w ∈ W(i) on Line, let the t-step minimum and maximum positions be

min_{≤t}(w) := min_{0 ≤ t̂ ≤ t} w^{(t̂)}   and   max_{≤t}(w) := max_{0 ≤ t̂ ≤ t} w^{(t̂)}.

We construct
Square_n similarly. Its vertices are {1, 2, ..., n} × {1, 2, ..., n} ∪ {v_sink}, and its edges connect any pair of vertices that differ by one in exactly one coordinate. Vertices on the boundary have edges connected to v_sink so that every non-sink vertex has degree 4. With this definition of Square_n, each corner vertex has two edges to v_sink and non-corner vertices on the boundary share one edge with v_sink. Since all vertices correspond to pairs of coordinates, we use the vector notation u = (u_1, u_2) to denote coordinates on the grid, as it easily extends to higher dimensions. Throughout the paper, boldfaced variables denote vectors.

A t-step random walk on Square_n naturally induces a (t_max + 1) × 2 matrix of positions, and we decompose w into its horizontal and vertical components, using the notation w_1 for the change in position of the first coordinate and w_2 for the change in position of the second coordinate. In general we use the notation w_d̂ to index into one of the dimensions 1 ≤ d̂ ≤ d of a d-dimensional walk. We do not record duplicate positions when the walk takes a step in a dimension different than d̂, so we have |w| = |w_1| + |w_2| when d = 2, since the initial vertex is present in both w_1 and w_2.

Vertex potentials are central to our analysis. They have close connections with electrical voltages and belong to the class of harmonic functions [DS84]. We analyze their relation to the transience class of general graphs. For any non-sink vertex u, we can define a unique potential vector π_u such that π_u(u) = 1, π_u(v_sink) = 0, and for all other vertices v ∈ V ∖ {u, v_sink} we have

π_u(v) = (1/deg(v)) Σ_{x ∼ v} π_u(x),

where the sum ranges over the neighbors x of v. Thus, π_u(v) denotes the potential at v when the boundary conditions are set to 1 at u and 0 at the sink.
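As a quick illustration of this definition (our sketch, not the paper's code), the harmonic equations can be solved by fixed-point iteration on a small Square_n; off-grid neighbors are the sink and contribute potential 0.

```python
def potentials(n, u, sweeps=4000):
    """Approximate the potential vector pi_u on Square_n by Gauss-Seidel sweeps."""
    pi = {(i, j): 0.0 for i in range(1, n + 1) for j in range(1, n + 1)}
    pi[u] = 1.0                            # boundary condition at the source u
    for _ in range(sweeps):
        for (i, j) in pi:
            if (i, j) != u:                # pi_u(v_sink) = 0 is implicit below
                pi[(i, j)] = sum(pi.get(x, 0.0) for x in
                                 ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))) / 4
    return pi

pi = potentials(5, (2, 2))
assert pi[(2, 2)] == 1.0
for (i, j), val in pi.items():             # every other vertex is harmonic
    if (i, j) != (2, 2):
        nbrs = ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
        assert abs(val - sum(pi.get(x, 0.0) for x in nbrs) / 4) < 1e-9
```

These values coincide with the escape-probability interpretation of potentials given in Fact 2.5.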
Since we analyze potential vectors in both Path_n and Square_n, we use superscripts to denote the graph when context is unclear.

Choure and Vishwanathan showed that we can give upper and lower bounds on the transience class using potentials, which we rephrase in the following theorem.

Theorem 2.4 ([CV12]). If G is a graph such that the degree of every non-sink vertex is bounded by a constant, then

tcl(G) = O( max_{u,v ∈ V∖{v_sink}} ( Σ_{x ∈ V} π_u(x) ) · π_u(v)^{-1} )

and

tcl(G) = Ω( max_{u,v ∈ V∖{v_sink}} π_u(v)^{-1} ).

All non-sink vertices have degree 4, so we can apply Theorem 2.4 to
Square_n.

The following combinatorial interpretation of potentials as random walks is fundamental to our investigation of the transience class of Square_n. Note that we use boldfaced vector variables for non-sink vertices in Square_n as they can be identified by their coordinates.

Fact 2.5 ([DS84]). For any graph G and non-sink vertex u, the potential π_u(v) is the probability of a random walk starting at v and reaching u before v_sink.

Lemma 2.6.
Let u be a non-sink vertex of Square_n. For any vertex v, we have

π_u(v) = Σ_{w ∈ W(v → u)} 4^{-|w|}.

We defer the proof of Lemma 2.6 to Appendix A.

A systematic treatment of the connection between random walks and electrical networks can be found in the monograph by Doyle and Snell [DS84] or the survey by Lovász [Lov93]. The following lemma is a key result for our investigation, which states that a voltage source and a measurement point can be swapped at the expense of a distortion in the potential equal to the ratio of the effective resistances between the sink and the two vertices. The effective resistance between a pair of vertices u and v, denoted as R_eff(u, v), can be formalized in several ways. In the electrical interpretation [DS84], effective resistance can be viewed as the voltage needed to send one unit of current from u to v if every edge in G is a unit resistor. For a linear algebraic definition of effective resistance, see [ESVM+].

Lemma 2.7 ([CV12, Potential Reciprocity]). Let G be a graph (not necessarily degree-bounded) with sink v_sink. For any pair of vertices u and v, we have

R_eff(v_sink, u) π_u(v) = R_eff(v_sink, v) π_v(u).
Square n , because the effective resistance betweenany pair of vertices is bounded between a constant and O (log n ). The following lemma makes useof a classical result that can be obtained using Thompson’s principle of the electrical flow [DS84].6 emma 2.8. For any non-sink vertex u in Square n , / ≤ R eff ( v sink , u ) ≤ n + 1 . We give the proof of Lemma 2.8 in Appendix A. When used together, Lemma 2.7 and Lemma 2.8imply the following result, which allows us to conveniently swap the source vertex when computingpotentials.
Lemma 2.9.
For any non-sink vertices u and v in Square_n, we have π_u(v) ≤ (8 log n + 4) π_v(u).

Voltages and flows on electrical networks are central to many recent developments in algorithmic graph theory (e.g., modern maximum flow algorithms and interior point methods [CKM+]).

3 Upper Bound for the Transience Class

In this section we prove the upper bound in Theorem 1.2 for the transience class of the Abelian sandpile model on the square grid. Our proof follows the framework of Choure and Vishwanathan in that we use Theorem 2.4 to reduce the proof to bounding the following two quantities for any non-sink vertex u ∈ V(Square_n):

• We upper bound the potential sum Σ_{v ∈ V} π_u(v).
• We lower bound the potential π_u(v) for all non-sink vertices v.

By symmetry we assume without loss of generality that u is in the top-left quadrant of Square_n (i.e., we have 1 ≤ u_1, u_2 ≤ ⌈n/2⌉). The principal idea is to use reciprocity from Lemma 2.7 and effective resistance bounds from Lemma 2.8 to swap source vertices and bound π_v(u) instead, at the expense of an O(log n) factor. The second key idea is to interpret potentials as random walks using Fact 2.5 and then decouple two-dimensional walks on Square_n into separate horizontal and vertical one-dimensional walks on Path_n. Using well-studied properties of one-dimensional random walks, we achieve nearly tight bounds on tcl(Square_n).

We note that there is a natural trade-off in the choice of the source vertex u. Setting u near the boundary decreases vertex potentials because a random walk has a higher probability of escaping to v_sink instead of u. This improves the upper bound of the sum of vertex potentials, but it weakens the lower bound of the minimum vertex potential. For vertices u that are not near the boundary, the opposite is true. Therefore, we account for the choice of u in our bounds.

3.1 Upper Bounding the Sum of Potentials

Lemma 3.1.
For any non-sink vertex u in Square_n, we have

Σ_{v ∈ V} π_u(v) = O(u_1 u_2 log^3 n).

Proof. We use Fact 2.5 and Lemma 2.6 to interpret vertex potentials as random walks. We can omit v_sink because any random walk starting there immediately terminates. By Lemma 2.9,

π_u(v) = O(π_v(u) log n),

so we apply the random walk interpretation to potentials starting at u instead of v. Consider one such walk w ∈ W(u → v) and its one-dimensional decompositions w_1 and w_2. The probability of a walk from u reaching v is equal to the probability that two interleaved walks in Path_n starting at u_1 and u_2 are present on v_1 and v_2, respectively, at the same time before either hits their one-dimensional sink v_sink = {0, n + 1}.

If we remove the restriction that these walks are present on v_1 and v_2 at the same time and only require that they visit v_1 and v_2 before hitting v_sink, then each of these less restricted walks w_d belongs to the class W_{Path_n}(u_d → v_d). Viewing a walk w on Square_n as an infinite walk on the lattice Z^2 induces independence between w_1 and w_2. Thus, we obtain the upper bound

π_v(u) = Pr_{w ∼ W_{Z^2}(u)} [w hits v before leaving Square_n]
      ≤ Pr_{w ∼ W_{Z^2}(u)} [w_1 hits v_1 before v_sink and w_2 hits v_2 before v_sink]
      = Pr_{w ∼ W_{Z^2}(u)} [w_1 hits v_1 before v_sink] · Pr_{w ∼ W_{Z^2}(u)} [w_2 hits v_2 before v_sink]
      = π^{Path_n}_{v_1}(u_1) · π^{Path_n}_{v_2}(u_2).

Summing over all choices of v = (v_1, v_2) gives

Σ_{v ∈ V} π_v(u) ≤ ( Σ_{v_1=1}^{n} π^{Path_n}_{v_1}(u_1) ) ( Σ_{v_2=1}^{n} π^{Path_n}_{v_2}(u_2) ).
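The decoupling inequality above can be checked numerically on a small grid. The sketch below (ours, not the paper's code) computes the two-dimensional hitting probability π_v(u) and the two Path_n potentials by fixed-point iteration and verifies the product upper bound; the tolerance absorbs the iteration error.

```python
def grid_hit_prob(n, v, sweeps=4000):
    """p[u] ~= Pr[2D walk from u hits v before stepping off Square_n]."""
    p = {(i, j): 0.0 for i in range(1, n + 1) for j in range(1, n + 1)}
    p[v] = 1.0
    for _ in range(sweeps):
        for (i, j) in p:
            if (i, j) != v:
                p[(i, j)] = sum(p.get(x, 0.0) for x in
                                ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))) / 4
    return p

def path_hit_prob(n, v, sweeps=4000):
    """q[i] ~= Pr[1D walk from i hits v before the sink {0, n + 1}]."""
    q = [0.0] * (n + 2)
    q[v] = 1.0
    for _ in range(sweeps):
        for i in range(1, n + 1):
            if i != v:
                q[i] = (q[i - 1] + q[i + 1]) / 2
    return q

n, v = 5, (3, 4)
p2 = grid_hit_prob(n, v)
q1, q2 = path_hit_prob(n, v[0]), path_hit_prob(n, v[1])
for (u1, u2), prob in p2.items():
    assert prob <= q1[u1] * q2[u2] + 1e-9   # pi_v(u) <= product of path potentials
```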
Path n have the following closed-form solution, as shown in [DS84]: π Path n v ( u ) = (cid:40) n +1 − u n +1 − v if v ≤ u , u v if v > u . Splitting the sum at u and using the fact that potentials are escape probabilities, we have n (cid:88) v =1 π Path n v ( u ) ≤ u + n (cid:88) v = u +1 u v = O ( u log n ) . We similarly obtain an upper bound of O ( u log n ) in the other dimension. These bounds alongwith the initial O (log n ) overhead from swapping u and v gives the desired upper bound. The more involved part of this paper proves a lower bound for the minimum vertex potentialmin v ∈ V \{ v sink } π u ( v ) as a function of a fixed vertex u = ( u , u ). Recall that we assumed withoutloss of generality that u is in the top-left quadrant of Square n . We first prove that the mini-mum potential occurs at vertex ( n, n ), the corner farthest from u . Using Lemma 2.9 to swap u and ( n, n ) at the expense of a Ω(1 / log n ) factor, we reduce the problem to giving a lower boundfor π ( n,n ) ( u ). Then we decompose walks w ∈ W ( u → { ( n, n ) , v sink } ) into their one-dimensionalwalks w ∈ W Path n ( u ) and w ∈ W Path n ( u ), and we interpret π ( n,n ) ( u ) as the probability thatthe individual processes w and w are present on n at the same time before either walk leaves the8nterval [1 , n ]. Walks on Line that meet at n before leaving the interval [1 , n ] are equivalent towalks on Path n that meet at n before terminating at v sink . Lastly, we use conditional probabilitiesto analyze walks on Line instead of walks on
Path_n in order to leverage well-known facts about simple symmetric random walks.

To lower bound the desired probability π_{(n,n)}(u), we show that a subset of W(u → (n, n)) of interleaved one-dimensional walks starting at u_1 and u_2 that first reach n in approximately the same number of steps has a sufficient amount of probability mass. We prove this by observing that the distributions of the number of steps for the walks to first reach n without leaving the interval [1, n] are concentrated around (n − u_1)^2 and (n − u_2)^2, respectively. Consequently, we show that this distribution is approximately uniform in a Θ(n^2) length interval, with each step count having probability Ω(u_1/n^3) and Ω(u_2/n^3), respectively. We then use Chernoff bounds to show that both walks take approximately the same number of steps with constant probability. Combining these facts, we give the desired lower bound Ω(u_1 u_2 / n^4).

We first show that the corner vertex (n, n) has the minimum potential up to a constant factor. Viewing potentials as escape probabilities, we utilize the geometry of the grid to construct maps between sets of random walks that prove the potential of an interior vertex is greater than its axis-aligned projection to the boundary of the grid. We defer the proof of Lemma 3.2 to Appendix B.
Lemma 3.2. If u is a vertex in the top-left quadrant of Square_n, then for any non-sink vertex v we have π_u(v) ≥ π_u((n, n)).
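Lemma 3.2 is easy to test numerically. The sketch below (ours, not the paper's code) solves for the potentials by fixed-point iteration and checks that the far corner attains the minimum for several sources in the top-left quadrant.

```python
def grid_potentials(n, u, sweeps=4000):
    """pi[v] ~= pi_u(v): Pr[walk from v reaches u before the sink] (Fact 2.5)."""
    pi = {(i, j): 0.0 for i in range(1, n + 1) for j in range(1, n + 1)}
    pi[u] = 1.0
    for _ in range(sweeps):
        for (i, j) in pi:
            if (i, j) != u:
                pi[(i, j)] = sum(pi.get(x, 0.0) for x in
                                 ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))) / 4
    return pi

n = 6
for u in [(1, 1), (2, 3), (3, 3)]:          # sources in the top-left quadrant
    pi = grid_potentials(n, u)
    assert pi[(n, n)] <= min(pi.values()) + 1e-12
```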
Square n that start at u into one-dimensional walkson Line , our lower bound relies on showing that there is a Θ( n ) length interval such that eachone-dimensional walk of a fixed length in this interval has probability Ω( u /n ) or Ω( u /n ),respectively, of remaining above 0 and reaching n for the first time upon termination. For ourpurposes, lower bounds for this probability will suffice, and they follow from the following keyproperty for one-dimensional walks that we prove in Section 5. Lemma 3.3.
Let n ∈ Z≥1 and 1 ≤ i ≤ ⌈n/2⌉ be any starting position. For any constant c > 0 and any t ∈ Z such that n^2/c ≤ t ≤ n^2/2 with t ≡ n − i (mod 2), a simple symmetric random walk w on Z satisfies

Pr_{w ∼ W_Line(i)} [ w^{(t)} = n, max_{≤t}(w) = n, and min_{≤t}(w) ≥ 1 ] = Ω(i/n^3),

where the hidden constant depends only on c.

Using Lemma 3.3 with the following lemma, we give a lower bound for π_{(n,n)}(u), the probability that a walk starting from u reaches (n, n) before v_sink. Lemma 3.4 is a consequence of a Chernoff bound, and we defer its proof to Appendix B.

Lemma 3.4.
There is an absolute constant c_0 > 0 such that for all n ≥ 1,

min{ Σ_{k = ⌈n/3⌉, k odd}^{⌊2n/3⌋} (n choose k) / 2^n ,  Σ_{k = ⌈n/3⌉, k even}^{⌊2n/3⌋} (n choose k) / 2^n } ≥ c_0.

Lemma 3.5. For all n ≥ 2 and any vertex u in the top-left quadrant of Square_n, we have

π_{(n,n)}(u) = Ω(u_1 u_2 / n^4).

Proof.
We decouple each walk w ∈ W(u → (n, n)) into its horizontal walk w_1 ∈ W_Line(u_1) and vertical walk w_2 ∈ W_Line(u_2). The potential π_{(n,n)}(u) can be interpreted as the probability that w_1 and w_2 visit n at the same time before either leaves the interval [1, n]. We can further decompose t-step walks on Square_n into those that take t_1 steps in the horizontal direction and t_2 in the vertical direction. Considering restricted instances where w_1 and w_2 visit n exactly once, we obtain the following bound by Lemma 2.6:

π_{(n,n)}(u) ≥ Σ_{w ∈ W(u → (n,n)) : w_1 and w_2 each hit n exactly once} 4^{-|w|}.  (1)

Accounting for all the ways that two one-dimensional walks can be interleaved, the right-hand side of (1) equals

Σ_{t_1, t_2 ≥ 0} [ (t_1 + t_2 choose t_1) / 4^{t_1 + t_2} ] · (number of t_1-step walks from u_1 that stay in [1, n − 1] and terminate at n) · (number of t_2-step walks from u_2 that stay in [1, n − 1] and terminate at n).

Observing that

Pr_{w ∼ W_Line(u_d)} [ w^{(t)} = n, max_{≤ t−1}(w) = n − 1, min_{≤ t−1}(w) ≥ 1 ] = (number of t-step walks from u_d that stay in [1, n − 1] and terminate at n) / 2^t,

it follows from (1) that

π_{(n,n)}(u) ≥ Σ_{t_1, t_2 ≥ 0} [ (t_1 + t_2 choose t_1) / 2^{t_1 + t_2} ] · Pr_{w ∼ W_Line(u_1)} [ w^{(t_1)} = n, max_{≤ t_1 − 1}(w) = n − 1, min_{≤ t_1 − 1}(w) ≥ 1 ] · Pr_{w ∼ W_Line(u_2)} [ w^{(t_2)} = n, max_{≤ t_2 − 1}(w) = n − 1, min_{≤ t_2 − 1}(w) ≥ 1 ].

Since the final step of each such walk moves from n − 1 to n with probability 1/2, the right-hand side of the inequality above equals

Σ_{t_1, t_2 ≥ 0} [ (t_1 + t_2 choose t_1) / 2^{t_1 + t_2} ] · ( (1/2) Pr_{w ∼ W_Line(u_1)} [ w^{(t_1 − 1)} = n − 1, max_{≤ t_1 − 1}(w) = n − 1, min_{≤ t_1 − 1}(w) ≥ 1 ] ) · ( (1/2) Pr_{w ∼ W_Line(u_2)} [ w^{(t_2 − 1)} = n − 1, max_{≤ t_2 − 1}(w) = n − 1, min_{≤ t_2 − 1}(w) ≥ 1 ] ).  (2)

Letting t = t_1 + t_2, we further refine the set of two-dimensional walks so that t ∈ [n^2/40, n^2/8] and t_1, t_2 ∈ [t/3, 2t/3] while capturing a sufficient amount of probability mass for a useful lower bound. Note that the parities of t_1 and t_2 satisfy t_1 ≡ n − u_1 (mod 2) and t_2 ≡ n − u_2 (mod 2) for valid walks. Let I be an indexing of all such pairs (t_1, t_2). Working from (2), we have

π_{(n,n)}(u) ≥ Σ_{(t_1, t_2) ∈ I} [ (t_1 + t_2 choose t_1) / 2^{t_1 + t_2} ] · Ω(u_1/n^3) · Ω(u_2/n^3)
  = Ω(u_1 u_2 / n^6) · Σ_{t ∈ [n^2/40, n^2/8], t ≡ u_1 + u_2 (mod 2)} Σ_{t_1 ∈ [t/3, 2t/3], t_1 ≡ n − u_1 (mod 2)} (t choose t_1) / 2^t
  ≥ Ω(u_1 u_2 / n^6) · Ω(n^2)
  = Ω(u_1 u_2 / n^4).

For the first inequality, we can apply Lemma 3.3 (with n − 1 in place of n) because n^2/120 ≤ t_1, t_2 ≤ n^2/12. For the second inequality, we group pairs (t_1, t_2) by their sum t = t_1 + t_2 and apply Lemma 3.4, which lower bounds each inner parity-restricted binomial sum by an absolute constant; the number of t ∈ [n^2/40, n^2/8] with the required parity is Ω(n^2).

We now combine the upper bound for the sum of potentials given by Lemma 3.1 and the lower bounds in Section 3.2 to obtain the overall upper bound for the transience class of the grid.
Proof.
For any $u = (u_1, u_2)$ in the top-left quadrant of $Square_n$, we have
\[
\max_{u, v \in V \setminus \{v_{sink}\}} \left(\sum_{x \in V} \pi_u(x)\right) \pi_u(v)^{-1} \le O(1) \cdot \max_{u \in V \setminus \{v_{sink}\}} \left(\sum_{x \in V} \pi_u(x)\right) \pi_u((n,n))^{-1} = \max_{u \in V \setminus \{v_{sink}\}} \left(\sum_{x \in V} \pi_u(x)\right) \cdot \frac{O(\log n)}{\pi_{(n,n)}(u)} = \max_{u \in V \setminus \{v_{sink}\}} O\!\left(u_1 u_2 \log^3 n\right) \cdot O\!\left(\frac{n^4 \log n}{u_1 u_2}\right) = O\!\left(n^4 \log^4 n\right).
\]
The first inequality follows from Lemma 3.2, the second step from Lemma 2.9, and the third from Lemma 3.5 and Lemma 3.1. The result follows from Theorem 2.4.
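The decoupling argument above, counting the interleavings of two one-dimensional walks with a binomial coefficient, can be checked exactly on a toy grid. The sketch below is illustrative (the helper names `count_line`, `count_grid`, and `interleaved`, and the small side length `N`, are assumptions, not from the paper); it verifies that the restricted two-dimensional walk count equals the interleaved product of one-dimensional counts:

```python
from math import comb

N = 5  # toy grid side length; the paper's analysis concerns large N

def count_line(start, steps):
    """Count `steps`-step walks on Z from `start` that stay in [1, N-1]
    through step steps-1 and land on N at the final step, i.e. walks
    that hit N exactly once, at the end."""
    if steps == 0:
        return 0
    dp = {start: 1}
    for k in range(steps):
        last = (k == steps - 1)
        nxt = {}
        for x, ways in dp.items():
            for y in (x - 1, x + 1):
                ok = (y == N) if last else (1 <= y <= N - 1)
                if ok:
                    nxt[y] = nxt.get(y, 0) + ways
        dp = nxt
    return dp.get(N, 0)

def count_grid(u, steps):
    """Count `steps`-step grid walks from u = (u1, u2) whose coordinates
    each stay in [1, N-1] until a final increment to N (no moves in a
    direction once that coordinate reaches N), ending at (N, N)."""
    dp = {u: 1}
    for _ in range(steps):
        nxt = {}
        for (x, y), ways in dp.items():
            moves = []
            if x < N:
                moves += [(x - 1, y), (x + 1, y)]
            if y < N:
                moves += [(x, y - 1), (x, y + 1)]
            for a, b in moves:
                if 1 <= a <= N and 1 <= b <= N:
                    nxt[(a, b)] = nxt.get((a, b), 0) + ways
        dp = nxt
    return dp.get((N, N), 0)

def interleaved(u, steps):
    """Right-hand side of the decoupling identity: interleave every split
    of the steps between the horizontal and vertical walks."""
    return sum(comb(steps, t1) * count_line(u[0], t1) * count_line(u[1], steps - t1)
               for t1 in range(steps + 1))
```

Each grid walk has probability $4^{-|w|}$, which factors as $\binom{t}{t_1} 2^{-t} \cdot 2^{-t_1} \cdot 2^{-t_2}$ under this split, exactly as in the displayed sum.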
In this section we lower bound $tcl(Square_n)$ using techniques similar to those in Section 3. Since the lower bound in Theorem 2.4 considers the maximum inverse vertex potential over all pairs of non-sink vertices $u$ and $v$, it suffices to upper bound $\pi_{(n,n)}((1,1))$. Again, we decouple two-dimensional walks on $Square_n$ into one-dimensional walks on $Line$ and then upper bound the probability that a $t$-step walk on $Line$ starting at 1 and ending at $n$ does not leave the interval $[1, n]$. More specifically, our upper bound for $\pi_{(n,n)}((1,1))$ relies on the following bound, which we prove in Section 5.

Lemma 4.1. For all $n \ge 2$ and $t \ge n - 1$, we have
\[
\Pr_{w \sim \mathcal{W}_{Line}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \text{and } \min_{0 \le t}(w) \ge 1\right] \le \min\left\{\frac{c_0}{n^3},\ 64\left(\frac{n}{t}\right)^3\right\},
\]
where $c_0 > 0$ is an absolute constant.

Fact 4.2.
For any nonnegative integer $t_2$, we have
\[
\sum_{t_1 \ge 0} \binom{t_1 + t_2}{t_1} \frac{1}{2^{t_1 + t_2}} = 2.
\]
Proof.
This follows directly from the negative binomial distribution. Observe that
\[
\sum_{t_1 \ge 0} \binom{t_1 + t_2}{t_1} \frac{1}{2^{t_1 + t_2}} = 2 \sum_{t_1 \ge 0} \binom{(t_1 + t_2 + 1) - 1}{t_1} \left(\frac{1}{2}\right)^{t_2 + 1} \left(\frac{1}{2}\right)^{t_1} = 2,
\]
as desired, since the latter sum is the total probability mass of a negative binomial distribution with success probability $1/2$ that counts the number of failures $t_1$ before the $(t_2+1)$-st success.

By decoupling the two-dimensional walks in a way similar to the proof of Lemma 3.5, we apply Lemma 4.1 to the resulting one-dimensional walks to achieve the desired upper bound.

Lemma 4.3.
For all $n \ge 2$, we have
\[
\pi_{(n,n)}((1,1)) \le 2 \max_{t \ge 0}\left\{\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right]\right\} \cdot \sum_{t \ge 0} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right].
\]
Proof.
Analogous to our lower bound for $\pi_{(n,n)}((1,1))$, we decouple each walk $w \in \mathcal{W}((1,1) \to (n,n))$ into its horizontal walk $w_1 \in \mathcal{W}_{Line}(1)$ and its vertical walk $w_2 \in \mathcal{W}_{Line}(1)$. We view $\pi_{(n,n)}((1,1))$ as the probability that $w_1$ and $w_2$ are present on $n$ at the same time before either leaves the interval $[1, n]$. Letting $t_1$ be the length of $w_1$ and $t_2$ be the length of $w_2$, we relax the conditions on the one-dimensional walks and only require that $w_1$ and $w_2$ both are present on $n$ at the final step $t = t_1 + t_2$. Note that now both walks could have previously been present on $n$ at the same time before terminating. This gives the upper bound
\[
\pi_{(n,n)}((1,1)) \le \sum_{t_1, t_2 \ge 0} \frac{\binom{t_1+t_2}{t_1}}{2^{t_1+t_2}} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t_1)} = n,\ \max_{0 \le t_1}(w) = n,\ \min_{0 \le t_1}(w) \ge 1\right] \cdot \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t_2)} = n,\ \max_{0 \le t_2}(w) = n,\ \min_{0 \le t_2}(w) \ge 1\right].
\]
Nesting the summations gives
\[
\pi_{(n,n)}((1,1)) \le \sum_{t_1 \ge 0} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t_1)} = n,\ \max_{0 \le t_1}(w) = n,\ \min_{0 \le t_1}(w) \ge 1\right] \cdot \sum_{t_2 \ge 0} \frac{\binom{t_1+t_2}{t_1}}{2^{t_1+t_2}} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t_2)} = n,\ \max_{0 \le t_2}(w) = n,\ \min_{0 \le t_2}(w) \ge 1\right].
\]
By Fact 4.2, the inner summation satisfies
\[
\sum_{t_2 \ge 0} \frac{\binom{t_1+t_2}{t_1}}{2^{t_1+t_2}} \Pr\left[w^{(t_2)} = n,\ \max_{0 \le t_2}(w) = n,\ \min_{0 \le t_2}(w) \ge 1\right] \le 2 \max_{t \ge 0}\left\{\Pr\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right]\right\}.
\]
Factoring out this term from the initial expression completes the proof.

The upper bound on the maximum term in the right-hand side of Lemma 4.3 follows immediately from Lemma 4.1. Now we upper bound the summation in the right-hand side of Lemma 4.3 using a simple application of Lemma 4.1.
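The factor of 2 pulled out of the inner summation comes from Fact 4.2. A quick exact check of that identity via partial sums (the helper name `nb_partial_sum` is illustrative, not from the paper):

```python
from fractions import Fraction
from math import comb

def nb_partial_sum(t2, T):
    """Exact partial sum over t1 = 0..T of C(t1 + t2, t1) / 2^(t1 + t2).
    Fact 4.2 states that the infinite sum equals 2."""
    return sum(Fraction(comb(t1 + t2, t1), 2 ** (t1 + t2)) for t1 in range(T + 1))
```

The partial sums increase monotonically toward 2, and the remaining tail decays geometrically, so a few hundred terms already agree with the limit to many digits.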
Lemma 4.4. If $n \ge 2$ and $w \sim \mathcal{W}_{Line}(1)$, we have
\[
\sum_{t \ge 0} \Pr\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \frac{c_1}{n},
\]
where $c_1 > 0$ is an absolute constant.
Proof.
Writing $E_t$ for the event that $w^{(t)} = n$, $\max_{0 \le t}(w) = n$, and $\min_{0 \le t}(w) \ge 1$, we first split the sum into
\[
\sum_{t \ge 0} \Pr\left[E_t\right] = \sum_{0 \le t \le n^2} \Pr\left[E_t\right] + \sum_{t > n^2} \Pr\left[E_t\right].
\]
We will bound both terms by $O(1/n)$. The upper bound for the first term follows immediately from Lemma 4.1 and the fact that we are summing $n^2 + 1$ terms:
\[
\sum_{0 \le t \le n^2} \Pr\left[E_t\right] \le (n^2 + 1) \cdot \frac{c_0}{n^3} \le \frac{2 c_0}{n}.
\]
To upper bound the second summation, we again use Lemma 4.1. When $t > n^2$, we have $\Pr[E_t] \le 64(n/t)^3$. Since $64(n/t)^3$ is a decreasing function of $t$,
\[
64\left(\frac{n}{t}\right)^3 \le \int_{t-1}^{t} 64\left(\frac{n}{s}\right)^3 \mathrm{d}s.
\]
Therefore, we can bound the infinite sum by the integral
\[
\sum_{t > n^2} \Pr\left[E_t\right] \le \int_{n^2}^{\infty} 64\left(\frac{n}{s}\right)^3 \mathrm{d}s = \frac{32}{n},
\]
which concludes the proof since we bounded both halves of the sum by $O(1/n)$.

4.1 Proof of Theorem 1.3

We can now easily combine the lemmas in this section with the bounds that relate vertex potentials to the lower bound for the transience class of
$Square_n$.

Proof.
Applying Lemma 4.3 and then Lemma 4.1 and Lemma 4.4, it follows that
\[
\pi_{(n,n)}((1,1)) \le 2 \max_{t \ge 0}\left\{\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right]\right\} \cdot \sum_{t \ge 0} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le 2 \cdot \frac{c_0}{n^3} \cdot \frac{c_1}{n} = O\!\left(\frac{1}{n^4}\right),
\]
where $c_0$ and $c_1$ are the absolute constants from Lemma 4.1 and Lemma 4.4. Therefore, $\pi_{(n,n)}((1,1))^{-1} = \Omega(n^4)$. By Theorem 2.4 it follows that $tcl(Square_n) = \Omega(n^4)$.

Our proofs for upper and lower bounding the sandpile transience class on the grid heavily relied on decoupling two-dimensional walks into two independent one-dimensional walks, since they are easier to analyze. This claim is immediately apparent when working with vertex potentials for one-dimensional walks on the path, which we used in the proof of Lemma 3.1. However, we assumed two essential lemmas about one-dimensional walks to prove the lower and upper bound of the minimum vertex potential. Consequently, in this section we examine the probability
\[
\Pr_{w \sim \mathcal{W}_{Line}(i)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right], \tag{2}
\]
and we prove the necessary lower and upper bounds in Lemma 3.3 and Lemma 4.1 by extending previously known properties of simple symmetric random walks on $\mathbb{Z}$. The key ideas in these proofs are that: the position of a walk in one dimension follows the binomial distribution; the number of walks reaching a maximum position in a fixed number of steps has an explicit formula; and there are tight bounds for binomial coefficients via Stirling's approximation.

The properties we need do not immediately follow from previously known facts because we require conditions on both the minimum and maximum positions. Section 5.3 gives proofs of the known explicit expressions for the maximum and minimum position of a walk, along with several other useful facts that follow from this proof. In Section 5.4 we apply Stirling's bound to give accurate lower bounds on a range of binomial coefficients. Sections 5.5 and 5.6 prove several necessary preliminary lower bound lemmas. 
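The integral comparison used for the tail in the proof of Lemma 4.4 can be checked numerically. The exponent 3 and the constants 64 and 32 follow the reconstructed bound from Lemma 4.1, so treat them as assumptions of this sketch:

```python
n = 10  # illustrative size

# Tail of the series from Lemma 4.4: each term with t > n^2 is at most
# 64 * (n/t)^3, and the sum is compared against the integral value 32/n.
# Truncating at a large cutoff changes the sum by a negligible amount.
tail = sum(64 * n**3 / t**3 for t in range(n**2 + 1, 200_000))
integral = 32 / n
```

Since the summand is decreasing, the sum over $t > n^2$ lies below the integral from $n^2$ to infinity, and the two agree up to lower-order terms.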
We prove Lemma 3.3 at the end of Section 5.6. In Section 5.7 we give necessary upper bound lemmas and a proof of Lemma 4.1.

To lower bound (2), we split the desired probability into the product of two probabilities using the definition of conditional probability. Then we prove lower bounds for each.

• In Lemma 5.6 we show, for $t \in \Theta(n^2)$, that the probability that a walk on $\mathbb{Z}$ starting at $1 \le i \le \lceil n/2 \rceil$ never drops below 1 is
\[
\Pr_{w \sim \mathcal{W}(i)}\left[\min_{0 \le t}(w) \ge 1\right] = \Omega(i/n).
\]
• In Lemma 5.8 and Lemma 5.7 we bound the probability that a walk starting at $1 \le i \le \lceil n/2 \rceil$ of length $t \in \Theta(n^2)$ reaches $n$ at step $t$ without going above $n$, conditioned on never dropping below 1:
\[
\Pr\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n \,\middle|\, \min_{0 \le t}(w) \ge 1\right] = \Omega\!\left(\frac{1}{n^2}\right).
\]

Lemma 3.3 immediately follows from multiplying these two bounds together. This division allows us to separately prove statements about the minimum and the maximum, and in turn simplifies applying known bounds on binomial distributions. Specifically, Lemma 5.6 is an immediate consequence of explicit expressions for the minimum point of a walk and bounds on binomial coefficients, both of which will be given rigorous treatment in Section 5.3. These proofs will also output a known explicit expression for the probability of the walk reaching $n$ at step $t$ while only staying to its left. All that remains then is to condition the walk to not go to the left of 1. Note that 1 is in the opposite direction of $n$ with respect to the starting position $i$. We formally show that the probability of reaching $n$ without going above $n$ only improves if the walk cannot move too far in the wrong direction, but only for $t \le (n - i + 1)^2$, thus giving the reason we need to upper bound $t$ by $n^2/4$.

The desired lemma only concerns walks starting at $i = 1$, which will be critical for our proof. The key idea will then be to split the walk in half and consider the probability that the necessary conditions are satisfied on each half separately. The midpoint of the walk can be any position in $[1, n]$, so we must sum over all these possible midpoints. Removing the upper and lower bound conditions, respectively, will then give the upper bound in Lemma 5.9:
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(\lfloor t/2 \rfloor)} = i,\ \min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1\right] \cdot \Pr_{w \sim \mathcal{W}(i)}\left[w^{(\lceil t/2 \rceil)} = n,\ \max_{0 \le \lceil t/2 \rceil}(w) = n\right]
\]
Due to the first $t/2$ steps starting at 1 and the last $t/2$ steps needing to end at $n$, the conditions $\min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1$ for the first walk and $\max_{0 \le \lceil t/2 \rceil}(w) = n$ for the second walk will be the difficult property for each walk to satisfy, respectively. Next we apply facts proved in Section 5.3 to obtain expressions for each term within the summation. The remainder of the upper bound analysis will then focus on bounding those expressions.

As previously mentioned, our proofs mostly leverage well-known facts about the maximum/minimum position of a random walk, along with corresponding bounds for these probabilities. This section will first give the result regarding the maximum/minimum position of walks and a connection to Stirling's approximation.

Observe that if we are only concerned with a single end point, we can fix the starting location at 0 by shifting accordingly. In these cases, the following bounds are well known in combinatorics.
Fact 5.1 ([RB79]). For any $t, n \in \mathbb{Z}_{\ge 0}$, we have
\[
\Pr_{w \sim \mathcal{W}(0)}\left[\max_{0 \le t}(w) = n\right] =
\begin{cases}
\Pr\left[w^{(t)} = n\right] = \binom{t}{\frac{t+n}{2}}\, 2^{-t} & \text{if } t + n \equiv 0 \pmod 2, \\
\Pr\left[w^{(t)} = n + 1\right] = \binom{t}{\frac{t+n+1}{2}}\, 2^{-t} & \text{if } t + n \equiv 1 \pmod 2.
\end{cases}
\]
Proof.
For any $k \le n$, consider a walk $w \in \mathcal{W}(0)$ that satisfies $w^{(t)} = k$ and $\max_{0 \le t}(w) \ge n$. Let $t^*$ be the first time that $w^{(t^*)} = n$, and construct the walk $m$ ending at $2n - k$ such that
\[
m^{(\hat{t})} = \begin{cases} w^{(\hat{t})} & \text{if } 0 \le \hat{t} \le t^*, \\ 2n - w^{(\hat{t})} & \text{if } t^* < \hat{t} \le t. \end{cases}
\]
This reflection map is a bijection, so for $k \le n$ we have
\[
\Pr_{w \sim \mathcal{W}(0)}\left[w^{(t)} = k,\ \max_{0 \le t}(w) \ge n\right] = \Pr_{w \sim \mathcal{W}(0)}\left[w^{(t)} = 2n - k\right].
\]
Subtracting the probability of the maximum position being at least $n + 1$ gives
\[
\Pr_{w \sim \mathcal{W}_{Line}(0)}\left[w^{(t)} = k \text{ and } \max_{0 \le t}(w) = n\right] = \Pr_{w \sim \mathcal{W}_{Line}(0)}\left[w^{(t)} = 2n - k\right] - \Pr_{w \sim \mathcal{W}_{Line}(0)}\left[w^{(t)} = 2(n+1) - k\right].
\]
Summing over all $k \le n$, we have
\[
\Pr_{w \sim \mathcal{W}_{Line}(0)}\left[\max_{0 \le t}(w) = n\right] = \Pr_{w \sim \mathcal{W}_{Line}(0)}\left[w^{(t)} = n\right] + \Pr_{w \sim \mathcal{W}_{Line}(0)}\left[w^{(t)} = n + 1\right].
\]
Considering the parity of $t$ and $n$ completes the proof.

The proof above contains two intermediate expressions for probabilities similar to the ones we want to bound.

Fact 5.2.
For any integers $n \ge 1$ and $k \le n$, we have
\[
\Pr_{w \sim \mathcal{W}(0)}\left[w^{(t)} = k,\ \max_{0 \le t}(w) \ge n\right] = \Pr_{w \sim \mathcal{W}(0)}\left[w^{(t)} = 2n - k\right].
\]
Fact 5.3.
Let $t, n \in \mathbb{Z}_{\ge 0}$. For any integer $k \le n$,
\[
\Pr_{w \sim \mathcal{W}(0)}\left[w^{(t)} = k,\ \max_{0 \le t}(w) = n\right] =
\begin{cases}
\binom{t}{\frac{t+2n-k}{2}}\, 2^{-t} \cdot \dfrac{2(2n-k)+2}{t+2n-k+2} & \text{if } t + k \equiv 0 \pmod 2, \\
0 & \text{if } t + k \equiv 1 \pmod 2.
\end{cases}
\]
Proof.
Using Fact 5.1 and analyzing the parity of the walks gives
\[
\binom{t}{\frac{t+2n-k}{2}}\, 2^{-t} - \binom{t}{\frac{t+2n-k+2}{2}}\, 2^{-t} = \binom{t}{\frac{t+2n-k}{2}}\, 2^{-t} - \frac{t - 2n + k}{t + 2n - k + 2}\binom{t}{\frac{t+2n-k}{2}}\, 2^{-t} = \binom{t}{\frac{t+2n-k}{2}}\, 2^{-t} \left(\frac{2(2n-k)+2}{t+2n-k+2}\right),
\]
as desired.

5.4 Lower Bounding Binomial Coefficients

Ultimately, our goal is to give strong lower bounds on probabilities closely related to the ones above. To do so, we need to use various bounds on binomial coefficients that are consequences of Stirling's approximation.
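Before turning to the Stirling bounds, the explicit expressions above can be verified by brute-force enumeration of all walks for small $t$. The helper names below are illustrative, and the closed form in `fact53_expected` is the reconstructed expression from Fact 5.3, so treat its exact constants as assumptions:

```python
from itertools import product
from math import comb

def tally_walks(t):
    """Tally (endpoint, running maximum) over all 2^t walks from 0."""
    tal = {}
    for steps in product((-1, 1), repeat=t):
        pos, mx = 0, 0
        for s in steps:
            pos += s
            if pos > mx:
                mx = pos
        tal[(pos, mx)] = tal.get((pos, mx), 0) + 1
    return tal

def reflection_count(tal, n, k):
    """Left-hand side of Fact 5.2: walks ending at k with max >= n."""
    return sum(c for (end, mx), c in tal.items() if end == k and mx >= n)

def endpoint_count(tal, e):
    """Walks ending at e, regardless of their maximum."""
    return sum(c for (end, _), c in tal.items() if end == e)

def fact53_expected(t, n, k):
    """Reconstructed closed form from Fact 5.3 for walks ending at k with
    running maximum exactly n (0 when t + k is odd)."""
    if (t + k) % 2 == 1:
        return 0
    m = (t + 2 * n - k) // 2
    return comb(t, m) * (2 * (2 * n - k) + 2) // (t + 2 * n - k + 2)
```

The reflection bijection predicts `reflection_count` equals `endpoint_count` at $2n - k$, and the difference of two such counts gives the Fact 5.3 formula.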
Fact 5.4 (Stirling's Approximation). For any positive integer $n$, we have
\[
\sqrt{2\pi} \le \frac{n!}{\sqrt{n}\,\left(\frac{n}{e}\right)^n} \le e.
\]
An immediate consequence of this is a concentration bound on binomial coefficients.
Fact 5.5.
Let $c, n \in \mathbb{R}_{>0}$ such that $c\sqrt{n} < n$. For any $k \in [(n - c\sqrt{n})/2,\ (n + c\sqrt{n})/2]$, we have
\[
\binom{n}{k} \ge e^{-2 - c^2} \cdot \frac{2^n}{\sqrt{n}}.
\]
Proof.
We directly substitute Stirling's approximation into the definition of the binomial coefficient. The bound is weakest at the endpoints of the interval, so it suffices to consider $k = (n - c\sqrt{n})/2$:
\[
\binom{n}{\frac{n - c\sqrt{n}}{2}} = \frac{n!}{\left(\frac{n - c\sqrt{n}}{2}\right)!\left(\frac{n + c\sqrt{n}}{2}\right)!} \ge \frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{e\sqrt{\frac{n - c\sqrt{n}}{2}}\left(\frac{n - c\sqrt{n}}{2e}\right)^{\frac{n - c\sqrt{n}}{2}} \cdot e\sqrt{\frac{n + c\sqrt{n}}{2}}\left(\frac{n + c\sqrt{n}}{2e}\right)^{\frac{n + c\sqrt{n}}{2}}}
\]
\[
\ge \frac{\sqrt{2\pi}}{e^2} \cdot \frac{2^n}{\sqrt{n}} \cdot \left(1 - \frac{c^2}{n}\right)^{-\frac{n}{2}}\left(\frac{1 - c/\sqrt{n}}{1 + c/\sqrt{n}}\right)^{\frac{c\sqrt{n}}{2}} \ge \frac{\sqrt{2\pi}}{e^2} \cdot e^{-c^2} \cdot \frac{2^n}{\sqrt{n}} \ge e^{-2 - c^2} \cdot \frac{2^n}{\sqrt{n}},
\]
as desired.

We now bound the probability of the minimum position of a walk in $\mathcal{W}(i)$ being at least 1 after $t$ steps.

Lemma 5.6.
For any positive integer $n$, initial position $1 \le i \le \lceil n/2 \rceil$, and constant $c > 0$, if we have $t \in [n^2/c,\ n^2/4]$, then
\[
\Pr_{w \sim \mathcal{W}(i)}\left[\min_{0 \le t}(w) \ge 1\right] \ge e^{-2 - c} \cdot \frac{i}{n}.
\]
Proof.
First observe that
\[
\Pr_{w \sim \mathcal{W}(i)}\left[\min_{0 \le t}(w) \ge 1\right] = \sum_{k=1}^{i} \Pr_{w \sim \mathcal{W}(i)}\left[\min_{0 \le t}(w) = k\right].
\]
By symmetry, this sum is
\[
\sum_{k=0}^{i-1} \Pr_{w \sim \mathcal{W}(0)}\left[\max_{0 \le t}(w) = k\right].
\]
For each $0 \le k \le i - 1$, Fact 5.1 implies that
\[
\Pr_{w \sim \mathcal{W}(0)}\left[\max_{0 \le t}(w) = k\right] \in \left\{\binom{t}{\frac{t+k}{2}}\, 2^{-t},\ \binom{t}{\frac{t+k+1}{2}}\, 2^{-t}\right\}.
\]
By assumption $k \le k + 1 \le i \le n \le \sqrt{ct}$, so applying Fact 5.5 gives
\[
\min\left\{\binom{t}{\frac{t+k}{2}}\, 2^{-t},\ \binom{t}{\frac{t+k+1}{2}}\, 2^{-t}\right\} \ge \binom{t}{\frac{t + \sqrt{ct}}{2}}\, 2^{-t} \ge \frac{e^{-2 - c}}{\sqrt{t}} \ge \frac{e^{-2 - c}}{n},
\]
because $t \le n^2/4$. Summing over $0 \le k \le i - 1$ completes the proof.

Similarly, we can use binomial coefficient approximations to bound the probability of a $t$-step walk terminating at $n$ while never moving to a position greater than $n$.

Lemma 5.7.
For any initial position $1 \le i \le \lceil n/2 \rceil$ and any $\max\{n,\ n^2/c\} \le t \le n^2/4$ with $t \equiv n - i \pmod 2$, we have
\[
\Pr_{w \sim \mathcal{W}(i)}\left[\max_{0 \le t}(w) = n,\ w^{(t)} = n\right] \ge e^{-2 - c} \cdot \frac{1}{n^2}.
\]
Proof.
By symmetry we rewrite the probability as
\[
\Pr_{w \sim \mathcal{W}(0)}\left[\max_{0 \le t}(w) = n - i,\ w^{(t)} = n - i\right].
\]
Fact 5.3 gives that this probability equals
\[
\frac{1}{2^t}\binom{t}{\frac{t + n - i}{2}} \cdot \frac{2(n - i + 1)}{t + n - i + 2}.
\]
We can separately bound the last two terms according to the assumptions on $t$ and $i$. Setting $i = 0$ minimizes $\binom{t}{(t + n - i)/2}$ over all $i \ge 0$. Setting $i = \lceil n/2 \rceil$ in the numerator, $i = 0$ in the denominator, and $t = n^2/4$ minimizes the final fraction. It follows that
\[
\frac{1}{2^t}\binom{t}{\frac{t + n - i}{2}} \cdot \frac{2(n - i + 1)}{t + n - i + 2} \ge \frac{1}{2^t}\binom{t}{\frac{t + n}{2}} \cdot \frac{2(\lfloor n/2 \rfloor + 1)}{n^2/4 + n + 2} \ge \frac{1}{2^t}\binom{t}{\frac{t + n}{2}} \cdot \frac{2}{n}.
\]
We reapply Fact 5.5 with the observation that $n \le \sqrt{ct}$ to get
\[
\frac{1}{2^t}\binom{t}{\frac{t + n}{2}} \cdot \frac{2}{n} \ge \frac{1}{2^t}\binom{t}{\frac{t + \sqrt{ct}}{2}} \cdot \frac{2}{n} \ge \frac{e^{-2 - c}}{\sqrt{t}} \cdot \frac{2}{n} \ge e^{-2 - c} \cdot \frac{1}{n^2},
\]
as desired, because $t \le n^2/4$.

It remains to condition upon the minimum of a walk. This hinges upon the following statement, which says that moving in the wrong direction only decreases the probability that a walk starting at some $1 \le i \le \lceil n/2 \rceil$ ends at $n$ without ever going past $n$.

Lemma 5.8.
For any ≤ i ≤ (cid:100) n/ (cid:101) , at any step t ≤ n / with t ≡ n − i (mod 2) , we have Pr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n (cid:21) ≥ Pr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n (cid:12)(cid:12)(cid:12)(cid:12) min ≤ t ( w ) < (cid:21) . Proof.
Condition on min ≤ t ( w ) < (cid:98) t the walk hits 0. This means i ≡ (cid:98) t (mod 2) and in turn n ≡ t − (cid:98) t (mod 2). The probability of max ≤ t ( w ) = w ( t ) = n via the walk insteps (cid:98) t + 1 , . . . , t is then at mostPr w ∼W (0) (cid:20) w ( t − (cid:98) t ) = max ≤ t − (cid:98) t ( w ) = n (cid:21) . Note that we have inequality since it is possible that we already have max ≤ (cid:98) t ( w ) > n . Therefore, itsuffices to show for any n and any 1 ≤ (cid:98) t ≤ t we havePr w ∼W (0) (cid:20) w ( t − (cid:98) t ) = max ≤ t − (cid:98) t ( w ) = n (cid:21) ≤ Pr w ∼W ( i ) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) . There are two variables that are shifted from one side of the inequality to the other, the startingposition of the walk and the number of steps. In order to prove the inequality, we will show thatboth taking more steps and starting further to the right will only improve the probability of endingat n and not going above n .We begin by showing that taking more steps will only improve this probability:Pr w ∼W (0) (cid:20) w ( t − (cid:98) t ) = max ≤ t − (cid:98) t ( w ) = n (cid:21) ≤ max (cid:40) Pr w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n (cid:21) , Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) (cid:41) . There is no guarantee that t ≡ n (mod 2), so we consider t or t − t − ≥ t − (cid:98) t since (cid:98) t ≥
1, so without loss of generality, we assume t ≡ (cid:98) t (mod 2)and show Pr w ∼W (0) (cid:20) w ( t − (cid:98) t ) = max ≤ t − (cid:98) t ( w ) = n (cid:21) ≤ Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) . Note that the proof is equivalent when t − ≡ (cid:98) t (mod 2).Using Fact 5.3, we have the explicit probabilityPr w ∼W (0) (cid:20) max ≤ t ( w ) = w ( t ) = n (cid:21) = 12 t (cid:18) t t + n (cid:19) n + 2 t + n + 2 . Substituting t by t − w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n (cid:21) ≤ Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) , t − (cid:18) t − t + n − (cid:19) n + 2 t + n ≤ t (cid:18) t t + n (cid:19) n + 2 t + n + 2( t − (cid:0) t + n − (cid:1) ! (cid:0) t − n − (cid:1) ! 1 t + n ≤ t ! (cid:0) t + n (cid:1) ! (cid:0) t − n (cid:1) ! 1 t + n + 21 t + n ≤ t ( t − t + n )( t − n )( t + n + 2)3 t ≤ n + 2 n. Inductively applying this argument inductively for t − (cid:26) Pr w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n (cid:21) , Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) (cid:27) ≤ Pr w ∼W ( i ) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) , which we prove similarly. First rewrite the right hand side using the fact thatPr w ∼W ( i ) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) = Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n − i (cid:21) , and initially assume that t ≡ n (mod 2), which implies n ≡ n − i (mod 2). Again, using the explicitformula from Fact 5.3 and substituting n by n − w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n (cid:21) ≤ Pr w ∼W (0) (cid:20) w ( t ) = max ≤ t ( w ) = n − (cid:21) , when t + 2 ≤ n , which true by assumption and can be inductively applied until n = ( n − i + 2)because ( n − i + 2) ≥ (cid:100) n/ (cid:101) + 1. Unfortunately, we cannot entirely apply the same proof when t − ≡ n (mod 2) because this implies n (cid:54)≡ n − i (mod 2). 
Applying the same proof as for t ≡ n (mod 2) we can obtainPr w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n (cid:21) ≤ Pr w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n − i + 1 (cid:21) , because ( t −
1) + 2 ≤ ( n − i + 3) .Therefore, we can conclude the proof by showingPr w ∼W (0) (cid:20) w ( t − = max ≤ t − ( w ) = n − i + 1 (cid:21) ≤ Pr w ∼W Line (0) (cid:20) w ( t ) = max ≤ t ( w ) = n − i (cid:21) . This is then true when n − i ≤ tt − ( n − i ) · ( n − i + 1) , which holds for n − i ≥ n without going above n . Now we prove the mainresult of this section. 20 emma 3.3. Let n ∈ Z ≥ and ≤ i ≤ (cid:100) n/ (cid:101) be any starting position. For any constant c > andany t ∈ Z such that n /c ≤ t ≤ n / with t ≡ n − i (mod 2) , a simple symmetric random walk won Z satisfies Pr w ∼W Line ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n, and min ≤ t ( w ) ≥ (cid:21) ≥ e − − c in . Proof.
Consider any starting position 1 ≤ i ≤ (cid:100) n/ (cid:101) and any time n /c ≤ t ≤ n / t ≡ n − i (mod 2). By the definition of conditional probability we havePr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n, min ≤ t ( w ) ≥ (cid:21) = Pr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n (cid:12)(cid:12)(cid:12)(cid:12) min ≤ t ( w ) ≥ (cid:21) · Pr w ∼W ( i ) (cid:20) min ≤ t ( w ) ≥ (cid:21) . Lemma 5.6 shows that the second term is at least exp( − − c )) i/n . Taking the probabilityunder min ≤ t ( w ) ≥ ≤ t ( w ) <
1) in Lemma 5.8 allows us toupper bound the first term using Lemma 5.7 byPr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n (cid:12)(cid:12)(cid:12)(cid:12) min ≤ t ( w ) ≥ (cid:21) ≥ Pr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n (cid:21) ≥ e − − c n . Putting these together then givesPr w ∼W ( i ) (cid:20) w ( t ) = n, max ≤ t ( w ) = n, min ≤ t ( w ) ≥ (cid:21) ≥ (cid:18) e − − c in (cid:19) · (cid:18) e − − c n (cid:19) = e − − c · in , which completes the proof. We begin by splitting every t step walk in half, and instead consider the probability of each walksatisfying the given conditions. In order to give upper bounds of these probabilities, we will relaxthe requirements, allowing us to more easily relate the probabilities to previously known facts aboutone-dimensional walks that we proved in Section 5.3. Furthermore, by splitting the walk in half wenow have to consider all possible midpoints in [1 , n ]. Lemma 5.9.
For all integers $2 \le n \le t$, we have
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(\lfloor t/2 \rfloor)} = i,\ \min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1\right] \cdot \Pr_{w \sim \mathcal{W}(i)}\left[w^{(\lceil t/2 \rceil)} = n,\ \max_{0 \le \lceil t/2 \rceil}(w) = n\right].
\]
Proof.
By subdividing the walk roughly in half, we consider all possible positions of the walk after half of its steps such that the walk satisfies the maximum and minimum conditions. The second half of the walk must end at $n$, which implies the maximum position of that half is at least $n$; thus, the first half of the walk only needs to not go above $n$. Accordingly, we can write
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] = \sum_{i=1}^{n} \Pr_{w \sim \mathcal{W}(1)}\left[w^{(\lfloor t/2 \rfloor)} = i,\ \max_{0 \le \lfloor t/2 \rfloor}(w) \le n,\ \min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1\right] \cdot \Pr_{w \sim \mathcal{W}(i)}\left[w^{(\lceil t/2 \rceil)} = n,\ \max_{0 \le \lceil t/2 \rceil}(w) = n,\ \min_{0 \le \lceil t/2 \rceil}(w) \ge 1\right].
\]
Removing conditions that the walks must satisfy cannot decrease the probability, so our upper bound follows.

From Fact 5.3 we can obtain explicit expressions for each inner term of the summation, which we then simplify into a strong bound on the summation in the following lemma.
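A small exact computation supports the splitting inequality of Lemma 5.9: computing both sides with dynamic programming over confined walks, in exact rational arithmetic, confirms the left-hand side never exceeds the right-hand side. The helper names `dist`, `lhs`, and `rhs` are illustrative, not from the paper:

```python
from fractions import Fraction

def dist(start, steps, lo=None, hi=None):
    """Exact end-position distribution of a simple random walk from
    `start`, with walks killed if they ever leave [lo, hi]."""
    dp = {start: Fraction(1)}
    for _ in range(steps):
        nxt = {}
        for x, p in dp.items():
            for y in (x - 1, x + 1):
                if (lo is None or y >= lo) and (hi is None or y <= hi):
                    nxt[y] = nxt.get(y, Fraction(0)) + p / 2
        dp = nxt
    return dp

def lhs(t, n):
    """Pr[w(t) = n, max = n, min >= 1] for a walk started at 1: confine
    the walk to [1, n] and require endpoint n (the max is then n)."""
    return dist(1, t, lo=1, hi=n).get(n, Fraction(0))

def rhs(t, n):
    """Lemma 5.9's bound: split at the midpoint and drop one boundary
    constraint on each half, summing over all midpoints i."""
    first = dist(1, t // 2, lo=1)
    total = Fraction(0)
    for i in range(1, n + 1):
        second = dist(i, t - t // 2, hi=n).get(n, Fraction(0))
        total += first.get(i, Fraction(0)) * second
    return total
```

Because each half on the right-hand side satisfies strictly fewer constraints than the corresponding half on the left, the inequality holds termwise.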
Lemma 5.10.
For all integers $2 \le n \le t$, we have
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \frac{16\, i(n - i + 1)}{t^2} \binom{\lfloor t/2 \rfloor}{\frac{\lfloor t/2 \rfloor + i - 1}{2}} \frac{1}{2^{\lfloor t/2 \rfloor}} \cdot \binom{\lceil t/2 \rceil}{\frac{\lceil t/2 \rceil + (n - i + 1) - 1}{2}} \frac{1}{2^{\lceil t/2 \rceil}}.
\]
Proof.
Apply the upper bound from Lemma 5.9 and examine each inner term in the summation. By the symmetry of walks, there must be an equivalent number of $\lfloor t/2 \rfloor$-step walks with endpoints 1 and $i$ that never walk below 1 versus those that never walk above $i$. Thus
\[
\Pr_{w \sim \mathcal{W}(1)}\left[\min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1,\ w^{(\lfloor t/2 \rfloor)} = i\right] = \Pr_{w \sim \mathcal{W}(1)}\left[\max_{0 \le \lfloor t/2 \rfloor}(w) \le i,\ w^{(\lfloor t/2 \rfloor)} = i\right].
\]
Shifting the start of the walk to 0 allows us to apply Fact 5.3, because $\max_{0 \le \lfloor t/2 \rfloor}(w) \le i$ is equivalent to $\max_{0 \le \lfloor t/2 \rfloor}(w) = i$ if the walk must end at $i$. Therefore,
\[
\Pr_{w \sim \mathcal{W}(1)}\left[\min_{0 \le \lfloor t/2 \rfloor}(w) \ge 1,\ w^{(\lfloor t/2 \rfloor)} = i\right] = \frac{2i}{\lfloor t/2 \rfloor + i + 1} \binom{\lfloor t/2 \rfloor}{\frac{\lfloor t/2 \rfloor + i - 1}{2}} \frac{1}{2^{\lfloor t/2 \rfloor}}
\]
when the parity is correct and 0 otherwise, which works as an upper bound. Similarly, by shifting the start to 0 and applying Fact 5.3 we have
\[
\Pr_{w \sim \mathcal{W}(i)}\left[w^{(\lceil t/2 \rceil)} = n,\ \max_{0 \le \lceil t/2 \rceil}(w) = n\right] = \frac{2(n - i + 1)}{\lceil t/2 \rceil + (n - i + 1) + 1} \binom{\lceil t/2 \rceil}{\frac{\lceil t/2 \rceil + (n - i + 1) - 1}{2}} \frac{1}{2^{\lceil t/2 \rceil}}.
\]
Applying Lemma 5.9, we now have expressions for the terms inside the summation, and
\[
\frac{2i}{\lfloor t/2 \rfloor + i + 1} \cdot \frac{2(n - i + 1)}{\lceil t/2 \rceil + (n - i + 1) + 1} \le \frac{16\, i(n - i + 1)}{t^2},
\]
since $(\lfloor t/2 \rfloor + i + 1)(\lceil t/2 \rceil + (n - i + 1) + 1) \ge t^2/4$. This completes the proof.

The following lemma gives an upper bound for the inner expression from Lemma 5.10 by bounding the binomial coefficients with the central binomial coefficient and using Stirling's approximation.
Lemma 5.11.
For any integers $1 \le i \le n \le t$, we have
\[
\frac{16\, i(n - i + 1)}{t^2} \binom{\lfloor t/2 \rfloor}{\frac{\lfloor t/2 \rfloor + i - 1}{2}} \frac{1}{2^{\lfloor t/2 \rfloor}} \cdot \binom{\lceil t/2 \rceil}{\frac{\lceil t/2 \rceil + (n - i + 1) - 1}{2}} \frac{1}{2^{\lceil t/2 \rceil}} \le \frac{64\, n^2}{t^3}.
\]
Proof.
Given that $1 \le i \le n$, we can crudely upper bound $i(n - i + 1)$ by $n^2$. Additionally, we will use the Stirling-type bound $\max_k \binom{m}{k} \le 2^m \sqrt{2/(\pi m)}$ on the central binomial coefficient to upper bound our binomial coefficients:
\[
\binom{\lceil t/2 \rceil}{\frac{\lceil t/2 \rceil + (n - i + 1) - 1}{2}} \le 2^{\lceil t/2 \rceil} \sqrt{\frac{2}{\pi \lceil t/2 \rceil}}, \quad \text{and} \quad \binom{\lfloor t/2 \rfloor}{\frac{\lfloor t/2 \rfloor + i - 1}{2}} \le 2^{\lfloor t/2 \rfloor} \sqrt{\frac{2}{\pi \lfloor t/2 \rfloor}}.
\]
The powers of two cancel, and since $\sqrt{\lfloor t/2 \rfloor \lceil t/2 \rceil} \ge (t - 1)/2$, the inner expression is at most
\[
\frac{16\, n^2}{t^2} \cdot \frac{2}{\pi} \cdot \frac{2}{t - 1} \le \frac{64\, n^2}{t^3}
\]
for all $t \ge 2$, giving our desired bound.

The upper bound in Lemma 5.11 is not sufficient for $t$ that are asymptotically less than $n^2$, so for these $t$ we need a more detailed analysis. Therefore, we more carefully examine the binomial coefficients, which are significantly smaller than the central coefficient when $t$ is small; the exponential term is then not sufficiently canceled by the binomial coefficients for values of $t$ that are asymptotically smaller than $n^2$. More specifically, we show that the function of $t$ on the right-hand side of Lemma 5.10 is increasing in $t$ up until approximately $n^2$. In the following lemma we consider even length walks for simplicity. The proof for odd length walks follows analogously.

Lemma 5.12.
For n ≥ and any integer ≤ i ≤ n , for all t ≤ n / we have i ( n − i + 1)(2 t ) t (cid:18) t t + i − (cid:19)(cid:18) t t +( n − i +1) − (cid:19) ≤ i ( n − i + 1)(2 t + 4) t +4 (cid:18) t + 2 t +2+ i − (cid:19)(cid:18) t + 2 t +2+( n − i +1) − (cid:19) , where we consider walks of length t and t + 4 to ensure that (2 t ) / and (2 t + 4) / have the sameparity.Proof. Canceling like terms implies that the desired inequality is equivalent to1 t (cid:18) t t + i − (cid:19)(cid:18) t t +( n − i +1) − (cid:19) ≤ t + 2) · · (cid:18) t + 2 t +2+ i − (cid:19)(cid:18) t + 2 t +2+( n − i +1) − (cid:19) . (cid:18) t t + i − (cid:19) ( t + 2)( t + 1) (cid:0) t +1+ i (cid:1) (cid:0) t +3 − i (cid:1) = (cid:18) t + 2 t +2+ i − (cid:19) , and (cid:18) t t +( n − i +1) − (cid:19) ( t + 2)( t + 1) (cid:16) t +2+( n − i )2 (cid:17) (cid:16) t +2 − ( n − i )2 (cid:17) = (cid:18) t + 2 t +2+( n − i +1) − (cid:19) . Using these identities, our desired inequality equals1 t ≤ − ( t + 2) ( t + 2)( t + 1) (cid:0) t +1+ i (cid:1) (cid:0) t +3 − i (cid:1) ( t + 2)( t + 1) (cid:16) t +2+( n − i )2 (cid:17) (cid:16) t +2 − ( n − i )2 (cid:17) . Further cancellation of like terms and moving the denominator on each side into the numerator onthe other side implies that our desired inequality is equivalent to( t + 1 + i )( t + 3 − i )( t + 2 + ( n − i ))( t + 2 − ( n − i )) ≤ t ( t + 1) . It is straightforward to see that ( t + 1 + i )( t + 3 − i )is maximized by i = 1 and ( t + 2 + ( n − i ))( t + 2 − ( n − i ))is maximized by n − i = 0. Furthermore, it must be true that either i ≥ n/ n − i ≥ n/
2, so wecan upper bound the left hand side of our inequality by substituting n/ i or n − i , and settingthe other terms to the value that maximizes the product. Hence,( t + 1 + i )( t + 3 − i )( t + 2 + ( n − i ))( t + 2 − ( n − i )) ≤ ( t + 2) (cid:16) t + 3 + n (cid:17) (cid:16) t + 3 − n (cid:17) . To prove our desired inequality it now suffices to show ( t +2) ( t + 3 + n/
2) ( t + 3 − n/ ≤ t ( t +1) ,which is equivalent to (cid:16) t + 3 + n (cid:17) (cid:16) t + 3 − n (cid:17) ≤ t (cid:18) − t + 2 (cid:19) . Expanding both sides of the inequality and rearranging terms yields6 t + 9 + 2 t t + 2 − (cid:18) tt + 2 (cid:19) ≤ n . Given that 2 t / ( t + 2) ≤ t , it suffices to show that 8 t + 9 ≤ n / , which is true when t ≤ n / n ≥ Lemma 4.1.
For all $n \ge 2$ and $t \ge n - 1$, we have
\[
\Pr_{w \sim \mathcal{W}_{Line}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \text{and } \min_{0 \le t}(w) \ge 1\right] \le \min\left\{\frac{c_0}{n^3},\ 64\left(\frac{n}{t}\right)^3\right\},
\]
where $c_0 > 0$ is an absolute constant.
Proof.
Applying Lemmas 5.10 and 5.11 gives
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \frac{64\, n^2}{t^3} = 64\left(\frac{n}{t}\right)^3,
\]
which immediately gives the second upper bound. Similarly, Lemmas 5.10 and 5.12 imply that for $t \le n^2/40$,
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \frac{16\, i(n - i + 1)}{T^2} \binom{\lfloor T/2 \rfloor}{\frac{\lfloor T/2 \rfloor + i - 1}{2}} \frac{1}{2^{\lfloor T/2 \rfloor}} \cdot \binom{\lceil T/2 \rceil}{\frac{\lceil T/2 \rceil + (n - i + 1) - 1}{2}} \frac{1}{2^{\lceil T/2 \rceil}},
\]
where $T = n^2/40$. We then use Lemma 5.11 with $t = T$ and sum from 1 to $n$ to obtain
\[
\Pr_{w \sim \mathcal{W}(1)}\left[w^{(t)} = n,\ \max_{0 \le t}(w) = n,\ \min_{0 \le t}(w) \ge 1\right] \le \sum_{i=1}^{n} \frac{64\, n^2}{T^3} = \frac{64 \cdot 40^3}{n^3}
\]
for all $t \le n^2/40$. Using the fact that $64(n/t)^3$ is a decreasing function of $t$, we have
\[
64\left(\frac{n}{t}\right)^3 \le \frac{64 \cdot 40^3}{n^3}
\]
for all $t \ge n^2/40$, which gives the desired bound with $c_0 = 64 \cdot 40^3$.
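The central-binomial-coefficient bound used in these Stirling arguments, namely $\max_k \binom{m}{k} \le 2^m \sqrt{2/(\pi m)}$, is a standard consequence of Stirling's approximation and can be checked directly for small $m$ (the helper name is illustrative):

```python
from math import comb, pi, sqrt

def central_bound_holds(m):
    """Check max_k C(m, k) <= 2^m * sqrt(2 / (pi * m)), the bound on the
    central binomial coefficient used in the proofs above."""
    return max(comb(m, k) for k in range(m + 1)) <= 2**m * sqrt(2 / (pi * m))
```

The ratio of the two sides tends to 1 as $m$ grows, which is why the resulting constants in Lemma 5.11 are close to tight.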
Now we show how to extend our analysis to upper and lower bound the transience class of d -dimensional grids. Theorem 1.4.
For any integer d ≥ , the transience class of the Abelian sandpile model on the n d -sized d -dimensional grid is O ( n d − log d +2 n ) and Ω( n d − ) . We denote by d - Cube n the d -dimensional hypercube grid with n d vertices, and construct itanalogously to Square n . Its vertex set is { , , . . . , n } d ∪ { v sink } and its edges connect any pair ofvertices that differ in one coordinate. Vertices on the boundary have additional edges connecting to v sink so that every non-sink vertex has degree 2 d . We use the vector notation u = ( u , u , . . . , u d )to identify non-sink vertices. We can decouple a walk w on d - Cube n into one-dimensional walks w , w , . . . , w d , so that each step of a random walk on d - Cube n can be understood as choosing arandom direction with probability 1 /d and then a step in the corresponding one-dimensional walkwith probability 1 / d -dimensional hypercubes follows comparably and only requires simple extensionsof our lemmas for the two-dimensional grids. Therefore, we will reference the necessary lemmasfrom previous sections and show the minor modifications needed to give analogous lemmas for the d -dimensional grid. The upper bound proof requires several key lemmas and is more involved,whereas extending the lower bound only requires one simple addition to our proof in Section 4.25 .1 Upper Bounding the Transience Class Since Theorem 2.4 from [CV12] relies on non-sink vertices having constant degree, our assumptionsthat d is constant and that all non-sink vertices have degree 2 d . In addition to utilizing propertiesof one-dimensional walks, specifically Lemma 3.3 proven in Section 5, the proof of our upper boundrelies on four key lemmas: • Lemma 2.9 — The source vertex can be swapped with a any non-sink vertex while only losinga O (log n ) approximation factor in the potential. • Lemma 3.1 — An upper bound on the sum of all vertex potentials by factoring the expressioninto one-dimensional vertex potentials. 
• Lemma 3.2 — For any vertex, the opposite corner vertex minimizes the potential up to a constant factor.

• Lemma 3.5 — A lower bound on the vertex potential π_{(n,n)}(u) for any u in the top-left quadrant of Square_n.

Now we describe how to extend each of these lemmas to constant dimensions. These results almost immediately follow from decoupling walks into one-dimensional walks.

Lemma 6.1.
For any pair of non-sink vertices u and v in d-Cube_n, we have π_u(v) ≤ (8 log n + 4) π_v(u).

Proof. This is a consequence of Rayleigh's monotonicity theorem. Fix an underlying n × n subgraph of the hypercube with corners at the source and sink, and set the rest of the resistors to infinity. The upper bound for the n × n grid is an upper bound for the hypercube.

Our lemma analogous to Lemma 3.1 follows from Lemma 6.1 and decoupling walks into one dimension.

Lemma 6.2.
For any non-sink vertex u in d-Cube_n,

  Σ_{v ∈ V} π_u(v) = O( log n Π_{i=1}^{d} (u_i log n) ).

Proof. Follow the proof structure of Lemma 3.1.

We can also generalize our proof of Lemma 3.2 to higher dimensions, because we work with each dimension independently.
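This decoupling can be illustrated with a short simulation. The sketch below is our own (the helper names sample_steps and decouple are hypothetical, not from the paper): a uniform step on the d-dimensional grid chooses a coordinate with probability 1/d and a direction with probability 1/2, so each coordinate in isolation is a simple one-dimensional walk that moves only when its coordinate is chosen.

```python
import random

def sample_steps(d, num_steps, rng):
    """Sample the steps of a walk on the d-dimensional integer grid:
    pick a coordinate with probability 1/d, then move +1 or -1 in it
    with probability 1/2 each (boundaries are ignored in this sketch)."""
    return [(rng.randrange(d), rng.choice((1, -1))) for _ in range(num_steps)]

def decouple(steps, d):
    """Split a d-dimensional walk into its d one-dimensional walks w_1, ..., w_d."""
    walks = [[] for _ in range(d)]
    for coord, move in steps:
        walks[coord].append(move)
    return walks

rng = random.Random(0)
steps = sample_steps(3, 1000, rng)
walks = decouple(steps, 3)
# The one-dimensional walk lengths t_1 + ... + t_d account for every step,
# and each coordinate of the endpoint is the sum of its own 1-D walk.
assert sum(len(w) for w in walks) == 1000
endpoint = [sum(w) for w in walks]
assert endpoint == [sum(m for c, m in steps if c == i) for i in range(3)]
```

Conditioned on the sequence of chosen coordinates, each w_i is an independent simple symmetric walk, which is the property the lemmas below exploit.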
Lemma 6.3. If u is a non-sink vertex of d-Cube_n such that 1 ≤ u_i ≤ ⌈n/2⌉ for all 1 ≤ i ≤ d, then for any non-sink vertex v,

  π_u(v) ≥ (1/(2d))^d π_u((n, n, ..., n)).

Proof. Extend the proof of Lemma 3.2 by reflecting walks across the corresponding (d − 1)-dimensional hyperplanes, one dimension at a time.

To extend Lemma 3.5, we show that for a constant fraction of walk lengths, all d dimensions have taken Θ(n^2) steps, which allows us to apply Lemma 3.3 to each possible walk. To do this, we essentially union bound Lemma 3.4 over the d dimensions, which shows that Θ(n^2) walk lengths take Θ(n^2) steps in each direction with probability at least 2^{−d}.

Lemma 6.4.
For sufficiently large n and u ∈ V(d-Cube_n) such that 1 ≤ u_i ≤ ⌈n/2⌉ for 1 ≤ i ≤ d, we have

  π_{(n,n,...,n)}(u) = Ω( Π_{i=1}^{d} u_i / n^{3d−2} ).

Proof.
Decouple walks w ∈ W(u → (n, n, ..., n)) into one-dimensional walks w_i ∈ W_Line(u_i), and view π_{(n,n,...,n)}(u) as the probability that each walk w_i arrives at n at the same time before any of them leaves the interval [1, n]. If the walks take t_1, t_2, ..., t_d steps, respectively, then the total number of possible interleavings of these walks is the multinomial coefficient

  \binom{t_1 + t_2 + ··· + t_d}{t_1, t_2, ..., t_d}.

Just as before, we can obtain the lower bound

  π_{(n,n,...,n)}(u) ≥ Σ_{t_1, t_2, ..., t_d ≥ 0} \binom{t_1 + t_2 + ··· + t_d}{t_1, t_2, ..., t_d} d^{−(t_1 + t_2 + ··· + t_d)} Π_{i=1}^{d} (1/2) Pr[ w_i^{(t_i − 1)} = n − 1, max_{≤ t_i − 1}(w_i) = n − 1, min_{≤ t_i − 1}(w_i) ≥ 1 ].

To apply Lemma 3.3 to each walk, we need each t_i to be in the interval [n^2/c, n^2] for c = 16d. Then we consider all walks of length n^2/8 ≤ t ≤ n^2, where t = t_1 + t_2 + ··· + t_d, and show that a constant fraction of these walks satisfy t_i ≥ n^2/c with each t_i having the correct parity. Note that we can ignore the parity conditions by simply lower bounding the probability of all having correct parity by 4^{−d}. It then remains to show that all walks satisfy the inequality t_i ≥ n^2/c with constant probability.

Consider the probability that t_1 ≥ n^2/c; the other dimensions follow identically. Letting each dimension take at least n^2/c steps introduces dependence, so we instead consider the probability that t_1 ≥ n^2/c conditioned on t_2, ..., t_d ≥ n^2/c (which can only decrease the probability of the event t_1 ≥ n^2/c). This is equivalent to fixing n^2/c steps in each of those directions and randomly choosing all remaining steps with probability 1/d for each direction. The remaining number of steps is then at least dn^2/c by our assumption that t ≥ n^2/8. Therefore, the expected number of steps in the first dimension is at least n^2/c, which implies t_1 ≥ n^2/c with probability at least 1/2. It follows that t_i ≥ n^2/c for all i with probability at least 2^{−d}.

Thus, there are O(n^2) values of t that we can decompose into one-dimensional walks, each occurring with constant probability. Applying Lemma 3.3 to each decomposition and summing

  Ω( Π_{i=1}^{d} u_i / n^{3d} )

over the O(n^2) possible walk lengths proves the claim.

Now we prove tcl(d-Cube_n) = O(n^{3d−2} log^{d+2} n) using Theorem 2.4. For any u = (u_1, u_2, ..., u_d) in the top-left orthant of d-Cube_n, it follows that

  max_{u,v ∈ V∖{v_sink}} ( Σ_{x ∈ V} π_u(x) ) π_u(v)^{−1}
    ≤ max_{u ∈ V∖{v_sink}} ( Σ_{x ∈ V} π_u(x) ) (2d)^d π_u((n)^d)^{−1}
    = max_{u ∈ V∖{v_sink}} ( Σ_{x ∈ V} π_u(x) ) O(log n) π_{(n)^d}(u)^{−1}
    = max_{u ∈ V∖{v_sink}} O( log n Π_{i=1}^{d} (u_i log n) ) · O( n^{3d−2} log n / Π_{i=1}^{d} u_i )
    = O( n^{3d−2} log^{d+2} n ).

Extending our lower bound to d-dimensional hypergrids is a simple consequence of decoupling d-dimensional walks into one-dimensional walks, because we only need to generalize the upper bound in Lemma 4.3 to

  π_{(n)^d}((1)^d) ≤ ( max_t Pr_{w ∼ W_Line(1)}[ w^{(t)} = n, max_{≤ t}(w) = n, min_{≤ t}(w) ≥ 1 ] )^{d−1} · Σ_{t ≥ 0} Pr_{w ∼ W_Line(1)}[ w^{(t)} = n, max_{≤ t}(w) = n, min_{≤ t}(w) ≥ 1 ],

by replacing the negative binomial distribution with the negative multinomial distribution.

Fact 6.5.
For any nonnegative integer t_1, we have

  Σ_{t_2, ..., t_d ≥ 0} \binom{t_1 + t_2 + ··· + t_d}{t_1, t_2, ..., t_d} d^{−(t_1 + t_2 + ··· + t_d)} = d.

Proof. Consider the proof of Fact 4.2 using the negative multinomial distribution in place of the negative binomial distribution.

Thus, we can apply Lemma 4.1 and Lemma 4.4 to show

  π_{(n)^d}((1)^d) = O( (1/n^3)^{d−1} · (1/n) ) = O( n^{−3d+2} ).

By Theorem 2.4, we have tcl(d-Cube_n) = Ω(n^{3d−2}).

Acknowledgments
We thank Marcel Celaya, Wenhan Huang, Wenhao Li, Pinyan Lu, Richard Peng, Dana Randall, and Chi Ho Yuen for various helpful discussions. We also thank the anonymous reviewers for their insightful comments, especially those that pointed out a major issue with Lemma 3.5 in a previous version of this manuscript.
A Appendix for Section 2
In this section, we prove a relationship between voltage potentials and the probability of a randomwalk escaping at the source instead of the sink.
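This relationship can be sanity-checked numerically. The sketch below is our own illustration (the helper escape_probabilities is hypothetical): it computes π_u(v), the probability that a simple random walk from v reaches u before v_sink on Square_n, by iterating the mean-value (harmonic) property, with off-grid neighbours standing in for v_sink.

```python
def escape_probabilities(n, u, sweeps=5000):
    """Compute pi_u(v) = Pr[simple random walk from v hits u before v_sink]
    on Square_n by Gauss-Seidel iteration of the harmonic condition
    pi(v) = (1/4) * sum of pi over the four neighbours of v.
    Off-grid neighbours represent v_sink and contribute 0."""
    pi = {(i, j): 0.0 for i in range(1, n + 1) for j in range(1, n + 1)}
    pi[u] = 1.0  # boundary condition at the source
    for _ in range(sweeps):
        for (i, j) in pi:
            if (i, j) == u:
                continue
            nbrs = ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            pi[(i, j)] = sum(pi.get(x, 0.0) for x in nbrs) / 4.0
    return pi

pi = escape_probabilities(5, (1, 1))
assert pi[(1, 1)] == 1.0
assert all(0.0 <= p <= 1.0 for p in pi.values())
# Harmonicity at an interior non-source vertex holds at convergence.
nbr_avg = sum(pi[x] for x in ((2, 3), (4, 3), (3, 2), (3, 4))) / 4.0
assert abs(pi[(3, 3)] - nbr_avg) < 1e-9
```

The fixed point of this iteration is exactly the harmonic function with boundary values 1 at u and 0 at v_sink used throughout this section.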
Lemma 2.6.
Let u be a non-sink vertex of Square_n. For any vertex v, we have

  π_u(v) = Σ_{w ∈ W(v → u)} 4^{−|w|}.

Proof. By definition, we have

  π_u(v) = Σ_{w ∈ W(v → u)} 4^{−|w|} / Σ_{w ∈ W(v → {u, v_sink})} 4^{−|w|}.

For any v ∈ V(Square_n), let

  f(v) = Σ_{w ∈ W(v → {u, v_sink})} 4^{−|w|}

be the normalizing constant for π_u(v). It follows that f(u) = 1 and f(v_sink) = 1, because the only such walk for each has length 0. For all other v ∈ V(Square_n) ∖ {u, v_sink}, we have

  f(v) = (1/4) Σ_{x ∼ v} f(x).

Therefore, f(v) is a harmonic function with constant boundary values, so f(v) = 1 for all vertices v ∈ V(Square_n).

We also verify that the effective resistance between v_sink and any internal vertex is bounded between Ω(1) and O(log n) using a triangle inequality for effective resistances and the fact that the effective resistance between opposite corners in an n × n resistor network is Θ(log n). This proof easily generalizes to any pair of vertices in Square_n.

Proposition A.1 ([LPW09]). Let G be an n × n network of unit resistors. If u and v are vertices at opposite corners, then

  (1/2) log(n − 1) ≤ R_eff(u, v) ≤ 2 log n.

Lemma 2.8.
For any non-sink vertex u in Square_n,

  1/4 ≤ R_eff(v_sink, u) ≤ 2 log n + 1.

Proof. We first prove the lower bound 1/4 ≤ R_eff(v_sink, u). The effective resistance between v_sink and u is the reciprocal of the total current flowing into the circuit when π_u(u) = 1 and π_u(v_sink) = 0. Since π_u is a harmonic function, we have π_u(v) ≥ 0 for all v ∈ V(Square_n). Moreover, deg(u) = 4, so

  R_eff(v_sink, u) = ( Σ_{v ∼ u} (π_u(u) − π_u(v)) )^{−1} ≥ 1/4.

Next we prove the upper bound R_eff(v_sink, u) ≤ 2 log n + 1 for n sufficiently large. Rayleigh's monotonicity law [DS84] states that if the resistances of a circuit are increased, the effective resistance between any two points can only increase. The following triangle inequality for effective resistances is given in [Tet91]:

  R_eff(u, v) ≤ R_eff(u, x) + R_eff(x, v).

Define H to be the subgraph of Square_n obtained by deleting v_sink and all edges incident to v_sink. Let m be the largest positive integer such that u_1 + i ≤ n and u_2 + j ≤ n for all 0 ≤ i, j < m, and let H(u) be the subgraph of H induced by the vertex set {(u_1 + i, u_2 + j) : 0 ≤ i, j < m}. We can view H(u) as the largest square resistor network in H such that u is the top-left vertex. Let v = (u_1 + m − 1, u_2 + m − 1) be the bottom-right vertex in H(u). Using infinite resistors to remove every edge in E(Square_n) ∖ E(H(u)), we have

  R^{Square_n}_eff(v, u) ≤ R^{H(u)}_eff(v, u)

by Rayleigh's monotonicity law. Proposition A.1 implies that R^{H(u)}_eff(v, u) ≤ 2 log n since m ≤ n. The vertex v is incident to v_sink in Square_n, so Rayleigh's monotonicity law gives R^{Square_n}_eff(v_sink, v) ≤ 1. By the triangle inequality for effective resistances, we have

  R_eff(v_sink, u) ≤ R_eff(v_sink, v) + R_eff(v, u) ≤ 2 log n + 1,

which completes the proof.

B Appendix for Section 3
We use the random walk interpretation of voltage to prove Lemma 3.2. The key idea is that the potential on the boundary opposite of u along any axis is smaller by at most a constant factor. This projection can be iterated along an axis in each dimension.

Lemma 3.2. If u is a vertex in the top-left quadrant of Square_n, then for any non-sink vertex v we have

  π_u(v) ≥ (1/16) π_u((n, n)).

Proof.
We use Lemma 2.6 to decompose π_u(v) as a sum of probabilities of walks, and then construct maps for all 1 ≤ v_1, v_2 ≤ n to show

  π_u((v_1, v_2)) ≥ (1/4) max{ π_u((n, v_2)), π_u((v_1, n)) }.

We begin by considering the first dimension:

  π_u((v_1, v_2)) ≥ π_u((n, v_2))/4.

Let ℓ_hor be the horizontal line of reflection passing through (⌈(v_1 + n)/2⌉, 1) and (⌈(v_1 + n)/2⌉, n) in Z^2, and let u* be the reflection of u over ℓ_hor. Note that u* may be outside of the n × n grid. Next, define the map

  f : W((n, v_2) → u) → W((v_1, v_2) → u)

as follows. For any walk w ∈ W((n, v_2) → u):

1. Start the walk f(w) at (v_1, v_2), and if n − v_1 is odd move to (v_1 + 1, v_2).
2. Perform w but make opposite vertical moves before the walk hits ℓ_hor, so that the partial walk is a reflection over ℓ_hor.
3. After hitting ℓ_hor for the first time, continue performing w, but now use the original vertical moves.
4. Terminate this walk when it first reaches u.

Denote the preimage of a walk w′ ∈ W((v_1, v_2) → u) under f to be

  f^{−1}(w′) = { w ∈ W((n, v_2) → u) : f(w) = w′ }.

We claim that for any w′ ∈ W_{Square_n}((v_1, v_2) → u),

  (1/4) Σ_{w ∈ f^{−1}(w′)} 4^{−|w|} ≤ 4^{−|w′|}.

If f^{−1}(w′) = ∅ the claim is true, so assume f^{−1}(w′) ≠ ∅. We analyze two cases. If w′ hits ℓ_hor, then f^{−1}(w′) contains exactly one walk w of length |w′| or |w′| − 1. If w′ does not hit ℓ_hor, then

  f^{−1}(w′) = { w ∈ W((n, v_2) → u) : w is a reflection of w′ over ℓ_hor before w hits u* }.

It follows that any walk w ∈ f^{−1}(w′) can be split into w = w_1 w_2, where w_1 is the unique walk from (n, v_2) to u* that is a reflection of w′, and w_2 is a walk from u* to u that avoids v_sink and hits u exactly once upon termination. Clearly w_1 has length |w′| or |w′| − 1, and the set of admissible w_2 is W(u* → u). Therefore,

  (1/4) Σ_{w ∈ f^{−1}(w′)} 4^{−|w|} = 4^{−|w_1| − 1} Σ_{w_2 ∈ W(u* → u)} 4^{−|w_2|} = 4^{−|w_1| − 1} π_u(u*) ≤ 4^{−|w′|},

since π_u(u*) is an escape probability. Summing over all w′ ∈ W((v_1, v_2) → u), it follows from Lemma 2.6 and the previous inequality that

  π_u((v_1, v_2)) = Σ_{w′ ∈ W((v_1, v_2) → u)} 4^{−|w′|} ≥ (1/4) Σ_{w′ ∈ W((v_1, v_2) → u)} Σ_{w ∈ f^{−1}(w′)} 4^{−|w|} ≥ (1/4) π_u((n, v_2)),

where the last inequality holds because every w ∈ W((n, v_2) → u) is the preimage of some w′ ∈ W((v_1, v_2) → u).

Similarly, we can show that π_u((v_1, v_2)) ≥ π_u((v_1, n))/4 for all 1 ≤ v_2 ≤ n by reflecting walks over the vertical line from (1, ⌈(n + v_2)/2⌉) to (n, ⌈(n + v_2)/2⌉). Combining inequalities proves the claim.

Lastly, we give a constant lower bound for the probability of an n-step simple symmetric walk being sufficiently close to its starting position by using the recursive definition of binomial coefficients and a Chernoff bound for symmetric random variables.
For all n ≥ 10, we have

  min{ (1/2^n) Σ_{k = ⌈n/4⌉, k odd}^{⌊3n/4⌋} \binom{n}{k},  (1/2^n) Σ_{k = ⌈n/4⌉, k even}^{⌊3n/4⌋} \binom{n}{k} } ≥ 1/4.

Proof. First observe that for n ≥ 10, we have

  (1/2^n) Σ_{k = ⌈n/4⌉, k odd}^{⌊3n/4⌋} \binom{n}{k} ≥ (1/2^n) Σ_{k ∈ ((n−1)/4, 3(n−1)/4)} \binom{n−1}{k}

and

  (1/2^n) Σ_{k = ⌈n/4⌉, k even}^{⌊3n/4⌋} \binom{n}{k} ≥ (1/2^n) Σ_{k ∈ ((n−1)/4, 3(n−1)/4)} \binom{n−1}{k}.

To see this, use the parity restriction and expand the summands as

  \binom{n}{k} = \binom{n−1}{k−1} + \binom{n−1}{k}.

Let X_1, X_2, ..., X_{n−1} be independent Bernoulli random variables such that Pr[X_i = 0] = Pr[X_i = 1] = 1/2. Let S_{n−1} = X_1 + X_2 + ··· + X_{n−1} and μ = E[S_{n−1}] = (n − 1)/2. Using a Chernoff bound, we have

  (1/2^n) Σ_{k ∈ ((n−1)/4, 3(n−1)/4)} \binom{n−1}{k} = (1/2) ( 1 − Pr[ |S_{n−1} − μ| ≥ μ/2 ] ) ≥ 1/2 − e^{−(n−1)/24} ≥ 1/4,

for n ≥ 60. Checking the remaining cases numerically when 10 ≤ n < 60 proves the claim.
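The finite check can be carried out directly. The sketch below is our own, and it assumes the summation window [⌈n/4⌉, ⌊3n/4⌋] and the constant 1/4 (our reading of the lemma, whose constants are garbled in this extraction); it computes both parity-restricted sums exactly for all remaining cases.

```python
from math import comb, ceil, floor

def parity_window_mass(n, parity):
    """2^{-n} times the sum of C(n, k) over ceil(n/4) <= k <= floor(3n/4),
    restricted to k of the given parity (0 for even, 1 for odd)."""
    lo, hi = ceil(n / 4), floor(3 * n / 4)
    return sum(comb(n, k) for k in range(lo, hi + 1) if k % 2 == parity) / 2 ** n

# Minimum over both parities for the remaining cases 10 <= n < 60.
worst = min(parity_window_mass(n, p) for n in range(10, 60) for p in (0, 1))
assert worst >= 1 / 4
```

Exact integer arithmetic via math.comb avoids any floating-point concerns until the final division, so the check is rigorous for this range.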
References [Asc11] Markus Aschwanden.
Self-Organized Criticality in Astrophysics: The Statistics of Nonlinear Processes in the Universe. Springer Science & Business Media, 2011.

[Bak96] Per Bak.
How Nature Works: The Science of Self-Organized Criticality . Copernicus,1996.[BBD +
13] Luca Becchetti, Vincenzo Bonifaci, Michael Dirnberger, Andreas Karrenbauer, andKurt Mehlhorn. Physarum can compute shortest paths: Convergence proofs and com-plexity bounds. In
Proceedings of the 40th International Colloquium on Automata,Languages, and Programming (ICALP), pages 472–483. Springer, 2013.[BCFR17] Prateek Bhakta, Ben Cousins, Matthew Fahrbach, and Dana Randall. Approximatelysampling elements with fixed rank in graded posets. In
Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1828–1838. Society for Industrial and Applied Mathematics, 2017.[BdACA +
16] Ludmila Brochini, Ariadne de Andrade Costa, Miguel Abadi, Antˆonio C. Roque,Jorge Stolfi, and Osame Kinouchi. Phase transitions and self-organized criticality innetworks of stochastic spiking neurons.
Scientific reports , 6:35831, 2016.[BG07] L´aszl´o Babai and Igor Gorodezky. Sandpile transience on the grid is polynomiallybounded. In
Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Dis-crete Algorithms (SODA), pages 627–636. Society for Industrial and Applied Math-ematics, 2007.[BLS91] Anders Bj¨orner, L´aszl´o Lov´asz, and Peter W. Shor. Chip-firing games on graphs.
European Journal of Combinatorics , 12(4):283–291, 1991.[BMV12] Vincenzo Bonifaci, Kurt Mehlhorn, and Girish Varma. Physarum can computeshortest paths. In
Proceedings of the Twenty-Third Annual ACM-SIAM Symposiumon Discrete Algorithms (SODA), pages 233–240. Society for Industrial and AppliedMathematics, 2012.[BPR15] Alessio Emanuele Biondo, Alessandro Pluchino, and Andrea Rapisarda. Modelingfinancial markets by self-organized criticality.
Physical Review E , 92(4):042814, 2015.[BTW87] Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality: An explanationof the 1/ f noise. Physical Review Letters , 59(4):381, 1987.[CKM +
11] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel A. Spielman, andShang-Hua Teng. Electrical flows, Laplacian systems, and faster approximation ofmaximum flow in undirected graphs. In
Proceedings of the Forty-Third ACM Sympo-sium on Theory of Computing (STOC), pages 273–282. Association for ComputingMachinery, 2011.[CV12] Ayush Choure and Sundar Vishwanathan. Random walks, electric networks and thetransience class problem of sandpiles. In
Proceedings of the Twenty-Third AnnualACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1593–1611. Societyof Industrial and Applied Mathematics, 2012.[Dha90] Deepak Dhar. Self-organized critical state of sandpile automaton models.
PhysicalReview Letters , 64(14):1613, 1990.[Dha06] Deepak Dhar. Theoretical studies of self-organized criticality.
Physica A: Statistical Mechanics and its Applications, 369(1):29–70, 2006.

[DRSV95] Deepak Dhar, Philippe Ruelle, Siddhartha Sen, and D.-N. Verma. Algebraic aspects of abelian sandpile models.
Journal of Physics A: Mathematical and General ,28(4):805, 1995.[DS84] Peter G. Doyle and J. Laurie Snell.
Random Walks and Electric Networks . Mathe-matical Association of America, 1984.[ES77] C.J. Everett and P.R. Stein. The combinatorics of random walk with absorbingbarriers.
Discrete Mathematics , 17(1):27–45, 1977.[ESVM +
11] W. Ellens, F.M. Spieksma, P. Van Mieghem, A. Jamakovic, and R.E. Kooij. Effectivegraph resistance.
Linear Algebra and its Applications , 435(10):2491–2506, 2011.[HLM +
08] Alexander E. Holroyd, Lionel Levine, Karola Mészáros, Yuval Peres, James Propp, and David B. Wilson. Chip-firing and rotor-routing on directed graphs.
In and Outof Equilibrium 2 , pages 331–364, 2008.[JLP15] Daniel C. Jerison, Lionel Levine, and John Pike. Mixing time and eigenvalues of theabelian sandpile Markov chain. Preprint, arXiv:1511.00666v1 , 2015.[KG09] Thomas Kron and Thomas Grund. Society as a self-organized critical system.
Cy-bernetics & Human Knowing , 16(1):65–82, 2009.[LHG07] Anna Levina, J. Michael Herrmann, and Theo Geisel. Dynamical synapses causingself-organized criticality in neural networks.
Nature physics , 3(12):857–860, 2007.[Lov93] L´aszl´o Lov´asz. Random walks on graphs: A survey.
Combinatorics, Paul Erd˝os isEighty , 2:1–46, 1993.[LPW09] David Asher Levin, Yuval Peres, and Elizabeth Lee Wilmer.
Markov Chains andMixing Times . American Mathematical Society, 2009.[Mad13] Aleksander Madry. Navigating central path with electrical flows: From flows tomatchings, and back. In
Proceedings of the 54th Annual IEEE Symposium on Foun-dations of Computer Science (FOCS), pages 253–262. Institute of Electrical andElectronics Engineers, 2013.[Man91] S.S. Manna. Two-state model of self-organized criticality.
Journal of Physics A:Mathematical and General , 24(7):L363, 1991.[Meh13] Kurt Mehlhorn. Physarum computations. In
Proceedings of the 30th InternationalSymposium on Theoretical Aspects of Computer Science (STACS), pages 5–6. SchlossDagstuhl, 2013.[MTN94] S. Mineshige, M. Takeuchi, and H. Nishimori. Is a black hole accretion disk in aself-organized critical state?
The Astrophysical Journal , 435:L125–L128, 1994.[Phi14] J.C. Phillips. Fractals and self-organized criticality in proteins.
Physica A: StatisticalMechanics and Its Applications , 415:440–448, 2014.[RAM09] O. Ramos, Ernesto Altshuler, and K.J. Maløy. Avalanche prediction in a self-organized pile of beads.
Physical Review Letters, 102(7):078701, 2009.

[RB79] Gian-Carlo Rota and Kenneth A. Baclawski.
Introduction to Probability and RandomProcesses . 1979.[RS17] Akshay Ramachandran and Aaron Schild. Sandpile prediction on a tree in nearlinear time. In
Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposiumon Discrete Algorithms (SODA), pages 1115–1131. Society of Industrial and AppliedMathematics, 2017.[SMM14] H. Saba, J.G.V. Miranda, and M.A. Moret. Self-organized critical phenomenon as a q -exponential decay–Avalanche epidemiology of dengue. Physica A: Statistical Me-chanics and its Applications , 413:205–211, 2014.[SS89] A. Sornette and D. Sornette. Self-organized criticality and earthquakes.
EurophysicsLetters , 9(3):197, 1989.[SV16a] Damian Straszak and Nisheeth K. Vishnoi. IRLS and slime mold: Equivalence andconvergence. Preprint, arXiv:1601.02712v1 , 2016.[SV16b] Damian Straszak and Nisheeth K. Vishnoi. Natural algorithms for flow problems. In
Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Al-gorithms (SODA), pages 1868–1883. Society of Industrial and Applied Mathematics,2016.[SV16c] Damian Straszak and Nisheeth K. Vishnoi. On a natural dynamics for linear pro-gramming. In
Proceedings of the 2016 ACM Conference on Innovations in TheoreticalComputer Science (ITCS), page 291. Association for Computing Machinery, 2016.[SW94] Jose A. Scheinkman and Michael Woodford. Self-organized criticality and economicfluctuations.
The American Economic Review , 84(2):417–421, 1994.[Tet91] Prasad Tetali. Random walks and the effective resistance of networks.
Journal ofTheoretical Probability , 4(1):101–109, 1991.[Wil10] David B. Wilson. Dimension of the loop-erased random walk in three dimensions.
Physical Review E , 82(6):062102, 2010.[WPC +
16] Nicholas W. Watkins, Gunnar Pruessner, Sandra C. Chapman, Norma B. Crosby, andHenrik J. Jensen. 25 years of self-organized criticality: Concepts and controversies.
Space Science Reviews , 198(1-4):3–44, 2016.[WWAM06] Rinke J. Wijngaarden, Marco S. Welling, Christof M. Aegerter, and Mariela Mengh-ini. Avalanches and self-organized criticality in superconductors.