On a random entanglement problem
GAGE BONNER, JEAN-LUC THIFFEAULT, AND BENEDEK VALKÓ
Abstract.
We study a model for the entanglement of a two-dimensional reflecting Brownian motion in a bounded region divided into two halves by a wall with three or more small windows. We map the Brownian motion into a Markov chain on the fundamental groupoid of the region. We quantify the entanglement of the path by the length of the appropriate element in this groupoid. Our main results are a law of large numbers and a central limit theorem for this quantity. The constants appearing in the limit theorems are expressed in terms of a coupled system of quadratic equations.

1. Motivation
We consider a reflecting Brownian motion in a piecewise smooth bounded region of $\mathbb{R}^2$. This region is divided into a top and bottom half by a wall punctured by $N \ge 3$ windows.

Figure 1. (A) A planar domain with $N = 4$ windows and a sample Brownian path. (B) The same path expressed as arcs labeled with their associated generating elements. (See Section 2.)

The Brownian path winds around the wall segments through the windows, and becomes progressively more entangled. The entanglement can be quantified by mapping the path at a given time to an element of the fundamental groupoid [5] of the region, and considering the length of that element in the appropriate sense (see Section 2). Our goal is to study the asymptotic growth of this length as a function of time. The growth rate of words in the groupoid serves as an indication of the nature of the growth one would expect to see in the winding problem. Motivated by random walks on free groups [12, 18, 29], one expects

Department of Mathematics, University of Wisconsin – Madison, WI 53706, USA
E-mail addresses: [email protected], [email protected], [email protected].

the length in the fundamental groupoid to grow linearly in time. In the present paper we identify formulas for both the growth rate and the limiting fluctuations around the mean, in the setting involving small windows. Our main contribution (Theorem 3.2) is the proof of these limit theorems in a general setting. The limits are described in terms of a set of coupled quadratic equations, which can be readily solved numerically.

One can follow the entanglement of the Brownian path by observing it at times when it visits a new window, and considering the length of the corresponding element in the fundamental groupoid. This fundamental groupoid is generated by equivalence classes of oriented paths connecting two windows, with each such path lying in the upper or lower half of the plane. Between successive observations, the groupoid element corresponding to the path is appended by a random generating element whose distribution depends on the location inside the window. Motivated by the narrow escape problem [13], as the windows shrink in size this location dependence disappears, and we arrive at a Markov chain on the fundamental groupoid. Our limit theorems are about the length of the groupoid element in this Markov chain.

Probabilistic winding problems on surfaces have a long history. A classical example is the asymptotic behavior of the winding of a planar Brownian motion around a point. Spitzer [32] showed that the winding angle at time $t$, scaled by $\log t$, converges to a standard Cauchy distribution as time goes to infinity. The fact that the limit distribution has no moments can be explained by the large amount of winding that the Brownian path can pick up when it comes near the origin.
This model has been thoroughly investigated by many authors [22, 24–26]. When using Brownian motion to model, say, polymer entanglement [10], it is more realistic to regularize the problem in some way. This can be accomplished, for example, by replacing the punctual winding center by a finite topological disk [8, 10], by adding a persistence length to the motion [33], or by considering a random walk instead [1–4, 27, 28]. In the regularized problem the scaling limit for the winding angle becomes the hyperbolic secant distribution, for which all the moments exist. Unsurprisingly, confinement to a finite region greatly increases the rate of winding, since the Brownian path returns near the winding center more frequently [8, 10, 35].

A more challenging problem is the study of winding around multiple points or topological disks. A natural first approach to this problem is the homological route, where one examines the joint distribution of winding angles around each winding center. In the scaling limit these winding angles converge to independent Cauchy distributions [24, 25]. The homological route is inherently Abelian, in that the order of winding around the centers is lost. Watanabe [34] studied winding on punctured surfaces of higher genus, and derived Gaussian limit distributions for the windings around each handle.

Another approach is via the fundamental group of the punctured surface, which is the group of deck transformations on its universal cover. In that case, the non-Abelian aspect of the windings is captured, and we may regard distance in the universal cover as a measure of entanglement of the Brownian motion. This approach was first introduced by Itô and McKean [14, 20], who considered the twice-punctured plane. (See also [19, 21].) Gruet [11] finds that the length of the word at time $t$ in the fundamental group of the thrice-punctured sphere grows at least like $t \log t$ as $t \to \infty$.
Desenonges [6] considers a similar problem on a wider class of surfaces with $n$ punctures. See also the book by Nechaev [23] for winding in an infinite lattice of points. Note that the region in our Brownian entanglement problem is topologically equivalent to a sphere with $N$ holes, hence our result belongs to this class of non-Abelian problems. Our Markov chain can also be considered as a random walk on a regular language (see Remark 3.3).

Our paper is organized as follows. Section 2 and Section 3 contain the precise setup of the problem and our main result. In Section 4 we give a number of applications of our main theorem. Section 5 provides the key steps of the proof, which are proved in the rest of the paper (Section 6 and Appendix A).

2. Preliminaries
2.1. The fundamental groupoid $\mathcal{G}_N$. We consider the groupoid representing the homotopy classes of continuous paths that start and end at the midpoints of the windows as in Figure 1.

Recall that a groupoid is defined by a set of 'objects' $O$ and 'arrows' $X$ and the following functions:

(G1) There are functions $s$ (source) and $t$ (target) from $X \to O$.
(G2) There is a composition function $(f_1, f_2) \to f_1 f_2$ on a subset of $X \times X$ which is defined for $f_1, f_2$ if $t(f_1) = s(f_2)$, and in that case $s(f_1 f_2) = s(f_1)$, $t(f_1 f_2) = t(f_2)$. The composition function is associative.
(G3) For each $i \in O$ there is a unique unit element $e_i \in X$ with $s(e_i) = t(e_i) = i$ for which $e_i f = f$, $f e_i = f$ whenever these are defined.
(G4) There is an inverse for each element of $X$ satisfying $s(f^{-1}) = t(f)$, $t(f^{-1}) = s(f)$ and $f f^{-1} = e_{s(f)}$, $f^{-1} f = e_{t(f)}$.

We consider a groupoid $\mathcal{G}_N$ with object set $O_N = O = \{1, 2, \ldots, N\}$, and arrow set $X_N = X$ generated by the elements in

  $\mathcal{A}_N := \{ A^{(k)}_{i,j} : i \neq j,\ 1 \le i, j \le N,\ k \in \{-1, +1\} \}$,   (2.1)

with

  $s(A^{(k)}_{i,j}) = i, \qquad t(A^{(k)}_{i,j}) = j$.   (2.2)

For convenience we define $A^{(k)}_{i,i} := e_i$. The set of generating relations for our groupoid is given by the relations

  $A^{(k)}_{i,j} A^{(k)}_{j,\ell} = A^{(k)}_{i,\ell}, \qquad i, j, \ell \in O,\ k \in \{-1, +1\}$.   (2.3)
We call a composition of finitely many generating elements a word. We include the unit elements as words, and call them empty words. Each arrow in $\mathcal{G}_N$ corresponds to an equivalence class of words. Two words are in the same equivalence class if they can be transformed into each other by repeated application of the relations (2.3). We say that a nonempty word is reduced if we cannot apply the relations (2.3) to reduce the number of generators used in the word. We consider the empty words reduced as well.

For $i \neq j$, the arrow $A^{(+1)}_{i,j}$ (respectively, $A^{(-1)}_{i,j}$) corresponds to a simple path in the upper (respectively, lower) part of the region connecting window $i$ to window $j$ as in Figure 1b. Figure 2 shows a schematic of the groupoid structure for $N = 3$.

The arrows of the groupoid $\mathcal{G}_N$ can be represented as equivalence classes of paths in the directed multi-graph which has vertices $O = \{1, 2, \ldots, N\}$ and directed edges of the form $(i, j, k)$ with $i \neq j \in O$, $k \in \{-1, +1\}$. The generating elements $A^{(k)}_{i,j}$ correspond to the directed edges $(i, j, k)$; a composition of generating elements (a word) corresponds to a path in the multi-graph. The starting and ending vertices of a path are the results of the source and target functions. Two paths are in the same equivalence class (and correspond to the same arrow in $X$) if they can be transformed into each other by the repeated use of the following operations and their inverses:

(EC1) Deleting a backtracking step $(i, j, k), (j, i, k)$.
(EC2) Replacing two consecutive steps $(i, j, k), (j, \ell, k)$ with $(i, \ell, k)$ if $i, j, \ell$ are all different.

These operations correspond to the generating relations (2.3). A path corresponds to a reduced word if we cannot use either of the moves (EC1) and (EC2) on it. An important consequence of the properties of our groupoid is that each non-unit arrow can be uniquely represented as a product of elements of $\mathcal{A}_N$ whose superscripts alternate between $+1$ and $-1$.
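Since adjacent letters of a reduced word never share a superscript, the moves (EC1) and (EC2) can be applied in a single stack pass: each incoming letter interacts only with the current top of the stack. The following sketch (our illustration, not part of the paper) encodes a generator $A^{(k)}_{i,j}$ as a tuple `(i, j, k)`:

```python
def reduce_word(word):
    """Reduce a word in the groupoid G_N, given as a list of generators
    (i, j, k) with i != j and k in {-1, +1}, by repeatedly applying
    (EC1) (delete a backtracking step) and (EC2) (merge two consecutive
    steps that carry the same superscript k)."""
    stack = []
    for (i, j, k) in word:
        if stack and stack[-1][2] == k and stack[-1][1] == i:
            # Same superscript as the top letter A^(k)_{p,i}:
            # A^(k)_{p,i} A^(k)_{i,j} = A^(k)_{p,j} by relation (2.3).
            p = stack.pop()[0]
            if p != j:
                stack.append((p, j, k))   # (EC2): merge into one step
            # else (EC1): the steps cancel to a unit; push nothing
        else:
            stack.append((i, j, k))
    return stack
```

For example, `reduce_word([(1, 2, 1), (2, 3, 1)])` returns `[(1, 3, 1)]`, an instance of (EC2), while `[(1, 3, 1), (3, 1, 1)]` reduces to the empty word by (EC1). Because the stack is reduced at all times, each letter is pushed and popped at most once.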
More precisely, we have the following lemma which is proved in Appendix A. Lemma 2.1.
Each arrow $w \in X$ can be represented as a reduced word in a unique way. This reduced word is either an empty word or, for some $d \ge 1$, a product of the form

  $w = A^{(k)}_{i_1,i_2} A^{(-k)}_{i_2,i_3} A^{(k)}_{i_3,i_4} \cdots A^{((-1)^{d+1} k)}_{i_d,i_{d+1}}$ such that $i_\ell \neq i_{\ell+1}$ for all $1 \le \ell \le d$.   (2.4)

We say that $|\cdot|_L : X \to [0, \infty)$ is a metric on $X$ generated by $\mathcal{A}_N$ if $|e_i|_L = 0$ for all $i$ and for any nonempty $w \in X$ we have

  $|w|_L = \sum_{\ell=1}^{d} \big| A^{(k_\ell)}_{i_\ell,j_\ell} \big|_L$,   (2.5)

where $\prod_{\ell=1}^{d} A^{(k_\ell)}_{i_\ell,j_\ell}$ is the unique reduced representation of $w$ given by Lemma 2.1. We reserve $|\cdot|$ to denote the number of generators in the reduced representation of $w \in X$:

  $|w| = \Big| \prod_{\ell=1}^{d} A^{(k_\ell)}_{i_\ell,j_\ell} \Big| = d$.   (2.6)

Figure 2.
The arrows represented by the members $A^{(k)}_{i,j}$ of $\mathcal{A}_3$ with $i < j$. The objects of our groupoid, $\{1, 2, 3\}$, are represented by circles. Each arrow points from $s(A^{(k)}_{i,j})$ to $t(A^{(k)}_{i,j})$.

2.2. The Markov chain on $\mathcal{G}_N$. We study discrete time Markov chains $\{W_n\}_{n \ge 0}$ on the arrow set $X$. We assume the following:

Assumption 2.2.
In each step the value of $W_n$ changes by right composition with a generating element $A^{(k)}_{i,j}$, i.e. $W_n^{-1} W_{n+1} \in \mathcal{A}_N$ for all $n \ge 0$.

Assumption 2.3.
The conditional probability of the increments given the starting state depends on the starting state only through its target.

These two assumptions imply that for $x, y \in X$ the transition probability function is of the form

  $P(W_{n+1} = y \mid W_n = x) := p^{(k)}_{i,j}$ if $x^{-1} y = A^{(k)}_{i,j}$ and $i \neq j$.   (2.7)

We further assume:

Assumption 2.4.
The jump probabilities satisfy $p^{(k)}_{i,j} \in (0, 1)$ for each $(i, j, k)$.

We consider $N \ge 3$. If $N = 2$, then the process $\{W_n\}_{n \ge 0}$ reduces to the Abelian case, namely a Markov chain on the free group of rank 1, that is, a lazy random walk on $\mathbb{Z}$. From Eq. (2.7), we see that there are $2N(N-1)$ transition probabilities and we must have $\sum_{j,k} p^{(k)}_{i,j} = 1$ for each $1 \le i \le N$. We call $W_n$ the "word" at time $n$. We use the notation $P_x(\cdot)$ and $E_x[\cdot]$ to indicate that the probabilities and expectations in question are calculated under the initial condition $W_0 = x$.
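Under Assumptions 2.2 and 2.3 the chain is straightforward to simulate: track the reduced word and the current target window, and at each step append a generator drawn out of that target. A minimal sketch (our illustration; the uniform choice of generator below corresponds to the totally symmetric probabilities $p^{(k)}_{i,j} = 1/(2N-2)$ of Section 4.3):

```python
import random

def step(word, target, N, rng):
    """Append one uniformly chosen generator A^(k)_{target,m} to the
    reduced word, re-reduce, and return the new target window m."""
    m = rng.choice([v for v in range(1, N + 1) if v != target])
    k = rng.choice([-1, 1])
    if word and word[-1][2] == k:
        # Same superscript as the last letter A^(k)_{p,target}:
        # merge by (EC2), or cancel by (EC1) if the result is a unit.
        p = word.pop()[0]
        if p != m:
            word.append((p, m, k))
    else:
        word.append((target, m, k))
    return m

def simulate(N, n_steps, seed=0):
    """Run the symmetric chain from e_1 and return |W_n| (generator count)."""
    rng = random.Random(seed)
    word, target = [], 1
    for _ in range(n_steps):
        target = step(word, target, N, rng)
    return len(word)
```

For $N = 3$ and a long run, `simulate(3, n) / n` settles near the linear growth rate of $|W_n|$ predicted by Theorem 3.2 (here $1/4$; see Section 4.3).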
Define the sets

  $X_i := \{ w : s(w) = i,\ w \in X \}, \qquad 1 \le i \le N$.   (2.8)

Some important consequences of our assumptions are collected in the following lemma, which is proved in Appendix A.

Lemma 2.5.
(i) For any $x \in X$, under $P_x$ the process $\{x^{-1} W_n\}_{n \ge 0}$ has the same distribution as the process $\{W_n\}_{n \ge 0}$ under $P_{e_{t(x)}}$.
(ii) There is a positive probability path in the Markov chain $\{W_n\}_{n \ge 0}$ between two words $w_1 \neq w_2$ if and only if $w_1, w_2 \in X_i$ for some $1 \le i \le N$.
(iii) Suppose that $\{v_n\}_{0 \le n \le n'}$ is such a path, where $v_0 = w_1$ and $v_{n'} = w_2$. Consider the reduced composition

  $w_1^{-1} w_2 = \prod_{\ell=1}^{d} A^{(k_\ell)}_{i_\ell,j_\ell}, \qquad d \ge 1,$   (2.9)

given by Lemma 2.1. Then there exist times $\{n_m\}_{1 \le m \le d-1}$ satisfying $0 < n_1 < n_2 < \cdots < n_{d-1} < n'$, such that $v_{n_m} = w_1 \prod_{\ell=1}^{m} A^{(k_\ell)}_{i_\ell,j_\ell}$ for each $1 \le m \le d-1$.

Figure 3.
An illustration of Statement (iii) of Lemma 2.5 in the $N = 5$ case. A path joining $w_1$ (red, dashed) to $w_2$, where $w_1^{-1} w_2$ is a reduced product of three generators (blue, solid), must visit the two intermediate words obtained by appending the first one and then the first two of these generators to $w_1$, in this order.
To connect with the original problem described in Section 1, we first note that any path in the domain $D$ (described in Figure 1a) starting at a window and ending at a window can be naturally mapped into an element of $X$. Indeed, an element of $X$ corresponds to the homotopy class of a path connecting two windows, where we allow the starting and ending points of the path to move inside a window.

Let $B_t$ be the position of the reflected Brownian motion in $D$ at time $t$. Let $\tau_0 = \inf_{s \ge 0} \{ B_s \text{ at a window} \}$ and

  $\tau_{n+1} = \inf_{s \ge \tau_n} \{ B_s \text{ at a window different from the window visited at time } \tau_n \}$.   (2.10)
Denoting by $W_n$ the element in $X$ corresponding to the Brownian path in the time interval $[\tau_0, \tau_n]$, we see that the sequence $(W_n, B_{\tau_n})_{n \ge 0}$ is a Markov chain on $X \times D$. By the narrow escape problem [13], the transition distribution of this Markov chain will only depend on the first coordinate (an element in $X$) as all window sizes go to 0, which leads to a Markov chain on $X$ satisfying the conditions described in Section 2.2. The narrow escape problem also implies that understanding the growth of the length of $W_n$ in $n$ allows us to understand the growth of the entanglement of the Brownian path in $t$.

3. Statement of main result
We now state our main result, which is the computation of the almost-sure limit $\lim_{n \to \infty} |W_n|_L / n$ as well as a central limit theorem. To do this, we require a set of functions whose properties are collected in the following proposition.

Proposition 3.1.
There is an $\epsilon > 0$ and a unique set of complex functions

  $\{ R^{(k)}_{i,j} : D_{1+\epsilon} \to \mathbb{C} : i \neq j,\ 1 \le i, j \le N,\ k \in \{-1, +1\} \}, \qquad D_r = \{ z : |z| < r \},$   (3.1)

satisfying the following properties:

(R1) Each $R^{(k)}_{i,j}$ is complex analytic in the disk $D_{1+\epsilon}$.
(R2) If $\lambda \in [0, 1]$, then $R^{(k)}_{i,j}(\lambda) \in (0, 1)$.
(R3) They satisfy the system of equations

  $R^{(k)}_{i,j} = \lambda \Big[ p^{(k)}_{i,j} + \sum_{m \neq i,j} p^{(k)}_{i,m} R^{(k)}_{m,j} + \sum_{m \neq i} p^{(-k)}_{i,m} R^{(-k)}_{m,i} R^{(k)}_{i,j} \Big]$, for $\lambda \in D_{1+\epsilon}$.   (3.2)

Let $B : \{-1, +1\} \times D_{1+\epsilon} \times \mathbb{C} \to \mathbb{C}^{N \times N}$ be a matrix-valued function whose $i,j$th entry is

  $B_{i,j}(k; \lambda, z) := (1 - \delta_{i,j})\, z^{|A^{(k)}_{i,j}|_L} R^{(k)}_{i,j}(\lambda)$,   (3.3)

where $\delta_{i,j}$ is the Kronecker delta. Now let

  $h(\lambda, z) := \det[\, I - B(1; \lambda, z) B(-1; \lambda, z) \,]$,   (3.4)

where $I$ is the identity matrix. Let

  $\gamma := \dfrac{\partial_z h(1,1)}{\partial_\lambda h(1,1)}$,   (3.5a)

  $\sigma^2 := \dfrac{\partial^2_z h(1,1) + \partial_z h(1,1) - 2\gamma\, \partial^2_{z,\lambda} h(1,1) + \gamma^2 \big( \partial^2_\lambda h(1,1) + \partial_\lambda h(1,1) \big)}{\partial_\lambda h(1,1)}$.   (3.5b)

We are now ready to state our main result:

Theorem 3.2.
Consider the Markov chain $\{W_n\}_{n \ge 0}$ satisfying Assumptions 2.2, 2.3 and 2.4 and having transition probabilities defined in Eq. (2.7). Then, for any initial condition $W_0$ we have

  $\lim_{n \to \infty} \dfrac{|W_n|_L}{n} = \gamma$ a.s.,   (3.6a)

  $\dfrac{|W_n|_L - \gamma n}{\sqrt{n}} \to N(0, \sigma^2)$ in law.   (3.6b)

The constants $\gamma$ and $\sigma^2$ are defined in Eq. (3.5), and $N(0, \sigma^2)$ denotes the normal distribution with mean $0$ and variance $\sigma^2$.

Remark 3.3.
Gilch [9] studies similar problems in the context of random walks on regular languages. A random walk on a regular language as defined in [17] is a Markov chain on the set of all finite words from a finite alphabet with the following conditions. In one jump only the last two letters of a word may be modified, and at most one letter may be adjoined or deleted. The transition probabilities only depend on the last two letters of the current word. Our process $\{W_n\}_{n \ge 0}$ is a random walk on a regular language formed by the alphabet $\mathcal{A}_N$.

Theorem 2.4 in [9] provides a law of large numbers under the assumption that the considered random walk is transient. This could potentially lead to another way to obtain the constant $\gamma$ in Theorem 3.2. However, the identification of the constant appearing in Theorem 2.4 of [9] requires the solution of a more complicated problem than in our case.

4. Examples
Here we demonstrate several applications of Theorem 3.2. We will consider two simple metrics and compute the constants in Eqs. (3.5a) and (3.5b). The first is the metric $|\cdot|$ defined in Eq. (2.6); the second is the metric $|\cdot|_F$ generated by $\mathcal{A}_N$, defined for the generators as

  $|A^{(k)}_{i,j}|_F = |i - j|$.   (4.1)

4.1. The one parameter case with $N = 3$. We take $N = 3$ and the set of transition probabilities

  $p^{(1)}_{2,1} = p^{(1)}_{2,3} = p^{(-1)}_{2,1} = p^{(-1)}_{2,3} = 1/4$,
  $p^{(1)}_{1,2} = p^{(1)}_{3,2} = p^{(-1)}_{1,2} = p^{(-1)}_{3,2} = q$,
  $p^{(1)}_{1,3} = p^{(1)}_{3,1} = p^{(-1)}_{1,3} = p^{(-1)}_{3,1} = 1/2 - q$,

for $0 < q < 1/2$. This represents a situation where the planar domain of Section 1 is left-right and up-down symmetric about its center. Let $\gamma(q), \sigma^2(q)$ be the constants appearing in Theorem 3.2 for the $|\cdot|$ metric and let $\gamma_F(q), \sigma^2_F(q)$ be the same constants for the $|\cdot|_F$ metric. We will show that $\gamma(q)$, $\gamma_F(q)$, $\sigma^2(q)$ and $\sigma^2_F(q)$ admit closed-form expressions (Eqs. (4.2a)–(4.2d)) in terms of $Q := \sqrt{q(8 - 7q)}$; at the value $q = 1/4$ the probabilities above become totally symmetric and these constants reduce to the $N = 3$ values of Section 4.3, e.g. $\gamma(1/4) = 1/4$ and $\sigma^2(1/4) = 11/16$. The constants $\gamma(q), \gamma_F(q)$ are plotted in Figure 4 and the constants $\sigma^2(q), \sigma^2_F(q)$ are plotted in Figure 5.

By our choice of probabilities, Eq. (3.2) reduces to a set of three equations by symmetry. We define

  $R_1 := R^{(1)}_{2,1} = R^{(1)}_{2,3} = R^{(-1)}_{2,1} = R^{(-1)}_{2,3}$,
  $R_2 := R^{(1)}_{1,2} = R^{(1)}_{3,2} = R^{(-1)}_{1,2} = R^{(-1)}_{3,2}$,
  $R_3 := R^{(1)}_{1,3} = R^{(1)}_{3,1} = R^{(-1)}_{1,3} = R^{(-1)}_{3,1}$,

which leads to

  $R_1 = \tfrac{\lambda}{4} \big[ 1 + R_3 + 2 R_1 R_2 \big]$,   (4.3a)
  $R_2 = \lambda \big[ q + \big(\tfrac{1}{2} - q\big) R_2 + q R_1 R_2 + \big(\tfrac{1}{2} - q\big) R_3 R_2 \big]$,   (4.3b)
  $R_3 = \lambda \big[ \tfrac{1}{2} - q + q R_1 + q R_1 R_3 + \big(\tfrac{1}{2} - q\big) R_3^2 \big]$.   (4.3c)

Substituting $\lambda = 1$ gives a cubic equation for $R_3(1)$. We choose the solution which satisfies $R_3(1) \in (0, 1)$
(by Property (R2)). By implicit differentiation of Eq. (4.3) it follows that $R_1(1) = 1/2$, together with

  $R_2(1) = \dfrac{3q - Q}{2(2q - 1)}, \qquad R_3(1) = \dfrac{2 - q - Q}{2(1 - 2q)}$,   (4.4)

and explicit (if lengthy) expressions for the first and second derivatives $R_i'(1)$ and $R_i''(1)$ in terms of $q$ and $Q$.

Since our $p^{(k)}_{i,j}$ are symmetric with respect to $k = \pm 1$, in the $|\cdot|$ metric we construct

  $B(1; \lambda, z) = B(-1; \lambda, z) = \begin{pmatrix} 0 & z R_2 & z R_3 \\ z R_1 & 0 & z R_1 \\ z R_3 & z R_2 & 0 \end{pmatrix}$,   (4.5)

so that

  $h(\lambda, z) = (1 - z^2 R_3^2) \big[ (1 - 2 z^2 R_1 R_2)^2 - z^2 R_3^2 \big]$.   (4.6)

Substituting this into Eqs. (3.5a) and (3.5b) and using Eq. (4.4) gives the required constants. Similarly, in the $|\cdot|_F$ metric, we have

  $B(1; \lambda, z) = B(-1; \lambda, z) = \begin{pmatrix} 0 & z R_2 & z^2 R_3 \\ z R_1 & 0 & z R_1 \\ z^2 R_3 & z R_2 & 0 \end{pmatrix}$.   (4.7)

Hence, we find

  $h(\lambda, z) = (1 - z^4 R_3^2) \big[ (1 - 2 z^2 R_1 R_2)^2 - z^4 R_3^2 \big]$.   (4.8)

We again substitute this into Eqs. (3.5a) and (3.5b) and use Eq. (4.4) to obtain the required constants.

Figure 4.
The constants $\gamma(q)$ of Eq. (4.2a) and $\gamma_F(q)$ of Eq. (4.2b). We note that $\gamma(q)$ has a maximum value of $1/4$ when $q = 1/4$; $\gamma_F(q)$ also attains an interior maximum.

Figure 5. (A) The quantity $\sigma^2(q)$ of Eq. (4.2c). We note that $\sigma^2(q)$ has a maximum value of $11/16$ when $q = 1/4$. (B) The quantity $\sigma^2_F(q)$ of Eq. (4.2d). We note that $\sigma^2_F(q)$ has a very slight maximum (inset). The exact value of $q$ for which $\sigma^2_F(q)$ is maximized is a root of a certain eighth-order polynomial.

4.2. An asymmetric case with $N = 3$. We take $N = 3$ and an arbitrarily chosen, asymmetric set of probabilities $p^{(k)}_{i,j}$ satisfying $\sum_{j,k} p^{(k)}_{i,j} = 1$ for each $i$. Solving Eq. (3.2) numerically at $\lambda = 1$ gives (to six significant digits) the values $r^{(k)}_{i,j} := R^{(k)}_{i,j}(1)$, and implicit differentiation of the system gives the first and second derivatives $d^{(k)}_{i,j} := \mathrm{d}R^{(k)}_{i,j}/\mathrm{d}\lambda\,|_{\lambda=1}$ and $v^{(k)}_{i,j} := \mathrm{d}^2 R^{(k)}_{i,j}/\mathrm{d}\lambda^2\,|_{\lambda=1}$. Here $B(1; \lambda, z)$ and $B(-1; \lambda, z)$ are different; however, they are each $3 \times 3$, and substituting into Eqs. (3.5a) and (3.5b) yields the constants $\gamma_a, \sigma^2_a$ in the $|\cdot|$ metric and $\gamma_{a,F}, \sigma^2_{a,F}$ in the $|\cdot|_F$ metric (Eq. (4.9)).

4.3. The totally symmetric case.
For any $N \ge 3$, we take $p^{(k)}_{i,j} = 1/(2N - 2)$ for all $(i, j, k)$. Let $\gamma_{sym}, \sigma^2_{sym}$ be the constants appearing in Theorem 3.2 for the $|\cdot|$ metric and let $\gamma_{sym,F}, \sigma^2_{sym,F}$ be the same constants for the $|\cdot|_F$ metric. We will show that

  $\gamma_{sym} = \dfrac{N - 2}{2(N - 1)}$,   (4.10a)

  $\gamma_{sym,F} = \dfrac{(N + 1)(N - 2)}{6(N - 1)}$,   (4.10b)

  $\sigma^2_{sym} = \dfrac{N^2 + 2N - 4}{4(N - 1)^2}$,   (4.10c)

and that $\sigma^2_{sym,F}$ is likewise an explicit rational function of $N$ (Eq. (4.10d)). Note that Eqs. (4.10a) and (4.10b) also hold for $N = 2$ since $\{W_n\}_{n \ge 0}$ is recurrent in this case. We will use the following lemma for the characteristic polynomial of a Kac–Murdock–Szegő matrix [15].

Lemma 4.1.
Let $U_n(z)$ be the $n \times n$ matrix whose $i,j$th entry is $[U_n(z)]_{i,j} = z^{|i - j|}$. Let $\phi_n(x, z) = \det[U_n(z) - x I]$. Then we have the recursion

  $\phi_n(x, z) = \big(1 - x - z^2(1 + x)\big)\, \phi_{n-1}(x, z) - x^2 z^2\, \phi_{n-2}(x, z); \qquad \phi_0(x, z) = 1, \quad \phi_1(x, z) = 1 - x$.   (4.11)
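The recursion (4.11) is easy to check numerically against a direct determinant evaluation (our own sanity check, not part of the paper):

```python
def det(mat):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]
    n, d = len(a), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        if abs(a[p][c]) < 1e-14:
            return 0.0
        if p != c:
            a[c], a[p] = a[p], a[c]
            d = -d
        d *= a[c][c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n):
                a[r][k] -= f * a[c][k]
    return d

def phi_direct(n, x, z):
    """phi_n(x, z) = det[U_n(z) - x I], computed directly."""
    U = [[z ** abs(i - j) for j in range(n)] for i in range(n)]
    for i in range(n):
        U[i][i] -= x
    return det(U)

def phi_recursive(n, x, z):
    """phi_n(x, z) from the three-term recursion (4.11)."""
    p0, p1 = 1.0, 1.0 - x  # phi_0 and phi_1
    for _ in range(n - 1):
        p0, p1 = p1, (1 - x - z * z * (1 + x)) * p1 - x * x * z * z * p0
    return p1 if n >= 1 else p0
```

For instance, at $n = 2$ both routes give $(1-x)^2 - z^2$; the agreement persists for larger $n$ and generic $(x, z)$.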
We multiply the second row of $U_n(z) - xI$ by $-z$ and add it to the first row. Then, we multiply the second column of the resulting matrix by $-z$ and add it to the first column. The result is that

  $\phi_n(x, z) = \begin{vmatrix} 1 - x & z & z^2 & \cdots \\ z & 1 - x & z & \cdots \\ z^2 & z & 1 - x & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{vmatrix} = \begin{vmatrix} 1 - x - z^2(1 + x) & xz & 0 & \cdots \\ xz & 1 - x & z & \cdots \\ 0 & z & 1 - x & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{vmatrix}$.   (4.12)
Expanding the determinant along the first column gives

  $\phi_n(x, z) = \big(1 - x - z^2(1 + x)\big) \phi_{n-1}(x, z) - xz \begin{vmatrix} xz & 0 & 0 & \cdots \\ z & 1 - x & z & \cdots \\ z^2 & z & 1 - x & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{vmatrix}$   (4.13)

  $\phantom{\phi_n(x, z)} = \big(1 - x - z^2(1 + x)\big) \phi_{n-1}(x, z) - x^2 z^2 \phi_{n-2}(x, z)$. □

Since the transition probabilities are all the same for a given $N$, Eqs. (3.2) are invariant under interchange of any two triples $(i_1, j_1, k_1)$, $(i_2, j_2, k_2)$. Therefore $R_{sym}(\lambda) := R^{(k)}_{i,j}(\lambda)$ satisfies a quadratic equation,

  $R_{sym} = \dfrac{\lambda}{2N - 2} \big[ 1 + (N - 2) R_{sym} + (N - 1) R_{sym}^2 \big]$.   (4.14)

Substituting $\lambda = 1$ and taking the root such that $R_{sym}(1) \in (0, 1)$ gives

  $R_{sym}(1) = \dfrac{1}{N - 1}, \qquad R'_{sym}(1) = \dfrac{2}{N - 2}, \qquad R''_{sym}(1) = \dfrac{4(N^2 - 2)}{(N - 2)^3}$.   (4.15)

In the $|\cdot|$ metric, $B_{i,j}(k; \lambda, z) = (1 - \delta_{i,j})\, z R_{sym}(\lambda)$. Since $B(k; \lambda, z)$ does not depend on $k$, we will write $H(\lambda, z) := B(k; \lambda, z)$. By Lemma 4.1,

  $\det[I \pm H(\lambda, z)] = \big( {\pm} z R_{sym}(\lambda) \big)^N \phi_N\!\left( \mp \dfrac{1 \mp z R_{sym}(\lambda)}{z R_{sym}(\lambda)},\ 1 \right)$.   (4.16)

Hence, we compute $h(\lambda, z) = \det[I + H(\lambda, z)] \det[I - H(\lambda, z)]$. This can be computed up to terms of order $(z - 1)^2$ by substituting $z = 1$ into Eq. (4.11), solving the resulting linear recurrence relation and applying implicit differentiation. Via Eqs. (3.5a) and (3.5b) we find the constants $\gamma_{sym}, \sigma^2_{sym}$ by expanding $\phi_N(x, z)$ in powers of $(z - 1)$.

In the $|\cdot|_F$ metric, $B_{i,j} = (1 - \delta_{i,j})\, z^{|i - j|} R_{sym}$. By Lemma 4.1,

  $\det[I \pm H(\lambda, z)] = \big( {\pm} R_{sym}(\lambda) \big)^N \phi_N\!\left( \mp \dfrac{1 \mp R_{sym}(\lambda)}{R_{sym}(\lambda)},\ z \right)$.   (4.17)

We again have $h(\lambda, z) = \det[I + H(\lambda, z)] \det[I - H(\lambda, z)]$ and the same arguments as above give $\gamma_{sym,F}, \sigma^2_{sym,F}$.

Remark 4.2.
In the $|\cdot|$ metric, due to the symmetry of the problem, the process $\{|W_n|\}_{n \ge 0}$ is a lazy nearest neighbor walk on the non-negative integers with transition probabilities

  $P(|W_{n+1}| = y \mid |W_n| = x) = \begin{cases} 1 & \text{if } x = 0 \text{ and } y = 1, \\ \frac{N-1}{2N-2} & \text{if } x \neq 0 \text{ and } y = x + 1, \\ \frac{N-2}{2N-2} & \text{if } x \neq 0 \text{ and } y = x, \\ \frac{1}{2N-2} & \text{if } x \neq 0 \text{ and } y = x - 1, \\ 0 & \text{otherwise.} \end{cases}$   (4.18)

This process is transient, so Eqs. (4.10a) and (4.10c) follow by direct computation.

5. Outline of proof
Our proof strategy uses the double generating function method of Sawyer and Steger ([29], Theorem 2.2):

Theorem 5.1 (Sawyer and Steger). Let $\{Y_n\}_{n \ge 0}$ be a sequence of non-negative random variables and suppose that we can write, for some $\delta > 0$,

  $G(\lambda, z) := E\Big[ \sum_{n=0}^{\infty} z^{Y_n} \lambda^n \Big] = \dfrac{C(\lambda, z)}{g(\lambda, z)}$ for $\lambda, z \in (1 - \delta, 1)$,   (5.1)

where $C(\lambda, z)$, $g(\lambda, z)$ can be extended as analytic functions to the regions $1 - \delta < |\lambda| < 1 + \delta$, $|z - 1| < \delta$ in the complex plane, and $C(1, 1) \neq 0$. Let

  $\mu := \dfrac{\partial_z g(1,1)}{\partial_\lambda g(1,1)}$,   (5.2a)

  $\nu := \dfrac{\partial^2_z g(1,1) + \partial_z g(1,1) - 2\mu\, \partial^2_{z,\lambda} g(1,1) + \mu^2 \big( \partial^2_\lambda g(1,1) + \partial_\lambda g(1,1) \big)}{\partial_\lambda g(1,1)}$.   (5.2b)

Then

  $\lim_{n \to \infty} \dfrac{Y_n}{n} = \mu$ a.s.,   (5.3a)

  $\dfrac{Y_n - \mu n}{\sqrt{n}} \to N(0, \nu)$ in law.   (5.3b)

We will apply Theorem 5.1 with $Y_n = |W_n|_L$. In order to understand the expectation in Eq. (5.1) we introduce a family of stopping times. For $x \in X$ let

  $T(m, x) := \inf_{k \ge 1} \{ W_{m+k} = W_m x \}$;   (5.4)

note that these can be $\infty$. We define a set of $2N(N-1)$ generating functions, one for each element of $\mathcal{A}_N$, as

  $R^{(k)}_{i,j}(\lambda) := E_{e_i}\Big[ \lambda^{T(0, A^{(k)}_{i,j})} \Big]$,   (5.5)

defined for those $\lambda \in \mathbb{C}$ where the expectation is finite. As we will show, these generating functions are the functions described in Proposition 3.1. To do this, we will show that the $R^{(k)}_{i,j}(\lambda)$ introduced in Eq. (5.5) uniquely satisfy Properties (R1), (R2) and (R3) of Proposition 3.1.

Remark 5.2.
Since each $R^{(k)}_{i,j}(\lambda)$ is a power series whose coefficients are non-negative numbers summing to at most 1, we immediately see that they satisfy weaker versions of Properties (R1) and (R2). In particular, if $\lambda \in [0, 1]$ then $R^{(k)}_{i,j}(\lambda) \in [0, 1]$, and each $R^{(k)}_{i,j}(\lambda)$ is complex analytic in $\lambda \in D_1$.
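In practice the system (3.2) can be solved by fixed-point iteration: starting from $R \equiv 0$ and repeatedly applying the right-hand side gives a monotone sequence converging to the minimal non-negative solution, in the spirit of the branching-process techniques of Section 6.3. The sketch below (our illustration, not part of the paper) does this for the totally symmetric $N = 3$ chain in the $|\cdot|$ metric, where the exponent of $z$ in Eq. (3.3) is 1, and then estimates $\gamma = \partial_z h(1,1)/\partial_\lambda h(1,1)$ by central finite differences:

```python
from itertools import product

def solve_R(p, N, lam, iters=3000):
    """Fixed-point iteration for Eq. (3.2); p[(i, j, k)] are the jump
    probabilities, R[(i, j, k)] approximates R^(k)_{i,j}(lam)."""
    R = {key: 0.0 for key in p}
    for _ in range(iters):
        R = {
            (i, j, k): lam * (
                p[(i, j, k)]
                + sum(p[(i, m, k)] * R[(m, j, k)]
                      for m in range(1, N + 1) if m not in (i, j))
                + sum(p[(i, m, -k)] * R[(m, i, -k)]
                      for m in range(1, N + 1) if m != i) * R[(i, j, k)]
            )
            for (i, j, k) in R
        }
    return R

def h(p, N, lam, z):
    """h(lam, z) = det[I - B(1)B(-1)] for the |.| metric, N = 3."""
    R = solve_R(p, N, lam)
    B = {k: [[0.0 if i == j else z * R[(i, j, k)]
              for j in range(1, N + 1)] for i in range(1, N + 1)]
         for k in (1, -1)}
    M = [[(1.0 if i == j else 0.0)
          - sum(B[1][i][m] * B[-1][m][j] for m in range(N))
          for j in range(N)] for i in range(N)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

N, d = 3, 1e-5
p = {(i, j, k): 1.0 / (2 * N - 2)
     for i, j, k in product(range(1, N + 1), range(1, N + 1), (1, -1))
     if i != j}
gamma = ((h(p, N, 1.0, 1.0 + d) - h(p, N, 1.0, 1.0 - d))
         / (h(p, N, 1.0 + d, 1.0) - h(p, N, 1.0 - d, 1.0)))
```

Extending the finite differences to second order gives $\sigma^2$ via Eq. (3.5b). For this chain the computed `gamma` agrees with $\gamma_{sym} = (N-2)/(2N-2) = 1/4$ from Section 4.3.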
By a first-step analysis based on the Markov property, we show the following proposition in Section 6.1:
Proposition 5.3.
The set of functions $R^{(k)}_{i,j}(\lambda)$ introduced in Eq. (5.5) satisfies Eq. (3.2) for $\lambda \in [0, 1]$. In Section 6.2, we use Perron–Frobenius theory to prove:
Proposition 5.4.
The Markov chain $\{W_n\}_{n \ge 0}$ is transient. As a direct corollary, we have:
Corollary 5.5.
The set of functions $R^{(k)}_{i,j}(\lambda)$ introduced in Eq. (5.5) satisfies $R^{(k)}_{i,j}(1) < 1$.

Proof. Proposition 5.4 implies that $R^{(k)}_{i,j}(1) = P_{e_i}\big( T(0, A^{(k)}_{i,j}) < \infty \big) < 1$. □
Proposition 5.6.
For a given $\lambda \in [0, 1]$, the only solution to Eq. (3.2) satisfying Property (R2) is given by the $R^{(k)}_{i,j}(\lambda)$ introduced in Eq. (5.5).

For $x, y \in X$, define the generating function

  $S(x, y; \lambda) := \sum_{n=0}^{\infty} P_x(W_n = y)\, \lambda^n$.   (5.6)

Next, define the generating function associated with first visits from an arbitrary word to be

  $R(x, y; \lambda) := \sum_{n=0}^{\infty} P_x(T(0, y) = n)\, \lambda^n$.   (5.7)

Note that these functions are identically zero unless $x, y \in X_i$ for some $1 \le i \le N$, by Lemma 2.5. For any $w \in X_i$, we have $S(e_i, w; \lambda) = R(e_i, w; \lambda)\, S(w, w; \lambda)$, using the strong Markov property with the first hitting time of $w$. In Section 6.4, by obtaining an exponential bound on $P_{e_i}(W_n = e_i)$, we show the following proposition:

Proposition 5.7.
The functions $S(x, y; \lambda)$ and $R(x, y; \lambda)$ introduced in Eqs. (5.6) and (5.7), respectively, have radii of convergence strictly greater than 1. These ingredients allow us to prove Proposition 3.1.
Proof of Proposition 3.1.
We will show that the $R^{(k)}_{i,j}$ defined in Eq. (5.5) are the unique functions satisfying Properties (R1)–(R3).

Since $R^{(k)}_{i,j}(\lambda) = R(e_i, A^{(k)}_{i,j}; \lambda)$ for each $(i, j, k)$, by Proposition 5.7 there is an $\epsilon > 0$ such that each $R^{(k)}_{i,j}$ is complex analytic in $D_{1+\epsilon}$, as required for Property (R1). Property (R2) is satisfied for $\lambda \in [0, 1)$ by definition and for $\lambda = 1$ by Corollary 5.5. By Proposition 5.3 and Property (R1), the $R^{(k)}_{i,j}$ satisfy Eq. (3.2) in $D_{1+\epsilon}$, as required for Property (R3). Finally, by Proposition 5.6 and Property (R1), the $R^{(k)}_{i,j}$ are unique, which completes the proof. □

We define

  $G_i(\lambda, z) := E_{e_i}\Big[ \sum_{n=0}^{\infty} z^{|W_n|_L} \lambda^n \Big]$.   (5.8)

Let $K(\lambda, z)$ be the $2N \times 2N$ matrix whose blocks are

  $K(\lambda, z) := \begin{pmatrix} 0 & B(1; \lambda, z) \\ B(-1; \lambda, z) & 0 \end{pmatrix}$,   (5.9)

where $B(k; \lambda, z)$ is defined in Eq. (3.3) and $0$ is the $N \times N$ zero matrix. For the proof of Theorem 3.2, we require the following two propositions, whose proofs are postponed until Section 6.5.

Proposition 5.8.
In the region $(\lambda, z) \in (0, 1] \times (0, 1]$, the matrix $K(\lambda, z)$ has non-negative entries and is irreducible. In the region $(\lambda, z) \in (0, 1) \times (0, 1]$, the spectral radius of $K(\lambda, z)$ is strictly less than 1.

Proposition 5.9. Let $s(\lambda)$ and $v(i)$ be $N$-vectors whose entries satisfy $s_j(\lambda) = S(e_j, e_j; \lambda)$, $v_j(i) = \delta_{ij}$, and let $\bar{s}(\lambda)$ and $\bar{v}(i)$ be the $2N$-vectors $\bar{s}(\lambda) = (s(\lambda), s(\lambda))$ and $\bar{v}(i) = (v(i), v(i))$. Then we have

  $G_i(\lambda, z) = S(e_i, e_i; \lambda) + \sum_{d=1}^{\infty} \bar{v}^T(i)\, K(\lambda, z)^d\, \bar{s}(\lambda)$,   (5.10)

where $G_i(\lambda, z)$ is defined in Eq. (5.8) and $K(\lambda, z)$ is defined in Eq. (5.9).

We are now ready to prove Theorem 3.2.
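Behind Eqs. (5.9) and (5.15) is the block-determinant identity $\det[I - K] = \det[I - B(1)B(-1)]$, which follows from a Schur complement together with $\det[I - B_+ B_-] = \det[I - B_- B_+]$. A quick numerical check (our illustration, with arbitrary random matrices in place of $B(\pm 1; \lambda, z)$):

```python
import random

def det(mat):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]
    n, d = len(a), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        if abs(a[p][c]) < 1e-12:
            return 0.0
        if p != c:
            a[c], a[p] = a[p], a[c]
            d = -d
        d *= a[c][c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n):
                a[r][k] -= f * a[c][k]
    return d

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

rng = random.Random(1)
N = 3
Bp = [[rng.uniform(0, 0.3) for _ in range(N)] for _ in range(N)]
Bm = [[rng.uniform(0, 0.3) for _ in range(N)] for _ in range(N)]

# I - K, with K the 2N x 2N block matrix [[0, Bp], [Bm, 0]]
IK = [[(1.0 if i == j else 0.0)
       - (Bp[i][j - N] if i < N <= j else Bm[i - N][j] if j < N <= i else 0.0)
       for j in range(2 * N)] for i in range(2 * N)]

# I - Bp Bm, the N x N determinant appearing in h of Eq. (3.4)
BpBm = matmul(Bp, Bm)
IB = [[(1.0 if i == j else 0.0) - BpBm[i][j] for j in range(N)]
      for i in range(N)]
```

The two determinants agree to machine precision, which is how the $2N \times 2N$ problem collapses to the $N \times N$ determinant $h$ of Eq. (3.4).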
Proof of Theorem 3.2.
It is enough to show Theorem 3.2 with initial condition $W_0 = e_i$ for each $1 \le i \le N$. We will apply Theorem 5.1 with $Y_n = |W_n|_L$. Then $G_i$ in Eq. (5.8) is $G$ in Eq. (5.1).

By Proposition 5.8, there is an $\varepsilon > 0$ such that, for $\lambda, z \in (1-\varepsilon, 1)$,
$$\sum_{d=0}^{\infty} \bar{v}^T(i)\, K(\lambda,z)^d\, \bar{s}(\lambda) = \bar{v}^T(i)\, (I - K(\lambda,z))^{-1}\, \bar{s}(\lambda) = \frac{\bar{v}^T(i)\, \mathrm{adj}[I - K(\lambda,z)]\, \bar{s}(\lambda)}{\det[I - K(\lambda,z)]}, \tag{5.11}$$
where $\mathrm{adj}[\cdot]$ is the adjugate matrix. Applying this to Eq. (5.10) gives
$$G_i(\lambda,z) = \frac{C_i(\lambda,z)}{\det[I - K(\lambda,z)]}, \tag{5.12}$$
where $C_i(\lambda,z)$ depends on $\mathrm{adj}[I - K]$ and on $S(e_i,e_i;\lambda)$. Therefore, $C_i(\lambda,z)$ and $\det[I - K(\lambda,z)]$ are both polynomial functions of the $R^{(k)}_{i,j}(\lambda)$ and $S(e_i,e_i;\lambda)$. By Proposition 5.7, these functions are analytic for $\lambda \in D_\varepsilon$ with a possibly smaller $\varepsilon > 0$. In addition, $C_i(\lambda,z)$ and $\det[I - K(\lambda,z)]$ both depend on $z$ through a finite number of positive powers, and hence they are complex analytic in a neighborhood of $z = 1$. It follows that $C_i(\lambda,z)$ and $\det[I - K(\lambda,z)]$ are complex analytic in the region $\lambda \in D_\varepsilon$, $|z - 1| < \varepsilon$.
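Equation (5.11) is the standard Neumann-series identity: when the spectral radius of a matrix is below 1, the geometric series of its powers sums to $(I-K)^{-1}$, which can in turn be written as the adjugate over the determinant. A small numeric sanity check of these three expressions (the matrix entries below are arbitrary placeholders, not quantities from the model):

```python
import numpy as np

# Check: sum_{d>=0} K^d = (I - K)^{-1} = adj[I - K] / det[I - K]
# for a matrix K with spectral radius < 1.  K is an arbitrary example.
K = np.array([[0.0, 0.3],
              [0.4, 0.0]])
assert max(abs(np.linalg.eigvals(K))) < 1  # spectral radius < 1

I = np.eye(2)
neumann = sum(np.linalg.matrix_power(K, d) for d in range(200))
inverse = np.linalg.inv(I - K)

# adjugate of a 2x2 matrix [[a, b], [c, d]] is [[d, -b], [-c, a]]
A = I - K
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
ratio = adj / np.linalg.det(A)

print(np.allclose(neumann, inverse), np.allclose(inverse, ratio))
```

The truncation at 200 terms is harmless here because the spectral radius is about 0.35, so the series remainder is negligible.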
It remains to show that $C_i(1,1) \ne 0$. By Proposition 5.8, $K(\lambda,z)$ is non-negative and irreducible so, by Perron–Frobenius theory, there is a simple eigenvalue $\mu_{\mathrm{PF}}(\lambda,z)$ of $K(\lambda,z)$ equal to the spectral radius of $K(\lambda,z)$. Also by Proposition 5.8, we have $\mu_{\mathrm{PF}}(\lambda,z) < 1$ for $(\lambda,z) \in [0,1) \times (0,1]$. For $z = 1$ we have
$$G_i(\lambda, 1) = \frac{1}{1-\lambda} = \frac{C_i(\lambda,1)}{\det[I - K(\lambda,1)]}. \tag{5.13}$$
Therefore, we must have $\mu_{\mathrm{PF}}(1,1) \ge 1$. By Proposition 5.7, the entries of $K(\lambda,1)$ are complex analytic in a neighborhood of $\lambda = 1$. Therefore, $\mu_{\mathrm{PF}}(\lambda,1)$ is also complex analytic in a neighborhood of $\lambda = 1$ [16], and so we have $\mu_{\mathrm{PF}}(1,1) = 1$.

The characteristic polynomial of $K(\lambda,1)$ satisfies
$$\det[x I - K(\lambda,1)] = (x - \mu_{\mathrm{PF}}(\lambda,1))\, k(x,\lambda), \tag{5.14}$$
where $k(x,\lambda)$ is a polynomial in $x$ with $k(\mu_{\mathrm{PF}}(\lambda,1), \lambda) \ne 0$. This implies that the function $\det[I - K(\lambda,1)]$ has a zero of order one at $\lambda = 1$. Hence, by Eq. (5.13), we have that $C_i(1,1) \ne 0$.

We conclude that Theorem 5.1 applies to the random variables $\{|W_n|_L\}_{n \ge 0}$ with $g(\lambda,z) = \det[I - K(\lambda,z)]$. Referring to Eq. (5.9), this can also be written in the form
$$g(\lambda,z) = \det[I - B(1;\lambda,z)\, B(-1;\lambda,z)]. \tag{5.15}$$
This completes the proof of Theorem 3.2. $\square$

6. Proofs of the main steps
The following subsections contain the proofs of the propositions used in the proof of Theorem 3.2.

6.1. Proof of Proposition 5.3.
Proof of Proposition 5.3.
By Lemma 2.5, we have
$$P(T(m, xy) = \ell \mid W_m = x) = P_{e_{t(x)}}(T(0, y) = \ell). \tag{6.1}$$
We can assume that $(i,j,k) = (1,2,1)$. Conditioning on the first step $W_1$ gives
$$\mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})}] = p^{(1)}_{1,2}\, \mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(1)}_{1,2}] \tag{6.2a}$$
$$\qquad + \sum_{m \ne 1,2} p^{(1)}_{1,m}\, \mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(1)}_{1,m}] \tag{6.2b}$$
$$\qquad + \sum_{m \ne 1} p^{(-1)}_{1,m}\, \mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(-1)}_{1,m}]. \tag{6.2c}$$
Let us consider these terms line by line. In line (6.2a), on the event $W_1 = A^{(1)}_{1,2}$ we have $T(0, A^{(1)}_{1,2}) = 1$, thus $\mathbb{E}[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(1)}_{1,2}] = \lambda$.

Consider any term in the sum on line (6.2b). On the event $\{W_1 = A^{(1)}_{1,m}\}$ we have $T(0, A^{(1)}_{1,2}) = 1 + T(1, A^{(1)}_{m,2})$ by definition. Hence, by Eq. (6.1),
$$\mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(1)}_{1,m}] = \lambda\, \mathbb{E}_{e_m}[\lambda^{T(0, A^{(1)}_{m,2})}] = \lambda R^{(1)}_{m,2}(\lambda). \tag{6.3}$$

Consider any term in the sum on line (6.2c). If the process is at $A^{(-1)}_{1,m}$ with $m \ne 1$ and visits $A^{(1)}_{1,2}$ at a later time, then by Lemma 2.5 it must visit $e$ before visiting $A^{(1)}_{1,2}$. On the event $\{W_1 = A^{(-1)}_{1,m}\}$ the first visit to $e$ happens at step $1 + T(1, A^{(-1)}_{m,1})$. Conditioning on the value of $T(0, A^{(-1)}_{m,1})$ and applying Eq. (6.1) gives
$$\mathbb{E}_e[\lambda^{T(0, A^{(1)}_{1,2})} \mid W_1 = A^{(-1)}_{1,m}] = \lambda\, \mathbb{E}_{e_m}[\lambda^{T(0, A^{(-1)}_{m,1} A^{(1)}_{1,2})}] = \lambda R^{(-1)}_{m,1}(\lambda)\, R^{(1)}_{1,2}(\lambda). \tag{6.4}$$
Collecting all these terms gives Eq. (3.2). $\square$

6.2. Proof of Proposition 5.4.
Proof of Proposition 5.4.
Suppose that $\{W_n\}_{n \ge 0}$ is recurrent for a given initial condition. Then, by Lemma 2.5, it is recurrent for any initial condition. Moreover, for a given initial condition $e_i$, the hitting times $T(0, A^{(k)}_{i,j})$ are almost surely finite, hence $R^{(k)}_{i,j}(1) = 1$ for each $(i,j,k)$.

Let $D^{(k)}_{i,j}(\lambda) := \mathrm{d}R^{(k)}_{i,j}/\mathrm{d}\lambda$; we have $0 < D^{(k)}_{i,j}(\lambda) < \infty$ for $0 < \lambda < 1$. Introduce the $2N(N-1)$-vectors $\mathbf{d} := (D^{(k)}_{i,j})$ and $\mathbf{r} := (R^{(k)}_{i,j})$. Differentiating Eq. (3.2) with respect to $\lambda$ gives the linear system
$$\mathbf{d} = \lambda^{-1} \mathbf{r} + M(\lambda)\, \mathbf{d}, \tag{6.5}$$
where $M(\lambda)$ is a $2N(N-1) \times 2N(N-1)$ matrix whose entries are
$$M(\lambda)_{(i',j',k')(i,j,k)} = \begin{cases} \lambda p^{(k)}_{i,i'} & i' \ne j,\ j' = j,\ k' = k, \\ \lambda p^{(-k)}_{i,i'} R^{(k)}_{i,j}(\lambda) & j' = i,\ k' \ne k, \\ \lambda \sum_{\ell \ne i} p^{(-k)}_{i,\ell} R^{(-k)}_{\ell,i}(\lambda) & i = i',\ j = j',\ k = k', \\ 0 & \text{otherwise}. \end{cases} \tag{6.6}$$
By Remark 5.2, $M(\lambda)$ extends continuously to $\lambda \in [0,1]$. By Lemma A.1, $M(\lambda)$ is primitive for $\lambda \in (0,1]$, so it has a Perron–Frobenius eigenvalue $\mu(\lambda) > 0$. There is a corresponding eigenvector $v(\lambda)$ with positive entries, and all other eigenvalues of $M(\lambda)$ are smaller than $\mu(\lambda)$ in norm (see for example [30]). Since $\mathbf{r} > \mathbf{0}$, by Eq. (6.5) we have $\mathbf{d} > M\mathbf{d}$, and multiplying this vector inequality from the left with $v^T$ gives
$$v^T \mathbf{d} > v^T M \mathbf{d} = \mu\, v^T \mathbf{d}. \tag{6.7}$$
This shows that $\mu(\lambda) < 1$ for $\lambda \in (0,1)$ by the positivity of $\mathbf{d}$. We will show that $\mu(\lambda) > 1$ for some $\lambda \in (0,1)$, which proves the lemma by contradiction.
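The Perron–Frobenius facts used above (a non-negative primitive matrix has a simple, positive leading eigenvalue equal to its spectral radius, with a strictly positive eigenvector) can be checked numerically on a toy matrix; the entries below are arbitrary and do not come from Eq. (6.6):

```python
import numpy as np

# A non-negative primitive matrix (its square is entrywise positive).
# Row sums are all 0.7, so the Perron-Frobenius eigenvalue is 0.7 here.
K = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.4],
              [0.1, 0.6, 0.0]])

eigvals, eigvecs = np.linalg.eig(K)
idx = np.argmax(eigvals.real)
mu_pf = eigvals[idx]

# mu_PF is real, equals the spectral radius, and is simple
assert abs(mu_pf.imag) < 1e-9
assert np.isclose(abs(mu_pf), max(abs(eigvals)))
assert sum(np.isclose(abs(eigvals), abs(mu_pf))) == 1

# the Perron eigenvector can be normalized to be strictly positive
v = eigvecs[:, idx].real
v = v / v[np.argmax(abs(v))]
print(all(v > 0))
```

The constant row sums are a convenience that pins down the leading eigenvalue exactly; primitivity, not constant row sums, is what the proof actually uses.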
We will now show that there is a non-zero vector $u = (u^{(k)}_{i,j})$ that satisfies $(M(1) - I)u = \mathbf{0}$ and
$$u^{(k)}_{i,j} = (1 - \delta_{j,1})\tilde{u}^{(k)}_j - (1 - \delta_{i,1})\tilde{u}^{(k)}_i, \tag{6.8}$$
for some nonzero numbers $\{\tilde{u}^{(k)}_i : 1 \le i \le N,\ k \in \{-1,1\}\}$. We introduce the $N \times 2N$ matrix
$$J := (x_{i,(\ell,m)})_{1 \le i,\ell \le N,\ m \in \{-1,1\}}, \tag{6.9a}$$
$$x_{i,(\ell,m)} := -p^{(m)}_{i,\ell}(1 - \delta_{\ell,1})(1 - \delta_{\ell,i}) + \delta_{i,\ell}(1 - \delta_{i,1}) \sum_{j \ne i} p^{(m)}_{i,j}. \tag{6.9b}$$
Suppose that $u$ is of the form in Eq. (6.8). Then for each $(i,j,k)$ we have
$$[(M(1) - I)u]_{(i,j,k)} = \sum_{\ell \ne i,j} p^{(k)}_{i,\ell} u^{(k)}_{\ell,j} + \sum_{\ell \ne i} p^{(-k)}_{i,\ell} u^{(-k)}_{\ell,i} - u^{(k)}_{i,j} \sum_{\ell \ne i} p^{(k)}_{i,\ell} \tag{6.10a}$$
$$\qquad = \sum_{\ell \ne i} \sum_{m = \pm 1} p^{(m)}_{i,\ell} \left[ (1 - \delta_{i,1})\tilde{u}^{(m)}_i - (1 - \delta_{\ell,1})\tilde{u}^{(m)}_\ell \right] \tag{6.10b}$$
$$\qquad = [J\tilde{u}]_i, \tag{6.10c}$$
where we have used $R^{(k)}_{i,j}(1) = 1$ in Eq. (6.10a). Hence $J\tilde{u} = \mathbf{0}$ implies $(M(1) - I)u = \mathbf{0}$ if $u$ is given by Eq. (6.8). Since $J$ has dimensions $N \times 2N$, there is a non-trivial vector $\tilde{u}$ in its null-space. Then the vector $u$ defined via Eq. (6.8) satisfies $(M(1) - I)u = \mathbf{0}$, which shows that 1 is an eigenvalue of $M(1)$. Since $u^{(k)}_{i,j} = -u^{(k)}_{j,i}$ for all $i,j,k$, the entries of the corresponding eigenvector cannot be all positive, so 1 cannot be the Perron–Frobenius eigenvalue of $M(1)$. This implies $\mu(1) > 1$.

Since $\mu(\lambda)$ is a simple root of the characteristic polynomial of $M(\lambda)$, it is continuous in the coefficients of that polynomial [36]. These coefficients are continuous functions of $\lambda$ on $[0,1]$, hence $\mu(\lambda)$ is a continuous function of $\lambda$, so there exists a $\lambda \in (0,1)$ such that $\mu(\lambda) > 1$. $\square$

6.3. Proof of Proposition 5.6.
The proof of Proposition 5.6 follows well-known techniques in branching processes; see Sevastyanov [31].
Proof of Proposition 5.6.
Fix a given $\lambda \in [0,1]$, and consider the following fixed-point equation for $2N(N-1)$-vectors:
$$\mathbf{q} = f(\mathbf{q}, \lambda). \tag{6.11}$$
The vector entries are labeled by the multi-index $\ell = (i,j,k)$, with
$$f_\ell(\mathbf{q}, \lambda) = f^{(k)}_{i,j}(\mathbf{q}, \lambda) = \lambda \left( p^{(k)}_{i,j} + \sum_{m \ne i,j} p^{(k)}_{i,m} q^{(k)}_{m,j} + \sum_{m \ne i} p^{(-k)}_{i,m} q^{(-k)}_{m,i} q^{(k)}_{i,j} \right). \tag{6.12}$$

Observe that each $f_\ell(\mathbf{q}, \lambda)$ is a quadratic polynomial in $\mathbf{q}$ whose coefficients are non-negative numbers summing to at most 1. Therefore, for $2N(N-1)$-vectors $\mathbf{x}, \mathbf{y}$, we have
$$\mathbf{0} \le \mathbf{x} < \mathbf{y} \le \mathbf{1} \implies \mathbf{0} < f(\mathbf{x}, \lambda) < f(\mathbf{y}, \lambda) \le \mathbf{1}, \tag{6.13}$$
where $\mathbf{1}$ is a vector with unit entries. Let $\{\mathbf{a}_n(\lambda)\}_{n \ge 0}$ be a sequence defined recursively as
$$\mathbf{a}_{n+1}(\lambda) = f(\mathbf{a}_n(\lambda), \lambda), \qquad \mathbf{a}_0(\lambda) = f(\mathbf{0}, \lambda). \tag{6.14}$$
By Lemma A.2 and Corollary 5.5,
$$(\mathbf{a}_n(\lambda))_\ell \le \sum_{m=0}^{2^{n+1}-1} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = m \right) \lambda^m \le R^{(k)}_{i,j}(\lambda) < 1, \qquad n \ge 0. \tag{6.15}$$
Let $\mathbf{q}^\star(\lambda)$ be a vector whose entries are $q^\star_\ell(\lambda) = R^{(k)}_{i,j}(\lambda)$. By Eqs. (6.13) and (6.15) we have that $\{\mathbf{a}_n(\lambda)\}_{n \ge 0}$ is a strictly increasing sequence bounded above by $\mathbf{q}^\star(\lambda)$. Therefore, $\lim_{n\to\infty} \mathbf{a}_n(\lambda) = \mathbf{q}^\star(\lambda)$, where $f(\mathbf{q}^\star, \lambda) = \mathbf{q}^\star$ and $\mathbf{0} < \mathbf{q}^\star(\lambda) < \mathbf{1}$.

We will now show that $\mathbf{q}^\star(\lambda)$ is the only solution to Eq. (6.11) satisfying $\mathbf{0} < \mathbf{q}(\lambda) < \mathbf{1}$. Suppose that there is an $\mathbf{r}(\lambda) \ne \mathbf{q}^\star(\lambda)$ such that $\mathbf{r} = f(\mathbf{r}, \lambda)$ and $\mathbf{0} < \mathbf{r}(\lambda) < \mathbf{1}$. Applying the function $f(\cdot, \lambda)$ repeatedly to both sides of the inequality $\mathbf{0} < \mathbf{r}(\lambda)$ and using Eq. (6.13) we get $\mathbf{q}^\star(\lambda) \le \mathbf{r}(\lambda)$. We will drop the $\lambda$ dependence for the remainder of this proof. Draw the line $\mathbf{z}(\theta) = \mathbf{q}^\star + (\mathbf{r} - \mathbf{q}^\star)\theta$; there will be a point $\tilde{\mathbf{r}} = \mathbf{z}(\tilde\theta) \le \mathbf{1}$ on this line such that $\tilde{r}_\ell = 1$ for some $\ell$, with $\tilde\theta > 1$. We therefore have $f_\ell(\tilde{\mathbf{r}}) \le \tilde{r}_\ell$ by Eq. (6.13). Let $\varphi(\theta) = f_\ell(\mathbf{z}(\theta)) - z_\ell(\theta)$; then we have
$$\varphi(0) = f_\ell(\mathbf{q}^\star) - q^\star_\ell = 0, \qquad \varphi(\tilde\theta) = f_\ell(\tilde{\mathbf{r}}) - \tilde{r}_\ell \le 0. \tag{6.16}$$
By direct computation, and since $f_\ell$ is a quadratic polynomial with non-negative coefficients,
$$\varphi''(\theta) = \sum_{m,n} (r_m - q^\star_m)(r_n - q^\star_n) \frac{\partial^2 f_\ell}{\partial z_m \partial z_n} \ge 0. \tag{6.17}$$
Since $f_\ell$ is nonlinear, $\varphi''(\theta)$ is not identically zero, so convexity gives $\varphi(\theta) < 0$ for $\theta \in (0, \tilde\theta)$. In particular, $\varphi(1) < 0$, so $f_\ell(\mathbf{r}) < r_\ell$: a contradiction. $\square$

6.4. Proof of Proposition 5.7.
In this section, we show that the functions $S(x,y;\lambda)$ and $R(x,y;\lambda)$, introduced in Eqs. (5.6) and (5.7) respectively, have radii of convergence strictly greater than 1. This would follow from Proposition 8.1 in Lalley (2001) [17]; however, the proof of that proposition is incomplete. A correction was provided to us by the author via personal communication, and he kindly allowed us to reproduce the corrected proof here, adapted to our case.

To prove Proposition 5.7, we require the following lemma, which is a consequence of the Azuma–Hoeffding inequality for bounded submartingales [7].
Lemma 6.1.
Let $\xi_1, \xi_2, \ldots$ be a sequence of Bernoulli random variables adapted to a filtration $\{\mathcal{F}_n\}_{n \ge 0}$. Assume that there exists $p > 0$ such that, for every $n \ge 0$,
$$P(\xi_{n+1} = 1 \mid \mathcal{F}_n) \ge p. \tag{6.18}$$
Then for every $\alpha < p$, there exist $\beta < 1$ and $C < \infty$ such that, for all $n \ge 1$,
$$P\left( \sum_{i=1}^{m} \xi_i \le \alpha m \text{ for some } m \ge n \right) \le C\beta^n. \tag{6.19}$$

Proof of Proposition 5.7.
Fix any $1 \le i \le N$. For any $x, y \in X_i$ we have $P_x(W_n = y) \ge P_x(T(0,y) = n)$, so it will be sufficient to show that the radius of convergence of $S(x,y;\lambda)$ is strictly greater than 1. We begin by showing that the radii of convergence of the $S(x,y;\lambda)$ are all the same.

Let $x, x', y, y' \in X_i$. By Lemma 2.5, there exists a positive-probability path from $x$ to $y$ that passes through $x'$ and then $y'$ on the way. Suppose that the shortest path from $x$ to $x'$ has $\ell_1$ steps, from $y'$ to $y$ has $\ell_2$ steps, and from $x'$ to $y'$ has $n'$ steps. Let $k = \ell_1 + \ell_2$; then by the Markov property we have that, for all $n \ge n'$,
$$P_x(W_{n+k} = y) \ge P_x(W_{n+\ell_1+\ell_2} = y,\ W_{n+\ell_1} = y',\ W_{\ell_1} = x') \ge \varepsilon\, P_{x'}(W_n = y'), \tag{6.20}$$
where $\varepsilon > 0$ does not depend on $n$. By the same argument, there exist a $k' \ge 0$ and $\varepsilon' > 0$ such that, for all $n$,
$$P_{x'}(W_{n+k'} = y') \ge \varepsilon'\, P_x(W_n = y). \tag{6.21}$$
Therefore, $S(x,y;\lambda)$ and $S(x',y';\lambda)$ have the same radii of convergence.

We will now show that $S(e_i, e_i; \lambda)$ has radius of convergence strictly greater than 1. To do this, we will show that there are constants $C < \infty$ and $\beta < 1$ such that $P_{e_i}(W_n = e_i) \le C\beta^n$. We will write $e := e_i$ for the remainder of this proof.

By Assumptions 2.2, 2.3 and Lemma 2.5, the quantity $c(w) := P_w(|W_n| > |w| \text{ for all } n > 0)$ is well defined. (If $w$ is non-empty, $c(w)$ only depends on the last generator in the reduced representation of $w$.) By Proposition 5.4, our process is transient and hence $c(A^{(k)}_{i,j}) > 0$ for all $(i,j,k)$. For any word $w$, we can append at most four generators at the end of $w$ to produce a word ending in $A^{(k)}_{i,j}$ in a way that the length of the word strictly increases during this process. Hence, by the Markov property, there exists a $q > 0$ such that
$$P_w(|W_n| > |w| \text{ for all } n > 0) \ge q, \qquad \forall\, w \in X. \tag{6.22}$$

We fix $m \ge 1$ such that $mq > 1 - q$. We define the stopping times $\tau_k$ inductively such that $\tau_0 = 0$ and
$$\tau_{k+1} := \min\{ n > \tau_k : |W_n| - |W_{\tau_k}| \in \{-1, m\} \}. \tag{6.23}$$
By the transience of our process, $\tau_k < \infty$ almost surely and $W_{\tau_k}$ is well-defined.

Consider the event $\{W_n = e\}$ and take $j = \max\{k : \tau_k < n\}$. By Assumptions 2.2 and 2.3, we have $|W_{m+1}| - |W_m| \in \{-1, 0, 1\}$ for each $m \ge 0$. It follows that $|W_{\tau_j}| \le 1$: otherwise there would be $\tau_j < n' < n$ such that $|W_{n'}| - |W_{\tau_j}| = -1$, so $\tau_{j+1} \le n' < n$, which is a contradiction. Therefore, on the event $\{W_n = e\}$ we have $|W_{\tau_j}| \in \{0, 1\}$ and $\tau_{j+1} = n$.

Let $\gamma \in (0,1)$. Splitting the event $\{W_n = e\}$ using the event $\{\tau_k = n \text{ for some } k \ge \gamma n\}$ and its complement, we get the upper bound
$$P_e(W_n = e) \le P(\tau_{\lceil \gamma n \rceil} > n) + P_e(W_{\tau_k} = e \text{ for some } k \ge \gamma n), \tag{6.24}$$
where $\lceil \cdot \rceil$ is the ceiling function. We will show that there is a $\gamma$ such that each term in Eq. (6.24) has an exponential bound.

Define the Bernoulli random variables $\{\xi_k\}_{k \ge 1}$ by
$$\xi_k := \begin{cases} 1 & mk < \tau_j \le m(k+1) \text{ for some } j \ge 0, \\ 0 & \text{otherwise}. \end{cases} \tag{6.25}$$
These are adapted to the filtration $\mathcal{F}_{m(k+1)}$, $k \ge 1$, where $\mathcal{F}_n$ is the $\sigma$-algebra generated by the first $n$ steps of the process. By Assumption 2.4, we have
$$P_w(|W_1| = |w| + 1) > 0, \qquad \forall\, w \in X. \tag{6.26}$$
By Eq. (6.26) and the Markov property, there exists an $\alpha > 0$ such that
$$P_e(\xi_{k+1} = 1 \mid \mathcal{F}_{mk}) \ge \alpha, \qquad \forall\, k \ge 0. \tag{6.27}$$
Suppose that $\tau_\ell > mb$ for some $\ell, b \ge 1$. If $\sum_{k=1}^{b} \xi_k > \ell$, then there would be at least $\ell$ distinct intervals of length $m$ up to time $mb$ containing a $\tau_k$, which would imply that $\tau_\ell \le mb$. Therefore, we must have
$$P(\tau_\ell > mb) \le P\left( \sum_{k=1}^{b} \xi_k \le \ell \right). \tag{6.28}$$
Consider the first term in Eq. (6.24) and take $\gamma = \alpha/(2m)$. By Eqs. (6.27), (6.28) and Lemma 6.1, we have that there exist constants $C < \infty$ and $0 < \beta < 1$ such that
$$P(\tau_{\lceil \gamma n \rceil} > n) \le P\left( \sum_{k=1}^{\ell} \xi_k \le (\alpha/2)\ell \text{ for some } \ell \ge n/m \right) \le C\beta^n, \qquad \forall\, n \ge 1. \tag{6.29}$$
Next, define the Bernoulli random variables $\{\zeta_k\}_{k \ge 1}$ by
$$\zeta_{k+1} := \begin{cases} 1 & |W_{\tau_{k+1}}| - |W_{\tau_k}| = m, \\ 0 & |W_{\tau_{k+1}}| - |W_{\tau_k}| = -1. \end{cases} \tag{6.30}$$
These are adapted to the filtration $(\mathcal{F}_{\tau_{k+1}})_{k \ge 0}$. By Eq. (6.22) and the Markov property, we have
$$P(\zeta_{k+1} = 1 \mid \mathcal{F}_{\tau_k}) \ge q, \qquad \forall\, k \ge 0. \tag{6.31}$$
We fix $r < q$ such that
$$mr - (1 - r) =: \Delta > 0. \tag{6.32}$$
By Eq. (6.31) and Lemma 6.1, there exist constants $K < \infty$ and $0 < \delta < 1$ such that
$$P_e\left( \sum_{\ell=1}^{k} \zeta_\ell \le rk \text{ for some } k \ge n \right) \le K\delta^n, \qquad \forall\, n \ge 1. \tag{6.33}$$
If $\sum_{\ell=1}^{k} \zeta_\ell \ge rk$, then $|W_{\tau_k}| \ge mrk - (1-r)k = k\Delta$ by Eq. (6.32). Therefore,
$$P_e(|W_{\tau_k}| < k\Delta \text{ for some } k \ge n) \le K\delta^n, \qquad \forall\, n \ge 1. \tag{6.34}$$
Considering the second term in Eq. (6.24), by Eq. (6.34) we have that, for all $n \ge 1$,
$$P_e(W_{\tau_k} = e \text{ for some } k \ge \gamma n) \le P_e(|W_{\tau_k}| < k\Delta \text{ for some } k \ge \gamma n) \le K(\delta^\gamma)^n, \tag{6.35}$$
where $0 < \delta^\gamma < 1$. Combining Eqs. (6.29) and (6.35) with Eq. (6.24) gives that $P_e(W_n = e)$ has an exponential bound, which in turn implies that the radius of convergence of $S(e,e;\lambda)$ is strictly greater than 1. Therefore, we also have that the radius of convergence of $S(x,y;\lambda)$ is strictly greater than 1 for all $x, y \in X_i$. Since $i$ was arbitrary, we conclude that the radius of convergence of $S(x,y;\lambda)$ is strictly greater than 1 for all $x, y \in X$, as required. $\square$

6.5. Proof of Propositions 5.8 and 5.9.
We begin by establishing two lemmas we will need for the proofs of Propositions 5.8 and 5.9.
Lemma 6.2.
Suppose that $w_1, w_2 \in X_i$ for some $1 \le i \le N$ and $w_1 \ne w_2$. Consider the reduced composition
$$w_1^{-1} w_2 = \prod_{\ell=1}^{d} A^{(k_\ell)}_{i_\ell, j_\ell}, \qquad d \ge 1,$$
given by Lemma 2.1. Then, we have
$$R(w_1, w_2; \lambda) = \prod_{\ell=1}^{d} R^{(k_\ell)}_{i_\ell, j_\ell}(\lambda). \tag{6.37}$$

Proof.
This follows from Lemma 2.5 and the strong Markov property applied at the stopping times when the process reaches the intermediate words $\prod_{\ell=1}^{m} A^{(k_\ell)}_{i_\ell, j_\ell}$ for each $1 \le m \le d$. $\square$

We introduce the set of words
$$F^{(k)}_{i,j}(d) := \{ w \in X_i : |w| = d,\ t(w) = j,\ K(w) = k \}, \tag{6.38}$$
where $K(w) = k$ if $|w| \ge 1$ and the first generator of $w$ in its reduced representation is of the form $A^{(k)}_{i,j}$. A representative member of $F^{(k)}_{i,j}(d)$ is shown in Figure 6.

Figure 6. One member of the set $F^{(1)}_{1,\,\cdot}(4)$ in the $N = 5$ case.

Let $B(d,k;\lambda,z)$ be the matrix defined by the product
$$B(d, k; \lambda, z) := \prod_{i=1}^{d} B\left( (-1)^{i+1} k;\ \lambda, z \right), \tag{6.39}$$
where $B(k;\lambda,z)$ is defined in Eq. (3.3). Then, we have the following lemma.

Lemma 6.3.
The entries of $B(d,k;\lambda,z)$ satisfy
$$B_{i,j}(d, k; \lambda, z) = \sum_{w \in F^{(k)}_{i,j}(d)} z^{|w|_L}\, R(e_i, w; \lambda), \qquad d \ge 1. \tag{6.40}$$

Proof.

The proof is by induction on $d$. The set $F^{(k)}_{i,j}(1)$ only contains the word $A^{(k)}_{i,j}$. We therefore have
$$B_{i,j}(1, k; \lambda, z) = B_{i,j}(k; \lambda, z) = (1 - \delta_{i,j})\, z^{|A^{(k)}_{i,j}|_L}\, R^{(k)}_{i,j}(\lambda) = \sum_{w \in F^{(k)}_{i,j}(1)} z^{|w|_L}\, R(e_i, w; \lambda). \tag{6.41}$$
Let $k_d := (-1)^{d+1} k$. By the induction hypothesis and Lemma 6.2,
$$B_{i,j}(d+1, k; \lambda, z) = \sum_{\ell} B_{i,\ell}(d, k; \lambda, z)\, B_{\ell,j}(-k_d; \lambda, z)$$
$$= \sum_{\ell} \sum_{w \in F^{(k)}_{i,\ell}(d)} z^{|w|_L}\, R(e_i, w; \lambda)\, (1 - \delta_{\ell,j})\, z^{|A^{(-k_d)}_{\ell,j}|_L}\, R^{(-k_d)}_{\ell,j}(\lambda)$$
$$= \sum_{\ell} (1 - \delta_{\ell,j}) \sum_{w \in F^{(k)}_{i,\ell}(d)} z^{|w|_L + |A^{(-k_d)}_{\ell,j}|_L}\, R\left( e_i, w A^{(-k_d)}_{\ell,j}; \lambda \right)$$
$$= \sum_{w \in F^{(k)}_{i,j}(d+1)} z^{|w|_L}\, R(e_i, w; \lambda), \tag{6.42}$$
where we have used Eq. (2.5) in Eq. (6.42). $\square$
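The alternating products $B(d,k;\lambda,z)$ of Eq. (6.39) arise as the blocks of powers of the block matrix $K(\lambda,z)$ of Eq. (5.9): even powers are block-diagonal and odd powers are block-anti-diagonal. A quick numeric check with arbitrary placeholder blocks standing in for $B(1;\lambda,z)$ and $B(-1;\lambda,z)$:

```python
import numpy as np

# With K = [[0, B1], [B2, 0]], even powers of K are block-diagonal with
# alternating products of B1 and B2, and odd powers block-anti-diagonal.
# B1, B2 are arbitrary placeholder blocks, not model quantities.
rng = np.random.default_rng(0)
B1 = rng.random((3, 3))
B2 = rng.random((3, 3))
Z = np.zeros((3, 3))
K = np.block([[Z, B1], [B2, Z]])

K2 = np.linalg.matrix_power(K, 2)
K3 = np.linalg.matrix_power(K, 3)

ok = (np.allclose(K2[:3, :3], B1 @ B2)            # "B(2, 1)" block
      and np.allclose(K2[3:, 3:], B2 @ B1)        # "B(2, -1)" block
      and np.allclose(K2[:3, 3:], 0)              # off-diagonal vanishes
      and np.allclose(K3[:3, 3:], B1 @ B2 @ B1))  # "B(3, 1)" block
print(ok)
```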
We are now ready to prove Propositions 5.8 and 5.9.
Proof of Proposition 5.8.
In the region $(\lambda,z) \in (0,1] \times (0,1]$, the non-negativity of the entries of $K(\lambda,z)$ immediately follows from the definitions of $|\cdot|_L$ and $R^{(k)}_{i,j}(\lambda)$. For the remainder of the proof, consider the region $(\lambda,z) \in (0,1) \times (0,1]$. The powers of $K(\lambda,z)$ satisfy
$$K^{2d} = \begin{pmatrix} B(2d, 1; \lambda, z) & 0 \\ 0 & B(2d, -1; \lambda, z) \end{pmatrix}, \qquad d \ge 1, \tag{6.43a}$$
$$K^{2d-1} = \begin{pmatrix} 0 & B(2d-1, 1; \lambda, z) \\ B(2d-1, -1; \lambda, z) & 0 \end{pmatrix}, \qquad d \ge 1. \tag{6.43b}$$
The matrix $B(k;\lambda,z)$ has zeroes on its diagonal and strictly positive entries elsewhere; hence it is primitive. It follows that $B(d,k;\lambda,z)$ has strictly positive entries for $d \ge 2$, so $K(\lambda,z)$ is irreducible with period 2. Next, define the double generating function
$$F^{(k)}_{i,j}(\lambda,z) := \sum_{n=0}^{\infty} \mathbb{E}_{e_i}\left[ z^{|W_n|_L}\, I(t(W_n) = j,\ K(W_n) = k) \right] \lambda^n \tag{6.44}$$
$$\qquad\qquad = \sum_{d=1}^{\infty} \sum_{w \in F^{(k)}_{i,j}(d)} z^{|w|_L}\, S(e_i, w; \lambda), \tag{6.45}$$
where $I(\cdot)$ is the indicator function. Writing $S(x,y;\lambda) = R(x,y;\lambda)\, S(y,y;\lambda)$ and noting that $S(y,y;\lambda) \ge 1$, we have that Lemma 6.3 implies
$$B_{i,j}(d, k; \lambda, z) \le \sum_{w \in F^{(k)}_{i,j}(d)} z^{|w|_L}\, S(e_i, w; \lambda), \qquad d \ge 1. \tag{6.46}$$
Combining Eqs. (6.45) and (6.46) and using the fact that $\lambda \in (0,1)$ gives
$$\sum_{d=1}^{\infty} [K(\lambda,z)^d]_{\ell,m} \le \max_{i,j,k} \sum_{d=1}^{\infty} B_{i,j}(d, k; \lambda, z) \le \max_{i,j,k} F^{(k)}_{i,j}(\lambda,z) \le \frac{1}{1-\lambda} < \infty, \tag{6.47}$$
which shows that the spectral radius of $K(\lambda,z)$ is strictly less than 1. $\square$

Proof of Proposition 5.9.
We have
$$G_i(\lambda,z) = \sum_{w \in X_i} z^{|w|_L}\, S(e_i, w; \lambda) = S(e_i, e_i; \lambda) + \sum_{d=1}^{\infty} \sum_{k,j} \sum_{w \in F^{(k)}_{i,j}(d)} z^{|w|_L}\, S(e_i, w; \lambda). \tag{6.48}$$
Applying Lemma 6.3 gives, for $d \ge 1$,
$$\bar{v}^T(i)\, K(\lambda,z)^d\, \bar{s}(\lambda) = \sum_{k} v^T(i)\, B(d, k; \lambda, z)\, s(\lambda)$$
$$= \sum_{k} \sum_{\ell,j} v_\ell(i)\, B_{\ell,j}(d, k; \lambda, z)\, s_j(\lambda)$$
$$= \sum_{k} \sum_{\ell,j} \sum_{w \in F^{(k)}_{\ell,j}(d)} \delta_{i,\ell}\, z^{|w|_L}\, R(e_\ell, w; \lambda)\, S(e_j, e_j; \lambda)$$
$$= \sum_{k,j} \sum_{w \in F^{(k)}_{i,j}(d)} z^{|w|_L}\, S(e_i, w; \lambda). \tag{6.49}$$
Substituting Eq. (6.49) into Eq. (6.48) completes the proof. $\square$

Appendix A. Supplemental proofs
Proof of Lemma 2.1.
Suppose first that two different reduced words $w, w'$ are in the same equivalence class. Then there is a sequence of words $w_0 = w, w_1, w_2, \ldots, w_j = w'$ with $j \ge 1$, where each $w_i$ is obtained from $w_{i-1}$ by a single operation, namely an application of Eq. (2.3). We will call such an application an "up" ("down") type move if it increases (decreases) the number of generating elements in the word.

We start by showing that each local sequence of the form $w_{i-1} \xrightarrow{\text{up}} w_i \xrightarrow{\text{down}} w_{i+1}$ can be edited to a new sequence $w_{i-1}, w_{i-1,1}, w_{i-1,2}, \ldots, w_{i-1,\ell}, w_{i+1}$ such that all down moves appear to the left of all up moves. If the down move does not include the generators that have been inserted by the up move, then the up and down moves commute, so we can place the down move first. There are finitely many cases where the down move includes one of the generators created by the up move. One can check the cases one-by-one to ensure that they can be edited as required; we omit the details here.

The previous statement implies that the entire sequence $w_0, w_1, w_2, \ldots, w_j$ can be edited to a new sequence $\bar{w}_0 = w, \bar{w}_1, \bar{w}_2, \ldots, \bar{w}_k = w'$ such that all down moves appear to the left of all up moves, and $k \ge 1$. But this cannot happen: since $w, w'$ are both reduced, $\bar{w}_0 \to \bar{w}_1$ cannot be a down move and $\bar{w}_{k-1} \to \bar{w}_k$ cannot be an up move. This shows that an arrow cannot have two different reduced representations.

To finish the proof we need to show that each arrow has a reduced representation. For a given composition of finitely many generating elements, we can use the greedy algorithm to apply down steps until we cannot do so anymore. The resulting word cannot include a product of the form $A^{(k)}_{i,j} A^{(k)}_{j,\ell}$, which means that it must be empty or of the form Eq. (2.4). $\square$
The first statement follows directly from the structure of the transition probability function (2.7) and Assumptions 2.2 and 2.3.

To prove (ii), by (i) we may assume that $w_1 = e_i$ for some $1 \le i \le N$. By Assumptions 2.2 and 2.3 we have that $s(W_n) = s(W_{n+1})$ for all $n \ge 0$, hence $s(w) = i$ whenever $e_i$ and $w$ are connected with a positive probability path. On the other hand, if $s(w) = i$ and $w \ne e_i$, then by Lemma 2.1 we have that $w$ can be written in the form
$$w = A^{(k)}_{i_1,i_2} A^{(-k)}_{i_2,i_3} A^{(k)}_{i_3,i_4} \cdots A^{((-1)^{d+1}k)}_{i_d,i_{d+1}} \quad \text{such that } i_\ell \ne i_{\ell+1} \text{ for all } 1 \le \ell \le d \text{ and } i_1 = i, \tag{A.1}$$
for some $d \ge 1$. The partial products of this representation provide a positive probability path from $e_i$ to $w$ by Assumption 2.4. This completes the proof of (ii).

We first prove (iii) in the special case when $w_1 = e_i$, $w_2 = A^{(k)}_{i,j} A^{(-k)}_{j,\ell}$. Let $v_0 = w_1, v_1, \ldots, v_n = w_2$ be a positive probability path for the Markov chain. We need to show that there is a $0 < n' < n$ with $v_{n'} = A^{(k)}_{i,j}$. Let $\alpha = \max\{i < n : |v_i| = 1\}$; this is well defined since $|v_i|$ changes by at most 1 in each step. This also shows that $|v_k| \ge 2$ for $\alpha < k \le n$, and that the first generator in the reduced representation of $v_k$ is $v_\alpha$. Since the first generator of $v_n = w_2$ is $A^{(k)}_{i,j}$, this implies that $v_\alpha = A^{(k)}_{i,j}$. To prove (iii) in the general case we can iterate this argument with the help of (i). $\square$
The matrix $M(\lambda)$ defined in Eq. (6.6) is primitive for all $N \ge 3$ and $\lambda \in (0,1]$.

Proof. To show that $M$ is primitive it is sufficient to show that there is an $n$ such that each entry of $M^n$ is strictly positive. Let $\Gamma(Z)$ be the matrix obtained by replacing all the strictly positive entries of any matrix $Z$ with 1. By the assumption that $\lambda \in (0,1]$, we have
$$\Gamma(M)_{(i',j',k')(i,j,k)} = \delta_{j,j'}\delta_{k,k'} + \delta_{j',i}(1 - \delta_{k',k}), \tag{A.2a}$$
$$\Gamma(M^2)_{(i',j',k')(i,j,k)} = \delta_{j,j'}\delta_{k,k'} + (1 - \delta_{j,j'})(1 - \delta_{k,k'}) + \delta_{j',i}(1 - \delta_{k,k'}) + (1 - \delta_{j',i})\delta_{k,k'}, \tag{A.2b}$$
$$\Gamma(M^3)_{(i',j',k')(i,j,k)} = 1. \tag{A.2c}$$
We remark that $N \ge 3$ is needed here. $\square$
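The criterion used in the proof of Lemma A.1 (primitivity is equivalent to some power $M^n$ being entrywise positive, and only the positivity pattern $\Gamma(M)$ matters) can be implemented directly; the matrix below is an arbitrary non-negative example, not the $M(\lambda)$ of Eq. (6.6):

```python
import numpy as np

# Gamma(Z) replaces every strictly positive entry of Z by 1.
def gamma(Z):
    return (Z > 0).astype(int)

# A non-negative matrix M is primitive iff Gamma(M^n) is all ones for
# some n; Wielandt's bound n <= N^2 - 2N + 2 makes the search finite.
def is_primitive(M, max_power=None):
    n = M.shape[0]
    if max_power is None:
        max_power = n * n - 2 * n + 2
    P = np.eye(n, dtype=int)
    G = gamma(M)
    for _ in range(max_power):
        P = gamma(P @ G)       # positivity pattern of the next power
        if P.min() > 0:
            return True
    return False

# Example: cycles of coprime lengths 2 and 3 make this matrix primitive.
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])
print(is_primitive(M))
```

For contrast, the period-2 matrix `[[0, 1], [1, 0]]` is irreducible but not primitive, and the same routine reports that.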
For the sequence $\{\mathbf{a}_n(\lambda)\}_{n \ge 0}$ defined in Eq. (6.14), we have
$$(\mathbf{a}_n(\lambda))_\ell \le \sum_{m=0}^{2^{n+1}-1} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = m \right) \lambda^m, \qquad n \ge 0, \tag{A.3}$$
where $T(m,x)$ is defined in Eq. (5.4) and $\ell = (i,j,k)$.

Proof. The proof is by induction on $n \ge 0$. We will write $t^{(k)}_{i,j}(\alpha) := P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = \alpha \right)$ to simplify the notation. The base case $n = 0$ is immediate since $t^{(k)}_{i,j}(1) = p^{(k)}_{i,j}$, so $(\mathbf{a}_0(\lambda))_\ell = p^{(k)}_{i,j}\lambda$. By the induction hypothesis and Eq. (6.13), we have
$$(\mathbf{a}_{n+1}(\lambda))_\ell \le \lambda p^{(k)}_{i,j} \tag{A.4a}$$
$$\qquad + \sum_{\alpha=0}^{2^{n+1}-1} \sum_{m \ne i,j} p^{(k)}_{i,m}\, t^{(k)}_{m,j}(\alpha)\, \lambda^{\alpha+1} \tag{A.4b}$$
$$\qquad + \sum_{\alpha=0}^{2^{n+1}-1} \sum_{\beta=0}^{2^{n+1}-1} \sum_{m \ne i} p^{(-k)}_{i,m}\, t^{(-k)}_{m,i}(\alpha)\, t^{(k)}_{i,j}(\beta)\, \lambda^{\alpha+\beta+1}. \tag{A.4c}$$
We show that the right side of Eq. (A.4) can be bounded above by
$$\sum_{m=0}^{2^{n+2}-1} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = m \right) \lambda^m. \tag{A.5}$$
The term in line (A.4a) is bounded by
$$\sum_{m=0}^{2^{n+2}-1} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = m,\ W_1 = A^{(k)}_{i,j} \right) \lambda^m, \tag{A.6}$$
and the term in line (A.4b) is bounded by
$$\sum_{\alpha=0}^{2^{n+2}-1} \sum_{m \ne i,j} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = \alpha,\ W_1 = A^{(k)}_{i,m} \right) \lambda^\alpha. \tag{A.7}$$
Finally, we consider line (A.4c). By Lemma 2.5, we can write
$$P_{e_m}\left( T\left(0, A^{(-k)}_{m,i} A^{(k)}_{i,j}\right) = \beta \right) = \sum_{\alpha=0}^{\beta} t^{(-k)}_{m,i}(\alpha)\, t^{(k)}_{i,j}(\beta - \alpha) \tag{A.8}$$
with $t^{(k)}_{i,j}(0) = 0$. For non-negative numbers $c_{\alpha,\beta}$ indexed by $0 \le \alpha \le n$, $0 \le \beta \le n$, we have
$$\sum_{\alpha=0}^{n} \sum_{\beta=0}^{n} c_{\alpha,\beta} \le \sum_{r=0}^{2n} \sum_{\alpha+\beta=r} c_{\alpha,\beta}. \tag{A.9}$$
Therefore, applying Eq. (A.9) and then Eq. (A.8) results in
$$\sum_{\alpha=0}^{2^{n+1}-1} \sum_{\beta=0}^{2^{n+1}-1} t^{(-k)}_{m,i}(\alpha)\, t^{(k)}_{i,j}(\beta)\, \lambda^{\alpha+\beta} \le \sum_{r=0}^{2^{n+2}-2} P_{e_m}\left( T\left(0, A^{(-k)}_{m,i} A^{(k)}_{i,j}\right) = r \right) \lambda^r. \tag{A.10}$$
Hence, the quantity on line (A.4c) can be bounded above by
$$\sum_{\alpha=0}^{2^{n+2}-1} \sum_{m \ne i} P_{e_i}\left( T\left(0, A^{(k)}_{i,j}\right) = \alpha,\ W_1 = A^{(-k)}_{i,m} \right) \lambda^\alpha, \tag{A.11}$$
which completes the induction step and hence the proof. $\square$
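The monotone iteration of Eq. (6.14), whose partial sums Lemma A.2 bounds, is also how the coupled quadratic system can be solved numerically in practice: iterate $\mathbf{a}_{n+1} = f(\mathbf{a}_n, \lambda)$ from $\mathbf{a}_0 = f(\mathbf{0}, \lambda)$. A minimal sketch on an invented two-component quadratic system with the same shape as Eq. (6.12) (non-negative coefficients summing to at most 1 in each component; the coefficients are placeholders, not model quantities):

```python
import numpy as np

lam = 0.9

def f(q):
    # invented quadratic map with non-negative coefficients summing to 1
    q1, q2 = q
    return np.array([lam * (0.2 + 0.5 * q2 + 0.3 * q1 * q2),
                     lam * (0.4 + 0.3 * q1 + 0.3 * q1 * q2)])

q = f(np.zeros(2))            # a_0 = f(0, lambda)
for _ in range(200):          # a_{n+1} = f(a_n, lambda)
    q_next = f(q)
    # the sequence is non-decreasing (up to float rounding)
    assert np.all(q_next >= q - 1e-12)
    q = q_next

print(np.allclose(q, f(q)), bool(np.all((0 < q) & (q < 1))))
```

The iterates increase monotonically to the minimal fixed point, which here lies strictly inside $(0,1)^2$, mirroring the role of $\mathbf{q}^\star(\lambda)$ in the proof of Proposition 5.6.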
Acknowledgments
The authors are grateful to Steven Lalley for valuable discussions and for his assistance with the proof of Proposition 5.4. B.V. was partially supported by the NSF award DMS-1712551.
References

[1] C. Bélisle, Windings of random walks, Ann. Probab., 17 (1989), pp. 1377–1402.
[2] C. Bélisle and J. Faraway, Winding angle and maximum winding angle of the two-dimensional random walk, J. Appl. Prob., 28 (1991), pp. 717–726.
[3] M. A. Berger, The random walk winding number problem: convergence to a diffusion process with excluded area, J. Phys. A, 20 (1987), pp. 5949–5960.
[4] M. A. Berger and P. H. Roberts, On the winding number problem with finite steps, Adv. Appl. Prob., 20 (1988), pp. 261–274.
[5] R. Brown, Topology and Groupoids, 2006.
[6] N. Desenonges, On the dynamics of Riccati foliations with nonparabolic monodromy representations, Conformal Geometry and Dynamics of the American Mathematical Society, 23 (2019), pp. 164–188.
[7] D. A. Freedman, On tail probabilities for martingales, Ann. Probab., 3 (1975), pp. 100–118.
[8] X. Geng and G. Iyer, Long time asymptotics of heat kernels and Brownian winding numbers on manifolds with boundary, Apr. 2018. Preprint, https://arxiv.org/abs/1804.00368.
[9] L. A. Gilch, Rate of Escape of Random Walks on Regular Languages and Free Products by Amalgamation of Finite Groups, Discrete Mathematics & Theoretical Computer Science, DMTCS Proceedings vol. AI, Fifth Colloquium on Mathematics and Computer Science (2008), pp. 405–420.
[10] A. Grosberg and H. Frisch, Winding angle distribution for planar random walk, polymer ring entangled with an obstacle, and all that: Spitzer–Edwards–Prager–Frisch model revisited, J. Phys. A, 36 (2003), pp. 8955–8981.
[11] J.-C. Gruet, On the length of the homotopic Brownian word in the thrice punctured sphere, Probability Theory and Related Fields, 111 (1998), pp. 489–516.
[12] Y. Guivarc'h, Sur la loi des grands nombres et le rayon spectral d'une marche aléatoire, Astérisque, 74 (1980), pp. 47–98.
[13] D. Holcman and Z. Schuss, The narrow escape problem, SIAM Review, 56 (2014), pp. 213–257.
[14] K. Itô and H. P. McKean, Diffusion processes and their sample paths, Springer, Berlin, 1965.
[15] M. Kac, W. L. Murdock, and G. Szegő, On the eigen-values of certain hermitian forms, Journal of Rational Mechanics and Analysis, 2 (1953), pp. 767–800.
[16] T. Kato, Perturbation theory for linear operators, vol. 132, Springer Science & Business Media, 2013.
[17] S. Lalley, Random walks on regular languages and algebraic systems of generating functions, Contemp. Math., 287 (2001).
[18] S. P. Lalley, Finite range random walk on free groups and homogeneous trees, Ann. Probab., 21 (1993), pp. 2087–2130.
[19] T. J. Lyons and H. P. McKean, Winding of the plane Brownian motion, Adv. Math., 51 (1984), pp. 212–225.
[20] H. P. McKean, Stochastic Integrals, Academic Press, New York, 1969.
[21] H. P. McKean and D. Sullivan, Brownian motion and harmonic functions on the class surface of the thrice punctured sphere, Adv. Math., 51 (1984), pp. 203–211.
[22] P. Messulam and M. Yor, On D. Williams' pinching method and some applications, Journal of the London Mathematical Society, s2-26 (1982), pp. 348–364.
[23] S. K. Nechaev, Statistics of Knots and Entangled Random Walks, World Scientific, Singapore; London, 1996.
[24] J. W. Pitman and M. Yor, The asymptotic joint distribution of windings of planar Brownian motion, Bull. Am. Math. Soc., 10 (1984), pp. 109–111.
[25] ——, Asymptotic laws of planar Brownian motion, Ann. Probab., 14 (1986), pp. 733–779.
[26] ——, Further asymptotic laws of planar Brownian motion, Ann. Probab., 17 (1989), pp. 965–1011.
[27] J. Rudnick and Y. Hu, The winding angle distribution of an ordinary random walk, Journal of Physics A: Mathematical and General, 20 (1987), pp. 4421–4438.
[28] ——, Winding angle of a self-avoiding random walk, Phys. Rev. Lett., 60 (1988), pp. 712–715.
[29] S. Sawyer and T. Steger, The rate of escape for anisotropic random walks in a tree, Probability Theory and Related Fields, 76 (1987), pp. 207–230.
[30] E. Seneta, Non-negative Matrices and Markov Chains, Springer Series in Statistics, Springer New York, 2006.
[31] B. A. Sevastyanov, Branching processes, Nauka, 1971. (In Russian).
[32] F. Spitzer, Some theorems concerning 2-dimensional Brownian motion, Trans. Amer. Math. Soc., 87 (1958), pp. 187–197.
[33] F. Tanaka, Topological classification of Brownian orbits, J. Chem. Phys., 137 (2012), p. 104907.
[34] S. Watanabe, Asymptotic windings of Brownian motion paths on Riemann surfaces, Acta Applicandae Mathematicae, 63 (2000), pp. 441–464.
[35] H. Wen and J.-L. Thiffeault, Winding of a Brownian particle around a point vortex, Philos. Trans. Royal Soc. A, 377 (2019), p. 20180347.
[36]