Tightness of Bernoulli Gibbsian line ensembles
Evgeni Dimitrov, Xiang Fang, Lukas Fesser, Christian Serio, Carson Teitler, Angela Wang, Weitao Zhu
Abstract.
A Bernoulli Gibbsian line ensemble L = (L_1, . . . , L_N) is the law of the trajectories of N − 1 independent Bernoulli random walkers L_1, . . . , L_{N−1} with possibly random initial and terminal locations that are conditioned to never cross each other or a given random up-right path L_N (i.e. L_1 ≥ · · · ≥ L_N). In this paper we investigate the asymptotic behavior of sequences of Bernoulli Gibbsian line ensembles L^N = (L^N_1, . . . , L^N_N) when the number of walkers N tends to infinity. We prove that if one has mild but uniform control of the one-point marginals of the lowest-indexed (or top) curves L^N_1, then the sequence L^N is tight in the space of line ensembles. Furthermore, we show that if the top curves L^N_1 converge in the finite dimensional sense to the parabolic Airy process, then L^N converges to the parabolically shifted Airy line ensemble.

Contents
1. Introduction and main results
2. Line ensembles
3. Properties of Bernoulli line ensembles
4. Proof of Theorem 2.26
5. Bounding the max and min
6. Lower bounds on the acceptance probability
7. Appendix A
8. Appendix B
References

1. Introduction and main results
1.1. Gibbsian line ensembles.
In the last several years there has been significant interest in line ensembles that satisfy what is known as the
Brownian Gibbs property. A line ensemble is merely a collection of random continuous curves on some interval Λ ⊂ ℝ (all defined on the same probability space) that are indexed by a set Σ ⊂ ℤ. In this paper we will almost exclusively have Σ = {1, . . . , N} with N ∈ ℕ ∪ {∞}, and if N = ∞ we use the convention Σ = ℕ. We denote the line ensemble by L and by L_i(ω)(x) := L(ω)(i, x) the i-th continuous function (or line) in the ensemble, and typically we drop the dependence on ω from the notation, as one does for Brownian motion. We say that a line ensemble L satisfies the Brownian Gibbs property if it is non-crossing almost surely, i.e. L_i(s) < L_{i−1}(s) for i = 2, . . . , N and s ∈ Λ, and it satisfies the following resampling invariance. Suppose we sample L and fix two times s, t ∈ Λ with s < t and a finite interval K = {k_1, k_1 + 1, . . . , k_2} ⊂ Σ with k_1 ≤ k_2. We can erase the part of the lines L_k between the points (s, L_k(s)) and (t, L_k(t)) for k = k_1, . . . , k_2 and sample independently k_2 − k_1 + 1 random curves between these points according to the law of k_2 − k_1 + 1 Brownian bridges, which have been conditioned to not intersect each other as well as the lines L_{k_1−1} and L_{k_2+1}, with the convention that L_0 = ∞ and L_{k_2+1} = −∞ if k_2 + 1 ∉ Σ. In this way we obtain a new random line ensemble L′, and the essence of the Brownian Gibbs property is that the law of L′ is the same as that of L. The reader can find a precise definition of the Brownian Gibbs property in Definition 2.8, but for now one can think of a line ensemble that satisfies the Brownian Gibbs property as N random curves, which locally have the distribution of N avoiding Brownian bridges.

Date: November 10, 2020.

Part of the interest behind Brownian Gibbsian line ensembles is that they naturally arise in various models in statistical mechanics, integrable probability and mathematical physics. If N is finite, a natural example of a Brownian Gibbsian line ensemble is given by Dyson Brownian motion with β = 2 (this is the law of N independent one-dimensional Brownian motions all started at the origin and appropriately conditioned to never cross for all positive time). Other important examples of models that satisfy the Brownian Gibbs property include Brownian last passage percolation, which has been extensively studied recently in [16–19], and the
Airy line ensemble (shifted by a parabola) [7, 24]. The Airy line ensemble was first discovered as a scaling limit of the multi-layer polynuclear growth model in [24], where its finite dimensional distribution was derived. Subsequently, in [7] it was shown that the edge of Dyson Brownian motion (or rather a closely related model given by
Brownian watermelons) converges uniformly over compacts to the Airy line ensemble, see Figure 1. This stronger notion of convergence was obtained by utilizing the Brownian Gibbs property, and the latter has led to the proof of many new and interesting properties of the Airy line ensemble [7, 11, 16]. Apart from its inherent beautiful structure, the Airy line ensemble plays a distinguished (conjectural) foundational role in the Kardar-Parisi-Zhang (KPZ) universality class through its relationship to the construction of the Airy sheet in [10].
Figure 1.
Dyson Brownian motion and the Airy line ensemble as its edge scaling limit.

The Airy line ensemble is believed to be a universal scaling limit of not just Dyson Brownian motion but many line ensembles that satisfy a Gibbs property. Recently, it was shown in [9] that uniform convergence to the Airy line ensemble holds for sequences of N non-intersecting Bernoulli, geometric, exponential and Poisson random walks started from the origin as N tends to infinity. These types of results are reminiscent of Donsker's theorem from classical probability theory, which establishes the convergence of generic random walks to Brownian motion. The difference is that as the number of avoiding walkers increases to infinity, one leaves the Gaussian universality class and enters the KPZ universality class. It is worth mentioning that the results in [9] rely on very precise integrable inputs (exact formulas for the finite dimensional distributions) for the random walkers for each fixed N, which are suitable for taking the large N limit – this is one reason only the packed initial condition is effectively treated. For more general initial conditions, the convergence even in the Bernoulli case, which is arguably the simplest, remains widely open.
The goal of the present paper is to investigate asymptotics of N avoiding Bernoulli random walkers with general (possibly random) initial and terminal conditions in the large N limit. The main questions that motivate our work are:

(1) What are sufficient conditions that ensure that the trajectories of N avoiding Bernoulli random walkers are uniformly tight, meaning that they have uniform weak subsequential limits that are ℕ-indexed line ensembles on ℝ?

(2) What are sufficient conditions that ensure that the trajectories of N avoiding Bernoulli random walkers converge uniformly to the Airy line ensemble (shifted by a parabola)?

If L^N = (L^N_1, . . . , L^N_N) denotes the trajectories of the N avoiding Bernoulli random walkers (with L^N_1 ≥ L^N_2 ≥ · · · ≥ L^N_N), we show that as long as L^N_1, under suitable shifts and scales, has tight one-point marginals that (roughly) globally approximate an inverted parabola, one can conclude that the whole line ensemble L^N under the same shifts and scales is uniformly tight. In other words, given a mild but uniform control of the one-point marginals of the lowest-indexed (or top) curve L^N_1, one can conclude that the full line ensemble is tight and moreover that any subsequential limit satisfies the Brownian Gibbs property. This result appears as Theorem 1.1 in the next section and is the main result of the paper. It is worth pointing out that to establish tightness we do not require actual convergence of the marginals, which makes our approach more general than that of [9].
In particular, in [9] the authors assume finite dimensional convergence of L^N to the Airy line ensemble, while our approach does not, making it more suitable for establishing convergence to other Brownian Gibbsian line ensembles, such as the Airy wanderers processes of [1].

Regarding the second question above, we show that if L^N_1, under suitable shifts and scales, converges weakly to the Airy process (the lowest indexed curve in the Airy line ensemble) minus a parabola in the finite dimensional sense, then the whole line ensemble L^N under the same shifts and scales converges uniformly to the Airy line ensemble (again minus a parabola). The latter result is presented as Theorem 1.3 in the next section and is a relatively easy consequence of Theorem 1.1 and the recent characterization result of Brownian Gibbsian line ensembles in [12].

1.2. Main results.
We begin by giving some necessary definitions, which will be further elaborated in Section 2 but will suffice for us to present the main results of the paper. For a, b ∈ ℤ with a ≤ b we denote by ⟦a, b⟧ the set {a, a + 1, . . . , b}. Given T_0, T_1 ∈ ℤ with T_0 ≤ T_1 and N ∈ ℕ, we call a ⟦1, N⟧-indexed Bernoulli line ensemble on ⟦T_0, T_1⟧ a random collection of N up-right paths drawn in the region ⟦T_0, T_1⟧ × ℤ in ℤ² – see the bottom-right part of Figure 2. We denote a Bernoulli line ensemble by L, and L(i, s) is the location of the i-th up-right path at time s for (i, s) ∈ ⟦1, N⟧ × ⟦T_0, T_1⟧. For convenience we also denote by L_i(s) = L(i, s) the i-th up-right path in the ensemble, and one can think of the L_i's as trajectories of Bernoulli random walkers that at each time either stay put or jump up by one. We say that a Bernoulli line ensemble satisfies the Schur Gibbs property if it satisfies the following:

(1) With probability 1 we have L_1(s) ≥ L_2(s) ≥ · · · ≥ L_N(s) for all s ∈ ⟦T_0, T_1⟧.

(2) For any K = {k_1, k_1 + 1, . . . , k_2} ⊂ ⟦1, N − 1⟧ and a, b ∈ ⟦T_0, T_1⟧ with a < b, the conditional law of L_{k_1}, . . . , L_{k_2} in the region D = ⟦a, b⟧ × ℤ, given {L(i, s) : i ∉ K or s ∉ ⟦a + 1, b − 1⟧}, is that of k_2 − k_1 + 1 independent Bernoulli random walks that are conditioned to start from (L_{k_1}(a), . . . , L_{k_2}(a)) at time a, to end at (L_{k_1}(b), . . . , L_{k_2}(b)) at time b, and to never cross each other or the paths L_{k_1−1} or L_{k_2+1} in the interval ⟦a, b⟧ (here we use the convention L_0 = ∞).

In simple words, the above definition states that a Bernoulli line ensemble satisfies the Schur Gibbs property if it is non-crossing and its local distribution is that of avoiding Bernoulli random walk bridges.
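To make point (2) concrete, the conditioned law can be simulated by naive rejection sampling: draw independent Bernoulli bridges with the prescribed endpoints, and accept only configurations that are weakly ordered and respect the boundary paths. The following Python sketch is our own illustration (function names and the boundary-path representation are hypothetical, not from the paper):

```python
import random

def bernoulli_bridge(a, b, x, y):
    """Sample a Bernoulli random walk bridge from (a, x) to (b, y).

    Each unit time step is +0 or +1; conditioning on the endpoints makes
    the set of y - x "up" steps a uniformly random subset of the b - a steps.
    """
    steps = [1] * (y - x) + [0] * ((b - a) - (y - x))
    random.shuffle(steps)
    path = [x]
    for s in steps:
        path.append(path[-1] + s)
    return path  # path[t - a] is the walker's position at time t

def avoiding_bernoulli_bridges(a, b, xs, ys, top=None, bot=None, max_tries=100000):
    """Rejection sampling: redraw independent bridges until they are
    weakly ordered (path_1 >= path_2 >= ...) and fit between the boundary
    paths `top` and `bot` (lists over [a, b], or None for +/- infinity)."""
    for _ in range(max_tries):
        paths = [bernoulli_bridge(a, b, x, y) for x, y in zip(xs, ys)]
        ok = all(
            (top is None or top[t] >= paths[0][t])
            and all(paths[i][t] >= paths[i + 1][t] for i in range(len(paths) - 1))
            and (bot is None or paths[-1][t] >= bot[t])
            for t in range(b - a + 1)
        )
        if ok:
            return paths
    raise RuntimeError("no admissible configuration found")
```

Rejection sampling is only practical for small examples (the acceptance probability degrades quickly with the number of walkers), but it is an exact sampler of the conditioned law whenever it terminates.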
We mention here that in the above definition the curve L_N plays a special role, since we do not assume that its conditional distribution is that of a Bernoulli bridge conditioned to stay below L_{N−1}. Essentially, the curve L_N plays the role of a bottom (random) boundary for our ensemble, and a Bernoulli line ensemble satisfying the Schur Gibbs property can be seen to be equivalent to the statement that it is precisely the law of N − 1 independent Bernoulli bridges that are conditioned to start from some random configuration at time T_0, end at some random configuration at time T_1, and never cross each other or a given random up-right path L_N in the time interval ⟦T_0, T_1⟧. We will refer to Bernoulli line ensembles that satisfy the Schur Gibbs property as Bernoulli Gibbsian line ensembles. We mention that the name Schur Gibbs property originates from the connection between Bernoulli Gibbsian line ensembles and Schur symmetric polynomials, which will be discussed later in Section 8.2.

A natural context in which Bernoulli Gibbsian line ensembles arise is lozenge tilings – see Figure 2 and its caption. To be brief, one can take a finite tileable region in the hexagonal lattice and consider the uniform distribution on all possible tilings of this region with three types of rhombi (also called lozenges). The resulting measure on tilings has a natural Gibbs property, which is that if one freezes the tiling outside of some finite region, the tiling inside that region is conditionally uniform among all possible tilings.
For special choices of tileable domains, uniform lozenge tilings give rise to Bernoulli line ensembles (with deterministic packed starting and terminal conditions), and the tiling Gibbs property translated to the line ensemble becomes the Schur Gibbs property. In Figure 2 one observes that L_N (which is the bottom-most curve in the ensemble) is not uniformly distributed among all up-right paths that stay below L_{N−1} and have the correct endpoints, since it needs to stay above the bottom boundary of the tiled region.

In the remainder of this section we fix a sequence L^N = (L^N_1, . . . , L^N_N) of ⟦1, N⟧-indexed Bernoulli Gibbsian line ensembles on ⟦a_N, b_N⟧, where a_N ≤ 0 and b_N ≥ 0 are integers. Our interest is in understanding the asymptotic behavior of L^N as N → ∞ (i.e. when the number of walkers tends to infinity). Below we list several assumptions on the sequence L^N, which rely on parameters α > 0, p ∈ (0, 1) and λ > 0. The parameter α is related to the fluctuation critical exponent of the line ensemble, and the assumptions below will indicate that L^N_1(0) fluctuates on order N^{α/2}. The parameter p is the global slope of the line ensemble; since we are dealing with Bernoulli walkers the global slope is in [0, 1], and we exclude the endpoints to avoid degenerate cases. The parameter λ is related to the global curvature of the line ensemble, and the assumptions below will indicate that once the slope is removed the line ensemble approximates the parabola −λx². We now turn to formulating our assumptions precisely.

Assumption 1.
We assume that there is a function ψ : ℕ → (0, ∞) such that lim_{N→∞} ψ(N) = ∞, and that a_N < −ψ(N)N^α while b_N > ψ(N)N^α.

The significance of Assumption 1 is that the sequence of intervals [a_N, b_N] (on which the line ensemble L^N is defined), on scale N^α, asymptotically covers the entire real line. The nature of ψ is not important, and any function converging to infinity along the integers works for our purposes.

Assumption 2.
There is a function φ : (0, ∞) → (0, ∞) such that for any ε > 0 we have

(1.1)  sup_{n ∈ ℤ} limsup_{N→∞} ℙ( |N^{−α/2}(L^N_1(nN^α) − pnN^α + λn²N^{α/2})| ≥ φ(ε) ) ≤ ε.

Let us elaborate on Assumption 2 briefly. If n = 0 the statement indicates that N^{−α/2}L^N_1(0) is a tight sequence of random variables, and so α/2 is the fluctuation critical exponent of the ensemble. The transversal critical exponent is α and is reflected in the way time (the argument of L^N_1) is scaled – it is twice α/2, as expected by Brownian scaling. The essence of Assumption 2 is that if one removes a global line with slope p from L^N_1 and rescales by N^{α/2} vertically and N^α horizontally, the resulting curve asymptotically approximates the parabola −λx². The way the statement is formulated, this approximation needs to happen uniformly over the integers, but the choice of ℤ is not important. Indeed, one can replace ℤ with any subset of ℝ that has arbitrarily large and small points, and the choice of ℤ is made for convenience. Equation (1.1) indicates that for each n ∈ ℤ the sequence of random variables X^N_n = N^{−α/2}(L^N_1(nN^α) − pnN^α + λn²N^{α/2}) is tight, but it says a bit more. Namely, it states that if M_n is the family of all possible subsequential limits of {X^N_n}_{N≥1}, then ∪_{n∈ℤ} M_n is itself a tight family of distributions on ℝ. A simple case when Assumption 2 is
Figure 2.
The top-left picture represents a tileable region in the triangular lattice and three types of lozenges. The top-right picture depicts a possible tiling of the region, and the bottom-left picture represents the same tiling under an affine transformation. One draws lines through the mid-points of the vertical sides of the vertical rhombi and the squares, and this gives rise to a collection of random up-right paths. If one shifts these lines down, one obtains a Bernoulli line ensemble – depicted in the bottom-right picture. If one takes the uniform measure on lozenge tilings, the Bernoulli line ensemble one obtains through the above procedure satisfies the Schur Gibbs property.

satisfied is when X^N_n converges to the Tracy-Widom distribution for all n as N → ∞. In this case the family ∪_{n∈ℤ} M_n only contains the Tracy-Widom distribution and so is naturally tight.

The final thing we need to do is embed all of our line ensembles L^N in the same space. The latter is necessary as we want to talk about tightness and convergence of these line ensembles, which presently are defined on different state spaces (remember that the number of up-right paths changes with N). We consider ℕ × ℝ with the product topology coming from the discrete topology on ℕ and the usual topology on ℝ. We let C(ℕ × ℝ) be the space of continuous functions on ℕ × ℝ with the topology of uniform convergence over compacts and corresponding Borel σ-algebra. For each N ∈ ℕ we let

f^N_i(s) = N^{−α/2}(L^N_i(sN^α) − psN^α + λs²N^{α/2}), for s ∈ [−ψ(N), ψ(N)] and i = 1, . . . , N,

and extend f^N_i to ℝ by setting, for i = 1, . . . , N,

f^N_i(s) = f^N_i(−ψ(N)) for s ≤ −ψ(N) and f^N_i(s) = f^N_i(ψ(N)) for s ≥ ψ(N).

If i ≥ N + 1 we define f^N_i(s) = 0 for s ∈ ℝ.
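The centering and scaling above can be sketched in code. The following Python helper is our own illustration (names are hypothetical, and two simplifications are ours: the argument s is clamped to [−ψ, ψ], and s N^α is rounded to the nearest integer time, with the centering p·t taken on that rounded time):

```python
import math

def rescale_curve(L, a_N, b_N, N, alpha, p, lam, psi):
    """Evaluate the centered and rescaled curve f(s) given a discrete path.

    L is a dict mapping integer times t in [a_N, b_N] to path heights, and
    f(s) = N^{-alpha/2} * (L(s N^alpha) - p s N^alpha + lam s^2 N^{alpha/2}),
    with s clamped to [-psi, psi] (constant extension outside the window).
    """
    def f(s):
        s = max(-psi, min(psi, s))
        t = max(a_N, min(b_N, round(s * N ** alpha)))  # nearest grid time
        return N ** (-alpha / 2) * (L[t] - p * t + lam * s * s * N ** (alpha / 2))
    return f

def ensemble_curve(f, p, lam):
    """Subtract the parabola and normalize, reading the definition of the
    embedded ensemble in (1.2) as (f(s) - lam s^2) / sqrt(p (1 - p))
    (our reconstruction of that formula)."""
    return lambda s: (f(s) - lam * s * s) / math.sqrt(p * (1 - p))
```

As a sanity check, a path that follows the global slope exactly, L(t) = p·t, produces an ensemble curve that is identically zero after the centering and parabola subtraction.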
With the above, we have that L^N defined by

(1.2)  L^N(i, s) = (f^N_i(s) − λs²) / √(p(1 − p))

is a random variable taking values in C(ℕ × ℝ), and we let ℙ_N denote its distribution. We remark that the particular extension we chose for f^N_i outside of [−ψ(N), ψ(N)] and for i ≥ N + 1 is immaterial, since all of our convergence/tightness results are formulated for the topology of uniform convergence over compacts. Consequently, only the behavior of these functions on compact intervals and finite index matters, and not what these functions do near infinity, which is where the modification happens, as lim_{N→∞} ψ(N) = ∞ by assumption.

We are now ready to state our main result, whose proof can be found in Section 2.4.

Theorem 1.1.
Under Assumptions 1 and 2 the sequence ℙ_N is tight. Moreover, if L^∞ denotes any subsequential limit of L^N, then L^∞ satisfies the Brownian Gibbs property of Section 1.1 (see also Definition 2.10).

Remark 1.2. In simple words, Theorem 1.1 states that if one has a sequence of Bernoulli Gibbsian line ensembles with a mild but uniform control of the one-point marginals of the top curves L^N_1, then the entire line ensembles must be tight. The idea of utilizing the Gibbs property of a line ensemble to improve one-point tightness of the top curve to tightness of the entire curve, or even the entire line ensemble, has appeared previously in several different contexts. For line ensembles whose underlying path structure is Brownian it first appeared in the seminal work of [7] and more recently in [4, 5]. For discrete Gibbsian line ensembles (more general than the ones studied in this paper) it appeared in [6], and for line ensembles related to the inverse gamma directed polymer in [29].

Theorem 1.1 indicates that in order to ensure the existence of subsequential limits for L^N as in (1.2), it suffices to ensure tightness of the one-point marginals of the top curves L^N_1 in a sufficiently uniform sense. We next investigate the question of when L^N converges to the Airy line ensemble. We let A = {A_i}_{i∈ℕ} be the ℕ-indexed Airy line ensemble in the sense of [7, Theorem 3.1] and L^Airy = {L^Airy_i}_{i∈ℕ}, given by L^Airy_i(x) = 2^{−1/2}(A_i(x) − x²), be its parabolically shifted version. In particular, both A and L^Airy are random variables taking values in the space C(ℕ × ℝ), and A_1(·) is the Airy process while L^Airy_1(·) is the parabolic Airy process. To establish convergence of L^N to L^Airy we need the following strengthening of Assumption 2.
Assumption 2’.
Let c = (2λ²/(p(1 − p)))^{1/3}. For any k ∈ ℕ and t_1, . . . , t_k, x_1, . . . , x_k ∈ ℝ we assume that

(1.3)  lim_{N→∞} ℙ( L^N(1, t_i) ≤ x_i for i = 1, . . . , k ) = ℙ( c^{−1/2}L^Airy_1(ct_i) ≤ x_i for i = 1, . . . , k ).

In plain words, Assumption 2' states that the top curves L^N(1, t) converge in the finite dimensional sense to c^{−1/2}L^Airy_1(ct). Let us briefly explain why Assumption 2' implies Assumption 2 (and hence why we refer to it as a strengthening). Under Assumption 2', we would have that N^{−α/2}(L^N_1(xN^α) − pxN^α + λx²N^{α/2}) converges in the finite dimensional sense to √(p(1 − p)/(2c)) · A_1(cx). In particular, for each n ∈ ℤ we have that

lim_{N→∞} ℙ( |N^{−α/2}(L^N_1(nN^α) − pnN^α + λn²N^{α/2})| ≥ a ) = ℙ( √(p(1 − p)/(2c)) |A_1(cn)| ≥ a )
= 1 − F_GUE( a · √(2c/(p(1 − p))) ) + F_GUE( −a · √(2c/(p(1 − p))) ),

where we used that A_1(x) is a stationary process whose one-point marginals are given by the Tracy-Widom distribution F_GUE [28], and that F_GUE is diffuse. In particular, given ε > 0, we can find a > 0 large enough so that the second line above is less than ε, and such a choice of a furnishes a function φ as in Assumption 2.

The next result gives conditions under which L^N converges to the parabolically shifted Airy line ensemble; it is proved in Section 2.4.

Theorem 1.3.
Under Assumptions 1 and 2' the sequence L^N converges weakly, in the topology of uniform convergence over compacts, to the line ensemble L^∞ defined by

L^∞_i(t) = c^{−1/2}L^Airy_i(ct), for i ∈ ℕ and t ∈ ℝ,

where we recall c = (2λ²/(p(1 − p)))^{1/3}.

Remark 1.4. In plain words, Theorem 1.3 states that to prove the convergence of a sequence of Bernoulli Gibbsian line ensembles L^N to the parabolically shifted Airy line ensemble, it suffices to show that the top curves L^N_1 converge in the finite dimensional sense to the parabolic Airy process. We mention here that the convergence in Theorem 1.3 is in the uniform topology over compacts, which is stronger than finite-dimensional convergence. We also mention that recently in [9] the conclusion of Theorem 1.3 was established under the assumption that L^N converges to L^∞ in the finite dimensional sense. Simply put, we require as input only the finite dimensional convergence of the top curves, while [9, Theorem 1.5] requires the finite dimensional convergence of not just the top but all curves in the line ensemble, which is a much stronger assumption.

The remainder of the paper is organized as follows. In Section 2 we introduce the basic definitions and notation for line ensembles. The main technical result of the paper, Theorem 2.26, is presented in Section 2.3, and Theorems 1.1 and 1.3 are proved in Section 2.4 by appealing to it. In Section 3 we prove several statements for Bernoulli random walk bridges by using a strong coupling result that allows us to compare the latter with Brownian bridges. The proof of Theorem 2.26 is presented in Section 4 and is based on three key lemmas. Two of these lemmas are proved in Section 5 and the last one in Section 6. The paper ends with Sections 7 and 8, where various technical results needed throughout the paper are proved.

Acknowledgments.
This project was initiated during a summer REU program at Columbia University in 2020, and we thank the organizer from the Mathematics department, Michael Woodbury, for this wonderful opportunity. E.D. is partially supported by the Minerva Foundation Fellowship.

2. Line ensembles
In this section we introduce various definitions and notation that are used throughout the paper.

2.1. Line ensembles and the Brownian Gibbs property.
In this section we introduce the notions of a line ensemble and the (partial) Brownian Gibbs property. Our exposition in this section closely follows that of [12, Section 2] and [7, Section 2].

Given two integers p ≤ q, we let ⟦p, q⟧ denote the set {p, p + 1, . . . , q}. Given an interval Λ ⊂ ℝ, we endow it with the subspace topology of the usual topology on ℝ. We let (C(Λ), 𝒞) denote the space of continuous functions f : Λ → ℝ with the topology of uniform convergence over compacts, see [22, Chapter 7, Section 46], and Borel σ-algebra 𝒞. Given a set Σ ⊂ ℤ, we endow it with the discrete topology and denote by Σ × Λ the set of all pairs (i, x) with i ∈ Σ and x ∈ Λ, with the product topology. We also denote by (C(Σ × Λ), 𝒞_Σ) the space of continuous functions on Σ × Λ with the topology of uniform convergence over compact sets and Borel σ-algebra 𝒞_Σ. Typically, we will take Σ = ⟦1, N⟧ (we use the convention Σ = ℕ if N = ∞) and then we write (C(Σ × Λ), 𝒞_{|Σ|}) in place of (C(Σ × Λ), 𝒞_Σ).

The following defines the notion of a line ensemble.

Definition 2.1.
Let Σ ⊂ ℤ and let Λ ⊂ ℝ be an interval. A Σ-indexed line ensemble L is a random variable defined on a probability space (Ω, 𝓕, ℙ) that takes values in (C(Σ × Λ), 𝒞_Σ). Intuitively, L is a collection of random continuous curves (sometimes referred to as lines), indexed by Σ, each of which maps Λ into ℝ. We will often slightly abuse notation and write L : Σ × Λ → ℝ, even though it is not L which is such a function, but L(ω) for every ω ∈ Ω. For i ∈ Σ we write L_i(ω) = (L(ω))(i, ·) for the curve of index i, and note that the latter is a map L_i : Ω → C(Λ), which is (𝒞, 𝓕)-measurable. If a, b ∈ Λ satisfy a < b, we let L_i[a, b] denote the restriction of L_i to [a, b].

We will require the following result, whose proof is postponed until Section 7.1. In simple terms it states that the space C(Σ × Λ), where our random variables L take values, has the structure of a complete, separable metric space.

Lemma 2.2.
Let Σ ⊂ ℤ and let Λ ⊂ ℝ be an interval. Suppose that {a_n}_{n=1}^∞, {b_n}_{n=1}^∞ are sequences of real numbers such that a_n < b_n, [a_n, b_n] ⊂ Λ, a_{n+1} ≤ a_n, b_{n+1} ≥ b_n and ∪_{n=1}^∞ [a_n, b_n] = Λ. For n ∈ ℕ we let K_n = Σ_n × [a_n, b_n], where Σ_n = Σ ∩ ⟦−n, n⟧. Define d : C(Σ × Λ) × C(Σ × Λ) → [0, ∞) by

(2.1)  d(f, g) = ∑_{n=1}^∞ 2^{−n} min{ sup_{(i,t) ∈ K_n} |f(i, t) − g(i, t)|, 1 }.

Then d defines a metric on C(Σ × Λ), and moreover the metric space topology defined by d is the same as the topology of uniform convergence over compact sets. Furthermore, the metric space (C(Σ × Λ), d) is complete and separable.

Definition 2.3.
Given a sequence {L^n : n ∈ ℕ} of random Σ-indexed line ensembles, we say that L^n converge weakly to a line ensemble L, and write L^n ⇒ L, if for any bounded continuous function f : C(Σ × Λ) → ℝ we have lim_{n→∞} 𝔼[f(L^n)] = 𝔼[f(L)]. We also say that {L^n : n ∈ ℕ} is tight if for any ε > 0 there exists a compact set K ⊂ C(Σ × Λ) such that ℙ(L^n ∈ K) ≥ 1 − ε for all n ∈ ℕ. We call a line ensemble non-intersecting if ℙ-almost surely L_i(r) > L_j(r) for all i < j and r ∈ Λ.

We will require the following sufficient condition for tightness of a sequence of line ensembles, which extends [2, Theorem 7.3]. We give a proof in Section 7.2.

Lemma 2.4.
Let Σ ⊂ ℤ and let Λ ⊂ ℝ be an interval. Suppose that {a_n}_{n=1}^∞, {b_n}_{n=1}^∞ are sequences of real numbers such that a_n < b_n, [a_n, b_n] ⊂ Λ, a_{n+1} ≤ a_n, b_{n+1} ≥ b_n and ∪_{n=1}^∞ [a_n, b_n] = Λ. Then {L^n} is tight if and only if for every i ∈ Σ we have

(i) lim_{a→∞} limsup_{n→∞} ℙ( |L^n_i(a_1)| ≥ a ) = 0;

(ii) for all ε > 0 and k ∈ ℕ, lim_{δ→0} limsup_{n→∞} ℙ( sup_{x,y ∈ [a_k,b_k], |x−y| ≤ δ} |L^n_i(x) − L^n_i(y)| ≥ ε ) = 0.

We next turn to formulating the Brownian Gibbs property – we do this in Definition 2.8 after introducing some relevant notation and results. If W_t denotes a standard one-dimensional Brownian motion, then the process

B̃(t) = W_t − tW_1, 0 ≤ t ≤ 1,

is called a Brownian bridge (from B̃(0) = 0 to B̃(1) = 0) with diffusion parameter 1. For brevity we call the latter object a standard Brownian bridge.

Given a, b, x, y ∈ ℝ with a < b, we define a random variable on (C([a, b]), 𝒞) through

(2.2)  B(t) = (b − a)^{1/2} · B̃((t − a)/(b − a)) + ((b − t)/(b − a)) · x + ((t − a)/(b − a)) · y,

and refer to the law of this random variable as a Brownian bridge (from B(a) = x to B(b) = y) with diffusion parameter 1. Given k ∈ ℕ and vectors x, y ∈ ℝ^k, we let ℙ_free^{a,b,x,y} denote the law of k independent Brownian bridges {B_i : [a, b] → ℝ}_{i=1}^k from B_i(a) = x_i to B_i(b) = y_i, all with diffusion parameter 1.

We next state a couple of results about Brownian bridges from [7] for future use.
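The affine map (2.2) translates and rescales a standard Brownian bridge to arbitrary endpoints and an arbitrary interval. A minimal Python sketch on a discrete grid (our own discretization and names, not from the paper):

```python
import math
import random

def standard_bridge(n):
    """Discretize a standard Brownian bridge on [0, 1]: simulate a random
    walk approximation W of Brownian motion and set B~(j/n) = W_{j/n} - (j/n) W_1."""
    w, W = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, math.sqrt(1.0 / n))
        W.append(w)
    return [W[j] - (j / n) * W[n] for j in range(n + 1)]

def brownian_bridge(a, b, x, y, n=1000):
    """Brownian bridge from (a, x) to (b, y) via the affine map (2.2):
    B(t) = (b-a)^{1/2} B~((t-a)/(b-a)) + ((b-t)/(b-a)) x + ((t-a)/(b-a)) y,
    returned on the grid t_j = a + (b-a) j/n."""
    tilde = standard_bridge(n)
    out = []
    for j in range(n + 1):
        u = j / n                      # u = (t_j - a)/(b - a)
        out.append(math.sqrt(b - a) * tilde[j] + (1 - u) * x + u * y)
    return out
```

Note that the bridging in `standard_bridge` pins both endpoints exactly to zero, so after the affine map the sampled path hits x and y exactly at the endpoints of [a, b].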
Lemma 2.5. [7, Corollary 2.9]. Fix a continuous function f : [0, 1] → ℝ such that f(0) > 0 and f(1) > 0. Let B be a standard Brownian bridge and let C = {B(t) > f(t) for some t ∈ [0, 1]} (crossing) and T = {B(t) = f(t) for some t ∈ [0, 1]} (touching). Then ℙ(T ∩ C^c) = 0.

Lemma 2.6. [7, Corollary 2.10]. Let U be an open subset of C([0, 1]) which contains a function f such that f(0) = f(1) = 0. If B : [0, 1] → ℝ is a standard Brownian bridge, then ℙ(B[0, 1] ⊂ U) > 0.

The following definition introduces the notion of an (f, g)-avoiding Brownian line ensemble, which in simple terms is a collection of k independent Brownian bridges conditioned on not crossing each other and staying above the graph of g and below the graph of f, for two continuous functions f and g.

Definition 2.7.
Let k ∈ ℕ and let W°_k denote the open Weyl chamber in ℝ^k, i.e.

W°_k = {x = (x_1, . . . , x_k) ∈ ℝ^k : x_1 > x_2 > · · · > x_k}.

(In [7] the notation ℝ^k_> was used for this set.) Let x, y ∈ W°_k, let a, b ∈ ℝ with a < b, and let f : [a, b] → (−∞, ∞] and g : [a, b] → [−∞, ∞) be two continuous functions. The latter condition means that either f : [a, b] → ℝ is continuous or f = ∞ everywhere, and similarly for g. We also assume that f(t) > g(t) for all t ∈ [a, b], f(a) > x_1, f(b) > y_1 and g(a) < x_k, g(b) < y_k.

With the above data we define the (f, g)-avoiding Brownian line ensemble on the interval [a, b] with entrance data x and exit data y to be the Σ-indexed line ensemble Q with Σ = ⟦1, k⟧ on Λ = [a, b] and with the law of Q equal to ℙ_free^{a,b,x,y} (the law of k independent Brownian bridges {B_i : [a, b] → ℝ}_{i=1}^k from B_i(a) = x_i to B_i(b) = y_i) conditioned on the event

E = {f(r) > B_1(r) > B_2(r) > · · · > B_k(r) > g(r) for all r ∈ [a, b]}.

It is worth pointing out that E is an open set of positive measure, and so we can condition on it in the usual way – we explain this briefly in the following paragraph. Let (Ω, 𝓕, ℙ) be a probability space that supports k independent Brownian bridges {B_i : [a, b] → ℝ}_{i=1}^k from B_i(a) = x_i to B_i(b) = y_i, all with diffusion parameter 1. Notice that we can find ũ_1, . . . , ũ_k ∈ C([0, 1]) and ε > 0 (depending on x, y, f, g, a, b) such that ũ_i(0) = ũ_i(1) = 0 for i = 1, . . . , k, and such that if h̃_1, . . . , h̃_k ∈ C([0, 1]) satisfy h̃_i(0) = h̃_i(1) = 0 and sup_{t∈[0,1]} |ũ_i(t) − h̃_i(t)| < ε for i = 1, . . . , k, then the functions

h_i(t) = (b − a)^{1/2} · h̃_i((t − a)/(b − a)) + ((b − t)/(b − a)) · x_i + ((t − a)/(b − a)) · y_i

satisfy f(r) > h_1(r) > · · · > h_k(r) > g(r). It follows from Lemma 2.6 that

ℙ(E) ≥ ℙ( max_{1≤i≤k} sup_{r∈[0,1]} |B̃_i(r) − ũ_i(r)| < ε ) = ∏_{i=1}^k ℙ( sup_{r∈[0,1]} |B̃_i(r) − ũ_i(r)| < ε ) > 0,

and so we can condition on the event E.

To construct a realization of Q we proceed as follows. For ω ∈ E we define Q(ω)(i, r) = B_i(r)(ω) for i = 1, . . . , k and r ∈ [a, b]. Observe that for i ∈ {1, . . . , k} and an open set U ⊂ C([a, b]) we have that

Q^{−1}({i} × U) = {B_i ∈ U} ∩ E ∈ 𝓕,

and since the sets {i} × U form an open basis of C(⟦1, k⟧ × [a, b]), we conclude that Q is 𝓕-measurable. This implies that the law of Q is indeed well-defined, and also that Q is non-intersecting almost surely. Also, given measurable subsets A_1, . . . , A_k of C([a, b]) we have that

ℙ(Q_i ∈ A_i for i = 1, . . . , k) = ℙ_free^{a,b,x,y}({B_i ∈ A_i for i = 1, . . . , k} ∩ E) / ℙ_free^{a,b,x,y}(E).

We denote the probability distribution of Q by ℙ_avoid^{a,b,x,y,f,g} and write 𝔼_avoid^{a,b,x,y,f,g} for the expectation with respect to this measure.

The following definition introduces the notion of the Brownian Gibbs property from [7].

Definition 2.8.
Fix a set
Σ = ⟦1, N⟧ with N ∈ N or N = ∞ and an interval Λ ⊂ R, and let K = {k_1, k_1 + 1, . . . , k_2} ⊂ Σ be finite and a, b ∈ Λ with a < b. Set f = L_{k_1−1} and g = L_{k_2+1}, with the convention that f = ∞ if k_1 − 1 ∉ Σ and g = −∞ if k_2 + 1 ∉ Σ. Write D_{K,a,b} = K × (a, b) and D^c_{K,a,b} = (Σ × Λ) \ D_{K,a,b}. A Σ-indexed line ensemble L : Σ × Λ → R is said to have the Brownian Gibbs property if it is non-intersecting and

Law( L|_{K×[a,b]} conditional on L|_{D^c_{K,a,b}} ) = Law(Q),

where Q_i = Q̃_{i−k_1+1} and Q̃ is the (f, g)-avoiding Brownian line ensemble on [a, b] with entrance data (L_{k_1}(a), . . . , L_{k_2}(a)) and exit data (L_{k_1}(b), . . . , L_{k_2}(b)) from Definition 2.7. Note that Q̃ is introduced because, by definition, any such (f, g)-avoiding Brownian line ensemble is indexed from 1 to k_2 − k_1 + 1, but we want Q to be indexed from k_1 to k_2.

An equivalent way to express the Brownian Gibbs property is as follows. A Σ-indexed line ensemble L on Λ satisfies the Brownian Gibbs property if and only if it is non-intersecting and for any finite K = {k_1, k_1 + 1, . . . , k_2} ⊂ Σ, [a, b] ⊂ Λ, and any bounded Borel-measurable function F : C(K × [a, b]) → R we have P-almost surely

(2.3) E[ F(L|_{K×[a,b]}) | F_ext(K × (a, b)) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q̃) ],

where

F_ext(K × (a, b)) = σ{ L_i(s) : (i, s) ∈ D^c_{K,a,b} }

is the σ-algebra generated by the variables in the brackets above, L|_{K×[a,b]} denotes the restriction of L to the set K × [a, b], x⃗ = (L_{k_1}(a), . . . , L_{k_2}(a)), y⃗ = (L_{k_1}(b), . . . , L_{k_2}(b)), f = L_{k_1−1}[a, b] (the restriction of L to the set {k_1 − 1} × [a, b]) with the convention that f = ∞ if k_1 − 1 ∉ Σ, and g = L_{k_2+1}[a, b] with the convention that g = −∞ if k_2 + 1 ∉ Σ.

Remark 2.9. Let us briefly explain why equation (2.3) makes sense.
Firstly, since Σ × Λ is locally compact, we know by [22, Lemma 46.4] that L ↦ L|_{K×[a,b]} is a continuous map from C(Σ × Λ) to C(K × [a, b]), so that the left side of (2.3) is the conditional expectation of a bounded measurable function, and is thus well-defined. A more subtle question is why the right side of (2.3) is F_ext(K × (a, b))-measurable. This question was resolved in [12, Lemma 3.4], where it was shown that the right side is measurable with respect to the σ-algebra

σ{ L_i(s) : i ∈ K and s ∈ {a, b}, or i ∈ {k_1 − 1, k_2 + 1} and s ∈ [a, b] },

which in particular implies the measurability with respect to F_ext(K × (a, b)).

In the present paper it is convenient for us to use the following modified version of the definition above, which we call the partial Brownian Gibbs property; it was first introduced in [12]. We explain the difference between the two definitions, and why we prefer the second one, in Remark 2.12.
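Before moving on, the conditioning on the event E in Definition 2.7 can be illustrated with a naive Monte Carlo sketch. The code below is ours, not part of the paper's argument: it discretizes [0, 1], draws k independent Brownian bridges, and keeps the first draw satisfying the (grid version of the) ordering event f > B_1 > · · · > B_k > g. All function and variable names are our own choices; the positivity of P(E) discussed above is what guarantees that such a rejection loop terminates.

```python
import math
import random


def brownian_bridge(x, y, n, rng):
    """Sample a Brownian bridge from x to y on an (n+1)-point grid over [0, 1].

    Built from a discretized Brownian motion W via B(t) = x + (y - x) t + (W_t - t W_1),
    so the endpoints are hit exactly."""
    dt = 1.0 / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return [x + (y - x) * (i * dt) + (w[i] - (i * dt) * w[n])
            for i in range(n + 1)]


def sample_avoiding_ensemble(xs, ys, f, g, n=20, max_tries=10000, rng=None):
    """Rejection sampler: draw k independent bridges until the ordering event
    f(t) > B_1(t) > ... > B_k(t) > g(t) holds at every grid point."""
    rng = rng or random.Random()
    k = len(xs)
    for _ in range(max_tries):
        paths = [brownian_bridge(xs[i], ys[i], n, rng) for i in range(k)]
        ok = True
        for j in range(n + 1):
            t = j / n
            # f sits on top, g at the bottom; reject on any ordering violation.
            vals = [f(t)] + [p[j] for p in paths] + [g(t)]
            if any(vals[m] <= vals[m + 1] for m in range(k + 1)):
                ok = False
                break
        if ok:
            return paths
    return None  # acceptance too unlikely within max_tries
```

For well-separated entrance and exit data the acceptance probability is close to 1, so the loop returns quickly; as the boundary data approach each other the acceptance probability degenerates, which is exactly why the paper needs quantitative lower bounds on it.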
Fix a set
Σ = ⟦1, N⟧ with N ∈ N or N = ∞ and an interval Λ ⊂ R. A Σ-indexed line ensemble L on Λ is said to satisfy the partial Brownian Gibbs property if and only if it is non-intersecting and for any finite K = {k_1, k_1 + 1, . . . , k_2} ⊂ Σ with k_2 ≤ N − 1 (if Σ ≠ N), [a, b] ⊂ Λ, and any bounded Borel-measurable function F : C(K × [a, b]) → R we have P-almost surely

(2.4) E[ F(L|_{K×[a,b]}) | F_ext(K × (a, b)) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q̃) ],

where we recall that D_{K,a,b} = K × (a, b) and D^c_{K,a,b} = (Σ × Λ) \ D_{K,a,b}, and

F_ext(K × (a, b)) = σ{ L_i(s) : (i, s) ∈ D^c_{K,a,b} }

is the σ-algebra generated by the variables in the brackets above, L|_{K×[a,b]} denotes the restriction of L to the set K × [a, b], x⃗ = (L_{k_1}(a), . . . , L_{k_2}(a)), y⃗ = (L_{k_1}(b), . . . , L_{k_2}(b)), f = L_{k_1−1}[a, b] with the convention that f = ∞ if k_1 − 1 ∉ Σ, and g = L_{k_2+1}[a, b].

Remark 2.11. Observe that if N = 1 then the conditions in Definition 2.10 become void, i.e., any line ensemble with one line satisfies the partial Brownian Gibbs property. We also mention that (2.4) makes sense for the same reason that (2.3) makes sense; see Remark 2.9.

Remark 2.12. Definition 2.10 is slightly different from the Brownian Gibbs property of Definition 2.8, as we explain here. Assuming that Σ = N, the two definitions are equivalent. However, if Σ = {1, . . . , N} with 1 ≤ N < ∞, then a line ensemble that satisfies the Brownian Gibbs property also satisfies the partial Brownian Gibbs property, but the reverse need not be true. Specifically, the Brownian Gibbs property allows for the possibility that k_2 = N in Definition 2.10, in which case the convention is that g = −∞. As the partial Brownian Gibbs property is more general, we prefer to work with it, and most of the results later in this paper are formulated in terms of it rather than the usual Brownian Gibbs property.

2.2. Bernoulli Gibbsian line ensembles.
In this section we introduce the notions of a Bernoulli line ensemble and the Schur Gibbs property. Our discussion will parallel that of [6, Section 3.1], which in turn goes back to [8, Section 2.1].
Definition 2.13.
Let Σ ⊂ Z and T_0, T_1 ∈ Z with T_0 < T_1. Consider the set Y of functions f : Σ × ⟦T_0, T_1⟧ → Z such that f(j, i + 1) − f(j, i) ∈ {0, 1} when j ∈ Σ and i ∈ ⟦T_0, T_1 − 1⟧, and let D denote the discrete topology on Y. We call a function f : ⟦T_0, T_1⟧ → Z such that f(i + 1) − f(i) ∈ {0, 1} when i ∈ ⟦T_0, T_1 − 1⟧ an up-right path, and elements in Y collections of up-right paths.

A Σ-indexed Bernoulli line ensemble L on ⟦T_0, T_1⟧ is a random variable defined on a probability space (Ω, B, P), taking values in Y, such that L is a (B, D)-measurable function.

Remark 2.14. In [6, Section 3.1] Bernoulli line ensembles L were called discrete line ensembles in order to distinguish them from the continuous line ensembles of Definition 2.1. In this paper we have opted to use the term Bernoulli line ensembles to emphasize the fact that the functions f ∈ Y satisfy f(j, i + 1) − f(j, i) ∈ {0, 1} when j ∈ Σ and i ∈ ⟦T_0, T_1 − 1⟧. This condition essentially means that for each j ∈ Σ the function f(j, ·) can be thought of as the trajectory of a Bernoulli random walk from time T_0 to time T_1. As other types of discrete line ensembles, see e.g. [29], have appeared in the literature, we have decided to modify the terminology of [6, Section 3.1] so as to avoid any ambiguity.

The way we think of Bernoulli line ensembles is as random collections of up-right paths on the integer lattice, indexed by Σ (see Figure 3). Observe that one can view an up-right path L on ⟦T_0, T_1⟧ as a continuous curve by linearly interpolating the points (i, L(i)). This allows us to define (L(ω))(i, s) for non-integer s ∈ [T_0, T_1] and to view Bernoulli line ensembles as line ensembles in the sense of Definition 2.1. In particular, we can think of L as a random variable taking values in (C(Σ × Λ), C_Σ) with Λ = [T_0, T_1].
We will often slightly abuse notation and write L : Σ × ⟦T_0, T_1⟧ → Z, even though it is not L which is such a function, but rather L(ω) for each ω ∈ Ω. Furthermore, we write L_i = (L(ω))(i, ·) for the path of index i ∈ Σ. If L is an up-right path on ⟦T_0, T_1⟧ and a, b ∈ ⟦T_0, T_1⟧ satisfy a < b, we let L⟦a, b⟧ denote the restriction of L to ⟦a, b⟧.

Let t_i, z_i ∈ Z for i = 1, 2 be given such that t_1 < t_2 and 0 ≤ z_2 − z_1 ≤ t_2 − t_1. We denote by Ω(t_1, t_2, z_1, z_2) the collection of up-right paths that start from (t_1, z_1) and end at (t_2, z_2), by P^{t_1,t_2,z_1,z_2}_{Ber} the uniform distribution on Ω(t_1, t_2, z_1, z_2), and we write E^{t_1,t_2,z_1,z_2}_{Ber} for the expectation with respect to this measure. One thinks of the distribution P^{t_1,t_2,z_1,z_2}_{Ber} as the law of a simple random walk with i.i.d. Bernoulli increments with parameter p ∈ (0, 1) that starts from z_1 at time t_1 and is conditioned to end at z_2 at time t_2 – this interpretation does not depend on the choice of p ∈ (0, 1). Notice that by our assumptions on the parameters the state space Ω(t_1, t_2, z_1, z_2) is non-empty.

Figure 3. Two samples of ⟦1, 2⟧-indexed Bernoulli line ensembles with T_0 = 0 and T_1 = 8; the left ensemble is avoiding and the right one is non-avoiding.

Given k ∈ N, T_0, T_1 ∈ Z with T_0 < T_1 and x⃗, y⃗ ∈ Z^k, we let P^{T_0,T_1,x⃗,y⃗}_{Ber} denote the law of k independent Bernoulli bridges {B_i : ⟦T_0, T_1⟧ → Z}_{i=1}^k from B_i(T_0) = x_i to B_i(T_1) = y_i. Equivalently, this is just k independent random up-right paths B_i ∈ Ω(T_0, T_1, x_i, y_i) for i = 1, . . . , k that are uniformly distributed. This measure is well-defined provided that the Ω(T_0, T_1, x_i, y_i) are non-empty for i = 1, . . . , k, which holds if T_1 − T_0 ≥ y_i − x_i ≥ 0 for all i = 1, . . . , k.

The following definition introduces the notion of an (f, g)-avoiding Bernoulli line ensemble, which in simple terms is a collection of k independent Bernoulli bridges, conditioned on not crossing each other and staying above the graph of g and below the graph of f for two functions f and g.

Definition 2.15.
Let k ∈ N and let W_k denote the set of signatures of length k, i.e.

W_k = { x⃗ = (x_1, . . . , x_k) ∈ Z^k : x_1 ≥ x_2 ≥ · · · ≥ x_k }.

Let x⃗, y⃗ ∈ W_k, T_0, T_1 ∈ Z with T_0 < T_1, S ⊆ ⟦T_0, T_1⟧, and let f : ⟦T_0, T_1⟧ → (−∞, ∞] and g : ⟦T_0, T_1⟧ → [−∞, ∞) be two functions.

With the above data we define the (f, g; S)-avoiding Bernoulli line ensemble on the interval ⟦T_0, T_1⟧ with entrance data x⃗ and exit data y⃗ to be the Σ-indexed Bernoulli line ensemble Q with Σ = ⟦1, k⟧ on ⟦T_0, T_1⟧ and with the law of Q equal to P^{T_0,T_1,x⃗,y⃗}_{Ber} (the law of k independent uniform up-right paths {B_i : ⟦T_0, T_1⟧ → Z}_{i=1}^k from B_i(T_0) = x_i to B_i(T_1) = y_i) conditioned on the event

E_S = { f(r) ≥ B_1(r) ≥ B_2(r) ≥ · · · ≥ B_k(r) ≥ g(r) for all r ∈ S }.

The above definition is well-posed if there exist B_i ∈ Ω(T_0, T_1, x_i, y_i) for i = 1, . . . , k that satisfy the conditions in E_S (i.e. if the set of such up-right paths is not empty). We denote by Ω_avoid(T_0, T_1, x⃗, y⃗, f, g; S) the set of collections of k up-right paths that satisfy the conditions in E_S, and then the distribution of Q is simply the uniform measure on Ω_avoid(T_0, T_1, x⃗, y⃗, f, g; S). We denote the probability distribution of Q by P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber;S} and write E^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber;S} for the expectation with respect to this measure. If S = ⟦T_0, T_1⟧, we write Ω_avoid(T_0, T_1, x⃗, y⃗, f, g), P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber}, and E^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber}. If f = +∞ and g = −∞, we write Ω_avoid(T_0, T_1, x⃗, y⃗), P^{T_0,T_1,x⃗,y⃗}_{avoid,Ber}, and E^{T_0,T_1,x⃗,y⃗}_{avoid,Ber}.
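The conditioning in Definition 2.15 can be simulated naively by rejection. The sketch below is ours, not the paper's: a uniform element of Ω(t_0, t_1, x, y) is obtained by placing the y − x unit up-steps uniformly at random among the t_1 − t_0 time steps, and the avoiding ensemble (here with S = ⟦t_0, t_1⟧ and f, g given as integer sequences) is sampled by resampling until the event E_S holds. All identifiers are our own.

```python
import random


def sample_bernoulli_bridge(t0, t1, x, y, rng):
    """Uniform sample from Omega(t0, t1, x, y): an up-right path from (t0, x)
    to (t1, y), chosen by placing the y - x up-steps uniformly at random."""
    steps = t1 - t0
    up = set(rng.sample(range(steps), y - x))
    path, val = [x], x
    for s in range(steps):
        val += 1 if s in up else 0
        path.append(val)
    return path  # path[j] is the value at time t0 + j


def sample_avoiding_bernoulli(t0, t1, xs, ys, f, g, max_tries=100000, rng=None):
    """Rejection sampler for the (f, g)-avoiding Bernoulli line ensemble:
    resample independent uniform bridges until f >= B_1 >= ... >= B_k >= g
    holds at every time in [[t0, t1]].  f and g are sequences of length
    t1 - t0 + 1 (use large finite values to mimic +/- infinity)."""
    rng = rng or random.Random()
    k = len(xs)
    for _ in range(max_tries):
        paths = [sample_bernoulli_bridge(t0, t1, xs[i], ys[i], rng)
                 for i in range(k)]
        ok = all(f[j] >= paths[0][j] and paths[-1][j] >= g[j]
                 and all(paths[i][j] >= paths[i + 1][j] for i in range(k - 1))
                 for j in range(t1 - t0 + 1))
        if ok:
            return paths
    return None
```

The accepted sample is uniform on the set of path collections satisfying E_S, since each candidate is uniform on the product space and acceptance is an indicator event.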
It will be useful to formulate simple conditions under which Ω_avoid(T_0, T_1, x⃗, y⃗, f, g) is non-empty and thus P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber} well-defined. Note that Ω_avoid(T_0, T_1, x⃗, y⃗, f, g; S) ⊇ Ω_avoid(T_0, T_1, x⃗, y⃗, f, g) for any S ⊆ ⟦T_0, T_1⟧, so P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber;S} is then also well-defined. We accomplish this in the following lemma, whose proof is postponed until Section 7.3.

Lemma 2.16.
Suppose that k ∈ N and T_0, T_1 ∈ Z with T_0 < T_1. Suppose further that
(1) x⃗, y⃗ ∈ W_k satisfy T_1 − T_0 ≥ y_i − x_i ≥ 0 for i = 1, . . . , k;
(2) f : ⟦T_0, T_1⟧ → (−∞, ∞] and g : ⟦T_0, T_1⟧ → [−∞, ∞) satisfy f(i + 1) = f(i) or f(i + 1) = f(i) + 1, and g(i + 1) = g(i) or g(i + 1) = g(i) + 1, for i = T_0, . . . , T_1 − 1;
(3) f(T_0) ≥ x_1, f(T_1) ≥ y_1, g(T_0) ≤ x_k, g(T_1) ≤ y_k and f(i) ≥ g(i) for i ∈ ⟦T_0, T_1⟧.
Then the set Ω_avoid(T_0, T_1, x⃗, y⃗, f, g) from Definition 2.15 is non-empty.

The following definition introduces the notion of the Schur Gibbs property, which can be thought of as a discrete analogue of the partial Brownian Gibbs property, in the same way that Bernoulli random walks are discrete analogues of Brownian motion.
Definition 2.17.
Fix a set
Σ = ⟦1, N⟧ with N ∈ N or N = ∞ and T_0, T_1 ∈ Z with T_0 < T_1. A Σ-indexed Bernoulli line ensemble L : Σ × ⟦T_0, T_1⟧ → Z is said to satisfy the Schur Gibbs property if it is non-crossing, meaning that L_j(i) ≥ L_{j+1}(i) for all j = 1, . . . , N − 1 and i ∈ ⟦T_0, T_1⟧, and for any finite K = {k_1, k_1 + 1, . . . , k_2} ⊂ ⟦1, N − 1⟧ and a, b ∈ ⟦T_0, T_1⟧ with a < b the following holds. Suppose that f, g are two up-right paths drawn in {(r, z) ∈ Z^2 : a ≤ r ≤ b} and x⃗, y⃗ ∈ W_k with k = k_2 − k_1 + 1 altogether satisfy P(A) > 0, where A denotes the event

A = { x⃗ = (L_{k_1}(a), . . . , L_{k_2}(a)), y⃗ = (L_{k_1}(b), . . . , L_{k_2}(b)), L_{k_1−1}⟦a, b⟧ = f, L_{k_2+1}⟦a, b⟧ = g },

where if k_1 = 1 we adopt the convention f = ∞ = L_0. Then we have for any {B_i ∈ Ω(a, b, x_i, y_i)}_{i=1}^k that

(2.5) P( L_{i+k_1−1}⟦a, b⟧ = B_i for i = 1, . . . , k | A ) = P^{a,b,x⃗,y⃗,f,g}_{avoid,Ber}( ∩_{i=1}^k {Q_i = B_i} ).

Remark 2.18. In simple words, a Bernoulli line ensemble satisfies the Schur Gibbs property if the distribution of any finite number of consecutive paths, conditioned on their end-points and on the paths above and below them, is simply the uniform measure on all collections of up-right paths that have the same end-points and do not cross each other or the paths above and below them.
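For small parameters, both the non-emptiness guaranteed by Lemma 2.16 and the finite set Ω_avoid underlying the uniform measure in (2.5) can be checked by exhaustive enumeration. The sketch below is ours; it represents f and g as integer sequences on ⟦t_0, t_1⟧ and simply filters the product of the path spaces.

```python
from itertools import combinations, product


def up_right_paths(t0, t1, x, y):
    """Enumerate Omega(t0, t1, x, y): all up-right paths from (t0, x) to (t1, y)."""
    steps, ups = t1 - t0, y - x
    if not 0 <= ups <= steps:
        return
    for pos in combinations(range(steps), ups):
        path, val = [x], x
        for s in range(steps):
            val += 1 if s in pos else 0
            path.append(val)
        yield path


def omega_avoid(t0, t1, xs, ys, f, g):
    """Enumerate collections of k paths with f >= B_1 >= ... >= B_k >= g
    at every time in [[t0, t1]]."""
    out = []
    for paths in product(*(up_right_paths(t0, t1, xs[i], ys[i])
                           for i in range(len(xs)))):
        rows = [f, *paths, g]
        if all(all(rows[r][j] >= rows[r + 1][j] for r in range(len(rows) - 1))
               for j in range(t1 - t0 + 1)):
            out.append(paths)
    return out
```

For instance, with t_0 = 0, t_1 = 3, x⃗ = (1, 0), y⃗ = (2, 1), f = (2, 2, 3, 3) and g = (0, 1, 1, 1), the hypotheses of Lemma 2.16 hold and the enumeration confirms that Ω_avoid is non-empty (it has exactly 3 elements, out of 3 · 3 = 9 unconstrained pairs).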
Remark 2.19. Observe that in Definition 2.17 the index k_2 is assumed to be less than or equal to N − 1, so that if N < ∞ the N-th path is special and is not conditionally uniform. This is what makes Definition 2.17 a discrete analogue of the partial Brownian Gibbs property rather than of the usual Brownian Gibbs property. Similarly to the partial Brownian Gibbs property (see Remark 2.11), if N = 1 then the conditions in Definition 2.17 become void, i.e., any Bernoulli line ensemble with one line satisfies the Schur Gibbs property. We also mention that the well-posedness of P^{a,b,x⃗,y⃗,f,g}_{avoid,Ber} in (2.5) is a consequence of Lemma 2.16 and our assumption that P(A) > 0.

Remark 2.20. In [6] the authors studied a generalization of the Gibbs property in Definition 2.17 depending on a parameter t ∈ (0, 1), which was called the Hall-Littlewood Gibbs property due to its connection to Hall-Littlewood polynomials [21]. The property in Definition 2.17 is the t → 0 limit of the Hall-Littlewood Gibbs property. Since under this t → 0 limit Hall-Littlewood polynomials degenerate to Schur polynomials, we have decided to call the Gibbs property in Definition 2.17 the Schur Gibbs property.

Remark 2.21. An immediate consequence of Definition 2.17 is that if M ≤ N, the induced law on {L_i}_{i=1}^M also satisfies the Schur Gibbs property as a {1, . . . , M}-indexed Bernoulli line ensemble on ⟦T_0, T_1⟧.

We end this section with the following definition of the term acceptance probability.
Definition 2.22.
Assume the same notation as in Definition 2.15 and suppose that T_1 − T_0 ≥ y_i − x_i ≥ 0 for i = 1, . . . , k. We define the acceptance probability Z(T_0, T_1, x⃗, y⃗, f, g) to be the ratio

(2.6) Z(T_0, T_1, x⃗, y⃗, f, g) = |Ω_avoid(T_0, T_1, x⃗, y⃗, f, g)| / ∏_{i=1}^k |Ω(T_0, T_1, x_i, y_i)|.

Remark 2.23. The quantity Z(T_0, T_1, x⃗, y⃗, f, g) is precisely the probability that if B_i are sampled uniformly from Ω(T_0, T_1, x_i, y_i) for i = 1, . . . , k, then the B_i satisfy the condition

E = { f(r) ≥ B_1(r) ≥ B_2(r) ≥ · · · ≥ B_k(r) ≥ g(r) for all r ∈ ⟦T_0, T_1⟧ }.

Let us explain briefly why we call this quantity an acceptance probability. One way to sample P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber} is as follows. Start by sampling a sequence of i.i.d. up-right paths B^N_i uniformly from Ω(T_0, T_1, x_i, y_i) for i = 1, . . . , k and N ∈ N. For each n check whether B^n_1, . . . , B^n_k satisfy the condition E, and let M denote the smallest index that accomplishes this. If Ω_avoid(T_0, T_1, x⃗, y⃗, f, g) is non-empty, then M is geometrically distributed with parameter Z(T_0, T_1, x⃗, y⃗, f, g); in particular, M is finite almost surely and {B^M_i}_{i=1}^k has distribution P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber}. In this sampling procedure we construct a sequence of candidates {B^N_i}_{i=1}^k for N ∈ N and reject those that fail to satisfy condition E; the first candidate that satisfies it is accepted and has law P^{T_0,T_1,x⃗,y⃗,f,g}_{avoid,Ber}, and the probability that a candidate is accepted is precisely Z(T_0, T_1, x⃗, y⃗, f, g), which is why we call it an acceptance probability.

2.3. Main technical result.
In this section we present the main technical result of the paper. We start with the following technical definition.
Definition 2.24.
Fix k ∈ N, α, λ > 0 and p ∈ (0, 1). Suppose we are given a sequence {T_N}_{N=1}^∞ with T_N ∈ N, and that {L^N}_{N=1}^∞ with L^N = (L^N_1, L^N_2, . . . , L^N_k) is a sequence of ⟦1, k⟧-indexed Bernoulli line ensembles on ⟦−T_N, T_N⟧. We call the sequence (α, p, λ)-good if
• for each N ∈ N the ensemble L^N satisfies the Schur Gibbs property of Definition 2.17;
• there is a function ψ : N → (0, ∞) such that lim_{N→∞} ψ(N) = ∞ and for each N ∈ N we have T_N > ψ(N)N^α;
• there is a function φ : (0, ∞) → (0, ∞) such that for any ε > 0 we have

(2.7) sup_{n∈Z} lim sup_{N→∞} P( |N^{−α/2}( L^N_1(nN^α) − pnN^α + λn²N^{α/2} )| ≥ φ(ε) ) ≤ ε.

Remark 2.25. Let us elaborate on the meaning of Definition 2.24. In order for a sequence L^N of ⟦1, k⟧-indexed Bernoulli line ensembles on ⟦−T_N, T_N⟧ to be (α, p, λ)-good we want several conditions to be satisfied. Firstly, we want for each N the Bernoulli line ensemble L^N to satisfy the Schur Gibbs property. The second condition is that, while the interval of definition ⟦−T_N, T_N⟧ of L^N is finite for each N, we want this interval to grow at least with speed N^α. This property is quantified by the function ψ, which can essentially be thought of as an arbitrary unbounded increasing function on N. The third condition is that we want, for each n ∈ Z, the sequence of random variables N^{−α/2}( L^N_1(nN^α) − pnN^α ) to be tight, but moreover we want these random variables globally to look like the parabola −λn². This statement is reflected in (2.7), which provides a certain uniform tightness of the random variables N^{−α/2}( L^N_1(nN^α) − pnN^α + λn²N^{α/2} ). A particular case when (2.7) is satisfied is, for example, if we know that for each n ∈ Z the random variables N^{−α/2}( L^N_1(nN^α) − pnN^α + λn²N^{α/2} ) converge to the same random variable X.
In the applications that we have in mind these random variables would converge to the one-point marginals of the Airy process, which are all given by the same Tracy–Widom distribution (since the Airy process is stationary). Equation (2.7) is a significant relaxation of the requirement that the N^{−α/2}( L^N_1(nN^α) − pnN^α + λn²N^{α/2} ) all converge weakly to the Tracy–Widom distribution – the convergence requirement is replaced with a mild but uniform control of all subsequential limits.
The main technical result of the paper is given below and proved in Section 4.
Theorem 2.26.
Fix k ∈ N with k ≥ 2, α, λ > 0 and p ∈ (0, 1), and let L^N = (L^N_1, L^N_2, . . . , L^N_k) be an (α, p, λ)-good sequence of ⟦1, k⟧-indexed Bernoulli line ensembles. Set

f^N_i(s) = N^{−α/2}( L^N_i(sN^α) − psN^α + λs²N^{α/2} ), for s ∈ [−ψ(N), ψ(N)] and i = 1, . . . , k − 1,

and extend f^N_i to R by setting, for i = 1, . . . , k − 1,

f^N_i(s) = f^N_i(−ψ(N)) for s ≤ −ψ(N) and f^N_i(s) = f^N_i(ψ(N)) for s ≥ ψ(N).

Let P_N denote the law of {f^N_i}_{i=1}^{k−1} and P̃_N that of {f̃^N_i}_{i=1}^{k−1} := {(f^N_i(s) − λs²)/√(p(1 − p))}_{i=1}^{k−1}, both as ⟦1, k − 1⟧-indexed line ensembles (i.e. as random variables in (C(⟦1, k − 1⟧ × R), C_{k−1})). Then:
(i) the sequences P_N and P̃_N are tight;
(ii) any subsequential limit L^∞ = {f̃^∞_i}_{i=1}^{k−1} of P̃_N satisfies the partial Brownian Gibbs property of Definition 2.10.

Roughly, Theorem 2.26(i) states that if we have a sequence of ⟦1, k⟧-indexed Bernoulli line ensembles that satisfy the Schur Gibbs property, and the top paths of these ensembles under some shift and scaling have tight one-point marginals with a non-trivial parabolic shift, then under the same shift and scaling the top k − 1 paths of the line ensembles will be tight. The extension of f^N_i to R is completely arbitrary and irrelevant for the validity of Theorem 2.26, since the topology on C(⟦1, k − 1⟧ × R) is that of uniform convergence over compacts. Consequently, only the behavior of these functions on compact intervals matters in Theorem 2.26, and not what these functions do near infinity, which is where the modification happens, as lim_{N→∞} ψ(N) = ∞ by assumption. The only reason we perform the extension is to embed all Bernoulli line ensembles into the same space (C(⟦1, k − 1⟧ × R), C_{k−1}).

We mention that the k-th up-right path in the sequence of Bernoulli line ensembles is special, and Theorem 2.26 provides no tightness result for it.
The reason for this stems from the Schur Gibbs property (see Definition 2.17), which assumes less information about the k-th path. In practice, one either has an infinite Bernoulli line ensemble for each N, or one has Bernoulli line ensembles with a finite number of paths, which increases with N to infinity. In either of these settings one can use Theorem 2.26 to prove tightness of the full line ensemble; we will see this when we prove Theorem 1.1 in the next section.

2.4. Proofs of Theorems 1.1 and 1.3.
In this section we prove Theorems 1.1 and 1.3.
Proof. (of Theorem 1.1) We use the same notation and assumptions as in the statement of thetheorem. For clarity we split the proof into two steps.
Step 1.
In this step we prove that L^N is tight. In view of Lemma 2.4, to establish the tightness of L^N it suffices to show that for every k ∈ N:

(i) lim_{a→∞} lim sup_{N→∞} P( |L^N_k(0)| ≥ a ) = 0;
(ii) for all ε > 0 and m ∈ N,

lim_{δ→0} lim sup_{N→∞} P( sup_{x,y∈[−m,m], |x−y|≤δ} |L^N_k(x) − L^N_k(y)| ≥ ε ) = 0.

Let T_N = min(−a_N, b_N) and for N ≥ k + 1 let L̃^N = (L̃^N_1, L̃^N_2, . . . , L̃^N_{k+1}) denote the ⟦1, k + 1⟧-indexed Bernoulli line ensemble obtained from L^N by restriction to the top k + 1 lines and the interval ⟦−T_N, T_N⟧. In particular, since L^N satisfies the Schur Gibbs property, we conclude that the same is true for L̃^N, and moreover Assumptions 1 and 2 in Section 1.2 imply that {L̃^N}_{N≥k+1} is (α, p, λ)-good in the sense of Definition 2.24. It follows by Theorem 2.26 that the {f̃^N_i}_{i=1}^k as in the statement of that theorem for the line ensembles L̃^N are tight in (C(⟦1, k⟧ × R), C_k).

Since the map F : C(⟦1, k⟧ × R) → R given by F(g) = g(k, 0) is continuous, we conclude that F({f̃^N_i}_{i=1}^k) = f̃^N_k(0) is a tight sequence of random variables. By construction, f̃^N_k(0) has the same distribution as L^N_k(0), and so statement (i) above holds. If π^m_k : C(⟦1, k⟧ × R) → C([−m, m]) denotes the map π^m_k(g)(t) = g(k, t), then π^m_k is continuous, and so we conclude that π^m_k({f̃^N_i}_{i=1}^k) = f̃^N_k|_{[−m,m]} is a tight sequence of random variables in C([−m, m]), which by [2, Theorem 7.3] and the equality in distribution of f̃^N_k and L^N_k implies condition (ii) above.
We next suppose that L^∞ is any subsequential limit of L^N and that n_m ↑ ∞ is a sequence such that L^{n_m} converges weakly to L^∞. We want to show that L^∞ satisfies the Brownian Gibbs property. Suppose that a, b ∈ R with a < b and K = {k_1, k_1 + 1, . . . , k_2} ⊂ N are given. We wish to show that for any bounded Borel-measurable function F : C(K × [a, b]) → R, almost surely

(2.8) E[ F(L^∞|_{K×[a,b]}) | F_ext(K × (a, b)) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q̃) ],

where we use the same notation as in Definition 2.8. In particular, we recall that

F_ext(K × (a, b)) = σ{ L^∞_i(s) : (i, s) ∈ D^c_{K,a,b} }, with D^c_{K,a,b} = (N × R) \ K × (a, b).

Let k ≥ k_2 + 1 and consider the map Π_k : C(N × R) → C(⟦1, k⟧ × R) given by [Π_k(g)](i, t) = g(i, t), which is continuous, so that Π_k[L^{n_m}] converge weakly to Π_k[L^∞] as random variables in C(⟦1, k⟧ × R). If {f̃^N_i}_{i=1}^k are as in Step 1, then we know by construction that the restriction of {f̃^N_i}_{i=1}^k to [−ψ(N), ψ(N)] has the same distribution as the restriction of Π_k[L^N] to the same interval. Since ψ(N) → ∞ by assumption and Π_k[L^{n_m}] converge weakly to Π_k[L^∞], we conclude that {f̃^{n_m}_i}_{i=1}^k converge weakly to Π_k[L^∞] (here we used that the topology is that of uniform convergence over compacts). In particular, by the second part of Theorem 2.26 we conclude that Π_k[L^∞] satisfies the partial Brownian Gibbs property as a ⟦1, k⟧-indexed line ensemble on R. The latter implies that almost surely

(2.9) E[ F(L^∞|_{K×[a,b]}) | F̃_ext(K × (a, b)) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q̃) ],

where F̃_ext(K × (a, b)) = σ{ L^∞_i(s) : (i, s) ∈ D̃^c_{K,a,b} }, with D̃^c_{K,a,b} = (⟦1, k⟧ × R) \ K × (a, b).
Let A denote the collection of sets A of the form

A = { L^∞(i_r, x_r) ∈ B_r for r = 1, . . . , p },

where p ∈ N, B_1, . . . , B_p ∈ B(R) (the Borel σ-algebra on R) and (i_1, x_1), . . . , (i_p, x_p) ∈ D^c_{K,a,b}. Since in (2.9) the index k ≥ k_2 + 1 was arbitrary, we conclude that for all A ∈ A we have

E[ F(L^∞|_{K×[a,b]}) · 1_A ] = E[ E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q̃) ] · 1_A ].

In view of the bounded convergence theorem, we see that the collection of sets A that satisfy the last equation is a λ-system, and as it contains the π-system A, we conclude by the π–λ theorem that it contains σ(A), which is precisely F_ext(K × (a, b)). We may thus conclude (2.8) from the defining properties of conditional expectation and the fact that the right side of (2.8) is F_ext(K × (a, b))-measurable, as follows from [12, Lemma 3.4]. This suffices for the proof. □

Proof. (of Theorem 1.3) As explained in Section 1.2, Assumption 2' implies Assumption 2, and so by Theorem 1.1 we know that L^N is a tight sequence of line ensembles. Let L^∞_sub be any subsequential limit. We will prove that L^∞_sub has the same distribution as L^∞ in the statement of the theorem. If true, this would imply that L^N has only one possible subsequential limit (namely L^∞), which combined with the tightness of L^N would imply convergence of the sequence to L^∞. By Theorem 1.1 we know that L^∞_sub satisfies the Brownian Gibbs property, and by Assumption 2' we know that L^∞_{sub,1} (the top curve of L^∞_sub) has the same distribution as L^∞_1. In [7] it was proved that L^Airy satisfies the Brownian Gibbs property, and since

L^∞_i(t) = c^{−1/2} L^Airy_i(ct), for i ∈ N and t ∈ R,

we conclude that L^∞ also satisfies the Brownian Gibbs property.
To prove the latter one only needs to use the fact that if B_t is a standard Brownian motion, then so is c^{−1/2}B_{ct}; see e.g. [12, Lemma 3.5], where a related result is established. Combining all of the above observations, we see that L^∞_sub and L^∞ both satisfy the Brownian Gibbs property and have the same top curve distribution, which by [12, Theorem 1.1] implies that L^∞_sub and L^∞ have the same law. □

3. Properties of Bernoulli line ensembles
In this section we derive several results for Bernoulli line ensembles, which will be used in the proof of Theorem 2.26 in Section 4.

3.1. Monotone coupling lemmas.
In this section we formulate two lemmas that provide couplings of two Bernoulli line ensembles of non-intersecting Bernoulli bridges on the same interval, which depend monotonically on their boundary data. Schematic depictions of the couplings are provided in Figure 4. We postpone the proofs of these lemmas until Section 7.
Figure 4.
Two diagrammatic depictions of the monotone couplings of Lemma 3.1 (left part) and Lemma 3.2 (right part). Red depicts the lower line ensemble and its accompanying entry data, exit data, and bottom bounding curve, while blue depicts those of the higher ensemble.
Lemma 3.1.
Assume the same notation as in Definition 2.15. Fix k ∈ N, T_0, T_1 ∈ Z with T_0 < T_1, S ⊆ ⟦T_0, T_1⟧, a function g : ⟦T_0, T_1⟧ → [−∞, ∞), as well as x⃗, y⃗, x⃗′, y⃗′ ∈ W_k with x_i ≤ x′_i and y_i ≤ y′_i for i = 1, . . . , k. Assume that Ω_avoid(T_0, T_1, x⃗, y⃗, ∞, g; S) and Ω_avoid(T_0, T_1, x⃗′, y⃗′, ∞, g; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two ⟦1, k⟧-indexed Bernoulli line ensembles L^t and L^b on ⟦T_0, T_1⟧, such that the law of L^t (resp. L^b) under P is given by P^{T_0,T_1,x⃗′,y⃗′,∞,g}_{avoid,Ber;S} (resp. P^{T_0,T_1,x⃗,y⃗,∞,g}_{avoid,Ber;S}) and such that P-almost surely we have L^t_i(r) ≥ L^b_i(r) for all i = 1, . . . , k and r ∈ ⟦T_0, T_1⟧.

Lemma 3.2.
Assume the same notation as in Definition 2.15. Fix k ∈ N, T_0, T_1 ∈ Z with T_0 < T_1, S ⊆ ⟦T_0, T_1⟧, two functions g^t, g^b : ⟦T_0, T_1⟧ → [−∞, ∞), and x⃗, y⃗ ∈ W_k. We assume that g^t(r) ≥ g^b(r) for all r ∈ ⟦T_0, T_1⟧ and that Ω_avoid(T_0, T_1, x⃗, y⃗, ∞, g^t; S) and Ω_avoid(T_0, T_1, x⃗, y⃗, ∞, g^b; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two ⟦1, k⟧-indexed Bernoulli line ensembles L^t and L^b on ⟦T_0, T_1⟧, such that the law of L^t (resp. L^b) under P is given by P^{T_0,T_1,x⃗,y⃗,∞,g^t}_{avoid,Ber;S} (resp. P^{T_0,T_1,x⃗,y⃗,∞,g^b}_{avoid,Ber;S}) and such that P-almost surely we have L^t_i(r) ≥ L^b_i(r) for all i = 1, . . . , k and r ∈ ⟦T_0, T_1⟧.

In plain words, Lemma 3.1 states that one can couple two Bernoulli line ensembles L^t and L^b of non-intersecting Bernoulli bridges, bounded from below by the same function g, in such a way that if all boundary values of L^t are above the respective boundary values of L^b, then all up-right paths of L^t are almost surely above the respective up-right paths of L^b; see the left part of Figure 4. Lemma 3.2 states that one can couple two Bernoulli line ensembles L^t and L^b that have the same boundary values, but whose lower bound g^t for L^t lies above the lower bound g^b for L^b, in such a way that all up-right paths of L^t are almost surely above the respective up-right paths of L^b; see the right part of Figure 4.

3.2. Properties of Bernoulli and Brownian bridges.
In this section we derive several results about Bernoulli bridges, which are random up-right paths with law $\mathbb{P}^{T_0,T_1,x,y}_{\mathrm{Ber}}$ as in Section 2.2, as well as Brownian bridges with law $\mathbb{P}^{T_0,T_1,x,y}_{\mathrm{free}}$ as in Section 2.1. Our results will rely on the two monotonicity Lemmas 3.1 and 3.2 as well as a strong coupling between Bernoulli bridges and Brownian bridges from [6], recalled here as Theorem 3.3.

If $W_t$ denotes a standard one-dimensional Brownian motion and $\sigma > 0$, then the process
$$B^\sigma_t = \sigma\,(W_t - tW_1), \qquad 0 \le t \le 1,$$
is called a Brownian bridge (conditioned on $B^\sigma_0 = 0$, $B^\sigma_1 = 0$) with variance $\sigma^2$. We note that $B^\sigma$ is the unique a.s. continuous Gaussian process on $[0,1]$ with $B^\sigma_0 = B^\sigma_1 = 0$, $\mathbb{E}[B^\sigma_t] = 0$, and
(3.1) $\mathbb{E}[B^\sigma_r B^\sigma_s] = \sigma^2\,(r \wedge s - rs - sr + rs) = \sigma^2\,(r \wedge s - rs).$
With the above notation we state the strong coupling result we use.
Theorem 3.3.
Let $p \in (0,1)$. There exist constants $0 < C, a, \alpha < \infty$ (depending on $p$) such that for every positive integer $n$, there is a probability space on which are defined a Brownian bridge $B^\sigma$ with variance $\sigma^2 = p(1-p)$ and a family of random paths $\ell^{(n,z)} \in \Omega(0, n, 0, z)$ for $z = 0, \dots, n$ such that $\ell^{(n,z)}$ has law $\mathbb{P}^{0,n,0,z}_{\mathrm{Ber}}$ and
(3.2) $\mathbb{E}\big[ e^{a\,\Delta(n,z)} \big] \le C e^{\alpha (\log n)^2} e^{|z - pn|^2/n}, \quad \text{where} \quad \Delta(n,z) := \sup_{0 \le t \le n} \Big| \sqrt{n}\, B^\sigma_{t/n} + \tfrac{t}{n}\, z - \ell^{(n,z)}(t) \Big|.$

Remark 3.4. When $p = 1/2$ the above theorem follows (after a trivial affine shift) from [20, Theorem 6.3], and the general $p \in (0,1)$ case was done in [6, Theorem 4.5]. We mention that a significant generalization of Theorem 3.3 for general random walk bridges has recently been proved in [13, Theorem 2.3].

We will use the following simple corollary of Theorem 3.3 to compare Bernoulli bridges with Brownian bridges. We use the same notation as in the theorem.

Corollary 3.5.
Fix $p \in (0,1)$, $\beta > 0$, and $A > 0$. Suppose $|z - pn| \le K\sqrt{n}$ for a constant $K > 0$. Then for any $\epsilon > 0$, there exists $N$ large enough, depending on $p, \epsilon, A, K$, so that for $n \ge N$,
$$\mathbb{P}\big( \Delta(n,z) \ge A n^\beta \big) < \epsilon.$$

Proof.
Applying Chebyshev's inequality and (3.2) gives
$$\mathbb{P}\big( \Delta(n,z) \ge A n^\beta \big) \le e^{-aAn^\beta}\, \mathbb{E}\big[ e^{a\,\Delta(n,z)} \big] \le C \exp\big[ -aAn^\beta + \alpha(\log n)^2 + |z - pn|^2/n \big] \le C \exp\big[ -aAn^\beta + \alpha(\log n)^2 + K^2 \big].$$
The conclusion is now immediate. $\square$
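The paths $\ell^{(n,z)}$ appearing in Theorem 3.3 are elementary to simulate: the law $\mathbb{P}^{0,n,0,z}_{\mathrm{Ber}}$ is uniform over up-right paths from $(0,0)$ to $(n,z)$, so one may place the $z$ unit up-steps at a uniformly random set of time slots. The following Python sketch (our illustration, not part of the paper; all names are ours) samples such a bridge and computes the diffusively rescaled deviation from the straight line that the coupling with $B^\sigma$ controls.

```python
import random

def sample_bernoulli_bridge(n, z, rng):
    """Sample a uniformly random up-right path l: {0,...,n} -> Z with
    l(0) = 0 and l(n) = z, i.e. a Bernoulli bridge with law P^{0,n,0,z}_Ber.
    Uniformity holds because the z up-steps occupy a uniformly random
    z-element subset of the n time slots."""
    steps = [1] * z + [0] * (n - z)
    rng.shuffle(steps)
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    return path

def rescaled_deviation(path):
    """Maximum deviation of the bridge from the straight line joining its
    endpoints, divided by sqrt(n) -- the order-one quantity that Theorem 3.3
    compares to the maximum of a Brownian bridge."""
    n = len(path) - 1
    z = path[-1]
    return max(abs(path[t] - z * t / n) for t in range(n + 1)) / n ** 0.5

rng = random.Random(0)
n, p = 10_000, 0.5
z = int(p * n)
ell = sample_bernoulli_bridge(n, z, rng)
dev = rescaled_deviation(ell)
```

For $n$ large, `dev` is typically of order $\sqrt{p(1-p)}$, matching the Brownian bridge scale in the theorem; the sketch makes no claim about the coupling itself, which requires the dyadic construction of [6].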
We also state the following result regarding the distribution of the maximum of a Brownianbridge, which follows from formulas in [14, Section 12.3].
Lemma 3.6.
Fix $p \in (0,1)$, and let $B^\sigma$ be a Brownian bridge of variance $\sigma^2 = p(1-p)$ on $[0,1]$. Then for any $C, T > 0$ we have
(3.3)
$$\mathbb{P}\Big( \max_{s \in [0,T]} B^\sigma_{s/T} \ge C \Big) = \exp\Big( \frac{-2C^2}{p(1-p)} \Big), \qquad \mathbb{P}\Big( \max_{s \in [0,T]} \big| B^\sigma_{s/T} \big| \ge C \Big) = 2 \sum_{n=1}^{\infty} (-1)^{n-1} \exp\Big( \frac{-2n^2C^2}{p(1-p)} \Big).$$
In particular,
(3.4) $\mathbb{P}\Big( \max_{s \in [0,T]} \big| B^\sigma_{s/T} \big| \ge C \Big) \le 2\exp\Big( \frac{-2C^2}{p(1-p)} \Big).$

Proof.
Let $B$ be a Brownian bridge with variance $1$ on $[0,1]$. Then $B^\sigma_t$ has the same distribution as $\sigma B_t$. Hence
$$\mathbb{P}\Big( \max_{s \in [0,T]} B^\sigma_{s/T} \ge C \Big) = \mathbb{P}\Big( \max_{t \in [0,1]} B_t \ge C/\sigma \Big) = e^{-2(C/\sigma)^2} = e^{-2C^2/p(1-p)}.$$
The second equality follows from [14, Proposition 12.3.3]. This proves the first equality in (3.3). Similarly, using [14, Proposition 12.3.4] we find
$$\mathbb{P}\Big( \max_{s \in [0,T]} \big| B^\sigma_{s/T} \big| \ge C \Big) = \mathbb{P}\Big( \max_{t \in [0,1]} |B_t| \ge C/\sigma \Big) = 2\sum_{n=1}^{\infty}(-1)^{n-1} e^{-2n^2C^2/\sigma^2},$$
proving the second equality in (3.3). Lastly, to prove (3.4), observe that since $B^\sigma_t$ has mean $0$, $B^\sigma_t$ and $-B^\sigma_t$ have the same distribution. It follows from the first equality above that
$$\mathbb{P}\Big( \max_{s \in [0,T]} \big|B^\sigma_{s/T}\big| \ge C \Big) \le \mathbb{P}\Big( \max_{s \in [0,T]} B^\sigma_{s/T} \ge C \Big) + \mathbb{P}\Big( \max_{s \in [0,T]} \big({-B^\sigma_{s/T}}\big) \ge C \Big) = 2\,\mathbb{P}\Big( \max_{s \in [0,T]} B^\sigma_{s/T} \ge C \Big) = 2e^{-2C^2/p(1-p)}. \qquad \square$$

We state one more lemma about Brownian bridges, which allows us to decompose a bridge on $[0,T]$ into two independent bridges with Gaussian affine shifts meeting at a point in $(0,T)$.

Lemma 3.7.
Fix $p \in (0,1)$, $T > 0$, $t \in (0,T)$, and let $B^\sigma$ be a Brownian bridge of variance $\sigma^2 = p(1-p)$ on $[0,1]$. Let $\xi$ be a Gaussian random variable with mean $0$ and variance
$$\mathbb{E}[\xi^2] = \sigma^2\, \frac{t}{T}\Big( 1 - \frac{t}{T} \Big).$$
Let $B^1, B^2$ be two independent Brownian bridges on $[0,1]$ with variances $\sigma^2 t/T$ and $\sigma^2(T-t)/T$ respectively, also independent from $\xi$. Define the process
$$\tilde{B}_{s/T} = \begin{cases} \dfrac{s}{t}\,\xi + B^1\big(\tfrac{s}{t}\big), & s \le t, \\[4pt] \dfrac{T-s}{T-t}\,\xi + B^2\big(\tfrac{s-t}{T-t}\big), & s \ge t, \end{cases}$$
for $s \in [0,T]$. Then $\tilde{B}$ is a Brownian bridge with variance $\sigma^2$.

Proof. It is clear that the process $\tilde{B}$ is a.s. continuous. Since $\tilde{B}$ is built from three independent zero-centered Gaussian processes, it is itself a zero-centered Gaussian process and thus completely characterized by its covariance. Consequently, to show that $\tilde{B}$ is a Brownian bridge of variance $\sigma^2$, it suffices to show by (3.1) that if $0 \le r \le s \le T$ we have
(3.5) $\mathbb{E}\big[ \tilde{B}_{r/T} \tilde{B}_{s/T} \big] = \sigma^2\, \frac{r}{T}\Big( 1 - \frac{s}{T} \Big).$
First assume $s \le t$. Using the fact that $\xi$ and $B^1$ are independent with mean $0$, we find
$$\mathbb{E}\big[\tilde{B}_{r/T}\tilde{B}_{s/T}\big] = \frac{rs}{t^2}\cdot \sigma^2 \frac{t}{T}\Big(1-\frac{t}{T}\Big) + \sigma^2\frac{t}{T}\cdot\frac{r}{t}\Big(1-\frac{s}{t}\Big) = \sigma^2\frac{r}{T}\Big( \frac{s}{t} - \frac{s}{T} + 1 - \frac{s}{t} \Big) = \sigma^2\frac{r}{T}\Big(1-\frac{s}{T}\Big).$$
If $r \ge t$, we compute
$$\mathbb{E}\big[\tilde{B}_{r/T}\tilde{B}_{s/T}\big] = \frac{(T-r)(T-s)}{(T-t)^2}\cdot\sigma^2\frac{t}{T}\Big(1-\frac{t}{T}\Big) + \sigma^2\frac{T-t}{T}\cdot\frac{r-t}{T-t}\Big(1-\frac{s-t}{T-t}\Big) = \sigma^2\frac{T-s}{T(T-t)}\Big( \frac{t(T-r)}{T} + r - t \Big) = \sigma^2\frac{T-s}{T(T-t)}\cdot\frac{r(T-t)}{T} = \sigma^2\frac{r}{T}\Big(1-\frac{s}{T}\Big).$$
If $r < t < s$, then since $\xi$, $B^1$, and $B^2$ are all independent, we have
$$\mathbb{E}\big[\tilde{B}_{r/T}\tilde{B}_{s/T}\big] = \frac{r}{t}\cdot\frac{T-s}{T-t}\cdot\sigma^2\frac{t(T-t)}{T^2} = \sigma^2\frac{r(T-s)}{T^2} = \sigma^2\frac{r}{T}\Big(1-\frac{s}{T}\Big).$$
This proves (3.5) in all cases. $\square$
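The case analysis in the proof of Lemma 3.7 is purely algebraic and can be checked with exact rational arithmetic. The sketch below (our illustration, not from the paper) encodes the covariance of $\tilde{B}$ from the three independent pieces and compares it with the target bridge covariance (3.5) at rational points.

```python
from fractions import Fraction as F

def cov_bridge(v, a, b):
    # Brownian bridge on [0,1] with variance v: E[B_a B_b] = v*(min(a,b) - a*b), cf. (3.1)
    return v * (min(a, b) - a * b)

def cov_tilde(r, s, T, t, sigma2):
    # E[tilde B_{r/T} tilde B_{s/T}] for 0 <= r <= s <= T, assembled from the
    # decomposition of Lemma 3.7 (xi, B^1, B^2 independent and centered)
    r, s, T, t, sigma2 = map(F, (r, s, T, t, sigma2))
    var_xi = sigma2 * (t / T) * (1 - t / T)
    if s <= t:   # both points on the first piece
        return (r / t) * (s / t) * var_xi + cov_bridge(sigma2 * t / T, r / t, s / t)
    if r >= t:   # both points on the second piece
        c1, c2 = (T - r) / (T - t), (T - s) / (T - t)
        return c1 * c2 * var_xi + cov_bridge(
            sigma2 * (T - t) / T, (r - t) / (T - t), (s - t) / (T - t))
    # straddling case r < t < s: only the xi terms are correlated
    return (r / t) * ((T - s) / (T - t)) * var_xi

def cov_target(r, s, T, sigma2):
    # desired bridge covariance sigma^2 * (r/T) * (1 - s/T), cf. (3.5)
    r, s, T, sigma2 = map(F, (r, s, T, sigma2))
    return sigma2 * (r / T) * (1 - s / T)
```

Running the check over all integer pairs $0 \le r \le s \le T$ for a few choices of $(T, t)$ confirms the three displayed computations with no floating-point error.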
Below we list four lemmas about Bernoulli bridges. We provide a brief informal explanation of what each result says after it is stated. All four lemmas are proved in a similar fashion. For the first two lemmas one observes that the event whose probability is being estimated is monotone in $\ell$. This allows us, by Lemmas 3.1 and 3.2, to replace $x, y$ in the statements of the lemmas with the extreme values of the ranges specified in each. Once the choice of $x$ and $y$ is fixed, one can use our strong coupling results, Theorem 3.3 and Corollary 3.5, to reduce each of the lemmas to an analogous one involving a Brownian bridge with some prescribed variance. The latter statements are then easily confirmed, as one has exact formulas for Brownian bridges such as Lemma 3.6.

Lemma 3.8.
Fix $p \in (0,1)$, $T \in \mathbb{N}$ and $x, y \in \mathbb{Z}$ such that $T \ge y - x \ge 0$, and suppose that $\ell$ has distribution $\mathbb{P}^{0,T,x,y}_{\mathrm{Ber}}$. Let $M_1, M_2 \in \mathbb{R}$ be given. Then we can find $W_0 = W_0(p, M_2 - M_1) \in \mathbb{N}$ such that for $T \ge W_0$, $x \ge M_1 T^{1/2}$, $y \ge pT + M_2 T^{1/2}$ and $s \in [0,T]$ we have
(3.6) $\mathbb{P}^{0,T,x,y}_{\mathrm{Ber}}\Big( \ell(s) \ge \tfrac{T-s}{T}\cdot M_1T^{1/2} + \tfrac{s}{T}\cdot\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big) \ge \frac{1}{3}.$

Remark 3.9. If $M_1 = M_2 = 0$ then Lemma 3.8 states that if a Bernoulli bridge $\ell$ is started from $(0,x)$ and terminates at $(T,y)$, which are above the straight line of slope $p$, then at any given time $s \in [0,T]$ the probability that $\ell(s)$ goes a modest distance below the straight line of slope $p$ is upper bounded by $2/3$.

Proof.
Define $A = \lfloor M_1T^{1/2}\rfloor$ and $B = \lfloor pT + M_2T^{1/2}\rfloor$. Then since $A \le x$ and $B \le y$, it follows from Lemma 3.1 that there is a probability space with measure $\mathbb{P}$ supporting random variables $\mathfrak{L}^1$ and $\mathfrak{L}^2$, whose laws under $\mathbb{P}$ are $\mathbb{P}^{0,T,A,B}_{\mathrm{Ber}}$ and $\mathbb{P}^{0,T,x,y}_{\mathrm{Ber}}$ respectively, and $\mathbb{P}$-a.s. we have $\mathfrak{L}^1 \le \mathfrak{L}^2$.
Thus
(3.7)
$$\mathbb{P}^{0,T,x,y}_{\mathrm{Ber}}\Big( \ell(s) \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big) = \mathbb{P}\Big( \mathfrak{L}^2(s) \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big)$$
$$\ge \mathbb{P}\Big( \mathfrak{L}^1(s) \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big) = \mathbb{P}^{0,T,A,B}_{\mathrm{Ber}}\Big( \ell(s) \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big).$$
Since the uniform distribution on up-right paths on $\llbracket 0,T\rrbracket \times \llbracket A,B\rrbracket$ is the same as that on up-right paths on $\llbracket 0,T\rrbracket \times \llbracket 0, B-A\rrbracket$ shifted vertically by $A$, the last line of (3.7) is equal to
$$\mathbb{P}^{0,T,0,B-A}_{\mathrm{Ber}}\Big( \ell(s) + A \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big).$$
Now we employ the coupling provided by Theorem 3.3. We have another probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting a random variable $\ell^{(T,B-A)}$ whose law under $\mathbb{P}$ is $\mathbb{P}^{0,T,0,B-A}_{\mathrm{Ber}}$, as well as a Brownian bridge $B^\sigma$ coupled with $\ell^{(T,B-A)}$. We have
(3.8)
$$\mathbb{P}^{0,T,0,B-A}_{\mathrm{Ber}}\Big( \ell(s) + A \ge \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big)$$
$$= \mathbb{P}\Big( \Big[\ell^{(T,B-A)}(s) - \sqrt{T}B^\sigma_{s/T} - \tfrac{s}{T}(B-A)\Big] + \sqrt{T}B^\sigma_{s/T} \ge -A - \tfrac{s}{T}(B-A) + \tfrac{T-s}{T}M_1T^{1/2} + \tfrac{s}{T}\big(pT + M_2T^{1/2}\big) - T^{1/4} \Big).$$
Recalling the definitions of $A$ and $B$, we can rewrite the quantity on the right-hand side of the inequality in the last line of (3.8) and bound it by
$$\tfrac{T-s}{T}\big(M_1T^{1/2} - A\big) + \tfrac{s}{T}\big(pT + M_2T^{1/2} - B\big) - T^{1/4} \le \tfrac{T-s}{T} + \tfrac{s}{T} - T^{1/4} = -T^{1/4} + 1.$$
Thus the last line of (3.7) is bounded below by
(3.9)
$$\mathbb{P}\Big( \Big[\ell^{(T,B-A)}(s) - \sqrt{T}B^\sigma_{s/T} - \tfrac{s}{T}(B-A)\Big] + \sqrt{T}B^\sigma_{s/T} \ge -T^{1/4} + 1 \Big) \ge \mathbb{P}\Big( \sqrt{T}B^\sigma_{s/T} \ge 0,\ \Delta(T, B-A) < T^{1/4} - 1 \Big)$$
$$\ge \mathbb{P}\big( B^\sigma_{s/T} \ge 0 \big) - \mathbb{P}\big( \Delta(T, B-A) \ge T^{1/4} - 1 \big) = \frac{1}{2} - \mathbb{P}\big( \Delta(T, B-A) \ge T^{1/4} - 1 \big).$$
For the first inequality, we used the fact that the quantity in brackets is bounded in absolute value by $\Delta(T, B-A)$. The second inequality follows by dividing the event $\{B^\sigma_{s/T} \ge 0\}$ into cases and applying subadditivity. Since $|B - A - pT| \le (M_2 - M_1 + 1)\sqrt{T}$, Corollary 3.5 allows us to choose $W_0$ large enough, depending on $p$ and $M_2 - M_1$, so that if $T \ge W_0$ then the last line of (3.9) is bounded below by $1/2 - 1/6 = 1/3$. In combination with (3.7) this proves (3.6). $\square$

Lemma 3.10.
Fix $p \in (0,1)$, $T \in \mathbb{N}$ and $y, z \in \mathbb{Z}$ such that $T \ge y, z \ge 0$, and suppose that $\ell^y, \ell^z$ have distributions $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}$, $\mathbb{P}^{0,T,0,z}_{\mathrm{Ber}}$ respectively. Let $M > 0$ and $\epsilon > 0$ be given. Then we can find $W_1 = W_1(M, p, \epsilon) \in \mathbb{N}$ and $A = A(M, p, \epsilon) > 0$ such that for $T \ge W_1$, $y \ge pT - MT^{1/2}$, $z \le pT + MT^{1/2}$ we have
(3.10)
$$\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}\Big( \inf_{s\in[0,T]} \big[ \ell^y(s) - ps \big] \le -AT^{1/2} \Big) \le \epsilon, \qquad \mathbb{P}^{0,T,0,z}_{\mathrm{Ber}}\Big( \sup_{s\in[0,T]} \big[ \ell^z(s) - ps \big] \ge AT^{1/2} \Big) \le \epsilon.$$

Remark 3.11. Roughly, Lemma 3.10 states that if a Bernoulli bridge $\ell$ is started from $(0,0)$ and terminates at time $T$ not significantly lower (resp. higher) than the straight line of slope $p$, then the event that $\ell$ goes significantly below (resp. above) the straight line of slope $p$ is very unlikely.

Proof.
The two inequalities are proven in essentially the same way, and we begin with the first. If $B = \lfloor pT - MT^{1/2}\rfloor$, then it follows from Lemma 3.1 that
(3.11) $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}\Big( \inf_{s\in[0,T]}\big[\ell^y(s) - ps\big] \le -AT^{1/2} \Big) \le \mathbb{P}^{0,T,0,B}_{\mathrm{Ber}}\Big( \inf_{s\in[0,T]}\big[\ell(s) - ps\big] \le -AT^{1/2} \Big),$
where $\ell$ has law $\mathbb{P}^{0,T,0,B}_{\mathrm{Ber}}$. By Theorem 3.3, there is a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting a random variable $\ell^{(T,B)}$ whose law under $\mathbb{P}$ is also $\mathbb{P}^{0,T,0,B}_{\mathrm{Ber}}$, and a Brownian bridge $B^\sigma$ with variance $\sigma^2 = p(1-p)$. Therefore
(3.12)
$$\mathbb{P}^{0,T,0,B}_{\mathrm{Ber}}\Big( \inf_{s\in[0,T]}\big[\ell(s) - ps\big] \le -AT^{1/2} \Big) = \mathbb{P}\Big( \inf_{s\in[0,T]}\big[\ell^{(T,B)}(s) - ps\big] \le -AT^{1/2} \Big)$$
$$\le \mathbb{P}\Big( \inf_{s\in[0,T]} \sqrt{T}B^\sigma_{s/T} \le -\tfrac{A}{2}T^{1/2} \Big) + \mathbb{P}\Big( \sup_{s\in[0,T]} \Big| \sqrt{T}B^\sigma_{s/T} + ps - \ell^{(T,B)}(s) \Big| \ge \tfrac{A}{2}T^{1/2} \Big) \le \mathbb{P}\Big( \max_{s\in[0,T]} B^\sigma_{s/T} \ge A/2 \Big) + \mathbb{P}\Big( \Delta(T,B) \ge \tfrac{A}{2}T^{1/2} - MT^{1/2} - 1 \Big).$$
For the first term in the last line, we used the fact that $B^\sigma$ and $-B^\sigma$ have the same distribution. For the second term, we used the fact that
$$\sup_{s\in[0,T]}\Big| ps - \frac{s}{T}\cdot B \Big| \le \sup_{s\in[0,T]}\Big| ps - \frac{pT - MT^{1/2}}{T}\cdot s \Big| + 1 = MT^{1/2} + 1.$$
By Lemma 3.6, the first term in the last line of (3.12) is equal to $e^{-A^2/2p(1-p)}$. If we choose $A \ge \sqrt{2p(1-p)\log(2/\epsilon)}$, then this is $\le \epsilon/2$. If we also take $A > 2M$, then since $|B - pT| \le (M+1)\sqrt{T}$, Corollary 3.5 gives us a $W_1$ large enough, depending on $M, p, \epsilon$, so that the second term in the last line of (3.12) is also $< \epsilon/2$ for $T \ge W_1$.
Adding the two terms and using (3.11) gives the first inequality in (3.10). If we replace $B$ with $\lceil pT + MT^{1/2}\rceil$ and change signs and inequalities where appropriate, then the same argument proves the second inequality in (3.10). $\square$

We need the following definition for our next result. For a function $f \in C([a,b])$ we define its modulus of continuity for $\delta > 0$ by
(3.13) $w(f,\delta) = \sup_{\substack{x,y \in [a,b] \\ |x-y| \le \delta}} |f(x) - f(y)|.$
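For a function known only at finitely many points, the supremum in (3.13) becomes a maximum over grid pairs. The following helper (ours, for illustration; it evaluates the modulus over a supplied finite grid rather than all of $[a,b]$) is a direct transcription of the definition.

```python
def modulus_of_continuity(f, grid, delta):
    """w(f, delta) of (3.13), restricted to a finite grid of points in [a, b]:
    the largest |f(x) - f(y)| over grid pairs with |x - y| <= delta."""
    vals = [(x, f(x)) for x in grid]
    return max(
        (abs(fx - fy)
         for i, (x, fx) in enumerate(vals)
         for y, fy in vals[i:]
         if abs(x - y) <= delta),
        default=0.0,
    )
```

For instance, on the grid $\{0, 0.25, 0.5, 0.75, 1\}$ the identity function has modulus $0.5$ at $\delta = 0.5$, while a constant function has modulus $0$ at every $\delta$, as the definition requires.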
Lemma 3.12.
Fix $p \in (0,1)$, $T \in \mathbb{N}$ and $y \in \mathbb{Z}$ such that $T \ge y \ge 0$, and suppose that $\ell$ has distribution $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}$. For each positive $M$, $\epsilon$ and $\eta$, there exist a $\delta = \delta(\epsilon, \eta, M) > 0$ and $W_2 = W_2(M, p, \epsilon, \eta) \in \mathbb{N}$ such that for $T \ge W_2$ and $|y - pT| \le MT^{1/2}$ we have
(3.14) $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}\big( w(f^\ell, \delta) \ge \epsilon \big) \le \eta,$
where $f^\ell(u) = T^{-1/2}\big( \ell(uT) - puT \big)$ for $u \in [0,1]$.

Remark 3.13. Lemma 3.12 states that if $\ell$ is a Bernoulli bridge that is started from $(0,0)$ and terminates at $(T,y)$ with $y$ close to $pT$ (i.e. with well-behaved endpoints), then the modulus of continuity of $\ell$ is also well-behaved with high probability.

Proof.
By Theorem 3.3, we have a probability measure $\mathbb{P}$ supporting a random variable $\ell^{(T,y)}$ with law $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}$ as well as a Brownian bridge $B^\sigma$ with variance $\sigma^2 = p(1-p)$. We have
(3.15) $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}\big( w(f^\ell, \delta) \ge \epsilon \big) = \mathbb{P}\big( w(f^{\ell^{(T,y)}}, \delta) \ge \epsilon \big),$
and
(3.16)
$$w\big(f^{\ell^{(T,y)}}, \delta\big) = T^{-1/2} \sup_{\substack{s,t\in[0,1] \\ |s-t|\le\delta}} \Big| \ell^{(T,y)}(sT) - psT - \ell^{(T,y)}(tT) + ptT \Big|$$
$$\le T^{-1/2} \sup_{\substack{s,t\in[0,1] \\ |s-t|\le\delta}} \Big( \Big|\sqrt{T}B^\sigma_s + sy - psT - \sqrt{T}B^\sigma_t - ty + ptT\Big| + \Big|\sqrt{T}B^\sigma_s + sy - \ell^{(T,y)}(sT)\Big| + \Big|\sqrt{T}B^\sigma_t + ty - \ell^{(T,y)}(tT)\Big| \Big)$$
$$\le \sup_{\substack{s,t\in[0,1] \\ |s-t|\le\delta}} \Big| B^\sigma_s - B^\sigma_t + T^{-1/2}(y - pT)(s - t) \Big| + 2T^{-1/2}\Delta(T,y) \le w\big(B^\sigma, \delta\big) + M\delta + 2T^{-1/2}\Delta(T,y).$$
The last line follows from the assumption that $|y - pT| \le MT^{1/2}$. Now (3.15) and (3.16) together imply that
(3.17) $\mathbb{P}^{0,T,0,y}_{\mathrm{Ber}}\big( w(f^\ell, \delta) \ge \epsilon \big) \le \mathbb{P}\big( w(B^\sigma, \delta) + M\delta + 2T^{-1/2}\Delta(T,y) \ge \epsilon \big) \le \mathbb{P}\big( w(B^\sigma, \delta) + M\delta \ge \epsilon/2 \big) + \mathbb{P}\big( \Delta(T,y) \ge \epsilon T^{1/2}/4 \big).$
Corollary 3.5 gives us a $W_2$ large enough, depending on $M, p, \epsilon, \eta$, so that the second term in the second line of (3.17) is $\le \eta/2$ for $T \ge W_2$. Since $B^\sigma$ is a.s. uniformly continuous on the compact interval $[0,1]$, $w(B^\sigma, \delta) \to 0$ as $\delta \to 0$. Thus we can find $\delta_0 > 0$ small enough, depending on $\epsilon, \eta$, so that $w(B^\sigma, \delta_0) < \epsilon/4$ with probability at least $1 - \eta/2$. Then with $\delta = \min(\delta_0, \epsilon/4M)$, the first term in the second line of (3.17) is $\le \eta/2$ as well. This implies (3.14). $\square$

Lemma 3.14.
Fix $T \in \mathbb{N}$, $p \in (0,1)$, $C, K > 0$, and $a, b \in \mathbb{Z}$ such that $\Omega(0,T,a,b)$ is non-empty. Let $\ell^{\mathrm{bot}} \in \Omega(0,T,a,b)$ or $\ell^{\mathrm{bot}} = -\infty$. Suppose $\vec{x}, \vec{y} \in W_{k-1}$, $k \ge 2$, are such that $T \ge y_i - x_i \ge 0$ for $1 \le i \le k-1$. Write $\vec{z} = \vec{y} - \vec{x}$, and suppose that
(1) $x_{k-1} + (z_{k-1}/T)s - \ell^{\mathrm{bot}}(s) \ge C\sqrt{T}$ for all $s \in [0,T]$,
(2) $x_i - x_{i+1} \ge C\sqrt{T}$ and $y_i - y_{i+1} \ge C\sqrt{T}$ for $1 \le i \le k-2$,
(3) $|z_i - pT| \le K\sqrt{T}$ for $1 \le i \le k-1$.
Let $\mathfrak{L} = (\mathfrak{L}_1, \dots, \mathfrak{L}_{k-1})$ be a line ensemble with law $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}$, and let $E$ denote the event
$$E = \big\{ \mathfrak{L}_1(s) \ge \cdots \ge \mathfrak{L}_{k-1}(s) \ge \ell^{\mathrm{bot}}(s) \text{ for } s \in [0,T] \big\}.$$
Then we can find $W_3 = W_3(p, C, K)$ so that for $T \ge W_3$,
(3.18) $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}(E) \ge \Big( 1 - 3\sum_{n=1}^{\infty}(-1)^{n-1} e^{-n^2C^2/8p(1-p)} \Big)^{k-1}.$
Moreover, if $C \ge \sqrt{8p(1-p)\log 3}$, then for $T \ge W_3$ we have
(3.19) $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}(E) \ge \big( 1 - 3e^{-C^2/8p(1-p)} \big)^{k-1}.$

Remark 3.15. This lemma states that if $k-1$ independent Bernoulli bridges are well-separated from each other and from $\ell^{\mathrm{bot}}$, then there is a positive probability that the curves will intersect neither each other nor $\ell^{\mathrm{bot}}$. We will use this result to compare curves in an avoiding Bernoulli line ensemble with free Bernoulli bridges.

Proof.
Observe that condition (1) simply states that $\ell^{\mathrm{bot}}$ lies a distance of at least $C\sqrt{T}$ uniformly below the line segment connecting $x_{k-1}$ and $y_{k-1}$. Thus (1) and (2) imply that $E$ occurs if each curve $\mathfrak{L}_i$ remains within a distance of $C\sqrt{T}/2$ from the line segment connecting $x_i$ and $y_i$. As in Theorem 3.3, let $\mathbb{P}_i$ be probability measures supporting random variables $\ell^{(T,z_i)}$ with laws $\mathbb{P}^{0,T,0,z_i}_{\mathrm{Ber}}$. Then
(3.20)
$$\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}(E) \ge \mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}\Big( \sup_{s\in[0,T]} \big| \mathfrak{L}_i(s) - x_i - (z_i/T)s \big| \le C\sqrt{T}/2, \ 1 \le i \le k-1 \Big)$$
$$= \prod_{i=1}^{k-1}\bigg[ \mathbb{P}^{0,T,0,z_i}_{\mathrm{Ber}}\Big( \sup_{s\in[0,T]} \big| \ell(s) - (z_i/T)s \big| \le C\sqrt{T}/2 \Big) \bigg] = \prod_{i=1}^{k-1}\bigg[ 1 - \mathbb{P}_i\Big( \sup_{s\in[0,T]} \big| \ell^{(T,z_i)}(s) - (z_i/T)s \big| > C\sqrt{T}/2 \Big) \bigg].$$
In the second line, we used the fact that $\mathfrak{L}_1, \dots, \mathfrak{L}_{k-1}$ are independent from each other under $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{Ber}}$. Let $B^{\sigma,i}$ be the Brownian bridge with variance $\sigma^2 = p(1-p)$ coupled with $\ell^{(T,z_i)}$ given by Theorem 3.3. Then we have
(3.21) $\mathbb{P}_i\Big( \sup_{s\in[0,T]} \big| \ell^{(T,z_i)}(s) - (z_i/T)s \big| > C\sqrt{T}/2 \Big) \le \mathbb{P}_i\Big( \sup_{s\in[0,T]} \big| \sqrt{T}B^{\sigma,i}_{s/T} \big| > C\sqrt{T}/4 \Big) + \mathbb{P}_i\big( \Delta(T, z_i) > C\sqrt{T}/4 \big).$
By Lemma 3.6, the first term in the second line of (3.21) is equal to $2\sum_{n=1}^{\infty}(-1)^{n-1} e^{-n^2C^2/8p(1-p)}$. Moreover, condition (3) in the hypothesis and Corollary 3.5 allow us to find $W_3$ depending on $p, C, K$ but not on $i$ so that the last probability in (3.21) is bounded above by $\sum_{n=1}^{\infty}(-1)^{n-1} e^{-n^2C^2/8p(1-p)}$ for $T \ge W_3$. Adding these two terms and referring to (3.20) proves (3.18). Now suppose $C \ge \sqrt{8p(1-p)\log 3}$. By (3.4) in Lemma 3.6, the first term in the second line of (3.21) is bounded above by $2e^{-C^2/8p(1-p)}$.
After possibly enlarging $W_3$ from above, the second term is $< e^{-C^2/8p(1-p)}$ for $T \ge W_3$. The assumption on $C$ implies that $1 - 3e^{-C^2/8p(1-p)} \ge 0$, and now combining (3.21) and (3.20) proves (3.19). $\square$

3.3. Properties of avoiding Bernoulli line ensembles.
In this section we derive two results about avoiding Bernoulli line ensembles, which are Bernoulli line ensembles with law $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},f,g}_{\mathrm{avoid,Ber};S}$ as in Definition 2.15. The lemmas we prove only involve the case when $f(r) = \infty$ for all $r \in \llbracket T_0, T_1 \rrbracket$, and we denote the measure in this case by $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},\infty,g}_{\mathrm{avoid,Ber};S}$. A $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},\infty,g}_{\mathrm{avoid,Ber};S}$-distributed random variable will be denoted by $Q = (Q_1, \dots, Q_k)$, where $k$ is the number of up-right paths in the ensemble. As usual, if $g = -\infty$, we write $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y}}_{\mathrm{avoid,Ber};S}$. Our first result relies on the two monotonicity Lemmas 3.1 and 3.2 as well as the strong coupling between Bernoulli bridges and Brownian bridges from Theorem 3.3, and the further results make use of the material in Section 8.

Lemma 3.16.
Fix $p \in (0,1)$ and $k \in \mathbb{N}$. Let $\vec{x}, \vec{y} \in W_k$ be such that $T \ge y_i - x_i \ge 0$ for $i = 1, \dots, k$. Then for any $M_1, M_2 > 0$ we can find $W_4 \in \mathbb{N}$, depending on $p, k, M_1, M_2$, such that if $T \ge W_4$, $x_k \ge -M_1\sqrt{T}$, and $y_k \ge pT - M_1\sqrt{T}$, then for any $S \subseteq \llbracket 0, T \rrbracket$ we have
(3.22) $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{avoid,Ber};S}\Big( Q_k(T/2) - pT/2 \ge M_2\sqrt{T} \Big) \ge \frac{2^{k/2}\,\big(1 - 2e^{-4/p(1-p)}\big)^{2k}\, \exp\big( -2k(M_1+M_2+6)^2/p(1-p) \big)}{[\pi p(1-p)]^{k/2}}.$

Proof.
A sketch of the proof is given in Figure 5 and its caption. Define vectors $\vec{x}\,', \vec{y}\,' \in W_k$ by

Figure 5.
Sketch of the argument for Lemma 3.16: we use Lemma 3.1 to lower the entry and exit data $\vec{x}, \vec{y}$ of the curves to $\vec{x}\,'$ and $\vec{y}\,'$. We define $E$ to be the event that the lines in the line ensemble lie in well-separated strips, with all the strips high enough that $E$ is contained in the event we want to lower bound in (3.22). We then use the strong coupling with Brownian bridges via Theorem 3.3 and bound the probability of the bridges remaining within the blue windows to lower bound $\mathbb{P}(E)$.

$$x_i' = \lfloor -M_1\sqrt{T}\rfloor - 10(i-1)\lceil\sqrt{T}\rceil, \qquad y_i' = \lfloor pT - M_1\sqrt{T}\rfloor - 10(i-1)\lceil\sqrt{T}\rceil.$$
Then $x_i' \le x_k \le x_i$ and $y_i' \le y_k \le y_i$ for $1 \le i \le k$. Thus by Lemma 3.1, we have
$$\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{avoid,Ber};S}\Big( Q_k(T/2) - pT/2 \ge M_2\sqrt{T} \Big) \ge \mathbb{P}^{0,T,\vec{x}\,',\vec{y}\,'}_{\mathrm{avoid,Ber};S}\Big( Q_k(T/2) - pT/2 \ge M_2\sqrt{T} \Big).$$
Let us write $K_i = pT/2 + (M_2+7)\sqrt{T} + (10(k-i)-5)\lceil\sqrt{T}\rceil$ for $1 \le i \le k$. Note that $K_i$ is the midpoint of $pT/2 + (M_2+7)\sqrt{T} + 10(k-i-1)\lceil\sqrt{T}\rceil$ and $pT/2 + (M_2+7)\sqrt{T} + 10(k-i)\lceil\sqrt{T}\rceil$. Let $E$ denote the event that the following conditions hold for $1 \le i \le k$:
(1) $\big| Q_i(T/2) - pT/2 - (M_2+7)\sqrt{T} - (10(k-i)-5)\lceil\sqrt{T}\rceil \big| \le \lceil\sqrt{T}\rceil$,
(2) $\sup_{s\in[0,T/2]} \big| Q_i(s) - x_i' - \frac{K_i - x_i'}{T/2}\, s \big| \le 3\sqrt{T}$,
(3) $\sup_{s\in[T/2,T]} \big| Q_i(s) - K_i - \frac{y_i' - K_i}{T/2}\,(s - T/2) \big| \le 3\sqrt{T}$.
The first condition implies in particular that $Q_k(T/2) - pT/2 \ge M_2\sqrt{T}$ once $T$ is large, and also that $Q_i(T/2) - Q_{i+1}(T/2) \ge 8\lceil\sqrt{T}\rceil$ for each $i$.
The second and third conditions require that each curve $Q_i$ remain within distance $3\sqrt{T}$ of the graph of the piecewise linear function on $[0,T]$ passing through the points $(0, x_i')$, $(T/2, K_i)$, and $(T, y_i')$. We observe that
$$\mathbb{P}^{0,T,\vec{x}\,',\vec{y}\,'}_{\mathrm{avoid,Ber};S}\Big( Q_k(T/2) - pT/2 \ge M_2\sqrt{T} \Big) \ge \mathbb{P}^{0,T,\vec{x}\,',\vec{y}\,'}_{\mathrm{avoid,Ber};S}(E) \ge \mathbb{P}^{0,T,\vec{x}\,',\vec{y}\,'}_{\mathrm{Ber}}(E).$$
The second inequality follows since on $E$ we have $Q_1(s) \ge \cdots \ge Q_k(s)$ for all $s \in \llbracket 0, T \rrbracket$ (here we used that $|\Omega(T_0, T_1, \vec{x}\,', \vec{y}\,')| \ge |\Omega_{\mathrm{avoid}}(T_0, T_1, \vec{x}\,', \vec{y}\,', \infty, -\infty; S)|$). Writing $z = y_k' - x_k'$ we have
(3.23)
$$\mathbb{P}^{0,T,\vec{x}\,',\vec{y}\,'}_{\mathrm{Ber}}(E) = \bigg[ \mathbb{P}^{0,T,0,z}_{\mathrm{Ber}}\Big( \big| \ell(T/2) - (K_1 - x_1') \big| \le \lceil\sqrt{T}\rceil \text{ and } \sup_{s\in[0,T/2]} \Big| \ell(s) - \frac{K_1 - x_1'}{T/2}\, s \Big| \le 3\sqrt{T}$$
$$\text{and } \sup_{s\in[T/2,T]} \Big| \ell(s) - (K_1 - x_1') - \frac{z - (K_1 - x_1')}{T/2}\,(s - T/2) \Big| \le 3\sqrt{T} \Big) \bigg]^k.$$
Let $\mathbb{P}$ be a probability space supporting a random variable $\ell^{(T,z)}$ with law $\mathbb{P}^{0,T,0,z}_{\mathrm{Ber}}$ coupled with a Brownian bridge $B^\sigma$ of variance $\sigma^2 = p(1-p)$, as in Theorem 3.3. Then the expression on the right in (3.23) being raised to the $k$-th power is bounded below for large enough $T$ by
(3.24)
$$\mathbb{P}^{0,T,0,z}_{\mathrm{Ber}}\Big( \big| \ell(T/2) - pT/2 - (M_1+M_2+5)\sqrt{T} \big| \le 3\sqrt{T} - 10 \text{ and } \sup_{s\in[0,T/2]} \Big| \ell(s) - ps - \frac{(M_1+M_2+5)\sqrt{T}}{T/2}\, s \Big| \le 3\sqrt{T} - 10$$
$$\text{and } \sup_{s\in[T/2,T]} \Big| \ell(s) - ps - (M_1+M_2+5)\sqrt{T} + \frac{(M_1+M_2+5)\sqrt{T}}{T/2}\,(s - T/2) \Big| \le 3\sqrt{T} - 10 \Big)$$
$$\ge \mathbb{P}\Big( \big| \sqrt{T}B^\sigma_{1/2} - (M_1+M_2+5)\sqrt{T} \big| \le 2\sqrt{T} \text{ and } \sup_{s\in[0,T/2]} \Big| \sqrt{T}B^\sigma_{s/T} - (M_1+M_2+5)\sqrt{T}\cdot\frac{s}{T/2} \Big| \le 2\sqrt{T}$$
$$\text{and } \sup_{s\in[T/2,T]} \Big| \sqrt{T}B^\sigma_{s/T} - (M_1+M_2+5)\sqrt{T}\cdot\frac{T-s}{T/2} \Big| \le 2\sqrt{T} \Big) - \mathbb{P}\big( \Delta(T,z) > \sqrt{T}/4 \big).$$
Note that $B^\sigma_{1/2}$ is a centered Gaussian random variable with variance $p(1-p)/4 = \sigma^2\cdot\tfrac12\big(1 - \tfrac12\big)$. Writing $\xi = B^\sigma_{1/2}$, it follows from Lemma 3.7 that there exist independent Brownian bridges $B^1, B^2$ with variance $\sigma^2/2$ so that $B^\sigma_{s/T}$ has the same law as
$$\frac{s}{T/2}\,\xi + B^1_{2s/T} \ \text{ for } s \in [0, T/2], \qquad \text{and} \qquad \frac{T-s}{T/2}\,\xi + B^2_{2(s-T/2)/T} \ \text{ for } s \in [T/2, T].$$
The first term in the last expression in (3.24) is thus equal to
(3.25)
$$\mathbb{P}\Big( |\xi - (M_1+M_2+5)| \le 2 \text{ and } \sup_{s\in[0,T/2]} \Big| B^1_{2s/T} - (M_1+M_2+5-\xi)\cdot\frac{s}{T/2} \Big| \le 2 \text{ and } \sup_{s\in[T/2,T]} \Big| B^2_{2(s-T/2)/T} - (M_1+M_2+5-\xi)\cdot\frac{T-s}{T/2} \Big| \le 2 \Big)$$
$$\ge \mathbb{P}\Big( |\xi - (M_1+M_2+5)| \le 1 \text{ and } \sup_{s\in[0,T/2]} \big| B^1_{2s/T} \big| \le 1 \text{ and } \sup_{s\in[T/2,T]} \big| B^2_{2(s-T/2)/T} \big| \le 1 \Big)$$
$$= \mathbb{P}\big( |\xi - (M_1+M_2+5)| \le 1 \big) \cdot \mathbb{P}\Big( \sup_{u\in[0,1]} \big| B^1_u \big| \le 1 \Big) \cdot \mathbb{P}\Big( \sup_{u\in[0,1]} \big| B^2_u \big| \le 1 \Big)$$
$$\ge \big( 1 - 2e^{-4/p(1-p)} \big)^2 \int_{M_1+M_2+4}^{M_1+M_2+6} \frac{e^{-2\xi^2/p(1-p)}\,d\xi}{\sqrt{\pi p(1-p)/2}} \ \ge \ \frac{2\sqrt{2}\, e^{-2(M_1+M_2+6)^2/p(1-p)}}{\sqrt{\pi p(1-p)}}\, \big( 1 - 2e^{-4/p(1-p)} \big)^2.$$
In the third line, we used the fact that $\xi$, $B^1$, and $B^2$ are independent, and in the second-to-last inequality we used Lemma 3.6. Since $|z - pT| \le (M_1+1)\sqrt{T}$, Corollary 3.5 allows us to choose $T$ large enough so that $\mathbb{P}(\Delta(T,z) > \sqrt{T}/4)$ is less than half the expression in the last line of (3.25). Then, in view of (3.23) and (3.24), we conclude (3.22). $\square$

We now state an important weak convergence result, whose proof is presented in Section 8 (more specifically, see Proposition 8.3).
Proposition 3.17.
Fix $p, t \in (0,1)$, $k \in \mathbb{N}$, $\vec{a}, \vec{b} \in W_k$, where we recall $W_k = \{\vec{x} \in \mathbb{R}^k : x_1 \ge x_2 \ge \cdots \ge x_k\}$. Suppose $\vec{x}^{\,T} = (x_1^T, \dots, x_k^T)$ and $\vec{y}^{\,T} = (y_1^T, \dots, y_k^T)$ are two sequences in $W_k$ such that for $i \in \llbracket 1, k \rrbracket$
$$\lim_{T\to\infty} \frac{x_i^T}{\sqrt{T}} = a_i \quad\text{and}\quad \lim_{T\to\infty} \frac{y_i^T - pT}{\sqrt{T}} = b_i.$$
Let $(Q_1^T, \dots, Q_k^T)$ have law $\mathbb{P}^{0,T,\vec{x}^{\,T},\vec{y}^{\,T}}_{\mathrm{avoid,Ber}}$, and define the sequence $\{Z^T\}$ of random $k$-dimensional vectors
$$Z^T = \Big( \frac{Q_1^T(tT) - ptT}{\sqrt{T}}, \dots, \frac{Q_k^T(tT) - ptT}{\sqrt{T}} \Big).$$
Then as $T \to \infty$, $Z^T$ converges weakly to a random vector $\hat{Z}$ on $\mathbb{R}^k$ with a probability density $\rho$ supported on $W_k^\circ$.

The convergence result in Proposition 3.17 allows us to prove the following lemma, which roughly states that if the entrance and exit data of a sequence of avoiding Bernoulli line ensembles remain in compact windows, then with high probability the curves of the ensemble will remain separated from one another at each point by some small positive distance on scale $\sqrt{T}$. This is how Proposition 3.17 will be used in the main argument in the text, although in Section 8 we give a detailed description of the density $\rho$ in Proposition 3.17.

Lemma 3.18.
Fix $p, t \in (0,1)$ and $k \in \mathbb{N}$. Suppose that $\vec{x}^{\,T} = (x_1^T, \dots, x_k^T)$, $\vec{y}^{\,T} = (y_1^T, \dots, y_k^T)$ are elements of $W_k$ such that $T \ge y_i^T - x_i^T \ge 0$ for $i \in \llbracket 1, k \rrbracket$. Then for any $M_1, M_2 > 0$ and $\epsilon > 0$ there exist $W_5 \in \mathbb{N}$ and $\delta > 0$, depending on $p, k, t, M_1, M_2, \epsilon$, such that if $T \ge W_5$, $|x_i^T| \le M_1\sqrt{T}$ and $|y_i^T - pT| \le M_2\sqrt{T}$, then
$$\mathbb{P}^{0,T,\vec{x}^{\,T},\vec{y}^{\,T}}_{\mathrm{avoid,Ber}}\Big( \min_{1 \le i \le k-1} \big[ Q_i(tT) - Q_{i+1}(tT) \big] < \delta\sqrt{T} \Big) < \epsilon.$$

Proof.
We prove the claim by contradiction. Suppose there exist $M_1, M_2, \epsilon > 0$ such that for any $W \in \mathbb{N}$ and $\delta > 0$ there exists some $T \ge W$ with
$$\mathbb{P}^{0,T,\vec{x}^{\,T},\vec{y}^{\,T}}_{\mathrm{avoid,Ber}}\Big( \min_{1 \le i \le k-1} \big[ Q_i(tT) - Q_{i+1}(tT) \big] < \delta\sqrt{T} \Big) \ge \epsilon.$$
Then we can obtain sequences $T_n, \delta_n > 0$, $T_n \nearrow \infty$, $\delta_n \searrow 0$, such that for all $n$ we have
$$\mathbb{P}^{0,T_n,\vec{x}^{\,T_n},\vec{y}^{\,T_n}}_{\mathrm{avoid,Ber}}\Big( \min_{1 \le i \le k-1} \Big[ \frac{Q_i(tT_n) - Q_{i+1}(tT_n)}{\sqrt{T_n}} \Big] < \delta_n \Big) \ge \epsilon.$$
Since $|x_i^{T_n}| \le M_1\sqrt{T_n}$ and $|y_i^{T_n} - pT_n| \le M_2\sqrt{T_n}$ for $1 \le i \le k$, the sequences $\{\vec{x}^{\,T_n}/\sqrt{T_n}\}$, $\{(\vec{y}^{\,T_n} - pT_n)/\sqrt{T_n}\}$ are bounded in $\mathbb{R}^k$. It follows that there exist $\vec{a}, \vec{b} \in \mathbb{R}^k$ and a subsequence $\{T_{n_m}\}$ such that
$$\frac{\vec{x}^{\,T_{n_m}}}{\sqrt{T_{n_m}}} \longrightarrow \vec{a}, \qquad \frac{\vec{y}^{\,T_{n_m}} - pT_{n_m}}{\sqrt{T_{n_m}}} \longrightarrow \vec{b}$$
as $m \to \infty$ (see [26, Theorem 3.6]). Denote
$$Z_i^m = \frac{Q_i(tT_{n_m}) - ptT_{n_m}}{\sqrt{T_{n_m}}}.$$
Fix $\delta > 0$ and choose $M$ large enough so that if $m \ge M$ then $\delta_{n_m} < \delta$. Then for $m \ge M$ we have
(3.26) $\epsilon \le \liminf_{m\to\infty} \mathbb{P}\Big( \min_{1\le i\le k-1} \big[ Z_i^m - Z_{i+1}^m \big] < \delta_{n_m} \Big) \le \liminf_{m\to\infty} \mathbb{P}\Big( \min_{1\le i\le k-1} \big[ Z_i^m - Z_{i+1}^m \big] \le \delta \Big).$
Now by Proposition 3.17, $(Z_1^m, \dots, Z_k^m)$ converges weakly to a random vector $\hat{Z}$ on $\mathbb{R}^k$ with a density $\rho$. It follows from the portmanteau theorem [15, Theorem 3.2.11], applied with the closed set $K = [0, \delta]$, that
(3.27) $\limsup_{m\to\infty} \mathbb{P}\Big( \min_{1\le i\le k-1} \big[ Z_i^m - Z_{i+1}^m \big] \in K \Big) \le \mathbb{P}\Big( \min_{1\le i\le k-1} \big[ \hat{Z}_i - \hat{Z}_{i+1} \big] \in K \Big).$
Combining (3.26) and (3.27), we obtain
(3.28) $\epsilon \le \mathbb{P}\Big( 0 \le \min_{1\le i\le k-1} \big[ \hat{Z}_i - \hat{Z}_{i+1} \big] \le \delta \Big) \le \sum_{i=1}^{k-1} \mathbb{P}\big( 0 \le \hat{Z}_i - \hat{Z}_{i+1} \le \delta \big).$
To conclude the proof, we find a $\delta$ for which (3.28) cannot hold. For $\tilde{\eta} \ge 0$ put
$$E_i^{\tilde{\eta}} = \{ \vec{z} \in \mathbb{R}^k : 0 \le z_i - z_{i+1} \le \tilde{\eta} \}.$$
For each $i \in \llbracket 1, k-1 \rrbracket$ and $\eta > 0$, we have
(3.29) $\mathbb{P}\big( 0 \le \hat{Z}_i - \hat{Z}_{i+1} \le \eta \big) = \int_{\mathbb{R}^k} \rho \cdot \mathbf{1}_{E_i^\eta}\, dz_1 \cdots dz_k.$
Clearly $\rho \cdot \mathbf{1}_{E_i^\eta} \to \rho \cdot \mathbf{1}_{E_i^0}$ pointwise as $\eta \to 0$, and $E_i^0 = \{\vec{z} \in \mathbb{R}^k : z_i = z_{i+1}\}$ has Lebesgue measure $0$. Thus $\rho \cdot \mathbf{1}_{E_i^\eta} \to 0$ a.e. as $\eta \to 0$. Since $|\rho \cdot \mathbf{1}_{E_i^\eta}| \le \rho$ and $\rho$ is integrable, the dominated convergence theorem and (3.29) imply that
$$\mathbb{P}\big( 0 \le \hat{Z}_i - \hat{Z}_{i+1} \le \eta \big) \longrightarrow 0 \quad \text{as } \eta \to 0.$$
Thus for each $i \in \llbracket 1, k-1 \rrbracket$ and $\epsilon > 0$ we can find an $\eta_i > 0$ such that $0 < \eta \le \eta_i$ implies $\mathbb{P}(0 \le \hat{Z}_i - \hat{Z}_{i+1} \le \eta) < \epsilon/(k-1)$. Putting $\delta = \min_{1\le i\le k-1} \eta_i$ we find that
$$\sum_{i=1}^{k-1} \mathbb{P}\big( 0 \le \hat{Z}_i - \hat{Z}_{i+1} \le \delta \big) < \epsilon,$$
contradicting (3.28) for this choice of $\delta$. $\square$

4. Proof of Theorem 2.26
The goal of this section is to prove Theorem 2.26. Throughout this section, we assume that we have fixed $k \in \mathbb{N}$ with $k \ge 2$, $p \in (0,1)$, $\alpha, \lambda > 0$, and an $(\alpha, p, \lambda)$-good sequence $\{\mathfrak{L}^N = (L_1^N, L_2^N, \dots, L_k^N)\}_{N=1}^{\infty}$ of $\llbracket 1, k \rrbracket$-indexed Bernoulli line ensembles as in Definition 2.24, all defined on a probability space with measure $\mathbb{P}$. The proof of Theorem 2.26 depends on three results – Proposition 4.1 and Lemmas 4.2 and 4.3. In these three statements we establish various properties of the sequence of line ensembles $\mathfrak{L}^N$. The constants in these statements depend implicitly on $\alpha$, $p$, $\lambda$, $k$, and the functions $\phi, \psi$ from Definition 2.24, which are fixed throughout; we will not list these dependencies explicitly. The proof of Proposition 4.1 is given in Section 4.1, while the proofs of Lemmas 4.2 and 4.3 are in Section 5. Theorem 2.26 (i) and (ii) are proved in Sections 4.2 and 4.3 respectively.

4.1. Preliminary results.
The main result in this section is presented as Proposition 4.1 below. In order to formulate it and some of the lemmas below, it will be convenient to adopt the following notation for any r > 0 and m ∈ ℕ:

(4.1) t_m = ⌊ (r + m) N^α ⌋.

Proposition 4.1.
Let P be the measure from the beginning of this section. For any ε > 0 and r > 0 there exist δ = δ(ε, r) > 0 and N₁ = N₁(ε, r) such that for all N ≥ N₁ we have

P( Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) < δ ) < ε,

where x⃗ = ( L_1^N(−t₁), …, L_{k−1}^N(−t₁) ), y⃗ = ( L_1^N(t₁), …, L_{k−1}^N(t₁) ), L_k^N⟦−t₁, t₁⟧ is the restriction of L_k^N to the set ⟦−t₁, t₁⟧, and Z is the acceptance probability of Definition 2.22.

The general strategy we use to prove Proposition 4.1 is inspired by the proof of [8, Proposition 6.5]. We begin by stating three key lemmas that will be required. The proofs of Lemmas 4.2 and 4.3 are postponed to Section 5, and Lemma 4.4 is proved in Section 6.
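The acceptance probability Z admits a concrete sampling interpretation: it is the probability that independently sampled Bernoulli bridges with the given boundary data happen to be ordered and to stay above the bottom path. The following toy Monte Carlo sketch (our own illustration, not code from the paper; all names are ours) estimates such a Z for small examples. A uniform Bernoulli bridge from 0 to z in T steps is obtained by shuffling z unit steps among T slots.

```python
import random

def bernoulli_bridge(T, z, rng):
    """Uniformly random up-right path on {0,...,T} from 0 to z:
    place z unit steps uniformly among the T slots."""
    steps = [1] * z + [0] * (T - z)
    rng.shuffle(steps)
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    return path

def acceptance_probability(T, xs, ys, floor, trials, seed=0):
    """Monte Carlo estimate of the acceptance probability Z: the chance
    that independent Bernoulli bridges with entrance data xs and exit
    data ys are ordered top-to-bottom and stay above the floor path."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        paths = [[x + v for v in bernoulli_bridge(T, y - x, rng)]
                 for x, y in zip(xs, ys)]
        ok = all(paths[i][t] >= paths[i + 1][t]
                 for i in range(len(paths) - 1) for t in range(T + 1))
        if ok and all(paths[-1][t] >= floor[t] for t in range(T + 1)):
            hits += 1
    return hits / trials
```

Widening the vertical gap between the entrance/exit data increases the estimated Z, in line with the monotone coupling results of Section 3.1; Proposition 4.1 asserts that for good sequences this quantity is unlikely to be small.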
Lemma 4.2.
Let P be the measure from the beginning of this section. For any ε > 0 and r > 0 there exist R = R(ε, r) > 0 and N₀ = N₀(ε, r) such that for N ≥ N₀

P( sup_{s ∈ [−t₃, t₃]} [ L_1^N(s) − ps ] ≥ R N^{α/2} ) < ε.

Lemma 4.3.
Let P be the measure from the beginning of this section. For any ε > 0 and r > 0 there exist R = R(ε, r) > 0 and N₀ = N₀(ε, r) such that for N ≥ N₀

P( inf_{s ∈ [−t₃, t₃]} [ L_{k−1}^N(s) − ps ] ≤ −R N^{α/2} ) < ε.

Lemma 4.4.
Fix k ∈ ℕ with k ≥ 2, p ∈ (0,1), and r, α, M₁, M₂ > 0. Suppose that ℓ_bot : ⟦−t₃, t₃⟧ → ℝ ∪ {−∞} and x⃗, y⃗ ∈ W_{k−1} are such that |Ω_avoid(−t₃, t₃, x⃗, y⃗, ∞, ℓ_bot)| ≥ 1. Suppose further that

(1) sup_{s ∈ [−t₃, t₃]} [ ℓ_bot(s) − ps ] ≤ M₁(2t₃)^{1/2},
(2) −pt₃ + M₂(2t₃)^{1/2} ≥ x₁ ≥ x_{k−1} ≥ max( ℓ_bot(−t₃), −pt₃ − M₂(2t₃)^{1/2} ),
(3) pt₃ + M₂(2t₃)^{1/2} ≥ y₁ ≥ y_{k−1} ≥ max( ℓ_bot(t₃), pt₃ − M₂(2t₃)^{1/2} ).

Then there exist constants g, h > 0 and N₄ ∈ ℕ, all depending on M₁, M₂, p, k, r, α, such that for any ε̃ > 0 and N ≥ N₄ we have

(4.2) P^{−t₃,t₃,x⃗,y⃗,∞,ℓ_bot}_{avoid,Ber}( Z( −t₁, t₁, Q(−t₁), Q(t₁), ∞, ℓ_bot⟦−t₁, t₁⟧ ) ≤ g h ε̃ ) ≤ ε̃,

where Z is the acceptance probability of Definition 2.22, ℓ_bot⟦−t₁, t₁⟧ is the vector whose coordinates match those of ℓ_bot on ⟦−t₁, t₁⟧, and Q(a) = ( Q_1(a), …, Q_{k−1}(a) ) is the value at location a of the line ensemble Q = (Q_1, …, Q_{k−1}) whose law is P^{−t₃,t₃,x⃗,y⃗,∞,ℓ_bot}_{avoid,Ber}.

Proof of Proposition 4.1. Let ε > 0 be given. Define the event

E_N = { L_{k−1}^N(±t₃) ∓ pt₃ ≥ −M₂(2t₃)^{1/2} } ∩ { L_1^N(±t₃) ∓ pt₃ ≤ M₂(2t₃)^{1/2} } ∩ { sup_{s ∈ [−t₃, t₃]} [ L_k^N(s) − ps ] ≤ M₁(2t₃)^{1/2} }.

In view of Lemmas 4.2 and 4.3 and the fact that P-almost surely L_1^N(s) ≥ L_k^N(s) for all s ∈ [−t₃, t₃], we can find sufficiently large M₁, M₂ and N₃ such that for N ≥ N₃ we have

(4.3) P( E_N^c ) < ε/2.

Let g, h, N₄ be as in Lemma 4.4 for the values M₁, M₂ as above, the values α, p, k from the beginning of the section, and r as in the statement of the proposition.
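Lemmas 4.2 and 4.3 encode the statement that the extreme curves fluctuate around the line of slope p on the scale N^{α/2}, the square root of the time horizon. This scale is already visible for a single free Bernoulli walk, as the following quick simulation shows (an illustration of the scale only, in our own notation; the lemmas themselves concern the interacting ensemble and require the Gibbs-property arguments below):

```python
import random

def sup_deviation(T, p, rng):
    """Simulate a Bernoulli(p) walk of length T and return
    sup_{0<=t<=T} |S_t - p*t|, its deviation from the slope-p line."""
    s, dev = 0, 0.0
    for t in range(1, T + 1):
        s += 1 if rng.random() < p else 0
        dev = max(dev, abs(s - p * t))
    return dev

def median_scaled_deviation(T, p, trials, seed=0):
    """Median over many trials of the sup-deviation divided by sqrt(T)."""
    rng = random.Random(seed)
    vals = sorted(sup_deviation(T, p, rng) / T ** 0.5 for _ in range(trials))
    return vals[trials // 2]
```

For p = 1/2 the scaled median stabilizes near a constant of order √(p(1−p)) as T grows, which is exactly the N^{α/2} normalization used throughout this section.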
For δ = (ε/2)·g·h we denote V = { Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) < δ } and make the following deduction for N ≥ N₄:

P( V ∩ E_N ) = E[ E[ 1{E_N} · 1{V} | σ( L⃗^N(−t₃), L⃗^N(t₃), L_k^N⟦−t₃, t₃⟧ ) ] ]
= E[ 1{E_N} · E[ 1{ Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) < δ } | σ( L⃗^N(−t₃), L⃗^N(t₃), L_k^N⟦−t₃, t₃⟧ ) ] ]
= E[ 1{E_N} · E_avoid[ 1{ Z( −t₁, t₁, L⃗(−t₁), L⃗(t₁), ∞, L_k^N⟦−t₁, t₁⟧ ) < δ } ] ]
≤ E[ 1{E_N} · ε/2 ] ≤ ε/2.   (4.4)

In (4.4) we have written E_avoid in place of E^{−t₃,t₃,L⃗^N(−t₃),L⃗^N(t₃),∞,L_k^N⟦−t₃,t₃⟧}_{avoid,Ber} to ease the notation; in addition, we have that L⃗^N(a) = ( L_1^N(a), …, L_{k−1}^N(a) ), and L⃗ on the third line is distributed according to P^{−t₃,t₃,L⃗^N(−t₃),L⃗^N(t₃),∞,L_k^N⟦−t₃,t₃⟧}_{avoid,Ber}. We elaborate on (4.4) in the paragraph below.

The first equality in (4.4) follows from the tower property for conditional expectations. The second equality uses the definition of V and the fact that 1{E_N} is σ( L⃗^N(−t₃), L⃗^N(t₃), L_k^N⟦−t₃, t₃⟧ )-measurable and can thus be taken outside of the conditional expectation. The third equality uses the Schur Gibbs property, see Definition 2.17.
The first inequality on the third line holds if N ≥ N₄ and uses Lemma 4.4 with ε̃ = ε/2, as well as the fact that on the event E_N the random variables L⃗^N(−t₃), L⃗^N(t₃) and L_k^N⟦−t₃, t₃⟧ (that play the roles of x⃗, y⃗ and ℓ_bot) satisfy the inequalities

(1) sup_{s ∈ [−t₃, t₃]} [ L_k^N(s) − ps ] ≤ M₁(2t₃)^{1/2},
(2) −pt₃ + M₂(2t₃)^{1/2} ≥ L_1^N(−t₃) ≥ L_{k−1}^N(−t₃) ≥ max( L_k^N(−t₃), −pt₃ − M₂(2t₃)^{1/2} ),
(3) pt₃ + M₂(2t₃)^{1/2} ≥ L_1^N(t₃) ≥ L_{k−1}^N(t₃) ≥ max( L_k^N(t₃), pt₃ − M₂(2t₃)^{1/2} ).

The last inequality in (4.4) is trivial. Combining (4.4) with (4.3), we see that for all N ≥ max(N₃, N₄) we have

P(V) = P(V ∩ E_N) + P(V ∩ E_N^c) ≤ ε/2 + P(E_N^c) < ε,

which proves the proposition. □

4.2. Proof of Theorem 2.26 (i).
Since the curves f̃_n^N are obtained from f_n^N by subtracting a deterministic continuous function (namely λs²) and rescaling by a constant (namely √(p(1−p))), we see that P̃^N is tight if and only if P^N is tight, and so it suffices to show that P^N is tight. By Lemma 2.4, it suffices to verify the following two conditions for all i ∈ ⟦1, k−1⟧, r > 0, and ε > 0:

(4.5) lim_{a→∞} limsup_{N→∞} P( |f_i^N(0)| ≥ a ) = 0,

(4.6) lim_{δ→0} limsup_{N→∞} P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} |f_i^N(x) − f_i^N(y)| ≥ ε ) = 0.

For the sake of clarity, we will prove these conditions in several steps.
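Condition (4.6) controls the modulus of continuity of the scaled curves. Two elementary features drive the chain of inequalities (4.10) proved below: the sup over pairs with |x − y| ≤ δ is monotone in δ, and the walk's increments are compared against the slope-p line. A minimal sketch for a single free walk (our own illustration, not the paper's code):

```python
import random

def centered_walk(length, p, seed=0):
    """A Bernoulli(p) walk recentered by the slope-p line: the list
    [S_t - p*t for t = 0..length], the quantity controlled in (4.8)."""
    rng = random.Random(seed)
    s, out = 0, [0.0]
    for t in range(1, length + 1):
        s += 1 if rng.random() < p else 0
        out.append(s - p * t)
    return out

def modulus(path, window):
    """sup |path[x] - path[y]| over integer times with 0 < y - x <= window."""
    return max(abs(path[x] - path[y])
               for x in range(len(path))
               for y in range(x + 1, min(x + 1 + window, len(path))))
```

Shrinking the window can only shrink the modulus, which is the elementary inclusion used when passing from the sup over [−r, r] to the event A_δ^N below.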
Step 1.
In this step we prove (4.5). Let ε > 0 be given. By Lemmas 4.2 and 4.3 we can find N₁, N₂ and R₁, R₂ such that for N ≥ max(N₁, N₂)

P( sup_{s ∈ [−t₃, t₃]} [ L_1^N(s) − ps ] ≥ R₁ N^{α/2} ) < ε/2,  P( inf_{s ∈ [−t₃, t₃]} [ L_{k−1}^N(s) − ps ] ≤ −R₂ N^{α/2} ) < ε/2.

In particular, if we set R = max(R₁, R₂) and utilize the fact that L_1^N(0) ≥ ⋯ ≥ L_{k−1}^N(0), we conclude that for any i ∈ ⟦1, k−1⟧ we have

P( |L_i^N(0)| ≥ R N^{α/2} ) ≤ P( L_1^N(0) ≥ R₁ N^{α/2} ) + P( L_{k−1}^N(0) ≤ −R₂ N^{α/2} ) < ε,

which implies (4.5).

Step 2.
In this step we prove (4.6). In the sequel we fix r, ε > 0 and i ∈ ⟦1, k−1⟧. To prove (4.6) it suffices to show that for any η > 0 there exist δ > 0 and N₀ such that N ≥ N₀ implies

(4.7) P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} |f_i^N(x) − f_i^N(y)| ≥ ε ) < η.

For δ > 0 we define the event

(4.8) A_δ^N = { sup_{x,y ∈ [−t₁,t₁], |x−y| ≤ δN^α} | L_i^N(x) − L_i^N(y) − p(x−y) | ≥ N^{α/2} ε/2 },

where we recall that t₁ = ⌊(r+1)N^α⌋ from (4.1). We claim that there exist δ₀ > 0 and N₀ ∈ ℕ such that for δ ∈ (0, δ₀] and N ≥ N₀ we have

(4.9) P( A_δ^N ) < η.

We prove (4.9) in the steps below. Here we assume its validity and conclude the proof of (4.7). Observe that if δ ∈ ( 0, min( δ₀, ε·(8λr)^{−1} ) ), where λ is as in the statement of the theorem, we have the following tower of inequalities:

P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} |f_i^N(x) − f_i^N(y)| ≥ ε )
= P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} | N^{−α/2}( L_i^N(xN^α) − L_i^N(yN^α) ) − p(x−y)N^{α/2} + λ(x² − y²) | ≥ ε )
≤ P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} N^{−α/2} | L_i^N(xN^α) − L_i^N(yN^α) − p(x−y)N^α | + 2λrδ ≥ ε )
≤ P( sup_{x,y ∈ [−r,r], |x−y| ≤ δ} | L_i^N(xN^α) − L_i^N(yN^α) − p(x−y)N^α | ≥ N^{α/2} ε/2 )
≤ P( A_δ^N ) < η.
(4.10)

In (4.10) the first equality follows from the definition of f_i^N, and the inequality on the second line follows from the inequality |x² − y²| ≤ 2rδ, which holds for all x, y ∈ [−r, r] such that |x − y| ≤ δ. The inequality on the third line of (4.10) follows from our assumption that δ < ε·(8λr)^{−1}, which gives 2λrδ < ε/4, and the first inequality on the last line follows from the definition of A_δ^N in (4.8) and the fact that t₁ ≥ rN^α. The last inequality follows from our assumption that δ ≤ δ₀ and (4.9). In view of (4.10) we conclude (4.7).

Step 3.
In this step we prove (4.9), and we fix η > 0 in the sequel. For δ₁, M₁ > 0 and N ∈ ℕ we define the events

E₁ = { max_{1≤j≤k−1} | L_j^N(±t₁) ∓ pt₁ | ≤ M₁ N^{α/2} },  E₂ = { Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) > δ₁ },  (4.11)

where we used the same notation as in Proposition 4.1 (in particular, x⃗ = ( L_1^N(−t₁), …, L_{k−1}^N(−t₁) ) and y⃗ = ( L_1^N(t₁), …, L_{k−1}^N(t₁) )). Combining Lemmas 4.2 and 4.3 and Proposition 4.1, we know that we can find δ₁ > 0 sufficiently small, M₁ sufficiently large and Ñ ∈ ℕ such that for N ≥ Ñ we know

(4.12) P( E₁^c ∪ E₂^c ) < η/2.

We claim that we can find δ₀ > 0 and N₀ ≥ Ñ such that for N ≥ N₀ and δ ∈ (0, δ₀) we have

(4.13) P( A_δ^N ∩ E₁ ∩ E₂ ) < η/2.

Since

P( A_δ^N ) = P( A_δ^N ∩ E₁ ∩ E₂ ) + P( A_δ^N ∩ (E₁^c ∪ E₂^c) ) ≤ P( A_δ^N ∩ E₁ ∩ E₂ ) + P( E₁^c ∪ E₂^c ),

we see that (4.12) and (4.13) together imply (4.9).

Step 4.
In this step we prove (4.13). We define the σ-algebra

F = σ( L_k^N⟦−t₁, t₁⟧, L_1^N(±t₁), L_2^N(±t₁), …, L_{k−1}^N(±t₁) ).

Clearly E₁, E₂ ∈ F, so the indicator random variables 1{E₁} and 1{E₂} are F-measurable. It follows from the tower property of conditional expectation that

(4.14) P( A_δ^N ∩ E₁ ∩ E₂ ) = E[ 1{A_δ^N} 1{E₁} 1{E₂} ] = E[ 1{E₁} 1{E₂} E[ 1{A_δ^N} | F ] ].

By the Schur Gibbs property (see Definition 2.17), we know that P-almost surely

(4.15) E[ 1{A_δ^N} | F ] = E^{−t₁,t₁,x⃗,y⃗,∞,L_k^N⟦−t₁,t₁⟧}_{avoid,Ber}[ 1{A_δ^N} ].

We now observe that the Radon–Nikodym derivative of P^{−t₁,t₁,x⃗,y⃗,∞,L_k^N⟦−t₁,t₁⟧}_{avoid,Ber} with respect to P^{−t₁,t₁,x⃗,y⃗}_{Ber} is given by

(4.16) d P^{−t₁,t₁,x⃗,y⃗,∞,L_k^N⟦−t₁,t₁⟧}_{avoid,Ber} / d P^{−t₁,t₁,x⃗,y⃗}_{Ber} ( Q_1, …, Q_{k−1} ) = 1{ Q_1 ≥ ⋯ ≥ Q_{k−1} ≥ Q_k } / Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ),

where Q = (Q_1, …, Q_{k−1}) is P^{−t₁,t₁,x⃗,y⃗}_{Ber}-distributed and Q_k = L_k^N⟦−t₁, t₁⟧. To see this, note that by Definition 2.15 we have for any set A ⊂ ∏_{i=1}^{k−1} Ω(−t₁, t₁, x_i, y_i) that

P^{−t₁,t₁,x⃗,y⃗,∞,L_k^N⟦−t₁,t₁⟧}_{avoid,Ber}(A) = P^{−t₁,t₁,x⃗,y⃗}_{Ber}( A ∩ { Q_1 ≥ ⋯ ≥ Q_{k−1} ≥ Q_k } ) / P^{−t₁,t₁,x⃗,y⃗}_{Ber}( Q_1 ≥ ⋯ ≥ Q_{k−1} ≥ Q_k )
= E^{−t₁,t₁,x⃗,y⃗}_{Ber}[ 1{A} · 1{ Q_1 ≥ ⋯ ≥ Q_{k−1} ≥ Q_k } ] / Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ )
= ∫_A 1{ Q_1 ≥ ⋯ ≥ Q_{k−1} ≥ Q_k } / Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) d P^{−t₁,t₁,x⃗,y⃗}_{Ber}.
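The identity (4.16) simply says that conditioning the free bridge measure on the ordering event re-weights it by the indicator of that event divided by its total mass Z. For Bernoulli bridges this can be verified by exhaustive enumeration on a tiny example (a self-contained sketch in our own notation, not code from the paper):

```python
from itertools import combinations

def bridges(T, z):
    """All up-right paths on {0,...,T} from 0 to z, as height tuples."""
    out = []
    for ups in combinations(range(T), z):
        path, h = [0], 0
        for t in range(T):
            h += 1 if t in ups else 0
            path.append(h)
        out.append(tuple(path))
    return out

def avoid_probability_two_ways(T, x, y, floor):
    """P_avoid(A) for two bridges, computed (a) directly from the
    conditioned measure and (b) by reweighting the free measure with
    1{ordered}/Z as in (4.16).  A = {top curve at time T//2 >= its mean}."""
    pairs = [(tuple(x[0] + v for v in q1), tuple(x[1] + v for v in q2))
             for q1 in bridges(T, y[0] - x[0])
             for q2 in bridges(T, y[1] - x[1])]
    ordered = lambda q1, q2: all(q1[t] >= q2[t] >= floor[t] for t in range(T + 1))
    in_A = lambda q1, q2: q1[T // 2] >= (x[0] + y[0]) / 2
    n = len(pairs)
    Z = sum(ordered(*q) for q in pairs) / n          # acceptance probability
    direct = (sum(ordered(*q) and in_A(*q) for q in pairs)
              / sum(ordered(*q) for q in pairs))
    reweighted = sum(in_A(*q) * ordered(*q) / Z for q in pairs) / n
    return direct, reweighted
```

The two computations agree exactly, which is the content of (4.16) in this discrete setting.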
It follows from (4.14), (4.15), (4.16), and the definition of E₂ in (4.11) that

P( A_δ^N ∩ E₁ ∩ E₂ ) = E[ 1{E₁} 1{E₂} E^{−t₁,t₁,x⃗,y⃗}_{Ber}[ 1{B_δ^N} · 1{ Q_1 ≥ ⋯ ≥ Q_k } / Z( −t₁, t₁, x⃗, y⃗, ∞, L_k^N⟦−t₁, t₁⟧ ) ] ]
≤ E[ 1{E₁} 1{E₂} E^{−t₁,t₁,x⃗,y⃗}_{Ber}[ 1{B_δ^N} / δ₁ ] ] ≤ δ₁^{−1} E[ 1{E₁} · P^{−t₁,t₁,x⃗,y⃗}_{Ber}( B_δ^N ) ],   (4.17)

where δ₁ is as in (4.11), and

B_δ^N = { sup_{x,y ∈ [−t₁,t₁], |x−y| ≤ δN^α} | Q_i(x) − Q_i(y) − p(x−y) | ≥ N^{α/2} ε/2 }.

Notice that under P^{−t₁,t₁,x⃗,y⃗}_{Ber} the law of Q_i is precisely P^{−t₁,t₁,x_i,y_i}_{Ber}, and so we conclude that

P^{−t₁,t₁,x⃗,y⃗}_{Ber}( B_δ^N ) = P^{0,2t₁,0,y_i−x_i}_{Ber}( sup_{x,y ∈ [0,2t₁], |x−y| ≤ δN^α} | ℓ(x) − ℓ(y) − p(x−y) | ≥ N^{α/2} ε/2 ),  (4.18)

where ℓ has law P^{0,2t₁,0,y_i−x_i}_{Ber} (note that in (4.18) we implicitly translated the path ℓ to the right by t₁ and up by −x_i, which does not affect the probability in question). Since on the event E₁ we know that | y_i − x_i − 2pt₁ | ≤ 2M₁N^{α/2}, we conclude from Lemma 3.12 that we can find N₂ and δ₀ > 0, depending on M₁, ε, r, α, such that for N ≥ N₂ and δ ∈ (0, δ₀) we have

1{E₁} · P^{0,2t₁,0,y_i−x_i}_{Ber}( sup_{x,y ∈ [0,2t₁], |x−y| ≤ δN^α} | ℓ(x) − ℓ(y) − p(x−y) | ≥ N^{α/2} ε/2 ) < δ₁ η / 2.  (4.19)

Combining (4.17), (4.18) and (4.19) we conclude (4.13), and hence statement (i) of the theorem.

4.3. Proof of Theorem 2.26 (ii).
In this section we fix a subsequential limit L^∞ = (f̃_1^∞, …, f̃_{k−1}^∞) of the sequence P̃^N as in the statement of Theorem 2.26, and we prove that L^∞ possesses the partial Brownian Gibbs property. Our approach is similar to that in [12, Sections 5.1 and 5.2]. We first give a definition of measures on scaled free and avoiding Bernoulli random walks. These measures will appear when we apply the Schur Gibbs property to the scaled line ensembles { f̃_i^N }_{i=1}^{k−1}.

Definition 4.5.
Let a, b ∈ N^{−α}ℤ with a < b, and let x, y ∈ N^{−α/2}ℤ satisfy 0 ≤ y − x ≤ (b − a)N^{α/2}. Let ℓ^{(T,z)} denote a random variable with law P^{0,T,0,z}_{Ber} as before Definition 2.15. We define P^{a,b,x,y}_{free,N} to be the law of the C([a,b])-valued random variable Y given by

Y(t) = ( x + N^{−α/2}[ ℓ^{((b−a)N^α, (y−x)N^{α/2})}( (t−a)N^α ) − ptN^α ] ) / √(p(1−p)),  t ∈ [a, b].

Now for i ∈ ⟦1, k⟧, let ℓ^{(T,z),i} denote i.i.d. random variables with laws P^{0,T,0,z}_{Ber}. Let x⃗, y⃗ ∈ (N^{−α/2}ℤ)^k satisfy 0 ≤ y_i − x_i ≤ (b − a)N^{α/2} for i ∈ ⟦1, k⟧. We define the ⟦1, k⟧-indexed line ensemble Y^N by

Y_i^N(t) = ( x_i + N^{−α/2}[ ℓ^{((b−a)N^α, (y_i−x_i)N^{α/2}), i}( (t−a)N^α ) − ptN^α ] ) / √(p(1−p)),  i ∈ ⟦1, k⟧, t ∈ [a, b].

We let P^{a,b,x⃗,y⃗}_{free,N} denote the law of Y^N. Suppose x⃗, y⃗ ∈ (N^{−α/2}ℤ)^k ∩ W_k, where

W_k = { x⃗ = (x_1, …, x_k) ∈ ℝ^k : x_1 ≥ x_2 ≥ ⋯ ≥ x_k }.

Suppose further that f : [a, b] → (−∞, ∞] and g : [a, b] → [−∞, ∞) are continuous functions. We define the probability measure P^{a,b,x⃗,y⃗,f,g}_{avoid,N} to be P^{a,b,x⃗,y⃗}_{free,N} conditioned on the event

E = { f(r) ≥ Y_1^N(r) ≥ ⋯ ≥ Y_k^N(r) ≥ g(r) for r ∈ [a, b] }.

This measure is well-defined if E is a nonempty event.

Next, we state two lemmas whose proofs we give in Section 7.5. The first lemma proves weak convergence of the scaled avoiding random walk measures in Definition 4.5. It states roughly that if the boundary data of these measures converge, then the measures converge weakly to the law of avoiding Brownian bridges with the limiting boundary data, as in Definition 2.7.

Lemma 4.6.
Fix k ∈ ℕ and a, b ∈ ℝ with a < b, and let f : [a−1, b+1] → (−∞, ∞] and g : [a−1, b+1] → [−∞, ∞) be continuous functions such that f(t) > g(t) for all t ∈ [a−1, b+1]. Let x⃗, y⃗ ∈ W_k^◦ be such that f(a) > x_1, f(b) > y_1, g(a) < x_k, and g(b) < y_k. Let a_N = ⌊aN^α⌋N^{−α} and b_N = ⌈bN^α⌉N^{−α}, and let f^N : [a−1, b+1] → (−∞, ∞] and g^N : [a−1, b+1] → [−∞, ∞) be continuous functions such that f^N → f and g^N → g uniformly on [a−1, b+1]. If f ≡ ∞ the last statement means that f^N ≡ ∞ for all large enough N, and if g ≡ −∞ the latter means that g^N ≡ −∞ for all large enough N.

Lastly, let x⃗^N, y⃗^N ∈ (N^{−α/2}ℤ)^k ∩ W_k, write x̃_i^N = ( x_i^N − pa_N N^{α/2} ) / √(p(1−p)), ỹ_i^N = ( y_i^N − pb_N N^{α/2} ) / √(p(1−p)), and suppose that x̃_i^N → x_i and ỹ_i^N → y_i as N → ∞ for each i ∈ ⟦1, k⟧. Then there exists N₀ ∈ ℕ so that P^{a_N,b_N,x⃗^N,y⃗^N,f^N,g^N}_{avoid,N} is well-defined for N ≥ N₀. Moreover, if Y^N have laws P^{a_N,b_N,x⃗^N,y⃗^N,f^N,g^N}_{avoid,N} and Z^N = Y^N |_{⟦1,k⟧ × [a,b]}, i.e. Z^N is a sequence of random variables on C(⟦1,k⟧ × [a,b]) obtained by projecting Y^N to ⟦1,k⟧ × [a,b], then the law of Z^N converges weakly to P^{a,b,x⃗,y⃗,f,g}_{avoid} as N → ∞.

The next lemma shows that at any given point, the values of the k−1 curves in L^∞ are almost surely distinct, so that Lemma 4.6 may be applied.

Lemma 4.7.
Let L^∞ be as in the beginning of this section. Then for any s ∈ ℝ we have L^∞(s) = ( f̃_1^∞(s), …, f̃_{k−1}^∞(s) ) ∈ W_{k−1}^◦, P-a.s.

Using these two lemmas, whose proofs are postponed, we now give the proof of Theorem 2.26 (ii).
Proof. (of Theorem 2.26 (ii)) We will write
Σ = ⟦1, k−1⟧. Let us write Y^N = (f̃_1^N, …, f̃_{k−1}^N), where we recall that f̃_i^N(s) = N^{−α/2}( L_i^N(sN^α) − psN^α ) / √(p(1−p)). Since L^∞ is a weak subsequential limit of Y^N, by possibly passing to a subsequence we may assume that Y^N ⇒ L^∞; we will still call the subsequence Y^N so as not to overburden the notation. By the Skorohod representation theorem [2, Theorem 6.7], we can also assume that Y^N and L^∞ are all defined on the same probability space with measure P and that the convergence happens P-almost surely. Here we are implicitly using Lemma 2.2, from which we know that the random variables Y^N and L^∞ take values in a Polish space, so that the Skorohod representation theorem is applicable.

Fix a set K = ⟦k₁, k₂⟧ ⊆ ⟦1, k−1⟧ and a, b ∈ ℝ with a < b. We also fix a bounded Borel-measurable function F : C(K × [a,b]) → ℝ. In view of Definition 2.10 we need to prove that P-a.s.

(4.20) E[ F( L^∞ |_{K × [a,b]} ) | F_ext( K × (a,b) ) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q) ],

where x⃗ = ( f̃_{k₁}^∞(a), …, f̃_{k₂}^∞(a) ), y⃗ = ( f̃_{k₁}^∞(b), …, f̃_{k₂}^∞(b) ), f = f̃_{k₁−1}^∞ (with f̃_0^∞ = +∞), g = f̃_{k₂+1}^∞, the σ-algebra F_ext( K × (a,b) ) is as in Definition 2.8, and Q has law P^{a,b,x⃗,y⃗,f,g}_{avoid}. We prove (4.20) in two steps.

Step 1.
Fix m ∈ ℕ, n_1, …, n_m ∈ Σ, t_1, …, t_m ∈ ℝ, and bounded continuous functions h_1, …, h_m : ℝ → ℝ. Define S = { i ∈ ⟦1, m⟧ : n_i ∈ K, t_i ∈ [a,b] }. In this step we prove that

(4.21) E[ ∏_{i=1}^m h_i( f̃_{n_i}^∞(t_i) ) ] = E[ ∏_{s ∉ S} h_s( f̃_{n_s}^∞(t_s) ) · E^{a,b,x⃗,y⃗,f,g}_{avoid}[ ∏_{s ∈ S} h_s( Q_{n_s − k₁ + 1}(t_s) ) ] ],

where Q denotes a random variable with law P^{a,b,x⃗,y⃗,f,g}_{avoid}. By assumption, we have

(4.22) lim_{N→∞} E[ ∏_{i=1}^m h_i( f̃_{n_i}^N(t_i) ) ] = E[ ∏_{i=1}^m h_i( f̃_{n_i}^∞(t_i) ) ].

We define the sequences a_N = ⌊aN^α⌋N^{−α}, b_N = ⌈bN^α⌉N^{−α}, x⃗^N = N^{−α/2}( L_{k₁}^N(a_N N^α), …, L_{k₂}^N(a_N N^α) ), y⃗^N = N^{−α/2}( L_{k₁}^N(b_N N^α), …, L_{k₂}^N(b_N N^α) ), f^N = f̃_{k₁−1}^N (with f̃_0^N = +∞), g^N = f̃_{k₂+1}^N. Since a_N → a and b_N → b, we may choose N₀ sufficiently large so that if N ≥ N₀, then t_s < a_N or t_s > b_N for all s ∉ S with n_s ∈ K. Since the line ensemble (L_1^N, …, L_{k−1}^N) in the definition of Y^N satisfies the Schur Gibbs property (see Definition 2.17), we see from Definition 4.5 that the law of Y^N |_{K × [a_N, b_N]} conditioned on the σ-algebra F = σ( f̃_{k₁−1}^N, f̃_{k₂+1}^N, f̃_{k₁}^N(a_N), f̃_{k₁}^N(b_N), …, f̃_{k₂}^N(a_N), f̃_{k₂}^N(b_N) ) is (up to a reindexing of the curves) precisely P^{a_N,b_N,x⃗^N,y⃗^N,f^N,g^N}_{avoid,N}. Therefore, writing Z^N for a random variable with this law, we have

(4.23) E[ ∏_{i=1}^m h_i( Y_{n_i}^N(t_i) ) ] = E[ ∏_{s ∉ S} h_s( Y_{n_s}^N(t_s) ) · E^{a_N,b_N,x⃗^N,y⃗^N,f^N,g^N}_{avoid,N}[ ∏_{s ∈ S} h_s( Z_{n_s − k₁ + 1}^N(t_s) ) ] ].

Now by Lemma 4.7, we have P-a.s.
that f̃_{k₁−1}^∞(a) > f̃_{k₁}^∞(a) > ⋯ > f̃_{k₂}^∞(a) > f̃_{k₂+1}^∞(a) and also f̃_{k₁−1}^∞(b) > f̃_{k₁}^∞(b) > ⋯ > f̃_{k₂}^∞(b) > f̃_{k₂+1}^∞(b). In addition, we have by part (i) of Theorem 2.26 that P-almost surely f^N → f = f̃_{k₁−1}^∞ and g^N → g = f̃_{k₂+1}^∞ uniformly on [a−1, b+1] ⊇ [a_N, b_N], and ( x_i^N − pa_N N^{α/2} ) / √(p(1−p)) → x_i and ( y_i^N − pb_N N^{α/2} ) / √(p(1−p)) → y_i for i ∈ ⟦1, k₂ − k₁ + 1⟧. It follows from Lemma 4.6 that

(4.24) lim_{N→∞} E^{a_N,b_N,x⃗^N,y⃗^N,f^N,g^N}_{avoid,N}[ ∏_{s ∈ S} h_s( Z_{n_s − k₁ + 1}^N(t_s) ) ] = E^{a,b,x⃗,y⃗,f,g}_{avoid}[ ∏_{s ∈ S} h_s( Q_{n_s − k₁ + 1}(t_s) ) ].

Lastly, the continuity of the h_i implies that

(4.25) lim_{N→∞} ∏_{s ∉ S} h_s( Y_{n_s}^N(t_s) ) = ∏_{s ∉ S} h_s( f̃_{n_s}^∞(t_s) ).

Combining (4.22), (4.23), (4.24), and (4.25) with the bounded convergence theorem proves (4.21).
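The convergence behind Lemma 4.6 can be sanity-checked in the simplest case k = 1 with no boundary functions: after the centering and rescaling of Definition 4.5, a Bernoulli bridge on the time interval [0, 1] should behave like a standard Brownian bridge, whose variance at time t is t(1−t). A Monte Carlo sketch (our own illustration, with the constants matching the scaling in Definition 4.5):

```python
import random

def scaled_bridge_midpoint(T, p, rng):
    """Sample a uniform Bernoulli bridge of length T from 0 to p*T (so the
    scaled endpoints vanish) and return its midpoint value after centering
    by the slope-p line and dividing by sqrt(p(1-p))*sqrt(T)."""
    z = int(p * T)
    steps = [1] * z + [0] * (T - z)
    rng.shuffle(steps)
    mid = sum(steps[: T // 2])          # walk value at time T/2
    return (mid - p * (T // 2)) / (((p * (1 - p)) ** 0.5) * T ** 0.5)

def midpoint_variance(T, p, trials, seed=0):
    rng = random.Random(seed)
    vals = [scaled_bridge_midpoint(T, p, rng) for _ in range(trials)]
    mean = sum(vals) / trials
    return sum((v - mean) ** 2 for v in vals) / trials
```

The observed variance is close to 1/4, the Brownian bridge value at the midpoint of [0, 1], which is the kind of finite-dimensional agreement that Lemma 4.6 upgrades to full weak convergence with the avoidance conditioning included.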
Step 2.
In this step we use (4.21) to prove (4.20). The argument below is a standard monotone class argument. For n ∈ ℕ we define the piecewise linear functions

χ_n(x, r) = 0 for x > r + 1/n;  χ_n(x, r) = 1 − n(x − r) for x ∈ [r, r + 1/n];  χ_n(x, r) = 1 for x < r.

We fix m₁, m₂ ∈ ℕ, n_1, …, n_{m₁}, n′_1, …, n′_{m₂} ∈ Σ, and t_1, …, t_{m₁}, t′_1, …, t′_{m₂} ∈ ℝ, such that (n_i, t_i) ∉ K × [a,b] and (n′_i, t′_i) ∈ K × [a,b] for all i. Then (4.21) implies that

E[ ∏_{i=1}^{m₁} χ_n( f̃_{n_i}^∞(t_i), a_i ) ∏_{i=1}^{m₂} χ_n( f̃_{n′_i}^∞(t′_i), b_i ) ] = E[ ∏_{i=1}^{m₁} χ_n( f̃_{n_i}^∞(t_i), a_i ) E^{a,b,x⃗,y⃗,f,g}_{avoid}[ ∏_{i=1}^{m₂} χ_n( Q_{n′_i − k₁ + 1}(t′_i), b_i ) ] ].

Letting n → ∞, we have χ_n(x, r) → χ(x, r) = 1{x ≤ r}, and the bounded convergence theorem gives

E[ ∏_{i=1}^{m₁} χ( f̃_{n_i}^∞(t_i), a_i ) ∏_{i=1}^{m₂} χ( f̃_{n′_i}^∞(t′_i), b_i ) ] = E[ ∏_{i=1}^{m₁} χ( f̃_{n_i}^∞(t_i), a_i ) E^{a,b,x⃗,y⃗,f,g}_{avoid}[ ∏_{i=1}^{m₂} χ( Q_{n′_i − k₁ + 1}(t′_i), b_i ) ] ].

Let H denote the space of bounded Borel-measurable functions H : C(K × [a,b]) → ℝ satisfying

(4.26) E[ ∏_{i=1}^{m₁} χ( f̃_{n_i}^∞(t_i), a_i ) H( L^∞ |_{K × [a,b]} ) ] = E[ ∏_{i=1}^{m₁} χ( f̃_{n_i}^∞(t_i), a_i ) E^{a,b,x⃗,y⃗,f,g}_{avoid}[ H(Q) ] ].

The above shows that H contains all functions 1{A} for sets A contained in the π-system A consisting of sets of the form

{ h ∈ C(K × [a,b]) : h(n′_i, t′_i) ≤ b_i for i ∈ ⟦1, m₂⟧ }.

We note that H is closed under linear combinations simply by linearity of expectation, and if H_n ∈ H are nonnegative bounded measurable functions converging monotonically to a bounded function H, then H ∈ H by the monotone convergence theorem. Thus by the monotone class theorem [15, Theorem 5.2.2], H contains all bounded σ(A)-measurable functions.
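The mollifiers χ_n are easy to write down concretely, and one can check directly the two properties the argument uses: each χ_n is continuous and [0,1]-valued, and χ_n(·, r) decreases pointwise to the indicator 1{x ≤ r} as n → ∞ (a small sketch in our own notation):

```python
def chi(n, x, r):
    """Piecewise linear approximation of the indicator 1{x <= r}:
    equals 1 for x < r, interpolates linearly down to 0 on [r, r + 1/n]."""
    if x > r + 1.0 / n:
        return 0.0
    if x < r:
        return 1.0
    return 1.0 - n * (x - r)

def chi_limit(x, r):
    """Pointwise limit of chi(n, x, r) as n grows."""
    return 1.0 if x <= r else 0.0
```

Monotone pointwise convergence of these continuous functions is exactly what lets the bounded convergence theorem transport (4.21) from the χ_n to the indicators χ.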
Since the finite-dimensional sets in A generate the full Borel σ-algebra C_K (see for instance [12, Lemma 3.1]), we have in particular that F ∈ H.

Now let B denote the collection of sets B ∈ F_ext( K × (a,b) ) such that

(4.27) E[ 1{B} · F( L^∞ |_{K × [a,b]} ) ] = E[ 1{B} · E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q) ] ].

We observe that B is a λ-system. Indeed, since (4.26) holds for H = F, taking a_i → ∞ and applying the bounded convergence theorem shows that (4.27) holds with 1{B} = 1. Thus if B ∈ B then B^c ∈ B, since 1{B^c} = 1 − 1{B}. If B_i ∈ B, i ∈ ℕ, are pairwise disjoint and B = ⋃_i B_i, then 1{B} = Σ_i 1{B_i}, and it follows from the monotone convergence theorem that B ∈ B. Moreover, (4.26) with H = F implies that B contains the π-system P of sets of the form

{ h ∈ C(Σ × ℝ) : h(n_i, t_i) ≤ a_i for i ∈ ⟦1, m₁⟧, where (n_i, t_i) ∉ K × (a,b) }.

By the π-λ theorem [15, Theorem 2.1.6] it follows that B contains σ(P) = F_ext( K × (a,b) ). Thus (4.27) holds for all B ∈ F_ext( K × (a,b) ). It is proven in [12, Lemma 3.4] that E^{a,b,x⃗,y⃗,f,g}_{avoid}[ F(Q) ] is an F_ext( K × (a,b) )-measurable function. Therefore (4.20) follows from (4.27) by the definition of conditional expectation. This suffices for the proof. □

5. Bounding the max and min
In this section we prove Lemmas 4.2 and 4.3, and we assume the same notation as in the statements of these lemmas. In particular, we assume that k ∈ ℕ with k ≥ 2, p ∈ (0,1), and α, λ > 0 are all fixed and that

{ L^N = (L_1^N, L_2^N, …, L_k^N) }_{N=1}^∞

is an (α, p, λ)-good sequence of ⟦1, k⟧-indexed Bernoulli line ensembles as in Definition 2.24, all defined on a probability space with measure P. The proof of Lemma 4.2 is given in Section 5.1, and Lemma 4.3 is proved in Section 5.2.

5.1. Proof of Lemma 4.2.
Our proof of Lemma 4.2 is similar to that of [6, Lemma 5.2]. Forclarity we split the proof into three steps. In the first step we introduce some notation that will berequired in the proof of the lemma, which is presented in Steps 2 and 3.
Step 1.
We write s₁ = ⌈r + 4⌉N^α and s₀ = ⌊r + 3⌋N^α, so that s₀ ≤ t₃ ≤ s₁, and assume that N is large enough so that ψ(N)N^α from Definition 2.24 is at least s₁. Notice that such a choice is possible by our assumption that L^N is an (α, p, λ)-good sequence; in particular, we know that the L_i^N are defined at ±s₁ for i ∈ ⟦1, k⟧. We define the events

E(M₁) = { | L_1^N(−s₁) + ps₁ | > M₁N^{α/2} },  F(M₁) = { L_1^N(−s₀) > −ps₀ + M₁N^{α/2} },

G(M₁) = { sup_{s ∈ [0, s₁]} [ L_1^N(s) − ps ] ≥ (6r+22)(2r+10)^{1/2}(M₁+1)N^{α/2} }.

If ε > 0 is as in the statement of the lemma, we note by (2.7) that we can find M₁ and Ñ₁ sufficiently large so that if N ≥ Ñ₁ we have

(5.1) P( E(M₁) ) < ε/8 and P( F(M₁) ) < ε/8.

In the remainder of this step we show that the event G(M₁) ∖ E(M₁) can be written as a countable disjoint union of certain events, i.e. we show that

(5.2) ⨆_{(a,b,s,ℓ_top,ℓ_bot) ∈ D(M₁)} E(a, b, s, ℓ_top, ℓ_bot) = G(M₁) ∖ E(M₁),

where the sets E(a, b, s, ℓ_top, ℓ_bot) and D(M₁) are described below.

For a, b, z₁, z₂, z₃ ∈ ℤ with z₁ ≤ a, z₂ ≤ b, s ∈ ⟦0, s₁⟧, ℓ_bot ∈ Ω(−s₁, s, z₁, z₂) and ℓ_top ∈ Ω(s, s₁, b, z₃), we define E(a, b, s, ℓ_top, ℓ_bot) to be the event that L_1^N(−s₁) = a, L_1^N(s) = b, L_1^N agrees with ℓ_top on ⟦s, s₁⟧, and L_2^N agrees with ℓ_bot on ⟦−s₁, s⟧.
Let D(M₁) be the set of tuples (a, b, s, ℓ_top, ℓ_bot) satisfying

(1) 0 ≤ s ≤ s₁,
(2) 0 ≤ b − a ≤ s + s₁, |a + ps₁| ≤ M₁N^{α/2}, and b − ps ≥ (6r+22)(2r+10)^{1/2}(M₁+1)N^{α/2},
(3) z₁ ≤ a, z₂ ≤ b, and ℓ_bot ∈ Ω(−s₁, s, z₁, z₂),
(4) b ≤ z₃ ≤ b + (s₁ − s), and ℓ_top ∈ Ω(s, s₁, b, z₃),
(5) if s < s′ ≤ s₁, then ℓ_top(s′) − ps′ < (6r+22)(2r+10)^{1/2}(M₁+1)N^{α/2}.

It is clear that D(M₁) is countable. The five conditions above together imply that

⋃_{(a,b,s,ℓ_top,ℓ_bot) ∈ D(M₁)} E(a, b, s, ℓ_top, ℓ_bot) = G(M₁) ∖ E(M₁),

and what remains to be shown to prove (5.2) is that the events E(a, b, s, ℓ_top, ℓ_bot) are pairwise disjoint. On the intersection of E(a, b, s, ℓ_top, ℓ_bot) and E(ã, b̃, s̃, ℓ̃_top, ℓ̃_bot) we must have ã = L_1^N(−s₁) = a, so that a = ã. Furthermore, we have by properties (2) and (5) that s ≥ s̃ and s̃ ≥ s, from which we conclude that s = s̃, and then we conclude b̃ = b, ℓ_top = ℓ̃_top, ℓ_bot = ℓ̃_bot. In summary, if E(a, b, s, ℓ_top, ℓ_bot) and E(ã, b̃, s̃, ℓ̃_top, ℓ̃_bot) have a non-trivial intersection then (a, b, s, ℓ_top, ℓ_bot) = (ã, b̃, s̃, ℓ̃_top, ℓ̃_bot), which proves (5.2).

Step 2.
In this step we prove that we can find an N₁ so that for N ≥ N₁

(5.3) P( sup_{s ∈ [0, t₃]} [ L_1^N(s) − ps ] ≥ (6r+22)(2r+10)^{1/2}(M₁+1)N^{α/2} ) ≤ P( G(M₁) ) < ε/2.

A similar argument, which we omit, proves the same inequality with [−t₃, 0] in place of [0, t₃], and then the statement of the lemma holds for all N ≥ N₁, with R = (6r+22)(2r+10)^{1/2}(M₁+1).

We claim that we can find Ñ₂ ∈ ℕ sufficiently large so that if N ≥ Ñ₂ and (a, b, s, ℓ_top, ℓ_bot) ∈ D(M₁) satisfies P( E(a, b, s, ℓ_top, ℓ_bot) ) > 0, then we have

(5.4) P^{−s₁,s,a,b,∞,ℓ_bot}_{avoid,Ber}( Q(−s₀) > −ps₀ + M₁N^{α/2} ) ≥ 1/3.

We will prove (5.4) in Step 3. For now we assume its validity and conclude the proof of (5.3). Let (a, b, s, ℓ_top, ℓ_bot) ∈ D(M₁) be such that P( E(a, b, s, ℓ_top, ℓ_bot) ) > 0. By the Schur Gibbs property, see Definition 2.17, we have for any ℓ ∈ Ω(−s₁, s, a, b) that

(5.5) P( L_1^N⟦−s₁, s⟧ = ℓ | E(a, b, s, ℓ_top, ℓ_bot) ) = P^{−s₁,s,a,b,∞,ℓ_bot}_{avoid,Ber}( Q = ℓ ),

where L_1^N⟦−s₁, s⟧ denotes the restriction of L_1^N to the set ⟦−s₁, s⟧, and Q denotes the random path with law P^{−s₁,s,a,b,∞,ℓ_bot}_{avoid,Ber}. Combining (5.4) and (5.5), we get for N ≥ Ñ₂

P( L_1^N(−s₀) > −ps₀ + M₁N^{α/2} | E(a, b, s, ℓ_top, ℓ_bot) ) = P^{−s₁,s,a,b,∞,ℓ_bot}_{avoid,Ber}( Q(−s₀) > −ps₀ + M₁N^{α/2} ) ≥ 1/3.
(5.6)

It follows from (5.6) that for N ≥ max(Ñ₁, Ñ₂) we have

ε/8 > P( F(M₁) ) ≥ Σ_{(a,b,s,ℓ_top,ℓ_bot) ∈ D(M₁), P(E(a,b,s,ℓ_top,ℓ_bot)) > 0} P( F(M₁) ∩ E(a, b, s, ℓ_top, ℓ_bot) )
= Σ P( L_1^N(−s₀) > −ps₀ + M₁N^{α/2} | E(a, b, s, ℓ_top, ℓ_bot) ) · P( E(a, b, s, ℓ_top, ℓ_bot) )
≥ Σ (1/3) · P( E(a, b, s, ℓ_top, ℓ_bot) ) = (1/3) · P( G(M₁) ∖ E(M₁) ),   (5.7)

where the sums are over the same index set and in the last equality we used (5.2). From (5.1) and (5.7) we have for N ≥ N₁ = max(Ñ₁, Ñ₂)

P( G(M₁) ) ≤ P( E(M₁) ) + P( G(M₁) ∖ E(M₁) ) < ε/8 + 3ε/8 = ε/2,

which proves (5.3).

Step 3.
In this step we prove (5.4); in the sequel we let (a, b, s, ℓ_top, ℓ_bot) ∈ D(M₁) be such that P( E(a, b, s, ℓ_top, ℓ_bot) ) > 0. We remark that the condition P( E(a, b, s, ℓ_top, ℓ_bot) ) > 0 implies that Ω_avoid(−s₁, s, a, b, ∞, ℓ_bot) is not empty. By Lemma 3.2 we know that

P^{−s₁,s,a,b,∞,ℓ_bot}_{avoid,Ber}( Q(−s₀) > −ps₀ + M₁N^{α/2} ) ≥ P^{−s₁,s,a,b}_{Ber}( ℓ(−s₀) > −ps₀ + M₁N^{α/2} ),

and so it suffices to show that

(5.8) P^{−s₁,s,a,b}_{Ber}( ℓ(−s₀) > −ps₀ + M₁N^{α/2} ) ≥ 1/3.

One directly observes that

P^{−s₁,s,a,b}_{Ber}( ℓ(−s₀) > −ps₀ + M₁N^{α/2} ) = P^{0,s+s₁,0,b−a}_{Ber}( ℓ(s₁ − s₀) + a > −ps₀ + M₁N^{α/2} ) ≥ P^{0,s+s₁,0,b−a}_{Ber}( ℓ(s₁ − s₀) > p(s₁ − s₀) + 2M₁N^{α/2} ),  (5.9)

where the inequality follows from the assumption in (2) that a + ps₁ ≥ −M₁N^{α/2}. Moreover, since b − ps ≥ (6r+22)(2r+10)^{1/2}(M₁+1)N^{α/2} and a + ps₁ ≤ M₁N^{α/2}, we have

b − a ≥ p(s + s₁) + (6r+21)(2r+10)^{1/2}(M₁+1)N^{α/2} ≥ p(s + s₁) + (6r+21)(M₁+1)(s + s₁)^{1/2}.

The second inequality follows since s + s₁ ≤ 2s₁ ≤ (2r+10)N^α. It follows from Lemma 3.8 (applied with the choices M₁ = 0 and M₂ = (6r+21)(M₁+1) in that lemma) that for sufficiently large N

(5.10) P^{0,s+s₁,0,b−a}_{Ber}( ℓ(s₁ − s₀) ≥ [(s₁ − s₀)/(s + s₁)]·[ p(s + s₁) + (6r+21)(M₁+1)(s + s₁)^{1/2} ] − (s + s₁)^{1/4} ) ≥ 1/3.

Note that (s₁ − s₀)/(s + s₁) ≥ N^α / ((2r+10)N^α) = 1/(2r+10), and so for all sufficiently large N we have

[(s₁ − s₀)/(s + s₁)]·[ p(s + s₁) + (6r+21)(M₁+1)(s + s₁)^{1/2} ] − (s + s₁)^{1/4} ≥ p(s₁ − s₀) + (6r+21)(M₁+1)(s + s₁)^{1/2}/(2r+10) − (s + s₁)^{1/4} > p(s₁ − s₀) + 2M₁N^{α/2}.  (5.11)
Combining (5.9), (5.10) and (5.11) we conclude that we can find Ñ₃ ∈ ℕ such that if N ≥ Ñ₃ we have (5.8). This suffices for the proof.

5.2. Proof of Lemma 4.3.
We begin by proving the following important lemma, which shows that it is unlikely that the curve L^N_{k−1} falls uniformly very low on a large interval.

Lemma 5.1.
Under the same conditions as in Lemma 4.3 the following holds. For any r, ε > 0 there exist R > r and N₀ ∈ ℕ such that for all N ≥ N₀

(5.12) P( sup_{x ∈ [r,R]} ( L^N_{k−1}(xN^α) − pxN^α ) ≤ −(λR² + φ(ε/16) + 1)N^{α/2} ) < ε,

where λ, φ are as in the definition of an (α, p, λ)-good sequence of line ensembles, see Definition 2.24. The same statement holds if [r,R] is replaced with [−R,−r], and the constants N₀, R depend on ε, r as well as the parameters α, p, λ, k and the functions φ, ψ from Definition 2.24.

Proof. Before we go into the proof we give an informal description of the main ideas. The key to this lemma is the parabolic shift implicit in the definition of an (α, p, λ)-good sequence. This shift requires the deviation of the top curve L^N_1 from the line of slope p to appear roughly parabolic. On the event in (5.12) the (k−1)-st curve dips very low uniformly on the interval [r,R], and we will argue that on this event the top k−2 curves essentially do not feel the presence of the (k−1)-st curve. After a careful analysis using the monotone coupling lemmas from Section 3.1, we will see that the latter statement implies that the curve L^N_1 behaves like a free bridge between its endpoints, which have been slightly raised. Consequently, we would expect the midpoint L^N_1(N^α(R+r)/2) to be close (on scale N^{α/2}) to [L^N_1(rN^α) + L^N_1(RN^α)]/2. However, with high probability [L^N_1(rN^α) + L^N_1(RN^α)]/2 lies much lower than the inverted parabola −λ(R+r)²N^{α/2}/4 (due to the concavity of the latter), and so by our assumption it is very unlikely for L^N_1(N^α(R+r)/2) to be near it. The latter would imply that the event in (5.12) is itself unlikely, since conditionally on it an unlikely event suddenly becomes likely.

We proceed to fill in the details of the above sketch in the following steps. In total there are six steps, and we will only prove the statement of the lemma for the interval [r,R], since the argument for [−R,−r] is very similar.

Step 1.
We begin by specifying the choice of R in the statement of the lemma, fixing some notation and making a few simplifying assumptions.

Fix r, ε > 0 as in the statement of the lemma. Note that for any R > r,

sup_{r ≤ x ≤ R} ( L^N_{k−1}(xN^α) − pxN^α ) ≥ sup_{⌈r⌉ ≤ x ≤ R} ( L^N_{k−1}(xN^α) − pxN^α ).

Thus by replacing r with ⌈r⌉ we can assume that r ∈ ℤ, which we do in the sequel. Notice that by our assumption that L^N is (α,p,λ)-good we know that (5.12) holds trivially if k = 2 (with the right side of (5.12) being any number greater than ε/16, and in particular ε), and so in the sequel we assume that k ≥ 3.

Define the constant

(5.13) C₁ = √( p(1−p) · log( 3 / (1 − (11/12)^{1/(k−2)}) ) ),

and R₀ > r sufficiently large so that for R ≥ R₀ and N ∈ ℕ we have

(5.14) λ(R − r)²/4 ≥ 2φ(ε/16) + 2 + k⌈C₁√(⌈RN^α⌉ − ⌊rN^α⌋)⌉N^{−α/2}.

We define R = ⌈R₀⌉ + 𝟙{⌈R₀⌉ + r is odd}, so that R ≥ R₀ and both R and the midpoint (R+r)/2 are integers. This specifies our choice of R, and for convenience we denote m = (R+r)/2.

In the following we always assume N is large enough so that ψ(N) > R; hence the L^N_i are defined at RN^α for 1 ≤ i ≤ k. We may do so by the second condition in the definition of an (α,p,λ)-good sequence (see Definition 2.24). With the choice of R as above we define the events

A = { L^N_1(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} },
B = { sup_{x ∈ [r,R]} ( L^N_{k−1}(xN^α) − pxN^α ) ≤ −[λR² + φ(ε/16) + 1]N^{α/2} }.   (5.15)

The goal of the lemma is to prove that we can find N₀ ∈ ℕ so that for all N ≥ N₀

(5.16) P(B) < ε,

which we accomplish in the steps below.

Step 2.
In this step we introduce some notation that will be used throughout the next steps. Let γ = ⌊rN^α⌋ and Γ = ⌈RN^α⌉. We also define the event

F = { sup_{s ∈ {γ,Γ}} | L^N_1(s) − ps + λs²N^{−3α/2} | < [φ(ε/16) + 1]N^{α/2} }.   (5.17)

In the remainder of this step we show that F ∩ B can be written as a countable disjoint union

(5.18) F ∩ B = ⊔_{(x⃗,y⃗,ℓ_bot) ∈ D} E(x⃗,y⃗,ℓ_bot),

where the sets E(x⃗,y⃗,ℓ_bot) and D are defined below.

For x⃗, y⃗ ∈ W_{k−2}, z₁, z₂ ∈ ℤ, and ℓ_bot ∈ Ω(γ,Γ,z₁,z₂), let E(x⃗,y⃗,ℓ_bot) denote the event that L^N_i(γ) = x_i and L^N_i(Γ) = y_i for 1 ≤ i ≤ k−2, and L^N_{k−1} agrees with ℓ_bot on ⟦γ,Γ⟧. Let D denote the set of triples (x⃗,y⃗,ℓ_bot) satisfying:
(1) 0 ≤ y_i − x_i ≤ Γ − γ for 1 ≤ i ≤ k−2;
(2) |x₁ − pγ + λγ²N^{−3α/2}| < [φ(ε/16)+1]N^{α/2} and |y₁ − pΓ + λΓ²N^{−3α/2}| < [φ(ε/16)+1]N^{α/2};
(3) z₁ ≤ x_{k−2}, z₂ ≤ y_{k−2}, and ℓ_bot ∈ Ω(γ,Γ,z₁,z₂);
(4) sup_{x ∈ [r,R]} [ ℓ_bot(xN^α) − pxN^α ] ≤ −[λR² + φ(ε/16) + 1]N^{α/2}.

It is clear that D is countable, the events E(x⃗,y⃗,ℓ_bot) are pairwise disjoint for different elements of D, and (5.18) is satisfied.

Step 3.
We claim that we can find Ñ₁ so that for N ≥ Ñ₁ we have

(5.19) P(A | E(x⃗,y⃗,ℓ_bot)) ≥ 1/4

for all (x⃗,y⃗,ℓ_bot) ∈ D such that P(E(x⃗,y⃗,ℓ_bot)) > 0. We will prove (5.19) in the steps below. In this step we assume its validity and conclude the proof of (5.16).

It follows from (5.18) and (5.19) that for N ≥ Ñ₁ and P(F ∩ B) > 0 we have

P(A | F ∩ B) = Σ_{(x⃗,y⃗,ℓ_bot) ∈ D: P(E(x⃗,y⃗,ℓ_bot)) > 0} P(A | E(x⃗,y⃗,ℓ_bot)) · P(E(x⃗,y⃗,ℓ_bot)) / P(F ∩ B)
≥ (1/4) · Σ_{(x⃗,y⃗,ℓ_bot) ∈ D: P(E(x⃗,y⃗,ℓ_bot)) > 0} P(E(x⃗,y⃗,ℓ_bot)) / P(F ∩ B) = 1/4.
From the third condition in the definition of an (α,p,λ)-good sequence, see Definition 2.24, we can find Ñ₂ so that P(A) < ε/16 for N ≥ Ñ₂. Hence if N ≥ max(Ñ₁, Ñ₂) and P(F ∩ B) > 0 we have

(5.20) P(F ∩ B) = P(A ∩ F ∩ B) / P(A | F ∩ B) ≤ 4·P(A) < ε/4.

Lastly, by the same condition in Definition 2.24 we can find Ñ₃ so that for N ≥ Ñ₃ we have

(5.21) P(F^c) ≤ 2 · ε/16 = ε/8.

In deriving (5.21) we used the fact that |L^N_1(γ) − L^N_1(rN^α)| ≤ 1, |L^N_1(Γ) − L^N_1(RN^α)| ≤ 1 and p ∈ [0,1]. Combining (5.20) and (5.21) we conclude that if N ≥ N₀ = max(Ñ₁, Ñ₂, Ñ₃)

P(B) ≤ P(F ∩ B) + P(F^c) ≤ ε/4 + ε/8 < ε,

which proves (5.16).

Step 4.
In this step we prove (5.19). We define x⃗′, y⃗′ ∈ W_{k−2} through

x′_i = x + (k−1−i)⌈C₁√T⌉, y′_i = y + (k−1−i)⌈C₁√T⌉ for i = 1, ..., k−2, with

x = ⌈ pγ − λγ²N^{−3α/2} + [φ(ε/16)+1]N^{α/2} ⌉, y = ⌈ pΓ − λΓ²N^{−3α/2} + [φ(ε/16)+1]N^{α/2} ⌉,   (5.22)

where C₁ is as in (5.13) and T = Γ − γ. Note that for any (x⃗,y⃗,ℓ_bot) ∈ D we have x′_i ≥ x ≥ x₁ ≥ x_i and y′_i ≥ y ≥ y₁ ≥ y_i for each i = 1, ..., k−2. Furthermore, x′_i − x′_{i+1} ≥ C₁√T and y′_i − y′_{i+1} ≥ C₁√T for all i = 1, ..., k−2, with the convention x′_{k−1} = x and y′_{k−1} = y.

We claim that we can find Ñ₁ so that for all N ≥ Ñ₁ and (x⃗,y⃗,ℓ_bot) ∈ D such that P(E(x⃗,y⃗,ℓ_bot)) > 0 we have ∏_{i=1}^{k−2} |Ω(γ,Γ,x′_i,y′_i)| ≥ 1 and |Ω_avoid(γ,Γ,x⃗′,y⃗′,∞,ℓ_bot)| ≥ 1, and moreover

(5.23) P^{γ,Γ,x⃗′,y⃗′}_{Ber}( Q₁(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} ) ≥ 1/3,
(5.24) P^{γ,Γ,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ) ≥ 11/12,

where Q = (Q₁, ..., Q_{k−2}) is P^{γ,Γ,x⃗′,y⃗′}_{Ber}-distributed and we use the convention Q_{k−1} = ℓ_bot. We prove (5.23) and (5.24) in the steps below. In this step we assume their validity and conclude the proof of (5.19).

Observe that for any (x⃗,y⃗,ℓ_bot) ∈ D such that P(E(x⃗,y⃗,ℓ_bot)) > 0 we have the following tower of inequalities, provided that N ≥ Ñ₁:

P(A | E(x⃗,y⃗,ℓ_bot)) = P^{γ,Γ,x⃗,y⃗,∞,ℓ_bot}_{avoid,Ber}( Q₁(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} )
≥ P^{γ,Γ,x⃗′,y⃗′,∞,ℓ_bot}_{avoid,Ber}( Q₁(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} )
= P^{γ,Γ,x⃗′,y⃗′}_{Ber}( {Q₁(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2}} ∩ {Q₁ ≥ ··· ≥ Q_{k−1}} ) / P^{γ,Γ,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ).
(5.25)

Let us elaborate on (5.25) briefly. The condition P(E(x⃗,y⃗,ℓ_bot)) > 0 is required to ensure that the probabilities on the first line of (5.25) are well-defined, and N ≥ Ñ₁ ensures that all other probabilities are also well-defined. The equality on the first line of (5.25) follows from the definition of A and the Schur Gibbs property, see Definition 2.17, and there Q = (Q₁, ..., Q_{k−2}) is P^{γ,Γ,x⃗,y⃗,∞,ℓ_bot}_{avoid,Ber}-distributed. The inequality in the first line of (5.25) follows from Lemma 3.1, while the equality in the second line follows from Definition 2.15, and now Q = (Q₁, ..., Q_{k−2}) is P^{γ,Γ,x⃗′,y⃗′}_{Ber}-distributed with the convention Q_{k−1} = ℓ_bot.

Combining (5.23), (5.24) and (5.25) we conclude that P(A | E(x⃗,y⃗,ℓ_bot)) ≥ 1/3 − 1/12 = 1/4, which proves (5.19).

Step 5.
In this step we prove (5.23). We observe that since P(E(x⃗,y⃗,ℓ_bot)) > 0 we know that |Ω_avoid(γ,Γ,x⃗,y⃗,∞,ℓ_bot)| ≥ 1, and then we conclude from Lemma 2.16 that there exists N̂₁ ∈ ℕ such that for N ≥ N̂₁ we have |Ω_avoid(γ,Γ,x⃗′,y⃗′,∞,ℓ_bot)| ≥ 1.

Below ℓ will be used for a generic random variable with law P^{·,·,·,·}_{Ber}, where the boundary data changes from line to line. With x, y as in (5.22), write z = y − x and recall that T = Γ − γ. Then

P^{γ,Γ,x′₁,y′₁}_{Ber}( ℓ(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} )
= P^{0,T,x′₁,y′₁}_{Ber}( ℓ(T/2) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} )
= P^{0,T,x,y}_{Ber}( ℓ(T/2) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} − (k−2)⌈C₁√T⌉ )
≥ P^{0,T,x,y}_{Ber}( ℓ(T/2) − (x+y)/2 < λ(γ²+Γ²)N^{α/2}/(2N^{2α}) − [2φ(ε/16) + 1 + λm²]N^{α/2} − k⌈C₁√T⌉ )
= P^{0,T,0,z}_{Ber}( ℓ(T/2) − z/2 < λ(γ²+Γ²)N^{α/2}/(2N^{2α}) − [2φ(ε/16) + 1 + λm²]N^{α/2} − k⌈C₁√T⌉ ).   (5.26)

The equalities in (5.26) follow from shifting the boundary data of the curve ℓ, while the inequality on the third line follows from the definition of x, y in (5.22). From our choice of R in Step 1 and the definition of γ, Γ we know that

λ(γ²+Γ²)/(2N^{2α}) − λm² ≥ λ(R−r)²/4 − rλN^{−α} ≥ 2φ(ε/16) + 2 + k⌈C₁√T⌉N^{−α/2} − rλN^{−α}.

The last inequality and (5.26) imply

P^{γ,Γ,x′₁,y′₁}_{Ber}( ℓ(mN^α) − pmN^α + λm²N^{α/2} < −φ(ε/16)N^{α/2} ) ≥ P^{0,T,0,z}_{Ber}( ℓ(T/2) − z/2 < N^{α/2} − rλN^{−α/2} ).   (5.27)

Let P̃ be the probability measure on the space afforded by Theorem 3.3, supporting a random variable ℓ^{(T,z)} with law P^{0,T,0,z}_{Ber} and a Brownian bridge B^σ with variance σ² = p(1−p). Then the probability on the right side of (5.27) satisfies

P^{0,T,0,z}_{Ber}( ℓ(T/2) − z/2 < N^{α/2} − rλN^{−α/2} ) = P̃( ℓ^{(T,z)}(T/2) − z/2 < N^{α/2} − rλN^{−α/2} )
≥ P̃( √T·B^σ_{1/2} < 0 and ∆(T,z) < N^{α/2} − rλN^{−α/2} ) ≥ 1/2 − P̃( ∆(T,z) ≥ N^{α/2} − rλN^{−α/2} ),   (5.28)

where we recall that ∆(T,z) is as in (3.2). Since as N → ∞ we have T ∼ (R−r)N^α and |z − pT|·T^{−1/2} ∼ λ(R+r)(R−r)^{1/2}, we conclude from Corollary 3.5 that there exists N̂₂ ∈ ℕ such that if N ≥ max(N̂₁, N̂₂) we have

P̃( ∆(T,z) ≥ N^{α/2} − rλN^{−α/2} ) ≤ 1/6.   (5.29)
Combining (5.27), (5.28) and (5.29) we obtain (5.23).
Step 6.
In this last step we prove (5.24). Let ℓ̄ be the straight segment connecting x and y, defined in (5.22). By construction, there is N̂₃ ∈ ℕ such that if N ≥ N̂₃ we have, for any (x⃗,y⃗,ℓ_bot) ∈ D, that ℓ_bot lies uniformly below the segment ℓ̄, which in turn lies at least C₁√T below the straight segment connecting x′_{k−2} and y′_{k−2}. If N̂₁ is as in Step 5, we conclude from Lemma 3.14 that there exists N̂₄ ∈ ℕ such that if N ≥ max(N̂₁, N̂₃, N̂₄) and P(E(x⃗,y⃗,ℓ_bot)) > 0, then

(5.30) P^{γ,Γ,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ) ≥ (1 − 3e^{−C₁²/p(1−p)})^{k−2} = 11/12,

where the condition N ≥ N̂₁ is included to ensure that the probability P^{γ,Γ,x⃗′,y⃗′}_{Ber} is well-defined. In deriving (5.30) we also used (5.13), which implies

C₁ = √( p(1−p) · log( 3 / (1 − (11/12)^{1/(k−2)}) ) ) ≥ √( p(1−p) · log 3 ).

We see that (5.30) implies (5.24), which concludes the proof of the lemma. □
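The free-bridge heuristic behind the proof above — that away from the other curves the top curve behaves like a free Bernoulli bridge, whose midpoint sits within O(√T) of the average of its endpoint values — is easy to check numerically. The following minimal sketch is our own illustration, not part of the paper; all function names and parameter choices are ours. It samples uniform Bernoulli bridges by choosing the positions of the up-steps uniformly at random and records the largest midpoint deviation observed:

```python
import random

def sample_bridge(T, z, rng):
    """Uniformly sample a Bernoulli bridge from (0, 0) to (T, z):
    an up-right path with exactly z unit up-steps among T steps."""
    ups = set(rng.sample(range(T), z))
    path, h = [0], 0
    for s in range(T):
        h += 1 if s in ups else 0
        path.append(h)
    return path

def max_midpoint_deviation(T, z, n_samples, seed):
    """Largest |ell(T/2) - z/2| observed over n_samples bridges."""
    rng = random.Random(seed)
    return max(abs(sample_bridge(T, z, rng)[T // 2] - z / 2)
               for _ in range(n_samples))

T, z = 400, 200                      # bridge of slope p = z/T = 1/2
dev = max_midpoint_deviation(T, z, n_samples=200, seed=0)
```

The midpoint of such a bridge is hypergeometric with mean z/2 and standard deviation of order √T, so even the maximum over many samples stays well below a modest multiple of √T — the scale on which the comparison with the inverted parabola is made in the proof.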
In the remainder of this section we use Lemma 5.1 to prove Lemma 4.3.
Proof (of Lemma 4.3). For clarity we split the proof into five steps.
Step 1.
In this step we specify the choice of R₁ in the statement of the lemma and introduce some notation that will be used in the proof, which is given in Steps 2–5 below. Throughout we fix r, ε > 0. Define the constant

(5.31) C₂ = √( p(1−p) · log( 3 / (1 − 2^{−1/(k−1)}) ) ).

Let R > r+3, M > 0 and Ñ₁ ∈ ℕ be such that for N ≥ Ñ₁ the event

B = { sup_{x ∈ [r+3,R]} ( L^N_{k−1}(xN^α) − pxN^α ) ≥ −MN^{α/2} } ∩ { sup_{x ∈ [−R,−r−3]} ( L^N_{k−1}(xN^α) − pxN^α ) ≥ −MN^{α/2} }   (5.32)

satisfies

(5.33) P(B) ≥ 1 − ε/2.

Such a choice of R, M, Ñ₁ is possible by Lemma 5.1. Let us set

s⁻₁ = ⌈−R·N^α⌉, s⁻₂ = ⌊−(r+3)·N^α⌋, s⁺₁ = ⌈(r+3)·N^α⌉, s⁺₂ = ⌊R·N^α⌋,

and for a ∈ ⟦s⁻₁, s⁻₂⟧ and b ∈ ⟦s⁺₁, s⁺₂⟧ we define x⃗′, y⃗′ ∈ W_{k−1} by

x′_i = ⌊pa − MN^{α/2}⌋ − (i−1)⌈C₂(2R)^{1/2}N^{α/2}⌉, y′_i = ⌊pb − MN^{α/2}⌋ − (i−1)⌈C₂(2R)^{1/2}N^{α/2}⌉,   (5.34)

for i = 1, ..., k−1. We will write z⃗ = y⃗′ − x⃗′, and we note that z_{k−1} ≥ p(b−a) − 1 and also 2RN^α ≥ b − a ≥ 2(r+3)N^α. The latter and Lemma 3.10 imply that there exist R₁ > 0 and Ñ₂ ∈ ℕ such that if N ≥ Ñ₂ we have

(5.35) P^{0,b−a,0,z_{k−1}}_{Ber}( inf_{s ∈ [0,b−a]} ( ℓ(s) − ps ) ≤ −(R₁ − M − C₂(2R)^{1/2}k)N^{α/2} ) < ε/8.

This fixes our choice of R₁ in the statement of the lemma. With the above choice of R₁ we define the event

(5.36) A = { inf_{s ∈ [−t₃,t₃]} [ L^N_{k−1}(s) − ps ] ≤ −R₁N^{α/2} },

and then to prove the lemma it suffices to show that there exists N₀ ∈ ℕ such that for N ≥ N₀

(5.37) P(A) < ε.

Step 2.
In this step we prove that the event B from (5.32) can be written as a countable disjoint union of the form

(5.38) B = ⊔_{(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D} E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top),

where the set D and the events E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) are defined below.

For a ∈ ⟦s⁻₁,s⁻₂⟧ and b ∈ ⟦s⁺₁,s⁺₂⟧, x⃗, y⃗ ∈ W_{k−1}, z₁, z₂, z⁻₁, z⁺₂ ∈ ℤ, ℓ_bot ∈ Ω(a,b,z₁,z₂), ℓ⁻_top ∈ Ω(s⁻₁,a,z⁻₁,x_{k−1}), ℓ⁺_top ∈ Ω(b,s⁺₂,y_{k−1},z⁺₂), we define E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) to be the event that L^N_i(a) = x_i and L^N_i(b) = y_i for 1 ≤ i ≤ k−1, L^N_k agrees with ℓ_bot on ⟦a,b⟧, and L^N_{k−1} agrees with ℓ⁻_top on ⟦s⁻₁,a⟧ and with ℓ⁺_top on ⟦b,s⁺₂⟧.

We also let D be the collection of tuples (a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) satisfying:
(1) a ∈ ⟦s⁻₁,s⁻₂⟧, b ∈ ⟦s⁺₁,s⁺₂⟧;
(2) x⃗, y⃗ ∈ W_{k−1}, 0 ≤ y_i − x_i ≤ b − a, x_{k−1} − pa > −MN^{α/2}, and y_{k−1} − pb > −MN^{α/2};
(3) if c ∈ ⟦s⁻₁,s⁻₂⟧ ∩ (−∞,a) then ℓ⁻_top(c) − pc ≤ −MN^{α/2};
(4) if d ∈ ⟦s⁺₁,s⁺₂⟧ ∩ (b,∞) then ℓ⁺_top(d) − pd ≤ −MN^{α/2};
(5) z₁ ≤ x_{k−1}, z₂ ≤ y_{k−1}, and ℓ_bot ∈ Ω(a,b,z₁,z₂).

It is clear that D is countable, and that B = ∪_{(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D} E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top), so to prove (5.38) it suffices to show that the events E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) are pairwise disjoint. Observe that on the intersection of E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) and E(ã,b̃,x̃⃗,ỹ⃗,ℓ̃_bot,ℓ̃⁻_top,ℓ̃⁺_top), conditions (2) and (3) imply that a = ã, while conditions (2) and (4) imply that b = b̃. Afterwards we conclude that x⃗ = x̃⃗, y⃗ = ỹ⃗, ℓ_bot = ℓ̃_bot, ℓ⁻_top = ℓ̃⁻_top and ℓ⁺_top = ℓ̃⁺_top, confirming (5.38).

Step 3.
In this step we prove (5.37). We claim that we can find Ñ₃ ∈ ℕ such that if N ≥ Ñ₃ and (a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D is such that P(E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) > 0, we have

(5.39) P(A | E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) < ε/4.

We will prove (5.39) in the steps below. Here we assume its validity and conclude the proof of (5.37).
If N ≥ max(Ñ₁, Ñ₂, Ñ₃) we have, in view of (5.38) and (5.39), that

P(A) ≤ P(A ∩ B) + P(B^c) = P(B^c) + Σ_{(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D: P(E(·)) > 0} P(A | E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) · P(E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top))
≤ P(B^c) + (ε/4) Σ_{(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D: P(E(·)) > 0} P(E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) = P(B^c) + (ε/4)·P(B) < ε,

where in the last inequality we used (5.33). The above inequality clearly implies (5.37).

Step 4.
In this step we prove (5.39). We claim that there exists Ñ₄ ∈ ℕ such that if N ≥ Ñ₄, a ∈ ⟦s⁻₁,s⁻₂⟧ and b ∈ ⟦s⁺₁,s⁺₂⟧, we have ∏_{i=1}^{k−1} |Ω(a,b,x′_i,y′_i)| ≥ 1 and

(5.40) P^{a,b,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ) ≥ 1/2,

where Q = (Q₁, ..., Q_{k−1}) is P^{a,b,x⃗′,y⃗′}_{Ber}-distributed and we recall that x⃗′, y⃗′ were defined in (5.34). We will prove (5.40) in Step 5 below. Here we assume its validity and conclude the proof of (5.39).

Observe that by condition (2) in Step 2 we have x′_i ≤ pa − MN^{α/2} ≤ x_{k−1} ≤ x_i, and similarly y′_i ≤ pb − MN^{α/2} ≤ y_{k−1} ≤ y_i for i = 1, ..., k−1. From this observation we conclude that if N ≥ Ñ₄ is sufficiently large and (a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) ∈ D is such that P(E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) > 0, we have

P(A | E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) ≤ P( inf_{s ∈ [a,b]} ( L^N_{k−1}(s) − ps ) ≤ −R₁N^{α/2} | E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top) )
= P^{a,b,x⃗,y⃗,∞,ℓ_bot}_{avoid,Ber}( inf_{s ∈ [a,b]} ( Q_{k−1}(s) − ps ) ≤ −R₁N^{α/2} )
≤ P^{a,b,x⃗′,y⃗′}_{avoid,Ber}( inf_{s ∈ [a,b]} ( Q_{k−1}(s) − ps ) ≤ −R₁N^{α/2} )
= P^{a,b,x⃗′,y⃗′}_{Ber}( {inf_{s ∈ [a,b]} ( Q_{k−1}(s) − ps ) ≤ −R₁N^{α/2}} ∩ {Q₁ ≥ ··· ≥ Q_{k−1}} ) / P^{a,b,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} )
≤ P^{a,b,x⃗′,y⃗′}_{Ber}( inf_{s ∈ [a,b]} ( Q_{k−1}(s) − ps ) ≤ −R₁N^{α/2} ) / P^{a,b,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ).   (5.41)

Let us elaborate on (5.41) briefly. The first inequality in (5.41) follows from the definition of A and the fact that a ≤ −t₃ while b ≥ t₃ by construction. The condition P(E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) > 0 ensures that the first three probabilities in (5.41) are all well-defined. The equality on the second line follows from the Schur Gibbs property, and the inequality on the third line follows from Lemmas 3.1 and 3.2, since x′_i ≤ x_i and y′_i ≤ y_i by construction. To ensure that the probability in the fourth line is well-defined (and hence that Lemmas 3.1 and 3.2 are applicable) it suffices to assume that N ≥ Ñ₄, in view of Lemma 2.16. The equality on the fourth line follows from the definition of P^{a,b,x⃗′,y⃗′}_{avoid,Ber}, see Definition 2.15, and the last inequality is trivial.

By our choice of R₁, see (5.35), we know that there is Ñ₅ ∈ ℕ such that if N ≥ Ñ₅

P^{a,b,x⃗′,y⃗′}_{Ber}( inf_{s ∈ [a,b]} ( Q_{k−1}(s) − ps ) ≤ −R₁N^{α/2} ) = P^{0,b−a,0,z_{k−1}}_{Ber}( inf_{s ∈ [0,b−a]} ( ℓ(s) − ps ) ≤ −R₁N^{α/2} − (x′_{k−1} − pa) )
≤ P^{0,b−a,0,z_{k−1}}_{Ber}( inf_{s ∈ [0,b−a]} ( ℓ(s) − ps ) ≤ −(R₁ − M − C₂(2R)^{1/2}k)N^{α/2} ) < ε/8.   (5.42)

Combining (5.40), (5.41) and (5.42) we conclude that for N ≥ Ñ₃ = max(Ñ₄, Ñ₅) we have

P(A | E(a,b,x⃗,y⃗,ℓ_bot,ℓ⁻_top,ℓ⁺_top)) < 2 · ε/8 = ε/4,

which implies (5.39).

Step 5.
In this final step we prove (5.40). Set T = b − a and note that by our assumption that a ∈ ⟦s⁻₁,s⁻₂⟧ and b ∈ ⟦s⁺₁,s⁺₂⟧ we know that (2r+6)N^α ≤ T ≤ 2RN^α. This implies that ⌈C₂(2R)^{1/2}N^{α/2}⌉ ≥ x′_i − x′_{i+1} ≥ C₂√T, and likewise for y′_i. It follows from Lemma 3.14, applied with ℓ_bot = −∞, that there is Ñ₄ ∈ ℕ such that if N ≥ Ñ₄ we have T ≥ y′_i − x′_i ≥ 0 for all i, so that ∏_{i=1}^{k−1} |Ω(a,b,x′_i,y′_i)| ≥ 1, and moreover

(5.43) P^{a,b,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ) = P^{0,b−a,x⃗′,y⃗′}_{Ber}( Q₁ ≥ ··· ≥ Q_{k−1} ) ≥ (1 − 3e^{−C₂²/p(1−p)})^{k−1} ≥ 1/2.

In deriving (5.43) we also used (5.31), which implies

C₂ = √( p(1−p) · log( 3 / (1 − 2^{−1/(k−1)}) ) ) ≥ √( p(1−p) · log 3 ).

Equation (5.43) clearly implies (5.40), and this concludes the proof of the lemma. □

6. Lower bounds on the acceptance probability

We prove Lemma 4.4 in Section 6.1 by using Lemma 6.2, whose proof is presented in Section 6.2.

6.1. Proof of Lemma 4.4.
Throughout this section we assume the same notation as in Lemma 4.4; i.e., we assume that we have fixed k ∈ ℕ, p ∈ (0,1), M₁, M₂ > 0, ℓ_bot : ⟦−t₄,t₄⟧ → ℝ ∪ {−∞}, and x⃗, y⃗ ∈ W_{k−1} such that |Ω_avoid(−t₄,t₄,x⃗,y⃗,∞,ℓ_bot)| ≥ 1. We also assume that

(1) sup_{s ∈ [−t₄,t₄]} [ ℓ_bot(s) − ps ] ≤ M₁(2t₃)^{1/2};
(2) −pt₄ + M₂(2t₃)^{1/2} ≥ x₁ ≥ x_{k−1} ≥ max( ℓ_bot(−t₄), −pt₄ − M₂(2t₃)^{1/2} );
(3) pt₄ + M₂(2t₃)^{1/2} ≥ y₁ ≥ y_{k−1} ≥ max( ℓ_bot(t₄), pt₄ − M₂(2t₃)^{1/2} ).

Definition 6.1. We write S = ⟦−t₄,−t₃⟧ ∪ ⟦t₃,t₄⟧, and we denote by Q = (Q₁, ..., Q_{k−1}) and Q̃ = (Q̃₁, ..., Q̃_{k−1}) the ⟦1,k−1⟧-indexed line ensembles which are uniformly distributed on Ω_avoid(−t₄,t₄,x⃗,y⃗,∞,ℓ_bot) and Ω_avoid(−t₄,t₄,x⃗,y⃗,∞,ℓ_bot; S) respectively. We let P_Q and P_Q̃ denote these uniform measures.

In other words, Q̃ has the law of k−1 independent Bernoulli bridges that have been conditioned on not crossing each other on the set S and also staying above the graph of ℓ_bot, but only on the intervals ⟦−t₄,−t₃⟧ and ⟦t₃,t₄⟧. The latter restriction means that the lines are allowed to cross on ⟦−t₃+1, t₃−1⟧, and Q̃_{k−1} is allowed to dip below ℓ_bot on ⟦−t₃+1, t₃−1⟧ as well.

Lemma 6.2.
There exist N₀ ∈ ℕ and constants g, h > 0 such that for N ≥ N₀ we have

(6.1) P_Q̃( Z( −t₃, t₃, Q̃(−t₃), Q̃(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ≥ g ) ≥ h.
We will prove Lemma 6.2 in Section 6.2. In the remainder of this section we give the proof of Lemma 4.4, with the constants g and h given by Lemma 6.2. The proof begins by evaluating the Radon-Nikodym derivative between P_{Q′} and P_{Q̃′}. We then use this Radon-Nikodym derivative to transition between Q̃ in Lemma 6.2, which ignores ℓ_bot on ⟦−(t₃−1), t₃−1⟧, and Q in Lemma 4.4, which avoids ℓ_bot everywhere.

Proof of Lemma 4.4.
Let us denote by P_{Q′} and P_{Q̃′} the measures on ⟦1,k−1⟧-indexed Bernoulli line ensembles Q′, Q̃′ on the set S from Definition 6.1, induced by the restrictions of the measures P_Q, P_Q̃ to S. Also let us write Ω_a(·) for Ω_avoid(·) for simplicity, and denote by Ω_a(S) the set of elements of Ω_avoid(−t₄,t₄,Q̃(−t₄),Q̃(t₄)) restricted to S. We claim that the Radon-Nikodym derivative between these two restricted measures, evaluated at an element B = (B₁, ..., B_{k−1}) of Ω_a(S), is given by

(6.2) (dP_{Q′}/dP_{Q̃′})(B) = P_{Q′}(B)/P_{Q̃′}(B) = (Z′)^{−1} · Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ),

with Z′ = E_{Q̃′}[ Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ]. The first equality holds simply because the measures are discrete. To prove the second equality, observe that

P_{Q′}(B) = |Ω_a( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ )| / |Ω_a( −t₄, t₄, Q(−t₄), Q(t₄), ℓ_bot )|,
P_{Q̃′}(B) = ∏_{i=1}^{k−1} |Ω( −t₃, t₃, B_i(−t₃), B_i(t₃) )| / |Ω_a( −t₄, t₄, Q̃(−t₄), Q̃(t₄), ℓ_bot; S )|.   (6.3)

These identities follow from the restriction and the fact that the measures are uniform.
Then from Definition 2.22 we know

Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) = |Ω_a( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ )| / ∏_{i=1}^{k−1} |Ω( −t₃, t₃, B_i(−t₃), B_i(t₃) )|,

and hence

Z′ = Σ_{B ∈ Ω_a(S)} [ ∏_{i=1}^{k−1} |Ω( −t₃, t₃, B_i(−t₃), B_i(t₃) )| / |Ω_a( −t₄, t₄, Q̃(−t₄), Q̃(t₄), ℓ_bot; S )| ] · [ |Ω_a( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ )| / ∏_{i=1}^{k−1} |Ω( −t₃, t₃, B_i(−t₃), B_i(t₃) )| ]
= Σ_{B ∈ Ω_a(S)} |Ω_a( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ )| / |Ω_a( −t₄, t₄, Q̃(−t₄), Q̃(t₄), ℓ_bot; S )|
= |Ω_a( −t₄, t₄, Q(−t₄), Q(t₄), ℓ_bot )| / |Ω_a( −t₄, t₄, Q̃(−t₄), Q̃(t₄), ℓ_bot; S )|.

Comparing the above identities proves the second equality in (6.2).

Now note that Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) is a deterministic function of (B(−t₃), B(t₃)). In fact, the law of (B(−t₃), B(t₃)) under P_{Q̃′} is the same as that of (Q̃(−t₃), Q̃(t₃)) by way of the restriction. It follows from Lemma 6.2 that

Z′ = E_{Q̃′}[ Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ] = E_{Q̃}[ Z( −t₃, t₃, Q̃(−t₃), Q̃(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ] ≥ gh,

which gives us

(6.4) (Z′)^{−1} ≤ 1/(gh).

Similarly, the law of (B(−t₃), B(t₃)) under P_{Q′} is the same as that of (Q(−t₃), Q(t₃)) under P_Q. Hence

P_Q( Z( −t₃, t₃, Q(−t₃), Q(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ≤ ghε̃ ) = P_{Q′}( Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ≤ ghε̃ ).   (6.5)

Now let us write E = { Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) ≤ ghε̃ } ⊂ Ω_a(S).
Then according to (6.2), we have

P_{Q′}(E) = ∫_{Ω_a(S)} 𝟙_E dP_{Q′} = (Z′)^{−1} ∫_{Ω_a(S)} 𝟙_E · Z( −t₃, t₃, B(−t₃), B(t₃), ℓ_bot⟦−t₃,t₃⟧ ) dP_{Q̃′}(B).

From the definition of E, the inequality (6.4), and the fact that 𝟙_E ≤ 1, it follows that

P_{Q′}(E) ≤ (Z′)^{−1} ∫_{Ω_a(S)} 𝟙_E · ghε̃ dP_{Q̃′} ≤ (1/(gh)) ∫_{Ω_a(S)} ghε̃ dP_{Q̃′} ≤ ε̃.

In combination with (6.5), this proves (4.2). □
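The acceptance probability Z of Definition 2.22, which drives the Radon-Nikodym derivative (6.2), is a purely combinatorial quantity: the ratio of the number of avoiding configurations to the number of tuples of unconstrained bridges. For tiny parameters it can be computed by exhaustive enumeration. The following sketch is our own illustration of that definition (the function names are ours, and the boundary data is an arbitrary small example):

```python
from itertools import combinations, product
from math import comb

def bridges(T, x, y):
    """All Bernoulli bridges on {0,...,T} from x to y (steps in {0, 1})."""
    paths = []
    for ups in combinations(range(T), y - x):
        path, h = [x], x
        for s in range(T):
            h += 1 if s in ups else 0
            path.append(h)
        paths.append(tuple(path))
    return paths

def acceptance_probability(T, xs, ys, ell_bot):
    """Z = |Omega_avoid| / prod_i |Omega_i|: the chance that independent
    uniform bridges are already ordered top-to-bottom and above ell_bot."""
    families = [bridges(T, x, y) for x, y in zip(xs, ys)]
    total = 1
    for fam in families:
        total *= len(fam)
    def avoiding(tup):
        ordered = all(u >= l for upper, lower in zip(tup, tup[1:])
                      for u, l in zip(upper, lower))
        return ordered and all(v >= e for v, e in zip(tup[-1], ell_bot))
    good = sum(1 for tup in product(*families) if avoiding(tup))
    return good / total, good, total

T = 4
Z, good, total = acceptance_probability(T, xs=[1, 0], ys=[3, 2],
                                        ell_bot=[-1] * (T + 1))
```

Here |Ω(0,T,x,y)| = C(T, y−x), so the denominator is a product of binomial coefficients, and Z is exactly the probability that independent uniform bridges happen to be ordered and stay above ℓ_bot — the quantity the lemmas of this section bound from below.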
6.2. Proof of Lemma 6.2.
In this section, we prove Lemma 6.2. We first state and prove twoauxiliary lemmas necessary for the proof. The first lemma establishes a set of conditions underwhich we have the desired lower bound on the acceptance probability.
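Both auxiliary lemmas below rest on the same mechanism used in (5.30) and (5.43) above: independent Bernoulli bridges whose endpoints are separated on the scale C√T stay ordered with probability bounded below uniformly in T. A quick Monte Carlo check (our own sketch, not from the paper; the names and parameter choices are ours) illustrates the effect:

```python
import random

def sample_bridge(T, z, rng):
    """Uniform Bernoulli bridge from (0, 0) to (T, z)."""
    ups = set(rng.sample(range(T), z))
    path, h = [0], 0
    for s in range(T):
        h += 1 if s in ups else 0
        path.append(h)
    return path

def ordered_fraction(T, z, gap, n_samples, seed):
    """Fraction of independent bridge pairs that stay weakly ordered
    when the second bridge is shifted down by `gap`."""
    rng = random.Random(seed)
    good = 0
    for _ in range(n_samples):
        top = sample_bridge(T, z, rng)
        bot = sample_bridge(T, z, rng)
        good += all(t >= b - gap for t, b in zip(top, bot))
    return good / n_samples

T = 400
freq = ordered_fraction(T, z=200, gap=int(2 * T ** 0.5), n_samples=100, seed=1)
```

With separation 2√T the empirical ordered fraction is essentially 1, in line with lower bounds of the form (1 − 3e^{−C²/p(1−p)})^{k−1}: each bridge fluctuates on scale √(p(1−p)T) around its linear interpolation, so a gap of order √T with a large constant is crossed only with exponentially small probability.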
Lemma 6.3.
Let ε > 0 and V_top > 0 be given such that V_top > M₁ + 6(k−1)ε. Suppose further that a⃗, b⃗ ∈ W_{k−1} are such that

(1) V_top(2t₃)^{1/2} ≥ a₁ + pt₃ ≥ a_{k−1} + pt₃ ≥ (M₁ + 2ε)(2t₃)^{1/2};
(2) V_top(2t₃)^{1/2} ≥ b₁ − pt₃ ≥ b_{k−1} − pt₃ ≥ (M₁ + 2ε)(2t₃)^{1/2};
(3) a_i − a_{i+1} ≥ ε(2t₃)^{1/2} and b_i − b_{i+1} ≥ ε(2t₃)^{1/2} for i = 1, ..., k−2.

Then we can find g = g(ε, V_top, M₁) > 0 and N₀ ∈ ℕ such that for all N ≥ N₀ we have

(6.6) Z( −t₃, t₃, a⃗, b⃗, ℓ_bot⟦−t₃,t₃⟧ ) ≥ g.

Proof. Observe by the rightmost inequalities in conditions (1) and (2) in the hypothesis, as well as condition (1) in Lemma 4.4, that ℓ_bot lies a distance of at least 2ε(2t₃)^{1/2} uniformly below the line segment connecting a_{k−1} and b_{k−1}. Also note that (1) and (2) imply |b_i − a_i − 2pt₃| ≤ (V_top − M₁ − 2ε)(2t₃)^{1/2} for each i. Lastly noting (3), we see that the conditions of Lemma 3.14 are satisfied with C = 2ε. This implies (6.6), with

g = ( 1 − 2 Σ_{n=1}^{∞} (−1)^{n−1} e^{−2ε²n²/p(1−p)} )^{k−1}. □

The next lemma helps us derive the lower bound h in (6.1).

Lemma 6.4.
For any
R > we can find V t , V b ≥ M + R , h > and N ∈ N (depending on R )such that if N ≥ N we have (6.7) P ˜ Q (cid:16) (2 t ) / V t ≥ ˜ Q ( ± t ) ∓ pt ≥ ˜ Q k − ( ± t ) ∓ pt ≥ (2 t ) / V b (cid:17) ≥ h . Proof.
We first define the constants V b and h , as well as two other constants C and K to be usedin the proof. We put C = (cid:115) p (1 − p ) log 31 − (11 / / ( k − ,V b = M + Ck + M + R, K = (4 r + 10) V b ,h = 2 k/ − (cid:0) − e − /p (1 − p ) (cid:1) k ( πp (1 − p )) k/ exp (cid:18) − k ( K + M + 6) p (1 − p ) (cid:19) . (6.8) IGHTNESS OF BERNOULLI GIBBSIAN LINE ENSEMBLES 49
Note in particular that V b > M + R . We will fix V t > V b in Step 3 below depending on h . We willprove in the following steps that for these choices of V b , V t , h , we can find N so that for N ≥ N we have P ˜ Q (cid:16) ˜ Q k − ( ± t ) ∓ pt ≥ (2 t ) / V b (cid:17) ≥ h , (6.9) P ˜ Q (cid:16) ˜ Q ( ± t ) ∓ pt > (2 t ) / V t (cid:17) ≤ h . (6.10)Assuming the validity of the claim, we then observe that the probability in (6.7) is bounded belowby h − h = h , proving the lemma. We will prove (6.9) and (6.10) in three steps. Step 1.
In this step we prove that there exists N so that (6.9) holds for N ≥ N , assuming resultsfrom Step 2 below. We condition on the value of ˜ Q at 0 and divide ˜ Q into two independent lineensembles on [ − t , and [0 , t ] . Observe by Lemma 3.2 that(6.11) P ˜ Q (cid:16) ˜ Q k − ( ± t ) ∓ pt ≥ (2 t ) / V b (cid:17) ≥ P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S (cid:16) ˜ Q k − ( ± t ) ∓ pt ≥ (2 t ) / V b (cid:17) . With K as in (6.8), we define events E (cid:126)z = (cid:110)(cid:0) ˜ Q (0) , . . . , ˜ Q k − (0) (cid:1) = (cid:126)z (cid:111) , X = (cid:110) (cid:126)z ∈ W k − : z k − ≥ K (2 t ) / and P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S ( E (cid:126)z ) > (cid:111) , and E = (cid:70) (cid:126)z ∈ X E (cid:126)z . By Lemma 2.16, we can choose ˜ N large enough depending on M , C, k, M , R so that X is non-empty for N ≥ ˜ N . By Lemma 3.16 we can find ˜ N so that(6.12) P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S ( E ) ≥ P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S (cid:16) ˜ Q k − (0) ≥ K (2 t ) / (cid:17) ≥ A exp (cid:18) − k ( K + M + 6) p (1 − p ) (cid:19) for N ≥ ˜ N , where A = A ( p, k ) is a constant given explicitly in (3.22).Now let ˜ Q i and ˜ Q i denote the restrictions of ˜ Q i to [ − t , and [0 , t ] respectively for ≤ i ≤ k − ,and write S = S ∩ (cid:74) − t , (cid:75) , S = S ∩ (cid:74) , t (cid:75) . We observe that if (cid:126)z ∈ X , then(6.13) P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S (cid:16) ˜ Q k − = (cid:96) , ˜ Q k − = (cid:96) | E (cid:126)z (cid:17) = P − t , ,(cid:126)x,(cid:126)zavoid,Ber ; S ( (cid:96) ) · P ,t ,(cid:126)z,(cid:126)yavoid,Ber ; S ( (cid:96) ) . In Step 2, we will find ˜ N so that for N ≥ ˜ N we have P − t , ,(cid:126)x,(cid:126)zavoid,Ber ; S (cid:16) ˜ Q k − ( − t ) + pt ≥ (2 t ) / V b (cid:17) ≥ , P ,t ,(cid:126)x,(cid:126)zavoid,Ber ; S (cid:16) ˜ Q k − ( t ) − pt ≥ (2 t ) / V b (cid:17) ≥ . 
(6.14)Using (6.12), (6.13), and (6.14), we conclude that P − t ,t ,(cid:126)x,(cid:126)yavoid,Ber ; S (cid:16) ˜ Q k − ( ± t ) ∓ pt ≥ (2 t ) / V b (cid:17) ≥ A
/ 16 · exp (cid:18) − k ( K + M + 6) p (1 − p ) (cid:19) for N ≥ N = max( ˜ N , ˜ N , ˜ N ) . In combination with (6.11), this proves (6.9) with h = ( A/ 16) exp (cid:18) − k ( K + M + 6) p (1 − p ) (cid:19) as in (6.8). Step 2.
In this step, we prove the inequalities in (6.14) from Step 1, using Lemma 3.8. Let usdefine vectors (cid:126)x (cid:48) , (cid:126)z (cid:48) , (cid:126)y (cid:48) by x (cid:48) i = (cid:98)− pt − M (2 t ) / (cid:99) − ( i − (cid:100) C (2 t ) / (cid:101) ,z (cid:48) i = (cid:98) K (2 t ) / (cid:99) − ( i − (cid:100) C (2 t ) / (cid:101) ,y (cid:48) i = (cid:98) pt − M (2 t ) / (cid:99) − ( i − (cid:100) C (2 t ) / (cid:101) . Note that x (cid:48) i ≤ x k − ≤ x i and x (cid:48) i − x (cid:48) i +1 ≥ C (2 t ) / for ≤ i ≤ k − , and likewise for z (cid:48) i , y (cid:48) i . ByLemma 3.1 we have P − t , ,(cid:126)x,(cid:126)zavoid,Ber ; S (cid:16) ˜ Q k − ( − t ) + pt ≥ (2 t ) / V b (cid:17) ≥ P − t , ,(cid:126)x (cid:48) ,(cid:126)z (cid:48) avoid,Ber ; S (cid:16) ˜ Q k − ( − t ) + pt ≥ (2 t ) / V b (cid:17) ≥ P − t , ,x (cid:48) k − ,z (cid:48) k − Ber (cid:16) (cid:96) ( − t ) + pt ≥ (2 t ) / V b (cid:17) − (cid:16) − P − t ,t ,(cid:126)x (cid:48) ,(cid:126)z (cid:48) Ber (cid:16) ˜ Q ≥ · · · ≥ ˜ Q k − (cid:17)(cid:17) . (6.15)To bound the first term on the second line, first note that x (cid:48) k − ≥ − pt − ( M + C ( k − t ) / and z (cid:48) k − ≥ K (2 t ) / − C ( k − t ) / for sufficiently large N . Let us write ˜ x, ˜ z for these twolower bounds. Then by Lemma 3.8, we have an ˜ N so that for N ≥ ˜ N ,(6.16) P − t , ,x (cid:48) k − ,z (cid:48) k − Ber (cid:18) (cid:96) ( − t ) ≥ t t ˜ x + t − t t ˜ z − (2 t ) / (cid:19) ≥ . Moreover, as long as ˜ N α > , we have for N ≥ ˜ N α that(6.17) t − t t ≥ − ( r + 2) N α ( r + 3) N α − > − r + 2 r + 5 / r + 5 . It follows from our choice of V b and K = 2(2 r + 5) V b in (6.8), as well as (6.17), that t t ˜ x + t − t t ˜ z − (2 t ) / = − pt − C ( k − t ) / − t t M (2 t ) / + t − t t K (2 t ) / − (2 t ) / ≥ − pt − Ck (2 t ) / − M (2 t ) / + 12 r + 5 K (2 t ) / = − pt + ( M + Ck + 2( M + R ))(2 t ) / > − pt + (2 t ) / V b . 
For the first inequality, we used the fact that t /t < , and we assumed that ˜ N is sufficiently largeso that C ( k − t ) / + (2 t ) / ≤ Ck (2 t ) / for N ≥ ˜ N . Using (6.16), we conclude for N ≥ ˜ N (6.18) P − t , ,x (cid:48) k − ,z (cid:48) k − Ber (cid:16) (cid:96) ( − t ) + pt ≥ (2 t ) / V b (cid:17) ≥ . Since | z (cid:48) i − x (cid:48) i − pt | ≤ ( K + M + 1)(2 t ) / , we have by Lemma 3.14 and our choice of C that thesecond probability in the second line of (6.15) is bounded below by (cid:16) − e − C / p (1 − p ) (cid:17) k − ≥ / for N larger than some ˜ N . It follows from (6.15) and (6.18) that for N ≥ ˜ N = max( ˜ N , ˜ N ) , P − t , ,(cid:126)x,(cid:126)zavoid,Ber ; S (cid:16) ˜ Q k − ( − t ) + pt ≥ (2 t ) / V b (cid:17) ≥ −
1/12 = 1/4 , proving the first inequality in (6.14). The second inequality is proven similarly. Step 3.
In this last step, we fix V t and prove that we can enlarge N from Step 1 so that (6.10)holds for N ≥ N . Let C be as in (6.8), and define vectors (cid:126)x (cid:48)(cid:48) , (cid:126)y (cid:48)(cid:48) ∈ W k − by x (cid:48)(cid:48) i = (cid:100)− pt + M (2 t ) / (cid:101) + ( k − i ) (cid:100) C (2 t ) / (cid:101) ,y (cid:48)(cid:48) i = (cid:100) pt + M (2 t ) / (cid:101) + ( k − i ) (cid:100) C (2 t ) / (cid:101) . Note that x (cid:48)(cid:48) i ≥ x ≥ x i and x (cid:48)(cid:48) i − x (cid:48)(cid:48) i +1 ≥ C (2 t ) / , and likewise for y (cid:48)(cid:48) i . Moreover, (cid:96) bot lies a distanceof at least C (2 t ) / uniformly below the line segment connecting x (cid:48)(cid:48) k − and y (cid:48)(cid:48) k − . By Lemma 3.1 IGHTNESS OF BERNOULLI GIBBSIAN LINE ENSEMBLES 51 we have P ˜ Q (cid:16) ˜ Q ( ± t ) ∓ pt > (2 t ) / V t (cid:17) ≤ P − t ,t ,(cid:126)x (cid:48)(cid:48) ,(cid:126)y (cid:48)(cid:48) , ∞ ,(cid:96) bot avoid,Ber ; S (cid:32) sup s ∈ [ − t ,t ] (cid:2) ˜ Q ( s ) − ps (cid:3) ≥ (2 t ) / V t (cid:33) ≤ P − t ,t ,x (cid:48)(cid:48) ,y (cid:48)(cid:48) Ber (cid:16) sup s ∈ [ − t ,t ] (cid:2) ˜ L ( s ) − ps (cid:3) ≥ (2 t ) / V t (cid:17) P − t ,t ,(cid:126)x (cid:48)(cid:48) ,(cid:126)y (cid:48)(cid:48) Ber (cid:16) ˜ L ≥ · · · ≥ ˜ L k − ≥ (cid:96) bot (cid:17) . In the numerator in the second line, we used the fact that the curves ˜ L , . . . , ˜ L k − are independentunder P − t ,t ,x (cid:48)(cid:48) ,y (cid:48)(cid:48) Ber , and the event in the parentheses depends only on ˜ L . By Lemma 3.10, since min( x (cid:48)(cid:48) + pt , y (cid:48)(cid:48) − pt ) ≤ ( M + C ( k − t ) / , we can choose V t > V b as well as ˜ N largeenough so that the numerator is bounded above by h / for N ≥ ˜ N . 
Since | y (cid:48)(cid:48) i − x (cid:48)(cid:48) i − pt | ≤ ,our choice of C and Lemma 3.14 give a ˜ N so that the denominator is at least / for N ≥ ˜ N .This gives an upper bound of / · h / < h in the above as long as N ≥ max( ˜ N , ˜ N ) , whichconcludes the proof of (6.10). (cid:3) We are now equipped to prove Lemma 6.2. Let us put for convenience(6.19) t = (cid:22) t + t (cid:23) . Proof. (of Lemma 6.2) We first introduce some notation to be used in the proof. Let S be as in Defini-tion 6.1. For (cid:126)c, (cid:126)d ∈ W k − , let us write ˜ S = (cid:74) − t , − t (cid:75) ∪ (cid:74) t , t (cid:75) , ˜Ω( (cid:126)c, (cid:126)d ) = Ω avoid ( − t , t , (cid:126)c, (cid:126)d, ∞ , (cid:96) bot ; ˜ S ) .For s ∈ ˜ S we define events A ( (cid:126)c, (cid:126)d, s ) = (cid:110) ˜ Q ∈ ˜Ω( (cid:126)c, (cid:126)d ) : ˜ Q k − ( ± s ) ∓ ps ≥ ( M + 1)(2 t ) / (cid:111) ,B ( (cid:126)c, (cid:126)d, V top , s ) = (cid:110) ˜ Q ∈ ˜Ω( (cid:126)c, (cid:126)d ) : ˜ Q ( ± s ) ∓ ps ≤ V top (2 t ) / (cid:111) ,C ( (cid:126)c, (cid:126)d, (cid:15), s ) = (cid:26) ˜ Q ∈ ˜Ω( (cid:126)c, (cid:126)d ) : min ≤ i ≤ k − , ς ∈{− , } (cid:2) ˜ Q i ( ςs ) − ˜ Q i +1 ( ςs ) (cid:3) ≥ (cid:15) (2 t ) / (cid:27) ,D ( (cid:126)c, (cid:126)d, V top , (cid:15), s ) = A ( (cid:126)c, (cid:126)d, s ) ∩ B ( (cid:126)c, (cid:126)d, V top , s ) ∩ C ( (cid:126)c, (cid:126)d, (cid:15), s ) . (6.20)Here, (cid:15) and V top are constants which we will specify later. By Lemma 6.3, for all ( (cid:126)c, (cid:126)d ) and N sufficiently large we have(6.21) D ( (cid:126)c, (cid:126)d, V top , (cid:15), s ) ⊂ { Z ( − t , t , Q ( − t ) , Q ( t ) , (cid:96) bot (cid:74) − t , t (cid:75) ) > g } for some g depending on (cid:15), V top , M . The above gives all the notation we require.We now turn to the proof of the lemma, which split is into several steps. Step 1.
In this step, we show that there exist
R > and ¯ N sufficiently large so that if c k − + pt ≥ (2 t ) / ( M + R ) and d k − − pt ≥ (2 t ) / ( M + R ) , then for all s ∈ ˜ S and N ≥ ¯ N we have P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, s ) (cid:1) ≥ P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:0) Q k − | ˜ S ≥ (cid:96) bot | ˜ S (cid:1) ≥ . (6.22)Let us begin with the first inequality. We observe via Lemma 3.2 that(6.23) P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, s ) (cid:1) ≥ P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, s ) (cid:1) . Now define the constant(6.24) C = (cid:115) p (1 − p ) log 31 − (199 / / ( k − and vectors (cid:126)c (cid:48) , (cid:126)d (cid:48) ∈ W k by c (cid:48) i = (cid:98)− pt + ( M + R )(2 t ) / (cid:99) − ( i − (cid:100) C (2 t ) / (cid:101) ,d (cid:48) i = (cid:98) pt + ( M + R )(2 t ) / (cid:99) − ( i − (cid:100) C (2 t ) / (cid:101) . Then by Lemma 3.1 we have P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, s ) (cid:1) ≥ P − t ,t ,(cid:126)c (cid:48) ,(cid:126)d (cid:48) avoid,Ber ; ˜ S ( A ( (cid:126)c (cid:48) , (cid:126)d (cid:48) , s )) ≥ P − t ,t ,c (cid:48) k − ,d (cid:48) k − Ber (cid:18) inf s ∈ ˜ S (cid:2) (cid:96) ( s ) − ps (cid:3) ≥ ( M + 1)(2 t ) / (cid:19) − (cid:16) − P − t ,t ,(cid:126)c (cid:48) ,(cid:126)d (cid:48) Ber ( L ≥ · · · ≥ L k − ) (cid:17) . (6.25)By Lemma 3.14 and our choice of C , we can find ˜ N so that P − t ,t ,(cid:126)c (cid:48) ,(cid:126)d (cid:48) Ber ( L ≥ · · · ≥ L k − ) > / > / for N ≥ ˜ N . Writing z = d (cid:48) k − − c (cid:48) k − , the term in the second line of (6.25) isequal to P − t ,t , ,zBer (cid:16) inf s ∈ ˜ S (cid:2) (cid:96) ( s ) + c (cid:48) k − − ps (cid:3) ≥ ( M + 1)(2 t ) / (cid:17) ≥ P , t , ,zBer (cid:16) inf s ∈ [0 , t ] (cid:2) (cid:96) ( s ) − ps (cid:3) ≥ ( − R + Ck + 1)(2 t ) / (cid:17) . 
In the second line, we used the estimate c (cid:48) k − ≥ − pt + ( M + R − Ck )(2 t ) / . Now by Lemma3.10, we can choose R large enough depending on C, k, M , p so that this probability is greater than / for N greater than some ˜ N . This gives a lower bound in (6.25) of / − /
40 = 19/40 for N ≥ max( ˜ N , ˜ N ) , and in combination with (6.23) this proves the first inequality in (6.22). We prove the second inequality in (6.22) similarly. Note that since (cid:96) bot ( s ) ≤ ps + M (2 t ) / on [ − t , t ] by assumption, we have P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:16) ˜ Q k − | ˜ S ≥ (cid:96) bot | ˜ S (cid:17) ≥ P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:18) inf s ∈ [ − t ,t ] (cid:2) Q k − ( s ) − ps (cid:3) ≥ M (2 t ) / (cid:19) ≥ P − t ,t ,(cid:126)c (cid:48) ,(cid:126)d (cid:48) avoid,Ber ; ˜ S (cid:18) inf s ∈ [ − t ,t ] (cid:2) ˜ Q k − ( s ) − ps (cid:3) ≥ M (2 t ) / (cid:19) ≥ P , t , ,zBer (cid:18) inf s ∈ [0 , t ] (cid:2) (cid:96) ( s ) − ps (cid:3) ≥ − ( R − Ck )(2 t ) / (cid:19) − (cid:16) − P − t ,t ,(cid:126)c (cid:48) ,(cid:126)d (cid:48) Ber ( ˜ L ≥ · · · ≥ ˜ L k − ) (cid:17) . (6.26) We enlarge R if necessary so that the probability in the third line of (6.26) is > 1/2 for N ≥ ˜ N by Lemma 3.10, and Lemma 3.14 implies as above that the second expression in the last line of (6.26) is > 1 − 1/200 for N ≥ ˜ N . This gives us a lower bound of 1/2 − 1/
200 = 99/200 for N ≥ ˜ N = max( ˜ N , ˜ N ) as desired. This proves the two inequalities in (6.22) for N ≥ ¯ N = max( ˜ N , ˜ N , ˜ N , ˜ N ) . Step 2.
In this step we fix R sufficiently large so that R > C from (6.24) and the inequalities in(6.22) both hold for this choice of R . Our work from Step 1 ensures that such a choice for R ispossible. Let V t , V b , and h be as in Lemma 6.4 for this choice of R . Define the set E = (cid:8) (cid:126)c, (cid:126)d ∈ W k − : (2 t ) / V t ≥ max( c + pt d − pt ) and min( c k − + pt , d k − − pt ) ≥ (2 t ) / V b (cid:9) . (6.27)We show in this step that there exists V top ≥ M + 6( k − and ¯ N such that for all ( (cid:126)c, (cid:126)d ) ∈ E , s ∈ ˜ S , and N ≥ ¯ N we have(6.28) P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) B ( (cid:126)c, (cid:126)d, V top , s ) (cid:1) ≥ . IGHTNESS OF BERNOULLI GIBBSIAN LINE ENSEMBLES 53
Let C be as in (6.24), and define (cid:126)c (cid:48)(cid:48) , (cid:126)d (cid:48)(cid:48) ∈ W k − by c (cid:48)(cid:48) i = (cid:100)− pt + (2 t ) / V t (cid:101) + ( k − − i ) (cid:100) C (2 t ) / (cid:101) ,d (cid:48)(cid:48) i = (cid:100) pt + (2 t ) / V t (cid:101) + ( k − − i ) (cid:100) C (2 t ) / (cid:101) . Then c (cid:48)(cid:48) i ≥ c ≥ c i and c (cid:48)(cid:48) i − c (cid:48)(cid:48) i +1 ≥ C (2 t ) / for each i , and likewise for d (cid:48)(cid:48) i . By Lemma 3.1, the lefthand side of (6.28) is bounded below by P − t ,t ,(cid:126)c (cid:48)(cid:48) ,(cid:126)d (cid:48)(cid:48) , ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:32) sup s ∈ ˜ S (cid:2) ˜ Q ( s ) − ps (cid:3) ≤ V top (2 t ) / (cid:33) ≥ P , t , ,z (cid:48) Ber (cid:32) sup s ∈ [ − t ,t ] (cid:2) (cid:96) ( s ) − ps (cid:3) ≤ ( V top − V t − Ck )(2 t ) / (cid:33) − (cid:16) − P − t ,t ,(cid:126)c (cid:48)(cid:48) ,(cid:126)d (cid:48)(cid:48) , ∞ ,(cid:96) bot Ber ( L ≥ · · · ≥ L k − ≥ (cid:96) bot ) (cid:17) . (6.29)In the last line, we have written z (cid:48) = d (cid:48)(cid:48) − c (cid:48)(cid:48) , and we used the fact that c (cid:48)(cid:48) ≤ − pt +( V t + Ck )(2 t ) / .By Lemma 3.10, we can find V top large enough depending on V t , C, k, p so that the probability inthe third line of (6.29) is at least / for N ≥ ˜ N . On the other hand, the above observationsregarding (cid:126)c (cid:48)(cid:48) , (cid:126)d (cid:48)(cid:48) , and (cid:96) bot , as well as the fact that | d (cid:48)(cid:48) − c (cid:48)(cid:48) − pt | ≤ , allow us to conclude fromLemma 3.14 that the probability in the last line of (6.29) is at least / for N ≥ ˜ N . In apply-ing Lemma 3.14 we used the fact that V b ≥ M + R , which implies that (cid:96) bot lies a distance of atleast R (2 t ) / (and hence C (2 t ) / as R > C by construction) uniformly below the line segmentconnecting c (cid:48)(cid:48) k − and d (cid:48)(cid:48) k − . We thus obtain a lower bound of / − /
40 = 19/40 in (6.29) for ¯ N = max( ˜ N , ˜ N ) , which proves (6.28) as desired. Step 3.
In this step, we show that with E , V t , and V b as in Step 2, there exist (cid:15) > sufficientlysmall and ¯ N such that for all ( (cid:126)c, (cid:126)d ) ∈ E and N ≥ ¯ N , we have(6.30) P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) (cid:1) ≥ . We claim that this follows if we find ˜ N so that for N ≥ ˜ N ,(6.31) P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:0) C ( (cid:126)c, (cid:126)d, (cid:15), t ) | A ( (cid:126)c, (cid:126)d, t ) ∩ B ( (cid:126)c, (cid:126)d, V top , t ) (cid:1) ≥ . To see this, note that (6.22) and (6.28) imply that for N ≥ max( ¯ N , ¯ N ) , P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, t ) ∩ B ( (cid:126)c, (cid:126)d, V top , t ) (cid:1) ≥ (cid:18) − (cid:19) · > , and then (6.31) and the second inequality in (6.22) imply that for N ≥ ¯ N = max( ¯ N , ¯ N , ˜ N ) , P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) A ( (cid:126)c, (cid:126)d, t ) ∩ B ( (cid:126)c, (cid:126)d, V top , t ) ∩ C ( (cid:126)c, (cid:126)d, (cid:15), t ) (cid:1) > · − > , which gives (6.30) once we recall the definition of D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) .In the remainder of this step, we verify (6.31). Observe that A ( (cid:126)c, (cid:126)d, t ) ∩ B ( (cid:126)c, (cid:126)d, V top , t ) can bewritten as a countable disjoint union:(6.32) A ( (cid:126)c, (cid:126)d, t ) ∩ B ( (cid:126)c, (cid:126)d, V top , t ) = (cid:71) ( (cid:126)a,(cid:126)b ) ∈ I F ( (cid:126)a,(cid:126)b ) . Here, for (cid:126)a,(cid:126)b ∈ W k − , F ( (cid:126)a,(cid:126)b ) is the event that Q ( − t ) = (cid:126)a and Q ( t ) = (cid:126)b , and I is the collectionof pairs ( (cid:126)a,(cid:126)b ) satisfying (1) ≤ min( a i − c i , d i − b i ) ≤ t − t and ≤ b i − a i ≤ t for ≤ i ≤ k − ,(2) min( a k − + pt , b k − − pt ) ≥ ( M + 1)(2 t ) / ,(3) max( a + pt , b − pt ) ≤ V top (2 t ) / .Now let Q = ( Q , . . . 
, Q k − ) and Q = ( Q , . . . , Q k − ) denote the restrictions of ˜ Q to (cid:74) − t , − t (cid:75) and (cid:74) t , t (cid:75) respectively. Then we observe that P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:16) Q = B , Q = B (cid:12)(cid:12) F ( (cid:126)a,(cid:126)b ) (cid:17) = P − t , − t ,(cid:126)c,(cid:126)aavoid,Ber (cid:0) Q = B (cid:1) · P t ,t ,(cid:126)b,(cid:126)davoid,Ber (cid:0) Q = B (cid:1) . (6.33)We also let ˜ I = { ( (cid:126)a,(cid:126)b ) ∈ I : P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S ( F ( (cid:126)a,(cid:126)b )) > } , and we choose ˜ N so that ˜ I is nonempty for N ≥ ˜ N using Lemma 2.16. We now fix ( (cid:126)a,(cid:126)b ) and argue that we can choose (cid:15) > small enoughand ˜ N so that for N ≥ ˜ N ,(6.34) P − t ,t ,(cid:126)c,(cid:126)davoid,Ber ; ˜ S (cid:16) C ( (cid:126)c, (cid:126)d, (cid:15), t ) (cid:12)(cid:12) F ( (cid:126)a,(cid:126)b ) (cid:17) ≥ . Then using (6.34) and (6.32) and summing over ˜ I proves (6.31) for N ≥ ˜ N = max( ˜ N , ˜ N ) .To prove (6.34), we first show that we can find δ > and ˜ N so that(6.35) P − t , − t ,(cid:126)c,(cid:126)aavoid,Ber (cid:18) max ≤ i ≤ k − (cid:2) Q i ( − t ) − Q i +1 ( − t ) (cid:3) ≥ δ (2 t ) / (cid:19) ≥ √ for N ≥ ˜ N . We prove this inequality using Lemma 3.18. In order to apply this result, we firstobserve that since | − t + ( t + t ) | ≤ by (6.19), we have(6.36) ≤ Q i ( − t ) − Q i ( − ( t + t )) ≤ . Now applying Lemma 3.18 with M = V t , M = V top , we obtain ˜ N and δ > such that if N ≥ ˜ N ,then P − t , − t ,(cid:126)c,(cid:126)aavoid,Ber (cid:18) min ≤ i ≤ k − (cid:2) Q i ( − ( t + t )) − Q i +1 ( − ( t + t )) (cid:3) < δ ( t − t ) / (cid:19) < − √ . Together with (6.36) and the fact that t / < t − t , this implies that(6.37) P − t , − t ,(cid:126)c,(cid:126)aavoid,Ber (cid:18) min ≤ i ≤ k − (cid:2) Q i ( − t ) − Q i +1 ( − t ) (cid:3) < ( δ/ t ) / − (cid:19) < − √ for N ≥ ˜ N . 
Now we observe that as long as ˜ N α ≥ /δ r +3 , then ( δ/ t ) / ≤ ( δ/ t ) / − for N ≥ ˜ N . This implies (6.35). A similar argument gives us a ˜ δ > such that P − t , − t ,(cid:126)c,(cid:126)aavoid,Ber (cid:18) min ≤ i ≤ k − (cid:2) Q i ( − t ) − Q i +1 ( − t ) (cid:3) < (˜ δ/ t ) / (cid:19) < − √ for N ≥ ˜ N . Then putting (cid:15) = min( δ, ˜ δ ) / and using (6.33), we obtain (6.34) for N ≥ ˜ N . Step 4.
In this step, we find ¯ N so that(6.38) P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:0) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) (cid:1) ≥ (cid:32) − ∞ (cid:88) n =1 ( − n − e − (cid:15) n / p (1 − p ) (cid:33) k − for N ≥ ¯ N . We will find ˜ N so that for N ≥ ˜ N , P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:16) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) (cid:12)(cid:12) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) (cid:17) ≥ (cid:32) − ∞ (cid:88) n =1 ( − n − e − (cid:15) n / p (1 − p ) (cid:33) k − . (6.39) IGHTNESS OF BERNOULLI GIBBSIAN LINE ENSEMBLES 55
Then (6.30) implies (6.38) for N ≥ ¯ N = max( ¯ N , ˜ N ) .To prove (6.39) we first observe that we can write(6.40) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) = (cid:71) ( (cid:126)a,(cid:126)b ) ∈ J G ( (cid:126)a,(cid:126)b ) . Here, for (cid:126)a,(cid:126)b ∈ W k − , G ( (cid:126)a,(cid:126)b ) is the event that Q ( − t ) = (cid:126)a and Q ( t ) = (cid:126)b , and J is the collectionof ( (cid:126)a,(cid:126)b ) satisfying(1) ≤ min( a i − c i , d i − b i ) ≤ t − t and ≤ b i − a i ≤ t for ≤ i ≤ k − ,(2) min( a k − + pt , b k − − pt ) ≥ ( M + 1)(2 t ) / ,(3) max( a + pt , b − pt ) ≤ V top (2 t ) / ,(4) min( a i − a i +1 , b i − b i +1 ) ≥ (cid:15) (2 t ) / for ≤ i ≤ k − .We let ˜ J = { ( (cid:126)a,(cid:126)b ) ∈ J : P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S ( G ( (cid:126)a,(cid:126)b )) > } , and we take ˜ N large enough by Lemma 2.16so that ˜ J (cid:54) = ∅ . We also let ˜ D ( V top , (cid:15), t ) denote the set consisting of elements of D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) restricted to (cid:74) − t , t (cid:75) . Then for ( (cid:126)a,(cid:126)b ) ∈ ˜ J we have P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:16) D ( (cid:126)c, (cid:126)d, V top , (cid:15), t ) (cid:12)(cid:12) G ( (cid:126)a,(cid:126)b ) (cid:17) = P − t ,t ,(cid:126)a,(cid:126)b, ∞ ,(cid:96) bot avoid,Ber ; ˜ S (cid:16) ˜ D ( V top , (cid:15), t ) (cid:17) ≥ P − t ,t ,(cid:126)a,(cid:126)bBer (cid:16) ˜ D ( V top , (cid:15), t ) ∩ { L ≥ · · · ≥ L k − ≥ (cid:96) bot } (cid:17) . (6.41)We observe that the event in the second line of (6.41) occurs as long as each curve L i remainswithin a distance of (cid:15) (2 t ) / from the straight line segment connecting a i and b i on [ − t , t ] ,for ≤ i ≤ k − . 
By the argument in the proof of Lemma 3.14, we can enlarge ˜ N so that theprobability of this event is bounded below by the expression on the right in (6.39) for N ≥ ˜ N .Then using (6.41) and (6.40) and summing over ˜ J implies (6.39). Step 5.
In this last step, we complete the proof of the lemma, fixing the constants g and h as wellas N . Let g = g ( (cid:15), V top , M ) be as in Lemma 6.3 for the choices of (cid:15), V top in Steps 2 and 3, let h = h (cid:32) − ∞ (cid:88) n =1 ( − n − e − (cid:15) n / p (1 − p ) (cid:33) k − with h as in Step 2, and let N = max( ¯ N , ¯ N , ¯ N , ¯ N , N ) , with N as in Lemma 6.4. In thefollowing we assume that N ≥ N . By (6.38) we have that if ( (cid:126)c, (cid:126)d ) ∈ E and N ≥ N , then P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S ( H ) ≥ hh , where H is the event that(1) V top (2 t ) / ≥ ˜ Q ( − t ) + pt ≥ ˜ Q k − ( − t ) + pt ≥ ( M + 1)(2 t ) / ,(2) V top (2 t ) / ≥ ˜ Q ( t ) − pt ≥ ˜ Q k − ( t ) − pt ≥ ( M + 1)(2 t ) / ,(3) ˜ Q i ( − t ) − ˜ Q i +1 ( − t ) ≥ (cid:15) (2 t ) / and ˜ Q i ( t ) − ˜ Q i +1 ( t ) ≥ (cid:15) (2 t ) / for i = 1 , . . . , k − .Let Y denote the event appearing in (6.7). Then we can write Y = (cid:70) ( (cid:126)c,(cid:126)d ) ∈ E Y ( (cid:126)c, (cid:126)d ) , where Y ( (cid:126)c, (cid:126)d ) isthe event that ˜ Q ( − t ) = (cid:126)c , ˜ Q ( t ) = (cid:126)d , and E is defined in Step 2. If ˜ E = { ( (cid:126)c, (cid:126)d ) ∈ E : P ˜ Q ( Y ( (cid:126)c, (cid:126)d )) > } , we can assume by Lemma 2.16 that N is large enough so that ˜ E (cid:54) = ∅ . It follows from Lemma P ˜ Q ( Y ) ≥ h . We conclude from the definition of P ˜ Q that for all N ≥ N , P ˜ Q ( H ) ≥ P ˜ Q ( H ∩ Y ) = (cid:88) ( (cid:126)c,(cid:126)d ) ∈ ˜ E P ˜ Q ( Y ( (cid:126)c, (cid:126)d )) · P ˜ Q ( H | Y ( (cid:126)c, (cid:126)d )) = (cid:88) ( (cid:126)c,(cid:126)d ) ∈ ˜ E P ˜ Q ( Y ( (cid:126)c, (cid:126)d )) · P − t ,t ,(cid:126)c,(cid:126)d, ∞ ,(cid:96) bot avoid,Ber ; ˜ S ( H ) ≥ hh (cid:88) ( (cid:126)c,(cid:126)d ) ∈ ˜ E P ˜ Q ( Y ( (cid:126)c, (cid:126)d )) = hh P ˜ Q ( Y ) ≥ h. Now Lemma 6.3 implies (6.1), completing the proof. (cid:3) Appendix A
In this section we prove Lemmas 2.2, 2.4, 3.1 and 3.2.
7.1. Proof of Lemma 2.2.
We adopt the same notation as in the statement of Lemma 2.2 and proceed with its proof. Observe that the sets K 1 ⊂ K 2 ⊂ · · · ⊂ Σ × Λ are compact, they cover Σ × Λ , and any compact subset K of Σ × Λ is contained in all K n for sufficiently large n . To see this last fact, let π 1 , π 2 denote the canonical projection maps of Σ × Λ onto Σ and Λ respectively. Since these maps are continuous, π 1 ( K ) and π 2 ( K ) are compact in Σ and Λ . This implies that π 1 ( K ) is finite, so it is contained in Σ n 1 = Σ ∩ ⟦ − n 1 , n 1 ⟧ for some n 1 . On the other hand, π 2 ( K ) is closed and bounded in R , thus contained in some closed interval [ α, β ] ⊆ Λ . Since a n ↘ a and b n ↗ b , we can choose n 2 large enough so that π 2 ( K ) ⊆ [ α, β ] ⊆ [ a n 2 , b n 2 ] . Then taking n = max( n 1 , n 2 ) , we have K ⊆ π 1 ( K ) × π 2 ( K ) ⊆ Σ n × [ a n , b n ] = K n . We now split the proof into several steps. Step 1.
In this step, we show that the function d defined in the statement of the lemma is a metric. For each n and f, g ∈ C(Σ × Λ), we define d_n(f, g) = sup_{(i,t) ∈ K_n} |f(i, t) − g(i, t)| and d′_n(f, g) = min{d_n(f, g), 1}. Then we have d(f, g) = ∑_{n=1}^{∞} 2^{−n} d′_n(f, g). Clearly each d_n is nonnegative and satisfies the triangle inequality, and it is then easy to see that the same properties hold for d′_n. Furthermore, d′_n ≤ 1, so d is well-defined and d(f, g) ∈ [0, 1]. Observe that d is nonnegative, and if f = g, then each d′_n(f, g) = 0, so the sum d(f, g) is 0. Conversely, if f ≠ g, then since the K_n cover Σ × Λ, we can choose n large enough so that K_n contains an x with f(x) ≠ g(x). Then d′_n(f, g) ≠ 0, and hence d(f, g) ≠ 0. Lastly, the triangle inequality holds for d since it holds for each d′_n. Step 2.
Now we prove that the topology τ d on C (Σ × Λ) induced by d is the same as the topologyof uniform convergence over compacts, which we denote by τ c . Recall that τ c is generated by thebasis consisting of sets B K ( f, (cid:15) ) = (cid:110) g ∈ C (Σ × Λ) : sup ( i,t ) ∈ K | f ( i, t ) − g ( i, t ) | < (cid:15) (cid:111) , for K ⊂ Σ × Λ compact, f ∈ C (Σ × Λ) , and (cid:15) > , and τ d is generated by sets of the form B d(cid:15) ( f ) = { g : d ( f, g ) < (cid:15) } .We first show that τ d ⊆ τ c . It suffices to prove that every set B d(cid:15) ( f ) is a union of sets B K ( f, (cid:15) ) .First, choose (cid:15) > and f ∈ C (Σ × Λ) . Let g ∈ B d(cid:15) ( f ) . We will find a basis element A g of τ c such IGHTNESS OF BERNOULLI GIBBSIAN LINE ENSEMBLES 57 that g ∈ A g ⊂ B d(cid:15) ( f ) . Let δ = d ( f, g ) < (cid:15) , and choose n large enough so that (cid:80) k>n − k < (cid:15) − δ .Define A g = B K n ( g, (cid:15) − δn ) , and suppose h ∈ A g . Then since K m ⊆ K n for m ≤ n , we have d ( f, h ) ≤ d ( f, g ) + d ( g, h ) ≤ δ + n (cid:88) k =1 − k d n ( g, h ) + (cid:88) k>n − k < δ + (cid:15) − δ (cid:15) − δ (cid:15). Therefore g ∈ A g ⊂ B d(cid:15) ( f ) . Then we can write B d(cid:15) ( f ) = (cid:91) g ∈ B d(cid:15) ( f ) A g , a union of basis elements of τ c .We now prove conversely that τ c ⊆ τ d . Let K ⊂ Σ × Λ be compact, f ∈ C (Σ × Λ) , and (cid:15) > . Choose n so that K ⊂ K n , and let g ∈ B K ( f, (cid:15) ) and δ = sup x ∈ K | f ( x ) − g ( x ) | < (cid:15) . If d ( g, h ) < − n ( (cid:15) − δ ) , then d (cid:48) n ( g, h ) ≤ n d ( g, h ) < (cid:15) − δ , hence d n ( g, h ) < (cid:15) − δ , assuming withoutloss of generality that (cid:15) ≤ . It follows that sup x ∈ K | f ( x ) − h ( x ) | ≤ δ + sup x ∈ K | g ( x ) − h ( x ) | ≤ δ + d n ( g, h ) < δ + (cid:15) − δ = (cid:15). Thus g ∈ B d − n ( (cid:15) − δ ) ( g ) ⊂ B K ( f, (cid:15) ) , proving that B K ( f, (cid:15) ) ∈ τ d by the same argument as above. Weconclude that τ d = τ c . Step 3.
In this step, we show that (C(Σ × Λ), d) is a complete metric space. Let {f_n}_{n≥1} be Cauchy with respect to d. Then we claim that {f_n} must be Cauchy with respect to d′_n on each K_n. This follows from the observation that d′_n(f_ℓ, f_m) ≤ 2^n d(f_ℓ, f_m). Thus {f_n} is Cauchy with respect to the uniform metric on each K_n, and hence converges uniformly to a continuous limit f_{K_n} on each K_n (see [26, Theorem 7.15]). Since the pointwise limit must be unique at each x ∈ Σ × Λ, we have f_{K_n}(x) = f_{K_m}(x) if x ∈ K_n ∩ K_m. Since ∪_n K_n = Σ × Λ, we obtain a well-defined function f on all of Σ × Λ given by f(x) = lim_{n→∞} f_{K_n}(x). We have f ∈ C(Σ × Λ) since f|_{K_n} = f_{K_n} is continuous on K_n for all n. Moreover, if K ⊂ Σ × Λ is compact and n is large enough so that K ⊂ K_n, then because f_n → f_{K_n} = f|_{K_n} uniformly on K_n, we have f_n → f_{K_n}|_K = f|_K uniformly on K. That is, for any K ⊂ Σ × Λ compact and ε > 0, we have f_n ∈ B_K(f, ε) for all sufficiently large n. Therefore f_n → f in τ_c, and equivalently in the metric d by Step 2. Step 4.
Lastly, we prove separability by adapting the arguments from [2, Example 1.3]. For eachpair of positive integers n, k , let D n,k be the subcollection of C (Σ × Λ) consisting of polygonalfunctions that are piecewise linear on { j } × I n,k,i for each j ∈ Σ n and each subinterval I n,k,i = (cid:2) a n + i − k ( b n − a n ) , a n + ik ( b n − a n ) (cid:3) , ≤ i ≤ k, taking rational values at the endpoints of these subintervals, and extended constantly to all of Λ .Then D = ∪ n,k D n,k is countable, and we claim that it is dense in τ c . To see this, let K ⊂ Σ × Λ becompact, f ∈ C (Σ × Λ) , and (cid:15) > , and choose n so that K ⊂ K n . Since f is uniformly continuouson K n , we can choose k large enough so that for ≤ i ≤ k , if t ∈ I n,k,i , then (cid:12)(cid:12) f ( j, t ) − f ( j, a n + ik ( b n − a n )) (cid:12)(cid:12) < (cid:15)/ for all j ∈ Σ n . Using that Q is dense in R we can choose g ∈ ∪ k D n,k with | g ( j, a n + ik ( b n − a n )) − f ( j, a n + ik ( b n − a n )) | < (cid:15)/ . Then we have (cid:12)(cid:12) f ( j, t ) − g ( j, a n + i − k ( b n − a n )) (cid:12)(cid:12) < (cid:15) and (cid:12)(cid:12) f ( j, t ) − g ( j, a n + ik ( b n − a n )) (cid:12)(cid:12) < (cid:15). Since g ( j, t ) is a convex combination of g ( j, a n + i − k ( b n − a n )) and g ( j, a n + ik ( b n − a n )) , we get | f ( j, t ) − g ( j, t ) | < (cid:15) as well. In summary, sup ( j,t ) ∈ K | f ( j, t ) − g ( j, t ) | ≤ sup ( j,t ) ∈ K n | f ( j, t ) − g ( j, t ) | < (cid:15), so g ∈ B K ( f, (cid:15) ) . This proves that D is a countable dense subset of C (Σ × Λ) .7.2. Proof of Lemma 2.4.
We first prove two lemmas that will be used in the proof of Lemma 2.4. The first result allows us to identify the space C(Σ × Λ) with a product of copies of C(Λ). In the following, we assume the notation of Lemma 2.4.

Lemma 7.1.
Let π_i : C(Σ × Λ) → C(Λ), i ∈ Σ, be the projection maps given by π_i(F)(x) = F(i, x) for x ∈ Λ. Then the π_i are continuous. Endow the space ∏_{i∈Σ} C(Λ) with the product topology induced by the topology of uniform convergence over compacts on C(Λ). Then the mapping

F : C(Σ × Λ) → ∏_{i∈Σ} C(Λ), f ↦ (π_i(f))_{i∈Σ}

is a homeomorphism.

Proof. We first prove that the π_i are continuous. We know C(Σ × Λ) is metrizable by Lemma 2.2, and by a similar argument so is C(Λ) (take Σ = {1} in Lemma 2.2). Consequently, it suffices to assume that f_n → f in C(Σ × Λ) and show that π_i(f_n) → π_i(f) in C(Λ). Let K be compact in Λ. Then {i} × K is compact in Σ × Λ, and f_n → f uniformly on {i} × K by assumption, so we have π_i(f_n)|_K = f_n|_{{i}×K} → f|_{{i}×K} = π_i(f)|_K uniformly on K. Since K was arbitrary, we conclude that π_i(f_n) → π_i(f) in C(Λ), as desired.

We now observe that F is invertible. If (f_i)_{i∈Σ} ∈ ∏_{i∈Σ} C(Λ), then the function f defined by f(i, ·) = f_i(·) is in C(Σ × Λ), since Σ has the discrete topology. This gives a well-defined inverse for F. It suffices to prove that F and F^{−1} are open maps.

We first show that F sends each basis element B_K(f, ε) of C(Σ × Λ) to a basis element in ∏_{i∈Σ} C(Λ). Note that a basis for the product topology is given by products ∏_{i∈Σ} B_{K_i}(f_i, ε), where at most finitely many of the K_i are nonempty. Here, we use the convention that B_∅(f_i, ε) = C(Λ). Let π_Σ, π_Λ denote the canonical projections of Σ × Λ onto Σ, Λ. The continuity of π_Σ implies that if K ⊂ Σ × Λ is compact, then π_Σ(K) is compact in Σ, hence finite. Observe that the set K ∩ ({i} × Λ) is an intersection of a compact set with a closed set and is hence compact in Σ × Λ. Therefore the sets K_i = π_Λ(K ∩ ({i} × Λ)) are compact in Λ for each i ∈ Σ, since π_Λ is continuous.
We observe that F(B_K(f, ε)) = ∏_{i∈Σ} U_i, where U_i = B_{K_i}(π_i(f), ε) if i ∈ π_Σ(K), and U_i = C(Λ) otherwise. Since π_Σ(K) is finite and the K_i are compact, we see that F(B_K(f, ε)) is a basis element in the product topology, as claimed.

Lastly, we show that F^{−1} sends each basis element U = ∏_{i∈Σ} B_{K_i}(f_i, ε) of the product topology to a set of the form B_K(f, ε). We have K_i = ∅ for all but finitely many i. Write f = F^{−1}((f_i)_{i∈Σ}) and K = ∪_{i∈Σ}({i} × K_i). Notice that K is compact in Σ × Λ as a finite union of compact sets (each {i} × K_i is compact by Tychonoff's theorem, [22, Theorem 37.3]). Moreover, one has F^{−1}(U) = B_K(f, ε), which proves that F^{−1} is also an open map. □

We next prove a lemma which states that a sequence of line ensembles is tight if and only if all individual curves form tight sequences.
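First, though, we note that the identification in Lemma 7.1 is concrete enough to sketch in code: a function on Σ × Λ is just a family of one-variable functions indexed by Σ. The following minimal sketch (the dict representation and the names `split`/`merge` are illustrative choices, not from the paper) checks that the two directions of the correspondence are mutually inverse on a finite grid:

```python
# Represent f : Sigma x Lambda -> R as a single two-argument function, and a
# point of the product space as a dict {i: one-variable function}.

def split(f, sigma):
    """F: send f to the family of slices (pi_i f)(x) = f(i, x)."""
    return {i: (lambda x, i=i: f(i, x)) for i in sigma}

def merge(slices):
    """F^{-1}: reassemble a family (f_i) into a single f(i, x) = f_i(x)."""
    return lambda i, x: slices[i](x)

sigma = range(1, 4)                      # Sigma = {1, 2, 3}
f = lambda i, x: i * x + i ** 2          # a continuous function on Sigma x Lambda
g = merge(split(f, sigma))               # round trip F^{-1} o F
grid = [k / 10 for k in range(-20, 21)]  # finite sample of Lambda
assert all(f(i, x) == g(i, x) for i in sigma for x in grid)
```

Continuity of each slice is exactly continuity of f in the second variable, which is why the discrete topology on Σ makes F a bijection; the topological content of Lemma 7.1 is that F also matches the two compact-open topologies.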
Lemma 7.2.
Suppose that {L^n}_{n≥1} is a sequence of Σ-indexed line ensembles on Λ, and let X^n_i = π_i(L^n). Then the X^n_i are C(Λ)-valued random variables on (Ω, F, P), and {L^n} is tight if and only if for each i ∈ Σ the sequence {X^n_i}_{n≥1} is tight.
Proof.
The fact that the X^n_i are random variables follows from the continuity of the π_i in Lemma 7.1 and [15, Theorem 1.3.4]. First suppose the sequence {L^n} is tight. By Lemma 2.2, C(Σ × Λ) is a Polish space, so it follows from Prohorov's theorem, [2, Theorem 5.1], that {L^n} is relatively compact. That is, every subsequence {L^{n_k}} has a further subsequence {L^{n_{k_ℓ}}} converging weakly to some L. Then for each i ∈ Σ, since π_i is continuous by the above, the subsequence {π_i(L^{n_{k_ℓ}})} of {π_i(L^{n_k})} converges weakly to π_i(L) by the continuous mapping theorem, [2, Theorem 2.7]. Thus every subsequence of {π_i(L^n)} has a convergent subsequence. Since C(Λ) is a Polish space (apply Lemma 2.2 with Σ = {1}), Prohorov's theorem, [2, Theorem 5.2], implies {π_i(L^n)} is tight.

Conversely, suppose {X^n_i} is tight for all i ∈ Σ. Then given ε > 0, we can find compact sets K_i ⊂ C(Λ) such that P(X^n_i ∉ K_i) ≤ ε/2^i for each i ∈ Σ. By Tychonoff's theorem, [22, Theorem 37.3], the product K̃ = ∏_{i∈Σ} K_i is compact in ∏_{i∈Σ} C(Λ). We have

(7.1) P((X^n_i)_{i∈Σ} ∉ K̃) ≤ ∑_{i∈Σ} P(X^n_i ∉ K_i) ≤ ∑_{i=1}^∞ ε/2^i = ε.

By Lemma 7.1, we have a homeomorphism G : ∏_{i∈Σ} C(Λ) → C(Σ × Λ). We observe that G((X^n_i)_{i∈Σ}) = L^n, and K = G(K̃) is compact in C(Σ × Λ). Thus L^n ∈ K if and only if (X^n_i)_{i∈Σ} ∈ K̃, and it follows from (7.1) that

P(L^n ∈ K) ≥ 1 − ε.

This proves that {L^n} is tight. □

We are now ready to prove Lemma 2.4.
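Before doing so, we note that the ε/2^i bookkeeping in (7.1) is the standard device for trading countably many tail bounds for a single one; the geometric series can be checked exactly with rational arithmetic (the value of `eps` and the truncation level are arbitrary choices for illustration):

```python
from fractions import Fraction

eps = Fraction(1, 100)
n = 50  # truncation level: tail bounds eps/2^i for i = 1, ..., n
total = sum(eps / 2 ** i for i in range(1, n + 1))
# the partial sums are eps * (1 - 2^{-n}), so the full series sums to exactly eps
assert total == eps * (1 - Fraction(1, 2 ** n))
assert total < eps
```

The point is that no matter how many coordinates the ensemble has, the probabilities of escaping the compacts K_i sum to at most ε, which is what makes the product compact K̃ useful.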
Proof. (of Lemma 2.4) Fix an i ∈ Σ. By Lemma 7.2, it suffices to show that the sequence {L^n_i}_{n≥1} of C(Λ)-valued random variables is tight. By [2, Theorem 7.3], a sequence {P_n}_{n≥1} of probability measures on C[0, 1] with the uniform topology is tight if and only if the following conditions hold:

lim_{a→∞} limsup_{n→∞} P_n(|x(0)| ≥ a) = 0, and
lim_{δ→0} limsup_{n→∞} P_n(sup_{|s−t|≤δ} |x(s) − x(t)| ≥ ε) = 0 for all ε > 0.

By replacing [0, 1] with [a_m, b_m] and 0 with a_m, we see that the hypotheses in the lemma imply that the sequence {L^n_i|_{[a_m,b_m]}}_{n≥1} is tight for every m ≥ 1. Let π_m : C(Λ) → C([a_m, b_m]) denote the map f ↦ f|_{[a_m,b_m]}. Then π_m is continuous, since C(Λ) and C([a_m, b_m]) with the topologies of uniform convergence over compacts are metrizable by Lemma 2.2, and if f_n → f uniformly on compact subsets of Λ, then f_n|_{[a_m,b_m]} → f|_{[a_m,b_m]} uniformly on compact subsets of [a_m, b_m]. It follows from [15, Theorem 1.3.4] that π_m(L^n_i) = L^n_i|_{[a_m,b_m]} is a C([a_m, b_m])-valued random variable. Tightness of the sequence implies that for any ε > 0, we can find compact sets K_m ⊂ C([a_m, b_m]) so that

P(π_m(L^n_i) ∉ K_m) ≤ ε/2^m for each m ≥ 1.

Writing K = ∩_{m=1}^∞ π_m^{−1}(K_m), it follows that

P(L^n_i ∈ K) ≥ 1 − ∑_{m=1}^∞ ε/2^m = 1 − ε.

To conclude tightness of {L^n_i}, it suffices to prove that K = ∩_{m=1}^∞ π_m^{−1}(K_m) is sequentially compact in C(Λ). We argue by diagonalization. Let {f_n} be a sequence in K, so that f_n|_{[a_m,b_m]} ∈ K_m for every m, n. Since K_1 is compact, there is a sequence {n_{1,k}} of natural numbers such that the subsequence {f_{n_{1,k}}|_{[a_1,b_1]}}_k converges in C([a_1, b_1]). Since K_2 is compact, we can take a further subsequence {n_{2,k}} of {n_{1,k}} so that {f_{n_{2,k}}|_{[a_2,b_2]}}_k converges in C([a_2, b_2]). Continuing in this manner, we obtain sequences {n_{1,k}} ⊇ {n_{2,k}} ⊇ · · · so that {f_{n_{m,k}}|_{[a_m,b_m]}}_k converges in C([a_m, b_m]) for all m. Writing n_k = n_{k,k}, it follows that the sequence {f_{n_k}} converges uniformly on each [a_m, b_m]. If K′ is any compact subset of Λ, then K′ ⊂ [a_m, b_m] for some m, and hence {f_{n_k}} converges uniformly on K′. Therefore {f_{n_k}} is a convergent subsequence of {f_n}. □

7.3. Proof of Lemma 2.16.
We adopt the same notation as in the statement of Lemma 2.16 and proceed with its proof. We first construct a candidate B and then we prove that B ∈ Ω_avoid(T_0, T_1, x⃗, y⃗, f, g). Denote B_0 = f and B_{k+1} = g, with x_0 = f(T_0) and y_0 = f(T_1). By Condition (3) of Lemma 2.16 we know x_0 ≥ x_1 and y_0 ≥ y_1. We define B_j inductively for j = 1, . . . , k as follows (recall that B_0 = f). Assuming that B_{j−1} has been constructed, we let B_j(T_0) = x_j, and then for i ∈ ⟦T_0, T_1 − 1⟧ we define

(7.2) B_j(i + 1) = B_j(i) + 1 if B_j(i) + 1 ≤ min{B_{j−1}(i + 1), y_j}, and B_j(i + 1) = B_j(i) otherwise.

This gives our candidate B = (B_1, . . . , B_k). In order to verify that this candidate ensemble B is an element of Ω_avoid(T_0, T_1, x⃗, y⃗, f, g), three properties must be ensured:

(7.3)
(a) B(T_0) = x⃗ and B(T_1) = y⃗;
(b) f(i) ≥ B_1(i) ≥ · · · ≥ B_k(i) ≥ g(i) for all i ∈ ⟦T_0, T_1⟧;
(c) B_j(i + 1) − B_j(i) ∈ {0, 1} for all i ∈ ⟦T_0, T_1 − 1⟧ and j ∈ ⟦1, k⟧.

Property (c) follows directly from our definition in (7.2). We split the proof of (a) and (b) above into three steps.

Step 1.
In this step we prove that for each j = 1, . . . , k we have B_{j−1}(i) ≥ B_j(i) for i ∈ ⟦T_0, T_1⟧. If j = 1 and f ≡ ∞ there is nothing to prove, so we may assume that either j ≥ 2, or j = 1 and f is an up-right path; the proofs in these cases are the same. Suppose that for some i ∈ ⟦T_0, T_1 − 1⟧ we have B_j(i) ≤ B_{j−1}(i); then we know by construction that B_j(i + 1) = B_j(i) or B_j(i) + 1. In the former case, we trivially get

B_j(i + 1) = B_j(i) ≤ B_{j−1}(i) ≤ B_{j−1}(i + 1),

where the last inequality used that B_{j−1} is an up-right path. If B_j(i + 1) = B_j(i) + 1, from (7.2) we see that B_j(i) + 1 ≤ B_{j−1}(i + 1), and so we again conclude that B_j(i + 1) ≤ B_{j−1}(i + 1). By assumption we know that B_j(T_0) = x_j ≤ x_{j−1} = B_{j−1}(T_0), and so by inducting on i from T_0 to T_1 we conclude that B_{j−1}(i) ≥ B_j(i) for i ∈ ⟦T_0, T_1⟧ and j = 1, . . . , k. To summarize, we have proved that for i ∈ ⟦T_0, T_1⟧

(7.4) f(i) ≥ B_1(i) ≥ · · · ≥ B_k(i).

Step 2.
In this step we prove (a). By construction we already know that B(T_0) = x⃗, and so we only need to prove that B(T_1) = y⃗. We will show this claim inductively on j: we trivially know the claim is true for j = 0, since y_0 = f(T_1) is given. Then suppose that B_j(T_1) = y_j holds up to j = n − 1. We seek to prove that B_n(T_1) = y_n. Notice that by construction we know that B_n(i) ≤ y_n for all i ∈ ⟦T_0, T_1⟧, and so we only need to show that B_n(T_1) ≥ y_n.

Suppose first that B_n(i + 1) = B_n(i) + 1 for all i ∈ ⟦T_0, T_1 − 1⟧. Then we know that B_n(T_1) = x_n + (T_1 − T_0) ≥ y_n by assumption (1) in Lemma 2.16, and so we are done. Conversely, there is an i ∈ ⟦T_0, T_1 − 1⟧ such that B_n(i + 1) = B_n(i), and we can take i_0 to be the largest index in ⟦T_0, T_1 − 1⟧ satisfying this condition. Observe that by (7.2) we must have that either B_n(i_0) ≥ y_n or B_n(i_0) ≥ B_{n−1}(i_0 + 1). In the former case, we see that since B_n is an up-right path we must have B_n(T_1) ≥ B_n(i_0) ≥ y_n, and again we are done. Thus we only need to consider the case when B_{n−1}(i_0 + 1) ≤ B_n(i_0). By the maximality of i_0 we know that B_n(i + 1) = B_n(i) + 1 for i = i_0 + 1, . . . , T_1 − 1, and so we see that

B_n(T_1) = B_n(i_0 + 1) + (T_1 − i_0 − 1) = B_n(i_0) + (T_1 − i_0 − 1) ≥ B_{n−1}(i_0 + 1) + (T_1 − i_0 − 1) ≥ B_{n−1}(T_1) = y_{n−1} ≥ y_n.

Overall, we conclude in all cases that B_n(T_1) ≥ y_n, which concludes the proof of (a).

Step 3.
In this step we prove (b); in view of (7.4), it suffices to show that B_k(i) ≥ g(i) for all i. If g ≡ −∞ there is nothing to prove, and so we may assume that g is an up-right path. Suppose that g(i) > B_k(i) for some i ∈ ⟦T_0, T_1⟧. Since g(T_0) ≤ B_k(T_0) = x_k by Condition (3) in Lemma 2.16, we know that there exists some point i_0 such that g(i_0) = B_k(i_0) and g(i_0 + 1) > B_k(i_0 + 1). In particular, since g and B_k can each only increase by 1, this implies B_k(i_0) = B_k(i_0 + 1) and g(i_0 + 1) = g(i_0) + 1. This implies either B_k(i_0) = y_k or B_k(i_0) + 1 > B_{k−1}(i_0 + 1). If B_k(i_0) = y_k, then by assumption (3) of Lemma 2.16 we conclude

y_k ≥ g(T_1) ≥ g(i_0 + 1) = g(i_0) + 1 = B_k(i_0) + 1 ≥ y_k + 1,

which is an obvious contradiction. Therefore, it must be the case that B_k(i_0) + 1 > B_{k−1}(i_0 + 1), and then we conclude that B_{k−1}(i_0 + 1) = B_{k−1}(i_0) = B_k(i_0) in view of (7.4). By the same argument we see that B_{k−1}(i_0 + 1) = B_{k−1}(i_0) can only occur if B_{k−2}(i_0 + 1) = B_{k−2}(i_0) = B_{k−1}(i_0), and iterating this k times we conclude that

B_1(i_0 + 1) = B_1(i_0) = B_2(i_0) = · · · = B_k(i_0) = g(i_0) = g(i_0 + 1) − 1.

But then g(i_0 + 1) > f(i_0 + 1), which contradicts condition (3) in Lemma 2.16. The contradiction arose from our assumption that g(i) > B_k(i) for some i ∈ ⟦T_0, T_1⟧, and so no such i exists, proving (b).

7.4. Proof of Lemmas 3.1 and 3.2.
We will prove the following lemma, of which the two lemmas are immediate consequences. In particular, Lemma 3.1 is the special case when g_b = g_t, and Lemma 3.2 is the case when x⃗ = x⃗′ and y⃗ = y⃗′. We argue in analogy to [12, Lemma 5.6].

Lemma 7.3. Fix k ∈ N, T_0, T_1 ∈ Z with T_0 < T_1, S ⊆ ⟦T_0, T_1⟧, and two functions g_b, g_t : ⟦T_0, T_1⟧ → [−∞, ∞) with g_b ≤ g_t on S. Also fix x⃗, y⃗, x⃗′, y⃗′ ∈ W_k such that x_i ≤ x′_i and y_i ≤ y′_i for 1 ≤ i ≤ k. Assume that Ω_avoid(T_0, T_1, x⃗, y⃗, ∞, g_b; S) and Ω_avoid(T_0, T_1, x⃗′, y⃗′, ∞, g_t; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two ⟦1, k⟧-indexed Bernoulli line ensembles L^t and L^b on ⟦T_0, T_1⟧, such that the law of L^t (resp. L^b) under P is given by P^{T_0,T_1,x⃗′,y⃗′,∞,g_t}_{avoid,Ber;S} (resp. P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S}), and such that P-almost surely we have L^t_i(r) ≥ L^b_i(r) for all i = 1, . . . , k and r ∈ ⟦T_0, T_1⟧.

Proof. Throughout the proof, we will write Ω_{a,S} to mean Ω_avoid(T_0, T_1, x⃗, y⃗, ∞, g_b; S) and Ω′_{a,S} to mean Ω_avoid(T_0, T_1, x⃗′, y⃗′, ∞, g_t; S). We split the proof into two steps.

Step 1.
We first aim to construct a Markov chain (X^n, Y^n)_{n≥0}, with X^n ∈ Ω_{a,S} and Y^n ∈ Ω′_{a,S}, with initial distribution given by

X^0_i(t) = min(x_i + t − T_0, y_i), Y^0_i(t) = min(x′_i + t − T_0, y′_i),

for t ∈ ⟦T_0, T_1⟧ and 1 ≤ i ≤ k. First observe that we do in fact have X^0 ∈ Ω_{a,S}, since X^0_i(T_0) = x_i, X^0_i(T_1) = y_i, X^0_i(t) ≤ min(x_{i−1} + t − T_0, y_{i−1}) = X^0_{i−1}(t), and X^0_k(t) ≥ x_k + t − T_0 ≥ g_b(T_0) + t − T_0 ≥ g_b(t). We also note here that X^0 is maximal on the entire space Ω(T_0, T_1, x⃗, y⃗), in the sense that for any Z ∈ Ω(T_0, T_1, x⃗, y⃗) we have Z_i(t) ≤ X^0_i(t) for all t ∈ ⟦T_0, T_1⟧. In particular, X^0 is maximal on Ω_{a,S}. Likewise, we see that Y^0 is maximal on Ω′_{a,S}.

We want the chain (X^n, Y^n) to have the following properties:
(1) (X^n)_{n≥0} and (Y^n)_{n≥0} are both Markov in their own filtrations,
(2) (X^n) is irreducible and aperiodic, with invariant distribution P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S},
(3) (Y^n) is irreducible and aperiodic, with invariant distribution P^{T_0,T_1,x⃗′,y⃗′,∞,g_t}_{avoid,Ber;S},
(4) X^n_i ≤ Y^n_i on ⟦T_0, T_1⟧ for all n ≥ 0 and 1 ≤ i ≤ k.

This will allow us to conclude convergence of X^n and Y^n to these two uniform measures. We specify the dynamics of (X^n, Y^n) as follows. At time n, we uniformly sample a triple (i, t, z) ∈ ⟦1, k⟧ × ⟦T_0, T_1⟧ × ⟦x_k, y′_1 − 1⟧. We also flip a fair coin, with P(heads) = P(tails) = 1/2. We update X^n and Y^n using the following procedure. If j ≠ i, we leave X_j, Y_j unchanged, and for all points s ≠ t we set X^{n+1}_i(s) = X^n_i(s). If T_0 < t < T_1, X^n_i(t − 1) = z, and X^n_i(t + 1) = z + 1 (note that this implies X^n_i(t) ∈ {z, z + 1}), we consider two cases. If t ∈ S, then we set

X^{n+1}_i(t) = z + 1 if heads, and X^{n+1}_i(t) = z if tails,

assuming this does not cause X^{n+1}_i(t) to fall below X^n_{i+1}(t), with the convention that X^n_{k+1} = g_b. If t ∉ S, we perform the same update regardless of whether it results in a crossing. In all other cases, we leave X^{n+1}_i(t) = X^n_i(t). We update Y^n using the same rule, with g_t in place of g_b.

We first observe that X^n and Y^n are in fact non-crossing on S for all n. Note that X^0 is non-crossing, and if X^n is non-crossing, then the only way X^{n+1} could be crossing on S is if the update were to push X^{n+1}_i(t) below X^n_{i+1}(t) for some i, t with t ∈ S. But any update of this form is suppressed, so it follows by induction that X^n ∈ Ω_{a,S} for all n. Similarly, we see that Y^n ∈ Ω′_{a,S}. It is easy to see that (X^n, Y^n) is a Markov chain, since at each time n the value of (X^{n+1}, Y^{n+1}) depends only on the current state (X^n, Y^n), and not on the time n or any of the states prior to time n. Moreover, the value of X^{n+1} depends only on the state X^n, not on Y^n, so (X^n) is a Markov chain in its own filtration. The same applies to (Y^n). This proves property (1) above.

We now argue that (X^n) and (Y^n) are irreducible. Fix any Z ∈ Ω_{a,S}. As observed above, we have Z_i ≤ X^0_i on ⟦T_0, T_1⟧ for all i. We argue that we can reach the state Z starting from X^0 in some finite number of steps with positive probability. Due to the maximality of X^0, we only need to move the paths downward. If we do this starting with the bottom path, then there is no danger of the paths X_i crossing on S, or of X_k crossing g_b on S. To ensure that X^n_k = Z_k, we successively sample triples (k, t, z) as follows. We initialize t = T_0 + 1.
If X^n_k(t) = Z_k(t), we increment t by 1. Otherwise, we have X^n_k(t) > Z_k(t), so we set z = X^n_k(t) − 1 and flip tails. This may or may not push X_k(t) downwards by 1. We then increment t and repeat this process. If t reaches T_1 − 1, then at the increment we reset t = T_0 + 1. After finitely many steps, X_k will agree with Z_k on all of ⟦T_0, T_1⟧. We then repeat this process for X^n_i and Z_i, with i descending. Since each of these samples and flips has positive probability, and this process terminates in finitely many steps, the probability of transitioning from X^0 to Z after some number of steps is positive. The same reasoning applies to show that (Y^n) is irreducible. To see that the chains are aperiodic, simply observe that if we sample a triple (i, T_0, z) or (i, T_1, z), then the states of both chains will be unchanged.

To see that the uniform measure P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S} on Ω_{a,S} is invariant for (X^n), fix any ω ∈ Ω_{a,S}. For simplicity, write µ for the uniform measure. Then for all τ ∈ Ω_{a,S}, we have µ(τ) = 1/|Ω_{a,S}|. Hence

∑_{τ∈Ω_{a,S}} µ(τ) P(X^{n+1} = ω | X^n = τ) = (1/|Ω_{a,S}|) ∑_{τ∈Ω_{a,S}} P(X^{n+1} = ω | X^n = τ) = (1/|Ω_{a,S}|) ∑_{τ∈Ω_{a,S}} P(X^{n+1} = τ | X^n = ω) = (1/|Ω_{a,S}|) · 1 = µ(ω).
The second equality above is clear if τ = ω. Otherwise, note that P(X^{n+1} = ω | X^n = τ) ≠ 0 if and only if τ and ω differ only in one indexed path (say the i-th) at one point t, where |τ_i(t) − ω_i(t)| = 1, and this condition is also equivalent to P(X^{n+1} = τ | X^n = ω) ≠ 0. If X^n = τ, there is exactly one choice of triple (i, t, z) and one coin flip which will ensure X^{n+1}_i(t) = ω_i(t), i.e., X^{n+1} = ω. Conversely, if X^n = ω, there is one triple and one coin flip which will ensure X^{n+1} = τ. Since the triples are sampled uniformly and the coin flips are fair, these two conditional probabilities are in fact equal. This proves (2), and an analogous argument proves (3).

Lastly, we argue that X^n_i ≤ Y^n_i on ⟦T_0, T_1⟧ for all n ≥ 0 and 1 ≤ i ≤ k. This is of course true at n = 0. Suppose it holds at some n ≥ 0, and suppose that we sample a triple (i, t, z). Then the update rule can only change the values of X^n_i(t) and Y^n_i(t). Notice that the values can change by at most 1, and if Y^n_i(t) − X^n_i(t) = 1, then the only way the ordering could be violated is if Y_i were lowered and X_i were raised at the next update. But this is impossible, since a coin flip of heads can only raise or leave fixed both curves, and tails can only lower or leave fixed both curves. Thus it suffices to assume X^n_i(t) = Y^n_i(t). There are two cases to consider that violate the ordering of X^{n+1}_i(t) and Y^{n+1}_i(t): either (i) X_i(t) is raised but Y_i(t) is left fixed, or (ii) Y_i(t) is lowered yet X_i(t) is left fixed. These can only occur if the curves exhibit one of two specific shapes on ⟦t − 1, t + 1⟧. For X_i(t) to be raised, we must have X^n_i(t − 1) = X^n_i(t) = X^n_i(t + 1) − 1, and for Y_i(t) to be lowered, we must have Y^n_i(t − 1) = Y^n_i(t) − 1 and Y^n_i(t) = Y^n_i(t + 1). From the assumptions that X^n_i(t) = Y^n_i(t) and X^n_i ≤ Y^n_i, we observe that both of these requirements force the other curve to exhibit the same shape on ⟦t − 1, t + 1⟧. Then the update rule will be the same for both curves for either coin flip, proving that both (i) and (ii) are impossible.

Step 2.
It follows from (2) and (3) and [23, Theorem 1.8.3] that (X^n)_{n≥0} and (Y^n)_{n≥0} converge weakly to P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S} and P^{T_0,T_1,x⃗′,y⃗′,∞,g_t}_{avoid,Ber;S}, respectively. In particular, (X^n) and (Y^n) are tight, so (X^n, Y^n)_{n≥0} is tight as well. By Prohorov's theorem, it follows that (X^n, Y^n) is relatively compact. Let (n_m) be a sequence such that (X^{n_m}, Y^{n_m}) converges weakly. Then by the Skorohod representation theorem [2, Theorem 6.7], there exists a probability space (Ω, F, P) supporting random variables X̃^m, Ỹ^m and X̃, Ỹ taking values in Ω_{a,S}, Ω′_{a,S} respectively, such that:
(1) the law of (X̃^m, Ỹ^m) under P is the same as that of (X^{n_m}, Y^{n_m}),
(2) X̃^m(ω) → X̃(ω) for all ω ∈ Ω,
(3) Ỹ^m(ω) → Ỹ(ω) for all ω ∈ Ω.

In particular, (1) implies that X̃^m has the same law as X^{n_m}, which converges weakly to P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S}. It follows from (2) and the uniqueness of limits that X̃ has law P^{T_0,T_1,x⃗,y⃗,∞,g_b}_{avoid,Ber;S}. Similarly, Ỹ has law P^{T_0,T_1,x⃗′,y⃗′,∞,g_t}_{avoid,Ber;S}. Moreover, condition (4) in Step 1 implies that X̃^m(i, ·) ≤ Ỹ^m(i, ·), P-a.s., so X̃(i, ·) ≤ Ỹ(i, ·) for 1 ≤ i ≤ k, P-a.s. Thus we can take L^b = X̃ and L^t = Ỹ. □

7.5. Proof of Lemmas 4.6 and 4.7.
In this section we use the same notation as in Section 4.3. We first prove Lemma 4.6. We will use the following lemma, which proves an analogous convergence result for a single rescaled Bernoulli random walk.
Lemma 7.4.
Let x, y, a, b ∈ R with a < b, and let a_N, b_N ∈ N^{−α}Z and x_N, y_N ∈ N^{−α/2}Z be sequences with a_N ≤ a, b_N ≥ b, and |y_N − x_N| ≤ (b_N − a_N)N^{α/2}. Suppose a_N → a and b_N → b. Write x̃_N = (x_N − p a_N N^{α/2})/√(p(1−p)) and ỹ_N = (y_N − p b_N N^{α/2})/√(p(1−p)), and assume x̃_N → x, ỹ_N → y as N → ∞. Let Y_N be a sequence of random variables with laws P^{a_N,b_N,x_N,y_N}_{free,N}, and let Z_N = Y_N|_{[a,b]}. Then the law of Z_N converges weakly to P^{a,b,x,y}_free as N → ∞.

Proof.
Let us write z_N = (y_N − x_N)N^{α/2} and T_N = (b_N − a_N)N^α. Let B̃ be a standard Brownian bridge on [0, 1], and define random variables B_N, B taking values in C([a_N, b_N]), C([a, b]) respectively via

B_N(t) = √(b_N − a_N) · B̃((t − a_N)/(b_N − a_N)) + ((t − a_N)/(b_N − a_N)) · ỹ_N + ((b_N − t)/(b_N − a_N)) · x̃_N,
B(t) = √(b − a) · B̃((t − a)/(b − a)) + ((t − a)/(b − a)) · y + ((b − t)/(b − a)) · x.

We observe that B has law P^{a,b,x,y}_free and B_N ⟹ B as N → ∞. By [2, Theorem 3.1], to show that Z_N ⟹ B, it suffices to find a sequence of probability spaces supporting Y_N, B_N so that

(7.5) ρ(B_N, Y_N) = sup_{t∈[a_N,b_N]} |B_N(t) − Y_N(t)| ⟹ 0 as N → ∞.

It follows from Theorem 3.3 that for each N ∈ N there is a probability space supporting B_N and Y_N, as well as constants C, a′, α′ > 0, such that

(7.6) E[exp(a′ ∆(N, x_N, y_N))] ≤ C e^{α′ log N} e^{|z_N − pT_N|²/N^α},

where ∆(N, x_N, y_N) = √(p(1−p)) N^{α/2} ρ(B_N, Y_N). Since (z_N − pT_N)N^{−α/2} → √(p(1−p))(y − x) by assumption, there exist N_0 ∈ N and A > 0 so that |z_N − pT_N| ≤ A N^{α/2} for N ≥ N_0. Then for ε > 0 and N ≥ N_0, Chebyshev's inequality and (7.6) give

P(ρ(B_N, Y_N) > ε) ≤ C e^{−a′ε√(p(1−p))N^{α/2}} e^{α′ log N} e^{A²}.

The right-hand side tends to 0 as N → ∞, implying (7.5). □

We now give the proof of Lemma 4.6.
Proof. (of Lemma 4.6) We prove the two statements of the lemma in two steps.
Step 1.
In this step we fix N_0 ∈ N so that P^{a_N,b_N,x⃗_N,y⃗_N,f_N,g_N}_{avoid,N} is well-defined for N ≥ N_0. Observe that we can choose ε > 0 and continuous functions h_1, . . . , h_k : [a, b] → R depending on a, b, x⃗, y⃗, f, g, with h_i(a) = x_i and h_i(b) = y_i for i ∈ ⟦1, k⟧, such that if u_i : [a, b] → R are continuous functions with ρ(u_i, h_i) = sup_{x∈[a,b]} |u_i(x) − h_i(x)| < ε, then

(7.7) f(x) − ε > u_1(x) + ε > u_1(x) − ε > · · · > u_k(x) + ε > u_k(x) − ε > g(x) + ε for all x ∈ [a, b].

By Lemma 2.6, we have

(7.8) P^{a,b,x⃗,y⃗}_free(ρ(Q_i, h_i) < ε for i ∈ ⟦1, k⟧) > 0.

Since y^N_i − x^N_i − p(b_N − a_N)N^{α/2} → √(p(1−p))(y_i − x_i) as N → ∞ for i ∈ ⟦1, k⟧, and p < 1, we can find N_1 ∈ N so that for N ≥ N_1 we have |y^N_i − x^N_i| ≤ (b_N − a_N)N^{α/2}. It follows from Lemma 7.4 that if Ỹ^N have laws P^{a_N,b_N,x⃗_N,y⃗_N}_{free,N} for N ≥ N_1 and Z̃^N = Ỹ^N|_{⟦1,k⟧×[a,b]}, then the law of Z̃^N converges weakly to P^{a,b,x⃗,y⃗}_free. In view of (7.8) we can then find N_2 so that if N ≥ max(N_1, N_2), then

(7.9) P^{a_N,b_N,x⃗_N,y⃗_N}_{free,N}(ρ(Ỹ^N_i, h_i) < ε for i ∈ ⟦1, k⟧) > 0.

We now choose N_3 so that sup_{x∈[a−1,b+1]} |f(x) − f_N(x)| < ε/4 and sup_{x∈[a−1,b+1]} |g(x) − g_N(x)| < ε/4 for N ≥ N_3. If f = ∞ (resp. g = −∞), we interpret this to mean that f_N = ∞ (resp. g_N = −∞). We take N_4 large enough so that if N ≥ N_4 and |x − y| ≤ N^{−α/2}, then |f(x) − f(y)| < ε/4 and |g(x) − g(y)| < ε/4. Lastly, we choose N_5 so that N^{−α/2} < ε/4 for N ≥ N_5. Then for N ≥ N_0 = max(N_1, N_2, N_3, N_4, N_5), we have using (7.7) that

(7.10) {ρ(Ỹ^N_i, h_i) < ε for i ∈ ⟦1, k⟧} ⊂ {f_N ≥ Y^N_1 ≥ · · · ≥ Y^N_k ≥ g_N on [a_N, b_N]}.
By (7.9) and (7.10) we conclude that

P^{a_N,b_N,x⃗_N,y⃗_N}_{free,N}({f_N ≥ Y^N_1 ≥ · · · ≥ Y^N_k ≥ g_N on [a_N, b_N]}) > 0,

which implies that P^{a_N,b_N,x⃗_N,y⃗_N,f_N,g_N}_{avoid,N} is well-defined.

Step 2.
In this step we prove that Z^N ⟹ P^{a,b,x⃗,y⃗,f,g}_avoid, with Z^N defined in the statement of the lemma. We write Σ = ⟦1, k⟧, Λ = [a, b], and Λ_N = [a_N, b_N]. It suffices to show that for any bounded continuous function F : C(Σ × Λ) → R we have

(7.11) lim_{N→∞} E[F(Z^N)] = E[F(Q)],

where Q has law P^{a,b,x⃗,y⃗,f,g}_avoid. We define the functions H_{f,g} : C(Σ × Λ) → R and H^N_{f,g} : C(Σ × Λ_N) → R by

H_{f,g}(L) = 1{f > L_1 > · · · > L_k > g on Λ},
H^N_{f,g}(L^N) = 1{f ≥ L^N_1 ≥ · · · ≥ L^N_k ≥ g on Λ_N}.

Then we observe that for N ≥ N_0,

(7.12) E[F(Z^N)] = E[F(L^N|_{Σ×[a,b]}) H^N_{f_N,g_N}(L^N)] / E[H^N_{f_N,g_N}(L^N)],

where L^N has law P^{a_N,b_N,x⃗_N,y⃗_N}_{free,N}. By our choice of N_0 in Step 1, the denominator in (7.12) is positive for all N ≥ N_0. Similarly, we have

(7.13) E[F(Q)] = E[F(L) H_{f,g}(L)] / E[H_{f,g}(L)],

where L has law P^{a,b,x⃗,y⃗}_free. From (7.12) and (7.13), we see that to prove (7.11) it suffices to show that for any bounded continuous function F : C(Σ × Λ) → R,

(7.14) lim_{N→∞} E[F(L^N|_{Σ×[a,b]}) H^N_{f_N,g_N}(L^N)] = E[F(L) H_{f,g}(L)].

By Lemma 7.4, L^N|_{Σ×[a,b]} ⟹ L as N → ∞. Since C(Σ × Λ) is separable, the Skorohod representation theorem [2, Theorem 6.7] gives a probability space (Ω, F, P) supporting C(Σ × Λ_N)-valued random variables L^N with laws P^{a_N,b_N,x⃗_N,y⃗_N}_{free,N} and a C(Σ × Λ)-valued random variable L with law P^{a,b,x⃗,y⃗}_free, such that L^N|_{Σ×[a,b]} → L uniformly on compact sets, pointwise on Ω.
Here we rely on the fact that a_N, b_N are respectively the largest element of N^{−α}Z less than a and the smallest element greater than b, so that L^N|_{Σ×[a,b]} uniquely determines L^N on [a_N, b_N]. Define the events

E_1 = {ω : f > L_1(ω) > · · · > L_k(ω) > g on [a, b]},
E_2 = {ω : L_i(ω)(r) < L_{i+1}(ω)(r) for some i ∈ ⟦0, k⟧ and r ∈ [a, b]},

where in the definition of E_2 we use the convention L_0 = f, L_{k+1} = g. The continuity of F implies that F(L^N|_{Σ×[a,b]}) H^N_{f_N,g_N}(L^N) → F(L) on the event E_1, and F(L^N|_{Σ×[a,b]}) H^N_{f_N,g_N}(L^N) → 0 on the event E_2. By Lemma 2.5 we have P(E_1 ∪ E_2) = 1, so P-a.s. we have F(L^N|_{Σ×[a,b]}) H^N_{f_N,g_N}(L^N) → F(L) H_{f,g}(L). The bounded convergence theorem then implies (7.14), completing the proof of (7.11). □

We now state two lemmas about Brownian bridges which will be used in the proof of Lemma 4.7. The first lemma shows that a Brownian bridge started at 0 almost surely becomes negative somewhere on its domain.
Lemma 7.5.
Fix any T > 0 and y ∈ R, and let Q denote a random variable with law P^{0,T,0,y}_free. Define the event C = {inf_{s∈[0,T]} Q(s) < 0}. Then P^{0,T,0,y}_free(C) = 1.

Proof. Let B denote a standard Brownian bridge on [0, 1], and let B̃_s = B_{s/T} + sy/T for s ∈ [0, T]. Then B̃ has the law of Q. Consider the stopping time τ = inf{s > 0 : B̃_s < 0}. We will argue that τ = 0 a.s., which implies the conclusion of the lemma since {τ = 0} ⊂ C. We observe that since B̃ is a.s. continuous and Q is dense in R,

{τ = 0} = ∩_{n∈N} ∪_{s∈(0,1/n)∩Q} {B̃_s < 0} ∈ ∩_{n∈N} σ(B̃_s : s < 1/n).

Here, σ(B̃_s : s < ε) denotes the σ-algebra generated by the B̃_s for s < ε. We used the fact that for a fixed ε, each set {B̃_s < 0} for s ∈ (0, ε) ∩ Q is contained in this σ-algebra, and thus so is their countable union. It follows from Blumenthal's 0-1 law [15, Theorem 7.2.3] that P(τ = 0) ∈ {0, 1}. To complete the proof, it suffices to show that P(τ = 0) > 0. By (3.1), B_{s/T} is distributed normally with mean 0 and variance σ² = (s/T)(1 − s/T). We observe that for any s ∈ (0, T),

P(τ ≤ s) ≥ P(B_{s/T} < −sy/T) = P(σ N(0,1) > (s/T)y) = P(N(0,1) > y √(s/(T − s))).

As s → 0, the probability on the right tends to P(N(0,1) > 0) = 1/2. Since {τ = 0} = ∩_{n=1}^∞ {τ ≤ 1/n} and {τ ≤ 1/(n+1)} ⊂ {τ ≤ 1/n}, we conclude that P(τ = 0) = lim_{n→∞} P(τ ≤ 1/n) ≥ 1/2. Therefore P(τ = 0) = 1. □

The second lemma shows that a difference of two independent Brownian bridges is another Brownian bridge.
Lemma 7.6.
Let a, b, x_1, y_1, x_2, y_2 ∈ R with a < b. Let B_1(t), B_2(t) be independent Brownian bridges on [a, b] from x_1 to y_1 and from x_2 to y_2 respectively, as defined in (2.2). If B(t) = B_1(t) − B_2(t) for t ∈ [a, b], then 2^{−1/2}B is itself a Brownian bridge on [a, b] from 2^{−1/2}(x_1 − x_2) to 2^{−1/2}(y_1 − y_2).

Proof. By definition, for i = 1, 2 we have

B_i(t) = (b − a)^{1/2} · B̃_i((t − a)/(b − a)) + ((b − t)/(b − a)) · x_i + ((t − a)/(b − a)) · y_i,

with B̃_i(t) = W^i_t − tW^i_1 for independent Brownian motions W^1 and W^2. We have

(7.15) B(t) = (b − a)^{1/2} · (B̃_1 − B̃_2)((t − a)/(b − a)) + ((b − t)/(b − a)) · (x_1 − x_2) + ((t − a)/(b − a)) · (y_1 − y_2).

Note that the process B̃_1 − B̃_2 is a linear combination of continuous mean-zero Gaussian processes, so it is a continuous mean-zero Gaussian process, and is thus characterized by its covariance. Since B̃_1(·) and B̃_2(·) are both Gaussian with mean 0 and covariance min(s, t) − st, their difference B̃_1(·) − B̃_2(·) is also Gaussian with mean 0 and covariance 2(min(s, t) − st). This implies that 2^{−1/2}(B̃_1 − B̃_2) is itself a standard Brownian bridge B̃ on [0, 1], and hence equation (7.15) can be rewritten as

2^{−1/2}B(t) = (b − a)^{1/2} · B̃((t − a)/(b − a)) + ((b − t)/(b − a)) · 2^{−1/2}(x_1 − x_2) + ((t − a)/(b − a)) · 2^{−1/2}(y_1 − y_2).

This is a Brownian bridge on [a, b] from 2^{−1/2}(x_1 − x_2) to 2^{−1/2}(y_1 − y_2), as desired. □
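The covariance computation at the heart of Lemma 7.6 can be checked exactly on a discretization: a standard Brownian bridge on [0, 1] has covariance min(s, t) − st, and the difference of two independent copies simply doubles it, so rescaling by 2^{−1/2} recovers the bridge covariance. The sketch below (the grid size and tolerance are arbitrary choices) builds the covariance matrices directly rather than sampling:

```python
# Discretize [0, 1] at interior grid points and compare covariance matrices.
n = 5
ts = [(k + 1) / (n + 1) for k in range(n)]

def bridge_cov(s, t):
    """Covariance of a standard Brownian bridge on [0, 1]: min(s, t) - s*t."""
    return min(s, t) - s * t

# Covariance of B1 - B2 for independent bridges is the sum of the two covariances.
diff_cov = [[2 * bridge_cov(s, t) for t in ts] for s in ts]
# Scaling a process by 2^{-1/2} scales its covariance by 1/2,
# recovering the bridge covariance: the content of Lemma 7.6.
scaled = [[0.5 * c for c in row] for row in diff_cov]
target = [[bridge_cov(s, t) for t in ts] for s in ts]
assert all(abs(scaled[i][j] - target[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```

Since a centered Gaussian process is determined by its covariance, this matrix identity is the whole proof modulo the affine boundary terms in (7.15).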
To conclude this section, we prove Lemma 4.7.
Proof. (of Lemma 4.7) Suppose that L^∞ is a subsequential limit of (f̃^N_1, ..., f̃^N_{k−1}). By possibly passing to a subsequence, we may assume that (f̃^N_1, ..., f̃^N_{k−1}) ⇒ L^∞. We will still call the subsequence (f̃^N_1, ..., f̃^N_{k−1}) so as not to overburden the notation. By the Skorohod representation theorem [2, Theorem 6.7], we can also assume that (f̃^N_1, ..., f̃^N_{k−1}) and L^∞ are all defined on the same probability space with measure P, and that the convergence happens P-almost surely. Here we are implicitly using Lemma 2.2, from which we know that the random variables (f̃^N_1, ..., f̃^N_{k−1}) and L^∞ take values in a Polish space, so that the Skorohod representation theorem is applicable.

Let us denote the random variables with the laws of (f̃^N_1, ..., f̃^N_{k−1}) by X^N and the one with law L^∞ by X, so that X^N → X almost surely w.r.t. P. In particular, X^N(s) → X(s) for any s ∈ ℝ. Recall that X^N_i(s) = N^{−α/2}(L^N_i(sN^α) − psN^α)/√(p(1−p)), where L^N has the law of ℒ^N.

Suppose that X_i(s) = X_{i+1}(s) for some i ∈ ⟦1, k−1⟧. Then we have X^N_i(s) − X^N_{i+1}(s) → 0, i.e., N^{−α/2}(L^N_i(sN^α) − L^N_{i+1}(sN^α)) → 0 as N → ∞. Let us write

a = ⌊sN^α⌋N^{−α}, b = ⌈(s+2)N^α⌉N^{−α}, and x_N = L^N_i(aN^α) − L^N_{i+1}(aN^α), y_N = L^N_i(bN^α) − L^N_{i+1}(bN^α).

Then N^{−α/2}x_N → 0. If Q^i, Q^{i+1} are independent Bernoulli bridges with laws P^{a,b,L^N_i(aN^α),L^N_i(bN^α)}_Ber and P^{a,b,L^N_{i+1}(aN^α),L^N_{i+1}(bN^α)}_Ber, then ℓ = Q^i − Q^{i+1} is a random walk bridge with increments in {−1, 0, 1}, from (a, x_N) to (b, y_N). Let us denote the law of N^{−α/2}ℓ/√(p(1−p)) by P^{a,b,x_N,y_N}_diff.

By Lemma 7.4, the rescaled versions of Q^i and Q^{i+1} converge weakly to the laws of two Brownian bridges, B^i from L^∞_i(s) to L^∞_i(s+2) and B^{i+1} from L^∞_{i+1}(s) to L^∞_{i+1}(s+2), respectively. Consequently, N^{−α/2}ℓ/√(p(1−p)) converges weakly to the difference B^i − B^{i+1} of two independent Brownian bridges. By Lemma 7.6, this difference is equal in law to √2 · B, where B is a Brownian bridge on [s, s+2] from 0 to 2^{−1/2}y, with y = L^∞_i(s+2) − L^∞_{i+1}(s+2); in other words, B has law P^{s,s+2,0,2^{−1/2}y}_free. Therefore P^{a,b,x_N,y_N}_diff converges weakly to the law of √2 · B. With probability one, min_{u∈[s,s+2]} B(u) < 0 by Lemma 7.5. Thus, given δ > 0, we can choose N large enough so that the probability that N^{−α/2}ℓ/√(p(1−p)), or equivalently ℓ, remains above 0 on [a,b] is less than δ. Thus for large enough N we have

P(X_i(s) = X_{i+1}(s)) ≤ P( P^{a,b,x_N,y_N}_diff( inf_{u∈[a,b]} ℓ(u) ≥ 0 ) < δ ) ≤ P( Z(a, b, L^N(aN^α), L^N(bN^α), ∞, L^N_k) < δ ). (7.16)

Here, Z denotes the acceptance probability of Definition 2.22. This is the probability that k−1 independent Bernoulli bridges Q^1, ..., Q^{k−1} on [a,b] with entrance and exit data L^N(aN^α) and L^N(bN^α) do not cross one another or L^N_k. The last inequality follows because ℓ has the law of the difference of Q^i and Q^{i+1}, and the acceptance probability is bounded above by the probability that Q^i and Q^{i+1} do not cross, i.e., that Q^i − Q^{i+1} ≥ 0. By Proposition 4.1, given ε > 0 we can choose δ so that the probability on the right in (7.16) is < ε. Since ε > 0 was arbitrary, we conclude that P(X_i(s) = X_{i+1}(s)) = 0. □

Appendix B
The goal of this section is to prove Proposition 3.17, which roughly states that if the boundary data of an avoiding Bernoulli line ensemble converges, then the fixed-time distribution of the ensemble converges weakly to a random vector with density ρ. In the process of the proof we will identify this limiting density ρ.

Throughout this section we fix k ∈ ℕ and consider sequences of ⟦1,k⟧-indexed line ensembles with distribution given by P^{0,T,x⃗,y⃗}_{avoid,Ber} in the sense of Definition 2.15. Recall that this is just the law of k independent Bernoulli random walks that have been conditioned to start from x⃗ = (x_1, ..., x_k) at time 0, to end at y⃗ = (y_1, ..., y_k) at time T, and to never cross. Here x⃗, y⃗ ∈ W_k satisfy T ≥ y_i − x_i ≥ 0 for i = 1, ..., k, which by Lemma 2.16 ensures the well-posedness of P^{0,T,x⃗,y⃗}_{avoid,Ber}.

In Section 8.1, we introduce some definitions and formulate the precise statements of the two results we want to prove as Propositions 8.2 and 8.3. In Section 8.2, we introduce some basic results about skew Schur polynomials and express the fixed-time distribution of avoiding Bernoulli line ensembles through these polynomials in Lemma 8.7. In Sections 8.3 and 8.4, we prove Propositions 8.2 and 8.3 for an important special case. In Section 8.5 we introduce some notation and results about multi-indices and multivariate functions, which paves the way for the full proofs of Propositions 8.2 and 8.3 in that section and Section 8.6.

8.1. Weak convergence.
We start by recalling and introducing some helpful notation. Recall that

W_k = { x⃗ ∈ ℝ^k : x_1 ≥ x_2 ≥ ··· ≥ x_k }, W°_k = { x⃗ ∈ ℝ^k : x_1 > x_2 > ··· > x_k }.

Definition 8.1.
Here we recall the scaling from Proposition 3.17. We fix p, t ∈ (0,1) and a⃗, b⃗ ∈ W_k. Suppose that x⃗^T = (x^T_1, ..., x^T_k) and y⃗^T = (y^T_1, ..., y^T_k) are two sequences of k-dimensional vectors in W_k such that

lim_{T→∞} x^T_i/√T = a_i and lim_{T→∞} (y^T_i − pT)/√T = b_i for i = 1, ..., k.

Define the sequence of random k-dimensional vectors Z^T by

Z^T = (Z^T_1, ..., Z^T_k) = ( (L^T_1(tT) − ptT)/√T, ..., (L^T_k(tT) − ptT)/√T ), (8.1)

where (L^T_1, ..., L^T_k) is P^{0,T,x⃗^T,y⃗^T}_{avoid,Ber}-distributed.

We next define a class of functions that will be used to express the limiting density ρ in Proposition 3.17. These functions depend on two vectors a⃗, b⃗ ∈ W_k as well as parameters p, t ∈ (0,1) through the quantities

(8.2) c_1(p,t) = 1/(p(1−p)t), c_2(p,t) = 1/(p(1−p)(1−t)), c_3(p,t) = 1/(2p(1−p)t(1−t)).

Suppose the vectors a⃗ and b⃗ have the following form

a⃗ = (a_1, ..., a_k) = (α_1, ..., α_1, ..., α_p, ..., α_p), where α_r appears m_r times,
b⃗ = (b_1, ..., b_k) = (β_1, ..., β_1, ..., β_q, ..., β_q), where β_r appears n_r times, (8.3)

where α_1 > α_2 > ··· > α_p, β_1 > β_2 > ··· > β_q and Σ_{i=1}^p m_i = Σ_{i=1}^q n_i = k. We denote m⃗ = (m_1, ..., m_p), n⃗ = (n_1, ..., n_q) and define two determinants φ(a⃗, z⃗, m⃗) and ψ(b⃗, z⃗, n⃗) as follows: φ(a⃗, z⃗, m⃗) is the determinant of the k × k matrix whose rows are grouped into p blocks, the r-th block consisting of the rows

( (c_1(p,t) z_j)^{i−1} e^{c_1(p,t) α_r z_j} )_{j=1,...,k} for i = 1, ..., m_r,

and ψ(b⃗, z⃗, n⃗) is the determinant of the analogous k × k matrix with rows

( (c_2(p,t) z_j)^{i−1} e^{c_2(p,t) β_r z_j} )_{j=1,...,k} for i = 1, ..., n_r and r = 1, ..., q. (8.4)

Then we define the function

H(z⃗) = φ(a⃗, z⃗, m⃗) · ψ(b⃗, z⃗, n⃗) · ∏_{i=1}^k e^{−c_3(p,t) z_i²}. (8.5)

The function H implicitly depends on p, t, k, a⃗, b⃗, but we will not reflect this dependence in the notation. The following result summarizes the properties we will require of H(z⃗). Its proof can be found in Sections 8.3 and 8.5.

Proposition 8.2.
Fix p, t ∈ (0,1) and a⃗, b⃗ ∈ W_k, and let H(z⃗) be as in (8.5). Then we have:
(1) H(z⃗) ≥ 0 for all z⃗ ∈ W_k.
(2) H(z⃗) = 0 for z⃗ ∈ W_k \ W°_k and H(z⃗) > 0 for z⃗ ∈ W°_k.
(3) Z_c := ∫_{W_k} H(z⃗) dz⃗ ∈ (0, ∞), where dz⃗ stands for the usual Lebesgue measure.

In view of Proposition 8.2 we know that the function

(8.6) ρ(z⃗) = ρ(z_1, ..., z_k) = Z_c^{−1} · 1{z_1 > z_2 > ··· > z_k} · H(z⃗)

defines a density on ℝ^k. This is the limiting density in Proposition 3.17. We end this section by stating the main convergence statement we want to establish.

Proposition 8.3.
Assume the same notation as in Definition 8.1. Then the random vectors Z^T converge weakly, as T → ∞, to a random vector with density ρ as in (8.6).

The proof of the above two propositions is organized in the remainder of the section as follows. We first prove Propositions 8.2 and 8.3 for the case when a⃗, b⃗ ∈ W°_k; this is done in Sections 8.3 and 8.4, respectively. Afterwards we prove Proposition 8.2 for vectors a⃗, b⃗ that have the form in (8.3) in Section 8.5, and then use Proposition 8.3 for the case a⃗, b⃗ ∈ W°_k together with the monotone coupling Lemma 3.1 to prove Proposition 8.3 in the general case in Section 8.6.

8.2. Skew Schur polynomials.
In this section we give some definitions and elementary results regarding skew Schur polynomials, which are mainly based on [21, Chapter 1]. Afterwards we explain how the fixed-time distribution of an avoiding Bernoulli line ensemble is expressible in terms of these skew Schur polynomials.
Definition 8.4.
Partitions, skew diagrams, interlacing, conjugation.
(1) A partition is an infinite sequence λ = (λ_1, λ_2, ..., λ_r, ...) of non-negative integers in decreasing order λ_1 ≥ λ_2 ≥ ··· ≥ λ_r ≥ ··· containing only finitely many non-zero terms. The non-zero λ_i are called parts of λ, the number of parts is called the length of the partition λ, denoted by l(λ), and the sum of the parts is the weight of λ, denoted by |λ|.
(2) A partition λ is graphically represented by a Young diagram that has λ_1 left-justified boxes on the top row, λ_2 boxes on the second row, and so on. Suppose λ and µ are two partitions; we write λ ⊃ µ if λ_i ≥ µ_i for all i ∈ ℕ. We call the set-theoretic difference of the two Young diagrams of λ and µ a skew diagram and denote it by λ/µ.
(3) Partitions λ = (λ_1, λ_2, ...) and µ = (µ_1, µ_2, ...) are called interlaced, denoted by µ ≼ λ, if λ_1 ≥ µ_1 ≥ λ_2 ≥ µ_2 ≥ ···.
(4) The conjugate of a partition λ is the partition λ' such that λ'_i = max{ j ≥ 1 : λ_j ≥ i }. In particular, λ'_1 = l(λ), λ_1 = l(λ'), and notice that λ'' = λ. For example, the conjugate of (5,4,4,1) is (4,3,3,3,1).

According to Definition 8.4, we directly get that if µ ⊂ λ then l(λ) ≥ l(µ) and l(λ') ≥ l(µ'). Also, µ ≼ λ implies µ ⊂ λ. Moreover, as explained in [21, p. 5], if µ ≼ λ are interlaced, then λ'_i − µ'_i = 0 or 1 for every i ≥ 1.

Definition 8.5.
Elementary Symmetric Functions.
For each integer r ≥ 0, the r-th elementary symmetric function e_r is the sum of all products of r distinct variables x_i, so that e_0 = 1 and

(8.7) e_r = Σ_{i_1 < i_2 < ··· < i_r} x_{i_1} x_{i_2} ··· x_{i_r}.

In particular, for a finite collection of variables x_1, ..., x_n we have e_r(x_1, ..., x_n) = 0 whenever r > n. Next, we introduce skew Schur polynomials, based on [21, Chapter 1, (5.5), (5.11), (5.12)].

Definition 8.6.
Skew Schur polynomials, Jacobi–Trudi formula.
(1) Suppose µ ⊂ λ are partitions. If µ ≼ λ are interlaced, then the skew Schur polynomial s_{λ/µ} in a single variable x is defined by s_{λ/µ}(x) = x^{|λ|−|µ|}. Otherwise, we define s_{λ/µ}(x) = 0.
(2) Suppose µ ⊂ λ are two partitions. Define the skew Schur polynomial s_{λ/µ} with respect to variables x_1, x_2, ..., x_n by

s_{λ/µ}(x_1, ..., x_n) = Σ_{(ν)} ∏_{i=1}^n s_{ν^i/ν^{i−1}}(x_i) = Σ_{(ν)} ∏_{i=1}^n x_i^{|ν^i|−|ν^{i−1}|}, (8.8)

summed over all sequences (ν) = (ν^0, ν^1, ..., ν^n) of partitions such that ν^0 = µ, ν^n = λ and ν^0 ≼ ν^1 ≼ ··· ≼ ν^n. In particular, when x_1 = x_2 = ··· = x_n = 1, the skew Schur polynomial is just the number of such sequences of interlaced partitions (ν). This definition also implies the following branching relation for skew Schur polynomials:

s_{κ/µ}(x_1, ..., x_n) = Σ_λ s_{κ/λ}(x_1, ..., x_m) · s_{λ/µ}(x_{m+1}, ..., x_n), (8.9)

where 1 ≤ m ≤ n and s_{κ/λ}(∅) = 1{κ = λ}.
(3) We also have the following Jacobi–Trudi formula [21, Chapter 1, (5.5)] for the skew Schur polynomial:

(8.10) s_{λ/µ} = det( e_{λ'_i − µ'_j − i + j} )_{1 ≤ i,j ≤ m},

where m ≥ l(λ') and e_r is the elementary symmetric function of Definition 8.5.
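Definitions 8.4–8.6 can be checked mechanically. The following Python sketch (our own helper names, standard library only) verifies the worked conjugation example, the identity e_r(1^n) = C(n,r), and that the Jacobi–Trudi determinant (8.10) specialized at x_1 = ··· = x_n = 1 agrees with a direct count of interlaced sequences as in (8.8):

```python
import math
from itertools import combinations, permutations, product

def e_val(n, r):
    # e_r(1^n) = C(n, r); e_r vanishes outside 0 <= r <= n (Definition 8.5).
    return math.comb(n, r) if 0 <= r <= n else 0

# e_r at n ones counts r-element subsets of the variables.
assert sum(math.prod(c) for c in combinations([1] * 5, 2)) == e_val(5, 2) == 10

def conjugate(lam):
    # lam'_i = #{j : lam_j >= i}  (Definition 8.4(4))
    lam = [part for part in lam if part > 0]
    m = lam[0] if lam else 0
    return tuple(sum(1 for part in lam if part >= i) for i in range(1, m + 1))

assert conjugate((5, 4, 4, 1)) == (4, 3, 3, 3, 1)      # example from the text
assert conjugate(conjugate((5, 4, 4, 1))) == (5, 4, 4, 1)  # involution

def interlaced(mu, lam):
    # mu ≼ lam, i.e. lam_1 >= mu_1 >= lam_2 >= mu_2 >= ...  (Definition 8.4(3))
    k = max(len(mu), len(lam), 1)
    mu = tuple(mu) + (0,) * (k - len(mu))
    lam = tuple(lam) + (0,) * (k - len(lam))
    return all(lam[i] >= mu[i] for i in range(k)) and \
           all(mu[i] >= lam[i + 1] for i in range(k - 1))

def chains(mu, lam, n):
    # s_{lam/mu}(1^n): count sequences mu = nu^0 ≼ ... ≼ nu^n = lam, as in (8.8).
    k = len(lam)
    mu = tuple(mu) + (0,) * (k - len(mu))
    states = [nu for nu in product(*[range(mu[i], lam[i] + 1) for i in range(k)])
              if all(nu[i] >= nu[i + 1] for i in range(k - 1))]
    count = {mu: 1}
    for _ in range(n):
        count = {nu2: sum(c for nu, c in count.items() if interlaced(nu, nu2))
                 for nu2 in states}
    return count.get(tuple(lam), 0)

def jacobi_trudi(mu, lam, n):
    # det(e_{lam'_i - mu'_j - i + j}(1^n)) as in (8.10), by permutation expansion.
    lp = conjugate(lam)
    mp = conjugate(mu) + (0,) * (len(lp) - len(conjugate(mu)))
    m = len(lp)
    sign = lambda p: (-1) ** sum(p[i] > p[j] for i in range(m) for j in range(i + 1, m))
    return sum(sign(p) * math.prod(e_val(n, lp[i] - mp[p[i]] - i + p[i])
                                   for i in range(m))
               for p in permutations(range(m)))

# s_{(2,1)}(1,1) = 2 and s_{(2,2)}(1,1) = 1, by both methods.
print(chains((0, 0), (2, 1), 2), jacobi_trudi((), (2, 1), 2))  # 2 2
print(chains((0, 0), (2, 2), 2), jacobi_trudi((), (2, 2), 2))  # 1 1
```

The two small shapes are checked against the semistandard-tableau counts 2 and 1; larger shapes can be tested the same way since both routines are fully general for straight and skew shapes.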
Based on the above preparation, we are ready to state the following lemma, giving the distribution of an avoiding Bernoulli line ensemble at time ⌊tT⌋.

Lemma 8.7. Assume the same notation as in Definition 8.1, denote m = ⌊tT⌋ and n = T − ⌊tT⌋, and assume m, n ∈ ℕ. Then the avoiding Bernoulli line ensemble at time m has the following distribution:

P^{0,T,x⃗^T,y⃗^T}_{avoid,Ber}( L^T_1(m) = λ_1, ..., L^T_k(m) = λ_k ) = [ det( e_{λ_i − x^T_j − i + j}(1^m) )_{1≤i,j≤k} · det( e_{y^T_i − λ_j − i + j}(1^n) )_{1≤i,j≤k} ] / det( e_{y^T_i − x^T_j − i + j}(1^{m+n}) )_{1≤i,j≤k}, (8.11)

where λ_1 ≥ λ_2 ≥ ··· ≥ λ_k are integers.

Proof. Notice that if we shift all x_i, y_i and λ_i by the same integer, both sides of (8.11) stay the same, and so we may assume that all of these quantities are positive by adding the same large integer to all the coordinates. We then let κ be the partition with parts κ_i = y^T_i, µ the partition with parts µ_i = x^T_i, and λ the partition with parts λ_i for i = 1, ..., k. All three partitions have length k. In view of (8.10) we see that the right side of (8.11) is precisely

(8.12) RHS of (8.11) = s_{λ'/µ'}(1^m) · s_{κ'/λ'}(1^n) / s_{κ'/µ'}(1^T).

Let Ω(0, T, x⃗^T, y⃗^T) be the set of all avoiding Bernoulli line ensembles from x⃗^T to y⃗^T, and analogously define Ω(0, m, x⃗^T, λ) and Ω(0, n, λ, y⃗^T). Then we get by the uniformity of the measure P^{0,T,x⃗^T,y⃗^T}_{avoid,Ber} that

(8.13) LHS of (8.11) = |Ω(0, m, x⃗^T, λ)| · |Ω(0, n, λ, y⃗^T)| / |Ω(0, T, x⃗^T, y⃗^T)|.

Let us define the set

TB^T_{κ/µ} := { (λ^0, ..., λ^T) : λ^0 = µ, λ^T = κ, and λ^{i+1}_j − λ^i_j ∈ {0,1} for all j and i = 0, ..., T−1 },

and note that, by Definition 8.4, the condition λ^{i+1}_j − λ^i_j ∈ {0,1} for all j is equivalent to the interlacing of the conjugates, (λ^i)' ≼ (λ^{i+1})'. Each column of an avoiding Bernoulli line ensemble moves up by 0 or 1 in every coordinate at each time step while staying ordered, so by the definition of Ω(0, T, x⃗^T, y⃗^T) the map f : Ω(0, T, x⃗^T, y⃗^T) → TB^T_{κ/µ} given by f(L) = (L(·,0), ..., L(·,T)), where L(·,j) stands for the partition with parts L(i,j) for i = 1, ..., k, defines a bijection between Ω(0, T, x⃗^T, y⃗^T) and TB^T_{κ/µ}, and so we conclude that

|Ω(0, T, x⃗^T, y⃗^T)| = |TB^T_{κ/µ}| = s_{κ'/µ'}(1^T),

where in the last equality we used (8.8) applied to the conjugate partitions. Applying the same argument to Ω(0, m, x⃗^T, λ) and Ω(0, n, λ, y⃗^T), we conclude

(8.14) |Ω(0, m, x⃗^T, λ)| = s_{λ'/µ'}(1^m), |Ω(0, n, λ, y⃗^T)| = s_{κ'/λ'}(1^n), |Ω(0, T, x⃗^T, y⃗^T)| = s_{κ'/µ'}(1^T).

Combining (8.12), (8.13) and (8.14) gives (8.11). □
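Formula (8.11) can be sanity-checked by brute force for small parameters. The sketch below is our own illustration with a toy choice of k = 2, T = 4, m = 2 (not code from the paper): it enumerates all pairs of up-right paths with the given endpoints, keeps the non-crossing ones, and compares the empirical time-m law under the uniform measure with the determinant ratio in (8.11):

```python
import math
from fractions import Fraction
from itertools import product

# Toy instance: k = 2 avoiding Bernoulli bridges on [0, T] from x to y,
# i.e. up-right paths with steps in {0, 1} keeping L1(s) >= L2(s) for all s.
T, m = 4, 2
n = T - m
x, y = (1, 0), (3, 2)

def e_val(size, r):
    # e_r(1^size) = C(size, r), vanishing outside [0, size].
    return math.comb(size, r) if 0 <= r <= size else 0

def det2(a, b, c, d):
    return a * d - b * c

def paths(start, end):
    # All up-right (Bernoulli) paths of length T from start to end.
    out = []
    for steps in product((0, 1), repeat=T):
        pos = [start]
        for st in steps:
            pos.append(pos[-1] + st)
        if pos[-1] == end:
            out.append(pos)
    return out

ensembles = [(p1, p2) for p1 in paths(x[0], y[0]) for p2 in paths(x[1], y[1])
             if all(a >= b for a, b in zip(p1, p2))]

counts = {}
for p1, p2 in ensembles:
    lam = (p1[m], p2[m])
    counts[lam] = counts.get(lam, 0) + 1

# Denominator of (8.11): det(e_{y_i - x_j - i + j}(1^{m+n})).
denom = det2(e_val(T, y[0] - x[0]), e_val(T, y[0] - x[1] + 1),
             e_val(T, y[1] - x[0] - 1), e_val(T, y[1] - x[1]))

ok = True
for lam, c in counts.items():
    num1 = det2(e_val(m, lam[0] - x[0]), e_val(m, lam[0] - x[1] + 1),
                e_val(m, lam[1] - x[0] - 1), e_val(m, lam[1] - x[1]))
    num2 = det2(e_val(n, y[0] - lam[0]), e_val(n, y[0] - lam[1] + 1),
                e_val(n, y[1] - lam[0] - 1), e_val(n, y[1] - lam[1]))
    ok = ok and Fraction(c, len(ensembles)) == Fraction(num1 * num2, denom)

print(ok, len(ensembles) == denom)
```

Note that the total count of ensembles matches the denominator determinant, which is the |Ω| = s_{κ'/µ'}(1^T) identity from the proof.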
8.3. Proof of Proposition 8.2 for a⃗, b⃗ ∈ W°_k. In this section we prove a few technical results that we will need later, as well as Proposition 8.2 for the case when a⃗, b⃗ have distinct entries.
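The key analytic input of this section is Lemma 8.8 below, which is a local central limit theorem for e_N(1^n) = C(n,N). Before stating it, here is a quick numerical check of the approximation (8.15), a sketch with our own choice of parameters:

```python
import math

# Numerical check of the local-CLT asymptotic (8.15): with N = p*n + sqrt(n)*x,
#   log C(n, N) = -x^2/(2p(1-p)) + N*log((1-p)/p) - n*log(1-p)
#                 - (1/2)*log(n) - (1/2)*log(p*(1-p)) - (1/2)*log(2*pi)
#                 + O(n^{-1/2}).
p, n = 0.35, 10**6
N = round(p * n + math.sqrt(n) * 1.3)
x = (N - p * n) / math.sqrt(n)  # recentre so that N is an integer

log_lhs = math.lgamma(n + 1) - math.lgamma(N + 1) - math.lgamma(n - N + 1)
log_rhs = (-0.5 * math.log(2 * math.pi) - x ** 2 / (2 * p * (1 - p))
           + N * math.log((1 - p) / p) - n * math.log(1 - p)
           - 0.5 * math.log(n) - 0.5 * math.log(p * (1 - p)))
err = abs(log_lhs - log_rhs)
print(err)  # of order n^{-1/2}
```

For n = 10^6 the two sides agree to a few parts in a thousand, consistent with the O(n^{−1/2}) error in (8.15).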
Lemma 8.8.
Suppose that p ∈ (0,1) and R > 0 are given. Suppose that x ∈ [−R, R] and that N = pn + √n·x ∈ [0, n] is an integer. Then

e_N(1^n) = (√(2π))^{−1} · exp( −x²/(2p(1−p)) ) · exp( N log((1−p)/p) ) · exp( O(n^{−1/2}) ) · exp( −n log(1−p) − (1/2) log n − (1/2) log(p(1−p)) ), (8.15)

where the constant in the big O notation depends on p and R alone. Moreover, there exist positive constants C, c > 0 depending on p alone such that for all large enough n ∈ ℕ and N ∈ [0, n],

(8.16) e_N(1^n) ≤ C · exp( N log((1−p)/p) − n log(1−p) − (1/2) log n ) · exp( −cn^{−1}(N − pn)² ).

Remark. Notice that when R > 0 is fixed and N ∈ [pn − R√n, pn + R√n], we have N ∈ [0,n] for all large enough n, so that our insistence that N ∈ [0,n] in the first part of Lemma 8.8 does not affect the asymptotics. The second part of the lemma, equation (8.16), also trivially holds if N ∉ [0,n], since e_N(1^n) = 0 in this case by Definition 8.5.

Proof.
For clarity the proof is split into several steps.
Step 1. In this step we prove (8.15). Using Definition 8.5 we obtain

e_N(1^n) = n!/(N!(n−N)!). (8.17)

We have the following formula [25] for n ≥ 1:

(8.18) n! = √(2π) · n^{n+1/2} · e^{−n} · e^{r_n}, where 1/(12n+1) < r_n < 1/(12n).

Applying (8.18) to equation (8.17) gives

e_N(1^n) = (√(2π))^{−1} · exp( (n+1/2) log n − (N+1/2) log N − (n−N+1/2) log(n−N) + O(n^{−1}) )
= (√(2π))^{−1} · exp( (n+1/2) log n − (N+1/2) log(N/(pn)) − (n−N+1/2) log((n−N)/((1−p)n)) ) · exp( −(N+1/2) log(pn) − (n−N+1/2) log((1−p)n) + O(n^{−1}) ). (8.19)

Denote ∆ = √n·x = O(n^{1/2}); we now use the Taylor expansion of the logarithm and the expression for N to get

log(N/(pn)) = log(1 + ∆/(pn)) = ∆/(pn) − ∆²/(2p²n²) + O(n^{−3/2}).

Analogously, we have

log((n−N)/((1−p)n)) = log(1 − ∆/((1−p)n)) = −∆/((1−p)n) − ∆²/(2(1−p)²n²) + O(n^{−3/2}).

Plugging the two equations above into equation (8.19) we get

e_N(1^n) = (√(2π))^{−1} · exp( −(N+1/2)·[ ∆/(pn) − ∆²/(2p²n²) + O(n^{−3/2}) ] ) · exp( −(n−N+1/2)·[ −∆/((1−p)n) − ∆²/(2(1−p)²n²) + O(n^{−3/2}) ] ) · exp( (n+1/2) log n − (N+1/2) log(pn) − (n−N+1/2) log((1−p)n) + O(n^{−1}) ). (8.20)

We next observe that

−∆(N+1/2)/(pn) + ∆(n−N+1/2)/((1−p)n) = −∆²/(p(1−p)n) + O(n^{−1/2}),

∆²(N+1/2)/(2p²n²) + ∆²(n−N+1/2)/(2(1−p)²n²) = ∆²/(2p(1−p)n) + O(n^{−1/2}),

(n+1/2) log n − (N+1/2) log(pn) − (n−N+1/2) log((1−p)n) = N log((1−p)/p) − (1/2) log(p(1−p)) − (1/2) log n − n log(1−p). (8.21)

Since ∆² = nx², the first two displays combine to give the factor exp(−x²/(2p(1−p))) up to a multiplicative error exp(O(n^{−1/2})). Plugging (8.21) into (8.20), we arrive at (8.15).
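The Robbins-type bounds on r_n used in (8.18) are easy to verify numerically; the following sketch (ours, standard library only) checks them for n up to 100:

```python
import math

# Check the form of Stirling's formula used in (8.18):
#   n! = sqrt(2*pi) * n^(n + 1/2) * exp(-n) * exp(r_n),
#   with 1/(12n + 1) < r_n < 1/(12n).
for n in range(1, 101):
    r_n = math.log(math.factorial(n)) - (0.5 * math.log(2 * math.pi)
                                         + (n + 0.5) * math.log(n) - n)
    assert 1 / (12 * n + 1) < r_n < 1 / (12 * n), n
print("Robbins bounds hold for n = 1, ..., 100")
```

For larger n the gap between the two bounds shrinks like n^{−2}, so an exact-arithmetic or higher-precision check would be needed there; double precision suffices in this range.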
Step 2. In this step we prove (8.16). If N = 0 or N = n, we know that e_N(1^n) = 1, and then (8.16) is easily seen to hold with C = 1 and any c ∈ (0, min(−log p, −log(1−p))). Thus it suffices to consider the case when N ∈ [1, n−1], and in the sequel we also assume that n ≥ 2.

Combining (8.17) and (8.18) we conclude that

(8.22) e_N(1^n) ≤ exp( (n+1/2) log n − (N+1/2) log N − (n−N+1/2) log(n−N) ).

From (8.22) we get for all large enough n that

φ_n := log[ e_N(1^n) · exp( −N log((1−p)/p) + n log(1−p) + (1/2) log n ) ]
≤ (n+1/2) log n − (N+1/2) log N − (n−N+1/2) log(n−N) − N log((1−p)/p) + n log(1−p) + (1/2) log n
= (n+1/2) log n − (N+1/2) log(N/(pn)) − (N+1/2) log(pn) − (n−N+1/2) log((n−N)/((1−p)n)) − (n−N+1/2) log((1−p)n) − N log((1−p)/p) + n log(1−p) + (1/2) log n
= −(N+1/2) log(N/(pn)) − (n−N+1/2) log((n−N)/((1−p)n)) − (1/2) log(p(1−p))
= −(pn+∆+1/2) log(1 + ∆/(pn)) − ((1−p)n−∆+1/2) log(1 − ∆/((1−p)n)) − (1/2) log(p(1−p))
≤ C_1 + ψ_n(∆),

where ∆ = N − pn, C_1 > 0 is sufficiently large depending on p alone, and

(8.23) ψ_n(s) = −(pn+s+1/2) log(1 + s/(pn)) − ((1−p)n−s+1/2) log(1 − s/((1−p)n)),

for s ∈ [−pn+1, (1−p)n−1]. We claim that we can find positive constants C_2 > 0 and c > 0 such that for all n sufficiently large and s ∈ [−pn+1, (1−p)n−1] we have

(8.24) ψ_n(s) ≤ C_2 − cn^{−1}s².

We prove (8.24) in Step 3 below. For now we assume its validity and conclude the proof of (8.16). In view of φ_n ≤ C_1 + ψ_n(∆) and (8.24) we know that

e_N(1^n) ≤ exp( C_1 + C_2 + N log((1−p)/p) − n log(1−p) − (1/2) log n ) · exp( −cn^{−1}(N−pn)² ),

which proves (8.16) with C = e^{C_1+C_2}.

Step 3.
In this step we prove (8.24). A direct computation gives

ψ'_n(s) = −log(1 + s/(pn)) + log(1 − s/((1−p)n)) − 1/(2(pn+s)) + 1/(2((1−p)n−s)),
ψ''_n(s) = −1/(pn+s) − 1/((1−p)n−s) + 1/(2(pn+s)²) + 1/(2((1−p)n−s)²). (8.25)

Set u = pn+s and v = (1−p)n−s, so that u + v = n and u, v ≥ 1 for s ∈ [−pn+1, (1−p)n−1]. Since u, v ≥ 1 implies 1/(2u²) + 1/(2v²) ≤ (1/2)(1/u + 1/v), we conclude from (8.25) that

ψ''_n(s) ≤ −(1/2)(1/u + 1/v) = −n/(2uv) ≤ −2n^{−1} = −2cn^{−1}, (8.26)

where c = 1 and we used uv ≤ ((u+v)/2)² = n²/4. In addition, ψ_n(0) = 0 and ψ'_n(0) = (2p−1)/(2p(1−p)n). Next, we prove (8.24) in two cases, when s ∈ [−pn+1, 0] and s ∈ [0, (1−p)n−1], respectively.

1°. When s ∈ [−pn+1, 0], by the fundamental theorem of calculus and (8.26) we get for y ∈ [s, 0]

ψ'_n(y) = ψ'_n(0) − ∫_y^0 ψ''_n(r) dr ≥ ψ'_n(0) − 2cn^{−1}y = (2p−1)/(2p(1−p)n) − 2cn^{−1}y,

and a second application of the same argument yields for s ∈ [−pn+1, 0]

ψ_n(s) = ψ_n(0) − ∫_s^0 ψ'_n(y) dy ≤ −∫_s^0 [ (2p−1)/(2p(1−p)n) − 2cn^{−1}y ] dy = (2p−1)s/(2p(1−p)n) − cn^{−1}s².

When p ≤ 1/2 we have (2p−1)s/(2p(1−p)n) ≤ (1−2p)·pn/(2p(1−p)n) = (1−2p)/(2(1−p)), so (8.24) holds with C_2 = (1−2p)/(2(1−p)). When p > 1/2 the linear term is non-positive, so (8.24) holds with C_2 = 0.

2°. When s ∈ [0, (1−p)n−1], using the fundamental theorem of calculus and (8.26) we get

ψ'_n(s) = ψ'_n(0) + ∫_0^s ψ''_n(y) dy ≤ (2p−1)/(2p(1−p)n) − 2cn^{−1}s,

and a second application of the same argument yields for s ∈ [0, (1−p)n−1]

ψ_n(s) = ψ_n(0) + ∫_0^s ψ'_n(y) dy ≤ (2p−1)s/(2p(1−p)n) − cn^{−1}s².

When p ≥ 1/2 we have (2p−1)s/(2p(1−p)n) ≤ (2p−1)(1−p)n/(2p(1−p)n) = (2p−1)/(2p), so (8.24) holds with C_2 = (2p−1)/(2p). When p < 1/2 the linear term is non-positive, so (8.24) holds with C_2 = 0.

Combining cases 1° and 2° we complete the proof. □

Lemma 8.10.
Assume the same notation as in Definition 8.1. Fix z⃗ ∈ ℝ^k such that z_1 > ··· > z_k. Suppose that T_0 ∈ ℕ is sufficiently large so that for T ≥ T_0 we have z_k√T + ptT ≥ a_1√T + k + 1 and b_k√T + pT ≥ z_1√T + ptT + k + 1, and define λ^T_i = ⌊z_i√T + ptT⌋ for i = 1, ..., k (to ease notation we suppress the dependence of λ on T in what follows). Setting m = ⌊tT⌋ and n = T − m, define

A_λ(T) = det( e_{λ_i − x^T_j − i + j}(1^m) )_{1≤i,j≤k} · det( e_{y^T_i − λ_j − i + j}(1^n) )_{1≤i,j≤k}, (8.27)

B_λ(T) = (√(2π))^k · exp( kT log(1−p) + k log T + (k/2) log(p(1−p)) ) · exp( −log((1−p)/p) · Σ_{i=1}^k (y^T_i − x^T_i) ) · A_λ(T). (8.28)

We claim that

lim_{T→∞} B_λ(T) = (2π)^{−k/2} · exp( −(k/2) log(p(1−p)) − (k/2) log(t(1−t)) ) · det[ e^{c_1(p,t) a_i z_j} ]^k_{i,j=1} · det[ e^{c_2(p,t) b_i z_j} ]^k_{i,j=1} · ∏_{i=1}^k exp( −c_3(p,t) z_i² − (c_1(p,t) a_i² + c_2(p,t) b_i²)/2 ). (8.29)

Proof.
Let us write

A¹_λ = det( e_{λ_i − x^T_j − i + j}(1^m) )_{1≤i,j≤k}, A²_λ = det( e_{y^T_i − λ_j − i + j}(1^n) )_{1≤i,j≤k}, and A³_λ = det( e_{y^T_i − x^T_j − i + j}(1^{m+n}) )_{1≤i,j≤k}.

Then from Lemma 8.8 we have

A¹_λ = det[ exp( −(λ_i − x^T_j + j − i − pm)²/(2(1−p)pm) ) · exp( O(T^{−1/2}) ) ] · (√(2π))^{−k} · exp( −km log(1−p) − (k/2) log m − (k/2) log(p(1−p)) + log((1−p)/p) · Σ_{i=1}^k (λ_i − x^T_i) ), (8.30)

A²_λ = det[ exp( −(y^T_i − λ_j + j − i − pn)²/(2(1−p)pn) ) · exp( O(T^{−1/2}) ) ] · (√(2π))^{−k} · exp( −kn log(1−p) − (k/2) log n − (k/2) log(p(1−p)) + log((1−p)/p) · Σ_{i=1}^k (y^T_i − λ_i) ), (8.31)

A³_λ = det[ exp( −(y^T_i − x^T_j + j − i − pT)²/(2(1−p)pT) ) · exp( O(T^{−1/2}) ) ] · (√(2π))^{−k} · exp( −kT log(1−p) − (k/2) log T − (k/2) log(p(1−p)) + log((1−p)/p) · Σ_{i=1}^k (y^T_i − x^T_i) ), (8.32)

where the constants in the big O notation are uniform as the z_i vary over compact subsets of ℝ. Combining (8.30), (8.31) and (8.28) we see that

B_λ(T) = (2π)^{−k/2} · exp( −(k/2) log(p(1−p)) − (k/2) log(t(1−t)) + O(T^{−1}) ) · det[ exp( −(z_i − a_j)²/(2p(1−p)t) + O(T^{−1/2}) ) ] · det[ exp( −(b_i − z_j)²/(2p(1−p)(1−t)) + O(T^{−1/2}) ) ]. (8.33)

Taking the limit T → ∞ in (8.33) and using the identities

det[ exp( −(z_i − a_j)²/(2p(1−p)t) ) ] = det[ e^{c_1(p,t) a_i z_j} ]^k_{i,j=1} · ∏_{i=1}^k exp( −(c_1(p,t)/2)(a_i² + z_i²) ), and

det[ exp( −(b_i − z_j)²/(2p(1−p)(1−t)) ) ] = det[ e^{c_2(p,t) b_i z_j} ]^k_{i,j=1} · ∏_{i=1}^k exp( −(c_2(p,t)/2)(b_i² + z_i²) ), (8.34)

we get (8.29). □

Lemma 8.11.
Suppose the vector m⃗ = (m_1, ..., m_p) satisfies k = Σ_{i=1}^p m_i, and α_1 > α_2 > ··· > α_p. Then the determinant

U = det of the k × k matrix whose rows are grouped into p blocks, the r-th block consisting of the rows (z_j^{i−1} e^{α_r z_j})_{j=1,...,k} for i = 1, ..., m_r,

is non-zero for any z⃗ = (z_1, ..., z_k) ∈ ℝ^k whose entries are distinct.

Proof. We claim that the equation in z over ℝ

(ξ_1 + ξ_2 z + ··· + ξ_{m_1} z^{m_1−1}) e^{α_1 z} + ··· + (ξ_{m_1+···+m_{p−1}+1} + ··· + ξ_k z^{m_p−1}) e^{α_p z} = 0

has at most k − 1 distinct roots whenever (ξ_1, ..., ξ_k) ∈ ℝ^k is non-zero.

Denote the rows of the matrix in the definition of U by v_1, ..., v_k. If the above claim holds, we conclude that we cannot find a non-zero (ξ_1, ..., ξ_k) ∈ ℝ^k such that ξ_1 v_1 + ··· + ξ_k v_k = 0, since such a relation would mean that the function on the left side above vanishes at the k distinct points z_1, ..., z_k. Thus the k row vectors of the determinant are linearly independent and the determinant is non-zero. It therefore suffices to prove the claim, and we do so by induction on k.

1°. If k = 2, the equation is either (ξ_1 + ξ_2 z) e^{α_1 z} = 0 or ξ_1 e^{α_1 z} + ξ_2 e^{α_2 z} = 0, where ξ_1, ξ_2 ∈ ℝ are not both zero. In either scenario it is easy to see that the equation has at most 1 root.

2°. Suppose the claim holds for all k ≤ n.

3°. When k = n+1, we have the equation

(ξ_1 + ξ_2 z + ··· + ξ_{m_1} z^{m_1−1}) e^{α_1 z} + ··· + (ξ_{m_1+···+m_{p−1}+1} + ··· + ξ_k z^{m_p−1}) e^{α_p z} = 0,

but now Σ_{i=1}^p m_i = n+1. Without loss of generality, suppose (ξ_1, ..., ξ_{m_1}) has a non-zero element and ξ_ℓ is the first non-zero element. Notice that the above equation has the same roots as the following one:

F(z) = (ξ_ℓ z^{ℓ−1} + ··· + ξ_{m_1} z^{m_1−1}) + ··· + (ξ_{m_1+···+m_{p−1}+1} + ··· + ξ_k z^{m_p−1}) e^{(α_p − α_1)z} = 0.

Assume it has at least n+1 distinct roots η_1 < η_2 < ··· < η_{n+1}. Then F'(z) = 0 has at least n distinct roots δ_1 < ··· < δ_n with η_1 < δ_1 < η_2 < ··· < δ_n < η_{n+1}, by Rolle's theorem. On the other hand,

F'(z) = (ξ_ℓ(ℓ−1) z^{ℓ−2} + ··· + ξ_{m_1}(m_1−1) z^{m_1−2}) + ··· + (ξ'_{m_1+···+m_{p−1}+1} + ··· + ξ'_k z^{m_p−1}) e^{(α_p − α_1)z},

where the ξ'_i, i = m_1+1, ..., k, are coefficients that can be computed explicitly. By the induction hypothesis, F'(z) = 0 has at most (m_1 − 1) + m_2 + ··· + m_p − 1 = n − 1 roots, which leads to a contradiction. Therefore our claim holds and we have proved the lemma. □

Proof. (of Proposition 8.2 when a⃗, b⃗ ∈ W°_k) Let us fix z⃗ ∈ W°_k and define λ^T as in Lemma 8.10. We also let x^T_i and y^T_i be sequences of integers such that lim_{T→∞} x^T_i/√T = a_i and lim_{T→∞} (y^T_i − pT)/√T = b_i for i = 1, ..., k. In view of the Jacobi–Trudi formula (8.10), we know that the B_λ(T) of Lemma 8.10 are non-negative, and from (8.29) they converge, as T tends to infinity, to

(2π)^{−k/2} · exp( −(k/2) log(p(1−p)) − (k/2) log(t(1−t)) ) · det[ e^{c_1(p,t) a_i z_j} ]^k_{i,j=1} · det[ e^{c_2(p,t) b_i z_j} ]^k_{i,j=1} · ∏_{i=1}^k exp( −c_3(p,t) z_i² − (c_1(p,t) a_i² + c_2(p,t) b_i²)/2 ).

On the other hand, we have that when the entries of a⃗, b⃗ are distinct,

H(z⃗) = det[ e^{c_1(p,t) a_i z_j} ]^k_{i,j=1} · det[ e^{c_2(p,t) b_i z_j} ]^k_{i,j=1} · ∏_{i=1}^k e^{−c_3(p,t) z_i²}.

The last two statements imply that H(z⃗) ≥ 0, and from Lemma 8.11 we have H(z⃗) ≠ 0, so that H(z⃗) > 0 for z⃗ ∈ W°_k. If z⃗ ∈ W_k \ W°_k, then z_i = z_j for some i ≠ j, and then H(z⃗) = 0, since the matrices in the determinants in the equation above for H(z⃗) have equal i-th and j-th columns, which makes the determinants vanish. This proves the first two statements of the proposition.

To prove the third statement, observe that by the continuity and non-negativity of H(z⃗), and the fact that it is strictly positive on the open set W°_k, we know that Z_c ∈ (0, ∞], and so we only need to prove that Z_c < ∞. Using the formula

det[ A_{i,j} ]^k_{i,j=1} = Σ_{σ∈S_k} (−1)^σ · ∏_{i=1}^k A_{i,σ(i)}

and the triangle inequality, we see that

| det[ e^{c_1(p,t) a_i z_j} ]^k_{i,j=1} | ≤ Σ_{σ∈S_k} ∏_{j=1}^k e^{c_1(p,t) a_{σ(j)} z_j} ≤ Σ_{σ∈S_k} ∏_{j=1}^k e^{c_1(p,t)(Σ_{i=1}^k |a_i|)·|z_j|} ≤ k! · ∏_{j=1}^k e^{C_1|z_j|}, where C_1 = c_1(p,t) Σ_{i=1}^k |a_i|. (8.35)

Analogously, defining the constant C_2 = c_2(p,t) Σ_{i=1}^k |b_i|, we have

(8.36) | det[ e^{c_2(p,t) b_i z_j} ]^k_{i,j=1} | ≤ k! · ∏_{j=1}^k e^{C_2|z_j|}.

Using (8.35) and (8.36), we get

| H(z⃗) | ≤ (k!)² · ∏_{i=1}^k e^{C_3|z_i| − c_3(p,t) z_i²}, (8.37)

where C_3 = C_1 + C_2. Since the right side of (8.37) is integrable (because of the square in the exponential), we conclude that H(z⃗) is also integrable by domination, and so Z_c < ∞ as desired. □

8.4. Proof of Proposition 8.3 for a⃗, b⃗ ∈ W°_k. For clarity we split the proof into several steps.
Step 1.
In this step we prove that $Z_c$ from Proposition 8.2, in the case when $\vec{a},\vec{b}$ have distinct entries, satisfies the equation
\[ Z_c = (2\pi)^{k/2}\, (p(1-p)t(1-t))^{k/2} \cdot e^{\frac{c(t,p)}{2}\sum_{i=1}^k a_i^2} \cdot e^{\frac{c(t,p)}{2}\sum_{i=1}^k b_i^2} \cdot \det\Big[ e^{-\frac{(b_i-a_j)^2}{2p(1-p)}} \Big]_{i,j=1}^k. \tag{8.38} \]
Let $B_\lambda(T)$ be as in Lemma 8.10 for $\lambda\in W_k$, with $\vec{x}^T, \vec{y}^T$ as in the statement of the proposition. It follows from Lemma 8.7 that
\[ \sum_{\lambda\in W_k} B_\lambda(T)\, T^{-k/2} = (\sqrt{2\pi})^{k} \cdot \exp\Big( kT\log(1-p) + (k/2)\log T + (k/2)\log p(1-p) \Big) \cdot \exp\Big( -\log\Big(\tfrac{1-p}{p}\Big)\sum_{i=1}^k (y_i^T - x_i^T) \Big) \cdot \det\Big( e_{y_i^T - x_j^T - i + j}(1^{m+n}) \Big)_{1\le i,j\le k}, \tag{8.39} \]
where we recall that $m = \lfloor tT\rfloor$ and $n = T - m$. Taking the $T\to\infty$ limit in (8.39) and using (8.32) we obtain
\[ \lim_{T\to\infty} \sum_{\lambda\in W_k} B_\lambda(T)\, T^{-k/2} = \det\Big[ e^{-\frac{(b_i-a_j)^2}{2p(1-p)}} \Big]_{i,j=1}^k. \tag{8.40} \]
For $\lambda\in W_k$ and $T\in\mathbb{N}$ we define $Q_\lambda(T)$ to be the cube $[\lambda_1 T^{-1/2} - pt\sqrt{T}, (\lambda_1+1)T^{-1/2} - pt\sqrt{T}) \times\cdots\times [\lambda_k T^{-1/2} - pt\sqrt{T}, (\lambda_k+1)T^{-1/2} - pt\sqrt{T})$, and note that $Q_\lambda(T)$ has Lebesgue measure $T^{-k/2}$. In addition, we define the step functions $f_T$ through
\[ f_T(\vec{z}) = \sum_{\lambda\in W_k} B_\lambda(T)\cdot \mathbf{1}_{Q_\lambda(T)}(\vec{z}) \tag{8.41} \]
and observe that
\[ \sum_{\lambda\in W_k} B_\lambda(T)\, T^{-k/2} = \int_{\mathbb{R}^k} f_T(\vec{z})\, d\vec{z}, \tag{8.42} \]
where $d\vec{z}$ represents the usual Lebesgue measure on $\mathbb{R}^k$. In view of (8.29) we know that for almost every $\vec{z} = (z_1,\dots,z_k)\in\mathbb{R}^k$ we have
\[ \lim_{T\to\infty} f_T(\vec{z}) = \mathbf{1}\{z_1 > \cdots > z_k\}\cdot H(\vec{z})\cdot (2\pi p(1-p)t(1-t))^{-k/2}\cdot \prod_{i=1}^k \exp\Big( -\frac{c(t,p)}{2} a_i^2 - \frac{c(t,p)}{2} b_i^2 \Big). \tag{8.43} \]
We claim that there exists a non-negative integrable function $g$ on $\mathbb{R}^k$ such that for all large enough $T$
\[ |f_T(z_1,\dots,z_k)| \le |g(z_1,\dots,z_k)|. \tag{8.44} \]
We will prove (8.44) in Step 2 below. For now we assume its validity and conclude the proof of (8.38). From (8.43) and the dominated convergence theorem with dominating function $g$ as in (8.44) we know that
\[ \lim_{T\to\infty} \int_{\mathbb{R}^k} f_T(\vec{z})\, d\vec{z} = \int_{W_k} H(\vec{z})\, (2\pi p(1-p)t(1-t))^{-k/2} \prod_{i=1}^k \exp\Big( -\frac{c(t,p)}{2} a_i^2 - \frac{c(t,p)}{2} b_i^2 \Big)\, d\vec{z}. \tag{8.45} \]
Combining (8.45), (8.42) and (8.40) we conclude that
\[ \det\Big[ e^{-\frac{(b_i-a_j)^2}{2p(1-p)}} \Big]_{i,j=1}^k = \int_{W_k} H(\vec{z})\cdot (2\pi p(1-p)t(1-t))^{-k/2}\cdot \prod_{i=1}^k e^{-\frac{c(t,p)}{2}(a_i^2 + b_i^2)}\, d\vec{z}, \tag{8.46} \]
which clearly establishes (8.38).

Step 2.
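The dominating function constructed in this step rests on Gaussian estimates of the type in (8.16) and (8.50) for the weights $E(N,m)$, whose combinatorial factor is $e_N(1^m) = \binom{m}{N}$. As a numerical sanity check of this kind of Gaussian domination of binomial weights — an illustration using the explicit constants from Hoeffding's inequality, not the constants $C, c$ of the proof — one can verify in Python:

```python
import math

def binom_pmf(m, N, p):
    # e_N(1^m) = C(m, N), multiplied by the Bernoulli step weights
    return math.comb(m, N) * p**N * (1 - p)**(m - N)

def gaussian_dominates(m, p):
    # The point mass at N is at most the tail beyond N (resp. below N),
    # and Hoeffding's inequality bounds that tail by exp(-2 (N - pm)^2 / m).
    return all(
        binom_pmf(m, N, p) <= math.exp(-2 * (N - p * m) ** 2 / m)
        for N in range(m + 1)
    )

print(gaussian_dominates(200, 0.3))  # True
```

The same quadratic-in-$(N - pm)$ decay is what makes the right side of (8.52) integrable.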
In this step we demonstrate an integrable function $g$ that satisfies (8.44). Let us fix $\lambda\in W_k$. If $\lambda_i \ge x_i^T + m + 1$ or $\lambda_i < x_i^T$ for some $i\in\{1,\dots,k\}$, we know that
\[ \det\Big( e_{\lambda_i - x_j^T - i + j}(1^m) \Big)_{1\le i,j\le k} = 0. \]
To see this, observe that if $\lambda_s \ge x_s^T + m + 1$ then the top-right $s\times(k-s+1)$ block of the matrix (rows $1,\dots,s$ and columns $s,\dots,k$) consists of zeros (since $e_N(1^m) = 0$ for $N \ge m+1$). Thus if $A$ and $B$ are the top-left $(s-1)\times(s-1)$ submatrix and bottom-right $(k-s+1)\times(k-s+1)$ submatrix, we would have
\[ \det\Big( e_{\lambda_i - x_j^T - i + j}(1^m) \Big)_{1\le i,j\le k} = \det A\cdot \det B, \]
but then $\det B = 0$ since its top row consists of $0$'s. Similar arguments show that the determinant is $0$ if $\lambda_s < x_s^T$ for some $s\in\{1,\dots,k\}$, where now we would get a block of $0$'s in the bottom-left corner, using $e_N(1^m) = 0$ for $N < 0$. From the definition of $B_\lambda(T)$ we conclude that $B_\lambda(T) = 0$ if $\lambda_i \ge x_i^T + m + 1$ or $\lambda_i < x_i^T$. Similarly, we have $B_\lambda(T) = 0$ if $y_i^T \ge \lambda_i + n + 1$ or $y_i^T < \lambda_i$ for some $i\in\{1,\dots,k\}$, using that $\det\big( e_{y_i^T - \lambda_j - i + j}(1^n) \big)_{1\le i,j\le k} = 0$ in this case. Overall, we conclude that $B_\lambda(T) = 0$ unless $m \ge \lambda_i - x_i^T \ge 0$ and $n \ge y_i^T - \lambda_i \ge 0$ for all $i\in\{1,\dots,k\}$, which implies that for all large enough $T$ we have $B_\lambda(T) = 0$ unless
\[ |\lambda_i - x_j^T + j - i| \le (1+p)m \quad\text{and}\quad |y_i^T - \lambda_j + j - i| \le (1+p)n \tag{8.47} \]
for all $i,j\in\{1,\dots,k\}$. To see the latter, suppose that there exist $i,j$ such that $(1+p)m < |\lambda_i - x_j^T + j - i|$. Then we have
\[ (1+p)m < |\lambda_i - x_j^T + j - i| \le |\lambda_i - x_i^T| + k + |x_i^T - x_j^T| = |\lambda_i - x_i^T| + O(\sqrt{T}). \]
When $T$ is sufficiently large, the above inequality implies $\lambda_i - x_i^T \notin [0,m]$, so that $B_\lambda(T) = 0$; a similar argument applies to $y_i^T - \lambda_j + j - i$, which justifies (8.47). From the definition of $B_\lambda(T)$ we know
\[ B_\lambda(T) = C_T\cdot \det\big[ E(\lambda_i - x_j^T + j - i, m) \big]_{i,j=1}^k\cdot \det\big[ E(y_i^T - \lambda_j + j - i, n) \big]_{i,j=1}^k, \]
where
\[ E(N, n) = e_N(1^n)\cdot \exp\Big( -N\log\Big(\tfrac{1-p}{p}\Big) + n\log(1-p) + (1/2)\log n \Big), \]
and
\[ C_T = (\sqrt{2\pi})^{k}\, (p(1-p))^{k/2}\cdot \exp\big( k\log T - (k/2)\log n - (k/2)\log m \big). \tag{8.48} \]
Notice that $C_T$ is uniformly bounded for all $T$ large enough, because
\[ k\log T - (k/2)\log n - (k/2)\log m = (k/2)\log\Big( \frac{T^2}{\lfloor tT\rfloor\,(T - \lfloor tT\rfloor)} \Big) = -(k/2)\log(t(1-t)) + O(T^{-1}), \tag{8.49} \]
and the $O(T^{-1})$ term is uniformly bounded. In view of (8.16) we know that we can find constants $C, c > 0$ such that for all large enough $T$ and $N_1\in[0,m]$ and $N_2\in[0,n]$ we have
\[ E(N_1, m) \le C\exp\big( -c\, m^{-1}(N_1 - pm)^2 \big) \quad\text{and}\quad E(N_2, n) \le C\exp\big( -c\, n^{-1}(N_2 - pn)^2 \big). \tag{8.50} \]
Observing that $e_r(1^n) = 0$ for $r > n$ or $r < 0$, we know that (8.50) also holds for all $N_1\in[-(1+p)m, (1+p)m]$ and $N_2\in[-(1+p)n, (1+p)n]$. Combining (8.47), (8.48) and (8.50) we see that for all $\lambda\in W_k$ and $T$ sufficiently large
\[ 0 \le B_\lambda(T) \le \widetilde{C} \sum_{\sigma,\tau\in S_k} \prod_{i=1}^k \mathbf{1}\{ |\lambda_i - x_{\sigma(i)}^T + \sigma(i) - i| \le (1+p)m \}\cdot \mathbf{1}\{ |y_i^T - \lambda_{\tau(i)} + \tau(i) - i| \le (1+p)n \}\cdot \exp\Big( -\widetilde{c}\, T^{-1}\big[ (\lambda_i - \sqrt{T}\, a_{\sigma(i)} - ptT)^2 + (\sqrt{T}\, b_i - \lambda_{\tau(i)} + ptT)^2 \big] \Big), \tag{8.51} \]
where $\widetilde{c}, \widetilde{C} > 0$ depend on $p, t, k$ but not on $T$, provided that it is sufficiently large. In particular, we see that if $\vec{z}\in\mathbb{R}^k$ then either $\vec{z}\notin Q_\lambda(T)$ for any $\lambda\in W_k$, in which case $f_T(\vec{z}) = 0$, or $\vec{z}\in Q_\lambda(T)$ for some $\lambda\in W_k$, in which case (8.51) implies
\[ 0 \le f_T(\vec{z}) \le C \sum_{\sigma,\tau\in S_k} \prod_{i=1}^k \exp\big( -c\,\big( (z_i - a_{\sigma(i)})^2 + (b_i - z_{\tau(i)})^2 \big) \big), \tag{8.52} \]
where $C, c > 0$ depend on $p, t, k$ but not on $T$, provided that it is sufficiently large. We finally see that (8.44) holds with $g$ equal to the right side of (8.52), which is clearly integrable.

Step 3.
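The probabilities $\mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}$ appearing in this step are uniform measures on families of non-crossing Bernoulli paths, and the cardinalities of such families are determinants of the form $\det[e_{y_i - x_j - i + j}(1^T)]_{i,j=1}^k$ with $e_N(1^T) = \binom{T}{N}$ (the Lindström-Gessel-Viennot counting underlying Lemma 8.10). This can be checked by brute force in a small case; the endpoints below are illustrative choices, not taken from the text:

```python
import math
from itertools import product

def count_noncrossing(T, x, y):
    # Brute-force: enumerate Bernoulli step sequences (each step +0 or +1)
    # for k walkers; keep families with L_1(s) >= L_2(s) >= ... at all times.
    k = len(x)
    count = 0
    for steps in product(product((0, 1), repeat=T), repeat=k):
        paths = []
        for i in range(k):
            pos, path = x[i], [x[i]]
            for s in steps[i]:
                pos += s
                path.append(pos)
            paths.append(path)
        if all(p[-1] == y[i] for i, p in enumerate(paths)) and all(
            paths[i][s] >= paths[i + 1][s]
            for i in range(k - 1) for s in range(T + 1)
        ):
            count += 1
    return count

def lgv_det(T, x, y):
    # det[ binom(T, y_i - x_j - i + j) ] counts the same families
    k = len(x)
    m = [[math.comb(T, y[i] - x[j] - (i + 1) + (j + 1))
          if 0 <= y[i] - x[j] - (i + 1) + (j + 1) <= T else 0
          for j in range(k)] for i in range(k)]
    # a 2x2 determinant suffices for this example
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

T, x, y = 4, (1, 0), (2, 2)
print(count_noncrossing(T, x, y), lgv_det(T, x, y))  # both equal 20
```

The index shifts $-i+j$ implement the standard reduction from weakly ordered paths to strictly non-intersecting ones.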
Our work in Steps 1 and 2 implies that the density $\rho(\vec{z})$ we want to prove to be the weak limit of $Z^T$ has the form
\[ \rho(\vec{z}) = Z_c^{-1}\cdot \det\big[ e^{c(t,p)\, a_i z_j} \big]_{i,j=1}^k\cdot \det\big[ e^{c(t,p)\, b_i z_j} \big]_{i,j=1}^k\cdot \prod_{i=1}^k e^{-c(t,p)\, z_i^2}, \]
where
\[ Z_c = (2\pi)^{k/2}\, (p(1-p)t(1-t))^{k/2}\cdot e^{\frac{c(t,p)}{2}\sum_{i=1}^k a_i^2}\cdot e^{\frac{c(t,p)}{2}\sum_{i=1}^k b_i^2}\cdot \det\Big[ e^{-\frac{(b_i-a_j)^2}{2p(1-p)}} \Big]_{i,j=1}^k. \tag{8.53} \]
We fix a compact set $K\subset W_k^\circ$ and for $\vec{z}\in K$ we define $\lambda^T(\vec{z})\in W_k$ through
\[ \lambda_i^T(\vec{z}) = \lfloor ptT + z_i T^{1/2} \rfloor \quad\text{for } i = 1,\dots,k. \]
In this step we prove that
\[ \lim_{T\to\infty} T^{k/2}\cdot \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( L_1^T(m) = \lambda_1^T(\vec{z}),\dots, L_k^T(m) = \lambda_k^T(\vec{z}) \big) = \rho(\vec{z}), \tag{8.54} \]
where the convergence is uniform over $K$. Combining (8.11), (8.28), (8.32), (8.33), (8.34) we get
\[ T^{k/2}\cdot \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( L_1^T(m) = \lambda_1^T(\vec{z}),\dots, L_k^T(m) = \lambda_k^T(\vec{z}) \big) = [1 + O(T^{-1/2})]\, (2\pi)^{-k/2}\cdot \det\big[ e^{c(t,p)\, a_i z_j} \big]\cdot \det\big[ e^{c(t,p)\, b_i z_j} \big]\cdot \exp\big( -(k/2)\log(p(1-p)) - (k/2)\log(t(1-t)) \big)\cdot \prod_{i=1}^k \exp\Big( -\frac{c(t,p)}{2}(a_i^2 + z_i^2) - \frac{c(t,p)}{2}(b_i^2 + z_i^2) \Big)\cdot \det\Bigg[ \exp\Bigg( -\frac{(y_i^T - x_j^T + j - i - pT)^2}{2(1-p)pT} \Bigg) \Bigg]^{-1}, \]
where the constants in the big $O$ notation are uniform over $K$. Using that
\[ \det\Bigg[ \exp\Bigg( -\frac{(y_i^T - x_j^T + j - i - pT)^2}{2(1-p)pT} \Bigg) \Bigg] = \det\Big[ e^{-\frac{(b_i-a_j)^2}{2p(1-p)}} \Big]\cdot [1 + o(1)], \]
where the constant in the little $o$ notation does not depend on $K$, and (8.53), we see that
\[ T^{k/2}\cdot \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( L_1^T(m) = \lambda_1^T(\vec{z}),\dots, L_k^T(m) = \lambda_k^T(\vec{z}) \big) = [1 + O(T^{-1/2})][1 + o(1)]\cdot \rho(\vec{z}), \]
which implies (8.54).

Step 4.
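The passage in this step from lattice sums over $\lambda$ to integrals of step functions over the cubes $Q_\lambda(T)$ is the same device used in Step 1: each lattice point carries a cell of volume $T^{-k/2}$. A one-dimensional numerical sketch of this mechanism, with a standard Gaussian standing in for the limiting integrand (an illustration only):

```python
import math

def lattice_to_integral(T):
    # Sum over lattice points lam of f(z_lam) * T**-0.5, where
    # z_lam = lam * T**-0.5 is the left endpoint of the cell
    # [lam*T**-0.5, (lam+1)*T**-0.5); this is the integral of the
    # associated step function, and it converges to the integral of f.
    h = T ** -0.5
    lam_range = range(int(-8 / h), int(8 / h))
    return sum(math.exp(-((lam * h) ** 2) / 2) * h for lam in lam_range)

# Compare with the exact Gaussian integral sqrt(2*pi)
print(abs(lattice_to_integral(40_000) - math.sqrt(2 * math.pi)))
```

As $T$ grows the cell width $T^{-1/2}$ shrinks and the discrepancy with the limiting integral vanishes, exactly as in (8.41)-(8.45).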
In this step we prove that for any compact rectangle $R = [u_1, v_1]\times\cdots\times[u_k, v_k]\subset W_k^\circ$
\[ \lim_{T\to\infty} \mathbb{P}\big( Z^T\in R \big) = \int_R \rho(\vec{z})\, d\vec{z}, \tag{8.55} \]
where we have written $\mathbb{P}$ in place of $\mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}$ to ease the notation. Define $m_i^T = \lceil u_i\sqrt{T} + ptT \rceil$ and $M_i^T = \lfloor v_i\sqrt{T} + ptT \rfloor$. Then we have
\[ \mathbb{P}\big( Z^T\in R \big) = \mathbb{P}\big( u_i\sqrt{T} + ptT \le L_i^T(\lfloor tT\rfloor) \le v_i\sqrt{T} + ptT,\ i = 1,\dots,k \big) = \sum_{\lambda_1 = m_1^T}^{M_1^T}\cdots\sum_{\lambda_k = m_k^T}^{M_k^T} \mathbb{P}\big( L_1^T(\lfloor tT\rfloor) = \lambda_1,\dots, L_k^T(\lfloor tT\rfloor) = \lambda_k \big) = \sum_{\lambda_1 = m_1^T}^{M_1^T}\cdots\sum_{\lambda_k = m_k^T}^{M_k^T} T^{-k/2}\cdot T^{k/2}\cdot \mathbb{P}\big( L_1^T(\lfloor tT\rfloor) = \lambda_1,\dots, L_k^T(\lfloor tT\rfloor) = \lambda_k \big) = \int_{\mathbb{R}^k} h_T(\vec{z})\, d\vec{z}, \]
where $h_T(\vec{z})$ is the step function
\[ h_T(\vec{z}) = \sum_{\lambda_1 = m_1^T}^{M_1^T}\cdots\sum_{\lambda_k = m_k^T}^{M_k^T} \mathbf{1}_{Q_\lambda(T)}(\vec{z})\cdot T^{k/2}\cdot \mathbb{P}\big( L_1^T(\lfloor tT\rfloor) = \lambda_1,\dots, L_k^T(\lfloor tT\rfloor) = \lambda_k \big), \]
where, as in Step 1, $Q_\lambda(T)$ is the cube $[\lambda_1 T^{-1/2} - pt\sqrt{T}, (\lambda_1+1)T^{-1/2} - pt\sqrt{T})\times\cdots\times[\lambda_k T^{-1/2} - pt\sqrt{T}, (\lambda_k+1)T^{-1/2} - pt\sqrt{T})$. The last equation and (8.54) together imply that
\[ \mathbb{P}\big( Z^T\in R \big) = [1 + o(1)]\cdot \int_R \rho(\vec{z})\, d\vec{z}. \]
Letting $T\to\infty$ in the last equation we obtain (8.55).

Step 5.
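The approximation used in this step — exhausting an open set from inside by finitely many shrunken rectangles and letting the shrinking parameter go to zero — can be mimicked numerically in one dimension. Here a standard Gaussian plays the role of $\rho$ and a union of two intervals plays the role of $U$ (both are illustrative choices, not objects from the text):

```python
import math

def integral(f, a, b, n=20000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

gauss = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
U = [(0.0, 1.0), (2.0, 4.0)]  # union of disjoint open intervals

def inner_mass(eps):
    # mass of the shrunken rectangles [u + eps, v - eps], as in Step 5
    return sum(integral(gauss, u + eps, v - eps) for (u, v) in U)

full = inner_mass(0.0)
for eps in (0.1, 0.01, 0.001):
    print(eps, full - inner_mass(eps))  # deficit shrinks as eps -> 0
```

Since the density is continuous and non-negative, the inner masses increase to the full mass of $U$, which is the content of the monotone convergence step above.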
In this step we conclude the proof of the proposition. By [15, Theorem 3.10.1], to prove the weak convergence of $Z^T$ to $\rho$ it suffices to show that for any open set $U\subset W_k^\circ$ we have
\[ \liminf_{T\to\infty} \mathbb{P}\big( Z^T\in U \big) \ge \int_U \rho(\vec{z})\, d\vec{z}. \tag{8.56} \]
In the remainder we fix an open set $U\subset W_k^\circ$ and prove (8.56).

From [27, Theorem 1.4] we know that we can write $U = \cup_{i=1}^\infty R_i$, where $R_i = [u_1^i, v_1^i]\times\cdots\times[u_k^i, v_k^i]$ are rectangles with pairwise disjoint interiors. Let us fix $n\in\mathbb{N}$ and $\epsilon > 0$ and put $R_i^\epsilon = [u_1^i + \epsilon, v_1^i - \epsilon]\times\cdots\times[u_k^i + \epsilon, v_k^i - \epsilon]$. By finite additivity of $\mathbb{P}$ and (8.55) we know
\[ \liminf_{T\to\infty} \mathbb{P}\big( Z^T\in U \big) \ge \liminf_{T\to\infty} \mathbb{P}\big( Z^T\in \cup_{i=1}^n R_i^\epsilon \big) = \liminf_{T\to\infty} \sum_{i=1}^n \mathbb{P}\big( Z^T\in R_i^\epsilon \big) = \sum_{i=1}^n \int_{R_i^\epsilon} \rho(\vec{z})\, d\vec{z} = \int_{\cup_{i=1}^n R_i^\epsilon} \rho(\vec{z})\, d\vec{z}. \]
We can now let $\epsilon\to 0$ and $n\to\infty$ above and apply the monotone convergence theorem to conclude that the right side converges to $\int_U \rho(\vec{z})\, d\vec{z}$. Here we use that $\rho$ is continuous and non-negative. Doing this brings us to (8.56), and thus we conclude the statement of the proposition.

8.5. Proof of Proposition 8.2 for any $\vec{a},\vec{b}\in W_k$. In this section we give the proof of Proposition 8.2 for any $\vec{a},\vec{b}\in W_k$. In what follows we assume that $\vec{a},\vec{b}$ have the form in (8.3), which we recall here for the reader's convenience:
\[ \vec{a} = (a_1,\dots,a_k) = (\underbrace{\alpha_1,\dots,\alpha_1}_{m_1},\dots,\underbrace{\alpha_p,\dots,\alpha_p}_{m_p}), \qquad \vec{b} = (b_1,\dots,b_k) = (\underbrace{\beta_1,\dots,\beta_1}_{n_1},\dots,\underbrace{\beta_q,\dots,\beta_q}_{n_q}). \tag{8.57} \]
We recall that $\alpha_1 > \alpha_2 > \cdots > \alpha_p$, $\beta_1 > \beta_2 > \cdots > \beta_q$ and $\sum_{i=1}^p m_i = \sum_{i=1}^q n_i = k$. We denote $\vec{m} = (m_1,\dots,m_p)$ and $\vec{n} = (n_1,\dots,n_q)$.
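The multiplicity encoding in (8.57) is straightforward to operationalize. The following sketch (with illustrative helper names, not from the text) builds the clustered vector $\vec{a}$ from the distinct values $\alpha_i$ and the multiplicities $\vec{m}$, checking the constraints $\alpha_1 > \cdots > \alpha_p$ and $\sum_i m_i = k$:

```python
def cluster(values, mults):
    # Build (alpha_1,...,alpha_1, ..., alpha_p,...,alpha_p) as in (8.57)
    assert all(values[i] > values[i + 1] for i in range(len(values) - 1)), \
        "alpha_1 > alpha_2 > ... > alpha_p is required"
    return tuple(v for v, m in zip(values, mults) for _ in range(m))

a = cluster((1.5, 0.0, -2.0), (2, 1, 3))  # here k = 2 + 1 + 3 = 6
print(a)  # (1.5, 1.5, 0.0, -2.0, -2.0, -2.0)
```

Vectors of this form lie on the boundary of $W_k^\circ$, which is why the $\epsilon$-perturbations introduced next are needed.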
If $\vec{a},\vec{b}$ have the above form, we recall from (8.5) that
\[ H(\vec{z}) = \varphi(\vec{a}, \vec{z}, \vec{m})\cdot \psi(\vec{b}, \vec{z}, \vec{n})\cdot \prod_{i=1}^k e^{-c(t,p)\, z_i^2}, \tag{8.58} \]
where $\varphi$ and $\psi$ are as in (8.4).

We next introduce some new notation that will be useful for our arguments. For any $\epsilon > 0$ we define the vectors $\vec{a}^{+\epsilon}$ and $\vec{b}^{+\epsilon}$ through
\[ (a^{+\epsilon})_{m_1+\cdots+m_{i-1}+j} = \alpha_i + (m_i - j + 1)\epsilon \ \text{ for } i = 1,\dots,p \text{ and } j = 1,\dots,m_i, \]
\[ (b^{+\epsilon})_{n_1+\cdots+n_{i-1}+j} = \beta_i + (n_i - j + 1)\epsilon \ \text{ for } i = 1,\dots,q \text{ and } j = 1,\dots,n_i. \tag{8.59} \]
Similarly, we define the vectors $\vec{a}^{-\epsilon}$ and $\vec{b}^{-\epsilon}$ through
\[ (a^{-\epsilon})_{m_1+\cdots+m_{i-1}+j} = \alpha_i - j\epsilon \ \text{ for } i = 1,\dots,p \text{ and } j = 1,\dots,m_i, \]
\[ (b^{-\epsilon})_{n_1+\cdots+n_{i-1}+j} = \beta_i - j\epsilon \ \text{ for } i = 1,\dots,q \text{ and } j = 1,\dots,n_i. \tag{8.60} \]
We next let $H^{+\epsilon}, H^{-\epsilon}$ be as in (8.58) for the vectors $\vec{a}^{+\epsilon}, \vec{b}^{+\epsilon}$ and $\vec{a}^{-\epsilon}, \vec{b}^{-\epsilon}$, respectively. In particular,
\[ H^{\pm\epsilon}(\vec{z}) = \det\big[ e^{c(t,p)(a^{\pm\epsilon})_i z_j} \big]_{i,j=1}^k\cdot \det\big[ e^{c(t,p)(b^{\pm\epsilon})_i z_j} \big]_{i,j=1}^k\cdot \prod_{i=1}^k e^{-c(t,p)\, z_i^2}. \tag{8.61} \]
Observe that by construction we have $\vec{a}^{\pm\epsilon}, \vec{b}^{\pm\epsilon}\in W_k^\circ$ for all sufficiently small $\epsilon\in(0,1)$, which we implicitly assume in the sequel. It follows from our work in Section 8.3 that
\[ Z^{\pm\epsilon} = \int_{W_k} H^{\pm\epsilon}(\vec{z})\, d\vec{z}\in(0,\infty), \]
and so the functions
\[ \rho^{\pm\epsilon}(\vec{z}) = [Z^{\pm\epsilon}]^{-1}\cdot H^{\pm\epsilon}(\vec{z}) \tag{8.62} \]
are well-defined densities on $W_k$.

We next recall some basic notation for multivariate Taylor series, following [3, Chapter 3]. Suppose $\sigma = (\sigma_1,\dots,\sigma_k)$ is a multi-index of length $k$. In our context we require $\sigma_1,\dots,\sigma_k$ to be non-negative integers (some of them may be zero). We define $|\sigma| = \sum_{i=1}^k \sigma_i$ to be the order of $\sigma$. Suppose $\tau = (\tau_1,\dots,\tau_k)$ is another multi-index of length $k$. We say $\tau\le\sigma$ if $\tau_i\le\sigma_i$ for $i = 1,\dots,k$, and we say $\tau<\sigma$ if $\tau\le\sigma$ and there exists at least one index $i$ such that $\tau_i<\sigma_i$. We then define the partial derivative with respect to the multi-index $\sigma$:
\[ D^\sigma f(x_1,\dots,x_k) = \frac{\partial^{|\sigma|} f(x_1,\dots,x_k)}{\partial x_1^{\sigma_1}\,\partial x_2^{\sigma_2}\cdots \partial x_k^{\sigma_k}}. \]
We also have the Taylor expansion for multivariate functions:
\[ f(x_1,\dots,x_k) = \sum_{|\sigma|\le r} \frac{1}{\sigma!}\, D^\sigma f(\vec{x}_0)\, (\vec{x} - \vec{x}_0)^\sigma + R^f_{r+1}(\vec{x}, \vec{x}_0). \tag{8.63} \]
In this equation, $\sigma! = \sigma_1!\,\sigma_2!\cdots\sigma_k!$ is the factorial with respect to the multi-index $\sigma$, $\vec{x}_0$ is the constant vector at which we expand the function $f$, $(\vec{x} - \vec{x}_0)^\sigma$ stands for $(x_1 - x_{0,1})^{\sigma_1}\cdots(x_k - x_{0,k})^{\sigma_k}$, and
\[ R^f_{r+1}(\vec{x}, \vec{x}_0) = \sum_{\sigma:\, |\sigma| = r+1} \frac{1}{\sigma!}\, D^\sigma f\big( \vec{x}_0 + \theta(\vec{x} - \vec{x}_0) \big)\, (\vec{x} - \vec{x}_0)^\sigma \]
is the remainder, where $\theta\in(0,1)$.

We also need some notation for permutations. Suppose $s_n$ is a permutation of $\{1,\dots,n\}$, and $s_n(i)$ denotes the $i$-th element of the permutation. We define the number of inversions of $s_n$ by $I(s_n) = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \mathbf{1}\{s_n(i) > s_n(j)\}$, and the sign of the permutation by $\mathrm{sgn}(s_n) = (-1)^{I(s_n)}$. For example, the identity permutation $s_n = (1,\dots,n)$ has $0$ inversions and $\mathrm{sgn}(s_n) = 1$, while the permutation $(3,1,2)$ has $2$ inversions and sign $1$, and $(2,1,3)$ has $1$ inversion and sign $-1$.

We now turn to the proof of Proposition 8.2.

Proof. (of Proposition 8.2) For clarity we split the proof into two steps.
Step 1.
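For intuition, the effect of the $\epsilon$-regularization (8.59) on the determinants in (8.61) can be previewed numerically in the simplest clustered case: $k = 2$ with a single cluster, $\vec{a}^{+\epsilon} = (\alpha + 2\epsilon, \alpha + \epsilon)$. A standard confluence computation (our illustration, not a formula from the text) gives $\epsilon^{-1}\det[e^{a_i z_j}] \to e^{\alpha(z_1+z_2)}(z_1 - z_2)$ as $\epsilon\to 0$, which the sketch below checks at small $\epsilon$:

```python
import math

def det2(a, z):
    # 2x2 determinant det[ exp(a_i * z_j) ]_{i,j=1,2}
    return (math.exp(a[0] * z[0]) * math.exp(a[1] * z[1])
            - math.exp(a[0] * z[1]) * math.exp(a[1] * z[0]))

alpha, z, eps = 0.3, (1.0, 0.5), 1e-4
a_eps = (alpha + 2 * eps, alpha + eps)             # "+eps" perturbation as in (8.59)
limit = math.exp(alpha * (z[0] + z[1])) * (z[0] - z[1])  # expected confluent limit
print(abs(det2(a_eps, z) / eps - limit) / limit)   # small relative error
```

The power of $\epsilon$ that must be divided out matches the number of coalescing pairs in the cluster, which is the role of the exponent $\sum_i \binom{m_i}{2} + \sum_i \binom{n_i}{2}$ in this step.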
In this step we prove that for every $\vec{z}\in W_k$ we have
\[ \lim_{\epsilon\to 0^+} \epsilon^{-\sum_{i=1}^p \binom{m_i}{2} - \sum_{i=1}^q \binom{n_i}{2}}\, H^{\pm\epsilon}(\vec{z}) = C(\vec{m})\cdot C(\vec{n})\cdot H(\vec{z}), \tag{8.64} \]
where $C(\vec{m}) = \prod_{i=1}^p m_i!\cdot \prod_{1\le j < i\le p} \cdots$

Step 2. In this step we conclude the proof of the proposition. In view of (8.64) and the fact that $H^{+\epsilon}(\vec{z}) > 0$ for $\vec{z}\in W_k^\circ$ (we proved this in Section 8.3), we conclude that $H(\vec{z}) \ge 0$ for $\vec{z}\in W_k^\circ$. Also, by Lemma 8.11 we know that $H(\vec{z}) \neq 0$ for $\vec{z}\in W_k^\circ$, and so indeed $H(\vec{z}) > 0$ for $\vec{z}\in W_k^\circ$. Furthermore, we know that $H(\vec{z}) = 0$ for $\vec{z}\in W_k\setminus W_k^\circ$, since the determinants in the definition of $H(\vec{z})$ vanish due to equal columns when $\vec{z}\in W_k\setminus W_k^\circ$. Finally, we observe that by (8.69) and (8.70) there exist positive constants $D, d > 0$, independent of $\epsilon$ provided it is sufficiently small, such that
\[ \big| \epsilon^{-u-v}\cdot H^{\pm\epsilon}(\vec{z}) \big| \le D\cdot \exp\big( d\,\|\vec{z}\|_1 - c(t,p)\,\|\vec{z}\|_2^2 \big), \tag{8.71} \]
where $u = \sum_{i=1}^p \binom{m_i}{2}$, $v = \sum_{i=1}^q \binom{n_i}{2}$, and as usual $\|\vec{z}\|_1 = \sum_{i=1}^k |z_i|$ and $\|\vec{z}\|_2^2 = \sum_{i=1}^k z_i^2$. In view of (8.71) and the dominated convergence theorem, we conclude that $H(\vec{z})$ is integrable, and since it is continuous and positive on $W_k^\circ$ we conclude that $Z_c\in(0,\infty)$ as desired. $\square$

The above proof essentially shows the following statement.

Corollary 8.12. Let $\vec{a},\vec{b}\in W_k$. Let $\rho^{\pm\epsilon}$ be as in (8.62), and let $\rho$ be as in Proposition 8.2 for the two vectors $\vec{a},\vec{b}$. Then $\rho^{\pm\epsilon}$ converge weakly to $\rho$ as $\epsilon\to 0$.

Proof. We use the same notation as in the proof of Proposition 8.2 above. As the proofs are analogous we only show that $\rho^{+\epsilon}$ weakly converges to $\rho$.
We claim that for any Borel set $B\subset W_k$
\[ \lim_{\epsilon\to 0^+} \int_B \epsilon^{-u-v}\, H^{+\epsilon}(\vec{z})\, d\vec{z} = C(\vec{m})\cdot C(\vec{n})\cdot \int_B H(\vec{z})\, d\vec{z}. \tag{8.72} \]
Assuming the validity of (8.72), we see that for any Borel set $B\subset W_k$ we have
\[ \int_B \rho(\vec{z})\, d\vec{z} = \frac{\int_B H(\vec{z})\, d\vec{z}}{\int_{W_k} H(\vec{z})\, d\vec{z}} = \lim_{\epsilon\to 0^+} \frac{C(\vec{m})\cdot C(\vec{n})\cdot \int_B H^{+\epsilon}(\vec{z})\, d\vec{z}}{C(\vec{m})\cdot C(\vec{n})\cdot \int_{W_k} H^{+\epsilon}(\vec{z})\, d\vec{z}} = \lim_{\epsilon\to 0^+} \int_B \rho^{+\epsilon}(\vec{z})\, d\vec{z}, \]
which proves the weak convergence we wanted. Thus we only need to show (8.72). In view of (8.64) we know that $\epsilon^{-u-v}\, H^{+\epsilon}(\vec{z})$ converges pointwise to $C(\vec{m})\cdot C(\vec{n})\cdot H(\vec{z})$, and then (8.72) follows from the dominated convergence theorem, once we invoke (8.71). $\square$

8.6. Proof of Proposition 8.3 for any $\vec{a},\vec{b}\in W_k$. We fix the same notation as in Section 8.5 and suppose that $\epsilon > 0$ is sufficiently small so that $\vec{a}^{\pm\epsilon}, \vec{b}^{\pm\epsilon}\in W_k^\circ$. To prove the proposition it suffices to show that for any $\vec{c}\in\mathbb{R}^k$ we have
\[ \lim_{T\to\infty} \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big) = \int_{W_k\cap R} \rho(\vec{z})\, d\vec{z}, \tag{8.73} \]
where $R = (-\infty, c_1]\times\cdots\times(-\infty, c_k]$. We define the vectors $\vec{x}^{+\epsilon,T}$ and $\vec{y}^{+\epsilon,T}$ through
\[ (x^{+\epsilon,T})_{m_1+\cdots+m_{i-1}+j} = x^T_{m_1+\cdots+m_{i-1}+j} + \lfloor \sqrt{T}\,(m_i - j + 1)\epsilon \rfloor \ \text{ for } i = 1,\dots,p \text{ and } j = 1,\dots,m_i, \]
\[ (y^{+\epsilon,T})_{n_1+\cdots+n_{i-1}+j} = y^T_{n_1+\cdots+n_{i-1}+j} + \lfloor \sqrt{T}\,(n_i - j + 1)\epsilon \rfloor \ \text{ for } i = 1,\dots,q \text{ and } j = 1,\dots,n_i. \]
Similarly, we define the vectors $\vec{x}^{-\epsilon,T}$ and $\vec{y}^{-\epsilon,T}$ through
\[ (x^{-\epsilon,T})_{m_1+\cdots+m_{i-1}+j} = x^T_{m_1+\cdots+m_{i-1}+j} - \lfloor \sqrt{T}\, j\epsilon \rfloor \ \text{ for } i = 1,\dots,p \text{ and } j = 1,\dots,m_i, \]
\[ (y^{-\epsilon,T})_{n_1+\cdots+n_{i-1}+j} = y^T_{n_1+\cdots+n_{i-1}+j} - \lfloor \sqrt{T}\, j\epsilon \rfloor \ \text{ for } i = 1,\dots,q \text{ and } j = 1,\dots,n_i. \]
It follows from Lemma 3.1 that
\[ \mathbb{P}^{0,T,\vec{x}^{+\epsilon,T},\vec{y}^{+\epsilon,T}}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big) \le \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big) \le \mathbb{P}^{0,T,\vec{x}^{-\epsilon,T},\vec{y}^{-\epsilon,T}}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big). \tag{8.74} \]
Taking the limit as $T\to\infty$ in (8.74) and applying our result from Section 8.4 we obtain
\[ \int_{W_k\cap R} \rho^{+\epsilon}(\vec{z})\, d\vec{z} \le \liminf_{T\to\infty} \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big) \le \limsup_{T\to\infty} \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{avoid,Ber}\big( Z_1^T\le c_1,\dots, Z_k^T\le c_k \big) \le \int_{W_k\cap R} \rho^{-\epsilon}(\vec{z})\, d\vec{z}. \tag{8.75} \]
Taking the $\epsilon\to 0$ limit in (8.75) and invoking Corollary 8.12 we arrive at (8.73). This suffices for the proof. $\square$

References

1. M. Adler, P. Ferrari, and P. Van Moerbeke, Airy processes with wanderers and new universality classes, Ann. Probab. (2010), 714–769.
2. P. Billingsley, Convergence of probability measures, 2nd ed., John Wiley and Sons, New York, 1999.
3. J. Callahan, Advanced calculus: a geometric view, Undergraduate Texts in Mathematics, Springer, 2010.
4. P. Caputo, D. Ioffe, and V. Wachtel, Confinement of Brownian polymers under geometric area tilts, Electron. J. Probab. (2019), 21 pp.
5. ———, Tightness and line ensembles for Brownian polymers under geometric area tilts, Statistical Mechanics of Classical and Disordered Systems (Cham) (Véronique Gayrard, Louis-Pierre Arguin, Nicola Kistler, and Irina Kourkova, eds.), Springer International Publishing, 2019, pp. 241–266.
6. I. Corwin and E. Dimitrov, Transversal fluctuations of the ASEP, stochastic six vertex model, and Hall-Littlewood Gibbsian line ensembles, Comm. Math. Phys.
(2018), 435–501.
7. I. Corwin and A. Hammond, Brownian Gibbs property for Airy line ensembles, Invent. Math. (2014), 441–508.
8. ———, KPZ line ensemble, Probab. Theory Relat. Fields (2016), 67–185.
9. D. Dauvergne, M. Nica, and B. Virág, Uniform convergence to the Airy line ensemble, (2019), Preprint: arXiv:1907.10160.
10. D. Dauvergne, J. Ortmann, and B. Virág, The directed landscape, (2018), Preprint: arXiv:1812.00309.
11. D. Dauvergne and B. Virág, Basic properties of the Airy line ensemble, (2018), Preprint: arXiv:1812.00311.
12. E. Dimitrov and K. Matetski, Characterization of Brownian Gibbsian line ensembles, (2020), Preprint: arXiv:2002.00684.
13. E. Dimitrov and X. Wu, KMT coupling for random walk bridges, (2019), Preprint: arXiv:1905.13691.
14. R.M. Dudley, Real analysis and probability, 2nd ed., Cambridge University Press, 2004.
15. R. Durrett, Probability: theory and examples, 4th ed., Cambridge University Press, Cambridge, 2010.
16. A. Hammond, Brownian regularity for the Airy line ensemble, and multi-polymer watermelons in Brownian last passage percolation, (2016), Preprint: arXiv:1609.02971.
17. ———, Exponents governing the rarity of disjoint polymers in Brownian last passage percolation, (2017), Preprint: arXiv:1709.04110.
18. ———, Modulus of continuity of polymer weight profiles in Brownian last passage percolation, Ann. Probab. (2019), no. 6, 3911–3962.
19. ———, A patchwork quilt sewn from Brownian fabric: regularity of polymer weight profiles in Brownian last passage percolation, Forum Math. Pi (2019), Preprint: arXiv:1709.04113.
20. G.F. Lawler and J.A. Trujillo-Ferreras, Random walk loop-soup, Trans. Amer. Math. Soc. (2007), 767–787.
21. I.G. Macdonald, Symmetric functions and Hall polynomials, 2nd ed., Oxford University Press Inc., New York, 1995.
22. J. Munkres, Topology, 2nd ed., Prentice Hall, Inc., Upper Saddle River, NJ, 2003.
23. J.R. Norris, Markov chains, Cambridge University Press, 1998.
24. M. Prähofer and H. Spohn, Scale invariance of the PNG droplet and the Airy process, J. Stat. Phys. (2002), 1071–1106.
25. H. Robbins, A remark on Stirling's formula, Amer. Math. Monthly (1955), 26–29.
26. W. Rudin, Principles of mathematical analysis, 3rd ed., New York: McGraw-Hill, 1964.
27. E. Stein and R. Shakarchi, Complex analysis, Princeton University Press, Princeton, 2003.
28. C. Tracy and H. Widom, Level-spacing distributions and the Airy kernel, Commun. Math. Phys. (1994), 151–174.
29. X. Wu,