A graph-based equilibrium problem for the limiting distribution of non-intersecting Brownian motions at low temperature
Steven Delvaux∗, Arno B. J. Kuijlaars∗

Abstract
We consider n non-intersecting Brownian motion paths with p prescribed starting positions at time t = 0 and q prescribed ending positions at time t = 1. The positions of the paths at any intermediate time are a determinantal point process, which in the case p = 1 is equivalent to the eigenvalue distribution of a random matrix from the Gaussian unitary ensemble with external source. For general p and q, we show that if a temperature parameter is sufficiently small, then the distribution of the Brownian paths is characterized in the large n limit by a vector equilibrium problem with an interaction matrix that is based on a bipartite planar graph. Our proof is based on a steepest descent analysis of an associated (p + q) × (p + q) matrix valued Riemann-Hilbert problem whose solution is built out of multiple orthogonal polynomials. A new feature of the steepest descent analysis is a systematic opening of a large number of global lenses.

Keywords: non-intersecting Brownian motions, Karlin-McGregor theorem, vector potential theory, graph theory, multiple orthogonal polynomials, Riemann-Hilbert problem, Deift-Zhou steepest descent analysis.
1 Introduction

This paper deals with non-intersecting one-dimensional Brownian motions with prescribed starting and ending positions. This model has already been discussed in various regimes. For the case of one starting point and one ending point it is known that the positions of the paths at any intermediate time have the same distribution (up to trivial scaling) as the eigenvalues of a Gaussian Unitary Ensemble from random matrix theory [18]. Moreover, as the number of paths tends to infinity and after appropriate scaling, the paths fill out an ellipse in the tx-plane, see Figure 1.

In the case of one starting point and two or more ending points the positions of the paths have the same distribution as the eigenvalues of a Gaussian Unitary Ensemble with external source. This model is described by multiple Hermite polynomials. As the number of paths tends to infinity, the paths fill out a more complicated region whose boundary has cusp points. The limiting distributions can be computed in terms of an algebraic curve known as Pastur's equation [3, 5, 6, 24, 28, 29]. See Figure 2 for an illustration of the case of two ending points.

∗ Department of Mathematics, Katholieke Universiteit Leuven, Celestijnenlaan 200B, B-3001 Leuven, Belgium. email: {steven.delvaux,arno.kuijlaars}@wis.kuleuven.be. The first author is a Postdoctoral Fellow of the Fund for Scientific Research - Flanders (Belgium). The work of the second author is supported by FWO-Flanders project G.0427.09, by K.U. Leuven research grant OT/08/33, by the Belgian Interuniversity Attraction Pole P06/02, by the European Science Foundation Program MISGAM, and by grant MTM2008-06689-C02-01 of the Spanish Ministry of Science and Innovation.

Figure 1: Non-intersecting Brownian motions with one starting and one ending point. As the number of paths tends to infinity, the paths fill out an ellipse in the tx-plane.
Figure 2: Non-intersecting Brownian motions with one starting and two ending positions.

The case of one ending point and two or more starting points is equivalent due to the time reversal symmetry in the model. Much less is known for the general case of p ≥ 2 starting points and q ≥ 2 ending points. In this case the model is described by an associated (p + q) × (p + q) matrix valued Riemann-Hilbert problem. Calculations for the limiting distributions of paths in the large n limit were done for very specific cases with p = q = 2 in [10, 16], based on the spectral curve (the analogue of Pastur's equation) that could be computed in these special cases; see also the related works [1, 25]. It is the goal of this paper to study the case of general p and q.

Figure 3: Non-intersecting Brownian motions with two starting and two ending positions. The starting and ending positions are sufficiently far apart so that around time t = 1/2 there are three groups of paths in the large n limit.

For p = q = 2 we are considering a situation such as the one shown in Figure 3, where a certain fraction of the paths starts in each of the two starting points, and ends at each of the two ending points. As the number of paths increases we see the following situation. For small time the paths are in two separate groups that emanate from the two starting positions. At a certain time one of the groups splits into two, leading to a situation of three separate groups of paths. Then at a later time two of the groups come together and we end up with two groups that end at the two ending points.

Our results will deal with the situation at times where there are three groups of paths, or for general p and q, where there is the maximal number (namely p + q − 1) of groups of paths.

There is an alternative possible scenario in which the two groups of paths first merge into one group and later split again into two groups of paths. This will happen if the starting and ending positions are sufficiently close to each other. The first scenario happens if the starting and ending positions are relatively far away from each other. Below we will actually distinguish the two scenarios in terms of a temperature parameter T, so that for small T we have the situation with the three groups of paths.

2 Statement of results

Let p ≥ 1 and q ≥ 2. We fix p starting points a_1, ..., a_p, which we assume to be ordered as

a_1 > a_2 > · · · > a_p,  (2.1)

and q ending points b_1, ..., b_q with

b_1 > b_2 > · · · > b_q.  (2.2)

For a given (large) n we consider n non-intersecting Brownian motion paths and we assume that n_k of the paths start at a_k and that m_l of the paths end at b_l, for k = 1, ..., p and l = 1, ..., q. Thus

Σ_{k=1}^p n_k = Σ_{l=1}^q m_l = n.

The numbers n_k and m_l do not determine the situation completely: we also prescribe, for each k = 1, ..., p and l = 1, ..., q, the number n_{k,l} of paths that start at a_k and end at b_l. We call the fractions

t^(n)_{k,l} = n_{k,l}/n  (2.3)

the finite n transition numbers. Note that

t^(n)_{k,l} ≥ 0,  Σ_{k=1}^p Σ_{l=1}^q t^(n)_{k,l} = 1.  (2.4)

As n → ∞, we assume that the finite n transition numbers have limits

t_{k,l} = lim_{n→∞} t^(n)_{k,l},  (2.5)

which are the limiting transition numbers. It is convenient to arrange the (finite n and limiting) transition numbers into p × q matrices

(t^(n)_{k,l})_{k=1,...,p, l=1,...,q},  (t_{k,l})_{k=1,...,p, l=1,...,q}.

To avoid degenerate cases, we assume that each row and column of the matrix (t_{k,l})_{k=1,...,p, l=1,...,q} has at least one non-zero entry.

The assumption that the paths are non-intersecting puts a number of constraints on the numbers n_{k,l} and on the limiting transition numbers t_{k,l}. Indeed, not all a_k can be connected to all b_l, and certain transition numbers must be zero. The constraints on the transition numbers are easy to visualize in terms of a weighted bipartite graph

G = (V, E, t),  (2.6)

with vertices

V = {a_1, ..., a_p} ⊔ {b_1, ..., b_q}  (disjoint union),

edges

E = {(a_k, b_l) ∈ V × V | t_{k,l} > 0},

and a weight function

t : E → (0, 1] : (a_k, b_l) ↦ t_{k,l}.  (2.7)

Example 1.
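The construction of the finite n transition numbers (2.3)–(2.4) and of the weighted graph (2.6)–(2.7) can be sketched in a few lines of Python. This is our own illustration, not code from the paper; the function name and the example path counts are invented for the sketch.

```python
from fractions import Fraction

def transition_graph(counts):
    """Given a p x q matrix counts[k][l] = n_{k,l} of paths from a_k to b_l,
    return the finite-n transition numbers t[k][l] = n_{k,l}/n of (2.3) and
    the edge set of the weighted bipartite graph G = (V, E, t) of (2.6)-(2.7)."""
    n = sum(sum(row) for row in counts)          # total number of paths
    t = [[Fraction(c, n) for c in row] for row in counts]
    # an edge (a_k, b_l) is present exactly when t_{k,l} > 0
    edges = {(k, l): t[k][l]
             for k, row in enumerate(t) for l, w in enumerate(row) if w > 0}
    assert sum(edges.values()) == 1              # normalization (2.4)
    return t, edges

# A configuration like Example 1: p = q = 2, three edges of equal weight 1/3
# (hypothetical path counts with n = 30)
t, edges = transition_graph([[10, 10], [0, 10]])
# edges = {(0, 0): 1/3, (0, 1): 1/3, (1, 1): 1/3}
```

Here vertices are indexed from 0, so edge (0, 1) stands for (a_1, b_2).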
The graph G = (V, E, t) associated with Figure 3 is shown in Figure 4. The graph has four vertices and three edges, each of them with weight 1/3.

Figure 4: The graph associated with Figure 3.
Figure 5: Non-intersecting Brownian motions with two starting points and four ending points, as discussed in Example 2.
Example 2.
For a more complicated example we consider a situation with p = 2 starting points and q = 4 ending points as in Figure 5. The matrix of transition numbers is

(t_{k,l})_{k=1,2, l=1,...,4} = ( 4/30  4/30  0     0
                                0     4/30  7/30  11/30 )  (2.8)

and the graph G associated with (2.8) is shown in Figure 6.

Figure 6: The graph associated with the transition numbers (2.8).

The constraints on the transition numbers are contained in the following obvious result that we state without proof.
Proposition 2.1.
The graph G has the following properties:

(a) G has at most p + q − 1 edges. For each i = 1, ..., p + q − 1, there is at most one non-vanishing transition number t_{k,l} with k + l − 1 = i.

(b) G is a connected graph if and only if the number of edges is equal to p + q − 1.

(c) G has no cycles (and so G is a tree if G is connected).

In [10, 16] the special case of transition numbers

(t_{k,l})_{k=1,2, l=1,2} = ( 1/2  0
                             0    1/2 )

was considered; in that case the graph G is not connected. If the graph G is connected, then the non-zero pattern of the matrix (t_{k,l}) is easy to describe.

Proposition 2.2.
Suppose that the graph G is connected. Then the non-zero entries of the matrix (t_{k,l})_{k,l} are situated on a right-down path starting at the top left entry (1, 1) and ending at the bottom right entry (p, q). The steps in the path are either by one unit to the right (a right step) or one unit down (a down step).

In Examples 1 and 2 we have

( ×  ×        ( ×  ×  0  0
  0  × ) ,      0  ×  ×  × ) ,

respectively. In this paper we consider only connected graphs, and we will make the following assumption.

Assumption 2.3.
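The right-down path structure of Proposition 2.2 is easy to test mechanically. The following Python helper (our own, for illustration) checks that the non-zero pattern of a p × q transition matrix has exactly one entry per anti-diagonal k + l − 1 = i and that consecutive entries differ by a right or down step:

```python
def is_connected_staircase(t):
    """Check that the non-zero pattern of the p x q matrix t is a right-down
    path from (1,1) to (p,q), i.e. that the graph G of (2.6) is connected
    (Proposition 2.2 / Assumption 2.3). Indices here are 0-based."""
    p, q = len(t), len(t[0])
    support = sorted((k, l) for k in range(p) for l in range(q) if t[k][l] > 0)
    # a connected tree on p + q vertices has exactly p + q - 1 edges,
    # one on each anti-diagonal k + l = i (0-based)
    if len(support) != p + q - 1:
        return False
    if sorted(k + l for k, l in support) != list(range(p + q - 1)):
        return False
    # consecutive edges differ by a single right step or down step
    return all((k2 - k1, l2 - l1) in {(0, 1), (1, 0)}
               for (k1, l1), (k2, l2) in zip(support, support[1:]))

# matrix (2.8) from Example 2 (numerators over 30): a valid staircase
print(is_connected_staircase([[4, 4, 0, 0], [0, 4, 7, 11]]))   # True
# the diagonal case studied in [10, 16]: not connected
print(is_connected_staircase([[1, 0], [0, 1]]))                # False
```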
We assume that the graph G is connected. That is, we assume that G has p + q − 1 edges, and for every i = 1, ..., p + q − 1 there is exactly one non-vanishing transition number t_{k,l} with k + l − 1 = i, and we define

k(i) = k,  l(i) = l,  if t_{k,l} > 0 and k + l − 1 = i.

It follows from the assumption and from (2.5) that also for large enough n, the finite n transition numbers have the same non-zero pattern. Thus t^(n)_{k,l} > 0 if and only if t_{k,l} > 0, for n large enough. The index i labels the edges E of the graph, and so we identify i with the edge

(a_{k(i)}, b_{l(i)})

of the graph.

We consider Brownian motions having transition probability density

P(t, x, y) = (1/√(2πt) σ) e^{−(x−y)²/(2tσ²)}  (2.9)

whose overall variance

σ² = T/n  (2.10)

is proportional to 1/n, where n is the number of Brownian paths. We interpret the proportionality constant T > 0 as a temperature parameter, and we are interested in the model for small values of T. We show that for small T the paths at time t have a limiting mean distribution that is characterized by a vector equilibrium problem. In the first theorem we state the existence of a limiting mean distribution.

Theorem 2.4.
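The kernel (2.9) with variance (2.10) is simply a Gaussian. As a quick numerical sanity check (our own sketch, trapezoidal quadrature), it integrates to 1 in y:

```python
import math

def P(t, x, y, sigma2):
    """Transition probability density (2.9): Gaussian with variance t*sigma^2,
    where sigma^2 = T/n as in (2.10)."""
    return math.exp(-(x - y) ** 2 / (2 * t * sigma2)) / math.sqrt(2 * math.pi * t * sigma2)

T, n = 0.1, 50            # hypothetical temperature and number of paths
sigma2 = T / n            # overall variance (2.10)
t = 0.5

# integrate P(t, 0, y) dy over a wide interval by the trapezoidal rule
h, a, b = 1e-4, -1.0, 1.0
ys = [a + h * j for j in range(int((b - a) / h) + 1)]
mass = h * (sum(P(t, 0.0, y, sigma2) for y in ys)
            - 0.5 * (P(t, 0.0, a, sigma2) + P(t, 0.0, b, sigma2)))
print(round(mass, 6))     # 1.0
```

Note how small the variance tσ² = 0.001 is here: for small T the kernel concentrates the paths sharply, which is the mechanism behind the separation into p + q − 1 groups.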
Consider n independent Brownian motions with transition probability (2.9)–(2.10) conditioned so that

• the paths are non-intersecting in the time interval (0, 1),
• n_{k,l} of the paths start at a_k and end at b_l, for each k = 1, ..., p and l = 1, ..., q.

Assume that as n → ∞, the finite n transition numbers n_{k,l}/n converge to t_{k,l}, and that the corresponding graph G is connected.

Let t ∈ (0, 1). Then there exists a T* = T*(t) > 0 so that for all T ∈ (0, T*) the limiting mean distribution of the positions of the paths at time t exists, and is supported on the union of p + q − 1 disjoint intervals ⋃_{i=1}^{p+q−1} [α_i, β_i] with a density ρ_i on the i-th interval.

The vector of measures (μ_1, ..., μ_{p+q−1}), where dμ_i(x) = ρ_i(x) dx for i = 1, ..., p + q − 1, is the minimizer of a vector equilibrium problem that will be described in the next subsection.

The proof of Theorem 2.4 will be based on a Deift-Zhou steepest descent analysis of the Riemann-Hilbert problem in Section 2.5. The details of the steepest descent analysis will be described in Sections 5–7, and the proof of Theorem 2.4 will then be given in Section 8.
Remark 2.5. The special case where p = 1, q = 2 and m_1 = m_2 = n/2 is the symmetric case of the Gaussian Unitary Ensemble with external source that was mentioned in the introduction.

Remark 2.6. As already mentioned, for the case p = 1 and q ≥ 2 the limiting distributions are known [3, 5, 6, 24, 28, 29]; our result recovers this case when T is sufficiently small. Note that Theorem 2.4 gives only a result for T sufficiently small, but it does not specify how small T should be. We expect that the theorem remains valid for all temperatures T that are such that at time t we have the maximal number of groups of paths. At a critical temperature T_crit(t) we expect that two (or maybe more) neighboring intervals [α_i, β_i] and [α_{i+1}, β_{i+1}] merge and a Pearcey phase transition occurs. However, we were unable to prove this.

The logarithmic energy of a measure μ on ℝ is defined as usual by

I(μ) = ∬ log (1/|x − y|) dμ(x) dμ(y).  (2.11)

The mutual energy of two measures μ, ν is defined by

I(μ, ν) = ∬ log (1/|x − y|) dμ(x) dν(y).  (2.12)

We write for i = 1, ..., p + q − 1,

x_i(t) = (1 − t) a_{k(i)} + t b_{l(i)},  0 < t < 1.  (2.13)

For t ∈ (0, 1) we define the quadratic functions

V_i(x) = (x − x_i(t))² / (2t(1 − t)),  x ∈ ℝ,  (2.14)

for i = 1, ..., p + q − 1. These functions will play the role of external fields in the vector equilibrium problem that is relevant for our problem.

Definition 2.7. Fix t ∈ (0, 1) and let T > 0. Consider the energy functional

E(μ_1, ..., μ_{p+q−1}) = Σ_{i,j=1}^{p+q−1} a_{i,j} I(μ_i, μ_j) + (1/T) Σ_{i=1}^{p+q−1} ∫ V_i(x) dμ_i(x),  (2.15)

where V_i is defined in (2.14), and the interaction matrix A = (a_{i,j}) has entries

a_{i,j} = 1 if i = j;  a_{i,j} = 1/2 if i ≠ j and k(i) = k(j) or l(i) = l(j);  a_{i,j} = 0 otherwise.  (2.16)

The vector equilibrium problem asks to minimize E among all vectors of measures (μ_1, ..., μ_{p+q−1}) supported on the real line for which

∫ dμ_i = t_{k(i),l(i)}, for i = 1, ..., p + q − 1.  (2.17)

One may understand the energy minimization problem in Definition 2.7 as follows. To each of the edges of the graph G = (V, E, t) we associate a measure μ_i, i = 1, ..., p + q − 1, of total mass equal to the weight t_{k(i),l(i)} of that edge. This measure represents a distribution of charged particles on the real line that repel each other due to the diagonal term a_{i,i} I(μ_i, μ_i) = I(μ_i, μ_i) in (2.15). For the particles of different measures μ_i, μ_j, i ≠ j, there are two possibilities. The first case is when the (i, j) entry of (2.16) equals 1/2. This happens if the edges corresponding to i and j are adjacent in the graph G. Then there is repulsion between the measures μ_i and μ_j, but with a strength that is only half as strong as the repulsion within each individual measure. The second case is when the (i, j) entry of (2.16) equals zero. In that case there is no direct interaction between the measures μ_i and μ_j.

The last term of (2.15) is a sum of external field terms due to the action of the external field (1/T) V_i(x) on the measure μ_i. The energy minimizer (μ_1, ..., μ_{p+q−1}) in Definition 2.7 then corresponds to the equilibrium distribution of charged particles under the energy functional (2.15).

Proposition 2.8.
The interaction matrix A is positive definite.

Proof. It is easy to check that the interaction matrix A is equal to

A = (1/2) Bᵀ B  (2.18)

where B is the incidence matrix of the graph G. That is, we choose a numbering k = 1, ..., p + q of the vertices, and then we have B = (b_{k,i})_{k=1,...,p+q, i=1,...,p+q−1}, where b_{k,i} = 1 if vertex k is incident to edge i, and 0 otherwise.

From (2.18) we get that A is positive semi-definite, since for any column vector x of length p + q − 1,

xᵀ A x = (1/2) ‖Bx‖² ≥ 0.  (2.19)

Now assume that Bx = 0. Consider a leaf of G, i.e., a vertex which is incident to exactly one edge. Then B has exactly one non-zero entry in the row corresponding to this vertex, and from Bx = 0 it follows that the component of x corresponding to the edge that is incident to the leaf vanishes. Since G is a tree (see Proposition 2.1(c)) we can then gradually undress G by peeling off leaves one by one, and we conclude in this way that all components of x are equal to 0. Thus x = 0 if Bx = 0, which implies in view of (2.19) that A is positive definite.
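The identity (2.18) and the positive definiteness of A are easy to verify numerically. The sketch below (our own code, exact arithmetic with Fraction) builds the incidence matrix B for the graph of Figure 6, checks that A = ½BᵀB follows the rule (2.16), and applies Sylvester's criterion:

```python
from fractions import Fraction
from itertools import combinations

# edges of the graph in Figure 6, as (vertex, vertex) pairs;
# vertices 0, 1 are a_1, a_2 and vertices 2..5 are b_1..b_4
edges = [(0, 2), (0, 3), (1, 3), (1, 4), (1, 5)]
V, E = 6, len(edges)

# incidence matrix B: b_{k,i} = 1 if vertex k is incident to edge i
B = [[Fraction(1) if k in edges[i] else Fraction(0) for i in range(E)]
     for k in range(V)]

# A = (1/2) B^T B, cf. (2.18)
A = [[Fraction(1, 2) * sum(B[k][i] * B[k][j] for k in range(V))
      for j in range(E)] for i in range(E)]

# entries follow the rule (2.16): 1 on the diagonal, 1/2 for adjacent edges
assert all(A[i][i] == 1 for i in range(E))
assert all(A[i][j] == (Fraction(1, 2) if set(edges[i]) & set(edges[j]) else 0)
           for i, j in combinations(range(E), 2))

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for c in range(len(M)):
        pivot = next(r for r in range(c, len(M)) if M[r][c] != 0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return d

# Sylvester's criterion: all leading principal minors positive => A > 0
minors = [det([row[:m] for row in A[:m]]) for m in range(1, E + 1)]
print(all(m > 0 for m in minors))   # True
```

The same check works for any tree: the leaf-peeling argument in the proof guarantees that B has full column rank.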
The vector equilibrium problem of Definition 2.7 has a unique solution (μ_1, ..., μ_{p+q−1}) and each measure μ_i is compactly supported.

Proof. The interaction matrix is positive definite by Proposition 2.8. The external fields V_i in the energy functional (2.15) have enough increase at ±∞ so that standard arguments of potential theory as in [11, 27, 30] can be used to establish the existence and uniqueness of the minimizer, as well as the fact that each measure μ_i is supported on a compact set.

Our next theorem describes the structure of the solution of the vector equilibrium problem for small T.

Theorem 2.10.
Fix t ∈ (0, 1), and let (t_{k,l}) be a matrix of transition numbers. Then there exists T* > 0 (the same T* that makes Theorem 2.4 work) so that for every T ∈ (0, T*) the following holds.

(a) Each μ_i is supported on an interval

supp(μ_i) = [α_i, β_i],  i = 1, ..., p + q − 1.

The intervals [α_i, β_i] are pairwise disjoint and satisfy

β_{i+1} < α_i,  i = 1, ..., p + q − 2.  (2.20)

(b) The measure μ_i has a density ρ_i with respect to Lebesgue measure which is real analytic and positive in the open interval (α_i, β_i) and vanishes like a square root at the endpoints of [α_i, β_i], i.e., there exist non-zero constants ρ_i^(1) and ρ_i^(2) such that

ρ_i(x) = ρ_i^(1) √(x − α_i) + O((x − α_i)^{3/2})  as x ↓ α_i,  (2.21)
ρ_i(x) = ρ_i^(2) √(β_i − x) + O((β_i − x)^{3/2})  as x ↑ β_i.  (2.22)

The proof of Theorem 2.10 will be given in Section 3.1. In Subsections 2.4.1–2.4.3, our main Theorem 2.4 will be illustrated for some special cases.

2.4.1 p = 1: Angelesco-type interaction

For the case p = 1 of one starting point, and an arbitrary number q of ending points, the graph G has a single vertex a_1 on the left which is connected to each of the vertices b_1, ..., b_q on the right. For example, if q = 3 the graph has the form shown in Figure 7.

The energy functional (2.15)–(2.16) is then equal to

E(μ_1, ..., μ_{p+q−1}) = Σ_{i=1}^{p+q−1} I(μ_i) + (1/2) Σ_{i≠j} I(μ_i, μ_j) + (1/T) Σ_{i=1}^{p+q−1} ∫ V_i(x) dμ_i(x).  (2.23)

Figure 7: A graph G with p = 1 starting point and q = 3 ending points.

The functional (2.23) is exactly the one familiar from the theory of Angelesco systems [21]. All off-diagonal entries in the interaction matrix A in (2.16) are equal to 1/2. For the example in Figure 7 the interaction matrix is

A = ( 1    1/2  1/2
      1/2  1    1/2
      1/2  1/2  1 ).  (2.24)

2.4.2 p = q: nearest neighbor interaction

Next we consider the case where p = q and the corresponding lattice path follows a zigzag line. The graph G is then just a chain of vertices: see Figure 8.

Figure 8: For p = q = 3, Figure 8(a) shows a graph G which has zigzag form. Figure 8(b) shows the same graph written as a chain.

The energy functional (2.15) now takes the form

E(μ_1, ..., μ_{p+q−1}) = Σ_{i=1}^{p+q−1} I(μ_i) + Σ_{i=1}^{p+q−2} I(μ_i, μ_{i+1}) + (1/T) Σ_{i=1}^{p+q−1} ∫ V_i(x) dμ_i(x).  (2.25)

We see that the interaction matrix of (2.25) is tridiagonal with diagonal entries equal to 1, and entries on the first sub- and superdiagonal equal to 1/2. For the example in Figure 8 the interaction matrix equals

A = ( 1    1/2  0    0    0
      1/2  1    1/2  0    0
      0    1/2  1    1/2  0
      0    0    1/2  1    1/2
      0    0    0    1/2  1 ).  (2.26)

Note that the tridiagonal structure of the interaction matrix means that there is only nearest neighbor interaction. The neighboring measures repel each other, since all signs in the interaction matrix are positive. In a Nikishin system the interaction matrix is also tridiagonal, but the off-diagonal entries are −1/2 instead of +1/2, see [21].
2.4.3 Mixed interaction

Finally we consider the graph G in Figure 6. In this case the interaction matrix equals

A = ( 1    1/2  0    0    0
      1/2  1    1/2  0    0
      0    1/2  1    1/2  1/2
      0    0    1/2  1    1/2
      0    0    1/2  1/2  1 ).  (2.27)

Note that this interaction matrix is a mixture of the nearest neighbor and Angelesco interaction matrices. More precisely, one could say that (2.27) has 'block' nearest neighbor interaction, where each of the blocks in turn has an Angelesco-type interaction, and with subsequent blocks intersecting in exactly one entry. For the matrix in (2.27) these building blocks are

( 1    1/2       ( 1    1/2       ( 1    1/2  1/2
  1/2  1 ) ,       1/2  1 ) ,       1/2  1    1/2
                                    1/2  1/2  1 ).

A graph-based vector equilibrium problem was also considered in the work of Gonchar, Rakhmanov, and Sorokin [21]. The rule for building the interaction matrix from a graph G is similar to the one in this paper. The vector equilibrium problem is also labeled by the edges of a graph, which in [21], however, is a directed rooted tree. The off-diagonal entries of the interaction matrix are non-zero if the corresponding two edges have a common vertex. The entry is 1/2 for an Angelesco-type interaction and −1/2 for a Nikishin-type interaction, depending on the relative orientation of the two edges at the common vertex.

2.5 Riemann-Hilbert problem

The proof of Theorem 2.4 uses the connection of non-intersecting Brownian motions with prescribed starting and ending points with a determinantal point process and an associated Riemann-Hilbert problem. We recall this connection.

2.5.1 Determinantal point process
The positions at time t ∈ (0, 1) of n non-intersecting Brownian motions, starting at distinct positions a_j, j = 1, ..., n, and ending at distinct positions b_j, j = 1, ..., n, have the joint probability density function

(1/Z) det (P(t, a_i, x_j))_{i,j=1}^n · det (P(1 − t, x_i, b_j))_{i,j=1}^n,

with the transition probability density P defined in (2.9) and with Z a normalization constant. This is a consequence of a theorem of Karlin and McGregor [23]. (For applications of the discrete version of the Karlin-McGregor theorem see e.g. [22].) In the confluent limit where n_k of the starting positions come together at a_k, k = 1, ..., p, and m_l of the ending positions come together at b_l, for l = 1, ..., q, the joint p.d.f. for the positions of the paths at time t can be written as

P(x_1, ..., x_n) = (1/Z̃) det (f_i(x_j))_{i,j=1}^n · det (g_i(x_j))_{i,j=1}^n,  (2.28)

for certain functions f_i, g_i that are built out of the p + q functions

w_{1,k}(x) = e^{−(x−a_k)²/(2tσ²)},  k = 1, ..., p,  (2.29)
w_{2,l}(x) = e^{−(x−b_l)²/(2(1−t)σ²)},  l = 1, ..., q,  (2.30)

see e.g. [9].

The p.d.f. (2.28) defines a determinantal point process (in fact a biorthogonal ensemble, see [7]) with a correlation kernel K(x, y) that is such that

P(x_1, ..., x_n) = (1/n!) det (K(x_i, x_j))_{i,j=1}^n  (2.31)

and for each m = 1, ..., n,

∫ · · · ∫ P(x_1, ..., x_n) dx_{m+1} · · · dx_n = ((n − m)!/n!) det (K(x_i, x_j))_{i,j=1}^m.  (2.32)

In particular for m = 1, we have that (1/n) K(x, x) is the mean density of paths.

The kernel K can be described in terms of the following Riemann-Hilbert problem (RH problem) introduced in [9].

RH problem 2.11.
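The structure of the Karlin-McGregor density can be probed numerically: the product of determinants vanishes exactly when two particles coincide (so the process assigns no weight to intersecting configurations), and it is positive for configurations ordered consistently with the a's and b's. A small sketch for n = 2 (our own helper names, hypothetical positions):

```python
import math

def P(t, x, y, sigma2=1.0):
    """Gaussian transition density, cf. (2.9)."""
    return math.exp(-(x - y) ** 2 / (2 * t * sigma2)) / math.sqrt(2 * math.pi * t * sigma2)

def km_weight(t, a, b, x):
    """Unnormalized Karlin-McGregor density
    det(P(t, a_i, x_j)) * det(P(1-t, x_i, b_j)) for n = 2 paths."""
    det1 = P(t, a[0], x[0]) * P(t, a[1], x[1]) - P(t, a[0], x[1]) * P(t, a[1], x[0])
    det2 = (P(1 - t, x[0], b[0]) * P(1 - t, x[1], b[1])
            - P(1 - t, x[0], b[1]) * P(1 - t, x[1], b[0]))
    return det1 * det2

a, b, t = (1.0, -1.0), (1.0, -1.0), 0.5
print(km_weight(t, a, b, (0.5, 0.5)) == 0.0)    # True: coinciding paths get weight 0
print(km_weight(t, a, b, (0.8, -0.8)) > 0)      # True: ordered configuration
```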
The RH problem consists in finding a matrix-valued function Y : ℂ \ ℝ → ℂ^{(p+q)×(p+q)} such that

(1) Y is analytic in ℂ \ ℝ;

(2) For x ∈ ℝ, the limiting values

Y_+(x) = lim_{z→x, Im z>0} Y(z),  Y_−(x) = lim_{z→x, Im z<0} Y(z)

exist and satisfy

Y_+(x) = Y_−(x) ( I_p  W(x)
                  0    I_q ),  (2.33)

where I_k denotes the identity matrix of size k, and where W(x) denotes the rank-one matrix (outer product of two vectors)

W(x) = ( w_{1,1}(x), ..., w_{1,p}(x) )ᵀ ( w_{2,1}(x) · · · w_{2,q}(x) )  (2.34)

with w_{1,k}(x), k = 1, ..., p, and w_{2,l}(x), l = 1, ..., q, given by (2.29) and (2.30).

(3) As z → ∞, we have that

Y(z) = (I_{p+q} + O(1/z)) diag (z^{n_1}, ..., z^{n_p}, z^{−m_1}, ..., z^{−m_q}).  (2.35)

The RH problem has a unique solution that can be described in terms of multiple Hermite polynomials. This is shown in [9], generalizing results in [19, 32]. According to [9] the correlation kernel is expressed in terms of the solution to the RH problem as

K(x, y) = (1/(2πi(x − y))) ( 0 · · · 0  w_{2,1}(y) · · · w_{2,q}(y) ) Y_+^{−1}(y) Y_+(x) ( w_{1,1}(x) · · · w_{1,p}(x)  0 · · · 0 )ᵀ.  (2.36)

It is also worth noticing that if Y_{1,1}(z) denotes the top leftmost p × p block of the RH matrix Y(z), then the determinant of Y_{1,1}(z) equals the average characteristic polynomial

det Y_{1,1}(z) = 𝔼 ∏_{j=1}^n (z − x_j),

where the expectation 𝔼 is taken according to the joint probability density (2.28). This was shown in [4] for the case p = 1 and in [15] for the case of general p and q.

We analyze the RH problem 2.11 in the limit described in Theorem 2.4. That is, we take σ² = T/n and we let n_k → ∞, m_l → ∞, so that

n_{k,l}/n → t_{k,l} as n → ∞.

If, for each n, we denote the correlation kernel (2.36) by K_n, then the limiting density of paths is

ρ(x) = lim_{n→∞} (1/n) K_n(x, x),  x ∈ ℝ.  (2.37)

The proof of Theorem 2.4 will be based on a steepest descent analysis of the RH problem 2.11. As a byproduct of this analysis we can also show that the local scaling limits of the limiting distribution of the Brownian motions are those familiar from random matrix theory, i.e., they are described in terms of the sine kernel in the bulk and the Airy kernel at the edge. We will not discuss this any further and refer to the papers [5, 6, 10, 28] for a more detailed analysis in a similar context.

The rest of this paper is organized as follows. In Section 3 we establish general properties of the equilibrium problem, in particular leading to the proof of Theorem 2.10. In Section 4 we define the ξ-functions, λ-functions and the associated Riemann surface. These functions are used in Section 5 to normalize the RH problem at infinity. In Section 6 we apply Gaussian elimination to the RH problem by opening global lenses, thereby making the RH problem locally of size 2 × 2. In Sections 7 and 8 we complete the steepest descent analysis and give the proof of Theorem 2.4.

3 The vector equilibrium problem

In this section we study the vector equilibrium problem of Definition 2.7. In particular we prove Theorem 2.10.
3.1 Proof of Theorem 2.10

The main difficulty is in the proof of part (a) of Theorem 2.10. For this we use the following lemma. Recall that t ∈ (0, 1) is fixed.
Lemma 3.1.
For every ε > 0 and τ > 0, there exists T_ε > 0 so that for every T ∈ (0, T_ε) the following holds. If (μ_1, ..., μ_{p+q−1}) is the minimizer for the energy functional (2.15) under the normalization (2.17) with

min_i t_{k(i),l(i)} ≥ τ > 0,  (3.1)

then the support of μ_i is contained in [x_i(t) − ε, x_i(t) + ε] for every i = 1, ..., p + q − 1.

Proof. We are going to prove the following upper and lower bounds for the quantity

E* := E(μ_1, ..., μ_{p+q−1})

(which depends on the chosen normalization (2.17)).

(a) There exist constants C_1 > 0 and C_{2,δ} such that for every δ > 0 and T > 0,

E* ≤ C_{2,δ} + C_1 δ²/T.  (3.2)

The constant C_1 is independent of δ and T, and C_{2,δ} is independent of T.

(b) There exist positive constants C_3 and C_4 such that for every T ∈ (0, 1] and every i = 1, ..., p + q − 1, we have

E* ≥ −C_3 + (C_4/T)(x − x_i(t))²,  for x ∈ supp(μ_i).  (3.3)

In addition, all constants C_1, C_{2,δ}, C_3 and C_4 can be taken independently of the normalization (2.17), and only C_4 depends on τ.

Proof of (a): We may assume that δ > 0 is so small that

min_i (x_i(t) − x_{i+1}(t)) ≥ 3δ.

Then the intervals [x_i(t) − δ, x_i(t) + δ] are mutually disjoint. We fix transition numbers t_{k(i),l(i)} ≥ 0 with Σ_i t_{k(i),l(i)} = 1. On each of the intervals [x_i(t) − δ, x_i(t) + δ] we choose a measure λ_i of finite energy with total mass 1 (for example, an appropriately rescaled and centered semi-circle law will do) and we put ν_i = t_{k(i),l(i)} λ_i. We choose the measures λ_i independent of T, and so

Σ_{i,j=1}^{p+q−1} a_{i,j} I(ν_i, ν_j)  (3.4)

does not depend on T. The sum (3.4) depends continuously on the numbers t_{k(i),l(i)} and so

C_{2,δ} = max Σ_{i,j=1}^{p+q−1} a_{i,j} I(ν_i, ν_j)  (3.5)

exists, where the maximum is taken over all choices t_{k(i),l(i)} ≥ 0 with Σ_i t_{k(i),l(i)} = 1. Since V_i(x) ≤ δ²/(2t(1 − t)) on [x_i(t) − δ, x_i(t) + δ], we also have

∫ V_i(x) dν_i(x) ≤ δ²/(2t(1 − t))

and

Σ_{i=1}^{p+q−1} ∫ V_i(x) dν_i(x) ≤ C_1 δ²  (3.6)

with a constant C_1 that is independent of T and δ, and also of the t_{k(i),l(i)}. Then part (a) follows from (3.5), (3.6), and the fact that

E* ≤ E(ν_1, ..., ν_{p+q−1}) = Σ_{i,j=1}^{p+q−1} a_{i,j} I(ν_i, ν_j) + (1/T) Σ_{i=1}^{p+q−1} ∫ V_i(x) dν_i(x).

Proof of (b):
For each i we take a measure ν_i of total mass ‖ν_i‖ = ‖μ_i‖ and satisfying for some constant K ≥ 1,

ν_i ≤ K μ_i,  for i = 1, ..., p + q − 1.  (3.7)

Then (1 − λ)μ_i + λν_i is a positive measure for every λ ∈ [−1/K, 1], and

E* ≤ E((1 − λ)μ_1 + λν_1, ..., (1 − λ)μ_{p+q−1} + λν_{p+q−1})  (3.8)

for every λ ∈ [−1/K, 1], with equality for λ = 0. Thus the λ-derivative of the right-hand side of (3.8) vanishes for λ = 0, which leads to

E* = Σ_{i,j=1}^{p+q−1} a_{i,j} I(μ_i, ν_j) + (1/(2T)) Σ_{i=1}^{p+q−1} ∫ V_i(x) (dμ_i(x) + dν_i(x)).  (3.9)

From the elementary inequality

|x − y| ≤ √(x² + 1) √(y² + 1)

it follows that

I(μ_i, ν_j) = ∬ log (1/|x − y|) dμ_i(x) dν_j(y)
  ≥ −(1/2) ∬ log(x² + 1) dμ_i(x) dν_j(y) − (1/2) ∬ log(y² + 1) dμ_i(x) dν_j(y)
  ≥ −∫ log(x² + 1) (dμ_i(x) + dν_j(x)),

where for the last inequality we used the facts that log(x² + 1) ≥ 0 and ‖ν_j‖ ≤ 1, ‖μ_i‖ ≤ 1. Hence, since a_{i,j} ≥ 0,

E* ≥ −Σ_{i,j=1}^{p+q−1} a_{i,j} ∫ log(x² + 1) (dμ_i(x) + dν_j(x)) + (1/(2T)) Σ_{i=1}^{p+q−1} ∫ V_i(x) (dμ_i(x) + dν_i(x))
  = (1/(2T)) Σ_{i=1}^{p+q−1} ∫ ( V_i(x) − 2T Σ_{j=1}^{p+q−1} a_{i,j} log(x² + 1) ) (dμ_i(x) + dν_i(x)).  (3.10)

For i = 1, ..., p + q − 1 and T ≤ 1, the functions V_i(x) − 2T Σ_{j=1}^{p+q−1} a_{i,j} log(x² + 1) are bounded from below and assume their minimum value in a fixed compact interval (independent of i and T ≤ 1). This follows from the quadratic growth of V_i, and from the fact that Σ_{j=1}^{p+q−1} a_{i,j} ≤ (p + q)/2 for each i. Thus

V_i(x) − 2T Σ_{j=1}^{p+q−1} a_{i,j} log(x² + 1) ≥ (1/2) V_i(x) − C_0 T ≥ −C_0 T,  x ∈ ℝ,

for some constant C_0 > 0 that is independent of T ≤ 1 and of i. Using this in (3.10) we find that for each i = 1, ..., p + q − 1,

E* ≥ −C_3 + (1/(4T)) ∫ V_i(x) dν_i(x),  (3.11)

where C_3 > 0 is independent of i and T ≤ 1. The inequality (3.11) (with the same constant C_3) holds for all measures ν_i with ‖ν_i‖ = t_{k(i),l(i)} and satisfying (3.7) for some K. For every x in the support of μ_i, we can approximate the point mass ‖μ_i‖ δ_x by such ν_i. It thus follows from (3.11) that

E* ≥ −C_3 + (1/(4T)) V_i(x) t_{k(i),l(i)},  for x ∈ supp(μ_i).

If t_{k(i),l(i)} ≥ τ as in (3.1), then we obtain (3.3) with

C_4 = (1/4) · τ/(2t(1 − t)).

Conclusion of the proof:
Combining (3.2) and (3.3) we find that there exist positive constants C_5 = C_1/C_4 and C_{6,δ} = (C_{2,δ} + C_3)/C_4 so that

(x − x_i(t))² ≤ C_5 δ² + C_{6,δ} T,  for x ∈ supp(μ_i), i = 1, ..., p + q − 1,

whenever T ≤ 1 and δ > 0 is small enough. Given ε > 0, we first choose δ > 0 so small that C_5 δ² ≤ ε²/2, and then choose T_ε ∈ (0, 1] so that C_{6,δ} T_ε ≤ ε²/2. Then for every i = 1, ..., p + q − 1 and T ≤ T_ε,

(x − x_i(t))² ≤ ε²,  for x ∈ supp(μ_i),

and Lemma 3.1 follows.

Having Lemma 3.1, the proof of Theorem 2.10 is rather straightforward.

Proof. (Proof of Theorem 2.10) (a) Let ε > 0 be so small that the intervals [x_i(t) − ε, x_i(t) + ε], i = 1, ..., p + q − 1, are disjoint. From Lemma 3.1 we know that there exists T_ε > 0 so that for T < T_ε the support of μ_i is contained in [x_i(t) − ε, x_i(t) + ε].

Take i = 1, ..., p + q − 1, and fix the other measures μ_j, j ≠ i. From (2.12), (2.15) we see that the measure μ_i is the equilibrium measure in an effective external field

2 Σ_{j≠i} a_{i,j} ∫ log (1/|x − y|) dμ_j(y) + (1/T) V_i(x),  (3.12)

and it is also the minimizer if we restrict to measures with total mass t_{k(i),l(i)} that are supported on [x_i(t) − ε, x_i(t) + ε]. On this interval the external field (3.12) is strictly convex (we use that the measures μ_j with j ≠ i are supported outside [x_i(t) − ε, x_i(t) + ε]). The convexity implies that the support of μ_i is an interval, see e.g. [30, Theorem IV.1.10].

(b) The real analyticity of the density of μ_i in the interior of the support follows from [12], since the effective external field (3.12) is real analytic on the support of μ_i. The convexity of (3.12) implies that the density of μ_i does not vanish in the interior of the support, and has square root behavior at the endpoints, see e.g. [8, Lemma 3.5].

3.2 The equilibrium problem for finite n

In the situation of Theorem 2.4 we have, for each finite n, the number n_{k,l} of paths going from a_k to b_l. The finite n transition numbers are t^(n)_{k,l} = n_{k,l}/n, and in the limit we have

lim_{n→∞} t^(n)_{k,l} = t_{k,l}  for k = 1, ..., p, l = 1, ..., q.  (3.13)

The equilibrium problem depends on the transition numbers by means of the normalizations (2.17). For a finite n, we use the equilibrium problem for a vector of measures (μ_1, ..., μ_{p+q−1}) that have total masses

∫ dμ_i = t^(n)_{k(i),l(i)},  for i = 1, ..., p + q − 1.  (3.14)

The minimizer (μ^(n)_1, ..., μ^(n)_{p+q−1}) will also depend on n. Because of Theorem 2.10 and (3.13), there is a T* > 0, so that for every T < T*, each μ^(n)_i is supported on an interval [α^(n)_i, β^(n)_i] (depending on n) so that parts (a) and (b) of Theorem 2.10 hold. As n → ∞, we have that μ^(n)_i → μ_i and α^(n)_i → α_i, β^(n)_i → β_i, for each i.

In the steepest descent analysis that follows we will fix a large enough n. We will work with the n-dependent measures μ^(n)_i and intervals [α^(n)_i, β^(n)_i], but for ease of notation we will usually not write the superscript (n). We trust that this will not lead to confusion. However, we will write t^(n)_{k,l}. A property that will be used a number of times is that

n t^(n)_{k,l} = n_{k,l} is an integer.  (3.15)

4 The ξ-functions, λ-functions, and the Riemann surface

As said before, we take a large n and consider the vector equilibrium problem with normalization (3.14), and we assume that T is sufficiently small so that Theorem 2.10 applies. So the measure μ_i is supported on the interval [α_i, β_i].

The variational conditions associated with the vector equilibrium problem (2.15) are as follows. For each i = 1, ..., p + q − 1, there is a constant L_i ∈ ℝ so that

2 Σ_j a_{i,j} ∫ log (1/|x − y|) dμ_j(y) + (1/T) V_i(x)  { = L_i, x ∈ [α_i, β_i];  ≥ L_i, x ∈ ℝ \ [α_i, β_i]. }  (4.1)

We use F_i to denote the Cauchy transform of the measure μ_i,

F_i(z) := ∫_{α_i}^{β_i} dμ_i(x)/(z − x),  (4.2)

for i = 1, ..., p + q − 1. The function F_i(z) is analytic on ℂ \ [α_i, β_i]. By taking the derivative of (4.1) and using the Sokhotski-Plemelj formula it follows that

−F_{i,+}(x) − F_{i,−}(x) − 2 Σ_{j≠i} a_{i,j} F_j(x) + (1/T) V_i′(x) = 0,  x ∈ [α_i, β_i].  (4.3)
The variational inequality (4.1) is strict for x ∈ [β_{i+1}, α_i) ∪ (β_i, α_{i−1}], where we put α_0 = +∞ and β_{p+q} = −∞.

Proof. On both gaps (β_{i+1}, α_i) and (β_i, α_{i−1}), the left-hand side of (4.1) is a real analytic function of x whose first derivative is

−2 Σ_j a_{i,j} F_j(x) + (1/T) V′_i(x)   (4.4)

and whose second derivative is

2 Σ_j a_{i,j} ∫_{α_j}^{β_j} (1/(x−y)²) dµ_j(y) + (1/T) V″_i(x).   (4.5)

Each term in (4.5) is positive and so the left-hand side of (4.1) is strictly convex on both (β_{i+1}, α_i) and (β_i, α_{i−1}), which proves the lemma.

Figure 9: Four-sheeted Riemann surface for the example in Figure 4.
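Figure 9 shows a four-sheeted Riemann surface; the genus count of such coverings, which is carried out below via the Riemann–Hurwitz formula, can be sketched in a few lines of code (the function `genus` is our own illustrative helper, not from the paper):

```python
def genus(num_sheets: int, num_simple_branch_points: int) -> int:
    """Genus from the Riemann-Hurwitz formula 2g - 2 = -2*N + B,
    for an N-sheeted cover of the sphere whose only ramification
    consists of B simple branch points."""
    two_g_minus_2 = -2 * num_sheets + num_simple_branch_points
    assert two_g_minus_2 % 2 == 0 and two_g_minus_2 >= -2
    return (two_g_minus_2 + 2) // 2

# the surface R constructed below: p + q sheets, 2(p + q - 1) simple branch points
for p, q in [(1, 1), (2, 2), (3, 5)]:
    assert genus(p + q, 2 * (p + q - 1)) == 0

# sanity check: a 2-sheeted cover with 4 simple branch points is a torus
assert genus(2, 4) == 1
```

The point of the check is that for the particular branch-point count 2(p+q−1), the genus is always zero, independently of p and q.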
We construct a Riemann surface R as follows, compare with [17]. The Riemann surface has p+q sheets which we denote by R_j, j = 1, …, p+q. Each sheet is associated with a vertex of the graph, that is, with either a starting point a_k or an ending point b_l. We choose the numbering so that R_k is associated with a_k for k = 1, …, p, and R_{p+l} is associated with b_l for l = 1, …, q. Recall that we use i as a label for the edges of the graph, and we write k = k(i) and l = l(i) if i labels the edge (a_k, b_l). Then the p+q sheets are defined as

R_k = C \ ∪_{i : k(i)=k} [α_i, β_i], k = 1, …, p,   (4.6)
R_{p+l} = C \ ∪_{i : l(i)=l} [α_i, β_i], l = 1, …, q.   (4.7)

The sheets are connected as follows. For each i = 1, …, p+q−1, the sheet R_{k(i)} is connected to the sheet R_{p+l(i)} along the interval [α_i, β_i] in the usual crosswise manner. See Figure 9 for a picture of the Riemann surface for the example in Figure 4. Note that the cuts [α_i, β_i] are always between a sheet R_k which is among the first p sheets and a sheet R_{p+l} which is among the last q sheets. The cuts strictly move to the left if we go from one sheet to the next among the first p sheets, and similarly among the last q sheets.

The Riemann surface depends on n, since the endpoints α^{(n)}_i and β^{(n)}_i depend on n. The ξ- and λ-functions that we define from the Riemann surface will also depend on n. We will not indicate the n-dependence, as already mentioned before.

The Riemann surface R has p+q sheets and 2(p+q−1) simple branch points. Therefore by Hurwitz's formula (see e.g. [26]) its genus g satisfies

2g − 2 = −2(p+q) + 2(p+q−1) = −2,

so that g = 0. The fact that the genus is zero will be helpful in the construction of the global parametrix in Section 7.2.

4.3 ξ-functions

We define the ξ-functions as follows. Recall that F_i is given by (4.2). Put

ξ_k(z) = −Σ_{i : k(i)=k} F_i(z) + (1/(Tt)) (z − a_k), k = 1, …, p,   (4.8)
ξ_{p+l}(z) = Σ_{i : l(i)=l} F_i(z) − (1/(T(1−t))) (z − b_l), l = 1, …, q.   (4.9)

We consider ξ_j as an analytic function on the jth sheet R_j of the Riemann surface with a pole at infinity. Moreover, these functions define a global meromorphic function on R as the following result shows.

Theorem 4.2.
Consider the function ξ_j(z) on the jth sheet R_j, j = 1, …, p+q. Then these functions are compatible along the cuts [α_i, β_i] of the Riemann surface in the sense that

ξ_{k(i),+}(x) = ξ_{p+l(i),−}(x),  ξ_{k(i),−}(x) = ξ_{p+l(i),+}(x),  x ∈ [α_i, β_i],   (4.10)

for every i = 1, …, p+q−1. Hence the ξ_j-functions can be extended to a global meromorphic function ξ defined on the Riemann surface R.

Proof. Fix i ∈ {1, …, p+q−1}. On the interval [α_i, β_i] we have by definition (4.8)–(4.9) that

ξ_{k(i),+}(x) − ξ_{p+l(i),−}(x)
 = −Σ_{j : k(j)=k(i)} F_{j,+}(x) − Σ_{j : l(j)=l(i)} F_{j,−}(x) + (1/(T t(1−t))) (x − (1−t) a_{k(i)} − t b_{l(i)})
 = −F_{i,+}(x) − F_{i,−}(x) − Σ_{j≠i : k(j)=k(i) or l(j)=l(i)} F_j(x) + (1/T) V′_i(x),   (4.11)

where we used the definition (2.13)–(2.14) of V_i.

If j ≠ i is such that k(j) = k(i) or l(j) = l(i), then a_{i,j} = 1/2. For all other j ≠ i we have a_{i,j} = 0. Therefore

Σ_{j≠i : k(j)=k(i) or l(j)=l(i)} F_j(x) = 2 Σ_{j≠i} a_{i,j} F_j(x).

Using this in (4.11) and recalling the variational equality (4.3), we see that ξ_{k(i),+}(x) = ξ_{p+l(i),−}(x) for x ∈ [α_i, β_i].

The other equality ξ_{k(i),−}(x) = ξ_{p+l(i),+}(x) follows in exactly the same way.

In the following two lemmas we collect some more properties of the ξ-functions that will be needed in what follows.

Lemma 4.3.
The ξ-functions have the following behavior as z → ∞:

ξ_k(z) = (1/(Tt)) (z − a_k) − (n_k/n) (1/z) + O(1/z²), k = 1, …, p,   (4.12)
ξ_{p+l}(z) = −(1/(T(1−t))) (z − b_l) + (m_l/n) (1/z) + O(1/z²), l = 1, …, q.   (4.13)

Proof. From (4.2) and (3.14) it follows that

F_i(z) = (∫ dµ_i)/z + O(1/z²) = t^{(n)}_{k(i),l(i)}/z + O(1/z²)   (4.14)

as z → ∞. Recall that we work with n-dependent transition numbers. Since

Σ_{i : k(i)=k} t^{(n)}_{k(i),l(i)} = n_k/n,  Σ_{i : l(i)=l} t^{(n)}_{k(i),l(i)} = m_l/n,

the asymptotic behaviors (4.12) and (4.13) follow immediately from the definitions (4.8)–(4.9).

Lemma 4.4.
For any i = 1, …, p+q−1 we have

∮_{C_i} ξ_{k(i)}(z) dz = −2πi t^{(n)}_{k(i),l(i)},   (4.15)
∮_{C_i} ξ_{p+l(i)}(z) dz = 2πi t^{(n)}_{k(i),l(i)},   (4.16)
∮_{C_i} ξ_j(z) dz = 0, for j ∉ {k(i), p+l(i)},   (4.17)

where C_i denotes a counterclockwise contour surrounding the interval [α_i, β_i] and not enclosing any point of the other intervals [α_j, β_j], j ≠ i.

Proof. From (4.2), (4.14) and the definition of C_i it follows that

∮_{C_i} F_i(z) dz = 2πi t^{(n)}_{k(i),l(i)}  and  ∮_{C_i} F_j(z) dz = 0 if j ≠ i.

The lemma then follows from the definitions (4.8)–(4.9).
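Lemma 4.4 is at bottom a residue computation: a counterclockwise contour around the support of µ_i picks up 2πi times the total mass of the measure. A small numerical sketch (the discretized toy measure, its mass t0, and the contour radius are our own arbitrary choices):

```python
import numpy as np

# Toy measure: uniform on [-1, 1] with total mass t0 (a stand-in for t^{(n)}_{k(i),l(i)}).
t0 = 0.3
xs = np.linspace(-1, 1, 2001)
w = np.full_like(xs, t0 / len(xs))          # discretized d mu_i, sums to t0

def F(z):
    """Cauchy transform F_i(z) = int d mu_i(x) / (z - x) of the discrete measure, cf. (4.2)."""
    return np.sum(w / (z - xs))

# counterclockwise circle of radius 2 around the support, discretized
theta = np.linspace(0, 2 * np.pi, 4001)[:-1]
z = 2 * np.exp(1j * theta)
dz = 2j * np.exp(1j * theta) * (2 * np.pi / len(theta))

integral = sum(F(zz) * dzz for zz, dzz in zip(z, dz))
# the contour integral equals 2*pi*i times the total mass of the measure
assert abs(integral - 2j * np.pi * t0) < 1e-8
```

Since F here is a finite sum of simple poles inside the contour, the trapezoidal rule on the circle is accurate to machine precision.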
Remark 4.5. Our definition of the ξ-functions differs slightly from the one used in [5]. If ξ̃_j denote the ξ-functions in [5] then we have

ξ_j(z) = ξ̃_j(z) − z/(T(1−t)), j = 1, …, p+q.

In the present form the formulae are more symmetric.

4.4 λ-functions

We define the λ_j-functions as

λ_k(z) = c_k + ∫_{β_{i_k}}^{z} ξ_k(s) ds, k = 1, …, p,   (4.18)
λ_{p+l}(z) = c_{p+l} + ∫_{β_{ĩ_l}}^{z} ξ_{p+l}(s) ds, l = 1, …, q,   (4.19)

where i_k := min{i | k(i) = k} and ĩ_l := min{i | l(i) = l}, and where the path of integration in the integrals (4.18) and (4.19) lies in C \ (−∞, β_{i_k}) and C \ (−∞, β_{ĩ_l}), respectively. The functions λ_k(z) and λ_{p+l}(z) are defined with a branch cut along the intervals (−∞, β_{i_k}] and (−∞, β_{ĩ_l}], respectively.

We choose the constants c_j in the following way.

Lemma 4.6.
We can (and do) choose the constants c_j in (4.18)–(4.19) in such a way that

Re(λ_{k(i),+}(β_i)) = Re(λ_{p+l(i),+}(β_i)),   (4.20)

for every i = 1, …, p+q−1.

Proof. We use the fact that the graph G = (V, E, t) is a tree. We can iteratively 'undress' this tree as follows. We start with G_0 = G. Next we choose a leaf vertex v_1 and set G_1 = G_0 \ v_1, i.e., G_1 is the tree obtained by removing the leaf and its corresponding edge from the tree G_0. We iteratively repeat this procedure and obtain in this way a chain of nested trees

G = G_0 ⊃ G_1 ⊃ ⋯ ⊃ G_{|V|−1},   (4.21)

where each G_i is obtained from G_{i−1} by removing one leaf. Obviously the last tree G_{|V|−1} in this chain consists of a single vertex.

We freely choose the constant c_j corresponding to this last vertex. Next we use induction on i = |V|−1, …, 1, choosing at each step the constant c_j corresponding to the vertex removed in passing from G_{i−1} to G_i, so that each time (4.20) is satisfied. This is possible since G_{i−1} \ G_i consists of a single vertex v_j, which is a leaf of G_{i−1}, and hence we have exactly one condition (4.20) to fix the integration constant c_j of this leaf. Thus we see that the conditions (4.20) can indeed be imposed on the constants c_j in (4.18)–(4.19).

Properties of the λ-functions that we will need are stated in the following lemmas. The first is a reformulation of the variational conditions (4.1).

Lemma 4.7.
We have for i = 1, …, p+q−1,

Re(λ_{k(i),±}(x) − λ_{p+l(i),±}(x))  = 0 for x ∈ [α_i, β_i],  and  ≥ 0 for x ∈ R \ [α_i, β_i].   (4.22)

Strict inequality in (4.22) holds for x ∈ [β_{i+1}, α_i) ∪ (β_i, α_{i−1}] (where α_0 = +∞ and β_{p+q} = −∞).

Proof. Observe that by (4.8)–(4.9) and (4.18)–(4.19) we have that

λ_k(z) = Σ_{i : k(i)=k} ∫ log (1/(z−x)) dµ_i(x) + (1/(2Tt)) (z − a_k)² + c̃_k,   (4.23)
λ_{p+l}(z) = −Σ_{i : l(i)=l} ∫ log (1/(z−x)) dµ_i(x) − (1/(2T(1−t))) (z − b_l)² + c̃_{p+l},   (4.24)

for certain real constants c̃_j, j = 1, …, p+q. Therefore, for i = 1, …, p+q−1,

Re(λ_{k(i)}(z) − λ_{p+l(i)}(z))
 = 2 ∫ log (1/|z−x|) dµ_i(x) + Σ_{j≠i : k(j)=k(i) or l(j)=l(i)} ∫ log (1/|z−x|) dµ_j(x)
  + (1/T) Re V_i(z) + (1/(2T)) (a_{k(i)} − b_{l(i)})² + c̃_{k(i)} − c̃_{p+l(i)},   (4.25)

where we used the definition (2.14) of V_i. Then by the variational conditions (4.1) we have

Re(λ_{k(i),±}(x) − λ_{p+l(i),±}(x)) ≥ L_i + (1/(2T)) (a_{k(i)} − b_{l(i)})² + c̃_{k(i)} − c̃_{p+l(i)}   (4.26)

for x ∈ R, with equality for x ∈ [α_i, β_i]. The constant in the right-hand side of (4.26) is equal to zero because the constants c_j are chosen so that (4.20) holds.

The strict inequality for x ∈ [β_{i+1}, α_i) ∪ (β_i, α_{i−1}] is a consequence of Lemma 4.1.

Lemma 4.8. As z → ∞ we have that

λ_k(z) = (1/(2Tt)) (z − a_k)² − (n_k/n) log z + c̃_k + O(1/z), k = 1, …, p,   (4.27)
λ_{p+l}(z) = −(1/(2T(1−t))) (z − b_l)² + (m_l/n) log z + c̃_{p+l} + O(1/z), l = 1, …, q.   (4.28)

Proof.
This follows from the definitions (4.18) and (4.19) and the asymptotic behavior (4.12) and (4.13) of the ξ-functions.

The next lemma will be a consequence of Lemma 4.4.

Lemma 4.9.
For k = 1, …, p, we have that

exp(n(λ_{k,+}(x) − λ_{k,−}(x))) = 1, for x ∈ R \ ∪_{i : k(i)=k} [α_i, β_i].   (4.29)

For l = 1, …, q, we have that

exp(n(λ_{p+l,+}(x) − λ_{p+l,−}(x))) = 1, for x ∈ R \ ∪_{i : l(i)=l} [α_i, β_i].   (4.30)

Proof.
Fix k = 1, …, p and let x ∈ R \ ∪_{i : k(i)=k} [α_i, β_i]. By definition of λ_k and by contour deformation we have that

λ_{k,+}(x) − λ_{k,−}(x) = ∮_C ξ_k(z) dz,

where C is some closed contour surrounding some of the intervals [α_i, β_i]. From Lemma 4.4 it follows that each of the enclosed intervals gives a contribution to the integral of the form ±2πi t^{(n)}_{k,l}. Since each t^{(n)}_{k,l} is a rational number with denominator n, indeed t^{(n)}_{k,l} = n_{k,l}/n, we conclude that n(λ_{k,+}(x) − λ_{k,−}(x)) is a multiple of 2πi, and so we obtain (4.29).

The proof of (4.30) is similar.

Lemma 4.9 will be important for the steepest descent analysis in Section 5.

5 First transformation Y ↦ X

In the following sections we describe the steepest descent analysis Y ↦ X ↦ T ↦ S ↦ R of the RH problem 2.11. Throughout the steepest descent analysis the following simple lemma will be repeatedly used.

Lemma 5.1. Assume that the matrix function Y(z) satisfies the jump condition Y_+(x) = Y_−(x) J(x) for x ∈ R. Let A(z) and B(z) be matrix functions with A(z) entire and B(z) analytic in C \ R. Then X(z) := A(z) Y(z) B(z) satisfies the jump condition

X_+(x) = X_−(x) (B_−^{−1}(x) J(x) B_+(x)).

The point will be to choose appropriate transformation matrices A(z) and B(z) in order to bring the RH problem 2.11 to a simple form.

Let Y be the solution to the original RH problem 2.11. The first transformation Y ↦ X serves to normalize the RH problem at infinity. To this end we define X = X(z) as

X(z) = L^{−n} Y(z) D(z)^n,   (5.1)

where we define the diagonal matrices

D(z) = diag( (exp(λ_k(z) − (1/(2Tt))(z − a_k)²))_{k=1}^{p}, (exp(λ_{p+l}(z) + (1/(2T(1−t)))(z − b_l)²))_{l=1}^{q} ),   (5.2)

and

L = diag(exp(c̃_1), …, exp(c̃_{p+q})),   (5.3)

where the constants c̃_j are as in (4.27)–(4.28).
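Lemma 5.1 and the transformation (5.1) are pure conjugations of the jump relation, and the mechanism can be checked numerically on random matrices (the sizes, seed, and diagonal shift that guarantees invertibility are our own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# boundary values of Y across a contour, with jump J = Y_-^{-1} Y_+
Y_minus = rng.standard_normal((3, 3)) + np.eye(3) * 5
J = rng.standard_normal((3, 3)) + np.eye(3) * 5
Y_plus = Y_minus @ J

# X = A Y B with A the same from both sides (entire) and B two-valued (analytic off R)
A = rng.standard_normal((3, 3)) + np.eye(3) * 5
B_plus = rng.standard_normal((3, 3)) + np.eye(3) * 5
B_minus = rng.standard_normal((3, 3)) + np.eye(3) * 5

X_plus, X_minus = A @ Y_plus @ B_plus, A @ Y_minus @ B_minus
# Lemma 5.1: the new jump is B_-^{-1} J B_+ (the entire factor A drops out)
new_jump = np.linalg.inv(X_minus) @ X_plus
assert np.allclose(new_jump, np.linalg.inv(B_minus) @ J @ B_plus)
```

The left factor A cancels because it takes the same value from both sides of the contour; only the right factor B, which may differ across the contour, modifies the jump.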
From Lemma 5.1 it follows by straightforward calculations that X = X(z) satisfies the following RH problem.

RH problem 5.2.
(1) X is analytic in C \ R;
(2) For x ∈ R, we have that

X_+(x) = X_−(x) \begin{pmatrix} J_{1,1}(x) & J_{1,2}(x) \\ 0 & J_{2,2}(x) \end{pmatrix}   (5.4)

where the blocks J_{1,1}, J_{1,2} and J_{2,2} (of sizes p × p, p × q and q × q, respectively) have the following form.

(a) J_{1,2} is a full p × q matrix with entries

(J_{1,2}(x))_{k,l} = exp(n(λ_{p+l,+}(x) − λ_{k,−}(x))), x ∈ R.   (5.5)

(b) Outside of the intervals [α_i, β_i], J_{1,1} and J_{2,2} are identity matrices:

J_{1,1}(x) = I_p, J_{2,2}(x) = I_q, x ∈ R \ ∪_{i=1}^{p+q−1} [α_i, β_i].   (5.6)

(c) On the interval [α_i, β_i], J_{1,1} and J_{2,2} are diagonal matrices with ones on the diagonal, except for

(J_{1,1}(x))_{k(i),k(i)} = exp(n(λ_{k(i),+}(x) − λ_{k(i),−}(x))), x ∈ (α_i, β_i),   (5.7)

and

(J_{2,2}(x))_{l(i),l(i)} = exp(n(λ_{p+l(i),+}(x) − λ_{p+l(i),−}(x))), x ∈ (α_i, β_i).   (5.8)

(3) As z → ∞, we have that

X(z) = I_{p+q} + O(1/z).   (5.9)

Here we used Lemma 4.9 to see that the diagonal entries of (5.6), as well as most of the diagonal entries of (5.7)–(5.8), are equal to 1. On the other hand, we used the asymptotic conditions (4.27)–(4.28) and (5.1)–(5.3) to see that the RH problem for X is normalized at infinity in the sense of (5.9).

Let us illustrate the jump matrices for the example with p = q = 2 as in Figures 4 and 9.
In that case the jump conditions are written as

X_+ = X_− \begin{pmatrix} exp(n(λ_{1,+}−λ_{1,−})) & 0 & exp(n(λ_{3,+}−λ_{1,−})) & exp(n(λ_4−λ_{1,−})) \\ 0 & 1 & exp(n(λ_{3,+}−λ_2)) & exp(n(λ_4−λ_2)) \\ 0 & 0 & exp(n(λ_{3,+}−λ_{3,−})) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

on the interval [α_1, β_1],

X_+ = X_− \begin{pmatrix} 1 & 0 & exp(n(λ_{3,+}−λ_1)) & exp(n(λ_4−λ_1)) \\ 0 & exp(n(λ_{2,+}−λ_{2,−})) & exp(n(λ_{3,+}−λ_{2,−})) & exp(n(λ_4−λ_{2,−})) \\ 0 & 0 & exp(n(λ_{3,+}−λ_{3,−})) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

on the interval [α_2, β_2],

X_+ = X_− \begin{pmatrix} 1 & 0 & exp(n(λ_3−λ_1)) & exp(n(λ_{4,+}−λ_1)) \\ 0 & exp(n(λ_{2,+}−λ_{2,−})) & exp(n(λ_3−λ_{2,−})) & exp(n(λ_{4,+}−λ_{2,−})) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & exp(n(λ_{4,+}−λ_{4,−})) \end{pmatrix}

on the interval [α_3, β_3], and

X_+ = X_− \begin{pmatrix} 1 & 0 & exp(n(λ_3−λ_1)) & exp(n(λ_4−λ_1)) \\ 0 & 1 & exp(n(λ_3−λ_2)) & exp(n(λ_4−λ_2)) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

on R \ ∪_{i=1}^{3} [α_i, β_i].

6 Second transformation X ↦ T

On the interval [α_i, β_i] we have that λ_{p+l(i),+} − λ_{k(i),−} is a purely imaginary constant. So the entry

(J_{1,2}(x))_{k(i),l(i)}, x ∈ [α_i, β_i],   (6.1)

is constant with modulus one. It would be an ideal situation if, except for (6.1), all entries of the matrix J_{1,2}(x) for x ∈ R were exponentially decaying as n → ∞. That would happen if for every k = 1, …, p, and l = 1, …, q, we have

(a) if t^{(n)}_{k,l} = 0, then Re λ_{p+l,+}(x) < Re λ_{k,−}(x), x ∈ R,
(b) if t^{(n)}_{k,l} > 0, then Re λ_{p+l,+}(x) < Re λ_{k,−}(x), x ∈ R \ [α_i, β_i], where i = k + l − 1.

This need not hold in general, and it is the reason for the second transformation X ↦ T in the steepest descent analysis, in which a number of unwanted entries of the jump matrices are eliminated, in particular those entries of J_{1,2}(x) that could potentially be exponentially increasing as n → ∞.

To this end we will open global lenses [3] and apply Gaussian elimination inside each of the lenses. This construction will be systematic and may have interest in its own right. The proof that appropriate lenses exist will be a consequence of the maximum principle for subharmonic functions.

The opening of global lenses can be conveniently described in terms of the right-down path in Proposition 2.2.
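The right-down path and the edge labeling just mentioned can be sketched in code: walking a staircase of down/right steps through the p × q array of transition numbers visits the edges (k(i), l(i)) in order, and the label i of the edge (a_k, b_l) on the path satisfies i = k + l − 1. The helper `staircase_edges` and the step word are our own illustrative choices:

```python
def staircase_edges(steps):
    """Edges (k(i), l(i)) visited by a right-down path through a p x q array,
    given a word over 'D' (down) and 'R' (right); illustrative helper."""
    k, l = 1, 1
    edges = [(1, 1)]
    for s in steps:
        if s == "D":
            k += 1          # vertical step, cf. (6.2)-(6.3)
        else:
            l += 1          # horizontal step, cf. (6.17)-(6.18)
        edges.append((k, l))
    return edges

# the p = q = 2 example of Figure 4: one down step, then one right step
edges = staircase_edges("DR")
assert edges == [(1, 1), (2, 1), (2, 2)]

# the label i of the path edge (a_k, b_l) satisfies i = k + l - 1
assert all(i == k + l - 1 for i, (k, l) in enumerate(edges, start=1))
```

A path with p−1 down steps and q−1 right steps has p+q−1 edges, matching the number of measures in the equilibrium problem.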
We start at the top left entry (1, 1) of the matrix of transition numbers (t^{(n)}_{k,l}) and walk along the path until we arrive at the bottom right entry (p, q). During this walk, we will open global lenses in an appropriate way. The precise action to perform depends on whether we are taking a vertical (down) or a horizontal (right) step along the path.

First we construct the global lenses for a vertical step along the lattice path. Thus assume that i ∈ {1, …, p+q−2} is such that

(k(i), l(i)) = (k, l),   (6.2)
(k(i+1), l(i+1)) = (k+1, l),   (6.3)

for certain k = 1, …, p−1 and l = 1, …, q.

From (4.27) we obtain the asymptotic behavior

λ_{k+1}(z) − λ_k(z) = (1/(Tt)) (a_k − a_{k+1}) z + O(log |z|),   (6.4)

as z → ∞.

We also note that Re λ_k and Re λ_{k+1} are well-defined and continuous on C. Indeed, we have by (4.23) that

Re(λ_k(z)) = Σ_{j : k(j)=k} ∫ log (1/|z−x|) dµ_j(x) + (1/(2Tt)) Re((z − a_k)²) + c̃_k,   (6.5)
Re(λ_{k+1}(z)) = Σ_{j : k(j)=k+1} ∫ log (1/|z−x|) dµ_j(x) + (1/(2Tt)) Re((z − a_{k+1})²) + c̃_{k+1},   (6.6)

for certain constants c̃_k and c̃_{k+1}.

The representations (6.5) and (6.6) also show the following.

Lemma 6.1. The function z ↦ Re(λ_{k+1}(z) − λ_k(z)) is superharmonic on C \ ∪_{j : k(j)=k} [α_j, β_j], and subharmonic on C \ ∪_{j : k(j)=k+1} [α_j, β_j].

Proof. It is a standard fact from potential theory that any function of the form z ↦ ∫ log (1/|z−x|) dµ(x), with µ a positive measure with compact support, is superharmonic on C and harmonic on C \ supp(µ), see e.g. [30, Chapter 0]. Thus by (6.5) and (6.6) we have that z ↦ Re λ_{k+1}(z) is superharmonic on C and harmonic on C \ ∪_{j : k(j)=k+1} [α_j, β_j], while z ↦ −Re λ_k(z) is subharmonic on C and harmonic on C \ ∪_{j : k(j)=k} [α_j, β_j]. Since the two sets ∪_{j : k(j)=k} [α_j, β_j] and ∪_{j : k(j)=k+1} [α_j, β_j] are disjoint, the lemma follows.

We next define the open sets Ω_+, Ω_− ⊂ C as follows:

Ω_+ := {z ∈ C | Re(λ_{k+1}(z) − λ_k(z)) > 0},   (6.7)
Ω_− := {z ∈ C | Re(λ_{k+1}(z) − λ_k(z)) < 0}.   (6.8)

We also denote

Ω_{+,∞} := {z ∈ Ω_+ | ∃ a connected path in Ω_+ from z to ∞},   (6.9)
Ω_{−,∞} := {z ∈ Ω_− | ∃ a connected path in Ω_− from z to ∞}.   (6.10)

In other words, Ω_{+,∞} is the union of the unbounded connected components of Ω_+, and similarly for Ω_{−,∞}.

The open sets Ω_{+,∞}, Ω_{−,∞} satisfy the following properties.

Lemma 6.2. (a)
For each ε > 0 there exists R > 0 so that

{z ∈ C | |z| > R, −π/2 + ε < arg z < π/2 − ε} ⊂ Ω_{+,∞},   (6.11)

and

{z ∈ C | |z| > R, π/2 + ε < arg z < 3π/2 − ε} ⊂ Ω_{−,∞}.   (6.12)

In particular, Ω_{+,∞} lies to the right of Ω_{−,∞}.

(b) Both Ω_{+,∞} and Ω_{−,∞} are connected.

Proof. Part (a) follows from (6.4) and the fact that a_k > a_{k+1}. Part (b) follows from part (a) in a similar way as in [16, Proof of Lemma 2.4], to which we refer for further details.

Lemma 6.3.
We have

α_i ∈ Ω_{+,∞} and β_{i+1} ∈ Ω_{−,∞},   (6.13)

where we recall that i is related to k as in (6.2) and (6.3).

Proof. By applying the variational conditions (4.22) twice, first with the index i and then with i+1, we obtain

Re(λ_{k+1}(x) − λ_k(x)) = Re(λ_{k+1}(x) − λ_{p+l}(x)) ≥ 0, x ∈ [α_i, β_i],   (6.14)

and the inequality (6.14) is strict for x = α_i because of the statement about the strict inequality in Lemma 4.7. Hence α_i ∈ Ω_+, and in a similar way we obtain β_{i+1} ∈ Ω_−.

To show that α_i belongs to the unbounded component of Ω_+ we argue as in [16, Proof of Lemma 2.4]. What we use is that ∪_{j : k(j)=k} [α_j, β_j] lies to the right of ∪_{j : k(j)=k+1} [α_j, β_j], that α_i is the left-most point of ∪_{j : k(j)=k} [α_j, β_j], and that β_{i+1} is the right-most point of ∪_{j : k(j)=k+1} [α_j, β_j].

The proof is by contradiction. Suppose that α_i does not belong to the unbounded component of Ω_+. Then the set

Ω_{+,α_i} := {z ∈ Ω_+ | ∃ a connected path in Ω_+ from z to α_i}   (6.15)

is bounded, it is symmetric with respect to the real line, and it contains α_i. Also Re(λ_{k+1} − λ_k) is zero on the boundary of Ω_{+,α_i} and strictly positive inside of Ω_{+,α_i} by construction. Since subharmonic functions satisfy a maximum principle, it follows that Re(λ_{k+1} − λ_k) is not subharmonic on all of Ω_{+,α_i}. Then by Lemma 6.1 we conclude that Ω_{+,α_i} has a nonempty intersection with ∪_{j : k(j)=k+1} [α_j, β_j]. Any point of intersection lies strictly to the left of β_{i+1}, since β_{i+1} ∈ Ω_− and β_{i+1} is the right-most point of ∪_{j : k(j)=k+1} [α_j, β_j]. Then, because of symmetry in the real axis, it follows that Ω_{+,α_i} surrounds the point β_{i+1}. The set

Ω_{−,β_{i+1}} := {z ∈ Ω_− | ∃ a connected path in Ω_− from z to β_{i+1}}   (6.16)

is then bounded, and it does not intersect ∪_{j : k(j)=k} [α_j, β_j]. By Lemma 6.1, we then have that Re(λ_{k+1} − λ_k) is superharmonic on Ω_{−,β_{i+1}}.
However, Re(λ_{k+1} − λ_k) is zero on the boundary, and strictly negative inside of Ω_{−,β_{i+1}}, which gives a contradiction with the minimum principle for superharmonic functions.

Thus α_i belongs to the unbounded component of Ω_+, and similarly β_{i+1} belongs to the unbounded component of Ω_−.

Let X_0 > 0 be such that

Re(λ_{k′}(x) − λ_{p+l′}(x)) > 0,

for all x ∈ (−∞, −X_0) ∪ (X_0, ∞) and all k′ = 1, …, p, l′ = 1, …, q. The existence of such a constant X_0 follows from (4.27)–(4.28).

The next result is an immediate consequence of Lemma 6.3.

Theorem 6.4.
There exist two simple closed contours Γ_{+,i} ⊂ Ω_{+,∞} and Γ_{−,i} ⊂ Ω_{−,∞} such that

(a) Γ_{+,i} surrounds the interval [α_i, β_1].
(b) Γ_{−,i} surrounds the interval [α_{p+q−1}, β_{i+1}].
(c) Both Γ_{−,i} and Γ_{+,i} intersect the interval (β_{i+1}, α_i) in exactly one point, which we denote by x_i, y_i, respectively. We have x_i < y_i.
(d) Both Γ_{−,i} and Γ_{+,i} have one extra intersection point with the real axis, which lies inside the interval (−∞, −X_0), (X_0, ∞), respectively.

As in [3], the contours Γ_{+,i} and Γ_{−,i} will be called global lenses.

Remark 6.5. Instead of taking closed contours, one might also take Γ_{+,i} and Γ_{−,i} to be unbounded, tending to infinity in the right and left half of the complex plane, respectively, and both intersecting the real line in exactly one point in the line segment (β_{i+1}, α_i). This is the construction that was used in [16].

An illustration of Theorem 6.4 is shown in Figure 10. When convenient we also define x_0 = X_0, y_0 = +∞, x_{p+q−1} = −∞ and y_{p+q−1} = −X_0.

Figure 10: The figure shows how to open global lenses between the two intervals [α_{i+1}, β_{i+1}] (left) and [α_i, β_i] (right).

Next we construct the global lenses for a horizontal step along the lattice path. Thus assume that i ∈ {1, …, p+q−2} is such that

(k(i), l(i)) = (k, l),   (6.17)
(k(i+1), l(i+1)) = (k, l+1),   (6.18)

for certain k = 1, …, p and l = 1, …, q−1. We now define Ω_+ and Ω_− as follows:

Ω_+ := {z ∈ C | Re(λ_{p+l}(z) − λ_{p+l+1}(z)) > 0},   (6.19)
Ω_− := {z ∈ C | Re(λ_{p+l}(z) − λ_{p+l+1}(z)) < 0}.   (6.20)

The unbounded regions Ω_{+,∞}, Ω_{−,∞} ⊂ C are again defined as in (6.9)–(6.10). Then Lemma 6.2 remains valid.

Using the above definitions, Lemma 6.3 and Theorem 6.4 both remain valid as well. Thus we can construct the global lenses Γ_{+,i} and Γ_{−,i} in exactly the same way as before.
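Lemma 6.2(a) ultimately rests on the sign of the leading term in (6.4): for a_k > a_{k+1}, the real part of (a_k − a_{k+1})z/(Tt) is positive in a right sector and negative in a left sector. A quick numerical sketch (the values of a_k, a_{k+1}, T, t and the sector opening ε are our own toy choices):

```python
import cmath
import math

# leading term of (6.4): (a_k - a_{k+1}) z / (T t), with a_k > a_{k+1}
a_k, a_k1, T, t = 1.0, -1.0, 0.5, 0.5
lead = lambda z: (a_k - a_k1) * z / (T * t)

R, eps = 10.0, 0.2
# right sector (6.11): real part positive, so large z there lie in Omega_+
for theta in [-math.pi / 2 + eps, 0.0, math.pi / 2 - eps]:
    assert lead(R * cmath.exp(1j * theta)).real > 0
# left sector (6.12): real part negative, so large z there lie in Omega_-
for theta in [math.pi / 2 + eps, math.pi, 3 * math.pi / 2 - eps]:
    assert lead(R * cmath.exp(1j * theta)).real < 0
```

The O(log |z|) remainder in (6.4) cannot overturn these signs for |z| large, which is exactly how the sectors (6.11)–(6.12) arise.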
Now we will show how to apply Gaussian elimination inside the global lenses. First, it might be worthwhile to recall the basic idea of Gaussian elimination in our present context, see e.g. [20]. Let

J = u v^T = \begin{pmatrix} u_1 \\ ⋮ \\ u_p \end{pmatrix} \begin{pmatrix} v_1 & ⋯ & v_q \end{pmatrix}   (6.21)

be a rank-one matrix of size p by q. Assume that one multiplies J on the left with the matrix

I_p − (u_{k+1}/u_k) e_{k+1} e_k^T.   (6.22)

Here and below we use e_k to denote the column vector with all entries equal to zero, except for the kth entry which equals 1. The length of e_k will be clear from the context. Note that the outer product e_{k+1} e_k^T in (6.22) is the matrix with all zero entries except for the (k+1, k) entry which equals one.

The multiplication with (6.22) on the left is equivalent to applying an elementary row operation to the rows of J, where to row k+1 one adds −u_{k+1}/u_k times row k. This row operation is such that the entries of row k+1 of J are eliminated.

Similarly, assume that one multiplies J on the right with the matrix

I_q − (v_{l+1}/v_l) e_l e_{l+1}^T.   (6.23)

This is equivalent to applying an elementary column operation to the columns of J, where to column l+1 one adds −v_{l+1}/v_l times column l. This column operation is such that the entries of column l+1 of J are eliminated.

Let us see how we can apply these ideas in the present context. The role of the rank-one matrix (6.21) of size p by q will be played by the top right submatrix J_{1,2}(x) of the jump matrix (5.4), cf. (5.5). The mechanism to multiply the jump matrix on the left or on the right with a transformation matrix of the form (6.22)–(6.23) is to define a new RH matrix T(z) = X(z) B(z) for a suitable transformation matrix B(z) and subsequently apply Lemma 5.1.

Now we are ready to describe the Gaussian elimination in detail. This will be the next transformation X ↦ T in the steepest descent analysis of the RH problem.

Algorithm 6.6. (The transformation X ↦ T)

1. (Initialization.) We initialize T(z) := X(z).

2.
(Forward sweep.) For each i = 1, …, p+q−2, we open the global lens Γ_{−,i} in Theorem 6.4 and we update, in case of a vertical step (6.2)–(6.3),

T(z) = T(z) (I_{p+q} + exp(n(λ_{k+1}(z) − λ_k(z))) e_k e_{k+1}^T) inside the lens Γ_{−,i}, and T(z) elsewhere,

and in case of a horizontal step (6.17)–(6.18),

T(z) = T(z) (I_{p+q} − exp(n(λ_{p+l}(z) − λ_{p+l+1}(z))) e_{p+l+1} e_{p+l}^T) inside the lens Γ_{−,i}, and T(z) elsewhere.
3. (Backward sweep.) For each i = p+q−2, …, 1, we open the global lens Γ_{+,i} in Theorem 6.4 and we update, in case of a vertical step (6.2)–(6.3),

T(z) = T(z) (I_{p+q} + exp(n(λ_k(z) − λ_{k+1}(z))) e_{k+1} e_k^T) inside the lens Γ_{+,i}, and T(z) elsewhere,

and in case of a horizontal step (6.17)–(6.18),

T(z) = T(z) (I_{p+q} − exp(n(λ_{p+l+1}(z) − λ_{p+l}(z))) e_{p+l} e_{p+l+1}^T) inside the lens Γ_{+,i}, and T(z) elsewhere.

Incidentally, we note that the forward and backward sweeps in the above algorithm commute. But one is not allowed to change the order in which the index i varies inside the sweeps.

It is easy to see that Algorithm 6.6 does not change the jump matrix in (5.4) except for its top right submatrix J_{1,2}(x). To see what happens with the latter, we have to distinguish between different regions of the complex plane.

First assume that x belongs to one of the intervals (y_j, x_{j−1}) ⊃ [α_j, β_j], j = 1, …, p+q−1. (Recall that y_{p+q−1} = −X_0 and x_0 = X_0.) From Theorem 6.4 we see that x lies inside the global lens Γ_{−,i} precisely when i = 1, …, j−1. Hence the 'relevant' indices in the forward sweep in Algorithm 6.6 are i = 1, …, j−1. During the corresponding operations, the entries in rows 1, 2, …, k(j)−1 and columns 1, 2, …, l(j)−1 of J_{1,2}(x) are cancelled by Gaussian elimination.

On the other hand, Theorem 6.4 shows that x lies inside the global lens Γ_{+,i} precisely when i = p+q−2, p+q−3, …, j. Hence the relevant indices in the backward sweep in Algorithm 6.6 are i = p+q−2, p+q−3, …, j. During the corresponding operations, the entries in rows p, p−1, …, k(j)+1 and columns q, q−1, …, l(j)+1 of J_{1,2}(x) are cancelled by Gaussian elimination.

It follows that at the end of the two sweeps in Algorithm 6.6, all the entries of the rank-one matrix J_{1,2}(x) are eliminated, except for the (k(j), l(j)) entry which equals

exp(n(λ_{p+l(j),+}(x) − λ_{k(j),−}(x))).

Recall that in the above description, we assumed that x ∈ (y_j, x_{j−1}) ⊃ [α_j, β_j]. Next, let us assume that x belongs to one of the gaps (x_j, y_j) for certain j. We can then repeat the above arguments and find that at the end of Algorithm 6.6, all the entries of J_{1,2}(x) are eliminated except for two of them. In case of a vertical step (6.2)–(6.3) these are the (k(j), l(j)) and (k(j)+1, l(j)) entries, which are given by

exp(n(λ_{p+l(j)}(x) − λ_{k(j)}(x))), exp(n(λ_{p+l(j)}(x) − λ_{k(j)+1}(x))),

respectively. But by the variational inequality in (4.22), which is strict according to Lemma 4.1, we see that both entries are exponentially small for n → ∞. A similar argument applies in case of a horizontal step (6.17)–(6.18).

Finally we should note that by the operations in Algorithm 6.6, the RH matrix T(z) also has a jump on each of the contours Γ_{+,i} and Γ_{−,i}. For example, in case of a vertical step (6.2)–(6.3) the jump matrix on the contour Γ_{+,i} takes the form

I_{p+q} ± exp(n(λ_k(z) − λ_{k+1}(z))) e_{k+1} e_k^T,

see Algorithm 6.6.
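The linear-algebra core of the two sweeps can be sketched on a plain rank-one matrix: elementary operations of the form (6.22)–(6.23) zero out one row or column at a time, and a forward sweep up to a chosen entry (k_0, l_0) followed by a backward sweep leaves only that entry. This is a simplified analogue of Algorithm 6.6 (the vectors, sizes and surviving index are arbitrary; the z-dependence of the actual transformation is ignored):

```python
import numpy as np

p, q = 2, 3
u = np.array([3.0, 2.0])
v = np.array([5.0, 7.0, 2.0])
J = np.outer(u, v)                          # rank-one matrix u v^T, cf. (6.21)
k0, l0 = 2, 2                               # surviving entry, 1-based as in the text

# forward sweep: eliminate rows 1,...,k0-1 and columns 1,...,l0-1
for r in range(k0 - 1):                     # row r+1 eliminated using the next row
    J[r] -= (u[r] / u[r + 1]) * J[r + 1]
for c in range(l0 - 1):                     # column c+1 eliminated using the next column
    J[:, c] -= (v[c] / v[c + 1]) * J[:, c + 1]

# backward sweep: eliminate rows k0+1,...,p and columns l0+1,...,q
for r in range(p - 1, k0 - 1, -1):
    J[r] -= (u[r] / u[r - 1]) * J[r - 1]
for c in range(q - 1, l0 - 1, -1):
    J[:, c] -= (v[c] / v[c - 1]) * J[:, c - 1]

expected = np.zeros((p, q))
expected[k0 - 1, l0 - 1] = u[k0 - 1] * v[l0 - 1]
assert np.allclose(J, expected)             # only the (k0, l0) entry survives
```

Each row (or column) is eliminated using a neighbor that has not yet been touched, which is why the order inside each sweep matters, just as noted after Algorithm 6.6.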
But by our assumption that Γ + ,i ⊂ Ω + we see that this jump matrix isuniformly exponentially close to the identity matrix when n → ∞ . A similar argument holds forthe jumps along the contours Γ − ,i ⊂ Ω − .Summarizing, we established that T satisfies the following RH problem. RH problem 6.7. (1) T is analytic on C \ ( R ∪ S p + q − i =1 (Γ + ,i ∪ Γ − ,i )) . (2) For x ∈ R ∪ S p + q − i =1 (Γ + ,i ∪ Γ − ,i )) we have that T + ( x ) = T − ( x ) J T ( x ) (6.24) where J T ( x ) satisfies the following − , Γ + , ✒■ ■✒ Γ − , Γ + , ■✒✒■ R ✲ r y r α r β r x r y r α r β r x r y r α r β r x Figure 11: The figure shows the contours in the RH problem for the matrix T for the example ofFigures 4 and 9. We have three intervals [ α i , β i ], i = 1 , ,
3, and four global lenses Γ ± ,i , i = 1 , For x ∈ ( y i , x i − ) ⊃ [ α i , β i ] , i = 1 , . . . , p + q − , we have that J T ( x ) equals the identitymatrix, except for the × block lying on the intersection of rows and columns k ( i ) and p + l ( i ) , which is given by (cid:18) exp( n ( λ k ( i ) , + ( x ) − λ k ( i ) , − ( x ))) exp( n ( λ p + l ( i ) , + ( x ) − λ k ( i ) , − ( x )))0 exp( n ( λ p + l ( i ) , + ( x ) − λ p + l ( i ) , − ( x ))) (cid:19) . (6.25)(b) For x ∈ ( −∞ , y p + q − ) ∪ ( S i ( x i , y i )) ∪ ( x , ∞ ) we have that J T ( x ) is exponentiallyclose to the identity matrix as n → ∞ , both uniformly as well as in L sense. (c) For x ∈ S i (Γ + ,i ∪ Γ − ,i ) , the jump matrix J T is also exponentially close to the identitymatrix as n → ∞ in uniform sense (and therefore also in L since the contours Γ ± ,i are compact). (3) As z → ∞ , we have that T ( z ) = I p + q + O (1 /z ) . (6.26)Let us illustrate this RH problem for the example in Figures 4 and 9. Then we have threeintervals [ α i , β i ], i = 1 , ,
3, and four global lenses between them: see Figure 11.

The jump conditions (6.24)--(6.25) can now be written as
\[ T_+ = T_- \begin{pmatrix} \exp(n(\lambda_{1,+} - \lambda_{1,-})) & 0 & \exp(n(\lambda_{3,+} - \lambda_{1,-})) & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \exp(n(\lambda_{3,+} - \lambda_{3,-})) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]
on the interval $(y_1, x_0) \supset [\alpha_1, \beta_1]$,
\[ T_+ = T_- \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \exp(n(\lambda_{2,+} - \lambda_{2,-})) & \exp(n(\lambda_{3,+} - \lambda_{2,-})) & 0 \\ 0 & 0 & \exp(n(\lambda_{3,+} - \lambda_{3,-})) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]
on the interval $(y_2, x_1) \supset [\alpha_2, \beta_2]$, and
\[ T_+ = T_- \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \exp(n(\lambda_{2,+} - \lambda_{2,-})) & 0 & \exp(n(\lambda_{4,+} - \lambda_{2,-})) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \exp(n(\lambda_{4,+} - \lambda_{4,-})) \end{pmatrix} \]
on the interval $(y_3, x_2) \supset [\alpha_3, \beta_3]$. The jump matrices on the remaining contours $(-\infty, y_3)$, $(x_2, y_2)$, $(x_1, y_1)$, $(x_0, \infty)$ and on the four global lenses $\Gamma_{\pm,i}$ in Figure 11 are all exponentially close to the identity matrix as $n \to \infty$. The jump matrices $J_T$ in the RH problem for $T$ are thus nontrivial only in the $2 \times 2$ blocks associated with the intervals $[\alpha_i, \beta_i]$.

In the transformation $T \mapsto S$ of the RH problem we transform the oscillatory entries of the jump matrix (6.24)--(6.25) along each interval $[\alpha_i, \beta_i]$ into exponentially decaying ones. To this end we open a local lens around each interval $[\alpha_i, \beta_i]$. Since the RH problem is locally of size 2 by 2, this can be done in the standard way [11, 13].

For each $i = 1, \ldots, p+q-1$ we open a lens around the interval $[\alpha_i, \beta_i]$, chosen such that $\operatorname{Re}(\lambda_{k(i)} - \lambda_{p+l(i)}) < 0$ on $L_{\pm,i}$. We use $L_{+,i}$ and $L_{-,i}$ to denote the upper and lower lips of the lens, respectively.

Figure 12: A local lens around the interval $[\alpha_i, \beta_i]$.

We define the matrix function $S$ as follows:
\[ S(z) = \begin{cases} T(z)\bigl(I_{p+q} - \exp\bigl(n(\lambda_{k(i)}(z) - \lambda_{p+l(i)}(z))\bigr)\, e_{p+l(i)} e_{k(i)}^T\bigr), & \text{in the upper part of the lens around } [\alpha_i, \beta_i], \\ T(z)\bigl(I_{p+q} + \exp\bigl(n(\lambda_{k(i)}(z) - \lambda_{p+l(i)}(z))\bigr)\, e_{p+l(i)} e_{k(i)}^T\bigr), & \text{in the lower part of the lens around } [\alpha_i, \beta_i], \\ T(z), & \text{outside of all the lenses}. \end{cases} \tag{7.1} \]
Then $S$ satisfies the following RH problem.

RH problem 7.1.
(1) $S$ is analytic in $\mathbb{C} \setminus (\mathbb{R} \cup \bigcup_i (\Gamma_{+,i} \cup \Gamma_{-,i}) \cup \bigcup_i (L_{+,i} \cup L_{-,i}))$.

(2) For $x \in \mathbb{R} \cup \bigcup_i (\Gamma_{+,i} \cup \Gamma_{-,i}) \cup \bigcup_i (L_{+,i} \cup L_{-,i})$ we have that
\[ S_+(x) = S_-(x)\, J_S(x) \tag{7.2} \]
where $J_S(x)$ satisfies the following.

(a) For $x \in [y_i, x_{i-1}] \cup L_{+,i} \cup L_{-,i}$ we have that $J_S(x)$ is the identity matrix except for the $2 \times 2$ block on the intersection of rows and columns $k(i)$ and $p+l(i)$, which is given by
\[ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad \text{for } x \in [\alpha_i, \beta_i], \tag{7.3} \]
\[ \begin{pmatrix} 1 & 0 \\ \exp\bigl(n(\lambda_{k(i)} - \lambda_{p+l(i)})\bigr) & 1 \end{pmatrix}, \qquad \text{for } x \in L_{\pm,i}, \tag{7.4} \]
and by
\[ \begin{pmatrix} \exp\bigl(n(\lambda_{k(i),+}(x) - \lambda_{k(i),-}(x))\bigr) & \exp\bigl(n(\lambda_{p+l(i),+}(x) - \lambda_{k(i),-}(x))\bigr) \\ 0 & \exp\bigl(n(\lambda_{p+l(i),+}(x) - \lambda_{p+l(i),-}(x))\bigr) \end{pmatrix}, \tag{7.5} \]
for $x \in [y_i, \alpha_i] \cup [\beta_i, x_{i-1}]$.

(b) On the other parts of the contour we have $J_S(x) = J_T(x)$, and $J_S(x)$ is exponentially close to the identity matrix as $n \to \infty$, both uniformly and in the $L^2$ sense.

(3) As $z \to \infty$, we have that
\[ S(z) = I_{p+q} + O(1/z). \tag{7.6} \]

From standard arguments based on the Cauchy-Riemann conditions [11] it follows that the local lenses can be chosen so that the jumps on $L_{\pm,i}$ in (7.4) are uniformly exponentially close to the identity matrix, away from a neighborhood of the endpoints $\alpha_i, \beta_i$.

In this subsection we build a global parametrix $P^{(\infty)}(z)$, which will be a good approximation to the RH problem away from the endpoints $\alpha_i, \beta_i$, $i = 1, \ldots, p+q-$
1. The construction will be quite similar to the one in [10].

We will construct the matrix function $P^{(\infty)}(z)$ such that it satisfies the following RH problem, obtained from RH problem 7.1 by ignoring all exponentially small entries of the jump matrices.

RH problem 7.2.

(1) $P^{(\infty)}(z)$ is analytic in $\mathbb{C} \setminus \bigcup_{i=1}^{p+q-1} [\alpha_i, \beta_i]$.

(2) For $x \in \bigcup_i (\alpha_i, \beta_i)$, we have that
\[ P^{(\infty)}_+(x) = P^{(\infty)}_-(x)\, J_{P^{(\infty)}}(x) \tag{7.7} \]
where the jump matrix $J_{P^{(\infty)}}(x)$ equals, in block form,
\[ J_{P^{(\infty)}}(x) = \begin{pmatrix} I_{k(i)-1} & & & & \\ & 0 & & 1 & \\ & & I_{p+l(i)-1-k(i)} & & \\ & -1 & & 0 & \\ & & & & I_{q-l(i)} \end{pmatrix}, \tag{7.8} \]
for $x \in (\alpha_i, \beta_i)$; that is, the identity matrix except for the $2 \times 2$ block on the intersection of rows and columns $k(i)$ and $p+l(i)$.

(3) As $z \to \infty$, we have that $P^{(\infty)}(z) = I_{p+q} + O(1/z)$.

To solve this RH problem, we will use the fact that the Riemann surface $\mathcal{R}$ in Section 4.2 has genus zero. From general algebraic geometry [26], this implies the existence of a rational parametrization
\[ \xi = \xi(v), \qquad z = z(v), \tag{7.9} \]
where $v$ runs through the extended complex plane $\overline{\mathbb{C}}$ (Riemann sphere).

The $v$-plane is then partitioned into $p+q$ disjoint open sets $\Omega_j$, $j = 1, \ldots, p+q$, where $\Omega_j$ is defined as the inverse image under (7.9) of the $j$th sheet $\mathcal{R}_j$ of the Riemann surface. Correspondingly, we have $p+q$ inverse functions $v_j(z)$ of (7.9) such that
\[ v_j : \mathcal{R}_j \to \Omega_j, \qquad j = 1, \ldots, p+q, \tag{7.10} \]
is a bijection. We use $v_j(\infty)$ to denote the image under this map of the point at infinity of the $j$th sheet $\mathcal{R}_j$, $j = 1, \ldots, p+q$. Hence $v_j(\infty) \in \Omega_j$.

For $i = 1, \ldots, p+q-$
1, the common boundary of $\Omega_{k(i)}$ and $\Omega_{p+l(i)}$ in the $v$-plane is an analytic curve $C_i$ with a natural partition
\[ C_i = C_{+,i} \cup C_{-,i}, \tag{7.11} \]
where $C_{+,i}$ is the image of the upper side of the cut $[\alpha_i, \beta_i]$ under the mapping $v_{k(i)}$, and $C_{-,i}$ is the image of the lower side. The two parts $C_{\pm,i}$ meet at two points $\gamma_i^{(1)}$ and $\gamma_i^{(2)}$, which are the images of the endpoints $\alpha_i$ and $\beta_i$ of the cut $[\alpha_i, \beta_i]$, respectively.

Define the polynomial
\[ g(v) = \prod_{i=1}^{p+q-1} \bigl( (v - \gamma_i^{(1)})(v - \gamma_i^{(2)}) \bigr), \tag{7.12} \]
and its square root $\sqrt{g(v)}$, which is defined as an analytic function in the $v$-plane, with a cut along the disjoint union of arcs $\bigcup_i C_{+,i}$. We assume $\sqrt{g(v)} \sim v^{p+q-1}$ as $v \to \infty$.

We then construct a global parametrix $P^{(\infty)}(z)$ as in [10]. We define, for $z \in \mathbb{C} \setminus \bigcup_i [\alpha_i, \beta_i]$,
\[ P^{(\infty)}(z) = \bigl( f_i(v_j(z)) \bigr)_{i,j=1}^{p+q}, \qquad f_i(v) = \frac{l_i(v)}{\sqrt{g(v)}}, \tag{7.13} \]
where $l_i$ is the Lagrange interpolation polynomial for the points $v_1(\infty), \ldots, v_{p+q}(\infty)$. That is, $l_i$ is a polynomial of degree $p+q-1$ such that
\[ f_i(v_j(\infty)) = \delta_{i,j}, \qquad j = 1, \ldots, p+q. \]

The fact that $P^{(\infty)}(z)$ in (7.13) satisfies conditions (1) and (3) in RH problem 7.2 is immediate. For the jump condition (2) we need to show that
\[ \left. \begin{aligned} f_i(v_{k(i),+}(x)) &= -f_i(v_{p+l(i),-}(x)) \\ f_i(v_{p+l(i),+}(x)) &= f_i(v_{k(i),-}(x)) \end{aligned} \right\} \qquad x \in [\alpha_i, \beta_i]. \tag{7.14} \]
These relations reduce to
\[ f_{i,+}(v) = -f_{i,-}(v), \qquad \text{for } v \in C_{+,i}, \tag{7.15} \]
\[ f_{i,+}(v) = f_{i,-}(v), \qquad \text{for } v \in C_{-,i}, \tag{7.16} \]
and these jumps follow from (7.13), since we have chosen the square root in $\sqrt{g(v)}$ with a cut along the union of arcs $\bigcup_i C_{+,i}$.

In a small disk around each of the endpoints $\alpha_i$ and $\beta_i$ of the interval $[\alpha_i, \beta_i]$ we construct a local parametrix $P^{(\mathrm{Airy})}(z)$ involving Airy functions.
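The interpolation property $f_i(v_j(\infty)) = \delta_{i,j}$ underlying the construction (7.13) of the global parametrix is easy to verify symbolically. In the sketch below, the points $v_j(\infty)$ and the branch points $\gamma_i^{(1)}, \gamma_i^{(2)}$ are arbitrary made-up numbers with $p+q = 4$ (they are not data computed from the paper's Riemann surface), chosen so that $g$ is positive at the sample points and a single branch of $\sqrt{g}$ suffices:

```python
import sympy as sp

v = sp.symbols('v')
p_plus_q = 4  # size used purely for illustration

# Hypothetical data (NOT from the paper): images v_j(oo) of the points at
# infinity, and branch points gamma_i^(1), gamma_i^(2) of the p+q-1 = 3 cuts.
v_inf = [sp.Integer(-4), sp.Rational(-1, 2), sp.Rational(3, 2), sp.Integer(5)]
gammas = [-3, -2, 0, 1, 2, 4]

# g(v) as in (7.12) and a branch of its square root.
g = sp.prod([v - gm for gm in gammas])
sqrt_g = sp.sqrt(g)

# l_i is the degree p+q-1 Lagrange polynomial with
# l_i(v_j(oo)) = delta_ij * sqrt(g(v_j(oo))), so that f_i = l_i / sqrt(g)
# satisfies f_i(v_j(oo)) = delta_ij, as required by (7.13).
def f_row(i):
    data = [(vj, sp.sqrt(g.subs(v, vj)) if j == i else sp.Integer(0))
            for j, vj in enumerate(v_inf)]
    return sp.interpolate(data, v) / sqrt_g

for i in range(p_plus_q):
    f_i = f_row(i)
    for j, vj in enumerate(v_inf):
        target = 1 if j == i else 0
        assert sp.simplify(f_i.subs(v, vj) - target) == 0
```

The same scheme works for any genus-zero configuration: only the list of sample points and branch points changes.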
Since the RH problem is locally of size $2 \times 2$, this can be done in the standard way [11, 13].

Using the global parametrix $P^{(\infty)}$ of Section 7.2 and the local parametrices $P^{(\mathrm{Airy})}$ of Section 7.3, we define the final transformation $S \mapsto R$ of the RH problem by
\[ R(z) = \begin{cases} S(z)\, (P^{(\mathrm{Airy})})^{-1}(z), & \text{in the disks around } \alpha_i, \beta_i,\ i = 1, \ldots, p+q-1, \\ S(z)\, (P^{(\infty)})^{-1}(z), & \text{elsewhere}. \end{cases} \tag{7.17} \]
From the construction of the parametrices it then follows that $R$ satisfies the following RH problem.

RH problem 7.3.

(1) $R(z)$ is analytic in $\mathbb{C} \setminus \Sigma_R$, where $\Sigma_R$ is the contour shown in Figure 13.

(2) $R$ has jumps $R_+ = R_- J_R$ on $\Sigma_R$, where
\[ J_R(z) = I_{p+q} + O(1/n), \qquad \text{on the boundaries of the disks}, \]
\[ J_R(z) = I_{p+q} + O(e^{-cn(|z|+1)}), \qquad \text{on the other parts of } \Sigma_R, \]
for some constant $c > 0$.

Figure 13: The figure shows the contours in the RH problem for the final matrix $R(z)$ for the example of Figures 4 and 9.

(3) $R(z) = I_{p+q} + O(1/z)$ as $z \to \infty$.

As $n \to \infty$, the jump matrix $J_R$ tends to the identity matrix both in $L^\infty(\Sigma_R)$ and in $L^2(\Sigma_R)$. Then as in [11, 13, 14] we may conclude that
\[ R(z) = I_{p+q} + O\left( \frac{1}{n(|z|+1)} \right) \tag{7.18} \]
as $n \to \infty$, uniformly for $z$ in the complex plane. This completes the RH steepest descent analysis.

Now we are ready to prove the main Theorem 2.4 by unfolding the transformations of the RH steepest descent analysis. Compare with the proofs in the earlier papers [5, 6, 10].

For finite $n$ we define the function $\rho^{(n)}$ as
\[ \rho^{(n)}(x) = \frac{1}{\pi} \operatorname{Im} \xi^{(n)}_{k(i),+}(x), \qquad x \in [\alpha_i^{(n)}, \beta_i^{(n)}], \quad i = 1, \ldots, p+q-1, \tag{8.1} \]
\[ \rho^{(n)}(x) = 0, \qquad x \in \mathbb{R} \setminus \bigcup_{i=1}^{p+q-1} [\alpha_i^{(n)}, \beta_i^{(n)}]. \]
(8.2)

Here we write $\xi^{(n)}_{k(i),+}(x)$ and $\alpha_i^{(n)}, \beta_i^{(n)}$ to emphasize the $n$-dependence. We recall that
\[ \operatorname{Im} \xi^{(n)}_{k(i),+} = -\operatorname{Im} \xi^{(n)}_{k(i),-} = -\operatorname{Im} \xi^{(n)}_{p+l(i),+} = \operatorname{Im} \xi^{(n)}_{p+l(i),-} \]
on the interval $[\alpha_i^{(n)}, \beta_i^{(n)}]$, so one has in fact several equivalent ways of expressing (8.1).

By (4.8), (8.1) and the Stieltjes-Perron inversion formula one has that
\[ \rho^{(n)}(x) = \frac{d\mu_i^{(n)}(x)}{dx}, \qquad x \in [\alpha_i^{(n)}, \beta_i^{(n)}]. \]
As $n \to \infty$, we have that $\alpha_i^{(n)} \to \alpha_i$, $\beta_i^{(n)} \to \beta_i$ and
\[ \lim_{n \to \infty} \rho^{(n)}(x) = \rho_i(x), \qquad x \in (\alpha_i, \beta_i), \tag{8.3} \]
where
\[ \rho_i(x) = \frac{d\mu_i(x)}{dx}, \qquad x \in [\alpha_i, \beta_i], \]
is the density of the $i$th component $\mu_i$ of the minimizer $(\mu_1, \ldots, \mu_{p+q-1})$ of the vector equilibrium problem with transition numbers $(t_{k,l})$.

Now we show that the $\rho_i$ indeed give the limiting distribution of the non-intersecting Brownian motions. To this end we will use (2.37). We start with the expression for the correlation kernel (2.36), which we restate here for convenience:
\[ K_n(x,y) = \frac{1}{2\pi i (x-y)} \begin{pmatrix} 0 & \cdots & 0 & w_{2,1}(y) & \cdots & w_{2,q}(y) \end{pmatrix} Y_+^{-1}(y)\, Y_+(x) \begin{pmatrix} w_{1,1}(x) \\ \vdots \\ w_{1,p}(x) \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \]
From the first transformation $Y \mapsto X$ in (5.1)--(5.3) we get (we do not explicitly write the $n$-dependence in the $\lambda$-functions)
\[ K_n(x,y) = \frac{1}{2\pi i (x-y)} \begin{pmatrix} 0 & \cdots & 0 & e^{n\lambda_{p+1,+}(y)} & \cdots & e^{n\lambda_{p+q,+}(y)} \end{pmatrix} X_+^{-1}(y)\, X_+(x) \begin{pmatrix} e^{-n\lambda_{1,+}(x)} \\ \vdots \\ e^{-n\lambda_{p,+}(x)} \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \]
From the second transformation $X \mapsto T$ we obtain, for $x, y \in (\alpha_i^{(n)}, \beta_i^{(n)})$,
\[ K_n(x,y) = \frac{1}{2\pi i (x-y)} \left( e^{n\lambda_{p+l(i),+}(y)} e_{p+l(i)}^T \right) T_+^{-1}(y)\, T_+(x) \left( e^{-n\lambda_{k(i),+}(x)} e_{k(i)} \right). \]
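The passage from the kernel to the density below hinges on an elementary calculus step: for a smooth phase $F$, one has $\sin(n(F(x)-F(y)))/(\pi(x-y)) \to (n/\pi) F'(x)$ as $y \to x$, which is how the factor $\frac{n}{\pi}\operatorname{Im}\xi^{(n)}_{k(i),+}(x)$ will arise via l'H\^opital's rule. A quick numerical sketch, where the phase $F$, the point $x$ and the value of $n$ are made up (standing in for $\operatorname{Im}\lambda_{k(i),+}$, whose derivative stands in for $\operatorname{Im}\xi^{(n)}_{k(i),+}$):

```python
import math

# Hypothetical smooth phase F and its derivative dF (made-up stand-ins).
F = lambda x: x - x**3 / 6
dF = lambda x: 1 - x**2 / 2

n, x = 50, 0.3
target = (n / math.pi) * dF(x)   # the l'Hopital limit (n/pi) F'(x)
for eps in (1e-2, 1e-4, 1e-6):
    y = x + eps
    kernel = math.sin(n * (F(x) - F(y))) / (math.pi * (x - y))
    print(eps, kernel, target)   # kernel approaches target as eps -> 0
```

The convergence is linear in $|x-y|$, which is consistent with the $O(1)$ error term uniform in $n$ that appears in the kernel asymptotics.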
From the third transformation $T \mapsto S$ in (7.1) we get
\[ K_n(x,y) = \frac{1}{2\pi i (x-y)} \left( -e^{n\lambda_{k(i),+}(y)} e_{k(i)}^T + e^{n\lambda_{p+l(i),+}(y)} e_{p+l(i)}^T \right) S_+^{-1}(y)\, S_+(x) \left( e^{-n\lambda_{k(i),+}(x)} e_{k(i)} + e^{-n\lambda_{p+l(i),+}(x)} e_{p+l(i)} \right), \tag{8.4} \]
for $x, y \in (\alpha_i^{(n)}, \beta_i^{(n)})$. Defining the function $h_n$ by
\[ h_n(x) := -\operatorname{Re}(\lambda_{k(i),+}(x)) = -\operatorname{Re}(\lambda_{p+l(i),+}(x)), \qquad x \in [\alpha_i^{(n)}, \beta_i^{(n)}], \tag{8.5} \]
we see that (8.4) can be rewritten as
\[ K_n(x,y) = \frac{e^{n(h_n(x)-h_n(y))}}{2\pi i (x-y)} \left( -e^{n i \operatorname{Im}(\lambda_{k(i),+}(y))} e_{k(i)}^T + e^{-n i \operatorname{Im}(\lambda_{k(i),+}(y))} e_{p+l(i)}^T \right) S_+^{-1}(y)\, S_+(x) \left( e^{-n i \operatorname{Im}(\lambda_{k(i),+}(x))} e_{k(i)} + e^{n i \operatorname{Im}(\lambda_{k(i),+}(x))} e_{p+l(i)} \right), \tag{8.6} \]
for $x, y \in (\alpha_i^{(n)}, \beta_i^{(n)})$.

Now from (7.18) it follows by standard arguments (e.g. [5, Section 9]) that
\[ S_+^{-1}(y)\, S_+(x) = I + O(x-y), \]
as $y \to x$, uniformly in $n$. Hence (8.6) takes the form
\[ K_n(x,y) = e^{n(h_n(x)-h_n(y))} \left( \frac{\sin\bigl(n \operatorname{Im}(\lambda_{k(i),+}(x) - \lambda_{k(i),+}(y))\bigr)}{\pi(x-y)} + O(1) \right), \tag{8.7} \]
for $x, y \in (\alpha_i, \beta_i)$, where the $O(1)$ term holds uniformly in $n$. Then by letting $y \to x$ and using l'H\^opital's rule we find
\[ K_n(x,x) = \frac{n}{\pi} \operatorname{Im}\bigl(\xi^{(n)}_{k(i),+}(x)\bigr) + O(1), \]
for $x \in (\alpha_i, \beta_i)$, or equivalently
\[ K_n(x,x) = n\, \rho^{(n)}(x) + O(1), \]
by virtue of (8.1). It follows from (8.3) that
\[ \lim_{n \to \infty} \frac{1}{n} K_n(x,x) = \rho_i(x), \qquad x \in (\alpha_i, \beta_i). \tag{8.8} \]
In a similar way one can prove that
\[ \lim_{n \to \infty} \frac{1}{n} K_n(x,x) = 0, \qquad x \in \mathbb{R} \setminus \bigcup_i [\alpha_i, \beta_i]. \tag{8.9} \]
This completes the proof of Theorem 2.4.

References

[1] M. Adler, P.L. Ferrari, and P. van Moerbeke, Airy processes with wanderers and new universality classes, preprint arXiv:0811.1863 [math.PR].

[2] A.I. Aptekarev, Multiple orthogonal polynomials, J. Comput. Appl. Math. 99 (1998), 423--447.

[3] A.I. Aptekarev, P.M. Bleher, and A.B.J.
Kuijlaars, Large $n$ limit of Gaussian random matrices with external source, part II, Comm. Math. Phys. 259 (2005), 367--389.

[4] P.M. Bleher and A.B.J. Kuijlaars, Random matrices with external source and multiple orthogonal polynomials, Int. Math. Res. Not. 2004, no. 3 (2004), 109--129.

[5] P.M. Bleher and A.B.J. Kuijlaars, Large $n$ limit of Gaussian random matrices with external source, part I, Comm. Math. Phys. 252 (2004), 43--76.

[6] P.M. Bleher and A.B.J. Kuijlaars, Large $n$ limit of Gaussian random matrices with external source, part III: double scaling limit, Comm. Math. Phys. 270 (2007), 481--517.

[7] A. Borodin, Biorthogonal ensembles, Nucl. Phys. B 536 (1999), 704--732.

[8] T. Claeys and A.B.J. Kuijlaars, Universality of the double scaling limit in random matrix models, Comm. Pure Appl. Math. 59 (2006), 1573--1603.

[9] E. Daems and A.B.J. Kuijlaars, Multiple orthogonal polynomials of mixed type and non-intersecting Brownian motions, J. Approx. Theory 146 (2007), 91--114.

[10] E. Daems, A.B.J. Kuijlaars, and W. Veys, Asymptotics of non-intersecting Brownian motions and a $4 \times 4$