A MULTI-DIMENSIONAL MARKOV CHAIN AND THE MEIXNER ENSEMBLE
KURT JOHANSSON
Abstract. We show that the transition probability of the Markov chain $(G(j,1),\dots,G(j,n))_{j\ge 1}$, where the $G(i,j)$'s are certain directed last-passage times, is given by a determinant of a special form. An analogous formula has recently been obtained by Warren in a Brownian motion model. Furthermore we demonstrate that this formula leads to the Meixner ensemble when we compute the distribution function for $G(m,n)$. We also obtain the Fredholm determinant representation of this distribution, where the kernel has a double contour integral representation.

1. Introduction
The starting point for the present paper are some nice results from the interesting paper [18] by J. Warren. Let $B_k(t)$, $1\le k\le n$, be independent Brownian motions started at the origin and define $X_k(t)$, $k=1,\dots,n$, recursively by

(1.1)  $X_k(t) = \sup_{0\le s\le t}\big(X_{k-1}(s) + B_k(t) - B_k(s)\big)$,  $t\ge 0$,

where $X_0 \equiv 0$. The multi-dimensional Markov process $X(t) = (X_1(t),\dots,X_n(t))$ at a fixed time is closely related to the largest eigenvalues of successive principal submatrices of a GUE matrix. In fact, let $H = (h_{ij})_{1\le i,j\le n}$ be an $n\times n$ GUE matrix, i.e. distributed according to the probability measure $Z_n^{-1}\exp(-\operatorname{Tr}H^2)\,dH$ on the space of $n\times n$ Hermitian matrices, and let $H_k = (h_{ij})_{1\le i,j\le k}$, $1\le k\le n$, be the principal submatrices. Then, if $\lambda_{\max}(M)$ denotes the largest eigenvalue of the Hermitian matrix $M$, we have $X(1/2) = (\lambda_{\max}(H_1),\dots,\lambda_{\max}(H_n))$ in distribution, [1], [6], [18]. Furthermore, there is a nice formula for the transition function of the Markov process $X(t)$, [18],

(1.2)  $P[X(t) = y \mid X(s) = x] = \det\big(D^{j-i}\varphi_{t-s}(y_j - x_i)\big)_{1\le i,j\le n}$,

if $x_1\le\dots\le x_n$, $y_1\le\dots\le y_n$. Here $D$ denotes ordinary differentiation, $D^{-1}$ is anti-derivation,

(1.3)  $D^{-k}f(x) = \int_{-\infty}^{x}\frac{(x-y)^{k-1}}{(k-1)!}\,f(y)\,dy$,

and $\varphi_t(x) = (2\pi t)^{-1/2}\exp(-x^2/2t)$ is the transition density for Brownian motion. Let $F_{\mathrm{GUE}(n)}(\eta)$ be the distribution function for the largest eigenvalue of an $n\times n$ Hermitian matrix $H$ from GUE. It follows easily from (1.2) that

(1.4)  $F_{\mathrm{GUE}(n)}(\eta) = \det\big(D^{j-i-1}\varphi_{1/2}(\eta)\big)_{1\le i,j\le n}$.

Supported by the Göran Gustafsson Foundation (KVA), the ESF-network MISGAM and the EU network ENIGMA.
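The $n = 2$ case of the identity between (1.4) and the GUE eigenvalue integral (proposition 2.3 below) can be spot-checked numerically: every determinant entry and the eigenvalue-measure side reduce to the truncated Gaussian moments $m_k = \int_{-\infty}^{\eta} x^k e^{-x^2}\,dx$, which have closed forms via the error function. A minimal sketch, not from the paper (the helper names `m0`, `m1`, `m2` are mine; $m_1$, $m_2$ follow from integration by parts):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def phi(x):
    # transition density at time 1/2: phi_{1/2}(x) = e^{-x^2} / sqrt(pi)
    return math.exp(-x * x) / SQRT_PI

# truncated Gaussian moments m_k = \int_{-inf}^{eta} x^k e^{-x^2} dx
def m0(e):
    return SQRT_PI * (1.0 + math.erf(e)) / 2.0

def m1(e):
    return -math.exp(-e * e) / 2.0

def m2(e):
    return -e * math.exp(-e * e) / 2.0 + m0(e) / 2.0

for eta in (-1.0, 0.0, 0.7, 2.0):
    # left side of (2.4) for n = 2: det [[D^{-1}phi, phi], [D^{-2}phi, D^{-1}phi]] at eta
    d1 = m0(eta) / SQRT_PI                    # D^{-1} phi_{1/2}(eta)
    d2 = (eta * m0(eta) - m1(eta)) / SQRT_PI  # D^{-2} phi_{1/2}(eta) = \int (eta - x) phi(x) dx
    lhs = d1 * d1 - phi(eta) * d2
    # right side: (1/Z_2) \int\int_{x,y <= eta} (x-y)^2 e^{-x^2 - y^2} dx dy with Z_2 = pi;
    # expanding (x-y)^2 gives 2 (m0 m2 - m1^2)
    rhs = 2.0 * (m0(eta) * m2(eta) - m1(eta) ** 2) / math.pi
    assert abs(lhs - rhs) < 1e-12
```

The check passes exactly (up to rounding) because both sides reduce to the same polynomial in $m_0$, $m_1$, $m_2$.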
This formula, given in [18], can also be obtained directly from the GUE eigenvalue measure, see proposition 2.3 below.

We will show in this paper, starting from definitions, that we have analogous formulas for the vector $G(i) = (G(i,1),\dots,G(i,n))$, $i\ge 1$. Let $w(i,j)$, $(i,j)\in\mathbb{Z}_+^2$, be independent geometric random variables with parameter $q$, $0<q<1$,

(1.5)  $P[w(i,j) = k] = (1-q)q^k$,  $k\ge 0$,

and define

(1.6)  $G(m,n) = \max_{\pi}\sum_{(i,j)\in\pi} w(i,j)$,

where the maximum is over all up/right paths $\pi$ from $(1,1)$ to $(m,n)$. It is clear that these random variables satisfy the recursion relation

(1.7)  $G(m,n) = \max\big(G(m-1,n),\,G(m,n-1)\big) + w(m,n)$,

where $G(0,n) = G(n,0) = 0$ for $n\ge 1$. If we use this recursion relation repeatedly we see that

$G(m,n) = \max_{1\le j\le m}\Big(G(j,n-1) + \sum_{i=1}^{m} w(i,n) - \sum_{i=1}^{j-1} w(i,n)\Big)$,

which looks like a discrete version of (1.1). From this it is reasonable to expect that there should be a formula for the transition function for the Markov chain $(G(i))_{i\ge 1}$ similar to (1.2). This is indeed the case in a very natural way where the differentiation operator is replaced by a finite difference operator, see theorem 2.1. This will imply a formula similar to (1.4) for the distribution function of $G(m,n)$, see theorem 2.2. We will also show how we can go from this formula (2.3) to the known expressions [14], [9], for this distribution function in terms of the Meixner ensemble, [7], and as a Fredholm determinant with a double contour integral expression for the kernel, [14], [9]. The Fredholm determinant formula has the advantage that it is much better suited for computation of asymptotics. The argument in this paper leading to (2.8) gives an alternative approach, starting from the definitions, to this formula.

Results related to the transition probability (2.2) for $(G(i))_{i\ge 1}$, but in the case of $w(i,j)$ exponentially distributed, go back to the work of G. Schütz, [16], where the totally asymmetric exclusion process (TASEP) is studied using the Bethe ansatz, see e.g. [7] for a discussion of the relation to $G(m,n)$. This is not exactly the same Markov chain, but the results of Schütz can also be used to derive the expression for the distribution function for $G(m,n)$ in terms of the Laguerre ensemble, see [15] and also [13]. In [15] the case of geometric random variables is also considered and the formula for the distribution function for $G(m,n)$ in terms of the Meixner ensemble derived. This is based on results from [4] on a discrete TASEP-type model. More general formulas for the asymmetric exclusion process (ASEP) have been proved recently in [17]. Results generalizing the formula (1.2) to discrete models have also been given independently in [5].

2. Results
For $x\in\mathbb{Z}$ we define $w(x) = (1-q)q^x H(x)$, $0<q<1$, where $H$ is the Heaviside function, $H(x) = 0$ if $x<0$, $H(x) = 1$ if $x\ge 0$. The $m$-fold convolution of $w$ with itself is then the negative binomial distribution,

(2.1)  $w_m(x) = (1-q)^m\binom{x+m-1}{x}q^x H(x)$,

as is not difficult to see using generating functions. For a function $f:\mathbb{Z}\to\mathbb{C}$ we denote by $\Delta$ the usual finite difference operator, $\Delta f(x) = f(x+1) - f(x)$. We also set

$(\Delta^{-1}f)(x) = \sum_{y=-\infty}^{x-1} f(y)$,

provided the sum is convergent, and $\Delta^0 f = f$. Note that $\Delta(\Delta^{-1}f) = \Delta^{-1}(\Delta f) = f$. Let $W_n = \{x\in\mathbb{Z}^n\,;\,x_1\le\dots\le x_n\}$, and let $G(i)$, $i\ge 0$, be the Markov chain defined in the introduction.
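The last-passage times are straightforward to simulate; the following sketch (function names are mine) checks the recursion (1.7) against the direct path maximum (1.6) on a sampled weight array, and that each vector $G(i) = (G(i,1),\dots,G(i,n))$ indeed lies in $W_n$:

```python
import math
import random

def lpp_grid(w, m, n):
    # G(i,j) computed from the recursion (1.7), with G(0,j) = G(i,0) = 0
    G = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i][j] = max(G[i - 1][j], G[i][j - 1]) + w[(i, j)]
    return G

def lpp_brute(w, m, n):
    # G(m,n) as in (1.6): maximize the weight sum over all up/right paths
    best = 0
    def walk(i, j, s):
        nonlocal best
        s += w[(i, j)]
        if (i, j) == (m, n):
            best = max(best, s)
            return
        if i < m:
            walk(i + 1, j, s)
        if j < n:
            walk(i, j + 1, s)
    walk(1, 1, 0)
    return best

rng = random.Random(1)
q = 0.4
m = n = 5
# geometric weights P[w(i,j) = k] = (1-q) q^k, sampled by inversion of the tail
w = {(i, j): int(math.log(1.0 - rng.random()) / math.log(q))
     for i in range(1, m + 1) for j in range(1, n + 1)}

G = lpp_grid(w, m, n)
assert G[m][n] == lpp_brute(w, m, n)
# each row vector (G(i,1),...,G(i,n)) is weakly increasing, i.e. lies in W_n
for i in range(1, m + 1):
    assert all(G[i][j] <= G[i][j + 1] for j in range(1, n))
```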
Theorem 2.1. If $x,y\in W_n$, then for $m\ge\ell$,

(2.2)  $P[G(m) = y \mid G(\ell) = x] = \det\big(\Delta^{j-i} w_{m-\ell}(y_j - x_i)\big)_{1\le i,j\le n}$.

We postpone the proof to section 3. The proof first shows (2.2) in the case $m = \ell+1$ using (1.7) and an induction argument in $n$, and then establishes a convolution formula for the determinants involved using the generalized Cauchy–Binet identity (2.5). Taking $G(0) = 0$ it is not difficult to show, see section 3, that we have the following consequence of (2.2), which is analogous to (1.4).

Theorem 2.2.
For any $\eta\in\mathbb{N}$, $m\ge n\ge 1$,

(2.3)  $P[G(m,n)\le\eta] = \det\big(\Delta^{j-i-1} w_m(\eta+1)\big)_{1\le i,j\le n}$.

As stated in the introduction it is possible to relate the expression in the right hand side of (1.4) directly to the expression for the distribution function coming from the GUE eigenvalue measure. Let

$\Delta_n(x) = \det\big(x_j^{i-1}\big)_{1\le i,j\le n}$

be the Vandermonde determinant.

Proposition 2.3.
We have the following identity for any $\eta\in\mathbb{R}$, $n\ge 1$,

(2.4)  $\det\big(D^{j-i-1}\varphi_{1/2}(\eta)\big)_{1\le i,j\le n} = \frac{1}{Z_n}\int_{(-\infty,\eta]^n}\Delta_n(x)^2\prod_{j=1}^{n} e^{-x_j^2}\,d^n x$,

where $Z_n$ is the appropriate normalization constant.

Proof. Let $H_j$, $j\ge 0$, be the standard Hermite polynomials. Then $H_j(x) = 2^j p_j(x)$, where $p_j$ is a monic polynomial, and we have Rodrigues' formula $D^j e^{-x^2} = (-1)^j H_j(x)e^{-x^2}$. Hence,

$D^{j-i-1}\varphi_{1/2}(\eta) = D^{-i}\big(D^{j-1}\varphi_{1/2}\big)(\eta) = \int_{-\infty}^{\eta}\frac{(\eta-x)^{i-1}}{(i-1)!}\,D^{j-1}\varphi_{1/2}(x)\,dx = \frac{2^{j-1}(-1)^{i+j}}{(i-1)!\sqrt{\pi}}\int_{-\infty}^{\eta}(x-\eta)^{i-1} p_{j-1}(x)e^{-x^2}\,dx$.

Using row and column operations we obtain

$\det\big(D^{j-i-1}\varphi_{1/2}(\eta)\big)_{1\le i,j\le n} = \prod_{j=0}^{n-1}\frac{2^j}{j!\sqrt{\pi}}\,\det\Big(\int_{-\infty}^{\eta} x^{i-1}x^{j-1}e^{-x^2}\,dx\Big)_{1\le i,j\le n} = \frac{1}{Z_n}\int_{(-\infty,\eta]^n}\Delta_n(x)^2\prod_{j=1}^{n} e^{-x_j^2}\,d^n x$,

with $Z_n = 2^{-n(n-1)/2}\pi^{n/2}\,n!\prod_{j=0}^{n-1} j!$. In the last equality we have used the generalized Cauchy–Binet identity,

(2.5)  $\det\Big(\int_X\phi_i(x)\psi_j(x)\,d\mu(x)\Big) = \frac{1}{n!}\int_{X^n}\det\big(\phi_i(x_j)\big)\det\big(\psi_i(x_j)\big)\prod_{j=1}^{n} d\mu(x_j)$,

where all determinants are $n\times n$. □

We have a similar identity relating the right hand side of (2.3) to the Meixner ensemble. The proof is a little more involved and we postpone it to section 3.
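The generalized Cauchy–Binet identity (2.5) holds equally with the integral replaced by a sum, which is the form used in section 3; it can be verified in exact rational arithmetic on a finite set with counting measure. A sketch for $n = 3$ (the choice of $\phi_i$, $\psi_j$ is arbitrary):

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

n = 3
X = range(5)  # finite "integration" domain, counting measure

def phi(i, x):  # phi_i(x) = x^{i-1}
    return Fraction(x) ** (i - 1)

def psi(j, x):  # psi_j(x) = (x+2)^{j-1}
    return Fraction(x + 2) ** (j - 1)

def sign(p):
    # sign of a permutation via inversion count
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det(M):
    total = Fraction(0)
    for p in permutations(range(len(M))):
        term = Fraction(sign(p))
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

lhs = det([[sum(phi(i, x) * psi(j, x) for x in X)
            for j in range(1, n + 1)] for i in range(1, n + 1)])
rhs = sum(det([[phi(i, x[j]) for j in range(n)] for i in range(1, n + 1)])
          * det([[psi(i, x[j]) for j in range(n)] for i in range(1, n + 1)])
          for x in product(X, repeat=n)) / factorial(n)
assert lhs == rhs
```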
Proposition 2.4. For any $\eta\in\mathbb{N}$, $m\ge n\ge 1$,

(2.6)  $\det\big(\Delta^{j-i-1} w_m(\eta+1)\big)_{1\le i,j\le n} = \frac{1}{Z_{m,n}}\sum_{0\le x_i\le\eta+n-1}\Delta_n(x)^2\prod_{j=1}^{n}\binom{x_j+m-n}{x_j}q^{x_j}$.

For asymptotic analysis it is more useful to have a representation of the distribution function for $G(m,n)$ as a Fredholm determinant with an appropriate kernel. It is possible to go to such a formula using Meixner polynomials and a standard random matrix theory computation as was done in [7]. There is also another formula for the kernel as a double contour integral which can be obtained from the Schur measure, see [14], [9] or [10]. It is actually possible to go directly to a Fredholm determinant formula with a double contour integral formula for the kernel starting from the expression in the right hand side of (2.3).

Let $\gamma_r$ denote a circle centered at the origin with radius $r>0$. Let $1 < r_1 < r_2 < 1/q$ and let $K_{m,n}$ be the kernel defined by the double contour integral formula (2.7) over $\gamma_{r_1}$ and $\gamma_{r_2}$.

Proposition 2.5. For any integer $\eta\ge 0$, $m\ge n\ge 1$,

(2.8)  $P[G(m,n)\le\eta] = \det(I - K_{m,n})_{\ell^2(\{\eta+1,\eta+2,\dots\})}$.

The proof will be given in the next section.

Remark 2.6. It would be interesting to understand the joint distribution of $G(m_i,n_i)$, $i=1,\dots,p$, in order to understand the fluctuations of the "last-passage times surface", $\mathbb{Z}_+^2\ni(m,n)\mapsto G(m,n)$. If $(m_1,n_1),\dots,(m_p,n_p)$ form a right/down path then the joint distribution of $G(m_1,n_1),\dots,G(m_p,n_p)$ can be expressed as a Fredholm determinant, see [10], [3], and it is possible to investigate the asymptotic fluctuations. However, the asymptotic correlation between for example $G(m,m)$ and $G(n,n)$, $m<n$, is not known and there is no nice expression for their joint distribution. Using (2.2) we can write down an expression for their joint distribution, which was one of the motivations for the present work. We have

(2.9)  $P[G(m,m)\le\eta_1,\ G(n,n)\le\eta_2] = \sum_{x\in W_n,\,x_m\le\eta_1}\ \sum_{y\in W_n,\,y_n\le\eta_2}\det\big(\Delta^{j-i}w_m(x_j)\big)_{1\le i,j\le n}\det\big(\Delta^{j-i}w_{n-m}(y_j-x_i)\big)_{1\le i,j\le n}$.

However, we have not been able to rewrite this in a form useful for asymptotic computations.

Remark 2.7. The case when the $w(i,j)$'s are exponential random variables can be treated in a completely analogous way or by taking the appropriate limit of the formulas above, $q = 1-\alpha/L$, $G(m,n)\to G(m,n)/L$ and $L\to\infty$.

Remark 2.8. Random permutations can be obtained as a limit of the above model, $q = \alpha/n^2$, $n\to\infty$, [8]. The random variable $G(n,n)$ then converges to $L(\alpha)$, the Poissonized version of the length $\ell_N$ of a longest increasing subsequence of a random permutation from $S_N$. We can take this limit in the formulas (2.7) and (2.8) and this leads to a formula for $P[L(\alpha)\le\eta]$ as a Fredholm determinant involving the discrete Bessel kernel, [8], [2].
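Theorem 2.2 and proposition 2.4 can be verified exactly for small parameters with rational arithmetic: the determinant in (2.3) should agree with summing the transition determinant (2.2), started from $G(0)=0$, over $\{y\in W_n: y_n\le\eta\}$, and its ratio to the Meixner sum in (2.6) should not depend on $\eta$. A sketch for $m=3$, $n=2$ (helper names are mine):

```python
from fractions import Fraction
from math import comb

q = Fraction(1, 3)
m, n = 3, 2
LO = -6  # w_m vanishes below 0, so the Delta^{-1} sums may safely start at LO

def w_m(x):
    # negative binomial weight (2.1)
    return (1 - q) ** m * comb(x + m - 1, x) * q ** x if x >= 0 else Fraction(0)

def dpow(f, k):
    # Delta^k f; for k < 0 iterate the partial sums (Delta^{-1} f)(x) = sum_{y < x} f(y)
    if k == 0:
        return f
    if k > 0:
        g = dpow(f, k - 1)
        return lambda x: g(x + 1) - g(x)
    g = dpow(f, k + 1)
    return lambda x: sum(g(y) for y in range(LO, x))

def det2(a, b, c, d):
    return a * d - b * c

def lhs(eta):  # right hand side of (2.3) for n = 2
    D = {k: dpow(w_m, k) for k in range(-n, n)}
    return det2(D[-1](eta + 1), D[0](eta + 1), D[-2](eta + 1), D[-1](eta + 1))

def by_transition(eta):  # sum of (2.2) with x = 0 over y in W_2 with y_2 <= eta
    D = {k: dpow(w_m, k) for k in range(1 - n, n)}
    return sum(det2(D[0](y1), D[1](y2), D[-1](y1), D[0](y2))
               for y1 in range(-2, eta + 1) for y2 in range(y1, eta + 1))

def meixner(eta):  # the sum on the right of (2.6) for n = 2
    return sum((x2 - x1) ** 2
               * comb(x1 + m - n, x1) * q ** x1
               * comb(x2 + m - n, x2) * q ** x2
               for x1 in range(eta + n) for x2 in range(eta + n))

assert lhs(3) == by_transition(3)
# det / Meixner-sum is the constant 1/Z_{m,n}, independent of eta
assert lhs(2) * meixner(4) == lhs(4) * meixner(2)
```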
Hence, we obtain a new proof of this result which does not involve some form of the RSK-correspondence.

3. Proofs

Proofs of theorems 2.1 and 2.2. To prove theorem 2.1 we first consider the case $m-\ell=1$. The transition function from $G(\ell)$ to $G(\ell+1)$ is, by (1.7),

(3.1)  $P[G(\ell+1) = y \mid G(\ell) = x] = \prod_{k=1}^{n} w\big(y_k - \max(x_k, y_{k-1})\big)$,

where we have set $y_0 = 0$ and $x,y\in W_n$. Note also that it is clear from (1.7) that $(G(i))_{i\ge 0}$ is a Markov chain. The right hand side can be written as a determinant by the following lemma.

Lemma 3.1. If $x,y\in W_n$, then

(3.2)  $\prod_{k=1}^{n} w\big(y_k - \max(x_k, y_{k-1})\big) = \det\big(\Delta^{j-i}w(y_j - x_i)\big)_{1\le i,j\le n}$.

Proof. We use induction with respect to $n$. The claim is trivial for $n=1$. Assume that it is true up to $n-1$. Expand the determinant in (3.2) along the last row,

(3.3)  $\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i,j\le n} = \sum_{k=1}^{n-2}(-1)^{k+n}\Delta^{k-n}w(y_k-x_n)\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne k} - \Delta^{-1}w(y_{n-1}-x_n)\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne n-1} + w(y_n-x_n)\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i,j\le n-1}$.

We will first show that each term in the sum from $k=1$ to $n-2$ is zero. Let $\Delta_{y_k}$ denote the difference operator with respect to the variable $y_k$. The fact that $\Delta(\Delta^{-1}w) = w$ then gives

$\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne k} = \Delta_{y_{k+1}}\dots\Delta_{y_n}\det\big(\Delta^{j-i}w(\tilde y_j-x_i)\big)_{1\le i,j\le n-1}$,

where $\tilde y_j = y_j$ if $1\le j\le k-1$, $\tilde y_j = y_{j+1}$ if $k\le j\le n-1$. By the induction assumption this equals

$\Delta_{y_{k+1}}\dots\Delta_{y_n}\prod_{j=1}^{k-1}w\big(y_j-\max(x_j,y_{j-1})\big)\,w\big(y_{k+1}-\max(x_k,y_{k-1})\big)\prod_{j=k+1}^{n-1}w\big(y_{j+1}-\max(x_j,y_j)\big)$.

If $y_k < x_n$, then $\Delta^{k-n}w(y_k-x_n) = 0$ since $\Delta^{-j}w(x) = 0$ if $x<j$. In this case the $k$'th term in the sum in (3.3) is zero. Assume that $y_k\ge x_n$. Then $y_n\ge\dots\ge y_k\ge x_n$ and we obtain

$\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne k} = \prod_{j=1}^{k-1}w\big(y_j-\max(x_j,y_{j-1})\big)\,(1-q)q^{-\max(x_k,y_{k-1})}\,\Delta_{y_{k+1}}\dots\Delta_{y_n}\,q^{y_{k+1}}\prod_{j=k+1}^{n-1}(1-q)q^{y_{j+1}-y_j}$.

But

$\Delta_{y_{k+1}}\dots\Delta_{y_n}\,q^{y_{k+1}}\prod_{j=k+1}^{n-1}(1-q)q^{y_{j+1}-y_j} = (1-q)^{n-k-1}\,\Delta_{y_{k+1}}\dots\Delta_{y_n}\,q^{y_n} = 0$

since $k+1<n$. Hence, each term in the sum from $k=1$ to $n-2$ vanishes and

(3.4)  $\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i,j\le n} = -\Delta^{-1}w(y_{n-1}-x_n)\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne n-1} + w(y_n-x_n)\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i,j\le n-1}$.

If $y_{n-1}<x_n$, then $\Delta^{-1}w(y_{n-1}-x_n)=0$ and the right hand side of (3.4) is

$w(y_n-x_n)\prod_{j=1}^{n-1}w\big(y_j-\max(x_j,y_{j-1})\big)$,

by the induction assumption. Since $y_{n-1}<x_n$, $w(y_n-x_n) = w\big(y_n-\max(x_n,y_{n-1})\big)$ and we get exactly the right hand side of (3.2).

Assume now that $y_{n-1}\ge x_n$. Note that $\Delta^{j}w(x) = (q-1)^{j}w(x)$ if $j\ge 0$, $x\ge 0$. The determinant

$\det\big(\Delta^{j-i}w(y_j-x_i)\big)_{1\le i\le n-1,\,j\ne n-1}$

has $\Delta^{n-i}w(y_n-x_i) = (q-1)\Delta^{n-1-i}w(y_n-x_i)$, $1\le i<n$, in the last column, since $y_n\ge y_{n-1}\ge x_n\ge\dots\ge x_1$. By the induction assumption this determinant equals

$(q-1)\prod_{j=1}^{n-2}w\big(y_j-\max(x_j,y_{j-1})\big)\,w\big(y_n-\max(x_{n-1},y_{n-2})\big)$.

Thus, the right hand side of (3.4) equals

(3.5)  $-(1-q)\big(q^{y_{n-1}-x_n}-1\big)\prod_{j=1}^{n-2}w\big(y_j-\max(x_j,y_{j-1})\big)\,w\big(y_n-\max(x_{n-1},y_{n-2})\big) + w(y_n-x_n)\prod_{j=1}^{n-1}w\big(y_j-\max(x_j,y_{j-1})\big)$,

where we have used the fact that $(\Delta^{-1}w)(y_{n-1}-x_n)(q-1) = (1-q)\big(q^{y_{n-1}-x_n}-1\big)$ since $y_{n-1}-x_n\ge 0$. Since $y_n\ge y_{n-1}\ge x_n\ge\dots\ge x_1$, the expression in (3.5) can be written

$(1-q)^2\Big[-\big(q^{y_{n-1}-x_n}-1\big)q^{y_n-\max(x_{n-1},y_{n-2})} + q^{y_n-x_n}q^{y_{n-1}-\max(x_{n-1},y_{n-2})}\Big]\prod_{j=1}^{n-2}w\big(y_j-\max(x_j,y_{j-1})\big) = (1-q)q^{y_n-y_{n-1}}\prod_{j=1}^{n-1}w\big(y_j-\max(x_j,y_{j-1})\big) = \prod_{j=1}^{n}w\big(y_j-\max(x_j,y_{j-1})\big)$. □

Theorem 2.1 follows from lemma 3.1 and the following convolution type formula for determinants of the form we have.

Lemma 3.2.
Assume that $f,g:\mathbb{Z}\to\mathbb{C}$ are such that $f(x) = g(x) = 0$ if $x<M$ for some $M\in\mathbb{Z}$. Then,

(3.6)  $\sum_{y\in W_n}\det\big(\Delta^{j-i}f(y_j-x_i)\big)\det\big(\Delta^{j-i}g(z_j-y_i)\big) = \det\big((\Delta^{j-i}(f*g))(z_j-x_i)\big)$,

where all determinants are $n\times n$.

Proof. We will make use of the following summation by parts formula

(3.7)  $\sum_{y=a}^{b}\Delta u(y-x)\,v(z-y) = \sum_{y=a}^{b}u(y-x)\,\Delta v(z-y) + u(b+1-x)v(z-b) - u(a-x)v(z+1-a)$.

The first step is to show that

(3.8)  $\sum_{y\in W_n}\det\big(\Delta^{j-i}f(y_j-x_i)\big)\det\big(\Delta^{j-i}g(z_j-y_i)\big) = \sum_{y\in W_n}\det\big(\Delta^{1-i}f(y_j-x_i)\big)\det\big(\Delta^{i-1}g(z_i-y_j)\big)$

by repeated summation by parts. The left hand side of (3.8) can be written

$\sum_{y\in W_n}\Delta_{y_n}\det\big(\Delta^{1-i}f(y_1-x_i)\ \dots\ \Delta^{n-1-i}f(y_{n-1}-x_i)\ \ \Delta^{n-1-i}f(y_n-x_i)\big)\times\det\big(\Delta^{i-1}g(z_i-y_1)\ \dots\ \Delta^{i-n}g(z_i-y_n)\big)$.

Here we have used the summation by parts formula (3.7) to sum $y_n$ between $y_{n-1}$ and $\infty$. The terms coming from $u(b+1-x)v(z-b) - u(a-x)v(z+1-a)$ in (3.7), with $b\to\infty$ and $a = y_{n-1}$, give $0$ since $\Delta^{i-n}g(z_i-b) = 0$ if $b$ is large enough, and the other term gives rise to a determinant with two equal columns containing $\Delta^{n-1-i}f(y_{n-1}-x_i)$. The result is

$\sum_{y\in W_n}\det\big(\Delta^{1-i}f(y_1-x_i)\ \dots\ \Delta^{n-1-i}f(y_{n-1}-x_i)\ \ \Delta^{n-1-i}f(y_n-x_i)\big)\times\det\big(\Delta^{i-1}g(z_i-y_1)\ \dots\ \Delta^{i-n+1}g(z_i-y_{n-1})\ \ \Delta^{i-n+1}g(z_i-y_n)\big)$.

We can now repeat this procedure with $y_{n-1}, y_{n-2},\dots,y_2$, which gives

$\sum_{y\in W_n}\det\big(\Delta^{1-i}f(y_1-x_i)\ \ \Delta^{1-i}f(y_2-x_i)\ \ \Delta^{2-i}f(y_3-x_i)\ \dots\ \Delta^{n-1-i}f(y_n-x_i)\big)\times\det\big(\Delta^{i-1}g(z_i-y_1)\ \ \Delta^{i-1}g(z_i-y_2)\ \ \Delta^{i-2}g(z_i-y_3)\ \dots\ \Delta^{i-n+1}g(z_i-y_n)\big)$.

Again we repeat the summation by parts procedure with $y_n,\dots,y_3$, then with $y_n,\dots,y_4$ and so on until we get the right hand side of (3.8). Next, we apply the generalized Cauchy–Binet identity (2.5) to the right hand side of (3.8).
This gives

$\det\Big(\sum_{y\in\mathbb{Z}}\Delta^{1-i}f(y-x_i)\,\Delta^{j-1}g(z_j-y)\Big)_{1\le i,j\le n}$.

To prove the lemma it remains to show that

$\sum_{y\in\mathbb{Z}}\Delta^{1-i}f(y-x)\,\Delta^{j-1}g(z-y) = \Delta^{j-i}(f*g)(z-x)$.

If we set $h(x) = H(x-1)$, then $\Delta^{-1}f(x) = (h*f)(x)$ and hence

$\sum_{y\in\mathbb{Z}}\Delta^{1-i}f(y-x)\,\Delta^{j-1}g(z-y) = \Delta_z^{j-1}\big(h^{*(i-1)}*f*g\big)(z-x) = \Delta_z^{j-1}\big(\Delta^{-(i-1)}(f*g)\big)(z-x) = \Delta^{j-i}(f*g)(z-x)$.

Here $h^{*j}$ denotes the $j$-fold convolution of $h$ with itself. □

To prove theorem 2.2 we note that by theorem 2.1

$P[G(m,n)\le\eta] = \sum_{x_1\le\dots\le x_n\le\eta}\det\big(\Delta^{j-i}w_m(x_j)\big) = \sum_{x_1\le\dots\le x_{n-1}\le\eta}\ \sum_{x_n=x_{n-1}}^{\eta}\det\big(\Delta^{j-i}w_m(x_j)\big)$.

Now,

$\sum_{x_n=x_{n-1}}^{\eta}\det\big(\Delta^{j-i}w_m(x_j)\big) = \det\big(\Delta^{1-i}w_m(x_1)\ \dots\ \Delta^{n-1-i}w_m(x_{n-1})\ \ \Delta^{n-1-i}w_m(\eta+1)-\Delta^{n-1-i}w_m(x_{n-1})\big) = \det\big(\Delta^{1-i}w_m(x_1)\ \dots\ \Delta^{n-1-i}w_m(x_{n-1})\ \ \Delta^{n-1-i}w_m(\eta+1)\big)$.

Repeated use of this argument proves theorem 2.2.

Proofs of propositions 2.4 and 2.5. The generating function for $w(x)$ is

$\sum_{x\in\mathbb{Z}}w(x)z^x = \frac{1-q}{1-qz}$

and hence, since $w_m$ is the $m$-fold convolution of $w$ with itself,

$w_m(x) = \frac{(1-q)^m}{2\pi i}\int_{\gamma_r}\frac{dz}{(1-qz)^m z^{x+1}}$,

where the radius $r$ of the circle $\gamma_r$, centered at the origin, satisfies $0<r<1/q$. It follows that

(3.9)  $\Delta^k w_m(x) = \frac{(1-q)^m}{2\pi i}\int_{\gamma_r}\frac{(1-z)^k}{(1-qz)^m z^{x+k+1}}\,dz$,

for $k\ge 0$, and also for $k<0$ provided $0<r<1$. Recall that $h(x) = H(x-1)$, so that $\Delta^{-1}f(x) = (h*f)(x)$ and hence $\Delta^{-k}f(x) = (h^{*k}*f)(x)$. Using e.g. generating functions it is not difficult to see that

(3.10)  $h^{*k}(x) = \frac{(x-1)^{[k-1]}}{(k-1)!}H(x-k)$,

where $y^{[k]} = y(y-1)\cdots(y-k+1)$ is the factorial power. We can write

$\Delta^{j-i-1}w_m(x) = \Delta^{-i}\big(\Delta^{j-1}w_m\big)(x) = \sum_{y\in\mathbb{Z}}h^{*i}(x-y)\,\Delta^{j-1}w_m(y)$.

Note that $h^{*i}(x-y) = 0$ if $x-y<i$ by (3.10), and hence $h^{*i}(x-y) = 0$ if $y>x-i$. We obtain

(3.11)  $\Delta^{j-i-1}w_m(\eta+1) = \sum_{y=-\infty}^{\eta}\frac{(\eta-y)^{[i-1]}}{(i-1)!}\,\Delta^{j-1}w_m(y)$.

Fix $L\ge n-1$.
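Formula (3.10) for the convolution powers of $h$ can be confirmed directly by repeated convolution, noting that $(x-1)^{[k-1]}/(k-1)! = \binom{x-1}{k-1}$; a short integer-arithmetic sketch (function names are mine):

```python
from math import comb

def H(x):
    return 1 if x >= 0 else 0

def h(x):
    return H(x - 1)

def conv_pow(k, hi):
    # values of h^{*k}(x) for 0 <= x <= hi, by repeated convolution;
    # all the functions involved vanish for negative arguments
    vals = [h(x) for x in range(hi + 1)]
    for _ in range(k - 1):
        vals = [sum(vals[y] * h(x - y) for y in range(x + 1)) for x in range(hi + 1)]
    return vals

for k in range(1, 5):
    got = conv_pow(k, 12)
    for x in range(13):
        # (3.10): h^{*k}(x) = binom(x-1, k-1) H(x-k)
        want = comb(x - 1, k - 1) * H(x - k) if x >= 1 else 0
        assert got[x] == want
```

Combinatorially this is just the number of ways to write $x$ as an ordered sum of $k$ positive integers.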
By (3.11) and some row operations we find

(3.12)  $\det\big(\Delta^{j-i-1}w_m(\eta+1)\big)_{1\le i,j\le n} = \det\Big(\sum_{y=-\infty}^{\eta}\frac{(y+L)^{i-1}}{(i-1)!}(-1)^{i-1}\Delta^{j-1}w_m(y)\Big)_{1\le i,j\le n} = \det\Big(\sum_{y=-L}^{\eta}\frac{(y+L)^{i-1}}{(i-1)!}(-1)^{j-1}\Delta^{j-1}w_m(y)\Big)_{1\le i,j\le n}$,

since it follows from (3.9) that $\Delta^{j-1}w_m(y) = 0$ if $y\le -n$ for $1\le j\le n$; moving the sign factors from the rows to the columns, $(-1)^{i-1}\mapsto(-1)^{j-1}$, does not change the determinant. If we choose $L = n-1$ we obtain, using (2.5),

(3.13)  $P[G(m,n)\le\eta] = \frac{1}{n!\prod_{j=1}^{n-1}j!}\sum_{y_1,\dots,y_n=0}^{\eta+n-1}\Delta_n(y)\det\big((-1)^{j-1}\Delta^{j-1}w_m(y_i-n+1)\big)_{1\le i,j\le n}$.

If $0\le k<n$, $m\ge n$, then

(3.14)  $(-1)^k\Delta^k w_m(y) = s_k(y)\prod_{\ell=k+1}^{m-1}(y+\ell)\,q^y\,H(y+n-1)$,

where $s_k$ is a polynomial of degree $k$. To see this we can use (3.9). We see that the integral in (3.9) is zero if $x\le-(k+1)$, and in particular if $x\le -n$ for all $k$, $0\le k<n$. If we make the change of variables $z\to 1/z$ and assume that $y\ge -m+1$, we find

$\Delta^k w_m(y) = \frac{(1-q)^m}{2\pi i}\int_{\gamma_{1/r}}\frac{(z-1)^k}{(z-q)^m}z^{y+m-1}\,dz = \sum_{r=0}^{y+m-1}\binom{y+m-1}{r}\frac{(1-q)^m}{2\pi i}\int_{\gamma_{1/r}}(z-1)^k(z-q)^{r-m}q^{y+m-1-r}\,dz$.

This is a polynomial of degree $m-1$ in $y$ times $q^y$, and this polynomial has zeros at $-(k+1),\dots,-(m-1)$. Consequently (3.14) follows.

Furthermore we have the following determinantal identity. Let $p_j$, $j=0,\dots,n-1$, be polynomials, $p_j$ of degree $j$, and $A_2,\dots,A_n$ constants. Then there is a constant $B$ such that

(3.15)  $\det\Big(p_{j-1}(x_i)\prod_{k=j+1}^{n}(x_i+A_k)\Big)_{1\le i,j\le n} = B\,\Delta_n(x)$.

This is not hard to see. Choose $c_{rj}$ so that

$\sum_{r=1}^{n}x^{r-1}c_{rj} = p_{j-1}(x)\prod_{k=j+1}^{n}(x+A_k)$.

Then, the left hand side of (3.15) is

$\det\Big(\sum_{r=1}^{n}x_i^{r-1}c_{rj}\Big)_{1\le i,j\le n} = \Delta_n(x)\det C$,

and we have proved (3.15) with $B = \det C$. Actually, according to [12] we have

$B = \prod_{j=1}^{n}(-1)^{j-1}p_{j-1}(-A_j)$,

but we will not need this result. By (3.13), (3.14) and (3.15) we obtain

$P[G(m,n)\le\eta] = C_{m,n}\sum_{y_1,\dots,y_n=0}^{\eta+n-1}\Delta_n(y)^2\prod_{j=1}^{n}q^{y_j}\prod_{k=n}^{m-1}(y_j+k+1-n) = \frac{1}{Z_{m,n}}\sum_{y_1,\dots,y_n=0}^{\eta+n-1}\Delta_n(y)^2\prod_{j=1}^{n}\binom{y_j+m-n}{y_j}q^{y_j}$,

for some constants $C_{m,n}$, $Z_{m,n}$.
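The structure claimed in (3.14) — that $(-1)^k\Delta^k w_m(y)\,q^{-y}$, after dividing out the explicit linear factors, leaves a polynomial $s_k$ of degree $k$ — can be tested exactly, since a degree-$k$ polynomial is annihilated by the $(k+1)$-st finite difference. A sketch with $m=5$, $k=2$ (helper names are mine):

```python
from fractions import Fraction
from math import comb

q = Fraction(1, 4)
m, k = 5, 2

def w_m(x):
    # negative binomial weight (2.1)
    return (1 - q) ** m * comb(x + m - 1, x) * q ** x if x >= 0 else Fraction(0)

# k-th forward difference of w_m
dk = w_m
for _ in range(k):
    g = dk
    dk = lambda x, g=g: g(x + 1) - g(x)

# the polynomial part of Delta^k w_m has zeros at -(k+1), ..., -(m-1)
for j in range(k + 1, m):
    assert dk(-j) == 0

def s(y):
    # candidate s_k(y) = (-1)^k Delta^k w_m(y) q^{-y} / prod_{l=k+1}^{m-1} (y + l)
    p = Fraction(1)
    for l in range(k + 1, m):
        p *= y + l
    return (-1) ** k * dk(y) * q ** (-y) / p

vals = [s(y) for y in range(k + 4)]
for _ in range(k + 1):  # the (k+1)-st difference of a degree-k polynomial is zero
    vals = [b - a for a, b in zip(vals, vals[1:])]
assert all(v == 0 for v in vals)
```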
If we let $\eta\to\infty$ we see that $Z_{m,n}$ must be exactly the normalization constant in the Meixner ensemble. This proves proposition 2.4.

We now turn to the proof of proposition 2.5. Write $K = m-n+1$ and define, for $0\le j<n$, $x\in\mathbb{Z}$,

$a_j(x) = \frac{q-1}{2\pi i}\int_{\gamma_{r_1}}z^{x-1}\frac{(qz-1)^{j+K-1}}{(z-1)^{j+1}}\,dz$

and

$b_j(x) = \frac{1}{2\pi i}\int_{\gamma_{r_2}}\frac{(w-1)^j}{w^{x}(qw-1)^{j+K}}\,dw$,

where $1<r_1<r_2<1/q$. If $x\ge 0$, $0\le j<n$, then

(3.16)  $a_j(x) = \frac{(q-1)^{j+K}}{j!}\,p_j(x-n)$,

where $p_j$ is a monic polynomial of degree $j$. This follows from the computation

$a_j(x+n) = \frac{q-1}{2\pi i}\int_{\gamma_{r_1}}z^{x+n-1}\frac{(qz-1)^{j+K-1}}{(z-1)^{j+1}}\,dz = \frac{q-1}{2\pi i}\int_{\gamma_{r_1}}\sum_{r=0}^{x+n-1}\binom{x+n-1}{r}(z-1)^{r-j-1}(qz-1)^{j+K-1}\,dz = \frac{q-1}{2\pi i}\int_{\gamma_{r_1}}\sum_{r=0}^{j}\binom{x+n-1}{r}(z-1)^{r-j-1}(qz-1)^{j+K-1}\,dz$.

We see that this is a polynomial of degree $j$ in $x$ with leading coefficient $(q-1)^{j+K}/j!$. It follows from theorem 2.2 and (3.12) with $L = n$ that

(3.17)  $P[G(m,n)\le\eta] = \det\Big(\sum_{y=0}^{\eta+n}\frac{y^{i-1}}{(i-1)!}(-1)^{j-1}\Delta^{j-1}w_m(y-n)\Big)_{1\le i,j\le n}$.

The Meixner ensemble is related to Meixner polynomials since these polynomials are orthogonal on $\mathbb{N}$ with respect to the negative binomial weight. These polynomials are not used explicitly in the computations above but figure in the background. This can be seen from Rodrigues' formula,

$p_j^{(m,q)}(x)\,q^{j+x}\binom{x+m-n}{x} = \Delta^j\Big[\binom{x+m-n}{x}q^x\prod_{k=0}^{j-1}(x-k)\Big]$,

and the integral representation

$p_j^{(m,q)}(x) = \frac{j!}{2\pi i}\int_{\gamma_r}\frac{(1-z/q)^x}{(1-z)^{x+K}}\frac{dz}{z^{j+1}}$,

with $0<r<1$, where the $p_j^{(m,q)}(x)$ are the standard Meixner polynomials, [11].

Acknowledgement: I thank Jon Warren for a discussion and for sending me a preliminary version of [5].

References

[1] Yu. Baryshnikov, GUEs and queues, Probab. Theory Relat. Fields (2001), 256-274
[2] A. Borodin, A. Okounkov & G. Olshanski, Asymptotics of Plancherel measures for symmetric groups, J. Amer. Math. Soc. (2000), 481-515
[3] A. Borodin & G. Olshanski, Stochastic dynamics related to Plancherel measure on partitions, Representation theory, dynamical systems, and asymptotic combinatorics, 9-21, Amer. Math. Soc. Transl. Ser. 2, 217, Amer. Math. Soc., Providence, RI, 2006
[4] J. G. Brankov, V. B. Priezzhev, R. V. Shelest, Generalized determinant solution of the discrete-time totally asymmetric exclusion process and zero-range process, Phys. Rev. E 69 (2004), 066136
[5] A. B. Dieker, J. Warren, Transition probabilities, queues in series and determinants, work in progress
[6] J. Gravner, C. A. Tracy, H. Widom, A growth model in a random environment, Ann. Probab. (2002), 1340-1368
[7] K. Johansson, Shape fluctuations and random matrices, Commun. Math. Phys. (2000), 437-476
[8] K. Johansson, Discrete orthogonal polynomial ensembles and the Plancherel measure, Annals of Math. (2001), 259-296
[9] K. Johansson, Discrete polynuclear growth and determinantal processes, Commun. Math. Phys. (2003), 277-329
[10] K. Johansson, Random matrices and determinantal processes, lecture notes from the Les Houches summer school on Mathematical Statistical Physics (2005), arXiv:math-ph/0510038
[11] R. Koekoek, R. F. Swarttouw, The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue, available at http://fa.its.tudelft.nl/~koekoek/askey/
[12] C. Krattenthaler, Advanced Determinant Calculus, available at ~kratt/artikel/detsurv.html
[13] T. Nagao, T. Sasamoto, Asymmetric simple exclusion process and modified random matrix ensembles, Nuclear Phys. B (2004), no. 3, 487-502
[14] A. Okounkov, Infinite wedge and random partitions, Selecta Math. (N.S.) (2001), 57-81
[15] A. Rákos, G. M. Schütz, Current distribution and random matrix ensembles for an integrable asymmetric fragmentation process, J. Stat. Phys. (2005), 511-530
[16] G. M. Schütz, Exact solution of the master equation for the asymmetric exclusion process, J. Stat. Phys. (1997), 427-445
[17] C. A. Tracy, H. Widom, Integral formulas for the asymmetric simple exclusion process, arXiv:0704.2633v1
[18] J. Warren, Dyson's Brownian motions, intertwining and interlacing, Electron. J. Probab. 12 (2007), no. 19, 573-590 (electronic)

Department of Mathematics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden

E-mail address: