Singular values of products of random matrices and polynomial ensembles
Arno B. J. Kuijlaars and Dries Stivigny

September 18, 2018
Abstract
Akemann, Ipsen, and Kieburg showed recently that the squared singular values of a product of M complex Ginibre matrices are distributed according to a determinantal point process. We introduce the notion of a polynomial ensemble and show how their result can be interpreted as a transformation of polynomial ensembles. We also show that the squared singular values of the product of M − 1 complex Ginibre matrices with one truncated unitary matrix form a polynomial ensemble, and we find the scaling limits of the correlation kernels at the hard edge. They coincide with the limiting kernels for the product of M complex Ginibre matrices. Our final result is that these limiting kernels also appear as scaling limits for the biorthogonal ensembles of Borodin with parameter θ > 0, in case θ or 1/θ is an integer. This further supports the conjecture that these kernels have a universal character.
1. Introduction
There is a remarkable recent development in the understanding of the structure of eigenvalues and singular values of products of complex Ginibre matrices at the finite size level. Both the eigenvalues and the singular values turn out to have a determinantal structure. For the eigenvalues this was shown by Akemann and Burda [2]. Related results that also involve products with inverses of complex Ginibre matrices are in [23, 1], and products with truncated unitary matrices in [3, 20].
KU Leuven, Department of Mathematics, Celestijnenlaan 200B box 2400, 3001 Leuven, Belgium. E-mail: [email protected], [email protected]

In these cases the joint probability density function of the eigenvalues takes the form

  (1/Z_n) |∆(z)|² ∏_{j=1}^n w(z_j),  (z_1, …, z_n) ∈ C^n,

where ∆(z) = ∏_{j<k} (z_k − z_j) is the Vandermonde determinant and w is a weight function.

In this paper we focus on the singular values. Suppose Y = G_M ⋯ G_1 is a product of M independent complex Ginibre matrices, where G_j has size (n + ν_j) × (n + ν_{j−1}) with ν_0 = 0 and ν_1, …, ν_M ≥ 0. Then the joint probability density function takes the form

  (1/Z_n) ∆(y) det[w_{k−1}(y_j)]_{j,k=1}^n,  (y_1, …, y_n) ∈ [0, ∞)^n,  (1.1)

where the y_j's are the squared singular values of Y, and

  w_k(y) = G^{M,0}_{0,M}( — ; ν_M, …, ν_2, ν_1 + k | y )  (1.2)

is again a Meijer G-function. This was shown by Akemann, Kieburg, and Wei [4] in the case of square matrices, and by Akemann, Ipsen, and Kieburg [5] for general rectangular matrices.

The density (1.1) defines a biorthogonal ensemble, which is a special case of a determinantal point process. Because of the Vandermonde determinant ∆(y) in (1.1) there is a connection with polynomials and we call (1.1) a polynomial ensemble. A general polynomial ensemble is of the form

  (1/Z_n) ∆(x) det[f_{k−1}(x_j)]_{j,k=1}^n,  (x_1, …, x_n) ∈ R^n,  (1.3)

for certain functions f_0, …, f_{n−1}. In such an ensemble the correlation kernel is

  K_n(x, y) = ∑_{k=0}^{n−1} P_k(x) Q_k(y),

where P_k is a monic polynomial of degree k such that

  ∫_0^∞ P_k(x) f_j(x) dx = 0  for j = 0, 1, …, k − 1, and k = 0, 1, …, n,  (1.4)

and Q_k is in the linear span of f_0, …, f_k such that

  ∫_0^∞ x^j Q_k(x) dx = δ_{j,k}  for j = 0, 1, …, k, and k = 0, 1, …, n − 1.

If f_k(x) = x^k f(x) for every k, then (1.3) is an orthogonal polynomial ensemble [24] and (1.4) reduces to the conditions for an orthogonal polynomial with respect to f. It is also worth noting that in a polynomial ensemble, P_n is the average characteristic polynomial

  P_n(x) = E[ ∏_{j=1}^n (x − x_j) ]

with the expectation taken over (1.3).

The first aim of the present work is to interpret the result of Akemann et al. as a transformation of polynomial ensembles. 
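As a quick numerical illustration of the objects just introduced (not part of the paper), the following sketch samples a product Y = G_M ⋯ G_1 of complex Ginibre matrices and computes its squared singular values. We sample entries with independent real and imaginary parts of variance 1/2 (the e^{−Tr G*G} normalization); the particular sizes and the seed are our own choices, and only scale-independent properties are checked.

```python
import numpy as np

def ginibre(m, n, rng):
    # Complex Ginibre matrix: independent entries whose real and imaginary
    # parts are independent centered Gaussians with variance 1/2,
    # corresponding to a density proportional to exp(-Tr G*G).
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def squared_singular_values(nu, n, rng):
    # Y = G_M ... G_1 with G_j of size (n + nu_j) x (n + nu_{j-1}), nu_0 = 0.
    M = len(nu)
    sizes = [n] + [n + nu[j] for j in range(M)]
    Y = np.eye(n)
    for j in range(M):
        Y = ginibre(sizes[j + 1], sizes[j], rng) @ Y
    return np.sort(np.linalg.svd(Y, compute_uv=False) ** 2)

rng = np.random.default_rng(0)
y = squared_singular_values([0, 1, 2], 4, rng)  # M = 3, nu = (0, 1, 2), n = 4
print(y)  # n = 4 positive squared singular values, almost surely distinct
```

The point configuration y_1 < … < y_n obtained this way is one sample from the polynomial ensemble (1.1)-(1.2).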
The result may be stated as follows: suppose X is a random matrix whose squared singular values form a polynomial ensemble and that G is a complex Ginibre matrix. Then the squared singular values of Y = GX also form a polynomial ensemble. See Theorem 2.1 below for a precise formulation.

We can use the theorem repeatedly, and we obtain that the squared singular values of Y = G_{M−1} ⋯ G_1 X also form a polynomial ensemble, for any M ≥ 2 and independent complex Ginibre matrices G_1, …, G_{M−1}.

The theorem applies to any random matrix X whose squared singular values form a polynomial ensemble. Taking for X a complex Ginibre matrix itself, we obtain the result of Akemann et al., and by taking for X the inverse of a product of complex Ginibre matrices, we rederive a recent result of Forrester [15]. In both these examples, the functions in the polynomial ensembles are expressed as Meijer G-functions. We consider one new example where X is a truncation of a Haar distributed unitary matrix, for which it is known that the squared singular values form a Jacobi ensemble on [0, 1].

The correlation kernels K_n for the polynomial ensemble (1.1)-(1.2) have an interesting large n scaling limit at the hard edge,

  lim_{n→∞} (1/n) K_n(x/n, y/n) = K_{ν_1,…,ν_M}(x, y),

with a limiting kernel that depends on M parameters ν_1, …, ν_M; see [25] and Theorem 4.1 below for a precise statement. For M = 1 they reduce to the hard edge Bessel kernels, see e.g. [33] or [14, Section 7.2], and for M = 2 these kernels already appeared in work of Bertola et al. [8] on the Cauchy-Laguerre two matrix model. In [15] Forrester obtained the same family of limiting kernels for the squared singular values of a product of complex Ginibre matrices with the inverse of another product of complex Ginibre matrices. Differential equations for the gap probabilities are in [32]. For results on the global distribution of the points in (1.1)-(1.2), see e.g. 
[10, 16, 29, 31, 34].

The second aim of this paper is to provide two more examples of models with the kernels K_{ν_1,…,ν_M} as scaling limit. We show that the new example with the product of Ginibre matrices with one truncated unitary matrix falls into this category. In the second example we consider the biorthogonal ensembles

  (1/Z_n) ∏_{j<k} (x_k − x_j)(x_k^θ − x_j^θ) ∏_{j=1}^n w(x_j)  (1.5)

with θ > 0. Borodin [9] found the hard edge scaling limits for the cases where w(x) = x^α e^{−x} or w(x) = x^α χ_{[0,1]}(x). The scaling limit depends on the two parameters θ > 0 and α > −1. We show that, after suitable rescaling, the limiting kernels belong to the class of kernels K_{ν_1,…,ν_M} provided that θ or 1/θ is an integer, see Theorem 5.1.

This last example in particular supports the conjecture that the kernels K_{ν_1,…,ν_M} have a universal character and that they might appear as scaling limits in other interesting random models as well.

2. Transformation of Polynomial Ensembles

A complex Ginibre matrix G of size m × n has independent entries whose real and imaginary parts are independent and have a standard normal distribution. The probability distribution can be written as

  (1/Z_n) e^{−Tr G*G} dG  (2.1)

where dG = ∏_{j=1}^m ∏_{k=1}^n dRe G_{j,k} dIm G_{j,k} and Z_n is a normalization constant.

The main result of this section is the following:

Theorem 2.1. Let ν ≥ 0 and l ≥ n ≥ 1 be integers and let G be a complex Ginibre random matrix of size (n + ν) × l. Let X be a random matrix of size l × n, independent of G, such that the squared singular values x_1, …, x_n of X have a joint probability density function on [0, ∞)^n that is proportional to

  ∆(x) det[f_{k−1}(x_j)]_{j,k=1}^n,  (2.2)

for certain functions f_0, …, f_{n−1}. Then the squared singular values y_1, …, y_n of Y = GX are distributed with a joint probability density function on [0, ∞)^n proportional to

  ∆(y) det[g_{k−1}(y_j)]_{j,k=1}^n  (2.3)

with

  g_k(y) = ∫_0^∞ x^ν e^{−x} f_k(y/x) dx/x,  k = 0, …, n − 1.  (2.4)
The theorem says that left multiplication by a complex Ginibre matrix maps polynomial ensembles to polynomial ensembles. Observe that g_k is the Mellin convolution [30, formula 1.14.39] of f_k with the function x ↦ x^ν e^{−x}.

Before we prove the theorem, we state an auxiliary result, which is essentially contained in [6, section 2.1] and also in [5]. For clarity, we give a detailed proof of this result.

Lemma 2.2. Let ν, l, n and G be as in Theorem 2.1. Let X be a non-random matrix of size l × n with non-zero squared singular values x_1, …, x_n. Then the squared singular values y_1, …, y_n of Y = GX have a joint probability density function on [0, ∞)^n that is proportional to

  (∆(y)/∆(x)) det[ (y_j^ν / x_k^{ν+1}) e^{−y_j/x_k} ]_{j,k=1}^n  (2.5)

where the proportionality constant does not depend on X. In case some of the x_k are the same, we have to interpret (2.5) in the appropriate limiting sense using l'Hôpital's rule.

Proof. First we show that we can reduce the proof to the case l = n. Assume l > n. Then any matrix X of size l × n can be written as

  X = U ( X_0 ; O )^T

where U is an l × l unitary matrix, X_0 is of size n × n, and O is a zero matrix of size (l − n) × n. Then by the unitary invariance of Ginibre random matrix ensembles, Y = GX has the same distribution of singular values as Y_0 = G_0 X_0, where G_0 is an (n + ν) × n complex Ginibre matrix. So in the rest of the proof we assume that l = n and X is a square matrix of size n × n with squared singular values x_1, …, x_n.

It is known that the change of variables G ↦ Y = GX, with X being fixed, where G and Y are (n + ν) × n matrices, has a Jacobian (see e.g. [27, Theorem 3.2])

  det(X*X)^{−(n+ν)} = ∏_{k=1}^n x_k^{−(n+ν)}.  (2.6)

Thus under the mapping G ↦ Y the Ginibre probability distribution (2.1) transforms, up to a constant, into

  e^{−Tr(G*G)} dG = ( ∏_{k=1}^n x_k^{−(n+ν)} ) e^{−Tr(Y*Y (X*X)^{−1})} dY.  (2.7)
Next we write Y = U Σ V in its singular value decomposition. Thus Σ is a diagonal matrix with the singular values along the diagonal, V is an n × n unitary matrix, and U is an (n + ν) × n matrix with U*U = I, that is, U belongs to the Stiefel manifold V_{n,n+ν}. If we let y_1, …, y_n be the squared singular values of Y, then it is known that

  dY ∝ ∏_{j=1}^n y_j^ν ∆(y)² dU dV dy_1 ⋯ dy_n  (2.8)

where dU is the invariant measure on the Stiefel manifold, and dV is the Haar measure on U(n), see e.g. [13] and [35]. Combining (2.7) and (2.8) we obtain a probability measure proportional to

  ( ∏_{k=1}^n x_k^{−(n+ν)} ) ∏_{j=1}^n y_j^ν ∆(y)² e^{−Tr(V*Σ*ΣV(X*X)^{−1})} dU dV dy_1 ⋯ dy_n.  (2.9)

Since we are only interested in the squared singular values of Y, we integrate out the U and V parts in (2.9). The integral over U only contributes to the constant. The integration over V is done by means of the Harish-Chandra/Itzykson-Zuber integral [19, 21]

  ∫_{U(n)} e^{−Tr(V*Σ*ΣV(X*X)^{−1})} dV = C_n det[ e^{−y_j/x_k} ]_{j,k=1}^n / ( ∆(y) ∆(x^{−1}) ),  (2.10)

where C_n is a (known) constant depending only on n. From (2.9) and (2.10) we obtain that the density of squared singular values of Y is proportional to

  ( ∏_{k=1}^n x_k^{−(n+ν)} ) ∏_{j=1}^n y_j^ν ∆(y) det[ e^{−y_j/x_k} ]_{j,k=1}^n / ∆(x^{−1}).  (2.11)

Using ∆(x^{−1}) = (−1)^{n(n−1)/2} ∏_{k=1}^n x_k^{−n+1} ∆(x) and bringing the products into the determinant, we immediately obtain (2.5) with a proportionality constant that is independent of x_1, …, x_n. This proves the lemma.

We can now prove Theorem 2.1.

Proof. Suppose that the squared singular values of X have joint probability density function (2.2). 
Then the squared singular values are distinct almost surely, and we obtain from Lemma 2.2, after averaging over X, that the squared singular values of Y = GX have a joint probability density function proportional to

  ∆(y) ∫_0^∞ ⋯ ∫_0^∞ det[ (y_j^ν / x_k^{ν+1}) e^{−y_j/x_k} ]_{j,k=1}^n det[ f_{k−1}(x_j) ]_{j,k=1}^n dx_1 ⋯ dx_n.  (2.12)

The multiple integral in (2.12) can be evaluated with the Andreief identity, see e.g. [12, Chapter 3],

  ∫ ⋯ ∫ det[ φ_j(x_k) ]_{j,k=1}^n det[ ψ_k(x_j) ]_{j,k=1}^n dμ(x_1) ⋯ dμ(x_n) = n! det[ ∫ φ_j(x) ψ_k(x) dμ(x) ]_{j,k=1}^n.

[Footnote: Note that in [13, page 10] the Jacobian is given in terms of the singular values σ_j = √(y_j) with a factor ∏_{j=1}^n σ_j^{2ν+1}. Since 2σ_j dσ_j = dy_j we obtain (2.8) with the factor ∏_{j=1}^n y_j^ν.]

The result is that the density of the squared singular values of Y is proportional to ∆(y) det[g_{k−1}(y_j)]_{j,k=1}^n with

  g_k(y) = ∫_0^∞ (y^ν / x^{ν+1}) e^{−y/x} f_k(x) dx = ∫_0^∞ x^ν e^{−x} f_k(y/x) dx/x,

and this completes the proof of Theorem 2.1.

Remark 2.3. We emphasize that in Theorem 2.1 we do not assume that the probability distribution of X is invariant under (left or right) multiplication with unitary matrices.

Remark 2.4. It is of interest to find other random matrices A such that multiplication with A preserves the biorthogonal structure of squared singular values. In forthcoming work we will show that this is the case for multiplication with truncated unitary matrices. The main issue is to find a suitable analogue of the Harish-Chandra/Itzykson-Zuber formula (2.10).

Theorem 2.1 has a number of immediate consequences that we list now.

3. Corollaries

We can apply Theorem 2.1 to any random matrix X for which the squared singular values have a joint probability density function of the form (2.2). In all the examples below, we will see the appearance of Meijer G-functions. This is actually quite natural, because of their connection with the Mellin transform. 
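For a measure with finite support the Andreief identity used above is exact and reduces to the Cauchy-Binet formula, so it can be checked numerically. The following sketch (our own illustration; the functions φ_j, ψ_k and the support points are arbitrary choices) compares the two sides for a discrete measure μ with unit point masses:

```python
import itertools
import math
import numpy as np

# Andreief identity for a discrete measure mu (unit masses at pts):
# int...int det[phi_j(x_k)] det[psi_k(x_j)] dmu(x_1)...dmu(x_n)
#   = n! det[ int phi_j(x) psi_k(x) dmu(x) ].
n = 3
pts = np.array([0.3, 0.7, 1.1, 1.9, 2.4])
phi = [lambda x, p=p: x ** p for p in range(n)]            # phi_j (arbitrary)
psi = [lambda x, p=p: np.exp(-p * x) for p in range(n)]    # psi_k (arbitrary)

lhs = 0.0
for xs in itertools.product(pts, repeat=n):                # integrate over mu^n
    A = np.array([[f(x) for x in xs] for f in phi])        # [phi_j(x_k)]
    B = np.array([[f(x) for f in psi] for x in xs])        # [psi_k(x_j)]
    lhs += np.linalg.det(A) * np.linalg.det(B)

G = np.array([[sum(f(x) * g(x) for x in pts) for g in psi] for f in phi])
rhs = math.factorial(n) * np.linalg.det(G)
print(lhs, rhs)  # the two sides agree up to rounding
```

In the proof of Theorem 2.1 the same identity is applied with dμ(x) = dx on [0, ∞), φ_j(x) = (y_j^ν/x^{ν+1}) e^{−y_j/x} and ψ_k = f_{k−1}.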
In particular, if f_k in Theorem 2.1 is a Meijer G-function, then g_k in (2.4) is also a Meijer G-function, see formula (A.2) in the appendix.

3.1. X is a Ginibre matrix

Suppose X = G_0 is itself a complex Ginibre random matrix of size (n + ν_1) × n, with ν_1 ≥ 0. Then the squared singular values of X have a joint p.d.f. proportional to

  ∆(x) ∏_{j=1}^n x_j^{ν_1} e^{−x_j}

which can be written in the form (2.2) with

  f_k(x) = x^{ν_1+k} e^{−x} = G^{1,0}_{0,1}( — ; ν_1 + k | x ).  (3.1)

Assume now that Y = G_M ⋯ G_1 is the product of M independent complex Ginibre matrices where G_k has size (n + ν_k) × (n + ν_{k−1}) and all ν_k ≥ 0, with ν_0 = 0. Then we can apply Theorem 2.1 repeatedly, M − 1 times, and we obtain the following.

Corollary 3.1. The joint probability density function of the squared singular values of Y = G_M ⋯ G_1 is proportional to

  ∆(y) det[ w_{k−1}(y_j) ]_{j,k=1}^n  (3.2)

where

  w_k(y) = G^{M,0}_{0,M}( — ; ν_M, …, ν_2, ν_1 + k | y ).  (3.3)

This is the result of Akemann, Ipsen and Kieburg [5] mentioned in the introduction.

3.2. X is the inverse of a product of Ginibre matrices

A second application of Theorem 2.1 is inspired by the recent work of Forrester [15] who considered the product

  Y = G_M ⋯ G_1 ( G̃_K ⋯ G̃_1 )^{−1}  (3.4)

of M Ginibre random matrices with the inverse of a product of K Ginibre random matrices. Here it is assumed that G̃_j has size (n + ν̃_j) × (n + ν̃_{j−1}) with all ν̃_j ≥ 0 and ν̃_0 = ν̃_K = 0, so that G̃_K ⋯ G̃_1 is a square matrix.

From Corollary 3.1 we know that the squared singular values of G̃_K ⋯ G̃_1 have a joint probability density function proportional to

  ∆(x) det[ φ_{k−1}(x_j) ]_{j,k=1}^n,  φ_k(x) = G^{K,0}_{0,K}( — ; ν̃_K, …, ν̃_2, ν̃_1 + k | x ).  (3.5)

The following simple lemma shows that the squared singular values of (G̃_K ⋯ G̃_1)^{−1} then also have the structure of a polynomial ensemble.

Lemma 3.2. Let X be a random matrix of size n × n such that the squared singular values x_1, . . .
, x_n of X have a joint probability density function proportional to

  ∆(x) det[ φ_k(x_j) ]_{j,k=1}^n,  (3.6)

for certain functions φ_1, …, φ_n. Then the squared singular values of Y = X^{−1} have a joint probability density function proportional to

  ∆(y) det[ ψ_k(y_j) ]_{j,k=1}^n  (3.7)

with

  ψ_k(y) = y^{−n−1} φ_k( y^{−1} ),  k = 1, …, n.  (3.8)

Proof. The squared singular values of X^{−1} are given by y_j = x_j^{−1}, j = 1, …, n. Making the change of variables x_j ↦ y_j = x_j^{−1} in (3.6) gives us the joint probability density function of the squared singular values of X^{−1}. The Jacobian of this change of variables is

  (−1)^n ∏_{j=1}^n y_j^{−2}.

Noting also that

  ∆(y^{−1}) = (−1)^{n(n−1)/2} ∏_{j=1}^n y_j^{−n+1} ∆(y),

we find that the joint probability density function of y_1, …, y_n is therefore proportional to

  ∏_{j=1}^n y_j^{−2} ∏_{j=1}^n y_j^{−n+1} ∆(y) det[ φ_k( y_j^{−1} ) ]_{j,k=1}^n

which is indeed (3.7) with ψ_k given by (3.8).

The class of Meijer G-functions is closed under inversion of the argument and under multiplication by a power of the independent variable, see (A.4) and (A.5). It follows that if φ_k in Lemma 3.2 is a Meijer G-function, then so is ψ_k. To be precise, if

  φ_k(x) = G^{m,n}_{p,q}( a_1, …, a_p ; b_1, …, b_q | x )

then

  ψ_k(y) = G^{n,m}_{q,p}( −b_1 − n, …, −b_q − n ; −a_1 − n, …, −a_p − n | y ).

If we apply this to (3.5) we see that the squared singular values of X = (G̃_K ⋯ G̃_1)^{−1} have a joint p.d.f. of the form (2.2) with

  f_k(x) = G^{0,K}_{K,0}( −ν̃_K − n, …, −ν̃_2 − n, −ν̃_1 − n − k ; — | x ).

Then a repeated application of Theorem 2.1 and formula (A.3) gives the following result of [15, Proposition 3].

Corollary 3.3. Let G̃_1, …, G̃_K and G_1, . . .
, G_M be independent complex Ginibre matrices, where G̃_j has size (n + ν̃_j) × (n + ν̃_{j−1}) and G_j has size (n + ν_j) × (n + ν_{j−1}), with ν̃_1, …, ν̃_{K−1} ≥ 0, ν_1, …, ν_M ≥ 0 and ν_0 = ν̃_0 = ν̃_K = 0. Then the squared singular values y_1, …, y_n of Y given in (3.4) with K ≥ 1 have a joint probability density function proportional to

  ∆(y) det[ w_{k−1}(y_j) ]_{j,k=1}^n  (3.9)

where

  w_k(y) = G^{M,K}_{K,M}( −ν̃_K − n, …, −ν̃_2 − n, −ν̃_1 − n − k ; ν_M, …, ν_2, ν_1 | y ).  (3.10)

3.3. X is a truncation of a random unitary matrix

As a third application we consider a new example, where we start from a matrix X which is a truncated unitary matrix. Let U be an l × l Haar distributed unitary matrix and let X be the (n + ν_1) × n upper left block of U, where ν_1 ≥ 0 and l ≥ 2n + ν_1. Then the squared singular values of X are in (0, 1) with a joint p.d.f. that is proportional to (see e.g. [22, Proposition 2.1])

  ∏_{1≤j<k≤n} (x_k − x_j)² ∏_{j=1}^n x_j^{ν_1} (1 − x_j)^{l−2n−ν_1},  (3.11)

which can be written in the form (2.2) with

  f_k(x) = x^{ν_1+k} (1 − x)^{l−2n−ν_1} χ_{(0,1)}(x).  (3.12)

In terms of Meijer G-functions,

  f_k(x) = Γ(l − 2n − ν_1 + 1) G^{1,0}_{1,1}( l − 2n + 1 + k ; ν_1 + k | x ),  (3.13)

which vanishes for x > 1. Then Theorem 2.1 together with (A.3) and (3.13) gives the following.

Corollary 3.4. Let X be the (n + ν_1) × n truncation of a Haar distributed unitary matrix of size l × l with l ≥ 2n + ν_1. Let M ≥ 1 and let Y = G_{M−1} ⋯ G_1 X, where each G_j is a complex Ginibre matrix of size (n + ν_{j+1}) × (n + ν_j), with ν_1, …, ν_M ≥ 0. Then the squared singular values y_1, …, y_n of Y have a joint probability density function proportional to

  ∆(y) det[ w_{k−1}(y_j) ]_{j,k=1}^n  (3.14)

where

  w_k(y) = G^{M,0}_{1,M}( l − 2n + 1 + k ; ν_M, …, ν_2, ν_1 + k | y ),  y > 0.  (3.15)

4. Integral Representations and Hard Edge Scaling Limit

4.1. The kernels K_{ν_1,…,ν_M}

For any set of functions w_0, …, w_{n−1}, the probability density function (1.1) is a polynomial ensemble and we already noted in the introduction that the correlation kernel takes the form

  K_n(x, y) = ∑_{k=0}^{n−1} P_k(x) Q_k(y)  (4.1)

where, for each k = 0, …, n − 1, P_k is a polynomial of degree k and Q_k is in the span of w_0, . . .
, w_k such that

  ∫_0^∞ P_j(x) Q_k(x) dx = δ_{j,k}.  (4.2)

For the case of weight functions (1.2) that are associated with the product of M Ginibre matrices, it was shown in [5] and [25] that the functions P_j and Q_k have contour integral representations, which was used in [25] to derive a double integral representation of the correlation kernel (4.1). Based on this double integral representation the following scaling limit was obtained in [25, Theorem 5.3].

[Figure 1: The contours −1/2 + iR and Σ in the double integral (4.3).]

Theorem 4.1. Let M ≥ 1 and ν_1, …, ν_M ≥ 0 be fixed integers. Then the kernels K_n have the scaling limit

  lim_{n→∞} (1/n) K_n( x/n, y/n ) = K_{ν_1,…,ν_M}(x, y).

The limiting kernel has a double integral representation

  K_{ν_1,…,ν_M}(x, y) = 1/(2πi)² ∫_{−1/2−i∞}^{−1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + ν_j + 1) / Γ(t + ν_j + 1) ] [ sin(πs) / sin(πt) ] x^t y^{−s−1} / (s − t)  (4.3)

where Σ is a contour in { t ∈ C : Re t > −1/2 } encircling the positive real axis, as illustrated in Figure 1.

The kernels (4.3) have the alternative representation in terms of Meijer G-functions

  K_{ν_1,…,ν_M}(x, y) = ∫_0^1 G^{1,0}_{0,M+1}( — ; −ν_0, −ν_1, …, −ν_M | ux ) G^{M,0}_{0,M+1}( — ; ν_1, …, ν_M, ν_0 | uy ) du,  (4.4)

with ν_0 = 0, see also [25].

In this section, we consider the polynomial ensemble (3.14) from Corollary 3.4 that is associated with the product of complex Ginibre random matrices with one truncated unitary matrix. Following [25] and [15] we are able to obtain integral representations for P_k and Q_k in this case as well, and from this a double integral representation for the correlation kernel. While keeping ν_1, …, ν_M fixed and letting l grow at least as fast as 2n, we obtain the limiting kernel K_{ν_1,…,ν_M} at the hard edge also in this case. 
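The monic polynomials P_k in (4.1) are determined by linear moment equations. As a small hedged illustration (our own, not from the paper), consider the simplest case M = 1 with X a Ginibre matrix and ν_1 = 0, where the weights are w_j(x) = x^j e^{−x}; the conditions ∫_0^∞ P_k(x) w_j(x) dx = 0 for j < k then say that P_k is the monic Laguerre polynomial, and the moments ∫_0^∞ x^m e^{−x} dx = m! make the system explicit:

```python
import math
import numpy as np

# Monic P_k with int_0^inf P_k(x) x^j e^{-x} dx = 0 for j = 0, ..., k-1
# (the M = 1, nu_1 = 0 case).  Moments: int x^m e^{-x} dx = m!.
# For k = 2 the solution is the monic Laguerre polynomial x^2 - 4x + 2.
k = 2
mom = [math.factorial(m) for m in range(2 * k + 1)]

# Unknown coefficients c_0, ..., c_{k-1}, with leading coefficient fixed to 1:
# sum_i c_i * mom[i + j] = -mom[k + j]  for j = 0, ..., k-1.
A = np.array([[mom[i + j] for i in range(k)] for j in range(k)], dtype=float)
b = np.array([-mom[k + j] for j in range(k)], dtype=float)
c = np.linalg.solve(A, b)
coeffs = list(c) + [1.0]          # P_2(x) = c_0 + c_1 x + x^2
print(coeffs)                     # approximately [2.0, -4.0, 1.0]
```

The same moment-matrix construction works for any polynomial ensemble; the contour integral formulas of Propositions 4.2 and 4.3 below are the closed-form counterpart for the weights (3.15).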
4.2. The functions Q_k and P_k

So in the rest of this section, we assume that we work with the functions w_k in (3.15). The corresponding polynomials P_k are such that P_k is monic of degree k with

  ∫_0^∞ P_k(x) w_j(x) dx = 0  for j = 0, …, k − 1,  (4.5)

and the functions Q_k satisfy

  ∫_0^∞ x^j Q_k(x) dx = δ_{j,k}  for j = 0, …, k,  (4.6)

with Q_k in the linear span of w_0, …, w_k. The polynomials P_k and functions Q_k have the following integral representations.

Proposition 4.2. We have

  Q_k(x) = 1/(2πi) ∫_{c−i∞}^{c+i∞} q_k(s) [ ∏_{j=1}^M Γ(s + ν_j) / Γ(s + l − 2n + 1) ] x^{−s} ds  (4.7)

where c > 0 and

  q_k(s) = [ Γ(l − 2n + 2k + 2) / ∏_{j=0}^M Γ(k + 1 + ν_j) ] (s − k)_k / (s + l − 2n + 1)_k.  (4.8)

Recall that ν_0 = 0 and that the Pochhammer symbol is given by

  (a)_k = a (a + 1) ⋯ (a + k − 1) = Γ(a + k) / Γ(a).

Proof. The functions w_k from (3.15) have the integral representation

  w_k(x) = 1/(2πi) ∫_{c−i∞}^{c+i∞} [ (s + ν_1)_k / (s + l − 2n + 1)_k ] [ ∏_{j=1}^M Γ(s + ν_j) / Γ(s + l − 2n + 1) ] x^{−s} ds,  x > 0.  (4.9)

Then it is easy to see that the linear span of w_0, …, w_k consists of all functions as in the right-hand side of (4.7) with q_k being a rational function in s such that (s + l − 2n + 1)_k q_k(s) is a polynomial of degree ≤ k. Since (4.8) is of that type, we see that Q_k belongs to the linear span of w_0, …, w_k.

By the properties of the Mellin transform, we have from (4.7) that

  ∫_0^∞ x^{s−1} Q_k(x) dx = q_k(s) ∏_{j=1}^M Γ(s + ν_j) / Γ(s + l − 2n + 1),  Re s > 0.  (4.10)

Since q_k has zeros at s = 1, …, k we find from (4.10) that

  ∫_0^∞ x^j Q_k(x) dx = 0,  for j = 0, …, k − 1.

The prefactor in (4.8) has been chosen so that

  ∫_0^∞ x^k Q_k(x) dx = 1,

as can be readily verified from (4.8) and (4.10). Thus (4.6) holds and the proposition is proved.

Notice that we can rewrite Q_k as a Meijer G-function,

  Q_k(x) = [ Γ(l − 2n + 2k + 2) / ∏_{i=0}^M Γ(k + 1 + ν_i) ] G^{M+1,0}_{2,M+1}( −k, l − 2n + k + 1 ; ν_0, ν_1, …, ν_M | x ).
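Two elementary ingredients of Proposition 4.2 are easy to check directly (a small self-contained sketch, not from the paper): the Pochhammer identity (a)_k = Γ(a + k)/Γ(a), and the fact that the factor (s − k)_k in (4.8) vanishes at s = 1, …, k, which is exactly what kills the Mellin moments ∫ x^j Q_k(x) dx for j = 0, …, k − 1 in (4.10).

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1).
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

# (a)_k = Gamma(a+k) / Gamma(a)
a, k = 2.5, 4
print(poch(a, k), math.gamma(a + k) / math.gamma(a))  # equal values

# (s - k)_k vanishes for s = 1, ..., k (here k = 3).
print([poch(s - 3, 3) for s in range(1, 4)])          # all zeros
```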
There is a similar integral representation for P_k.

Proposition 4.3. We have

  P_k(x) = [ ∏_{j=0}^M Γ(k + 1 + ν_j) / Γ(l − 2n + 2k + 1) ] 1/(2πi) ∮_{Σ_k} [ Γ(t − k) Γ(t + l − 2n + k + 1) / ∏_{j=0}^M Γ(t + 1 + ν_j) ] x^t dt  (4.11)

where Σ_k is a closed contour encircling the interval [0, k] once in the positive direction and such that Re t > −1 for t ∈ Σ_k.

Proof. The integrand in the right-hand side of (4.11) is a meromorphic function with simple poles at 0, 1, …, k inside the contour. Since Re t > −1 for t ∈ Σ_k, the other poles are outside. Hence, by the residue theorem the right-hand side of (4.11) defines a polynomial of degree at most k, and in fact

  P_k(x) = [ ∏_{j=0}^M Γ(k + 1 + ν_j) / Γ(l − 2n + 2k + 1) ] ∑_{t=0}^k Res_t ( Γ(t − k) Γ(t + l − 2n + k + 1) / ∏_{j=0}^M Γ(t + 1 + ν_j) ) x^t
       = ∑_{t=0}^k [ (−1)^{k−t} / (k − t)! ] [ Γ(l − 2n + k + t + 1) / Γ(l − 2n + 2k + 1) ] ∏_{j=0}^M [ Γ(k + 1 + ν_j) / Γ(t + 1 + ν_j) ] x^t.  (4.12)

The leading coefficient is 1 and so P_k is indeed a monic polynomial of degree k.

To check the orthogonality conditions (4.5), we recall that by (4.9) and the properties of the Mellin transform

  ∫_0^∞ x^{s−1} w_j(x) dx = (s + ν_1)_j ∏_{i=1}^M Γ(s + ν_i) / [ (s + l − 2n + 1)_j Γ(s + l − 2n + 1) ].

And so, if we use (4.11) and interchange the integrals,

  ∫_0^∞ P_k(x) w_j(x) dx = c_k/(2πi) ∮_{Σ_k} [ Γ(t − k) Γ(t + l − 2n + k + 1) / ∏_{i=0}^M Γ(t + 1 + ν_i) ] [ (t + 1 + ν_1)_j ∏_{i=1}^M Γ(t + 1 + ν_i) / ( (t + l − 2n + 2)_j Γ(t + l − 2n + 2) ) ] dt  (4.13)

with c_k = ∏_{i=0}^M Γ(k + 1 + ν_i) / Γ(l − 2n + 2k + 1). In case j = 0, …, k − 1, the integrand in (4.13) simplifies to

  (t + 1 + ν_1)_j (t + l − 2n + j + 2)_{k−j−1} / (t − k)_{k+1}

with poles at t = 0, 1, …, k only, and these are inside the contour Σ_k. In addition it is O(t^{−2}) as t → ∞. Thus by moving the contour Σ_k to infinity, we see that (4.13) vanishes for j = 0, 1, . . .
, k − 1, and we obtain (4.5).

Formula (4.12) shows that P_k is a hypergeometric polynomial, namely

  P_k(x) = (−1)^k [ ∏_{i=1}^M Γ(k + 1 + ν_i) / Γ(ν_i + 1) ] [ Γ(l − 2n + k + 1) / Γ(l − 2n + 2k + 1) ] ₂F_M( −k, l − 2n + k + 1 ; 1 + ν_1, …, 1 + ν_M | x ).

Finally, we notice that P_k can also be identified as a Meijer G-function,

  P_k(x) = − [ ∏_{i=0}^M Γ(k + 1 + ν_i) / Γ(l − 2n + 2k + 1) ] G^{0,2}_{2,M+1}( k + 1, −(l − 2n + k) ; −ν_0, −ν_1, …, −ν_M | x ).

4.3. A double integral representation for the correlation kernel

We proceed to obtain a double integral representation for the kernel K_n.

Proposition 4.4. We have

  K_n(x, y) = 1/(2πi)² ∫_{−1/2−i∞}^{−1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + 1 + ν_j) / Γ(t + 1 + ν_j) ] [ Γ(t + 1 − n) Γ(t + l − n + 1) / ( Γ(s + 1 − n) Γ(s + l − n + 1) ) ] x^t y^{−s−1} / (s − t)  (4.14)

where Σ is a closed contour encircling 0, 1, …, n − 1 once in the positive direction such that Re t > −1/2 for t ∈ Σ.

Proof. The correlation kernel (4.1) can be written as

  K_n(x, y) = 1/(2πi)² ∫_{c−i∞}^{c+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + ν_j) / Γ(t + 1 + ν_j) ] ∑_{k=0}^{n−1} (l − 2n + 2k + 1) [ Γ(t − k) Γ(t + l − 2n + k + 1) / ( Γ(s − k) Γ(s + l − 2n + k + 1) ) ] x^t y^{−s}

where we used the representations (4.11) and (4.7) for P_k(x) and Q_k(y). By using the functional equation Γ(z + 1) = z Γ(z), one can see that

  (s − t − 1)(l − 2n + 2k + 1) Γ(t − k) Γ(t + l − 2n + k + 1) / ( Γ(s − k) Γ(s + l − 2n + k + 1) )
    = Γ(t − k) Γ(t + l − 2n + k + 2) / ( Γ(s − k − 1) Γ(s + l − 2n + k + 1) ) − Γ(t − k + 1) Γ(t + l − 2n + k + 1) / ( Γ(s − k) Γ(s + l − 2n + k) )

which means that we have a telescoping sum

  (s − t − 1) ∑_{k=0}^{n−1} (l − 2n + 2k + 1) Γ(t − k) Γ(t + l − 2n + k + 1) / ( Γ(s − k) Γ(s + l − 2n + k + 1) )
    = Γ(t − n + 1) Γ(t + l − n + 1) / ( Γ(s − n) Γ(s + l − n) ) − Γ(t + 1) Γ(t + l − 2n + 1) / ( Γ(s) Γ(s + l − 2n) ).  (4.15)

By taking c = 1/2 and Σ encircling 0, 1, …, n − 1 such that Re(t) > −1/2 for t ∈ Σ, we ensure that s − t − 1 ≠ 0 whenever s ∈ c + iR and t ∈ Σ. 
And so we obtain that

  K_n(x, y) = 1/(2πi)² ∫_{1/2−i∞}^{1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + ν_j) / Γ(t + 1 + ν_j) ] [ Γ(t − n + 1) Γ(t + l − n + 1) / ( Γ(s − n) Γ(s + l − n) ) ] x^t y^{−s} / (s − t − 1)
    − 1/(2πi)² ∫_{1/2−i∞}^{1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + ν_j) / Γ(t + 1 + ν_j) ] [ Γ(t + 1) Γ(t + l − 2n + 1) / ( Γ(s) Γ(s + l − 2n) ) ] x^t y^{−s} / (s − t − 1).

The integrand of the second double integral has no singularities inside Σ and hence the t-integral vanishes by Cauchy's theorem. By finally making the change of variables s ↦ s + 1 in the first double integral, we obtain (4.14).

Remark 4.5. The proofs in subsections 4.2 and 4.3 are modelled after those in [25], but there are slight differences in all proofs. We want to emphasize one difference which has to do with the telescoping sum (4.15). The left-hand side of (4.15) has the factors l − 2n + 2k + 1, which come from the product of the prefactors in (4.8) and (4.11). The corresponding prefactors in [25, formulas (3.2) and (3.8)] are each other's inverses, and as a consequence there is no such factor. However, it is remarkable that the factors l − 2n + 2k + 1 are actually necessary for the telescoping sum (4.15) to hold.

We notice that we can rewrite the kernel in terms of Meijer G-functions:

Corollary 4.6. We have

  K_n(x, y) = ∫_0^1 G^{0,2}_{2,M+1}( n, −(l − n) ; −ν_0, …, −ν_M | ux ) G^{M+1,0}_{2,M+1}( −n, l − n ; ν_0, …, ν_M | uy ) du.  (4.16)

Proof. Since

  x^t y^{−s−1} / (s − t) = − ∫_0^1 (ux)^t (uy)^{−s−1} du,

the kernel (4.14) can be rewritten as

  K_n(x, y) = − ∫_0^1 ( 1/(2πi) ∮_Σ [ Γ(t + 1 − n) Γ(t + l − n + 1) / ∏_{j=0}^M Γ(t + 1 + ν_j) ] (ux)^t dt ) × ( 1/(2πi) ∫_{−1/2−i∞}^{−1/2+i∞} [ ∏_{j=0}^M Γ(s + 1 + ν_j) / ( Γ(s + 1 − n) Γ(s + l − n + 1) ) ] (uy)^{−s−1} ds ) du.

By the definition of the Meijer G-function and making the changes of variables t ↦ −t and s ↦ s − 1, we obtain the identity (4.16).

4.4. The hard edge scaling limit

Using the integral representation for K_n, we can derive the scaling limit at the hard edge.

Theorem 4.7. 
With ν_1, …, ν_M fixed and with l = l(n) growing at least as fast as 2n, we have

  lim_{n→∞} 1/((l − n)n) K_n( x/((l − n)n), y/((l − n)n) ) = K_{ν_1,…,ν_M}(x, y)  (4.17)

uniformly for x, y in compact subsets of the positive real axis, where

  K_{ν_1,…,ν_M}(x, y) = 1/(2πi)² ∫_{−1/2−i∞}^{−1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + 1 + ν_j) / Γ(t + 1 + ν_j) ] [ sin(πs) / sin(πt) ] x^t y^{−s−1} / (s − t).  (4.18)

The contour Σ starts at +∞ in the upper half plane and returns to +∞ in the lower half plane, encircling the positive real axis such that Re t > −1/2 for t ∈ Σ (see also Figure 1).

Proof. By the identity (4.14), we have

  1/((l − n)n) K_n( x/((l − n)n), y/((l − n)n) )
    = 1/(2πi)² ∫_{−1/2−i∞}^{−1/2+i∞} ds ∮_Σ dt ∏_{j=0}^M [ Γ(s + 1 + ν_j) / Γ(t + 1 + ν_j) ] [ Γ(t + 1 − n) Γ(t + l − n + 1) / ( Γ(s + 1 − n) Γ(s + l − n + 1) ) ] x^t y^{−s−1} / (s − t) × ((l − n)n)^{s−t}.

Euler's reflection formula

  Γ(z) Γ(1 − z) = π / sin(πz)

shows that

  Γ(t − n + 1) / Γ(s − n + 1) = [ Γ(n − s) / Γ(n − t) ] [ sin(πs) / sin(πt) ].

Furthermore, as n → ∞ we also know [30, formula 5.11.13] that

  Γ(n − s) / Γ(n − t) = n^{t−s} ( 1 + O(n^{−1}) )

and similarly

  Γ(t + l − n + 1) / Γ(s + l − n + 1) = (l − n)^{t−s} ( 1 + O((l − n)^{−1}) ).

Hence, if we deform the contour Σ to a two-sided unbounded contour as in Figure 1 and apply the identities above, we immediately obtain the identity (4.17), provided that we can take the limit inside the integral. This can be justified by using the dominated convergence theorem (see [25, Theorem 5.3] for details).

5. Borodin Biorthogonal Ensembles

In this final section we consider the biorthogonal ensembles (1.5) that were studied by Borodin in [9], see also [11, 28]. These are determinantal point processes on [0, ∞), whose correlation kernels are expected to have interesting scaling limits at the hard edge x = 0. 
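The second Vandermonde-type factor in (1.5) satisfies ∏_{j<k} (x_k^θ − x_j^θ) = det[ x_j^{θ(k−1)} ]_{j,k=1}^n, so (1.5) is a polynomial ensemble of the form (1.3) with f_k(x) = x^{θk} w(x). This determinant identity is easy to confirm numerically (our own sketch; points and θ are arbitrary):

```python
import itertools
import numpy as np

# prod_{j<k} (x_k^theta - x_j^theta) is the Vandermonde determinant in the
# variables x_j^theta, i.e. det[ x_j^{theta*(k-1)} ]_{j,k=1}^n.
rng = np.random.default_rng(1)
n, theta = 4, 2.5
x = np.sort(rng.uniform(0.1, 2.0, size=n))

prod = 1.0
for j, k in itertools.combinations(range(n), 2):   # pairs with j < k
    prod *= x[k] ** theta - x[j] ** theta

V = np.array([[x[j] ** (theta * k) for k in range(n)] for j in range(n)])
det = np.linalg.det(V)
print(prod, det)   # the two values agree up to rounding
```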
This was proved in [9] for the cases where w is either a special Jacobi weight

  w(x) = x^α for 0 < x ≤ 1, and w(x) = 0 for x > 1, with α > −1,  (5.1)

or a Laguerre weight

  w(x) = x^α e^{−x},  x > 0,  α > −1.  (5.2)

In both cases it was shown that a scaling limit at the origin leads to the following correlation kernel that depends on α and θ:

  K^{(α,θ)}(x, y) = θ x^α ∫_0^1 J_{(α+1)/θ, 1/θ}(xu) J_{α+1, θ}( (yu)^θ ) u^α du,  (5.3)

where J_{a,b} is Wright's generalization of the Bessel function, given by

  J_{a,b}(x) = ∑_{j=0}^∞ (−x)^j / ( j! Γ(a + jb) ).  (5.4)

The kernels (5.3) are related to the Meijer G-kernels K_{ν_1,…,ν_M} in case θ or 1/θ is an integer. This is our final result.

Theorem 5.1. Let M ≥ 1 be an integer.

(a) Then we have

  M^M K^{(α,1/M)}( M^M x, M^M y ) = (x/y)^α K_{ν_1,…,ν_M}(x, y)  (5.5)

with parameters

  ν_j = α + (j − 1)/M,  j = 1, …, M.  (5.6)

(b) We also have

  x^{1/M − 1} K^{(α,M)}( M x^{1/M}, M y^{1/M} ) = K_{ν̃_1,…,ν̃_M}(y, x)  (5.7)

with parameters

  ν̃_j = (α + j)/M − 1,  j = 1, …, M.  (5.8)

The parameters (5.6) and (5.8) come in an arithmetic progression with step size 1/M, and therefore they cannot all be integers if M ≥ 2. This is in contrast to the limiting kernels obtained from the products of random matrices, where the ν_j are necessarily integers.

Proof. If b is a rational number, then (5.4) can be expressed as a Meijer G-function, see [17, formula (22)] and [18, formula (13)]. For the case when b = M with M ≥ 1 an integer,

  J_{a,M}(x) = (2π)^{(M−1)/2} M^{1/2−a} 1/(2πi) ∫_{c−i∞}^{c+i∞} [ Γ(s) / ∏_{j=0}^{M−1} Γ( (a + j)/M − s ) ] (x/M^M)^{−s} ds
    = (2π)^{(M−1)/2} M^{1/2−a} G^{1,0}_{0,M+1}( — ; 0, (M − a)/M, (M − 1 − a)/M, …, (1 − a)/M | x/M^M )  (5.9)

and for b = 1/M,

  J_{a,1/M}(x) = (2π)^{−(M−1)/2} M^{1/2} 1/(2πi) ∮_Σ [ ∏_{k=0}^{M−1} Γ(t + k/M) / Γ(a − t) ] (x^M/M^M)^{−t} dt
    = (2π)^{−(M−1)/2} M^{1/2} G^{M,0}_{0,M+1}( — ; 0, 1/M, . . .
, (M − 1)/M, 1 − a | x^M/M^M )  (5.10)

where Σ is a contour encircling the negative real axis.

Inserting (5.9) and (5.10) into (5.3), we obtain for θ = 1/M with M a positive integer

  K^{(α,1/M)}(x, y) = M^{−(α+1)M} x^α ∫_0^1 G^{1,0}_{0,M+1}( — ; 0, −α, −α − 1/M, …, −α − (M − 1)/M | ux/M^M ) × G^{M,0}_{0,M+1}( — ; 0, 1/M, …, (M − 1)/M, −α | uy/M^M ) u^α du,  (5.11)

and after a rescaling of variables x ↦ M^M x, y ↦ M^M y,

  M^M K^{(α,1/M)}( M^M x, M^M y ) = x^α ∫_0^1 G^{1,0}_{0,M+1}( — ; 0, −α, −α − 1/M, …, −α − (M − 1)/M | ux ) × G^{M,0}_{0,M+1}( — ; 0, 1/M, …, (M − 1)/M, −α | uy ) u^α du.  (5.12)

Then we realize that for a Meijer G-function G(z), we have that z^α G(z) is again a Meijer G-function with parameters shifted by α, see formula (A.4). Thus by (5.12)

  M^M K^{(α,1/M)}( M^M x, M^M y ) = (x/y)^α ∫_0^1 G^{1,0}_{0,M+1}( — ; 0, −α, −α − 1/M, …, −α − (M − 1)/M | ux ) × G^{M,0}_{0,M+1}( — ; α, α + 1/M, …, α + (M − 1)/M, 0 | uy ) du

which proves part (a) of the theorem because of (4.4).

Part (b) follows in a similar way. Alternatively, it can be obtained from part (a) by means of the formula

  (1/θ) x^{1/θ − 1} K^{(α,θ)}( x^{1/θ}, y^{1/θ} ) = (x/y)^{α′} K^{(α′,1/θ)}(y, x),  α′ = (α + 1)/θ − 1.

Acknowledgements

We thank Peter Forrester for useful discussions and for providing us with a copy of [16]. The authors are supported by KU Leuven Research Grant OT/12/073 and the Belgian Interuniversity Attraction Pole P07/18. The first author is also supported by FWO Flanders projects G.0641.11 and G.0934.13, and by Grant No. MTM2011-28952-C02 of the Spanish Ministry of Science and Innovation.

A.
A. The Meijer G-function

For ease of reference, we collect in this appendix the definition and properties of the Meijer G-function that are used in this paper. By definition, the Meijer G-function is given by the following contour integral:

G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{smallmatrix} \,\Big|\, z\right) = \frac{1}{2\pi i} \int_L \frac{\prod_{j=1}^{m} \Gamma(s+b_j)\, \prod_{j=1}^{n} \Gamma(1-a_j-s)}{\prod_{j=m+1}^{q} \Gamma(1-b_j-s)\, \prod_{j=n+1}^{p} \Gamma(s+a_j)}\, z^{-s}\, ds, \tag{A.1}

where the branch cut of z^{-s} is taken along the negative real axis. Furthermore, it is also assumed that

• m, n, p, q are integers such that 0 \le n \le p and 0 \le m \le q;

• the real (or complex) parameters a_1,\dots,a_p and b_1,\dots,b_q satisfy the conditions

a_k - b_j \ne 1, 2, 3, \dots \quad \text{for } k = 1,\dots,n \text{ and } j = 1,\dots,m.

That is, none of the poles of \Gamma(b_j+s) coincides with any of the poles of \Gamma(1-a_k-s).

The contour L is such that all the poles of \Gamma(s+b_j) are on the left of the path, while the poles of \Gamma(1-a_k-s) are on the right of the path. In typical situations the contour is a vertical line c + i\mathbb{R} with c > 0.

The Mellin transform of a function w on [0,\infty) is

(\mathcal{M}w)(s) = \int_0^\infty x^{s-1} w(x)\, dx,

defined for s in a strip a < \mathrm{Re}\, s < b where the integral converges. The inverse Mellin transform is

w(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} (\mathcal{M}w)(s)\, x^{-s}\, ds,

where a < c < b. Thus for a Meijer G-function which is defined and integrable on the positive half-line we have

\int_0^\infty x^{s-1}\, G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{smallmatrix} \,\Big|\, x\right) dx = \frac{\prod_{j=1}^{m} \Gamma(s+b_j)\, \prod_{j=1}^{n} \Gamma(1-a_j-s)}{\prod_{j=m+1}^{q} \Gamma(1-b_j-s)\, \prod_{j=n+1}^{p} \Gamma(s+a_j)}. \tag{A.2}

The Mellin convolution of two Meijer G-functions is again a Meijer G-function. A special case of this is

\int_0^\infty x^{\nu-1} e^{-x}\, G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{smallmatrix} \,\Big|\, \frac{y}{x}\right) dx = G^{m+1,n}_{p,q+1}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ \nu,\, b_1,\dots,b_q \end{smallmatrix} \,\Big|\, y\right), \tag{A.3}

provided that the integral in the left-hand side converges. Further identities are

x^\alpha\, G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{smallmatrix} \,\Big|\, x\right) = G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1+\alpha,\dots,a_p+\alpha \\ b_1+\alpha,\dots,b_q+\alpha \end{smallmatrix} \,\Big|\, x\right) \tag{A.4}

and

G^{m,n}_{p,q}\!\left(\begin{smallmatrix} a_1,\dots,a_p \\ b_1,\dots,b_q \end{smallmatrix} \,\Big|\, x^{-1}\right) = G^{n,m}_{q,p}\!\left(\begin{smallmatrix} 1-b_1,\dots,1-b_q \\ 1-a_1,\dots,1-a_p \end{smallmatrix} \,\Big|\, x\right). \tag{A.5}

For more details, we refer the reader to [7, 26].

References

[1] K. Adhikari, N.K. Reddy, T.L. Reddy, and K. Saha, Determinantal point processes in the plane from products of random matrices, preprint arXiv:1308.6817.

[2] G. Akemann and Z. Burda, Universal microscopic correlation functions for products of independent Ginibre matrices, J. Phys. A 45 (2012), 465201, 18 pp.

[3] G. Akemann, Z. Burda, M. Kieburg, and T. Nagao, Universal microscopic correlation functions for products of truncated unitary matrices, preprint arXiv:1310.6395.

[4] G. Akemann, M. Kieburg, and L. Wei, Singular value correlation functions for products of Wishart random matrices, J. Phys. A 46 (2013), 275205, 22 pp.

[5] G. Akemann, J.R. Ipsen, and M. Kieburg, Products of rectangular random matrices: singular values and progressive scattering, Phys. Rev. E 88 (2013), 052118, 13 pp.

[6] J. Baik, G. Ben Arous, and S. Péché, Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices, Ann. Probab. 33 (2005), 1643–1697.

[7] R. Beals and J. Szmigielski, Meijer G-functions: a gentle introduction, Notices Amer. Math. Soc. 60 (2013), 866–872.

[8] M. Bertola, M. Gekhtman, and J. Szmigielski, Cauchy–Laguerre two-matrix model and the Meijer-G random point field, Comm. Math. Phys. 326 (2014), 111–144.

[9] A. Borodin, Biorthogonal ensembles, Nuclear Phys. B 536 (1999), 704–732.

[10] Z. Burda, A. Jarosz, G. Livan, M.A. Nowak, and A.
Świech, Eigenvalues and singular values of products of rectangular Gaussian random matrices, Phys. Rev. E 82 (2010), 061114, 10 pp.

[11] T. Claeys and S. Romano, Biorthogonal ensembles with two-particle interactions, preprint arXiv:1312.2892.

[12] P. Deift and D. Gioev, Random Matrix Theory: Invariant Ensembles and Universality, Courant Lecture Notes in Mathematics 18, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2009.

[13] A. Edelman and N.R. Rao, Random matrix theory, Acta Numerica 14 (2005), 233–297.

[14] P.J. Forrester, Log-gases and Random Matrices, London Mathematical Society Monographs Series, Vol. 34, Princeton University Press, Princeton, NJ, 2010.

[15] P.J. Forrester, Eigenvalue statistics for product complex Wishart matrices, preprint arXiv:1401.2572.

[16] P.J. Forrester and D.-Z. Liu, Raney distributions and random matrix theory, manuscript.

[17] R. Gorenflo, Y. Luchko, and F. Mainardi, Analytical properties and applications of the Wright function, Fract. Calc. Appl. Anal. 2 (1999), 383–414.

[18] R. Gorenflo, Y. Luchko, and F. Mainardi, Wright functions as scale-invariant solutions of the diffusion-wave equation, J. Comput. Appl. Math. 118 (2000), 175–191.

[19] Harish-Chandra, Differential operators on a semisimple Lie algebra, Amer. J. Math. 79 (1957), 87–120.

[20] J.R. Ipsen and M. Kieburg, Weak commutation relations and eigenvalue statistics for products of rectangular random matrices, preprint arXiv:1310.4154.

[21] C. Itzykson and J.-B. Zuber, The planar approximation II, J. Math. Phys. 21 (1980), 411–421.

[22] Probab. Theory Related Fields 144 (2009), 221–246.

[23] M. Krishnapur, Zeros of random analytic functions, Ph.D. thesis, U.C. Berkeley, 2006, preprint arXiv:math/0607504.

[24] W. König, Orthogonal polynomial ensembles in probability theory, Probab. Surv. 2 (2005), 385–447.

[25] A.B.J. Kuijlaars and L. Zhang, Singular values of products of Ginibre random matrices, multiple orthogonal polynomials and hard edge scaling limits, to appear in Comm. Math. Phys.

[26] Y.L. Luke, The Special Functions and their Approximations, Vol. I, Mathematics in Science and Engineering, Vol. 53, Academic Press, New York, 1969.

[27] A.M. Mathai, Jacobians of Matrix Transformations and Functions of Matrix Argument, World Scientific Publishing Co., River Edge, NJ, 1997.

[28] K.A.
Muttalib, Random matrix models with additional interactions, J. Phys. A 28 (1995), L159–L164.

[29] T. Neuschel, Plancherel–Rotach formulae for average characteristic polynomials of products of Ginibre random matrices and the Fuss–Catalan distribution, Random Matrices Theory Appl.

[31] K.A. Penson and K. Życzkowski, Product of Ginibre matrices: Fuss–Catalan and Raney distributions, Phys. Rev. E 83 (2011), 061118, 9 pp.

[32] E. Strahov, Differential equations for singular values of products of random matrices, preprint arXiv:1403.6368.

[33] C.A. Tracy and H. Widom, Level spacing distributions and the Bessel kernel, Comm. Math. Phys. 161 (1994), 289–309.

[34] L. Zhang, A note on the limiting mean distribution for products of two Wishart random matrices, J. Math. Phys. 54 (2013), 083303, 8 pp.

[35] L. Zheng and D.N.C. Tse, Communication on the Grassmann manifold: a geometric approach to the noncoherent multiple-antenna channel,