Permanental processes from products of complex and quaternionic induced Ginibre ensembles
aa r X i v : . [ m a t h - ph ] N ov PERMANENTAL PROCESSES FROM PRODUCTS OF COMPLEX ANDQUATERNIONIC INDUCED GINIBRE ENSEMBLES
GERNOT AKEMANN, JESPER R. IPSEN, AND EUGENE STRAHOV
Abstract.
We consider products of independent random matrices taken from the inducedGinibre ensemble with complex or quaternion elements. The joint densities for the complexeigenvalues of the product matrix can be written down exactly for a product of any fixednumber of matrices and any finite matrix size. We show that the squared absolute values of theeigenvalues form a permanental process, generalising the results of Kostlan and Rider for singlematrices to products of complex and quaternionic matrices. Based on these findings, we canfirst write down exact results and asymptotic expansions for the so-called hole probabilities,that a disc centered at the origin is void of eigenvalues. Second, we compute the asymptoticexpansion for the opposite problem, that a large fraction of complex eigenvalues occupies adisc of fixed radius centered at the origin; this is known as the overcrowding problem. Whilethe expressions for finite matrix size depend on the parameters of the induced ensembles, theasymptotic results agree to leading order with previous results for products of square Ginibrematrices. Introduction
In random matrix theory, the classical Ginibre ensembles given by real ( β = 1), complex( β = 2) and quaternion ( β = 4) matrix elements with Gaussian distribution without furthersymmetry play a central role and enjoy many applications. For recent reviews see [40, 30]and references therein. Such ensembles lead to determinantal and Pfaffian point processeson the complex plane, forming a true Coulomb gas in two dimensions, see e.g. [27] for thisinterpretation.Recently there has been a considerable interest in the distribution of eigenvalues of products of a finite number of independent random matrices of finite size N , taken from the classicalGinibre ensembles. While the density in the limit N → ∞ was previously known, see [17, 33],an exact solution for finite N was lacking. For applications of such products of random matriceswe refer to [18] and references within. In a series of papers the authors and their co-workersshowed that eigenvalues of products of matrices taken from the complex [4] and quaternionic[37] Ginibre ensemble form determinantal or Pfaffian point processes, respectively. In [8] theresults for complex Ginibre matrices were used to compute hole probabilities and overcrowdingprobabilities for this determinantal point process and its infinite analogue. Products of realGinibre matrices were studied by Forrester [28] who calculated the probability that all eigen-values of such a product are real. Products of matrices from other ensembles or from different Key words and phrases.
Non-Hermitian random matrix theory, products of random matrices, permanentalprocesses, induced Ginibre ensembles, hole probabilities, overcrowding, generalised Schur decomposition. ensembles have been considered too, including inverse complex Ginibre matrices by Adhikari,Reddy, Reddy, and Saha [1], and truncated unitary matrices in [9] (the ensemble of truncatedunitary matrices was introduced by ˙Zyczkowski and Sommers [55]). Rectangular matrices [1]as well as most recently products of elliptic Ginibre matrices [48] have been considered, andthe order in which different (square) matrices are multiplied is not important in a weak sense,even at finite N [38]. For mixing Hermitian and non-Hermitian matrices in the macroscopiclimit and the corresponding limiting densities see [18]. Furthermore, the exact results for finite N allowed to study correlation functions on a local scale. Whereas in the bulk and at the edgethese were found to be in the corresponding Ginibre universality class [4, 9] or weak non-unitaryclass [9], at the origin new universality classes labelled by the number of matrices multipliedwere found [4, 9].Although we will focus on the complex eigenvalues, we mention in passing that the distribu-tion of singular values of the product matrix and all correlation functions have been calculatedin [7, 6]. In particular, it was shown that the squared singular values form a biorthogonal en-semble in the sense of Borodin [15], being distributed according to a determinant point process.The limiting correlation kernel at the hard edge of this determinantal point process was studiedby Kuijlaars and Zhang [53, 42]. Their integral representation of this kernel for the product oftwo matrices was found to coincide with that of a Cauchy–Laguerre two-matrix model [12, 13].Very recently, differential equations for gap probabilities for this type of biorthogonal ensembleshave been derived [51].The present paper extends some of the aforementioned results to products of independentrandom matrices taken from induced Ginibre ensembles with complex or quaternion elements.The induced Ginibre ensemble of a single random matrix introduced by Fischmann, Bruzda,Khoruzhenko, Sommers, and ˙Zyczkowski [25] for β = 1 , β = 2 the elliptic Ginibre ensemble with inserted determinants was previously consideredand solved in [2], generalising the corresponding induced Ginibre ensemble. There the focus wason applications to quantum chromodynamics with chemical potential in three dimensions. Fur-thermore, motivated again from quantum chromodynamics the product of two random matricesfrom the induced ensemble was considered for β = 2 and 4 in [5]. There the exact hole proba-bilities and their asymptotic expansions were calculated, which allows us to compare with thesespecial cases. The induced ensembles are of mathematical interest for the following reasons.First, these ensembles are special cases of the non-Hermitian Feinberg–Zee ensembles with aspecific choice of potential that can be exactly solved for finite N . Second, the induced Ginibreensembles provide an explicit realisation of the single ring theorem, namely when the large N limit is taken such that the difference between the long and short side of the rectangular matrixis of the order of N [25]. For a discussion of such ensembles with arbitrary potentials and thesingle ring theorem we refer the reader to Feinberg and Zee [22], Feinberg, Scalettar and Zee[23], Feinberg [24], Guionnet, Krishnapur, and Zeitouni [32] and references therein.The paper is organized as follows. In Section 2 we state all results for the product of n independent random matrices taken from the complex induced Ginibre ensemble at β = 2. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 3
Our starting point is the joint density of eigenvalues from [4, 38] given by a determinantalpoint process on the complex plane with the weight being a Meijer G -function in Theorem 2.1.This result was rigorously established in [1], and its kernel easily follows. We show that theset of squared absolute values of the complex eigenvalues forms a permanental process, thusgeneralising the work of Kostlan [41] for n = 1, and of [8]. This enables us to compute thehole probability that a disc of radius r centered at the origin is empty for N finite. Both forfinite and infinite N we then take the asymptotic limit r → ∞ . For N → ∞ , we establishbounds and compute the leading order asymptotic for the probability that a large number q of eigenvalues lie in a centered disk of fixed radius r known as overcrowding. In Section 3we state the corresponding results for quaternionic matrix elements at β = 4. We first give arigorous proof for the joint density established in [37]. It uses the technique of differential formssimilar to that presented by Hough, Krishnapur, Peres and Vir´ag [34] in the context of theclassical complex Ginibre ensemble of single matrices. Also for β = 4, we find a permanentalprocess for the moduli generalising the work of Rider [47]. The main new result in this sectionis Theorem 3.8, which says that the set of moduli of eigenvalues has the same distribution as aset of independent random variables. Furthermore, these random variables can be described interms of products of independent gamma variables. The corresponding expressions for the holeprobabilities and overcrowding estimates at β = 4 can be obtained from Theorem 3.8. The restof the paper is devoted to proofs: The proofs for the theorems presented in Section 2 are givenin Sections 4 to 6, while the proofs for the theorems presented in Section 3 are given in Sections7 to 9. In Appendix A we recall the higher order terms in the Euler–Maclaurin summationformula that we need in the main text.2. Products of random matrices from the induced complex Ginibre ensemble
Definition of the product matrix.
Let Mat( N, C ) be the set of all N × N matrices M with entries in C . Define a probability measure P N,m on Mat( N, C ) by the formula(2.1) P N,m ( dM ) ≡ Z N,m [det( M ∗ M )] m e − Tr( M ∗ M ) dM, where m ≥ Z N,m is a normalisation constant, and dM is the Lebesguemeasure on Mat( N, C ). This parameter dependent probability measure P N,m is called theinduced complex Ginibre ensemble (with parameter m ) and was studied in detail in [25]. Notethat P N,m is a special case of the non-Hermitian Feinberg–Zee ensemble [22] corresponding tothe potential V ( t ) = t + m log t with t = M ∗ M . If m = 0, then this ensemble is reduced to theclassical Ginibre ensemble of complex random matrices [31].Let n be a positive integer, m , . . . , m n be positive real numbers, and consider the product P n of n independent matrices M , . . . , M n , P n = M M . . . M n . Each matrix M a , a = 1 , . . . , n , is of size N × N , and it is chosen independently from the inducedcomplex Ginibre ensemble with parameter m a . We will refer to P n as to the product of n randommatrices from the induced complex Ginibre ensemble with the parameters m , . . . , m n . Note GERNOT AKEMANN, JESPER R. IPSEN, AND EUGENE STRAHOV that the effect of rectangular matrices can be incorporated into the parameters m , . . . , m n ,see [38].2.2. The joint density of eigenvalues as a determinantal point process.
The first resultconcerns the joint density of eigenvalues of P n . To present this result we use the same notationand definitions for the Meijer G -function as in Luke [44], Section 5.2. Namely, the Meijer G -function is defined there as G m,np,q (cid:18) x (cid:12)(cid:12)(cid:12)(cid:12) a , a , . . . , a p b , b , . . . , b q (cid:19) ≡ πi Z C Q mj =1 Γ( b j − s ) Q nj =1 Γ(1 − a j + s ) Q qj = m +1 Γ(1 − b j + s ) Q pj = n +1 Γ( a j − s ) x s ds. An empty product is to be interpreted as unity, 0 ≤ m ≤ q , 0 ≤ n ≤ p , and the parameters { a k } ( k = 1 , . . . , p ) and { b j } ( j = 1 , . . . , m ) are chosen such that no pole of Γ( b j − s ) coincides withany of the poles of Γ(1 − a k + s ). We also assume that x ∈ C \ { } . The integration contour C goes from − i ∞ to + i ∞ such that all the poles of Γ( b j − s ), j = 1 , . . . , m, lie to the right of theintegration path, and that all of the poles of Γ(1 − a k + s ), k = 1 , . . . , n, lie to the left of thispath. When p = 0, then also n = 0, and we can write the corresponding Meijer G -function as G m, ,q ( x | b , b , . . . , b q ). Here the indices a p are absent and thus omitted. Theorem 2.1.
The vector of (unordered) eigenvalues of P n has the density (with respect to theLebesgue measure on C N ) which can be written as ̺ ( m ,...,m n ) N ( z , . . . , z N ) = C N Y k =1 w ( m ,...,m n ) n ( z k ) Y ≤ i Remark 2.2. 1) Apart from the weight function and normalisation constant this has exactlythe same form as the joint density of the Ginibre ensemble [31] for β = 2, where the complexeigenvalues repel each other through the absolute value squared of a Vandermonde determinant.2) In particular when n = 1 the weight function is given by w ( m ) n =1 ( z ) = | ζ | m e −| z | , and equation(2.2) reduces to the formula for the joint density of eigenvalues of a matrix taken from theinduced Ginibre ensemble, see equation (19) in [25]. For m = 0 we are of course back to theclassical Ginibre ensemble [31].3) When n = 2 we have w ( m ,m ) n =2 ( z ) = 2 | z | m + m ) K m − m (2 | z | ) in terms of the modified Bessel ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 5 (or MacDonald) function, and the joint density can be found e.g. in [5].4) For m = . . . = m n = 0 we obtain from equation (2.2) the joint density of eigenvalues ofa matrix which is a product of n independent complex Ginibre matrices of size N × N . Theeigenvalue distributions for such products were studied in [4, 8]. Formulae (2.2) and (2.3) forthe joint density of eigenvalues generalise that obtained in [4] and were already considered in[1, 38].Recall that a determinantal point process on C with kernel K N ( z, ζ ) and normalisable weightfunction w ( z ) is a point process on C given by ̺ N ( z , . . . , z N ) = 1 Z N N Y k =1 w ( z k ) Y ≤ i The eigenvalues of P n form a determinantal point process in the complex planewith kernel (2.4) K ( m ,...,m n ) N ( z, ζ ) = 1 π n N − X k =0 ( zζ ) k Q na =1 Γ( k + 1 + m a ) with respect to the normalised probability measure (2.5) G n, ,n ( | z | | m , m , . . . , m n ) π Q na =1 Γ( m a + 1) d z. Note that the kernel K ( m ,...,m n ) N ( z, ζ ) defined by equation (2.4) can be written as K ( m ,...,m n ) N ( z, ζ ) = N − X k =0 ϕ k ( z ) ϕ k ( ζ ) , where { ϕ k , ≤ k ≤ N − } is an orthonormal set in L ( C , µ ), with respect to the correspondingmeasure defined by equation (2.5). The orthonormal polynomials { ϕ k , ≤ k ≤ N − } are(2.6) ϕ k ( z ) ≡ (cid:0) h ( m ,...,m n ) k (cid:1) − z k , GERNOT AKEMANN, JESPER R. IPSEN, AND EUGENE STRAHOV where the squared norms of the monic orthogonal polynomials are given by(2.7) h ( m ,...,m n ) k ≡ Z w ( m ,...,m n ) n ( z ) | z | k d z = π n n Y a =1 Γ ( k + 1 + m a ) . Therefore K ( m ,...,m n ) N ( z, ζ ) is the kernel of the projection operator which projects from L ( C , µ )onto the span of { z k : 0 ≤ k ≤ N − } . We thus conclude that the eigenvalues of P n form adeterminantal projection process. We call this process the generalised induced finite- N complexGinibre ensemble with parameters m , m , . . . , m n . Limit of large matrices. When N → ∞ , then the finite- N process converges (in distribu-tion) to some limiting determinantal process, with respect to the same probability measure (2.5).A note of caution is in place here. As was pointed out in [4] there are several possibilities totake the large N limit. One possibility is to rescale the matrix elements of the induced Ginibreensemble eq. (2.1) to obtain an N dependent distribution ∼ exp[ − N Tr( M ∗ M )]. In that casethe limit of the spectral density ̺ ( m ,...,m n )1 ( z ) has compact support, and one could study localeigenvalue correlations a) at the edge of support, b) in the bulk and c) at the origin, each leadingto a different limiting kernel. Because we are only interested in the hole probability we willnot follow this route and keep the induced Ginibre ensemble eq. (2.1) N independent. In thelimit N → ∞ the edge of support will thus go to infinity, and without further rescaling we willalways stay in case c), the origin limit.To write the correlation kernel of this limiting determinantal point process explicitly, recallthat a generalised series with an arbitrary number of numerator and denominator parametersis defined by p F q (cid:18) α , α , . . . , α p β , β , . . . , β q (cid:12)(cid:12)(cid:12)(cid:12) z (cid:19) = ∞ X k =0 ( α ) k · · · ( α p ) k ( β ) k · · · ( β q ) k z k k ! , see Luke [44], Section 3.2. The limiting determinantal process at the origin has the kernel K ( m ,...,m n ) ∞ ( z, ζ ) = 1 π n ∞ X k =0 ( zζ ) k Q na =1 Γ( k + 1 + m a )= 1 π n Q na =1 Γ(1 + m a ) F n (cid:18) 11 + m , m , . . . , m n (cid:12)(cid:12)(cid:12)(cid:12) zζ (cid:19) . (2.8)We call the corresponding limiting determinantal process the generalised induced infinite com-plex Ginibre ensemble with parameters m , m , . . . , m n . The joint density of moduli of the eigenvalues as a permanental process. Assumethat z , z , . . . , z N are random complex variables. If their joint density with respect to theLebesgue measure on C N is given by(2.9) ̺ N ( z , . . . , z N ) = 1 Z N N Y k =1 w ( | z k | ) Y ≤ i Assume that z , z , . . . , z N are random complex variables. Set z i = r i e iθ , for i = 1 , . . . , N . Assume that the joint probability density function of the (unordered) randomcomplex variables z , z , . . . , z N with respect to the Lebesgue measure on C N is given by theformula (2.9) where w ( r k ) : R ≥ → R ≥ is a normalisable weight function. Then it holds that (2.10) Z N = N ! N Y k =1 h k − , h k = π Z ∞ y k w ( √ y ) dy, and the joint density of the (unordered) absolute values r , r , . . . , r N is given by a permanent N Y j =1 Z π dθ j ̺ N ( z , . . . , z N ) = (2 π ) N Z N per h r j − i i Ni,j =1 N Y j =1 w ( r j ) . Moreover, the set of absolute values { r i } i =1 ,...,N has the same distribution as the set { R i } i =1 ,...,N ,where the random variables R , R , . . . R N are independent, and for each k (1 ≤ k ≤ N ) therandom variable ( R k ) has the density q k ( y ) = ( y k − w ( √ y ) h k − , y ≥ , , y < . In order to exploit Theorem 2.4 for the description of absolute values of eigenvalues of theproduct matrix P n we need the following result. Recall that gamma variables Gamma( k, f k ( x ) = (cid:26) k ) x k − e − x , x ≥ , , x < . Proposition 2.5. Let x , x , . . . , x n be n independent gamma variables having density functions f b k ( x k ) = ( b k ) x b k − k e − x k , x k ≥ , , x k < , where b k > , ≤ k ≤ n . Then the probability density function of the product variable z = x x . . . x n is a Meijer G -function multiplied by a normalizing constant, i.e. g ( b ,...,b n ) ( z ) = G n, ,n ( z | b − , b − , . . . , b n − Q ni =1 Γ( b i ) . The proof of Proposition 2.5 can be found in Springer and Thompson [49], Section 3 (notehowever, the misprint in the denominator there). Starting from formula (2.2) for the jointdensity of eigenvalues of the product matrix P n , and applying Theorem 2.4 together withProposition 2.5 we obtain GERNOT AKEMANN, JESPER R. IPSEN, AND EUGENE STRAHOV Theorem 2.6. Let P n be a product of n independent random matrices from the induced complexGinibre ensemble. The set of absolute values of eigenvalues of P n has the same distribution asa set of independent random variables (cid:8) R ( m ,...,m n )1 , R ( m ,...,m n )2 , . . . , R ( m ,...,m n ) N (cid:9) , and for each k (1 ≤ k ≤ N ) the random variable ( R ( m ,...,m n ) k ) has the same distribution asthe product of n independent gamma variables Gamma( k + m , , . . . , Gamma( k + m n , .Moreover, the random variable ( R ( m ,...,m n ) k ) has the density function given by the formula (2.12) g ( m ,...,m n ) k ( x ) = G n, ,n ( x | k + m − , . . . , k + m n − Q na =1 Γ( k + m a ) , x ≥ , , x < . The result of Theorem 2.6 generalises our previous results [8], see Theorem 3.1 and 3.2. Therethe authors considered the case when products of matrices are taken form the same classicalcomplex Ginibre ensemble. Theorem 2.6 describes distribution of absolute values of eigenvaluesin the case when the matrices involved in the product are taken from the induced complexGinibre ensembles with different parameters m j .2.5. Hole probabilities in the generalised induced complex Ginibre ensemble. Denoteby N ( m ,...,m n ) GG ( r ; N ) the number of points of the generalised finite- N induced complex Ginibreensemble with parameters m , . . . , m n which are located in the disk of radius r centered at theorigin. We are interested to investigate the hole probability, i.e. the probability of the eventthat there are no eigenvalues of P n in the disc of a radius r around the origin. We denote thisprobability by Pr (cid:8) N ( m ,...,m n ) GG ( r ; N ) = 0 (cid:9) . Starting from Theorem 2.6 an exact expression canbe given for this probability in terms of Meijer G -functions: Theorem 2.7. The hole probability Pr (cid:8) N ( m ,...,m n ) GG ( r ; N ) = 0 (cid:9) for the generalised inducedfinite- N complex Ginibre ensemble with parameters m , . . . , m n can be written as Pr n N ( m ,...,m n ) GG ( r ; N ) = 0 o = N Y k =1 G n +1 , ,n +1 (cid:18) r (cid:12)(cid:12)(cid:12)(cid:12) , k + m , . . . , k + m n (cid:19)Q na =1 Γ( k + m a ) . Remark 2.8. As for n = 1, we can write G , , (cid:18) r (cid:12)(cid:12)(cid:12)(cid:12) , k + m (cid:19) = Z ∞ r e − t t k + m − dt = Γ( k + m , r ) . Thus we see that once n = 1 the formula in Theorem 2.7 is reduced toPr n N ( m ) GG ( r ; N ) = 0 o = N Y k =1 Γ( k + m , r )Γ( k + m ) , ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 9 for the induced Ginibre ensemble of a single matrix. The formula just written above forPr (cid:8) N ( n =1 ,m ) GG ( r ; N ) = 0 (cid:9) was found in Fischmann, Bruzda, Khoruzhenko, and Sommers [25].For m = 0 it can be already found in the book of Mehta [45].Theorem 2.7 gives an exact formula for the hole probability in the case the products of n matrices of finite size. In order to obtain the decay of the hole probability as r → ∞ we usethe asymptotic expansion of the Meijer G -function for large values of its argument as given inLuke [44], Section 5.7, Theorem 5. The result is Theorem 2.9. For fixed N and as r → ∞ , we have Pr n N ( m ,...,m n ) GG ( r ; N ) = 0 o = (2 π ) N ( n − n N Q Nk =1 Q na =1 Γ( k + m a ) × exp (cid:26) − nN r n + N (cid:18) N + 2( m + . . . + m n ) − n (cid:19) log r (cid:27) (cid:16) O ( r − n ) (cid:17) . Now we turn to the case of the generalised infinite induced Ginibre ensemble with parameters m , m . . . , m n . Starting from Theorem 2.6 we are able to find upper and low bounds for thehole probability Pr (cid:8) N ( m ,...,m n ) GG ( r ; ∞ ) = 0 (cid:9) , from which we can determine the limiting leadingorder bahaviour. Namely, the following result holds true. Theorem 2.10. A) (Upper bound for the hole probability.) We have (2.13) Pr n N ( m ,...,m n ) GG ( r ; ∞ ) = 0 o ≤ exp n − n r n + n ( m + · · · + m n ) r n + O (log r ) o , as r → ∞ .B) (Lower bound for the hole probability.) We have (2.14) Pr n N ( m ,...,m n ) GG ( r ; ∞ ) = 0 o ≥ exp n − n r n − n r n log r n − n X a =1 ( C ( m a ) − m a + log Γ( m a + 1) + log 4) r n + O (log r ) o , as r → ∞ , where C ( m ) is a constant given by (2.15) C ( m ) = m + 1 − m + 1) − (cid:0) + m (cid:1) log(1 + m ) + 12 Z ∞ P ( x ) dx ( m + x ) . Here P ( x ) is the second Bernoulli periodic function P ( x ) = B ( x − ⌊ x ⌋ ) . with B ( x ) = x − x + and ⌊ x ⌋ denoting the integer part of x .In particular we obtain (2.16) lim r →∞ (cid:18) r n log (cid:16) Pr n N ( m ,...,m n ) GG ( r ; ∞ ) o = 0 (cid:17)(cid:19) = − n . As we can see the first term in the asymptotic expansion of the logarithm of the hole probabil-ity does not depend on the numbers m , . . . , m n characterizing the induced Ginibre ensembles.The statistical quantity defined in the left-hand of equation (2.16) depends only on the numberof matrices in the product and is thus universal. We note that in the special cases n = 1with square [26] and n = 2 with rectangular matrices [5] additional higher order terms in theasymptotic expansion were computed.2.6. Overcrowding estimates. Let us reconsider a disk with a fixed radius r > q points in this discfor the generalised infinite Ginibre ensemble, i.e. Pr (cid:8) N ( m ,...,m n ) ( r ; ∞ ) ≥ q (cid:9) (the probabilityof overcrowding). We are particularly interested in the decay of this probability when q → ∞ .In the context of zeros of Gaussian analytic functions the overcrowding problem was studied inKrishnapur [43]. It was shown in [43] that the probability of the event that in the disc with afixed radius there are more than q zeros of the Gaussian analytic function decays (as q → ∞ ) inthe same way as the corresponding probability for the classical Ginibre ensemble. The fact thatthe absolute values of the eigenvalues of products of random matrices are distributed as productsof independent Gamma variables (see Theorem 2.6) enables us to get upper and lower boundson the probabilities under considerations, and show a different behavior of the overcrowdingprobabilities for products of random matrices than that in the case of the Gaussian analyticfunctions. The result is Theorem 2.11. A) (Lower bound for the overcrowding probability.) As q → ∞ , Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≥ exp n − n q log q + (cid:16) n r − n (cid:17) q − n X a =1 (cid:0) m a + (cid:1) q log q + O ( q ) o . B) (Upper bound for the overcrowding probability.) As q → ∞ , Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≤ exp n − n q log q + (cid:16) n r (cid:17) q + (cid:16) − n X a =1 m a (cid:17) q log q + O ( q ) o . In particular, we conclude that for a fixed radius r > to leading order Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o = exp n − n q log q (1 + o (1)) o , as q → ∞ . Again, the first term in the asymptotic expansion of the logarithm of the overcrowdingprobability does not depend on the numbers m , . . . , m n and is thus universal. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 11 Products of random matrices from the induced quaternion Ginibre ensemble Definition of the product matrix. Denote by H the division ring of quaternions. It isknown that H is isomorphic to the ring H ′ of 2 × H ′ = (cid:26)(cid:18) α − ββ α (cid:19) (cid:12)(cid:12)(cid:12)(cid:12) α, β ∈ C (cid:27) , and that the elements of H ′ can be understood as matrix representations of quaternions.Let Mat ( H , N × N ) denote the collection of all N × N matrices with entries from H , thenthe isomorphism between H and H ′ induces the isomorphism between Mat ( H , N × N ) andMat (cid:0) H ′ , N × N (cid:1) , where Mat (cid:0) H ′ , N × N (cid:1) is the collection of all N × N matrices whose entriesare 2 × H ′ . Thus each matrix, M ∈ Mat (cid:0) H ′ , N × N (cid:1) , can be represented as(3.1) M = ( ˘ M i,j ) Ni,j =1 with ˘ M i,j = M (1) i,j − M (2) i,j M (2) i,j M (1) i,j ! , where 1 ≤ i, j ≤ N , and where M (1) i,j , M (2) i,j ∈ C . Let P qu N,m be a probability measure defined onMat (cid:0) H ′ , N × N (cid:1) by the formula P qu N,m ( dM ) = 1 Z qu N,m [det M ] m e − Tr M ∗ M dM, where m ≥ Z qu N,m is a normalization constant, and dM de-notes the Lebesgue measure on Mat (cid:0) H ′ , N × N (cid:1) . Note that the eigenvalues of a matrix fromMat (cid:0) H ′ , N × N (cid:1) come in complex conjugate pairs, see Section 7.6, hence the density of P qu N,m is non-negative. The probability measure P qu N,m defines an ensemble of random matrices whichis called the induced quaternion Ginibre ensemble (with parameter m ).Let n be a positive integer, m , . . . , m n be positive real numbers, and consider the product P qu n of n independent matrices M , . . . , M n from Mat( H ′ , N × N ), P qu n = M M . . . M n . Each matrix M a ( a = 1 , . . . , n ), is chosen from the induced quaternion Ginibre ensemble (withparameter m a ). We will refer to P qu n as to the product of n random matrices from the inducedquaternion Ginibre ensemble with the parameters m , . . . , m n .3.2. The joint density of eigenvalues as a Pfaffian point process. The joint densityof eigenvalues of a matrix taken from the ensemble of the quaternion Ginibre matrices is wellknown, see Mehta [45], Section 15.2. Theorem 3.1 generalises this result to products of in-dependent matrices taken from the induced quaternion Ginibre ensemble with the parameters m , . . . , m n . Theorem 3.1. The vector of unordered eigenvalues of P qu n has a density (with respect to theLebesgue measure on C N ) which can be written as ̺ ( m ,...,m n ) N, qu ( ζ , . . . , ζ N ) = C N Y k =1 w ( m ,...,m n ) n ( ζ k ) | ζ k − ¯ ζ k | Y ≤ i Suppose that there is a 2 × K ( m ,...,m n ) N ( z, ζ ) such thatfor a general finitely supported function f defined on C we have(3.3) ̺ ( m ,...,m n ) N, qu ( ζ , . . . , ζ N ) N Y k =1 (1 + f ( ζ k )) = r det (cid:16) I + K ( m ,...,m n ) N f (cid:17) , where ̺ ( m ,...,m n ) N, qu ( ζ , . . . , ζ N ) is defined in (3.2), K ( m ,...,m n ) N is the operator associated to thekernel K ( m ,...,m n ) N ( z, ζ ), and f is the operator of multiplication by the function f . Then K ( m ,...,m n ) N ( z, ζ ) is called the correlation kernel of the ensemble defined by (3.2).The explanation of the term correlation kernel can be found in Tracy and Widom [52], § Theorem 3.4. The correlation kernel can be written as K ( m ,...,m n ) N ( z, ζ ) = ( ζ − ζ ) w ( m ,...,m n ) n ( ζ ) S ( m ,...,m n ) N ( z, ζ ) − S ( m ,...,m n ) N ( z, ζ ) S ( m ,...,m n ) N ( z, ζ ) − S ( m ,...,m n ) N ( z, ζ ) ! , where S ( m ,...,m n ) N ( z, ζ ) = 12 π n X ≤ i ≤ j ≤ N − n Y a =1 ( j − i ) Γ (cid:16) m a +2 j +22 (cid:17) Γ (cid:0) m a +2 i +22 (cid:1) Γ ( m a + 2 j + 2) (cid:0) z i ζ j +1 − z i +1 ζ j (cid:1) . Remark 3.5. 1) For n = 1 and m = 0 the formula for the correlation kernel turns into K ( m =0) N ( z, ζ ) = ( ζ − ζ ) e −| ζ | S ( m =0) N ( z, ζ ) − S ( m =0) N ( z, ζ ) S ( m =0) N ( z, ζ ) − S ( m =0) N ( z, ζ ) ! , ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 13 where S ( m =0) N ( z, ζ ) = 12 π X ≤ i ≤ j ≤ N − ( j − i ) j ! i !(2 j + 1)! (cid:0) z i ζ j +1 − z i +1 ζ j (cid:1) . This is the well-known formula for the correlation kernel of the quaternion Ginibre ensemble,see Mehta [45], equation (15.2.24).2) Theorem 3.4 implies that the eigenvalues of P qu n form a Pfaffian point process on the complexplane with a 2 × K ( m ,...,m n ) N ( z, ζ ) by asimple transformation.3) In [37] skew-orthogonal polynomials were used to derive the correlation kernel in the case m = · · · = m n and the result for arbitrary parameters was derived in [38]. However, theapproach given in this paper differs from the aforementioned papers and our proof of Theorem3.4 does not use skew-orthogonal polynomials. Instead, we use a technique similar to that usedby Tracy and Widom [52] to derive correlation kernels for the Gaussian Orthogonal Ensemble,and for the Gaussian Symplectic Ensemble.The point process formed by eigenvalues of P qu n is called the generalised induced finite- N quaternion Ginibre ensemble with parameters m , . . . , m n . When N → ∞ (in the same senseas in Section 2.3) the point process formed by eigenvalues of P qu n converges to some limitingprocess on the complex plane. We call this limiting process the generalised induced infinitequaternion Ginibre ensemble with parameters m , . . . , m n .3.4. The joint density of moduli of the eigenvalues as a permanental process. The-orem 3.6 gives a similar result to that of Theorem 2.4 in the case when the joint density of z , . . . , z N with respect to the Lebesgue measure on C N has the form as in equation (3.2). Theorem 3.6. Assume that z , . . . , z N are random complex variables. Assume that the jointprobability density function of z , . . . , z N with respect to the Lebesgue measure on C N is givenby the formula (3.4) ̺ N, qu ( z , . . . , z N ) = 1 Z qu N N Y k =1 w ( | z k | ) | z k − z k | Y ≤ i Formula (3.6) was first obtained in Rider [47], Section 5, in the context of theclassical quaternion Ginibre ensemble.As in the case of products of matrices from the induced complex Ginibre ensemble (seeSection 2.4) the application of Theorem 3.6 together with Proposition 2.5 leads to an explicitdescription of the distribution of absolute values of the eigenvalues of P qu n . Theorem 3.8. Let ζ , . . . , ζ N be the eigenvalues of P qu n . The set of absolute values of ζ , . . . , ζ N has the same distribution as the set of independent random variables n R ( m ,...,m n )1 , R ( m ,...,m n )2 , . . . , R ( m ,...,m n ) N o , and for each k (1 ≤ k ≤ N ) the random variable (cid:0) R ( m ,...,m n ) k (cid:1) has the same distribution asthe product of n independent gamma variables Gamma(2 k + m , , . . . , Gamma(2 k + m n , .Moreover, the random variable (cid:0) R ( m ,...,m n ) k (cid:1) has a density function given by the formula (3.7) g ( m ,...,m n ) k, qu ( x ) = G n, ,n (cid:0) x (cid:12)(cid:12) k + m − , k + m − , . . . , k + m n − (cid:1)Q na =1 Γ(2 k + m a ) , x ≥ , , x < . The result of Theorem 3.8 should be compared with that of Theorem 2.6. The joint densitiesof eigenvalues of P n and P qu n are given by quite different formulae (see equations (2.2) and (3.2)).In the quaternion case, equation (3.2) implies that the eigenvalues tend to avoid the real axis.However, according to Theorem 3.8 and Theorem 2.6 the absolute values of eigenvalues of P n and P qu n are distributed in a very similar way. This is not surprising, since the different behaviorin the microscopic neighbourhood of the real axis is averaged out due to the integration overthe angles of the eigenvalues. For the product of quaternion Ginibre matrices this was alreadyobserved in [37].3.5. Hole probabilities in the generalised induced quaternion Ginibre ensemble. Denote by N ( m ,...,m n ) GG, qu ( r ; N ) the number of eigenvalues of the random matrix P qu n in the disk ofradius r with its center at the origin. Similar to the case of matrices from the induced complexGinibre ensemble, Theorem 3.8 enables us to give an exact formula for the hole probabilityPr (cid:8) N ( m ,...,m n ) GG, qu ( r ; N ) = 0 (cid:9) , i.e. for the probability of the event that there are no eigenvalues of P qu n in the disc of a radius r with its center at the origin. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 15 Theorem 3.9. The hole probability Pr (cid:8) N ( m ,...,m n ) GG, qu ( r ; N ) = 0 (cid:9) for the generalised inducedfinite- N quaternion Ginibre ensemble with parameters m , . . . , m n can be written as Pr n N ( m ,...,m n ) GG, qu ( r ; N ) = 0 o = N Y k =1 G n +1 , ,n +1 (cid:18) r (cid:12)(cid:12)(cid:12)(cid:12) , k + m , . . . , k + m n (cid:19)Q na =1 Γ(2 k + m a ) . Theorem 3.10 below describes the decay of the hole probability Pr (cid:8) N ( m ,...,m n ) GG, qu ( r ; N ) = 0 (cid:9) as r → ∞ . Theorem 3.10. For fixed N and r → ∞ , we have Pr n N ( m ,...,m n ) GG, qu ( r ; N ) = 0 o = (2 π ) N ( n − n N Q Nk =1 Q na =1 Γ(2 k + m a ) × exp (cid:26) − nN r n + N (cid:18) N + 1 + 2( m + . . . + m n ) − n (cid:19) log r (cid:27) (cid:16) O (cid:16) r − n (cid:17)(cid:17) . Comparing Theorem 2.9 and Theorem 3.10 we see that the first term in the asymptoticexpansions (as r → ∞ ) of the logarithms of the hole probabilities at finite N is the same. Inother words,(3.8)lim r →∞ log h Pr n N ( m ,...,m n ) GG ( r ; N ) = 0 oi r /n = lim r →∞ log h Pr n N ( m ,...,m n ) GG, qu ( r ; N ) = 0 oi r /n = − nN. Theorem 3.11 estimates the decay of the hole probability for the generalised induced infinitequaternion Ginibre ensemble with parameters m , . . . , m n . Theorem 3.11. Denote by Pr (cid:8) N ( m ,...,m n ) GG, qu ( r ; ∞ ) = 0 (cid:9) the probability of the event that thereare no points of the generalised induced infinite quaternion Ginibre ensemble with parameters m , . . . , m n in a disc of radius r with its center at the origin.A) (Upper bound for the hole probability.) We have (3.9) Pr n N ( m ,...,m n ) GG ;qu ( r ; ∞ ) = 0 o ≤ exp n − n r n + r n (cid:16) m + · · · + m n + n (cid:17) + O (log r ) o , as r → ∞ .B) (Lower bound for the hole probability.) We have (3.10) Pr n N ( m ,...,m n ) GG, qu ( r ; ∞ ) = 0 o ≥ exp n − n r n − n r n log r n − n X a =1 (cid:16) C (cid:16) m a − (cid:17) + C (cid:16) m a (cid:17) + log Γ (cid:16) m a + 1 (cid:17) − m a (1 + log 2) − 12 + 32 log 2 (cid:17) r n + O (log r ) o , as r → ∞ , where C ( m ) is defined by equation (2.15). In particular this implies lim r →∞ (cid:18) r n log h Pr n N ( m ,...,m n ) GG, qu ( r ; ∞ ) = 0 oi(cid:19) = − n . Thus for large ( N → ∞ ) matrices the hole probabilities associated with the matrices takenfrom induced complex Ginibre ensemble, and the hole probability associated with the matricestaken from the induced quaternion Ginibre ensemble decay almost identically. In particular,the difference in the first term in the asymptotic expansions of the logarithms of the holeprobabilities is only in constant factors − n and − n . Remark 3.12. The result of Theorem 3.11 is in agreement with known results for the holeprobability in the case of a product of one or two matrices taken from the complex or quaternionGinibre ensemble [5]. Note that in [5], the product of two Ginibre matrices is a special case ofthe non-Hermitian chiral ensembles.3.6. Overcrowding estimates. Overcrowding estimates for the generalised induced infinitecomplex Ginibre ensemble with parameters m , . . . , m n (see Section 2.6) can be extendedto the case of the generalised induced infinite quaternion Ginibre ensemble with parameters m , . . . , m n . The result is given by the following Theorem 3.13. A) (Lower bound for the overcrowding probability.) As q → ∞ , Pr n N ( m ,...,m n ) GG, qu ( r ; ∞ ) ≥ q o ≥ exp n − nq log q + (cid:16) n − n log 2 + 2 log r (cid:17) q − n X a =1 ( m a + 1) q log q + O ( q ) o . B) (Upper bound for the overcrowding probability.) As q → ∞ , Pr n N ( m ,...,m n ) GG, qu ( r ; ∞ ) ≥ q o ≤ exp n − nq log q + (cid:16) n − n log 2 + log r (cid:17) q + (cid:16) − n X a =1 (cid:0) m a + (cid:1)(cid:17) q log q + O ( q ) o . Thus, for a fixed r > , Pr n N ( m ,...,m n ) GG ;qu ( r ; ∞ ) ≥ q o = exp (cid:8) − nq log q (1 + o (1)) (cid:9) , as q → ∞ . We note that the difference between the complex and the quaternion case in the first termof asymptotic expansions as q → ∞ is in constant factors − n and − n .4. Joint density for the generalised complex Ginibre ensemble We begin by completing the proof of Theorem 2.1 as it was presented in [1]. While thegeneralised Schur decomposition and the corresponding Jacobian were proved there, the explicitcomputation of the weight function and its normalisation was not spelled out explicitly. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 17 Proof of Theorem 2.1. In [1] the unnormalised joint density of the eigenvalues z , . . . , z N of the product matrix P n was given as(4.1) ̺ ( m ,...,m n ) N ( z , . . . , z N ) = 1 Z N N Y k =1 w ( m ,...,m n ) n ( z k ) Y ≤ i Theorem 2.3 can be obtained from Theorem 2.1 by the samemethod as in the case of the classical complex Ginibre ensemble, see Mehta [45], Section 15.1.Indeed, equation (2.2) implies that the eigenvalues of P n form a determinantal point process onthe complex plane. The corresponding correlation kernel has the following form K ( m ,...,m n ) N ( z, ζ ) = N − X k =0 ϕ k ( z ) ϕ k ( ζ ) , where ϕ k ( z ) are polynomials orthonormal with respect to the weight function w ( m ,...,m n ) n ( z ), Z ϕ k ( z ) ϕ l ( z ) w ( m ,...,m n ) n ( z ) d z = δ k,l . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 19 Because the weight function is rotational invariant these polynomials are monic and can beproperly normalised as ϕ k ( z ) = z k h ( m ,...,m n ) k , where the squared norms h ( m ,...,m n ) k were already computed as the moments in equation (4.3).Theorem 2.3 follows. (cid:3) We will now study the radial distributions of the eigenvalues of P n , and of the points of thegeneralised induced complex Ginibre ensemble with parameters m , . . . , m n . The first step inthis direction is to prove Theorem 2.4 on the distribution of the absolute values of the complexvariables z , . . . , z N whose joint density has the same form as that of the joint density of theeigenvalues of P n . This follows along the lines of the work of Kostlan [41] for Gaussian weights,and ultimately leads to the independence of the moduli of the complex eigenvalues.4.3. Proof of Theorem 2.4. Set z i = r i e iθ i ( i = 1 , . . . , N ). Expanding the two Vandermondedeterminants in the joint density we have Y ≤ i Let z , . . . , z N be the eigenvalues of P n . From Proposition 2.2 andTheorem 2.4 we conclude that the set of absolute values of z , . . . , z N has the same distributionas the set of independent random variables, n R ( m ,...,m n )1 , R ( m ,...,m n )2 , . . . , R ( m ,...,m n ) N o . For each k (1 ≤ k ≤ N ), the random variable (cid:0) R ( m ,...,m n ) k (cid:1) has the density(4.10) q ( m ,...,m n ) k ( x ) = x k − π k − G n, ,n ( x | m , m , . . . , m n ) R ∞ x k − π k − G n, ,n ( x | m , m , . . . , m n ) dx , x ≥ , , x < . This gives formula (2.12) for the density function. Now, formula (2.12) and Proposition 2.5imply that (cid:0) R ( m ,...,m n ) k (cid:1) has the same distribution as the product of n independent gammarandom variables Gamma( k + m , , . . . , Gamma( k + m n , (cid:3) Hole probabilities for the generalised complex Ginibre ensemble This Section contains the proofs of exact and asymptotic results for the hole probabilitiesboth in the case of the generalised induced finite- N and infinite complex Ginibre ensemble withparameters m , . . . , m n .5.1. Proof of Theorem 2.7. Recall that N ( m ,...,m n ) GG ( r ; N ) denotes the number of points ofthe generalised induced finite- N complex Ginibre ensemble with parameters m , . . . , m n in thedisk of radius r with its center at the origin. From Theorem 2.6 we conclude that the holeprobability can be written as(5.1) Pr n N ( m ,...,m n ) GG ( r ; N ) = 0 o = N Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) , ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 21 where the random variables R ( m ,...,m n ) k are those defined in the statement of Theorem 2.6.Formula (2.12) givesPr n N ( m ,...,m n ) GG ( r ; N ) = 0 o = N Y k =1 Z ∞ r G n, ,n ( x | k + m − , k + m − , . . . , k + m n − dx Q na =1 Γ( k + m a ) . The integral just written above can be computed explicitly, see Luke [44], Section 5.6. As aresult we obtain the formula for the hole probabilities in the statement of Theorem 2.7. (cid:3) Proof of Theorem 2.9. We use an asymptotic formula for Meijer G -functions (see Luke[44], Section 5.7)(5.2) G q, p,q (cid:18) x (cid:12)(cid:12)(cid:12)(cid:12) a , a , . . . , a p b , b , . . . , b q (cid:19) ∼ (2 π ) ( σ − / σ / exp n − σx /σ o x θ ∞ X k =0 M k x − k/σ , for x → ∞ . The parameters σ and θ are defined by σ = q − p and σθ = 12 (1 − σ ) + q X k =1 b k − p X k =1 a k . The coefficients M k are independent of x and are given in the above reference. However, weonly need that M = 1. This leads to the asymptotic expression G n +1 , ,n +1 (cid:18) r (cid:12)(cid:12)(cid:12)(cid:12) , k + m , . . . , k + m n (cid:19) = (2 π ) n − n e − nr n r k − n ( m + ··· + m n ) − n h O (cid:16) r − /n (cid:17)i as r → ∞ . When inserting this into the formula for the hole probability in Theorem 2.7, weobtain the asymptotic formula in Theorem 2.9. (cid:3) Proof of Theorem 2.10. Upper bound for the hole probability of the infinite ensemble. To estimate the hole prob-ability at N → ∞ we use the following standard fact called the Markov inequality. Proposition 5.1. Suppose ϕ : R → R is a positive valued function, and let A be a Borel subsetof R . Then inf { ϕ ( y ) : y ∈ A } Pr { X ∈ A } ≤ E { ϕ ( X ) } . Proof. See, for example, Durrett [21], Section 1.6, Theorem 1.6.4. (cid:3) Let b ≥ A = ( r , ∞ ), and ϕ ( x ) = x b . By the Markov inequality (Proposition 5.1) we have Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ Z ∞ t b G n, ,n ( t | k + m − , k + m − , . . . , k + m n − dt ( r ) b Q na =1 Γ( k + m a ) . In the proof for the β = 4 case one should replace k with 2 k . Moreover, Z ∞ t b G n, ,n ( t | k + m − , k + m − , . . . , k + m n − dt = n Y a =1 Γ( k + m a + b ) . Therefore, Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ r ) b n Y a =1 Γ( k + m a + b )Γ( k + m a ) . As the next step we utilise the following well-known inequality (see e.g. the Digital Library ofMathematical Functions [20], § π ) exp (cid:26) − z + ( z − 12 ) log z (cid:27) ≤ Γ( z ) ≤ (2 π ) exp (cid:26) − z + ( z − ) log( z ) + 112 z (cid:27) , as z ≥ 1, to obtainPr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ exp (cid:26) − nb − b log( r ) + n X a =1 (cid:0) k + m a − + b (cid:1) log ( k + m a + b ) − n X a =1 ( k + m a − ) log( k + m a ) + n X a =1 k + m a + b ) (cid:27) . Now set b = r n − k in the equation above . We can then rewrite the inequality above asPr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ exp (cid:26) − nr n − nr n log r n + n X a =1 ( r n + m a − ) log( r n + m a )+ nk (cid:0) r n (cid:1) − n X a =1 ( m a − ) log ( k + m a ) − k n X a =1 log ( k + m a ) + 112 n X a =1 r n + m a (cid:27) . Next we exploit that ∞ Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ r n Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) . In the proof for the β = 4 case one should choose b = r n − k . For β = 4 one should replace r n with r n / ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 23 This gives ∞ Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) > r (cid:27) ≤ exp (cid:26) − nr n − nr n log r n + r n n X a =1 ( r n + m a − ) log( r n + m a ) + n r n ( r n + 1)(1 + log r n )+ 12 n X a =1 r n X k =1 log ( k + m a ) − n X a =1 r n X k =1 ( k + m a ) log ( k + m a ) + r n n X a =1 r n + m a (cid:27) . Here r n is considered as an integer (this assumption should not affect our estimate), alterna-tively we simply consider ⌊ r n ⌋ + 1. A straightforward application of equations (A.1) and (A.3)from the Appendix A gives r n X k =1 log ( k + m a ) = r n log r n − r n + O (log r ) , r n X k =1 ( k + m a ) log ( k + m a ) = 12 r n log r n − r n r n (cid:0) m a + (cid:1) log r n + O (log r ) . In addition, r n n X a =1 ( r n + m a − ) log( r n + m a ) = nr n log r n + r n log r n n X a =1 ( m a − ) + r n n X a =1 m a + O (1) . The estimate in the statement of Theorem 2.10, (A) follows. (cid:3) Lower bound for the hole probability for the infinite ensemble. Proposition 5.2. We have (5.4) ∞ Y k =1 Pr n(cid:16) R ( m ) k (cid:17) > r o ≥ exp n − r − r log r − r ( C ( m ) − m + log( m !) + log 4) + O (log( r )) o , as r → ∞ . Here the constant C ( m ) is defined by equation (2.15).Proof. The idea of the proof is to split the infinite product into three parts and to derive boundsfor each of these.Recall that (cid:0) R ( m ) k (cid:1) is distributed in the same way as a gamma random variable , Gamma( k + m, { Gamma( k + m, > λ } = Pr { Pois( λ ) < k + m } . In the proof for the β = 4 case we have Gamma(2 k + m, This gives(5.5) Pr (cid:26)(cid:16) R ( m ) k (cid:17) > r (cid:27) = Pr (cid:8) Pois( r ) ≤ k + m − (cid:9) ≥ exp (cid:8) − r (cid:9) r k + m − ( k + m − , and thus for the first part of the infinite product we have r Y k =1 Pr (cid:26)(cid:16) R ( m ) k (cid:17) > r (cid:27) ≥ r Y k =1 exp (cid:8) − r (cid:9) r k + m − ( k + m − n − r + log r r X k =1 ( k + m − − r X k =1 log( k + m − o . Without loss of generality we have assumed that r is an integer. The following identity holdstrue r X k =1 log( k + m − r log m ! + ( r + m ) r X k =1 log( k + m ) − r X k =1 ( k + m ) log( k + m ) . Applying formulae (A.1) and (A.3) from Appendix A we obtain r Y k =1 Pr n(cid:16) R ( m ) k (cid:17) > r o ≥ exp n − r − r r − r ( C ( m ) − m + log m !) + O (log( r )) o (5.6)as r → ∞ , where C ( m ) is defined by equation (2.15).We move to an estimate for the second part of the product. Since (cid:0) R ( m ) k (cid:1) is distributed inthe same way as Gamma( k + m, n (cid:16) R ( m ) k (cid:17) > r o = Pr (cid:8) Pois( r ) ≤ k + m − (cid:9) = 1 − Pr (cid:8) Pois( r ) > k + m − (cid:9) ≥ − Pr (cid:8) Pois( r ) ≥ r (cid:9) , for k > r − m . Since Pr (cid:8) Pois( r ) ≥ r (cid:9) → as r → ∞ , we obtain thatPr n(cid:16) R ( m ) k (cid:17) > r o ≥ , for sufficiently large r and for k ≥ r . This gives (5.7) r Y k = r +1 Pr n(cid:16) R ( m ) k (cid:17) > r o ≥ exp (cid:8) − r log 4 (cid:9) , for sufficiently large r which was already derived in [34]. In the proof for β = 4 we replace k with 2 k and change the upper limit of the product from r to r . For β = 4 this product goes from r + 1 to r , and obviously in the third part the product extends then from r + 1 to ∞ . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 25 For the third part of the product we once more follow the same argument as in Hough,Krishnapur, Peres and Vir´ag [34]. Section 7.2 there givesPr (cid:26)(cid:16) R ( m ) k (cid:17) < k + m (cid:27) ≤ exp {− c ( k + m ) } , where the constant c is independent of k . This implies ∞ X k =2 r +1 Pr (cid:26)(cid:16) R ( m ) k (cid:17) < r (cid:27) ≤ ∞ X k =2 r +1 Pr (cid:26)(cid:16) R ( m ) k (cid:17) < k + m (cid:27) < , for sufficiently large r . Finally, since ∞ Y j =1 (1 − a j ) ≥ − ∞ X i =1 a i , where 0 < a i < ∞ Y k =2 r +1 Pr (cid:26)(cid:16) R ( m ) k (cid:17) > r (cid:27) = ∞ Y k =2 r +1 (cid:18) − Pr (cid:26)(cid:16) R ( m ) k (cid:17) ≤ r (cid:27)(cid:19) ≥ − ∞ X k =2 r +1 Pr (cid:26)(cid:16) R ( m ) k (cid:17) < k + m (cid:27) > . (5.8)The statement of the Proposition follows from inequalities (5.6), (5.7), and (5.8) for the differentparts of the product. (cid:3) We know from Theorem 2.6 that the random variables R ( m ,...,m n ) k are independent, and thateach (cid:0) R ( m ,...,m n ) k (cid:1) enjoys the same distribution as the product of n independent gamma randomvariables Gamma( k + m , , . . . , Gamma( k + m n , a , 1 ≤ a ≤ n , therandom variable (cid:0) R ( m a ) k (cid:1) is itself a gamma random variable Gamma( k + m a , (cid:0) R ( m ,...,m n ) k (cid:1) has the same distribution as the product of randomvariables (cid:0) R ( m ) k (cid:1) , (cid:0) R ( m ) k (cid:1) , . . . , (cid:0) R ( m n ) k (cid:1) . Thus we have(5.9) Pr n(cid:16) R ( m ,...,m n ) k (cid:17) > r o ≥ n Y i =1 Pr n(cid:16) R ( m i ) k (cid:17) > r n o . Inequality (5.9) and Proposition 5.2 imply inequality (2.14). (cid:3) Overcrowding probability for the generalised complex Ginibre ensemble In this Section we continue to study probabilistic quantities of interest for the generalisedinduced complex Ginibre ensemble with parameters m , . . . , m n . Namely, we now turn to theproof of Theorem 2.11 giving lower and upper bounds for the overcrowding probabilities. As inthe case of the hole probabilities, Theorem 2.6 serves as the main technical tool in our analysis. We use a technique similar to that used by Krishnapur [43] to estimate hole probabilities in thecase of the classical infinite Ginibre ensemble.6.1. Proof of Theorem 2.11 A) Lower bound for the overcrowding probability. Letus recall that the set of absolute values of eigenvalues of this ensemble has the same distributionas the set of independent random variables n R ( m ,...,m n )1 , R ( m ,...,m n )2 , . . . , R ( m ,...,m n ) N o . Furthermore, the random variable (cid:0) R ( m ,...,m n ) k (cid:1) has the same distribution as the product of n independent gamma random variables Gamma( k + m , . . . , Gamma( k + m n , (cid:16) R ( m ,...,m n ) k (cid:17) d = n Y i =1 Gamma( k + m i , d = n Y i =1 (cid:16) ξ ( i )1 + . . . + ξ ( i ) k + m i (cid:17) , where all ξ ( j ) i are i.i.d. exponential random variables with mean 1, and d = denotes equality indistribution. Hence we can writePr n(cid:16) R ( m ,...,m n ) k (cid:17) < r o ≥ n Y i =1 Pr { ξ ( i )1 + · · · + ξ ( i ) k + m i < r n } ≥ n Y i =1 k + m i Y j =1 Pr n ξ ( i ) j < r n k + m i o . For any exponential random variable ξ with mean 1 it holdsPr { ξ < x } ≥ x , < x < . This leads to Pr n(cid:16) R ( m ,...,m n ) k (cid:17) < r o ≥ n Y a =1 (cid:16) r n k + m a ) (cid:17) k + m a , and we thus obtainPr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≥ q Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) ≥ q Y k =1 n Y a =1 (cid:18) r n k + m a ) (cid:19) k + m a = exp ( log r n n X a =1 q X k =1 ( k + m a ) − n X a =1 q X k =1 ( k + m a ) log( k + m a ) ) . The next step is to evaluate the sums. We have n X a =1 q X k =1 ( k + m a ) = n q + q n X a =1 (cid:0) m a + (cid:1) , and n X a =1 q X k =1 ( k + m a ) log( k + m a ) = n q log q − n q + q log q n X a =1 (cid:0) m a + (cid:1) + O (log q ) , ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 27 as q → ∞ . Therefore,Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≥ exp n − n q log q + (cid:16) n r − n (cid:17) q − q log q n X a =1 (cid:0) m a + (cid:1) + q n X a =1 (cid:0) m a + (cid:1) + O (log q ) o , as q → ∞ . (cid:3) Proof of Theorem 2.11 B) Upper bound for the overcrowding probability. Inorder to obtain an upper bound for Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o we use again the Markov in-equality (Proposition 5.1), this times with A = { , r } , ϕ ( x ) = x − b , b ≥ 0. This givesPr n ( R ( m ,...,m n ) k ) < r o ≤ ( r ) b Z ∞ t − b G n, ,n ( t | k + m − , . . . , k + m n − dt Q na =1 Γ( k + m a )= ( r ) b n Y a =1 Γ( k + m a − b )Γ( k + m a ) . By the inequality (5.3) for the Gamma functionPr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) ≤ exp (cid:26) nb + b log r + n X a =1 (cid:0) k + m a − − b (cid:1) log( k + m a − b ) − n X a =1 (cid:0) k + m a − (cid:1) log( k + m a ) + 112 n X a =1 k + m a − b (cid:27) . When choosing b = k − , we obtainPr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) ≤ exp (cid:26)(cid:0) k − (cid:1) log r + n (cid:0) k − (cid:1) + n X a =1 m a log (cid:0) m a + (cid:1) − n X a =1 (cid:0) k + m a − (cid:1) log( k + m a ) + 112 n X a =1 m a + (cid:27) . Furthermore, using simple probabilistic arguments we have(6.1) Pr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≤ Pr n q X k =1 I (cid:16)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:17) ≥ q o + ∞ X k = q +1 Pr n(cid:16) R ( m ,...,m n ) k (cid:17) < r o . Here I ( · ) denotes the characteristic function of a set. Then the second term on the right handside of the inequality above can be accordingly estimatedPr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) ≤ exp {− nk log k (1 + o (1)) } , as k → ∞ . Therefore, ∞ X k = q +1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) ≤ exp (cid:8) − nq log q (1 + o (1)) (cid:9) , as q → ∞ . Finally let us estimate the first term on the right hand side of the inequality (6.1).We obtainPr (cid:26) q X k =1 I (cid:18)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:19) ≥ q (cid:27) ≤ (cid:18) q q (cid:19) q Y k =1 Pr (cid:26)(cid:16) R ( m ,...,m n ) k (cid:17) < r (cid:27) . Using (cid:0) q q (cid:1) < q q we thus obtainPr n N ( m ,...,m n ) GG ( r ; ∞ ) ≥ q o ≤ exp n − n q log q + q (cid:16) n r (cid:17) + q log q (cid:16) − n X a =1 m a (cid:17) + O ( q ) o , as q → ∞ . The main statement of the theorem then follows from the bounds A) and B). (cid:3) Joint density for the generalised quaternion Ginibre ensemble In this Section we turn to the case of random matrices taken from the induced quaternionGinibre ensemble. Quaternion matrices play a prominent role in the physical sciences; nonethe-less, the literature on quaternion matrices is very fragmented and often underappreciated. Forthis reason, we find it appropriate to start this section with a summary of some known propertiesof quaternions and matrices of quaternions. In particular, we are interested in decompositiontheorems for quaternion matrices. Note that the generalised Schur decomposition, stated intheorem 7.13, is an essential tool for the discussion of the complex eigenvalues of products ofquaternion matrices, but a rigorous proof of this result has not previously appeared in theliterature. For a review of some known results for matrices with quaternion entries we referto [45, 54] and references within.7.1. Quaternions. Let , i , j , and k be abstract generators with the following multiplicationrules = , i = j = k = ijk = − , ij = − ji = k , jk = − kj = i , ki = − ik = j . The division ring H of quaternions is a four dimensional vector space over R with basis , i , j and k . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 29 Representation of quaternions in terms of pairs of complex numbers. We identifyquaternions of the form x with the real numbers, and we write x instead of x . Moreover, weidentify a quaternion of the form x + x i with the complex number x + ix . This includes thereal numbers R , and the complex numbers C into H in the obvious way. Now, each quaternion x = x + x i + x j + x k can be written as x = ( x + x i ) + ( x + x i ) j . Therefore, each quaternion x can be represented as x = ζ + η j , where ζ = x + x i and η = x + x i are two complex numbers.If x = ζ + η j , and y = ζ + η j are two quaternions, then we find that the quaternion xy can be written as xy = ζ ζ − η η + ( η ζ + ζ η ) j . Therefore the quaternions x = ζ + η j , and y = ζ + η j can be multiplied in a formal waytaking into account that j α = α j , α ∈ C . Representation of quaternions as matrices with complex entries. The divisionring H of quaternions is isomorphic to the ring of 2 × H ′ = (cid:26)(cid:18) α − ββ α (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) α, β ∈ C (cid:27) . The ring H ′ is the sub-ring of M ( C ) under the operations of M ( C ). The isomorphism between H and H ′ is defined by x = ζ + η j ˘ x = (cid:18) ζ − ηη ¯ ζ (cid:19) . It can be checked that the bijection above preserves ring operations.7.4. The conjugate of a quaternion and the norm of a quaternion. If x ∈ H , and x = x + x i + x j + x k , then define the conjugate by x = x − x i − x j − x k . The definition of x implies that if x = ζ + η j , then x = ζ − η j . The operation x → x in H corresponds to Hermitian conjugation in H ′ . Namely, if x = ζ + η j ∈ H and ˘ x is thecorresponding element in H ′ , then the map x → x in H induces the map˘ x = (cid:18) ζ − ηη ¯ ζ (cid:19) −→ ˘ x ∗ = (cid:18) ζ η − η ζ (cid:19) in H ′ . If x ∈ H , and x = x + x i + x j + x k , then the norm of x is defined by | x | = √ xx = q x + x + x + x . If | x | = 1, then x is called a unit quaternion. If x is a unit quaternion, and x = ζ + η j , then | ζ | + | η | = 1. The corresponding element ˘ x from H ′ is a 2 × Quaternion matrices. Let Mat ( H , m × n ) denote the collection of all m × n matriceswith entries from H . Since for x ∈ H there is a unique representation x = ζ + η j (where ζ, η ∈ C ),we conclude that for P ∈ Mat ( H , m × n ) there is a unique representation as P = A + B j , where A, B ∈ Mat ( C , m × n ). The quaternion matrix P = A + B j can be also represented as the2 m × n complex matrix(7.2) ˆ P = (cid:18) A − BB A (cid:19) . The map P → ˆ P is an algebra isomorphism of Mat ( H , m × n ) onto the algebra of 2 m × n complex matrices of form (7.2).Under the isomorphism H → H ′ defined in Section 7.3 the set Mat ( H , m × n ) turns intoMat (cid:0) H ′ , m × n (cid:1) , i.e. to the collection of all m × n matrices whose entries are 2 × (cid:18) u − vv u (cid:19) , u, v ∈ C . Thus each matrix M ∈ Mat (cid:0) H ′ , m × n (cid:1) can be represented as(7.3) M = ( ˘ m i,j ) Ni,j =1 , ˘ m i,j = M (1) i,j − M (2) i,j M (2) i,j M (1) i,j ! , where 1 ≤ i ≤ m , 1 ≤ j ≤ n , and where M (1) i,j , M (2) i,j ∈ C .If P = ( p ij ) ni,j =1 ∈ Mat ( H , n × n ), i.e. a n × n matrix with entries p i,j ∈ H , then P ∗ = ( p ji ) ni,j =1 and P = ( p ij ) ni,j =1 . If P ∗ P = PP ∗ = I , where I ≡ diag( , . . . , ) is the identity matrix, then we say that P is a quaternion unitary matrix. Note that if P is a quaternion unitary matrix, then thecorresponding element from Mat (cid:0) H ′ , n × n (cid:1) is an 2 × P ∈ Mat ( H , n × n ) is non-singular if there exists a matrix Q ∈ Mat ( H , n × n ) suchthat PQ = QP = I .7.6. Eigenvalues of quaternion matrices. Denote by C + the set of complex numbers withnon-negative imaginary parts. Recall that the real and complex numbers were embedded into H . Thus we have R ⊂ C + ⊂ C ⊂ H . Definition 7.1. A number λ ∈ C + is called an eigenvalue of P ∈ Mat ( H , n × n ) if there existsa non-zero vector x ∈ Mat ( H , n × 1) such that Px = x λ . Remark 7.2. If λ ∈ C + is an eigenvalue of P ∈ Mat ( H , n × n ) with an eigenvector x ∈ Mat ( H , n × P ( xw ) = ( xw )( w λ w )for any w ∈ H with | w | = 1. Therefore the set { w λ w | | w | = 1 } can be understood as acontinuum of “eigenvalues” each of which is a representative of λ . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 31 Proposition 7.3. Any quaternion matrix P ∈ Mat ( H , n × n ) has exactly n eigenvalues.Proof. Assume that λ is an eigenvalue of P . Then Px = x λ for some non-zero vector x , x ∈ Mat ( H , n × P = P + P j , x = x + x j , where P , P are complex matrices, and where x , x are complex column vectors. Since λ is acomplex number, the equation Px = x λ is equivalent to (cid:18) P P − P P (cid:19) (cid:18) x x − x x (cid:19) = (cid:18) x x − x x (cid:19) (cid:18) λ λ (cid:19) , where we have used the isomorphism defined by equation (7.2), i.e. we are using the represen-tation ˆ P ˆ x = ˆ x ˆ λ . We obtain two equations, (cid:18) P P − P P (cid:19) (cid:18) x − x (cid:19) = λ (cid:18) x − x (cid:19) and (cid:18) P P − P P (cid:19) (cid:18) x − x (cid:19) = λ (cid:18) x − x (cid:19) , which are equivalent to each other. Since ˆ P is a 2 n × n complex matrix, it has exactly 2 n complex eigenvalues. Now, in order to complete the proof of the Proposition it is enough toshow that the non-real eigenvalues of ˆ P come in conjugate pairs, and that the real eigenvaluesoccur an even number of times.Note that (cid:18) − (cid:19) (cid:18) P P − P P (cid:19) (cid:18) − 11 0 (cid:19) = (cid:18) P P − P P (cid:19) , and det (cid:18) λI n − (cid:18) P P − P P (cid:19)(cid:19) = det (cid:18) λI n − (cid:18) P P − P P (cid:19)(cid:19) . This says that if λ is a non-real eigenvalue, then λ is an eigenvalue as well. By a simplecontinuity argument we conclude that real eigenvalues appear an even number of times. (cid:3) Remark 7.4. Proposition 7.3 implies that eigenvalues of quaternion matrices (defined as ele-ments of C + ) form a discrete, finite set. Corollary 7.5. Assume that P ∈ Mat ( H , n × n ) , and P ′ is the corresponding element of Mat (cid:0) H ′ , n × n (cid:1) obtained from P by the isomorphism defined in Section 7.3. Then P ′ has n eigenvalues, where the non-real eigenvalues of P ′ appear in conjugate pairs, and every realeigenvalue of P ′ occurs an even number of times.Proof. Let P ′ ∈ Mat (cid:0) H ′ , n × n (cid:1) . We know that the total number of eigenvalues of P ′ is 2 n . Let P be the pre-image of P ′ under the isomorphism between Mat ( H , n × n ) and Mat (cid:0) H ′ , n × n (cid:1) .It can be checked directly that if λ is an eigenvalue of P , then both λ and ¯ λ are eigenvalues of P ′ . This observation, and the fact that P has exactly n eigenvalues (see Proposition 7.3) implythe statement of Corollary 7.5. (cid:3) The Schur canonical form for quaternion matrices.Lemma 7.6. Assume that P ∈ Mat ( H , m × n ) , and assume that m < n . Then Px = 0 has anon-trivial solution in Mat ( H , n × .Proof. Represent P as P = A + B j , and x as x = x + x j . Then Px = 0 is equivalent toˆ P ˆ x = 0 (where ˆ P and ˆ x are images of P and x under the isomorphism defined in Section 7.5).More explicitly, Px = 0 is equivalent to the equation (cid:18) A − BB A (cid:19) (cid:18) x x − x x (cid:19) = 0 or equivalently (cid:18) A − BB A (cid:19) (cid:18) x − x (cid:19) = 0 . The system just written above does have a non-trivial solution. (cid:3) Proposition 7.7. Assume that u ∈ Mat( H , n × , and that u ∗ u = 1 . Then there exists aunitary matrix from Mat( H , n × n ) whose first column is u .Proof. According to Lemma 7.6 there exists a non-trivial vector ˜ u , ˜ u ∈ Mat( H , n × u ∗ ˜ u = 0. Define u = ˜ u | ˜ u | . By the same argument there exists a non-trivial vector ˜ u ,˜ u ∈ Mat( H , n × u ∗ ˜ u = 0. Define u = ˜ u | ˜ u | . Proceeding in this way we obtain n vectors u , u , . . . , u n . The desired unitary matrix is U = ( u , u , . . . , u n ). (cid:3) Proposition 7.8. If P ∈ Mat( H , n × n ) , then there exists a unitary quaternion matrix U ∈ Mat( H , n × n ) such that U ∗ PU is in an upper triangular form.Proof. Proposition 7.3 implies that P has exactly n eigenvalues λ , . . . , λ n . Let x be a nor-malised eigenvector of P corresponding to λ . By Proposition 7.7 there exists a unitary matrix U ∈ Mat( H , n × n ) whose first column is x . Then the matrix U ∗ PU has the form U ∗ PU = (cid:18) λ ∗ P (cid:19) . The matrix P ∈ Mat( H , ( n − × ( n − n − λ , . . . , λ n . Then we can finda unitary matrix U ∈ Mat( H , ( n − × ( n − U ∗ P U = (cid:18) λ ∗ P (cid:19) . If we set V = diag( , U ), then( U V ) ∗ P U V = λ ∗ ∗ λ ∗ P , where U V is a unitary matrix from Mat ( H , n × n ). Repeating this argument we obtain thestatement of Proposition 7.8. (cid:3) Corollary 7.9. If M ∈ Mat (cid:0) H ′ , n × n (cid:1) , then there exists a unitary × block matrix U ∈ Mat (cid:0) H ′ , n × n (cid:1) , that is a unitary symplectic matrix U ∈ USp(2 n ) , such that (7.4) M = U ( Z + T ) U ∗ . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 33 Here Z ∈ Mat (cid:0) H ′ , n × n (cid:1) is a × block diagonal matrix of the form (7.5) Z = ˘ z ˘0 · · · ˘0˘0 ˘ z · · · ˘0 ... ... . . . ... ˘0 ˘0 · · · ˘ z n , ˘ z i = (cid:18) z i z i (cid:19) , ˘0 = (cid:18) (cid:19) , (with z i ∈ C ), and T ∈ Mat (cid:0) H ′ , n × n (cid:1) is a × block strictly upper triangular matrix of theform (7.6) T = ˘0 ˘ t , ˘ t , · · · ˘ t ,n ˘0 ˘0 ˘ t , · · · ˘ t ,n ... ... ... . . . ... ˘0 ˘0 ˘0 · · · ˘ t n − ,n ˘0 ˘0 ˘0 · · · ˘0 , ˘ t i,j = t (1) i,j − t (2) i,j t (2) i,j t (1) i,j , with t (1) i,j , t (2) i,j ∈ C .Proof. Use the Schur canonical form for quaternion matrices (see Proposition 7.8), and theisomorphism between Mat ( H , n × n ) and Mat (cid:0) H ′ , n × n (cid:1) defined in Section 7.5. (cid:3) Remark 7.10. Similar to the standard Schur decomposition, decomposition (7.4) is not unique.However, in subsequent calculations leading to Jacobian determinantal formulas we omit all ma-trices with equal eigenvalues (such matrices are of zero Lebesgue measure). Then the uniquenessof decomposition (7.4) can be restored by requiring that z i are in increasing order (with respectto the lexicographic order on complex numbers, i.e. u + iv ≤ u ′ + iv ′ if u < u ′ or if u = u ′ and v ≤ v ′ ), and then by requiring that z i ∈ C + .7.8. QR decomposition for quaternion matrices.Proposition 7.11. Let P ∈ Mat ( H , n × n ) be a non-singular quaternion matrix. Then thereis a factorization P = US , where U ∈ Mat ( H , n × n ) is unitary, and S ∈ Mat ( H , n × n ) is an upper triangular.Proof. Similar to the case of matrices with complex entries, this statement is just a reformulation(in matrix notation) of the result of applying of the Gram–Schmidt process to the columns of P . (cid:3) Corollary 7.12. Let P ∈ Mat (cid:0) H ′ , n × n (cid:1) be a non-singular matrix. Then there is a factoriza-tion P = U S, where U is a unitary × block matrix, U ∈ Mat (cid:0) H ′ , n × n (cid:1) , and S ∈ Mat (cid:0) H ′ , n × n (cid:1) is a × block upper triangular matrix.Proof. Use the QR factorization for quaternion matrices (Proposition 7.11), and the isomor-phism between Mat ( H , n × n ) and Mat (cid:0) H ′ , n × n (cid:1) defined in Section 7.5. (cid:3) Generalised Schur decomposition for quaternion matrices.Theorem 7.13. Let N and n be fixed natural numbers. Let M i ( i = 1 , . . . , n ) be non-singularquaternion matrices from Mat ( H , N × N ) . Then there exist unitary quaternion matrices U i ( i = 1 , . . . , n ) from Mat ( H , N × N ) , and upper triangular quaternion matrices S i ( i = 1 , . . . , n ) from Mat ( H , N × N ) such that (7.7) M i = U i S i U ∗ i +1 , i = 1 , . . . , n, where U n +1 = U .Proof. Let M i ( i = 1 , . . . , n ) be non-singular quaternion matrices from Mat ( H , N × N ). Itfollows from the Schur decomposition, Proposition 7.8, that there exists a unitary quaternionmatrix U ∈ U ( N, H ), such that(7.8) U ∗ M M · · · M n U = S , where S is an upper triangular quaternion matrix from Mat ( H , N × N ). It is clear that M n U ∈ Mat ( H , N × N ) and it follows directly from the QR decomposition for quaternion matrices,Proposition 7.11, that there exists a unitary quaternion matrix U n ∈ U ( N, H ), such that M n U = U n S n , where S n is an upper triangular quaternion matrix from Mat ( H , N × N ). This expression isidentical to (7.7) with i = n . Repeating this procedure, we also find M i U i +1 = U i S i , i = 2 , . . . , n − , with S i ( i = 2 , . . . , n − 1) upper triangular quaternion matrices from Mat ( H , N × N ) and U i ( i = 2 , . . . , n ) unitary quaternion matrices. Using this in (7.8) we get U ∗ M U S · · · S n = S . The upper triangular matrices S i ( i = 2 , . . . , n ) are invertible, since the matrices M i ( i =2 , . . . , n ) are non-singular. We define the upper triangular matrix S ≡ SS − · · · S − n and thetheorem follows. (cid:3) Remark 7.14. Note that the diagonal elements of the upper triangular matrix S are complexnumbers, i.e. ( S ) ii ∈ C ⊂ H for all i = 1 , . . . , N . This is equivalent to the structure given byCorollary 7.9. Remark 7.15. Similar to the ordinary Schur decomposition, the generalised Schur decompo-sition given by Theorem 7.13 is not unique. In particular, let z a = x + y i ∈ C ⊂ H with | z a | = 1 for all a = 1 , . . . , N and V = diag( z , . . . , z N ), then with the same definitions as inTheorem 7.13, we have M i = U i S i U ∗ i +1 = ( U i V )( V ∗ S i ) U ∗ i +1 , where U i V ∈ U ( N, H ) is a unitary quaternion matrix and V ∗ S i is a upper triangular quater-nion matrices from Mat ( H , N × N ). Uniqueness of the generalised Schur decomposition (Theo-rem 7.13) can be obtained by choosing U i ∈ U ( N, H ) /U (1 , C ) N such that the diagonal elements( S ) jj ∈ C + ( j = 1 , . . . , N ) are complex numbers in lexicographical order. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 35 Corollary 7.16. Let N and n be fixed natural numbers. Let M i ( i = 1 , . . . , n ) be non-singular × block matrices from Mat (cid:0) H ′ , N × N (cid:1) . There exist unitary × block matrices U i ( i =1 , . . . , n ) from Mat (cid:0) H ′ , N × N (cid:1) , and upper triangular × block matrices S i ( i = 1 , . . . , n ) from Mat (cid:0) H ′ , N × N (cid:1) such that (7.9) M i = U i S i U ∗ i +1 , i = 1 , . . . , n where U n +1 = U .Proof. Use the generalised Schur decomposition for quaternion matrices (Theorem 7.13), andthe isomorphism between Mat ( H , n × n ) and Mat (cid:0) H ′ , n × n (cid:1) defined in Section 7.5. (cid:3) Remark 7.17. Often we write the upper triangular matrices S i ( i = 1 , . . . , n ) as a sum, S i = Z i + T i , consisting of a diagonal matrix Z i similar to (7.5), and a strictly upper triangularmatrix T i similar to (7.6).7.10. Change of measure for quaternion matrices. Corollary 7.16 defines a change ofvariables from the matrices M , . . . , M n to their block triangular forms. Our task is to derivethe Jacobian for such a change of variables. To make our method of derivation more transparentwe begin our presentation from the simplest case of a single matrix. Definition 7.18. Let λ (1) , λ (2) be independent one-forms. The 2 × λ definedby(7.10) ˘ λ = λ (1) − λ (2) λ (2) λ (1) ! is called a quaternion one-form. Remark 7.19. Recall that the set H ′ , see (7.1), is isomorphic to the set H of quaternions. Thisexplains Definition 7.18.We introduce the following notation. If ˘ λ is a quaternion one-form defined by equation (7.10),then ˘ λ = λ (1) ∧ λ (1) ∧ λ (2) ∧ λ (2) . So ˘ λ is the wedge product of one-forms which are entries of ˘ λ .Let M ∈ Mat (cid:0) H ′ , N × N (cid:1) . Represent M as in equation (3.1). Integration of a function of M with respect to the Lebesgue measure is the same as integrating against the following 4 N form dM ≡ ^ i,j d ˘ M i,j . Here we have introduced dM as a compact notation for the 4 N form. Proposition 7.20. Let M , Q , Q be elements of Mat( H ′ , n × n ) . If Q , Q are fixed unitarymatrices, and ˜ M = Q M Q , then d ˜ M = dM . Proof. Use the fact that if x ′ i = P j a ij x j , and A = ( a ij ) i,j is a constant matrix, then dx ′ ∧ . . . ∧ dx ′ n = det( A ) dx ∧ . . . ∧ dx n . (cid:3) We use Corollary 7.9, and write M in terms of a unitary matrix U ∈ USp(2 N ) /U (1) N ,a diagonal matrix Z and a strictly upper triangular matrix T (see equations (7.4) to (7.6)).Set dM = ( d ˘ m ij ) Ni,j =1 , thus dM can be understood as a matrix whose elements, d ˘ m i,j , arequaternion one-forms. We have dM = U (Ω S − S Ω + dS ) U ∗ , where Ω = U ∗ dU is skew-Hermitian and S = Z + T . Denote by ˘ w i,j (1 ≤ i, j ≤ N ) the 2 × dM , the matrices dU , dZ and dT can beunderstood as matrices whose elements are quaternion one-forms. We will use the abbreviations dZ = ^ i ( dz i ∧ d ¯ z i ) , dT = ^ i The following Jacobian determinantal formula holds true (7.12) dM = Y j
1) Equation (7.12) can be understood as an analogue of the complex Ginibremeasure decomposition, see equation (6.3.5) of Hough, Krishnapur, Peres, and Vir´ag [34].2) Here (and in subsequent calculations leading to the equation in the statement of Theorem7.21, and in statement of Theorem 7.23) we omit combinatorial constants before differentialforms. In what follows we will restore normalization constants for the relevant probabilitymeasures using the usual normalization condition.3) Our proof of Theorem 7.21 can be seen as an extension of that in Section 6.3 of Hough,Krishnapur, Peres, and Vir´ag [34] to the case of matrices from Mat (cid:0) H ′ , N × N (cid:1) . Proof. Proposition 7.20 implies dM = Λ. The explicit formula for ˘ λ i,j is(7.13) ˘ λ i,j = j X k =1 ˘ w i,k ˘ s k,j − N X k = i ˘ s i,k ˘ w k,j + d ˘ s i,j , where d ˘ s i,j = ˘0 for i > j . Recall that the quaternion one-forms ˘ w i,j are the matrix elements ofΩ. We emphasise that ˘ w ∗ i,j = ˘ w j,i for ( i < j ) and that the diagonal elements ˘ w i,i ( i = 1 , . . . , N )have the special structure ˘ w i,i = − w (2) i,i w (2) i,i ! . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 37 Now we begin to investigate how the terms ˘ λ i,j contribute to the wedge product Λ, for differentchoices of indices i and j . We start with i = N and j = 1, and find that˘ λ N, = ˘ w N, ˘ s , − ˘ s N,N ˘ w N, = ˘ w N, ˘ z − ˘ z N ˘ w N, . This gives(7.14) ˘ λ N, = | z − z N | | z − z N | ˘ w N, . For ˘ λ N, we find ˘ λ N, = ˘ w N, ˘ z − ˘ z N ˘ w N, + ˘ w N, ˘ t , . Taking into account (7.14) it is not hard to see that the term ˘ w N, ˘ t , does not contribute tothe wedge product ˘ λ N, ∧ ˘ λ N, (the one-forms consisting of ˘ w N, already have appeared in ˘ λ N, ).Thus we obtain the formula˘ λ N, ∧ ˘ λ N, = | z − z N | | z − z N | | z − z N | | z − z N | ˘ ω N, ∧ ˘ ω N, . Proceeding in this way we find(7.15) ˘ λ N, ∧ ˘ λ N, ∧ · · · ∧ ˘ λ N,N − = N − Y j =1 | z j − z N | | z j − ¯ z N | ˘ w N, ∧ ˘ w N, ∧ · · · ∧ ˘ w N,N − . In addition, we have ˘ λ N,N = ˘ w N,N ˘ z N − ˘ z N ˘ w N,N + N − X k =1 ˘ w N,k ˘ t N,k + d ˘ z N . The sum in the expression above does not contribute to the wedge product ˘ λ N, ∧ ˘ λ N, ∧· · ·∧ ˘ λ N,N ,since for each k (1 ≤ k ≤ N − 1) the one-forms consisting of ˘ w N,k have already appeared in thewedge product ˘ λ N, ∧ ˘ λ N, ∧ . . . ∧ ˘ λ N,N − , see equation (7.15). Moreover, we note that˘ w N,N ˘ z N − ˘ z N ˘ w N,N + d ˘ z N = dz N ( z N − ¯ z N ) w (2) N,N ( z N − ¯ z N ) w (2) N,N d ¯ z N ! , so ˘ w N,N ˘ z N − ˘ z N ˘ w N,N + d ˘ z N = | z N − ¯ z N | dz N ∧ dz N ∧ w (2) N,N ∧ w (2) N,N . This gives N ^ j =1 ˘ λ N,j = | z N − ¯ z N | N − Y j =1 | z j − z N | | z j − ¯ z N | dz N ∧ d ¯ z N ∧ w (2) N,N ∧ w (2) N,N N − ^ j =1 ˘ w N,j . Replace N by N − N − N − N − N − ^ i ≥ j ˘ λ i,j = Y j
Let M , . . . , M n be n matrices takenfrom Mat (cid:0) H ′ , N × N (cid:1) . Recall that these matrices can be represented as M a = (cid:2) ( ˘ M a ) i,j (cid:3) Ni,j =1 , ( ˘ M a ) i,j = ( M a ) (1) i,j − ( M a ) (2) i,j ( M a ) (2) i,j ( M a ) (1) i,j ! Here ( M a ) (1) i,j and ( M a ) (2) i,j are complex numbers. Denote by P the product, P = M M · · · M n . The integration of a function of P with respect to the Lebesgue measure is the same as inte-grating against the following 4 nN differential form ^ a dM a = ^ a ^ i,j ( d ˘ M a ) i,j . We apply the generalised Schur decomposition for the quaternion matrices M , . . . , M n (seeCorollary 7.16), and obtain M a = U a ( Z a + T a ) U ∗ a +1 , where for each a (1 ≤ a ≤ n ) the matrix U a is a unitary symplectic matrix, U n +1 = U and thematrices Z a and T a are defined by equations (7.5) and (7.6) correspondingly. We have dM a = U a (Ω a S a − S a Ω a +1 + dS a ) U ∗ a +1 . In the formula above Ω a and S a are defined in analogy to the single matrix case. In particular,we distinguish between the diagonal and strictly upper triangular part of S a , see Remark 7.17.Note also that Ω n +1 = Ω . Furthermore, we will use the abbreviationsΛ a ≡ Ω a S a − S a Ω a +1 + dS a for a = 1 , . . . , n and(7.17) ( z ) i ≡ n Y a =1 ( z a ) i for i = 1 , . . . , N. Theorem 7.23. The Jacobian determinantal formula for the product of n quaternion Ginibrematrices is (7.18) ^ a dM a = Y j
1) Formula (7.18) is a generalization of formula (7.12) to the case of a productof n matrices from Mat (cid:0) H ′ , N × N (cid:1) .2) Since we omit combinatorial constants before differential forms, we do not need to fix theorder in the wedge products (change in order results in the change of the overall sign). ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 39 Proof. Proposition 7.20 implies V a dM a = V a Λ a , thus it is enough to find the Jacobian deter-minant for the change of variables from Λ a to Ω a and dS a . Explicit formulae for the matrixentries (˘ λ a ) i,j can be written as follows(7.19) (˘ λ a ) i,j = j X k =1 ( ˘ w a ) i,k (˘ s a ) k,j − N X k = i (˘ s a ) i,k ( ˘ w a +1 ) k,j + ( d ˘ s a ) i,j , a = 1 , . . . , n, where ( d ˘ s a ) i,j = ˘0 for i > j . Recall that ( ˘ w n +1 ) k,j = ( ˘ w ) k,j . In particular, formula (7.19) gives(7.20) (˘ λ a ) N, = ( ˘ w a ) N, (˘ s a ) , − (˘ s a ) N,N ( ˘ w a +1 ) N, , For all a = 1 , . . . , n the matrices (˘ s a ) , and (˘ s a ) N,N are diagonal, namely(˘ s a ) , = (cid:18) ( z a ) 00 ( z a ) (cid:19) and (˘ s a ) N,N = (cid:18) ( z a ) N 00 ( z a ) N (cid:19) . Taking this into account we see that equations (7.20) can be explicitly rewritten as( λ a ) (1) N, = ( z a ) ( w a ) (1) N, − ( z a ) N ( w a +1 ) (1) N, and ( λ a ) (2) N, = ( z a ) ( w a ) (2) N, − ( z a ) N ( w a +1 ) (2) N, . In matrix form, we have ( λ ) (1) N, ( λ ) (1) N, ...( λ n ) (1) N, = ( z ) − ( z ) N . . . 00 ( z ) − ( z ) N . . . . . . − ( z n − ) N − ( z n ) N . . . ( z n ) ( w ) (1) N, ( w ) (1) N, ...( w n ) (1) N, and ( λ ) (2) N, ( λ ) (2) N, ...( λ n ) (2) N, = ( z ) − ( z ) N . . . 00 ( z ) − ( z ) N . . . . . . − ( z n − ) N − ( z n ) N . . . ( z n ) ( w ) (2) N, ( w ) (2) N, ...( w n ) (2) N, Using these formulae, and the fact that (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) det α − β . . . α − β . . . . . . − β n − − β n . . . α n (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = | α α . . . α n − β β . . . β n | , we obtain(7.21) ^ a (˘ λ a ) N, = | ( z ) − ( z ) N | | ( z ) − ( z ) N | ^ a ( ˘ w a ) N, , where we use the notation introduced in (7.17). Next step is to consider i = N and j = 2. Here, we have(7.22) (˘ λ a ) N, = ( ˘ w a ) N, (˘ s a ) , − (˘ s a ) N,N ( ˘ w a +1 ) N, + ( ˘ w a ) N, (˘ s a ) , . The last term on the right-hand side of equation (7.22) does not contribute to the wedgeproduct V a (˘ λ a ) N, V a (˘ λ a ) N, , since the one-forms consisting of ( ˘ w a ) N, already have appearedin V a (˘ λ a ) N, . It follows that (7.22) contributes to the total wedge product with a term analogueto (7.21). Repeating these considerations we obtain(7.23) N − ^ j =1 ^ a (˘ λ a ) N,j = N − Y j =1 | ( z ) j − ( z ) N | | ( z ) j − ( z ) N | N − ^ j =1 ^ a ( ˘ w a ) N,j . Now, let us compute the wedge product V Nj =1 V a (˘ λ a ) N,j . The formula for (˘ λ a ) N,N is(7.24) (˘ λ a ) N,N = ( ˘ w a ) N,N (˘ s a ) N,N − (˘ s a ) N,N ( ˘ w a +1 ) N,N + N − X k =1 ( ˘ w a ) N,k (˘ s a ) k,N + ( d ˘ s a ) N,N . The sum on the right-hand side of equation (7.24) does not contribute to the wedge product,since all ( ˘ w a ) N,k already appear in (7.23). Thus we only need to find the contribution of thefirst three terms in the right-hand side of equation (7.24) to the wedge product. The procedureis the same as for a single matrix and we find(7.25) ^ a ( ˘ w a ) N,N (˘ z a ) N − (˘ z a ) N ( ˘ w a +1 ) N,N + ( d ˘ z a ) N = | ( z ) N − ( z ) N | ^ a (cid:16) ( dz a ) N ∧ ( dz a ) N ∧ ( w a ) (2) N,N ∧ ( w a ) (2) N,N (cid:17) . Combining (7.23) and (7.25), we obtain(7.26) N ^ j =1 ^ a (˘ λ a ) N,j = | ( z ) N − ( z ) N | N − Y j =1 | ( z ) j − ( z ) N | | ( z ) j − ( z ) N | × ^ a (cid:16) ( dz a ) N ∧ ( dz a ) N ∧ ( w a ) (2) N,N ∧ ( w a ) (2) N,N (cid:17) N − ^ j =1 ^ a ( ˘ w a ) N,j . Replacing N in equation (7.26) first by N − 1, then by N − ^ i ≥ j ^ a (˘ λ a ) i,j = N Y i =1 | ( z ) i − ( z ) i | Y j
Proof of Theorem 3.1. Let M , . . . , M n ∈ Mat( H ′ , N × N ) be independent randommatrices each taken from the induced quaternion Ginibre ensemble with parameters m , . . . , m n .Consider the matrix P qu n defined in Section 3.1. We use the generalised Schur decompositionfor matrices from Mat (cid:0) H ′ , N × N (cid:1) , see Corollary 7.16. Thus we assume that M , . . . , M n arewritten as in the statement of Corollary 7.16, and obtainTr M ∗ a M a = Tr Z ∗ a Z a + Tr T ∗ a T a , where Z a and T a are defined by equations (7.5) and (7.6). Taking this into account, and usingthe Jacobian determinantal formula for the product of quaternion matrices in Theorem 7.23 weobtain that the density of P qu n is proportional toe − P na =1 Tr Z ∗ a Z a N Y i =1 n Y a =1 | ( z a ) i | m a N Y i =1 | ( z ) i − ( z ) i | Y j
In this Section we prove Theorem 3.4. Let f ( ζ ) be some finitely supported function definedon C . Assume that ζ , . . . , ζ N are complex random variables with the joint density given byequation (3.2). Starting from equation (3.2), and using standard methods of Random MatrixTheory (see Tracy and Widom [52]) we find E (cid:18) N Y j =1 (1 + f ( ζ j )) (cid:19) = Pf (cid:16)R ( ζ j ¯ ζ k − ζ k ¯ ζ j )( ζ − ¯ ζ )(1 + f ( ζ )) w ( m ,...,m n ) n ( ζ ) d ζ (cid:17) ≤ j,k ≤ N − Pf (cid:16)R ( ζ j ¯ ζ k − ζ k ¯ ζ j )( ζ − ¯ ζ ) w ( m ,...,m n ) n ( ζ ) d ζ (cid:17) ≤ j,k ≤ N − . Introduce the matrix Q = ( Q j,k ) ≤ j,k ≤ N − by Q j,k = Z ( ζ j ¯ ζ k − ζ k ¯ ζ j )( ζ − ¯ ζ ) w ( m ,...,m n ) n ( ζ ) d ζ. By the same argument as in Tracy and Widom [52], § 8, we find K ( m ,...,m n ) N ( z, ζ ) = ( ζ − ¯ ζ ) w ( m ,...,m n ) N ( ζ ) N − P j,k =0 ¯ z j (cid:0) Q − (cid:1) j,k ζ k − N − P j,k =0 ¯ z j (cid:0) Q − (cid:1) j,k ¯ ζ k N − P j,k =0 z j (cid:0) Q − (cid:1) j,k ζ k − N − P j,k =0 z j (cid:0) Q − (cid:1) j,k ¯ ζ k . The matrix elements Q j,k can be written as Q j,k = 2 h ( m ,...,m n ) j +1 δ k,j +1 − h ( m ,...,m n ) k δ k +1 ,j , where h ( m ,...,m n ) k are defined by equation (2.7). From this representation it is not hard to seethat the matrix Q − is an antisymmetric matrix defined by the following conditions • (cid:0) Q − (cid:1) i, j +1 = 2 n ( j − i ) π n n Y a =1 Γ (cid:0) m a + j + 1 (cid:1) Γ ( m a + 2 j + 2) Γ (cid:0) m a + i + 1 (cid:1) , ≤ i ≤ j ≤ N − • (cid:0) Q − (cid:1) j +1 , i = − (cid:0) Q − (cid:1) i, j +1 , ≤ i ≤ j ≤ N − • All other matrix elements are equal to zero.Using the explicit formulae just written above for the matrix elements of Q − we get the formulafor the correlation kernel in the statement of Theorem 3.4. (cid:3) Hole and overcrowding estimates for the generalised quaternion ensemble In this Section we complete the task of extending the results obtained for products of complexmatrices to the case of products of matrices from the induced quaternion Ginibre ensemble. Webegin from the following due to Rider [47]: Proposition 9.1. Set z i = r i e iθ ( i = 1 , . . . , N ) , then we have (9.1) Z π . . . Z π N Y k =1 ( z k − z k ) V ( z , z , . . . , z N , z N ) dθ . . . dθ N = (4 π ) N per h r j − i i Ni,j =1 , where (9.2) V ( x , . . . , x N ) = Y ≤ i Denote by I N ( r , . . . , r N ) the integral in the left-hand side of equation (9.1). We have I N ( r , . . . , r N ) = Z π · · · Z π N Y k =1 ( z k − z k ) V ( z , z , . . . , z N , z N ) dθ · · · dθ N = ( − N Z π · · · Z π N Y k =1 r k (e − iθ k − e iθ k ) × X σ ∈ S (2 N ) sgn( σ ) N Y n =1 e i ( σ (2 n − − σ (2 n )) θ n r σ (2 n − σ (2 n ) − n dθ · · · dθ N . Integrating over variables θ , . . . , θ N we obtain(9.3) I N ( r , . . . , r N ) = (2 π ) N X σ ∈ S (2 N ) sgn( σ ) N Y n =1 ( δ σ (2 n − ,σ (2 n ) − δ σ (2 n − − ,σ (2 n ) ) r σ (2 n − σ (2 n ) − n . ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 43 Now formula (9.1) follows by simple combinatorial arguments. We expand the product ofKronecker deltas inside the sum in equation (9.3), and represent this expression as a sum of 2 N terms. One of these terms is δ σ (1)+1 ,σ (2) δ σ (3)+1 ,σ (4) · · · δ σ (2 N − ,σ (2 N ) . It is not hard to see that the contribution of this term to I N ( r , . . . , r N ) is(2 π ) N X σ ∈ S ( N ) r − σ (1) r − σ (2) · · · r N − N − σ ( N ) = (2 π ) N per h r j − i i Ni,j =1 . Then we check that each of these 2 N terms gives the same contribution to I N ( r , . . . , r N ). Theformula (9.1) follows. (cid:3) Now, we can turn to the proof of Theorem 3.6. We note that V ( z , z , . . . , z N , z N ) = N Y i =1 ( z i − z i ) Y ≤ i Now to prove Theorem 3.8 we use Theorem 3.6, Proposition 2.5, and proceed in the sameway as in the proof of Theorem 2.6.Comparing Theorem 2.6 for complex matrices with Theorem 3.8 for quaternion matrices, wesee that they differ by a simple scaling k → k ; this scaling immediately yields Theorem 3.9.The asymptotic formulae for the hole probability and overcrowding for quaternions given byTheorem 3.10 to Theorem 3.13 are also directly linked to their complex analogues. The proofsare straightforward generalizations of those for complex matrices given in Section 5 and 6. Infact, the proofs follow exactly the same steps as for the complex matrices, and for this reasonwe will not write them out explicitly. Instead, we have added footnotes in Section 5, whichnotify when the proofs for quaternion matrices differs. Acknowledgements: We would like to thank Alon Nishry for discussions. The SFB | TR12“Symmetries and Universality in Mesoscopic Systems” of the German research council DFG &DAAD International Network “From Extreme Matter to Financial Markets” (G.A.), “Stochas-tics and Real World Models” IRTG 1132 (J.R.I) are acknowledged for financial support. TheSchool of Mathematics at the IAS Princeton is thanked for its kind hospitality while part ofthis work was written up. Appendix A.In this appendix we collect some known results about the Euler–Maclaurin summation for-mula as we need them throughout the main part. Theorem A.1. (Second-derivative form of the Euler–Maclaurin summation formula).Assume that f is a function with a continuous second derivative on the interval [1 , n ] . Inaddition, assume that the improper integral Z ∞ | f (2) ( x ) | dx converges. Then we have n X k =1 f ( k ) = Z n f ( x ) dx + D ( f ) + E f ( n ) , where D ( f ) = 12 f (1) − P (0) f ′ (1) − Z ∞ P ( x ) f (2) ( x ) dx, and E f ( n ) = 12 f ( n ) + 12 P (0) f ′ ( n ) + 12 Z ∞ n P ( x ) f (2) ( x ) dx. Here P ( x ) is the second Bernoulli periodic function defined by P ( x ) = B ( x − ⌊ x ⌋ ) , with B ( x ) = x − x + and ⌊ x ⌋ denotes the integer part of x ..Proof. See Apostol [11], Section 4. (cid:3) ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 45 Theorem A.2. (General form of the Euler–Maclaurin summation formula).Assume that f is a function with a continuous derivative of order m + 1 on the interval [1 , n ] .In addition, assume that the improper integral Z ∞ | f (2 m +1) ( x ) | dx converges. Then we have n X k =1 f ( k ) = Z n f ( x ) dx + D ( f ) + E f ( n ) , where D ( f ) = 12 f (1) − m X r =1 B r (2 r )! f (2 r − (1) + 1(2 m + 1)! Z ∞ P m +1 ( x ) f (2 m +1) ( x ) dx, and E f ( n ) = 12 f ( n ) + m X r =1 B r (2 r )! f (2 r − ( n ) − m + 1)! ∞ Z n P m +1 ( x ) f (2 m +1) ( x ) dx. Here P k ( x ) stand for the Bernoulli periodic functions (periodic extensions of the Bernoullipolynomials B k ( x ) ) defined by P k ( x ) = B k ( x − ⌊ x ⌋ ) . The constants B k = P k (0) = P k (1) are the Bernoulli numbers.Proof. See Apostol [11], Section 5. (cid:3) Proposition A.3. The following asymptotic formula holds true (A.1) n X k =1 log( m + k ) = n log( n ) − n + (cid:0) m + (cid:1) log( n ) + C ( m ) + O (cid:0) n − (cid:1) , as n → ∞ . Here C ( m ) is a constant given by (A.2) C ( m ) = m + 1 − m + 1) − (cid:0) + m (cid:1) log(1 + m ) + 12 ∞ Z P ( x ) dx ( m + x ) . In addition, we have (A.3) n X k =1 ( k + m ) log( k + m ) = 12 n log( n ) − n n (cid:0) m + (cid:1) log( n ) + m ( m + 1)2 log( n ) + O (1) , as n → ∞ .Proof. To obtain the first formula, equation (A.1), use the second-derivative form of the Euler-Maclaurin summation formula, Theorem A.1. Application of the general form of the Euler-Maclaurin summation formula with m = 1 (Theorem A.2) gives equation (A.3). (cid:3) References [1] K. Adhikari, N. K. Reddy, T. R. Reddy, K. Saha, [arXiv:1308.6817].[2] G. Akemann, Phys. Rev. D64 (2001) 114021 [arXiv:hep-th/0106053].[3] G. Akemann, J. Baik, P. Di Francesco (Eds.), The Oxford Handbook of Random Matrix Theory , OxfordUniversity Press, Oxford 2011.[4] G. Akemann, Z. Burda, J. Phys. A45 (2012) 465201. [arXiv:1208.0187][5] G. Akemann, M. J. Phillips, L. Shifrin, J. Math. Phys. 50 (2009), 063504. [arXiv:0901.0897].[6] G. Akemann, J. R. Ipsen, M. Kieburg, Phys. Rev. E 88 (2013) 052118 [arXiv:1307.7560].[7] G. Akemann, M. Kieburg, L. Wei, J. Phys. A46 (2013) 27520. [arXiv:1303.5694].[8] G. Akemann, E. Strahov, J. Stat. Phys. 151 (2013) 987 [arXiv:1211.1576].[9] G. Akemann, Z. Burda, M. Kieburg, T. Nagao, J. Phys. A: Math. Theor. 47 (2014) 255202[arXiv:1310.6395].[10] G. W. Anderson, A. Guionnet, O. Zeitouni, An introduction to random matrices. Cambridge UniversityPress, Cambridge, 2010.[11] T. Apostol, Amer. Math. Monthly. 106 (1999) 409.[12] M. Bertola, M. Gekhtman, J. Szmigielski, Comm. Math. Phys. 287 (2009) 983 [arXiv:0804.0873].[13] M. Bertola, M. Gekhtman, J. Szmigielski, Comm. Math. Phys. 326 (2014) 111 [arXiv:1211.5369].[14] A. Borodin, Determinantal point processes. Chapter 11 in [3]. [arXiv:0911.1153].[15] A. Borodin, Nuclear Phys. B 536 (1999) 704. [arXiv:math/9804027].[16] N. G. de Bruijn, J. Indian Math. Soc. 19 (1955) 133.[17] Z. Burda, R. A. Janik, B. Waclaw, Phys. Rev. E 81 (2010) 041132 [arXiv:0912.3422v2].[18] Z. Burda, Preprint (2013) [arXiv:1309.2568].[19] P. A. Deift, Orthogonal polynomials and random matrices: a Riemann-Hilbert approach. AMS, Providence,RI, 1999.[20] Digital Library of Mathematical Functions, http://dlmf.nist.gov/[21] R. Durrett, Probability: theory and examples. Fourth edition. Cambridge University Press, Cambridge,2010.[22] J. Feinberg, A. Zee, Nucl. Phys. B 501 (1997) 643. [arXiv:cond-mat/9704191].[23] J. Feinberg, R. Scalettar, A. Zee, J. Math. Phys. 42 (2001) 5718. [arXiv:cond-mat/0104072].[24] J. Feinberg, J. Phys. A 39 (2006) 10029. [arXiv:cond-mat/0603622].[25] J. Fischmann, W. Bruzda, B. A. Khoruzhenko, H.-J. Sommers, K. ˙Zyczkowski, J. Phys. A 45 (2012)075203. [arXiv:1107.5019].[26] P. J. Forrester, Phys. Lett. A169 (1992) 21.[27] P. J. Forrester, Log-gases and random matrices. Princeton University Press, Princeton, NJ, 2010.[28] P. J. Forrester, J. Phys. A 47 (2014) 065202 [arXiv:1309.7736].[29] Y. V. Fyodorov, B. Mehlig, Phys. Rev. E 66 (2002) 045202 [arXiv:cond-mat/0207166].[30] Y. V. Fyodorov, H.-J. Sommers, J. Phys. A36 (2003) 3303 [arXiv:nlin.CD/0207051].[31] J. Ginibre, J. Math. Phys. 6 (1965), 440.[32] A. Guionnet, M. Krishnapur, O. Zeitouni, Ann. of Math. 174 (2011) 1189. [arXiv:0909.2214].[33] F. G¨otze, A. Tikhomirov, Preprint (2010) [arXiv:1012.2710].[34] J. B. Hough, M. Krishnapur, Y. Peres, B. Vir´ag, Zeros of Gaussian analytic functions and determinantalpoint processes. AMS, Providence, RI, 2009.[35] J. B. Hough, M. Krishnapur, Y. Peres, B. Vir´ag, Probab. Surv. 3 (2006), 206. [arXiv:math/0503110].[36] R. A. Horn, C. R. Johnson, Matrix Analysis , Cambridge University Press, 1985.[37] J. R. Ipsen, J. Phys. A 46 (2013) 265201. [arXiv:1301.3343].[38] J. R. Ipsen, M. Kieburg, Phys. Rev. E 89 (2014) 032106 [arXiv:1310.4154].[39] E. Kanzieper, J. Phys. A 35 (2002) 6631 [arXiv:cond-mat/0109287].[40] B. A. Khoruzhenko, H.-J. Sommers, Non-Hermitian ensembles Chapter 18 in [3].[41] E. Kostlan, Lin. Alg. Appl. 162/164 (1992), 385. ERMANENTAL PROCESSES FROM PRODUCTS OF RANDOM MATRICES 47 [42] A. B. J. Kuijlaars, L. Zhang, Preprint (2013) [arXiv:1308.1003].[43] M. Krishnapur, J. Stat. Phys. 124 (2006) 1399. [arXiv:math/0510588].[44] Y. L. Luke, The special functions and their approximations. Academic Press, New York 1969.[45] M. L. Mehta, Random Matrices Volume 142, Third Edition (Pure and Applied Mathematics).[46] L. Pastur, M. Shcherbina, Eigenvalue distribution of large random matrices. AMS, Providence, RI, 2011.[47] B. Rider, Prob. Theory Related Fields 130 (2004) 337. [arXiv:math/0312043].[48] S. O’Rourke, D. Renfrew, A. Soshnikov, V. Vu Preprint (2014) [arXiv:1403.6080].[49] M. D. Springer, W. E. Thompson, SIAM J. Appl. Math. 18 (1970) 721.[50] E. Strahov, unpublished notes (2013).[51] E. Strahov, J. Phys. A 47 (2014) 325203 [arXiv:1403.6368].[52] C. A. Tracy, H. Widom, J. Stat. Phys. 96 (1998) 809. [arXiv:solv-int/9804004].[53] L. Zhang, J. Math. Phys. 54 (2013) 083303 [arXiv:1305.0726].[54] F. Zhang, Lin. Alg. App. 251 (1997) 21.[55] K. ˙Zyczkowski, H.-J. Sommers, J. Phys. A33 (2000) 2045 [arXiv:1107.5019]. Department of Physics, Bielefeld University, P.O. Box 100131, D-33501 Bielefeld, Germany E-mail address : [email protected] Department of Physics, Bielefeld University, P.O. Box 100131, D-33501 Bielefeld, Germany E-mail address : [email protected] Department of Mathematics, The Hebrew University of Jerusalem, Givat Ram, Jerusalem 91904,Israel E-mail address ::