Free products of large random matrices - a short review of recent developments
Zdzislaw Burda∗
Marian Smoluchowski Institute of Physics and Mark Kac Center for Complex Systems Research, Jagiellonian University, Reymonta 4, PL 30-059 Cracow, Poland
∗ [email protected]
We review methods to calculate eigenvalue distributions of products of large random matrices. We discuss a generalization of the law of free multiplication to non-Hermitian matrices and give a couple of examples illustrating how to use these methods in practice. In particular we calculate eigenvalue densities of products of Gaussian Hermitian and non-Hermitian matrices, including combinations of GUE and Ginibre matrices.
I. INTRODUCTION
Products of random matrices arise in many areas of research and engineering [1]. In studies of dynamical systems one uses products of random matrices to derive limiting laws for Lyapunov exponents [2]; in information theory and wireless telecommunication – to calculate channel capacities for serial MIMO (multiple-input multiple-output) transmission [3]. With the help of products of random matrices one investigates unitary evolution [4], matrix diffusion [5] or phase transitions in quantum Yang-Mills theories [6]. One encounters products of random matrices in studies of quantum entanglement [7], financial engineering [8], and many other fields of research [9, 10].

The problem of random matrix multiplication has attracted attention since the sixties [11, 12]. Originally the studies were mostly concentrated on deriving limiting laws for products of infinitely many finite random matrices [10], with some exceptions where limiting laws for infinite matrices were also formulated [2]. Recently the focus has shifted to products of infinitely large matrices due to the discovery of a correspondence between infinitely large invariant random matrices and free random variables [13, 14], which have been studied in the framework of free probability [15] – a branch of probability theory that has been intensively developed since the eighties [16, 17]. Many results derived in this theory have been applied to large random matrices in the limit of infinite matrix size N → ∞.

In this paper we review some of these results and give practical recipes to calculate eigenvalue densities of products of large random matrices. Instead of giving formal proofs we gradually build up intuition, passing from matrix addition through multiplication of Hermitian random matrices to multiplication of non-Hermitian matrices. Free addition of large random matrices is a counterpart of addition of independent random variables in classical probability. We start with addition because it is much simpler than multiplication, but it already contains the essential ingredients needed to understand the special role played by invariant random matrices in the formulation of limiting laws for eigenvalue distributions in the large N limit. The key feature of invariant random matrices is that their eigenvectors are not correlated with the eigenvalues, and that they are uniformly distributed. The last statement means that the unitary matrix constructed from the normalized eigenvectors is uniformly distributed on the unitary group. In effect, the randomness of such matrices is entirely shaped by the probability distribution of the eigenvalues. The sum of independent invariant random matrices is an invariant random matrix. In the limit N → ∞ the eigenvalue density of the sum depends only on the eigenvalue densities of the matrices added. This law is very similar to the law of addition in classical probability, where the probability distribution of the sum of independent random variables can be calculated from the probability distributions of these variables. Similarities to classical probability go far beyond that. One can write a dictionary between classical and free probability. In this dictionary, a random variable corresponds to an invariant random matrix for N → ∞, the probability density function to the eigenvalue density, the characteristic function to the Green's function, the cumulant generating function (logarithm of the characteristic function) to the R transform, the Mellin transform of the probability density function to the S transform, independence to freeness, etc.
Using this correspondence one can find a full analogy between free addition and free multiplication of large invariant matrices and addition and multiplication of independent random variables.

It is worth emphasizing that the laws of free addition and free multiplication can be independently derived using field theoretical methods, including perturbation theory of invariant matrix models [18, 19], without referring to free probability. Perturbation theory is a way of graphical enumeration of planar Feynman diagrams that arise in the large N limit [20]. A great advantage of the field theoretical approach is that it also works in the case of non-Hermitian matrices and allows one to derive a non-Hermitian version of the multiplication law [21]. To the best of our knowledge this result is so far beyond the reach of free probability. We briefly recall this result here.

The paper is organized as follows. In section II we discuss eigenvalues of a sum of random matrices. In particular we define a slightly modified version of matrix addition – such that it is independent of the eigenvectors of the matrices added. This brings us to the class of invariant random matrices that are discussed in section III. In section IV we recall the definitions of the Green's function, the moment generating function and the R transform. These functions are used in the derivation of limiting laws for invariant matrices in the large N limit. We employ these functions in the formulation of the law of free addition. In section V we discuss the law of free multiplication for invariant Hermitian matrices. In section VI we discuss consequences of this law for multiplication of infinitely large isotropic non-Hermitian matrices. Finally in section VII we show how to calculate eigenvalue densities for products of large independent non-Hermitian matrices. We illustrate the calculations with examples. We conclude the paper with a short summary in section VIII.

II. EIGENVALUES OF THE SUM OF RANDOM HERMITIAN MATRICES
Let a and b be N × N Hermitian matrices. We are interested in the eigenvalues of the sum a + b. To calculate them it is not sufficient to know the eigenvalues of a and b. One would also need to know their eigenvectors. Actually it would be enough to know the relative positions of the eigenvectors of a and b: for example the decomposition of the b-eigenvectors in the eigenbasis of a. One can, however, define a non-standard addition of matrices that depends only on the eigenvalues of the individual matrices in the sum and not on their eigenvectors. This new addition is defined in a probabilistic way as a sum

a ⊞ b = u†au + v†bv, (1)

where u, v are independent Haar unitary matrices and u† denotes the Hermitian conjugate of u. Let us recall that a Haar unitary matrix is a random matrix distributed according to the uniform measure on the unitary group U(N). The matrix a ⊞ b is not a concrete matrix but a random matrix, or equivalently an ensemble of matrices with a certain probability measure. The eigenvalues of a ⊞ b are random variables. One is interested in the joint probability distribution of these random variables. The marginal distribution of the joint probability distribution is referred to as the eigenvalue distribution. One should note that the transformations a → u†au and b → v†bv preserve the eigenvalues of the individual terms, but uniformly randomize their eigenvectors. One can therefore expect that the eigenvalue distribution of a ⊞ b depends only on the eigenvalues of a and b. Moreover, since the sum u†au + v†bv can be written as u†(a + w†bw)u, where w = vu^{-1} is also a Haar unitary matrix, the eigenvalue distribution of the matrix a + w†bw averaged over w is the same as the eigenvalue density of a ⊞ b. In other words it is sufficient to randomize the eigenvectors of one matrix against the other.

One can consider a more general version of the problem in which a and b are independent random matrices instead of deterministic ones. In this case the question is whether one can determine the eigenvalue distribution of a ⊞ b from the eigenvalue distributions of the individual matrices a and b. The answer is affirmative for N → ∞. In this limit the eigenvalue density ρ_{a⊞b}(λ) of the sum a ⊞ b can indeed be determined solely from the eigenvalue densities ρ_a(λ) and ρ_b(λ). For finite N the answer is more complicated since it depends also on eigenvalue correlations. The correlations can be neglected only in the limit N → ∞. The addition (1) is called free addition when N → ∞ and a and b are independent. In the next section we shall see that the addition (1) is equivalent to the standard matrix addition a + b for invariant random matrices.
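The definition (1) is straightforward to explore numerically. The sketch below is ours, not from the paper; the helper names `haar_unitary` and `free_sum_spectrum` are ad hoc, and the Haar sampling uses the standard QR recipe applied to a complex Gaussian matrix:

```python
import numpy as np

def haar_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix gives a Haar unitary,
    # after fixing the phase ambiguity of the diagonal of R
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def free_sum_spectrum(eigs_a, eigs_b, rng):
    # one random realization of u^dag a u + v^dag b v from equation (1);
    # note that only the eigenvalues of a and b enter
    n = len(eigs_a)
    u, v = haar_unitary(n, rng), haar_unitary(n, rng)
    m = u.conj().T @ np.diag(eigs_a) @ u + v.conj().T @ np.diag(eigs_b) @ v
    return np.linalg.eigvalsh(m)

rng = np.random.default_rng(0)
n = 1000
# two deterministic spectra: a has eigenvalues +-1, b is uniform on [-1, 1]
lam = free_sum_spectrum(np.repeat([-1.0, 1.0], n // 2),
                        rng.uniform(-1.0, 1.0, n), rng)
hist, edges = np.histogram(lam, bins=50, density=True)  # approximates the density of a boxplus b
```

For large n the histogram becomes independent of the random draw of u and v, illustrating that the eigenvalue distribution of (1) is determined by the two input spectra alone.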
III. INVARIANT HERMITIAN MATRICES AND FREE MATRICES

Random matrices are formally defined by matrix ensembles equipped with probability measures. Let us denote the measure of an N × N Hermitian random matrix x by µ(x). A random matrix x is called invariant if the measure is invariant under the transformation x → U†xU, where U is an arbitrary unitary matrix: µ(x) = µ(U†xU). Let us give two standard examples.

One can construct an invariant random matrix from a diagonal random matrix Λ = diag(λ_1, ..., λ_N) and a Haar unitary random matrix u

a = u†Λu. (2)

The invariance of the measure for a, µ(a) = µ(U†aU), is a direct consequence of the invariance of the Haar measure µ_Haar(u) = µ_Haar(uU) under right multiplication by a unitary matrix U. One can choose the diagonal elements λ_i of Λ to be independent, identically distributed real random variables. In such a case the joint eigenvalue density factorizes, ρ_N(λ_1, ..., λ_N) = p(λ_1)⋯p(λ_N), where p(λ) is the probability density function for the random variables representing the diagonal elements of Λ. The factorization of the joint eigenvalue density reflects the independence of the eigenvalues. The eigenvalue density is ρ(λ) = p(λ).

In physical applications one frequently encounters invariant measures of the form [22, 23]

dµ(a) ∝ e^{−N Tr V(a)} da, (3)

where V(a) is a polynomial or a power series in a and da = \prod_{1≤i≤N} da_{ii} \prod_{1≤i<j≤N} d Re a_{ij} d Im a_{ij} is the flat measure for Hermitian matrices.

IV. GREEN'S FUNCTION, R TRANSFORM AND FREE ADDITION

For N → ∞ the eigenvalue density of an invariant random matrix a can be described by an eigenvalue density ρ_a(λ) having support I on the real axis. Typically I is a finite interval. In calculations it is convenient to use the Stieltjes transform of the density

G_a(z) = \int_I \frac{ρ_a(λ)\, dλ}{z − λ}, (6)

which is a complex function defined on the complex plane outside I. This function is often called the Green's function. The density can be reconstructed from the Green's function G_a(z) by calculating the imaginary part of the Green's function close to the real axis

ρ_a(λ) = −\frac{1}{π} \lim_{ε→0^+} Im\, G_a(λ + iε). (7)

The power expansion of G_a(z) at infinity

G_a(z) = \sum_{n=0}^{∞} \frac{m_{a,n}}{z^{n+1}} (8)

generates the moments of the eigenvalue distribution

m_{a,n} = \int_I dλ\, ρ_a(λ)\, λ^n. (9)

These moments are equivalent to the trace moments

m_{a,n} = \left\langle \frac{1}{N} Tr\, a^n \right\rangle = \int dµ(a)\, \frac{1}{N} Tr\, a^n, (10)

calculated with respect to the probability measure µ(a) for the entire matrix a. The zeroth moment is fixed, m_{a,0} = 1, by the normalization of the measure. One often defines another moment generating function as a power series at zero

φ_a(z) = \sum_{n=1}^{∞} m_{a,n} z^n. (11)

It is related to the Green's function by

φ_a(z) = \frac{1}{z} G_a\!\left(\frac{1}{z}\right) − 1. (12)

The eigenvalue density ρ_a(λ), the moment generating function φ_a(z), and the Green's function G_a(z) contain, roughly speaking, the same information about the eigenvalue distribution of the invariant matrix a, and one can reproduce these functions from each other. The idea is to calculate these functions for a + b when the corresponding functions for a and b are given. Let us first calculate the moments of a + b

m_{a+b,n} = \left\langle \frac{1}{N} Tr(a + b)^n \right\rangle = \int\!\!\int dµ_a(a)\, dµ_b(b)\, \frac{1}{N} Tr(a + b)^n. (13)

The measure factorizes, since a and b are independent by assumption. For n = 1, 2, ... the last equation gives

m_{a+b,1} = m_{a,1} + m_{b,1},
m_{a+b,2} = m_{a,2} + m_{b,2} + 2 m_{a,1} m_{b,1},
... (14)

In the second equation we replaced ⟨(1/N) Tr ab⟩ by m_{a,1} m_{b,1}. The equality ⟨(1/N) Tr ab⟩ = m_{a,1} m_{b,1} follows from the equation \int\!\int dµ_a(a)\, dµ_b(b)\, \frac{1}{N} Tr\,(a − m_{a,1})(b − m_{b,1}) = 0, which holds in the limit N → ∞ as a result of the invariance and independence of a and b.
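As an aside, the pair (6)–(7) can be applied directly to a finite sample of eigenvalues. A minimal numerical illustration (ours; the function names are ad hoc, and we feed in a GUE matrix whose limiting density is derived in the next section):

```python
import numpy as np

def green(z, eigs):
    # finite-N approximation of equation (6): G(z) = (1/N) sum_i 1/(z - lambda_i)
    return np.mean(1.0 / (z - eigs))

def density_from_green(x, eigs, eps=1e-2):
    # equation (7) with a small but finite eps: rho(x) ~ -(1/pi) Im G(x + i eps)
    return -np.imag(green(x + 1j * eps, eigs)) / np.pi

rng = np.random.default_rng(0)
n = 2000
h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eigs = np.linalg.eigvalsh((h + h.conj().T) / (2.0 * np.sqrt(n)))  # standardized GUE
print(density_from_green(0.0, eigs))  # ~ 1/pi = 0.318..., the semicircle value at 0
```

The reconstruction improves as N grows and eps shrinks, in the order required by the limit in (7).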
The relations between moments (14) can be written in a compact way using free cumulants, which are certain combinations of moments [15]: κ_{a,1} = m_{a,1}, κ_{a,2} = m_{a,2} − m_{a,1}^2, κ_{a,3} = m_{a,3} − 3 m_{a,1} m_{a,2} + 2 m_{a,1}^3, etc. Equations (14) correspond to

κ_{a+b,n} = κ_{a,n} + κ_{b,n}, (15)

for n = 1, 2, .... The concept of free cumulants is analogous to that of the standard cumulants in classical probability. Let us recall that cumulants for a real random variable are generated as coefficients of the power series expansion of the logarithm of the characteristic function. The logarithm of the characteristic function for a sum of independent random variables is equal to the sum of logarithms of the characteristic functions of these random variables. Therefore cumulants for the sum of independent variables are additive, exactly as free cumulants (15). Free cumulants are, however, generated in a different way. The generating function for free cumulants is called the R transform [15, 16]. Free cumulants are equal to the coefficients of the power series expansion

R(z) = \sum_{n=1}^{∞} κ_n z^{n−1}. (16)

Note that there is a shift between the power of z and the index of the corresponding cumulant. The R transform is related to the Green's function, which generates the moments of the eigenvalue distribution. The relation reads

G(z) = \frac{1}{z − R(G(z))}. (17)

We give it without proof [15]. One can find a diagrammatic interpretation of this equation [21] using field theoretical methods for planar graph enumeration [18, 19]. Using equation (17) one can find G(z) if R(z) is known. One can also invert equation (17)

G\!\left(R(z) + \frac{1}{z}\right) = z. (18)

This form is useful when one wants to determine the R transform for a given Green's function. The R transform for the sum of free matrices a + b is the sum of the R transforms for a and b [15, 16]

R_{a+b}(z) = R_a(z) + R_b(z), (19)

exactly as the logarithm of the characteristic function for the sum of independent real random variables in classical probability. The additivity of the R transform reflects the invariance and independence of the measures for a and b in (13). For N → ∞ these two properties correspond to freeness.

Explicit relations between free cumulants κ_n and moments m_n can be found by inserting the power series R(z) = κ_1 + κ_2 z + ... and G(z) = 1/z + m_1/z^2 + ... into equation (18). The first three free cumulants are identical to the standard ones: κ_1 = m_1, κ_2 = m_2 − m_1^2, κ_3 = m_3 − 3 m_1 m_2 + 2 m_1^3. From the fourth one on they differ. The fourth free cumulant is κ_4 = m_4 − 4 m_1 m_3 − 2 m_2^2 + 10 m_1^2 m_2 − 5 m_1^4, while the standard one is κ_4 = m_4 − 4 m_1 m_3 − 3 m_2^2 + 12 m_1^2 m_2 − 6 m_1^4.

The R transform was first introduced in free probability by Voiculescu [16]. Later it was observed that free cumulants are related to a combinatorial problem of non-crossing lines on a plane [14, 25]. For this reason free cumulants are sometimes called non-crossing cumulants. For completeness we mention that equation (18) can be deduced [21, 26] using field theoretical methods of planar graph enumeration [18]. The Green's function and the R transform correspond to sums over certain classes of diagrams, and equations like (17) to Dyson-Schwinger equations [18]. The combinatorics of planar Feynman diagrams for matrix models is equivalent to the combinatorial approach to freeness [14, 25].
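The moment-cumulant relations can be generated to any order with a few lines of code. The sketch below (ours, not from the paper) implements the standard non-crossing recursion m_n = Σ_k κ_k [z^{n−k}] M(z)^k, with M(z) = 1 + m_1 z + m_2 z^2 + ..., which is equivalent to expanding (17)–(18):

```python
import numpy as np

def free_cumulants(m):
    # free cumulants kappa_1..kappa_n from moments m = [m_1, ..., m_n]
    n = len(m)
    mom = np.concatenate(([1.0], np.asarray(m, dtype=float)))   # mom[i] = m_i, m_0 = 1
    powers = [np.zeros(n + 1)]
    powers[0][0] = 1.0                                          # M(z)^0 = 1
    for k in range(1, n + 1):
        # powers[k][j] = coefficient of z^j in M(z)^k, by truncated convolution
        powers.append(np.convolve(powers[-1], mom)[: n + 1])
    kappa = np.zeros(n + 1)
    for i in range(1, n + 1):
        kappa[i] = mom[i] - sum(kappa[k] * powers[k][i - k] for k in range(1, i))
    return kappa[1:]

# moments of the semicircle law: odd moments vanish, even ones are Catalan numbers
print(free_cumulants([0, 1, 0, 2, 0, 5, 0, 14]))
# -> [0. 1. 0. 0. 0. 0. 0. 0.]   only kappa_2 = 1 survives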
We close this section by giving a standard example. Recall that in classical probability the normal random variable with the probability density function p(x) = \frac{1}{\sqrt{2π}} e^{−x^2/2} has all cumulants equal to zero except κ_2 = 1. What is the corresponding random matrix eigenvalue density in free probability that has all free cumulants equal to zero except the second one, κ_2 = 1? We want to find the eigenvalue density corresponding to the R transform R(z) = z (16). Using (17) we first find an equation for the Green's function, G(z) = 1/(z − G(z)). It can be solved for G(z): G(z) = \frac{1}{2}\left(z − \sqrt{z^2 − 4}\right), where the branch of the square root is chosen such that G(z) ∼ 1/z for large z. The eigenvalue density is (7) ρ(λ) = \frac{1}{2π}\sqrt{4 − λ^2} for λ ∈ [−2, 2] and zero otherwise. It is the Wigner semicircle law. Odd moments of this distribution are equal to zero, while the even moments m_{2k} = \int_{−2}^{2} dλ\, ρ(λ)\, λ^{2k} = \frac{1}{k+1}\binom{2k}{k} are equal to the Catalan numbers. One can use equation (5) to find that the corresponding potential is quadratic, V(x) = x^2/2, so that dµ(a) ∝ e^{−N Tr a^2/2}. Let us slightly modify the R transform and consider R(z) = α + σ^2 z. Without repeating the calculations one can easily argue on general grounds that the corresponding eigenvalue density is shifted as compared to the previous case by the first cumulant κ_1 = α and rescaled by the second one κ_2 = σ^2, leading to ρ(λ) = \frac{1}{2πσ^2}\sqrt{4σ^2 − (λ − α)^2} and V(x) = (x − α)^2/(2σ^2).

Let us now calculate the eigenvalue density of the sum of two independent large (N → ∞) invariant matrices a and b that have eigenvalue densities given by the Wigner semicircle law with different shift and scale parameters. Denote these parameters by α_a, σ_a for a and α_b, σ_b for b, respectively. The R transforms are R_a(z) = α_a + σ_a^2 z and R_b(z) = α_b + σ_b^2 z. To find the eigenvalue density of the sum c = a + b we first calculate its R transform, which is equal to R_c(z) = α_c + σ_c^2 z with α_c = α_a + α_b and σ_c^2 = σ_a^2 + σ_b^2. The R transform is exactly of the same form as for a and b. Therefore the eigenvalue density for c is also given by the Wigner semicircle law. The situation is analogous to that in classical probability, where the sum of two Gaussian random variables is known to be a Gaussian random variable. One says that the semicircle law is stable with respect to free addition, in the same way as the Gaussian law is stable with respect to addition in classical probability. Actually there exists a bijection between stable laws in classical and free probability that allows one to classify all stable laws in free probability [27]. Just as a curiosity we mention that the only distribution that is stable with respect to both addition (of random variables) and free addition (of random matrices) is the Cauchy distribution.
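A quick Monte Carlo check of the additivity (19) for two semicircular matrices (our sketch; we take α_a = α_b = 0 so that only the scale parameters matter):

```python
import numpy as np

def gue(n, sigma, rng):
    # GUE normalized so that the limiting density is a semicircle of
    # variance sigma^2, supported on [-2 sigma, 2 sigma]
    h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return sigma * (h + h.conj().T) / (2.0 * np.sqrt(n))

rng = np.random.default_rng(1)
n, sa, sb = 2000, 1.0, 0.5
lam = np.linalg.eigvalsh(gue(n, sa, rng) + gue(n, sb, rng))

# R transforms add: R_c(z) = (sa^2 + sb^2) z, so the sum is again a
# semicircle with sigma_c^2 = sa^2 + sb^2; compare e.g. the spectral edges
sc = np.hypot(sa, sb)
print(lam.min(), lam.max(), "expected edges:", -2 * sc, 2 * sc)
```

The edges match up to the usual O(N^{-2/3}) finite-size fluctuations.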
V. PRODUCTS OF LARGE INVARIANT MATRICES

As we have seen, the law of addition for independent large invariant matrices can be concisely expressed in terms of the R transform (19). Before discussing the law of multiplication it is convenient to recall the corresponding law in classical probability. Let x and y be independent non-negative real random variables. What is the distribution of the random variable xy? A way to derive this distribution is via the Mellin transform of the probability density function. The Mellin transform M_{xy}(s) of the product xy is the product of the Mellin transforms for x and y: M_{xy}(s) = M_x(s) M_y(s). Clearly this multiplication law is analogous to the addition law discussed in the previous section (19). It remains to find the corresponding law for products of large invariant matrices. Let a and b be independent invariant matrices. For the moment we also assume a and b to be positive semi-definite. Later we will weaken this restriction. The first trivial observation is that even if a and b are Hermitian, the product ab is generically non-Hermitian, so its eigenvalues are complex. Therefore, it is convenient to introduce a slightly modified version of the multiplication [28]

a · b = \sqrt{a}\, b\, \sqrt{a}. (20)

We use a dot in the notation to distinguish a · b from the standard matrix multiplication ab. The square root of a positive semi-definite matrix is defined as \sqrt{a} = U† diag(\sqrt{λ_1}, ..., \sqrt{λ_N}) U, where U is a unitary matrix that diagonalizes a: a = U† diag(λ_1, ..., λ_N) U. The product a · b of Hermitian matrices is Hermitian. The product a · b of invariant random matrices is an invariant random matrix. Moreover, the probability measure µ_{a·b} for the product a · b is identical to that for b · a, µ_{a·b} = µ_{b·a}, so this product is in a sense commutative. One should also note that the trace moments for the product a · b (20) are identical to those for the standard product: \frac{1}{N} Tr(a · b)^k = \frac{1}{N} Tr(ab)^k.

The law of free multiplication was formulated by Voiculescu in free probability [15, 17]. Later it was shown that it applies also to the multiplication (20) of invariant random matrices in the limit N → ∞ [28]. The multiplication law is expressed in terms of the S transform

S_{a·b}(z) = S_a(z) S_b(z), (21)

where S_a(z) is a complex function. The S transform S_a(z) can be constructed from the moment generating function φ_a(z) (11), or actually from the inverse function of φ_a, usually denoted by χ_a: χ_a(φ_a(z)) = φ_a(χ_a(z)) = z. The relation reads [17]

S_a(z) = \frac{z + 1}{z}\, χ_a(z). (22)

The S transform can be viewed as a series in z

S_a(z) = \frac{1}{m_{a,1}} + \frac{m_{a,1}^2 − m_{a,2}}{m_{a,1}^3}\, z + \frac{2 m_{a,2}^2 − m_{a,1} m_{a,3} − m_{a,1}^2 m_{a,2}}{m_{a,1}^5}\, z^2 + ... (23)

that is obtained by inserting the inverse series in place of φ_a(z) in equation (22). The coefficients of this series are related to the moments m_{a,k}. The coefficients are singular for m_{a,1} = 0, in which case the S transform is ill-defined. The S transform is related to the R transform as follows [15, 21]

S(z) = \frac{1}{R(zS(z))}. (24)

If one knows the R transform then the last equation can be used to calculate the S transform. One can also invert the last equation [21]

R(z) = \frac{1}{S(zR(z))}, (25)

which allows one to calculate the R transform if the S transform is known. Using these relations one can show that the multiplication law (21) can be alternatively written in the R transform form [21]:

R_{a·b}(z) = R_a(w) R_b(v),
v = z R_a(w),
w = z R_b(v). (26)

This set of equations looks more complicated than (21), but in some situations it is more advantageous. For example, it can be used when the first moments vanish, that is when the S transforms are ill-defined. Moreover, this form can be generalized to the case of non-Hermitian matrices, as we discuss in section VII. The two forms of the multiplication law are in a sense complementary. The original S transform form (21) has a very simple and appealing structure that reflects important properties of free products, for example that the multiplication a · b (20) of invariant random matrices is commutative and associative: S_{(a·b)·c}(z) = S_{c·(a·b)}(z). This follows from the fact that the multiplication on the right hand side of (21) is just multiplication of complex numbers, which is commutative and associative.
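One can verify numerically that a · b and ab carry the same spectral information for positive semi-definite a, as used around (20). A sketch (ours) under the convention that a and b are squares of standardized GUE matrices:

```python
import numpy as np

def sqrtm_psd(a):
    # square root of a positive semi-definite Hermitian matrix via its eigenbasis
    w, u = np.linalg.eigh(a)
    return (u * np.sqrt(np.clip(w, 0, None))) @ u.conj().T

def gue_squared(n, rng):
    h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    g = (h + h.conj().T) / (2 * np.sqrt(n))
    return g @ g          # positive semi-definite Wishart-type matrix

rng = np.random.default_rng(2)
n = 500
a, b = gue_squared(n, rng), gue_squared(n, rng)

ra = sqrtm_psd(a)
dot = np.linalg.eigvalsh(ra @ b @ ra)           # spectrum of a . b, Hermitian
std = np.sort(np.linalg.eigvals(a @ b).real)    # spectrum of ab, real because ab ~ a . b
print(np.max(np.abs(dot - std)))                # agree up to numerical error
```

The agreement reflects the similarity transformation ab = √a (√a b √a) √a^{-1}, valid whenever a is invertible.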
Let us give an example to illustrate how these formal relations work in practice. Consider the product a · b of two independent identical Wishart matrices [29]. A Wishart matrix can be thought of as the square of a Gaussian matrix from the GUE ensemble [22]. The matrices a and b have the eigenvalue densities ρ_a(λ) = ρ_b(λ) = ρ(λ) with

ρ(λ) = \frac{1}{2π}\sqrt{\frac{4 − λ}{λ}} (27)

for λ ∈ [0, 4] and ρ(λ) = 0 for λ outside this interval. The Green's function (6) corresponding to this density is

G(z) = \frac{1}{2}\left(1 − \sqrt{\frac{z − 4}{z}}\right) (28)

and the moment generating function (12) is

φ(z) = \frac{1}{2z}\left(1 − \sqrt{1 − 4z}\right) − 1. (29)

The inverse function of φ fulfills the equation z = \frac{1}{2χ(z)}\left(1 − \sqrt{1 − 4χ(z)}\right) − 1, which gives

χ(z) = \frac{z}{(1 + z)^2}. (30)

Thus the S transform is (22)

S(z) = \frac{1}{1 + z}. (31)

We now want to find the S transform for the product, S_{a·b}(z), from S_a(z) = S_b(z) = S(z). Using the multiplication law (21) we obtain

S_{a·b}(z) = \frac{1}{(1 + z)^2}. (32)

We now proceed to find the eigenvalue density ρ_{a·b}(λ) that corresponds to the S transform given above. First we write the equation for the inverse function of the moment generating function, χ_{a·b}(z) = z/(1 + z)^3, using (22). We invert it for φ_{a·b}(z):

z = \frac{φ_{a·b}(z)}{(1 + φ_{a·b}(z))^3}. (33)

It is a cubic equation for φ_{a·b}(z). Replacing φ_{a·b}(z) with the Green's function (12) we have

z^2 G_{a·b}^3(z) − z G_{a·b}(z) + 1 = 0. (34)

This equation has three solutions given by the Cardano formulas. We select the one which asymptotically behaves as 1/z for large z. Using (7) we find the density

ρ_{a·b}(λ) = \frac{2^{2/3}\sqrt{3}}{12π}\, \frac{\left(27 + 3\sqrt{81 − 12λ}\right)^{2/3} − 3 \cdot 2^{2/3}\, λ^{1/3}}{λ^{2/3}\left(27 + 3\sqrt{81 − 12λ}\right)^{1/3}} (35)

for λ ∈ [0, 27/4] and zero otherwise.
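The result (35) can be confronted with a direct simulation. The sketch below (ours) encodes the closed form and compares it with a histogram of eigenvalues of a product of two Wishart matrices, each built as the square of a standardized GUE matrix:

```python
import numpy as np

def rho_two_wisharts(lam):
    # closed-form density (35) on the support (0, 27/4]
    t = (27 + 3 * np.sqrt(81 - 12 * lam)) ** (1 / 3)
    pref = 2 ** (2 / 3) * np.sqrt(3) / (12 * np.pi)
    return pref * (t ** 2 - 3 * 2 ** (2 / 3) * lam ** (1 / 3)) / (lam ** (2 / 3) * t)

rng = np.random.default_rng(3)
n = 1000
def wishart(rng):
    h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    g = (h + h.conj().T) / (2 * np.sqrt(n))
    return g @ g

lam = np.linalg.eigvals(wishart(rng) @ wishart(rng)).real  # real: similar to a Hermitian matrix
hist, edges = np.histogram(lam, bins=60, range=(0, 27 / 4), density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(hist[[5, 20, 40]])                 # sampled histogram values ...
print(rho_two_wisharts(mid[[5, 20, 40]]))  # ... versus the analytic curve
```

Away from the singular edge at λ = 0 (where the density diverges as λ^{-2/3}) the histogram converges to (35) as n grows.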
We close this section with a few general comments. The multiplication law (21) or (26) can be derived using solely the perturbation expansion for invariant matrix models in the limit N → ∞, without referring to free probability. We refer the interested reader to [21]. The second comment concerns the multiplication law for matrices which are not positive semi-definite. In the discussion above we restricted the class of matrices to positive semi-definite ones because we wanted to use \sqrt{a} in the definition of the product a · b (20). For (invariant) Hermitian matrices the product a · b is (invariant) Hermitian. One can also consider the multiplication law for the standard product ab of Hermitian matrices. In this case a and b do not have to be positive semi-definite. The matrix ab is non-Hermitian and thus has complex eigenvalues. Its trace moments \frac{1}{N} Tr(ab)^k are, however, real. Since the S transform depends only on the moments, all calculations are identical to those discussed before for a · b, except that the final result ρ_{ab}(λ) cannot be interpreted as an eigenvalue density function, but merely as a function generating the trace moments of the matrix ab:

m_{ab,n} = \left\langle \frac{1}{N} Tr(ab)^n \right\rangle = \int\!\!\int dµ(a)\, dµ(b)\, \frac{1}{N} Tr(ab)^n = \int_{\mathbb{R}} dλ\, ρ_{ab}(λ)\, λ^n. (36)

The eigenvalue density in this case generically has a two-dimensional support on the complex plane. For example, the product of two independent Hermitian matrices from the GUE ensemble is a non-Hermitian matrix. Each GUE matrix has real eigenvalues with the eigenvalue distribution given by the Wigner semicircle law. The R transforms are R_a(z) = R_b(z) = z (we consider the standardized GUE with mean zero and unit variance). Using the R transform form of the multiplication law (26) one finds that R_{ab}(z) = 0 and thus G_{ab}(z) = 1/z. This means that all the moments m_{ab,n} = 0 for n ≥ 1. It does not mean, however, that the eigenvalue density of the product is zero. To calculate the eigenvalue density in this case one has to use methods that allow one to treat non-Hermitian matrices. We discuss them in the subsequent sections. Here we only mention that the eigenvalue density for the product is

ρ_{ab}(z) = \begin{cases} \frac{1}{2π|z|}, & |z| ≤ 1, \\ 0, & |z| > 1, \end{cases} (37)

where z is a complex variable. This result was first derived in [30] with the help of field theoretical methods. Note that the result is invariant under rotations around the origin of the complex plane.
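The density (37) is easy to test: it implies that the radial cumulative distribution of the eigenvalues of the product of two standardized GUE matrices is F(r) = r on [0, 1], i.e. |z| is approximately uniform. A minimal check (our sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
def gue(rng):
    h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (h + h.conj().T) / (2 * np.sqrt(n))

z = np.linalg.eigvals(gue(rng) @ gue(rng))
# for rho(z) = 1/(2 pi |z|) on the unit disk the radial CDF is F(r) = r
r = np.sort(np.abs(z))
print(np.max(np.abs(r - np.arange(1, n + 1) / n)))  # Kolmogorov-type distance, small for large n
```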
VI. PRODUCT OF ISOTROPIC RANDOM MATRICES

The extension from Hermitian to non-Hermitian random matrices is similar to the extension from real to complex random variables in classical probability. In this section we employ this similarity to define isotropic random matrices [31], which are probably the simplest extension of invariant random matrices to non-Hermitian ones.

Consider a complex random variable with a probability measure that is invariant under rotations around the origin of the complex plane, µ(z) = µ(ze^{iα}). The probability density is then a function of the distance from the origin, dµ(z) = d^2z ρ(|z|). Such a random variable z = re^{iφ} can be constructed as a product of a real non-negative random variable r and a uniformly distributed random phase φ ∈ [0, 2π). When one knows the probability density function for the radial variable r, one can easily derive the probability density for z on the complex plane.

Isotropic random matrices are constructed in a way that imitates this construction. A random matrix A is called isotropic if the probability measure µ(A) is invariant under left and right multiplication by unitary matrices U, V: µ(A) = µ(U†AV). In the mathematics and engineering communities such random matrices are called Bi-Unitarily Invariant (BUI), and in general they can be rectangular. Here we concentrate on square matrices and use the name isotropic in reference to them.

The simplest way of constructing isotropic random matrices is to build them from a real positive semi-definite diagonal random matrix a and Haar unitary matrices u, v independent of a (free of a for N → ∞)

A = v†au. (38)

Clearly, the diagonal matrix a corresponds to the radial variable r of z = re^{iφ}, and the Haar unitary matrices v and u to the uniform phase φ. There are two of them to ensure the invariance under multiplication by a unitary matrix on both sides. Alternatively one could consider a one-sided invariance, in which case the last equation would take the form A = au. Most of the results given below hold in either case.

Another way to construct such matrices is to define them by a probability measure of the form [32]

dµ(A) = e^{−N Tr V(A†A)} dA, (39)

where dA = \prod_{ij} d Im A_{ij}\, d Re A_{ij} is the flat measure for N × N complex matrices and V(x) is a polynomial or a power series in x. There are many other ways of defining isotropic matrices. An interesting property of isotropic random matrices is that, independently of the way they are defined, in the limit N → ∞ there is a one-to-one correspondence between the eigenvalue density of the matrix A and the eigenvalue density of the Hermitian matrix A†A associated with A [33]. By construction the matrix A†A is an invariant positive semi-definite random matrix. We denote the square root of this matrix by a = \sqrt{A†A}. Note that the eigenvalues of a correspond to the singular values of A. We can use the whole arsenal of methods discussed in the previous section to calculate the eigenvalue distribution of such matrices or their products.

The correspondence between the eigenvalue distribution of a large isotropic matrix and the distribution of its singular values was found by Feinberg and Zee [32], who applied field theoretical perturbative methods to solve the matrix model (39), and independently by Haagerup and Larsen [33], who used methods of free probability. Below we quote the result in the form given by Haagerup and Larsen [33]. In the terminology of free probability, isotropic matrices in the limit N → ∞ correspond to variables that are called R-diagonal [34]. Here we prefer to call them large isotropic matrices. The invariance of the measure implies that the eigenvalue density of an isotropic matrix is invariant under rotations around the origin of the complex plane. For an isotropic matrix A it is convenient to define the radial cumulative density function as the probability that a random eigenvalue of the isotropic matrix lies on the complex plane within the distance x from the origin: F_A(x) = Prob(|λ_A| ≤ x). The eigenvalue density is related to the radial cumulative function as

ρ_A(z) = \frac{1}{2π|z|} F'_A(|z|) + p\, δ^{(2)}(z), (40)

where p = F_A(0) is a possible point mass located at the origin. It corresponds to the probability of finding zero eigenvalues in the eigenvalue spectrum of A. Given F_A(x) one can calculate the density ρ_A(z) and vice versa. With the help of this function one can write an equation which relates the eigenvalue distributions of A and a = \sqrt{A†A} [33]

S_{a^2}(F_A(x) − 1) = \frac{1}{x^2}, (41)

for x > 0. This equation does not fix the value of the radial cumulative function for x = 0. It is fixed by an additional equation

p = F_A(0) = F_a(0), (42)

where F_a(x) = \int_0^x ρ_a(λ)\, dλ is the cumulative eigenvalue density function for a. This equation merely means that the number of zero eigenvalues of A is equal to the number of zero eigenvalues of a. The value F_a(0) is generically zero except in the situation when there is a point measure (Dirac delta) at zero in the eigenvalue density ρ_a(λ), as for instance in the distribution

ρ_a(λ) = (1 − r)\, δ(λ) + \frac{1}{2πλ}\sqrt{(λ_+ − λ)(λ − λ_−)}, (43)

with λ_± = (1 ± \sqrt{r})^2, 0 < r < 1, and support {0} ∪ [λ_−, λ_+]. This distribution naturally arises in the context of random matrices; it is called the anti-Wishart distribution in the physics literature and the free Poisson distribution in the mathematics and engineering literature. The first term on the right hand side means that the fraction of zero eigenvalues of the corresponding random matrix a is (1 − r). The corresponding isotropic random matrix A = au 'inherits' all zero eigenvalues from a, and thus the corresponding fraction of zero eigenvalues of A is also 1 − r.

Equations (41) and (42) provide an implicit relation between ρ_A(z) and ρ_a(λ) that allows one to calculate one from the other.
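The zero-mode bookkeeping (42)–(43) can be illustrated by building A = au explicitly with a prescribed fraction 1 − r of vanishing singular values (our sketch; the radial distribution of the non-zero singular values is arbitrary here):

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, rr = np.linalg.qr(z)
    d = np.diagonal(rr)
    return q * (d / np.abs(d))

rng = np.random.default_rng(5)
n, r = 800, 0.6
sv = np.zeros(n)
k = int(r * n)
sv[:k] = rng.uniform(0.5, 1.5, k)     # any positive radial law will do for the non-zero part
A = np.diag(sv) @ haar_unitary(n, rng)  # isotropic matrix of rank k

z = np.linalg.eigvals(A)
# A inherits the zero modes of a, cf. eq. (42): a fraction 1 - r of eigenvalues vanish
print(np.mean(np.abs(z) < 1e-8), "expected:", 1 - r)
```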
A few comments are in order. Equation (41) was first published in the physics literature in a slightly different (less transparent) albeit equivalent form [32]. In that paper it was also observed that for N → ∞ the support of the eigenvalue density of an isotropic random matrix has the shape of a single ring on the complex plane, and that the radii of the ring can be derived from the eigenvalue density of a. The external radius is equal to R_e = \left(\int_0^∞ ρ_a(λ)\, λ^2 dλ\right)^{1/2} and the internal one to R_i = \left(\int_0^∞ ρ_a(λ)\, λ^{−2} dλ\right)^{−1/2}. The ring reduces to a disk when R_i = 0. In the paper [33] the discussion was formalized and all statements were proven rigorously using concepts of free probability. In addition, the S transform form of equation (41) and the discussion of the point measure for zero eigenvalues, p ≠ 0 (42), were given there. From the discussion of [33] it follows that in general there are three different types of solutions for F_A: the ring solution, with F_A(x) growing monotonically from 0 to 1 for x ∈ [R_i, R_e] and R_i > 0; the disk solution, with F_A(x) growing monotonically from 0 to 1 for x ∈ [0, R_e]; and the disk solution with a finite fraction of zero eigenvalues p > 0, with F_A(x) growing monotonically from p to 1 for x ∈ [0, R_e].

Let us illustrate how the Haagerup-Larsen theorem (41) works in practice by giving an example. Consider a random matrix A from the Ginibre ensemble [36, 37]. It is an N × N random matrix whose elements are independent, identically distributed, normally distributed, centered complex random variables. The probability measure for A is

dµ(A) ∝ e^{−N Tr A†A} dA, (44)

where dA is the flat measure for N × N complex matrices. Obviously A is an isotropic random matrix. The limiting eigenvalue density for this matrix for N → ∞ is given by the uniform distribution on the unit disk [36, 37]

ρ_A(z) = \frac{1}{π}, \quad |z| ≤ 1. (45)

It is zero outside the disk. The cumulative density F_A(x) = \int_0^x dr\, 2πr\, ρ_A(r) is

F_A(x) = x^2, \quad x ≤ 1. (46)

Inserting it into equation (41) we obtain an equation for the S transform, whose solution is

S_{a^2}(z) = \frac{1}{1 + z}. (47)

Next we find (22) χ_{a^2}(z) = \frac{z}{(1+z)^2}, the moment generating function (the inverse function of χ_{a^2}) and the Green's function (12)

G_{a^2}(z) = \frac{1}{2}\left(1 − \sqrt{\frac{z − 4}{z}}\right). (48)

Finally we calculate the eigenvalue density of a^2 (7)

ρ_{a^2}(λ) = \frac{1}{2π}\sqrt{\frac{4 − λ}{λ}}, \quad λ ∈ [0, 4], (49)

and correspondingly of a

ρ_a(λ) = \frac{1}{π}\sqrt{4 − λ^2}, \quad λ ∈ [0, 2]. (50)

This is the density of singular values of the Ginibre matrix. We see that it is given by the quarter-circle law.

Let us now discuss the product of n independent isotropic matrices A_1, A_2, ..., A_n

P = A_1 A_2 ⋯ A_n, (51)

where the A_i's are independent isotropic matrices [30, 35]. The goal is to calculate the eigenvalue density and the singular value density in the limit N → ∞, given the eigenvalue densities of the matrices in the product. Since the product itself is an isotropic matrix we can use equation (41) to write

S_p(F_P(x) − 1) = \frac{1}{x^2}, (52)

where p = P†P. The S transform for p can be calculated using the multiplication law (21)

S_p(z) = S_{a_1^2 · a_2^2 · ... · a_n^2}(z) = \prod_{i=1}^{n} S_{a_i^2}(z), (53)

where a_i^2 = A_i†A_i. Since the right-hand side of this equation is a product of complex numbers, it does not depend on the order of multiplication. If instead of P we considered a product of matrices permuted with a permutation π,

P_π = A_{π(1)} A_{π(2)} ⋯ A_{π(n)}, (54)

we would obtain an S transform S_{p_π}(z) identical to S_p(z), because the multiplication on the right hand side of (53) is commutative. Therefore F_P(x) = F_{P_π}(x) for any permutation π. In other words, the product of isotropic matrices is spectrally commutative in the limit N → ∞.
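The quarter-circle prediction (50) is easily checked against the singular values of a sampled Ginibre matrix (our sketch, assuming throughout the normalization ⟨|A_ij|²⟩ = 1/N, for which the spectral radius is 1):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
s = np.linalg.svd(A, compute_uv=False)

# quarter-circle law (50): rho_a(x) = sqrt(4 - x^2) / pi on [0, 2]
hist, edges = np.histogram(s, bins=40, range=(0, 2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - np.sqrt(4 - mid ** 2) / np.pi)))  # shrinks as n grows
```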
Isotropic random matrices possess a striking property that has no counterpart in classical probability. Consider the product P = A_1 ⋯ A_n of independent identically distributed isotropic random matrices. This can be realized by picking n independent matrices from the same ensemble and multiplying them. It turns out that in the limit N → ∞ such a product has exactly the same eigenvalue density as the n-th power Q = A^n of a single matrix picked at random from this ensemble [31]. Actually it was already observed in [33] that the singular values of Q and P are identically distributed. This observation can be trivially extended to eigenvalues if one additionally assumes that p = 0 (42). This is a sort of spectral self-averaging of multiplication in the limit N → ∞, because a single matrix from the ensemble is sufficiently representative in the n-fold product to give the same result as n independent random realizations. The proof is short, so we recall it here in the form given in [31], where it was tacitly assumed that p = 0. All constituents needed for this proof can already be found in [33].

We assume that p = F_A(0) = F_a(0) = 0 (42). The S transform for the product of independent identically distributed matrices is the n-th power of the S transform for a single one, S_p(z) = S_{a^2}^n(z), as follows from equation (53) in which all S_{a_i^2}(z) are replaced by S_{a^2}(z). Inserting S_{a^2}^n(z) in place of S_p(z) in equation (52) and taking the n-th root of both sides we get

S_{a^2}(F_P(x) − 1) = \frac{1}{x^{2/n}}. (55)

Comparing the last equation to equation (41) we see that F_P(x) = F_A(x^{1/n}). We can now argue that the radial cumulative density F_Q(x) for the power Q is given by the same expression. Indeed, using directly the definition of the radial cumulative density we have

F_Q(x) = Prob(|λ_Q| ≤ x) = Prob(|λ_A|^n ≤ x) = Prob(|λ_A| ≤ x^{1/n}) = F_A(x^{1/n}) (56)

for x > 0, while F_Q(x) = F_P(x) = 0 for x = 0; hence F_P(x) = F_Q(x) = F_A(x^{1/n}). This completes the proof. If F_A(0) = p > 0, the equality breaks down at x = 0. Recall that p is the probability of drawing a zero eigenvalue at random from the spectrum of the random matrix A. The corresponding probability for the product of n independent A matrices is F_P(0) = 1 − (1 − p)^n, while for the n-th power it is F_Q(0) = p, so we see that F_P(0) ≠ F_Q(0) for p > 0.

Let us apply these results to the product P = A_1 ⋯ A_n of n independent Ginibre matrices in the limit N → ∞. We know that F_A(x) = x^2 for x ∈ [0, 1] for one Ginibre matrix (46), thus for the product of n of them

F_P(x) = x^{2/n}, \quad x ∈ [0, 1]. (57)

The eigenvalue density for the product is (40)

ρ_P(z) = \frac{1}{nπ} |z|^{−2(n−1)/n}, \quad |z| ≤ 1. (58)

One can also write a closed equation for the Green's function of p = P†P [35]

z^n G_p^{n+1}(z) = z G_p(z) − 1, (59)

from which one can find the density ρ_p(λ) by using (7). For n = 1 it gives the Wishart distribution (27); for n = 2 the distribution given in equation (35). One can find an explicit expression for any n [42]. The last step is to change variables λ → λ^2 to go from ρ_p(λ) to the density of singular values ρ_a(λ) = 2λ ρ_p(λ^2), where a = \sqrt{p}.

To summarize, large isotropic random matrices are exceptional in many respects. There exists a one-to-one correspondence between the densities of eigenvalues and singular values [33]. Multiplication of independent isotropic matrices is spectrally commutative and self-averaging in the large N limit [31, 35]. One can also show that if the eigenvalue density of an isotropic matrix has a power-law singularity at zero, ρ_A(z) ∼ |z|^{−s} with 0 < s < 2, then the density of singular values behaves at zero as ρ_a(λ) ∼ λ^{−s/(4−s)} [35].
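Both the radial law (57) and the self-averaging property can be seen in a small simulation (our sketch; N is finite, so the agreement is only approximate):

```python
import numpy as np

def ginibre(N, rng):
    return (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

rng = np.random.default_rng(7)
N, n = 800, 4
P = np.eye(N, dtype=complex)
for _ in range(n):
    P = P @ ginibre(N, rng)                        # product of n independent Ginibre matrices
Q = np.linalg.matrix_power(ginibre(N, rng), n)     # n-th power of a single one

rP = np.sort(np.abs(np.linalg.eigvals(P)))
rQ = np.sort(np.abs(np.linalg.eigvals(Q)))
cdf = np.arange(1, N + 1) / N
# both radial CDFs should approach F_P(x) = x^(2/n) of eq. (57)
print(np.max(np.abs(cdf - rP ** (2 / n))), np.max(np.abs(cdf - rQ ** (2 / n))))
```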
VII. GENERAL MULTIPLICATION LAW

So far we have discussed invariant and isotropic random matrices. We are now going to discuss the more general case of products of non-Hermitian matrices. The multiplication law for non-Hermitian matrices was derived in reference [21] by field theoretical methods.

To build some intuition let us discuss Gaussian non-Hermitian random matrices. One can construct such matrices from independent identically distributed Hermitian Gaussian matrices from the GUE ensemble. The probability measure for GUE matrices is dµ(a) ∝ e^{−N Tr a^2/2} da, where da is the flat measure for Hermitian matrices. Let us take two independent such matrices a and b. The combination

X = \frac{1}{\sqrt{2}}(a + ib) (60)

is a complex random matrix; a gives rise to the Hermitian sector of X and ib to the anti-Hermitian one. The resulting matrix X is a Ginibre random matrix [36, 37] with the uniform eigenvalue distribution on the unit disk (45). One can also define elliptic random matrices [37, 43]

X = \cos(α)\, a + i \sin(α)\, b, (61)

being linear combinations of Hermitian and anti-Hermitian random matrices with different coefficients controlled by a mixing parameter α. For α = 0 the random matrix X reduces to the Hermitian matrix a, for α = π/2 to the anti-Hermitian matrix ib, and for α = π/4 it reproduces the Ginibre matrix (60). The eigenvalue density of the matrix X is constant on its support, which is an ellipse whose oblateness depends on α. The probability measure for the matrix X can be calculated from the measure dµ(a, b) = dµ(a) dµ(b) ∝ e^{−N Tr(a^2 + b^2)/2} da\, db, which factorizes since a and b are independent. Changing variables to X = \cos(α) a + i \sin(α) b and X† = \cos(α) a − i \sin(α) b one finds the probability measure for X [44, 45]

dµ(X) ∝ \exp\left(−\frac{N}{1 − τ^2}\, Tr\left(XX† − \frac{τ}{2}(XX + X†X†)\right)\right) dX, (62)

where τ = \cos(2α). For this measure one can calculate the two-point correlation functions ⟨X_{ab} X_{cd}⟩ = \int dµ(X)\, X_{ab} X_{cd}. There are four such functions:

⟨X†_{ab} X†_{cd}⟩ = \frac{τ}{N} δ_{ad} δ_{bc}, \quad ⟨X†_{ab} X_{cd}⟩ = \frac{1}{N} δ_{ad} δ_{bc}, \quad ⟨X_{ab} X†_{cd}⟩ = \frac{1}{N} δ_{ad} δ_{bc}, \quad ⟨X_{ab} X_{cd}⟩ = \frac{τ}{N} δ_{ad} δ_{bc}, (63)

which can be compactly written in the matrix form

\begin{pmatrix} ⟨X†_{ab} X†_{cd}⟩ & ⟨X†_{ab} X_{cd}⟩ \\ ⟨X_{ab} X†_{cd}⟩ & ⟨X_{ab} X_{cd}⟩ \end{pmatrix} = \begin{pmatrix} τ & 1 \\ 1 & τ \end{pmatrix} ⊗ \frac{1}{N} δ_{ad} δ_{bc}. (64)

The matrix on the right hand side of the last equation defines the correlation structure of the model [21]. It plays an important role, as we shall see below. The parameter τ takes values between −1 and 1.
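A sketch of the construction (61) (ours; it assumes the standardized normalization in which a single GUE matrix has semicircle support [−2, 2], for which the elliptic support has semi-axes 1 + τ and 1 − τ):

```python
import numpy as np

def gue(n, rng):
    h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (h + h.conj().T) / (2 * np.sqrt(n))

rng = np.random.default_rng(8)
n, alpha = 1500, np.pi / 8
tau = np.cos(2 * alpha)
X = np.cos(alpha) * gue(n, rng) + 1j * np.sin(alpha) * gue(n, rng)  # eq. (61)
z = np.linalg.eigvals(X)

# elliptic law: uniform density on an ellipse with semi-axes 1 + tau and 1 - tau
inside = (z.real / (1 + tau)) ** 2 + (z.imag / (1 - tau)) ** 2 <= 1.0
print(np.mean(inside))  # close to 1
```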
The multiplication law for non-Hermitian matrices is formulated in terms of a generalized R transform, which maps a quaternionic argument G onto quaternions, G → R(G). The multiplication law reads

R_{AB}(G_{AB}) = [R_A(G_B)]_L\, [R_B(G_A)]_R,
[G_A]_R = G_{AB}\, [R_A(G_B)]_L,
[G_B]_L = [R_B(G_A)]_R\, G_{AB}. (65)

Here G_A = G_A(z), G_B = G_B(z) and G_{AB} = G_{AB}(z) are the generalized quaternionic Green's functions for A, B and AB. They have the structure

G(z) = \begin{pmatrix} g(z) & ib(z) \\ i\bar{b}(z) & \bar{g}(z) \end{pmatrix}, (66)

where b(z) and g(z) are complex functions. We suppressed the indices of the corresponding matrix (A, B or AB) in the last equation. The density is related to the upper left element g(z) of the generalized quaternionic Green's function

ρ(z) = \frac{1}{π}\, ∂_{\bar{z}}\, g(z). (67)

The subscripts L and R refer to the left and right adjoint actions of the unitary matrix

U = \begin{pmatrix} \left(\frac{z}{\bar{z}}\right)^{1/4} & 0 \\ 0 & \left(\frac{\bar{z}}{z}\right)^{1/4} \end{pmatrix} (68)

on the matrices in the square brackets,

[X]_L = U X U†, \quad [X]_R = U† X U. (69)

The diagonal elements of U are constructed from z, the argument of the Green's functions (66). The generalized R transform is related to the Green's function by an equation analogous to (17)

G(z) = \left(Z − R(G(z))\right)^{−1}, (70)

where

Z = \begin{pmatrix} z & 0 \\ 0 & \bar{z} \end{pmatrix}. (71)

This law looks complicated, but in practice it reduces to a set of algebraic equations for g(z) from which one can calculate the eigenvalue density by taking the derivative with respect to \bar{z} (67). Before we give an example let us mention an electrostatic analogy [45]. In this analogy the eigenvalues of a random matrix represent positions of charges in two dimensions in a given potential, the function g(z) is interpreted as the electric field \vec{g} = (g_x, g_y) with components g_x = Re\, g(z) and g_y = −Im\, g(z), and equation (67) as the Gauss law in two dimensions [45]. It takes a more familiar form when written in the components of z = x + iy:

ρ(z) = \frac{1}{2π}\left(∂_x g_x + ∂_y g_y\right) = \frac{1}{2π}\, \vec{∇} · \vec{g}. (72)

We now illustrate how to use the multiplication law (65) to calculate the eigenvalue density of a product. We consider as an example the multiplication of Gaussian matrices constructed from the centered elliptic matrix X (62) by shifting and rescaling

A = q + σX. (73)

The scale parameter σ is a real number and the shift parameter q is a complex number. The quaternionic R transform for A is

R_A\!\left(\begin{pmatrix} g & ib \\ i\bar{b} & \bar{g} \end{pmatrix}\right) = \begin{pmatrix} q & 0 \\ 0 & \bar{q} \end{pmatrix} + σ^2 \begin{pmatrix} τ g & ib \\ i\bar{b} & τ \bar{g} \end{pmatrix}. (74)

There are only two terms on the right-hand side because the matrix is Gaussian and thus all higher order cumulants vanish (16). The second term on the right-hand side is directly related to the matrix on the right hand side of equation (64) that defines the two-point correlation structure of the model [21, 30].

The simplest example is q = 0, σ = 1 and τ = 0 (73). It corresponds to the Ginibre matrix. Let us calculate the eigenvalue density for the product of two such matrices. The first equation of the multiplication law (65) gives

R_{AB}(G_{AB}) = \begin{pmatrix} 0 & icb_B \\ i\bar{c}\bar{b}_B & 0 \end{pmatrix} \begin{pmatrix} 0 & i\bar{c}b_A \\ ic\bar{b}_A & 0 \end{pmatrix} = \begin{pmatrix} −c^2 b_B \bar{b}_A & 0 \\ 0 & −\bar{c}^2 \bar{b}_B b_A \end{pmatrix}. (75)

We used the shorthand notation c = (z/\bar{z})^{1/2}. Because A and B are identically distributed, we look for a symmetric solution with b_A(z) = b_B(z) and g_A(z) = g_B(z), for which

R_{AB}(G_{AB}) = \begin{pmatrix} −c^2 |b_A|^2 & 0 \\ 0 & −\bar{c}^2 |b_A|^2 \end{pmatrix}. (76)

Now we can use the two remaining equations of (65). Because of the symmetry it is enough to use only one of them, for example the first one. Replacing G_{AB}^{−1} by Z − R_{AB}(G_{AB}) (70) we can write this equation as

\left(Z − R_{AB}(G_{AB})\right) [G_A]_R = [R_A(G_B)]_L. (77)

We can now eliminate R_{AB} from the last equation using (76). Writing also [G_A]_R and [R_A(G_B)]_L in explicit matrix form we have

\begin{pmatrix} z + c^2|b_A|^2 & 0 \\ 0 & \bar{z} + \bar{c}^2|b_A|^2 \end{pmatrix} \begin{pmatrix} g_A & i\bar{c}b_A \\ ic\bar{b}_A & \bar{g}_A \end{pmatrix} = \begin{pmatrix} 0 & icb_A \\ i\bar{c}\bar{b}_A & 0 \end{pmatrix}, (78)

where c = (z/\bar{z})^{1/2} as before. This is a set of equations for g_A = g_A(z) and b_A = b_A(z). It has a trivial solution g_A(z) = b_A(z) = 0 and a non-trivial one with g_A(z) = 0 and |b_A(z)|^2 = 1 − |z|. The two solutions coincide at the unit circle |z| = 1.
The trivial solution holds for |z| > 1 and the non-trivial one for |z| ≤ 1. Note that for both of them g_A(z) = 0. Inserting these solutions into (76) and then using (70) we derive the final result for the generalized Green's function of the product,

G_{AB}(z) = \begin{pmatrix} \sqrt{\bar{z}/z} & 0 \\ 0 & \sqrt{z/\bar{z}} \end{pmatrix} (79)

inside the unit circle |z| ≤ 1, and

G_{AB}(z) = \begin{pmatrix} 1/z & 0 \\ 0 & 1/\bar{z} \end{pmatrix} (80)

outside. Applying the Gauss law (67) to the upper left element of G_{AB}(z) (denoted by g_{AB}(z)) we find the density

ρ_{AB}(z) = \frac{1}{π}\, ∂_{\bar{z}}\, g_{AB}(z) = \frac{1}{2π|z|} (81)

for |z| ≤ 1, and ρ_{AB}(z) = 0 otherwise. The calculations were done for τ_A = τ_B = 0. One, however, (quite surprisingly) obtains the same result for the product of matrices with arbitrary τ_A and τ_B [30]. The reason is the following. Looking at the R transform (74) we see that τ appears only in the combination τg (or, writing it separately for A and B, in the combinations τ_A g_A and τ_B g_B). The dependence on the τ's disappears for g_A(z) = g_B(z) = 0. Since in the solution discussed above we had g_A(z) = g_B(z) = 0, we conclude that this solution is independent of τ_A and τ_B, and thus it holds for any τ_A and τ_B. In particular, the product AB of two Hermitian GUE matrices (τ_A = τ_B = 1) or of any elliptic random matrices has exactly the same eigenvalue density (81), which is rotationally symmetric. In a similar way one can find the eigenvalue density for the product of matrices with non-zero shifts q (73), but the calculations are tedious. In figure 1(a) we show the analytic result for the eigenvalue density of (1 + X_1)(1 + X_2), where X_1 and X_2 are independent Ginibre matrices [21]. The eigenvalue support is surrounded by the contour r = 1 + 2 cos φ, which is Pascal's limaçon. In figure 1(b) we compare this theoretical prediction to Monte Carlo simulations of large, finite matrices. The agreement is very good. As another example, in figure 2 we show a comparison between Monte Carlo results and the analytic prediction for the contour of the support of the eigenvalue density of the product (1 + H)(1 + X), where H is a GUE Hermitian matrix and X is a Ginibre matrix. Again the comparison shows good agreement. The details of the calculations and other examples can be found in [46]. We summarize this section by emphasizing that there exists an algorithm to calculate eigenvalue densities for products of non-Hermitian matrices. This algorithm works for large random matrices (N → ∞) with probability measures of the type dµ(X) ∝ e^{−N Tr V(X, X†)} dX with a self-adjoint potential, (V(X, X†))† = V(X, X†), which is bounded from below [21].

FIG. 1. (a) Eigenvalue density for the product (1 + X_1)(1 + X_2), where X_1 and X_2 are independent standardized Ginibre matrices, each having the eigenvalue density uniformly distributed on the unit disk. The result is derived analytically [21]. The horizontal axes correspond to the real and imaginary axes of the complex plane and the vertical one to the value of the eigenvalue density ρ(z). The eigenvalue density has a support surrounded by a contour given by Pascal's limaçon. (b) The solid line represents Pascal's limaçon. Dots correspond to eigenvalues collected for ten matrices of size 1000 × 1000.

FIG. 2. Contour of the support of the eigenvalue density for the product (1 + X)(1 + H), where X is a standardized Ginibre matrix and H is an independent standardized Hermitian GUE matrix. The curves forming the contour are calculated analytically [46] using the multiplication law (65). The extrapolation of these curves outside the support is shown in red dots. Black dots correspond to eigenvalues collected for ten matrices of size 1000 × 1000.
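The limaçon example can be reproduced with a few lines of code (our sketch; we assume the contour r = 1 + 2 cos φ is drawn in polar coordinates around the origin of the complex plane, as in figure 1(b)):

```python
import numpy as np
import matplotlib.pyplot as plt

def ginibre(n, rng):
    return (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

rng = np.random.default_rng(9)
n = 1000
eye = np.eye(n)
z = np.linalg.eigvals((eye + ginibre(n, rng)) @ (eye + ginibre(n, rng)))

phi = np.linspace(0, 2 * np.pi, 400)
r = 1 + 2 * np.cos(phi)                       # Pascal's limacon bounding the support
plt.plot(r * np.cos(phi), r * np.sin(phi), "k-", lw=1)
plt.plot(z.real, z.imag, ".", ms=2)
plt.gca().set_aspect("equal")
plt.show()
```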
VIII. CONCLUSIONS

Using free probability one can find an explicit formula for the moment generating function of the product of invariant random Hermitian matrices in the limit N → ∞. This formula allows one to calculate the trace moments ⟨(1/N) Tr(ab)^k⟩ for independent (free) random matrices a, b and to determine the eigenvalue density of the matrix \sqrt{a}\, b\, \sqrt{a} (if a is positive semi-definite) from the eigenvalue densities of a and b. We discussed how to extend the formalism to non-Hermitian matrices to calculate the eigenvalue density of the product ab, which is generically a non-Hermitian matrix. The extension has been worked out using field theoretical methods of planar graph enumeration. The multiplication law (65) is conveniently written in the R transform form (26). The R transform is a quaternionic function. One can use this law to find eigenvalue densities for products of large non-Hermitian random matrices.

Isotropic matrices are a special class of non-Hermitian random matrices. These matrices have two exceptional features as far as matrix multiplication is concerned. Isotropic random matrices are spectrally commutative and self-averaging in the limit N → ∞. The first notion means that the eigenvalue distribution of the product of such matrices does not depend on the order of matrix multiplication. For example, the matrix abcd has the same eigenvalue density as bdac. The second one means that the product of n independent identically distributed isotropic matrices has the same eigenvalue distribution as the n-th power of a single matrix.

We finish by mentioning that recently progress has also been made in the understanding of microscopic properties of eigenvalue statistics for products of random matrices. For example, an explicit form of the joint probability distribution of eigenvalues and singular values for products of Ginibre matrices has been found [47, 48]. Using these results one can determine not only the eigenvalue density but also the joint eigenvalue distribution and eigenvalue correlations for products of large random matrices. One can also infer finite size corrections to the limiting laws discussed in this paper.

Acknowledgments

The author would like to thank Romuald Janik, Andrzej Jarosz, Giacomo Livan, Maciej A. Nowak, Gabor Papp, Artur Swiech and Ismail Zahed for many interesting discussions and a fruitful collaboration on the subject, and Jeremi Ochab for critical reading of the manuscript. The author also wants to thank the organizers of the conference "Inference, Computation, and Spin Glasses", held in Sapporo, July 28th–30th 2013, for a kind invitation to give a talk at this conference. The research presented in this paper was financed by Grant No. DEC-2011/02/A/ST1/00119 of the National Centre of Science in Poland. The participation in the conference was supported by the Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan.

REFERENCES

[1] A. Crisanti, G. Paladin and A. Vulpiani, Products of Random Matrices, Random Matrices and Their Applications (Berlin, Heidelberg: Springer-Verlag, 1993).
[2] C.M. Newman, Commun. Math. Phys. 103, 121 (1986).
[3] R.R. Mueller, IEEE Trans. Inf. Theor. 48, 2086 (2002).
[4] R.A. Janik and W. Wieczorek, J. Phys. A: Math. Gen. 37, 6521 (2004).
[5] E. Gudowska-Nowak, R.A. Janik, J. Jurkiewicz, M.A. Nowak and W. Wieczorek, Chem. Phys., 380 (2010).
[6] R. Lohmayer, H. Neuberger and T. Wettig, JHEP, 107 (2009).
[7] B. Collins, I. Nechita and K. Zyczkowski, J. Phys. A: Math. Theor. 43, 275303 (2010).
[8] J.-P. Bouchaud, L. Laloux, M.A. Miceli and M. Potters, Eur. Phys. J. B 55, 201 (2007).
[9] P. Bougerol and J. Lacroix, Products of Random Matrices with Applications to Schrödinger Operators (Boston: Birkhäuser, 1985).
[10] J.E. Cohen, H. Kesten and C.M. Newman (eds), Random Matrices and Their Applications, Contemporary Mathematics (Providence, RI: American Mathematical Society, 1986).
[11] H. Furstenberg and H. Kesten, Ann. Math. Statist. 31, 457 (1960).
[12] V.I. Oseledec, Trans. Moscow Math. Soc. 19, 197 (1968).
[13] D.V. Voiculescu, Invent. Math. 104, 201 (1991).
[14] R. Speicher, Math. Ann. 298, 611 (1994).
[15] D.V. Voiculescu, K.J. Dykema and A. Nica, Free Random Variables, CRM Monograph Series 1 (Providence, RI: American Mathematical Society, 1992).
[16] D.V. Voiculescu, J. Funct. Anal. 66, 323 (1986).
[17] D.V. Voiculescu, J. Operator Theory 18, 223 (1987).
[18] E. Brezin, C. Itzykson, G. Parisi and J.-B. Zuber, Commun. Math. Phys. 59, 35 (1978).
[19] D. Bessis, C. Itzykson and J.-B. Zuber, Adv. Appl. Math. 1, 109 (1980).
[20] G. 't Hooft, Nucl. Phys. B 72, 461 (1974).
[21] Z. Burda, R.A. Janik and M.A. Nowak, Phys. Rev. E 84, 061125 (2011).
[22] M.L. Mehta, Random Matrices (Amsterdam: Elsevier, 2004).
[23] G. Akemann, J. Baik and P. Di Francesco (eds), The Oxford Handbook of Random Matrix Theory (Oxford: Oxford University Press, 2011).
[24] F.J. Dyson, J. Math. Phys. 3, 140 (1962).
[25] A. Nica and R. Speicher, Lectures on the Combinatorics of Free Probability, London Mathematical Society Lecture Note Series 335 (Cambridge: Cambridge University Press, 2006).
[26] A. Zee, Nucl. Phys. B 474, 726 (1996).
[27] H. Bercovici and V. Pata, Math. Res. Lett., 791 (1995).
[28] N.R. Rao and R. Speicher, Elect. Comm. in Probab. 12, 248 (2007).
[29] J. Wishart, Biometrika 20A, 32 (1928).
[30] Z. Burda, R.A. Janik and B. Waclaw, Phys. Rev. E 81, 041132 (2010).
[31] Z. Burda, M.A. Nowak and A. Swiech, Phys. Rev. E 86, 061137 (2012).
[32] J. Feinberg and A. Zee, Nucl. Phys. B 501, 643 (1997).
[33] U. Haagerup and F. Larsen, J. Funct. Anal. 176, 331 (2000).
[34] A. Nica and R. Speicher, Fields Institute Communications 12, 149 (1997).
[35] Z. Burda, G. Livan and A. Swiech, Phys. Rev. E 88, 022107 (2013).
[36] J. Ginibre, J. Math. Phys. 6, 440 (1965).
[37] B.A. Khoruzhenko and H.-J. Sommers, Non-Hermitian Random Matrix Ensembles, Chapter 18 in reference [23] (Preprint arXiv:0911.5645).
[38] Z. Burda, A. Jarosz, G. Livan, M.A. Nowak and A. Swiech, Phys. Rev. E 82, 061114 (2010).
[39] E. Wigner, Ann. of Math. 62, 548 (1955).
[40] N. Alexeev, F. Goetze and A. Tikhomirov, Lith. Math. J. 50, 121 (2010).
[41] S. O'Rourke and A. Soshnikov, Electron. J. Probab. 16, 81 (2011).
[42] K.A. Penson and K. Zyczkowski, Phys. Rev. E 83, 061118 (2011).
[43] V.L. Girko, Theor. Prob. Appl., 694 (1985).
[44] Y.V. Fyodorov, B.A. Khoruzhenko and H.-J. Sommers, Phys. Lett. A 226, 46 (1997).
[45] H.-J. Sommers, A. Crisanti, H. Sompolinsky and Y. Stein, Phys. Rev. Lett. 60, 1895 (1988).
[46] A. Swiech, Applications of generalized free addition and multiplication laws to large non-Hermitian random matrices (MSc thesis, Jagiellonian University, Cracow, 2013).
[47] G. Akemann and Z. Burda, J. Phys. A: Math. Theor. 45, 465201 (2012).
[48] G. Akemann, M. Kieburg and L. Wei, J. Phys. A: Math. Theor. 46 (2013).