Matrix models and eigenvalue statistics for truncations of classical ensembles of random unitary matrices
ROWAN KILLIP AND ROSTYSLAV KOZHAN
UCLA Mathematics Department, Box 951555, Los Angeles, CA 90095, USA
Uppsala University, Box 480, 751 06 Uppsala, Sweden
E-mail addresses: [email protected], [email protected]
Abstract.
We consider random non-normal matrices constructed by removing one row and column from samples from Dyson's circular ensembles or samples from the classical compact groups. We develop sparse matrix models whose spectral measures match these ensembles. This allows us to compute the joint law of the eigenvalues, which have a natural interpretation as resonances for open quantum systems or as electrostatic charges located in a dielectric medium. Our methods allow us to consider all values of β > 0, not merely β = 1, 2, 4.

1. Introduction
The main objects of investigation of this paper are Dyson's circular ensembles of unitary random matrices, as well as the orthogonal and compact symplectic groups equipped with the Haar measure. To give the flavor of the results, let us restrict our attention in this introduction to Dyson's circular ensembles only.

In 1962 Dyson [5] introduced three ensembles of unitary random matrices (CUE, COE, and CSE) to model complex physical systems (e.g., evolution operators of closed quantum systems) corresponding to various physical symmetry classes. We begin with the definitions. Note that the CSE ensemble is best explained through the use of (real and complex) quaternions. We review this material and introduce the relevant notations in Appendix A.
Definition 1.1.
Dyson's three circular ensembles are:

(CUE) The circular unitary ensemble CUE(n) is the set (group) of all n × n unitary matrices U(n), endowed with the Haar measure (which comes from the group structure of U(n)).

(COE) The circular orthogonal ensemble COE(n) is the set of all n × n symmetric unitary matrices with the measure induced by the Haar measure on U(n) via the mapping U(n) → COE(n), U ↦ U^T U.

(CSE) The circular symplectic ensemble CSE(n) is the set of all n × n complex quaternionic matrices that are both unitary and self-dual. The measure on this set is taken to be the one induced by the Haar measure on U(2n) via the mapping U(2n) → CSE(n), U ↦ C^{-1}(U^R U).

As argued by Dyson,
COE is relevant in most practical circumstances, with the two exceptions of systems without time-reversal invariance (when CUE should be used) and odd-spin systems with time-reversal but without rotational symmetry (when CSE should be used).

Dyson computed that the eigenvalues of these ensembles are jointly distributed on (∂D)^n := {e^{iθ} : 0 ≤ θ < 2π}^n proportionally to

\prod_{j<k} |e^{iθ_j} − e^{iθ_k}|^β

with β = 1 for COE, β = 2 for CUE, and β = 4 for CSE.

Theorem 1.2. The eigenvalues z_1, ..., z_n of the COE(n+1), CUE(n+1), CSE(n+1) ensembles with one row and column removed are distributed in D^n according to

(1.1)   \frac{β^n}{(2π)^n} \prod_{j,k=1}^{n} (1 − z_j \bar z_k)^{β/2 − 1} \prod_{j<k} |z_j − z_k|^2 \, d^2z_1 \cdots d^2z_n

with β = 1, 2, 4, respectively.

[Figure 1. A realization of random eigenvalues for truncations of: (a) U(n+1) with n = 301; (b) O(n+1) with n = 301; (c) USp(n+1) with n = 151.]

Remarks. 1. For CSE each of the eigenvalues is of multiplicity 2.
2. In fact, we show that for any 0 < β < ∞, (1.1) is the eigenvalue distribution of the circular β-ensemble of Killip–Nenciu [22] (see Remark 1 after Proposition 4.2) with the first row and column removed.
3. β = 2 is quite special here, as the process becomes determinantal [35].

One of the ingredients of the proof is the reduction of these truncations to a special sparse matrix form (namely, the CMV form, see Definition 2.3) that depends on only n independent coefficients with explicit distributions. This is done in Proposition 5.2 and could be of interest on its own for simulation purposes.

Another way to look at this is that we show that the eigenvalues of the truncations coincide with the zeros of orthogonal polynomials on the unit circle whose random recursion coefficients are independently distributed according to the explicit distributions (5.1).

Similarly, we develop matrix models for truncations of the orthogonal group (and, more generally, of the real orthogonal β-ensemble) and of the compact symplectic group. We establish the connection to zeros of orthogonal polynomials and compute the distribution of the (independent) recurrence coefficients.
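For numerical experiments in the spirit of Figure 1, truncations of Haar-distributed unitary matrices can also be sampled directly, without the CMV machinery. The following numpy sketch is our own illustration (the function name `haar_unitary` is ours): it draws a Haar unitary via the phase-corrected QR decomposition of a complex Ginibre matrix and deletes the first row and column.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample from U(n) with Haar measure: QR of a complex Ginibre matrix,
    with the phases of R's diagonal absorbed into Q."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

rng = np.random.default_rng(0)
U = haar_unitary(302, rng)     # n + 1 = 302, as in Figure 1(a)
T = U[1:, 1:]                  # remove one row and one column
eigs = np.linalg.eigvals(T)
# almost surely the truncation is strictly subunitary:
assert np.all(np.abs(eigs) < 1)
```

Plotting `eigs` in the complex plane reproduces the qualitative picture of Figure 1(a): the points fill the unit disk and accumulate near the circle as n grows.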
Finally, we compute the distribution of the eigenvalues of these ensembles (equivalently, the distribution of the zeros of these random orthogonal polynomials), with the exception of the eigenvalue distribution of the truncated compact symplectic group, which remains an open problem as of now. For a reader who is wondering why the compact symplectic group seems special among all the other classical ensembles, we remark that deleting one quaternionic row and column corresponds to the deletion of two complex rows and columns. Truncation of more than one row and column can also in principle be attacked with our methods, which leads to a CMV model with matrix-valued Verblunsky coefficients. However, the Jacobian computation is not technically manageable to us at this point. We leave this as an interesting open problem.

The joint eigenvalue distribution for truncated O(2k) was found earlier via ad hoc methods by Khoruzhenko–Sommers–Życzkowski in [20]. Since the completion of this work, Forrester [9] reported a formula for the joint eigenvalue density of the truncated USp(n) ensemble.

Our methods can be applied to random ensembles with the so-called non-ideal coupling (Fyodorov–Sommers [15], Fyodorov [12], and Fyodorov–Khoruzhenko [13]). We produce β-matrix models and compute the joint eigenvalue density.

After establishing the joint distribution, it is natural to ask what can be said about the correlation functions and their large n behavior. In our upcoming paper [21] we investigate the microscopic limit of the 1-point correlation function of the circular β-ensemble (0 < β < ∞). In fact, we show that the same limit is obtained for a wide class of subunitary random matrices.

The organization of the paper is as follows. In Section 2 we introduce the rest of our unitary random matrix ensembles (the compact groups) and define CMV matrices and the CMV-fication algorithm.
In the rather lengthy Section 3 we compute the random spectral measure of each of the ensembles we study. Their eigenvalue distributions are well known, of course, but we also need the distributions of the eigenweights of the spectral measures. In Section 4 we present the matrix models that are obtained by CMV-fying each of the (non-truncated) unitary ensembles. This is originally the idea of Killip–Nenciu [22], and we extend it to all the other unitary ensembles.

In Section 5 we derive CMV matrix models for each of the truncated ensembles. In Section 6 we compute the joint eigenvalue distribution of the truncations. In Section 7 we discuss matrix models and compute the eigenvalue distribution for the non-ideally coupled ensembles.

In Section 8 we show that the eigenvalues of the truncated Dyson (and, more generally, circular β-) ensembles admit a log-gas interpretation as charged particles located in a dielectric medium.

In Section 9 we introduce a symmetric variant of the CMV model, which allows us to find a canonical system of representatives of each conjugacy class that actually lie in the sets COE and CSE; the traditional CMV matrix representation has no such symmetry properties. Finally, in the Appendices we collect information about quaternions, basics of the theory of orthogonal polynomials on the unit circle, and Jacobian determinants.

2. Preliminaries

2.1. Compact groups and Haar measure. As discussed in the Introduction, apart from Dyson's circular ensembles defined in Definition 1.1, we also study the classical compact groups. For the notation related to quaternions and quaternionic matrices, see Appendix A.

Definition 2.1.
The unitary, orthogonal, special orthogonal, and compact symplectic groups are defined, respectively, by

U(n) = { Q ∈ C^{n×n} : Q†Q = QQ† = I_n },
O(n) = { Q ∈ R^{n×n} : Q^T Q = QQ^T = I_n },
SO(n) = { Q ∈ R^{n×n} : Q^T Q = QQ^T = I_n, det Q = 1 },
USp(n) = { Q ∈ H_R^{n×n} : C(Q)†C(Q) = C(Q)C(Q)† = I_{2n} }.

Here I_n is the n × n identity matrix; † and T stand for Hermitian conjugation and transposition, respectively.

By the general theory, each of these groups possesses a Haar measure, which is the unique (normalized by 1) measure that is invariant under the group action. We will use the same notation U(n), O(n), SO(n), USp(n) for the group itself and for the random ensemble.

Next we describe the joint law of the columns of matrices from these groups. This is well known, as is the method of proof we employ. We give details only in the USp case, where the argument contains an additional subtlety.

Proposition 2.2. (a) The first column v_1 of a matrix from O(n) is distributed uniformly on the sphere {x ∈ R^n : ||x|| = 1}. For 2 ≤ k ≤ n, the k-th column v_k is distributed uniformly on the subset of this unit sphere that is orthogonal to v_1, ..., v_{k−1}.

(b) The first column v_1 of a matrix from U(n) is distributed uniformly on the sphere {z ∈ C^n : ||z|| = 1}. For 2 ≤ k ≤ n, the k-th column v_k is distributed uniformly on the subset of this unit sphere that is orthogonal to v_1, ..., v_{k−1}.

(c) Let Q ∈ USp(n). The first column v_1 of C(Q) is distributed uniformly on the sphere {z ∈ C^{2n} : ||z|| = 1}. The second column v_2 is Z \bar v_1, where Z is (A.3). For 2 ≤ k ≤ n, the (2k−1)-th column v_{2k−1} is distributed uniformly on the subset of this unit sphere that is orthogonal to v_1, ..., v_{2k−2}, and v_{2k} = Z \bar v_{2k−1}.

Proof. As noted above, we give details only for the case of USp(n). Form V with columns v_1, . . .
, v_{2n} described by the procedure in the Proposition, and let us show that V is indeed C(Q) for a Haar-distributed Q ∈ USp(n). By the construction, v_1, ..., v_{2n} form an orthonormal basis of C^{2n}, since v ⊥ Z \bar v holds for any vector v. Thus V is unitary. Moreover, V consists of 2 × 2 blocks each of which has the real quaternionic form, see (A.2). Therefore C^{-1}(V) belongs to the group (not the ensemble yet) USp(n). Let us show that for any non-random W chosen from the group C(USp(n)), the distributions of V and WV are identical. This will prove that C^{-1}(V) was generated according to the Haar measure, by the uniqueness of the measure invariant with respect to the group action on USp(n).

Denote the columns of WV by u_1, ..., u_{2n}. First note that u_1 = Wv_1 is a vector uniformly distributed on the unit sphere {z ∈ C^{2n} : ||z|| = 1}, since v_1 is. The second column u_2 = Wv_2 = WZ\bar v_1 = Z\bar W \bar v_1 = Z \bar u_1 (we used (A.4)), which agrees with our construction. Now inductively, consider u_{2k−1}, which is orthogonal to u_1, ..., u_{2k−2}. Consider any 2n × 2n unitary U that maps span{u_1, ..., u_{2k−2}} to itself. Then W^{-1}UW maps span{v_1, ..., v_{2k−2}} to itself. By the construction this means that v_{2k−1} and W^{-1}UWv_{2k−1} are distributed identically. Thus u_{2k−1} = Wv_{2k−1} and Uu_{2k−1} = UWv_{2k−1} are distributed identically. Since U was arbitrary, we conclude that u_{2k−1} is distributed uniformly on the subspace of C^{2n} orthogonal to u_1, ..., u_{2k−2}. Finally, u_{2k} = Z \bar u_{2k−1} as before, and this completes the induction. □

2.2. CMV matrices. Let us now give the definition of CMV matrices. They are named after Cantero–Moral–Velázquez [4] (and were also discovered earlier by Bunse-Gerstner–Elsner [3]). For a comprehensive discussion, see [32].

Definition 2.3. Given coefficients α_0, . . .
, α_{n−1} with α_k ∈ D for 0 ≤ k ≤ n−2 and α_{n−1} ∈ \bar D, let ρ_k = \sqrt{1 − |α_k|^2}, and define

(2.1)   Ξ_k = \begin{pmatrix} \bar α_k & ρ_k \\ ρ_k & −α_k \end{pmatrix}

for 0 ≤ k ≤ n−2, while Ξ_{−1} = [1] and Ξ_{n−1} = [\bar α_{n−1}]. From these, form the n × n block-diagonal matrices

(2.2)   L = diag(Ξ_0, Ξ_2, Ξ_4, ...),   M = diag(Ξ_{−1}, Ξ_1, Ξ_3, ...).

The matrix C(α_0, ..., α_{n−1}) := LM is called the CMV matrix corresponding to α_0, ..., α_{n−1}.

Remark. Note that C(α_0, ..., α_{n−1}) is unitary if and only if α_{n−1} ∈ ∂D. If α_{n−1} ∈ D, then it can be shown that C(α_0, ..., α_{n−1}) is a subunitary operator (C†C ≤ I). In the literature the term "CMV matrix" is often reserved for the case α_{n−1} ∈ ∂D, and if α_{n−1} ∈ D then these matrices are referred to as the "cut-off CMV matrices". In this paper we will use the term "CMV matrix" for all α_{n−1} ∈ \bar D.

Let us now define what we will refer to here as the CMV-fication algorithm. Suppose we are given an n × n unitary matrix U (with complex entries). If the vector e_1 := [1, 0, 0, ..., 0]^T is cyclic for U, then (see [3, 4, 32]) applying the Gram–Schmidt orthonormalization procedure to e_1, Ue_1, U^{−1}e_1, U^2 e_1, U^{−2}e_1, ... produces a basis in which U has the CMV form C with α_{n−1} ∈ ∂D.

A similar construction works if U is instead a unitary operator on ℓ^2(Z_+) with the cyclic vector e_1. Then one ends up with a semi-infinite CMV matrix C(α_0, α_1, ...) with α_j ∈ D for all j ≥ 0. This matrix is defined as LM, where L and M are semi-infinite block-diagonal matrices (2.2), and where Ξ_k are as in (2.1) for all k ≥ 0.

If U is a unitary operator on C^n or on ℓ^2(Z_+) with a cyclic vector e_1, then its spectral measure is defined to be the probability measure on ∂D uniquely determined by

(2.3)   ⟨e_1, U^m e_1⟩ = \int_0^{2π} e^{imθ} \, dμ(θ),

where ⟨v, u⟩ = v†u. In fact, we have an isometry ı of the Hilbert spaces L^2(μ) and C^n or ℓ^2(Z_+) defined by

ı : z^m ↦ U^m e_1,   m ∈ Z.
(2.4)

Now note that during the Gram–Schmidt procedure above, e_1 was not changed, and therefore our CMV-fication algorithm can be described as

(2.5)   U = W C W†

for some unitary W with We_1 = W†e_1 = e_1. This means not only that the eigenvalues of U and C coincide, but also that the spectral measures (2.3) of U and C with respect to e_1 are identical. In fact, two unitary CMV matrices with identical spectral measures necessarily coincide; see [23, Prop 3.3].

Thus, from the physical point of view, one may want to think that, given a physical system modeled by a unitary matrix U with a chosen vector e_1 (viewed as the initial state of the system), there is a unique choice of basis in which the matrix attains the "minimal complexity" CMV form.

Note however that in the case of systems with time-reversal invariance (COE and CSE) the associated CMV form will not possess the same symmetry properties. In Section 9 we introduce a variant of the CMV form that does possess the requisite symmetry. Moreover, the matrix used to conjugate to this new canonical form belongs to the orthogonal/symplectic group.

From the mathematical point of view, one of the advantages of reducing a unitary matrix or operator U to the CMV form, apart from the latter being sparse and having the same spectral measure, is that there is a direct connection to the theory of orthogonal polynomials on the unit circle. Indeed, the characteristic polynomials of C(α_0, ..., α_{k−1}), k = 1, 2, ..., coincide with the orthogonal polynomials Φ_k associated to the spectral measure μ of U with respect to e_1 (see Appendix B):

(2.6)   det(zI_k − C(α_0, ..., α_{k−1})) = Φ_k(z).

In particular, this means that these polynomials satisfy Szegő's recurrence (B.1). We stress that the recurrence coefficients in (B.1) are precisely the coefficients from the CMV matrix.
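Definition 2.3 translates directly into code. The sketch below is our own illustration under the paper's conventions (the helper name `cmv` is ours): it assembles L and M from the blocks (2.1)–(2.2) and checks that C = LM is unitary when |α_{n−1}| = 1.

```python
import numpy as np

def cmv(alphas):
    """CMV matrix C = L M for Verblunsky coefficients alpha_0, ..., alpha_{n-1}."""
    n = len(alphas)
    rho = np.sqrt(np.clip(1.0 - np.abs(np.asarray(alphas)) ** 2, 0.0, None))
    def xi(k):  # the 2x2 block (2.1)
        return np.array([[np.conj(alphas[k]), rho[k]],
                         [rho[k], -alphas[k]]])
    L = np.zeros((n, n), dtype=complex)
    M = np.zeros((n, n), dtype=complex)
    M[0, 0] = 1.0                                  # Xi_{-1} = [1]
    for k in range(0, n - 1, 2):                   # L = diag(Xi_0, Xi_2, ...)
        L[k:k + 2, k:k + 2] = xi(k)
    for k in range(1, n - 1, 2):                   # M = diag(Xi_{-1}, Xi_1, ...)
        M[k:k + 2, k:k + 2] = xi(k)
    last = L if (n - 1) % 2 == 0 else M            # Xi_{n-1} = [conj(alpha_{n-1})]
    last[n - 1, n - 1] = np.conj(alphas[-1])
    return L @ M

rng = np.random.default_rng(1)
n = 8
a = rng.uniform(-0.5, 0.5, n) + 1j * rng.uniform(-0.5, 0.5, n)
a[-1] = np.exp(2j * np.pi * rng.uniform())   # alpha_{n-1} on the unit circle
C = cmv(a)
assert np.allclose(C @ C.conj().T, np.eye(n))   # unitary CMV matrix
```

If instead |α_{n−1}| < 1, the same construction yields a cut-off CMV matrix, and its eigenvalues (the zeros of Φ_n) lie strictly inside the unit disk.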
We will refer to them as the Verblunsky coefficients. We collect all the needed facts from the theory of orthogonal polynomials in Appendix B. For more details the reader is referred to the monographs [30, 31].

2.3. Block CMV matrices and matrix-valued spectral measures. For the quaternionic ensembles CSE(n) and USp(n) we will have to deal with 2 × 2 matrix-valued measures. We say that μ is an l × l matrix-valued probability measure on ∂D if it is a countably additive mapping from Borel subsets of ∂D to positive semi-definite l × l matrices, with the normalization condition μ(∂D) = I_l. The l × l spectral measure of an ln × ln unitary matrix U with respect to vectors e_1, e_2, ..., e_l is the l × l matrix-valued measure dμ with entries dμ_{jk} (1 ≤ j, k ≤ l) determined by

(2.7)   ⟨e_j, U^m e_k⟩ = \int_0^{2π} e^{imθ} \, dμ_{jk}(θ).

Block CMV matrices can be defined as before, by replacing the scalar Verblunsky coefficients α_j ∈ \bar D by l × l matrices satisfying α†α ≤ I_l and setting ρ_j = (I_l − α_j†α_j)^{1/2}. Under the natural cyclicity condition, any unitary nl × nl matrix U is unitarily equivalent to a block CMV matrix W†UW = C(α_0, ..., α_{n−1}) with the last coefficient satisfying α†_{n−1}α_{n−1} = I_l. Moreover, the l × l matrix-valued spectral measure (2.7) with respect to e_1, e_2, ..., e_l is preserved (which is to say, W fixes e_1, ..., e_l).

Now, given a matrix Q from CSE(n) or USp(n), by its spectral measure we will mean the 2 × 2 matrix-valued spectral measure of the 2n × 2n complex matrix C(Q) with respect to e_1, e_2. The matrix C(Q) can be reduced (with probability 1) to C(α_0, ..., α_{n−1}) with 2 × 2 coefficients α_j. The 2 × 2 spectral measures of C(Q) and C with respect to e_1, e_2 coincide.

3. Spectral Measures of Unitary Ensembles

In this section we derive the distribution of the random spectral measures of each of the ensembles defined above. Let us denote the Vandermonde determinant by

Δ(z_1, ..., z_n) = \prod_{1 ≤ j < k ≤ n} (z_k − z_j).

3.1. Dyson's circular ensembles.

Proposition 3.1.
The spectral measure of COE(n) and of CUE(n) is

(3.1)   \sum_{j=1}^n μ_j δ_{e^{iθ_j}},

where

(3.2)   0 ≤ θ_j < 2π  (1 ≤ j ≤ n),
(3.3)   \sum_{j=1}^n μ_j = 1,   μ_j > 0  (1 ≤ j ≤ n),

and the joint distribution of e^{iθ_1}, ..., e^{iθ_n}, μ_1, ..., μ_{n−1} is

(3.4)   \frac{1}{Z_{n,β}} |Δ(e^{iθ_1}, ..., e^{iθ_n})|^β \frac{dθ_1}{2π} \cdots \frac{dθ_n}{2π} × \frac{1}{Z'_{n,β}} \prod_{j=1}^n μ_j^{β/2−1} dμ_1 \cdots dμ_{n−1}

with β = 1 for COE and β = 2 for CUE. Here Z_{n,β}, Z'_{n,β} are the normalization constants expressed in terms of the gamma function as

Z_{n,β} = \frac{Γ(βn/2 + 1)}{[Γ(β/2 + 1)]^n},   Z'_{n,β} = \frac{[Γ(β/2)]^n}{Γ(βn/2)}.

The 2 × 2 spectral measure of CSE(n) is

(3.5)   \sum_{j=1}^n \begin{pmatrix} μ_j & 0 \\ 0 & μ_j \end{pmatrix} δ_{e^{iθ_j}},

where μ_j and θ_j follow the law (3.4) with β = 4 on the domain (3.2)–(3.3).

Remark. In fact, the probability distribution of the eigenvalues in (3.4) with any 0 < β < ∞ has a natural interpretation as the Gibbs distribution for the classical Coulomb gas on the circle at the inverse temperature β. This distribution for any 0 < β < ∞ was realized as the eigenvalue distribution of the family of random matrix models called the circular β-ensembles in Killip–Nenciu [22]. See Proposition 4.2 below.

Proof. The distribution of the eigenvalues is well known (see Dyson's original paper [5]). The distribution of the eigenweights for β = 2 was computed in [22, Proposition 3.1]. Let us compute the distribution of the eigenweights for β = 1. Suppose U ∈ COE(n), and let its eigenvalues e^{iθ_1}, ..., e^{iθ_n} be given. It can be diagonalized by a real orthogonal matrix: U = ODO^T (see [5, Thm 4]; or, more generally, [17, Thm 4.4.7]). The matrix O of eigenvectors is unique up to multiplication by a diagonal matrix S with ±1 entries on the diagonal. If we choose these signs at random (independently and uniformly), then the map U ↦ OS becomes well defined, and OS becomes a Haar distributed orthogonal matrix.
Indeed, the invariance property of COE(n) ([26, Thm 10.1.1]) implies that the distributions of OS and WOS are identical for any orthogonal W, and now we can apply the uniqueness of the Haar measure on O(n). Now Proposition 2.2(a) says that the first row x = (x_1, ..., x_n) of OS is uniformly distributed on the unit sphere S^{n−1} = {x ∈ R^n : ||x|| = 1}. But by (2.3) and (3.1), μ_j = x_j^2. Now Lemma C.1(a) finishes the proof.

A similar approach works for β = 4. Let U ∈ CSE(n). Suppose the 2n eigenvalues of C(U) are e^{iθ_1}, e^{iθ_1}, ..., e^{iθ_n}, e^{iθ_n} (each repeated twice). By [5, Thm 3], U can be reduced to the diagonal form U = BDB^R by a symplectic matrix B ∈ USp(n), where C(D) is the 2n × 2n complex diagonal matrix with the diagonal (e^{iθ_1}, e^{iθ_1}, ..., e^{iθ_n}, e^{iθ_n}). B is defined uniquely up to multiplication by a diagonal matrix S with unimodular real quaternions on its diagonal. If we choose those at random, independently and uniformly, then by the invariance property of CSE(n) (see [26, Thm 10.2.1]), BS becomes a Haar distributed random matrix from USp(n).

Denote the first quaternionic column of (BS)^R by [q_1, ..., q_n]^T with

C(q_j) = \begin{pmatrix} α_j & −\bar β_j \\ β_j & \bar α_j \end{pmatrix}

with α_j, β_j ∈ C, 1 ≤ j ≤ n. Now U^m = (BS)D^m(BS)^R, and a simple calculation implies

⟨e_1, C(U)^m e_1⟩ = ⟨e_2, C(U)^m e_2⟩ = \sum_{j=1}^n (|α_j|^2 + |β_j|^2) e^{imθ_j},
⟨e_1, C(U)^m e_2⟩ = ⟨e_2, C(U)^m e_1⟩ = 0.

Therefore the 2 × 2 spectral measure of U is indeed of the form (3.5) with μ_j = |α_j|^2 + |β_j|^2. By Proposition 2.2(c), the first complex column of C((BS)^R), that is, [α_1, β_1, ..., α_n, β_n]^T, is uniformly distributed on the unit sphere of C^{2n}. Then by Lemma C.1(e), the μ_j's are distributed according to (2n−1)! \prod_{j=1}^n μ_j \, dμ_1 \cdots dμ_{n−1} on (3.3), as required. □

3.2. Orthogonal group.

Proposition 3.2.
(a) The spectral measure of SO(2n) is

(3.6)   \sum_{j=1}^n \tfrac12 μ_j (δ_{e^{iθ_j}} + δ_{e^{−iθ_j}}),

where

(3.7)   0 < θ_j < π  (1 ≤ j ≤ n),
(3.8)   \sum_{j=1}^n μ_j = 1,   μ_j > 0  (1 ≤ j ≤ n),

and the joint distribution of e^{iθ_1}, ..., e^{iθ_n}, μ_1, ..., μ_{n−1} is

\frac{1}{2^{n−1} n! π^n} |Δ(2\cos θ_1, ..., 2\cos θ_n)|^2 \, dθ_1 \cdots dθ_n × (n−1)! \, dμ_1 \cdots dμ_{n−1}.

(b) The spectral measure of O(2n) \ SO(2n) is

(3.9)   \sum_{j=1}^{n−1} \tfrac12 μ_j (δ_{e^{iθ_j}} + δ_{e^{−iθ_j}}) + μ_n δ_1 + μ_{n+1} δ_{−1},

where

(3.10)   0 < θ_j < π  (1 ≤ j ≤ n−1),
(3.11)   \sum_{j=1}^{n+1} μ_j = 1,   μ_j > 0  (1 ≤ j ≤ n+1),

and the joint distribution of e^{iθ_1}, ..., e^{iθ_{n−1}}, μ_1, ..., μ_n is

\frac{2^{n−1}}{(n−1)! π^{n−1}} |Δ(2\cos θ_1, ..., 2\cos θ_{n−1})|^2 \prod_{j=1}^{n−1} \sin^2 θ_j \, dθ_1 \cdots dθ_{n−1} × \frac{(n−1)!}{π} \frac{1}{\sqrt{μ_n μ_{n+1}}} \, dμ_1 \cdots dμ_n.

(c) The spectral measure of SO(2n+1) is

(3.12)   \sum_{j=1}^n \tfrac12 μ_j (δ_{e^{iθ_j}} + δ_{e^{−iθ_j}}) + μ_{n+1} δ_1,

with (3.7), (3.11), and the joint distribution of e^{iθ_1}, ..., e^{iθ_n}, μ_1, ..., μ_n is

\frac{2^n}{n! π^n} |Δ(2\cos θ_1, ..., 2\cos θ_n)|^2 \prod_{j=1}^n \sin^2(θ_j/2) \, dθ_1 \cdots dθ_n × \frac{Γ(n + \frac12)}{Γ(\frac12)} \frac{1}{\sqrt{μ_{n+1}}} \, dμ_1 \cdots dμ_n.

(d) The spectral measure of O(2n+1) \ SO(2n+1) is

(3.13)   \sum_{j=1}^n \tfrac12 μ_j (δ_{e^{iθ_j}} + δ_{e^{−iθ_j}}) + μ_{n+1} δ_{−1},

with (3.7), (3.11), and the joint distribution of e^{iθ_1}, ..., e^{iθ_n}, μ_1, ..., μ_n is

\frac{2^n}{n! π^n} |Δ(2\cos θ_1, ..., 2\cos θ_n)|^2 \prod_{j=1}^n \cos^2(θ_j/2) \, dθ_1 \cdots dθ_n × \frac{Γ(n + \frac12)}{Γ(\frac12)} \frac{1}{\sqrt{μ_{n+1}}} \, dμ_1 \cdots dμ_n.

Proof. The distributions of the eigenvalues are well known and can be found in, e.g., [34] and [28], so we just need to justify the distribution of the eigenweights. Part (a) is proven in [22, Prop 3.4]. Parts (b), (c), and (d) can be proved in a similar fashion. Indeed, one can reduce a random matrix to the block-diagonal form

diag\Big( \begin{pmatrix} \cos θ_1 & \sin θ_1 \\ −\sin θ_1 & \cos θ_1 \end{pmatrix}, ..., \begin{pmatrix} \cos θ_{n−1} & \sin θ_{n−1} \\ −\sin θ_{n−1} & \cos θ_{n−1} \end{pmatrix}, [1], [−1] \Big),

diag\Big( \begin{pmatrix} \cos θ_1 & \sin θ_1 \\ −\sin θ_1 & \cos θ_1 \end{pmatrix}, ..., \begin{pmatrix} \cos θ_n & \sin θ_n \\ −\sin θ_n & \cos θ_n \end{pmatrix}, [1] \Big),

diag\Big( \begin{pmatrix} \cos θ_1 & \sin θ_1 \\ −\sin θ_1 & \cos θ_1 \end{pmatrix}, ..., \begin{pmatrix} \cos θ_n & \sin θ_n \\ −\sin θ_n & \cos θ_n \end{pmatrix}, [−1] \Big),

respectively. This shows that the vector of eigenweights (μ_1, ..., μ_{n+1}) for the cases (b), (c), and (d) is distributed as in Lemma C.1 (c), (d), (d), respectively, which finishes the proof. □

3.3. Compact symplectic unitary group.

Proposition 3.3. The 2 × 2 spectral measure of USp(n) is

(3.14)   \sum_{j=1}^n W_j δ_{e^{iθ_j}} + \sum_{j=1}^n W_j^R δ_{e^{−iθ_j}},

where

W_j = \begin{pmatrix} μ_j & \sqrt{μ_j ν_j} e^{iψ_j} \\ \sqrt{μ_j ν_j} e^{−iψ_j} & ν_j \end{pmatrix},   W_j^R = \begin{pmatrix} ν_j & −\sqrt{μ_j ν_j} e^{iψ_j} \\ −\sqrt{μ_j ν_j} e^{−iψ_j} & μ_j \end{pmatrix}.

Here

0 < θ_j < π  (1 ≤ j ≤ n),   0 ≤ ψ_j < 2π  (1 ≤ j ≤ n),
\sum_{j=1}^n μ_j + \sum_{j=1}^n ν_j = 1,   μ_j > 0, ν_j > 0  (1 ≤ j ≤ n),

and the joint distribution of e^{iθ_1}, ..., e^{iθ_n}, e^{iψ_1}, ..., e^{iψ_n}, μ_1, ..., μ_n, ν_1, ..., ν_{n−1} is

\frac{2^n}{n! π^n} |Δ(2\cos θ_1, ..., 2\cos θ_n)|^2 \prod_{j=1}^n \sin^2 θ_j \, dθ_1 \cdots dθ_n × \frac{dψ_1}{2π} \cdots \frac{dψ_n}{2π} × (2n−1)! \, dμ_1 \cdots dμ_n \, dν_1 \cdots dν_{n−1}.

Proof. The distribution of the eigenvalues is a classical result. As for the matrix eigenweights, first write a matrix U ∈ USp(n) as U = BDB^R, where C(D) = diag(e^{iθ_1}, e^{−iθ_1}, ..., e^{iθ_n}, e^{−iθ_n}). Repeating the arguments from the proof of Proposition 3.1, we may think of B as being Haar distributed in USp(n). Let the first quaternionic column of B^R be [q_1, ..., q_n]^T, q_j ∈ H_R, with

C(q_j) = \begin{pmatrix} α_j & −\bar β_j \\ β_j & \bar α_j \end{pmatrix}.
Now U^m = BD^mB^R, and a simple calculation implies

⟨e_1, C(U)^m e_1⟩ = \sum_{j=1}^n |α_j|^2 e^{imθ_j} + \sum_{j=1}^n |β_j|^2 e^{−imθ_j},
⟨e_2, C(U)^m e_2⟩ = \sum_{j=1}^n |β_j|^2 e^{imθ_j} + \sum_{j=1}^n |α_j|^2 e^{−imθ_j},
⟨e_1, C(U)^m e_2⟩ = −\sum_{j=1}^n \bar α_j \bar β_j e^{imθ_j} + \sum_{j=1}^n \bar α_j \bar β_j e^{−imθ_j},
⟨e_2, C(U)^m e_1⟩ = −\sum_{j=1}^n α_j β_j e^{imθ_j} + \sum_{j=1}^n α_j β_j e^{−imθ_j}.

Therefore the matrix-valued measure (2.7) has the form (3.14) with

W_j = \begin{pmatrix} |α_j|^2 & −\bar α_j \bar β_j \\ −α_j β_j & |β_j|^2 \end{pmatrix}.

Using Proposition 2.2(c) and Lemma C.1(b), we obtain the promised joint distribution. □

4. Matrix Models of Unitary Random Matrix Ensembles

It turns out that if one applies the CMV-fication procedure described in Section 2.2, then the Verblunsky coefficients become statistically independent, with distributions that are explicitly computable. For CUE(n) and SO(2n) this was done by Killip–Nenciu in [22]. We restate these results here for completeness and develop analogues for the other ensembles. We stress that for the quaternionic ensembles CSE and USp, one needs to consider the 2 × 2 block CMV-fication.

Definition 4.1. (a) We say that a real-valued random variable X is beta-distributed with parameters s, t > 0, which we denote by X ∼ B(s, t), if

E{f(X)} = \frac{2^{1−s−t} Γ(s+t)}{Γ(s)Γ(t)} \int_{−1}^{1} f(x)(1−x)^{s−1}(1+x)^{t−1} \, dx.

We write X ∼ B(0, 0) when X takes the values 1 and −1 with probabilities 1/2 each. (This is the t = s → 0 limit of the above.)

(b) We say that a complex random variable X with values in the unit disk D is Θ(ν)-distributed (for ν > 1) if

E{f(X)} = \frac{ν−1}{2π} \iint_D f(z)(1−|z|^2)^{(ν−3)/2} \, d^2z,

where d^2z is the Lebesgue measure on D. As a limit case, X ∼ Θ(1) means that X is uniformly distributed on the unit circle.
(c) We say that a 2 × 2 matrix-valued random variable X is Υ(ν)-distributed (for ν > 3) if

(4.1)   X = \begin{pmatrix} a_1 + a_2 i & −a_3 + a_4 i \\ a_3 + a_4 i & a_1 − a_2 i \end{pmatrix},

where, writing \vec{a} = (a_1, a_2, a_3, a_4),

E{f(\vec{a})} = \frac{(ν−1)(ν−3)}{4π^2} \int_{||\vec{x}|| ≤ 1} f(\vec{x})(1 − |\vec{x}|^2)^{(ν−5)/2} \, dx_1 dx_2 dx_3 dx_4.

By extension, we write X ∼ Υ(3) when X is Haar distributed in SU(2), that is, (4.1) with (a_1, a_2, a_3, a_4) uniformly distributed over {x ∈ R^4 : x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1}.

Remarks. 1. Incorporating our stated extensions, B(ν/2, ν/2) is the distribution of the first component of a random vector uniformly distributed over the ν-sphere {x ∈ R^{ν+1} : ||x|| = 1} (if ν ≥ 0); Θ(ν) is the distribution of x_1 + ix_2, where x_1, x_2 are the first two components of a random vector uniformly distributed over the ν-sphere (if ν ≥ 1); and Υ(ν)-distributed a_1, a_2, a_3, a_4 appear as the first four components of a random vector uniformly distributed over the ν-sphere (if ν ≥ 3).

Proposition 4.2. The CMV-fication of the ensembles COE(n) or CUE(n) produces the CMV matrix C(α_0, ..., α_{n−1}) with independent Verblunsky coefficients

(4.2)   α_k ∼ Θ(β(n−1−k) + 1),   0 ≤ k ≤ n−1,

where β = 1 for COE(n) and β = 2 for CUE(n). More generally, for any 0 < β < ∞, the spectral measure of C(α_0, ..., α_{n−1}) with independent Verblunsky coefficients (4.2) has the law (3.4).

Remarks. 1. The ensembles of CMV matrices with (4.2) for any 0 < β < ∞ were introduced by Killip–Nenciu in [22], and are called the circular β-ensembles. One way to look at it is that Dyson's ensembles embed themselves into a family of random matrix ensembles that depend on 0 < β < ∞ continuously. This looks natural if one thinks of the eigenvalue point process as an interacting particle system. Similarly, we will see a β-extrapolation for the SO(n) and O(n) \ SO(n) ensembles below as well. It would be interesting to find a natural β-extrapolation for USp(n).

2.
The CMV-fication procedure requires cyclicity of e_1 or, equivalently, that |α_k| < 1 for 0 ≤ k ≤ n−2. This holds with probability 1.

Proof. The result for CUE(n) was established in [22, Prop 3.3]. The result for COE(n) is the combination of our Proposition 3.1 above, [22, Prop 4.2], and the fact that two CMV matrices with the same spectral measures must coincide. The last statement of the Proposition is in [22, Prop 4.2]. □

Proposition 4.3. The 2 × 2 block CMV-fication of CSE(n) produces the block CMV matrix C(α_0 I_2, ..., α_{n−1} I_2) with independent matrix-valued Verblunsky coefficients α_0 I_2, ..., α_{n−1} I_2, where the α_j are distributed according to (4.2) with β = 4. In other words, the 2 × 2 spectral measures of CSE(n) and of C(α_0 I_2, ..., α_{n−1} I_2) with α_j given by (4.2) with β = 4 coincide.

Remark. Scalar CMV-fication of CSE(n) is not natural. It would produce a CMV matrix with 2n Verblunsky coefficients with non-independent distributions.

Proof. By Proposition 3.1, the random 2 × 2 spectral measure of a CSE(n) matrix is equal to I_2 times the scalar measure (3.1) with (3.4) and β = 4. This implies that the 2 × 2 Verblunsky coefficients of CSE(n) are equal to I_2 times the scalar Verblunsky coefficients of the scalar measure, which are (4.2) by the previous proposition. □

Proposition 4.4. The CMV-fication of the O(N) ensemble produces the CMV matrix C(α_0, ..., α_{N−1}) with independent Verblunsky coefficients distributed as

(4.3)   α_k ∼ B\big(\tfrac{N−1−k}{2}, \tfrac{N−1−k}{2}\big),   0 ≤ k ≤ N−1.

Remark. CMV-fication of SO(n) and O(n) \ SO(n) gives the same (4.3) except for the last coefficient α_{n−1}, which has to be taken always equal to −1 for SO(2k) or O(2k+1) \ SO(2k+1), and equal to 1 for SO(2k+1) or O(2k) \ SO(2k).

Proof. This was proved for SO(2n) in [22, Prop 3.5]. The other three cases can be proved in the exact same way. Finally, note that the total measures of SO(N) and O(N) \ SO(N) with respect to the Haar measure on O(N) are equal. This gives the B(0, 0) distribution for α_{N−1}.
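Propositions 4.2 and 4.4 immediately give sampling algorithms for these spectral ensembles: draw the independent Verblunsky coefficients and diagonalize the resulting sparse CMV matrix. The sketch below does this for the circular β-ensemble. The helper names are ours, and the Θ(ν) sampler uses the elementary consequence of Definition 4.1(b) that |X|^2 ∼ Beta(1, (ν−1)/2).

```python
import numpy as np

def theta_rv(nu, rng):
    """Sample X ~ Theta(nu): rotation invariant on the disk with
    |X|^2 ~ Beta(1, (nu-1)/2); valid for nu > 1."""
    r = np.sqrt(1.0 - rng.uniform() ** (2.0 / (nu - 1.0)))
    return r * np.exp(2j * np.pi * rng.uniform())

def circular_beta_eigs(n, beta, rng):
    """Eigenvalues of the circular beta-ensemble via the CMV model (4.2)."""
    a = np.array([theta_rv(beta * (n - 1 - k) + 1, rng) for k in range(n - 1)]
                 + [np.exp(2j * np.pi * rng.uniform())])   # alpha_{n-1} ~ Theta(1)
    rho = np.sqrt(np.clip(1.0 - np.abs(a) ** 2, 0.0, None))
    L = np.zeros((n, n), dtype=complex)
    M = np.zeros((n, n), dtype=complex)
    M[0, 0] = 1.0
    for k in range(n - 1):                                  # 2x2 blocks (2.1)
        blk = np.array([[np.conj(a[k]), rho[k]], [rho[k], -a[k]]])
        (L if k % 2 == 0 else M)[k:k + 2, k:k + 2] = blk
    (L if (n - 1) % 2 == 0 else M)[n - 1, n - 1] = np.conj(a[-1])
    return np.linalg.eigvals(L @ M)

eigs = circular_beta_eigs(50, beta=3.0, rng=np.random.default_rng(2))
assert np.allclose(np.abs(eigs), 1.0)   # the model is unitary for any beta > 0
```

Setting β = 1 or β = 2 recovers COE/CUE eigenvalue statistics; the analogous sampler for Proposition 4.4 only changes the coefficient distributions to the stated beta laws.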
□

The orthogonal groups treated in Proposition 4.4 can be realized as special cases of the following more general result:

Proposition 4.5 (Real orthogonal β-ensemble). For any 0 < β < ∞, a > −1, and b > −1, we have:

(a) (N = 2n, det = 1) The spectral measure of C(α_0, ..., α_{2n−2}, −1) with independent Verblunsky coefficients

α_k ∼ B\big(\tfrac{2n−k−2}{4}β + a + 1, \tfrac{2n−k−2}{4}β + b + 1\big),   0 ≤ k ≤ 2n−2, k even,
α_k ∼ B\big(\tfrac{2n−k−3}{4}β + a + b + 2, \tfrac{2n−k−1}{4}β\big),   0 ≤ k ≤ 2n−2, k odd,

is (3.6) on (3.7), (3.8) with the joint distribution

C_n(a, b, β) |Δ(2\cos θ_1, ..., 2\cos θ_n)|^β \prod_{j=1}^n |1 − \cos θ_j|^{a + 1/2} |1 + \cos θ_j|^{b + 1/2} \, dθ_1 \cdots dθ_n × K_n(β) \prod_{j=1}^n μ_j^{β/2−1} \, dμ_1 \cdots dμ_{n−1}.

(b) (N = 2n, det = −1) The spectral measure of C(α_0, ..., α_{2n−2}, 1) with independent Verblunsky coefficients

α_k ∼ B\big(\tfrac{2n−k−1}{4}β, \tfrac{2n−k−1}{4}β\big),   0 ≤ k ≤ 2n−2

is (3.9) on (3.10), (3.11) with the joint distribution

D_n(β) |Δ(2\cos θ_1, ..., 2\cos θ_{n−1})|^β \prod_{j=1}^{n−1} |1 − \cos θ_j|^{3β/4 − 1/2} |1 + \cos θ_j|^{3β/4 − 1/2} \, dθ_1 \cdots dθ_{n−1} × L_n(β) (μ_n μ_{n+1})^{β/4−1} \prod_{j=1}^{n−1} μ_j^{β/2−1} \, dμ_1 \cdots dμ_n.

(c) (N = 2n+1, det = 1) The spectral measure of C(α_0, ..., α_{2n−1}, 1) with independent Verblunsky coefficients

α_k ∼ B\big(\tfrac{2n−k}{4}β, \tfrac{2n−k}{4}β\big),   0 ≤ k ≤ 2n−1

is (3.12) on (3.7), (3.11) with the joint distribution

E_n(β) |Δ(2\cos θ_1, ..., 2\cos θ_n)|^β \prod_{j=1}^n |1 − \cos θ_j|^{3β/4 − 1/2} |1 + \cos θ_j|^{β/4 − 1/2} \, dθ_1 \cdots dθ_n × M_n(β) μ_{n+1}^{β/4−1} \prod_{j=1}^n μ_j^{β/2−1} \, dμ_1 \cdots dμ_n.

(d) (N = 2n+1, det = −1) The spectral measure of C(α_0, ..., α_{2n−1}, −1) with independent Verblunsky coefficients

α_k ∼ B\big(\tfrac{2n−k}{4}β, \tfrac{2n−k}{4}β\big),   0 ≤ k ≤ 2n−1

is (3.13) on (3.7), (3.11) with the joint distribution

E_n(β) |Δ(2\cos θ_1, ..., 2\cos θ_n)|^β \prod_{j=1}^n |1 − \cos θ_j|^{β/4 − 1/2} |1 + \cos θ_j|^{3β/4 − 1/2} \, dθ_1 \cdots dθ_n × M_n(β) μ_{n+1}^{β/4−1} \prod_{j=1}^n μ_j^{β/2−1} \, dμ_1 \cdots dμ_n.
The normalization constants are:
$$C_n(a,b,\beta) = \Bigg[ n!\, 2^{n(a+b+1)+\beta n(n-1)/2} \prod_{j=0}^{n-1} \frac{\Gamma(a+1+\tfrac{\beta}{2}j)\,\Gamma(b+1+\tfrac{\beta}{2}j)\,\Gamma(1+\tfrac{\beta}{2}(j+1))}{\Gamma(a+b+2+\tfrac{\beta}{2}(n+j-1))\,\Gamma(1+\tfrac{\beta}{2})} \Bigg]^{-1}; \qquad K_n(\beta) = \frac{\Gamma(\tfrac{\beta n}{2})}{\big[\Gamma(\tfrac{\beta}{2})\big]^n};$$
$$D_n(\beta) = \frac{1}{n!\,\pi^{n-1}\,2^{(2n-1)\beta/2-n}} \prod_{j=1}^{n-1} \frac{\Gamma(1+\tfrac{\beta}{2})}{\Gamma(1+\tfrac{\beta}{2}j)}; \qquad L_n(\beta) = \frac{\Gamma(\tfrac{\beta n}{2})}{\big[\Gamma(\tfrac{\beta}{2})\big]^{n-1}\big[\Gamma(\tfrac{\beta}{4})\big]^2};$$
$$E_n(\beta) = \frac{1}{n!\,\pi^{n}\,2^{(\beta-1)n}} \prod_{j=1}^{n} \frac{\Gamma(1+\tfrac{\beta}{2})}{\Gamma(1+\tfrac{\beta}{2}j)}; \qquad M_n(\beta) = \frac{\Gamma\big(\tfrac{\beta}{2}(n+\tfrac12)\big)}{\big[\Gamma(\tfrac{\beta}{2})\big]^n\,\Gamma(\tfrac{\beta}{4})}.$$

Remarks. 1. If $\beta = 2$, $a = b = -1/2$ in (a), or $\beta = 2$ in (b), (c), (d), then one obtains matrix models for $SO(2n)$, $O(2n)\setminus SO(2n)$, $SO(2n+1)$, $O(2n+1)\setminus SO(2n+1)$, respectively.
2. The ensemble of matrices in (a) was introduced in [22]. It is now called the real orthogonal β-ensemble (see Forrester [8, Sect 2.9]).
3. This admits a log-gas interpretation. Indeed, the eigenvalues of (a) can be viewed as $n$ unit charges $e^{i\theta_1}, \ldots, e^{i\theta_n}$ lying on the half-circle $0 < \theta < \pi$ that also interact with their image charges at $e^{-i\theta_1}, \ldots, e^{-i\theta_n}$ and with the fixed charges $-\tfrac12 + \tfrac{a+1}{\beta}$ at $1$ and $-\tfrac12 + \tfrac{b+1}{\beta}$ at $-1$. As usual, $0 < \beta < \infty$ plays the role of the inverse temperature. This is worked out in [8, Prop 2.9.6]. Similarly, one can check that (b) corresponds to the charges $\tfrac12 - \tfrac{1}{2\beta}$ at $1$ and $-1$; (c) to the charge $\tfrac12 - \tfrac{1}{2\beta}$ at $1$ and $-\tfrac{1}{2\beta}$ at $-1$; (d) to the charge $-\tfrac{1}{2\beta}$ at $1$ and $\tfrac12 - \tfrac{1}{2\beta}$ at $-1$.

Proof. (a) was proved in [22, Prop 5.3]. Let us prove (b). First of all, note that $\mathcal{C}$ is a real orthogonal matrix. Equation (2.6), Lemma B.1(i), and $\alpha_{2n-1} = 1$ imply $\det\mathcal{C} = -1$. Therefore $\mathcal{C}$ has eigenvalues $e^{\pm i\theta_1}, \ldots, e^{\pm i\theta_{n-1}}, 1, -1$. Conversely, such orthogonal matrices correspond to $2n\times 2n$ CMV matrices of the form $\mathcal{C}(\alpha_0, \ldots, \alpha_{2n-2}, 1)$ with all $\alpha_j \in (-1,1)$ and spectral measures of the form (3.9) with (3.10), (3.11), and distinct $\theta_j$'s. Indeed, all Verblunsky coefficients are real if and only if the spectral measure is symmetric.
And as we just saw, in this case $\alpha_{2n-1} = 1$ is equivalent to $\det\mathcal{C} = -1$, which for orthogonal matrices with simple spectrum is equivalent to having an eigenvalue at each of $1$ and $-1$. We therefore have a bijection between $(\alpha_0, \ldots, \alpha_{2n-2}) \in (-1,1)^{2n-1}$ and $(\theta_1, \ldots, \theta_{n-1}, \mu_1, \ldots, \mu_n)$ in (3.10), (3.11) with $\theta_1 < \ldots < \theta_{n-1}$. So we can speak of the corresponding Jacobian. In fact, combining Propositions 4.4 and 3.2(b), we obtain its value:
$$\Bigg|\det \frac{\partial(\alpha_0, \ldots, \alpha_{2n-2})}{\partial(\theta_1, \ldots, \theta_{n-1}, \mu_1, \ldots, \mu_n)}\Bigg| = \frac{2^{n-1}\,|\Delta(2\cos\theta_1, \ldots, 2\cos\theta_{n-1})|^2 \prod_{j=1}^{n-1}\sin\theta_j\ \sqrt{\mu_n\mu_{n+1}}}{\prod_{k=0}^{2n-2}(1-\alpha_k^2)^{(2n-k-3)/2}}.$$
Using the form (3.9) of the spectral measure, Lemma B.1(v) gives
$$\prod_{j=0}^{2n-2}(1-|\alpha_j|^2)^{(2n-j-1)/2} = \sqrt{\mu_n\mu_{n+1}} \prod_{j=1}^{n-1}\mu_j\ \big|\Delta\big(e^{i\theta_1}, e^{-i\theta_1}, \ldots, e^{i\theta_{n-1}}, e^{-i\theta_{n-1}}, 1, -1\big)\big| = 2^{3n-2}\,\sqrt{\mu_n\mu_{n+1}} \prod_{j=1}^{n-1}\mu_j \prod_{j=1}^{n-1}|\sin\theta_j|^3\ |\Delta(2\cos\theta_1, \ldots, 2\cos\theta_{n-1})|^2.$$
Now, starting with the distribution (4.4) and using the change of variables along with the last computation produces the promised distribution of the spectral measure. The normalization constant $L_n(\beta)$ is a well-known integral (see, e.g., [22, Lem A.4]). Parts (c) and (d) are similar. ∎

Proposition 4.6. The $2\times2$ block CMV-fication of the $USp(n)$ ensemble produces the block CMV matrix $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ with independent $2\times2$ matrix-valued Verblunsky coefficients $\alpha_j$ distributed as
$$\alpha_k \sim \Upsilon\big(4(n-k)-1\big), \qquad 0 \le k \le n-1.$$

Remark. Scalar CMV-fication of $USp(n)$ is not natural and does not lead to statistically independent Verblunsky coefficients.

Proof. With the help of our Proposition 2.2(c), the proof for $USp(n)$ goes along the same lines as the proof for $U(n)$ in [22, Prop 3.3]. ∎

5. Matrix Models for Truncations

In order to deal with the truncations, we will need the following lemma.

Lemma 5.1. Consider a unitary CMV matrix $\mathcal{C}(\alpha_0, \ldots, \alpha_n)$ with $\alpha_n \in \partial\mathbb{D}$.
• If $n$ is even, then the matrix obtained by removing the first row and column is equal to $S^\dagger\, \mathcal{C}(-\bar\alpha_{n-1}\alpha_n, -\bar\alpha_{n-2}\alpha_n, \ldots, -\bar\alpha_0\alpha_n)^T\, S$ for some unitary $S$.
• If $n$ is odd, then the matrix obtained by removing the first row and column is equal to $S^\dagger\, \mathcal{C}(-\bar\alpha_{n-1}\alpha_n, -\bar\alpha_{n-2}\alpha_n, \ldots, -\bar\alpha_0\alpha_n)\, S$ for some unitary $S$.

Proof. Suppose first that $n$ is even. Then $\mathcal{C}(\alpha_0, \ldots, \alpha_n)$ with the first row and column removed is equal to $\hat{\mathcal{L}}\hat{\mathcal{M}}$, where
$$\hat{\mathcal{L}} = \mathrm{diag}\big([-\alpha_0], \Xi[\alpha_2], \ldots, \Xi[\alpha_{n-2}], [\bar\alpha_n]\big), \qquad \hat{\mathcal{M}} = \mathrm{diag}\big(\Xi[\alpha_1], \Xi[\alpha_3], \ldots, \Xi[\alpha_{n-1}]\big),$$
where $\Xi[\alpha_k]$ is defined as in (2.1). Now consider $(Q^\dagger P_1 \hat{\mathcal{M}}^T P_2^\dagger Q)(Q^\dagger P_2 \hat{\mathcal{L}}^T P_1^\dagger Q)$, where $Q$ is the permutation matrix reversing the order of the coordinates, and
$$P_1 = \mathrm{diag}(\alpha_n, 1, \alpha_n, 1, \ldots), \qquad P_2 = \mathrm{diag}(1, \alpha_n, 1, \alpha_n, \ldots).$$
Direct calculations yield
$$Q^\dagger P_1 \hat{\mathcal{M}}^T P_2^\dagger Q = \mathrm{diag}\big(\Xi[-\bar\alpha_{n-1}\alpha_n], \ldots, \Xi[-\bar\alpha_1\alpha_n]\big),$$
$$Q^\dagger P_2 \hat{\mathcal{L}}^T P_1^\dagger Q = \mathrm{diag}\big([1], \Xi[-\bar\alpha_{n-2}\alpha_n], \ldots, \Xi[-\bar\alpha_2\alpha_n], [-\alpha_0\bar\alpha_n]\big).$$
This shows that $Q^\dagger P_1 (\hat{\mathcal{L}}\hat{\mathcal{M}})^T P_1^\dagger Q = (Q^\dagger P_1 \hat{\mathcal{M}}^T P_2^\dagger Q)(Q^\dagger P_2 \hat{\mathcal{L}}^T P_1^\dagger Q)$ is precisely equal to $\mathcal{C}(-\bar\alpha_{n-1}\alpha_n, -\bar\alpha_{n-2}\alpha_n, \ldots, -\bar\alpha_0\alpha_n)$, cf. Definition 2.3. Similarly, if $n$ is odd, we can see that $(Q^\dagger P_1^\dagger \hat{\mathcal{L}} P_2 Q)(Q^\dagger P_2^\dagger \hat{\mathcal{M}} P_1 Q)$ is equal to $\mathcal{C}(-\bar\alpha_{n-1}\alpha_n, -\bar\alpha_{n-2}\alpha_n, \ldots, -\bar\alpha_0\alpha_n)$. ∎

With this lemma and the results of Section 4 in hand, we immediately obtain matrix models for the truncations of unitary ensembles. Indeed, choose a matrix $U$ from one of the "scalar" ensembles from Section 3 ($CUE$, $COE$, $O$, $SO$, $O\setminus SO$), and remove any row and the corresponding column. By the invariance under the group action, it does not matter which row and column we delete; it will be convenient to delete the first ones. Then the truncated matrix $V$ is $V = PUP^\dagger$, where $P = \mathrm{proj}\big(\langle e_1\rangle^\perp\big)$. Recall now (2.5) that $U = W\mathcal{C}W^\dagger$ with $We_1 = W^\dagger e_1 = e_1$, which implies $V = PW\mathcal{C}W^\dagger P^\dagger = WP\mathcal{C}P^\dagger W^\dagger$.
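Lemma 5.1 can also be checked numerically. The sketch below (function names hypothetical) builds the CMV matrix of Definition 2.3 as $\mathcal{C} = \mathcal{L}\mathcal{M}$ from the blocks $\Xi[\alpha]$, removes the first row and column, and compares its eigenvalues with those of $\mathcal{C}(-\bar\alpha_{n-1}\alpha_n, \ldots, -\bar\alpha_0\alpha_n)$ (the transpose in the even case does not affect eigenvalues):

```python
import numpy as np

def xi(a):
    """2x2 block Xi[alpha] as in (2.1): [[conj(a), rho], [rho, -a]], rho = sqrt(1-|a|^2)."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]], dtype=complex)

def cmv(alphas):
    """CMV matrix C(alpha_0, ..., alpha_{m-1}) = L*M, cf. Definition 2.3."""
    m = len(alphas)
    L = np.eye(m, dtype=complex)
    M = np.eye(m, dtype=complex)
    for j in range(0, m - 1, 2):        # L = diag(Xi[a_0], Xi[a_2], ...)
        L[j:j+2, j:j+2] = xi(alphas[j])
    for j in range(1, m - 1, 2):        # M = diag([1], Xi[a_1], Xi[a_3], ...)
        M[j:j+2, j:j+2] = xi(alphas[j])
    # the last coefficient occupies a 1x1 corner block [conj(a_{m-1})]
    if (m - 1) % 2 == 0:
        L[m-1, m-1] = np.conj(alphas[-1])
    else:
        M[m-1, m-1] = np.conj(alphas[-1])
    return L @ M

rng = np.random.default_rng(0)
n = 6                                    # C(alpha_0,...,alpha_n) is (n+1)x(n+1)
alphas = rng.uniform(-0.5, 0.5, n + 1) + 1j * rng.uniform(-0.5, 0.5, n + 1)
alphas[n] = np.exp(2j * np.pi * rng.uniform())    # |alpha_n| = 1

trunc = cmv(alphas)[1:, 1:]              # remove first row and column
betas = [-np.conj(alphas[n - 1 - k]) * alphas[n] for k in range(n)]
ev1 = np.sort_complex(np.linalg.eigvals(trunc))
ev2 = np.sort_complex(np.linalg.eigvals(cmv(betas)))
assert np.allclose(ev1, ev2)
```

The agreement of the two spectra is exactly the content of the lemma; the unitary $S$ itself is not needed for eigenvalue comparisons.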
Therefore $V$ is unitarily equivalent to the CMV matrix form $\mathcal{C}$ of $U$ but with the first row and column removed. Thus the results of the next proposition follow from the CMV models in Section 4, the previous lemma, and the rotational (or real even) symmetry of the distributions of the Verblunsky coefficients. For the quaternionic ensembles $CSE$ and $USp$ one needs to modify all of the arguments in the obvious way to accommodate quaternionic rows and columns.

Proposition 5.2.
(a) The eigenvalue distribution of a $COE(n+1)$ or $CUE(n+1)$ matrix with one row and column removed coincides with the eigenvalue distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ with independent Verblunsky coefficients
$$\alpha_k \sim \Theta(\beta(k+1)+1), \qquad 0 \le k \le n-1, \tag{5.1}$$
where $\beta = 1$ for $COE$ and $\beta = 2$ for $CUE$. The eigenvalue distribution of a $CSE(n+1)$ matrix with one (quaternionic) row and column removed coincides with the eigenvalue distribution of $\mathcal{C}(\alpha_0 I_2, \ldots, \alpha_{n-1} I_2)$, where the $\alpha_k$ are independent and distributed according to (5.1) with $\beta = 4$.
(b) The eigenvalue distribution of an $O(n+1)$, or $SO(n+1)$, or $O(n+1)\setminus SO(n+1)$ matrix with one row and column removed coincides with the distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ with independent Verblunsky coefficients
$$\alpha_k \sim \mathcal{B}\big(\tfrac{k+1}{2}, \tfrac{k+1}{2}\big), \qquad 0 \le k \le n-1.$$
(c) The eigenvalue distribution of a $USp(n+1)$ matrix with one (quaternionic) row and column removed coincides with the distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ with independent $2\times2$ matrix Verblunsky coefficients
$$\alpha_k \sim \Upsilon(4k+7), \qquad 0 \le k \le n-1.$$

Remarks. 1. From aesthetic considerations, one might prefer to look at the above distributions as $\mathcal{B}\big(\tfrac{(k+2)-1}{2}, \tfrac{(k+2)-1}{2}\big)$, $\Theta(2(k+2)-1)$, $\Upsilon(4(k+2)-1)$ for the truncations of the compact groups $O$, $U$, and $USp$, respectively. Note $\mathcal{B}$ has an extra factor of $\tfrac12$ due to the historical definition of the Beta function.
2. These CMV matrices are no longer unitary, as the last coefficient is no longer unimodular (with probability 1); see the remark after Definition 2.3.
3. These models are efficient for simulation purposes.

6. Eigenvalue Distribution of the Truncations

Truncations of circular β-ensembles.

Theorem 6.1. Choose Verblunsky coefficients $\alpha_k$ to be independent and $\Theta(\beta(k+1)+1)$-distributed ($0 \le k \le n-1$), that is, distributed in $\mathbb{D}$ with the density
$$\frac{\beta(k+1)}{2\pi}\,(1-|\alpha_k|^2)^{\beta(k+1)/2-1}\ d^2\alpha_k.$$
Then the eigenvalues $z_1, \ldots, z_n$ of the CMV matrix $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ are distributed in $\mathbb{D}^n$ with the joint law
$$\frac{\beta^n}{(2\pi)^n} \prod_{j,k=1}^{n}(1 - z_j\bar z_k)^{\beta/2-1} \prod_{j<k}|z_j - z_k|^2\ d^2z_1 \cdots d^2z_n. \tag{6.1}$$

Remarks. 1. For any $0 < \beta < \infty$ these ensembles are unitarily equivalent to the circular β-ensembles with the first row and column removed.
2. Combining this with Proposition 5.2, we obtain Theorem 1.2.
3. As we will see in Section 8, the law (6.1) coincides with the Gibbs measure for a system of $n$ charges in the disk $\mathbb{D}$, where $\mathbb{D}$ and $\mathbb{C}\setminus\mathbb{D}$ are filled with dielectrics of suitably chosen susceptibility.

The proof is given at the end of this subsection after we establish a series of lemmas. Given a sequence of Verblunsky coefficients $\alpha_0, \ldots, \alpha_{n-1}$, one may form the associated sequence of orthogonal polynomials $\Phi_0, \ldots, \Phi_n$; see Appendix B. Recall from (2.6) that the eigenvalues of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ coincide with the zeros of $\Phi_n(z)$, which we shall denote by $z_1, \ldots, z_n$. Note that the mapping from the zeros $(z_1, \ldots, z_n) \in \mathbb{D}^n$ to the Verblunsky coefficients $(\alpha_0, \ldots, \alpha_{n-1}) \in \mathbb{D}^n$ is an $n!$-fold branched covering. This follows from [30, Thm 1.7.5]. To compute the Jacobian of this change of variables, it will be useful to introduce an intermediate set of variables. Let $\kappa^{(k)}_l$ be the coefficients of the $k$-th monic orthogonal polynomial $\Phi_k$:
$$\Phi_k(z) = z^k + \kappa^{(k)}_1 z^{k-1} + \ldots + \kappa^{(k)}_{k-1} z + \kappa^{(k)}_k. \tag{6.2}$$
The Jacobian of the mapping from the roots of a polynomial to its coefficients is well known; we reproduce it here as Lemma D.2(b).
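As a quick illustration of Theorem 6.1 (and of the simulation remark above), one can sample Θ-distributed coefficients and obtain the eigenvalues as the zeros of $\Phi_n$ via the Szegő recursion (B.2). A minimal NumPy sketch, with hypothetical helper names, using that for $\Theta(\nu)$ the radius satisfies $\mathbb{P}(|\alpha|^2 \le u) = 1-(1-u)^{(\nu-1)/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_theta(nu):
    # Theta(nu): density ((nu-1)/(2*pi)) * (1-|a|^2)^((nu-3)/2) on the unit disk
    s = (nu - 1) / 2.0
    r2 = 1.0 - rng.uniform() ** (1.0 / s)   # |alpha|^2 has CDF 1 - (1-u)^s
    return np.sqrt(r2) * np.exp(2j * np.pi * rng.uniform())

beta, n = 2.0, 8
alphas = [sample_theta(beta * (k + 1) + 1) for k in range(n)]

# Szego recursion Phi_{k+1}(z) = z*Phi_k(z) - conj(alpha_k)*Phi_k^*(z);
# coefficient arrays are stored in increasing order of powers, Phi_0 = 1.
phi = np.array([1.0 + 0j])
for a in alphas:
    phistar = np.conj(phi[::-1])            # Phi^*: reversed, conjugated coefficients
    phi = np.concatenate(([0], phi)) - np.conj(a) * np.concatenate((phistar, [0]))

z = np.roots(phi[::-1])                      # eigenvalues of C(alpha_0, ..., alpha_{n-1})
assert len(z) == n and np.max(np.abs(z)) < 1.0
```

Since all $|\alpha_k| < 1$ almost surely, the zeros land strictly inside the unit disk, in agreement with the support of (6.1).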
Viewing complex numbers as ordered pairs of real numbers, we have
$$J_{\mathbb{R}}\Bigg[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(z_1, \ldots, z_n)}\Bigg] = |\Delta(z_1, \ldots, z_n)|^2. \tag{6.3}$$

Lemma 6.2. The Jacobian $J_{\mathbb{R}}\Big[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(\alpha_0, \ldots, \alpha_{n-1})}\Big]$ is equal to $(-1)^n \prod_{k=0}^{n-1}(1-|\alpha_k|^2)^k$.

Proof of Lemma 6.2. From (B.2) we can immediately see that
$$\kappa^{(n)}_j = \kappa^{(n-1)}_j - \bar\alpha_{n-1}\,\bar\kappa^{(n-1)}_{n-j}, \qquad j = 1, \ldots, n, \tag{6.4}$$
where we assign $\kappa^{(n-1)}_n = 0$ and $\bar\kappa^{(n-1)}_0 = 1$. Therefore, applying Lemma D.1, we obtain that $J_{\mathbb{R}}\Big[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(\kappa^{(n-1)}_1, \ldots, \kappa^{(n-1)}_{n-1}, \alpha_{n-1})}\Big]$ is given by the determinant (6.5) of the $2n\times 2n$ real matrix of the transformation (6.4): the columns corresponding to the $\kappa^{(n-1)}_j$ contain the identity entries together with the coupling entries $-\alpha_{n-1}$, $-\bar\alpha_{n-1}$, while the last two columns (corresponding to $\alpha_{n-1}$) contain the entries $-\kappa^{(n-1)}_j$, $-\bar\kappa^{(n-1)}_j$. Expanding this determinant along the last two columns and then rearranging the remaining rows and columns, we obtain a block diagonal matrix with $n-1$ copies of
$$\begin{pmatrix} 1 & -\alpha_{n-1}\\ -\bar\alpha_{n-1} & 1 \end{pmatrix}$$
on the diagonal. Therefore (6.5) simplifies to $-(1-|\alpha_{n-1}|^2)^{n-1}$ (the minus sign comes from expanding along the last two columns). The lemma now follows by induction. ∎

Lemma 6.3. The Jacobian $J_{\mathbb{R}}\Big[\frac{\partial(\alpha_0, \ldots, \alpha_{n-1})}{\partial(z_1, \ldots, z_n)}\Big]$ is equal to $\dfrac{|\Delta(z_1, \ldots, z_n)|^2}{(-1)^n \prod_{k=0}^{n-1}(1-|\alpha_k|^2)^k}$.

Proof. This is immediate from (6.3) and Lemma 6.2. ∎

Now we can prove Theorem 6.1.

Proof of Theorem 6.1. The joint distribution of $\alpha_0, \ldots, \alpha_{n-1}$ is
$$\frac{\beta^n\, n!}{(2\pi)^n} \prod_{k=0}^{n-1}(1-|\alpha_k|^2)^{\beta(k+1)/2-1}\ d^2\alpha_0 \cdots d^2\alpha_{n-1}.$$
Using Lemmas 6.3 and B.1(vi), we see that this induces the following measure on $z_1, \ldots, z_n$:
$$\frac{1}{n!}\cdot\frac{\beta^n\, n!}{(2\pi)^n} \Bigg(\prod_{k=0}^{n-1}(1-|\alpha_k|^2)^{\beta(k+1)/2-1}\Bigg) \frac{|\Delta(z_1, \ldots, z_n)|^2}{\prod_{k=0}^{n-1}(1-|\alpha_k|^2)^k}\ d^2z_1 \cdots d^2z_n = \frac{\beta^n}{(2\pi)^n}\,|\Delta(z_1, \ldots, z_n)|^2 \prod_{k=0}^{n-1}(1-|\alpha_k|^2)^{(\frac{\beta}{2}-1)(k+1)}\ d^2z_1 \cdots d^2z_n = \frac{\beta^n}{(2\pi)^n}\,|\Delta(z_1, \ldots, z_n)|^2 \prod_{j,k=1}^{n}(1-z_j\bar z_k)^{\beta/2-1}\ d^2z_1 \cdots d^2z_n.$$
Note that we removed the ordering of the $z_j$'s, which resulted in the $\frac{1}{n!}$ factor. ∎

Truncation of the orthogonal β-ensemble. Truncations of the orthogonal group, and more generally of the orthogonal β-ensemble, have real eigenvalues with non-zero probability. Correspondingly, we decompose the set of possible systems of eigenvalues as
$$\mathcal{D} := \bigcup_{L+2M=n,\ L\ge0,\ M\ge0} \mathcal{D}_{L,M},$$
where
$$\mathcal{D}_{L,M} := \big\{(x_1, \ldots, x_L, x_{L+1}\pm iy_{L+1}, \ldots, x_{L+M}\pm iy_{L+M}) \in \mathbb{D}^{L+2M} :\ -1 < x_1 < x_2 < \ldots < x_L < 1,\ -1 < x_{L+1} < \ldots < x_{L+M} < 1,\ y_{L+1} > 0, \ldots, y_{L+M} > 0\big\}. \tag{6.6}$$
Assume we are given a function $f : \mathbb{C}^n \to \mathbb{C}$ that is non-negative on $\mathcal{D}$ and invariant under permutations of its $n$ arguments. Let us define $f(z_1, \ldots, z_n)\,|dz_1\wedge dz_2\wedge\ldots\wedge dz_n|$ to be the distribution on $\mathcal{D} \subset \mathbb{D}^n$ determined by
$$\int_{\mathbb{D}^n} f(z_1, \ldots, z_n)\,|dz_1\wedge dz_2\wedge\ldots\wedge dz_n| := \sum_{L+2M=n} 2^M \int_{\mathcal{D}_{L,M}} f(x_1, \ldots, x_L, x_{L+1}\pm iy_{L+1}, \ldots, x_{L+M}\pm iy_{L+M})\ dx_1 \ldots dx_{L+M}\, dy_{L+1} \ldots dy_{L+M}.$$
Informally, one should think of each of the summands on the right-hand side as the distribution conditional on the event that the matrix has $L$ real eigenvalues and $M$ complex-conjugate pairs with $L+2M=n$. Each of these distributions is obtained by plugging $dz_k = dx_k + i\,dy_k$ into the left-hand side, taking the wedge product while keeping track of which of the eigenvalues are real or complex, and then taking the absolute value of the constant. Thus for each real eigenvalue $z_k$, $dz_k$ becomes just the real Lebesgue measure $dx_k$ on the real line, and for a complex-conjugate pair $z_k, \bar z_k$ we get $|(dx_k + i\,dy_k)\wedge(dx_k - i\,dy_k)| = 2\,dx_k\,dy_k$. This corresponds to the factor $2^M$ on the right-hand side of the definition. A more formal and careful discussion of these measures can be found, e.g., in the Borodin–Sinclair paper [2, Sects 2–3].

Theorem 6.4.
Given $\beta > 0$, $a, b > -1$, let the coefficients $\alpha_k$, $0 \le k \le n-1$, be distributed as follows:
$$\alpha_k \sim \begin{cases} \mathcal{B}\big(\tfrac{\beta k}{4} + a + 1,\ \tfrac{\beta k}{4} + b + 1\big), & \text{if } k \text{ is even},\\[2pt] \mathcal{B}\big(\tfrac{\beta(k-1)}{4} + a + b + 2,\ \tfrac{\beta(k+1)}{4}\big), & \text{if } k \text{ is odd}. \end{cases} \tag{6.7}$$
Then the eigenvalues of the CMV matrix $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})$ are distributed in $\mathcal{D}$ with the joint law
$$P_n \prod_{j,k=1}^{n}(1-z_j\bar z_k)^{\frac{\beta}{4}-\frac12} \prod_{j=1}^{n}(1-z_j)^{a+\frac12-\frac{\beta}{4}}(1+z_j)^{b+\frac12-\frac{\beta}{4}} \prod_{j<k}|z_j - z_k|\ |dz_1\wedge\ldots\wedge dz_n|. \tag{6.8}$$

Remarks. 1. From our previous discussion it is evident that if $n$ is odd, then such a random matrix is unitarily equivalent to an $(n+1)\times(n+1)$ matrix from the orthogonal β-ensemble in Proposition 4.5(a) with the first row and column removed (for any $0 < \beta < \infty$ and $a, b > -1$). If, in addition, $a = b = \beta/4 - 1$, then it is also unitarily equivalent to an $(n+1)\times(n+1)$ matrix from the ensemble in Proposition 4.5(b) with the first row and column removed (for any $0 < \beta < \infty$). If $n$ is even and $a = b = \beta/4 - 1$, then it is unitarily equivalent to an $(n+1)\times(n+1)$ matrix from the ensembles in Proposition 4.5(c) and (d) with the first rows and columns removed.
2. When $\beta = 2$, $a = b = -1/2$, such a random matrix is unitarily equivalent to a matrix from $O(n+1)$, or $SO(n+1)$, or $O(n+1)\setminus SO(n+1)$ with any row and column removed, cf. Proposition 5.2(b).

We follow the same strategy as for the circular ensembles. As before, let $\kappa^{(k)}_l$ be defined by (6.2). In the next lemma we compute the Jacobian determinant of the transformation from the $z_j$'s to the $\kappa^{(n)}_j$'s on each of the sets $\mathcal{D}_{L,M}$.

Lemma 6.5. Suppose $z_1, \ldots, z_n$ are in $\mathcal{D}_{L,M}$, $L+2M=n$, see (6.6). On this set we have that the mapping $(z_1, \ldots, z_n) \mapsto (\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)$ is injective, and
$$\Bigg| J\Bigg[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(x_1, \ldots, x_L, x_{L+1}, y_{L+1}, x_{L+2}, y_{L+2}, \ldots, x_{L+M}, y_{L+M})}\Bigg] \Bigg| = 2^M\,|\Delta(z_1, \ldots, z_n)|.$$

Remark.
In other words, the above lemma says that the following change of variables is legitimate:
$$f(\ldots)\ d\kappa^{(n)}_1 \cdots d\kappa^{(n)}_n = f(\ldots)\,|\Delta(z_1, \ldots, z_n)|\ |dz_1\wedge dz_2\wedge\ldots\wedge dz_n|.$$

Proof. Note that the above change of variables can be viewed as a composition of two changes of variables:
$$(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n) \mapsto (x_1, \ldots, x_L, x_{L+1}\pm iy_{L+1}, \ldots, x_{L+M}\pm iy_{L+M}) \mapsto (x_1, \ldots, x_L, x_{L+1}, y_{L+1}, x_{L+2}, y_{L+2}, \ldots, x_{L+M}, y_{L+M}).$$
But the (complex) Jacobian determinant for the first map is $\Delta(z_1, \ldots, z_n)$ by Lemma D.2(a), and the (complex) Jacobian for the second map is easily seen to be $2^M$ in absolute value. ∎

Lemma 6.6. The Jacobian determinant of the $\mathbb{R}^n \to \mathbb{R}^n$ change of variables $(\alpha_j)_{j=0}^{n-1} \mapsto (\kappa^{(n)}_j)_{j=1}^{n}$ is
$$J\Bigg[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(\alpha_0, \ldots, \alpha_{n-1})}\Bigg] = (-1)^n \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{even}}} (1-\alpha_k^2)^{k/2} \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{odd}}} (1-\alpha_k)(1-\alpha_k^2)^{(k-1)/2}.$$

Proof. Using the recurrence (6.4), it is easy to see that $J\Big[\frac{\partial(\kappa^{(n)}_1, \ldots, \kappa^{(n)}_n)}{\partial(\kappa^{(n-1)}_1, \ldots, \kappa^{(n-1)}_{n-1}, \alpha_{n-1})}\Big]$ is given by the determinant (6.9) of the $n\times n$ real matrix of the transformation (6.4) (all variables now being real): its first $n-1$ columns contain the identity entries together with the coupling entries $-\alpha_{n-1}$, and the last column contains the entries $-\kappa^{(n-1)}_j$. If $n$ is even, then the $(n,n)$-entry of the latter matrix is $1-\alpha_{n-1}$. We stress that this is an $n\times n$ matrix, in contrast to (6.5), which was $2n\times 2n$. Expansion along the last column allows one to compute (6.9), and then induction finishes the proof. ∎

Now we are ready to prove Theorem 6.4.

Proof of Theorem 6.4. Starting with the joint probability density (6.7) of the $\alpha_k$'s and performing the two changes of variables from the previous two lemmas, we obtain
$$P_n \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{even}}} (1-\alpha_k)^{\frac{\beta k}{4}+a-\frac{k}{2}}(1+\alpha_k)^{\frac{\beta k}{4}+b-\frac{k}{2}} \times \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{odd}}} (1-\alpha_k)^{\frac{\beta(k-1)}{4}+a+b+1-\frac{k+1}{2}}(1+\alpha_k)^{\frac{\beta(k+1)}{4}-1-\frac{k-1}{2}} \times |\Delta(z_1, \ldots, z_n)|\ |dz_1\wedge\ldots\wedge dz_n|,$$
where the constant
$$P_n = \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{even}}} \frac{\Gamma(\frac{\beta k}{2}+a+b+2)}{2^{\frac{\beta k}{2}+a+b+1}\,\Gamma(\frac{\beta k}{4}+a+1)\,\Gamma(\frac{\beta k}{4}+b+1)} \times \prod_{\substack{0\le k\le n-1\\ k\ \mathrm{odd}}} \frac{\Gamma(\frac{\beta k}{2}+a+b+2)}{2^{\frac{\beta k}{2}+a+b+1}\,\Gamma(\frac{\beta(k-1)}{4}+a+b+2)\,\Gamma(\frac{\beta(k+1)}{4})}$$
can easily be seen to be the constant appearing in (6.8). After some simplification, the distribution becomes
$$P_n \prod_{k=0}^{n-1}(1-\alpha_k^2)^{\frac{(k+1)(\beta-2)}{4}} \prod_{k=0}^{n-1}(1-\alpha_k)^{a-\frac{\beta}{4}+\frac12} \prod_{k=0}^{n-1}\big(1+(-1)^k\alpha_k\big)^{b-\frac{\beta}{4}+\frac12} \times |\Delta(z_1, \ldots, z_n)|\ |dz_1\wedge\ldots\wedge dz_n|.$$
Now, an application of Lemma B.1(i) and (vi) finishes the proof. ∎

7. Matrix Models and Resonance Distribution for Systems with Non-ideal Coupling

As argued by Fyodorov–Sommers in [15] (see also [12, 16]), the eigenvalues of the truncations represent resonances for the open systems in the special case of ideal coupling. They proposed a more general model (for $CUE$), which in the case of one open channel takes the form
$$V = U\,\mathrm{diag}\big(\sqrt{1-T_a}, 1, 1, \ldots, 1\big). \tag{7.1}$$
Here $U$ is a unitary random matrix, and $0 \le T_a \le 1$ is the transmission coefficient: $T_a = 0$ corresponds to the closed system, and $T_a = 1$ to the open system with the ideal coupling. Clearly the eigenvalues corresponding to $T_a = 0$ and $T_a = 1$ (ignoring the zero eigenvalue) coincide with the eigenvalues of the unperturbed unitary random matrix and of its truncation, respectively. In fact, instead of the transmission coefficient it will be more convenient for us to work with the reflection coefficient $R_a := \sqrt{1-T_a}$.

Our method of reducing to the CMV form applies here as well. Indeed, let $U$ be $n\times n$ and its CMV form be $U = W\,\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})\,W^\dagger$ with $We_1 = W^\dagger e_1 = e_1$. Then $V = W\,\mathcal{C}(\alpha_0, \ldots, \alpha_{n-1})\,\mathrm{diag}(R_a, 1, 1, \ldots, 1)\,W^\dagger$. Using the idea in Lemma 5.1, one then obtains that $V$ is unitarily equivalent to $\mathcal{C}(-\bar\alpha_{n-2}\alpha_{n-1}, -\bar\alpha_{n-3}\alpha_{n-1}, \ldots, -\bar\alpha_0\alpha_{n-1}, R_a\alpha_{n-1})^T$ if $n$ is odd and to $\mathcal{C}(-\bar\alpha_{n-2}\alpha_{n-1}, -\bar\alpha_{n-3}\alpha_{n-1}, \ldots$
$, -\bar\alpha_0\alpha_{n-1}, R_a\alpha_{n-1})$ if $n$ is even. Combining this with the CMV models we developed earlier, we immediately obtain the following proposition.

Proposition 7.1. Let $0 \le R_a \le 1$ be independent of the random matrix $U$.
(a) The eigenvalue distribution of (7.1) with $U \in COE(n)$, $CUE(n)$, $CSE(n)$ coincides with the eigenvalue distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-2}, R_a e^{i\phi})$ with independent coefficients
$$\alpha_k \sim \Theta(\beta(k+1)+1), \quad 0 \le k \le n-2, \qquad e^{i\phi} \sim \Theta(1), \tag{7.2}$$
where $\beta = 1, 2, 4$ for $COE$, $CUE$, $CSE$, respectively.
(b) The eigenvalue distribution of (7.1) with $U \in O(n)$, or $SO(n)$, or $O(n)\setminus SO(n)$ coincides with the distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-2}, R_a\sigma)$ with independent coefficients $\alpha_k \sim \mathcal{B}\big(\tfrac{k+1}{2}, \tfrac{k+1}{2}\big)$, $0 \le k \le n-2$, and $\sigma \sim \mathcal{B}(0,0)$ for $O(n)$; $\sigma = -1$ for $SO(2k)$ or $O(2k+1)\setminus SO(2k+1)$; and $\sigma = 1$ for $SO(2k+1)$ or $O(2k)\setminus SO(2k)$.
(c) The eigenvalue distribution of (7.1) with $U \in USp(n)$ coincides with the distribution of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-2}, R_a q)$ with independent $2\times2$ matrix coefficients $\alpha_k \sim \Upsilon(4k+7)$, $0 \le k \le n-2$, and $q \sim \Upsilon(3)$.

Remarks. 1. As usual, $CSE$ has double multiplicity at each of the eigenvalues of (7.2).
2. When $R_a = 0$ these CMV models coincide with the models in Section 5 (with $n$ instead of the $n+1$ that we had earlier), and when $R_a = 1$ they become the (reversed) models from Section 4 (the reversed models, however, do not have the spectral measures preserved but only the eigenvalues, so they are less natural).

For (a) and (b) we can compute the eigenvalue distributions.

Proposition 7.2. (a) Let $0 < \beta < \infty$ and $\alpha_k \sim \Theta(\beta(k+1)+1)$, $0 \le k \le n-2$, $e^{i\phi} \sim \Theta(1)$. Assume the reflection coefficient $R_a$ is independent of $\alpha_0, \ldots, \alpha_{n-2}, \phi$ with a distribution on $(0,1)$ of the form
$$F(R_a)\,\beta n\,(1-R_a^2)^{\frac{\beta n}{2}-1} R_a\ dR_a, \tag{7.3}$$
where $F$ is any function that makes (7.3) a probability distribution. Then the eigenvalues $z_1, \ldots, z_n$ of $\mathcal{C}(\alpha_0, \ldots$
$, \alpha_{n-2}, R_a e^{i\phi})$ are distributed in $\mathbb{D}^n$ with the joint distribution
$$\frac{\beta^n}{(2\pi)^n} \prod_{j,k=1}^{n}(1-z_j\bar z_k)^{\frac{\beta}{2}-1} \prod_{j<k}|z_j - z_k|^2\ F(|z_1\cdots z_n|)\ d^2z_1\cdots d^2z_n. \tag{7.4}$$
(b) Let $0 < \beta < \infty$ and $a, b > -1$, let $\alpha_k$, $0 \le k \le n-2$, be distributed according to (6.7), and let $\sigma = \pm 1$ be as in Proposition 7.1(b). Assume $R_a$ is independent of the $\alpha_k$'s with a distribution on $(0,1)$ of the form
$$G(R_a)\,h_{n-1}(R_a)\ dR_a, \tag{7.5}$$
where $h_{n-1}$ is the density of $|\alpha_{n-1}|$ with $\alpha_{n-1}$ distributed according to (6.7), and $G$ is any function that makes (7.5) a probability distribution. Then the eigenvalues of $\mathcal{C}(\alpha_0, \ldots, \alpha_{n-2}, R_a\sigma)$ are distributed in $\mathcal{D}$ with the joint law (6.8) times the extra factor $G(|z_1\cdots z_n|)$:
$$P_n\,G(|z_1\cdots z_n|) \prod_{j,k=1}^{n}(1-z_j\bar z_k)^{\frac{\beta}{4}-\frac12} \prod_{j=1}^{n}(1-z_j)^{a+\frac12-\frac{\beta}{4}}(1+z_j)^{b+\frac12-\frac{\beta}{4}} \prod_{j<k}|z_j - z_k|\ |dz_1\wedge\ldots\wedge dz_n|. \tag{7.6}$$

Remarks. 1. When $\beta = 1, 2, 4$, (7.4) is the eigenvalue distribution of (7.1) with $U \in COE(n)$, $CUE(n)$, $CSE(n)$, respectively. Also, for any $0 < \beta < \infty$, (7.4) is the eigenvalue distribution of (7.1) with $U$ a matrix from the circular β-ensemble.
2. When $\beta = 2$ in (b), (7.6) is the eigenvalue distribution of (7.1) with $U \in O(n)$. The analogue of Theorem 6.4 holds for general $a, b > -1$, not just $a = b = \beta/4 - 1$.
3. Choosing $F \equiv G \equiv 1$, the system with random $R_a$ behaves as a system one size larger with ideal coupling. One should also recognize that (7.3) and (7.5) with $F \equiv G \equiv 1$ are the distributions of $|\alpha_{n-1}|$ in Theorem 6.1 and Theorem 6.4, respectively. Ultimately, this is a manifestation of the statistical independence of the Verblunsky coefficients.

Proof. The result is obvious from the arguments in Section 6. The only additional fact we need to use is that $R_a = |\alpha_{n-1}| = \big|\prod_{j=1}^n z_j\big|$, see Lemma B.1(i). ∎

8. Log-gas Interpretation

In this section, we present an interpretation of the measure (6.1) as the Gibbs measure for a configuration of charges in a certain geometry of dielectrics. What follows was very much inspired by [8, Sect 15.8–15.9]. Consider a system of $n$ particles of equal charge $+1$ that are confined to lie in the unit disk $\mathbb{D}$ of the complex plane. The unit disk $\mathbb{D}$ is filled with a homogeneous (linear) dielectric of permittivity $\varepsilon_1$, while the rest of space $\mathbb{C}\setminus\mathbb{D}$ is filled with a homogeneous (linear) dielectric of permittivity $\varepsilon_2$. As usual, one may regard two-dimensional point charges as representing parallel lines of charge in a three-dimensional setting. Recall (cf.
[25, §4.4]) that the potential $V(z|z_0)$ of a unit charge at $z_0$ in the presence of dielectric media is the solution to $-\nabla_z\cdot[\varepsilon(z)\nabla_z V(z|z_0)] = \delta(z-z_0)$. For the configuration described above, and $z_0 \in \mathbb{D}$, the solution is
$$V(z|z_0) = \begin{cases} -\dfrac{1}{2\pi\varepsilon_1}\log|z-z_0| - \dfrac{1}{2\pi\varepsilon_1}\,\dfrac{\varepsilon_2-\varepsilon_1}{\varepsilon_2+\varepsilon_1}\,\log|1-z\bar z_0|, & z \in \mathbb{D},\\[6pt] -\dfrac{1}{\pi(\varepsilon_1+\varepsilon_2)}\log|z-z_0|, & z \in \mathbb{C}\setminus\mathbb{D}. \end{cases}$$
From this, one finds (cf. Problems 3 and 4 in [25, §4.4]) that a single charge at $z \in \mathbb{D}$ feels a force
$$F = -\frac{1}{2\pi\varepsilon_1}\,\frac{\varepsilon_2-\varepsilon_1}{\varepsilon_2+\varepsilon_1}\,\frac{z}{1-|z|^2},$$
which has potential
$$W(z) = -\frac{1}{4\pi\varepsilon_1}\,\frac{\varepsilon_2-\varepsilon_1}{\varepsilon_2+\varepsilon_1}\,\log\big[1-|z|^2\big].$$
This force pushes the charge toward the origin when $\varepsilon_2 > \varepsilon_1$ and toward the boundary when $\varepsilon_2 < \varepsilon_1$. The total energy of the $n$ charges located at the points $\{z_j\}_{j=1}^n \subseteq \mathbb{D}$ is then
$$H = \sum_j W(z_j) + \tfrac12\sum_{j\neq k} V(z_k|z_j),$$
and consequently, the associated Gibbs measure at temperature $T$ has density
$$e^{-H/(k_BT)} = \prod_{j,k=1}^{n}(1-z_j\bar z_k)^{\eta\gamma/2} \prod_{j<k}|z_j - z_k|^{\gamma}, \qquad \gamma := \frac{1}{2\pi\varepsilon_1 k_B T},\quad \eta := \frac{\varepsilon_2-\varepsilon_1}{\varepsilon_2+\varepsilon_1},$$
which coincides with (6.1) (up to normalization) when $\gamma = 2$ and $\eta = \frac{\beta}{2}-1$.

9. The Symmetric CMV Form

As we mentioned in Subsection 2.2, in the case of a system with time-reversal invariance ($COE$ and $CSE$), reducing its scattering matrix $U$ to the CMV form is not natural, since this breaks the initial symmetry. In this section we introduce a symmetric analogue of the CMV form which solves this issue. We will call these matrices (9.1) "symmetric CMV". In what follows, let $\mathrm{Re}\,T = \frac12(T+T^\dagger)$ and $\mathrm{Im}\,T = \frac{1}{2i}(T-T^\dagger)$.

Proposition 9.1. Let $U$ be a unitary operator on $\ell^2(\mathbb{Z}_+)$ with cyclic vector $e_1$ and spectral measure (2.3). Then applying the Gram–Schmidt orthonormalization procedure to
$$e_1,\ [\mathrm{Im}\,U]e_1,\ [\mathrm{Re}\,U]e_1,\ [\mathrm{Im}(U^2)]e_1,\ [\mathrm{Re}(U^2)]e_1,\ \ldots$$
produces a basis $\{s_k\}$ in which $U$ has the form
$$\mathcal{S}(\alpha_0, \alpha_1, \ldots) := \mathcal{N}\mathcal{L}\mathcal{N}^T, \tag{9.1}$$
where $\mathcal{L} = \mathcal{L}^T$ is as in (2.2), and $\mathcal{N}$ is the unitary matrix
$$\mathcal{N} = \mathrm{diag}\big([1], \Psi_1, \Psi_3, \ldots\big), \tag{9.2}$$
$$\Psi_k = \frac{1}{\sqrt{2(1-\mathrm{Re}\,\alpha_k)}}\begin{pmatrix} i(1-\bar\alpha_k) & -i\rho_k\\ \rho_k & 1-\alpha_k \end{pmatrix}. \tag{9.3}$$
The matrix $\mathcal{S}$ is 7-diagonal and symmetric, $\mathcal{S}^T = \mathcal{S}$. In fact, $U = R\,\mathcal{S}\,R^\dagger$ with $R^\dagger R = RR^\dagger = I$, $Re_1 = R^\dagger e_1 = e_1$.
Moreover, if $U^T = U$, then $R$ is orthogonal.

Proof. By the spectral theorem (cf. (2.4)), we may conflate $U$ with the operator $f(z) \mapsto zf(z)$ in $L^2(d\mu)$ and $e_1$ with the constant function $1$. In this way, we identify the basis $\{s_k\}_{k=0}^\infty$ with the functions formed by orthonormalizing the sequence $\{1, \sin\theta, \cos\theta, \sin2\theta, \cos2\theta, \ldots\}$ in $L^2(\mu)$. Here and below we write $z = e^{i\theta}$. Following the notation from [30, Section 4.2], let us write $\{\chi_k\}_{k=0}^\infty$ and $\{x_k\}_{k=0}^\infty$ for the orthonormal bases formed by orthonormalizing $\{1, z, z^{-1}, z^2, z^{-2}, \ldots\}$ and $\{1, z^{-1}, z, z^{-2}, z^2, \ldots\}$, respectively, in $L^2(d\mu)$. Recall that the traditional CMV matrix $\mathcal{C} = \mathcal{L}\mathcal{M}$ represents $U$ in the $\{\chi_k\}$ basis and that $\mathcal{M}$ represents the change of basis from $\{\chi_k\}$ to $\{x_k\}$; specifically, $\mathcal{M}_{jk} = \langle x_j, \chi_k\rangle$. It is elementary to verify that $\mathcal{N}^T\mathcal{N} = \mathcal{M}$ and consequently, $\mathcal{N}\mathcal{C}\mathcal{N}^{-1} = \mathcal{N}\mathcal{L}\mathcal{M}\mathcal{N}^{-1} = \mathcal{N}\mathcal{L}\mathcal{N}^T$. Thus, the key assertion to be proved is that $\mathcal{N}$ represents the change of basis from $\{\chi_k\}$ to $\{s_k\}$. To this end, let us first observe that
$$\chi_{2n-1}(z) = z^{1-n}\varphi_{2n-1}(z) = \tfrac{1}{\rho_0\rho_1\cdots\rho_{2n-2}}\big[z^n + \cdots + 0\cdot z^{-n}\big],$$
$$\chi_{2n}(z) = z^{-n}\varphi_{2n}^*(z) = \tfrac{1}{\rho_0\rho_1\cdots\rho_{2n-1}}\big[-\alpha_{2n-1}z^n + \cdots + z^{-n}\big].$$
Here the ellipsis indicates terms in the linear span of $\{z^{n-1}, \ldots, z^{-n+1}\}$; we will continue this convention for the remainder of the proof. Proceeding directly from the definition of the Gram–Schmidt process, we have
$$s_{2n-1}(z) = c\,\frac{z^n - z^{-n}}{2i} + \cdots + 0\cdot\frac{z^n + z^{-n}}{2}$$
for some positive constant $c$. Correspondingly, there is a $c' > 0$ such that
$$s_{2n-1}(z) = \frac{c'}{\sqrt{2(1-\mathrm{Re}\,\alpha_{2n-1})}}\big[i\rho_{2n-1}\chi_{2n}(z) - i(1-\alpha_{2n-1})\chi_{2n-1}(z)\big] + \cdots.$$
In fact, $c' = 1$ and the ellipsis terms vanish. To see this, we observe that $s_{2n-1}$, $\chi_{2n}$, and $\chi_{2n-1}$ are all unit vectors and are all perpendicular to elements in the span of $\{z^{n-1}, \ldots, z^{-n+1}\}$.
This confirms all odd-indexed columns of $\mathcal{N}$. To verify the even-indexed columns (after the zeroth, which is trivial), we first observe that there are some $\gamma > 0$ and $\delta \in \mathbb{C}$ so that
$$s_{2n}(z) = \delta\,\frac{z^n - z^{-n}}{2i} + \cdots + \gamma\,\frac{z^n + z^{-n}}{2}.$$
Noting also that
$$(1-\bar\alpha_{2n-1})\chi_{2n}(z) + \rho_{2n-1}\chi_{2n-1}(z) = \tfrac{1}{\rho_0\cdots\rho_{2n-1}}\Big[2(1-\mathrm{Re}\,\alpha_{2n-1})\,\frac{z^n+z^{-n}}{2} + 2(\mathrm{Im}\,\alpha_{2n-1})\,\frac{z^n-z^{-n}}{2i}\Big] + \cdots,$$
we deduce that
$$s_{2n}(z) = \frac{\gamma'}{\sqrt{2(1-\mathrm{Re}\,\alpha_{2n-1})}}\big[(1-\bar\alpha_{2n-1})\chi_{2n}(z) + \rho_{2n-1}\chi_{2n-1}(z)\big] + \delta'\,s_{2n-1}(z) + \cdots$$
for some $\gamma' > 0$ and $\delta' \in \mathbb{C}$. Again the ellipsis terms vanish because they are perpendicular to everything else that appears in this formula. To determine $\gamma'$ and $\delta'$, we first exploit that $s_{2n}$ is perpendicular to $s_{2n-1}$. Using the expansion of $s_{2n-1}$ in terms of $\{\chi_k\}$ proved above, this shows that $\delta' = 0$. Lastly, the fact that $s_{2n}$ is a unit vector confirms $\gamma' = 1$. This justifies the even-indexed columns of $\mathcal{N}$, and so completes the proof of (9.1). The symmetry and 7-diagonal properties of $\mathcal{S}$ are obvious from the construction. Finally, suppose $U^T = U$. Note that the columns of $R$ are the vectors obtained from the orthonormalization of $e_1, [\mathrm{Im}\,U]e_1, [\mathrm{Re}\,U]e_1, [\mathrm{Im}(U^2)]e_1, [\mathrm{Re}(U^2)]e_1, \ldots$ But $U^\dagger = \bar U$ (entrywise complex conjugation), which means that each of $\mathrm{Re}(U^m)$ and $\mathrm{Im}(U^m)$ is a real matrix for any $m$. This implies that $R$ is also real. ∎

Remarks. 1. Similar arguments show that, unless $n = 2k$ and $\det U = -1$ (equivalently, $\alpha_{n-1} = 1$), an $n\times n$ unitary matrix $U$ with $e_1$ cyclic can be reduced to the "symmetric CMV" form $\mathcal{S}(\alpha_0, \ldots, \alpha_{n-1}) = \mathcal{N}\mathcal{L}\mathcal{N}^T$, where $\mathcal{N} = \mathrm{diag}([1], \Psi_1, \Psi_3, \ldots, \Psi_{n-2})$, $\mathcal{L} = \mathrm{diag}(\Xi_0, \Xi_2, \ldots, \Xi_{n-1})$ if $n = 2k+1$, and $\mathcal{N} = \mathrm{diag}([1], \Psi_1, \Psi_3, \ldots, \Psi_{n-1})$, $\mathcal{L} = \mathrm{diag}(\Xi_0, \Xi_2, \ldots, \Xi_{n-2})$ if $n = 2k$.
As before, $\Xi_{n-1} = [\bar\alpha_{n-1}]$, while $\Psi_{n-1} = \big[\sqrt{\bar\alpha_{n-1}}\big]$, where the square root corresponds to the argument $\mathrm{Arg} \in [0, 2\pi)$ with the branch cut along $\mathbb{R}_+$ (this is just the $(1,1)$-entry of (9.3) when $|\alpha_{n-1}| = 1$).
2. Similarly, unless $n = 2k$ and $\det U = 1$, one can instead work out an alternative symmetric CMV form $\widetilde{\mathcal{S}}$ obtained by orthonormalizing the sequence $\{e_1, [\mathrm{Re}\,U]e_1, [\mathrm{Im}\,U]e_1, [\mathrm{Re}(U^2)]e_1, [\mathrm{Im}(U^2)]e_1, \ldots\}$. In this case we obtain the matrix $\widetilde{\mathcal{S}}(\alpha_0, \alpha_1, \ldots) := \widetilde{\mathcal{N}}\mathcal{L}\widetilde{\mathcal{N}}^T$, where $\mathcal{L}$ is as in (2.2), and
$$\widetilde{\mathcal{N}} = \mathrm{diag}\big([1], \widetilde\Psi_1, \widetilde\Psi_3, \ldots\big), \qquad \widetilde\Psi_k = \frac{1}{\sqrt{2(1+\mathrm{Re}\,\alpha_k)}}\begin{pmatrix} 1+\bar\alpha_k & \rho_k\\ i\rho_k & -i(1+\alpha_k) \end{pmatrix}.$$
Note that $\det U = -1$ for $U \in COE(n)$ is a probability $0$ event. Therefore, with probability 1, any matrix from $COE(n)$ can be orthogonally reduced to the symmetric CMV form, while fixing $e_1$ and preserving the spectral measure. Moreover, matrices with different spectral measures lead to different sets of Verblunsky coefficients, and therefore to different symmetric CMV forms.

The infinite unitary CMV matrix with $\alpha_n \equiv 0$ is the bilateral shift (after reordering the basis elements). The symmetric CMV form of this operator is
$$\begin{pmatrix} 0 & \frac{i}{\sqrt2} & \frac{1}{\sqrt2} & 0 & 0 & \cdots\\ \frac{i}{\sqrt2} & 0 & 0 & \frac12 & -\frac{i}{2} & \cdots\\ \frac{1}{\sqrt2} & 0 & 0 & \frac{i}{2} & \frac12 & \cdots\\ 0 & \frac12 & \frac{i}{2} & 0 & 0 & \ddots\\ 0 & -\frac{i}{2} & \frac12 & 0 & 0 & \ddots\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots \end{pmatrix}.$$
Similar arguments lead to the analogue of Proposition 9.1 for $CSE(n)$. The corresponding (complex quaternionic) symmetric CMV form is just $\mathcal{S}\otimes I_2$ (with $\mathcal{S}$ as above), and the reducing matrix $R$ is in $USp(n)$.

Appendix A. Quaternions

Let us carefully define the algebras of real quaternions $\mathbb{H}_{\mathbb{R}}$ and complex quaternions $\mathbb{H}_{\mathbb{C}}$, as well as discuss quaternionic matrices.

A.1. Real and complex quaternions. In modern language, the real quaternions were introduced by Hamilton as the unital associative algebra $\mathbb{H}_{\mathbb{R}}$ over $\mathbb{R}$ generated by $1, \mathbf{i}, \mathbf{j}, \mathbf{k}$ with (non-commutative) multiplication defined by $\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{ijk} = -1$. We identify the underlying vector space with $\mathbb{R}^4$ by using $1, \mathbf{i}, \mathbf{k}, \mathbf{j}$ as the standard basis.
Note the deliberate reordering! This makes for a prettier regular representation $q \mapsto L(q)$: let $q = a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j} \in \mathbb{H}_{\mathbb{R}}$; then $q(w + x\mathbf{i} + y\mathbf{k} + z\mathbf{j}) = w' + x'\mathbf{i} + y'\mathbf{k} + z'\mathbf{j}$ if and only if
$$\begin{pmatrix} w'\\ x'\\ y'\\ z' \end{pmatrix} = L(q)\begin{pmatrix} w\\ x\\ y\\ z \end{pmatrix} \quad\text{with}\quad L(q) := \begin{pmatrix} a & -b & -c & -d\\ b & a & d & -c\\ c & -d & a & b\\ d & c & -b & a \end{pmatrix}. \tag{A.1}$$
The $2\times2$ complex representation is given by
$$a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j} \mapsto \begin{pmatrix} a+ib & -c+id\\ c+id & a-ib \end{pmatrix} =: C(a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j}), \tag{A.2}$$
where $a, b, c, d \in \mathbb{R}$. In particular, this shows that the group of unimodular real quaternions (i.e., $q \in \mathbb{H}_{\mathbb{R}}$ with $a^2+b^2+c^2+d^2 = 1$) is the special unitary group $SU(2)$. The analogue of (A.1) for multiplication on the right is as follows:
$$R(a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j}) := \begin{pmatrix} a & -b & -c & -d\\ b & a & -d & c\\ c & d & a & -b\\ d & -c & b & a \end{pmatrix},$$
which is to say
$$(w + x\mathbf{i} + y\mathbf{k} + z\mathbf{j})\,q = w' + x'\mathbf{i} + y'\mathbf{k} + z'\mathbf{j} \iff \begin{pmatrix} w'\\ x'\\ y'\\ z' \end{pmatrix} = R(q)\begin{pmatrix} w\\ x\\ y\\ z \end{pmatrix}.$$
Note that $q \mapsto R(q)$ is not a representation (i.e., a homomorphism) of $\mathbb{H}_{\mathbb{R}}$, but rather of the reversed algebra. Needless to say, all matrices $R(q)$, $q \in \mathbb{H}_{\mathbb{R}}$, commute with all $L(p)$, $p \in \mathbb{H}_{\mathbb{R}}$. Indeed, the general theory of matrix algebras (cf. [34, Theorem 3.4.A]) guarantees that every matrix that commutes with all $L(p)$, $p \in \mathbb{H}_{\mathbb{R}}$, must be of the form $R(q)$ for some $q \in \mathbb{H}_{\mathbb{R}}$. The complex quaternions $\mathbb{H}_{\mathbb{C}}$ are defined in the exact same way as $\mathbb{H}_{\mathbb{R}}$ but over $\mathbb{C}$ instead of $\mathbb{R}$. For $q = a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j} \in \mathbb{H}_{\mathbb{C}}$ (now $a, b, c, d \in \mathbb{C}$), we will be using the representation $C(q)$ defined by the same expression (A.2). Note that $C(\mathbb{H}_{\mathbb{R}})$ consists of the $2\times2$ complex matrices $(\alpha_{ij})_{i,j=1}^2$ with $\alpha_{22} = \bar\alpha_{11}$ and $\alpha_{21} = -\bar\alpha_{12}$, while $C(\mathbb{H}_{\mathbb{C}})$ consists of all $2\times2$ complex matrices. For $q = a + b\mathbf{i} + c\mathbf{k} + d\mathbf{j}$ let us define its conjugate by $\bar q = a - b\mathbf{i} - c\mathbf{k} - d\mathbf{j}$, and its Hermitian conjugate by $q^\dagger = \bar a - \bar b\mathbf{i} - \bar c\mathbf{k} - \bar d\mathbf{j}$. Of course $\bar q = q^\dagger$ for $q \in \mathbb{H}_{\mathbb{R}}$.

A.2. Matrices of quaternions.
For the rest of the section, let Q be an n×n matrix of (real or complex) quaternions, Q = [q_{ij}]ⁿ_{i,j=1}. We define C(Q) as the 2n×2n matrix formed by replacing each quaternion entry by the corresponding 2×2 matrix representation: C(Q) = [C(q_{ij})]ⁿ_{i,j=1}. Let us also define Q† = [q†_{ji}]ⁿ_{i,j=1}. It is easy to see that C(Q†) = C(Q)†, the usual Hermitian conjugate of complex matrices.

For any 2n×2n complex matrix M we define its (time reversal) dual matrix by

    M^R := Zᵀ Mᵀ Z,  where  Z := I_n ⊗ [ 0  −1
                                          1   0 ],   (A.3)

the 2n×2n block diagonal matrix with n copies of [ 0 −1 ; 1 0 ] along the diagonal.

For a quaternionic matrix Q let us define Q^R = C⁻¹(C(Q)^R). It is easy to verify that if Q is complex quaternionic, then Q^R = [q̄_{ji}]ⁿ_{i,j=1}. If Q is a real quaternionic matrix, then Q^R and Q† coincide, which implies

    Zᵀ C(Q)ᵀ Z = C(Q)†.   (A.4)

We say that a (complex or real) quaternionic matrix Q is self-dual if Q^R = Q. We say that a (complex or real) quaternionic matrix Q is unitary if C(Q) is unitary. Finally, by the eigenvalues of a quaternionic matrix Q we mean the eigenvalues of the complex matrix C(Q).

Appendix B. Basics of Orthogonal Polynomials on the Unit Circle

We will introduce some basics of the theory of orthogonal polynomials on the unit circle. For more details we refer the reader, e.g., to [30, 31]. Suppose we have a positive probability measure µ on the unit circle ∂D having at least n points in its support.
For the purposes of this paper, the typical example one should keep in mind is µ being the spectral measure of an n×n unitary matrix U with distinct eigenvalues (see Section 3). Applying the Gram–Schmidt procedure to the monomials {z^k}_{k=0}^{n−1} with respect to the inner product ⟨f, g⟩ := ∫_{∂D} \overline{f(e^{iθ})} g(e^{iθ}) dµ(θ) in L²(dµ), one obtains a sequence {Φ_k(z)}_{k=0}^{n−1} of monic polynomials orthogonal with respect to µ:

    ⟨Φ_j(z), Φ_k(z)⟩ = 0,  if j ≠ k.   (B.1)

The famous result of Szegő is that these orthogonal polynomials satisfy the recurrence relation

    Φ_{k+1}(z) = zΦ_k(z) − ᾱ_k Φ*_k(z)   (B.2)

for some sequence of complex coefficients {α_j}_{j=0}^{n−2} from D (referred to as Verblunsky coefficients), where for any polynomial P(z) of degree k we define P*(z) = z^k \overline{P(1/z̄)} (i.e., if P(z) = z^k + κ₁z^{k−1} + … + κ_{k−1}z + κ_k, then P*(z) = κ̄_k z^k + κ̄_{k−1}z^{k−1} + … + κ̄₁z + 1).

It should be noted that if µ has exactly n points in its support, then {z^j}_{j=0}^{n−1} form a basis, and therefore zⁿ is a linear combination of {z^j}_{j=0}^{n−1} in L²(dµ). However, the Gram–Schmidt procedure can still be applied to produce a polynomial Φ_n(z) (which is of norm 0 in L²(dµ), of course), and (B.2) holds with a unimodular coefficient α_{n−1}.

In fact, there is a one-to-one correspondence between all probability measures supported on n distinct points,

    µ(θ) = Σ_{j=1}^n µ_j δ_{e^{iθ_j}},   (B.3)

and sequences of Verblunsky coefficients {α_k}_{k=0}^{n−1} with α_k ∈ D for 0 ≤ k ≤ n−2 and α_{n−1} ∈ ∂D. So α₀, …, α_{n−1} can be viewed as a change of variables from e^{iθ_1}, …, e^{iθ_n}, µ₁, …, µ_n (subject to Σ_{j=1}^n µ_j = 1).
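The recurrence (B.2) is easy to run in code. The following sketch (function names are ours) builds the monic polynomials Φ_k from a random sequence of Verblunsky coefficients, with α_{n−1} placed on the unit circle as in the correspondence above, and verifies that each Φ_k is monic of degree k and that Φ_k(0) = −ᾱ_{k−1}, a fact recorded in Lemma B.1 below.

```python
import numpy as np

def star(P):
    """P*(z) = z^deg(P) * conj(P(1/zbar)): reverse and conjugate the
    coefficients.  Coefficients are stored lowest degree first."""
    return np.conj(P[::-1])

def monic_opuc(alphas):
    """Szego recurrence (B.2): Phi_{k+1}(z) = z Phi_k(z) - conj(alpha_k) Phi_k*(z).
    Returns the list Phi_0, ..., Phi_n of coefficient arrays."""
    Phi = [np.array([1.0 + 0j])]             # Phi_0 = 1
    for a in alphas:
        P = Phi[-1]
        zP = np.concatenate([[0], P])        # multiply by z
        Ps = np.concatenate([star(P), [0]])  # pad P* to the same length
        Phi.append(zP - np.conj(a) * Ps)
    return Phi

rng = np.random.default_rng(1)
n = 6
# alpha_0, ..., alpha_{n-2} in the open disk, alpha_{n-1} on the circle:
alphas = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / 4
alphas[-1] = np.exp(2j * np.pi * rng.random())

Phi = monic_opuc(alphas)
for k in range(1, n + 1):
    assert Phi[k].shape == (k + 1,) and np.isclose(Phi[k][-1], 1)  # monic of degree k
    assert np.isclose(Phi[k][0], -np.conj(alphas[k - 1]))          # Phi_k(0) = -conj(alpha_{k-1})
```

The second assertion is immediate from (B.2), since Φ*_k always has constant term 1.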
This was initially explored in [22] (see also Forrester–Rains [11]), and is developed further in our Section 4.

For each 0 ≤ k ≤ n−1, denote the orthonormal polynomials by φ_k(z) = ||Φ_k||⁻¹ Φ_k(z), where the norm is taken in L²(dµ). We summarize some of the properties of orthogonal polynomials that we will need in the next lemma.

Lemma B.1. (i) Let 1 ≤ k ≤ n and Φ_k(z) = ∏_{j=1}^k (z − z_j). Then

    Φ_k(0) = (−1)^k ∏_{j=1}^k z_j = −ᾱ_{k−1}.

If all α_j's are real, then

    Φ_k(1) = ∏_{j=1}^k (1 − z_j) = ∏_{j=0}^{k−1} (1 − α_j);
    (−1)^k Φ_k(−1) = ∏_{j=1}^k (1 + z_j) = ∏_{j=0}^{k−1} (1 + (−1)^j α_j).

(ii) For any 1 ≤ k ≤ n, ||Φ_k||² = ∏_{j=0}^{k−1} (1 − |α_j|²).

(iii) For any 1 ≤ k ≤ n−1, the Christoffel–Darboux formula holds:

    Σ_{j=0}^{k−1} φ_j(z) \overline{φ_j(ζ)} = [ φ*_k(z) \overline{φ*_k(ζ)} − φ_k(z) \overline{φ_k(ζ)} ] / (1 − z ζ̄).

(iv) The Cauchy formula holds:

    det[ 1/(1 − z_j z̄_s) ]_{1≤j,s≤k} = ∏_{j<s} |z_j − z_s|² / ∏_{j,s=1}^k (1 − z_j z̄_s).

(vi) With the zeros and Verblunsky coefficients as in (i),

    ∏_{j,s=1}^k (1 − z_j z̄_s) = ∏_{j=0}^{k−1} (1 − |α_j|²)^{j+1}.   (B.4)

Let us now prove (vi); an alternate proof of this can be found in [30, Thm 2.1.4]. By elementary row operations,

    |∆(z₁, …, z_k)|² = det( [z_s^{j−1}]_{1≤j,s≤k} [z̄_j^{s−1}]_{1≤j,s≤k} )
                     = det( [Φ_{s−1}(z_j)]_{1≤j,s≤k} [\overline{Φ_{j−1}(z_s)}]_{1≤j,s≤k} )
                     = ( ∏_{s=0}^{k−1} ||Φ_s||² ) det( [φ_{s−1}(z_j)]_{1≤j,s≤k} [\overline{φ_{j−1}(z_s)}]_{1≤j,s≤k} ).

Note that φ*_k(z) = ||Φ_k||⁻¹ ∏_{j=1}^k (1 − z z̄_j) and φ_k(z_s) = 0 for every s. Therefore, by the Christoffel–Darboux formula (iii), the right-hand side of the last equation can be written as

    = ( ∏_{s=0}^{k−1} ||Φ_s||² ) det[ φ*_k(z_j) \overline{φ*_k(z_s)} / (1 − z_j z̄_s) ]_{1≤j,s≤k}
    = ( ∏_{s=0}^{k−1} ||Φ_s||² ) ( ∏_{s=1}^k |φ*_k(z_s)|² ) det[ 1/(1 − z_j z̄_s) ]_{1≤j,s≤k}.

Now using parts (ii) and (iv), we obtain

    = |∆(z₁, …
, z_k)|² ∏_{j,s=1}^k |1 − z_j z̄_s|² [ ( ∏_{j=0}^{k−1} (1 − |α_j|²)^{j+1} ) ∏_{j,s=1}^k (1 − z_j z̄_s) ]⁻¹,

which gives us (B.4) after cancelling by |∆|². □

Appendix C. Some Distributions on Spheres

For the reader's convenience we collect here some of the distributions needed in the body of the text.

Lemma C.1. (a) Let a random vector x ∈ ℝⁿ be chosen according to the normalized uniform distribution on the unit sphere {x ∈ ℝⁿ : ||x|| = 1}. Then (µ₁, …, µ_n) = (x₁², …, x_n²) are jointly distributed on the simplex Σ_{j=1}^n µ_j = 1, µ_j ≥ 0, 1 ≤ j ≤ n, according to the probability distribution

    [ Γ(n/2) / Γ(1/2)ⁿ ] ∏_{j=1}^n µ_j^{−1/2} dµ₁ … dµ_{n−1}.

(b) Let a random vector x ∈ ℝ²ⁿ be chosen according to the normalized uniform distribution on the unit sphere {x ∈ ℝ²ⁿ : ||x|| = 1}. Then (µ₁, …, µ_n) = (x₁² + x₂², …, x_{2n−1}² + x_{2n}²) are jointly distributed on the simplex Σ_{j=1}^n µ_j = 1, µ_j ≥ 0, 1 ≤ j ≤ n, according to the probability distribution

    (n − 1)! dµ₁ … dµ_{n−1}.

(c) Let a random vector x ∈ ℝ²ⁿ be chosen according to the normalized uniform distribution on the unit sphere {x ∈ ℝ²ⁿ : ||x|| = 1}. Then

    (µ₁, µ₂, …, µ_{n−1}, µ_n, µ_{n+1}) = (x₁² + x₂², x₃² + x₄², …, x_{2n−3}² + x_{2n−2}², x_{2n−1}², x_{2n}²)

are jointly distributed on the simplex Σ_{j=1}^{n+1} µ_j = 1, µ_j ≥ 0, 1 ≤ j ≤ n+1, according to the probability distribution

    [ (n − 1)! / π ] (µ_n µ_{n+1})^{−1/2} dµ₁ … dµ_n.

(d) Let a random vector x ∈ ℝ²ⁿ⁺¹ be chosen according to the normalized uniform distribution on the unit sphere {x ∈ ℝ²ⁿ⁺¹ : ||x|| = 1}. Then (µ₁, …, µ_n, µ_{n+1}) = (x₁² + x₂², x₃² + x₄², …, x_{2n−1}² + x_{2n}², x_{2n+1}²) are jointly distributed on the simplex Σ_{j=1}^{n+1} µ_j = 1, µ_j ≥ 0, 1 ≤ j ≤ n+1, according to the probability distribution

    [ Γ(n + 1/2) / Γ(1/2) ] µ_{n+1}^{−1/2} dµ₁ … dµ_n.
(e) Let a random vector x ∈ ℝ⁴ⁿ be chosen according to the normalized uniform distribution on the unit sphere {x ∈ ℝ⁴ⁿ : ||x|| = 1}. Then

    (µ₁, …, µ_n) = (x₁² + x₂² + x₃² + x₄², …, x_{4n−3}² + x_{4n−2}² + x_{4n−1}² + x_{4n}²)

are jointly distributed on the simplex Σ_{j=1}^n µ_j = 1, µ_j ≥ 0, 1 ≤ j ≤ n, according to the probability distribution

    (2n − 1)! ∏_{j=1}^n µ_j dµ₁ … dµ_{n−1}.

Proof. Part (b) has been proved in [22, Cor. A.3]. Part (a) follows from [22, Cor. A.1] and a change of variables; the normalization constant comes from [22, Cor. A.4]. Parts (c), (d), (e) can be proved using the arguments in the proof of [22, Cor. A.3], with [22, Cor. A.4] supplying the normalization constants. □

Appendix D. Jacobian Determinants

Let us define the Jacobian determinants of real and complex maps. Recall that for a map f : ℝⁿ → ℝⁿ, (z₁, …, z_n) ↦ (f₁, …, f_n), the Jacobian determinant is defined to be

    J[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] := det[ ∂f_j/∂z_k ]ⁿ_{j,k=1}.

To avoid confusion, for complex maps we will use a different notation. If each component of the map f : ℂⁿ → ℂⁿ, (z₁, …, z_n) ↦ (f₁, …, f_n), is analytic, then one can define the complex Jacobian determinant

    J_C[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] := det[ ∂f_j/∂z_k ]ⁿ_{j,k=1}.

For our purposes, however, the following will be more natural. For a (not necessarily analytic) map f : ℂⁿ → ℂⁿ, (z₁, …, z_n) ↦ (f₁, …, f_n), the real Jacobian determinant J_R is defined to be

    J_R[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] := J[ ∂(Re f₁, Im f₁, …, Re f_n, Im f_n) / ∂(Re z₁, Im z₁, …, Re z_n, Im z_n) ].

We show below that for a given map f : ℂⁿ → ℂⁿ, (z₁, …, z_n) ↦ (f₁, …
, f_n), its real Jacobian J_R is equal to the complex Jacobian J_C of the map f̃ : ℂ²ⁿ → ℂ²ⁿ that takes (z₁, z̄₁, …, z_n, z̄_n) into (f₁, f̄₁, …, f_n, f̄_n), where we treat z_j and z̄_j as independent variables.

Lemma D.1. For a map f : ℂⁿ → ℂⁿ, (z₁, …, z_n) ↦ (f₁, …, f_n),

    J_R[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] = J_C[ ∂(f₁, f̄₁, …, f_n, f̄_n) / ∂(z₁, z̄₁, …, z_n, z̄_n) ].

Remark. This implies that for an analytic map f : ℂⁿ → ℂⁿ,

    J_R[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] = | J_C[ ∂(f₁, …, f_n) / ∂(z₁, …, z_n) ] |².

Proof. Indeed,

    [ ∂f_j/∂z_k    ∂f̄_j/∂z_k
      ∂f_j/∂z̄_k   ∂f̄_j/∂z̄_k ]_{1≤j,k≤n}
    = ( diag[ 1/2  −i/2
              1/2   i/2 ] ) [ ∂Re f_j/∂Re z_k   ∂Im f_j/∂Re z_k
                              ∂Re f_j/∂Im z_k   ∂Im f_j/∂Im z_k ]_{1≤j,k≤n} ( diag[ 1   1
                                                                                     i  −i ] ),

where diag[A] denotes the block diagonal matrix with n copies of A on the diagonal. The two outer factors have determinants (i/2)ⁿ and (−2i)ⁿ, whose product is 1, so the two Jacobian determinants coincide. □

In the following lemma we let

    Φ(z) = zⁿ + κ₁z^{n−1} + … + κ_{n−1}z + κ_n = ∏_{j=1}^n (z − z_j),

where we order the z_j's, e.g., by their absolute values and then by their arguments. The mapping from the zeros to the coefficients is then injective. It can be viewed as a map ℝⁿ → ℝⁿ or ℂⁿ → ℂⁿ.

Lemma D.2. (a) As an ℝⁿ → ℝⁿ map,

    J[ ∂(κ₁, …, κ_n) / ∂(z₁, …, z_n) ] = ∆(z₁, …, z_n);

(b) As a ℂⁿ → ℂⁿ map,

    J_R[ ∂(κ₁, …, κ_n) / ∂(z₁, …, z_n) ] = |∆(z₁, …, z_n)|².

Proof. Let s_k = Σ_{j₁<…<j_k} z_{j₁} ⋯ z_{j_k} denote the elementary symmetric polynomials, so that κ_k = (−1)^k s_k. Then ∂s_k/∂z_j = s_{k−1}(z₁, …, ẑ_j, …, z_n), and elementary column operations reduce det[∂κ_k/∂z_j] to the Vandermonde determinant ∆(z₁, …, z_n), which proves (a). Part (b) follows from (a) and the Remark after Lemma D.1: the map is analytic, so J_R = |J_C|² = |∆(z₁, …, z_n)|². □

Acknowledgement

The authors would like to thank Yan V. Fyodorov for drawing our attention to resonances for the systems with non-ideal coupling. The first author was funded, in part, by US NSF grant DMS-1265868.
The second author was funded, in part, by the grant KAW 2010.0063 from the Knut and Alice Wallenberg Foundation.

References

1. Yu. Arlinskiĭ, L. Golinskiĭ, and E. Tsekanovskiĭ. Contractions with rank one defect operators and truncated CMV matrices. J. Funct. Anal., 254(1):154–195, 2008.
2. A. Borodin and C. D. Sinclair. The Ginibre ensemble of real random matrices and its scaling limits. Comm. Math. Phys., 291(1):177–224, 2009.
3. A. Bunse-Gerstner and L. Elsner. Schur parameter pencils for the solution of the unitary eigenproblem. Linear Algebra Appl., 154/156:741–778, 1991.
4. M. J. Cantero, L. Moral, and L. Velázquez. Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle. Linear Algebra Appl., 362:29–56, 2003.
5. F. J. Dyson. Statistical theory of the energy levels of complex systems. I. J. Mathematical Phys., 3:140–156, 1962.
6. N. Engheta. Metamaterials with negative permittivity and permeability: background, salient features, and new trends. IEEE MTT-S International Microwave Symposium Digest, 1:187–190, 2003.
7. P. J. Forrester. The limiting Kac random polynomial and truncated random orthogonal matrices. J. Stat. Mech., P12018, 2010.
8. P. J. Forrester. Log-gases and random matrices, volume 34 of London Mathematical Society Monographs Series. Princeton University Press, Princeton, NJ, 2010.
9. P. J. Forrester. Analogies between random matrix ensembles and the one-component plasma in two-dimensions. Nuclear Phys. B, 904:253–281, 2016.
10. P. J. Forrester and M. Krishnapur. Derivation of an eigenvalue probability density function relating to the Poincaré disk. J. Phys. A, 42(38):385204, 2009.
11. P. J. Forrester and E. M. Rains. Jacobians and rank 1 perturbations relating to unitary Hessenberg matrices. Int. Math. Res. Not., Art. ID 48306, 2006.
12. Y. V. Fyodorov. Spectra of random matrices close to unitary and scattering theory for discrete-time systems.
In Disordered and complex systems (London, 2000), volume 553 of AIP Conf. Proc., pages 191–196. Amer. Inst. Phys., Melville, NY, 2001.
13. Y. V. Fyodorov and B. A. Khoruzhenko. On absolute moments of characteristic polynomials of a certain class of complex random matrices. Comm. Math. Phys., 273(3):561–599, 2007.
14. Y. V. Fyodorov and D. V. Savin. Resonance scattering of waves in chaotic systems. In The Oxford handbook of random matrix theory, pages 703–722. Oxford Univ. Press, Oxford, 2011.
15. Y. V. Fyodorov and H.-J. Sommers. Spectra of random contractions and scattering theory for discrete-time systems. JETP Letters, 72(8):422–426, 2000.
16. Y. V. Fyodorov and H.-J. Sommers. Random matrices close to Hermitian or unitary: overview of methods and results. J. Phys. A, 36(12):3303–3347, 2003.
17. R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge University Press, Cambridge, second edition, 2013.
18. J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Zeros of Gaussian analytic functions and determinantal point processes, volume 51 of University Lecture Series. American Mathematical Society, Providence, RI, 2009.
19. B. A. Khoruzhenko and H.-J. Sommers. Non-Hermitian ensembles. In The Oxford handbook of random matrix theory, pages 376–397. Oxford Univ. Press, Oxford, 2011.
20. B. A. Khoruzhenko, H.-J. Sommers, and K. Życzkowski. Truncations of random orthogonal matrices. Phys. Rev. E (3), 82(4):040106, 2010.
21. R. Killip and R. Kozhan. Zeros of orthogonal polynomials on the unit circle with random decaying Verblunsky coefficients. (in preparation).
22. R. Killip and I. Nenciu. Matrix models for circular ensembles. Int. Math. Res. Not., (50):2665–2701, 2004.
23. R. Killip and I. Nenciu. CMV: the unitary analogue of Jacobi matrices. Comm. Pure Appl. Math., 60(8):1148–1188, 2007.
24. M. Krishnapur. From random matrices to random analytic functions. Ann. Probab., 37(1):314–346, 2009.
25. L. D. Landau and E. M. Lifshitz.
Electrodynamics of continuous media. Course of Theoretical Physics, Vol. 8. Translated from the Russian by J. B. Sykes and J. S. Bell. Pergamon Press, Oxford-London-New York-Paris; Addison-Wesley Publishing Co., Inc., Reading, Mass., 1960.
26. M. L. Mehta. Random matrices, volume 142 of Pure and Applied Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, third edition, 2004.
27. J. Novak. Truncations of random unitary matrices and Young tableaux. Electron. J. Combin., 14(1):Research Paper 21, 2007.
28. L. Pastur and M. Shcherbina. Eigenvalue distribution of large random matrices, volume 171 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2011.
29. D. Petz and J. Réffy. Large deviation for the empirical eigenvalue density of truncated Haar unitary matrices. Probab. Theory Related Fields, 133(2):175–189, 2005.
30. B. Simon. Orthogonal polynomials on the unit circle. Part 1, volume 54 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2005.
31. B. Simon. Orthogonal polynomials on the unit circle. Part 2, volume 54 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2005.
32. B. Simon. CMV matrices: five years after. J. Comput. Appl. Math., 208(1):120–154, 2007.
33. B. Sz.-Nagy and C. Foiaș. Harmonic analysis of operators on Hilbert space. Translated from the French and revised. North-Holland Publishing Co., Amsterdam, 1970.
34. H. Weyl. The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939.
35. K. Życzkowski and H.-J. Sommers. Truncations of random unitary matrices.