Topological expansion of the Bethe ansatz, and non-commutative algebraic geometry
SPhT-T08/140
B. Eynard †, O. Marchal †‡
† Institut de Physique Théorique, CEA, IPhT, F-91191 Gif-sur-Yvette, France; CNRS, URA 2306, F-91191 Gif-sur-Yvette, France.
‡ Centre de recherches mathématiques, Université de Montréal, C. P. 6128, succ. centre-ville, Montréal, Québec, Canada H3C 3J7.
Abstract: In this article, we define a non-commutative deformation of the "symplectic invariants" (introduced in [13]) of an algebraic hyperelliptic plane curve. The necessary condition for our definition to make sense is a Bethe ansatz. The commutative limit reduces to the symplectic invariants, i.e. to algebraic geometry, and thus we define non-commutative deformations of some algebraic-geometric quantities. In particular, our non-commutative Bergmann kernel satisfies a Rauch variational formula. These non-commutative invariants are inspired by the large N expansion of formal non-hermitian matrix models. They are thus expected to be related to the enumeration problem of discrete non-orientable surfaces of arbitrary topologies.
1 Introduction

In [13], the notion of symplectic invariants of a spectral curve was introduced. For any given algebraic plane curve (called spectral curve) of equation
0 = E(x,y) = Σ_{i,j} E_{i,j} x^i y^j    (1.1)
an infinite sequence of numbers
F^{(g)}(E), g = 0, 1, 2, …, ∞    (1.2)
and an infinite sequence of multilinear meromorphic forms W^{(g)}_n (meromorphic on the algebraic Riemann surface of equation E(x,y) = 0) were defined. Their definition was inspired from hermitian matrix models, i.e. the case where E = E_{M.M.} is the spectral curve (y(x) is the equilibrium density of eigenvalues) of a formal hermitian matrix integral Z_{M.M.} = ∫ dM e^{−N Tr V(M)}; the F^{(g)} were such that:
ln Z_{M.M.} = Σ_{g=0}^∞ N^{2−2g} F^{(g)}(E_{M.M.})    (1.3)
The F^{(g)}'s have many remarkable properties (see [13]), in particular invariance under symplectic deformations of the spectral curve, homogeneity (of degree 2−2g), holomorphic anomaly equations (modular transformations), stability under singular limits, etc. Another important property is that the following formal series
τ(E) = e^{Σ_g N^{2−2g} F^{(g)}(E)}    (1.4)
is the "formal" τ-function of an integrable hierarchy. Although those notions were first developed for matrix models, they extend beyond matrix models, and they make sense for spectral curves which are not matrix-model spectral curves. For instance, the (non-algebraic) spectral curve E_{WP}(x,y) = (2πy)^2 − (sin(2π√x))^2 is such that F^{(g)}(E_{WP}) = Vol(M_g) is the Weil–Petersson volume of the moduli space of Riemann surfaces of genus g (see [11, 12]). It is conjectured [3] that the F^{(g)}'s are deeply related to Gromov–Witten invariants, Hurwitz numbers [4] and topological strings [3]. In particular they are related to the Kodaira–Spencer field theory [8]. There have also been many attempts to compute non-hermitian matrix integrals; a first attempt to extend the method of [13] was made in [7], and in this paper we deeply improve the result of [7]. The aim of the construction we present here is to define F^{(g)}'s for a "non-commutative spectral curve", i.e.
a non-commutative polynomial:
E(x,y) = Σ_{i,j} E_{i,j} x^i y^j,   [y,x] = ℏ    (1.5)
For instance we can view y as y = ℏ ∂/∂x; then E is a differential operator, which encodes a linear differential equation. In this article we choose E(x,y) of degree 2 in the variable y, i.e. the case of a second-order linear differential equation, i.e. a Schrödinger equation, and we leave the general case to a further work. Here, in this article, we define some F^{(g)}(E), which reduce to those of [13] in the limit ℏ →
0, and which compute non-hermitian matrix model topological expansions. For instance consider a formal matrix integral:
Z = ∫_{E_{β,N}} dM e^{−N√β Tr V(M)} = e^{Σ_g N^{2−2g} F^{(g)}}    (1.6)
where E_{β,N} is one of the Wigner matrix ensembles [16] of rank N: E_{1/2,N} is the set of real symmetric matrices, E_{1,N} is the set of hermitian matrices, and E_{2,N} is the set of self-dual quaternion matrices (see [16] for a review). We define:
ℏ = (1/N) (√β − 1/√β)    (1.7)
Notice that ℏ = 0 for hermitian matrices, i.e. the hermitian case is the classical limit [y,x] = 0. Notice also that the expected duality β ↔ 1/β (cf [17, 6]) corresponds to ℏ ↔ −ℏ, i.e. we expect it to correspond to the duality x ↔ y (for ℏ = 0, the x ↔ y duality was proved in [14]). Let us also mention that the topological expansion of non-hermitian matrix integrals is known to be related to the enumeration of unoriented discrete surfaces, and we expect that our F^{(g)} = Σ_k ℏ^k F^{(g,k)} can be interpreted as generating functions of such unoriented surfaces. So, in this article, we provide a method for computing F^{(g,k)} for any g and k, which is more concise than that of [7].

Outline of the article

• In section 2, we introduce our recursion kernel K(x_0,x), and we show that the mere existence of this kernel is equivalent to the Bethe ansatz condition.
• In section 3, we define the W^{(g)}_n's and the F^{(g)}'s, and we study their main properties, for instance that W^{(g)}_n is symmetric.
• In section 4, we study the classical limit ℏ →
0, and we show that we recover the algebro-geometric construction of [13].
• This inspires a notion of non-commutative algebraic geometry in section 5.
• In section 6, we study the application to the topological expansion of non-hermitian matrix integrals.
• In section 7, we study the application to the Gaudin model.
• Section 8 is the conclusion.
• All the technical proofs are written in appendices for readability.
2 The recursion kernel

Let V′(x) be a rational function (possibly a polynomial); we call V(x) the potential. Let the α_i's be the poles of V′(x) (one of the poles may be at ∞). For example, the following potential is called the Gaudin potential (see section 7):
V′_Gaudin(x) = x + Σ_{i=1}^n S_i/(x−α_i)    (2.1)
As another example, we will consider formal matrix models in section 6, for which V′(x) is a polynomial. However, many other choices can be made.

Our problem is to find m complex numbers s_1, …, s_m, as well as two functions G(x_0,x) and K(x_0,x), with the following properties:
1. G(x_0,x) is a rational function of x, with poles at x = s_i, a simple pole of residue +1 at x = x_0, and which behaves as O(1/x) when x → ∞.
2. G(x_0,x) is a rational function of x_0, with (possibly multiple) poles at x_0 = s_i and a simple pole at x_0 = x, and G(x_0,x) behaves like O(1/x_0) when x_0 → ∞.
3. B(x_0,x) = −(1/2) ∂G(x_0,x)/∂x is symmetric: B(x_0,x) = B(x,x_0).
4. K and G are related by the following differential equation:
( 2ℏ Σ_{i=1}^m 1/(x−s_i) − V′(x) − ℏ ∂/∂x ) K(x_0,x) = G(x_0,x)    (2.2)
5. K(x_0,x) is analytical when x → s_i, for all i = 1, …, m.
We shall see below that those 5 conditions determine K, G, and the s_i's. In fact condition 5 is the most important one in this list: it amounts to a no-monodromy condition, and we shall see below that it implies that the s_i's must obey the Bethe-ansatz equation.
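As a concrete illustration (ours, not from the paper), the Bethe-ansatz equation announced here, 2ℏ Σ_{j≠i} 1/(s_i−s_j) = V′(s_i) (eq. (2.11) below), can be solved numerically by Newton's method. The sketch below uses the Gaussian potential V′(x) = x; the Jacobian of the Bethe system is exactly ℏT, where T is the Hessian matrix of eq. (2.15), and for V′(x) = x the roots are known to be √(2ℏ) times the zeros of the Hermite polynomial H_m (a classical electrostatic observation going back to Stieltjes), which serves as a check. All function names are ours.

```python
# Our illustration (not from the paper): Newton's method for the Bethe equations
#   2*hbar*sum_{j!=i} 1/(s_i-s_j) = V'(s_i),  here with V'(x) = x, V''(x) = 1.
# The Jacobian of F_i(s) = V'(s_i) - 2*hbar*sum_{j!=i} 1/(s_i-s_j) is hbar*T,
# with T the Hessian matrix of eq. (2.15).
hbar, m = 0.5, 3

def F(s):
    return [s[i] - 2 * hbar * sum(1 / (s[i] - s[j]) for j in range(m) if j != i)
            for i in range(m)]

def jacobian(s):  # hbar * T
    return [[-2 * hbar / (s[i] - s[j]) ** 2 if i != j else
             1 + 2 * hbar * sum(1 / (s[i] - s[k]) ** 2 for k in range(m) if k != i)
             for j in range(m)] for i in range(m)]

def solve(M, b):  # Gaussian elimination with partial pivoting (tiny systems)
    n = len(b)
    A = [list(M[i]) + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

s = [-1.2, 0.1, 1.3]                      # initial guess, roots kept distinct
for _ in range(60):                        # Newton: s <- s - (hbar*T)^{-1} F(s)
    d = solve(jacobian(s), F(s))
    s = [s[i] - d[i] for i in range(m)]

print(sorted(s))   # for V'(x)=x these are sqrt(2*hbar) times the zeros of H_3
```

For m = 3 the exact roots are 0 and ±√(3ℏ) (the rescaled zeros of H_3(x) = 8x^3 − 12x), which the iteration reproduces to machine precision.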
2.2 Analytical structure of the kernel G

The 4th and 5th conditions imply that G(x_0,x) has at most simple poles at x = s_i. Then condition 3 implies that G(x_0,x) has at most double poles at x_0 = s_i. The first 3 conditions then imply that there exists a symmetric matrix A_{i,j} such that G(x_0,x) can be written:
G(x_0,x) = 1/(x−x_0) + 2 Σ_{i,j=1}^m A_{i,j} / ((x_0−s_i)^2 (x−s_j))    (2.3)
and therefore:
B(x_0,x) = 1/(2(x_0−x)^2) + Σ_{i,j=1}^m A_{i,j} / ((x_0−s_i)^2 (x−s_j)^2)    (2.4)
We will argue in section 5 that B can be viewed as a non-commutative deformation of algebraic geometry's Bergmann kernel.

2.3 The no-monodromy condition

First, we study the conditions under which the differential equation eq. (2.2) has no monodromies around the s_i's, in other words the conditions under which K(x_0,x) is analytical when x → s_i, ∀i. Write the Taylor expansion:
K(x_0, s_i+ε) = K(x_0,s_i) + ε K′(x_0,s_i) + (ε^2/2) K″(x_0,s_i) + (ε^3/6) K‴(x_0,s_i) + …    (2.5)
Equating the coefficients of ε^{−1} in eq. (2.2), we get:
ℏ K(x_0,s_i) = Σ_j A_{i,j}/(x_0−s_j)^2    (2.6)
equating the coefficients of ε^0 in eq. (2.2), we get:
ℏ K′(x_0,s_i) = −1/(x_0−s_i) + V′(s_i) K(x_0,s_i) − 2ℏ Σ_{j≠i} (K(x_0,s_i) − K(x_0,s_j))/(s_i−s_j)    (2.7)
and equating the coefficients of ε^1 in eq. (2.2), we get:
2ℏ Σ_{j≠i} K′(x_0,s_i)/(s_i−s_j) − 2ℏ Σ_{j≠i} K(x_0,s_i)/(s_i−s_j)^2 − V″(s_i) K(x_0,s_i)
= V′(s_i) K′(x_0,s_i) − 1/(s_i−x_0)^2 − 2 Σ_{j≠i} Σ_k A_{j,k} / ((s_i−s_j)^2 (x_0−s_k)^2)    (2.8)
Notice from eq. (2.6) that K(x_0,s_i) has only double poles in x_0, with no residue:
Res_{x_0→s_k} K(x_0,s_i) = 0    (2.9)
Then, taking the residue at x_0 → s_k in eq. (2.7), we see that:
ℏ Res_{x_0→s_k} K′(x_0,s_i) = −δ_{i,k}    (2.10)
Then, taking the residue at x_0 → s_i in eq. (2.8) implies that the s_i's are Bethe roots, i.e. they must obey the Bethe equation:
∀ i = 1, …, m :   2ℏ Σ_{j≠i} 1/(s_i−s_j) = V′(s_i)    (2.11)
Then eq. (2.8) becomes:
1/(s_i−x_0)^2 = V″(s_i) K(x_0,s_i) + 2ℏ Σ_{j≠i} K(x_0,s_i)/(s_i−s_j)^2 − 2 Σ_{j≠i} Σ_k A_{j,k} / ((s_i−s_j)^2 (x_0−s_k)^2)    (2.12)
i.e., by comparing the coefficients of 1/(x_0−s_k)^2 on both sides:
δ_{i,k} = (1/ℏ) V″(s_i) A_{i,k} + 2 Σ_{j≠i} (A_{i,k} − A_{j,k})/(s_i−s_j)^2    (2.13)
i.e. A is the inverse of the Hessian matrix T: A = T^{−1}, with
T_{i,i} = (1/ℏ) V″(s_i) + 2 Σ_{j≠i} 1/(s_i−s_j)^2,   T_{i,j} = −2/(s_i−s_j)^2  (i ≠ j)    (2.14)
that is:
T_{i,j} = (1/ℏ) ∂^2/∂s_i ∂s_j ( Σ_k V(s_k) − ℏ Σ_{k≠l} ln(s_k−s_l) )    (2.15)
Therefore the Bethe ansatz equations eq. (2.11) (as well as eq. (2.13)) are necessary conditions for K(x_0,x) to be analytical when x → s_i. Those conditions are also sufficient, as one can see by solving explicitly the linear ODE for K:
K(x_0,x) = −(1/ℏ) ∫_c^x dx′ G(x_0,x′) e^{(V(x′)−V(x))/ℏ} Π_i (x−s_i)^2/(x′−s_i)^2    (2.16)

Remark 2.1
Notice that K(x_0,x) is not analytical everywhere: it has a logarithmic singularity at x = x_0, and it has essential singularities at the poles of V′.

Remark 2.2  Notice that if one solution of the ODE is analytical near all the s_i's, then all solutions have that property. Indeed, all the solutions differ by a solution of the homogeneous equation, i.e. by a multiple of:
Π_i (x−s_i)^2 e^{−V(x)/ℏ}    (2.17)
which is clearly analytical near the s_i's. So, for the moment, the requirements 1–5 determine G(x_0,x) uniquely, but K(x_0,x) is not unique. Let us choose one possible K(x_0,x); we prove below, in theorem 3.4, that the objects we are going to define do not depend on the choice of K.

Remark 2.3
In what follows, it is useful to compute the Taylor expansion of K near a root s_i. We write:
K(x_0,x) = Σ_{k=0}^∞ K_{i,k}(x_0) (x−s_i)^k    (2.18)
The coefficients K_{i,k}(x_0) are themselves rational fractions of x_0, and they are computed in appendix A.

2.5 Schrödinger equation

It is well known that the Bethe condition can be rewritten as a Schrödinger equation [1, 2]. We rederive it here for completeness. Define the wave function:
ψ(x) = Π_{i=1}^m (x−s_i) e^{−V(x)/(2ℏ)},   ω(x) = ℏ Σ_{i=1}^m 1/(x−s_i)    (2.19)
Y(x) = −2ℏ ψ′(x)/ψ(x) = V′(x) − 2ω(x) = V′(x) − 2ℏ Σ_i 1/(x−s_i)    (2.20)
then compute:
U(x) = Y^2 − 2ℏ Y′(x) = 4ℏ^2 ψ″(x)/ψ(x) = V′(x)^2 − 2ℏ V″(x) + 4( ω(x)^2 − V′(x) ω(x) + ℏ ω′(x) )    (2.21)
We have:
ω(x)^2 + ℏ ω′(x) = ℏ^2 Σ_{i,j} 1/((x−s_i)(x−s_j)) − ℏ^2 Σ_i 1/(x−s_i)^2 = ℏ^2 Σ_{i≠j} 1/((x−s_i)(x−s_j))    (2.22)
which has only simple poles at the s_i's. The residue at s_i is 2ℏ^2 Σ_{j≠i} 1/(s_i−s_j) = ℏ V′(s_i) (by the Bethe equation), and thus:
ω(x)^2 + ℏ ω′(x) = ℏ Σ_i V′(s_i)/(x−s_i)    (2.23)
which implies:
ω(x)^2 − V′(x) ω(x) + ℏ ω′(x) = −ℏ Σ_i (V′(x) − V′(s_i))/(x−s_i)    (2.24)
and thus:
U(x) = V′(x)^2 − 2ℏ V″(x) − 4ℏ Σ_{i=1}^m (V′(x) − V′(s_i))/(x−s_i)    (2.25)
Therefore U(x) is a rational fraction, with poles at the poles of V′ (of degree at most twice those of V′); in particular it has no poles at the s_i's. U is the potential for the Schrödinger equation satisfied by ψ:
4ℏ^2 ψ″ = U ψ    (2.26)
As announced in the introduction, this equation can be encoded in a D-module element:
E(x,y) = y^2 − U(x),   y = 2ℏ ∂/∂x    (2.27)
i.e.
E(x,y)·ψ = 0    (2.28)
Notice that the Schrödinger equation is equivalent to a Riccati equation for Y = −2ℏψ′/ψ:
Y^2 − 2ℏY′ = U    (2.29)
We shall come back in more detail to the classical limit ℏ → 0.
• In the classical limit, the Riccati equation becomes an algebraic (hyperelliptic) equation, which we call the (classical) spectral curve:
Y^2 = U(x)    (2.30)
The function Y_cl(x) = √(U(x)) is therefore a multivalued function of x, and it should be seen as a meromorphic function on a branched Riemann surface (the branch points are the zeroes of U(x)). We shall see below that in the limit ℏ →
0, the kernel B(x_1,x_2) tends towards the Bergmann kernel of that Riemann surface. In other words, the classical limit is expressed in terms of algebraic geometry. In fact, in this article we are going to define non-commutative deformations of certain algebraic-geometric objects, in section 5.

3 Definition of correlators and free energies
In this section, we define the quantum deformations of the symplectic invariants introduced in [10, 13]. The following definitions are inspired from (non-hermitian) matrix models. The special case of their application to matrix models will be discussed in section 6.
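Before stating the definitions, the input data of section 2 can be assembled and sanity-checked numerically (our sketch, not part of the paper): for the Gaussian example V(x) = x^2/2 with m = 3, the Bethe roots are 0 and ±√(3ℏ), and one can verify that A = T^{−1} satisfies eq. (2.13) and that the kernel B of eq. (2.4), which becomes W^{(0)}_2 below, is symmetric.

```python
# Our sanity check (not from the paper) of the data (s_i, T, A, B) of section 2.
hbar, m = 0.5, 3
r = (3 * hbar) ** 0.5
s = [-r, 0.0, r]                 # Bethe roots of V'(x) = x for m = 3

# Hessian T of eq. (2.15), with V''(x) = 1
T = [[-2 / (s[i] - s[j]) ** 2 if i != j else
      1 / hbar + 2 * sum(1 / (s[i] - s[k]) ** 2 for k in range(m) if k != i)
      for j in range(m)] for i in range(m)]

# A = T^{-1} by Gauss-Jordan elimination (T is positive definite here)
A = [[float(i == j) for j in range(m)] for i in range(m)]
M = [row[:] for row in T]
for c in range(m):
    piv = M[c][c]
    M[c] = [v / piv for v in M[c]]
    A[c] = [v / piv for v in A[c]]
    for q in range(m):
        if q != c:
            f = M[q][c]
            M[q] = [M[q][k] - f * M[c][k] for k in range(m)]
            A[q] = [A[q][k] - f * A[c][k] for k in range(m)]

def B(x1, x2):                   # eq. (2.4)
    return 1 / (2 * (x1 - x2) ** 2) + sum(
        A[i][j] / ((x1 - s[i]) ** 2 * (x2 - s[j]) ** 2)
        for i in range(m) for j in range(m))

# eq. (2.13): delta_{ik} = V''(s_i)/hbar A_{ik} + 2 sum_{j!=i}(A_{ik}-A_{jk})/(s_i-s_j)^2
check = [[(1 / hbar) * A[i][k]
          + 2 * sum((A[i][k] - A[j][k]) / (s[i] - s[j]) ** 2
                    for j in range(m) if j != i)
          for k in range(m)] for i in range(m)]
print(check[0][0], check[0][1])          # ~1 and ~0 (identity matrix)
print(B(0.3, 1.7) - B(1.7, 0.3))         # ~0: B is symmetric
```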
Definition 3.1
We define the following functions W^{(g)}_n(x_1,…,x_n) (called n-point correlation functions of "genus" g) by the recursion:
W^{(0)}_1(x) = ω(x) = ℏ Σ_{i=1}^m 1/(x−s_i),   W^{(0)}_2(x_1,x_2) = B(x_1,x_2)    (3.1)
W^{(g)}_{n+1}(x_0,J) = Σ_{i=1}^m Res_{x→s_i} K(x_0,x) ( W̄^{(g−1)}_{n+2}(x,x,J) + Σ_{h=0}^g Σ′_{I⊂J} W̄^{(h)}_{|I|+1}(x,I) W̄^{(g−h)}_{n−|I|+1}(x,J/I) )    (3.2)
where J is a collective notation for the variables J = {x_1,…,x_n}, where Σ′ means that we exclude the terms (h = 0, I = ∅) and (h = g, I = J), and where:
W̄^{(g)}_n(x_1,…,x_n) = W^{(g)}_n(x_1,…,x_n) − δ_{n,2} δ_{g,0} / (2(x_1−x_2)^2)    (3.3)

Remark 3.1
This is exactly the same recursion as in [13]; the only difference is that the kernel K is not algebraic, but is a solution of the differential equation eq. (2.2). We shall show in section 4 that in the limit ℏ →
0, it indeed reduces to the definition of [13].
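The analytic input underlying this definition can itself be checked numerically (our illustration for the Gaussian data, nothing here is from the paper): for V(x) = x^2/2 and m = 3 the Bethe roots are 0 and ±√(3ℏ), eq. (2.25) collapses to the polynomial U(x) = x^2 − 2ℏ − 4ℏm (each term of the sum equals 1), and the Schrödinger equation 4ℏ^2 ψ″ = Uψ of eq. (2.26) then holds exactly:

```python
import math

# Our check (not from the paper) of eq. (2.26) on the Gaussian example V = x^2/2.
hbar, m = 0.5, 3
a = (3 * hbar) ** 0.5            # Bethe roots of V'(x) = x: 0 and +/- sqrt(3*hbar)
s = [-a, 0.0, a]

def psi(x):                      # eq. (2.19): prod_i (x - s_i) * exp(-V(x)/(2*hbar))
    p = 1.0
    for si in s:
        p *= x - si
    return p * math.exp(-x * x / (4 * hbar))

def U(x):                        # eq. (2.25) with V'(x) = x: each sum term is 1
    return x * x - 2 * hbar - 4 * hbar * m

x, h = 0.7, 1e-4                 # central finite difference for psi''
psi2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h ** 2
print(4 * hbar ** 2 * psi2, U(x) * psi(x))   # the two values agree
```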
Remark 3.2
We say that W^{(g)}_n is the correlation function of genus g with n marked points, and sometimes we say that it has characteristic:
χ = 2 − 2g − n    (3.4)
By analogy with algebraic geometry, we say that W^{(g)}_n is stable if χ < 0, and unstable if χ ≥
0. We see that all the stable W^{(g)}_n's have a common recursive definition (def. 3.1), whereas the unstable ones appear as exceptions.

Remark 3.3  In order for the definition to make sense, we must make sure that each term is locally meromorphic in the vicinity of x → s_i, so that we can compute residues; i.e. there must be no log-singularity near the s_i's. In particular, the requirement of section 2.3 for the kernel K is necessary. In other words, a necessary condition for the definition eq. (3.2) to make sense is the Bethe ansatz! (Here g is any given integer; it has nothing to do with the genus of the spectral curve.)

3.2 Properties of correlators

The main reason for definition 3.1 is that the W^{(g)}_n's have many beautiful properties, which generalize those of [13]. We shall prove the following properties:

Theorem 3.1
Each W^{(g)}_n is a rational function of all its arguments. It has poles only at the s_i's (except W^{(0)}_2, which also has a pole at x_1 = x_2). In particular it has no poles at the α_i's. Moreover, it vanishes as O(1/x_i) when x_i → ∞.
proof: in appendix B □

Theorem 3.2
The W^{(g)}_n's satisfy the loop equation, i.e. Virasoro-like constraints. This means that the quantity:
P^{(g)}_{n+1}(x; x_1,…,x_n) = −Y(x) W^{(g)}_{n+1}(x,x_1,…,x_n) + ℏ ∂_x W^{(g)}_{n+1}(x,x_1,…,x_n)
+ Σ_{h=0}^g Σ′_{I⊂J} W^{(h)}_{|I|+1}(x,I) W^{(g−h)}_{n−|I|+1}(x,J/I) + W^{(g−1)}_{n+2}(x,x,J)
+ Σ_j ∂_{x_j} ( (W^{(g)}_n(x,J/{x_j}) − W^{(g)}_n(x_j,J/{x_j})) / (x−x_j) )    (3.5)
is a rational fraction of x (possibly a polynomial), with no poles at x = s_i. The only possible poles of P^{(g)}_{n+1}(x;x_1,…,x_n) are at the poles of V′(x), with degree less than the degree of V′.
proof: in appendix C □

Theorem 3.3
Each W^{(g)}_n is a symmetric function of all its arguments.
proof: in appendix D, with the special case of W^{(0)}_3 in appendix F. □

Theorem 3.4
The correlation functions W^{(g)}_n are independent of the choice of the kernel K, provided that K is a solution of the equation eq. (2.2).
proof: in appendix E □

Theorem 3.5  The 3-point function W^{(0)}_3 can also be written:
W^{(0)}_3(x_1,x_2,x_3) = 4 Σ_i Res_{x→s_i} B(x,x_1) B(x,x_2) B(x,x_3) / Y′(x)    (3.6)
(In section 5, we interpret this equation as a non-commutative version of the Rauch variational formula.)
proof: in appendix F □

Theorem 3.6
Under an infinitesimal variation of the potential V → V + δV, we have:
∀ n ≥ 0, g ≥ 0 :   δW^{(g)}_n(x_1,…,x_n) = − Σ_i Res_{x→s_i} W^{(g)}_{n+1}(x,x_1,…,x_n) δV(x)    (3.7)
proof: in appendix G □

This theorem suggests the definition of the "loop operator":
Definition 3.2
The loop operator δ_x computes the variation of W^{(g)}_n under the formal variation δ_x V(x′) = 1/(x−x′):
δ_{x_{n+1}} W^{(g)}_n(x_1,…,x_n) = W^{(g)}_{n+1}(x_1,…,x_n,x_{n+1})    (3.8)
The loop operator is a derivation: δ_x(uv) = u δ_x v + v δ_x u, and we have δ_{x_1} δ_{x_2} = δ_{x_2} δ_{x_1} and δ_{x_1} ∂_{x_2} = ∂_{x_2} δ_{x_1}.

Theorem 3.7
For n ≥ 1, the W^{(g)}_n satisfy the equations:
Σ_{i=1}^n ∂/∂x_i W^{(g)}_n(x_1,…,x_n) = − Σ_i Res_{x_{n+1}→s_i} V′(x_{n+1}) W^{(g)}_{n+1}(x_1,…,x_n,x_{n+1})    (3.9)
and
Σ_{i=1}^n ∂/∂x_i ( x_i W^{(g)}_n(x_1,…,x_n) ) = − Σ_i Res_{x_{n+1}→s_i} x_{n+1} V′(x_{n+1}) W^{(g)}_{n+1}(x_1,…,x_n,x_{n+1})    (3.10)
proof: in appendix H □

Theorem 3.8
For n ≥ 1, the W^{(g)}_n satisfy the equation:
(2 − 2g − n − ℏ ∂/∂ℏ) W^{(g)}_n(x_1,…,x_n) = − Σ_i Res_{x_{n+1}→s_i} V(x_{n+1}) W^{(g)}_{n+1}(x_1,…,x_n,x_{n+1})    (3.11)
proof: We give a "long" proof in appendix I. There is also a shortcut: if one changes ℏ → λℏ and V → λV, the s_i's don't change, B and G don't change, and K changes to λ^{−1}K; thus W^{(g)}_n changes to λ^{2−2g−n} W^{(g)}_n. The theorem is obtained by computing λ ∂/∂λ of λ^{2−2g−n} W^{(g)}_n, i.e. writing (2−2g−n) W^{(g)}_n = ( ℏ ∂/∂ℏ + Σ_k t_k ∂/∂t_k ) W^{(g)}_n (the t_k's being the coefficients of V), and computing the sum Σ_k t_k ∂/∂t_k W^{(g)}_n with theorem 3.6, i.e. δV = V. □

So far, we have defined the W^{(g)}_n with n ≥
1. Now, we define F^{(g)} = W^{(g)}_0. Theorem 3.6 and the symmetry theorem 3.3 imply that:
δ_{x_2} W^{(g)}_1(x_1) = W^{(g)}_2(x_1,x_2) = W^{(g)}_2(x_2,x_1) = δ_{x_1} W^{(g)}_1(x_2)    (3.12)
Thus, the symmetry of W^{(g)}_2 implies that there exists a "free energy" F^{(g)} = W^{(g)}_0 such that:
W^{(g)}_1(x) = δ_x F^{(g)}    (3.13)
which is equivalent to saying that for any variation δV:
δF^{(g)} = − Σ_i Res_{x→s_i} W^{(g)}_1(x) δV(x)    (3.14)
Therefore, we know that there must exist some F^{(g)} = W^{(g)}_0 which satisfies theorem 3.6 for n = 0. Now, let us give a definition of F^{(g)}, inspired from theorem 3.8, which will be proved to satisfy theorem 3.6 for n = 0.

Definition 3.3
For g ≥ 2, we define F^{(g)} ≡ W^{(g)}_0 as a solution of the differential equation in ℏ:
(2 − 2g − ℏ ∂/∂ℏ) F^{(g)} = − Σ_i Res_{x→s_i} W^{(g)}_1(x) V(x)    (3.15)
more precisely:
F^{(g)} = ℏ^{2−2g} ∫^ℏ dℏ̃ ℏ̃^{2g−3} Σ_i Res_{x→s_i} V(x) W^{(g)}_1(x) |_{ℏ̃}    (3.16)
And the unstable cases (2−2g ≥ 0, i.e. g = 0, 1) are defined by:
F^{(0)} = ℏ^2 Σ_{i≠j} ln(s_i−s_j) − ℏ Σ_i V(s_i)    (3.17)
F^{(1)} = (1/2) ln det A + ln(∆(s)^2) + F^{(0)}/ℏ^2    (3.18)
where ∆(s) = Π_{i>j}(s_i−s_j) is the Vandermonde determinant of the s_i's.

Properties of the F^{(g)}'s: the definition of the F^{(g)}'s is made so that all the theorems for the W^{(g)}_n's hold for n = 0 as well. Proofs are given in appendices J, K, L. Explicit computations of the first few F^{(g)}'s are given in section 7 and appendix M.

4 Classical limit and ℏ expansion

In the ℏ → 0 limit, we expand the correlators and free energies in powers of ℏ. Write:
W^{(g)}_n(x_1,…,x_n) = Σ_k ℏ^k W^{(g,k)}_n(x_1,…,x_n),   F^{(g)} = Σ_k ℏ^k F^{(g,k)}    (4.1)

4.1 Classical limit

Here we consider the classical limit ℏ →
0. We noticed in section 2.5 that in that limit, the Riccati equation
Y^2 − 2ℏY′ = U = V′^2 − 2ℏV″ − P    (4.2)
where P(x) = 4ℏ Σ_i (V′(x) − V′(s_i))/(x−s_i), becomes an algebraic hyperelliptic equation:
Y_cl^2 = U(x) = V′(x)^2 − P(x)    (4.3)
i.e.
Y(x) →_{ℏ→0} Y_cl(x) = √( V′(x)^2 − P(x) )    (4.4)
Y_cl(x) is a multivalued function of x, and it should be seen as a meromorphic function on a 2-sheeted Riemann surface: there is a Riemann surface Σ (of equation 0 = E_cl(x,y) = y^2 − U(x)) such that the solutions of E_cl(x,y) = 0 are parametrized by two meromorphic functions on Σ:
E_cl(x,y) = 0  ⇔  ∃ z ∈ Σ :  x = x(z),  y = y(z)    (4.5)
The Riemann surface Σ has a certain topology, characterized by its genus ḡ (this genus ḡ has nothing to do with the index g of F^{(g)} or W^{(g)}_n). It has a (non-unique) symplectic basis of 2ḡ non-trivial cycles, A_i ∩ B_j = δ_{i,j}. There is a unique bilinear differential B_cl on Σ, called the Bergmann kernel, such that B_cl(z_1,z_2) has a double pole at z_1 → z_2 and no other pole, without residue, and normalized (in any local coordinate z) as:
B_cl(z_1,z_2) ∼_{z_1→z_2} dz_1 dz_2/(z_1−z_2)^2 + reg,   ∀ i = 1,…,ḡ :  ∮_{A_i} B_cl = 0    (4.6)
We define a primitive:
G_cl(z_1,z_2) = −2 ∫^{z_2} B_cl(z_1,z′)    (4.7)
which is a 3rd-kind differential in the variable z_1; it is called dE_{z_2}(z_1) in [13]. When ℏ = 0, the kernel K(z_1,z_2) satisfies the ℏ = 0 version of eq. (2.2), i.e.:
K_cl(z_1,z_2) = −G_cl(z_1,z_2)/Y_cl(z_2) = 2 ∫_c^{z_2} B_cl(z_1,z′) / Y_cl(z_2)    (4.8)
which coincides with the definition of the recursion kernel in [13].

4.2 WKB expansion

When ℏ is small but non-zero, we can WKB-expand ψ(x), i.e.:
ψ(x) ∼ ( e^{−(1/2ℏ) ∫^x Y_cl(x′) dx′} / √(Y_cl(x)) ) ( Σ_k ℏ^k ψ_k(x) )    (4.9)
i.e.
Y ∼ Y_cl + Σ_{k=1}^∞ ℏ^k Y_k    (4.10)
The expansion coefficients Y_k can easily be obtained recursively from the Riccati equation:
2 Y_cl Y_k = 2 Y′_{k−1} − Σ_{j=1}^{k−1} Y_j Y_{k−j}    (4.11)
For instance:
Y_1 = Y_cl′/Y_cl,   Y_2 = Y_1′/Y_cl − Y_1^2/(2Y_cl) = Y_cl″/Y_cl^2 − (3/2) Y_cl′^2/Y_cl^3,
etc.    (4.12)

4.3 ℏ expansion of correlators and free energies

The kernel K(x_0,x) can also be expanded:
K(x_0,x) = K_cl(x_0,x) + Σ_{k=1}^∞ ℏ^k K^{(k)}(x_0,x)    (4.13)
where K^{(0)} = K_cl is the kernel of [13]:
K_cl(x_0,x) = dE_x(x_0)/Y_cl(x)    (4.14)
This implies that the correlators W^{(g)}_n can also be expanded:
W^{(g)}_n(x_1,…,x_n) = Σ_{k=0}^∞ ℏ^k W^{(g,k)}_n(x_1,…,x_n)    (4.15)
where the W^{(g,k)}_n are obtained by the recursion:
W^{(g,k)}_{n+1}(x_0,J) = Σ_{l=0}^k Σ_i Res_{x→s_i} K^{(k−l)}(x_0,x) [ W^{(g−1,l)}_{n+2}(x,x,J) + Σ_{h=0}^g Σ_{j=0}^l Σ′_{I⊂J} W^{(h,j)}_{|I|+1}(x,I) W^{(g−h,l−j)}_{n−|I|+1}(x,J/I) ]    (4.16)
where J = {x_1,…,x_n}. Therefore, to leading order in ℏ, the W^{(g,0)}_n coincide with the W^{(g)}_n computed with K_cl only, and thus with the W^{(g)}_n of [13]. Moreover, the full ℏ expansion must coincide with the diagrammatic rules of [7].

5 Non-commutative algebraic geometry

We have seen that in the limit ℏ →
0, the correlation functions and the various functions we are considering are fundamental objects of algebraic geometry. For instance, B is the Bergmann kernel, and K is the recursion kernel of [13], which generates the symplectic invariants F_g and the correlators W^{(g)}_n attached to the spectral curve Y_cl(x). In this paper, for ℏ ≠ 0, we have defined deformations of those objects, which have almost the same properties as the classical ones, except that they are no longer algebraic functions. For instance we have:
• Spectral curve
The algebraic equation of the classical spectral curve is replaced by a linear differential equation:
0 = E(x,y) = Σ_{i,j} E_{i,j} x^i y^j   →   E(x,ℏ∂) ψ = Σ_{i,j} E_{i,j} x^i (ℏ∂)^j ψ = 0    (5.1)
In other words, the polynomial E(x,y) is replaced by a non-commutative polynomial with y = ℏ∂_x, i.e. [y,x] = ℏ. Here, our non-commutative spectral curve is:
E(x,y) = y^2 − U(x),   y = 2ℏ ∂_x  (cf. eq. (2.27))    (5.2)
Notice that it can be factorized as:
E(x,y) = (y − Y(x))(y + Y(x))    (5.3)
where Y(x) is a solution of Y^2 − 2ℏY′ = U.
• Bergmann kernel B(x_1,x_2)
The non-commutative Bergmann kernel B(x_1,x_2) is closely related to the inverse of the Hessian T, i.e. to A = T^{−1}:
B(x_1,x_2) = 1/(2(x_1−x_2)^2) + Σ_{i,j} A_{i,j} / ((x_1−s_i)^2 (x_2−s_j)^2)    (5.4)
A property of the classical Bergmann kernel B_cl(x_1,x_2) is that it computes derivatives, i.e. for any meromorphic function f(x) defined on the spectral curve we have:
df(x_1) = − Res_{x→poles of f} B_cl(x_1,x) f(x)    (5.5)
Here, this property is replaced by: for any function f(x) defined on the non-commutative spectral curve (i.e. with poles only at the s_i's), we have:
f′(x_1) = −2 Σ_i Res_{x→s_i} B(x_1,x) f(x) dx    (5.6)
The factor of 2 comes from the fact that the interpretation of x, and thus of derivatives with respect to x, is slightly different. In the classical case, the differentials are computed in terms of local variables, and x is not a local variable near branch points; a good local variable near a branch point a is √(x−a). In the non-commutative case, the role of the branch points seems to be played by the s_i's, and x is a good local variable near s_i.
• Rauch variational formula
In classical algebraic geometry, on an algebraic curve of equation E(x,y) = Σ_{i,j} E_{i,j} x^i y^j = 0, the Bergmann kernel depends only on the location of the branch points a_i. The branch points are the points where the tangent is vertical, i.e. dx(a_i) = 0, and their location is x_i = x(a_i). The Bergmann kernel is a function of the x_i's only, and the classical Rauch variational formula reads:
∂B_cl(z_1,z_2)/∂x_i = Res_{z→a_i} B_cl(z,z_1) B_cl(z,z_2) / dx(z)    (5.7)
Equivalently, we can parametrize the spectral curve as x(y) instead of y(x), and consider the branch points of y, i.e. dy(b_i) = 0, whose location is y_i = y(b_i); then:
∂B_cl(z_1,z_2)/∂y_i = Res_{z→b_i} B_cl(z,z_1) B_cl(z,z_2) / dy(z)    (5.8)
Here, in the non-commutative version, theorems 3.5 and 3.6 imply that under a variation of the spectral curve we have:
δB(x_1,x_2) = −(1/2) Σ_i Res_{x→s_i} B(x,x_1) B(x,x_2) δY(x) / Y′(x)    (5.9)
Consider the branch points b_i such that Y′(b_i) = 0, and define their location as Y_i = Y(b_i). By moving the integration contours we have:
δB(x_1,x_2) = (1/2) Σ_i Res_{x→b_i} B(x,x_1) B(x,x_2) δY(x)/Y′(x) dx = (1/2) Σ_i δY_i Res_{x→b_i} B(x,x_1) B(x,x_2)/Y′(x) dx    (5.10)
i.e.:
∂B(x_1,x_2)/∂Y_i = (1/2) Res_{x→b_i} B(x,x_1) B(x,x_2)/Y′(x) dx    (5.11)
which is thus the quantum version of the Rauch variational formula eq. (5.8). Those properties can be seen as the beginning of a dictionary translating the deformations of classical algebraic geometry into non-commutative algebraic geometry.

Conjecture about the symplectic invariants
The F_g's of [13] are the symplectic invariants of the classical spectral curve, which means that they are invariant under any canonical change of the spectral curve which conserves the symplectic form dx ∧ dy. For instance they are invariant under x → y, y → −x. Here, we conjecture that one may define some non-commutative F^{(g)}'s which are invariant under any canonical transformation which conserves the commutator [y,x] = ℏ. This duality should also correspond to the expected duality β → 1/β in matrix models, cf [17, 6]. However, to check the validity of this conjecture, one needs to extend our work to differential operators of any order in y, and not only order 2. We plan to do this in a forthcoming work.

6 Application: non-hermitian matrix models
The initial motivation for the work of [13], as well as for the present work, was random matrix models. The classical case corresponds to hermitian matrix models, and here we show that ℏ ≠ 0 corresponds, in some sense, to non-hermitian matrix models [5, 6, 9]. In this section, we show that non-hermitian matrix models satisfy the loop equation eq. (C.1) of theorem 3.2. We define the matrix integral over E_{m,β} = the set of m × m matrices of Wigner type 2β (E_{m,1/2} = real symmetric matrices, E_{m,1} = hermitian matrices, E_{m,2} = real quaternion self-dual matrices; see [16]):
Z = ∫_{E_{m,β}} dM e^{−N√β Tr V(M)}    (6.1)
where N is some arbitrary constant, not necessarily related to the matrix size m. It is more convenient to rewrite Z in terms of the eigenvalues of M (see [16]):
Z = ∫_{C^m} dλ_1 … dλ_m Π_{i>j} (λ_j−λ_i)^{2β} Π_i e^{−N√β V(λ_i)}    (6.2)
This last expression is well defined for any β, and not only 1/2, 1, 2, and for any contour of integration C on which the integral is convergent. We also define the correlators:
W̄_n(x_1,…,x_n) = ⟨ Tr 1/(x_1−M) … Tr 1/(x_n−M) ⟩_c = (N√β)^{−n} ∂/∂V(x_1) … ∂/∂V(x_n) ln Z    (6.3)
i.e. in terms of eigenvalues:
W̄_n(x_1,…,x_n) = ⟨ Σ_{i_1} 1/(x_1−λ_{i_1}) … Σ_{i_n} 1/(x_n−λ_{i_n}) ⟩_c    (6.4)
In order to match the notations of section 3, we prefer to shift W̄_2 by a second-order pole, and we define:
W_n(x_1,…,x_n) = W̄_n(x_1,…,x_n) + δ_{n,2}/(2(x_1−x_2)^2)    (6.5)
We are interested in a case where Z has a large N expansion of the form:
ln Z ∼ Σ_{g=0}^∞ N^{2−2g} F_g    (6.6)
and for the correlation functions we assume:
W_n(x_1,…,x_n) = (1/β^{n/2}) Σ_{g=0}^∞ N^{2−2g−n} W^{(g)}_n(x_1,…,x_n)    (6.7)

6.1 Loop equations

The loop equations can be obtained by integration by parts, or equivalently they follow from the invariance of an integral under a change of variable. By considering the infinitesimal change of variable:
λ_i → λ_i + ε/(x−λ_i) + O(ε^2)    (6.8)
we obtain:
N√β ( V′(x) W_{n+1}(x,x_1,…,x_n) − P_{n+1}(x;x_1,…,x_n) )
= β Σ_{J⊂L} W_{1+|J|}(x,J) W_{1+n−|J|}(x,L/J) + β W_{n+2}(x,x,x_1,…,x_n) − (1−β) ∂/∂x W_{n+1}(x,x_1,…,x_n)
+ Σ_{j=1}^n ∂/∂x_j ( (W_n(x,L/{x_j}) − W_n(x_j,L/{x_j})) / (x−x_j) )    (6.9)
where L = {x_1,…,x_n}, and where P_{n+1}(x;x_1,…,x_n) is a polynomial in its first variable x, of degree δ_{n,0} + deg V′ −
2. If we expand this equation into powers of N using eq. (6.7), we have ∀ n, g:
V′(x) W^{(g)}_{n+1}(x,x_1,…,x_n) − P^{(g)}_{n+1}(x;x_1,…,x_n)
= Σ_{g′=0}^g Σ_{J⊂L} W^{(g′)}_{1+|J|}(x,J) W^{(g−g′)}_{1+n−|J|}(x,L/J) + W^{(g−1)}_{n+2}(x,x,x_1,…,x_n)
+ ℏ ∂/∂x W^{(g)}_{n+1}(x,x_1,…,x_n) + Σ_{j=1}^n ∂/∂x_j ( (W^{(g)}_n(x,L/{x_j}) − W^{(g)}_n(x_j,L/{x_j})) / (x−x_j) )    (6.10)
where
ℏ = (√β − 1/√β)/N    (6.11)
Those loop equations coincide with the loop equations eq. (3.5) of theorem 3.2. Moreover we have:
W^{(g)}_n = ∂W^{(g)}_{n−1}/∂V    (6.12)
and near x → ∞:
√β W_1(x) ∼ (m/x) [ Nℏ − Σ_{g=1}^∞ (−1)^g (2g−2)!/(g!(g−1)!) (Nℏ)^{1−2g} ]    (6.13)
i.e.
W^{(0)}_1(x) ∼ mℏ/x + O(1/x^2),   W^{(g)}_1(x) ∼ −(m/x)(−1)^g ℏ^{1−2g} (2g−2)!/(g!(g−1)!) + O(1/x^2)  (g ≥ 1)    (6.14)
One should notice that the loop equations are independent of the contour C of integration of the eigenvalues; the contour C is in fact encoded in the polynomial P_{n+1}(x;x_1,…,x_n). To order g = 0, for the 1-point function, we have:
V′(x) W^{(0)}_1(x) − P^{(0)}_1(x) = W^{(0)}_1(x)^2 + ℏ ∂/∂x W^{(0)}_1(x)    (6.15)
which is the same as the Riccati equation eq. (2.21). As we said above, the contour C is in fact encoded in the polynomial P^{(0)}_1(x). From now on, we choose a contour C, i.e. a polynomial P^{(0)}_1(x), such that the solution of the Riccati equation is rational:
W^{(0)}_1(x) = ℏ Σ_{i=1}^m 1/(x−s_i)    (6.16)
It also has the correct behaviour at ∞: W^{(0)}_1(x) ∼ mℏ/x. This corresponds to a certain contour C which we do not determine here. Since W^{(0)}_1(x) = ω(x) satisfies the Riccati equation, i.e. the Bethe ansatz, the kernel K exists, and we can define the functions K(x_1,x), G(x_1,x) and B(x_1,x_2). Then, from eq. (6.12), we see that every W^{(g)}_n is a rational fraction of x_1, with poles only at the s_i's. In particular, the Cauchy theorem implies:
W^{(g)}_{n+1}(x_0,x_1,…,x_n) = Res_{x→x_0} G(x_0,x) W^{(g)}_{n+1}(x,x_1,…
Since both $G(x_0,x)$ and $W^{(g)}_{n+1}(x,x_1,\dots,x_n)$ are rational fractions which vanish sufficiently fast at $\infty$, we may move the integration contour to the other poles of the integrand, namely:
$$W^{(g)}_{n+1}(x_0,x_1,\dots,x_n) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,G(x_0,x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,W^{(g)}_{n+1}(x,x_1,\dots,x_n)\,\big(2\omega(x)-V'(x)-\hbar\,\partial_x\big)K(x_0,x) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\big(2\omega(x)-V'(x)+\hbar\,\partial_x\big)W^{(g)}_{n+1}(x,x_1,\dots,x_n)$$
Then we use the loop equation (6.10): $P^{(g)}_{n+1}$ and $\partial_{x_j}\frac{W^{(g)}_n(x_j,L\setminus\{x_j\})}{x-x_j}$ do not have poles at the $s_i$'s, so they do not contribute. We thus get:
$$W^{(g)}_{n+1}(x_0,x_1,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\Big(W^{(g-1)}_{n+2}(x,x,x_1,\dots,x_n) + \sum_{g'=0}^{g}\sum_{J\subset L} W^{(g')}_{1+|J|}(x,J)\,W^{(g-g')}_{1+n-|J|}(x,L\setminus J)\Big) \qquad (6.19)$$
i.e. we find the correlators of def. 3.1. Special care is needed for $W^{(0)}_2$. We have:
$$W^{(0)}_2(x_0,x_1) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\big(2\omega(x)-V'(x)+\hbar\,\partial_x\big)W^{(0)}_2(x,x_1) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\frac{\omega(x)}{(x-x_1)^2} = \hbar\sum_i \frac{K(x_0,s_i)}{(s_i-x_1)^2} = \sum_{i,j}\frac{A_{i,j}}{(s_i-x_1)^2\,(s_j-x_0)^2} \qquad (6.20)$$

7 The Gaudin model

The Gaudin model's Bethe ansatz is obtained for the potential:
$$V'_{\mathrm{Gaudin}}(x) = x + \sum_{i=1}^{n}\frac{S_i}{x-\alpha_i} \qquad (7.1)$$
i.e. it corresponds to a Gaussian matrix model with sources:
$$Z = \int_{E_{m,\beta}} dM\ e^{-\frac{N\sqrt{\beta}}{2}\,\mathrm{Tr}\,M^2}\ \prod_i \det(\alpha_i-M)^{-N\sqrt{\beta}\,S_i} \qquad (7.2)$$
with $\hbar = (\sqrt{\beta}-1/\sqrt{\beta})/N$. $Z$ can also be written in eigenvalues:
$$Z = \int d\lambda_1\cdots d\lambda_m\ \prod_{i=1}^{m} e^{-\frac{N\sqrt{\beta}}{2}\lambda_i^2}\ \prod_{i=1}^{m}\prod_{j=1}^{n}(\alpha_j-\lambda_i)^{-N\sqrt{\beta}\,S_j}\ \prod_{i>j}(\lambda_i-\lambda_j)^{2\beta} \qquad (7.3)$$
7.1 Example

Consider:
$$V'(x) = x - \frac{s^2}{x}\,,\qquad V(x) = \frac{x^2}{2} - s^2\,\ln x \qquad (7.4)$$
With only one root ($m=1$), the solution of the Bethe equation $V'(s_1)=0$ is $s_1=s$. Thus we have:
$$\omega(x) = \frac{\hbar}{x-s} \qquad (7.5)$$
$$B(x_1,x_2) = \frac{1}{2\,(x_1-x_2)^2} + \frac{\hbar}{2\,(x_1-s)^2\,(x_2-s)^2} \qquad (7.6)$$
We find:
$$W^{(0)}_3(x_1,x_2,x_3) = \frac{\hbar}{2\,(x_1-s)^2(x_2-s)^2(x_3-s)^2}\,\Big(\frac{1}{x_1-s}+\frac{1}{x_2-s}+\frac{1}{x_3-s}+\frac{1}{2s}\Big) \qquad (7.7)$$
$$W^{(1)}_1(x) = \frac{1}{\hbar\,(x-s)} + \frac{1}{4s\,(x-s)^2} + \frac{1}{2\,(x-s)^3} \qquad (7.8)$$
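A small sympy sketch of this example: given the form of wave function used throughout, $\psi(x)=\prod_i(x-s_i)\,e^{-V(x)/2\hbar}$, one has $\omega(x)=\hbar\,\psi'(x)/\psi(x)+V'(x)/2$, which for the potential (7.4) with the single root $s_1=s$ reproduces eq. (7.5). (The check below assumes this relation; it is a sketch, not the paper's computation.)

```python
import sympy as sp

x, s, hb = sp.symbols('x s hbar', positive=True)
V = x**2/2 - s**2*sp.log(x)          # eq. (7.4); V'(x) = x - s^2/x, Bethe root s_1 = s
assert sp.simplify(sp.diff(V, x).subs(x, s)) == 0   # Bethe equation for m = 1: V'(s) = 0

psi = (x - s)*sp.exp(-V/(2*hb))      # "polynomial" wave function with one zero
omega = hb*sp.diff(psi, x)/psi + sp.diff(V, x)/2
print(sp.simplify(omega))            # -> hbar/(x - s), i.e. eq. (7.5)
```

The same $\omega$ then solves the Riccati equation (6.15) with a $P^{(0)}_1$ whose only pole sits at the pole of $V'$, i.e. at $x=0$.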
For the free energies we have:
$$F^{(0)} = \frac{\hbar\,s^2}{2}\,(2\ln s - 1) \qquad (7.9)$$
$$F^{(1)} = \frac{1}{2}\ln(\hbar) + \frac{F^{(0)}}{\hbar^2} \qquad (7.10)$$
$$F^{(2)} = -\frac{1}{12\,\hbar\,s^2} - \frac{F^{(0)}}{\hbar^4} \qquad (7.11)$$
$$F^{(3)} = \frac{1}{12\,\hbar^3\,s^2} + \frac{2\,F^{(0)}}{\hbar^6} \qquad (7.12)$$
and
$$Z = e^{\sum_g N^{2-2g}F^{(g)}} = e^{-N\sqrt{\beta}\,V(s)}\,\sqrt{2\pi\hbar}\,\Big(1 - \frac{1}{12\,s^2\,N\sqrt{\beta}} + \dots\Big) \qquad (7.13)$$
which is indeed the beginning of the saddle-point expansion of:
$$Z = \int dx\ e^{-N\sqrt{\beta}\,V(x)} \qquad (7.14)$$

8 Conclusion

In this article, we have defined a special case of non-commutative deformation of the symplectic invariants of [13]. Many of the fundamental properties of [13] are conserved, or only slightly modified. The main difference is that the recursion kernel, instead of being an algebraic function, is given by the solution of a differential equation; otherwise the recursion is the same.

The main drawback of our definition is that it concerns only a very restrictive subset of possible non-commutative spectral curves. Namely, we considered here only non-commutative polynomials $E(x,y)=\sum_{i,j}E_{i,j}x^iy^j$ with $y=\hbar\partial_x$, of degree 2 in $y$, and such that the differential equation $E(x,\hbar\partial_x).\psi=0$ has a "polynomial" solution of the form $\psi(x)=\prod_{i=1}^{m}(x-s_i)\,e^{-V(x)/2\hbar}$.

It should be possible to extend our definitions to other "non-polynomial" solutions $\psi$ (with an infinite number of zeroes, $m=\infty$, for instance), and/or to higher degrees in $y$. In other words, what we have so far is only a glimpse of a more general structure yet to be discovered.

For example, it is not yet clear how our definitions are related to matrix integrals. We have said that the integration contour for the eigenvalues should be chosen so that the solution of the Schrödinger equation is polynomial of degree $m$; however, it is not known how to find such integration contours explicitly. Conversely, the usual matrix integrals with eigenvalues on the real axis probably do not correspond to polynomial solutions of the Schrödinger equation. Similarly, and for the same reason, it is not clear what the relationship is between our definitions and the enumeration of unoriented ribbon graphs.
The solution of the Schrödinger equation relevant for ribbon graphs should be chosen such that all the $W^{(g,k)}_n$'s are power series in $t$, and it is not known which integration contour, and which solution of the Schrödinger equation, it corresponds to. Therefore it seems necessary to extend our definitions to arbitrary solutions, i.e. to arbitrary integration contours for the matrix integrals. A possibility could be to obtain non-polynomial solutions as limits of polynomial ones.

The extension to higher degree in $y$ can be obtained from multi-matrix integrals, and this extension seems rather easy, again for polynomial solutions.

Finally, like the symplectic invariants of [13], we expect those "to be defined" non-commutative symplectic invariants to play a role in several applications to enumerative geometry, and to topological string theory as in [3]. In other words, we expect our $F^{(g)}$'s to be generating functions for intersection numbers in some non-commutative moduli space of unoriented Riemann surfaces, whatever that may mean...

Acknowledgments
We would like to thank O. Babelon, M. Bergère, M. Bertola, L. Chekhov, R. Dijkgraaf, J. Harnad and N. Orantin for useful and fruitful discussions on this subject. This work is partly supported by the Enigma European network MRT-CT-2004-5652, by the ANR project Géométrie et intégrabilité en physique mathématique ANR-05-BLAN-0029-01, by the Enrage European network MRTN-CT-2004-005616, by the European Science Foundation through the Misgam program, by the French and Japanese governments through PAI Sakura, and by the Quebec government with the FQRNT.
A Appendix: Expansion of K

Since we have to compute residues at the $s_i$'s, we need to compute the Taylor expansion of $K(x_0,x)$ when $x\to s_i$:
$$K(x_0,x) = \sum_k (x-s_i)^k\,K_{i,k}(x_0) \qquad (A.1)$$
For instance we find:
$$K_{i,0}(x_0) = \frac{1}{\hbar}\sum_j \frac{A_{i,j}}{(x_0-s_j)^2} \qquad (A.2)$$
$$\hbar\,K_{i,1}(x_0) = -\frac{1}{(x_0-s_i)^2} - \sum_{a\neq i}\sum_j \frac{A_{a,j}}{(s_a-s_i)\,(x_0-s_j)^2} \qquad (A.3)$$
$$\hbar\,K_{i,3}(x_0) = -\hbar\Big(\sum_{a\neq i}\frac{1}{(s_a-s_i)^2} + \frac{V''(s_i)}{\hbar}\Big)K_{i,0}(x_0) - \hbar\Big(\sum_{a\neq i}\frac{1}{s_a-s_i} + \frac{V'''(s_i)}{2\hbar}\Big)K_{i,1}(x_0) + \frac{1}{(x_0-s_i)^3} + 2\sum_{a\neq i}\sum_j \frac{A_{a,j}}{(s_a-s_i)^2\,(x_0-s_j)^2} \qquad (A.4)$$
$$K_{i,2}(x_0) = 0 \qquad (A.5)$$
Then we have the recursion, for $k\geq 2$:
$$\hbar\Big((1-k)\,K_{i,k+1} - \sum_{a\neq i}\sum_{l=0}^{k}\frac{K_{i,k-l}}{(s_a-s_i)^{l+1}} - \frac{1}{\hbar}\sum_{l=0}^{k}\frac{V^{(l+1)}(s_i)}{l!}\,K_{i,k-l}\Big) = -\frac{1}{(x_0-s_i)^{k+1}} - \sum_{a\neq i}\sum_j \frac{A_{a,j}}{(s_a-s_i)^{k+1}\,(x_0-s_j)^2} \qquad (A.6)$$
This proves that each $K_{i,k}(x_0)$ is a rational fraction of $x_0$, with poles at the $s_j$'s.

A.1 Rational fraction of $x_0$

Thus we write:
$$K_{i,k}(x_0) = \sum_{j,k'}\frac{K_{i,k;j,k'}}{(x_0-s_j)^{k'}} \qquad (A.7)$$
For instance we have:
$$K_{i,0;j,k'} = \frac{A_{i,j}}{\hbar}\,\delta_{k',2} \qquad (A.8)$$
$$\hbar\,K_{i,1;j,k'} = -\delta_{k',2}\,\delta_{i,j} - \delta_{k',2}\sum_{a\neq i}\frac{A_{a,j}}{s_a-s_i} \qquad (A.9)$$
For higher $k$ we have the recursion:
$$\hbar\Big((1-k)\,K_{i,k+1;j,k'} - \sum_{a\neq i}\sum_{l=1}^{k}\frac{K_{i,k-l;j,k'}}{(s_a-s_i)^{l+1}} - \frac{1}{\hbar}\sum_{l=1}^{k}\frac{V^{(l+1)}(s_i)}{l!}\,K_{i,k-l;j,k'}\Big) = -\delta_{i,j}\,\delta_{k',k+1} - \delta_{k',2}\sum_{a\neq i}\frac{A_{a,j}}{(s_a-s_i)^{k+1}} \qquad (A.10)$$
In particular, it shows that if $k'>2$, then $K_{i,k;j,k'}$ is proportional to $\delta_{i,j}$.

A.2 Generating functions
We introduce the generating functions:
$$R_{i;j,k'}(x) = \sum_k K_{i,k;j,k'}\,(x-s_i)^k \qquad (A.11)$$
We have:
$$\hbar\Big(\frac{\psi'(x)}{\psi(x)} - \partial_x\Big)\,R_{i;j,k'}(x) = -\frac{\delta_{i,j}}{(x-s_i)^{k'-1}} + 2\,\delta_{k',2}\sum_a \frac{A_{a,j}}{x-s_a} \qquad (A.12)$$
i.e.
$$-\hbar\,\psi(x)\,\partial_x\Big(\frac{R_{i;j,k'}(x)}{\psi(x)}\Big) = -\frac{\delta_{i,j}}{(x-s_i)^{k'-1}} + \delta_{k',1}\,c_j + 2\,\delta_{k',2}\sum_a \frac{A_{a,j}}{x-s_a} \qquad (A.13)$$
In particular, with $k'=1$ we find:
$$R_{i;j,1}(x) = \frac{\delta_{i,j}}{\hbar}\,\frac{\phi(x)}{\psi(x)} \qquad (A.14)$$
where
$$\phi(x) = \psi(x)\int^x \frac{dx'}{\psi^2(x')}\,,\qquad \phi'(x)\,\psi(x) - \psi'(x)\,\phi(x) = 1 \qquad (A.15)$$

B Appendix: Proof of theorem 3.1
Theorem 3.1
Each $W^{(g)}_n$ is a rational function of all its arguments. If $2g+n-2>0$, it has poles only at the $s_i$'s. In particular it has no poles at the $\alpha_i$'s, and it vanishes as $O(1/x_i)$ when $x_i\to\infty$.

proof: It is easy to check that $W^{(0)}_1$ and $W^{(0)}_2$ satisfy the theorem. We will now make a recursion over $-\chi = 2g+n-2$ to prove the result for every $(n,g)$. We write:
$$W^{(g)}_{n+1}(x_0,x_1,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,U^{(g)}_{n+1}(x,x_1,\dots,x_n) \qquad (B.1)$$
where $J=\{x_1,\dots,x_n\}$, and
$$U^{(g)}_{n+1}(x,J) = W^{(g-1)}_{n+2}(x,x,J) + \sum_{h=0}^{g}\sum_{I\subset J} W^{(h)}_{|I|+1}(x,I)\,W^{(g-h)}_{n-|I|+1}(x,J\setminus I) \qquad (B.2)$$
First, the recursion hypothesis clearly implies that $U^{(g)}_{n+1}(x,x_1,\dots,x_n)$ is a rational fraction in all its variables $x,x_1,\dots,x_n$. Then we Taylor expand $K(x_0,x)$ as in eq. (A.1) or eq. (A.7):
$$W^{(g)}_{n+1}(x_0,x_1,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,U^{(g)}_{n+1}(x,x_1,\dots,x_n) = \sum_i\sum_k K_{i,k}(x_0)\,\mathop{\mathrm{Res}}_{x\to s_i}\,(x-s_i)^k\,U^{(g)}_{n+1}(x,x_1,\dots,x_n) \qquad (B.3)$$
$U^{(g)}_{n+1}(x,x_1,\dots,x_n)$ is a rational fraction of $x$, so the sum over $k$ is finite, and therefore $W^{(g)}_{n+1}(x_0,x_1,\dots,x_n)$ is a finite sum of rational fractions of $x_0$ with poles at the $s_j$'s; therefore it is a rational fraction of $x_0$ with poles at the $s_j$'s.

It is also clear that $W^{(g)}_{n+1}(x_0,x_1,\dots,x_n)$ is a rational fraction of the other variables $x_1,\dots,x_n$. The poles in those variables are necessarily at the $s_j$'s, because as long as the residues can be computed, $W^{(g)}_{n+1}(x_0,x_1,\dots,x_n)$ is finite. The residues can fail to exist only when an integration contour gets pinched, and since the integration contours are small circles around the $s_i$'s, the only singularities may occur at the $s_i$'s. It remains to prove that each $W^{(g)}_n$ behaves like $O(1/x_i)$ at $\infty$.
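The contour-moving arguments used throughout these appendices (e.g. below eq. (6.17), and in eq. (C.3)) rest on an elementary fact: for a rational fraction vanishing like $O(1/x^2)$ at $\infty$, the sum of residues over all its poles is zero, so a residue at one point can be traded for minus the residues at all the other poles. A minimal sympy check with made-up poles:

```python
import sympy as sp

x = sp.symbols('x')
# A rational fraction with poles at s_1 = 1, s_2 = 2 and O(1/x^3) decay at infinity:
f = 1/((x - 1)**2*(x - 2))
# Sum of residues over ALL finite poles vanishes (no residue at infinity):
total = sum(sp.residue(f, x, p) for p in [1, 2])
print(total)  # -> 0
```

Here the residue at the double pole $x=1$ is $-1$ and the residue at $x=2$ is $+1$, so they cancel, exactly as needed to move contours from $x_0$ to the $s_i$'s.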
The proof follows the same lines: each $K_{i,k}(x_0)$ behaves like $O(1/x_0)$, and by an easy recursion the result holds for all the other variables. $\square$

C Appendix: Proof of theorem 3.2
In this subsection we prove theorem 3.2, i.e. that all the $W^{(g)}_n$'s satisfy the loop equation.

Theorem 3.2
The $W^{(g)}_n$'s satisfy the loop equation, i.e. the following quantity $P^{(g)}_{n+1}(x;x_1,\dots,x_n)$:
$$P^{(g)}_{n+1}(x;x_1,\dots,x_n) = -Y(x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) + \hbar\,\partial_x W^{(g)}_{n+1}(x,x_1,\dots,x_n) + \sum_{h=0}^{g}\sum_{I\subset J} W^{(h)}_{|I|+1}(x,x_I)\,W^{(g-h)}_{n-|I|+1}(x,J\setminus I) + W^{(g-1)}_{n+2}(x,x,J) + \sum_{j=1}^{n}\partial_{x_j}\,\frac{W^{(g)}_n(x,J\setminus\{j\}) - W^{(g)}_n(x_j,J\setminus\{j\})}{x-x_j} \qquad (C.1)$$
is a rational fraction of $x$ (possibly a polynomial), with no pole at $x=s_i$. The only possible poles of $P^{(g)}_{n+1}(x;x_1,\dots,x_n)$ are at the poles of $V'(x)$, and their degree is less than the degree of $V'$.

proof: First, from theorem 3.1, we easily see that $P^{(g)}_{n+1}(x;x_1,\dots,x_n)$ is indeed a rational function of $x$. Moreover, it clearly has no pole at coinciding points $x=x_j$. Then we write Cauchy's theorem for $W^{(g)}_{n+1}$:
$$W^{(g)}_{n+1}(x_0,\dots,x_n) = \mathop{\mathrm{Res}}_{x\to x_0}\,\frac{1}{x-x_0}\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) = \mathop{\mathrm{Res}}_{x\to x_0}\,G(x_0,x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) \qquad (C.2)$$
and using again theorem 3.1, i.e. that $W^{(g)}_{n+1}$ has poles only at the $s_i$'s, and that both $W^{(g)}_{n+1}$ and $G(x_0,x)$ behave as $O(1/x)$ for large $x$, we may move the integration contour:
$$W^{(g)}_{n+1}(x_0,\dots,x_n) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,G(x_0,x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) \qquad (C.3)$$
Then we use the definition of $K$, and integrate by parts:
$$W^{(g)}_{n+1}(x_0,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(Y(x)\,K(x_0,x) + \hbar\,K'(x_0,x)\big)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\big(Y(x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) - \hbar\,\partial_x W^{(g)}_{n+1}(x,x_1,\dots,x_n)\big) \qquad (C.4)$$
On the other hand, by definition of the correlators:
$$W^{(g)}_{n+1}(x_0,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\Big(\sum_{h=0}^{g}\sum_{I\subset J} W^{(h)}_{|I|+1}(x,I)\,W^{(g-h)}_{n-|I|+1}(x,J\setminus I) + W^{(g-1)}_{n+2}(x,x,J)\Big) \qquad (C.5)$$
We may add the terms containing $W^{(g)}_n(x_j,J\setminus\{j\})$ to the RHS, since they have no pole at $x=s_i$ and therefore do not change the residues, i.e.:
$$W^{(g)}_{n+1}(x_0,\dots,x_n) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\Big(\sum_{h=0}^{g}\sum_{I\subset J} W^{(h)}_{|I|+1}(x,I)\,W^{(g-h)}_{n-|I|+1}(x,J\setminus I) + W^{(g-1)}_{n+2}(x,x,J) + \sum_{j=1}^{n}\partial_{x_j}\,\frac{W^{(g)}_n(x,J\setminus\{j\}) - W^{(g)}_n(x_j,J\setminus\{j\})}{x-x_j}\Big) \qquad (C.6)$$
Subtracting this from eq. (C.4), we get:
$$0 = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\Big(-Y(x)\,W^{(g)}_{n+1}(x,x_1,\dots,x_n) + \hbar\,\partial_x W^{(g)}_{n+1}(x,x_1,\dots,x_n) + \sum_{h=0}^{g}\sum_{I\subset J} W^{(h)}_{|I|+1}(x,I)\,W^{(g-h)}_{n-|I|+1}(x,J\setminus I) + W^{(g-1)}_{n+2}(x,x,J) + \sum_{j=1}^{n}\partial_{x_j}\,\frac{W^{(g)}_n(x,J\setminus\{j\}) - W^{(g)}_n(x_j,J\setminus\{j\})}{x-x_j}\Big) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,P^{(g)}_{n+1}(x;x_1,\dots,x_n) = \sum_i\sum_k K_{i,k}(x_0)\,\mathop{\mathrm{Res}}_{x\to s_i}\,(x-s_i)^k\,P^{(g)}_{n+1}(x;x_1,\dots,x_n) \qquad (C.7)$$
This holds for all $x_0$. The $K_{i,k}(x_0)$'s are linearly independent functions of $x_0$ (each is a rational fraction with a pole of degree $k+1$ at $x_0=s_i$), and thus we must have:
$$\forall\,k,i:\qquad \mathop{\mathrm{Res}}_{x\to s_i}\,(x-s_i)^k\,P^{(g)}_{n+1}(x;x_1,\dots,x_n) = 0 \qquad (C.8)$$
This means that $P^{(g)}_{n+1}$ has no pole at $x=s_i$. One easily sees that $P^{(g)}_{n+1}(x;x_1,\dots,x_n)$ is a rational fraction of $x$, and its poles are at most those of $Y(x)$, i.e. at the poles of $V'(x)$. $\square$

D Appendix: Proof of theorem 3.3
Theorem 3.3
Each $W^{(g)}_n$ is a symmetric function of all its arguments.

proof: The special case of $W^{(0)}_3$ is proved in appendix F below. It is obvious from the definition that $W^{(g)}_{n+1}(x_0,x_1,\dots,x_n)$ is symmetric in $x_1,\dots,x_n$, and therefore we need only show that (for $n\geq 1$):
$$W^{(g)}_{n+1}(x_0,x_1,J) - W^{(g)}_{n+1}(x_1,x_0,J) = 0 \qquad (D.1)$$
where $J=\{x_2,\dots,x_n\}$. We prove it by recursion on $-\chi = 2g+n-2$. Assume that every $W^{(h)}_k$ with $2h+k-2 < 2g+n-1$ is symmetric. We have:
$$W^{(g)}_{n+1}(x_0,x_1,J) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_0,x)\,\Big(W^{(g-1)}_{n+2}(x,x,x_1,J) + 2\,B(x,x_1)\,W^{(g)}_n(x,J) + 2\sum_{h=0}^{g}{}'\sum_{I\subset J} W^{(h)}_{2+|I|}(x,x_1,I)\,W^{(g-h)}_{n-|I|}(x,J\setminus I)\Big) \qquad (D.2)$$
where $\sum'$ means that we exclude the terms $(I=\emptyset,h=0)$ and $(I=J,h=g)$. Notice also that the second-order-pole shift of eq. (6.5) is irrelevant in $W^{(g-1)}_{n+2}$ because $n\geq 1$.
1. Then, using the recursion hypothesis, wehave: W ( g ) n +1 ( x , x , J )= 2 X i Res x → s i K ( x , x ) B ( x, x ) W ( g ) n ( x, J )+ X i,j Res x → s i Res x ′ → s j K ( x , x ) K ( x , x ′ ) (cid:16) W ( g − n +3 ( x, x, x ′ , x ′ , J )+2 X h ′ X I W ( h )2+ | I | ( x ′ , x, I ) W ( g − − h )1+ n −| I | ( x ′ , x, J/I )+2 X h ′ X I W ( h )3+ | I | ( x ′ , x, x, I ) W ( g − − h ) n −| I | ( x ′ , J/I )+2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) h W ( h − | I | ( x, x ′ , x ′ , I )292 X h ′ ′ X I ′ ⊂ I W ( h ′ )2+ | I ′ | ( x ′ , x, I ′ ) W ( h − h ′ )1+ | I |−| I ′ | ( x ′ , I/I ′ ) i (cid:17) ( D. W ( g ) n +1 ( x , x , J ), we get the same expression, with the order ofintegrations exchanged, i.e. we have to integrate x ′ before integrating x . Notice, bymoving the integration contours, that:Res x → s i Res x ′ → s j − Res x ′ → s j Res x → s i = − δ i,j Res x → s i Res x ′ → x (D.4)Moreover, the only terms which have a pole at x = x ′ are those containing B ( x, x ′ ).Therefore: W ( g ) n +1 ( x , x , J ) − W ( g ) n +1 ( x , x , J )= 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J ) − X i Res x → s i Res x ′ → x K ( x , x ) K ( x , x ′ ) B ( x, x ′ ) (cid:16) W ( g − n ( x ′ , x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x ′ , I ) (cid:17) ( D. 
x ′ → x can be computed: W ( g ) n +1 ( x , x , J ) − W ( g ) n +1 ( x , x , J )= 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J ) − X i Res x → s i K ( x , x ) ∂∂x ′ (cid:16) K ( x , x ′ ) (cid:16) W ( g − n ( x ′ , x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x ′ , I ) (cid:17) (cid:17) x ′ = x = 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J ) − X i Res x → s i K ( x , x ) K ′ ( x , x ) (cid:16) W ( g − n ( x, x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x, I ) (cid:17) − X i Res x → s i K ( x , x ) K ( x , x ) ∂∂x ′ (cid:16) W ( g − n ( x ′ , x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x ′ , I ) (cid:17) x ′ = x = 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J )30 X i Res x → s i K ( x , x ) K ′ ( x , x ) (cid:16) W ( g − n ( x, x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x, I ) (cid:17) − X i Res x → s i K ( x , x ) K ( x , x ) ∂∂x (cid:16) W ( g − n ( x, x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x, I ) (cid:17) ( D. W ( g ) n +1 ( x , x , J ) − W ( g ) n +1 ( x , x , J )= 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J )+ 12 X i Res x → s i (cid:16) K ′ ( x , x ) K ( x , x ) − K ( x , x ) K ′ ( x , x ) (cid:17) (cid:16) W ( g − n ( x, x, J ) + 2 X h ′ X I ∈ J W ( g − h ) n −| I | ( x, J/I ) W ( h )1+ | I | ( x, I ) (cid:17) ( D. W ( g ) n +1 ( x , x , J ) − W ( g ) n +1 ( x , x , J )= 2 X i Res x → s i ( K ( x , x ) B ( x, x ) − K ( x , x ) B ( x, x )) W ( g ) n ( x, J )+ X i Res x → s i (cid:16) K ′ ( x , x ) K ( x , x ) − K ( x , x ) K ′ ( x , x ) (cid:17) (cid:16) P ( g ) n ( x, J )+( Y ( x ) − ~ ∂ x ) W ( g ) n ( x, J ) + X j ∂ x j (cid:16) W ( g ) n − ( x j , J/ { x j } ) x − x j (cid:17)(cid:17) ( D. 
Since $P^{(g)}_n(x;J)$ and $W^{(g)}_{n-1}(x_j,J\setminus\{x_j\})$ have no poles at the $s_i$'s, we have:
$$W^{(g)}_{n+1}(x_0,x_1,J) - W^{(g)}_{n+1}(x_1,x_0,J) = 2\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(K_0\,B(x,x_1) - K_1\,B(x,x_0)\big)\,W^{(g)}_n(x,J) + \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(K_0'K_1 - K_0K_1'\big)\,\big(Y(x)-\hbar\,\partial_x\big)W^{(g)}_n(x,J) \qquad (D.9)$$
where $K_a = K(x_a,x)$, $G_a = G(x_a,x)$, and primes denote $\partial_x$. Now we use
$$K_0'K_1 - K_0K_1' = -\frac{1}{\hbar}\,\big(G_0K_1 - K_0G_1\big) \qquad (D.10)$$
and $B = -\frac{1}{2}G'$, therefore:
$$W^{(g)}_{n+1}(x_0,x_1,J) - W^{(g)}_{n+1}(x_1,x_0,J) = -\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(K_0G_1' - K_1G_0'\big)\,W^{(g)}_n(x,J) - \frac{1}{2\hbar}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(G_0K_1 - K_0G_1\big)\,\big(Y(x)-\hbar\,\partial_x\big)W^{(g)}_n(x,J) \qquad (D.11)$$
Integrating the first term by parts:
$$W^{(g)}_{n+1}(x_0,x_1,J) - W^{(g)}_{n+1}(x_1,x_0,J) = \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(K_0'G_1 - K_1'G_0\big)\,W^{(g)}_n(x,J) + \sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(K_0G_1 - K_1G_0\big)\,\partial_x W^{(g)}_n(x,J) - \frac{1}{2\hbar}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\big(G_0K_1 - K_0G_1\big)\,\big(Y(x)-\hbar\,\partial_x\big)W^{(g)}_n(x,J) \qquad (D.12)$$
Finally, using
$$K_0'G_1 - G_0K_1' = -\frac{Y}{\hbar}\,\big(K_0G_1 - G_0K_1\big) \qquad (D.13)$$
we find:
$$W^{(g)}_{n+1}(x_0,x_1,J) - W^{(g)}_{n+1}(x_1,x_0,J) = 0 \qquad (D.14)$$
$\square$

E Appendix: Proof of theorem 3.4
Theorem 3.4
The correlation functions $W^{(g)}_n$ are independent of the choice of kernel $K$, provided that $K$ is a solution of the equation eq. (2.2).

proof: Any two solutions of eq. (2.2) differ by a homogeneous solution, i.e. by a multiple of $\psi^2(x)$. Therefore, what we have to prove is that the following quantity vanishes:
$$\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\psi^2(x)\,\Big[W^{(g-1)}_{n+2}(x,x,J) + \sum_{h}\sum_{I\subset J} W^{(h)}_{1+|I|}(x,I)\,W^{(g-h)}_{1+n-|I|}(x,J\setminus I)\Big] \qquad (E.1)$$
Using theorem 3.2, we have:
$$\mathop{\mathrm{Res}}_{x\to s_i}\,\psi^2(x)\,\Big[W^{(g-1)}_{n+2}(x,x,J) + \sum_{h}\sum_{I\subset J} W^{(h)}_{1+|I|}(x,I)\,W^{(g-h)}_{1+n-|I|}(x,J\setminus I)\Big] = \mathop{\mathrm{Res}}_{x\to s_i}\,\psi^2(x)\,\Big(Y(x)\,W^{(g)}_n(x,J) - \hbar\,\partial_x W^{(g)}_n(x,J) + P^{(g)}_n(x;J)\Big) \qquad (E.2)$$
Then we notice that $P^{(g)}_n$ gives no residue, we use $Y = -2\hbar\,\psi'/\psi$, and we integrate by parts:
$$= -\hbar\,\mathop{\mathrm{Res}}_{x\to s_i}\,\psi^2(x)\,\Big(\frac{2\psi'}{\psi}\,W^{(g)}_n + \partial_x W^{(g)}_n\Big) = -\hbar\,\mathop{\mathrm{Res}}_{x\to s_i}\,\partial_x\big(\psi^2\,W^{(g)}_n\big) = 0 \qquad (E.3)$$
This means that adding to $K(x_0,x)$ a constant times $\psi^2(x)$ does not change the $W^{(g)}_n$'s. In fact we may choose a different constant near each $s_i$, or in other words, we may assume that:
$$K_{i,2}(x_0) = 0 \qquad (E.4)$$
$\square$

F Appendix: Proof of theorem 3.5
Theorem 3.5
The 3-point function $W^{(0)}_3$ is symmetric, and we have:
$$W^{(0)}_3(x_1,x_2,x_3) = 4\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\frac{B(x,x_1)\,B(x,x_2)\,B(x,x_3)}{Y'(x)} \qquad (F.1)$$

proof: The definition of $W^{(0)}_3$ is:
$$W^{(0)}_3(x_1,x_2,x_3) = 2\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K(x_1,x)\,B(x,x_2)\,B(x,x_3) = \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K_1\,G_2'\,G_3'$$
where $K_a = K(x_a,x)$, $G_a = G(x_a,x)$, and derivatives are with respect to $x$. Writing $G_a' = -(\hbar K_a'' + Y K_a' + Y'K_a)$, this gives:
$$W^{(0)}_3 = \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K_1\,\big(\hbar^2 K_2''K_3'' + \hbar\,Y\,(K_2'K_3''+K_2''K_3') + \hbar\,Y'\,(K_2''K_3+K_2K_3'') + Y^2\,K_2'K_3' + Y\,Y'\,(K_2K_3'+K_2'K_3) + Y'^2\,K_2K_3\big) \qquad (F.2)$$
Since $K(x_a,x)$ has no pole when $x\to s_i$, the first term gives no residue. Using the Riccati equation, $Y^2 = 2\hbar Y' + U$ (where $U$ has no pole at $s_i$), we may replace $Y^2$ by $2\hbar Y'$ and $Y\,Y'$ by $\hbar Y''$ without changing the residues, i.e.:
$$W^{(0)}_3 = \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,K_1\,\big(\hbar\,Y\,(K_2'K_3')' + \hbar\,Y'\,(K_2K_3)'' + \hbar\,Y''\,(K_2K_3)' + Y'^2\,K_2K_3\big)$$
and, integrating by parts:
$$W^{(0)}_3 = \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\Big(Y'^2\,K_1K_2K_3 + \hbar\,\big(Y''\,K_1\,(K_2K_3)' - (YK_1)'\,K_2'K_3' - (Y'K_1)'\,(K_2K_3)'\big)\Big) = \frac{1}{2}\sum_i \mathop{\mathrm{Res}}_{x\to s_i}\,\Big(Y'^2\,K_1K_2K_3 - \hbar\,Y\,K_1'K_2'K_3' - \hbar\,Y'\,\big(K_1K_2'K_3' + K_1'K_2K_3' + K_1'K_2'K_3\big)\Big) \qquad (F.3)$$
x , x , x as claimed in theorem 3.3.Let us give an alternative expression, in the form of the Verlinde or Kricheverformula [15]: W (0)3 ( x , x , x ) = 4 X i Res x → s i B ( x, x ) B ( x, x ) B ( x, x ) Y ′ ( x ) (F.4) proof: In order to prove formula F.4, compute: B ( x, x i ) = − G ′ ( x, x i ) = − G ′ i = 12 ( ~ K ′′ i + Y K ′ i + Y ′ K i ) (F.5)thus: X i Res x → s i B ( x, x ) B ( x, x ) B ( x, x ) Y ′ ( x )= 18 X i Res x → s i Y ′ ( x ) ( ~ K ′′ + Y K ′ + Y ′ K )( ~ K ′′ + Y K ′ + Y ′ K )( ~ K ′′ + Y K ′ + Y ′ K )= 18 X i Res x → s i ~ Y ′ K ′′ K ′′ K ′′ + ~ YY ′ ( K ′ K ′′ K ′′ + K ′′ K ′ K ′′ + K ′′ K ′′ K ′ )+ ~ ( K K ′′ K ′′ + K ′′ K K ′′ + K ′′ K ′′ K )+ ~ Y Y ′ ( K ′′ K ′ K ′ + K ′ K ′′ K ′ + K ′ K ′ K ′′ )+ ~ Y ( K K ′ K ′′ + K K ′′ K ′ + K ′ K K ′′ + K ′ K ′′ K + K ′′ K K ′ + K ′′ K ′ K )34 ~ Y ′ ( K ′′ K K + K K ′′ K + K K K ′′ ) + Y Y ′ K ′ K ′ K ′ + Y ( K K ′ K ′ + K ′ K K ′ + K ′ K ′ K )+ Y Y ′ ( K ′ K K + K K ′ K + K K K ′ ) + Y ′ K K K ( F. K i has no pole at the s i ’s, and 1 /Y ′ has no pole, Y /Y ′ has no pole, Y /Y ′ has no pole, thus: X i Res x → s i B ( x, x ) B ( x, x ) B ( x, x ) Y ′ ( x )= 18 X i Res x → s i ~ Y ( K K ′ K ′′ + K K ′′ K ′ + K ′ K K ′′ + K ′ K ′′ K + K ′′ K K ′ + K ′′ K ′ K ) + ~ Y ′ ( K ′′ K K + K K ′′ K + K K K ′′ ) + Y Y ′ K ′ K ′ K ′ + Y ( K K ′ K ′ + K ′ K K ′ + K ′ K ′ K )+ Y Y ′ ( K ′ K K + K K ′ K + K K K ′ ) + Y ′ K K K ( F. 
Y = 2 ~ Y ′ + U , thus we may replace Y /Y ′ by 2 ~ Y , and Y by 2 ~ Y ′ and Y Y ′ by ~ Y ′′ , thus: X i Res x → s i B ( x, x ) B ( x, x ) B ( x, x ) Y ′ ( x )= 18 X i Res x → s i ~ Y ( K K ′ K ′′ + K K ′′ K ′ + K ′ K K ′′ + K ′ K ′′ K + K ′′ K K ′ + K ′′ K ′ K ) + ~ Y ′ ( K ′′ K K + K K ′′ K + K K K ′′ ) + 2 ~ Y K ′ K ′ K ′ +2 ~ Y ′ ( K K ′ K ′ + K ′ K K ′ + K ′ K ′ K ) + ~ Y ′′ ( K ′ K K + K K ′ K + K K K ′ )+ Y ′ K K K = 18 X i Res x → s i ~ Y ( K ( K ′ K ′ ) ′ + K ( K ′ K ′ ) ′ + K ( K ′ K ′ ) ′ )+2 ~ Y K ′ K ′ K ′ + Y ′ K K K + ~ ( Y ′ ( K ′ K K + K K ′ K + K K K ′ )) ′ = 18 X i Res x → s i ~ Y ( K ( K ′ K ′ ) ′ + K ( K ′ K ′ ) ′ + K ( K ′ K ′ ) ′ )+2 ~ Y K ′ K ′ K ′ + Y ′ K K K = − X i Res x → s i ~ Y K ′ K ′ K ′ + ~ Y ′ ( K K ′ K ′ + K ′ K K ′ + K ′ K ′ K ) − ~ Y K ′ K ′ K ′ − Y ′ K K K = 14 W (0)3 ( x , x , x ) (F.8) F.1 Direct computation
We write W (0)3 ( z , z , z ) 35 2 X i Res z → s i K ( z , z ) B ( z , z ) B ( z , z )= X j X i A i,j ( z − s j ) Res z → s i K ( z , z ) 1( z − s i ) ( z − z ) + sym . +2 X i X i ′ = i X j,k A i,j A i ′ ,k ( z − s j ) ( z − s k ) Res z → s i K ( z , z ) 1( z − s i ) ( z − s i ′ ) + sym . +2 X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) Res z → s i K ( z , z ) 1( z − s i ) = X j X i A i,j ( z − s j ) (cid:16) K i, ( z )( z − s i ) + 2 K i, ( z )( z − s i ) (cid:17) + sym . +2 X i X i ′ = i X j,k A i,j A i ′ ,k ( z − s j ) ( z − s k ) (cid:16) K i, ( z )( s i ′ − s i ) + 2 K i, ( z )( s i ′ − s i ) (cid:17) + sym . +2 X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) K i, ( z )= X j X i A i,j ( z − s j ) (cid:16) K i, ( z )( z − s i ) + 2 K i, ( z )( z − s i ) (cid:17) + sym . +2 X i X i ′ = i X j,k A i,j A i ′ ,k ( z − s j ) ( z − s k ) (cid:16) K i, ( z )( s i ′ − s i ) + 2 K i, ( z )( s i ′ − s i ) (cid:17) + sym . − X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) T i,i K i, ( z ) − X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) ( V ′′′ ( s i )2 ~ + 2 X i ′ = i s i ′ − s i ) ) K i, ( z )+ 2 ~ X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) ( z − s i ) + 4 ~ X i X i ′ = i X l X j,k A i,j A i,k A i ′ ,l ( z − s j ) ( z − s k ) ( s i ′ − s i ) ( z − s l ) = 2 ~ X i,j,k A i,j A i,k ( z − s i ) ( z − s j ) ( z − s k ) + A j,i A j,k ( z − s i ) ( z − s j ) ( z − s k ) + A k,i A k,j ( z − s i ) ( z − s j ) ( z − s k ) + X i,j,k K i, ( z )( z − s j ) ( z − s k ) (cid:16) A j,k δ i,j + A j,k δ i,k − A i,j X i ′ T i,i ′ A i ′ ,k − A i,k X i ′ T i,i ′ A i ′ ,j (cid:17) +2 X i X i ′ = i X j,k A i,j A i ′ ,k ( z − s j ) ( z − s k ) K i, ( z )( s i ′ − s i ) + sym . 
− X i X j,k A i,j A i,k ( z − s j ) ( z − s k ) ( V ′′′ ( s i )2 ~ + 2 X i ′ = i s i ′ − s i ) ) K i, ( z )36 4 ~ X i X i ′ = i X l X j,k A i,j A i,k A i ′ ,l ( z − s j ) ( z − s k ) ( s i ′ − s i ) ( z − s l ) = 2 ~ X l,j,k z − s l ) ( z − s j ) ( z − s k ) X i (cid:16) δ i,l A i,j A i,k ( z − s i ) + δ i,j A i,l A i,k ( z − s i )+ δ i,k A i,l A i,j ( z − s i ) (cid:17) + 4 ~ X l,j,k X i X i ′ = i A i,j A i,k A i ′ ,l + A i,j A i ′ ,k A i,l + A i,k A i ′ ,j A i,l − A i,j A i,k A i,l ( z − s l ) ( z − s j ) ( z − s k ) ( s i ′ − s i ) − ~ X l,j,k X i A i,j A i,k A i,l V ′′′ ( s i )( z − s l ) ( z − s j ) ( z − s k ) ( F. W (0)3 ( z , z , z )= 2 ~ X i,j,k,l δ i,l A i,j A i,k ( z − s i ) + δ i,j A i,l A i,k ( z − s i ) + δ i,k A i,l A i,j ( z − s i ) ( z − s l ) ( z − s j ) ( z − s k ) + 4 ~ X l,j,k X i X i ′ = i A i,j A i,k A i ′ ,l + A i,j A i ′ ,k A i,l + A i,k A i ′ ,j A i,l − A i,j A i,k A i,l ( z − s l ) ( z − s j ) ( z − s k ) ( s i ′ − s i ) − ~ X l,j,k X i A i,j A i,k A i,l V ′′′ ( s i )( z − s l ) ( z − s j ) ( z − s k ) ( F. G Appendix: Proof of theorem 3.6
Theorem 3.6
Under an infinitesimal variation of the potential V → V + δV , we have: ∀ n ≥ , g ≥ , δW ( g ) n ( x , . . . , x n ) = − X i Res x → s i W ( g ) n +1 ( x, x , . . . , x n ) δV ( x ) (G.1) G.1 Variation of ω We have: ω ( x ) = ~ X i x − s i (G.2)and V ′ ( s i ) = 2 ~ X j = i s i − s j (G.3)37hus taking a variation we have: δV ′ ( s i ) + δs i V ′′ ( s i ) = − ~ X j = i δs i − δs j ( s i − s j ) (G.4)i.e. δV ′ ( s i ) = − ~ X j T i,j δs j (G.5)which implies: δs i = − ~ X j A i,j δV ′ ( s j ) (G.6)and therefore: δω ( x ) = − X i,j A i,j δV ′ ( s j )( x − s i ) (G.7)which can also be written: δω ( x ) = − X k Res x ′ → s k X i,j A i,j ( x − s i ) ( x ′ − s j ) δV ′ ( x ′ )= − X k Res x ′ → s k X i,j A i,j ( x − s i ) ( x ′ − s j ) δV ( x ′ )= − X k Res x ′ → s k B ( x, x ′ ) δV ( x ′ ) (G.8)and finally we obtain the case n = 1 , g = 0 of the theorem: δω ( x ) = − X k Res x ′ → s k B ( x, x ′ ) δV ( x ′ ) (G.9) G.2 Variation of B Consider: W (0)2 ( x, x ′ ) = B ( x, x ′ ) −
12 1( x − x ′ ) = X i,j A i,j ( x − s j ) ( x ′ − s i ) (G.10)Due to eq. (2.6) we have: W (0)2 ( x, x ′ ) = X i ~ K ( x, s i )( x ′ − s i ) = X i Res z → s i K ( x, z ) ω ( z )( z − x ′ ) = ∂∂x ′ X i Res z → s i K ( x, z ) ω ( z ) − ω ( x ′ ) z − x ′ G. W (0)2 ( x, x ′ ) has poles only at the s i ’s we have: W (0)2 ( x, x ′ ) = Res z → x G ( x, z ) W (0)2 ( z, x ′ )= − X i Res z → s i G ( x, z ) W (0)2 ( z, x ′ )= − X i Res z → s i ((2 ω ( z ) − V ′ ( z ) + ~ ∂ z ) K ( x, z )) W (0)2 ( z, x ′ )= − X i Res z → s i K ( x, z ) (cid:16) (2 ω ( z ) − V ′ ( z ) − ~ ∂ z ) W (0)2 ( z, x ′ ) (cid:17) ( G. ∀ x :0 = − X i Res z → s i K ( x, z ) (cid:18) (2 ω ( z ) − V ′ ( z ) − ~ ∂ z ) W (0)2 ( z, x ′ ) + ∂∂x ′ ω ( z ) − ω ( x ′ ) z − x ′ (cid:19) (G.13)and therefore, W (0)2 ( x, x ′ ) satisfies the loop equation:(2 ω ( x ) − V ′ ( x ) − ~ ∂ x ) W (0)2 ( x, x ′ ) + ∂∂x ′ ω ( x ) − ω ( x ′ ) x − x ′ = − P (0)2 ( x, x ′ ) (G.14)where P (0)2 ( x, x ′ ) has no pole at x → s i ’s.Then we take the variation:(2 ω ( x ) − V ′ ( x ) − ~ ∂ x ) δW (0)2 ( x, x ′ ) = − (2 δω ( x ) − δV ′ ( x )) W (0)2 ( x, x ′ ) − ∂∂x ′ δω ( x ) − δω ( x ′ ) x − x ′ − δP (0)2 ( x, x ′ )( G. δW (0)2 ( x, x ′ ) is a rational fraction of x , with poles only at the s i ’s, and δP (0)2 ( x, x ′ ) hasno pole at x → s i ’s. We thus write: δW (0)2 ( x, x ′ ) = δW (0)2 ( x, x ′ )= Res z → x G ( x, z ) δW (0)2 ( z, x ′ )= − X i Res z → s i G ( x, z ) W (0)2 ( z, x ′ )= − X i Res z → s i ((2 ω ( z ) − V ′ ( z ) + ~ ∂ z ) K ( x, z )) δW (0)2 ( z, x ′ )= − X i Res z → s i K ( x, z ) (cid:16) (2 ω ( z ) − V ′ ( z ) − ~ ∂ z ) δW (0)2 ( z, x ′ ) (cid:17) = X i Res z → s i K ( x, z ) (cid:16) (2 δω ( z ) − δV ′ ( z )) W (0)2 ( z, x ′ )39 ∂∂x ′ δω ( z ) − δω ( x ′ ) z − x ′ + δP (0)2 ( z, x ′ ) (cid:17) = X i Res z → s i K ( x, z ) (cid:16) (2 δω ( z ) − δV ′ ( z )) W (0)2 ( z, x ′ ) + δω ( z )( z − x ′ ) (cid:17) = X i Res z → s i K ( x, z ) (2 δω ( z ) − δV ′ ( z )) B ( z, x ′ )( G. 
δW (0)2 ( x, x ′ ) = − X i Res z → s i X k Res x ′′ → s k K ( x, z ) B ( z, x ′′ ) δV ( x ′′ ) B ( z, x ′ ) − X i Res z → s i K ( x, z ) δV ′ ( z ) B ( z, x ′ )= − X i Res z → s i X k Res x ′′ → s k K ( x, z ) G ( z, x ′′ ) δV ′ ( x ′′ ) B ( z, x ′ ) − X i Res z → s i Res x ′′ → z K ( x, z ) G ( z, x ′′ ) δV ′ ( x ′′ ) B ( z, x ′ )= − X k Res x ′′ → s k X i Res z → s i K ( x, z ) G ( z, x ′′ ) δV ′ ( x ′′ ) B ( z, x ′ )= − X k Res x ′′ → s k X i Res z → s i K ( x, z ) B ( z, x ′′ ) δV ( x ′′ ) B ( z, x ′ )( G. n = 2 , g = 0 of the theorem: δW (0)2 ( x, x ′ ) = − X k Res x ′′ → s k W (0)3 ( x, x ′ , x ′′ ) δV ( x ′′ ) (G.18) G.3 Variation of other higher correlators
We prove by recursion on 2 g + n , that: δW ( g ) n +1 ( x, L ) = − X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ ) (G.19)where L = { x , . . . , x n } .We write: U ( g ) n +1 ( z, L ) = W ( g − n +2 ( z, z, L ) + X h ′ X J ⊂ L W ( h )1+ | J | ( z, J ) W ( g − h )1+ n −| J | ( z, L/J ) (G.20)By definition we have: W ( g ) n +1 ( x, L ) = X i Res z → s i K ( x, z ) U ( g ) n +1 ( z, L ) (G.21)40rom the recursion hypothesis, we have: δU ( g ) n +1 ( z, L ) = − X k Res x ′′ → s k δV ( x ′′ ) (cid:16) W ( g − n +3 ( z, z, L, x ′′ ) − X h ′ X J ⊂ L W ( h )2+ | J | ( z, J, x ′′ ) W ( g − h )1+ n −| J | ( z, L/J ) (cid:17) = − X k Res x ′′ → s k δV ( x ′′ ) (cid:16) U ( g ) n +2 ( z, L, x ′′ ) − B ( z, x ′′ ) W ( g ) n +1 ( z, L ) (cid:17) ( G. δW ( g ) n +1 ( x, L )= X i Res z → s i δK ( x, z ) U ( g ) n +1 ( z, L ) − X i Res z → s i K ( x, z ) X k Res x ′′ → s k δV ( x ′′ ) (cid:16) U ( g ) n +2 ( z, L, x ′′ ) − B ( z, x ′′ ) W ( g ) n +1 ( z, L ) (cid:17) = X i Res z → s i δK ( x, z ) U ( g ) n +1 ( z, L ) − X k Res x ′′ → s k X i Res z → s i K ( x, z ) δV ( x ′′ ) (cid:16) U ( g ) n +2 ( z, L, x ′′ ) − B ( z, x ′′ ) W ( g ) n +1 ( z, L ) (cid:17) = X i Res z → s i δK ( x, z ) U ( g ) n +1 ( z, L )+2 X k Res x ′′ → s k X i Res z → s i K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L ) − X k Res x ′′ → s k X i Res z → s i K ( x, z ) δV ( x ′′ ) U ( g ) n +2 ( z, L, x ′′ )= X i Res z → s i δK ( x, z ) U ( g ) n +1 ( z, L )+2 X i Res z → s i X k Res x ′′ → s k K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L )+2 X i Res z → s i Res x ′′ → z K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L ) − X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ )( G. 
U ( g ) n +1 ( z, L ) + (2 ω ( z ) − V ′ ( z ) + ~ ∂ z ) W ( g ) n +1 ( z, L ) has no pole at z → s i , and thus: δW ( g ) n +1 ( x, L )= − X i Res z → s i δK ( x, z ) (2 ω ( z ) − V ′ ( z ) + ~ ∂ z ) W ( g ) n +1 ( z, L )+2 X i Res z → s i X k Res x ′′ → s k K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L )+2 X i Res z → s i Res x ′′ → z K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L )41 X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ )= − X i Res z → s i W ( g ) n +1 ( z, L ) (2 ω ( z ) − V ′ ( z ) − ~ ∂ z ) δK ( x, z )+2 X i Res z → s i X k Res x ′′ → s k K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L )+2 X i Res z → s i Res x ′′ → z K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L ) − X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ )( G. ω ( z ) − V ′ ( z ) − ~ ∂ z ) δK ( x, z ) = δG ( x, z ) − (2 δω ( z ) − δV ′ ( z )) K ( x, z ) (G.25) δW ( g ) n +1 ( x, L )= − X i Res z → s i W ( g ) n +1 ( z, L ) δG ( x, z )+ X i Res z → s i W ( g ) n +1 ( z, L ) (2 δω ( z ) − δV ′ ( z )) K ( x, z )+2 X i Res z → s i X k Res x ′′ → s k K ( x, z ) δV ( x ′′ ) B ( z, x ′′ ) W ( g ) n +1 ( z, L )+ X i Res z → s i K ( x, z ) δV ′ ( z ) W ( g ) n +1 ( z, L ) − X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ )( G. X i Res z → s i W ( g ) n +1 ( z, L ) δG ( x, z ) = 0 (G.27)because the integrand is a rational fraction, and we have taken the sum of residues atall poles.Using eq. (G.9), we are thus left with: δW ( g ) n +1 ( x, L ) = − X k Res x ′′ → s k δV ( x ′′ ) W ( g ) n +2 ( z, L, x ′′ ) (G.28)which proves the recursion hypothesis for 2 g + n + 1. QED. H Appendix: Proof of theorem 3.7
Theorem 3.7: For $k=0$ and $k=1$, the $W^{(g)}_n$ satisfy the equation:
$$-\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\Big(x_i^k\, W^{(g)}_n(x_1,\dots,x_n)\Big) = \sum_i \mathop{\rm Res}_{x_{n+1}\to s_i} x_{n+1}^k\, V'(x_{n+1})\, W^{(g)}_{n+1}(x_1,\dots,x_n,x_{n+1}) \qquad (H.1)$$

proof: Since $W^{(g)}_{n+1}$ has poles only at the $s_i$'s, we have (with, as usual, $J=\{x_1,\dots,x_n\}$):
$$\sum_i \mathop{\rm Res}_{x\to s_i} x^k\,V'(x)\,W^{(g)}_{n+1}(J,x) = \sum_i \mathop{\rm Res}_{x\to s_i} x^k\,Y(x)\,W^{(g)}_{n+1}(J,x) \qquad (H.2)$$
(indeed $V'-Y = 2\omega$, and $x^k\,\omega(x)\,W^{(g)}_{n+1}(J,x)$ is a rational fraction with poles only at the $s_i$'s, behaving like $O(1/x^{3-k})$ at infinity, so the sum of its residues vanishes for $k\le 1$). Using the loop equation of theorem 3.2:
$$\begin{aligned}
\sum_i \mathop{\rm Res}_{x\to s_i} x^k\,V'(x)\,W^{(g)}_{n+1}(J,x)
&= \sum_i \mathop{\rm Res}_{x\to s_i} x^k\Big[\hbar\partial_x W^{(g)}_{n+1}(J,x) + U^{(g)}_{n+1}(x,J) - P^{(g)}_{n+1}(x;J) - \sum_{j=1}^n \frac{\partial}{\partial x_j}\frac{W^{(g)}_n(J)}{x-x_j}\Big]\\
&= \sum_i \mathop{\rm Res}_{x\to s_i} x^k\Big[\hbar\partial_x W^{(g)}_{n+1}(J,x) + U^{(g)}_{n+1}(x,J)\Big] \qquad (H.3)
\end{aligned}$$
since $P^{(g)}_{n+1}(x;J)$ has no pole at the $s_i$'s and the last term has poles only at the $x_j$'s.

For $n\geq 1$, $W^{(g)}_{n+1}(J,x)$ behaves like $O(1/x^2)$ at $x\to\infty$, and thus, if $k\leq 1$, $x^k\,\partial_x W^{(g)}_{n+1}(J,x)$ behaves like $O(1/x^2)$. Since we take the residues at all its poles, the sum of residues vanishes, and thus:
$$\sum_i \mathop{\rm Res}_{x\to s_i} x^k\,V'(x)\,W^{(g)}_{n+1}(J,x) = \sum_i \mathop{\rm Res}_{x\to s_i} x^k\, U^{(g)}_{n+1}(x,J) \qquad (H.4)$$
$U^{(g)}_{n+1}(x,J)$ (defined in eq. (G.20)) behaves at most like $O(1/x^4)$ for large $x$, and thus, if $k\leq 1$, the product $x^k\,U^{(g)}_{n+1}(x,J)$ is a rational fraction which decays at infinity. Its only poles can be at $x=s_i$ or at $x=x_j$. Therefore the sum of residues at the $s_i$'s can be replaced by (minus) the sum of residues at the $x_j$'s:
$$\sum_i \mathop{\rm Res}_{x\to s_i} x^k\,V'(x)\,W^{(g)}_{n+1}(J,x) = -\sum_{j=1}^n \mathop{\rm Res}_{x\to x_j} x^k\, U^{(g)}_{n+1}(x,J) \qquad (H.5)$$
The terms of $U^{(g)}_{n+1}(x,J)$ which have a pole at $x=x_j$ are the terms containing a $B(x,x_j)$, i.e.:
$$\begin{aligned}
\sum_i \mathop{\rm Res}_{x\to s_i} x^k\,V'(x)\,W^{(g)}_{n+1}(J,x)
&= -2\sum_{j=1}^n \mathop{\rm Res}_{x\to x_j} x^k\,B(x,x_j)\,W^{(g)}_n(x,J\setminus\{x_j\})\\
&= -\sum_{j=1}^n \mathop{\rm Res}_{x\to x_j} \frac{x^k}{(x-x_j)^2}\,W^{(g)}_n(x,J\setminus\{x_j\})\\
&= -\sum_{j=1}^n \frac{\partial}{\partial x_j}\Big(x_j^k\,W^{(g)}_n(x_1,\dots,x_n)\Big) \qquad (H.6)
\end{aligned}$$
$\Box$

I Appendix: Proof of theorem 3.8
Theorem 3.8:
For $n\geq 1$, the $W^{(g)}_n$ satisfy the equation:
$$\Big(2-2g-n-\hbar\frac{\partial}{\partial\hbar}\Big)\, W^{(g)}_n(x_1,\dots,x_n) = -\sum_i \mathop{\rm Res}_{x_{n+1}\to s_i} V(x_{n+1})\, W^{(g)}_{n+1}(x_1,\dots,x_n,x_{n+1}) \qquad (I.1)$$

I.1 $\hbar$-derivative of $\omega(z)$

We have:
$$V'(s_i) = 2\hbar\sum_{j\neq i}\frac{1}{s_i-s_j}$$
Taking the derivative with respect to $\hbar$ gives:
$$\hbar\,V''(s_i)\,\partial_\hbar s_i = V'(s_i) - 2\hbar^2\sum_{j\neq i}\frac{\partial_\hbar s_i-\partial_\hbar s_j}{(s_i-s_j)^2}$$
and so
$$V'(s_i) = \hbar\,\Big(V''(s_i)\,\partial_\hbar s_i + 2\hbar\sum_{j\neq i}\frac{\partial_\hbar s_i-\partial_\hbar s_j}{(s_i-s_j)^2}\Big)$$
We recognize the general term of the matrix $T$ and find:
$$V'(s_i) = \hbar\sum_j T_{i,j}\,\partial_\hbar s_j$$
Inverting with $A=T^{-1}$ gives:
$$\hbar\,\partial_\hbar s_i = \sum_j A_{i,j}\,V'(s_j) \qquad (I.2)$$
We can use this result to compute:
$$\begin{aligned}
\hbar\,\partial_\hbar\,\omega(x) &= \omega(x) + \hbar^2\sum_i \frac{\partial_\hbar s_i}{(x-s_i)^2}
= \omega(x) + \hbar\sum_{i,j}\frac{A_{i,j}\,V'(s_j)}{(x-s_i)^2}\\
&= \omega(x) + \sum_k \mathop{\rm Res}_{x'\to s_k} \sum_{i,j}\frac{\hbar\,A_{i,j}\,V'(x')}{(x-s_i)^2\,(x'-s_j)}
= \omega(x) + \sum_k \mathop{\rm Res}_{x'\to s_k} \sum_{i,j}\frac{\hbar\,A_{i,j}\,V(x')}{(x-s_i)^2\,(x'-s_j)^2}\\
&= \omega(x) + \sum_k \mathop{\rm Res}_{x'\to s_k} W^{(0)}_2(x,x')\,V(x') \qquad (I.3)
\end{aligned}$$
This is the case $n=1$, $g=0$ of the theorem:
$$\hbar\,\partial_\hbar\,\omega(x) = \omega(x) + \sum_k \mathop{\rm Res}_{x'\to s_k} W^{(0)}_2(x,x')\,V(x') \qquad (I.4)$$

I.2 $\hbar$-derivative of $W^{(0)}_2$

We have seen in appendix G, eq. (G.14), that $W^{(0)}_2(x,x')$ satisfies the loop equation:
$$\big(2\omega(x)-V'(x)+\hbar\partial_x\big)\,W^{(0)}_2(x,x') + \frac{\partial}{\partial x'}\,\frac{\omega(x)-\omega(x')}{x-x'} = -P^{(0)}_2(x,x') \qquad (I.5)$$
where $P^{(0)}_2(x,x')$ has no pole at $x\to s_i$. Then we apply the derivation $\hbar\partial_\hbar$ to this equation:
$$\big(2\omega(x)-V'(x)+\hbar\partial_x\big)\,\hbar\partial_\hbar W^{(0)}_2(x,x') + \hbar\partial_x W^{(0)}_2(x,x') + 2\,\big(\hbar\partial_\hbar\omega(x)\big)\,W^{(0)}_2(x,x')
= -\frac{\partial}{\partial x'}\,\frac{\hbar\partial_\hbar\omega(x)-\hbar\partial_\hbar\omega(x')}{x-x'} - \hbar\partial_\hbar P^{(0)}_2(x,x') \qquad (I.6)$$
$\hbar\partial_\hbar W^{(0)}_2(x,x')$ is a rational fraction of $x$, with poles only at the $s_i$'s, and $\hbar\partial_\hbar P^{(0)}_2(x,x')$ has no pole at $x\to s_i$.
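Section I.1 starts from the Bethe condition $V'(s_i) = 2\hbar\sum_{j\neq i} 1/(s_i-s_j)$. For the Gaussian potential $V(x)=x^2/2$ (an illustrative special case, not the general potential treated here), a classical Stieltjes-type argument gives the roots in closed form, $s_i=\sqrt{2\hbar}\,h_i$ where the $h_i$ are the zeros of the Hermite polynomial $H_N$. A minimal numerical sanity check of the condition, with $N=6$ and $\hbar=0.3$ as arbitrary choices:

```python
import numpy as np

hbar, N = 0.3, 6

# Zeros of the physicists' Hermite polynomial H_N
h = np.polynomial.hermite.hermroots([0] * N + [1])

# Candidate Bethe roots for V(x) = x^2/2, i.e. V'(x) = x
s = np.sqrt(2 * hbar) * h

# Check V'(s_i) = 2*hbar * sum_{j != i} 1/(s_i - s_j)
for i in range(N):
    rhs = 2 * hbar * sum(1 / (s[i] - s[j]) for j in range(N) if j != i)
    assert abs(s[i] - rhs) < 1e-10
print("Bethe condition satisfied")
```

For a general potential the roots have no closed form and must be found, e.g., by Newton iteration on the same system.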
We thus write:
$$\begin{aligned}
\hbar\partial_\hbar W^{(0)}_2(x,x')
&= \mathop{\rm Res}_{z\to x} G(x,z)\,\hbar\partial_\hbar W^{(0)}_2(z,x')
= -\sum_i \mathop{\rm Res}_{z\to s_i} G(x,z)\,\hbar\partial_\hbar W^{(0)}_2(z,x')\\
&= -\sum_i \mathop{\rm Res}_{z\to s_i} \Big(\big(2\omega(z)-V'(z)-\hbar\partial_z\big)K(x,z)\Big)\,\hbar\partial_\hbar W^{(0)}_2(z,x')\\
&= -\sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\Big(\big(2\omega(z)-V'(z)+\hbar\partial_z\big)\,\hbar\partial_\hbar W^{(0)}_2(z,x')\Big)\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\Big(2\,\big(\hbar\partial_\hbar\omega(z)\big)\,W^{(0)}_2(z,x') + \frac{\partial}{\partial x'}\,\frac{\hbar\partial_\hbar\omega(z)-\hbar\partial_\hbar\omega(x')}{z-x'} + \hbar\partial_\hbar P^{(0)}_2(z,x') + \hbar\partial_z W^{(0)}_2(z,x')\Big)\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\Big(2\,W^{(0)}_2(z,x')\,\hbar\partial_\hbar\omega(z) + \hbar\partial_z W^{(0)}_2(z,x')\Big) \qquad (I.7)
\end{aligned}$$
Substituting (I.4), i.e. $\hbar\partial_\hbar\omega(z) = \omega(z) + \sum_k \mathop{\rm Res}_{x''\to s_k} W^{(0)}_2(z,x'')\,V(x'')$:
$$\begin{aligned}
\hbar\partial_\hbar W^{(0)}_2(x,x')
&= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\Big(2\,W^{(0)}_2(z,x')\,\omega(z) + \hbar\partial_z W^{(0)}_2(z,x')\Big)\\
&\quad + 2\sum_{i,k} \mathop{\rm Res}_{z\to s_i}\mathop{\rm Res}_{x''\to s_k} K(x,z)\,W^{(0)}_2(z,x')\,W^{(0)}_2(z,x'')\,V(x'')\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} W^{(0)}_2(z,x')\,\big(2\omega(z)-\hbar\partial_z\big)K(x,z)
+ 2\sum_{i,k} \mathop{\rm Res}_{z\to s_i}\mathop{\rm Res}_{x''\to s_k} K(x,z)\,W^{(0)}_2(z,x')\,B(z,x'')\,V(x'')\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} W^{(0)}_2(z,x')\,\big(G(x,z)+V'(z)\,K(x,z)\big)
+ 2\sum_{i,k} \mathop{\rm Res}_{z\to s_i}\mathop{\rm Res}_{x''\to s_k} K(x,z)\,W^{(0)}_2(z,x')\,B(z,x'')\,V(x'')
\end{aligned}$$
Exchanging the order of the residues in $z$ and $x''$ picks up the residue at $x''\to z$, which cancels the $V'(z)K(x,z)$ term, so that:
$$\hbar\partial_\hbar W^{(0)}_2(x,x') = \sum_i \mathop{\rm Res}_{z\to s_i} B(z,x')\,G(x,z) + \sum_k \mathop{\rm Res}_{x''\to s_k} W^{(0)}_3(x,x',x'')\,V(x'') \qquad (I.8)
$$
$G(x,z)$ and $B(z,x')$ are rational fractions of $z$ whose only poles are the $s_i$'s, as well as $z=x$ and $z=x'$, and we write:
$$\begin{aligned}
\sum_i \mathop{\rm Res}_{z\to s_i} B(z,x')\,G(x,z)
&= -\mathop{\rm Res}_{z\to x} B(z,x')\,G(x,z) - \mathop{\rm Res}_{z\to x'} B(z,x')\,G(x,z)\\
&= -\mathop{\rm Res}_{z\to x} B(z,x')\,\frac{1}{z-x} - \frac12\,\mathop{\rm Res}_{z\to x'} \frac{1}{(z-x')^2}\,G(x,z)\\
&= -B(x,x') + B(x,x')\\
&= 0 \qquad (I.9)
\end{aligned}$$
So that eventually we have proved the case $n=2$, $g=0$ of the theorem:
$$\hbar\partial_\hbar W^{(0)}_2(x,x') = \sum_k \mathop{\rm Res}_{x''\to s_k} W^{(0)}_3(x,x',x'')\,V(x'') \qquad (I.10)$$

I.3 Recursion for higher correlators
We proceed by recursion on $2g+n$. From theorem 3.2, we have that:
$$\big(Y(x)-\hbar\partial_x\big)\,\hbar\partial_\hbar W^{(g)}_{n+1}(x,L)
= \hbar\partial_\hbar U^{(g)}_{n+1}(x;L) + \hbar\partial_x W^{(g)}_{n+1}(x,L) - W^{(g)}_{n+1}(x,L)\,\hbar\partial_\hbar Y(x)
- \hbar\partial_\hbar P^{(g)}_{n+1}(x;L) + \sum_{x_j\in L}\frac{\partial}{\partial x_j}\frac{W^{(g)}_n(L)}{x-x_j} \qquad (I.11)$$
where the terms on the last line have no pole at $x=s_i$. This implies that:
$$\sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\big(Y(x)-\hbar\partial_x\big)\,\hbar\partial_\hbar W^{(g)}_{n+1}(x,L)\Big)
= \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\hbar\partial_\hbar U^{(g)}_{n+1}(x;L) + \hbar\partial_x W^{(g)}_{n+1}(x,L) - W^{(g)}_{n+1}(x,L)\,\hbar\partial_\hbar Y(x)\Big) \qquad (I.12)$$
We have:
$$\begin{aligned}
\sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\big(Y(x)-\hbar\partial_x\big)\,\hbar\partial_\hbar W^{(g)}_{n+1}(x,L)\Big)
&= \sum_i \mathop{\rm Res}_{x\to s_i} \hbar\partial_\hbar W^{(g)}_{n+1}(x,L)\,\big(Y(x)+\hbar\partial_x\big)K(x_0,x)\\
&= -\sum_i \mathop{\rm Res}_{x\to s_i} \hbar\partial_\hbar W^{(g)}_{n+1}(x,L)\,G(x_0,x)\\
&= \mathop{\rm Res}_{x\to x_0} \hbar\partial_\hbar W^{(g)}_{n+1}(x,L)\,G(x_0,x)
= \hbar\partial_\hbar W^{(g)}_{n+1}(x_0,L) \qquad (I.13)
\end{aligned}$$
and therefore:
$$\hbar\partial_\hbar W^{(g)}_{n+1}(x_0,L) = \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\hbar\partial_\hbar U^{(g)}_{n+1}(x;L) + \hbar\partial_x W^{(g)}_{n+1}(x,L) - W^{(g)}_{n+1}(x,L)\,\hbar\partial_\hbar Y(x)\Big) \qquad (I.14)$$
From the recursion hypothesis:
$$\begin{aligned}
\hbar\partial_\hbar U^{(g)}_{n+1}(x;L)
&= \hbar\partial_\hbar W^{(g-1)}_{n+2}(x,x,L)
+ \sum_{k=0}^{g}{}'\,\sum_{J\subset L} W^{(k)}_{1+|J|}(x,J)\,\hbar\partial_\hbar W^{(g-k)}_{1+n-|J|}(x,L\setminus J)
+ \sum_{k=0}^{g}{}'\,\sum_{J\subset L} W^{(g-k)}_{1+n-|J|}(x,L\setminus J)\,\hbar\partial_\hbar W^{(k)}_{1+|J|}(x,J)\\
&= \big(2-2(g-1)-(n+2)\big)\,W^{(g-1)}_{n+2}(x,x,L) + \sum_i \mathop{\rm Res}_{x'\to s_i} W^{(g-1)}_{n+3}(x,x,L,x')\,V(x')\\
&\quad + \sum_{k=0}^{g}{}'\,\sum_{J\subset L} \big(2-2(g-k)-(1+n-|J|)\big)\,W^{(k)}_{1+|J|}(x,J)\,W^{(g-k)}_{1+n-|J|}(x,L\setminus J)\\
&\quad + \sum_{k=0}^{g}{}'\,\sum_{J\subset L} \big(2-2k-(1+|J|)\big)\,W^{(g-k)}_{1+n-|J|}(x,L\setminus J)\,W^{(k)}_{1+|J|}(x,J)\\
&\quad + \sum_i \mathop{\rm Res}_{x'\to s_i} V(x')\,\sum_{k=0}^{g}{}'\,\sum_{J\subset L} W^{(k)}_{2+|J|}(x,J,x')\,W^{(g-k)}_{1+n-|J|}(x,L\setminus J)\\
&\quad + \sum_i \mathop{\rm Res}_{x'\to s_i} V(x')\,\sum_{k=0}^{g}{}'\,\sum_{J\subset L} W^{(k)}_{1+|J|}(x,J)\,W^{(g-k)}_{2+n-|J|}(x,L\setminus J,x')\\
&= (2-2g-n)\,U^{(g)}_{n+1}(x;L) + \sum_i \mathop{\rm Res}_{x'\to s_i} V(x')\,\Big(U^{(g)}_{n+2}(x;x',L) - 2B(x,x')\,W^{(g)}_{n+1}(x,L)\Big) \qquad (I.15)
\end{aligned}$$
Thus we have:
$$\begin{aligned}
\hbar\partial_\hbar W^{(g)}_{n+1}(x_0,L)
&= (2-2g-n)\sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,U^{(g)}_{n+1}(x;L)\\
&\quad + \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,\Big(U^{(g)}_{n+2}(x;x',L) - 2B(x,x')\,W^{(g)}_{n+1}(x,L)\Big)\\
&\quad + \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\hbar\partial_x W^{(g)}_{n+1}(x,L) - W^{(g)}_{n+1}(x,L)\,\hbar\partial_\hbar Y(x)\Big)\\
&= (2-2g-n)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L)\\
&\quad - 2\sum_i \mathop{\rm Res}_{x\to s_i}\sum_j \mathop{\rm Res}_{x'\to s_j} K(x_0,x)\,V(x')\,B(x,x')\,W^{(g)}_{n+1}(x,L)
- 2\sum_i \mathop{\rm Res}_{x\to s_i}\mathop{\rm Res}_{x'\to x} K(x_0,x)\,V(x')\,B(x,x')\,W^{(g)}_{n+1}(x,L)\\
&\quad + \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\hbar\partial_x W^{(g)}_{n+1}(x,L) - W^{(g)}_{n+1}(x,L)\,\hbar\partial_\hbar Y(x)\Big) \qquad (I.16)
\end{aligned}$$
(the residue at $x'\to x$ appears when the order of the residues in $x$ and $x'$ is exchanged). Notice that:
$$\hbar\partial_\hbar Y(x) + 2\sum_j \mathop{\rm Res}_{x'\to s_j} B(x,x')\,V(x') + 2\mathop{\rm Res}_{x'\to x} B(x,x')\,V(x') = Y(x) \qquad (I.17)$$
therefore:
$$\begin{aligned}
\hbar\partial_\hbar W^{(g)}_{n+1}(x_0,L)
&= (2-2g-n)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L)
+ \sum_i \mathop{\rm Res}_{x\to s_i} K(x_0,x)\,\Big(\hbar\partial_x W^{(g)}_{n+1}(x,L) - Y(x)\,W^{(g)}_{n+1}(x,L)\Big)\\
&= (2-2g-n)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L)
- \sum_i \mathop{\rm Res}_{x\to s_i} W^{(g)}_{n+1}(x,L)\,\big(Y(x)+\hbar\partial_x\big)K(x_0,x)\\
&= (2-2g-n)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L)
+ \sum_i \mathop{\rm Res}_{x\to s_i} W^{(g)}_{n+1}(x,L)\,G(x_0,x)\\
&= (2-2g-n)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L)
- \mathop{\rm Res}_{x\to x_0} W^{(g)}_{n+1}(x,L)\,G(x_0,x)\\
&= \big(2-2g-(n+1)\big)\,W^{(g)}_{n+1}(x_0,L) + \sum_j \mathop{\rm Res}_{x'\to s_j} V(x')\,W^{(g)}_{n+2}(x_0,x',L) \qquad (I.18)
\end{aligned}$$
i.e. we have proved the theorem at order $2g+n+1$.

J Appendix: Free Energies
Here we consider $g\ge 2$. Recall the homogeneity property:
$$F^{(g)}(\lambda V,\lambda\hbar) = \lambda^{2-2g}\,F^{(g)}(V,\hbar) \qquad (J.1)$$
Here we show that the $F^{(g)}$'s satisfy theorem 3.6. We start from the definition:
$$F^{(g)} = \hbar^{2-2g}\int_0^\hbar d\tilde\hbar\;\tilde\hbar^{2g-3}\,\sum_i \mathop{\rm Res}_{x\to s_i} V(x)\,W^{(g)}_1(x)\,\Big|_{\tilde\hbar} \qquad (J.2)$$
and we compute the loop operator applied to $F^{(g)}$:
$$\begin{aligned}
\delta_x F^{(g)} &= \hbar^{2-2g}\int_0^\hbar d\tilde\hbar\;\tilde\hbar^{2g-3}\,\sum_i \mathop{\rm Res}_{x'\to s_i} \Big(V(x')\,W^{(g)}_2(x',x) + \delta_x V(x')\,W^{(g)}_1(x')\Big)\,\Big|_{\tilde\hbar}\\
&= \hbar^{2-2g}\int_0^\hbar d\tilde\hbar\;\tilde\hbar^{2g-3}\,\Big(\sum_i \mathop{\rm Res}_{x'\to s_i} V(x')\,W^{(g)}_2(x',x) - W^{(g)}_1(x)\Big)\,\Big|_{\tilde\hbar}\\
&= \hbar^{2-2g}\int_0^\hbar d\tilde\hbar\;\tilde\hbar^{2g-3}\;\tilde\hbar^{3-2g}\,\frac{d\big(\tilde\hbar^{2g-2}\,W^{(g)}_1(x)\big)}{d\tilde\hbar}\,\Big|_{\tilde\hbar}
= \hbar^{2-2g}\int_0^\hbar d\big(\tilde\hbar^{2g-2}\,W^{(g)}_1(x)\big)\,\Big|_{\tilde\hbar} \qquad (J.3)
\end{aligned}$$
(the second line uses theorem 3.8 at genus $g$, $n=1$). Since $2g-2>0$, there is no boundary term coming from the bound at $\tilde\hbar=0$, and thus:
$$\delta_x F^{(g)} = W^{(g)}_1(x) \qquad (J.4)$$
Therefore we have proved that the loop operator acting on $F^{(g)}$ is indeed $W^{(g)}_1$, i.e. we have proved theorem 3.6.

K Appendix: $F^{(0)}$

We have defined $F^{(0)}$ as:
$$F^{(0)} = -\hbar\sum_i V(s_i) + \hbar^2\sum_{i\neq j}\ln(s_i-s_j) \qquad (K.1)$$

• Proof of theorem 3.6 for $F^{(0)}$: consider a variation $\delta V$, we have:
$$\begin{aligned}
\delta F^{(0)} &= -\hbar\sum_i \delta V(s_i) - \hbar\sum_i V'(s_i)\,\delta s_i + 2\hbar^2\sum_{i\neq j}\frac{\delta s_i}{s_i-s_j}\\
&= -\hbar\sum_i \delta V(s_i) \qquad\text{(the last two terms cancel by the Bethe ansatz condition)}\\
&= -\sum_i \mathop{\rm Res}_{x\to s_i} \omega(x)\,\delta V(x) \qquad (K.2)
\end{aligned}$$

• Proof of theorem 3.8 for $F^{(0)}$: we have:
$$\begin{aligned}
\hbar\partial_\hbar F^{(0)} &= -\hbar\sum_i V(s_i) + 2\hbar^2\sum_{i\neq j}\ln(s_i-s_j) - \hbar\sum_i \hbar\,\partial_\hbar s_i\,\Big(V'(s_i) - 2\hbar\sum_{j\neq i}\frac{1}{s_i-s_j}\Big)\\
&= -\hbar\sum_i V(s_i) + 2\hbar^2\sum_{i\neq j}\ln(s_i-s_j)\\
&= 2F^{(0)} + \hbar\sum_i V(s_i) = 2F^{(0)} + \sum_i \mathop{\rm Res}_{x\to s_i} \omega(x)\,V(x) \qquad (K.3)
\end{aligned}$$
Therefore:
$$\big(2-\hbar\partial_\hbar\big)\,F^{(0)} = -\sum_i \mathop{\rm Res}_{x\to s_i} V(x)\,\omega(x) \qquad (K.4)$$

L Appendix: $F^{(1)}$

We have defined $F^{(1)}$ as:
$$\begin{aligned}
F^{(1)} &= \frac12\ln(\det A) + \frac{F^{(0)}}{\hbar^2} + \ln\big(\Delta(s)^2\big)\\
&= \frac12\ln(\det A) - \frac1\hbar\sum_i V(s_i) + \sum_{i\neq j}\ln(s_i-s_j) + \sum_{i\neq j}\ln(s_i-s_j)\\
&= \frac12\ln(\det A) - \frac1\hbar\sum_i V(s_i) + 2\sum_{i\neq j}\ln(s_i-s_j) \qquad (L.1)
\end{aligned}$$

• Proof of theorem 3.6 for $F^{(1)}$: Let us start from $W^{(1)}_1$:
$$\begin{aligned}
W^{(1)}_1(x) &= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,W^{(0)}_2(z,z)\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\Big[\frac{A_{i,i}}{(z-s_i)^4} + 2\sum_{j\neq i}\frac{A_{i,j}}{(z-s_i)^2(z-s_j)^2}\Big]\\
&= \sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\frac{A_{i,i}}{(z-s_i)^4}
+ 2\sum_i\sum_{j\neq i}\frac{K'(x,s_i)\,A_{i,j}}{(s_i-s_j)^2}
- 4\sum_i\sum_{j\neq i}\frac{K(x,s_i)\,A_{i,j}}{(s_i-s_j)^3} \qquad (L.2)
\end{aligned}$$
The first term can be computed by integration by parts:
$$\sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\frac{A_{i,i}}{(z-s_i)^4}
= \frac13\sum_i \mathop{\rm Res}_{z\to s_i} K'(x,z)\,\frac{A_{i,i}}{(z-s_i)^3}
= \frac13\sum_i \mathop{\rm Res}_{z\to s_i} \Big(\frac{2}{z-s_i} + \frac{2\omega_i(z)-V'(z)}{\hbar}\Big)K(x,z)\,\frac{A_{i,i}}{(z-s_i)^3}
- \frac{1}{3\hbar}\sum_i \mathop{\rm Res}_{z\to s_i} G(x,z)\,\frac{A_{i,i}}{(z-s_i)^3} \qquad (L.3)$$
where $\omega_i(z)=\omega(z)-\hbar/(z-s_i)$. Therefore:
$$\begin{aligned}
\sum_i \mathop{\rm Res}_{z\to s_i} K(x,z)\,\frac{A_{i,i}}{(z-s_i)^4}
&= \frac1\hbar\sum_i \mathop{\rm Res}_{z\to s_i} \big(2\omega_i(z)-V'(z)\big)\,K(x,z)\,\frac{A_{i,i}}{(z-s_i)^3}
- \frac1\hbar\sum_i \mathop{\rm Res}_{z\to s_i} G(x,z)\,\frac{A_{i,i}}{(z-s_i)^3}\\
&= \frac1\hbar\sum_i A_{i,i}\,\Big[\frac{2\omega_i(z)-V'(z)}{z-s_i}\,K(x,z)\Big]'_{z=s_i}
+ \frac1\hbar\sum_i \mathop{\rm Res}_{z\to s_i} B(x,z)\,\frac{A_{i,i}}{(z-s_i)^2}\\
&= \frac{1}{2\hbar}\sum_i \big(2\omega_i''(s_i)-V'''(s_i)\big)\,K(x,s_i)\,A_{i,i}
- \frac1\hbar\sum_i K'(x,s_i)\,A_{i,i}\,T_{i,i}
+ \frac1\hbar\sum_i \mathop{\rm Res}_{z\to s_i} B(x,z)\,\frac{A_{i,i}}{(z-s_i)^2} \qquad (L.4)
\end{aligned}$$
using $2\omega_i(s_i)-V'(s_i)=0$ (the Bethe condition) and $2\omega_i'(s_i)-V''(s_i)=-T_{i,i}$. Notice that:
$$\sum_k \mathop{\rm Res}_{x\to s_k} K(x,s_i)\,\delta V(x) = \frac1\hbar\sum_j \mathop{\rm Res}_{x\to s_j} \frac{A_{i,j}\,\delta V(x)}{(x-s_j)^2} = \frac1\hbar\sum_j A_{i,j}\,\delta V'(s_j) = -\delta s_i \qquad (L.5)$$
$$\sum_k \mathop{\rm Res}_{x\to s_k} K'(x,s_i)\,\delta V(x) = -\frac1\hbar\,\delta V(s_i) - \sum_{j\neq i}\frac{\delta s_j}{s_i-s_j} \qquad (L.6)$$
$$\begin{aligned}
\sum_k \mathop{\rm Res}_{x\to s_k}\mathop{\rm Res}_{z\to s_i} \frac{B(x,z)}{(z-s_i)^2}\,\delta V(x)
&= \mathop{\rm Res}_{z\to s_i}\sum_k \mathop{\rm Res}_{x\to s_k} \frac{B(x,z)}{(z-s_i)^2}\,\delta V(x)
+ \mathop{\rm Res}_{z\to s_i}\mathop{\rm Res}_{x\to z} \frac{B(x,z)}{(z-s_i)^2}\,\delta V(x)\\
&= -\hbar\,\mathop{\rm Res}_{z\to s_i}\sum_j \frac{\delta s_j}{(z-s_j)^2\,(z-s_i)^2}
+ \frac12\,\mathop{\rm Res}_{z\to s_i}\frac{\delta V'(z)}{(z-s_i)^2}\\
&= 2\hbar\sum_{j\neq i}\frac{\delta s_j}{(s_i-s_j)^3} + \frac12\,\delta V''(s_i) \qquad (L.7)
\end{aligned}$$
That gives:
$$\begin{aligned}
\sum_k \mathop{\rm Res}_{x\to s_k}\mathop{\rm Res}_{z\to s_i} K(x,z)\,\frac{A_{i,i}}{(z-s_i)^4}\,\delta V(x)
&= -\frac12\big(2\omega_i''(s_i)-V'''(s_i)\big)\,\delta s_i\,A_{i,i}
+ \frac1\hbar\,\delta V(s_i)\,A_{i,i}\,T_{i,i}
+ 2\sum_{j\neq i}\frac{\delta s_j}{s_i-s_j}\,A_{i,i}\,T_{i,i}\\
&\quad + 2\hbar\sum_{j\neq i}\frac{\delta s_j}{(s_i-s_j)^3}\,A_{i,i} + \frac12\,\delta V''(s_i)\,A_{i,i}\\
&= \frac12\,\delta(T_{i,i})\,A_{i,i} + \frac1\hbar\,\delta V(s_i)\,A_{i,i}\,T_{i,i} + 2\sum_{j\neq i}\frac{\delta s_j}{s_i-s_j}\,A_{i,i}\,T_{i,i} \qquad (L.8)
\end{aligned}$$
and thus, collecting all the contributions to $\sum_k \mathop{\rm Res}_{x\to s_k} W^{(1)}_1(x)\,\delta V(x)$ and recognizing $\frac12{\rm Tr}\,(A\,\delta T) = \frac12\,\delta\ln\det T$:
$$\sum_k \mathop{\rm Res}_{x\to s_k} W^{(1)}_1(x)\,\delta V(x)
= \frac12\,\delta\ln\det T + \frac1\hbar\sum_j \delta\big(V(s_j)\big) - 2\sum_{i\neq j}\frac{\delta s_j-\delta s_i}{s_j-s_i} \qquad (L.9)$$
That implies:
$$F^{(1)} = -
\frac12\ln\det T - \frac1\hbar\sum_j V(s_j) + 2\sum_{i\neq j}\ln(s_i-s_j) \qquad (L.10)$$
i.e., since $A=T^{-1}$:
$$F^{(1)} = \frac12\ln\det A - \frac1\hbar\sum_j V(s_j) + 2\sum_{i\neq j}\ln(s_i-s_j) \qquad (L.11)$$

M Appendix: Example m = 1

We choose $s_1=0$, and $V'(s) = v_1 s + v_2 s^2 + \sum_{k\ge 3} v_k s^k$. We have:
$$\omega(x) = \frac{\hbar}{x} \qquad (M.1)$$
$$A = \frac{\hbar}{v_1} \qquad (M.2)$$
$$K(x_1,x_2) = \sum_k K_k(x_1)\,x_2^k \qquad (M.3)$$
$$K = 1 v x , K = K = 0 \qquad (M.4)$$
$$K = 1 \hbar x - v \hbar v x \qquad (M.5)$$
$$B(x_1,x_2) = \frac{1}{2(x_1-x_2)^2} + \frac{A}{x_1^2\,x_2^2} \qquad (M.6)$$
$$W^{(0)}_3 = 2 \hbar v x x x ( 1 x + 1 x + 1 x ) - \hbar v v x x x \qquad (M.7)$$
$$W^{(0)}_4 = 6 \hbar v x x x ( 1 x + 1 x + 1 x + 1 x ) + 8 \hbar v x x x ( 1 x x + 1 x x + 1 x x + 1 x x + 1 x x + 1 x x ) - \hbar v v x x x ( 1 x + 1 x + 1 x + 1 x ) + 12 \hbar v v x x x - \hbar v v x x x \qquad (M.8)$$
$$W^{(1)}_1 = 1 \hbar x + 1 v x - v v x \qquad (M.9)$$
$$W^{(1)}_2 = 3 v x x ( 1 x + 1 x + 23 x x ) + 1 \hbar v x x - v v x x ( 1 x + 1 x ) + 4 v v x x - v v x x \qquad (M.10)$$
$$W^{(1)}_3 = 12 v x x x ( 1 x + 1 x + 1 x ) + 12 v x x x ( 1 x x + 1 x x + 1 x x + 1 x x + 1 x x + 1 x x ) + 8 v x x x + 2 \hbar v x x x ( 1 x + 1 x + 1 x ) - v v x x x ( 1 x + 1 x + 1 x + 1 x x + 1 x x + 1 x x ) - v \hbar v x x x + 32 v v x x x ( 1 x + 1 x + 1 x ) - v v x x x - v v x x x ( 1 x + 1 x + 1 x ) + 42 v v v x x x - v v x x x \qquad (M.11)$$
$$W^{(2)}_1 = - \hbar x + 3 \hbar v x - v \hbar v x + 5 v \hbar v x - v \hbar v x - v \hbar v x + 8 v v \hbar v x - v \hbar v x \qquad (M.12)$$
$$W^{(2)}_2 = 15 \hbar v x x ( 1 x + 1 x + 1 x x ) + 12 \hbar v x x ( 1 x x + 1 x x ) - \hbar v x x - v \hbar v x x ( 1 x + 1 x ) - v \hbar v x x ( 1 x x + 1 x x ) + 45 v \hbar v x x ( 1 x + 1 x ) + 40 v \hbar v x x - v \hbar v x x ( 1 x + 1 x ) + 50 v \hbar v x x - v \hbar v x x ( 1 x + 1 x ) - v \hbar v x x + 64 v v \hbar v x x ( 1 x + 1 x ) - v v \hbar v x x + 24 v \hbar v x x - v \hbar v x x ( 1 x + 1 x ) + 50 v v \hbar v x x - v \hbar v x x \qquad (M.13)$$
$$W^{(3)}_1 = 2 \hbar x + 15 \hbar v x - \hbar v x - v \hbar v x + 5 v \hbar v x + 50 v \hbar v x - v \hbar v x - v \hbar v x + 5 v \hbar v x + 60 v \hbar v x - v \hbar v x - v \hbar v x + 3 v \hbar v x$$
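Appendix K gives $F^{(0)}$ explicitly as $F^{(0)} = -\hbar\sum_i V(s_i) + \hbar^2\sum_{i\neq j}\ln(s_i-s_j)$. Under $(V,\hbar)\to(\lambda V,\lambda\hbar)$ the Bethe condition, and hence the roots $s_i$, are unchanged, so $F^{(0)}$ is multiplied by $\lambda^2$, the $g=0$ instance of the homogeneity property (J.1). A numerical check of this scaling, using the Gaussian case $V(x)=x^2/2$ (where the Bethe roots are rescaled Hermite zeros, an assumption used only to produce a valid root configuration):

```python
import numpy as np

def bethe_roots(hbar, N):
    # Gaussian case V(x) = x^2/2: Bethe roots are rescaled Hermite zeros
    h = np.polynomial.hermite.hermroots([0] * N + [1])
    return np.sqrt(2 * hbar) * h

def F0(s, hbar, lam=1.0):
    # F^(0) = -hbar * sum_i V(s_i) + hbar^2 * sum_{i != j} ln|s_i - s_j|
    # evaluated with V replaced by lam*V and hbar by lam*hbar
    V = lam * s**2 / 2
    hb = lam * hbar
    logs = sum(np.log(abs(s[i] - s[j]))
               for i in range(len(s)) for j in range(len(s)) if i != j)
    return -hb * V.sum() + hb**2 * logs

hbar, N, lam = 0.3, 6, 1.7
s = bethe_roots(hbar, N)  # unchanged under (V, hbar) -> (lam*V, lam*hbar)
ratio = F0(s, hbar, lam) / F0(s, hbar)
assert abs(ratio - lam**2) < 1e-9
print(ratio)  # lam**2 = 2.89, up to rounding
```

The check is algebraically exact here: the first term picks up one $\lambda$ from $\hbar$ and one from $V$, the second picks up $\lambda^2$ from $\hbar^2$.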