Spectral properties of ghost Neumann matrices
SISSA/101/2007/EP hep-th/yymm.nnnn
L. Bonora
International School for Advanced Studies (SISSA/ISAS), Via Beirut 2–4, 34014 Trieste, Italy, and INFN, Sezione di Trieste. E-mail: [email protected]

R. J. Scherer Santos
Centro Brasileiro de Pesquisas Fisicas (CBPF-MCT)-LAFEX, R. Dr. Xavier Sigaud, 150 - Urca - Rio de Janeiro - Brasil - 22290-180. E-mail: [email protected]

D. D. Tolla
Center for Quantum SpaceTime (CQUEST), Sogang University, Shinsu-dong 1, Mapo-gu, Seoul, Korea. E-mail: [email protected]
Abstract:
We continue the analysis of the ghost wedge states in the oscillator formalism by studying the spectral properties of the ghost matrices of Neumann coefficients. We show that the traditional spectral representation is not valid for these matrices and propose a new heuristic formula that allows one to reconstruct them from the knowledge of their eigenvalues and eigenvectors. It turns out that additional data, which we call boundary data, are needed in order to actually implement the reconstruction. In particular our result lends support to the conjecture that there exists a ghost three strings vertex with properties parallel to those of the matter three strings vertex.
Keywords:
String Field Theory, Ghost Wedge States, Star Product.

Contents

1. Introduction
2. The three strings vertex
3. The diagonal recursive relations for wedge states
4. Matrix reconstruction from the spectrum
5. Conclusion
A. The ghost Neumann coefficients
B. Why we can use long square matrices
C. Proof of eq.(3.5)
1. Introduction
This paper is complementary to the analysis, started in [1], of the conjectured equivalence

e^{−(n−2)/2 (L_0^{(g)} + L_0^{(g)†})} |0⟩ = N_n e^{c† S_n b†} |0⟩ ≡ |n⟩   (1.1)

which is a crucial one in the recent developments in open string field theory [2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 9, 13, 14]. Here |n⟩ are the ghost wedge states in the oscillator formalism [15, 16, 17, 18, 19], which of course must coincide with the corresponding surface wedge states. In [1] we dealt with the LHS of this equation. We showed that, if we understand it ordered according to the natural normal ordering, it can be cast into the midterm form in (1.1), and we diagonalized the matrix S_n in such a squeezed state. Then we proved that, if we are allowed to star-multiply the squeezed states representing the ghost wedge states |n⟩ the same way we do for the matter wedge states and diagonalize the corresponding matrices, the eigenvalues we obtain in the two cases are the same. In this paper we focus on the spectral properties of operators like S_n and, among other things, we exhibit evidence that the above if is justified.

To be more precise, in order to fully prove (1.1) we have two possibilities. The most direct alternative is to define the three strings vertex for the ghost part, and thus the star product, pertinent to the natural normal ordering in the oscillator formalism; then to construct the wedge states appropriate for this vertex; finally to diagonalize the latter and show that they coincide with the midterm of (1.1) (with some additional specifications that will be clarified in due course). Unfortunately the construction of the ghost vertex is not as straightforward as one would hope. Relying on the common lore on this subject, we face a large number of possibilities, mostly linked to the ghost zero mode insertions, and our attempts in this direction so far have been unfruitful.
Before continuing with such a challenging program it is wise to gather some evidence that the vertex one is looking for does exist, and some indirect information about it. This is the original motivation of the present paper, which relies on the second alternative.

Having diagonalized the matrices S_n in the midterm of (1.1) in the basis of weight 2 differentials, see [1], one may wonder whether one can reconstruct the original matrices. For the matter part this is a standard procedure: one simply uses the spectral representation of the infinite matrices involved. But for the ghost sector we are interested in here things are more complicated (it should be recalled that the infinite matrices S_n are not square but lame, i.e. infinite rectangular). Ultimately, the answer is: yes, we can reconstruct the S_n matrices; in other words, we can derive the RHS of (1.1) from the LHS, but the procedure is more involved than in the matter case. In fact the traditional spectral representation is not valid for lame matrices and we have to figure out a new heuristic formula that allows us to reconstruct them from their eigenvalues and eigenvectors. It turns out that additional data, which we call boundary data, are needed in order to actually implement the reconstruction. Once this is done we can extract from them basic information about the Neumann coefficient matrices of the ghost three strings vertex.

The main results of our paper are the study of the spectral properties of the infinite matrices S_n in the b-c ghost bases, the reconstruction recipe for such infinite matrices (which is an interesting result in itself) and the evidence for the existence of the three-strings vertex we need for the ghost sector in the natural normal ordering.

The paper is organized as follows.
In section 2, which is essentially pedagogical, we present an example of a three strings vertex which is not the one we are looking for, as it cannot be diagonalized in the weight 2 basis, but has all the other good properties we expect of the true vertex. This example also illustrates the problems one comes across in constructing a ghost three strings vertex. In section 3 we make contact with the results of [1] and give a more detailed proof that the squeezed states in the midterm of (1.1) have the same eigenvalues as the ghost wedge states in the oscillator formalism. We clarify that this is not enough to prove that (1.1) holds, and, in section 4, we show where the problem lies and propose a new heuristic formula for the reconstruction of infinite lame matrices. Finally section 5 is devoted to our conclusions. Three Appendices contain details of the calculations needed in the text. In particular Appendix C presents a new proof of the fundamental eq.(3.5).

Notation. Any infinite matrix we meet in this paper is either square short or long legged, or lame. In this regard we will often use a compact notation: a subscript s will represent an integer label n running from 2 to ∞, while a subscript l will represent a label running from −1 to ∞. So Y_ss, Y_ll will denote square short and long legged matrices, respectively; Y_sl, Y_ls will denote short-long and long-short lame matrices, respectively. With the same meaning we will say that a matrix is (ll), (ss), (sl) or (ls). In a similar way we will denote by V_s and V_l a short and a long infinite vector, to which the above matrices naturally apply. Moreover, while n, m represent generic matrix indices, at times we will use n̄, m̄ to stress that they are short, i.e. n̄, m̄ ≥ 2.
2. The three strings vertex
This section is mostly pedagogical. We would like here to explain what the problems are with defining a three strings vertex for the ghost sector that fits the purposes of proving eq.(1.1). The first problem we have to face is normal ordering. We will have in mind two main cases of normal ordering, those we have called natural and conventional normal ordering in [1]. The former is the obvious normal ordering required when the vacuum is |0⟩, the latter is instead appropriate to the vacuum state c_1|0⟩. A consistent vertex for conventional normal ordering exists: it is the one explicitly computed by Gross and Jevicki, [18], who used the vacuum c_0 c_1 |0⟩ (for general problems connected with the ghost sector, see [26, 33, 32, 37]). But it is not what we need in the natural normal ordering case. A second problem is generated by the ghost insertions, which are free: there is no a priori principle to fix them. We know however that a certain number of conditions should be satisfied. One is BRST invariance of the three strings vertex. This is unfortunately hard to translate into a practical recipe for construction. Other conditions, i.e. cyclicity, bpz-compatibility and commutativity of the Neumann coefficient matrices, are more useful from a constructive point of view.

In the sequel we will consider a definite example. Even though it turns out not to be the right vertex we are looking for, it will allow us to illustrate many questions which would sound rather abstruse in the abstract.

To start with we first recall the relevant anti-commutator and bpz rules

[c_n, b_m]_+ = δ_{n+m,0},   bpz(c_n) = −(−1)^n c_{−n},   bpz(b_n) = (−1)^n b_{−n}

Then we define the state |0̂⟩ = c_{−1} c_0 c_1 |0⟩, where |0⟩ is the SL(2,R)-invariant vacuum, the tensor product of states

⟨ω̂| = ⟨0̂| ⟨0̂| ⟨0|   (2.1)

carrying total ghost number 6, and

|ω⟩ = |0⟩ |0⟩ |0̂⟩   (2.2)

carrying total ghost number 3. They satisfy ⟨ω̂|ω⟩ = 1.
Finally we write down the general form of the three strings vertex

⟨V̂| = K ⟨ω̂| e^{Ê},   Ê = −Σ_{r,s=1}^{3} Σ_{n,m} c_n^{(r)} V̂^{rs}_{nm} b_m^{(s)}   (2.3)

The dual vertex is

|V⟩ = K e^{E} |ω̂⟩,   E = Σ_{r,s=1}^{3} Σ_{n,m} c_n^{(r)†} V^{rs}_{nm} b_m^{(s)†}   (2.4)

The range of m, n is not specified. However, for reasons that will become clear later, we would like to interpret the matrices V̂^{rs}_{nm} and V^{rs}_{nm} as square long-legged matrices (ll). But, as soon as we try to evaluate, for instance, contractions like ⟨V̂|ω⟩ in order to compute the constant K, a problem arises, linked to the presence in the exponent (2.3) of couples of conjugate operators c_0, b_0, c_{−1}, b_1 and c_1, b_{−1}. In order to appreciate this problem let us consider the simple case of e^{c_0 V b_0}. Interpreting this expression literally one gets

e^{c_0 V b_0} = 1 + c_0 V b_0 + (1/2) c_0 V b_0 c_0 V b_0 + ... = 1 + c_0 (V + (1/2) V² + ...) b_0 = 1 + c_0 (e^V − 1) b_0   (2.5)

It follows that, when inserted in ⟨V̂|ω⟩, a term like this does not yield 1, as one would expect. Moreover if, instead of the single zero mode we have considered here for simplicity, we had three, the result would be even more complicated. All this is not natural. Let us recall that the meaning of V̂^{rs}_{nm} (see [20] and below) is the coefficient of the monomial z^{m+1} w^{n−2} in the expansion of ⟨V̂| R(c^{(s)}(z) b^{(r)}(w)) |ω⟩ in powers of z and w (with opposite sign). Therefore interpreting the exponentials in (2.3) as in (2.5) is misleading. It is clear that what they really mean is something else. To adapt the oscillator formalism to the desired meaning we proceed as follows.

Let us introduce new conjugate operators η_a, ξ†_a, a = −1, 0,
1, in addition to c_n, b_m, such that

[η_a, ξ†_b]_+ = δ_{ab}   (2.6)

and they anticommute with all the other oscillators. Moreover we require them to satisfy

η_a |0⟩ = 0,   ⟨0| ξ†_a = 0   (2.7)

while

⟨0| η_a ≠ 0,   ξ†_a |0⟩ ≠ 0   (2.8)

Now let us replace in the exponent of (2.3) c_a with η_a (but not c†_a in the exponent of (2.4) with η†_a) and b†_a in the exponent of (2.4) with ξ†_a (but not b_a in the exponent of (2.3) with ξ_a — in fact c†_a and b_a will not be needed). With these rules ⟨V̂|ω⟩ = K straightforwardly. The matrices V̂^{rs}_{nm} and V^{rs}_{nm} are then naturally square long legged. The interpretation of V̂^{rs}_{nm} as the negative coefficient of order z^{m+1} and w^{n−2} in the expansion of ⟨V̂| R(c^{(s)}(z) b^{(r)}(w)) |ω⟩ in powers of z and w remains valid, provided one replaces b^{(r)†}_{−1}, b^{(r)†}_0, b^{(r)†}_1 in b^{(r)}(w) with ξ^{(r)†}_{−1}, ξ^{(r)†}_0, ξ^{(r)†}_1.

We stress again that the substitution of c_a with η_a and b†_a with ξ†_a is dictated by the requirement of consistency of the interpretation of the Neumann coefficients as expansion coefficients of the b-c propagator.

2.1 Ghost Neumann coefficients and their properties

It is time to go to a concrete example. To this end one has to explicitly compute V̂^{rs}_{nm} and V^{rs}_{nm} in (2.3, 2.4). The method is well known: one expresses the propagator with zero mode insertions ⟪c(z) b(w)⟫ in two different ways, first as a CFT correlator and then in terms of V̂, and one equates the two expressions after mapping them to the disk via the maps

f_i(z_i) = α^{2−i} f(z_i),   i = 1, 2, 3,   f(z) = ((1 + iz)/(1 − iz))^{2/3}   (2.10)

Here α = e^{2πi/3} is one of the three third roots of unity. However this recipe leaves several uncertainties, due especially to the ghost insertions. For concreteness, in Appendix A we make a specific choice of these insertions, in a way the simplest one: we set the insertions at infinity.
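As an aside, the zero-mode identity (2.5) above is easy to check numerically in a minimal matrix representation of a single conjugate pair; the following sketch (the 2×2 representation and names are ours, not part of the paper's formalism) verifies the resummation:

```python
import numpy as np
from scipy.linalg import expm

# Minimal faithful representation of one conjugate ghost pair:
# c0, b0 act on a 2-dimensional Fock space with {c0, b0} = 1, c0^2 = b0^2 = 0.
c0 = np.array([[0., 1.], [0., 0.]])
b0 = np.array([[0., 0.], [1., 0.]])
assert np.allclose(c0 @ b0 + b0 @ c0, np.eye(2))

V = 0.7  # arbitrary coefficient
# Literal reading of the exponential, as in (2.5):
lhs = expm(V * c0 @ b0)
# Resummed form 1 + c0 (e^V - 1) b0:
rhs = np.eye(2) + (np.exp(V) - 1.0) * c0 @ b0
assert np.allclose(lhs, rhs)
```

The nonvanishing remainder c_0(e^V − 1)b_0 is exactly the term that prevents a naive reading of contractions like ⟨V̂|ω⟩ from collapsing to K.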
Even so there remain some uncertainties, which we fix by requiring certain properties, in particular cyclicity, consistency with the bpz operation and commutativity of the twisted matrices of Neumann coefficients (the motivation for the latter will become clear further on). With this (arbitrary) choice, the ghost Neumann coefficients worked out in Appendix A satisfy the following set of properties:

• cyclicity

V̂^{rs}_{nm} = V̂^{r+1,s+1}_{nm}   (2.11)

• bpz consistency

(−1)^{n+m} V^{rs}_{nm} = V̂^{rs}_{nm}   (2.12)

• commutativity. Its meaning is the following. Defining X = ĈV̂^{rr}, X_+ = ĈV̂^{r,r+1}, X_− = ĈV̂^{r,r−1} (independent of r by cyclicity), we have

X^{rs} X^{r′s′} = X^{r′s′} X^{rs}   (2.13)

for all r, s, r′, s′. In addition we have

X + X_+ + X_− = 1   (2.14)

and

X_+ X_− = X² − X,   X² + (X_+)² + (X_−)² = 1   (2.15)

It should be stressed that all the X^{rs} matrices are (ll).

2.2 Formulas for wedge states

Our next goal is to define recursion relations for the ghost wedge states. To start with we define the star product of squeezed ghost states of the form

|S⟩ = N exp(c† S b†) |0⟩   (2.16)

We notice that, since the vacuum is |0⟩, we are implicitly referring to the natural normal ordering. The star product of two such states |S₁⟩ and |S₂⟩ is the bpz of the state

⟨V̂| |S₁⟩ |S₂⟩   (2.17)

However this formula needs some specifications. We remark that the problem pointed out above, linked to the presence of couples of conjugate oscillators in the exponents, is present both in (2.16) and (2.17). We solve it as we did in section 2, with the help of the additional oscillators η_a, ξ†_b. We interpret for instance (2.16) as follows. We replace the new oscillators in it as in section 2, then we exploit the anticommutativity properties of the latter to move them to the right and apply them to |0⟩, then we substitute back b†_a in the place of ξ†_a. The upshot of this operation is that no b†_a oscillator will survive and the state (2.16) takes the form

|S⟩ = N exp( Σ_{n=−1}^{∞} Σ_{m=2}^{∞} c†_n S_{nm} b†_m )
|0⟩   (2.18)

That is, the matrix S_{nm} in the exponent is lame (ls). This is the precise meaning we attach to (2.16). Let us notice that the bpz dual expression of (2.18) is

⟨S| = N ⟨0| exp(−c ĈSĈ b)   (2.19)

The matrix S here is (ll).

After this specification let us define the star product of |S₁⟩ and |S₂⟩. Let us recall the three strings vertex (2.3, 2.4). Remembering the discussion before (2.18), we conclude that V̂^{rs}_{nm} is (sl) for r = 1, 2 and (ll) for r = 3, while V^{rs}_{nm} is (ls) for r = 1, 2 and (ll) for r = 3.

In evaluating this product we will have to evaluate vev's of the type

⟨0̂| exp(c F b + c μ + λ b) exp(c† G b† + θ b† + c† ζ) |0⟩   (2.20)

Here we are using an obvious compact notation: F, G denote matrices F_{nm}, G_{nm}, and λ, μ, θ, ζ are anticommuting vectors λ_n, μ_n, θ_n, ζ_n. We expect the result of this evaluation to be

⟨0̂| exp(c F b + c μ + λ b) exp(c† G b† + θ b† + c† ζ) |0⟩
= det(1 + FG) exp( −θ (1+FG)^{−1} F ζ − λ (1+GF)^{−1} G μ − θ (1+FG)^{−1} μ + λ (1+GF)^{−1} ζ )   (2.21)

In order for this formula to hold in (2.20), the operators denoted b, c must be creation operators with respect to ⟨0̂| and annihilation operators with respect to the |0⟩ vacuum. Viceversa the oscillators denoted c†, b† must all be creation operators with respect to |0⟩, and annihilation operators with respect to ⟨0̂|. But this is precisely what happens if we assume the definition (2.18) for the squeezed states and (2.3) for the vertex with the summation over n starting from 2 (which is consistent with the interpretation by means of ξ†_a and η_a, as before (2.18)).

Therefore it is correct to use formulas like (2.21) in order to evaluate the star product (2.17), but in this case the matrices F and G will be lame ((ls) or (sl) as the case may be), while analogous considerations apply to the vectors λ, μ, θ, ζ (λ, ζ are long vectors, while μ, θ are short). The star product of two squeezed states like (2.16) is |S₁⟩ ⋆ |S₂⟩ = |S⟩, where the state in the RHS has the same form as (2.16), with the matrix S replaced by S = ĈT. The latter is given by the familiar formula

T = X + (X_+, X_−) (1 − ΣV)^{−1} Σ (X_−, X_+)^T   (2.22)

where the last factor is a column vector and

Σ = ( ĈS₁  0 ; 0  ĈS₂ ),   V = ( X  X_+ ; X_−  X )   (2.23)

The normalization of |S⟩ is given by

N = N₁ N₂ K det(1 − VΣ)   (2.24)

Notice that in this formula the four matrices in VΣ are (ss).

These expressions are well defined. However, since they are expressed in terms of lame matrices, we cannot operate with them in the same way we usually do with the analogous matrices of the matter sector. For that one needs the identities proved in the previous subsection, which are only valid for long square matrices. Luckily, in the case of the wedge states it is possible to overcome this difficulty.

When computing a star product we would like to be able to apply the formulas of subsection 2.1, which are expressed in terms of long square matrices. To this end we would like (2.21) to be expressed in terms of long square matrices, rather than of lame matrices. This is possible at the price of some modifications.

Let us introduce the new conjugate operators η_a, ξ†_a, a = −1, 0,
1, as above, see (2.6, 2.7, 2.8), and let us replace in (2.21) c_a (but not c†_a) with η_a and b†_a (but not b_a) with ξ†_a. Then in the RHS long square matrices and long vectors will feature (instead of lame matrices and short or long vectors). In the sequel we will use (2.21) in this sense. Such modifications of course are not for free: we have to justify them. We will show later on that in the case of the wedge states such a move is justified. (In the previous cases the introduction of the new oscillators was simply an auxiliary tool to help us interpret such formulas as (2.18). We could have done without them by ad hoc definitions. But now we are tampering with vev's, therefore we have to make sure that we do not modify anything essential.)

Once this is done the calculation of the star product works smoothly, without any substantial difference with respect to the matter case. The formulas are the same eqs. (2.22, 2.23) and (2.24) above, but expressed in terms of long square matrices, to which we can apply the identities of subsection 2.1. This allows us to treat the ghost squeezed states in a way completely similar to the matter squeezed states. Of course it remains for us to comply with the promise we made of showing that we are allowed to use long square matrices.

The wedge states are now defined to be squeezed states |n⟩ ≡ |S_n⟩ that satisfy the recursive star product formula

|n⟩ ⋆ |m⟩ = |n + m − 1⟩   (2.25)

This implies that the relevant matrices T_n = ĈS_n satisfy the recursion relation

T_{n+m−1} = [X − (T_n + T_m)X + T_n T_m] [1 − (T_n + T_m)X + T_n T_m X]^{−1}   (2.26)

or

T_{n+1} = X (1 − T_n)(1 − T_n X)^{−1}   (2.27)

and the normalization constants are given by

N_{n+1} = N_n K det(1 − T_n X)   (2.28)

These relations are derived under the hypothesis that T_n and X, X_+, X_− commute, and by using the identities of subsection 2.1. The solution to (2.27) is well known, [36, 37]. We repeat the derivation in order to stress its uniqueness.
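Parenthetically, the determinant structure underlying formulas like (2.21) and (2.24) can be spot-checked numerically. The sketch below (our construction; the Jordan–Wigner representation, mode count and labels are ours) verifies the source-free part of (2.21), ⟨0|e^{cFb} e^{c†Gb†}|0⟩ = det(1 + FG), for a small random case:

```python
import numpy as np
from scipy.linalg import expm

def jw_annihilators(M):
    """Jordan-Wigner matrices a_0..a_{M-1} with {a_i, a_j^dag} = delta_ij."""
    A = np.array([[0., 1.], [0., 0.]])   # single-site annihilator; vacuum is (1, 0)
    Z = np.diag([1., -1.])
    ops = []
    for k in range(M):
        factors = [Z] * k + [A] + [np.eye(2)] * (M - k - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

N = 2                       # two (c, b) conjugate pairs -> 4 Jordan-Wigner modes
a = jw_annihilators(2 * N)  # identify c_i = a_i, b_i^dag = a_i^dag, b_i = a_{N+i}, c_i^dag = a_{N+i}^dag
rng = np.random.default_rng(0)
F = rng.normal(size=(N, N))
G = rng.normal(size=(N, N))
cFb = sum(F[i, j] * a[i] @ a[N + j] for i in range(N) for j in range(N))        # c F b
cGb = sum(G[i, j] * a[N + i].T @ a[j].T for i in range(N) for j in range(N))    # c† G b†
vac = np.zeros(4 ** N); vac[0] = 1.0
amp = vac @ expm(cFb) @ expm(cGb) @ vac
assert np.isclose(amp, np.linalg.det(np.eye(N) + F @ G))
```

Here the pairs (c_i, b_i†) and (b_i, c_i†) are canonical, mirroring [c_n, b_m]_+ = δ_{n+m,0}; the fermionic determinant appears with positive power, as in (2.24).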
We require that |2⟩ coincide with the vacuum |0⟩, both for the matter and the ghost sector. (It is worth recalling that our purpose in this paper is to complete the proof, started in [1], of (1.1), e^{−(n−2)/2 (L_0^{(g)} + L_0^{(g)†})}|0⟩ = N_n e^{c†S_n b†}|0⟩ ≡ |n⟩, that is, that the LHS does represent the ghost wedge states. In this light the requirement that the wedge state |n⟩ with n = 2 coincide with the vacuum state is natural.)

This implies in particular that T_2 = 0 and N_2 = 1, which entails from (2.27) that T_3 = X, T_4 = X(1 − X)/(1 − X²), etc. That is, T_n is a uniquely defined function of X. But X can be uniquely expressed in terms of the sliver matrix T

X = T (T² − T + 1)^{−1}   (2.29)

a formula whose inverse is well known, [34, 21]

T = (1/(2X)) (1 + X − √((1 − X)(1 + 3X)))   (2.30)

Therefore T_n can be expressed as a uniquely defined function of T. Now consider the formula

T_n = (T + (−T)^{n−1}) (1 − (−T)^n)^{−1}

It satisfies (2.27) as well as the condition T_2 = 0, therefore it is the unique solution to (2.27) we were looking for.

So far the states |n⟩ have been defined solely in terms of the three strings vertex. One might ask what is their connection with the wedge states defined as surface states, [25, 27, 35, 2]. This connection can be established: it can be shown that, with the appropriate insertion of the zero modes, the surface wedge matrix S_3 is actually V^{rr}, i.e. T_3 = X.

It is simple to see that similarly (2.28) has a unique solution satisfying N_2 = 1 and K = N_3.

What we have done so far is all very good, but the concrete example of vertex constructed in Appendix A is only academical, as the following remark shows. In [1] we diagonalized the LHS of (1.1) on the basis of weight 2 differentials, in which the operator K is diagonal. In order to be able to compare this result with the wedge states defined above we have to make sure that also the matrices T_n, X, X_+, X_− can be diagonalized in the same basis.
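At the level of a single eigenvalue, where all the matrices above reduce to commuting numbers, the uniqueness argument can be checked directly. This sketch (ours) verifies that T_n = (T + (−T)^{n−1})/(1 − (−T)^n) satisfies T_2 = 0, T_3 = X with X as in (2.29), the recursion (2.27), and the composition rule (2.26) in the form used here:

```python
# Scalar (eigenvalue-level) check of the wedge recursion relations.
def T_n(T, n):
    """Closed-form solution T_n = (T + (-T)^(n-1)) / (1 - (-T)^n)."""
    return (T + (-T) ** (n - 1)) / (1 - (-T) ** n)

T = -0.5                      # any sliver eigenvalue with |T| < 1
X = T / (T * T - T + 1)       # eq. (2.29)

assert abs(T_n(T, 2)) < 1e-14          # T_2 = 0: |2> is the vacuum
assert abs(T_n(T, 3) - X) < 1e-14      # T_3 = X
for n in range(2, 12):
    # recursion (2.27): T_{n+1} = X (1 - T_n) / (1 - T_n X)
    t = T_n(T, n)
    assert abs(T_n(T, n + 1) - X * (1 - t) / (1 - t * X)) < 1e-12
for n in range(2, 8):
    for m in range(2, 8):
        # composition rule (2.26)
        tn, tm = T_n(T, n), T_n(T, m)
        lhs = T_n(T, n + m - 1)
        rhs = (X - (tn + tm) * X + tn * tm) / (1 - (tn + tm) * X + tn * tm * X)
        assert abs(lhs - rhs) < 1e-12
```

Of course this only probes the commutative (diagonal) algebra; the matrix statement is exactly what the rest of the paper is after.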
In this subsection we will discuss this problem. Let us recall the definition of K:

K = Σ_{p,q ≥ −1} c†_p G_{pq} b_q + Σ_{p̄,q̄ ≥ 2} b†_p̄ H_{p̄q̄} c_q̄ − 3 c_2 b_{−1}   (2.31)

where

G_{pq} = (p − 1) δ_{p+1,q} + (p + 1) δ_{p−1,q},   H_{p̄q̄} = (p̄ + 2) δ_{p̄+1,q̄} + (p̄ − 2) δ_{p̄−1,q̄}   (2.32)

G is a square long-legged matrix and H a square short-legged one. In the common overlap we have G = H^T. We notice immediately that K annihilates the vacuum

K |0⟩ = 0   (2.33)

What is important for us is that the action of K commutes with the matrices we want to diagonalize. Now let T_n = ĈS_n, where S_n is the matrix of the squeezed state representing |n⟩. We have seen that T_n can be either lame or square (ll). Since we want to diagonalize (2.27) we must consider the second alternative. But in order to arrive at square (ll) matrices, at the beginning of this section we introduced into the game the conjugate oscillators η_a, ξ†_a, a = −1, 0,
1. Therefore, to be consistent, they must appear also in the oscillator representation of K. This can be done as follows. We write down K as

K = Σ_{p,q ≥ −1} c†_p G_{pq} b_q + Σ_{p,q ≥ −1} b†_p H_{pq} c_q   (2.34)

where G and H have the same expression as before, but now also H is square long legged and H = G^T. What is important is that in the expression b†Hc we understand that b†_a is replaced by ξ†_a and c_a is replaced by η_a (for simplicity we dispense with writing the new K explicitly). If we write

b(z) = Σ_{n≥2} b_n z^{−n−2} + Σ_{−1≤a≤1} ξ†_a z^{−a−2} + Σ_{n≥2} b†_n z^{n−2}   (2.35)

c(z) = Σ_{n≥2} c_n z^{−n+1} + Σ_{−1≤a≤1} η_a z^{−a+1} + Σ_{n≥2} c†_n z^{n+1}   (2.36)

we find the expected conformal action of K on these fields. For instance

[K, b(z)] = −Σ_n ((n − 1) b_{n+1} + (n + 1) b_{n−1}) z^{−n−2} = (1 + z²) ∂b(z) + 4z b(z)

after replacing back b†_a for ξ†_a.

On the basis of this discussion we expect therefore that

[G, T_n] = 0   (2.37)

as square long legged matrices. In particular we should find that G commutes with X. One can however show that this is not the case for the vertex explicitly constructed in Appendix A. Therefore that vertex has many good properties, but not this one. However we will show below that it is very plausible that a three strings vertex that satisfies also (2.37) exists. Therefore in the sequel we imagine that we have done everything with this vertex and will try to justify its existence a posteriori.
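The mode form of the commutator just quoted can be verified symbolically. The small sympy script below (ours, working on a truncated series) checks that −Σ((n−1)b_{n+1} + (n+1)b_{n−1}) z^{−n−2} reproduces (1 + z²)∂b(z) + 4z b(z) for all interior modes of the truncation:

```python
import sympy as sp

z = sp.symbols('z')
N = 6  # truncation: keep modes b_n with |n| <= N
b = {n: sp.Symbol(f'b_{n}') for n in range(-N - 1, N + 2)}

# truncated weight-2 field b(z) = sum_n b_n z^(-n-2)
bz = sum(b[n] * z ** (-n - 2) for n in range(-N, N + 1))
geom = sp.expand((1 + z ** 2) * sp.diff(bz, z) + 4 * z * bz)

# mode expression [K, b(z)] = -sum_n ((n-1) b_{n+1} + (n+1) b_{n-1}) z^(-n-2)
modes = sp.expand(-sum(((n - 1) * b[n + 1] + (n + 1) * b[n - 1]) * z ** (-n - 2)
                       for n in range(-N + 1, N)))

# the two agree mode by mode away from the truncation edges
for n in range(-N + 1, N):
    assert sp.simplify(modes.coeff(z, -n - 2) - geom.coeff(z, -n - 2)) == 0
```

The coefficients (n ∓ 1) are exactly the entries of G in (2.32), i.e. the vector field (1 + z²)∂ acting on a weight 2 field.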
3. The diagonal recursive relations for wedge states
So far we have worked, so to speak, on the RHS of eq.(1.1). It is now time to make a comparison with the left hand side. In [1] we showed that

e^{−(n−2)/2 (L_0^{(g)} + L_0^{(g)†})} |0⟩ = e^{η(t)} e^{c† α(t) b†} |0⟩   (3.1)

where t = (2 − n)/2,

α(t) = A sinh(√((D^T)² − BA) t) / [ √((D^T)² − BA) cosh(√((D^T)² − BA) t) − D^T sinh(√((D^T)² − BA) t) ]   (3.2)

and

η(t) = −∫_0^t dt′ tr(B α(t′))   (3.3)

where A, B, D^T are matrices extracted from L_0^{(g)} + L_0^{(g)†}. In particular D^T as well as the combination (D^T)² − BA are (ss) matrices, while A is lame (ls). The purpose of the paper was to show that the RHS of (3.2), multiplied by the twist matrix Ĉ, does satisfy the recursion relations (2.27) and (2.28). This was achieved by diagonalizing the matrices Ã = ĈA, D^T and (D^T)² − BA on the weight 2 basis V^{(2)}_n(κ), with n = 2, 3, .... We concluded that if we are allowed to replace in (2.27, 2.28) the matrices by their eigenvalues in such a basis, the recursion relations can be shown to be true. What remained to be proved was precisely the correctness of replacing in such formulas the matrices by their eigenvalues. We are now in the position to do it.

Let us examine first (3.2), multiplied from the left by the twist matrix. The RHS is the product of Ã by a matrix which is diagonal in the weight 2 basis and is of type (ss). Therefore when we apply the latter to a vector V^{(2)}_s with components (V^{(2)}_2(κ), V^{(2)}_3(κ), ...), we obtain the same vector multiplied by the eigenvalue. When we next apply Ã from the left to the resulting vector, things are a little bit more complicated, because Ã is an (ls) matrix. The vector ensuing from the operation would seem to have three additional entries with n = −1, 0,
1, therefore making meaningless even the idea of eigenvalue and eigenvector. However it was shown in [1] that

Σ_{q=2}^{∞} Ã_{pq} V^{(2)}_q(κ) = a(κ) V^{(2)}_p(κ),   p = 2, 3, ...   (3.4)

Σ_{q=2}^{∞} Ã_{aq} V^{(2)}_q(κ) = 0,   a = −1, 0, 1   (3.5)

That is, not only is the (ss) submatrix of Ã diagonal in the weight 2 basis with eigenvalue a(κ), but the potential additional vector elements vanish. This allows us to conclude that the same property is shared by the matrix α(t). That is, when applying α(t) to the weight 2 basis vector V^{(2)}_s as above, we obtain the corresponding eigenvalue multiplying the same vector (without additional components): α(t) V^{(2)}_s = α(κ, t) V^{(2)}_s.

Now let us apply the above to (2.27). The latter is formulated in terms of long square matrices whose (ls) part has the form α(t). Now we can interpret (2.27) as an infinite series expansion, in which each term is a monomial of (possibly different) matrices whose (ls) part has the form α(t). Let us consider the weight 2 basis vector V^{(2)}_s extended by adding three 0 components in positions −1, 0,
1, and let us call it V^{(2)}_l. When we apply any of the above matrices to it, we get the same extended vector multiplied by the matrix eigenvalue: for instance α(t) V^{(2)}_l = α(κ, t) V^{(2)}_l. Therefore we can repeat the operation as many times as needed for any monomial and obtain the same vector multiplied by the monomial in which each matrix is replaced by its eigenvalue. Re-summing the series we obtain that the relation (2.27) applied to the weight 2 basis vector becomes a relation of the same form with the matrices replaced by the corresponding eigenvalues. But in [1] we checked that this relation for the eigenvalues is true.

This, which is the main argument in [1] and the present paper, and is intended to lead to the proof of (1.1), has been laid out so far in a rather patchy way due to its complexity. For the sake of clarity it is worth reviewing it in full, even at the price of some repetitions. We wish to show that the eigenvalues of the matrices S_n in (1.1) satisfy (2.27) and (2.28). This sentence has to be unambiguously understood. First we notice that T_2 = 0, which is consistent with |2⟩ being identified with the vacuum |0⟩, and N_2 = 1. Now proving (2.27) means proving two things:

T_3 = X   (3.6)

and

T_{n+1} = T_3 (1 − T_n)(1 − T_n T_3)^{−1}   (3.7)

This second equation is demonstrated by setting

T_n ≡ ĈS_n = α̃((2 − n)/2)   (3.8)

and using α̃ given by eq.(3.2). This gives the explicit expression

T_n = −Ã sinh(√∆ (n−2)/2) / [ √∆ cosh(√∆ (n−2)/2) + D^T sinh(√∆ (n−2)/2) ]   (3.9)

where ∆ = (D^T)² − BA.
On the basis of the remarks made at the beginning of this section, we replace everywhere the matrices by their eigenvalues

√∆ = π|κ|,   Ã = κπ / sinh(κπ),   D^T = κπ coth(κπ)   (3.10)

By inserting (3.8) into (3.7) one can see that the latter is satisfied (see section 2.5 of [1] for details) if

D^T + Ã = √∆ coth(κπ/2)

This is immediately verified using (3.10). Next, in order to prove (3.6), we recall that (3.7) can be solved by

T_n = (T + (−T)^{n−1}) (1 − (−T)^n)^{−1}   (3.11)

for some matrix T. This matrix is easily identified to be T ≡ T_∞ (this makes sense because the absolute value of the eigenvalue of T turns out to be < 1): T = −e^{−|κ|π}. But, from the defining eq.(2.25), T represents the sliver [34, 21, 22]. Therefore it is related to X by eq.(2.30) or by its inverse

X = T (T² − T + 1)^{−1}   (3.12)

This is precisely (3.11) for n = 3. Therefore (3.6) is satisfied and in addition this tells us that the eigenvalue of X is

X = −1 / (1 + 2 cosh(κπ))   (3.13)

Since the recursive constraints propagate this identification to all the wedge states, this completes our proof.

Let us come to the normalization constants N_n. They must satisfy a recursion relation

N_n K det(1 − T_n X) = N_{n+1}   (3.15)

where K is some constant to be determined. We fix it by requiring that N_2 = 1, so that the wedge state |2⟩ coincides with the vacuum |0⟩. We have

η_n = −∫_0^{t_n} dt tr(B α) = −∫_0^{t_n} dt tr(Ã α̃)   (3.16)

where the trace is over the weight 2 basis. Now identifying N_n = e^{η_n}, plugging in the relevant eigenvalues and proceeding as in section 2.5 of [1], one can easily verify that (3.15) is satisfied.

This completes our proof that the squeezed states in the midterm of (1.1) have the same eigenvalues as the ghost wedge states in the oscillator formalism. To complete this argument we must show that our choice of enlarging the Fock space at the beginning of this section is justified in the case of the wedge states. Since this requires the same type of arguments as in the previous subsection and is somewhat repetitious, we will account for it in Appendix B.

Finally let us remark that without the commutativity property of the twisted Neumann coefficient matrices spelled out in section 2, it would be impossible to reproduce the results of [1], where the matrices A, B, C, D^T commute (in the appropriate way).

The results we have obtained in this section consolidate the results obtained in [1]; however this is not yet the end of our proof of (1.1). In the next section we explain why.
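The chain of identities in this section can be checked numerically. The sketch below (ours) uses the eigenvalues as reconstructed in (3.10), √∆ = π|κ|, Ã = κπ/sinh(κπ), D^T = κπ coth(κπ), together with T = −e^{−|κ|π}, and verifies the sum rule needed for (3.7), the agreement between the hyperbolic form (3.9) and the closed form (3.11), and the identification (3.13); the specific eigenvalue expressions should be read with that caveat:

```python
import numpy as np

for kappa in (0.3, 1.0, 2.7):
    sq_delta = np.pi * abs(kappa)
    A_t = kappa * np.pi / np.sinh(kappa * np.pi)        # eigenvalue of A-tilde
    D_T = kappa * np.pi / np.tanh(kappa * np.pi)        # eigenvalue of D^T
    T = -np.exp(-abs(kappa) * np.pi)                    # sliver eigenvalue
    X = -1.0 / (1.0 + 2.0 * np.cosh(kappa * np.pi))     # eq. (3.13)

    # sum rule needed for the recursion (3.7)
    assert np.isclose(D_T + A_t, sq_delta / np.tanh(kappa * np.pi / 2))

    for n in range(3, 10):
        u = sq_delta * (n - 2) / 2
        # hyperbolic form (3.9)
        Tn_hyp = -A_t * np.sinh(u) / (sq_delta * np.cosh(u) + D_T * np.sinh(u))
        # closed-form solution (3.11)
        Tn_cl = (T + (-T) ** (n - 1)) / (1 - (-T) ** n)
        assert np.isclose(Tn_hyp, Tn_cl)

    # T_3 = X, i.e. eq. (3.6) via (3.12)
    assert np.isclose(T / (T * T - T + 1), X)
```

All three assertions hold for every κ sampled, which is the eigenvalue-level content of the proof above.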
4. Matrix reconstruction from the spectrum
So far our argument has been carried out by replacing the matrices involved with their eigenvalues. It would seem that we are done with the proof of (1.1). However what we have to show is not only that the eigenvalues of the matrices featuring in the RHS of eq.(3.1) coincide with the eigenvalues of the matrix S_n in (2.16), where S_n satisfies the recursion relation (2.27), but that the matrices themselves coincide. Now, in general, if one knows eigenvalues and eigenvectors of a matrix operator, one can reconstruct the original matrix. This is true for the matter sector of (1.1), but in the ghost sector this is not the case. In the ghost sector things are unfortunately more complicated, due to the existence of zero modes. This section is devoted to explaining this additional complication.

(It might seem at first sight that eq.(3.13) contradicts the well-known formula found by Gross and Jevicki, [17, 18, 28],

X = −E M² E^{−1}   (3.14)

which relates the twisted ghost Neumann coefficients matrix X with the corresponding matter matrix of Neumann coefficients M. If we naively diagonalize X and M on the matter basis of eigenvectors and use the result of [24], we obtain a value for X different from (3.13). However this is an 'optical' effect: eq.(3.14) is certainly true numerically, but X and M act on different spaces, therefore they are different operators. Each must be diagonalized in its own space. They cannot be diagonalized using the same basis. There is thus no room for contradiction between (3.14) and (3.13).)

So far our argument has consisted in applying the matrices involved, such as Ã, D^T and in particular α(t), to the weight 2 basis vector V^{(2)}. As shown in [1], the exponent c†αb† in (3.1) can be written as follows

c†αb† = Σ_{n=−1}^{∞} Σ_{m=2}^{∞} c†_n α_{nm}(t) b†_m = Σ_{n=−1}^{∞} Σ_{m=2}^{∞} ∫dκ dκ′ c̃†(κ) Ṽ^{(−1)}_n(κ) α̃_{nm}(t) Ṽ^{(2)}_m(κ′) b†(κ′)
= Σ_{n=2}^{∞} ∫dκ dκ′ c̃†(κ) Ṽ^{(−1)}_n(κ) α̃(κ′, t) Ṽ^{(2)}_n(κ′) b†(κ′) = ∫dκ c̃†(κ) α̃(κ, t) b†(κ)   (4.1)

where we have introduced

(−1)^n c†_n = ∫dκ c̃†(κ) Ṽ^{(−1)}_n(κ),   n ≥ −1,   b†_n = ∫dκ b†(κ) Ṽ^{(2)}_n(κ),   n ≥ 2   (4.2)

The unnormalized weight 2 basis is given by

f^{(2)}_κ(z) = Σ_{n=2}^{∞} V^{(2)}_n(κ) z^{n−2}   (4.3)

in terms of the generating function

f^{(2)}_κ(z) = (1/(1+z²))² e^{κ arctan(z)} = 1 + κz + (κ²/2 − 2) z² + ...   (4.4)

Following [29, 30] (see also Appendix B of [1]), we normalize the eigenfunctions as follows

Ṽ^{(2)}_n(κ) = √(A₂(κ)) V^{(2)}_n(κ)   (4.5)

where

A₂(κ) = κ²(κ² + 4) / (2 sinh(πκ))

The unnormalized weight −1 basis is given by

f^{(−1)}_κ(z) = Σ_{n=−1}^{∞} V^{(−1)}_n(κ) z^{n+1}   (4.6)

in terms of the generating function

f^{(−1)}_κ(z) = (1 + z²) e^{κ arctan(z)} = 1 + κz + (κ²/2 + 1) z² + ...   (4.7)

The normalized one is

Ṽ^{(−1)}_n(κ) = √(A_{−1}(κ)) V^{(−1)}_n(κ),   √(A_{−1}(κ)) = P κ √(A₂(κ)) / (κ² + 4)   (4.8)

where P denotes the principal value. We reported in [1] the biorthogonality and completeness relations

∫_{−∞}^{∞} dκ Ṽ^{(−1)}_n(κ) Ṽ^{(2)}_m(κ) = δ_{n,m},   n ≥ 2   (4.9)

Σ_{n=2}^{∞} Ṽ^{(−1)}_n(κ) Ṽ^{(2)}_n(κ′) = δ(κ, κ′)   (4.10)

taking them from [31]. These relations can be formally proved, but it is evident that they have to be handled with care. Let us recall again eqs.(3.4) and (3.5), which turned out to be crucial in the previous sections, and let us do the following. We multiply (3.5) by Ṽ^{(−1)}_n(κ) and integrate over κ: we get Ã_{an} = 0 for n ≥ 2, which is evidently false. On the other hand eq.(3.5) is correct (we present a new demonstration of it in Appendix C). Therefore it is apparent that in the above exercise we did something illegal. This can only be the exchange between the (infinite) summation and the integration over κ. We remark that the same kind of exchange occurs also in the intermediate steps of (4.1). We are therefore warned that in doing so we may lose some information. The question is: is there a way to repair the illegality we commit in this way and recover the full relevant information?

In mathematical terms this involves the problem of the spectral representation for lame operators. Unfortunately we have not been able to find any treatment of this problem in the mathematical literature. We proceed therefore in a heuristic way.

Let us analyze the reconstruction of the matrix Ã. Since Σ_{l=2}^{∞} Ã_{nl} Ṽ^{(2)}_l(κ) = a(κ) Ṽ^{(2)}_n(κ), we might argue as follows

∫_{−∞}^{∞} dκ Ṽ^{(−1)}_m(κ) a(κ) Ṽ^{(2)}_n(κ) = Σ_{l=2}^{∞} Ã_{nl} ∫_{−∞}^{∞} dκ Ṽ^{(−1)}_m(κ) Ṽ^{(2)}_l(κ) = Ã_{nm}   (4.11)

using the biorthogonality relations (4.9). Therefore we should be able to reconstruct the Ã matrix starting from

a(κ) = πκ / sinh(πκ)

and the bases. Here are the first few basis elements

V^{(2)}_2(κ) = 1,   V^{(2)}_3(κ) = κ,   V^{(2)}_4(κ) = κ²/2 − 2
42 (4.12) V (2)5 ( κ ) = 16 κ ( κ − , V (2)6 ( κ ) = 124 ( κ − κ + 72) , . . . and V ( − ( κ ) = 16 κ ( κ + 4) , V ( − ( κ ) = κ ( κ + 4)24 ,V ( − ( κ ) = κ
120 ( κ − V ( − ( κ ) = κ
720 ( κ − κ −
56) (4.13) V ( − ( κ ) = κ κ − κ − κ + 288) , . . . These must be multiplied by the normalization factor p A ( κ ) = s κ ( κ + 4)2 sinh (cid:0) πκ (cid:1) and p A − ( κ ) = P κ r κ κ + 4) sinh (cid:0) πκ (cid:1) so that p A ( κ ) A − ( κ ) = 12 sinh (cid:0) πκ (cid:1) Using these formulas we get˜ A = π Z ∞−∞ dκ κ ( κ + 4) (cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ . A = π Z ∞−∞ dκ κ ( κ − (cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ − . A = π Z ∞−∞ dκ κ ( κ + 4) (cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ . A = π Z ∞−∞ dκ κ ( κ − κ + 4) (cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ − . A nm one shoulf get instead˜ A = − − . , ˜ A = 1635 ≈ . A = − ≈ − . , ˜ A = 2263 ≈ .
349 (4.18)As we can see the reconstructed matrix elements are far apart from the expected values.– 16 –et’s do the same for D T . D T = π Z ∞−∞ dκ κ ( κ + 4) cosh (cid:0) πκ (cid:1)(cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ .
665 (4.19) D T = π Z ∞−∞ dκ ( κ ( κ − (cid:0) πκ (cid:1)(cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ . D T = π Z ∞−∞ dκ ( κ ( κ + 4)) cosh (cid:0) πκ (cid:1)(cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ . D T = π Z ∞−∞ dκ ( κ ( κ − κ + 4)) cosh (cid:0) πκ (cid:1)(cid:0) sinh (cid:0) πκ (cid:1)(cid:1) ≈ . D = 4 , D = 0 , D = 6 , D = 23 (4.23)Also here we are far apart from the true values.We remark that if we do the same exercise for the A and C matrices of the mattersector (see [1]), we find a perfect coincidence between the original matrices and the onesreconstructed by means of the spectrum. The idea is to apply the matrix ˜ A not to the weight 2 basis, but to the weight -1 basis. I.e. ∞ X l = − ˜ V ( − l ( κ ) ˜ A l ¯ m = a ( κ ) ˜ V ( − m (4.24)This formula was proved in Appendix D3 of [1]. Then Z ∞−∞ dκ ˜ V ( − m ( κ ) a ( κ ) ˜ V (2)¯ n ( κ ) = Z ∞−∞ dκ ∞ X l = − V ( − l ( κ ) ˜ A l ¯ m ˜ V (2)¯ n ( κ ) (4.25)= X a = − , , ˜ A a ¯ m Z ∞−∞ dκ ˜ V ( − a ( κ ) ˜ V (2) n ( κ ) + ∞ X l =2 ˜ A l ¯ m Z ∞−∞ dκ ˜ V ( − l ( κ ) ˜ V (2)¯ n ( κ )where barred indices denote ‘short’ indices, i.e. ¯ m, ¯ n ≥
2. Now use the decomposition (see[31] and Appendix B of [1]) ˜ V ( − a ( κ ) = ∞ X n =2 b a ¯ n ˜ V ( − n ( κ ) (4.26)One can easily obtain b − , n +3 = ( − n ( n + 1) , b , n +2 = ( − n , b , n +3 = ( − n ( n + 2) (4.27)Inserting these into (4.25) we get˜ A ¯ n ¯ m = Z ∞−∞ dκ ˜ V ( − m ( κ ) a ( κ ) ˜ V (2)¯ n ( κ ) − X a = − , , b a ¯ n ˜ A a ¯ m (4.28)– 17 –ow the corrections to the values obtained in (4.14-4.17) are easy to compute. For instance˜ A = − . − ˜ A b = − .
076 + 815 ≈ .
457 (4.29)˜ A = 0 . − ˜ A − , b − , − ˜ A , b , = 0 . −
23 = − . B the answer is easy since B ¯ n ¯ m = A ¯ n ¯ m . Notice that the terms B ¯ na are differentfrom A a ¯ n . These terms should also be considered as known terms.We can reconstruct in a similar way also D T . For this we must apply C to the -1basis. This amounts to the same formulas above, with the substitution of ˜ A with C and a ( κ ) with c ( κ ). Remember that C ¯ n ¯ m = D T ¯ n ¯ m for ¯ n, ¯ m ≥
2. In particular D T ¯ n ¯ m = C ¯ n ¯ m = Z ∞−∞ dκ ˜ V ( − m ( κ ) c ( κ ) ˜ V (2)¯ n ( κ ) − X a = − , , b a ¯ n C a ¯ m (4.31)For instance D T = C = 2 . − C b = 2 . ≈ D T = C = 5 . − ( C − , b − , + C , b , ) = 5 . −
23 + 2 23 ≈ D T = C = 1 . − ( C − , b − , + C , b , ) = 1 . − ≈ . A a, ¯ n , ˜ B ¯ n,a and C a, ¯ n and C ¯ n,a will be referred to from now on as boundary terms .Notice that A a,n = − C − a,n and C ¯ n, − a = B ¯ n,a .In the absence of a mathematical theorem we formulate the following: Heuristic rule . In order to reconstruct any matrix A ¯ n ¯ m = B ¯ n ¯ m and C ¯ n ¯ m = D T ¯ n ¯ m from its eigenvalues, apply A and C to the weight -1 basis, separate the ¯ n, ¯ m ≥ partfrom the zero mode part and operate as in eq.(4.25) above. As for the matrix elements A a,n and B ¯ n,a they cannot be reconstructed from the eigenvalues, but they have to be ratherconsidered as known terms of the problem. We will refer to them as boundary data. Usually (for instance in the matter sector) we start from a matrix (for instance thematrices ˜ A or C of [1]), diagonalize it and determine the spectrum, i.e. eigenvalues andeigenvectors. Viceversa, starting from the latter, we can reconstruct the initial matrixusing its spectral representation.In the present case the situation is somewhat different. Given the matrices we cancompute the spectrum (see section 5 of [1]). Viceversa given the spectrum and the boundarydata A a,n and B ¯ n,a we can compute the matrices ˜ A and C and the related ones. This alsomeans that, in order to determine the eigenvalue of a given diagonalizable matrix over theweight 2 basis, the ss part of that matrix contains all the necessary information, but inorder to reconstruct even its ss part we have to know the action of that matrix over theweight -1 basis, i.e. we need the information stored in the latter.It is clear that, with the above heuristic rule, it is possible to reconstruct, at least inprinciple, any matrix which can be expressed as a series of products of ˜ A, B, C, D T , in par-ticular ˜ α ( t ). Unfortunately so far we have not been able to produce a simple, manageablereconstruction algorithm. – 18 – . 
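As a small practical aside (ours, not part of the original derivation), spectral integrals of the type (4.14) converge exponentially and are easy to evaluate numerically. The following sketch evaluates the first one with a simple trapezoidal rule; the cutoff and grid size are arbitrary illustrative choices, and the eigenvalue normalization $a(\kappa) = \pi\kappa/(2\sinh(\pi\kappa/2))$ is the one used above.

```python
import numpy as np

# Integrand of (4.14): (pi/6) * k^2 (k^2+4) / (2 sinh(pi k/2))^2.
# At k = 0 the integrand tends to 4/pi^2, and it decays exponentially,
# so a trapezoidal rule on [-40, 40] is more than enough.
def integrand(k):
    k = np.asarray(k, dtype=float)
    out = np.full_like(k, 4 / np.pi**2)   # finite k -> 0 limit
    big = np.abs(k) > 1e-8
    kb = k[big]
    out[big] = kb**2 * (kb**2 + 4) / (2 * np.sinh(np.pi * kb / 2))**2
    return out

k = np.linspace(-40.0, 40.0, 400001)
f = integrand(k)
dk = k[1] - k[0]
naive_A22 = (np.pi / 6) * dk * (f.sum() - 0.5 * (f[0] + f[-1]))
print(naive_A22)   # close to 8/15 = 0.5333...
```

The exponential suppression by $1/\sinh^2$ means the tails beyond $|\kappa| \sim 40$ are utterly negligible, so no special quadrature is required.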
5. Conclusion

Let us return to the validity of (1.1) and reformulate the question raised at the beginning of the previous section. In [1] we wrote the LHS of this equation in the form (3.1). We have shown above that the RHS of the latter has the form of a wedge state, and in fact we proved that, once the squeezed state matrix $\tilde\alpha(\kappa,-n) \equiv \tilde\alpha_n$ there is diagonalized in the weight 2 basis, it coincides with the (diagonalized) matrix that represents the $n$-th wedge state $|n\rangle$, defined by the squeezed state (2.18) whose matrix $T_n$ satisfies the recursion relations (2.27). The next question to be answered is: in view of the discussion of the previous section, do also the matrices $\tilde\alpha_n$ coincide with the matrices $T_n$ and, in particular, the matrix elements $(\tilde\alpha_n)_{a,m}$ with $(T_n)_{a,m}$, with $a = -1, 0, 1$? According to the previous section, $(\tilde\alpha_n)_{a,\bar m}$ (beside the other matrix elements) is uniquely determined by the spectrum of $\tilde\alpha_n$ and by the boundary data. This is true in particular for $\tilde\alpha_1$, which was interpreted above as $X$. We therefore expect that $(\tilde\alpha_1)_{n,m}$ coincides with $X_{n,m}$. If this is so, solving (2.14, 2.15) for $X^\pm$, we can, in principle, reconstruct the three strings vertex from the $A, B, C$ and $D^T$ matrices. This vertex has precisely the features we have hypothesized in sections 2 and 3, in particular the commutativity of the twisted matrices of Neumann coefficients (otherwise they would not be simultaneously diagonalized).

However the reconstruction of $X^\pm$ is not on the same footing as the reconstruction of $X$ (or $T$). For the latter, as we have seen, there exists a precise (though unwieldy) procedure to obtain it from the $A, B, C$ and $D^T$ matrices. For $X^\pm$ instead we have to proceed on the basis of (2.14) and (2.15). To discuss this point let us introduce the following notation: for any matrix $M$ we denote by $M_{ee}$ the part of $M$ with both entries even, by $M_{oo}$ the part with both entries odd, and accordingly $M_{eo}$, $M_{oe}$, with obvious meaning.
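The index algebra behind this even/odd block notation, combined with the twist conjugation $X^+ = \hat C X^- \hat C$, can be checked mechanically. The following sketch (ours; a random matrix stands in for $X^-$, and indices are 0-based purely for illustration) verifies the resulting sign pattern of the four blocks:

```python
import numpy as np

# Block decomposition M_ee, M_oo, M_eo, M_oe of a matrix M_{nm},
# plus a check of the twist relation X+ = C X- C with C = diag((-1)^n):
# (C M C)_{nm} = (-1)^{n+m} M_{nm}, so the ee/oo blocks are unchanged
# while the eo/oe blocks flip sign.
rng = np.random.default_rng(0)
N = 8
Xminus = rng.standard_normal((N, N))
C = np.diag([(-1.0)**n for n in range(N)])
Xplus = C @ Xminus @ C

def blocks(M):
    e = np.arange(0, M.shape[0], 2)
    o = np.arange(1, M.shape[0], 2)
    return M[np.ix_(e, e)], M[np.ix_(o, o)], M[np.ix_(e, o)], M[np.ix_(o, e)]

pe, po, peo, poe = blocks(Xplus)
me, mo, meo, moe = blocks(Xminus)
assert np.allclose(pe, me) and np.allclose(po, mo)        # ee, oo unchanged
assert np.allclose(peo, -meo) and np.allclose(poe, -moe)  # eo, oe flip sign
print("twist block relations verified")
```

This is just the statement that twist conjugation multiplies $M_{nm}$ by $(-1)^{n+m}$, which is $+1$ on the ee and oo blocks and $-1$ on the mixed ones.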
From [1] we know that all the matrices $A, B, C, D$ have vanishing $eo$ and $oe$ parts. All matrices $T_n$, and in particular $X$, will therefore share the same property. We expect instead that $X^\pm_{eo}$ and $X^\pm_{oe}$ be nonvanishing. Remember that $X^+ = \hat C X^- \hat C$. Therefore
$$X^+_{ee} = X^-_{ee}, \qquad X^+_{oo} = X^-_{oo}, \qquad (5.1)$$
$$X^+_{eo} = -X^-_{eo}, \qquad X^+_{oe} = -X^-_{oe}. \qquad (5.2)$$
Substituting these relations into (2.14) we find
$$X^+_{ee} = X^-_{ee} = \frac{1}{2}\,(1_{ee} - X_{ee}), \qquad (5.3)$$
$$X^+_{oo} = X^-_{oo} = \frac{1}{2}\,(1_{oo} - X_{oo}). \qquad (5.4)$$
Therefore both $X^\pm_{ee}$ and $X^\pm_{oo}$ are immediately derived from $X$. From (2.15) we get instead
$$X^\pm_{eo}\, X^\pm_{oo} = X^\pm_{ee}\, X^\pm_{eo}, \qquad (5.5)$$
$$X^\pm_{oe}\, X^\pm_{ee} = X^\pm_{oo}\, X^\pm_{oe}, \qquad (5.6)$$
and
$$X^\pm_{eo}\, X^\pm_{oe} = \frac{1}{4}\,(1_{ee} + 3X_{ee})\,(1_{ee} - X_{ee}), \qquad (5.7)$$
and a parallel equation with $o$ exchanged everywhere with $e$. This means that $X^\pm_{eo}$ and $X^\pm_{oe}$ are not determined algorithmically like $T_n$, but only by solving the quadratic equations (5.7), subject to the commutativity relations (5.5, 5.6).

If the solution to such equations exists, as we expect, this is a proof of the validity of (1.1). In fact the analysis in section 3 was carried out under the hypothesis that a vertex with the properties illustrated in section 2.1, and with the twisted matrices of Neumann coefficients commuting with $K$, existed. But we have just shown that such a vertex can be deduced (reconstructed) directly from the LHS of (1.1), in the sense we have just specified.

The information we have extracted from the reconstruction path taken up in this paper is not conclusive. The existence proof of the three strings vertex, as we have just seen, is not complete. On the other hand the missing part of the proof is rather marginal and, what is more important, our general characterization of the three strings vertex (section 2.1) has not met any inconsistency. This is reassuring in the prospect of coping with the task of explicitly constructing the three strings vertex endowed with the above properties.

Acknowledgments
We would like to thank Carlo Maccaferri for his comments and suggestions and for his collaboration in the early stage of this work. L.B. would like to thank the CBPF (Rio de Janeiro) and the YITP (Kyoto) for their kind hospitality and support during this research. This research was supported for L.B. by the Italian MIUR under the program "Superstringhe, Brane e Interazioni Fondamentali". D.D.T. was supported by the Science Research Center Program of the Korea Science and Engineering Foundation through the Center for Quantum Spacetime (CQUeST) of Sogang University with grant number R11-2005-021. R.J.S.S. was supported by CNPq-Brasil.
Appendix

A. The ghost Neumann coefficients
In this Appendix we explicitly compute $\hat V^{rs}_{nm}$ and $V^{rs}_{nm}$. We use the definitions (2.3, 2.4). The method is well known: we express the propagator $\ll c(z) b(w) \gg$ in two different ways, first as a CFT correlator and then in terms of $\hat V$, and we equate the two expressions after mapping them to the disk via the maps (2.9). However this recipe leaves several uncertainties. We will fix them by requiring certain properties, in particular cyclicity, consistency with the bpz operation and commutativity of the twisted matrices of Neumann coefficients (the reason for the latter will become clear later on).

First we have to insert the three $c$ zero modes. One way is to insert them at different points $t_i$ and use the correlator
$$\ll c(z)\, b(w) \gg_{(t_1,t_2,t_3)} = \langle c(z)\, b(w)\, c(t_1)\, c(t_2)\, c(t_3)\rangle
= \frac{1}{z-w}\prod_{i=1}^{3}\frac{t_i-z}{t_i-w}\;(t_1-t_2)(t_1-t_3)(t_2-t_3). \qquad (A.1)$$
So we have to compare
$$\langle f_1\circ c(t_1)\; f_2\circ c(t_2)\; f_3\circ c(t_3)\; f_r\circ c^{(r)}(z)\; f_s\circ b^{(s)}(w)\rangle \qquad (A.2)$$
with
$$\langle \hat V|\, R\big(c^{(r)}(z)\, b^{(s)}(w)\big)\,|\omega\rangle, \qquad (A.3)$$
where $R$ denotes radial ordering. If $:\,:$ denotes the natural normal ordering, we have for instance
$$R\big(c(z)\, b(w)\big) = \sum_{n,k} :c_n b_k:\; z^{-n+1} w^{-k-2} + \frac{1}{z-w}. \qquad (A.4)$$
This should be inserted inside (A.3). Let us refer to the last term in (A.4) as the ordering term. We notice that the choice we have made for this term is rather arbitrary. What precisely has to be inserted in (A.3) depends also on the definition of the three strings vertex, and should therefore be decided on the basis of a consistent definition of the latter. For the time being we continue on the basis of (A.4); later on we will introduce the necessary modifications.

To start with let us compute the K constant.
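Before doing so, a remark on the double contour integrals used throughout this Appendix: they simply extract Taylor coefficients of the kernels. A minimal numerical illustration of this mechanism (ours; the kernel is reduced to the single factor $1/(1+zw)$, and the contour radii and grid size are arbitrary choices):

```python
import numpy as np

# Evaluate  oint dz/(2 pi i) oint dw/(2 pi i) z^{-n-1} w^{-m-1} / (1+zw)
# on circles |z| = 0.5, |w| = 0.7 (so |zw| < 1 and the geometric series
# for 1/(1+zw) converges).  With z = r e^{i t}, dz/(2 pi i) z^{-n-1}
# reduces to z^{-n} dt/(2 pi), so the double integral is a plain average.
def coeff(n, m, r1=0.5, r2=0.7, N=256):
    t = 2 * np.pi * np.arange(N) / N
    z = (r1 * np.exp(1j * t))[:, None]
    w = (r2 * np.exp(1j * t))[None, :]
    kernel = 1.0 / (1.0 + z * w)
    return (kernel * z**(-n) * w**(-m)).mean()

# 1/(1+zw) = sum_k (-1)^k z^k w^k, so the result is (-1)^n delta_{nm}.
assert abs(coeff(3, 3) - (-1)) < 1e-10
assert abs(coeff(2, 2) - 1) < 1e-10
assert abs(coeff(2, 3)) < 1e-10
print("contour extraction reproduces (-1)^n delta_nm")
```

The actual Neumann kernels below carry extra factors of $f$, $f'$ and the ordering term, but the coefficient-extraction mechanics is exactly the one shown here.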
We have
$$\langle \hat V|\omega\rangle = K = \langle f_1\circ c(t_1)\; f_2\circ c(t_2)\; f_3\circ c(t_3)\rangle
= \frac{(f_1(t_1)-f_2(t_2))\,(f_1(t_1)-f_3(t_3))\,(f_2(t_2)-f_3(t_3))}{f_1'(t_1)\, f_2'(t_2)\, f_3'(t_3)}. \qquad (A.5)$$
Now
$$\langle \hat V|\, R\big(c^{(r)}(z)\, b^{(s)}(w)\big)\,|\omega\rangle
= \langle \hat V|\sum_{n,k} :c^{(r)}_n b^{(s)}_k:\; z^{-n+1} w^{-k-2} + \frac{1}{z-w}\,|\omega\rangle
= K\left(-\sum_{n,k}\hat V^{sr}_{kn}\; z^{n+1} w^{k-2} + \frac{\delta^{rs}}{z-w}\right). \qquad (A.6)$$
On the other hand, from direct computation,
$$\langle f_1\circ c(t_1)\; f_2\circ c(t_2)\; f_3\circ c(t_3)\; f_r\circ c^{(r)}(z)\; f_s\circ b^{(s)}(w)\rangle
= \frac{(f_s'(w))^2}{f_r'(z)}\,\frac{1}{f_r(z)-f_s(w)}\,
\frac{(f_1(t_1)-f_2(t_2))(f_1(t_1)-f_3(t_3))(f_2(t_2)-f_3(t_3))}{f_1'(t_1)\, f_2'(t_2)\, f_3'(t_3)}
\prod_{i=1}^{3}\frac{f_i(t_i)-f_r(z)}{f_i(t_i)-f_s(w)}. \qquad (A.7)$$
Comparing the last two equations and using (A.5) we get
$$\hat V^{sr}_{kn} = -\oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+2}\, w^{k-1}}
\left(\frac{(f_s'(w))^2}{f_r'(z)}\,\frac{1}{f_r(z)-f_s(w)}
\prod_{i=1}^{3}\frac{f_i(t_i)-f_r(z)}{f_i(t_i)-f_s(w)} - \frac{\delta^{rs}}{z-w}\right). \qquad (A.8)$$
After obvious changes of indices and variables we end up with
$$\hat V^{rs}_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n-1}\, w^{m+2}}
\left(\frac{(f_s'(w))^2}{f_r'(z)}\,\frac{1}{f_r(z)-f_s(w)}
\prod_{i=1}^{3}\frac{f_s(w)-f_i(t_i)}{f_r(z)-f_i(t_i)} - \frac{\delta^{rs}}{z-w}\right). \qquad (A.9)$$
Now we make a definite choice for the insertions, that is, we take $t_i \to \infty$. We remark that this choice leads to simple formulas but remains anyhow arbitrary.

Since $f_i(\infty) = \alpha^{-i}$ we get
$$\prod_{i=1}^{3}\frac{f_i(t_i)-f_s(w)}{f_i(t_i)-f_r(z)} = \frac{1-f_s^3(w)}{1-f_r^3(z)}. \qquad (A.10)$$
Cyclicity requires
$$V^{rs}_{nm} = \hat V^{r+1,s+1}_{nm}. \qquad (A.11)$$
Moreover (by letting $z \to -z$, $w \to -w$)
$$\hat V^{rs}_{nm} = (-1)^{n+m}\,\hat V^{sr}_{nm}. \qquad (A.12)$$
Now let us consider the decomposition
$$\hat V^{rs}_{nm} = \frac{1}{3}\big(E_{nm} + \bar\alpha^{r-s}\, U_{nm} + \alpha^{r-s}\, \bar U_{nm}\big), \qquad (A.13)$$
where
$$E_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\; N_{nm}(z,w)\, E(z,w), \qquad
U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\; N_{nm}(z,w)\, U(z,w), \qquad
\bar U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\; N_{nm}(z,w)\, \bar U(z,w), \qquad (A.14)$$
and
$$E(z,w) = \frac{3\, f(z)\, f(w)}{f^3(z)-f^3(w)}, \qquad
U(z,w) = \frac{3\, f^2(z)}{f^3(z)-f^3(w)}, \qquad
\bar U(z,w) = \frac{3\, f^2(w)}{f^3(z)-f^3(w)},$$
$$N_{nm}(z,w) = \frac{1}{z^{n-1}\, w^{m+2}}\,\frac{(f'(w))^2}{f'(z)}\,\frac{1-f^3(w)}{1-f^3(z)}.$$
We recall that zero mode insertions can be introduced also by means of the operator $Y(t) = \frac{1}{2}\,\partial^2 c(t)\,\partial c(t)\, c(t)$ instead of three different $c(t_i)$. This has not led so far to better results.

After some elementary algebra, using $f'(z) = \frac{4i}{3}\,\frac{f(z)}{1+z^2}$, one finds
$$E_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left(\frac{1}{1+zw} - \frac{w}{w-z} - \frac{z^2}{w^2}\,\frac{1}{z-w}\right)
= (-1)^n\delta_{nm} - \delta_{n,1}\delta_{m,-1} - \delta_{n,-1}\delta_{m,1}, \qquad (A.15)$$
$$U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left[\frac{f(z)}{f(w)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right) - \frac{z^2}{w^2}\,\frac{1}{z-w}\right],$$
$$\bar U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left[\frac{f(w)}{f(z)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right) - \frac{z^2}{w^2}\,\frac{1}{z-w}\right].$$
In this way the ambiguities are eliminated. The $\delta_{n,m}$ in (A.15) is for $n, m \geq -1$. Next we consider the bpz-conjugate representation:
$$\langle\omega|\, R\big(I\circ c^{(r)}(z)\; I\circ b^{(s)}(w)\big)\,|V\rangle
= \langle\omega|\sum_{n,k}(-1)^{n+k+1} :c^{(r)}_n b^{(s)}_k:\; z^{n+1} w^{k-2} + \frac{z^2}{w^2}\,\frac{\delta^{rs}}{z-w}\,|V\rangle
= K\left(\sum_{n,k} V^{sr}_{kn}\,(-1)^{n+k+1}\, z^{n+1} w^{k-2} + \frac{z^2}{w^2}\,\frac{\delta^{rs}}{z-w}\right). \qquad (A.16)$$
Equating now to (A.2) and repeating the same procedure as above we finally obtain
$$(-1)^{n+m}\, V^{rs}_{nm} = \frac{1}{3}\big(E'_{nm} + \bar\alpha^{r-s}\, U'_{nm} + \alpha^{r-s}\, \bar U'_{nm}\big), \qquad (A.17)$$
where
$$E'_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left(\frac{1}{1+zw} - \frac{w}{w-z} - \frac{w^2}{z^2}\,\frac{1}{z-w}\right), \qquad (A.18)$$
$$U'_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left[\frac{f(z)}{f(w)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right) - \frac{w^2}{z^2}\,\frac{1}{z-w}\right],$$
$$\bar U'_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}
\left[\frac{f(w)}{f(z)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right) - \frac{w^2}{z^2}\,\frac{1}{z-w}\right].$$
As we see, we have
$$(-1)^{n+m}\, V^{rs}_{nm} = \hat V^{rs}_{nm}, \qquad (A.19)$$
except perhaps for values of the labels involving zero modes. That the relation (2.12) should hold for the full range of the labels is instead a basic requirement. We will use also this, beside cyclicity and commutativity, in order to guess the final form of the vertex.

Motivated by these requirements we introduce minor modifications in the previous definitions. We start from the basic (A.15) without the last term (the ordering term):
$$E_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right), \qquad (A.20)$$
$$U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}\,\frac{f(z)}{f(w)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right), \qquad (A.21)$$
$$\bar U_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}\,\frac{f(w)}{f(z)}\left(\frac{1}{1+zw} - \frac{w}{w-z}\right). \qquad (A.22)$$
Then we define the ordering term
$$Z_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}\left(\frac{w}{w-z} - \frac{z}{w}\right). \qquad (A.23)$$
Next we define the matrices
$$\mathcal E = E + Z, \qquad \mathcal U = U + Z, \qquad \bar{\mathcal U} = \bar U + Z, \qquad (A.24)$$
which will be our basic ingredients. The choice of $Z$ is made in such a way that $\mathcal E = \hat C$. In fact
$$\mathcal E_{nm} = \oint\frac{dz}{2\pi i}\oint\frac{dw}{2\pi i}\;\frac{1}{z^{n+1} w^{m+1}}\left(\frac{1}{1+zw} - \frac{z}{w}\right) = (-1)^n\,\delta_{nm} \qquad (A.25)$$
for $n, m \geq 2$ as well as for $-1 \leq n, m \leq 1$. With reference to (2.3) and (2.4) we set
$$\hat V^{rs}_{nm} = \frac{1}{3}\big(\mathcal E_{nm} + \bar\alpha^{r-s}\,\mathcal U_{nm} + \alpha^{r-s}\,\bar{\mathcal U}_{nm}\big) \qquad (A.26)$$
and
$$V^{rs}_{nm} = (-1)^{n+m}\,\hat V^{rs}_{nm}, \qquad V^{rs} = \hat C\,\hat V^{rs}\,\hat C. \qquad (A.27)$$
From the definition of $\mathcal U$ and $\bar{\mathcal U}$ it is easy to verify that $\hat C\,\mathcal U = \bar{\mathcal U}\,\hat C$, where $\hat C$ denotes the twist matrix. We have seen above that $\mathcal E \equiv \hat C$.

Now, using the method of [23], it is possible to show that $\mathcal U\,\bar{\mathcal U} = \bar{\mathcal U}\,\mathcal U = 1$ for $n, m \geq -1$. This implies that, beside
$$X + X^+ + X^- = 1,$$
where $X = \hat C V^{rr}$, $X^+ = \hat C V^{r,r+1}$, $X^- = \hat C V^{r,r-1}$, we have the commutativity property
$$X^{rs}\, X^{r's'} = X^{r's'}\, X^{rs}$$
and
$$X^+ X^- = X^2 - X, \qquad X^2 + (X^+)^2 + (X^-)^2 = 1.$$
It should be stressed that all the $X^{rs}$ matrices are $(ll)$.

B. Why we can use long square matrices

Let us return to eqs.(2.22, 2.23) and (2.24) applied to wedge states; that is, let us suppose $S_1 = S_n$ and $S_2 = S_m$. Let us concentrate on eq.(2.22): the RHS can be understood in terms of a series expansion in which each monomial is the product of alternating lame matrices $X, X^\pm, T_n, T_m$, the rightmost and leftmost ones being $(ls)$. These matrices cannot be assumed to satisfy the identities of sec. 2.1; in particular they cannot be assumed to commute. However let us apply any such monomial from the left to the above introduced weight 2 basis vector $V^{(2)}_s$:
$$\ldots\, Y_{sl}\, Z_{ls}\, V^{(2)}_s. \qquad (B.1)$$
Since the rightmost matrix $Z$ in the monomial is $(ls)$, whatever matrix it is, it is obvious that we can simply replace it with the corresponding long square matrix and replace $V^{(2)}_s$ by $V^{(2)}_l$. According to the discussion in section 3, the result of the application is the same extended vector multiplied by the matrix eigenvalue. This is obvious if the matrix in question is $X$, $T_n$ or $T_m$, as has been discussed above. If the rightmost matrix in the monomial is $X^\pm$, the same conclusion requires some comment. Since $X^\pm_{ls}$ can be trivially replaced by a long square matrix applied to $V^{(2)}_l$, we are entitled to apply to $X^\pm_{ll}$ the identities of subsection 2.1. Therefore, using a well-known result, $X^\pm_{ll}$ can be expressed in terms of $X_{ll}$, and the result of the application of $X^\pm_{ll}$ to $V^{(2)}_l$ is $V^{(2)}_l$ multiplied by the matrix eigenvalue.

The next-to-rightmost matrix in the monomial we have picked up is of the type $Y_{sl}$.
If $Y$ is $X$, $T_n$ or $T_m$ we can apply to it the argument used in section 3 for $\alpha(t)$: we can replace it with $Y_{ll}$ since, due to (3.5) and the consequences thereof, the initial three elements of $Y_{ll} V^{(2)}_l$ are zero. The result once again is the product of the eigenvalues of $Y_{ll}$ and of $Z_{ll}$ multiplying $V^{(2)}_l$.

If, on the other hand, $Y_{sl}$ is $X^\pm_{sl}$, we can argue as follows. The result of replacing $X^\pm_{sl}$ by $X^\pm_{ll}$ in front of $V^{(2)}_l$ is a vector with three more entries (corresponding as always to $n = -1, 0, 1$, if $n$ is the left label of $X^\pm$). However we can use the same argument as above, remarking that $X^\pm_{ll}$ can be expressed in terms of $X_{ll}$. Therefore, due to (3.5) and its consequences, we can conclude that these three additional entries are 0. Therefore writing $X^\pm_{sl} V^{(2)}_l$ is tantamount to writing $X^\pm_{ll} V^{(2)}_l$, and the result is once again $V^{(2)}_l$ multiplied by the product of the eigenvalues of $Y_{ll}$ and of $Z_{ll}$.

From this point on the argument is recursive and there is no need to repeat it again. Re-summing the series we can conclude that in eq.(2.22) we can everywhere replace the matrices by the corresponding long square ones. Analogous things can be repeated for eq.(2.24). This is our justification for enlarging the Fock space at the beginning of this section.

C. Proof of eq.(3.5)

We want to show that
$$X = \sum_{n=2}^{\infty} \tilde A_{-1,n}\, V^{(2)}_n(\kappa) = 0. \qquad (C.1)$$
Set $n = 2l+1$. Then
$$\tilde A_{-1,2l+1} = \frac{2(-1)^l}{2l+1}$$
are the only non-vanishing matrix elements. Define
$$F(z) = \sum_{l=1}^{\infty}\frac{2(-1)^l}{2l+1}\, V^{(2)}_{2l+1}(\kappa)\, z^{2l+1}, \qquad (C.2)$$
so that $X = F(1)$ and $F(0) = 0$. We get
$$\frac{dF}{dz} = 2\sum_{l=1}^{\infty}(-1)^l\, V^{(2)}_{2l+1}(\kappa)\, z^{2l}
= iz\left(f^{(2)}_\kappa(iz) - f^{(2)}_\kappa(-iz)\right)
= \frac{iz}{(1-z^2)^2}\left(\left(\frac{1+z}{1-z}\right)^{\xi} - \left(\frac{1+z}{1-z}\right)^{-\xi}\right), \qquad \xi = \frac{i\kappa}{2}.$$
Therefore
$$F(1) = \int_0^1 dz\; iz\left((1+z)^{\xi-2}(1-z)^{-\xi-2} - (1+z)^{-\xi-2}(1-z)^{\xi-2}\right)
= \frac{i}{\xi(1+\xi)}\, F(2-\xi, 2, 1-\xi; -1) + \frac{i}{\xi(1-\xi)}\, F(2+\xi, 2, 1+\xi; -1).$$
Using
$$F(2-\xi, 2, 1-\xi; -1) = \frac{1}{4}\,\frac{\xi}{\xi-1},$$
one finds
$$\frac{i}{\xi(1+\xi)}\, F(2-\xi, 2, 1-\xi; -1) = -\frac{i}{4(1-\xi^2)}, \qquad
\frac{i}{\xi(1-\xi)}\, F(2+\xi, 2, 1+\xi; -1) = \frac{i}{4(1-\xi^2)},$$
and $F(1) = 0$.

Next we want to show that
$$Y = \sum_{n=2}^{\infty} \tilde A_{0,n}\, V^{(2)}_n(\kappa) = 0. \qquad (C.3)$$
This time put $n = 2l$:
$$\tilde A_{0,2l} = (-1)^{l+1}\left(\frac{1}{2l+1} + \frac{1}{2l-1}\right).$$
Define
$$F(z) = \sum_{l=1}^{\infty}\frac{(-1)^{l+1}}{2l+1}\, V^{(2)}_{2l}(\kappa)\, z^{2l+1}, \qquad (C.4)$$
$$G(z) = \sum_{l=1}^{\infty}\frac{(-1)^{l+1}}{2l-1}\, V^{(2)}_{2l}(\kappa)\, z^{2l-1}, \qquad (C.5)$$
so that $F(1) + G(1) = Y$ and $F(0) + G(0) = 0$. We get
$$\frac{dF}{dz} = \sum_{l=1}^{\infty}(-1)^{l+1}\, V^{(2)}_{2l}(\kappa)\, z^{2l}
= \frac{z^2}{2}\left(f^{(2)}_\kappa(iz) + f^{(2)}_\kappa(-iz)\right)
= \frac{z^2}{2(1-z^2)^2}\left(\left(\frac{1+z}{1-z}\right)^{\xi} + \left(\frac{1+z}{1-z}\right)^{-\xi}\right)$$
and
$$\frac{dG}{dz} = \sum_{l=1}^{\infty}(-1)^{l+1}\, V^{(2)}_{2l}(\kappa)\, z^{2l-2}
= \frac{1}{2}\left(f^{(2)}_\kappa(iz) + f^{(2)}_\kappa(-iz)\right)
= \frac{1}{2(1-z^2)^2}\left(\left(\frac{1+z}{1-z}\right)^{\xi} + \left(\frac{1+z}{1-z}\right)^{-\xi}\right),$$
which give
$$F(1) = \frac{1}{2}\int_0^1 dz\; z^2\left((1+z)^{\xi-2}(1-z)^{-\xi-2} + (1+z)^{-\xi-2}(1-z)^{\xi-2}\right)
= \frac{1}{\xi(1+\xi)(1-\xi)}\, F(2-\xi, 3, 2-\xi; -1) - \frac{1}{\xi(1-\xi)(1+\xi)}\, F(2+\xi, 3, 2+\xi; -1) = 0 \qquad (C.6)$$
and
$$G(1) = \frac{1}{2}\int_0^1 dz\left((1+z)^{\xi-2}(1-z)^{-\xi-2} + (1+z)^{-\xi-2}(1-z)^{\xi-2}\right)
= -\frac{1}{2(1+\xi)}\, F(2-\xi, 1, -\xi; -1) - \frac{1}{2(1-\xi)}\, F(2+\xi, 1, \xi; -1) = 0. \qquad (C.7)$$
These identities can be obtained by means of well-known relations valid for the hypergeometric functions, such as those in Appendix C of [1].

Lastly we want to show that
$$Z = \sum_{n=2}^{\infty} \tilde A_{1,n}\, V^{(2)}_n(\kappa) = 0. \qquad (C.8)$$
Setting $n = 2l+1$ one realizes that
$$\tilde A_{1,2l+1} = \frac{2(-1)^{l+1}}{2l+1} = -\tilde A_{-1,2l+1}. \qquad (C.9)$$
So $Z = -X = 0$.

References

[1] L. Bonora, C. Maccaferri, R. J. Scherer Santos and D. D. Tolla, Ghost story. I. Wedge states in the oscillator formalism, arXiv:0706.1025 [hep-th].

[2] M. Schnabl,
Analytic solution for tachyon condensation in open string field theory, Adv. Theor. Math. Phys. 10 (2006) 433 [arXiv:hep-th/0511286].

[3] Y. Okawa, Comments on Schnabl's analytic solution for tachyon condensation in Witten's open string field theory, JHEP 0604 (2006) 055 [arXiv:hep-th/0603159].

[4] I. Ellwood and M. Schnabl, Proof of vanishing cohomology at the tachyon vacuum, JHEP 0702 (2007) 096 [arXiv:hep-th/0606142].

[5] L. Rastelli and B. Zwiebach, Solving open string field theory with special projectors, arXiv:hep-th/0606131.

[6] Y. Okawa, L. Rastelli and B. Zwiebach, Analytic solutions for tachyon condensation with general projectors, arXiv:hep-th/0611110.

[7] M. Schnabl, Comments on marginal deformations in open string field theory, arXiv:hep-th/0701248.

[8] M. Kiermaier, Y. Okawa, L. Rastelli and B. Zwiebach, Analytic solutions for marginal deformations in open string field theory, arXiv:hep-th/0701249.

[9] E. Fuchs and M. Kroyter, Schnabl's L(0) operator in the continuous basis, JHEP 0610 (2006) 067 [arXiv:hep-th/0605254].

[10] E. Fuchs and M. Kroyter, Universal regularization for string field theory, JHEP 0702 (2007) 038 [arXiv:hep-th/0610298].

[11] E. Fuchs, M. Kroyter and R. Potting, Marginal deformations in string field theory, arXiv:0704.2222 [hep-th].

[12] E. Fuchs and M. Kroyter, On the validity of the solution of string field theory, JHEP 0605 (2006) 006 [arXiv:hep-th/0603195].

[13] Y. Okawa, Analytic solutions for marginal deformations in open superstring field theory, arXiv:0704.0936 [hep-th].

[14] Y. Okawa, Real analytic solutions for marginal deformations in open superstring field theory, arXiv:0704.3612 [hep-th].

[15] S. Samuel, The Ghost Vertex In E. Witten's String Field Theory, Phys. Lett. B 181 (1986) 255.

[16] E. Cremmer, A. Schwimmer and C. Thorn, The vertex function in Witten's formulation of string field theory, Phys. Lett. B 179 (1986) 57.

[17] D. J. Gross and A. Jevicki, Operator Formulation of Interacting String Field Theory, Nucl. Phys. B 283 (1987) 1.

[18] D. J. Gross and A. Jevicki, Operator Formulation of Interacting String Field Theory. 2, Nucl. Phys. B 287 (1987) 225.

[19] N. Ohta, Covariant Interacting String Field Theory In The Fock Space Representation, Phys. Rev. D 34 (1986) 3785 [Erratum-ibid. D 35 (1987) 2627].

[20] A. LeClair, M. E. Peskin and C. R. Preitschopf, String Field Theory on the Conformal Plane. (I) Kinematical Principles, Nucl. Phys. B 317 (1989) 411.

[21] L. Rastelli, A. Sen and B. Zwiebach, Classical solutions in string field theory around the tachyon vacuum, Adv. Theor. Math. Phys. 5 (2002) 393 [arXiv:hep-th/0102112].

[22] L. Rastelli, A. Sen and B. Zwiebach, Half-strings, Projectors, and Multiple D-branes in Vacuum String Field Theory, JHEP 0111 (2001) 035 [arXiv:hep-th/0105058].

[23] L. Bonora, C. Maccaferri, D. Mamone and M. Salizzoni, Topics in string field theory, arXiv:hep-th/0304270.

[24] L. Rastelli, A. Sen and B. Zwiebach, Star Algebra Spectroscopy, arXiv:hep-th/0111281.

[25] D. Gaiotto, L. Rastelli, A. Sen and B. Zwiebach, Star algebra projectors, JHEP 0204 (2002) 060 [arXiv:hep-th/0202151].

[26] D. Gaiotto, L. Rastelli, A. Sen and B. Zwiebach, Ghost structure and closed strings in vacuum string field theory, Adv. Theor. Math. Phys. 6 (2003) 403 [arXiv:hep-th/0111129].

[27] M. Schnabl, Wedge states in string field theory, JHEP 0301 (2003) 004 [arXiv:hep-th/0201095].

[28] C. Maccaferri and D. Mamone, Star democracy in open string field theory, JHEP 0309 (2003) 049 [arXiv:hep-th/0306252].

[29] D. M. Belov, Witten's ghost vertex made simple (bc and bosonized ghosts), Phys. Rev. D 69 (2004) 126001 [arXiv:hep-th/0308147].

[30] D. M. Belov and C. Lovelace, Star products made easy, Phys. Rev. D 68 (2003) 066003 [arXiv:hep-th/0304158].

[31] D. M. Belov and C. Lovelace, unpublished.

[32] K. Okuyama, Ghost Kinetic Operator of Vacuum String Field Theory, JHEP 0201 (2002) 027 [arXiv:hep-th/0201015].

[33] H. Hata and T. Kawano, Open string states around a classical solution in vacuum string field theory, JHEP 0111 (2001) 038 [arXiv:hep-th/0108150].

[34] V. A. Kostelecky and R. Potting, Analytical construction of a nonperturbative vacuum for the open bosonic string, Phys. Rev. D 63 (2001) 046007 [arXiv:hep-th/0008252].

[35] E. Fuchs, M. Kroyter and A. Marcus, Squeezed States Projectors in String Field Theory, JHEP 0209 (2002) 022 [arXiv:hep-th/0207001].

[36] K. Furuuchi and K. Okuyama, Comma vertex and string field algebra, JHEP 0109 (2001) 035 [arXiv:hep-th/0107101].

[37] I. Kishimoto, Some properties of string field algebra, JHEP 0112 (2001) 007 [arXiv:hep-th/0110124].