Trace Identities for the matrix Schrödinger operator on the half line with general boundary conditions
Ricardo Weder †‡
Departamento de Física Matemática, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Apartado Postal 20-126, México DF 01000, México.
Abstract
We prove Buslaev–Faddeev trace identities for the matrix Schrödinger operator on the half line, with general boundary conditions at the origin, and with selfadjoint matrix potentials.
In this paper we study the matrix Schrödinger operator on the half line

H_{A,B} ψ := −ψ'' + V(x) ψ,  x ∈ (0, ∞),  (1.1)

where the prime denotes the derivative with respect to the spatial coordinate x. Furthermore, the wavefunction ψ(x) will either be an n × n matrix-valued function or a column vector valued function with n components. As is shown in [1]-[9], the general selfadjoint boundary conditions at x = 0 for the matrix Schrödinger operator (1.1) can be formulated in several equivalent ways. For our purposes, it is more convenient to use the formulation given in [6]-[9], where they are stated in terms of constant n × n matrices A and B as follows,

−B† ψ(0) + A† ψ'(0) = 0,  (1.2)

−B† A + A† B = 0,  (1.3)

A† A + B† B > 0.  (1.4)

Note that A†B is selfadjoint and the selfadjoint matrix (A†A + B†B) is positive.

∗ PACS classification (2010): 02.30.Zz; 03.65.-w; 03.65.Ge; 03.65.Nk. Mathematics Subject Classification (2010): 34L25; 34L40; 81U05; 81Uxx. Research partially supported by project PAPIIT-DGAPA UNAM IN102215.
† Fellow, Sistema Nacional de Investigadores.
It is clear that the matrices A, B are not uniquely defined. It is always possible to multiply them on the right by an invertible matrix T without affecting (1.2), (1.3) and (1.4). Moreover,

H_{AT,BT} = H_{A,B}.  (1.5)

We assume that the potential matrix V(x) is an n × n selfadjoint matrix-valued function that is in the Faddeev class, namely, V(x) and its first moment are integrable in (0, ∞). That is to say, each entry of the matrix V is Lebesgue measurable on (0, ∞) and

∫_0^∞ (1 + x) ‖V(x)‖ dx < +∞.  (1.6)

By ‖V(x)‖ we denote the norm of V(x) as an operator on C^n. Clearly, V(x) satisfies (1.6) if and only if this condition holds for each of its entries. Moreover, we assume that the potential is infinitely differentiable on (0, ∞) and that

‖ d^j/dx^j V(x) ‖ ≤ C_j (1 + |x|)^{−ρ−j},  for some ρ ∈ (1, 2), and all j = 0, 1, 2, ···.  (1.7)

Furthermore, we always assume that the matrix potential is selfadjoint (the dagger means matrix adjoint),

V(x) = V(x)†,  x ∈ (0, ∞).  (1.8)

The main result of our paper is Theorem 3.6, where we prove Buslaev–Faddeev trace identities for the matrix Schrödinger operator (1.1) with the most general boundary conditions (1.2)-(1.4). These trace identities were first proven in the scalar case by Buslaev and Faddeev in [10] (see also [11]). For a textbook presentation of these results see Section 6 of Chapter 4 of [12]. In these remarkable identities the sum of the absolute values of the eigenvalues, to an even or to an odd power, is expressed in terms of the coefficients of the high-energy asymptotic expansion of the logarithm of the Jost function and of the integral over the absolutely continuous spectrum (0, ∞) of the phase of the Jost function in the even case, and of the logarithm of the absolute value of the Jost function in the odd case. The coefficients of the asymptotic expansion of the logarithm of the Jost function depend on the (scalar) potential.
These identities give formulae for sums of the absolute values of the eigenvalues to an even or odd power (traces) in terms of properties of the absolutely continuous spectrum encoded in the Jost function. Actually, they link the point spectrum with the absolutely continuous spectrum, or in other words, bound state information with scattering information. In [7] we proved Levinson's theorem for the matrix Schrödinger operator with general boundary conditions; that is a trace identity of order zero.

We prove our trace identities adapting to the matrix Schrödinger operator the classical proof in the scalar case that is given, for example, in Section 6 of Chapter 4 of [12]. In our case the Jost function is replaced by the determinant of the Jost matrix. The new technical results that made it possible to adapt the classical proof to the matrix case are the precise study of the low-energy behavior of the determinant of the Jost matrix that we obtained in Corollary 6.2 of [7], assuming that (1.6) holds, and the detailed analysis of the high-energy behavior of the determinant of the Jost matrix that is given in Proposition 7.5 of [7], assuming only that the matrix potential is integrable, i.e., that

∫_0^∞ ‖V(x)‖ dx < +∞.  (1.9)

Furthermore, to prove Theorem 3.6 we use the results of Theorems 8.1 and 8.5 of [7] where, in particular, we prove that the eigenvalues of H_{A,B} coincide with the zeros of the determinant of the Jost matrix and that the multiplicity of each eigenvalue is equal to the order of the corresponding zero of the determinant of the Jost matrix. We also use in the proof of Theorem 3.6 the high-energy asymptotic expansion of the logarithm of the determinant of the Jost matrix, which we prove in Section 3 of this paper, assuming that V(x) ∈ C^∞((0, ∞)) and that (1.7) holds.

Currently there is a great deal of interest in the spectral and scattering theory of matrix Schrödinger operators on the half line with general boundary conditions.
For a review of the literature see [6]-[9]. These operators are important, for example, in the quantum mechanical scattering of particles with internal structure and in quantum graphs. The matrix Schrödinger operator on the half line (1.1) is equivalent to a star graph, i.e. to a quantum graph with only one vertex and a finite number of edges of infinite length. Physically it corresponds to n very thin quantum wires connected at the vertex. The boundary conditions (1.2), (1.3) and (1.4) impose relations, at the vertex, between the values of the wave functions, and of their derivatives, at different edges. See, for example, [1]-[5] and [13]-[25].

The paper is organized as follows. In Section 2 we state results from [7], [26] that we need about the Jost solution, the regular solution, the Jost matrix, and transformations of the matrices A, B that give the boundary conditions. In Section 3 we prove our trace identities, and in Section 4 we give examples that illustrate them.

Throughout the paper we designate by C⁺ the upper-half complex plane, by R the real axis, and we let C̄⁺ := C⁺ ∪ R. For any k ∈ C̄⁺ we denote by k* its complex conjugate. As we already mentioned, for any matrix D we designate by D† its adjoint. We denote by C a positive constant that is not required to take the same value when it appears in different places.

In this section we introduce certain results that we need. See [7] and [26]. We always assume that the selfadjoint matrix potential V satisfies at least (1.9). We will use n × n matrix solutions to the equation

−ψ'' + V(x) ψ = k² ψ,  x ∈ (0, ∞),  k ∈ C̄⁺.  (2.1)

Let F, G be any pair of n × n matrix valued functions defined for x ∈ (0, ∞). The Wronskian [F; G] is defined as follows,

[F; G] := F G' − F' G.

Remark that for any two n × n solutions ϕ(k, x) and ψ(k, x) to (2.1), each of the Wronskians [ϕ(k*, x)†; ψ(k, x)] and [ϕ(−k*, x)†; ψ(k, x)] is independent of x.
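The constancy of the Wronskian is easy to check numerically. The following sketch is ours, not from the paper: it takes the scalar case n = 1 with the arbitrary sample potential V(x) = 2e^{−x} and a real k, so the daggers reduce to nothing, integrates (2.1) for two sets of initial data, and evaluates [ϕ; ψ] along the half line.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check (not from the paper) that the Wronskian of two solutions of
# -psi'' + V psi = k^2 psi is independent of x; scalar case n = 1, real k,
# sample potential V(x) = 2 exp(-x) (an assumption).
V = lambda x: 2.0 * np.exp(-x)
k = 1.3

def rhs(x, y):                       # y = (psi, psi')
    return [y[1], (V(x) - k**2) * y[0]]

xs = np.linspace(0.0, 10.0, 50)
phi = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)
psi = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12)

# [phi; psi] := phi psi' - phi' psi, evaluated along the grid; it stays equal
# to its value at x = 0, which is 1 for this initial data
W = phi.y[0] * psi.y[1] - phi.y[1] * psi.y[0]
print(np.max(np.abs(W - 1.0)))       # ≈ 0
```

The same computation with k replaced by a nonreal value requires complex initial data and the conjugations written out, but the conclusion is identical.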
By f(k, x) we denote the Jost solution to (2.1), that is, the n × n matrix solution that satisfies the following asymptotics for k ∈ C̄⁺ \ {0},

f(k, x) = e^{ikx} [I_n + o(1/x)],  f'(k, x) = ik e^{ikx} [I_n + o(1/x)],  x → +∞,  (2.2)

with I_n the n × n identity matrix. For each fixed x (see [7, 26]), f(k, x) and f'(k, x) are analytic for k ∈ C⁺ and continuous for k ∈ C̄⁺. Clearly, this asymptotics implies that for each fixed k ∈ C⁺, each of the n columns of f(k, x) decays exponentially to zero as x → +∞.

Another important solution to (2.1) is the regular solution, ϕ_{A,B}(k, x), that is, the n × n matrix solution defined by the initial conditions

ϕ_{A,B}(k, 0) = A,  ϕ'_{A,B}(k, 0) = B,  (2.3)

with A and B the matrices that define the boundary conditions in (1.2), (1.3), (1.4). ϕ_{A,B}(k, x) is entire in k in the complex plane C, for each fixed x ∈ (0, ∞).

Let us define the Jost matrix J(k) in the following way,

J_{A,B}(k) := [f(−k*, x)†; ϕ_{A,B}(k, x)],  k ∈ C̄⁺.  (2.4)

It follows from (2.4), evaluating the Wronskian at x = 0, that

J_{A,B}(k) = f(−k*, 0)† B − f'(−k*, 0)† A,  k ∈ C̄⁺.  (2.5)

J(k) is well defined for C̄⁺ since f(−k*, 0)† and f'(−k*, 0)† are analytic in k ∈ C⁺ and continuous in k ∈ C̄⁺. It is known [7] that J(k) is invertible for k ∈ R \ {0}.

It is proven in [7] (with M, M† there replaced, respectively, by M†, M) that under the unitary transformation V → M V M†, with a unitary matrix M, and the combination of three consecutive transformations (A, B) → (M A T₁ M† T₂, M B T₁ M† T₂), first a right multiplication by an invertible matrix T₁, then the unitary transformation with M, followed by a right multiplication by an invertible matrix T₂, we have that

f_{MVM†}(k, x) = M f(k, x) M†,  k ∈ C̄⁺,  (2.6)

ϕ_{MVM†, MAT₁M†T₂, MBT₁M†T₂}(k, x) = M ϕ_{V,A,B}(k, x) T₁ M† T₂,  k ∈ C,  (2.7)

J_{MVM†, MAT₁M†T₂, MBT₁M†T₂}(k) = M J_{V,A,B}(k) T₁ M† T₂,  k ∈ C̄⁺.  (2.8)

Furthermore, we have that

H_{MVM†, MAT₁M†T₂, MBT₁M†T₂} = M H_{V,A,B} M†.  (2.9)

For clarity, in (2.6)-(2.9) we explicitly state the dependence on V of the Jost solution, the regular solution, the Jost matrix and the matrix Schrödinger operator. The transformation V → V and (A, B) → (AT, BT) with an invertible matrix T is just a change in the parametrization of the boundary conditions (1.2), (1.3) and (1.4). On the contrary, the unitary transformation V → M V M† and (A, B) → (M A M†, M B M†) with a unitary matrix M is a change of representation in the quantum mechanical sense.

Taking into account the general selfadjoint boundary condition for the scalar Schrödinger operator, that is to say when n = 1, we consider the case where the matrices A, B are diagonal. This special pair of diagonal matrices is denoted by Ã and B̃, with

à := −diag{sin θ₁, . . . , sin θ_n},  B̃ := diag{cos θ₁, . . . , cos θ_n}.  (2.10)

In this case the boundary conditions (1.2) are

cos θ_j ψ_j(0) + sin θ_j ψ'_j(0) = 0,  j = 1, 2, ···, n.  (2.11)

Here, the real parameters θ_j take values in the interval (0, π]. The case θ_j = π corresponds to the Dirichlet boundary condition, the case where θ_j ≠ π/2, π corresponds to mixed boundary conditions, and the case θ_j = π/2 corresponds to the Neumann boundary condition. Suppose that there are n_D values with θ_j = π, that there are n_M values with θ_j ≠ π/2, π and, in consequence, that there are n_N remaining values, with n_N := n − n_M − n_D, such that the corresponding θ_j-values are θ_j = π/2. Particular cases where any of n_D, n_M, and n_N are zero or n are allowed. Clearly, Ã, B̃ satisfy (1.3), (1.4) with Ã, B̃ instead of A, B.
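As a quick numerical illustration (ours, not the paper's; the sample angles are an arbitrary choice), one can verify (1.3) and (1.4) for the diagonal pair (2.10) and count the boundary-condition types of (2.11):

```python
import numpy as np

# Sketch: build the diagonal matrices A~, B~ of (2.10) from angles theta_j in
# (0, pi], check the selfadjointness condition (1.3) and the positivity
# condition (1.4), and count Dirichlet / Neumann / mixed boundary conditions.
theta = np.array([np.pi, np.pi / 2, 1.0])   # one of each type; n = 3 (assumed)
A = -np.diag(np.sin(theta))
B = np.diag(np.cos(theta))

cond13 = -B.conj().T @ A + A.conj().T @ B   # must vanish, (1.3)
cond14 = A.conj().T @ A + B.conj().T @ B    # must be positive, (1.4)

n_D = np.sum(np.isclose(theta, np.pi))      # Dirichlet: theta_j = pi
n_N = np.sum(np.isclose(theta, np.pi / 2))  # Neumann: theta_j = pi/2
n_M = len(theta) - n_D - n_N                # mixed: the remaining values
print(np.allclose(cond13, 0), np.min(np.linalg.eigvalsh(cond14)) > 0,
      n_D, n_N, n_M)                        # True True 1 1 1
```

Here (1.4) holds automatically, since A†A + B†B = diag{sin²θ_j + cos²θ_j} = I_n.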
In Proposition 4.3 of [7] it is proven that for any pair of matrices (A, B) that satisfy (1.2)-(1.4) there are a pair of diagonal matrices (Ã, B̃) as in (2.10), a unitary matrix M and two invertible matrices T₁, T₂ such that

A = M Ã T₁ M† T₂,  B = M B̃ T₁ M† T₂.  (2.12)

Note that T₁, T₂ in (2.12) correspond, respectively, to T₁⁻¹, T₂⁻¹ in Proposition 4.3 of [7].

The following results are proven in Proposition 6.1 of [7]. Consider the selfadjoint matrix Schrödinger operator (1.1) with the selfadjoint boundary conditions (1.2), (1.3), (1.4) and with the selfadjoint potential V in the Faddeev class (1.6). Then, the nonzero column vector u ∈ C^n is an eigenvector of the zero-energy Jost matrix J(0) with the zero eigenvalue, i.e. u ∈ Ker[J(0)], if and only if ϕ_{A,B}(0, x) u is bounded for x ∈ (0, ∞). Let us denote by μ the geometric multiplicity of the zero eigenvalue of J(0). Then, we can form exactly μ columns by using linear combinations of the n columns of the zero-energy regular solution ϕ_{A,B}(0, x) in such a way that those μ columns form linearly independent solutions to the zero-energy Schrödinger equation,

−ϕ''(x) + V(x) ϕ(x) = 0,  (2.13)

that satisfy the boundary conditions (1.2), (1.3), (1.4) and that remain bounded as x → +∞. Furthermore, each of such μ column-vector solutions to (2.13) can also be expressed as a linear combination of columns of f(0, x). In the physics literature the bounded solutions to (2.13) that satisfy the boundary conditions are called half-bound states or zero-energy resonances. Hence, μ is the number of linearly independent half-bound states or of zero-energy resonances.

Furthermore, in Corollary 6.2 of [7] it is proven that if the selfadjoint potential is in the Faddeev class (1.6), the determinant of the Jost matrix J(k) defined in (2.4) has the small-k behavior

det J(k) = c k^μ [1 + o(1)],  k → 0 in C̄⁺,  (2.14)

where c is a nonzero constant that is explicitly given in [7].

Furthermore, it is proven in Proposition 7.5 of [7] that if the selfadjoint potential is integrable, i.e. if it satisfies (1.9), then

det J(k) = c k^{n_M + n_N} [1 + O(1/k)],  k → ∞ in C̄⁺,  (2.15)

where c is a nonzero constant that is explicitly given in [7], and n_M and n_N are the nonnegative integers defined after (2.11), i.e. they are, respectively, the number of mixed and of Neumann boundary conditions in the representation where the matrices that define the boundary conditions are given by the diagonal matrices Ã, B̃.

The following results concerning eigenvalues of H_{A,B} are proven in [7]. Since the matrix Schrödinger operator H_{A,B} is selfadjoint, its eigenvalues have to be real. It turns out that H_{A,B} has no nonnegative eigenvalues; the eigenvalues are of the form −κ², corresponding to k = iκ for some κ > 0. There is an eigenvalue at k = iκ for some positive κ if and only if Ker[J(iκ)] is nontrivial, or equivalently, if and only if det[J(iκ)] = 0. The multiplicity, m_κ, of the eigenvalue at k = iκ is finite and is equal to the dimension of Ker[J(iκ)]. Furthermore, it is proven in Theorem 8.5 of [7] that

det J(k) = c (k − iκ)^{m_κ} [1 + O(k − iκ)],  k → iκ,  (2.16)

where c is a nonzero constant. Consequently, the order of the zero of det J(k) at k = iκ is equal to the multiplicity, m_κ, of the eigenvalue at k = iκ. We denote by N the total number of eigenvalues of H_{A,B}, repeated according to their multiplicity. If (1.6) holds, N < ∞.

We first obtain a high-energy asymptotic expansion for the Jost solution. Let us denote

m(k, x) := e^{−ikx} f(k, x).  (3.1)

LEMMA 3.1.
Suppose that the matrix potential V is selfadjoint, that it belongs to C^∞((0, ∞)) and that it satisfies (1.7). Then, for any N = 0, 1, 2, ···,

m(k, x) = Σ_{l=0}^{N} b_l(x)/(2ik)^l + r_N(k, x),  (3.2)

where the remainder r_N satisfies the estimate: for every C > 0 there are constants C_{N,j} such that

‖ ∂^j/∂x^j r_N(k, x) ‖ ≤ (C_{N,j}/|k|^{N+1}) (1 + |x|)^{−(N+1)(ρ−1)−j},  j = 0, 1, 2, ···,  x ∈ [0, ∞),  Im k ≥ 0,  |k| ≥ C,  (3.3)

and where the b_l(x), l = 0, 1, 2, ···, are C^∞ functions defined by the following recurrence relation,

b₀(x) = I_n,  b_{l+1}(x) = −b'_l(x) − ∫_x^∞ V(y) b_l(y) dy.  (3.4)

The b_l(x) satisfy the estimate

‖ d^j/dx^j b_l(x) ‖ ≤ C_{l,j} (1 + |x|)^{−l(ρ−1)−j},  j = 0, 1, 2, ···,  l = 0, 1, 2, ···.  (3.5)

Furthermore, the expansion (3.2) can be differentiated term by term.
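The recursion (3.4) can be run symbolically. The sketch below is ours, not the paper's: the scalar sample potential V(x) = e^{−x} is an assumption, and sympy does the calculus. It generates the first coefficients b_l and checks that the truncated sum m_N = Σ_{l≤N} b_l/(2ik)^l satisfies m'' + 2ik m' − V m = q_N, with q_N as in (3.8) below, which is exactly why (3.4) is the right recursion.

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)  # positive x gives convergent tails
V = sp.exp(-x)                                # sample scalar potential (assumption)

# b_0 = I_n reduces to 1 in this scalar sketch; recursion (3.4):
b = [sp.Integer(1)]
for l in range(4):
    tail = sp.integrate((V * b[l]).subs(x, y), (y, x, sp.oo))
    b.append(-sp.diff(b[l], x) - tail)

# m_N = sum_{l=0}^{N} b_l/(2ik)^l satisfies m'' + 2ik m' - V m = q_N,
# with q_N = (b_N'' - V b_N)/(2ik)^N as in (3.8)
N = 3
mN = sum(b[l] / (2 * sp.I * k)**l for l in range(N + 1))
qN = (sp.diff(b[N], x, 2) - V * b[N]) / (2 * sp.I * k)**N
resid = sp.diff(mN, x, 2) + 2 * sp.I * k * sp.diff(mN, x) - V * mN
print(b[1], sp.simplify(resid - qN))  # b_1 = -exp(-x); the difference is 0
```

For this potential, b₁(x) = −e^{−x} and b₂(x) = −e^{−x} + e^{−2x}/2, consistent with the decay estimate (3.5).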
REMARK 3.2.
When an asymptotic expansion as (3.2) holds for all N we will write the right-hand side as an asymptotic series (see [27]),

m(k, x) = Σ_{l=0}^{∞} b_l(x)/(2ik)^l.  (3.6)

Proof:
Results of this type are classical. For the reader's convenience we outline the proof following the one given in the scalar case in Proposition 4.1 in Chapter 4 of [12], with some changes. The coefficients b_l, l ≥ 0, are chosen precisely so that the remainder r_N satisfies

−r''_N(k, x) − 2ik r'_N(k, x) + V(x) r_N(k, x) = q_N(k, x),  (3.7)

where

q_N(k, x) := (b''_N(x) − V(x) b_N(x)) (1/(2ik)^N).  (3.8)

By (1.7), (3.5),

‖ ∂^j/∂x^j q_N(k, x) ‖ ≤ C_{N,j} (1 + x)^{−(N+1)(ρ−1)−1−j} / |2ik|^N.  (3.9)

Writing (3.7) as an integral equation we get

r_N(k, x) = Q_N(k, x) + (1/(2ik)) ∫_x^∞ ( e^{2ik(y−x)} − 1 ) V(y) r_N(k, y) dy,  (3.10)

where

Q_N(k, x) := (1/(2ik)) ∫_x^∞ ( 1 − e^{2ik(y−x)} ) q_N(k, y) dy.  (3.11)

By (3.9),

‖ Q_N(k, x) ‖ ≤ C_N (1 + x)^{−(N+1)(ρ−1)} / |2ik|^{N+1}.  (3.12)

Then, solving (3.10) by iteration and using (3.12) we prove that r_N(k, x) satisfies (3.3) with j = 0. To prove it for positive j we proceed as follows (in the proof of Proposition 4.1 in Chapter 4 of [12] in the scalar case a different argument is proposed). We assume that (3.3) is true for j = 0, 1, ···, j₀ and we prove it for j = j₀ + 1. We denote p(k, x) := ∂^{j₀+1}/∂x^{j₀+1} r_N(k, x). Then, differentiating (3.7) j₀ + 1 times we get that

−p''(k, x) − 2ik p'(k, x) + V(x) p(k, x) = s(k, x),  (3.13)

where

s(k, x) = −Σ_{j=1}^{j₀+1} ( j₀+1 choose j ) ( d^j/dx^j V(x) ) ∂^{j₀+1−j}/∂x^{j₀+1−j} r_N(k, x) + ∂^{j₀+1}/∂x^{j₀+1} q_N(k, x).  (3.14)

By (1.7), (3.9) and the inductive assumption we have that

‖ s(k, x) ‖ ≤ C (1 + x)^{−(N+1)(ρ−1)−2−j₀} / |2ik|^N.  (3.15)

Writing (3.13) as an integral equation we obtain

p(k, x) = S(k, x) + (1/(2ik)) ∫_x^∞ ( e^{2ik(y−x)} − 1 ) V(y) p(k, y) dy,  (3.16)

where

S(k, x) := (1/(2ik)) ∫_x^∞ ( 1 − e^{2ik(y−x)} ) s(k, y) dy.  (3.17)

By (3.15) we have that S(k, x) satisfies

‖ S(k, x) ‖ ≤ C (1 + x)^{−(N+1)(ρ−1)−1−j₀} / |2ik|^{N+1}.  (3.18)

Finally, solving (3.16) by iteration and using (3.18) we obtain that

‖ p(k, x) ‖ ≤ C (1 + x)^{−(N+1)(ρ−1)−1−j₀} / |2ik|^{N+1},  (3.19)

which proves that (3.3) holds for r_N(k, x) with j = j₀ + 1. □

By Lemma 3.1 the Jost matrix J(k) defined in (2.4) (see also (2.5)) has the following asymptotic expansion,

J(k) = f(−k*, 0)† B − f'(−k*, 0)† A = Σ_{l=−1}^{∞} c_l/(2ik)^l,  (3.20)

where

c₋₁ = −(1/2) A,  c₀ = B − (1/2) b₁(0)† A,  c_l = b_l(0)† B − ( (1/2) b_{l+1}(0)† + b'_l(0)† ) A,  l = 1, 2, ···.  (3.21)

Let us denote by P the set of all permutations of 1, 2, ···, n. By (3.20) we have that

det J(k) = Σ_{l=−n}^{∞} d_l/(2ik)^l,  (3.22)

with

d_l := Σ_{σ∈P} sign σ  Σ_{l_i ≥ −1, l₁+l₂+···+l_n = l} (c_{l₁})_{1,σ₁} (c_{l₂})_{2,σ₂} ··· (c_{l_n})_{n,σ_n},  (3.23)

where by (c_l)_{j,m} we denote the component of the matrix c_l in the row j and the column m. By (2.15),

d_j = 0 if j < −n_M − n_N,  and  d_{−n_M−n_N} = c/(2i)^{n_M+n_N}.  (3.24)

Then, we have proven the following lemma.

LEMMA 3.3.
Suppose that the matrix potential V is selfadjoint, that it belongs to C^∞((0, ∞)) and that it satisfies (1.7). Then, the determinant of the Jost matrix, defined in (2.4), has the following asymptotic expansion,

det J(k) = Σ_{l=−n_M−n_N}^{∞} d_l/(2ik)^l,  (3.25)

where the coefficients d_l, l = −n_M−n_N, −n_M−n_N+1, ··· are defined in (3.4), (3.21), (3.23), (3.24), with c the constant that appears in (2.15) and n_M, n_N the nonnegative integers defined after equation (2.11).

Let us define

h(k) := (1/(c k^{n_M+n_N})) det J(k).  (3.26)

By (2.15), lim_{|k|→∞} h(k) = 1. Hence, we define ln h(k) with lim_{|k|→∞} ln h(k) = 0. By (3.25),

ln h(k) = Σ_{l=1}^{∞} e_l/(2ik)^l,  (3.27)

where

e₁ = ((2i)^{n_M+n_N}/c) d_{1−n_M−n_N},  e_l = ((2i)^{n_M+n_N}/c) ( d_{l−n_M−n_N} − (1/l) Σ_{j=1}^{l−1} j d_{l−n_M−n_N−j} e_j ),  l = 2, 3, ···.  (3.28)

For k real we split the asymptotic expansion (3.27) into its even and odd parts,

ln h(k) = ln_e h(k) + ln_o h(k),  ln_e h(k) := (1/2)(ln h(k) + ln h(−k)),  ln_o h(k) := (1/2)(ln h(k) − ln h(−k)),  k ∈ R.  (3.29)

By (3.27),

ln_e h(k) = Σ_{l=1}^{∞} e_{2l}/(2ik)^{2l} = Σ_{l=1}^{∞} (−1)^l e_{2l}/(2k)^{2l},  k ∈ R,  (3.30)

ln_o h(k) = Σ_{l=0}^{∞} e_{2l+1}/(2ik)^{2l+1} = i Σ_{l=0}^{∞} (−1)^{l+1} e_{2l+1}/(2k)^{2l+1},  k ∈ R.  (3.31)

LEMMA 3.4.
Suppose that the selfadjoint matrix potential V satisfies (1.6). For any z ∈ C with 0 < Re z < 1/2 let us define

F(z) := ∫_0^∞ ln_e h(k) k^{2z−1} dk,  G(z) := −i ∫_0^∞ ln_o h(k) k^{2z−1} dk.  (3.32)

Then,

F(z) sin(πz) − G(z) cos(πz) = (π/(2z)) Σ̃_{j=1}^{N} |κ_j|^{2z},  (3.33)

where −κ_j², j = 1, ···, N are the eigenvalues of the Schrödinger operator H_{A,B} defined in (1.1) with the boundary conditions (1.2)-(1.4). In the right-hand side of (3.33), Σ̃_{j=1}^{N} |κ_j|^{2z} means the sum of the absolute values of the eigenvalues, −κ_j², each raised to the power z and repeated according to its multiplicity. Furthermore, N is the total number of (repeated) eigenvalues. As (1.6) holds, N < ∞.

Proof:
This lemma is proven as in the proof of Proposition 6.3 in Chapter 4 of [12], integrating the function ( ḣ(k)/h(k) ) k^{2z}, with h(k) defined in (3.26) and 0 ≤ arg k ≤ π, along the contour C_{ǫ,R} given below, taking the limit when ǫ → 0, R → ∞ and using (2.14), (2.15), (2.16) and the residue theorem. The contour C_{ǫ,R} consists of four parts, C_{ǫ,R} := (−R, −ǫ) ∪ C_ǫ ∪ (ǫ, R) ∪ C_R. The first part, (−R, −ǫ), is the directed line segment on the real axis, for some small positive ǫ and a large positive R, with the direction of the path from −R + i0 to −ǫ + i0. The second part, C_ǫ, consists of the upper semicircle centered at the origin with radius ǫ and traversed from the point −ǫ + i0 to ǫ + i0. The third piece, (ǫ, R), is the directed line segment of the positive real axis from ǫ + i0 to R + i0. The fourth part, C_R, is the upper semicircle centered at the origin with radius R and traversed from the point R + i0 to −R + i0. □

PROPOSITION 3.5.
Assume that the selfadjoint matrix potential V satisfies (1.6), that it belongs to C^∞((0, ∞)) and that (1.7) holds. Then, the functions F(z) and G(z) defined in (3.32) have analytic continuations, denoted also by F(z), G(z), to meromorphic functions for Re z > 0. The function F(z) has simple poles at z = j, j = 1, 2, ··· with residue (−1)^{j+1} 2^{−2j−1} e_{2j}. Furthermore, the representation (3.32) of F(z) is valid for 0 < Re z < 1, and

F(z) = ∫_0^∞ ( ln_e h(k) − Σ_{l=1}^{j} (−1)^l e_{2l}/(2k)^{2l} ) k^{2z−1} dk,  for j < Re z < j + 1,  j = 1, 2, ···.  (3.34)

Moreover, G(z) has simple poles at z = j + 1/2, j = 0, 1, 2, ··· with residue (−1)^j 2^{−2j−2} e_{2j+1}, and

G(z) = ∫_0^∞ ( −i ln_o h(k) − Σ_{l=0}^{j−1} (−1)^{l+1} e_{2l+1}/(2k)^{2l+1} ) k^{2z−1} dk,  for j − 1/2 < Re z < j + 1/2,  j = 1, 2, ···.  (3.35)

Proof:
The proposition follows from (3.30), (3.31) as in the proof of Lemma 6.4 in Chapter 4 of [12]. We give details in the case of G(z) for the reader's convenience. We have that, for Re z < 1/2,

G(z) = ∫_0^1 (−i) ln_o h(k) k^{2z−1} dk + ∫_1^∞ ( −i ln_o h(k) − Σ_{l=0}^{j−1} (−1)^{l+1} e_{2l+1}/(2k)^{2l+1} ) k^{2z−1} dk + Σ_{l=0}^{j−1} (−1)^l 2^{−2l−2} e_{2l+1}/(z − l − 1/2).  (3.36)

The first integral in the right-hand side of (3.36) is analytic for Re z > 0, and the second one is analytic for Re z < j + 1/2. Since this is true for every j = 1, 2, ···, G(z) has an analytic continuation to a meromorphic function for Re z > 0, with simple poles at z = j + 1/2, j = 0, 1, 2, ···, with residue (−1)^j 2^{−2j−2} e_{2j+1}. Moreover, for j − 1/2 < Re z,

∫_0^1 (−i) ln_o h(k) k^{2z−1} dk = ∫_0^1 ( −i ln_o h(k) − Σ_{l=0}^{j−1} (−1)^{l+1} e_{2l+1}/(2k)^{2l+1} ) k^{2z−1} dk − Σ_{l=0}^{j−1} (−1)^l 2^{−2l−2} e_{2l+1}/(z − l − 1/2).  (3.37)

Equation (3.35) follows from (3.36) and (3.37). □

We now prove our main result.
THEOREM 3.6. Suppose that the selfadjoint matrix potential V satisfies (1.6), that it belongs to C^∞((0, ∞)) and that (1.7) holds. Then,

Σ̃_{l=1}^{N} |κ_l| − (1/π) ∫_0^∞ ln_e h(k) dk = e₁/4,  (3.38)

Σ̃_{l=1}^{N} |κ_l|^{2j+1} + (−1)^{j+1} ((2j+1)/π) ∫_0^∞ ( ln_e h(k) − Σ_{l=1}^{j} (−1)^l e_{2l}/(2k)^{2l} ) k^{2j} dk = (2j+1) e_{2j+1}/2^{2j+2},  j = 1, 2, ···.  (3.39)

Furthermore,

Σ̃_{l=1}^{N} |κ_l|^{2j} + (−1)^j (2j/π) ∫_0^∞ ( −i ln_o h(k) − Σ_{l=0}^{j−1} (−1)^{l+1} e_{2l+1}/(2k)^{2l+1} ) k^{2j−1} dk = −j e_{2j}/2^{2j},  j = 1, 2, ···.  (3.40)

In the left-hand side of (3.38), (3.39) and (3.40), Σ̃_{l=1}^{N} |κ_l|^q, with, respectively, q = 1, q = 2j+1, and q = 2j, means the sum of the absolute values of the eigenvalues, −κ_l², each raised to the power q/2 and repeated according to its multiplicity. Furthermore, N is the total number of (repeated) eigenvalues. As (1.6) holds, N < ∞. The coefficients e_j, j = 1, 2, ··· are defined in equations (3.4), (3.21), (3.23), (3.24), (3.28).

Proof: By Proposition 3.5 and analytic continuation, (3.33) holds for Re z > 0. Equations (3.38) and (3.39) follow evaluating (3.33) at z = j + 1/2, j = 0, 1, 2, ··· and using (3.34). Moreover, (3.40) follows evaluating (3.33) at z = j, j = 1, 2, ··· and using (3.35). □

In this section we illustrate our trace identities in Theorem 3.6 with simple examples.
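Before the matrix examples, the first identity (3.38) can be checked numerically in a case simple enough to do by hand: the scalar free case n = 1, V = 0, with the mixed boundary condition cos θ ψ(0) + sin θ ψ'(0) = 0 of (2.11). This check is ours, not an example from the paper; the value θ = 1 and the use of scipy are assumptions. Here J(k) = cos θ + ik sin θ, so n_M + n_N = 1, c = i sin θ, h(k) = J(k)/(ik sin θ), there is a single eigenvalue −cot²θ exactly when cot θ > 0, and e₁ = 2 cot θ.

```python
import numpy as np
from scipy.integrate import quad

theta = 1.0                          # mixed boundary condition, 0 < theta < pi/2
cot = np.cos(theta) / np.sin(theta)  # kappa = cot(theta) > 0: one eigenvalue

def h(k):
    # h(k) = det J(k)/(c k^{n_M+n_N}), det J = cos(theta) + i k sin(theta)
    return (np.cos(theta) + 1j * k * np.sin(theta)) / (1j * k * np.sin(theta))

def ln_e(k):
    # even part of ln h on the real axis, (3.29); it is real-valued there
    return 0.5 * (np.log(h(k)) + np.log(h(-k))).real

integral, _ = quad(ln_e, 0.0, np.inf)
lhs = cot - integral / np.pi         # sum of |kappa_l| minus (1/pi) integral
e1 = 2.0 * cot                       # from ln h(k) = e1/(2ik) + ...
print(lhs, e1 / 4.0)                 # the two sides of (3.38) agree
```

For θ ∈ (π/2, π) there is no eigenvalue, and the identity still balances with the `cot` term dropped from `lhs`, since then cot θ < 0 and both sides equal cot θ/2.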
EXAMPLE 4.1.
We consider a 2 × 2 matrix Schrödinger operator, n = 2, with Dirichlet boundary condition, ψ(0) = 0. We can take A = 0, B = −I. Since we only have Dirichlet boundary conditions, according to the definition given below equation (2.11), n_M = n_N = 0, and also the constant c that appears in (2.15) is equal to one (see the first equation in page 16 of [7]). Then, according to (3.4), (3.21), (3.23), (3.24), (3.28),

e₁ = −∫_0^∞ V_{1,1}(x) dx − ∫_0^∞ V_{2,2}(x) dx,  (4.1)

e₂ = ( ∫_0^∞ V_{1,1}(x) dx ) ( ∫_0^∞ V_{2,2}(x) dx ) − ( ∫_0^∞ V_{1,2}(x) dx ) ( ∫_0^∞ V_{2,1}(x) dx ) − V_{1,1}(0) + ( ∫_0^∞ dx ∫_x^∞ V(y) dy V(x) )_{1,1} − V_{2,2}(0) + ( ∫_0^∞ dx ∫_x^∞ V(y) dy V(x) )_{2,2} − (1/2) ( ∫_0^∞ V_{1,1}(x) dx + ∫_0^∞ V_{2,2}(x) dx )².  (4.2)

EXAMPLE 4.2 (The δ′ boundary condition). We consider a 3 × 3 matrix Schrödinger operator, n = 3, that satisfies the δ′ boundary condition,

ψ'₁(0) = ψ'₂(0) = ψ'₃(0),  Σ_{j=1}^{3} ψ_j(0) = a ψ'₁(0),  a ∈ R.  (4.3)

The matrices A, B can be taken as

A = (  1   0  −a
      −1   1   0
       0  −1   0 ),   B = ( 0  0  −1
                            0  0  −1
                            0  0  −1 ).

The potential V is identically zero. The matrices A, B satisfy (1.3), (1.4). From (2.5) with f(k, x) = e^{ikx} I₃, we obtain that

J_{A,B}(k) = ( −ik   0   −1 + iak
               ik   −ik  −1
               0    ik   −1 ).

Then,

det J_{A,B}(k) = k² (3 − iak).  (4.4)

It follows that if a ≥ 0, H_{A,B} has no eigenvalues. On the contrary, if a < 0, H_{A,B} has one eigenvalue, −κ² = −9/a², with iκ = 3i/|a| and with multiplicity one. Suppose that a ≠ 0. Then, with h(k) defined in (3.26),

h(k) = det J_{A,B}(k)/(−ia k³) = 1 − 3/(iak).  (4.5)

Then,

ln h(k) = Σ_{l=1}^{∞} ( −(1/l) (6/a)^l ) (1/(2ik)^l).  (4.6)

It follows that

e_l = −(1/l) (6/a)^l,  l = 1, 2, ···.  (4.7)

In the case a = 0, h(k) = 1, and then ln h(k) = 0 and e_l = 0, l = 1, 2, ···.

References

[1] V. Kostrykin and R. Schrader,
Kirchhoff's rule for quantum wires, J. Phys. A 32, 595–630 (1999).

[2] V. Kostrykin and R. Schrader, Kirchhoff's rule for quantum wires. II: The inverse problem with possible applications to quantum computers, Fortschr. Phys. 48, 703–716 (2000).

[3] M. S. Harmer, Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions, ANZIAM J. 44, 161–168 (2002).

[4] M. S. Harmer, The Matrix Schrödinger Operator and Schrödinger Operator on Graphs, Ph.D. thesis, University of Auckland, New Zealand, 2004.

[5] M. S. Harmer, Inverse scattering on matrices with boundary conditions, J. Phys. A 38, 4875–4885 (2005).

[6] T. Aktosun, M. Klaus, and R. Weder, Small-energy analysis for the self-adjoint matrix Schrödinger equation on the half line, J. Math. Phys. 52, 102101 (2011).

[7] T. Aktosun and R. Weder, High-energy analysis and Levinson's theorem for the self-adjoint matrix Schrödinger operator on the half line, J. Math. Phys. 54, 012108 (2013).

[8] T. Aktosun, M. Klaus, and R. Weder, Small-energy analysis for the self-adjoint matrix Schrödinger operator on the half line II, J. Math. Phys. 55, 032103 (2014).

[10] V. S. Buslaev and L. D. Faddeev, Formulas for traces for a singular Sturm–Liouville differential operator, Soviet Math. Dokl. 1, 451–454 (1960).

[11] L. D. Faddeev, An expression for the trace of the difference between two singular differential operators of Sturm–Liouville type, Dokl. AN SSSR, N5, 878–881 (1957) (Russian).

[12] D. R. Yafaev, Mathematical Scattering Theory: Analytic Theory, Amer. Math. Soc., Providence, RI, 2010.

[13] N. I. Gerasimenko, The inverse scattering problem on a noncompact graph, Theoret. Math. Phys. 75, 460–470 (1988).

[14] N. I. Gerasimenko and B. S. Pavlov, A scattering problem on noncompact graphs, Theoret. Math. Phys. 74, 230–240 (1988).

[15] B. Gutkin and U. Smilansky, Can one hear the shape of a graph? J. Phys. A 34, 6061–6068 (2001).

[16] P. Kurasov and F. Stenberg, On the inverse scattering problem on branching graphs, J. Phys. A 35, 101–121 (2002).

[17] P. Kuchment, Quantum graphs. I. Some basic structures, Waves Random Media 14, S107–S128 (2004).

[18] J. Boman and P. Kurasov, Symmetries of quantum graphs and the inverse scattering problem, Adv. Appl. Math. 35, 58–70 (2005).

[19] P. Kuchment, Quantum graphs. II. Some spectral properties of quantum and combinatorial graphs, J. Phys. A 38, 4887–4900 (2005).

[20] P. Kurasov and M. Nowaczyk, Inverse spectral problem for quantum graphs, J. Phys. A 38, 4901–4915 (2005).

[21] G. Berkolaiko, R. Carlson, S. A. Fulling, and P. Kuchment (eds.), Quantum Graphs and their Applications, Contemporary Mathematics 415, Amer. Math. Soc., Providence, RI, 2006.

[22] P. Exner, J. P. Keating, P. Kuchment, T. Sunada, and A. Teplyaev (eds.), Analysis on Graphs and its Applications, Proc. Symposia in Pure Mathematics 77, Amer. Math. Soc., Providence, RI, 2008.

[23] J. Behrndt and A. Luger, On the number of negative eigenvalues of the Laplacian on a metric graph, J. Phys. A 43, 474006 (2010).

[24] P. Kurasov and M. Nowaczyk, Geometric properties of quantum graphs and vertex scattering matrices, Opuscula Mathematica 30, 295–309 (2010).

[25] G. Berkolaiko and P. Kuchment, Introduction to Quantum Graphs, Mathematical Surveys and Monographs 186, Amer. Math. Soc., Providence, RI, 2013.

[26] Z. S. Agranovich and V. A. Marchenko, The Inverse Problem of Scattering Theory, Gordon and Breach, New York, 1963.

[27] A. Erdélyi, Asymptotic Expansions, Dover, New York, 1956.