Inverse Spectral Problems for Tridiagonal N by N Complex Hamiltonians
Symmetry, Integrability and Geometry: Methods and Applications SIGMA (2009), 018, 28 pages

Gusein Sh. GUSEINOV
Department of Mathematics, Atilim University, 06836 Incek, Ankara, Turkey
E-mail: [email protected]
URL:
Received November 18, 2008, in final form February 09, 2009; Published online February 14, 2009
doi:10.3842/SIGMA.2009.018
Abstract.
In this paper, the concept of generalized spectral function is introduced for finite-order tridiagonal symmetric matrices (Jacobi matrices) with complex entries. The structure of the generalized spectral function is described in terms of spectral data consisting of the eigenvalues and normalizing numbers of the matrix. The inverse problems from the generalized spectral function as well as from the spectral data are investigated. In this way, a procedure for construction of complex tridiagonal matrices having real eigenvalues is obtained.
Key words:
Jacobi matrix; difference equation; generalized spectral function; spectral data
1 Introduction

Consider the N × N tridiagonal symmetric matrix (Jacobi matrix) with complex entries
\[
J=\begin{bmatrix}
b_0 & a_0 & 0 & \cdots & 0 & 0\\
a_0 & b_1 & a_1 & \cdots & 0 & 0\\
0 & a_1 & b_2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & b_{N-2} & a_{N-2}\\
0 & 0 & 0 & \cdots & a_{N-2} & b_{N-1}
\end{bmatrix}, \tag{1.1}
\]
where for each n the entries a_n and b_n are arbitrary complex numbers such that a_n is different from zero:
\[
a_n, b_n \in \mathbb{C}, \qquad a_n \neq 0. \tag{1.2}
\]
In the real case
\[
a_n, b_n \in \mathbb{R}, \qquad a_n \neq 0, \tag{1.3}
\]
the matrix J is Hermitian (self-adjoint), and in this case many versions of the inverse spectral problem for J have been investigated in the literature; see [1, 2, 3] and the references given therein. In the complex case (1.2) the matrix J is in general non-Hermitian (non-self-adjoint), and our aim in this paper is to introduce appropriate spectral data for such a matrix and then to consider the inverse spectral problem consisting in determining the matrix from its spectral data.

As is known [4, 5, 6, 7, 8, 9], for non-self-adjoint differential and difference operators a natural spectral characteristic is the so-called generalized spectral function, which is a continuous linear functional on an appropriate linear topological space. In general very little is known about the structure of generalized spectral functions.

Given the matrix J of the form (1.1) with entries satisfying (1.2), consider the eigenvalue problem Jy = λy for a column vector y = {y_n}_{n=0}^{N-1}, which is equivalent to the second order linear difference equation
\[
a_{n-1} y_{n-1} + b_n y_n + a_n y_{n+1} = \lambda y_n, \qquad n \in \{0, 1, \ldots, N-1\}, \quad a_{-1} = a_{N-1} = 1, \tag{1.4}
\]
for {y_n}_{n=-1}^{N}, with the boundary conditions
\[
y_{-1} = y_N = 0. \tag{1.5}
\]
The problem (1.4), (1.5) is a discrete analogue of the continuous eigenvalue problem
\[
\frac{d}{dx}\left[ p(x) \frac{d}{dx} y(x) \right] + q(x) y(x) = \lambda y(x), \qquad x \in [a, b], \tag{1.6}
\]
\[
y(a) = y(b) = 0, \tag{1.7}
\]
where [a, b] is a finite interval. To the continuous problem
\[
\frac{d}{dx}\left[ p(x) \frac{d}{dx} y(x) \right] + q(x) y(x) = \lambda y(x), \qquad x \in [0, \infty), \qquad y(0) = 0,
\]
on the semi-infinite interval [0, ∞) there corresponds a Jacobi matrix of the type (1.1), but with J infinite both downwards and to the right.
To the equation in (1.6) considered on the whole real axis (−∞, ∞) there corresponds a Jacobi matrix which is infinite in all four directions: upwards, downwards, to the left, and to the right.

The case of infinite Jacobi matrices was considered earlier in the papers [6, 7, 8, 9], in which the generalized spectral function was introduced and the inverse problem from the generalized spectral function was studied. However, in the case of infinite Jacobi matrices the structure of the generalized spectral function does not admit any explicit description because of its complexity. Our main achievement in the present paper is that we describe explicitly the structure of the generalized spectral function for the finite order Jacobi matrices (1.1), (1.2).

The paper is organized as follows. In Section 2, the generalized spectral function is introduced for Jacobi matrices of the form (1.1) with entries satisfying (1.2). In Section 3, the inverse problem from the generalized spectral function is investigated. It turns out that the matrix (1.1) is not uniquely restored from the generalized spectral function: there are precisely 2^{N-1} distinct Jacobi matrices possessing the same generalized spectral function. The inverse problem is solved uniquely from the data consisting of the generalized spectral function and a sequence {σ_1, σ_2, ..., σ_{N-1}} of signs + and −. Section 4 is devoted to some examples. In Section 5, we describe the structure of the generalized spectral function and in this way define the concept of spectral data for matrices (1.1). In Section 6, the inverse problem from the spectral data is considered. In Section 7, we characterize generalized spectral functions of real Jacobi matrices among the generalized spectral functions of complex Jacobi matrices. In Section 8, we describe the structure of generalized spectral functions and spectral data of real Jacobi matrices.
Finally, in Section 9, we consider the inverse problem for real Jacobi matrices from the spectral data.

Note that considerations of complex (non-Hermitian) Hamiltonians in quantum mechanics and complex discrete models have recently received a lot of attention [10, 11, 12, 13]. For some recent papers dealing with the spectral theory of difference (and differential) operators with complex coefficients, see [14, 15, 16, 17, 18]. For further reading on the spectral theory of the Jacobi difference equation (three-term recurrence relation), the books [19, 20, 21, 22, 23] are excellent sources.
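The equivalence between the matrix eigenvalue problem Jy = λy and the boundary value problem (1.4), (1.5) is easy to check numerically. The following sketch is my own illustration (the matrix entries are arbitrary choices, not taken from the paper): it builds a complex Jacobi matrix of the form (1.1) and verifies that each eigenvector, padded with y_{-1} = y_N = 0, satisfies the three-term difference equation (1.4).

```python
import numpy as np

# Build the tridiagonal matrix J of (1.1) from diagonals b = (b_0, ..., b_{N-1})
# and off-diagonals a = (a_0, ..., a_{N-2}), all complex.
def jacobi_matrix(a, b):
    """Symmetric (not Hermitian) tridiagonal matrix with complex entries."""
    J = np.diag(np.asarray(b, dtype=complex))
    J += np.diag(a, 1) + np.diag(a, -1)   # the same a_n above and below the diagonal
    return J

N = 4
b = np.array([0.5 + 1j, -1.0, 2.0 - 0.3j, 0.7])
a = np.array([1.0 + 0.5j, 0.8, 1.3 - 1j])          # all nonzero, as (1.2) requires

J = jacobi_matrix(a, b)
lam, V = np.linalg.eig(J)
y = np.concatenate(([0], V[:, 0], [0]))            # pad with y_{-1} = y_N = 0, cf. (1.5)
aa = np.concatenate(([1], a, [1]))                 # convention a_{-1} = a_{N-1} = 1

# Residual of (1.4): a_{n-1} y_{n-1} + b_n y_n + a_n y_{n+1} - lambda y_n
res = [aa[n] * y[n] + b[n] * y[n + 1] + aa[n + 1] * y[n + 2] - lam[0] * y[n + 1]
       for n in range(N)]
print(np.max(np.abs(res)))                         # ~ 0 (machine precision)
```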
2 The generalized spectral function

Let a matrix J of the form (1.1) with entries satisfying (1.2) be given. Consider the eigenvalue problem Jy = λy for a column vector y = {y_n}_{n=0}^{N-1}, which is equivalent to the second order linear difference equation
\[
a_{n-1} y_{n-1} + b_n y_n + a_n y_{n+1} = \lambda y_n, \qquad n \in \{0, 1, \ldots, N-1\}, \quad a_{-1} = a_{N-1} = 1, \tag{2.1}
\]
for {y_n}_{n=-1}^{N}, with the boundary conditions
\[
y_{-1} = y_N = 0. \tag{2.2}
\]
Denote by {P_n(λ)}_{n=-1}^{N} the solution of equation (2.1) satisfying the initial conditions
\[
y_{-1} = 0, \qquad y_0 = 1. \tag{2.3}
\]
Using (2.3), we can find from equation (2.1) recurrently the quantities P_n(λ) for n = 1, 2, \ldots, N; P_n(λ) is a polynomial in λ of degree n. Thus {P_n(λ)}_{n=0}^{N} is the unique solution of the recursion relations
\[
b_0 P_0(\lambda) + a_0 P_1(\lambda) = \lambda P_0(\lambda),
\]
\[
a_{n-1} P_{n-1}(\lambda) + b_n P_n(\lambda) + a_n P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}, \quad a_{N-1} = 1, \tag{2.4}
\]
subject to the initial condition
\[
P_0(\lambda) = 1. \tag{2.5}
\]

Lemma 1.
The equality
\[
\det(J - \lambda I) = (-1)^N a_0 a_1 \cdots a_{N-2} P_N(\lambda) \tag{2.6}
\]
holds, so that the eigenvalues of the matrix J coincide with the zeros of the polynomial P_N(λ).

Proof.
To prove (2.6), let us set, for each n ∈ {1, 2, \ldots, N},
\[
J_n=\begin{bmatrix}
b_0 & a_0 & 0 & \cdots & 0 & 0\\
a_0 & b_1 & a_1 & \cdots & 0 & 0\\
0 & a_1 & b_2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & b_{n-2} & a_{n-2}\\
0 & 0 & 0 & \cdots & a_{n-2} & b_{n-1}
\end{bmatrix}
\]
and set ∆_n(λ) = det(J_n − λI). Note that by I we denote an identity matrix of the needed size. By expanding the determinant det(J_{n+1} − λI) along the elements of its last row, it is easy to show that
\[
\Delta_{n+1}(\lambda) = (b_n - \lambda)\Delta_n(\lambda) - a_{n-1}^2 \Delta_{n-1}(\lambda), \qquad n = 1, 2, \ldots; \qquad \Delta_0(\lambda) = 1.
\]
Dividing this equation by the product a_0 a_1 \cdots a_n, we find that the sequence
\[
d_{-1} = 0, \qquad d_0 = 1, \qquad d_n = (-1)^n (a_0 a_1 \cdots a_{n-1})^{-1} \Delta_n(\lambda), \qquad n = 1, 2, \ldots,
\]
satisfies (2.1), (2.3). Then d_n = P_n(λ), n = 0, 1, \ldots, and hence we have (2.6), because J_N = J and a_{N-1} = 1. ∎

For any nonnegative integer m, denote by C_m[λ] the linear space of all polynomials in λ of degree ≤ m with complex coefficients. A mapping Ω : C_m[λ] → ℂ is called a linear functional if for any G(λ), H(λ) ∈ C_m[λ] and α ∈ ℂ we have
\[
\langle \Omega, G(\lambda) + H(\lambda) \rangle = \langle \Omega, G(\lambda) \rangle + \langle \Omega, H(\lambda) \rangle
\qquad\text{and}\qquad
\langle \Omega, \alpha G(\lambda) \rangle = \alpha \langle \Omega, G(\lambda) \rangle,
\]
where ⟨Ω, G(λ)⟩ denotes the value of Ω on the element (polynomial) G(λ).

Theorem 1.
There exists a unique linear functional
Ω : C_{2N}[λ] → ℂ such that the relations
\[
\langle \Omega, P_m(\lambda) P_n(\lambda) \rangle = \delta_{mn}, \qquad m, n \in \{0, 1, \ldots, N-1\}, \tag{2.7}
\]
\[
\langle \Omega, P_m(\lambda) P_N(\lambda) \rangle = 0, \qquad m \in \{0, 1, \ldots, N\}, \tag{2.8}
\]
hold, where δ_{mn} is the Kronecker delta.

Proof.
First we prove the uniqueness of Ω. Assume that there exists a linear functional Ω possessing the properties (2.7) and (2.8). The 2N + 1 polynomials
\[
P_n(\lambda)\ (n = 0, 1, \ldots, N-1), \qquad P_m(\lambda) P_N(\lambda)\ (m = 0, 1, \ldots, N) \tag{2.9}
\]
form a basis for the linear space C_{2N}[λ], because they are linearly independent (their degrees are distinct) and their number is 2N + 1 = dim C_{2N}[λ]. On the other hand, by (2.7) and (2.8) the functional Ω takes completely definite values on the polynomials (2.9):
\[
\langle \Omega, P_n(\lambda) \rangle = \delta_{0n}, \qquad n \in \{0, 1, \ldots, N-1\}, \tag{2.10}
\]
\[
\langle \Omega, P_m(\lambda) P_N(\lambda) \rangle = 0, \qquad m \in \{0, 1, \ldots, N\}. \tag{2.11}
\]
Therefore Ω is determined uniquely on C_{2N}[λ] by linearity.

To prove the existence of Ω, we define it on the basis polynomials (2.9) by (2.10), (2.11) and then extend Ω to the whole space C_{2N}[λ] by linearity. Let us show that the functional Ω defined in this way satisfies (2.7), (2.8). Denote
\[
\langle \Omega, P_m(\lambda) P_n(\lambda) \rangle = A_{mn}, \qquad m, n \in \{0, 1, \ldots, N\}. \tag{2.12}
\]
Obviously A_{mn} = A_{nm} for m, n ∈ {0, 1, \ldots, N}. From (2.10) and (2.11) we have
\[
A_{m0} = A_{0m} = \delta_{0m}, \qquad m \in \{0, 1, \ldots, N\}, \tag{2.13}
\]
\[
A_{mN} = A_{Nm} = 0, \qquad m \in \{0, 1, \ldots, N\}. \tag{2.14}
\]
Since {P_n(λ)}_{n=0}^{N} is the solution of equations (2.4), we find from the first equation, recalling (2.5),
\[
\lambda = b_0 + a_0 P_1(\lambda).
\]
Substituting this in the remaining equations of (2.4), we obtain
\[
a_{n-1} P_{n-1}(\lambda) + b_n P_n(\lambda) + a_n P_{n+1}(\lambda) = b_0 P_n(\lambda) + a_0 P_1(\lambda) P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}, \quad a_{N-1} = 1.
\]
Applying the functional Ω to both sides of the last equation and recalling (2.13) and (2.14), we get
\[
A_{1n} = A_{n1} = \delta_{1n}, \qquad n \in \{0, 1, \ldots, N\}. \tag{2.15}
\]
Further, since
\[
a_{m-1} P_{m-1}(\lambda) + b_m P_m(\lambda) + a_m P_{m+1}(\lambda) = \lambda P_m(\lambda), \qquad m \in \{1, 2, \ldots, N-1\},
\]
\[
a_{n-1} P_{n-1}(\lambda) + b_n P_n(\lambda) + a_n P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\},
\]
we obtain, multiplying the first of these identities by P_n(λ) and the second by P_m(λ), then subtracting the second result from the first:
\[
a_{m-1} P_{m-1}(\lambda) P_n(\lambda) + b_m P_m(\lambda) P_n(\lambda) + a_m P_{m+1}(\lambda) P_n(\lambda)
= a_{n-1} P_{n-1}(\lambda) P_m(\lambda) + b_n P_n(\lambda) P_m(\lambda) + a_n P_{n+1}(\lambda) P_m(\lambda),
\]
for m, n ∈ {1, 2, \ldots, N-1}. Applying the functional Ω to both sides of the last equation and recalling (2.13), (2.14), and (2.15), we obtain for A_{mn} the boundary value problem
\[
a_{m-1} A_{m-1,n} + b_m A_{mn} + a_m A_{m+1,n} = a_{n-1} A_{n-1,m} + b_n A_{nm} + a_n A_{n+1,m}, \qquad m, n \in \{1, 2, \ldots, N-1\}, \tag{2.16}
\]
\[
A_{0n} = A_{n0} = \delta_{0n}, \qquad A_{1n} = A_{n1} = \delta_{1n}, \qquad A_{Nn} = A_{nN} = 0, \qquad n \in \{0, 1, \ldots, N\}. \tag{2.17}
\]
Using (2.17), we can find from (2.16) recurrently all the A_{mn}, and the unique solution of problem (2.16), (2.17) is A_{mn} = δ_{mn} for m, n ∈ {0, 1, \ldots, N-1} and A_{mN} = 0 for m ∈ {0, 1, \ldots, N}. ∎

Definition 1.
The linear functional Ω defined in Theorem 1 is called the generalized spectral function of the matrix J given in (1.1).

3 Inverse problem from the generalized spectral function

The inverse problem is stated as follows:

1. To see whether it is possible to reconstruct the matrix J from its generalized spectral function Ω and, if it is possible, to describe the reconstruction procedure.

2. To find necessary and sufficient conditions for a given linear functional Ω on C_{2N}[λ] to be the generalized spectral function of some matrix J of the form (1.1) with entries belonging to the class (1.2).

Since P_n(λ) is a polynomial of degree n, we can write the representation
\[
P_n(\lambda) = \alpha_n \left( \lambda^n + \sum_{k=0}^{n-1} \chi_{nk} \lambda^k \right), \qquad n \in \{0, 1, \ldots, N\}. \tag{3.1}
\]
Substituting (3.1) in (2.4), we find that the coefficients a_n, b_n of system (2.4) and the quantities α_n, χ_{nk} of decomposition (3.1) are interconnected by the equations
\[
a_n = \frac{\alpha_n}{\alpha_{n+1}} \quad (0 \le n \le N-1), \qquad \alpha_0 = 1, \quad \alpha_N = \alpha_{N-1}, \tag{3.2}
\]
\[
b_n = \chi_{n,n-1} - \chi_{n+1,n} \quad (0 \le n \le N-1), \qquad \chi_{0,-1} = 0. \tag{3.3}
\]
It is easily seen that relations (2.7), (2.8) are equivalent to the collection of the relations
\[
\langle \Omega, \lambda^m P_n(\lambda) \rangle = \frac{\delta_{mn}}{\alpha_n}, \qquad m = 0, 1, \ldots, n, \quad n \in \{0, 1, \ldots, N-1\}, \tag{3.4}
\]
\[
\langle \Omega, \lambda^m P_N(\lambda) \rangle = 0, \qquad m = 0, 1, \ldots, N. \tag{3.5}
\]
In fact, using (3.1), we have
\[
\langle \Omega, P_m(\lambda) P_n(\lambda) \rangle = \alpha_m \langle \Omega, \lambda^m P_n(\lambda) \rangle + \alpha_m \sum_{j=0}^{m-1} \chi_{mj} \langle \Omega, \lambda^j P_n(\lambda) \rangle. \tag{3.6}
\]
Next, since we have the expansion
\[
\lambda^j = \sum_{i=0}^{j} c_i^{(j)} P_i(\lambda), \qquad j \in \{0, 1, \ldots, N\},
\]
it follows from (3.6) that (3.4), (3.5) hold if we have (2.7), (2.8). The converse is also true: if (3.4), (3.5) hold, then (2.7), (2.8) can be obtained from (3.6) in conjunction with (3.1).

Let us set
\[
s_l = \langle \Omega, \lambda^l \rangle, \qquad l \in \{0, 1, \ldots, 2N\}; \tag{3.7}
\]
these are the "power moments" of the functional Ω. Replacing P_n(λ) and P_N(λ) in (3.4) and (3.5) by their expansions (3.1), we obtain
\[
s_{n+m} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+m} = 0, \qquad m = 0, 1, \ldots, n-1, \quad n \in \{1, 2, \ldots, N\}, \tag{3.8}
\]
\[
s_{2N} + \sum_{k=0}^{N-1} \chi_{Nk} s_{k+N} = 0, \tag{3.9}
\]
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} = \frac{1}{\alpha_n^2}, \qquad n \in \{0, 1, \ldots, N-1\}. \tag{3.10}
\]
Notice that (3.8) is the fundamental equation of the inverse problem, in the sense that it enables the problem to be solved formally. Indeed, if we are given the linear functional Ω on C_{2N}[λ], we can find the quantities s_l from (3.7) and then consider the inhomogeneous system of linear algebraic equations (3.8) with unknowns χ_{n0}, χ_{n1}, \ldots, χ_{n,n-1}, for every fixed n ∈ {1, 2, \ldots, N}. If this system is uniquely solvable, and
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} \ne 0 \qquad \text{for} \quad n \in \{1, 2, \ldots, N-1\},
\]
then the entries a_n, b_n of the required matrix J can be found from (3.2) and (3.3), respectively, α_n being found from (3.10). The next theorem gives the conditions under which the indicated procedure for solving the inverse problem is rigorously justified.

Theorem 2.
In order for a given linear functional Ω, defined on C_{2N}[λ], to be the generalized spectral function of some Jacobi matrix J of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following conditions be satisfied:

(i) ⟨Ω, 1⟩ = 1 (normalization condition);

(ii) if, for some polynomial G(λ) with deg G(λ) ≤ N − 1, we have ⟨Ω, G(λ)H(λ)⟩ = 0 for all polynomials H(λ) with deg H(λ) = deg G(λ), then G(λ) ≡ 0;

(iii) there exists a polynomial T(λ) of degree N such that ⟨Ω, G(λ)T(λ)⟩ = 0 for all polynomials G(λ) with deg G(λ) ≤ N.

Proof.
Necessity. We obtain (i) from (2.7) with n = m = 0, recalling (2.5). To prove (ii), we write the expansion
\[
G(\lambda) = \sum_{j=0}^{m} c_j^{(m)} P_j(\lambda), \qquad m = \deg G(\lambda),
\]
and take as H(λ) the polynomial
\[
H(\lambda) = \sum_{j=0}^{m} \overline{c_j^{(m)}}\, P_j(\lambda),
\]
where the bar over a complex number denotes complex conjugation. Then we find from ⟨Ω, G(λ)H(λ)⟩ = 0, using (2.7), that
\[
\sum_{j=0}^{m} \left| c_j^{(m)} \right|^2 = 0,
\]
hence c_j^{(m)} = 0, j = 0, 1, \ldots, m, i.e., G(λ) ≡ 0. The statement (iii) of the theorem follows from (2.8) if we take T(λ) = P_N(λ).

Sufficiency.
The proof will be given in several stages.

(a) Let the linear functional Ω, defined on C_{2N}[λ] and satisfying the conditions of the theorem, be given. Consider equation (3.8) with the unknowns χ_{nk}, k = 0, 1, \ldots, n-1, in which the s_l are found with the aid of the functional Ω from expression (3.7). Let us show that this equation has a unique solution for every fixed n ∈ {1, 2, \ldots, N}. For this, it is sufficient to show that the corresponding homogeneous equation
\[
\sum_{k=0}^{n-1} g_k s_{k+m} = 0, \qquad m = 0, 1, \ldots, n-1, \tag{3.11}
\]
has only the zero solution for every n. Assume the contrary: for some n ∈ {1, 2, \ldots, N} let equation (3.11) have a nonzero solution (g_k)_{k=0}^{n-1}. Further, let (h_m)_{m=0}^{n-1} be an arbitrary vector. We multiply both sides of (3.11) by h_m and sum over m between 0 and n − 1; we get
\[
\sum_{m=0}^{n-1} \sum_{k=0}^{n-1} h_m g_k s_{k+m} = 0.
\]
Substituting expression (3.7) for s_{k+m} in this equation and denoting
\[
G(\lambda) = \sum_{k=0}^{n-1} g_k \lambda^k, \qquad H(\lambda) = \sum_{m=0}^{n-1} h_m \lambda^m,
\]
we obtain
\[
\langle \Omega, G(\lambda) H(\lambda) \rangle = 0. \tag{3.12}
\]
Since (h_m)_{m=0}^{n-1} is an arbitrary vector, we find from (3.12), in the light of condition (ii) of the theorem, that G(λ) ≡ 0, and hence g_0 = g_1 = \cdots = g_{n-1} = 0, in spite of our assumption. Thus, for any n ∈ {1, 2, \ldots, N}, equation (3.8) has a unique solution.

(b) Let us show that
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} \ne 0, \qquad n \in \{0, 1, \ldots, N-1\}, \tag{3.13}
\]
where (χ_{nk})_{k=0}^{n-1} is the solution of the fundamental equation (3.8). (For n = 0, the left-hand side of (3.13) is s_0 = ⟨Ω, 1⟩ = 1.) Assume the contrary, i.e., for some n ∈ {1, 2, \ldots, N-1},
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} = 0.
\]
Joining this equation to the fundamental equation (3.8), we obtain
\[
s_{n+m} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+m} = 0, \qquad m = 0, 1, \ldots, n. \tag{3.14}
\]
Let (h_m)_{m=0}^{n} be an arbitrary vector. Multiplying both sides of (3.14) by h_m and summing over m from 0 to n, we obtain
\[
\sum_{m=0}^{n} h_m s_{n+m} + \sum_{m=0}^{n} \sum_{k=0}^{n-1} h_m \chi_{nk} s_{k+m} = 0.
\]
Replacing s_l in this by its expression (3.7), we obtain
\[
\langle \Omega, [\lambda^n + \chi(\lambda)] H(\lambda) \rangle = 0, \qquad \text{where} \quad \chi(\lambda) = \sum_{k=0}^{n-1} \chi_{nk} \lambda^k, \quad H(\lambda) = \sum_{m=0}^{n} h_m \lambda^m.
\]
Since (h_m)_{m=0}^{n} is an arbitrary vector, we obtain from the last equation, in the light of condition (ii) of the theorem, λ^n + χ(λ) ≡ 0, which is impossible. Our assumption is therefore false.

(c) Given the solution (χ_{nk})_{k=0}^{n-1} of the fundamental equation (3.8), we find α_n from (3.10) for n ∈ {0, 1, \ldots, N-1}, with α_0 = 1, and set α_N = α_{N-1}. Then we find the polynomials P_n(λ) from (3.1). Let us show that the relations (2.7), (2.8) hold. It is enough to show that (3.4), (3.5) hold, because (3.4), (3.5) together are equivalent to relations (2.7), (2.8). From (3.8) and (3.10) we have (3.4) and (3.5), the latter except for m = N.
So it remains to show that ⟨Ω, λ^N P_N(λ)⟩ = 0. For this purpose we use condition (iii) of the theorem. By this condition we have
\[
T(\lambda) = \sum_{k=0}^{N} t_k \lambda^k, \qquad t_N \ne 0,
\]
and
\[
0 = \langle \Omega, P_N(\lambda) T(\lambda) \rangle = \sum_{k=0}^{N} t_k \langle \Omega, \lambda^k P_N(\lambda) \rangle = t_N \langle \Omega, \lambda^N P_N(\lambda) \rangle,
\]
where we have used (3.5) except for m = N. Hence ⟨Ω, λ^N P_N(λ)⟩ = 0.

(d) Let us show that the polynomials P_n(λ), n = 0, 1, \ldots, N, constructed in accordance with (3.1) with the aid of the numbers χ_{nk} and α_n obtained above, satisfy the equations
\[
b_0 P_0(\lambda) + a_0 P_1(\lambda) = \lambda P_0(\lambda),
\]
\[
a_{n-1} P_{n-1}(\lambda) + b_n P_n(\lambda) + a_n P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}, \quad a_{N-1} = 1, \tag{3.15}
\]
where the coefficients a_n, b_n are given by the expressions
\[
a_n = \frac{\alpha_n}{\alpha_{n+1}} \quad (0 \le n \le N-1), \qquad \alpha_0 = 1, \quad \alpha_N = \alpha_{N-1}, \tag{3.16}
\]
\[
b_n = \chi_{n,n-1} - \chi_{n+1,n} \quad (0 \le n \le N-1), \qquad \chi_{0,-1} = 0. \tag{3.17}
\]
We first verify the first equation of (3.15). From (3.1) we have
\[
P_0(\lambda) = 1, \qquad P_1(\lambda) = \alpha_1 (\lambda + \chi_{10}).
\]
Hence the first equation of (3.15) has the form
\[
b_0 + a_0 \alpha_1 (\lambda + \chi_{10}) = \lambda,
\]
and this is true since, by (3.16) and (3.17), a_0 α_1 = 1 and b_0 = −χ_{10}.

Let us prove the remaining equations of (3.15). Since λP_n(λ) is a polynomial of degree n + 1, while P_k(λ), k = 0, 1, \ldots, n+1, are linearly independent, we have
\[
\lambda P_n(\lambda) = \sum_{k=0}^{n+1} c_k^{(n)} P_k(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}, \quad c_N^{(N-1)} = 1, \tag{3.18}
\]
where c_k^{(n)}, k = 0, 1, \ldots, n+1, are constants. By (2.7), (2.8), which we proved in (c), we have from (3.18)
\[
c_k^{(n)} = \langle \Omega, \lambda P_n(\lambda) P_k(\lambda) \rangle, \qquad k = 0, 1, \ldots, n+1 \quad (n \in \{1, 2, \ldots, N-1\}). \tag{3.19}
\]
The polynomials λP_k(λ), k = 0, 1, \ldots, n-2, have degrees ≤ n − 1, and hence we find from (3.19), in the light of (2.7), (2.8), that
\[
c_k^{(n)} = 0, \qquad k = 0, 1, \ldots, n-2 \quad (n \in \{1, 2, \ldots, N-1\}).
\]
Consequently, expansion (3.18) takes the form
\[
c_{n-1}^{(n)} P_{n-1}(\lambda) + c_n^{(n)} P_n(\lambda) + c_{n+1}^{(n)} P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}. \tag{3.20}
\]
It follows from (3.19) that c_{n-1}^{(n)} = c_n^{(n-1)}. Hence, denoting
\[
\widetilde{a}_n = c_{n+1}^{(n)}, \qquad \widetilde{b}_n = c_n^{(n)}, \tag{3.21}
\]
we have from (3.20)
\[
\widetilde{a}_{n-1} P_{n-1}(\lambda) + \widetilde{b}_n P_n(\lambda) + \widetilde{a}_n P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad n \in \{1, 2, \ldots, N-1\}. \tag{3.22}
\]
Replacing P_n(λ) in (3.22) by its expression (3.1) and equating coefficients of like powers of λ, while recalling (3.16), (3.17), we obtain
\[
\widetilde{a}_n = \frac{\alpha_n}{\alpha_{n+1}} = a_n \quad (0 \le n \le N-1), \qquad \widetilde{b}_n = \chi_{n,n-1} - \chi_{n+1,n} = b_n \quad (0 \le n \le N-1).
\]
Theorem 2 is completely proved. ∎
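In stage (c) of the proof, α_n is extracted from (3.10) by a square root, so its sign is free; flipping the signs of off-diagonal entries of J therefore should not change the generalized spectral function. The following sanity check is my own (the entries are arbitrary choices): the sign-flipped matrix equals SJS with S = diag(±1), S_{00} = 1 and S² = I, so every quantity of the form (J^l)_{00} is preserved. One can check that for a Jacobi matrix these entries reproduce the moments s_l of (3.7); the similarity argument in the comments is independent of that identity.

```python
import numpy as np

# Flipping the signs of some a_n gives J' = S J S with S = diag(+-1), S[0,0] = 1.
# Since S is an involution fixing e_0, we get (J'^l)_{00} = (J^l)_{00} for all l,
# consistent with all sign choices sharing one generalized spectral function.

rng = np.random.default_rng(1)
N = 4
b = rng.normal(size=N) + 1j * rng.normal(size=N)
a = rng.normal(size=N - 1) + 1j * rng.normal(size=N - 1)

def moments(a, b, lmax):
    """The quantities (J^l)_{00}, l = 0, ..., lmax, for the Jacobi matrix (1.1)."""
    J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
    e0 = np.zeros(len(b)); e0[0] = 1.0
    v, out = e0.astype(complex), []
    for _ in range(lmax + 1):
        out.append(v[0])                 # current value of (J^l)_{00}
        v = J @ v
    return np.array(out)

signs = np.array([1, -1, -1])            # one of the 2^{N-1} sign sequences
m1 = moments(a, b, 2 * N)
m2 = moments(signs * a, b, 2 * N)
print(np.max(np.abs(m1 - m2)))           # 0: the moments coincide
```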
Remark 1.
It follows from the above solution of the inverse problem that the matrix (1.1) is not uniquely restored from the generalized spectral function. This is linked with the fact that the α_n are determined from (3.10) only up to a sign. To ensure that the inverse problem is uniquely solvable, we have to specify additionally a sequence of signs + and −. Namely, let {σ_1, σ_2, \ldots, σ_{N-1}} be a given finite sequence, where for each n ∈ {1, 2, \ldots, N-1} the σ_n is + or −. We have 2^{N-1} such distinct sequences. Now, to determine α_n uniquely from (3.10) for n ∈ {1, 2, \ldots, N-1} (remember that we always take α_0 = 1), we choose the sign σ_n when extracting the square root. In this way we get precisely 2^{N-1} distinct Jacobi matrices possessing the same generalized spectral function. For example, the two matrices
\[
\begin{bmatrix} b_0 & a_0 \\ a_0 & b_1 \end{bmatrix}, \qquad \begin{bmatrix} b_0 & -a_0 \\ -a_0 & b_1 \end{bmatrix},
\]
as well as the four matrices
\[
\begin{bmatrix} b_0 & \pm a_0 & 0 \\ \pm a_0 & b_1 & \pm a_1 \\ 0 & \pm a_1 & b_2 \end{bmatrix}
\]
obtained by choosing the signs of a_0 and a_1 independently, have the same generalized spectral function. The inverse problem is solved uniquely from the data consisting of Ω and a sequence {σ_1, σ_2, \ldots, σ_{N-1}} of signs + and −.

Using the numbers
\[
s_l = \langle \Omega, \lambda^l \rangle, \qquad l = 0, 1, \ldots, 2N, \tag{3.23}
\]
let us introduce the determinants
\[
D_n = \begin{vmatrix}
s_0 & s_1 & \cdots & s_n \\
s_1 & s_2 & \cdots & s_{n+1} \\
\vdots & \vdots & \ddots & \vdots \\
s_n & s_{n+1} & \cdots & s_{2n}
\end{vmatrix}, \qquad n = 0, 1, \ldots, N. \tag{3.24}
\]
It turns out that Theorem 2 is equivalent to the following theorem.

Theorem 3.
In order for a given linear functional Ω, defined on C_{2N}[λ], to be the generalized spectral function of some Jacobi matrix J of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that
\[
D_0 = 1, \qquad D_n \ne 0 \quad (n = 1, 2, \ldots, N-1), \qquad D_N = 0, \tag{3.25}
\]
where D_n is defined by (3.24) and (3.23).
Proof.
Necessity. The condition D_0 = 1 follows from 1 = ⟨Ω, 1⟩ = s_0 = D_0. By Theorem 2, if for a polynomial
\[
G(\lambda) = \sum_{k=0}^{n} g_k \lambda^k \tag{3.26}
\]
with n ≤ N − 1 we have
\[
\langle \Omega, G(\lambda) H(\lambda) \rangle = 0 \tag{3.27}
\]
for all polynomials
\[
H(\lambda) = \sum_{m=0}^{n} h_m \lambda^m, \tag{3.28}
\]
then G(λ) ≡ 0, that is, g_0 = g_1 = \cdots = g_n = 0. If we substitute (3.26) and (3.28) in (3.27), then we get
\[
\sum_{m=0}^{n} h_m \left( \sum_{k=0}^{n} g_k s_{k+m} \right) = 0.
\]
Since h_0, h_1, \ldots, h_n are arbitrary, the last equation gives
\[
\sum_{k=0}^{n} g_k s_{k+m} = 0, \qquad m = 0, 1, \ldots, n. \tag{3.29}
\]
This is a linear homogeneous system of algebraic equations with respect to g_0, g_1, \ldots, g_n, and the determinant of this system coincides with the determinant D_n. Since this system has only the trivial solution g_0 = g_1 = \cdots = g_n = 0, we have D_n ≠ 0, where n ≤ N − 1.

To prove that D_N = 0, we write equation (3.8) for n = N to get
\[
s_{N+m} + \sum_{k=0}^{N-1} \chi_{Nk} s_{k+m} = 0, \qquad m = 0, 1, \ldots, N-1.
\]
This equation has the unique solution χ_{N0}, χ_{N1}, \ldots, χ_{N,N-1}. Next, these equalities together with (3.9) can be written in the form
\[
\begin{pmatrix} s_N \\ s_{N+1} \\ \vdots \\ s_{2N-1} \\ s_{2N} \end{pmatrix}
+ \chi_{N0} \begin{pmatrix} s_0 \\ s_1 \\ \vdots \\ s_{N-1} \\ s_N \end{pmatrix}
+ \chi_{N1} \begin{pmatrix} s_1 \\ s_2 \\ \vdots \\ s_N \\ s_{N+1} \end{pmatrix}
+ \cdots
+ \chi_{N,N-1} \begin{pmatrix} s_{N-1} \\ s_N \\ \vdots \\ s_{2N-2} \\ s_{2N-1} \end{pmatrix} = 0.
\]
This means that the last column of the determinant D_N is a linear combination of the remaining columns. Therefore D_N = 0.

Sufficiency. Given the linear functional Ω : C_{2N}[λ] → ℂ satisfying the conditions (3.25), it is enough to show that the conditions of Theorem 2 are satisfied. We have ⟨Ω, 1⟩ = s_0 = D_0 = 1. Next, let (3.27) hold for a polynomial G(λ) of the form (3.26) and all polynomials H(λ) of the form (3.28). Then (3.29) holds. Since the determinant of this system is D_n and D_n ≠ 0 for n ≤ N − 1, we get g_0 = g_1 = \cdots = g_n = 0, that is, G(λ) ≡ 0. Finally, we have to show that there is a polynomial T(λ) of degree N such that
\[
\langle \Omega, G(\lambda) T(\lambda) \rangle = 0 \tag{3.30}
\]
for all polynomials G(λ) with deg G(λ) ≤ N.
For this purpose we consider the homogeneous system
\[
\sum_{k=0}^{N} t_k s_{k+m} = 0, \qquad m = 0, 1, \ldots, N, \tag{3.31}
\]
with the unknowns t_0, t_1, \ldots, t_N. The determinant of this system is D_N. Since by condition D_N = 0, this system has a nontrivial solution t_0, t_1, \ldots, t_N. We have t_N ≠ 0. Indeed, if t_N = 0, then we get from (3.31)
\[
\sum_{k=0}^{N-1} t_k s_{k+m} = 0, \qquad m = 0, 1, \ldots, N-1. \tag{3.32}
\]
The determinant of this system is D_{N-1}, and by condition D_{N-1} ≠ 0. Then t_0 = t_1 = \cdots = t_{N-1} = 0, and we get that the solution t_0, t_1, \ldots, t_N of system (3.31) is trivial, which is a contradiction. Taking the nontrivial solution t_0, t_1, \ldots, t_N of system (3.31), we construct the polynomial
\[
T(\lambda) = \sum_{k=0}^{N} t_k \lambda^k
\]
of degree N. Then substituting s_{k+m} = ⟨Ω, λ^{k+m}⟩ in (3.31) gives
\[
\langle \Omega, \lambda^m T(\lambda) \rangle = 0, \qquad m = 0, 1, \ldots, N.
\]
Hence (3.30) holds for all polynomials G(λ) with deg G(λ) ≤ N. ∎

Note that the determinant of system (3.8) coincides with D_{n-1}. Denote by D_m^{(k)} (k = 0, 1, \ldots, m) the determinant obtained from the determinant D_m by replacing its (k+1)-th column by the column with the components s_{m+1}, s_{m+2}, \ldots, s_{2m+1}. Then, solving system (3.8) by means of Cramer's rule, we find
\[
\chi_{nk} = -\frac{D_{n-1}^{(k)}}{D_{n-1}}, \qquad k = 0, 1, \ldots, n-1. \tag{3.33}
\]
Next, substituting the expression (3.33) for χ_{nk} into the left-hand side of (3.10), we get
\[
\alpha_n^{-2} = \frac{D_n}{D_{n-1}}. \tag{3.34}
\]
Now if we set D_m^{(m)} = ∆_m, then we get from (3.16), (3.17), by virtue of (3.33), (3.34),
\[
a_n = \pm \frac{(D_{n-1} D_{n+1})^{1/2}}{D_n}, \qquad n \in \{0, 1, \ldots, N-2\}, \quad D_{-1} = 1, \tag{3.35}
\]
\[
b_n = \frac{\Delta_n}{D_n} - \frac{\Delta_{n-1}}{D_{n-1}}, \qquad n \in \{0, 1, \ldots, N-1\}, \quad \Delta_{-1} = 0, \quad \Delta_0 = s_1. \tag{3.36}
\]
Thus, if the conditions of Theorem 3 or, equivalently, the conditions of Theorem 2 are satisfied, then the entries a_n, b_n of the matrix J for which Ω is the generalized spectral function are recovered by the formulas (3.35), (3.36), where D_n is defined by (3.24) and (3.23), and ∆_n is the determinant obtained from D_n by replacing its last column by the column with the components s_{n+1}, s_{n+2}, \ldots, s_{2n+1}.
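The reconstruction formulas (3.35), (3.36) are easy to exercise numerically. The following sketch is my own implementation of these formulas; the moment data are an arbitrary choice of mine, namely the moments of a functional ⟨Ω, G⟩ = Σ_k c_k G(μ_k) with distinct real μ_k and complex weights c_k summing to 1. The reconstructed complex Jacobi matrix then has the real eigenvalues μ_k, illustrating the procedure announced in the abstract.

```python
import numpy as np

# Recover (a_n), (b_n) from moments s_0, ..., s_{2N} via the Hankel
# determinants D_n of (3.24) and the determinants Delta_n, cf. (3.35), (3.36).
def reconstruct(s, N):
    s = np.asarray(s, dtype=complex)
    H = lambda n: np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
    D = [1.0] + [np.linalg.det(H(n)) for n in range(N)]     # D_{-1}, D_0, ..., D_{N-1}
    Delta = [0.0]                                           # Delta_{-1}
    for n in range(N):
        M = H(n)
        M[:, n] = s[n + 1: 2 * n + 2]                       # last column -> s_{n+1..2n+1}
        Delta.append(np.linalg.det(M))
    a = [np.sqrt(D[n] * D[n + 2]) / D[n + 1] for n in range(N - 1)]   # (3.35), one sign choice
    b = [Delta[n + 1] / D[n + 1] - Delta[n] / D[n] for n in range(N)] # (3.36)
    return np.array(a), np.array(b)

# Moments of the functional <Omega, G> = sum_k c_k G(mu_k), with sum_k c_k = 1:
mu = np.array([-1.0, 0.5, 2.0])
c = np.array([0.3 + 0.2j, 0.5, 0.2 - 0.2j])
N = len(mu)
s = [np.sum(c * mu ** l) for l in range(2 * N + 1)]

a, b = reconstruct(s, N)
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
print(np.sort_complex(np.linalg.eigvals(J)))   # approximately mu (real!) up to rounding
```

Note that a and b come out genuinely complex here, yet the spectrum of J is real.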
4 Examples

In this section we consider some simple examples to illustrate the solution of the inverse problem given above in Section 3.
Example 1.
The functional
\[
\langle \Omega, G(\lambda) \rangle = \int_0^1 G(\lambda)\, d\lambda
\]
satisfies the conditions (i) and (ii) of Theorem 2, but it does not satisfy condition (iii) of this theorem.

In fact, obviously ⟨Ω, 1⟩ = 1. Next, suppose that for a polynomial
\[
G(\lambda) = \sum_{k=0}^{N-1} g_k \lambda^k \tag{4.1}
\]
we have
\[
\langle \Omega, G(\lambda) H(\lambda) \rangle = \int_0^1 G(\lambda) H(\lambda)\, d\lambda = 0
\]
for all polynomials
\[
H(\lambda) = \sum_{k=0}^{N-1} h_k \lambda^k, \qquad \deg H(\lambda) = \deg G(\lambda). \tag{4.2}
\]
Taking, in particular,
\[
H(\lambda) = \sum_{k=0}^{N-1} \overline{g_k}\, \lambda^k, \tag{4.3}
\]
where the bar over a complex number denotes complex conjugation, we get
\[
\int_0^1 |G(\lambda)|^2\, d\lambda = 0
\]
and hence G(λ) ≡ 0. The same reasoning shows that there is no polynomial T(λ), not identically zero, such that ⟨Ω, G(λ)T(λ)⟩ = 0 for all polynomials G(λ) with deg G(λ) ≤ deg T(λ).
The functional
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=1}^{N} c_k G(\lambda_k),
\]
where λ_1, \ldots, λ_N are distinct real numbers and c_1, \ldots, c_N are complex numbers such that
\[
\sum_{k=1}^{N} c_k = 1 \qquad \text{and} \qquad \operatorname{Re} c_k > 0 \quad (k = 1, \ldots, N),
\]
satisfies the conditions of Theorem 2.

In fact, obviously ⟨Ω, 1⟩ = 1. Next, assume that for a polynomial G(λ) of the form (4.1) we have
\[
\langle \Omega, G(\lambda) H(\lambda) \rangle = \sum_{k=1}^{N} c_k G(\lambda_k) H(\lambda_k) = 0
\]
for all polynomials H(λ) of the form (4.2). If we take, in particular, H(λ) of the form (4.3), then we get
\[
\sum_{k=1}^{N} c_k |G(\lambda_k)|^2 = 0.
\]
Hence, taking the real part and using the condition Re c_k > 0 (k = 1, \ldots, N), we get G(λ_k) = 0, k = 1, \ldots, N. Therefore G(λ) ≡ 0, because λ_1, \ldots, λ_N are distinct and G(λ) is a polynomial with deg G(λ) ≤ N − 1. Further, for the polynomial
\[
T(\lambda) = (\lambda - \lambda_1) \cdots (\lambda - \lambda_N)
\]
we have ⟨Ω, G(λ)T(λ)⟩ = 0 for all polynomials G(λ), so that condition (iii) of Theorem 2 is also satisfied. Thus the functional Ω satisfies all the conditions of Theorem 2.

Consider the case N = 2 and take the functional Ω defined by the formula
\[
\langle \Omega, G(\lambda) \rangle = c\, G(0) + (1 - c)\, G(1),
\]
where c is any complex number such that c ≠ 0 and c ≠ 1. Let us solve the inverse problem for this functional by using formulas (3.35) and (3.36). We have
\[
s_0 = \langle \Omega, 1 \rangle = 1, \qquad s_l = \langle \Omega, \lambda^l \rangle = 1 - c \quad \text{for all} \ l = 1, 2, \ldots,
\]
\[
D_{-1} = 1, \qquad D_0 = s_0 = 1,
\]
\[
D_1 = \begin{vmatrix} s_0 & s_1 \\ s_1 & s_2 \end{vmatrix} = \begin{vmatrix} 1 & 1-c \\ 1-c & 1-c \end{vmatrix} = c(1-c),
\]
\[
D_2 = \begin{vmatrix} s_0 & s_1 & s_2 \\ s_1 & s_2 & s_3 \\ s_2 & s_3 & s_4 \end{vmatrix} = \begin{vmatrix} 1 & 1-c & 1-c \\ 1-c & 1-c & 1-c \\ 1-c & 1-c & 1-c \end{vmatrix} = 0,
\]
\[
\Delta_{-1} = 0, \qquad \Delta_0 = s_1 = 1 - c, \qquad \Delta_1 = D_1^{(1)} = \begin{vmatrix} s_0 & s_2 \\ s_1 & s_3 \end{vmatrix} = \begin{vmatrix} 1 & 1-c \\ 1-c & 1-c \end{vmatrix} = c(1-c).
\]
Therefore the functional Ω satisfies all the conditions of Theorem 3. According to formulas (3.35) and (3.36), we find
\[
a_0 = \pm \frac{(D_{-1} D_1)^{1/2}}{D_0} = \pm \sqrt{D_1} = \pm \sqrt{c(1-c)},
\]
\[
b_0 = \frac{\Delta_0}{D_0} - \frac{\Delta_{-1}}{D_{-1}} = 1 - c,
\]
\[
b_1 = \frac{\Delta_1}{D_1} - \frac{\Delta_0}{D_0} = 1 - (1 - c) = c.
\]
Therefore there are two matrices J_± for which Ω is the generalized spectral function:
\[
J_{\pm} = \begin{bmatrix} b_0 & a_0 \\ a_0 & b_1 \end{bmatrix} = \begin{bmatrix} 1-c & \pm\sqrt{c(1-c)} \\ \pm\sqrt{c(1-c)} & c \end{bmatrix}.
\]
The characteristic polynomials of the matrices J_± have the form
\[
\det(J_{\pm} - \lambda I) = \lambda(\lambda - 1).
\]
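This computation is easy to confirm numerically. The snippet below is my own check (the value of c is an arbitrary choice): for a complex c, both non-Hermitian matrices J_± have eigenvalues 0 and 1.

```python
import numpy as np

# Verify that J_{+-} from the computation above has characteristic polynomial
# lambda(lambda - 1), i.e. eigenvalues 0 and 1, for an arbitrary complex c.
c = 0.7 - 1.3j
r = np.sqrt(c * (1 - c))
for sgn in (+1, -1):
    J = np.array([[1 - c, sgn * r], [sgn * r, c]])
    ev = np.sort_complex(np.linalg.eigvals(J))
    print(ev)          # approximately [0, 1] in both cases
```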
Example 3.
Let N = 2. Consider the functional Ω defined by the formula
\[
\langle \Omega, G(\lambda) \rangle = G(\lambda_0) + c\, G'(\lambda_0),
\]
where λ_0 and c are arbitrary complex numbers such that c ≠ 0. This functional satisfies all the conditions of Theorem 2. As the polynomial T(λ) required in condition (iii) of Theorem 2 we can take T(λ) = (λ − λ_0)².

We have
\[
s_0 = \langle \Omega, 1 \rangle = 1, \qquad s_l = \langle \Omega, \lambda^l \rangle = \lambda_0^l + c\, l\, \lambda_0^{l-1} \quad \text{for} \ l = 1, 2, \ldots,
\]
\[
D_{-1} = 1, \qquad D_0 = s_0 = 1,
\]
\[
D_1 = \begin{vmatrix} s_0 & s_1 \\ s_1 & s_2 \end{vmatrix} = \begin{vmatrix} 1 & \lambda_0 + c \\ \lambda_0 + c & \lambda_0^2 + 2c\lambda_0 \end{vmatrix} = -c^2,
\]
\[
D_2 = \begin{vmatrix} s_0 & s_1 & s_2 \\ s_1 & s_2 & s_3 \\ s_2 & s_3 & s_4 \end{vmatrix} = \begin{vmatrix} 1 & \lambda_0 + c & \lambda_0^2 + 2c\lambda_0 \\ \lambda_0 + c & \lambda_0^2 + 2c\lambda_0 & \lambda_0^3 + 3c\lambda_0^2 \\ \lambda_0^2 + 2c\lambda_0 & \lambda_0^3 + 3c\lambda_0^2 & \lambda_0^4 + 4c\lambda_0^3 \end{vmatrix} = 0,
\]
\[
\Delta_{-1} = 0, \qquad \Delta_0 = s_1 = \lambda_0 + c, \qquad \Delta_1 = D_1^{(1)} = \begin{vmatrix} s_0 & s_2 \\ s_1 & s_3 \end{vmatrix} = \begin{vmatrix} 1 & \lambda_0^2 + 2c\lambda_0 \\ \lambda_0 + c & \lambda_0^3 + 3c\lambda_0^2 \end{vmatrix} = -2c^2\lambda_0.
\]
Therefore the functional Ω satisfies all the conditions of Theorem 3. According to formulas (3.35) and (3.36), we find
\[
a_0 = \pm \frac{(D_{-1} D_1)^{1/2}}{D_0} = \pm \sqrt{D_1} = \pm \sqrt{-c^2} = \pm ic,
\]
\[
b_0 = \frac{\Delta_0}{D_0} - \frac{\Delta_{-1}}{D_{-1}} = \lambda_0 + c,
\]
\[
b_1 = \frac{\Delta_1}{D_1} - \frac{\Delta_0}{D_0} = \frac{-2c^2\lambda_0}{-c^2} - (\lambda_0 + c) = \lambda_0 - c.
\]
Therefore the two matrices J_± for which Ω is the generalized spectral function have the form
\[
J_{\pm} = \begin{bmatrix} b_0 & a_0 \\ a_0 & b_1 \end{bmatrix} = \begin{bmatrix} \lambda_0 + c & \pm ic \\ \pm ic & \lambda_0 - c \end{bmatrix}.
\]
The characteristic polynomials of the matrices J_± have the form
\[
\det(J_{\pm} - \lambda I) = (\lambda - \lambda_0)^2.
\]
Note that if N = 3, then the functional
\[
\langle \Omega, G(\lambda) \rangle = G(\lambda_0) + c_1 G'(\lambda_0) + c_2 G''(\lambda_0),
\]
where λ_0, c_1, c_2 are complex numbers, satisfies the conditions of Theorem 2 (or Theorem 3) if and only if c_2 ≠ 0 and c_1² − 2c_2 ≠ 0.

5 Structure of the generalized spectral function

Let J be a Jacobi matrix of the form (1.1) with entries satisfying (1.2), and let Ω be the generalized spectral function of J defined above in Section 2.
The following theorem describes the structure of Ω.

Theorem 4.
Let λ_1, . . . , λ_p be all the distinct eigenvalues of the matrix J and m_1, . . . , m_p be their multiplicities, respectively, as roots of the characteristic polynomial (2.6). There exist complex numbers β_{kj} (j = 1, . . . , m_k; k = 1, . . . , p) uniquely determined by the matrix J such that for any polynomial G(λ) ∈ C_{2N}[λ] the formula

  ⟨Ω, G(λ)⟩ = Σ_{k=1}^{p} Σ_{j=1}^{m_k} (β_{kj}/(j − 1)!) G^{(j−1)}(λ_k)   (5.1)

holds, where G^{(n)}(λ) denotes the n-th order derivative of G(λ) with respect to λ.

Proof.
Let J be a matrix of the form (1.1), (1.2). Consider the second order linear difference equation

  a_{n−1} y_{n−1} + b_n y_n + a_n y_{n+1} = λ y_n,  n ∈ {0, 1, . . . , N−1},  a_{−1} = a_{N−1} = 1,   (5.2)

where {y_n}_{n=−1}^{N} is a desired solution. Denote by {P_n(λ)}_{n=−1}^{N} and {Q_n(λ)}_{n=−1}^{N} the solutions of equation (5.2) satisfying the initial conditions

  P_{−1}(λ) = 0,  P_0(λ) = 1;   (5.3)
  Q_{−1}(λ) = −1,  Q_0(λ) = 0.   (5.4)

For each n ≥ 0, P_n(λ) is a polynomial of degree n and is called a polynomial of the first kind (note that P_n(λ) is the same polynomial as in Section 2), and Q_n(λ) is a polynomial of degree n − 1 and is called a polynomial of the second kind. Set

  M(λ) = −Q_N(λ)/P_N(λ).   (5.5)

Then it is straightforward to verify that the entries R_{nm}(λ) of the matrix R(λ) = (J − λI)^{−1} (the resolvent of J) are of the form

  R_{nm}(λ) = { P_n(λ)[Q_m(λ) + M(λ)P_m(λ)],  0 ≤ n ≤ m ≤ N−1,
              { P_m(λ)[Q_n(λ) + M(λ)P_n(λ)],  0 ≤ m ≤ n ≤ N−1.   (5.6)

Let f be an arbitrary element (column vector) of C^N, with the components f_0, f_1, . . . , f_{N−1}. Since

  R(λ)f = −f/λ + O(1/λ^2) as |λ| → ∞,

we have for each n ∈ {0, 1, . . . , N−1},

  f_n = −(1/(2πi)) ∫_{Γ_r} { Σ_{m=0}^{N−1} R_{nm}(λ) f_m } dλ + (1/(2πi)) ∫_{Γ_r} O(1/λ^2) dλ,   (5.7)

where r is a sufficiently large positive number and Γ_r is the circle in the λ-plane of radius r centered at the origin.

Denote by λ_1, . . . , λ_p all the distinct roots of the polynomial P_N(λ) (which coincides by (2.6) with the characteristic polynomial of the matrix J up to a constant factor) and by m_1, . . . , m_p their multiplicities, respectively:

  P_N(λ) = c(λ − λ_1)^{m_1} · · · (λ − λ_p)^{m_p},   (5.8)

where c is a constant. We have 1 ≤ p ≤ N and m_1 + · · · + m_p = N.

By (5.8), we can rewrite the rational function Q_N(λ)/P_N(λ) as the sum of partial fractions:

  Q_N(λ)/P_N(λ) = Σ_{k=1}^{p} Σ_{j=1}^{m_k} β_{kj}/(λ − λ_k)^j,   (5.9)

where β_{kj} are some uniquely determined complex numbers depending on the matrix J. Substituting (5.6) in (5.7) and taking into account (5.5), (5.9) we get, applying the residue theorem and then passing to the limit as r → ∞,

  f_n = Σ_{k=1}^{p} Σ_{j=1}^{m_k} (β_{kj}/(j − 1)!) { d^{j−1}/dλ^{j−1} [F(λ)P_n(λ)] }_{λ=λ_k},  n ∈ {0, 1, . . . , N−1},   (5.10)

where

  F(λ) = Σ_{m=0}^{N−1} f_m P_m(λ).   (5.11)

Now define on C_{2N}[λ] the functional Ω by the formula

  ⟨Ω, G(λ)⟩ = Σ_{k=1}^{p} Σ_{j=1}^{m_k} (β_{kj}/(j − 1)!) G^{(j−1)}(λ_k),  G(λ) ∈ C_{2N}[λ].   (5.12)

Then formula (5.10) can be written in the form

  f_n = ⟨Ω, F(λ)P_n(λ)⟩,  n ∈ {0, 1, . . . , N−1}.   (5.13)

From here, by (5.11) and the arbitrariness of {f_m}_{m=0}^{N−1}, it follows that the “orthogonality” relation

  ⟨Ω, P_m(λ)P_n(λ)⟩ = δ_{mn},  m, n ∈ {0, 1, . . . , N−1},   (5.14)

holds. Further, in virtue of (5.8) and (5.12) we also have

  ⟨Ω, P_m(λ)P_N(λ)⟩ = 0,  m ∈ {0, 1, . . . , N}.   (5.15)

By Theorem 1 these mean that the generalized spectral function of the matrix J has the form (5.12). □

Definition 2.
The collection of the quantities

  {λ_k, β_{kj} (j = 1, . . . , m_k; k = 1, . . . , p)},

determining the structure of the generalized spectral function of the matrix J according to Theorem 4, we call the spectral data of the matrix J. For each k ∈ {1, . . . , p} the sequence {β_{k1}, . . . , β_{k m_k}} we call the normalizing chain (of the matrix J) associated with the eigenvalue λ_k (the sense of “normalizing” will become clear below in Section 8).

If we delete the first row and the first column of the matrix J given in (1.1), then we get the new matrix

  J^(1) = | b_0^(1)  a_0^(1)  0        · · ·  0            0            |
          | a_0^(1)  b_1^(1)  a_1^(1)  · · ·  0            0            |
          | 0        a_1^(1)  b_2^(1)  · · ·  0            0            |
          | · · ·                      · · ·                            |
          | 0        0        0        · · ·  b_{N−3}^(1)  a_{N−3}^(1)  |
          | 0        0        0        · · ·  a_{N−3}^(1)  b_{N−2}^(1)  |

where

  a_n^(1) = a_{n+1}, n ∈ {0, 1, . . . , N−3};  b_n^(1) = b_{n+1}, n ∈ {0, 1, . . . , N−2}.

The matrix J^(1) is called the first truncated matrix (with respect to the matrix J).

Theorem 5.
The normalizing numbers β_{kj} of the matrix J can be calculated by decomposing the rational function

  −det(J^(1) − λI)/det(J − λI)

into partial fractions.

Proof.
Let us denote the polynomials of the first and the second kinds corresponding to the matrix J^(1) by P_n^(1)(λ) and Q_n^(1)(λ), respectively. It is easily seen that

  P_n^(1)(λ) = a_0 Q_{n+1}(λ),  n ∈ {−1, 0, . . . , N−1},   (5.16)
  Q_n^(1)(λ) = (1/a_0){(λ − b_0) Q_{n+1}(λ) − P_{n+1}(λ)},  n ∈ {0, 1, . . . , N−1}.   (5.17)

Indeed, both sides of each of these equalities are solutions of the same difference equation

  a_{n−1}^(1) y_{n−1} + b_n^(1) y_n + a_n^(1) y_{n+1} = λ y_n,  n ∈ {0, 1, . . . , N−2},  a_{−1}^(1) = a_{N−2}^(1) = 1,

and the two sides coincide at two consecutive values of n (at n = −1, 0 for (5.16) and at n = 0, 1 for (5.17)). Therefore the equalities hold by the uniqueness theorem for solutions.

Consequently, taking into account Lemma 1 and using (5.16), we have

  det(J^(1) − λI) = (−1)^{N−1} a_0^(1) a_1^(1) · · · a_{N−3}^(1) P_{N−1}^(1)(λ) = (−1)^{N−1} a_1 · · · a_{N−2} a_0 Q_N(λ).

Comparing this with (2.6), we get

  Q_N(λ)/P_N(λ) = −det(J^(1) − λI)/det(J − λI),

so that the statement of the theorem follows from (5.9). □
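Theorem 5 gives a concrete way to compute the normalizing numbers. As an illustrative check (the sample matrix below is chosen here, it is not taken from the text), consider the real 3×3 Jacobi matrix with a_0 = a_1 = 1 and b_0 = b_1 = b_2 = 0. Its characteristic polynomial det(J − λI) = −λ^3 + 2λ has the simple roots 0, ±√2, so the β_k are plain residues of −det(J^(1) − λI)/det(J − λI):

```python
import math

# Sample real 3x3 Jacobi matrix: a0 = a1 = 1, b0 = b1 = b2 = 0 (illustrative choice).
# det(J - lam*I) = -lam^3 + 2*lam, with simple roots 0, +sqrt(2), -sqrt(2).
# The truncated matrix J1 = [[0, 1], [1, 0]] has det(J1 - lam*I) = lam^2 - 1.
def p(lam):  return -lam**3 + 2 * lam     # det(J - lam*I)
def p1(lam): return lam**2 - 1            # det(J1 - lam*I)
def dp(lam): return -3 * lam**2 + 2       # derivative of p

eigs = [0.0, math.sqrt(2), -math.sqrt(2)]

# For simple eigenvalues the partial-fraction coefficients of -p1/p are residues:
#   beta_k = -p1(lam_k) / p'(lam_k).
beta = [-p1(lam) / dp(lam) for lam in eigs]
print(beta)        # [0.5, 0.25, 0.25]
print(sum(beta))   # the normalizing numbers sum to 1
```

The sum Σ_k β_{k1} = 1 is exactly condition (i) of Theorem 6 below, so this computation also previews the necessity part of that theorem on a concrete example.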
By the inverse spectral problem is meant the problem of recovering the matrix J, i.e. its entries a_n and b_n, from the spectral data.

Theorem 6.
Let an arbitrary collection of complex numbers

  {λ_k, β_{kj} (j = 1, . . . , m_k; k = 1, . . . , p)}   (6.1)

be given, where λ_1, λ_2, . . . , λ_p (1 ≤ p ≤ N) are distinct, 1 ≤ m_k ≤ N, and m_1 + · · · + m_p = N. In order for this collection to be the spectral data for some Jacobi matrix J of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following two conditions be satisfied:

  (i) Σ_{k=1}^{p} β_{k1} = 1;
  (ii) D_n ≠ 0 for n ∈ {0, 1, . . . , N−1}, and D_N = 0, where D_n is defined by (3.24), in which

  s_l = Σ_{k=1}^{p} Σ_{j=1}^{n_{kl}} ( l choose j−1 ) β_{kj} λ_k^{l−j+1},   (6.2)

with n_{kl} = min{m_k, l + 1} and ( l choose j−1 ) a binomial coefficient.

Proof.
Necessity of the conditions of the theorem follows from Theorem 3, because the generalized spectral function of the matrix J is defined by the spectral data according to formula (5.1) and therefore the quantity (6.2) coincides with ⟨Ω, λ^l⟩. Besides,

  Σ_{k=1}^{p} β_{k1} = ⟨Ω, 1⟩ = s_0 = D_0 = 1.

Note that condition (iii) of Theorem 2 holds with

  T(λ) = (λ − λ_1)^{m_1} · · · (λ − λ_p)^{m_p}.   (6.3)

Let us prove the sufficiency. Assume that we have a collection of quantities (6.1) satisfying the conditions of the theorem. Using these data we construct the functional Ω on C_{2N}[λ] by formula (5.1). Then this functional Ω satisfies the conditions of Theorem 3, and therefore there exists a matrix J of the form (1.1), (1.2) for which Ω is the generalized spectral function. Now we have to prove that the collection (6.1) is the spectral data for the recovered matrix J.

For this purpose we define the polynomials P_{−1}(λ), P_0(λ), . . . , P_N(λ) as the solutions of equation (5.2), constructed by means of the matrix J, under the initial conditions (5.3). Then the relations (2.7), (2.8) and the equalities

  a_n = ⟨Ω, λ P_n(λ) P_{n+1}(λ)⟩,  n ∈ {0, 1, . . . , N−2},   (6.4)
  b_n = ⟨Ω, λ P_n^2(λ)⟩,  n ∈ {0, 1, . . . , N−1}   (6.5)

hold. We show that (5.8) holds, which will mean, in particular, that λ_1, . . . , λ_p are eigenvalues of the matrix J with the multiplicities m_1, . . . , m_p, respectively.

Let T(λ) be defined by (6.3). Let us show that there exists a constant c such that

  a_{N−2} P_{N−2}(λ) + b_{N−1} P_{N−1}(λ) + cT(λ) = λ P_{N−1}(λ)   (6.6)

for all λ ∈ C. If we prove this, then from here and (5.2) with y_k = P_k(λ) and n = N−1 we get P_N(λ) = cT(λ). Since deg P_n(λ) = n (0 ≤ n ≤ N−1) and deg T(λ) = m_1 + · · · + m_p = N, the polynomials P_0(λ), . . . , P_{N−1}(λ), T(λ) form a basis of the linear space of all polynomials of degree ≤ N.

Therefore we have the decomposition

  λ P_{N−1}(λ) = cT(λ) + Σ_{n=0}^{N−1} c_n P_n(λ),   (6.7)

where c, c_0, c_1, . . . , c_{N−1} are some constants. By (6.3) and (5.1) it follows that

  ⟨Ω, T(λ) P_n(λ)⟩ = 0,  n ∈ {0, 1, . . . , N−1}.

Hence, taking into account the relations (2.7), (2.8) and (6.4), (6.5), we find from (6.7) that

  c_n = 0 (0 ≤ n ≤ N−3),  c_{N−2} = a_{N−2},  c_{N−1} = b_{N−1}.

So (6.6) is established.

It remains to show that for each k ∈ {1, . . . , p} the sequence {β_{k1}, . . . , β_{k m_k}} is the normalizing chain of the matrix J associated with the eigenvalue λ_k. Since we have already shown that λ_k is an eigenvalue of the matrix J of multiplicity m_k, the normalizing chain of J associated with the eigenvalue λ_k has the form {β̃_{k1}, . . . , β̃_{k m_k}}. Therefore for ⟨Ω, G(λ)⟩ we have an equality of the form (5.1) in which β_{kj} is replaced by β̃_{kj}. Subtracting these two equalities for ⟨Ω, G(λ)⟩ from each other, we get

  Σ_{k=1}^{p} Σ_{j=1}^{m_k} ((β_{kj} − β̃_{kj})/(j − 1)!) G^{(j−1)}(λ_k) = 0

for all G(λ) ∈ C_{2N}[λ]. Since the values G^{(j−1)}(λ_k) can be arbitrary numbers, we get that β_{kj} = β̃_{kj} for all k and j. □

Under the conditions of Theorem 6 the entries a_n and b_n of the matrix J, for which the collection (6.1) is the spectral data, are recovered by formulas (3.35), (3.36).

In this section, we characterize generalized spectral functions of real Jacobi matrices among the generalized spectral functions of complex Jacobi matrices. Let m be a nonnegative integer. Denote by R_m[λ] the set of all polynomials in λ of degree ≤ m with real coefficients.

Definition 3.
A linear functional Ω defined on the space C_{2m}[λ] is said to be positive if

  ⟨Ω, G(λ)⟩ > 0

for all polynomials G(λ) ∈ R_{2m}[λ] which are not identically zero and which satisfy the inequality G(λ) ≥ 0, −∞ < λ < ∞.

Lemma 2. If Ω is a positive functional on C_{2m}[λ], then it takes only real values on R_{2m}[λ].

Proof.
Since the functional Ω is positive, the values ⟨Ω, λ^{2k}⟩, k ∈ {0, 1, . . . , m}, are real (moreover, they are positive). Next, the monomial λ^{2k−1}, k ∈ {1, 2, . . . , m}, is represented via a difference of two nonnegative polynomials of degree 2k:

  2λ^{2k−1} = λ^{2k−2}(λ + 1)^2 − λ^{2k−2}(λ^2 + 1).

Therefore the values ⟨Ω, λ^{2k−1}⟩, k ∈ {1, 2, . . . , m}, are also real, being half the difference of two positive numbers. Thus ⟨Ω, λ^n⟩ is real for any n ∈ {0, 1, . . . , 2m}. Hence ⟨Ω, G(λ)⟩ is real for any G(λ) ∈ R_{2m}[λ]. □
Lemma 3.
A linear functional Ω on C_{2m}[λ] is positive if and only if D_n > 0 for all n ∈ {0, 1, . . . , m}, where

  D_n = | s_0      s_1      · · ·  s_n     |
        | s_1      s_2      · · ·  s_{n+1} |
        | ...      ...      . . .  ...     | ,  n = 0, 1, . . . , m,
        | s_n      s_{n+1}  · · ·  s_{2n}  |

in which s_l = ⟨Ω, λ^l⟩, l = 0, 1, . . . , 2m.

Proof.
Any polynomial G(λ) ∈ R_{2m}[λ] which is not identically zero and which satisfies the inequality

  G(λ) ≥ 0,  −∞ < λ < ∞,   (7.1)

can be represented in the form

  G(λ) = [A(λ)]^2 + [B(λ)]^2,   (7.2)

where A(λ), B(λ) are polynomials of degrees ≤ m with real coefficients. Indeed, it follows from (7.1) that the polynomial G(λ) has even degree: deg G(λ) = 2p, where p ≤ m. Therefore its decomposition into linear factors has the form

  G(λ) = c Π_{k=1}^{p} (λ − α_k − iβ_k)(λ − α_k + iβ_k),

where c > 0, β_k ≥ 0, and α_k are real (among the roots α_k + iβ_k there may, of course, be equal ones). Now setting

  √c Π_{k=1}^{p} (λ − α_k − iβ_k) = A(λ) + iB(λ),

we get that the polynomials A(λ), B(λ) have real coefficients and (7.2) holds.

Now writing

  A(λ) = Σ_{k=0}^{p} x_k λ^k,  B(λ) = Σ_{k=0}^{p} y_k λ^k,

where x_k, y_k are real numbers, we find

  ⟨Ω, G(λ)⟩ = Σ_{j,k=0}^{p} s_{j+k} x_j x_k + Σ_{j,k=0}^{p} s_{j+k} y_j y_k.

This implies the statement of the lemma. □
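The Hankel-determinant criterion of Lemma 3 is easy to test numerically. As an illustration (the functional below is an arbitrary choice, not taken from the text), take the positive functional ⟨Ω, G⟩ = (1/2)G(0) + (1/4)G(√2) + (1/4)G(−√2): its determinants D_0, D_1, D_2 are positive, while D_3 vanishes because Ω is supported on only three points.

```python
import math

def det(M):
    # Determinant by Laplace expansion along the first row (fine for tiny matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Positive functional <Omega, G> = sum_k beta_k * G(lam_k): an arbitrary illustrative
# choice of real nodes with positive weights summing to 1.
nodes = [0.0, math.sqrt(2), -math.sqrt(2)]
weights = [0.5, 0.25, 0.25]

s = [sum(b * x**l for b, x in zip(weights, nodes)) for l in range(8)]   # moments s_l
D = [det([[s[i + j] for j in range(n + 1)] for i in range(n + 1)]) for n in range(4)]
print(D)   # D_0, D_1, D_2 positive; D_3 = 0 since Omega is supported on 3 points
```

The vanishing of D_3 (here with N = 3) is the pattern D_N = 0 that reappears in Theorem 8 below.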
Theorem 7.
In order for a given linear functional Ω on C_{2N}[λ] to be the generalized spectral function for a real Jacobi matrix of the form (1.1), (1.3), it is necessary and sufficient that the following three conditions be satisfied:

  (i) ⟨Ω, 1⟩ = 1;
  (ii) Ω is positive on C_{2N−2}[λ];
  (iii) there exists a polynomial T(λ) of degree N such that ⟨Ω, G(λ)T(λ)⟩ = 0 for all polynomials G(λ) with deg G(λ) ≤ N.

Proof.
Necessity. The condition ⟨Ω, 1⟩ = 1 follows from (2.7) with m = n = 0. To prove positivity on C_{2N−2}[λ] of the generalized spectral function Ω of the real Jacobi matrix J, take an arbitrary polynomial G(λ) ∈ R_{2N−2}[λ] which is not identically zero and which satisfies the inequality

  G(λ) ≥ 0,  −∞ < λ < ∞.

This polynomial can be represented in the form (see the proof of Lemma 3)

  G(λ) = [A(λ)]^2 + [B(λ)]^2,   (7.3)

where A(λ), B(λ) are polynomials of degrees ≤ N−1 with real coefficients. Since the polynomials P_0(λ), P_1(λ), . . . , P_{N−1}(λ) have real coefficients (because J is a real matrix) and they form a basis of R_{N−1}[λ], we can write the decompositions

  A(λ) = Σ_{k=0}^{N−1} c_k P_k(λ),  B(λ) = Σ_{k=0}^{N−1} d_k P_k(λ),

where c_k, d_k are real numbers, not all zero. Therefore, using the “orthogonality” property (2.7), we get from (7.3)

  ⟨Ω, G(λ)⟩ = Σ_{k=0}^{N−1} (c_k^2 + d_k^2) > 0.

The property of Ω indicated in condition (iii) of the theorem follows from (2.8) if we take T(λ) = P_N(λ).

Sufficiency. It follows from the conditions of the theorem that all the conditions of Theorem 2 are satisfied. In fact, we need to verify only condition (ii) of Theorem 2. Let for some polynomial G(λ), deg G(λ) = n ≤ N−1,

  ⟨Ω, G(λ)H(λ)⟩ = 0   (7.4)

for all polynomials H(λ) with deg H(λ) ≤ n. We have to show that then G(λ) ≡ 0. Setting

  G(λ) = Σ_{k=0}^{n} g_k λ^k,  H(λ) = Σ_{j=0}^{n} h_j λ^j,

we get from (7.4) that

  Σ_{j=0}^{n} h_j ( Σ_{k=0}^{n} g_k s_{k+j} ) = 0.

Since h_0, h_1, . . . , h_n are arbitrary, the last equation gives

  Σ_{k=0}^{n} g_k s_{k+j} = 0,  j = 0, 1, . . . , n.   (7.5)

This is a linear homogeneous system of algebraic equations with respect to g_0, g_1, . . . , g_n, and the determinant of this system coincides with the determinant D_n. From condition (ii) of the theorem it follows by Lemma 3 that D_n > 0. So D_n ≠ 0 and hence system (7.5) has only the trivial solution g_0 = g_1 = · · · = g_n = 0.

Thus, all the conditions of Theorem 2 are satisfied. Therefore there exists, generally speaking, a complex Jacobi matrix J of the form (1.1), (1.2) for which Ω is the generalized spectral function. This matrix J is constructed by using formulas (3.35), (3.36). It remains to show that the matrix J is real. But this follows from the fact that by Lemma 2 and Lemma 3 we have D_n > 0 for n ∈ {0, 1, . . . , N−1} and the determinants Δ_n are real. Therefore formulas (3.35), (3.36) imply that the matrix J is real. □

If we take into account Lemma 3, then it is easily seen from the proof of Theorem 3 that Theorem 7 is equivalent to the following theorem.
Theorem 8.
In order for a given linear functional Ω, defined on C_{2N}[λ], to be the generalized spectral function for some real Jacobi matrix J of the form (1.1) with entries belonging to the class (1.3), it is necessary and sufficient that

  D_0 = 1,  D_n > 0 for n = 1, 2, . . . , N−1,  and  D_N = 0,

where D_n is defined by (3.24) and (3.23).

Under the conditions of Theorem 8 the entries a_n and b_n of the matrix J, for which the functional Ω is the spectral function, are recovered by formulas (3.35), (3.36).

First we prove two lemmas which hold for any complex Jacobi matrix J of the form (1.1), (1.2). Given the matrix J, consider the difference equation (5.2) and let {P_n(λ)}_{n=−1}^{N} and {Q_n(λ)}_{n=−1}^{N} be the solutions of this equation satisfying the initial conditions (5.3) and (5.4), respectively.

Lemma 4.
The equation

  P_{N−1}(λ) Q_N(λ) − P_N(λ) Q_{N−1}(λ) = 1   (8.1)

holds.

Proof.
Multiply the first of the equations

  a_{n−1} P_{n−1}(λ) + b_n P_n(λ) + a_n P_{n+1}(λ) = λ P_n(λ),  n ∈ {0, 1, . . . , N−1},  a_{−1} = a_{N−1} = 1,   (8.2)
  a_{n−1} Q_{n−1}(λ) + b_n Q_n(λ) + a_n Q_{n+1}(λ) = λ Q_n(λ),  n ∈ {0, 1, . . . , N−1},  a_{−1} = a_{N−1} = 1,

by Q_n(λ) and the second by P_n(λ), and subtract the second result from the first, to get

  a_{n−1}[P_{n−1}(λ)Q_n(λ) − P_n(λ)Q_{n−1}(λ)] = a_n[P_n(λ)Q_{n+1}(λ) − P_{n+1}(λ)Q_n(λ)],  n ∈ {0, 1, . . . , N−1}.

This means that the expression (the Wronskian of the solutions P_n(λ) and Q_n(λ))

  a_n[P_n(λ)Q_{n+1}(λ) − P_{n+1}(λ)Q_n(λ)]

does not depend on n ∈ {−1, 0, 1, . . . , N−1}. On the other hand, by (5.3), (5.4) and a_{−1} = 1, the value of this expression at n = −1 is equal to 1. Therefore

  a_n[P_n(λ)Q_{n+1}(λ) − P_{n+1}(λ)Q_n(λ)] = 1 for all n ∈ {−1, 0, 1, . . . , N−1}.

Putting here, in particular, n = N−1 and using a_{N−1} = 1, we arrive at (8.1). □

Lemma 5.
The equation

  P_{N−1}(λ) P′_N(λ) − P_N(λ) P′_{N−1}(λ) = Σ_{n=0}^{N−1} P_n^2(λ)   (8.3)

holds, where the prime denotes the derivative with respect to λ.

Proof.
Differentiating equation (8.2) with respect to λ, we get

  a_{n−1} P′_{n−1}(λ) + b_n P′_n(λ) + a_n P′_{n+1}(λ) = λ P′_n(λ) + P_n(λ),  n ∈ {0, 1, . . . , N−1},  a_{−1} = a_{N−1} = 1.   (8.4)

Multiplying equation (8.2) by P′_n(λ) and equation (8.4) by P_n(λ), and subtracting the left- and right-hand members of the resulting equations, we get

  a_{n−1}[P_{n−1}(λ)P′_n(λ) − P_n(λ)P′_{n−1}(λ)] − a_n[P_n(λ)P′_{n+1}(λ) − P_{n+1}(λ)P′_n(λ)] = −P_n^2(λ),  n ∈ {0, 1, . . . , N−1}.

Summing the last equation over the values n = 0, 1, . . . , m (m ≤ N−1) and using the initial conditions (5.3), we obtain

  a_m[P_m(λ)P′_{m+1}(λ) − P_{m+1}(λ)P′_m(λ)] = Σ_{n=0}^{m} P_n^2(λ),  m ∈ {0, 1, . . . , N−1}.

Setting here, in particular, m = N−1 and using a_{N−1} = 1, we get (8.3). □

Now we consider real Jacobi matrices of the form (1.1), (1.3).
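Before turning to the real case, note that both identities (8.1) and (8.3) hold for arbitrary complex entries, which makes them easy to check numerically. The sketch below runs the recursion (5.2) for an illustrative complex 4×4 Jacobi matrix (entries chosen here arbitrarily, subject only to a_n ≠ 0) and evaluates both sides of each identity at an arbitrary point λ:

```python
# Verify identities (8.1) and (8.3) for a sample complex Jacobi matrix (N = 4).
a = [2.0 - 1.0j, 0.5j, 1.5]        # a_0, ..., a_{N-2}, all nonzero (illustrative)
b = [1.0, -0.5 + 1.0j, 0.0, 2.0]   # b_0, ..., b_{N-1} (illustrative)
N = len(b)
lam = 0.7 + 0.3j                   # arbitrary evaluation point

def run(y_m1, y_0):
    # Recursion a_{n-1} y_{n-1} + b_n y_n + a_n y_{n+1} = lam*y_n with the
    # convention a_{-1} = a_{N-1} = 1; also track d/dlam of the solution.
    y = {-1: y_m1, 0: y_0}
    d = {-1: 0.0, 0: 0.0}          # the initial values do not depend on lam
    for n in range(N):
        an = a[n] if n <= N - 2 else 1.0
        anm = a[n - 1] if n >= 1 else 1.0
        y[n + 1] = ((lam - b[n]) * y[n] - anm * y[n - 1]) / an
        d[n + 1] = ((lam - b[n]) * d[n] + y[n] - anm * d[n - 1]) / an
    return y, d

P, dP = run(0.0, 1.0)    # first-kind solution, initial conditions (5.3)
Q, dQ = run(-1.0, 0.0)   # second-kind solution, initial conditions (5.4)

wronskian = P[N - 1] * Q[N] - P[N] * Q[N - 1]        # left side of (8.1), should be 1
lhs = P[N - 1] * dP[N] - P[N] * dP[N - 1]            # left side of (8.3)
rhs = sum(P[n] ** 2 for n in range(N))               # right side of (8.3)
print(abs(wronskian - 1), abs(lhs - rhs))            # both ~ 0
```

Note that the right-hand side of (8.3) involves squares P_n^2, not |P_n|^2; for complex entries it is a complex number, and only in the real case (used in Lemma 6) does it become a positive quantity.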
Lemma 6.
For any real Jacobi matrix J of the form (1.1), (1.3) the roots of the polynomial P_N(λ) are simple.

Proof.
Let λ_0 be a root of the polynomial P_N(λ). The root λ_0 is an eigenvalue of the matrix J by (2.6) and hence it is real by the Hermiticity of J. Putting λ = λ_0 in (8.3) and using P_N(λ_0) = 0, we get

  P_{N−1}(λ_0) P′_N(λ_0) = Σ_{n=0}^{N−1} P_n^2(λ_0).   (8.5)

The right-hand side of (8.5) is different from zero because the polynomials P_n(λ) have real coefficients and hence are real for real values of λ, and P_0(λ) = 1. Consequently P′_N(λ_0) ≠ 0, that is, the root λ_0 of the polynomial P_N(λ) is simple. □

Lemma 7.
Any real Jacobi matrix J of the form (1.1), (1.3) has precisely N real and distinct eigenvalues.

Proof.
The reality of the eigenvalues of J follows from its Hermiticity. Next, the eigenvalues of J coincide, by (2.6), with the roots of the polynomial P_N(λ). This polynomial of degree N has N roots, and these roots are pairwise distinct by Lemma 6. □

The following theorem describes the structure of generalized spectral functions of real Jacobi matrices.
Theorem 9.
Let J be a real Jacobi matrix of the form (1.1), (1.3) and let Ω be its generalized spectral function. Then for any G(λ) ∈ C_{2N}[λ]

  ⟨Ω, G(λ)⟩ = Σ_{k=1}^{N} β_k G(λ_k),   (8.6)

where λ_1, . . . , λ_N are the eigenvalues of the matrix J and β_1, . . . , β_N are positive real numbers uniquely determined by the matrix J.

Proof.
By Lemma 6, the roots λ_1, . . . , λ_N of the polynomial P_N(λ) are simple. Therefore formula (8.6) follows from (5.1), and the decomposition (5.9) takes the form

  Q_N(λ)/P_N(λ) = Σ_{k=1}^{N} β_k/(λ − λ_k).

Hence

  Q_N(λ_k) = β_k P′_N(λ_k).   (8.7)

On the other hand, putting λ = λ_k in (8.1) and (8.3) and taking into account that P_N(λ_k) = 0, we get

  P_{N−1}(λ_k) Q_N(λ_k) = 1,   (8.8)
  P_{N−1}(λ_k) P′_N(λ_k) = Σ_{n=0}^{N−1} P_n^2(λ_k),   (8.9)

respectively. Comparing (8.7), (8.8), and (8.9), we find that

  β_k = { Σ_{n=0}^{N−1} P_n^2(λ_k) }^{−1},   (8.10)

whence we get, in particular, that β_k > 0. □

Since {P_n(λ_k)}_{n=0}^{N−1} is an eigenvector of the matrix J corresponding to the eigenvalue λ_k, it is natural, in view of formula (8.10), to call the numbers β_k the normalizing numbers of the matrix J.

Definition 4.
The collection of the eigenvalues and normalizing numbers

  {λ_k, β_k (k = 1, . . . , N)}

of the matrix J of the form (1.1), (1.3) we call the spectral data of this matrix.

Remark 2.
Assuming that λ_1 < λ_2 < · · · < λ_N, let us introduce the nondecreasing step function ω(λ) on (−∞, ∞) by

  ω(λ) = Σ_{λ_k ≤ λ} β_k,

where ω(λ) = 0 if there is no λ_k ≤ λ. Thus the eigenvalues of the matrix J are the points of increase of the function ω(λ). Then equality (8.6) can be written as

  ⟨Ω, G(λ)⟩ = ∫_{−∞}^{∞} G(λ) dω(λ),

the orthogonality relation (2.7) as

  ∫_{−∞}^{∞} P_m(λ) P_n(λ) dω(λ) = δ_{mn},  m, n ∈ {0, 1, . . . , N−1},

and the expansion formula (5.13) as

  f_n = ∫_{−∞}^{∞} F(λ) P_n(λ) dω(λ),  n ∈ {0, 1, . . . , N−1},

where F(λ) is defined by (5.11):

  F(λ) = Σ_{m=0}^{N−1} f_m P_m(λ).

Such a function ω(λ) is known as a spectral function (see, e.g., [21]) of the operator (matrix) J. This explains the source of the term “generalized spectral function” used in the complex case.
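Formula (8.10) and the orthogonality with respect to dω can be illustrated on a concrete matrix. The sketch below uses the sample real 3×3 Jacobi matrix with a_0 = a_1 = 1, b_0 = b_1 = b_2 = 0 (an illustrative choice): its first-kind polynomials are P_0 = 1, P_1 = λ, P_2 = λ^2 − 1, and its eigenvalues are 0, ±√2.

```python
import math

# Sample real Jacobi matrix: a = (1, 1), b = (0, 0, 0); first-kind polynomials
# P_0 = 1, P_1 = lam, P_2 = lam^2 - 1, eigenvalues 0, +sqrt(2), -sqrt(2).
def P(n, lam):
    return [1.0, lam, lam**2 - 1.0][n]

eigs = [0.0, math.sqrt(2), -math.sqrt(2)]

# Normalizing numbers via (8.10): beta_k = 1 / sum_n P_n(lam_k)^2.
beta = [1.0 / sum(P(n, lam) ** 2 for n in range(3)) for lam in eigs]
print(beta, sum(beta))    # positive numbers summing to 1

# Orthogonality with respect to d(omega):
#   sum_k beta_k * P_m(lam_k) * P_n(lam_k) = delta_mn.
gram = [[sum(bk * P(m, lam) * P(n, lam) for bk, lam in zip(beta, eigs))
         for n in range(3)] for m in range(3)]
print(gram)               # approximately the 3x3 identity matrix
```

The Gram matrix being the identity is the discrete form of the relation ∫ P_m P_n dω = δ_mn stated in Remark 2.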
By the inverse spectral problem for real Jacobi matrices we mean the problem of recovering the matrix, i.e. its entries, from the spectral data.
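The determinant formulas (3.35), (3.36) are not reproduced in this excerpt; an equivalent way to carry out the recovery, sketched below under an illustrative choice of spectral data, is a Lanczos-type orthogonalization with respect to the discrete inner product ⟨f, g⟩ = Σ_k β_k f(λ_k) g(λ_k). It uses the relations b_n = ⟨Ω, λP_n^2⟩ and a_n P_{n+1} = (λ − b_n)P_n − a_{n−1}P_{n−1} from the recursion (5.2), fixing each sign by taking the positive square root (a real Jacobi matrix is recovered only up to the signs of the a_n).

```python
import math

# Illustrative spectral data: distinct real eigenvalues, positive weights summing to 1.
eigs = [0.0, 2 * math.sqrt(2), -2 * math.sqrt(2)]
beta = [0.5, 0.25, 0.25]
N = len(eigs)

def dot(f, g):
    # <f, g> = sum_k beta_k f(lam_k) g(lam_k); polynomials are represented by
    # their values at the nodes lam_k, which suffices for a discrete measure.
    return sum(bk * x * y for bk, x, y in zip(beta, f, g))

# Lanczos-type recurrence: P_{-1} = 0, P_0 = 1,
# b_n = <lam*P_n, P_n>, a_n P_{n+1} = (lam - b_n) P_n - a_{n-1} P_{n-1}.
P_prev = [0.0] * N
P_cur = [1.0] * N
a_prev = 1.0                       # a_{-1} = 1 by convention
a_out, b_out = [], []
for n in range(N):
    b_n = dot([lam * p for lam, p in zip(eigs, P_cur)], P_cur)
    b_out.append(b_n)
    if n == N - 1:
        break
    R = [(lam - b_n) * p - a_prev * q for lam, p, q in zip(eigs, P_cur, P_prev)]
    a_n = math.sqrt(dot(R, R))     # positive square root fixes the sign choice
    a_out.append(a_n)
    P_prev, P_cur = P_cur, [r / a_n for r in R]
    a_prev = a_n

print(b_out, a_out)   # recovered diagonal and off-diagonal entries
```

For this particular data the recovered matrix has b = (0, 0, 0) and a = (2, 2), and running the forward spectral computation on it reproduces the eigenvalues and weights we started from, in line with Theorem 10.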
Theorem 10.
Let an arbitrary collection of numbers

  {λ_k, β_k (k = 1, . . . , N)}   (9.1)

be given. In order for this collection to be the spectral data for a real Jacobi matrix J of the form (1.1) with entries belonging to the class (1.3), it is necessary and sufficient that the following two conditions be satisfied:

  (i) the numbers λ_1, . . . , λ_N are real and distinct;
  (ii) the numbers β_1, . . . , β_N are positive and such that Σ_{k=1}^{N} β_k = 1.

Proof.
The necessity of the conditions of the theorem was proved above. To prove the sufficiency, assume that we have a collection of quantities (9.1) satisfying the conditions of the theorem. Using these data we construct the functional Ω on C_{2N}[λ] by the formula

  ⟨Ω, G(λ)⟩ = Σ_{k=1}^{N} β_k G(λ_k),  G(λ) ∈ C_{2N}[λ].   (9.2)

Then this functional Ω satisfies the conditions of Theorem 7. Indeed, we have

  ⟨Ω, 1⟩ = Σ_{k=1}^{N} β_k = 1.

Next, let G(λ) ∈ R_{2N−2}[λ] be an arbitrary polynomial which is not identically zero and which satisfies the inequality

  G(λ) ≥ 0,  −∞ < λ < ∞.

This polynomial can be represented in the form (see the proof of Lemma 3)

  G(λ) = [A(λ)]^2 + [B(λ)]^2,

where A(λ), B(λ) are polynomials of degrees ≤ N−1 with real coefficients. Hence

  ⟨Ω, G(λ)⟩ = Σ_{k=1}^{N} β_k G(λ_k) = Σ_{k=1}^{N} β_k [A(λ_k)]^2 + Σ_{k=1}^{N} β_k [B(λ_k)]^2 ≥ 0.   (9.3)

We have to show that the equality sign in (9.3) is impossible. If we had equality in (9.3), then, since all the β_k are positive, we would get A(λ_1) = · · · = A(λ_N) = 0 and B(λ_1) = · · · = B(λ_N) = 0. Hence A(λ) ≡ B(λ) ≡ 0, because λ_1, . . . , λ_N are distinct and deg A(λ) ≤ N−1, deg B(λ) ≤ N−1. Therefore we would get G(λ) ≡ 0, a contradiction; thus Ω is positive on C_{2N−2}[λ]. Finally, if we set

  T(λ) = (λ − λ_1) · · · (λ − λ_N),

then condition (iii) of Theorem 7 is also satisfied: for any polynomial G(λ),

  ⟨Ω, G(λ)T(λ)⟩ = Σ_{k=1}^{N} β_k G(λ_k) T(λ_k) = 0.

Thus, the functional Ω defined by formula (9.2) satisfies all the conditions of Theorem 7. Therefore there exists a real Jacobi matrix J of the form (1.1), (1.3) for which Ω is the generalized spectral function. Further, from the proof of the sufficiency of the conditions of Theorem 6 it follows that the collection {λ_k, β_k (k = 1, . . . , N)} is the spectral data for the recovered matrix J.
□

Note that under the conditions of Theorem 10 the entries a_n and b_n of the matrix J, for which the collection (9.1) is the spectral data, are recovered by formulas (3.35), (3.36).

Acknowledgements
This work was supported by Grant 106T549 from the Scientific and Technological Research Council of Turkey (TUBITAK).
References

[1] Boley D., Golub G.H., A survey of matrix inverse eigenvalue problems, Inverse Problems (1987), 595–622.
[2] Ikramov Kh.D., Chugunov V.N., Inverse matrix eigenvalue problems, J. Math. Sciences (2000), 51–136.
[3] Chu M.T., Golub G.H., Inverse eigenvalue problems: theory, algorithms, and applications, Oxford University Press, New York, 2005.
[4] Marchenko V.A., Expansion in eigenfunctions of non-selfadjoint singular second order differential operators, Mat. Sb. (1960), 739–788 (in Russian).
[5] Rofe-Beketov F.S., Expansion in eigenfunctions of infinite systems of differential equations in the non-selfadjoint and selfadjoint cases, Mat. Sb. (1960), 293–342 (in Russian).
[6] Guseinov G.Sh., Determination of an infinite non-selfadjoint Jacobi matrix from its generalized spectral function, Mat. Zametki (1978), 237–248 (English transl.: Math. Notes (1978), 130–136).
[7] Guseinov G.Sh., The inverse problem from the generalized spectral matrix for a second order non-selfadjoint difference equation on the axis, Izv. Akad. Nauk Azerb. SSR Ser. Fiz.-Tekhn. Mat. Nauk (1978), no. 5, 16–22 (in Russian).
[8] Kishakevich Yu.L., Spectral function of Marchenko type for a difference operator of an even order, Mat. Zametki (1972), 437–446 (English transl.: Math. Notes (1972), 266–271).
[9] Kishakevich Yu.L., On an inverse problem for non-selfadjoint difference operators, Mat. Zametki (1972), 661–668 (English transl.: Math. Notes (1972), 402–406).
[10] Bender C.M., Making sense of non-Hermitian Hamiltonians, Rep. Progr. Phys. (2007), 947–1018, hep-th/0703096.
[11] Znojil M., Matching method and exact solvability of discrete PT-symmetric square wells, J. Phys. A: Math. Gen. (2006), 10247–10261, quant-ph/0605209.
[12] Znojil M., Maximal couplings in PT-symmetric chain models with the real spectrum of energies, J. Phys. A: Math. Theor. (2007), 4863–4875, math-ph/0703070.
[13] Znojil M., Tridiagonal PT-symmetric N by N Hamiltonians and fine-tuning of their observability domains in the strongly non-Hermitian regime, J. Phys. A: Math. Theor. (2007), 13131–13148, arXiv:0709.1569.
[14] Allakhverdiev B.P., Guseinov G.Sh., On the spectral theory of dissipative difference operators of second order, Mat. Sb. (1989), 101–118 (English transl.: Math. USSR Sbornik (1990), 107–125).
[15] Guseinov G.Sh., Completeness of the eigenvectors of a dissipative second order difference operator, J. Difference Equ. Appl. (2002), 321–331.
[16] van Moerbeke P., Mumford D., The spectrum of difference operators and algebraic curves, Acta Math. (1979), 93–154.
[17] Sansuc J.J., Tkachenko V., Spectral parametrization of non-selfadjoint Hill's operators, J. Differential Equations (1996), 366–384.
[18] Egorova I., Golinskii L., Discrete spectrum for complex perturbations of periodic Jacobi matrices, J. Difference Equ. Appl. (2005), 1185–1203, math.SP/0503627.
[19] Atkinson F.V., Discrete and continuous boundary problems, Academic Press, New York, 1964.
[20] Akhiezer N.I., The classical moment problem and some related questions in analysis, Hafner, New York, 1965.
[21] Berezanskii Yu.M., Expansion in eigenfunctions of selfadjoint operators, Translations of Mathematical Monographs, Vol. 17, American Mathematical Society, Providence, R.I., 1968.
[22] Nikishin E.M., Sorokin V.N., Rational approximations and orthogonality, Translations of Mathematical Monographs, Vol. 92, American Mathematical Society, Providence, R.I., 1991.
[23] Teschl G., Jacobi operators and completely integrable nonlinear lattices,