Inverse scattering on the half line for the matrix Schrödinger equation
Tuncay Aktosun
Department of Mathematics
University of Texas at Arlington
Arlington, TX 76019-0408, USA
[email protected]

Ricardo Weder†
Departamento de Física Matemática
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas
Universidad Nacional Autónoma de México
Apartado Postal 20-126, IIMAS-UNAM, México DF 01000, México
[email protected]

“Dedicated to the 95th birthday of Prof. Vladimir A. Marchenko”
Abstract: The matrix Schrödinger equation is considered on the half line with the general selfadjoint boundary condition at the origin described by two boundary matrices satisfying certain appropriate conditions. It is assumed that the matrix potential is integrable, is selfadjoint, and has a finite first moment. The corresponding scattering data set is constructed, and such scattering data sets are characterized by providing a set of necessary and sufficient conditions assuring the existence and uniqueness of the one-to-one correspondence between the scattering data set and the input data set containing the potential and boundary matrices. The work presented here provides a generalization of the classic result by Agranovich and Marchenko from the Dirichlet boundary condition to the general selfadjoint boundary condition.
Mathematics Subject Classification (2010):
Keywords: matrix Schrödinger equation, selfadjoint boundary condition, Marchenko method, matrix Marchenko method, Jost matrix, scattering matrix, inverse scattering, characterization

† Fellow Sistema Nacional de Investigadores

1. INTRODUCTION
Our aim in this paper is to describe the direct and inverse scattering problems for the half-line matrix Schrödinger operator with a selfadjoint boundary condition. In the direct problem we are given an input data set D consisting of an n × n matrix-valued potential V(x) and a selfadjoint boundary condition at x = 0, and our goal is to determine the corresponding scattering data set S consisting of the scattering matrix S(k) and the bound-state data. In the inverse problem, we are given a scattering data set S, and our goal is to determine the corresponding input data set D. We would like to have a one-to-one correspondence between an input data set D and a scattering data set S so that both the direct and inverse problems are well posed. Thus, some restrictions are needed on D and S for a one-to-one correspondence.

Since the scattering and inverse scattering problems in the scalar case, i.e. when n = 1, are well understood, it is desirable that the analysis in the matrix case reduce to the scalar case when n = 1. However, as elaborated in Section 8, the current formulation of the scattering and inverse scattering problems in the scalar case presents a problem. As a consequence, it becomes impossible to have a one-to-one correspondence between an input data set D and a scattering data set S, unless the Dirichlet and non-Dirichlet boundary conditions are analyzed separately and are not mixed with each other. Although not ideal, this could perhaps be done in the scalar case because a given boundary condition in the scalar case is either Dirichlet or non-Dirichlet. On the other hand, in the matrix case with n ≥ 2, a given boundary condition may be partly Dirichlet and partly non-Dirichlet, and this may be a result of constraints in a physical problem. It turns out that the proper way to deal with the issue is to modify the definition of the scattering matrix in such a way that it is defined the same way regardless of the boundary condition, i.e.
one should avoid defining the scattering matrix in one way in the Dirichlet case and in another way in the non-Dirichlet case.

There are four aspects related to the direct and inverse problems: existence, uniqueness, construction, and characterization. In the existence aspect of the direct problem, given D in a specified class we determine whether a corresponding S exists in some specific class. The uniqueness aspect is concerned with whether there exists a unique S corresponding to a given D, or whether two or more distinct sets S may correspond to the same D. The construction deals with the recovery of S from D. In the inverse problem, the existence aspect deals with the existence of some D corresponding to a given S belonging to a particular class. The uniqueness deals with the question whether D corresponding to a given S is unique, and the construction consists of the recovery of D from S. After the existence and uniqueness aspects in the direct and inverse problems are settled, one then turns the attention to the characterization problem, which consists of the identification of the class to which D belongs and the identification of the class to which S belongs so that there is a one-to-one correspondence between D and S in the respective classes. One also needs to ensure that the scattering data set S uniquely constructed from a given D in the direct problem in turn uniquely constructs the same D in the inverse problem.

A viable characterization in the literature for the matrix Schrödinger operator on the half line can be found in the seminal work by Agranovich and Marchenko [1]. However, the analysis in [1] is restricted to the Dirichlet boundary condition, and hence our study can be viewed as a generalization of the characterization in [1]. A characterization for the case of the general selfadjoint boundary condition was recently provided [9,10] by the authors of this paper, and in the current paper we present a summary of some of the results in [9,10].
For brevity, we do not include any proofs because such proofs are already available in [9,10].

We present the existence, uniqueness, reconstruction, and characterization issues related to the relevant direct and inverse problems under the assumption that D belongs to the Faddeev class and S belongs to the Marchenko class. The Faddeev class consists of input data sets D as in (2.1), where the potential V(x) and the boundary matrices A and B are as specified in Definition 2.1. The Marchenko class consists of scattering data sets S as in (3.12), where the scattering matrix S(k) and the bound-state data {κ_j, M_j}_{j=1}^N are as specified in Definition 4.1.

Let us mention the relevant references [14-16], where the direct and inverse problems for (2.2) are formally studied with the general selfadjoint boundary condition, not as in (2.5)-(2.7) but in a form equivalent to (2.5)-(2.7). However, the study in [14-16] lacks the large-k analysis beyond the leading term and also lacks the small-k analysis of the scattering data, which are both essential for the analysis of the relevant inverse problem. Thus, our study can also be considered as a complement to the work by Harmer [14-16]. In our paper, which is essentially a brief summary of [9,10], we rely on results from previous work [1,4,5,8,22-14], in particular [1,4,8,22].

Our paper is organized as follows. In Section 2 we introduce the matrix Schrödinger equation on the half line and describe the general selfadjoint boundary condition in terms of two constant matrices A and B. We then describe the Faddeev class of input data sets D consisting of the matrix potential V(x) and the boundary matrices A and B. In Section 3 we describe the solution to the direct problem, which uses an input data set D in the Faddeev class. We outline the construction of various quantities such as the Jost solution, the physical solution, the regular solution, the Jost matrix, the scattering matrix, and the bound-state data.
In Section 4 we introduce the Marchenko class of scattering data sets. We present the solution to the inverse problem by starting with a scattering data set S in the Marchenko class, and we describe the construction of the potential and the boundary matrices. In Section 5 we provide a characterization of the scattering data by showing that there is a one-to-one correspondence between the Faddeev class of input data sets D and the Marchenko class of scattering data sets S. In Section 6 we provide an equivalent description of the Marchenko class, and we provide an alternate characterization of the scattering data with the help of Levinson's theorem. In Section 7 we provide yet another description of the Marchenko class based on an approach utilizing the so-called generalized Fourier map. Finally, in Section 8 we contrast our definition of the Jost matrix and the scattering matrix with those definitions in the previous literature. We indicate the similarities and differences occurring when the boundary condition used is Dirichlet or non-Dirichlet. We elaborate on the resulting nonuniqueness issue if the scattering matrix is defined differently when the Dirichlet boundary condition is used, as commonly done in the previous literature.
2. THE MATRIX SCHRÖDINGER EQUATION
In this section we introduce the matrix Schrödinger equation (2.2), the matrix potential V(x), and the boundary matrices A and B used to describe the general selfadjoint boundary condition. We also indicate that the boundary matrices A and B can be uniquely specified modulo a postmultiplication by an invertible matrix. Our input data set D is defined as

D := {V, A, B}. (2.1)

Consider the matrix Schrödinger equation on the half line

−ψ′′ + V(x) ψ = k² ψ, x ∈ R⁺, (2.2)

where R⁺ := (0, +∞), the prime denotes the derivative with respect to the spatial coordinate x, k² is the complex-valued spectral parameter, the potential V(x) is an n × n selfadjoint matrix-valued function of x and belongs to class L¹₁(R⁺), and n is any positive integer. We assume that the value of n is fixed and is known. The selfadjointness of V(x) is expressed as

V(x) = V(x)†, x ∈ R⁺, (2.3)

where the dagger denotes the matrix adjoint (complex conjugate and matrix transpose). We equivalently say hermitian to describe a selfadjoint matrix. We remark that, unless we are in the scalar case, i.e. unless n = 1, the potential is not necessarily real valued. The condition V ∈ L¹₁(R⁺) means that each entry of the matrix V(x) is Lebesgue measurable on R⁺ and

∫₀^∞ dx (1 + x) |V(x)| < +∞, (2.4)

where |V(x)| denotes the matrix operator norm. Clearly, a matrix-valued function belongs to L¹₁(R⁺) if and only if each entry of that matrix belongs to L¹₁(R⁺).

The wavefunction ψ(k, x) appearing in (2.2) may be either an n × n matrix-valued function or a column vector with n components. We use C for the complex plane, R for the real line (−∞, +∞), R⁻ for the left-half line (−∞, 0), C⁺ for the open upper-half complex plane, C̄⁺ for C⁺ ∪ R, C⁻ for the open lower-half complex plane, and C̄⁻ for C⁻ ∪ R.

We are interested in studying (2.2) with an n × n selfadjoint matrix potential V(x) in L¹₁(R⁺) under the general selfadjoint boundary condition at x = 0.
There are various equivalent formulations [4,8,14-18] of the general selfadjoint boundary condition at x = 0, and we find it convenient to state it [4,8] in terms of two constant n × n matrices A and B as

−B† ψ(0) + A† ψ′(0) = 0, (2.5)

where A and B satisfy

−B† A + A† B = 0, (2.6)

A† A + B† B > 0. (2.7)

The condition in (2.7) means that the n × n matrix (A† A + B† B) is positive, which is also called positive definite. One can easily verify that (2.5)-(2.7) remain invariant if the boundary matrices A and B are replaced with AT and BT, respectively, where T is an arbitrary n × n invertible matrix. We express this fact by saying that the selfadjoint boundary condition (2.5) is uniquely determined by the matrix pair (A, B) modulo an invertible matrix T, and we equivalently state that (2.5) is equivalent to the knowledge of (A, B) modulo T. We remark that the positivity condition (2.7) is equivalent to having the rank of the 2n × n matrix [A; B], obtained by stacking A on top of B, equal to n.

In our analysis of the direct problem related to (2.2) and (2.5), we assume that our input data set D belongs to the Faddeev class defined below.

Definition 2.1
The input data set D given in (2.1) is said to belong to the Faddeev class if the potential V(x) satisfies (2.3) and (2.4) and the boundary matrices A and B appearing in (2.5) satisfy (2.6) and (2.7). In other words, D belongs to the Faddeev class if the n × n matrix-valued potential V(x) appearing in (2.2) is hermitian and belongs to class L¹₁(R⁺) and the constant n × n matrices A and B appearing in (2.5) satisfy (2.6) and (2.7).

It is possible to formulate the general selfadjoint boundary condition by using a unique n × n constant matrix instead of using the pair of matrices A and B appearing in (2.5)-(2.7). For example, in [15] a unitary n × n matrix U is used to describe the selfadjoint boundary condition as

i (U† − I) ψ(0) + (1/2) (U† + I) ψ′(0) = 0,

where I is the n × n identity matrix. Without loss of any generality, one could also use [4] a diagonal representation of the selfadjoint boundary condition by choosing the matrices A and B as

A = diag{−sin θ₁, ..., −sin θ_n}, B = diag{cos θ₁, ..., cos θ_n}, (2.8)

where the θ_j are some real constants in the interval (0, π]. In fact, through the representation (2.8), one can directly identify [5] the three integers n_D, n_N, and n_M, where n_D is the number of θ_j-values equal to π, n_N is the number of θ_j-values equal to π/2, and n_M is the number of θ_j-values in the union (0, π/2) ∪ (π/2, π). One can informally call n_D the number of Dirichlet boundary conditions, n_N the number of Neumann boundary conditions, and n_M the number of mixed boundary conditions.

We find it more convenient to write the general selfadjoint boundary condition in terms of the two constant n × n matrices A and B, with the understanding that A and B are unique up to a postmultiplication by an invertible matrix T.
For example, the so-called Kirchhoff boundary condition is easier to recognize if expressed in terms of A and B rather than written in terms of a single unique n × n constant matrix.
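The diagonal representation (2.8) can be exercised with a few lines of linear algebra. The sketch below, with illustrative angle choices, verifies the selfadjointness conditions (2.6) and (2.7), checks the invariance under postmultiplication by an invertible T, and reads off the counts n_D, n_N, n_M.

```python
import numpy as np

# Illustrative angles: theta = pi gives a Dirichlet row, pi/2 a Neumann row,
# and any other value in (0, pi) a mixed row.  These values are arbitrary.
theta = np.array([np.pi, np.pi / 2, np.pi / 3])

A = np.diag(-np.sin(theta))
B = np.diag(np.cos(theta))

# Selfadjointness conditions (2.6) and (2.7):
#   -B†A + A†B = 0   and   A†A + B†B > 0 (positive definite).
assert np.allclose(-B.conj().T @ A + A.conj().T @ B, 0)
assert np.all(np.linalg.eigvalsh(A.conj().T @ A + B.conj().T @ B) > 0)

# (2.5)-(2.7) are invariant under (A, B) -> (AT, BT) for invertible T.
T = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
A2, B2 = A @ T, B @ T
assert np.allclose(-B2.conj().T @ A2 + A2.conj().T @ B2, 0)

# Counting the boundary-condition types from the angles:
n_D = np.sum(theta == np.pi)        # Dirichlet
n_N = np.sum(theta == np.pi / 2)    # Neumann
n_M = len(theta) - n_D - n_N        # mixed
print(n_D, n_N, n_M)                # 1 1 1
```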
3. THE SOLUTION TO THE DIRECT PROBLEM
In this section we summarize the solution to the direct scattering problem associated with (2.2) and (2.5) when the related input data set D given in (2.1) belongs to the Faddeev class. In other words, we start with an n × n hermitian potential V(x) belonging to L¹₁(R⁺) and a pair of constant boundary matrices A and B satisfying (2.6) and (2.7), and we construct the relevant quantities leading to the scattering data set S. The unique construction of the scattering data set S also enables us to determine the basic properties of S. The steps of the construction are given below:

(a) When our input data set D belongs to the Faddeev class, regardless of the boundary matrices A and B, the matrix Schrödinger equation (2.2) has an n × n matrix-valued solution, usually called the Jost solution and denoted by f(k, x), satisfying the asymptotic condition

f(k, x) = e^{ikx} [I + o(1)], x → +∞. (3.1)

The solution f(k, x) is uniquely determined by the potential V(x). For each fixed x ∈ [0, +∞), the Jost solution f(k, x) has an extension from k ∈ R to k ∈ C̄⁺, and such an extension is continuous in k ∈ C̄⁺ and analytic in k ∈ C⁺ and has the asymptotic behavior

e^{−ikx} f(k, x) = I + o(1), k → ∞ in C̄⁺.

(b) In terms of the boundary matrices A and B in D and the Jost solution f(k, x) obtained as in (a), we construct the Jost matrix J(k) as

J(k) := f(−k*, 0)† B − f′(−k*, 0)† A, k ∈ R, (3.2)

where the asterisk denotes complex conjugation. We remark that J(k) is an n × n matrix-valued function of k. The redundant appearance of k* instead of k in (3.2) when k ∈ R is useful in extending the Jost matrix analytically from k ∈ R to k ∈ C⁺. We recall that the boundary matrices A and B can be postmultiplied by any invertible matrix T without affecting (2.5)-(2.7), and hence the definition given in (3.2) yields the Jost matrix J(k), which is unique up to a postmultiplication by T.
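The T-dependence just described can be checked by plain linear algebra: under (A, B) → (AT, BT), the pattern of (3.2) sends J(k) to J(k)T, while the combination −J(−k) J(k)^{−1} used in the next step is unaffected. In the sketch below the matrices are generic random stand-ins for the boundary values of the Jost solution at one fixed k, not the result of solving (2.2); only the algebraic structure of (3.2) is being tested.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Generic stand-ins for f(-k*,0)† and f'(-k*,0)† at k and at -k (no ODE is
# solved; only the T-dependence of (3.2) is being illustrated).
fk, fpk, fmk, fpmk = (rng.standard_normal((n, n))
                      + 1j * rng.standard_normal((n, n)) for _ in range(4))
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))            # generically invertible

def jost(fb, fpb, A, B):
    # The pattern of (3.2): J = f(-k*,0)† B - f'(-k*,0)† A
    return fb @ B - fpb @ A

Jk, Jmk = jost(fk, fpk, A, B), jost(fmk, fpmk, A, B)
Jk2, Jmk2 = jost(fk, fpk, A @ T, B @ T), jost(fmk, fpmk, A @ T, B @ T)

assert np.allclose(Jk2, Jk @ T)            # J picks up a factor T on the right,
S1 = -Jmk @ np.linalg.inv(Jk)
S2 = -Jmk2 @ np.linalg.inv(Jk2)
assert np.allclose(S1, S2)                 # which cancels in -J(-k) J(k)^{-1}
```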
(c) In terms of the Jost matrix J(k), obtained from D as indicated in (3.2), we construct the scattering matrix S(k) as

S(k) := −J(−k) J(k)^{−1}, k ∈ R. (3.3)

We remark that S(k) is an n × n matrix-valued function of k. Even though the Jost matrix in (3.2) is uniquely determined up to a postmultiplication by an invertible matrix T, from (3.3) we see that the scattering matrix S(k) is uniquely determined irrespective of T.

(d) In terms of the Jost solution f(k, x) obtained in (a) and the scattering matrix S(k) obtained in (c), we construct the so-called physical solution to (2.2). The physical solution, denoted by Ψ(k, x), is constructed as

Ψ(k, x) := f(−k, x) + f(k, x) S(k), k ∈ R. (3.4)

We remark that Ψ(k, x) is an n × n matrix-valued function of k and x. The physical solution, as the name implies, has the physical interpretation of a scattering solution; namely, the initial n × n matrix-valued plane wave e^{−ikx} I sent from x = +∞ onto the potential yields the n × n matrix-valued scattered wave S(k) e^{ikx} at x = +∞ with the amplitude S(k). This interpretation is seen by using (3.1) in (3.4), i.e. for each fixed k ∈ R we get

Ψ(k, x) = e^{−ikx} + S(k) e^{ikx} + o(1), x → +∞.

We also remark that each column of the physical solution satisfies the boundary condition (2.5), and hence the physical solution itself satisfies (2.5) and we have

−B† Ψ(k, 0) + A† Ψ′(k, 0) = 0. (3.5)

Even though the boundary matrices A and B appearing in (2.5)-(2.7) can be modified by a postmultiplication by an invertible matrix T, the definition given in (3.4) uniquely determines the physical solution irrespective of T. In the definition of the physical solution (3.4), one could multiply the right-hand side of (3.4) by a scalar function of k without affecting the physical interpretation of a physical solution. Nevertheless, we prefer to use (3.4) to define the physical solution in a unique manner.

(e) Instead of constructing the physical solution via (3.4), one can alternatively construct it in an equivalent way as follows. When our input data set D belongs to the Faddeev class, there exists [4] an n × n matrix-valued solution to (2.2), called the regular solution and denoted by ϕ(k, x), satisfying the initial conditions

ϕ(k, 0) = A, ϕ′(k, 0) = B.

The solution ϕ(k, x) is uniquely determined by the input data set D given in (2.1). We remark that ϕ(k, x) depends on the choice of A and B. The solution ϕ(k, x) is known as the regular solution because it is entire in k for each fixed x ∈ R⁺. In terms of the regular solution ϕ(k, x) and the Jost matrix J(k) appearing in (3.2) we can introduce the physical solution as

Ψ(k, x) = −2ik ϕ(k, x) J(k)^{−1}. (3.6)

One can show that the expressions given in (3.4) and (3.6) are equivalent, and this can be shown by using the relationship given in (3.5) of [4], i.e.

ϕ(k, x) = (1/(2ik)) [f(k, x) J(−k) − f(−k, x) J(k)], (3.7)

where we recall that f(k, x) is the Jost solution appearing in (3.1).

(f) When the input data set D belongs to the Faddeev class, the Jost matrix J(k) constructed as in (3.2) has an analytic extension from k ∈ R to k ∈ C⁺ and its determinant det[J(k)] is nonzero in C̄⁺ except perhaps at a finite number of k-values on the positive imaginary axis. Let us use N to denote the number of distinct zeros of det[J(k)] in C⁺ without counting multiplicities of those zeros, by realizing that the integer N may be zero for some input data sets D. Let us use N distinct positive numbers κ_j so that the zeros of det[J(k)] occur at k = iκ_j, and use m_j to denote the multiplicity of the zero of det[J(k)] at k = iκ_j. Thus, the nonnegative integer N, the set of distinct positive values {κ_j}_{j=1}^N, and the set of positive integers {m_j}_{j=1}^N are all uniquely determined by the input data set D. Each m_j satisfies 1 ≤ m_j ≤ n. It is appropriate to call N the number of bound states without counting the multiplicities. The nonnegative integer 𝒩 defined as

𝒩 := Σ_{j=1}^{N} m_j, (3.8)

can be referred to as the number of bound states including the multiplicities.

(g) Having determined the sets {κ_j}_{j=1}^N and {m_j}_{j=1}^N, let us use Ker[J(iκ_j)†] to denote the kernel of the n × n constant matrix J(iκ_j)†.
Next, we construct the orthogonal projection matrices P_j onto Ker[J(iκ_j)†] for j = 1, ..., N. The n × n matrices P_j are hermitian and idempotent, i.e.

P_j† = P_j, P_j² = P_j, j = 1, ..., N.

Furthermore, the rank of P_j is equal to m_j. We then construct the constant n × n matrices A_j, B_j, and M_j defined as

A_j := ∫₀^∞ dx f(iκ_j, x)† f(iκ_j, x), j = 1, ..., N,

B_j := (I − P_j) + P_j A_j P_j, j = 1, ..., N, (3.9)

M_j := B_j^{−1/2} P_j, j = 1, ..., N, (3.10)

where f(k, x) is the Jost solution constructed in (a). We remark that when D belongs to the Faddeev class, the matrices B_j given in (3.9) are hermitian and positive definite, and hence the matrices B_j^{−1/2} are well defined as positive definite matrices. It is known that each projection matrix P_j has rank m_j, and it follows from (3.10) that each matrix M_j is hermitian, nonnegative, and has rank m_j. The matrices M_j are usually called the bound-state normalization matrices.

(h) When the input data set D belongs to the Faddeev class, at each k = iκ_j with j = 1, ..., N, the Schrödinger equation (2.2) has m_j linearly independent column vector-valued solutions, where each of those column vector solutions is square integrable in x ∈ R⁺. It is possible to rearrange those m_j linearly independent column-vector solutions to form an n × n matrix Ψ_j(x), in such a way that Ψ_j(x) can be uniquely constructed as

Ψ_j(x) := f(iκ_j, x) M_j, j = 1, ..., N, (3.11)

where M_j is the n × n normalization matrix defined in (3.10). We can refer to Ψ_j(x) as the normalized bound-state matrix solution to (2.2) at k = iκ_j. We remark that each Ψ_j(x) satisfies the boundary condition (2.5) and has rank equal to m_j.
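The algebra of (3.9)-(3.10) can be exercised numerically with stand-in data. The sketch below builds a rank-m_j orthogonal projection P_j and a positive-definite stand-in for A_j (no differential equation is solved; all matrices are illustrative), and then verifies that B_j is positive definite and that M_j = B_j^{−1/2} P_j comes out hermitian, nonnegative, and of rank m_j, as stated above. The hermiticity rests on the fact that B_j commutes with P_j by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m_j = 3, 2

# Stand-in orthogonal projection P_j of rank m_j (in the paper, onto
# Ker J(i kappa_j)†; here Q is just an arbitrary orthonormal pair of columns).
Q, _ = np.linalg.qr(rng.standard_normal((n, m_j)))
P = Q @ Q.conj().T

# Stand-in positive-definite matrix playing the role of
# A_j = ∫ f(i kappa_j, x)† f(i kappa_j, x) dx.
C = rng.standard_normal((n, n))
Aj = C.T @ C + np.eye(n)

# B_j = (I - P_j) + P_j A_j P_j is hermitian positive definite, so B_j^{-1/2}
# is well defined via the eigendecomposition; M_j = B_j^{-1/2} P_j as in (3.10).
Bj = (np.eye(n) - P) + P @ Aj @ P
w, U = np.linalg.eigh(Bj)
Mj = (U @ np.diag(w ** -0.5) @ U.conj().T) @ P

assert np.all(w > 0)                              # B_j > 0
assert np.allclose(Mj, Mj.conj().T)               # M_j hermitian
assert np.all(np.linalg.eigvalsh(Mj) > -1e-10)    # M_j nonnegative
assert np.linalg.matrix_rank(Mj) == m_j           # rank m_j
```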
Having constructed all the relevant quantities starting with the input data set D, we now define the scattering data set S as

S := {S, {κ_j, M_j}_{j=1}^N}, (3.12)

where S denotes the scattering matrix S(k) for k ∈ R constructed as in (3.3), the N distinct positive constants κ_j are as described in (f), and the N hermitian, nonnegative, rank-m_j matrices M_j are as in (3.10).
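Before turning to the inverse problem, a scalar (n = 1) toy example may help fix ideas. The rational function below is an illustrative unitary choice, not derived from a specific potential; the checks confirm the symmetry relations S(−k) = S(k)† = S(k)^{−1} on the real line, which reappear in Section 4 as part of the conditions imposed on a scattering data set.

```python
import numpy as np

kappa = 1.0

def S(k):
    # Illustrative scalar (n = 1) candidate; in the scalar case the dagger
    # reduces to complex conjugation.
    return (k + 1j * kappa) / (k - 1j * kappa)

k = np.linspace(-10.0, 10.0, 401)
assert np.allclose(S(-k), np.conj(S(k)))    # S(-k) = S(k)†
assert np.allclose(S(-k), 1.0 / S(k))       # S(-k) = S(k)^{-1}
assert np.allclose(np.abs(S(k)), 1.0)       # unitarity on the real line
```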
4. THE SOLUTION TO THE INVERSE PROBLEM
In this section, given the scattering data set S in (3.12), our goal is to construct the input data set D given in (2.1), with the understanding that the potential V(x) is uniquely constructed and that the boundary matrices A and B are uniquely constructed up to a postmultiplication by an invertible matrix. The construction is given when S belongs to the so-called Marchenko class. We first present the construction and provide the definition of the Marchenko class at the end of the construction procedure. Later in the section we show that the Marchenko class can also be described in various equivalent ways.

We summarize the steps in the construction of D from S as follows:

(a) From the large-k asymptotics of the scattering matrix S(k), we determine the constant n × n matrix S_∞ via

S_∞ := lim_{k → ±∞} S(k), (4.1)

and the constant n × n matrix G via

S(k) = S_∞ + G/(ik) + o(1/k), k → ±∞. (4.2)

(b) Using S(k) and S_∞, we uniquely construct the n × n matrix F_s(y) via

F_s(y) := (1/(2π)) ∫_{−∞}^∞ dk [S(k) − S_∞] e^{iky}, y ∈ R. (4.3)

(c) Using F_s(y) constructed as in (4.3) and the bound-state data {κ_j, M_j}_{j=1}^N appearing in S, we construct the n × n matrix F(y) via

F(y) := F_s(y) + Σ_{j=1}^{N} M_j² e^{−κ_j y}, y ∈ R⁺. (4.4)

Note that we have F_s(y) for y ∈ R, but we need F(y) only for y ∈ R⁺.

(d) We use the matrix F(y) given in (4.4) as input to the Marchenko integral equation

K(x, y) + F(x + y) + ∫_x^∞ dz K(x, z) F(z + y) = 0, 0 ≤ x < y, (4.5)

and uniquely solve (4.5) to obtain K(x, y) for 0 ≤ x < y < +∞. We remark that K(x, y) is continuous in the region 0 ≤ x < y < +∞. We note that K(0, 0), which is used to denote K(0, 0⁺), is well defined as a constant n × n matrix.

(e) Having obtained K(x, y) for 0 ≤ x < y < +∞ uniquely from S as described in (d), we construct the potential V(x) via

V(x) = −2 dK(x, x)/dx, x ∈ R⁺. (4.6)

By K(x, x) we mean K(x, x⁺).
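The steps (c)-(e) can be exercised numerically in the simplest scalar reflectionless setting, where F_s ≡ 0 and the Marchenko kernel reduces to a single bound-state term, so that a separable ansatz yields a closed-form solution to compare against. All parameters and grid sizes below are illustrative, and the discretization is a bare-bones trapezoid scheme, not a method taken from [9,10]; the constant c plays the role of the normalization M₁.

```python
import numpy as np

# Scalar (n = 1) reflectionless sketch: S(k) ≡ 1, so S_∞ = 1, F_s ≡ 0, and
# F(y) in (4.4) reduces to c^2 e^{-kappa y}.  Illustrative values:
kappa, c = 1.0, 1.5
F = lambda y: c**2 * np.exp(-kappa * np.asarray(y))

def marchenko_row(x, z_max=20.0, m=1500):
    """Solve K(x,y) + F(x+y) + int_x^inf dz K(x,z) F(z+y) = 0 for y >= x
    by trapezoidal discretization; returns the grid and K(x, .) on it."""
    z = np.linspace(x, x + z_max, m)
    h = z[1] - z[0]
    w = np.full(m, h); w[0] = w[-1] = h / 2          # trapezoid weights
    lhs = np.eye(m) + F(z[:, None] + z[None, :]) * w[None, :]
    return z, np.linalg.solve(lhs, -F(x + z))

# For this one-term kernel, the ansatz K(x,y) = g(x) e^{-kappa y} gives
# g(x) = -c^2 e^{-kappa x} / (1 + c^2 e^{-2 kappa x} / (2 kappa)).
x0 = 0.3
y, K = marchenko_row(x0)
g = -c**2 * np.exp(-kappa * x0) / (1 + c**2 * np.exp(-2 * kappa * x0) / (2 * kappa))
err = np.max(np.abs(K - g * np.exp(-kappa * y)))
print(err)   # small discretization error
```

Once K(x, y) is available on a grid of x-values, the potential follows from (4.6) by numerical differentiation of K(x, x).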
We remark that, in general, V(x) constructed as in (4.6) may exist only a.e. and it may not be continuous in x.

(f) Having constructed the potential V(x) from the scattering data set S, we turn our attention to the construction of the boundary matrices A and B appearing in (2.1). We recall that we need to construct A and B uniquely, where the uniqueness is understood in the sense of being up to a postmultiplication by an arbitrary invertible n × n matrix T. Such a construction is carried out as follows. We use the already-constructed n × n constant matrices S_∞, G, and K(0, 0) as input in the linear, homogeneous algebraic system

(I − S_∞) A = 0,
(I + S_∞) B = [G − S_∞ K(0, 0) − K(0, 0) S_∞] A, (4.7)

and determine A and B as the general solution to (4.7). Such a general solution is equivalent to finding A and B satisfying (4.7) in such a way that the rank of the 2n × n matrix [A; B], obtained by stacking A on top of B, is equal to n.

Having constructed D from the scattering data set S, we can construct various auxiliary quantities relevant to the corresponding direct and inverse scattering problems as follows.

(g) Having constructed the solution K(x, y) to the Marchenko integral equation (4.5), we obtain the Jost solution f(k, x) via

f(k, x) = e^{ikx} I + ∫_x^∞ dy K(x, y) e^{iky}. (4.8)

(h) Having the Jost solution f(k, x) and the scattering matrix S(k) at hand, we construct the physical solution Ψ(k, x) as in (3.4).

(i) Having the Jost solution f(k, x) and the boundary matrices A and B at hand, we construct the Jost matrix J(k) as in (3.2). Note that the constructed A and B are unique up to a postmultiplication by an arbitrary invertible matrix T, and hence the constructed Jost matrix J(k) is also unique up to a postmultiplication by T.

(j) Having the Jost solution f(k, x) and the Jost matrix J(k) at hand, we construct the regular solution ϕ(k, x) as in (3.7). Since the constructed A and B as well as the constructed J(k) are each unique up to a postmultiplication by an arbitrary invertible matrix T, the constructed regular solution ϕ(k, x) is also unique up to a postmultiplication by T. For each particular choice of the pair (A, B), we have a particular choice of the regular solution.

(k) Having the Jost solution f(k, x) and the bound-state data {κ_j, M_j}_{j=1}^N appearing in S, we construct the normalized bound-state matrix solutions Ψ_j(x) as in (3.11).

Next we define the Marchenko class of scattering data sets S. The importance of the Marchenko class is that there exists [9,10] a one-to-one correspondence between the Faddeev class of input data sets D and the Marchenko class of scattering data sets S.

Definition 4.1
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R. We say that S belongs to the Marchenko class if S satisfies the following four conditions, listed below as (1), (2), (3a), (4a):

(1) The scattering matrix S(k) satisfies

S(−k) = S(k)† = S(k)^{−1}, k ∈ R, (4.9)

and there exist constant n × n matrices S_∞ and G in such a way that (4.2) holds. Furthermore, the n × n matrix quantity F_s(y) defined in (4.3) is bounded in y ∈ R and integrable in y ∈ R⁺.

(2) For the matrix F_s(y) defined in (4.3), the derivative F_s′(y) exists a.e. for y ∈ R⁺ and it satisfies

∫₀^∞ dy (1 + y) |F_s′(y)| < +∞, (4.10)

where we recall that the norm in the integrand of (4.10) is the operator norm of a matrix.

(3a) The physical solution Ψ(k, x) satisfies the boundary condition (2.5), i.e. it satisfies (3.5). We clarify this property as follows: The scattering matrix appearing in S yields a particular n × n matrix-valued solution Ψ(k, x) to (2.2) known as the physical solution given in (3.4) and also yields a pair of matrices A and B (modulo an invertible matrix) satisfying (2.6) and (2.7). Our statement (3a) is equivalent to saying that (2.5) is satisfied if we use in (2.5) the quantities Ψ(k, x), A, and B constructed from S.

(4a) The Marchenko equation (4.5) at x = 0 given by

K(0, y) + F(y) + ∫₀^∞ dz K(0, z) F(z + y) = 0, y ∈ R⁺,

has a unique solution K(0, y) in L¹(R⁺). Here, F(y) is the n × n matrix related to F_s(y) as in (4.4).

Let us mention a slight drawback in the definition of the Marchenko class given in Definition 4.1.
The property (3a) cannot be checked from the scattering data set S directly because it requires the construction of the corresponding boundary matrices A and B as well as the physical solution Ψ(k, x). It is already known [9,10] that one can replace (3a) by an equivalent pair of conditions, listed as (IIIa) and (Vc), as indicated in the next theorem.

Theorem 4.2
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R and that 𝒩 appearing in (3.8) is zero. The scattering data set S belongs to the Marchenko class if and only if S satisfies the five conditions, three of which are listed as (1), (2), and (4a) in Definition 4.1, and the two additional conditions (IIIa) and (Vc) are given by:

(IIIa) For the matrix-valued function F_s(y) given in (4.3), the derivative F_s′(y) for y ∈ R⁻ can be written as a sum of two matrix-valued functions, one of which is integrable and the other is square integrable in y ∈ R⁻. Furthermore, the only solution X(y), which is a row vector with n square-integrable components in y ∈ R⁻, to the linear homogeneous integral equation

−X(y) + ∫_{−∞}^0 dz X(z) F_s(z + y) = 0, y ∈ R⁻,

is the trivial solution X(y) ≡ 0.

(Vc) The linear homogeneous integral equation

X(y) + ∫₀^∞ dz X(z) F_s(z + y) = 0, y ∈ R⁺, (4.11)

has precisely 𝒩 linearly independent row-vector solutions, with n components which are integrable in y ∈ R⁺. Here F_s(y) is the matrix defined in (4.3) and 𝒩 is the nonnegative integer readily constructed from S as in (3.8). If 𝒩 = 0, it is understood that the only solution in L¹(R⁺) to (4.11) is the trivial solution X(y) ≡ 0.
We remark that Theorem 4.2 is a special case of Theorem 4.5, but we still prefer to state it as a separate result. This is because Theorem 4.2 is closely related to the characterization result stated by Agranovich and Marchenko in the Dirichlet case on pp. 4-5 of their manuscript [1].

The next theorem shows that in the description of the Marchenko class specified in Definition 4.1, we can replace the condition (3a) with another equivalent condition.

Theorem 4.3
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R. The scattering data set S belongs to the Marchenko class if and only if S satisfies the four conditions, three of which are listed as (1), (2), and (4a) in Definition 4.1, and one additional condition (3b) replacing (3a), which is given by:

(3b) The Jost matrix J(k) satisfies

J(−k) + S(k) J(k) = 0, k ∈ R. (4.12)

We clarify this property as follows: The scattering matrix S(k) given in S yields a Jost matrix J(k) constructed as in (3.2), unique up to a postmultiplication by an invertible matrix. Using the scattering matrix S(k) given in S and the Jost matrix constructed from S(k), we find that (4.12) is satisfied.

Let us use L̂¹(C⁺) to denote the Banach space of all complex-valued functions ξ(k) that are analytic in k ∈ C⁺ in such a way that there exists a corresponding function η(x) belonging to L¹(R⁺) satisfying

ξ(k) = ∫₀^∞ dx η(x) e^{ikx}.

We remark that if ξ(k) belongs to L̂¹(C⁺), then ξ(k) is continuous in k ∈ R and it satisfies ξ(k) = o(1) as k → ∞ in C̄⁺. If ξ(k) is vector valued or matrix valued instead of being scalar valued, then it belongs to L̂¹(C⁺) if and only if each entry of ξ(k) belongs to L̂¹(C⁺).
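A concrete scalar instance of the class L̂¹ may help: taking η(x) = e^{−x}, which belongs to L¹(R⁺), yields ξ(k) = 1/(1 − ik), analytic in the upper half plane and vanishing as k → ∞ there. The short check below (with illustrative grid parameters) compares a truncated quadrature of the defining integral against this closed form.

```python
import numpy as np

def xi_quadrature(k, x_max=60.0, m=200001):
    # xi(k) = int_0^inf dx e^{-x} e^{ikx}, truncated at x_max and computed by
    # the trapezoid rule; x_max and m are arbitrary accuracy choices.
    x = np.linspace(0.0, x_max, m)
    f = np.exp(-x) * np.exp(1j * k * x)
    h = x[1] - x[0]
    return h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

# Closed form: xi(k) = 1/(1 - ik), valid for Im k > -1, hence in particular
# on the real line and in the upper half plane.
for k in [0.0, 1.0, -3.0, 2.0 + 1.0j]:
    assert abs(xi_quadrature(k) - 1.0 / (1.0 - 1j * k)) < 1e-6
print(abs(xi_quadrature(10.0)))   # already small, consistent with xi = o(1)
```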
We remark that the result of Theorem 4.3 is included in the next theorem presented. However, we have stated Theorem 4.3 separately in order to emphasize the importance of (IIIb) of Theorem 4.3 and its connection to (IIIa) of Definition 4.1.

Theorem 4.4
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R. The scattering data set S belongs to the Marchenko class if and only if S satisfies the four conditions (I), (II), (III), and (IV), where (III) can be either one of (IIIa) and (IIIb); and (IV) can be any one of (IVa), (IVb), (IVc), (IVd), and (IVe). Note that (I), (II), (IIIa), (IVa) are listed in Definition 4.1; (IIIb) is listed in Theorem 4.3; and the remaining conditions (IVb), (IVc), (IVd), (IVe) are listed below:

(IVb) The only solution in L¹(R+) to the homogeneous Marchenko integral equation at x = 0 given by

K(0, y) + ∫₀^∞ dz K(0, z) F(z + y) = 0,  y ∈ R+,  (4.13)

is the trivial solution K(0, y) ≡ 0. Note that (4.13) is the homogeneous version at x = 0 of the Marchenko equation given by (4.5). We remark that F(y) appearing in (4.13) is the quantity defined in (4.4).

(IVc) The only integrable solution X(y), which is a row vector with n integrable components in y ∈ R+, to the linear homogeneous integral equation

X(y) + ∫₀^∞ dz X(z) F(z + y) = 0,  y ∈ R+,  (4.14)

is the trivial solution X(y) ≡ 0. Again, we recall that F(y) is the quantity defined in (4.4).

(IVd) The only solution X̂(k) to the system

{ X̂(iκ_j) M_j = 0,  j = 1, . . . , N,
  X̂(−k) + X̂(k) S(k) = 0,  k ∈ R,  (4.15)

where X̂(k) is a row vector with n components belonging to the class L̂¹(C+), is the trivial solution X̂(k) ≡ 0.

(IVe) The only solution h(k) to the system

{ M_j h(iκ_j) = 0,  j = 1, . . .
, N,
  h(−k) + S(k) h(k) = 0,  k ∈ R,  (4.16)

where h(k) is a column vector with n components belonging to the class L̂¹(C+), is the trivial solution h(k) ≡ 0.

We use H²(C±) to denote the Hardy space of all complex-valued functions ξ(k) that are analytic in k ∈ C± with a finite norm defined as

||ξ||_{H²(C±)} := sup_{ρ>0} [ ∫_{−∞}^∞ dα |ξ(α ± iρ)|² ]^{1/2}.

Thus, ξ(k) is square integrable along all lines in C± that are parallel to the real axis. The value of ξ(k) for k ∈ R is defined to be the non-tangential limit of ξ(k ± iρ) as ρ → 0+. Such a non-tangential limit exists a.e. in k ∈ R and belongs to L²(R). It is known that ξ(k) belongs to H²(C+) if and only if there exists a corresponding function η(x) belonging to L²(R+) in such a way that

ξ(k) = ∫₀^∞ dx η(x) e^{ikx}.

Similarly, ξ(k) belongs to H²(C−) if and only if there exists a corresponding function η(x) belonging to L²(R−) in such a way that

ξ(k) = ∫_{−∞}^0 dx η(x) e^{ikx}.

If ξ(k) is vector valued or matrix valued instead of being scalar valued, then it belongs to H²(C±) if and only if each entry of ξ(k) belongs to H²(C±).

The next theorem shows that in the equivalent description of the Marchenko class specified in Theorem 4.2, we can replace the condition (
IIIa) with one of two other equivalent conditions and we can also replace the condition (Vc) with any one of various other equivalent conditions.

Theorem 4.5 Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R and that N appearing in (3.8) is zero. The scattering data set S belongs to the Marchenko class if and only if S satisfies the five conditions (I), (II), (III), (IV), and (V), where (III) represents any one of the three conditions (IIIa), (IIIb), (IIIc); (IV) represents any one of the five conditions (IVa), (IVb), (IVc), (IVd), (IVe); and (V) represents any one of the eight conditions (Va), (Vb), (Vc), (Vd), (Ve), (Vf), (Vg), (Vh). We remark that (I), (II), and (IVa) are listed in Definition 4.1; (IIIa) and (Vc) are listed in Theorem 4.2; (IVb), (IVc), (IVd), (IVe) are listed in Theorem 4.4; and the remaining conditions are listed below:

(IIIb) For the matrix-valued function F_s(y) given in (4.3), the derivative F_s′(y) for y ∈ R− can be written as a sum of two matrix-valued functions, one of which is integrable and the other is square integrable in y ∈ R−. Furthermore, the only solution X̂(k) to the homogeneous Riemann-Hilbert problem

−X̂(−k) + X̂(k) S(k) = 0,  k ∈ R,

where X̂(k) is a row vector with n components belonging to the class H²(C−), is the trivial solution X̂(k) ≡ 0.

(IIIc) For the matrix-valued function F_s(y) given in (4.3), the derivative F_s′(y) for y ∈ R− can be written as a sum of two matrix-valued functions, one of which is integrable and the other is square integrable in y ∈ R−.
Furthermore, the only solution h(k) to the homogeneous Riemann-Hilbert problem

−h(−k) + S(k) h(k) = 0,  k ∈ R,  (4.17)

where h(k) is a column vector with n components belonging to the class H²(C−), is the trivial solution h(k) ≡ 0.

(Va) Each of the N normalized bound-state matrix solutions Ψ_j(x) constructed as in (3.11) satisfies the boundary condition (2.5), i.e.

−B† Ψ_j(0) + A† Ψ′_j(0) = 0,  j = 1, . . . , N.  (4.18)

We clarify this statement as follows. The scattering matrix S(k) and the bound-state data {κ_j, M_j}_{j=1}^N given in S yield n × n matrices Ψ_j(x) as in (3.11), where each Ψ_j(x) is a solution to (2.2) at k = iκ_j. As stated in (IIIa) of Definition 4.1, the scattering matrix given in S yields a pair of matrices A and B (modulo an invertible matrix) satisfying (2.6) and (2.7). The statement (Va) is equivalent to saying that (2.5) is satisfied if we use in (2.5) the quantities Ψ_j(x), A, and B constructed from the quantities appearing in S. If N = 0, then the condition (4.18) is absent.

(Vb) The normalization matrices M_j appearing in S satisfy

J(iκ_j)† M_j = 0,  j = 1, . . . , N.  (4.19)

We clarify this condition as follows. As indicated in (IIIb) of Theorem 4.3, the scattering matrix S(k) given in S yields a Jost matrix J(k). Using in (4.19) the matrix M_j given in S and the Jost matrix constructed from S(k), at each κ_j-value listed in S the matrix equation (4.19) holds. If N = 0, then the condition (4.19) is absent.

(Vd) The homogeneous Riemann-Hilbert problem given by

X̂(−k) + X̂(k) S(k) = 0,  k ∈ R,  (4.20)

has precisely N linearly independent row-vector solutions with n components in L̂¹(C+). Here, N is the nonnegative integer given in (3.8). If N = 0, it is understood that the only solution in L̂¹(C+) to (4.20) is the trivial solution X̂(k) ≡ 0.
(Ve) The homogeneous Riemann-Hilbert problem given by

h(−k) + S(k) h(k) = 0,  k ∈ R,  (4.21)

has precisely N linearly independent column-vector solutions with n components in L̂¹(C+), where N is the nonnegative integer given in (3.8). If N = 0, it is understood that the only solution in L̂¹(C+) to (4.21) is the trivial solution h(k) ≡ 0.

(Vf) The integral equation (4.11) has precisely N linearly independent row-vector solutions X(y) with n components in L¹(R+), where N is the nonnegative integer given in (3.8). If N = 0, it is understood that the only solution in L¹(R+) to (4.11) is the trivial solution X(y) ≡ 0. We remark that the matrix F_s(y) appearing in the kernel of (4.11) is defined in (4.3).

(Vg) The homogeneous Riemann-Hilbert problem given in (4.20) has precisely N linearly independent row-vector solutions X̂(k) with n components in H²(C+). Here, N is the nonnegative integer given in (3.8). If N = 0, it is understood that the only solution in H²(C+) to (4.20) is the trivial solution X̂(k) ≡ 0.

(Vh) The homogeneous Riemann-Hilbert problem given in (4.21) has precisely N linearly independent column-vector solutions with n components in H²(C+). Here, N is the nonnegative integer given in (3.8). If N = 0, it is understood that the only solution in H²(C+) to (4.21) is the trivial solution h(k) ≡ 0.
5. THE CHARACTERIZATION OF THE SCATTERING DATA
In this section we consider the characterization of the scattering data. In the next theorem we present one of our main characterization results. It shows that the four conditions given in Definition 4.1 for the Marchenko class form a characterization of the scattering data sets S, so that there exists a one-to-one correspondence between a scattering data set in the Marchenko class and an input data set D in the Faddeev class specified in Definition 2.1. From Section 4 we know that the Marchenko class can be described in various equivalent ways, and hence it is possible to present the characterization in various different ways.

Theorem 5.1
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R and that N appearing in (3.8) is zero. Consider also an input data set D as in (2.1) consisting of an n × n matrix potential V(x) satisfying (2.3) and (2.4) and a pair of constant n × n matrices A and B satisfying (2.6) and (2.7). Then, we have the following:

(a) For each input data set D in the Faddeev class specified in Definition 2.1, there exists a unique scattering data set S in the Marchenko class specified in Definition 4.1.

(b) Conversely, for each S in the Marchenko class, there exists a unique input data set D in the Faddeev class, where the boundary matrices A and B are uniquely determined up to a postmultiplication by an invertible n × n matrix T.

(c) Let S̃ be the scattering data set corresponding to D given in the previous step (b), where D is constructed from the scattering data set S. Then, we have S̃ = S, i.e. the scattering data set constructed from D is equal to the scattering data set used to construct D.

(d) The characterization outlined in the steps (a)-(c) given above can equivalently be stated as follows. A set S as in (3.12) is the scattering data set corresponding to an input data set D in the Faddeev class if and only if S satisfies (I), (II), (IIIa), and (IVa) stated in Definition 4.1.

(e) The characterization outlined in the steps (a)-(c) given above can equivalently be stated as follows. A set S as in (3.12) is the scattering data set corresponding to an input data set D in the Faddeev class if and only if S satisfies (I), (II), (IVa) of Definition 4.1 and (IIIa) and (Vc) of Theorem 4.2.
(f) The characterization outlined in the steps (a)-(c) given above can equivalently be stated as follows. A set S as in (3.12) is the scattering data set corresponding to an input data set D in the Faddeev class if and only if S satisfies (I), (II), (III), and (IV), where (III) can be either one of (IIIa) and (IIIb); and (IV) can be any one of (IVa), (IVb), (IVc), (IVd), and (IVe). We recall that (I), (II), (IIIa), (IVa) are listed in Definition 4.1; (IIIb) is listed in Theorem 4.3; and (IVb), (IVc), (IVd), (IVe) are listed in Theorem 4.4.

(g) The characterization outlined in the steps (a)-(c) given above can equivalently be stated as follows. A set S as in (3.12) is the scattering data set corresponding to an input data set D in the Faddeev class if and only if S satisfies (I), (II), (III), (IV), and (V), where (III) can be any one of (IIIa), (IIIb), (IIIc); (IV) can be any one of (IVa), (IVb), (IVc), (IVd), (IVe); and (V) can be any one of (Va), (Vb), (Vc), (Vd), (Ve), (Vf), (Vg), (Vh). We recall that (I), (II), (IVa) are listed in Definition 4.1; (IIIa) and (Vc) are listed in Theorem 4.2; (IVb), (IVc), (IVd), and (IVe) are listed in Theorem 4.4; and (IIIb), (IIIc), (Va), (Vb), (Vd), (Ve), (Vf), (Vg), and (Vh) are listed in Theorem 4.5.

We have the following remarks on the results presented in Theorem 5.1. The characterization result stated in Theorem 5.1(e) follows from Theorem 4.2. The result stated in Theorem 5.1(f) is a consequence of Theorem 4.4. The result in Theorem 5.1(e) is a particular case of the result in Theorem 5.1(g), but we prefer to state it separately because it resembles the characterization result stated by Agranovich and Marchenko [1] in the Dirichlet case. Finally, we remark that Theorem 5.1(g) is a direct consequence of Theorem 4.5.
6. AN ALTERNATE CHARACTERIZATION OF THE SCATTERING DATA
It is possible to present an alternate characterization of the scattering data using Levinson's theorem. This characterization again establishes a one-to-one correspondence between the Faddeev class of input data sets D and the Marchenko class of scattering data sets S. Hence, such an alternate characterization can also be viewed as an alternate description of the Marchenko class of scattering data sets with the help of Levinson's theorem.

In general, the bound-state data {κ_j, M_j}_{j=1}^N appearing in the scattering data set S of (3.12) and the scattering matrix S(k) are independent, and they need to be specified separately. On the other hand, the determinant of the scattering matrix contains the information on the number of bound states including the multiplicities, which is the nonnegative integer N appearing in (3.8). The change in the argument of the determinant of S(k) as k changes from k = 0+ to k = +∞ in the k-interval (0, +∞) is related to the number of bound states including multiplicities. This general fact is usually known as Levinson's theorem.

When the input data set D belongs to the Faddeev class, we have [8] Levinson's theorem stated in the following.

Theorem 6.1
Consider the matrix Schrödinger equation (2.2) with the selfadjoint boundary condition (2.5). Assume that the corresponding input data set D given in (2.1) belongs to the Faddeev class. Let S appearing in (3.12) be the scattering data set corresponding to D. Then, the number N of bound states including the multiplicities appearing in (3.8) is related to the argument of the determinant of the scattering matrix S(k) as

arg[det S(0+)] − arg[det S(+∞)] = π (2N + µ + n_D − n),  (6.1)

where µ is the (algebraic and geometric) multiplicity of the eigenvalue +1 of the zero-energy scattering matrix S(0), n is the positive integer appearing in the matrix size n × n of the scattering matrix S(k), and n_D is the number of Dirichlet boundary conditions in the diagonal representation (2.8) of the boundary matrices A and B. We remark that n_D is the same as the nonnegative integer which is equal to the multiplicity of the eigenvalue −1 of the constant n × n matrix S∞ appearing in (4.1).

In some cases, by using Levinson's theorem we may be able to quickly determine that a given scattering data set S does not belong to the Marchenko class. Using the scattering matrix S(k), we readily know the positive integer n appearing in the matrix size n × n of the matrix S(k). The zero-energy scattering matrix S(0) has eigenvalues equal to either −1 or +1. Thus, we can identify µ as the multiplicity of the eigenvalue +1 of S(0). From the large-k limit of S(k) given in (4.1) we can easily construct the constant matrix S∞, and we already know that S∞ has eigenvalues equal to either −1 or +1. Thus, we can identify n_D as the multiplicity of the eigenvalue −1 of S∞. Then, from the scattering matrix S(k) we can evaluate the change in the argument of det[S(k)] given on the left-hand side of (6.1). We can then use (6.1) to determine the value of N predicted by Levinson's theorem.
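This membership test can be illustrated numerically in the scalar case n = 1. The Jost function J(k) = ik + c and the resulting S(k) = −J(−k)/J(k) below are standard scalar formulas for the zero potential with a Robin boundary parameter c > 0 (an assumption; c = 1 is an arbitrary illustrative choice), a setting with exactly one bound state, at k = ic:

```python
import numpy as np

# Scalar (n = 1) illustration of Levinson's theorem (6.1).  For the zero
# potential with a Robin boundary parameter c > 0, a standard scalar Jost
# function is J(k) = ik + c (one bound state at k = ic), so that
# S(k) = -J(-k)/J(k); c = 1 is an arbitrary illustrative choice.
c = 1.0
k = np.linspace(1e-4, 500.0, 200_000)
S = -(-1j * k + c) / (1j * k + c)

# arg det S(k), accumulated continuously from k = 0+ to k = +infinity
phase = np.unwrap(np.angle(S))
delta = phase[0] - phase[-1]          # left-hand side of (6.1)

# Here S(0+) = -1 and S(+infinity) = +1, so mu = 0 (no eigenvalue +1 of
# S(0)) and nD = 0 (no eigenvalue -1 of S_infinity); n = 1.
n, mu, nD = 1, 0, 0
N = (delta / np.pi - mu - nD + n) / 2.0   # solve (6.1) for N
print(round(N))                           # recovers the single bound state
```

On the truncated grid, delta is slightly below π, so the computed N is approximately, rather than exactly, an integer; a value far from any nonnegative integer would signal a data set outside the Marchenko class.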
If that value of N evaluated from (6.1) does not turn out to be a nonnegative integer, we know that the corresponding S does not belong to the Marchenko class.

The next theorem shows that we can obtain an equivalent description of the Marchenko class specified in Definition 4.1, by replacing (IIIa) by a set of two conditions one of which is related to Levinson's theorem, and at the same time by replacing (IVa) by any one of three other conditions.

Theorem 6.2
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R and that N appearing in (3.8) is zero. The scattering data set S belongs to the Marchenko class if and only if S satisfies the five conditions, two of which are listed as (I) and (II) in Definition 4.1, the third and the fourth are the respective conditions listed as (L) and (◦) below, and the fifth is any one of the three conditions listed as (◦c), (◦d), (◦e) below:

(L) The scattering matrix S(k) appearing in S is continuous for k ∈ R and (6.1) of Levinson's theorem is satisfied with µ, n_D, and N coming from S. Here, µ is the (algebraic and geometric) multiplicity of the eigenvalue +1 of the zero-energy scattering matrix S(0), n_D is the (algebraic and geometric) multiplicity of the eigenvalue −1 of the hermitian matrix S∞ appearing in (4.1), and N is the nonnegative integer in (3.8) which is equal to the sum of the ranks m_j of the matrices M_j appearing in S.

(◦c) The only square-integrable solution X(y), which is a row vector with n square-integrable components in y ∈ R+, to the linear homogeneous integral equation

X(y) + ∫₀^∞ dz X(z) F(z + y) = 0,  y ∈ R+,  (6.2)

is the trivial solution X(y) ≡ 0. Here, F(y) is the quantity defined in (4.4).

(◦d) The only solution X̂(k) to the system

{ X̂(iκ_j) M_j = 0,  j = 1, . . . , N,
  X̂(−k) + X̂(k) S(k) = 0,  k ∈ R,  (6.3)

where X̂(k) is a row vector with n components belonging to the Hardy space H²(C+), is the trivial solution X̂(k) ≡ 0.

(◦e) The only solution h(k) to the system

{ M_j h(iκ_j) = 0,  j = 1, . . .
, N,
  h(−k) + S(k) h(k) = 0,  k ∈ R,  (6.4)

where h(k) is a column vector with n components belonging to the Hardy space H²(C+), is the trivial solution h(k) ≡ 0.

(◦) For the matrix-valued function F_s(y) given in (4.3), the derivative F_s′(y) for y ∈ R− can be written as a sum of two matrix-valued functions, one of which is integrable and the other is square integrable in y ∈ R−.

We remark that the conditions (◦c), (◦d), (◦e) listed in Theorem 6.2 are somewhat similar to the conditions (IVc), (IVd), (IVe) of Theorem 4.4. However, there are also some differences; for example, X(y) appearing in (4.14) belongs to L¹(R+) whereas X(y) appearing in (6.2) belongs to L²(R+), X̂(k) of (4.15) belongs to L̂¹(C+) whereas X̂(k) of (6.3) belongs to H²(C+), and h(k) of (4.16) belongs to L̂¹(C+) whereas h(k) of (6.4) belongs to H²(C+).

Let us also remark that the condition (◦) in Theorem 6.2 is the same as the first sentence given in (IIIa) of Theorem 4.2. We note that Theorem 6.2 is the generalization of a characterization result by Agranovich and Marchenko presented in Theorem 2 on p. 281 of [1], which utilizes Levinson's theorem in the purely Dirichlet case. That characterization result by Agranovich and Marchenko is valid only in the case of the Dirichlet boundary condition and does not include the condition stated in (◦) in Theorem 6.2. In the special case of the purely Dirichlet boundary condition, it turns out that (◦) in Theorem 6.2 is not needed. This has something to do with the fact that in the purely Dirichlet case the Marchenko integral equation (4.5) alone plays a key role in the solution to the inverse problem, whereas in the non-Dirichlet case not only the Marchenko integral equation but also the derivative Marchenko integral equation plays a key role in the solution to the inverse problem, in particular in the satisfaction of the selfadjoint boundary condition given in (3.5).
The derivative Marchenko integral equation is obtained by taking the x-derivative of (4.5), and hence the quantity F_s′(y) appears in the nonhomogeneous term of the derivative Marchenko integral equation. That presence of F_s′(y) results in the condition stated in (◦) in Theorem 6.2. The presence of (◦) in Theorem 6.2 also has something to do with the fact that the boundary condition stated in (3.5) must hold for all k ∈ R. By taking the Fourier transform of both sides of (3.5), we end up with the requirement that the Fourier transform of the left-hand side of (3.5) must identically vanish. For this, one needs the satisfaction of (◦) in Theorem 6.2, unless A = 0 in (3.5). Since the case A = 0 is the same as having the purely Dirichlet boundary condition, (◦) in Theorem 6.2 is relevant only in the non-Dirichlet case. For the mathematical elaboration on (◦) we refer the reader to [10].

The presence of (◦) in Theorem 6.2 is an indication of one of several reasons why the characterization of scattering data sets with the general selfadjoint boundary condition is more involved than the characterization with the Dirichlet boundary condition.

We conclude that the result presented in Theorem 6.2, compared to Theorem 5.1, constitutes an alternate characterization of the scattering data sets S. Recall that Theorem 5.1 characterizes the scattering data sets that are in a one-to-one correspondence with the input data sets D in the Faddeev class. With the help of Theorem 6.2 we have the following alternate characterization of the scattering data sets.

Theorem 6.3 Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R.
Consider also an input data set D as in (2.1) consisting of an n × n matrix potential V(x) satisfying (2.3) and (2.4) and a pair of constant n × n matrices A and B satisfying (2.6) and (2.7), where it is understood that the boundary matrices A and B are unique up to a postmultiplication by an invertible n × n matrix T. Then, we have the following characterization of the scattering data sets. A set S as in (3.12) is the scattering data set corresponding to an input data set D in the Faddeev class if and only if S satisfies (I) and (II) of Definition 4.1, both (L) and (◦) of Theorem 6.2, and any one of the three conditions listed as (◦c), (◦d), (◦e) in Theorem 6.2.
7. ANOTHER CHARACTERIZATION OF THE SCATTERING DATA
In this section we provide yet another description of the Marchenko class of scattering data sets S, so that there exists a one-to-one correspondence between an input data set D in the Faddeev class and a scattering data set S in the Marchenko class. Such a description allows us to have yet another characterization of the scattering data sets S in a one-to-one correspondence with the input data sets D in the Faddeev class.

The characterization given in this section resulting from a new description of the Marchenko class has some similarities and differences compared to the first characterization presented in Theorem 5.1 and the alternate characterization presented in Theorem 6.3. Related to this new characterization, the construction of the potential in the solution to the inverse problem is the same as in the previous characterizations; namely, one constructs the potential by solving the Marchenko equation. Hence, the conditions (I), (II), (IVa) in the first characterization, the conditions (I), (II), (◦c) in the alternate characterization, and the conditions (I), (II), (IVc) in this new characterization are essentially used to construct the potential. This new characterization differs from the two earlier ones in regard to the satisfaction of the boundary condition by the physical solution Ψ(k, x) and by the normalized bound-state matrix solutions Ψ_j(x). It is based on the alternate solution to the inverse problem by using the generalized Fourier map [22]. This new characterization uses six conditions, indicated as (I), (II), (A), (IVc), either of (Ve) or (Vh), and (VI). Recall that (II) is described in Definition 4.1, (IVc) is described in Theorem 4.4, and (Ve) and (Vh) are described in Theorem 4.5. In the following definition we describe the conditions (I), (A), and (VI).

Definition 7.1
The properties (I), (A), and (VI) for the scattering data set S in (3.12) are defined as follows:

(I) The scattering matrix S(k) satisfies (4.9), the quantity S∞ appearing in (4.1) exists, the quantity S(k) − S∞ is square integrable in k ∈ R, and the quantity F_s(y) defined in (4.3) is bounded in y ∈ R and integrable in y ∈ R+.

(A) Consider the nonhomogeneous Riemann-Hilbert problem given by

h(k) + S(−k) h(−k) = g(k),  k ∈ R,  (7.1)

where the nonhomogeneous term g(k) belongs to a dense subset Υ̊ of the vector space Υ of column vectors with n square-integrable components satisfying g(−k) = S(k) g(k) for k ∈ R. Then, for each such given g(k), the equation (7.1) has a solution h(k) as a column vector with n components belonging to the Hardy space H²(C+).

(VI) The scattering matrix S(k) is continuous in k ∈ R.

We remark that (I) of Definition 7.1 is weaker than (I) of Definition 4.1. The quantity G and hence (4.2) appearing in (I) of Definition 4.1 are used to construct the boundary matrices A and B. In order to construct the potential V(x) only, it is enough to use the weaker condition (I) of Definition 7.1. The condition (A) of Definition 7.1 somewhat resembles (IIIc) of Theorem 4.5, but there are also some major differences. In (IIIc) a solution is sought to the homogeneous Riemann-Hilbert problem (4.17) as a column vector with n components where each of those components belongs to H²(C−), and the only solution is expected to be the trivial solution h(k) ≡ 0. On the other hand, in (A) one solves a nonhomogeneous Riemann-Hilbert problem and the solution is sought as a column vector where each of the n components belongs to the Hardy space H²(C+). The solution h(k) to (7.1) is in general nontrivial because the nonhomogeneous term g(k) there is in general nontrivial, and the existence of a solution to (7.1) is more relevant than its uniqueness.
The condition (VI) of Definition 7.1, which is the continuity of the scattering matrix S(k), is mainly needed to prove that the physical solution Ψ(k, x) satisfies the boundary condition (2.5).

Let us first describe the solution to the inverse scattering problem related to this new characterization and then present the characterization itself. As already indicated, the part of the solution to the inverse problem involving the construction of the potential is practically the same as the solution outlined in Section 4. However, the part of the solution related to the boundary condition is different from the procedure outlined in Section 4. We summarize below the construction of D from S in this new method, where the existence and uniqueness are implicit at each step:

(a) From the large-k asymptotics of the scattering matrix S(k), with the help of (4.1), we determine the n × n constant matrix S∞. Contrary to the method of Section 4, we do not deal with the determination of the constant n × n matrix G appearing in (4.2). It follows from (4.9) that the matrix S∞ is hermitian when S satisfies the condition (I) described in Definition 7.1.

(b) In terms of the quantities in S, we uniquely construct the n × n matrix F_s(y) by using (4.3) and the n × n matrix F(y) by using (4.4). This step is the same as steps (b) and (c) of the summary of the method outlined in Section 4.

(c) If the condition (IVc) of Theorem 4.4 is also satisfied, then one uses the matrix F(y) as input to the Marchenko integral equation (4.5). If F(y) is integrable in y ∈ (x, +∞) for each x ≥ 0, then for each fixed x ≥ 0 there exists a solution K(x, y) integrable in y ∈ (x, +∞) to (4.5), and such a solution is unique. The solution K(x, y) can be constructed by iterating (4.5). We remark that this step is the same as step (d) of the summary of the method outlined in Section 4. Even though K(x, y) is constructed only for 0 ≤ x < y, one can extend K(x, y) to y ∈ R+ by letting K(x, y) = 0 for 0 ≤ y < x.
(d) Having obtained K(x, y) uniquely from S, one constructs the potential V(x) via (4.6) and also constructs the Jost solution f(k, x) via (4.8). Then, by using (I), (II), and (IVc), one proves that the constructed V(x) satisfies (2.3) and (2.4) and that the constructed f(k, x) satisfies (2.2) used with the constructed potential V(x).

(e) Having constructed the Jost solution f(k, x), one then constructs the physical solution Ψ(k, x) via (3.4) and the normalized bound-state matrix solutions Ψ_j(x) via (3.11). One then proves that the constructed matrix Ψ(k, x) satisfies (2.2) and that the constructed Ψ_j(x) satisfies (2.2) at k = iκ_j, with the understanding that the constructed potential V(x) is used in (2.2).

(f) Having constructed the potential V(x), one forms a matrix-valued differential operator denoted by L_min, which acts as (−D_x² I + V) with D_x := d/dx, with a domain that is a dense subset of L²(R+). More precisely, the domain of L_min consists of the column vectors with n components each of which is a function of x belonging to a dense subset of L²(R+). The constructed operator L_min is symmetric, i.e. it satisfies L_min ⊂ L_min†, but is not selfadjoint, i.e. it does not satisfy L_min = L_min†. The operator inclusion L_min ⊂ L_min† indicates that the domain of the operator L_min is a subset of the domain of the operator L_min† and these two operators have the same value on the domain of L_min.

(g) One then constructs a selfadjoint realization of L_min, namely an operator L in such a way that L_min ⊂ L and L = L†. The constructed operator L is a restriction of L_min†, i.e. we have L ⊂ L_min† but not L = L_min†.

(h) The construction of the operator L is achieved [9,10,22] by using the so-called generalized Fourier map F and its adjoint F†.
The generalized Fourier map F corresponds to a generalization of the Fourier transform between the space of square-integrable functions of x and the space of square-integrable functions of k.

(i) Once the selfadjoint operator L is constructed, it follows [9,10] that the domain of L is a maximal isotropic subspace, which is sometimes also called a Lagrange plane. Once we know that the domain of L is a maximal isotropic subspace, then it follows [9,10] that the functions in the domain of L must satisfy the boundary condition (2.5) for some boundary matrices A and B satisfying (2.6) and (2.7), where A and B are uniquely determined up to a postmultiplication by an invertible matrix T.

(j) Finally, one proves that the constructed physical solution Ψ(k, x) and the constructed normalized bound-state matrix solutions Ψ_j(x) satisfy the boundary condition (2.5) with the boundary matrices A and B specified in the previous step; however, such a proof is different in nature from the proofs for the previous characterizations. For the constructed matrices Ψ_j(x), it is immediate that they satisfy the boundary condition because they belong to the domain of L. Thus, it remains to prove that the constructed Ψ(k, x) satisfies the boundary condition. We note that the matrix Ψ(k, x) does not belong to the domain of L because its entries do not belong to L²(R+). On the other hand, Ψ(k, x) is locally square integrable in x ∈ [0, +∞), i.e. it is square integrable in every compact subset of [0, +∞). Hence, it is possible to use a simple limiting argument to prove that Ψ(k, x) satisfies the boundary condition (2.5), and the condition (VI) is utilized in the aforementioned limiting argument.

(k) As in the previous characterization given in Theorem 5.1(c), we still need to prove that the input data set D of (2.1) constructed from the scattering data set S of (3.12) yields S.
The proof of this step is the same as the proof of Theorem 5.1(c).

Based on the procedure outlined above, we next present another description of the Marchenko class of scattering data sets S. Recall that (I), (A), and (VI) are described in Definition 7.1, ( ) is described in Definition 4.1, (c) is described in Theorem 4.4, and (Ve) and (Vh) are described in Theorem 4.5.

Theorem 7.2
Consider a scattering data set S as in (3.12), which consists of an n × n scattering matrix S(k) for k ∈ R, a set of N distinct positive constants κ_j, and a set of N constant n × n hermitian and nonnegative matrices M_j with respective positive ranks m_j, where N is a nonnegative integer. In case N = 0, it is understood that S consists only of S(k) for k ∈ R and that N appearing in (3.8) is zero. The set S is the scattering data set corresponding to a unique input data set D as in (4.2) in the Faddeev class specified in Definition 2.1 if and only if S satisfies the six conditions consisting of (I), ( ), (A), (c), either one of (Ve) and (Vh), and (VI). We recall that the uniqueness of the input data set D is understood in the sense that the boundary matrices A and B in (4.2) are unique up to a postmultiplication by an arbitrary invertible n × n matrix T.
8. SOME ELABORATIONS
In this section we make a comparison with the definitions of the Jost matrix and the scattering matrix in the scalar case appearing in the literature. We also elaborate on the nonuniqueness issue arising if the scattering matrix is defined differently when the Dirichlet boundary condition is used. The reader is referred to Section 4 of [6] and Example 6.3 of [6] for further elaborations on the nonuniqueness issue.

In the scalar case, i.e. when n = 1, from (2.8) we see that we can choose

A = −sin θ, B = cos θ, θ ∈ (0, π], (8.1)

where θ represents the boundary parameter. We can write the boundary condition (2.5) in the equivalent form

−A† ψ′(0) + B† ψ(0) = 0. (8.2)

Using (8.1) in (8.2) we see that our boundary condition (2.5) in the scalar case is equivalent to

(sin θ) ψ′(0) + (cos θ) ψ(0) = 0, θ ∈ (0, π]. (8.3)

We remark that the boundary condition (8.3) agrees with the boundary condition used in the literature [7,19,20] in the scalar case. Since θ = π corresponds to the Dirichlet boundary condition, we can write (8.3) in the equivalent form

ψ(0) = 0 (Dirichlet case), ψ′(0) + (cot θ) ψ(0) = 0 (non-Dirichlet case), (8.4)

where θ ∈ (0, π) in the non-Dirichlet case. The boundary condition (8.4) is also identical [7,19,20] to that used in the literature in the scalar case. As stated below (2.7), the boundary matrices A and B in (2.5) can be postmultiplied by any invertible matrix T without affecting (2.5)-(2.7). Hence, the constants A and B appearing in (8.1) can be multiplied by any nonzero constant.
In any case, the boundary condition (2.5) we use is the same as the boundary condition used in the literature [7,19,20] in the scalar case.

Using (8.1) we see that the Jost matrix defined in (3.2) yields

J(k) = f(−k*, 0)† (cos θ) + f′(−k*, 0)† (sin θ), k ∈ R, (8.5)

where we recall that θ = π in the Dirichlet case and θ ∈ (0, π) in the non-Dirichlet case. Using (2.2), (2.3), and (3.1), for each fixed x ≥ 0 the quantities f(k, x) and f′(k, x) in the scalar case satisfy

f(−k*, x)* = f(k, x), f′(−k*, x)* = f′(k, x), k ∈ C⁺. (8.6)

Informally speaking, f(k, x) and f′(k, x) each contain k as ik, and hence we have (8.6). Using (8.6) in (8.5) we see that the Jost matrix in the scalar case is given by

J(k) = f(k, 0) (cos θ) + f′(k, 0) (sin θ), k ∈ R, (8.7)

which is equivalent to

J(k) = −f(k, 0) (Dirichlet case), J(k) = (sin θ) [f′(k, 0) + (cot θ) f(k, 0)] (non-Dirichlet case). (8.8)

In the literature in the scalar case the Jost matrix is usually called the Jost function and is defined [7,19,20] as

J(k) = f(k, 0) (Dirichlet case), J(k) = −i [f′(k, 0) + (cot θ) f(k, 0)] (non-Dirichlet case). (8.9)

The primary motivation behind the definition in (8.9) is to define the Jost function J(k) in the scalar case in such a way that as k → ∞ in C⁺ we have J(k) = 1 + O(1/k) in the Dirichlet case and J(k) = k + O(1) in the non-Dirichlet case. We remark that (8.8) and (8.9) do not agree, and we further elaborate on this disagreement. We know from (b) in Section 3 that the right-hand side of (8.7) can be multiplied by any nonzero constant because the boundary matrices A and B appearing in (3.2) can be postmultiplied by any invertible matrix T without affecting (2.5)-(2.7). Comparing (8.8) and (8.9) we see that it is impossible to modify the right-hand side of (8.8) through a multiplication by a nonzero scalar so that the right-hand sides of (8.8) and (8.9) agree. In other words, we cannot use the same multiplicative constant both in the Dirichlet case and in the non-Dirichlet case so that (8.8) and (8.9) can agree.

Using (8.8) in (3.3) we obtain the scattering matrix in the scalar case as

S(k) = −f(−k, 0)/f(k, 0) (Dirichlet case), S(k) = −[f′(−k, 0) + (cot θ) f(−k, 0)] / [f′(k, 0) + (cot θ) f(k, 0)] (non-Dirichlet case). (8.10)

On the other hand, the scattering matrix in the scalar case is defined in the literature [7,19,20] as

S(k) = f(−k, 0)/f(k, 0) (Dirichlet case), S(k) = −[f′(−k, 0) + (cot θ) f(−k, 0)] / [f′(k, 0) + (cot θ) f(k, 0)] (non-Dirichlet case). (8.11)

Thus, the first lines of (8.10) and (8.11) differ by a minus sign and their second lines are identical. Note that (8.9) and (8.11) indicate that the scattering matrix in the literature [7,19,20] in the scalar case is related to the Jost matrix as

S(k) = J(−k) J(k)⁻¹ (Dirichlet case), S(k) = −J(−k) J(k)⁻¹ (non-Dirichlet case). (8.12)

The definition (8.12) of the scattering matrix in the scalar case in the literature is motivated by the fact that (8.12) ensures that S∞ defined in (4.1) is equal to 1, regardless of the Dirichlet case or the non-Dirichlet case. Comparing (8.12) with (3.3) we see that (3.3) and the first line of (8.12) differ by a minus sign and that (3.3) and the second line of (8.12) agree with each other.

In the previous literature [1,21], the scattering matrix in the Dirichlet case is defined as in the first line of (8.12) even in the nonscalar case, i.e. when n ≥ 2. Again, this ensures that S∞ = I, where we recall that I is the n × n identity matrix. However, defining the scattering matrix in the Dirichlet case as in (8.12) and not as in (3.3) makes it impossible to have a unique solution to the inverse scattering problem unless the boundary condition is already known as a part of the scattering data. If the physical problem arises mainly from quantum mechanics and hence the boundary condition is the purely Dirichlet condition, which corresponds to having A = 0 in (2.5), this does not present a problem. On the other hand, if the determination of the selfadjoint boundary condition is a part of the solution to the inverse problem, then the definition of the scattering matrix S(k) given in (8.12) is problematic, and that is one of the reasons why we use the definition of S(k) given in (3.3) regardless of the boundary condition. Note that we define the scattering matrix as in (3.3) so that the associated Schrödinger operator for the unperturbed problem has the Neumann boundary condition.
This definition is motivated by the theory of quantum graphs, where the Neumann boundary condition is usually used for the unperturbed problem. We refer the reader to [15,17,18,22] for further details.

In the following three examples, we illustrate the drawback of using (8.12) and not (3.3) as the definition of the scattering matrix.

Example 8.1
Let us use (8.12) as the definition of the scattering matrix, instead of using (3.3). Let us assume that we are in the scalar case. Let us consider the input data set D given in (2.1) and the scattering data set S given in (3.12). The input data set D₁ corresponding to the trivial potential V₁(x) ≡ 0 and the Dirichlet boundary parameter θ₁ = π yields the Jost solution f₁(k, x) = e^{ikx}, and hence the corresponding Jost matrix is evaluated by using the first line of (8.9) as J₁(k) = f₁(k, 0) = 1. There are no bound states because J₁(k) does not vanish on the positive imaginary axis in the complex k-plane. Thus, using the first line of (8.11), we evaluate the scattering matrix as S₁(k) ≡ 1, and hence the corresponding scattering data set S₁ consists of S₁(k) ≡ 1. Similarly, the input data set D₂ corresponding to the trivial potential V₂(x) ≡ 0 and the Neumann boundary parameter θ₂ = π/2 yields the Jost solution f₂(k, x) = e^{ikx}, and hence, by using the second line of (8.9), the corresponding Jost matrix is evaluated as J₂(k) = −i f₂′(k, 0) = k. There are no bound states because J₂(k) does not vanish on the positive imaginary axis in the complex k-plane. Thus, using the second line of (8.11) or equivalently the second line of (8.12), we evaluate the scattering matrix as S₂(k) ≡ 1, and hence the corresponding scattering data set S₂ consists of S₂(k) ≡ 1. Thus, we have S₁ = S₂ even though D₁ ≠ D₂. This nonuniqueness would not occur if we used (3.3) as the definition of the scattering matrix S(k). We would then get S₁(k) ≡ −1 and S₂(k) ≡ 1, and hence S₁ ≠ S₂.

Next, we further illustrate the nonuniqueness encountered in Example 8.1 with a nontrivial example.
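The computation above can be checked numerically. The following sketch is our own illustration, not part of the paper's argument; the function names are hypothetical. It encodes the literature definition (8.11) and the definition (3.3) for the trivial potential, for which f(k, 0) = 1 and f′(k, 0) = ik.

```python
# Sanity check for Example 8.1 (illustrative sketch; function names are ours).
# For the trivial potential V ≡ 0 the Jost solution is f(k, x) = exp(ikx),
# so f(k, 0) = 1 and f'(k, 0) = ik.

def f0(k):
    return 1 + 0j            # f(k, 0) for the trivial potential

def f0prime(k):
    return 1j * k            # f'(k, 0) for the trivial potential

def S_literature(k, dirichlet):
    # the literature definition (8.11); note the sign differs between the cases
    if dirichlet:                          # theta = pi, first line of (8.11)
        return f0(-k) / f0(k)
    return -f0prime(-k) / f0prime(k)       # theta = pi/2, so cot(theta) = 0

def S_paper(k, dirichlet):
    # the definition (3.3), the same in both cases: S(k) = -J(-k) J(k)^(-1)
    if dirichlet:
        return -f0(-k) / f0(k)
    return -f0prime(-k) / f0prime(k)

k = 1.7  # any nonzero real wavenumber
# With (8.11) the Dirichlet and Neumann data sets give the same S(k):
assert S_literature(k, dirichlet=True) == S_literature(k, dirichlet=False) == 1
# With (3.3) the two data sets are distinguished:
assert S_paper(k, dirichlet=True) == -1 and S_paper(k, dirichlet=False) == 1
```

The assertions reproduce the conclusion of the example: the literature definition collapses the two distinct input data sets onto the same scattering data, while (3.3) keeps them apart.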
Example 8.2
Let us use (8.12) as the definition of the scattering matrix, instead of using (3.3). Let us assume that we are in the scalar case. Let us choose a nontrivial potential V₁(x) so that it is real valued and satisfies (2.4). Let us also view V₁(x) as a full-line potential with support on x ∈ R⁺. We refer the reader to any reference on the scattering theory for the full-line Schrödinger equation, such as [2,11-13,19,20], for the description of the corresponding scattering coefficients. As a full-line potential, let us also assume that V₁(x) has no bound states and corresponds to the full-line exceptional case. The no-bound-state assumption on the full line is the same as assuming that the transmission coefficient has no poles on the positive imaginary axis in the complex k-plane, and the exceptional case on the full line is equivalent to the assumption that the transmission coefficient does not vanish at k = 0. Corresponding to V₁(x) as a full-line potential we have the full-line scattering data {T₁(k), R₁(k), L₁(k)}, where T₁(k) is the transmission coefficient, R₁(k) is the reflection coefficient from the right, and L₁(k) is the reflection coefficient from the left. It is known [2,11-13,19,20] that the full-line scattering data {T₂(k), R₂(k), L₂(k)}, where we have

T₂(k) = T₁(k), R₂(k) = −R₁(k), L₂(k) = −L₁(k), (8.13)

corresponds to a nontrivial full-line potential V₂(x) so that V₂(x) is real valued, vanishes when x < 0, has no bound states on the full line, and corresponds to the full-line exceptional case. Furthermore, V₂(x) satisfies (2.4). Viewing V₁(x) and V₂(x) as half-line potentials, let us now evaluate the corresponding half-line scattering data sets S₁ and S₂ associated with the full-line scattering data sets {T₁(k), R₁(k), L₁(k)} and {T₂(k), R₂(k), L₂(k)}, respectively. Since V₁(x) and V₂(x) both vanish when x < 0, the corresponding respective Jost solutions f₁(k, x) and f₂(k, x) yield

f₁(k, 0) = [1 + L₁(k)]/T₁(k), f₂(k, 0) = [1 + L₂(k)]/T₂(k), (8.14)

f₁′(k, 0) = ik [1 − L₁(k)]/T₁(k), f₂′(k, 0) = ik [1 − L₂(k)]/T₂(k). (8.15)

Let us now view V₁(x) as a half-line potential, associate it with the Dirichlet boundary condition θ₁ = π, and use D₁ to denote the resulting input data set. Similarly, let us view V₂(x) as a half-line potential, associate it with the Neumann boundary condition θ₂ = π/2, and use D₂ to denote the resulting input data set. Clearly, we have D₁ ≠ D₂ because θ₁ ≠ θ₂. Using (8.14) in the first lines of (8.9) and (8.11) we obtain the Jost matrix J₁(k) and the scattering matrix S₁(k) as

J₁(k) = f₁(k, 0) = [1 + L₁(k)]/T₁(k), S₁(k) = f₁(−k, 0)/f₁(k, 0) = [T₁(k)/T₁(−k)] [1 + L₁(−k)]/[1 + L₁(k)]. (8.16)

On the other hand, using (8.15) and the second lines of (8.9) and (8.11) with θ₂ = π/2, we obtain the Jost matrix J₂(k) and the scattering matrix S₂(k) as

J₂(k) = −i f₂′(k, 0) = k [1 − L₂(k)]/T₂(k), S₂(k) = −f₂′(−k, 0)/f₂′(k, 0) = [T₂(k)/T₂(−k)] [1 − L₂(−k)]/[1 − L₂(k)]. (8.17)

Using (8.13) in (8.16) and (8.17) we see that S₁(k) ≡ S₂(k), and hence the corresponding scattering data sets S₁ and S₂ coincide. Thus, we get D₁ ≠ D₂ and S₁ = S₂. This nonuniqueness can be fixed by using (3.3) and not (8.12) as the definition of the scattering matrix S(k).

In the next example, we illustrate the analogous nonuniqueness in the n × n matrix case for any positive integer n. For the relevant scattering theory for the matrix Schrödinger equation on the full line, we refer the reader to [3].
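The coincidence of the two scattering data sets under the sign flip (8.13) is pure algebra, so it can be exercised numerically with arbitrary sample coefficients. In the sketch below (our own illustration), T and L1 are hypothetical functions chosen only to test the identity between (8.16) and (8.17); they are not the scattering coefficients of any actual potential.

```python
# Numerical check of Example 8.2 (illustrative sketch). T and L1 are
# hypothetical sample functions, not derived from an actual potential.

def T(k):
    return (k + 1j) / (k + 2j)   # sample stand-in for the transmission coefficient

def L1(k):
    return 0.3 / (k + 2j)        # sample stand-in for the left reflection coefficient

def L2(k):
    return -L1(k)                # sign flip of the reflection coefficient, as in (8.13)

def S1(k):
    # Dirichlet case, literature definition, as in (8.16)
    return (T(k) / T(-k)) * (1 + L1(-k)) / (1 + L1(k))

def S2(k):
    # Neumann case, literature definition, as in (8.17)
    return (T(k) / T(-k)) * (1 - L2(-k)) / (1 - L2(k))

for k in (0.5, 1.3, 4.0):
    assert abs(S1(k) - S2(k)) < 1e-12   # the two scattering data sets coincide
```

Since 1 − L₂(±k) = 1 + L₁(±k), the two expressions agree identically, which is exactly the mechanism behind the nonuniqueness in the example.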
Example 8.3
In this example we assume that n is any positive integer, not necessarily restricted to n = 1. Let us use (8.12) as the definition of the scattering matrix, instead of using (3.3). Let us again use (2.1) to describe an input data set D and use (3.12) to describe a scattering data set S on the half line. Consider the class of n × n matrix-valued potentials V(x) on the full line satisfying

V(x) = V(x)†, x ∈ R, (8.18)

∫_{−∞}^{∞} dx (1 + |x|) |V(x)| < +∞, (8.19)

where we recall that the dagger denotes the matrix adjoint and |V(x)| denotes the matrix operator norm. The reader is referred to [3] for the matrix-valued scattering coefficients for the full-line matrix Schrödinger equation with such potentials. Associated with V(x) we have the full-line scattering coefficients T_l(k), R(k), and L(k), each of which is an n × n matrix. These matrix-valued scattering coefficients are the matrix generalizations of the scalar scattering coefficients T(k), R(k), and L(k) considered in Example 8.2. Let us further restrict the full-line potentials V(x) so that they vanish when x < 0, they do not possess any bound states on the full line, and they correspond to the purely exceptional case. We refer the reader to [3] for the details on the bound states and the purely exceptional case on the full line. The absence of bound states on the full line is equivalent to having the determinant of the matrix inverse of T_l(k) not vanishing on the positive imaginary axis in the complex k-plane. The purely exceptional case on the full line is equivalent to having the limit of k T_l(k)⁻¹ as k → 0 equal to the n × n zero matrix. For such potentials T_l(0)⁻¹ is well defined, and we have det[I ± L(0)] ≠ 0, where we recall that I denotes the n × n identity matrix. Since we only consider the full-line potentials V(x) vanishing when x < 0, we can view their restrictions to x ∈ R⁺ as half-line potentials V(x). From (8.18) and (8.19) we see that their restrictions to x ∈ R⁺ belong to the Faddeev class. When x ∈ R⁺, the full-line Jost solution from the left f_l(k, x) coincides [3] with the half-line Jost solution f(k, x) appearing in (3.1). Furthermore, we have [3]

f_l(k, 0) = [I + L(k)] T_l(k)⁻¹, f_l′(k, 0) = ik [I − L(k)] T_l(k)⁻¹, k ∈ R. (8.20)

Let V₁(x) be a specific full-line matrix potential satisfying (8.18) and (8.19) such that it vanishes when x < 0, does not contain any bound states on the full line, and corresponds to the purely exceptional case on the full line. Let {T_l(k), R(k), L(k)} be the corresponding full-line scattering data. Let V₂(x) be the full-line matrix potential corresponding to the full-line scattering data {T_l(k), −R(k), −L(k)}, where the signs of the matrix-valued reflection coefficients are changed. The matrix potential V₂(x) also vanishes for x < 0, satisfies (8.18) and (8.19), does not possess any bound states on the full line, and corresponds to a purely exceptional case on the full line. The restrictions of V₁(x) and V₂(x) to x ∈ R⁺ can be viewed as half-line potentials. Let A₁, B₁, A₂, B₂ be four n × n constant matrices in such a way that A₁ = 0, B₂ = 0, B₁ is an arbitrary invertible matrix, and A₂ is an arbitrary invertible matrix. Let D₁ := {V₁, A₁, B₁} and D₂ := {V₂, A₂, B₂} be the half-line input data sets as in (2.1), with the understanding that the domains of V₁(x) and V₂(x) are restricted to x ∈ R⁺. Let f₁(k, x) and f₂(k, x) be the half-line Jost solutions corresponding to D₁ and D₂, respectively. From (8.20) we see that

f₁(k, 0) = [I + L(k)] T_l(k)⁻¹, f₁′(k, 0) = ik [I − L(k)] T_l(k)⁻¹, k ∈ R, (8.21)

f₂(k, 0) = [I − L(k)] T_l(k)⁻¹, f₂′(k, 0) = ik [I + L(k)] T_l(k)⁻¹, k ∈ R. (8.22)

Using (8.21) and (8.22), because of the purely exceptional case on the full line [3], it follows that the determinants of f₁(k, 0) and f₂′(k, 0) do not vanish. Let J₁(k), S₁(k), S₁ be the respective Jost matrix, scattering matrix, and scattering data set corresponding to D₁. Similarly, let J₂(k), S₂(k), S₂ be the respective Jost matrix, scattering matrix, and scattering data set corresponding to D₂. Using (8.21) and (8.22) in (3.2) we obtain

J₁(k) = f₁(−k, 0)† B₁ = [T_l(−k)†]⁻¹ [I + L(−k)†] B₁, (8.23)

J₂(k) = −f₂′(−k, 0)† A₂ = −ik [T_l(−k)†]⁻¹ [I + L(−k)†] A₂. (8.24)

Using (8.23) in the first line of (8.12) we obtain

S₁(k) = J₁(−k) J₁(k)⁻¹ = f₁(k, 0)† [f₁(−k, 0)†]⁻¹,

which yields

S₁(k) = [T_l(k)†]⁻¹ [I + L(k)†] [I + L(−k)†]⁻¹ T_l(−k)†. (8.25)

Using (8.24) in the second line of (8.12) we obtain

S₂(k) = −J₂(−k) J₂(k)⁻¹ = −f₂′(k, 0)† [f₂′(−k, 0)†]⁻¹,

yielding

S₂(k) = [T_l(k)†]⁻¹ [I + L(k)†] [I + L(−k)†]⁻¹ T_l(−k)†. (8.26)

There are no half-line bound states associated with either of the scattering data sets corresponding to S₁(k) and S₂(k). Hence we have S₁ = {S₁(k)} and S₂ = {S₂(k)}. From (8.25) and (8.26) it follows that S₁(k) ≡ S₂(k), and hence we have S₁ = S₂ even though D₁ ≠ D₂. This nonuniqueness would not occur if we used (3.3) as the definition of the scattering matrix S(k). We would then get S₁(k) ≡ −S₂(k), and hence S₁ ≠ S₂.
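The equality of (8.25) and (8.26), as well as the independence of the result from the particular invertible boundary matrices, is again pure matrix algebra and can be exercised numerically. In the sketch below (our own illustration), Tl and L are hypothetical 2 × 2 sample coefficients, not obtained from an actual matrix potential, and B1, A2 are arbitrary invertible matrices.

```python
# Numerical check of Example 8.3 (illustrative sketch). Tl and L are
# hypothetical sample matrix coefficients, invertible at the wavenumber used;
# B1 and A2 play the role of the arbitrary invertible boundary matrices.
import numpy as np

I = np.eye(2)

def Tl(k):
    return np.array([[1 + 0.1j * k, 0.05], [0.02, 1 - 0.2j * k]])

def L(k):
    return np.array([[0.3 / (k + 2j), 0.1], [0.0, 0.2 / (k + 1j)]])

def dag(M):
    return M.conj().T                # the matrix adjoint (dagger)

def f1(k):                           # f1(k, 0) = [I + L(k)] Tl(k)^(-1), as in (8.21)
    return (I + L(k)) @ np.linalg.inv(Tl(k))

def f2p(k):                          # f2'(k, 0) = ik [I + L(k)] Tl(k)^(-1), as in (8.22)
    return 1j * k * (I + L(k)) @ np.linalg.inv(Tl(k))

B1 = np.array([[1, 2], [0, 1j]])     # arbitrary invertible boundary matrices
A2 = np.array([[1j, 0], [1, 1]])

def J1(k):
    return dag(f1(-k)) @ B1          # Jost matrix as in (8.23)

def J2(k):
    return -dag(f2p(-k)) @ A2        # Jost matrix as in (8.24)

k = 1.4
S1 = J1(-k) @ np.linalg.inv(J1(k))   # first line of (8.12)
S2 = -J2(-k) @ np.linalg.inv(J2(k))  # second line of (8.12)
assert np.allclose(S1, S2)           # S1(k) = S2(k): the two data sets coincide
```

The boundary matrices B1 and A2 cancel in the products J(−k) J(k)⁻¹, and the factors ±ik cancel between the numerator and the inverse, which is why the equality holds for any sample Tl and L with the required invertibility.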
The research leading to this article was supported in part by CONACYT under project CB2015, 254062, by Project PAPIIT-DGAPA-UNAM IN103918, and by Coordinación de la Investigación Científica, UNAM.
References

[1] Z. S. Agranovich and V. A. Marchenko, The inverse problem of scattering theory, Gordon and Breach, New York, 1963.

[2] T. Aktosun and M. Klaus, Chapter 2.2.4: Inverse theory: problem on the line, In: E. R. Pike and P. C. Sabatier (eds.), Scattering, Academic Press, London, 2001, pp. 770–785.

[3] T. Aktosun, M. Klaus, and C. van der Mee, Small-energy asymptotics for the Schrödinger equation on the line, J. Math. Phys., 619–632 (2001).

[4] T. Aktosun, M. Klaus, and R. Weder, Small-energy analysis for the self-adjoint matrix Schrödinger operator on the half line, J. Math. Phys., 102101 (2011); arXiv:1105.1794 [math-ph] (2011).

[5] T. Aktosun, M. Klaus, and R. Weder, Small-energy analysis for the self-adjoint matrix Schrödinger operator on the half line. II, J. Math. Phys., 032103 (2014); arXiv:1310.4809 [math-ph] (2013).

[6] T. Aktosun, P. Sacks, and M. Unlu, Inverse problems for selfadjoint Schrödinger operators on the half line with compactly supported potentials, J. Math. Phys., 022106 (2015); arXiv:1409.5819 [math.SP] (2014).

[7] T. Aktosun and R. Weder, Inverse spectral-scattering problem with two sets of discrete spectra for the radial Schrödinger equation, Inverse Problems, 89–114 (2006); arXiv:math-ph/0402019 (2004).

[8] T. Aktosun and R. Weder, High-energy analysis and Levinson's theorem for the self-adjoint matrix Schrödinger operator on the half line, J. Math. Phys., 112108 (2013); arXiv:1206.2986 [math-ph] (2012).

[9] T. Aktosun and R. Weder, Inverse scattering for the matrix Schrödinger equation, preprint, 2018; arXiv:1708.03837 [math-ph] (2017).

[10] T. Aktosun and R. Weder, Direct and inverse scattering for the matrix Schrödinger equation, monograph to be published by Springer-Verlag.

[11] K. Chadan and P. C. Sabatier, Inverse problems in quantum scattering theory.

[12] P. Deift and E. Trubowitz, Inverse scattering on the line, Commun. Pure Appl. Math., 121–251 (1979).

[13] L. D. Faddeev, Properties of the S-matrix of the one-dimensional Schrödinger equation, Amer. Math. Soc. Transl. (ser. 2), 139–166 (1967).

[14] M. S. Harmer, Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions, ANZIAM J., 161–168 (2002).

[15] M. S. Harmer, The matrix Schrödinger operator and Schrödinger operator on graphs, Ph.D. thesis, University of Auckland, New Zealand, 2004.

[16] M. Harmer, Inverse scattering on matrices with boundary conditions, J. Phys. A, 4875–4885 (2005).

[17] V. Kostrykin and R. Schrader, Kirchhoff's rule for quantum wires, J. Phys. A, 595–630 (1999).

[18] V. Kostrykin and R. Schrader, Kirchhoff's rule for quantum wires. II: The inverse problem with possible applications to quantum computers, Fortschr. Phys., 703–716 (2000).

[19] B. M. Levitan, Inverse Sturm-Liouville problems, VNU Science Press, Utrecht, 1987.

[20] V. A. Marchenko, Sturm-Liouville operators and applications, revised ed., Amer. Math. Soc. Chelsea Publ., Providence, R.I., 2011.

[21] R. G. Newton and R. Jost, The construction of potentials from the S-matrix for systems of differential equations, Nuovo Cim., 590–622 (1955).

[22] R. Weder, Scattering theory for the matrix Schrödinger operator on the half line with general boundary conditions, J. Math. Phys., 092103 (2015); arXiv:1505.0879 [math-ph] (2015).

[23] R. Weder, Trace formulas for the matrix Schrödinger operator on the half-line with general boundary conditions, J. Math. Phys., 112101 (2016); arXiv:1603.09432 [math-ph] (2016).

[24] R. Weder, The number of eigenvalues of the matrix Schrödinger operator on the half line with general boundary conditions, J. Math. Phys. 58