Inverse problem solution and spectral data characterization for the matrix Sturm-Liouville operator with singular potential
Natalia P. Bondarenko

Abstract. The matrix Sturm-Liouville operator on a finite interval with singular potential of class $W_2^{-1}$ and the general self-adjoint boundary conditions is studied. This operator generalizes Sturm-Liouville operators on geometrical graphs. We investigate the inverse problem that consists in recovering the considered operator from the spectral data (eigenvalues and weight matrices). The inverse problem is reduced to a linear equation in a suitable Banach space, and a constructive algorithm for the inverse problem solution is developed. Moreover, we obtain the spectral data characterization for the studied operator.

Keywords: inverse spectral problems; matrix Sturm-Liouville operator; singular potential; method of spectral mappings; spectral data characterization.
AMS Mathematics Subject Classification (2010):
This paper is devoted to an inverse spectral problem for the matrix Sturm-Liouville operator $-Y'' + Q(x) Y$, where $Q(x)$ is an $(m \times m)$ matrix function called the potential. Inverse problems of spectral analysis consist in the reconstruction of operators from their spectral information. The greatest success in inverse problem theory has been achieved for scalar Sturm-Liouville operators (for $m = 1$), see the classical monographs [1-4] and references therein.
Matrix Sturm-Liouville operators have been intensively studied in connection with various applications. In particular, inverse problems for such operators are used in quantum mechanics [5], in elasticity theory [6], for description of electromagnetic waves [7] and nuclear structure [8], and for solving matrix nonlinear evolution equations by the inverse spectral transform [9]. For the matrix Sturm-Liouville operators on a finite interval, the majority of studies deal with the Dirichlet boundary conditions $Y(0) = Y(\pi) = 0$ or the Robin boundary conditions
\[
Y'(0) - H_1 Y(0) = 0, \qquad Y'(\pi) + H_2 Y(\pi) = 0,
\]
where $H_1$ and $H_2$ are constant $(m \times m)$ matrices. Uniqueness of recovering such operators from various spectral characteristics has been proved by Carlson [10], Chabanov [8], Malamud [11], Yurko [12], and Shieh [13]. Yurko [14] proposed a constructive method, based on spectral mappings, for solving such inverse problems. Further, this method has been developed by Bondarenko [15] for working with multiple eigenvalues. The most difficult and, at the same time, the most important issue of inverse problem theory is the spectral data characterization. For the matrix Sturm-Liouville operators on a finite interval, this issue has been independently solved by Chelkak and Korotyaev [16], by Mykytyuk and Trush [17], and by Bondarenko [15, 18]. The latter approach was also generalized to a certain class of non-self-adjoint matrix Sturm-Liouville operators [19].

The present paper deals with the matrix Sturm-Liouville operator with self-adjoint boundary conditions in the general form defined below. Denote by $\mathbb{C}^m$ and $\mathbb{C}^{m \times m}$ the spaces of complex $m$-vectors and $(m \times m)$-matrices, respectively. For an interval $I$ and a class $A(I)$ of functions defined on $I$ (e.g., $A = L_2, C, \ldots$), we denote by $A(I; \mathbb{C}^m)$ and $A(I; \mathbb{C}^{m \times m})$ the classes of $m$-vector functions and $(m \times m)$-matrix functions, respectively, with entries from $A(I)$.

Consider the matrix Sturm-Liouville problem $L = L(\sigma, T_1, T_2, H)$:
\[
\ell Y := -(Y^{[1]})' - \sigma(x) Y^{[1]} - \sigma^2(x) Y = \lambda Y, \quad x \in (0, \pi), \tag{1.1}
\]
\[
V_1(Y) := T_1 Y^{[1]}(0) - T_1^{\perp} Y(0) = 0, \qquad V_2(Y) := T_2 \bigl( Y^{[1]}(\pi) - H Y(\pi) \bigr) - T_2^{\perp} Y(\pi) = 0, \tag{1.2}
\]
where $Y = [y_j(x)]_{j=1}^m$ is a vector function, $\sigma \in L_2((0, \pi); \mathbb{C}^{m \times m})$, $\sigma(x) = (\sigma(x))^{\dagger}$ a.e. on $(0, \pi)$, $Y^{[1]}(x) := Y'(x) - \sigma(x) Y(x)$ is the quasi-derivative, $\lambda$ is the spectral parameter, for $j = 1, 2$, $T_j \in \mathbb{C}^{m \times m}$, $T_j$ is an orthogonal projection matrix, $T_j^{\perp} = I - T_j$, $H \in \mathbb{C}^{m \times m}$, $H = H^{\dagger} = T_2 H T_2$, $I$ is the $(m \times m)$ unit matrix, and the symbol $\dagger$ denotes the conjugate transpose. Under these assumptions, the problem $L$ is self-adjoint. We suppose that $Y$ belongs to the domain
\[
\mathcal{D}(L) := \{ Y \colon Y, Y^{[1]} \in AC([0, \pi]; \mathbb{C}^m), \; (Y^{[1]})' \in L_2((0, \pi); \mathbb{C}^m) \}.
\]
Equation (1.1) can be rewritten in the equivalent form
\[
-Y'' + Q(x) Y = \lambda Y, \quad x \in (0, \pi),
\]
with the singular potential $Q(x) = \sigma'(x)$ of class $W_2^{-1}((0, \pi); \mathbb{C}^{m \times m})$. The derivative of the $L_2$-function $\sigma$ is understood in the sense of distributions. However, it is more convenient to use the form (1.1).

Relations (1.2) describe the general self-adjoint form of separated boundary conditions. The matrix Sturm-Liouville operator given by (1.1)-(1.2) is of interest because it generalizes Sturm-Liouville operators on geometrical graphs (see the example below). The latter operators are used for modeling wave propagation in graph-like structures consisting of thin tubes, strings, beams, etc.
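For illustration, here is a standard example of such a reduction; the specific projections below are a common textbook choice and are not taken from this paper. Consider the Sturm-Liouville equation on a star-shaped graph formed by $m$ edges, each identified with $[0, \pi]$ and glued together at the common vertex corresponding to $x = \pi$. Impose the Dirichlet conditions $y_j(0) = 0$ at the pendant vertices and the standard matching (continuity and Kirchhoff) conditions at the common vertex:
\[
y_1(\pi) = y_2(\pi) = \dots = y_m(\pi), \qquad \sum_{j=1}^m y_j^{[1]}(\pi) = 0.
\]
Writing $Y = [y_j]_{j=1}^m$, these conditions take the form (1.2) with $T_1 = 0$, $H = 0$, and $T_2 = \frac{1}{m}[1]_{j,k=1}^m$, the orthogonal projection onto the span of the vector $(1, 1, \dots, 1)$: indeed, $V_1(Y) = -T_1^{\perp} Y(0) = -Y(0) = 0$ is the Dirichlet condition, $T_2^{\perp} Y(\pi) = 0$ expresses the continuity condition, and $T_2 Y^{[1]}(\pi) = 0$ expresses the Kirchhoff condition.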
Differential operators on graphs have attracted much attention of mathematicians and physicists in recent years in connection with applications in nanotechnology, organic chemistry, mechanics, and other branches of science and engineering (see [20-23] and references therein). The general self-adjoint boundary conditions in the form
\[
T Y'(v) + H Y(v) = 0, \qquad T^{\perp} Y(v) = 0,
\]
where $T$ and $T^{\perp}$ are complementary orthogonal projection matrices and $H = H^{\dagger} = T H T$, have been introduced by Kuchment [24]. In the literature (see, e.g., [25]), other equivalent forms of parametrization also appear:
\[
A Y(v) + B Y'(v) = 0,
\]
where the $(m \times 2m)$ matrix $[A, B]$ has the maximal rank $m$ and the matrix $A B^{\dagger}$ is Hermitian, and
\[
-i (U + I) Y(v) + (U - I) Y'(v) = 0,
\]
where $U$ is a unitary matrix.

Inverse problems for the matrix Sturm-Liouville operator on a finite interval with general self-adjoint boundary conditions and regular potential of class $L_2$ have recently been studied by Xu [26]. However, the paper [26] is only concerned with uniqueness theorems. The issues of constructive solution and spectral data characterization for this operator appeared to be more difficult for investigation because of the complex asymptotic behavior of the spectrum and the structural properties of the problem. In [27, 28], properties of the spectral data have been investigated for the matrix Sturm-Liouville operator with the boundary condition in the general self-adjoint form at $x = \pi$ and with the Dirichlet boundary condition at $x = 0$. Further, a constructive solution procedure has been developed for the corresponding inverse spectral problem (see [29]). Those results have been applied to obtain the spectral data characterization for the Sturm-Liouville operator on the star-shaped graph (see [30]).

In addition, it is worth mentioning that inverse scattering problems have been studied for the matrix Sturm-Liouville operators on the half-line and on the line (see, e.g., [5, 9, 31-35]). In particular, Harmer [31, 32] and Aktosun and Weder [35] investigated inverse scattering on the half-line with the boundary condition in the general self-adjoint form at the origin. Harmer [31] also applied those results to the inverse scattering problem on the star-shaped graph consisting of infinite rays. However, the matrix Sturm-Liouville operators on infinite domains usually have a bounded set of eigenvalues, so the inverse problems for them are in some sense easier for investigation than analogous problems on a finite interval.

The majority of the mentioned results deal with the case of regular (square summable or summable) potentials. For the Sturm-Liouville operators with singular (distributional) potentials, there is an extensive literature concerning the scalar case. Inverse problems for the scalar operators in the form $-(y^{[1]})' - \sigma(x) y^{[1]} - \sigma^2(x) y$, $\sigma \in L_2(0, \pi)$, were studied by Hryniv and Mykytyuk [36, 37], Savchuk and Shkalikov [38], Djakov and Mityagin [39], and by other authors. Mykytyuk and Trush [17] investigated inverse problems for the matrix Sturm-Liouville operators with potential of class $W_2^{-1}$ on a finite interval in a special form, which differs from (1.1) and can be easily reduced to a Dirac-type operator. An analogous reduction was applied by Eckhardt and co-authors [40, 41] to the matrix Sturm-Liouville operators on the half-line and on the line.

In this paper, we solve the inverse spectral problem for the matrix Sturm-Liouville operator (1.1)-(1.2) with singular potential and with general self-adjoint boundary conditions at both ends of the interval.
We obtain an algorithm for reconstruction of the operator from its spectral data and provide the spectral data characterization. On the one hand, our approach is based on the spectral properties of the operator (1.1)-(1.2) obtained in our previous study [44]. On the other hand, we rely on the method of spectral mappings for the constructive solution of the inverse problem. This method was initially developed by Yurko for operators with regular coefficients (see [4]). The method allows one to reduce a nonlinear inverse problem to a linear equation in a suitable Banach space. Such a reduction leads to a constructive procedure for solving an inverse problem and can also be used for investigating global solvability, local solvability, stability, and other issues of inverse problem theory. Yurko's method has been modified for the Sturm-Liouville operators with singular potentials by Freiling, Ignatiev, and Yurko [42] and by Bondarenko [43]. An approach to inverse problems for the matrix Sturm-Liouville operators has been developed in [15, 19, 29, 30]. In the present paper, we combine the ideas of the mentioned studies to solve the inverse problem for the operator (1.1)-(1.2).

The paper is organized as follows. In Section 2, we describe asymptotical and structural properties of the spectral data, and formulate the inverse problem, the corresponding uniqueness theorem, and our main theorem (Theorem 2.6) on the characterization of the spectral data. The proof of Theorem 2.6 is contained in Sections 3-6. In Section 3, we reduce the nonlinear inverse problem to a linear equation in a special Banach space. That equation is called the main equation of the inverse problem. In Section 4, we obtain auxiliary estimates concerning the operator participating in the main equation and some other characteristics. In Section 5, it is proved that, under the conditions of Theorem 2.6, the main equation is uniquely solvable. In Section 6, the proof of Theorem 2.6 is finished. By using the solution of the main equation, we construct $\sigma$ and $H$, and finally arrive at Algorithm 6.8 for the constructive solution of the inverse problem.

We overcome the following difficulties specific for our problem.

1. The problem $L$ can have an infinite number of groups of multiple and/or asymptotically multiple eigenvalues. Therefore, in the construction of the main equation in Section 3, we use the special grouping (3.5) of the eigenvalues with respect to their asymptotics.

2. Because of the singular potential, we need to obtain some precise estimates related to the operator participating in the main equation (see Lemmas 4.4-4.5). These estimates play an important role in the proofs of the main equation solvability. Such estimates are not needed in the case of a regular potential.

3. When the matrix function $\sigma(x)$ is constructed by using the spectral data, we cannot directly substitute this function into equation (1.1) and so have to approximate it by smooth matrix functions $\sigma^N$.

In this section, we define the spectral data and provide their properties obtained in [44]. Further, we formulate the inverse problem (Inverse Problem 2.4), the corresponding uniqueness theorem (Proposition 2.5), and our main result (Theorem 2.6). The latter theorem gives necessary and sufficient conditions for the inverse problem solvability, or, in other words, the spectral data characterization.

Let us start with the notation.

1. Denote $\rho := \sqrt{\lambda}$, $\arg \rho \in [-\frac{\pi}{2}, \frac{\pi}{2})$ (unless stated otherwise).

2. We use the Euclidean norm in $\mathbb{C}^m$:
\[
\| a \| = \Bigl( \sum_{j=1}^m |a_j|^2 \Bigr)^{1/2}, \quad a = [a_j]_{j=1}^m,
\]
and the corresponding matrix norm $\| A \|$ equal to the maximal singular value of $A$.

3. The scalar product in the Hilbert space $L_2(I; \mathbb{C}^m)$ is defined as follows:
\[
(Y, Z) = \int_I (Y(x))^{\dagger} Z(x) \, dx = \sum_{j=1}^m \int_I \overline{y_j(x)} z_j(x) \, dx, \quad Y = [y_j(x)]_{j=1}^m, \; Z = [z_j(x)]_{j=1}^m \in L_2(I; \mathbb{C}^m).
\]
4. The same symbol $C$ is used for various positive constants independent of $n$, $x$, $\lambda$, etc.

Let $\varphi(x, \lambda)$ be the matrix solution of equation (1.1) satisfying the initial conditions $\varphi(0, \lambda) = T_1$, $\varphi^{[1]}(0, \lambda) = T_1^{\perp}$. Clearly, the matrix functions $\varphi(x, \lambda)$ and $\varphi^{[1]}(x, \lambda)$ are entire in $\lambda$ for each fixed $x \in [0, \pi]$. The eigenvalues of the problem $L$ coincide with the zeros of the entire characteristic function $\Delta(\lambda) := \det(V_2(\varphi(x, \lambda)))$, counted with their multiplicities.

The matrix function $\varphi(x, \lambda)$ can be represented in the form
\[
\varphi(x, \lambda) = \bigl( \cos \rho x \, T_1 + \sin \rho x \, T_1^{\perp} + \mathcal{K}_x(\rho) \bigr) \bigl( T_1 + \rho^{-1} T_1^{\perp} \bigr), \tag{2.1}
\]
where
\[
\mathcal{K}_x(\rho) = \int_{-x}^{x} \mathcal{K}(x, t) \exp(i \rho t) \, dt,
\]
the kernel $\mathcal{K}(x, \cdot)$ belongs to $L_2((-x, x); \mathbb{C}^{m \times m})$ for each fixed $x \in [0, \pi]$, and the norm $\| \mathcal{K}(x, \cdot) \|_{L_2((-x, x); \mathbb{C}^{m \times m})}$ is bounded uniformly with respect to $x \in (0, \pi]$. Using (2.1) and the analogous relation for $\varphi^{[1]}(x, \lambda)$, we have proved the following proposition in [44].

Proposition 2.1.
The spectrum of $L$ is a countable set of real eigenvalues $\{\lambda_{nk}\}_{(n,k) \in J}$, counted with their multiplicities and numbered in non-decreasing order: $\lambda_{n_1 k_1} \le \lambda_{n_2 k_2}$ if $(n_1, k_1) < (n_2, k_2)$. The following asymptotic relation holds:
\[
\rho_{nk} := \sqrt{\lambda_{nk}} = n + r_k + \varkappa_{nk}, \quad (n, k) \in J, \quad \{\varkappa_{nk}\} \in l_2, \tag{2.2}
\]
where
\[
J := \{ (n, k) \colon n \in \mathbb{N}, \, k = \overline{1, m} \} \cup \{ (0, k) \colon k = \overline{p^{\perp} + 1, m} \}, \qquad p^{\perp} := \dim(\operatorname{Ker} T_1 \cap \operatorname{Ker} T_2), \tag{2.3}
\]
$\{ r_k \}_{k=1}^m$ are the zeros of the function $\det(W(\rho))$ on $[0, 1)$, $0 \le r_1 \le r_2 \le \dots \le r_m < 1$,
\[
W(\rho) := (T_2 T_1 + T_2^{\perp} T_1^{\perp}) \sin \rho \pi + (T_2^{\perp} T_1 - T_2 T_1^{\perp}) \cos \rho \pi. \tag{2.4}
\]

The Weyl solution of $L$ is the matrix solution $\Phi(x, \lambda)$ of equation (1.1) satisfying the boundary conditions $V_1(\Phi) = I$, $V_2(\Phi) = 0$. The matrix function $M(\lambda) := T_1 \Phi(0, \lambda) + T_1^{\perp} \Phi^{[1]}(0, \lambda)$ is called the Weyl matrix of $L$. The matrix functions $M(\lambda)$ and $\Phi(x, \lambda)$ for each fixed $x \in [0, \pi]$ are meromorphic in $\lambda$. All their singularities are simple poles at $\lambda = \lambda_{nk}$, $(n, k) \in J$. Denote
\[
\alpha_{nk} := \operatorname*{Res}_{\lambda = \lambda_{nk}} M(\lambda), \quad (n, k) \in J.
\]
The matrices $\{\alpha_{nk}\}_{(n,k) \in J}$ are called the weight matrices, and the collection $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ is called the spectral data of $L$.

Let $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$ be a group of multiple eigenvalues maximal by inclusion, $(n_1, k_1) < (n_2, k_2) < \dots < (n_r, k_r)$. Clearly, $\alpha_{n_1 k_1} = \alpha_{n_2 k_2} = \dots = \alpha_{n_r k_r}$. Define $\alpha'_{n_1 k_1} := \alpha_{n_1 k_1}$, $\alpha'_{n_j k_j} := 0$, $j = \overline{2, r}$. Thus we obtain the sequence of matrices $\{\alpha'_{nk}\}_{(n,k) \in J}$.

Proposition 2.2.
The weight matrices are Hermitian and non-negative definite: $\alpha_{nk} = \alpha_{nk}^{\dagger} \ge 0$, $(n, k) \in J$. For each $(n, k) \in J$, $\operatorname{rank}(\alpha_{nk})$ equals the multiplicity of the eigenvalue $\lambda_{nk}$. Furthermore, the following asymptotic relation holds:
\[
\alpha_n^{(k)} := \sum_{s \in J_k} \alpha'_{ns} = \frac{2}{\pi} (T_1 + n T_1^{\perp}) (A_k + K_{nk}) (T_1 + n T_1^{\perp}), \quad n \ge 1, \; k \in \mathcal{J}, \tag{2.5}
\]
where
\[
\mathcal{J} := \{ 1 \} \cup \{ k = \overline{2, m} \colon r_k \ne r_{k-1} \}, \qquad J_k := \{ s = \overline{1, m} \colon r_s = r_k \}, \qquad \{ \| K_{nk} \| \} \in l_2,
\]
\[
A_k := \pi \operatorname*{Res}_{\rho = r_k} (W(\rho))^{-1} U(\rho), \qquad U(\rho) := (T_2 T_1 + T_2^{\perp} T_1^{\perp}) \cos \rho \pi + (T_2 T_1^{\perp} - T_2^{\perp} T_1) \sin \rho \pi.
\]
The matrices $\{ A_k \}_{k \in \mathcal{J}}$ are orthogonal projection matrices having the following properties:
\[
\operatorname{rank}(A_k) = |J_k|, \qquad A_k A_s = 0, \; k \ne s, \qquad \sum_{k \in \mathcal{J}} A_k = I.
\]

Consider a group of multiple eigenvalues $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$ maximal by inclusion. By Proposition 2.2, we have $\operatorname{rank}(\alpha_{n_1 k_1}) = r$, so $\operatorname{Ran} \alpha_{n_1 k_1}$ is an $r$-dimensional subspace of $\mathbb{C}^m$. Choose a basis $\{ \chi_{n_j k_j} \}_{j=1}^r$ of this subspace. This choice is non-unique; Proposition 2.3 is valid for any choice of the basis. Thus, we have defined the vector sequence $\{ \chi_{nk} \}_{(n,k) \in J}$. Consider the sequence of vector functions $X := \{ X_{nk} \}_{(n,k) \in J}$,
\[
X_{nk}(x) := \Bigl( \cos(\rho_{nk} x) \, T_1 + \frac{\sin(\rho_{nk} x)}{\rho_{nk}} \, T_1^{\perp} \Bigr) \chi_{nk}. \tag{2.6}
\]

Proposition 2.3.
The sequence $X$ is complete in $L_2((0, \pi); \mathbb{C}^m)$.

Proposition 2.3 immediately follows from [44, Theorem 5.1], which asserts the completeness of the following sequence $Y$. Put
\[
T_{nk} := \begin{cases} T_1 + \rho_{nk} T_1^{\perp}, & \rho_{nk} \ne 0, \\ I, & \rho_{nk} = 0, \end{cases} \qquad B_{nk} := \frac{\pi}{2} T_{nk}^{-1} \alpha_{nk} T_{nk}^{-1}, \quad (n, k) \in J.
\]
Clearly, $\operatorname{rank}(B_{nk})$ equals the multiplicity of the eigenvalue $\lambda_{nk}$. For any group of multiple eigenvalues $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$ considered above, choose an orthonormal basis $\{ \mathcal{E}_{n_j k_j} \}_{j=1}^r$ of the $r$-dimensional subspace $\operatorname{Ran} B_{n_1 k_1}$. Thus, we have defined the vector sequence $\{ \mathcal{E}_{nk} \}_{(n,k) \in J}$. Define $Y := \{ Y_{nk} \}_{(n,k) \in J}$,
\[
Y_{nk}(x) := \begin{cases} (\cos(\rho_{nk} x) \, T_1 + \sin(\rho_{nk} x) \, T_1^{\perp}) \mathcal{E}_{nk}, & \rho_{nk} \ne 0, \\ (T_1 + x T_1^{\perp}) \mathcal{E}_{nk}, & \rho_{nk} = 0. \end{cases} \tag{2.7}
\]
Clearly, the completeness of $X$ is equivalent to the completeness of $Y$, independently of the choice of the bases in the corresponding subspaces.

Now we turn to the following inverse spectral problem.

Inverse Problem 2.4.
Given the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$, find $\sigma$, $T_1$, $T_2$, $H$.

Along with the problem $L$, we consider the problem $\tilde L = L(\tilde\sigma, \tilde T_1, \tilde T_2, \tilde H)$ of the same form but with different coefficients. We agree that if a symbol $\gamma$ denotes an object related to $L$, then the symbol $\tilde\gamma$ with tilde denotes the similar object related to $\tilde L$. Note that the quasi-derivatives for these two problems are supposed to be different: $Y^{[1]} = Y' - \sigma Y$ for $L$ and $Y^{[1]} = Y' - \tilde\sigma Y$ for $\tilde L$. In [44], the following uniqueness theorem has been obtained.

Proposition 2.5. If $\lambda_{nk} = \tilde\lambda_{nk}$, $\alpha_{nk} = \tilde\alpha_{nk}$, $(n, k) \in J$, $J = \tilde J$, then
\[
\sigma(x) = \tilde\sigma(x) + H^{\diamond} \ \text{a.e. on } (0, \pi), \qquad T_1 = \tilde T_1, \quad T_2 = \tilde T_2, \quad H = \tilde H - T_2 H^{\diamond} T_2, \tag{2.8}
\]
where
\[
H^{\diamond} = (H^{\diamond})^{\dagger} = T_1^{\perp} H^{\diamond} T_1^{\perp}. \tag{2.9}
\]
Thus, the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ uniquely specify the problem $L$ up to the transform (2.8) given by an arbitrary matrix $H^{\diamond}$ satisfying (2.9). Conversely, the transform (2.8) does not change the spectral data.

The main result of this paper is the following theorem, which provides the characterization of the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ of the problem $L$.

Theorem 2.6. Let $T_1, T_2 \in \mathbb{C}^{m \times m}$ be arbitrary fixed orthogonal projection matrices. Then, for a collection $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ to be the spectral data of a problem $L = L(\sigma, T_1, T_2, H)$ of the form (1.1)-(1.2), the following conditions are necessary and sufficient:

(i) $\lambda_{nk} \in \mathbb{R}$, $\alpha_{nk} \in \mathbb{C}^{m \times m}$, $\alpha_{nk} = \alpha_{nk}^{\dagger} \ge 0$, $\operatorname{rank}(\alpha_{nk})$ is equal to the multiplicity of the corresponding value $\lambda_{nk}$ (i.e., to the number of times $\lambda_{nk}$ occurs in the sequence), for all $(n, k) \in J$, and $\alpha_{nk} = \alpha_{ls}$ if $\lambda_{nk} = \lambda_{ls}$.

(ii) The asymptotic relations (2.2) and (2.5) hold, where $\{r_k\}_{k=1}^m$ and $\{A_k\}_{k \in \mathcal{J}}$ are defined as in Propositions 2.1 and 2.2, respectively, by using the fixed $T_1$ and $T_2$.

(iii) The sequence $X$ defined by (2.6) is complete in $L_2((0, \pi); \mathbb{C}^m)$.

In Theorem 2.6, the index set $J$ is defined by the fixed matrices $T_1$ and $T_2$ via (2.3). We suppose that the matrices $T_1$ and $T_2$ are initially given, but this is done only for convenience of formulation. In fact, $T_1$ and $T_2$ can be uniquely recovered from the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ by [44, Algorithm 6.7]. Note that, in condition (iii), the sequence $X$ depends on the choice of $\{\chi_{nk}\}$. Obviously, if condition (iii) holds for some choice of $\{\chi_{nk}\}$, then it holds for any possible choice of $\{\chi_{nk}\}$.

The necessity part of Theorem 2.6 readily follows from Propositions 2.1-2.3. Therefore, our goal is to prove the sufficiency part. For this purpose, we need one more proposition proved in [44].

Proposition 2.7.
Suppose that the data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ satisfy the conditions (i)-(iii) of Theorem 2.6. Then the sequence $Y$ constructed by (2.7) is a Riesz basis in $L_2((0, \pi); \mathbb{C}^m)$.

The proof of Theorem 2.6 is based on several auxiliary theorems and lemmas provided in Sections 3-6. This proof is constructive and yields Algorithm 6.8 for solving Inverse Problem 2.4.
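Before turning to the main construction, it may be helpful to see the notions of this section in the simplest scalar situation; the following elementary computation is added here for illustration only and is not part of the original exposition. Let $m = 1$, $\sigma \equiv 0$, $T_1 = T_2 = 1$, $H = 0$, so that (1.2) becomes $Y'(0) = Y'(\pi) = 0$. Then $W(\rho) = \sin \rho \pi$ and $U(\rho) = \cos \rho \pi$, hence $r_1 = 0$, $A_1 = 1$, and $p^{\perp} = 0$, so $J = \{ (n, 1) \colon n \ge 0 \}$. A direct computation gives $\varphi(x, \lambda) = \cos \rho x$ and $M(\lambda) = \dfrac{\cos \rho \pi}{\rho \sin \rho \pi}$, whence
\[
\lambda_{n1} = n^2 \ (n \ge 0), \qquad \alpha_{01} = \frac{1}{\pi}, \qquad \alpha_{n1} = \frac{2}{\pi} \ (n \ge 1),
\]
in agreement with the asymptotics (2.2) and (2.5). Finally, $X_{n1}(x) = \cos(n x) \, \chi_{n1}$ with $\chi_{n1} \ne 0$, and the system $\{ \cos(n x) \}_{n \ge 0}$ is complete in $L_2(0, \pi)$, so condition (iii) of Theorem 2.6 holds, as it must.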
The goal of this section is to reduce the nonlinear Inverse Problem 2.4 to the linear main equation in a special Banach space. For the construction of this Banach space, we group the eigenvalues with respect to their asymptotics (2.2). In the next sections, the main equation is used for the proof of Theorem 2.6 and for the constructive solution of the inverse problem.

Consider the problem $L = L(\sigma, T_1, T_2, H)$ with the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$. Without loss of generality, we may assume that $\lambda_{nk} \ge 0$ and
$\rho_{nk} = \sqrt{\lambda_{nk}} \ge 0$, $(n, k) \in J$. One can easily achieve this condition by the shift
\[
\sigma(x) := \sigma(x) + c x I, \qquad H := H - c \pi T_2, \qquad \lambda_{nk} := \lambda_{nk} + c, \qquad c > 0.
\]
Fix the model problem $\tilde L := L(0, T_1, T_2, 0)$. Then
\[
\tilde\varphi(x, \lambda) = \cos \rho x \, T_1 + \frac{\sin \rho x}{\rho} \, T_1^{\perp}, \qquad \tilde\rho_{nk} = n + r_k, \tag{3.1}
\]
\[
\tilde\alpha_{nk} = \begin{cases} \dfrac{2}{\pi} (T_1 + \tilde\rho_{nk} T_1^{\perp}) A_k (T_1 + \tilde\rho_{nk} T_1^{\perp}), & \tilde\rho_{nk} \ne 0, \\[2mm] \dfrac{1}{\pi} T_1 A_k T_1, & \tilde\rho_{nk} = 0. \end{cases} \tag{3.2}
\]
Denote $\langle Z, Y \rangle := Z Y^{[1]} - Z^{[1]} Y$. Introduce the notations
\[
\tilde D(x, \lambda, \mu) := \frac{\langle \tilde\varphi(x, \lambda), \tilde\varphi(x, \mu) \rangle}{\lambda - \mu} = \int_0^x \tilde\varphi(t, \lambda) \tilde\varphi(t, \mu) \, dt, \tag{3.3}
\]
\[
\lambda_{nk0} := \lambda_{nk}, \quad \lambda_{nk1} := \tilde\lambda_{nk}, \quad \rho_{nk0} := \rho_{nk}, \quad \rho_{nk1} := \tilde\rho_{nk}, \quad \alpha_{nk0} := \alpha_{nk}, \quad \alpha_{nk1} := \tilde\alpha_{nk}, \quad \alpha'_{nk0} := \alpha'_{nk}, \quad \alpha'_{nk1} := \tilde\alpha'_{nk}.
\]
Using contour integration in the $\lambda$-plane (see [43]), we prove the following lemma.

Lemma 3.1.
The following relation holds:
\[
\tilde\varphi(x, \lambda_{nki}) = \varphi(x, \lambda_{nki}) + \sum_{(l,s) \in J} \bigl( \varphi(x, \lambda_{ls0}) \alpha'_{ls0} \tilde D(x, \lambda_{ls0}, \lambda_{nki}) - \varphi(x, \lambda_{ls1}) \alpha'_{ls1} \tilde D(x, \lambda_{ls1}, \lambda_{nki}) \bigr), \tag{3.4}
\]
for $(n, k) \in J$, $i = 0, 1$. The series converges in the sense $\lim_{N \to \infty} \sum_{l \le N} (\ldots)$, absolutely and uniformly with respect to $x \in [0, \pi]$.

It is inconvenient to use (3.4) as the main equation of the inverse problem, since the series in (3.4) converges only "with brackets". Below we transform (3.4) into a linear equation in a specially constructed Banach space.

Let $\mathcal{J} =: \{ j_k \}_{k=1}^{|\mathcal{J}|}$. Divide the square roots $\{ \rho_{nki} \}$ of the eigenvalues into collections (multisets) as follows:
\[
G_1 := \{ \rho_{nki} \colon (n, k) \in J, \, n \le n_0, \, i = 0, 1 \}, \qquad
G_{|\mathcal{J}| q + s + 1} := \{ \rho_{nki} \colon n = n_0 + q + 1, \, r_k = r_{j_s}, \, i = 0, 1 \}, \quad q \ge 0, \; s = \overline{1, |\mathcal{J}|}. \tag{3.5}
\]
In view of the asymptotics (2.2), we can choose and fix $n_0$ such that $G_n \cap G_k = \varnothing$ for $n \ne k$.

For any multiset $G$ of real numbers, let $B(G)$ be the finite-dimensional space of matrix functions $f \colon G \to \mathbb{C}^{m \times m}$ such that $f(\rho) = f(\theta)$ if $\rho = \theta$. The norm in $B(G)$ is defined as follows:
\[
\| f \|_{B(G)} = \max \Bigl( \max_{\rho \in G} \| f(\rho) \|, \; \max_{\substack{\rho \ne \theta \\ \rho, \theta \in G}} |\rho - \theta|^{-1} \| f(\rho) - f(\theta) \| \Bigr). \tag{3.6}
\]
Introduce the Banach space $B$ of infinite sequences:
\[
B := \{ f = \{ f_n \}_{n \ge 1} \colon f_n \in B(G_n), \, n \ge 1, \, \| f \|_B := \sup_{n \ge 1} \| f_n \|_{B(G_n)} < \infty \}. \tag{3.7}
\]
For $(n, k) \in J$, $i = 0, 1$,
denote
\[
T_{nki} := \begin{cases} T_1 + \rho_{nki} T_1^{\perp}, & \rho_{nki} \ne 0, \\ I, & \rho_{nki} = 0, \end{cases} \qquad \phi_{nki}(x) := \varphi(x, \lambda_{nki}) T_{nki}.
\]
Put $\phi(x) := \{ \phi_n(x) \}_{n=1}^{\infty}$, $\phi_n(x)(\rho_{lsj}) := \phi_{lsj}(x)$ for $\rho_{lsj} \in G_n$. Analogously, define $\tilde\phi(x)$, replacing $\varphi$ by $\tilde\varphi$. Using relation (2.1), we obtain the estimates
\[
\| \phi_{nki}(x) \| \le C, \qquad \| \phi_{nki}(x) - \phi_{lsj}(x) \| \le C |\rho_{nki} - \rho_{lsj}|, \qquad \rho_{nki}, \rho_{lsj} \in G_q,
\]
for $q \ge 1$, $x \in [0, \pi]$. Hence $\phi(x) \in B$ and, similarly, $\tilde\phi(x) \in B$ for each fixed $x \in [0, \pi]$. In addition, $\phi(x)$ and $\tilde\phi(x)$ are uniformly bounded in $B$ with respect to $x \in [0, \pi]$.

For each fixed $x \in [0, \pi]$, define the linear operator $\tilde R(x) \colon B \to B$, $\tilde R(x) = [\tilde R_{k,n}(x)]_{k,n=1}^{\infty}$, acting on an element $f = \{ f_n \}_{n=1}^{\infty}$ of $B$ by the following rule:
\[
(f \tilde R(x))_n = \sum_{k=1}^{\infty} f_k \tilde R_{k,n}(x), \qquad \tilde R_{k,n}(x) \colon B(G_k) \to B(G_n), \tag{3.8}
\]
\[
(f_k \tilde R_{k,n}(x))(\rho_{\eta q i}) = \sum_{\rho_{lsj} \in G_k} (-1)^j f_k(\rho_{lsj}) T_{lsj}^{-1} \alpha'_{lsj} \tilde D(x, \lambda_{lsj}, \lambda_{\eta q i}) T_{\eta q i}, \qquad \rho_{\eta q i} \in G_n. \tag{3.9}
\]
Here we put operators to the right of operands to show the order of matrix multiplication.

Theorem 3.2. The series (3.8) converges in the $B(G_n)$-norm. For each fixed $x \in [0, \pi]$, the operator $\tilde R(x)$ is bounded and can be approximated by finite-dimensional operators in the norm $\| \cdot \|_{B \to B}$.

Theorem 3.2 is proved in Section 4. Taking the above definitions into account, we rewrite relation (3.4) in the form
\[
\phi(x) (I + \tilde R(x)) = \tilde\phi(x), \quad x \in [0, \pi], \tag{3.10}
\]
where $I$ is the unit operator in $B$. For each fixed $x \in [0, \pi]$, relation (3.10) is a linear equation with respect to $\phi(x)$ in the Banach space $B$. Note that $\tilde\phi(x)$ and $\tilde R(x)$ are constructed from the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ and the model problem $\tilde L$, while the unknown element $\phi(x)$ is related to the problem $L$. Further, the solution of the main equation (3.10) is used for the constructive solution of Inverse Problem 2.4. Therefore, we call (3.10) the main equation of the inverse problem.

In this section, we investigate properties of the operator $\tilde R(x)$ and obtain the estimates needed in the further proofs. It is supposed that $\tilde R(x)$ is the operator constructed from a collection $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ satisfying the asymptotics (2.2) and (2.5) and from the model problem $\tilde L = L(0, T_1, T_2, 0)$. The data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ are not assumed to be the spectral data of some problem $L$. This allows us to use the results of this section in the proof of the sufficiency part of Theorem 2.6.

For $n \ge 1$, denote
\[
\hat\alpha(G_n) := \sum_{\rho_{lsj} \in G_n} (-1)^j T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1}, \tag{4.1}
\]
\[
\xi_n := \sum_{\rho, \theta \in G_n} |\rho - \theta| + \| \hat\alpha(G_n) \|. \tag{4.2}
\]
It follows from the asymptotic formulas (2.2) and (2.5) that
\[
\Xi := \Bigl( \sum_{n=1}^{\infty} \xi_n^2 \Bigr)^{1/2} < \infty. \tag{4.3}
\]
Put $\tilde D_T(x, \rho, \theta) := (T_1 + \rho T_1^{\perp}) \tilde D(x, \rho^2, \theta^2) (T_1 + \theta T_1^{\perp})$. Substituting (3.1) into (3.3) and using (2.2) and (4.2), we obtain the following lemma.
Lemma 4.1.
For $x \in [0, \pi]$, $n, k \ge 1$, $\rho, \zeta \in G_n$, $\theta, \chi \in G_k$, the following estimates hold:
\[
\| \tilde D_T(x, \rho, \theta) \| \le \frac{C}{|n - k| + 1}, \qquad
\| \tilde D_T(x, \rho, \theta) - \tilde D_T(x, \rho, \chi) \| \le \frac{C \xi_k}{|n - k| + 1}, \qquad
\| \tilde D_T(x, \rho, \theta) - \tilde D_T(x, \rho, \chi) - \tilde D_T(x, \zeta, \theta) + \tilde D_T(x, \zeta, \chi) \| \le \frac{C \xi_n \xi_k}{|n - k| + 1}.
\]
For $x \in [0, \pi]$, $n \ge 2$, $\rho, \zeta \in G_n$, $\theta \in \mathbb{C}$, we have
\[
\| \tilde D_T(x, \rho, \theta) \| \le \frac{C \exp(|\mathrm{Im}\,\theta| x)}{|\theta - m_n| + 1}, \qquad
\| \tilde D_T(x, \rho, \theta) - \tilde D_T(x, \zeta, \theta) \| \le \frac{C \xi_n \exp(|\mathrm{Im}\,\theta| x)}{|\theta - m_n| + 1},
\]
where $m_n = l + r_s$ for $(l, s)$ such that $\rho_{lsj} \in G_n$. In all the estimates, the constant $C$ does not depend on $n$, $k$, $x$, etc.

Lemma 4.2.
For $x \in [0, \pi]$, the following estimates hold:
\[
\| \tilde R_{k,n}(x) \|_{B(G_k) \to B(G_n)} \le \frac{C \xi_k}{|n - k| + 1}, \quad n, k \ge 1, \tag{4.4}
\]
\[
\| \tilde R(x) \|_{B \to B} \le C \Xi, \tag{4.5}
\]
where the constant $C$ does not depend on $n$, $k$, and $x$.

Proof. Estimate (4.4) is proved by using (3.6), (3.9), and the summation rule
\[
\sum_{u=1}^{v} a_u b_u c_u = \sum_{u=1}^{v} (a_u - a_1) b_u c_u + a_1 \sum_{u=1}^{v} b_u (c_u - c_1) + a_1 \Bigl( \sum_{u=1}^{v} b_u \Bigr) c_1. \tag{4.6}
\]
We put
\[
a_u = f_k(\rho_{lsj}), \qquad b_u = (-1)^j T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1}, \qquad c_u = T_{lsj} \tilde D(x, \lambda_{lsj}, \lambda_{\eta q i}) T_{\eta q i},
\]
apply the estimates $\| f_k \|_{B(G_k)} \le C$, (4.2), and Lemma 4.1, and so arrive at (4.4). Relations (3.7) and (3.8) yield
\[
\| \tilde R(x) \|_{B \to B} \le \sup_{n \ge 1} \sum_{k=1}^{\infty} \| \tilde R_{k,n}(x) \|_{B(G_k) \to B(G_n)}.
\]
Using (4.3) and (4.4), we arrive at (4.5).
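For the reader's convenience, we note that the summation rule (4.6) is an exact algebraic identity; the following one-line verification is added here and is not part of the original proof. Expanding the right-hand side,
\[
\sum_{u=1}^{v} (a_u - a_1) b_u c_u + a_1 \sum_{u=1}^{v} b_u (c_u - c_1) + a_1 \Bigl( \sum_{u=1}^{v} b_u \Bigr) c_1
= \sum_{u=1}^{v} a_u b_u c_u - a_1 \sum_{u=1}^{v} b_u c_u + a_1 \sum_{u=1}^{v} b_u c_u - a_1 \Bigl( \sum_{u=1}^{v} b_u \Bigr) c_1 + a_1 \Bigl( \sum_{u=1}^{v} b_u \Bigr) c_1 = \sum_{u=1}^{v} a_u b_u c_u.
\]
The point of the decomposition is that, with the choices of $a_u$, $b_u$, $c_u$ made above, the differences $a_u - a_1$ and $c_u - c_1$ within one group $G_k$ are small (by (3.6) and Lemma 4.1), while $\sum_u b_u = \hat\alpha(G_k)$ is controlled through $\xi_k$ by (4.2).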
Proof of Theorem 3.2.
It readily follows from (4.3), (4.4), and (4.5) that the series (3.8) converges in the $B(G_n)$-norm and the operator $\tilde R(x)$ is bounded. Define the finite-dimensional operators $\tilde R^N(x) = [\tilde R^N_{k,n}(x)]_{k,n=1}^{\infty}$,
\[
\tilde R^N_{k,n}(x) = \begin{cases} \tilde R_{k,n}(x), & k \le N, \\ 0, & k > N, \end{cases} \qquad N \ge 1.
\]
Using (4.4), it is easy to show that the sequence $\{ \tilde R^N(x) \}$ converges to $\tilde R(x)$ in the norm $\| \cdot \|_{B \to B}$ for each fixed $x \in [0, \pi]$.

Further, we need the following auxiliary proposition, which easily follows from the asymptotics (2.2) and the Riesz basicity of the sequences $\{ \cos(n + \varkappa_n) x \}_{n=0}^{\infty}$, $\{ \sin(n + \varkappa_n) x \}_{n=1}^{\infty}$ in $L_2(0, \pi)$, $\{ \varkappa_n \} \in l_2$, $n + \varkappa_n \ne k + \varkappa_k$ for $n \ne k$ (see [45]).

Proposition 4.3. (i) Let $\{ \varkappa_{nki} \}$ be an arbitrary sequence from $l_2$. Then the series
\[
F_c(x) := \sum_{n,k,i} \varkappa_{nki} \cos(\rho_{nki} x), \qquad F_s(x) := \sum_{n,k,i} \varkappa_{nki} \sin(\rho_{nki} x)
\]
converge in $L_2(0, \pi)$ and $\| F_c \|_{L_2(0,\pi)}, \| F_s \|_{L_2(0,\pi)} \le C \| \{ \varkappa_{nki} \} \|_{l_2}$, where the constant $C$ depends only on $\{ \rho_{nki} \}$ and not on $\{ \varkappa_{nki} \}$.

(ii) Let $F(x)$ be an arbitrary function from $L_2(0, \pi)$. Put
\[
\varkappa_{c,nki} = \int_0^{\pi} F(x) \cos(\rho_{nki} x) \, dx, \qquad \varkappa_{s,nki} = \int_0^{\pi} F(x) \sin(\rho_{nki} x) \, dx.
\]
Then the sequences $\{ \varkappa_{c,nki} \}$ and $\{ \varkappa_{s,nki} \}$ belong to $l_2$ and $\| \{ \varkappa_{c,nki} \} \|_{l_2}, \| \{ \varkappa_{s,nki} \} \|_{l_2} \le C \| F \|_{L_2(0,\pi)}$, where the constant $C$ depends only on $\{ \rho_{nki} \}$ and not on $F$.

In (i) and (ii), the indices $(n, k, i)$ run over the set $(n, k) \in J$, $i = 0, 1$.

For convenience, for any sequence $f = \{ f_n \}_{n=1}^{\infty} \in B$, we denote $f_{lsj} := f_n(\rho_{lsj})$, where $n$ is such that $\rho_{lsj} \in G_n$.

Lemma 4.4.
Suppose that $f \in B$ and $g(x) = \tilde R(x) f$. Then the corresponding sequence $\{ \| g_{nki}(x) \| \}$ belongs to $l_2$ for each fixed $x \in [0, \pi]$, and $\| \{ \| g_{nki}(x) \| \} \|_{l_2} \le C \| f \|_B$ uniformly with respect to $x \in [0, \pi]$.

Proof. Substituting (3.1) into (3.3), we obtain
\[
\tilde D(x, \lambda_{lsj}, \lambda_{nki}) = T_{lsj}^{-1} \int_0^x \bigl( \cos(\rho_{lsj} t) \cos(\rho_{nki} t) \, T_1 + \sin(\rho_{lsj} t) \sin(\rho_{nki} t) \, T_1^{\perp} \bigr) \, dt \, T_{nki}^{-1}. \tag{4.7}
\]
For simplicity, throughout this proof we assume that $\rho_{nki} \ne 0$ and $\rho_{lsj} \ne 0$. The opposite case requires minor technical changes. Using (3.9) and (4.7), we derive
\[
g_{nki}(x) T_1 = \int_0^x F(t) \cos(\rho_{nki} t) \, dt, \tag{4.8}
\]
\[
F(t) := \sum_{l,s,j} (-1)^j f_{lsj} T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1} T_1 \cos(\rho_{lsj} t). \tag{4.9}
\]
Using the summation rule (4.6) in (4.9), relations (2.2), (4.2), and Proposition 4.3(i), we prove that $F \in L_2((0, \pi); \mathbb{C}^{m \times m})$ and $\| F \|_{L_2} \le C \| f \|_B$. Applying Proposition 4.3(ii) to (4.8), we show that $\{ \| g_{nki}(x) T_1 \| \} \in l_2$ for each fixed $x \in [0, \pi]$ and the $l_2$-norm of this sequence does not exceed $C \| F \|_{L_2}$, where $C$ does not depend on $x \in [0, \pi]$. Similar arguments are valid for $g_{nki} T_1^{\perp}$. This concludes the proof.

Lemma 4.5.
Suppose that $f \in B$ and $g(x) = \tilde R(x) f$. Let indices $q \in \mathcal{J}$, $k, s \in J_q$, $i, j \in \{0, 1\}$ be fixed. Then the sequence $\{ \| g_{nki}(x) - g_{nsj}(x) \| \}$ belongs to $l_2$ for each fixed $x \in [0, \pi]$, and $\| \{ \| g_{nki}(x) - g_{nsj}(x) \| \} \|_{l_2} \le C \| f \|_B$ uniformly with respect to $x \in [0, \pi]$.

Proof. For fixed $k$, $s$, $i$, $j$ satisfying the conditions of the lemma, we have
\[
\cos(\rho_{nki} t) - \cos(\rho_{nsj} t) = \varkappa_n t \sin((n + r_k) t) + \zeta_n(t), \qquad \{ \varkappa_n \} \in l_2, \qquad \sum_n \max_{t \in [0, \pi]} |\zeta_n(t)| \le C, \tag{4.10}
\]
where $\varkappa_n$ does not depend on $t$, $t \in [0, \pi]$. Using (4.8), (4.10), and Proposition 4.3, we obtain
\[
(g_{nki}(x) - g_{nsj}(x)) T_1 = \int_0^x F(t) \bigl( \cos(\rho_{nki} t) - \cos(\rho_{nsj} t) \bigr) \, dt = \varkappa_n \mathcal{K}_n(x) + Z_n(x),
\]
where
\[
\| \{ \| \mathcal{K}_n(x) \| \} \|_{l_2} \le C \| F \|_{L_2}, \qquad \| Z_n(x) \| \le C \| F \|_{L_2} \max_{t \in [0, \pi]} |\zeta_n(t)|
\]
uniformly with respect to $x \in [0, \pi]$. Taking the estimate $\| F \|_{L_2} \le C \| f \|_B$ into account, we obtain the assertion of the lemma for the sequence $\{ \| (g_{nki}(x) - g_{nsj}(x)) T_1 \| \}$. The sequence $\{ \| (g_{nki}(x) - g_{nsj}(x)) T_1^{\perp} \| \}$ can be studied similarly.

5 Solvability of the main equation
In this section, we suppose that the collection $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ satisfies the conditions of Theorem 2.6 and prove the unique solvability of the main equation (3.10).

Theorem 5.1.
For each fixed $x \in [0, \pi]$, the operator $(I + \tilde R(x)) \colon B \to B$ has a bounded inverse, so the main equation (3.10) has a unique solution $\phi(x) \in B$.

Proof. Fix $x \in [0, \pi]$. By virtue of Theorem 3.2, the operator $\tilde R(x)$ can be approximated by finite-dimensional operators. Therefore, in view of Fredholm's theorem, it suffices to prove that the homogeneous equation
\[
\beta(x) (I + \tilde R(x)) = 0, \qquad \beta(x) = \{ \beta_n(x) \}_{n=1}^{\infty} \in B, \tag{5.1}
\]
has only the solution $\beta(x) = 0$ in $B$. Since $\beta(x) = -\tilde R(x) \beta(x)$, Lemmas 4.4 and 4.5 imply
\[
\{ \| \beta_{nki}(x) \| \} \in l_2, \qquad \{ \| \beta_{nki}(x) - \beta_{nsj}(x) \| \} \in l_2, \tag{5.2}
\]
for fixed $k, s \in J_q$, $q \in \mathcal{J}$, $i, j \in \{0, 1\}$. Introduce the matrix functions
\[
\gamma(x, \lambda) := -\sum_{l,s,j} (-1)^j \beta_{lsj}(x) T_{lsj}^{-1} \alpha'_{lsj} \tilde D(x, \lambda_{lsj}, \lambda), \tag{5.3}
\]
\[
\Gamma(x, \lambda) := -\sum_{l,s,j} (-1)^j \beta_{lsj}(x) T_{lsj}^{-1} \alpha'_{lsj} \tilde E(x, \lambda_{lsj}, \lambda), \tag{5.4}
\]
\[
\tilde E(x, \lambda, \mu) := \frac{\langle \tilde\varphi(x, \lambda), \tilde\Phi(x, \mu) \rangle}{\lambda - \mu}, \qquad B(x, \lambda) := \Gamma(x, \lambda) (\gamma(x, \lambda))^{\dagger}. \tag{5.5}
\]
In (5.3) and (5.4), the indices $(l, s, j)$ run over the set $(l, s) \in J$, $j = 0, 1$.
The matrix function $\gamma(x, \lambda)$ is entire in $\lambda$, while $\Gamma(x, \lambda)$ and $B(x, \lambda)$ are meromorphic in $\lambda$ with the simple poles $\{ \lambda_{nki} \}$. Relation (5.1) implies $\gamma(x, \lambda_{nki}) = \beta_{nki}(x) T_{nki}^{-1}$, $(n, k) \in J$, $i = 0, 1$.
Calculations show that
\[
\operatorname*{Res}_{\lambda = \lambda_{nk}} B(x, \lambda) = \gamma(x, \lambda_{nk}) \alpha_{nk} (\gamma(x, \lambda_{nk}))^{\dagger}, \qquad \operatorname*{Res}_{\lambda = \tilde\lambda_{nk}} B(x, \lambda) = 0 \tag{5.6}
\]
if $\lambda_{nk} \ne \tilde\lambda_{ls}$ for all $(n, k), (l, s) \in J$. The opposite case requires minor changes.

Using (5.3), the summation rule (4.6), (5.2), (4.2), and Lemma 4.1, we obtain
\[
\| \gamma(x, \lambda) (T_1 + \rho T_1^{\perp}) \| \le C(x) \exp(|\mathrm{Im}\,\rho| x) \sum_{n=1}^{\infty} \frac{\theta_n}{|\rho - m_n| + 1}. \tag{5.7}
\]
Here and below, $\rho = \sqrt{\lambda}$, $\arg \rho \in [-\frac{\pi}{2}, \frac{\pi}{2})$, and the notation $\{ \theta_n \}$ stands for various $l_2$-sequences of non-negative numbers. Analogously to (5.7), we get
\[
\| \Gamma(x, \lambda) (\rho T_1 + T_1^{\perp}) \| \le C(x) \exp(-|\mathrm{Im}\,\rho| x) \sum_{n=1}^{\infty} \frac{\theta_n}{|\rho - m_n| + 1}, \qquad \rho \in G_{\delta}, \; |\rho| \ge \rho^*, \tag{5.8}
\]
where $G_{\delta} := \{ \rho \in \mathbb{C} \colon |\rho - (n + r_k)| \ge \delta, \, n \in \mathbb{Z}, \, k = \overline{1, m} \}$, and $\delta$, $\rho^*$ are some positive reals.

Suppose that $\lambda \in \Upsilon_{N+r}$, $\Upsilon_{N+r} := \{ \lambda \in \mathbb{C} \colon |\lambda| = (N + r)^2 \}$, where $N \in \mathbb{N}$, $r$ is fixed, $r \ne r_k$, $k = \overline{1, m}$. Using (5.5), (5.7), and (5.8), we obtain
\[
\| B(x, \lambda) \| \le \frac{C(x)}{N} \Bigl( \sum_{n=1}^{\infty} \frac{\theta_n}{|N - n| + 1} \Bigr)^2.
\]
Consequently,
\[
\| B(x, \lambda) \| \le \frac{C(x)}{N} f_N^2, \qquad f_N := \sum_{n=1}^{\infty} \frac{\theta_n}{|N - n| + 1}, \qquad \lambda \in \Upsilon_{N+r}.
\]
Obviously, $\{ f_N \} \in l_2$. This implies $\varliminf_{N \to \infty} N f_N^2 = 0$. Hence, there exists a sequence $\{ N_k \}$ such that
\[
\max_{\lambda \in \Upsilon_{N_k + r}} \| B(x, \lambda) \| = o(N_k^{-2}), \quad k \to \infty.
\]
Therefore,
\[
\lim_{k \to \infty} \oint_{\Upsilon_{N_k + r}} B(x, \lambda) \, d\lambda = 0.
\]
Using the residue theorem and (5.6), we show that
\[
\sum_{(n,k) \in J} \gamma(x, \lambda_{nk}) \alpha'_{nk} (\gamma(x, \lambda_{nk}))^{\dagger} = 0.
\]
Since $\alpha'_{nk} = (\alpha'_{nk})^{\dagger} \ge 0$,
we get
\[
\gamma(x, \lambda_{nk}) \alpha_{nk} = 0, \quad (n, k) \in J. \tag{5.9}
\]
It is easy to see that the matrix function $\gamma(x, \rho^2) T_1$ is even in $\rho$ and $\gamma(x, \rho^2) \rho T_1^{\perp}$ is odd. It follows from (5.7), (5.3), (5.2), (4.2), (4.7), and Proposition 4.3 that these matrix functions are $O(\exp(|\mathrm{Im}\,\rho| x))$ and belong to $L_2(\mathbb{R}; \mathbb{C}^{m \times m})$. Applying the Paley-Wiener theorem, we obtain the representation
\[
\gamma(x, \lambda) = \int_0^{\pi} (h(x, t))^{\dagger} \Bigl( \cos \rho t \, T_1 + \frac{\sin \rho t}{\rho} \, T_1^{\perp} \Bigr) dt, \qquad h(x, \cdot) \in L_2((0, \pi); \mathbb{C}^{m \times m}). \tag{5.10}
\]
Combining (5.9) and (5.10), we get $(h, X_{nk}) = 0$ for each fixed $x \in [0, \pi]$ and for all $(n, k) \in J$. Since the sequence $X = \{ X_{nk} \}_{(n,k) \in J}$ is complete in $L_2((0, \pi); \mathbb{C}^m)$, it follows that $h = 0$ for each fixed $x \in [0, \pi]$. Consequently, $\gamma(x, \lambda) \equiv 0$ and $\beta(x) = 0$, so the homogeneous equation (5.1) has only the trivial solution in $B$. This yields the claim.

In this section, we prove the sufficiency part of Theorem 2.6. Let $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ be a collection satisfying the conditions of Theorem 2.6. Suppose that the Banach space $B$, the element $\tilde\phi(x) \in B$, and the operator $\tilde R(x) \colon B \to B$ for each fixed $x \in [0, \pi]$ are constructed in accordance with Section 3. By virtue of Theorem 5.1, the main equation (3.10) has a unique solution $\phi(x) \in B$ for each fixed $x \in [0, \pi]$. Similarly to [43, Lemma 5.3], we obtain the following result.

Lemma 6.1. The elements $\{ \phi_{nki}(x) \}$ of $\phi(x)$ can be represented in the form
\[
\phi_{nki}(x) = \cos((n + r_k) x) \, T_1 + \sin((n + r_k) x) \, T_1^{\perp} + \psi_{nki}(x), \quad (n, k) \in J, \; i = 0, 1,
\]
where the matrix functions $\psi_{nki}$ are continuous on $[0, \pi]$, the sequence $\{ \| \psi_{nki}(x) \| \}$ belongs to $l_2$ for each fixed $x \in [0, \pi]$, and the $l_2$-norm of this sequence is uniformly bounded with respect to $x \in [0, \pi]$.

Construct the matrix function $\sigma(x)$ and the matrix $H$ as follows:
\[
\sigma(x) := -\sum_{n=1}^{\infty} \Biggl( \sum_{\rho_{lsj} \in G_n} (-1)^j \phi_{lsj}(x) T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1} \tilde\phi_{lsj}(x) - \bigl( T_1 \hat\alpha(G_n) T_1 + T_1^{\perp} \hat\alpha(G_n) T_1^{\perp} \bigr) \Biggr), \tag{6.1}
\]
\[
H := T_2 \sum_{n=1}^{\infty} \Biggl( \sum_{\rho_{lsj} \in G_n} (-1)^j \phi_{lsj}(\pi) T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1} \tilde\phi_{lsj}(\pi) - \bigl( T_1 \hat\alpha(G_n) T_1 + T_1^{\perp} \hat\alpha(G_n) T_1^{\perp} \bigr) \Biggr) T_2, \tag{6.2}
\]
where $\hat\alpha(G_n)$ is defined by (4.1). Relying on Lemmas 4.5 and 6.1, Proposition 4.3, and relations (4.2), (4.3), we prove the following lemma.

Lemma 6.2.
The series (6.1) and (6.2) converge in $L_2((0, \pi); \mathbb{C}^{m \times m})$ and in $\mathbb{C}^{m \times m}$, respectively.

Thus, we have constructed $\sigma(x)$ and $H$ by formulas (6.1) and (6.2), respectively. Consider the corresponding boundary value problem $L = L(\sigma, T_1, T_2, H)$ of the form (1.1)-(1.2). It remains to prove the following theorem.

Theorem 6.3.
The values $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ are the spectral data of $L$.

In order to prove Theorem 6.3, consider the data $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
defined as follows:
\[
\lambda_{nk}^N = \begin{cases} \lambda_{nk}, & n \le N, \\ \tilde\lambda_{nk}, & n > N, \end{cases} \qquad
\alpha_{nk}^N = \begin{cases} \alpha_{nk}, & n \le N, \\ \tilde\alpha_{nk}, & n > N, \end{cases} \qquad N \in \mathbb{N}. \tag{6.3}
\]

Lemma 6.4.
The collection $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
satisfies conditions (i)-(iii) of Theorem 2.6 for all sufficiently large $N$.

Proof. Conditions (i)-(ii) are obvious, so we focus on the proof of (iii). We have to show that the sequence
\[
Y^N := \{ Y_{nk}^N \}_{(n,k) \in J}, \qquad Y_{nk}^N := \begin{cases} Y_{nk}, & n \le N, \\ \tilde Y_{nk}, & n > N, \end{cases}
\]
is complete in $L_2((0, \pi); \mathbb{C}^m)$ for each sufficiently large $N$. In view of condition (iii) of Theorem 2.6, the sequence $X$ is complete, so $Y$ is also complete. By virtue of Proposition 2.7, $Y$ is a Riesz basis. Consider the sequence
\[
Y^{N \bullet} := \{ Y_{nk}^{N \bullet} \}_{(n,k) \in J}, \qquad Y_{nk}^{N \bullet} := \begin{cases} Y_{nk}, & n \le N, \\ Y_{nk}^{\bullet}, & n > N, \end{cases}
\]
where $Y_{nk}^{\bullet}$ is defined similarly to $Y_{nk}$ (see (2.7)), but with $\mathcal{E}_{nk}$ replaced by $\mathcal{E}_{nk}^{\bullet} := A_k \mathcal{E}_{nk}$. It is easy to show that
\[
\lim_{N \to \infty} \sum_{(n,k) \in J, \; n > N} \| Y_{nk} - Y_{nk}^{\bullet} \|_{L_2((0,\pi); \mathbb{C}^m)}^2 = 0.
\]
Hence $Y^{N \bullet}$ is a Riesz basis for sufficiently large $N$, so $Y^{N \bullet}$ is complete in $L_2((0, \pi); \mathbb{C}^m)$ for such $N$. It is easy to check that, for each fixed sufficiently large $n$ and each fixed $k \in \mathcal{J}$, the vector functions $\{ Y_{ns}^{\bullet} \}_{s \in J_k}$ are linear combinations of $\{ \tilde Y_{ns} \}_{s \in J_k}$. This implies that $Y^N$ is also complete in $L_2((0, \pi); \mathbb{C}^m)$.

By using $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$ and the model problem $\tilde L = L(0, T_1, T_2, 0)$, we construct the element $\tilde\phi^N(x)$ and the operator $\tilde R^N(x)$ similarly to $\tilde\phi(x)$ and $\tilde R(x)$, respectively. Let $\phi^N(x)$ be the solution of the main equation
\[
\phi^N(x) (I + \tilde R^N(x)) = \tilde\phi^N(x), \quad x \in [0, \pi], \tag{6.4}
\]
analogous to (3.10). By virtue of Theorem 5.1, the solution of (6.4) exists and is unique. Obviously, for the matrix sequences $\{ \phi_{nki}^N(x) \}$ and $\{ \tilde\phi_{nki}^N(x) \}$ corresponding to $\phi^N(x)$ and $\tilde\phi^N(x)$, respectively, the following relations hold: $\tilde\phi_{nki}^N(x) = \tilde\phi_{nki}(x)$ for $n \le N$; $\phi_{nk0}^N(x) = \phi_{nk1}^N(x)$, $\tilde\phi_{nk0}^N(x) = \tilde\phi_{nk1}^N(x)$ for $n > N$. Taking these relations into account, similarly to (6.1) and (6.2), we define
\[
\sigma^N(x) := -\sum_{n=1}^{g(N)} \Biggl( \sum_{\rho_{lsj} \in G_n} (-1)^j \phi_{lsj}^N(x) T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1} \tilde\phi_{lsj}(x) - \bigl( T_1 \hat\alpha(G_n) T_1 + T_1^{\perp} \hat\alpha(G_n) T_1^{\perp} \bigr) \Biggr), \tag{6.5}
\]
\[
H^N := T_2 \sum_{n=1}^{g(N)} \Biggl( \sum_{\rho_{lsj} \in G_n} (-1)^j \phi_{lsj}^N(\pi) T_{lsj}^{-1} \alpha'_{lsj} T_{lsj}^{-1} \tilde\phi_{lsj}(\pi) - \bigl( T_1 \hat\alpha(G_n) T_1 + T_1^{\perp} \hat\alpha(G_n) T_1^{\perp} \bigr) \Biggr) T_2, \tag{6.6}
\]
where $g(N)$ is such that
\[
\bigcup_{n=1}^{g(N)} G_n = \{ \rho_{lsj} \colon (l, s) \in J, \, l \le N, \, j = 0, 1 \}.
\]
Here and above, we assume that $N$ is large enough.

Let us show that $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
are the spectral data of the problem $L^N := L(\sigma^N, T_1, T_2, H^N)$, i.e., prove Theorem 6.3 for $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$.
This special case is much easier for investigation than the general case, since the main equation (6.4) in the element-wise form contains a finite sum:
\[
\tilde\phi_{nki}^N(x) = \phi_{nki}^N(x) + \sum_{l,s,j \colon l \le N} (-1)^j \phi_{lsj}^N(x) T_{lsj}^{-1} \alpha'_{lsj} \tilde D(x, \lambda_{lsj}, \lambda_{nki}^N) T_{nki}.
\]
Therefore, one can show that $\tilde R^N(x)$ is twice continuously differentiable with respect to $x \in [0, \pi]$, and so is $\phi^N(x)$ (see the proof of Lemma 1.6.9 from [4] for details). Moreover, the sums (6.5) and (6.6) are finite, so we do not need to worry about their convergence. Define the matrix functions
\[
\varphi^N(x, \lambda) := \tilde\varphi(x, \lambda) - \sum_{l,s,j \colon l \le N} (-1)^j \phi_{lsj}^N(x) T_{lsj}^{-1} \alpha'_{lsj} \tilde D(x, \lambda_{lsj}, \lambda), \qquad
\Phi^N(x, \lambda) := \tilde\Phi(x, \lambda) - \sum_{l,s,j \colon l \le N} (-1)^j \phi_{lsj}^N(x) T_{lsj}^{-1} \alpha'_{lsj} \tilde E(x, \lambda_{lsj}, \lambda), \tag{6.7}
\]
\[
\sigma^{N*}(x) := \sigma^N(x) + C^N, \qquad H^{N,*} := H^N - T_2 C^N T_2, \qquad C^N := T_1^{\perp} \sum_{n=1}^{g(N)} \hat\alpha(G_n) \, T_1^{\perp}.
\]
Calculations yield the following lemma.

Lemma 6.5. $\varphi^N(\cdot, \lambda) \in C^2([0, \pi]; \mathbb{C}^{m \times m})$ for each fixed $\lambda \in \mathbb{C}$, $\Phi^N(\cdot, \lambda) \in C^2([0, \pi]; \mathbb{C}^{m \times m})$ for each fixed $\lambda \ne \lambda_{nki}$, and $\sigma^{N*} \in C^1([0, \pi]; \mathbb{C}^{m \times m})$. Moreover, the following relations hold:
\[
-\frac{d^2}{dx^2} \varphi^N(x, \lambda) + \Bigl( \frac{d}{dx} \sigma^{N*}(x) \Bigr) \varphi^N(x, \lambda) = \lambda \varphi^N(x, \lambda), \quad x \in (0, \pi),
\]
\[
\varphi^N(0, \lambda) = T_1, \qquad \frac{d}{dx} \varphi^N(0, \lambda) - \sigma^{N*}(0) \varphi^N(0, \lambda) = T_1^{\perp},
\]
\[
-\frac{d^2}{dx^2} \Phi^N(x, \lambda) + \Bigl( \frac{d}{dx} \sigma^{N*}(x) \Bigr) \Phi^N(x, \lambda) = \lambda \Phi^N(x, \lambda), \quad x \in (0, \pi),
\]
\[
T_1 \Bigl( \frac{d}{dx} \Phi^N(0, \lambda) - \sigma^{N*}(0) \Phi^N(0, \lambda) \Bigr) - T_1^{\perp} \Phi^N(0, \lambda) = I,
\]
\[
T_2 \Bigl( \frac{d}{dx} \Phi^N(\pi, \lambda) - (\sigma^{N*}(\pi) + H^{N,*}) \Phi^N(\pi, \lambda) \Bigr) - T_2^{\perp} \Phi^N(\pi, \lambda) = 0.
\]
Lemma 6.5 implies that $\varphi^N(x, \lambda)$ is the $\varphi$-type solution and $\Phi^N(x, \lambda)$ is the Weyl solution of the boundary value problem $L^{N*} := L(\sigma^{N*}, T_1, T_2, H^{N,*})$. Hence, the Weyl matrix of $L^{N*}$ has the form
\[
M^N(\lambda) := T_1 \Phi^N(0, \lambda) + T_1^{\perp} \Bigl( \frac{d}{dx} \Phi^N(0, \lambda) - \sigma^{N*}(0) \Phi^N(0, \lambda) \Bigr).
\]
Using (6.7), we derive
\[
M^N(\lambda) = \tilde M(\lambda) + \sum_{l,s,j \colon l \le N} (-1)^j \frac{\alpha'_{lsj}}{\lambda - \lambda_{lsj}}. \tag{6.8}
\]
Recall that the Weyl matrix $\tilde M(\lambda)$ has the poles $\{ \tilde\lambda_{nk} \}_{(n,k) \in J}$ and the corresponding residues $\{ \tilde\alpha_{nk} \}_{(n,k) \in J}$. Consequently, it follows from (6.3) and (6.8) that the Weyl matrix $M^N(\lambda)$ has the poles $\{ \lambda_{nk}^N \}_{(n,k) \in J}$ and the corresponding residues $\{ \alpha_{nk}^N \}_{(n,k) \in J}$. Thus, $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
are the spectral data of the problem $L^{N*} = L(\sigma^{N*}, T_1, T_2, H^{N,*})$. Since the transform (2.8) with $H^{\diamond} = C^N$ does not change the spectral data, we conclude that $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
are also the spectral data of $L^N = L(\sigma^N, T_1, T_2, H^N)$. Since $\lambda_{nk}^N \in \mathbb{R}$ and $\alpha_{nk}^N = (\alpha_{nk}^N)^{\dagger}$ for $(n, k) \in J$, one can easily show that the matrices $\sigma^N(x)$ for a.e. $x \in (0, \pi)$ and $H^N$ are Hermitian.

The following two lemmas can be proved similarly to Lemmas 5.6 and 5.7 from [44].

Lemma 6.6. $\sigma^N \to \sigma$ in $L_2((0, \pi); \mathbb{C}^{m \times m})$ and $H^N \to H$ as $N \to \infty$, where $\sigma$, $H$, $\sigma^N$, $H^N$ are defined by (6.1), (6.2), (6.5), (6.6), respectively.

Lemma 6.7.
Suppose that $\sigma$ and $\sigma^N$, $N \in \mathbb{N}$, are arbitrary Hermitian matrix functions from $L_2((0, \pi); \mathbb{C}^{m \times m})$ such that $\sigma^N \to \sigma$ in $L_2((0, \pi); \mathbb{C}^{m \times m})$ as $N \to \infty$, and $H$, $H^N$, $N \in \mathbb{N}$, are arbitrary Hermitian matrices from $\mathbb{C}^{m \times m}$ such that $H^N \to H$ as $N \to \infty$. Let $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ and $\{\lambda_{nk}^N, \alpha_{nk}^N\}_{(n,k) \in J}$
be the spectral data of the problems $L(\sigma, T_1, T_2, H)$ and $L(\sigma^N, T_1, T_2, H^N)$, respectively. Then, for each fixed $(n, k) \in J$,
\[
\lim_{N \to \infty} \lambda_{nk}^N = \lambda_{nk}.
\]
Let $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$ be a group of multiple eigenvalues of $L$, maximal by inclusion. Then
\[
\lim_{N \to \infty} \sum_{j=1}^{r} \alpha_{n_j k_j}^{N\,\prime} = \alpha_{n_1 k_1}.
\]

Lemmas 6.6 and 6.7 imply that the spectral data of the problem $L$ constructed by (6.1) and (6.2) coincide with $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$, which proves Theorem 6.3. Theorems 5.1 and 6.3 and Lemmas 6.1, 6.2 yield the sufficiency part of Theorem 2.6. Our proof of sufficiency in Theorem 2.6 is constructive and provides the following algorithm for solving Inverse Problem 2.4.

Algorithm 6.8.
Suppose that the orthogonal projection matrices $T_1$, $T_2$ and the data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ satisfying conditions (i)-(iii) of Theorem 2.6 are given. We have to find $\sigma$ and $H$.

1. Find $r_k$ and $A_k$ by the formulas
\[
r_k = \lim_{n \to \infty} \bigl( \sqrt{\lambda_{nk}} - n \bigr), \qquad A_k = \frac{\pi}{2} \lim_{n \to \infty} (T_1 + n^{-1} T_1^{\perp}) \, \alpha_n^{(k)} \, (T_1 + n^{-1} T_1^{\perp}), \qquad k = \overline{1, m}.
\]
2. Fix the model problem $\tilde L := L(0, T_1, T_2, 0)$
and find $\{\tilde\lambda_{nk}, \tilde\alpha_{nk}\}_{(n,k) \in J}$ and $\{\tilde\varphi(x, \lambda_{nki})\}_{(n,k) \in J, \, i = 0, 1}$ by using (3.1), (3.2), and $\tilde\lambda_{nk} = (\tilde\rho_{nk})^2$.

3. Find $\tilde D(x, \lambda_{lsj}, \lambda_{nki})$ by (3.3) for $(l, s), (n, k) \in J$, $i, j = 0, 1$.

4. Divide $\{\rho_{nki}\}$ into the groups $\{G_n\}_{n=1}^{\infty}$ according to (3.5).

5. Construct the Banach space $B$, the sequence $\tilde\phi(x) \in B$, and the operator $\tilde R(x) \colon B \to B$ for each fixed $x \in [0, \pi]$ as described in Section 3.

6. Find the solution $\phi(x)$ of the main equation (3.10).

7. Using the elements $\{\phi_{lsj}(x)\}$ of $\phi(x)$, construct $\sigma$ and $H$ by formulas (6.1) and (6.2), respectively.

In view of Proposition 2.5, the solution constructed by Algorithm 6.8 is not the only solution of Inverse Problem 2.4. All the other solutions can be obtained by applying the transform (2.8).

References

[1] Marchenko, V.A. Sturm-Liouville Operators and Their Applications, Naukova Dumka, Kiev (1977) (Russian); English transl., Birkhauser (1986).

[2] Levitan, B.M. Inverse Sturm-Liouville Problems, Nauka, Moscow (1984) (Russian); English transl., VNU Sci. Press, Utrecht (1987).

[3] Pöschel, J.; Trubowitz, E. Inverse Spectral Theory, New York, Academic Press (1987).

[4] Freiling, G.; Yurko, V. Inverse Sturm-Liouville Problems and Their Applications, Huntington, NY: Nova Science Publishers (2001).

[5] Agranovich, Z. S.; Marchenko, V. A. The Inverse Problem of Scattering Theory, Gordon and Breach, New York (1963).

[6] Beals, R.; Henkin, G. M.; Novikova, N. N. The inverse boundary problem for the Rayleigh system, J. Math. Phys. 36 (1995), no. 12, 6688–6708.

[7] Boutet de Monvel, A.; Shepelsky, D. Inverse scattering problem for anisotropic media, J. Math. Phys. 36 (1995), no. 7, 3443–3453.

[8] Chabanov, V. M. Recovering the M-channel Sturm-Liouville operator from M+1 spectra, J. Math. Phys. 45 (2004), no. 11, 4255–4260.

[9] Calogero, F.; Degasperis, A. Nonlinear evolution equations solvable by the inverse spectral transform II, Nuovo Cimento B 39 (1977), no. 1.

[10] Carlson, R. An inverse problem for the matrix Schrödinger equation, J. Math. Anal. Appl. 267 (2002), 564–575.

[11] Malamud, M.M. Uniqueness of the matrix Sturm-Liouville equation given a part of the monodromy matrix, and Borg type results, Sturm-Liouville Theory, Birkhäuser, Basel (2005), 237–270.

[12] Yurko, V.A. Inverse problems for matrix Sturm-Liouville operators, Russ. J. Math. Phys. 13 (2006), no. 1, 111–118.

[13] Shieh, C.-T. Isospectral sets and inverse problems for vector-valued Sturm-Liouville equations, Inverse Problems 23 (2007), 2457–2468.

[14] Yurko, V. Inverse problems for the matrix Sturm-Liouville equation on a finite interval, Inverse Problems 22 (2006), 1139–1149.

[15] Bondarenko, N. Spectral analysis for the matrix Sturm-Liouville operator on a finite interval, Tamkang J. Math. 42 (2011), no. 3, 305–327.

[16] Chelkak, D.; Korotyaev, E. Weyl-Titchmarsh functions of vector-valued Sturm-Liouville operators on the unit interval, J. Func. Anal. 257 (2009), 1546–1588.

[17] Mykytyuk, Ya.V.; Trush, N.S. Inverse spectral problems for Sturm-Liouville operators with matrix-valued potentials, Inverse Problems 26 (2009), no. 1, 015009.

[18] Bondarenko, N.P. Necessary and sufficient conditions for the solvability of the inverse problem for the matrix Sturm-Liouville operator, Func. Anal. Appl. 46 (2012), no. 1, 53–57.

[19] Bondarenko, N. P. An inverse problem for the non-self-adjoint matrix Sturm-Liouville operator, Tamkang J. Math. 50 (2019), no. 1, 71–102.

[20] Kuchment, P. Graph models for waves in thin structures, Waves in Random Media 12 (2002), no.
4, R1–R24.

[21] Pokorny, Yu. V.; Penkin, O. M.; Pryadiev, V. L. et al. Differential Equations on Geometrical Graphs, Fizmatlit, Moscow (2004) (Russian).

[22] Analysis on Graphs and Its Applications, edited by P. Exner, J.P. Keating, P. Kuchment, T. Sunada and A. Teplyaev, Proceedings of Symposia in Pure Mathematics, AMS, 77 (2008).

[23] Berkolaiko, G.; Kuchment, P. Introduction to Quantum Graphs, Amer. Math. Soc., Providence, RI (2013).

[24] Kuchment, P. Quantum graphs. I. Some basic structures, Waves Random Media 14 (2004), no. 1, S107–S128.

[25] Nowaczyk, M. Inverse Problems for Graph Laplacians, Doctoral Theses in Mathematical Sciences, Lund, Sweden (2007).

[26] Xu, X.-C. Inverse spectral problem for the matrix Sturm-Liouville operator with the general separated self-adjoint boundary conditions, Tamkang J. Math. 50 (2019), no. 3, 321–336.

[27] Bondarenko, N.P. Spectral analysis of the matrix Sturm-Liouville operator, Boundary Value Problems (2019), 2019:178.

[28] Bondarenko, N.P. Spectral analysis of the Sturm-Liouville operator on the star-shaped graph, Math. Meth. Appl. Sci. 43 (2020), no. 2, 471–485.

[29] Bondarenko, N.P. Constructive solution of the inverse spectral problem for the matrix Sturm-Liouville operator, Inv. Probl. Sci. Eng. (2020), published online, DOI: https://doi.org/10.1080/17415977.2020.1729760

[30] Bondarenko, N.P. Spectral data characterization for the Sturm-Liouville operator on the star-shaped graph (to appear).

[31] Harmer, M. Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions, ANZIAM J. 43 (2002), 1–8.

[32] Harmer, M. Inverse scattering on matrices with boundary conditions, J. Phys. A. 38 (2005), no. 22, 4875–4885.

[33] Wadati, M. Generalized matrix form of the inverse scattering method, R. K. Bullough and P. J. Caudry (eds.), Solitons, 287–299, Topics in Current Physics, vol. 17, Springer, Berlin (1980).

[34] Olmedilla, E. Inverse scattering transform for general matrix Schrödinger operators and the related symplectic structure, Inverse Problems 1 (1985), 219–236.

[35] Aktosun, T.; Weder, R. Inverse Scattering. In: Direct and Inverse Scattering for the Matrix Schrödinger Equation, Applied Mathematical Sciences, vol. 203, Springer, Cham (2021).

[36] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials, Inverse Problems 19 (2003), no. 3, 665–684.

[37] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. II. Reconstruction by two spectra, North-Holland Mathematics Studies 197 (2004), 97–114.

[38] Savchuk, A.M.; Shkalikov, A.A. Inverse problem for Sturm-Liouville operators with distribution potentials: Reconstruction from two spectra, Russ. J. Math. Phys. 12 (2005), no. 4, 507–514.

[39] Djakov, P.; Mityagin, B.N. Spectral gap asymptotics of one-dimensional Schrödinger operators with singular periodic potentials, Integral Transforms Spec. Funct. 20 (2009), no. 3–4, 265–273.

[40] Eckhardt, J.; Gesztesy, F.; Nichols, R.; Teschl, G. Supersymmetry and Schrödinger-type operators with distributional matrix-valued potentials, J. Spectral Theory 4 (2014), no. 4, 715–768.

[41] Eckhardt, J.; Gesztesy, F.; Nichols, R.; Sakhnovich, A.; Teschl, G. Inverse spectral problems for Schrödinger-type operators with distributional matrix-valued potentials, Differential Integral Equations 28 (2015), no. 5/6, 505–522.

[42] Freiling, G.; Ignatiev, M. Y.; Yurko, V. A.
An inverse spectral problem for Sturm-Liouville operators with singular potentials on star-type graph, Proc. Symp. Pure Math. 77 (2008), 397–408.

[43] Bondarenko, N.P. Solving an inverse problem for the Sturm-Liouville operator with a singular potential by Yurko's method, Tamkang J. Math. (accepted), preprint (2020), arXiv:2004.14721 [math.SP].

[44] Bondarenko, N.P. Direct and inverse problems for the matrix Sturm-Liouville operator with the general self-adjoint boundary conditions, preprint (2020), arXiv:2006.06533 [math.SP].

[45] He, X.; Volkmer, H. Riesz bases of solutions of Sturm-Liouville equations, J. Fourier Anal. Appl. 7 (2001), no. 3, 297–307.

Natalia Pavlovna Bondarenko
1. Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia
2. Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia
e-mail: