Spectral data characterization for the Sturm-Liouville operator on the star-shaped graph
Natalia P. Bondarenko

Abstract.
The inverse spectral problems are studied for the Sturm-Liouville operator on the star-shaped graph and for the matrix Sturm-Liouville operator with the boundary condition in the general self-adjoint form. We obtain necessary and sufficient conditions of solvability for these two inverse problems, and also prove their local solvability and stability.
Keywords: inverse spectral problem; Sturm-Liouville operator on graph; differential operators on graphs; quantum graphs; spectral data characterization; local solvability; stability; method of spectral mappings.
AMS Mathematics Subject Classification (2010):
1 Introduction

The paper aims to give spectral data characterization for the Sturm-Liouville operator on a geometrical graph. Differential operators on graphs, also called quantum graphs, are used for modeling wave propagation in structures consisting of thin tubes, strings, beams, etc. Such models appear in organic chemistry, mechanics, nanotechnology, the theory of waveguides and other applications (see [1-7] and references therein).

Inverse spectral problems, which consist in recovering differential operators on graphs from their spectral characteristics, have been studied by many scholars (see [8-32]). The results of those studies generalize the classical results of inverse problem theory for ordinary differential operators on intervals (see the monographs [33-36]). The majority of the papers on inverse problems for quantum graphs concern second-order (also called Sturm-Liouville or Schrödinger) differential operators. On the one hand, such operators are easier to investigate; on the other hand, they are natural for applications.

For quantum graphs, there are three types of inverse problems, which consist in the reconstruction of the following characteristics:

1. Coefficients of differential expressions (e.g., Sturm-Liouville potentials) on the graph edges (see [8, 11, 12, 14-16, 18, 20-23, 27-32]).
2. Graph structure (see [10, 13, 22, 24]).
3. Boundary conditions (see [9, 25, 26]).

In this paper, we focus on a problem of the first type. For recovering coefficients of differential expressions on graphs, two constructive methods made the most significant impact. The first of them is the BC-method, developed by Belishev and his successors (see [12, 16, 20, 24]). That method allowed them to solve inverse problems on arbitrary trees (graphs without cycles) and to recover not only operator coefficients, but also the graph structure. The second approach is based on the method of spectral mappings (see [15, 21, 27-30]).
Relying on that method, Yurko and other mathematicians have solved inverse spectral problems for differential operators on arbitrary compact graphs (see [29]) and inverse spectral-scattering problems on noncompact graphs (see [27-29]). There were also attempts to apply the methods of Marchenko (see [33, 37]) to inverse scattering problems for special types of graphs with infinite rays (see [11, 17, 31, 32]). Nevertheless, although there is a significant number of studies on inverse problems for differential operators on graphs, they concern only uniqueness theorems and constructive algorithms for solution. The question of spectral data characterization remained open even for the following operator on the simplest star-shaped graph.

In this paper, we consider the geometrical graph $G$ with the vertices $\{v_j\}_{j=0}^m$ and the edges $\{e_j\}_{j=1}^m$. Every edge $e_j$ connects the vertices $v_0$ and $v_j$, $j = \overline{1,m}$, i.e. $v_0$ is the internal vertex, and $\{v_j\}_{j=1}^m$ are the boundary vertices. For $j = \overline{1,m}$, we associate with the edge $e_j$ the interval $[0, \pi]$ and a parameter $x_j \in [0, \pi]$, so that $x_j = 0$ corresponds to the boundary vertex $v_j$ and $x_j = \pi$ corresponds to the internal vertex $v_0$.

Consider the system of Sturm-Liouville equations on the star-shaped graph $G$:
$$\ell_j y_j := -y_j''(x_j) + q_j(x_j) y_j(x_j) = \lambda y_j(x_j), \quad x_j \in (0, \pi), \quad j = \overline{1,m}, \qquad (1.1)$$
with the Dirichlet conditions at the boundary vertices
$$y_j(0) = 0, \quad j = \overline{1,m}, \qquad (1.2)$$
and the following matching conditions at the internal vertex:
$$y_1(\pi) = y_j(\pi), \quad j = \overline{1,m}, \qquad \sum_{j=1}^m \bigl(y_j'(\pi) - h y_j(\pi)\bigr) = 0. \qquad (1.3)$$
Here $q_j$, $j = \overline{1,m}$, are real-valued functions from $L_2(0, \pi)$, called the potentials, and $h \in \mathbb{R}$. Introduce the spaces
$$L_2(G) := \{ y = [y_j]_{j=1}^m \colon y_j \in L_2(0, \pi), \; j = \overline{1,m} \},$$
$$W_2^2(G) := \{ y = [y_j]_{j=1}^m \colon y_j, y_j' \in AC[0, \pi], \; y_j'' \in L_2(0, \pi), \; j = \overline{1,m} \}.$$
The boundary value problem (1.1)-(1.3) defines the self-adjoint operator $\mathcal{L}$ in $L_2(G)$, acting by the rule $\mathcal{L} y = [\ell_j y_j]_{j=1}^m$ and having the domain
$$D(\mathcal{L}) := \{ y \in W_2^2(G) \colon y \text{ satisfies (1.2), (1.3)} \}.$$
It is well known that the operator $\mathcal{L}$ has a purely discrete spectrum consisting of real eigenvalues.

Definition 1.1.
Let $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ be the eigenvalues of $\mathcal{L}$, numbered in nondecreasing order: $\lambda_{n_1,k_1} \le \lambda_{n_2,k_2}$ if $(n_1, k_1) < (n_2, k_2)$, i.e. $n_1 < n_2$ or $n_1 = n_2$, $k_1 < k_2$. Multiple eigenvalues occur in the sequence $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ several times, according to their multiplicities. It is convenient to number the eigenvalues by two indices $n$ and $k$ because of the asymptotic formulas (2.4).

Definition 1.2.
For $k = \overline{1,m}$, we introduce the vector function $\Phi_k(x, \lambda) = [\varphi_{kj}(x, \lambda)]_{j=1}^m$, satisfying equations (1.1) for $x_j = x$, $j = \overline{1,m}$, the matching conditions (1.3) and the following boundary conditions:
$$\varphi_{kk}(0, \lambda) = 1, \quad \varphi_{kj}(0, \lambda) = 0, \quad k, j = \overline{1,m}, \; k \ne j.$$
Let $\Phi(x, \lambda) = [\varphi_{kj}(x, \lambda)]_{k,j=1}^m$ be the matrix with the columns $\Phi_k(x, \lambda)$. The matrix function $M(\lambda) := \Phi'(0, \lambda)$ is called the Weyl matrix of the boundary value problem (1.1)-(1.3).

Weyl matrices and Weyl functions are natural spectral characteristics for recovering differential operators of various types (see, e.g., [14, 21, 28, 29, 33, 36, 38]). The matrix functions $\Phi(x, \lambda)$ and $M(\lambda)$ are meromorphic in the $\lambda$-plane. All their poles are simple and coincide with the eigenvalues $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ (see [38, 39]). Thus, we define the weight matrices
$$\alpha_{nk} := -\operatorname*{Res}_{\lambda = \lambda_{nk}} M(\lambda), \quad n \ge 1, \; k = \overline{1,m}. \qquad (1.4)$$
The collection $S := \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ is called the spectral data of $\mathcal{L}$. This paper is devoted to the following inverse spectral problem.

Inverse Problem 1.3.
Given the spectral data $S$, construct the potentials $\{q_j\}_{j=1}^m$ and the coefficient $h$.

The uniqueness of the solution of Inverse Problem 1.3 follows, in particular, from the results of [14, 15, 38, 40]. In the papers [15, 21, 40], a constructive solution of this inverse problem has been developed, based on the method of spectral mappings [36]. In this paper, we obtain necessary and sufficient conditions of solvability for Inverse Problem 1.3. In other words, we provide spectral data characterization for the Sturm-Liouville operator $\mathcal{L}$ on the star-shaped graph. Moreover, local solvability and stability of Inverse Problem 1.3 are proved.

The question of necessary and sufficient conditions is the most important issue of inverse problem theory, and usually the most complicated one. For differential operators on graphs, this question has not been solved before. Complicated structural properties and the behavior of the spectrum cause significant difficulties in spectral data characterization for quantum graphs. Some results in this direction were obtained by Pivovarchik [8, 18]. However, for the reconstruction of the operator, Pivovarchik used spectra corresponding to separate edges of the graph, but not to the whole graph. Local solvability means that the solution of the inverse problem still exists under a sufficiently small perturbation of the spectral data. Local solvability is closely related to stability, which is essential for the justification of numerical methods for solving inverse problems.

Our approach is based on the representation of the boundary value problem (1.1)-(1.3) in the equivalent matrix form:
$$-Y''(x) + Q(x) Y(x) = \lambda Y(x), \quad x \in (0, \pi), \qquad (1.5)$$
$$Y(0) = 0, \quad V(Y) := T(Y'(\pi) - H Y(\pi)) - T^\perp Y(\pi) = 0, \qquad (1.6)$$
where $Y(x) = [y_j(x)]_{j=1}^m$ is a vector function, $Q(x) = \operatorname{diag}\{q_j(x)\}_{j=1}^m$ is the diagonal matrix, and
$$T = [T_{jk}]_{j,k=1}^m, \quad T_{jk} = \frac{1}{m}, \quad j, k = \overline{1,m}, \qquad T^\perp = I - T, \qquad H = hT. \qquad (1.7)$$
The symbol $I$ denotes the $m \times m$ unit matrix.
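As a quick numerical sanity check of the structure in (1.7), the snippet below (an illustration with arbitrary toy values for $m$ and $h$, not part of the paper) verifies that $T$ with entries $1/m$ is a rank-one orthogonal projector, that $T^\perp = I - T$ is its complement, and that $H = hT$ satisfies the compatibility condition $H = THT$ used later for the general case.

```python
import numpy as np

m, h = 4, 0.7                      # illustrative values, not taken from the paper
T = np.full((m, m), 1.0 / m)       # T_{jk} = 1/m, as in (1.7)
T_perp = np.eye(m) - T             # complementary projector T^perp = I - T
H = h * T                          # H = hT

# T is an orthogonal projector of rank 1: T^2 = T = T^dagger
assert np.allclose(T @ T, T) and np.allclose(T, T.T)
assert np.linalg.matrix_rank(T) == 1
# T and T^perp are complementary: T T^perp = 0, rank(T^perp) = m - 1
assert np.allclose(T @ T_perp, np.zeros((m, m)))
assert np.linalg.matrix_rank(T_perp) == m - 1
# H = hT automatically satisfies H = T H T (since T^2 = T)
assert np.allclose(H, T @ H @ T)
```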
We denote the problem (1.5)-(1.6) by $L = L(Q(x), T, H)$. In addition, we study the problem $L(Q(x), T, H)$ in the general form, where

• $Q(x)$ is an arbitrary Hermitian matrix function with elements from $L_2(0, \pi)$;
• $T$ is an arbitrary orthogonal projector, $1 \le \operatorname{rank}(T) \le m - 1$, $T^\perp = I - T$;
• $H$ is a Hermitian matrix such that $H = THT$.

The case when $Q(x)$ is diagonal and (1.7) is fulfilled is called the graph case.

Note that the condition $V(Y) = 0$ turns into the Dirichlet condition $Y(\pi) = 0$ in the case $T = 0$ and into the Robin condition $Y'(\pi) - HY(\pi) = 0$ in the case $T = I$. In these two degenerate cases, our main results remain valid, but the proofs require technical modifications. Therefore we suppose that $1 \le \operatorname{rank}(T) \le m - 1$.

For the problem $L$ in the general case, we define the Weyl solution $\Phi(x, \lambda)$ as the matrix solution of equation (1.5) satisfying the conditions $\Phi(0, \lambda) = I$, $V(\Phi) = 0$, and the Weyl matrix as follows: $M(\lambda) = \Phi'(0, \lambda)$. Clearly, these definitions generalize Definition 1.2 for the graph case. The weight matrices in the general case are defined by the formula (1.4). Along with Inverse Problem 1.3, we investigate the following general matrix inverse problem.

Inverse Problem 1.4.
Given the spectral data $S$, find $Q(x)$, $T$ and $H$.

The most complete investigation of inverse problems has been carried out for the matrix Sturm-Liouville equation (1.5) with the Dirichlet boundary conditions $Y(0) = Y(\pi) = 0$ and the Robin boundary conditions $Y'(0) - H_1 Y(0) = 0$, $Y'(\pi) + H_2 Y(\pi) = 0$ instead of (1.6). Here $H_1$ and $H_2$ are $m \times m$ matrices. In [41-44], spectral data characterization has been provided for those matrix Sturm-Liouville operators. Nevertheless, operators with general self-adjoint boundary conditions appeared to be more difficult for investigation. There are only uniqueness results for recovering the matrix Sturm-Liouville operator with both boundary conditions in a form similar to $V(Y) = 0$ from spectral characteristics (see [38]). In the recent study [40], a constructive method for solving Inverse Problem 1.4 has been developed. We also mention that the inverse scattering problem for the matrix Sturm-Liouville operator on the half-line with the Dirichlet boundary condition at $x = 0$ was solved in [37]. Harmer [11] generalized the results of [37] to the case of a general self-adjoint boundary condition analogous to $V(Y) = 0$. In addition, Harmer [11] studied the inverse scattering problem for the Sturm-Liouville operator on star-shaped graphs with infinite rays. However, the operators considered in [11, 37] have a finite number of eigenvalues, so the difficulties related to spectral data asymptotics do not arise. Therefore inverse scattering problems for matrix Sturm-Liouville operators on infinite domains appear to be easier for investigation than inverse spectral problems on a finite interval.

In this paper, we obtain necessary and sufficient conditions of solvability for Inverse Problem 1.4 in the general matrix case and, in parallel, for Inverse Problem 1.3 for the Sturm-Liouville operator on the graph. Furthermore, local solvability and stability are proved for both problems.
Note that our necessary and sufficient conditions (Proposition 3.2, Theorems 3.3 and 3.4) generalize [36, Theorem 1.6.2] for the scalar Sturm-Liouville operator on a finite interval. Similarly, the local solvability and stability Theorems 7.1 and 7.3 generalize [36, Theorem 1.6.4]. However, these generalizations are far from being trivial. The main difficulty in our research is caused by the complicated behavior of the spectrum. The spectrum of the problem $L$ can contain an infinite number of groups of multiple and/or asymptotically multiple eigenvalues, which influences the structure of the weight matrices. In order to overcome this difficulty, we group the eigenvalues by asymptotics and investigate the sums of the weight matrices corresponding to each group.

Our analysis relies on the basic ideas of the method of spectral mappings (see [36]). A crucial step of this method is contour integration in the complex plane of the spectral parameter. As a result, a nonlinear inverse problem is reduced to a linear equation in a Banach space. The investigation of matrix Sturm-Liouville operators requires an essential development of this method. We construct a special Banach space of infinite matrix sequences, relying on our eigenvalue grouping, and then investigate the solvability of the main equation in that Banach space.

The paper is organized as follows. Section 2 contains preliminaries. We provide asymptotic formulas for the eigenvalues $\{\lambda_{nk}\}$ and for the weight matrices $\{\alpha_{nk}\}$. Then Inverse Problem 1.4 is reduced to the so-called main equation in an appropriate Banach space. In Section 3, we formulate necessary and sufficient conditions of solvability for Inverse Problems 1.3 and 1.4. The proofs are provided in the next three sections. In Section 4, auxiliary asymptotics and estimates are obtained. In Section 5, we investigate the solvability of the main equation.
In Section 6, using the solution of the main equation, we construct the operator and show that its spectral data coincide with the initially given numbers. In Section 7, local solvability and stability theorems are provided.

2 Preliminaries

The goal of this section is to provide preliminary results from [39, 40, 45]. In particular, Propositions 2.1 and 2.3 give asymptotic formulas for the eigenvalues and the weight matrices, respectively. Further, the special Banach space $B$ of infinite matrix sequences is constructed, and Inverse Problem 1.4 is reduced to the main equation (2.12) in $B$. In the construction of $B$, an important role is played by the grouping $\{G_k\}_{k \ge 1}$ of the square roots of the eigenvalues.

First of all, we introduce the notations.

• The prime denotes differentiation by $x$ in expressions similar to $Y'(x, \lambda)$.
• The symbol $\dagger$ denotes the conjugate transpose, i.e. for a matrix $A = [a_{jk}]_{j,k=1}^m$ we have $A^\dagger = [\bar{a}_{kj}]_{j,k=1}^m$.
• The spaces of complex-valued $m$-vectors and $m \times m$ matrices are denoted by $\mathbb{C}^m$ and $\mathbb{C}^{m \times m}$, respectively. In these spaces, we use the Euclidean vector norm and the induced matrix norm: $\|A\| = \sqrt{\lambda_{\max}(A^\dagger A)}$, where $\lambda_{\max}$ is the maximal eigenvalue.
• For any interval $I \subset \mathbb{R}$, we denote by $L_2(I; \mathbb{C}^m)$ and $L_2(I; \mathbb{C}^{m \times m})$ the spaces of $m$-vector functions and $m \times m$ matrix functions, respectively, having elements from $L_2(I)$. For example, $Q \in L_2((0, \pi); \mathbb{C}^{m \times m})$.
• The scalar product and the norm in the Hilbert space $L_2(I; \mathbb{C}^m)$ are defined as follows: $(Y, Z) = \int_I Y^\dagger(x) Z(x)\,dx$, $\|Y\| = \sqrt{(Y, Y)}$, where $Y = [y_j(x)]_{j=1}^m$, $Z = [z_j(x)]_{j=1}^m$.
• In $L_2(I; \mathbb{C}^{m \times m})$, the following norm is used: $\|A\|_{L_2} = \max_{1 \le j,k \le m} \left( \int_I |a_{jk}(x)|^2\,dx \right)^{1/2}$, where $A(x) = [a_{jk}(x)]_{j,k=1}^m$.
• The matrix Wronskian is denoted by $\langle Y(x), Z(x) \rangle := Y(x) Z'(x) - Y'(x) Z(x)$, where $Y$ and $Z$ are $m \times m$ matrix functions.
• In estimates, we use the same symbol $C$ for various constants independent of $x$, $\lambda$, $n$, etc.
• The notation $\{K_n\}$ is used for various matrix sequences such that $\{\|K_n\|\} \in l_2$.
• $\lambda = \rho^2$, $\tau := \operatorname{Im} \rho$.

Below we suppose that the problem $L = L(Q(x), T, H)$ is of the general form, unless the opposite is stated. Denote $p := \operatorname{rank}(T)$; then $\operatorname{rank}(T^\perp) = m - p$. In the general case, $1 \le p \le m - 1$. In the graph case, we have $p = 1$ according to (1.7).

Denote
$$\Omega := \frac{1}{2} \int_0^\pi Q(x)\,dx, \quad P_1(z) := z^{p-m} \det(zI - T(\Omega - H)T), \quad P_2(z) := z^{-p} \det(zI - T^\perp \Omega T^\perp). \qquad (2.1)$$
One can easily show that $P_1(z)$ and $P_2(z)$ are polynomials of degrees $p$ and $(m - p)$, respectively, whose roots are real. Denote the roots of $P_1(z)$ by $\{z_k\}_{k=1}^p$ and the roots of $P_2(z)$ by $\{z_k\}_{k=p+1}^m$, counting with multiplicities and in nondecreasing order: $z_k \le z_{k+1}$ for $k = \overline{1, m-1} \setminus \{p\}$.

In the graph case, we have
$$\Omega = \operatorname{diag}\{\omega_j\}_{j=1}^m, \quad \omega_j := \frac{1}{2}\int_0^\pi q_j(x)\,dx, \; j = \overline{1,m}, \qquad z_1 = \frac{1}{m} \sum_{j=1}^m \omega_j - h, \qquad (2.2)$$
and $\{z_j\}_{j=2}^m$ are the roots of the polynomial $P_2(z)$, which takes the form
$$P_2(z) = \frac{1}{m} \frac{d}{dz} \left( \prod_{j=1}^m (z - \omega_j) \right). \qquad (2.3)$$
Let $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ be the eigenvalues of $L$, numbered according to Definition 1.1. Put $\rho_{nk} := \sqrt{\lambda_{nk}}$, $n \ge 1$, $k = \overline{1,m}$. The following proposition gives the asymptotic formulas for the eigenvalues.

Proposition 2.1.
The following relations hold:
$$\rho_{nk} = n - \frac{1}{2} + \frac{z_k}{\pi n} + \frac{\varkappa_{nk}}{n}, \quad k = \overline{1,p}, \qquad \rho_{nk} = n + \frac{z_k}{\pi n} + \frac{\varkappa_{nk}}{n}, \quad k = \overline{p+1,m}, \qquad (2.4)$$
where $n \ge 1$, $\{\varkappa_{nk}\} \in l_2$.

Proposition 2.1 has been proved in [45] for the general case and in [46] for the graph case. In order to provide asymptotic formulas for the weight matrices $\{\alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$, we need some additional notations.

Definition 2.2.
Consider in the sequence $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ any group of multiple eigenvalues
$$\lambda_{n_1,k_1} = \lambda_{n_2,k_2} = \cdots = \lambda_{n_r,k_r}, \quad (n_j, k_j) < (n_{j+1}, k_{j+1}), \; j = \overline{1, r-1},$$
maximal by inclusion. Obviously, we have $\alpha_{n_1,k_1} = \alpha_{n_2,k_2} = \cdots = \alpha_{n_r,k_r}$. Define $\alpha'_{n_1,k_1} := \alpha_{n_1,k_1}$ and $\alpha'_{n_j,k_j} := 0$ for $j = \overline{2,r}$. Defining the matrices $\alpha'_{nk}$ for every group of multiple eigenvalues in such a way, we get the sequence $\{\alpha'_{nk}\}_{n \ge 1, k=\overline{1,m}}$.

Introduce the sums
$$\alpha_n^{I} = \sum_{k=1}^p \alpha'_{nk}, \quad \alpha_n^{II} = \sum_{k=p+1}^m \alpha'_{nk}, \quad \alpha_n^{(s)} = \sum_{\substack{k=\overline{1,p} \\ z_k = z_s}} \alpha'_{nk}, \; s = \overline{1,p}, \quad \alpha_n^{(s)} = \sum_{\substack{k=\overline{p+1,m} \\ z_k = z_s}} \alpha'_{nk}, \; s = \overline{p+1,m}.$$

Proposition 2.3. The following relations hold:
$$\alpha_n^{I} = \frac{2(n - 1/2)^2}{\pi} \left( T + \frac{K_n}{n} \right), \quad \alpha_n^{II} = \frac{2n^2}{\pi} \left( T^\perp + \frac{K_n}{n} \right), \quad \alpha_n^{(s)} = \frac{2n^2}{\pi} \left( A^{(s)} + K_n \right), \; s = \overline{1,m}, \qquad (2.5)$$
where
$$A^{(s)} = U^\dagger T_s U, \; s = \overline{1,m}, \qquad T_s = [T_{s,jk}]_{j,k=1}^m, \quad T_{s,jk} = \begin{cases} 1, & j = k, \; z_j = z_s, \; (j, s \le p \text{ or } j, s > p), \\ 0, & \text{otherwise}, \end{cases}$$
and $U$ is a unitary matrix such that $U \Theta U^\dagger = \operatorname{diag}\{z_k\}_{k=1}^m$, $\Theta := T(\Omega - H)T + T^\perp \Omega T^\perp$. The matrices $\{A^{(s)}\}_{s=1}^m$ do not depend on the choice of $U$.

In the graph case, the following relations hold:
$$A^{(1)} = T, \qquad A^{(s)} = \frac{1}{m} \operatorname*{Res}_{z = z_s} \frac{A(z)}{P_2(z)}, \quad s = \overline{2,m}, \qquad (2.6)$$
$$A(z) = [a_{jk}(z)]_{j,k=1}^m, \quad a_{jj}(z) = \frac{d}{dz} \prod_{\substack{s=1 \\ s \ne j}}^m (z - \omega_s), \quad a_{jk}(z) = -\prod_{\substack{s=1 \\ s \ne j,k}}^m (z - \omega_s), \; j \ne k. \qquad (2.7)$$
Proposition 2.3 has been proved in [45] for the general case and in [39] for the graph case.

Further we need the main equation of Inverse Problem 1.4, derived in [40]. Consider a model problem $\tilde{L} = L(\tilde{Q}(x), \tilde{T}, \tilde{H})$ of the same form as $L$, but with different coefficients. We agree that, if a certain symbol $\gamma$ denotes an object related to $L$, the symbol $\tilde{\gamma}$ with tilde denotes the analogous object related to $\tilde{L}$. Let $\tilde{L}$ be such that $\tilde{T} = T$ and $\tilde{\Theta} = \Theta$. In particular, one can put $\tilde{Q}(x) := \frac{2}{\pi}\Theta$, $\tilde{T} := T$, $\tilde{H} := 0$. A detailed algorithm for constructing the problem $\tilde{L}$ by using the spectral data is provided in [40].

In the graph case, it is convenient to choose the model problem $\tilde{L}$ with a diagonal potential matrix. Suppose that we know the mean values $\{\omega_j\}_{j=1}^m$.
Then we put
$$\tilde{Q}(x) := \frac{2}{\pi} \Omega, \quad \tilde{T} := T, \quad \tilde{H} := \tilde{h} T, \quad \tilde{h} = \frac{1}{m} \sum_{j=1}^m \omega_j - z_1. \qquad (2.8)$$
Denote by $S(x, \lambda)$ the $m \times m$ matrix solution of equation (1.5), satisfying the initial conditions $S(0, \lambda) = 0$, $S'(0, \lambda) = I$. Define
$$D(x, \lambda, \mu) := \frac{\langle S^\dagger(x, \bar{\lambda}), S(x, \mu) \rangle}{\lambda - \mu}. \qquad (2.9)$$
Without loss of generality, we can assume that $\lambda_{nk} \ge 1$, $\tilde{\lambda}_{nk} \ge 1$, $n \in \mathbb{N}$, $k = \overline{1,m}$. One can easily achieve these conditions by a shift of the spectrum: $\lambda \mapsto \lambda + C$, $Q(x) \mapsto Q(x) + CI$, where $C$ is a constant. Introduce the notations
$$\lambda_{nk0} = \lambda_{nk}, \quad \lambda_{nk1} = \tilde{\lambda}_{nk}, \quad \rho_{nk0} = \rho_{nk}, \quad \rho_{nk1} = \tilde{\rho}_{nk}, \quad \alpha_{nk0} = \alpha_{nk}, \quad \alpha_{nk1} = \tilde{\alpha}_{nk}, \quad \alpha'_{nk0} = \alpha'_{nk}, \quad \alpha'_{nk1} = \tilde{\alpha}'_{nk},$$
$$S_{nks}(x) = S(x, \lambda_{nks}), \quad \tilde{S}_{nks}(x) = \tilde{S}(x, \lambda_{nks}), \quad n \ge 1, \; k = \overline{1,m}, \; s = 0, 1.$$
We group the square roots $\{\rho_{nks}\}_{n \ge 1, k=\overline{1,m}, s=0,1}$ of the eigenvalues into the collections
$$G_1 := \{\rho_{nks}\}_{n=\overline{1,n_0},\, k=\overline{1,m},\, s=0,1}, \quad G_{2j} := \{\rho_{n_0+j,ks}\}_{k=\overline{1,p},\, s=0,1}, \quad G_{2j+1} := \{\rho_{n_0+j,ks}\}_{k=\overline{p+1,m},\, s=0,1}, \qquad (2.10)$$
for $j \ge 1$. Each collection $G_n$ is a multiset, i.e. it can contain multiple values. In view of the asymptotics (2.4), one can choose $n_0 \ge 1$ such that $G_n \cap G_k = \emptyset$ for all $n \ne k$.

For any finite multiset $G$ of complex numbers, we define the finite-dimensional space $B(G)$. The space $B(G)$ consists of all the matrix functions $f \colon G \to \mathbb{C}^{m \times m}$ with the property: if $\rho, \theta \in G$ and $\rho = \theta$, then $f(\rho) = f(\theta)$. The norm in $B(G)$ is defined as follows:
$$\|f\|_{B(G)} = \max\left\{ \max_{\rho \in G} \|f(\rho)\|, \; \max_{\substack{\rho \ne \theta \\ \rho, \theta \in G}} |\rho - \theta|^{-1} \|f(\rho) - f(\theta)\| \right\}.$$
Define the Banach space $B$ of infinite sequences:
$$B = \{ f = \{f_n\}_{n \ge 1} \colon f_n \in B(G_n), \; n \ge 1, \; \|f\|_B < \infty \}, \qquad \|f\|_B := \sup_{n \ge 1} \left( n \|f_n\|_{B(G_n)} \right). \qquad (2.11)$$
One can easily show that the following sequence $\psi(x)$ belongs to $B$ for each fixed $x \in [0, \pi]$:
$$\psi(x) = \{\psi_n(x)\}_{n \ge 1}, \qquad \psi_n(x)(\rho) = S(x, \rho^2), \quad \rho \in G_n, \; n \ge 1.$$
Analogously, $\tilde{\psi}(x) \in B$ is defined, by changing $S(x, \rho^2)$ to $\tilde{S}(x, \rho^2)$.

For each fixed $x \in [0, \pi]$, we also define the linear operator $R(x) \colon B \to B$, acting on any element $f = \{f_n\}_{n \ge 1}$ of $B$ as follows:
$$(f R(x))_n = \sum_{k=1}^\infty f_k R_{k,n}(x), \qquad R_{k,n}(x) \colon B(G_k) \to B(G_n),$$
$$(f_k R_{k,n}(x))(\rho) = \sum_{(l,j)\colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( f_k(\rho_{lj0})\, \alpha'_{lj0}\, D(x, \rho_{lj0}^2, \rho^2) - f_k(\rho_{lj1})\, \alpha'_{lj1}\, D(x, \rho_{lj1}^2, \rho^2) \right), \quad \rho \in G_n.$$
In the latter expressions, the operators are put to the right of their operands to emphasize that the matrices are multiplied in this order. Similarly, the operator $\tilde{R}(x)$ is defined, by changing $S$, $D$ to $\tilde{S}$, $\tilde{D}$. The operators $R(x)$ and $\tilde{R}(x)$ are compact on $B$. Furthermore, the following main equation is satisfied for each fixed $x \in [0, \pi]$:
$$\tilde{\psi}(x) = \psi(x)(I + \tilde{R}(x)), \qquad (2.12)$$
where $I$ is the identity operator in $B$. The main equation (2.12) can be used for the constructive solution of Inverse Problems 1.3 and 1.4 (see [40] for details).

We call an element $f = \{f_n\}_{n \ge 1} \in B$ diagonal if, for every $n \ge 1$, all the values of the matrix function $f_n \colon G_n \to \mathbb{C}^{m \times m}$ are diagonal matrices. In the graph case, the matrix function $S(x, \lambda)$ is diagonal, so the element $\psi(x) \in B$ is diagonal for all $x \in [0, \pi]$.

3 Spectral Data Characterization
In this section, we formulate necessary and sufficient conditions of solvability for Inverse Problems 1.3 and 1.4.

Let SD be the class of all the data in the form $\{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ such that:

1. $\lambda_{nk}$ are real numbers, and $\alpha_{nk}$ are Hermitian, nonnegative definite matrices: $\alpha_{nk} = \alpha_{nk}^\dagger \ge 0$, $n \ge 1$, $k = \overline{1,m}$;
2. if $\lambda_{n_1,k_1} = \lambda_{n_2,k_2}$, then $\alpha_{n_1,k_1} = \alpha_{n_2,k_2}$. Moreover, $\operatorname{rank}(\alpha_{nk})$ equals the multiplicity of the corresponding value $\lambda_{nk}$ (i.e. the number of times that $\lambda_{nk}$ occurs in the sequence).

Definition 3.1.
Following Definition 2.2, consider any group of multiple eigenvalues $\lambda_{n_1,k_1} = \lambda_{n_2,k_2} = \cdots = \lambda_{n_r,k_r}$ maximal by inclusion. We have $\operatorname{rank}(\alpha_{n_1,k_1}) = r$. In other words, $\operatorname{Ran}(\alpha_{n_1,k_1})$ is an $r$-dimensional subspace of $\mathbb{C}^m$. Choose in $\operatorname{Ran}(\alpha_{n_1,k_1})$ an orthonormal basis $\{\mathcal{E}_{n_j,k_j}\}_{j=\overline{1,r}}$. (This choice may be non-unique.) Thus, the sequence of normalized vectors $\{\mathcal{E}_{nk}\}_{n \ge 1, k=\overline{1,m}}$ is defined. Further we need the following sequence of vector functions:
$$F := \left\{ \mathcal{E}_{nk} \frac{\sin(\rho_{nk} t)}{\rho_{nk}} \right\}_{n \ge 1, k=\overline{1,m}}.$$
The results of [39, 40, 45] yield the following necessary conditions on the spectral data.
Proposition 3.2 (Necessity). The spectral data $S := \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ of the problem $L$ in the general form fulfill the following conditions.

1. $S \in$ SD.
2. (Asymptotics) The eigenvalues $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ and the weight matrices $\{\alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ satisfy the asymptotic relations (2.4) and (2.5), respectively, where $\{z_k\}_{k=1}^m$ are the roots of the polynomials $P_1(z)$ and $P_2(z)$ defined by (2.1), and $\{A^{(s)}\}_{s=1}^m$ are the matrices defined in Proposition 2.3.
3. (Completeness) The sequence $F$ is complete in $L_2((0, \pi); \mathbb{C}^m)$ for any choice of the vectors $\{\mathcal{E}_{nk}\}_{n \ge 1, k=\overline{1,m}}$ in Definition 3.1.
4. (Solvability) The main equation (2.12) is uniquely solvable for each $x \in [0, \pi]$.

In the graph case, the solution of the main equation (2.12) is diagonal (Diagonality).

The main goal of this paper is to show that the conditions of Proposition 3.2 are not only necessary but also sufficient for the solvability of Inverse Problems 1.3 and 1.4. The main results are formulated as follows.
Theorem 3.3 (Sufficiency in the general case). Let $S = \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ be an arbitrary element of SD, satisfying the following conditions.

1. (Asymptotics) The values $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ and the matrices $\{\alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ satisfy the relations (2.4) and (2.5), respectively, where $T$ is an orthogonal projector in $\mathbb{C}^m$, $p := \operatorname{rank}(T) \in [1, m-1]$, $T^\perp = I - T$, $\{z_k\}_{k=1}^m$ are real numbers, $z_k \le z_{k+1}$ for $k = \overline{1, m-1} \setminus \{p\}$, and $\{A^{(s)}\}_{s=1}^m$ are orthogonal projectors in $\mathbb{C}^m$ having the following properties:
$$\sum_{s=1}^p A^{(s)} = T, \qquad \sum_{s=p+1}^m A^{(s)} = T^\perp,$$
$$\operatorname{rank}(A^{(s)}) = \#\{k = \overline{1,p} \colon z_k = z_s\}, \; s = \overline{1,p}, \qquad \operatorname{rank}(A^{(s)}) = \#\{k = \overline{p+1,m} \colon z_k = z_s\}, \; s = \overline{p+1,m},$$
$$A^{(s)} A^{(k)} = 0, \quad s, k = \overline{1,m} \colon (s \le p \text{ and } k > p) \text{ or } (z_k \ne z_s).$$
2. (Completeness) The vectors $\{\mathcal{E}_{nk}\}_{n \ge 1, k=\overline{1,m}}$ in Definition 3.1 can be chosen so that the sequence $F$ is complete in $L_2((0, \pi); \mathbb{C}^m)$.

Then there exists a unique boundary value problem $L(Q(x), T, H)$ in the general form such that $S$ is its spectral data.

Theorem 3.4 (Sufficiency in the graph case). Let $S = \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ be an arbitrary element of SD, satisfying the following conditions.

1. (Asymptotics) There exist real numbers $\{\omega_j\}_{j=1}^m$ such that for the values $\{\lambda_{nk}\}_{n \ge 1, k=\overline{1,m}}$ and for the matrices $\{\alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ the asymptotic relations (2.4) and (2.5) hold, respectively, where $p = 1$, $T$ and $T^\perp$ are the matrices defined by (1.7), $z_1$ is an arbitrary real number, $\{z_k\}_{k=2}^m$ are the roots of the polynomial $P_2(z)$ defined by (2.3), $z_k \le z_{k+1}$, $k = \overline{2, m-1}$, and the matrices $\{A^{(s)}\}_{s=1}^m$ are defined by (2.6), (2.7).
2. (Completeness) The vectors $\{\mathcal{E}_{nk}\}_{n \ge 1, k=\overline{1,m}}$ in Definition 3.1 can be chosen so that the sequence $F$ is complete in $L_2((0, \pi); \mathbb{C}^m)$.

Let $\tilde{L} = L(\tilde{Q}(x), \tilde{T}, \tilde{H})$ be constructed by the formulas (2.8), where $\Omega := \operatorname{diag}\{\omega_j\}_{j=1}^m$. Under the conditions 1-2, the main equation (2.12) is uniquely solvable (Solvability). If, in addition, the solution $\psi(x)$ of the main equation is diagonal (Diagonality), then there exist unique real-valued functions $\{q_j\}_{j=1}^m$, $q_j \in L_2(0, \pi)$, $j = \overline{1,m}$, such that $S$ is the spectral data of the operator $\mathcal{L}$ constructed by $\{q_j\}_{j=1}^m$ and $h = \tilde{h}$ ($\tilde{h}$ is defined by (2.8)).

Thus, Proposition 3.2 and Theorem 3.3 together give the characterization of the spectral data for the matrix Sturm-Liouville problem (1.5)-(1.6) in the general form. Proposition 3.2 together with Theorem 3.4 characterizes the spectral data for the Sturm-Liouville operator $\mathcal{L}$ on the star-shaped graph. Note that, in Theorem 3.4, Solvability of the main equation is not required, but follows from Asymptotics and Completeness. Theorems 3.3 and 3.4 are proved in Sections 4-6.
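The algebraic part of the Asymptotics condition in Theorem 3.3 can be illustrated numerically. The sketch below (an illustration with assumed random toy data, not part of the paper) builds $\Theta = T(\Omega - H)T + T^\perp \Omega T^\perp$ for the graph case, takes the spectral projectors $A^{(s)}$ onto its eigenspaces, and checks that their sums reproduce $T$ and $T^\perp$ and that they are mutually orthogonal; the eigenvalues $z$ are generically simple here, so each $A^{(s)}$ is rank one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, h = 3, 0.5                                  # toy sizes (assumption, not from the paper)
T = np.full((m, m), 1.0 / m)                   # graph-case projector (1.7), p = 1
Tp = np.eye(m) - T
Omega = np.diag(rng.uniform(-1.0, 1.0, m))     # random "mean values" omega_j
Theta = T @ (Omega - h * T) @ T + Tp @ Omega @ Tp

# Spectral projectors A^{(s)} of the Hermitian matrix Theta
z, U = np.linalg.eigh(Theta)                   # columns of U: orthonormal eigenvectors
A = [np.outer(U[:, s], U[:, s].conj()) for s in range(m)]

# Theta commutes with T, so each eigenvector lies either in Ran(T) or in Ran(T^perp);
# split the projectors accordingly and check the sum conditions of Theorem 3.3.
in_ranT = [np.allclose(T @ U[:, s], U[:, s]) for s in range(m)]
sum_T = sum(A[s] for s in range(m) if in_ranT[s])
sum_Tp = sum(A[s] for s in range(m) if not in_ranT[s])

assert np.allclose(sum_T, T) and np.allclose(sum_Tp, Tp)
for s in range(m):
    for k in range(s + 1, m):
        assert np.allclose(A[s] @ A[k], 0)     # mutual orthogonality A^{(s)} A^{(k)} = 0
```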
4 Auxiliary Estimates

This section plays an auxiliary role. Here we obtain asymptotic formulas and estimates used in the further proofs.

Recall the notation $\tau := \operatorname{Im} \rho$. The matrix solution $S(x, \lambda)$ has the following standard asymptotics as $|\rho| \to \infty$, $\lambda = \rho^2$:
$$S(x, \lambda) = \frac{\sin \rho x}{\rho} I + O\left(\rho^{-2} \exp(|\tau| x)\right), \qquad S'(x, \lambda) = \cos \rho x \, I + O\left(\rho^{-1} \exp(|\tau| x)\right). \qquad (4.1)$$
Let $\Psi(x, \lambda)$ be the matrix solution of (1.5) under the initial conditions $\Psi(\pi, \lambda) = T$, $\Psi'(\pi, \lambda) = T^\perp + HT$. Clearly, $V(\Psi) = 0$. The following asymptotic formulas are valid as $|\rho| \to \infty$:
$$\Psi(x, \lambda) = \left(\cos \rho(\pi - x)\, I + O\left(\rho^{-1} \exp(|\tau|(\pi - x))\right)\right) T + \left(-\frac{\sin \rho(\pi - x)}{\rho}\, I + O\left(\rho^{-2} \exp(|\tau|(\pi - x))\right)\right) T^\perp,$$
$$\Psi'(x, \lambda) = \left(\rho \sin \rho(\pi - x)\, I + O\left(\exp(|\tau|(\pi - x))\right)\right) T + \left(\cos \rho(\pi - x)\, I + O\left(\rho^{-1} \exp(|\tau|(\pi - x))\right)\right) T^\perp.$$
The Weyl solution can be expressed in the form
$$\Phi(x, \lambda) = \Psi(x, \lambda)\, \Psi^{-1}(0, \lambda).$$
For $|\rho| \to \infty$, $\rho \in G_\delta$, where
$$G_\delta := \{\rho \in \mathbb{C} \colon |\rho - n| \ge \delta, \; |\rho - (n - \tfrac{1}{2})| \ge \delta, \; n \in \mathbb{Z}\}, \quad \delta > 0,$$
we have
$$\Psi^{-1}(0, \lambda) = T \left(\frac{I}{\cos \rho\pi} + O\left(\rho^{-1} \exp(-|\tau|\pi)\right)\right) - T^\perp \left(\frac{\rho}{\sin \rho\pi}\, I + O\left(\exp(-|\tau|\pi)\right)\right).$$
Consequently, we obtain the asymptotic relations
$$\Phi(x, \lambda) = T\, \frac{\cos \rho(\pi - x)}{\cos \rho\pi} + T^\perp\, \frac{\sin \rho(\pi - x)}{\sin \rho\pi} + O\left(\rho^{-1} \exp(-|\tau| x)\right), \qquad (4.2)$$
$$\Phi'(x, \lambda) = T\, \frac{\rho \sin \rho(\pi - x)}{\cos \rho\pi} - T^\perp\, \frac{\rho \cos \rho(\pi - x)}{\sin \rho\pi} + O\left(\exp(-|\tau| x)\right), \qquad (4.3)$$
for $|\rho| \to \infty$, $\rho \in G_\delta$.

Define the matrix function
$$E(x, \lambda, \mu) := \frac{\langle S^\dagger(x, \bar{\lambda}), \Phi(x, \mu) \rangle}{\lambda - \mu}. \qquad (4.4)$$
Using the asymptotic formulas (4.1), (4.2) and (4.3) together with the definitions (2.9) and (4.4), we obtain the estimates
$$\|D(x, \rho^2, \theta^2)\| \le \frac{C \exp((|\operatorname{Im}\rho| + |\operatorname{Im}\theta|)\, x)}{(|\rho| + 1)(|\theta| + 1)(|\rho - \theta| + 1)}, \quad \rho, \theta \in \mathbb{C}, \qquad (4.5)$$
$$\|E(x, \rho^2, \theta^2)\| \le \frac{C \exp((|\operatorname{Im}\rho| - |\operatorname{Im}\theta|)\, x)}{(|\rho| + 1)(|\rho - \theta| + 1)}, \quad \rho \in \mathbb{C}, \; \theta \in G_\delta, \qquad (4.6)$$
where $x \in [0, \pi]$, and $C$ is a positive constant independent of $x$, $\rho$ and $\theta$.

Along with the problem $L$, consider a problem $\tilde{L}$ such that
$$p = \tilde{p}, \quad T = \tilde{T}, \quad z_s = \tilde{z}_s, \quad A^{(s)} = \tilde{A}^{(s)}, \quad s = \overline{1,m}, \qquad (4.7)$$
i.e. all the coefficients in the asymptotic formulas (2.4) and (2.5) for the problems $L$ and $\tilde{L}$ coincide.

Consider the collections $\{G_k\}_{k \ge 1}$ defined by (2.10). Introduce the notations
$$n_1 := 0, \qquad n_{2j} := n_0 + j - \tfrac{1}{2}, \qquad n_{2j+1} := n_0 + j, \quad j \ge 1,$$
i.e. $n_k$ is the main part in the asymptotic relations (2.4) for the values from $G_k$.

Denote by $r_1$ and $r_2$ the numbers of distinct values among $\{z_k\}_{k=1}^p$ and among $\{z_k\}_{k=p+1}^m$, respectively. Consider the index sets
$$\mathcal{J}_s := \begin{cases} \{k = \overline{1,p} \colon z_k = z_s\}, & s = \overline{1,p}, \\ \{k = \overline{p+1,m} \colon z_k = z_s\}, & s = \overline{p+1,m}. \end{cases}$$
Denote all the distinct sets among $\{\mathcal{J}_s\}_{s=1}^p$ and $\{\mathcal{J}_s\}_{s=p+1}^m$ by $\{J_s^{(1)}\}_{s=1}^{r_1}$ and $\{J_s^{(2)}\}_{s=1}^{r_2}$, respectively. We divide every collection $G_k$ into subcollections as follows:
$$G_k = \bigcup_{i=1}^{p_k} G_{ki}, \qquad p_1 := 1, \quad p_{2j} := r_1, \quad p_{2j+1} := r_2,$$
$$G_{1,1} := G_1, \qquad G_{2j,i} := \{\rho_{n_0+j,ks}\}_{k \in J_i^{(1)},\, s=0,1}, \qquad G_{2j+1,i} := \{\rho_{n_0+j,ks}\}_{k \in J_i^{(2)},\, s=0,1}.$$
If $n_0$ is sufficiently large, we have $G_{ki} \cap G_{kj} = \emptyset$, $i \ne j$, $k \ge 1$. For any multiset $G$, introduce the notations
$$\alpha(G) = \sum_{(l,j)\colon \rho_{lj0} \in G} \alpha'_{lj}, \qquad \tilde{\alpha}(G) = \sum_{(l,j)\colon \rho_{lj1} \in G} \tilde{\alpha}'_{lj}.$$
In view of the asymptotics (2.4), (2.5) and the relation (4.7), we have
$$\Xi := \left( \sum_{k=1}^\infty (k \xi_k)^2 \right)^{1/2} < \infty, \qquad (4.8)$$
$$\xi_k := \sum_{i=1}^{p_k} \sum_{\rho, \theta \in G_{ki}} |\rho - \theta| + \frac{1}{k} \sum_{i=1}^{p_k} \|\alpha(G_{ki}) - \tilde{\alpha}(G_{ki})\| + \frac{1}{k} \|\alpha(G_k) - \tilde{\alpha}(G_k)\|. \qquad (4.9)$$
Using the estimates (4.5) and (4.6) for $\tilde{D}$ and $\tilde{E}$, respectively, and the standard approach (see, e.g., [36, Lemma 1.6.2]) based on Schwarz's lemma, we obtain the following result.

Lemma 4.1.
For $x \in [0, \pi]$, $k \ge 1$, the following estimates are valid:
$$\|\tilde{D}(x, \chi^2, \rho^2) - \tilde{D}(x, \theta^2, \rho^2)\| \le \frac{C \exp(|\tau| x)}{k (|\rho| + 1)(|\rho - n_k| + 1)}, \quad \rho \in \mathbb{C}, \; \theta, \chi \in G_k,$$
$$\|\tilde{D}(x, \chi^2, \rho^2) - \tilde{D}(x, \theta^2, \rho^2)\| \le \frac{C \xi_k \exp(|\tau| x)}{k (|\rho| + 1)(|\rho - n_k| + 1)}, \quad \rho \in \mathbb{C}, \; \chi, \theta \in G_{ki}, \; i = \overline{1, p_k},$$
$$\|\tilde{E}(x, \chi^2, \rho^2) - \tilde{E}(x, \theta^2, \rho^2)\| \le \frac{C \exp(-|\tau| x)}{k (|\rho - n_k| + 1)}, \quad \rho \in G_\delta, \; \theta, \chi \in G_k,$$
$$\|\tilde{E}(x, \chi^2, \rho^2) - \tilde{E}(x, \theta^2, \rho^2)\| \le \frac{C \xi_k \exp(-|\tau| x)}{k (|\rho - n_k| + 1)}, \quad \rho \in G_\delta, \; \chi, \theta \in G_{ki}, \; i = \overline{1, p_k},$$
where the constant $C$ does not depend on $x$, $\rho$, $k$, $\chi$ and $\theta$.

The following proposition has been proved in [40].
Proposition 4.2. For $x \in [0, \pi]$, the following estimates hold:
$$\|\tilde{R}_{k,n}(x)\|_{B(G_k) \to B(G_n)} \le \frac{C k \xi_k}{n (|n - k| + 1)}, \quad n, k \ge 1, \qquad \|\tilde{R}(x)\|_{B \to B} \le C \Xi.$$

5 Main Equation Solvability
The aim of this section is to prove that the Asymptotics and Completeness conditions of Theorems 3.3 and 3.4 imply the unique solvability of the main equation (2.12).

Let $S := \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k=\overline{1,m}}$ be data from the class SD, satisfying the conditions of Theorem 3.3. Then the integer $p \in [1, m-1]$, the numbers $\{z_k\}_{k=1}^m$ and the matrices $\{A^{(s)}\}_{s=1}^m$ are specified by the Asymptotics condition. Construct the matrix
$$\tilde{\Theta} := \sum_{s \in J} z_s A^{(s)}, \qquad J := \{s = \overline{1,m} \colon s = 1 \text{ or } s = p + 1 \text{ or } z_s \ne z_{s-1}\}.$$
Put $\tilde{L} = L(\frac{2}{\pi}\tilde{\Theta}, T, 0)$, where $T$ is the orthogonal projector from the asymptotics (2.5).

It is easy to check that the spectral data $\{\tilde{\lambda}_{nk}, \tilde{\alpha}_{nk}\}_{n \ge 1, k=\overline{1,m}}$ of the problem $\tilde{L}$ satisfy the asymptotic relations (2.4) and (2.5) with the same coefficients $\{z_k\}_{k=1}^m$, $T$, $T^\perp$ and $\{A^{(s)}\}_{s=1}^m$ as the collection $S$ has. Consequently, the estimates of Section 4 are valid for the problems $L$ and $\tilde{L}$. The results of [40] yield that the operator $\tilde{R}(x)$, constructed in Section 2, is compact in $B$. Relying on these facts, we prove the following lemma.

Lemma 5.1.
Under the above assumptions, the main equation (2.12) has a unique solution in $B$ for each $x \in [0, \pi]$.

Proof. Fix $x \in [0, \pi]$. Let us prove that the operator $(I + \tilde R(x))$ has a bounded inverse. By virtue of Fredholm's theorem, it is sufficient to show that the homogeneous equation $f (I + \tilde R(x)) = 0$ has only the trivial solution $f = 0$ in $B$. Due to the definitions in Section 2, a solution $f = \{f_k\}_{k \ge 1} \in B$ of the homogeneous equation satisfies the relations
$$f_n(\rho) + \sum_{k=1}^\infty \sum_{(l,j)\colon \rho_{lj0}, \rho_{lj1} \in G_k} \big( f_k(\rho_{lj0}) \alpha'_{lj0} \tilde D(x, \rho_{lj0}, \rho) - f_k(\rho_{lj1}) \alpha'_{lj1} \tilde D(x, \rho_{lj1}, \rho) \big) = 0, \quad \rho \in G_n,$$
and the estimates
$$\|f_k(\rho)\| \le \|f\|_B, \qquad \|f_k(\rho) - f_k(\theta)\| \le \|f\|_B \, \frac{|\rho - \theta|}{\xi_k}, \quad \rho, \theta \in G_k, \ k \ge 1. \eqno(5.1)$$
Introduce the matrix functions
$$\gamma(\lambda) := -\sum_{k=1}^\infty \sum_{(l,j)\colon \rho_{lj0}, \rho_{lj1} \in G_k} \big( f_k(\rho_{lj0}) \alpha'_{lj0} \tilde D(x, \lambda_{lj0}, \lambda) - f_k(\rho_{lj1}) \alpha'_{lj1} \tilde D(x, \lambda_{lj1}, \lambda) \big),$$
$$\Gamma(\lambda) := -\sum_{k=1}^\infty \sum_{(l,j)\colon \rho_{lj0}, \rho_{lj1} \in G_k} \big( f_k(\rho_{lj0}) \alpha'_{lj0} \tilde E(x, \lambda_{lj0}, \lambda) - f_k(\rho_{lj1}) \alpha'_{lj1} \tilde E(x, \lambda_{lj1}, \lambda) \big),$$
$$F(\lambda) := \Gamma(\lambda) \, \gamma^\dagger(\bar\lambda).$$
The matrix function $\gamma(\lambda)$ is entire and $\gamma(\lambda_{ljs}) = f_k(\rho_{ljs})$ for $\rho_{ljs} \in G_k$, $k \ge 1$. The matrix functions $\Gamma(\lambda)$ and $F(\lambda)$ are meromorphic with simple poles from the set $\{\lambda_{ljs}\}_{l \ge 1, j = \overline{1,m}, s = 0,1}$. Calculations yield
$$\mathop{\mathrm{Res}}_{\lambda = \lambda_{lj0}} F(\lambda) = \gamma(\lambda_{lj0}) \, \alpha'_{lj0} \, \gamma^\dagger(\lambda_{lj0}), \qquad \mathop{\mathrm{Res}}_{\lambda = \lambda_{lj1}} F(\lambda) = 0, \quad l \ge 1, \ j = \overline{1,m}, \eqno(5.2)$$
if $\lambda_{lj1} \notin \{\lambda_{nk0}\}_{n \ge 1, k = \overline{1,m}}$. The opposite case requires minor technical modifications.

Using the relations (4.5), (4.6), (4.9) and Lemma 4.1, we obtain the estimates
$$\|\gamma(\lambda)\| \le \frac{C \exp(|\tau| x)}{|\rho| + 1} \sum_{k=1}^\infty \frac{\xi_k}{|\rho - n_k| + 1}, \quad \rho \in \mathbb{C}, \eqno(5.3)$$
$$\|\Gamma(\lambda)\| \le C \exp(-|\tau| x) \sum_{k=1}^\infty \frac{\xi_k}{|\rho - n_k| + 1}, \quad \rho \in G_\delta. \eqno(5.4)$$
Consider the contours $\Gamma_N := \{\lambda \in \mathbb{C} \colon |\lambda| = (N + \frac14)^2\}$, $N \in \mathbb{N}$, in the $\lambda$-plane with the counter-clockwise circuit. Clearly, $\lambda \in \Gamma_N$ implies $\rho = \sqrt{\lambda} \in G_\delta$ for sufficiently large $N$ and sufficiently small $\delta > 0$. Using the estimates (5.3) and (5.4), we obtain
$$\|F(\lambda)\| \le \frac{C}{|\rho|} \Big( \sum_{k=1}^\infty \frac{\xi_k}{|\rho - n_k| + 1} \Big)^2, \quad \lambda \in \Gamma_N.$$
Hence, on the one hand, we have
$$\lim_{N \to \infty} \oint_{\Gamma_N} F(\lambda) \, d\lambda = 0.$$
On the other hand, the residue theorem together with the relations (5.2) implies
$$\frac{1}{2\pi i} \oint_{\Gamma_N} F(\lambda) \, d\lambda = \sum_{(l,j)\colon |\lambda_{lj0}| < (N + \frac14)^2} \gamma(\lambda_{lj0}) \, \alpha'_{lj0} \, \gamma^\dagger(\lambda_{lj0}).$$
Taking the limit as $N \to \infty$, we arrive at the relation
$$\sum_{l=1}^\infty \sum_{j=1}^m \gamma(\lambda_{lj}) \, \alpha'_{lj} \, \gamma^\dagger(\lambda_{lj}) = 0.$$
Since $\alpha_{lj} = \alpha_{lj}^\dagger \ge 0$, $l \ge 1$, $j = \overline{1,m}$, every summand is positive semidefinite, so the vanishing of the sum forces every summand to vanish; writing $\alpha_{lj} = \beta_{lj} \beta_{lj}^\dagger$, we get $\gamma(\lambda_{lj}) \beta_{lj} = 0$, and we conclude that
$$\gamma(\lambda_{lj}) \, \alpha_{lj} = 0, \quad l \ge 1, \ j = \overline{1,m}. \eqno(5.5)$$
Note that the function $\rho \, \gamma(\rho^2)$ is entire and odd in the $\rho$-plane. In view of (4.8) and (5.3), we have
$$\|\gamma(\lambda)\| = O\big( \rho^{-1} \exp(|\tau| \pi) \big), \quad |\rho| \to \infty, \qquad \rho \, \gamma(\rho^2) \in L_2(\mathbb{R}; \mathbb{C}^{m \times m}).$$
Therefore the Paley-Wiener theorem yields the representation
$$\gamma(\lambda) = \int_0^\pi G(t) \, \frac{\sin \rho t}{\rho} \, dt, \qquad G \in L_2((0, \pi); \mathbb{C}^{m \times m}).$$
The relation (5.5) implies that
$$\gamma(\lambda_{nk}) \, E_{nk} = \int_0^\pi G(t) \, \frac{\sin \rho_{nk} t}{\rho_{nk}} \, E_{nk} \, dt = 0, \quad n \ge 1, \ k = \overline{1,m},$$
where $\{E_{nk}\}$ are the vectors from Definition 3.1. Since the corresponding sequence is complete in $L_2((0, \pi); \mathbb{C}^m)$ by the Completeness condition, we conclude that $G = 0$ in $L_2((0, \pi); \mathbb{C}^{m \times m})$. Consequently, $\gamma(\lambda) \equiv 0$. Hence $f = 0$ in $B$, which yields the claim of the lemma.

Corollary 5.2.
Suppose that the data $S \in SD$ satisfy the conditions 1-2 of Theorem 3.4. Let $\tilde L$ be the auxiliary problem constructed by the formulas (2.8). Then the main equation (2.12) is uniquely solvable.

Proof of Sufficiency
In this section, several lemmas are provided, which finish the proofs of Theorems 3.3 and 3.4. By using the solution $\psi(x)$ of the main equation (2.12), we construct $Q \in L_2((0, \pi); \mathbb{C}^{m \times m})$ and $H \in \mathbb{C}^{m \times m}$. Further we prove that the given collection $S$ is the spectral data of the boundary value problem $L = L(Q(x), T, H)$.

Let the data $S = \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k = \overline{1,m}}$ fulfill the conditions of Theorem 3.3, and let $\tilde L$ be the model problem constructed in the previous section. By Lemma 5.1, the main equation (2.12), constructed by $S$ and $\tilde L$, has a unique solution $\psi(x) = \{\psi_n(x)\}_{n \ge 1} \in B$ for each fixed $x \in [0, \pi]$. Relying on (4.8), (4.9) and (2.11), we prove the following lemma.

Lemma 6.1. For $n \ge 1$, the operator functions $\psi_n \colon [0, \pi] \to B(G_n)$ are continuously differentiable with respect to $x \in [0, \pi]$ and satisfy the estimates
$$\|\psi_n^{(\nu)}(x)\|_{B(G_n)} \le C n^{\nu - 1}, \quad \nu = 0, 1, \ x \in [0, \pi], \eqno(6.1)$$
$$\|\psi_n(x) - \tilde\psi_n(x)\|_{B(G_n)} \le \frac{C \, \Xi \, \eta_n}{n}, \qquad \|\psi_n'(x) - \tilde\psi_n'(x)\|_{B(G_n)} \le C \, \Xi \, \eta_n, \quad x \in [0, \pi], \eqno(6.2)$$
$$\eta_n := \Big( \sum_{k=1}^\infty \frac{1}{k^2 (|n - k| + 1)^2} \Big)^{1/2}, \qquad \{\eta_n\}_{n \ge 1} \in l_2. \eqno(6.3)$$

Proof.
Analogously to [40, Lemma 4.3], we prove that, for every $k, n \ge 1$, the operator function $\tilde R_{k,n}(x)$ is continuously differentiable with respect to $x$, and
$$\|\tilde R_{k,n}'(x)\|_{B(G_k) \to B(G_n)} \le \frac{C k \xi_k}{n}, \quad x \in [0, \pi]. \eqno(6.4)$$
Fix $x_0 \in [0, \pi]$. By using (6.4) and (4.8), one can easily show that
$$\|\tilde R(x) - \tilde R(x_0)\|_{B \to B} \le C \, \Xi \, |x - x_0|, \quad x \in [0, \pi]. \eqno(6.5)$$
Define the operator $P(x) := (I + \tilde R(x))^{-1}$. Relying on the estimate (6.5), we prove that $P(x)$ is continuous on $[0, \pi]$. Consequently, $\|P(x)\|_{B \to B} \le C$, $x \in [0, \pi]$. Define $R(x) := I - P(x)$, $R(x) = [R_{k,n}(x)]_{k,n \ge 1}$. Clearly, $\|R(x)\|_{B \to B} \le C$, $x \in [0, \pi]$, and
$$(I - R(x))(I + \tilde R(x)) = (I + \tilde R(x))(I - R(x)) = I. \eqno(6.6)$$
The latter relations can be rewritten in the form
$$R_{k,n}(x) = \tilde R_{k,n}(x) - \sum_{l=1}^\infty R_{k,l}(x) \tilde R_{l,n}(x), \eqno(6.7)$$
$$R_{k,n}(x) = \tilde R_{k,n}(x) - \sum_{l=1}^\infty \tilde R_{k,l}(x) R_{l,n}(x), \eqno(6.8)$$
where $n, k \ge 1$. Using (6.8), Proposition 4.2 and the estimate $\|R(x)\|_{B \to B} \le C$, we get
$$\|R_{k,n}(x)\|_{B(G_k) \to B(G_n)} \le \frac{C k \xi_k}{n}, \quad k, n \ge 1, \ x \in [0, \pi]. \eqno(6.9)$$
Next, using (6.9) together with (6.7) and Proposition 4.2, we arrive at the estimate
$$\|R_{k,n}(x)\|_{B(G_k) \to B(G_n)} \le \frac{C k \xi_k}{n} \Big( \frac{1}{|n - k| + 1} + \Xi \, \eta_n \Big), \eqno(6.10)$$
where $n, k \ge 1$, $x \in [0, \pi]$, and $\eta_n$ is defined in (6.3).

Since $\psi(x) = \tilde\psi(x)(I - R(x))$, we have
$$\psi_n(x) = \tilde\psi_n(x) - \sum_{k=1}^\infty \tilde\psi_k(x) R_{k,n}(x), \quad n \ge 1. \eqno(6.11)$$
Using (6.10), (6.11) and the estimate $\|\tilde\psi_n(x)\|_{B(G_n)} \le C n^{-1}$, $n \ge 1$, we obtain (6.1) for $\nu = 0$ and (6.2). One can similarly prove (6.1) for $\nu = 1$, differentiating the relation (6.11). The necessary estimate for $\|R_{k,n}'(x)\|_{B(G_k) \to B(G_n)}$ can be obtained by differentiating (6.6).

Define the matrix functions $S_{ljs}(x) := \psi_k(x)(\rho_{ljs})$ for $l \ge 1$, $j = \overline{1,m}$, $s = 0, 1$, where $k$ is such that $\rho_{ljs} \in G_k$. Also define
$$S(x, \lambda) := \tilde S(x, \lambda) - \sum_{l=1}^\infty \sum_{j=1}^m \big( S_{lj0}(x) \alpha'_{lj0} \tilde D(x, \lambda_{lj0}, \lambda) - S_{lj1}(x) \alpha'_{lj1} \tilde D(x, \lambda_{lj1}, \lambda) \big), \eqno(6.12)$$
$$\Phi(x, \lambda) := \tilde\Phi(x, \lambda) - \sum_{l=1}^\infty \sum_{j=1}^m \big( S_{lj0}(x) \alpha'_{lj0} \tilde E(x, \lambda_{lj0}, \lambda) - S_{lj1}(x) \alpha'_{lj1} \tilde E(x, \lambda_{lj1}, \lambda) \big), \eqno(6.13)$$
$$\varepsilon_0(x) := \sum_{l=1}^\infty \sum_{j=1}^m \big( S_{lj0}(x) \alpha'_{lj0} \tilde S_{lj0}^\dagger(x) - S_{lj1}(x) \alpha'_{lj1} \tilde S_{lj1}^\dagger(x) \big), \qquad \varepsilon(x) := -2 \varepsilon_0'(x). \eqno(6.14)$$
Using the relations (2.12) and (6.12), one can easily show that
$$S(x, \lambda_{ljs}) = S_{ljs}(x), \quad l \ge 1, \ j = \overline{1,m}, \ s = 0, 1. \eqno(6.15)$$
Following the algorithm for solving Inverse Problem 1.4 from [40], we find
$$Q(x) := \tilde Q(x) + \varepsilon(x), \qquad H := \tilde H - T \varepsilon_0(\pi) T. \eqno(6.16)$$
Using Lemma 6.1, (4.1), (4.8) and (4.9), we obtain the following result.

Lemma 6.2.
The series in (6.14) converges uniformly with respect to $x \in [0, \pi]$ to an absolutely continuous matrix function. Moreover, $\varepsilon(x) \in L_2((0, \pi); \mathbb{C}^{m \times m})$ and $\|\varepsilon\|_{L_2} \le C \, \Xi$. Hence $Q(x)$ defined by (6.16) belongs to $L_2((0, \pi); \mathbb{C}^{m \times m})$.

Put $L = L(Q(x), T, H)$, where $Q(x)$ and $H$ are constructed by (6.16). Our next goal is to show that $S(x, \lambda)$ is the sine-type matrix solution of eq. (1.5) with the matrix potential $Q(x)$, and that $\Phi(x, \lambda)$ is the Weyl solution of the problem $L$.

Lemma 6.3.
The following relations hold:
$$-S''(x, \lambda) + Q(x) S(x, \lambda) = \lambda S(x, \lambda), \qquad -\Phi''(x, \lambda) + Q(x) \Phi(x, \lambda) = \lambda \Phi(x, \lambda), \eqno(6.17)$$
$$S(0, \lambda) = 0, \quad S'(0, \lambda) = I, \quad \Phi(0, \lambda) = I. \eqno(6.18)$$
The relations (6.17) can be proved by differentiating (6.12) and (6.13), analogously to the scalar case (see [36, Lemma 1.6.9]). The relations (6.18) trivially follow from (6.12) and (6.13).

Lemma 6.4.
The following series converges in $L_2((0, \pi); \mathbb{C}^{m \times m})$:
$$f(x) = \sum_{n=1}^\infty \sum_{k=1}^m V(S(x, \lambda_{nk})) \, \alpha'_{nk} \, \tilde S^\dagger(x, \lambda_{nk}). \eqno(6.19)$$

Proof. Note that
$$\sum_{k=1}^p V(S(x, \lambda_{nk})) \alpha'_{nk} \tilde S^\dagger(x, \lambda_{nk}) = Z_{n1}(x) + Z_{n2}(x) + Z_{n3}(x),$$
$$Z_{n1}(x) := \sum_{s \in S} \sum_{\substack{k = 1 \\ z_k = z_s}}^p \big( V(S(x, \lambda_{nk})) - V(S(x, \lambda_{ns})) \big) \alpha'_{nk} \tilde S^\dagger(x, \lambda_{nk}),$$
$$Z_{n2}(x) := \sum_{s \in S} V(S(x, \lambda_{ns})) \sum_{\substack{k = 1 \\ z_k = z_s}}^p \alpha'_{nk} \big( \tilde S^\dagger(x, \lambda_{nk}) - \tilde S^\dagger(x, \lambda_{ns}) \big),$$
$$Z_{n3}(x) := \sum_{s \in S} V(S(x, \lambda_{ns})) \, \alpha_n^{(s)} \, \tilde S^\dagger(x, \lambda_{ns}),$$
where $S := \{ s = \overline{1, p} \colon s = 1 \text{ or } z_{s-1} \ne z_s \}$.

Since the function $S(x, \lambda)$ satisfies (6.17) with $Q \in L_2((0, \pi); \mathbb{C}^{m \times m})$, the asymptotic relations (4.1) hold. Using (4.1) for $S(x, \lambda)$ and $\tilde S(x, \lambda)$ together with (2.4), we obtain for $n \ge 1$, $k = \overline{1, p}$ that
$$V(S(x, \lambda_{nk})) = \frac{(-1)^n}{n} \big( T (z_k I - \Omega + H) + T^\perp + K_n \big), \eqno(6.20)$$
$$\tilde S(x, \lambda_{nk}) = \frac{\sin\big( (n - \frac12) x \big)}{n - \frac12} + \frac{1}{n^2} \cos\big( \big( n - \tfrac12 \big) x \big) \Big( \frac{z_k x}{\pi} I - \int_0^x \tilde Q(t) \, dt \Big) + \frac{K_n}{n^2}. \eqno(6.21)$$
Using (6.20) and (6.21) and noting that $\alpha_{nk} = O(n^2)$, $n \ge 1$, one can easily show that the series $\sum_{n=1}^\infty Z_{ni}$ converges in $L_2((0, \pi); \mathbb{C}^{m \times m})$ for $i = 1, 2$. In view of (6.20), (6.21) and (2.5), the following relation holds:
$$Z_{n3}(x) = \frac{2 (-1)^n}{\pi} \sum_{s \in S} T (z_s I - \Omega + H) T A^{(s)} \sin\big( \big( n - \tfrac12 \big) x \big) + Z_{n4}(x), \eqno(6.22)$$
where the series $\sum_{n=1}^\infty Z_{n4}$ converges in $L_2((0, \pi); \mathbb{C}^{m \times m})$. It follows from (6.16) and the construction of the model problem $\tilde L$ that
$$T (\Omega - H) T = T (\tilde\Omega - \tilde H) T = T \tilde\Theta T = T \sum_{s \in S} z_s A^{(s)} T.$$
Hence the main part in (6.22) vanishes, so $\sum_{n=1}^\infty Z_{n3}$ converges in $L_2((0, \pi); \mathbb{C}^{m \times m})$. The $L_2$-convergence of the series
$$\sum_{n=1}^\infty \sum_{k=p+1}^m V(S(x, \lambda_{nk})) \, \alpha'_{nk} \, \tilde S^\dagger(x, \lambda_{nk})$$
can be proved analogously.

The asymptotics (4.1) for $\tilde S(x, \lambda)$, the Completeness condition for $S$ and $\tilde S$ and the results of [39] yield the following proposition.

Proposition 6.5. (i) The sequence $\{\tilde S_{nk}(x) \tilde E_{nk}\}_{n \ge 1, k = \overline{1,m}}$ is complete in $L_2((0, \pi); \mathbb{C}^m)$. (ii) The sequence $\{\tilde S_{nk}(x) E_{nk}\}_{n \ge 1, k = \overline{1,m}}$ is minimal in $L_2((0, \pi); \mathbb{C}^m)$.

Lemma 6.6. $V(\Phi) = 0$.

Proof. Differentiating (6.12) and using (6.14), we derive
$$V(S) = V(\tilde S) - \sum_{l=1}^\infty \sum_{j=1}^m \big( V(S_{lj0}) \alpha'_{lj0} \tilde D(\pi, \lambda_{lj0}, \lambda) - V(S_{lj1}) \alpha'_{lj1} \tilde D(\pi, \lambda_{lj1}, \lambda) \big) - T \varepsilon_0(\pi) \tilde S(\pi, \lambda).$$
Since
$$\tilde V(\tilde S) = T \big( \tilde S'(\pi, \lambda) - \tilde H \tilde S(\pi, \lambda) \big) - T^\perp \tilde S(\pi, \lambda) = V(\tilde S) + T (H - \tilde H) \tilde S(\pi, \lambda),$$
we get
$$V(S) = \tilde V(\tilde S) - T \varepsilon_0(\pi) T^\perp \tilde S(\pi, \lambda) - \sum_{l=1}^\infty \sum_{j=1}^m \big( V(S_{lj0}) \alpha'_{lj0} \tilde D(\pi, \lambda_{lj0}, \lambda) - V(S_{lj1}) \alpha'_{lj1} \tilde D(\pi, \lambda_{lj1}, \lambda) \big). \eqno(6.23)$$
Recall that $\{\lambda_{nk1}\}_{n \ge 1, k = \overline{1,m}}$ and $\{\alpha_{nk1}\}_{n \ge 1, k = \overline{1,m}}$ are the eigenvalues and the weight matrices of the problem $\tilde L$, respectively. Therefore
$$\tilde V(\tilde S(x, \lambda_{nk1})) \alpha_{nk1} = 0, \qquad T^\perp \tilde S(\pi, \lambda_{nk1}) \alpha_{nk1} = 0, \quad n \ge 1, \ k = \overline{1,m} \eqno(6.24)$$
(see [39, Lemma 2.2]). The relations (6.23) and (6.24) together imply
$$V(S_{nk1}) \alpha_{nk1} = -\sum_{l=1}^\infty \sum_{j=1}^m \big( V(S_{lj0}) \alpha'_{lj0} \tilde D(\pi, \lambda_{lj0}, \lambda_{nk1}) \alpha_{nk1} - V(S_{lj1}) \alpha'_{lj1} \tilde D(\pi, \lambda_{lj1}, \lambda_{nk1}) \alpha_{nk1} \big). \eqno(6.25)$$
Using (2.9) and (1.5) for the problem $\tilde L$, one can easily show that
$$\tilde D(x, \lambda, \mu) = \int_0^x \tilde S^\dagger(t, \bar\lambda) \, \tilde S(t, \mu) \, dt. \eqno(6.26)$$
It has been proved in [39] that
$$\alpha_{lj1} \int_0^\pi \tilde S^\dagger(t, \lambda_{lj1}) \, \tilde S(t, \lambda_{nk1}) \, dt \, \alpha_{nk1} = \begin{cases} \alpha_{nk1}, & \lambda_{nk1} = \lambda_{lj1}, \\ 0, & \lambda_{nk1} \ne \lambda_{lj1}. \end{cases} \eqno(6.27)$$
Combining the latter relations, we obtain that
$$\sum_{l=1}^\infty \sum_{j=1}^m V(S_{lj1}) \alpha'_{lj1} \tilde D(\pi, \lambda_{lj1}, \lambda_{nk1}) \alpha_{nk1} = V(S_{nk1}) \alpha_{nk1}.$$
Hence (6.25) takes the form
$$\sum_{l=1}^\infty \sum_{j=1}^m V(S_{lj0}) \alpha'_{lj0} \tilde D(\pi, \lambda_{lj0}, \lambda_{nk1}) \alpha_{nk1} = 0. \eqno(6.28)$$
Substituting (6.26) into (6.28), we arrive at the relations
$$\int_0^\pi f(t) \, \tilde S_{nk}(t) \, \alpha_{nk1} \, dt = 0, \quad n \ge 1, \ k = \overline{1,m}, \eqno(6.29)$$
where $f(t)$ is defined by (6.19). By virtue of Lemma 6.4, $f \in L_2((0, \pi); \mathbb{C}^{m \times m})$. Then we apply Proposition 6.5. The completeness of the sequence $\{\tilde S_{nk}(t) \tilde E_{nk}\}_{n \ge 1, k = \overline{1,m}}$ together with (6.29) implies $f = 0$ in $L_2((0, \pi); \mathbb{C}^{m \times m})$. The relation (6.19) and the minimality of the sequence $\{\tilde S_{nk}(x) E_{nk}\}_{n \ge 1, k = \overline{1,m}}$ yield
$$V(S_{lj}) \, \alpha_{lj} = 0, \quad l \ge 1, \ j = \overline{1,m}. \eqno(6.30)$$
Differentiating (6.13), we derive the following relation similar to (6.23):
$$V(\Phi) = \tilde V(\tilde\Phi) - T \varepsilon_0(\pi) T^\perp \tilde\Phi(\pi, \lambda) - \sum_{l=1}^\infty \sum_{j=1}^m \big( V(S_{lj0}) \alpha'_{lj0} \tilde E(\pi, \lambda_{lj0}, \lambda) - V(S_{lj1}) \alpha'_{lj1} \tilde E(\pi, \lambda_{lj1}, \lambda) \big).$$
Recall that $\tilde V(\tilde\Phi) = 0$, $T^\perp \tilde\Phi(\pi, \lambda) = 0$. Using (4.4) and (6.24), one can easily show that $\alpha'_{lj1} \tilde E(\pi, \lambda_{lj1}, \lambda) = 0$ for $\lambda \ne \lambda_{lj1}$, $l \ge 1$, $j = \overline{1,m}$. Consequently, taking (6.30) into account, we obtain that $V(\Phi) = 0$.

Lemmas 6.3 and 6.6 imply that $\Phi(x, \lambda)$ defined by (6.13) is the Weyl solution of the problem $L$. Hence $M(\lambda) := \Phi'(0, \lambda)$ is the Weyl matrix of $L$. Using (6.13), we derive the relation
$$M(\lambda) = \tilde M(\lambda) - \sum_{l=1}^\infty \sum_{j=1}^m \Big( \frac{\alpha'_{lj0}}{\lambda - \lambda_{lj0}} - \frac{\alpha'_{lj1}}{\lambda - \lambda_{lj1}} \Big).$$
Obviously, the singularities of $M(\lambda)$ coincide with $\{\lambda_{nk}\}_{n \ge 1, k = \overline{1,m}}$ and the relation (1.4) holds, so $\{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k = \overline{1,m}}$ are the spectral data of $L$.

In order to finish the proof of Theorem 3.3, it remains to show that the matrices $Q(x)$ and $H$ constructed by (6.16) are Hermitian. For this purpose, along with $L$ we consider the boundary value problem $L^* = L^*(Q(x), T, H)$ of the following form:
$$-Z''(x) + Z(x) Q(x) = \lambda Z(x), \quad x \in (0, \pi), \eqno(6.31)$$
$$Z(0) = 0, \qquad V^*(Z) := \big( Z'(\pi) - Z(\pi) H \big) T - Z(\pi) T^\perp = 0. \eqno(6.32)$$
The Weyl solution of the problem $L^*$ is the matrix solution $\Phi^*(x, \lambda)$ of eq. (6.31), satisfying the conditions $\Phi^*(0, \lambda) = I$, $V^*(\Phi^*) = 0$. The Weyl matrix is defined as $M^*(\lambda) := \Phi^{*\prime}(0, \lambda)$. Using the approach of [47], one can show that $M(\lambda) \equiv M^*(\lambda)$, so the eigenvalues $\{\lambda_{nk}^*\}_{n \ge 1, k = \overline{1,m}}$ of the problem $L^*$ coincide with the eigenvalues of $L$, and the weight matrices $\alpha_{nk}^* := \operatorname{Res}_{\lambda = \lambda_{nk}^*} M^*(\lambda)$ coincide with $\alpha_{nk}$, $n \ge 1$, $k = \overline{1,m}$.

Taking the conjugate transpose of (6.31) and (6.32), we conclude that the boundary value problem $L^\dagger := L(Q^\dagger(x), T, H^\dagger)$ has the spectral data $\{\bar\lambda_{nk}^*, (\alpha_{nk}^*)^\dagger\}_{n \ge 1, k = \overline{1,m}} = \{\bar\lambda_{nk}, \alpha_{nk}^\dagger\}_{n \ge 1, k = \overline{1,m}}$. Since $\lambda_{nk} \in \mathbb{R}$ and $\alpha_{nk} = \alpha_{nk}^\dagger$, $n \ge 1$, $k = \overline{1,m}$, the spectral data of the problems $L$ and $L^\dagger$ coincide. The uniqueness of the solution of Inverse Problem 1.4 (see [38, 40]) yields that $L = L^\dagger$, i.e. $Q(x) = Q^\dagger(x)$ for a.a. $x \in (0, \pi)$ and $H = H^\dagger$. Theorem 3.3 is completely proved.

Lemma 6.7.
In the graph case, the matrix function $Q(x)$ constructed by (6.16) is diagonal.

Proof. Recall that, in the graph case, we have an additional requirement: the solution $\psi(x)$ of the main equation (2.12) is diagonal. Consequently, the matrices $S_{ljs}(x)$ are diagonal for all $l \ge 1$, $j = \overline{1,m}$, $s = 0, 1$. The relation (6.15), the asymptotic formulas (2.4) and (4.1) together with interpolation arguments imply that the matrix function $S(x, \lambda)$ is also diagonal. In view of (6.17), we conclude that $Q(x)$ is diagonal.

Since in the graph case $\operatorname{rank}(T) = 1$, the matrix $H$ defined by (6.16) has the form $H = h T$, $h \in \mathbb{R}$. According to (2.2) and (2.3), the number $h$ is uniquely specified by the numbers $\{z_j\}_{j=1}^m$:
$$h = \frac{1}{m} \sum_{j=2}^m z_j - z_1.$$
Since $z_j = \tilde z_j$, $j = \overline{1,m}$, we have $h = \tilde h$ ($\tilde h$ is defined by (2.8)). Thus, Theorem 3.4 is proved.

Local Solvability and Stability

This section concerns local solvability and stability of the studied inverse problems. The main results are formulated in Theorems 7.1 and 7.3 for the general matrix case and for the graph case, respectively. The proofs of these theorems strongly rely on the results of the previous sections.

Let $\tilde L = L(\tilde Q(x), T, \tilde H)$ be a fixed boundary value problem in the general form, and let $\tilde S = \{\tilde\lambda_{nk}, \tilde\alpha_{nk}\}_{n \ge 1, k = \overline{1,m}}$ be the spectral data of $\tilde L$. Denote by $SD(\tilde S)$ the set of all collections $S = \{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k = \overline{1,m}} \in SD$ such that the relations (2.4), (2.5) and (4.7) hold.

Let us group the numbers $\{\tilde\rho_{nk}\}_{n \ge 1, k = \overline{1,m}}$ into collections (multisets) as follows:
$$\tilde G_1 := \{\tilde\rho_{nk}\}_{n = \overline{1, n_0}, \, k = \overline{1,m}}, \qquad \tilde G_{2j} := \{\tilde\rho_{n_0 + j, k}\}_{k=1}^p, \qquad \tilde G_{2j+1} := \{\tilde\rho_{n_0 + j, k}\}_{k=p+1}^m, \quad j \ge 1,$$
where $n_0$ is a fixed integer such that $\tilde G_n \cap \tilde G_k = \emptyset$ for all $n \ne k$.

Let $S$ be an arbitrary element of $SD(\tilde S)$. Consider the partition $\{G_k\}_{k \ge 1}$ of the numbers $\{\rho_{nks}\}_{n \ge 1, k = \overline{1,m}, s = 0,1}$, defined by (2.10) and having the same $n_0$ as the partition $\{\tilde G_k\}_{k \ge 1}$. Clearly, $\tilde G_k \subset G_k$, $k \ge 1$. Consider an arbitrary partition $G_k = \bigcup_{i=1}^{p_k} G_{ki}$, $p_k \in \mathbb{N}$, $k \ge 1$, such that $G_{ki} \cap G_{kj} = \emptyset$ for all $i \ne j$, $i, j = \overline{1, p_k}$, $k \ge 1$, and $\rho_{lj0} \in G_{ki}$ iff $\rho_{lj1} \in G_{ki}$. For any such partition, we define the numbers $\{\xi_k\}_{k \ge 1}$ and $\Xi$ by the formulas (4.9) and (4.8), respectively. For a collection $S$, we fix such a partition $\{G_{ki}\}_{k \ge 1, i = \overline{1, p_k}}$ that the value of $\Xi$ is minimal possible.

The following theorem gives local solvability and stability of Inverse Problem 1.4.

Theorem 7.1.
Let $\tilde L = L(\tilde Q(x), T, \tilde H)$ be a fixed boundary value problem in the general form, and let $\tilde S$ be the spectral data of $\tilde L$. Then there exists $\delta_0 > 0$ (depending only on $\tilde L$) such that for any data $S \in SD(\tilde S)$ such that $\Xi \le \delta_0$, there exist a unique matrix function $Q(x) = Q^\dagger(x) \in L_2((0, \pi); \mathbb{C}^{m \times m})$ and a unique matrix $H = H^\dagger$ such that the problem $L = L(Q(x), T, H)$ has the spectral data $S$ and
$$\|Q - \tilde Q\|_{L_2} \le C \, \Xi, \qquad \|H - \tilde H\| \le C \, \Xi. \eqno(7.1)$$
The constant $C$ in the estimates (7.1) depends only on $\tilde L$, $\delta_0$ and not on a particular choice of $S$.

Proof. Fix $\delta_1$ so that, for every $S \in SD(\tilde S)$ satisfying $\Xi \le \delta_1$, we have $G_n \cap G_k = \emptyset$, $n \ne k$, $n, k \ge 1$. By using the partition $\{G_k\}_{k \ge 1}$, we construct the Banach space $B$ and the operator $\tilde R(x)$, as it was described in Section 2. Then the estimates of Proposition 4.2 hold with a constant $C$ depending only on $\tilde L$, $n_0$ and $\delta_1$ and independent of $S$ and $x$. Consequently, one can choose $\delta_0 \in (0, \delta_1]$ such that $\Xi \le \delta_0$ implies $\|\tilde R(x)\|_{B \to B} \le \frac12$. Then the operator $(I + \tilde R(x))$ has a bounded inverse in $B$. Hence the main equation (2.12) constructed by $\tilde L$ and $S$ is uniquely solvable. Relying on the results of Section 6, we conclude that $S$ is the spectral data of a unique boundary value problem $L = L(Q(x), T, H)$. The estimates (7.1) easily follow from (6.16) and the estimate $\|\varepsilon\|_{L_2} \le C \, \Xi$ of Lemma 6.2.

Let us compare Theorems 7.1 and 3.3. Theorem 7.1 has a local nature, while Theorem 3.3 establishes global solvability of Inverse Problem 1.4. However, the advantage of Theorem 7.1 is that the Completeness condition is not required.

Theorem 7.1 yields the following corollary. Let $\{\lambda_{nk}, \alpha_{nk}\}_{n \ge 1, k = \overline{1,m}}$ be the spectral data of the problem $L\big(\tfrac{\tilde\Omega}{\pi}, T, \tilde H\big)$. For every $N \in \mathbb{N}$, define the data $S_N = \{\lambda_{nk}^N, \alpha_{nk}^N\}_{n \ge 1, k = \overline{1,m}}$, $N \in \mathbb{N}$, as follows:
$$\lambda_{nk}^N = \begin{cases} \tilde\lambda_{nk}, & n \le N, \\ \lambda_{nk}, & n > N, \end{cases} \qquad \alpha_{nk}^N = \begin{cases} \tilde\alpha_{nk}, & n \le N, \\ \alpha_{nk}, & n > N. \end{cases}$$
Obviously, $S_N \in SD(\tilde S)$ for sufficiently large $N$.

Corollary 7.2.
Let $\tilde L = L(\tilde Q(x), T, \tilde H)$ be a fixed boundary value problem in the general form, and let $\tilde S$ be the spectral data of $\tilde L$. Then for sufficiently large $N$ the data $S_N$ are the spectral data of a certain boundary value problem $L_N = L(Q_N(x), T, H_N)$ in the general form, and
$$\lim_{N \to \infty} \|Q_N - \tilde Q\|_{L_2} = 0, \qquad \lim_{N \to \infty} \|H_N - \tilde H\| = 0.$$
Corollary 7.2 is important for the numerical solution of Inverse Problem 1.4, since only a finite number of spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{n = \overline{1,N}, k = \overline{1,m}}$ is usually available in practice.

Theorem 7.1 also yields the following theorem.

Theorem 7.3.
Let $\tilde L = L(\tilde Q(x), T, \tilde H)$ be a fixed boundary value problem in the graph case, and let $\tilde S$ be the spectral data of $\tilde L$. Then there exists $\delta_0 > 0$, depending only on $\tilde L$, such that for any data $S \in SD(\tilde S)$, such that $\Xi \le \delta_0$ and the solution of the main equation (2.12) is diagonal, there exists a unique real-valued matrix function $Q(x) = \operatorname{diag}\{q_j(x)\}_{j=1}^m \in L_2((0, \pi); \mathbb{C}^{m \times m})$ such that the problem $L = L(Q(x), T, H)$, $H = \tilde H$, has the spectral data $S$, and
$$\|q_j - \tilde q_j\|_{L_2} \le C \, \Xi, \quad j = \overline{1,m}. \eqno(7.2)$$
The constant $C$ in the estimate (7.2) depends only on $\tilde L$, $\delta_0$ and not on a particular choice of $S$.
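Before discussing Theorem 7.3 further, it may help to record why the componentwise estimates (7.2) are the natural graph-case form of the matrix estimate (7.1). The following one-line computation is our own remark, not a statement from the paper, and it assumes the Frobenius-type $L_2$-norm on matrix functions. Since $H = \tilde H$ and $Q - \tilde Q = \operatorname{diag}\{q_j - \tilde q_j\}_{j=1}^m$ in the graph case,

```latex
\|q_j - \tilde q_j\|_{L_2(0,\pi)}
  \le \Big( \sum_{i=1}^{m} \|q_i - \tilde q_i\|_{L_2(0,\pi)}^{2} \Big)^{1/2}
  = \|Q - \tilde Q\|_{L_2((0,\pi);\,\mathbb{C}^{m \times m})}
  \le C \, \Xi, \qquad j = \overline{1,m},
```

so each edge potential $q_j$ inherits the stability bound from the matrix-level estimate of Theorem 7.1.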
Solvability condition in Theorem 7.3, so only
Diagonality is additionally required. The equality H = ˜ H in the graph case follows fromthe spectral data asymptotics. Acknowledgement.
This work was supported by Grant 19-71-00009 of the Russian ScienceFoundation.