Direct and inverse problems for the matrix Sturm-Liouville operator with general self-adjoint boundary conditions
Natalia P. Bondarenko

Abstract.
The matrix Sturm-Liouville operator on a finite interval with boundary conditions in the general self-adjoint form and with a singular potential of class $W_2^{-1}$ is studied. This operator generalizes Sturm-Liouville operators on geometrical graphs. We investigate structural and asymptotical properties of the spectral data (eigenvalues and weight matrices) of this operator. Furthermore, we prove the uniqueness of recovering the operator from its spectral data, by using the method of spectral mappings.

Keywords: matrix Sturm-Liouville operator; singular potential; Sturm-Liouville operators on graphs; eigenvalue asymptotics; Riesz-basicity of eigenfunctions; inverse problem; uniqueness theorem.
AMS Mathematics Subject Classification (2010):
The paper is concerned with the spectral theory of matrix Sturm-Liouville operators given by the differential expression $\ell Y = -Y'' + Q(x) Y$, where $Q(x) = [q_{jk}(x)]_{j,k=1}^m$ is a matrix function called the potential. Such operators generalize scalar Sturm-Liouville operators, which have been studied fairly completely (see, e.g., the monographs [1-3]). In this paper, we consider the matrix Sturm-Liouville operator with boundary conditions in the general self-adjoint form. This operator is of interest, since it generalizes Sturm-Liouville operators on metric graphs. The latter operators are used for modeling various processes in graph-like structures in organic chemistry, mesoscopic physics, nanotechnology, microelectronics, and other applications (see [4-7] and references therein).

In order to state the problem, denote by $\mathbb{C}^m$ and $\mathbb{C}^{m \times m}$ the spaces of complex $m$-vectors and $(m \times m)$-matrices, respectively. The notations $L_2((0,\pi); \mathbb{C}^m)$ and $L_2((0,\pi); \mathbb{C}^{m \times m})$ are used for the spaces of $m$-vector functions and $(m \times m)$-matrix functions, respectively, with elements from $L_2(0,\pi)$.

Consider the matrix Sturm-Liouville equation
$$-Y'' + Q(x) Y = \lambda Y, \quad x \in (0, \pi), \qquad (1.1)$$
where $Y = [y_j(x)]_{j=1}^m$ is a vector function, $\lambda$ is the spectral parameter, and $Q(x)$ is an $(m \times m)$ Hermitian matrix function with elements of class $W_2^{-1}(0,\pi)$, that is, $Q(x) = \sigma'(x)$, $\sigma \in L_2((0,\pi); \mathbb{C}^{m \times m})$, $\sigma(x) = (\sigma(x))^\dagger$, where the symbol "$\dagger$" denotes the conjugate transpose. The derivatives of $L_2$-functions are understood in the sense of distributions. Equation (1.1) can be rewritten in the form
$$\ell Y := -(Y^{[1]})' - \sigma(x) Y^{[1]} - \sigma^2(x) Y = \lambda Y, \quad x \in (0, \pi), \qquad (1.2)$$
where $Y^{[1]}(x) := Y'(x) - \sigma(x) Y(x)$ is the quasi-derivative.
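To make the regularization in (1.2) concrete, here is a minimal symbolic sketch (not from the paper; the particular $\sigma$ and $y$ below are arbitrary smooth choices) checking that, for a smooth $\sigma$ and in the scalar case $m = 1$, the quasi-derivative form reproduces $-y'' + Q y$ with $Q = \sigma'$:

```python
# Scalar (m = 1) symbolic check that the regularized form (1.2),
#   -(y^{[1]})' - sigma*y^{[1]} - sigma^2*y,  with  y^{[1]} = y' - sigma*y,
# equals -y'' + Q*y with Q = sigma' whenever sigma is smooth.
import sympy as sp

x = sp.symbols('x')
sigma = sp.sin(x) * sp.exp(-x)      # arbitrary smooth antiderivative of Q
y = sp.cos(3 * x) + x**2            # arbitrary smooth test function

q = sp.diff(sigma, x)               # Q = sigma'
y1 = sp.diff(y, x) - sigma * y      # quasi-derivative y^{[1]}

regularized = -sp.diff(y1, x) - sigma * y1 - sigma**2 * y   # form (1.2)
classical = -sp.diff(y, x, 2) + q * y                        # form (1.1)

assert sp.simplify(regularized - classical) == 0
```

The point of (1.2) is that it remains meaningful when $\sigma$ is merely square-integrable, while $-y'' + \sigma' y$ does not literally make sense pointwise.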
Direct and inverse problem theory for scalar operators of the form (1.2) has been developed in [8-20] and other studies.

Denote by $L$ the boundary value problem for equation (1.2) with the boundary conditions
$$V_1(Y) := T_1 (Y^{[1]}(0) - H_1 Y(0)) - T_1^\perp Y(0) = 0, \qquad (1.3)$$
$$V_2(Y) := T_2 (Y^{[1]}(\pi) - H_2 Y(\pi)) - T_2^\perp Y(\pi) = 0. \qquad (1.4)$$
Here $T_j, T_j^\perp, H_j \in \mathbb{C}^{m \times m}$; the $T_j$ are orthogonal projection matrices, that is, $T_j = T_j^\dagger = T_j^2$; $T_j^\perp = I - T_j$; $I$ is the unit matrix in $\mathbb{C}^{m \times m}$; and $H_j = H_j^\dagger = T_j H_j T_j$, $j = 1, 2$. Under these assumptions, the problem $L$ is self-adjoint. We observe that, in the special cases $T_j = 0$ and $T_j = I$, the corresponding boundary condition turns into the Dirichlet and the Robin boundary condition, respectively.

Denote by $D(L)$ the space of $m$-vector functions $Y(x)$ such that the elements of $Y(x)$, $Y^{[1]}(x)$ are absolutely continuous on $[0, \pi]$, $(Y^{[1]})' \in L_2((0,\pi); \mathbb{C}^m)$, and $Y(x)$ satisfies (1.3), (1.4). The problem $L$ is associated with the matrix Sturm-Liouville operator given by the differential expression $\ell Y$ on the domain $D(L)$.

There is an extensive literature devoted to matrix Sturm-Liouville operators. Asymptotic formulas and some other properties of spectral data have been obtained in [21-23] for second-order matrix operators and in [24] for fourth-order matrix operators. The uniqueness of recovering matrix Sturm-Liouville operators on a finite interval from various spectral characteristics has been proved in [25-30]. In [31], a constructive algorithm was suggested for solving these inverse problems. Later on, the spectral data characterization for matrix Sturm-Liouville operators was obtained in [32-35]. Most of the mentioned studies concern operators with the Dirichlet boundary conditions $Y(0) = Y(\pi) = 0$ or the Robin boundary conditions $Y'(0) - H_1 Y(0) = Y'(\pi) + H_2 Y(\pi) = 0$. The boundary conditions (1.3)-(1.4) appear to be more difficult for investigation, because of the more complicated behavior of the spectrum and of the structural properties. We know of only one paper [30] on inverse problems for the matrix Sturm-Liouville operator with general self-adjoint boundary conditions, and that paper is limited to uniqueness theorems (for a square-integrable potential). It is also worth mentioning that direct and inverse scattering problems for the matrix Sturm-Liouville operator on the half-line with a boundary condition analogous to (1.3) have been investigated in [36, 37].
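Between the extreme cases $T_j = 0$ (Dirichlet) and $T_j = I$ (Robin) lie rank-deficient projectors, which arise from the graph interpretation (cf. the reduction to the matrix form described in the Appendix). The following numerical sketch is illustrative and not from the paper: for a star graph with $m$ edges parametrized away from the central vertex, continuity plus Kirchhoff conditions at that vertex correspond, in the form (1.3) with $H_1 = 0$, to the rank-one projector onto the constant vector.

```python
# Illustrative sketch: star-graph matching conditions as a condition of the
# form (1.3) with H1 = 0 and T1 a rank-one orthogonal projector.
import numpy as np

m = 4
e = np.ones((m, 1)) / np.sqrt(m)
T1 = e @ e.T                  # orthogonal projector: T1 = T1^T = T1^2
T1p = np.eye(m) - T1

# Projector identities assumed in (1.3).
assert np.allclose(T1, T1.T) and np.allclose(T1, T1 @ T1)

# V1(Y) = T1 Y'(0) - T1p Y(0) = 0 encodes:
#   T1p Y(0) = 0  <=>  Y(0) is constant across edges (continuity),
#   T1 Y'(0) = 0  <=>  the edge derivatives sum to zero (Kirchhoff).
Y0 = 2.5 * np.ones(m)                    # continuous vertex value
dY0 = np.array([1.0, -2.0, 0.5, 0.5])    # derivatives summing to zero

V1 = T1 @ dY0 - T1p @ Y0
print(np.allclose(V1, 0))  # True
```

Here the quasi-derivative coincides with $Y'$ at the vertex since the example takes $\sigma$ negligible there; the choice of $m$ and the sample vectors is arbitrary.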
Those studies generalize the approach of Agranovich and Marchenko [38], who considered the scattering problem with the Dirichlet boundary condition. However, we find inverse problems for matrix Sturm-Liouville operators on the half-line to be easier for investigation than the analogous problems on a finite interval, since the former have a bounded set of eigenvalues and there are no difficulties caused by the asymptotic behavior of the spectrum.

The majority of studies on matrix Sturm-Liouville operators deal with the case of regular potentials of class $L_2$. Inverse problems for matrix Sturm-Liouville operators with singular potentials of class $W_2^{-1}$ on a finite interval were considered only in the paper [33] by Mykytyuk and Trush. However, the authors of [33] studied the operator in the form $-\left(\frac{d}{dx} + \tau\right)\left(\frac{d}{dx} - \tau\right) Y$, where $\tau$ is a square-integrable matrix function. This form differs from (1.2) and can be conveniently reduced to a Dirac-type operator. The differential expression of Mykytyuk and Trush can be written in the form $-Y'' + Q(x) Y$ with the Miura potential $Q = \tau' + \tau^2$ in the case of Dirichlet boundary conditions (but cannot for Neumann ones). The operator considered in our paper is a natural generalization of the scalar Sturm-Liouville operators in the form $-(y^{[1]})' - \sigma y^{[1]} - \sigma^2 y$. Thus, the results of [33] concern another operator, so our results are novel even in the case of the Dirichlet boundary conditions.

We also mention that the reduction to Dirac-type operators analogous to [33] was applied to matrix Sturm-Liouville operators with singular potentials on the half-line and on the line by Eckhardt et al. [39, 40]. Some fundamental properties (maximal and minimal operators, deficiency indices, self-adjoint extensions, etc.) for matrix operators in our form (1.2) and in more general forms have been investigated by Weidmann [41] and by Mirzoev and Safonova [42, 43].

The goal of this paper is two-fold.
First, we investigate properties of the spectral data (eigenvalues and weight matrices) of the problem $L$. Our spectral data generalize the classical spectral data $\{\lambda_n, \alpha_n\}_{n \ge 1}$ of the scalar Sturm-Liouville operator $-y'' + q(x) y$, $q \in L_2(0,\pi)$, with the boundary conditions $y(0) = y(\pi) = 0$, where $\{\lambda_n\}_{n=1}^\infty$ are the eigenvalues, $\{y_n(x)\}_{n=1}^\infty$ are the eigenfunctions normalized by the condition $y_n'(0) = 1$, and
$$\alpha_n := \left( \int_0^\pi y_n^2(x)\, dx \right)^{-1}, \quad n \ge 1,$$
(see [1, 3]). The rigorous definition of the spectral data for the matrix Sturm-Liouville operator is provided in Section 2. We study the structure and the asymptotic behavior of the spectral data, and also prove the completeness and the Riesz-basis property of a special sequence of vector functions constructed from the eigenvalues and the columns of the weight matrices. Such sequences play an important role in the spectral data characterization of matrix Sturm-Liouville operators (see [32, 33, 35]). As a corollary, we show that the sequence of vector eigenfunctions of the problem $L$ is a Riesz basis. In particular, all these results are valid for Sturm-Liouville operators on graphs with singular potentials and with rationally dependent edge lengths.

Second, we study the inverse problem that consists in recovering the potential and the boundary condition coefficients of the problem $L$ from the spectral data. We prove the corresponding uniqueness theorem, by developing the ideas of the method of spectral mappings [3, 31, 44]. We also discuss the reconstruction of the potential from the Weyl matrix, consider the case of a square-integrable potential $Q(x)$, and compare our theorems with the known results. In the sequel study [45], our approach gives a constructive solution of the inverse problem and the characterization of the spectral data.

The paper is organized as follows.
In Section 2, we introduce the notions of the Weyl matrix and the weight matrices and study structural properties of the spectral characteristics. In Section 3, the spectral data of the problem $L$ with $\sigma = H_1 = H_2 = 0$ are found explicitly. In Section 4, asymptotic formulas are derived for the eigenvalues, the weight matrices, and for solutions of equation (1.2). In Section 5, we prove the completeness and the Riesz-basis property of a special sequence of vector functions related to the problem $L$. In Section 6, the inverse problems are studied and the corresponding uniqueness theorems are obtained. In the Appendix, we describe the reduction of Sturm-Liouville eigenvalue problems on graphs to the matrix form (1.2)-(1.4).

Throughout the paper, we use the following notations:

• $\rho := \sqrt{\lambda}$, $\arg \rho \in [-\frac{\pi}{2}, \frac{\pi}{2})$ (unless stated otherwise), $\tau := \mathrm{Im}\, \rho$.

• We use the following vector norm in $\mathbb{C}^m$:
$$\| a \| = \left( \sum_{j=1}^m |a_j|^2 \right)^{1/2}, \quad a = [a_j]_{j=1}^m,$$
and the corresponding matrix norm $\| A \| = s_{\max}(A)$, where $s_{\max}(A)$ is the maximal singular value of $A$.

• The scalar product in the Hilbert space $L_2((0,\pi); \mathbb{C}^m)$ is defined as follows:
$$(Y, Z) = \int_0^\pi (Y(x))^\dagger Z(x)\, dx = \sum_{j=1}^m \int_0^\pi \overline{y_j(x)}\, z_j(x)\, dx, \qquad (1.5)$$
$$Y = [y_j(x)]_{j=1}^m, \quad Z = [z_j(x)]_{j=1}^m \in L_2((0,\pi); \mathbb{C}^m).$$

In this section, we introduce the notions of the Weyl matrix and the weight matrices and study the structure of the spectral characteristics of the problem $L$.

Lemma 2.1.
For any functions $Y, Z \in D(L)$, the relation $(\ell Y, Z) = (Y, \ell Z)$ holds. Thus, the operator induced by the differential expression $\ell$ and the boundary conditions (1.3), (1.4) is symmetric, its eigenvalues are real, and vector eigenfunctions corresponding to distinct eigenvalues are orthogonal in $L_2((0,\pi); \mathbb{C}^m)$.

Proof. Consider arbitrary vector functions $Y, Z \in D(L)$. Using (1.2), (1.5), and integration by parts, we obtain
$$(\ell Y, Z) = -\int_0^\pi ((Y^{[1]})')^\dagger Z\, dx - \int_0^\pi (Y^{[1]})^\dagger \sigma Z\, dx - \int_0^\pi Y^\dagger \sigma^2 Z\, dx$$
$$= -(Y^{[1]})^\dagger Z \Big|_0^\pi + \int_0^\pi (Y^{[1]})^\dagger Z^{[1]}\, dx - \int_0^\pi Y^\dagger \sigma^2 Z\, dx = \left( Y^\dagger Z^{[1]} - (Y^{[1]})^\dagger Z \right) \Big|_0^\pi + (Y, \ell Z). \qquad (2.1)$$
The boundary condition (1.3) yields
$$T_1^\perp Y(0) = 0, \quad T_1 Y^{[1]}(0) = H_1 Y(0).$$
Similar relations also hold for $Z$. Consequently, we have
$$(Y(0))^\dagger Z^{[1]}(0) - (Y^{[1]}(0))^\dagger Z(0) = (Y(0))^\dagger (T_1 + T_1^\perp) Z^{[1]}(0) - (Y^{[1]}(0))^\dagger (T_1 + T_1^\perp) Z(0)$$
$$= (Y(0))^\dagger H_1 Z(0) - (Y(0))^\dagger H_1 Z(0) = 0.$$
Analogously, the substitution at $x = \pi$ in (2.1) also vanishes, so relation (2.1) yields the claim.

Let $\varphi(x, \lambda)$ be the matrix solution of equation (1.2) satisfying the initial conditions $\varphi(0, \lambda) = T_1$, $\varphi^{[1]}(0, \lambda) = T_1^\perp + H_1$. Clearly, $V_1(\varphi) = 0$. For each fixed $x \in [0, \pi]$, the matrix functions $\varphi(x, \lambda)$ and $\varphi^{[1]}(x, \lambda)$ are entire in the $\lambda$-plane.

Lemma 2.2.
The eigenvalues of the boundary value problem $L$ coincide with the zeros of the characteristic function $\Delta(\lambda) := \det(V_2(\varphi(x, \lambda)))$, counted with their multiplicities. This means that, for every eigenvalue, the multiplicity of the zero of the analytic function $\Delta(\lambda)$ equals the number of linearly independent vector eigenfunctions corresponding to this eigenvalue.

Lemma 2.2 follows from the general theory of linear differential operators provided in the book of Naimark [46] (see Chapter I, § 2, p. 3 and Chapter III, § 1, p. 7). In addition, one can prove Lemma 2.2 similarly to [34, Lemma 3], [35, Lemma 5], or [30, Proposition 3.1].

Below, speaking about the roots of an analytic function or about the eigenvalues of some problem, we always count each value the number of times equal to its multiplicity.

Theorem 2.3.
The spectrum of $L$ is a countable set of real eigenvalues $\{\lambda_{nk}\}_{(n,k) \in J}$, numbered in non-decreasing order: $\lambda_{n_1 k_1} \le \lambda_{n_2 k_2}$ if $(n_1, k_1) < (n_2, k_2)$. The following asymptotic relation holds:
$$\rho_{nk} := \sqrt{\lambda_{nk}} = n + r_k + \kappa_{nk}, \quad (n,k) \in J, \quad \{\kappa_{nk}\} \in l_2, \qquad (2.2)$$
where
$$J := \{(n,k) : n \in \mathbb{N},\ k = 1, \dots, m\} \cup \{(0,k) : k = p^\perp + 1, \dots, m\}, \quad p^\perp := \dim(\mathrm{Ker}\, T_1 \cap \mathrm{Ker}\, T_2), \qquad (2.3)$$
and $\{r_k\}_{k=1}^m$ are the zeros of the function $w^0(\rho) := \det(W^0(\rho))$ on $[0, 1)$,
$$W^0(\rho) := (T_2 T_1 + T_2^\perp T_1^\perp) \sin \rho\pi + (T_2^\perp T_1 - T_2 T_1^\perp) \cos \rho\pi. \qquad (2.4)$$

Theorem 2.3 is proved in Section 4. Now we proceed to define the weight matrices. Consider the boundary condition
$$V_1^\perp(Y) := T_1 Y(0) + T_1^\perp Y^{[1]}(0) = 0. \qquad (2.5)$$
Let $\psi(x, \lambda)$ be the matrix solution of equation (1.2) satisfying the initial conditions $\psi(0, \lambda) = -T_1^\perp$, $\psi^{[1]}(0, \lambda) = T_1$. One can easily check that $V_1^\perp(\psi) = 0$, $V_1(\psi) = I$, $V_1^\perp(\varphi) = I$.

The Weyl solution of $L$ is the matrix solution $\Phi(x, \lambda)$ of equation (1.2) satisfying the boundary conditions $V_1(\Phi) = I$, $V_2(\Phi) = 0$. The matrix function $M(\lambda) := V_1^\perp(\Phi)$ is called the Weyl matrix of the problem $L$. The notion of the Weyl matrix generalizes the notion of the Weyl function, which is a natural spectral characteristic in inverse problem theory (see [1, 3]).

One can easily derive the relations
$$\Phi(x, \lambda) = \psi(x, \lambda) + \varphi(x, \lambda) M(\lambda), \qquad (2.6)$$
$$M(\lambda) = -(V_2(\varphi))^{-1} V_2(\psi), \qquad (2.7)$$
$$\Phi(x, \lambda) = \Psi(x, \lambda) (V_1(\Psi))^{-1}, \qquad (2.8)$$
where $\Psi(x, \lambda)$ is the solution of equation (1.2) under the initial conditions $\Psi(\pi, \lambda) = T_2$, $\Psi^{[1]}(\pi, \lambda) = T_2^\perp + H_2$. It follows from (2.6) and (2.7) that, for each fixed $x \in [0, \pi]$, the matrix functions $M(\lambda)$ and $\Phi(x, \lambda)$ are meromorphic in the $\lambda$-plane with poles at the eigenvalues of $L$.

Lemma 2.4.
All the poles of $M(\lambda)$ are simple, and the ranks of the residue matrices coincide with the multiplicities of the corresponding eigenvalues of $L$.

Lemma 2.4 can be proved similarly to [34, Lemma 4]. Denote
$$\alpha_{nk} := \mathop{\mathrm{Res}}_{\lambda = \lambda_{nk}} M(\lambda), \quad (n,k) \in J.$$
The matrices $\{\alpha_{nk}\}_{(n,k) \in J}$ are called the weight matrices, and the data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$ are called the spectral data of $L$.

Without loss of generality, below we assume that $H_1 = 0$. One can achieve this condition by applying the following transform:
$$\sigma(x) := \sigma(x) + H_1, \quad H_1 := 0, \quad H_2 := H_2 - T_2 H_1 T_2.$$
Obviously, this transform does not change the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$.

Now we proceed to study properties of the weight matrices. Note that, if $Y$ and $Z$ satisfy equation (1.2), then the matrix Wronskian
$$\langle Y^\dagger, Z \rangle = (Y(x, \overline{\lambda}))^\dagger Z^{[1]}(x, \lambda) - (Y^{[1]}(x, \overline{\lambda}))^\dagger Z(x, \lambda)$$
does not depend on $x$. Therefore, we obtain
$$\langle (\Phi(x, \overline{\lambda}))^\dagger, \Phi(x, \lambda) \rangle \big|_{x=0} = \langle (\Phi(x, \overline{\lambda}))^\dagger, \Phi(x, \lambda) \rangle \big|_{x=\pi}.$$
It follows from (2.6) that
$$\Phi(0, \lambda) = -T_1^\perp + T_1 M(\lambda), \quad \Phi^{[1]}(0, \lambda) = T_1 + T_1^\perp M(\lambda).$$
Consequently, $\langle \Phi^\dagger, \Phi \rangle |_{x=0} = (M(\overline{\lambda}))^\dagger - M(\lambda)$. Since $\Phi(x, \lambda)$ satisfies (1.4), it follows that $\langle \Phi^\dagger, \Phi \rangle |_{x=\pi} = 0$. We conclude that $M(\lambda) \equiv (M(\overline{\lambda}))^\dagger$. Hence, $\alpha_{nk} = \alpha_{nk}^\dagger$, $(n,k) \in J$.

Lemma 2.5.
The following relations hold for $(n,k), (l,j) \in J$:
$$V_2(\varphi(x, \lambda_{nk}))\, \alpha_{nk} = 0, \qquad (2.9)$$
$$\alpha_{nk} \int_0^\pi (\varphi(x, \lambda_{nk}))^\dagger \varphi(x, \lambda_{lj})\, dx\ \alpha_{lj} = \begin{cases} \alpha_{nk}, & \lambda_{nk} = \lambda_{lj}, \\ 0, & \lambda_{nk} \ne \lambda_{lj}. \end{cases} \qquad (2.10)$$

Lemma 2.5 can be proved analogously to [47, Lemma 2.2].

In this section, the problem (1.2)-(1.4) is considered in the case $\sigma = 0$ in $L_2((0,\pi); \mathbb{C}^{m \times m})$, $H_1 = H_2 = 0$. We agree to use the superscript $0$ for objects corresponding to this special case. One can easily show that
$$\varphi^0(x, \lambda) = \cos \rho x\, T_1 + \frac{\sin \rho x}{\rho}\, T_1^\perp, \quad \psi^0(x, \lambda) = \frac{\sin \rho x}{\rho}\, T_1 - \cos \rho x\, T_1^\perp, \qquad (3.1)$$
$$V_2(\varphi^0) = -(\rho T_2 + T_2^\perp)\, W^0(\rho)\, (T_1 + \rho^{-1} T_1^\perp), \qquad (3.2)$$
$$V_2(\psi^0) = (\rho T_2 + T_2^\perp)\, U^0(\rho)\, (\rho^{-1} T_1 + T_1^\perp), \qquad (3.3)$$
$$U^0(\rho) := (T_2 T_1 + T_2^\perp T_1^\perp) \cos \rho\pi + (T_2 T_1^\perp - T_2^\perp T_1) \sin \rho\pi, \qquad (3.4)$$
where $W^0(\rho)$ is defined in (2.4).

Let us find the eigenvalues of $L^0$. In view of Lemma 2.2 and (3.2), the square roots of the nonzero eigenvalues of the problem $L^0$ coincide with the nonzero zeros of the function $w^0(\rho) = \det(W^0(\rho))$. This function can be represented in the form
$$w^0(\rho) = (\sin \rho\pi)^{d_1} (\cos \rho\pi)^{d_2} P_{d_3}(\cos^2 \rho\pi), \qquad (3.5)$$
where $d_j$, $j = 1, 2, 3$, are non-negative integers, $P_{d_3}(x)$ is a polynomial of degree $d_3$, $P_{d_3}(0) \ne 0$, $P_{d_3}(1) \ne 0$, and $d_1 + d_2 + 2 d_3 = m$. Consequently, the function $w^0(\rho)$ is either periodic or antiperiodic with period 1. Note that the polynomial $P_{d_3}(x)$ has exactly $d_3$ roots on $(0, 1)$; otherwise, $w^0(\rho)$ has non-real roots, and this contradicts the self-adjointness of the problem $L^0$. Therefore, the function $w^0(\rho)$ has exactly $m$ roots on $[0, 1)$, which we denote by $\{r_k\}_{k=1}^m$, numbered in non-decreasing order. It follows from (3.5) that $w^0(\rho) = \pm w^0(1 - \rho)$, so for any $r_k \ne 0$ there exists $r_s = 1 - r_k$. The set of all zeros of $w^0(\rho)$ has the form
$$\rho^0_{nk} = n + r_k, \quad n \in \mathbb{Z}, \quad k = 1, \dots, m. \qquad (3.6)$$
Consequently, the non-zero eigenvalues of $L^0$ have the form $\lambda^0_{nk} = (\rho^0_{nk})^2$, $n \ge 0$, $k = 1, \dots, m$, $\rho^0_{nk} \ne 0$. Let us separately study the case $\lambda = 0$.

Lemma 3.1. The multiplicity of the eigenvalue $\lambda = 0$ of the problem $L^0$ equals $p := \dim(\mathrm{Ran}\, T_1 \cap \mathrm{Ran}\, T_2)$.

Proof. The eigenfunctions corresponding to the eigenvalue $\lambda = 0$ of the problem $L^0$ have the form $\varphi^0(x, 0)\, c$, where the vectors $c \in \mathbb{C}^m$ are such that
$$V_2(\varphi^0(x, 0))\, c = 0. \qquad (3.7)$$
Using (2.4) and (3.2), we get
$$V_2(\varphi^0(x, 0)) = T_2 T_1^\perp - T_2^\perp T_1 - \pi T_2^\perp T_1^\perp.$$
Clearly, for any $c \in \mathrm{Ran}\, T_1 \cap \mathrm{Ran}\, T_2$, relation (3.7) holds. Let us show that there are no other such $c$. Suppose that $c = c_1 + c_2 + c_3$, where $c_1$ and $c_2$ belong to the subspaces $\mathrm{Ran}\, T_1 \cap \mathrm{Ran}\, T_2$ and $\mathrm{Ker}\, T_1 \cap \mathrm{Ker}\, T_2$, respectively, and $c_3$ is orthogonal to both these subspaces. Then
$$V_2(\varphi^0(x, 0))\, c = (T_2 T_1^\perp - T_2^\perp T_1 - \pi T_2^\perp T_1^\perp)\, c_3 - \pi c_2. \qquad (3.8)$$
If (3.7) holds, then, taking the scalar product of (3.8) with $c_2$ and using $(T_2^\perp T_1^\perp c_3, c_2) = (c_3, c_2) = 0$, we get
$$(c_2, c_2) = \pi^{-1} ((T_2 T_1^\perp - T_2^\perp T_1)\, c_3, c_2) = \pi^{-1} (c_3, (T_1^\perp T_2 - T_1 T_2^\perp)\, c_2) = 0.$$
Hence, $c_2 = 0$. It then follows from (3.7) and (3.8) that $c_3 = 0$. Indeed, applying $T_2$ and $T_2^\perp$ to (3.8) with $c_2 = 0$, we get $T_2 T_1^\perp c_3 = 0$ and $T_2^\perp (T_1 + \pi T_1^\perp) c_3 = 0$, whence $\pi \| T_1^\perp c_3 \|^2 = ((T_1 + \pi T_1^\perp) c_3, T_1^\perp c_3) = 0$, so $c_3 \in \mathrm{Ran}\, T_1 \cap \mathrm{Ran}\, T_2$ and, being orthogonal to this subspace, $c_3 = 0$. Consequently, the number of linearly independent vectors $c$ satisfying (3.7) equals $p$. This yields the claim.

Lemma 3.2.
The multiplicity of the zero $\rho = 0$ of the function $w^0(\rho)$ equals $p + p^\perp$, where $p^\perp := \dim(\mathrm{Ker}\, T_1 \cap \mathrm{Ker}\, T_2)$.

Proof. Relying on Lemma 2.2, one can show that the desired multiplicity equals $\dim(\mathrm{Ker}\, W^0(0))$. Let $c \in \mathrm{Ker}\, W^0(0)$. Represent $c$ in the form $c = c_1 + c_2$, $c_1 \in \mathrm{Ran}\, T_1$, $c_2 \in \mathrm{Ker}\, T_1$. Using (2.4), we get
$$W^0(0)\, c = (T_2^\perp T_1 - T_2 T_1^\perp)(c_1 + c_2) = T_2^\perp c_1 - T_2 c_2 = 0.$$
This implies $T_2^\perp c_1 = T_2 c_2 = 0$, i.e., $c_1 \in \mathrm{Ran}\, T_2$, $c_2 \in \mathrm{Ker}\, T_2$. Therefore,
$$\mathrm{Ker}\, W^0(0) = (\mathrm{Ran}\, T_1 \cap \mathrm{Ran}\, T_2) \oplus (\mathrm{Ker}\, T_1 \cap \mathrm{Ker}\, T_2).$$
Thus, $\dim(\mathrm{Ker}\, W^0(0)) = p + p^\perp$.

Lemmas 3.1 and 3.2 together with the arguments above them yield the assertion of Theorem 2.3 for $L^0$ with $\kappa_{nk} = 0$, $(n,k) \in J$.

Using (2.4) and (3.5), we obtain the important estimate
$$\| (W^0(\rho))^{-1} \| \le C_\delta \exp(-|\tau| \pi), \quad \rho \in G_\delta, \qquad (3.9)$$
where
$$G_\delta := \{ \rho \in \mathbb{C} : |\rho - \rho^0_{nk}| \ge \delta,\ n \in \mathbb{Z},\ k = 1, \dots, m \}, \quad \delta > 0, \qquad (3.10)$$
and the constant $C_\delta$ depends on $\delta$.

Let us proceed to finding the weight matrices $\{\alpha^0_{nk}\}$. Substituting (3.2) and (3.3) into (2.7), we obtain
$$M^0(\lambda) = (T_1 + \rho T_1^\perp)\, E^0(\rho)\, (\rho^{-1} T_1 + T_1^\perp), \quad E^0(\rho) := (W^0(\rho))^{-1} U^0(\rho). \qquad (3.11)$$
It follows from (2.4) and (3.4) that the matrix function $E^0(\rho)$ is 1-periodic and meromorphic in $\rho$ with poles at $\rho = \rho^0_{nk}$. For $\rho^0_{nk} \ne 0$, we have
$$\alpha^0_{nk} = \mathop{\mathrm{Res}}_{\lambda = \lambda^0_{nk}} M^0(\lambda) = 2 \mathop{\mathrm{Res}}_{\rho = \rho^0_{nk}} (T_1 + \rho T_1^\perp)\, E^0(\rho)\, (T_1 + \rho T_1^\perp) = \frac{2}{\pi} (T_1 + \rho^0_{nk} T_1^\perp)\, A_k\, (T_1 + \rho^0_{nk} T_1^\perp), \qquad (3.12)$$
$$A_k := \pi \mathop{\mathrm{Res}}_{\rho = r_k} E^0(\rho), \quad k = 1, \dots, m. \qquad (3.13)$$
On the other hand,
$$\alpha^0_{nk} = 2 \mathop{\mathrm{Res}}_{\rho = -\rho^0_{nk}} (T_1 + \rho T_1^\perp)\, E^0(\rho)\, (T_1 + \rho T_1^\perp) = \frac{2}{\pi} (T_1 - \rho^0_{nk} T_1^\perp)\, A_s\, (T_1 - \rho^0_{nk} T_1^\perp), \qquad (3.14)$$
where $r_k + r_s = 1$ or $r_k = r_s = 0$, $\rho^0_{nk} \ne 0$. Comparing (3.12) and (3.14), we get
$$A_k = (T_1 - T_1^\perp)\, A_s\, (T_1 - T_1^\perp), \quad r_k + r_s = 1 \ \text{or} \ r_k = r_s = 0. \qquad (3.15)$$
The case $\rho^0_{nk} = 0$ is slightly different:
$$\alpha^0_{nk} = \frac{1}{\pi}\, T_1 A_k T_1, \quad \rho^0_{nk} = 0. \qquad (3.16)$$
By Lemma 2.4, we have
$$\mathrm{rank}(A_k) = \#\{ s = 1, \dots, m : r_s = r_k \}. \qquad (3.17)$$
Lemmas 2.4, 3.1, and 3.2 together with relations (3.15), (3.16) imply
$$A_1 = T_1 A_1 T_1 + T_1^\perp A_1 T_1^\perp, \quad \mathrm{rank}(T_1 A_1 T_1) = p, \quad \mathrm{rank}(T_1^\perp A_1 T_1^\perp) = p^\perp, \quad \text{if } r_1 = 0. \qquad (3.18)$$
Using (2.10) for $\alpha^0_{nk}$, (3.1), (3.6), (3.12), and the relation $\alpha^0_{nk} = (\alpha^0_{nk})^\dagger$, we obtain
$$A_k = A_k^\dagger = A_k^2, \quad A_k A_s = 0 \ \text{for} \ r_k \ne r_s, \quad k, s = 1, \dots, m. \qquad (3.19)$$
Hence, $\{A_k\}_{k \in \mathcal{J}}$ are matrices of mutually orthogonal projectors, where $\mathcal{J} := \{1\} \cup \{k = 2, \dots, m : r_{k-1} \ne r_k\}$. In view of (3.17), we have
$$\sum_{k \in \mathcal{J}} A_k = I. \qquad (3.20)$$
Thus, we have explicitly described the weight matrices of the problem $L^0$.

The goal of this section is to derive asymptotic formulas for the eigenvalues and for the weight matrices of the problem $L$. We start with asymptotic formulas for solutions of equation (1.2). Let $S(x,\lambda)$ and $C(x,\lambda)$ be the matrix solutions of equation (1.2) satisfying the initial conditions
$$S(0,\lambda) = C^{[1]}(0,\lambda) = 0, \quad S^{[1]}(0,\lambda) = C(0,\lambda) = I.$$
The following theorem represents $S(x,\lambda)$, $C(x,\lambda)$ and their quasi-derivatives in terms of transformation operators. Such operators were introduced by Marchenko [1] for the classical case of regular potentials and play an important role in spectral theory.

Theorem 4.1. The following relations hold:
$$S(x,\lambda) = \frac{\sin \rho x}{\rho} + \int_0^x K_1(x,t)\, \frac{\sin \rho t}{\rho}\, dt,$$
$$S^{[1]}(x,\lambda) = \cos \rho x + \int_0^x K_2(x,t) \cos \rho t\, dt,$$
$$C(x,\lambda) = \cos \rho x + \int_0^x K_3(x,t) \cos \rho t\, dt,$$
$$C^{[1]}(x,\lambda) = -\rho \sin \rho x + \rho \int_0^x K_4(x,t) \sin \rho t\, dt + C(x),$$
where the matrix functions $K_j$, $j = 1, \dots, 4$, are square-integrable in the region $\{(x,t) : 0 < t < x < \pi\}$.

Proposition 4.2. Let $F(\rho)$ and $G(\rho)$ be matrix functions analytic in the disk $|\rho - a| \le r$ and satisfying the condition $\|G(\rho) F^{-1}(\rho)\| < 1$ on the boundary $|\rho - a| = r$. Then the scalar functions $\det(F)$ and $\det(F + G)$ have the same number of zeros inside the circle $|\rho - a| < r$.

Proof of Theorem 2.3. Step 1. Consider the functions $F(\rho) = \rho W^0(\rho)$ and $G(\rho) = \rho (W(\rho) - W^0(\rho))$, entire in the $\rho$-plane. Using (4.2) and (4.3), we obtain
$$W(\rho) - W^0(\rho) = o(\exp(|\tau| \pi)), \quad |\rho| \to \infty. \qquad (4.4)$$
The estimates (3.9) and (4.4) yield $\|G(\rho) F^{-1}(\rho)\| < 1$ for sufficiently large $|\rho|$, sufficiently small $\delta$, and $\rho \in G_\delta$ ($G_\delta$ is defined in (3.10)). Applying Proposition 4.2 to the contours $\{|\rho| = R\} \subset G_\delta$ and $\{|\rho - \rho^0_{nk}| = \delta\}$ with sufficiently large $R$ and $|n|$ and sufficiently small $\delta > 0$, we conclude that the functions $\rho^m \det(W^0(\rho))$ and $\rho^m \det(W(\rho))$ have the same number of zeros inside these contours. Hence, the function $\rho^m \det(W(\rho))$ has a countable set of zeros $\{\theta_k\}_{k=1}^m \cup \{\rho_{nk}\}_{n \in \mathbb{Z},\, k = 1,\dots,m}$. Since $\delta > 0$ can be chosen arbitrarily small, we have
$$\rho_{nk} = \rho^0_{nk} + \kappa_{nk}, \quad \kappa_{nk} = o(1), \quad n \to \pm\infty, \quad k = 1, \dots, m.$$

Step 2. Let us prove that $\{\kappa_{nk}\} \in l_2$. Fix $k \in \{1, \dots, m\}$. Using (2.4), (4.2), and the Taylor formula, we get
$$W(\rho_{nk}) = (-1)^n W^0(r_k) + (-1)^n \kappa_{nk} \dot W^0(r_k) + O(\kappa_{nk}^2) + K(\rho_{nk}), \qquad (4.5)$$
where $\dot W^0(\rho) = \frac{d}{d\rho} W^0(\rho)$. Using (4.3), we obtain
$$K(\rho_{nk}) = \int_{-\pi}^{\pi} P(t) \exp(i r_k t) \exp(i n t)\, dt + i \kappa_{nk} \int_{-\pi}^{\pi} t\, P(t) \exp(i r_k t) \exp(i n t)\, dt + O(\kappa_{nk}^2) + O(n^{-1}).$$
Thus,
$$K(\rho_{nk}) = O(\delta_{nk}) + O(\kappa_{nk}), \qquad (4.6)$$
where $\{\delta_{nk}\} \in l_2$ is some sequence of positive numbers. Since $\det(W(\rho_{nk})) = 0$, there exists a normalized vector $y_{nk} \in \mathbb{C}^m$ such that
$$W(\rho_{nk})\, y_{nk} = 0. \qquad (4.7)$$
Lemma 2.4 implies that $(W^0(\rho))^{-1}$ has a simple pole at $\rho = r_k$. Consequently, there exist matrices $R_{-1}$ and $R_0$ such that
$$(W^0(\rho))^{-1} = \left( W^0(r_k) + (\rho - r_k) \dot W^0(r_k) + \dots \right)^{-1} = \frac{R_{-1}}{\rho - r_k} + R_0 + \dots.$$
In particular,
$$R_{-1} W^0(r_k) = 0, \quad R_{-1} \dot W^0(r_k) + R_0 W^0(r_k) = I. \qquad (4.8)$$
Combining (4.5), (4.6), and (4.7), we obtain
$$(\kappa_{nk}^{-1} R_{-1} + R_0)\left( W^0(r_k) + \kappa_{nk} \dot W^0(r_k) + O(\kappa_{nk}^2) + O(\delta_{nk}) \right) y_{nk} = 0.$$
Using (4.8), we derive
$$\kappa_{nk} (y_{nk} + o(1)) = O(\delta_{nk}), \quad n \to \pm\infty.$$
Since $\|y_{nk}\| = 1$, we get $\kappa_{nk} = O(\delta_{nk})$, so $\{\kappa_{nk}\} \in l_2$. The above arguments are only valid for $\kappa_{nk} \ne 0$; the case $\kappa_{nk} = 0$ is trivial.

Step 3.
Using (3.2), (4.2), and the results of Steps 1-2, we conclude that the functions $\Delta(\lambda) = \det(V_2(\varphi))$ and $\Delta^0(\lambda) = \det(V_2(\varphi^0))$ have the same number of zeros in any sufficiently large disk $\{|\lambda| \le R\}$ such that $\sqrt{R} \ne n + r_k$, $n \in \mathbb{N}$, $k = 1, \dots, m$, and that the asymptotics (2.2) holds for the zeros $\{\lambda_{nk}\}_{(n,k) \in J}$ of $\Delta(\lambda)$. Taking Lemma 2.2 into account, we arrive at the claim of the theorem.

Let us obtain the asymptotics of the weight matrices $\{\alpha_{nk}\}$. Using (2.7) and Theorem 4.1, we get
$$M(\lambda) = (T_1 + \rho T_1^\perp)\, E(\rho)\, (\rho^{-1} T_1 + T_1^\perp), \qquad (4.9)$$
$$E(\rho) := (W(\rho))^{-1} U(\rho), \quad U(\rho) = U^0(\rho) + K(\rho),$$
where $K(\rho)$ has the form (4.3).

Further, we need additional notation. Let $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$ be a group of multiple eigenvalues maximal by inclusion, $(n_1, k_1) < (n_2, k_2) < \dots < (n_r, k_r)$. Clearly, $\alpha_{n_1 k_1} = \alpha_{n_2 k_2} = \dots = \alpha_{n_r k_r}$. Define $\alpha'_{n_1 k_1} := \alpha_{n_1 k_1}$ and $\alpha'_{n_j k_j} := 0$, $j = 2, \dots, r$. We obtain the sequence of matrices $\{\alpha'_{nk}\}_{(n,k) \in J}$. Below, the notation $\{K_{nk}\}$ is used for various matrix sequences such that $\{\|K_{nk}\|\} \in l_2$.

Theorem 4.3. The weight matrices are Hermitian and non-negative definite: $\alpha_{nk} = \alpha_{nk}^\dagger \ge 0$, $(n,k) \in J$. For each $(n,k) \in J$, $\mathrm{rank}(\alpha_{nk})$ equals the multiplicity of the eigenvalue $\lambda_{nk}$. Furthermore, the following asymptotic formula holds:
$$\alpha_n^{(k)} := \sum_{\substack{s = 1,\dots,m \\ r_s = r_k}} \alpha'_{ns} = \frac{2}{\pi} (T_1 + n T_1^\perp)(A_k + K_{nk})(T_1 + n T_1^\perp), \quad n \ge 1, \quad k = 1, \dots, m, \qquad (4.10)$$
where $A_k$ is defined in (3.13).

Proof. It remains to prove (4.10), since all the other properties have been proved in Section 2. In particular, the non-negative definiteness of $\alpha_{nk}$ follows from (2.10).

Fix $k \in \{1, \dots, m\}$ and choose a sufficiently small $\delta > 0$. For sufficiently large $n$, the Residue Theorem and (4.9) imply
$$\alpha_n^{(k)} = \frac{1}{2\pi i} \oint_{|\sqrt{\lambda} - \rho^0_{nk}| = \delta} M(\lambda)\, d\lambda = \frac{1}{2\pi i} \oint_{|\rho - \rho^0_{nk}| = \delta} 2\rho\, (T_1 + \rho T_1^\perp)\, E(\rho)\, (\rho^{-1} T_1 + T_1^\perp)\, d\rho. \qquad (4.11)$$
Using (3.13), we obtain
$$\frac{1}{2\pi i} \oint_{|\rho - \rho^0_{nk}| = \delta} 2\rho\, (T_1 + \rho T_1^\perp)\, E^0(\rho)\, (\rho^{-1} T_1 + T_1^\perp)\, d\rho = \frac{2}{\pi} (T_1 + n T_1^\perp)(A_k + O(n^{-1}))(T_1 + n T_1^\perp). \qquad (4.12)$$
Let us estimate the difference
$$E(\rho) - E^0(\rho) = \left( (W(\rho))^{-1} - (W^0(\rho))^{-1} \right) U(\rho) + (W^0(\rho))^{-1} (U(\rho) - U^0(\rho)).$$
Note that
$$(W(\rho))^{-1} - (W^0(\rho))^{-1} = \left( (I + (W^0(\rho))^{-1} K(\rho))^{-1} - I \right) (W^0(\rho))^{-1}.$$
Taking (3.9) into account, we get the estimate
$$\| E(\rho) - E^0(\rho) \| \le C \| K(\rho) \| \exp(-|\tau| \pi), \quad \rho \in G_\delta.$$
Represent $\rho$ on the contour $|\rho - \rho^0_{nk}| = \delta$ in the form $\rho = n + r_k + z$, $|z| = \delta$. In view of (4.3), the sum
$$\sum_{n=1}^{\infty} \| K(n + r_k + z) \|^2$$
is bounded uniformly on $|z| = \delta$. Consequently, we obtain
$$\frac{1}{2\pi i} \oint_{|\rho - \rho^0_{nk}| = \delta} 2\rho\, (T_1 + \rho T_1^\perp)(E(\rho) - E^0(\rho))(\rho^{-1} T_1 + T_1^\perp)\, d\rho = (T_1 + n T_1^\perp)\, K_{nk}\, (T_1 + n T_1^\perp), \qquad (4.13)$$
where $\{\|K_{nk}\|\} \in l_2$. Combining (4.11), (4.12), and (4.13), we arrive at (4.10).

In addition, we obtain estimates for $\varphi(x,\lambda)$ and $\Phi(x,\lambda)$.

Lemma 4.4. The following relations hold:
$$\left.\begin{aligned} \varphi^{[\nu]}(x,\lambda) &= O(\rho^{\nu-1} \exp(|\tau| x))\, (\rho T_1 + T_1^\perp), \\ \varphi^{[\nu]}(x,\lambda) - \varphi^{0\,[\nu]}(x,\lambda) &= o(\rho^{\nu-1} \exp(|\tau| x))\, (\rho T_1 + T_1^\perp) \end{aligned}\right\} \qquad (4.14)$$
$$\left.\begin{aligned} \Phi^{[\nu]}(x,\lambda) &= O(\rho^{\nu-1} \exp(-|\tau| x))\, (T_1 + \rho T_1^\perp), \\ \Phi^{[\nu]}(x,\lambda) - \Phi^{0\,[\nu]}(x,\lambda) &= o(\rho^{\nu-1} \exp(-|\tau| x))\, (T_1 + \rho T_1^\perp), \quad \rho \in G_\delta, \end{aligned}\right\} \qquad (4.15)$$
for some $\delta > 0$, each fixed $x \in [0,\pi]$, and $\nu = 0, 1$, as $|\rho| \to \infty$. Here $y^{[0]} := y$.

Proof. Fix $x \in [0,\pi]$. Using (4.1) and Theorem 4.1, we get
$$\varphi(x,\lambda) = \left( \cos \rho x\, T_1 + \sin \rho x\, T_1^\perp + o(\exp(|\tau| x)) \right)(T_1 + \rho^{-1} T_1^\perp),$$
$$\varphi^{[1]}(x,\lambda) = \left( -\sin \rho x\, T_1 + \cos \rho x\, T_1^\perp + o(\exp(|\tau| x)) \right)(\rho T_1 + T_1^\perp),$$
as $|\rho| \to \infty$. These asymptotics yield (4.14). Similar asymptotics hold for the solution $\Psi(x,\lambda)$ appearing in (2.8):
$$\Psi(x,\lambda) = \left( \cos \rho(\pi - x)\, T_2 - \sin \rho(\pi - x)\, T_2^\perp + o(\exp(|\tau| (\pi - x))) \right)(T_2 + \rho^{-1} T_2^\perp),$$
$$\Psi^{[1]}(x,\lambda) = \left( \sin \rho(\pi - x)\, T_2 + \cos \rho(\pi - x)\, T_2^\perp + o(\exp(|\tau| (\pi - x))) \right)(\rho T_2 + T_2^\perp),$$
as $|\rho| \to \infty$.
These asymptotics imply
$$\left.\begin{aligned} \Psi^{[\nu]}(x,\lambda) &= O(\rho^{\nu-1} \exp(|\tau|(\pi - x)))\, (\rho T_2 + T_2^\perp), \\ \Psi^{[\nu]}(x,\lambda) - \Psi^{0\,[\nu]}(x,\lambda) &= o(\rho^{\nu-1} \exp(|\tau|(\pi - x)))\, (\rho T_2 + T_2^\perp) \end{aligned}\right\} \qquad (4.16)$$
as $|\rho| \to \infty$, $\nu = 0, 1$. Furthermore,
$$V_1(\Psi) = (\rho T_1 + T_1^\perp)\left( (W^0(\rho))^\dagger + K(\rho) \right)(T_2 + \rho^{-1} T_2^\perp).$$
Using (3.9), we get
$$\left.\begin{aligned} (V_1(\Psi))^{-1} &= (T_2 + \rho T_2^\perp)\, O(\exp(-|\tau| \pi))\, (\rho^{-1} T_1 + T_1^\perp), \\ (V_1(\Psi))^{-1} - (V_1(\Psi^0))^{-1} &= (T_2 + \rho T_2^\perp)\, o(\exp(-|\tau| \pi))\, (\rho^{-1} T_1 + T_1^\perp) \end{aligned}\right\} \qquad (4.17)$$
as $|\rho| \to \infty$, $\rho \in G_\delta$. Relation (2.8) implies
$$\Phi(x,\lambda) - \Phi^0(x,\lambda) = (\Psi(x,\lambda) - \Psi^0(x,\lambda))(V_1(\Psi))^{-1} + \Psi^0(x,\lambda)\left( (V_1(\Psi))^{-1} - (V_1(\Psi^0))^{-1} \right). \qquad (4.18)$$
Using (2.8) and (4.16)-(4.18), we arrive at (4.15).

Completeness and Riesz-basis property

In this section, we study the completeness and the Riesz-basis property of a special sequence of vector functions constructed from the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{(n,k) \in J}$. Such sequences play an important role in inverse problem theory for matrix Sturm-Liouville operators (see [32, 33, 35]).

Consider a group of multiple eigenvalues $\lambda_{n_1 k_1} = \lambda_{n_2 k_2} = \dots = \lambda_{n_r k_r}$, maximal by inclusion. Lemma 2.4 implies $\mathrm{rank}(\alpha_{n_1 k_1}) = r$. Define the matrices
$$T_{nk} := \begin{cases} T_1 + \rho_{nk} T_1^\perp, & \rho_{nk} \ne 0, \\ I, & \rho_{nk} = 0, \end{cases} \qquad B_{nk} := \frac{\pi}{2}\, T_{nk}^{-1}\, \alpha_{nk}\, T_{nk}^{-1}.$$
Clearly, $\mathrm{Ran}\, B_{n_1 k_1}$ is an $r$-dimensional subspace of $\mathbb{C}^m$. Choose an orthonormal basis $\{\mathcal{E}_{n_j k_j}\}_{j=1}^r$ in this subspace. This choice is non-unique; the assertions below are valid for any choice of the basis. Thus, we have defined the vector sequence $\{\mathcal{E}_{nk}\}_{(n,k) \in J}$. Consider the sequence of vector functions $\mathcal{Y} := \{Y_{nk}\}_{(n,k) \in J}$,
$$Y_{nk}(x) := \begin{cases} (\cos(\rho_{nk} x)\, T_1 + \sin(\rho_{nk} x)\, T_1^\perp)\, \mathcal{E}_{nk}, & \rho_{nk} \ne 0, \\ (T_1 + x T_1^\perp)\, \mathcal{E}_{nk}, & \rho_{nk} = 0. \end{cases} \qquad (5.1)$$

Theorem 5.1. The sequence $\mathcal{Y}$ is complete in $L_2((0,\pi); \mathbb{C}^m)$.

Proof. Let a vector function $h \in L_2((0,\pi); \mathbb{C}^m)$ be such that $(h, Y_{nk}) = 0$ for all $(n,k) \in J$. This implies
$$\int_0^\pi (h(x))^\dagger \left( \cos(\rho_{nk} x)\, T_1 + \frac{\sin(\rho_{nk} x)}{\rho_{nk}}\, T_1^\perp \right) \alpha_{nk}\, dx = 0, \quad (n,k) \in J.$$
Consequently, the row-vector function
$$\gamma(\lambda) := \int_0^\pi (h(x))^\dagger \left( \cos \rho x\, T_1 + \frac{\sin \rho x}{\rho}\, T_1^\perp \right) dx$$
is entire in $\lambda$ and has the following properties:

(i) $\gamma(\lambda_{nk})\, \alpha_{nk} = 0$, $(n,k) \in J$;

(ii) $\gamma(\lambda)(T_1 + \rho T_1^\perp) = o(\exp(|\tau| \pi))$, $|\rho| \to \infty$.

If the multiplicity of the eigenvalue $\lambda_{nk}$ is $m_{nk}$, then
$$\mathrm{rank}(\alpha_{nk}) = m_{nk}, \quad \mathrm{rank}(V_2(\varphi(x, \lambda_{nk}))) = m - m_{nk}.$$
Therefore, relation (2.9) and property (i) imply
$$\gamma(\lambda_{nk}) = D_{nk}\, V_2(\varphi(x, \lambda_{nk})), \quad D_{nk}^\dagger \in \mathbb{C}^m, \quad (n,k) \in J.$$
Consequently, the vector function $F(\lambda) := \gamma(\lambda)(V_2(\varphi(x,\lambda)))^{-1}$ is entire in $\lambda$. It follows from (3.9) and (4.2) that
$$(V_2(\varphi))^{-1} = (T_1 + \rho T_1^\perp)\, O(\exp(-|\tau| \pi))\, (\rho^{-1} T_2 + T_2^\perp), \quad \rho \in G_\delta, \quad |\rho| \to \infty.$$
Using this estimate and property (ii), we obtain $F(\lambda) = o(1)$ for $\rho \in G_\delta$, $|\rho| \to \infty$. Liouville's Theorem yields $F(\lambda) \equiv 0$, so $\gamma(\lambda) \equiv 0$. Hence, $h = 0$ in $L_2((0,\pi); \mathbb{C}^m)$, so the sequence $\mathcal{Y}$ is complete.

In particular, Theorem 5.1 yields that the following sequence $\mathcal{Y}^0$, related to the problem $L^0$, is complete in $L_2((0,\pi); \mathbb{C}^m)$:
$$\mathcal{Y}^0 := \{Y^0_{nk}\}_{(n,k) \in J}, \quad Y^0_{nk} := (\cos((n + r_k) x)\, T_1 + \sin((n + r_k) x)\, T_1^\perp)\, \mathcal{E}^0_k.$$
Here $\{\mathcal{E}^0_s\}_{s \in \mathcal{J}_k}$ is a fixed orthonormal basis in $\mathrm{Ran}\, A_k$ for $k \in \mathcal{J}$, where $\mathcal{J}_k := \{s = 1, \dots, m : r_s = r_k\}$ and $\mathcal{J} := \{1\} \cup \{s = 2, \dots, m : r_s \ne r_{s-1}\}$. We additionally require that $T_1 \mathcal{E}^0_k = 0$ for $k = 1, \dots, p^\perp$ and $T_1^\perp \mathcal{E}^0_k = 0$ for $k = p^\perp + 1, \dots, p^\perp + p$. The latter requirements can always be satisfied because of (3.18). Thus, $\{\mathcal{E}^0_k\}_{k=1}^m$ is an orthonormal basis in $\mathbb{C}^m$.

Our next goal is to show that $\mathcal{Y}^0$ is a Riesz basis. We will prove this fact for a sequence of a more general form, not necessarily related to the problem $L^0$. Let $T_1, T_2 \in \mathbb{C}^{m \times m}$ be arbitrary orthogonal projection matrices. Suppose that $J$ and $\{r_k\}_{k=1}^m$ are defined as in Theorem 2.3. Let $\{\rho_{nk}\}_{(n,k) \in J}$ be arbitrary complex numbers satisfying the asymptotics (2.2), and let $\{B_{nk}\}_{(n,k) \in J}$ be arbitrary matrices from $\mathbb{C}^{m \times m}$ such that $B_{nk} = B_{nk}^\dagger \ge 0$, $(n,k) \in J$.
For any group of multiple values $\rho_{n_1 k_1} = \rho_{n_2 k_2} = \dots = \rho_{n_r k_r}$, maximal by inclusion, $(n_1, k_1) < (n_2, k_2) < \dots < (n_r, k_r)$, we assume that $B_{n_1 k_1} = B_{n_2 k_2} = \dots = B_{n_r k_r}$ and $\mathrm{rank}(B_{n_1 k_1}) = r$. Denote $B'_{n_1 k_1} := B_{n_1 k_1}$ and $B'_{n_j k_j} := 0$, $j = 2, \dots, r$. Suppose that the following asymptotic relation holds:
$$B_n^{(k)} := \sum_{\substack{s = 1,\dots,m \\ r_s = r_k}} B'_{ns} = A_k + K_{nk}, \quad \{\|K_{nk}\|\} \in l_2, \quad (n,k) \in J, \qquad (5.2)$$
where $\{A_k\}_{k=1}^m$ are the orthogonal projection matrices defined by (3.13). Using these data $\{\rho_{nk}, B_{nk}\}_{(n,k) \in J}$, choose the basis $\{\mathcal{E}_{nk}\}$ and construct the sequence $\mathcal{Y}$ by (5.1). Then the following theorem holds.

Theorem 5.2. If the sequence $\mathcal{Y}$ is complete in $L_2((0,\pi); \mathbb{C}^m)$, then it is a Riesz basis in $L_2((0,\pi); \mathbb{C}^m)$.

For the proof of Theorem 5.2, we need the following propositions. (Proposition 5.3 is [49, Theorem 2.5.3], and Proposition 5.4 follows from [50, Theorem 3.6.6].)

Proposition 5.3. Let $A \in \mathbb{C}^{m \times n}$ and $k < \mathrm{rank}(A)$. Denote by $s_1 \ge s_2 \ge \dots \ge s_{\min(n,m)}$ the singular values of $A$. Then
$$\min_{\mathrm{rank}(B) = k} \| A - B \| = s_{k+1}.$$

Proposition 5.4. Let $\{f_n\}_{n=1}^\infty$ be a sequence in a Hilbert space $H$. The sequence $\{f_n\}_{n=1}^\infty$ is a Riesz basis in $H$ if and only if it is complete in $H$ and there exist constants $M_1, M_2 > 0$ such that, for every finite scalar sequence $\{b_n\}$, one has
$$M_1 \sum |b_n|^2 \le \Big\| \sum b_n f_n \Big\|^2 \le M_2 \sum |b_n|^2. \qquad (5.3)$$

Proof of Theorem 5.2. Instead of $\mathcal{Y}$, consider the sequence
$$\mathcal{Y}^N := \{Y_{nk}\}_{(n,k) \in J,\, n \le N} \cup \{\tilde Y_{nk}\}_{n > N,\, k = 1,\dots,m}, \quad N \in \mathbb{N},$$
where
$$\tilde Y_{nk}(x) := (\cos((n + r_k) x)\, T_1 + \sin((n + r_k) x)\, T_1^\perp)\, \tilde{\mathcal{E}}_{nk}, \quad \tilde{\mathcal{E}}_{nk} := A_k \mathcal{E}_{nk}.$$
We will show that $\mathcal{Y}^N$ is quadratically close to $\mathcal{Y}$ and that, for sufficiently large $N$, the sequence $\mathcal{Y}^N$ is a Riesz basis in $L_2((0,\pi); \mathbb{C}^m)$. This will imply that $\mathcal{Y}$ is also a Riesz basis.

Step 1. Let us prove that $\{\|\mathcal{E}_{nk} - \tilde{\mathcal{E}}_{nk}\|\} \in l_2$. This will imply that $\{\|Y_{nk} - \tilde Y_{nk}\|\} \in l_2$, that is, $\mathcal{Y}^N$ is quadratically close to $\mathcal{Y}$. Fix $k \in \mathcal{J}$.
Obviously, E_nk − Ẽ_nk = A_k^⊥ E_nk, where A_k^⊥ := I − A_k. Let E_n^{(k)} ∈ C^{m×|J_k|} be the matrix consisting of the columns {E_ns}_{s∈J_k}. By the definitions of {E_ns} and B_n^{(k)}, for each sufficiently large n, there exists a matrix w_n^{(k)} ∈ C^{|J_k|×m} such that B_n^{(k)} = E_n^{(k)} w_n^{(k)}. The asymptotics (5.2) implies B_n^{(k)} = O(1) as n → ∞. Since B_ns ≥ 0, we have B_ns = O(1), s ∈ J_k, n → ∞. The columns of E_n^{(k)} are normalized vectors, so w_n^{(k)} = O(1) as n → ∞. The asymptotic relation (5.2) implies E_n^{(k)} w_n^{(k)} = A_k + K_nk. Hence, A_k^⊥ E_n^{(k)} w_n^{(k)} (w_n^{(k)})^† = K_nk. Consider the minimal singular value s_min(w_n^{(k)}) of the matrix w_n^{(k)}. If s_min(w_n^{(k)}) ≥ δ > 0 for all sufficiently large n, then ‖(w_n^{(k)} (w_n^{(k)})^†)^{−1}‖ ≤ δ^{−2}. Therefore, A_k^⊥ E_n^{(k)} = K_nk, so {‖A_k^⊥ E_nk‖} ∈ l_2.
Step 2. Let us prove that s_min(w_n^{(k)}) ≥ δ > 0 for all sufficiently large n and a fixed k ∈ J. Suppose that, on the contrary, there exists a subsequence {n_j} such that s_min(w_{n_j}^{(k)}) → 0 as j → ∞. By virtue of Proposition 5.3, there exist matrices w̃_{n_j}^{(k)} ∈ C^{|J_k|×m} such that rank(w̃_{n_j}^{(k)}) < |J_k| and ‖w_{n_j}^{(k)} − w̃_{n_j}^{(k)}‖ → 0, j → ∞. Denote B̃_{n_j}^{(k)} := E_{n_j}^{(k)} w̃_{n_j}^{(k)}. Obviously, rank(B̃_{n_j}^{(k)}) < |J_k|. Note that
‖B_{n_j}^{(k)} − B̃_{n_j}^{(k)}‖ ≤ ‖E_{n_j}^{(k)}‖ ‖w_{n_j}^{(k)} − w̃_{n_j}^{(k)}‖ → 0,  j → ∞.
This together with (5.2) implies ‖B̃_{n_j}^{(k)} − A_k‖ → 0 as j → ∞. Proposition 5.3 then yields s_{|J_k|}(A_k) = 0, but rank(A_k) = |J_k|. This contradiction concludes the proof.
Step 3. Let us prove that the sequence Y^N is complete in L_2((0,π); C^m) for each sufficiently large N. For k ∈ J, let Ẽ_n^{(k)} ∈ C^{m×|J_k|} be the matrix consisting of the columns {Ẽ_ns}_{s∈J_k}. Similarly to Step 2, one can show that s_min(Ẽ_n^{(k)}) ≥ δ > 0 for all sufficiently large n, so rank(Ẽ_n^{(k)}) = |J_k|.
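The arguments in Steps 1 and 2 rest on singular-value estimates via Proposition 5.3 (the best low-rank approximation bound, i.e. the Eckart-Young theorem in the spectral norm), and Step 4 will use the frame inequality of Proposition 5.4. A small numerical sketch of both facts, with randomly generated matrices chosen only for illustration (not part of the paper's proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Proposition 5.3 (spectral-norm Eckart-Young): for k < rank(A),
# min over rank-k matrices B of ||A - B|| equals s_{k+1}(A).
A = rng.standard_normal((5, 4))
U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = 2
B_best = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]  # truncated SVD: the rank-k minimizer
err = np.linalg.norm(A - B_best, 2)
assert abs(err - s[k]) < 1e-12  # s[k] is s_{k+1} in 0-based indexing

# Proposition 5.4 for a finite system: inequality (5.3) holds with M_1, M_2
# equal to the extreme eigenvalues of the Gram matrix of the system.
F = np.eye(6) + 0.05 * rng.standard_normal((6, 6))  # columns: a perturbed orthonormal system
G = F.conj().T @ F                                  # Gram matrix <f_i, f_j>
eigs = np.linalg.eigvalsh(G)
M1, M2 = eigs[0], eigs[-1]
b = rng.standard_normal(6)
lhs = np.linalg.norm(F @ b) ** 2                    # ||sum_n b_n f_n||^2
assert M1 * np.sum(b**2) - 1e-10 <= lhs <= M2 * np.sum(b**2) + 1e-10
assert M1 > 0  # the lower frame bound stays positive for a small perturbation
```

The second part mirrors the quadratic-closeness argument: a system close enough to an orthonormal basis keeps a positive lower frame bound, hence remains a Riesz basis.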
Since Ẽ_ns ∈ Ran(A_k), s ∈ J_k, it follows that the vectors {E_s}_{s∈J_k} are linear combinations of {Ẽ_ns}_{s∈J_k}. Consequently, the vector functions {Y_ns^0}_{s∈J_k} are linear combinations of {Ỹ_ns}_{s∈J_k} for each sufficiently large fixed n and each k ∈ J. By Theorem 5.1, the sequence Y^0 = {Y_nk^0}_{(n,k)∈J} is complete in L_2((0,π); C^m). Therefore, the sequence Y^N is also complete for sufficiently large N.
Step 4. Let us prove that the sequence Y^N is a Riesz basis in L_2((0,π); C^m) for sufficiently large N, relying on the completeness of this sequence (Step 3) and on Proposition 5.4. It remains to prove inequality (5.3), which takes the form
M_1 Σ_{n,k} |b_nk|^2 ≤ ‖Σ_{n,k} b_nk Y_nk^N‖^2 ≤ M_2 Σ_{n,k} |b_nk|^2,  Y_nk^N := { Y_nk, n ≤ N; Ỹ_nk, n > N }.  (5.4)
The right-hand side of this inequality is obvious, since ‖Y_nk^N‖ ≤ √π for all (n,k) ∈ J. It follows from (3.19) that (Y_nk^N, Y_ls^N) = 0 if n ≠ l or r_k ≠ r_s. Hence,
‖Σ_{n,k} b_nk Y_nk^N‖^2 = Σ_n Σ_{k∈J} ‖Σ_{s∈J_k} b_ns Y_ns^N‖^2.
Consequently, in order to prove the left-hand side of (5.4), it is sufficient to show that
‖Σ_{s∈J_k} a_s Ẽ_ns‖^2 ≥ δ Σ_{s∈J_k} |a_s|^2,  ‖Σ_{s∈J_k} a_s E_s‖^2 ≥ δ Σ_{s∈J_k} |a_s|^2
for all {a_s}_{s∈J_k}, all n > N, k ∈ J, and some δ > 0. The inequality for {Ẽ_ns} follows from the estimate s_min(Ẽ_n^{(k)}) ≥ δ, which is valid for sufficiently large n. The inequality for {E_s} is obvious, since these vectors form an orthonormal basis. Thus, we have proved inequality (5.4), which yields the claim.
Remark.
Similarly to Theorem 5.1, it can be proved that the sequence F := {ϕ(x, λ_nk) T_nk E_nk}_{(n,k)∈J} of the vector eigenfunctions of L is complete in L_2((0,π); C^m). Since F is quadratically close to Y, it follows that F is a Riesz basis in L_2((0,π); C^m).

In this section, we consider the problem L = L(σ, T_1, T_2, H) of the form (1.2)-(1.4) with H_1 = 0 and prove the uniqueness theorem for the following inverse problem.
Inverse Problem 6.1. Given the spectral data {λ_nk, α_nk}_{(n,k)∈J}, find σ, T_1, T_2, H.
Along with L, consider another boundary value problem L̃ = L(σ̃, T̃_1, T̃_2, H̃) of the same form but with different coefficients. We agree that, if a symbol γ denotes an object related to L, then the symbol γ̃ with tilde denotes the analogous object related to L̃. Note that the quasi-derivatives for these two problems are supposed to be different: Y^{[1]} = Y' − σY for L and Y^{[1]} = Y' − σ̃Y for L̃. The goal of this section is to prove the following uniqueness theorem.
Theorem 6.2. If λ_nk = λ̃_nk, α_nk = α̃_nk, (n,k) ∈ J, J = J̃, then
σ(x) = σ̃(x) + H^⋄ a.e. on (0,π),  T_1 = T̃_1,  T_2 = T̃_2,  H = H̃ − T_2 H^⋄ T_2,  (6.1)
where
H^⋄ = (H^⋄)^† = T_1^⊥ H^⋄ T_1^⊥.  (6.2)
Thus, the spectral data {λ_nk, α_nk}_{(n,k)∈J} uniquely specify the problem L up to the transform (6.1) given by an arbitrary matrix H^⋄ satisfying (6.2).
Theorem 6.2 is a natural generalization of the known uniqueness results for m = 1 in the cases of the Dirichlet-Dirichlet, Dirichlet-Robin, Robin-Dirichlet, and Robin-Robin boundary conditions (see [12, 20]).
The converse is also true: any two problems L and L̃ satisfying (6.1) with an arbitrary matrix H^⋄ of the form (6.2) have equal spectral data. Indeed, in this case, we obtain M(λ) = M̃(λ) + H^⋄, so the poles and the residues of M(λ) and M̃(λ) coincide. This together with Theorem 6.2 immediately implies the following result.
Theorem 6.3. If M(λ) ≡ M̃(λ), then σ(x) = σ̃(x) a.e.
on (0,π), T_1 = T̃_1, T_2 = T̃_2, H = H̃. Thus, the Weyl matrix M(λ) uniquely specifies the problem L(σ, T_1, T_2, H).
For a regular potential Q ∈ L_2((0,π); C^{m×m}), Theorem 6.2 implies the uniqueness without ambiguity. Indeed, consider the boundary value problem X = X(Q, T_1, T_2, G_1, G_2) for equation (1.1) with Q ∈ L_2((0,π); C^{m×m}) and with the boundary conditions
V_1(Y) = T_1(Y'(0) − G_1 Y(0)) − T_1^⊥ Y(0) = 0,
V_2(Y) = T_2(Y'(π) − G_2 Y(π)) − T_2^⊥ Y(π) = 0,
where G_j ∈ C^{m×m}, G_j = G_j^† = T_j G_j T_j, j = 1, 2. These conditions are equivalent to (1.3) and (1.4) when G_1 = T_1 σ(0) T_1 (if H_1 = 0), G_2 = H + T_2 σ(π) T_2. Instead of (2.5), we consider the condition
V_1^⊥(Y) = T_1^⊥ Y'(0) + T_1 Y(0) = 0.
This condition is equivalent to (2.5) if T_1^⊥ σ(0) T_1^⊥ = 0, so it fixes the constant H^⋄ in Theorem 6.2. The spectral data {λ_nk, α_nk}_{(n,k)∈J} of X are defined similarly to the spectral data of L, by using the new V_1, V_2, and V_1^⊥. Along with X, consider the problem X̃ = X(Q̃, T̃_1, T̃_2, G̃_1, G̃_2). For the spectral data of X and X̃, the following uniqueness theorem directly follows from Theorem 6.2.
Theorem 6.4. If λ_nk = λ̃_nk, α_nk = α̃_nk, (n,k) ∈ J, J = J̃, then Q(x) = Q̃(x) a.e. on (0,π), T_1 = T̃_1, T_2 = T̃_2, G_1 = G̃_1, G_2 = G̃_2.
Theorem 6.4 improves the results of [30] and generalizes the uniqueness results from [28].
Before the proof of Theorem 6.2, let us discuss the construction of the Weyl matrix M(λ) from the spectral data. Fix an arbitrary real ω such that λ_nk + ω ≠ 0 for all (n,k) ∈ J, and put β(λ) := 1/(λ + ω). Using (2.2) and (4.10), we get
( 1/(λ − λ_nk) + β(λ_nk) ) α'_nk = O(n^{−2}),  n → ∞.
Consequently, the series
M_0(λ) := Σ_{(n,k)∈J} ( 1/(λ − λ_nk) + β(λ_nk) ) α'_nk
converges absolutely and uniformly in λ on compact sets. The following lemma is proved similarly to [48, Theorem 1].
Lemma 6.5. M(λ) = M_0(λ) + C_*, where C_* ∈ C^{m×m} is a constant matrix.
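To illustrate the convergence of the regularized series of this kind, here is a scalar (m = 1) numerical sketch with model Dirichlet-type data λ_n = n^2, α'_n = 2n^2/π and the regularizing term β(λ) = 1/(λ + ω), ω = 1; all of these concrete values are assumptions chosen for the illustration, not data taken from the paper:

```python
import numpy as np

# Scalar (m = 1) illustration with model Dirichlet-type spectral data:
# lambda_n = n^2, alpha'_n = 2 n^2 / pi, and the regularizing term
# beta(lambda) = 1/(lambda + omega) with omega = 1 (assumed toy values).
omega = 1.0
lam = 2.5  # evaluation point, not an eigenvalue

def term(n):
    lam_n = float(n * n)
    alpha_n = 2.0 * lam_n / np.pi
    return (1.0 / (lam - lam_n) + 1.0 / (lam_n + omega)) * alpha_n

# Each term is O(n^{-2}): the 1/(lambda - lambda_n) and beta(lambda_n) parts
# cancel to leading order, so the series converges absolutely.
ns = np.arange(1, 2001)
terms = np.array([term(n) for n in ns])
assert abs(terms[-1]) < 1e-4                              # the terms decay
assert np.all(np.abs(terms[100:]) * ns[100:]**2 < 10.0)   # O(n^{-2}) rate
M0_partial = np.sum(terms)  # a partial sum approximating the series at lam
```

Without the regularizing term β(λ_nk), the summands would behave like O(1) and the series would diverge; the sketch makes the cancellation explicit.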
Using (3.11), (2.4), and (3.4), we obtain the asymptotic formula
M(−τ^2) = τ T_1^⊥ + o(1),  τ → +∞.
Consequently, the constant matrix C_* for M(λ) can be found as follows:
C_* = lim_{τ→+∞} ( τ T_1^⊥ − M_0(−τ^2) ).  (6.3)
In the general case, M(λ) is recovered from the spectral data uniquely up to a constant matrix of the form (6.2).
Proceed to the reconstruction of T_1 and T_2 from the spectral data. Suppose that {λ_nk, α_nk}_{(n,k)∈J} are given. Relations (3.20) and (4.10) imply
T_1^⊥ = (π/2) lim_{n→∞} ( n^{−2} Σ_{k=1}^m α'_nk ),  T_1 = I − T_1^⊥.  (6.4)
Thus, T_1 is found. Using (2.2) and (4.10), we get
r_k = lim_{n→∞} ( √(λ_nk) − n ),  A_k = (π/2) lim_{n→∞} (T_1 + n^{−1} T_1^⊥) α_n^{(k)} (T_1 + n^{−1} T_1^⊥),  k = 1,…,m.  (6.5)
Construct the spectral data {λ_nk^0, α_nk^0}_{(n,k)∈J} of L^0 by the formulas
ρ_nk^0 = n + r_k,  λ_nk^0 = (ρ_nk^0)^2,
α_nk^0 = (2/π)(T_1 + ρ_nk^0 T_1^⊥) A_k (T_1 + ρ_nk^0 T_1^⊥) if ρ_nk^0 ≠ 0,  α_nk^0 = (1/π) T_1 A_k T_1 if ρ_nk^0 = 0.  (6.6)
Using Lemma 6.5 and (6.3), one can find M(λ). According to (3.11), we have
E(ρ) = (T_1 + ρ^{−1} T_1^⊥) M(ρ^2) (ρ T_1 + T_1^⊥).  (6.7)
On the other hand, (2.4), (3.4), and (3.11) imply
E(ρ) = (A tan(ρπ) + B)^{−1} (A − tan(ρπ) B),  A := T_2 T_1 + T_2^⊥ T_1^⊥,  B := T_2^⊥ T_1 − T_2 T_1^⊥.  (6.8)
Lemma 6.6. Consider the equation
(tA + B)^{−1}(A − tB) = E  (6.9)
with respect to the unknown matrices A, B ∈ C^{m×m}. It is supposed that E ∈ C^{m×m} and t ∈ C are known, t ≠ ±i, and det(tA + B) ≠ 0. Equation (6.9) has the solution Ã = E + tI, B̃ = I − tE, unique up to multiplication by a non-singular matrix: Ã = DA, B̃ = DB, det(D) ≠ 0.
Lemma 6.6 is proved by direct calculations.
Fix ρ_* ≠ ρ_nk^0, n ∈ Z, k = 1,…,m, and apply Lemma 6.6 to (6.8) with t := tan(ρ_* π). This yields
D (T_2 T_1 + T_2^⊥ T_1^⊥) = E(ρ_*) + tan(ρ_* π) I,
D (T_2^⊥ T_1 − T_2 T_1^⊥) = I − tan(ρ_* π) E(ρ_*),
where D ∈ C^{m×m} is an unknown non-singular matrix. Hence,
D T_2 = (E(ρ_*) + tan(ρ_* π) I) T_1 + (tan(ρ_* π) E(ρ_*) − I) T_1^⊥ =: D_*.  (6.10)
Thus, T_2 is the matrix of the orthogonal projector onto Ran D_*^†.
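The algebra behind Lemma 6.6 and the recovery of the projector from (6.10) can be checked numerically. The following sketch, with randomly generated matrices and purely illustrative values, verifies that (Ã, B̃) = (E + tI, I − tE) solves (6.9), that Ã = DA, B̃ = DB for a non-singular D (the explicit formula D = (1 + t^2)(tA + B)^{−1} used below is a direct computation, not a formula from the paper), and that a projector T_2 is recovered from D_* = D T_2 as the orthogonal projector onto Ran D_*^†:

```python
import numpy as np

rng = np.random.default_rng(1)
m, t = 4, 0.7  # t is real, hence t != ±i automatically
I = np.eye(m)

# An "unknown" pair (A, B) with det(tA + B) != 0, and E defined by (6.9).
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, m))
E = np.linalg.solve(t * A + B, A - t * B)  # E = (tA + B)^{-1} (A - tB)

# Lemma 6.6: (A~, B~) = (E + tI, I - tE) also satisfies (6.9) ...
A_t, B_t = E + t * I, I - t * E
E2 = np.linalg.solve(t * A_t + B_t, A_t - t * B_t)
assert np.linalg.norm(E2 - E) < 1e-8

# ... and A~ = D A, B~ = D B with the non-singular D = (1 + t^2)(tA + B)^{-1}
# (a direct computation: tA~ + B~ = (1 + t^2) I and A~ - tB~ = (1 + t^2) E).
D = (1.0 + t * t) * np.linalg.inv(t * A + B)
assert np.linalg.norm(A_t - D @ A) < 1e-8
assert np.linalg.norm(B_t - D @ B) < 1e-8

# Recovery of T_2 from D_* = D T_2, cf. (6.10): since D is non-singular,
# Ran(D_*^†) = Ran(T_2), so T_2 is the orthogonal projector onto Ran(D_*^†).
X = rng.standard_normal((m, 2))
Q0, _ = np.linalg.qr(X)
T2 = Q0 @ Q0.T                   # a rank-2 orthogonal projector (test input)
D_star = D @ T2
U, s, Vh = np.linalg.svd(D_star.conj().T)
r = int(np.sum(s > 1e-10))       # numerical rank of D_*^†
Q = U[:, :r]                     # orthonormal basis of Ran(D_*^†)
T2_rec = Q @ Q.conj().T
assert np.linalg.norm(T2_rec - T2) < 1e-8
```

The last part is exactly step 6 of the reconstruction below (6.10): an orthonormal basis of the column space of D_*^†, here obtained via SVD rather than Gram-Schmidt, determines the projector.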
We summarize the arguments above in the following algorithm.
Algorithm 6.7. Let the spectral data {λ_nk, α_nk}_{(n,k)∈J} be given. We have to find T_1 and T_2.
1. Find T_1 by (6.4).
2. Construct the data {λ_nk^0, α_nk^0}_{(n,k)∈J}, using (6.5) and (6.6).
3. Construct M(λ) by the formula M(λ) = M_0(λ) + C_*,
M_0(λ) = Σ_{(n,k)∈J} ( 1/(λ − λ_nk) + β(λ_nk) ) α'_nk,
where C_* can be found by (6.3).
4. Find E(ρ) by (6.7).
5. Fix ρ_* ≠ ρ_nk^0, n ∈ Z, k = 1,…,m, and construct the matrix D_* by (6.10).
6. Determine T_2 as the matrix of the orthogonal projector onto Ran D_*^† (e.g., by using the Gram-Schmidt process).
Proof of Theorem 6.2. Consider two problems L and L̃ such that λ_nk = λ̃_nk, α_nk = α̃_nk, (n,k) ∈ J, J = J̃. The matrices T_1 and T_2 can be uniquely constructed by Algorithm 6.7, so T_1 = T̃_1, T_2 = T̃_2.
Introduce the block matrix of spectral mappings [P_jk(x,λ)]_{j,k=1,2} of size (2m × 2m) as follows:
[ P_11(x,λ)  P_12(x,λ) ; P_21(x,λ)  P_22(x,λ) ] [ ϕ̃(x,λ)  Φ̃(x,λ) ; ϕ̃^{[1]}(x,λ)  Φ̃^{[1]}(x,λ) ] = [ ϕ(x,λ)  Φ(x,λ) ; ϕ^{[1]}(x,λ)  Φ^{[1]}(x,λ) ].
Recall that the matrix Wronskian ⟨(Y(x))^†, Z(x)⟩ does not depend on x if Y and Z are solutions of (1.2). Using this fact together with the definitions of ϕ(x,λ) and Φ(x,λ), it is easy to show that
[ ϕ̃(x,λ)  Φ̃(x,λ) ; ϕ̃^{[1]}(x,λ)  Φ̃^{[1]}(x,λ) ]^{−1} = [ (Φ̃^{[1]}(x,λ))^†  −(Φ̃(x,λ))^† ; −(ϕ̃^{[1]}(x,λ))^†  (ϕ̃(x,λ))^† ].
Consequently, we obtain
P_11 = ϕ (Φ̃^{[1]})^† − Φ (ϕ̃^{[1]})^† = I + (ϕ − ϕ̃)(Φ̃^{[1]})^† − (Φ − Φ̃)(ϕ̃^{[1]})^†,
P_12 = −ϕ Φ̃^† + Φ ϕ̃^† = −(ϕ − ϕ̃) Φ̃^† + (Φ − Φ̃) ϕ̃^†,  (6.11)
where the arguments (x,λ) are omitted for brevity. On the one hand, relation (6.11) and Lemma 4.4 imply
P_11(x,λ) = I + o(1),  P_12(x,λ) = o(1),  |ρ| → ∞,  ρ ∈ G_δ,  (6.12)
for each fixed x ∈ [0,π].
On the other hand, using (2.6), (6.11), and the relation M(λ) = (M(λ̄))^†, we derive
P_11 = ϕ (ψ̃^{[1]})^† − ψ (ϕ̃^{[1]})^† + ϕ (M̃ − M)(ϕ̃^{[1]})^†,
P_12 = −ϕ ψ̃^† + ψ ϕ̃^† + ϕ (M − M̃) ϕ̃^†.
Lemma 6.5 says that the Weyl matrix M(λ) can be recovered from the spectral data uniquely up to an additive constant. Hence, (M̃(λ) − M(λ)) is a constant matrix, and the matrix functions P_11(x,λ), P_12(x,λ) are entire in λ for each fixed x ∈ [0,π]. Therefore, the asymptotics (6.12) together with Liouville's Theorem yield P_11(x,λ) ≡ I, P_12(x,λ) ≡ 0. Consequently, we have ϕ(x,λ) ≡ ϕ̃(x,λ), Φ(x,λ) ≡ Φ̃(x,λ).
Subtracting ℓ̃ϕ̃ = λϕ̃ from ℓϕ = λϕ and taking ϕ ≡ ϕ̃ into account, we obtain
((σ − σ̃)ϕ)' = (σ − σ̃)ϕ'  (6.13)
a.e. on (0,π). In addition, the matrix function (σ − σ̃)ϕ is absolutely continuous on [0,π] for each fixed λ. The same conclusions are valid for Φ instead of ϕ. One can fix λ and choose constant matrices D_1, D_2 so that
det( ϕ(x,λ) D_1 + Φ(x,λ) D_2 ) ≠ 0,  x ∈ [0,π].
The matrix function
(σ(x) − σ̃(x)) ( ϕ(x,λ) D_1 + Φ(x,λ) D_2 )
is absolutely continuous with respect to x ∈ [0,π]. This implies that (σ(x) − σ̃(x)) is absolutely continuous on [0,π]. Using (6.13), we get (σ − σ̃)' = 0 a.e. on (0,π). Thus, σ(x) = σ̃(x) + H^⋄, where H^⋄ is a constant Hermitian matrix. Using the initial conditions ϕ(0,λ) = T_1, ϕ^{[1]}(0,λ) = T_1^⊥, we conclude that (σ(0) − σ̃(0)) T_1 = 0. This yields (6.2). The relation V_2(Φ) = Ṽ_2(Φ̃) implies
T_2 (H̃ − H − H^⋄) Φ(π,λ) = 0.
It follows from (2.8) that Φ(π,λ) = T_2 (V_2(Ψ))^{−1}. Consequently, we obtain the relation for H from (6.1). This completes the proof.

Appendix

In this section, we show how to represent Sturm-Liouville operators on graphs in the form (1.2)-(1.4).
The Sturm-Liouville eigenvalue problem with singular potentials on the star-shaped graph with m edges of equal length π has the form
−(y_j^{[1]})' − σ_j(x_j) y_j^{[1]} − σ_j^2(x_j) y_j = λ y_j,  x_j ∈ (0,π),  j = 1,…,m,  (7.1)
y_j(0) = 0,  j = 1,…,m,  (7.2)
y_1(π) = y_j(π),  j = 2,…,m,  Σ_{j=1}^m y_j^{[1]}(π) = h y_1(π),  (7.3)
where {σ_j}_{j=1}^m are real-valued functions from L_2(0,π), y_j^{[1]} := y_j' − σ_j y_j, y_j, y_j^{[1]} ∈ AC[0,π], (y_j^{[1]})' ∈ L_2(0,π), j = 1,…,m, h ∈ R. Conditions (7.3) generalize the standard matching conditions, which express Kirchhoff's law in electrical circuits, the balance of tension in an elastic string network, etc. (see [4-6]).
The problem (7.1)-(7.3) is equivalent to (1.2)-(1.4) with σ(x) = diag{σ_j(x)}_{j=1}^m (the diagonal matrix with the diagonal entries {σ_j(x)}_{j=1}^m), Y(x) = [y_j(x)]_{j=1}^m, T_1 = 0, T_2 = [T_{2,jk}]_{j,k=1}^m, T_{2,jk} = 1/m, j,k = 1,…,m, H = h T_2. If there are the mixed boundary conditions instead of the Dirichlet boundary conditions (7.2):
y_j^{[1]}(0) − h_j y_j(0) = 0,  h_j ∈ R,  j = 1,…,r,  y_j(0) = 0,  j = r+1,…,m,
then T_1 = [T_{1,jk}]_{j,k=1}^m, T_{1,jk} = 1 if j = k ≤ r and T_{1,jk} = 0 otherwise.
Now consider an arbitrary geometrical graph G with the edges {e_j}_{j=1}^m of equal length π. If the edge lengths are unequal but rationally dependent, then one can add auxiliary vertices to obtain a graph with equal edge lengths. For every edge e_j, j = 1,…,m, we introduce a parameter x_j ∈ [0,π]. Denote the ends of the edge e_j by w_{2j−1} and w_{2j}. The value x_j = 0 corresponds to the end w_{2j−1}, and x_j = π corresponds to w_{2j}. Every vertex v of the graph G is an equivalence class of the ends w_j incident to this vertex: v = {w_{j_1}, w_{j_2}, …, w_{j_r}}. For j = 1,…,m, consider functions y_j(x_j) and σ_j(x_j), x_j ∈ [0,π], from the classes described above. Denote
y|_{w_{2j−1}} = y_j(0),  y|_{w_{2j}} = y_j(π),
y^{[1]}|_{w_{2j−1}} = −y_j^{[1]}(0),  y^{[1]}|_{w_{2j}} = y_j^{[1]}(π),  j = 1,…,m.
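For the star-shaped graph described above, the matrix coefficients are easy to verify numerically. A small sketch, assuming T_2 is the matrix with all entries equal to 1/m and using toy values m = 5, h = 1.3, r = 2 chosen only for illustration:

```python
import numpy as np

m, h, r = 5, 1.3, 2  # toy sizes: m edges, coupling constant h, r mixed conditions

# Star graph: T_1 = 0 and T_2 has all entries 1/m; H = h T_2.
T2 = np.full((m, m), 1.0 / m)
H = h * T2

# T_2 is an orthogonal projection matrix of rank 1 ...
assert np.allclose(T2 @ T2, T2)
assert np.allclose(T2, T2.conj().T)
assert np.linalg.matrix_rank(T2) == 1

# ... and H satisfies the self-adjointness constraints H = H^† = T_2 H T_2.
assert np.allclose(H, H.conj().T)
assert np.allclose(H, T2 @ H @ T2)

# Mixed conditions on the first r edges: T_1 projects onto coordinates 1, ..., r.
T1 = np.diag([1.0] * r + [0.0] * (m - r))
assert np.allclose(T1 @ T1, T1)
assert np.allclose(T1, T1.conj().T)
```

Ran T_2 here is spanned by the all-ones vector, which matches the continuity condition y_1(π) = … = y_m(π) at the central vertex.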
Consider the Sturm-Liouville eigenvalue problem on the graph G given by equations (7.1) on the edges and the following matching conditions at the vertices:
y|_{w_j} = y|_{w_k},  w_j, w_k ∈ v;  Σ_{w_j ∈ v} y^{[1]}|_{w_j} = h_v y|_{w_j},  w_j ∈ v;  v ∈ V,  (7.4)
where V is the set of the vertices of G. Without loss of generality, we may assume that G is a bipartite graph, i.e., its vertices can be divided into two disjoint sets V_1 and V_2 so that each edge connects two vertices from different sets. To achieve this condition, one can add auxiliary vertices in the middle points of the edges. We may assume that all the vertices from V_1 correspond to x_j = 0 and all the vertices from V_2 correspond to x_j = π, i.e., if w_{2j−1} ∈ v, then v ∈ V_1, and, if w_{2j} ∈ v, then v ∈ V_2.
Fix a vertex v ∈ V_2. Let e_{j_1}, e_{j_2}, …, e_{j_r} be the edges incident to v. Construct the matrices T_v = [T_{v,jk}]_{j,k=1}^m and H_v = h_v T_v, where T_{v,jk} = 1/r if j, k ∈ {j_l}_{l=1}^r and T_{v,jk} = 0 otherwise. Put
T_2 := Σ_{v∈V_2} T_v,  H := Σ_{v∈V_2} H_v.
One can easily check that T_2 is an orthogonal projection matrix and H = H^† = T_2 H T_2. The matrices T_1 and H_1 are constructed analogously by V_1, and σ(x) := diag{σ_j(x)}_{j=1}^m. Then the Sturm-Liouville problem (7.1), (7.4) on the graph G is equivalent to (1.2)-(1.4) with the constructed matrix coefficients.
Thus, the results of the paper are valid for Sturm-Liouville operators with singular potentials on arbitrary graphs having rationally dependent edge lengths. Matching conditions of other types than (7.4) can be treated similarly (see, e.g., [4, 7]).

References

[1] Marchenko, V.A. Sturm-Liouville Operators and Their Applications, Naukova Dumka, Kiev (1977) (Russian); English transl., Birkhauser (1986).
[2] Levitan, B.M.; Sargsjan, I.S. Sturm-Liouville and Dirac Operators, Springer, Dordrecht (1991).
[3] Freiling, G.; Yurko, V. Inverse Sturm-Liouville Problems and Their Applications, Nova Science Publishers, Huntington, NY (2001).
[4] Kuchment, P. Quantum graphs. I.
Some basic structures, Waves Random Media 14 (2004), no. 1, S107–S128.
[5] Pokorny, Yu.V.; Penkin, O.M.; Pryadiev, V.L. et al. Differential Equations on Geometrical Graphs, Fizmatlit, Moscow (2004) (Russian).
[6] Berkolaiko, G.; Carlson, R.; Fulling, S.; Kuchment, P. Quantum Graphs and Their Applications, Contemp. Math. 415, Amer. Math. Soc., Providence, RI (2006).
[7] Nowaczyk, M. Inverse Problems for Graph Laplacians, Doctoral Theses in Mathematical Sciences, Lund, Sweden (2007).
[8] Savchuk, A.M.; Shkalikov, A.A. Sturm-Liouville operators with singular potentials, Math. Notes 66 (1999), no. 6, 741–753.
[9] Savchuk, A.M. On the eigenvalues and eigenfunctions of the Sturm-Liouville operator with a singular potential, Math. Notes 69 (2001), no. 2, 245–252.
[10] Savchuk, A.M.; Shkalikov, A.A. Trace formula for Sturm-Liouville operators with singular potentials, Math. Notes 69 (2001), no. 3, 387–400.
[11] Korotyaev, E. Characterization of the spectrum of Schrödinger operators with periodic distributions, International Mathematics Research Notices 2003 (2003), no. 37, 2019–2031.
[12] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials, Inverse Problems 19 (2003), no. 3, 665–684.
[13] Hryniv, R.O.; Mykytyuk, Y.V. Transformation operators for Sturm-Liouville operators with singular potentials, Math. Phys. Anal. Geom. 7 (2004), 119–149.
[14] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. II. Reconstruction by two spectra, North-Holland Mathematics Studies 197 (2004), 97–114.
[15] Hryniv, R.O.; Mykytyuk, Y.V. Half-inverse spectral problems for Sturm-Liouville operators with singular potentials, Inverse Problems 20 (2004), no. 5, 1423–1444.
[16] Savchuk, A.M.; Shkalikov, A.A. Inverse problem for Sturm-Liouville operators with distribution potentials: Reconstruction from two spectra, Russ. J. Math. Phys. 12 (2005), no. 4, 507–514.
[17] Djakov, P.; Mityagin, B.N.
Spectral gap asymptotics of one-dimensional Schrödinger operators with singular periodic potentials, Integral Transforms Spec. Funct. 20 (2009), no. 3–4, 265–273.
[18] Hryniv, R.O. Analyticity and uniform stability in the inverse singular Sturm-Liouville spectral problem, Inverse Problems 27 (2011), no. 6, 065011.
[19] Mirzoev, K.A. Sturm-Liouville operators, Trans. Moscow Math. Soc. 75 (2014), 281–299.
[20] Guliyev, N.J. Schrödinger operators with distributional potentials and boundary conditions dependent on the eigenvalue parameter, J. Math. Phys. 60 (2019), 063501.
[21] Papanicolaou, V.G. Trace formulas and the behaviour of large eigenvalues, SIAM J. Math. Anal. 26 (1995), no. 1, 218–237.
[22] Carlson, R. Large eigenvalues and trace formulas for matrix Sturm-Liouville problems, SIAM J. Math. Anal. 30 (1999), no. 5, 949–962.
[23] Veliev, O.A. Non-self-adjoint Sturm-Liouville operators with matrix potentials, Math. Notes 81 (2007), no. 4, 440–448.
[24] Polyakov, D.M. On the spectral characteristics of non-self-adjoint fourth-order operators with matrix coefficients, Math. Notes 105 (2019), no. 4, 630–635.
[25] Carlson, R. An inverse problem for the matrix Schrödinger equation, J. Math. Anal. Appl. 267 (2002), 564–575.
[26] Malamud, M.M. Uniqueness of the matrix Sturm-Liouville equation given a part of the monodromy matrix, and Borg type results, Sturm-Liouville Theory, Birkhäuser, Basel (2005), 237–270.
[27] Chabanov, V.M. Recovering the M-channel Sturm-Liouville operator from M+1 spectra, J. Math. Phys. 45 (2004), no. 11, 4255–4260.
[28] Yurko, V.A. Inverse problems for matrix Sturm-Liouville operators, Russ. J. Math. Phys. 13 (2006), no. 1, 111–118.
[29] Shieh, C.-T. Isospectral sets and inverse problems for vector-valued Sturm-Liouville equations, Inverse Problems 23 (2007), 2457–2468.
[30] Xu, X.-C. Inverse spectral problem for the matrix Sturm-Liouville operator with the general separated self-adjoint boundary conditions, Tamkang J. Math. 50 (2019), no.
3, 321–336.
[31] Yurko, V. Inverse problems for the matrix Sturm-Liouville equation on a finite interval, Inverse Problems 22 (2006), 1139–1149.
[32] Chelkak, D.; Korotyaev, E. Weyl-Titchmarsh functions of vector-valued Sturm-Liouville operators on the unit interval, J. Func. Anal. 257 (2009), 1546–1588.
[33] Mykytyuk, Ya.V.; Trush, N.S. Inverse spectral problems for Sturm-Liouville operators with matrix-valued potentials, Inverse Problems 26 (2009), no. 1, 015009.
[34] Bondarenko, N. Spectral analysis for the matrix Sturm-Liouville operator on a finite interval, Tamkang J. Math. 42 (2011), no. 3, 305–327.
[35] Bondarenko, N.P. An inverse problem for the non-self-adjoint matrix Sturm-Liouville operator, Tamkang J. Math. 50 (2019), no. 1, 71–102.
[36] Harmer, M. Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions, ANZIAM J. 43 (2002), 1–8.
[37] Aktosun, T.; Weder, R. Inverse Scattering. In: Direct and Inverse Scattering for the Matrix Schrödinger Equation, Applied Mathematical Sciences, vol. 203, Springer, Cham (2021).
[38] Agranovich, Z.S.; Marchenko, V.A. The Inverse Problem of Scattering Theory, Gordon and Breach, New York (1963).
[39] Eckhardt, J.; Gesztesy, F.; Nichols, R.; Teschl, G. Supersymmetry and Schrödinger-type operators with distributional matrix-valued potentials, J. Spectral Theory 4 (2014), no. 4, 715–768.
[40] Eckhardt, J.; Gesztesy, F.; Nichols, R.; Sakhnovich, A.; Teschl, G. Inverse spectral problems for Schrödinger-type operators with distributional matrix-valued potentials, Differential Integral Equations 28 (2015), no. 5/6, 505–522.
[41] Weidmann, J. Spectral Theory of Ordinary Differential Operators, Lecture Notes in Mathematics, Springer-Verlag, Berlin (1987).
[42] Mirzoev, K.A.; Safonova, T.A. Singular Sturm-Liouville operators with distribution potential on spaces of vector functions, Doklady Mathematics 84 (2011), no. 3, 791–794.
[43] Mirzoev, K.A.; Safonova, T.A.
On the deficiency index of the vector-valued Sturm-Liouville operator, Math. Notes 99 (2016), no. 2, 290–303.
[44] Bondarenko, N.P. Solving an inverse problem for the Sturm-Liouville operator with singular potential by Yurko's method, Tamkang J. Math. (accepted), preprint (2020), arXiv:2004.14721 [math.SP].
[45] Bondarenko, N.P. Inverse problem solution and spectral data characterization for the matrix Sturm-Liouville operator with singular potential, preprint (2020), arXiv:2007.07299 [math.SP].
[46] Naimark, M.A. Linear Differential Operators, 2nd ed., Nauka, Moscow (1969); English transl. of 1st ed., Parts I, II, Ungar, New York (1967, 1968).
[47] Bondarenko, N.P. Spectral analysis of the Sturm-Liouville operator on the star-shaped graph, Math. Meth. Appl. Sci. 43 (2020), no. 2, 471–485.
[48] Buterin, S.A.; Shieh, C.-T.; Yurko, V.A. Inverse spectral problems for non-selfadjoint second-order differential operators with Dirichlet boundary conditions, Boundary Value Problems (2013), 2013:180.
[49] Golub, G.H.; Van Loan, C.F. Matrix Computations, Third edition, The Johns Hopkins University Press, Baltimore and London (1996).
[50] Christensen, O. An Introduction to Frames and Riesz Bases, Applied and Numerical Harmonic Analysis, Birkhäuser, Boston (2003).

Natalia Pavlovna Bondarenko
1. Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia
2. Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia
e-mail: