A generalized inverse eigenvalue problem and $m$-functions
KIRAN KUMAR BEHERA
Abstract.
In this manuscript, a generalized inverse eigenvalue problem is considered that involves a linear pencil $(zJ_{[0,n]} - H_{[0,n]})$ of matrices arising in the theory of rational interpolation and biorthogonal rational functions. In addition to the reconstruction of the Hermitian matrix $H_{[0,n]}$ with the entries $b_j$'s, characterizations of the rational functions that are components of the prescribed eigenvectors are given. A condition concerning the positive-definiteness of $J_{[0,n]}$, which is often an assumption in the direct problem, is also isolated. Further, the reconstruction of $H_{[0,n]}$ is viewed through the inverse of the pencil $(zJ_{[0,n]} - H_{[0,n]})$, which involves the concept of $m$-functions.

1. Introduction
A generalized inverse eigenvalue problem (GIEP) concerns the reconstruction of matrices from a given set of spectral data. The spectral data may be completely or only partially specified in terms of eigenvalues and eigenvectors. Precisely, a GIEP for a pair $(H, J)$ of square matrices involves the generalized eigenvalue equation $H\Phi = zJ\Phi$. With the prescribed spectral data, the solution to the problem consists in the reconstruction of the matrices $H$ and/or $J$ [10, 14].

In general, it is often necessary, both from the point of view of practical applications and of mathematical interest, that the matrices involved have a specified structure [6]. This introduces a structural constraint on the solution in addition to the spectral constraint. Thus, one may require both the matrices $H$ and $J$, or one of them, to be, for instance, banded or Hermitian or Hamiltonian [11, 15] and so on.

In the present manuscript, we consider, as an inverse problem, the generalized eigenvalue equation arising from the continued fraction
\[
\cfrac{1}{u_0(z) - \cfrac{v_0^L(z)\, v_0^R(z)}{u_1(z) - \cfrac{v_1^L(z)\, v_1^R(z)}{u_2(z) - \ddots}}},
\tag{1.1}
\]
where $u_j(z)$, $v_j^L(z)$ and $v_j^R(z)$ are non-vanishing polynomials of degree one [2]. If we terminate the above continued fraction at $u_n(z)$, then it is a rational function denoted by $S_{n+1}(z) = Q_{n+1}(z)/P_{n+1}(z)$, where the polynomials $Q_n(z)$ of degree $\leq n-1$ and $P_n(z)$ of degree $\leq n$ satisfy the three term recurrence relation [13]
\[
X_{n+1}(z) = u_n(z) X_n(z) - v_{n-1}^L(z)\, v_{n-1}^R(z)\, X_{n-1}(z), \quad n = 0, 1, 2, \dots,
\tag{1.2}
\]

Mathematics Subject Classification.
Primary 15A29, 30C10.
Key words and phrases.
Generalized inverse eigenvalue problem; Linear pencil of tridiagonal matrices; $m$-functions.

This research is supported by the Dr. D. S. Kothari postdoctoral fellowship scheme of University Grants Commission (UGC), India.

with the initial conditions
\[
Q_{-1}(z) = -1, \quad Q_0(z) = 0, \quad P_{-1}(z) = 0 \quad \text{and} \quad P_0(z) = 1.
\tag{1.3}
\]
The linear pencil that is associated with $S_{n+1}(z)$ and (1.2) is $(zJ_{[0,n]} - H_{[0,n]})$, where the matrices $H_{[0,n]}$ and $J_{[0,n]}$ are tridiagonal.

In the present manuscript, we consider the matrices
\[
H_{[0,n]} =
\begin{pmatrix}
a_0 & b_0 & & & \\
\bar{b}_0 & a_1 & b_1 & & \\
& \bar{b}_1 & a_2 & \ddots & \\
& & \ddots & \ddots & b_{n-1} \\
& & & \bar{b}_{n-1} & a_n
\end{pmatrix},
\qquad
J_{[0,n]} =
\begin{pmatrix}
c_0 & d_0 & & & \\
d_0 & c_1 & d_1 & & \\
& d_1 & c_2 & \ddots & \\
& & \ddots & \ddots & d_{n-1} \\
& & & d_{n-1} & c_n
\end{pmatrix},
\]
with $d_j \neq 0$, $0 \leq j \leq n-1$, and the following generalized inverse eigenvalue problem.

GIEP 1.
Given: the symmetric matrix $J_{[0,n]}$, the Hermitian matrix $H_{[0,k]}$, real numbers $\lambda$ and $\mu$, and vectors $p^R_{[k,n]} = (p^R_k, p^R_{k+1}, \dots, p^R_n)^T$ and $s^R_{[k,n]} = (s^R_k, s^R_{k+1}, \dots, s^R_n)^T$, where $1 \leq k \leq n-1$.

To find:
(i) a Hermitian matrix $H_{[0,n]}$ with eigenvalues $\lambda$ and $\mu$ such that $H_{[0,k]}$ is the leading principal sub-matrix of $H_{[0,n]}$;
(ii) vectors $p^R_{[0,k-1]} = (p^R_0, p^R_1, \dots, p^R_{k-1})^T$ and $s^R_{[0,k-1]} = (s^R_0, s^R_1, \dots, s^R_{k-1})^T$ such that
\[
p^R_{[0,n]} = \begin{pmatrix} p^R_{[0,k-1]} \\ p^R_{[k,n]} \end{pmatrix}
\quad \text{and} \quad
s^R_{[0,n]} = \begin{pmatrix} s^R_{[0,k-1]} \\ s^R_{[k,n]} \end{pmatrix}
\]
are the right eigenvectors of the matrix pencil $(zJ_{[0,n]} - H_{[0,n]})$ corresponding to the eigenvalues $\lambda$ and $\mu$ respectively.

The pencil $zJ_{[0,n]} - H_{[0,n]}$, which is a linear pencil of tridiagonal matrices, arises in the theory of biorthogonal rational functions and rational interpolation [4, 16]. A particular case, in which the $b_j$'s appearing in $H_{[0,n]}$ are purely imaginary and the $c_j$'s appearing in $J_{[0,n]}$ are unity, has its origins in the continued fraction representation of Nevanlinna functions, which in turn are obtained via the Cayley transformation of the continued fraction representation of a Carathéodory function [13] (see also [7]). As further specific illustrations, the rational functions arising as components of eigenvectors in such cases have been related to a class of hypergeometric polynomials orthogonal on the unit circle [8] as well as pseudo-Jacobi polynomials (or Routh-Romanovski polynomials) [7].

In the direct problem, the components of the right eigenvector of the linear pencil $(zJ_{[0,n]} - H_{[0,n]})$ are rational functions with poles at $z = b_j/d_j$, while those of the left eigenvector have poles at $z = \bar{b}_j/d_j$. In addition to the poles, the entries of the matrices $J_{[0,n]}$ and $H_{[0,n]}$ also completely specify the numerators of such rational functions appearing as polynomial solutions of a three term recurrence relation.
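For a concrete feel of the objects involved, the generalized eigenvalue equation for such a pencil can be set up numerically. The following is a minimal sketch, with illustrative entries $a_j$, $b_j$, $c_j$, $d_j$ chosen only for the example (they are not taken from the text): it builds a Hermitian tridiagonal $H_{[0,n]}$ and a real symmetric tridiagonal $J_{[0,n]}$ and checks that each computed eigenpair annihilates the pencil.

```python
import numpy as np

n = 4  # the pencil acts on C^{n+1}

# Hermitian tridiagonal H: real diagonal a_j, superdiagonal b_j, subdiagonal conj(b_j)
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j])
H = np.diag(a).astype(complex) + np.diag(b, 1) + np.diag(b.conj(), -1)

# real symmetric tridiagonal J with nonzero off-diagonal entries d_j;
# strictly diagonally dominant, hence positive definite and invertible
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0])
J = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)

# H phi = z J phi  <=>  (J^{-1} H) phi = z phi, since J is invertible
evals, V = np.linalg.eig(np.linalg.solve(J, H))

# each pair (z_i, phi_i) should solve the pencil equation (z_i J - H) phi_i = 0
residuals = [np.linalg.norm((zi * J - H) @ V[:, i]) for i, zi in enumerate(evals)]
```

Since this illustrative $J$ is positive definite and $H$ is Hermitian, the generalized eigenvalues come out real; the paper itself does not assume positive-definiteness of $J_{[0,n]}$.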
It is also known that the zeros of these numerator polynomials are the eigenvalues of the linear pencil under consideration [8, Theorem 1.1]. These numerator polynomials are precisely the normalized $P_j(z)$, the denominators of the convergents $S_j(z)$ of the continued fraction (1.1). Further, it can be verified with the recurrence relation (1.2) and the initial conditions (1.3) that the expressions
\[
P_{n+1}(z) = \det(zJ_{[0,n]} - H_{[0,n]}), \qquad Q_{n+1}(z) = \det(zJ_{[1,n]} - H_{[1,n]}),
\]
hold, which lead to the formula
\[
S_{n+1}(z) = \frac{Q_{n+1}(z)}{P_{n+1}(z)} = \left\langle (zJ_{[0,n]} - H_{[0,n]})^{-1} e_0, e_0 \right\rangle,
\]
with the standard inner product
\[
\langle x, y \rangle = \sum_{j=0}^{\infty} x_j \bar{y}_j, \qquad x = (x_0, x_1, \dots) \in \ell^2, \quad y = (y_0, y_1, \dots) \in \ell^2,
\]
on the space $\ell^2$ of complex square summable sequences.

A fundamental object related to a pair $(H, J)$ of matrices is the function
\[
m(z) = \left\langle (zJ - H)^{-1} e_0, e_0 \right\rangle, \quad z \in \rho(H, J),
\]
called the $m$-function or the Weyl function of the linear pencil $(zJ - H)$ [4] (see also [1, 3]). Here $\sigma(H, J)$ and $\rho(H, J) := \mathbb{C} \setminus \sigma(H, J)$ are, respectively, the spectrum and the resolvent set of the pencil $(zJ - H)$. We denote similarly the $m$-function
\[
m(z, j+1) = \frac{Q_{j+1}(z)}{P_{j+1}(z)} = \left\langle (zJ_{[0,j]} - H_{[0,j]})^{-1} e_0, e_0 \right\rangle, \quad z \in \rho(H_{[0,j]}, J_{[0,j]}),
\tag{1.4}
\]
of the finite pencil $(zJ_{[0,j]} - H_{[0,j]})$.

Thus, a way to interpret the reconstruction of the matrix $H_{[0,n]}$ is to determine its entries in terms of rational functions with arbitrary poles. These rational functions enter into the problem as components of a prescribed eigenvector, while the structural constraint of the pencil being tridiagonal characterizes these poles.

Our primary goal in this manuscript is to find a representation of the entries $b_j$'s of the matrix $H_{[0,n]}$ in terms of given spectral points and corresponding eigenvectors.
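The determinantal expression for the $m$-function is the cofactor formula for the $(0,0)$ entry of the inverse, and it can be checked directly. A small sketch with illustrative entries, evaluated at a point assumed to lie well inside the resolvent set:

```python
import numpy as np

# illustrative 3x3 pencil (entries chosen for the example, not from the text)
H = np.array([[1.0, 1 + 1j, 0],
              [1 - 1j, -0.5, 0.5 - 2j],
              [0, 0.5 + 2j, 2.0]])
J = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 3.0]])

z = 10.0 + 0j          # assumed to be in the resolvent set of the pencil
M = z * J - H

# m(z) = <(zJ - H)^{-1} e_0, e_0>: the (0,0) entry of the inverse ...
m_inv = np.linalg.inv(M)[0, 0]
# ... equals the ratio of the trailing minor to the full determinant (Cramer)
m_det = np.linalg.det(M[1:, 1:]) / np.linalg.det(M)
```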
We find characterizations of both the given poles and the entries $b_j$'s which appear in special matrix pencils as mentioned earlier. A condition concerning the positive-definiteness of $J_{[0,n]}$, which is often an assumption in the direct problem, is also isolated. Further, we have a view at the entries $b_j$'s through the $m$-functions (1.4) which, as is obvious, involve a point in the resolvent set and not in the spectrum of the pair $(H_{[0,n]}, J_{[0,n]})$.

The manuscript is organized as follows. Section 2 includes preliminary results that illustrate the key role played by the entry $b_k$ in the inverse approach to the linear pencil $(zJ_{[0,n]} - H_{[0,n]})$. The matrix $H_{[0,n]}$ is reconstructed in Section 3, thereby solving GIEP 1. In Section 4 we have a further look at the problem through $m$-functions, which involves computing the inverse of the matrix $(zJ_{[0,n]} - H_{[0,n]})$.

2. Preliminary results
In this section, we derive some results that will help in solving the GIEP. Though the entries are yet to be determined, we use them as symbols in the computation, with the final result depending only on $b_k$ and given components of the eigenvector. In a way, these results exhibit the role played by the specific entry $b_k$ in the solution.

First of all, it can be seen that if $H_{[0,n]}$ were completely specified, the leading minors $P_m(z)$ of $(zJ_{[0,n]} - H_{[0,n]})$ satisfy the three term recurrence relation
\[
X_{m+1}(z) = (zc_m - a_m) X_m(z) - (zd_{m-1} - b_{m-1})(zd_{m-1} - \bar{b}_{m-1}) X_{m-1}(z),
\tag{2.1}
\]
for $0 \leq m \leq n$, where we define $P_{-1}(z) := 0$ and $P_0(z) := 1$. If $\kappa_m$ is the leading coefficient of $P_m(z)$, then from (2.1) we have $\kappa_{m+1} = c_m \kappa_m - d_{m-1}^2 \kappa_{m-1}$, with $\kappa_0 = 1$ and $\kappa_1 = c_0$. Hence, if
\[
\frac{\kappa_m}{\kappa_{m-1}} \neq \frac{d_{m-1}^2}{c_m}, \quad m \geq 1,
\]
then $P_{m+1}(z)$ is a polynomial of degree $m+1$. Further, $(zJ_{[0,n]} - H_{[0,n]})\, p^R_{[0,n]} = 0$ yields the relations
\[
(zd_{m-1} - \bar{b}_{m-1})\, p^R_{m-1}(z) + (zc_m - a_m)\, p^R_m(z) + (zd_m - b_m)\, p^R_{m+1}(z) = 0,
\tag{2.2}
\]
for $m = 0, 1, \dots, n$, where $p^R_{n+1}(z) = 0$ and we define $p^R_{-1}(z) := 0$. The former equality occurs if $z \in \sigma(H_{[0,n]}, J_{[0,n]})$. Moreover, with $p^R_0(z)$ a non-vanishing function to be specified, the components of the eigenvector $p^R_{[0,n]}$ can be obtained from (2.2), for instance by induction, in the form of the rational functions
\[
p^R_m(z) = \frac{P_m(z)}{\prod_{j=0}^{m-1} (b_j - zd_j)}\, p^R_0(z), \quad m = 1, 2, \dots, k, k+1, \dots, n-1,
\tag{2.3}
\]
and $p^R_n(z)$ obtained from $(zc_n - a_n)\, p^R_n(z) = (\bar{b}_{n-1} - zd_{n-1})\, p^R_{n-1}(z)$. However, because of the prescribed data, we will assume that the components of the vector $p^R_{[k,n]}$ are given in the form
\[
p^R_m(z) = \frac{T_m(z)}{\eta^z_m \prod_{j=0}^{m-1} (\alpha_j - z)}, \quad \eta^z_m \in \mathbb{R} \setminus \{0\}, \quad m = k, k+1, \dots, n,
\tag{2.4}
\]
where $T_m(z)$ is a polynomial of degree $m$ with leading coefficient $\varsigma_m$.
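The rational form (2.3) can be checked directly: if $p^R_m$ is built from the leading principal minors $P_m(z)$ of $zJ_{[0,n]} - H_{[0,n]}$, then every row of $(zJ_{[0,n]} - H_{[0,n]})\,p^R_{[0,n]}$ vanishes except the last one, which is generically nonzero when $z$ is not an eigenvalue. A sketch with illustrative entries (assumed values, not from the text):

```python
import numpy as np

n = 4
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j])
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0])
H = np.diag(a).astype(complex) + np.diag(b, 1) + np.diag(b.conj(), -1)
J = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)

z = 0.7           # a generic point; b_j - z d_j != 0 since Im(b_j) != 0 here
M = z * J - H

# leading principal minors P_m(z) of zJ - H, with P_0 = 1
P = [1.0 + 0j] + [np.linalg.det(M[:m, :m]) for m in range(1, n + 1)]

# components from (2.3), normalized by p_0 = 1
p = np.array([P[m] / np.prod(b[:m] - z * d[:m]) for m in range(n + 1)])

# rows 0..n-1 of (zJ - H) p vanish; only the last row is generically nonzero
r = M @ p
```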
But, we note that once $H_{[0,n]}$ is determined, a component of the right eigenvector of $(zJ_{[0,n]} - H_{[0,n]})$ must be of the form (2.3). In particular, if we look at $p^R_k(z)$, this would imply that the set $\{\alpha_0, \alpha_1, \dots, \alpha_{k-1}\}$ is necessarily a permutation of the set $\{b_0/d_0, b_1/d_1, \dots, b_{k-1}/d_{k-1}\}$, which is known. The inverse problem thus consists of $\alpha_j$, $j = k, \dots, n-1$, being arbitrary, on which the determination of $b_j$, $j = k, \dots, n-1$, depends.
Lemma 2.1.
Given an eigen-pair $(\lambda, p^R_{[0,n]})$ for $(H_{[0,n]}, J_{[0,n]})$, let $\lambda \notin \sigma(H_{[0,k]}, J_{[0,k]})$. Then the components of $p^R_{[0,k-1]}$ are given by
\[
p^R_m(z) = (b_k - zd_k) \prod_{j=m}^{k-1} (b_j - zd_j)\, \frac{P_m(z)}{P_{k+1}(z)}\, p^R_{k+1}(z), \quad m = 1, 2, \dots, k-1,
\]
at $z = \lambda$, with $p^R_0(\lambda)$ assumed to be a non-vanishing function of $\lambda$.

Proof. Since $\lambda \notin \sigma(H_{[0,k]}, J_{[0,k]})$, $\det(\lambda J_{[0,k]} - H_{[0,k]}) \neq 0$, which implies that $P_{k+1}(\lambda) \neq 0$. If we use the form of $p^R_{k+1}(\lambda)$ as suggested by (2.3), with $b_k$ unknown at the moment, we will have that $p^R_{k+1}(\lambda) \neq 0$ and
\[
p^R_0(\lambda) = \frac{\prod_{j=0}^{k} (b_j - \lambda d_j)\, p^R_{k+1}(\lambda)}{P_{k+1}(\lambda)}
= (b_k - \lambda d_k)\, \frac{p^R_{k+1}(\lambda)}{P_{k+1}(\lambda)} \prod_{j=0}^{k-1} (b_j - \lambda d_j)\, P_0(\lambda).
\]
Then, $p^R_1(\lambda)$ can be obtained using (2.2) for $m = 0$, as $(\lambda c_0 - a_0)\, p^R_0(\lambda) + (\lambda d_0 - b_0)\, p^R_1(\lambda) = 0$, giving
\[
p^R_1(\lambda) = (b_k - \lambda d_k)\, \frac{p^R_{k+1}(\lambda)}{P_{k+1}(\lambda)} \prod_{j=1}^{k-1} (b_j - \lambda d_j)\, P_1(\lambda).
\]
The rest of the proof can be completed by induction using (2.2) and (2.3). $\square$
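Lemma 2.1 is a ratio relation between eigenvector components, so it is insensitive to the normalization of the eigenvector and can be verified numerically. A sketch with illustrative entries (assumed values, not from the text), using one computed eigenpair:

```python
import numpy as np

n, k = 5, 3
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0, 0.8])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j, 1 - 1j])
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0, -0.5])
H = np.diag(a).astype(complex) + np.diag(b, 1) + np.diag(b.conj(), -1)
J = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)   # positive definite

evals, V = np.linalg.eig(np.linalg.solve(J, H))
lam, p = evals[0].real, V[:, 0]                   # one eigenpair (lambda, p^R)

M = lam * J - H
minor = lambda m: 1.0 + 0j if m == 0 else np.linalg.det(M[:m, :m])  # P_m(lam)

# Lemma 2.1: for m = 1, ..., k-1, p_m is a fixed multiple of p_{k+1}
errs = []
for m in range(1, k):
    rhs = ((b[k] - lam * d[k]) * np.prod(b[m:k] - lam * d[m:k])
           * minor(m) / minor(k + 1) * p[k + 1])
    errs.append(abs(p[m] - rhs))
```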
We note that $P_{k+1}(\lambda)$ is known, since (2.1) involves $a_k$ and $b_{k-1}$. Thus, Lemma 2.1 shows that once $b_k$ is computed, the vector $p^R_{[0,k-1]}$ can be uniquely obtained in terms of $p^R_{k+1}(\lambda)$, which is known in the form given by (2.4). This determines $p^R_{[0,k-1]}$ at $z = \lambda$ completely.

To proceed further, we will make use of rational functions of the form
\[
p^L_m(z) = \frac{P_m(z)}{\prod_{j=0}^{m-1} (\bar{b}_j - zd_j)}\, p^L_0(z), \quad m = 1, \dots, k-1, k, \dots, n-1,
\tag{2.5}
\]
and $p^L_n(z)$ obtained from the equation $(zc_n - a_n)\, p^L_n(z) = (b_{n-1} - zd_{n-1})\, p^L_{n-1}(z)$. These arise as components of the left eigenvector $p^L_{[0,n]}$ of $(zJ_{[0,n]} - H_{[0,n]})$ corresponding to the eigenvalue $z = \lambda$ and, owing to the underlying Hermitian character of the problem, satisfy $p^L_j(\lambda) = \overline{p^R_j(\lambda)}$, $0 \leq j \leq n$. Again, we reiterate that $p^L_j(z)$ is specified in the form (2.5) only for $j = 0, 1, \dots, k-1$, while $p^L_m(z)$ is obtained in the form suggested by (2.4) for $m = k, k+1, \dots, n$.

Similarly, we define the rational functions $s^L_m(\mu)$ and $s^R_m(\mu)$ corresponding to the eigenvalue $\mu$. Lemma 2.1, with $\lambda$ replaced by $\mu$ and the assumption $\mu \notin \sigma(H_{[0,k]}, J_{[0,k]})$, gives the corresponding expressions for $s^R_j(z)$ at $z = \mu$ for $j = 0, 1, \dots, k-1$. In what follows, we write $p^R_j := p^R_j(\lambda)$, $s^R_j := s^R_j(\mu)$, and similarly for the others. Let us also denote by $J_{[k+1,n]}$ and $H_{[k+1,n]}$, as is clear from the notations, the trailing matrices obtained by removing the first $k+1$ rows and columns from $J_{[0,n]}$ and $H_{[0,n]}$ respectively.

Lemma 2.2.
Suppose $\lambda, \mu \notin \sigma(H_{[0,i]}, J_{[0,i]})$ for $i = k-1, k$. Then the following identities hold:
\[
(\lambda - \mu)\, s^L_{[k+1,n]} J_{[k+1,n]}\, p^R_{[k+1,n]} = (\bar{b}_k - \lambda d_k)\, s^L_{k+1} p^R_k - (b_k - \mu d_k)\, s^L_k p^R_{k+1},
\tag{2.6}
\]
\[
(\lambda - \mu)\, s^L_{[0,k]} J_{[0,k]}\, p^R_{[0,k]} = (b_k - \lambda d_k)\, s^L_k p^R_{k+1} - (\bar{b}_k - \mu d_k)\, s^L_{k+1} p^R_k.
\tag{2.7}
\]

Proof. The relations (2.2) for $m = k+1, \dots, n$ can be written in the compact form
\[
H_{[k+1,n]}\, p^R_{[k+1,n]} = z J_{[k+1,n]}\, p^R_{[k+1,n]} - (\bar{b}_k - zd_k)\, p^R_k\, \vec{e}_1,
\tag{2.8}
\]
where $\vec{e}_1 = (1, 0, \dots, 0)^T \in \mathbb{R}^{n-k}$. Post-multiplying (2.8) at $z = \lambda$ by $s^L_{[k+1,n]}$, we obtain
\[
H_{[k+1,n]}\, p^R_{[k+1,n]}\, s^L_{[k+1,n]} = \lambda J_{[k+1,n]}\, p^R_{[k+1,n]}\, s^L_{[k+1,n]} - (\bar{b}_k - \lambda d_k)\, p^R_k\, \vec{e}_1 s^L_{[k+1,n]},
\tag{2.9}
\]
while pre-multiplying the conjugate transpose of (2.8) at $z = \mu$ by $p^R_{[k+1,n]}$, we obtain
\[
p^R_{[k+1,n]}\, s^L_{[k+1,n]}\, H_{[k+1,n]} = \mu\, p^R_{[k+1,n]}\, s^L_{[k+1,n]}\, J_{[k+1,n]} - (b_k - \mu d_k)\, s^L_k\, p^R_{[k+1,n]} \vec{e}_1^{\,T}.
\tag{2.10}
\]
We proceed with the well-known technique of subtracting traces of the respective sides of (2.9) and (2.10). The left hand side upon subtraction is zero, owing to the fact that $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$ for any well-defined matrix product. Consequently, we have
\[
(\lambda - \mu)\, \mathrm{Tr}\big[p^R_{[k+1,n]} s^L_{[k+1,n]} J_{[k+1,n]}\big]
= (\bar{b}_k - \lambda d_k)\, p^R_k\, \mathrm{Tr}\big[\vec{e}_1 s^L_{[k+1,n]}\big] - (b_k - \mu d_k)\, s^L_k\, \mathrm{Tr}\big[p^R_{[k+1,n]} \vec{e}_1^{\,T}\big].
\]
The left hand side above is equal to the matrix product $(\lambda - \mu)\, s^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]}$, while the right hand side simplifies to $(\bar{b}_k - \lambda d_k) s^L_{k+1} p^R_k - (b_k - \mu d_k) s^L_k p^R_{k+1}$, which gives (2.6). A similar computation starting from
\[
H_{[0,k]}\, p^R_{[0,k]} = z J_{[0,k]}\, p^R_{[0,k]} - (b_k - zd_k)\, p^R_{k+1}\, \vec{e}_{k+1}, \qquad \vec{e}_{k+1} = (0, \dots, 0, 1)^T \in \mathbb{R}^{k+1},
\]
leads to (2.7). $\square$

The assumptions in Lemma 2.2 are necessary for the right hand sides of (2.6) and (2.7) to be non-vanishing. Later, we will use (2.7) to make an observation regarding the positive-definiteness of $J_{[0,k]}$. Before that, we solve the stated inverse problem GIEP 1.

3. Solution to the GIEP
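The computations in this section rest on how $J_{[0,n]}$ splits into its leading and trailing diagonal blocks plus the two coupling entries $d_k$: since $(\lambda - \mu)\, s^L_{[0,n]} J_{[0,n]} p^R_{[0,n]} = 0$ for eigenpairs at two distinct eigenvalues, the split leaves only the boundary terms through $d_k$. A numerical sanity check of this splitting identity, with illustrative entries (assumed values, not from the text):

```python
import numpy as np

n, k = 4, 1
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j])
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0])
H = np.diag(a).astype(complex) + np.diag(b, 1) + np.diag(b.conj(), -1)
J = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)   # positive definite

evals, V = np.linalg.eig(np.linalg.solve(J, H))
lam, mu = evals[0].real, evals[1].real            # two distinct eigenvalues
p = V[:, 0]                                       # right eigenvector at lambda
sL = V[:, 1].conj()   # left eigenvector at mu (H Hermitian, real spectrum)

# splitting s^L J p^R across the blocks [0,k] and [k+1,n] leaves only the
# coupling terms through d_k; the total must vanish
lhs = (sL[:k+1] @ J[:k+1, :k+1] @ p[:k+1]
       + sL[k+1:] @ J[k+1:, k+1:] @ p[k+1:]
       + d[k] * (sL[k+1] * p[k] + sL[k] * p[k+1]))
```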
The given data in GIEP 1 suggest that we write the equation $(\lambda J_{[0,n]} - H_{[0,n]})\, p^R_{[0,n]} = 0$ in the form
\[
\begin{pmatrix}
\lambda J_{[0,k]} - H_{[0,k]} & O_{\lambda} \\
O_{\lambda}^{*} & \lambda J_{[k+1,n]} - H_{[k+1,n]}
\end{pmatrix}
\begin{pmatrix}
p^R_{[0,k]} \\
p^R_{[k+1,n]}
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0
\end{pmatrix},
\tag{3.1}
\]
where
\[
O_{\lambda} =
\begin{pmatrix}
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0 \\
\lambda d_k - b_k & 0 & \cdots & 0
\end{pmatrix}.
\tag{3.2}
\]
Pre-multiplying (3.1) by $s^L_{[0,n]}$ gives the relation
\[
s^L_{[0,k]} H_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} H_{[k+1,n]} p^R_{[k+1,n]} - s^L_{[0,k]} O_{\lambda} p^R_{[k+1,n]} - s^L_{[k+1,n]} O_{\lambda}^{*} p^R_{[0,k]}
= \lambda \big( s^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]} \big).
\tag{3.3}
\]
Similarly, from $s^L_{[0,n]} (\mu J_{[0,n]} - H_{[0,n]}) = 0$, we obtain
\[
s^L_{[0,k]} H_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} H_{[k+1,n]} p^R_{[k+1,n]} - s^L_{[0,k]} O_{\mu} p^R_{[k+1,n]} - s^L_{[k+1,n]} O_{\mu}^{*} p^R_{[0,k]}
= \mu \big( s^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]} \big),
\]
which, used with (3.3) to eliminate $H_{[0,k]}$ and $H_{[k+1,n]}$, gives
\[
s^L_{[0,k]} [O_{\mu} - O_{\lambda}] p^R_{[k+1,n]} + s^L_{[k+1,n]} [O_{\mu}^{*} - O_{\lambda}^{*}] p^R_{[0,k]}
= (\lambda - \mu) \big( s^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]} \big).
\]
The left hand side can be simplified further to finally obtain
\[
s^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} + s^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]} + d_k \big( s^L_{k+1} p^R_k + s^L_k p^R_{k+1} \big) = 0,
\]
which, as $\mu \to \lambda$, implies
\[
d_k \begin{vmatrix} p^L_k & -p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix}
= - \big[ p^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} + p^L_{[k+1,n]} J_{[k+1,n]} p^R_{[k+1,n]} \big].
\tag{3.4}
\]
The next lemma provides a crucial characterization of the pole $\alpha_k$; that is, it should not be a real number if the entry $b_k$ is to be determined uniquely.

Lemma 3.1.
Suppose $\alpha_k \notin \mathbb{R}$. Then
\[
\Delta_k = p^L_{k+1} p^R_k \begin{vmatrix} s^L_k & s^R_k \\ s^L_{k+1} & s^R_{k+1} \end{vmatrix}
- s^L_{k+1} s^R_k \begin{vmatrix} p^L_k & p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix} \neq 0,
\tag{3.5}
\]
if $\lambda, \mu \notin \sigma(H_{[0,i]}, J_{[0,i]})$ for $i = k-1, k$.

Proof. Using the forms as suggested by (2.4), we first note that
\[
\begin{vmatrix} p^L_k & p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix}
= - \frac{T_k(\lambda)\, T_{k+1}(\lambda)}{\eta^{\lambda}_k \eta^{\lambda}_{k+1} \prod_{t=0}^{k} |\alpha_t - \lambda|^2}\, 2i\, \mathrm{Im}\, \alpha_k \neq 0,
\]
by the given assumptions, where $\mathrm{Im}\, \alpha_k$ is the imaginary part of $\alpha_k$. Then, using (2.4) for the remaining factors, a direct computation simplifies $\Delta_k$ to
\[
\Delta_k = \frac{T_k(\lambda)\, T_k(\mu)\, T_{k+1}(\lambda)\, T_{k+1}(\mu)}{\eta^{\lambda}_k \eta^{\mu}_k \eta^{\lambda}_{k+1} \eta^{\mu}_{k+1} \prod_{j=0}^{k} |\alpha_j - \lambda|^2 |\alpha_j - \mu|^2}\, 2i\, (\lambda - \mu)\, \mathrm{Im}\, \alpha_k.
\]
Since $\alpha_k \notin \mathbb{R}$ and $\lambda \neq \mu$, we have that $\Delta_k$ is not zero. $\square$

Now, with $\Delta_k$ a non-zero purely imaginary number or, equivalently, $i\Delta_k$ a non-vanishing real number, we proceed to show that $b_k$ can be determined uniquely.

Theorem 3.2.
Suppose that the given spectral points $\lambda, \mu \notin \sigma(H_{[0,i]}, J_{[0,i]})$ for $i = k-1, k$. If $\alpha_k \notin \mathbb{R}$, then
\[
b_k = (\lambda + \mu)\, d_k + \frac{d_k}{\Delta_k} \left[ \mu\, s^L_{k+1} s^R_k \begin{vmatrix} p^L_k & p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix}
- \lambda\, p^L_{k+1} p^R_k \begin{vmatrix} s^L_k & s^R_k \\ s^L_{k+1} & s^R_{k+1} \end{vmatrix} \right],
\tag{3.6}
\]
where $\Delta_k$ is given by (3.5).

Proof. We have from (2.2)
\[
(\lambda d_{m-1} - \bar{b}_{m-1}) p^R_{m-1} + (\lambda c_m - a_m) p^R_m + (\lambda d_m - b_m) p^R_{m+1} = 0,
\qquad
(\lambda d_{n-1} - \bar{b}_{n-1}) p^R_{n-1} + (\lambda c_n - a_n) p^R_n = 0,
\tag{3.7}
\]
and the corresponding equations for the components of $p^L_{[0,n]}$,
\[
(\lambda d_{m-1} - b_{m-1}) p^L_{m-1} + (\lambda c_m - a_m) p^L_m + (\lambda d_m - \bar{b}_m) p^L_{m+1} = 0,
\qquad
(\lambda d_{n-1} - b_{n-1}) p^L_{n-1} + (\lambda c_n - a_n) p^L_n = 0.
\tag{3.8}
\]
Eliminating $p^R_m$ and $p^L_m$ between the first equations of (3.7) and (3.8), we obtain
\[
(p^L_{m-1} p^R_m b_{m-1} - p^L_m p^R_{m+1} b_m) + (p^L_{m+1} p^R_m \bar{b}_m - p^L_m p^R_{m-1} \bar{b}_{m-1})
= \lambda d_{m-1} (p^L_{m-1} p^R_m - p^L_m p^R_{m-1}) + \lambda d_m (p^L_{m+1} p^R_m - p^L_m p^R_{m+1}),
\]
which, on summing the respective sides from $m = k+1$ to $m = n-1$, gives
\[
(p^L_k p^R_{k+1} b_k - p^L_{n-1} p^R_n b_{n-1}) + (p^L_n p^R_{n-1} \bar{b}_{n-1} - p^L_{k+1} p^R_k \bar{b}_k)
= \lambda d_k (p^L_k p^R_{k+1} - p^L_{k+1} p^R_k) + \lambda d_{n-1} (p^L_n p^R_{n-1} - p^L_{n-1} p^R_n).
\tag{3.9}
\]
From the last two relations of (3.7) and (3.8), we get
\[
p^L_{n-1} p^R_n b_{n-1} - p^L_n p^R_{n-1} \bar{b}_{n-1} = \lambda d_{n-1} (p^L_{n-1} p^R_n - p^L_n p^R_{n-1}),
\tag{3.10}
\]
which, when added to (3.9), yields
\[
p^L_k p^R_{k+1} b_k - p^L_{k+1} p^R_k \bar{b}_k = \lambda d_k (p^L_k p^R_{k+1} - p^L_{k+1} p^R_k).
\tag{3.11}
\]
A computation similar to the relations (3.7), (3.8) and (3.9) for the eigen-pair $(\mu, s^R_{[0,n]})$ gives
\[
s^L_k s^R_{k+1} b_k - s^L_{k+1} s^R_k \bar{b}_k = \mu d_k (s^L_k s^R_{k+1} - s^L_{k+1} s^R_k).
\tag{3.12}
\]
We solve the system of equations (3.11) and (3.12) for $b_k$ and $\bar{b}_k$. First, the determinant of the system is
\[
\Delta_k = p^L_{k+1} p^R_k\, s^L_k s^R_{k+1} - p^L_k p^R_{k+1}\, s^L_{k+1} s^R_k
= p^L_{k+1} p^R_k \begin{vmatrix} s^L_k & s^R_k \\ s^L_{k+1} & s^R_{k+1} \end{vmatrix}
- s^L_{k+1} s^R_k \begin{vmatrix} p^L_k & p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix},
\]
which by Lemma 3.1 is non-zero. It is now a matter of computation to obtain
\[
\Delta_k b_k = \lambda d_k\, p^L_{k+1} p^R_k\, s^L_{k+1} s^R_k + \mu d_k\, s^L_k s^R_{k+1}\, p^L_{k+1} p^R_k
- \lambda d_k\, p^L_k p^R_{k+1}\, s^L_{k+1} s^R_k - \mu d_k\, s^L_{k+1} s^R_k\, p^L_{k+1} p^R_k,
\]
which can be further simplified to obtain
\[
\Delta_k b_k = (\lambda + \mu)\, d_k \Delta_k + d_k \big[ \lambda\, p^L_{k+1} p^R_k (s^L_{k+1} s^R_k - s^L_k s^R_{k+1}) - \mu\, s^L_{k+1} s^R_k (p^L_{k+1} p^R_k - p^L_k p^R_{k+1}) \big],
\]
leading to (3.6) and specifying the entry $b_k$ uniquely. $\square$

A similar computation for $\bar{b}_k$ gives
\[
\bar{b}_k = (\lambda + \mu)\, d_k + \frac{d_k}{\Delta_k} \left[ \mu\, s^L_k s^R_{k+1} \begin{vmatrix} p^L_k & p^R_k \\ p^L_{k+1} & p^R_{k+1} \end{vmatrix}
- \lambda\, p^L_k p^R_{k+1} \begin{vmatrix} s^L_k & s^R_k \\ s^L_{k+1} & s^R_{k+1} \end{vmatrix} \right].
\]
Theorem 3.2 finds the unique expression for $b_k$.
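Formula (3.6) can be exercised numerically: construct a pencil with known entries, take two eigenpairs (with the left eigenvectors the conjugates of the right ones, as the Hermitian structure dictates for real eigenvalues), and recover $b_k$. A sketch with illustrative entries (assumed values, not from the text); it presumes $\Delta_k \neq 0$, which holds generically here:

```python
import numpy as np

n, k = 4, 2
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j])
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0])
H = np.diag(a).astype(complex) + np.diag(b, 1) + np.diag(b.conj(), -1)
J = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)   # positive definite => real spectrum

evals, V = np.linalg.eig(np.linalg.solve(J, H))
lam, mu = evals[0].real, evals[1].real            # two distinct eigenvalues
pR, sR = V[:, 0], V[:, 1]                          # right eigenvectors
pL, sL = pR.conj(), sR.conj()                      # left eigenvectors

det_p = pL[k] * pR[k+1] - pR[k] * pL[k+1]          # the 2x2 determinants of (3.5)
det_s = sL[k] * sR[k+1] - sR[k] * sL[k+1]
Dk = pL[k+1] * pR[k] * det_s - sL[k+1] * sR[k] * det_p   # Delta_k

# entry b_k recovered from the two eigenpairs via (3.6)
b_rec = (lam + mu) * d[k] + (d[k] / Dk) * (
    mu * sL[k+1] * sR[k] * det_p - lam * pL[k+1] * pR[k] * det_s)
```

Note that (3.6) is invariant under rescaling of either eigenvector, so no particular normalization is needed.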
Summing as in (3.9) from $m = j$ to $m = n-1$, for $j = k+1, \dots, n-1$, and solving the resulting systems gives $b_j$, $j = k+1, \dots, n-2$. The entry $b_{n-1}$ is found from the system of equations consisting of (3.10) and the equivalent equation in $\mu$. The assumptions are $\alpha_j \notin \mathbb{R}$ and $\lambda, \mu \notin \sigma(H_{[0,j]}, J_{[0,j]})$, $j = k+1, \dots, n-1$. Thus, with $b_j$, $j = k, k+1, \dots, n-1$, determined, the $a_j$'s are found using (2.2) as
\[
a_i =
\begin{cases}
\lambda c_i + \dfrac{(\lambda d_{i-1} - \bar{b}_{i-1})\, p^R_{i-1}(\lambda) + (\lambda d_i - b_i)\, p^R_{i+1}(\lambda)}{p^R_i(\lambda)}, & i = k+1, k+2, \dots, n-1, \\[2ex]
\lambda c_n + \dfrac{(\lambda d_{n-1} - \bar{b}_{n-1})\, p^R_{n-1}(\lambda)}{p^R_n(\lambda)}, & i = n.
\end{cases}
\tag{3.13}
\]
This completes the reconstruction of the matrix $H_{[0,n]}$.

Remark 3.3. Since $\lambda$ and $\mu$ are zeros of $P_{n+1}(z)$, the assumptions for the determination of the matrix $H_{[0,n]}$ require that $P_j(z)$, $j = k-1, k, \dots, n$, do not vanish at $\lambda$ and $\mu$. However, we emphasize that the determination of each entry $b_j$ requires only that $P_{j-1}(z)$ and $P_j(z)$ do not share a common zero at $\lambda$ and $\mu$. This condition is often implicit, both in the direct and inverse problems, in the form of the requirement that the zeros of $P_{j-1}(z)$ and $P_j(z)$, or equivalently the eigenvalues of the corresponding pencil matrices, satisfy a separation property known as interlacing.

Corollary 3.4.
For $j = k, k+1, \dots, n-1$, $b_j$ is purely imaginary and equals $ih_j$ if
\[
\begin{vmatrix} p^L_j & p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix}^2
= \frac{2i\, \mu h_j \Delta_j}{\lambda (\lambda - \mu)\, d_j}\,
\frac{\begin{vmatrix} p^L_j & -p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix}}{\begin{vmatrix} s^L_j & -s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}},
\qquad
\begin{vmatrix} s^L_j & s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}^2
= \frac{2i\, \lambda h_j \Delta_j}{\mu (\lambda - \mu)\, d_j}\,
\frac{\begin{vmatrix} s^L_j & -s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}}{\begin{vmatrix} p^L_j & -p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix}}.
\tag{3.14}
\]

Proof. First, let us find the real and imaginary parts of $b_j$. Since $\alpha_j \notin \mathbb{R}$, we write $b_j = x_j + iy_j$ to obtain, from (3.11) and (3.12), the system of equations
\[
x_j + i\, \frac{p^L_j p^R_{j+1} + p^L_{j+1} p^R_j}{p^L_j p^R_{j+1} - p^L_{j+1} p^R_j}\, y_j = \lambda d_j,
\qquad
x_j + i\, \frac{s^L_j s^R_{j+1} + s^L_{j+1} s^R_j}{s^L_j s^R_{j+1} - s^L_{j+1} s^R_j}\, y_j = \mu d_j,
\]
which can be solved to yield
\[
x_j = \frac{d_j}{2\Delta_j} \left[ \mu \begin{vmatrix} p^L_j & -p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix} \begin{vmatrix} s^L_j & s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}
- \lambda \begin{vmatrix} s^L_j & -s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix} \begin{vmatrix} p^L_j & p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix} \right],
\qquad
y_j = \frac{(\lambda - \mu)\, d_j}{2i \Delta_j} \begin{vmatrix} p^L_j & p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix} \begin{vmatrix} s^L_j & s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}.
\]
The above (which can be noted to be in the form $x_j = A_j X_j - B_j Y_j$ and $y_j = C_j X_j Y_j$), solved further for $x_j = 0$ and $y_j = h_j$, gives the required relations (3.14). $\square$

If $\alpha_j \notin \mathbb{R}$, the proof of Lemma 3.1 implies that $y_j \neq 0$ and hence $b_j \notin \mathbb{R}$, $j = k, \dots, n-1$. Moreover, if $x_j = 0$, that is, if $\lambda$, $\mu$ satisfy
\[
\frac{\lambda}{\mu} = \frac{\begin{vmatrix} p^L_j & -p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix} \begin{vmatrix} s^L_j & s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix}}{\begin{vmatrix} s^L_j & -s^R_j \\ s^L_{j+1} & s^R_{j+1} \end{vmatrix} \begin{vmatrix} p^L_j & p^R_j \\ p^L_{j+1} & p^R_{j+1} \end{vmatrix}}, \quad j = k, k+1, \dots, n-1,
\tag{3.15}
\]
then $b_j = iy_j = i\epsilon_j d_j$, $\epsilon_j \neq 0$; that is, $b_j$ is a scalar multiple of $id_j$.

Remark 3.5.
The emphasis on $b_j$ being purely imaginary arises from a particular form of the pencil $(zJ_{[0,n]} - H_{[0,n]})$, where in fact $b_j = id_j$ and $c_j = 1$, $j = 0, 1, \dots, n$. As mentioned in Section 1, such pencils appear in analytic function theory, and a case has been made to call such pencils Wall pencils.

Further, as an inverse approach to these pencils, the relation (3.15) shows that the expression on the right hand side of (3.15) must be a constant and equal to the ratio of the given spectral points for $b_j$ to be at least purely imaginary. Hence, if we begin with $b_j = id_j$, $j = 0, \dots, k-1$, and $c_j = 1$, $j = 0, \dots, k$, appropriate conditions can be added to Corollary 3.4 so that $b_j$ is equal to $id_j$, $j = k, \dots, n-1$, and we obtain a Wall pencil.

The matrix $J_{[0,n]}$ in Wall pencils is positive-definite, while no such assumption has been made in the present manuscript. However, since the matrix $H_{[0,n]}$ has been reconstructed, let us have a look in this direction. Suppose the assumptions of Theorem 3.2 hold. We put $\lambda - \mu = h$ in (2.7) to get
\[
s^L_{[0,k]} J_{[0,k]} p^R_{[0,k]} = \frac{(b_k - \lambda d_k)\, s^L_k p^R_{k+1} - (\bar{b}_k - \mu d_k)\, s^L_{k+1} p^R_k}{h},
\tag{3.16}
\]
so that as $\lambda \to \mu$, or $h \to 0$, the left hand side becomes $(s^R_{[0,k]})^{*} J_{[0,k]} s^R_{[0,k]}$. If $\mu \neq \alpha_j$, $j = 0, 1, \dots, k$, then $s^R_{k+1}$ is finite at $\mu$, so that by Lemma 2.1, $s^R_{[0,k]}$ is a vector with finite components. By L'Hôpital's rule, (3.16) yields
\[
(s^R_{[0,k]})^{*} J_{[0,k]} s^R_{[0,k]} = (b_k - \mu d_k)\, s^L_k (s^R_{k+1})' - (\bar{b}_k - \mu d_k)\, s^L_{k+1} (s^R_k)' - d_k\, s^L_k s^R_{k+1},
\tag{3.17}
\]
so that if $J_{[0,k]}$ is a positive-definite matrix, the right hand side above is a positive quantity.

The essence of this observation is the following. For $\Delta_k \neq 0$ to hold, it is necessary that $\alpha_k \notin \mathbb{R}$. Since $\mu \in \mathbb{R}$, $\mu \neq \alpha_k$ follows trivially. Recalling that the set $\{\alpha_0, \alpha_1, \dots, \alpha_{k-1}\}$ is just a permutation of $\{b_0/d_0, b_1/d_1, \dots, b_{k-1}/d_{k-1}\}$, it follows that $\mu \neq b_j/d_j$, $j = 0, \dots, k-1$; that is, none of the zeros of $P_{n+1}(z) = \det(zJ_{[0,n]} - H_{[0,n]})$ should coincide with $b_j/d_j$, $j = 0, 1, \dots, k-1$.

4. A view with $m$-functions

In this section, we have a look at the reconstruction of the matrix $H_{[0,n]}$ through the concept of $m$-functions. For the general theory of $m$-functions arising in the context of orthogonal polynomials, we refer to [12], and for that in the context of linear pencils, we refer to [4]. But because of the problem under consideration, we will have use only of the representation (1.4) for a point outside the spectrum of the pencil.

In the present case, in addition to $p^R_m(z)$ as defined in (2.3), we will use the rational functions
\[
q^R_0(z) = 0, \qquad q^R_m(z) = \frac{Q_m(z)}{\prod_{j=0}^{m-1} (zd_j - b_j)}, \quad m = 1, \dots, n,
\]
where $Q_m(z)$ satisfy (2.1) with the initial conditions (1.3). A key role will be played by the relation
\[
P_{m+1}(z)\, Q_m(z) - P_m(z)\, Q_{m+1}(z) = \prod_{j=0}^{m-1} (zd_j - b_j)(zd_j - \bar{b}_j),
\tag{4.1}
\]
called the Liouville-Ostrogradsky formula, which follows by induction from the recurrence relation (2.1) along with the initial conditions (1.3).
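The Liouville-Ostrogradsky relation rests on the Wronskian-type telescoping $W_m = w_{m-1} W_{m-1}$, where $W_m = P_{m+1}Q_m - P_mQ_{m+1}$ and $w_j = (zd_j - b_j)(zd_j - \bar{b}_j)$, for any two solutions of (2.1). The sketch below verifies the telescoping with illustrative entries (assumed values, not from the text); the overall constant $W_0$, and hence the sign convention in (4.1), depends on the normalization chosen for $Q$.

```python
import numpy as np

n = 5
a = np.array([1.0, -0.5, 2.0, 0.3, -1.0, 0.8])
b = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 1j, 1 - 1j])
c = np.full(n + 1, 3.0)
d = np.array([1.0, -1.0, 0.5, 1.0, -0.5])
z = 0.7

beta = z * c - a                          # z c_m - a_m
w = (z * d - b) * (z * d - b.conj())      # (z d_j - b_j)(z d_j - conj(b_j))

# run the recurrence X_{m+1} = beta_m X_m - w_{m-1} X_{m-1} for P and Q
P = [1.0 + 0j, beta[0]]       # P_0 = 1, P_1 = z c_0 - a_0
Q = [0.0 + 0j, 1.0 + 0j]      # Q_0 = 0, Q_1 = 1 (a convention-dependent normalization)
for m in range(1, n + 1):
    P.append(beta[m] * P[m] - w[m - 1] * P[m - 1])
    Q.append(beta[m] * Q[m] - w[m - 1] * Q[m - 1])

# Wronskian-type quantity: W_m = w_{m-1} W_{m-1}, so W_m = W_0 * prod(w_0..w_{m-1})
W = [P[m + 1] * Q[m] - P[m] * Q[m + 1] for m in range(n + 1)]
```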
Using (4.1), the matrix representation of the bounded operator $(\omega J - H)^{-1}$, $\omega \in \rho(H, J)$, has been found in terms of $p^{R(L)}_m(z)$ and $q^{R(L)}_m(z)$ [4, Theorem 2.3]. The inverse of banded matrices has been studied, for instance, in [9], but we follow [4] to obtain a finite version, that is, the inverse of the pencil $zJ_{[0,n]} - H_{[0,n]}$.

Lemma 4.1.
Let us denote $m_j := m(\omega, j) - m(\omega, n+1)$. Then the inverse $R_{[0,n]}(\omega)$ of $(\omega J_{[0,n]} - H_{[0,n]})$, $\omega \in \rho(H_{[0,n]}, J_{[0,n]})$, is given by
\[
R_{[0,n]}(\omega) =
\begin{pmatrix}
p^L_0 m_0 p^R_0 & p^L_1 m_0 p^R_0 & p^L_2 m_0 p^R_0 & \cdots & p^L_n m_0 p^R_0 \\
p^L_0 m_0 p^R_1 & p^L_1 m_1 p^R_1 & p^L_2 m_1 p^R_1 & \cdots & p^L_n m_1 p^R_1 \\
p^L_0 m_0 p^R_2 & p^L_1 m_1 p^R_2 & p^L_2 m_2 p^R_2 & \cdots & p^L_n m_2 p^R_2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p^L_0 m_0 p^R_n & p^L_1 m_1 p^R_n & p^L_2 m_2 p^R_n & \cdots & p^L_n m_n p^R_n
\end{pmatrix}.
\]

Proof. Consider the $1 \times (n+1)$ vector $p^L_{[0,j]} := (p^L_0 \;\; p^L_1 \;\; \cdots \;\; p^L_j \;\; 0 \;\; \cdots \;\; 0)$ and similarly the vector $q^L_{[0,j]}$. Using (2.2) for the left eigenvector, we obtain
\[
p^L_{[0,j]} (\omega J_{[0,n]} - H_{[0,n]}) = -(\omega d_j - \bar{b}_j)\, p^L_{j+1}\, \vec{e}_j^{\,T} + (\omega d_j - b_j)\, p^L_j\, \vec{e}_{j+1}^{\,T},
\]
\[
q^L_{[0,j]} (\omega J_{[0,n]} - H_{[0,n]}) = \vec{e}_0^{\,T} - (\omega d_j - \bar{b}_j)\, q^L_{j+1}\, \vec{e}_j^{\,T} + (\omega d_j - b_j)\, q^L_j\, \vec{e}_{j+1}^{\,T},
\]
which, in view of (4.1), leads to
\[
\big[q^R_j\, p^L_{[0,j]} - p^R_j\, q^L_{[0,j]}\big] (\omega J_{[0,n]} - H_{[0,n]}) = \vec{e}_j^{\,T} - p^R_j\, \vec{e}_0^{\,T}.
\tag{4.2}
\]
The vectors $p^L_{[0,n]}$ and $q^L_{[0,n]}$ with the above computation yield
\[
\big[q^R_{n+1}\, p^L_{[0,n]} - p^R_{n+1}\, q^L_{[0,n]}\big] (\omega J_{[0,n]} - H_{[0,n]}) = -p^R_{n+1}\, \vec{e}_0^{\,T},
\]
which in terms of $m$-functions can also be written as
\[
\big[q^L_{[0,n]} - m(\omega, n+1)\, p^L_{[0,n]}\big] (\omega J_{[0,n]} - H_{[0,n]}) = \vec{e}_0^{\,T}.
\tag{4.3}
\]
Eliminating $\vec{e}_0^{\,T}$ between (4.2) and (4.3), we obtain
\[
p^R_j \big[q^L_{[0,n]} - m(\omega, n+1)\, p^L_{[0,n]} - q^L_{[0,j]} + m(\omega, j)\, p^L_{[0,j]}\big] (\omega J_{[0,n]} - H_{[0,n]}) = \vec{e}_j^{\,T},
\]
which upon further simplification gives the matrix $R_{[0,n]}(\omega)$. $\square$

In compact form, the $(i,j)$-th entry of $R_{[0,n]}(\omega)$ is given by $p^L_j\, m_{\min(i,j)}\, p^R_i$. Next, for $\omega \in \rho(H_{[0,n]}, J_{[0,n]})$, let us factorize
\[
R_{[0,n]}(\omega) = (\omega J_{[0,n]} - H_{[0,n]})^{-1} = L_{[0,n]}(\omega)\, D_{[0,n]}(\omega)\, U_{[0,n]}(\omega).
\tag{4.4}
\]
Then, owing to the Hermitian nature of the eigenvalue equation involved, we choose $U_{[0,n]}(\omega) = L^{*}_{[0,n]}(\omega)$, where
\[
L_{[0,n]}(\omega) =
\begin{pmatrix}
p^R_0 & & & & \\
p^R_1 & p^R_1 & & & \\
\vdots & \vdots & \ddots & & \\
p^R_{n-1} & p^R_{n-1} & \cdots & p^R_{n-1} & \\
p^R_n & p^R_n & \cdots & p^R_n & p^R_n
\end{pmatrix},
\]
and $D_{[0,n]}(\omega)$ is the diagonal matrix $\mathrm{diag}(d_0(\omega), d_1(\omega), \dots, d_n(\omega))$, where
\[
d_0(\omega) = m_0, \qquad d_j(\omega) = m_j - m_{j-1} = m(\omega, j) - m(\omega, j-1), \quad j = 1, 2, \dots, n.
\]
As a matter of verification, with $m_{-1} := 0$, we have on the right hand side of (4.4)
\[
i\text{-th row} \times j\text{-th column} = p^R_i \sum_{k=0}^{\min(i,j)} (m_k - m_{k-1})\, p^L_j = p^L_j\, m_{\min(i,j)}\, p^R_i.
\]
With this decomposition we can easily invert $R_{[0,n]}(\omega)$ again, so that $[R_{[0,n]}(\omega)]^{-1} = (\omega J_{[0,n]} - H_{[0,n]})$ will be a matrix in which the entries are given in terms of the $m$-functions. We illustrate this for the trailing submatrix $[\Psi_{[k+1,n]}(\omega)]^{-1}$.

Lemma 4.2.
Suppose $m(\omega, k+1) \neq m(\omega, n+1)$ and $m(\omega, i) \neq m(\omega, i-1)$, $i = k+2, \dots, n$. The entries of the inverse of the trailing sub-matrix $\Psi_{[k+1,n]}(\omega)$ are given by
\[
[\Psi_{[k+1,n]}(\omega)]^{-1}_{i,j} =
\begin{cases}
\dfrac{1}{p^L_i [m(\omega,i) - m(\omega,i-1)]\, p^R_i} + \dfrac{1}{p^L_j [m(\omega,j+1) - m(\omega,j)]\, p^R_j}, & i = j; \\[2ex]
-\dfrac{1}{p^L_i [m(\omega,j) - m(\omega,i)]\, p^R_j}, & i < j; \\[2ex]
0, & |i - j| > 1,
\end{cases}
\tag{4.5}
\]
for $i, j = k+2, k+3, \dots, n-1$, while
\[
[\Psi_{[k+1,n]}(\omega)]^{-1}_{i,j} =
\begin{cases}
\dfrac{1}{p^L_i [m(\omega,i) - m(\omega,n+1)]\, p^R_i} + \dfrac{1}{p^L_j [m(\omega,j+1) - m(\omega,j)]\, p^R_j}, & i = j = k+1; \\[2ex]
\dfrac{1}{p^L_i [m(\omega,i) - m(\omega,i-1)]\, p^R_i}, & i = j = n.
\end{cases}
\tag{4.6}
\]

Proof.
We start with the decomposition $R_{[k+1,n]}(\omega) = L_{[k+1,n]}(\omega)\, D_{[k+1,n]}(\omega)\, U_{[k+1,n]}(\omega)$, where $U_{[k+1,n]}(\omega) = L^{*}_{[k+1,n]}(\omega)$ with
$$
L_{[k+1,n]}(\omega) =
\begin{pmatrix}
p^R_{k+1} & & & & \\
p^R_{k+2} & p^R_{k+2} & & & \\
\vdots & & \ddots & & \\
p^R_{n-1} & p^R_{n-1} & \cdots & p^R_{n-1} & \\
p^R_n & p^R_n & \cdots & p^R_n & p^R_n
\end{pmatrix},
$$
and $D_{[k+1,n]} = \mathrm{diag}\{d_{k+1}, d_{k+2}, \cdots, d_n\}$ given by
$$
d_{k+1}(\omega) = m_{k+1} = m(\omega, k+1) - m(\omega, n+1), \qquad
d_j(\omega) = m_j - m_{j-1} = m(\omega, j) - m(\omega, j-1), \quad j = k+2, k+3, \cdots, n.
$$
It can be easily verified that
$$
[L_{[k+1,n]}(\omega)]^{-1} =
\begin{pmatrix}
1/p^R_{k+1} & & & \\
-1/p^R_{k+1} & 1/p^R_{k+2} & & \\
& \ddots & \ddots & \\
& & -1/p^R_{n-1} & 1/p^R_n
\end{pmatrix},
$$
and $[U_{[k+1,n]}(\omega)]^{-1} = [L^{*}_{[k+1,n]}(\omega)]^{-1}$. Then
$$
[R_{[k+1,n]}(\omega)]^{-1} = [U_{[k+1,n]}(\omega)]^{-1} [D_{[k+1,n]}(\omega)]^{-1} [L_{[k+1,n]}(\omega)]^{-1}
$$
gives the required entries (4.5) and (4.6). $\square$

We are now ready to view the entries of $H_{[0,n]}$ in terms of $m$-functions. Let us denote
$$
m_{ji}(\omega) = \frac{1}{p^L_i [m(\omega, j) - m(\omega, i)]\, p^R_i}
\qquad \text{and} \qquad
\widehat{m}_{ji}(\omega) = \frac{1}{p^L_i [m(\omega, j) - m(\omega, i)]\, p^R_j}.
$$

Theorem 4.3.
Suppose the $m$-functions $m(\omega, j)$ of the pencil $(\omega J_{[0,j]} - H_{[0,j]})$ are known and satisfy the assumptions of Lemma 4.2 for $j = k+1, k+2, \cdots, n, n+1$. If $\omega \in \rho(H_{[0,k]}, J_{[0,k]})$, then the matrix $H_{[0,n]}$ can be reconstructed with the entries given by
$$
b_j = \omega d_j + \widehat{m}_{j\,j+1}(\omega), \qquad j = k+1, k+2, \cdots, n-1, \tag{4.7}
$$
and
$$
a_j =
\begin{cases}
\omega c_j - m_{j+1\,j}(\omega) + m_{n+1\,j}(\omega) + |\omega d_k - b_k|^2\, m_{j\,j-1}(\omega), & j = k+1; \\
\omega c_j + m_{j-1\,j}(\omega) - m_{j+1\,j}(\omega), & j = k+2, \cdots, n-1; \\
\omega c_j + m_{j-1\,j}(\omega), & j = n.
\end{cases} \tag{4.8}
$$

Proof.
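As an opening sanity check for the argument below, the block-inversion identity (4.9), and with it the relation (4.11) that it yields for the trailing block, can be verified numerically. The sketch uses a random Hermitian block matrix as a stand-in for the partitioned pencil of (4.10); the single corner coupling entry imitates the structure assumed here for $O_\omega$ from (3.2), and all numerical values are hypothetical, not the paper's matrices.

```python
import numpy as np

rng = np.random.default_rng(7)
kk, nn = 3, 5          # sizes of the leading and trailing blocks (hypothetical)

# Random Hermitian stand-ins for omega*J_[0,k] - H_[0,k] and
# omega*J_[k+1,n] - H_[k+1,n]; the +10*I shift keeps both blocks invertible.
A = rng.normal(size=(kk, kk)) + 1j * rng.normal(size=(kk, kk))
A = A + A.conj().T + 10 * np.eye(kk)
Dm = rng.normal(size=(nn, nn)) + 1j * rng.normal(size=(nn, nn))
Dm = Dm + Dm.conj().T + 10 * np.eye(nn)

# Coupling block with a single corner entry, imitating the assumed O_omega.
B = np.zeros((kk, nn), dtype=complex)
B[-1, 0] = 0.9 - 0.4j

M = np.block([[A, B], [B.conj().T, Dm]])

# The trailing block of M^{-1} is the inverse of the Schur complement
# D - B* A^{-1} B, so D = Psi^{-1} + B* A^{-1} B, which is identity (4.11).
Psi = np.linalg.inv(M)[kk:, kk:]
corr = B.conj().T @ np.linalg.inv(A) @ B
assert np.allclose(Dm, np.linalg.inv(Psi) + corr)

# The correction term is rank one and supported in a single entry,
# matching the e e^T structure appearing in the proof.
assert np.count_nonzero(np.abs(corr) > 1e-12) == 1
print("identity (4.11) verified numerically")
```

Since the coupling block has only one nonzero entry, the correction $O^*_\omega(\omega J_{[0,k]} - H_{[0,k]})^{-1} O_\omega$ perturbs a single diagonal entry, which is why only $a_{k+1}$ in (4.8) carries the extra terms.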
We use the following representation of the inverse obtained through the concept of Schur's complement [5]:
$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} =
\begin{pmatrix} I & -A^{-1}B \\ 0 & I \end{pmatrix}
\begin{pmatrix} A^{-1} & 0 \\ 0 & [D - CA^{-1}B]^{-1} \end{pmatrix}
\begin{pmatrix} I & 0 \\ -CA^{-1} & I \end{pmatrix}, \tag{4.9}
$$
where $I$ is the identity matrix of appropriate order. Since $\Psi_{[k+1,n]}(\omega)$ is the trailing submatrix of the inverse of the pencil, we also have
$$
\begin{pmatrix} \omega J_{[0,k]} - H_{[0,k]} & O_\omega \\ O^{*}_\omega & \omega J_{[k+1,n]} - H_{[k+1,n]} \end{pmatrix}^{-1} =
\begin{pmatrix} \Psi_{[0,k]}(\omega) & * \\ * & \Psi_{[k+1,n]}(\omega) \end{pmatrix}. \tag{4.10}
$$
Since $\omega \in \rho(H_{[0,k]}, J_{[0,k]})$, we can substitute $A = \omega J_{[0,k]} - H_{[0,k]}$, $B = O_\omega$ given by (3.2), $C = O^{*}_\omega$, and $D = \omega J_{[k+1,n]} - H_{[k+1,n]}$. Then, comparing the respective blocks of (4.9) and (4.10), we get
$$
(\omega J_{[k+1,n]} - H_{[k+1,n]}) = [\Psi_{[k+1,n]}(\omega)]^{-1} + O^{*}_\omega (\omega J_{[0,k]} - H_{[0,k]})^{-1} O_\omega. \tag{4.11}
$$
We use Lemma 4.1 to obtain the inverse $R_{[0,k]}$ of $\omega J_{[0,k]} - H_{[0,k]}$, so that
$$
O^{*}_\omega [\omega J_{[0,k]} - H_{[0,k]}]^{-1} O_\omega = p^L_k [m(\omega, k) - m(\omega, k+1)]\, p^R_k\, |\omega d_k - b_k|^2\, \vec{e}\,\vec{e}^{\,T},
$$
where $\vec{e}$ is the first standard basis vector of $\mathbb{R}^{n-k}$. The entries of $[\Psi_{[k+1,n]}]^{-1}$ obtained from (4.5) and (4.6) and used in (4.11) yield the required expressions (4.7) and (4.8). $\square$

It may be observed that the rational functions $q^{R(L)}_j(\omega)$ are only intermediary, since the final expressions for the entries depend on the $m$-functions defined by (1.4). As opposed to (3.13), the $a_j$'s in the present case, except for $a_{k+1}$, are determined independently of the $b_j$'s, while for $a_{k+1}$, in addition to $b_k$, we need prior information on $m(\omega, n+1)$. Finally, the results of the present section can themselves be seen as a sort of inverse problem of reconstructing the matrix $H_{[0,n]}$ from the knowledge of $m$-functions, components of an eigenvector, and a point in the resolvent set of the linear pencil.

References

[1] A. I. Aptekarev, V. Kaliaguine and W. Van Assche, Criterion for the resolvent set of nonsymmetric tridiagonal operators, Proc. Amer. Math. Soc. (1995), no. 8, 2423–2430.
[2] G. A. Baker, Jr. and P. Graves-Morris,
Padé approximants. Part I, Encyclopedia of Mathematics and its Applications, 13, Addison-Wesley Publishing Co., Reading, MA, 1981.
[3] B. Beckermann and V. Kaliaguine, The diagonal of the Padé table and the approximation of the Weyl function of second order difference operators, Constr. Approx. 13 (1997), 481–510.
[4] B. Beckermann, M. Derevyagin and A. Zhedanov, The linear pencil approach to rational interpolation, J. Approx. Theory (2010), no. 6, 1322–1346.
[5] D. Carlson, What are Schur complements, anyway?, Linear Algebra Appl. (1986), 257–275.
[6] M. T. Chu, Inverse eigenvalue problems, SIAM Rev. (1998), no. 1, 1–39.
[7] M. Derevyagin, A note on Wall's modification of the Schur algorithm and linear pencils of Jacobi matrices, J. Approx. Theory (2017), 1–21.
[8] M. E. H. Ismail and A. Sri Ranga, $R_{II}$ type recurrence, generalized eigenvalue problem and orthogonal polynomials on the unit circle, Linear Algebra Appl. (2019), 63–90.
[9] E. Kılıç and P. Stanica, The inverse of banded matrices, J. Comput. Appl. Math. (2013), no. 1, 126–135.
[10] P. Lancaster and Q. Ye, Inverse spectral problems for linear and quadratic matrix pencils, Linear Algebra Appl. (1988), 293–309.
[11] M. Sen and D. Sharma, Generalized inverse eigenvalue problem for matrices whose graph is a path, Linear Algebra Appl. (2014), 224–236.
[12] B. Simon, Orthogonal polynomials on the unit circle. Part 1, American Mathematical Society Colloquium Publications, 54, Part 1, American Mathematical Society, Providence, RI, 2005.
[13] H. S. Wall,
Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., New York, NY, 1948.
[14] Y.-X. Yuan and H. Dai, A generalized inverse eigenvalue problem in structural dynamic model updating, J. Comput. Appl. Math. (2009), no. 1, 42–49.
[15] H. Zhang and Y. Yuan, Generalized inverse eigenvalue problems for Hermitian and J-Hamiltonian/skew-Hamiltonian matrices, Appl. Math. Comput. (2019), 609–616.
[16] A. Zhedanov, Biorthogonal rational functions and the generalized eigenvalue problem, J. Approx. Theory (1999), no. 2, 303–329.
Department of Mathematics, Indian Institute of Science, Bangalore, India
E-mail address : [email protected]@iisc.ac.in