Inverse problems for Jacobi operators III: Mass-spring perturbations of semi-infinite systems
Rafael del Rio
Departamento de Métodos Matemáticos y Numéricos, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, C.P. 04510, México D.F. [email protected]
Mikhail Kudryavtsev
Department of Mathematics, Institute for Low Temperature Physics and Engineering, Lenin Av. 47, 61103 Kharkov, Ukraine [email protected]
Luis O. Silva∗
Departamento de Métodos Matemáticos y Numéricos, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, C.P. 04510, México D.F. [email protected]
Abstract
Consider an infinite linear mass-spring system and a modification of it obtained by changing the first mass and spring of the system. We give results on the interplay of the spectra of such systems and on the reconstruction of the system from its spectrum and the one of the modified system. Furthermore, we provide necessary and sufficient conditions for two sequences to be the spectra of the mass-spring system and the perturbed one.
Mathematics Subject Classification (2010): 34K29, 47A75, 47B36, 70F17.
Keywords: Infinite mass-spring system; Jacobi matrices; Two-spectra inverse problem.
∗ Partially supported by CONACYT (México) through grant CB-2008-01-99100.

1. Introduction
Let $l_{\mathrm{fin}}(\mathbb{N})$ be the linear space of complex sequences with a finite number of non-zero elements. In the Hilbert space $l_2(\mathbb{N})$, consider the operator $J$ defined for every $f=\{f_k\}_{k=1}^{\infty}$ in $l_{\mathrm{fin}}(\mathbb{N})$ by
$$(Jf)_1 := q_1 f_1 + b_1 f_2, \tag{1.1}$$
$$(Jf)_k := b_{k-1} f_{k-1} + q_k f_k + b_k f_{k+1}, \qquad k \in \mathbb{N}\setminus\{1\}, \tag{1.2}$$
where $q_n \in \mathbb{R}$ and $b_n > 0$ for all $n \in \mathbb{N}$. The operator $J$ is symmetric and has deficiency indices $(1,1)$ or $(0,0)$ [1, Chap. 4, Sec. 1.2]. Fix a self-adjoint extension of $J$ and denote it by $J$ as well. According to the definition of the matrix representation for an unbounded symmetric operator [2, Sec. 47], $J$ is the operator whose matrix representation with respect to the canonical basis $\{\delta_n\}_{n=1}^{\infty}$ in $l_2(\mathbb{N})$ is
$$\begin{pmatrix} q_1 & b_1 & 0 & \cdots \\ b_1 & q_2 & b_2 & \cdots \\ 0 & b_2 & q_3 & b_3 \\ \vdots & & \ddots & \ddots \end{pmatrix}. \tag{1.3}$$
Along with $J$, we consider the operator
$$\widetilde{J} = J + \big[q_1(\theta^2-1) + \theta^2 h\big]\langle\delta_1,\cdot\rangle\delta_1 + b_1(\theta-1)\big(\langle\delta_2,\cdot\rangle\delta_1 + \langle\delta_1,\cdot\rangle\delta_2\big), \qquad \theta>0,\ h\in\mathbb{R}, \tag{1.4}$$
which is a self-adjoint extension of the operator whose matrix representation with respect to the canonical basis in $l_2(\mathbb{N})$ is
$$\begin{pmatrix} \theta^2(q_1+h) & \theta b_1 & 0 & \cdots \\ \theta b_1 & q_2 & b_2 & \cdots \\ 0 & b_2 & q_3 & b_3 \\ \vdots & & \ddots & \ddots \end{pmatrix}. \tag{1.5}$$
Note that $\widetilde{J}$ is obtained from $J$ by a particular kind of rank-two perturbation. Under the assumption that $J$ has discrete spectrum (as explained in Section 2, this is always the case when $J$ has deficiency indices $(1,1)$), we study the two-spectra inverse problem of recovering, from the spectra of $J$ and $\widetilde{J}$, the matrix (1.3) and the "boundary condition at infinity" defining the self-adjoint extension $J$ if necessary (i.e. if $J$ is not essentially self-adjoint, cf. [8, Sec. 2]). To solve this inverse problem, one should elucidate the distribution of the perturbed spectrum relative to the unperturbed one and determine the necessary input data for recovering the matrix. An important point to note is that this work provides necessary and sufficient conditions for two sequences to be the spectra of $J$ and $\widetilde{J}$. Also, we discuss (the lack of) uniqueness of the reconstruction.

Although the two-spectra inverse problem for the rank-one perturbation family of Jacobi operators has been thoroughly studied (see for instance [9, 12, 16, 20] and [5, 6, 10, 11] for the case of finite matrices), there is scarce literature dealing with inverse problems for other kinds of perturbations (cf. [8]).

The motivation for this work is the inverse spectral problem studied in [15] and [7], which is in its turn related to the physical problem of measuring micro-masses with the help of micro-cantilevers [18, 19]. Micro-cantilevers are modeled by spring-mass systems whose masses and spring constants are determined by the mechanical parameters of the micro-cantilevers.

In this work we consider the semi-infinite mass-spring system given in Fig. 1, with masses $\{m_j\}_{j=1}^{\infty}$ and spring constants $\{k_j\}_{j=1}^{\infty}$. This system is modeled by the Jacobi matrix (1.3) with
$$q_j = -\frac{k_{j+1}+k_j}{m_j}, \qquad b_j = \frac{k_{j+1}}{\sqrt{m_j m_{j+1}}}, \qquad j\in\mathbb{N}.$$
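The conversion from mechanical parameters to Jacobi-matrix entries can be sketched numerically for a finite truncation of the chain (the code and helper names below are ours, not from the paper):

```python
import numpy as np

def jacobi_parameters(masses, springs):
    """Convert masses m_1..m_N and spring constants k_1..k_{N+1} into the
    Jacobi entries q_j = -(k_j + k_{j+1})/m_j and
    b_j = k_{j+1}/sqrt(m_j m_{j+1}) for a finite truncation."""
    m = np.asarray(masses, dtype=float)
    k = np.asarray(springs, dtype=float)
    n = len(m)
    q = -(k[:n] + k[1:n + 1]) / m
    b = k[1:n] / np.sqrt(m[:-1] * m[1:])
    return q, b

def jacobi_matrix(q, b):
    """Assemble the symmetric tridiagonal matrix of the form (1.3)."""
    return np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

# three masses, four springs (the truncation keeps k_{N+1} only through q_N)
q, b = jacobi_parameters([1.0, 2.0, 1.5], [3.0, 1.0, 2.0, 1.0])
J = jacobi_matrix(q, b)
assert np.allclose(J, J.T)     # the matrix is symmetric with b_j > 0
```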
In [11, 14] it is explained how to deduce these formulae. Since $J$ is considered to have discrete spectrum, the movement of the system is a superposition of harmonic oscillations whose frequencies are the square roots of the moduli of the eigenvalues.

Figure 1: Semi-infinite mass-spring system (masses $m_1, m_2, m_3, \dots$ joined by springs $k_1, k_2, k_3, \dots$).

The modified mass-spring system corresponding to the perturbed operator $\widetilde{J}$ is obtained by changing the first mass by $\Delta m = m_1(\theta^{-2}-1)$ and the first spring by $\Delta k_1 = -h m_1$ (see Fig. 2). Here we also consider negative values of $\Delta m$ and $\Delta k_1$, which correspond to $\theta>1$ and $h>0$, respectively.

Figure 2: Perturbed semi-infinite mass-spring system (first mass changed by $\Delta m$, first spring by $\Delta k_1$).

Note that the perturbation involved here is the result of the combined effect of a rank-one perturbation (studied thoroughly in [16]) and the particular rank-two perturbation studied in [8]. However, most of the results obtained here cannot be derived from the results in [16] and [8], and require their own proofs. Moreover, it turns out that one can single out classes of isospectral operators within the two-parameter perturbation family considered in this work that were not studied before.

The paper is organized as follows. In Section 2 we fix the notation, lay down a convention for enumerating sequences and recall some results of the inverse spectral theory for Jacobi operators. Section 3 gives a detailed spectral analysis of the family of perturbed Jacobi operators. The solution of the two-spectra inverse problem for $J$ and $\widetilde{J}$ is given in Section 4. This section also discusses the non-uniqueness of the reconstruction and gives some characterization of isospectral operators in the perturbation family under consideration.
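Changing the first mass and spring as above should reproduce, for a finite truncation, the matrix (1.5); moreover, the sum of the eigenvalue shifts then equals the trace of the perturbation, $q_1(\theta^2-1)+\theta^2 h$, the finite-dimensional analogue of the series identity of Proposition 3.3. A numerical sketch (ours, NumPy; the helper mirrors the formulas above):

```python
import numpy as np

def jacobi_from_masses(masses, springs):
    # q_j = -(k_j + k_{j+1})/m_j,  b_j = k_{j+1}/sqrt(m_j m_{j+1})
    m, k = np.asarray(masses, float), np.asarray(springs, float)
    n = len(m)
    q = -(k[:n] + k[1:n + 1]) / m
    b = k[1:n] / np.sqrt(m[:-1] * m[1:])
    return np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

m = [1.0, 2.0, 1.5, 1.2]
k = [3.0, 1.0, 2.0, 1.0, 0.5]
theta, h = 1.3, -0.4                      # perturbation parameters
dm = m[0] * (theta ** -2 - 1)             # change of the first mass
dk = -h * m[0]                            # change of the first spring

J = jacobi_from_masses(m, k)
Jt = jacobi_from_masses([m[0] + dm] + m[1:], [k[0] + dk] + k[1:])

# Jt must coincide with (1.5): first diagonal entry theta^2 (q_1 + h),
# first off-diagonal entry theta * b_1, all other entries unchanged
J15 = J.copy()
J15[0, 0] = theta ** 2 * (J[0, 0] + h)
J15[0, 1] = J15[1, 0] = theta * J[0, 1]
assert np.allclose(Jt, J15)

# finite-dimensional analogue of the trace-type identity of Proposition 3.3
assert np.isclose(Jt.trace() - J.trace(),
                  theta ** 2 * h + J[0, 0] * (theta ** 2 - 1))
```

For finite matrices both checks are exact identities, since the perturbation touches only the first row and column.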
2. A review on inverse spectral theory for Jacobi operators
Let us denote by $\sigma(J)$ the spectrum of $J$ and consider the resolution of the identity $E$ for $J$ given by the spectral theorem. The spectral function $\rho$ of $J$ is defined by
$$\rho(t) := \langle\delta_1, E(t)\delta_1\rangle. \tag{2.1}$$
All the moments of $\rho$ exist [1, Thm. 4.1.3], that is, for all $k\in\mathbb{N}\cup\{0\}$,
$$s_k = \int_{\mathbb{R}} t^k\, d\rho(t) \in \mathbb{R}.$$
Moreover, since $J$ turns out to be simple with $\delta_1$ being a cyclic vector, the operator of multiplication by the independent variable in $L_2(\mathbb{R},\rho)$ (defined on the maximal domain) is unitarily equivalent to $J$.

Alongside the spectral function we consider the corresponding Weyl $m$-function, given by
$$m(\zeta) := \big\langle\delta_1, (J-\zeta I)^{-1}\delta_1\big\rangle = \int_{\mathbb{R}}\frac{d\rho(t)}{t-\zeta}, \qquad \zeta\notin\sigma(J). \tag{2.2}$$
By means of the inverse Stieltjes transform one uniquely recovers $\rho$ from $m$, so $\rho$ and $m$ are in one-to-one correspondence.

The Weyl $m$-function has the asymptotic behavior
$$m(\zeta) = -\frac{1}{\zeta} - \frac{q_1}{\zeta^2} - \frac{b_1^2+q_1^2}{\zeta^3} + O(\zeta^{-4}) \tag{2.3}$$
as $\zeta\to\infty$ with $\operatorname{Im}\zeta\ge\epsilon$, $\epsilon>0$. The inverse spectral analysis of $J$ is centered on the fact that the Weyl $m$-function (or, equivalently, $\rho$) uniquely determines the matrix (1.3) and the boundary condition at infinity that defines the self-adjoint extension if necessary. Indeed, for recovering the matrix one may use a method based on a discrete Riccati equation (see [10, Eq. 2.15], [20, Eq. 2.23]) or the method of orthonormalization of the polynomial sequence $\{t^k\}_{k=0}^{\infty}$ in $L_2(\mathbb{R},\rho)$ [3, Chap. 7, Sec. 1.5]. If (1.3) is the matrix representation of a non-self-adjoint operator, then the condition at infinity may be found by the method exposed in [16, Sec. 2].

In this work we restrict our considerations to the case of $\sigma(J)$ being discrete, viz. $\sigma_{\mathrm{ess}}(J)=\emptyset$. It is well known that this is always the case when $J$ is not essentially self-adjoint [17, Thm. 4.11], [21, Lem. 2.19]. The discreteness of $\sigma(J)$ implies that (2.1) can be written as
$$\rho(t) = \sum_{\lambda_k\le t}\alpha_k^{-1}, \tag{2.4}$$
where the jump $\alpha_k^{-1}$ of $\rho$ at the eigenvalue $\lambda_k$ satisfies
$$\alpha_k^{-1} = -\operatorname*{Res}_{\zeta=\lambda_k} m(\zeta); \tag{2.5}$$
the numbers $\alpha_k$ are the normalizing constants of $J$. Here and below, $J_T$ denotes the Jacobi operator in $\delta_1^{\perp}$ obtained from $J$ by deleting the first row and column of its matrix representation.

Proposition 2.1. Under the assumption that $\sigma(J)$ is discrete, $\sigma(J)$ and $\sigma(J_T)$ interlace.
Moreover, $\sigma(J)$ coincides with the set of poles of the function $m$ and $\sigma(J_T)$ is the set of its zeros.

Proof. Clearly, one should only establish that the zeros and poles of $m$ are as stated in the proposition. But this is a straightforward conclusion from the definition of the Weyl $m$-function and the formula
$$b_1^2\, m_T(\zeta) = q_1 - \zeta - \frac{1}{m(\zeta)}, \tag{2.6}$$
where $m_T$ is the Weyl $m$-function corresponding to $J_T$. Equation (2.6) is a particular case of [10, Eq. 2.15] or [20, Eq. 2.23].

(C1) Convention for enumerating a sequence. Let $S$ be an infinite countable set of real numbers without finite points of accumulation and $M$ an infinite subset of consecutive integers such that there is a strictly increasing function $f\colon M\to S$ (onto) with $f(k)=0$ only if $k=0$. We write $S=\{\lambda_k\}_{k\in M}$, where $\lambda_k=f(k)$. Note that $M$ is semi-bounded from above (below) if and only if the same holds for $S$, and that in $\{\lambda_k\}_{k\in M}$ only $\lambda_0$ is allowed to be zero.

Remark 1. Clearly, if two real sequences $S$, $S'$ without finite accumulation points interlace, then one can always find $M$ and functions $f\colon M\to S$ and $f'\colon M\to S'$ with the properties given in our convention (C1) such that, for any $k\in M$, either
$$\lambda_k < \lambda'_k < \lambda_{k+1} \qquad\text{or}\qquad \lambda'_k < \lambda_k < \lambda'_{k+1},$$
where $\lambda_k=f(k)$ and $\lambda'_k=f'(k)$. If $S$ is not semi-bounded, then both possibilities hold simultaneously.

The proof of the following proposition can be found in [8, Lem. 4.1] and [16, Sec. 4]; the starting point for it is [13, Chap. 7, Thm. 1].

Proposition 2.2. Let $J$ have discrete spectrum and assume that $\sigma(J)=\{\lambda_k\}_{k\in M}$ and $\sigma(J_T)=\{\eta_k\}_{k\in M}$. Then
$$m(\zeta) = C\,\frac{\zeta-\eta_0}{\zeta-\lambda_0}\prod_{\substack{k\in M\\ k\ne 0}}\left(1-\frac{\zeta}{\eta_k}\right)\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}. \tag{2.7}$$
Moreover,
$$C<0 \quad\text{and}\quad \eta_k < \lambda_k < \eta_{k+1}, \qquad \forall k\in M, \tag{2.8}$$
if $\sigma(J)$ is semi-bounded from above, while
$$C>0 \quad\text{and}\quad \lambda_k < \eta_k < \lambda_{k+1}, \qquad \forall k\in M, \tag{2.9}$$
otherwise.

3. Direct spectral analysis for $J$ and $\widetilde{J}$

Let $J$ and $\widetilde{J}$ be the operators defined in the Introduction.
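Before analyzing the pair $(J,\widetilde{J})$, note that the interlacing of Proposition 2.1, and the fact that the eigenvalues of $J_T$ are the zeros of the Weyl $m$-function, can be observed directly on finite truncations. The following numerical sketch is ours, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
q = rng.normal(size=n)
b = rng.uniform(0.5, 2.0, size=n - 1)            # off-diagonal entries b_j > 0
J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)
JT = J[1:, 1:]                                   # J_T: first row and column deleted

lam = np.linalg.eigvalsh(J)                      # sigma(J): poles of m
eta = np.linalg.eigvalsh(JT)                     # sigma(J_T): zeros of m

# strict interlacing  lam_1 < eta_1 < lam_2 < ... < eta_{n-1} < lam_n
assert all(lam[i] < eta[i] < lam[i + 1] for i in range(n - 1))

def weyl_m(z):
    """m(z) = <delta_1, (J - z I)^{-1} delta_1> for the finite truncation."""
    return np.linalg.inv(J - z * np.eye(n))[0, 0]

assert abs(weyl_m(eta[0])) < 1e-8                # the eta's are zeros of m
```

Strictness of the interlacing holds here because a tridiagonal matrix with nonzero off-diagonal entries has simple eigenvalues.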
Since $J_T = \widetilde{J}_T$, where $\widetilde{J}_T$ is the operator in the space $\delta_1^{\perp}$ obtained by restricting $\widetilde{J}$ to $\operatorname{dom}(\widetilde{J})\cap\delta_1^{\perp}$, one obtains from (2.6) that
$$\theta^2\left(\zeta + \frac{1}{m(\zeta)} + h\right) = \zeta + \frac{1}{\widetilde{m}(\zeta)}, \tag{3.1}$$
where $\widetilde{m}$ is the Weyl $m$-function corresponding to $\widetilde{J}$. Let us define the function
$$\mathfrak{m}(\zeta) := \frac{m(\zeta)}{\widetilde{m}(\zeta)}. \tag{3.2}$$
Immediately from (3.1) one proves the following proposition. Prior to stating it, in order to simplify the writing of some expressions, let us introduce a constant that will be used recurrently throughout the paper:
$$\gamma := \frac{\theta^2 h}{1-\theta^2}. \tag{3.3}$$

Proposition 3.1. Consider the Jacobi operator $J$ and the operator $\widetilde{J}$ as given in (1.4) with $\theta\ne 1$. If $J$ has discrete spectrum, then
i) the set of poles of $\mathfrak{m}$ is a subset of $\sigma(J)$ and the set of its zeros is contained in $\sigma(\widetilde{J})$;
ii) $\gamma\in\sigma(J)$ if and only if $\gamma\in\sigma(\widetilde{J})$;
iii) the sets $\sigma(J)$ and $\sigma(\widetilde{J})$ can intersect only at $\gamma$.

The following alternative expression for $\mathfrak{m}$,
$$\mathfrak{m}(\zeta) = (\theta^2-1)(\zeta-\gamma)\,m(\zeta) + \theta^2, \tag{3.4}$$
which is obtained by combining (3.1) and (3.2), is the main ingredient in the proof of the following proposition.

Proposition 3.2. Consider the Jacobi operator $J$ and the operator $\widetilde{J}$ as given in (1.4) with $\theta\ne 1$. If $J$ has discrete spectrum, then the spectra $\sigma(J)$, $\sigma(\widetilde{J})$ interlace in the intervals $(\gamma,+\infty)$ and $(-\infty,\gamma)$. Moreover, $\sigma(\widetilde{J})$ in the interval $(\gamma,+\infty)$, respectively $(-\infty,\gamma)$, is shifted with respect to $\sigma(J)$ to the left, respectively right, when $\theta<1$, and to the right, respectively left, when $\theta>1$.

Remark 2. The set $\sigma(J)\cap(\gamma,+\infty)$, respectively $\sigma(J)\cap(-\infty,\gamma)$, may be empty, and then there is no spectrum of $\widetilde{J}$ in $(\gamma,+\infty)$, respectively $(-\infty,\gamma)$. If $\lambda$ is the only element in $\sigma(J)\cap(\gamma,+\infty)$, respectively $\sigma(J)\cap(-\infty,\gamma)$, then there is exactly one element of $\sigma(\widetilde{J})$ in $(\gamma,+\infty)$, respectively $(-\infty,\gamma)$.

Proof. Let us first prove that between two contiguous eigenvalues of $J$ there is exactly one eigenvalue of $\widetilde{J}$.
Assume that $\theta>1$ and let $\lambda$, $\widehat{\lambda}$ be contiguous eigenvalues of $J$ such that $\gamma<\lambda<\widehat{\lambda}$. Then, by (2.5) and (3.4), one has
$$\lim_{\substack{t\to\widehat{\lambda}-\\ t\in\mathbb{R}}}\mathfrak{m}(t) = +\infty, \qquad \lim_{\substack{t\to\lambda+\\ t\in\mathbb{R}}}\mathfrak{m}(t) = -\infty.$$
The function $\mathfrak{m}\!\restriction_{\mathbb{R}}$ should therefore cross the $0$-axis in $(\lambda,\widehat{\lambda})$ an odd number of times. Actually, it crosses the $0$-axis only once. Indeed, if one assumes that $\mathfrak{m}\!\restriction_{\mathbb{R}}$ crosses the $0$-axis three or more times as in Fig. 3 (a), then, in view of Propositions 2.1 and 3.1, there would be at least two elements of $\sigma(J_T)$ in $(\lambda,\widehat{\lambda})$. Note that one crossing of the $0$-axis together with a tangential touch of it as in Fig. 3 (b) and (c) is also impossible, since the poles of $\widetilde{m}$ are simple. Analogously, between two contiguous eigenvalues of $\widetilde{J}$, $1/\mathfrak{m}\!\restriction_{\mathbb{R}}$ crosses the $0$-axis exactly once. Thus, by means of Proposition 3.1, the interlacing of $\sigma(J)$ and $\sigma(\widetilde{J})$ in $(\gamma,+\infty)$ has been proven.

Figure 3: Impossible crossings of the $0$-axis by $\mathfrak{m}$ (panels (a), (b), (c)).

When $\theta<1$, one has
$$\lim_{\substack{t\to\widehat{\lambda}-\\ t\in\mathbb{R}}}\mathfrak{m}(t) = -\infty, \qquad \lim_{\substack{t\to\lambda+\\ t\in\mathbb{R}}}\mathfrak{m}(t) = +\infty,$$
and by the same reasoning used above the interlacing of the spectra in $(\gamma,+\infty)$ is established. The interlacing in $(-\infty,\gamma)$ is proven analogously.

Let us now prove the second assertion of the proposition. To this end suppose first that $\gamma\notin\sigma(J)$ and observe that, under this assumption, (3.4) implies that
$$\mathfrak{m}(\gamma) = \theta^2. \tag{3.5}$$
Let us now assume that the contiguous eigenvalues $\lambda$, $\widehat{\lambda}$ of $J$ are such that $\lambda<\gamma<\widehat{\lambda}$. Under the premise that $\theta>1$, we have
$$\lim_{\substack{t\to\widehat{\lambda}-\\ t\in\mathbb{R}}}\mathfrak{m}(t) = +\infty, \qquad \lim_{\substack{t\to\lambda+\\ t\in\mathbb{R}}}\mathfrak{m}(t) = +\infty. \tag{3.6}$$
In view of (3.5) and (3.6), if $\mathfrak{m}\!\restriction_{\mathbb{R}}$ crosses the $0$-axis in the interval $(\lambda,\gamma)$, it should cross it in $(\lambda,\gamma)$ at least twice. The same is true for the interval $(\gamma,\widehat{\lambda})$. Note that $\mathfrak{m}\!\restriction_{\mathbb{R}}$ cannot tangentially touch the $0$-axis due to the simplicity of its zeros.
So, the assumption that $\mathfrak{m}\!\restriction_{\mathbb{R}}$ crosses the $0$-axis, by what has already been proven above, would imply that in $(\lambda,\gamma)$, respectively $(\gamma,\widehat{\lambda})$, there is at least one eigenvalue of $J$, which contradicts the fact that $\lambda$ and $\widehat{\lambda}$ are contiguous. Thus, there is no crossing of the $0$-axis by $\mathfrak{m}\!\restriction_{\mathbb{R}}$ in the interval $(\lambda,\widehat{\lambda})$, which means the absence of eigenvalues of $\widetilde{J}$ in $(\lambda,\widehat{\lambda})$. If now $\theta<1$, instead of (3.6) one has
$$\lim_{\substack{t\to\widehat{\lambda}-\\ t\in\mathbb{R}}}\mathfrak{m}(t) = -\infty, \qquad \lim_{\substack{t\to\lambda+\\ t\in\mathbb{R}}}\mathfrak{m}(t) = -\infty,$$
and one verifies analogously that $\mathfrak{m}\!\restriction_{\mathbb{R}}$ crosses the $0$-axis exactly once in $(\lambda,\gamma)$ and once in $(\gamma,\widehat{\lambda})$.

The case when $\gamma$ is in $\sigma(J)$ is treated analogously. Here one only has to take into account two things: firstly, that now
$$\mathfrak{m}(\gamma) = \theta^2 + (\theta^2-1)\operatorname*{Res}_{\zeta=\gamma} m(\zeta), \tag{3.7}$$
and secondly, that, since $-[\operatorname{Res}_{\zeta=\gamma} m(\zeta)]^{-1}$ is the normalizing constant of $J$ corresponding to the eigenvalue $\gamma$ (see (2.5)), one has $\mathfrak{m}(\gamma)<\theta^2$ if $\theta>1$ and $\mathfrak{m}(\gamma)>\theta^2$ if $\theta<1$.

Remark 3. Although the case $\theta=1$ reduces to an additive rank-one perturbation, the well-known interlacing property (see for instance the proof of [16, Thm. 3.3]) cannot be obtained from Proposition 3.2 by a limiting procedure, since $\gamma(\theta)$ diverges as $\theta\to 1$.

Remark 4. Let the positive number $\theta\ne 1$ and $h\in\mathbb{R}$. It is straightforward to verify that, for $\sigma(J)$ and $\sigma(\widetilde{J})$, there exist a set $M$ and functions $f\colon M\to\sigma(J)$ and $\widetilde{f}\colon M\to\sigma(\widetilde{J})$, with the properties given in our convention (C1) for enumerating sequences, such that the following conditions hold under the assumption that $\lambda_k=f(k)$ and $\mu_k=\widetilde{f}(k)$:
$$\lambda_k<\mu_k<\lambda_{k+1} \text{ in } (\gamma,+\infty), \qquad \lambda_{k-1}<\mu_k<\lambda_k \text{ in } (-\infty,\gamma), \tag{3.8}$$
when $\theta>1$, and
$$\mu_k<\lambda_k<\mu_{k+1} \text{ in } (\gamma,+\infty), \qquad \mu_{k-1}<\lambda_k<\mu_k \text{ in } (-\infty,\gamma), \tag{3.9}$$
if $\theta<1$. Here, implicitly, the intersection of $\sigma(J)$ with the semi-infinite intervals is not empty, but we are also considering the case when the intersection with one of the semi-infinite intervals is empty (see Remark 2).
Also, we are not excluding the case when $\gamma$ is in $\sigma(J)$, for which there is $k_0\in M$ such that $\lambda_{k_0}=\mu_{k_0}=\gamma$.

Proposition 3.3. Suppose that $h\in\mathbb{R}$ is such that if $\theta=1$ then $h\ne 0$. Let $J$ have discrete spectrum and assume that $\sigma(J)=\{\lambda_k\}_{k\in M}$ and $\sigma(\widetilde{J})=\{\mu_k\}_{k\in M}$, where the sequences have been arranged according to Remark 4 if $\theta\ne 1$ and according to Remark 1 otherwise. Then
$$\sum_{k\in M}(\mu_k-\lambda_k) = \theta^2 h + q_1(\theta^2-1). \tag{3.10}$$

Proof. Consider the sequence $\{\eta_k\}_{k\in M}$ being the spectrum of $\widehat{J}$, where $\widehat{J} := J + h\langle\delta_1,\cdot\rangle\delta_1$. In the proof of [16, Thm. 3.4] it is shown that
$$\sum_{k\in M}(\eta_k-\lambda_k) = h,$$
where $\eta_k>\lambda_k$ for all $k\in M$ when $h>0$, and $\eta_k\le\lambda_k$ for all $k\in M$ otherwise. On the other hand, by [8, Prop. 4.1], one has
$$\sum_{k\in M}(\mu_k-\eta_k) = (q_1+h)(\theta^2-1),$$
where the enumeration obeys [8, Remark 5] if $\theta\ne 1$.

Consider a sequence $\{M_n\}_{n=1}^{\infty}$ of subsets of $M$ such that $M_n\subset M_{n+1}$ and $\cup_n M_n = M$. Then the assertion follows from the linearity of the limit
$$\lim_{n\to\infty}\left[\sum_{k\in M_n}(\mu_k-\eta_k) + \sum_{k\in M_n}(\eta_k-\lambda_k)\right],$$
as soon as one notices that the enumeration has been done according to Remark 4 when $\theta\ne 1$.

Proposition 3.4. Suppose that $h\in\mathbb{R}$ is such that if $\theta=1$ then $h\ne 0$. Let the Jacobi operator $J$ have discrete spectrum and assume that $\sigma(J)=\{\lambda_k\}_{k\in M}$ and $\sigma(\widetilde{J})=\{\mu_k\}_{k\in M}$, where $\widetilde{J}$ is given by (1.4), and the sequences have been arranged according to Remark 4 if $\theta\ne 1$ and according to Remark 1 otherwise. Then
$$\mathfrak{m}(\zeta) = \prod_{k\in M}\frac{\zeta-\mu_k}{\zeta-\lambda_k}.$$

Proof. When $\theta=1$ the assertion follows from the proof of [16, Thm. 3.4]. If $\theta\ne 1$, the proof repeats the one of [8, Prop. 4.2], so we omit some details that the reader can reestablish from [8, Prop. 4.2] if necessary.

From Proposition 2.2 and (3.2) it follows that
$$\mathfrak{m}(\zeta) = C\,\frac{\zeta-\mu_0}{\zeta-\lambda_0}\prod_{\substack{k\in M\\ k\ne 0}}\left(1-\frac{\zeta}{\mu_k}\right)\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}.$$
By Proposition 3.3, one actually has
$$\mathfrak{m}(\zeta) = C\,\frac{\zeta-\mu_0}{\zeta-\lambda_0}\prod_{\substack{k\in M\\ k\ne 0}}\frac{\lambda_k}{\mu_k}\prod_{\substack{k\in M\\ k\ne 0}}\frac{\zeta-\mu_k}{\zeta-\lambda_k}. \tag{3.11}$$
Indeed, (3.10) implies the convergence of the products in (3.11). Now, the assertion of the proposition follows from
$$\lim_{\substack{\zeta\to\infty\\ \operatorname{Im}\zeta\ge\epsilon>0}}\mathfrak{m}(\zeta) = 1 \qquad\text{and}\qquad \lim_{\substack{\zeta\to\infty\\ \operatorname{Im}\zeta\ge\epsilon>0}}\prod_{k\in M}\frac{\zeta-\mu_k}{\zeta-\lambda_k} = 1. \tag{3.12}$$
The first limit is obtained from (2.3) and (3.4). The second one is a consequence of the uniform convergence of
$$\prod_{k\in M}\frac{\zeta-\mu_k}{\zeta-\lambda_k}$$
on compacts of $\mathbb{C}\setminus\mathbb{R}$, which, in its turn, can be proven on the basis of (3.10).

4. Inverse spectral analysis for $J$ and $\widetilde{J}$

In this section we give results on the reconstruction of the operator $J$ from its spectrum and the one of $\widetilde{J}$. Additionally, we provide necessary and sufficient conditions for two sequences to be the spectra of the operators $J$ and $\widetilde{J}$. Finally, we discuss isospectral operators within the perturbed family of Jacobi operators.

Theorem 4.1. Let the Jacobi operator $J$ have discrete spectrum and $\widetilde{J}$ be as in (1.4) with $\theta\ne 1$. If $\gamma$ is not in $\sigma(J)$, then the sets $\sigma(J)$, $\sigma(\widetilde{J})$, and the constant $\gamma$ uniquely determine the matrix (1.3), the parameters $\theta$ and $h$, and the boundary condition at infinity if necessary (i.e. if $J$ turns out to be non-essentially self-adjoint).

Proof. In view of what has been said in Section 2, it suffices to show that the input data uniquely determine the Weyl $m$-function of $J$ and the parameters $\theta$ and $h$. On the basis of Proposition 3.4, one constructs $\mathfrak{m}$ from the sets $\sigma(J)$ and $\sigma(\widetilde{J})$. Then, since $\gamma\notin\sigma(J)$, it follows from (3.4) that $\mathfrak{m}(\gamma)=\theta^2$. Now, the constants $\gamma$ and $\theta$ allow one to find $h$. Finally, by means of (3.4), one determines the function $m$.

Theorem 4.2. Let the Jacobi operator $J$ have discrete spectrum and $\widetilde{J}$ be as in (1.4) with $\theta\ne 1$.
Assuming that $\gamma$ is in $\sigma(J)$, suppose that one is given the sets $\sigma(J)$, $\sigma(\widetilde{J})$ and one of the following constants:
(a) $\theta$, (b) the normalizing constant corresponding to $\gamma$, (c) $h$.
Then one recovers uniquely the matrix (1.3), the constant $h$ in case (a), $\theta$ and $h$ in case (b), $\theta$ in case (c), and the boundary condition at infinity if necessary (i.e. if $J$ turns out to be non-essentially self-adjoint).

Proof. The proof is similar to the one of Theorem 4.1. The sets $\sigma(J)$ and $\sigma(\widetilde{J})$ determine $\mathfrak{m}$, and then one should obtain from it the function $m$ using either the constant $\theta$ or the normalizing constant corresponding to $\gamma$. From Proposition 3.1 it follows that $\sigma(J)\cap\sigma(\widetilde{J})=\{\gamma\}$, so $\gamma$ is determined by the input data; since $\gamma=\theta^2 h/(1-\theta^2)$, either of the constants $\theta$ or $h$ then determines both $\theta$ and $h$. On the other hand, from (3.4), and taking into account that $\gamma\in\sigma(J)$, we obtain
$$\mathfrak{m}(\gamma) = \theta^2 - \alpha^{-1}(\theta^2-1), \tag{4.1}$$
where $\alpha$ is the normalizing constant corresponding to the eigenvalue $\gamma$.

Suppose now that we are required to enumerate the sequences $\sigma(J)$ and $\sigma(\widetilde{J})$ according to Remark 4, but no information is given about the constant $\gamma$ other than that it is not in $\sigma(J)$. Clearly, one does not need this number to accomplish the task, as is stated in the following remark.

Remark 5. Assuming that $J$ has discrete spectrum, let $S=\sigma(J)$, $\widetilde{S}=\sigma(\widetilde{J})$ be disjoint, and take any $\theta\ne 1$ and $h\in\mathbb{R}$. It follows from Proposition 3.2 that one can find a set $M$ and functions $f\colon M\to S$, $\widetilde{f}\colon M\to\widetilde{S}$, with the properties given in our convention for enumerating sequences (C1), such that there exists a unique $k_0\in M$ for which the following conditions hold under the assumption that $\lambda_k=f(k)$ and $\mu_k=\widetilde{f}(k)$ for $k\in M$:
a) $\widetilde{S}\cap(\lambda_{k_0-1},\lambda_{k_0})=\emptyset$,
b) $\lambda_k<\mu_k<\lambda_{k+1}$, $\forall k\ge k_0$,
c) $\lambda_{k-1}<\mu_k<\lambda_k$, $\forall k<k_0$,
if $\theta>1$, and
a′) $\widetilde{S}\cap(\lambda_{k_0-1},\lambda_{k_0})=\{\mu_{k_0-1},\mu_{k_0}\}$,
b′) $\lambda_k<\mu_{k+1}<\lambda_{k+1}$, $\forall k\ge k_0$,
c′) $\lambda_{k-1}<\mu_{k-1}<\lambda_k$, $\forall k<k_0$,
if $\theta<1$.

For the reconstruction of $J$ and its perturbation $\widetilde{J}$, let us introduce the following parameterized sequence.
Suppose that two sequences $\{\lambda_k\}_{k\in M}$ and $\{\mu_k\}_{k\in M}$ are given and enumerated by the set $M$ as convened before. Whenever the series
$$\sum_{k\in M}(\mu_k-\lambda_k)$$
converges, the sequence
$$\tau_n(\omega) := (\mu_n-\lambda_n)\prod_{\substack{k\in M\\ k\ne n}}\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}\left[(\lambda_n-\omega)\left(\prod_{k\in M}\frac{\omega-\mu_k}{\omega-\lambda_k}-1\right)\right]^{-1}, \qquad \forall n\in M, \tag{4.2}$$
is well defined for any $\omega\in\mathbb{R}$ for which the right-hand side makes sense.

Theorem 4.3. Let $S$ and $\widetilde{S}$ be two disjoint infinite real sequences without finite points of accumulation. There exist $\theta>1$, $h\in\mathbb{R}$, and a matrix (1.3) such that $S=\sigma(J)\not\ni\gamma$ and $\widetilde{S}=\sigma(\widetilde{J})$ if and only if the following conditions hold:
i) There exist a set $M$ and functions $f\colon M\to S$, $\widetilde{f}\colon M\to\widetilde{S}$ with the properties given in our convention for enumerating sequences (C1) such that one can find a unique $k_0\in M$ for which a), b), c) of Remark 5 take place with $\lambda_k=f(k)$ and $\mu_k=\widetilde{f}(k)$.
ii) The series $\sum_{k\in M}(\mu_k-\lambda_k)$ is convergent.
iii) There exists $\widehat{\omega}\in(\lambda_{k_0-1},\lambda_{k_0})$ such that
a) For $m=0,1,2,\dots$, the series $\sum_{k\in M}\lambda_k^m\,\tau_k(\widehat{\omega})$ converges.
b) If a sequence of complex numbers $\{\beta_k\}_{k\in M}$ is such that the series $\sum_{k\in M}|\beta_k|^2\tau_k(\widehat{\omega})$ converges and, for $m=0,1,2,\dots$,
$$\sum_{k\in M}\beta_k\lambda_k^m\tau_k(\widehat{\omega})=0,$$
then $\beta_k=0$ for all $k\in M$.

Proof. Due to Propositions 3.2 and 3.3, for proving the necessity of the conditions it only remains to show the existence of $\widehat{\omega}$ in $(\lambda_{k_0-1},\lambda_{k_0})$ such that $\tau_n(\widehat{\omega})=\alpha_n^{-1}$ for all $n\in M$. Indeed, iiia) and iiib) will then follow from the fact that all moments of the spectral measure (2.4) exist and that the polynomials are dense in $L_2(\mathbb{R},\rho)$. Clearly, $\gamma\in(\lambda_{k_0-1},\lambda_{k_0})$, so let $\widehat{\omega}=\gamma$. Then, from (2.5), (3.4), and Proposition 3.4, it follows that
$$\alpha_n^{-1} = \frac{1}{\theta^2-1}\lim_{\zeta\to\lambda_n}\frac{\lambda_n-\zeta}{\zeta-\gamma}\,\mathfrak{m}(\zeta) = \frac{\mu_n-\lambda_n}{(\lambda_n-\gamma)(\theta^2-1)}\prod_{\substack{k\in M\\ k\ne n}}\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}. \tag{4.3}$$
Hence, taking into account (3.5), one verifies that $\tau_n(\widehat{\omega})=\alpha_n^{-1}$.

We now prove that conditions i), ii), iiia), and iiib) are sufficient. The condition i) implies that
$$\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}>0, \qquad \forall k\in M,\ k\ne n.$$
On the other hand, by ii) one can define the number
$$\vartheta = +\sqrt{\prod_{k\in M}\frac{\widehat{\omega}-\mu_k}{\widehat{\omega}-\lambda_k}}, \tag{4.4}$$
which is clearly strictly greater than $1$ since, if $\widehat{\omega}\in(\lambda_{k_0-1},\lambda_{k_0})$, then $|\widehat{\omega}-\mu_k|>|\widehat{\omega}-\lambda_k|$ for all $k\in M$. Thus,
$$\frac{\mu_n-\lambda_n}{(\lambda_n-\widehat{\omega})(\vartheta^2-1)}>0, \qquad \forall n\in M.$$
Hence, for all $n\in M$, $\tau_n(\widehat{\omega})>0$, so one can define the function
$$\rho(t) := \sum_{\lambda_k\le t}\tau_k(\widehat{\omega}).$$
It follows from iiia) that all the moments of the measure corresponding to $\rho$ are finite. Hence, using the method explained in Section 2, one obtains a Jacobi matrix and the operator generated by it; condition iiib) implies that $\rho$ is the spectral function of a self-adjoint extension $J$ of that operator [17, Prop. 4.15]. Now consider
$$\widetilde{J} = J + \big[q_1(\theta^2-1)+\theta^2 h\big]\langle\delta_1,\cdot\rangle\delta_1 + b_1(\theta-1)\big(\langle\delta_2,\cdot\rangle\delta_1+\langle\delta_1,\cdot\rangle\delta_2\big), \tag{4.10}$$
where $\theta=\vartheta$, $h=\widehat{\omega}\,(1-\vartheta^2)/\vartheta^2$.

By construction the sequence $\{\lambda_k\}_{k\in M}$ is the spectrum of $J$. For the proof to be complete it only remains to show that $\{\mu_k\}_{k\in M}$ is the spectrum of $\widetilde{J}$. For the function given in (3.2), taking into account (2.5) and (3.4), one has
$$\mathfrak{m}(\zeta) = \theta^2 + (\zeta-\widehat{\omega})\big(\theta^2-1\big)\sum_{k\in M}\frac{\alpha_k^{-1}}{\lambda_k-\zeta}.$$
On the other hand, from (4.6) and (4.9), it follows that
$$\check{\mathfrak{m}}(\zeta) = \vartheta^2 + (\zeta-\widehat{\omega})\big(\vartheta^2-1\big)\sum_{k\in M}\frac{\tau_k(\widehat{\omega})}{\lambda_k-\zeta},$$
where $\check{\mathfrak{m}}$ denotes the analogous quotient built from the constructed operators. But $\theta=\vartheta$ and we have already proven that $\alpha_k^{-1}=\tau_k(\widehat{\omega})$ for $k\in M$. Thus $\mathfrak{m}=\check{\mathfrak{m}}$, meaning that the zeros of $\mathfrak{m}$ are given by the sequence $\{\mu_k\}_{k\in M}$.

Remark 6. In accordance with Theorem 4.1, the proof of Theorem 4.3 shows that the sequences $S$, $\widetilde{S}$, and the parameter $\widehat{\omega}$ satisfying i), ii), and iii) uniquely determine the perturbation parameters $\theta$ and $h$, and the matrix (1.3) with the boundary condition at infinity if necessary. Thus, $S$, $\widetilde{S}$, and $\widehat{\omega}$ amount to the complete input data for solving uniquely the inverse spectral problem.

Remark 7. Clearly, the assertion of Theorem 4.3 holds true if one substitutes $\theta>1$ by $\theta<1$, conditions a), b), c) by a′), b′), c′), and $\widehat{\omega}\in(\lambda_{k_0-1},\lambda_{k_0})$ by $\widehat{\omega}\in(\mu_{k_0-1},\mu_{k_0})$.

Proposition 4.1. Let $S$ and $\widetilde{S}$ be two infinite real sequences without finite points of accumulation that satisfy i) and ii) of Theorem 4.3.
Suppose that there is $\widehat{\omega}\in(\lambda_{k_0-1},\lambda_{k_0})$ so that the sequence $\{\tau_n(\widehat{\omega})\}_{n\in M}$ satisfies iiia) and iiib) of Theorem 4.3. Then $\{\tau_n(\omega)\}_{n\in M}$ also satisfies iiia) and iiib) for all $\omega\in(\lambda_{k_0-1},\lambda_{k_0})$.

Proof. Define
$$\rho_\omega(t) := \sum_{\lambda_k\le t}\tau_k(\omega).$$
Since the quotient $\tau_n(\omega)/\tau_n(\widehat{\omega})$ is positive and, uniformly in $n\in M$, bounded above and away from zero, the measures corresponding to $\rho_\omega$ and $\rho_{\widehat{\omega}}$ are comparable, and conditions iiia) and iiib) carry over from $\widehat{\omega}$ to $\omega$.

Remark 8. As in Remark 7, the assertion of Proposition 4.1 holds true if one assumes that i) is satisfied with a′), b′), c′) instead of a), b), c) and substitutes the interval $(\lambda_{k_0-1},\lambda_{k_0})$ by $(\mu_{k_0-1},\mu_{k_0})$.

Theorem 4.4. Let $\theta\ne 1$ and assume that the disjoint sets $\sigma(J)$ and $\sigma(\widetilde{J})$ are enumerated according to Remark 5 with a), b), c) if $\theta>1$, and with a′), b′), c′) otherwise. Then, for any $\omega\in(\lambda_{k_0-1},\lambda_{k_0})$ when $\theta>1$ and for any $\omega\in(\mu_{k_0-1},\mu_{k_0})$ when $\theta<1$, there is a matrix
$$\begin{pmatrix} q'_1 & b'_1 & 0 & \cdots \\ b'_1 & q'_2 & b'_2 & \cdots \\ 0 & b'_2 & q'_3 & b'_3 \\ \vdots & & \ddots & \ddots \end{pmatrix}, \tag{4.13}$$
where $q'_n\in\mathbb{R}$ and $b'_n>0$ for all $n\in\mathbb{N}$, and a self-adjoint extension $J'$ of the operator whose matrix representation is (4.13), such that $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$, where
$$\widetilde{J}' := J' + \big[q'_1\big((\theta')^2-1\big)+(\theta')^2 h'\big]\langle\delta_1,\cdot\rangle\delta_1 + b'_1(\theta'-1)\big(\langle\delta_2,\cdot\rangle\delta_1+\langle\delta_1,\cdot\rangle\delta_2\big) \tag{4.14}$$
with
$$\theta' := +\sqrt{\mathfrak{m}(\omega)}, \qquad h' := \frac{\omega\,\big(1-\mathfrak{m}(\omega)\big)}{\mathfrak{m}(\omega)}.$$

Proof. We prove the assertion for $\theta>1$; the other case is completely analogous, one only has to take into account Remarks 7 and 8. By Theorem 4.3, $\sigma(J)$ and $\sigma(\widetilde{J})$ satisfy i), ii), iiia), and iiib). Then, from Proposition 4.1, iiia) and iiib) are satisfied for any $\omega\in(\lambda_{k_0-1},\lambda_{k_0})$. Now, again by Theorem 4.3, there are operators $J'$ and $\widetilde{J}'$ such that their spectra coincide with $\sigma(J)$ and $\sigma(\widetilde{J})$.

Lemma 4.1. Let $\theta\ne 1$ and assume that the disjoint sets $\sigma(J)$ and $\sigma(\widetilde{J})$ are enumerated according to Remark 5 with a), b), c) if $\theta>1$, and with a′), b′), c′) otherwise. Then the equation
$$\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}(s)=\theta^2 \tag{4.15}$$
has only the solutions $s=\gamma$ and $s=\widehat{\gamma}$, where $\widehat{\gamma}$ is the only point in $\sigma(J_T)\cap(\lambda_{k_0-1},\lambda_{k_0})$.
Moreover, if $\gamma=\widehat{\gamma}$, then $\gamma$ is a local extremum of $\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}$.

Proof. First notice that $\gamma$ is in $(\lambda_{k_0-1},\lambda_{k_0})$ if $\theta>1$ and in $(\mu_{k_0-1},\mu_{k_0})$ otherwise. By Proposition 2.1 the set $\sigma(J_T)\cap(\lambda_{k_0-1},\lambda_{k_0})$ has only one element. If $\theta<1$, since $J_T=\widetilde{J}_T$, this only element is actually in $(\mu_{k_0-1},\mu_{k_0})$. Moreover, when $\theta<1$, by what was said in the proof of Proposition 3.2, $\mathfrak{m}\!\restriction_{\mathbb{R}}$ takes negative values outside $(\mu_{k_0-1},\mu_{k_0})$. Now, from (3.4), the solutions of (4.15) are the zeros of $(\zeta-\gamma)m(\zeta)$, which are $\gamma$ and $\widehat{\gamma}$. Clearly, if $\gamma=\widehat{\gamma}$, the function $(\zeta-\gamma)m(\zeta)$ has a zero of multiplicity two, which implies the second assertion.

Lemma 4.2. Let $\theta\ne 1$ and assume that the disjoint sets $\sigma(J)$ and $\sigma(\widetilde{J})$ are enumerated according to Remark 5 with a), b), c) if $\theta>1$, and with a′), b′), c′) otherwise. Then the function $\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}$ has only one local extremum, located in $(\lambda_{k_0-1},\lambda_{k_0})$ when $\theta>1$ and in $(\mu_{k_0-1},\mu_{k_0})$ when $\theta<1$, which turns out to be a global minimum greater than $1$ if $\theta>1$, and a global maximum less than $1$ if $\theta<1$.

Proof. Suppose that $\theta>1$ and that $\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}$ has more than one local extremum. Then one verifies that there are three different points $\omega_1$, $\omega_2$, $\omega_3$ in $(\lambda_{k_0-1},\lambda_{k_0})$ such that
$$\mathfrak{m}(\omega_1)=\mathfrak{m}(\omega_2)=\mathfrak{m}(\omega_3).$$
By Theorem 4.4, for $\omega_1$ there are Jacobi operators $J'$ and $\widetilde{J}'$ such that $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$. Let $\mathfrak{n}$ be the quotient of the Weyl $m$-function of $J'$ and the Weyl $m$-function of $\widetilde{J}'$. By Proposition 3.4, $\mathfrak{m}=\mathfrak{n}$. Hence, on the basis of Theorem 4.4, it follows that
$$\mathfrak{n}(\omega_1)=\mathfrak{n}(\omega_2)=\mathfrak{n}(\omega_3)=(\theta')^2. \tag{4.16}$$
On the other hand, Lemma 4.1 tells us that the equation
$$\mathfrak{n}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}(s)=(\theta')^2,$$
where $\theta'=+\sqrt{\mathfrak{m}(\omega_1)}$, has only the solutions $\omega_1$ and the only element of $\sigma(J'_T)$ in $(\lambda_{k_0-1},\lambda_{k_0})$. This is in contradiction with (4.16). Thus there is only one extremum of $\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}$ when $\theta>1$.
The same reasoning, replacing all appearances of the interval $(\lambda_{k_0-1},\lambda_{k_0})$ by $(\mu_{k_0-1},\mu_{k_0})$, works for the case $\theta<1$. Taking into account the behavior of $\mathfrak{m}$ in the interval $(\lambda_{k_0-1},\lambda_{k_0})$ if $\theta>1$, and in $(\mu_{k_0-1},\mu_{k_0})$ if $\theta<1$, given in the proof of Proposition 3.2, one completes the proof.

Theorem 4.5. Under the assumptions of Lemma 4.1, if $\gamma\ne\widehat{\gamma}$, then there are exactly two different matrices (1.3) and (4.13) such that $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$ with $\theta=\theta'$. If $\gamma=\widehat{\gamma}$, then for all operators $J'\ne J$ for which $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$ it turns out that $\theta\ne\theta'$.

Proof. Due to Theorem 4.4 and Lemmas 4.1 and 4.2, the proof is straightforward.

Remark 9. Clearly, the condition $\gamma=\widehat{\gamma}$ is equivalent to $\gamma$ being equal to the minimum of $\mathfrak{m}\!\restriction_{(\lambda_{k_0-1},\lambda_{k_0})}$ if $\theta>1$, and to the maximum of $\mathfrak{m}\!\restriction_{(\mu_{k_0-1},\mu_{k_0})}$ if $\theta<1$.

Let us comment on these results in terms of the perturbed mass-spring systems. Assume that the spectra of the mass-spring systems given in Figs. 1 and 2 are known and disjoint. These input data determine whether $\Delta m$ is positive or negative (see Proposition 3.2). For definiteness, suppose that $\Delta m>0$. If no more information is given, then for any value of the ratio of masses $\theta^2\in\big(0,\max_{t\in(\mu_{k_0-1},\mu_{k_0})}\mathfrak{m}(t)\big]$ there are mass-spring systems corresponding to Figs. 1 and 2 having the measured spectra (see Theorem 4.4). However, when one knows the ratio of masses $\theta$, then, in general, there are only two mass-spring systems corresponding to Fig. 1 that comply with the conditions after the corresponding perturbation (see Theorem 4.5). Moreover, if
$$\theta^2 = \max_{t\in(\mu_{k_0-1},\mu_{k_0})}\mathfrak{m}(t),$$
there is only one system with the required properties (see Theorem 4.5).

Let us now turn to the case when $\gamma\in\sigma(J)$ or, equivalently, when the spectra of $J$ and $\widetilde{J}$ intersect. Thus, according to Remark 4, consider the sequences $\{\lambda_k\}_{k\in M}$ and $\{\mu_k\}_{k\in M}$ such that $\lambda_{k_0}=\mu_{k_0}=\gamma$. If
$$\sum_{k\in M}(\mu_k-\lambda_k)$$
converges, then, for any $\omega\in\mathbb{R}$ and $n\in M$, one defines
$$\upsilon_n(\omega) := \begin{cases} \dfrac{\mu_n-\lambda_n}{(\lambda_n-\gamma)(\omega-1)}\displaystyle\prod_{\substack{k\in M\\ k\ne n}}\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}, & n\ne k_0,\\[3mm] \dfrac{1}{\omega-1}\left(\omega-\displaystyle\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\right), & n=k_0. \end{cases} \tag{4.17}$$

Theorem 4.6.
Let $S$ and $\widetilde{S}$ be two infinite real sequences without finite points of accumulation such that $S\cap\widetilde{S}=\{\gamma\}$. There exist $\theta>1$, $h\in\mathbb{R}$, and a matrix (1.3) such that $S=\sigma(J)$ and $\widetilde{S}=\sigma(\widetilde{J})$ if and only if the following conditions hold:
I) There exist a set $M$ and functions $f\colon M\to S$, $\widetilde{f}\colon M\to\widetilde{S}$ with the properties given in Remark 4 such that (3.8) holds and there is a $k_0\in M$ such that $\lambda_{k_0}=\mu_{k_0}=\gamma$.
II) The series $\sum_{k\in M}(\mu_k-\lambda_k)$ is convergent.
III) There exists
$$\widehat{\omega} > \prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}$$
such that
a) For $m=0,1,2,\dots$, the series $\sum_{k\in M}\lambda_k^m\,\upsilon_k(\widehat{\omega})$ converges.
b) If a sequence of complex numbers $\{\beta_k\}_{k\in M}$ is such that the series $\sum_{k\in M}|\beta_k|^2\upsilon_k(\widehat{\omega})$ converges and, for $m=0,1,2,\dots$,
$$\sum_{k\in M}\beta_k\lambda_k^m\upsilon_k(\widehat{\omega})=0,$$
then $\beta_k=0$ for all $k\in M$.

Proof. For proving the necessity of the conditions, in view of Propositions 3.2 and 3.3, one only needs to show the existence of $\widehat{\omega}$ strictly greater than $\mathfrak{m}(\gamma)$ such that $\upsilon_k(\widehat{\omega})=\alpha_k^{-1}$ for all $k\in M$. From (4.1) and the properties of the normalizing constants, it follows that
$$1<\mathfrak{m}(\gamma)<\theta^2. \tag{4.18}$$
Let $\widehat{\omega}=\theta^2$; then (4.3) yields $\upsilon_k(\widehat{\omega})=\alpha_k^{-1}$ for $k\in M$, $k\ne k_0$. Moreover, (4.1) implies that $\upsilon_{k_0}(\widehat{\omega})=\alpha_{k_0}^{-1}$.

Let us now prove that I), II), IIIa), and IIIb) are sufficient. It follows from (3.8) that $|\gamma-\mu_k|>|\gamma-\lambda_k|$ for any $k\in M\setminus\{k_0\}$. Since $\gamma-\mu_k$ and $\gamma-\lambda_k$ have the same sign,
$$\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}>1.$$
Thus $\widehat{\omega}>1$ and $\upsilon_{k_0}(\widehat{\omega})>0$. Now fix $n\in M$, $n\ne k_0$. By I) one has
$$\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}>0, \qquad \forall k\in M,\ k\ne n.$$
Since $\mu_n-\lambda_n$ and $\lambda_n-\gamma$ are positive or negative simultaneously, we conclude that
$$\upsilon_n(\widehat{\omega})>0, \qquad \forall n\in M.$$
Define
$$\rho(t) := \sum_{\lambda_k\le t}\upsilon_k(\widehat{\omega}).$$
It follows from IIIa) that all the moments of the measure corresponding to $\rho$ are finite. On the basis of I) and II), define the meromorphic functions
$$\check{\mathfrak{m}}(\zeta) := \prod_{\substack{k\in M\\ k\ne k_0}}\frac{\zeta-\mu_k}{\zeta-\lambda_k} \tag{4.19}$$
and
$$\check{m}(\zeta) := \frac{\check{\mathfrak{m}}(\zeta)-\widehat{\omega}}{(\zeta-\gamma)(\widehat{\omega}-1)}. \tag{4.20}$$
As it was shown in the proof of Theorem 4.3, one verifies that

$$\operatorname{Res}_{\zeta=\lambda_n}\check{m}_0(\zeta)=-\upsilon_n(\widehat{\omega})\,,\qquad n\ne k_0\,.$$

It is also straightforward to show that

$$\operatorname{Res}_{\zeta=\gamma}\check{m}_0(\zeta)=\frac{\check{m}(\gamma)-\widehat{\omega}}{\widehat{\omega}-1}\,,$$

which equals $-\upsilon_{k_0}(\widehat{\omega})$ by (4.17). Thus, since the function $\check{m}_0(\zeta)$ vanishes as $\zeta\to\infty$ along curves in the upper complex half plane, according to [13, Chap. VII, Sec. 1, Theorem 2], one can write

$$\check{m}_0(\zeta)=\sum_{k\in M}\frac{\upsilon_k(\widehat{\omega})}{\lambda_k-\zeta}\,.\qquad(4.21)$$

From (4.21) and the fact that $\lim_{\zeta\to\infty,\,\operatorname{Im}\zeta\ge\epsilon>0}\zeta\,\check{m}_0(\zeta)=-1$, it follows that

$$\sum_{k\in M}\upsilon_k(\widehat{\omega})=1$$

or, equivalently, $\int_{\mathbb{R}}d\rho(t)=1$. On the other hand, by IIIa), all the moments of $\rho$ exist. Hence, using the method explained in Section 2, one obtains a Jacobi matrix and the operator $J_0$ generated by it (see the Introduction). Condition IIIb) implies that $\rho$ is the spectral function of a self-adjoint extension $J$ of $J_0$ [17, Prop. 4.15]. Now, consider (4.10), where now

$$\theta=+\sqrt{\widehat{\omega}}\,,\qquad h=\gamma\Bigl(\frac{1}{\widehat{\omega}}-1\Bigr)\,.$$

By construction the sequence $\{\lambda_k\}_{k\in M}$ is the spectrum of $J$. For the proof to be complete it only remains to show that $\{\mu_k\}_{k\in M}$ is the spectrum of $\widetilde{J}$. For the function given in (3.2), taking into account (2.5) and (3.4), one has

$$m(\zeta)=\theta^2+(\zeta-\gamma)\bigl(\theta^2-1\bigr)\sum_{k\in M}\frac{1}{\alpha_k^2(\lambda_k-\zeta)}\,.$$

In view of (4.20) and (4.21), one has

$$\check{m}(\zeta)=\widehat{\omega}+(\zeta-\gamma)(\widehat{\omega}-1)\sum_{k\in M}\frac{\upsilon_k(\widehat{\omega})}{\lambda_k-\zeta}\,.$$

But, since $\theta^2=\widehat{\omega}$ and $\alpha_k^{-2}=\upsilon_k(\widehat{\omega})$ for $k\in M$, it follows that $m=\check{m}$. In its turn, this means that the zeros of $m$ are given by the sequence $\{\mu_k\}_{k\in M}$.

Remark 10. By repeating the reasoning of the proof of Theorem 4.6, it is straightforward to verify that Theorem 4.6 remains true if one substitutes $\theta>1$ by $\theta<1$, (3.8) by (3.9) in I), and

$$\widehat{\omega}>\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\qquad\text{by}\qquad\widehat{\omega}<\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\,.$$

Proposition 4.2. Let $S$ and $\widetilde{S}$ be two infinite real sequences without finite points of accumulation such that $S\cap\widetilde{S}=\{\gamma\}$ and I) and II) of Theorem 4.6 hold.
Suppose that there is

$$\widehat{\omega}>\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}$$

such that the sequence $\{\upsilon_n(\widehat{\omega})\}_{n\in M}$ satisfies IIIa) and IIIb) of Theorem 4.6. Then $\{\upsilon_n(\omega)\}_{n\in M}$ also satisfies IIIa) and IIIb) for all

$$\omega>\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\,.$$

Proof. For proving the claim one repeats the reasoning of the proof of Proposition 4.1. Here we observe that, for $n\in M$, $n\ne k_0$,

$$\upsilon_n(\omega)=C\,\upsilon_n(\widehat{\omega})\,,\qquad\text{where } C=\frac{\widehat{\omega}-1}{\omega-1}\,.$$

Remark 11. If, in Proposition 4.2, one substitutes (3.8) by (3.9) in I) and

$$\widehat{\omega}>\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\,,\qquad\omega>\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}$$

by

$$\widehat{\omega}<\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\,,\qquad\omega<\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\mu_k}{\gamma-\lambda_k}\,,$$

then the new assertion holds true.

By repeating the proof of Theorem 4.4 with a minor modification one arrives at the following theorem.

Theorem 4.7. Let $\theta\ne 1$ and assume that the intersecting sets $\sigma(J)$ and $\sigma(\widetilde{J})$ are enumerated according to Remark 4 with (3.8) if $\theta>1$ and (3.9) if $\theta<1$. Then, for any $\omega>m(\gamma)$ when $\theta>1$ and for any $\omega<m(\gamma)$ when $\theta<1$, there is a matrix (4.13) and a self-adjoint extension $J'$ of the operator whose matrix representation is (4.13), such that $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$, where $\widetilde{J}'$ is given by (4.14) with

$$\theta':=+\sqrt{\omega}\,,\qquad h':=\gamma\Bigl(\frac{1}{\omega}-1\Bigr)\,.$$

Let us now comment on the last results in terms of the perturbed mass-spring systems. Assume that the spectra of the mass-spring systems given in Fig. 1 and Fig. 2 are given and that they intersect. By Proposition 3.2, these input data determine the sign of $\Delta m$. Let us suppose that $\Delta m>0$. Due to Theorem 4.7, for any value of the ratio of masses $\theta<m(\gamma)$ there are mass-spring systems corresponding to Figs. 1 and 2 having the measured spectra. The knowledge of the ratio of masses completely determines the mass-spring systems.

We have given above the ratio of masses as a parameter of the system when the spectra intersect (see Theorems 4.6 and 4.7, where $\omega$ and $\widehat{\omega}$ play the role of the ratio of masses).
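As an aside, the step in the proof of Theorem 4.6 where one obtains a Jacobi matrix from the measure associated with $\rho$ is, in essence, the classical Stieltjes orthogonalization. The following minimal sketch is not the Section 2 construction itself, and a hypothetical three-point probability measure stands in for $\rho$; it recovers the diagonal and off-diagonal entries via the three-term recurrence for the orthonormal polynomials:

```python
import math

# Sketch of the moment-to-Jacobi reconstruction step: from a discrete
# probability measure (the analogue of rho) recover a Jacobi matrix by the
# classical Stieltjes orthogonalization. The measure below is hypothetical.

def jacobi_from_measure(points, weights):
    """Return diagonal (q) and off-diagonal (b) Jacobi entries via the
    three-term recurrence for the orthonormal polynomials of the measure."""
    n = len(points)
    inner = lambda f, g: sum(w * fi * gi for w, fi, gi in zip(weights, f, g))
    q, b = [], []
    p_prev = [0.0] * n
    norm = math.sqrt(sum(weights))
    p_cur = [1.0 / norm] * n                     # p_0 = 1/||1||
    for j in range(n):
        xp = [x * p for x, p in zip(points, p_cur)]
        a_j = inner(xp, p_cur)                   # diagonal entry <x p_j, p_j>
        q.append(a_j)
        if j == n - 1:
            break
        r = [xpi - a_j * pc - (b[-1] if b else 0.0) * pp
             for xpi, pc, pp in zip(xp, p_cur, p_prev)]
        b_j = math.sqrt(inner(r, r))             # off-diagonal entry ||r||
        b.append(b_j)
        p_prev, p_cur = p_cur, [ri / b_j for ri in r]
    return q, b

# Hypothetical measure with mass 1/3 at each of -1, 0, 1:
q, b = jacobi_from_measure([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3])
```

The eigenvalues of the resulting tridiagonal matrix reproduce the support points of the measure, mirroring the fact that $\{\lambda_k\}_{k\in M}$ is by construction the spectrum of $J$.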
Taking the ratio of masses as the parameter is a "natural" choice because the parameter used in the case when the spectra are disjoint, namely $\gamma$, is now given with the spectra. There is also another choice for the parameter: the spring constant $h$. Below we briefly discuss this parameterization, where now the role of the spring constant is played by $\omega$ and $\widehat{\omega}$. We begin by defining

$$\widetilde{\upsilon}_n(\omega):=\begin{cases}\dfrac{(\mu_n-\lambda_n)(\omega+\gamma)}{(\gamma-\lambda_n)\,\omega}\displaystyle\prod_{\substack{k\in M\\ k\ne n}}\dfrac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}\,, & n\in M\,,\ n\ne k_0\,,\\[3mm]\dfrac{\omega+\gamma}{\omega}\displaystyle\prod_{\substack{k\in M\\ k\ne k_0}}\dfrac{\gamma-\mu_k}{\gamma-\lambda_k}-\dfrac{\gamma}{\omega}\,, & n=k_0\,.\end{cases}\qquad(4.22)$$

Theorem 4.8. Let $S$ and $\widetilde{S}$ be two infinite real sequences without finite points of accumulation such that $S\cap\widetilde{S}=\{\gamma\}$. There exist $\theta>1$, $h\in\mathbb{R}$, and a matrix (1.3) such that $S=\sigma(J)$ and $\widetilde{S}=\sigma(\widetilde{J})$ if and only if conditions I) and II) of Theorem 4.6 hold along with

III') There exists a real number $\widehat{\omega}$ satisfying

$$\widehat{\omega}\ne 0\ \text{ if }\ \gamma=0\,,\qquad\widehat{\omega}<\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma>0\,,\qquad\widehat{\omega}>\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma<0\,,$$

such that

a) For $m=0,1,2,\dots$, the series $\sum_{k\in M}\lambda_k^m\,\widetilde{\upsilon}_k(\widehat{\omega})$ converges.

b) If a sequence of complex numbers $\{\beta_k\}_{k\in M}$ is such that the series $\sum_{k\in M}|\beta_k|^2\widetilde{\upsilon}_k(\widehat{\omega})$ converges and, for $m=0,1,2,\dots$, $\sum_{k\in M}\beta_k\lambda_k^m\widetilde{\upsilon}_k(\widehat{\omega})=0$, then $\beta_k=0$ for all $k\in M$.

Proof. The proof is similar to the one of Theorem 4.6 and we restrict ourselves to the case when $\gamma>0$. The other cases are proven analogously. Thus, for the necessity of the conditions, one only needs to establish that there is

$$\widehat{\omega}<\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)$$

such that $\widetilde{\upsilon}_k(\widehat{\omega})=\alpha_k^{-2}$ for all $k\in M$. On the basis of (3.3) and (4.18), one has

$$1<m(\gamma)<\frac{\gamma}{\gamma+h}\,.$$

Note that $\gamma+h\ne 0$. Since $\gamma$ and $\gamma+h$ always have the same sign and we are assuming that $\gamma>0$, the inequality

$$h<\gamma\Bigl(\frac{1}{m(\gamma)}-1\Bigr)$$

holds. So let $\widehat{\omega}=h$; then (4.3) yields $\widetilde{\upsilon}_k(\widehat{\omega})=\alpha_k^{-2}$ for $k\in M$, $k\ne k_0$. Moreover, (4.1) implies that $\widetilde{\upsilon}_{k_0}(\widehat{\omega})=\alpha_{k_0}^{-2}$.

Let us now prove that I), II), III'a), and III'b) are sufficient.
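Before continuing with the sufficiency argument, the two parameterizations can be cross-checked numerically: assuming the correspondence $\theta^2=\gamma/(\gamma+h)$ suggested by the parameter formulas of Theorems 4.7 and 4.9, the quantities (4.22) evaluated at $\omega=h$ should coincide with the quantities (4.17) evaluated at $\omega=\theta^2$. The data below are invented for illustration:

```python
# Numerical cross-check (hypothetical data) that the two parameterizations
# agree: with theta^2 = gamma/(gamma + h), the quantities (4.22) at omega = h
# coincide with the quantities (4.17) at omega = theta^2.

def upsilon(n, omega, lam, mu, k0):
    """(4.17) for a finite index set; gamma = lam[k0]."""
    g = lam[k0]
    if n == k0:
        prod = 1.0
        for k in range(len(lam)):
            if k != k0:
                prod *= (g - mu[k]) / (g - lam[k])
        return (omega - prod) / (omega - 1.0)
    prod = 1.0
    for k in range(len(lam)):
        if k != n:
            prod *= (lam[n] - mu[k]) / (lam[n] - lam[k])
    return (mu[n] - lam[n]) / ((lam[n] - g) * (omega - 1.0)) * prod

def upsilon_tilde(n, omega, lam, mu, k0):
    """(4.22) for a finite index set; gamma = lam[k0], omega in the role of h."""
    g = lam[k0]
    if n == k0:
        prod = 1.0
        for k in range(len(lam)):
            if k != k0:
                prod *= (g - mu[k]) / (g - lam[k])
        return (omega + g) / omega * prod - g / omega
    prod = 1.0
    for k in range(len(lam)):
        if k != n:
            prod *= (lam[n] - mu[k]) / (lam[n] - lam[k])
    return (mu[n] - lam[n]) * (omega + g) / ((g - lam[n]) * omega) * prod

# Hypothetical spectra intersecting at gamma = 1 (k0 = 0):
lam, mu, k0 = [1.0, 2.0, 4.0], [1.0, 3.0, 5.0], 0
h = -0.8
theta_sq = lam[k0] / (lam[k0] + h)     # gamma/(gamma + h), equal to 5 here
for n in range(3):
    assert abs(upsilon(n, theta_sq, lam, mu, k0)
               - upsilon_tilde(n, h, lam, mu, k0)) < 1e-12
```

This is only a consistency check of the reading of (4.17) and (4.22) adopted here, on a finite truncation of the genuinely infinite data.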
Reasoning as before, one verifies that $\widetilde{\upsilon}_n(\widehat{\omega})>0$ for all $n\in M$. Now, instead of (4.20) one defines

$$\check{m}_0(\zeta):=\frac{\gamma-(\widehat{\omega}+\gamma)\,\check{m}(\zeta)}{(\zeta-\gamma)\,\widehat{\omega}}\,,$$

where $\check{m}$ is given in (4.19). Then it is shown that

$$\operatorname{Res}_{\zeta=\lambda_n}\check{m}_0(\zeta)=-\widetilde{\upsilon}_n(\widehat{\omega})\qquad\forall n\in M\,.\qquad(4.23)$$

Having defined $\rho(t):=\sum_{\lambda_k\le t}\widetilde{\upsilon}_k(\widehat{\omega})$, one proceeds as in the proof of Theorem 4.6.

Theorem 4.8 holds true after substituting $\theta>1$ by $\theta<1$ and the conditions

$$\widehat{\omega}\ne 0\ \text{ if }\ \gamma=0\,,\qquad\widehat{\omega}<\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma>0\,,\qquad\widehat{\omega}>\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma<0$$

by

$$\widehat{\omega}\ne 0\ \text{ if }\ \gamma=0\,,\qquad\widehat{\omega}>\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma>0\,,\qquad\widehat{\omega}<\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)\ \text{ if }\ \gamma<0\,.$$

Theorem 4.9. Let $\theta\ne 1$ and assume that the intersecting sets $\sigma(J)$ and $\sigma(\widetilde{J})$ are enumerated according to Remark 4 with (3.8) if $\theta>1$ and (3.9) if $\theta<1$. Assume that $\gamma>0$. Then, for any

$$\omega<\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)$$

when $\theta>1$, and for any

$$\omega>\gamma\Bigl(\prod_{\substack{k\in M\\ k\ne k_0}}\frac{\gamma-\lambda_k}{\gamma-\mu_k}-1\Bigr)$$

when $\theta<1$, there is a matrix (4.13) and a self-adjoint extension $J'$ of the operator whose matrix representation is (4.13) such that $\sigma(J')=\sigma(J)$ and $\sigma(\widetilde{J}')=\sigma(\widetilde{J})$, where $\widetilde{J}'$ is given by (4.14) with

$$\theta':=+\sqrt{\frac{\gamma}{\omega+\gamma}}\,,\qquad h':=\omega\,.$$

References

[1] N. I. Akhiezer, "The Classical Moment Problem and Some Related Questions in Analysis," Hafner Publishing Co., New York, 1965.

[2] N. I. Akhiezer and I. M. Glazman, "Theory of Linear Operators in Hilbert Space," Dover Publications, Inc., New York, 1993.

[3] Ju. M. Berezans'kiĭ, "Expansions in Eigenfunctions of Selfadjoint Operators," Translations of Mathematical Monographs, American Mathematical Society, Providence, R.I., 1968.

[4] M. Sh. Birman and M. Z. Solomjak, "Spectral Theory of Selfadjoint Operators in Hilbert Space," Mathematics and its Applications (Soviet Series), D. Reidel Publishing Co., Dordrecht, 1987.

[5] M. T. Chu and G. H. Golub, "Inverse Eigenvalue Problems: Theory, Algorithms and Applications," Numerical Mathematics and Scientific Computation, Oxford University Press, New York, 2005.

[6] C. de Boor and G. H.
Golub, The numerically stable reconstruction of a Jacobi matrix from spectral data, Linear Algebra Appl. (1978), 245–260.

[7] R. del Rio and M. Kudryavtsev, Inverse problems for Jacobi operators I: Interior mass-spring perturbations in finite systems, arXiv:1106.1691.

[8] R. del Rio, M. Kudryavtsev and L. O. Silva, Inverse problems for Jacobi operators II: Mass perturbations of semi-infinite mass-spring systems, arXiv:1106.4598.

[9] L. Fu and H. Hochstadt, Inverse theorems for Jacobi matrices, J. Math. Anal. Appl. (1974), 162–168.

[10] F. Gesztesy and B. Simon, m-functions and inverse spectral analysis for finite and semi-infinite Jacobi matrices, J. Anal. Math. (1997), 267–297.

[11] G. M. L. Gladwell, "Inverse Problems in Vibration," Second edition, Solid Mechanics and its Applications, Kluwer Academic Publishers, Dordrecht, 2004.

[12] R. Z. Halilova, An inverse problem (Russian), Izv. Akad. Nauk Azerbaĭdžan. SSR Ser. Fiz.-Tehn. Mat. Nauk (1967), 169–175.

[13] B. Ja. Levin, "Distribution of Zeros of Entire Functions," Translations of Mathematical Monographs, American Mathematical Society, Providence, R.I., 1980.

[14] V. A. Marchenko and T. V. Misyura, "Señalamientos Metodológicos y Didácticos al Tema: Problemas Inversos de la Teoría Espectral de Operadores de Dimensión Finita," Monografías IIMAS-UNAM, No. 28, México, 2004.

[15] Y. M. Ram, Inverse eigenvalue problem for a modified vibrating system, SIAM J. Appl. Math. (1993), 1762–1775.

[16] L. O. Silva and R. Weder, On the two-spectra inverse problem for semi-infinite Jacobi matrices, Math. Phys. Anal. Geom. (2006), 263–290.

[17] B. Simon, The classical moment problem as a self-adjoint finite difference operator, Adv. Math. (1998), 82–203.

[18] M. Spletzer, A. Raman, H. Sumali and J. P. Sullivan, Highly sensitive mass detection and identification using vibration localization in coupled microcantilever arrays, Applied Physics Letters (2008), 114102.

[19] M. Spletzer, A. Raman, A. Q. Wu and X.
Xu, Ultrasensitive mass sensing using mode localization in coupled microcantilevers, Applied Physics Letters (2006), 254102.

[20] G. Teschl, Trace formulas and inverse spectral theory for Jacobi operators, Comm. Math. Phys. (1998), 175–202.

[21] G. Teschl, "Jacobi Operators and Completely Integrable Nonlinear Lattices," Mathematical Surveys and Monographs, 72