Resolvents of elliptic operators on quantum graphs with small edges: holomorphy and Taylor series
D.I. Borisov
Institute of Mathematics, Ufa Federal Research Center, Russian Academy of Sciences, Ufa, Russia, & Bashkir State University, Ufa, Russia, & University of Hradec Králové, Hradec Králové, Czech Republic; [email protected]
Abstract
We consider an arbitrary metric graph, to which we glue a graph with edges of lengths proportional to $\varepsilon$, where $\varepsilon$ is a small positive parameter. On such a graph, we consider a general self-adjoint second order differential operator $H_\varepsilon$ with varying coefficients subject to general vertex conditions; all coefficients in the differential expression and the vertex conditions are supposed to be holomorphic in $\varepsilon$. We introduce a special operator on a special graph obtained by rescaling the aforementioned small edges and assume that it has no embedded eigenvalues at the threshold of its essential spectrum. Under this assumption, we show that certain parts of the resolvent of $H_\varepsilon$ are holomorphic in $\varepsilon$ and we show how to find effectively all coefficients in their Taylor series. This allows us to represent the resolvent of $H_\varepsilon$ by a uniformly converging Taylor-like series, whose partial sums can be used for approximating the resolvent up to an arbitrary power of $\varepsilon$. In particular, the zero-order approximation reproduces recent convergence results by G. Berkolaiko, Yu. Latushkin, S. Sukhtaiev and by C. Cacciapuoti, but we additionally show that the next-to-leading terms in the $\varepsilon$-expansions of the coefficients in the differential expression and vertex conditions can contribute to the limiting operator, producing a Robin part at the vertices to which the small edges are incident. We also discuss possible generalizations of our model, including both the case of a more general geometry of the small parts of the graph and a non-holomorphic $\varepsilon$-dependence of the coefficients in the differential expression and vertex conditions.

Keywords: graph, small edge, resolvent, holomorphy, Taylor series, approximation

The theory of quantum graphs or, in other words, the spectral theory of differential operators on graphs, is actively developing nowadays, and a rather new direction is devoted to studying quantum graphs with small edges.
One of the early studied problems came from homogenization theory and was devoted to models of woven-membrane kind, see [5], [6]. The well-known result states that an elliptic operator on a graph looking like a net with small edges converges to a two-dimensional operator on a Euclidean domain approximated in a natural sense by such a graph.

Another situation, which attracted quite a lot of attention, concerned graphs with finitely many small edges. As a rather early publication on this subject we mention paper [7], in which it was shown that a general vertex condition in a graph can be approximated in the norm resolvent sense via ornamenting by small edges supporting a magnetic field and with delta-coupling at the vertices. The case of a general graph with finitely many small edges was addressed in [1]. On such a graph, a Schrödinger operator subject to general vertex conditions was considered. The main result stated that under a certain non-resonance condition, see Condition 3.2 in [1], the considered operator converged in the norm resolvent sense to a Schrödinger operator on the limiting graph without small edges; the small edges were replaced by certain limiting conditions at the vertices to which they were attached. The vertex conditions in [1] were described in terms of symplectic geometry and Lagrangian planes; the limiting vertex conditions were also given in the same terms. A similar study was made in [4], where a star graph was considered, in which the vertex was replaced by a small rescaled finite graph. The operator on this perturbed graph was designed so that after rescaling the small graph back to a finite size, the differential expression and vertex conditions turned out to be independent of a small parameter characterizing the size of the small graph. The main result of [4] provided the leading terms in the asymptotics for the perturbed resolvent and the estimates for the error term.
The limiting condition at the vertex of the limiting star graph contained only the Neumann and Dirichlet parts.

The above described results motivated further studies of graphs with small edges, aimed at better understanding and describing the behavior of the resolvents. A natural question, apart from determining the limiting operator, is to find several next terms in the asymptotics for the resolvent, or even a complete asymptotic expansion. This question was addressed in a few recent papers [9], [10], [11] for some toy models represented by very simple graphs. A main feature of the considered models was the possibility of a singular dependence of the coefficients in the differential expression on a small parameter governing the lengths of the small edges. The main results obtained for these models stated that the resolvents were holomorphic in a certain sense with respect to the small parameter and that in the limit, the condition at the vertex to which the small edges shrank could involve a Robin part. This Robin part was generated by the next-to-leading terms in the expansions of the coefficients of the differential expression with respect to the small parameter. For the resolvents, several first terms in their asymptotic expansions were found. The holomorphic dependence of the resolvent on the small parameter was in some sense surprising, since small edges are an example of a singular perturbation and usually, while dealing with a singular perturbation, one can hope only to find an asymptotic expansion for the resolvent but not to prove that this expansion is a converging Taylor series.

The present work is motivated by papers [1], [4]. We consider a general graph $\Gamma$, to which a small graph $\gamma_\varepsilon$ is glued. This small graph is obtained by scaling a given graph $\gamma$ down by the factor $\varepsilon$, where $\varepsilon$ is a small parameter, see Figures 1, 2. On this perturbed graph with small edges we define a second order scalar differential operator $H_\varepsilon$ depending on $\varepsilon$.
The differential expression is of a general form involving first order terms and varying coefficients. The vertex conditions are also of a general matrix form with arbitrary, not necessarily unitary, matrix coefficients. All coefficients in the differential expression and the vertex conditions depend on $\varepsilon$. For the coefficients in the vertex conditions and in the differential expression on the non-small edges in $\Gamma$, this dependence is holomorphic in $\varepsilon$, while the coefficients in the differential expression on the small edges in $\gamma_\varepsilon$ are meromorphic in $\varepsilon$. On the coefficients in the differential expression and the vertex conditions of $H_\varepsilon$ we impose conditions ensuring its self-adjointness. Then we attach finitely many leads to the graph $\gamma$ and denote the extended graph by $\gamma_\infty$, see Figure 2. On this extended graph $\gamma_\infty$ we consider a self-adjoint operator $H_\gamma$ with a differential expression and vertex conditions generated in a certain way by those of the operator $H_\varepsilon$. Our main assumption is that the operator $H_\gamma$ has no embedded eigenvalues at the bottom of its essential spectrum; this does not exclude the presence of possible virtual levels at the same bottom. This assumption is of the same nature as the non-resonance condition in work [1]. Under this condition, our first main result states that the resolvent of the operator $H_\varepsilon$ is in some sense holomorphic in $\varepsilon$. Namely, we consider the restrictions of this resolvent to the graphs $\Gamma$ and $\gamma_\varepsilon$, and the latter restriction is then sandwiched between the rescaling operators; such parts of the resolvent turn out to be holomorphic in $\varepsilon$. Our second main result provides an effective recurrent scheme allowing one to determine all coefficients in the Taylor series for the aforementioned parts of the resolvent of $H_\varepsilon$. And our third main result then provides a Taylor-like series for this resolvent and estimates for the errors while approximating the resolvent by partial sums of this Taylor-like series.
All these results are established for the resolvent of $H_\varepsilon$ regarded not only as an operator acting in $L_2$, but also as acting from $L_2$ into the Sobolev spaces $W_2^j$, $j=1,2$, and into the spaces of continuous functions $C$ and $C^j$, $j=1,2$. Due to the embedding of $W_2^2$ into $C^1$, the same results hold also for the resolvent regarded as an operator from $L_2$ into $C^1$. As a corollary of our results, we find a limiting operator on the graph $\Gamma$, the resolvent of which approximates the resolvent of $H_\varepsilon$ in the sense of the above norms.

This result is similar to the main results in [1] and [4], but our result has a special feature. Namely, we show that the next-to-leading terms in the $\varepsilon$-expansions of the coefficients in the differential expression and vertex conditions of $H_\varepsilon$ generate a Robin part in the limiting vertex conditions at a vertex $M_0$, through which the graph $\gamma_\varepsilon$ is glued to the graph $\Gamma$. Such a case was excluded by the formulation of the problem in [4]. In [1], the Robin part could appear in the limit, but only due to the fact that, in our terms, the vertex conditions at the ends of the small edges were independent of $\varepsilon$.

Figure 1: Examples of initial graphs $\Gamma$ and $\gamma$. Here the edges incident to $M_0$ are split into three groups ($n=3$).

Finally, we can also conclude that, despite the small edges being a singular perturbation, the resolvent of the operator $H_\varepsilon$ depends holomorphically on $\varepsilon$. And from this point of view, it behaves similarly to the situation when the edges vary but do not vanish. Such a situation was considered in [8], and the main result stated that the resolvent of the Schrödinger operator on such graphs depends analytically on the edge lengths and the matrices involved in the vertex conditions.

In conclusion of this section, let us briefly describe the structure of the paper. In the next section we formulate the problem, our main assumptions and the main results.
In the third section we discuss how far our results can be extended to the case when several small graphs are glued to the graph $\Gamma$, see Figure 3, including the case of dependence on several independent small parameters. We also discuss the case when the assumed holomorphy in the small parameter for the coefficients in the differential expression and vertex conditions is replaced by infinite differentiability in $\varepsilon$, or just by the existence of power asymptotic expansions. In the fourth and fifth sections we introduce and study some auxiliary operators on the graphs $\Gamma$ and $\gamma$. These operators are then employed in the sixth section, in which we prove the holomorphy of the parts of the resolvent. In the seventh section we describe an effective recurrent procedure for determining all Taylor coefficients for these parts, and this is used in the eighth section to construct a Taylor-like series for the resolvent of $H_\varepsilon$ and to prove the estimates for the error terms in this series.

Let $\Gamma$ be a finite metric graph having no isolated vertices. The lengths of its edges can be both finite and infinite. We fix arbitrarily a vertex in the graph $\Gamma$ and denote it by $M_0$. The degree of this vertex is finite, that is, there exist finitely many edges $e_i$, $i=1,\dots,d$, incident to $M_0$. Each loop in the family $\{e_i\}_{i=1,\dots,d}$ is counted twice. By $\gamma$ we denote one more graph, which is supposed to be finite, metric, having only edges of finite lengths and no isolated vertices or edges, see Figure 1.

Letting $\varepsilon$ be a small positive parameter, we shrink each edge of the graph $\gamma$ by the factor $\varepsilon$, and the resulting graph is denoted by $\gamma_\varepsilon$. More precisely, the vertices of the graph $\gamma_\varepsilon$ are the same as those of $\gamma$, while each edge $e\in\gamma$ is replaced by an edge of length $\varepsilon|e|$.

We choose arbitrary vertices $M_j$, $j=1,\dots,n$, in the graph $\gamma_\varepsilon$ and then partition the aforementioned edges $e_i$, $i=1,\dots,d$, in the graph $\Gamma$ into $n$ non-empty groups $\{e_i\}_{i\in J_j}$, $j=1,\dots,n$, where $J_j$ are some non-empty sets of indices and $\bigcup\limits_{j=1}^{n}J_j=\{1,\dots,d\}$, $n\le d$. Then we replace the vertex $M_0$ in the graph $\Gamma$ by its $n$ copies, one copy for each group $\{e_i\}_{i\in J_j}$. Finally, we introduce a graph, which will be the main object of our study, as the union of the graphs $\Gamma$ and $\gamma_\varepsilon$, where the vertices $M_j$, $j=1,\dots,n$, are identified with the aforementioned copies of the vertex $M_0$ in the graph $\Gamma$, see Figure 2. After the described gluing of the graphs $\Gamma$ and $\gamma_\varepsilon$, the edges $e_i$, $i=1,\dots,d$, are no longer connected at the vertex $M_0$; instead, each group $\{e_i\}_{i\in J_j}$ is incident to the vertex $M_j$ in the graph $\gamma_\varepsilon$. Hereafter, we shall often identify the graphs $\Gamma$ and $\gamma_\varepsilon$ with the corresponding subgraphs in $\Gamma_\varepsilon$. In particular, in the sense of this identification, each function defined on $\Gamma_\varepsilon$ is also supposed to be defined on $\Gamma$ and $\gamma_\varepsilon$ and vice versa. On each edge in $\Gamma$ and $\gamma$ we fix a direction and, consequently, a variable on it. The chosen direction is then naturally transferred to the graph $\Gamma_\varepsilon$.

Figure 2: Graph $\Gamma_\varepsilon$ with a glued small graph and graph $\gamma_\infty$. The small edges in the graph $\Gamma_\varepsilon$ are indicated by thin lines. The leads in the graph $\gamma_\infty$ are of light gray color.

The main object of our study is an unbounded operator $H_\varepsilon$ in $L_2(\Gamma_\varepsilon)$ with the differential expression
$$\hat H(\varepsilon):=-\frac{d}{dx}V^{(2)}_\varepsilon\frac{d}{dx}+\mathrm i\left(\frac{d}{dx}V^{(1)}_\varepsilon+V^{(1)}_\varepsilon\frac{d}{dx}\right)+V^{(0)}_\varepsilon, \qquad (2.1)$$
where the coefficients are defined as
$$V^{(i)}_\varepsilon:=\begin{cases} V^{(i)}_\Gamma(\cdot,\varepsilon) & \text{on } \Gamma,\\[2pt] \varepsilon^{i-2}S_\varepsilon V^{(i)}_\gamma(\cdot,\varepsilon) & \text{on } \gamma_\varepsilon.\end{cases} \qquad (2.2)$$
Here $V^{(i)}_\Gamma=V^{(i)}_\Gamma(\cdot,\varepsilon)$ and $V^{(i)}_\gamma=V^{(i)}_\gamma(\cdot,\varepsilon)$ are some real bounded measurable functions defined on the graphs $\Gamma$ and $\gamma$, while the symbol $S_\varepsilon$ stands for a linear operator mapping $L_2(\gamma)$ onto $L_2(\gamma_\varepsilon)$ by the rule
$$(S_\varepsilon u)(x):=u\Big(\frac{x}{\varepsilon}\Big) \quad\text{as } x\in e_\varepsilon \qquad (2.3)$$
on each edge $e_\varepsilon$ in the graph $\gamma_\varepsilon$.
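The power $\varepsilon^{i-2}$ in (2.2) is exactly the scaling that makes the differential expression on the small edges autonomous at leading order after rescaling. As a quick side calculation (ours, not part of the paper's argument): substituting $x=\varepsilon\xi$, so that $\frac{d}{dx}=\varepsilon^{-1}\frac{d}{d\xi}$, each term of (2.1) on $\gamma_\varepsilon$ picks up the same factor $\varepsilon^{-2}$:

```latex
% Conjugating by the rescaling operator S_\varepsilon on the small edges:
S_\varepsilon^{-1}\,\hat H(\varepsilon)\,S_\varepsilon
  = \varepsilon^{-2}\left(
      -\frac{d}{d\xi}\,V^{(2)}_\gamma(\cdot,\varepsilon)\,\frac{d}{d\xi}
      + \mathrm i\Big(\frac{d}{d\xi}\,V^{(1)}_\gamma(\cdot,\varepsilon)
      + V^{(1)}_\gamma(\cdot,\varepsilon)\,\frac{d}{d\xi}\Big)
      + V^{(0)}_\gamma(\cdot,\varepsilon)\right)
  \quad\text{on }\gamma .
% The second-order term contributes \varepsilon^{-2} from two derivatives, the first-order
% term \varepsilon^{-1}\cdot\varepsilon^{-1}, and the potential \varepsilon^{-2} directly from (2.2).
```

At $\varepsilon=0$ the expression in brackets is precisely the one that will define the operator $H_\gamma$ on $\gamma$ below.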
We assume that the functions $V^{(i)}_\Gamma$, $V^{(i)}_\gamma$ belong to the following spaces: $V^{(2)}_\Gamma, V^{(1)}_\Gamma\in W_\infty^1(\Gamma)$, $V^{(2)}_\gamma, V^{(1)}_\gamma\in W_\infty^1(\gamma)$, $V^{(0)}_\Gamma\in L_\infty(\Gamma)$, $V^{(0)}_\gamma\in L_\infty(\gamma)$, and that these functions are holomorphic in $\varepsilon$ in the sense of the corresponding norms. We also assume a uniform ellipticity condition for the expression $\hat H(\varepsilon)$, namely, the existence of a fixed constant $c_H>0$ independent of $x\in\Gamma$, $\xi\in\gamma$ such that
$$V^{(2)}_\Gamma(x,0)\ge c_H \quad\text{a.e. on } \Gamma, \qquad V^{(2)}_\gamma(\xi,0)\ge c_H \quad\text{a.e. on } \gamma.$$
Thanks to the assumed holomorphy in $\varepsilon$, this condition implies the same condition for $V^{(2)}_\Gamma(x,\varepsilon)$ and $V^{(2)}_\gamma(\xi,\varepsilon)$ for all sufficiently small $\varepsilon$ with the constant $c_H$ replaced by $c_H/2$.

The vertex conditions on the graph $\Gamma_\varepsilon$ are imposed as follows. Let $M$ be an arbitrary vertex in $\Gamma_\varepsilon$ of a degree $d(M)\ge 1$, let $e_i(M)$, $i=1,\dots,d(M)$, be the edges incident to this vertex, and let $u_i:=u\big|_{e_i(M)}$, $i=1,\dots,d(M)$, be the restrictions of a function $u$ to the edges $e_i(M)$. We introduce two $d(M)$-dimensional vectors
$$U_M(u):=\begin{pmatrix} u_1(M)\\ \vdots\\ u_{d(M)}(M)\end{pmatrix}, \qquad U'_M(u):=\begin{pmatrix} \dfrac{du_1}{dx_1}(M)\\ \vdots\\ \dfrac{du_{d(M)}}{dx_{d(M)}}(M)\end{pmatrix}, \qquad (2.4)$$
where $x_i$ is the variable on the edge $e_i$; this variable is chosen according to the above discussed direction. At each vertex $M\in\Gamma_\varepsilon$ we impose a general vertex condition
$$A_M(\varepsilon)U_M(u)+B_M(\varepsilon)U'_M(u)=0, \qquad (2.5)$$
where $A_M(\varepsilon)$ and $B_M(\varepsilon)$ are some matrices of size $d(M)\times d(M)$ holomorphic in $\varepsilon$.
We assume that the matrix
$$A_M(\varepsilon)\big(V^{(2)}_M(\varepsilon)\big)^{-1}B^*_M(\varepsilon)+\mathrm i B_M(\varepsilon)\big(V^{(2)}_M(\varepsilon)\big)^{-1}V^{(1)}_M(\varepsilon)\big(V^{(2)}_M(\varepsilon)\big)^{-1}B^*_M(\varepsilon) \qquad (2.6)$$
is self-adjoint, where
$$V^{(j)}_M(\varepsilon):=\operatorname{diag}\big\{\nu_i(M)\,V^{(j)}_\varepsilon\big|_{e_i(M)}(M)\big\}_{i=1,\dots,d(M)}, \qquad j=1,2, \qquad (2.7)$$
and $e_i(M)$ are the edges incident to the vertex $M$, while $\nu_i(M):=1$ if the inward direction on the edge $e_i(M)$ at the vertex $M$ coincides with the above chosen orientation on this edge and $\nu_i(M):=-1$ otherwise.

The matrices $A_M(\varepsilon)$ and $B_M(\varepsilon)$ in conditions (2.5) are defined non-uniquely, up to left multiplication by an arbitrary non-degenerate $d(M)\times d(M)$ matrix. In view of this fact, we denote $r(M):=\operatorname{rank}B_M(0)$ and without loss of generality we additionally assume that for each $M\in\Gamma_\varepsilon$, the first $r(M)$ rows in the matrix $B_M(0)$ are linearly independent and the other rows are zero, while each row among the last $d(M)-r(M)$ ones in the matrix $A_M(0)$ is non-zero. Having this assumption in mind, at each vertex $M\in\Gamma_\varepsilon$ we impose a rank condition:
$$\operatorname{rank}\big(A_M(0)\ \ B_M(0)\big)=d(M),$$
which is obviously equivalent to
$$\operatorname{rank}\big(A_M(\varepsilon)\ \ B_M(\varepsilon)\big)=d(M) \qquad (2.8)$$
for all sufficiently small $\varepsilon$.

The domain of the operator $H_\varepsilon$ consists of the functions in $\dot W_2^2(\Gamma_\varepsilon)$ satisfying the imposed vertex conditions; hereinafter, for an arbitrary graph we denote $\dot W_2^j(\cdot):=\bigoplus\limits_{e\in\cdot}W_2^j(e)$, $j=1,2$. The action of the operator $H_\varepsilon$ on such functions is defined by differential expression (2.1).

To formulate our main results, we need some additional notations. By $\gamma_\infty$ we denote a graph obtained by attaching leads (edges of infinite length) $e^\infty_i$, $i\in J_j$, $j=1,\dots,n$, to each vertex $M_j$, $j=1,\dots,n$, in the graph $\gamma$; the vertex $M_j$ serves as the origin for the attached edges $e^\infty_i$. The variable on the graph $\gamma_\infty$ is denoted by $\xi$.
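For orientation, the classical Kirchhoff (standard) conditions are a simple instance of (2.5). With $V^{(2)}_\varepsilon\equiv 1$, $V^{(1)}_\varepsilon\equiv 0$, a vertex of degree $2$ and both edges directed outward from $M$ (so $\nu_i(M)=1$), matrix (2.6) reduces to $A_M B_M^*$, and everything can be checked by hand (an illustrative special case of ours, not taken from the paper):

```latex
% Continuity of u and zero total flux at a vertex of degree 2:
A_M = \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix},\qquad
B_M = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix},\qquad
A_M B_M^* = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
% so matrix (2.6) is trivially self-adjoint, while the 2x4 matrix (A_M\ B_M)
% has rank 2, i.e., rank condition (2.8) holds.
```

Here $r(M)=1$, the first row of $B_M$ being placed among its non-zero rows after a row swap, in accordance with the normalization described above.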
On the graph $\gamma_\infty$ we introduce an operator $H_\gamma$ with the differential expression
$$\hat H_\gamma:=-\frac{d}{d\xi}V^{(2)}_\gamma(\cdot,0)\frac{d}{d\xi}+\mathrm i\left(\frac{d}{d\xi}V^{(1)}_\gamma(\cdot,0)+V^{(1)}_\gamma(\cdot,0)\frac{d}{d\xi}\right)+V^{(0)}_\gamma(\cdot,0) \quad\text{on }\gamma,$$
$$\hat H_\gamma:=-v^{(2)}_i(0)\frac{d^2}{d\xi^2} \quad\text{on } e^\infty_i,\ \ i\in J_j,\ \ j=1,\dots,n, \qquad v^{(p)}_i(\varepsilon):=V^{(p)}_\Gamma\big|_{e_i}(M_0,\varepsilon), \qquad (2.9)$$
where, we recall, $e_i$ are the edges in the graph $\Gamma$ incident to the vertex $M_0$. The vertex conditions read as
$$A^{(0)}_M U_M(u)+B^{(0)}_M U'_M(u)=0 \quad\text{at each } M\in\gamma_\infty, \qquad (2.10)$$
where the vectors $U_M(u)$ and $U'_M(u)$ are introduced as in (2.4) with the derivatives $\frac{du_i}{dx_i}$ replaced by $\frac{du_i}{d\xi_i}$ and
$$A^{(0)}_M:=\begin{pmatrix} 0\\ A^-_M(0)\end{pmatrix}, \qquad B^{(0)}_M:=\begin{pmatrix} B^+_M(0)\\[3pt] \dfrac{dB^-_M}{d\varepsilon}(0)\end{pmatrix} \quad\text{if } B_M(0)\ne 0, \qquad (2.11)$$
$$A^{(0)}_M:=A_M(0), \qquad B^{(0)}_M:=\frac{dB_M}{d\varepsilon}(0) \quad\text{if } B_M(0)=0. \qquad (2.12)$$
Here $A^-_M(\cdot)$ and $B^-_M(\cdot)$ are the matrices formed by the last $d(M)-r(M)$ rows of, respectively, the matrices $A_M(\cdot)$ and $B_M(\cdot)$, while $B^+_M(\cdot)$ is the matrix formed by the first $r(M)$ rows of the matrix $B_M$. The domain of the operator $H_\gamma$ consists of the functions in $\dot W_2^2(\gamma_\infty)$ satisfying the imposed vertex conditions. It will be shown later, see Section 7.2, that the operator $H_\gamma$ is self-adjoint.

Since the graph $\gamma$ is finite and all its edges are of finite lengths, the only edges of infinite length in the graph $\gamma_\infty$ are the leads $e^\infty_i$ attached to the vertices $M_j$, $j=1,\dots,n$. The differential expression of the introduced operator $H_\gamma$ on these leads is just the negative Laplacian multiplied by the constants $v^{(2)}_i(0)$, see (2.9). This is why it is straightforward to confirm that the essential spectrum of the operator $H_\gamma$ is the half-line $[0,+\infty)$.

The main condition we assume in this work is as follows.

(A). The operator $H_\gamma$ has no embedded eigenvalue at the bottom of its essential spectrum.

At the bottom of its essential spectrum, the operator $H_\gamma$ can have a virtual level, namely, there can exist a bounded non-trivial solution $\psi\in\dot W_2^2(\gamma)\oplus\bigoplus\limits_{i\in J_j,\ j=1,\dots,n}W_{2,loc}^2(e^\infty_i)$ of the boundary value problem
$$\hat H_\gamma\psi=0 \ \ \text{on }\gamma_\infty, \qquad A^{(0)}_M U_M(\psi)+B^{(0)}_M U'_M(\psi)=0 \ \ \text{at each } M\in\gamma_\infty. \qquad (2.13)$$
Again, in view of the definition of the differential expression $\hat H_\gamma$ on the leads $e^\infty_i$, the aforementioned boundedness condition for $\psi$ is equivalent to the identities $\psi=const$ on $e^\infty_i$, $i\in J_j$, $j=1,\dots,n$, where $const$ stands for some constants depending in general on the choice of the edge $e^\infty_i$.

Given a function $u$ defined and continuous on the edges $e^\infty_i$ in the vicinity of the points $M_j$, we denote
$$U_\gamma(u):=\Big(u\big|_{e^\infty_i}(M_j)\Big)_{i\in J_j,\ j=1,\dots,n},$$
where $u\big|_{e^\infty_i}$ denotes the restriction of $u$ to the edge $e^\infty_i$. In view of the above, Condition (A) is equivalent to the following: each non-trivial solution $\psi$ to problem (2.13) satisfies
$$U_\gamma(\psi)\ne 0. \qquad (2.14)$$

To formulate our main results, we need to introduce some additional notations. Let $P_\Gamma: L_2(\Gamma_\varepsilon)\to L_2(\Gamma)$ and $P_{\gamma_\varepsilon}: L_2(\Gamma_\varepsilon)\to L_2(\gamma_\varepsilon)$ be the operators of restriction to the graphs $\Gamma$ and $\gamma_\varepsilon$, that is, on each $f\in L_2(\Gamma_\varepsilon)$ they act as follows: $P_\Gamma f:=f\big|_\Gamma$, $P_{\gamma_\varepsilon}f:=f\big|_{\gamma_\varepsilon}$. In the sense of the decomposition $L_2(\Gamma_\varepsilon)=L_2(\Gamma)\oplus L_2(\gamma_\varepsilon)$ these operators satisfy the identity
$$P_\Gamma\oplus P_{\gamma_\varepsilon}=\mathcal I_{\Gamma_\varepsilon}, \qquad (2.15)$$
where $\mathcal I_{\Gamma_\varepsilon}$ is the identity mapping in $L_2(\Gamma_\varepsilon)$.

Under Condition (A), we allow the operator $H_\gamma$ to have a virtual level at the bottom of its essential spectrum. Let $\psi^{(j)}$, $j=1,\dots,k$, be linearly independent bounded non-trivial solutions to problem (2.13) satisfying condition (2.14). It is clear that $k\le d$, since once $k>d$ it is possible to find a linear combination of the functions $\psi^{(j)}$ for which condition (2.14) is violated. If the operator $H_\gamma$ has no virtual level at the bottom of its essential spectrum, we let $k:=0$. We denote $\Psi^{(j)}:=U_\gamma(\psi^{(j)})$, $j=1,\dots,k$. We choose the functions $\psi^{(j)}$ so that the associated vectors $\Psi^{(j)}$ are orthonormalized in $\mathbb C^d$. If $k<d$, we choose arbitrary vectors $\Psi^{(j)}\in\mathbb C^d$, $j=k+1,\dots$
$d$, so that the vectors $\Psi^{(j)}\in\mathbb C^d$, $j=1,\dots,d$, form an orthonormalized basis in $\mathbb C^d$ and the matrix $\Psi:=\big(\Psi^{(1)}\ \dots\ \Psi^{(k)}\ \Psi^{(k+1)}\ \dots\ \Psi^{(d)}\big)$ is unitary.

For each vertex $M\in\gamma_\infty$ we introduce two matrices $A^{(1)}_M$, $B^{(1)}_M$ as follows:
$$A^{(1)}_M:=\begin{pmatrix} A^+_M(0)\\[3pt] \dfrac{dA^-_M}{d\varepsilon}(0)\end{pmatrix}, \qquad B^{(1)}_M:=\begin{pmatrix} \dfrac{dB^+_M}{d\varepsilon}(0)\\[3pt] \dfrac12\dfrac{d^2B^-_M}{d\varepsilon^2}(0)\end{pmatrix} \quad\text{if } B_M(0)\ne 0,$$
$$A^{(1)}_M:=\frac{dA_M}{d\varepsilon}(0), \qquad B^{(1)}_M:=\frac12\frac{d^2B_M}{d\varepsilon^2}(0) \quad\text{if } B_M(0)=0.$$
In terms of these matrices, to each vertex $M\in\gamma_\infty$ we associate extra matrices:
$$\tilde A^{(0)}_M:=A^{(0)}_M+\mathrm i B^{(0)}_M\big(V^{(2)}_{\gamma,M}(0)\big)^{-1}V^{(1)}_{\gamma,M}(0), \qquad \tilde B^{(0)}_M:=B^{(0)}_M\big(V^{(2)}_{\gamma,M}(0)\big)^{-1},$$
$$U^{(0)}_M=-\big(\tilde A^{(0)}_M-\mathrm i\tilde B^{(0)}_M\big)^{-1}\big(\tilde A^{(0)}_M+\mathrm i\tilde B^{(0)}_M\big), \qquad (2.16)$$
$$Q_M(\psi):=2\mathrm i\big(A^{(0)}_M-\mathrm i B^{(0)}_M\big)^{-1}\Big(A^{(1)}_M U_M(\psi)+B^{(1)}_M U'_M(\psi)\Big), \qquad (2.17)$$
$$V^{(j)}_{\gamma,M}(\varepsilon):=\operatorname{diag}\big\{\nu_i(M)\,V^{(j)}_\gamma\big|_{e_i(M)}(M,\varepsilon)\big\}_{i=1,\dots,d(M)}, \qquad (2.18)$$
where $e_i(M)$ are the edges incident to the vertex $M$, the numbers $\nu_i(M)$ are defined as in (2.7), and, while applying formula (2.18) to $M=M_j$, the functions $V^{(j)}_\gamma$ are supposed to be continued to $e^\infty_i$, $i\in J_j$, $j=1,\dots,n$, as follows: $V^{(2)}_\gamma(\cdot,\varepsilon)\equiv v^{(2)}_i(\varepsilon)$, $V^{(1)}_\gamma(\cdot,\varepsilon)\equiv\varepsilon v^{(1)}_i(\varepsilon)$.

We shall show that the matrix $U^{(0)}_M$ defined in (2.16) is unitary, see Lemma 4.1. By $P^{(0)}_M$ we denote the projector in $\mathbb C^{d(M)}$ onto the eigenspace of the matrix $U^{(0)}_M$ associated with the eigenvalue $-1$, and we let $P^{(0)}_{M,\perp}:=E_{d(M)}-P^{(0)}_M$.

We introduce a $k\times k$ matrix
$$Q:=\begin{pmatrix} Q^{(11)} & \dots & Q^{(k1)}\\ \vdots & & \vdots\\ Q^{(1k)} & \dots & Q^{(kk)}\end{pmatrix},$$
with the entries defined as
$$Q^{(ij)}:=Q^{(ij)}_\gamma+\sum_{M\in\gamma_\infty}Q^{(ij)}_M, \qquad (2.19)$$
$$\begin{aligned} Q^{(ij)}_\gamma:=&\left(\frac{dV^{(2)}_\gamma}{d\varepsilon}(\cdot,0)\frac{d\psi^{(i)}}{d\xi},\frac{d\psi^{(j)}}{d\xi}\right)_{L_2(\gamma)}+\left(\frac{d\psi^{(i)}}{d\xi},\mathrm i\frac{dV^{(1)}_\gamma}{d\varepsilon}(\cdot,0)\psi^{(j)}\right)_{L_2(\gamma)}\\ &+\left(\mathrm i\frac{dV^{(1)}_\gamma}{d\varepsilon}(\cdot,0)\psi^{(i)},\frac{d\psi^{(j)}}{d\xi}\right)_{L_2(\gamma)}+\left(\frac{dV^{(0)}_\gamma}{d\varepsilon}(\cdot,0)\psi^{(i)},\psi^{(j)}\right)_{L_2(\gamma)},\end{aligned} \qquad (2.20)$$
$$Q^{(ij)}_M:=\big(q^{(i)}_M,U_M(\psi^{(j)})\big)_{\mathbb C^{d(M)}}-\frac{\mathrm i}{2}\Big(Q_M(\psi^{(i)}),V^{(0)}_{\gamma,M}(\psi^{(j)})\Big)_{\mathbb C^{d(M)}}, \qquad (2.21)$$
$$q^{(i)}_M:=\frac{dV^{(2)}_{\gamma,M}}{d\varepsilon}(0)U'_M(\psi^{(i)})-\mathrm i\frac{dV^{(1)}_{\gamma,M}}{d\varepsilon}(0)U_M(\psi^{(i)})+\big(U^{(0)}_M+E_{d(M)}\big)^{-1}P^{(0)}_{M,\perp}Q_M(\psi^{(i)}), \qquad (2.22)$$
where $\mathrm i$ is the imaginary unit, and $V^{(0)}_{\gamma,M}(\cdot):=V^{(2)}_{\gamma,M}(0)U'_M(\cdot)-\mathrm i V^{(1)}_{\gamma,M}(0)U_M(\cdot)$.

Finally, we define two $d\times d$ matrices:
$$A^{(0)}_{M_0}:=\begin{pmatrix} Q & 0\\ 0 & E_{d-k}\end{pmatrix}\Psi^*+\mathrm i\begin{pmatrix} E_k & 0\\ 0 & 0\end{pmatrix}\Psi^*V^{(1)}_{\Gamma,M_0}(0), \qquad B^{(0)}_{M_0}:=-\begin{pmatrix} E_k & 0\\ 0 & 0\end{pmatrix}\Psi^*V^{(2)}_{\Gamma,M_0}(0), \qquad (2.23)$$
where the symbol $0$ in the first row of the matrix $A^{(0)}_{M_0}$ stands for the zero matrix of size $k\times(d-k)$, while in the second row the same symbol denotes the zero matrix of size $(d-k)\times k$. In the definition of the matrix $B^{(0)}_{M_0}$, the first matrix is of size $d\times d$ and the symbols $0$ denote the zero matrices of, respectively, sizes $k\times(d-k)$, $(d-k)\times k$, and $(d-k)\times(d-k)$. The matrices $V^{(j)}_{\Gamma,M_0}$, $j=1,2$, are defined as
$$V^{(j)}_{\Gamma,M_0}(\varepsilon):=\operatorname{diag}\big\{\nu_i(M_0)\,V^{(j)}_\Gamma\big|_{e_i}(M_0,\varepsilon)\big\}_{i=1,\dots,d}, \qquad j=1,2,$$
where $e_i$ are the edges incident to the vertex $M_0$, while $\nu_i(M_0)$ is defined in the same way as in (2.7).

By $H_0$ we denote an operator on the graph $\Gamma$ with the differential expression
$$-\frac{d}{dx}V^{(2)}_\Gamma(\cdot,0)\frac{d}{dx}+\mathrm i\left(\frac{d}{dx}V^{(1)}_\Gamma(\cdot,0)+V^{(1)}_\Gamma(\cdot,0)\frac{d}{dx}\right)+V^{(0)}_\Gamma(\cdot,0) \qquad (2.24)$$
subject to the vertex conditions
$$A^{(0)}_M U_M(u)+B^{(0)}_M U'_M(u)=0 \quad\text{at each vertex } M\in\Gamma, \qquad (2.25)$$
where the matrices $A^{(0)}_{M_0}$, $B^{(0)}_{M_0}$ are defined in (2.23), while for the other vertices $M\ne M_0$ these matrices are defined as $A^{(0)}_M:=A_M(0)$, $B^{(0)}_M:=B_M(0)$. The operator $H_0$ acts in $L_2(\Gamma)$ on the domain formed by the functions from $\dot W_2^2(\Gamma)$ satisfying vertex conditions (2.25).

For each $\lambda\in\mathbb C\setminus\mathbb R$ the resolvent $(H_\varepsilon-\lambda)^{-1}$ is well-defined. The operators
$$R_\Gamma(\varepsilon,\lambda):=P_\Gamma(H_\varepsilon-\lambda)^{-1}(\mathcal I_\Gamma\oplus S_\varepsilon), \qquad R_\gamma(\varepsilon,\lambda):=S_\varepsilon^{-1}P_{\gamma_\varepsilon}(H_\varepsilon-\lambda)^{-1}(\mathcal I_\Gamma\oplus S_\varepsilon)$$
are also well-defined, linear, and bounded as acting from $L_2(\Gamma)\oplus L_2(\gamma)$ into $\dot W_2^2(\Gamma)$ and $\dot W_2^2(\gamma)$; here the direct sum is understood in the sense of identity (2.15). We also observe an obvious identity implied immediately by the definition of the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$:
$$(H_\varepsilon-\lambda)^{-1}=\big(R_\Gamma(\varepsilon,\lambda)\oplus S_\varepsilon R_\gamma(\varepsilon,\lambda)\big)\big(P_\Gamma\oplus S_\varepsilon^{-1}P_{\gamma_\varepsilon}\big). \qquad (2.26)$$
We define one more operator $R^{(0)}_\gamma(\lambda): L_2(\Gamma)\to\dot W_2^2(\gamma)$ acting on each $f\in L_2(\Gamma)$ by the rule:
$$R^{(0)}_\gamma(\lambda)f:=\sum_{i=1}^{k}c_i(f)\psi^{(i)}, \qquad \begin{pmatrix} c_1(f)\\ \vdots\\ c_k(f)\end{pmatrix}:=\Psi^*U_{M_0}\big((H_0-\lambda)^{-1}f\big), \qquad \Psi:=\big(\Psi^{(1)}\ \dots\ \Psi^{(k)}\big). \qquad (2.27)$$

We introduce the following spaces of continuous functions on graphs:
$$\dot C(\cdot):=\bigoplus_{e\in\cdot}C(e), \qquad \dot C^1(\cdot):=\bigoplus_{e\in\cdot}C^1(e), \qquad \dot C^2_\infty(\cdot):=\bigoplus_{e\in\cdot}C^2_\infty(e),$$
$$C^2_\infty(e):=\big\{u\in C^1(e):\ u''\in L_\infty(e)\big\}, \qquad \|u\|_{C^2_\infty(e)}:=\|u\|_{C^1(e)}+\|u''\|_{L_\infty(e)}.$$
If the functions $V^{(i)}_\Gamma$ and $V^{(i)}_\gamma$ have an additional smoothness, $V^{(i)}_\Gamma\in\dot C^1(\Gamma)$, $V^{(i)}_\gamma\in\dot C^1(\gamma)$, $i=1,2$, $V^{(0)}_\Gamma\in\dot C(\Gamma)$, $V^{(0)}_\gamma\in\dot C(\gamma)$, and are holomorphic in $\varepsilon$ in the norms of these spaces, then we let $C^2_\infty(e):=C^2(e)$. Now we are in a position to formulate our main results.
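Before doing so, it may help to note the two extreme cases of (2.23); the following is merely a reformulation of those formulae, not an additional result:

```latex
% k = 0 (no virtual level): Q and E_k are absent, so at M_0 condition (2.25) reads
\Psi^* U_{M_0}(u) = 0 \iff U_{M_0}(u) = 0 \quad\text{(a Dirichlet condition, since $\Psi$ is unitary)};
% k = d: the block E_{d-k} disappears and condition (2.25) at M_0 becomes
Q\,\Psi^* U_{M_0}(u) + \mathrm i\,\Psi^* V^{(1)}_{\Gamma,M_0}(0)\,U_{M_0}(u)
  - \Psi^* V^{(2)}_{\Gamma,M_0}(0)\,U'_{M_0}(u) = 0,
% a Robin-type condition whose Robin part is carried by the matrix Q.
```

In the intermediate cases $0<k<d$, the condition mixes both behaviors: Dirichlet-type constraints on the components of $\Psi^*U_{M_0}(u)$ beyond the first $k$, and a Robin-type coupling on the first $k$.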
The first result states that the resolvent of the operator $H_\varepsilon$ is holomorphic in $\varepsilon$ in an appropriate sense, namely, that the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ are holomorphic in $\varepsilon$.

Theorem 2.1.
Assume that the matrices $A_M(\varepsilon)$, $B_M(\varepsilon)$ satisfy the above formulated conditions. Then the operators $H_\varepsilon$ and $H_0$ are self-adjoint. Suppose that Condition (A) holds. The operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ are also linear and bounded as acting from $L_2(\Gamma)\oplus L_2(\gamma)$ into $\dot C^2_\infty(\Gamma)$ and $\dot C^2_\infty(\gamma)$. For each $\lambda\in\mathbb C\setminus\mathbb R$ there exists $\varepsilon(\lambda)>0$ such that as $\varepsilon<\varepsilon(\lambda)$, the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ are holomorphic in $\varepsilon$ as operators from $L_2(\Gamma)\oplus L_2(\gamma)$ into $\dot W_2^2(\Gamma)$ and $\dot W_2^2(\gamma)$ and into $\dot C^2_\infty(\Gamma)$ and $\dot C^2_\infty(\gamma)$. In both cases, the leading terms of the Taylor series for these operators read as
$$R_\Gamma(\varepsilon,\lambda)=(H_0-\lambda)^{-1}P_\Gamma+O(\varepsilon), \qquad (2.28)$$
$$R_\gamma(\varepsilon,\lambda)=R^{(0)}_\gamma(\lambda)P_\Gamma+O(\varepsilon). \qquad (2.29)$$

Our second main result shows how to determine recurrently the coefficients in the Taylor series of the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$.

Theorem 2.2.
Assume that the matrices $A_M(\varepsilon)$, $B_M(\varepsilon)$ satisfy the above formulated conditions and Condition (A) holds. For each $(f_\Gamma,f_\gamma)\in L_2(\Gamma)\oplus L_2(\gamma)$, the Taylor series of the functions $R_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ and $R_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ are given by the series
$$R_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)=\sum_{p=0}^{\infty}\varepsilon^p u^\Gamma_p, \qquad u^\Gamma_0:=(H_0-\lambda)^{-1}f_\Gamma,$$
$$R_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)=\sum_{p=0}^{\infty}\varepsilon^p u^\gamma_p, \qquad u^\gamma_0:=\sum_{i=1}^{k}c_i(f_\Gamma)\psi^{(i)}, \qquad (2.30)$$
converging uniformly in $\varepsilon$ in the spaces $\dot W_2^2(\Gamma)$ and $\dot W_2^2(\gamma)$ and in the spaces $\dot C^2_\infty(\Gamma)$ and $\dot C^2_\infty(\gamma)$. The coefficients of these series satisfy the representations
$$u^\Gamma_p=u^\Gamma_{p,*}+\sum_{i=1}^{k}c_{i,p}v_{i,\Gamma}, \qquad u^\gamma_p=u^\gamma_{p,*}+\sum_{i=1}^{k}c_{i,p}\psi^{(i)}, \qquad (2.31)$$
where $u^\Gamma_{p,*}$ is the unique solution to problem (8.1), (8.2) with the vertex condition
$$U_{M_0}(u^\Gamma_{p,*})=U_\gamma(u^\gamma_{p,*}), \qquad (2.32)$$
while $u^\gamma_{p,*}$ is a particular solution to problem (8.9), (8.12), (8.5) obeying the orthogonality condition
$$\big(U_\gamma(u^\gamma_{p,*}),\Psi^{(j)}\big)_{\mathbb C^d}=0, \qquad j=1,\dots,k.$$
The symbols $v_{i,\Gamma}$ denote the unique solutions to problems (8.15). The constants $c_{i,p}$, $p\ge 1$, are given by the formulae
$$c_p=\big(Q+L\big)^{-1}h_p, \qquad (2.33)$$
$$c_p:=\begin{pmatrix} c_{1,p}\\ \vdots\\ c_{k,p}\end{pmatrix}, \qquad h_p:=\begin{pmatrix} h_{1,p}\\ \vdots\\ h_{k,p}\end{pmatrix}, \qquad L:=\begin{pmatrix} \big(V(v_{1,\Gamma}),\Psi^{(1)}\big)_{\mathbb C^d} & \dots & \big(V(v_{k,\Gamma}),\Psi^{(1)}\big)_{\mathbb C^d}\\ \vdots & & \vdots\\ \big(V(v_{1,\Gamma}),\Psi^{(k)}\big)_{\mathbb C^d} & \dots & \big(V(v_{k,\Gamma}),\Psi^{(k)}\big)_{\mathbb C^d}\end{pmatrix}, \qquad (2.34)$$
where the numbers $h_{i,p}$ are given by formulae (8.19), (8.17), (8.11), and
$$V(\cdot):=V^{(2)}_{\Gamma,M_0}(0)U'_{M_0}(\cdot)-\mathrm i V^{(1)}_{\Gamma,M_0}(0)U_{M_0}(\cdot). \qquad (2.35)$$

Our third main result provides a representation for the resolvent $(H_\varepsilon-\lambda)^{-1}$ via a Taylor-like series and estimates for the remainders of this series.

Theorem 2.3.
Assume that the matrices $A_M(\varepsilon)$, $B_M(\varepsilon)$ satisfy the above formulated conditions and Condition (A) holds. For each $f\in L_2(\Gamma_\varepsilon)$, the function $(H_\varepsilon-\lambda)^{-1}f$ can be represented by a series
$$(H_\varepsilon-\lambda)^{-1}f=\sum_{p=0}^{\infty}\varepsilon^p\big(u^\Gamma_p\oplus S_\varepsilon u^\gamma_p\big), \qquad (2.36)$$
converging in $\dot W_2^2(\Gamma_\varepsilon)$ and $\dot C^2_\infty(\Gamma_\varepsilon)$, where the functions $u^\Gamma_p$ and $u^\gamma_p$ are the coefficients in series (2.30) with $f_\Gamma:=P_\Gamma f$, $f_\gamma:=S_\varepsilon^{-1}P_{\gamma_\varepsilon}f$. For each $N\in\mathbb Z_+$ the estimates hold true:
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p u^\Gamma_p\Big\|_{\dot W_2^2(\Gamma)}\le C^{N+1}\varepsilon^{N+1}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad (2.37)$$
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p u^\Gamma_p\Big\|_{\dot C^2_\infty(\Gamma)}\le C^{N+1}\varepsilon^{N+1}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad (2.38)$$
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p S_\varepsilon u^\gamma_p\Big\|_{L_2(\gamma_\varepsilon)}\le C^{N+1}\varepsilon^{N+1}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad (2.39)$$
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p S_\varepsilon u^\gamma_p\Big\|_{\dot W_2^i(\gamma_\varepsilon)}\le C^{N+1}\varepsilon^{N+1-i}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad i=1,2, \qquad (2.40)$$
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p S_\varepsilon u^\gamma_p\Big\|_{\dot C(\gamma_\varepsilon)}\le C^{N+1}\varepsilon^{N+1}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad (2.41)$$
$$\Big\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{N}\varepsilon^p S_\varepsilon u^\gamma_p\Big\|_{\dot C^i(\gamma_\varepsilon)}\le C^{N+1}\varepsilon^{N+1-i}\|f\|_{L_2(\Gamma_\varepsilon)}, \qquad i=1,2, \qquad (2.42)$$
where $C$ is some fixed constant independent of $\varepsilon$, $N$ and $f$.

In this subsection we discuss the main features of our problem and the main results. First of all we stress that the operator we consider is very general. Namely, its differential expression involves not only the potential, but also the first order terms and a varying coefficient at the higher derivative, see (2.1). On the small edges, the coefficients can depend singularly on $\varepsilon$ because of the presence of the negative powers $\varepsilon^{i-2}$ in (2.2). The vertex conditions at the vertices of the graph $\Gamma_\varepsilon$ are also of a general form, see (2.5).
The self-adjointness of the matrix in (2.6) and rank condition (2.8) are very natural; in fact, they are a criterion ensuring the self-adjointness of the considered operator. The matrices $A_M(\varepsilon)$ and $B_M(\varepsilon)$ are not uniquely defined, since vertex condition (2.5) can be multiplied by an arbitrary non-degenerate matrix. In particular, this means that at $\varepsilon=0$, at least one of these matrices can be assumed to be non-zero. Indeed, if this is not the case, then $A_M(\varepsilon)=\varepsilon^p\tilde A_M(\varepsilon)$, $B_M(\varepsilon)=\varepsilon^q\tilde B_M(\varepsilon)$ for some natural $p$ and $q$. Then we can divide condition (2.5) by $\varepsilon^{\min\{p,q\}}$ and replace the matrices in it respectively by $\varepsilon^{p-\min\{p,q\}}\tilde A_M(\varepsilon)$ and $\varepsilon^{q-\min\{p,q\}}\tilde B_M(\varepsilon)$.

Our first main result, Theorem 2.1, states that under the made assumptions on the coefficients in the differential expression and the vertex conditions and under Condition (A), the resolvent of the operator $H_\varepsilon$ is holomorphic in $\varepsilon$ in a certain sense. Namely, the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ are holomorphic in $\varepsilon$. Although the definition of the latter operators looks a bit cumbersome, their nature is very simple. Namely, given $f\in L_2(\Gamma_\varepsilon)$ and $u_\varepsilon:=(H_\varepsilon-\lambda)^{-1}f$, we consider the restrictions of these functions to the subgraphs $\Gamma$ and $\gamma_\varepsilon$. The restrictions to $\gamma_\varepsilon$ are additionally rescaled by means of the operator $S_\varepsilon$, that is, they are simply regarded as functions of the rescaled variable $\xi:=x\varepsilon^{-1}$ defined on $\gamma$. These restrictions, with the rescaling taken into account, are obviously $P_\Gamma f$, $P_\Gamma u_\varepsilon$ and $S_\varepsilon^{-1}P_{\gamma_\varepsilon}f$, $S_\varepsilon^{-1}P_{\gamma_\varepsilon}u_\varepsilon$. Then the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ map a pair $(P_\Gamma f,\,S_\varepsilon^{-1}P_{\gamma_\varepsilon}f)$ respectively into $P_\Gamma u_\varepsilon$ and $S_\varepsilon^{-1}P_{\gamma_\varepsilon}u_\varepsilon$.
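Unwinding these mapping properties gives a one-line check of the recovery formula (2.26) (our own verification):

```latex
% Apply the right-hand side of (2.26) to f, with u_\varepsilon = (H_\varepsilon - \lambda)^{-1} f:
\big(R_\Gamma \oplus S_\varepsilon R_\gamma\big)\big(P_\Gamma \oplus S_\varepsilon^{-1}P_{\gamma_\varepsilon}\big)f
  = P_\Gamma u_\varepsilon \oplus S_\varepsilon\big(S_\varepsilon^{-1}P_{\gamma_\varepsilon}u_\varepsilon\big)
  = P_\Gamma u_\varepsilon \oplus P_{\gamma_\varepsilon}u_\varepsilon
  = u_\varepsilon ,
% where the last step is identity (2.15).
```

So (2.26) simply reassembles the function $u_\varepsilon$ from its restriction to $\Gamma$ and its rescaled restriction to $\gamma_\varepsilon$.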
Since the resolvent $(H_\varepsilon-\lambda)^{-1}$ can be recovered from the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ via formula (2.26), Theorem 2.1 in fact states the holomorphy of the parts of the resolvent $(H_\varepsilon-\lambda)^{-1}$ corresponding to the subgraphs $\Gamma$ and $\gamma_\varepsilon$.

Once we know that the operators $R_\Gamma(\varepsilon,\lambda)$ and $R_\gamma(\varepsilon,\lambda)$ are holomorphic in $\varepsilon$, the next natural question is what their Taylor series look like. This issue is resolved constructively in Theorem 2.2. The coefficients of these series are determined as the unique solutions of the recurrent systems of boundary value problems (8.1), (8.2), (2.32) and (8.9), (8.12), (8.5), (2.34) with coefficients $c_{i,p}$ and $h_{i,p}$ given by (2.33), (2.34), (8.19), (8.17), (8.11). These boundary value problems allow one to determine each coefficient of the Taylor series once the previous coefficients have been found.

The results of Theorems 2.1, 2.2 produce a Taylor-like series for the resolvent $(H_\varepsilon-\lambda)^{-1}$, as stated in Theorem 2.3. The idea of the proof of this theorem is elementary and is based on substituting the Taylor series from Theorem 2.2 into formula (2.26). Series (2.36) can be regarded as a way of exactly calculating the resolvent $(H_\varepsilon-\lambda)^{-1}$, while estimates (2.37)--(2.42) in fact provide approximations of the same resolvent by partial sums of series (2.36) in various norms. In particular, choosing $N=0$, we obtain
\[
\bigl\|(H_\varepsilon-\lambda)^{-1}f-(H_0-\lambda)^{-1}f\bigr\|_{\dot W_2^2(\Gamma)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)},\qquad
\bigl\|(H_\varepsilon-\lambda)^{-1}f-(H_0-\lambda)^{-1}f\bigr\|_{\dot C(\Gamma)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)}. \qquad(3.1)
\]
By a straightforward calculation it is easy to confirm that
\[
\|S^{-1}_\varepsilon u^{\gamma}_0\|_{L_2(\gamma_\varepsilon)}=\varepsilon^{\frac12}\|u^{\gamma}_0\|_{L_2(\gamma)}\le C\varepsilon^{\frac12}\|f_\Gamma\|_{L_2(\Gamma)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)},
\]
where $C$ denotes fixed constants independent of $\varepsilon$ and $f$. These relations and (2.39) with $N=0$ yield
\[
\bigl\|(H_\varepsilon-\lambda)^{-1}f\bigr\|_{L_2(\gamma_\varepsilon)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)}.
\]
The latter estimate and (3.1) are exactly a convergence result similar to the ones established in the papers [1] and [4]. However, in these papers the difference of the resolvents $(H_\varepsilon-\lambda)^{-1}f$ and $(H_0-\lambda)^{-1}f$ was estimated in the $L_2$-norm only, while our inequality (3.1) provides bounds in the stronger $\dot W_2^2(\Gamma)$- and $\dot C(\Gamma)$-norms. Moreover, estimates (2.40), (2.41), (2.42) can be employed to establish convergence on $\gamma_\varepsilon$ with correctors. Namely, choosing $N=i$ in (2.40), (2.42) and $N=0$ in (2.41), we get
\[
\Bigl\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{i}\varepsilon^p S^{-1}_\varepsilon u^{\gamma}_p\Bigr\|_{\dot W_2^i(\gamma_\varepsilon)}\le C\varepsilon\,\|f\|_{L_2(\Gamma_\varepsilon)},\quad i=1,2,
\]
\[
\Bigl\|(H_\varepsilon-\lambda)^{-1}f-\sum_{p=0}^{i}\varepsilon^p S^{-1}_\varepsilon u^{\gamma}_p\Bigr\|_{\dot C^i(\gamma_\varepsilon)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)},\quad i=1,2,
\]
\[
\Bigl\|(H_\varepsilon-\lambda)^{-1}f-S^{-1}_\varepsilon u^{\gamma}_0\Bigr\|_{\dot C(\gamma_\varepsilon)}\le C\varepsilon^{\frac12}\|f\|_{L_2(\Gamma_\varepsilon)}.
\]
These estimates show how to approximate the resolvent $(H_\varepsilon-\lambda)^{-1}f$ on $\gamma_\varepsilon$ so as to get a small error term. In particular, in view of the formula for $u^{\gamma}_0$ in (2.30), we see that the resolvent $(H_\varepsilon-\lambda)^{-1}f$ is approximated by $\sum_{i=1}^{k}c_i(f_\Gamma)\,S^{-1}_\varepsilon\psi^{(i)}$ in the $\dot C(\gamma_\varepsilon)$-norm with an error of order $\varepsilon^{\frac12}$.

In comparison with [1] and [4], our result has one more important feature. This feature is related to the limiting condition at the vertex $M_0$ for the operator $H_0$, namely, to the definition of the matrices $A^{(0)}_{M_0}$ and $B^{(0)}_{M_0}$. As formulae (2.23) show, if the operator $H_\gamma$ has a virtual level at the bottom of its essential spectrum, the matrix $A^{(0)}_{M_0}$ involves the matrix $Q$. The presence of the latter produces a Robin part in the vertex condition at the vertex $M_0$. According to formulae (2.19), (2.20), (2.21), (2.22), the matrix $Q$ is determined by the functions $\frac{dV^{(i)}_\gamma}{d\varepsilon}(\cdot,0)$ and by the matrices $A^{(1)}_M$, $B^{(1)}_M$, $M\in\gamma$. The nature of these functions and matrices is as follows.
We multiply the differential expression and the vertex conditions of the operator $H_\varepsilon$ on the subgraph $\gamma_\varepsilon$ respectively by $\varepsilon^2$ and by $\varepsilon\tilde E_M(\varepsilon)$, where
\[
\tilde E_M(\varepsilon):=\begin{pmatrix}E_{r(M)}&\\ &\varepsilon^{-1}E_{d(M)-r(M)}\end{pmatrix},
\]
and we pass to the rescaled variable $\xi:=x\varepsilon^{-1}$. This produces the differential expression
\[
-\frac{d}{d\xi}V^{(2)}_\gamma(\cdot,\varepsilon)\frac{d}{d\xi}+\mathrm{i}\Bigl(\frac{d}{d\xi}V^{(1)}_\gamma(\cdot,\varepsilon)+V^{(1)}_\gamma(\cdot,\varepsilon)\frac{d}{d\xi}\Bigr)+V^{(0)}_\gamma(\cdot,\varepsilon)\quad\text{on }\gamma, \qquad(3.2)
\]
and the vertex conditions
\[
\varepsilon\tilde E_M(\varepsilon)A_M(\varepsilon)U_M(u)+\tilde E_M(\varepsilon)B_M(\varepsilon)U'_M(u)=0\quad\text{at each vertex }M\in\gamma_\infty,
\]
where $U'_M$ is treated in the same way as in (2.11), and the matrices in the above conditions satisfy the identities
\[
\varepsilon\tilde E_M(\varepsilon)A_M(\varepsilon)=A^{(0)}_M+\varepsilon A^{(1)}_M+O(\varepsilon^2),\qquad
\tilde E_M(\varepsilon)B_M(\varepsilon)=B^{(0)}_M+\varepsilon B^{(1)}_M+O(\varepsilon^2) \qquad(3.3)
\]
if $B_M(0)\ne0$, and
\[
\varepsilon\tilde E_M(\varepsilon)A_M(\varepsilon)=\varepsilon A^{(0)}_M+\varepsilon^2 A^{(1)}_M+O(\varepsilon^3),\qquad
\tilde E_M(\varepsilon)B_M(\varepsilon)=\varepsilon B^{(0)}_M+\varepsilon^2 B^{(1)}_M+O(\varepsilon^3) \qquad(3.4)
\]
if $B_M(0)=0$. Formulae (3.2), (3.3), (3.4) explain how the differential expression on $\gamma$ in (2.9) and vertex conditions (2.10) arise. We also observe that the functions $\frac{dV^{(i)}_\gamma}{d\varepsilon}(\cdot,0)$ and the matrices $A^{(1)}_M$, $B^{(1)}_M$ are in fact next-to-leading terms in the above differential expression and vertex conditions. Hence, we conclude that the Robin part in the limiting condition at the vertex $M_0$ is generated solely by these next-to-leading terms. Once such terms vanish, only Dirichlet and Neumann parts can be present in the vertex condition at $M_0$. In particular, this explains why only Dirichlet and Neumann parts appeared in the limit in [4]: there the next-to-leading terms were a priori assumed to be zero. The results of [1] allowed a Robin part in the limiting vertex condition, but in terms of our notation this was because the matrices $A_M$ and $B_M$ in (3.3) and (3.4) were assumed to be independent of $\varepsilon$, and this is why the matrices $A^{(1)}_M$ were in general non-zero.
However, the issue of when and why the Robin part appears in the limiting condition at $M_0$ was not addressed in [1]; moreover, since the main result in [1] was formulated in terms of Lagrangian planes, the presence and the origin of the Robin part in the limiting vertex condition was not as explicit as in our case. We also stress that our result shows how the Robin condition can be generated in the general case by the coefficients both in the differential expression and in the vertex conditions of $H_\varepsilon$.

It is also interesting to compare our Condition (A) with similar conditions in [1] and [4]. The non-resonance condition 3.2 in [1] is in fact our Condition (A). Both these conditions exclude the case when the operator $H_\gamma$ has an embedded eigenvalue at the bottom of its essential spectrum, that is, when there exists a non-trivial solution to boundary value problem (2.13) vanishing identically on the leads $e^\infty_i$. In particular, it was shown in [1] that if the operator $H_\varepsilon$ had no Robin parts in its vertex conditions, this non-resonance condition was necessary to ensure the norm resolvent convergence. A similar situation also holds for our operator $H_\varepsilon$: once Condition (A) is violated, in the general situation the operator $R_\gamma(\varepsilon,\lambda)$ can have a first order pole at $\varepsilon=0$. This pole arises due to a certain interaction between the eigenfunctions associated with the embedded eigenvalues and the next-to-leading terms discussed above. In the particular case when all these next-to-leading terms vanish, such a pole surely does not arise and the operator $R_\gamma(\varepsilon,\lambda)$ remains holomorphic in $\varepsilon$. Exactly this particular case was treated in [4], and this explains why the author also succeeded in treating the case corresponding to the presence of an embedded eigenvalue at the bottom of the essential spectrum, see Theorem 2, Statement (i) in [4]. In the present work we do not consider the case when Condition (A) is violated and the operator has an embedded eigenvalue.
The reason is that, although our technique is sufficient to treat such a case, it involves more technical details. The nature of the expected results also suggests that this case deserves an independent study, which we postpone to a future work.

Here we discuss possible ways of extending our results to more general models. The first possibility concerns the case when, instead of one graph $\gamma_\varepsilon$, several similar small rescaled graphs $\gamma^{(j)}_\varepsilon$ are glued at different vertices of the graph $\Gamma$, see Figure 3. Each of the small graphs $\gamma^{(j)}_\varepsilon$ can be introduced via rescaling by means of an associated small parameter $\varepsilon^{(j)}$; then the coefficients $V^{(i,j)}_\varepsilon$, $i=0,1,2$, of the differential expression on each such graph $\gamma^{(j)}_\varepsilon$ are still defined by formula (2.2), but now $V^{(i,j)}_\varepsilon=(\varepsilon^{(j)})^{i-2}S^{(j)}_\varepsilon V^{(i)}_{\gamma^{(j)}}(\cdot,\varepsilon)$, where $S^{(j)}_\varepsilon$ is the rescaling operator on the graph $\gamma^{(j)}_\varepsilon$. We can first suppose that all parameters $\varepsilon^{(j)}$ are holomorphic functions of the same small parameter $\varepsilon$. In this situation all our results remain valid with obvious minor changes; in particular, the form of the limiting vertex condition at a vertex $M^{(j)}_0$, to which the graph $\gamma^{(j)}_\varepsilon$ is attached, is determined only by the graph $\gamma^{(j)}_\varepsilon$ and there is no influence from the neighbouring ones.

Figure 3: Graph $\Gamma$ with several glued small graphs.

A more general situation is when, in the same setting, we assume that the parameters $\varepsilon^{(j)}$ are independent and all coefficients in the differential expression and vertex conditions of the operator $H_\varepsilon$ depend holomorphically on all $\varepsilon^{(j)}$.
In this case we should first of all additionally assume that for each attached graph $\gamma^{(j)}_\varepsilon$ and each vertex $M\in\gamma^{(j)}_\infty$ identities similar to (3.3), (3.4) hold:
\[
\varepsilon^{(j)}\tilde E_M(\varepsilon^{(j)})A_M(\varepsilon)=\bigl(\varepsilon^{(j)}\bigr)^{\mu}\bigl(A^{(0)}_M+\varepsilon^{(j)}A^{(1)}_M\bigr)+O\bigl(|\varepsilon|^{\mu+2}\bigr),\qquad
\tilde E_M(\varepsilon^{(j)})B_M(\varepsilon)=\bigl(\varepsilon^{(j)}\bigr)^{\mu}\bigl(B^{(0)}_M+\varepsilon^{(j)}B^{(1)}_M\bigr)+O\bigl(|\varepsilon|^{\mu+2}\bigr),
\]
where $\mu=0$ if $B_M(0)\ne0$ and $\mu=1$ if $B_M(0)=0$. For the coefficients $V^{(i)}_{\gamma^{(j)}}$ we should assume that
\[
V^{(i)}_{\gamma^{(j)}}(\cdot,\varepsilon)=V^{(i)}_{\gamma^{(j)}}(\cdot,0)+\varepsilon^{(j)}\frac{dV^{(i)}_{\gamma^{(j)}}}{d\varepsilon^{(j)}}(\cdot,0)+O\bigl(|\varepsilon|^{2}\bigr). \qquad(3.5)
\]
Then the corresponding matrices $Q^{(j)}$ should be calculated for each graph $\gamma^{(j)}$ by formulae (2.19), (2.20), (2.21), (2.22). If for some $j$ the matrix $Q^{(j)}$ turns out to have a zero eigenvalue, then assumptions (3.3), (3.5) should be replaced by stricter ones:
\[
\varepsilon^{(j)}\tilde E_M(\varepsilon^{(j)})A_M(\varepsilon)=\bigl(\varepsilon^{(j)}\bigr)^{\mu}\bigl(A^{(0)}_M+\varepsilon^{(j)}A^{(1)}_M\bigr)+O\bigl(|\varepsilon|^{\mu+2}\bigr),\qquad
\tilde E_M(\varepsilon^{(j)})B_M(\varepsilon)=\bigl(\varepsilon^{(j)}\bigr)^{\mu}\bigl(B^{(0)}_M+\varepsilon^{(j)}B^{(1)}_M\bigr)+O\bigl(|\varepsilon|^{\mu+2}\bigr),
\]
\[
V^{(i)}_{\gamma^{(j)}}(\cdot,\varepsilon)=V^{(i)}_{\gamma^{(j)}}(\cdot,0)+\varepsilon^{(j)}\frac{dV^{(i)}_{\gamma^{(j)}}}{d\varepsilon^{(j)}}(\cdot,0)+\frac{\bigl(\varepsilon^{(j)}\bigr)^{2}}{2}\,\frac{d^{2}V^{(i)}_{\gamma^{(j)}}}{d\bigl(\varepsilon^{(j)}\bigr)^{2}}(\cdot,0)+O\bigl(|\varepsilon|^{3}\bigr).
\]
The above conditions mean that the coefficients in the differential expression and vertex conditions can depend on all small parameters, but the leading terms of their Taylor series must be as specified above. Under such assumptions all our results remain true, and the holomorphy now holds with respect to all parameters $\varepsilon^{(j)}$. Once these assumptions fail, in the general situation the result on holomorphy is no longer valid. The reason is in fact a well-known phenomenon in multi-parametric perturbation theory: the eigenvalues of a matrix holomorphic with respect to more than one small parameter are not necessarily holomorphic themselves, see [3, Ch. II, Sect. 5.7]. The same also concerns the case when several graphs $\gamma^{(j)}$ are rescaled by means of different independent small parameters and are glued to the same vertex of the graph $\Gamma$. In such a general situation one can expect only a convergence result like that proved in [1]. Of course, this does not exclude situations when, for some particular graphs and specific choices of small edges, differential expression and vertex conditions, the resolvent has the holomorphy property and a Taylor series; but this can happen only due to specific features of the models considered.

One more important way of extending our model is to assume that all coefficients in the differential expression and the vertex conditions are not holomorphic in $\varepsilon$ but either infinitely differentiable or just possess power asymptotic expansions in $\varepsilon$. In these cases all our results, including the ones just discussed for several glued small graphs, remain true with appropriate modifications. Namely, if the mentioned coefficients are infinitely differentiable in $\varepsilon$, then the statement on the holomorphy property is to be replaced by infinite differentiability in $\varepsilon$.
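The multi-parameter phenomenon invoked above ([3, Ch. II, Sect. 5.7]) can already be seen for $2\times2$ matrices. The following sketch uses a standard illustrative toy matrix (not part of our model): it depends linearly, hence holomorphically, on two parameters, yet its eigenvalues $\pm\sqrt{\varepsilon_1^2+\varepsilon_2^2}$ are analytic along each parameter axis separately but not jointly smooth at the origin:

```python
import numpy as np

def eigenvalues(e1, e2):
    # Hermitian matrix depending linearly (hence holomorphically)
    # on the two small parameters e1, e2.
    T = np.array([[e1, e2],
                  [e2, -e1]])
    return np.linalg.eigvalsh(T)  # ascending order

# Eigenvalues are +-sqrt(e1**2 + e2**2); for (e1, e2) = (3e-3, 4e-3)
# this gives +-5e-3.
lam = eigenvalues(3e-3, 4e-3)
assert np.allclose(lam, [-5e-3, 5e-3])

# Along the axis e2 = 0 the eigenvalues are simply +-e1 (analytic).
lam_axis = eigenvalues(1e-3, 0.0)
assert np.allclose(lam_axis, [-1e-3, 1e-3])
```

This is why, for fully independent parameters $\varepsilon^{(j)}$, only a convergence result can be expected in general, while holomorphy survives under the structural assumptions listed above.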
In this case Theorem 2.1 states infinite differentiability in $\varepsilon$ of the operators $R_\Gamma$ and $R_\gamma$; Theorem 2.2 provides (not necessarily converging) Taylor series for the infinitely differentiable operators $R_\Gamma$ and $R_\gamma$; Theorem 2.3 provides the Taylor-like series (2.36), and the constants of the form $C^{N+1}$ in estimates (2.37)--(2.42) are to be replaced by constants depending on $N$. If the coefficients in the differential expression and the vertex conditions merely possess power asymptotic expansions in $\varepsilon$, then Theorem 2.1 will state that the operators $R_\Gamma$ and $R_\gamma$ also have power asymptotic expansions in $\varepsilon$, while Theorems 2.2, 2.3 will say how to find the coefficients in these expansions. The way of proving such results is very simple and is based on the holomorphic case treated above. Namely, we represent the matrices in the vertex conditions as
\[
A_M(\varepsilon)=A^{(0)}_M+\varepsilon A^{(1)}_M+\varepsilon^2 A^{(2)}_M+\varepsilon^3\breve A_M(\varepsilon),\qquad
B_M(\varepsilon)=B^{(0)}_M+\varepsilon B^{(1)}_M+\varepsilon^2 B^{(2)}_M+\varepsilon^3\breve B_M(\varepsilon), \qquad(3.6)
\]
where the matrices $\breve A_M(\varepsilon)$ and $\breve B_M(\varepsilon)$ are either infinitely differentiable in $\varepsilon$ or have power asymptotic expansions in $\varepsilon$. For the coefficients of the differential expression a similar representation is to be employed. The terms $\breve A_M$ and $\breve B_M$ are to be treated as frozen coefficients independent of $\varepsilon$, and the matrices in (3.6) should be treated as second order polynomials in $\varepsilon$. Then it is possible to track the dependence on $\breve A_M(\varepsilon)$ and $\breve B_M(\varepsilon)$, and on the similar terms for the coefficients of the differential expression, in the proof of Theorem 2.1. As a result, one sees that the operators $R_\Gamma$ and $R_\gamma$, as well as the resolvent $(H_\varepsilon-\lambda)^{-1}$, are represented by the convergent series (2.30) and (2.36), but now with coefficients depending on $\varepsilon$. These coefficients are either infinitely differentiable in $\varepsilon$ or possess power asymptotic expansions in $\varepsilon$.
In the first case we conclude immediately that all the operators in question are infinitely differentiable in $\varepsilon$, while in the other case we can easily find the asymptotic expansions for these operators from their series. Once we know the nature of the series representing the operators, their coefficients can be found exactly in the same way as in the proofs of Theorems 2.2, 2.3; that is, these proofs are reproduced literally.

Our main results include a statement on the self-adjointness of both the perturbed and the limiting operators, and in the proofs we shall also employ the same property for certain auxiliary operators. The aim of this section is to establish a criterion of self-adjointness for a general operator, which will then imply the same property for the operators we deal with.

Let $\Xi$ be a finite metric graph having no isolated vertices; the lengths of its edges can be either finite or infinite. On each edge we fix an orientation arbitrarily and, as above, by $d(M)$ we denote the degree of a vertex $M\in\Xi$. We consider an unbounded operator $H_\Xi$ in $L_2(\Xi)$ with the differential expression
\[
\hat H_\Xi:=-\frac{d}{dx}V^{(2)}_\Xi\frac{d}{dx}+\mathrm{i}\Bigl(\frac{d}{dx}V^{(1)}_\Xi+V^{(1)}_\Xi\frac{d}{dx}\Bigr)+V^{(0)}_\Xi
\]
subject to the vertex conditions
\[
A^{(\Xi)}_M U_M(u)+B^{(\Xi)}_M U'_M(u)=0\quad\text{at each vertex }M. \qquad(4.1)
\]
Here $V^{(2)}_\Xi$, $V^{(1)}_\Xi\in W^1_\infty(\Xi)$, $V^{(0)}_\Xi\in L_\infty(\Xi)$ are real-valued functions, while $A^{(\Xi)}_M$ and $B^{(\Xi)}_M$ are some matrices of size $d(M)\times d(M)$. The ellipticity condition is assumed: $V^{(2)}_\Xi\ge C_\Xi>0$ uniformly on $\Xi$. The domain of the operator $H_\Xi$ consists of the functions in $\dot W_2^2(\Xi)$ obeying vertex conditions (4.1). By $V^{(j)}_{\Xi,M}$ we denote the following diagonal matrices with real entries:
\[
V^{(j)}_{\Xi,M}:=\operatorname{diag}\bigl\{\nu_i(M)\,V^{(j)}_\Xi\big|_{e_i(M)}(M)\bigr\}_{i=1,\dots,d(M)},\qquad j=1,2,
\]
where $e_i(M)$ are the edges incident to the vertex $M$ and the numbers $\nu_i(M)$ are defined in the same way as in (2.7). The main statement of this section is the following lemma.

Lemma 4.1.
The operator $H_\Xi$ is self-adjoint if and only if for each vertex $M\in\Xi$ the matrices $A^{(\Xi)}_M$ and $B^{(\Xi)}_M$ satisfy the rank condition
\[
\operatorname{rank}\bigl(A^{(\Xi)}_M\ \ B^{(\Xi)}_M\bigr)=d(M) \qquad(4.2)
\]
and the matrix
\[
A^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\bigl(B^{(\Xi)}_M\bigr)^{*}+\mathrm{i}\,B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}V^{(1)}_{\Xi,M}\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\bigl(B^{(\Xi)}_M\bigr)^{*} \qquad(4.3)
\]
is self-adjoint. Under these restrictions, vertex condition (4.1) can be equivalently rewritten as
\[
\mathrm{i}\bigl(U^{(\Xi)}_M-E_{d(M)}\bigr)U_M(u)+\bigl(U^{(\Xi)}_M+E_{d(M)}\bigr)\bigl(V^{(2)}_{\Xi,M}U'_M(u)-\mathrm{i}V^{(1)}_{\Xi,M}U_M(u)\bigr)=0 \qquad(4.4)
\]
and as
\[
P^{(\Xi)}_{M,\perp}V^{(2)}_{\Xi,M}U'_M(u)+\bigl(K^{(\Xi)}_{M,\perp}-\mathrm{i}P^{(\Xi)}_{M,\perp}V^{(1)}_{\Xi,M}P^{(\Xi)}_{M,\perp}\bigr)U_M(u)=0,\qquad P^{(\Xi)}_M U_M(u)=0. \qquad(4.5)
\]
Here the matrix
\[
U^{(\Xi)}_M:=-\Bigl(A^{(\Xi)}_M+\mathrm{i}B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\bigl(V^{(1)}_{\Xi,M}-E_{d(M)}\bigr)\Bigr)^{-1}\Bigl(A^{(\Xi)}_M+\mathrm{i}B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\bigl(V^{(1)}_{\Xi,M}+E_{d(M)}\bigr)\Bigr)
\]
is well-defined and unitary, $P^{(\Xi)}_{M,\perp}:=E_{d(M)}-P^{(\Xi)}_M$, where $P^{(\Xi)}_M$ is the projector in $\mathbb C^{d(M)}$ onto the eigenspace of the matrix $U^{(\Xi)}_M$ associated with the eigenvalue $-1$, while
\[
K^{(\Xi)}_{M,\perp}:=\mathrm{i}\bigl(U^{(\Xi)}_M+E_{d(M)}\bigr)^{-1}P^{(\Xi)}_{M,\perp}\bigl(U^{(\Xi)}_M-E_{d(M)}\bigr)
\]
is a self-adjoint matrix.

Proof. In the particular case $V^{(2)}_\Xi\equiv1$, $V^{(1)}_\Xi\equiv0$, $V^{(0)}_\Xi\equiv0$, the lemma states a well-known result, see Theorem 1.1.4 in [2, Ch. 1, Sect. 1.4]. In fact, the proof of this theorem can be adapted to our more general case up to some minor changes; below we describe how to do this. First of all we observe that vertex condition (4.1) can be equivalently rewritten as
\[
\tilde A^{(\Xi)}_M U_M(u)+\tilde B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}U'_M(u)-\mathrm{i}V^{(1)}_{\Xi,M}U_M(u)\bigr)=0, \qquad(4.6)
\]
\[
\tilde A^{(\Xi)}_M:=A^{(\Xi)}_M+\mathrm{i}B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}V^{(1)}_{\Xi,M},\qquad
\tilde B^{(\Xi)}_M:=B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}. \qquad(4.7)
\]
Let $u$ be an arbitrary function from the domain of the operator $H_\Xi$ and $v$ an arbitrary function from $\bigoplus_{e\in\Xi}C^\infty(e)$ vanishing outside a small neighbourhood of an arbitrarily fixed vertex $M\in\Xi$.
Integrating twice by parts, we get
\[
(H_\Xi u,v)_{L_2(\Xi)}=\bigl(V^{(2)}_{\Xi,M}U'_M(u),U_M(v)\bigr)_{\mathbb C^{d(M)}}-\bigl(U_M(u),V^{(2)}_{\Xi,M}U'_M(v)\bigr)_{\mathbb C^{d(M)}}+(u,\hat H_\Xi v)_{L_2(\Xi)}
=(F'_M,G_M)_{\mathbb C^{d(M)}}-(F_M,G'_M)_{\mathbb C^{d(M)}}+(u,\hat H_\Xi v)_{L_2(\Xi)},
\]
where
\[
F'_M:=V^{(2)}_{\Xi,M}U'_M(u)-\mathrm{i}V^{(1)}_{\Xi,M}U_M(u),\quad G_M:=U_M(v),\quad
F_M:=U_M(u),\quad G'_M:=V^{(2)}_{\Xi,M}U'_M(v)-\mathrm{i}V^{(1)}_{\Xi,M}U_M(v).
\]
Hence, the function $v$ is in the domain of the adjoint operator $H^*_\Xi$ if
\[
(F'_M,G_M)_{\mathbb C^{d(M)}}-(F_M,G'_M)_{\mathbb C^{d(M)}}=0.
\]
The latter identity is exactly equation (1.4.16) in the proof of Theorem 1.1.4 in [2, Ch. 1, Sect. 1.4], but with different $F_M$, $F'_M$, $G_M$, $G'_M$. However, the specific form of these quantities was not important in that proof in [2]. This is why the argument from [2] can be reproduced literally with the above introduced $F_M$, $F'_M$, $G_M$, $G'_M$; our matrices $\tilde A^{(\Xi)}_M$ and $\tilde B^{(\Xi)}_M$ serve as the matrices $A$ and $B$ used in that proof. Rank condition (4.2) is equivalent to the similar condition (1.4.10) in [2, Ch. 1, Sect. 1.4], since thanks to the non-degeneracy of the matrix $(V^{(2)}_{\Xi,M})^{-1}$ we have
\[
\operatorname{rank}\bigl(\tilde A^{(\Xi)}_M\ \ \tilde B^{(\Xi)}_M\bigr)
=\operatorname{rank}\Bigl(A^{(\Xi)}_M+\mathrm{i}B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}V^{(1)}_{\Xi,M}\ \ \ B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\Bigr)
=\operatorname{rank}\Bigl(A^{(\Xi)}_M\ \ B^{(\Xi)}_M\bigl(V^{(2)}_{\Xi,M}\bigr)^{-1}\Bigr)
=\operatorname{rank}\bigl(A^{(\Xi)}_M\ \ B^{(\Xi)}_M\bigr)=d(M).
\]
Condition (1.4.11) for the matrices $A$ and $B$ from Theorem 1.1.4 in [2, Ch. 1, Sect. 1.4] is exactly our condition (4.3), while the equivalent formulations (4.4), (4.5) of the vertex condition are implied immediately by the similar formulations (1.4.13) and (1.4.14) in [2] once we let our projector $P^{(\Xi)}_{M,\perp}$ be the sum of the projectors $P_{N,M}$ and $P_{R,M}$ in [2, Ch. 1, Sect. 1.4, Eq. (1.4.14)]. The proof is complete.

In this section we study the properties of an auxiliary operator defined on the graph $\Gamma$.
This is the operator with the differential expression
\[
\hat H_\Gamma(\varepsilon):=-\frac{d}{dx}V^{(2)}_\Gamma(\cdot,\varepsilon)\frac{d}{dx}+\mathrm{i}\Bigl(\frac{d}{dx}V^{(1)}_\Gamma(\cdot,\varepsilon)+V^{(1)}_\Gamma(\cdot,\varepsilon)\frac{d}{dx}\Bigr)+V^{(0)}_\Gamma(\cdot,\varepsilon)
\]
subject to vertex conditions (2.5) at the vertices $M\in\Gamma$, $M\ne M_0$, and to the Dirichlet condition
\[
U_{M_0}(u)=0\quad\text{at the vertex }M_0. \qquad(5.1)
\]
This condition can easily be written in form (4.1) with $A^{(\Xi)}_{M_0}=E_{d(M_0)}$, $B^{(\Xi)}_{M_0}=0$, and these matrices obviously obey conditions (4.2), (4.3). We denote this operator by $H_\Gamma(\varepsilon)$. By Lemma 4.1, the introduced operator is self-adjoint in $L_2(\Gamma)$ on the domain formed by the functions from $\dot W_2^2(\Gamma)$ satisfying the imposed vertex conditions. Our main aim is to study the dependence of the resolvent of this operator on $\varepsilon$. The main statement we are going to prove in this section is the following lemma.

Lemma 5.1.
For each $\lambda\in\mathbb C$ with $\operatorname{Im}\lambda\ne0$ the resolvent $(H_\Gamma(\varepsilon)-\lambda)^{-1}$ is well-defined for all sufficiently small $\varepsilon$ and is holomorphic in $\varepsilon$ as an operator from $L_2(\Gamma)$ into $\dot W_2^2(\Gamma)$.

Before proving this lemma, we establish a series of auxiliary results. According to Lemma 4.1, at each vertex $M\in\Gamma$, $M\ne M_0$, vertex condition (2.5) can be equivalently rewritten as
\[
\mathrm{i}\bigl(U_M(\varepsilon)-E_{d(M)}\bigr)U_M(u)+\bigl(U_M(\varepsilon)+E_{d(M)}\bigr)\bigl(V^{(2)}_M(\varepsilon)U'_M(u)-\mathrm{i}V^{(1)}_M(\varepsilon)U_M(u)\bigr)=0, \qquad(5.2)
\]
where
\[
U_M(\varepsilon):=-\Bigl(A_M(\varepsilon)+\mathrm{i}B_M(\varepsilon)\bigl(V^{(2)}_M(\varepsilon)\bigr)^{-1}\bigl(V^{(1)}_M(\varepsilon)-E_{d(M)}\bigr)\Bigr)^{-1}\cdot\Bigl(A_M(\varepsilon)+\mathrm{i}B_M(\varepsilon)\bigl(V^{(2)}_M(\varepsilon)\bigr)^{-1}\bigl(V^{(1)}_M(\varepsilon)+E_{d(M)}\bigr)\Bigr) \qquad(5.3)
\]
is a unitary matrix.

Lemma 5.2.
For all $M\in\Gamma$, $M\ne M_0$, and all $M\in\gamma$, the matrix $U_M(\varepsilon)$ is holomorphic in $\varepsilon$.

Proof. Rank condition (2.8) and the self-adjointness of the matrix (2.6) allow us to apply Lemma 4.1 and to conclude that the matrix $U_M(\varepsilon)$ defined in (5.3) is well-defined and unitary. Since the matrices $A_M(\varepsilon)$ and $B_M(\varepsilon)$ are holomorphic in $\varepsilon$, the matrices $U_M(\varepsilon)$ are meromorphic in $\varepsilon$. At the same time, by the unitarity we have $\|U_M(\varepsilon)x\|_{\mathbb C^{d(M)}}=\|x\|_{\mathbb C^{d(M)}}$ for all $x\in\mathbb C^{d(M)}$ and sufficiently small $\varepsilon$. Hence the matrix $U_M(\varepsilon)$ is bounded uniformly in $\varepsilon$ and cannot have poles. Therefore it is holomorphic in $\varepsilon$. The proof is complete.

Thanks to the unitarity of the matrix $U_M(\varepsilon)$, by the results in [3, Ch. II, Sect. 4.6] the eigenvalues of the matrix $U_M(\varepsilon)$ and the associated eigenvectors, orthonormalized in $\mathbb C^{d(M)}$, are holomorphic in $\varepsilon$. By $P_M(\varepsilon)$ we denote the total projector in $\mathbb C^{d(M)}$ onto the eigenspace associated with the eigenvalues of the matrix $U_M(\varepsilon)$ converging to $-1$ as $\varepsilon\to+0$. We also let $P_{M,\perp}(\varepsilon):=E_{d(M)}-P_M(\varepsilon)$. By the aforementioned holomorphy in $\varepsilon$ of the eigenvalues and the eigenvectors of $U_M(\varepsilon)$, the projectors $P_M(\varepsilon)$ and $P_{M,\perp}(\varepsilon)$ are also holomorphic in $\varepsilon$. We apply the projectors $P_M(\varepsilon)$ and $P_{M,\perp}(\varepsilon)$ to vertex condition (5.2) and rewrite it equivalently as
\[
P_M(\varepsilon)U_M(u)+K_M(\varepsilon)\bigl(V^{(2)}_M(\varepsilon)U'_M(u)-\mathrm{i}V^{(1)}_M(\varepsilon)U_M(u)\bigr)=0,
\]
\[
P_{M,\perp}(\varepsilon)V^{(2)}_M(\varepsilon)U'_M(u)+\bigl(K_{M,\perp}(\varepsilon)-\mathrm{i}P_{M,\perp}(\varepsilon)V^{(1)}_M(\varepsilon)P_{M,\perp}(\varepsilon)\bigr)U_M(u)=0, \qquad(5.4)
\]
where
\[
K_M(\varepsilon):=-\mathrm{i}\bigl(U_M(\varepsilon)-E_{d(M)}\bigr)^{-1}P_M(\varepsilon)\bigl(U_M(\varepsilon)+E_{d(M)}\bigr),\qquad
K_{M,\perp}(\varepsilon):=\mathrm{i}\bigl(U_M(\varepsilon)+E_{d(M)}\bigr)^{-1}P_{M,\perp}(\varepsilon)\bigl(U_M(\varepsilon)-E_{d(M)}\bigr). \qquad(5.5)
\]
It is obvious that both matrices $K_M(\varepsilon)$ and $K_{M,\perp}(\varepsilon)$ are well-defined and holomorphic in $\varepsilon$.
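The unitarity underlying Lemma 5.2 can be checked numerically in a toy case. In the simplest setting $V^{(2)}_M=E$, $V^{(1)}_M=0$, formula (5.3) reduces to $U=-(A-\mathrm{i}B)^{-1}(A+\mathrm{i}B)$; the $\delta$-coupling matrices below are a hypothetical example, not taken from the paper:

```python
import numpy as np

alpha = 2.0  # real coupling constant (hypothetical delta-coupling)

# delta-coupling at a degree-3 vertex: continuity of u plus
# u_1' + u_2' + u_3' = alpha * u(M).
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [-alpha, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])

# Specialization of (5.3) with V^(2) = I, V^(1) = 0.
U = -np.linalg.solve(A - 1j * B, A + 1j * B)

# U is unitary whenever rank(A B) = d(M) and A B* is Hermitian.
assert np.allclose(U @ U.conj().T, np.eye(3))
```

Since $\|U(\varepsilon)x\|=\|x\|$ for every unitary matrix, a family like this is uniformly bounded in $\varepsilon$, which is exactly what rules out poles in the proof above.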
Employing the unitarity of the matrix $U_M(\varepsilon)$, it is straightforward to confirm that both matrices $K_M(\varepsilon)$ and $K_{M,\perp}(\varepsilon)$ are self-adjoint. We also observe that $U_M(0)=U^{(0)}_M$, $P_M(0)=P^{(0)}_M$, $K_M(0)=0$.

Lemma 5.3.
Given an arbitrary family of vectors $g_M\in P^{(0)}_M\mathbb C^{d(M)}$, $g_{M,\perp}\in P^{(0)}_{M,\perp}\mathbb C^{d(M)}$ for each vertex $M\in\Gamma$, $M\ne M_0$, a vector $g_{M_0}\in\mathbb C^{d_0}$ and an arbitrary function $g\in L_2(\Gamma)$, for each $\lambda\in\mathbb C$ with $\operatorname{Im}\lambda\ne0$ the boundary value problem
\[
\bigl(\hat H_\Gamma(0)-\lambda\bigr)u=g\ \text{ on }\Gamma,\qquad U_{M_0}(u)=g_{M_0},\qquad
P^{(0)}_M U_M(u)=g_M\ \text{ at }M\in\Gamma,\ M\ne M_0,
\]
\[
P^{(0)}_{M,\perp}V^{(2)}_M(0)U'_M(u)+\bigl(K_{M,\perp}(0)-\mathrm{i}P^{(0)}_{M,\perp}V^{(1)}_M(0)P^{(0)}_{M,\perp}\bigr)U_M(u)=g_{M,\perp}\ \text{ at }M\in\Gamma,\ M\ne M_0, \qquad(5.6)
\]
is uniquely solvable in $\dot W_2^2(\Gamma)$ and its solution satisfies the estimate
\[
\|u\|_{\dot W_2^2(\Gamma)}\le C\Bigl(\|g\|_{L_2(\Gamma)}+\|g_{M_0}\|_{\mathbb C^{d_0}}+\sum_{\substack{M\in\Gamma\\ M\ne M_0}}\bigl(\|g_M\|_{\mathbb C^{d(M)}}+\|g_{M,\perp}\|_{\mathbb C^{d(M)}}\bigr)\Bigr), \qquad(5.7)
\]
where $C$ is a fixed constant independent of $g$, $g_M$, $g_{M,\perp}$, $g_{M_0}$, but depending on $\lambda$.

Proof. We denote the coordinates of the vectors $g_{M_0}$, $g_M$, $(V^{(2)}_M(0))^{-1}g_{M,\perp}$ respectively by $g_{M_0,i}$, $i=1,\dots,d_0$, and $g_{M,i}$, $g_{M,\perp,i}$, $i=1,\dots,d(M)$. Then in the vicinity of each vertex $M\in\Gamma$ we introduce a function
\[
u_M(x):=\sum_{i=1}^{d(M)}\bigl(g_{M,i}+g_{M,\perp,i}\,\tilde x_i\bigr)\chi(\tilde x_i),\quad M\ne M_0,\qquad
u_{M_0}(x):=\sum_{i=1}^{d_0}g_{M_0,i}\,\chi(\tilde x_i), \qquad(5.8)
\]
where the sum is taken over all edges incident to the vertex $M$, the symbol $\tilde x_i$ denotes the variable on the $i$th edge measured from the vertex $M$, $x=(\tilde x_1,\dots,\tilde x_{d(M)})$, and $\chi=\chi(t)$ is an infinitely differentiable cut-off function equal to one for $t<\delta$ and vanishing for $t>2\delta$ with some sufficiently small fixed $\delta>0$. The functions $u_M$ are continued by zero to the entire graph $\Gamma$, and it is clear that the function
\[
u_{\mathrm{bnd}}:=\sum_{M\in\Gamma}u_M
\]
satisfies the vertex conditions in (5.6). We seek a solution to this problem as $u=u_{\mathrm{bnd}}+\tilde u$; then for $\tilde u$ we obtain the operator equation
\[
(H_\Gamma(0)-\lambda)\tilde u=\tilde g,\qquad \tilde g:=g-\bigl(\hat H_\Gamma(0)-\lambda\bigr)u_{\mathrm{bnd}}.
\]
Since the operator $H_\Gamma(0)$ is self-adjoint and the imaginary part of $\lambda$ is non-zero, its resolvent $(H_\Gamma(0)-\lambda)^{-1}$ is well-defined, $\tilde u=(H_\Gamma(0)-\lambda)^{-1}\tilde g$, and the inequality
\[
\|\tilde u\|_{\dot W_2^2(\Gamma)}\le C\|\tilde g\|_{L_2(\Gamma)}\le C\Bigl(\|g\|_{L_2(\Gamma)}+\|g_{M_0}\|_{\mathbb C^{d_0}}+\sum_{\substack{M\in\Gamma\\ M\ne M_0}}\bigl(\|g_M\|_{\mathbb C^{d(M)}}+\|g_{M,\perp}\|_{\mathbb C^{d(M)}}\bigr)\Bigr)
\]
holds true, where $C$ is some fixed constant independent of $g$, $g_{M_0}$, $g_M$ and $g_{M,\perp}$. Returning to the function $u$, we arrive at estimate (5.7). The proof is complete.

At each vertex $M\in\Gamma$, $M\ne M_0$, we rewrite vertex condition (2.5) in form (5.4), (5.5). Since the projector $P_M(\varepsilon)$ is holomorphic in $\varepsilon$, by the results in [3, Ch. 2, Sect. 4.2] there exists a transforming function for this projector, namely, an invertible operator $S_M(\varepsilon)$ in $\mathbb C^{d(M)}$, holomorphic in small $\varepsilon$ together with its inverse, such that
\[
S_M(0)=E_{d(M)},\qquad S^{-1}_M(\varepsilon)P^{(0)}_M S_M(\varepsilon)=P_M(\varepsilon). \qquad(5.9)
\]
The inverse operator $S^{-1}_M(\varepsilon)$ can be defined explicitly, see formula (4.18) in [3, Ch. 2, Sect. 4.2], and hence
\[
S_M(\varepsilon)=\bigl(P_M(\varepsilon)P^{(0)}_M+P_{M,\perp}(\varepsilon)P^{(0)}_{M,\perp}\bigr)\Bigl(E_{d(M)}-\bigl(P_M(\varepsilon)-P^{(0)}_M\bigr)^2\Bigr)^{-\frac12}. \qquad(5.10)
\]
The second formula in (5.9) yields
\[
S^{-1}_M(\varepsilon)P^{(0)}_{M,\perp}S_M(\varepsilon)=P_{M,\perp}(\varepsilon). \qquad(5.11)
\]
In view of formulae (5.9), (5.10), (5.11), vertex condition (5.4) can be equivalently rewritten as
\[
P^{(0)}_M S_M(\varepsilon)U_M(u)+S_M(\varepsilon)K_M(\varepsilon)\bigl(V^{(2)}_M(\varepsilon)U'_M(u)-\mathrm{i}V^{(1)}_M(\varepsilon)U_M(u)\bigr)=0,
\]
\[
P^{(0)}_{M,\perp}S_M(\varepsilon)V^{(2)}_M(\varepsilon)U'_M(u)+S_M(\varepsilon)\bigl(K_{M,\perp}(\varepsilon)-\mathrm{i}P_{M,\perp}(\varepsilon)V^{(1)}_M(\varepsilon)\bigr)U_M(u)=0.
\]
Since the first terms in the above identities belong respectively to the spaces $P^{(0)}_M\mathbb C^{d(M)}$ and $P^{(0)}_{M,\perp}\mathbb C^{d(M)}$, the same is true for the other terms in these identities.
Therefore, we can apply the projectors $P^{(0)}_M$ and $P^{(0)}_{M,\perp}$ to these identities and equivalently rewrite them as
\[
P^{(0)}_M\tilde S_M(\varepsilon)U_M(u)+\tilde K_M(\varepsilon)U'_M(u)=0,\qquad
P^{(0)}_{M,\perp}\tilde S_{M,\perp}(\varepsilon)U'_M(u)+\tilde K_{M,\perp}(\varepsilon)U_M(u)=0, \qquad(5.12)
\]
where
\[
\tilde S_M(\varepsilon):=S_M(\varepsilon)\bigl(E_{d(M)}-\mathrm{i}K_M(\varepsilon)V^{(1)}_M(\varepsilon)\bigr),\qquad
\tilde K_M(\varepsilon):=P^{(0)}_M S_M(\varepsilon)K_M(\varepsilon)V^{(2)}_M(\varepsilon),
\]
\[
\tilde S_{M,\perp}(\varepsilon):=S_M(\varepsilon)V^{(2)}_M(\varepsilon),\qquad
\tilde K_{M,\perp}(\varepsilon):=P^{(0)}_{M,\perp}S_M(\varepsilon)\bigl(K_{M,\perp}(\varepsilon)-\mathrm{i}P_{M,\perp}(\varepsilon)V^{(1)}_M(\varepsilon)\bigr). \qquad(5.13)
\]
The matrices $S_M(\varepsilon)$, $\tilde S_M(\varepsilon)$, $\tilde S_{M,\perp}(\varepsilon)$, $\tilde K_M(\varepsilon)$, $\tilde K_{M,\perp}(\varepsilon)$ are holomorphic in $\varepsilon$, and by $S_{M,j}$, $\tilde S_{M,j}$, $\tilde S_{M,\perp,j}$, $\tilde K_{M,j}$, $\tilde K_{M,\perp,j}$ we denote the coefficients in their Taylor series. For instance,
\[
\tilde S_{M,j}:=\frac{1}{j!}\frac{d^j\tilde S_M}{d\varepsilon^j}(0),\qquad
\tilde S_{M,\perp,j}:=\frac{1}{j!}\frac{d^j\tilde S_{M,\perp}}{d\varepsilon^j}(0). \qquad(5.14)
\]
In the same way, by $V^{(i)}_{\Gamma,j}$ we denote the coefficients in the Taylor series of the functions $V^{(i)}_\Gamma$. Now we are in a position to prove Lemma 5.1.

Proof of Lemma 5.1.
Since the operator $H_\Gamma(\varepsilon)$ is self-adjoint, the resolvent $(H_\Gamma(\varepsilon)-\lambda)^{-1}$ is well-defined for all $\lambda\in\mathbb C\setminus\mathbb R$. To prove its holomorphy in $\varepsilon$, we shall construct a power series in $\varepsilon$ for this resolvent and prove that it converges absolutely and uniformly for sufficiently small $\varepsilon$. We choose an arbitrary $g\in L_2(\Gamma)$ and denote $u_\Gamma:=(H_\Gamma(\varepsilon)-\lambda)^{-1}g$. The function $u_\Gamma$ solves the boundary value problem for the equation
\[
\bigl(\hat H_\Gamma(\varepsilon)-\lambda\bigr)u_\Gamma=g \qquad(5.15)
\]
subject to vertex conditions (2.5) at the vertices $M\in\Gamma$. The standard estimates for the coefficients of the Taylor series of holomorphic functions imply the inequalities
\[
\|S_{M,j}\|+\|\tilde S_{M,\perp,j}\|+\|\tilde S_{M,j}\|+\|\tilde K_{M,\perp,j}\|+\|\tilde K_{M,j}\|
+\|V^{(2)}_{\Gamma,j}\|_{W^1_\infty(\Gamma)}+\|V^{(1)}_{\Gamma,j}\|_{W^1_\infty(\Gamma)}+\|V^{(0)}_{\Gamma,j}\|_{L_\infty(\Gamma)}\le c_1^{\,j} \qquad(5.16)
\]
with some fixed constant $c_1>1$ independent of $j$; here the matrix norms are those of operators in $\mathbb C^{d(M)}$.

First we seek the function $u_\Gamma$ as a formal power series
\[
u_\Gamma=\sum_{j=0}^{\infty}\varepsilon^j u_j. \qquad(5.17)
\]
Substituting this series and (5.13) into boundary value problem (5.15), (2.5) and equating the coefficients at like powers of $\varepsilon$, we obtain for the functions $u_j$ a recurrent system of boundary value problems (5.6) with the following right-hand sides in the equation and the vertex conditions. For the function $u_0$, the right-hand side of the equation is the function $g$, while all vertex conditions are homogeneous. For $j\ge1$, the right-hand side of the equation for $u_j$ is the function
\[
-\sum_{i=1}^{j}\Bigl(-\frac{d}{dx}V^{(2)}_{\Gamma,i}\frac{d}{dx}+\mathrm{i}\Bigl(\frac{d}{dx}V^{(1)}_{\Gamma,i}+V^{(1)}_{\Gamma,i}\frac{d}{dx}\Bigr)+V^{(0)}_{\Gamma,i}\Bigr)u_{j-i},
\]
while the right-hand sides in the associated vertex conditions are
\[
g_{M_0}=0,\qquad
g_M=-P^{(0)}_M\sum_{i=1}^{j}\bigl(\tilde S_{M,i}\,U_M(u_{j-i})+\tilde K_{M,i}\,U'_M(u_{j-i})\bigr),
\]
\[
g_{M,\perp}=-P^{(0)}_{M,\perp}\sum_{i=1}^{j}\tilde S_{M,\perp,i}\,U'_M(u_{j-i})-P^{(0)}_{M,\perp}\sum_{i=1}^{j}\tilde K_{M,\perp,i}\,U_M(u_{j-i}).
\]
By Lemma 5.3, these problems are uniquely solvable and their solutions satisfy estimate (5.7). Employing this estimate, by induction we show that
\[
\|u_j\|_{\dot W_2^2(\Gamma)}\le c_2^{\,j}\,\|g\|_{L_2(\Gamma)}, \qquad(5.18)
\]
where $c_2>c_1$ is some fixed constant independent of $j$ and $g$. Indeed, for $j=1$ this estimate obviously holds true. Assuming that it holds for all $j\le l-1$, $l\in\mathbb N$, for the aforementioned right-hand sides in the boundary value problem for $u_l$ we immediately obtain the estimates
\[
\Bigl\|\sum_{i=1}^{j}V_{\Gamma,i}\,u_{j-i}\Bigr\|_{L_2(\Gamma)}\le\sum_{i=1}^{j}c_1^{\,i}c_2^{\,j-i}\|g\|_{L_2(\Gamma)},\qquad
\|g_M\|_{\mathbb C^{d(M)}}+\|g_{M,\perp}\|_{\mathbb C^{d(M)}}\le c_3\sum_{i=1}^{j}c_1^{\,i}c_2^{\,j-i}\|g\|_{L_2(\Gamma)},
\]
where $c_3$ is some fixed constant independent of $j$ and $g$. Then by (5.7) we get
\[
\|u_j\|_{\dot W_2^2(\Gamma)}\le c_4\sum_{i=1}^{j}c_1^{\,i}c_2^{\,j-i}\|g\|_{L_2(\Gamma)}
= c_4\,c_2^{\,j}\sum_{i=1}^{j}\Bigl(\frac{c_1}{c_2}\Bigr)^{i}\|g\|_{L_2(\Gamma)}
\le\frac{c_4 c_1}{c_2-c_1}\,c_2^{\,j}\,\|g\|_{L_2(\Gamma)},
\]
where $c_4$ is some fixed constant independent of $j$ and $g$. Now we see that it is sufficient to choose $c_2\ge\max\{2c_1,\,2c_4c_1\}$, and we arrive immediately at estimate (5.18).

The proven estimates (5.18) ensure that series (5.17) converges absolutely and uniformly, for $\varepsilon$ small enough, in the norm of the space $\dot W_2^2(\Gamma)$. The sum of this series solves the boundary value problem described above for the function $u_\Gamma$: indeed, we can substitute the series into this problem and, thanks to the boundary value problems for the functions $u_j$, immediately obtain the desired fact. Hence series (5.17) is indeed the Taylor series for $u_\Gamma$, and this completes the proof.
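The induction just carried out is a standard majorant argument: coefficients defined by a discrete-convolution recurrence with geometrically bounded data stay geometrically bounded themselves, so the power series converges for small $\varepsilon$. A toy sketch of this mechanism (the constants below are arbitrary illustrations, not the constants of the proof):

```python
# Majorant recurrence b_j = c3 * sum_{i=1}^{j} c1**i * b_{j-i}, b_0 = 1,
# mimicking the structure of the estimate for ||u_j||.
c1, c3 = 2.0, 3.0
b = [1.0]
for j in range(1, 25):
    b.append(c3 * sum(c1**i * b[j - i] for i in range(1, j + 1)))

# The coefficients grow at most geometrically: b_j <= c2**j for a
# suitable fixed c2 > c1, so sum_j eps**j * b_j converges for eps < 1/c2.
c2 = 10.0
assert all(b[j] <= c2**j for j in range(1, 25))

# Partial sums of the majorant series at a small eps stabilize quickly.
eps = 0.05
partial = sum(eps**j * b[j] for j in range(25))
```

For this particular recurrence one can check that $b_j=6\cdot 8^{\,j-1}$ for $j\ge1$, so the majorant series converges for $\varepsilon<1/8$; the proof above plays exactly this game with the abstract constants $c_1,\dots,c_4$.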
In this section we consider an auxiliary operator on a certain extension of the graph $\gamma$ and study its properties, which will then be employed in the proof of our main results. The mentioned extension is another graph, denoted by $\gamma^{ex}$, obtained by attaching additional unit edges $e^{ex}_i$, $i\in J_j$, $j=1,\ldots,n$, to each vertex $M_j$, $j=1,\ldots,n$, of the graph $\gamma$. The boundary vertices, being the end-points of the edges $e^{ex}_i$ not coinciding with $M_j$, are denoted by $M^{ex}_i$, $i\in J_j$, $j=1,\ldots,n$. By $\mathcal{M}^{ex}$ we denote the vertices of the graph $\gamma^{ex}$ not coinciding with $M^{ex}_i$, $i=1,\ldots,n$. This set can be naturally identified with the vertices of the graph $\gamma^\infty$.

The aforementioned auxiliary operator on the graph $\gamma^{ex}$ is denoted by $H^{ex}_\gamma(\varepsilon)$. Its differential expression reads as
\[
\hat{H}^{ex}_\gamma(\varepsilon):=-\frac{d}{d\xi}V^{(2)}_\gamma(\cdot,\varepsilon)\frac{d}{d\xi}
+\mathrm{i}\left(\frac{d}{d\xi}V^{(1)}_\gamma(\cdot,\varepsilon)+V^{(1)}_\gamma(\cdot,\varepsilon)\frac{d}{d\xi}\right)
+V^{(0)}_\gamma(\cdot,\varepsilon)\quad\text{on }\gamma,
\]
\[
\hat{H}^{ex}_\gamma(\varepsilon):=-v^{(2)}_i(\varepsilon)\frac{d^2}{d\xi_i^2}
+2\mathrm{i}\,\varepsilon\,v^{(1)}_i(\varepsilon)\frac{d}{d\xi_i}\quad\text{on } e^{ex}_i,\quad i\in J_j,\ j=1,\ldots,n,
\]
where $\xi_i$ is the variable on the edge $e^\infty_i$ measured from the vertex $M_j$, while $e_i$ are the edges in the graph $\Gamma$ incident to the vertex $M_0$.

At the vertices $M\in\mathcal{M}^{ex}$ we impose the vertex conditions
\[
\varepsilon A_M(\varepsilon)U_M(u)+B_M(\varepsilon)U_M'(u)=0. \tag{6.1}
\]
The vertices $M^{ex}_i$ are subject to the Robin condition
\[
V^{(2)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}'(u)-\mathrm{i}\,\varepsilon\,V^{(1)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}(u)=0,
\qquad
V^{(p)}_{\Gamma,M_0}(\varepsilon):=\operatorname{diag}\big\{v^{(p)}_i(\varepsilon)\big\}_{i=1,\ldots,d}, \tag{6.2}
\]
where we denote
\[
U_{\gamma^{ex}}'(u):=\begin{pmatrix}\dfrac{du\big|_{e^{ex}_1}}{d\xi_1}(M^{ex}_1)\\ \vdots\\ \dfrac{du\big|_{e^{ex}_d}}{d\xi_d}(M^{ex}_d)\end{pmatrix},
\qquad
U_{\gamma^{ex}}(u):=\begin{pmatrix}u(M^{ex}_1)\\ \vdots\\ u(M^{ex}_d)\end{pmatrix}
\]
for all functions $u\in\dot{W}_2^2(\gamma^{ex})$.

Condition (6.2) is obviously a particular case of condition (6.1). It follows from rank condition (2.8), formulae (2.11), (2.12) and the holomorphy in $\varepsilon$ of the matrices $A_M(\varepsilon)$ and $B_M(\varepsilon)$ that
\[
\operatorname{rank}\big(A_M(\varepsilon)\ \ B_M(\varepsilon)\big)=d(M)
\]
for each vertex $M\in\gamma^{ex}$ and sufficiently small $\varepsilon$. The self-adjointness of the matrix in (2.6) implies the same property for the similar matrix associated with each vertex $M\in\gamma^{ex}$. Hence, the operator $H^{ex}_\gamma(\varepsilon)$ satisfies all assumptions of Lemma 4.1 and therefore it is self-adjoint.

In this section we study the properties of the resolvent $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}$ for $\lambda\in\mathbb{C}\setminus\mathbb{R}$. Since the operator $H^{ex}_\gamma(\varepsilon)$ is self-adjoint, this resolvent is well-defined. The main aim of this section is to prove that this operator is meromorphic in $\varepsilon$ and to study the structure of its pole at $\varepsilon=0$. The final result is formulated at the end of the section in Lemma 6.6.

In this subsection we prove several auxiliary lemmata, which will then be employed for studying the resolvent $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}$. As in the previous section, we rewrite vertex condition (6.1) at each vertex $M\in\mathcal{M}^{ex}$ as (5.2) with
\[
\mathrm{U}_M(\varepsilon):=-\Big(\varepsilon A_M(\varepsilon)+\mathrm{i}B_M(\varepsilon)\big(V^{(2)}_M(\varepsilon)\big)^{-1}\big(\varepsilon V^{(1)}_M(\varepsilon)-E_{d(M)}\big)\Big)^{-1}
\cdot\Big(\varepsilon A_M(\varepsilon)+\mathrm{i}B_M(\varepsilon)\big(V^{(2)}_M(\varepsilon)\big)^{-1}\big(\varepsilon V^{(1)}_M(\varepsilon)+E_{d(M)}\big)\Big). \tag{6.3}
\]
Lemma 5.2 remains true also for each $M\in\mathcal{M}^{ex}$. We again introduce the projectors $\mathrm{P}_M(\varepsilon)$ and $\mathrm{P}_{M,\perp}(\varepsilon)$ and the transforming matrix $\mathrm{S}_M(\varepsilon)$ obeying (5.9), (5.10), (5.11); these matrices again turn out to be holomorphic in $\varepsilon$. By means of this transforming matrix, we finally rewrite the vertex condition as (5.12), (5.13).

Lemma 6.1.
Assume that Condition (A) holds and the operator $H_\gamma$ has a virtual level at the bottom of its essential spectrum with associated non-trivial solutions $\psi^{(j)}$, $j=1,\ldots,k$. Given an arbitrary family of vectors $g_M\in\mathrm{P}^{(0)}_M\mathbb{C}^{d(M)}$, $g_{M,\perp}\in\mathrm{P}^{(0)}_{M,\perp}\mathbb{C}^{d(M)}$ for each vertex $M\in\gamma^{ex}$, $M\ne M^{ex}_i$, an arbitrary vector $g_{M^{ex}}:=\big(g_{M^{ex}_i}\big)_{i=1,\ldots,d}\in\mathbb{C}^d$, and an arbitrary function $g\in L_2(\gamma^{ex})$, for each $\lambda\in\mathbb{C}\setminus\mathbb{R}$ the boundary value problem
\[
\hat{H}^{ex}_\gamma(0)u=g\ \text{ on }\gamma^{ex},\qquad
V^{(2)}_{\Gamma,M_0}(0)U_{\gamma^{ex}}'(u)=g_{M^{ex}},\qquad
\mathrm{P}^{(0)}_M U_M(u)=g_M,\quad
\mathrm{P}^{(0)}_{M,\perp}V^{(0)}_{\gamma,M}(u)+\mathrm{K}_{M,\perp}(0)U_M(u)=g_{M,\perp}\ \text{ at vertices }M\in\mathcal{M}^{ex}, \tag{6.4}
\]
is solvable in $\dot{W}_2^2(\gamma^{ex})$ if and only if
\[
(g,\psi^{(j)})_{L_2(\gamma^{ex})}
=-\big(g_{M^{ex}},\Psi^{(j)}\big)_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(g_{M,\perp},U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(g_M,V^{(0)}_{\gamma,M}(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}} \tag{6.5}
\]
for each $j=1,\ldots,k$. Under these conditions, there exists a unique solution $u_*$ obeying the identities
\[
\big(U_\gamma(u_*),\Psi^{(j)}\big)_{\mathbb{C}^d}=0,\qquad j=1,\ldots,k, \tag{6.6}
\]
and this solution satisfies the estimate
\[
\|u_*\|_{\dot{W}_2^2(\gamma^{ex})}\leqslant C\Big(\|g\|_{L_2(\gamma^{ex})}+\|g_{M^{ex}}\|_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(\|g_M\|_{\mathbb{C}^{d(M)}}+\|g_{M,\perp}\|_{\mathbb{C}^{d(M)}}\big)\Big), \tag{6.7}
\]
where $C$ is a constant independent of $g$, $g_{M^{ex}}$, $g_M$, $g_{M,\perp}$. The general solution of problem (6.4) reads as
\[
u=u_*+\sum_{j=1}^{k}c_j\psi^{(j)}, \tag{6.8}
\]
where $c_j$ are arbitrary constants.

Proof. Throughout the proof, by $C$ we denote various inessential constants independent of $g$, $g_{M^{ex}}$, $g_M$, $g_{M,\perp}$. We follow the main lines of the proof of Lemma 5.3. For each $M\in\mathcal{M}^{ex}$ we define a function $u_M$ by formula (5.8) with $\tilde{x}_i$ replaced by $\tilde{\xi}_i$. We also let
\[
u_{M^{ex}_i}(\xi):=g_{M^{ex}_i}\big(1-\chi(\xi_i)\big)\xi_i,\qquad i=1,\ldots,d,
\]
where $g_{M^{ex}_i}$ are the coordinates of the vector $g_{M^{ex}}$, and we continue each of these functions by zero on $\gamma^{ex}\setminus e^{ex}_i$. Then the function
\[
u_{bnd}(\xi):=\sum_{M\in\gamma^{ex}}u_M
\]
satisfies the vertex conditions in (6.4).
We seek a solution to problem (6.4) as
\[
u=u_{bnd}+\tilde{u} \tag{6.9}
\]
and for the new unknown function $\tilde{u}$ we obtain the equation
\[
H^{ex}_\gamma(0)\tilde{u}=\tilde{g},\qquad \tilde{g}:=g-\hat{H}^{ex}_\gamma(0)u_{bnd}. \tag{6.10}
\]
The function $\tilde{g}$ obviously satisfies the estimate
\[
\|\tilde{g}\|_{L_2(\gamma^{ex})}\leqslant C\Big(\|g\|_{L_2(\gamma^{ex})}+\|g_{M^{ex}}\|_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(\|g_M\|_{\mathbb{C}^{d(M)}}+\|g_{M,\perp}\|_{\mathbb{C}^{d(M)}}\big)\Big). \tag{6.11}
\]
Let us study the solvability of equation (6.10). The operator $H^{ex}_\gamma(0)$ is self-adjoint and, since the graph $\gamma^{ex}$ is finite and contains no infinite edges, the resolvent of this operator is compact. Hence, the spectrum of the operator $H^{ex}_\gamma(0)$ consists of infinitely many discrete eigenvalues, and to study the solvability of equation (6.10) we need to know whether zero is an eigenvalue of the operator $H^{ex}_\gamma(0)$ and what the associated eigenfunctions are.

We consider the non-trivial solutions $\psi^{(j)}$ associated with the virtual level at the bottom of the essential spectrum of the operator $H_\gamma$. Since these functions are constant on the edges $e^\infty_i$, $i=1,\ldots,d$, the restrictions of these functions to the graph $\gamma^{ex}$, still denoted by $\psi^{(j)}$, satisfy the homogeneous Neumann condition at the vertices $M^{ex}_i$, $i=1,\ldots,d$. Hence, in view of problem (2.13), the functions $\psi^{(j)}$ are eigenfunctions associated with the zero eigenvalue of the operator $H^{ex}_\gamma(0)$. And vice versa, if there is a non-trivial solution to the equation $H^{ex}_\gamma(0)\psi=0$, then, in view of the definition of the differential expression $\hat{H}^{ex}_\gamma(0)$ on the edges $e^{ex}_i$ and the Neumann condition at their end-points $M^{ex}_i$, the function $\psi$ is necessarily constant on each of these edges. Hence, we can replace the edges $e^{ex}_i$ by infinite edges $e^\infty_i$, passing in this way to the graph $\gamma^\infty$, and we then continue the function $\psi$ by the aforementioned constants to the entire edges $e^\infty_i$. The resulting function solves problem (2.13).
Therefore, zero is an eigenvalue of the operator $H^{ex}_\gamma(0)$ if and only if the operator $H_\gamma$ has a virtual level at the bottom of its essential spectrum. The associated non-trivial solutions to problem (2.13), restricted to $\gamma^{ex}$, are eigenfunctions of the operator $H^{ex}_\gamma(0)$, and they exhaust all eigenfunctions of the operator $H^{ex}_\gamma(0)$ associated with the zero eigenvalue.

Let $L_\perp$ be the orthogonal complement in $L_2(\gamma^{ex})$ to the eigenspace spanned by the functions $\psi^{(j)}$, $j=1,\ldots,k$. Then there exists a bounded inverse operator $\big(H^{ex}_\gamma(0)\big)^{-1}$ acting in the space $L_\perp$. Equation (6.10) is solvable if and only if $\tilde{g}\in L_\perp$, which is equivalent to the identities
\[
(\tilde{g},\psi^{(j)})_{L_2(\gamma^{ex})}=0,\qquad j=1,\ldots,k. \tag{6.12}
\]
Integrating by parts and employing the definition of the functions $\psi^{(j)}$ and the vertex conditions for the function $u_{bnd}$, it is straightforward to confirm that
\[
\big(\hat{H}^{ex}_\gamma(0)u_{bnd},\psi^{(j)}\big)_{L_2(\gamma^{ex})}
=-\sum_{i=1}^{d}v^{(2)}_i(0)\frac{du_{M^{ex}_i}\big|_{e^{ex}_i}}{d\xi_i}(M^{ex}_i)\,\overline{\psi^{(j)}(M^{ex}_i)}
+\sum_{M\in\mathcal{M}^{ex}}\big(V^{(0)}_{\gamma,M}(u_M),U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(U_M(u_M),V^{(0)}_{\gamma,M}(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
\]
\[
=-\big(g_{M^{ex}},U_\gamma(\psi^{(j)})\big)_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(g_{M,\perp},U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(g_M,V^{(0)}_{\gamma,M}(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}.
\]
Substituting these identities into (6.12), we conclude that equation (6.10), and hence problem (6.4), is solvable if and only if conditions (6.5) are satisfied.

Under these conditions, the function $\tilde{u}=\big(H^{ex}_\gamma(0)\big)^{-1}\tilde{g}$ solves equation (6.10) and, in view of (6.11), this function satisfies the inequality
\[
\|\tilde{u}\|_{\dot{W}_2^2(\gamma^{ex})}\leqslant C\|\tilde{g}\|_{L_2(\gamma^{ex})}
\leqslant C\Big(\|g\|_{L_2(\gamma^{ex})}+\|g_{M^{ex}}\|_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(\|g_M\|_{\mathbb{C}^{d(M)}}+\|g_{M,\perp}\|_{\mathbb{C}^{d(M)}}\big)\Big). \tag{6.13}
\]
By formula (6.9) we then find a particular solution to problem (6.4); we denote this solution by $\tilde{u}_*$. In view of the orthonormalization conditions for the vectors $\Psi^{(j)}=U_\gamma(\psi^{(j)})$, the function $u_*$ obeying conditions (6.6) is given explicitly:
\[
u_*=\tilde{u}_*-\sum_{j=1}^{k}\big(U_\gamma(\tilde{u}_*),\Psi^{(j)}\big)_{\mathbb{C}^d}\,\psi^{(j)}.
\]
This formula, estimate (6.13), identity (6.9), and the definition of the function $u_{bnd}$ imply inequality (6.7). Formula (6.8) is obvious. The proof is complete.

The matrices $\mathrm{P}_M(\varepsilon)$, $\mathrm{P}_{M,\perp}(\varepsilon)$, $\mathrm{S}_M(\varepsilon)$, $\tilde{\mathrm{S}}_M(\varepsilon)$, $\tilde{\mathrm{S}}_{M,\perp}(\varepsilon)$, $\tilde{\mathrm{K}}_{M,\perp}(\varepsilon)$, $\mathrm{K}_{M,\perp}(\varepsilon)$, $V^{(p)}_{\gamma,M}(\varepsilon)$, $V^{(p)}_{\Gamma,M_0}(\varepsilon)$, $p=1,2$, are holomorphic in $\varepsilon$, and by $\mathrm{P}_{M,j}$, $\mathrm{P}_{M,\perp,j}$, $\mathrm{S}_{M,j}$, $\tilde{\mathrm{S}}_{M,j}$, $\tilde{\mathrm{S}}_{M,\perp,j}$, $\tilde{\mathrm{K}}_{M,\perp,j}$, $\mathrm{K}_{M,\perp,j}$, $V^{(p)}_{\gamma,M,j}$, $V^{(p)}_{\Gamma,M_0,j}$, $p=1,2$, we denote the coefficients of their Taylor series, see, for instance, (5.14).

Lemma 6.2.
For each vertex $M\in\mathcal{M}^{ex}$, the following identities hold:
\[
\mathrm{P}^{(0)}_{M,\perp}\mathrm{P}_{M,1}=\mathrm{P}_{M,1}\mathrm{P}^{(0)}_M,\quad
\mathrm{P}^{(0)}_{M,\perp}\mathrm{P}_{M,1}\mathrm{P}^{(0)}_{M,\perp}=0,\quad
\mathrm{P}^{(0)}_M\mathrm{P}_{M,1}=\mathrm{P}_{M,1}\mathrm{P}^{(0)}_{M,\perp},\quad
\mathrm{P}^{(0)}_M\mathrm{P}_{M,1}\mathrm{P}^{(0)}_M=0,\quad
\mathrm{P}_{M,1}^*=\mathrm{P}_{M,1}, \tag{6.14}
\]
\[
\mathrm{S}_{M,1}=\mathrm{P}_{M,1}\mathrm{P}^{(0)}_{M,\perp}-\mathrm{P}_{M,1}\mathrm{P}^{(0)}_M
=\mathrm{P}_{M,1}\mathrm{P}^{(0)}_{M,\perp}-\mathrm{P}^{(0)}_{M,\perp}\mathrm{P}_{M,1}, \tag{6.15}
\]
\[
\mathrm{P}^{(0)}_M\mathrm{K}_{M,1}\mathrm{P}^{(0)}_{M,\perp}=0, \tag{6.16}
\]
\[
\mathrm{P}^{(0)}_M Q_M(\psi^{(i)})
=-\mathrm{P}^{(0)}_M\mathrm{K}_{M,1}V^{(0)}_{\gamma,M}(\psi^{(i)})
-\mathrm{P}^{(0)}_M\mathrm{P}_{M,1}\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)}), \tag{6.17}
\]
\[
(\mathrm{U}^{(0)}_M+E_{d(M)})^{-1}\mathrm{P}^{(0)}_{M,\perp}Q_M(\psi^{(i)})
=\mathrm{P}^{(0)}_{M,\perp}\mathrm{P}_{M,1}\Big(V^{(2)}_{\gamma,M}(0)U_M'(\psi^{(i)})-\mathrm{i}V^{(1)}_{\gamma,M}(0)\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)})\Big)
-\mathrm{P}^{(0)}_{M,\perp}\Big(V^{(2)}_{\gamma,M,1}U_M'(\psi^{(i)})+\big(\mathrm{K}_{M,\perp,1}-\mathrm{i}V^{(1)}_{\gamma,M,1}\big)\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)})\Big). \tag{6.18}
\]

Proof.
Formulae (6.14) are obtained immediately by calculating the coefficients at $\varepsilon$ in the obvious identities
\[
\big(\mathrm{P}_{M,\perp}(\varepsilon)\big)^2=\mathrm{P}_{M,\perp}(\varepsilon),\qquad
\big(\mathrm{P}_M(\varepsilon)\big)^2=\mathrm{P}_M(\varepsilon),\qquad
\big(\mathrm{P}_{M,\perp}(\varepsilon)\big)^*=\mathrm{P}_{M,\perp}(\varepsilon).
\]
Formula (6.15) can be confirmed by expanding the right hand side in (5.10) into the power series in $\varepsilon$, calculating the coefficient at $\varepsilon$, and then employing (6.14).

It follows from the formula for $\mathrm{K}_M(\varepsilon)$ in (5.5) that
\[
\mathrm{i}(\mathrm{U}_M(\varepsilon)-E_{d(M)})\mathrm{P}_M(\varepsilon)\mathrm{K}_M(\varepsilon)\mathrm{P}_{M,\perp}(\varepsilon)
=\mathrm{P}_M(\varepsilon)(\mathrm{U}_M(\varepsilon)+E_{d(M)})\mathrm{P}_{M,\perp}(\varepsilon)=0.
\]
Expanding this identity into the power series in $\varepsilon$ and calculating the coefficient at $\varepsilon$, we arrive at (6.16).

According to definition (6.3) of $\mathrm{U}_M(\varepsilon)$, formulae (4.6), (4.7) applied with
\[
\tilde{A}^{(\Xi)}_M=\tilde{A}_M(\varepsilon):=\varepsilon A_M(\varepsilon)+\mathrm{i}B_M(\varepsilon)\big(V^{(2)}_{\gamma,M}(\varepsilon)\big)^{-1}V^{(1)}_{\gamma,M}(\varepsilon),\qquad
\tilde{B}^{(\Xi)}_M=\tilde{B}_M(\varepsilon):=B_M(\varepsilon)\big(V^{(2)}_{\gamma,M}(\varepsilon)\big)^{-1},
\]
and the easily checked identities
\[
-\big(\tilde{A}_M(\varepsilon)-\mathrm{i}\tilde{B}_M(\varepsilon)\big)^{-1}\tilde{A}_M(\varepsilon)=\mathrm{i}\big(\mathrm{U}_M(\varepsilon)-E_{d(M)}\big),\qquad
-\big(\tilde{A}_M(\varepsilon)-\mathrm{i}\tilde{B}_M(\varepsilon)\big)^{-1}\tilde{B}_M(\varepsilon)=\mathrm{U}_M(\varepsilon)+E_{d(M)},
\]
for each $\psi^{(i)}$ and each vertex $M\in\mathcal{M}^{ex}$ we have:
\[
\varepsilon A_M(\varepsilon)U_M(\psi^{(i)})+B_M(\varepsilon)U_M'(\psi^{(i)})
=\frac{\mathrm{i}}{2}\big(\tilde{A}_M(\varepsilon)-\mathrm{i}\tilde{B}_M(\varepsilon)\big)
\bigg(\mathrm{i}(\mathrm{U}_M(\varepsilon)-E_{d(M)})\mathrm{P}_M(\varepsilon)
\Big(\big(E_{d(M)}-\mathrm{i}\mathrm{K}_M(\varepsilon)V^{(1)}_{\gamma,M}(\varepsilon)\big)U_M(\psi^{(i)})
+\mathrm{K}_M(\varepsilon)V^{(2)}_{\gamma,M}(\varepsilon)U_M'(\psi^{(i)})\Big)
+\big(\mathrm{U}_M(\varepsilon)+E_{d(M)}\big)\mathrm{P}_{M,\perp}(\varepsilon)
\Big(V^{(2)}_{\gamma,M}(\varepsilon)U_M'(\psi^{(i)})
+\big(\mathrm{K}_{M,\perp}(\varepsilon)-\mathrm{i}V^{(1)}_{\gamma,M}(\varepsilon)\big)U_M(\psi^{(i)})\Big)\bigg). \tag{6.19}
\]
The latter long term in the right hand side of this identity becomes the assumed vertex condition for $\psi^{(i)}$ as $\varepsilon=0$, and hence it vanishes as $\varepsilon=0$. We then expand identity (6.19) into the Taylor series in $\varepsilon$ and calculate the coefficient at $\varepsilon$ if $B_M(0)=0$ and at $\varepsilon^2$ if $B_M(0)\ne0$. Employing then the vertex condition for $\psi^{(i)}$, we obtain:
\[
Q_M(\psi^{(i)})
=\mathrm{i}(\mathrm{U}^{(0)}_M-E_{d(M)})\mathrm{P}^{(0)}_M\Big(\mathrm{P}^{(0)}_M\mathrm{K}_{M,1}V^{(0)}_{\gamma,M}(\psi^{(i)})+\mathrm{P}_{M,1}U_M(\psi^{(i)})\Big)
+(\mathrm{U}^{(0)}_M+E_{d(M)})\mathrm{P}^{(0)}_{M,\perp}\Big(\mathrm{P}_{M,1}\big(V^{(0)}_{\gamma,M}(\psi^{(i)})+\mathrm{K}_{M,\perp}(0)U_M(\psi^{(i)})\big)
+V^{(2)}_{\gamma,M,1}U_M'(\psi^{(i)})+\big(\mathrm{K}_{M,\perp,1}-\mathrm{i}V^{(1)}_{\gamma,M,1}\big)\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)})\Big).
\]
Applying the matrices $\mathrm{P}^{(0)}_M$ and $(\mathrm{U}^{(0)}_M+E_{d(M)})^{-1}\mathrm{P}^{(0)}_{M,\perp}$ to the above identity and employing formulae (6.14) and
\[
\mathrm{P}^{(0)}_M\mathrm{K}_{M,\perp}(0)=0,\qquad \mathrm{P}^{(0)}_M U_M(\psi^{(i)})=0,
\]
we arrive at (6.17), (6.18). The proof is complete.

Owing to identities (6.18) and the vertex conditions for $\psi^{(i)}$, we can rewrite formula (2.22) for $q^{(i)}_M$ as
\[
q^{(i)}_M=\mathrm{P}^{(0)}_{M,\perp}\mathrm{P}_{M,1}V^{(0)}_{\gamma,M}(\psi^{(i)})-\mathrm{P}^{(0)}_{M,\perp}\mathrm{K}_{M,\perp,1}\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)}).
\]
Substituting this expression into the formula for $Q^{(ij)}_M$, we obtain immediately that
\[
Q^{(ij)}_M=\big(\mathrm{P}_{M,1}V^{(0)}_{\gamma,M}(\psi^{(i)}),\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\big(\mathrm{K}_{M,\perp,1}\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(i)}),\mathrm{P}^{(0)}_{M,\perp}U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
+\big(\mathrm{K}_{M,1}\mathrm{P}^{(0)}_M V^{(0)}_{\gamma,M}(\psi^{(i)}),\mathrm{P}^{(0)}_M U_M(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}
+\big(\mathrm{P}_{M,1}\mathrm{P}^{(0)}_M U_M(\psi^{(i)}),\mathrm{P}^{(0)}_M V^{(0)}_{\gamma,M}(\psi^{(j)})\big)_{\mathbb{C}^{d(M)}}. \tag{6.20}
\]
Since the matrices $\mathrm{K}_{M,\perp,1}$ and $\mathrm{K}_{M,1}$ are self-adjoint, we conclude immediately that $\overline{Q^{(ji)}_M}=Q^{(ij)}_M$. In view of formulae (2.19), (2.20), we hence arrive at the following lemma.

Lemma 6.3.
The matrix $Q$ is self-adjoint. The eigenvectors $y_i$, $i=1,\ldots,k$, of the self-adjoint matrix $Q$ form an orthonormalized basis in $\mathbb{C}^k$, and the matrix
\[
Y:=\begin{pmatrix}y_{11}&\ldots&y_{k1}\\ \vdots&&\vdots\\ y_{1k}&\ldots&y_{kk}\end{pmatrix},\qquad
y_i:=\begin{pmatrix}y_{i1}\\ \vdots\\ y_{ik}\end{pmatrix},\qquad i=1,\ldots,k,
\]
reduces $Q$ to diagonal form:
\[
Y^*QY=\operatorname{diag}\{\lambda_1(Q),\ldots,\lambda_k(Q)\},\qquad Y^*=Y^{-1}, \tag{6.21}
\]
where $\lambda_i(Q)$ are the eigenvalues of the matrix $Q$ with associated eigenvectors $y_i$.

If zero is an eigenvalue of the matrix $Q$, we order the eigenvalues $\lambda_i(Q)$ so that $\lambda_i(Q)=0$ for $i=1,\ldots,k_0$. If zero is not an eigenvalue of $Q$, we let $k_0:=0$. We define
\[
\tilde{\psi}^{(j)}:=\sum_{i=1}^{k}y_{ji}\psi^{(i)},\qquad
\psi^{(i)}=\sum_{j=1}^{k}\overline{y_{ji}}\,\tilde{\psi}^{(j)}. \tag{6.22}
\]
The introduced functions $\tilde{\psi}^{(j)}$ are non-trivial solutions of problem (2.13) obeying condition (2.14) and
\[
\big(U_\gamma(\tilde{\psi}^{(i)}),U_\gamma(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^d}=\delta_{ij}, \tag{6.23}
\]
where $\delta_{ij}$ is the Kronecker delta.

For $i=1,\ldots,k$ we denote
\[
\tilde{g}^{(i)}:=-\frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\tilde{\psi}^{(i)},\qquad
\tilde{g}^{(i)}_M:=-\mathrm{P}^{(0)}_M\big(\tilde{\mathrm{S}}_{M,1}U_M(\tilde{\psi}^{(i)})+\tilde{\mathrm{K}}_{M,1}U_M'(\tilde{\psi}^{(i)})\big),\qquad
\tilde{g}^{(i)}_{M^{ex}}:=\mathrm{i}V^{(1)}_{\Gamma,M_0}(0)U_{\gamma^{ex}}(\tilde{\psi}^{(i)}),\qquad
\tilde{g}^{(i)}_{M,\perp}:=-\mathrm{P}^{(0)}_{M,\perp}\big(\tilde{\mathrm{S}}_{M,\perp,1}U_M'(\tilde{\psi}^{(i)})+\tilde{\mathrm{K}}_{M,\perp,1}U_M(\tilde{\psi}^{(i)})\big). \tag{6.24}
\]
Hereafter the derivatives of the differential expression $\hat{H}^{ex}_\gamma$ are defined as
\[
\frac{d^i\hat{H}^{ex}_\gamma}{d\varepsilon^i}:=-\frac{d}{d\xi}\frac{d^iV^{(2)}_\gamma}{d\varepsilon^i}(\cdot,0)\frac{d}{d\xi}
+\mathrm{i}\left(\frac{d}{d\xi}\frac{d^iV^{(1)}_\gamma}{d\varepsilon^i}(\cdot,0)+\frac{d^iV^{(1)}_\gamma}{d\varepsilon^i}(\cdot,0)\frac{d}{d\xi}\right)
+\frac{d^iV^{(0)}_\gamma}{d\varepsilon^i}(\cdot,0).
\]

Lemma 6.4.
The identities
\[
\lambda_i(Q)\,\delta_{ij}=-(\tilde{g}^{(i)},\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})}
-\big(\tilde{g}^{(i)}_{M^{ex}},U_\gamma(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(\tilde{g}^{(i)}_{M,\perp},U_M(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(\tilde{g}^{(i)}_M,V^{(0)}_{\gamma,M}(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}} \tag{6.25}
\]
hold. Problem (6.4) with $g=\tilde{g}^{(i)}$, $g_{M^{ex}}=\tilde{g}^{(i)}_{M^{ex}}$, $g_M=\tilde{g}^{(i)}_M$, $g_{M,\perp}=\tilde{g}^{(i)}_{M,\perp}$ for $i=1,\ldots,k_0$ is solvable. There exist solutions $\tilde{\psi}^{(i)}_1$, $i=1,\ldots,k_0$, of these problems such that the quantities
\[
\hat{Q}^{(ij)}:=-(\hat{g}^{(i)},\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})}
-(\hat{g}^{(i)}_{M^{ex}},U_\gamma(\tilde{\psi}^{(j)}))_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(\hat{g}^{(i)}_{M,\perp},U_M(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(\hat{g}^{(i)}_M,V^{(2)}_{\gamma,M}(0)U_M'(\tilde{\psi}^{(j)})-\mathrm{i}V^{(1)}_{\gamma,M}(0)U_M(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}}, \tag{6.26}
\]
where $i=1,\ldots,k_0$, $j=1,\ldots,k$, and
\[
\hat{g}^{(i)}:=-\sum_{p=1}^{2}\frac{1}{p!}\frac{d^p\hat{H}^{ex}_\gamma}{d\varepsilon^p}(0)\tilde{\psi}^{(i)}_{2-p},\qquad \tilde{\psi}^{(i)}_0:=\tilde{\psi}^{(i)},
\]
\[
\hat{g}^{(i)}_{M^{ex}}:=\mathrm{i}\sum_{p=0}^{1}V^{(1)}_{\Gamma,M_0,p}U_{\gamma^{ex}}(\tilde{\psi}^{(i)}_{1-p})-V^{(2)}_{\Gamma,M_0,1}U_\gamma(\tilde{\psi}^{(i)}_1),
\]
\[
\hat{g}^{(i)}_{M,\perp}:=-\mathrm{P}^{(0)}_{M,\perp}\sum_{p=1}^{2}\big(\tilde{\mathrm{S}}_{M,\perp,p}U_M'(\tilde{\psi}^{(i)}_{2-p})+\tilde{\mathrm{K}}_{M,\perp,p}U_M(\tilde{\psi}^{(i)}_{2-p})\big),\qquad
\hat{g}^{(i)}_M:=-\mathrm{P}^{(0)}_M\sum_{p=1}^{2}\big(\tilde{\mathrm{S}}_{M,p}U_M(\tilde{\psi}^{(i)}_{2-p})+\tilde{\mathrm{K}}_{M,p}U_M'(\tilde{\psi}^{(i)}_{2-p})\big),
\]
satisfy the identities
\[
\hat{Q}^{(ij)}=0\quad\text{as } i=1,\ldots,k_0,\ j=k_0+1,\ldots,k, \tag{6.27}
\]
\[
\hat{Q}^{(ij)}=\overline{\hat{Q}^{(ji)}}\quad\text{as } i,j=1,\ldots,k_0. \tag{6.28}
\]

Proof.
According to Lemma 6.1, problem (6.4) is solvable if and only if solvability conditions (6.5) are satisfied. Since each function $\psi^{(i)}$ can be expressed as a linear combination of the functions $\tilde{\psi}^{(i)}$, see (6.22), we can replace conditions (6.5) by the equivalent ones:
\[
(g,\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})}
=-\big(g_{M^{ex}},U_\gamma(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(g_{M,\perp},U_M(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(g_M,V^{(0)}_{\gamma,M}(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^{d(M)}} \tag{6.29}
\]
for each $j=1,\ldots,k$.

Integrating by parts and taking into consideration the vertex conditions for $\tilde{\psi}^{(i)}$, we obtain:
\[
-(\tilde{g}^{(i)},\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})}
=\bigg(\frac{dV^{(2)}_\gamma}{d\varepsilon}(\cdot,0)\frac{d\tilde{\psi}^{(i)}}{d\xi},\frac{d\tilde{\psi}^{(j)}}{d\xi}\bigg)_{L_2(\gamma)}
+\bigg(\frac{d\tilde{\psi}^{(i)}}{d\xi},\mathrm{i}\frac{dV^{(1)}_\gamma}{d\varepsilon}(\cdot,0)\tilde{\psi}^{(j)}\bigg)_{L_2(\gamma)}
+\bigg(\mathrm{i}\frac{dV^{(1)}_\gamma}{d\varepsilon}(\cdot,0)\tilde{\psi}^{(i)},\frac{d\tilde{\psi}^{(j)}}{d\xi}\bigg)_{L_2(\gamma)}
+\bigg(\frac{dV^{(0)}_\gamma}{d\varepsilon}(\cdot,0)\tilde{\psi}^{(i)},\tilde{\psi}^{(j)}\bigg)_{L_2(\gamma)}
+\sum_{M\in\mathcal{M}^{ex}}\Big(V^{(2)}_{\gamma,M,1}U_M'(\tilde{\psi}^{(i)})-\mathrm{i}V^{(1)}_{\gamma,M,1}U_M(\tilde{\psi}^{(i)}),U_M(\tilde{\psi}^{(j)})\Big)_{\mathbb{C}^{d(M)}}
-\mathrm{i}\big(V^{(1)}_{\Gamma,M_0}(0)U_\gamma(\tilde{\psi}^{(i)}),U_\gamma(\tilde{\psi}^{(j)})\big)_{\mathbb{C}^d}. \tag{6.30}
\]
The definition of the matrices $\tilde{\mathrm{S}}_{M,\perp}(\varepsilon)$, $\tilde{\mathrm{K}}_{M,\perp}(\varepsilon)$ in (5.13) with $V^{(p)}_M=V^{(p)}_{\gamma,M}$, $p=1,2$, the definition of $\mathrm{K}_{M,\perp}(0)$, the first identity in (5.9) and identities (6.14), (6.15) imply that
\[
\tilde{\mathrm{S}}_{M,1}=\mathrm{S}_{M,1}-\mathrm{i}\mathrm{K}_{M,1}V^{(1)}_{\gamma,M}(0),\qquad
\tilde{\mathrm{K}}_{M,1}=\mathrm{P}^{(0)}_M\mathrm{K}_{M,1}V^{(2)}_{\gamma,M}(0),\qquad
\tilde{\mathrm{S}}_{M,\perp,1}=V^{(2)}_{\gamma,M,1}+\mathrm{S}_{M,1}V^{(2)}_{\gamma,M}(0),\qquad
\tilde{\mathrm{K}}_{M,\perp,1}=\mathrm{P}^{(0)}_{M,\perp}\big(\mathrm{K}_{M,\perp,1}-\mathrm{i}V^{(1)}_{\gamma,M,1}+\mathrm{i}\mathrm{P}_{M,1}V^{(1)}_{\gamma,M}(0)\big).
\]
We substitute the latter formulae and (6.30), (6.24) into the right hand side of (6.25) and use identities (6.20), (6.22), (6.23). Comparing the result with (2.19), (2.20), (2.21) and employing Lemma 6.2, we see that formulae (6.25) are true. In the same way we also confirm that solvability conditions (6.29) for problem (6.4) with $g=\tilde{g}^{(i)}$, $g_{M^{ex}}=\tilde{g}^{(i)}_{M^{ex}}$, $g_M=\tilde{g}^{(i)}_M$, $g_{M,\perp}=\tilde{g}^{(i)}_{M,\perp}$ are satisfied for $i=1,\ldots,k_0$, since $\lambda_i(Q)=0$ for such $i$, and hence these problems are solvable. In view of (6.8), the general solutions of these problems read as
\[
\tilde{\psi}^{(i)}_1=\tilde{\psi}^{(i)}_{1*}+\sum_{p=1}^{k}c_{ip}\tilde{\psi}^{(p)},
\]
where $\tilde{\psi}^{(i)}_{1*}$ are particular solutions obeying (6.6). We substitute these formulae into the left hand side of (6.26) and take identities (6.25) into consideration. This leads us to the formula
\[
\hat{Q}^{(ij)}=\hat{Q}^{(ij)}_*+\lambda_j(Q)\,c_{ij},
\]
where the numbers $\hat{Q}^{(ij)}_*$ are calculated by formula (6.26) with $\tilde{\psi}^{(i)}_1$ replaced by $\tilde{\psi}^{(i)}_{1*}$. Since $\lambda_j(Q)\ne0$ as $j=k_0+1,\ldots,k$, we see that by letting
\[
c_{ij}:=-\frac{\hat{Q}^{(ij)}_*}{\lambda_j(Q)},\qquad j=k_0+1,\ldots,k,
\]
we arrive immediately at identities (6.27).

We proceed to proving (6.28).
Employing the definition of the functions $\tilde{\psi}^{(i)}_0$ and $\tilde{\psi}^{(i)}_1$, it is straightforward to confirm that the functions $\hat{\psi}^{(i)}_\varepsilon:=\tilde{\psi}^{(i)}_0+\varepsilon\tilde{\psi}^{(i)}_1$ satisfy the identities:
\[
\hat{H}^{ex}_\gamma(\varepsilon)\hat{\psi}^{(i)}_\varepsilon=\varepsilon^2\hat{g}^{(i)}+O(\varepsilon^3),\qquad
V^{(2)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}'(\hat{\psi}^{(i)}_\varepsilon)-\mathrm{i}\,\varepsilon\,V^{(1)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}(\hat{\psi}^{(i)}_\varepsilon)=\varepsilon^2\hat{g}^{(i)}_{M^{ex}}+O(\varepsilon^3),
\]
\[
\mathrm{P}^{(0)}_M\big(\tilde{\mathrm{S}}_M(\varepsilon)U_M(\hat{\psi}^{(i)}_\varepsilon)+\tilde{\mathrm{K}}_M(\varepsilon)U_M'(\hat{\psi}^{(i)}_\varepsilon)\big)=\varepsilon^2\hat{g}^{(i)}_M+O(\varepsilon^3),\qquad
\mathrm{P}^{(0)}_{M,\perp}\big(\tilde{\mathrm{S}}_{M,\perp}(\varepsilon)U_M'(\hat{\psi}^{(i)}_\varepsilon)+\tilde{\mathrm{K}}_{M,\perp}(\varepsilon)U_M(\hat{\psi}^{(i)}_\varepsilon)\big)=\varepsilon^2\hat{g}^{(i)}_{M,\perp}+O(\varepsilon^3).
\]
Integrating by parts and taking into consideration the self-adjointness of the matrix $\mathrm{K}_{M,\perp}(\varepsilon)$, we also easily find that
\[
\mathfrak{h}^{ex}_\varepsilon(\hat{\psi}^{(i)}_\varepsilon,\hat{\psi}^{(j)}_\varepsilon)=\overline{\mathfrak{h}^{ex}_\varepsilon(\hat{\psi}^{(j)}_\varepsilon,\hat{\psi}^{(i)}_\varepsilon)}, \tag{6.31}
\]
where
\[
\mathfrak{h}^{ex}_\varepsilon(\psi,\phi):=\big(\hat{H}^{ex}_\gamma(\varepsilon)\psi,\phi\big)_{L_2(\gamma^{ex})}
-\sum_{M\in\mathcal{M}^{ex}}\Big(\mathrm{S}_M(\varepsilon)\mathrm{P}_{M,\perp}(\varepsilon)\big(V^{(2)}_{\gamma,M}(\varepsilon)U_M'(\psi)
+\big(\mathrm{K}_{M,\perp}(\varepsilon)-\mathrm{i}V^{(1)}_{\gamma,M}(\varepsilon)\big)U_M(\psi)\big),\big(\mathrm{S}^*_M(\varepsilon)\big)^{-1}\mathrm{P}_{M,\perp}(\varepsilon)U_M(\phi)\Big)_{\mathbb{C}^{d(M)}}
+\sum_{M\in\mathcal{M}^{ex}}\Big(\mathrm{S}_M(\varepsilon)\mathrm{P}_M(\varepsilon)U_M(\psi),\big(\mathrm{S}^*_M(\varepsilon)\big)^{-1}\mathrm{P}_M(\varepsilon)\big(V^{(2)}_{\gamma,M}(\varepsilon)U_M'(\phi)-\mathrm{i}V^{(1)}_{\gamma,M}(\varepsilon)U_M(\phi)\big)\Big)_{\mathbb{C}^{d(M)}}
-\big(V^{(2)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}'(\psi)-\mathrm{i}V^{(1)}_{\Gamma,M_0}(\varepsilon)U_\gamma(\psi),U_\gamma(\phi)\big)_{\mathbb{C}^d}.
\]
Using now formulae (5.9), (5.11) and the definition of the functions $\hat{g}^{(i)}$, $\hat{g}^{(i)}_{M^{ex}}$, $\hat{g}^{(i)}_{M,\perp}$, $\hat{g}^{(i)}_M$, by straightforward calculations we find that
\[
\mathfrak{h}^{ex}_\varepsilon(\hat{\psi}^{(i)}_\varepsilon,\hat{\psi}^{(j)}_\varepsilon)=\varepsilon^2\hat{Q}^{(ij)}+O(\varepsilon^3).
\]
This identity and (6.31) imply (6.28). The proof is complete.

The resolvent of the operator $H^{ex}_\gamma(\varepsilon)$

In this subsection we study the dependence on $\varepsilon$ of the resolvent $\big(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)^{-1}$. It is clear that for each $g\in L_2(\gamma^{ex})$ the function $v^{ex}_\gamma:=(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}g$ solves the boundary value problem for the equation
\[
\big(\hat{H}^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)v_\gamma=g\quad\text{on }\gamma^{ex} \tag{6.32}
\]
subject to vertex conditions (6.1), (6.2).
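Note that the mere existence of this solution, though not its meromorphy in $\varepsilon$, is immediate (an elementary remark of ours, not part of the argument below): by the spectral theorem for the self-adjoint operator $H^{ex}_\gamma(\varepsilon)$,

```latex
% Crude a priori bound on the resolvent for a self-adjoint operator and
% a non-real spectral parameter z = \varepsilon^2\lambda:
\big\| (H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1} \big\|_{L_2(\gamma^{ex})\to L_2(\gamma^{ex})}
  \leqslant \frac{1}{|\operatorname{Im}(\varepsilon^2\lambda)|}
  = \frac{1}{\varepsilon^2\,|\operatorname{Im}\lambda|}.
```

This bound already exhibits the $\varepsilon^{-2}$ blow-up rate that the Laurent analysis below makes precise.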
We are going to show that the solution of this boundary value problem is meromorphic in $\varepsilon$. Our main idea is the same as in the proof of Lemma 5.1. Namely, we first construct a formal power series in $\varepsilon$ and then prove that it converges and that its sum solves problem (6.32), (6.1), (6.2). The main difference is that now the limiting problem (6.4) is solvable only under solvability conditions (6.5), which we have to check at each step. This produces a pole of the operator $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}$ at $\varepsilon=0$. Namely, the aforementioned power series for this resolvent is introduced as
\[
v^{ex}(\cdot,\varepsilon)=(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}g=\sum_{j=-2}^{\infty}\varepsilon^j v_j, \tag{6.33}
\]
where $v_j=v_j(\xi)$ are some functions to be determined. We substitute this series into boundary value problem (6.32), (6.2), (5.12), (5.13) with $V^{(p)}_M=V^{(p)}_{\gamma,M}$, $p=1,2$, and also expand the coefficients in the equation and vertex conditions into Taylor series in $\varepsilon$. We equate the coefficients at like powers of $\varepsilon$, arriving at a recurrent system of boundary value problems for the $v_j$, each being problem (6.4) with
\[
g=g_j,\qquad g_{M^{ex}}=g_{M^{ex},j},\qquad g_M=g_{M,j},\qquad g_{M,\perp}=g_{M,\perp,j}, \tag{6.34}
\]
where
\[
g_{-2}:=0,\qquad
g_{-1}:=-\frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)v_{-2},\qquad
g_0:=g-\frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)v_{-1}-\frac12\frac{d^2\hat{H}^{ex}_\gamma}{d\varepsilon^2}(0)v_{-2}-\lambda v_{-2},
\qquad
g_j:=-\lambda v_{j-2}-\sum_{i=1}^{j+2}\frac{1}{i!}\frac{d^i\hat{H}^{ex}_\gamma}{d\varepsilon^i}(0)v_{j-i},\quad j\geqslant1, \tag{6.35}
\]
\[
g_{M,j}:=-\mathrm{P}^{(0)}_M\sum_{i=1}^{j+2}\big(\tilde{\mathrm{S}}_{M,i}U_M(v_{j-i})+\tilde{\mathrm{K}}_{M,i}U_M'(v_{j-i})\big),\quad j\geqslant-1, \tag{6.36}
\]
\[
g_{M,\perp,j}:=-\mathrm{P}^{(0)}_{M,\perp}\sum_{i=1}^{j+2}\tilde{\mathrm{S}}_{M,\perp,i}U_M'(v_{j-i})-\mathrm{P}^{(0)}_{M,\perp}\sum_{i=1}^{j+2}\tilde{\mathrm{K}}_{M,\perp,i}U_M(v_{j-i}),\quad j\geqslant-1, \tag{6.37}
\]
\[
g_{M^{ex},j}:=\mathrm{i}\sum_{i=1}^{j+1}V^{(1)}_{\Gamma,M_0,i}U_{\gamma^{ex}}(v_{j-i-1})-\sum_{i=1}^{j+1}V^{(2)}_{\Gamma,M_0,i}U_{\gamma^{ex}}'(v_{j-i}). \tag{6.38}
\]
The problem for the function $v_{-2}$ is homogeneous, and this is why $v_{-2}$ is a linear combination of the functions $\tilde{\psi}^{(i)}$:
\[
v_{-2}=\sum_{p=1}^{k}c_{-2,p}\tilde{\psi}^{(p)},
\]
where $c_{-2,p}$ are some constants.

We write solvability conditions (6.29) for problem (6.4), (6.34), (6.35), (6.36), (6.37) with $j=-1$, and by (6.25) we get immediately that these conditions become
\[
\lambda_p(Q)\,\delta_{pj}\,c_{-2,p}=0,\qquad j=1,\ldots,k.
\]
The latter identities imply
\[
c_{-2,p}=0,\qquad p=k_0+1,\ldots,k, \tag{6.39}
\]
and hence
\[
v_{-2}=\sum_{p=1}^{k_0}c_{-2,p}\tilde{\psi}^{(p)}. \tag{6.40}
\]
Then, according to Lemma 6.4 and formula (6.8), the function $v_{-1}$ reads as
\[
v_{-1}=\sum_{p=1}^{k_0}c_{-2,p}\tilde{\psi}^{(p)}_1+\sum_{p=1}^{k}c_{-1,p}\tilde{\psi}^{(p)}, \tag{6.41}
\]
where $c_{-1,p}$ are some constants.

We proceed to problem (6.4), (6.34), (6.35), (6.36), (6.37) with $j=0$ for $v_0$. By Lemma 6.4, solvability conditions (6.29) for this problem read as
\[
\sum_{p=1}^{k_0}\hat{Q}^{(pj)}c_{-2,p}-\lambda\sum_{p=1}^{k_0}c_{-2,p}(\tilde{\psi}^{(p)},\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})}
+\sum_{p=k_0+1}^{k}c_{-1,p}\lambda_p(Q)\,\delta_{pj}=(g,\tilde{\psi}^{(j)})_{L_2(\gamma^{ex})},\qquad j=1,\ldots,k,
\]
which can be rewritten as
\[
(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})\,c^{(0)}_{-2}=F^{(0)}_0,\qquad
\tilde{\mathrm{Q}}_\perp c^{\perp}_{-1}=F^{\perp}_0+\lambda\tilde{\mathrm{G}}_\perp c^{(0)}_{-2}, \tag{6.42}
\]
where
\[
\hat{\mathrm{Q}}:=\begin{pmatrix}\hat{Q}^{(11)}&\ldots&\hat{Q}^{(1k_0)}\\ \vdots&&\vdots\\ \hat{Q}^{(k_01)}&\ldots&\hat{Q}^{(k_0k_0)}\end{pmatrix},\qquad
\tilde{\mathrm{Q}}_\perp:=\operatorname{diag}\big\{\lambda_{k_0+1}(Q),\ldots,\lambda_k(Q)\big\},
\]
\[
\tilde{\mathrm{G}}:=\begin{pmatrix}(\tilde{\psi}^{(1)},\tilde{\psi}^{(1)})_{L_2(\gamma^{ex})}&\ldots&(\tilde{\psi}^{(k_0)},\tilde{\psi}^{(1)})_{L_2(\gamma^{ex})}\\ \vdots&&\vdots\\ (\tilde{\psi}^{(1)},\tilde{\psi}^{(k_0)})_{L_2(\gamma^{ex})}&\ldots&(\tilde{\psi}^{(k_0)},\tilde{\psi}^{(k_0)})_{L_2(\gamma^{ex})}\end{pmatrix},\qquad
\tilde{\mathrm{G}}_\perp:=\begin{pmatrix}(\tilde{\psi}^{(1)},\tilde{\psi}^{(k_0+1)})_{L_2(\gamma^{ex})}&\ldots&(\tilde{\psi}^{(k_0)},\tilde{\psi}^{(k_0+1)})_{L_2(\gamma^{ex})}\\ \vdots&&\vdots\\ (\tilde{\psi}^{(1)},\tilde{\psi}^{(k)})_{L_2(\gamma^{ex})}&\ldots&(\tilde{\psi}^{(k_0)},\tilde{\psi}^{(k)})_{L_2(\gamma^{ex})}\end{pmatrix},
\]
\[
c^{(0)}_{-2}:=\begin{pmatrix}c_{-2,1}\\ \vdots\\ c_{-2,k_0}\end{pmatrix},\qquad
c^{\perp}_{-1}:=\begin{pmatrix}c_{-1,k_0+1}\\ \vdots\\ c_{-1,k}\end{pmatrix}, \tag{6.43}
\]
\[
F^{(0)}_0:=\begin{pmatrix}F_{01}\\ \vdots\\ F_{0k_0}\end{pmatrix},\qquad
F^{\perp}_0:=\begin{pmatrix}F_{0,k_0+1}\\ \vdots\\ F_{0k}\end{pmatrix},\qquad
F_{0p}:=\big(g,\tilde{\psi}^{(p)}\big)_{L_2(\gamma^{ex})}. \tag{6.44}
\]
Since $\tilde{\mathrm{G}}$ is the Gram matrix of the linearly independent functions $\tilde{\psi}^{(1)},\ldots,\tilde{\psi}^{(k_0)}$, this matrix is self-adjoint and positive definite. By identities (6.28), the matrix $\hat{\mathrm{Q}}$ is self-adjoint. Hence, the matrix $\tilde{\mathrm{G}}^{-\frac12}\hat{\mathrm{Q}}\,\tilde{\mathrm{G}}^{-\frac12}$ is self-adjoint and all its eigenvalues are real.
Since $\operatorname{Im}\lambda\ne0$, this ensures the invertibility of the matrix $\tilde{\mathrm{G}}^{-\frac12}\hat{\mathrm{Q}}\,\tilde{\mathrm{G}}^{-\frac12}-\lambda$, and the same is true for the matrix $\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}}$ thanks to the formula
\[
\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}}=\tilde{\mathrm{G}}^{\frac12}\big(\tilde{\mathrm{G}}^{-\frac12}\hat{\mathrm{Q}}\,\tilde{\mathrm{G}}^{-\frac12}-\lambda\big)\tilde{\mathrm{G}}^{\frac12}.
\]
Hence, we can solve equations (6.42):
\[
c^{(0)}_{-2}=(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}F^{(0)}_0,\qquad
c^{\perp}_{-1}=\tilde{\mathrm{Q}}_\perp^{-1}\big(F^{\perp}_0+\lambda\tilde{\mathrm{G}}_\perp(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}F^{(0)}_0\big). \tag{6.45}
\]
The obtained formulae completely determine the function $v_{-2}$ and the first sum in (6.41), see (6.40), (6.43), and partially determine the second sum in (6.41), while the coefficients $c_{-1,p}$, $p=1,\ldots,k_0$, remain unknown at this step. Formulae (6.45) also ensure the solvability of the problem for $v_0$, and this function reads as
\[
v_0=v_{0*}+\sum_{p=1}^{k_0}c_{-1,p}\tilde{\psi}^{(p)}_1+\sum_{p=1}^{k}c_{0p}\tilde{\psi}^{(p)},
\]
where $v_{0*}$ is the particular solution to the problem obeying condition (6.6). To determine the coefficients $c_{-1,p}$, $p=1,\ldots,k_0$, and $c_{0p}$, $p=1,\ldots,k$, we need to consider the solvability conditions for the problems for the functions $v_j$, $j\geqslant1$.

The above described procedure for determining $v_{-2}$, $v_{-1}$, $v_0$ is to be repeated recurrently for the other functions $v_j$, leading to the general formulae
\[
v_j=v_{j*}+\sum_{p=1}^{k_0}c_{j-1,p}\tilde{\psi}^{(p)}_1+\sum_{p=1}^{k}c_{jp}\tilde{\psi}^{(p)},
\]
where $v_{j*}$ is a particular solution to problem (6.4) with $g=g_{j*}$, $g_M=g_{M,j*}$, $g_{M,\perp}=g_{M,\perp,j*}$, $g_{M^{ex}}=g_{M^{ex},j*}$,
\[
g_{0*}:=g,\qquad
g_{j*}:=-\Big(\lambda+\frac12\frac{d^2\hat{H}^{ex}_\gamma}{d\varepsilon^2}(0)\Big)\Big(v_{j-2,*}+\sum_{p=k_0+1}^{k}c_{j-2,p}\tilde{\psi}^{(p)}\Big)
-\frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)v_{j-1,*}
-\sum_{i=3}^{j+2}\frac{1}{i!}\frac{d^i\hat{H}^{ex}_\gamma}{d\varepsilon^i}(0)v_{j-i},\quad j\geqslant1,
\]
\[
g_{M,j*}:=-\mathrm{P}^{(0)}_M\sum_{i=3}^{j+2}\big(\tilde{\mathrm{S}}_{M,i}U_M(v_{j-i})+\tilde{\mathrm{K}}_{M,i}U_M'(v_{j-i})\big)
-\sum_{i=1}^{2}\mathrm{P}^{(0)}_M\big(\tilde{\mathrm{S}}_{M,i}U_M(v_{j-i,*})+\tilde{\mathrm{K}}_{M,i}U_M'(v_{j-i,*})\big)
-\mathrm{P}^{(0)}_M\sum_{p=k_0+1}^{k}c_{j-2,p}\big(\tilde{\mathrm{S}}_{M,2}U_M(\tilde{\psi}^{(p)})+\tilde{\mathrm{K}}_{M,2}U_M'(\tilde{\psi}^{(p)})\big),
\]
\[
g_{M,\perp,j*}:=-\mathrm{P}^{(0)}_{M,\perp}\sum_{i=3}^{j+2}\tilde{\mathrm{S}}_{M,\perp,i}U_M'(v_{j-i})
-\mathrm{P}^{(0)}_{M,\perp}\sum_{i=3}^{j+2}\tilde{\mathrm{K}}_{M,\perp,i}U_M(v_{j-i})
-\sum_{i=1}^{2}\Big(\mathrm{P}^{(0)}_{M,\perp}\tilde{\mathrm{S}}_{M,\perp,i}U_M'(v_{j-i,*})+\mathrm{P}^{(0)}_{M,\perp}\tilde{\mathrm{K}}_{M,\perp,i}U_M(v_{j-i,*})\Big)
-\mathrm{P}^{(0)}_{M,\perp}\sum_{p=k_0+1}^{k}c_{j-2,p}\Big(\tilde{\mathrm{S}}_{M,\perp,2}U_M'(\tilde{\psi}^{(p)})+\tilde{\mathrm{K}}_{M,\perp,2}U_M(\tilde{\psi}^{(p)})\Big),
\]
\[
g_{M^{ex},j*}:=\mathrm{i}\sum_{i=1}^{j+1}V^{(1)}_{\Gamma,M_0,i}U_{\gamma^{ex}}(v_{j-i-1})
-\sum_{i=3}^{j+2}V^{(2)}_{\Gamma,M_0,i}U_\gamma(v_{j-i})
-\sum_{i=1}^{2}V^{(2)}_{\Gamma,M_0,i}U_\gamma(v_{j-i,*})
+\mathrm{i}V^{(1)}_{\Gamma,M_0}(0)U_{\gamma^{ex}}(v_{j-1,*})
-V^{(2)}_{\Gamma,M_0,2}\sum_{p=k_0+1}^{k}c_{j-2,p}U_\gamma(\tilde{\psi}^{(p)}), \tag{6.46}
\]
where we assume that $v_{j*}=0$ for $j\leqslant-1$ and $c_{jp}=0$ for $j\leqslant-3$.

The constants $c_{jp}$ are determined by the solvability conditions for the problems for $v_{j+1}$ and $v_{j+2}$:
\[
c^{(0)}_j=(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}F^{(0)}_{j+2},\qquad
c^{\perp}_j=\tilde{\mathrm{Q}}_\perp^{-1}\big(F^{\perp}_{j+1}+\lambda\tilde{\mathrm{G}}_\perp(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}F^{(0)}_{j+1}\big),
\]
\[
c^{(0)}_j:=\begin{pmatrix}c_{j1}\\ \vdots\\ c_{jk_0}\end{pmatrix},\qquad
c^{\perp}_j:=\begin{pmatrix}c_{j,k_0+1}\\ \vdots\\ c_{jk}\end{pmatrix},\qquad
F^{(0)}_j:=\begin{pmatrix}F_{j1}\\ \vdots\\ F_{jk_0}\end{pmatrix},\qquad
F^{\perp}_j:=\begin{pmatrix}F_{j,k_0+1}\\ \vdots\\ F_{jk}\end{pmatrix},
\]
\[
F_{jp}:=(g_{j*},\tilde{\psi}^{(p)})_{L_2(\gamma^{ex})}
+\big(g_{M^{ex},j*},U_\gamma(\tilde{\psi}^{(p)})\big)_{\mathbb{C}^d}
+\sum_{M\in\mathcal{M}^{ex}}\big(g_{M,\perp,j*},U_M(\tilde{\psi}^{(p)})\big)_{\mathbb{C}^{d(M)}}
-\sum_{M\in\mathcal{M}^{ex}}\big(g_{M,j*},V^{(0)}_{\gamma,M}(\tilde{\psi}^{(p)})\big)_{\mathbb{C}^{d(M)}},\qquad p=1,\ldots,k. \tag{6.47}
\]
We proceed to proving the convergence of series (6.33). Once this is done, thanks to the above studied problems for the $v_j$, the convergence will imply that the sum of the series solves boundary value problem (6.32), (6.1), (6.2). Since this problem is uniquely solvable, this will yield that the action of the operator $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}$ is given by converging series (6.33). The convergence of the series is obviously implied by the following lemma.
Lemma 6.5.
The estimates
\[
\|v_j\|_{\dot{W}_2^2(\gamma^{ex})}\leqslant c_4 c_5^j\|g\|_{L_2(\gamma^{ex})},\qquad j\geqslant-2, \tag{6.48}
\]
hold, where $c_4$, $c_5$ are some fixed constants independent of $j$, $\varepsilon$ and $g$.

Proof. We are going to prove the estimates
\[
\|v_{j*}\|_{\dot{W}_2^2(\gamma^{ex})}\leqslant c_5^j\|g\|_{L_2(\gamma^{ex})},\qquad
|c_{jp}|\leqslant c_5^j\|g\|_{L_2(\gamma^{ex})},\quad p=1,\ldots,k,\ j\geqslant-2, \tag{6.49}
\]
which will imply (6.48). We begin by mentioning that estimate (5.16) holds with $V^{(i)}_{\Gamma,j}$ replaced by $V^{(i)}_{\gamma,j}$. Without loss of generality we seek the constant $c_5$ obeying $c_5>c_0$.

For $j=-2$, estimates (6.49) follow immediately from (6.39), (6.45), (6.43) and the identity $v_{-2,*}=0$. The further estimates are proved by induction. Namely, we assume that the inequalities
\[
\|v_{j*}\|_{\dot{W}_2^2(\gamma^{ex})}\leqslant c_5^j\|g\|_{L_2(\gamma^{ex})},\quad j\leqslant q-1,\qquad
|c_{jp}|\leqslant c_5^j\|g\|_{L_2(\gamma^{ex})},\quad p=k_0+1,\ldots,k,\ j\leqslant q-1,\qquad
|c_{jp}|\leqslant c_5^j\|g\|_{L_2(\gamma^{ex})},\quad p=1,\ldots,k_0,\ j\leqslant q-2,
\]
hold. Then, employing formulae (6.46), it is straightforward to confirm that
\[
\|g_{q*}\|_{L_2(\gamma^{ex})}+\sum_{M\in\mathcal{M}^{ex}}\big(\|g_{M,q*}\|_{\mathbb{C}^{d(M)}}+\|g_{M,\perp,q*}\|_{\mathbb{C}^{d(M)}}\big)+\|g_{M^{ex},q*}\|_{\mathbb{C}^d}
\leqslant c_6\sum_{i=1}^{q}c_0^i c_5^{q-i}\|g\|_{L_2(\gamma^{ex})}
\leqslant\frac{c_6 c_0}{c_5-c_0}\,c_5^{q}\|g\|_{L_2(\gamma^{ex})},
\]
where $c_6$ is a fixed constant independent of $g$, $\varepsilon$ and $j$. Hence, by estimate (6.7) and formulae (6.47) we get that
\[
\|v_{q*}\|_{\dot{W}_2^2(\gamma^{ex})}+\sum_{p=1}^{k_0}|c_{q-2,p}|+\sum_{p=k_0+1}^{k}|c_{q-1,p}|
\leqslant\frac{c_7 c_0}{c_5-c_0}\,c_5^{q}\|g\|_{L_2(\gamma^{ex})},
\]
where $c_7$ is a fixed constant independent of $g$, $\varepsilon$ and $j$. Choosing then $c_5>\max\{2c_0,\,c_0(1+c_7)\}$, we arrive at estimates (6.49) for $j=q$. The proof is complete.

Hence, the operator $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}$ is meromorphic in $\varepsilon$ and satisfies representation (6.33). The final result is summarized in the following lemma.

Lemma 6.6.
For each $\lambda\in\mathbb{C}\setminus\mathbb{R}$, the resolvent $(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}:L_2(\gamma^{ex})\to\dot{W}_2^2(\gamma^{ex})$ is meromorphic in $\varepsilon$ for $\varepsilon$ small enough. This operator can be represented as
\[
\big(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)^{-1}g=\varepsilon^{-2}v_{-2}+\varepsilon^{-1}v_{-1}+\tilde{\mathcal{R}}_\gamma(\lambda,\varepsilon)g, \tag{6.50}
\]
where $\tilde{\mathcal{R}}_\gamma(\lambda,\varepsilon):L_2(\gamma^{ex})\to\dot{W}_2^2(\gamma^{ex})$ is a bounded operator holomorphic in $\varepsilon$, while the functions $v_{-2}$, $v_{-1}$ are given by the formulae
\[
v_{-2}=\sum_{p=1}^{k_0}C_{-2,p}(g)\tilde{\psi}^{(p)},\qquad
v_{-1}=\sum_{p=1}^{k_0}C_{-2,p}(g)\tilde{\psi}^{(p)}_1+\sum_{p=1}^{k}C_{-1,p}(g)\tilde{\psi}^{(p)}, \tag{6.51}
\]
where $C_{-2,p},C_{-1,p}:L_2(\gamma^{ex})\to\mathbb{C}$ are some bounded linear functionals and, in particular,
\[
\begin{pmatrix}C_{-2,1}(g)\\ \vdots\\ C_{-2,k_0}(g)\end{pmatrix}
:=(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}
\begin{pmatrix}\big(g,\tilde{\psi}^{(1)}\big)_{L_2(\gamma^{ex})}\\ \vdots\\ \big(g,\tilde{\psi}^{(k_0)}\big)_{L_2(\gamma^{ex})}\end{pmatrix},\qquad
\begin{pmatrix}C_{-1,k_0+1}(g)\\ \vdots\\ C_{-1,k}(g)\end{pmatrix}
:=\tilde{\mathrm{Q}}_\perp^{-1}
\begin{pmatrix}\big(g,\tilde{\psi}^{(k_0+1)}\big)_{L_2(\gamma^{ex})}\\ \vdots\\ \big(g,\tilde{\psi}^{(k)}\big)_{L_2(\gamma^{ex})}\end{pmatrix}
+\lambda\tilde{\mathrm{Q}}_\perp^{-1}\tilde{\mathrm{G}}_\perp(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}
\begin{pmatrix}\big(g,\tilde{\psi}^{(1)}\big)_{L_2(\gamma^{ex})}\\ \vdots\\ \big(g,\tilde{\psi}^{(k_0)}\big)_{L_2(\gamma^{ex})}\end{pmatrix}. \tag{6.52}
\]

In this section we prove Theorem 2.1. The proof consists of several main steps, and it is convenient to present them in separate subsections.

7.1 Self-adjointness
We are going to prove that, under the assumptions of Theorem 2.1, the operators $H_0$ and $H_\gamma$ are self-adjoint. The matrix in (2.6) is self-adjoint at $\varepsilon=0$ for each vertex $M\in\Gamma$, $M\ne M_0$. Hence, the matrices $A^{(0)}_M$, $B^{(0)}_M$ satisfy conditions (4.2), (4.3) for each vertex $M\in\Gamma$, $M\ne M_0$. Thanks to the self-adjointness of the matrix $Q$ stated in Lemma 6.3, the self-adjointness of the matrix in (4.3) corresponding to the matrices $A^{(0)}_{M_0}$ and $B^{(0)}_{M_0}$ is confirmed by simple straightforward calculations. In view of the non-degeneracy of the matrix $V^{(2)}_{\Gamma,M_0}(0)$, condition (4.2) corresponding to the same matrices is checked as follows:
\[
\operatorname{rank}\big(A^{(0)}_{M_0}\ \ B^{(0)}_{M_0}\big)
=\operatorname{rank}\Big(A^{(0)}_{M_0}+\mathrm{i}B^{(0)}_{M_0}\big(V^{(2)}_{\Gamma,M_0}(0)\big)^{-1}V^{(1)}_{\Gamma,M_0}(0)\Psi\ \ \ B^{(0)}_{M_0}\big(V^{(2)}_{\Gamma,M_0}(0)\big)^{-1}\Psi\Big)
=\operatorname{rank}\begin{pmatrix}\begin{pmatrix}Q&0\\0&E_{d-k}\end{pmatrix}&\begin{pmatrix}E_k&0\\0&0\end{pmatrix}\end{pmatrix}=d.
\]
Hence, we can apply Lemma 4.1, and we see that the operator $H_0$ is self-adjoint.

We proceed to proving the self-adjointness of the operator $H_\gamma$. In view of the assumption on $B_M(0)$ made in Subsection 2.1, see the explanation after (2.7), we easily see that rank condition (2.8) implies the same for the matrices $A^{(0)}_M$ and $B^{(0)}_M$ defined in (2.11), (2.12). Then we multiply the matrix in (2.6) by $\varepsilon$ and then by $\tilde{E}_M(\varepsilon)$ from the left and right, keeping its self-adjointness, and pass to the limit as $\varepsilon\to+0$. By identities (3.3), (3.4) we then arrive at the self-adjointness of the matrix
\[
A^{(0)}_M\big(V^{(2)}_{\gamma,M}(0)\big)^{-1}\big(B^{(0)}_M\big)^*
+\mathrm{i}B^{(0)}_M\big(V^{(2)}_{\gamma,M}(0)\big)^{-1}V^{(1)}_{\gamma,M}(0)\big(V^{(2)}_{\gamma,M}(0)\big)^{-1}\big(B^{(0)}_M\big)^*.
\]
Applying now Lemma 4.1, we conclude that the operator $H_\gamma$ is self-adjoint.

One of the main ingredients in the proof of Theorem 2.1 are two families of special auxiliary functions defined respectively on the graphs $\Gamma$ and $\gamma^{ex}$. Both families are defined as solutions to certain boundary value problems.
The first of them are the boundary value problems for the equation
$$(\hat H_\Gamma(\varepsilon)-\lambda)v^{(\varepsilon)}_{\Gamma,i}=0\quad\text{on }\Gamma,\qquad i=1,\dots,d,\eqno(7.1)$$
subject to homogeneous vertex conditions (2.5) at all vertices $M\in\Gamma$ except for $M_0$ and to an inhomogeneous vertex condition
$$U_{M_0}\big(v^{(\varepsilon)}_{\Gamma,i}\big)=\begin{pmatrix}0\\ \vdots\\ 1\\ \vdots\\ 0\end{pmatrix},\qquad i=1,\dots,d,\eqno(7.2)$$
where the one in the vector in the right hand side stands only at the $i$th position. We seek solutions to these problems as $v^{(\varepsilon)}_{\Gamma,i}=\chi_i+\tilde v^{(\varepsilon)}_{\Gamma,i}$, where $\chi_i$ is an infinitely differentiable cut-off function equal to one in a small neighbourhood of the vertex $M_0$ and vanishing outside a larger neighbourhood. Then for $\tilde v^{(\varepsilon)}_{\Gamma,i}$ we obtain the equation
$$\big(H_\Gamma(\varepsilon)-\lambda\big)\tilde v^{(\varepsilon)}_{\Gamma,i}=-\big(\hat H_\Gamma(\varepsilon)-\lambda\big)\chi_i,$$
where the right hand side is holomorphic in $\varepsilon$. Hence, by Lemma 5.1 with $A_{M_0}=E_d$, $B_{M_0}=0$, problems (7.1), (2.5), (7.2) are uniquely solvable and their solutions $v^{(\varepsilon)}_{\Gamma,i}$ are holomorphic in $\varepsilon$ in the sense of the norm in the space $\mathring W_2^2(\Gamma)$.

Similar auxiliary functions on the graph $\gamma^{ex}$ are introduced as solutions to the following boundary value problems:
$$(\hat H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)v^{(i)}_{\gamma,\varepsilon}=0\quad\text{on }\gamma^{ex},\qquad V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_{\gamma^{ex}}\big(v^{(i)}_{\gamma,\varepsilon}\big)-\mathrm{i}\varepsilon V^{(1)}_{\Gamma,M_0}(\varepsilon)U_{\gamma^{ex}}\big(v^{(i)}_{\gamma,\varepsilon}\big)=\tilde\Psi^{(i)},\eqno(7.3)$$
here $i=1,\dots,d$, with vertex conditions (6.1), where
$$\tilde\Psi^{(i)}:=\sum_{j=1}^{k_1}y_{ij}\Psi^{(j)}=U_\gamma\big(\tilde\psi^{(i)}\big),\quad i=1,\dots,k_1,\qquad \tilde\Psi^{(i)}:=\Psi^{(i)},\quad i=k_1+1,\dots,d.$$
The solvability of these problems and the dependence of the solutions on the parameter $\varepsilon$ are described in the following lemma.

Lemma 7.1.
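The substitution $v=\chi_i+\tilde v$ is a standard way of trading an inhomogeneous vertex condition for an inhomogeneous equation with a compactly supported right-hand side. Schematically (a sketch, assuming, as the construction suggests, that each $\chi_i$ is supported near $M_0$ on the $i$th incident edge only, so that its vertex-trace vector is the $i$th coordinate vector):

```latex
% Sketch: reducing the inhomogeneous vertex condition (7.2) to an
% inhomogeneous equation; \chi_i is a smooth cut-off, \chi_i \equiv 1
% near M_0 on the i-th edge and \chi_i \equiv 0 away from M_0.
v^{(\varepsilon)}_{\Gamma,i} = \chi_i + \tilde v^{(\varepsilon)}_{\Gamma,i},
\qquad
(\hat H_\Gamma(\varepsilon)-\lambda)\,\tilde v^{(\varepsilon)}_{\Gamma,i}
  = -(\hat H_\Gamma(\varepsilon)-\lambda)\,\chi_i
  \quad \text{on } \Gamma .
```

Since $\chi_i$ already carries the prescribed data at $M_0$ while $\tilde v^{(\varepsilon)}_{\Gamma,i}$ satisfies homogeneous conditions, unique solvability for non-real $\lambda$ together with the holomorphy of the right-hand side in $\varepsilon$ gives the holomorphy of the solution.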
For each λ ∈ C \ R problems (7.3), (6.1) are uniquely solvable provided ε is small enough.The solutions are meromorphic in ε and satisfy the following representations: v ( i ) γ,ε = ε − k X j =1 Υ ( j ) γ,i ˜ ψ ( j ) + ε − k X j =1 (cid:16) Υ ( j ) γ,i ˜ ψ ( j )1 + ˜Υ ( j ) γ,i ˜ ψ ( j ) (cid:17) + ˜ v ( i ) γ,ε , i = 1 , . . . , k , (7.4) v ( i ) γ,ε = ε − ˜ ψ ( i ) λ i (Q) + k X j =1 ˜Υ ( j ) γ,i ˜ ψ ( j ) + ˜ v ( i ) γ,ε , i = k + 1 , . . . , k, (7.5) v ( i ) γ,ε = ε − k X j =1 ˜Υ ( j ) γ,i ˜ ψ ( j ) + ˜ v ( i ) γ,ε , i = k + 1 , . . . , d . (7.6) Here Υ ( j ) γ,i and ˜Υ ( j ) γ,i are some constants and Υ (1) γ, . . . Υ (1) γ,k ... ... Υ ( k γ, . . . Υ ( k ) γ,k = ( ˆQ − λ ˜G ) − . (7.7) Proof.
Let $\Psi_{ip}$ be the coordinates of the vectors $\tilde\Psi^{(i)}$ and $\Psi^{(i)}$, that is,
$$\tilde\Psi^{(i)}=\begin{pmatrix}\Psi_{i1}\\ \vdots\\ \Psi_{id}\end{pmatrix},\qquad i=1,\dots,d.\eqno(7.8)$$
Then we introduce the functions
$$v_{ip}(\xi,\varepsilon):=\Psi_{ip}\,\phi_p(\xi_p-1,\varepsilon)\,\chi(\xi_p)\quad\text{on }e^{ex}_p,\qquad p=1,\dots,d,$$
$$\phi_p(t,\varepsilon):=\varepsilon^{-1}\tau_p^{-1}(\varepsilon)\,e^{\mathrm{i}\varepsilon\frac{v^{(1)}_p(\varepsilon)}{v^{(2)}_p(\varepsilon)}t}\sin\frac{\varepsilon\tau_p(\varepsilon)}{v^{(2)}_p(\varepsilon)}t,\qquad \tau_p(\varepsilon):=\sqrt{\lambda\,v^{(2)}_p(\varepsilon)-\big(v^{(1)}_p(\varepsilon)\big)^2},\eqno(7.9)$$
where the branch of the square root is chosen arbitrarily. We continue these functions by zero on $\gamma^{ex}\setminus e^{ex}_i$ and we denote
$$v^{bnd}_i(\xi,\varepsilon):=\sum_{p=1}^{d}v_{ip}(\xi,\varepsilon).$$
This function satisfies the vertex conditions in the problem for $v^{(i)}_{\gamma,\varepsilon}$ as well as $U_{\gamma^{ex}}(v^{bnd}_i)=0$. We seek $v^{(i)}_{\gamma,\varepsilon}$ as $v^{(i)}_{\gamma,\varepsilon}=v^{bnd}_i+\tilde v_i$ and for $\tilde v_i$ we get boundary value problem (6.32), (6.1), (6.2) with
$$g:=\begin{cases}-\big(\hat H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)v^{bnd}_i&\text{on }\gamma^{ex}\setminus\gamma,\\ 0&\text{on }\gamma.\end{cases}$$
We apply Lemma 6.6 to the obtained problem for $\tilde v_i$ and conclude that it is uniquely solvable for each $\lambda\in\mathbb{C}\setminus\mathbb{R}$ provided $\varepsilon$ is small enough. Hence, the same is true for problem (7.3), (6.1). We also apply representations (6.50), (6.51), (6.52) to the problem for $\tilde v_i$. In fact, these representations imply formulae (7.4), (7.5), (7.6), (7.7) owing to the above discussed vertex conditions for $v^{bnd}_i$ and the following identities based on a simple integration by parts:
$$\big(g,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex})}=-\big((\hat H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)v^{bnd}_i,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex}\setminus\gamma)}=\big(V^{(2)}_{\Gamma,M_0}U'_{\gamma^{ex}}(v^{bnd}_i)-\mathrm{i}\varepsilon V^{(1)}_{\Gamma,M_0}U_{\gamma^{ex}}(v^{bnd}_i),\tilde\Psi^{(j)}\big)_{\mathbb{C}^d}+\varepsilon^2\lambda\big(v^{bnd}_i,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex}\setminus\gamma)}=\delta_{ij}+\varepsilon^2\lambda\big(v^{bnd}_i,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex}\setminus\gamma)},\qquad i,j=1,\dots,k_1,$$
$$\big(g,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex})}=\varepsilon^2\lambda\big(v^{bnd}_i,\tilde\psi^{(j)}\big)_{L_2(\gamma^{ex}\setminus\gamma)},\qquad i=k_1+1,\dots,d,\quad j=1,\dots,k_1.$$
The proof is complete.

We introduce three d × d matrices: T Γ ( ε ) := T Γ , ( ε ) . . . T Γ , d ( ε ) ... ... T Γ ,d ( ε ) . . . T Γ ,d d ( ε ) , T γ ( ε ) := T γ, ( ε ) . . . T γ, d ( ε ) ... ... T γ,d ( ε ) . .
. T γ,d d ( ε ) , T ′ γ ( ε ) := T ′ γ, ( ε ) . . . T ′ γ, d ( ε ) ... ... T ′ γ,d ( ε ) . . . T ′ γ,d d ( ε ) , (7.10)where T Γ ,ip ( ε ) := v (2) i ( ε ) dv ( p )Γ ,ε (cid:12)(cid:12) e i dx i ( M ) − i v (1) i ( ε ) v ( p )Γ ,ε (cid:12)(cid:12) e i ( M ) , i, p = 1 , . . . , d , (7.11) T γ,ip ( ε ) := v ( p ) γ,ε (cid:12)(cid:12) e exi ( M j ) , T ′ γ,ip ( ε ) := v (2) i ( ε ) dv ( p ) γ,ε (cid:12)(cid:12) e exi dξ i ( M j ) − i ε v (1) i ( ε ) v ( p ) γ,ε (cid:12)(cid:12) e exi ( M j ) , where i ∈ J j , j = 1 , . . . , n , p = 1 , . . . , d .We denote: φ i ( t, ε ) := e i ε v (1) i ( ε ) v (2) i ( ε ) t cos ετ i ( ε ) v (2) i ( ε ) t, φ ′ i ( t, ε ) := e i ε v (1) i ( ε ) v (2) i ( ε ) t ετ i ( ε ) sin ετ i ( ε ) v (2) i ( ε ) t, Θ exp ( ε ) := diag ( exp i ε v (1) i ( ε ) v (2) i ( ε ) !) i =1 ,...,d , Θ cos ( ε ) := diag ( cos ετ i ( ε ) v (2) i ( ε ) ) i =1 ,...,d , Θ τ ( ε ) := diag (cid:8) τ i ( ε ) (cid:9) i =1 ,...,d , Θ sin ( ε ) := diag ( sin ετ i ( ε ) v (2) i ( ε ) ) i =1 ,...,d , ˜Ψ := (cid:0) ˜ Ψ . . . ˜ Ψ k (cid:1) , ˜Ψ = (cid:0) Ψ k +1 . . . Ψ k (cid:1) , ˜Ψ := (cid:0) ˜ Ψ . . . ˜ Ψ d (cid:1) , Φ := (cid:16) U γ ( ˜ ψ (1)1 ) . . . U γ ( ˜ ψ ( k )1 ) (cid:17) . The next lemmata describe some properties of the above matrices, which we shall make use in whatfollows.
Lemma 7.2.
The matrices $\mathrm{T}_\Gamma(\varepsilon)$ and $\mathrm{T}'_\gamma(\varepsilon)$ are holomorphic in sufficiently small $\varepsilon$, while the matrix $\mathrm{T}_\gamma(\varepsilon)$ is meromorphic in sufficiently small $\varepsilon$. The leading terms of the Laurent series of the matrix $\mathrm{T}_\gamma$ are as follows:
$$\mathrm{T}_\gamma(\varepsilon)=\varepsilon^{-2}\mathrm{T}_{\gamma,-2}+\varepsilon^{-1}\mathrm{T}_{\gamma,-1}+\tilde{\mathrm{T}}_\gamma(\varepsilon),\eqno(7.12)$$
where
$$\mathrm{T}_{\gamma,-2}:=\big(\tilde\Psi(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}\ \ 0\big),\eqno(7.13)$$
$$\mathrm{T}_{\gamma,-1}:=\Big(\Phi(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}\ \ \tilde\Psi\operatorname{diag}\big\{\lambda^{-1}_{k_1+1}(\mathrm{Q}),\dots,\lambda^{-1}_{k_2}(\mathrm{Q})\big\}\Big)+\tilde\Psi\tilde\Upsilon,\eqno(7.14)$$
$$\tilde\Upsilon:=\begin{pmatrix}\tilde\Upsilon^{(1)}_{\gamma,1}&\dots&\tilde\Upsilon^{(1)}_{\gamma,d}\\ \vdots&&\vdots\\ \tilde\Upsilon^{(k_1)}_{\gamma,1}&\dots&\tilde\Upsilon^{(k_1)}_{\gamma,d}\end{pmatrix},$$
and $\tilde{\mathrm{T}}_\gamma(\varepsilon)$ is a matrix holomorphic in $\varepsilon$. The following identities hold:
$$\mathrm{T}'_\gamma(\varepsilon)=\Theta^{-1}(\varepsilon)\Theta_{\cos}(\varepsilon)\tilde\Psi+\varepsilon\,\Theta^{-1}(\varepsilon)\Theta_{\cos}(\varepsilon)\Theta_\tau(\varepsilon)\mathrm{T}_\gamma(\varepsilon),\eqno(7.15)$$
$$\varepsilon V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)-\mathrm{i}\varepsilon^2 V^{(1)}_{\Gamma,M_0}(\varepsilon)U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)=-\varepsilon\,\Theta^{-1}(\varepsilon)\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big).\eqno(7.16)$$
Proof.
The holomorphy in ε of the matrix T Γ ( ε ) is implied by its definition and the holomorphy in ε of the functions v ( ε )Γ ,i . The meromorphic dependence of the matrix T γ on ε and formulae (7.13), (7.14)for the first term of its Laurent series is a direct implication of Lemma 7.1. This lemma also impliesthat the matrix T ′ γ is meromorphic in ε .On the edges e exi , the equation for v ( p ) γ,ε in (7.3) can be solved explicitly. Taking into considerationthe vertex condition in (7.3), we find that on each edge e exi the function v ( p ) γ,ε is given by the followingformula: v ( p ) γ,ε ( ξ ) = Ψ pi φ ′ i ( ξ i − , ε ) + v ( p ) γ,ε (cid:12)(cid:12) e exi ( M j ) φ i ( ξ i − , ε ) , i ∈ J j , j = 1 , . . . , n, (7.17)where the numbers Ψ ip were introduced in (7.8). By straightforward calculations we then find that V (2)Γ ,M ( ε ) U ′ γ ex ( v ( p ) γ,ε ) − i ε V (1)Γ ,M ( ε ) U γ ex ( v ( p ) γ,ε ) = ε Θ − ( ε ) (cid:16) Θ cos ( ε ) ˜Ψ + Θ τ ( ε )Θ sin ( ε ) U γ ex ( v ( p ) γ,ε ) . (cid:17) This identity implies formula (7.15). This formula and expansion (7.12) implies that the matrix T ′ γ ( ε ) is in fact holomorphic in ε .Identity (7.16) can be proved in the same way as (7.15) by employing the following formula similarto (7.17): v ε ( ξ ) = v ε (cid:12)(cid:12) e exi ( M j ) φ i ( ξ − , ε ) on e exi , i ∈ J j , j = 1 , . . . , n, v ε := (cid:0) H exγ ( ε ) − ε λ (cid:1) − f γ . The proof is complete.
Lemma 7.3.
The matrix T Γ ( ε ) satisfies the representation T Γ ( ε ) = T (1)Γ ( ε ) + i Im λ T (2)Γ ( ε ) , (7.18) where T (1)Γ ( ε ) , T (2)Γ ( ε ) are self-adjoint matrices and the latter matrix is positive definite.Proof. We multiply the equation in (7.1) by v ( ε )Γ ,j and integrate twice by parts over the graph Γ takinginto consideration the vertex conditions for v ( ε )Γ ,i and v ( ε )Γ ,j : (cid:0) ( ˆ H ( ε ) − λ ) v ( ε )Γ ,i , v ( ε )Γ ,j (cid:1) L (Γ) = (cid:0) V (2)Γ ,M ( ε ) U ′ M ( v ( ε )Γ ,i ) − iV (1)Γ ,M ( ε ) U M ( v ( ε )Γ ,i ) , U M ( v ( ε )Γ ,j ) (cid:1) C d − (cid:0) U M ( v ( ε )Γ ,i ) , V (2)Γ ,M ( ε ) U ′ M ( v ( ε )Γ ,j ) − iV (1)Γ ,M ( ε ) U M ( v ( ε )Γ ,j ) (cid:1) C d + (cid:0) v ( ε )Γ ,i , ( ˆ H ( ε ) − λ ) v ( ε )Γ ,j (cid:1) L (Γ) = (cid:0) V (2)Γ ,M ( ε ) U ′ M ( v ( ε )Γ ,i ) − iV (1)Γ ,M ( ε ) U M ( v ( ε )Γ ,i ) , U M ( v ( ε )Γ ,j ) (cid:1) C d − (cid:0) U M ( v ( ε )Γ ,i ) , V (2)Γ ,M ( ε ) U ′ M ( v ( ε )Γ ,j ) − iV (1)Γ ,M ( ε ) U M ( v ( ε )Γ ,j ) (cid:1) C d −
$2\mathrm{i}\operatorname{Im}\lambda\,\big(v^{(\varepsilon)}_{\Gamma,i},v^{(\varepsilon)}_{\Gamma,j}\big)_{L_2(\Gamma)}$. By (7.2), (7.11) this yields:
$$\mathrm{T}^{(ji)}_\Gamma(\varepsilon)-\mathrm{i}\operatorname{Im}\lambda\,\big(v^{(\varepsilon)}_{\Gamma,i},v^{(\varepsilon)}_{\Gamma,j}\big)_{L_2(\Gamma)}=\overline{\mathrm{T}^{(ij)}_\Gamma(\varepsilon)-\mathrm{i}\operatorname{Im}\lambda\,\big(v^{(\varepsilon)}_{\Gamma,j},v^{(\varepsilon)}_{\Gamma,i}\big)_{L_2(\Gamma)}}.$$
The entries of the matrices $\mathrm{T}^{(1)}_\Gamma(\varepsilon)$ and $\mathrm{T}^{(2)}_\Gamma(\varepsilon)$ are respectively $\mathrm{T}^{(ij)}_\Gamma(\varepsilon)-\mathrm{i}\operatorname{Im}\lambda\,\big(v^{(\varepsilon)}_{\Gamma,i},v^{(\varepsilon)}_{\Gamma,j}\big)_{L_2(\Gamma)}$ and $\big(v^{(\varepsilon)}_{\Gamma,i},v^{(\varepsilon)}_{\Gamma,j}\big)_{L_2(\Gamma)}$, and these matrices are self-adjoint. The matrix $\mathrm{T}^{(2)}_\Gamma$ is positive definite since
$$\big(\mathrm{T}^{(2)}_\Gamma\mathrm{c},\mathrm{c}\big)_{\mathbb{C}^d}=\bigg\|\sum_{i=1}^{d}c_i v^{(\varepsilon)}_{\Gamma,i}\bigg\|^2_{L_2(\Gamma)},\qquad \mathrm{c}:=\begin{pmatrix}c_1\\ \vdots\\ c_d\end{pmatrix},$$
and the functions $v^{(\varepsilon)}_{\Gamma,i}$ are linearly independent thanks to vertex conditions (7.2). The proof is complete.

Lemma 7.4.
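The positivity mechanism in this proof is the general fact that a Gram matrix of linearly independent elements of a Hilbert space is positive definite; in a notation-free form (a sketch, not specific to the functions $v^{(\varepsilon)}_{\Gamma,i}$):

```latex
% Gram matrix of linearly independent vectors f_1, ..., f_d in a Hilbert
% space with inner product linear in the first argument:
G_{ij} := (f_j, f_i), \qquad
(G\mathrm{c}, \mathrm{c})_{\mathbb{C}^d}
  = \sum_{i,j} (f_j, f_i)\, c_j \bar c_i
  = \Big\| \sum_{i=1}^{d} c_i f_i \Big\|^2 > 0
  \quad \text{for } \mathrm{c} \neq 0 .
```

Linear independence enters exactly at the last step: the norm vanishes only for the trivial linear combination, which is what vertex conditions (7.2) guarantee here.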
For $\varepsilon$ small enough the matrix $\mathrm{T}_\Gamma(\varepsilon)$ is invertible and the inverse is holomorphic in $\varepsilon$. For each non-zero projector $\mathrm{R}$ in $\mathbb{C}^d$ and each self-adjoint matrix $\mathrm{D}$ the matrix $\mathrm{R}\tilde\Psi^*\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}+\mathrm{R}\mathrm{D}\mathrm{R}$ is invertible and the inverse is holomorphic in $\varepsilon$.

Proof. Since by Lemma 7.2 the matrix $\mathrm{T}_\Gamma(\varepsilon)$ is holomorphic in $\varepsilon$, once we prove that the matrix $\mathrm{T}_\Gamma(0)$ is non-degenerate, this will imply that the matrix $\mathrm{T}_\Gamma(\varepsilon)$ is invertible and the inverse is holomorphic in $\varepsilon$. Assume that
$$\sum_{p=1}^{d}c_p\begin{pmatrix}\mathrm{T}^{(1p)}_\Gamma(\varepsilon)\\ \vdots\\ \mathrm{T}^{(dp)}_\Gamma(\varepsilon)\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}$$
for some constants $c_1,\dots,c_d$. Then, in view of formula (7.11) for $\mathrm{T}^{(ip)}_\Gamma$ and problem (7.1), (2.5) for the functions $v^{(\varepsilon)}_{\Gamma,i}$, we see immediately that the function $v:=\sum_{p=1}^{d}c_p v^{(\varepsilon)}_{\Gamma,p}$ solves the same problem but satisfies the homogeneous Neumann condition at $M_0$. Since the parameter $\lambda$ is non-real, it cannot be an eigenvalue of a self-adjoint operator on $\Gamma$ with differential expression (2.24) subject to vertex conditions (2.25) and to the Neumann condition at $M_0$. Hence, the function $v$ necessarily vanishes and therefore $U_{M_0}(v)=0$. At the same time, by vertex conditions (7.2), the obtained identity implies that $c_p=0$ for all $p=1,\dots,d$. Therefore, the columns of the matrix $\mathrm{T}_\Gamma(0)$ are linearly independent and this matrix is non-degenerate.

Let $\mathrm{c}\in\mathrm{R}\tilde\Psi^*\mathbb{C}^d$ be a vector such that $\big(\mathrm{R}\tilde\Psi^*\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}+\mathrm{R}\mathrm{D}\mathrm{R}\big)\mathrm{c}=0$. Then
$$\big(\mathrm{R}\tilde\Psi^*\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}\mathrm{c},\mathrm{c}\big)_{\mathbb{C}^d}+\big(\mathrm{R}\mathrm{D}\mathrm{R}\mathrm{c},\mathrm{c}\big)_{\mathbb{C}^d}=\big(\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}\mathrm{c},\tilde\Psi\mathrm{R}\mathrm{c}\big)_{\mathbb{C}^d}+\big(\mathrm{D}\mathrm{R}\mathrm{c},\mathrm{R}\mathrm{c}\big)_{\mathbb{C}^d}=\big(\tilde{\mathrm{c}},\mathrm{T}_\Gamma(\varepsilon)\tilde{\mathrm{c}}\big)_{\mathbb{C}^d}+\big(\mathrm{D}\mathrm{R}\mathrm{c},\mathrm{R}\mathrm{c}\big)_{\mathbb{C}^d}=0,$$
where $\tilde{\mathrm{c}}:=\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}\mathrm{c}$, $\tilde\Psi\mathrm{R}\mathrm{c}=\mathrm{T}_\Gamma(\varepsilon)\tilde{\mathrm{c}}$. Employing now Lemma 7.3 and then taking the imaginary part of the above identity, we get $\big(\tilde{\mathrm{c}},\mathrm{T}^{(2)}_\Gamma(\varepsilon)\tilde{\mathrm{c}}\big)_{\mathbb{C}^d}=0$. And since by Lemma 7.3 the matrix $\mathrm{T}^{(2)}_\Gamma(\varepsilon)$ is positive definite, we conclude that $\tilde{\mathrm{c}}=0$. Hence, the matrix $\mathrm{R}\tilde\Psi^*\mathrm{T}_\Gamma^{-1}(\varepsilon)\tilde\Psi\mathrm{R}+\mathrm{R}\mathrm{D}\mathrm{R}$ is invertible for all $\varepsilon$ small enough and, in particular, for $\varepsilon=0$. Since this matrix is holomorphic in $\varepsilon$, the same is true for its inverse. The proof is complete.

In this subsection we reduce an equation for the resolvent $(H_\varepsilon-\lambda)^{-1}$ to a system of linear algebraic equations. Let $f_\Gamma\in L_2(\Gamma)$, $f_\gamma\in L_2(\gamma)$ be two arbitrary functions. In terms of these functions, we introduce one more function $f\in L_2(\Gamma_\varepsilon)$ as
$$f=\begin{cases}f_\Gamma&\text{on }\Gamma,\\ \mathcal{S}_\varepsilon f_\gamma&\text{on }\gamma_\varepsilon.\end{cases}$$
We also observe that an arbitrary function $f\in L_2(\Gamma_\varepsilon)$ can be represented in the above form with $f_\Gamma:=\mathcal{P}_\Gamma f$ and $f_\gamma:=\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}f$. Since the operator $H_\varepsilon$ is self-adjoint, the resolvent $(H_\varepsilon-\lambda)^{-1}$ is well-defined for $\lambda\in\mathbb{C}\setminus\mathbb{R}$. We let $u_\varepsilon:=(H_\varepsilon-\lambda)^{-1}f$ and we are going to study the structure of this function and its dependence on $\varepsilon$. The restriction of the function $u_\varepsilon$ to the graph $\Gamma$, that is, the function $\mathcal{P}_\Gamma u_\varepsilon$, obviously solves the boundary value problem for the equation
$$(\hat H_\Gamma(\varepsilon)-\lambda)\mathcal{P}_\Gamma u_\varepsilon=(\hat H(\varepsilon)-\lambda)\mathcal{P}_\Gamma u_\varepsilon=f_\Gamma\quad\text{on }\Gamma\eqno(7.19)$$
subject to homogeneous vertex conditions (2.5) at all vertices $M\in\Gamma$ except for $M_0$, while at the vertex $M_0$ an inhomogeneous vertex condition
$$U_{M_0}(\mathcal{P}_\Gamma u_\varepsilon)=\mathrm{a}(\varepsilon),\qquad \mathrm{a}(\varepsilon):=\begin{pmatrix}a_1(\varepsilon)\\ \vdots\\ a_d(\varepsilon)\end{pmatrix}\eqno(7.20)$$
holds with some constants $a_i=a_i(\varepsilon)$, $i=1,\dots,d$. In view of boundary value problem (7.19), (2.5), (7.20), we then conclude that
$$\mathcal{R}_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)=\mathcal{P}_\Gamma u_\varepsilon=(H_\Gamma(\varepsilon)-\lambda)^{-1}f_\Gamma+\sum_{i=1}^{d}a_i(\varepsilon)v^{(\varepsilon)}_{\Gamma,i}.\eqno(7.21)$$
We consider the restriction $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ of the function $u_\varepsilon$ to the graph $\gamma_\varepsilon$. The restriction of $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ to each edge $e_i$, $i\in J_j$, incident to a vertex $M_j$, $j=1,\dots,n$, is denoted by $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e_i}$. The value of this restriction at the vertex $M_j$ is exactly the constant $a_i$ introduced in (7.20): $a_i(\varepsilon)=\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e_i}(M_j)$. We also denote:
$$a'_i(\varepsilon):=v^{(2)}_i(\varepsilon)\frac{d\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e_i}}{dx_i}(M_j)-\mathrm{i}v^{(1)}_i(\varepsilon)\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e_i}(M_j),\qquad \mathrm{a}'(\varepsilon):=\begin{pmatrix}a'_1(\varepsilon)\\ \vdots\\ a'_d(\varepsilon)\end{pmatrix}.$$
Then we replace the edge $e_i$ by an edge of length $\varepsilon$; such edges are denoted by $e_{i,\varepsilon}$. We continue the function $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ to this edge as follows:
$$\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon(x_i):=a_i(\varepsilon)\,\phi_i\Big(\frac{x_i}{\varepsilon},\varepsilon\Big)+\varepsilon a'_i(\varepsilon)\,\phi'_i\Big(\frac{x_i}{\varepsilon},\varepsilon\Big),\eqno(7.22)$$
where $x_i$ is a variable on the edge $e_{i,\varepsilon}$ such that the value $x_i=0$ corresponds to the vertex $M_j$ and the function $\tau_i(\varepsilon)$ was defined in (7.9).
It is clear that after such continuation the function $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ still satisfies vertex conditions (2.5) at the vertices $M_j$ and of course at the other vertices $M\in\gamma$. We rescale the graph $\gamma_\varepsilon$ with the additional attached edges $e_{i,\varepsilon}$ by the factor $\varepsilon^{-1}$ and this leads us to the graph $\gamma^{ex}$. On this graph we consider the function $\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ and in view of its definition and formula (7.22) we see immediately that this function solves the boundary value problem
$$(\hat H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon=\varepsilon^2\chi_\gamma f_\gamma\quad\text{on }\gamma^{ex},$$
$$V^{(2)}_{\Gamma,M_0}U'_{\gamma^{ex}}\big(\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big)-\mathrm{i}\varepsilon V^{(1)}_{\Gamma,M_0}U_{\gamma^{ex}}\big(\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big)=\varepsilon\,\Theta^{-1}(\varepsilon)\big(\Theta_{\cos}(\varepsilon)\mathrm{a}'(\varepsilon)-\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\mathrm{a}(\varepsilon)\big),\eqno(7.23)$$
with vertex conditions (6.1), where $\chi_\gamma$ is the characteristic function of the graph $\gamma$, that is, $\chi_\gamma=1$ on $\gamma$ and $\chi_\gamma=0$ on $e^{ex}_i$, $i=1,\dots,d$. In view of the definition of the operator $H^{ex}_\gamma(\varepsilon)$ and the functions $v^{(i)}_{\gamma,\varepsilon}$ and in view of problem (7.23), (6.1) we conclude that
$$\mathcal{R}_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)=\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon=\varepsilon^2\big(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)^{-1}\chi_\gamma f_\gamma+\varepsilon\sum_{i=1}^{d}\tilde a_i(\varepsilon)v^{(i)}_{\gamma,\varepsilon},\eqno(7.24)$$
where the numbers $\tilde a_i$ are defined as
$$\begin{pmatrix}\tilde a_1(\varepsilon)\\ \vdots\\ \tilde a_d(\varepsilon)\end{pmatrix}=\tilde\Psi^*\Theta_{\exp}(\varepsilon)\big(\Theta_{\cos}(\varepsilon)\mathrm{a}'(\varepsilon)-\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\mathrm{a}(\varepsilon)\big).\eqno(7.25)$$
Hence,
$$\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon=\varepsilon^2\mathcal{S}_\varepsilon\big(H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda\big)^{-1}f_\gamma+\varepsilon\sum_{i=1}^{d}\tilde a_i(\varepsilon)\mathcal{S}_\varepsilon v^{(i)}_{\gamma,\varepsilon}.$$
Since the functions $\mathcal{P}_\Gamma u_\varepsilon$ and $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ are the restrictions of the same function $u_\varepsilon$ to the graphs $\Gamma$ and $\gamma_\varepsilon$, the continuity conditions at the vertices are to be satisfied. Namely, for each vertex $M_j$, $j=1,\dots,n$, and each incident edge $e_i$, $i\in J_j$, the restrictions of the functions $\mathcal{P}_\Gamma u_\varepsilon$ and $\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon$ to the edge $e_i$ should have the same values at $M_j$, and the same should hold for their derivatives.
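The appearance of the spectral parameter as $\varepsilon^2\lambda$ after rescaling can be seen from a one-line computation (a sketch with a pure second-order model expression in place of the full differential expression of the paper):

```latex
% Model computation: u(x) = v(\xi), \xi = x/\varepsilon, on a single small
% edge; each x-derivative produces a factor \varepsilon^{-1}.
-\frac{d}{dx}\,v^{(2)}\frac{du}{dx} - \lambda u
  = \varepsilon^{-2}\Big( -\frac{d}{d\xi}\,v^{(2)}\frac{dv}{d\xi}
      - \varepsilon^{2}\lambda\, v \Big).
```

So the equation $(H_\varepsilon-\lambda)u=f$ on an edge of length $\varepsilon$ becomes, after the substitution, an equation for the rescaled operator with spectral parameter $\varepsilon^2\lambda$ and right-hand side multiplied by $\varepsilon^2$; this is exactly the structure of the first equation in (7.23).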
In view of formula (7.24) these conditions can be equivalently rewritten as
$$\mathcal{P}_\Gamma u_\varepsilon\big|_{e_i}(M_j)=\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e^{ex}_i}(M_j),$$
$$v^{(2)}_i(\varepsilon)\frac{d\mathcal{P}_\Gamma u_\varepsilon\big|_{e_i}}{dx_i}(M_j)-\mathrm{i}v^{(1)}_i(\varepsilon)\mathcal{P}_\Gamma u_\varepsilon\big|_{e_i}(M_j)=\varepsilon^{-1}\bigg(v^{(2)}_i(\varepsilon)\frac{d\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e^{ex}_i}}{d\xi_i}(M_j)-\mathrm{i}\varepsilon v^{(1)}_i(\varepsilon)\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon\big|_{e^{ex}_i}(M_j)\bigg)\eqno(7.26)$$
for all $i\in J_j$, $j=1,\dots,n$. Then continuity conditions (7.26) can be rewritten as two systems of linear equations:
$$\big(E_d+\varepsilon\,\mathrm{T}_\gamma(\varepsilon)\tilde\Psi^*\Theta_{\exp}(\varepsilon)\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\big)\mathrm{a}(\varepsilon)-\varepsilon\,\mathrm{T}_\gamma(\varepsilon)\tilde\Psi^*\Theta_{\exp}(\varepsilon)\Theta_{\cos}(\varepsilon)\mathrm{a}'(\varepsilon)=\varepsilon^2 U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big),$$
$$\big(\mathrm{T}_\Gamma(\varepsilon)+\mathrm{T}'_\gamma(\varepsilon)\tilde\Psi^*\Theta_{\exp}(\varepsilon)\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\big)\mathrm{a}(\varepsilon)-\mathrm{T}'_\gamma(\varepsilon)\tilde\Psi^*\Theta_{\exp}(\varepsilon)\Theta_{\cos}(\varepsilon)\mathrm{a}'(\varepsilon)$$
$$=\varepsilon V^{(2)}_{\Gamma,M_0}U'_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)-\mathrm{i}\varepsilon^2 V^{(1)}_{\Gamma,M_0}U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)-V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_{M_0}\big((H_\Gamma(\varepsilon)-\lambda)^{-1}f_\Gamma\big),\eqno(7.27)$$
where
$$U'_\gamma(u):=\bigg(\frac{du\big|_{e^{ex}_i}}{d\xi_i}(M^{ex}_j)\bigg)_{i\in J_j,\ j=1,\dots,n}.$$
We know a priori that the resolvent $(H_\varepsilon-\lambda)^{-1}$ is well-defined. Hence, the above system of linear equations is uniquely solvable. In what follows, our main aim is to study the dependence of its solution on $\varepsilon$.

As a first step, we reduce system (7.27) to a single equation. We apply the operator $\Theta^{-1}(\varepsilon)\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)$ to the first equation in (7.27) and subtract the result from the second equation. Then we substitute formula (7.15) for $\mathrm{T}'_\gamma(\varepsilon)$ and (7.16), and this transforms the equation into the following one:
$$\Theta(\varepsilon)\mathrm{a}'(\varepsilon)=\big(\mathrm{T}_\Gamma(\varepsilon)+(\Theta_{\cos}(\varepsilon)-\Theta^{-1}(\varepsilon))\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\big)\mathrm{a}(\varepsilon)+V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_{M_0}\big((H_\Gamma(\varepsilon)-\lambda)^{-1}f_\Gamma\big).\eqno(7.28)$$
We express the vector $\mathrm{a}'(\varepsilon)$ from the above equation and substitute the resulting expression into the first equation in (7.27).
After some simple arithmetic transformations, this leads us to the following equation:
$$\big(\hat{\mathrm{T}}_\Gamma^{-1}(\varepsilon)-\varepsilon\,\mathrm{T}_\gamma(\varepsilon)\big)\hat{\mathrm{a}}(\varepsilon)=\mathrm{h}(\varepsilon),\eqno(7.29)$$
where
$$\hat{\mathrm{T}}_\Gamma(\varepsilon):=\tilde\Psi^*\big(\Theta_{\exp}(\varepsilon)\Theta^{-1}(\varepsilon)\mathrm{T}_\Gamma(\varepsilon)-\Theta^{-1}(\varepsilon)\Theta_\tau(\varepsilon)\Theta_{\sin}(\varepsilon)\big),$$
$$\hat{\mathrm{a}}(\varepsilon):=\hat{\mathrm{T}}_\Gamma(\varepsilon)\mathrm{a}(\varepsilon)+\tilde\Psi^*\Theta^{-1}(\varepsilon)\Theta_{\exp}(\varepsilon)V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_{M_0}\big((H_\Gamma(\varepsilon)-\lambda)^{-1}f_\Gamma\big),\eqno(7.30)$$
$$\mathrm{h}(\varepsilon):=\varepsilon^2 U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)+\hat{\mathrm{T}}_\Gamma^{-1}(\varepsilon)\tilde\Psi^*\Theta_{\exp}(\varepsilon)\Theta^{-1}(\varepsilon)V^{(2)}_{\Gamma,M_0}(\varepsilon)U'_{M_0}\big((H_\Gamma(\varepsilon)-\lambda)^{-1}f_\Gamma\big).\eqno(7.31)$$
It follows from Lemma 7.4 and the definition of the matrices $\Theta_{\exp}$, $\Theta_\tau$, $\Theta_{\cos}$, $\Theta_{\sin}$ that the matrix $\hat{\mathrm{T}}_\Gamma(\varepsilon)$ is holomorphic and
$$\hat{\mathrm{T}}_\Gamma(\varepsilon)=\tilde\Psi^*\mathrm{T}_\Gamma(0)+O(\varepsilon).\eqno(7.32)$$
Since the matrix $\mathrm{T}_\Gamma(0)$ is invertible by Lemma 7.4, thanks to the above identity we conclude that the matrix $\hat{\mathrm{T}}_\Gamma^{-1}(\varepsilon)$ is well-defined and is holomorphic in sufficiently small $\varepsilon$.

7.4 Solution to equations

In this subsection we solve equation (7.29) and then recover a solution of system (7.27). On the space $\mathbb{C}^d$ we introduce two projectors:
$$\mathrm{R}\begin{pmatrix}a_1\\ \vdots\\ a_d\end{pmatrix}=\begin{pmatrix}a_1\\ \vdots\\ a_{k_1}\end{pmatrix},\qquad \mathrm{R}_\perp\begin{pmatrix}a_1\\ \vdots\\ a_d\end{pmatrix}=\begin{pmatrix}a_{k_1+1}\\ \vdots\\ a_d\end{pmatrix}.$$
We observe that due to formulae (7.13), (7.14), the following identities hold true: ˜Ψ ∗ T γ, − = (cid:18) ( ˆQ − λ ˜G ) −
00 0 (cid:19) , ˜Ψ ∗ T γ, − = (cid:18) R ˜Ψ ∗ Φ( ˆQ − λ ˜G ) − ⊥ ˜Ψ ∗ Φ( ˆQ − λ ˜G ) − Q ⊥ (cid:19) + (cid:18) ˜Υ ˜Υ ⊥ (cid:19) , (7.33) ˜Υ := ˜Υ (1) γ, ... ˜Υ (1) γ,k ... ... ˜Υ ( k ) γ, ... ˜Υ ( k ) γ,k , ˜Υ ⊥ := ˜Υ (1) γ,k +1 ... ˜Υ (1) γ,d ... ... ˜Υ ( k ) γ,k +1 ... ˜Υ ( k ) γ,d , Q ⊥ := diag { λ − k +1 ( Q ) , . . . , λ − k ( Q ) , , . . . , } , where each matrix in (7.33) is a block matrix of total size d × d ; the widths of the blocks arerespectively k and d − k and their heights are also k and d − k .We apply the matrix ˜Ψ ∗ to the both sides of equation (7.29) and then we employ formula (7.12).This transforms the equation to T( ε )ˆa( ε ) − (cid:0) ε − ˜Ψ ∗ T γ, − + ˜Ψ ∗ T γ, − )ˆa( ε ) = ˜Ψ ∗ h( ε ) , T( ε ) := ˜Ψ ∗ ˆT − ( ε ) − ε ˜Ψ ∗ ˜T γ ( ε ) . (7.34)We also observe that in the sense of the decomposition C d = C k ⊕ C d − k we have the identity E d = R ⊕ R ⊥ . We substitute this formula and (7.33) into (7.34) and split the latter equation intothe following system: ε − T ( ε )R ˆa( ε ) =R ˜Ψ ∗ h( ε ) − R T( ε )(0 ⊕ R ⊥ )ˆa( ε ) − ˜Υ ⊥ R ⊥ ˆa( ε ) , T ⊥ ( ε )R ⊥ ˆa( ε ) =R ⊥ ˜Ψ ∗ h( ε ) + R ⊥ ˜Ψ ∗ Φ( ˆQ − λ ˜G ) − P ˆa( ε ) − R ⊥ T( ε )(R ⊕ ε ) , (7.35)where T ( ε ) := − (E k + ε R ˜Ψ ∗ Φ)( ˆQ − λ ˜G ) − + ε ˜Υ + ε R T( ε )( · ⊕ , T ⊥ ( ε ) := − diag { λ − k +1 (Q) , . . . , λ − k (Q) , , . . . , } + R ⊥ T( ε )(0 ⊕ · ) . (7.36)By Lemma 7.2, the matrices T , T , T ⊥ are holomorphic in ε . Since T (0) = − ( ˆQ − λ ˜G ) − and hence, the matrix T (0) is invertible, we conclude that the matrix T − ( ε ) is well-defined and isholomorphic in ε small enough. This allows us to solve the first equation in (7.35): R ˆa( ε ) = ε T − ( ε )R ˜Ψ ∗ h( ε ) + ε Z ⊥ ( ε )R ⊥ ˆa( ε ) , (7.37)where Z ⊥ ( ε ) := − T − ( ε ) (cid:0) R T( ε )(0 ⊕ · ) + ˜Υ ⊥ (cid:1) . 
(7.38)

Substituting the obtained identity into the second equation in (7.35), we rewrite it as follows:
$$\tilde{\mathrm{T}}_\perp(\varepsilon)\mathrm{R}_\perp\hat{\mathrm{a}}(\varepsilon)=\big(\mathrm{R}_\perp+\varepsilon\,\mathrm{Z}(\varepsilon)\mathrm{R}\big)\tilde\Psi^*\mathrm{h}(\varepsilon),$$
where
$$\tilde{\mathrm{T}}_\perp(\varepsilon):=\mathrm{T}_\perp(\varepsilon)-\varepsilon\,\mathrm{Z}(\varepsilon)\mathrm{Z}_\perp(\varepsilon),\qquad \mathrm{Z}(\varepsilon):=\mathrm{R}_\perp\Phi(\hat{\mathrm{Q}}-\lambda\tilde{\mathrm{G}})^{-1}-\mathrm{R}_\perp\mathrm{T}(\varepsilon)(\,\cdot\,\oplus 0).\eqno(7.39)$$
By Lemma 7.4 with $\mathrm{R}=\mathrm{R}_\perp$, $\mathrm{D}=-\operatorname{diag}\big\{0,\dots,0,\lambda^{-1}_{k_1+1}(\mathrm{Q}),\dots,\lambda^{-1}_{k_2}(\mathrm{Q}),0,\dots,0\big\}$, the matrix $\mathrm{T}_\perp(\varepsilon)$ is invertible and the inverse is holomorphic in $\varepsilon$. Hence, by the definition of the matrix $\tilde{\mathrm{T}}_\perp(\varepsilon)$ in (7.39), the latter matrix is also invertible and the inverse is holomorphic in $\varepsilon$. This allows us to solve equation (7.39); together with (7.37) we then find a final form for the solution to (7.34):
$$\mathrm{R}\hat{\mathrm{a}}(\varepsilon)=\varepsilon\Big(\big(\mathrm{T}_1^{-1}(\varepsilon)+\varepsilon\,\mathrm{Z}_\perp(\varepsilon)\tilde{\mathrm{T}}_\perp^{-1}(\varepsilon)\mathrm{Z}(\varepsilon)\big)\mathrm{R}+\varepsilon\,\mathrm{Z}_\perp(\varepsilon)\tilde{\mathrm{T}}_\perp^{-1}(\varepsilon)\mathrm{R}_\perp\Big)\mathcal{G}(\varepsilon)(f_\Gamma,f_\gamma),$$
$$\mathrm{R}_\perp\hat{\mathrm{a}}(\varepsilon)=\tilde{\mathrm{T}}_\perp^{-1}(\varepsilon)\big(\mathrm{R}_\perp+\varepsilon\,\mathrm{Z}(\varepsilon)\mathrm{R}\big)\mathcal{G}(\varepsilon)(f_\Gamma,f_\gamma),\eqno(7.40)$$
where $\mathcal{G}(\varepsilon)\colon L_2(\Gamma)\oplus L_2(\gamma)\to\mathbb{C}^d$ is a bounded operator mapping a pair $(f_\Gamma,f_\gamma)$ into the vector $\tilde\Psi^*\mathrm{h}(\varepsilon)$, and the vector $\mathrm{h}(\varepsilon)$ is defined by (7.31). Our next step is to study the dependence of the obtained solution on $\varepsilon$. First we prove a simple auxiliary lemma.

Lemma 7.5.
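The elimination performed in (7.37)–(7.40) follows the usual two-by-two block (Schur complement) scheme; schematically, under the assumption that the singular block $A$ is invertible (a sketch, not the exact matrices of system (7.35)):

```latex
% Block elimination for a system with an \varepsilon^{-1}-singular block:
\begin{pmatrix} \varepsilon^{-1} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} f \\ g \end{pmatrix}
\;\Longrightarrow\;
x = \varepsilon A^{-1}(f - B y), \qquad
\big(D - \varepsilon\, C A^{-1} B\big)\, y = g - \varepsilon\, C A^{-1} f .
```

For small $\varepsilon$ the Schur complement $D-\varepsilon CA^{-1}B$ is a small holomorphic perturbation of $D$, so its inverse exists and is holomorphic once $D$ is invertible, which is precisely the role played by Lemma 7.4 above.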
The operator G ( ε ) mapping ( f Γ , f γ ) into the vector ˜Ψ ∗ h( ε ) is holomorphic in ε as abounded operator from L (Γ) ⊕ L ( γ ) into C d . The identity holds: G (0)( f Γ , f γ ) = (cid:18) ( ˆQ − λ ˜G ) −
00 0 (cid:19) (cid:18) F (0)0 (cid:19) + ˜Ψ ∗ T − (0)V (2)Γ ,M (0) U ′ M (cid:0) ( H Γ (0) − λ ) − f Γ (cid:1) , (7.41) where F (0)0 is defined in (6.44) with g = f γ .Proof. The holomorphic dependence is implied by definition (7.31) of the vector h ( ε ) . Formula (7.41)can be confirmed by straightforward calculations using Lemma 6.6 and formula (7.32). The proof iscomplete.The proven lemma and formulae (7.40) imply that the operators mapping ( f Γ , f γ ) into the vectors R ˆa( ε ) and R ⊥ ˆa( ε ) are holomorphic in ε . Formulae (7.40) determine the vector ˆa( ε ) completely.Then we recover the vectors a( ε ) and a ′ ( ε ) from (7.28), (7.30) and we see that both these vectors areholomorphic in ε . Substituting these vectors into formulae (7.21), (7.24), in view of Lemma 7.1 andthe holomorphy of the functions v ( i )Γ ,ε we see that the operators R Γ ( ε, λ ) and R γ ( ε, λ ) are meromorphicin ε as acting from L (Γ) ⊕ L ( γ ) into ˙ W (Γ) and ˙ W ( γ ) . If we prove identities (2.28), (2.29), theywill imply that the operators R Γ ( ε, λ ) and R γ ( ε, λ ) are holomorphic in ε .We proceed to proving identities (2.28), (2.29). In view of formulae (7.36), (7.38), (7.39) and R ˆ a (0) = 0 , R ⊥ ˆ a (0) = T − ⊥ (0)R ⊥ ˜Ψ ∗ T − (0)V (2)Γ ,M (0) U ′ M (cid:0) ( H Γ (0) − λ ) − f Γ (cid:1) . We apply the operator T ⊥ (0) to the second identity and employ the first identity together with defi-nition of the operator T ⊥ in (7.36) and definition (7.30) of ˆa . This yields: R ˜Ψ ∗ (cid:16) T Γ (0)a(0) + V (2)Γ ,M (0) U ′ M (cid:0) ( H Γ (0) − λ ) − f Γ (cid:1)(cid:17) = 0diag { λ − k +1 (Q) , . . . , λ − k (Q) , , . . . , } R ⊥ ˜Ψ ∗ (cid:16) T Γ (0)a(0) + V (2)Γ ,M (0) U ′ M (cid:0) ( H Γ (0) − λ ) − f Γ (cid:1)(cid:17) = R ⊥ ˜Ψ ∗ a(0) . 
In view of the Dirichlet vertex conditions at $M_0$ for the operator $H_\Gamma(0)$ and definitions (7.10), (7.11) of the matrix $\mathrm{T}_\Gamma$, the latter identities can be equivalently rewritten as
$$\mathrm{R}_D\tilde\Psi^*U_{M_0}(u)=0,\qquad \mathrm{R}_R\tilde\Psi^*\mathcal{V}(u)+\operatorname{diag}\big\{\lambda_1(\mathrm{Q}),\dots,\lambda_{k_1}(\mathrm{Q})\big\}\mathrm{R}_R\tilde\Psi^*U_{M_0}(u)=0,\eqno(7.42)$$
where
$$u:=(H_\Gamma(0)-\lambda)^{-1}f_\Gamma+\sum_{i=1}^{d}a_i(0)v^{(0)}_{\Gamma,i},\qquad \mathrm{R}_D\begin{pmatrix}a_1\\ \vdots\\ a_d\end{pmatrix}:=\begin{pmatrix}a_{k_1+1}\\ \vdots\\ a_d\end{pmatrix},\qquad \mathrm{R}_R\begin{pmatrix}a_1\\ \vdots\\ a_d\end{pmatrix}:=\begin{pmatrix}a_1\\ \vdots\\ a_{k_1}\end{pmatrix},$$
and we recall that the operator $\mathcal{V}$ was defined in (2.35). It is obvious that identities (7.42) can be equivalently rewritten as vertex conditions (2.25), (2.23) for the function $u^\Gamma_0$. Hence, by formula (7.21), we arrive at identity (2.28).

We apply the matrix $\tilde\Psi^*$ to the first equation in (7.27) and rewrite it as
$$\tilde\Psi^*\mathrm{a}(\varepsilon)=\varepsilon^2 U_\gamma\big((H^{ex}_\gamma(\varepsilon)-\varepsilon^2\lambda)^{-1}\chi_\gamma f_\gamma\big)+\varepsilon\,\tilde\Psi^*\mathrm{T}_\gamma(\varepsilon)\tilde{\mathrm{a}}(\varepsilon),\eqno(7.43)$$
where the symbol $\tilde{\mathrm{a}}(\varepsilon)$ stands for the vector in (7.25). Since both vectors $\mathrm{a}(\varepsilon)$ and $\mathrm{a}'(\varepsilon)$ are holomorphic in $\varepsilon$, the same holds for $\tilde{\mathrm{a}}$ and the left hand side in (7.43), and therefore for the right hand side in (7.43). Then, due to expansions (7.12), (7.13), we necessarily have $\tilde\Psi^*\mathrm{T}_{\gamma,-2}\tilde{\mathrm{a}}(0)=0$, $\mathrm{R}\tilde{\mathrm{a}}(0)=0$. Employing the latter identity, (7.24) and Lemmata 6.6, 7.1, we find that
$$\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon=\sum_{i=1}^{d}\check a_i\tilde\psi^{(i)}+O(\varepsilon),\eqno(7.44)$$
$$\begin{pmatrix}\check a_1\\ \vdots\\ \check a_{k_1}\end{pmatrix}:=\begin{pmatrix}C_{-1,1}(f_\gamma)\\ \vdots\\ C_{-1,k_1}(f_\gamma)\end{pmatrix}+\Upsilon\begin{pmatrix}\tilde a'_1(0)\\ \vdots\\ \tilde a'_{k_1}(0)\end{pmatrix}+\tilde\Upsilon\begin{pmatrix}\tilde a_1(0)\\ \vdots\\ \tilde a_d(0)\end{pmatrix},\qquad \begin{pmatrix}\check a_{k_2+1}\\ \vdots\\ \check a_d\end{pmatrix}:=0,$$
$$\begin{pmatrix}\check a_{k_1+1}\\ \vdots\\ \check a_{k_2}\end{pmatrix}:=\operatorname{diag}\big\{\lambda^{-1}_{k_1+1}(\mathrm{Q}),\dots,\lambda^{-1}_{k_2}(\mathrm{Q})\big\}\begin{pmatrix}\tilde a_{k_1+1}(0)\\ \vdots\\ \tilde a_{k_2}(0)\end{pmatrix}.$$
It also follows from (7.43) that
$$\begin{pmatrix}\check a_1\\ \vdots\\ \check a_d\end{pmatrix}=\tilde\Psi^*\begin{pmatrix}a_1(0)\\ \vdots\\ a_d(0)\end{pmatrix}.$$
This identity and (7.44) yield (2.29).
Hence, the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ are holomorphic in $\varepsilon$ as acting from $L_2(\Gamma)\oplus L_2(\gamma)$ into $\mathring W_2^2(\Gamma)$ and $\mathring W_2^2(\gamma)$. Owing to the embeddings $\mathring W_2^2(\Gamma)\subset\mathring C(\Gamma)$ and $\mathring W_2^2(\gamma)\subset\mathring C(\gamma)$, the above proven holomorphy in $\varepsilon$ of the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ implies that they are also bounded and holomorphic in $\varepsilon$ as acting into $\mathring C(\Gamma)$ and $\mathring C(\gamma)$. We can express the second derivatives of the functions $\mathcal{R}_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ from their differential equations, see (7.19), (7.23), via these functions and their first derivatives. In view of the established holomorphy in $\varepsilon$ of these functions in $\mathring C(\Gamma)$ and $\mathring C(\gamma)$ and the assumed smoothness and holomorphy in $\varepsilon$ of the functions $V^{(i)}_\Gamma$ and $V^{(i)}_\gamma$, we conclude that the functions $\mathcal{R}_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ are also holomorphic in the norms of the spaces $\mathring C(\Gamma)$ and $\mathring C(\gamma)$. Therefore, the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ are bounded and holomorphic in $\varepsilon$ as acting from $L_2(\Gamma)\oplus L_2(\gamma)$ into $\mathring C(\Gamma)$ and $\mathring C(\gamma)$. The corresponding identities obviously hold as well, since they do once we treat the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ as acting into $\mathring W_2^2(\Gamma)$ and $\mathring W_2^2(\gamma)$.

In this section we prove Theorems 2.2, 2.3.

$\mathcal{R}_\Gamma$ and $\mathcal{R}_\gamma$

Apart from the leading terms of the Taylor series for the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ provided in (2.28), (2.29), it is also possible to find further terms. This can be done by expanding the vectors $\mathrm{a}(\varepsilon)$ and $\mathrm{a}'(\varepsilon)$ into power series in $\varepsilon$ via formulae (7.40) and by performing a similar procedure in (7.21), (7.24). Unfortunately, almost immediately this requires plenty of bulky technical calculations. Nevertheless, it is possible to determine all coefficients in the Taylor series for the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ in a simpler and more elegant way.
Below we describe it in all details, proving in this way Theorem 2.2. Since the functions $\mathcal{P}_\Gamma u_\varepsilon=\mathcal{R}_\Gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ and $\mathcal{S}_\varepsilon^{-1}\mathcal{P}_{\gamma_\varepsilon}u_\varepsilon=\mathcal{R}_\gamma(\varepsilon,\lambda)(f_\Gamma,f_\gamma)$ are holomorphic in $\varepsilon$, they are represented by convergent series (2.30). Owing to the embeddings $\mathring W_2^2(\Gamma)\subset\mathring C(\Gamma)$, $\mathring W_2^2(\gamma)\subset\mathring C(\gamma)$, these series are the same in both cases, when we treat the operators $\mathcal{R}_\Gamma(\varepsilon,\lambda)$ and $\mathcal{R}_\gamma(\varepsilon,\lambda)$ as acting into the spaces $\mathring W_2^2(\Gamma)$ and $\mathring W_2^2(\gamma)$ and into the spaces $\mathring C(\Gamma)$ and $\mathring C(\gamma)$.

We substitute the first series into (7.19), expand the coefficients of the equation into Taylor series in $\varepsilon$ and equate the coefficients at like powers of $\varepsilon$. This gives the equations for $u^\Gamma_p$:
$$\big(H_\Gamma(0)-\lambda\big)u^\Gamma_p=-\sum_{q=1}^{p}\frac{1}{q!}\frac{d^q\hat H_\Gamma}{d\varepsilon^q}(0)\,u^\Gamma_{p-q}\quad\text{on }\Gamma.\eqno(8.1)$$
We also substitute the same series into vertex conditions (2.5) for the vertices $M\in\Gamma$. Expanding then the matrices in these conditions into their Taylor series in $\varepsilon$ and equating the coefficients at like powers of $\varepsilon$, we get:
$$A^{(0)}_M U_M(u^\Gamma_p)+B^{(0)}_M U'_M(u^\Gamma_p)=-\sum_{q=1}^{p}\frac{1}{q!}\Big(\frac{d^qA_M}{d\varepsilon^q}(0)U_M(u^\Gamma_{p-q})+\frac{d^qB_M}{d\varepsilon^q}(0)U'_M(u^\Gamma_{p-q})\Big)\eqno(8.2)$$
at each vertex $M\in\Gamma$, $M\ne M_0$.

At the next step we are going to find similar boundary value problems for the coefficients of the second series in (2.30). First we continue the functions $u^\gamma_p$ from the graph $\gamma$ to the graph $\gamma^{ex}$. On the additional edges $e^{ex}_i$, $i\in J_j$, $j=1,\dots,n$, we define them simply as linear functions:
$$u^\gamma_p(\xi_i):=\frac{du^\Gamma_{p-1}\big|_{e_i}}{dx_i}(M_0)\,\xi_i+u^\Gamma_p\big|_{e_i}(M_0),\eqno(8.3)$$
where $e_i$ are the edges of the graph $\Gamma$ incident to the vertex $M_0$. Under such a definition, the following obvious identities hold:
$$U_{M_0}(u^\Gamma_p)=U_\gamma(u^\gamma_p),\quad p\geqslant 0,\eqno(8.4)$$
$$U'_{\gamma^{ex}}(u^\gamma_0)=0,\qquad U'_{\gamma^{ex}}(u^\gamma_p)=U'_{M_0}(u^\Gamma_{p-1}),\quad p\geqslant 1.$$
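Equations (8.1) are an instance of the general recursion obtained by substituting power series into a holomorphic operator family; in a notation-free form (a sketch):

```latex
% Substituting u_\varepsilon = \sum_p \varepsilon^p u_p into
% A(\varepsilon) u_\varepsilon = f with A(\varepsilon) = \sum_q \varepsilon^q A_q
% and equating coefficients at like powers of \varepsilon:
A_0 u_0 = f, \qquad
A_0 u_p = -\sum_{q=1}^{p} A_q\, u_{p-q}, \quad p \geqslant 1,
\qquad A_q := \frac{1}{q!}\,\frac{d^q A}{d\varepsilon^q}(0).
```

Each step is a problem for the same operator $A_0=H_\Gamma(0)-\lambda$ with a right-hand side built from previously determined coefficients, which is why the coefficients can be found recursively.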
(8.5)These identities are exactly the continuity conditions gluing the restrictions of the function u ε on thegraphs Γ and γ ε at the edges e i ; we note here also that the variables at the edges e i and e exi are relatedby the rescaling ξ i = x i ε − .We substitute the second series in (2.30) into the equation (7.23) and take into considerationformulae (8.3). Then we expand the obtained expression into the Taylor series in ε and find thecoefficients at the like powers of ε . This gives the equations for the functions u γp : ˆ H exγ (0) u γ = 0 on γ ex , (8.6) ˆ H exγ (0) u γ = − d ˆ H exγ dε (0) χ γ u γ on γ ex , (8.7) ˆ H exγ (0) u γ = χ γ f γ − X q =1 q ! d q ˆ H exγ dε q (0) χ γ u γ − q + λχ γ u γ on γ ex , (8.8) ˆ H exγ (0) u γp = − p X q =1 q ! d q ˆ H exγ dε q (0) χ γ u γp − q + λχ γ u γp − on γ ex , p > . (8.9)We substitute the second series in (2.30) into (6.1), expand the obtained expression into the Taylorseries in ε and calculate the coefficients at the like powers of ε . This leads us to the following vertexconditions for the functions u γp : A (0) M U M ( u γp ) + B (0) M U ′ M ( u γp ) = − g γp , g γp := p X i =1 (cid:16) A ( i ) M U M ( u γp − i ) + B ( i ) M U ′ M ( u γp − i ) (cid:17) (8.10)at each vertex M ∈ γ , where A ( i ) M := i − d i − A + M dε i − (0) i ! d i A − M dε i (0) ! , B ( i ) M := i ! d i B + M dε i (0) i +1)! d i +1 B + M dε i +1 (0) ! if B M (0) = 0 , A ( i ) M := 1 i ! d i A M dε i (0) , B ( i ) M := 1( i + 1)! d i +1 B M dε i +1 (0) if B M (0) = 0 . (8.11)The above conditions hold also for M = M j , j = 1 , . . . , n , owing to continuation formulae (8.3), theabove proven holomorphy of the functions P Γ ε u ε and S ε P γ ε u ε , and the fact that two latter functionsare the restrictions of the same function u ε satisfying vertex conditions (6.1) at the vertices M j , j = 1 , . . . , n . 
41e denote C (0) M := (cid:16) A (0) M + iB (0) M (cid:0) V (2) M (0) (cid:1) − (cid:0) V (1) M (0) − E d ( M ) (cid:1)(cid:17) − and we apply then the matrix − (0) M to vertex condition (8.10). In view of Lemma 4.1, we hence get: i(U M (0) − E d ( M ) ) U M ( u γp ) + (U M (0) + E d ( M ) ) V (0) γ,M ( u γp ) = 2iC (0) M g γp . We apply the projectors P (0) M and P (0) M, ⊥ to the obtained identity and we see that vertex conditions(8.10) are equivalent to the following ones: P (0) M U M ( u γp ) = − P (0) M C (0) M g γp , P (0) M, ⊥ V (0) γ,M ( u γp ) + K M, ⊥ (0) U M ( u γp ) = 2iC (0) M (cid:0) U M (0) + E d ( M ) (cid:1) − C (0) M g γp , (8.12)Recurrent system of boundary value problems (8.1), (8.2), (8.4) and (8.6), (8.7), (8.8), (8.9), (8.12),(8.5) allows us to determine uniquely all functions u Γ p and u γp .The function u Γ0 is defined apriori in (2.30). Then we consider problem (8.6), (8.12), (8.5) for u γ and since it is homogeneous, we conclude immediately that its solution is a linear combination of thefunctions ψ ( i ) : u γ = k X i =1 c i, ψ ( i ) . In view of vertex condition (2.25) at the vertex M , we have (cid:0) U M ( u Γ0 ) , Ψ j (cid:1) C d = 0 , j = k + 1 , . . . , d . Hence, U M ( u Γ0 ) = k X i =1 c i ( f Γ ) Ψ i , with the functionals c i ( f ) introduced in (2.27). Now the coefficients c i, are determined uniquely byidentity (8.4) with p = 0 ; we see immediately that c i, = c i ( f ) . We observe that the found function u γ satisfies the identity u γ = R (0) γ f Γ and this fits identity (2.29). Let us show how to find further termsin series (2.30).We proceed to problem (8.7), (8.12), (8.5) for u γ . In condition (8.5) with p = 1 , the right handside U ′ M ( u Γ0 ) is known since the function u Γ0 is already defined. 
This problem is a particular case of problem (6.4) and the solvability condition is given by (6.5) with

f = -\frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, u^\gamma_0, \qquad g_{M^{ex}} = V^{(2)}_M(0)\, U'_M(u^\Gamma_0), \qquad g_M = \frac{\mathrm{i}}{2}\, P^{(0)}_M Q_M(u^\gamma_0), \qquad g_{M,\perp} = \big( U_M(0) + E_{d(M)} \big)^{-1} P^{(0)}_{M,\perp} Q_M(u^\gamma_0),

where the operator Q_M is defined in (2.17). Employing identities (6.21), (6.22) and Lemma 6.2, it is straightforward to confirm that the mentioned solvability condition is equivalent to vertex condition (2.25) at the vertex M_0 with the matrices A^{(0)}_{M_0} and B^{(0)}_{M_0} defined in (2.23). Then the general solution to the problem for u^γ_1 reads as

u^\gamma_1 = u^\gamma_{1,*} + \sum_{i=1}^{k} c_{i,1}\, \psi^{(i)},   (8.13)

where u^γ_{1,*} is a particular solution to problem (8.7), (8.12), (8.5) obeying orthogonality condition (6.6), and c_{i,1} are some constants, which will be determined later.

Once the function u^γ_1 is found, we are able to find the function u^Γ_1. The latter is defined as the solution to problem (8.1), (8.2), (8.4). According to Lemma 5.3 with A = E_d, B = 0 in vertex condition (5.1) and, respectively, P = E_d, P_⊥ = 0 in (5.6), the above problem for u^Γ_1 is uniquely solvable and the solution reads as

u^\Gamma_1 = u^\Gamma_{1,*} + \sum_{i=1}^{k} c_{i,1}\, v_{i,\Gamma},   (8.14)

where u^Γ_{1,*} is the solution to problem (8.1), (8.2) with the vertex condition U_M(u^Γ_{1,*}) = U_γ(u^γ_{1,*}), while v_{i,Γ} are the unique solutions to the problems

\big( H_\Gamma(0) - \lambda \big) v_{i,\Gamma} = 0 \ \text{on } \Gamma, \qquad A^{(0)}_M U_M(v_{i,\Gamma}) + B^{(0)}_M U'_M(v_{i,\Gamma}) = 0 \ \text{at } M \neq M_0, \qquad U_{M_0}(v_{i,\Gamma}) = \Psi^{(i)}.   (8.15)

We proceed to problem (8.8), (8.12), (8.5). We substitute formula (8.13) into the right hand sides of (8.8) and (8.12), while identity (8.14) is substituted into the right hand side of (8.5). The obtained problem is a particular case of (6.4).
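Both here and at every subsequent order, the pattern behind (8.13), (8.14) is the classical Fredholm alternative; schematically, in our own shorthand, for a self-adjoint operator L with k-dimensional kernel spanned by ψ^{(1)}, …, ψ^{(k)}:

```latex
L u = F \ \text{is solvable} \iff \bigl(F,\psi^{(j)}\bigr)=0,\quad j=1,\dots,k,
\qquad\text{and then}\qquad
u = u_{*} + \sum_{i=1}^{k} c_i\,\psi^{(i)},
```

with u_* a fixed particular solution. The free constants c_i are fixed only by the solvability conditions at the next order of the recurrence; this is exactly how the coefficients c_{i,1} are determined via (8.16).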
The solvability of this problem is ensured by conditions (6.5). Employing the obvious identities

\frac{d\psi^{(i)}}{d\xi} = 0 \ \text{on } \gamma \setminus \gamma^{ex}, \qquad \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, \psi^{(i)} = \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, \chi_\gamma \psi^{(i)} \ \text{on } \gamma,

by straightforward calculations we confirm that

\bigg( \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, \chi_\gamma \psi^{(i)}, \psi^{(j)} \bigg)_{L_2(\gamma)} = \bigg( \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, \psi^{(i)}, \psi^{(j)} \bigg)_{L_2(\gamma^{ex})} = Q^{(ij)}_\gamma - \mathrm{i}\, \big( V^{(1)}_{\Gamma,M_0} \Psi^{(i)}, \Psi^{(j)} \big)_{\mathbb{C}^d} - \sum_{M \in M^{ex}} \big( V^{(0)}_{\gamma,M}(\psi^{(i)}), U_M(\psi^{(j)}) \big)_{\mathbb{C}^{d(M)}}.

Substituting this identity into solvability conditions (6.5) and comparing the result with (2.19), (2.20), (2.21), (2.22), we arrive at the identities

\sum_{i=1}^{k} Q^{(ij)} c_{i,1} + \sum_{i=1}^{k} c_{i,1}\, \big( V^{(2)}_{\Gamma,M_0} U'_{M_0}(v_{i,\Gamma}) - \mathrm{i} V^{(1)}_{\Gamma,M_0}(0)\, U_{M_0}(v_{i,\Gamma}), \Psi^{(j)} \big)_{\mathbb{C}^d} = h_{j,1},   (8.16)

where we have also employed the vertex conditions at M_0 for v_{i,Γ} in (8.15), and the numbers h_{j,1} are defined as

h_{j,1} := \bigg( f^\gamma - \frac{1}{2}\, \frac{d^2 \hat{H}^{ex}_\gamma}{d\varepsilon^2}(0)\, \chi_\gamma u^\gamma_0 + \lambda u^\gamma_0 - \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, u^\gamma_{1,*},\ \psi^{(j)} \bigg)_{L_2(\gamma)} - \big( V^{(2)}_{\Gamma,M_0}(0)\, U'_{M_0}(u^\Gamma_{1,*}), \Psi^{(j)} \big)_{\mathbb{C}^d}
- \sum_{M \in M^{ex}} \Big( \big( U_M(0) + E_{d(M)} \big)^{-1} P^{(0)}_{M,\perp} C^{(0)}_M \big( A^{(1)}_M U_M(u^\gamma_{1,*}) + B^{(1)}_M U'_M(u^\gamma_{1,*}) + A^{(2)}_M U_M(u^\gamma_0) + B^{(2)}_M U'_M(u^\gamma_0) \big),\ P^{(0)}_{M,\perp} U_M(\psi^{(j)}) \Big)_{\mathbb{C}^{d(M)}}
- \sum_{M \in M^{ex}} \Big( P^{(0)}_M C^{(0)}_M \big( A^{(1)}_M U_M(u^\gamma_{1,*}) + B^{(1)}_M U'_M(u^\gamma_{1,*}) + A^{(2)}_M U_M(u^\gamma_0) + B^{(2)}_M U'_M(u^\gamma_0) \big),\ P^{(0)}_M V^{(2)}_{\Gamma,M} U'_M(\psi^{(j)}) \Big)_{\mathbb{C}^{d(M)}}.   (8.17)

Comparing boundary value problem (8.15) with (7.1), (7.2) at ε = 0 and taking into consideration the definition of the matrix T_Γ(ε) in (7.10), we easily see that L = Ψ^* T_Γ(0) Ψ, where, we recall, the matrix L is defined in (2.34). Then equations (8.16) can be rewritten in the matrix form

\big( Q + \Psi^* T_\Gamma(0) \Psi \big) c_1 = h_1, \qquad c_1 := \begin{pmatrix} c_{1,1} \\ \vdots \\ c_{k,1} \end{pmatrix}, \qquad h_1 := \begin{pmatrix} h_{1,1} \\ \vdots \\ h_{k,1} \end{pmatrix}.
(8.18)

Employing Lemma 7.3 and proceeding as in the proof of Lemma 7.4, it is easy to confirm that the matrix Q + Ψ^* T_Γ(0)Ψ is invertible. Hence, equation (8.18) is uniquely solvable and c_1 = (Q + Ψ^* T_Γ(0)Ψ)^{-1} h_1. This completely determines the function u^γ_1, see (8.13), and u^Γ_1, see (8.14), and ensures the solvability of the problem for u^γ_2. The solution to this problem reads as

u^\gamma_2 = u^\gamma_{2,*} + \sum_{i=1}^{k} c_{i,2}\, \psi^{(i)},

where c_{i,2} are some constants to be determined.

Further functions u^Γ_p and u^γ_p can be found in the same way. The function u^γ_p is first determined up to a linear combination of the functions ψ^{(i)} with some coefficients c_{i,p}. Then problem (8.1), (8.2) for the function u^Γ_p is uniquely solvable and its solution involves a linear combination of the functions v_{i,Γ} with the coefficients c_{i,p}. Then we can solve problem (8.9), (8.5), (8.10) for u^γ_{p+1}, since all right hand sides in this problem are expressed via already defined functions. The solvability conditions for the latter problem are given by (6.5) and they lead to a uniquely solvable system of linear equations similar to (8.18) for the coefficients c_{i,p}. This allows us to determine these coefficients.

The above described procedure can be summarized as follows. The functions u^Γ_p and u^γ_p satisfy representations (2.31), where u^Γ_{p,*} is the unique solution to problem (8.1), (8.2) with the vertex condition U_M(u^Γ_{p,*}) = U_γ(u^γ_{p,*}), while u^γ_{p,*} is a particular solution to problem (8.9), (8.12), (8.5) obeying orthogonality condition (6.6). The solvability of this problem is ensured by formula (2.33) for the coefficients c_{i,p}, where

h_{j,p} := -\sum_{q=2}^{p} \frac{1}{q!} \bigg( \frac{d^q \hat{H}^{ex}_\gamma}{d\varepsilon^q}(0)\, u^\gamma_{p-q},\ \psi^{(j)} \bigg)_{L_2(\gamma)} - \bigg( \frac{d\hat{H}^{ex}_\gamma}{d\varepsilon}(0)\, u^\gamma_{p-1,*},\ \psi^{(j)} \bigg)_{L_2(\gamma)} + \lambda\, \big( u^\gamma_{p-2}, \psi^{(j)} \big)_{L_2(\gamma)} - \big( V^{(2)}_{\Gamma,M_0}(0)\, U'_{M_0}(u^\Gamma_{p-1,*}),\ \Psi^{(j)} \big)_{\mathbb{C}^d}
- \sum_{M \in \gamma} \Big( \big( P^{(0)}_M g_{M,j},\ P^{(0)}_M U'_M(\psi^{(j)}) \big)_{\mathbb{C}^{d(M)}} + 2\mathrm{i}\, \big( \big( U_M(0) + E_{d(M)} \big)^{-1} P^{(0)}_{M,\perp} g_{M,j},\ P^{(0)}_{M,\perp} U_M(\psi^{(j)}) \big)_{\mathbb{C}^{d(M)}} \Big),

g_{M,j} := C^{(0)}_M \bigg( \sum_{i=2}^{p} \big( A^{(i)}_M U_M(u^\gamma_{p-i}) + B^{(i)}_M U'_M(u^\gamma_{p-i}) \big) + A^{(1)}_M U_M(u^\gamma_{p-1,*}) + B^{(1)}_M U'_M(u^\gamma_{p-1,*}) \bigg).   (8.19)

Hence, by the above described recurrent procedure, we can determine all the coefficients in Taylor series (2.30). This completes the proof of Theorem 2.2.

In this subsection we prove Theorem 2.3. Identity (2.36) is an immediate implication of (2.30) and (2.26); it is sufficient to substitute series (2.30) into formula (2.26). Since by Theorem 2.1 the operators R_Γ(ε, λ) and R_γ(ε, λ) are holomorphic in ε and, according to Theorem 2.2, their Taylor series are given by (2.30), the following inequalities hold:

\bigg\| R_\Gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\Gamma_p \bigg\|_{W_2^2(\Gamma)} \leq C^{N+2} \varepsilon^{N+2} \big( \| f_\Gamma \|_{L_2(\Gamma)} + \| f_\gamma \|_{L_2(\gamma)} \big),   (8.20)

\bigg\| R_\gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\gamma_p \bigg\|_{W_2^2(\gamma)} \leq C^{N+2} \varepsilon^{N+2} \big( \| f_\Gamma \|_{L_2(\Gamma)} + \| f_\gamma \|_{L_2(\gamma)} \big),   (8.21)

\bigg\| R_\Gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\Gamma_p \bigg\|_{\dot{C}(\Gamma)} \leq C^{N+1} \varepsilon^{N+1} \big( \| f_\Gamma \|_{L_2(\Gamma)} + \| f_\gamma \|_{L_2(\gamma)} \big),   (8.22)

\bigg\| R_\gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\gamma_p \bigg\|_{\dot{C}(\gamma)} \leq C^{N+1} \varepsilon^{N+1} \big( \| f_\Gamma \|_{L_2(\Gamma)} + \| f_\gamma \|_{L_2(\gamma)} \big),   (8.23)

where C is a fixed constant independent of ε, N, f_Γ and f_γ.
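These bounds mirror the standard remainder estimate for a uniformly convergent Taylor series. As a toy finite-dimensional illustration (our own example, not the operators of the paper), one can generate the Taylor coefficients of the resolvent of a family A(ε) = A₀ + εA₁ by the same kind of order-by-order recurrence used above and observe the ε^{N+1}-type decay of the remainder; all matrices and numbers below are made up for the sketch:

```python
import numpy as np

# toy holomorphic family A(eps) = A0 + eps * A1, a stand-in for H_eps;
# the concrete matrices are illustrative only
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
lam = 0.0                       # spectral parameter away from spec(A0)
f = np.array([1.0, 1.0])
eps, N = 0.01, 5

# recurrence for the Taylor coefficients of u(eps) = (A(eps) - lam)^{-1} f:
# (A0 - lam) u_0 = f,   (A0 - lam) u_p = -A1 u_{p-1}  for p >= 1
R0 = np.linalg.inv(A0 - lam * np.eye(2))
coeffs = [R0 @ f]
for p in range(1, N + 1):
    coeffs.append(-R0 @ (A1 @ coeffs[-1]))

partial = sum(eps**p * c for p, c in enumerate(coeffs))
exact = np.linalg.solve(A0 + eps * A1 - lam * np.eye(2), f)
remainder = np.linalg.norm(exact - partial)   # decays like eps^{N+1}
```

Here the recurrence (A₀ − λ)u_p = −A₁u_{p−1} plays the role of the recurrent system of boundary value problems, and for this choice of data the remainder is far below ε^{N+1}.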
It follows from definition (2.3) of the operator S_ε that

\bigg\| \frac{d^i S_\varepsilon u}{dx^i} \bigg\|_{L_2(\gamma_\varepsilon)} = \varepsilon^{\frac{1}{2}-i} \bigg\| \frac{d^i u}{d\xi^i} \bigg\|_{L_2(\gamma)}, \qquad \bigg\| \frac{d^i S_\varepsilon u}{dx^i} \bigg\|_{\dot{C}(\gamma_\varepsilon)} = \varepsilon^{-i} \bigg\| \frac{d^i u}{d\xi^i} \bigg\|_{\dot{C}(\gamma)},   (8.24)

and, in particular,

\| f_\gamma \|_{L_2(\gamma)} = \varepsilon^{-\frac{1}{2}} \| f \|_{L_2(\gamma_\varepsilon)}.   (8.25)

We also observe that formula (2.26) yields the identity

(H_\varepsilon - \lambda)^{-1} f - \sum_{p=0}^{N} \varepsilon^p \big( u^\Gamma_p \oplus S_\varepsilon u^\gamma_p \big) = \bigg( R_\Gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\Gamma_p \bigg) \oplus S_\varepsilon \bigg( R_\gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\gamma_p \bigg).   (8.26)

Hence, by (8.20), (8.25) we get

\bigg\| (H_\varepsilon - \lambda)^{-1} f - \sum_{p=0}^{N} \varepsilon^p u^\Gamma_p \bigg\|_{W_2^2(\Gamma)} = \bigg\| R_\Gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\Gamma_p \bigg\|_{W_2^2(\Gamma)} \leq C^{N+2} \varepsilon^{N+2} \big( \| f \|_{L_2(\Gamma)} + \varepsilon^{-\frac{1}{2}} \| f \|_{L_2(\gamma_\varepsilon)} \big)

and this implies (2.37). Similarly, employing (8.22) instead of (8.20), we easily prove estimate (2.38). In the same way, employing inequality (8.21), the first formula in (8.24) and identities (8.25), (8.26), we prove estimates (2.39), (2.40):

\bigg\| (H_\varepsilon - \lambda)^{-1} f - \sum_{p=0}^{N} \varepsilon^p S_\varepsilon u^\gamma_p \bigg\|_{W_2^i(\gamma_\varepsilon)} = \bigg\| S_\varepsilon \bigg( R_\gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\gamma_p \bigg) \bigg\|_{W_2^i(\gamma_\varepsilon)} \leq \varepsilon^{\frac{1}{2}-i} \bigg\| R_\gamma(\varepsilon, \lambda)(f_\Gamma, f_\gamma) - \sum_{p=0}^{N} \varepsilon^p u^\gamma_p \bigg\|_{W_2^i(\gamma)} \leq C^{N+2} \varepsilon^{N-i+2} \| f \|_{L_2(\Gamma_\varepsilon)},

where i = 0, 1, 2, and for i = 0 we set W_2^0 := L_2. Proceeding as above and employing (8.23) instead of (8.21) and the second identity in (8.24) instead of the first one, we prove (2.41), (2.42). The proof of Theorem 2.3 is complete.
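The scaling identities (8.24), (8.25) are elementary consequences of the substitution ξ = x/ε. The sketch below (test function, edge length and quadrature are our own choices, assuming (S_ε u)(x) = u(x/ε) on γ_ε = εγ) checks the L₂ factors ε^{1/2−i} numerically for i = 0, 1:

```python
import numpy as np

def l2_norm(f, a, b, n=200001):
    # composite trapezoidal approximation of the L2(a,b) norm
    x = np.linspace(a, b, n)
    y = np.abs(f(x))**2
    dx = (b - a) / (n - 1)
    return np.sqrt(dx * (y.sum() - 0.5 * (y[0] + y[-1])))

eps = 0.1
u  = lambda xi: np.sin(xi)          # test function on gamma = [0, 1]
du = lambda xi: np.cos(xi)

Su  = lambda x: u(x / eps)          # (S_eps u)(x) = u(x/eps) on gamma_eps = [0, eps]
dSu = lambda x: du(x / eps) / eps   # chain rule gives the extra eps^{-1}

ratio0 = l2_norm(Su, 0.0, eps) / l2_norm(u, 0.0, 1.0)    # matches eps^{1/2}
ratio1 = l2_norm(dSu, 0.0, eps) / l2_norm(du, 0.0, 1.0)  # matches eps^{-1/2}
```

The case i = 0 gives exactly the relation (8.25) between the norms of f_γ and f on the small edges.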
Acknowledgments
The research is supported by the Russian Science Foundation (grant no. 20-11-19995).
References

[1] G. Berkolaiko, Yu. Latushkin, S. Sukhtaiev. Limits of quantum graph operators with shrinking edges // Adv. Math., 632–669 (2019).
[2] G. Berkolaiko, P. Kuchment. Introduction to Quantum Graphs. Amer. Math. Soc., Providence, RI (2013).
[3] T. Kato. Perturbation Theory for Linear Operators. Springer, Berlin (1976).
[4] C. Cacciapuoti. Scale invariant effective Hamiltonians for a graph with a small compact core // Symmetry :3, 359 (2019).
[5] Yu.V. Pokornyi, O.M. Penkin, V.L. Pryadiev, A.V. Borovskikh, K.P. Lazarev, S.A. Shabrov. Differential Equations on Geometric Graphs. Fizmatlit, Moscow (2005) (in Russian).
[6] V.V. Zhikov. Homogenization of elasticity problems on singular structures // Izv. Math. :2, 299–365 (2002).
[7] T. Cheon, P. Exner, O. Turek. Approximation of a general singular vertex coupling in quantum graphs // Ann. Phys. :3, 548–578 (2010).
[8] G. Berkolaiko, P. Kuchment. Dependence of the spectrum of a quantum graph on vertex conditions and edge lengths // Proc. Symp. Pure Math., Amer. Math. Soc., 117–137 (2012).
[9] D.I. Borisov, A.I. Mukhametrakhimova. On a model graph with a loop and small edges // J. Math. Sci. :5, 573–601 (2020).
[10] D.I. Borisov, M.N. Konyrkulzhayeva. Perturbation of threshold of the essential spectrum of the Schrödinger operator on the simplest graph with a small edge // J. Math. Sci. :3, 248–267 (2019).
[11] D.I. Borisov, M.N. Konyrkulzhayeva. Simplest graphs with small edges: asymptotics for resolvents and holomorphic dependence of spectrum // Ufa Math. J. 11