Bakry-Émery curvature on graphs as an eigenvalue problem
David Cushing, Supanat Kamtue, Shiping Liu, and Norbert Peyerimhoff
Department of Mathematical Sciences, Durham University
School of Mathematical Sciences, University of Science and Technology of China, and Wu Wen-Tsun Key Laboratory of Mathematics of CAS, Hefei
February 18, 2021
Abstract
Our main result reformulates the Bakry-Émery curvature on a weighted graph in terms of the smallest eigenvalue of a rank one perturbation of the so-called curvature matrix. As an application, we confirm a conjecture (in a general weighted case) in [9] of the fact that the curvature does not decrease under certain graph modifications. We analyze the curvature as a function of the dimension parameter and show that this curvature function is analytic, strictly monotone increasing and strictly concave until a certain threshold, after which the function is constant. Furthermore, we derive the curvature of the Cartesian product using the crucial observation that the curvature matrix of the product is the direct sum of each component. This allows us to derive an analogous result about the Cartesian product of weighted Riemannian manifolds.
Let $G = (V, w, \mu)$ be a weighted graph consisting of a vertex set $V$, a vertex measure $\mu : V \to \mathbb{R}_+$, and an edge-weight function $w : V \times V \to \mathbb{R}_+ \cup \{0\}$ which is symmetric with $w_{xx} = 0$ for all $x \in V$. Two vertices $x, y \in V$ are adjacent if and only if $w_{xy} > 0$. The graph $G$ is assumed to be locally finite, that is, each vertex has only finitely many neighbours. For $r \in \mathbb{N}$, the combinatorial sphere (resp. ball) of radius $r$ centered at $x \in V$, denoted by $S_r(x)$ (resp. $B_r(x)$), is the set of all vertices whose minimum number of edges from $x$ is equal to (resp. less than or equal to) $r$. In particular, $S_1(x)$ contains all neighbours of $x$.

Furthermore, let $d_x := \sum_{y \in V} w_{xy}$ be the vertex degree of $x$, and $p_{xy} := w_{xy}/\mu_x$ be the transition rate from $x$ to $y$. In the special case that $d_x = \mu_x$ (that is, $\sum_{y \in V} p_{xy} = 1$) for all $x \in V$, the terms $p_{xy}$ can be understood as transition probabilities of a reversible Markov chain. Another special situation is a non-weighted (or combinatorial) graph $G = (V, E)$, where $E$ is the set of edges (without loops and multiple edges); that is, $\mu \equiv 1$, and $w_{xy} = 1$ iff $x$ is adjacent to $y$, and $w_{xy} = 0$ otherwise.

The Laplacian $\Delta : C(V) \to C(V)$ (where $C(V)$ is the vector space of all functions $f : V \to \mathbb{R}$) is given by
$$ \Delta f(x) := \frac{1}{\mu_x} \sum_{y \in V} w_{xy} (f(y) - f(x)) = \sum_{y \in V} p_{xy} (f(y) - f(x)). $$
In the non-weighted case $\mu \equiv 1$, this operator is the non-normalized Laplacian.

The Laplacian $\Delta$ gives rise to the symmetric bilinear forms $\Gamma$ and $\Gamma_2$, namely
$$ 2\Gamma(f, g) := \Delta(fg) - f \Delta g - g \Delta f, $$
$$ 2\Gamma_2(f, g) := \Delta \Gamma(f, g) - \Gamma(f, \Delta g) - \Gamma(g, \Delta f), $$
with the additional notations $\Gamma(f) := \Gamma(f, f)$ and $\Gamma_2(f) := \Gamma_2(f, f)$. These bilinear forms are important for the following Ricci curvature notion due to Bakry-Émery, which is motivated by a fundamental identity in Riemannian geometry called Bochner's formula.

Definition 1.1 (Bakry-Émery curvature). Let $G = (V, w, \mu)$ be a locally finite weighted graph. Let $K \in \mathbb{R}$ and $N \in (0, \infty]$.
We say that a vertex $x \in V$ satisfies the Bakry-Émery curvature-dimension inequality $CD(K, N)$ if, for any $f : V \to \mathbb{R}$, we have
$$ \Gamma_2(f)(x) \geq \frac{1}{N} (\Delta f(x))^2 + K \, \Gamma(f)(x), \qquad (1.1) $$
where $N$ is a dimension parameter and $K$ is regarded as a lower Ricci curvature bound at $x$. The Bakry-Émery curvature, denoted by $K(G, x; N)$, is then defined to be the largest $K$ such that $x$ satisfies $CD(K, N)$.

The Bakry-Émery curvature function of $x$, namely $K_{G,x}(N) := K(G, x; N)$, can be reformulated as the solution to the following semidefinite programming problem:

maximize $K$ \qquad (P)
subject to $\Gamma_2(x) - \frac{1}{N} \Delta(x) \Delta(x)^\top - K \, \Gamma(x) \succeq 0$,

where the symmetric matrices $\Gamma(x)$ and $\Gamma_2(x)$ correspond to the symmetric bilinear forms $\Gamma$ and $\Gamma_2$ at $x$. The explicit expression of these matrices is given in Appendix A. Here, $M \succeq 0$ (resp. $M \succ 0$) means that $M$ is positive semidefinite (resp. strictly positive definite). Note also that the problem (P) is well-defined since $\Gamma(x) \succ 0$. The above computing method has been studied by Schmuckenschläger [16], and later on in [13] and [9].

The main result of this paper is the reformulation of the above semidefinite programming problem as a smallest eigenvalue problem by employing the Schur complement of a block matrix
$$ M = \begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix}, \quad \text{namely} \quad M/M_4 := M_1 - M_2 M_4^{-1} M_3, $$
applied to the matrix
$$ \Gamma_2(x)_{\hat{1}} = \begin{pmatrix} \Gamma_2(x)_{S_1,S_1} & \Gamma_2(x)_{S_1,S_2} \\ \Gamma_2(x)_{S_2,S_1} & \Gamma_2(x)_{S_2,S_2} \end{pmatrix}. $$
Here the matrix $\Gamma_2(x)_{\hat{1}}$ refers to the principal submatrix of $\Gamma_2(x)$ obtained by removing its first row and column, corresponding to the central vertex $x$. The matrix $\Gamma_2(x)_{S_i,S_j}$ refers to the submatrix of $\Gamma_2(x)$ whose rows and columns are indexed by the vertices of the combinatorial spheres $S_i(x)$ and $S_j(x)$.

We use the notation $Q(x) := \Gamma_2(x)_{\hat{1}} / \Gamma_2(x)_{S_2,S_2}$ for simplicity, and define
$$ A_\infty(x) := 2 \operatorname{diag}(v_0(x))^{-1} \, Q(x) \, \operatorname{diag}(v_0(x))^{-1}, \qquad A_N(x) := A_\infty(x) - \frac{2}{N} v_0(x) v_0(x)^\top, \qquad (1.2) $$
where $v_0(x) := (\sqrt{p_{xy_1}}, \sqrt{p_{xy_2}}, \dots, \sqrt{p_{xy_m}})^\top$ with $S_1(x) = \{y_1, y_2, \dots, y_m\}$ labelling the neighbours of $x$. Note that the matrices $Q(x)$, $A_\infty(x)$, $A_N(x)$ are all symmetric, and that $A_N(x)$ is a rank one perturbation of $A_\infty(x)$. Our main result is stated as follows.

Theorem 1.2.
Let $G = (V, w, \mu)$ be a weighted graph. For $x \in V$ and $N \in (0, \infty]$, the Bakry-Émery curvature $K_{G,x}(N)$ is the smallest eigenvalue of the symmetric matrix $A_N(x)$, that is, $K_{G,x}(N) = \lambda_{\min}(A_N(x))$.

Henceforth we will use the simplified notations $v_0$, $Q$, $A_\infty$ and $A_N$ for the vector $v_0(x)$ and the matrices $Q(x)$, $A_\infty(x)$ and $A_N(x)$, where $x$ is a fixed vertex of $G$. We may refer to the matrix $A_\infty = A_\infty(x)$ as the curvature matrix of $x$.

As an application of our main result $K_{G,x}(N) = \lambda_{\min}(A_N)$, we analyze the curvature function $K_{G,x} : (0, \infty] \to \mathbb{R}$ by employing the variational description of minimal eigenvalues via the Rayleigh quotient
$$ \lambda_{\min}(A_N) = \inf_{u \neq 0} \frac{u^\top A_N u}{u^\top u}. $$
We first describe the shape of the curvature functions.
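Theorem 1.2 turns the computation of $K_{G,x}(N)$ into plain linear algebra: build the $\Gamma_2$-form at $x$ from the Laplacian, fold away the $S_2$-block by a Schur complement, rescale, and take the smallest eigenvalue. The following sketch (our own illustration, not code from the paper; `numpy` assumed) implements this pipeline in the non-weighted special case $\mu \equiv 1$, $w \in \{0,1\}$:

```python
import numpy as np

def gamma(L, f, g):
    # 2 Gamma(f,g) = Delta(fg) - f Delta(g) - g Delta(f), evaluated pointwise
    return 0.5 * (L @ (f * g) - f * (L @ g) - g * (L @ f))

def gamma2(L, f, g):
    # 2 Gamma_2(f,g) = Delta Gamma(f,g) - Gamma(f, Delta g) - Gamma(g, Delta f)
    return 0.5 * (L @ gamma(L, f, g) - gamma(L, f, L @ g) - gamma(L, g, L @ f))

def curvature(adj, x, N):
    """K_{G,x}(N) = lambda_min(A_N(x)) for a non-weighted graph
    (mu = 1, w in {0,1}); adj is a symmetric 0/1 numpy array."""
    n = adj.shape[0]
    L = (adj - np.diag(adj.sum(axis=1))).astype(float)  # non-normalized Laplacian
    S1 = [y for y in range(n) if adj[x, y]]
    S2 = [z for z in range(n) if z != x and not adj[x, z]
          and any(adj[z, y] for y in S1)]
    idx = S1 + S2
    E = np.eye(n)
    # quadratic form of Gamma_2 at x, on functions vanishing at x (supported on B_2(x))
    G2 = np.array([[gamma2(L, E[i], E[j])[x] for j in idx] for i in idx])
    m = len(S1)
    # Schur complement folding away the S2-block
    Q = G2[:m, :m]
    if S2:
        Q = Q - G2[:m, m:] @ np.linalg.solve(G2[m:, m:], G2[m:, :m])
    v0 = np.sqrt(adj[x, S1].astype(float))  # v0 = (sqrt(p_{xy}))_{y in S1}
    Dinv = np.diag(1.0 / v0)
    A_inf = 2.0 * Dinv @ Q @ Dinv           # curvature matrix A_infinity(x)
    A_N = A_inf - (2.0 / N) * np.outer(v0, v0)
    return np.linalg.eigvalsh(A_N)[0]

# Hypercube Q_3: every vertex has curvature function K(N) = 2 - 6/N
adj = np.array([[1 if bin(i ^ j).count("1") == 1 else 0
                 for j in range(8)] for i in range(8)])
print(round(curvature(adj, 0, 6.0), 6))  # 2 - 6/6 = 1.0
```

The hypercube value matches the computation in Remark 2.2 below; the same function reproduces, e.g., $K_{P_3,x}(N) = \min(5/2 - 4/N, 1/2)$ at the midpoint of the path $P_3$.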
Theorem 1.3.
Let $G = (V, w, \mu)$ be a weighted graph, and fix $x \in V$. Then the curvature function $K_{G,x} : (0, \infty] \to \mathbb{R}$ is continuous, and there exists a unique threshold $N_0 \in (0, \infty]$ (possibly $N_0 = \infty$) with the following properties:

(i) $K_{G,x}$ is analytic, strictly monotone increasing and strictly concave on $(0, N_0]$, with $\lim_{N \to 0} K_{G,x}(N) = -\infty$ and $\lim_{N \to N_0} K_{G,x}(N) =: K_0 < \infty$.

(ii) $K_{G,x}$ is constant on $[N_0, \infty]$ and equal to $K_0$.

Corollary 1.4.
Assume that $A_\infty \succ 0$ (that is, $K_{G,x}(\infty) > 0$). Then there exists a unique $N \in (0, \infty)$ such that $K_{G,x}(N) = 0$, and it is given by
$$ N = 2 \, v_0^\top A_\infty^{-1} v_0 = 2 \sum_{i,j} \sqrt{p_{xy_i} p_{xy_j}} \, (A_\infty^{-1})_{ij}. $$
Next we present the following curvature bounds.
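Corollary 1.4 is easy to check numerically (a quick sketch of our own; `numpy` assumed). At any vertex of the hypercube $Q_3$ the curvature matrix is $A_\infty = 2\operatorname{Id}_3$ and $v_0 = (1,1,1)^\top$ (see Remark 2.2 below), so the formula predicts the zero of the curvature function at $N = 2 v_0^\top A_\infty^{-1} v_0 = 3$, in agreement with $K(N) = 2 - 6/N$:

```python
import numpy as np

# Hypercube Q_3 at any vertex: A_inf = 2*Id_3, v0 = (1,1,1)^T (Remark 2.2)
A_inf = 2.0 * np.eye(3)
v0 = np.ones(3)

# Corollary 1.4: the unique zero of K_{G,x} is at N = 2 v0^T A_inf^{-1} v0
N_zero = 2.0 * v0 @ np.linalg.solve(A_inf, v0)
print(N_zero)  # 3.0

# consistency check: lambda_min(A_N) vanishes exactly at N = N_zero
A_N = A_inf - (2.0 / N_zero) * np.outer(v0, v0)
print(abs(round(np.linalg.eigvalsh(A_N)[0], 6)))  # 0.0
```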
Theorem 1.5 (Upper and lower curvature bound). Let $G = (V, w, \mu)$ be a weighted graph. Then we have, for $x \in V$ and $N \in (0, \infty]$,
$$ K_{G,x}(\infty) - \frac{2}{N} \frac{d_x}{\mu_x} \leq K_{G,x}(N) \overset{(*)}{\leq} K_\infty(x) - \frac{2}{N} \frac{d_x}{\mu_x} \qquad (1.3) $$
with
$$ K_\infty(x) := \frac{1}{2} \frac{d_x}{\mu_x} + \frac{3}{2} \frac{\mu_x}{d_x} \, p^{(2)}_{xx} - \frac{\mu_x}{2 d_x} \sum_{z \in S_2(x)} p^{(2)}_{xz}. $$
Here we use the notation $p^{(2)}_{uv} := \sum_{w \in V} p_{uw} p_{wv}$. Moreover, a vertex $x \in V$ is called $N$-curvature sharp iff $(*)$ in (1.3) holds with equality.

The next proposition clarifies the relation between curvature sharpness and the occurrence of the following shapes of the curvature function $K_{G,x}$:

• $K_{G,x}(N) = c - \frac{2}{N} \frac{d_x}{\mu_x}$ (with a constant $c \in \mathbb{R}$) for all $N$ near $0$, and

• $K_{G,x}(N)$ is constant for $N$ near $\infty$.

Proposition 1.6. If $x$ is $N$-curvature sharp for some $N \in (0, \infty]$, it is also $N'$-curvature sharp for all $N' \in (0, N]$. If $x$ is $N$-curvature sharp for maximally chosen $N$, then this $N$ is the threshold $N_0$ mentioned in Theorem 1.3, and hence $K_{G,x}(N) = K_\infty(x) - \frac{2}{N} \frac{d_x}{\mu_x}$ for all $N \in (0, N_0]$ and constant on $[N_0, \infty]$. Conversely, if $K_{G,x}(N) = c - \frac{2}{N} \frac{d_x}{\mu_x}$ for some constant $c \in \mathbb{R}$ on some interval $(N', N'')$, then $x$ is $N''$-curvature sharp.

The next proposition provides further insights about the curvature function $K_{G,x}$ which are related to the spectrum of the curvature matrix $A_\infty$.

Proposition 1.7.
Let $G = (V, w, \mu)$ be a weighted graph and fix a vertex $x \in V$. Denote by $E_{\min}(A_\infty)$ the minimal eigenspace of $A_\infty$. Then all of the following statements are true:

(i) $v_0$ is an eigenvector of $A_\infty$ if and only if $x$ is $N$-curvature sharp for some $N \in (0, \infty]$.

(ii) $v_0 \in E_{\min}(A_\infty)$ if and only if $x$ is $\infty$-curvature sharp.

(iii) $v_0$ is perpendicular to $E_{\min}(A_\infty)$ if and only if $K_{G,x}$ is constant on $[N_0, \infty]$ for some $N_0 < \infty$.

Remark 1.8. If $v_0$ is an eigenvector of $A_\infty$ corresponding to a non-smallest eigenvalue of $A_\infty$, then $v_0$ is perpendicular to $E_{\min}$ (from the fact that eigenspaces to different eigenvalues are perpendicular). The converse is not true; a counterexample is the non-weighted Cartesian product $P_2 \times P_3$ (as discussed in Example 5.2). In this example, $v_0$ is perpendicular to $E_{\min}$ but is not an eigenvector of $A_\infty$, and its curvature function $K_{G,x}$ is strictly increasing and strictly concave (but not curvature sharp) on $(0, N_0]$ and constant on $[N_0, \infty]$.

An interesting property of the curvature matrix is the fact that the curvature matrix of the Cartesian product of two graphs is simply the direct sum of the curvature matrices of each graph.

Definition 1.9 (weighted Cartesian product). Given two weighted graphs
$G, G'$ and two fixed positive numbers $\alpha, \beta \in \mathbb{R}_+$, the weighted Cartesian product $G \times_{\alpha,\beta} G'$ is defined with the following weight function and vertex measure: for $x, y \in G$ and $x', y' \in G'$,
$$ w_{(x,x')(y,x')} := \alpha \, w_{xy} \, \mu_{x'}, \qquad w_{(x,x')(x,y')} := \beta \, w_{x'y'} \, \mu_x, \qquad \mu_{(x,x')} := \mu_x \mu_{x'}. $$
The normalization constants $\alpha$ and $\beta$ serve two purposes.

1. In case of non-weighted graphs $G$ and $G'$ (i.e., $\mu \equiv 1$ and $w \in \{0, 1\}$), the choice of $\alpha = \beta = 1$ gives the usual Cartesian product graph $G \times G'$.

2. In case of $G$ and $G'$ representing random walks on Markov chains (i.e., when $\sum_y w_{xy} = \mu_x$ and $\sum_{y'} w_{x'y'} = \mu_{x'}$), the choice of $\alpha + \beta = 1$ gives the weighted product graph $G \times_{\alpha,\beta} G'$, which represents the random walk with probability $\alpha$ and $\beta$ to go along horizontal and vertical edges, respectively.

Theorem 1.10.
The curvature matrix of the product $G \times_{\alpha,\beta} G'$ is the weighted direct sum of the curvature matrices of $G$ and $G'$:
$$ A^{G \times_{\alpha,\beta} G'}_\infty((x, x')) = \alpha A^G_\infty(x) \oplus \beta A^{G'}_\infty(x'). $$
As a consequence, we give a new proof (in the more general case of weighted graphs) of the fact that the curvature function of a Cartesian product is the star product of the curvature functions of the factors.
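The star product (made precise in Definition 1.11 below) is easy to evaluate numerically: since both factors are continuous and increasing, the splitting point $t_1$ with $f_1(t_1) = f_2(t - t_1)$ can be found by bisection. A small self-contained sketch of our own (function names are ours), using the known curvature function $K_{K_2,x}(N) = 2 - 2/N$ of the single edge $K_2$:

```python
def star(f1, f2, t, lo=1e-9, eps=1e-12):
    """Star product f1 * f2 at t (Definition 1.11): find t1 + t2 = t with
    f1(t1) = f2(t2), assuming f1, f2 are continuous, increasing, and tend
    to -infinity at 0.  Bisection on the increasing g(t1) = f1(t1) - f2(t - t1)."""
    a, b = lo, t - lo          # g(a) < 0 < g(b)
    while b - a > eps:
        m = 0.5 * (a + b)
        if f1(m) - f2(t - m) < 0:
            a = m
        else:
            b = m
    return f1(0.5 * (a + b))

# Two copies of K_2 (curvature function 2 - 2/N) combine to C_4 = K_2 x K_2,
# whose curvature function is 2 - 4/N:
f = lambda t: 2.0 - 2.0 / t
print(round(star(f, f, 10.0), 6))   # 2 - 4/10 = 1.6
```

Iterating once more reproduces the curvature function $2 - 6/N$ of the cube $Q_3 = K_2 \times K_2 \times K_2$, consistent with Remark 2.2 below.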
Definition 1.11 (star product [9, Definition 7.1]). Let $f_1, f_2 : (0, \infty] \to \mathbb{R}$ be continuous and monotone increasing functions with $\lim_{t \to 0} f_1(t) = \lim_{t \to 0} f_2(t) = -\infty$. Then the function $f_1 * f_2 : (0, \infty] \to \mathbb{R}$ is defined by
$$ f_1 * f_2(t) := f_1(t_1) = f_2(t_2), \quad \text{where } t_1 + t_2 = t \text{ such that } f_1(t_1) = f_2(t_2). $$

Theorem 1.12.
The curvature function of the product $G \times_{\alpha,\beta} G'$ satisfies the following inequalities:
$$ \min\{\alpha K_{G,x}, \beta K_{G',x'}\} \leq K_{G \times_{\alpha,\beta} G', (x,x')} \leq \max\{\alpha K_{G,x}, \beta K_{G',x'}\}. $$
Consequently, we have $K_{G \times_{\alpha,\beta} G', (x,x')} = \alpha K_{G,x} * \beta K_{G',x'}$.

Remark 1.13.
Theorem 1.12 has an interesting analogue in the smooth setting of weighted manifolds. Consider a weighted Riemannian manifold $(M^n, g, e^{-f} d\mathrm{vol}_g)$ with a finite dimension $n$, metric $g$, and a fixed smooth function $f : M \to \mathbb{R}$. We define the Ricci curvature lower bound at $x \in M$ to be
$$ K_{M,x}(N) := \inf_{v \in S_x(M)} \mathrm{Ric}_{f,N}(v, v) \qquad \forall N \in (n, \infty], $$
where $S_x(M)$ is the space of unit tangent vectors at $x$, and
$$ \mathrm{Ric}_{f,N} := \mathrm{Ric} + \mathrm{Hess} f - \frac{df \otimes df}{N - n}, $$
as defined in [17, Equation (14.36)]. We show that the Cartesian product of two manifolds $(M^{n_i}_i, g_i, e^{-f_i} d\mathrm{vol}_{g_i})$, $i \in \{1, 2\}$, has the Ricci curvature lower bound
$$ K_{M_1 \times M_2, (x_1,x_2)} = K_{M_1,x_1} * K_{M_2,x_2}. \qquad (1.4) $$
In Section 8, we discuss curvature results related to the geometric structure of $B_2(x)$. The results should be compared with the special case of non-weighted graphs. First we provide a sufficient criterion for curvature sharpness.

Theorem 1.14. Let $G = (V, w, \mu)$ be a weighted graph. A vertex $x \in V$ is $N$-curvature sharp for some $N \in (0, \infty]$ if the following two homogeneity properties of $x$ hold true:

• $x$ is $S_1$-in regular: $p_-(y) = p_{yx}$ is independent of $y \in S_1(x)$;

• $x$ is $S_1$-out regular: $p_+(y) = \sum_{z \in S_2(x)} p_{yz}$ is independent of $y \in S_1(x)$.

In the case of non-weighted graphs, the $S_1$-in regularity is always satisfied ($p_-(y) = 1$), and we even have the equivalence between $S_1$-out regularity and $N$-curvature sharpness (see [9, Corollary 5.10]).

Our final result states that the curvature is nondecreasing under certain graph modifications.

Theorem 1.15.
Let $G = (V, w, \mu)$ be a weighted graph and fix a vertex $x \in V$. Assume that $x$ is $S_1$-in regular, i.e., $p_-(y) = p_{yx}$ is independent of $y \in S_1(x)$. Consider a modified weighted graph $\widetilde{G}$ obtained from $G$ by one of the following operations:

(O1) Increase the edge-weight between a fixed pair $y, y' \in S_1(x)$ with $y \neq y'$ by $\widetilde{w}_{yy'} = w_{yy'} + C$ for any constant $C > 0$.

(O2) Delete a vertex $z \in S_2(x)$ and remove all of its incident edges, i.e., $\widetilde{w}_{yz} = 0$ for all $y \in S_1(x)$. Increase the edge-weight between all pairs $y, y' \in S_1(x)$ with $y \neq y'$ by
$$ \widetilde{w}_{yy'} = w_{yy'} + C \, w_{yz} w_{zy'} \qquad (1.5) $$
with any constant $C \geq \frac{p_-(y)}{\mu_x \, p^{(2)}_{xz}}$.

Then $K_{\widetilde{G},x}(N) \geq K_{G,x}(N)$ for any $N \in (0, \infty]$.

Part (O2) of the above theorem confirms Conjecture 6.13 in [9] in the case of non-weighted graphs, where we consider $\widetilde{w}_{yy'} = w_{yy'} + 1$ for all pairs $y, y' \in S_1(x)$ of neighbours of $z$. In this special case, the constant $C = 1$ is certainly more than the threshold $\frac{p_-(y)}{\mu_x \, p^{(2)}_{xz}}$, whose value is the reciprocal of the in-degree of $z$ (the number of neighbours of $z$ in $S_1(x)$).

Remark 1.16.
In fact, the $S_1$-in regularity condition at $x$ can be weakened to $S_1$-in regularity at $x$ for the involved vertices in $S_1(x)$. In the operation (O1) we only require $p_{yx} = p_{y'x}$, and in (O2) we require that $p_{yx}$ is constant for all $y \in S_1(x)$ such that $w_{yz} \neq 0$.

In this section, we prove our main result (Theorem 1.2), namely the eigenvalue reformulation of the curvature. Recall the optimization problem which formulates the Bakry-Émery curvature $K_{G,x}(N)$:

maximize $K$ \qquad (P)
subject to $\Gamma_2(x) - \frac{1}{N} \Delta(x) \Delta(x)^\top - K \, \Gamma(x) \succeq 0$,

where all matrices are indexed by the vertices of $B_2(x)$. In particular, the matrix $\Gamma_2(x)$ is of size $|B_2(x)|$, and the symmetric matrices $\Delta(x)\Delta(x)^\top$ and $\Gamma(x)$ are of sizes $|B_1(x)|$ (and trivially extended by zeros to matrices of size $|B_2(x)|$); see Appendix A for details.

Schmuckenschläger [16] observed that the size of these matrices can be reduced by one: since $\Gamma_2(f)$, $\Gamma(f)$, $\Delta f$ all vanish for constant functions $f$, the curvature-dimension inequality $CD(K, N)$ remains valid after shifting $f$ by an additive constant. It is therefore sufficient to verify (1.1) for all functions $f : V \to \mathbb{R}$ with $f(x) = 0$. This observation allows us to remove from these matrices the row and column corresponding to the vertex $x$, and we are able to reformulate the above problem (P) as

maximize $K$ \qquad (P′)
subject to $M_{K,N}(x) := \left( \Gamma_2(x) - \frac{1}{N} \Delta(x) \Delta(x)^\top - K \, \Gamma(x) \right)_{S_1 \cup S_2, \, S_1 \cup S_2} \succeq 0.$

Next, we recall the concept of the Schur complement, which allows us to further reduce the size of the involved matrices in (P′).

Lemma 2.1 (Schur complement). Consider a square matrix
$$ M = \begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix}, $$
where $M_1$ and $M_4$ are square submatrices, and assume that $M_4 \succ 0$. The Schur complement $M/M_4$ is defined as
$$ M/M_4 := M_1 - M_2 M_4^{-1} M_3. \qquad (2.1) $$
Then
$M/M_4 \succeq 0$ if and only if $M \succeq 0$.

We aim to apply this lemma to the matrix $M_{K,N}(x)$ given in (P′). Since $\Delta(x)$ and $\Gamma(x)$ have zero entries in the $S_2(x)$-structure, the matrix $M_{K,N}(x)$ has the following block structure:
$$ M_{K,N}(x) = \begin{pmatrix} \Gamma_2(x)_{S_1,S_1} - \frac{1}{N} \Delta(x)_{S_1} \Delta(x)_{S_1}^\top - K \, \Gamma(x)_{S_1,S_1} & \Gamma_2(x)_{S_1,S_2} \\ \Gamma_2(x)_{S_2,S_1} & \Gamma_2(x)_{S_2,S_2} \end{pmatrix}. $$
By folding $M_{K,N}(x)$ into the upper left block, its Schur complement is given by
$$ M_{K,N}(x) / \Gamma_2(x)_{S_2,S_2} = \Gamma_2(x)_{S_1,S_1} - \frac{1}{N} \Delta(x)_{S_1} \Delta(x)_{S_1}^\top - K \, \Gamma(x)_{S_1,S_1} - \Gamma_2(x)_{S_1,S_2} \Gamma_2(x)_{S_2,S_2}^{-1} \Gamma_2(x)_{S_2,S_1} $$
$$ = Q(x) - \frac{1}{N} \Delta(x)_{S_1} \Delta(x)_{S_1}^\top - K \, \Gamma(x)_{S_1,S_1}, $$
where $Q(x) := \Gamma_2(x)_{\hat{1}} / \Gamma_2(x)_{S_2,S_2}$ denotes the folding of
$$ \Gamma_2(x)_{\hat{1}} = \begin{pmatrix} \Gamma_2(x)_{S_1,S_1} & \Gamma_2(x)_{S_1,S_2} \\ \Gamma_2(x)_{S_2,S_1} & \Gamma_2(x)_{S_2,S_2} \end{pmatrix}. $$
The importance of $\Gamma_2(x)_{S_2,S_2}$ for a lower curvature bound was already mentioned in Schmuckenschläger [16, pp. 194-195] (where he used the notation $A_{II}$).

Lemma 2.1 implies that
$$ K_{G,x}(N) = \operatorname{arg\,max}_K \left\{ Q(x) - \frac{1}{N} \Delta(x)_{S_1} \Delta(x)_{S_1}^\top - K \, \Gamma(x)_{S_1,S_1} \succeq 0 \right\}. \qquad (2.2) $$
We recall from Appendix A that $\Gamma(x)_{S_1,S_1} = \frac{1}{2} \operatorname{diag}(\Delta(x)_{S_1})$ and $\Delta(x)_{S_1} = (p_{xy_1}, p_{xy_2}, \dots, p_{xy_m})^\top$, where $S_1(x) = \{y_1, y_2, \dots, y_m\}$.

Denote the vector $v_0 := v_0(x) = (\sqrt{p_{xy_1}}, \sqrt{p_{xy_2}}, \dots, \sqrt{p_{xy_m}})^\top$. The maximum argument in (2.2) does not change under multiplication by $\operatorname{diag}(v_0)^{-1} \succ 0$ from both the left and the right sides, and a subsequent rescaling by the factor $2$, that is,
$$ K_{G,x}(N) = \operatorname{arg\,max}_K \left\{ 2 \operatorname{diag}(v_0)^{-1} Q(x) \operatorname{diag}(v_0)^{-1} - \frac{2}{N} v_0 v_0^\top - K \operatorname{Id} \succeq 0 \right\}. \qquad (2.3) $$
In other words,
$$ K_{G,x}(N) = \lambda_{\min}\Big(2 \operatorname{diag}(v_0)^{-1} Q(x) \operatorname{diag}(v_0)^{-1} - \frac{2}{N} v_0 v_0^\top\Big) = \lambda_{\min}\Big(A_\infty - \frac{2}{N} v_0 v_0^\top\Big) = \lambda_{\min}(A_N), $$
where $A_\infty = A_\infty(x)$ and $A_N = A_N(x)$ are defined in (1.2), and $\lambda_{\min}(A_N)$ denotes the smallest eigenvalue of $A_N$. This finishes the proof of Theorem 1.2.

Remark 2.2.
The curvature matrix $A_\infty(x)$ contains more information than the curvature function $K_{G,x}$, and $A_\infty(x)$ cannot be recovered from $K_{G,x}$. As an example, it is shown below that the (non-weighted) cube $Q_3$ and the complete bipartite graph $K_{3,3}$ share the same curvature function, while having different curvature matrices.

For any vertex $x$ in $G = Q_3$:
$$ A^G_N(x) = A^G_\infty(x) - \frac{2}{N} v_0 v_0^\top = 2 \operatorname{Id}_3 - \frac{2}{N} J_3, $$
$$ \sigma(A^G_N(x)) = \Big\{2 - \frac{6}{N}, \; 2, \; 2\Big\}, \qquad K_{G,x}(N) = 2 - \frac{6}{N}, $$
where $J_3$ denotes the all-ones $3 \times 3$ matrix.
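The spectrum of this rank one perturbation is quick to confirm numerically (a sketch of our own; `numpy` assumed):

```python
import numpy as np

N = 5.0
# A_N at any vertex of the cube Q_3, as in Remark 2.2
A_N = 2.0 * np.eye(3) - (2.0 / N) * np.ones((3, 3))
eig = np.linalg.eigvalsh(A_N)                  # ascending order
print([float(round(e, 6)) for e in eig])       # [0.8, 2.0, 2.0], since 2 - 6/5 = 0.8
```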
For any vertex $x$ in $H = K_{3,3}$:
$$ A^H_N(x) = A^H_\infty(x) - \frac{2}{N} v_0 v_0^\top = \begin{pmatrix} 8/3 & -1/3 & -1/3 \\ -1/3 & 8/3 & -1/3 \\ -1/3 & -1/3 & 8/3 \end{pmatrix} - \frac{2}{N} J_3, $$
$$ \sigma(A^H_N(x)) = \Big\{2 - \frac{6}{N}, \; 3, \; 3\Big\}, \qquad K_{H,x}(N) = 2 - \frac{6}{N}. $$

Properties of the curvature function $K_{G,x}$
This section is devoted to the proof of Theorem 1.3 about properties of the curvature function $K_{G,x} : (0, \infty] \to \mathbb{R}$, which will be divided into small steps.

Proposition 3.1. The curvature function $K_{G,x} : (0, \infty] \to \mathbb{R}$ is continuous, monotone increasing and concave, with $\lim_{N \to 0} K_{G,x}(N) = -\infty$ and $\lim_{N \to \infty} K_{G,x}(N) < \infty$.

Proof. It is known that the zeros of a polynomial are continuous functions of the coefficients of the polynomial (see, e.g., [14, Theorem (1,4)]). In particular, for the characteristic polynomial in $\lambda$, namely $\det(A_N - \lambda \operatorname{Id})$, this means the ordered set of eigenvalues of $A_N$, respecting their multiplicities, is continuous in $N$. In particular, $K_{G,x}(N) = \lambda_{\min}(A_N)$ is continuous in $N$.

Monotonicity and concavity of $K_{G,x}$ employ the crucial fact that, for symmetric matrices $A$ and $B$,
$$ \lambda_{\min}(A + B) = \inf_{u \neq 0} \frac{u^\top (A + B) u}{u^\top u} \geq \inf_{u \neq 0} \frac{u^\top A u}{u^\top u} + \inf_{u \neq 0} \frac{u^\top B u}{u^\top u} = \lambda_{\min}(A) + \lambda_{\min}(B), $$
and the inequality holds with equality iff $A$ and $B$ share an eigenvector corresponding to their minimal eigenvalues. Recall also that $v_0 v_0^\top$ is a rank one matrix with the only nontrivial eigenvalue $v_0^\top v_0 > 0$, so $\lambda_{\max}(v_0 v_0^\top) = v_0^\top v_0$ and $\lambda_{\min}(v_0 v_0^\top) = 0$.

For $0 < N' < N \leq \infty$, we have
$$ \lambda_{\min}(A_N) = \lambda_{\min}\Big(A_{N'} + \Big(\frac{2}{N'} - \frac{2}{N}\Big) v_0 v_0^\top\Big) \geq \lambda_{\min}(A_{N'}) + \underbrace{\Big(\frac{2}{N'} - \frac{2}{N}\Big)}_{> 0} \lambda_{\min}(v_0 v_0^\top) = \lambda_{\min}(A_{N'}). \qquad (3.1) $$
Similarly, for $0 < N' < N \leq \infty$ and $\alpha \in (0, 1)$, we have
$$ \lambda_{\min}(A_{\alpha N + (1-\alpha) N'}) = \lambda_{\min}\Big(\alpha A_N + (1 - \alpha) A_{N'} + 2 \underbrace{\Big(\frac{\alpha}{N} + \frac{1 - \alpha}{N'} - \frac{1}{\alpha N + (1 - \alpha) N'}\Big)}_{> 0} v_0 v_0^\top\Big) \geq \alpha \lambda_{\min}(A_N) + (1 - \alpha) \lambda_{\min}(A_{N'}). $$
To derive $K_{G,x}(\infty) = \lim_{N \to \infty} K_{G,x}(N) < \infty$ and $\lim_{N \to 0} K_{G,x}(N) = -\infty$, we argue that
$$ \lambda_{\min}(A_N) = \lambda_{\min}\Big(A_\infty - \frac{2}{N} v_0 v_0^\top\Big) \to \lambda_{\min}(A_\infty) \quad \text{as } N \to \infty, $$
and
$$ \lambda_{\min}(A_N) \leq \|A_\infty\| + \lambda_{\min}\Big(-\frac{2}{N} v_0 v_0^\top\Big) = \|A_\infty\| - \frac{2}{N} v_0^\top v_0 \to -\infty \quad \text{as } N \to 0, $$
where $\|\cdot\|$ denotes the operator norm.

Lemma 3.2.
If $\lambda_{\min}(A_{N'})$ is not simple for some $N' \in (0, \infty]$, then $\lambda_{\min}(A_N) = \lambda_{\min}(A_{N'})$ for all $N \in [N', \infty]$. In other words, $K_{G,x}$ is constant on $[N', \infty]$.

Proof of Lemma 3.2. Assume that $\lambda_{\min}(A_{N'})$ is not simple, that is, the minimal eigenspace $E_{\min}(A_{N'})$ has dimension at least $2$. We first argue that there exists a nonzero $w \in E_{\min}(A_{N'})$ such that $w \perp v_0$. Consider any two non-parallel vectors $v_1 = a_1 v_0 + b_1 w_1$ and $v_2 = a_2 v_0 + b_2 w_2$ in $E_{\min}(A_{N'})$ with $w_1 \perp v_0$ and $w_2 \perp v_0$. In case $a_1 = 0$ or $a_2 = 0$, we immediately obtain such a vector $w$. In case $a_1 \neq 0$ and $a_2 \neq 0$, the vector $a_2 v_1 - a_1 v_2$ represents such a vector $w$.

Since $w \perp v_0$, it lies in the minimal eigenspace $E_{\min}(v_0 v_0^\top)$, whose minimal eigenvalue is zero. This means $w \in E_{\min}(A_{N'}) \cap E_{\min}(v_0 v_0^\top)$, so the inequality (3.1) holds with equality, i.e., $\lambda_{\min}(A_N) = \lambda_{\min}(A_{N'})$ for all $N \in [N', \infty]$.

Lemma 3.3. If $\lambda_{\min}(A_{N_1})$ is simple for some $N_1 \in (0, \infty]$, then $K_{G,x}$ is analytic in a small neighbourhood of $N_1$.

Proof. The idea is to prove analyticity by using the implicit function theorem. More precisely, we aim to apply [12, Theorem 6.1.2]. Consider the family of matrices $A(t) = A_{1/t}$, and denote by $\lambda_0(t) \leq \lambda_1(t) \leq \dots \leq \lambda_{m-1}(t)$ all eigenvalues of $A(t)$. Let $t_1 = 1/N_1$ and assume that $\lambda_0(t_1) = \lambda_{\min}(A_{N_1})$ is simple. Consider the following polynomial in $t$ and $\lambda$:
$$ F(t, \lambda) := \det(A(t_1 + t) - (\lambda_0(t_1) + \lambda) \operatorname{Id}) = \sum_{i,j} a_{i,j} \, t^i \lambda^j. $$
The characteristic polynomial factorization gives
$$ F(0, \lambda) = \det(A(t_1) - (\lambda_0(t_1) + \lambda) \operatorname{Id}) = \prod_{i=0}^{m-1} \big(\lambda_i(t_1) - (\lambda_0(t_1) + \lambda)\big) = -\lambda \prod_{i=1}^{m-1} \big(\lambda_i(t_1) - \lambda_0(t_1) - \lambda\big), $$
which means $a_{0,0} = 0$, and $a_{0,1} \neq 0$ since $\lambda_0(t_1) \neq \lambda_i(t_1)$ for $i \geq 1$. The analytic implicit function theorem asserts that there exists an analytic function $\lambda(t)$ around $t = 0$ such that $\lambda(0) = 0$ and $F(t, \lambda(t)) = 0$ for all $t$ near $0$; that is, $\lambda_0(t_1) + \lambda(t)$ is an eigenvalue of $A(t_1 + t)$.
Moreover, the assumption that $\lambda_0(t_1)$ is a simple smallest eigenvalue of $A(t_1)$ implies that $\lambda_0(t_1) + \lambda(t)$ stays the smallest eigenvalue of $A(t_1 + t)$ for $t$ near $0$.

Lemma 3.4. If $\lambda_{\min}(A_N)$ is not simple for some $N \in (0, \infty]$, then there exists the smallest such $N$, denoted $N_0$, and consequently $K_{G,x}$ is analytic, strictly monotone increasing and strictly concave on $(0, N_0]$, and constant on $[N_0, \infty]$.

Proof. Consider the set $\mathcal{N}_{ns} := \{N \in (0, \infty] : \lambda_{\min}(A_N) \text{ is not simple}\}$, and denote $N_0 := \inf \mathcal{N}_{ns}$. We know from Lemma 3.2 that $K_{G,x}$ is constant on $[N, \infty]$ for all $N \in \mathcal{N}_{ns}$. Therefore, $K_{G,x}$ is constant on $(N_0, \infty]$. Note that $N_0 > 0$; otherwise $K_{G,x}$ would be constant on the whole interval $(0, \infty]$, which contradicts the fact from Proposition 3.1 that $\lim_{N \to 0} K_{G,x}(N) = -\infty$.

If $\lambda_{\min}(A_{N_0})$ were simple, then $\lambda_{\min}(A_N)$ would also be simple for all $N$ in a small neighbourhood of $N_0$. This contradicts the definition of $N_0$. Therefore, $\lambda_{\min}(A_{N_0})$ is not simple, and $N_0 = \min \mathcal{N}_{ns}$.

Since $\lambda_{\min}(A_N)$ is simple for all $N \in (0, N_0)$, we know from Lemma 3.3 that $K_{G,x}$ is analytic on $(0, N_0)$. Recall also from Proposition 3.1 that $K_{G,x}$ is concave and monotone increasing. If $K_{G,x}$ were not strictly concave on $(0, N_0)$, this would mean $K_{G,x}$ is linear on some interval $[a, b] \subset (0, N_0)$. The analyticity of $K_{G,x}$ on $(0, N_0)$ would then imply that $K_{G,x}$ is linear on the entire interval $(0, N_0)$, which contradicts the fact that $\lim_{N \to 0} K_{G,x}(N) = -\infty$. Thus $K_{G,x}$ is indeed strictly concave on $(0, N_0)$, and consequently it is strictly monotone increasing on $(0, N_0)$.
This finishes the proof of Lemma 3.4.

By combining Proposition 3.1 and Lemmas 3.3 and 3.4, we can conclude Theorem 1.3 with the description of the threshold $N_0 \in (0, \infty]$, namely
$$ N_0 = \min\{N \in (0, \infty] : \lambda_{\min}(A_N) \text{ is not simple}\} $$
(and $N_0 = \infty$ in case this set is empty).

Let us end this section with the proof of Corollary 1.4 about the uniqueness of the threshold $N$ such that $K_{G,x}(N) = 0$, whose existence is asserted by the intermediate value theorem for the continuous curvature function $K_{G,x} : (0, \infty] \to \mathbb{R}$.

Proof of Corollary 1.4.
Since $K_{G,x}(\infty) > 0$ (by assumption) and $\lim_{N \to 0} K_{G,x}(N) = -\infty$, the intermediate value theorem asserts that there exists an $N \in (0, \infty)$ such that $K_{G,x}(N) = 0$. This implies $\det A_N = 0$.

Furthermore, $K_{G,x}(\infty) > 0$ means $\det A_\infty > 0$ and $A_\infty$ is invertible. The matrix determinant formula then gives
$$ \det A_N = \det\Big(A_\infty - \frac{2}{N} v_0 v_0^\top\Big) = \Big(1 - \frac{2}{N} v_0^\top A_\infty^{-1} v_0\Big) \det A_\infty. \qquad (3.2) $$
Therefore, $N$ is uniquely given by $N = 2 \, v_0^\top A_\infty^{-1} v_0$.

Proof of Theorem 1.5.
We derive the lower curvature bound via the Rayleigh quotient as follows:
$$ K_{G,x}(N) = \inf_{u \neq 0} \frac{u^\top (A_\infty - \frac{2}{N} v_0 v_0^\top) u}{u^\top u} \geq \inf_{u \neq 0} \frac{u^\top A_\infty u}{u^\top u} - \frac{2}{N} \sup_{u \neq 0} \frac{u^\top v_0 v_0^\top u}{u^\top u} = K_{G,x}(\infty) - \frac{2}{N} v_0^\top v_0, $$
where $v_0^\top v_0 = \sum_{y \in S_1(x)} p_{xy} = \frac{d_x}{\mu_x}$.

On the other hand, the upper curvature bound can be derived as
$$ K_{G,x}(N) \leq \frac{v_0^\top A_N v_0}{v_0^\top v_0} = \frac{v_0^\top (A_\infty - \frac{2}{N} v_0 v_0^\top) v_0}{v_0^\top v_0} = \frac{v_0^\top A_\infty v_0}{v_0^\top v_0} - \frac{2}{N} v_0^\top v_0, \qquad (4.1) $$
where the direct calculation (see Appendix A) gives
$$ \frac{v_0^\top A_\infty v_0}{v_0^\top v_0} = \frac{1}{2} \frac{d_x}{\mu_x} + \frac{3}{2} \frac{\mu_x}{d_x} \, p^{(2)}_{xx} - \frac{\mu_x}{2 d_x} \sum_{z \in S_2(x)} p^{(2)}_{xz} =: K_\infty(x). $$

Relations between the spectrum of the curvature matrix $A_\infty$ and the curvature function $K_{G,x}$
In order to justify the properties of curvature sharpness in Proposition 1.6, we need to argue via the relation that $v_0$ is an eigenvector of the curvature matrix $A_\infty$. We will therefore give the proof of Proposition 1.7 together with this proposition.

Proof of Proposition 1.7 and Proposition 1.6.
The vertex $x$ is $N$-curvature sharp if and only if the upper bound (4.1),
$$ \lambda_{\min}(A_N) \leq \frac{v_0^\top A_N v_0}{v_0^\top v_0}, $$
holds with equality, which happens if and only if $v_0$ is in the minimal eigenspace $E_{\min}(A_N)$. In particular, $x$ is $\infty$-curvature sharp if and only if $v_0 \in E_{\min}(A_\infty)$. This proves Proposition 1.7 (ii).

Assume $x$ is $N$-curvature sharp for some $N \in (0, \infty]$. Then $A_N v_0 = \lambda_{\min}(A_N) v_0$, which implies $A_\infty v_0 = (\lambda_{\min}(A_N) + \frac{2}{N} v_0^\top v_0) v_0$; that is, $v_0$ is an eigenvector of $A_\infty$.

Conversely, assume $v_0$ is an eigenvector of $A_\infty$, that is, $A_\infty v_0 = \lambda v_0$ for some $\lambda \in \mathbb{R}$. Denote the spectrum of $A_\infty$ by $\sigma(A_\infty) = \{\lambda, \lambda_1, \dots, \lambda_{m-1}\}$ with $\lambda_1 \leq \dots \leq \lambda_{m-1}$. Consider $A_\infty v_i = \lambda_i v_i$, where all eigenvectors $v_i$ of $A_\infty$ (different from $v_0$) are chosen to be orthogonal to $v_0$. We then obtain, for any $N$,
$$ A_N v_0 = \Big(A_\infty - \frac{2}{N} v_0 v_0^\top\Big) v_0 = \Big(\lambda - \frac{2}{N} v_0^\top v_0\Big) v_0; $$
$$ A_N v_i = \Big(A_\infty - \frac{2}{N} v_0 v_0^\top\Big) v_i = A_\infty v_i = \lambda_i v_i \quad \forall \, 1 \leq i < m, $$
which means its spectrum is $\sigma(A_N) = \{\lambda - \frac{2}{N} v_0^\top v_0, \lambda_1, \dots, \lambda_{m-1}\}$.

We choose the threshold $N_0 = \frac{2 v_0^\top v_0}{\lambda - \lambda_1}$ in case $\lambda \geq \lambda_1$ (and choose $N_0 = \infty$ if $\lambda < \lambda_1$), so that
$$ \lambda_{\min}(A_N) = \begin{cases} \lambda - \frac{2}{N} v_0^\top v_0 & \text{if } N \leq N_0, \\ \lambda_1 & \text{if } N \geq N_0. \end{cases} $$
This means for all $N \leq N_0$, $v_0 \in E_{\min}(A_N)$; that is, $x$ is curvature sharp on $(0, N_0]$. This proves Proposition 1.7 (i). Furthermore, for all $N \geq N_0$, $\lambda_{\min}(A_N) = \lambda_1 = \lambda_{\min}(A_\infty)$; that is, $K_{G,x}$ is constant on $[N_0, \infty]$. This proves the two forward statements of Proposition 1.6.

The next result is a nice observation about the non-smallest eigenvalues of $A_N$, which we did not include in the introduction.

Corollary 5.1. If $K_{G,x}(\infty) > 0$, then all of the non-smallest eigenvalues of $A_N$ are strictly positive for all dimensions $N \in (0, \infty]$.

Proof. Let $\lambda_i(A_N)$ denote the $i$-th smallest eigenvalue of $A_N$ (respecting multiplicity). Assume for the sake of contradiction that there exist $N' \in (0, \infty)$ and $i \geq 1$ such that $\lambda_i(A_{N'}) \neq \lambda_{\min}(A_{N'})$ and $\lambda_i(A_{N'}) < 0$. We also know from $K_{G,x}(\infty) > 0$ that $\lambda_i(A_\infty) > 0$.
Since $\lambda_i(A_N)$ is continuous in $N$, the intermediate value theorem implies that $\lambda_i(A_{\hat{N}}) = 0$ for some $\hat{N} \in (N', \infty)$, and hence $\det(A_{\hat{N}}) = 0$. The matrix determinant formula
$$ 0 = \det A_{\hat{N}} = \Big(1 - \frac{2}{\hat{N}} v_0^\top A_\infty^{-1} v_0\Big) \det A_\infty, $$
with $\det A_\infty > 0$ (because $K_{G,x}(\infty) > 0$), asserts that $\hat{N} = 2 \, v_0^\top A_\infty^{-1} v_0$, which is the same threshold as $N$ in Corollary 1.4. In other words, $\lambda_{\min}(A_{\hat{N}}) = 0 = \lambda_i(A_{\hat{N}})$ is not simple. By Lemma 3.4, $K_{G,x}$ must then be constant on $[\hat{N}, \infty)$, which is a contradiction to the fact that $K_{G,x}(\hat{N}) = 0 < K_{G,x}(\infty)$.

Next, we discuss the situation where $v_0 \perp E_{\min}(A_\infty)$ but $v_0$ is not an eigenvector of $A_\infty$.
Example 5.2. In view of Proposition 1.7, we would like to find a graph $G$ and a vertex $x$ such that the curvature function $K_{G,x}$ has a finite threshold $N_0$, where $K_{G,x}$ is constant on $[N_0, \infty]$, and strictly monotone increasing but not curvature sharp on $(0, N_0]$. The idea is to consider the Cartesian product $G = G_1 \times G_2$ of two non-weighted graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. The analysis of the Cartesian product is discussed in general in the next section.

For $i \in \{1, 2\}$, let $x_i \in V_i$ be an $S_1$-out regular vertex of $G_i$. By Theorem 1.14 and Proposition 1.6, there exists a threshold $N_i \in (0, \infty]$ such that $K_{G_i,x_i}$ is constant on $[N_i, \infty]$. Then, by the Cartesian product result (see [9]), at the vertex $x := (x_1, x_2)$ the curvature $K_{G_1 \times G_2, (x_1,x_2)}$ is constant on $[N_1 + N_2, \infty]$.

Furthermore, we can make a particular choice of graphs $G_i$ and vertices $x_i$ so that $(x_1, x_2)$ is not $S_1$-out regular and the thresholds are finite. This means $(x_1, x_2)$ is not $N$-curvature sharp for any dimension $N$ (since curvature sharpness is equivalent to $S_1$-out regularity in a non-weighted graph), but $K_{G_1 \times G_2, (x_1,x_2)}$ is constant on a nontrivial interval, as desired.

As a concrete example, we consider the Cartesian product $P_2 \times P_3$, where $P_n$ is the path containing $n$ vertices, at the vertex $x = (x_1, x_2)$ with $x_2$ the midpoint of $P_3$.

Figure 1: Cartesian product of $P_2$ and $P_3$

The curvature matrix at $x$ is given by
$$ A_\infty(x) = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 1 \\ 0 & 1 & 3/2 \end{pmatrix}, $$
and it has the smallest eigenvalue $\frac{1}{2}$. The vector $v_0 = (1 \; 1 \; 1)^\top$ is not an eigenvector, but it is perpendicular to the minimal eigenspace $E_{\min}(A_\infty(x)) = \operatorname{span}\{(0 \; 1 \; {-1})^\top\}$. The rank one perturbation
$$ A_N(x) = A_\infty(x) - \frac{2}{N} v_0 v_0^\top = A_\infty(x) - \frac{2}{N} J_3 $$
has its spectrum equal to
$$ \sigma(A_N(x)) = \left\{\frac{1}{2}, \; \frac{9}{4} - \frac{3}{N} \pm \sqrt{\frac{1}{16} - \frac{1}{2N} + \frac{9}{N^2}}\right\}. $$
Therefore, the curvature function at $x$ is given by
$$ K_{P_2 \times P_3, x}(N) = \begin{cases} \dfrac{9}{4} - \dfrac{3}{N} - \sqrt{\dfrac{1}{16} - \dfrac{1}{2N} + \dfrac{9}{N^2}} & \text{if } N \in (0, \tfrac{10}{3}], \\[4pt] \dfrac{1}{2} & \text{if } N \in [\tfrac{10}{3}, \infty]. \end{cases} $$

Curvature of Cartesian product of graphs
Given two weighted graphs $G, G'$ and two fixed positive numbers $\alpha, \beta \in \mathbb{R}_+$, the weighted Cartesian product $G \times_{\alpha,\beta} G'$ is defined with the following weight function and vertex measure: for $x, y \in G$ and $x', y' \in G'$,
$$ w_{(x,x')(y,x')} := \alpha \, w_{xy} \, \mu_{x'}, \qquad w_{(x,x')(x,y')} := \beta \, w_{x'y'} \, \mu_x, \qquad \mu_{(x,x')} := \mu_x \mu_{x'}. $$
One can translate the above definition into the transition rates $p$ as
$$ p_{(x,x')(y,x')} = \frac{\alpha \, w_{xy} \, \mu_{x'}}{\mu_x \mu_{x'}} = \alpha \, p_{xy}, \qquad p_{(x,x')(x,y')} = \beta \, p_{x'y'}, $$
$$ \frac{d_{(x,x')}}{\mu_{(x,x')}} = \sum_y p_{(x,x')(y,x')} + \sum_{y'} p_{(x,x')(x,y')} = \alpha \frac{d_x}{\mu_x} + \beta \frac{d_{x'}}{\mu_{x'}}. $$
Here we use the same symbols $w, \mu, p$ and $d$ for all graphs $G$, $G'$ and their product, where the associated graph can be determined from the input vertices. With this idea, we also use the notations $A_\infty(\cdot)$, $A_N(\cdot)$ and $Q(\cdot)$. This simplifies our notations without making them ambiguous.

Proof of Theorem 1.10.
Now the central vertex is $(x, x')$, with horizontal neighbours $(y, x')$ for $y \in S_1(x)$ and vertical neighbours $(x, y')$ for $y' \in S_1(x')$. Note also that $(y, x')$ and $(x, y')$ are not adjacent but share one common neighbour in $S_2$, namely $(y, y')$. On the other hand, the vertex $(y, y')$ has exactly two neighbours in $S_1$, namely $(y, x')$ and $(x, y')$. The transition rates on the relevant edges are $p_{(x,x')(y,x')} = \alpha p_{xy}$ and $p_{(x,x')(x,y')} = \beta p_{x'y'}$ from the centre, $p_{(y,x')(y,y')} = \beta p_{x'y'}$ and $p_{(x,y')(y,y')} = \alpha p_{xy}$ towards the common neighbour $(y,y')$, and $p_{(y,x')(z,x')} = \alpha p_{yz}$ towards $(z, x')$ with $z \in S_2(x)$.

Figure 2: The scheme showing a horizontal neighbour and a vertical neighbour of the central vertex $(x, x')$ in the Cartesian product $G \times_{\alpha,\beta} G'$, and the transition rate $p$ on each edge.

For $y \in S_1(x)$, we have
$$ 4 Q((x,x'))_{(y,x')(y,x')} = 2 p_{(x,x')(y,x')}^2 + 3 p_{(x,x')(y,x')} p_{(y,x')(x,x')} - \frac{d_{(x,x')}}{\mu_{(x,x')}} \, p_{(x,x')(y,x')} + 3 p_{(x,x')(y,x')} \Big( \sum_{z \in S_2(x)} p_{(y,x')(z,x')} + \sum_{y' \in S_1(x')} p_{(y,x')(y,y')} \Big) + \sum_{\tilde{y} \in S_1(x)} \big( 3 p_{(x,x')(y,x')} p_{(y,x')(\tilde{y},x')} + p_{(x,x')(\tilde{y},x')} p_{(\tilde{y},x')(y,x')} \big) - \Bigg( \sum_{z \in S_2(x)} \frac{4 \big(p_{(x,x')(y,x')} p_{(y,x')(z,x')}\big)^2}{\sum_{\tilde{y} \in S_1(x)} p_{(x,x')(\tilde{y},x')} p_{(\tilde{y},x')(z,x')}} + \sum_{y' \in S_1(x')} \frac{4 \big(p_{(x,x')(y,x')} p_{(y,x')(y,y')}\big)^2}{p_{(x,x')(y,x')} p_{(y,x')(y,y')} + p_{(x,x')(x,y')} p_{(x,y')(y,y')}} \Bigg) $$
$$ = 2(\alpha p_{xy})^2 + 3(\alpha p_{xy})(\alpha p_{yx}) - \Big( \alpha \frac{d_x}{\mu_x} + \beta \frac{d_{x'}}{\mu_{x'}} \Big)(\alpha p_{xy}) + 3(\alpha p_{xy}) \Big( \sum_{z \in S_2(x)} \alpha p_{yz} + \sum_{y' \in S_1(x')} \beta p_{x'y'} \Big) + \alpha^2 \sum_{\tilde{y} \in S_1(x)} (3 p_{xy} p_{y\tilde{y}} + p_{x\tilde{y}} p_{\tilde{y}y}) - \Big( \sum_{z \in S_2(x)} \frac{4 \alpha^2 (p_{xy} p_{yz})^2}{p^{(2)}_{xz}} + \sum_{y' \in S_1(x')} 2 \alpha\beta \, p_{xy} p_{x'y'} \Big) = 4 \alpha^2 Q(x)_{yy}, $$
since all terms involving $\beta$ cancel.
And similarly, $4Q((x,x'))_{(x,y')(x,y')} = 4\beta^2\, Q(x')_{y'y'}$ for $y' \in S_1(x')$. For $y_i \ne y_j \in S_1(x)$, we have
\[
\begin{aligned}
4Q((x,x'))_{(y_i,x')(y_j,x')} ={}& 2p_{(x,x')(y_i,x')}p_{(x,x')(y_j,x')} - 2\big(p_{(x,x')(y_i,x')}p_{(y_i,x')(y_j,x')} + p_{(x,x')(y_j,x')}p_{(y_j,x')(y_i,x')}\big)\\
&- 4\sum_{z\in S_2(x)}\frac{p_{(x,x')(y_i,x')}p_{(y_i,x')(z,x')}\;p_{(x,x')(y_j,x')}p_{(y_j,x')(z,x')}}{\sum_{\tilde y\in S_1(x)}p_{(x,x')(\tilde y,x')}p_{(\tilde y,x')(z,x')}}\\
={}& 2(\alpha p_{xy_i})(\alpha p_{xy_j}) - 2\alpha^2\big(p_{xy_i}p_{y_iy_j} + p_{xy_j}p_{y_jy_i}\big) - 4\sum_{z\in S_2(x)}\alpha^2\,\frac{p_{xy_i}p_{y_iz}\,p_{xy_j}p_{y_jz}}{p^{(2)}_{xz}}\\
={}& 4\alpha^2\, Q(x)_{y_iy_j}.
\end{aligned}
\]
And similarly, $4Q((x,x'))_{(x,y'_i)(x,y'_j)} = 4\beta^2\, Q(x')_{y'_iy'_j}$ for $y'_i \ne y'_j \in S_1(x')$. Finally, for a mixed pair $(y,x'), (x,y')$, whose only common neighbour in $S_2((x,x'))$ is $(y,y')$,
\[
\begin{aligned}
4Q((x,x'))_{(y,x')(x,y')} &= 2p_{(x,x')(y,x')}p_{(x,x')(x,y')} - 4\,\frac{p_{(x,x')(y,x')}p_{(y,x')(y,y')}\;p_{(x,x')(x,y')}p_{(x,y')(y,y')}}{p_{(x,x')(y,x')}p_{(y,x')(y,y')} + p_{(x,x')(x,y')}p_{(x,y')(y,y')}}\\
&= 2\alpha\beta\, p_{xy}p_{x'y'} - 4\,\frac{(\alpha\beta\, p_{xy}p_{x'y'})^2}{2\alpha\beta\, p_{xy}p_{x'y'}} = 0.
\end{aligned}
\]
We can conclude from the above calculations that $Q((x,x')) = \alpha^2 Q(x)\oplus\beta^2 Q(x')$. Note also that $\mathrm{diag}(v_0)((x,x')) = \sqrt{\alpha}\,\mathrm{diag}(v_0(x))\oplus\sqrt{\beta}\,\mathrm{diag}(v_0(x'))$. Therefore, we derive the curvature matrix as
\[
A_\infty((x,x')) = 2\,\mathrm{diag}(v_0)((x,x'))^{-1}\,Q((x,x'))\,\mathrm{diag}(v_0)((x,x'))^{-1} = \alpha A_\infty(x)\oplus\beta A_\infty(x'),
\]
as desired.

Next we would like to prove Theorem 1.12, which will be rephrased as a more abstract statement. This will prepare us to discuss, in a later section, the connection to the Ricci curvature in the case of weighted manifolds.

Theorem 6.1.
For $i \in \{1,2\}$, let $(V_i, \langle\cdot,\cdot\rangle_i)$ be Euclidean vector spaces, $A_i : V_i \to V_i$ be symmetric linear maps, and $B_i : V_i \to V_i$ be symmetric rank one maps, i.e., of the form $B_i = \langle v_i,\cdot\rangle_i\, v_i$ for some fixed $v_i \in V_i$. Given fixed weights $\alpha, \beta > 0$, let $V = V_1\oplus V_2$ with the inner product $\langle v_1\oplus v_2,\ w_1\oplus w_2\rangle := \langle v_1, w_1\rangle_1 + \langle v_2, w_2\rangle_2$. Let $A, B : V\to V$ be symmetric operators defined as
\[
A(v_1\oplus v_2) := \alpha A_1(v_1)\oplus\beta A_2(v_2) \quad\text{and}\quad B := \big\langle \sqrt{\alpha}\,v_1\oplus\sqrt{\beta}\,v_2,\ \cdot\ \big\rangle\,\big(\sqrt{\alpha}\,v_1\oplus\sqrt{\beta}\,v_2\big).
\]
Consider the rank one perturbations $A_i(N) := A_i - \frac{c}{N-n_i}B_i$ for $i\in\{1,2\}$ and $A(N) := A - \frac{c}{N-n_1-n_2}B$ for some constants $c, n_i \ge 0$. Then we have
\[
\min\{\alpha\lambda_1, \beta\lambda_2\} \le \lambda_{\min}\big(A(N_1+N_2)\big) \le \max\{\alpha\lambda_1, \beta\lambda_2\}, \tag{6.1}
\]
where $\lambda_i := \lambda_{\min}(A_i(N_i))$.

The purpose of the constants $c$ and $n_i$ is to unify the Cartesian product result in both settings of weighted graphs and weighted manifolds. In the case of graphs, we apply Theorem 6.1 with $c = 2$ and $n_1 = n_2 = 0$ to conclude Theorem 1.12. In the case of manifolds, we apply Theorem 6.1 with $c = 1$ and $n_1, n_2$ being the dimensions of the manifolds $M_1, M_2$.

Proof of Theorem 6.1.
The proof is given in matrix form: for each $i\in\{1,2\}$, $A_i$ and $B_i = v_iv_i^\top$ are symmetric matrices of size $m_i$, and $v_i$ is a vector. The matrices $A$ and $B$ are then given by
\[
A := \begin{pmatrix} \alpha A_1 & 0\\ 0 & \beta A_2 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} \alpha\, v_1v_1^\top & \sqrt{\alpha\beta}\, v_1v_2^\top\\ \sqrt{\alpha\beta}\, v_2v_1^\top & \beta\, v_2v_2^\top \end{pmatrix}.
\]
It follows that
\[
A(N_1+N_2) = \begin{pmatrix} \alpha A_1(N_1) & 0\\ 0 & \beta A_2(N_2) \end{pmatrix} + \frac{c}{N_1+N_2-n_1-n_2}\underbrace{\begin{pmatrix} \alpha\frac{N_2-n_2}{N_1-n_1}\, v_1v_1^\top & -\sqrt{\alpha\beta}\, v_1v_2^\top\\ -\sqrt{\alpha\beta}\, v_2v_1^\top & \beta\frac{N_1-n_1}{N_2-n_2}\, v_2v_2^\top \end{pmatrix}}_{=:\,J}.
\]
We want to verify that $J \succeq 0$, which will then imply the left inequality in (6.1). For any vector $w = \begin{pmatrix} w_1\\ w_2 \end{pmatrix}$ with $w_i\in\mathbb{R}^{m_i}$, we have
\[
w^\top Jw = \alpha\frac{N_2-n_2}{N_1-n_1}(w_1^\top v_1)^2 + \beta\frac{N_1-n_1}{N_2-n_2}(w_2^\top v_2)^2 - 2\sqrt{\alpha\beta}\,(w_1^\top v_1)(w_2^\top v_2) = \Big(\sqrt{\alpha\tfrac{N_2-n_2}{N_1-n_1}}\,w_1^\top v_1 - \sqrt{\beta\tfrac{N_1-n_1}{N_2-n_2}}\,w_2^\top v_2\Big)^2 \ge 0.
\]
Thus $J \succeq 0$. Next we prove the right inequality in (6.1). For $i\in\{1,2\}$, we choose a unit eigenvector $w_i$ such that $A_i(N_i)w_i = \lambda_iw_i$, where $\lambda_i = \lambda_{\min}(A_i(N_i))$, and let $w := \begin{pmatrix} c_1w_1\\ c_2w_2 \end{pmatrix}$ with constants $c_1, c_2$ not both zero. It follows from the Rayleigh quotient description that
\[
\lambda_{\min}\big(A(N_1+N_2)\big) \le \frac{w^\top\big(\alpha A_1(N_1)\oplus\beta A_2(N_2)\big)w + \frac{c}{N_1+N_2-n_1-n_2}\,w^\top Jw}{w^\top w} = \frac{1}{c_1^2+c_2^2}\Bigg(\alpha c_1^2\lambda_1 + \beta c_2^2\lambda_2 + \frac{c}{N_1+N_2-n_1-n_2}\Big(c_1\sqrt{\alpha\tfrac{N_2-n_2}{N_1-n_1}}\,w_1^\top v_1 - c_2\sqrt{\beta\tfrac{N_1-n_1}{N_2-n_2}}\,w_2^\top v_2\Big)^2\Bigg).
\]
We may choose $c_1 = \sqrt{\beta\frac{N_1-n_1}{N_2-n_2}}\,w_2^\top v_2$ and $c_2 = \sqrt{\alpha\frac{N_2-n_2}{N_1-n_1}}\,w_1^\top v_1$ so that the square term above becomes zero (if both of these expressions vanish, any admissible choice of $c_1, c_2$ makes the square term zero). As a result,
\[
\lambda_{\min}\big(A(N_1+N_2)\big) \le \frac{\alpha c_1^2\lambda_1 + \beta c_2^2\lambda_2}{c_1^2+c_2^2} \le \max\{\alpha\lambda_1, \beta\lambda_2\},
\]
which finishes the proof of (6.1).

In this section, we compare the Ricci curvature of graphs and of Riemannian manifolds on a philosophical level. What we have shown for graphs is that
\[
K_{G,x}(N) = \lambda_{\min}\Big(A_\infty(x) - \frac{2}{N}\,v_0v_0^\top\Big).
\]
In [9, Section 1.6], the authors draw a comparison between curvature functions of graphs and those of weighted Riemannian manifolds. In this subsection, we discuss this comparison further.

A weighted Riemannian manifold is a triple $(M^n, g, e^{-f}d\mathrm{vol}_g)$, where $(M^n, g)$ is an $n$-dimensional Riemannian manifold, $d\mathrm{vol}_g$ is the Riemannian volume element, and $f$ is a smooth real valued function on $M^n$. The weighted Ricci tensor of $(M^n, g, e^{-f}d\mathrm{vol}_g)$ is defined to be
\[
\mathrm{Ric}_{f,N} := \mathrm{Ric} + \mathrm{Hess}\,f - \frac{df\otimes df}{N-n}, \tag{7.1}
\]
where $\mathrm{Ric}$ is the Ricci curvature tensor of $(M^n, g)$ and $\mathrm{Hess}\,f$ is the Hessian of $f$ ([1, 2]). Using the $f$-Laplacian $\Delta_f = \Delta_g - \nabla f\cdot\nabla$, where $\Delta_g$ is the Laplace-Beltrami operator on $(M^n, g)$, one can define the Bakry-Émery curvature-dimension inequality $CD(K,N)$ (at any point $x\in M$). Then $CD(K,N)$, $N\in(n,\infty]$ (at a given point $x\in M$) holds if and only if $\mathrm{Ric}_{f,N}\ge K$ (at $x\in M$) (see [1, pp. 93-94]). At a point $x\in M$, let $K_{M,f,x}(N)$ be the largest $K$ such that $CD(K,N)$ holds at $x$ for a given $N\in(n,\infty]$. Then $K_{M,f,x} : (n,\infty]\to\mathbb{R}$ is called the curvature function of $(M^n, g, e^{-f}d\mathrm{vol}_g)$ at $x$.

Recall that $n$ in (7.1) is the dimension of the underlying Riemannian manifold. When $f$ is constant, that is, when the curvature-dimension inequality is based on the Laplace-Beltrami operator, the dimension $N$ can be equal to $n$, and the function $K_{M,f,x}$ is a constant function on $[n,\infty]$ (see [1, pp. 93-94], [3, Appendix C.6]).
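This dichotomy — a constant curvature function for constant $f$, and divergence as $N\downarrow n$ otherwise — is driven by the rank one perturbation structure $A_N = A_\infty - \frac{1}{N-n}B$ derived in this section. The following is a minimal numerical sketch of that behaviour; all matrices, vectors and the dimension value below are made-up illustrative data, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3.0                                   # stand-in "manifold dimension"
A_inf = rng.standard_normal((3, 3))
A_inf = (A_inf + A_inf.T) / 2             # symmetric, plays the role of A_infinity
grad_f = np.array([1.0, -2.0, 0.5])       # stand-in for the gradient of f at x
B = np.outer(grad_f, grad_f)              # rank one and positive semidefinite

def K(N, B):
    """Curvature function N -> lambda_min(A_inf - B/(N - n))."""
    return np.linalg.eigvalsh(A_inf - B / (N - n))[0]

Ns = [3.001, 3.5, 5.0, 50.0, 1e8]
vals = [K(N, B) for N in Ns]
# Since B is PSD, a larger N subtracts less in the Loewner order, so the
# curvature function is non-decreasing in N; it blows down to -infinity
# as N approaches n from above.
assert all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
assert vals[0] < -100
# With f constant (B = 0) the function is constant in N.
assert abs(K(4.0, 0 * B) - K(400.0, 0 * B)) < 1e-12
```

The same Loewner-order argument shows that the graph curvature function $N\mapsto\lambda_{\min}\big(A_\infty(x) - \frac{2}{N}v_0v_0^\top\big)$ is non-decreasing in $N$.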
When $f$ is not constant, $K_{M,f,x}(N)$ tends to $-\infty$ as $N$ tends to $n$. Since $\mathrm{Ric}_{f,N}$ is a symmetric $(0,2)$-tensor, there exists a linear transformation $A_N : T_xM\to T_xM$ from the tangent space $T_xM$ of $M$ at $x$ to itself, such that
\[
\mathrm{Ric}_{f,N}(v,v) = g(A_Nv, v), \quad\text{for any } v\in T_xM.
\]
Therefore, the lower Ricci curvature bound $K_{M,f,x}(N) := \inf_{v\in S_xM}\mathrm{Ric}_{f,N}(v,v)$, where $S_xM$ denotes the unit tangent sphere at $x$, can be expressed as the minimal eigenvalue of $A_N$:
\[
K_{M,f,x}(N) = \lambda_{\min}(A_N).
\]
For any $v, w\in T_xM$, the Ricci curvature can be written independently of the choice of an orthonormal basis $\{e_i\}_{i=1}^n$ of the tangent space $T_xM$ as
\[
\mathrm{Ric}_{f,N}(v,w) = \mathrm{Ric}(v,w) + \mathrm{Hess}\,f(v,w) - \frac{v(f)\cdot w(f)}{N-n} = \sum_{i=1}^n g\big(R(v,e_i)e_i, w\big) + g(\nabla_v\nabla f, w) - \frac{1}{N-n}\,g\big(g(\nabla f, v)\nabla f, w\big),
\]
where $\nabla_v\cdot$ is the covariant derivative along $v$, $R(\cdot,\cdot)\cdot$ is the Riemann curvature tensor, and $\nabla f$ is the gradient of $f$. Therefore, the linear transformation $A_N : T_xM\to T_xM$ can be explicitly given as follows: for any $v\in T_xM$,
\[
A_Nv = \sum_{i=1}^n R(v,e_i)e_i + \nabla_v\nabla f - \frac{g(\nabla f, v)}{N-n}\nabla f =: A_\infty v - \frac{1}{N-n}Bv,
\]
where $A_\infty : T_xM\to T_xM$ can be thought of as the linear transformation represented by "the curvature matrix" at $x$ of the weighted manifold $(M, g, e^{-f}d\mathrm{vol}_g)$, and $A_N = A_\infty - \frac{1}{N-n}B$ is a rank one perturbation of $A_\infty$ with $B := g(\nabla f,\cdot)\nabla f$.
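The operator $A_N$ here has exactly the rank one structure assumed in Theorem 6.1 (with $c = 1$ and $n_1, n_2$ the dimensions of the factors), so inequality (6.1) is easy to sanity-check numerically. The sketch below uses random made-up data — the sizes, weights, matrices and dimension parameters are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def lam_min(M):
    return float(np.linalg.eigvalsh(M)[0])

m1, m2 = 3, 4                      # sizes of V_1, V_2 (made-up)
alpha, beta = 0.7, 1.3             # fixed positive weights
c, n1, n2 = 1.0, 3.0, 4.0          # constants as in the manifold setting
N1, N2 = 10.0, 9.0                 # dimension parameters, N_i > n_i

A1 = rng.standard_normal((m1, m1)); A1 = (A1 + A1.T) / 2
A2 = rng.standard_normal((m2, m2)); A2 = (A2 + A2.T) / 2
v1, v2 = rng.standard_normal(m1), rng.standard_normal(m2)

# Rank one perturbations A_i(N_i) = A_i - c/(N_i - n_i) v_i v_i^T
A1N = A1 - c / (N1 - n1) * np.outer(v1, v1)
A2N = A2 - c / (N2 - n2) * np.outer(v2, v2)

# Product operator A(N1+N2) = (alpha A1 (+) beta A2) - c/(N1+N2-n1-n2) w w^T,
# where w = sqrt(alpha) v1 (+) sqrt(beta) v2, as in Theorem 6.1.
A = np.block([[alpha * A1, np.zeros((m1, m2))],
              [np.zeros((m2, m1)), beta * A2]])
w = np.concatenate([np.sqrt(alpha) * v1, np.sqrt(beta) * v2])
AN = A - c / (N1 + N2 - n1 - n2) * np.outer(w, w)

lo = min(alpha * lam_min(A1N), beta * lam_min(A2N))
hi = max(alpha * lam_min(A1N), beta * lam_min(A2N))
assert lo - 1e-9 <= lam_min(AN) <= hi + 1e-9   # inequality (6.1)
```

The left inequality reflects the positive semidefiniteness of the matrix $J$ from the proof, the right one the Rayleigh-quotient bound.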
This result can be compared to $K_{G,x}(N) = \lambda_{\min}(A_N) = \lambda_{\min}\big(A_\infty - \frac{2}{N}v_0v_0^\top\big)$ in the case of a weighted graph $G$.

Now for the Cartesian product $(M, g, e^{-f}d\mathrm{vol}) = (M_1\times M_2,\ g_1\times g_2,\ e^{-f_1-f_2}d\mathrm{vol})$ of two weighted manifolds $(M_i^{n_i}, g_i, e^{-f_i}d\mathrm{vol})$, $i\in\{1,2\}$, with the canonical identification $T_{(x_1,x_2)}M \simeq T_{x_1}M_1\oplus T_{x_2}M_2$, we observe that $A_\infty, B$ of the product decompose naturally into the corresponding $A_\infty, B$ of each factor, that is,
\[
A^M_\infty(v_1\oplus v_2) = \Big(\sum_{i=1}^{n_1} R_1(v_1,e_i)e_i + \nabla_{v_1}\nabla f_1\Big) \oplus \Big(\sum_{j=1}^{n_2} R_2(v_2,\tilde e_j)\tilde e_j + \nabla_{v_2}\nabla f_2\Big) = A^{M_1}_\infty(v_1)\oplus A^{M_2}_\infty(v_2),
\]
where $\{e_i\}$ and $\{\tilde e_j\}$ are orthonormal bases of $T_{x_1}M_1$ and $T_{x_2}M_2$, and
\[
B(v_1\oplus v_2) = g\big(\nabla f_1\oplus\nabla f_2,\ v_1\oplus v_2\big)\,\big(\nabla f_1\oplus\nabla f_2\big),
\]
for any $v_i\in T_{x_i}M_i$. Now Theorem 6.1 is also applicable for manifolds (with $\alpha = \beta = 1$), and it yields
\[
\min\big\{\lambda_{\min}(A^{M_1}_{N_1}), \lambda_{\min}(A^{M_2}_{N_2})\big\} \le \lambda_{\min}(A_{N_1+N_2}) \le \max\big\{\lambda_{\min}(A^{M_1}_{N_1}), \lambda_{\min}(A^{M_2}_{N_2})\big\},
\]
and consequently, $K_{M,f,(x_1,x_2)} = K_{M_1,f_1,x_1} * K_{M_2,f_2,x_2}$.

$B_2(x)$ and curvature properties

Proof of Theorem 1.14.
In view of Proposition 1.7, we need to show that $v_0$ is an eigenvector of $A_\infty(x)$ under the $S_1$-in and $S_1$-out regularity assumptions: $p^-(y) := p_{yx}$ and $p^+(y) := \sum_{z\in S_2(x)}p_{yz}$ are independent of $y\in S_1(x)$.

The vector $v_0 = \big(\sqrt{p_{xy_1}}\ \sqrt{p_{xy_2}}\ \cdots\ \sqrt{p_{xy_m}}\big)^\top$ is an eigenvector of $A_\infty(x)$ if and only if
\[
\lambda v_0 = A_\infty(x)v_0 = 2\,\mathrm{diag}(v_0)^{-1}Q(x)\,\mathrm{diag}(v_0)^{-1}v_0
\]
for some $\lambda\in\mathbb{R}$, or equivalently, $2Q(x)\mathbb{1}_m = \lambda\,\big(p_{xy_1}\ p_{xy_2}\ \cdots\ p_{xy_m}\big)^\top$, i.e.,
\[
\frac{2}{p_{xy_i}}\sum_{j=1}^m Q_{y_iy_j} = \lambda \quad\text{is independent of } i\in[m].
\]
A direct calculation using the formulae (A.7) and (A.8) yields, for any $i\in[m]$,
\[
\frac{1}{p_{xy_i}}\sum_{j=1}^m Q_{y_iy_j} = \frac14\frac{d_x}{\mu_x} + \frac34\, p_{y_ix} + \sum_{j=1}^m\Big(\frac14\, p_{y_iy_j} - \frac14\,\frac{p_{xy_j}p_{y_jy_i}}{p_{xy_i}}\Big) - \frac14\sum_{z\in S_2(x)}p_{y_iz} = \frac14\frac{d_x}{\mu_x} + \frac34\, p_{y_ix} + \frac14\sum_{j=1}^m p_{y_iy_j}\Big(\underbrace{1 - \frac{p_{y_jx}}{p_{y_ix}}}_{=0}\Big) - \frac14\, p^+(y_i),
\]
where we used the reversibility identity $\frac{p_{xy_j}p_{y_jy_i}}{p_{xy_i}} = p_{y_iy_j}\frac{p_{y_jx}}{p_{y_ix}}$. This expression is independent of $i$, given that $x$ is $S_1$-in and $S_1$-out regular.

Proof of Theorem 1.15.
We denote by $\widetilde Q$, $\widetilde A_\infty$ and $\widetilde A_N$ the corresponding matrices $Q$, $A_\infty$ and $A_N$ centered at the vertex $x$ of the modified graph $\widetilde G = (V, \widetilde w, \mu)$. We aim to prove that $K_{\widetilde G,x}(N) \ge K_{G,x}(N)$, that is, $\lambda_{\min}(\widetilde A_N) \ge \lambda_{\min}(A_N)$. It suffices to show that $\widetilde A_N - A_N$ is positive semidefinite, since this would then imply (by Weyl's inequality) that
\[
\lambda_{\min}(\widetilde A_N) \ge \lambda_{\min}(\widetilde A_N - A_N) + \lambda_{\min}(A_N) \ge \lambda_{\min}(A_N).
\]
Note that the vector $v_0 = \big(\sqrt{p_{xy_1}}\ \sqrt{p_{xy_2}}\ \cdots\ \sqrt{p_{xy_m}}\big)^\top$ is unchanged under this graph modification, so we have
\[
\widetilde A_N - A_N = 2\,\mathrm{diag}(v_0)^{-1}\big(\widetilde Q - Q\big)\,\mathrm{diag}(v_0)^{-1}.
\]
Hence proving $\widetilde A_N - A_N \succeq 0$ is equivalent to showing that $\widetilde Q - Q \succeq 0$.

Operation (O1):
The modification $\widetilde w_{yy'} = w_{yy'} + C$ for a constant $C > 0$ means $\widetilde p_{yy'} - p_{yy'} = \frac{C}{\mu_y}$ and $\widetilde p_{y'y} - p_{y'y} = \frac{C}{\mu_{y'}}$. We then derive from the formulae (A.7) and (A.8) that the matrix $\widetilde Q - Q$ has four nontrivial entries:
\[
(\widetilde Q - Q)_{yy} = \frac14\Big(3p_{xy}\big(\widetilde p_{yy'} - p_{yy'}\big) + p_{xy'}\big(\widetilde p_{y'y} - p_{y'y}\big)\Big) = \frac{C}{4}\Big(\frac{3p_{xy}}{\mu_y} + \frac{p_{xy'}}{\mu_{y'}}\Big) = \frac{C}{4\mu_x}\big(3p_{yx} + p_{y'x}\big),
\]
where we used $\frac{p_{xy}}{\mu_y} = \frac{w_{xy}}{\mu_x\mu_y} = \frac{p_{yx}}{\mu_x}$; and similarly, $(\widetilde Q - Q)_{y'y'} = \frac{C}{4\mu_x}(p_{yx} + 3p_{y'x})$ and $(\widetilde Q - Q)_{yy'} = (\widetilde Q - Q)_{y'y} = -\frac{C}{2\mu_x}(p_{yx} + p_{y'x})$. Consequently, the matrix $\widetilde Q - Q$ has two nontrivial eigenvalues, corresponding to those of the following $2\times2$ matrix:
\[
\begin{pmatrix} (\widetilde Q - Q)_{yy} & (\widetilde Q - Q)_{yy'}\\ (\widetilde Q - Q)_{y'y} & (\widetilde Q - Q)_{y'y'} \end{pmatrix} = \frac{C}{4\mu_x}\begin{pmatrix} 3p_{yx} + p_{y'x} & -2(p_{yx} + p_{y'x})\\ -2(p_{yx} + p_{y'x}) & p_{yx} + 3p_{y'x} \end{pmatrix}.
\]
This matrix has eigenvalues $\frac{C}{4\mu_x}\Big(2(p_{yx}+p_{y'x}) \pm \sqrt{4(p_{yx}+p_{y'x})^2 + (p_{yx}-p_{y'x})^2}\Big)$; its determinant equals $-\big(\frac{C}{4\mu_x}\big)^2(p_{yx}-p_{y'x})^2 \le 0$, so it becomes positive semidefinite only when we assume $p_{yx} = p_{y'x}$.

Operation (O2):
Note that the edge-weight modification $\widetilde w_{yy'} = w_{yy'} + C\, w_{yz}w_{zy'}$ for all distinct $y, y'\in S_1(x)$ means $\widetilde p_{yy'} - p_{yy'} = C\mu_z\, p_{yz}p_{zy'}$. Here the vertex $z\in S_2(x)$ is deleted in the modification, so all terms of $Q$ involving $z$ are absent from $\widetilde Q$; this produces the positive last terms in (8.1) and (8.2) below.

For $y_i, y_j\in S_1(x)$ with $y_i\ne y_j$, the formula (A.8) gives
\[
\begin{aligned}
(\widetilde Q - Q)_{y_iy_j} &= -\frac12\, p_{xy_i}\big(\widetilde p_{y_iy_j} - p_{y_iy_j}\big) - \frac12\, p_{xy_j}\big(\widetilde p_{y_jy_i} - p_{y_jy_i}\big) + \frac{p_{xy_i}p_{y_iz}\;p_{xy_j}p_{y_jz}}{p^{(2)}_{xz}} \qquad (8.1)\\
&= -\frac12\, C\mu_z\, p_{xy_i}p_{y_iz}p_{zy_j} - \frac12\, C\mu_z\, p_{xy_j}p_{y_jz}p_{zy_i} + \frac{p_{xy_i}p_{y_iz}\;p_{xy_j}p_{y_jz}}{p^{(2)}_{xz}}\\
&= -C\mu_z\, p_{xy_i}p_{y_iz}p_{zy_j} + \frac{p_{xy_i}p_{y_iz}\;p_{xy_j}p_{y_jz}}{p^{(2)}_{xz}}\\
&= -p_{xy_i}p_{y_iz}p_{zy_j}\Big(C\mu_z - \frac{p_{xy_j}p_{y_jz}}{p_{zy_j}\,p^{(2)}_{xz}}\Big),
\end{aligned}
\]
where the third equality is due to $p_{xy_i}p_{y_iz}p_{zy_j} = p_{xy_j}p_{y_jz}p_{zy_i}$, which can be checked by
\[
\frac{p_{xy_i}p_{y_iz}p_{zy_j}}{p_{xy_j}p_{y_jz}p_{zy_i}} = \frac{w_{xy_i}w_{y_iz}w_{zy_j}}{\mu_x\mu_{y_i}\mu_z}\cdot\frac{\mu_x\mu_{y_j}\mu_z}{w_{xy_j}w_{y_jz}w_{zy_i}} = \frac{w_{xy_i}\mu_{y_j}}{\mu_{y_i}w_{xy_j}} = \frac{p_{y_ix}}{p_{y_jx}} = \frac{p^-(y)}{p^-(y)} = 1.
\]
For $y_i\in S_1(x)$, the formula (A.7) gives
\[
\begin{aligned}
(\widetilde Q - Q)_{y_iy_i} &= -\frac34\, p_{xy_i}p_{y_iz} + \frac14\sum_{y_j\ne y_i}\Big(3p_{xy_i}\big(\widetilde p_{y_iy_j} - p_{y_iy_j}\big) + p_{xy_j}\big(\widetilde p_{y_jy_i} - p_{y_jy_i}\big)\Big) + \frac{\big(p_{xy_i}p_{y_iz}\big)^2}{p^{(2)}_{xz}} \qquad (8.2)\\
&= -\frac34\, p_{xy_i}p_{y_iz} + \frac14\sum_{y_j\ne y_i}\Big(3C\mu_z\, p_{xy_i}p_{y_iz}p_{zy_j} + C\mu_z\, p_{xy_j}p_{y_jz}p_{zy_i}\Big) + \frac{\big(p_{xy_i}p_{y_iz}\big)^2}{p^{(2)}_{xz}}\\
&= p_{xy_i}p_{y_iz}\Big(-\frac34 + C\mu_z\sum_{y_j\ne y_i}p_{zy_j}\Big) + \frac{\big(p_{xy_i}p_{y_iz}\big)^2}{p^{(2)}_{xz}}.
\end{aligned}
\]
Combining (8.2) and (8.1), we derive the sum of the entries in the $i$-th row as
\[
\begin{aligned}
(\widetilde Q - Q)_{y_iy_i} + \sum_{j\ne i}(\widetilde Q - Q)_{y_iy_j} &= -\frac34\, p_{xy_i}p_{y_iz} + \frac{\big(p_{xy_i}p_{y_iz}\big)^2}{p^{(2)}_{xz}} + \sum_{j\ne i}\frac{p_{xy_i}p_{y_iz}\;p_{xy_j}p_{y_jz}}{p^{(2)}_{xz}}\\
&= p_{xy_i}p_{y_iz}\Big(-\frac34 + \frac{1}{p^{(2)}_{xz}}\sum_{y\in S_1(x)}p_{xy}p_{yz}\Big) = \frac14\, p_{xy_i}p_{y_iz} > 0,
\end{aligned}
\]
since $\sum_{y\in S_1(x)}p_{xy}p_{yz} = p^{(2)}_{xz}$. (Note that the terms involving $C$ are cancelled out in the above expression.)

Under the assumption that $C\mu_z \ge \frac{p_{xy_j}p_{y_jz}}{p_{zy_j}p^{(2)}_{xz}}$ for all $j\ne i$, we can guarantee in (8.1) that $(\widetilde Q - Q)_{y_iy_j} \le 0$. It then follows that
\[
(\widetilde Q - Q)_{y_iy_i} > -\sum_{j\ne i}(\widetilde Q - Q)_{y_iy_j} = \sum_{j\ne i}\big|(\widetilde Q - Q)_{y_iy_j}\big|,
\]
i.e., $\widetilde Q - Q$ is diagonally dominant and hence $\widetilde Q - Q \succeq 0$.

Finally, we remark that the assumption $C\mu_z \ge \frac{p_{xy_j}p_{y_jz}}{p_{zy_j}p^{(2)}_{xz}}$ can be rewritten as the assumption given in Theorem 1.15, namely $C \ge \frac{p^-(y)}{\mu_x\, p^{(2)}_{xz}}$, due to the following identity:
\[
\frac{p_{xy_j}p_{y_jz}}{p_{zy_j}\,p^{(2)}_{xz}}\cdot\frac{1}{\mu_z} = \frac{w_{xy_j}w_{y_jz}}{\mu_x\mu_{y_j}w_{zy_j}\,p^{(2)}_{xz}} = \frac{p_{y_jx}}{\mu_x\, p^{(2)}_{xz}} = \frac{p^-(y)}{\mu_x\, p^{(2)}_{xz}},
\]
using $w_{y_jz} = w_{zy_j}$.

Remark 8.1.
The $S_1$-in regularity condition helps to balance the terms $3p_{yx} + p_{y'x}$ and $p_{yx} + 3p_{y'x}$, which appear on the diagonal, with the term $-2(p_{yx} + p_{y'x})$ appearing off-diagonally.

A Explicit Structure of relevant matrices