A Two-Level Preconditioned Helmholtz-Jacobi-Davidson Method for the Maxwell Eigenvalue Problem
Qigang Liang and Xuejun Xu
School of Mathematical Science, Tongji University, Shanghai 200092, China, qigang [email protected]
Institute of Computational Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100190, China, [email protected]
Abstract:
In this paper, based on a domain decomposition (DD) method, we propose an efficient two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) method for solving the algebraic eigenvalue problem resulting from the edge element approximation of the Maxwell eigenvalue problem. In order to eliminate the components in the orthogonal complement space of the eigenfunction, we solve a parallel preconditioned system and a Helmholtz projection system together in the fine space. After one coarse space correction in each iteration and minimizing the Rayleigh quotient in a small dimensional Davidson space, we finally obtain the error reduction of this two-level PHJD method as γ = c(H)(1 − C δ/H), where C is a constant independent of the mesh size h and the diameter of subdomains H, δ is the overlapping size among the subdomains, and c(H) is decreasing as H → 0, which means that the greater the number of subdomains, the better the convergence rate. Numerical results supporting our theory are given.
Keywords:
Maxwell eigenvalue problem, edge element, Helmholtz projection, Jacobi-Davidson method, domain decomposition.
1 Introduction

In this paper, we develop an efficient numerical algorithm for solving the Maxwell eigenvalue problem, which plays an important role in computational electromagnetism (see, e.g., [3, 4, 12, 15, 16, 18]). The governing equations are

  curl E = −jωμH,  x ∈ Ω,    curl H = jωεE,  x ∈ Ω,
  div(εE) = 0,  x ∈ Ω,       div(μH) = 0,  x ∈ Ω,        (1.1)
  n × E = 0,  x ∈ ∂Ω,        (μH) · n = 0,  x ∈ ∂Ω.

Eliminating the magnetic field H, we obtain the equivalent eigenvalue problem for the electric field E as follows:

  curl μ^{-1} curl E = ω² ε E,  x ∈ Ω,
  div(εE) = 0,  x ∈ Ω,                                    (1.2)
  n × E = 0,  x ∈ ∂Ω,

where the resonant cavity Ω ⊂ R³ is a bounded convex polyhedral domain, the coefficients ε and μ are the real electric permittivity and magnetic permeability, respectively, j is the imaginary unit, and ω is the resonant angular frequency of the electromagnetic wave for the cavity Ω. For convenience, we denote u := E and λ := ε_0 μ_0 ω², where ε_0 and μ_0 are the electric permittivity and magnetic permeability in vacuum, respectively. So equations (1.2) may be rewritten as

  curl μ_r^{-1} curl u = λ ε_r u,  x ∈ Ω,
  div(ε_r u) = 0,  x ∈ Ω,                                 (1.3)
  n × u = 0,  x ∈ ∂Ω,

where the coefficients ε_r and μ_r are the real relative electric permittivity and magnetic permeability, respectively. For simplicity, we assume that the medium is isotropic and homogeneous, i.e., the coefficients ε_r (≥ 1) and μ_r (≥ 1) are both positive constants.

Edge elements were introduced by Nédélec (see, e.g., [19, 20]). The lowest-order edge elements are often referred to as Whitney elements (see, e.g., [4, 12] and the references therein). As edge elements eliminate the nonphysical spurious nonzero eigenvalues, they have been widely used and studied for solving the Maxwell eigenvalue problem. It is known that the challenging task in solving the Maxwell eigenvalue problem is how to deal with the infinite dimensional kernel of the curl operator. Owing to this difficulty, one approach is the so-called penalty method, which imposes the divergence-free condition by introducing a penalty term (see [7]), but it usually results in spurious eigenvalues. An alternative approach is the mixed variational method, introduced by Kikuchi (see [11]) and Boffi (see, e.g., [4, 5]). This method handles the kernel of the curl operator but simultaneously brings a larger scale computational problem and a difficult saddle-point problem. The standard method drops the divergence-free condition; though this may induce spurious zero frequencies, doing so does not contaminate the nonzero eigenvalues (see, e.g., [4, 12]).

Two-grid methods have been widely used to solve elliptic eigenvalue problems (see, e.g., [14, 27, 28]). Their application to the Maxwell eigenvalue problem has been considered in [30]. The idea of two-grid methods is that one may first solve an eigenvalue problem on the coarse grid and then solve a boundary value problem on the fine grid. The Rayleigh quotient may be used to accelerate the approximation of the eigenvalue. Under the assumptions h = O(H^i) (i = 2, 3, 4; see [27], [30], [28]), asymptotically optimal error estimates may be obtained, respectively. Based on the de Rham complex, Hiptmair and Neymeyr (see [13]) proposed a projected preconditioned inverse iteration method. It handles the kernel of the curl operator by the Helmholtz projection, and a multilevel preconditioning technique is chosen to accelerate the inverse iteration procedure. This idea has also been extended to the adaptive discretization of the Maxwell eigenvalue problem (see [9]).

Two-level preconditioned Jacobi-Davidson methods for eigenvalue problems were proposed by Zhao, Hwang and Cai (see [29]) and further developed and analyzed by Wang and Xu (see [25, 26]) for 2m-order (m = 1, 2) elliptic eigenvalue problems. The convergence rate for the first eigenvalue of the elliptic eigenvalue problems is bounded by c(H)(1 − C δ^{2m−1}/H^{2m−1}) (m = 1, 2). In this paper, we obtain the error reduction c(H)(1 − C δ/H) for the Maxwell eigenvalue problem, i.e.,

  λ^{k+1} − λ_h ≤ c(H)(1 − C δ/H)(λ^k − λ_h),             (1.4)

where C is a constant independent of the mesh size h and the diameter of subdomains H, δ is the overlapping size among the subdomains, λ_h is the first discrete eigenvalue of the Maxwell eigenvalue problem, and λ^k is the k-th iterate of the two-level PHJD method. In addition, our algorithm works very well when h ≪ H, in contrast to the two-grid method, which needs h = O(H²) for the Maxwell eigenvalue problem (see [30]). Meanwhile, our PHJD method enjoys good scalability, which shall be verified by our numerical experiments.

We must emphasize that the application of the two-level domain decomposition preconditioned algorithm to the Maxwell eigenvalue problem is nontrivial. The first difficulty is that the condition

  |λ_h − λ| + ‖u − u_h‖ + h^m ‖u − u_h‖_a = O(h^{2m})     (1.5)

holds for 2m-order (m = 1, 2) elliptic problems (see [26]), but it is not true for the first-type edge elements because the polynomial space is incomplete. The second difficulty is that the kernel of the curl operator in H_0(curl; Ω), which is ∇H_0^1(Ω) for a domain with trivial topology, is very large, while the kernel of the gradient operator in H_0^1(Ω) is the zero space. Hence, if we directly apply the Jacobi-Davidson method to the algebraic system resulting from the edge element approximation of the Maxwell eigenvalue problem, it fails to work because the iterative solution may plunge into the kernel of the curl operator. The third difficulty is that the discrete divergence-free space is non-nested: the discrete edge element space is essentially nonconforming for the space H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r) (defined below in (2.1), (2.2)). In this paper, we shall overcome these difficulties and prove that the convergence of the principal eigenvalue is nearly optimal, i.e., (1.4) is true.

The outline of this paper is as follows. Some preliminaries are introduced in Section 2. In Section 3, our two-level preconditioned Helmholtz-Jacobi-Davidson method for the Maxwell eigenvalue problem is proposed. Some useful lemmas and the main convergence analysis are given in Section 4. Finally, we present our numerical results in Section 5.
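The two-grid idea recalled above (coarse eigenvalue solve, one fine boundary value solve, Rayleigh quotient acceleration) can be sketched algebraically. The following is a minimal sketch of ours, assuming a 1D Dirichlet Laplacian with identity mass matrix as a stand-in for the discrete operator; it is not the edge-element setting of this paper, and all names (`laplacian`, `prolong`, etc.) are hypothetical:

```python
import numpy as np

def laplacian(n):
    """1D Dirichlet Laplacian on n interior points of (0, 1): a simple stand-in
    for the discrete operator, with the mass matrix taken as the identity."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolong(vc, nc, nf):
    """Piecewise-linear interpolation of coarse nodal values onto the fine grid."""
    xc = np.concatenate(([0.0], np.arange(1, nc + 1) / (nc + 1), [1.0]))
    vc = np.concatenate(([0.0], vc, [0.0]))      # homogeneous boundary values
    xf = np.arange(1, nf + 1) / (nf + 1)
    return np.interp(xf, xc, vc)

nc, nf = 15, 255                                  # H = 1/16, h = 1/256 = O(H^2)
Ac, Af = laplacian(nc), laplacian(nf)

lam_c, Vc = np.linalg.eigh(Ac)                    # eigenvalue problem on the coarse grid
uf = np.linalg.solve(Af, lam_c[0] * prolong(Vc[:, 0], nc, nf))  # one fine solve
lam_2grid = (uf @ Af @ uf) / (uf @ uf)            # Rayleigh quotient acceleration

lam_fine = np.linalg.eigvalsh(Af)[0]              # reference: first fine-grid eigenvalue
```

In this toy setting the Rayleigh quotient of the corrected vector is far closer to the fine-grid eigenvalue than the coarse eigenvalue is, which is the mechanism the two-grid error estimates quantify.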
2 Preliminaries

Let Ω ⊂ R^d, d = 2, 3. Define

  H_0(curl; Ω) := { u ∈ L²(Ω)² | curl u ∈ L²(Ω), u · t|_{∂Ω} = 0 }  (if d = 2),
  H_0(curl; Ω) := { u ∈ L²(Ω)³ | curl u ∈ L²(Ω)³, u × n|_{∂Ω} = 0 }  (if d = 3),    (2.1)

equipped with the norm ‖u‖_curl := {‖u‖² + ‖curl u‖²}^{1/2}, where curl u (d = 2) denotes the scalar ∂u_2/∂x_1 − ∂u_1/∂x_2 (∂/∂x_i means a weak derivative), curl u (d = 3) denotes the vector (∂u_3/∂x_2 − ∂u_2/∂x_3, ∂u_1/∂x_3 − ∂u_3/∂x_1, ∂u_2/∂x_1 − ∂u_1/∂x_2), n denotes the outer unit normal vector, t denotes the unit tangential vector along the boundary ∂Ω, and ‖·‖ denotes the usual L²-norm induced by the L²-inner product (·, ·). We also denote

  H(div; Ω; ε_r) := { u ∈ L²(Ω)^d | div(ε_r u) ∈ L²(Ω) }  (d = 2, 3),
  H(div⁰; Ω; ε_r) := { u ∈ H(div; Ω; ε_r) | div(ε_r u) = 0 }  (d = 2, 3),            (2.2)

equipped with the norm ‖u‖_div := {‖u‖² + ‖div(ε_r u)‖²}^{1/2}, where div u denotes Σ_{i=1}^d ∂u_i/∂x_i. For convenience of notation, we only consider the 3-dimensional Maxwell eigenvalue problem in the following, but our algorithm also works very well and the theoretical results also hold for the 2-dimensional Maxwell eigenvalue problem.

We focus on the governing equations (1.3). It is known that the equivalent variational form is as follows (see, e.g., [4]): Find (λ, u) ∈ R × H_0(curl; Ω) such that λ > 0, ‖u‖_b = 1, and

  a(u, v) = λ b(u, v)  ∀ v ∈ H_0(curl; Ω),                                           (2.3)

where a(u, v) := (μ_r^{-1} curl u, curl v) and b(u, v) := (ε_r u, v). Define the norm ‖·‖_b := √b(·, ·) in H_0(curl; Ω). Since the Poincaré inequality holds in H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r), we may define the norm ‖·‖_a := √a(·, ·) in H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r). For any f ∈ L²(Ω)³, define T : L²(Ω)³ → H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r) such that

  a(Tf, v) = b(f, v)  ∀ v ∈ H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r).                          (2.4)

Combined with the Poincaré inequality, we know that the definition of the operator T is meaningful. It is obvious that T is symmetric in the sense of b(·, ·) because of the definitions of a(·, ·) and b(·, ·). Since H_0(curl; Ω) ∩ H(div⁰; Ω; ε_r) is embedded compactly in L²(Ω)³ (see, e.g., [1, 12]), the operator T : L²(Ω)³ → L²(Ω)³ is compact. Furthermore, the operator T : (L²(Ω)³, b(·, ·)) → (L²(Ω)³, b(·, ·)) is also compact. Hence, by the well-known Riesz-Schauder theorem, T u_i = (1/λ_i) u_i. Meanwhile, 0 < λ_1 ≤ λ_2 ≤ λ_3 ≤ ... ≤ λ_n → +∞ (n → +∞), and the corresponding eigenvectors u_1, u_2, u_3, ... satisfy

  a(u_i, u_j) = λ_i b(u_i, u_j) = δ_ij λ_i,

where δ_ij is the Kronecker symbol. Obviously, the eigenpairs of (2.3) and the eigenpairs of the operator T are in one-to-one correspondence (see [5, 30]).

We use the standard edge elements to discretize H_0(curl; Ω), and our ultimate conclusions are true for all other edge elements (see [19, 20]). For simplicity, we only consider the lowest-order edge element. The local polynomial space is

  ND(K) = { v | v = a + b × x, a, b ∈ R³, x ∈ K },                                   (2.5)

and the moments are

  M(v, p, e_ij) = ∫_{e_ij} v · t_ij p ds  ∀ p ∈ P_0(e_ij), 1 ≤ i < j ≤ 4,

where t_ij is the unit tangential vector along the edge e_ij, and the local basis vector fields are

  φ_ij^K = λ_i^K ∇λ_j^K − λ_j^K ∇λ_i^K,  1 ≤ i < j ≤ 4,

where λ_i^K is the barycentric coordinate corresponding to the i-th node of the tetrahedron K. So our edge element space is

  E_h(Ω_h) := { v ∈ H_0(curl; Ω) | v|_K ∈ ND(K) ∀ K ∈ τ_h }.

Then the discrete standard variational form of (2.3) may be written as follows: Find (λ_h, u_h) ∈ R × E_h(Ω_h) such that λ_h > 0, ‖u_h‖_b = 1, and

  a(u_h, v) = λ_h b(u_h, v)  ∀ v ∈ E_h(Ω_h).
(2.6)

Define the discrete divergence-free space as E_h(Ω_h; ε_r) := { v ∈ E_h(Ω_h) | b(v, ∇p_h) = 0 ∀ p_h ∈ S_h(Ω_h) }, with S_h(Ω_h) denoting the continuous piecewise linear polynomial space on Ω_h with vanishing trace on ∂Ω, and define the discrete operator T_h : L²(Ω)³ → E_h(Ω_h; ε_r) as follows: for any f ∈ L²(Ω)³,

  a(T_h f, v) = b(f, v)  ∀ v ∈ E_h(Ω_h; ε_r).                                        (2.7)

Due to the discrete Poincaré inequality (see, e.g., [12]), the definition of T_h is meaningful, and we may define the norm ‖·‖_a := √a(·, ·) in E_h(Ω_h; ε_r). It is easy to see that T_h is symmetric and compact. By the well-known Riesz-Schauder theorem, we have T_h u_{h,i} = (1/λ_{h,i}) u_{h,i}. Meanwhile, 0 < λ_{h,1} ≤ λ_{h,2} ≤ λ_{h,3} ≤ ... ≤ λ_{h,n_d}, and the corresponding eigenvectors u_{h,1}, u_{h,2}, ..., u_{h,n_d} satisfy

  a(u_{h,i}, u_{h,j}) = λ_{h,i} b(u_{h,i}, u_{h,j}) = δ_ij λ_{h,i},

where n_d = dim(E_h(Ω_h; ε_r)). Similar to the continuous case, the eigenpairs of (2.6) and the eigenpairs of the operator T_h are in one-to-one correspondence. For the convenience of the following error estimates, we define the operator A_h : E_h(Ω_h) → E_h(Ω_h) such that

  b(A_h v_h, w_h) = a(v_h, w_h)  ∀ w_h ∈ E_h(Ω_h),

which is the discrete analog of the operator curl μ_r^{-1} curl in E_h(Ω_h).

Remark 2.1
In this paper, we are interested in the case where the principal eigenvalue is simple, i.e., 0 < λ_1 < λ_2 ≤ λ_3 ≤ ... ≤ λ_n → +∞ (n → +∞), and for the corresponding discrete version we also have 0 < λ_{h,1} < λ_{h,2} ≤ λ_{h,3} ≤ ... ≤ λ_{h,n_d}. A principal eigenvalue with simple algebraic multiplicity is common. For example, let the resonant domain Ω be a cuboid in R³ whose edge lengths are all different. Specifically, when the resonant cavity Ω = (0, L) × (0, K) × (0, R) is filled with vacuum in (1.3), we know that λ = (mπ/L)² + (nπ/K)² + (lπ/R)², where m, n and l are nonnegative integers and at most one of the elements in {m, n, l} is zero (see [3]).

It is known that we have the following space decomposition properties (see [4]):

  H_0(curl; Ω) = ∇H_0^1(Ω) ⊕ M(λ_1) ⊕ M^⊥(λ_1),                                     (2.8)
  E_h(Ω_h) = ∇S_h(Ω_h) ⊕ M_h(λ_1) ⊕ M_h^⊥(λ_1),                                     (2.9)

where M(λ_1) = span{u_1}, M_h(λ_1) = span{u_{h,1}}, and the notation ⊕ denotes the b(·, ·)-orthogonal (also a(·, ·)-orthogonal) direct sum. For the convenience of the following notation, we denote by K_h : E_h(Ω_h) → ∇S_h(Ω_h), Q_h : E_h(Ω_h) → M_h(λ_1) and Q_h^⊥ : E_h(Ω_h) → M_h^⊥(λ_1) the b(·, ·)-orthogonal projections. Similarly, we may define the operators K_H, Q_H, Q_H^⊥ on the coarse level and let Z^(0) := I − K_H. It is obvious that E_h(Ω_h; ε_r) ⊄ M(λ_1) ⊕ M^⊥(λ_1). Denote by H_{ε_r} : H_0(curl; Ω) → M(λ_1) ⊕ M^⊥(λ_1) the b(·, ·)-orthogonal projection, which is usually called the Hodge operator, and let V^+ := H_{ε_r} E_h(Ω_h; ε_r). It is easy to see that the elements of V^+ are not finite element functions, but curl V^+ ⊂ RT(Ω_h), where RT(Ω_h) is the well-known Raviart-Thomas element space with vanishing normal trace along the boundary ∂Ω. We define an operator P_h : M(λ_1) ⊕ M^⊥(λ_1) → V^+ as follows:

  a(u − P_h u, v) = 0  ∀ v ∈ V^+.                                                    (2.10)

Furthermore, we may also define P_h : H_0(curl; Ω) → V^+ by a trivial extension. According to the definition of P_h, we may obtain the following lemma (see [23]).

Lemma 2.1
Let Ω be a convex bounded polyhedral domain. Then we have

  ‖u_h − P_h u_h‖_b ≤ C h ‖curl u_h‖_b  ∀ u_h ∈ E_h(Ω_h; ε_r).                       (2.11)

The following a priori error estimates (see Theorem 5.4 in [5], and Subsections 4.3 and 4.4 in [12]) are useful in our convergence analysis. The proof of these estimates essentially needs strengthened discrete compactness properties and standard approximation properties. Interested readers may refer to Theorem 19.6 in [4] or to [21] for details.

Theorem 2.2
Let Ω be a bounded convex polyhedral domain. There exists h_0 > 0 such that for any h (0 < h < h_0) and for any λ_i (1 ≤ i < +∞), we have

  |λ_i − λ_{h,i,j}| ≤ C h²  as λ_{h,i,j} → λ_i,  j = 1, 2, ..., m_i,

where C is independent of h but not of λ_i, and m_i is the algebraic multiplicity of λ_i. Moreover,

  θ(V_{λ_i}, M_h(λ_i)) ≤ C h,

where M_h(λ_i) = ⊕_{j=1}^{m_i} V_{λ_{h,i,j}}, dim(V_{λ_i}) = m_i, the notation V_λ denotes the eigenvector space corresponding to the eigenvalue λ, and θ(M, N) denotes the gap between the subspaces M ⊂ H_0(curl; Ω) and N ⊂ H_0(curl; Ω), i.e.,

  θ(M, N) = max{θ̃(M, N), θ̃(N, M)},
  θ̃(M, N) = sup_{‖u‖_b = 1, u ∈ M} inf_{v ∈ N} ‖u − v‖_b,  or  θ̃(M, N) = sup_{‖u‖_c = 1, u ∈ M} inf_{v ∈ N} ‖u − v‖_c,

where the norm ‖·‖_c := √(a(·, ·) + b(·, ·)) is equivalent to the norm ‖·‖_curl in H_0(curl; Ω).

Remark 2.2
In general Banach spaces, the gap between subspaces M and N does not define a distance, due to the fact that it does not satisfy the triangle inequality, but it does satisfy

  θ(M, N) ≤ d(M, N) ≤ 2 θ(M, N),

where d(M, N) denotes the Hausdorff distance between the subspaces M and N. Interested readers may refer to [17] for details on the distinction between the gap and the Hausdorff distance.

From Theorem 2.2 and Remark 2.2, we obtain the following corollary.
Corollary 2.3
Under the assumptions of Theorem 2.2, let (λ_h, u_h) be the first eigenpair of (2.6), with ‖u_h‖_b = 1 (‖u_h‖_c = 1). Then there exists h_0 > 0 such that when 0 < h < h_0, it holds that

  |λ − λ_h| ≤ C h²,                                                                  (2.12)

and there exists u ∈ M(λ_1) with ‖u‖_b = 1 (‖u‖_c = 1) such that

  ‖u − u_h‖_b ≤ C h  (‖u − u_h‖_c ≤ C h).                                            (2.13)

3 The two-level PHJD method
3.1 The domain decomposition and subspaces

In this subsection, we introduce the domain decomposition and the corresponding subspace decompositions. Let the coarse quasi-uniform triangulation be τ_H := {Ω_i}_{i=1}^N, where H := max{H_i | i = 1, 2, ..., N} and H_i := diam(Ω_i). The fine shape-regular and quasi-uniform triangulation τ_h is obtained by subdividing τ_H. We may construct the edge element spaces E_H(Ω_H) ⊂ E_h(Ω_h) on τ_H and τ_h, but it is well known that E_H(Ω_H; ε_r) ⊄ E_h(Ω_h; ε_r). To get the overlapping subdomains Ω'_i (1 ≤ i ≤ N), we enlarge the subdomains Ω_i by adding fine elements of Ω_h layer by layer such that ∂Ω'_i does not cut through any fine element. To measure the overlapping width between neighboring subdomains, we define δ_i := dist(∂Ω_i \ ∂Ω, ∂Ω'_i \ ∂Ω) and denote δ := min{δ_i | i = 1, 2, ..., N}. We also assume that H_i is the diameter of Ω'_i.

The local subspaces may be defined by

  V^(i) := { v_h ∈ E_h(Ω_h) | v_h(x) = 0, x ∈ Ω \ Ω'_i },                            (3.1)
  V_0^(i) := { v_h^(i) ∈ V^(i) | b(v_h^(i), ∇p_h) = 0 ∀ p_h ∈ S_h^(i) },             (3.2)

which are associated with the local fine mesh in Ω'_i (i = 1, 2, ..., N), where S_h^(i) := { p_h ∈ S_h(Ω_h) | p_h(x) = 0, x ∈ Ω \ Ω'_i }. We denote by Z^(i) : V^(i) → V_0^(i) the b(·, ·)-orthogonal projection and let K^(i) := I − Z^(i).

Assumption 1
The partition {Ω'_i}_{i=1}^N may be colored using at most N_0 colors, in such a way that subdomains with the same color are disjoint. Here N_0 is independent of the number of subdomains N.

According to Assumption 1, any x ∈ Ω belongs to at most N_0 subdomains in {Ω'_i}_{i=1}^N. Besides, we have a partition of unity: there exists a family of functions {θ_i}_{i=1}^N, which are continuous piecewise linear polynomials, satisfying the following properties (see [24]):

  supp(θ_i) ⊂ Ω̄'_i,  0 ≤ θ_i ≤ 1,  Σ_{i=1}^N θ_i(x) = 1 (x ∈ Ω̄),  ‖∇θ_i‖_{0,∞,Ω'_i} ≤ C/δ.   (3.3)

The strengthened Cauchy-Schwarz inequality holds over the local subspaces V^(i), i.e., for v^(i) ∈ V^(i) and v^(j) ∈ V^(j), 1 ≤ i, j ≤ N, there exists 0 ≤ η_ij ≤ 1 such that

  |b(v^(i), v^(j))| ≤ η_ij √b(v^(i), v^(i)) √b(v^(j), v^(j)).                         (3.4)

Let ρ(Λ) be the spectral radius of the matrix (η_ij)_{1≤i,j≤N}; then we have (see [24])

Lemma 3.1
Let Λ = (η_ij)_{1≤i,j≤N}. If Assumption 1 holds, then ρ(Λ) ≤ N_0. Moreover, for any v^(i) ∈ V^(i), v^(j) ∈ V^(j) (i, j = 1, 2, ..., N),

  Σ_{i,j=1}^N b(v^(i), v^(j)) ≤ N_0 Σ_{i=1}^N b(v^(i), v^(i)),  Σ_{i,j=1}^N a(v^(i), v^(j)) ≤ N_0 Σ_{i=1}^N a(v^(i), v^(i)).   (3.5)

3.2 The Jacobi-Davidson method

In this subsection, we focus on the Jacobi-Davidson method proposed by Sleijpen and van der Vorst (see [22]), which is an efficient algebraic method for solving eigenvalue problems. The idea of the Jacobi-Davidson method is to first solve the Jacobi correction equation in the orthogonal complement space of the current approximation of the eigenvector, and then update the new iterative solution in the expanding Davidson subspace. It suggests computing the correction variable e in the b(·, ·)-orthogonal complement space of the current approximation u^k and expanding the subspace in which the new eigenpair approximation (λ^{k+1}, u^{k+1}) may be found.

Let Q_{u^k} : E_h(Ω_h) → span{u^k} be the b(·, ·)-orthogonal projection. We present the detailed Jacobi-Davidson algorithm as follows.

Algorithm 1 The Jacobi-Davidson algorithm
1. Given the initial approximation (λ^0, u^0) of the first eigenpair, W^0 = span{u^0}, and a stopping tolerance tol.
2. For k = 1, 2, 3, ..., denote r^k = λ^k u^k − A_h u^k and solve the Jacobi correction equation:

  (I − Q_{u^k})(A_h − λ^k)(I − Q_{u^k}) e^{k+1} = r^k,  b(e^{k+1}, u^k) = 0.         (3.6)

3. Minimize the Rayleigh quotient in W^{k+1}:

  u^{k+1} = argmin_{0 ≠ v ∈ W^{k+1}} Rq(v),                                           (3.7)

where W^{k+1} = W^k + span{e^{k+1}}, λ^{k+1} = Rq(u^{k+1}), and Rq(v) = a(v, v)/b(v, v).
4. If ‖r^k‖_b < tol or |λ^{k+1} − λ^k| < tol, then return (λ^{k+1}, u^{k+1}); otherwise go to step 2.

For the above algorithm, it is easy to see that (3.6) is equivalent to the following formulation:

  b((A_h − λ^k) e^{k+1}, v) = b(r^k, v)  ∀ v ∈ (span{u^k})^⊥.                        (3.8)

The coefficient operator of the Jacobi correction equation is A_h − λ^k, which is indefinite and nearly singular as the approximation λ^k → λ_h. From the fast and parallel computing point of view, we may try to design some efficient preconditioners for A_h − λ^k. Assume B_h^k ≅ A_h − λ^k; then we may get the preconditioned correction equation:

  b(B_h^k e^{k+1}, v) = b(r^k, v)  ∀ v ∈ (span{u^k})^⊥.

The correction variable e^{k+1} may be obtained by

  e^{k+1} = (B_h^k)^{-1} r^k + β^k (B_h^k)^{-1} u^k,

where β^k = −b((B_h^k)^{-1} r^k, u^k) / b((B_h^k)^{-1} u^k, u^k) is the so-called orthogonalization parameter.
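Algorithm 1 can be illustrated on a small algebraic problem. The following is a minimal sketch of ours, assuming b(·, ·) is the Euclidean inner product, a dense symmetric matrix A stands in for A_h, and the correction equation is solved exactly by least squares (a practical code would solve it only approximately, with a preconditioner):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
R = rng.standard_normal((n, n))
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * (R + R.T)  # SPD stand-in for A_h
I = np.eye(n)

def rq(v):
    """Rayleigh quotient Rq(v) = a(v, v) / b(v, v), with b(.,.) Euclidean here."""
    return (v @ A @ v) / (v @ v)

u = rng.standard_normal(n)
u /= np.linalg.norm(u)
W = u[:, None]                       # Davidson search space W^k
lam = rq(u)
for _ in range(25):
    r = lam * u - A @ u              # residual r^k = lam^k u^k - A_h u^k
    if np.linalg.norm(r) < 1e-11:
        break
    # Jacobi correction equation (3.6), solved exactly on span{u}^perp:
    P = I - np.outer(u, u)           # the projection I - Q_{u^k}
    e, *_ = np.linalg.lstsq(P @ (A - lam * I) @ P, r, rcond=None)
    e = P @ e                        # enforce b(e^{k+1}, u^k) = 0
    # Expand the Davidson space and minimize the Rayleigh quotient over it:
    W, _ = np.linalg.qr(np.column_stack([W, e]))
    vals, vecs = np.linalg.eigh(W.T @ A @ W)
    lam, u = vals[0], W @ vecs[:, 0]

lam_min = np.linalg.eigvalsh(A)[0]
```

Minimizing the Rayleigh quotient over W^{k+1} reduces, after orthonormalizing the basis of W^{k+1}, to computing the smallest eigenpair of the small projected matrix WᵀAW, which is how step 3 is realized above.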
Next, we shall propose our two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) algorithm. In our case, the preconditioner (B_h^k)^{-1} may be chosen as

  (B_h^k)^{-1} := (B_0^k)^{-1} Q_H^E + Σ_{i=1}^N (B_i^k)^{-1} Q^(i),                 (3.9)

where Q_H^E : L²(Ω)³ → E_H(Ω_H) (the b(·, ·)-orthogonal projection onto the whole coarse space, to be distinguished from Q_H of Section 2) and Q^(i) : E_h(Ω_h) → V^(i) denote the b(·, ·)-orthogonal projections, B_0^k denotes A_H − λ^k, and B_i^k (i = 1, 2, ..., N) denotes (A_h − λ^k)|_{V^(i)}, i.e., for any v^(i), w^(i) ∈ V^(i),

  b(B_i^k v^(i), w^(i)) = b((A_h − λ^k)|_{V^(i)} v^(i), w^(i)) = b((A_h − λ^k) v^(i), w^(i)).   (3.10)

Combining the scaling argument, the Poincaré inequality and the inverse estimate, it is easy to see that B_i^k is symmetric with respect to b(·, ·), and we may obtain the following properties:

  λ_min(B_i^k|_{V_0^(i)}) = O(H^{-2}),  λ_max(B_i^k|_{V_0^(i)}) = O(h^{-2}),  i = 1, 2, ..., N.   (3.11)

Algorithm 2
The two-level PHJD algorithm
1. Compute the initial approximation (λ^0, u^0) of the first eigenpair: solve the coarse eigenvalue problem

  A_H u_H = λ_H u_H,  ‖u_H‖_b = 1.                                                   (3.12)

Find p_h^0 ∈ S_h(Ω_h) such that

  b(∇p_h^0, ∇q_h) = b(u_H, ∇q_h)  ∀ q_h ∈ S_h(Ω_h).

Let ū^0 = u_H − ∇p_h^0, u^0 = ū^0/‖ū^0‖_b, λ^0 = Rq(u^0), W^0 = span{u^0}.
2. Solve the preconditioned Jacobi correction equation, i.e., for k = 1, 2, 3, ..., denote r^k = λ^k u^k − A_h u^k and let

  b(B_h^k e^{k+1}, v) = b(r^k, v)  ∀ v ∈ (span{u^k})^⊥.                              (3.13)

Then define

  e^{k+1} := (B_h^k)^{-1} r^k + β^k (B_h^k)^{-1} u^k,                                (3.14)

where β^k = −b((B_h^k)^{-1} r^k, u^k) / b((B_h^k)^{-1} u^k, u^k).
3. Solve the Helmholtz projection system: find p_h^k ∈ S_h(Ω_h) such that

  b(∇p_h^k, ∇q_h) = b(e^{k+1}, ∇q_h)  ∀ q_h ∈ S_h(Ω_h).                              (3.15)

Let t^{k+1} = e^{k+1} − ∇p_h^k.
4. Minimize the Rayleigh quotient in W^{k+1}:

  u^{k+1} = argmin_{0 ≠ v ∈ W^{k+1}} Rq(v),                                           (3.16)

where W^{k+1} = W^k + span{t^{k+1}}, λ^{k+1} = Rq(u^{k+1}).
5. If ‖r^k‖_b < tol or |λ^{k+1} − λ^k| < tol, then return (λ^{k+1}, u^{k+1}); otherwise go to step 2.

Remark 3.1
We are interested in the simply connected domain Ω, where we may handle the kernel of the curl operator by solving a Poisson equation based on the de Rham complex. If a domain Ω equipped with a nontrivial topology is considered, we need to consider the harmonic form space, which is isomorphic to the de Rham cohomology group as well as to the quotient space ker(curl)/∇H_0^1(Ω), whose dimension is the corresponding Betti number (see, e.g., [2, 4]).

Remark 3.2
For the case λ_H ≤ λ_h, it is obvious that (B_0^k)^{-1} is well-defined. For the case λ_h < λ_H, the operator B_0^k may be singular in our iterative procedure. In this case, we only need to refine the initial grid in step 1 of Algorithm 2 to obtain λ^0, which ensures that (B_0^k)^{-1} is well-defined.

4 Convergence analysis
In this section, we focus on giving a convergence analysis of the two-level PHJD method. First, we present some useful lemmas needed in our main proof. The first lemma illustrates that as the coarse grid size H becomes sufficiently small, the distance in the ‖·‖_b-norm between the first approximation of the eigenvector u^0 and u_H becomes sufficiently small too, as does the distance between the first approximation of the eigenvalue λ^0 and λ_H. In fact, the proof of the first inequality in this lemma essentially needs the Hodge operator lemma (see [12]); a complete proof may be found in [30]. So next, we only prove the second and the third inequalities in Lemma 4.1.

Lemma 4.1
There exists a constant C such that

  ‖∇p_h^0‖_b = ‖u_H − ū^0‖_b ≤ C H,                                                  (4.1)
  ‖u_H − u^0‖_b ≤ C H,                                                               (4.2)
  0 ≤ λ^0 − λ_H ≤ C H²,                                                              (4.3)

where the notations p_h^0, u_H, λ_H, ū^0, u^0, λ^0 were defined in the above two-level PHJD algorithm.

Proof.
We first prove (4.2). Due to ū^0 = u_H − ∇p_h^0 and (4.1), we have

  ‖u_H − u^0‖_b ≤ ‖ū^0 − u^0‖_b + ‖u_H − ū^0‖_b = 1 − ‖ū^0‖_b + ‖u_H − ū^0‖_b
                ≤ 1 − (‖u_H‖_b − ‖u_H − ū^0‖_b) + ‖u_H − ū^0‖_b ≤ C H.
Next, we compute λ^0 − λ_H directly. Owing to ū^0 = u_H − ∇p_h^0 and u^0 = ū^0/‖ū^0‖_b, we have

  λ^0 − λ_H = a(u^0, u^0)/b(u^0, u^0) − a(u_H, u_H) = a(ū^0, ū^0)/b(ū^0, ū^0) − a(u_H, u_H)
            = (1/(1 − b(∇p_h^0, ∇p_h^0)) − 1) a(u_H, u_H).                           (4.4)

Because of (4.1), we obtain

  λ^0 − λ_H ≤ (C H²/(1 − C H²)) a(u_H, u_H) ≤ C H².                                  (4.5)

Moreover, from (4.4), we know that λ^0 − λ_H ≥ 0. ∎

We now illustrate that the norms ‖·‖_a, ‖·‖_{E_k}, ‖·‖_{E_h}, ‖·‖_E defined on M_h^⊥(λ_1) are all equivalent, i.e., the topologies induced by these norms coincide.

Lemma 4.2
The following bilinear forms define inner products on M_h^⊥(λ_1):

  (v_h, v_h)_{E_k} := a(v_h, v_h) − λ^k b(v_h, v_h)  ∀ v_h ∈ M_h^⊥(λ_1),
  (v_h, v_h)_{E_h} := a(v_h, v_h) − λ_h b(v_h, v_h)  ∀ v_h ∈ M_h^⊥(λ_1),
  (v_h, v_h)_E := a(v_h, v_h) − λ b(v_h, v_h)  ∀ v_h ∈ M_h^⊥(λ_1).

Moreover, the norms ‖·‖_{E_k}, ‖·‖_{E_h}, ‖·‖_E induced by the above inner products and the norm ‖·‖_a are equivalent on M_h^⊥(λ_1).

Proof. Combining (4.3) and the fact that the Rayleigh quotient Rq(v) is monotonically decreasing as the Davidson space W^k expands, for sufficiently small H we have

  λ_{h,2} − λ^k ≥ C,

where the constant C is independent of h and H. Furthermore, for any v_h (≠ 0) ∈ M_h^⊥(λ_1), we have

  (v_h, v_h)_{E_k} = a(v_h, v_h) − λ^k b(v_h, v_h) ≥ (λ_{h,2} − λ^k) b(v_h, v_h) ≥ C ‖v_h‖_b² > 0.   (4.6)

Hence, (·, ·)_{E_k} defines an inner product on M_h^⊥(λ_1). For any v_h ∈ M_h^⊥(λ_1), we have

  a(v_h, v_h) ≥ a(v_h, v_h) − λ^k b(v_h, v_h) = (v_h, v_h)_{E_k}.

Owing to (4.6), we know (v_h, v_h)_{E_k} ≥ (λ_{h,2} − λ^k) b(v_h, v_h). Hence,

  (v_h, v_h)_{E_k} = a(v_h, v_h) − λ^k b(v_h, v_h) ≥ a(v_h, v_h) − (λ^k/(λ_{h,2} − λ^k)) (v_h, v_h)_{E_k},

that is,

  a(v_h, v_h) ≤ (1 + λ^k/(λ_{h,2} − λ^k)) (v_h, v_h)_{E_k}.

It is easy to see that as h → 0+,

  1 + λ^k/(λ_{h,2} − λ^k) = λ_{h,2}/(λ_{h,2} − λ^k) → λ_2/(λ_2 − λ^k),

i.e., there exists a constant η_0, independent of h and H, such that 0 < λ_2/(λ_2 − λ^k) − η_0 ≤ λ_{h,2}/(λ_{h,2} − λ^k) ≤ λ_2/(λ_2 − λ^k) + η_0. The other conclusions in this lemma can be proved by a similar argument. ∎

Lemma 4.3 below, which may be proved by using the same techniques as in the case of nodal spaces, may be regarded as the "bridge" between the coarse space and the fine space (see [23, 24] and [6]).
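The equivalence constants in Lemma 4.2 can be observed numerically. The following is a small sketch of ours, assuming b(·, ·) is the Euclidean inner product and a random SPD matrix stands in for A_h; on the orthogonal complement of the first eigenvector, the ratio (v, v)_{E_k}/a(v, v) stays within the predicted interval [1 − λ^k/λ_2, 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
R = rng.standard_normal((n, n))
A = R @ R.T + np.eye(n)              # SPD stand-in for A_h; b(.,.) is Euclidean
vals, vecs = np.linalg.eigh(A)
lam1, lam2, u1 = vals[0], vals[1], vecs[:, 0]
shift = 0.5 * (lam1 + lam2)          # plays the role of lam^k in (lam_h, lam_{h,2})

ratios = []
for _ in range(200):
    v = rng.standard_normal(n)
    v -= (v @ u1) * u1               # restrict v to the orthogonal complement of u1
    e_k = v @ A @ v - shift * (v @ v)   # (v, v)_{E_k} = a(v, v) - lam^k b(v, v)
    ratios.append(e_k / (v @ A @ v))    # compare with the a-norm
lo, hi = min(ratios), max(ratios)
# Predicted equivalence constants on the complement of u1:
#   1 - shift/lam2  <=  (v, v)_{E_k} / a(v, v)  <  1.
```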
Lemma 4.3
Let τ_H be a shape-regular and quasi-uniform triangulation. Then

  ‖curl Q_H^E u‖_b ≤ C |u|_{1,Ω}  ∀ u ∈ H^1(Ω)³,                                     (4.7)
  ‖u − Q_H^E u‖_b ≤ C H |u|_{1,Ω}  ∀ u ∈ H^1(Ω)³,                                    (4.8)

where Q_H^E denotes the b(·, ·)-orthogonal projection onto E_H(Ω_H). Furthermore, we have

Lemma 4.4
For any v_h ∈ E_h(Ω_h), it holds that

  ‖K_h Q_H v_h‖_b ≤ C H ‖Q_H v_h‖_b,                                                 (4.9)
  ‖Q_h^⊥ Q_H v_h‖_b ≤ C H ‖Q_H v_h‖_b  and  ‖Q_h^⊥ Q_H v_h‖_a ≤ C H ‖Q_H v_h‖_a,    (4.10)
  ‖Q_H^⊥ Q_h v_h‖_b ≤ C H ‖Q_h v_h‖_b  and  ‖Q_H^⊥ Q_h v_h‖_a ≤ C H ‖Q_h v_h‖_a,    (4.11)

where C is independent of H and h.

Proof. For any v_H ∈ M_H(λ_1), let u_H = v_H/‖v_H‖_b. According to Theorem 2.2 and Corollary 2.3, we know that there exists a vector w ∈ M(λ_1) (‖w‖_b = 1) such that

  ‖u_H − w‖_b ≤ C H.

Multiplying both sides of the above inequality by ‖v_H‖_b, we have

  ‖v_H − ‖v_H‖_b w‖_b ≤ C H ‖v_H‖_b.

Define ξ := ‖v_H‖_b w ∈ M(λ_1); then ‖ξ‖_b = ‖v_H‖_b. Moreover,

  ‖v_H − ξ‖_b ≤ C H ‖ξ‖_b.                                                           (4.12)

For ξ ∈ M(λ_1), similarly, we may find a v_h ∈ M_h(λ_1) (‖v_h‖_b = ‖ξ‖_b) such that

  ‖ξ − v_h‖_b ≤ C h ‖ξ‖_b.                                                           (4.13)

Hence, combining (4.12) and (4.13) with the triangle inequality, we obtain

  ‖v_H − v_h‖_b ≤ C H ‖v_H‖_b.                                                       (4.14)

For Q_H v_h ∈ M_H(λ_1), there exists an η_h ∈ M_h(λ_1) such that ‖Q_H v_h − η_h‖_b ≤ C H ‖Q_H v_h‖_b. Then

  ‖K_h Q_H v_h‖_b² = b(K_h Q_H v_h, Q_H v_h − η_h) ≤ ‖K_h Q_H v_h‖_b ‖Q_H v_h − η_h‖_b ≤ C H ‖K_h Q_H v_h‖_b ‖Q_H v_h‖_b.

Finally,

  ‖K_h Q_H v_h‖_b ≤ C H ‖Q_H v_h‖_b.

Note that the norm ‖·‖_c is equivalent to the norm ‖·‖_a in the discrete divergence-free space. By Corollary 2.3, we may prove (4.10) and (4.11) by a similar argument. ∎

For the convenience of the following convergence analysis, we choose a special case to analyze the error reduction. Let u^k = argmin_{0 ≠ v ∈ W^k} Rq(v) and λ^k = Rq(u^k). Next, instead of minimizing Rq(v) over all of W^{k+1}, we choose a particular ǔ^{k+1} ∈ W^{k+1} (‖ǔ^{k+1}‖_b = 1) to analyze the error reduction, i.e.,

  ũ^{k+1} = u^k + α_0 t^{k+1},                                                       (4.15)

and set

  ǔ^{k+1} = ũ^{k+1}/‖ũ^{k+1}‖_b,                                                     (4.16)

where α_0 is an undetermined parameter depending on N_0. By the above analysis, we know that λ^{k+1} ≤ λ̌^{k+1}, where λ̌^{k+1} = Rq(ǔ^{k+1}).

Owing to the Helmholtz projection, we know that ũ^{k+1} ∈ E_h(Ω_h; ε_r), and we denote

  u_1^k := Q_h u^k,  e_1^k := (I − Q_h) u^k,  ẽ_1^{k+1} := (I − Q_h) ũ^{k+1}.        (4.17)

From (3.14) and (4.15), and since I − Q_h coincides with Q_h^⊥ on E_h(Ω_h; ε_r), it is easy to see that

  ẽ_1^{k+1} = (I − Q_h){u^k + α_0 t^{k+1}} = e_1^k + α_0 Q_h^⊥{e^{k+1} − ∇p_h^k}
            = e_1^k + α_0 Q_h^⊥ e^{k+1} = e_1^k + α_0 Q_h^⊥{(B_h^k)^{-1} r^k + β^k (B_h^k)^{-1} u^k}.
12e substitute the expression of the preconditioner (3.9) into the above formula, and then obtain e e k +12 = e k − αQ h { ( B kh ) − r k + β k ( B kh ) − u k } = e k − αQ h { ( B k ) − Q H r k + N X i =1 ( B ki ) − Q ( i ) r k + β k ( B k ) − Q H u k + β k N X i =1 ( B ki ) − Q ( i ) u k } = e k − αQ h { ( B k ) − ( K H + Z (0) ) Q H r k + N X i =1 ( B ki ) − ( K ( i ) + Z ( i ) ) Q ( i ) r k + β k ( B k ) − ( K H + Z (0) ) Q H u k + β k N X i =1 ( B ki ) − ( K ( i ) + Z ( i ) ) Q ( i ) u k } , where the last equality has used the partitions of the identity operator defined on E H (Ω H ) and V ( i ) . Forany v ∈ E h (Ω h ), ( B k ) − K H Q H v ∈ ∇ S H (Ω H ) ⊂ ∇ S h (Ω h ), ( B ki ) − K ( i ) Q ( i ) v ∈ ∇ S ( i ) h ⊂ ∇ S h (Ω h ), so wehave Q h ( B k ) − K H Q H v = , Q h ( B ki ) − K ( i ) Q ( i ) v = . Furthermore, by using the partitions of the identity operator defined on E H (Ω H ; ǫ r ), we obtain e e k +12 = e k − αQ h ( B k ) − Z (0) Q H r k − αQ h N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k − αβ k Q h ( B k ) − Z (0) Q H u k − αβ k Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k = e k − αQ h ( B k ) − ( Q H + Q H ) Z (0) Q H r k − αQ h N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k − αβ k Q h ( B k ) − ( Q H + Q H ) Z (0) Q H u k − αβ k Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k , which, together with the definition of residual vector r k and (4.17), yields e e k +12 = { e k − αQ h ( B k ) − Q H Z (0) Q H ( A h − λ k ) e k − αQ h N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) e k } + { αQ h ( B k ) − Q H Z (0) Q H ( A h − λ k ) u k − αβ k Q h ( B k ) − Q H Z (0) Q H u k } + { αQ h ( B k ) − Q H Z (0) Q H ( A h − λ k ) u k + αQ h N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) u k − αβ k Q h ( B k ) − Q H Z (0) Q H u k − αβ k Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k } := I + I + I , (4.18)here we define G k := I − αQ h ( B k ) − Q H Z (0) Q H ( A h − λ k ) − αQ h P Ni =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) and let I := G k e k . 
We call I the error principal term, G k the error principal operator, and I the almost counterbalanced term. Noting that a ( u k , u k ) = λ k b ( u k , u k ) and using (4.17), it is easy to prove that || e k || E k = λ k b ( u k , u k ) − a ( u k , u k ) = ( λ k − λ h ) b ( u k , u k ) , (4.19) which is useful in the following error estimates. Moreover, we note that | λ k − λ h | ≤ CH and | λ k − λ | ≤ CH . In fact, as the Rayleigh quotient Rq ( v ) is monotonically decreasing as the Davidson subspace W k expands, we have λ h ≤ λ k ≤ λ . For the case λ H ≤ λ h , by Corollary 2, | λ k − λ | ≤ | λ k − λ h | + | λ h − λ | ≤ | λ − λ H | + | λ h − λ | ≤ CH . (4.20) For the case λ H > λ h , by Corollary 2, | λ k − λ | ≤ | λ k − λ h | + | λ h − λ | ≤ | λ − λ h | + | λ h − λ | ≤ | λ − λ H | + | λ H − λ h | + | λ h − λ | ≤ CH . (4.21) In this subsection, we analyze the error principal term I . Theorem 4.5
For sufficiently small α , there exists a constant C such that || G k v h || E k ≤ (1 − C δ/H ) || v h || E k ∀ v h ∈ M ⊥ h ( λ ) , (4.22) where the constant C is independent of H, δ and h . First of all, we give two useful lemmas.
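The smallness condition on α in this theorem (and in the lemma below) rests on an elementary linear-algebra fact: if T is symmetric positive semidefinite, then I − αT stays symmetric positive definite exactly for α below 1/λ_max(T). A tiny numerical illustration with a generic random T (not the operators T^k, T^k_i of the proof):

```python
import numpy as np

def is_spd(G):
    """Symmetric positive definite test via the spectrum."""
    return bool(np.allclose(G, G.T) and np.min(np.linalg.eigvalsh(G)) > 0)

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5))
T = C @ C.T                                   # symmetric positive semidefinite
alpha_crit = 1.0 / np.linalg.eigvalsh(T)[-1]  # threshold 1 / lambda_max(T)
G_good = np.eye(5) - 0.5 * alpha_crit * T     # alpha below the threshold: SPD
G_bad = np.eye(5) - 2.0 * alpha_crit * T      # alpha above the threshold: indefinite
```

In the proof the role of T is played by the sum of the coarse and local solution operators, whose largest eigenvalue is bounded in terms of N, λ and H; hence the admissible range for α.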
Lemma 4.6
The error principal operator G k : M ⊥ h ( λ ) → M ⊥ h ( λ ) is symmetric in the sense of ( · , · ) E k .Furthermore, if α is sufficiently small, the operator G k : M ⊥ h ( λ ) → M ⊥ h ( λ ) is also positive definite. Proof.
It is easy to see that ( B k ) − and ( B ki ) − , ( i = 1 , 2 , ..., N ) are symmetric in the sense of b ( · , · ), which results in ( Q h ( B k ) − Q H Z (0) Q H ( A h − λ k ) v h , w h ) E k = ( v h , Q h ( B k ) − Q H Z (0) Q H ( A h − λ k ) w h ) E k , (4.23) and ( Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) v h , w h ) E k = ( v h , Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) w h ) E k , (4.24) where v h , w h ∈ M ⊥ h ( λ ). Combining (4.23), (4.24) with the definition of G k , we obtain ( G k v k , w k ) E k = ( v k , G k w k ) E k , (4.25) which means that G k is symmetric in the sense of ( · , · ) E k . To obtain the second conclusion in this lemma, we define an operator T k : M ⊥ h ( λ ) → M ⊥ H ( λ ) as follows: ( T k v h , v H ) E k = ( v h , v H ) E k ∀ v H ∈ M ⊥ H ( λ ) , (4.26) where v h ∈ M ⊥ h ( λ ). For sufficiently small H , we know that λ k < λ H . By the Lax-Milgram theorem in M ⊥ H ( λ ), we may see that the above definition is meaningful. Similarly, we may also define operator T ki : M ⊥ h ( λ ) → V ( i )0 ( i = 1 , 2 , ..., N ) as follows: ( T ki v h , v ( i )0 ) E k = ( v h , v ( i )0 ) E k ∀ v ( i )0 ∈ V ( i )0 , (4.27) where v h ∈ M ⊥ h ( λ ). Because the Poincaré inequality holds in local fine spaces V ( i )0 , i = 1 , 2 , ..., N , the above definitions are also meaningful. It is easy to check that T k = ( B k ) − Q H Z (0) Q H ( A h − λ k ). In fact, for any v h , v H , we have b ( B k T k v h , v H ) = b (( A H − λ k ) T k v h , v H ) = ( T k v h , v H ) E k = ( v h , v H ) E k = b (( A h − λ k ) v h , v H ) = b ( Q H Z (0) Q H ( A h − λ k ) v h , v H ) . Similarly, T ki = ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ). Hence, for any v h ∈ M ⊥ ( λ ), we have ( G k v h , v h ) E k = ( v h , v h ) E k − α ( Q h T k v h , v h ) E k − α ( Q h N X i =1 T ki v h , v h ) E k = || v h || E k − α ( T k v h , v h ) E k − α N X i =1 ( T ki v h , v h ) E k = || v h || E k − α || T k v h || E k − α N X i =1 || T ki v h || E k .
(4.28)For the second term of (4.28), owing to the definition of Q h and Cauchy-Schwarz inequality, we have || T k v h || E k = ( T k v h , v h ) E k = ( Q h T k v h , v h ) E k ≤ || Q h T k v h || E k || v h || E k . (4.29)Meanwhile, || Q h T k v h || E k = || T k v h || E k + ( λ k − λ h ) || Q h T k v h || b + λ k || K h T k v h || b ≤ || T k v h || E k + ( λ k − λ h ) || T k v h || b + λ k || T k v h || b ≤ { Cλ + CH }|| T k v h || E k . (4.30)Combining (4.29) and (4.30) together, we have || T k v h || E k ≤ (1 + Cλ + CH ) || v h || E k . (4.31)For the third term of (4.28), we have N X i =1 || T ki v h || E k = N X i =1 ( T ki v h , v h ) E k ≤ || Q h N X i =1 T ki v h || E k || v h || E k . (4.32)Owing to (2.9), Lemma 3 .
1, Lemma 4., and the Poincaré inequality, we get || Q h N X i =1 T ki v h || E k ≤ C || Q h N X i =1 T ki v h || E h ≤ C || N X i =1 T ki v h || a + Cλ || N X i =1 T ki v h || b = C N X i =1 N X l =1 a ( T ki v h , T kl v h ) + Cλ N X i =1 N X l =1 b ( T ki v h , T kl v h ) ≤ CN N X i =1 || T ki v h || a + CN λ N X i =1 || T ki v h || b ≤ CN (1 + λ H ) N X i =1 || T ki v h || E k . (4.33) Combining (4.32) and (4.33) together, we get N X i =1 || T ki v h || E k ≤ CN (1 + λ H ) || v h || E k , (4.34) which, together with (4.28) and (4.31), yields ( G k v h , v h ) E k = || v h || E k − α ( T k v h , v h ) E k − α N X i =1 || T ki v h || E k ≥ { 1 − α (1 + CN + Cλ + CH ) }|| v h || E k , where C = O (1) , λ = O (1) , N = O (1). If we take 0 < α < α 0 = 1 / (1 + CN + Cλ + CH ), then we may obtain the conclusion of this lemma. ✷ Lemma 4.7
For the error space M ⊥ h ( λ ) , it holds that M ⊥ h ( λ ) = Q h M ⊥ H ( λ ) + N X i =1 Q h V ( i )0 and there exist w ∈ Q h M ⊥ H ( λ ) and w ( i )0 ∈ V ( i )0 such that ( w , w ) E k + N X i =1 ( w ( i )0 , w ( i )0 ) E k ≤ CN (1 + H δ )( v h , v h ) E k . (4.35) Proof.
For any v h ∈ M ⊥ h ( λ ), we take w = Q h Q H Z (0) Q H P h v h and w ( i )0 = Z ( i ) r h θ i ( v h − w ), where r h is the standard edge element interpolation operator. Then w + N X i =1 Q h w ( i )0 = w + N X i =1 Q h Z ( i ) r h θ i ( v h − w )= w + N X i =1 Q h ( I − K ( i ) ) r h θ i ( v h − w ) , (4.36)which, together with the fact that for any v h ∈ E h (Ω h ) , K ( i ) v h ∈ ∇ S h (Ω h ), yields w + N X i =1 Q h w ( i )0 = w + N X i =1 Q h r h θ i ( v h − w )= w + Q h r h ( N X i =1 θ i )( v h − w ) = w + Q h r h ( v h − w ) = v h . Next, we prove (4.35). For the component in the coarse space, owing to Lemma 4 .
2, the orthogonality on E h (Ω h ) and E H (Ω H ) in the sense of a ( · , · ) and b ( · , · ), the definition of Z (0) and the Poincaré inequality, we get || w || E k ≤ C || Q h Q H Z (0) Q H P h v h || E h ≤ C {|| Q H Z (0) Q H P h v h || E h + λ h || K h Q H Z (0) Q H P h v h || b }≤ C || Z (0) Q H P h v h || a + C || Z (0) Q H P h v h || b + Cλ || K h Q H Z (0) Q H P h v h || b ≤ C || Z (0) Q H P h v h || a + C (1 + λ ) || Z (0) Q H P h v h || b ≤ C || Z (0) Q H P h v h || a ≤ C || curl Q H P h v h || b . (4.37) Using Lemma 4 .
3, the definition of P h and the embedding theorem H ( curl ; Ω) ∩ H ( div ; Ω; ǫ r ) ↪ H (Ω) , we get || curl Q H P h v h || b ≤ C | P h v h | ≤ C || curl P h v h || ≤ C || curl v h || ≤ C || v h || a ≤ C || v h || E k . (4.38) Then || w || E k ≤ C || v h || E k . (4.39) For the local fine component, we may use the properties of the partition of unity (3.3) and the property of the interpolation operator r h to prove that N X i =1 || Z ( i ) r h θ i ( v h − w ) || a ≤ C N X i =1 || curl r h θ i ( v h − w ) || b, Ω ′ i ≤ CN { δ −2 || v h − w || b + || curl ( v h − w ) || b } , (4.40) where the last inequality holds (see [23] or Lemma 10.9 in [24]). We first estimate the first term of (4.40). Combining the triangle inequality, (4.38), Lemma 2 . .
3, we have || v h − w || b = || v h − Q h Q H Z (0) Q H P h v h || b ≤ || v h − Q H Z (0) Q H P h v h || b ≤ C {|| v h − P h v h || b + || P h v h − Q H P h v h || b + || Q H P h v h − Z (0) Q H P h v h || b + || Z (0) Q H P h v h − Q H Z (0) Q H P h v h || b }≤ C { H || curl v h || b + || Q H P h v h − Z (0) Q H P h v h || b + || Z (0) Q H P h v h − Q H Z (0) Q H P h v h || b } . (4.41) On the one hand, it is easy to see that Q H P h v h − Z (0) Q H P h v h = K H Q H P h v h . So by using the orthogonality of v h ⊥ b ( · , · ) K H Q H P h v h , we know that the second term of (4.41) may be estimated as follows: || K H Q H P h v h || b = b ( K H Q H P h v h , Q H P h v h ) = b ( K H Q H P h v h , Q H P h v h − v h ) ≤ || K H Q H P h v h || b || Q H P h v h − v h || b . Then, combining Lemma 2 . .
3, we obtain || K H Q H P h v h || b ≤ || Q H P h v h − v h || b ≤ CH || curl v h || b , that is || Q H P h v h − Z (0) Q H P h v h || b ≤ CH || curl v h || b . (4.42)On the other hand, we know Z (0) Q H P h v h − Q H Z (0) Q H P h v h = Q H Z (0) Q H P h v h . Using the sameargument with the proof of Lemma 4 .
4, we know that there exists w h ∈ M h ( λ ) such that || Q H Z (0) Q H P h v h − w h || b ≤ CH || Q H Z (0) Q H P h v h || b . Hence, by the orthogonality of w h ⊥ b ( · , · ) v h , Cauchy-Schwarz inequality, Lemma 2 . .
3, we obtain || Q H Z (0) Q H P h v h || b = b ( Q H Z (0) Q H P h v h , Q H P h v h − v h ) + b ( Q H Z (0) Q H P h v h , v h ) = b ( Q H Z (0) Q H P h v h , Q H P h v h − v h ) + b ( Q H Z (0) Q H P h v h − w h , v h ) ≤ || Q H Z (0) Q H P h v h || b || Q H P h v h − v h || b + || Q H Z (0) Q H P h v h − w h || b || v h || b ≤ CH || Q H Z (0) Q H P h v h || b || curlv h || b + CH || Q H Z (0) Q H P h v h || b || curlv h || b . Then, by the definition of Q h , Z (0) , we get || Z (0) Q H P h v h − Q H Z (0) Q H P h v h || b ≤ || Q H Z (0) Q H P h v h || b ≤ CH || curlv h || b , (4.43) which, together with (4.41), (4.42), yields || v h − w || b ≤ C { H || curlv h || b + || K H Q H P h v h || b + || Q H Z (0) Q H P h v h || b }≤ CH || curlv h || b ≤ CH || v h || a . (4.44) For the second term of (4.40), by (4.39) and Lemma 4 .
2, we have || curl ( v h − w ) || b ≤ C {|| curlv h || b + || curlw || b }≤ C {|| curlv h || b + || w || a } ≤ C {|| curlv h || b + || w || E k } ≤ C || v h || a . (4.45)Combining (4.45) and (4.44), we know N X i =1 || Z ( i ) r h θ i ( v h − w ) || a ≤ CN (1 + H δ ) || v h || a . (4.46)Finally, by Lemma 4 .
2, (4.37) and (4.46), we may obtain the proof of (4.35). ✷ Proof of Theorem .
5: Because of Lemma 4 .
7, we know that for any v h ∈ M ⊥ h ( λ ), there exist w ∈ Q h M ⊥ H ( λ ) and w i ∈ V ( i )0 such that v h = w + N X i =1 Q h w i and N X i =0 || w i || E k ≤ CN (1 + H δ ) || v h || E k . (4.47)By (4.26) , (4.27), (4.47) and Cauchy-Schwarz inequality, we may obtain( v h , v h ) E k ≤ CN (1 + H δ )( Q h N X i =0 T ki v h , v h ) E k . (4.48)Moreover, ( G k v h , v h ) E k = ( v h , v h ) E k − α ( Q h N X i =0 T ki v h , v h ) E k ≤ (1 − C
/(1 + H/δ ))( v h , v h ) E k ≤ (1 − C δ/H )( v h , v h ) E k . Based on Lemma 4 .
6, we may define ( G k ) 1/2 : M ⊥ h ( λ ) → M ⊥ h ( λ ) and know that ( G k ) 1/2 is also a symmetric positive definite operator in ( · , · ) E k . Hence, we have ( G k v h , G k v h ) E k = ( G k ( G k ) 1/2 v h , ( G k ) 1/2 v h ) E k ≤ (1 − C δ/H )(( G k ) 1/2 v h , ( G k ) 1/2 v h ) E k = (1 − C δ/H )( G k v h , v h ) E k ≤ (1 − C δ/H ) 2 ( v h , v h ) E k . Then || G k v h || E k ≤ (1 − C δ/H ) || v h || E k , (4.49) which completes the proof of this theorem. ✷ Before we analyze the estimate of the almost counterbalanced term I , we first estimate the orthogonal parameter β k = − b (( B kh ) − r k , u k ) / b (( B kh ) − u k , u k ) . Lemma 4.8
For sufficiently small H , it holds that | β k | ≤ CH √( λ k − λ h ) , where C is independent of H, h . Proof. First, we decompose the numerator of | β k | . Due to the definition of r k , ( B kh ) − and the fact u k = u k − e k ∈ E h (Ω h ; ǫ r ), we get | b (( B kh ) − r k , u k ) | = | b (( B kh ) − ( λ k − A h ) u k , u k ) + b (( B kh ) − ( A h − λ k ) e k , u k ) | ≤ ( λ k − λ h ) | b (( B kh ) − u k , u k ) | + | b (( B kh ) − ( A h − λ k ) e k , u k ) | = ( λ k − λ h ) | b (( B kh ) − u k , u k ) | + | b (( B k ) − Q H Z (0) Q H ( A h − λ k ) e k , u k ) | + | b (( B k ) − Q H Z (0) Q H ( A h − λ k ) e k , u k ) | + | b ( N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) e k , u k ) | ≤ ( λ k − λ h ) | b (( B kh ) − u k , u k ) | + | b (( B k ) − Q H Z (0) Q H ( A h − λ k ) e k , u k ) | + || T k e k || b + || N X i =1 T ki e k || b := ( λ k − λ h )Γ + Γ + Γ + Γ , (4.50), where the last inequality holds owing to Cauchy-Schwarz inequality and || u k || b = 1. In fact, for k = 1, it is obvious that || u || b = 1 in algorithm 2. As u k is the minimum value point of Rayleigh quotient in W k ( k ≥
2) in algorithm 2, without loss of generality, let || u k || b = 1. Next, we estimate (4.50) one by one. For the first term Γ in (4.50), we have | b (( B kh ) − u k , u k ) | = | b (( B k ) − Q H Z (0) Q H u k , u k ) | + | b (( B k ) − Q H Z (0) Q H u k , u k ) | + | N X i =1 b (( B ki ) − Z ( i ) Q ( i ) u k , u k ) | ≤ {|| ( B k ) − Q H Z (0) Q H u k || b + || ( B k ) − Q H Z (0) Q H u k || b + N X i =1 || ( B ki ) − Z ( i ) Q ( i ) u k || b }|| u k || b . (4.51) By the definition of B k , (3.11), (4.11) and Assumption 1, we know that for sufficiently small H , | b (( B kh ) − u k , u k ) | ≤ (1/| λ k − λ H |) || Q H u k || b + CH || u k || b + CH N X i =1 || u k || b, Ω ′ i ≤ (C/| λ k − λ H |) || u k || b + CH || u k || b + CN H || u k || b ≤ C/| λ k − λ H | . (4.52) For the second term Γ in (4.50), by the definition of Q H , Q h , Lemma 4 . | b (( B k ) − Q H Z (0) Q H ( A h − λ k ) e k , u k ) | = (1/| λ k − λ H |) | b ( Q H ( A h − λ k ) e k , u k ) | = (1/| λ k − λ H |) | ( e k , Q h Q H u k ) E k | ≤ (C/| λ k − λ H |) || e k || E k || Q h Q H u k || a ≤ CH √( λ k − λ h ) / | λ k − λ H | . (4.53) For the third term Γ in (4.50), by the Poincaré inequality, (4.19) and (4.31), we obtain || T k e k || b ≤ C || T k e k || E k ≤ C || e k || E k = C √( λ k − λ h ) || u k || b ≤ C √( λ k − λ h ) . (4.54) For the fourth term Γ in (4.50), by Lemma 3 .
1, the Poincaré inequality, (4.19) and (4.34), we get || N X i =1 T ki e k || b ≤ ( N N X i =1 || T ki e k || b ) 1/2 ≤ CH ( N N X i =1 || T ki e k || E k ) 1/2 ≤ CH || e k || E k ≤ CH √( λ k − λ h ) . (4.55) Combining (4.50) and (4.52) ∼ (4.55) together, for sufficiently small H , we have | b (( B kh ) − r k , u k ) | ≤ ( λ k − λ h + CH √( λ k − λ h ) ) / | λ k − λ H | . (4.56) On the other hand, we analyze the denominator of | β k | : | b (( B kh ) − u k , u k ) | ≥ | b (( B k ) − Q H Q H u k , u k ) | − | b (( B k ) − Q H Q H u k , u k ) | − | b ( N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k , u k ) | := J − J − J . (4.57) Similarly, we estimate the terms in (4.57) one by one. Note that by Lemma 4 .
4, we have || Q H u k || b ≤ || Q H u k || b + || Q H e k || b ≤ CH || u k || b + C || e k || E k ≤ ( CH + C q λ k − λ h ) || u k || b ≤ CH, (4.58)and || K H u k || b ≤ || K H u k || b + || K H e k || b ≤ CH || u k || b + C || e k || E k ≤ CH. (4.59)Meanwhile, || Q H u k || b = || Q H u k || b − || Q H u k || b − || K H u k || b ≥ || u k || b − || u k − Q H u k || b − CH ≥ − CH , (4.60)where the last inequality holds due to the following estimate || u k − Q H u k || b ≤ || u k − Q H P h u k || b + || Q H P h u k − Q H u k || b ≤ C || u k − P h u k || b + || P h u k − Q H P h u k || b ≤ Ch || curlu k || b + CH | P h u k | ≤ Ch || curlu k || b + CH || curl P h u k || b ≤ CH || curlu k || b ≤ CH || u k || a = CH √ λ k || u k || b ≤ CH, where the similar argument in (4.38) is used. Hence, for the first two terms J , J of (4.57), combining(4.58) and (4.60), as long as sufficiently small H , we have J = | b (( B k ) − Q H u k , Q H u k ) | = 1 | λ k − λ H | || Q H u k || b ≥ C | λ k − λ H | , (4.61)and J ≤ | b (( B k ) − Q H u k , Q H u k ) |≤ λ H − λ k || Q H u k || b ≤ CH λ H − λ k ≤ CH . (4.62)For the third term J of (4.57), by (3.11) and Assumption 1, we know J ≤ CH N X i =1 b ( Z ( i ) Q ( i ) u k , Z ( i ) Q ( i ) u k ) ≤ CH N X i =1 || u k || b, Ω ′ i ≤ CH . (4.63)Combining (4.61) ∼ (4.63) together, we get | b (( B kh ) − u k , u k ) | ≥ C | λ k − λ H | , which, together with (4.56), proves this lemma. ✷ I . For convenience of analysis, we set ˆ v H := Q H Z (0) Q H ( A h − λ k ) u k , ˇ v H := − β k Q H Z (0) Q H u k . It is easy to see that αQ h ( B k ) − Q H Z (0) Q H ( A h − λ k ) u k − αβ k Q h ( B k ) − Q H Z (0) Q H u k = αQ h ( B k ) − (ˆ v H + ˇ v H ) . For s = 1 ,
2, we have Q hs t k +1 = Q hs e k +1 = Q hs ( B kh ) − r k + β k Q hs ( B kh ) − u k = Q hs ( B k ) − Q H Z (0) Q H r k + Q hs ( B k ) − Q H Z (0) Q H r k + Q hs N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k + β k Q hs ( B k ) − Q H Z (0) Q H u k + β k Q hs ( B k ) − Q H Z (0) Q H u k + β k Q hs N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k . (4.64)Denote f ( Q hs ) := || Q hs ( B k ) − Q H Z (0) Q H r k || b + || Q hs N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k || b + | β k ||| Q hs ( B k ) − Q H Z (0) Q H u k || b + | β k ||| Q hs N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || b . ( s = 1 ,
2) (4.65)Using Lemma 4 .
2, the Pythagorean theorem on E h (Ω h ) in the sense of || · || b and Lemma 4 .
4, we have || Q h ( B k ) − (ˆ v H + ˇ v H ) || E k ≤ C || Q h ( B k ) − (ˆ v H + ˇ v H ) || E h = Cλ h || K h ( B k ) − (ˆ v H + ˇ v H ) || b − ( λ h − λ H ) || ( B k ) − (ˆ v H + ˇ v H ) || b ≤ CH || ( B k ) − (ˆ v H + ˇ v H ) || b . (4.66) Moreover, || ( B k ) − (ˆ v H + ˇ v H ) || b = || K h ( B k ) − (ˆ v H + ˇ v H ) || b + || Q h ( B k ) − (ˆ v H + ˇ v H ) || b + || Q h ( B k ) − (ˆ v H + ˇ v H ) || b ≤ CH || ( B k ) − (ˆ v H + ˇ v H ) || b + || Q h ( B k ) − (ˆ v H + ˇ v H ) || b + CH || ( B k ) − (ˆ v H + ˇ v H ) || b . Then || ( B k ) − (ˆ v H + ˇ v H ) || b ≤ C || Q h ( B k ) − (ˆ v H + ˇ v H ) || b . By (4.66), we may obtain || Q h ( B k ) − (ˆ v H + ˇ v H ) || E k ≤ CH || Q h ( B k ) − (ˆ v H + ˇ v H ) || b . Furthermore, combining (4.64) and (4.65), we have || Q h ( B k ) − (ˆ v H + ˇ v H ) || E k ≤ CH || Q h ( B k ) − (ˆ v H + ˇ v H ) || b ≤ CH || Q h e k +1 − Q h ( B k ) − Q H Z (0) Q H r k − Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k − β k Q h ( B k ) − Q H Z (0) Q H u k − β k Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || b ≤ CH {|| Q h e k +1 || b + f ( Q h ) } = CH {|| Q h t k +1 || b + f ( Q h ) } . (4.67) Using the orthogonal property of Jacobi-Davidson method and Helmholtz orthogonal decomposition, we get || Q h t k +1 || b = || Q h e k +1 || b = | b ( u h , e k +1 ) | = | b ( u h − u k , e k +1 ) | = | b ( u h − u k , e k +1 − ∇ p kh ) | ≤ || u h − u k || b || t k +1 || b .
By (4.17), (4.19) and the Poincaré inequality, for sufficiently small H , we have || u h − u k || b ≤ || u h − u k + e k || b ≤ || u h − u k || b + || e k || b = || u h || b − || u k || b + || e k || b = || u k || b − || u k || b + || e k || b ≤ || e k || b ≤ C || e k || E k ≤ C √( λ k − λ h ) . (4.68) Then || Q h t k +1 || b ≤ C √( λ k − λ h ) || t k +1 || b . It is known that || t k +1 || b ≤ || Q h t k +1 || b + || Q h t k +1 || b , so we have || t k +1 || b ≤ C || Q h t k +1 || b . Furthermore, || Q h t k +1 || b ≤ C √( λ k − λ h ) || Q h t k +1 || b . (4.69) Combining (4.64), (4.65), (4.67) and (4.69), we obtain || Q h ( B k ) − (ˆ v H + ˇ v H ) || E k ≤ CH {|| Q h t k +1 || b + f ( Q h ) } ≤ CHf ( Q h ) + CH √( λ k − λ h ) {|| Q h ( B k ) − (ˆ v H + ˇ v k ) || b + f ( Q h ) } ≤ CHf ( Q h ) + CH √( λ k − λ h ) {|| Q h ( B k ) − (ˆ v H + ˇ v k ) || E k + f ( Q h ) } . Then || Q h ( B k ) − (ˆ v H + ˇ v H ) || E k ≤ CHf ( Q h ) + CH √( λ k − λ h ) f ( Q h ) . (4.70) Next, we estimate the terms f ( Q h ) and f ( Q h ) in (4.65). For the first term in (4.65), by the Poincaré inequality, (4.19) and (4.31), we get || ( B k ) − Q H Z (0) Q H r k || b ≤ || ( B k ) − Q H Z (0) Q H ( A h − λ k ) e k || b + ( λ k − λ h ) || ( B k ) − Q H Z (0) Q H u k || b ≤ || T k e k || b + C ( λ k − λ h ) || Q H u k || b ≤ || T k e k || E k + CH ( λ k − λ h ) || u k || b ≤ C || e k || E k . (4.71) For the second term in (4.65), by Lemma 3 .
1, the Poincar´ e inequality, (4.19) and (4.34), we have || N X i =1 ( B ki ) − Z ( i ) Q ( i ) r k || b ≤ || N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) e k || b + ( λ k − λ h ) || N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || b ≤ || N X i =1 T ki e k || b + CH ( λ k − λ h ) || N X i =1 Z ( i ) Q ( i ) u k || b ≤ CH || e k || E k . (4.72)For the third term in (4.65), by (4.17), Lemma 4 . .
8, we obtain | β k | || ( B k ) − Q H Z (0) Q H u k || b ≤ CH √( λ k − λ h ) {|| ( B k ) − Q H Z (0) Q H e k || b + || ( B k ) − Q H Z (0) Q H u k || b }≤ CH √( λ k − λ h ) {|| e k || b + || Q H u k || b } ≤ CH || e k || E k . (4.73) For the fourth term in (4.65), by (3.11) and Lemma 4 .
8, we know | β k | || N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || b ≤ CH √( λ k − λ h ) · CH N X i =1 || Z ( i ) Q ( i ) u k || b ≤ CH √( λ k − λ h ) || u k || b ≤ CH || e k || E k . (4.74) Finally, we may finish the estimate of the almost counterbalanced term I by combining Lemma 4 . ∼ (4.74), i.e., || Q h ( B k ) − (ˆ v H + ˇ v k ) || E k ≤ CHf ( Q h ) + CH √( λ k − λ h ) f ( Q h ) ≤ CH || e k || E k . (4.75) In this subsection, we first estimate the term I in (4.18), and then we shall give our main result of this paper. For convenience, we denote S k := Q h ( B k ) − Q H Z (0) Q H . So Q h ( B k ) − Q H Z (0) Q H ( A h − λ k ) u k = S k ( A h − λ k ) u k . For any v h ∈ M h ( λ ), we have || S k v h || E k = b (( B k ) − Q H Z (0) Q H v h , ( A h − λ k ) S k v h ) = b ( Q H v h , T k S k v h ) ≤ || Q H v h || b || T k S k v h || b . (4.76) Combining Lemma 4 .
4, (4.31) and S k v h ∈ M ⊥ h ( λ ), we obtain || S k v h || E k ≤ CH || v h || b || S k v h || E k . Hence, || S k v h || E k ≤ CH || v h || b . Moreover, we may obtain the following estimate || S k ( A h − λ k ) u k || E k = ( λ k − λ h ) || S k u k || E k ≤ CH ( λ k − λ h ) || u k || b ≤ CH || e k || E k , that is || Q h ( B k ) − Q H Z (0) Q H ( A h − λ k ) u k || E k ≤ CH || e k || E k . (4.77) For the term β k Q h ( B k ) − Q H Z (0) Q H u k in I , by a similar argument to (4.76), it is easy to see that || Q h ( B k ) − Q H Z (0) Q H u k || E k ≤ C || Q H u k || b . Hence, by Lemma 4 .
4, Lemma 4 . | β k | || Q h ( B k ) − Q H Z (0) Q H u k || E k ≤ C | β k | || Q H u k || b ≤ C | β k |{|| Q H u k || b + || Q H e k || b }≤ C | β k |{ H || u k || b + || e k || b } ≤ CH √( λ k − λ h ) (1 + H / √( λ k − λ h ) ) || e k || E k ≤ CH || e k || E k . (4.78) For the terms Q h P Ni =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) u k and β k Q h P Ni =1 ( B ki ) − Z ( i ) Q ( i ) u k in I , we denote F k := P Ni =1 ( B ki ) − Z ( i ) Q ( i ) . For any v h ∈ E h (Ω h ; ǫ r ), combining the Poincaré inequality and (4.34), we have || Q h F k v h || E k = N X l =1 b (( B kl ) − Z ( l ) Q ( l ) v h , ( A h − λ k ) Q h F k v h ) = N X l =1 b ( Q ( l ) v h , T kl Q h F k v h ) ≤ ( N X l =1 || Q ( l ) v h || b, Ω ′ l ) ( N X l =1 || T kl Q h F k v h || b ) ≤ C √ N H || v h || b ( N X l =1 || T kl Q h F k v h || E k ) ≤ CH || v h || b || Q h F k v h || E k . Then || Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) v h || E k ≤ CH || v h || b ∀ v h ∈ E h (Ω h ; ǫ r ) . (4.79) Specifically, taking v h = u k , we obtain || Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) ( A h − λ k ) u k || E k ≤ ( λ k − λ h ) || Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || E k ≤ CH ( λ k − λ h ) || u k || b ≤ CH || e k || E k , (4.80) and similarly taking v h = u k , v h = e k , we then have | β k | || Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || E k ≤ | β k |{|| Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) u k || E k + || Q h N X i =1 ( B ki ) − Z ( i ) Q ( i ) e k || E k }≤ | β k |{ CH || u k || b + CH || e k || b } ≤ CH √( λ k − λ h ) (1 + 1 / √( λ k − λ h ) ) || e k || E k ≤ CH || e k || E k . (4.81) Combining Theorem 4 .
5, Lemma 4 .
8, (4.75), (4.77), (4.78), (4.80) and (4.81), we may obtain the main convergence result of this paper.
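Before stating the theorem, the iteration whose contraction the analysis has been tracking can be summarized schematically: a preconditioned correction t^{k+1} = (B^k_h)^{-1} r^k + β^k (B^k_h)^{-1} u^k, with β^k = −b((B^k_h)^{-1} r^k, u^k)/b((B^k_h)^{-1} u^k, u^k) making t^{k+1} b-orthogonal to u^k, followed by Rayleigh quotient minimization over the enlarged Davidson space. The sketch below is a simplified stand-in, not the authors' algorithm: it uses plain dense matrices, a generic Binv in place of the two-level preconditioner, and omits the Helmholtz projection.

```python
import numpy as np

def davidson_step(A, B, Binv, V):
    """One schematic Jacobi-Davidson-type step for the pencil A x = lam B x.

    (i)   Rayleigh-Ritz: minimize the Rayleigh quotient over span(V);
    (ii)  residual r = (A - lam B) u and preconditioned correction
          t = Binv r + beta Binv u, with beta chosen so that b(t, u) = 0;
    (iii) b-orthonormalize t against V and append it to the basis.
    """
    Ap, Bp = V.T @ A @ V, V.T @ B @ V
    Lc = np.linalg.cholesky(Bp)              # reduce to a standard problem
    Li = np.linalg.inv(Lc)
    lams, Z = np.linalg.eigh(Li @ Ap @ Li.T)
    u = V @ (Li.T @ Z[:, 0])                 # minimizing Ritz vector
    u /= np.sqrt(u @ B @ u)                  # ||u||_b = 1
    lam = lams[0]
    r = (A - lam * B) @ u                    # eigen-residual
    zr, zu = Binv @ r, Binv @ u
    beta = -(zr @ B @ u) / (zu @ B @ u)      # orthogonalization parameter
    t = zr + beta * zu                       # b(t, u) = 0 by construction
    for v in V.T:                            # keep the enlarged basis
        t = t - (t @ B @ v) / (v @ B @ v) * v   # b-orthogonal and
    t = t / np.sqrt(t @ B @ t)                  # well conditioned
    return lam, u, np.column_stack([V, t])
```

Repeating the step produces a non-increasing sequence of Ritz values bounded below by the smallest eigenvalue of the pencil; the theorem below quantifies the per-step reduction obtained with the actual two-level preconditioner.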
Theorem 4.9
For the Algorithm 2, the discrete principal eigenvalue of the Maxwell eigenvalue problem satisfies || e 2 k +1 || E ≤ c ( H )(1 − C δ/H ) || e 2 k || E , (4.82) and λ k +1 − λ h ≤ c ( H )(1 − C δ/H ) 2 ( λ k − λ h ) , (4.83) where c ( H ) → 1 as H → 0 , and C is independent of h, H, δ . Proof.
By (4.20), (4.21) and Lemma 4 .
2, we have || ẽ 2 k +1 || E = || ẽ 2 k +1 || E k + ( λ k − λ ) || ẽ 2 k +1 || b ≤ { (1 − C δ/H ) + CH }|| e 2 k || E k ≤ c ( H )(1 − C δ/H ) || e 2 k || E , c ( H ) →
1, as H →
0. Combining the orthogonal property of Jacobi-Davidson method and Helmholtz orthogonal decomposition, we have b ( u k , t k +1 ) = 0. Due to (4.15) and (4.16), we have || ũ k +1 || b = || u k || b + α || t k +1 || b ≥ 1 . Then || ě 2 k +1 || E ≤ || ẽ 2 k +1 || E ≤ c ( H )(1 − C δ/H ) || e 2 k || E . By Lemma 4 .
2, we may obtain || ě 2 k +1 || E h ≤ c ( H )(1 − C δ/H ) || e 2 k || E h . Moreover, ˇ λ k +1 − λ h ≤ c ( H )(1 − C δ/H ) 2 ( λ k − λ h ) . We know that ˇ u k +1 is a special vector in W k +1 but is in general not the minimizer of the Rayleigh quotient. Hence, λ k +1 − λ h ≤ ˇ λ k +1 − λ h ≤ c ( H )(1 − C δ/H ) 2 ( λ k − λ h ) . Then || e 2 k +1 || E h ≤ c ( H )(1 − C δ/H ) || e 2 k || E h . Moreover, || e 2 k +1 || E ≤ c ( H )(1 − C δ/H ) || e 2 k || E , which completes the proof of the theorem. ✷ In this section, we shall present several numerical experiments in two and three dimensional cases to support our theoretical findings. We do the computation in double precision but only display four digits after the decimal in tables except the λ it. column, in order to show the convergent process. In 2D cases, the stopping tolerance is set as | λ k +1 − λ k | < − . In 3D cases, the stopping tolerance is set as || r k || b < − . In this subsection, we shall present some numerical results for the 2D Maxwell eigenvalue problem in rectangle domain, square domain and L-shaped domain.
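Two bookkeeping details of these experiments can be made precise: the stopping tests compare successive Rayleigh quotients (2D) or the residual norm (3D) against a tolerance, and the Con.ord. column is the observed order estimated from eigenvalue errors on two successive uniform refinements, err(h)/err(h/2) ≈ 2^p. A small helper sketch (the function names are ours; the paper's tolerance values are not reproduced here):

```python
import math

def convergence_order(err_coarse, err_fine, ratio=2.0):
    """Observed order p from errors on meshes h and h/ratio:
    err(h) ~ C h^p  =>  p = log(err_coarse/err_fine) / log(ratio)."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

def eig_converged(lam_new, lam_old, tol):
    """2D-style stopping test on successive Rayleigh quotients."""
    return abs(lam_new - lam_old) < tol
```

For the lowest-order edge elements one expects p ≈ 2 for the eigenvalue error on convex domains, which is what the Con.ord. columns track.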
Example 5.1
We consider the Maxwell eigenvalue problem (1.3) on the two dimensional domain
Ω = (0 , π ) × (0 , π ) with ǫ r = µ r = 1 and use the lowest triangle edge element to compute the principal eigenvalue which is λ = 0 . with algebraic multiplicity m = 1 . First, we choose an initial uniform partition τ H in Ω with the number of subdomains N = 64 , coarse grid size H = √ π . We refine uniformly the grid layer by layer and fix the ratio δ/H = . Next, we test the optimality and scalability of our PHJD algorithm.

Table 1: The number of subdomains N = 64, the ratio δ/H = . Columns: h , d.o.f., it., | λ k +1 − λ k | , || r k || b , λ it. , Con.ord.

Table 2: N = 64, 256, 1024.
N      d.o.f.     it.
64     6288384    7
256    6288384    6
1024   6288384    6

The notation h means the fine grid size. d.o.f. means the degree of freedom. it. means iterative steps in our algorithm. Con.ord. means the convergence order of the principal eigenvalue. It is shown in Table 1 that the convergence rate does not deteriorate when h →
0. Meanwhile, the PHJD algorithm works very well when h << H , which coincides with our theory. In order to verify the scalability of our PHJD method, we choose the number of domains N = 64 , 256 , 1024 and d.o.f. = 6288384. The decreasing iterative steps in Table 2 illustrate that our method based on domain decomposition is scalable, which coincides with our theory. Actually, the near optimality and scalability of our method hold not only for the simple eigenvalue case but also for the multiple eigenvalue case. Example 5.2
We consider the Maxwell eigenvalue problem (1.3) in (0 , π ) with ǫ r = 1 , µ r = 1 and use the lowest triangle edge element to compute the principal eigenvalue which is λ = 1 with algebraic multiplicity m = 2 . First, we choose an initial uniform partition τ H in Ω with the number of subdomains N = 8 , coarse grid size H = √ π . We refine uniformly the grid layer by layer and fix the ratio δ/H = . Next, we also test the optimality and scalability of our PHJD algorithm.

Table 3: The number of subdomains N = 8, the ratio δ/H = . Columns: h , d.o.f., it., | λ k +1 − λ k | , || r k || b , λ it. , Con.ord.; first row: √ π , 736, 7, 2.1972e-09, 2.0136e-04, 0.998065902158390, -.

Table 4: N = 8 , , , 512. Columns: N , d.o.f., it. It is shown in Table 3 that the iterative steps keep stable when h →
0. Meanwhile, our PHJD algorithm works very well when h << H . Similarly, we choose the number of domains N = 8 , , ,
512 to count the iterative steps for d.o.f. = 12578816. Table 4 shows that the iterative steps keep stable when the number of domains N increases, which illustrates that our PHJD method is scalable for the multiple principal eigenvalue case. Example 5.3
We consider the Maxwell eigenvalue problem in ( − , \ [0 , × ( − , with ǫ r = µ r = 1 and use the lowest triangle edge element to compute the principal eigenvalue which is . with algebraic multiplicity m = 1 (see [8, 30]). First, we choose an initial uniform partition τ H in Ω with the number of subdomains N = 96 , coarse grid size H = √ . We refine uniformly the grid layer by layer and fix the ratio δ/H = . Next, we test the optimality and scalability of our PHJD algorithm.

Table 5: The number of subdomains N = 96, the ratio δ/H = . Columns: h , d.o.f., it., | λ k +1 − λ k | , λ it. , Con.ord.

Table 6: N = 96, 384, 1536.
N      d.o.f.     it.
96     9433088    7
384    9433088    7
1536   9433088    7

We note that the first Maxwell eigenvector in L-shaped domain has a strong unbounded singularity at the re-entrant corner but our PHJD method still works very well. It is shown in Table 5 that the iterative steps it. keep stable when h →
0, which illustrates that the convergence rate of our algorithm is independent of the fine grid size h . The fact that iterative steps keep stable with the number of subdomains increasing in Table 6 illustrates that our algorithm is scalable. Although we only give the theoretical analysis for convex domain, our PHJD method works well and may be extended to more general Lipschitz domain. In this subsection, we shall present some numerical results for the 3D Maxwell eigenvalue problem in cuboid domain and cube domain. Example 5.4
We consider the Maxwell eigenvalue problem in (0 , π ) × (0 , π ) × (0 , . π ) with ǫ r = µ r = 1 and use the lowest cuboid edge element to compute the principal eigenvalue which is λ = 0 . with algebraic multiplicity m = 1 . First, we choose an initial uniform partition τ H in Ω with the number of subdomains N = 128 , coarse grid size H = . π . We refine uniformly the grid layer by layer and fix the ratio δ/H = . Next, we test the optimality and scalability of our PHJD algorithm.

Table 7: The number of subdomains N = 128, the ratio δ/H = . Columns: h , d.o.f., it., || r k || b , λ it. , Con.ord.

Table 8: N = 128, 1024.
N      d.o.f.    it.
128    1532160   8
1024   1532160   7

It is shown in Table 7 that the iterative steps keep stable when h →
0, which illustrates that the convergence rate of our algorithm is independent of the fine grid size h . The fact that the iterative steps decrease in Table 8 illustrates that our algorithm is scalable. These coincide with our theory in this paper. Actually, the near optimality and scalability of our method hold not only for the simple eigenvalue case but also for the multiple eigenvalue case in 3D. Example 5.5
We consider the Maxwell eigenvalue problem in (0 , π ) with ǫ r = µ r = 1 and use the lowest cuboid edge element to compute the principal eigenvalue which is λ = 2 with algebraic multiplicity m = 3 . First, we choose an initial uniform partition τ H in Ω with the number of subdomains N = 64 , coarse grid size H = π . We refine uniformly the grid layer by layer and fix the ratio δ/H = . Next, we also test the optimality and scalability of our PHJD algorithm.

Table 9: The number of subdomains N = 64, the ratio δ/H = . Columns: h , d.o.f., it., || r k || b , λ it. , Con.ord.

Table 10: N = 64, 512.
N     d.o.f.    it.
64    6193536   7
512   6193536   6

Although we only give the theoretical analysis for the first simple eigenvalue case, our algorithm still works well and keeps good scalability for the first multiple eigenvalue case. The iterative steps keep stable in Table 9 when h →
0, which illustrates that the convergence rate is independent of h . It is shown in Table 10 that the iterative steps decrease, which illustrates that the PHJD method is scalable. Conclusions
In this paper, based on the domain decomposition method, we propose a new and robust two-level PHJD method for solving the Maxwell eigenvalue problem. The two-level PHJD method has good scalability and is asymptotically optimal without any assumption on the relation between the coarse mesh size H and the fine mesh size h . Numerical results confirm our theoretical findings. References [1] Cherif Amrouche, Christine Bernardi, Monique Dauge, and Vivette Girault. Vector potentials in three-dimensional non-smooth domains.
Mathematical Methods in the Applied Sciences , 21(9):823–864, 1998. [2] Douglas N Arnold, Richard S Falk, and Ragnar Winther. Finite element exterior calculus, homological techniques, and applications.
Acta Numerica , 15:1–155, 2006. [3] Constantine A Balanis.
Advanced Engineering Electromagnetics 2nd ed . John Wiley & Sons, 2012.[4] Daniele Boffi. Finite element approximation of eigenvalue problems.
Acta Numerica , 19:1–120, 2010. [5] Daniele Boffi, Paolo Fernandes, Lucia Gastaldi, and Ilaria Perugia. Computational models of electromagnetic resonators: analysis of edge element approximation.
SIAM journal on numerical analysis , 36(4):1264–1290,1999.[6] James H Bramble and Jinchao Xu. Some estimates for a weighted L projection. Mathematics of Computa-tion , 56(194):463–476, 1991.[7] Annalisa Buffa, Patrick Ciarlet, and Erell Jamelot. Solving electromagnetic eigenvalue problems in polyhedraldomains with nodal finite elements.
Numerische Mathematik , 113(4):497–518, 2009.[8] Annalisa Buffa, Paul Houston, and Ilaria Perugia. Discontinuous Galerkin computation of the Maxwelleigenvalues on simplicial meshes.
Journal of Computational & Applied Mathematics , 204(2):317–333, 2007.[9] Junqing Chen, Yifeng Xu, and Jun Zou. An adaptive inverse iteration for Maxwell eigenvalue problem basedon edge elements.
Journal of Computational Physics , 229(7):2649–2658, 2010.[10] Lawrence C Evans.
Partial Differential Equations: Second Edition , volume 19. American MethematicalSociety, 2010.[11] Kikuchi Fumio. Mixed and penalty formulations for finite element analysis of an eigenvalue problem inelectromagnetism.
Computer Methods in Applied Mechanics and Engineering , 64(1-3):509–521, 1987.[12] Ralf Hiptmair. Finite elements in computational electromagnetism.
Acta Numerica , 11:237–339, 2002.[13] Ralf Hiptmair and Klaus Neymeyr. Multilevel method for mixed eigenproblems.
SIAM Journal on ScientificComputing , 23(6):2141–2164, 2002.[14] Xiaozhe Hu and Xiaoliang Cheng. Acceleration of a two-grid method for eigenvalue problems.
Mathematicsof Computation , 80(275):1287–1301, 2011.[15] Wei Jiang, Na Liu, Yifa Tang, and Qing Huo Liu. Mixed finite element method for 2d vector Maxwell’seigenvalue problem in anisotropic media.
Progress In Electromagnetics Research , 148:159–170, 2014.[16] Wei Jiang, Na Liu, Yongqing Yue, and Qing Huo Liu. Mixed finite-element method for resonant cavityproblem with complex geometric topology and anisotropic lossless media.
IEEE Transactions on Magnetics ,52(2):1–8, 2015.[17] Tosio Kato.
Perturbation Theory for Linear Operators , volume 132. Springer Science & Business Media,2013.[18] Jie Liu, Wei Jiang, Fubiao Lin, Na Liu, and Qing Huo Liu. A two-grid vector discretization scheme for theresonant cavity problem with anisotropic media.
IEEE Transactions on Microwave Theory and Techniques ,65(8):2719–2725, 2017.
19] Jean-Claude N´ed´elec. Mixed finite elements in R . Numerische Mathematik , 35(3):315–341, 1980.[20] Jean-Claude N´ed´elec. A new family of mixed finite elements in R . Numerische Mathematik , 50(1):57–81,1986.[21] John E Osborn. Spectral approximation for compact operators.
Mathematics of Computation , 29(131):712–725, 1975.[22] Gerard LG Sleijpen and Henk A Van der Vorst. A Jacobi–Davidson iteration method for linear eigenvalueproblems.
SIAM Review , 42(2):267–293, 2000.[23] Andrea Toselli. Overlapping schwarz methods for Maxwell’s equations in three dimensions.
NumerischeMathematik , 86(4):733–752, 2000.[24] Andrea Toselli and Olof Widlund.
Domain Decomposition Methods-Algorithms and Theory , volume 34.Springer Science & Business Media, 2006.[25] Wei Wang and Xuejun Xu. A two-level overlapping hybrid domain decomposition method for eigenvalueproblems.
SIAM Journal on Numerical Analysis , 56(1):344–368, 2018.[26] Wei Wang and Xuejun Xu. On the convergence of a two-level preconditioned Jacobi–Davidson method foreigenvalue problems.
Mathematics of Computation , 88(319):2295–2324, 2019.[27] Jinchao Xu and Aihui Zhou. A two-grid discretization scheme for eigenvalue problems.
Mathematics ofComputation , 70(233):17–25, 2001.[28] Yidu Yang and Hai Bi. Two-grid finite element discretization schemes based on shifted-inverse power methodfor elliptic eigenvalue problems.
SIAM Journal on Numerical Analysis , 49(4):1602–1624, 2011.[29] Tao Zhao, Feng-Nan Hwang, and Xiao-Chuan Cai. A domain decomposition based Jacobi-Davidson algorithmfor quantum dot simulation. In
Domain Decomposition Methods in Science and Engineering XXII , pages415–423. Springer, 2016.[30] J Zhou, X Hu, L Zhong, S Shu, and L Chen. Two-grid methods for Maxwell eigenvalue problems.
SIAMJournal on Numerical Analysis , 52(4):2027–2047, 2014., 52(4):2027–2047, 2014.