Stable Generalized Finite Element Method (SGFEM)
arXiv [math.NA]
I. Babuška∗   U. Banerjee†

Abstract
The Generalized Finite Element Method (GFEM) is a Partition of Unity Method (PUM), where the trial space of the standard Finite Element Method (FEM) is augmented with non-polynomial shape functions with compact support. These shape functions, which are also known as the enrichments, mimic the local behavior of the unknown solution of the underlying variational problem. GFEM has been successfully used to solve a variety of problems with complicated features and microstructure. However, the stiffness matrix of GFEM is badly conditioned (much worse compared to the standard FEM) and there could be a severe loss of accuracy in the computed solution of the associated linear system. In this paper, we address this issue and propose a modification of the GFEM, referred to as the Stable GFEM (SGFEM). We show that the conditioning of the stiffness matrix of SGFEM is not worse than that of the standard FEM. Moreover, SGFEM is very robust with respect to the parameters of the enrichments. We show these features of SGFEM on several examples.
Keywords: Generalized finite element method (GFEM); partition of unity (PU); extended finite element method (XFEM); approximation; condition number; loss of accuracy; linear system; validation and verification
During the last decade, the Generalized Finite Element Method (GFEM) and the eXtended Finite Element Method (XFEM) – two approaches based on the Partition of Unity Method (PUM) – were developed independently and have been widely used to solve various types of problems. Only recently was it clearly recognized that these two methods are the same, and they are now referred to as XFEM/GFEM ([23]). Hence we believe that it is interesting to briefly describe the early development of these methods.

∗ ICES, University of Texas at Austin, Austin, TX.
† Department of Mathematics, 215 Carnegie, Syracuse University, Syracuse, NY 13244. E-mail address: [email protected]. This research was partially supported by IMA, University of Minnesota, Minneapolis, MN and J. T. Oden Faculty Fellowship, ICES, University of Texas at Austin, Austin, TX.
Brief early history:
The idea of adding non-polynomial basis functions into the trial space of the FEM started in the 1970's ([10, 12, 21]). However, these basis functions had global support, and the associated stiffness and mass matrices lost their local structure. Three Special FEMs, which used non-polynomial shape functions, were proposed in [5] (1994, sub: Mar. 1992) to solve second order problems with rough coefficients. The PU-based construction was further developed in the h-p Cloud method in [17] (1996, sub: Jun. 1995), [18] (1996, sub: Apr. 1996), where the shape functions were the products of a PU and polynomials. The goal of this method is to obtain h-p FEM-like approximation without using a FE mesh, in the spirit of meshless methods. The use of the “customized function” (which mimicked the exact solution) for crack problems was also suggested in [38] (1997, sub: Dec. 1996), under this framework. Later, the hat-functions were also used as the PU in the h-p Cloud method in [39] (1998, sub: Dec. 1996). A lot of work has been done in the area of GFEM and XFEM since the early work described above. We will comment on some of the recent developments near the end of this section.
GFEM and the problem with conditioning:
PUM is a flexible framework to design Galerkin methods that accurately approximate solutions of variational problems. The framework involves (a) accurately approximating the solution, locally, using functions in a local approximation space, and (b) gluing the local approximations, using a PU, to construct a globally conforming approximate solution. The GFEM, which is a PUM with FE hat functions serving as the PU, retains the important flexibility of choosing the local approximation space. The efficiency of GFEM lies in the fact that it requires only modifying an existing FE code to incorporate special shape functions with compact support. The GFEM, with an appropriate choice of special shape functions, leads to excellent convergence properties. However, the use of hat-functions as the PU may result in almost linearly dependent shape functions in GFEM, and the stiffness matrix could be severely ill-conditioned; the ill-conditioning could be much worse than that of the stiffness matrix of the FEM. This results in a loss of accuracy in the solution of the linear system associated with the GFEM. In fact, the shape functions could be linearly dependent, yielding a singular stiffness matrix.

Various ad-hoc approaches have been developed in the literature to address this issue. For example, the stiffness matrix of GFEM was perturbed by $\epsilon I$ ($\epsilon$ small) in [47, 49], and an iterative method was used to solve the perturbed linear system. Preconditioning of the stiffness matrix, based on domain decomposition, has recently been suggested in [34] to address the conditioning problem. In [25, 43], a flat-hat PU (modified FE hat functions with flattened top) was used in the PUM instead of hat-functions.
The use of a flat-hat PU certainly avoids the problem of loss of accuracy in the linear system, but it requires developing a code from scratch. Naturally, it is pertinent to ask if GFEM could be modified so that it retains the excellent convergence properties of the GFEM, while the loss of accuracy in the computed solution of the linear system of the modified GFEM is of the same order as that of the standard FEM. In this paper, we will show that the SGFEM has both of these features. We have chosen a 1-d problem to present the idea of the SGFEM primarily for the clarity of exposition and not to obscure the analysis with details that are not directly related to the SGFEM. However, the ideas and the associated analysis (including the notational machinery) could be easily generalized to higher dimensions and will be reported in a future publication.

Indicator of the loss of accuracy in the computed solution of the linear system:
Consider the linear system $Ax = b$, associated with FEM, GFEM, or SGFEM, where $A$ is an $n \times n$ sparse symmetric positive definite matrix. Let $\hat{x}$ be the computed solution of the linear system, obtained from an elimination method encoded in a linear algebra package, where the computations follow the IEEE standard for floating point arithmetic (with guard digits). Set $\eta := \|x - \hat{x}\| / \|x\|$ – the relative error that measures the loss of accuracy in the computed solution. $\eta$ depends on the round-off, but in general, it also depends on the elimination algorithm and its implementation in the package, the compiler, the processor, and the computing platform with single or multiple processors. $\eta$ is related to the relative error in the approximate solution due to round-off.

We seek an indicator that reliably indicates the loss of accuracy in the computed solution, characterized by $\eta$, and is practically independent of the other factors mentioned above. Let $H = DAD$, where $D$ is a diagonal matrix with $D_{ii} = A_{ii}^{-1/2}$. Define the scaled condition number $\mathcal{K}(A)$ of $A$ by $\mathcal{K}(A) := \kappa_2(H)$, where $\kappa_2(H) = \|H\|_2 \|H^{-1}\|_2$ is the condition number of $H$ based on the $\|\cdot\|_2$ vector norm. We hypothesize that $\mathcal{K}(A)$ is the indicator, which we formalize as follows:

Hypothesis H: $\eta \approx C n^{\beta}\, \mathcal{K}(A)\, \epsilon$, with $\beta$ small, $\qquad (1.1)$

where $\epsilon$ is the machine precision. We will elaborate on the precise meaning of the hypothesis and validate it in the Appendix, borrowing ideas from the area of Validation and Verification. The indicator $\mathcal{K}(A)$ will be used to compare various GFEMs with respect to the loss of accuracy in the computed solution, which will allow us to choose a preferable GFEM. In particular, we will show in this paper that $\mathcal{K}(A_{SGFEM}) \leq \mathcal{K}(A_{GFEM})$, where $A_{SGFEM}$ and $A_{GFEM}$ are the stiffness matrices of SGFEM and GFEM, respectively, and therefore the SGFEM is preferable over the GFEM.
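Hypothesis H is easy to experiment with. The following sketch (ours, not from the paper; the Hilbert matrix and the factor $100$ are illustrative choices) computes the indicator $\mathcal{K}(A) = \kappa_2(DAD)$ and checks that the observed loss of accuracy $\eta$ is within the order suggested by (1.1):

```python
import numpy as np

def scaled_condition_number(A):
    """K(A) = kappa_2(D A D) with D_ii = A_ii**(-1/2), the Hypothesis H indicator."""
    d = 1.0 / np.sqrt(np.diag(A))
    return np.linalg.cond(d[:, None] * A * d[None, :], 2)

# An ill-conditioned SPD test matrix (the Hilbert matrix, an illustrative choice)
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x = np.ones(n)                 # known exact solution
b = A @ x
x_hat = np.linalg.solve(A, b)  # computed solution of the linear system
eta = np.linalg.norm(x - x_hat) / np.linalg.norm(x)

K = scaled_condition_number(A)
eps = np.finfo(float).eps
# Hypothesis H: eta is of the order K(A)*eps, up to a modest factor
assert eta <= 100 * n * K * eps
```

The same routine applied to the GFEM and SGFEM stiffness matrices is what allows the comparison $\mathcal{K}(A_{SGFEM}) \leq \mathcal{K}(A_{GFEM})$ to be observed in practice.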
Some current work in GFEM/XFEM:
These methods have been used in a variety of applications. For example, XFEM has been used recently to address two-phase fluid flow problems ([19]), mechanical behavior of nano-structures ([20]), and heterogeneous material with random interfaces ([36]); GFEM has been used to address heat transfer problems with sharp thermal gradients ([40]), grain boundaries in polycrystals ([44]), and electromagnetic problems ([30]). Special shape functions for problems with locally periodic coefficients are constructed in [31] that yield exponential order of convergence. Also, local problems to compute the shape functions for problems with rough coefficients are constructed in [3], and it has been proved that GFEM yields exponential order of convergence. For an extensive collection of references on XFEM/GFEM, we refer to [23].
Organization of the paper:
In Section 2, we give the model problem in 1-d. We intentionally chose the problem in 1-d so that we could communicate the main ideas of SGFEM, when applied to this problem, in a fairly general fashion, without the notational and other technical complexity associated with higher dimensions. We describe the PUM and GFEM, together with the approximation results, in Section 3, and show the conditioning problem in GFEM on an example. In Section 4, we first describe the SGFEM in a simpler setting, show that SGFEM retains the convergence properties of GFEM, and establish that the scaled condition numbers of the stiffness matrices of the SGFEM and FEM are of the same order. We chose the simpler setting primarily to communicate the main idea of the method and the associated analysis. We then describe the SGFEM and provide the analysis in full generality. We note that some of the ideas presented here could have been presented in a simpler fashion by using 1-d arguments. However, we did not take this approach; the notations and framework of the analysis, developed in this section, could be easily generalized to higher dimensions. In Section 5, we apply SGFEM to three specific examples, namely, interface problems, problems with singular solutions, and problems with discontinuous solutions. In the Appendix, we discuss the validation of Hypothesis H and present many validation experiments. We note that the Appendix is a very important part of this paper.
Let $\Omega = (0, 1)$ and, for an integer $k \geq 0$, denote the standard Sobolev spaces by $H^k(\Omega)$ with the norm $\|\cdot\|_{H^k(\Omega)}$ and seminorm $|\cdot|_{H^k(\Omega)}$; for $k = 0$, $H^0(\Omega) = L^2(\Omega)$. We will also use the spaces $H^k(A)$, where $A$ is a sub-domain of $\Omega$. Consider the variational problem

$$u \in H^1(\Omega), \quad B(u, v) = F(v), \quad \forall v \in H^1(\Omega), \qquad (2.1)$$

where

$$B(u, v) := \int_\Omega a u' v'\, dx \quad \text{and} \quad F(v) := \int_\Omega f v\, dx, \qquad (2.2)$$

such that $F(1) = \int_\Omega f\, dx = 0$. We assume that the function $a(x)$ is bounded, i.e., there are constants $\alpha, \beta$ such that

$$0 < \alpha \leq a(x) \leq \beta, \quad \forall x \in \Omega. \qquad (2.3)$$

We note that $a(x)$ could be smooth, but it also could be rough. It is well known that the problem (2.1) has a unique solution, up to an additive constant. We define the Energy norm $\|v\|_{E(A)}$ of $v \in H^1(A)$, where $A$ is a sub-domain of $\Omega$, by

$$\|v\|_{E(A)} := B_A(v, v)^{1/2}, \quad \text{where} \quad B_A(w, z) := \int_A a w' z'\, dx.$$
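To make the weak formulation concrete, the following sketch (ours, not from the paper; the coefficient $a(x) = 1$ and the manufactured solution are illustrative assumptions) verifies $B(u, v) = F(v)$ numerically for $u(x) = \cos(\pi x)$, which satisfies the compatibility condition $F(1) = 0$ and the homogeneous Neumann conditions with $f(x) = \pi^2 \cos(\pi x)$:

```python
import numpy as np

# Manufactured check of the weak form (2.1)-(2.2) with a(x) = 1 (illustrative):
# u(x) = cos(pi*x) solves -u'' = f with f(x) = pi^2*cos(pi*x), u'(0) = u'(1) = 0.
t, w = np.polynomial.legendre.leggauss(50)   # Gauss-Legendre quadrature on [0, 1]
x, w = 0.5 * (t + 1), 0.5 * w

du = lambda x: -np.pi * np.sin(np.pi * x)    # u'
f  = lambda x: np.pi**2 * np.cos(np.pi * x)  # right-hand side

assert abs(np.sum(w * f(x))) < 1e-12         # compatibility: F(1) = int f dx = 0

# B(u, v) = F(v) for a few test functions v in H^1(0, 1)
for v, dv in [(lambda s: s**2, lambda s: 2 * s),
              (lambda s: np.sin(np.pi * s), lambda s: np.pi * np.cos(np.pi * s))]:
    B = np.sum(w * du(x) * dv(x))            # int a u' v' dx, a = 1
    F = np.sum(w * f(x) * v(x))              # int f v dx
    assert abs(B - F) < 1e-10
```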
It is well known that the solution $u$ of (2.1) is also the solution of a boundary value problem (BVP), posed in the strong form as

$$-[a(x) u']' = f, \quad au'(0) = au'(1) = 0, \qquad (2.4)$$

provided $au'$ is differentiable.

Let $\mathcal{S}$ be a finite dimensional subspace of $H^1(\Omega)$. The Ritz-Galerkin method to approximate the solution $u$ of (2.1) is given by

$$u_h \in \mathcal{S}, \quad B(u_h, v) = F(v), \quad \forall v \in \mathcal{S}. \qquad (3.1)$$

The solution $u_h$ is unique up to an additive constant. We can obtain a unique solution by imposing a natural constraint on $u_h$, namely, $u_h(0) = 0$. A Partition of Unity Method (PUM) is a Ritz-Galerkin method where $\mathcal{S}$ is constructed employing (a) a Partition of Unity (PU) and (b)
Local approximating spaces. A Generalized Finite Element Method (GFEM) is a PUM with a special PU. We first briefly describe the PUM.

For a parameter $h > 0$, let $I^h := \{i \in \mathbb{Z} : 0 \leq i \leq N\}$, where $N = N(h)$ is an integer. For $i \in I^h$, let $\omega_i^h := (a_i^h, b_i^h) \subset \Omega$ be such that (i) $\Omega = \cup_{i \in I^h}\, \omega_i^h$, and (ii) any $x \in \Omega$ belongs to at most $\kappa$ of the open intervals $\omega_i^h$; $\kappa$ is independent of $i, h$. The open interval $\omega_i^h$ is called a patch. Subordinate to the cover $\{\omega_i^h\}_{i \in I^h}$, let $\{N_i^h\}_{i \in I^h}$ be a $C^0$ PU satisfying

$$\sum_{i \in I^h} N_i^h(x) = 1, \quad \forall x \in \Omega, \qquad \|N_i^h\|_{L^\infty(\Omega)} \leq C_1, \qquad \mathrm{diam}\{\omega_i^h\}\, \|(N_i^h)'\|_{L^\infty(\Omega)} \leq C_2,$$

where $C_1, C_2 > 0$ (for details, see [6, 33, 4]). On each patch $\omega_i^h$, $i \in I^h$, we consider an $(n_i + 1)$-dimensional space $V_i^h$ – the local approximating space, namely

$$V_i^h = \mathrm{span}\{\varphi_j^{[i],h}\}_{j=0}^{n_i}, \quad \varphi_j^{[i],h} \in H^1(\omega_i^h) \ \text{and} \ \varphi_0^{[i],h} = 1, \qquad (3.2)$$

where the $n_i$'s are non-negative integers. The functions $\varphi_j^{[i],h}$, $j >$
$0$, are carefully chosen such that the functions in $V_i^h$ mimic the exact solution $u$, locally in $\omega_i^h$. We will further comment on this issue later. In the rest of the paper, we will write $I, \omega_i, N_i, V_i, \varphi_j^{[i]}$ in place of $I^h, \omega_i^h, N_i^h, V_i^h, \varphi_j^{[i],h}$, respectively, with an understanding that they depend on the parameter $h$. The PUM is precisely (3.1), with the finite dimensional space $\mathcal{S}$ given by

$$\mathcal{S} = \sum_{i \in I} N_i V_i = \mathrm{span}\{N_i \varphi_j^{[i]},\ 0 \leq j \leq n_i,\ i \in I\} := \mathcal{S}_1 + \mathcal{S}_2, \qquad (3.3)$$

where

$$\mathcal{S}_1 = \Big\{\zeta : \zeta = \sum_{i \in I} y_0^{[i]} \varphi_0^{[i]} N_i\Big\}, \qquad \mathcal{S}_2 = \Big\{\zeta : \zeta = \sum_{i \in I} \sum_{j=1}^{n_i} y_j^{[i]} \varphi_j^{[i]} N_i\Big\}, \qquad (3.4)$$

and $y_0^{[i]}, y_j^{[i]} \in \mathbb{R}$. The functions $\varphi_j^{[i]}$, $j \geq 1$, and the associated spaces $V_i$ are sometimes referred to as enrichments and enrichment spaces, respectively, in the literature. We will refer to $\mathcal{S}_1$ as the basic part of $\mathcal{S}$; $\mathcal{S}_2$ will be referred to as the enrichment part of $\mathcal{S}$. Moreover, we will refer to the Galerkin method with $\mathcal{S} = \mathcal{S}_1$ as the basic part of PUM. Thus every PUM has a basic part based only on the PU. We now present the main approximation result of PUM in the Energy norm (see [6, 33, 4]).

Theorem 3.1
Suppose $u \in H^1(\Omega)$. Suppose for $i \in I$, there exist $\xi_i \in V_i$ and $C_0 > 0$, independent of $i$, such that

$$\|u - \xi_i\|_{L^2(\omega_i)} \leq C_0\, \mathrm{diam}(\omega_i)\, \|u - \xi_i\|_{E(\omega_i)} \quad \text{and} \quad \|u - \xi_i\|_{E(\omega_i)} \leq \epsilon_i.$$

Then there exists $v \in \mathcal{S}$ such that

$$\|u - v\|_{E(\Omega)} \leq C \Big[\sum_{i \in I} \epsilon_i^2\Big]^{1/2}, \qquad (3.5)$$

where the positive constant $C$ depends on $\kappa$, $C_0$, $C_1$, $C_2$, and $\beta/\alpha$.

It is immediate from Theorem 3.1 that the PUM solution $u_h \in \mathcal{S} = \mathcal{S}_1 + \mathcal{S}_2$ of (3.1) satisfies

$$\|u - u_h\|_{E(\Omega)} \leq \inf_{v \in \mathcal{S}} \|u - v\|_{E(\Omega)} \leq C \Big[\sum_{i \in I} \epsilon_i^2\Big]^{1/2}, \qquad (3.6)$$

where $u$ is the solution of (2.1). It is clear from the above that the global accuracy of the PUM solution $u_h$ depends on how accurately the solution $u$ of (2.1) can be approximated by the functions in $V_i$, locally on the patches $\omega_i$. We mention that in higher dimensions, the patches $\omega_i$ are subdomains, which can have quite general shape. Theorem 3.1, as presented above, is also true in higher dimensions.

We now describe the GFEM. Recall that the choice of PU in PUM is arbitrary. The GFEM is a PUM, where (a) the patches $\omega_i$ are “FE stars” relative to a finite element (FE) triangulation of $\Omega$, and (b) the piecewise linear FE hat-functions $N_i$, associated with the vertices of the FE triangulation, serve as the PU. Let $N = 1/h$ and, recalling $I = \{i : 0 \leq i \leq N\}$, let $T^h := \{x_i = ih : i \in I\}$. Let $\{\tau_k\}_{k \in I \setminus \{0\}}$ be the uniform mesh on $\Omega$, where $\tau_k := [x_{k-1}, x_k]$ are the elements; $\overset{o}{\tau}_k := (x_{k-1}, x_k)$ is the interior of $\tau_k$. The points $x_i$ are called the vertices of the mesh. The patches $\{\omega_i\}_{i \in I}$ are defined as $\omega_i := (x_{i-1}, x_{i+1})$, $i = 1, 2, \cdots, N-1$; also $\omega_0 := (x_0, x_1)$ and $\omega_N := (x_{N-1}, x_N)$. For $i \in I$, let $N_i$ be the standard hat-function associated with the vertex $x_i$; the support of $N_i$ is $\overline{\omega}_i$. Note that $\omega_0 = \tau_1$, $\omega_N = \tau_N$, and $\omega_i = \tau_i \cup \tau_{i+1}$ for $i = 1, 2, \ldots, N-1$; $\omega_i$ is the FE star associated with the vertex $x_i$. Clearly, $\{N_i\}_{i \in I}$ form a PU subordinate to the patches $\{\omega_i\}_{i \in I}$. The associated GFEM is the Galerkin method (3.1) with $\mathcal{S} = \mathcal{S}_1 + \mathcal{S}_2$ (see (3.3)). Clearly $\mathcal{S}_1$ is the standard FE space of piecewise linear functions, and consequently, the basic part of the GFEM is the standard finite element method (FEM). Thus the trial space $\mathcal{S}$ of the GFEM is precisely the standard FE trial space, augmented with the space $\mathcal{S}_2$. Thus GFEM could be implemented by incorporating enrichments into an existing FE code. The name GFEM was first used in [47, 48] to highlight exactly this point. The description of GFEM is exactly the same in higher dimensions; it is based on the standard FE triangulation of $\Omega$.

Remark 3.2
We note that we have considered a uniform mesh only for the simplicity of exposition; in fact, the ideas and theory in this paper could also be presented for locally quasi-uniform meshes, i.e., when $C_1 \leq |\tau_{k+1}| / |\tau_k| \leq C_2$ for $k = 1, \cdots, N-1$, with $C_1, C_2 > 0$ independent of $k$. $\Box$

The accuracy of the GFEM (also PUM) solution depends on the choice of $V_i$, as mentioned before (see Theorem 3.1). The functions $\varphi_j^{[i]} \in V_i$ (see (3.2), (3.3)) are carefully chosen, based on the available information on the unknown solution $u$ of (2.1), to mimic the unknown solution locally in $\omega_i$. Examples of $V_i$ suitable for specific applications are available in the literature (e.g., see [4]). We briefly mention some of the examples that we will consider in this paper:

• If the unknown solution $u$ is smooth in $\omega_i$, then the $\varphi_j^{[i]}$'s are usually chosen to be polynomials in $P^j(\omega_i)$, and the associated spaces $V_i$ are spaces of polynomials of degree $n_i$. We note that $n_i$ could be different for different $i$, based on the available information on $u$.

• When $a(x)$ is a piecewise smooth and discontinuous function (interface problems), the $\varphi_j^{[i]}$'s are chosen such that $a\,[\varphi_j^{[i]}]'$ is smooth on $\omega_i$. Clearly, the $\varphi_j^{[i]}$ are continuous piecewise smooth functions with derivatives that are discontinuous at the discontinuities of $a(x)$.

• If the unknown solution $u$ is singular, then the $\varphi_j^{[i]}$ should be chosen as singular functions, mimicking the singularity of $u$.

• If $u$ is discontinuous at $x = c$ in the domain, then the $\varphi_j^{[i]}$'s are chosen to be discontinuous functions on those $\omega_i$'s that contain $x = c$. We note however that problems with discontinuous solutions cannot be cast as (2.1); we will address these problems in Section 5.3 of this paper.

Remark 3.3
GFEM provides a flexible framework to obtain various Galerkin methods. Many classical methods could be cast in this framework. For example, with $n_i = 0$ for $i \in I$ in (3.2), GFEM (with $\mathcal{S} = \mathcal{S}_1$) yields the classical FEM. Moreover, let $s(x)$ be a function (could be singular) defined on $\Omega$. Consider $n_i = 1$ and $\varphi_1^{[i]} = s(x)|_{\omega_i}$ in the definition of $\mathcal{S}_2$ in (3.4). Then GFEM, with $y_1^{[i]} = b$ (a constant) for $i \in I$, yields the classical “singular FEM” (see [46, 21, 11, 12, 10, 41]), where the standard finite element trial space is augmented by the global function $s(x)$. Moreover, we note that $n_i$ in (3.2) could be different for different values of $i$. In fact, one may use $n_i > 0$ only on certain patches $\omega_i$, as needed for accuracy, based on the available information; for the other patches, $n_i = 0$, i.e., $V_i = \mathrm{span}\{1\}$. This idea was also discussed and implemented in the original GFEM papers [47, 48]. $\Box$

The stiffness matrix $A$ of the GFEM is positive semi-definite. Even when the GFEM solution is naturally constrained with $u_h(0) = 0$, i.e., when $A$ is positive definite, the condition number $\kappa(A)$ can be extremely large, specifically larger than the condition number of the standard FE stiffness matrix, which is $O(h^{-2})$ for second order problems. However, according to Hypothesis H, the scaled condition number $\mathcal{K}(A) = \kappa_2(H)$ is a reliable indicator of the loss of accuracy in the computed solution of $Ax = b$. Recall $H = DAD$, where $D$ is a diagonal matrix with $D_{ii} = A_{ii}^{-1/2}$. We now present an example where $\mathcal{K}(A)$ is much larger than the scaled condition number of the standard FE stiffness matrix, which is again $O(h^{-2})$.

Suppose $a(x) = 1$ in (2.2) and let $u = x^\alpha$ with $1/2 < \alpha < 3/2$, $\alpha \neq 1$. Note that $x^\alpha \in H^1(\Omega)$ but $x^\alpha \notin H^2(\Omega)$. We consider a GFEM with $n_i = 1$ and $\varphi_1^{[i]} := x^\alpha|_{\omega_i}$, $i \in I$. From the definition of $\mathcal{S}$, any $v \in \mathcal{S}$ is of the form

$$v(x) = \sum_{i \in I \setminus \{0\}} a_i N_i(x) + \sum_{i \in I} b_i N_i(x)\, x^\alpha; \quad a_i, b_i \in \mathbb{R}.$$
(3.7)

We have set $a_0 = 0$ to impose the constraint $u_h(0) = 0$ on the GFEM solution $u_h$. It can be easily shown that $u_h = u$, i.e., there is no approximation error. We let $\eta := [a_1, \ldots, a_N, b_0, \cdots, b_N]^T \in \mathbb{R}^{2N+1}$. Then $B(v, v) = \eta^T A \eta$, where $A$ is the $(2N+1) \times (2N+1)$ positive definite stiffness matrix of the GFEM. We note that $A_{ii} = |N_i|^2_{H^1(\omega_i)}$ for $1 \leq i \leq N$ and $A_{N+1+j,\, N+1+j} = |N_j x^\alpha|^2_{H^1(\omega_j)}$ for $0 \leq j \leq N$. Therefore, by considering $v \in \mathcal{S}$ of the form

$$v(x) = \sum_{i \in I \setminus \{0\}} a_i\, \frac{N_i(x)}{|N_i|_{H^1(\Omega)}} + \sum_{i \in I} b_i\, \frac{N_i(x)\, x^\alpha}{|N_i x^\alpha|_{H^1(\Omega)}}, \quad a_i, b_i \in \mathbb{R}, \qquad (3.8)$$

it is easy to see that $B(v, v) = \eta^T H \eta$, where $H$ is as mentioned before. We consider a $v \in \mathcal{S}$ of the form (3.8) with $a_i = 0$ for $1 \leq i \leq N-1$, $a_N = 1$, and $b_i = 0$ for $i \in I$. Then

$$B(v, v) = \int_\Omega (v')^2\, dx = 1 \quad \text{and} \quad \|\eta\|^2 := \sum_{i \in I \setminus \{0\}} a_i^2 + \sum_{i \in I} b_i^2 = 1.$$

Therefore,

$$\frac{B(v, v)}{\|\eta\|^2} = 1 \leq \lambda_M, \qquad (3.9)$$

where $\lambda_M$ is the largest eigenvalue of $H$. Let $g(x) \in H^2(\Omega)$ be a non-decreasing function with $g(x) = 0$ for $0 \leq x \leq 1/4$ and $0 < C \leq g(x_i) \leq 1$ for $i \geq \lceil N/2 \rceil$. For $h$ small enough, let $k$ be such that $1/4 < x_k \leq 1/4 + h$.
Clearly, $x^\alpha$ and $g x^\alpha$ are in $H^2(\widehat{\Omega})$, where $\widehat{\Omega} := (1/4, 1)$. Consider $v \in \mathcal{S}$ of the form (3.8) with $a_i = -g(x_i)\, x_i^\alpha\, |N_i|_{H^1(\Omega)}$ and $b_i = g(x_i)\, |N_i x^\alpha|_{H^1(\Omega)}$. Then

$$v(x) = -\sum_{i=k}^{N} g(x_i)\, x_i^\alpha\, N_i(x) + \sum_{i=k}^{N} g(x_i)\, N_i(x)\, x^\alpha.$$

Thus $v = 0$ on $[0, x_k]$. Moreover, on $\tau_i$, $i \geq k+1$, we have

$$v|_{\tau_i} = -\mathcal{I}_h^i(g x^\alpha) + x^\alpha\, \mathcal{I}_h^i(g),$$

where $\mathcal{I}_h^i(f)$ is the linear interpolant of $f$ on $\tau_i$, interpolating at $x_{i-1}$ and $x_i$. Therefore,

$$|v|_{H^1(\overset{o}{\tau}_i)} = \big| \big(g x^\alpha - \mathcal{I}_h^i(g x^\alpha)\big) - g x^\alpha + x^\alpha \mathcal{I}_h^i(g) \big|_{H^1(\overset{o}{\tau}_i)} \leq C h \big[ |g x^\alpha|_{H^2(\overset{o}{\tau}_i)} + \|x^\alpha\|_{H^2(\overset{o}{\tau}_i)}\, |g|_{H^2(\overset{o}{\tau}_i)} \big],$$

where we have used standard interpolation estimates. Thus, recalling that $v = 0$ on $[0, x_k]$, we have

$$B(v, v) = |v|^2_{H^1(\Omega)} = \sum_{i=k+1}^{N} |v|^2_{H^1(\overset{o}{\tau}_i)} \leq C h^2 \big[ |g x^\alpha|^2_{H^2(\widehat{\Omega})} + \|x^\alpha\|^2_{H^2(\widehat{\Omega})}\, \|g\|^2_{H^2(\widehat{\Omega})} \big] := C h^2\, |||g x^\alpha|||^2. \qquad (3.10)$$

Also,

$$\|\eta\|^2 \geq \sum_{i=k}^{N} [g(x_i)]^2\, |N_i x^\alpha|^2_{H^1(\Omega)} \geq \sum_{i=\lceil N/2 \rceil}^{N} [g(x_i)]^2\, |N_i x^\alpha|^2_{H^1(\Omega)} \geq C h^{-1},$$

where we have used that $|N_i x^\alpha|^2_{H^1(\Omega)} \geq C/h$ for $i \geq \lceil N/2 \rceil$. Thus, using (3.10), we have

$$\frac{B(v, v)}{\|\eta\|^2} \leq C h^3\, |||g x^\alpha|||^2,$$

and hence $\lambda_m \leq C h^3\, |||g x^\alpha|||^2$, where $\lambda_m$ is the smallest eigenvalue of $H$. Finally, from (3.9), we get

$$\mathcal{K}(A) = \kappa_2(H) = \frac{\lambda_M}{\lambda_m} \geq C h^{-3}\, |||g x^\alpha|||^{-2},$$

which is much bigger than the scaled condition number of the stiffness matrix of the standard FEM; we recall that the standard FEM is the basic part of the GFEM. Thus, from Hypothesis H, there will be severe loss of accuracy in the computed solution of $Ax = b$. We will show this feature in the Appendix. It is interesting to note that, using $v \in \mathcal{S}$ of the form (3.7) and following the same arguments as before, we can also show that the condition number $\kappa(A)$ itself grows at a rate faster than $O(h^{-2})$. We stated this property at the beginning of this subsection.

Stable Generalized Finite Element Method (SGFEM):
A GFEM will be referred to as an SGFEM if the GFEM satisfies the following property: the scaled condition number $\mathcal{K}(A)$ of the associated stiffness matrix $A$ is of the same order, with respect to $h$, as that of the stiffness matrix of the basic part of the GFEM. Since the basic part of any GFEM is the standard FEM, a GFEM is an SGFEM provided $\mathcal{K}(A) = O(h^{-2})$ for second order problems. As mentioned before, we will present the analysis for uniform meshes. However, the analysis is valid for locally quasi-uniform meshes. We first present a particular example highlighting the ideas and results related to SGFEM in a simpler setting.

Let $a(x) = 1$ in (2.1) and suppose the solution of (2.1) is smooth; in particular, let $u \in H^3(\Omega)$. Since the solution is unique up to an additive constant, we seek $u$ with $u(0) = 0$. It is well known that a function in $H^3(\Omega)$ could be accurately approximated, locally in $\omega_i$, by polynomials of degree 2; recall that the patches $\omega_i$ have been defined in Section 3. Based on this information, we consider $V_i = \mathrm{span}\{\varphi_j^{[i]}\}_{j=0}^{2}$ (i.e., $n_i = 2$), where $\varphi_1^{[i]} = (x - x_i)$ and $\varphi_2^{[i]} = (x - x_i)^2$, for $0 \leq i \leq N$. Recall that $\varphi_0^{[i]} = 1$. Thus $V_i = P^2(\omega_i)$. We let

$$\widetilde{\varphi}_j^{[i]} := \varphi_j^{[i]} - \mathcal{I}_{\omega_i}(\varphi_j^{[i]}), \quad \text{where} \quad \mathcal{I}_{\omega_i}(\varphi_j^{[i]}) := \sum_{i-1 \leq k \leq i+1} \varphi_j^{[i]}(x_k)\, N_k \big|_{\omega_i};$$

$\mathcal{I}_{\omega_i}(\varphi_j^{[i]})$ is the piecewise linear interpolant of $\varphi_j^{[i]}$ on the patch $\omega_i$. We adjust the operators $\mathcal{I}_{\omega_0}$ and $\mathcal{I}_{\omega_N}$; they interpolate at $\{x_0, x_1\}$ and $\{x_{N-1}, x_N\}$, respectively. We define a modified local approximation space $\widetilde{V}_i = \mathrm{span}\{\widetilde{\varphi}_j^{[i]}\}_{j=0}^{2}$, associated with $V_i$. Clearly, $\widetilde{\varphi}_j^{[i]} = 0$ for $j = 0, 1$, and $\widetilde{V}_i = \mathrm{span}\{\widetilde{\varphi}_2^{[i]}\}$. It is well known (see [33, 49]) that the scaled condition number of the stiffness matrix of the GFEM, with $V_i$ as the local approximation spaces, could be extremely large or even unbounded.
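The effect of the modification $\widetilde{\varphi}_j^{[i]} = \varphi_j^{[i]} - \mathcal{I}_{\omega_i}(\varphi_j^{[i]})$ can be checked directly. The sketch below (ours, not from the paper; $h$ and $x_i$ are illustrative values) constructs $\widetilde{\varphi}_2^{[i]}$ on an interior patch and confirms that it vanishes at the three vertices of the patch while remaining nonzero inside the elements:

```python
import numpy as np

# Modified enrichment on an interior patch omega_i = (x_{i-1}, x_{i+1})
# (a sketch; h and x_i are illustrative values).
h, xi = 0.1, 0.5
xv = np.array([xi - h, xi, xi + h])           # vertices of the patch

phi2 = lambda x: (x - xi)**2                  # original enrichment phi_2

def interp(x):
    """Piecewise linear interpolant of phi2 at the patch vertices."""
    return np.interp(x, xv, phi2(xv))

phi2_tilde = lambda x: phi2(x) - interp(x)    # modified enrichment

# phi2_tilde vanishes at the vertices x_{i-1}, x_i, x_{i+1} ...
assert np.allclose(phi2_tilde(xv), 0.0)
# ... but is nonzero inside the elements, e.g. at the element midpoints
assert abs(phi2_tilde(xi - h/2)) > 0 and abs(phi2_tilde(xi + h/2)) > 0
```

Since the modified enrichments vanish at every vertex, the enriched shape functions $N_i \widetilde{\varphi}_2^{[i]}$ carry no nodal values; this is the mechanism behind the favorable conditioning established below.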
We will use the GFEM with $\widetilde{V}_i$ precisely to address this issue, and show that the GFEM based on the approximation space $\mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$, with

$$\mathcal{S}_1 = \Big\{ \sum_{i \in I \setminus \{0\}} a_i N_i \Big\} \quad \text{and} \quad \widetilde{\mathcal{S}}_2 = \sum_{i \in I} N_i \widetilde{V}_i,$$

is an SGFEM. Note that $v(0) = 0$ for all $v \in \mathcal{S}$. We have chosen $a_0 = 0$ in the definition of $\mathcal{S}_1$ to impose the constraint $u_h(0) = 0$ to obtain a unique GFEM solution $u_h \in \mathcal{S}$. It is easy to check that the assumptions in Theorem 3.1 hold; in fact, there exists $\xi_i \in V_i$ such that $\|u - \xi_i\|_{E(\omega_i)} \leq C h^2 |u|_{H^3(\omega_i)}$. Therefore it is clear from (3.6) that $\|u - u_h\|_{E(\Omega)} = O(h^2)$, where $u_h$ is the GFEM solution based on $\mathcal{S} = \mathcal{S}_1 + \mathcal{S}_2$ (recall $\mathcal{S}_2 = \sum_{i \in I} N_i V_i$). We first show that the GFEM based on $\mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$ also yields the same optimal order of convergence.

Proposition 4.1
There exists $v \in \mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$ such that

$$\|u - v\|_{E(\Omega)} \leq C h^2 |u|_{H^3(\Omega)},$$

where the positive constant $C$ is independent of $h$.

Proof: Since $u \in H^3(\omega_i)$ for $0 \leq i \leq N$, it is well known that there exists $\xi_i \in V_i = P^2(\omega_i)$ such that

$$\|u - \xi_i\|_{E(\omega_i)} \leq C h^2 |u|_{H^3(\omega_i)}. \qquad (4.1)$$

Let $\mathcal{I}_h u = \sum_{i \in I} u(x_i) N_i$. It is clear that $\mathcal{I}_h u = \mathcal{I}_{\omega_i} u$ on $\omega_i$. Therefore, using standard interpolation results, we have

$$\|(u - \mathcal{I}_h u) - (\xi_i - \mathcal{I}_{\omega_i} \xi_i)\|_{L^2(\omega_i)} = \|(u - \xi_i) - \mathcal{I}_{\omega_i}(u - \xi_i)\|_{L^2(\omega_i)} \leq C\, \mathrm{diam}(\omega_i)\, \|u - \xi_i\|_{E(\omega_i)},$$

and similarly,

$$\|(u - \mathcal{I}_h u) - (\xi_i - \mathcal{I}_{\omega_i} \xi_i)\|_{E(\omega_i)} \leq C \|u - \xi_i\|_{E(\omega_i)} \leq C h^2 |u|_{H^3(\omega_i)}.$$

Let $w := u - \mathcal{I}_h u$; clearly $w \in H^1(\Omega)$. From the above, $\xi_i - \mathcal{I}_{\omega_i} \xi_i \in \widetilde{V}_i$ approximates $w$ locally in $\omega_i$. Therefore, from Theorem 3.1, there is $v_2 \in \widetilde{\mathcal{S}}_2$ such that

$$\|w - v_2\|_{E(\Omega)} \leq C \Big[ \sum_{i \in I} h^4 |u|^2_{H^3(\omega_i)} \Big]^{1/2} \leq C h^2 |u|_{H^3(\Omega)}. \qquad (4.2)$$

Let $v = \mathcal{I}_h u - u(x_0) + v_2$. Since $\{N_i\}_{i \in I}$ is a PU, we have $\mathcal{I}_h u - u(x_0) = \sum_{i \in I \setminus \{0\}} [u(x_i) - u(x_0)] N_i \in \mathcal{S}_1$. Thus $v \in \mathcal{S}$, and using (4.2), we get

$$\|u - v\|_{E(\Omega)} = \|w - v_2\|_{E(\Omega)} \leq C h^2 |u|_{H^3(\Omega)},$$

which is the desired result. $\Box$

Using Proposition 4.1, we immediately get that $\|u - u_h\|_{E(\Omega)} = O(h^2)$, where $u_h$ is the GFEM solution based on $\mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$. We also note that we approximated $u - \mathcal{I}_h u$ by the functions in $\widetilde{V}_i$ in the proof of Proposition 4.1 – this is the main idea of SGFEM. Later, we will further comment on this issue.

We now address the scaled condition number of the stiffness matrix $A$ associated with the GFEM based on $\mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$. With a suitable ordering of the shape functions of $\mathcal{S}$, the matrix $A$ is of the form

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad (4.3)$$

where the $A_{ij}$ are block matrices. The matrix $A_{11} = \{B(N_i, N_j)\}_{i,j \in I \setminus \{0\}}$, which is the stiffness matrix of the basic part of GFEM, is the standard $N \times N$ FE stiffness matrix. The $(N+1) \times (N+1)$ matrix $A_{22}$ is of the form $A_{22} = \{B(N_i \widetilde{\varphi}_2^{[i]}, N_j \widetilde{\varphi}_2^{[j]})\}_{i,j \in I}$. Also $A_{21} = A_{12}^T$.
For clarity of notation, we will write $A_{22} = \{(A_{22})_{ij}\}_{i,j=1}^{M}$, where $M = N + 1$. Note that the diagonal entries $(A_{22})_{jj}$ are associated with the vertices $x_{j-1}$, respectively, and the GFEM solution $u_h$ is computed by postprocessing. We remark that, in general, $M$ will vary based on the application and $(A_{22})_{jj}$ will be associated with some vertex $x_{i(j)}$.

We first note that $\widetilde{\varphi}_2^{[i]}(x_j) = 0$ for $j = i-1, i, i+1$. Therefore it is easy to show that $\mathcal{S}_1$ and $\widetilde{\mathcal{S}}_2$ are orthogonal in the inner product $B(\cdot, \cdot)$, i.e.,

$$B(v_1, v_2) = 0, \quad \forall v_1 \in \mathcal{S}_1,\ v_2 \in \widetilde{\mathcal{S}}_2. \qquad (4.4)$$

Thus it is immediate that $A_{12}$ and $A_{21}$ in (4.3) are zero-matrices. The matrix $A_{11}$ is tridiagonal and is constructed by the assembly process from the element stiffness matrices $A_{11}^{(k)}$, for the elements $\tau_k = [x_{k-1}, x_k]$, $k = 1, 2, \cdots, N$. The matrices $A_{11}^{(k)}$ are given by

$$A_{11}^{(k)} = \frac{1}{h}\, \widehat{A}_{11}^{(k)}; \quad \widehat{A}_{11}^{(k)} := \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}, \quad 2 \leq k \leq N, \quad \text{and} \quad \widehat{A}_{11}^{(1)} := [1]. \qquad (4.5)$$

Similarly, the matrix $A_{22}$, which is also tridiagonal, is constructed by the assembly process from the element matrices

$$A_{22}^{(k)} = \begin{bmatrix} B_{\tau_k}(N_{k-1} \widetilde{\varphi}_2^{[k-1]}, N_{k-1} \widetilde{\varphi}_2^{[k-1]}) & B_{\tau_k}(N_k \widetilde{\varphi}_2^{[k]}, N_{k-1} \widetilde{\varphi}_2^{[k-1]}) \\ B_{\tau_k}(N_{k-1} \widetilde{\varphi}_2^{[k-1]}, N_k \widetilde{\varphi}_2^{[k]}) & B_{\tau_k}(N_k \widetilde{\varphi}_2^{[k]}, N_k \widetilde{\varphi}_2^{[k]}) \end{bmatrix} \qquad (4.6)$$

for the elements $\tau_k$, $k = 1, 2, \cdots, N$, where $B_{\tau_k}(w, v) := \int_{\tau_k} a w' v'\, dx$. A direct computation yields $A_{22}^{(k)} = h^3 \widehat{A}_{22}^{(k)}$, where
$$\widehat{A}_{22}^{(k)} = \begin{bmatrix} 2/15 & 1/30 \\ 1/30 & 2/15 \end{bmatrix}.$$

It is easy to check that the matrix $\widehat{A}_{22}^{(k)}$ is positive definite (the eigenvalues are $1/10$ and $1/6$), and thus

$$\frac{h^3}{10} \|y\|^2 \leq y^T A_{22}^{(k)} y \leq \frac{h^3}{6} \|y\|^2, \quad \forall y = (y_1, y_2) \in \mathbb{R}^2. \qquad (4.7)$$

We now consider the diagonal matrix $D = \mathrm{diag}(D_1, D_2)$ with

$$D_1 = \mathrm{diag}(d_1), \quad d_1 = m_1^{-1/2}\, (2^{-1/2}, \cdots, 2^{-1/2}, 1)^T \in \mathbb{R}^N,$$
$$D_2 = \mathrm{diag}(d_2), \quad d_2 = m_2^{-1/2}\, (1, 2^{-1/2}, \cdots, 2^{-1/2}, 1)^T \in \mathbb{R}^{N+1},$$

where $m_1 = 1/h$ and $m_2 = 2h^3/15$.
Then

$$\widehat{A} := DAD = \begin{bmatrix} D_1 A_{11} D_1 & 0 \\ 0 & D_2 A_{22} D_2 \end{bmatrix} = \begin{bmatrix} \widehat{A}_{11} & 0 \\ 0 & \widehat{A}_{22} \end{bmatrix},$$

where $\widehat{A}_{11} = D_1 A_{11} D_1$ and $\widehat{A}_{22} = D_2 A_{22} D_2$. Clearly $\widehat{A}_{11}$ and $\widehat{A}_{22}$ are $N \times N$ and $(N+1) \times (N+1)$ tri-diagonal matrices, respectively. The diagonal elements of $\widehat{A}_{11}$ and $\widehat{A}_{22}$ are equal to 1. Consequently, the diagonal elements of $\widehat{A}$ are equal to 1, and the scaled condition number $\mathcal{K}(A)$ of $A$ is $\kappa_2(\widehat{A})$.

Proposition 4.2
Let $\mathcal{K}(A)$ be the scaled condition number of $A$, and let $\lambda_{\min}(\widehat{A}_{11})$, $\lambda_{\max}(\widehat{A}_{11})$ be the smallest and largest eigenvalues of $\widehat{A}_{11}$, respectively. Then

$$\mathcal{K}(A_{11}) \leq \mathcal{K}(A) \leq \mathcal{K}(A_{11})\, \frac{\max\{1,\ C_2/\lambda_{\max}(\widehat{A}_{11})\}}{\min\{1,\ C_1/\lambda_{\min}(\widehat{A}_{11})\}},$$

where $C_1 = 3/4$ and $C_2 = 5/4$.

Proof: Let $z = (z_1, z_2)^T \in \mathbb{R}^{2N+1}$, where $z_1 \in \mathbb{R}^N$ and $z_2 \in \mathbb{R}^{N+1}$. Then

$$z^T \widehat{A} z = (D_1 z_1)^T A_{11} (D_1 z_1) + (D_2 z_2)^T A_{22} (D_2 z_2) = z_1^T \widehat{A}_{11} z_1 + z_2^T \widehat{A}_{22} z_2. \qquad (4.8)$$

Let $z_2 = (y_1, y_2, \cdots, y_{N+1})^T$; then $D_2 z_2 = m_2^{-1/2}\, (y_1, 2^{-1/2} y_2, \cdots, 2^{-1/2} y_N, y_{N+1})^T$, where $m_2 = 2h^3/15$.
We define

$$z_{2,1} := m_2^{-1/2}\, (y_1, 2^{-1/2} y_2)^T, \quad z_{2,N} := m_2^{-1/2}\, (2^{-1/2} y_N, y_{N+1})^T, \quad z_{2,k} := (2 m_2)^{-1/2}\, (y_k, y_{k+1})^T \ \text{for} \ k = 2, \cdots, N-1.$$

Recalling that $A_{22}$ could be obtained from the element matrices $A_{22}^{(k)}$ through the assembly process, using (4.7) we get

$$z_2^T \widehat{A}_{22} z_2 = (D_2 z_2)^T A_{22} (D_2 z_2) = \sum_{k=1}^{N} z_{2,k}^T A_{22}^{(k)} z_{2,k} \leq \frac{h^3}{6} \sum_{k=1}^{N} \|z_{2,k}\|^2 = \frac{h^3}{6} \sum_{i=1}^{N+1} m_2^{-1} y_i^2 = \frac{5}{4} \|z_2\|^2.$$

Similarly, from (4.7), we also get

$$\frac{3}{4} \|z_2\|^2 \leq z_2^T \widehat{A}_{22} z_2,$$

and therefore, from (4.8),

$$z_1^T \widehat{A}_{11} z_1 + C_1 \|z_2\|^2 \leq z^T \widehat{A} z \leq z_1^T \widehat{A}_{11} z_1 + C_2 \|z_2\|^2. \qquad (4.9)$$

It is clear from the above that

$$z^T \widehat{A} z \geq z_1^T \widehat{A}_{11} z_1 + C_1 \|z_2\|^2 \geq \lambda_{\min}(\widehat{A}_{11}) \|z_1\|^2 + C_1 \|z_2\|^2 \geq \min\{C_1, \lambda_{\min}(\widehat{A}_{11})\}\, \|z\|^2, \quad C_1 := \frac{3}{4}.$$

Therefore,

$$\lambda_{\min}(\widehat{A}) \geq \min\{C_1, \lambda_{\min}(\widehat{A}_{11})\} = \lambda_{\min}(\widehat{A}_{11})\, \min\{1, C_1/\lambda_{\min}(\widehat{A}_{11})\}.$$

Similarly, from the upper bound of (4.9), we can show that

$$\lambda_{\max}(\widehat{A}) \leq \lambda_{\max}(\widehat{A}_{11})\, \max\{1, C_2/\lambda_{\max}(\widehat{A}_{11})\},$$

where $C_2 = \frac{5}{4}$. Thus

$$\mathcal{K}(A) = \kappa_2(\widehat{A}) = \frac{\lambda_{\max}(\widehat{A})}{\lambda_{\min}(\widehat{A})} \leq \frac{\lambda_{\max}(\widehat{A}_{11})\, \max\{1, C_2/\lambda_{\max}(\widehat{A}_{11})\}}{\lambda_{\min}(\widehat{A}_{11})\, \min\{1, C_1/\lambda_{\min}(\widehat{A}_{11})\}} = \mathcal{K}(A_{11})\, \frac{\max\{1, C_2/\lambda_{\max}(\widehat{A}_{11})\}}{\min\{1, C_1/\lambda_{\min}(\widehat{A}_{11})\}},$$

where $\mathcal{K}(A_{11}) = \kappa_2(\widehat{A}_{11})$. Thus we have the required upper bound on $\mathcal{K}(A)$.

Now let $z_1$ be an eigenvector of $\widehat{A}_{11}$ associated with $\lambda_{\max}(\widehat{A}_{11})$, and let $z_2 = 0$. Then from (4.8), we have

$$z^T \widehat{A} z = z_1^T \widehat{A}_{11} z_1 = \lambda_{\max}(\widehat{A}_{11}) \|z_1\|^2 = \lambda_{\max}(\widehat{A}_{11}) \|z\|^2,$$

and therefore $\lambda_{\max}(\widehat{A}) \geq \lambda_{\max}(\widehat{A}_{11})$. Similarly, considering $z_1$ to be an eigenvector of $\widehat{A}_{11}$ associated with $\lambda_{\min}(\widehat{A}_{11})$ and $z_2 = 0$, we have $\lambda_{\min}(\widehat{A}) \leq \lambda_{\min}(\widehat{A}_{11})$. Now,

$$\mathcal{K}(A) = \frac{\lambda_{\max}(\widehat{A})}{\lambda_{\min}(\widehat{A})} \geq \frac{\lambda_{\max}(\widehat{A}_{11})}{\lambda_{\min}(\widehat{A}_{11})} = \mathcal{K}(A_{11}),$$

which is the required lower bound on $\mathcal{K}(A)$. $\Box$

Proposition 4.2 establishes that $\mathcal{K}(A) \approx \mathcal{K}(A_{11})$, i.e., the scaled condition numbers of the stiffness matrices for the GFEM and the basic part of the GFEM are of the same order. Thus the GFEM with $\mathcal{S} = \mathcal{S}_1 + \widetilde{\mathcal{S}}_2$ is indeed an SGFEM.

Remark 4.3
We note that the orthogonality of the spaces $S_1$ and $S_2$ was essential in proving Proposition 4.2. This property does not hold in general. Later we will define a notion of "almost orthogonality" of $S_1$ and $S_2$, which will address this issue. □

Remark 4.4 The inequality (4.7) played an important role in obtaining Proposition 4.2. This property depends on the functions in $\widetilde V_i$. For general approximation spaces $V_i$, we need an assumption that will be presented later. □

Remark 4.5
SGFEM uses $\widetilde V_i$ as the enrichment space, which is a modification of $V_i$. Other modifications of $V_i$ have been reported in different contexts. For example, the shifting modification, namely, $\widetilde\varphi_j^{[i]}(x) = \varphi_j^{[i]}(x) - \varphi_j^{[i]}(x_i)$, $j > 0$, is used in XFEM in the context of approximation error as well as the enforcement of the Kronecker delta property (see [23]). □
We now present the SGFEM for (2.1), with $a \in L^\infty(\Omega)$ and $0 < \alpha \le a(x) \le \beta$. Moreover, suppose it is a priori known that $u(0) = 0$ (we will further comment on a priori information later). We consider the uniform mesh $\{\tau_k\}_{k \in I\setminus\{0\}}$ with the set of vertices $T_h$, as described in Section 3. Recall that the hat function $N_i$ and the patch $\omega_i$ are associated with each $x_i \in T_h$. We will refer to $\{x_{i-1}, x_i, x_{i+1}\}$ as the vertices of $\omega_i$; the vertices of $\omega_0$, $\omega_N$ are $\{x_0, x_1\}$, $\{x_{N-1}, x_N\}$, respectively. Let $T_h^1, T_h^2 \subset T_h$; $\zeta_1 := \mathrm{card}(T_h^1)$, $\zeta_2 := \mathrm{card}(T_h^2)$; $\zeta_1, \zeta_2 \le N+1$. We define $S_1 = \{ \sum_{x_i \in T_h^1} a_i N_i,\ a_i \in \mathbb{R} \}$; $T_h^1$ will be referred to as the $S_1$-relevant set of vertices. We consider $T_h^1 = \{ x_i \in T_h : 1 \le i \le N \}$ as in the example in Section 4.1. For other choices of $T_h^1$, we refer to Remark 4.6.

For $x_i \in T_h$, let $V_i = \mathrm{span}\{\varphi_j^{[i]}\}_{j=0}^{n_i} \subset H^1(\omega_i)$ be such that there exists $\xi_i \in V_i$ satisfying $\| u - \xi_i \|_{E(\overset{\circ}{\tau}_k)} \le \epsilon_i$ for all $\tau_k \subset \omega_i$. Clearly, $\| u - \xi_i \|_{E(\omega_i)} \le \sqrt{2}\,\epsilon_i$. We consider the modified space $\widetilde V_i = \mathrm{span}\{\widetilde\varphi_j^{[i]}\}_{j=1}^{n_i}$, where
$$ \widetilde\varphi_j^{[i]} = \varphi_j^{[i]} - I_{\omega_i} \varphi_j^{[i]}; \qquad I_{\omega_i} \varphi_j^{[i]} := \sum_{i-1 \le k \le i+1} \varphi_j^{[i]}(x_k)\, N_k \big|_{\omega_i}. $$
$I_{\omega_i} v$ is the piecewise linear interpolant of $v \in H^1(\omega_i)$ on the patch $\omega_i$, based on the vertices of $\omega_i$; we adjust $I_{\omega_0}$ and $I_{\omega_N}$ accordingly, as before. It is important to note that if for some $x_i \in T_h$, $V_i = \{ \xi \in H^1(\omega_i) : \xi|_{\tau_k} \in P^1(\tau_k) \text{ for all } \tau_k \subset \omega_i \}$, then $\widetilde V_i = \{0\}$. Also $\widetilde\xi_i(x_k) = 0$ with $k = i-1, i, i+1$ for all $\widetilde\xi_i \in \widetilde V_i$. We refer to a patch $\omega_i$ as enriched if $\widetilde V_i \ne \{0\}$. Let $T_h^2 := \{ x_i \in T_h : \omega_i \text{ is enriched} \}$ and define $S_2 = \sum_{x_i \in T_h^2} N_i \widetilde V_i$; $T_h^2$ will be referred to as the $S_2$-relevant set of vertices. In Section 4.1, we chose $T_h^2 = T_h^1$. We will present examples with $\zeta_2 \ll N+1$ (i.e., only a few patches enriched) later in the paper.

Remark 4.6
The sets $T_h^1, T_h^2 \subset T_h$ provide a framework to address the numerical treatment of many applications. The selection of both sets depends on a priori information on the problem and its solution. The selection of $T_h^2$ will be apparent from the examples in Section 5. Suppose $\widetilde T \subset T_h$ contains all the vertices $x_j \in T_h$ where it is known a priori that $u(x_j) = 0$. We choose $T_h^1 = T_h \setminus \widetilde T$. Typically, $T_h^1$ will not contain any boundary vertex with a homogeneous Dirichlet condition. However, $T_h^1$ may exclude other vertices in $T_h$ based on a priori information. For example, let $f(x) = \sum_{k=0}^{\infty} c_k \cos[2\pi(2k+1)x]$, $a(x) = 1$, and suppose it is known that $u(0) = 0$. Then $u(1/4) = u(3/4) = 0$, and the vertices $x_j \notin T_h^1$ if $x_j = 1/4$ or $x_j = 3/4$. Thus we can accommodate much a priori information in this framework. Only for simplicity have we considered $\widetilde T = \{x_0\}$ in this section. □

We now consider a GFEM with
$$ S = S_1 + S_2 = \sum_{x_i \in T_h^1} a_i N_i + \sum_{x_i \in T_h^2} N_i \widetilde V_i. \quad (4.10) $$
Note that $v(0) = 0$ for all $v \in S$. We will show that this GFEM is an SGFEM, under certain assumptions on the space $S$, which we will present later. We mention that $T_h^1$ and $T_h^2$ are called the $S_1$- and $S_2$-relevant vertices, respectively, since the degrees of freedom associated only with these vertices appear in the GFEM.

We first present an approximation result for the GFEM with $S = S_1 + S_2$.

Theorem 4.7
Let $u \in H^1(\Omega)$ be the solution of (2.1). Suppose for each $x_i \in T_h^2$ there exist $\bar\xi_i \in \widetilde V_i$ and $C > 0$, independent of $i$, such that
$$ \| u - I_{\omega_i} u - \bar\xi_i \|_{L^2(\omega_i)} \le C\, \mathrm{diam}(\omega_i)\, \| u - I_{\omega_i} u - \bar\xi_i \|_{E(\omega_i)} \quad \text{and} \quad \| u - I_{\omega_i} u - \bar\xi_i \|_{E(\omega_i)} \le \widetilde\epsilon_i. $$
Then there exists $v \in S = S_1 + S_2$ such that
$$ \| u - v \|_{E(\Omega)} \le C \Big\{ \sum_{x_i \in T_h \setminus T_h^2} \| u - I_{\omega_i} u \|_{E(\omega_i)}^2 + \sum_{x_i \in T_h^2} \widetilde\epsilon_i^{\,2} \Big\}^{1/2}. \quad (4.11) $$

Proof:
Let $I_h u = \sum_{x_i \in T_h} u(x_i) N_i$ be the piecewise linear interpolant of $u$. We note that $I_h u = I_{\omega_i} u$ on $\omega_i$. Define $w := u - I_h u$ and let $v_2 := \sum_{x_i \in T_h^2} N_i \bar\xi_i \in S_2$. Then, recalling that $\{N_i\}_{x_i \in T_h}$ is a PU, we have
$$ w - v_2 = \sum_{x_i \in T_h} N_i w - \sum_{x_i \in T_h^2} N_i \bar\xi_i = \sum_{x_i \in T_h \setminus T_h^2} N_i w + \sum_{x_i \in T_h^2} N_i (w - \bar\xi_i). $$
Therefore,
$$ \| w - v_2 \|_{E(\Omega)} \le C \Big[ \Big\| \sum_{x_i \in T_h \setminus T_h^2} N_i w \Big\|_{E(\Omega)} + \Big\| \sum_{x_i \in T_h^2} N_i (w - \bar\xi_i) \Big\|_{E(\Omega)} \Big]. \quad (4.12) $$
We first address the last term of (4.12). Using the fact that any $x \in \Omega$ lies in at most two patches $\omega_i$, $\omega_{i+1}$, we see that the sum $\sum_{x_i \in T_h^2} [N_i (w - \bar\xi_i)]'$ has at most two terms for any $x \in \Omega$. Using this observation, the fact that $\| N_i' \|_{L^\infty(\Omega)} \le C\, [\mathrm{diam}(\omega_i)]^{-1}$, and the hypothesis of the Theorem, we can show that
$$ \Big\| \sum_{x_i \in T_h^2} N_i (w - \bar\xi_i) \Big\|_{E(\Omega)}^2 \le C \Big[ \sum_{x_i \in T_h^2} \frac{\| w - \bar\xi_i \|_{L^2(\omega_i)}^2}{\mathrm{diam}(\omega_i)^2} + \sum_{x_i \in T_h^2} \| w - \bar\xi_i \|_{E(\omega_i)}^2 \Big] \le C \sum_{x_i \in T_h^2} \| w - \bar\xi_i \|_{E(\omega_i)}^2 \le C \sum_{x_i \in T_h^2} \widetilde\epsilon_i^{\,2}. \quad (4.13) $$
(We refer to the proof of Theorem 3.2 in [4] for details of the argument leading to (4.13).) Using exactly the same argument and the interpolation estimate $\| w \|_{L^2(\omega_i)} = \| u - I_{\omega_i} u \|_{L^2(\omega_i)} \le C h \| u - I_{\omega_i} u \|_{E(\omega_i)}$, we get
$$ \Big\| \sum_{x_i \in T_h \setminus T_h^2} N_i w \Big\|_{E(\Omega)}^2 \le C \sum_{x_i \in T_h \setminus T_h^2} \| u - I_{\omega_i} u \|_{E(\omega_i)}^2. $$
Therefore, from (4.12) and (4.13), we have
$$ \| w - v_2 \|_{E(\Omega)}^2 \le C \Big[ \sum_{x_i \in T_h \setminus T_h^2} \| u - I_{\omega_i} u \|_{E(\omega_i)}^2 + \sum_{x_i \in T_h^2} \widetilde\epsilon_i^{\,2} \Big]. $$
Finally, writing $w = u - I_h u$ and setting $v = I_h u + v_2 \in S_1 + S_2$, we get the desired result. □

We mention that, unlike in Theorem 4.1, we did not assume $T_h^2 = T_h^1$ in Theorem 4.7. We further note that $I_h u$ for $u \in H^1(\Omega)$ is not defined in higher dimensions, since the point values of $u$, in general, do not exist in higher dimensions (in contrast to 1-d).
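Both ingredients of this construction — the partition of unity property of the hat functions and the piecewise linear interpolant $I_h u$ — can be checked numerically. A small sketch (in Python, on an assumed uniform mesh; not code from the paper):

```python
import numpy as np

# Numerical sanity check (a sketch): on a uniform 1-d mesh, the hat functions
# {N_i} form a partition of unity, and the piecewise linear interpolant I_h u
# reproduces u exactly when u is linear.
N = 10                                # number of elements (assumed)
nodes = np.linspace(0.0, 1.0, N + 1)
x = np.linspace(0.0, 1.0, 501)        # evaluation points

def hat(i, x):
    """Hat function N_i, realized by linear interpolation of one-hot nodal data."""
    e = np.zeros(N + 1); e[i] = 1.0
    return np.interp(x, nodes, e)

hats = np.array([hat(i, x) for i in range(N + 1)])
print(np.max(np.abs(hats.sum(axis=0) - 1.0)))   # PU: the hats sum to 1

u = lambda t: 2.0 * t + 1.0                      # a linear function
Ihu = np.interp(x, nodes, u(nodes))              # I_h u = sum_i u(x_i) N_i
print(np.max(np.abs(Ihu - u(x))))                # linears are reproduced exactly
```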
However, using a generalized interpolant based on averages of $u$ in balls around the vertices $x_i$, the proof of the above result can be easily generalized to higher dimensions.

Remark 4.8
From the proof of Theorem 4.7, it is clear that accurate local approximation of $u - I_h u$ by functions in $\widetilde V_i$ is crucial to obtain the desired result. This is the main idea of SGFEM: the spaces $\widetilde V_i$ are constructed such that the functions in $\widetilde V_i$ accurately approximate $u - I_h u$ in $\omega_i$. This is in contrast to the standard GFEM, where the functions in the local approximating spaces $V_i$ accurately approximate $u$ in $\omega_i$. □

Remark 4.9
We note that $\widetilde V_i = \{0\}$ for $x_i \in T_h \setminus T_h^2$. If $u \in H^1(\Omega)$ is locally smooth, namely, $u \in H^2(\omega_i)$ for $x_i \in T_h \setminus T_h^2$, then $\| u - I_{\omega_i} u \|_{E(\omega_i)} \le C h |u|_{H^2(\omega_i)}$ for $x_i \in T_h \setminus T_h^2$, and (4.11) could be written as
$$ \| u - v \|_{E(\Omega)} \le C \Big\{ h^2 \sum_{x_i \in T_h \setminus T_h^2} |u|_{H^2(\omega_i)}^2 + \sum_{x_i \in T_h^2} \widetilde\epsilon_i^{\,2} \Big\}^{1/2}. \quad (4.14) $$
By incorporating the available information on the solution $u$ in $\widetilde V_i$, for $x_i \in T_h^2$, we can have $\widetilde\epsilon_i = O(h)$, and consequently, $\| u - v \|_{E(\Omega)} = O(h)$. The set $T_h^2$ can be chosen adaptively with respect to a prescribed tolerance, which we do not elaborate on in this paper. □

Remark 4.10
A rate of convergence of $O(h)$ for various problems has been reported for the Corrected XFEM (which is also a GFEM); see, e.g., [22]. However, for crack propagation problems, the enrichment spaces $V_i$ in XFEM require the use of a ramp function to obtain the $O(h)$ rate of convergence. In contrast, the GFEM based on $S = S_1 + S_2$ does not require the use of a ramp function to obtain the rate of convergence of $O(h)$.

We now address the scaled condition number of the stiffness matrix of the GFEM with $S = S_1 + S_2$. For clarity of the exposition, we will present the analysis for the case $n_i = 1$, i.e., $\widetilde V_i = \mathrm{span}\{\widetilde\varphi_1^{[i]}\}$. The analysis for general $n_i$ is similar.

As in the example presented in Section 4.1, the stiffness matrix $A$ is of the form
$$ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, $$
where $A_{11} = \{ B(N_i, N_j) \}_{x_i, x_j \in T_h^1}$ is the $\zeta_1 \times \zeta_1$ stiffness matrix of the basic part of the GFEM. Let $D_1$ be the diagonal matrix with $(D_1)_{ii} = (A_{11})_{ii}^{-1/2}$. Clearly, the diagonal elements of
$$ \hat A_{11} := D_1 A_{11} D_1 \quad (4.15) $$
are equal to 1.

The matrix $A_{22}$ plays a central role in our analysis and depends on the elements that have been enriched. We will refer to an element $\tau_k = [x_{k-1}, x_k]$ as enriched if (a) $x_{k-1} \in T_h^2$ and $\widetilde\varphi_1^{[k-1]}|_{\tau_k} \ne 0$, or (b) $x_k \in T_h^2$ and $\widetilde\varphi_1^{[k]}|_{\tau_k} \ne 0$. Let
$$ K_{enr} := \{ \tau_k : \tau_k \text{ is enriched} \}. $$
The matrix $A_{22}$ is constructed by the assembly process using the element stiffness matrices $A_{22}^{(k)}$, defined only on $\tau_k \in K_{enr}$.

We now address the structure of the element matrices $A_{22}^{(k)}$ in detail and set up some notions and notations that will be used in the analysis. We denote the vertices of the element $\tau_k$ by $x_1^{(k)} := x_{k-1}$ and $x_2^{(k)} := x_k$; we consider only $\tau_k \in K_{enr}$. The element stiffness matrix $A_{22}^{(k)}$ is of the form
$$ A_{22}^{(k)} = \begin{bmatrix} b_{11}^{(k)} & b_{12}^{(k)} \\ b_{12}^{(k)} & b_{22}^{(k)} \end{bmatrix}, \quad (4.16) $$
where $b_{ij}^{(k)} = B_{\tau_k}\big( N_{k-2+i}\, \widetilde\varphi_1^{[k-2+i]},\; N_{k-2+j}\, \widetilde\varphi_1^{[k-2+j]} \big)$, $1 \le i, j \le 2$. If $b_{11}^{(k)}, b_{22}^{(k)} > 0$, then $A_{22}^{(k)}$ is $2 \times 2$ and $A_{22}^{(k)}$ is associated with the vertices $x_1^{(k)} = x_{k-1}$ and $x_2^{(k)} = x_k$. We define a diagonal matrix $D^{(k)} = \mathrm{diag}\{ \delta_1^{(k)}, \delta_2^{(k)} \}$ with $\delta_1^{(k)}, \delta_2^{(k)} > 0$, such that the diagonal elements of
$$ \hat A_{22}^{(k)} := D^{(k)} A_{22}^{(k)} D^{(k)} $$
are $O(1)$, independent of $h$. Clearly, $\delta_1^{(k)}, \delta_2^{(k)}$ are associated with the vertices $x_{k-1}, x_k$, respectively.

On the other hand, if $b_{22}^{(k)} = 0$ (i.e., $\widetilde\varphi_1^{[k]}|_{\tau_k} \equiv 0$, and hence $b_{12}^{(k)} = b_{21}^{(k)} = 0$) in (4.16), then the local stiffness matrix $A_{22}^{(k)} = [\, b_{11}^{(k)} \,]$ is of size $1 \times 1$ and is associated only with the vertex $x_1^{(k)} = x_{k-1}$. We define $D^{(k)} = [\, \delta_1^{(k)} \,]$, where $\delta_1^{(k)} = \{ b_{11}^{(k)} \}^{-1/2}$; $\delta_1^{(k)}$ is associated with the vertex $x_1^{(k)} = x_{k-1}$. Similarly, if $b_{11}^{(k)} = 0$ in (4.16), then the local stiffness matrix $A_{22}^{(k)} = [\, b_{22}^{(k)} \,]$ is associated only with the vertex $x_2^{(k)} = x_k$, and $D^{(k)} = [\, \delta_2^{(k)} \,]$ with $\delta_2^{(k)} = \{ b_{22}^{(k)} \}^{-1/2}$ associated with the vertex $x_2^{(k)} = x_k$. Let $\varsigma^{(k)}$ be the number of vertices associated with the local stiffness matrix $A_{22}^{(k)}$. Thus the size of $A_{22}^{(k)}$ is $\varsigma^{(k)} \times \varsigma^{(k)}$; note that $\varsigma^{(k)}$ is either 1 or 2 under our assumption $n_i = 1$.

Recall that $A_{22}$ is obtained by the assembly process using the element stiffness matrices $A_{22}^{(k)}$; the size of $A_{22}$ is $\zeta_2 \times \zeta_2$. Let $c = (c_1, c_2, \cdots, c_{\zeta_2})$; then
$$ c^T A_{22} c = \sum_{\tau_k \in K_{enr}} [c^{(k)}]^T A_{22}^{(k)} c^{(k)}, \quad (4.17) $$
where $c^{(k)} \in \mathbb{R}^{\varsigma^{(k)}}$. Moreover, the components of $c^{(k)}$ are the components of $c$ that correspond to those vertices of $\tau_k$ that are associated with $A_{22}^{(k)}$. For example, if $b_{11}^{(k)}, b_{22}^{(k)} > 0$ in $A_{22}^{(k)}$, then, as mentioned before, the vertices $x_1^{(k)} = x_{k-1}$, $x_2^{(k)} = x_k$ are associated with $A_{22}^{(k)}$. Suppose the components $c_{j(k)-1}, c_{j(k)}$ of $c$ are associated with the vertices $x_{k-1}, x_k$, respectively, of $\tau_k$. Then $c^{(k)} = [c_{j(k)-1}, c_{j(k)}]^T$. Similarly, if $A_{22}^{(k)} = [\, b_{11}^{(k)} \,]$, then $A_{22}^{(k)}$ is associated with $x_1^{(k)}$ and $c^{(k)} = [c_{j(k)-1}]$ – a vector with one component.
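The effect of the diagonal scaling $D^{(k)}$ can be illustrated numerically. In the sketch below (Python; the element matrix $h \cdot [[2,1],[1,2]]$ is a hypothetical model with the typical $O(h)$ size, not one of the paper's matrices), scaling by $\mathrm{diag}\{ a_{ii}^{-1/2} \}$ produces a unit diagonal, and the eigenvalues of the scaled matrix are $O(1)$ uniformly in $h$:

```python
import numpy as np

# Sketch of the diagonal scaling described above, on assumed model data.
def scaled(A):
    """Return D A D with D = diag(a_ii^{-1/2}), so diag(D A D) = 1."""
    d = 1.0 / np.sqrt(np.diag(A))
    return A * np.outer(d, d)

for h in [1e-1, 1e-3, 1e-6]:
    A22_k = h * np.array([[2.0, 1.0],   # hypothetical SPD element matrix ~ h * O(1)
                          [1.0, 2.0]])
    Ahat = scaled(A22_k)
    # diagonal is 1, eigenvalues are 1/2 and 3/2 regardless of h
    print(np.diag(Ahat), np.linalg.eigvalsh(Ahat))
```

The point of the construction is visible here: the unscaled matrices degenerate with $h$, while the scaled matrices stay uniformly well conditioned.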
Later in our analysis, we will use (4.17) with a particular vector $c$ and the $c^{(k)}$ as defined above.

Next we note that each vertex $x_i$ of the FE mesh is associated with an FE star – the union of all elements $\tau_k \subset \omega_i$ (equivalently, the union of all elements $\tau_k$ with common vertex $x_i$). For $x_i \in T_h^2$, we define
$$ K_i := \{ \tau_k \in K_{enr} : \tau_k \subset \omega_i \}. $$
For $x_i \in T_h^2$ and $\tau_k \in K_i$, we set the index $1 \le l(i,k) \le 2$ as follows; note that $k \in \{ i, i+1 \}$. For $k = i$, we set $l(i,k) = l(i,i) = 2$, and for $k = i+1$, we set $l(i,k) = l(i,i+1) = 1$. Thus $l(i,k)$ is the index such that $x_{l(i,k)}^{(k)} = x_i$; note that $x_{l(i,k)}^{(k)}$ may not be associated with $A_{22}^{(k)}$. We define
$$ K_i^* := \{ \tau_k \in K_i : x_{l(i,k)}^{(k)} \text{ is associated with } A_{22}^{(k)} \}. $$
Thus $K_i^*$ is the set of $\tau_k \in K_i$ such that $\widetilde\varphi_1^{[i]}|_{\tau_k} \ne 0$. For $x_i \in T_h^2$, we define
$$ \Delta_i := \sum_{\tau_k \in K_i^*} \big[ \delta_{l(i,k)}^{(k)} \big]^{-2}, \quad (4.18) $$
which will be used later in our analysis.

Each diagonal element of $A_{22}$ is associated with a vertex in $T_h^2$. Let $(A_{22})_{j_i j_i}$ be associated with $x_i \in T_h^2$. Moreover, we note that $(A_{22})_{j_i j_i} = \sum_{\tau_k \in K_i^*} b_{l(i,k),\,l(i,k)}^{(k)}$, where $b_{pq}^{(k)}$ was defined in (4.16). Thus $(A_{22})_{j_i j_i} > 0$ for $x_i \in T_h^2$ (i.e., all the diagonal elements of $A_{22}$ are positive). We now define the diagonal matrix $D_2 = \mathrm{diag}\{ d_1, d_2, \cdots, d_{\zeta_2} \}$ with $d_j = (A_{22})_{jj}^{-1/2}$, $1 \le j \le \zeta_2$. Note that $d_{j_i} = (A_{22})_{j_i j_i}^{-1/2}$ is associated with $x_i \in T_h^2$. Clearly, the diagonal elements of
$$ \hat A_{22} := D_2 A_{22} D_2 \quad (4.19) $$
are equal to 1. Define the diagonal matrix $D := \mathrm{diag}\{ D_1, D_2 \}$. Since the diagonal elements of $\hat A_{11}$, $\hat A_{22}$ (see (4.15), (4.19)) are equal to 1, the diagonal elements of
$$ \hat A := D A D = \begin{bmatrix} \hat A_{11} & \hat A_{12} \\ \hat A_{21} & \hat A_{22} \end{bmatrix} \quad (4.20) $$
are also equal to 1. Also $\hat A_{12} = D_1 A_{12} D_2$ and $\hat A_{21} = \hat A_{12}^T$.

We will show that the GFEM with $S = S_1 + S_2$ is an SGFEM under the following assumptions on the local approximation spaces $\widetilde V_i$ and the enrichment part of $S$, namely $S_2$.

Assumption 1
The spaces $S_1$ and $S_2$ are almost orthogonal with respect to the inner product $B(\cdot,\cdot)$, i.e., there exist constants $0 < L_1, U_1 < \infty$, independent of $h$, such that
$$ L_1 \big\{ \| v_1 \|_{E(\Omega)}^2 + \| v_2 \|_{E(\Omega)}^2 \big\} \le | B(v_1 + v_2,\, v_1 + v_2) | \le U_1 \big\{ \| v_1 \|_{E(\Omega)}^2 + \| v_2 \|_{E(\Omega)}^2 \big\} $$
for all $v_1 \in S_1$ and $v_2 \in S_2$.

Assumption 2
For $\tau_k \in K_{enr}$, there exist constants $0 < L_2, U_2 < \infty$, independent of $k$ and $h$, such that
$$ L_2 \| [D^{(k)}]^{-1} x \|^2 \le x^T A_{22}^{(k)} x \le U_2 \| [D^{(k)}]^{-1} x \|^2, \quad \forall x \in \mathbb{R}^{\varsigma^{(k)}}, $$
where the diagonal matrices $D^{(k)}$ have been defined before.

Assumption 3
For $x_i \in T_h^2$, there exist constants $0 < L_3, U_3 < \infty$, independent of $i$ and $h$, such that
$$ L_3 \le (A_{22})_{j_i j_i}^{-1}\, \Delta_i \le U_3, $$
where $(A_{22})_{j_i j_i}$ is the diagonal element of $A_{22}$ associated with $x_i$, and $\Delta_i$ is as defined in (4.18).

The following result is an easy consequence of Assumption 1.
Lemma 4.11
Let $x = (\xi^T, \eta^T)^T \in \mathbb{R}^{\zeta_1 + \zeta_2}$, where $\xi \in \mathbb{R}^{\zeta_1}$ and $\eta \in \mathbb{R}^{\zeta_2}$. Then there exist positive constants $L_1$ and $U_1$, independent of $h$, such that
$$ L_1 \big[ \xi^T A_{11} \xi + \eta^T A_{22} \eta \big] \le x^T A x \le U_1 \big[ \xi^T A_{11} \xi + \eta^T A_{22} \eta \big], $$
where $A$, $A_{11}$ and $A_{22}$ are the matrices defined before.

Proof: Let $\xi = (\xi_i)_{x_i \in T_h^1}$ and $\eta = (\eta_i)_{x_i \in T_h^2}$. Consider $v_1 = \sum_{x_i \in T_h^1} \xi_i N_i \in S_1$ and $v_2 = \sum_{x_i \in T_h^2} \eta_i N_i \widetilde\varphi_1^{[i]} \in S_2$. Then $B(v_1 + v_2, v_1 + v_2) = x^T A x$, $B(v_1, v_1) = \xi^T A_{11} \xi$, and $B(v_2, v_2) = \eta^T A_{22} \eta$. The desired result is now immediate from Assumption 1. □

Theorem 4.12
Suppose Assumptions 1, 2, and 3 are satisfied. Let $A$ be the stiffness matrix of the GFEM with $S = S_1 + S_2$. Then
$$ \frac{L_1}{U_1} K(A_{11}) \le K(A) \le K(A_{11})\, \frac{U_1}{L_1}\, \frac{\max\{ 1,\, U_2 U_3 / \lambda_{\max}(\hat A_{11}) \}}{\min\{ 1,\, L_2 L_3 / \lambda_{\min}(\hat A_{11}) \}}, $$
where $\lambda_{\min}(\hat A_{11})$, $\lambda_{\max}(\hat A_{11})$ are the smallest and largest eigenvalues, respectively, of the matrix $\hat A_{11}$ defined before.

Remark 4.13
This result shows that, under Assumptions 1, 2, and 3, the scaled condition numbers of the stiffness matrices of the GFEM with $S = S_1 + S_2$ and of the basic part of the GFEM are of the same order. Thus the GFEM with $S = S_1 + S_2$ is indeed an SGFEM. □

Proof:
Let $z = (z_1, z_2)^T \in \mathbb{R}^{\zeta_1 + \zeta_2}$, where $z_1 \in \mathbb{R}^{\zeta_1}$ and $z_2 \in \mathbb{R}^{\zeta_2}$. Then, from the definition of $\hat A$ (see (4.20)), we have $z^T \hat A z = z^T DAD z = (Dz)^T A (Dz)$, and since $Dz = \big[ (D_1 z_1)^T, (D_2 z_2)^T \big]^T$, from Lemma 4.11 we get
$$ L_1 \big[ (D_1 z_1)^T A_{11} (D_1 z_1) + (D_2 z_2)^T A_{22} (D_2 z_2) \big] \le z^T \hat A z \le U_1 \big[ (D_1 z_1)^T A_{11} (D_1 z_1) + (D_2 z_2)^T A_{22} (D_2 z_2) \big]. \quad (4.21) $$
Let $z_2 = (f_1, f_2, \cdots, f_{\zeta_2})^T$ and consider $D_2 = \mathrm{diag}(d_1, d_2, \cdots, d_{\zeta_2})$ with $d_i = (A_{22})_{ii}^{-1/2}$, as defined before. Then $D_2 z_2 = (d_1 f_1, d_2 f_2, \cdots, d_{\zeta_2} f_{\zeta_2})^T$. Recall that $d_{j_i}$ is associated with $x_i \in T_h^2$; consequently, $d_{j_i} f_{j_i}$ is associated with $x_i \in T_h^2$.

Consider an element $\tau_k \in K_{enr}$. Following the notation given after (4.17), let $z_2^{(k)} := (D_2 z_2)^{(k)} \in \mathbb{R}^{\varsigma^{(k)}}$, so that the components of $z_2^{(k)}$ are the components of $D_2 z_2$ corresponding to the vertices of $\tau_k$ associated with $A_{22}^{(k)}$. Now, from (4.17) and using Assumption 2, we have
$$ (D_2 z_2)^T A_{22} (D_2 z_2) = \sum_{\tau_k \in K_{enr}} [z_2^{(k)}]^T A_{22}^{(k)} z_2^{(k)} \ge L_2 \sum_{\tau_k \in K_{enr}} \| [D^{(k)}]^{-1} z_2^{(k)} \|^2. \quad (4.22) $$
We note that if $D^{(k)} = \mathrm{diag}\{ \delta_1^{(k)}, \delta_2^{(k)} \}$, then
$$ \| [D^{(k)}]^{-1} z_2^{(k)} \|^2 = [\delta_1^{(k)}]^{-2} [d_{j(k)-1}]^2 [f_{j(k)-1}]^2 + [\delta_2^{(k)}]^{-2} [d_{j(k)}]^2 [f_{j(k)}]^2, $$
where $z_2^{(k)} = [\, d_{j(k)-1} f_{j(k)-1},\ d_{j(k)} f_{j(k)} \,]^T$, following the notation given after (4.17). Similarly, if $D^{(k)} = [\delta_1^{(k)}]$, then $\| [D^{(k)}]^{-1} z_2^{(k)} \|^2 = [\delta_1^{(k)}]^{-2} [d_{j(k)-1}]^2 [f_{j(k)-1}]^2$, and if $D^{(k)} = [\delta_2^{(k)}]$, then $\| [D^{(k)}]^{-1} z_2^{(k)} \|^2 = [\delta_2^{(k)}]^{-2} [d_{j(k)}]^2 [f_{j(k)}]^2$.

Now, it is important to note that if $J_1 := \{ d_{j(k)-1} f_{j(k)-1},\ d_{j(k)} f_{j(k)} \}_{\tau_k \in K_{enr}}$ (where repeated elements appear only once) and $J_2 := \{ d_{j_i} f_{j_i} \}_{x_i \in T_h^2}$, then $J_1 = J_2$. Thus from (4.22) we have
$$ (D_2 z_2)^T A_{22} (D_2 z_2) \ge L_2 \sum_{x_i \in T_h^2} \Delta_i\, d_{j_i}^2 f_{j_i}^2 \ge L_2 L_3 \| z_2 \|^2, \quad (4.23) $$
where we used Assumption 3 to get the last inequality. Similarly, we can show that $(D_2 z_2)^T A_{22} (D_2 z_2) \le U_2 U_3 \| z_2 \|^2$.
Therefore, from (4.21) and using the definition of $\hat A_{11}$, we get
$$ L_1 \big[ z_1^T \hat A_{11} z_1 + L_2 L_3 \| z_2 \|^2 \big] \le z^T \hat A z \le U_1 \big[ z_1^T \hat A_{11} z_1 + U_2 U_3 \| z_2 \|^2 \big]. \quad (4.24) $$
Now, from the lower bound of $z^T \hat A z$ in (4.24), we have
$$ z^T \hat A z \ge L_1 \big[ \lambda_{\min}(\hat A_{11}) \| z_1 \|^2 + L_2 L_3 \| z_2 \|^2 \big], $$
and therefore,
$$ \lambda_{\min}(\hat A) \ge L_1\, \lambda_{\min}(\hat A_{11})\, \min\{ 1,\, L_2 L_3 / \lambda_{\min}(\hat A_{11}) \}. \quad (4.25) $$
Similarly, using the upper bound of $z^T \hat A z$ in (4.24), we can show
$$ \lambda_{\max}(\hat A) \le U_1\, \lambda_{\max}(\hat A_{11})\, \max\{ 1,\, U_2 U_3 / \lambda_{\max}(\hat A_{11}) \}. \quad (4.26) $$
Thus from (4.25) and (4.26), we have
$$ K(A) = \frac{\lambda_{\max}(\hat A)}{\lambda_{\min}(\hat A)} \le K(A_{11})\, \frac{U_1}{L_1}\, \frac{\max\{ 1,\, U_2 U_3 / \lambda_{\max}(\hat A_{11}) \}}{\min\{ 1,\, L_2 L_3 / \lambda_{\min}(\hat A_{11}) \}}, \quad (4.27) $$
which is the required upper bound. The required lower bound can be obtained by following the exact arguments of Proposition 4.2 together with (4.24). Thus we get the desired result. □

We mention that the notions and notations developed leading to Theorem 4.12 can also be extended to higher dimensions. An element will have $n_e$ vertices; e.g., $n_e$ could be 3 or 4 in 2-d. The element stiffness matrices $A_{22}^{(k)}$ could then be at most $n_e \times n_e$. The assembly argument (4.17) can be easily generalized to higher dimensions. For a given vertex $x_i$ and an enriched element $\tau_k$ in the FE star associated with $x_i$, the index $l(i,k)$ will again represent the local index of the vertex $x_{l(i,k)}^{(k)}$ of $\tau_k$ that coincides with $x_i$, i.e., $x_{l(i,k)}^{(k)} = x_i$. The expressions for $\Delta_i$, $(A_{22})_{ii}$ and Assumptions 1, 2, 3 are exactly the same in higher dimensions. Using these notions, the proof of Theorem 4.12 can be easily extended to higher dimensions. The approach presented here can also be extended to elasticity equations, etc. We note, however, that the notation becomes a little more involved if $n_i > 1$.

Remark 4.14
We now make comments on the assumptions. Assumption 1 is always satisfied in 1-d. Let $B_0(u,v) := \int_\Omega u' v'\, dx$. Since $\widetilde\varphi_j^{[i]}(x_k) = 0$ for $k = i-1, i, i+1$, it can be easily shown that $B_0(v_1, v_2) = 0$ for all $v_1 \in S_1$ and $v_2 \in S_2$. Therefore,
$$ B(v_1 + v_2,\, v_1 + v_2) \ge \alpha B_0(v_1 + v_2,\, v_1 + v_2) = \alpha \big[ B_0(v_1, v_1) + B_0(v_2, v_2) \big] \ge \frac{\alpha}{\beta} \big[ \| v_1 \|_{E(\Omega)}^2 + \| v_2 \|_{E(\Omega)}^2 \big]. $$
Similarly, we can show that
$$ B(v_1 + v_2,\, v_1 + v_2) \le \frac{\beta}{\alpha} \big[ \| v_1 \|_{E(\Omega)}^2 + \| v_2 \|_{E(\Omega)}^2 \big], $$
and thus Assumption 1 is satisfied with $L_1 = \alpha/\beta$ and $U_1 = \beta/\alpha$. In higher dimensions, this assumption has to be checked.

Assumption 2 is equivalent to $L_2 \| y \|^2 \le y^T \hat A_{22}^{(k)} y \le U_2 \| y \|^2$ for all $y \in \mathbb{R}^{\varsigma^{(k)}}$. Thus $\hat A_{22}^{(k)}$ is uniformly positive definite in $k$ and its eigenvalues are uniformly bounded.

It is always possible to choose the diagonal matrix $D^{(k)}$ such that Assumption 3 is satisfied. For example, it is easy to check that Assumption 3 is satisfied with $L_3 = U_3 = 1$ by choosing $D^{(k)} = \mathrm{diag}\{ \delta_1^{(k)}, \delta_2^{(k)} \}$ with $\delta_j^{(k)} = (b_{jj}^{(k)})^{-1/2}$. Assumption 3 is trivially satisfied with $L_3 = U_3 = 1$ when $D^{(k)}$ is a $1 \times 1$ matrix. □

Remark 4.15
As shown in the Appendix, the implementation of the SGFEM does not require scaling the stiffness matrix, i.e., the linear system involving the stiffness matrix $A$, and not the scaled version $\hat A$, is solved. The scaling was used only to define $K(A)$ and to study its order through Theorem 4.12. We will show in the Appendix that $K(A)$ is an indicator of the loss of accuracy in the computed solution of the linear system associated with the FEM, GFEM, and SGFEM. □

In this section we will present the SGFEM, when applied to three specific applications. We will primarily address in detail the scaled condition number of the stiffness matrix of the method and show that the assumptions presented in the last section hold. The SGFEM, applied to each of these applications, will be based on the uniform mesh $\{\tau_k\}_{k \in I\setminus\{0\}}$ with the set of vertices $T_h$, defined before.

5.1 Interface Problems

Let $a(x)$ in (2.1) be a piecewise constant function and let $f$ be smooth. We will consider two situations, namely, $a(x) = a_1(x)$ and $a(x) = a_2(x)$, where $a_1(x)$ is constant on each of the intervals $[0, b_1^*)$ and $[b_1^*, 1]$, and $a_2(x)$ is constant on each of the intervals $[0, b_1^*)$, $[b_1^*, b_2^*)$, and $[b_2^*, 1]$. In either case, the solution $u$ of (2.1) does not belong to $H^2(\Omega)$.

We first consider $a(x) = a_1(x)$. We consider the set $T_h^1 \subset T_h$ as before. There exists an $m$ such that $b_1^* \in \overset{\circ}{\tau}_{m+1} = (x_m, x_{m+1})$, and therefore $b_1^* \in \omega_m \cap \omega_{m+1}$. For $x_i \in T_h$, we consider $V_i = \mathrm{span}\{ 1,\ \varphi_1^{[i]} = \int_{x_{i-1}}^{x} (1/a(t))\, dt \}$. Clearly, for $i \ne m, m+1$, we have $V_i = \mathrm{span}\{ 1,\ (x - x_{i-1}) \}$. Therefore, recalling that $\widetilde V_i = \mathrm{span}\{ \widetilde\varphi_1^{[i]} \}$, where $\widetilde\varphi_1^{[i]} = \varphi_1^{[i]} - I_{\omega_i} \varphi_1^{[i]}$, we get $\widetilde V_i = \{0\}$ for $i \ne m, m+1$. We set $T_h^2 = \{ x_m, x_{m+1} \} \subset T_h$, and from the definition of $S_2$, we have
$$ S_2 = \sum_{x_i \in T_h^2} N_i \widetilde V_i = N_m \widetilde V_m + N_{m+1} \widetilde V_{m+1}. $$
We further note that $\varphi_1^{[m]}$ is linear on $\tau_m$ and therefore $\widetilde\varphi_1^{[m]}|_{\tau_m} = 0$. Similarly, $\widetilde\varphi_1^{[m+1]}|_{\tau_{m+2}} = 0$. Therefore $\tau_{m+1}$ is the only enriched element, i.e., $K_{enr} = \{ \tau_{m+1} \}$, and $A_{22} = A_{22}^{(m+1)}$.
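The mechanism that localizes the enrichment to the two patches containing $b_1^*$ can be reproduced numerically: the enrichment $\varphi_1(x) = \int_0^x (1/a(t))\, dt$ is piecewise linear, so subtracting its patch-wise linear interpolant annihilates it on every patch whose interior does not contain the interface. A sketch (Python; the mesh size, interface location `b_star`, and coefficient values are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Sketch on an assumed model problem: piecewise constant a(x) with one jump.
N, b_star = 10, 0.347                # mesh and interface location (hypothetical)
a_minus, a_plus = 1.0, 4.0           # assumed coefficient values
nodes = np.linspace(0.0, 1.0, N + 1)

def phi(x):
    """Enrichment int_0^x 1/a(t) dt; the lower limit only shifts phi by a
    constant, which the interpolant subtraction removes anyway."""
    x = np.asarray(x, dtype=float)
    return np.where(x < b_star, x / a_minus,
                    b_star / a_minus + (x - b_star) / a_plus)

def mod_enrich_max(i):
    """max |phi - I_{omega_i} phi| over the patch omega_i = [x_{i-1}, x_{i+1}]."""
    lo, hi = max(i - 1, 0), min(i + 1, N)
    xs = np.linspace(nodes[lo], nodes[hi], 401)
    interp = np.interp(xs, nodes[lo:hi + 1], phi(nodes[lo:hi + 1]))
    return np.max(np.abs(phi(xs) - interp))

m = int(b_star / (1.0 / N))          # b_star lies in (x_m, x_{m+1})
enriched = [i for i in range(N + 1) if mod_enrich_max(i) > 1e-12]
print(enriched)                      # only the two patches containing b_star
```

Only $\omega_m$ and $\omega_{m+1}$ survive the subtraction, matching $T_h^2 = \{x_m, x_{m+1}\}$ above.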
Also, we can easily show that $\widetilde\varphi_1^{[m]}|_{\tau_{m+1}} = \widetilde\varphi_1^{[m+1]}|_{\tau_{m+1}}$. Let $b_1^* = x_m + \beta h$ with $0 < \beta < 1$. Then, from a direct computation, we obtain the element stiffness matrix
$$ A_{22}^{(m+1)} = \begin{bmatrix} b_{11}^{(m+1)} & b_{12}^{(m+1)} \\ b_{12}^{(m+1)} & b_{22}^{(m+1)} \end{bmatrix}, \quad (5.1) $$
whose entries are explicit expressions of the form $h\beta(1-\beta)$ times polynomials in $\beta$. Clearly, $A_{22}^{(m+1)}$ is associated with the vertices $x_m, x_{m+1}$. We choose the diagonal matrix $D^{(m+1)} = \mathrm{diag}\{ \delta_1^{(m+1)}, \delta_2^{(m+1)} \}$, where
$$ \delta_1^{(m+1)} = h^{-1/2} \beta^{-1/2} (1-\beta)^{-1}, \qquad \delta_2^{(m+1)} = h^{-1/2} \beta^{-1} (1-\beta)^{-1/2}. \quad (5.2) $$
Then the entries of
$$ \hat A_{22}^{(m+1)} = D^{(m+1)} A_{22}^{(m+1)} D^{(m+1)} \quad (5.3) $$
are functions of $\beta$ alone, independent of $h$. The diagonal elements of $\hat A_{22}^{(m+1)}$ are $O(1)$ for all $0 < \beta < 1$, and the eigenvalues of $\hat A_{22}^{(m+1)}$ are bounded above, and below away from zero, uniformly in $\beta$. Therefore, recalling Remark 4.14, we have
$$ \tfrac{1}{6} \| [D^{(m+1)}]^{-1} x \|^2 \le x^T A_{22}^{(m+1)} x \le \| [D^{(m+1)}]^{-1} x \|^2, \quad \forall x \in \mathbb{R}^2, \quad (5.4) $$
and hence Assumption 2 is satisfied with $L_2 = 1/6$ and $U_2 = 1$.

We set $D_2 = \mathrm{diag}\{ d_1, d_2 \}$ with $d_i = (A_{22})_{ii}^{-1/2}$. Clearly, the diagonal elements of $\hat A_{22} = D_2 A_{22} D_2$ are equal to 1. Recall that $T_h^2 = \{ x_m, x_{m+1} \}$ and $K_m = K_{m+1} = \{ \tau_{m+1} \}$. Therefore, $l(m, m+1) = 1$ and $l(m+1, m+1) = 2$, where the index $l(i,k)$ for $x_i \in T_h^2$ and $\tau_k \in K_i$ was defined just before (4.18). We also have $K_m^* = K_{m+1}^* = \{ \tau_{m+1} \}$. Therefore, from (4.18), we have $\Delta_m = [\delta_1^{(m+1)}]^{-2}$ and $\Delta_{m+1} = [\delta_2^{(m+1)}]^{-2}$. Also, the vertices $x_m, x_{m+1} \in T_h^2$ are associated with the diagonal elements $(A_{22})_{j_m j_m}$, $(A_{22})_{j_{m+1} j_{m+1}}$, respectively, of $A_{22}$, where $j_m = 1$, $j_{m+1} = 2$. It is easy to check that
$$ 1 \le (A_{22})_{j_m j_m}^{-1} \Delta_m,\ (A_{22})_{j_{m+1} j_{m+1}}^{-1} \Delta_{m+1} \le 6, $$
i.e., Assumption 3 is satisfied with $L_3 = 1$ and $U_3 = 6$.

We have shown in Remark 4.14 that Assumption 1 is always satisfied in 1-d; in this case $L_1 = 1/2$ and $U_1 = 2$. Therefore, from Theorem 4.12, we have that $K(A) = O(h^{-2})$, and thus the GFEM with $S = S_1 + S_2$ is indeed an SGFEM. We further note that Assumptions 1, 2, 3 are satisfied for any $0 < \beta < 1$, i.e., the constants $L_1$, $U_1$, $L_2$, $U_2$, $L_3$ and $U_3$ are independent of $\beta$. Therefore $K(A) = O(h^{-2})$ even when $\beta \approx 0$ or $\beta \approx 1$, i.e., when the point of discontinuity $b_1^*$ of $a(x)$ is close to one of the vertices $x_i$ (see also Remark 5.1).

We next consider (2.1) with $a(x) = a_2(x)$. We again choose $V_i = \mathrm{span}\{ 1,\ \varphi_1^{[i]} = \int_{x_{i-1}}^{x} (1/a(t))\, dt \}$. If the points of discontinuity $b_1^*, b_2^*$ of $a_2(x)$ are separated, e.g., $b_1^* \in \overset{\circ}{\tau}_l$ and $b_2^* \in \overset{\circ}{\tau}_{l^*}$ with $|l - l^*| \ge$
2, then we can again show that the GFEM with $S = S_1 + S_2$ is an SGFEM, based on the arguments given above.

Suppose there is an $m$ such that $b_1^* \in \overset{\circ}{\tau}_m$ and $b_2^* \in \overset{\circ}{\tau}_{m+1}$. Moreover, suppose $b_1^* = x_{m-1} + h/2$ and $b_2^* = x_m + \beta h$ with $0 < \beta < 1$. Note that $b_1^*$ is away from the vertices, whereas $b_2^*$ could be close to either $x_m$ ($\beta \approx 0$) or $x_{m+1}$ ($\beta \approx 1$). We again have $\widetilde V_i = \mathrm{span}\{ \widetilde\varphi_1^{[i]} \}$; clearly, $\widetilde V_i \ne \{0\}$ for $i = m-1, m, m+1$. Therefore $T_h^2 = \{ x_{m-1}, x_m, x_{m+1} \}$ and
$$ S_2 = \sum_{i = m-1, m, m+1} N_i \widetilde V_i. $$
We further note that $\widetilde\varphi_1^{[m-1]}\big|_{\tau_{m-1}} = \widetilde\varphi_1^{[m+1]}\big|_{\tau_{m+2}} = 0$. Also, it can be shown that $\widetilde\varphi_1^{[m-1]}\big|_{\tau_m} = \widetilde\varphi_1^{[m]}\big|_{\tau_m}$ and $\widetilde\varphi_1^{[m]}\big|_{\tau_{m+1}} = \widetilde\varphi_1^{[m+1]}\big|_{\tau_{m+1}}$. Therefore $K_{enr} = \{ \tau_m, \tau_{m+1} \}$ (i.e., $\tau_m, \tau_{m+1}$ are the only enriched elements), and hence $A_{22}$ is assembled from the local stiffness matrices $A_{22}^{(m)}$ and $A_{22}^{(m+1)}$.

From a direct computation, we get
$$ A_{22}^{(m)} = \begin{bmatrix} h/16 & h/32 \\ h/32 & h/16 \end{bmatrix}, $$
and it is associated with the vertices $x_{m-1}$ and $x_m$. The matrix $A_{22}^{(m+1)}$ is the same as in (5.1) and is associated with $x_m$ and $x_{m+1}$. We choose $D^{(m)} = \mathrm{diag}(\delta_1^{(m)}, \delta_2^{(m)})$ with $\delta_1^{(m)} = \delta_2^{(m)} = h^{-1/2}$, and $D^{(m+1)} = \mathrm{diag}(\delta_1^{(m+1)}, \delta_2^{(m+1)})$ with $\delta_1^{(m+1)}, \delta_2^{(m+1)}$ as given in (5.2). Then
$$ \hat A_{22}^{(m)} := D^{(m)} A_{22}^{(m)} D^{(m)} = \begin{bmatrix} 1/16 & 1/32 \\ 1/32 & 1/16 \end{bmatrix}. $$
Clearly, the diagonal elements of $\hat A_{22}^{(m)}$ are $O(1)$. The eigenvalues of $\hat A_{22}^{(m)}$ are $\lambda_1 = 1/32$ and $\lambda_2 = 3/32$, and therefore (recall Remark 4.14),
$$ \tfrac{1}{32} \| [D^{(m)}]^{-1} x \|^2 \le x^T A_{22}^{(m)} x \le \tfrac{3}{32} \| [D^{(m)}]^{-1} x \|^2. \quad (5.5) $$
Next, the matrix $\hat A_{22}^{(m+1)} := D^{(m+1)} A_{22}^{(m+1)} D^{(m+1)}$ is the same as the matrix in (5.3). The diagonal elements of $\hat A_{22}^{(m+1)}$ are $O(1)$, its eigenvalues are uniformly bounded in $\beta$ as before, and
$$ \tfrac{1}{6} \| [D^{(m+1)}]^{-1} x \|^2 \le x^T A_{22}^{(m+1)} x \le \| [D^{(m+1)}]^{-1} x \|^2, \quad \forall x \in \mathbb{R}^2. $$
Thus the above inequality, together with (5.5), implies that Assumption 2 is satisfied with $L_2 = 1/32$ and $U_2 = 1$ for all $0 < \beta < 1$.

The matrix $A_{22}$ is assembled from the matrices $A_{22}^{(m)}$, $A_{22}^{(m+1)}$ and is given by
$$ A_{22} = \begin{bmatrix} h/16 & h/32 & 0 \\ h/32 & h/16 + b_{11}^{(m+1)} & b_{12}^{(m+1)} \\ 0 & b_{12}^{(m+1)} & b_{22}^{(m+1)} \end{bmatrix}, $$
where the $b_{ij}^{(m+1)}$ are the entries of $A_{22}^{(m+1)}$ in (5.1). We choose $D_2 = \mathrm{diag}(d_1, d_2, d_3)$ with $d_i = (A_{22})_{ii}^{-1/2}$. Clearly, the diagonal elements of $\hat A_{22} := D_2 A_{22} D_2$ are equal to 1. Consider the vertex $x_m \in T_h^2$. Then $K_m = \{ \tau_m, \tau_{m+1} \}$ and $l(m,m) = 2$, $l(m, m+1) = 1$. Also, in this case, $K_i^* = K_i$. Therefore, from (4.18), we have $\Delta_m = [\delta_2^{(m)}]^{-2} + [\delta_1^{(m+1)}]^{-2}$. Similarly, we can show that $\Delta_{m-1} = [\delta_1^{(m)}]^{-2}$ and $\Delta_{m+1} = [\delta_2^{(m+1)}]^{-2}$. We also note that the vertices $x_{m-1}, x_m, x_{m+1} \in T_h^2$ are associated with the diagonal elements $(A_{22})_{j_{m-1} j_{m-1}}$, $(A_{22})_{j_m j_m}$, $(A_{22})_{j_{m+1} j_{m+1}}$, respectively, of $A_{22}$, where $j_{m-1} = 1$, $j_m = 2$, $j_{m+1} = 3$. An easy calculation yields
$$ 1 \le (A_{22})_{j_{m-1} j_{m-1}}^{-1} \Delta_{m-1},\ (A_{22})_{j_m j_m}^{-1} \Delta_m,\ (A_{22})_{j_{m+1} j_{m+1}}^{-1} \Delta_{m+1} \le 16. $$
Thus Assumption 3 is satisfied with $L_3 = 1$ and $U_3 = 16$ for all $0 < \beta < 1$. We have shown before that Assumption 1 is always satisfied in 1-d. Therefore, from Theorem 4.12, we infer that $K(A) = O(h^{-2})$; the result is true even when $\beta \approx 0$ or $\beta \approx 1$. Thus the GFEM with $S = S_1 + S_2$ is indeed an SGFEM.

We remark that for $a(x) = a_1(x)$ or $a(x) = a_2(x)$, we can show that there exists $\bar\xi_i \in \widetilde V_i$ such that $\| u - I_{\omega_i} u - \bar\xi_i \|_{E(\omega_i)} = O(h)$ for each $x_i \in T_h^2$. Thus, using the standard interpolation estimates and Theorem 4.7, we have $\| u - u_h \|_{E(\Omega)} = O(h)$, where $u_h$ is the SGFEM solution.

Remark 5.1 Note that $A_{22}^{(m+1)}$, and thus $A_{22}$, $A$, degenerate as $\beta \to 0$ or $\beta \to 1$. Let $\epsilon > 0$ be small. We adjust the implementation when $\beta \le \epsilon$ or $1 - \beta \le \epsilon$ by setting $\beta = \epsilon$ or $1 - \beta = \epsilon$, respectively. We emphasize that $K(A)$ is bounded independently of $\beta$. □

5.2 A Problem with a Singular Solution

Let $a(x) = 1$ in (2.1) and suppose $f(x)$ is such that the solution $u$ of (2.1)-(2.2) is of the form $u = x^\alpha + u_0$, where $\tfrac12 < \alpha < \tfrac32$, $\alpha \ne 1$, and $u_0$ is smooth with $u_0(0) = 0$. Clearly $u \notin H^2(\Omega)$. Let $0 < D < 1$, $\Omega_l := (0, D)$, and $\Omega_r := (D, 1)$. Then $u \in H^2(\Omega_r)$ and $|u|_{H^2(\Omega_r)} \le C \big[ |x^\alpha|_{H^2(\Omega_r)} + |u_0|_{H^2(\Omega_r)} \big]$. Clearly, $|x^\alpha|_{H^2(\Omega_r)}$ depends on $D$ and is extremely large for $D \approx 0$.

We consider $T_h^1 \subset T_h$ as before. Let $\widetilde T := \{ x_i \in T_h : \omega_i \cap \Omega_l \ne \emptyset \}$, where the patches $\omega_i$ have been defined before. Clearly, $x_0, x_1 \in \widetilde T$. Let $k^* \in I$ be the largest index such that $x_i \in \widetilde T$ for $0 \le i \le k^* -$
1. For $x_i \in \widetilde T$, let
$$ V_i = \mathrm{span}\{ 1,\ \varphi_1^{[i]} = (x - x_i),\ \varphi_2^{[i]} = x^\alpha|_{\omega_i} \}, $$
and for $x_i \in T_h \setminus \widetilde T$, let $V_i = \mathrm{span}\{ 1,\ \varphi_1^{[i]} = (x - x_i) \}$. Clearly, $\widetilde V_i = \{0\}$ for $x_i \in T_h \setminus \widetilde T$. For $x_i \in \widetilde T$, we have
$$ \widetilde V_i = \mathrm{span}\{ \widetilde\varphi_2^{[i]} = \sigma_i \} \ne \{0\}, $$
where $\sigma_i := (x^\alpha - I_{\omega_i} x^\alpha)|_{\omega_i}$; recall that $I_{\omega_i} x^\alpha$ is the piecewise linear function that interpolates $x^\alpha$ at the vertices $\{ x_{i-1}, x_i, x_{i+1} \}$ of $\omega_i$ for $i \ne 0$, and $I_{\omega_0} x^\alpha$ interpolates $x^\alpha$ at $\{ x_0, x_1 \}$. For an element $\tau_k \subset \omega_i$ (with $x_i \in \widetilde T$), we define $\sigma^{(k)} := (x^\alpha - I_k x^\alpha)|_{\tau_k}$, where $I_k x^\alpha \in P^1(\tau_k)$ interpolates $x^\alpha$ at $x_{k-1}, x_k$. Clearly, $I_{\omega_i} x^\alpha = I_k x^\alpha$ on $\tau_k \subset \omega_i$. It is also clear that $[\sigma^{(k)}]' \ne 0$ on $\tau_k \subset \omega_i$.

We define $S_2 = \sum_{i=0}^{k^*-1} N_i \widetilde V_i$, and we consider the GFEM based on $S = S_1 + S_2$. We first address the convergence of the GFEM solution $u_h$. It is easy to show that for $x_i \in \widetilde T$, there exists $\bar\xi_i \in \widetilde V_i$ such that
$$ \| u - I_{\omega_i} u - \bar\xi_i \|_{E(\omega_i)} \le C h |u_0|_{H^2(\omega_i)}. $$
Also, for $x_i \in T_h \setminus \widetilde T$, from the standard interpolation result we have
$$ \| u - I_{\omega_i} u \|_{E(\omega_i)} \le C h |u|_{H^2(\omega_i)} \le C h \big[ |x^\alpha|_{H^2(\omega_i)} + |u_0|_{H^2(\omega_i)} \big]. $$
Therefore, from Theorem 4.7, there exists $v \in S_1 + S_2$ such that
$$ \| u - v \|_{E(\Omega)} \le C h \Big[ \sum_{x_i \in \widetilde T} |u_0|_{H^2(\omega_i)}^2 + \sum_{x_i \in T_h \setminus \widetilde T} \big\{ |x^\alpha|_{H^2(\omega_i)}^2 + |u_0|_{H^2(\omega_i)}^2 \big\} \Big]^{1/2} \le C h \big[ |u_0|_{H^2(\Omega)}^2 + |x^\alpha|_{H^2(\Omega_r)}^2 \big]^{1/2}. $$
Thus we have $\| u - u_h \|_{E(\Omega)} \le C h$, where $u_h$ is the GFEM solution; note that $C$ depends on $|x^\alpha|_{H^2(\Omega_r)}$ and thus on $D$.

We note that $\Omega_l$ is independent of $h$. However, if $D = h^\gamma$, $\gamma < 1$ (i.e., $|\Omega_l| = h^\gamma$), then one can show that $\| u - u_h \|_{E(\Omega)} = O(h^{1-\gamma})$. Thus if we enrich only a fixed number of patches in the neighborhood of the singularity, we lose the optimal order of convergence.

We now address the scaled condition number of the stiffness matrix $A$ of the GFEM. We note that the matrix $A_{22}$ is assembled from the element stiffness matrices $A_{22}^{(k)}$ for the elements $\tau_k$ that are enriched.
We note that the set of enriched elements is given by $K_{enr} := \{ \tau_k \in \{\tau_l\}_{l \in I\setminus\{0\}} : x_{k-1} \in \widetilde T \}$. We further note that if $\tau_k \in K_{enr}$, then $\tau_j \in K_{enr}$ for $1 \le j \le k$. Also, from the definition of $k^*$, it is clear that $\tau_{k^*} \in K_{enr}$ and $\tau_j \notin K_{enr}$ for $j \ge k^* + 1$. Now, for $\tau_k \in K_{enr}$, $k \ne k^*$, the matrices $A_{22}^{(k)}$ are of the form $A_{22}^{(k)} = \{ b_{lm}^{(k)} \}_{l,m=1}^{2}$; the entries $b_{lm}^{(k)}$ are given by
$$ b_{lm}^{(k)} = \int_{\tau_k} \big( N_{k-2+l}\, \sigma^{(k)} \big)' \big( N_{k-2+m}\, \sigma^{(k)} \big)'\, dx. $$
Also, since $x_{k^*} \notin \widetilde T$, we have $A_{22}^{(k^*)} = [\, b_{11}^{(k^*)} \,]$ (a $1 \times 1$ matrix), where $b_{11}^{(k^*)}$ is given by the above expression.

Lemma 5.2
The entries of the matrix $A_{22}^{(k)}$ are as follows:
$$ b_{11}^{(k)} = \int_{\tau_k} N_{k-1}^2 \big[ (\sigma^{(k)})' \big]^2 dx, \qquad b_{22}^{(k)} = \int_{\tau_k} N_k^2 \big[ (\sigma^{(k)})' \big]^2 dx, \qquad b_{12}^{(k)} = b_{21}^{(k)} = \int_{\tau_k} N_{k-1} N_k \big[ (\sigma^{(k)})' \big]^2 dx. $$
The proof is easy and we do not present it here.

It is clear from the above that for $\tau_k \in K_{enr}$ and $k \ne k^*$, the diagonal elements $b_{11}^{(k)}, b_{22}^{(k)} > 0$, and $A_{22}^{(k)}$ is associated with $x_{k-1}, x_k$. Also $b_{11}^{(k^*)} > 0$, and $A_{22}^{(k^*)}$ is associated with $x_{k^*-1}$. A simple observation yields that the size of $A_{22}$ is $k^* \times k^*$.

Let $\tau_k \in K_{enr}$ and set $x_{k-1/2} := (k - \tfrac12) h$; $x_{k-1/2}$ is the mid-point of $\tau_k$. We define
$$ G_k = \big| [x^\alpha]''(x_{k-1/2}) \big| = \big| \alpha (\alpha - 1)\, (k - \tfrac12)^{\alpha-2} h^{\alpha-2} \big|. $$
Note that for $1 \le j \le k^* - 1$, $\tau_{j+1} \in K_{enr}$ implies $\tau_j \in K_{enr}$, and we have
$$ 1 \le \frac{G_j}{G_{j+1}} = \Big( \frac{j + \tfrac12}{j - \tfrac12} \Big)^{2-\alpha} \le 3^{2-\alpha}. \quad (5.6) $$
We now obtain a few results, which will be used to establish that the GFEM with $S = S_1 + S_2$ is an SGFEM.

Lemma 5.3
For τ_k ∈ K_enr, there exist positive constants C_*, C^*, independent of k and h but possibly depending on α, such that
  C_* h^{3/2} ≤ ‖[σ^{(k)}]'‖_{L²(τ_k)} / G_k ≤ C^* h^{3/2}.
Proof: (a) First let 2 ≤ k ≤ k^* and let g(x) = [σ^{(k)}]'(x) for x ∈ τ_k. Then
  max_{x∈τ_k} |g'(x)| = max_{x∈τ_k} |[σ^{(k)}]''(x)| = |α(α − 1)| x^{α−2}|_{x = x_{k−1}} = |α(α − 1)| (k − 1)^{α−2} h^{α−2}
    = G_k ((k − 1/2)/(k − 1))^{2−α} ≤ G_k (3/2)^{2−α} =: M_k.  (5.7)
Similarly,
  min_{x∈τ_k} |g'(x)| = G_k ((k − 1/2)/k)^{2−α} ≥ G_k (3/4)^{2−α} =: m_k.  (5.8)
We next note that g' does not change sign in τ_k, and thus g is monotonic in τ_k. Also, since
  ∫_{τ_k} g dx = σ^{(k)}|_{x_{k−1}}^{x_k} = 0,
there exists a unique x*_k = x_{k−1} + ζh ∈ τ_k with 0 < ζ < 1 such that g(x*_k) = 0, and x*_k is characterized by
  ∫_{x_{k−1}}^{x*_k} |g| dx = ∫_{x*_k}^{x_k} |g| dx.  (5.9)
We now obtain bounds on ζ, independent of k. Since g(x*_k) = 0, it is clear from the mean value theorem, (5.7), and (5.8) that
  m_k |x − x*_k| ≤ min|g'| |x − x*_k| ≤ |g(x)| ≤ max|g'| |x − x*_k| ≤ M_k |x − x*_k|,  x ∈ τ_k.  (5.10)
Consequently,
  (m_k/2) ζ² h² ≤ ∫_{x_{k−1}}^{x*_k} |g| dx ≤ (M_k/2) ζ² h²,  (5.11)
  (m_k/2)(1 − ζ)² h² ≤ ∫_{x*_k}^{x_k} |g| dx ≤ (M_k/2)(1 − ζ)² h².  (5.12)
Now from (5.9), (5.11), and (5.12), we have
  (m_k/2) ζ² h² ≤ ∫_{x_{k−1}}^{x*_k} |g| dx = ∫_{x*_k}^{x_k} |g| dx ≤ (M_k/2)(1 − ζ)² h²,
and thus
  ζ ≤ √M_k / (√M_k + √m_k) = (3/2)^{(2−α)/2} / [ (3/2)^{(2−α)/2} + (3/4)^{(2−α)/2} ],
where we used the definitions of m_k and M_k given in (5.8) and (5.7), respectively. Using a similar argument we obtain a lower bound on ζ; we summarize the bounds on ζ as ζ_l ≤ ζ ≤ ζ_r, where
  ζ_l = (3/4)^{(2−α)/2} / [ (3/2)^{(2−α)/2} + (3/4)^{(2−α)/2} ]  and  ζ_r = (3/2)^{(2−α)/2} / [ (3/2)^{(2−α)/2} + (3/4)^{(2−α)/2} ].  (5.13)
Finally, from (5.10) and using the definition of M_k (given in (5.7)), we get
  ∫_{τ_k} |g|² dx = ∫_{x_{k−1}}^{x*_k} |g|² dx + ∫_{x*_k}^{x_k} |g|² dx
   ≤ M_k² [ ∫_{x_{k−1}}^{x*_k} (x*_k − x)² dx + ∫_{x*_k}^{x_k} (x − x*_k)² dx ]
   = (M_k² h³/3)[ζ³ + (1 − ζ)³] ≤ (G_k² (3/2)^{2(2−α)} h³/3)[ζ_r³ + (1 − ζ_l)³],
and similarly, we have
  ∫_{τ_k} |g|² dx ≥ (m_k² h³/3)[ζ³ + (1 − ζ)³] ≥ (G_k² (3/4)^{2(2−α)} h³/3)[ζ_l³ + (1 − ζ_r)³].
Thus
  C̄_* h^{3/2} ≤ ‖[σ^{(k)}]'‖_{L²(τ_k)} / G_k ≤ C̄^* h^{3/2},  for 2 ≤ k ≤ k^*,
where
  C̄_* = (3/4)^{2−α} √([ζ_l³ + (1 − ζ_r)³]/3)  and  C̄^* = (3/2)^{2−α} √([ζ_r³ + (1 − ζ_l)³]/3).
(b) Let k = 1. We note that on τ_1 = (0, h),
  [σ^{(1)}]'(x) = α x^{α−1} − h^{α−1}.
By a direct computation, we get
  ∫_0^h |[σ^{(1)}]'|² dx = (α − 1)² h^{2α−1} / (2α − 1).
Therefore, since G_1 = α|α − 1| 2^{2−α} h^{α−2},
  ∫_0^h |[σ^{(1)}]'|² dx / G_1² = h³ 2^{2α−4} / ((2α − 1) α²) =: C̄̄² h³,
i.e., ‖[σ^{(1)}]'‖_{L²(τ_1)} / G_1 = C̄̄ h^{3/2}. Hence we get the desired result with C_* = min(C̄_*, C̄̄) and C^* = max(C̄^*, C̄̄).

Lemma 5.4
Suppose τ_k ∈ K_enr and let l_k(x) be a linear function, defined on τ_k, such that l_k(x_{k−1}) = y_1 and l_k(x_k) = y_2. Then there exists a positive constant C_{**}, independent of k and h but possibly depending on α, such that
  ‖[σ^{(k)}]' l_k‖_{L¹(τ_k)} ≥ C_{**} G_k h² (y_1² + y_2²)^{1/2}.
Proof: (a) Let 2 ≤ k ≤ k^* and define g(x) = [σ^{(k)}]'(x) for x ∈ τ_k. On τ_k, we have seen in the proof of Lemma 5.3 that g(x*_k) = 0, where x*_k = x_{k−1} + ζh and 0 < ζ_l ≤ ζ ≤ ζ_r < 1. We have also seen that
  m_k |x − x*_k| ≤ |g(x)| ≤ M_k |x − x*_k|,  ∀ x ∈ τ_k,  (5.14)
where m_k = G_k (3/4)^{2−α} and M_k = G_k (3/2)^{2−α}. Let
  x̄_k ≡ x_{k−1} + ζ_r h + (1 − ζ_r)h/2.
Then
  |x̄_k − x*_k| = |x_{k−1} + ζ_r h + (1 − ζ_r)h/2 − x_{k−1} − ζh| ≥ (1 − ζ_r)h/2.  (5.15)
Also, from the definition of x̄_k, it is clear that g(x) ≠ 0 in (x̄_k, x_k), and thus from (5.14) and (5.15) we have
  |g(x)| ≥ m_k |x̄_k − x*_k| ≥ (1 − ζ_r) m_k h/2  on (x̄_k, x_k).  (5.16)
Therefore,
  ∫_{x_{k−1}}^{x_k} |g| |l_k| dx ≥ ∫_{x̄_k}^{x_k} |g| |l_k| dx ≥ (m_k h (1 − ζ_r)/2) ∫_{x̄_k}^{x_k} |l_k| dx.  (5.17)
We make the change of variable y = 2(x − x̄_k)/((1 − ζ_r)h). Then
  F(y_1, y_2) := ∫_{x̄_k}^{x_k} |l_k| dx / (y_1² + y_2²)^{1/2} = ((1 − ζ_r)h/2) ∫_0^1 |l̃(y)| dy / (y_1² + y_2²)^{1/2},
where
  l̃(y) = l_k(x̄_k + (1 − ζ_r)hy/2) = y_1 ((1 − ζ_r)(1 − y)/2) + y_2 ((1 + ζ_r)/2 + (1 − ζ_r)y/2).
Thus F(y_1, y_2)/h is independent of k. We next note that F(y_1, y_2) is a continuous function and F(βy_1, βy_2) = F(y_1, y_2) for β > 0. It is well known that the minimum of a continuous function over the compact set y_1² + y_2² = 1 is attained. Hence there is a constant C_min > 0, independent of k but possibly depending on ζ_r, such that
  0 < C_min (1 − ζ_r) h/2 ≤ F(y_1, y_2).
Thus from (5.17), we have
  ∫_{x_{k−1}}^{x_k} |g| |l_k| dx ≥ (m_k h (1 − ζ_r)/2)(C_min (1 − ζ_r)h/2)(y_1² + y_2²)^{1/2} = B_* G_k h² (y_1² + y_2²)^{1/2},
where B_* = (3/4)^{2−α} C_min (1 − ζ_r)²/4.
(b) Let k = 1. On τ_1 = (0, h), we have g(x) = α x^{α−1} − h^{α−1}. It is easy to see that g(x*_1) = 0, where x*_1 = ζh with ζ = ζ(α) = α^{1/(1−α)}. Since ζ(α) is increasing for 1/2 < α < 3/2 (with ζ redefined by continuity at α = 1), we have ζ ≤ ζ* ≡ ζ(3/2) = (2/3)² = 4/9. Set x̄_1 = ζ* h + (1 − ζ*)h/2. Since |g(x)| is increasing in (x*_1, h), we have |g(x)| ≥ ḡ_min ≡ |g(x̄_1)| on (x̄_1, h). Therefore,
  ∫_0^h |g| |l_1| dx ≥ ∫_{x̄_1}^h |g| |l_1| dx ≥ ḡ_min ∫_{x̄_1}^h |l_1| dx = C_1 G_1 h ∫_{x̄_1}^h |l_1| dx,
where, noting that x̄_1 = (1 + ζ*)h/2 and G_1 = α|α − 1| 2^{2−α} h^{α−2},
  C_1 = ḡ_min/(G_1 h) = 2^{α−2} |α((1 + ζ*)/2)^{α−1} − 1| / (α|α − 1|).
As before, we can show that
  ∫_{x̄_1}^h |l_1| dx ≥ C_min (1 − ζ*) h (y_1² + y_2²)^{1/2} / 2,
and therefore
  ∫_0^h |g| |l_1| dx ≥ B̄_* G_1 h² (y_1² + y_2²)^{1/2},  where B̄_* = C_1 C_min (1 − ζ*)/2.
Finally, defining C_{**} = min(B_*, B̄_*) and recalling that g = [σ^{(k)}]', we get the desired result.

Now, for k ≠ k^*, consider the diagonal matrix D^{(k)} = diag(δ^{(k)}_1, δ^{(k)}_2) with δ^{(k)}_1 = δ^{(k)}_2 = G_k^{−1} h^{−3/2}, and set Â^{(k)}_{22} = D^{(k)} A^{(k)}_{22} D^{(k)}. The diagonal elements of Â^{(k)}_{22} (see Lemma 5.2) are
  b̄^{(k)}_{11} = (1/(G_k² h³)) ∫_{τ_k} N_{k−1}² ([σ^{(k)}]')² dx,  b̄^{(k)}_{22} = (1/(G_k² h³)) ∫_{τ_k} N_k² ([σ^{(k)}]')² dx.
Using Lemmas 5.4 and 5.3 (together with the Cauchy–Schwarz inequality for the lower bound), it is clear that
  C_{**}² ≤ b̄^{(k)}_{11} ≤ (1/(G_k² h³)) ∫_{τ_k} ([σ^{(k)}]')² dx ≤ (C^*)²,
where C_{**}, C^* are independent of k and h. Similarly,
  C_{**}² ≤ b̄^{(k)}_{22} ≤ (1/(G_k² h³)) ∫_{τ_k} ([σ^{(k)}]')² dx ≤ (C^*)².
We let D^{(k^*)} = [δ^{(k^*)}_1] with δ^{(k^*)}_1 = G_{k^*}^{−1} h^{−3/2}. Using similar arguments we show that C_{**}² ≤ b̄^{(k^*)}_{11} ≤ (C^*)², where Â^{(k^*)}_{22} = D^{(k^*)} A^{(k^*)}_{22} D^{(k^*)} = [b̄^{(k^*)}_{11}]. Thus the diagonal elements of Â^{(k)}_{22} are O(1) for all τ_k ∈ K_enr.

We next show that the element matrices A^{(k)}_{22} satisfy Assumption 2.

Proposition 5.5 For τ_k ∈ K_enr, the matrices A^{(k)}_{22} satisfy Assumption 2.
Proof: Suppose k ≠ k^* and let x = (x_1, x_2)^T ∈ R². Then, using Lemma 5.2, we have
  x^T A^{(k)}_{22} x = b^{(k)}_{11} x_1² + 2 b^{(k)}_{12} x_1 x_2 + b^{(k)}_{22} x_2²
   = ∫_{τ_k} [x_1² N_{k−1}² + 2 x_1 x_2 N_{k−1} N_k + x_2² N_k²] ([σ^{(k)}]')² dx
   = ∫_{τ_k} [x_1 N_{k−1} + x_2 N_k]² ([σ^{(k)}]')² dx ≤ (x_1² + x_2²) ∫_{τ_k} ([σ^{(k)}]')² dx,
since N_{k−1}² + N_k² ≤ 1 on τ_k; using Lemma 5.3, we get
  x^T A^{(k)}_{22} x ≤ (C^*)² G_k² h³ ‖x‖² = (C^*)² ‖[D^{(k)}]^{−1} x‖².
Next, applying Lemma 5.4 with l_k = x_1 N_{k−1} + x_2 N_k (so that y_1 = x_1, y_2 = x_2) and the Cauchy–Schwarz inequality, it is immediate that
  x^T A^{(k)}_{22} x = ∫_{τ_k} [x_1 N_{k−1} + x_2 N_k]² ([σ^{(k)}]')² dx ≥ (1/h) ‖[σ^{(k)}]' l_k‖²_{L¹(τ_k)} ≥ C_{**}² G_k² h³ (x_1² + x_2²) = C_{**}² ‖[D^{(k)}]^{−1} x‖².
Similar bounds for x^T A^{(k^*)}_{22} x, x ∈ R¹, also hold. Thus Assumption 2 is satisfied with L = C_{**}² and U = (C^*)².

Next, recalling that A_2 is k^* × k^*, we choose the diagonal matrix D_2 = diag(d_1, d_2, ..., d_{k^*}) with d_j = (A_2)_{jj}^{−1/2}. Clearly, the diagonal elements of Â_2 = D_2 A_2 D_2 are equal to 1. Note that (A_2)_{jj}, 1 ≤ j ≤ k^*, is associated with the vertex x_{j−1} ∈ T^e.
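The two-sided bound of Lemma 5.3 is also easy to probe numerically. This sketch (our own check; α, the mesh sizes, and the quadrature order are arbitrary choices of ours) evaluates the ratio ‖[σ^{(k)}]'‖_{L²(τ_k)}/(G_k h^{3/2}) and verifies that it stays in a fixed interval, uniformly in k and h:

```python
import numpy as np

def lemma53_ratio(k, h, alpha, npts=40):
    # [sigma^(k)]'(x) = alpha x^(alpha-1) - (x_k^alpha - x_{k-1}^alpha)/h on tau_k
    xl, xr = (k - 1) * h, k * h
    gx, gw = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * h * gx + 0.5 * (xl + xr)
    w = 0.5 * h * gw
    g = alpha * x**(alpha - 1.0) - (xr**alpha - xl**alpha) / h
    norm = np.sqrt(np.sum(w * g * g))                 # L2 norm on tau_k
    Gk = abs(alpha * (alpha - 1.0)) * ((k - 0.5) * h)**(alpha - 2.0)
    return norm / (Gk * h**1.5)

alpha = 0.75
ratios = [lemma53_ratio(k, h, alpha)
          for k in (2, 3, 10, 100) for h in (1e-2, 1e-4)]
# bounded above and below, uniformly in k and h (cf. Lemma 5.3)
assert min(ratios) > 0.1 and max(ratios) < 1.0
assert max(ratios) / min(ratios) < 3.0
```

(The case k = 1 is excluded here because the Gauss rule converges slowly near the singular endpoint; the text handles it analytically in part (b) of the proof.)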
Also, (A_2)_{11} = b^{(1)}_{11} and (A_2)_{jj} = b^{(j−1)}_{22} + b^{(j)}_{11} for 2 ≤ j ≤ k^*.

Now, for x_i ∈ T^e with i ≠ 0, k^*−1 (note that x_i ∉ T^e for k^* ≤ i ≤ N), we have K_i = {τ_i, τ_{i+1}}, l(i, i) = 2, l(i, i+1) = 1, and K*_i = K_i. Also K_0 = {τ_1}, l(0, 1) = 1, K*_0 = K_0, and K_{k^*−1} = {τ_{k^*}}, l(k^*−1, k^*) = 1, K*_{k^*−1} = K_{k^*−1}. Therefore, from (4.18), we have
  Δ_i = [δ^{(i)}_2]^{−2} + [δ^{(i+1)}_1]^{−2}  for x_i ∈ T^e, i ≠ 0, k^*−1;
also Δ_0 = [δ^{(1)}_1]^{−2} and Δ_{k^*−1} = [δ^{(k^*)}_1]^{−2}. We now show that Assumption 3 is satisfied.

Proposition 5.6 Let (A_2)_{jj}, 1 ≤ j ≤ k^*, be the diagonal elements of A_2, and consider Δ_i for x_i ∈ T^e, defined above. Then Assumption 3 is satisfied.
Proof: Let x_i ∈ T^e with i ≠ 0, k^*−1. Then x_i is associated with (A_2)_{j_i j_i}, where j_i = i + 1. Therefore, using the definitions of (A_2)_{i+1,i+1}, δ^{(i+1)}_1, and δ^{(i)}_2, we have
  (A_2)_{j_i j_i}^{−1} Δ_i = (A_2)_{i+1,i+1}^{−1} Δ_i = G_i² h³/(b^{(i)}_{22} + b^{(i+1)}_{11}) + G_{i+1}² h³/(b^{(i)}_{22} + b^{(i+1)}_{11}).  (5.18)
Now, using Lemmas 5.4 and 5.3 and (5.6), it is immediate that
  C_{**}² ≤ b^{(i)}_{22}/(G_i² h³) ≤ (C^*)²,  C_{**}² 3^{−2(2−α)} ≤ b^{(i+1)}_{11}/(G_i² h³) ≤ (C^*)²,
and therefore
  1/(2(C^*)²) ≤ G_i² h³/(b^{(i)}_{22} + b^{(i+1)}_{11}) ≤ 1/(C_{**}²(1 + 3^{−2(2−α)})).  (5.19)
Similarly, we get
  1/((C^*)²(1 + 3^{2(2−α)})) ≤ G_{i+1}² h³/(b^{(i)}_{22} + b^{(i+1)}_{11}) ≤ 1/C_{**}²,
and combining (5.18) and (5.19), we infer that there exist constants L, U such that L ≤ (A_2)_{j_i j_i}^{−1} Δ_i ≤ U, where
  L = 1/(2(C^*)²) + 1/((C^*)²(1 + 3^{2(2−α)})),  U = 1/(C_{**}²(1 + 3^{−2(2−α)})) + 1/C_{**}².
Thus Assumption 3 holds for x_i ∈ T^e, i ≠ 0, k^*−1. The proofs for x_i ∈ T^e with i = 0, k^*−1 are similar. Therefore, from Theorem 4.12, K(A) = O(h^{−2}) and the GFEM with S = S_1 + S_2 is an SGFEM.

We now address a problem which is different from (2.1). Let Ω = (0, 1), and set Ω_l = (0, c) and Ω_r = (c, 1) with 0 < c < 1. Define
  H(Ω) := { v ∈ L²(Ω) : v(0) = v(1) = 0, ∫_{Ω_l} (v')² dx < ∞ and ∫_{Ω_r} (v')² dx < ∞ }.
Then (H(Ω), ‖·‖_H) is a Hilbert space, where ‖v‖_H := ( |v|²_{H¹(Ω_l)} + |v|²_{H¹(Ω_r)} )^{1/2}.
We note that H¹(Ω) ⊂ H(Ω), and functions in H(Ω) may have a jump discontinuity at x = c. For f ∈ L²(Ω), we consider the problem
  u ∈ H(Ω),  B(u, v) = F(v),  ∀ v ∈ H(Ω),  (5.20)
where
  B(u, v) := ∫_{Ω_l} u'v' dx + ∫_{Ω_r} u'v' dx  and  F(v) := ∫_Ω f v dx.
The bilinear form B(·, ·) is coercive and bounded in H(Ω). Also, F(·) is a bounded linear functional on H(Ω). Thus the problem (5.20) has a unique solution. If f and the solution u ∈ H(Ω) of (5.20) are smooth in Ω_l and Ω_r, then u is the solution of the boundary value problem
  −u'' = f on Ω_l,  −u'' = f on Ω_r,  u(0) = u(1) = 0,  and  u'(c−) = u'(c+) = 0.
This problem mimics the problem with a crack in higher dimensions, where the solution is discontinuous along the crack line away from the crack-tip. We now give a characterization of the solution of (5.20). We will use the Heaviside function
  H_c(x) = 1 for 0 ≤ x < c;  H_c(x) = −1 for c ≤ x ≤ 1.  (5.21)

Lemma 5.7 Suppose u ∈ H(Ω) is such that u'(c−) = u'(c+) = 0 and
  ∫_{Ω_l} (u'')² dx < ∞,  ∫_{Ω_r} (u'')² dx < ∞.
Then u(x) = s(x) + ũ(x), where s is a step function with a discontinuity at x = c and ũ ∈ H²(Ω).
Proof: We first note that u(c−) and u(c+) are well defined. We define
  ũ = u − ((u(c−) − u(c+))/2) [H_c − 1],  (5.22)
where H_c(x) is given in (5.21). It is easy to check that
  ũ|_{Ω_l} = u|_{Ω_l},  ũ(c−) = u(c−),  ũ'|_{Ω_l} = u'|_{Ω_l},  and  ũ'(c−) = u'(c−).
Similarly,
  ũ|_{Ω_r} = u|_{Ω_r} + [u(c−) − u(c+)],  ũ(c+) = u(c−),  ũ'|_{Ω_r} = u'|_{Ω_r},  and  ũ'(c+) = u'(c+).
Thus ũ(c−) = ũ(c+) = u(c−). We define ũ(c) = u(c−), so that ũ is continuous at x = c and thus continuous on Ω. Also, since u'(c−) = u'(c+) = 0, it is clear from the above that ũ'(c−) = ũ'(c+) = 0. We define ũ'(c) = 0, so that ũ' is continuous at x = c, and consequently ũ' is continuous on Ω. Moreover,
  ∫_Ω (ũ'')² dx = ∫_{Ω_l} (ũ'')² dx + ∫_{Ω_r} (ũ'')² dx = ∫_{Ω_l} (u'')² dx + ∫_{Ω_r} (u'')² dx < ∞.
Thus ũ ∈ H²(Ω) and, considering s = ((u(c−) − u(c+))/2)[H_c −
1] in (5.22), we get the desired result.

Suppose c ∉ T, i.e., c is not a vertex of the mesh. Then there exists an m such that c ∈ τ_{m+1}, and hence c ∈ ω_m ∩ ω_{m+1}. We assume that m ≠ 1, N; this is always achieved for h small. Since u(0) = u(1) = 0, we consider T⁰ = T \ {x_0, x_N} (see Remark 4.6). For 1 ≤ i ≤ N−1, we consider V_i = span{1, ϕ^{[i]}_1 = (x − x_i), ϕ^{[i]}_2 = H_c(x)}, and we set V_0 = span{ϕ^{[0]}_1 = (x − x_0)} and V_N = span{ϕ^{[N]}_1 = (x − x_N)}. Note that V_i ⊂ H(ω_i) for i ∈ I (i.e., for x_i ∈ T). Clearly, the modified space Ṽ_i = {0} for i ∈ I \ {m, m+1}. We set T^e = {x_m, x_{m+1}} ⊂ T⁰ and define S_2 = N_m Ṽ_m + N_{m+1} Ṽ_{m+1}. We consider the GFEM with S = S_1 + S_2.

Since ϕ^{[m]}_2 = H_c is constant in τ_m, the modified enrichment ϕ̃^{[m]}_2 = H_c − I_{ω_m} H_c satisfies ϕ̃^{[m]}_2|_{τ_m} = 0. Similarly, ϕ̃^{[m+1]}_2|_{τ_{m+2}} = 0. Therefore K_enr = {τ_{m+1}}. Moreover, the functions ϕ̃^{[m]}_2, ϕ̃^{[m+1]}_2 are discontinuous at x = c, their values are zero at x = x_m, x_{m+1}, and ϕ̃^{[m]}_2|_{τ_{m+1}} = ϕ̃^{[m+1]}_2|_{τ_{m+1}}.

We assume that f is such that the solution u ∈ H(Ω) of (5.20) satisfies the assumptions of Lemma 5.7, so that u = s + ũ, where s is a step function with a discontinuity at x = c and ũ ∈ H²(Ω). We now address the convergence of the GFEM solution. We first note that Theorem 4.7 holds for u ∈ H(Ω) with E(Ω), E(ω_i) replaced by H(Ω), H(ω_i) and with V_i ⊂ H(ω_i). Now, for x_i ∈ T⁰ \ T^e, we have u ∈ H²(ω_i), and from the standard interpolation result,
  ‖u − I_{ω_i} u‖_{H(ω_i)} = ‖u − I_{ω_i} u‖_{E(ω_i)} ≤ Ch |u|_{H²(ω_i)} = Ch |ũ|_{H²(ω_i)}.  (5.23)
For x_m ∈ T^e, it is easy to show that there exists ξ̄_m ∈ Ṽ_m such that u − I_{ω_m} u − ξ̄_m = ũ − I_{ω_m} ũ on ω_m. Therefore,
  ‖u − I_{ω_m} u − ξ̄_m‖_{L²(ω_m)} ≤ C |ω_m| ‖u − I_{ω_m} u − ξ̄_m‖_{E(ω_m)},
and from the standard interpolation result, we have
  ‖u − I_{ω_m} u − ξ̄_m‖_{H(ω_m)} = ‖ũ − I_{ω_m} ũ‖_{H(ω_m)} ≤ Ch |ũ|_{H²(ω_m)}.  (5.24)
Similarly, there exists ξ̄_{m+1} ∈ Ṽ_{m+1} such that
  ‖u − I_{ω_{m+1}} u − ξ̄_{m+1}‖_{H(ω_{m+1})} ≤ Ch |ũ|_{H²(ω_{m+1})}.
Therefore, combining (5.23), (5.24), (5.3) and using Theorem 4.7 with the modifications mentioned above, we infer that there exists v ∈ S = S_1 + S_2 such that
  ‖u − v‖_{H(Ω)} ≤ Ch |ũ|_{H²(Ω)}.
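The conditioning analysis that follows rests on the 2×2 block of the enriched shape functions on τ_{m+1}. As a sanity check, the sketch below (our own illustration, not from the paper; the helper name and quadrature are our choices) assembles that block for σ = H_c − I_ω H_c, i.e. the Heaviside enrichment with its piecewise linear interpolant subtracted (our reading of the construction above), scales it by δ = h^{1/2}/2, and checks that its eigenvalues stay within fixed O(1) bounds for all β ∈ (0, 1), where c = x_m + βh:

```python
import numpy as np

def enrichment_block(beta, h=1.0, npts=20):
    # tau_{m+1} = (0, h) in local coordinates, discontinuity at c = beta*h.
    # sigma(x) = H_c(x) - (1 - 2x/h), the linear interpolant of H_c through
    # (0, 1) and (h, -1) subtracted from H_c.  Enriched shape functions:
    # N_m*sigma and N_{m+1}*sigma, with N_m = 1 - x/h, N_{m+1} = x/h.
    gx, gw = np.polynomial.legendre.leggauss(npts)
    A = np.zeros((2, 2))
    for lo, hi in [(0.0, beta * h), (beta * h, h)]:       # smooth pieces
        x = 0.5 * (hi - lo) * gx + 0.5 * (lo + hi)
        w = 0.5 * (hi - lo) * gw
        Hc = 1.0 if hi <= beta * h else -1.0
        sigma = Hc - (1.0 - 2.0 * x / h)
        dsigma = 2.0 / h
        d1 = (-1.0 / h) * sigma + (1.0 - x / h) * dsigma  # (N_m sigma)'
        d2 = (1.0 / h) * sigma + (x / h) * dsigma         # (N_{m+1} sigma)'
        A[0, 0] += np.sum(w * d1 * d1)
        A[0, 1] += np.sum(w * d1 * d2)
        A[1, 1] += np.sum(w * d2 * d2)
    A[1, 0] = A[0, 1]
    return A

h = 0.01
lo_bound, hi_bound = 1.0 / 6.0, (5.0 + np.sqrt(13.0)) / 6.0
for beta in np.linspace(0.05, 0.95, 19):
    Ahat = (h / 4.0) * enrichment_block(beta, h)   # scaling delta = h^{1/2}/2
    lam = np.linalg.eigvalsh(Ahat)
    assert lo_bound - 1e-10 <= lam[0] and lam[1] <= hi_bound + 1e-10
```

Numerically, the eigenvalues stay between 1/6 (attained near β = 1/2) and (5+√13)/6 ≈ 1.434 (approached as β → 0 or 1), independent of h, in agreement with the bounds derived next.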
Thus ‖u − u_h‖_{H(Ω)} = O(h), where u_h is the GFEM solution.

We next address the scaled condition number of the stiffness matrix of the GFEM. Since τ_{m+1} is the only element in K_enr, we have A_2 = A^{(m+1)}_{22}. Let c = x_m + βh with 0 < β < 1. A direct computation yields
  A^{(m+1)}_{22} = (4/h) [ (4 − 9β + 6β²)/3    (−1 + 6β − 6β²)/3
                           (−1 + 6β − 6β²)/3   (1 − 3β + 6β²)/3 ].
Thus A^{(m+1)}_{22} is associated with the vertices x_m and x_{m+1}. We choose the diagonal matrix D^{(m+1)} = diag{δ^{(m+1)}_1, δ^{(m+1)}_2} with δ^{(m+1)}_1 = δ^{(m+1)}_2 = h^{1/2}/2. Then
  Â^{(m+1)}_{22} = D^{(m+1)} A^{(m+1)}_{22} D^{(m+1)} = [ (4 − 9β + 6β²)/3    (−1 + 6β − 6β²)/3
                                                          (−1 + 6β − 6β²)/3   (1 − 3β + 6β²)/3 ].
The diagonal elements of Â^{(m+1)}_{22} are O(1) for any 0 < β < 1. The eigenvalues of Â^{(m+1)}_{22} are
  λ_1 = (5 − 12β + 12β² − T)/6  and  λ_2 = (5 − 12β + 12β² + T)/6,
where T = √(13 − 84β + 228β² − 288β³ + 144β⁴) (obtained from MAPLE). It can be shown that
  1/6 ≤ λ_1 ≤ (5 − √13)/6,  1/2 ≤ λ_2 ≤ (5 + √13)/6.
Thus, as before, we have
  (1/6) ‖[D^{(m+1)}]^{−1} x‖² ≤ x^T A^{(m+1)}_{22} x ≤ ((5 + √13)/6) ‖[D^{(m+1)}]^{−1} x‖²,  ∀ x ∈ R²,
and Assumption 2 is satisfied with L = 1/6 and U = (5 + √13)/6. We choose D_2 = diag{d_1, d_2} with d_i = (A_2)_{ii}^{−1/2}. Clearly, the diagonal elements of Â_2 = D_2 A_2 D_2 are equal to 1. As in the first example of Section 5.1, we have T^e = {x_m, x_{m+1}} and K_m = K_{m+1} = {τ_{m+1}}. Therefore, l(m, m+1) = 1 and l(m+1, m+1) = 2. Also K*_m = K*_{m+1} = {τ_{m+1}}, and therefore from (4.18), we have Δ_m = [δ^{(m+1)}_1]^{−2} and Δ_{m+1} = [δ^{(m+1)}_2]^{−2}. Also, x_m, x_{m+1} ∈ T^e are associated with (A_2)_{y_m y_m}, (A_2)_{y_{m+1} y_{m+1}}, respectively, where y_m = 1 and y_{m+1} = 2. It is easy to check that
  3/4 ≤ (A_2)_{11}^{−1} Δ_m, (A_2)_{22}^{−1} Δ_{m+1} ≤ 24/5.
Thus Assumption 3 is satisfied with L = 3/4 and U = 24/5. Therefore, from Theorem 4.12, we have K(A) = O(h^{−2}), where A is the stiffness matrix, for all 0 < β < 1.

The GFEM uses special enrichment functions, based on the available (or extracted) information on the unknown solution of the underlying variational problem. The use of special enrichment functions gives rise to the excellent convergence properties of the GFEM. In fact, for a given problem, it is possible to choose several classes of enrichment functions such that the GFEM, employing each of these enrichment classes, will yield excellent convergence properties. However, the GFEM employing some of these classes of enrichments could be ill-conditioned, i.e., there could be a severe loss of accuracy in the computed solution of the linear system associated with the GFEM. The loss of accuracy could be much more than that experienced in a standard FEM. In this paper, we have presented and analyzed a modification of the GFEM, the stable GFEM (SGFEM), which does not have the problem of severe loss of accuracy. The SGFEM has all the advantages of the GFEM and is also very robust with respect to the parameters of the enrichments (e.g., the parameter β in Sections 5.1 and 5.3). The loss of accuracy is characterized by the scaled condition number and is expressed through Hypothesis H, which was validated based on various examples.

The abstract framework developed in this paper has been applied to a one-dimensional problem for clarity of exposition. This framework could also be applied to higher dimensional problems, which will be reported in a forthcoming paper.

Acknowledgement:
We thank Professor C. Armando Duarte of the Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, and Professor John E. Osborn of the Department of Mathematics, University of Maryland at College Park, for fruitful discussions on certain aspects of this paper. We also thank Mr. Karl Schulz of PECOS, ICES, University of Texas at Austin, for his help with the use of superLU and MUMPS on the Lonestar system at the Texas Advanced Computing Center, and also for various illuminating discussions and interpretation of the results presented in this paper.
Validation and Verification (V & V) is a fairly new field and is still in its developing stage ([1, 42, 7, 37]). Suppose a mathematical model of some "Reality" (e.g., a physical, chemical, or biological system or process), formulated for a particular goal or purpose, is given. The objective of V & V is to assess whether the predictions based on the computed solution of the mathematical model are reliable enough that they could be the basis for certain decisions related to the goal.

Validation is the process of building confidence in the mathematical model ([1, 42, 37]). The process is, of course, constrained by the cost, available time, and skills, as explicitly underlined in [45]. It is based on a set of properly selected problems and their mathematical models for which experimental data are available. These problems are called validation problems, and they are chosen with varying levels of complexity; more complex problems are closer to the "Reality". Of course, obtaining the experimental data for the validation problems with increasing complexity is increasingly costly. The prediction based on the computed solution of these problems is then compared with the experimental data. The assessment of the difference is based on a specified tolerance and a suitably selected metric (there could be more than one) relative to the specific goal. If the measure of the difference is larger than the tolerance for any validation problem, the mathematical model is rejected. If none of the validation problems leads to rejection, then one could have confidence that the mathematical model realistically describes the "Reality", with respect to the goal, beyond the scope of the chosen validation problems.
The level of confidence will be based on the tolerance as well as on the number and the selection of the validation problems. We mention that the set of validation problems is finite, their selection has a large subjective component, and a philosophical question about the justification of the confidence in the mathematical model could certainly be raised (see [28]).

Numerical algorithms and their properties obtained from mathematical analysis are always based on various assumptions that are not satisfied when the algorithm is implemented on a computer. For example, infinite precision arithmetic is often assumed while describing a numerical algorithm or stating an inference about the algorithm obtained from the analysis. However, this assumption is always violated by a computer working with finite precision arithmetic. The output from the computer implementation of the algorithm may also depend, for example, on the package in which the algorithm has been implemented, the compiler, the processor, and the computing platform with single or multiple processors, among other factors. Consequently, the output may vary even when the same outcome is predicted by the mathematical analysis for two different algorithms. For example, suppose the solution of the linear system Ax = b is sought using algorithms of the form P_i^T A P_i z = P_i^T b, x = P_i z, where P_i is a permutation matrix, for i = 1, 2. Both algorithms should yield the solution x; however, the computed solutions could be different (see Problems 1a,b below). Thus the implementation of a numerical algorithm on a computer is analogous to a "Reality"; the goal is to obtain a particular quantity of interest for a particular purpose (related to a decision). The mathematical model of this "Reality" is the inference obtained from the mathematical analysis, or other statements based on the inference, about obtaining the quantity of interest from the algorithm. Therefore, the process of validation of the inference has to be performed to have confidence in the inference or in a statement based on the inference.

We have briefly formulated the following hypothesis in the Introduction. Let Ax = b, x, b ∈ R^n, be a linear system, where the n × n matrix A belongs to a class of sparse matrices that includes the stiffness matrices associated with the FEM, GFEM, or SGFEM. Let x̂ be the computed solution of the linear system, obtained from an elimination method, e.g., some variant of Gaussian elimination. Moreover, x̂ is computed in finite precision arithmetic with machine precision ε. Let H = DAD, where D is a diagonal matrix with D_ii = A_ii^{−1/2}; clearly, H_ii = 1. Recall that the scaled condition number K(A) of A is given by K(A) := κ(H), where κ(H) = ‖H‖_2 ‖H^{−1}‖_2 is the condition number of H based on the ‖·‖_2 vector norm. Also recall that η := ‖x − x̂‖_2/‖x‖_2.

Hypothesis H: For n not small,
  η ≈ C n^β K(A) ε;  β ≈ 0,  (7.1)
where x̂ has been computed in a computing environment satisfying the IEEE standard for floating point arithmetic (with the guard digit), there is no overflow or underflow during the computation of x̂, and C, β do not depend on n or on the other factors mentioned before.

The ≈ in (7.1) means that there exist 0 < C̄_1 ≤ C̄_2 and 0 < β̄ small such that η = C n^β K(A) ε with C̄_1 ≤ C ≤ C̄_2 and |β| ≤ β̄.
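Hypothesis H can be probed with a few lines of code. The sketch below is our own illustration, not one of the experiments of this paper: numpy's LAPACK solver applies partial pivoting (unlike the elimination without pivoting used in Problem 1a later), and the mesh sizes and the factor 100 in the first assertion are our choices. It builds the tridiagonal FEM system for −u'' = 0 on (0, 1) with u(0) = 0, u(1) = 1, and checks that the observed η is a modest multiple of K(A)ε and that K(A) grows like N²:

```python
import numpy as np

def fem_system(N):
    # Piecewise linear FEM: A = (1/h) tridiag(-1, 2, -1) for the N-1 interior
    # unknowns; the Dirichlet value u(1) = 1 contributes b_{N-1} = 1/h;
    # the exact solution vector is x_i = i*h.
    h = 1.0 / N
    A = (np.diag(2.0 * np.ones(N - 1)) + np.diag(-np.ones(N - 2), 1)
         + np.diag(-np.ones(N - 2), -1)) / h
    b = np.zeros(N - 1)
    b[-1] = 1.0 / h
    return A, b, h * np.arange(1, N)

def scaled_cond(A):
    # K(A) = kappa(DAD) with D_ii = A_ii^{-1/2}
    d = 1.0 / np.sqrt(np.diag(A))
    return np.linalg.cond(d[:, None] * A * d[None, :])

eps = np.finfo(float).eps
for N in (100, 400, 800):
    A, b, xexact = fem_system(N)
    eta = np.linalg.norm(np.linalg.solve(A, b) - xexact) / np.linalg.norm(xexact)
    K = scaled_cond(A)
    assert eta <= 100 * K * eps        # eta is a modest multiple of K(A)*eps
    assert 0.2 * N**2 < K < 0.8 * N**2 # K(A) grows like N^2 (about 0.4 N^2 here)
```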
This hypothesis also addresses the range of N for which not (almost) all digits of accuracy are lost (see Problem 3a). Hypothesis H is based on certain mathematical inferences (results), which we will discuss later. The validation of (7.1) with respect to the tolerance τ = {τ_1, τ_2} means that C̄_2/C̄_1 ≤ τ_1 and β̄ ≤ τ_2. Note that τ_2 is primary and should be small for confidence in (7.1); however, τ_1 could be allowed to be larger. The set of validation problems consists of stiffness matrices of the FEM, GFEM, SGFEM, and other similar matrices, e.g., arising in the finite difference method, applied to solve various linear elliptic variational problems of increasing complexity. For confidence in the hypothesis, we require that (7.1) is not rejected for any of the validation problems relative to the given tolerance τ. We note that it is always possible to select a tolerance such that the hypothesis is not rejected; however, the tolerance has to be admissible (e.g., reasonably small) for the decision making process. In our case, the decision will be whether to accept the SGFEM over the standard GFEM. We note that the class of matrices for which the hypothesis will be validated is not precisely defined, similar to a class of complex physical or engineering problems.

We now give a theoretical rationale for (7.1). There is a large literature on the accuracy of computed solutions of the linear system Ax = b. We particularly mention the classic [51] and a modern book [26] with an excellent survey of the theoretical results in the area. Typically, the loss of accuracy in the numerical solution due to round-off is analyzed by backward error analysis. This analysis shows that the computed solution is the exact solution of a perturbed linear system, and it provides estimates of the perturbations in terms of the data of the linear system. A bound on the loss of accuracy in the computed solution, measured by η defined before, is then obtained using the perturbation estimates.

It is well known from a standard perturbation argument that for a full matrix A,
  η ≤ f(n) κ(A) ε,  (7.2)
where ε is the machine precision and f(n) depends on the algorithm used to solve Ax = b (see, e.g., [24, 27]). In Hypothesis H, we hypothesize that κ(A) is replaced by K(A). We also hypothesize that C̄_1 n^β ≤ f(n) ≤ C̄_2 n^β and |β| ≤ β̄, where C̄_1, C̄_2, and β̄ are as defined before. It is important to note that in the mathematical literature only an upper bound on η is available; in contrast, Hypothesis H addresses both upper and lower bounds on η.

Consider the linear system DADz = Db, where D is a diagonal matrix with D_ii = 2^{g_i} in the range of the floating point system. Clearly, x = Dz. We now cite the following old result of F. L. Bauer ([8]):

Theorem 7.1
Let x̂, ẑ be the computed solutions of the linear systems Ax = b and DADz = Db, respectively, obtained from an elimination method with no pivoting. Furthermore, we assume that there is no overflow or underflow in the computation of x̂, ẑ. Then all the digits of x̂ and x̂_D are the same, where x̂_D = Dẑ.

We note that the result of the above theorem is not true if the diagonal elements of D are not binary (i.e., not powers of 2). However, in that situation, the quantities ‖x − x̂‖, ‖x − x̂_D‖, and ‖x̂ − x̂_D‖ are of the same order. We next note that it is possible to find a diagonal matrix D such that κ(DAD) ≤ κ(A). For example, for µ > 0, let
  A = [ µ 0
        0 1 ].
Then κ(A) = 1/µ is large for µ small. Let µ = χ 2^{−d}, where 1 ≤ χ < 2, and consider
  D = [ 2^{d/2} 0
        0       1 ].
Then
  DAD = [ χ 0
          0 1 ],
and 1 ≤ κ(DAD) < 2; in particular, κ(DAD) < κ(A) for µ small. Let D* be the diagonal matrix such that κ(D*AD*) = min_D κ(DAD) (minimum over all diagonal matrices D with binary diagonal elements); then from the above theorem and (7.2), we have
  η ≤ f(n) κ(D*AD*) ε ≤ f(n) κ(A) ε.
Thus κ(D*AD*) provides more accurate information about η than κ(A). But in general, it is not easy to find either D* or κ(D*AD*). In Hypothesis H, we used K(A) = κ(DAD), where D is a diagonal matrix with D_ii = A_ii^{−1/2}, and D_ii may not be binary. We note, however, that not using binary D_ii only influences C̄_1, C̄_2 by factors between 1/2 and 2. We further note that for symmetric positive definite linear systems solved by Cholesky decomposition, a bound on η is available with f(n) = Cn and κ(A) replaced by K(A). In Hypothesis H, we used f(n) = Cn^β, β ≈ 0, based on our computational experience.

We now consider a set of validation problems whose exact solutions (experimental data) are known. The solutions to these problems will be computed on various computers using double precision, i.e., with 16 digits of accuracy.
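Bauer's result rests on the fact that binary scaling is exact in floating point: multiplying by a power of two only shifts the exponent. A quick check of this, and of the 2×2 diagonal-scaling example above (our own sketch; the particular numbers are arbitrary):

```python
import numpy as np

# Scaling by a power of two is exactly invertible (exponent shift only):
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
assert np.all((x * 2.0**10) / 2.0**10 == x)

# A non-binary factor generally rounds: 0.1*3.0 gives 0.30000000000000004,
# and dividing back by 3.0 does not return exactly 0.1:
assert (0.1 * 3.0) / 3.0 != 0.1

# Binary diagonal scaling can reduce the condition number exactly,
# as in the example above with mu = 2^{-40} (so chi = 1), D = diag(2^{20}, 1):
mu = 2.0**-40
A = np.diag([mu, 1.0])
D = np.diag([2.0**20, 1.0])
assert np.linalg.cond(A) > 1e10
assert abs(np.linalg.cond(D @ A @ D) - 1.0) < 1e-12
```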
Problem 1:
We consider approximating the solution u(x) = x of the problem −u''(x) = 0, x ∈ (0, 1), u(0) = 0, u(1) = 1, by the FEM using piecewise linear finite elements.

Problem 1a: We use the FE mesh with vertices x_i = ih for i = 0, 1, ..., N and h = 1/N. The FE solution is the same as the exact solution u of the problem. Let the associated linear system be A^{(1)}x^{(1)} = b^{(1)}. The exact solution vector x^{(1)} is known, namely, x^{(1)}_i = ih, i = 1, 2, ..., N. We will solve the linear system by the standard LU decomposition algorithm for sparse matrices without partial pivoting.

Problem 1b: We use the mesh vertices x_i = (N − i)h, i = 0, 1, ..., N. The FE solution is the same as the exact solution u; let the associated linear system be A^{(2)}x^{(2)} = b^{(2)}. It is known that x^{(2)}_i = (N − i + 1)h, i = 1, 2, ..., N. Note that the elements of x^{(2)} are the permuted elements of x^{(1)}, and thus ‖x^{(1)}‖ = ‖x^{(2)}‖. We will solve the linear system by the same algorithm as in Problem 1a.

The computations are performed on a Dell Latitude PC with an INTEL CORE(TM)2 CPU, 1.20GHZ.

Problem 2:
We approximate the solution u(x) = 1 of the problem −u''(x) = 0, x ∈ (0, 1), u(0) = 1, u(1) = 1, by the piecewise linear FEM based on the mesh vertices as in Problem 1a. Let the associated linear system be Ax = b. It is clear that the exact solution is given by x_i = 1, i = 1, 2, ..., N.

Problem 2a: The linear system is solved by the sparse matrix direct solver superLU [29] on a single processor.

Problem 2b: The linear system is solved by the sparse matrix direct solver MUMPS [2] on a single processor.

Problem 2c: The linear system is solved by MUMPS, using parallel computation, on 128 processors.

The computations were performed on the Lonestar system at the Texas Advanced Computing Center. Lonestar is a Linux-based cluster comprised of 1888 compute nodes connected via high-speed quad-data-rate InfiniBand, with each compute node containing two hex-core sockets (INTEL Xeon 5680 processors) for an aggregate system size of 22656 cores. Each core runs at a peak of 3.33GHZ.
Problem 3:
We consider approximating the solution u(x) = x² of the problem −u''(x) = −2, x ∈ (0, 1), u(0) = 0, u'(1) = 2, by the GFEM based on S = S_1 + S_2 (see (3.4)). We use n_i = 1 and ϕ^{[i]}_1(x) = x², i = 0, 1, ..., N. We order the shape functions as N_0ϕ^{[0]}_1, N_1, N_1ϕ^{[1]}_1, N_2, ..., N_N, N_Nϕ^{[N]}_1, and suppose the associated linear system is Ax = b, where A is of order 2N + 1. The GFEM solution is the same as the exact solution u. It is easy to see that x_{2i+1} = 1, i = 0, 1, ..., N, and x_{2i} = 0, i = 1, 2, ..., N. The linear system is solved by the same algorithm and on the same platform as in Problem 1a.

Problem 4:
We consider approximating the solution of the same problem inProblem 3 by the SGFEM based on S = S + S (see (4.10)) with n i = 1, T = T , and ϕ [ i ]1 = x −I ω i x . We order the shape functions as N ϕ [0]1 , N , N ϕ [1]1 , N , · · · , N N , N N ϕ [ N ]1 and suppose the associated stiffness matrixis Ax = b , where A is of the order 2 N + 1. The GFEM solution is same43 −15 −14 −13 −12 −11 −10 −9 η N slope=2(a) −15 −14 −13 −12 −11 −10 −9 η N(b) slope=2
Figure 1: Log-log plots of η ( k ) = k x ( k ) − ˆ x ( k ) k / k x ( k ) k where ˆ x ( k ) is the computedsolution of A ( k ) x ( k ) = b ( k ) , k = 1 ,
2, associated with FEM with vertices x i = ih and x i = ( N − i ) h , i = 0 , , · · · , N , respectively. η (1) , η (2) have been computed andpresented in (a) and (b), respectively, for N = 100 , , · · · , . as the exact solution u and it is easy to see that x i +1 = 1, i = 0 , , · , N and x i = ( ih ) , i = 1 , , · · · , N . The linear system is solved by the samemethod and on the same platform as in Problem 1a.We will now validate Hypothesis H based on the validation problems de-scribed above. We will consider the tolerance τ = ( τ , τ ), with τ = 400 and τ = 0.Let ˆ x (1) and ˆ x (2) be the computed solutions of the linear systems A (1) x (1) = b (1) and A (2) x (2) = b (2) of Problem 1a and
Problem 1b , respectively. It canbe shown that for large N , K ( A (1) ) = K ( A (2) ) ≈ . N . We have computed andpresented the log-log plots of the relative errors η ( k ) = k x ( k ) − ˆ x ( k ) k / k x ( k ) k , k = 1 ,
2, with respect to N = 100 , , · · · , C ( k )1 [0 . N ] ≤ η ( k ) ≤ ¯ C ( k )2 [0 . N ] for k = 1 , C / ¯ C ≤ < τ (note τ = 0). Thus we do not reject Hypothesis H. Note that we did not reject thehypothesis based only on the subset of meshes with the values of N , mentionedabove. Moreover, it is interesting to note that the plots of η (1) and η (2) are quitedifferent. Thus the computed solution is affected by changing the order of theFE mesh vertices, in spite of the fact that k x (1) k = k x (2) k .In Problem 2 , we solve the linear system Ax = b using two different soft-ware superLU and MUMPS ; we also implement
MUMPS on multiple processors. Let $\hat{x}^{(a)}$, $\hat{x}^{(b)}$, $\hat{x}^{(c)}$ be the computed solutions of Problems 2a, 2b, and 2c, respectively. These solutions were computed for $10 \le N \le 10^7$: 90 values of $N$ in the range $[10, 10^2)$, 400 values of $N$ in the range $[10^2, 10^3)$, 360 values of $N$ in each of the ranges $[10^i, 10^{i+1})$, $i = 3, 4, 5, 6$, and $N = 10^7$. We presented the log-log plots of $\eta^{(k)} = \|x - \hat{x}^{(k)}\| / \|x\|$, $k = a, b$, for the values of $N$ given before, in Figures 2a and 2b, respectively. We observed that for $N$ large enough, $\bar{C}_2 / C_1 < \tau_1$ for both the problems. Thus we do not reject the Hypothesis H for these values of $N$.

Figure 2: (a) Log-log plot of $\eta^{(a)} = \|x - \hat{x}^{(a)}\| / \|x\|$ with respect to $N$, where $\hat{x}^{(a)}$ is the computed solution of Problem 2a (using superLU). (b) Log-log plot of $\eta^{(b)} = \|x - \hat{x}^{(b)}\| / \|x\|$ with respect to $N$, where $\hat{x}^{(b)}$ is the computed solution of Problem 2b (using MUMPS). (c) Semi-log plot of $100\,(\eta^{(c)} - \eta^{(b)}) / \eta^{(b)}$ with respect to $N$, where $\eta^{(c)} = \|x - \hat{x}^{(c)}\| / \|x\|$ and $\hat{x}^{(c)}$ is the computed solution of Problem 2c (using MUMPS with 128 processors). Proportionally distributed 1931 values of $N$ in the interval $[10, 10^7]$ are used in all the figures; panels (a) and (b) show a reference line of slope 2.

It is also clear from Figure 2b that the implementation of the algorithm in MUMPS changes drastically for large $N$; this is not the case with superLU, as seen in Figure 2a. Thus the computed solution depends on the software package, as mentioned before. For Problem 2c, we did not display the log-log plot of $\eta^{(c)} = \|x - \hat{x}^{(c)}\| / \|x\|$, as it would be very similar to the plot of $\eta^{(b)}$ in Figure 2b. However, we computed $R_e \equiv 100\,(\eta^{(c)} - \eta^{(b)}) / \eta^{(b)}$, the "signed relative difference percent", and presented the semi-log plot of $R_e$ in Figure 2c for the same values of $N$ given before. For moderate $N$, $R_e$ is small; it starts to oscillate for larger $N$. This again indicates that the implementation in MUMPS changes drastically. Figure 2c also suggests that $\eta^{(c)}$ is larger than $\eta^{(b)}$ for most values of $N$, and that $\eta^{(c)}$ gets closer to $\eta^{(b)}$ as $N$ increases.

Let $\hat{x}$ be the computed solution of the linear system $Ax = b$ of Problem 3. We have $K(A) = O(N^4)$ (see Section 3.1). The log-log plot of $\eta = \|x - \hat{x}\| / \|x\|$ has been presented with respect to $N = 50, \cdots$ in Figure 3(a), and for every value of $N$ in an interval starting at $N = 100$ in Figure 3(b). Based on both these data (i.e., the values of $\eta$ for every value of $N$ in the range starting at 100, and for $N = 1050, \cdots$), we observed that $C_1 N^4 \le \eta \le \bar{C}_2 N^4$ with $\bar{C}_2 / C_1 < \tau_1$ (note $\tau_2 = 0$). Thus we do not reject the Hypothesis H, again based on the subset of meshes with the values of $N$ mentioned above. It is important to note that in Problem 3a, all the digits of accuracy were lost for large $N$: for every value of $N$ with $N \ge 9000$ that we tested, $\eta$ was of the order 1 and oscillated around 1.

Figure 3: Plots of $\eta = \|x - \hat{x}\| / \|x\|$, where $\hat{x}$ is the computed solution of the linear system $Ax = b$ of Problem 3, associated with the GFEM with vertices $x_i = ih$, $i = 0, 1, \cdots, N$. In (a), we used $N = 50, \cdots$; in (b), every value of $N$ in an interval starting at 100. Both panels show a reference line of slope 4.

Let $\hat{x}$ be the computed solution of the linear system $Ax = b$ of Problem 4. We have shown in this paper that $K(A) = O(h^{-2})$.
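The conditioning contrast underlying Problems 3 and 4 can be illustrated with a small self-contained computation. The sketch below is our own illustration, not the paper's code: it assembles the 1D stiffness matrices of $a(u,v) = \int_0^1 u'v'\,dx$ on a uniform mesh, with hat functions plus one enrichment per vertex, taking the polynomial enrichment $\phi(x) = x^2$ for GFEM and $\phi - I_h\phi$ for SGFEM. The function names, the quadrature rule, and the boundary condition $u(0) = 0$ are our assumptions; the scaled condition number (after symmetric diagonal scaling, in the spirit of Section 3) is then compared for the two methods.

```python
import numpy as np

def stiffness(N, method):
    """Assemble the stiffness matrix of a(u,v) = int_0^1 u'v' dx on the
    uniform mesh x_i = i*h, h = 1/N, for hat functions N_i (i = 1..N,
    imposing u(0) = 0) plus one enrichment per vertex (i = 0..N):
      GFEM : N_i * phi  with  phi(x) = x^2
      SGFEM: N_i * (phi - I_h phi); on [xl, xr] this factor is (x-xl)(x-xr).
    """
    h = 1.0 / N
    dofs = [('hat', i) for i in range(1, N + 1)] + \
           [('enr', i) for i in range(N + 1)]
    idx = {d: k for k, d in enumerate(dofs)}
    A = np.zeros((len(dofs), len(dofs)))
    # 3-point Gauss rule on [0, 1]; exact for the (at most quartic) integrands
    gp = np.array([0.5 - np.sqrt(15) / 10, 0.5, 0.5 + np.sqrt(15) / 10])
    gw = np.array([5.0, 8.0, 5.0]) / 18.0
    for e in range(N):                      # element [xl, xr]
        xl, xr = e * h, (e + 1) * h
        x, w = xl + gp * h, gw * h
        local = []                          # (dof index, derivative values)
        for kind in ('hat', 'enr'):
            for i in (e, e + 1):
                if (kind, i) not in idx:
                    continue                # skips the hat at vertex 0
                if i == e:                  # left hat: (xr - x)/h
                    Nv, dN = (xr - x) / h, np.full_like(x, -1.0 / h)
                else:                       # right hat: (x - xl)/h
                    Nv, dN = (x - xl) / h, np.full_like(x, 1.0 / h)
                if kind == 'hat':
                    d = dN
                elif method == 'GFEM':      # (N * x^2)' = N' x^2 + 2 N x
                    d = dN * x**2 + 2.0 * Nv * x
                else:                       # SGFEM enrichment factor psi
                    psi, dpsi = (x - xl) * (x - xr), 2.0 * x - xl - xr
                    d = dN * psi + Nv * dpsi
                local.append((idx[(kind, i)], d))
        for k1, d1 in local:
            for k2, d2 in local:
                A[k1, k2] += np.dot(w, d1 * d2)
    return A

def scaled_cond(A):
    """Condition number after symmetric diagonal scaling D^{-1/2} A D^{-1/2}."""
    s = 1.0 / np.sqrt(np.diag(A))
    return np.linalg.cond(A * np.outer(s, s))

if __name__ == '__main__':
    for N in (8, 16, 32):
        print(N, scaled_cond(stiffness(N, 'GFEM')),
              scaled_cond(stiffness(N, 'SGFEM')))
```

The growth rates produced by this sketch can be compared against the $O(N^4)$ and $O(h^{-2})$ estimates discussed in the text: as $N$ doubles, the scaled condition number of the GFEM matrix should grow markedly faster than that of the SGFEM matrix.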
We have presented the log-log plot of $\eta = \|x - \hat{x}\| / \|x\|$ with respect to $N = 50, \cdots$ in Figure 4(a), and for every value of $N$ in an interval starting at $N = 100$ in Figure 4(b). Based on these data (and on $N = 1050, \cdots$), we observed that $C_1 N^2 \le \eta \le \bar{C}_2 N^2$ with $\bar{C}_2 / C_1 < \tau_1$ (note $\tau_2 = 0$). Thus we do not reject the Hypothesis H (based on meshes with these values of $N$).

Figure 4: Plots of $\eta = \|x - \hat{x}\| / \|x\|$, where $\hat{x}$ is the computed solution of the linear system $Ax = b$ of Problem 4, associated with the SGFEM with vertices $x_i = ih$, $i = 0, 1, \cdots, N$. In (a), we used $N = 50, \cdots$; in (b), every value of $N$ in an interval starting at 100. Both panels show a reference line of slope 2.

Thus we did not reject the Hypothesis H for any of the validation problems with respect to the tolerance $\tau_1 = 400$, $\tau_2 = 0$. But we would reject the Hypothesis H if we chose $\tau_1 = 300$, since the observed ratio $\bar{C}_2 / C_1$ in Problem 3 exceeded this value. However, if the values of $\eta$ for every value of $N$ in the range starting at 100 were used, then $\bar{C}_2 / C_1 < \tau_1$, and we thus would not reject Hypothesis H. Hence validation depends on the values of $N$, i.e., on the number of validation problems considered, since each value of $N$ (in each of Problems 1, 2, 3, and 4) constitutes a separate validation problem. But as mentioned before, the choice of the tolerance depends on the type of decision related to the goal. For example, in this paper we have to decide whether to accept SGFEM over the standard GFEM. In this case, we may allow $\tau_1$ to be bigger; in fact, if $\tau_1 = 500$, we would still accept SGFEM over GFEM, since the value of $\eta$ for GFEM will be much larger than the $\eta$ of SGFEM for large $N$.

We summarize by stating that

(a) we have confidence in Hypothesis H, based on the chosen validation problems (Problems 1–4). We underline that we have also considered other 2- and 3-dimensional validation problems for the Hypothesis H, which we do not present in this paper. We will present a more substantial validation of Hypothesis H in a future publication.

(b) Because of our confidence in Hypothesis H, we prefer the use of SGFEM over GFEM, since the linear system of SGFEM is less prone to the loss of accuracy than the linear system of the GFEM, when solved using an elimination method.

Remark 7.2
As mentioned before, all the computations presented here were performed in double precision, i.e., with roughly $10^{-16}$ relative accuracy. However, the figures presented above indicate that the apparent accuracy attained is lower. This is likely the effect of various cancellations.

References

[1] American Society of Mechanical Engineers, New York.
ASME Guide for Verification and Validation in Computational Solid Mechanics, 2006. V&V 10.

[2] P. R. Amestoy, I. S. Duff, J. Koster, and J.-Y. L'Excellent. A fully asynchronous multifrontal solver using distributed dynamic scheduling. SIAM J. Matrix Anal. Appl., 23:15–41, 2001.

[3] I. Babuška and R. Lipton. Optimal local approximation spaces for generalized finite element methods with application to multiscale problems. Technical Report 10-12, ICES, University of Texas at Austin, 2010.

[4] I. Babuška, U. Banerjee, and J. Osborn. Generalized finite element methods: Main ideas, results, and perspective. International Journal of Computational Methods, 1(1):1–37, 2004.

[5] I. Babuška, G. Caloz, and J. Osborn. Special finite element methods for a class of second order elliptic problems with rough coefficients. SIAM J. Numer. Anal., 31:945–981, 1994.

[6] I. Babuška and J. M. Melenk. The partition of unity finite element method. Int. J. Numer. Meth. Engng., 40:727–758, 1997.

[7] I. Babuška and J. T. Oden. Verification and Validation in computational engineering and science: basic concepts. Comput. Methods Appl. Mech. Engrg., 193:4057–4066, 2004.

[8] F. L. Bauer. Optimal scaling of matrices and the importance of the minimal condition. In C. M. Popplewell, editor, Information Processing 62, IFIP Congress 1962, pages 198–201, Amsterdam, 1963. North-Holland.

[9] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal remeshing. Int. J. Numer. Meth. Engng., 45:601–620, 1999.

[10] S. E. Benzley. Representation of singularities with isoparametric finite elements. Int. J. Numer. Meth. Engng., 8:537–545, 1974.

[11] H. Blum and M. Dobrowolski. On finite element methods for elliptic equations on domains with corners. Computing, 28:53–63, 1982.

[12] E. Byskov. The calculation of stress intensity factors using the finite element method with cracked elements. Int. J. Fract. Mech., 6:159–167, 1970.

[13] C. Daux, N. Moës, J. Dolbow, N. Sukumar, and T. Belytschko. Arbitrary branched and intersecting cracks with the extended finite element method. Int. J. Numer. Meth. Engng., 48:1741–1760, 2000.

[14] J. Demmel. On floating point error in Cholesky. Technical Report CS-89-87, Dept. of Computer Science, Univ. of Tennessee, 1989.

[15] J. Dolbow, N. Moës, and T. Belytschko. Modeling fracture in Mindlin-Reissner plates with the extended finite element method. Int. J. Solids Struct., 37:7161–7183, 2000.

[16] J. E. Dolbow. An Extended Finite Element Method with Discontinuous Enrichment for Applied Mechanics. PhD thesis, Northwestern University, 1999.

[17] C. A. Duarte and J. T. Oden. An h-p adaptive method using clouds. Comput. Methods Appl. Mech. Engrg., 139:237–262, 1996.

[18] C. A. Duarte and J. T. Oden. H-p Clouds: An h-p Meshless Method. Numer. Methods Partial Differential Equations, 12:673–705, 1996.

[19] P. Esser, J. Grande, and A. Reusken. An extended finite element method applied to levitated droplet problems. Int. J. Numer. Meth. Engng., 84:757–773, 2010.

[20] M. Farsad, F. J. Vernerey, and H. S. Park. An extended finite element/level set method to study surface effects on the mechanical behavior and properties of nanomaterials. Int. J. Numer. Meth. Engng., 84:1466–1489, 2010.

[21] G. Fix, S. Gulati, and G. I. Wakoff. On the use of singular functions with finite element approximations. J. Comp. Phys., 13:209–228, 1973.

[22] T.-P. Fries. A corrected XFEM approximation without problems in blending elements. Int. J. Numer. Meth. Engng., 75:503–532, 2008.

[23] T.-P. Fries and T. Belytschko. The extended/generalized finite element method: An overview of the method and its applications. Int. J. Numer. Meth. Engng., 84:253–304, 2010.

[24] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, USA, 1996.

[25] M. Griebel and M. A. Schweitzer. A Particle-Partition of Unity method, Part VI: Adaptivity. In M. Griebel and M. A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations III, Lecture Notes in Computational Science and Engineering, Vol. 26, pages 121–148. Springer, 2006.

[26] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, 2002.

[27] D. Kincaid and W. Cheney. Numerical Analysis: Mathematics of Scientific Computing. American Mathematical Society, 2002.

[28] G. B. Kleindorfer, L. O'Neill, and R. Ganeshan. Validation in simulation: various positions in the philosophy of science. Management Science, 44:1087–1099, 1998.

[29] X. S. Li and J. W. Demmel. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems. ACM Trans. Mathematical Software, 29:110–140, 2003.

[30] C. Lu and B. Shanker. Generalized finite element method for vector electromagnetic problems. IEEE Transactions on Antennas and Propagation, 55:1369–1381, 2007.

[31] A. M. Matache, I. Babuška, and C. Schwab. Generalized p-FEM in homogenization. Numer. Math., 86, 2000.

[32] J. M. Melenk. On Generalized Finite Element Methods. PhD thesis, University of Maryland, 1995.

[33] J. M. Melenk and I. Babuška. The partition of unity finite element method: Basic theory and application. Comput. Methods Appl. Mech. Engrg., 139:289–314, 1996.

[34] A. Menk and S. P. A. Bordas. A robust preconditioning technique for the extended finite element method. Int. J. Numer. Meth. Engng., 85:1609–1632, 2011.

[35] N. Moës, J. Dolbow, and T. Belytschko. A finite element method for crack growth without remeshing. Int. J. Numer. Meth. Engng., 46:131–150, 1999.

[36] A. Nouy and A. Clément. eXtended stochastic finite element method for the numerical simulation of heterogeneous materials with random material interfaces. Int. J. Numer. Meth. Engng., 83:1312–1344, 2010.

[37] W. L. Oberkampf and Ch. J. Roy. Verification and Validation in Scientific Computing. Cambridge University Press, New York, 2010.

[38] J. T. Oden and C. A. M. Duarte. Clouds, Cracks and FEMs. In B. Daya Reddy, editor, Recent Developments in Computational and Applied Mechanics, 1997.

[39] J. T. Oden, C. A. M. Duarte, and O. C. Zienkiewicz. A new cloud-based hp finite element method. Comput. Methods Appl. Mech. Engrg., 153:117–126, 1998.

[40] P. O'Hara, C. A. Duarte, and T. Eason. Generalized finite element analysis for three-dimensional problems exhibiting sharp thermal gradients. Comput. Methods Appl. Mech. Engrg., 198:1857–1871, 2009.

[41] A. K. Rao, I. S. Raju, and A. V. K. Murthy. A powerful hybrid method in finite element analysis. Int. J. Numer. Meth. Engng., 3:389–403, 1971.

[42] P. J. Roache. Fundamentals of Verification and Validation. Hermosa Publishers, Albuquerque, NM, 2009.

[43] M. A. Schweitzer. A Parallel Multilevel Partition of Unity Method for Elliptic Partial Differential Equations. Lecture Notes in Computational Science and Engineering, Vol. 29. Springer, 2003.

[44] A. Simone, C. A. Duarte, and E. Van der Giessen. A generalized finite element method for polycrystals with discontinuous grain boundaries. Int. J. Numer. Meth. Engng., 67:1122–1145, 2006.

[45] Simulation Interoperability Standards Organization, Orlando, FL. Guide for generic methodology for Verification and Validation (V&V) and acceptance of models, simulations, and data, 2007.

[46] G. Strang and G. Fix. An Analysis of the Finite Element Method. Wellesley-Cambridge Press, 2nd edition, 2008.

[47] T. Strouboulis, I. Babuška, and K. Copps. The design and analysis of the generalized finite element method. Comput. Methods Appl. Mech. Engrg., 181:43–69, 2000.

[48] T. Strouboulis, K. Copps, and I. Babuška. The generalized finite element method: an example of its implementation and illustration of its performance. Int. J. Numer. Meth. Engng., 47:1401–1417, 2000.

[49] T. Strouboulis, K. Copps, and I. Babuška. The generalized finite element method. Comput. Methods Appl. Mech. Engrg., 190:4081–4193, 2001.

[50] N. Sukumar, N. Moës, B. Moran, and T. Belytschko. Extended finite element method for three-dimensional crack modelling. Int. J. Numer. Meth. Engng., 48(11):1549–1570, 2000.

[51] J. H. Wilkinson.